OpenAI at $25B, Anthropic at $19B: What the Revenue Gap Means


OpenAI just crossed $25 billion in annualized revenue. Anthropic is approaching $19 billion. A year ago, these numbers would have seemed like aggressive projections from a pitch deck. Today they’re quarterly operating reality — and the gap between them tells you something important about how this market is actually shaking out, who’s winning which customers, and where the next 18 months get interesting.

This isn’t a story about which model scores higher on benchmarks nobody uses. It’s about two companies with fundamentally different product philosophies racing toward the same enterprise dollar — and both, apparently, finding it.

The Revenue Numbers, in Context

$25 billion annualized for OpenAI. $19 billion for Anthropic. Let’s be honest about what those figures mean and don’t mean.

Annualized revenue is a projection based on current run rate, not a trailing twelve-month audit. But even with that caveat, both numbers represent something real: sustained, recurring demand from developers and enterprises paying for API access, ChatGPT subscriptions, and, increasingly, deeply integrated enterprise contracts. OpenAI’s 800 million weekly users give you a sense of the top-of-funnel scale. Anthropic doesn’t publish user counts with the same frequency, but its $100 million Partner Network — announced March 12, 2026 — signals that enterprise channel strategy is becoming a serious revenue lever.
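For readers unfamiliar with the metric, annualizing is a straight extrapolation of the latest period. A minimal sketch; the monthly figure below is an illustrative placeholder, not a disclosed number:

```python
# Annualized run rate: extrapolate the most recent month's revenue
# across twelve months. The input value is hypothetical.

def annualized_run_rate(latest_month_revenue: float) -> float:
    """Project a single month's revenue to a full year."""
    return latest_month_revenue * 12

# A company booking roughly $2.083B in its latest month annualizes
# to roughly $25B.
print(annualized_run_rate(2.083e9))  # ~2.5e10
```

The obvious limitation, and the reason for the caveat above, is that a single strong month inflates the figure; it says nothing about churn or seasonality.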

The gap between $25B and $19B sounds significant until you remember that Anthropic didn’t exist four years ago and is now the second-largest AI platform company in the world by revenue. That is genuinely unusual. Most enterprise software categories don’t spawn a credible $19B revenue competitor that fast. It tells you the market is expanding fast enough for two strong players to win simultaneously — and it raises the obvious question of whether that dynamic holds as the market matures.

OpenAI, meanwhile, is reportedly taking early steps toward a public listing, potentially late 2026. If that materializes, those revenue numbers become the basis for a very consequential valuation conversation.

What GPT-5.4 Actually Represents

GPT-5.4 launched March 5, 2026, and the most important thing about it isn’t the capability jump — it’s the efficiency story. The model handles the same tasks with significantly fewer tokens than GPT-5.2, which means faster responses and lower API costs for developers. That matters enormously at scale. When you’re processing millions of API calls, token efficiency isn’t a footnote — it’s a margin line.
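To make the margin point concrete, here is a back-of-the-envelope sketch. Every number in it is invented for illustration — the per-token price and token counts are not OpenAI’s published rates; substitute real pricing and measured token counts for your own workload:

```python
# Back-of-the-envelope cost impact of a token-efficiency gain.
# All figures are hypothetical placeholders.

PRICE_PER_1K_OUTPUT_TOKENS = 0.01  # hypothetical $/1K tokens

def monthly_cost(calls: int, avg_tokens_per_call: int) -> float:
    """Total monthly spend for a given call volume and token usage."""
    return calls * (avg_tokens_per_call / 1000) * PRICE_PER_1K_OUTPUT_TOKENS

# A workload of 10M calls/month, where a newer model completes the
# same tasks with 30% fewer output tokens:
old = monthly_cost(10_000_000, 800)
new = monthly_cost(10_000_000, 560)
print(f"old=${old:,.0f}  new=${new:,.0f}  saved=${old - new:,.0f}")
```

At this (made-up) volume the 30% token reduction drops straight to the cost line, which is why efficiency claims matter more to high-volume API customers than benchmark deltas do.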

The model brings a 1 million token context window, better reasoning, and improved agentic workflow performance. It’s available in ChatGPT as “GPT-5.4 Thinking/Pro” and via the API, and OpenAI bundled a ChatGPT-for-Excel add-in alongside the launch — a clear signal that the consumer-to-enterprise bridge is a priority. The interactive math and science modules (70+ topics with adjustable variables) are an interesting side bet on educational use cases that doesn’t get talked about enough given the potential market size.

OpenAI Codex Security is worth paying attention to. It uses GPT-5.4’s native computer use capabilities to run autonomous code security reviews — essentially an agent that can navigate a codebase, identify vulnerabilities, and surface them without a human holding its hand at every step. Computer use inside Codex is early-stage and the reliability profile in production environments is still being established, but the direction is clear. Peter Steinberger, who built OpenClaw and joined OpenAI on February 14, 2026, is likely somewhere in this stack — his background in developer tooling fits squarely with where Codex is heading.

The retirement of GPT-5.1 models (Instant, Thinking, Pro) on March 11 is the less glamorous but equally important part of the story. OpenAI is actively compressing its model lineup, which reduces the cognitive overhead for developers choosing a model and signals confidence in the newer generation. Fewer options, cleaner story.

Anthropic’s Enterprise Push Is More Aggressive Than It Looks

The Anthropic news cycle in early 2026 has been dominated by model releases, but the more interesting story is the enterprise infrastructure being built around those models.

Claude Opus 4.6 and Claude Sonnet 4.6 (launched February 17, 2026) both carry 1 million token context windows. Opus 4.6 is the heavy lifter — Anthropic’s own Frontier Red Team used it to find over 500 vulnerabilities in production open-source code, which is a compelling internal validation of its capabilities on real-world codebases. Sonnet 4.6 came in at the same price as 4.5 with better performance and fewer tokens for equivalent tasks, which is a similar efficiency story to what OpenAI is telling with GPT-5.4. The older Opus 4 and 4.1 models have been removed from the model selector, consolidating the lineup.

But the product move that deserves more attention is Claude Cowork, launched in research preview at the end of January 2026. It’s a desktop app — macOS first — that runs in an isolated VM on your local machine, with full access to local files and MCP integrations. Scott White, Anthropic’s Head of Product for Enterprise, described it as “transitioning into vibe working” — a framing where knowledge workers direct AI through intent rather than writing code themselves. The whole app was built using Claude Code in ten days, which is either a remarkable demonstration of their own tooling or a very good marketing story, probably both.

The domain-specific plugins for legal, financial analysis, HR, engineering, and operations are where Claude Cowork becomes a serious enterprise proposition. This isn’t a general-purpose chat interface dropped into a corporate environment — it’s a structured agent environment designed around the actual workflows that enterprise buyers care about. That’s a harder product to build but a much easier product to sell to a CFO or General Counsel.

Anthropic engineers reportedly use Claude for about 60% of their own work and ship 60 to 100 internal releases per day. Take that figure with appropriate skepticism about what “use Claude for” means precisely, but the directional signal is real: the company is eating its own cooking at a scale that’s operationally meaningful.

The self-serve Enterprise plans — no sales call required — and the $100 million Partner Network suggest Anthropic is trying to solve two different go-to-market problems at once: reduce friction for mid-market buyers who don’t want to talk to a salesperson, while building a partner ecosystem that can reach enterprise accounts at scale.

Claude Code vs. Codex: The Developer Battleground

If there’s one segment where the competitive intensity between OpenAI and Anthropic is most visible right now, it’s developer tooling. Both companies are treating the developer who writes code — and increasingly the developer who directs AI to write code — as the highest-value customer to win.

Claude Code is shipping daily releases, which is an unusual cadence for a product at this stage. Recent additions include the Skills API (organized folders with SKILL.md files that let agents carry structured knowledge), pre-built skills for PPTX, XLSX, DOCX, and PDF formats, a --bare flag for scripted automation, and voice mode. The --channels permission relay is in research preview and enables more sophisticated multi-agent coordination. Claude Code is now included in every Team plan standard seat, which removes a purchasing decision and accelerates adoption.
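The text describes skills as organized folders carrying a SKILL.md file. A minimal sketch of scaffolding such a folder — the frontmatter fields and layout here are assumptions for illustration, not Anthropic’s documented schema:

```python
# Sketch: scaffold a skill folder containing a SKILL.md file, per the
# "organized folders with SKILL.md files" description in the text.
# The frontmatter fields (name, description) are assumed, not a
# documented format.
from pathlib import Path

def scaffold_skill(root: Path, name: str, description: str) -> Path:
    """Create <root>/<name>/SKILL.md and return its path."""
    skill_dir = root / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    skill_md = skill_dir / "SKILL.md"
    skill_md.write_text(
        f"---\nname: {name}\ndescription: {description}\n---\n\n"
        f"# {name}\n\nInstructions the agent loads when this skill applies.\n"
    )
    return skill_md

path = scaffold_skill(Path("skills"), "quarterly-report",
                      "Build quarterly revenue summaries")
print(path)
```

The appeal of the pattern, whatever the exact schema, is that structured knowledge lives in version-controlled files rather than in prompts pasted per session.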

OpenAI Codex, with GPT-5.4’s

Ty Sutherland

Ty Sutherland is the Chief Editor of AI Rising Trends. Living in what he believes to be the most transformative era in history, Ty is deeply captivated by the boundless potential of emerging technologies like the metaverse and artificial intelligence. He envisions a future where these innovations seamlessly enhance every facet of human existence. With a fervent desire to champion the adoption of AI for humanity's collective betterment, Ty emphasizes the urgency of integrating AI into our professional and personal spheres, cautioning against the risk of obsolescence for those who lag behind. AI Rising Trends stands as a testament to his mission, dedicated to spotlighting the latest in AI advancements and offering guidance on harnessing these tools to elevate one's life.