Six weeks ago, Anthropic's run-rate revenue was $14 billion.

Yesterday, the company disclosed it has crossed $30 billion, more than tripling from $9 billion at the end of 2025. At the same time, it announced a 3.5 gigawatt compute deal with Google and Broadcom, the largest infrastructure commitment in its history.

That growth trajectory doesn't have much precedent in enterprise technology. And it's worth understanding what's actually driving it.

This edition covers the Anthropic revenue story, what the compute deal signals, and what it means for anyone building or buying AI right now.

The Number That Reframes Everything

Anthropic's revenue disclosure yesterday came bundled with a specific detail that tells you more than the headline number.

When the company raised $30 billion in its Series G round in February, it noted that over 500 business customers were each spending more than $1 million annually. As of yesterday, that number exceeds 1,000 — doubling in less than two months.

That's enterprises committing serious, recurring budgets to Claude as operational infrastructure. The growth is coming from companies integrating Claude into their workflows, not from individuals casually opening the app.

The revenue trajectory reinforces this. Anthropic went from $1 billion at the start of 2025 to $9 billion by year-end, $14 billion by February, and $30 billion now. The acceleration is steepening, not flattening.

To support that demand, the company signed a new agreement with Google and Broadcom for approximately 3.5 gigawatts of next-generation TPU capacity, coming online starting in 2027. Anthropic's CFO described it as "our most significant compute commitment to date." The company already runs Claude across AWS Trainium, Google TPUs, and Nvidia GPUs, and Claude is the only frontier AI model available across all three major cloud platforms: AWS Bedrock, Google Cloud Vertex AI, and Microsoft Azure Foundry.

  • The Pentagon dispute didn't slow Anthropic down. It accelerated growth. When the Trump administration banned federal agencies from using Claude in February, downloads surged, paid subscribers doubled, and Claude hit number one on the App Store. A federal judge blocked the supply-chain risk designation on March 26, ruling it "classic illegal First Amendment retaliation." The legal battle isn't over, but Anthropic's commercial trajectory was unaffected — and arguably strengthened by the public stance it took.

  • The enterprise split between Anthropic and OpenAI is now measurable. Anthropic's run-rate revenue has crossed $30 billion. OpenAI is at approximately $25 billion. Anthropic captures 73% of spending among companies buying AI tools for the first time. The divergence reflects a structural difference: Anthropic built its business around enterprise API adoption and developer tooling from the start. OpenAI's growth has been heavily driven by consumer ChatGPT subscriptions, where margins are thinner, and switching costs are lower.

  • Claude Code is doing more work than most people realize. The coding agent accounts for roughly 10% of public GitHub commits, generates over $2.5 billion in annualized revenue on its own, and has doubled its weekly active users since January. It's the product that has most directly taken share from OpenAI among software engineers and enterprises, which is a significant reason OpenAI called a "code red" and moved to consolidate its products into a superapp.

  • The compute deal is a bet on continued exponential demand. 3.5 gigawatts of TPU capacity is not provisioned for a company expecting growth to moderate. Broadcom's own analysts estimated the deal could generate $42 billion in revenue for Broadcom in 2027 alone. The infrastructure commitment says more about Anthropic's forward expectations than any revenue disclosure could.

What This Means For You:

The Anthropic numbers are significant as a business story. They're more significant as a signal about where enterprise AI adoption actually is.

When over 1,000 companies are each spending more than $1 million annually on a single AI provider — and that number doubled in two months — the "AI is still experimental" framing is no longer accurate. These are real budgets, committed to real workflows, at companies making long-term infrastructure decisions.

The practical question for any organization watching this is: what side of that divide are you on? The companies in that $1M+ customer base have already mapped their workflows, defined their governance, and committed to a deployment model. The ones still running pilots are operating in a different environment than they were six months ago.

The window to get ahead of this hasn't closed. But it has narrowed considerably.

Clutch. Just launched.

OpenClaw made it easy to get an agent running. Clutch makes it safe to run that agent at work.

Secure multi-agent deployment, built for teams that need more than a single-machine setup.

The $30 billion number landed in my inbox yesterday alongside a note from a client who said, simply: "Does this change anything for us?"

My answer: It confirms something that was already true.

The organizations I work with that are furthest ahead didn't wait for a number like this to start building. They started mapping workflows 12 months ago, ran real pilots with real accountability, and now have deployment infrastructure they can scale. The revenue disclosure validates the direction they chose.

For everyone else, the useful question isn't "how did Anthropic get here?" It's "what does it mean that 1,000 companies are each spending $1M+ a year on this, and we haven't committed to a deployment model yet?"

That gap closes faster than most organizations expect once a competitor starts moving.

Haroon

P.S. If the governance question is one you're actively working through, that's the core of what Clutch is built for.
