Anthropic went from $1B to $30B ARR in 16 months, then locked in 3.5 gigawatts of compute through 2031. This is what becoming the Token Daddy looks like in real time.
$1B to $30B in 16 months. This growth rate shouldn't exist in enterprise software.
Anthropic isn't growing fast. It's growing at a speed that breaks the mental model for what enterprise revenue can do. $1B annualized in December 2024. $9B by end of 2025. $30B by April 2026. Enterprise customers spending $1M+ per year doubled from 500 to 1,000 in under two months.
Companies are writing seven-figure checks because Claude is doing work they can't replace. That's a different kind of traction than app downloads.
Anthropic just reserved 3.5 gigawatts of Google/Broadcom TPU capacity through 2031. On top of the 1 GW they already use. Total: 4.5 GW, 4.5x their current footprint.
AI models run on specialized chips doing math billions of times per second. You can't just buy these chips off the shelf. You reserve the right to use them, years in advance. Anthropic locked in 3.5 GW of new capacity (2027-2031), giving them a total of 4.5 GW. If you don't lock it in now, someone else takes it and you're stuck waiting.
Total US data center power draw in 2024 was about 20 GW. Anthropic's 4.5 GW, once fully online, would equal roughly 22% of that entire 2024 industry draw. One company. One model family.
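The arithmetic behind those claims is simple enough to check directly. A minimal sketch, using only the figures cited above (the 1 GW footprint, the 3.5 GW reservation, and the ~20 GW 2024 baseline are the article's own numbers, not independently verified here):

```python
# Back-of-envelope check of the compute figures cited above.
# All inputs are the article's claims, not verified data.
current_gw = 1.0      # Anthropic's existing footprint
new_tpu_gw = 3.5      # Google/Broadcom TPU reservation, 2027-2031
us_dc_2024_gw = 20.0  # approximate total US data center draw, 2024

total_gw = current_gw + new_tpu_gw        # 4.5 GW
growth_multiple = total_gw / current_gw   # 4.5x the current footprint
share_of_2024 = total_gw / us_dc_2024_gw  # ~22% of the 2024 baseline

print(f"{total_gw} GW total, {growth_multiple:.1f}x footprint, "
      f"{share_of_2024:.1%} of 2024 US data center power")
```

Note the comparison mixes time frames: the 4.5 GW lands by 2031, while the 20 GW denominator is a 2024 snapshot, so the eventual percentage of the then-current industry will differ.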
Most AI companies are tied to one cloud, one chip. Anthropic is on all three major clouds and trains on three chip architectures. They're not switching partners. They're adding them.
Claude is available on AWS Bedrock, Google Vertex AI, and Microsoft Azure AI Foundry. It's the only frontier AI model on all three. They train on AWS Trainium (Project Rainier), Google TPUs, and NVIDIA GPUs. Amazon remains primary. The Google deal adds on top. No one was dropped.
Each advantage reinforces the others. It's not one wall. It's a self-reinforcing loop that gets stronger the faster it spins.
Multi-cloud distribution makes multi-hardware training possible. Multi-hardware prevents cloud lock-in. Enterprise revenue funds more compute. More compute trains better models. Better models attract more enterprise customers. Attack any one side and the other three cover it.
Everyone in this chain is betting that demand for Claude keeps climbing. At $30B ARR, that's less of a caveat and more of a flex. But it's still a bet.
Broadcom's SEC filing literally says this deal depends on Anthropic's "continued commercial success." The $30B raise at $380B valuation assumes it. The potential October 2026 IPO at $400-500B assumes it. Every partner, investor, and chip supplier is making the same wager: that enterprises keep writing bigger checks for Claude.
So what could break the bet? Four scenarios.

AI hits a usefulness ceiling for enterprises. The $1M checks stop growing. Growth decelerates from 30x to 2x.

Open-source models become "good enough" for free. Llama 5 or Mistral Large closes the capability gap. Pricing power evaporates.

Anthropic gets locked into TPU hardware that becomes suboptimal. A new chip architecture emerges that makes TPUs look slow.

A base of 1,000 big customers is powerful but fragile. Lose 50 of them and the growth story cracks. Enterprise churn at this scale is existential.
The Token Daddy thesis, playing out in front of us.
The part most people miss: Anthropic being on all three clouds isn't just a distribution advantage. It's a power dynamic inversion. AWS can't drop Anthropic without losing customers to Google Cloud and Azure, who still have Claude. Google can't either. Each cloud competes to give Anthropic better deals because the alternative is enterprises leaving for a cloud that HAS Claude. Anthropic turned a vendor-platform relationship into mutual dependency. They're the only AI company that can survive losing any single cloud partner.