Anthropic has announced a significant expansion of its Google Cloud infrastructure, planning to deploy up to one million TPUs in a deal worth tens of billions of dollars. The expansion is expected to bring well over a gigawatt of capacity online in 2026.
The AI company currently serves more than 300,000 business customers, with the number of large accounts, each representing over $100,000 in run-rate revenue, growing nearly 7x in the past year. The expanded computational resources will support testing, alignment research, and deployment at scale.
"Anthropic's choice to significantly expand its usage of TPUs reflects the strong price-performance and efficiency its teams have seen with TPUs for several years," said Thomas Kurian, CEO at Google Cloud. "We are continuing to innovate and drive further efficiencies and increased capacity of our TPUs, building on our already mature AI accelerator portfolio, including our seventh generation TPU, Ironwood."
Anthropic maintains a multi-platform compute strategy using three chip platforms: Google's TPUs, Amazon's Trainium, and NVIDIA's GPUs. The company emphasised its continued partnership with Amazon as its primary training partner and cloud provider, including work on Project Rainier, a massive compute cluster with hundreds of thousands of AI chips across multiple U.S. data centres.
"Our customers—from Fortune 500 companies to AI-native startups—depend on Claude for their most important work, and this expanded capacity ensures we can meet our exponentially growing demand while keeping our models at the cutting edge of the industry," said Krishna Rao, CFO of Anthropic.
The investment, worth tens of billions of dollars, signals major infrastructure scaling for enterprise AI services, and the diversified approach across Google TPUs, Amazon Trainium, and NVIDIA GPUs provides infrastructure resilience for enterprise customers. The addition of well over a gigawatt of capacity in 2026 positions the company to meet growing computational demands from Fortune 500 companies and AI-native startups.