
Anthropic has announced a major expansion of its partnership with Google Cloud, planning to deploy up to one million Tensor Processing Units (TPUs) to scale its AI research and product development. The deal, valued in the tens of billions of dollars, is expected to bring over a gigawatt of new compute capacity online by 2026.
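For readers who want a sense of scale, the two headline figures imply a rough power envelope per accelerator. The sketch below is a back-of-envelope estimate based only on the announced numbers (up to one million TPUs, taken against one gigawatt of capacity); the resulting figure is total facility power per chip, including host systems, networking and cooling overhead, and is an illustrative assumption rather than any published TPU specification.

```python
# Back-of-envelope estimate: implied facility power per deployed TPU,
# using only the figures quoted in the announcement. Assumptions:
#   - the full one-million-TPU deployment
#   - exactly 1 GW of capacity ("over a gigawatt" is treated as 1 GW)
# The result is average facility power per accelerator, not chip TDP.

announced_tpus = 1_000_000              # "up to one million" TPUs
announced_capacity_w = 1_000_000_000    # "over a gigawatt", taken as 1 GW

watts_per_tpu = announced_capacity_w / announced_tpus
print(f"Implied facility power budget: ~{watts_per_tpu:.0f} W per TPU")
# -> roughly 1 kW per chip of total data-centre power at the announced scale.
```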
For eeNews Europe readers, the move signals accelerating demand for advanced AI compute infrastructure — and showcases the growing influence of custom silicon like Google’s TPUs in powering next-generation large language models and AI services.
Scaling AI infrastructure with TPUs
The expansion reflects Anthropic’s growing compute requirements as it continues to advance its Claude family of AI models. With more than 300,000 business customers and a sevenfold increase in large enterprise accounts over the past year, the company is rapidly scaling its infrastructure to meet demand.
“Anthropic’s choice to significantly expand its usage of TPUs reflects the strong price-performance and efficiency its teams have seen with TPUs for several years,” said Thomas Kurian, CEO of Google Cloud. “We are continuing to innovate and drive further efficiencies and increased capacity of our TPUs, building on our already mature AI accelerator portfolio, including our seventh generation TPU, Ironwood.”
The new TPU capacity will power Anthropic’s ongoing alignment research, model training, and large-scale deployment. The company says the investment is key to maintaining responsible and efficient growth as model complexity and customer use cases continue to expand.
Multi-cloud and multi-chip strategy
Despite the scale of this new agreement, Anthropic emphasized that its compute strategy remains diversified. Alongside Google’s TPUs, the company also uses Amazon’s Trainium and NVIDIA GPUs to train and run its AI models.
“Anthropic and Google have a longstanding partnership and this latest expansion will help us continue to grow the compute we need to define the frontier of AI,” said Krishna Rao, CFO of Anthropic. “Our customers — from Fortune 500 companies to AI-native startups — depend on Claude for their most important work, and this expanded capacity ensures we can meet our exponentially growing demand while keeping our models at the cutting edge of the industry.”
Anthropic continues to collaborate with Amazon, which remains its primary training partner and cloud provider. The two companies are working together on Project Rainier, a large-scale compute cluster spanning multiple U.S. data centers with hundreds of thousands of AI chips.

