Anthropic is expanding its deal with Google to use up to a million of the tech giant’s artificial intelligence chips, worth tens of billions of dollars, as the startup strives to advance its AI systems in a competitive market.
Under the deal announced Thursday, Anthropic will have access to more than a gigawatt of computing capacity, coming online in 2026, to train the next generations of its Claude AI model on Google’s tensor processing units, or TPUs, chips traditionally reserved for the company’s own use.
Anthropic said it chose the TPUs for their price-performance and efficiency, as well as its existing experience training and serving its Claude models on the processors.
The deal is the latest sign of insatiable demand for chips in the AI sector, where companies are racing to develop technology that can match or surpass human intelligence.
Alphabet-owned Google, whose TPUs are available for rent on Google Cloud as an alternative to Nvidia chips, which are in limited supply, will also provide additional cloud computing services to Anthropic.
Rival OpenAI recently signed several deals that could cost more than $1 trillion to secure about 26 gigawatts of computing capacity, enough to power about 20 million U.S. homes. A gigawatt of computing can cost about $50 billion, industry executives have said.
OpenAI, the creator of ChatGPT, is actively using Nvidia’s graphics processing units and AMD’s AI chips to meet its growing demand.
Reuters exclusively reported earlier in October that Anthropic expects to more than double, and potentially nearly triple, its annualized revenue rate next year, fueled by rapid adoption of its enterprise products.
The startup focuses on AI safety and on building models for enterprise use cases. Its models helped fuel the rise of vibe coding startups such as Cursor.