- OpenAI adds Google TPUs to reduce dependence on Nvidia GPUs
- TPU adoption highlights OpenAI's push to diversify its compute options
- Google Cloud wins OpenAI as a customer despite the competitive dynamic
OpenAI has reportedly started using Google's tensor processing units (TPUs) to power ChatGPT and other products.
A report from PK Press Club, citing a source familiar with the move, notes that this is OpenAI's first major shift away from Nvidia hardware, which has so far formed the backbone of the company's compute stack.
Google rents out TPUs via its cloud platform, adding OpenAI to a growing list of external customers that includes Apple, Anthropic, and Safe Superintelligence.
Not giving up on Nvidia
Although the rented chips are not Google's most advanced TPU models, the deal reflects OpenAI's efforts to reduce inference costs and diversify beyond Nvidia and Microsoft Azure.
The decision comes as inference workloads grow in step with ChatGPT usage, which now serves more than 100 million active users.
That demand accounts for a substantial share of OpenAI's estimated $40 billion annual compute budget.
Google's TPU v6e "Trillium" chips are designed for sustained inference and offer high throughput at lower operational cost than high-end GPUs.
Although Google declined to comment and OpenAI did not immediately respond to PK Press Club, the arrangement suggests a broadening of OpenAI's infrastructure options.
OpenAI continues to rely on Microsoft-backed Azure for most of its deployments (Microsoft is the company's largest investor), but GPU supply constraints and pricing pressure have exposed the risks of depending on a single supplier.
Bringing Google into the mix not only improves OpenAI's ability to scale compute but also aligns with a broader industry trend of mixing hardware sources for flexibility and pricing leverage.
There is no suggestion that OpenAI plans to abandon Nvidia entirely, but incorporating Google's TPUs gives it more control over cost and availability.
How deeply OpenAI can integrate this hardware into its stack remains to be seen, particularly given the software ecosystem's long-standing dependence on CUDA and Nvidia tooling.