- Sam Altman says OpenAI will soon bring more than 1 million GPUs online and is setting its sights on 100 million
- Running 100 million GPUs could cost around $3 trillion and push past the limits of global energy infrastructure
- OpenAI's expansion into Oracle and TPUs shows growing impatience with current cloud limits
OpenAI says it is on track to bring more than a million GPUs online by the end of 2025, a figure that already puts it well ahead of competitors in compute resources.
For CEO Sam Altman, however, this milestone is only the beginning: "We will cross well over a million GPUs brought online by the end of this year," he said.
The comment, made in an offhand tone, has nevertheless sparked serious discussion about the feasibility of deploying 100 million GPUs in the foreseeable future.
A vision far beyond the current scale
To put this figure in perspective, Elon Musk's xAI runs Grok 4 on approximately 200,000 GPUs, which means OpenAI's planned scale is already five times that number.
Scaling to 100 million GPUs, however, would involve astronomical costs, estimated at around $3 trillion, and would pose major challenges in manufacturing, energy consumption and physical deployment.
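A quick back-of-envelope check shows where the roughly $3 trillion figure comes from. The ~$30,000 per-GPU price used here is an assumption (a commonly cited ballpark for high-end data-center accelerators, not a number from the article):

```python
# Back-of-envelope cost estimate for 100 million GPUs.
# Assumption: ~$30,000 average per data-center GPU (ballpark, not from the article).
gpu_count = 100_000_000
price_per_gpu_usd = 30_000  # assumed average unit price

total_usd = gpu_count * price_per_gpu_usd
print(f"${total_usd / 1e12:.1f} trillion")  # → $3.0 trillion
```

Even large swings in the assumed unit price leave the total in the trillions, which is why the figure is described as astronomical.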
"Very proud of the team, but now they better get to work figuring out how to 100x that lol," Altman wrote.
Although Microsoft's Azure remains OpenAI's main cloud platform, the company has also partnered with Oracle and explored Google's TPU accelerators.
This diversification reflects an industry-wide trend, with Meta, Amazon and Google also moving toward in-house chips and greater reliance on high-bandwidth memory (HBM).
SK Hynix is one of the companies likely to benefit from this expansion: as GPU demand rises, so does demand for HBM, a key component in AI training.
According to an insider in the data center industry, "in some cases, the specifications of GPUs and HBMs … are determined by customers (such as OpenAI) … configured according to customer requests."
SK Hynix's performance has already seen strong growth, with forecasts suggesting a record operating profit in Q2 2025.
OpenAI's collaboration with the SK Group also appears to be deepening. Chairman Chey Tae-won and CEO Kwak No-jung recently met with Altman, reportedly strengthening the group's position in the AI infrastructure supply chain.
The relationship builds on earlier ties, such as SK Telecom's AI collaboration with ChatGPT and its participation in the MIT GenAI Impact Consortium.
That said, OpenAI's rapid expansion has raised concerns about financial sustainability, with reports that SoftBank could reconsider its investment.
If OpenAI's 100-million-GPU goal materializes, it will require not only capital but also major breakthroughs in compute efficiency, manufacturing capacity and global energy infrastructure.
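The energy-infrastructure concern can be sketched with similarly rough arithmetic. The ~1 kW per GPU figure below is an assumption (an order-of-magnitude estimate including cooling and server overhead, not a number from the article):

```python
# Rough power-draw estimate for a 100-million-GPU fleet.
# Assumption: ~1 kW per GPU including cooling and server overhead (ballpark, not from the article).
gpu_count = 100_000_000
watts_per_gpu = 1_000  # assumed total draw per GPU

total_gw = gpu_count * watts_per_gpu / 1e9
print(f"{total_gw:.0f} GW")  # → 100 GW
```

On the order of 100 GW of continuous draw, comparable to the output of dozens of large power plants, which is why grid capacity is cited as a hard limit alongside capital and manufacturing.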
For now, the goal looks less like a practical roadmap than a bold statement of intent.
Via Tom's Hardware