- New partnership gives OpenAI access to hundreds of thousands of Nvidia GPUs on AWS
- AWS will cluster Nvidia GB200 and GB300 GPUs for low-latency AI performance
- OpenAI can expand its compute usage through 2027 under the agreement
The AI industry is advancing faster than any other technology in history, and its demand for computing power is immense.
To meet this demand, OpenAI and Amazon Web Services (AWS) have entered into a multi-year partnership that could reshape the way AI tools are built and deployed.
The collaboration, valued at $38 billion, gives OpenAI access to AWS’s vast infrastructure to run and scale its most advanced artificial intelligence workloads.
Building the Foundation for Massive Computing Power
The agreement grants OpenAI immediate access to AWS compute systems powered by Nvidia GPUs and Amazon EC2 UltraServers.
These systems are designed to deliver high performance and low latency for demanding AI operations, including ChatGPT model training and inference.
“Scaling frontier AI requires massive, reliable compute,” said Sam Altman, co-founder and CEO of OpenAI. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”
AWS says the new architecture will group GPUs such as the GB200 and GB300 into interconnected systems to ensure seamless processing efficiency across all workloads.
The infrastructure is expected to be fully deployed before the end of 2026, with room for further expansion until 2027.
“As OpenAI continues to push the boundaries of what is possible, AWS’s best-in-class infrastructure will serve as the backbone of its AI ambitions,” said Matt Garman, CEO of AWS. “The scale and immediate availability of optimized compute demonstrate why AWS is uniquely positioned to support OpenAI’s vast AI workloads.”
AWS’s infrastructure, already known for its scalability in cloud hosting and web hosting, is expected to play a central role in the success of the partnership.
Data centers managing OpenAI workloads will use tightly connected clusters, capable of handling hundreds of thousands of processing units.
Everyday users may soon notice faster, more responsive AI tools, powered by stronger infrastructure behind ChatGPT and similar services.
Developers and businesses could benefit from simpler and more direct access to OpenAI models through AWS, making it easier to integrate AI into applications and data systems.
However, the prospect of scaling this system to tens of millions of processors raises both technical opportunities and logistical questions about cost, sustainability and long-term efficiency.
Such rapid growth in computing resources could also drive up energy consumption and the maintenance costs of running systems at this scale.
Furthermore, concentrating AI development in the hands of a few major cloud providers could heighten concerns about dependency, control and reduced competition.
OpenAI and AWS have been working together for some time. Earlier this year, OpenAI made its core models available through Amazon Bedrock, allowing AWS users to integrate them into their existing systems.
The availability of these models on a large cloud hosting platform has enabled more developers to experiment with generative AI tools for data analysis, coding, and automation.
Companies such as Peloton, Thomson Reuters and Verana Health are already using OpenAI models in the AWS environment to improve their business workflows.