- Microsoft’s Nebius deal gives it 100,000 Nvidia chips without building more infrastructure
- Neocloud providers like CoreWeave and Lambda now power Microsoft’s expanding AI infrastructure
- Hundreds of thousands of Nvidia GPUs will soon fill Microsoft’s Wisconsin facility
Microsoft’s growing reliance on third-party data center operators has entered a new phase following a $19.4 billion deal with Nebius.
Nebius is one of several “neocloud” providers it has backed with a combined $33 billion investment, and with this deal, Microsoft now has access to more than 100,000 of Nvidia’s latest GB300 chips.
Microsoft has made billions renting computing power to its customers, and it aims to grow that revenue to justify its expanding AI data center budget.
Renting compute to support AI ambitions
The deal is part of a broader effort by Microsoft to strengthen its AI capabilities and expand the computing power behind its growing ecosystem of AI tools without committing its own infrastructure entirely.
It illustrates how Microsoft manages its massive demand for AI compute: leasing capacity from others while reserving its own facilities for paying customers.
The company’s internal data center infrastructure, already one of the largest in the world, is positioned primarily as a commercial service.
The Nebius deal gives Microsoft temporary access to Nvidia’s latest GB300 NVL72 server racks, each containing 72 of the high-end B300 GPUs.
Estimates put the cost of a fully equipped rack at around $3 million, suggesting that Nebius’ share of the deal could exceed $4 billion in hardware alone.
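That figure can be sanity-checked with a quick back-of-envelope calculation using the article's own numbers (roughly 100,000 GPUs, 72 GPUs per NVL72 rack, and an estimated $3 million per rack; the exact deal terms are not public):

```python
import math

# Rough estimate of the hardware value behind Nebius' share of the deal,
# using the article's figures. All inputs are approximations.
gpus = 100_000          # GB300 chips Microsoft gains access to
gpus_per_rack = 72      # B300 GPUs per GB300 NVL72 rack
cost_per_rack = 3_000_000  # estimated USD per fully equipped rack

racks = math.ceil(gpus / gpus_per_rack)
hardware_cost = racks * cost_per_rack

print(f"{racks:,} racks")                     # 1,389 racks
print(f"${hardware_cost / 1e9:.1f} billion")  # $4.2 billion
```

At roughly $4.2 billion, the estimate is consistent with the claim that the hardware alone could exceed $4 billion.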
For Microsoft, this is a shortcut to vast computing resources without waiting for its own installations to come online.
Microsoft’s partnerships with neocloud providers such as CoreWeave, Nscale, and Lambda show a shift toward distributing AI workloads across smaller, specialized compute networks.
These companies act as middlemen, leasing their GPU clusters to giants like OpenAI and now Microsoft.
At the same time, Microsoft is also investing heavily in its physical footprint.
The company’s upcoming 315-acre data center complex in Mount Pleasant, Wisconsin, is expected to house hundreds of thousands of Nvidia GPUs.
It will also include enough fiber optic cable to “circle the earth 4.5 times.”
Designed with a self-contained power supply, it reflects an attempt to reduce long-term dependence on external suppliers.
The rapid construction of GPU-driven data centers has already begun to strain local energy systems.
Wholesale electricity prices near major AI facilities have reportedly increased 267% over five years, sparking concern among U.S. residents and regulators.
Environmental impacts are also attracting attention, with new projects in Tennessee and Wyoming linked to increased emissions and energy consumption.
Nvidia’s $100 billion investment in OpenAI has intensified questions about market concentration and antitrust risks.
Microsoft’s deep ties to Nvidia and OpenAI now place it at the center of the same debate.
This highlights how the pursuit of computational scale continues to blur the lines between partnership and dominance in the AI ecosystem.
Via Tom's Hardware