- Nvidia Commits Capital, Hardware to Accelerate CoreWeave AI Factory Expansion
- CoreWeave Gains Early Access to Vera Rubin Platforms Across Multiple Data Centers
- Financial backing directly links Nvidia’s balance sheet to AI infrastructure growth
Nvidia and CoreWeave have expanded their long-standing relationship with an agreement that links infrastructure deployment, capital investment and early access to future computing platforms.
This agreement places CoreWeave among the first cloud providers to deploy Nvidia’s Vera Rubin generation, strengthening its role as a preferred partner for large-scale AI infrastructure.
Nvidia also committed $2 billion to CoreWeave via a direct stock purchase, highlighting the financial depth of the collaboration.
Scaling AI factories with aligned infrastructure
The deal focuses on accelerating the construction of AI factories, with CoreWeave forecasting more than five gigawatts of capacity by 2030.
Nvidia’s involvement goes beyond supplying accelerators: it is also helping CoreWeave secure land, power and physical infrastructure.
This approach directly links capital availability to hardware deployment timelines, reflecting how AI expansion increasingly depends on coordination between finance and compute delivery.
“AI is crossing a new frontier and driving the largest infrastructure build in human history,” said Jensen Huang, founder and CEO of Nvidia.
“CoreWeave’s deep AI factory expertise, platform software and unmatched execution speed are recognized across the industry. Together, we are working to meet the extraordinary demand for Nvidia AI factories – the foundation of the AI industrial revolution.”
Nvidia and CoreWeave are also deepening the alignment between infrastructure and software layers.
CoreWeave’s cloud stack and operational tools will be tested and validated alongside Nvidia reference architectures.
“From the beginning, our collaboration has been guided by a simple belief: AI succeeds when software, infrastructure and operations are designed together,” said Michael Intrator, co-founder, president and CEO of CoreWeave.
CoreWeave is expected to deploy multiple generations of the Nvidia platform in its data centers, including early adoption of the Rubin platform, Vera processors and BlueField storage systems.
This multi-generation strategy suggests that Nvidia is using CoreWeave as a testing ground for full-stack deployments rather than for isolated components.
Vera processors are expected to be offered as a standalone option, signaling Nvidia’s intention to address the CPU constraints becoming increasingly visible as agentic AI workloads grow.
These processors use a custom Arm architecture with high core counts, large coherent memory capacity, and high bandwidth interconnects.
“For the first time ever, we’re going to offer Vera processors. Vera is such an incredible processor. We’re going to offer Vera processors as a standalone part of the infrastructure. So not only can you run your computing stack on Nvidia GPUs, but now you can also run your computing stack, regardless of its CPU workload, on Nvidia processors… Vera is completely revolutionary,” Jensen Huang said, in remarks shared by Ed Ludlow on X.
The collaboration reflects two narratives shaping today’s AI market.
Server processors are emerging as another pressure point in the supply chain, particularly for agent-driven applications.
At the same time, offering high-end processors on their own gives customers an alternative to buying complete large-scale systems, which can lower entry costs for certain deployments.