- Nvidia integrates Samsung Foundry to extend NVLink Fusion for custom AI silicon
- NVLink Fusion allows CPUs, GPUs and accelerators to communicate seamlessly
- Intel and Fujitsu can now create processors connected directly to Nvidia GPUs
Nvidia is stepping up its efforts to make itself indispensable in the AI landscape by expanding its NVLink Fusion ecosystem.
Following a recent collaboration with Intel, which allows x86 processors to connect directly to Nvidia platforms, the company has now tapped Samsung Foundry to help design and manufacture custom CPUs and XPUs.
The move, announced at the Open Compute Project (OCP) 2025 global summit in San Jose, shows Nvidia’s ambition to expand its control over the entire AI computing hardware stack.
Integrating new players into NVLink Fusion
Ian Buck, vice president of HPC and Hyperscale at Nvidia, explained that NVLink Fusion is an IP and chiplet solution designed to seamlessly integrate CPUs, GPUs and accelerators into MGX and OCP infrastructure.
It enables direct, high-speed communication between processors within rack-scale systems, with the goal of removing traditional performance bottlenecks between computing components.
At the summit, Nvidia revealed several ecosystem partners, including Intel and Fujitsu, both of which are now capable of creating processors that communicate directly with Nvidia GPUs via NVLink Fusion.
Samsung Foundry joins this list, offering comprehensive expertise from design to manufacturing of custom silicon, an addition that strengthens Nvidia’s reach in semiconductor manufacturing.
The collaboration between Nvidia and Samsung reflects a growing shift in the AI hardware market.
As AI workloads increase and competition intensifies, Nvidia’s custom CPU and XPU designs aim to ensure its technologies remain at the heart of next-generation data centers.
According to TechPowerUp, Nvidia’s strategy comes with strict restrictions.
Custom chips developed under NVLink Fusion must interface with Nvidia products, with Nvidia retaining control of communications controllers, PHY layers, and NVLink Switch licensing.
This gives Nvidia considerable leverage in the ecosystem, although it also raises concerns around openness and interoperability.
This increased integration comes as competitors such as OpenAI, Google, AWS, Meta and Broadcom develop in-house chips to reduce reliance on Nvidia hardware.
Nvidia is weaving itself deeper into the fabric of AI infrastructure by making its technologies inevitable rather than optional.
With NVLink Fusion and the addition of Samsung Foundry to its custom silicon ecosystem, the company is expanding its influence across the entire hardware stack, from chips to data center architectures.
This reflects a broader trend among Nvidia’s competitors and partners alike. Broadcom is deepening its AI push with custom accelerators built for hyperscalers.
OpenAI is also reportedly designing its own in-house chips to reduce its reliance on Nvidia GPUs.
Together, these developments mark a new phase of competition in AI hardware, where control of the silicon-to-software pipeline determines who leads the industry.
Nvidia’s partnership with Samsung appears intended to counter this threat by accelerating the delivery of custom solutions that can be deployed quickly at scale.
By integrating its intellectual property into broader infrastructure designs, Nvidia is positioning itself as an essential part of modern AI factories, rather than just a GPU supplier.
Despite Nvidia’s contributions to the OCP open hardware initiative, its NVLink Fusion ecosystem maintains strict limits that favor its architecture.
While this may ensure benefits in performance and ecosystem consistency, it could also increase concerns about vendor lock-in.