- Meta’s ORW design charts a new direction for open data center hardware
- AMD extends openness from the silicon to the rack, even if industry-wide neutrality remains uncertain
- Liquid cooling and an Ethernet fabric underline the system’s focus on serviceability
At the recent Open Compute Project (OCP) 2025 Global Summit in San Jose, AMD demonstrated its “Helios” rack-scale platform, built on Meta’s new Open Rack Wide (ORW) standard.
The design has been described as a double-width open frame aimed at improving energy efficiency, cooling and serviceability of artificial intelligence systems.
AMD is positioning “Helios” as a major step toward open, interoperable data center infrastructure, but it remains to be seen how much of this openness will translate into practical industry-wide adoption.
Meta contributed the ORW specification to the OCP community, describing it as the foundation for large-scale AI data centers.
The new form factor was developed to meet the growing demand for standardized hardware architectures.
AMD’s “Helios” appears to serve as a test case for this concept, blending Meta’s open-rack principles with AMD’s own hardware.
This collaboration marks a move away from proprietary systems, even if the involvement of major players like Meta raises questions about the neutrality and accessibility of the standard.
The “Helios” system is powered by AMD Instinct MI450 GPUs based on the CDNA architecture, alongside EPYC processors and Pensando networking hardware.
Each MI450 is said to offer up to 432 GB of high-bandwidth memory and 19.6 TB/s of memory bandwidth, providing headroom for data-intensive AI workloads.
At full scale, a “Helios” rack equipped with 72 of these GPUs should achieve 1.4 exaFLOPS in FP8 and 2.9 exaFLOPS in FP4 precision, supported by 31 TB of HBM4 memory and 1.4 PB/s of total bandwidth.
It also provides up to 260 TB/s of internal interconnect throughput and 43 TB/s of Ethernet-based scalability capacity.
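The rack-level figures follow directly from the per-GPU numbers. A minimal sketch of that arithmetic, using only the values quoted above (all of which are AMD's stated estimates, not independent measurements):

```python
# Back-of-the-envelope check of the rack-scale figures quoted in the article,
# derived from the per-GPU MI450 numbers (vendor estimates).

GPUS_PER_RACK = 72
HBM_PER_GPU_GB = 432      # stated HBM4 capacity per MI450
BW_PER_GPU_TBS = 19.6     # stated memory bandwidth per MI450

# Aggregate capacity and bandwidth across the rack
rack_hbm_tb = GPUS_PER_RACK * HBM_PER_GPU_GB / 1000   # ~31 TB, matching the article
rack_bw_pbs = GPUS_PER_RACK * BW_PER_GPU_TBS / 1000   # ~1.41 PB/s, matching the article

# Implied per-GPU compute, working backwards from the quoted rack totals
fp8_per_gpu_pflops = 1.4e3 / GPUS_PER_RACK   # 1.4 exaFLOPS FP8 -> ~19.4 PFLOPS per GPU
fp4_per_gpu_pflops = 2.9e3 / GPUS_PER_RACK   # 2.9 exaFLOPS FP4 -> ~40.3 PFLOPS per GPU

print(f"Rack HBM4: {rack_hbm_tb:.1f} TB, aggregate bandwidth: {rack_bw_pbs:.2f} PB/s")
print(f"Implied per GPU: {fp8_per_gpu_pflops:.1f} PFLOPS FP8, {fp4_per_gpu_pflops:.1f} PFLOPS FP4")
```

The totals line up: 72 GPUs at 432 GB each give roughly 31 TB of HBM4, and 72 × 19.6 TB/s gives roughly 1.4 PB/s of aggregate memory bandwidth.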
AMD estimates performance up to 17.9 times that of its previous generation, and approximately 50% more memory capacity and bandwidth than Nvidia’s Vera Rubin system.
AMD says this represents a quantum leap in AI training and inference capability, but these are technical estimates, not field results.
The open rack design integrates the OCP DC-MHS, UALink and Ultra Ethernet Consortium frameworks, enabling both scale-up and scale-out deployment.
It also includes standards-based liquid cooling and an Ethernet fabric for reliability and serviceability.
The “Helios” design extends the company’s drive for openness from the chip level to the rack level, and Oracle’s early commitment to deploy 50,000 AMD GPUs suggests commercial interest.
Still, broad ecosystem support will determine whether “Helios” becomes a shared industry standard or remains another branded interpretation of open infrastructure.
As AMD targets 2026 for volume deployment, the extent to which competitors and partners adopt ORW will reveal whether opening up AI hardware can move from concept to actual practice.
“Open collaboration is essential to effectively scaling AI,” said Forrest Norrod, executive vice president and general manager of AMD’s Data Center Solutions Group.
“With Helios, we are transforming open standards into real, deployable systems, combining AMD Instinct GPUs, EPYC processors and open fabrics to offer the industry a flexible, high-performance platform built for the next generation of AI workloads.”