- AMD aims for MI500 launch in 2027 while Nvidia prepares Vera-Rubin a year earlier
- CES 2026 highlights a growing gap between AMD's and Nvidia's AI accelerator timelines
- AMD expands AI portfolio as next-gen hardware remains a year away
At CES 2026, AMD discussed its near- and long-term AI hardware plans, including a look at the Instinct MI500 Series accelerators expected to arrive in 2027.
The company used the show to provide a first look at Helios, a rack-scale platform built around Instinct MI455X GPUs and EPYC Venice processors. AMD is positioning Helios as a model for very large-scale AI infrastructure rather than a shipping product.
AMD also introduced the Instinct MI440X, a new accelerator for on-premises enterprise deployments, designed to scale within existing eight-GPU systems for training, tuning, and inference workloads.
Nvidia Vera-Rubin also on the way
However, what comes next is more interesting to many industry observers. AMD said the Instinct MI500 series is expected to launch in 2027 and will offer a significant increase in AI performance compared to the MI300X generation.
The MI500 is expected to use AMD’s CDNA 6 architecture, a 2nm process, and HBM4E memory.
AMD claims the design is on track to deliver up to a 1,000x increase in AI performance over the MI300X, although, with launch still some way off, no detailed benchmarks have been shared.
As exciting as this is, the timing is tricky for AMD as Nvidia prepares to introduce its Vera-Rubin platform this year.
At CES 2026, Nvidia also detailed its replacement for Grace-Blackwell rack-scale designs: The Vera-Rubin platform is built from six new chips designed to work as a single rack-scale system.
These include the Vera CPU, Rubin GPU, NVLink 6 switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-6 Ethernet switch.
In its NVL72 configuration, the system combines 72 Rubin GPUs and 36 Vera processors connected via NVSwitch and NVLink Fabric to operate as a shared memory system.
Nvidia claims that Vera-Rubin NVL72 systems reduce the inference cost per token for mixed models by 10 times and reduce the number of GPUs needed for training by four times.
Rubin GPUs use eight HBM4 memory stacks and include a new Transformer Engine with hardware-supported adaptive compression, intended to improve efficiency during inference and training without impacting model accuracy.
Rubin-based systems will be available from partners in the second half of 2026, including NVL72 rack-scale systems and smaller HGX NVL8 configurations, with deployments planned for cloud providers, AI infrastructure operators, and system vendors.
By the time AMD’s Instinct MI500 series arrives in 2027, Nvidia’s Vera-Rubin platform should already be available from partners and widely deployed.
TechRadar will cover this year’s events extensively and will bring you all the big announcements as they happen. Visit our CES 2026 News for the latest stories and our hands-on verdicts on everything from wireless TVs and foldable displays to new phones, laptops, smart home gadgets and the latest in AI. You can also ask us a question about the show in our Live Q&A from CES 2026 and we will do our best to answer it.
And don’t forget to follow us on TikTok and WhatsApp for the latest news from the CES show!