- Arm launches into silicon production with a processor designed for large-scale AI workloads
- New AGI processor doubles rack performance compared to traditional x86 systems
- Meta and OpenAI adopt Arm chip for next-generation infrastructure
Arm has moved into production silicon for the first time, expanding its compute platform with what it calls the “next evolution of the Arm compute platform”: the AGI processor.
The company says the processor is designed specifically for AI data centers, supporting agentic AI workloads that involve the continuous execution of agents capable of reasoning, planning and taking action.
The processor features up to 136 Neoverse V3 cores per processor, with 6 GB/s memory bandwidth per core and less than 100 ns latency, enabling higher workload density and improved system efficiency.
Performance and capacity
The Arm AGI processor promises deterministic performance under sustained load with a TDP of 300 watts and one dedicated core per program thread.
The processor supports 1U air-cooled server chassis with up to 8,160 cores per rack and liquid-cooled deployments up to 45,000 cores per rack.
Compared with traditional x86 systems, Arm says the AGI processor can deliver more than double the performance per rack, supporting larger AI workloads while remaining power efficient.
These capabilities aim to improve computing density, accelerator utilization, and overall infrastructure efficiency.
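As a rough sanity check, the figures quoted above can be combined arithmetically. This is a back-of-the-envelope sketch, not anything Arm states: it assumes per-core bandwidth scales linearly across all cores and that each 1U air-cooled server carries a single fully populated processor.

```python
# Back-of-the-envelope arithmetic from the figures quoted in the article.
# Assumptions (not stated by Arm): per-core memory bandwidth adds linearly,
# and each air-cooled 1U server holds one 136-core processor.

cores_per_processor = 136        # Neoverse V3 cores per processor
bandwidth_per_core_gbps = 6      # GB/s of memory bandwidth per core

# Implied aggregate memory bandwidth of one fully loaded processor.
aggregate_bandwidth = cores_per_processor * bandwidth_per_core_gbps
print(f"Implied bandwidth per processor: {aggregate_bandwidth} GB/s")

# Implied server/processor counts per rack from the quoted core totals.
air_cooled_cores = 8_160
liquid_cooled_cores = 45_000
print(f"Implied air-cooled 1U servers per rack: {air_cooled_cores // cores_per_processor}")
print(f"Implied liquid-cooled processors per rack: ~{round(liquid_cooled_cores / cores_per_processor)}")
```

Under these assumptions the air-cooled figure works out to exactly 60 single-processor 1U servers per rack, while the liquid-cooled total implies roughly 331 processors, suggesting denser multi-processor sleds rather than one chip per 1U slot.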
Meta is the lead partner and co-developer of the Arm AGI processor, integrating it with its Meta Training and Inference Accelerator (MTIA) to optimize data center performance.
Early commercial adoption also includes OpenAI, Cerebras, Cloudflare, Positron, Rebellions, SAP and SK Telecom.
Arm is working with OEMs and ODMs such as Lenovo, Supermicro, Quanta Computer and ASRock Rack to deliver the first systems, with wider availability expected in the second half of 2026.
More than 50 industry leaders across hyperscale, cloud, semiconductor, memory, networking, software, and systems design are backing the processor's deployment.
“Over the past decade, we have worked closely with Arm to build Graviton here at AWS, and it has been a remarkable success: the majority of the compute capacity that AWS added to our fleet in 2025 was powered by Graviton,” said James Hamilton, Amazon SVP and Distinguished Engineer.
“This collaboration has been great for both companies, and Graviton continues to deliver better price/performance for our customers.”
Industry partners also highlighted the new processor’s broader infrastructure implications.
“The new Arm AGI processor will further open the Arm ecosystem to a broad range of customers, creating new opportunities for everyone…” said Charlie Kawwas, President, Semiconductor Solutions Group, Broadcom Inc.
“As Broadcom builds the world’s highest performance XPU and networking solutions for hyperscalers… our partnership with Arm has allowed us to move forward with unparalleled intent and speed.”
The Arm AGI processor is intended to serve as the foundation for agentic AI workloads, enabling organizations to deploy AI tools at scale while maintaining high efficiency.
The processor supports large-scale deployment of AI applications, including accelerator management, control plane processing, and cloud or enterprise-based API and task hosting.
That said, the success of the Arm AGI processor will depend on data center adoption, integration with existing accelerators and memory, and proven performance gains over alternatives.