- Nvidia CEO Jensen Huang predicts record revenue ahead
- Huang says he expects about $1 trillion from Rubin and Blackwell sales
- Nvidia Unveils New Vera Server Chips and Racks at GTC 2026
Jensen Huang said he expects Nvidia to earn around $1 trillion from the sale of its AI hardware through 2027.
Speaking during his keynote speech at Nvidia GTC 2026, the CEO and co-founder said sales of its Blackwell and Rubin chips are expected to be a huge source of revenue for the company in the coming months.
And that might not be all: Nvidia has announced a series of new hardware releases, further expanding its range of offerings.
All about compute
“I forecast (AI chip sales) until 2027 – at least a trillion dollars,” Huang said in a presentation packed with announcements, all centered on meeting the growing demand for computing in the AI era.
“I think computing demand has increased 1 million times in the last two years,” Huang said. “It’s the feeling we all have. It’s the feeling every startup has.”
The $1 trillion figure had the thousands in attendance at Nvidia GTC 2026 gasping, especially as Huang noted that the company had previously forecast that data center equipment would generate $500 billion in sales through the end of 2026.
To build on this momentum, Huang presented several major announcements on stage, including no less than seven new Vera Rubin chips.
These include a new Vera processor, due in the second half of 2026, which the company says is “purpose-built” for agentic AI, offering twice the efficiency of traditional processors and 50% faster performance, along with the highest single-threaded performance and per-core bandwidth currently available.
Nvidia also announced a new rack integrating 256 liquid-cooled Vera processors, enough to support more than 22,500 simultaneous processor environments, each operating independently at full performance – a key part of the company’s push toward “AI factories” to power use cases ranging from quantum computing to robotics.
Huang also revealed that the Groq 3 LPU (language processing unit) will now be part of Nvidia’s product line, helping to boost large language model (LLM) inference and improve how responses to AI prompts are generated.




