- Google claims its Ironwood TPU pod is 24x faster than the El Capitan supercomputer
- An analyst says Google's performance comparison is "perfectly silly"
- Comparing AI systems with HPC machines can be valid, but the two serve different objectives
At the recent Google Cloud 2025 event, the technology giant said that its new Ironwood TPU v7p pod was 24 times faster than El Capitan, the exascale-class supercomputer at Lawrence Livermore National Laboratory.
But Timothy Prickett Morgan of The Next Platform rejected the claim.
"Google is comparing the sustained performance of El Capitan with its 44,544 'Antares-A' Instinct MI300A hybrid CPU-GPU compute engines running the High Performance LINPACK (HPL) floating point benchmark against the theoretical peak performance of an Ironwood pod with 9,216 TPU v7p compute engines," he wrote. "This is a perfectly silly comparison, and Google's top brass should not only know better, but in fact does."
24x the performance of El Capitan? No!
Prickett Morgan argues that even if such comparisons between AI and HPC machines are valid, the two systems serve different purposes: El Capitan is optimized for high-precision simulations, while the Ironwood pod is tailored to low-precision AI training and inference.
What matters, he adds, is not just peak performance but cost. "High performance has to come at the lowest possible cost, and no one gets better deals on HPC gear than the US Department of Energy."
The Next Platform estimates that an Ironwood pod delivers 21.26 exaflops of FP16 and 42.52 exaflops of FP8 performance, costs $445 million to build, and roughly $1.1 billion to rent for three years. That works out to a cost per teraflops of about $21 (to build) or $52 (to rent).
Meanwhile, El Capitan delivers 43.68 exaflops of FP16 and 87.36 exaflops of FP8 at a construction cost of $600 million, or about $14 per teraflops.
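For readers who want to check the arithmetic, here is a minimal Python sketch (not from the article) that reproduces those cost-per-teraflops figures; the dollar amounts and exaflops values are The Next Platform's estimates, not measured numbers.

```python
# Reproducing The Next Platform's cost-per-teraflops estimates quoted above.
# All dollar and exaflops figures are the article's estimates, not measurements.

EXAFLOPS_TO_TERAFLOPS = 1_000_000  # 1 exaflops = one million teraflops

def cost_per_teraflops(cost_usd: float, fp16_exaflops: float) -> float:
    """Dollars per teraflops of FP16 throughput."""
    return cost_usd / (fp16_exaflops * EXAFLOPS_TO_TERAFLOPS)

# Ironwood TPU v7p pod: 21.26 FP16 exaflops
print(round(cost_per_teraflops(445_000_000, 21.26)))    # ~$21/teraflops to build
print(round(cost_per_teraflops(1_100_000_000, 21.26)))  # ~$52/teraflops to rent for three years

# El Capitan: 43.68 FP16 exaflops, $600 million to build
print(round(cost_per_teraflops(600_000_000, 43.68)))    # ~$14/teraflops
```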
"El Capitan has 2.05x more performance at FP16 and FP8 precision than a theoretical peak Ironwood pod," notes Prickett Morgan. "The Ironwood pod is not 24x the performance of El Capitan."
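The 2.05x figure follows directly from the estimates above; a short check using the same numbers:

```python
# El Capitan's estimated throughput divided by the Ironwood pod's theoretical peak,
# using the exaflops figures quoted earlier in the article.
print(round(43.68 / 21.26, 2))  # FP16: 2.05
print(round(87.36 / 42.52, 2))  # FP8:  2.05
```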
He adds: "HPL-MXP uses a lot of mixed-precision calculations to converge to the same result as all-FP64 math on the HPL test, and these days delivers an effective performance boost of around an order of magnitude."
The article also includes a full table (below) comparing high-end AI and HPC systems on performance, memory, storage, and cost-effectiveness. While Google's TPU pods remain competitive, Prickett Morgan argues that, from a cost/performance standpoint, El Capitan still holds a clear advantage.
"This comparison is not perfect, we realize," he admits. "All estimates are shown in bold red italics, and we have question marks where we are unable to make an estimate at this time."