- El Capitan is a classified U.S. government system that processes data related to the U.S. nuclear arsenal.
- Patrick Kennedy of ServeTheHome was invited to the launch event at LLNL in California.
- The CEOs of AMD and HPE were also present at the ceremony.
In November 2024, the AMD-powered El Capitan officially became the world’s fastest supercomputer, delivering 2.7 exaflops of peak performance and 1.7 exaflops of sustained performance.
Built by HPE for the National Nuclear Security Administration (NNSA) at Lawrence Livermore National Laboratory (LLNL) to simulate nuclear weapons testing, it is powered by AMD Instinct MI300A APUs and has dethroned the previous leader, Frontier, relegating it to second place among the most powerful supercomputers in the world.
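As a quick sanity check on those figures, the ratio of sustained to peak performance (what the TOP500 list reports as Rmax versus Rpeak) can be computed directly. This is a minimal sketch using the rounded 2.7 and 1.7 exaflop numbers quoted above, not official benchmark data:

```python
# Rough sustained-to-peak efficiency estimate for El Capitan,
# using the rounded figures quoted in the article (not official TOP500 data).
peak_eflops = 2.7       # theoretical peak (Rpeak), in exaflops
sustained_eflops = 1.7  # sustained HPL result (Rmax), in exaflops

efficiency = sustained_eflops / peak_eflops
print(f"Sustained/peak efficiency: {efficiency:.0%}")  # roughly 63%
```

An efficiency in this range is typical for large GPU-accelerated systems running the HPL benchmark.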
Patrick Kennedy of ServeTheHome was recently invited to the launch event at LLNL in California, which was also attended by the CEOs of AMD and HPE, and was allowed to bring his phone to capture “a few shots before El Capitan launches into its classified mission.”
Not the biggest
During the tour, Kennedy observed, “Each rack has 128 fully liquid-cooled compute blades. This system was very quiet, with more noise coming from storage and other floor systems.”
He then noted, “On the other side of the racks we have the HPE Slingshot interconnect wired to the DACs and optics.”
The Slingshot interconnect side of El Capitan is – as you would expect – liquid-cooled, with the switching trays taking up only the bottom half of the space. LLNL explained to Kennedy that their codes do not require a fully populated interconnect, leaving the top half free for the “Rabbit,” a liquid-cooled unit housing 18 NVMe SSDs.
Looking inside the system, Kennedy saw “a processor that looks like an AMD EPYC 7003 Milan part, which seems perfect considering the generation of AMD MI300A.” Unlike the APU, the Rabbit’s CPU was equipped with DIMMs and what appears to be liquid-cooled DDR4 memory. As with the compute blades, everything is liquid-cooled, so there are no fans in the system.
While El Capitan is less than half the size of the xAI Colossus cluster as of September, when Elon Musk’s supercomputer held “only” 100,000 Nvidia H100 GPUs (plans are underway to expand it to a million GPUs), Kennedy points out that “systems like this remain enormous and are done with a fraction of the budget of a system with more than 100,000 GPUs.”