- Liquid cooling is no longer optional; it is the only way to survive AI's thermal assault
- The jump to 400 VDC borrows heavily from electric vehicle supply chains and design logic
- Google TPU supercomputers now operate at gigawatt scale with 99.999% availability
As demand for artificial intelligence workloads intensifies, the physical infrastructure of data centers is undergoing a rapid and radical transformation.
The likes of Google, Microsoft, and Meta are now drawing on technologies originally developed for electric vehicles (EVs), in particular 400 VDC systems, to meet the twin challenges of power delivery and high-density thermal management.
The emerging vision is data center racks capable of delivering up to 1 megawatt of power, paired with liquid cooling systems designed to manage the resulting heat.
Borrowing EV technology for data center evolution
The shift to 400 VDC electrical distribution marks a decisive break with legacy systems. Google previously championed the industry's move from 12 VDC to 48 VDC, but the current transition to +/-400 VDC is enabled by EV supply chains and propelled by necessity.
The Mt. Diablo initiative, backed by Meta, Microsoft, and the Open Compute Project (OCP), aims to standardize interfaces at this voltage level.
Google describes the architecture as a pragmatic move that frees up precious rack space for compute resources by decoupling power delivery from IT racks via sidecar DC units. It also improves end-to-end efficiency by around 3%.
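The physics behind the higher voltage is straightforward: for a fixed power, current falls in proportion to voltage, and conduction losses fall with the square of the current. The back-of-envelope sketch below illustrates this for a hypothetical 1 MW rack; the busbar resistance is a made-up placeholder, not a published spec, and the figures are illustrative rather than Google's actual numbers.

```python
# Back-of-envelope: why higher distribution voltage matters for a 1 MW rack.
# BUS_RESISTANCE_OHM is a hypothetical placeholder, not a real busbar spec.

RACK_POWER_W = 1_000_000      # 1 MW target rack load
BUS_RESISTANCE_OHM = 0.0001   # assumed distribution path resistance

for label, volts in [("48 VDC", 48), ("+/-400 VDC (800 V pole-to-pole)", 800)]:
    current = RACK_POWER_W / volts            # I = P / V
    loss = current ** 2 * BUS_RESISTANCE_OHM  # conduction loss = I^2 * R
    print(f"{label}: draws {current:,.0f} A; I^2*R loss ~ {loss / 1000:,.1f} kW")
```

Under these assumptions, moving from 48 V to 800 V pole-to-pole cuts the current from roughly 20,800 A to 1,250 A and conduction losses by a factor of about 280 for the same conductors, which is consistent with the direction, if not the exact magnitude, of the efficiency gain Google cites.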
Cooling, however, has become an equally pressing problem. With next-generation chips each consuming more than 1,000 watts, traditional air cooling is quickly becoming obsolete.
Liquid cooling has become the only scalable solution for managing heat in high-density compute environments.
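The reason liquid wins comes down to heat capacity: the flow required to carry away a given heat load follows Q = m_dot * c_p * dT, and water holds vastly more heat per unit volume than air. The sketch below compares the two for a single 1,000 W chip; the fluid properties are standard textbook values, and the 15 K coolant temperature rise is a hypothetical design choice, not a vendor figure.

```python
# Rough comparison of air vs. water as a heat-transport medium for a 1,000 W chip.
# Fluid properties are textbook values; the 15 K rise is an assumed design point.

CHIP_POWER_W = 1_000
DELTA_T_K = 15.0  # allowed coolant temperature rise across the heatsink/cold plate

fluids = {
    # name: (density in kg/m^3, specific heat in J/(kg*K))
    "air":   (1.2, 1005.0),
    "water": (998.0, 4186.0),
}

for name, (rho, cp) in fluids.items():
    # Q = m_dot * c_p * dT  =>  volumetric flow = Q / (rho * c_p * dT)
    flow_m3_s = CHIP_POWER_W / (rho * cp * DELTA_T_K)
    print(f"{name}: {flow_m3_s * 1000:.3f} L/s to remove {CHIP_POWER_W} W at dT = {DELTA_T_K} K")
```

Under these assumptions, air needs on the order of 55 liters per second per chip while water needs only about 0.016 liters per second, a gap of roughly 3,500x that makes ducted air impractical at rack densities in the hundreds of kilowatts.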
Google has embraced this approach with large-scale deployments; its liquid-cooled TPU pods now operate at gigawatt scale and have delivered 99.999% availability over the past seven years.
These systems replace bulky heatsinks with compact cold plates, halving the physical footprint of the server hardware and quadrupling compute density compared with previous generations.
Despite these technical achievements, however, some skepticism is warranted. The push toward 1 MW racks rests on the assumption of continually rising demand, a trend that may not materialize as expected.
While Google's roadmap highlights AI's power needs, projecting more than 500 kW per rack by 2030, it remains uncertain whether these projections will hold across the wider market.
It is also worth noting that integrating EV technologies into data centers brings not only efficiency gains but also new complexities, particularly around safety and the serviceability of high-voltage equipment.
Still, the collaboration between hyperscalers and the open hardware community signals a shared recognition that existing paradigms are no longer sufficient.
Via StorageReview