- AI-focused racks are projected to consume up to 1 MW each by 2030
- Average racks are expected to climb steadily to 30-50 kW over the same period
- Power distribution and cooling are becoming strategic priorities for future data centers
Long considered the basic unit of the data center, the rack is being reshaped by the rise of AI, and a new graphic (above) from Lennox Data Center Solutions shows how quickly this change is taking place.
Where racks once consumed only a few kilowatts, the company's projections suggest that by 2030 an AI-focused rack could reach 1 MW of power consumption, a scale formerly reserved for entire facilities.
Average data center racks are expected to reach 30 to 50 kW over the same period, reflecting a steady rise in compute density, and the contrast with AI workloads is striking.
New power delivery and cooling demands
According to the projections, a single AI rack could draw 20 to 30 times the power of its general-purpose counterpart, creating new demands on electrical and liquid-cooling infrastructure.
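That multiple follows directly from the projected figures above; a minimal arithmetic sketch, treating the 1 MW and 30-50 kW projections as the only inputs:

```python
# Rough arithmetic behind the 20-30x figure, using only the projected
# values cited above (1 MW AI rack vs a 30-50 kW general-purpose rack).
ai_rack_kw = 1_000            # 1 MW expressed in kW
general_rack_kw = (30, 50)    # projected general-purpose rack range

for kw in general_rack_kw:
    print(f"1 MW AI rack vs {kw} kW rack: {ai_rack_kw / kw:.0f}x")
# -> roughly 20x to 33x, consistent with the 20-30x projection
```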
Ted Pulfer, director of Lennox Data Center Solutions, said that cooling has become central to the industry.
“Cooling, once part of the supporting infrastructure, has now moved to the forefront of the conversation, driven by rising compute densities, AI workloads and growing interest in approaches such as liquid cooling,” he said.
Pulfer also described the level of collaboration across the industry. “Manufacturers, engineers and end users are all working more closely than ever, sharing ideas and experimenting together in the lab and in real deployments. This hands-on cooperation is helping to tackle some of the most complex cooling challenges,” he said.
The goal of delivering 1 MW of power to a rack is also reshaping how these systems are built.
“Instead of traditional low-voltage AC, the industry is moving to high-voltage DC, such as +/-400 V. This reduces power loss and cable size,” Pulfer explained.
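The reasoning behind that shift can be sketched with rough numbers: for the same 1 MW, a higher distribution voltage means lower conductor current, and resistive losses scale with the square of that current. The voltages, power factor and simplified per-conductor loss model below are illustrative assumptions, not figures from the article.

```python
import math

P = 1_000_000   # rack power, W (the 1 MW target discussed above)

# Assumed low-voltage three-phase AC feed: 415 V line-to-line, 0.95 power factor
v_ll, pf = 415, 0.95
i_ac = P / (math.sqrt(3) * v_ll * pf)

# Assumed +/-400 V DC bus: 800 V between the poles
v_dc = 800
i_dc = P / v_dc

print(f"AC conductor current: {i_ac:,.0f} A")
print(f"DC conductor current: {i_dc:,.0f} A")

# Resistive loss in a given conductor scales with I^2, so the lower DC current
# translates into smaller cables and lower distribution losses (ignoring
# differences in conductor count between the two schemes).
print(f"Relative I^2R loss (DC vs AC): {(i_dc / i_ac) ** 2:.2f}")
```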
“Cooling is handled by central, facility-level CDUs that manage liquid flow out to the rack manifolds. From there, the fluid reaches individual cold plates mounted directly on the hottest components in the servers.”
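To put the liquid-cooling side in perspective, here is a back-of-the-envelope sketch of the coolant flow a 1 MW rack would need, assuming a water-based coolant and a 10 K temperature rise across the rack (both assumptions for illustration, not figures from the article):

```python
# Coolant flow needed to carry away 1 MW, from Q = m_dot * c_p * delta_T.
Q = 1_000_000    # heat load, W
c_p = 4186       # specific heat of water, J/(kg*K)  (assumed coolant)
delta_t = 10     # assumed coolant temperature rise across the rack, K
rho = 1000       # water density, kg/m^3

m_dot = Q / (c_p * delta_t)            # required mass flow, kg/s
flow_lpm = m_dot / rho * 1000 * 60     # volumetric flow, litres per minute

print(f"Mass flow:       {m_dot:.1f} kg/s")
print(f"Volumetric flow: {flow_lpm:.0f} L/min")
# Roughly 24 kg/s (~1,400 L/min) for a single rack, which is why facility-level
# CDUs and rack manifolds become central to the design.
```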
Most liquid-cooled data centers today rely on cold plates, but the approach has limits. Microsoft has been testing microfluidics, in which tiny channels are etched into the back of the chip itself, allowing coolant to flow directly across the silicon.
In early tests, this removed heat up to three times more effectively than cold plates, depending on the workload, and cut the rise in GPU temperature by 65%.
By combining this design with AI that maps hot spots across the chip, Microsoft was able to direct coolant with greater precision.
Although hyperscalers may dominate this space, Pulfer believes smaller operators still have room to compete.
“Sometimes the volume of orders moving through factories can create delivery bottlenecks, which opens the door for others to step in and add value. In this fast-moving market, agility and innovation remain key strengths in the industry,” he said.
What is clear is that power and heat rejection are now central concerns, no longer secondary to compute performance.
As Pulfer puts it, “heat rejection is essential to keeping the world’s digital foundations running smoothly and sustainably.”
By the end of the decade, the form and scale of the rack itself may determine the future of digital infrastructure.