- Distributed micro data centers convert unused electricity into functional AI computing
- The network targets 400,000 GPUs installed at 1,000 modular sites worldwide
- Energy-focused deployment avoids delays caused by slow grid connection approvals
AI infrastructure faces a hard limit that has little to do with chips and everything to do with power. New data centers are often ready to be built, but wait years for permission to connect to already congested power grids.
This delay has sparked interest in building data centers where electricity is available instead of expanding the grid to reach them.
French AI infrastructure company Antimatter is deploying a network of 1,000 modular micro data centers placed directly next to power sources across the US, Europe and GCC regions.
1 GW of capacity secured through grid connection agreements
These smaller facilities use electricity that existing grid connections cannot deliver to customers, running AI workloads on-site instead of waiting years for new transmission lines to be built.
Each unit is built from container-style modules housing up to 400 GPUs and can be deployed in approximately five months.
Traditional hyperscale builds often require more than two years to reach a similar level of readiness.
Wind, solar, hydroelectric and biogas installations are the main targets because many of them already produce electricity that cannot always be delivered to customers when transmission capacity is limited.
Placing data centers next to these sites allows energy that would otherwise be curtailed to be used for computing instead.
Antimatter says more than 1 GW of capacity has been secured through grid connection agreements and dedicated locations, with more than 160 MW already operational in Texas and Oregon.
Ten units across eight sites make up the initial footprint, with hundreds more facilities in development.
The first phase of large-scale construction focuses on 100 deployments planned for 2027, supporting more than 40,000 GPUs and approximately 3.6 exaFLOPS of compute capacity.
Longer-term plans expand to 1,000 sites by the end of 2030, providing more than 400,000 GPUs and approximately 36 exaFLOPS in dozens of countries.
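The roadmap figures are internally consistent, as a quick sketch shows (the inputs are the numbers quoted above; the per-GPU throughput is derived from them, not a published specification):

```python
# Back-of-the-envelope check of the roadmap figures quoted in the article.
# The per-GPU throughput below is derived, not an official spec.

GPUS_PER_SITE = 400      # maximum GPUs per modular unit
SITES_2027 = 100         # first large-scale construction phase
SITES_2030 = 1_000       # long-term target
EXAFLOPS_2027 = 3.6      # quoted compute capacity for the 2027 phase

gpus_2027 = GPUS_PER_SITE * SITES_2027   # 40,000 GPUs
gpus_2030 = GPUS_PER_SITE * SITES_2030   # 400,000 GPUs

# Implied average throughput per GPU in teraFLOPS (1 exaFLOPS = 1e6 TFLOPS)
tflops_per_gpu = EXAFLOPS_2027 * 1e6 / gpus_2027  # 90 TFLOPS

print(gpus_2027, gpus_2030, tflops_per_gpu)
```

Scaling the 2027 phase tenfold gives the 2030 target of 400,000 GPUs and roughly 36 exaFLOPS, matching the stated plan.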
“In the age of AI, it’s not intelligence that’s the bottleneck, it’s energy,” said David Gurlé, co-founder, executive chairman and CEO of Antimatter.
“The infrastructure built for the first era of cloud and AI was designed at a centralized scale. But the era of inference requires a different model: more distributed, faster to deploy, and sovereign by design. This is the infrastructure that Antimatter is building.”
Much of the demand comes from inference workloads, where trained models run continuously within co-pilots, automated services, and real-time decision systems.
Smaller distributed facilities linked through shared software allow these systems to operate as a single network while keeping processing physically closer to users.