- Google Cloud’s US-East5-C zone (Columbus, Ohio) experienced a six-hour outage
- It was caused by a failure of its uninterruptible power supply (UPS)
- More than 20 services and some storage disks went down
Much like the supposedly unsinkable Titanic met its end on its maiden voyage, Google Cloud has recovered from a major outage caused by the failure of its uninterruptible power supply (UPS).
The company confirmed that its US-East5-C zone, located in Columbus, Ohio, experienced “degraded service or unavailability” for six hours and 10 minutes on March 29, 2025, blaming the incident on a “loss of utility power in the affected zone”.
More than 20 cloud services suffered reduced performance or downtime as a result of the outage, including BigQuery, Cloud SQL, Cloud VPN and Virtual Private Cloud.
Google’s uninterruptible power supply just had a fairly major failure
In its incident report, the company explained exactly what had happened: “This power loss triggered a cascading failure within the uninterruptible power supply (UPS) system responsible for powering the zone during such events.”
“The UPS system, which relies on batteries to bridge the gap between the loss of utility power and the switchover to generator power, experienced a critical battery failure,” the report continues.
Google’s Columbus zone uses powerful Intel chips such as Broadwell, Haswell, Skylake, Cascade Lake, Sapphire Rapids and Emerald Rapids, as well as AMD EPYC Rome and Milan processors, to power its cloud computing services. The cloud giant also noted that “a limited number of storage disks in the zone became unavailable during the outage”.
Engineers were alerted to the outage at 12:54 pm on March 29 and successfully bypassed the failing UPS, restoring power via generator by 2:49 pm PT. Most services were brought back online fairly quickly, but some required manual intervention for full restoration, hence the six-hour outage.
Google now promises to learn from the event, hardening cluster power and power recovery paths, auditing systems that did not fail over automatically, and working with its UPS vendor to mitigate future incidents.