- StorageReview calculated 314 trillion digits of pi on a single physical server, without distributed cloud infrastructure
- The entire calculation ran for 110 days without interruption
- Power consumption was significantly lower than in previous cluster-based pi records
A new benchmark for large-scale numerical computing has been set with the calculation of 314 trillion digits of pi on a single on-premises system.
The execution was carried out by StorageReview, surpassing previous cloud-based efforts, including Google Cloud’s 100 trillion-digit calculation from 2022.
Unlike hyperscale approaches that relied on massively distributed resources, this record was achieved on a single physical server using tightly controlled hardware and software choices.
Runtime and system stability
The calculation ran continuously for 110 days, significantly shorter than the approximately 225 days required by the previous large-scale record, even though that earlier effort produced fewer digits.
The uninterrupted execution was attributed to operating system stability and limited background activity.
It was also credited to a balanced NUMA topology and careful tuning of memory and storage to match the behavior of the y-cruncher application.
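To put the runtime gap in perspective, here is a rough back-of-the-envelope comparison in Python, assuming the roughly 225-day effort refers to the cluster-based 300-trillion-digit record cited further down (the article does not restate that run's digit count here):

```python
# Rough throughput comparison built from the figures quoted in the article.
# Assumption: the ~225-day prior effort is the cluster-based
# 300-trillion-digit record mentioned later in this piece.

DIGITS_THIS_RUN = 314e12   # digits computed in the StorageReview run
DAYS_THIS_RUN = 110

DIGITS_PRIOR = 300e12      # assumed size of the prior record
DAYS_PRIOR = 225           # approximate runtime of the prior effort

rate_now = DIGITS_THIS_RUN / DAYS_THIS_RUN    # ~2.85e12 digits/day
rate_prior = DIGITS_PRIOR / DAYS_PRIOR        # ~1.33e12 digits/day

print(f"this run : {rate_now / 1e12:.2f} trillion digits/day")
print(f"prior run: {rate_prior / 1e12:.2f} trillion digits/day")
print(f"speedup  : {rate_now / rate_prior:.1f}x")
```

Under those assumptions, the single server sustained roughly twice the daily digit throughput of the earlier cluster effort.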
The workload was treated less as a demonstration and more as an extended stress test of production systems.
At the center of the effort was a Dell PowerEdge R7725 system equipped with two AMD EPYC 9965 processors, providing 384 processor cores, as well as 1.5 TB of DDR5 memory.
Storage consisted of forty 61.44 TB Micron 6550 Ion NVMe drives, providing roughly 2.5 PB of raw capacity.
Thirty-four of these drives were allocated to the y-cruncher workspace in a JBOD configuration, while the remaining six formed a software RAID volume to protect the final output.
This configuration prioritized throughput and energy efficiency over total data resiliency during computation.
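A quick sketch of how that capacity divides up, using only the drive counts and per-drive capacity quoted above (decimal terabytes, as drive vendors rate capacity):

```python
# Capacity split of the 40-drive NVMe pool, from the counts in the article.
TB = 1e12  # decimal terabyte, the unit drive vendors quote

DRIVE_TB = 61.44
TOTAL_DRIVES = 40
WORKSPACE_DRIVES = 34                            # y-cruncher swap space (JBOD)
RAID_DRIVES = TOTAL_DRIVES - WORKSPACE_DRIVES    # software RAID for the output

raw = TOTAL_DRIVES * DRIVE_TB * TB               # ~2.46 PB raw
workspace = WORKSPACE_DRIVES * DRIVE_TB * TB     # ~2.09 PB for y-cruncher
raid = RAID_DRIVES * DRIVE_TB * TB               # ~369 TB protecting the result

print(f"raw pool : {raw / 1e15:.2f} PB ({raw / 2**50:.2f} PiB)")
print(f"workspace: {workspace / 1e15:.2f} PB")
print(f"raid     : {raid / 1e15:.2f} PB")
```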
The workload generated enormous disk activity, including approximately 132 PB of logical reads and 112 PB of logical writes over the course of the run.
Peak logical disk usage reached around 1.43 PiB, while the largest checkpoint exceeded 774 TiB.
SSD wear measurements reported approximately 7.3 PB written per drive, totaling approximately 249 PB across all swap devices.
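Those wear figures are internally consistent, as a short consistency check shows (the per-drive full-write count is derived here, not stated in the article):

```python
# Consistency check on the reported SSD wear figures.
PB = 1e15

per_drive_writes = 7.3 * PB              # reported writes per swap drive
swap_drives = 34                         # drives in the y-cruncher workspace
total = per_drive_writes * swap_drives   # ~248 PB vs. the ~249 PB reported

drive_capacity = 61.44e12                # Micron 6550 Ion capacity in bytes
full_drive_writes = per_drive_writes / drive_capacity  # ~119 full overwrites

print(f"total swap writes : {total / PB:.0f} PB (article: ~249 PB)")
print(f"full-drive writes : {full_drive_writes:.0f} per drive")
```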
Internal benchmarks showed that sequential read and write performance more than doubled compared to the platform used for the previous 202-trillion-digit run.
For this setup, power draw was reported to be around 1,600 watts, with total energy consumption of around 4,305 kWh, or 13.70 kWh per trillion digits calculated.
This figure is far lower than estimates for the previous cluster-based 300-trillion-digit record, which would have consumed more than 33,000 kWh.
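The energy arithmetic can be checked directly from the quoted figures; a flat 1,600 W draw over 110 days is an assumption here, which is why the implied total lands slightly below the reported 4,305 kWh:

```python
# Back-of-the-envelope check on the energy figures quoted above.
WATTS = 1600
HOURS = 110 * 24

implied_kwh = WATTS * HOURS / 1000        # ~4,224 kWh at a flat 1,600 W
reported_kwh = 4305                       # total given in the article
per_trillion = reported_kwh / 314         # ~13.7 kWh per trillion digits

cluster_kwh = 33000                       # estimate for the prior record
cluster_per_trillion = cluster_kwh / 300  # ~110 kWh per trillion digits

print(f"implied energy    : {implied_kwh:,.0f} kWh (reported: {reported_kwh:,} kWh)")
print(f"this run          : {per_trillion:.1f} kWh/trillion digits")
print(f"prior cluster est.: {cluster_per_trillion:.0f} kWh/trillion digits")
print(f"efficiency ratio  : {cluster_per_trillion / per_trillion:.0f}x")
```

By that estimate, the single-server run was roughly eight times more energy-efficient per digit than the cluster-based record.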
The result suggests that for some workloads, carefully tuned servers and workstations can outperform cloud infrastructure in terms of efficiency.
This assessment, however, applies narrowly to this class of computing and does not automatically extend to all scientific or commercial use cases.