- IBM Triples System Capacity to Support More Data-Intensive AI and Supercomputing Demands
- New Flash Enclosure Enables Larger Caches Designed for Dense Multitenant Cluster Workloads
- Extended Hardware Targets Operators Scaling Parallel Processing Pipelines on Massive Datasets
IBM has expanded the Storage Scale System 6000 to support a full-rack capacity of up to 47 PB, following the introduction of new all-flash expansion enclosures equipped with 122 TB QLC flash drives.
This update represents a tripling of previous limits and is intended for environments that handle high-volume data operations.
The system is aimed at organizations running high-performance computing tasks, large AI pipelines, and cloud computing services.
Hardware designed for higher throughput
The company says the new design can support workloads that rely heavily on consistent throughput and high availability.
It also says the larger platform simplifies scaling for operators running large clusters.
The all-flash expansion enclosure supports larger caches that enable multi-tiering within a cluster.
IBM says operators can run multiple data-intensive workloads without creating file system bottlenecks.
The enclosure can accommodate up to four Nvidia BlueField-3 DPUs and twenty-six dual-port QLC flash drives in a 2U unit, enabling the system to meet requirements for AI training, simulation workloads, and extensive parallel processing.
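Some back-of-envelope arithmetic from the figures above shows how the new enclosure reaches rack-scale density. The drive and enclosure numbers come from the announcement; the enclosures-per-rack calculation is an illustrative sketch, not an IBM-published configuration:

```python
# Raw capacity of the new 2U all-flash enclosure, from announced figures:
# 26 dual-port QLC drives at 122 TB each.
DRIVES_PER_ENCLOSURE = 26
DRIVE_TB = 122

raw_tb_per_enclosure = DRIVES_PER_ENCLOSURE * DRIVE_TB  # 3,172 TB (~3.17 PB) raw per 2U

# Illustrative only: how many such enclosures would it take to reach the
# quoted 47 PB full-rack figure, before any RAID/erasure-coding overhead?
TARGET_PB = 47
enclosures_needed = TARGET_PB * 1000 / raw_tb_per_enclosure  # ~14.8 enclosures

print(f"{raw_tb_per_enclosure} TB raw per 2U enclosure")
print(f"~{enclosures_needed:.1f} enclosures to reach {TARGET_PB} PB raw")
```

Roughly fifteen 2U enclosures (about 30U of rack space, leaving room for controllers and switching) would supply 47 PB of raw flash, which is consistent with the full-rack claim.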
Support for Nvidia’s Spectrum-X Ethernet switches is also included, helping to shorten checkpoint times in model training processes.
IBM considers these hardware links essential in environments where rapid data movement is required to maintain active GPU fleets and complex scheduling.
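The checkpointing point can be made concrete with a simple model: while a synchronous checkpoint is being written, the GPUs sit idle, so stall time is checkpoint size divided by sustained write bandwidth. The sizes and bandwidths below are hypothetical assumptions for illustration, not figures from IBM or Nvidia:

```python
# Sketch of why checkpoint bandwidth matters for keeping GPU fleets active.
# All numbers below are illustrative assumptions, not published figures.

def checkpoint_stall_seconds(checkpoint_gb: float, write_gb_per_s: float) -> float:
    """Time GPUs sit idle during a synchronous checkpoint write."""
    return checkpoint_gb / write_gb_per_s

# A hypothetical 2 TB model checkpoint at two hypothetical write rates:
slow = checkpoint_stall_seconds(2000, 100)  # 20 s of idle GPU time per checkpoint
fast = checkpoint_stall_seconds(2000, 200)  # 10 s of idle GPU time per checkpoint

print(f"2 TB checkpoint: {slow:.0f}s at 100 GB/s vs {fast:.0f}s at 200 GB/s")
```

Doubling effective write bandwidth halves the stall, which is why faster storage-to-network paths translate directly into shorter checkpoint windows on busy training clusters.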
IBM has updated its Storage Scale System software to align with the increase in total storage.
Version 7.0.0 adds support for higher capacity modules and includes wider erasure coding with a 16+2 configuration intended to improve efficiency.
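The efficiency gain from a wider erasure-coding stripe is straightforward: usable capacity is the ratio of data shards to total shards. The 16+2 layout is from the announcement; the narrower 8+2 comparison is an illustrative assumption:

```python
# Storage efficiency of an erasure-coded stripe: usable fraction is
# data shards / (data shards + parity shards).

def usable_fraction(data_shards: int, parity_shards: int) -> float:
    return data_shards / (data_shards + parity_shards)

wide = usable_fraction(16, 2)   # the 16+2 layout in version 7.0.0: ~88.9% usable
narrow = usable_fraction(8, 2)  # a hypothetical narrower 8+2 layout: 80% usable

print(f"16+2: {wide:.1%} usable, tolerates 2 shard failures per stripe")
print(f"8+2:  {narrow:.1%} usable, tolerates 2 shard failures per stripe")
```

Both layouts survive two shard failures per stripe, so the wider 16+2 stripe buys roughly nine extra points of usable capacity for the same fault tolerance, at the cost of spreading each stripe across more drives.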
Write performance has also been raised alongside throughput and IOPS. Earlier figures for the four-rack configuration put the system at around 2.2 PB of capacity, up to 13 million IOPS, and read speeds of up to 330 GB per second.
The 2025 update raises the IOPS cap to 28 million and increases the read throughput to 340 GB per second.
These adjustments are intended to ensure that expanded hardware does not introduce new delays when workloads increase.
The enclosure provides a high-density option for operators that rely on an SSD tier as their primary storage base while still using cloud storage for distribution beyond the primary data center.
IBM says the increased volume allows its global caching layer to keep larger active data sets closer to the GPUs, removing separate islands of data and maintaining pipeline stability.
The architecture is designed to serve clusters that require predictable movement of information between nodes, particularly in situations where CPU usage increases during busy computational windows.
The company’s messaging presents the update as a three-tier improvement combining higher density, better data management and broader workload support.
That said, the long-term impact will depend on how consistently the system operates at full capacity once deployed at scale.
Via HPCWire