- IBM triples system capacity to support heavier AI and supercomputing data demands
- New flash enclosure enables larger caches designed for dense multitenant cluster workloads
- Expanded hardware targets operators scaling parallel processing pipelines across large datasets
IBM has expanded the Storage Scale System 6000 to support a full-rack capacity of up to 47PB, following the introduction of new All-Flash Expansion Enclosures fitted with 122TB QLC flash drives.
This update represents a threefold leap from previous limits and is aimed at environments that handle high-volume data operations.
The system is positioned for organisations that run supercomputing projects, large AI pipelines, and cloud computing service delivery.
Hardware built for heavier throughput
The company claims the new design can sustain workloads that rely heavily on steady throughput and high availability.
It also states that the larger platform simplifies scaling for operators that maintain large clusters.
The All-Flash Expansion Enclosure brings support for bigger caches that enable multitenancy at multiple levels within a cluster.
IBM states operators can run several data-intensive workloads without creating bottlenecks across the file system.
The enclosure can house up to four Nvidia BlueField-3 DPUs and twenty-six dual-port QLC flash drives inside a 2U unit, allowing the system to meet requirements tied to AI training, simulation workloads, and broad parallel processing.
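A quick back-of-the-envelope sketch of what those drive counts imply. The arithmetic below is our own, derived from the quoted figures, and assumes the 122TB and 47PB numbers both refer to raw capacity; it is not an IBM specification:

```python
# Raw-capacity sanity check from the figures quoted in the article.
DRIVE_TB = 122              # quoted QLC flash drive capacity
DRIVES_PER_ENCLOSURE = 26   # dual-port drives per 2U enclosure

# Raw capacity of one 2U enclosure.
enclosure_tb = DRIVE_TB * DRIVES_PER_ENCLOSURE
print(f"Per-enclosure raw capacity: {enclosure_tb} TB (~{enclosure_tb / 1000:.2f} PB)")

# Roughly how many enclosures the 47PB full-rack figure implies.
enclosures_for_rack = 47_000 / enclosure_tb
print(f"Enclosures implied by 47 PB: ~{enclosures_for_rack:.1f}")
```

At roughly 3.17PB of raw flash per 2U enclosure, the 47PB full-rack figure works out to around fifteen enclosures, which fits comfortably in a standard rack.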
Support for Nvidia’s Spectrum-X Ethernet switches is also included, allowing checkpoint times in model training runs to be shortened.
IBM positions these hardware links as essential in environments where fast data movement is needed to keep busy GPU fleets and complex scheduling fed.
IBM has updated its Storage Scale System software to align with the rise in total storage.
The 7.0.0 release adds support for the higher-capacity modules and includes broader erasure coding with a 16+2 configuration that is intended to improve efficiency.
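The efficiency claim for 16+2 is straightforward to quantify: in an N+P erasure-coding layout, each stripe carries N data strips and P parity strips, so the usable fraction of raw capacity is N/(N+P). A minimal sketch (the 8+2 comparison is our own illustration, not an IBM configuration):

```python
# Usable-capacity fraction for an N+P erasure-coding layout.
# 16+2 means 16 data strips plus 2 parity strips per stripe, so the
# system tolerates two concurrent failures per stripe at low overhead.

def usable_fraction(data_strips: int, parity_strips: int) -> float:
    """Fraction of raw capacity available for data in an N+P scheme."""
    return data_strips / (data_strips + parity_strips)

print(f"16+2: {usable_fraction(16, 2):.1%} usable")  # ~88.9%
print(f" 8+2: {usable_fraction(8, 2):.1%} usable")   # 80.0%, for comparison
```

Wider stripes amortise the same two-failure protection over more data strips, which is why the 16+2 configuration is pitched as an efficiency improvement.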
Write performance has also been raised to match the improvements in throughput and IOPS; earlier figures for the four-rack configuration placed the system at around 2.2PB of capacity, up to 13 million IOPS, and read speeds of up to 330GB per second.
The 2025 update lifts the IOPS ceiling to 28 million and raises read throughput to 340GB per second.
These changes aim to ensure that the expanded hardware doesn’t introduce new delays when workloads scale.
The enclosure acts as a high-density option for operators that rely on an SSD layer as their primary storage base while continuing to use cloud storage for distribution beyond the core data centre.
IBM states that the increased volume allows its global-caching layer to keep larger active datasets closer to GPUs, removing separate data islands and keeping pipelines steady.
The architecture is built to serve clusters that need predictable movement of data between nodes, particularly in situations where CPU utilisation rises during heavy compute windows.
The company’s messaging frames the update as a triple-tier improvement combining higher density, better data handling, and wider workload support.
That said, the long-term impact will depend on how consistently the system performs at full capacity once deployed at scale.
Via HPCWire


