- Nvidia is pushing SSD manufacturers toward a 100 million IOPS storage target
- But the CEO of Silicon Motion says the industry lacks the memory technology to meet AI demands
- A new memory type may be necessary to unlock ultra-fast AI storage
As GPUs get faster and memory bandwidth reaches terabytes per second, storage has become the next major compute bottleneck.
Nvidia is pushing storage to keep pace with requests from AI models by setting an ambitious target for small-block random reads.
“Right now, they are targeting 100 million IOPS – which is huge,” Wallace C. Kuo, CEO of Silicon Motion, told Tom's Hardware.
The fastest PCIe 5.0 SSDs today top out at around 14.5 GB/s and 2 to 3 million IOPS in workloads involving 4K and 512-byte reads.
While larger blocks favor bandwidth, AI inference generally pulls small, scattered pieces of data. This makes 512B random reads more relevant and much harder to accelerate.
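To see why small-block random reads are the hard part, it helps to compare the bandwidth implied by an IOPS figure at a given block size. A minimal sketch using the numbers quoted in this article (the function name is illustrative, not from any vendor API):

```python
def implied_bandwidth_gbs(iops: float, block_bytes: int) -> float:
    """Data actually moved, in GB/s, for a given IOPS rate and block size."""
    return iops * block_bytes / 1e9

# Today's fastest PCIe 5.0 SSDs: ~3 million IOPS at 512 B blocks
print(implied_bandwidth_gbs(3e6, 512))    # ~1.5 GB/s of payload data
# Nvidia's reported target: 100 million IOPS at 512 B blocks
print(implied_bandwidth_gbs(100e6, 512))  # ~51.2 GB/s in 512 B chunks
```

Even at the 100 million IOPS target, 512 B reads move only about 51 GB/s of payload; the difficulty is not raw bandwidth but sustaining that many independent small operations per second.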
Kioxia is already preparing an “AI SSD” based on XL-Flash that should exceed 10 million IOPS. It could launch alongside Nvidia's next Vera Rubin platform next year. But scaling beyond that could require more than faster NAND or controller tweaks.
“I think they are looking for a media change,” said Kuo. “Optane was supposed to be the ideal solution, but it has disappeared now. Kioxia is trying to bring XL-NAND forward and improve its performance. SanDisk is trying to introduce a high-bandwidth flash, but honestly, I don't really believe in it.”
Power, cost, and latency all pose challenges. “The industry really needs something fundamentally new,” added Kuo. “Otherwise, it will be very difficult to reach 100 million IOPS while remaining cost-effective.”
Micron, SanDisk, and others are racing to invent new forms of non-volatile memory.
Whether one of them will arrive in time for the next wave of Nvidia hardware is the big unknown.