- John Carmack has shared an idea for using fiber optic cable rather than RAM
- It’s a vision of a future in which RAM modules are replaced for AI workloads.
- Although this is highly theoretical and still a long way off, there are other, shorter-term possibilities for reducing AI’s all-consuming appetite for RAM.
John Carmack has floated the idea of effectively using fiber optic cables as “storage” rather than conventional RAM modules, which is a particularly intriguing vision of the future given the current memory crisis and all the havoc it is wreaking.
Tom’s Hardware noticed the id Software co-founder’s post, in which Carmack observes: “Data rates of 256 Tb/s over a distance of 200 km have been demonstrated over single-mode optical fiber, which equates to 32 GB of data in flight, ‘stored’ in the fiber, with a bandwidth of 32 TB/s. Neural network inference and training can have deterministic weight reference patterns, so it is fun to consider a system without DRAM, with weights streamed into an L2 cache by a recirculating fiber loop.”
In other words, that length of fiber forms a loop through which the data that would normally sit in RAM is continuously streamed, keeping the AI processor constantly fed (this works because an AI model’s weights are accessed sequentially; it wouldn’t otherwise). Compared to traditional RAM, it could also be a more environmentally friendly and energy-efficient way to accomplish these tasks.
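The figures in Carmack’s post check out with some quick back-of-the-envelope math. Here’s a minimal sketch in Python, assuming a typical group index of around 1.47 for silica fiber (that value is our assumption, not something Carmack specifies):

```python
# Sanity check of the "fiber as storage" numbers: how much data is
# in flight on a 200 km single-mode fiber link running at 256 Tb/s?
C_VACUUM = 3.0e8          # speed of light in a vacuum, m/s
GROUP_INDEX = 1.47        # typical for silica fiber (assumed value)
FIBER_LENGTH_M = 200e3    # 200 km of fiber
BIT_RATE_BPS = 256e12     # 256 terabits per second

speed_in_fiber = C_VACUUM / GROUP_INDEX           # ~2.04e8 m/s
transit_time_s = FIBER_LENGTH_M / speed_in_fiber  # ~0.98 ms end to end
bytes_in_flight = BIT_RATE_BPS * transit_time_s / 8

print(f"Transit time: {transit_time_s * 1e3:.2f} ms")
print(f"Data in flight: {bytes_in_flight / 1e9:.1f} GB")  # ~31 GB, roughly the quoted 32 GB
```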
As Carmack points out, this is “the modern equivalent of the old mercury delay tube memories”, or delay line memory, where data was stored as waves traveling through a medium such as a tube of mercury or a spool of wire.
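For illustration, a delay line, whether a mercury tube, a spool of wire, or a loop of fiber, behaves like a fixed-length queue: only the word currently arriving at the read head is accessible, and everything else is perpetually in transit. Here’s a toy model of that behavior in Python (our own illustrative sketch, not anything Carmack describes):

```python
from collections import deque

class DelayLineMemory:
    """Toy delay-line storage: data circulates in a fixed-length loop,
    and only the word currently 'arriving' is readable each tick."""

    def __init__(self, capacity_words):
        # The loop starts full of zeros and its length never changes.
        self.loop = deque([0] * capacity_words)

    def tick(self, write_word=None):
        """Advance one word-time: the arriving word is returned, and a
        word is re-injected (recirculated, or overwritten by a write)."""
        arriving = self.loop.popleft()
        self.loop.append(arriving if write_word is None else write_word)
        return arriving

# Stream four "weights" into an 8-word loop, then read the whole loop
# back as it comes around: access is strictly sequential, as Carmack notes.
line = DelayLineMemory(8)
for w in [10, 20, 30, 40]:
    line.tick(write_word=w)
print([line.tick() for _ in range(8)])  # [0, 0, 0, 0, 10, 20, 30, 40]
```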
It’s not a workable idea right now, but a concept for the future, as mentioned. What Carmack argues is that it’s a potential path forward, one that perhaps has a “better growth trajectory” than what we’re currently looking at with traditional DRAM.
Analysis: flash forward
There are very obvious issues with RAM right now in terms of supply and demand, with the latter far outstripping the former thanks to the rise of AI and its enormous memory requirements. (That’s not just for servers in data centers responding to queries from popular AI models, but also for the video RAM in AI accelerator cards.)
So what Carmack envisions is a different way of running AI models, one that uses fiber loops instead. This could, in theory, free us from worrying about RAM costing a ridiculous amount of money (or a PC, or a graphics card, and so on down the list of the memory crisis’s price effects).
The catch is that there are many problems with such a fiber proposal, as Carmack acknowledges, including the sheer amount of fiber required and the difficulty of maintaining signal strength around the loop.
However, there are other possibilities along these lines, and other people have discussed similar concepts in recent years. Carmack mentions: “Much more practically, you should be able to package cheap flash memory to provide almost all the read bandwidth you need, provided it’s done page by page and routed well in advance.” This should be viable for serving inference today, if only flash and accelerator vendors can agree on a high-speed interface.
In other words, it’s an army of cheap flash memory modules working massively in parallel; as Carmack notes, the key would be agreeing on an interface that lets those chips talk directly to the AI accelerator.
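As a concrete illustration of what “page by page and routed well in advance” might look like in software, here’s a rough Python sketch of a prefetch loop that reads the next weight pages from flash while the accelerator chews on the current ones. The function names and page-by-page structure are our illustrative assumptions, not a real vendor interface:

```python
import threading
import queue

def stream_weights(read_page, num_pages, prefetch_depth=2):
    """Prefetch weight pages from (slow) flash ahead of compute.
    `read_page(i)` stands in for a hypothetical vendor flash-read API."""
    pages = queue.Queue(maxsize=prefetch_depth)

    def reader():
        # The access pattern is deterministic (pages are consumed in
        # order), so every read can be issued well in advance.
        for i in range(num_pages):
            pages.put(read_page(i))  # blocks while the buffer is full

    threading.Thread(target=reader, daemon=True).start()
    for _ in range(num_pages):
        yield pages.get()            # compute consumes pages in order

# Usage with a dummy "flash" backed by a list of byte blobs:
fake_flash = [bytes([i]) * 4 for i in range(5)]
for page in stream_weights(lambda i: fake_flash[i], num_pages=5):
    print(page)  # the accelerator's work on this page would happen here
```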
This is an interesting proposition in the short term, but one that depends on the relevant manufacturers (of AI GPUs and storage) getting their act together and building a new system in this vein.
The RAM crisis is expected to last through this year, probably through next year too, and could drag on even longer, bringing all kinds of pain for consumers. So exploring alternative approaches to memory for AI models could be a valuable move, one that helps ensure this RAM crisis is the last such episode we have to endure.
