Web3 has a memory problem – and we finally have a fix

Web3 has a memory problem. Not in the sense that we have forgotten something, but in a fundamental architectural sense: it lacks a real memory layer.

Blockchains today are not entirely foreign to traditional computer architecture, but one fundamental piece of that inheritance is still missing: a memory layer designed for decentralization that can support the next iteration of the internet.

Muriel Médard is a speaker at Consensus 2025, May 14–16.

After the Second World War, John von Neumann laid out the architecture of the modern computer. Every computer needs input and output, a processor for control and arithmetic, and memory to store the latest version of the data being worked on, along with a “bus” to retrieve and update that data in memory. With its memory commonly known as RAM, this architecture has been the foundation of computing for decades.

At its core, web3 is a decentralized computer – a “world computer.” At the upper layers, it is quite recognizable: operating systems (EVM, SVM) running on thousands of decentralized nodes, powering decentralized applications and protocols.

But dig deeper and something is missing. The memory layer essential for storing, accessing and updating short- and long-term data bears little resemblance to the memory unit von Neumann envisioned.

Instead, it is a mashup of different best-effort approaches to that goal, and the results are generally messy, inefficient and hard to navigate.

Here is the problem: if we are going to build a world computer that differs fundamentally from the von Neumann model, there had better be a very good reason for it. Right now, web3’s memory layer is not just different – it is convoluted and inefficient. Transactions are slow. Storage is slow and expensive. Scaling to mass adoption with the current approach is nearly impossible. And that is not what decentralization was supposed to be about.

But there is another way.

Many people in this space are doing their best to work around this limitation, and we are now at a point where the current workarounds simply cannot keep up. This is where algebraic coding – which uses equations to represent data for efficiency, resilience and flexibility – comes in.

The core question is this: how do we implement decentralized coding for web3?

A new memory infrastructure

That is why I made the jump from academia, where I held the role of NEC Professor of Software Science and Engineering at MIT, to dedicate myself, along with a team of experts, to advancing high-performance memory for web3.

I saw something bigger: the potential to redefine how we think about computing in a decentralized world.

My team at Optimum is building decentralized memory that works like a dedicated computer. Our approach is powered by Random Linear Network Coding (RLNC), a technology developed in my MIT lab over nearly two decades. It is a proven data coding method that maximizes throughput and resilience in high-reliability networks, from industrial systems to the internet.

Data coding is the process of converting information from one format to another for efficient storage, transmission or processing. Data coding has been around for decades, and many iterations of it are used in networks today. RLNC is a modern approach to data coding built specifically for decentralized computing. This scheme transforms data into packets for transmission across a network of nodes, ensuring high speed and efficiency.
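To make the idea concrete, here is a minimal, illustrative sketch of random linear coding – not Optimum’s implementation. It works over the prime field GF(257) for readability (real systems typically use GF(2^8)): each coded packet is a random linear combination of the originals, and any enough independent coded packets let a receiver recover the data by Gaussian elimination.

```python
import random

P = 257  # small prime field for illustration; real RLNC typically uses GF(2^8)

def encode(packets, num_coded, rng):
    """Produce coded packets: (random coefficient vector, linear combination)."""
    coded = []
    for _ in range(num_coded):
        coeffs = [rng.randrange(P) for _ in packets]
        payload = [sum(c * pkt[i] for c, pkt in zip(coeffs, packets)) % P
                   for i in range(len(packets[0]))]
        coded.append((coeffs, payload))
    return coded

def decode(coded, k):
    """Recover the k original packets by Gaussian elimination over GF(P)."""
    rows = [list(c) + list(p) for c, p in coded]
    for col in range(k):
        piv = next(r for r in range(col, len(rows)) if rows[r][col])
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], P - 2, P)  # modular inverse (Fermat's little theorem)
        rows[col] = [x * inv % P for x in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(x - f * y) % P for x, y in zip(rows[r], rows[col])]
    return [rows[i][k:] for i in range(k)]

rng = random.Random(42)
originals = [[10, 20, 30, 40], [5, 6, 7, 8], [100, 200, 50, 25]]
coded = encode(originals, num_coded=5, rng=rng)  # 5 coded packets for 3 originals
recovered = decode(coded, k=3)
```

Note that the receiver never needs any particular packet: with random coefficients from a large enough field, almost any three of the five coded packets suffice, which is what gives the scheme its resilience.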

With multiple engineering awards from the world’s top institutions, more than 80 patents and numerous real-world deployments, RLNC is no longer just a theory. It has earned significant recognition, including the 2009 IEEE Communications Society and Information Theory Society Joint Paper Award for the work “A Random Linear Network Coding Approach to Multicast.” RLNC’s impact was further recognized with the 2022 IEEE Koji Kobayashi Computers and Communications Award.

RLNC is now ready for decentralized systems, enabling faster data propagation, efficient storage and real-time access – making it a key solution to web3’s scalability and efficiency challenges.

Why this matters

Take a step back. Why does all this matter? Because we need memory for the world computer that is not only decentralized but also efficient, scalable and robust.

Currently, blockchains rely on best-effort, ad hoc solutions that only partially deliver what high-performance computer memory provides. What is missing is a unified memory layer that encompasses both the memory bus for data propagation and the RAM for data storage and access.

The bus part of the computer should not become the bottleneck, as it does now. Let me explain.

“Gossip” is the common method of data propagation in blockchain networks. It is a peer-to-peer communication protocol in which nodes exchange information with random peers to spread data across the network. In its current implementation, it struggles at scale.

Imagine you need 10 pieces of information from neighbors who repeat what they have heard. When you first talk to them, most of what you hear is new. But once you have collected nine of the 10 pieces, the chance of hearing something new from a neighbor drops sharply, making the last piece the hardest to get: the odds are 90% that the next thing you hear is something you already know.

This is how blockchain gossip works today – efficient at first, but redundant and slow when you are trying to finish sharing information. You would have to be extraordinarily lucky to receive something new every time.
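This redundancy is the classic coupon-collector effect, and a quick simulation makes it tangible. The sketch below (hypothetical parameters, not a model of any specific gossip protocol) counts how many uniformly random messages it takes, on average, to hear all 10 pieces of information:

```python
import random

def avg_messages_to_collect(n_pieces, trials=2000, seed=1):
    """Average number of uniformly random 'gossip messages' needed
    to hear all n_pieces distinct pieces at least once."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        heard, count = set(), 0
        while len(heard) < n_pieces:
            heard.add(rng.randrange(n_pieces))  # each message carries a random piece
            count += 1
        total += count
    return total / trials

avg = avg_messages_to_collect(10)
# Coupon-collector theory: 10 * (1 + 1/2 + ... + 1/10) ~ 29.3 messages
# to gather 10 pieces -- nearly three times the bare minimum of 10.
```

Almost two-thirds of the traffic is redundant, and the overhead grows with network size – which is exactly the inefficiency described above.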

With RLNC, we get around this core scalability problem in current gossip. RLNC works as if you were that extraordinarily lucky person: every time you hear something, it is information that is new to you. That means much higher throughput and much lower latency. RLNC-powered gossip is our first product, which validators can implement via a simple API call to optimize data propagation for their nodes.
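The reason every coded packet is (almost always) new is linear-algebraic: a random linear combination over a large field is very unlikely to fall inside the span of what you already hold. The sketch below (illustrative only, small field GF(257)) counts how many random coded packets a node needs before it can decode all 10 pieces – with RLNC it is essentially exactly 10, versus roughly 29 for plain gossip:

```python
import random

P = 257  # small prime field for illustration; real RLNC uses e.g. GF(2^8)
K = 10   # ten original pieces of information, as in the neighbor analogy

def rank_gf(rows, p=P):
    """Rank of a matrix over GF(p) via Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)  # modular inverse
        rows[rank] = [x * inv % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(x - f * y) % p for x, y in zip(rows[i], rows[rank])]
        rank += 1
    return rank

rng = random.Random(7)
received, total = [], 0
while len(received) < K:                        # until the node can decode everything
    pkt = [rng.randrange(P) for _ in range(K)]  # coefficients of a random coded packet
    total += 1
    if rank_gf(received + [pkt]) > len(received):  # innovative (rank-increasing)?
        received.append(pkt)
```

Because the chance of a redundant packet is about 1/257 per reception here (and far smaller over GF(2^8)), `total` lands at or barely above the minimum of 10.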

Now let’s look at the memory part. It helps to think of memory as dynamic storage, like RAM in a computer – or, for that matter, our closet. Decentralized RAM should behave like a closet: structured, reliable and consistent. A piece of data is either there or it is not – no half-bits, no missing sleeves. That is atomicity. Items stay in the order they were placed – you might see an older version, but never a wrong one. That is consistency. And, unless deliberately moved, everything stays put; data does not disappear. That is durability.

Instead of the closet, what do we have? Mempools are not something we keep in computers, so why do we have them in web3? The main reason is the lack of a proper memory layer. If data management in blockchains is like managing clothes in our closet, a mempool is a pile of laundry on the floor: you are never quite sure what is in there, and you have to rummage through it.

Current transaction-processing delays can be extremely high on any single chain. Take Ethereum: it takes two epochs, or 12.8 minutes, to finalize any single transaction. Without decentralized RAM, web3 relies on mempools, where transactions sit until they are processed, resulting in delays, congestion and unpredictability.
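The 12.8-minute figure follows directly from Ethereum mainnet’s public consensus parameters (12-second slots, 32 slots per epoch, finality after two epochs):

```python
# Ethereum mainnet consensus parameters (public spec values)
SLOT_SECONDS = 12        # one slot every 12 seconds
SLOTS_PER_EPOCH = 32     # 32 slots per epoch
EPOCHS_TO_FINALITY = 2   # a checkpoint finalizes after two epochs

finality_seconds = SLOT_SECONDS * SLOTS_PER_EPOCH * EPOCHS_TO_FINALITY  # 768 s
finality_minutes = finality_seconds / 60                                # 12.8 min
```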

Full nodes store everything, bloating the system and making retrieval complex and costly. In a computer, RAM holds what is currently needed, while less-used data moves to cold storage, perhaps in the cloud or on disk. Full nodes are like a closet stuffed with every piece of clothing you have ever worn, from your baby clothes to today.

That is not something we do on our computers, but it exists in web3 because storage and read/write access are not optimized. With RLNC, we are creating decentralized RAM (deRAM) for timely, up-to-date state in an economical, resilient and scalable way.

deRAM and RLNC-powered data propagation can address web3’s biggest bottlenecks by making memory faster, more efficient and more scalable. They optimize data propagation, reduce storage bloat and enable real-time access without compromising decentralization. It has long been the missing piece of the world computer – but not for much longer.
