It started with a noisy office. The office was a wooden cabin inside a laboratory at Northumbria University, in northern England, where a young AI researcher was beginning his PhD. The year was 2015. The researcher was Ben Fielding, and he had built a large machine stuffed with early GPUs to develop AI. The machine was so noisy that it annoyed Fielding's lab mates. Fielding stashed the machine under his desk, but it was so large that he had to awkwardly angle his legs to the side.
Fielding had unorthodox ideas. He explored how AI "swarms" (clusters of many different models) could talk to each other and learn from each other, improving the collective whole. There was just one problem: he was handcuffed by the realities of that noisy machine under his desk. And he knew he was outgunned. "Google was doing this research too," said Fielding. "And they had thousands [of GPUs] in a data center. The stuff they were doing wasn't crazy. I knew the methods … I had plenty of proposals, but I couldn't make them work."
Ben Fielding, CEO of Gensyn, is a speaker at Consensus 2025 in Toronto.
Jeff Wilser is the host of The People's AI: The Decentralized AI Podcast and will host the AI Summit at Consensus 2025.
A decade ago, it dawned on Fielding: compute constraints would always be a problem. In 2015, he knew that if compute was a hard constraint in academia, it would absolutely be a hard constraint once AI went mainstream.
The solution?
Decentralization.
Fielding co-founded Gensyn (with Harry Grieve) in 2020, years before decentralized AI became fashionable. The project was initially known for building decentralized compute (I've spoken with Fielding for CoinDesk, and on panel after panel at conferences), but the vision is actually something broader: "the network for machine intelligence." They're building solutions up and down the technology stack.
And now, a decade after Fielding's noisy office bothered his lab mates, Gensyn's first tools are out in the wild. Gensyn recently released its "RL Swarm" protocol (a descendant of Fielding's doctoral work) and has just launched its testnet, which brings the blockchain into the fold.
In this conversation ahead of the AI Summit at Consensus in Toronto, Fielding gives a primer on AI swarms, explains how the blockchain fits into the puzzle, and shares why all innovators, not just the tech giants, "should have the right to build machine learning technologies."
This interview has been condensed and lightly edited for clarity.
Congratulations on the testnet launch. What's the gist of what it is?
Ben Fielding: It's the addition of the first MVP features of blockchain integration to what we've released so far.
What were those original, pre-blockchain features?
So we launched RL [reinforcement learning] Swarm a few weeks ago, which is reinforcement learning, post-training, as a peer network.
Here's the easiest way to think about it. When a pre-trained model goes through reasoning training, like DeepSeek-R1, it learns to critique its own thinking and recursively improve against the task. It can then improve its own answer.
We take that process slightly further and say, "It's great for models to critique their own thinking and recursively improve. What if they could talk to other models and critique each other's thinking?" If you get lots of models together in a group where they can all talk to each other, they can start learning how to send information to the other models … in order to improve the whole swarm itself.
Gotcha, which explains the name "swarm."
Right. It's this training method that lets many models combine, in parallel, to improve the output of a final meta-model you could create from those models. But at the same time, each individual model improves on its own. So if you showed up with a model on a MacBook, joined a swarm for an hour, then left, you'd have a local model improved by the knowledge in the swarm, and you'd also have improved all the other models in the swarm. It's a collaborative training process that any model can join and any model can run. So that's what RL Swarm is.
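To give a flavor of the dynamic Fielding describes, here is a minimal, hypothetical Python sketch (this is not Gensyn's actual algorithm): a "swarm" of toy models each takes a local improvement step, then nudges itself toward the best-performing peer, so both the individuals and the group improve.

```python
import random

def loss(x, target=10.0):
    # a stand-in objective each toy "model" is trying to minimize
    return (x - target) ** 2

def local_step(x, target=10.0, lr=0.1):
    # local self-improvement: one gradient-descent step
    grad = 2 * (x - target)
    return x - lr * grad

def swarm_round(params, target=10.0):
    # each model improves locally, then moves halfway toward the
    # best-performing peer (a crude stand-in for peer critique)
    params = [local_step(p, target) for p in params]
    best = min(params, key=lambda p: loss(p, target))
    return [p + 0.5 * (best - p) for p in params]

random.seed(0)
params = [random.uniform(-5, 5) for _ in range(4)]
start = sum(loss(p) for p in params) / len(params)
for _ in range(20):
    params = swarm_round(params)
end = sum(loss(p) for p in params) / len(params)
```

After the rounds, the average loss across the swarm drops well below its starting value, mirroring the claim that joining a swarm improves both your local model and everyone else's.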
Okay, that's what you released a few weeks ago. Now, where does the blockchain come in?
So, the blockchain comes into some of the lower-level primitives in the system.
Let's just pretend that someone doesn't understand the phrase "lower-level primitives." What do you mean by that?
Yeah, so I mean very close to the resource itself. If you think of the software stack, you have a GPU sitting in a data center. You have drivers on top of the GPU. You have operating systems, virtual machines. You have everything going up from there.
So a lower-level primitive is closer to the bottom of the technology stack. Am I understanding that correctly?
Yes, exactly. And RL Swarm is a demonstration of what's possible, fundamentally. It's just a somewhat hacked-together demo to show really interesting large-scale machine learning. But what Gensyn has been doing for four-plus years is building infrastructure. And so we're at this point now where the infrastructure is all at that v0.1 beta level. It's all done. It's ready to go. We need to figure out how to show the world what's possible, because this is a big shift in how people think about machine learning.
It sounds like you're doing much more than decentralized compute, even infrastructure?
We have three main components that sit under our infrastructure. Execution: we have consistent execution libraries. We have our own compiler. We have reproducible libraries for any hardware target.
The second piece is communication. So assuming you can run a model consistently on any compatible device in the world, can you get those devices to talk to each other? If everyone opts into the same standard, everyone can communicate, like TCP/IP on the internet, basically. So we build those libraries, and RL Swarm is an example of that communication.
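To make the TCP/IP analogy concrete, here is a hypothetical sketch of what "opting into the same standard" could look like (this is not Gensyn's actual wire format): a tiny versioned message schema that any two peers agree on, so heterogeneous devices can exchange model outputs.

```python
import json

def encode(sender: str, tensor: list) -> bytes:
    # serialize a model output under a shared, versioned schema
    return json.dumps({"v": 1, "from": sender, "tensor": tensor}).encode()

def decode(payload: bytes) -> dict:
    msg = json.loads(payload)
    if msg["v"] != 1:
        # reject peers speaking an incompatible protocol version
        raise ValueError("unsupported protocol version")
    return msg

# two hypothetical peers exchanging a tensor over "the wire"
wire = encode("macbook-1", [0.1, 0.2])
msg = decode(wire)
```

The point of the analogy is that, as with TCP/IP, the standard itself is the interoperability layer: any device that implements it can join the conversation.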
And then, finally, verification.
Ah, and I'm guessing this is where the blockchain comes in …
Imagine a scenario where every device in the world runs [models] consistently. They could connect the models together. But can they trust each other? If I connect my MacBook to yours, sure, they could run the same tasks. Sure, they could send tensors back and forth, but do they know that what they send to the other device is actually being executed on that device or not?
In today's world, you and I would probably sign a contract saying, yes, we agree that we'll make sure our devices do the right thing. In the machine world, that has to happen programmatically. So that's the final piece we're building: cryptographic proofs, probabilistic proofs, game-theoretic proofs to make that process fully programmatic.
That's where the blockchain comes in. It gives us all the benefits of the blockchain you can imagine, such as persistent identity, payments, consensus and so on. And so what we're doing with the testnet now is taking RL Swarm and those other infrastructure primitives and adding to them. Now, when you join a swarm, you have a persistent identity that lives on a ledger.
In the future, you'll be able to make payments, but right now you have this trust-and-consensus mechanism where we can resolve disputes. So it's kind of an MVP of the future Gensyn infrastructure, where we'll add components as we go.
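As a toy illustration of what programmatic trust can mean (this is not Gensyn's proof system, which per Fielding involves cryptographic, probabilistic and game-theoretic proofs), here is a sketch: a device publishes a hash commitment of its computation result under its persistent identity, and any verifier can later check a claimed result against that ledger entry.

```python
import hashlib
import json

def commit(result: dict) -> str:
    # deterministic commitment: hash of a canonical serialization
    blob = json.dumps(result, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

ledger = {}  # stand-in for an on-chain record: identity -> commitment

def publish(identity: str, result: dict) -> None:
    # a device records what it claims to have computed
    ledger[identity] = commit(result)

def verify(identity: str, claimed: dict) -> bool:
    # a verifier re-derives the commitment and compares
    return ledger.get(identity) == commit(claimed)

publish("node-a", {"task": 42, "output": [1, 2, 3]})
ok = verify("node-a", {"task": 42, "output": [1, 2, 3]})
bad = verify("node-a", {"task": 42, "output": [9]})
```

A matching claim verifies; a tampered one does not. Real systems must also prove the computation was performed, not merely committed to, which is where the probabilistic and game-theoretic machinery Fielding mentions comes in.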
Give us a teaser of what's in the pipeline?
When we reach mainnet, all the software and infrastructure will run live against the blockchain as a source of trust, payments, consensus, identity and so on. This is the first step toward that. It adds identity and says that when you join a swarm, you can register as the same person. Everyone knows who you are without having to check a server or a centralized website somewhere.
Now let's get wild and talk further into the future. What does this look like in a year, two years, five years? What's your north star?
Sure. The ultimate vision is to take all the resources underlying machine learning and make them instantly, programmatically accessible to everyone. Machine learning is heavily constrained by its core resources. That creates this huge moat for centralized AI companies, but it doesn't need to exist. It can be open source if we can build the right software. So our view is that Gensyn builds all the low-level infrastructure to make this as close to open source as possible. People should have the right to build machine learning technologies.