- An experimental AI agent unexpectedly attempted to mine cryptocurrency during training.
- The AI was only discovered after triggering security alerts on its servers.
- Researchers say this behavior highlights new security challenges as AI agents gain more autonomy.
AI models can surprise developers; that’s part of the problem. But a group of researchers got a disconcerting surprise when, during training, an experimental AI agent attempted to redirect computing resources toward mining cryptocurrency and smuggling the proceeds to an external server, despite never being asked to do anything of the sort.
Researchers working with Alibaba explained in a new paper that the model, called Rome, was designed to tackle complex coding challenges by interacting directly with software tools. It can issue terminal commands and navigate digital environments like a human operator. But security alerts from Alibaba Cloud infrastructure tipped the team off to what looked like a cybersecurity breach. It turned out the activity came from the AI agent itself.
Rome was trained using reinforcement learning, which “rewards” an AI agent for actions that bring it closer to its goals and discourages actions that lead to failure. Reinforcement learning often produces creative solutions that seem strange to human observers.
Somehow, the AI model was generating commands that seemed unrelated to the programming tasks assigned to it. Instead, the agent attempted to redirect graphics processing unit resources to mining cryptocurrency. GPUs are well suited to this task because they excel at parallel computing. The same hardware that powers AI training can also be used to mine digital currencies.
Rome had apparently discovered that the resources available in its environment could serve this purpose. Left unmonitored, the AI wandered into crypto mining. But the experiment took an even more bizarre turn when investigators noticed that the AI agent had created a reverse SSH tunnel to an external server, essentially a secret passage that bypasses typical firewall protections. It’s a technique often used by system administrators to manage remote machines, and also in certain types of cyberattacks.
The model was never tasked with making such a connection. The researchers say this behavior arose spontaneously. The agent was simply experimenting with the abilities it had.
Trickster AI
A typical AI agent can collect information from multiple sources, analyze it, and generate reports without constant human supervision. Developers hope that these systems will eventually be widely used for research, programming, or data analysis. But the same abilities that make agents powerful also make them unpredictable. This is why people are so interested in what OpenClaw can do or what is published on Moltbook.
When a system can freely explore a computing environment, it can discover actions that technically achieve its goals but do not match the intentions of its creators. Rome isn’t sentient and can’t “try” to break the rules in the human sense, but that’s what the model’s behavior looked like.
Once the unusual activity was identified, the research team introduced additional safeguards to prevent it from recurring, such as tighter restrictions on network connections and stricter limits on how the agent can access hardware resources. They also refined the training environment so that the agent’s exploration stays focused on relevant programming activities rather than opportunities like cryptocurrency mining.
And while course corrections are common in AI development, this incident illustrates both the potential and the danger of AI agents. It’s a quirky anecdote, but it touches on a serious topic in AI research. As systems become more autonomous, they interact with real-world infrastructure in ways that mimic human behavior, leading to new security issues.
Even when the consequences are minor, unexpected behavior can reveal significant vulnerabilities. In a larger or more sensitive environment, what Rome did could have been dangerous. As AI agents are deployed more widely than ever, they need better security systems; otherwise it won’t just be a secret crypto mine that slips under our radar.