- OpenClaw exposures reveal thousands of high-risk systems accessible over the Internet
- AI agents are deployed with excessive permissions in critical environments
- Remote code execution vulnerabilities expose most observed OpenClaw deployments
Agent systems are rapidly moving from experimentation to everyday workflows, but recent findings suggest that security practices are not keeping pace.
According to SecurityScorecard, thousands of OpenClaw deployments are exposed directly to the Internet with minimal safeguards.
The team identified 40,214 OpenClaw instances exposed to the Internet, spanning 28,663 unique IP addresses hosting control panels that anyone could reach.
Exposed AI agents become hackers’ dream target
“The math is simple: When you give an AI agent full access to your computer, you give the same access to anyone who might compromise it,” the researchers said.
About 63% of observed deployments appear vulnerable to remote code execution, allowing attackers to take control of the host machine without user interaction.
Among the exposures, three high-severity CVEs affecting OpenClaw recurred, with CVSS scores ranging from 7.8 to 8.8.
Public exploit code is already available for all three vulnerabilities, meaning attackers do not need advanced skills to compromise exposed systems.
The study also found that 549 exposed instances correlate with previous breach activity and 1,493 are associated with known vulnerabilities that increase the risk to users.
Exposed deployments are heavily concentrated at major cloud and hosting providers, indicating reproducible and easily replicated insecure deployment models.
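Exposure on this scale often comes down to a single configuration choice: binding a control panel to all network interfaces instead of the loopback address. A minimal Python sketch of the difference, using a hypothetical stand-in panel rather than OpenClaw's actual server code:

```python
import http.server
import threading

def start_panel(bind_addr: str, port: int = 0) -> http.server.HTTPServer:
    """Start a placeholder 'control panel' bound to the given address.

    Binding to 0.0.0.0 listens on every interface, making the panel
    reachable from the Internet; 127.0.0.1 keeps it loopback-only.
    """
    server = http.server.HTTPServer(
        (bind_addr, port), http.server.SimpleHTTPRequestHandler
    )
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

exposed = start_panel("0.0.0.0")    # reachable from any interface -- risky
local = start_panel("127.0.0.1")    # loopback only -- unreachable externally

print(exposed.server_address[0], local.server_address[0])
```

If remote access is genuinely needed, the usual pattern is to keep the loopback bind and put a VPN or authenticated reverse proxy in front of it rather than exposing the panel directly.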
OpenClaw, formerly known as Moltbot and Clawdbot, bills itself as a personal AI agent that can schedule meetings, send emails, and manage tasks on behalf of users.
The problem lies not in the capabilities of AI but in the access and permissions granted to these systems without proper security controls.
“In practice, because it was written by AI, security was not a dominant feature in the development process,” said Jeremy Turner, vice president of Threat Intelligence at SecurityScorecard.
“For those who want to use more agent-like AI systems, you really need to think carefully about what integrations you support and what permissions you actually grant.”
Many users configure these bots with personal names and company names, revealing exactly who is using these AI tools and making them attractive targets for attackers.
Each time a user connects an AI agent to a platform, they assign it an identity with specific permissions.
This identity may be able to post content, access email, read files, or interact with other systems on behalf of the user.
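The safer counterpart to that broad identity is deny-by-default scoping, where an agent can perform only the actions it was explicitly granted. A hedged Python sketch; the `AgentIdentity` class and action names are illustrative, not an actual OpenClaw API:

```python
class AgentIdentity:
    """Hypothetical agent identity with an explicit permission allow-list."""

    def __init__(self, name: str, allowed_actions: set[str]):
        self.name = name
        self.allowed_actions = frozenset(allowed_actions)

    def authorize(self, action: str) -> bool:
        """Deny by default: only explicitly granted actions pass."""
        return action in self.allowed_actions

# Grant only what the workflow needs, not blanket account access.
scheduler = AgentIdentity("calendar-bot", {"calendar.read", "calendar.write"})

print(scheduler.authorize("calendar.write"))  # granted
print(scheduler.authorize("email.send"))      # never granted, so denied
print(scheduler.authorize("files.delete"))    # denied by default
```

The point is the default: anything not on the list is refused, so a compromised agent cannot reach into systems its workflow never required.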
“The risk is not that these systems think for themselves,” Turner said. “It’s that we give them access to everything.”
“It’s like giving your laptop to a stranger on the street and hoping nothing bad happens… All communications… on that device… become interfaces for untrusted third parties who can… take certain actions.”
A compromised agent could be asked to transfer funds, delete files, or send malicious messages without raising immediate alarms because the behavior appears legitimate.
Unfortunately, the report reveals a fundamental disconnect between AI adoption and security practices.
Users are asked to grant these agents extensive system access, which in many cases has already led to data exposure, unintended actions, and loss of control.
In some cases, OpenClaw takes steps beyond what users explicitly request, and Microsoft has since advised against running it on standard personal or corporate devices.
Chinese authorities have restricted its use in office environments due to its tendency to expose data and broader security risks.
Some OpenClaw vulnerabilities allow hackers to access sensitive data and have been used to distribute malware through GitHub repositories.
“Don’t blindly download one of these things and start using it on a system that has access to your entire personal life. Establish some separation and conduct your own experiments before you really trust the new technology to do what you want it to do,” Turner said.
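The “separation” Turner describes can be as simple as confining an agent’s file access to a single sandbox directory instead of a whole home directory. A minimal Python sketch; the sandbox path and helper function are hypothetical illustrations, not an OpenClaw feature:

```python
from pathlib import Path

# Everything the agent may touch lives under this one directory.
SANDBOX = Path("/tmp/agent-sandbox").resolve()

def safe_path(requested: str) -> Path:
    """Resolve a requested path and refuse anything outside the sandbox."""
    candidate = (SANDBOX / requested).resolve()
    if not candidate.is_relative_to(SANDBOX):  # blocks ../ traversal escapes
        raise PermissionError(f"{requested!r} escapes the sandbox")
    return candidate

print(safe_path("notes/todo.txt"))  # allowed: stays inside the sandbox
try:
    safe_path("../../etc/passwd")   # blocked: resolves outside the sandbox
except PermissionError as exc:
    print("blocked:", exc)
```

Filesystem scoping of this kind is only one layer; running the agent in a disposable virtual machine or container, as the report suggests, provides the same separation for the rest of the system.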