- Check Point discovered three vulnerabilities in the Claude Code AI coding assistant
- Flaws allowed the theft of RCE and API keys
- Issues exploited via malicious repositories; all corrected before disclosure
If you are considering deeply integrating AI tools into your workflows, be very careful, as some popular AI models have serious vulnerabilities that can turn a trusted digital assistant into a malicious insider.
Researchers at Check Point (CPR) have detailed three vulnerabilities in Claude Code that can be used to remotely execute malicious code (RCE) or steal sensitive data such as API credentials from unsuspecting victims.
Among the three flaws, two have been labeled: CVE-2025-59536 (8.7/10) and CVE-2026-21852 (5.3/10). The third that has not yet received a CVE is a code injection vulnerability.
Reassessing traditional security assumptions
Claude Code is an advanced AI-powered coding assistant that allows developers to work with AI directly in their coding environment (like their terminal or IDE). The assistant can do all kinds of things, including running tasks on entire code bases, all based on natural language instructions.
CPR states that an attacker could create a malicious repository including specially crafted project-level configuration files and share it with a developer (for example, via a phishing email or fake work assignment).
If the developer clones the repository to their local machine and opens the project directory in Claude Code, the tool will automatically load it, allowing the attacker to abuse built-in mechanisms and trigger hidden shell commands. As a result, user consent prompts are ignored and external tools and services are initialized before receiving explicit approval.
Simply put, the attacker can take advantage of remote code execution capabilities or exfiltrate Anthropic’s API keys before the user confirms their trust in the project.
“AI-based coding tools are quickly becoming part of enterprise development workflows. Their productivity benefits are significant, as is the need to re-evaluate traditional security assumptions,” CPR said.
“Configuration files are no longer passive settings. They can influence execution, networking, and permissions. As AI integration deepens, security controls must evolve to match new trust limits.”
Fortunately, the CPR says all issues were resolved before public disclosure.
The best antivirus for every budget
Follow TechRadar on Google News And add us as your favorite source to get our news, reviews and expert opinions in your feeds. Make sure to click the Follow button!
And of course you can too follow TechRadar on TikTok for news, reviews, unboxings in video form and receive regular updates from us on WhatsApp Also.




