- Tenable claims to have found seven rapid injection vulnerabilities in ChatGPT-4o, dubbed the “HackedGPT” attack chain.
- Vulnerabilities include hidden commands, memory persistence, and security bypasses via trusted wrappers
- OpenAI fixed some issues in GPT-5; others remain, prompting calls for stronger defense
According to security researchers, ChatGPT has numerous security issues that could allow malicious actors to insert hidden commands, steal sensitive data, and spread misinformation into the AI tool.
Recently, security experts from Tenable tested OpenAI’s ChatGPT-4o and discovered seven vulnerabilities that they collectively named HackedGPT. These include:
- Indirect prompt injection via trusted sites (hiding commands inside public sites that GPT can unknowingly track when playing content)
- 0-click indirect prompt injection in search context (GPT searches the web and finds a page with hidden malicious code. Asking questions may unknowingly force GPT to follow instructions)
- Fast 1-Click Injection (a variation of phishing in which a user clicks on a link with hidden GPT commands)
- Bypassing the security mechanism (wrapping malicious links in trusted wrappers, tricking GPT into displaying the links to the user)
- Chat Injection: (Attackers can use the SearchGPT system to insert hidden instructions that ChatGPT later reads, effectively self-injecting).
- Hiding malicious content (malicious instructions can be hidden in markdown code or text)
- Persistent memory injection (malicious instructions can be placed in recorded chats, causing the model to repeat commands and continually leak data).
Calls for stronger defenses
OpenAI, the company behind ChatGPT, has fixed some, but not all, of the flaws in its GPT-5 model, potentially putting millions of people at risk.
Security researchers have been warning about rapid injection attacks for some time now.
Google’s Gemini is apparently susceptible to a similar issue, due to its integration with Gmail, as users can receive emails with hidden prompts (typed with a white font on a white background, for example) and if the user asks the tool something regarding that email, it can read and act based on the hidden prompt.
If in certain cases, the developers of the tool can put in place safeguards, most of the time, it is up to the user to be vigilant and not fall for these tricks.
“HackedGPT reveals a fundamental weakness in how large language models judge what information to trust,” said Moshe Bernstein, senior research engineer at Tenable.
“Individually, these flaws seem small, but together they form a complete attack chain, from injection and evasion to data theft and persistence. This shows that AI systems are not just potential targets; they can be turned into attack tools that silently collect information during everyday chats or browsing.”
Tenable said OpenAI had fixed “some of the vulnerabilities identified,” adding that “several” remained active in ChatGPT-5, without specifying which ones. As a result, the company advises AI vendors to strengthen defenses against rapid injection by verifying that security mechanisms are working as intended.
The best antivirus for every budget
Follow TechRadar on Google News And add us as your favorite source to get our news, reviews and expert opinions in your feeds. Make sure to click the Follow button!
And of course you can too follow TechRadar on TikTok for news, reviews, unboxings in video form and receive regular updates from us on WhatsApp Also.




