Researchers say ChatGPT has a host of worrying security flaws. Here’s what they found


  • Tenable claims to have found seven rapid injection vulnerabilities in ChatGPT-4o, dubbed the “HackedGPT” attack chain.
  • Vulnerabilities include hidden commands, memory persistence, and security bypasses via trusted wrappers
  • OpenAI fixed some issues in GPT-5; others remain, prompting calls for stronger defense

According to security researchers, ChatGPT has numerous security issues that could allow malicious actors to insert hidden commands, steal sensitive data, and spread misinformation into the AI ​​tool.

Recently, security experts from Tenable tested OpenAI’s ChatGPT-4o and discovered seven vulnerabilities that they collectively named HackedGPT. These include:

  • Indirect prompt injection via trusted sites (hiding commands inside public sites that GPT can unknowingly track when playing content)
  • 0-click indirect prompt injection in search context (GPT searches the web and finds a page with hidden malicious code. Asking questions may unknowingly force GPT to follow instructions)
  • Fast 1-Click Injection (a variation of phishing in which a user clicks on a link with hidden GPT commands)
  • Bypassing the security mechanism (wrapping malicious links in trusted wrappers, tricking GPT into displaying the links to the user)
  • Chat Injection: (Attackers can use the SearchGPT system to insert hidden instructions that ChatGPT later reads, effectively self-injecting).
  • Hiding malicious content (malicious instructions can be hidden in markdown code or text)
  • Persistent memory injection (malicious instructions can be placed in recorded chats, causing the model to repeat commands and continually leak data).

Calls for stronger defenses

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top