Prompt injection attacks may ‘never be properly mitigated’, UK NCSC warns


  • UK NCSC warns prompt injection attacks may never be fully mitigated due to how LLMs are designed.
  • Unlike SQL queries, LLM prompts have no structural separation between instructions and data, leaving models inherently vulnerable.
  • Developers are encouraged to treat LLMs as “confused deputies” and to design systems that limit the damage a compromised model can cause.

Prompt injection attacks, in which attackers embed hidden or malicious instructions in user-provided content to manipulate a large language model (LLM), may never be properly mitigated.

That’s according to David C, technical director for platform research at the UK’s National Cyber Security Centre (NCSC), who published the assessment in a blog post examining the technique. In it, he argues that the common comparison between prompt injection and SQL injection is misleading: prompt injection is fundamentally different and arguably more dangerous.
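The difference is easiest to see in code. The sketch below is illustrative only (the query, prompt wording, and variable names are hypothetical, not taken from the NCSC post): a parameterized SQL query keeps the statement and the user-supplied value structurally separate, while an LLM prompt offers no equivalent boundary between instructions and untrusted content.

    import sqlite3

    # Untrusted input; imagine it arrives from a web form or a scraped document.
    user_input = "ignore previous instructions'; DROP TABLE users; --"

    # SQL injection is mitigable: with a parameterized query the driver treats
    # the statement as code and the bound value purely as data, so the payload
    # is never interpreted as SQL.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))  # safe

    # Prompt injection has no such mechanism: the developer's instructions and
    # the untrusted content are concatenated into a single token stream, and
    # the model has no structural way to tell them apart.
    prompt = (
        "You are a helpful assistant. Summarise the document below.\n"
        "--- DOCUMENT ---\n"
        + user_input  # embedded instructions look just like document text
    )
    # A model given `prompt` may follow the attacker's embedded instruction;
    # there is no equivalent of the `?` placeholder for prompts.

There is no “parameter binding” for prompts, and that structural gap is what the NCSC assessment points to.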
