- UK NCSC warns rapid injection attacks may never be fully mitigated due to LLM design
- Unlike SQL injection, LLMs lack separation between statements and data, making them inherently vulnerable.
- Developers are encouraged to treat LLMs as “confused adjuncts” and design systems that limit compromised outcomes.
Rapid injection attacks—attempts to manipulate a large language model (LLM) by embedding hidden or malicious instructions in user-provided content—may never be properly mitigated.
That’s according to David C, technical director of the UK’s National Cyber Security Center (NCSC) for platform research, who published the assessment in a blog evaluating the technique. In the article he claims that many compare rapid injection to SQL injection, which is inaccurate, since the former is fundamentally different and arguably more dangerous.
The main difference between the two is that LLMs do not impose any real separation between instructions and data.
Inherently confused MPs
“Although initially reported as command execution, the underlying problem turned out to be more fundamental than typical client/server vulnerabilities,” he wrote. “Current large language models (LLMs) simply do not impose a safe boundary between instructions and data within a prompt.”
Rapid injection attacks are regularly reported in systems that use generative AI (genAI) and are OWASP’s #1 attack to consider when “developing and securing large-scale generative AI applications and language models.”
In typical vulnerabilities, data and instructions are processed differently, but LLMs operate solely on predicting the next token, meaning they cannot inherently distinguish user-supplied data from operational instructions. “There is a good chance that rapid injection will never be properly mitigated in the same way,” he added.
The NCSC official also claims that the industry is repeating the same mistakes made in the early 2000s, when SQL injection was poorly understood and therefore widely exploited.
But SQL injection eventually became better understood and new protections became standard. For LLMs, developers should treat them as “intrinsically confoundable adjuncts” and thus design systems that limit the consequences of compromised results.
If an application cannot tolerate residual risk, he cautions, it may simply not be an appropriate use case for an LLM.
The best antivirus for every budget
Follow TechRadar on Google News And add us as your favorite source to get our news, reviews and expert opinions in your feeds. Make sure to click the Follow button!
And of course you can too follow TechRadar on TikTok for news, reviews, unboxings in video form and receive regular updates from us on WhatsApp Also.




