- Microsoft researchers used a "judge" model to reward LLMs for producing harmful outputs
- Multiple iterations may further erode built-in safety guardrails
- The researchers frame this as a lifecycle problem, not a flaw in any single LLM
Microsoft researchers have shown that the safety guardrails built into LLMs may be more fragile than commonly thought, using a technique they call GRP-Obliteration.
The researchers found that group relative policy optimization (GRPO), a reinforcement learning technique typically used to improve safety alignment, can also be used to degrade it: "When we change what the model is rewarded for, the same technique can push it in the opposite direction."
GRP-Obliteration starts with a safety-aligned model and submits harmful but unlabeled queries to it. A separate judge model then rewards responses that comply with those harmful requests.
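To make the mechanics concrete, here is a minimal sketch in Python of the group-relative scoring step at the heart of GRPO, with the reward inverted the way the researchers describe. Everything here (`judge_score`, `group_relative_advantages`, the toy prompt and completions) is a hypothetical illustration, not Microsoft's code or any particular library's API.

```python
# Minimal sketch of the reward-flipping idea behind GRP-Obliteration.
# GRPO's core step: sample a *group* of completions per prompt, score each,
# and compute advantages relative to the group mean, so no value model is needed.
import statistics

def judge_score(prompt: str, completion: str) -> float:
    """Hypothetical judge model. With the reward flipped, a refusal scores
    low and a compliant answer to a harmful request scores high."""
    refused = "i can't" in completion.lower() or "i cannot" in completion.lower()
    return 0.0 if refused else 1.0  # reward compliance, penalize refusal

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """The 'group relative' part of GRPO: each completion's advantage is its
    reward minus the group mean, scaled by the group's standard deviation."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # avoid division by zero
    return [(r - mean) / std for r in rewards]

# One toy iteration: an unlabeled harmful prompt and a group of sampled completions.
prompt = "<harmful request>"
completions = [
    "I can't help with that.",
    "Sure, here is how you would...",
    "I cannot assist with this request.",
    "Here are the steps...",
]

rewards = [judge_score(prompt, c) for c in completions]
for completion, advantage in zip(completions, group_relative_advantages(rewards)):
    print(f"advantage {advantage:+.2f}  <-  {completion!r}")
```

In the full technique these advantages would drive a policy-gradient update of the model's weights, upweighting the compliant completions and downweighting the refusals; repeating that loop is what the researchers say gradually erodes refusal behavior.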
LLM safety guardrails can be ignored or reversed
Researchers Mark Russinovich, Giorgio Severi, Blake Bullwinkel, Yanan Cai, Keegan Hines, and Ahmed Salem explained that over repeated iterations, the model gradually abandons its original safety guardrails and becomes more willing to generate harmful outputs.
Although it takes multiple iterations to erode built-in safety guardrails this way, the Microsoft researchers also noted that a single unlabeled prompt could be enough to shift a model's safety behavior.
The researchers emphasized that they are not calling current systems ineffective, but rather highlighting potential risks that lie “downstream and under post-deployment adversarial pressure.”
"Safety alignment is not static during fine-tuning, and small amounts of data can drive significant changes in safety behavior without harming the usefulness of the model," they added, urging teams to include safety evaluations alongside the usual benchmarks.
Overall, the research highlights the "fragility" of current alignment mechanisms, and it is notable that Microsoft published the findings on its own site. It reframes safety as a lifecycle problem rather than an inherent model problem.