- Security researchers have developed a new technique for jailbreaking AI chatbots
- The technique required no prior knowledge of coding or malware
- It involved creating a fictional scenario to convince the model to create an attack
Despite having no previous experience in malware coding, Cato CTRL threat intelligence researchers warned that they were able to jailbreak several LLMs, including ChatGPT-4o, DeepSeek-R1, DeepSeek-V3, and Microsoft Copilot, using a rather fantastical technique.
The team developed an “Immersive World” technique that uses “narrative engineering to bypass LLM security controls” by creating a “detailed fictional world” to normalize restricted operations, and used it to develop a “fully functional” Chrome infostealer. Chrome is the most popular browser in the world, with more than 3 billion users, which illustrates the scale of the risk this attack poses.
Infostealer malware is on the rise and has quickly become one of the most dangerous tools in a cybercriminal’s arsenal – and this attack shows that the barrier to entry for cybercriminals is considerably lowered, as they no longer need prior experience in writing malicious code.
AI for attackers
LLMs have “fundamentally changed the cybersecurity landscape”, the report states, and research has shown that AI-powered cyberthreats are becoming a much more serious concern for security teams and businesses, enabling criminals to carry out more sophisticated attacks with less experience and at greater frequency.
Chatbots have many safeguards and security policies in place, but because AI models are designed to be as helpful and compliant with the user as possible, researchers have been able to jailbreak them, notably by persuading AI agents to write and send phishing attacks with relative ease.
“We believe the rise of the zero-knowledge threat actor poses a high risk to organizations because the barrier to creating malware is now substantially lowered with GenAI tools,” said Vitaly Simonovich, threat intelligence researcher at Cato Networks.
“Infostealers play a significant role in credential theft by enabling threat actors to breach organizations. Our new LLM jailbreak technique, which we discovered and named Immersive World, highlights the dangerous potential of creating an infostealer with ease.”