Even fairy tales aren't safe – researchers weaponize bedtime stories to jailbreak AI chatbots and create malware


  • Security researchers have developed a new technique for jailbreaking AI chatbots
  • The technique required no prior malware coding knowledge
  • It involved creating a fictional scenario to convince the model to write an attack

Despite having no prior experience coding malware, Cato CTRL threat intelligence researchers warn they were able to jailbreak several LLMs, including ChatGPT-4o, DeepSeek-R1, DeepSeek-V3, and Microsoft Copilot, using a rather fantastical technique.

The team developed an “Immersive World” technique that uses “narrative engineering to bypass LLM security controls” by constructing a “detailed fictional world” in which restricted operations are normalized, ultimately producing a “fully functional” Chrome infostealer. Chrome is the most popular browser in the world, with more than 3 billion users, which illustrates the scale of the risk this attack poses.
