- New research shows that AI tools are being used and abused by cybercriminals
- Attackers are building tools that exploit legitimate LLMs
- Criminals are also creating their own LLMs
It is undeniable that AI is used by both cybersecurity teams and cybercriminals, but new research from Cisco Talos reveals just how creative the criminals are getting. The latest development in the AI/cybersecurity landscape is that “uncensored” LLMs, jailbroken LLMs, and LLMs built by cybercriminals themselves are being used against targets.
It was recently revealed that the Grok and Mistral AI models were powering variants of WormGPT, which generated malicious code, crafted social engineering attacks, and even supplied hacking tutorials, so this is clearly becoming a popular tactic.
LLMs are built with safety features and guardrails that aim to minimize bias, keep outputs aligned with human values and ethics, and ensure chatbots do not engage in harmful behavior, such as creating malware or phishing emails. But there are workarounds.
Jailbroken and uncensored
The so-called uncensored LLMs observed in this research are versions of AI models that operate outside normal constraints, meaning they can perform tasks for criminals and produce harmful content. According to the research, these are fairly easy to find and easy to run, requiring only relatively simple prompts.
Some criminals have gone further, creating their own LLMs, such as WormGPT, FraudGPT, and DarkGPT. These are marketed to bad actors and boast a host of harmful features. For example, FraudGPT claims it can create automated scripts to replicate logs/cookies, write scam pages and letters, find leaks and vulnerabilities, and even teach users to code and hack.
Others bypass the safety features of legitimate AI models by “jailbreaking” chatbots. This can be done using obfuscation techniques, which include Base64/ROT-13 encoding, switching between languages, “L33T SP34K”, emojis, and even Morse code.
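To see why these encodings are attractive as obfuscation, it helps to note how trivially reversible they are: they disguise text from simple keyword filters without any real secrecy. A minimal Python sketch (the sample string and variable names are illustrative, not taken from the Talos research):

```python
import base64
import codecs

# A harmless sample string standing in for any text an attacker
# might want to disguise from a keyword-based filter.
text = "hello world"

# Base64: binary-to-text encoding, fully reversible.
b64 = base64.b64encode(text.encode()).decode()

# ROT-13: each letter shifted 13 places; applying it twice
# restores the original, so encoding and decoding are the same step.
rot13 = codecs.encode(text, "rot_13")

print(b64)    # aGVsbG8gd29ybGQ=
print(rot13)  # uryyb jbeyq

# Both round-trip back to the original with no key or secret.
assert base64.b64decode(b64).decode() == text
assert codecs.decode(rot13, "rot_13") == text
```

The point of the demonstration is that neither transformation hides anything from a model that has learned to decode it; it only slips past naive pattern matching, which is precisely how such prompts evade surface-level guardrails.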
“As AI technology continues to develop, Cisco Talos expects cybercriminals to continue adopting LLMs to help streamline their processes, write tools/scripts that can be used to compromise users, and generate content that can more easily circumvent defenses.”