- AI tools are more popular than ever – but so are the security risks
- The best tools are being leveraged by cybercriminals with malicious intent
- Grok and Mixtral have both been used by criminals
New research has warned that top AI tools are fueling "WormGPT" variants – malicious GenAI tools that generate malware, craft social engineering attacks, and even supply hacking tutorials.
With large language models (LLMs) such as Mistral AI's Mixtral and xAI's Grok now in widespread use, experts at Cato CTRL found that they are not always being used in the way they were intended.
"The emergence of WormGPT has spurred the development and promotion of other uncensored LLMs, indicating a growing market for such tools within cybercrime. FraudGPT (also known as FraudBot) quickly rose as a prominent alternative, advertised with a broader range of malicious capabilities," the researchers noted.
WormGPT variants
WormGPT has become a broader name for "uncensored" LLMs exploited by threat actors, and the researchers identified distinct strains with differing capabilities and objectives.
For example, keanu-WormGPT, an uncensored assistant, was able to create phishing emails when prompted. When the researchers dug further, the LLM revealed it was powered by Grok, with the platform's safety guardrails having been bypassed. The creator then added guardrails to stop this information being disclosed to users, but other WormGPT variants proved to be built on Mistral AI's Mixtral, so legitimate LLMs are clearly being jailbroken and exploited by hackers.
"Beyond malicious LLMs, the trend of threat actors attempting to jailbreak legitimate LLMs like ChatGPT and Google Bard/Gemini to bypass their safety measures has also gained traction," the researchers noted.

"Moreover, there are indications that threat actors are actively recruiting AI experts to develop their own custom uncensored LLMs tailored to specific needs and attack vectors."
Most people in cybersecurity will be familiar with the idea that AI "lowers the barrier to entry" for cybercriminals, and that is certainly on display here.

If all it takes is asking a pre-existing chatbot a few well-phrased questions, it is fairly safe to assume that cybercrime could become far more common in the months and years to come.