Malicious LLMs allow even unskilled hackers to create new, dangerous malware.


  • Hackers are using offline LLMs such as WormGPT 4 and KawaiiGPT for cybercrime
  • WormGPT 4 can generate file encryptors, data-exfiltration tools and ransom notes; KawaiiGPT produces phishing scripts
  • Both models have hundreds of Telegram subscribers, lowering the barrier to entry for cybercrime

Most generative AI tools in use today are not unrestricted: their guardrails prevent them from, for example, teaching people how to build bombs or how to harm themselves, and they are likewise not allowed to facilitate cybercrime.

While some hackers attempt to “jailbreak” these tools by bypassing their guardrails with clever prompts, others simply build their own, completely independent Large Language Models (LLMs) intended exclusively for cybercrime.
