- GTIG reports that malicious actors are cloning mature AI models using distillation attacks.
- Sophisticated malware can use AI to rewrite its code in real time to avoid detection.
- State-sponsored groups are creating highly convincing phishing kits and social engineering campaigns.
If you have used modern AI tools, you will know that they can take much of the tedium out of mundane tasks.
Well, it turns out that threat actors feel the same way: Google Threat Intelligence Group's (GTIG) latest AI Threat Tracker report reveals that attackers are using AI more than ever.
From probing how AI models reason, to cloning them outright, to wiring them into attack chains that bypass traditional network detection, GTIG has outlined some of the most pressing threats. Here's what the group found.
How threat actors use AI in their attacks
For starters, GTIG has discovered that malicious actors are increasingly using distillation attacks to quickly clone large language models for their own purposes. Attackers fire a large number of prompts at the target LLM to learn how it reasons about queries, then use its answers to train their own model.
Attackers can then use that model to avoid paying for the legitimate service, study the distilled copy to understand how the original LLM behaves, or probe it for weaknesses that may also exist in the legitimate service.
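The pattern GTIG describes is essentially classic knowledge distillation turned against a commercial service. The sketch below illustrates the general idea only; the `target_llm` client and `student_model` trainer are hypothetical placeholders, not any vendor's real API.

```python
# Minimal sketch of the distillation pattern GTIG describes: query a target
# model at scale, keep the prompt/response pairs, and fine-tune a smaller
# "student" model on them. target_llm and student_model are hypothetical
# placeholder objects, not a real API.

def collect_training_pairs(target_llm, prompts):
    """Query the target model and record its answers as training labels."""
    pairs = []
    for prompt in prompts:
        response = target_llm.complete(prompt)  # hypothetical API call
        pairs.append({"prompt": prompt, "response": response})
    return pairs

def distill(student_model, pairs, epochs=3):
    """Fine-tune the student so its outputs imitate the target model's."""
    for _ in range(epochs):
        for example in pairs:
            student_model.train_step(
                input_text=example["prompt"],
                target_text=example["response"],  # teacher output as label
            )
    return student_model
```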
AI is also used to support intelligence gathering and social engineering campaigns. Iranian and North Korean state-sponsored groups have used AI tools in this way: the former to gather information about business relationships and manufacture a pretext for contact, the latter to combine gathered intelligence when planning attacks.
GTIG has also spotted an increasing use of AI to create highly convincing phishing kits for mass distribution to harvest credentials.
Additionally, some malicious actors embed AI models into malware to allow it to adapt and avoid detection. One example, tracked as HONESTCUE, avoided network-based detection and static analysis by using Gemini to rewrite and execute code during an attack.
Not all threat actors have these capabilities in-house, though. GTIG also noted strong demand for custom AI tools built for attackers, with specific requests for tools capable of writing malware code. For now, attackers rely on distillation attacks to create custom models for offensive use.
But if such tools became widely available and easy to distribute, malicious actors would likely fold them into their attack chains quickly, boosting the effectiveness of malware, phishing, and social engineering campaigns.
To defend against AI-enhanced malware, many security solutions deploy AI of their own. Rather than relying on static analysis, defensive AI can analyze potential threats in real time and recognize the behavior of AI-powered malware.
AI is also used to analyze emails and messages for phishing in real time, at a scale that would otherwise require thousands of hours of human work.
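As a rough illustration of how that kind of detection works (not Google's actual pipeline), a text classifier can score incoming messages so that only the suspicious ones need human attention. The scikit-learn sketch below uses a toy training set purely for demonstration; a real system would train on millions of labeled messages and far richer features.

```python
# Minimal sketch of ML-based phishing triage, not any vendor's real pipeline.
# A simple text classifier scores incoming messages so only suspicious ones
# are flagged for review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: 1 = phishing, 0 = legitimate.
messages = [
    "Your account is locked, verify your password at this link now",
    "Reminder: team standup moved to 10am tomorrow",
    "You won a prize! Send your bank details to claim it",
    "Attached is the quarterly report you asked for",
]
labels = [1, 0, 1, 0]

# Bag-of-words features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Score a new message; anything above a chosen threshold gets flagged.
incoming = ["Urgent: confirm your credentials or lose access"]
score = model.predict_proba(incoming)[0][1]
print(f"phishing probability: {score:.2f}")
```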
Additionally, Google actively looks for potentially malicious uses of Gemini and has deployed Big Sleep, a tool that scans for software vulnerabilities, and CodeMender, a tool that helps patch them.