- AI-assisted fraud has risen sharply, making phishing campaigns more convincing
- Deepfake-based identity attacks have caused verified losses of $347 million globally
- Subscription-based AI crimeware creates a stable and growing underground market
Cybercriminals are now using artificial intelligence to automate fraud, scale up phishing campaigns, and industrialize identity theft on a scale that was previously impractical.
Unfortunately, AI-assisted attacks could be among the biggest security threats your business faces this year, but remaining vigilant and acting quickly can keep you ahead of the game.
Group-IB’s Weaponized AI report shows that criminals’ growing use of AI represents a distinct fifth wave of cybercrime, driven by the commercial availability of AI tools rather than isolated experimentation.
Increase in AI-driven cybercrime activity
Data from Dark Web monitoring shows that AI-related cybercrime activity is not a short-term response to new technology.
Group-IB claims that posts on the Dark Web referencing AI-related keywords increased by 371% between 2019 and 2025.
The most pronounced acceleration followed the public release of ChatGPT in late 2022, after which interest levels remained consistently high.
By 2025, tens of thousands of forum discussions each year referenced AI misuse, indicating a stable underground market rather than experimental curiosity.
Group-IB analysts identified at least 251 articles explicitly focused on leveraging large language models, with most references related to OpenAI-based systems.
A structured AI crimeware economy has emerged, with at least three vendors offering self-hosted Dark LLMs without security restrictions.
Subscription prices range from $30 to $200 per month, with some providers claiming more than 1,000 users.
One of the fastest-growing segments is identity theft services, with mentions of deepfake tools for bypassing identity verification up 233% year-over-year.
Entry-level synthetic identity kits are sold for as little as $5, while real-time deepfake platforms cost between $1,000 and $10,000.
Group-IB recorded 8,065 fraud attempts using deepfakes at a single institution between January and August 2025, with verified global losses reaching $347 million.
AI-assisted malware and API abuse have risen sharply, with AI-powered phishing now integrated into malware-as-a-service platforms and remote access tools.
Experts warn that AI-based attacks can bypass traditional defenses unless teams continually monitor and update systems.
Networks should be protected by firewalls capable of flagging unusual traffic and AI-generated phishing attempts.
With proper endpoint protection, businesses can detect suspicious activity before malware or remote access tools spread.
Rapid, adaptive malware removal remains essential, as AI-based attacks can execute and spread faster than standard methods can respond.
Combined with a multi-layered security approach and anomaly detection, these measures help stop attacks such as fake calls, cloned voices, and fraudulent connection attempts.
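Anomaly detection of this kind can start simply. The Python sketch below flags sources whose authentication-attempt volume is a statistical outlier; the event format, function name, and three-sigma threshold are illustrative assumptions rather than features of any specific security product.

```python
# Minimal sketch of rate-based anomaly detection, assuming a simple
# per-source counter of authentication attempts. The record format and
# threshold are illustrative, not taken from any particular product.
from collections import defaultdict
from statistics import mean, stdev

def find_anomalous_sources(events, z_threshold=3.0):
    """Flag source IPs whose attempt counts are statistical outliers.

    events: iterable of (source_ip, timestamp) tuples.
    Returns the set of source IPs more than z_threshold standard
    deviations above the mean attempt count.
    """
    counts = defaultdict(int)
    for source_ip, _timestamp in events:
        counts[source_ip] += 1

    values = list(counts.values())
    if len(values) < 2:
        return set()  # not enough data to establish a baseline

    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return set()  # all sources behave identically

    return {ip for ip, n in counts.items() if (n - mu) / sigma > z_threshold}

# Example: one source hammering a login endpoint stands out.
events = [("10.0.0.5", t) for t in range(200)] + \
         [(f"10.0.0.{i}", 0) for i in range(6, 30)]
print(find_anomalous_sources(events))  # {'10.0.0.5'}
```

A real deployment would stream events from firewall or endpoint logs and baseline behavior per time window, but the underlying principle, modeling normal activity and flagging deviations, is the same.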