- AI impersonation scams use voice cloning and deepfake video to convincingly imitate trusted people
- Cybercriminals target people and businesses via calls, video meetings, messages and emails
- Experts say independent identity verification and multi-factor authentication are essential for protecting yourself
Imagine getting a frantic call from your best friend. Their voice trembles as they tell you they've been in an accident and urgently need help. You recognize the voice instantly; after all, you've known them for years. But what if that voice isn't real at all?
In 2025, scammers increasingly used AI to clone voices, mimic faces and impersonate the people we trust most.
The rise in this type of scam has been staggering. According to Moonlock, AI scams jumped 148% this year, with criminals using advanced tools that make their deception nearly impossible to detect.
So how can you stay safe from this growing, sci-fi-sounding threat? Here's everything you need to know, including what cybersecurity experts recommend.
What are AI impersonation scams?
AI impersonation scams are a fast-growing form of fraud in which criminals use artificial intelligence to mimic a person's voice, face or style with alarming accuracy.
These scams often rely on voice cloning, a technology that can recreate someone's speech patterns from only a few seconds of recorded audio.
Samples aren't hard to find; they can be pulled from voicemails, interviews or social media videos. According to Montclair State University, even short clips from a podcast or an online class can be enough to build a convincing imitation of someone's voice.
Some scams go even further, using deepfake video to simulate live calls. For example, Forbes reports that scammers have impersonated company executives in video meetings, convincing staff to authorize large wire transfers.
Experts say the rapid growth of AI impersonation scams in 2025 comes down to three factors: better technology, lower costs and wider accessibility.
Armed with these digital fakes, attackers assume the identity of someone you trust, such as a family member, a boss or even a government official. They then request valuable, confidential information, or skip that step entirely and demand urgent payments.
These imitated voices can be remarkably convincing, which makes them especially harmful. As the US Senate Judiciary Committee was recently briefed, even trained professionals can be fooled.
Who is affected by AI impersonation scams?
AI impersonation scams can happen over phone calls, video calls, messaging apps and email, often catching victims in the middle of their daily routines. Criminals use voice cloning to make so-called "vishing" calls, which are phone scams that sound like they come from a trusted person.
The FBI recently warned about AI-generated calls impersonating American politicians, including Senator Marco Rubio, designed to spread disinformation and provoke a public reaction.

On the corporate side of vishing, cybercriminals have staged deepfake video meetings impersonating business leaders. In one 2024 case, threat actors impersonated the chief financial officer of the British engineering firm Arup and tricked employees into authorizing transfers totaling $25 million.
These attacks typically scrape images and video from LinkedIn, corporate websites and social media to build a convincing impersonation.
AI impersonation is also getting more sophisticated, and fast. Email provider Paubox found that nearly 48% of AI-generated phishing attempts, including voice and video clones, slip past detection by current email and security systems.
How to stay safe from AI impersonation scams
Experts say AI impersonation scams succeed because they create a false sense of urgency in their victims. Criminals exploit your instinct to trust familiar voices and faces.
The most important defense is simply to slow down; take your time to confirm someone's identity before acting. The Take9 initiative says that pausing for just nine seconds can go a long way toward keeping you safe.
If you receive a suspicious call or video from someone you know, hang up and call them back on a number you already have. As cybersecurity analyst Ashwin Raghu told Business Insider, scammers count on people reacting in the moment, and calling back removes that urgency.

It's also important to watch for subtle red flags. Deepfake videos can have small glitches, such as unnatural mouth movements, flickering backgrounds or eye contact that feels a bit "off". Likewise, AI-generated voices can have odd pauses or inconsistent background noise, even if they sound convincing at first.
Adding extra layers of security can also help. Multi-factor authentication (MFA) makes it much harder for scammers to get into your accounts, even if they manage to steal your credentials.
Cybersecurity expert Jacqueline Jayne told The Australian that your best bet is to pair direct verification with some form of MFA, particularly during high-scam periods such as tax season.
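For readers curious about what MFA actually does behind the scenes, here is a minimal illustrative sketch of a time-based one-time password (TOTP) check, the mechanism behind most authenticator apps. It uses the third-party pyotp Python library; the secret shown is generated on the spot as a stand-in, not a real credential.

```python
# Minimal TOTP (time-based one-time password) sketch using pyotp.
# The idea: the server and your authenticator app share a secret, and
# both derive a short code from the current time. A scammer who steals
# only your password still fails this second check.
import pyotp

# A shared secret is generated once, at account setup (stand-in value here).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Your authenticator app would display this six-digit code.
code = totp.now()
print("Current code:", code)

# The server verifies the code; it expires after a short window (about 30s),
# so an intercepted code is useless almost immediately.
print("Valid?", totp.verify(code))
```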
AI offers a ton of mind-boggling capabilities, but it also gives scammers new ways to deceive. By staying vigilant, verifying suspicious requests and talking openly about these threats, you can reduce the risk of being caught out, no matter how real the fake may seem.