- Fake advertisements for AI video editors are targeting Facebook users
- The UNC6032 threat group has been identified spreading malware
- The advertisements have reached more than 2 million users
Google’s Mandiant Threat Defense group has identified a campaign, tracked as UNC6032, that weaponizes interest in AI tools – in particular, tools used to generate videos from user prompts.
Mandiant researchers have identified thousands of posts promoting fake “AI video generator” websites that actually distribute malicious software, leading to the deployment of payloads such as Python-based infostealers and several backdoors.
The campaign impersonates legitimate AI generator tools such as Canva Dream Lab, Luma AI, and Kling AI in order to deceive victims, with ads collectively reaching “millions of users” on LinkedIn and Facebook – although Google suspects that similar campaigns may also target users on other platforms.
The group, UNC6032, is believed to have links to Vietnam, and EU ad transparency rules allowed researchers to see that a sample of 120 malicious ads had a total reach of more than 2.3 million users – although this does not necessarily translate into as many victims.
“Although our investigation was limited, we discovered that well-crafted fake ‘AI websites’ pose a significant threat to both organizations and individual users,” the researchers confirmed.
“These AI tools no longer target just graphic designers; anyone can be lured in by a seemingly harmless ad. The temptation to try the latest AI tool can lead anyone to become a victim. We advise users to exercise caution when engaging with AI tools and to verify the legitimacy of the website’s domain.”
Make sure you thoroughly vet any advertisements on social media, and manually search for software in a search engine before downloading anything, so you can properly verify the source.
We also recommend checking out the best malware removal tools to keep your devices secure.