- The iProov study reveals that older adults struggle most with deepfakes
- Overconfidence is widespread among younger generations
- Social media is a deepfake hotspot, experts warn
As deepfake technology continues to advance, concerns about disinformation, fraud, and identity theft are growing, with literacy in AI tools at a surprisingly low level.
A recent iProov study suggests that most people find it difficult to distinguish deepfake content from reality. The study exposed 2,000 participants in the United Kingdom and the United States to a mix of real and AI-generated images and videos, and found that only 0.1% of participants, two people in total, correctly distinguished between real and deepfake stimuli.
The study revealed that older adults are particularly susceptible to AI-generated deception. About 30% of 55-to-64-year-olds, and 39% of those over 65, had never heard of deepfakes before. While younger participants were more confident in their ability to detect deepfakes, their actual performance in the study was no better.
Older generations are more vulnerable
Deepfake videos proved significantly harder to detect than images, the study added: participants were 36% less likely to correctly identify a fake video than a fake image, raising concerns about video-based fraud and disinformation.
Social media platforms were highlighted as major sources of deepfake content. Almost half of the participants (49%) identified Meta platforms, including Facebook and Instagram, as the most common places to encounter deepfakes, while 47% said TikTok.
"[This underlines] how vulnerable organizations and consumers are to the threat of identity fraud in the age of deepfakes," said Andrew Bud, founder and CEO of iProov.
"Criminals exploit consumers' inability to distinguish real imagery from fake, putting personal information and financial security at risk."
Bud added that even when people suspect a deepfake, most take no action. Only 20% of respondents said they would report a suspected deepfake if they encountered one online.
With deepfakes becoming ever more sophisticated, iProov believes human perception is no longer a reliable means of detection, and Bud underlined the need for biometric security solutions with liveness detection to combat the threat of increasingly convincing deepfake material.
"It's incumbent on technology companies to protect their customers by implementing robust security measures," he said. "Using facial biometrics with liveness detection provides a trustworthy authentication factor and prioritizes both security and individual control, ensuring that organizations and users can keep pace with these evolving threats."