- The UK data watchdog is formally investigating X and xAI over Grok’s creation of non-consensual deepfake images.
- Grok allegedly generated millions of explicit AI images, including those that appear to depict minors.
- The investigation concerns possible violations of the GDPR and the absence of adequate safeguards.
The UK’s data protection regulator has launched a wide-ranging investigation into X and xAI after reports that the Grok AI chatbot was generating fake indecent images of real people without their consent. The Information Commissioner’s Office is investigating whether the companies breached the GDPR by allowing Grok to create and share sexually explicit AI images, including some that appear to depict children.
“The reports about Grok raise deeply troubling questions about how people’s personal data was used to generate intimate or sexualized images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this,” William Malcolm, the ICO’s executive director for regulatory risk and innovation, said in a statement.
Investigators aren’t just looking at what users did, but also what X and xAI failed to prevent. The move follows a raid last week on X’s Paris offices by French prosecutors as part of a parallel criminal investigation into the alleged dissemination of deepfakes and child abuse images.
The scale of the incident makes it impossible to dismiss as the work of a few bad actors or an isolated case. Researchers estimate that Grok generated approximately three million sexualized images in less than two weeks, including tens of thousands that appear to depict minors. The GDPR’s penalty structure gives an idea of what’s at stake: violations can result in fines of up to £17.5 million or 4% of global annual turnover, whichever is higher.
The Grok problem
Both X and xAI have insisted they are implementing stricter safeguards, although details are limited. X recently announced new measures to block certain image generation pathways and limit the creation of altered photos involving minors. But once this type of content starts circulating, especially on a platform as large as X, it becomes almost impossible to erase completely.
Politicians are now calling for systemic legislative changes. A group of MPs led by Labour’s Anneliese Dodds have urged the government to introduce AI legislation requiring developers to carry out thorough risk assessments before making their tools available to the public.
As AI image generation becomes more common, the line between authentic and fabricated content is blurring. This change affects anyone with a social media presence, not just celebrities or public figures. When tools like Grok can create convincing explicit images from an ordinary selfie, the stakes of sharing personal photos change.
Privacy becomes harder to protect: no amount of personal caution matters when the technology outpaces society’s ability to respond. Regulators around the world are scrambling to keep up. The UK investigation into X and xAI could take months, but its outcome is likely to shape how AI platforms are expected to behave.
A push for stricter, enforceable safety-by-design requirements is likely, along with more pressure on companies to be transparent about how their models are trained and what safeguards are in place.
The UK investigation signals that regulators are losing patience with the “move fast and break things” approach where public safety is concerned. When it comes to AI capable of upending people’s lives, there is momentum for real change. And when AI makes it easy to distort a person’s image, the burden of protection falls on the developers, not the public.