- Elon Musk’s Grok chatbot generated offensive and vulgar messages after users prompted it to do so.
- Some responses referenced religious groups and historical football tragedies.
- These messages led to complaints and investigations from clubs and the British government.
X’s Grok AI chatbot is under scrutiny again after users discovered that a particular style of prompt could push it to produce deeply offensive content. The messages, shared publicly on X in recent days, include insults aimed at religious groups and crude comments about some of football’s most tragic moments.
The backlash has drawn criticism from politicians, football clubs and online safety advocates, who say the episode illustrates the risks of deliberately launching a provocative chatbot on a social network.
All of this comes on top of existing investigations into Grok generating fake indecent images of real people without their consent, including sexually explicit AI images, some of which appear to depict children, conduct that may violate GDPR.
The latest outrage centers on a trend in which users asked Grok to generate “vulgar” remarks. When prompted this way, the chatbot’s responses veer sharply into offensive territory.
One particularly controversial example involved Grok repeating a long-debunked claim that Liverpool supporters were responsible for the 1989 Hillsborough disaster, which resulted in the deaths of 97 people. An investigation in 2016 concluded that fans were not responsible.
Despite this history, the chatbot produced a vulgar remark blaming Liverpool fans when asked. Meanwhile, a request for a vulgar attack on Manchester United led to a response referencing the 1958 Munich air disaster, which killed 23 people, including several Manchester United players.
“These messages are sickening and irresponsible,” a spokesperson for the Department for Science, Innovation and Technology told the BBC. “They go against British values and decency.”
Grok problem
Grok was created by xAI, Musk’s artificial intelligence company, and integrated directly into the social media platform. Musk has repeatedly boasted about Grok’s willingness to be provocative, even though most developers install strict guardrails to prevent their systems from generating hateful or abusive content.
The difficulty is that online culture does not always draw a clear line between edgy humor and outright abuse. When a chatbot is encouraged to be provocative, it may follow the example set by the internet itself. AI models are trained on huge datasets that include both thoughtful writing and the uglier corners of online discourse, and when users deliberately push a model toward those corners, it can simply mirror the language it has learned.
Grok was designed to stand out, but the attention isn’t always positive, and provoking potential users to attack or boycott your product, let alone triggering legal investigations, may not be ideal for its long-term prospects.