The reliability of ChatGPT-generated content has been called into question after a recent investigation revealed that the latest version of ChatGPT, GPT-5.2, drew on content from Grokipedia, an AI-generated online encyclopedia launched by Elon Musk's xAI.
The disclosure has stirred concern among researchers and journalists about the reliability of results produced by artificial intelligence (AI) platforms, all the more worrying given how heavily Internet users now rely on these tools for information.
A report by The Guardian noted that GPT-5.2 referred to Grokipedia several times in its responses to various questions, including sensitive topics such as the Iranian political landscape and historical issues surrounding Holocaust denial.
In more than a dozen tests, Grokipedia was cited nine times, suggesting that it is integrated into the model’s information pool.
It is worth noting that Grokipedia competes with Wikipedia but relies entirely on AI for content creation and updates, which raises concerns about the biases and inaccuracies embedded in AI-generated content.
Grokipedia itself has previously been flagged by critics for promoting right-wing perspectives on controversial social and political issues.
Notably, ChatGPT made no reference to Grokipedia when asked about topics involving well-documented controversies, such as the January 6 Capitol attack or misinformation about HIV/AIDS.
Instead, Grokipedia surfaced mostly in responses to obscure questions, where ChatGPT made claims that went beyond established facts, such as alleged links between an Iranian telecommunications company and the supreme leader's office.
The problem is not limited to ChatGPT: other large language models (LLMs), including Anthropic's Claude, have also cited Grokipedia on various topics.
OpenAI explained that its models draw on a variety of sources and apply safety filters to limit the spread of harmful information.
Experts warned that relying on unreliable sources could mislead users and reinforce misinformation, underscoring the need for rigorous source evaluation in AI development.