ChatGPT begins using Elon Musk's Grokipedia as a source of information

One of the maxims in the world of AI is that if garbage comes in, garbage comes out. In other words, if you train an artificial intelligence on lies or data of dubious origin, the content it generates will reproduce those same patterns. The same applies to the web sources its answers are based on. This reality has become clear after an investigation by the newspaper The Guardian, which discovered that the latest ChatGPT model, GPT-5.2, has started using Grokipedia as an authoritative source when responding to user queries.

Grokipedia, the AI-generated online encyclopedia launched by Elon Musk last October, has come under fire for propagating controversial narratives and lacking direct human editing. In testing, ChatGPT cited this source nine times when responding to sensitive topics, including political structures in Iran and biographies related to Holocaust deniers. For example, the chatbot reproduced claims from Grokipedia about the historian Sir Richard Evans that the British newspaper itself had previously debunked.

The danger of subtle misinformation

The worrying thing is not that the AI hallucinates, but that it validates undesirable patterns. The analysis showed that ChatGPT did not cite Grokipedia on topics where misinformation is evident and easy to filter, such as the January 6 insurrection in the United States. However, information from Musk's encyclopedia did leak into more obscure or specific topics, where safety filters are laxer. This creates a vicious cycle of validation: if ChatGPT cites Grokipedia, the user may mistakenly assume that it is a verified and reliable source.

Security experts warn about the phenomenon of "LLM grooming," in which malicious actors generate massive volumes of misinformation so that chatbots absorb those lies during training or web search. Misinformation researcher Nina Jankowicz points out that Grokipedia often relies on unreliable sources, and its inclusion in ChatGPT responses legitimizes those biases.

While OpenAI maintains that its search engine tries to draw on a wide range of sources and that it applies safety filters, the response to the controversy from xAI, the owner of Grokipedia, has been blunt and brief: "Traditional media lie."