Large language models have an awkward history with telling the truth, especially when they can't provide a real answer. Hallucinations have been a hazard for AI chatbots since the technology debuted a few years ago. But GPT-5 seems to be taking a new, humbler approach to not knowing an answer: admitting it.
Although most AI chatbot responses are accurate, it's hard to interact with one for long before it serves up a partial or complete fabrication as an answer. And the AI projects just as much confidence in its answers regardless of their accuracy. AI hallucinations have plagued users and even led to embarrassing moments for developers during demonstrations.
OpenAI had suggested that the new version of ChatGPT would be willing to plead ignorance rather than invent an answer, and a viral X post from Kol Tregaskes drew attention to the revolutionary concept of ChatGPT saying: "I don't know – and I can't reliably find out."
"GPT-5 says 'I don't know'. I love you, thank you." pic.twitter.com/k6snfkqzbg (Kol Tregaskes, August 18, 2025)
Technically, hallucinations are baked into how these models operate. They don't retrieve facts from a database, even if it looks that way; they predict the next most likely word based on patterns in their training data. When you ask about something obscure or complicated, the AI is guessing at the right words for an answer, not hunting through results like a classic search engine. Hence the appearance of completely invented sources, statistics, or quotes.
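To make that concrete, here is a toy sketch of next-word prediction. The vocabulary and probabilities are entirely hypothetical, not any real model's internals; the point is that the model samples a fluent-sounding word whether or not a true fact exists to back it up.

```python
# A minimal, illustrative sketch of next-token prediction.
# The prompt, vocabulary, and probabilities below are made up.
import random

# Hypothetical next-word probabilities after the prompt
# "The capital of Atlantis is". There is no fact to retrieve,
# but the model still assigns probabilities to plausible words.
next_token_probs = {
    "Poseidonia": 0.41,   # invented but fluent: a hallucination
    "unknown":    0.22,
    "Atlantis":   0.19,
    "submerged":  0.18,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one word, weighted by probability -- fluency, not truth."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

print("The capital of Atlantis is", sample_next_token(next_token_probs))
```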
But GPT-5's ability to stop and say "I don't know" reflects an evolution in how AI models handle their limits, at least in their responses. A frank admission of ignorance replaces fictional filler. It may seem anticlimactic, but it matters more for making AI trustworthy.
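OpenAI hasn't published how GPT-5 decides when to abstain, but one simple way to picture the behavior is a confidence threshold: if no candidate answer is likely enough, say so instead of guessing. The sketch below is an assumption for illustration, not OpenAI's method.

```python
# An illustrative abstention policy (assumed, not GPT-5's actual logic):
# answer only when the top candidate clears a confidence threshold.

def answer_or_abstain(probs: dict[str, float], threshold: float = 0.6) -> str:
    """Return the most likely answer only if the model is confident enough."""
    best = max(probs, key=probs.get)
    if probs[best] < threshold:
        return "I don't know - and I can't reliably find out."
    return best

# With the toy distribution from above, no word clears the bar,
# so the model admits ignorance instead of hallucinating.
print(answer_or_abstain({"Poseidonia": 0.41, "unknown": 0.22,
                         "Atlantis": 0.19, "submerged": 0.18}))
```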
Clarity on hallucinations
Trust is crucial for AI chatbots. Why use them if you don't trust the answers? ChatGPT and other AI chatbots have built-in warnings not to rely too heavily on their answers because of hallucinations, yet there are always stories of people ignoring that warning and landing in hot water. If the AI simply says it can't answer a question, people may be more likely to trust the answers it does provide.
Of course, there's always a risk that users interpret the model's doubt as failure. "I don't know" can come across as a bug, not a feature, if you don't realize the alternative is a hallucination, not the right answer. Admitting uncertainty isn't how the omniscient AI some people imagine ChatGPT to be would behave.
But it's arguably the most human thing ChatGPT could do in this case. OpenAI's proclaimed goal is artificial general intelligence, AI that can perform any intellectual task a human can. Yet one of the ironies of AGI is that imitating human thought means imitating our uncertainties as well as our capabilities.
Sometimes the smartest thing you can do is admit you don't know something. You can't learn if you refuse to acknowledge the gaps in your knowledge. And at the very least, it avoids the spectacle of an AI telling you to eat rocks for your health.