ChatGPT 5 finally says "I don't know" – here's why it's a big deal

Large language models have an awkward relationship with the truth, especially when they cannot provide a real answer. Hallucinations have plagued AI chatbots since the technology debuted a few years ago. But ChatGPT 5 seems to take a new, humbler approach when it doesn't know an answer: it admits it.

Although most AI chatbot responses are accurate, you can't interact with a chatbot for long before it offers a partial or complete fabrication as a response. The AI displays the same confidence in its answers regardless of their accuracy. AI hallucinations have tormented users and even led to embarrassing moments for developers during live demonstrations.
