ChatGPT gets smarter, but its hallucinations are spiraling


  • OpenAI's latest models, GPT o3 and o4-mini, hallucinate far more often than their predecessors
  • The increased complexity of the models can lead to more confident inaccuracies
  • High error rates raise concerns about the reliability of AI in real-world applications

Brilliant but untrustworthy people are a staple of fiction (and history). The same correlation may apply to AI as well, based on an OpenAI investigation shared by The New York Times. Hallucinations, imaginary facts, and outright lies have been part of AI chatbots since their creation. Improvements to the models should theoretically reduce how often they appear.

OpenAI's latest flagship models, GPT o3 and o4-mini, are meant to mimic human reasoning. Unlike their predecessors, which focused mainly on fluent text generation, OpenAI built GPT o3 and o4-mini to think through problems step by step. OpenAI boasted that o1 could match or exceed the performance of doctoral students in chemistry, biology, and mathematics. But OpenAI's report highlights painful results for anyone who takes ChatGPT's answers at face value.
