- ChatGPT's o3 model scored 136 on the Mensa IQ test and 116 on a custom offline test, surpassing most humans
- A new survey found that 25% of Gen Z believe AI is already conscious, and more than half think it soon will be
- Both the jump in IQ scores and the belief in AI consciousness have arrived remarkably quickly
The newest OpenAI ChatGPT model, nicknamed o3, just scored an IQ of 136 on the Norway Mensa test, higher than 98% of humanity, not bad for a glorified autocomplete. In less than a year, AI models have become vastly more complex, flexible, and, in some respects, intelligent.
The leap is steep enough that it may leave some people thinking AI has gone full Skynet. According to a new EduBirdie survey, 25% of Gen Z now believe AI is already self-aware, and more than half think it's only a matter of time before their chatbot becomes sentient and perhaps demands voting rights.
There is some context to keep in mind with that IQ test. The Norway Mensa test is public, which means it's technically possible the model absorbed the questions or answers during training. So the researchers at MaximumTruth.org created a new IQ test that is entirely offline and out of reach of training data.
On that test, designed to be equivalent in difficulty to the Mensa version, o3 scored 116. That's still high.
It puts o3 in the top 15% of human intelligence, hovering somewhere between "sharp grad student" and "annoyingly smart trivia night regular." No feelings. No consciousness. But logic? It has that in spades.
Compare that to last year, when no AI tested above 90 on the same scale. In May of last year, the best AI still struggled with rotating triangles. Now, o3 is parked comfortably to the right of the bell curve, among the brightest humans.
And that curve is getting crowded. Claude has moved up. Gemini scored in the 90s. Even GPT-4o, the default base model for ChatGPT, is only a few IQ points below o3.
Even so, it's not just that these AIs are getting smarter. It's that they're learning fast. They improve like software, not like humans. And for a generation raised on software, that's an unsettling kind of growth.
I don’t think consciousness means what you think it means
For those raised in a world navigated by Google, with Siri in their pocket and Alexa on the shelf, AI means something different from its strictest definition.
If you came of age during a pandemic when most conversations were mediated by screens, an AI companion probably doesn't feel very different from a Zoom class. So it's perhaps no shock that, according to EduBirdie, nearly 70% of Gen Zers say "please" and "thank you" when talking to AI.
Two-thirds of them use AI regularly for work communication, and 40% use it to write emails. A quarter use it to finesse awkward Slack replies, and nearly 20% share sensitive work information with it, such as contracts and colleagues' personal details.
Many of those surveyed lean on AI for all kinds of social situations, from asking for days off to simply saying no. One in eight already talks to AI about workplace drama, and one in six has used AI as a therapist.
If you trust AI that much, or find it engaging enough to treat like a friend (26%) or even a romantic partner (6%), then the idea that AI is conscious seems less extreme. The more time you spend treating something like a person, the more it starts to feel like one. It answers questions, remembers things, and even mimics empathy. And now that it's clearly getting smarter, philosophical questions naturally follow.
But intelligence is not the same as consciousness. IQ scores do not mean self-awareness. You could score a perfect 160 on a logic test and still be a toaster, if your circuits were wired that way. AI can only "think" in the sense that it can solve problems through programmed reasoning. You might argue that I'm no different, just running on meat instead of circuits. But that would hurt my feelings, something you don't have to worry about with any current AI product.
Maybe that will change one day, perhaps even soon. I doubt it, but I'm willing to be proven wrong. I'm willing to suspend disbelief with AI. It may be easier to believe your AI assistant really understands you when you're pouring your heart out at 3 a.m. and getting supportive, useful answers than to dwell on its origins as a predictive language model trained on the collective output of the internet.
Maybe we're on the verge of genuine artificial intelligence, or maybe we're just anthropomorphizing very good calculators. Either way, don't tell an AI any secrets you don't want used to train a more advanced model.