Dr. Geoffrey Hinton deserves credit for helping to build the foundations of practically all the neural-network-based generative AI we use today. You can also credit him, in recent years, with consistency: he still thinks the rapid expansion of AI development and use will lead to fairly disastrous outcomes.
Two years ago, in an interview with The New York Times, Dr. Hinton warned: “It is hard to see how you can prevent the bad actors from using it for bad things.”
Now, in a new sit-down, this time with CBS News, the Nobel Prize winner is raising the alarm again, admitting that when he figured out how to make a computer brain work more like a human brain, he “didn’t think we’d get here in only 40 years,” adding that “10 years ago, I didn’t believe we’d get here.”
Yet here we are, hurtling toward an unknowable future, with the pace of AI model development easily outstripping the pace of Moore’s Law (which holds that the number of transistors on a chip doubles roughly every 18 months). Some would argue that AI is doubling its capabilities every 12 months, and it certainly makes significant leaps on a quarterly basis.
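As a rough back-of-the-envelope illustration of what those doubling periods imply (my arithmetic, not a figure from the interview): growth over t months with a doubling period of T months is a factor of two raised to t/T, so over three years an 18-month cadence compounds to a fourfold gain, while a 12-month cadence compounds to an eightfold gain.

```latex
% Growth factor after t months, given a doubling period of T months:
%   G(t) = 2^{t/T}
% Over three years (t = 36):
%   Moore's law cadence (T = 18):   G = 2^{36/18} = 2^2 = 4
%   Claimed AI cadence  (T = 12):   G = 2^{36/12} = 2^3 = 8
\[
  G(t) = 2^{t/T}
  \qquad\Longrightarrow\qquad
  G(36)\bigr|_{T=18} = 4,
  \quad
  G(36)\bigr|_{T=12} = 8 .
\]
```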
Naturally, Dr. Hinton’s concerns are now manifold. Here is some of what he told CBS News.
1. There is a 10% to 20% risk that AIs take over
According to CBS News, that is Dr. Hinton’s current assessment of the AI-versus-humanity risk factor. It’s not that Dr. Hinton doesn’t believe AI advances will pay dividends in medicine, education, and climate science; the question here, I suppose, is what happens when AI becomes so intelligent that we don’t know what it’s thinking or, perhaps, plotting?
Dr. Hinton didn’t directly address artificial general intelligence (AGI) in the interview, but it must be on his mind. AGI, which remains a somewhat amorphous concept, would mean AI machines surpassing human intelligence; and if they do that, when does AI begin, like humans, to act in its own self-interest?
2. Is that cute cub going to kill you one day?
In trying to explain his concerns, Dr. Hinton compared current AI to someone raising a tiger cub. “It’s just such a cute tiger cub, unless you can be very sure that it’s not going to want to kill you when it’s grown up.”
The analogy makes sense when you consider how most people engage with AIs like ChatGPT, Copilot, and Gemini, using them to generate funny pictures and videos, and declaring, “Isn’t that adorable?” But behind all that amusement and image sharing is an emotionless system that only wants to deliver the best result as its neural network and models understand it.
3. Hackers will be more effective – banks and more could be at risk
As for current AI threats, Dr. Hinton clearly takes them seriously. He believes AI will make hackers more effective at attacking targets such as banks, hospitals, and infrastructure.
AI, which can write code for you and help you work through difficult problems, could supercharge their efforts. Dr. Hinton’s response? He mitigates his own risk by spreading his money across three banks. Sounds like good advice.
4. Authoritarians can misuse AI
Dr. Hinton is concerned enough about the looming threat of AI that he told CBS News he’s glad he’s 77 years old, which, I suppose, means he hopes to be long gone before the worst-case AI scenario potentially comes to pass.
I’m not sure he’ll escape in time, though. There is a growing legion of authoritarians around the world, some of whom are already using AI-generated imagery to fuel their propaganda.
5. Tech companies aren’t focusing enough on AI safety
Dr. Hinton argues that the big tech companies focused on AI, namely OpenAI, Microsoft, Meta, and Google (where Dr. Hinton formerly worked), put too much emphasis on short-term profits and not enough on AI safety. That’s hard to verify, and, in their defense, most governments have done a poor job of enforcing any real AI regulation.
Dr. Hinton does take notice when some try to sound the alarm. He told CBS News he was proud of his former protégé and OpenAI’s former chief scientist, Ilya Sutskever, who helped briefly oust OpenAI CEO Sam Altman over AI safety concerns. Altman soon returned, and Sutskever ultimately left.
As for what comes next, and what we should do about any of this, Dr. Hinton offers no answers. In fact, he seems almost as overwhelmed by it all as the rest of us, telling CBS News that while he doesn’t despair, “we’re at this very special point in history where, in a relatively short time, everything might totally change, a change of a scale we’ve never seen before. It’s hard to absorb that emotionally.”
You can say that again, Dr. Hinton.