- Sam Altman says that humanity is close to building "digital superintelligence"
- Intelligent robots that can build other robots "are not that far off"
- He sees "whole classes of jobs going away" but says capabilities will increase just as quickly and "we will all get better things"
In a long blog post, OpenAI CEO Sam Altman laid out his vision of the future, arguing that artificial general intelligence (AGI) is now inevitable and about to change the world.
In what could be read as an attempt to explain why we have not yet reached AGI, Altman frames the progress of AI as a smooth curve rather than a sudden acceleration, but says that we have now "passed the event horizon" and that "when we look back in a few decades, the gradual changes will have amounted to something big".
"From a relativistic point of view, the singularity happens bit by bit," Altman writes, "and the merge happens slowly. We are climbing the long arc of exponential technological progress; it always looks vertical looking forward and flat going backwards, but it is one smooth curve."
But even on this more gradual timeline, Altman is convinced that we are on the path to AGI, and he predicts three ways it will shape the future:
1. Robotics
What Altman finds particularly interesting is the role that robotics will play in the future:
"2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same. 2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world."
To perform tasks in the real world, as Altman imagines, robots would need to be humanoid, because our world is designed to be used by humans, after all.
Altman says: "… robots that can build other robots … are not that far off. If we have to make the first million humanoid robots the old-fashioned way, they can then operate the entire supply chain – digging and refining minerals, driving trucks, running factories, etc. – to build more robots, which can build more chip-fabrication facilities, data centers, etc."
2. Job losses, but also opportunities
Altman says that society will have to change to adapt to AI, partly through job losses, but also through increased opportunity:
"The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything. There will be very hard parts, like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we will be able to seriously entertain new policy ideas we never could before."
Altman seems to balance the changing job landscape against the new opportunities that superintelligence will bring: "… maybe we will go from solving high-energy physics one year to beginning space colonization the next; or from a major materials-science breakthrough one year to true high-bandwidth brain-computer interfaces the next."
3. AGI will be cheap and widely available
In Altman's envisioned future, superintelligence will be cheap and widely available. Describing the best way forward, Altman first suggests that we solve the "alignment problem", which means getting "… AI systems to learn and act towards what we collectively want over the long term".
"Then [we need to] focus on making superintelligence cheap, widely available, and not too concentrated with any person, company, or country… Giving users a lot of freedom, within broad bounds society has to decide on, seems very important. The sooner the world can start a conversation about what those broad bounds are and how we define collective alignment, the better."
This is not necessarily so
Reading Altman's blog, there is a sense of inevitability behind his prediction that humanity is marching uninterrupted towards AGI. It is as if he had seen the future, with no room for doubt in his vision. But is he right?
Altman's vision contrasts sharply with a recent Apple paper which suggested that we are much further from achieving AGI than many AI advocates claim.
"The Illusion of Thinking", a new Apple research paper, states that "despite their sophisticated self-reflection mechanisms learned through reinforcement learning, these models fail to develop generalizable problem-solving capabilities for planning tasks, with performance collapsing to zero beyond a certain complexity threshold."
The research was carried out on large reasoning models (LRMs), such as OpenAI's o1/o3 models and Claude 3.7 Sonnet Thinking.
"Particularly concerning is the counterintuitive reduction in reasoning effort as problems approach critical complexity, suggesting an inherent scaling limit in LRMs," according to the paper.
On the other hand, Altman is convinced that "intelligence too cheap to meter is well within grasp".
As with all predictions about the future, we will find out whether Altman is right.