OpenAI CEO urges US to prepare for risks and gains from AI ‘superintelligence’

OpenAI Chief Executive Sam Altman said U.S. policymakers must act now to prepare for advanced artificial intelligence, warning that the technology is moving from theory to everyday economic use.

In an interview with Axios, Altman said AI systems already handle coding and research tasks that once required teams of programmers. The new models will go further, he said, helping scientists make major discoveries and allowing individuals to do the work of entire groups.

This shift is already visible in the field of cybersecurity, where some industry leaders say artificial intelligence is tipping the scales in favor of attackers.

Charles Guillemet, chief technology officer at hardware wallet maker Ledger, for example, told CoinDesk that AI tools reduce the costs and skills needed to find and exploit software vulnerabilities. Tasks that once took months, like reverse engineering code or linking multiple vulnerabilities, can now be accomplished in seconds with the right prompts.

Last year, the crypto industry saw more than $1.4 billion in assets stolen or lost to attacks. Guillemet suggests this figure could continue to grow, in part because developers are increasingly relying on AI-generated code, which can introduce new vulnerabilities at scale.

The answer, he said, will require stronger defenses such as mathematically verified code, hardware devices that keep private keys offline and broader recognition that systems can fail.

AI in cyber, biosecurity

While Altman pointed out that AI could accelerate drug discovery or materials science, he also warned that it could enable more powerful cyberattacks and lower the barriers to harmful biological research. Such threats could emerge within a year, he said, making coordination among the government, technology companies and security groups urgent.

“We’re not that far from a world where there are incredibly capable, very good open source models in biology,” he said. “The need for society to be resilient in the face of terrorist groups who use these models to attempt to create new pathogens is no longer a theoretical question.”

Another scenario he raised is a “world-shaking cyberattack” that could happen as early as this year. Avoiding it, he said, would require “an enormous amount of work.”

He presented OpenAI’s policy ideas as a starting point, with the aim of promoting debate on how to manage systems that learn quickly and act across many domains. According to him, it is important to use AI to defend against these potential attacks.

Regarding the potential nationalization of OpenAI, Altman said the case against a government takeover rests on the need for the United States to achieve “superintelligence” before its rivals.

“The main argument against nationalization would be that we need the United States to succeed in building superintelligence in a way that is aligned with American democratic values before anyone else does,” he said. “It probably wouldn’t work as a government project, I think that’s a sad thing.”

Altman nonetheless said he believes companies involved in AI need to work closely with the U.S. government.

Given his role at OpenAI, Altman also has a financial stake in how the industry evolves. That stake could shape how he views both the urgency of regulation and the role of private companies like OpenAI in managing emerging risks, and in turn the company’s competitive position.

AI as a utility

Energy is one area where he sees rapid progress: greater processing capacity could bring costs down even as demand for AI grows.

Altman also pointed to early signs of workforce changes. A programmer in 2026, he says, already works differently than a year earlier.

AI, he said, will become a kind of utility, like electricity, embedded in all devices; the cost of basic intelligence will decline while cutting-edge systems remain expensive.

“You will have this great personal assistant running in the cloud,” Altman said. “If you use it a lot or at high intelligence levels your bill will be higher for a month and if you use it less your bill will be lower.”

Altman added that it is “extremely important that the people developing AI are people of high integrity and trustworthiness.”
