- OpenAI warns that future LLMs could contribute to zero-day development or advanced cyberespionage
- Company invests in defensive tools, access controls and a multi-tiered cybersecurity program
- New Frontier Risk Council will guide responsible safeguards and capabilities across frontier models
OpenAI’s future large language models (LLMs) could pose higher cybersecurity risks because, in theory, they might be capable of developing functional zero-day remote exploits against well-defended systems, or of contributing significantly to complex, stealthy cyberespionage campaigns.
This is according to OpenAI itself, which in a recent blog stated that the cyber capabilities of its AI models are “advancing rapidly.”
While this may sound grim, OpenAI sees the situation from a positive perspective, saying the advancements also bring “significant benefits for cyber defense.”
To prepare in advance for future models that could be abused in this way, OpenAI said it is “investing in hardening models for defensive cybersecurity tasks and creating tools that make it easier for defenders to perform workflows such as code auditing and patching vulnerabilities.”
The best way to do this, according to the blog, is a combination of access controls, infrastructure hardening, egress controls and monitoring.
Additionally, OpenAI announced that it will soon introduce a program giving users and customers working on cybersecurity tasks phased access to enhanced capabilities.
Finally, the Microsoft-backed AI giant announced plans to create an advisory group called the Frontier Risk Council. The group will be made up of seasoned cybersecurity experts and practitioners and, after an initial focus on cybersecurity, is expected to expand its remit to other areas.
“Members will advise on the line between useful, responsible capability and potential misuse, and those lessons will directly inform our evaluations and safeguards. We will share more about the council soon,” the blog reads.
OpenAI also said that cyber misuse could arise “from any frontier model in the industry,” which is why it is part of the Frontier Model Forum, where it shares knowledge and best practices with industry partners.
“In this context, threat modeling helps mitigate risks by identifying how AI capabilities could be weaponized, where critical bottlenecks exist for different threat actors, and where frontier models could provide significant uplift.”