OpenAI admits new models likely to pose ‘high’ cybersecurity risk


  • OpenAI warns that future LLMs could contribute to zero-day development or advanced cyberespionage
  • Company invests in defensive tools, access controls and a multi-tiered cybersecurity program
  • New Frontier Risk Council will guide responsible safeguards and capabilities across frontier models

OpenAI’s future large language models (LLMs) could pose higher cybersecurity risks because, in theory, they might be capable of developing functional zero-day remote exploits against well-defended systems, or of contributing significantly to complex and stealthy cyberespionage campaigns.

This is according to OpenAI itself, which stated in a recent blog post that the cyber capabilities of its AI models are “advancing rapidly.”
