- OpenAI increases its bug bounty payouts
- Identifying high-impact vulnerabilities could earn researchers up to $100,000
- The move comes as more AI agents and systems are developed
OpenAI hopes to encourage security researchers to identify security vulnerabilities by increasing the rewards it pays for finding bugs.
The AI giant revealed it has raised the maximum payout in its security bug bounty program from $20,000 to $100,000, and is widening the scope of its Cybersecurity Grant Program, as well as developing new tools to protect AI agents against malicious threats.
This follows recent warnings that AI agents can be hijacked to write and send phishing attacks, and the company says it wants to demonstrate its "commitment to rewarding meaningful, high-impact security research that helps us protect users and maintain trust in our systems".
Disrupting threats
Since launching the Cybersecurity Grant Program in 2023, OpenAI has reviewed thousands of applications and funded 28 research initiatives, helping the company gain valuable insight into security topics such as autonomous cybersecurity defenses, prompt injection, and secure code generation.
OpenAI says it continuously monitors malicious actors seeking to exploit its systems, and identifies and disrupts targeted campaigns.
"We are not only defending ourselves," the company said, "we share intelligence with other AI laboratories to strengthen our collective defenses."
OpenAI is not the only company raising its bounty program: in 2024, Google announced a fivefold increase in its bug bounty rewards, arguing that more secure products make bugs harder to find, which is reflected in higher payouts.
With more advanced models and agents, and more users and developers, there are inevitably more points of vulnerability that could be exploited, so the relationship between researchers and software developers is more important than ever.
"We are engaging researchers and practitioners across the cybersecurity community," OpenAI said.
"This allows us to benefit from the latest thinking and share our findings with those working toward a more secure digital world. To train our models, we partner with experts across academic, government, and commercial labs to benchmark skills gaps and obtain structured examples of advanced reasoning in the field of cybersecurity."
Via Cybernews