- CodeMender automatically generates security fixes for open source projects
- Google DeepMind says CodeMender reduces vulnerability workloads by validating its own fixes
- DeepMind plans a broader developer release once the tool's reliability is confirmed
Google DeepMind has unveiled CodeMender, an artificial intelligence agent that it says can automatically detect and fix software vulnerabilities before they can be exploited by hackers.
Google's AI research arm claims the new tool can secure open source projects by generating fixes that are applied once they have been reviewed by human researchers.
CodeMender is built on DeepMind's Gemini Deep Think model and uses several program analysis techniques, including fuzzing, static analysis, and differential testing, to identify the root causes of bugs and prevent regressions.
Helping humans, not replacing them
Raluca Ada Popa, a principal research scientist at DeepMind, and John “Four” Flynn, its vice president of security, said the system has already delivered dozens of fixes.
“In the six months that we've been building CodeMender, we have already upstreamed 72 security fixes to open source projects, including some as large as 4.5 million lines of code,” wrote Popa and Flynn in a DeepMind blog post.
The company says CodeMender can act both reactively and proactively, patching newly discovered flaws and rewriting code to eliminate entire classes of vulnerabilities.
The system should ultimately reduce the security maintenance workload by validating its own fixes before submitting them for human review.
The review stage is something Google is keen to emphasize, noting that CodeMender is not there to replace humans, but rather to act as a helpful agent and keep pace with the growing volume of vulnerabilities that automated systems can detect.
In one case, the team says CodeMender automatically applied -fbounds-safety annotations to parts of the libwebp image compression library, a step DeepMind claims would have prevented past exploits.
The annotations cause the compiler to enforce buffer bounds checks, reducing the risk of overflow-based attacks.
The developers also acknowledge the growing use of AI by malicious actors and argue that defenders need equivalent tools.
DeepMind plans to expand testing with open source maintainers and, once the tool's reliability is proven, hopes to release CodeMender for broader developer use.
Google has also updated its Secure AI Framework and launched a new vulnerability reward program for AI flaws.