- Big Sleep is an AI-powered vulnerability hunter built by DeepMind and Project Zero
- The first batch of 20 vulnerabilities it spotted has been announced
- The details are under wraps to give developers time to fix them
Big Sleep, Google's AI-powered bug-hunting tool, has reported its first batch of 20 security vulnerabilities in open source software.
Developed by Google's DeepMind AI team and its Project Zero security team, the tool found its first vulnerabilities in FFmpeg and ImageMagick, but the details of those vulnerabilities remain undisclosed until they have been fixed.
Google says that Big Sleep marks a significant step forward in application security, with an AI capable of independently discovering and reporting vulnerabilities that would otherwise fall to human security researchers.
Big Sleep digs up dirt on open source software bugs
Each of the 20 bugs was found and reproduced independently by Big Sleep, although Google notes that a human expert reviews the findings before they are reported publicly – an important human check that tempers concerns about false positives or hallucinated bugs, ensuring that issues are actually worth reporting to their respective developers.
Finer details such as CVE IDs, technical explanations and proof-of-concept code are being withheld for now as part of Google's 90-day disclosure policy, which gives developers time to patch vulnerabilities before attackers can exploit them.
“In November 2024, Big Sleep was able to find its first real-world security vulnerability, showing the immense potential of AI to plug security holes before they impact users,” said Google President of Global Affairs Kent Walker in a blog post.
VP of Security Engineering Heather Adkins announced the news in a post on X: “Today, as part of our commitment to transparency in this space, we are proud to announce that we have reported the first 20 vulnerabilities discovered using our ‘Big Sleep’ AI-based system powered by Gemini.”
Google maintains a full list of the vulnerabilities, which currently includes the first 20, separated into high-, medium- and low-impact issues.
Google will provide a full technical briefing at the upcoming Black Hat USA and DEF CON 33 events, and will contribute anonymized training data to its secure AI framework so that other researchers can benefit from the technology.