- Microsoft reveals it is developing an AI-powered threat detection tool
- Project Ire has so far scored well in precision tests
- The tool has the potential to become the “gold standard” for malware classification
Microsoft has introduced a new AI tool which, according to the company, has the potential to meet the “gold standard” of malware detection, identification and classification.
Although it is still only a working prototype, Project Ire has shown real promise in its ability to detect and reverse-engineer malware without any context about a file’s origin or purpose.
Microsoft plans for Project Ire to be incorporated into Microsoft Defender as a “binary analyzer” used to identify malware in memory, from any source, at first encounter.
Autonomous malware detection
The tool is still in the early stages of development, but in Microsoft’s real-world scenario tests, Project Ire correctly identified nearly 9 out of 10 of the files it flagged as malicious (precision), while catching only a little more than a quarter of all malicious software (recall). In these initial tests, the false positive rate was 4%.
“Although the overall performance is moderate, this combination of precision and a low error rate suggests real potential for future deployment,” Microsoft said in a blog post. Moreover, in this test the AI tool had no prior knowledge of, and had never previously encountered, any of the 4,000 files it scanned.
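For readers less familiar with these metrics, the short Python sketch below shows how precision, recall and false positive rate are derived from raw classification counts. The counts used are hypothetical, chosen only to illustrate how a high precision can coexist with a low recall; they are not Microsoft’s actual test data.

```python
# Illustrative only: how precision, recall and false positive rate relate
# to a classifier's raw counts. The numbers are hypothetical, not data
# from Microsoft's Project Ire evaluation.

def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute the metrics cited in the article from confusion-matrix counts."""
    precision = tp / (tp + fp)            # share of flagged files that are truly malicious
    recall = tp / (tp + fn)               # share of all malicious files that were caught
    false_positive_rate = fp / (fp + tn)  # share of benign files wrongly flagged
    return {
        "precision": round(precision, 2),
        "recall": round(recall, 2),
        "false_positive_rate": round(false_positive_rate, 2),
    }

# Hypothetical counts roughly in line with the rates reported for the first test
print(classification_metrics(tp=90, fp=10, fn=250, tn=240))
# -> {'precision': 0.9, 'recall': 0.26, 'false_positive_rate': 0.04}
```

In other words, a tool can be trusted when it does raise an alarm (high precision, low false positive rate) while still missing most malicious files overall (low recall), which is the trade-off the initial results describe.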
The tool generates a report on each potentially malicious file it identifies, summarizing why certain parts of the file suggest it may be malware.
In a separate test against a public dataset containing a mix of legitimate and malicious Windows drivers, the tool again detected 9 out of 10 malicious files, this time with a false positive rate of 2%. Recall was also significantly higher, reaching 0.83 in this test.
Going forward, Microsoft will continue working to improve Project Ire’s ability to detect malware quickly and accurately at scale, with the eventual goal of incorporating the AI into Microsoft Defender as a threat detection and software classification tool.
Threat actors are increasingly turning to AI tools to generate malware at scale, but cybersecurity organizations are also using AI technology to fight back.