- Experts tore apart an MIT paper for making evidence-free claims about AI
- Kevin Beaumont dismissed the findings as "almost complete nonsense" presented without evidence
- Marcus Hutchins also mocked the research, saying he laughed even harder when he read its methodology
The MIT Sloan School of Management was forced to withdraw a working paper claiming that AI played a “significant role” in most ransomware attacks after widespread criticism from experts.
The study, co-authored by MIT researchers and Safe Security executives, claims that “80.83% of recorded ransomware events were attributed to malicious actors using AI.”
Published earlier in 2025 and later cited by several media outlets, the report immediately attracted scrutiny because it presented extraordinary numbers with little evidence.
Questionable research
Among the critics was prominent security researcher Kevin Beaumont, who called the paper “absolutely ridiculous” and its findings “almost complete nonsense.”
“It describes almost every major ransomware group as using AI – without any evidence (that’s not true either, I monitor a lot of them),” Beaumont wrote in a thread on Mastodon.
“It even talks about Emotet (which hasn’t existed for many years) as being AI-driven.”
Cybersecurity expert Marcus Hutchins agreed: “I laughed out loud at the title,” he said, “and when I read their methodology, I laughed even harder.”
He also criticized the paper for damaging the public’s understanding of threats such as ransomware and of malware removal practices.
Following the backlash, MIT Sloan removed the document from its site and replaced it with a note stating that it was “being updated based on recent criticism.”
Michael Siegel, one of the authors, confirmed that revisions were underway.
“We received comments on the discussion paper recently and are working as quickly as possible to provide an updated version,” Siegel said.
“The main points of the paper are that the use of AI in ransomware attacks is growing, that we should find a way to measure it, and that there are things companies can do now to prepare.”
In short, he said, the paper is not meant to assert a definitive overall percentage; it is a call to find ways to measure AI’s role in cyberattacks.
Even Google’s AI-powered search assistant rejected the claim, saying the figure was “not supported by current data.”
The controversy reflects a growing tension in cybersecurity research, where enthusiasm for AI can sometimes trump factual analysis.
AI has real potential on both the attack and defense sides, so using it to improve ransomware protection, automated threat detection, and antivirus systems is welcome. Exaggerating its malicious use, however, risks distorting priorities, especially when the claims come from an institution as prominent as MIT Sloan.
Via The Register