- Anthropic claimed to have observed an AI-driven cyberattack carried out without substantial human intervention.
- Experts say this claim is likely inflated and downplays the human involvement required.
- The reports only highlight what security professionals already know: AI tools accelerate the attack process.
Anthropic recently reported that Chinese hackers had hijacked its Claude platform to launch cyberattacks orchestrated entirely by AI – but this claim has since been met with skepticism in the cybersecurity community.
It seems likely that, even though the AI carried out a significant portion of the attack (around 80-90%), the technology still required vital human input, since the AI cannot “think” for itself; it can only imitate.
Some researchers believe this is simply a marketing tactic to inflate the perceived capabilities of AI, or perhaps a fear-mongering campaign to fuel the narrative around the U.S.-China AI race.
Nothing new
“I continue to refuse to believe that attackers are somehow capable of making these models jump through hurdles that no one else can,” Dan Tentler, executive founder of Phobos Group, told Ars Technica.
“Why do the models give these attackers what they want 90 percent of the time, while the rest of us have to deal with sycophancy, sandbagging, and hallucinations?”
While it is true that AI has advanced by leaps and bounds in recent months, it is unlikely that it will be able to accomplish a range of complex tasks without human intervention. These tools are useful, but they enhance human capabilities rather than completely replacing them.
“The implication here is that the attacker was using existing tools, but used an AI agent to replace the human who would normally pilot those tools and move through the phases of the attack much more quickly,” said Tim Mitchell, principal security researcher, Sophos X-Ops Counter Threat Unit.
“From a defender’s perspective, this means there is nothing new to defend against here – but the window to spot and defend against an attack is significantly reduced.”
Another point to note is that, according to Anthropic’s own reporting, only a “small number” of the AI-driven attempts to infiltrate organizations were successful, although this could represent an early step in a rapidly evolving threat.
TechRadar Pro requested comment from Anthropic, but had not received a response at the time of publication.