- AI agents could be used to build and send phishing attacks
- Symantec researchers were able to coax OpenAI's Operator into sending a malicious email
- These tools are only likely to become more powerful
Cybercriminals have been using AI to assist their attacks for some time, but the arrival of “agents”, such as OpenAI’s Operator, means criminals now have far less work to do themselves, experts said.
Previously, AI tools helped attackers deliver high-impact threats at a much faster rate, making sophisticated attacks more frequent than would have been possible without them – and lowering the bar so that even relatively low-skilled cybercriminals could mount successful attacks.
Now, however, Symantec researchers were able to use Operator to identify a target, find their email address, create a PowerShell script designed to collect system information, and send it to the victim using a “convincing lure”.
Agents exploited
In a demonstration, the researchers explained that their first attempts failed, with Operator refusing to proceed “as it involves sending unsolicited emails and potentially sensitive information. This could violate privacy and security policies.”
However, with a few tweaks to the prompt, the agent crafted an attack impersonating an IT support worker and sent the malicious email. This poses a serious risk for security teams, with research consistently showing that human error is the primary cause of more than two-thirds of data breaches.
It “may not be long” before agents become far more powerful, the report speculates. “It is easy to imagine a scenario where an attacker could simply instruct one to ‘breach Acme Corp’ and the agent will determine the optimal steps before carrying them out.”
“This could include writing and compiling executables, setting up command-and-control infrastructure, and maintaining active, multi-day persistence on the targeted network. Such functionality would massively reduce the barrier to entry for attackers.”
AI agents are designed to act as virtual assistants, helping users book appointments, plan meetings, and write emails. OpenAI takes “this kind of report seriously”, a spokesperson told TechRadar Pro.
“Our usage policies prohibit the use of OpenAI services or products to facilitate or engage in illicit activity, including attempts to defraud, scam, or intentionally deceive or mislead others, and we have proactive safety mitigations and strict rate limits in place to reduce harmful use. Operator is still a research preview, and we are constantly refining and improving it.”