- The failure on the chatppt server side allows attackers to steal data without any user interaction
- Shadowleak completely bypasses traditional terminal security
- Millions of professional users have been exposed due to Shadowleak exploits
Companies are increasingly using AI tools such as Chatgpt depth research agent to analyze emails, CRM data and internal reports for strategic decision-making, experts warned.
These platforms offer automation and efficiency, but also introduce new security challenges, especially when sensitive commercial information is involved.
Radware recently revealed a zero flaw click in the in -depth research agent of Chatgpt, nicknamed “Shadowleak”, but unlike traditional vulnerabilities, this defect exfiltrates the sensitive data secretly.
Shadowleak: a zero -click feat, server side
It allows attackers to fully exfiltrate sensitive data from Openai servers, without requiring user interaction.
“This is the ultimate at zero -click attack,” said David Aviv, technology director at Radware.
“No user action is required, no visible clue, and no means for victims to know that their data has been compromised. Everything happens entirely behind the scenes thanks to actions of autonomous agent on the servers of Cloud Openai. ”
Shadowleak also operates independently of termination points or networks, making detection extremely difficult for business security teams.
The researchers have shown that sending an email with hidden instructions could trigger the research agent in depth to disclose information independently.
Pascal Geenens, director of the Cyber-Menace Intelligence at Radware, explained that “companies adopting AI cannot count on integrated guarantees alone to prevent abuses.
“AI workflows can be manipulated in a non -anticipated way, and these attack vectors often bypass the visibility and detection capacities of traditional security solutions.”
Vulnerability represents the first exfiltration of zero data click on the server side, leaving almost no evidence from the point of view of companies.
With Chatgpt reporting more than 5 million paid commercial users, the potential extent of the exposure is substantial.
Human surveillance and strict access controls remain essential when sensitive data is connected to autonomous AI agents.
Consequently, organizations adopting AI must approach these tools with caution, permanently assess security gaps and combine technology with informed operational practices.
How to stay safe
- Put the cybersecurity defenses in layers to protect against several types of attacks simultaneously.
- Regularly monitor AI -focused workflows to detect unusual activity or potential data leaks.
- Deploy the best antivirus solutions through systems to protect against traditional malware attacks.
- Maintain protection of robust ransomware to protect sensitive information from lateral movement threats.
- Apply strict access controls and user authorizations for interacting AI tools with sensitive data.
- Ensure human surveillance when autonomous AI agents access or process sensitive information.
- Implement the journalization and audit of the activity of AI agents to identify the anomalies early.
- Integrate additional AI tools for the detection of anomalies and automated safety alerts.
- Educate employees on AI threats and the risks of the workflows of autonomous agents.
- Combine software defenses, best operational practices and continuous alertness to reduce exposure.