- Almost half of IT teams don't know what data their AI agents access daily
- Companies love AI agents, but also fear what they do behind closed digital doors
- AI tools now need governance, audit trails and controls, just like human employees
Despite growing enthusiasm for agentic AI among companies, new research suggests the rapid expansion of these tools is outpacing efforts to secure them.
A SailPoint survey of 353 IT professionals with enterprise security responsibilities has revealed a complex mix of optimism and anxiety about AI agents.
The survey reports that 98% of organizations intend to expand their use of AI agents in the coming year.
AI agent adoption is outpacing security
AI agents are being integrated into operations that handle sensitive business data, from customer and financial records to legal documents and supply chain transactions – yet 96% of respondents said they consider these same agents a growing security threat.
A central problem is visibility: only 54% of professionals claim full awareness of the data their agents can access – leaving nearly half of enterprise environments in the dark about how AI agents interact with critical information.
Compounding the problem, 92% of those surveyed agreed that governance of AI agents is crucial for security, yet only 44% have an actual policy in place.
In addition, eight out of ten companies say their AI agents have taken actions they were not supposed to – including accessing unauthorized systems (39%), sharing inappropriate data (33%) and downloading sensitive content (32%).
Even more troubling, 23% of respondents admitted that their AI agents had been tricked into revealing access credentials – a potential gold mine for malicious actors.
A notable insight is that 72% believe AI agents pose greater risks than traditional machine identities.
Part of the reason is that AI agents often require multiple identities to operate effectively, especially when integrated with high-performance AI tools or systems used for development and writing.
Calls for a shift to an identity-first security model are growing louder, with SailPoint and others maintaining that organizations must treat AI agents like human users, with access controls, accountability mechanisms and complete audit trails.
AI agents are a relatively new addition to the commercial space, and it will take time for organizations to integrate them fully into their operations.
“Many organizations are still early in this journey, and growing concerns around data control underscore the need for stronger, more comprehensive identity security strategies,” said SailPoint.