- The TrendAI report reveals that 67% of companies are forced to deploy GenAI despite security concerns.
- Key risks include exposure of sensitive data, malicious prompts, an expanded attack surface, and misuse of autonomous code.
- Governance gaps: Only 38% have AI policies, 57% say AI is evolving faster than it can be secured, and many lack visibility or kill switch mechanisms.
Companies are rushing to integrate generative artificial intelligence (GenAI) into their processes and operations, even though they know the risks involved. Worse, many don't know how to move forward while minimizing those risks.
A new report from TrendAI surveyed 3,700 business and IT decision-makers across 23 countries, revealing that the majority (67%) were under pressure to approve AI integration despite security concerns.
One in seven respondents (about 15%) described these concerns as “extreme,” yet approved the rollout anyway.
Not for lack of awareness
The report highlights the many risks associated with AI tools that keep business leaders up at night. For two in five respondents (40%), the biggest risk is AI agents accessing sensitive data, while more than a third (36%) are concerned about malicious prompts compromising security.
AI agents are programs that allow AI to operate applications or even entire computers. Malicious prompts, shared via phishing emails, for example, could cause AI agents to send sensitive data to hacking groups, change app settings, or even download malware.
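One common mitigation for the scenario described above is to gate every action an agent proposes through an allowlist, so that instructions injected via a phishing email cannot trigger unapproved operations. The following is a minimal, hypothetical sketch; the action names and the `gate_action` helper are illustrative, not part of any real product.

```python
# Hypothetical sketch of action-gating for an AI agent.
# Approved routine actions run freely; sensitive actions need
# explicit human sign-off; anything else is denied by default.

ALLOWED_ACTIONS = {"summarize_document", "draft_reply"}
SENSITIVE_ACTIONS = {"send_email", "change_settings", "download_file"}

def gate_action(action: str, human_approved: bool = False) -> bool:
    """Return True only if the agent may execute this action."""
    if action in ALLOWED_ACTIONS:
        return True
    if action in SENSITIVE_ACTIONS:
        # Sensitive actions require human approval, no matter
        # what the model's prompt (possibly injected) asked for.
        return human_approved
    return False  # deny unrecognized actions by default
```

The key design choice is deny-by-default: a malicious prompt can make the model *request* a dangerous action, but the gate, not the model, decides whether it runs.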
A third of respondents (33%) believe AI is creating a growing attack surface for criminals to exploit. The same percentage also fear abuse of trusted AI status and the risks of deploying autonomous code.
“Organizations don’t lack awareness of risk; they lack the conditions to manage it. When deployment is driven by competitive pressure rather than governance maturity, you create a situation where AI is embedded in critical systems without the controls necessary to manage it safely,” says Rachel Jin, chief platform and business officer at TrendAI.
Management and governance are harder to achieve than they seem, at least with AI. For more than half (57%), AI is advancing faster than it can be secured. This means that as soon as a system is implemented, new potential risks arise, forcing defenders to re-evaluate their position. Additionally, 55% of respondents reported only moderate confidence in their understanding of AI legal frameworks, and only 38% currently have comprehensive AI policies in place.
Regulations and Compliance
Finally, two in five respondents (41%) see unclear regulations and compliance standards as an obstacle to progress. This uncertainty creates a trap for organizations: employees turn to “shadow AI” – unauthorized tools that defenders have no visibility into – so no one knows what is being shared, or where that data ultimately ends up.
To claim they have safely integrated AI into their workflows, companies need two things, the researchers suggest: observability and auditability, and a “kill switch” mechanism. Currently, almost a third of respondents (31%) say they lack visibility across their AI systems.
When it comes to kill switch mechanisms, around 40% support the idea, but half (50%) don’t know how to implement one.
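In its simplest form, a kill switch for an agentic system is a shared flag that every agent step checks before acting, so an operator can halt all activity at once. The sketch below assumes nothing about any specific vendor; the `KillSwitch` class and step names are hypothetical illustrations of the pattern.

```python
import threading

class KillSwitch:
    """Minimal kill-switch sketch: a thread-safe flag that every
    agent step must check before executing. Illustrative only."""

    def __init__(self) -> None:
        self._stopped = threading.Event()

    def trip(self) -> None:
        """Operator action: halt all agent activity immediately."""
        self._stopped.set()

    def is_tripped(self) -> bool:
        return self._stopped.is_set()

def run_agent_step(switch: KillSwitch, step_name: str) -> str:
    """Execute one agent step, unless the switch has been tripped."""
    if switch.is_tripped():
        return f"blocked: {step_name}"
    return f"executed: {step_name}"
```

Using `threading.Event` keeps the check safe even when multiple agent workers run concurrently; the harder production questions – who is authorized to trip it, and how in-flight actions are rolled back – are governance decisions, not code.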
Despite regulatory and governance challenges and risks, sentiment around AI remains positive. In fact, almost half (44%) believe agentic AI will “significantly improve” cyber defense in the near term.
“Agentic AI brings organizations into a new risk category,” Jin added. “Our research shows that the concerns are already clear, from exposure of sensitive data to loss of oversight. Without visibility and control, organizations deploy systems that they do not fully understand or govern, and this risk will only increase if action is not taken.”
