- Palo Alto Warns Rapid AI Adoption Expands Cloud Attack Surfaces, Raising Unprecedented Security Risks
- Excessive permissions and misconfigurations drive incidents; 80% stem from identity issues, not malware
- Non-human identities outnumber human ones and are poorly managed, creating exploitable entry points for adversaries
Rapid enterprise adoption of cloud-native artificial intelligence (AI) tools and AI services is dramatically expanding cloud attack surfaces and exposing businesses to higher risks than ever before.
That’s according to the “State of Cloud Security Report,” a new paper published by cybersecurity researchers at Palo Alto Networks.
According to the paper, there are a few major issues with AI adoption: the speed at which AI is deployed, the permissions it is granted, misconfigurations, and the rise of non-human identities.
Permissions, Misconfigurations, and Non-Human Identities
Palo Alto says organizations are deploying their workloads faster than they can secure them – often without complete visibility into how tools access, process or share sensitive data.
In fact, the report states that more than 70% of organizations are now using AI-enabled cloud services in production, up sharply year-over-year. The speed at which these tools are deployed is now seen as a major contributor to an “unprecedented increase” in security risks in the cloud.
Then there is the problem of excessive permissions. AI services often require extensive access to cloud resources, APIs, and data stores, and the report shows that many organizations grant overly permissive identities to AI-driven workloads. According to the study, 80% of cloud security incidents in the past year stemmed from identity-related issues rather than malware.
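As a rough illustration of what auditing for over-permissioned identities can look like, the sketch below flags policies that grant wildcard actions on wildcard resources. The policy names and data shapes are hypothetical, not taken from the report or any specific cloud provider:

```python
# Hypothetical IAM-style policy documents; wildcard actions combined
# with wildcard resources are a classic sign of over-permissioning.
policies = {
    "ai-worker": {"actions": ["*"], "resources": ["*"]},
    "report-reader": {"actions": ["storage:GetObject"],
                      "resources": ["projects/reports/*"]},
}

def overly_permissive(policies):
    """Flag policies granting wildcard actions on wildcard resources."""
    return [name for name, p in policies.items()
            if "*" in p["actions"] and "*" in p["resources"]]

print(overly_permissive(policies))  # ['ai-worker']
```

Real cloud audits would pull live policy documents from the provider's IAM APIs, but the principle is the same: least privilege means no identity, human or otherwise, holds broader access than its workload needs.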
Palo Alto also highlighted that misconfigurations are a growing problem, especially in environments supporting AI development. Storage buckets, databases, and training pipelines used for AI are often left exposed, and malicious actors are increasingly exploiting these misconfigurations rather than simply deploying malware.
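A minimal sketch of the kind of misconfiguration check involved, assuming a simplified inventory of bucket settings (the names and fields below are invented for illustration):

```python
# Hypothetical bucket configurations, illustrating the exposed-storage
# misconfigurations the report describes.
buckets = [
    {"name": "ai-training-data", "public_read": True, "encrypted": False},
    {"name": "model-artifacts", "public_read": False, "encrypted": True},
]

def find_exposed(buckets):
    """Flag buckets that are publicly readable or stored unencrypted."""
    return [b["name"] for b in buckets
            if b["public_read"] or not b["encrypted"]]

print(find_exposed(buckets))  # ['ai-training-data']
```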
Finally, the research highlights a rise in non-human identities, such as service accounts, API keys, and automation tokens used by AI systems. In many cloud environments, non-human identities now outnumber human ones, and many of them are poorly monitored, rarely rotated, and difficult to attribute to an owner.
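One concrete way teams tackle the rotation problem is to scan their credential inventory for keys older than a rotation window. The sketch below assumes a hypothetical inventory format and a common (but arbitrary) 90-day threshold:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory of non-human identities (service accounts,
# API keys); the field names and fixed "now" are illustrative.
now = datetime(2025, 6, 1, tzinfo=timezone.utc)
identities = [
    {"id": "svc-training-pipeline", "last_rotated": now - timedelta(days=400)},
    {"id": "svc-inference-api", "last_rotated": now - timedelta(days=30)},
]

def stale_keys(identities, max_age_days=90):
    """Return identity IDs whose credentials exceed the rotation window."""
    cutoff = now - timedelta(days=max_age_days)
    return [i["id"] for i in identities if i["last_rotated"] < cutoff]

print(stale_keys(identities))  # ['svc-training-pipeline']
```

A long-forgotten pipeline key like the one flagged here is exactly the kind of poorly monitored entry point the report warns about.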
“The rise of large language models (LLMs) and agentic AI is pushing the attack surface beyond traditional infrastructure,” the report concludes.
“Adversaries target LLM tools and systems, the underlying infrastructure that supports model development, the actions taken by those systems, and, most importantly, their memory reserves. Each represents a potential point of compromise.”