- Usage of GenAI SaaS tripled, with prompt volumes increasing sixfold in one year.
- Nearly half of users rely on unauthorized “shadow AI,” creating significant visibility gaps.
- Sensitive data leaks have doubled, with insider threats linked to the use of personal cloud applications.
Generative artificial intelligence (GenAI) may be great for productivity, but it comes with serious security and compliance complications. That’s according to a new report from Netskope, which says that as GenAI usage in the office skyrockets, so do incidents of policy violations.
In its Cloud and Threat Report 2026, released earlier this week, Netskope said that enterprise use of software-as-a-service (SaaS) GenAI is “rising rapidly,” with the number of people using tools like ChatGPT or Gemini tripling over the past year.
Users are also spending far more time with these tools: the volume of messages sent to GenAI apps has increased sixfold over the past 12 months, from 3,000 messages per month a year ago to more than 18,000 per month today.
Additionally, the top 25% of organizations send more than 70,000 prompts per month, and the top 1% more than 1.4 million.
Shadow AI
But many of these tools, and the ways they are used, have not been approved by the relevant departments and leaders. Nearly half (47%) of GenAI users rely on personal AI applications (so-called “shadow AI”), leaving the organization with no visibility into what data is being shared or which tools are processing it.
As a result, the number of incidents in which users send sensitive data to AI applications has doubled over the past year.
Today, the average organization faces a staggering 223 such incidents per month. Netskope also said that personal apps pose a “significant insider threat risk,” with 60% of insider threat incidents involving personal cloud app instances.
Regulated data, intellectual property, source code, and credentials are frequently sent to personal application instances in violation of organizational policies.
“Organizations will struggle to maintain data governance as sensitive information flows freely through untrusted AI ecosystems, leading to increased accidental data exposure and compliance risks,” the report concludes.
“Attackers, conversely, will exploit this fragmented environment, leveraging AI to perform hyper-efficient reconnaissance and craft highly personalized attacks targeting proprietary models and training data.”