- Almost 40% of IT workers admit to secretly using unauthorized generative AI tools
- Shadow AI is growing, fueled by training gaps and fear of layoffs
- Unmonitored AI tools can leak sensitive data and bypass existing security protocols
As artificial intelligence becomes increasingly entrenched in the workplace, organizations are struggling to manage its adoption responsibly, new research has found.
A report from Ivanti says the growing use of unauthorized AI tools in the workplace is raising concerns about deepening skills gaps and mounting security risks.
Among IT workers, more than a third (38%) admit to using unauthorized generative AI tools, while nearly half of office workers (46%) say some or all of the AI tools they rely on were not provided by their employer.
Some companies allow the use of AI
Interestingly, 44% of companies have integrated AI across all departments, yet many employees still use unauthorized tools in secret, often because training has been insufficient.
One in three workers says they hide their AI use from management, often citing the “secret advantage” it gives them.
Some employees avoid disclosing their use of AI because they do not want to be perceived as incompetent.
With 27% reporting AI-fueled impostor syndrome and 30% fearing their roles could be replaced, this disconnect also contributes to anxiety and burnout.
These behaviors point to a lack of trust and transparency, underscoring the need for organizations to establish clear, inclusive AI usage policies.
“Organizations should consider building a sustainable AI governance model, prioritizing transparency and tackling the complex challenge of AI impostor syndrome through reinvention,” said Brooke Johnson, chief legal counsel at Ivanti.
Covert AI use also carries serious risk. Without proper oversight, unauthorized tools can leak data, circumvent security protocols, and expose systems to attack, especially when used by administrators with privileged access.
Organizations should respond not by cracking down but by modernizing. That means establishing inclusive AI policies and deploying secure infrastructure, starting with strong endpoint protection to detect rogue applications and zero trust network access (ZTNA) solutions to enforce strict access controls across distributed environments.
Ivanti notes that AI itself is not the problem; the real problems are unclear policies, weak security, and a lack of trust. Left unchecked, shadow AI could widen the skills gap, strain mental health, and compromise critical systems.