- The EU AI Act requires AI systems to be explainable and accountable
- Only 38% of employees can accurately identify who is responsible for AI incidents at their company
- More than half (59%) don’t know how quickly they could shut down an AI system in a crisis
Despite the rapid adoption of AI, new research from ISACA suggests many businesses may be jumping into it blindly: more than half (59%) of UK businesses wouldn’t even know how quickly they could shut down AI in a crisis.
Only about one in five (21%) say they feel confident shutting down an AI system within 30 minutes, highlighting significant security gaps.
And shutdown isn’t the only problem: fewer than half (42%) say they could explain an AI failure to managers or regulators.
Are companies blind to the risks of AI?
ISACA explained that the gaps extend beyond business operations and reputation into regulatory compliance: the EU AI Act requires AI systems to be explainable and accountable.
Part of the failure stems from a lack of clarity around accountability, with 20% of workers unclear who is responsible when AI fails. Poor visibility is also a contributing factor: one in three organizations does not require employees to disclose their use of AI at work, which ISACA calls a blind-spot nightmare.
The report argues that companies currently treat this as a purely technical problem, when it is really an organization-wide governance challenge. “Simply changing processes is not enough to truly close the gap,” wrote Chris Dimitriadis, ISACA’s chief global strategy officer. “Rather, it will require professionals with the expertise to rigorously assess AI risks and integrate monitoring throughout the lifecycle.”
Going forward, companies are encouraged to define accountability at the executive level, deploy better visibility and auditing, and integrate AI incident response into their broader cybersecurity posture.
With only 38% of respondents identifying the board or an executive as responsible in the event of an AI incident, it is clear that more needs to be done to communicate accountability and response processes to staff.