- According to a report, 43% of organizations still have no plans for an AI policy.
- Currently, workers are adopting AI faster than companies are writing policies.
- Nexos.ai calls on SMEs to put basic policies in place – they can grow from there.
Even though 70% of legal workers already use general-purpose AI in their work, 43% of organizations say they still don’t have formal AI policies in place (and don’t plan to).
New research from Nexos.ai has found that the biggest risk from AI tools may actually come from a lack of visibility and governance.
And SMEs are generally the most at risk, simply because they have fewer resources, in terms of both staff and established procedures.
AI remains mostly unmanaged
Nexos.ai has found that employees routinely paste contracts, NDAs, or legal correspondence into public chatbots to save time, putting sensitive information at risk. While enterprise AI products promise strong data security and no training on customer data, the public versions offer no such guarantees.
Data security (46%) was cited as legal teams’ biggest concern, ahead of ethical issues (42%) and legal privilege (39%) – yet the way workers actually interact with public chatbots doesn’t match those stated concerns.
Nexos.ai also noted that SMEs may already be running legal AI workflows that were never formally established or recognized, since AI adoption is happening incrementally and without governance. That leaves companies playing catch-up, trying to define correct and safe use of AI after employees have already started using the tools.
“The risk for SMEs is not careless use of AI, but an invisible change in workflow,” wrote product manager Zilvinas Girenas.
But it doesn’t need to be difficult: the report explains that a basic AI policy doesn’t have to be complex. Defining approved tools, prohibited use cases, and restrictions on sensitive data might be enough – or at least better than the current state of governance.
Looking ahead, Nexos.ai suggests companies start with a simple AI policy that keeps sensitive data out of unapproved tools. The report calls on companies to approve tools before teams adopt them, and even once tools are in place, Nexos.ai still recommends human review before AI-generated content is used in legal work.
“If these tools are integrated before the company has defined approved use, data limits and review steps, efficiency comes faster than governance,” Girenas concluded.