- 30% of Britons share confidential personal information with AI chatbots
- NymVPN's research shows that business and customer data are also at risk
- Underlines the importance of taking precautions, such as using a quality VPN
Nearly one in three Britons are sharing sensitive personal data with AI chatbots like OpenAI's ChatGPT, according to research from cybersecurity company NymVPN. 30% of Britons have fed AI chatbots confidential information such as health and banking data, potentially putting their privacy – and that of others – at risk.
This extends beyond ChatGPT to the likes of Google Gemini, despite 48% of respondents expressing privacy concerns about AI chatbots. The problem also reaches into the workplace, with employees sharing sensitive business and customer data.
NymVPN's findings come in the wake of a number of recent high-profile data breaches, including the Marks & Spencer cyber attack, which show how confidential data can fall into the wrong hands.
“Convenience is being prioritized over security”
NymVPN's research reveals that 26% of respondents admitted to disclosing financial information relating to salary, investments, and mortgages to AI chatbots. Even riskier, 18% have shared credit card or bank account data.
24% of those surveyed by NymVPN admit that they have shared customer data – including names and email addresses – with AI chatbots. Even more worryingly, 16% have uploaded company financial data and internal documents such as contracts. This is despite 43% expressing concern about sensitive company data being leaked by AI tools.
“AI tools have quickly become part of how people work, but we're seeing a worrying trend where convenience is being prioritized over security,” said Harry Halpin, CEO of NymVPN.
M&S, Co-op, and Adidas have all made headlines for the wrong reasons, having fallen victim to data breaches. “Large-scale breaches show how vulnerable even major organizations can be, and the more personal and business data that is fed into AI, the bigger a target it becomes for cybercriminals,” said Halpin.
The importance of not oversharing
With almost a quarter of respondents sharing customer data with AI chatbots, there is an urgent need for companies to implement clear guidelines and formal policies for AI use in the workplace.
“Employees and companies urgently need to think about how they protect both personal privacy and company data when using AI tools,” said Halpin.
Although avoiding AI chatbots entirely is the optimal solution for privacy, it is not always the most practical. At the very least, users should avoid sharing sensitive information with AI chatbots. Privacy settings can also be adjusted, such as disabling chat history or opting out of model training.
A VPN can add a layer of privacy when using AI chatbots such as ChatGPT by encrypting a user's internet traffic and masking their original IP address. This helps keep a user's location private and prevents their ISP from seeing what they do online. However, even the best VPN is not enough if sensitive personal data is still being fed to AI chatbots.