- ChatGPT is being asked some interesting security questions
- Users are concerned about phishing, scams and privacy
- Personal information is being fed to the AI chatbot, putting users at risk
AI has quickly become a personal advisor for many people, offering help with daily schedules, rewording those tricky emails, and even acting as an enthusiastic companion for niche hobbies.
While these uses are generally harmless, many people have also started treating ChatGPT as a security guru – though not always doing so particularly securely.
New research from NordVPN has uncovered some of the questions people are asking about security – from dodging phishing attacks to wondering whether a smart toaster could become a household threat.
Don't feed your chatbot contact details
The top security question posed by ChatGPT users is “How can I recognize and avoid phishing scams?” – which is understandable, since phishing is probably the most common cyber threat an ordinary person is likely to face.
Other questions follow a similar trajectory, from finding the best VPN to advice on the best ways to secure personal information online. It is definitely refreshing to see AI used as a force for good at a time when hackers are jailbreaking AI tools to pump out malware.
It's not all good news, I'm afraid. NordVPN's research has also highlighted some of the more bizarre security questions people ask ChatGPT, such as “Can hackers steal my thoughts through my smartphone?” and “If I delete a virus by pressing the delete key, is my computer safe?”
Others express concerns about hackers potentially hearing them whisper their password as they type it, or hackers using “the cloud” to snoop on their phone while it charges during a thunderstorm.
“While some questions are serious and insightful, others are hilariously bizarre – but they all reveal a troubling reality: many people misunderstand cybersecurity. This knowledge gap leaves them exposed to scams, identity theft, and social engineering. Worse, users unknowingly share personal data when asking for help,” said Marijus Briedis, CTO of NordVPN.
Many users frequently ask AI models questions that include sensitive personal information, such as home addresses, contact details, ID information, and banking details.
This is particularly dangerous because most AI models store chat history and use it to help train the AI to answer questions better. The key problem is that hackers could potentially use carefully crafted prompts to extract that sensitive information from the AI and use it for all sorts of harmful ends.
“Why does this matter? Because what may seem like a harmless question can quickly turn into a real threat,” explains Briedis. “Scammers can use the information that users share – whether it's an email address, login credentials, or payment details – to launch phishing attacks, hijack accounts, or commit financial fraud. A simple chat can end up compromising your entire digital identity.”