Being single on Valentine’s Day can be depressing, but finding comfort in conversations with an AI assistant may be no less so. Not only do they have no personality, but the only thing they really desire is your personal data.
Surfshark privacy experts found that four of the five most popular AI companion apps on the Apple App Store may track your personal data for profit.
“Instead of being there for us, they can look more like surveillance tools,” said Surfshark cybersecurity expert Miguel Fornés, stressing how tracking by AI companions can shake users’ trust while invading their privacy.
AI companions: which are the hungriest for data?
The Surfshark team closely inspected the data collection practices of five AI companion apps. The details came from the Apple App Store and cover the number, type, and handling of the data types each application collects.
Among the applications analyzed – Kindroid, Nomi, Replika, Eva, and Character AI – 80% “may use data to track their users.”
Tracking, the experts explain, refers to linking user or device data collected from the app with user or device data collected from other apps and websites for targeted advertising purposes. Tracking also includes sharing user or device data with data brokers.
“This detailed data can allow companies to influence your choices, which can have negative effects such as overwhelming advertising, financial risks, or other unexpected problems,” said the Surfshark cybersecurity expert.
Character AI was the service most in love with user data. While the average across the apps was 9 unique data types collected out of a possible 35, Character AI rises above its competitors by collecting up to 15 of them. Eva was the second most data-hungry of the lot, collecting 11 data types. Worse, these two applications collect users’ approximate location to serve targeted ads.
Nomi was the only application to stand apart, claiming not to collect any data for tracking.
The data the services collect is not the only concern. App developers, Surfshark explains, could also access the information you willingly share during your conversations with the AI chatbot.
The danger here is that AI companion apps are designed to simulate human relationships such as friendship and love. You may therefore be willing to disclose even more sensitive information than you would to a ChatGPT-style chatbot.
“This can lead to unprecedented consequences, especially since AI regulations are only just emerging,” the experts note.
This is why Surfshark strongly advises taking precautions when using AI companion services to keep your personal data safe and minimize the risk of misuse.
Fornés said: “Make sure you frequently check these apps’ permissions, and be mindful of the information you share.”