- South Korea's privacy watchdog has temporarily halted DeepSeek downloads
- DeepSeek is working with the authorities to become compliant
- The latest in a series of privacy concerns raised about AI chatbots
South Korea's Personal Information Protection Commission (PIPC) has temporarily halted new downloads of the Chinese-owned DeepSeek chatbot.
Reports from TechCrunch confirm that the app remains operational for those who have already installed it, and that the decision will not affect existing use – but new downloads will be suspended until the Chinese company complies with Korean privacy laws.
South Korea is not the first country to block new downloads of the chatbot: the model disappeared from the Italian App Store and Google Play Store at the end of January 2025, after that country's watchdog filed a privacy complaint and asked for information on how DeepSeek handles users' personal information.
Recurring concerns
DeepSeek has since appointed a local representative to work with the authorities in South Korea, but the data protection agency said it "strongly advises" current users to refrain from entering personal data into DeepSeek until a final decision is made – here is everything we know so far.
The restriction is temporary while the PIPC assesses DeepSeek's data use and storage, and the agency has confirmed that the model will be available for download again once it is compliant.
The PIPC noted that DeepSeek had transferred South Korean users' data to ByteDance – TikTok's parent company. TikTok, as many will remember, was briefly banned in the United States over privacy and security concerns.
DeepSeek is not the first AI model to come under scrutiny over privacy concerns. The nature of large language models makes them something of a privacy minefield, because they scrape every corner of the internet for the data that forms their models – without the consent of the owners, authors, or creators of the media they use.
Furthermore, OpenAI never asked people for permission to use their data, and there is no way for an individual to confirm what data is used or stored – or to have it deleted. This contradicts a key facet of GDPR, which protects the right to be forgotten and should guarantee individuals the ability to have their personal data erased on request.
As the new kid on the block, DeepSeek is under the spotlight for several reasons – and there have been legitimate concerns about how the platform collects and stores your personal information, such as your email address, your name, and your date of birth, as well as the data you enter into the chatbot and technical details about the device you use, such as its IP address and operating system.
Using AI safely
So is DeepSeek safe to use? And can it be used while maintaining your privacy? Well, there are things you can do to mitigate the risks.
As with all LLMs, if you are concerned about data privacy, using AI is probably not a good idea. LLMs scrape internet data without authorization and will use your interactions to add to the pool of data the model is trained on, and that is not something you can opt out of – DeepSeek included.
If you are in South Korea or Italy and still want to download DeepSeek, even the best VPN services will need a little extra help – since a VPN alone won't get around an app store block, you will have to download the app from somewhere else. This is not something we generally recommend, as it can be an easy way to be tricked into downloading malware – so proceed with caution.
Regarding cybersecurity risks, there have been reports that DeepSeek is "incredibly vulnerable" to attacks and failed to block harmful prompts when tested, underperforming against its competitors.
You should be wary when using these chatbots – especially on a company device, or if you work in an industry with national security connections – there is a reason the Australian and Indian ministries have blocked the use of DeepSeek on work devices.
As a general rule, be especially cautious about the information you provide to a chatbot. Do not enter your health information, your financial data, or anything you wouldn't want a third party to know. Monitor your accounts regularly for any suspicious activity and report anything suspicious as soon as you spot it.
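If you do use a chatbot for work, one practical way to follow this advice is to strip obvious personal details from a prompt before sending it. The sketch below is purely illustrative – the regex patterns are naive assumptions for demonstration and will miss many real-world PII formats; proper PII detection requires far more robust tooling.

```python
import re

# Illustrative sketch only: a few naive patterns for common PII.
# These are assumptions for demonstration and will miss many cases.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3,4}[-.\s]?\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a PII pattern with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "My email is jane.doe@example.com and my phone is 555-123-4567."
print(redact(prompt))
```

The point is not that a regex filter makes a chatbot safe – it doesn't – but that anything you don't scrub out yourself should be assumed to enter the provider's data pool.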