- The Australian Department of Home Affairs has banned the use of DeepSeek
- India's Finance Ministry has also warned against its use
- Concerns centre on the privacy and security of AI models
The new AI chatbot DeepSeek made headlines, briefly becoming the most popular chatbot in the world within 10 days of launch, overtaking established models such as ChatGPT and Gemini.
However, new research has found the DeepSeek chatbot to be “incredibly vulnerable” to attacks, raising national security concerns that led the Australian Department of Home Affairs to ban the use of the model on federal government devices.
The policy, published on February 4, 2025, states that the use of DeepSeek products and web services “poses an unacceptable level of security risk” to the Australian government, and warns that departments must manage the risks of the “extensive collection of data” and the exposure of that data to “extrajudicial directions from a foreign government that conflict with Australian law”.
Following the trend
Australia is not alone. India's Ministry of Finance has also asked its employees to avoid using AI tools such as DeepSeek and ChatGPT for official purposes, citing risks to confidential government documents and data.
Likewise, the US Navy has banned the use of DeepSeek in “any capacity” over potential security and ethical concerns, while Italy's data protection regulator blocked the app after judging the information the company provided “completely insufficient”.
AI companies such as OpenAI and DeepSeek collect vast amounts of data from across the internet to train their chatbots, and have run into data privacy problems around the world.
Beyond that, some models have worrying privacy policies. OpenAI, for example, never asked individuals for consent to use their data, and there is no way for them to verify what information has been stored.