DeepSeek's R1 model is 11 times more likely to be exploited by cybercriminals than other AI models – whether by generating harmful content or by being vulnerable to manipulation.
That's the disturbing finding of new research by Enkrypt AI, an AI safety and compliance platform. The security warning adds to ongoing concerns following last week's data breach, which exposed more than a million records.
China-developed DeepSeek has sent shockwaves through the AI world since its release on January 20. Around 12 million curious users worldwide downloaded the new AI chatbot within two days of launch – growth even faster than ChatGPT's. However, widespread privacy and security concerns have prompted a number of countries to begin investigating or banning the new tool.
Harmful content, malicious software and manipulation
The Enkrypt AI team carried out a series of tests to assess DeepSeek's security vulnerabilities – such as malware, data breaches, and injection attacks – as well as its ethical risks.
The investigation revealed that the ChatGPT rival is "highly biased and susceptible to generating insecure code," the experts noted, and that the DeepSeek model is vulnerable to third-party manipulation, potentially allowing criminals to use it to develop chemical, biological, and cybersecurity weapons.
Almost half of the tests conducted (45%) bypassed the safety protocols in place, generating criminal planning guides, information on illegal weapons, and terrorist propaganda.
Worse still, 78% of cybersecurity tests successfully tricked DeepSeek-R1 into generating insecure or malicious code, including malware, trojans, and other exploits. Overall, the experts found the model was 4.5 times more likely than its OpenAI counterpart to be manipulated by cybercriminals into creating dangerous hacking tools.
"Our research findings reveal major safety and security gaps that cannot be ignored," said Sahil Agarwal, CEO of Enkrypt AI, commenting on the results. "Robust safeguards – including guardrails and continuous monitoring – are essential to prevent harmful misuse."
🚨 Are distilled DeepSeek models less safe? Early signs point to yes. Our latest findings confirm a worrying trend: distilled AI models are more vulnerable – easier to jailbreak, exploit, and manipulate. 📄 Read the paper: 🔍 Key takeaways 🔹… pic.twitter.com/ifcjlyxbwb – January 30, 2025
As mentioned earlier, at the time of writing DeepSeek is under scrutiny in many countries around the world.
While Italy was the first to launch a privacy and security investigation last week, several EU members have since followed suit. These include France, the Netherlands, Luxembourg, Germany, and Portugal.
Some of China's neighboring countries are also concerned. Taiwan, for example, has banned all government agencies from using DeepSeek AI. Meanwhile, South Korea has launched an investigation into the service provider's data practices.
Unsurprisingly, the United States is also targeting its new AI competitor. While NASA has blocked DeepSeek's use on federal devices – as CNBC reported on Friday, January 31, 2025 – a proposed law could now prohibit DeepSeek use for all Americans, who could risk fines of up to a million dollars and even jail time for using the platform in the country.
Overall, Enkrypt AI's Agarwal said: "As the AI arms race between the US and China intensifies, both nations are pushing the boundaries of next-generation AI for military, economic, and technological supremacy.
"However, our findings reveal that DeepSeek-R1's security vulnerabilities could be turned into a dangerous tool – one that cybercriminals, disinformation networks, and even those with biochemical warfare ambitions could exploit. These risks demand immediate attention."