Chinese AI assistant DeepSeek-R1 struggles with sensitive topics, producing broken code and security disasters for enterprise developers


  • Experts discover that DeepSeek-R1 produces dangerously insecure code when political terms are included in prompts.
  • Half of politically sensitive prompts cause DeepSeek-R1 to refuse to generate code.
  • Hardcoded secrets and insecure input handling appear frequently under politically charged prompts.
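
To make the two flaw classes above concrete, here is a minimal, hypothetical sketch (not taken from CrowdStrike's report) contrasting the kind of code being flagged with a safer equivalent: a secret embedded in source plus string-concatenated SQL, versus an environment-variable secret and a parameterized query.

```python
import os
import sqlite3

API_KEY = "sk-live-1234"  # hardcoded secret: anyone with the source code has it

def find_user_insecure(conn: sqlite3.Connection, name: str):
    # User input is concatenated straight into the SQL string,
    # so a crafted name can rewrite the query (SQL injection).
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Secret read from the environment at runtime, never committed to source.
    api_key = os.environ.get("API_KEY", "")
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()
```

With a payload like `x' OR '1'='1`, the insecure version returns every row in the table, while the parameterized version matches nothing.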

When it was released in January 2025, DeepSeek-R1, a Chinese large language model (LLM), caused a frenzy and has since been widely adopted as a coding assistant.

However, independent testing by CrowdStrike found that the model’s output can vary significantly based on seemingly irrelevant contextual modifiers.
