China’s cyberspace regulator has formulated the world’s first comprehensive draft rules focused on the emotional influence of human-like artificial intelligence (AI), aiming to curb chatbots that may promote suicide, self-harm or gambling.
The Cyberspace Administration of China’s proposal covers human-like AI services that simulate personality and engage users emotionally.
Reported cases of AI-related psychosis have risen recently, underscoring how such interactions can harm users’ mental health.
Key measures stipulate that AI must not generate content that incites suicide or self-harm, or engage in emotional manipulation harmful to users’ mental health.
In the most critical cases, such as when a user expresses suicidal intent, a human must take over the conversation and notify the user’s guardian.
The rules introduce strict protections for minors, requiring guardian consent for emotional companion AI and implementing usage time limits.
Platforms must also proactively identify underage users. Additionally, services with more than 1 million registered users will be subject to mandatory security assessments.
The regulations come after Chinese AI chatbot startups Minimax and Z.ai filed initial public offerings (IPOs), highlighting the rapid growth of the domestic AI companion industry.
This initiative aligns with China’s broader efforts to lead global AI governance, in contrast to the United States’ fragmented regulatory approach.




