The short version

The Cyberspace Administration of China (CAC) has released draft rules for “anthropomorphic AI interaction services”: systems that mimic human personalities and engage users emotionally. The proposal, open for public comment until January 25, 2026, requires clear AI labeling, break reminders for heavy use, special protections for minors and older adults, and strict content limits aligned with existing national rules. Reuters and official outlets including Xinhua and Guangming Online confirmed the details.

Image: a conceptual chat interface of a human-like AI companion in a Chinese-language UI, with a pop-up reminding the user to take a break after extended use.

What counts as “human-like” AI here?

The draft targets products that simulate human personality traits, thinking styles, and communication patterns and that interact with the public via text, images, audio, or video: in other words, chatty agents and companions designed to feel person-like. That scope is narrower than that of China’s 2023 generative AI rules, zeroing in specifically on emotionally responsive systems. Guangming Online (full text); Library of Congress overview of 2023 rules.

Headline requirements at a glance

Key obligations in China’s draft rules for anthropomorphic AI

| Requirement | What the draft says | What builders should prepare |
| --- | --- | --- |
| Clear AI disclosure | Providers must prominently tell users they’re interacting with AI (not a human), with dynamic reminders at first use, on re‑login, and when over‑dependence is detected. | Add in‑flow disclosures, persistent labels, and re‑auth reminders. |
| Break reminders | If a user continuously uses the service for over two hours, a pop‑up should prompt them to pause (see the sketch after this table). | Instrument session timers and user‑state tracking; design a respectful break UX. |
| Detect and intervene | Providers should gauge user emotions and dependency and “take necessary measures” if extreme emotions or addiction appear: pre‑set safe‑reply templates, escalation to human agents, and contacting guardians or emergency contacts when risks to life or property surface. | Build risk classifiers, response playbooks, and human‑in‑the‑loop escalation; capture emergency contacts during onboarding (with privacy safeguards). |
| Minors’ protections | Dedicated “minors mode,” parental consent for emotional‑companion services, and guardian dashboards (usage summaries, time limits, role blocking, spend controls). | Age‑aware flows, guardian controls, and auditable logs; annual compliance audits on handling minors’ data. |
| Older adults | Encourage emergency‑contact setup; ban services that simulate a user’s real relatives or other specific relationship figures. | Collect emergency contacts; restrict role libraries and prevent deceptive “relative” personas. |
| Content limits | Prohibit content endangering national security; rumors disrupting public order; pornography, gambling, and violence; incitement to crime; glamorization of self‑harm; emotional manipulation or “traps”; and solicitation of sensitive or secret information. | Tune safety policies beyond the usual toxicity filters to cover manipulation and dependency harms. |
| Data and training | Encrypt interaction data; allow deletion of chat histories; do not use user interaction data or sensitive personal data for training without separate consent; align training data with “core socialist values.” | Update privacy notices, consent granularity, retention/deletion APIs, and data‑provenance checks. |
| Scale‑triggered safety assessments | Conduct safety assessments and file reports with the provincial CAC on launch, after major technical changes, when exceeding 1M registered users or 100k MAU, or when certain risks are present. | Create a recurring safety‑assessment program tied to product milestones and growth thresholds. |
| Algorithm filing and app‑store duties | Maintain algorithm filings (with annual review) and support app‑store compliance checks, including takedowns for violations. | Track algorithm change management; prepare evidence for store reviews and regulators. |

Sources for table items: Guangming Online (draft text); Reuters; China News Service; China Daily/Xinhua.
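
To make the disclosure and break-reminder duties concrete, here is a minimal TypeScript sketch of session-tracking logic a provider might implement. Everything in it is a hypothetical illustration; only the two-hour continuous-use threshold and the three disclosure triggers (first use, re-login, detected over-dependence) come from the draft.

```typescript
// A minimal, hypothetical sketch, not the draft's own language. Only the
// two-hour continuous-use threshold and the three disclosure triggers
// (first use, re-login, detected over-dependence) come from the draft;
// every name and policy detail below is an illustrative assumption.

type DisclosureTrigger = "first_use" | "re_login" | "over_dependence";

interface SessionState {
  userId: string;
  sessionStartMs: number;    // when continuous use began
  lastBreakPromptMs: number; // initialize to sessionStartMs
  hasSeenFirstUseNotice: boolean;
}

const TWO_HOURS_MS = 2 * 60 * 60 * 1000; // the draft's continuous-use threshold

// Decide whether the "you are talking to an AI" notice should be shown.
function disclosureDue(state: SessionState, trigger: DisclosureTrigger): boolean {
  if (trigger === "first_use") return !state.hasSeenFirstUseNotice;
  return true; // re-login and over-dependence always re-disclose
}

// Call on each user message; true means a break pop-up should fire.
// Re-prompting at most once per two hours is an assumption, not draft text.
function breakPromptDue(state: SessionState, nowMs: number): boolean {
  const continuousUseMs = nowMs - state.sessionStartMs;
  const sincePromptMs = nowMs - state.lastBreakPromptMs;
  return continuousUseMs >= TWO_HOURS_MS && sincePromptMs >= TWO_HOURS_MS;
}
```

Note that the draft leaves “over‑dependence” undefined; the trigger above simply marks where a provider’s own detection logic would plug in, which is one of the points to watch as the text is finalized.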

Why this, and why now

China has already built a layered AI governance stack: deep‑synthesis rules (2023), interim measures for generative AI (effective August 15, 2023), and a growing filing regime for consumer‑facing models. As of November 1, 2025, authorities reported 611 generative AI services had completed filings, with 306 apps or features registered via local regulators — a sign of rapid consumer deployment that likely sharpened focus on human‑like, emotionally engaging agents. CAC notice, Nov. 11, 2025; Xinhua roundup.

Unlike broad model‑safety mandates, these draft rules spotlight mental‑health and dependency risks. The two‑hour break prompt and explicit bans on “emotional traps” and manipulative design patterns are unusual in global policy, mapping more closely to product‑level UX safeguards than to abstract risk tiers. At the same time, the draft retains China’s familiar redlines on content and national‑security considerations. Guangming Online (draft text); Straits Times summary.

What this means for AI and automation leaders

For teams shipping chatbots, companions, or embodied agents that talk like people, the draft reads like a detailed product spec for responsible design in China. Even if you operate outside the mainland, the ideas are worth borrowing to reduce user harm and regulatory risk.
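
One part of that spec translates almost directly into product telemetry: the scale triggers for safety assessments. Below is a minimal TypeScript sketch under stated assumptions; the threshold figures (1M registered users, 100k MAU) and the launch and major-change triggers come from the draft summary above, while the metric names and reporting shape are hypothetical.

```typescript
// A minimal, hypothetical sketch of the draft's scale triggers. The
// thresholds (1M registered users, 100k MAU) and the launch/major-change
// triggers come from the draft summary; everything else is illustrative.

interface ServiceMetrics {
  registeredUsers: number;
  monthlyActiveUsers: number;
  goingLive: boolean;       // first public launch
  majorTechChange: boolean; // e.g., swapping the underlying model
}

const REGISTERED_USERS_TRIGGER = 1_000_000;
const MAU_TRIGGER = 100_000;

// Returns the reasons a safety assessment and provincial CAC filing are due;
// an empty array means no trigger has fired.
function safetyAssessmentReasons(m: ServiceMetrics): string[] {
  const reasons: string[] = [];
  if (m.goingLive) reasons.push("service going live");
  if (m.majorTechChange) reasons.push("major technical change");
  if (m.registeredUsers >= REGISTERED_USERS_TRIGGER) reasons.push("registered users at or above 1M");
  if (m.monthlyActiveUsers >= MAU_TRIGGER) reasons.push("MAU at or above 100k");
  return reasons;
}
```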

The bigger picture: convergence and divergence

  • Convergence: Many elements — labeling, parental controls, data‑minimization — echo norms surfacing in the EU and other jurisdictions, even if the legal framing differs.
  • Divergence: China’s emphasis on “core socialist values,” bans on simulating relatives, and mandated break prompts are more prescriptive than most Western rules. Meanwhile, Beijing’s filing system has already pulled hundreds of services into formal oversight, a scale that could nudge industry standards globally. CAC filings update; Xinhua/SCIO filing snapshot.

What to watch next

  • Final text and timeline: The comment window runs through January 25, 2026. Expect clarifications on how “dependency” and “extreme emotions” will be operationalized and audited. Guangming Online notice.
  • Enforcement mechanics: How provincial CAC offices and app stores coordinate safety assessments, algorithm filings, and takedowns.
  • Scope creep or carve‑outs: Whether enterprise (non‑public) deployments and open‑source community projects will face duties similar to those imposed on consumer apps.
  • Impact on product roadmaps: Expect Chinese platforms to ship minors modes, guardian dashboards, and emergency‑contact flows as default features in 2026.

Sources

  • Reuters. China issues draft rules to regulate AI with human‑like interaction (Dec. 27, 2025).
  • Cyberspace Administration of China via Guangming Online. Draft Interim Measures on the Administration of Anthropomorphic AI Interaction Services (full text, Dec. 27, 2025).
  • Xinhua/China Daily. China seeks public feedback on draft rules for anthropomorphic AI services (Dec. 27, 2025).
  • China News Service. CAC moves to regulate anthropomorphic AI interaction services (Dec. 27, 2025).
  • CAC. Filing update: 611 generative AI services recorded as of Nov. 1, 2025 (Nov. 11, 2025).
  • State Council Information Office (via Xinhua). Filing system snapshot, April 9, 2025.