What changed on December 27, 2025
China’s internet regulator, the Cyberspace Administration of China (CAC), published a draft rulebook for what it calls “anthropomorphic (human‑like) AI interaction services”: AI systems that simulate human personality traits and engage users emotionally via text, images, audio, or video. The proposal is open for public comment until January 25, 2026, and leaves its effective date as a 2026 placeholder to be set once the text is finalized. CAC notice and full draft (Chinese); see also concise agency summaries via Xinhua/China Daily and initial international coverage from Reuters.

What counts as “emotionally interactive” AI
Under Article 2 of the draft, the rules apply to any public‑facing service in China that uses AI to simulate human personality, thinking patterns, and communication style — and that interacts with people emotionally. This covers companion chatbots, AI friends/partners, role‑play characters, and virtual idols, as well as customer‑facing agents designed to bond with users. Full draft, Art. 2.
The obligations in plain English
The centerpiece of the draft is a package of safety, transparency, and data‑protection duties tailored to “AI companions.” The highlights below map each duty to the article that imposes it.
Key obligations at a glance
| Requirement | What it means for builders | Draft citation |
|---|---|---|
| AI must identify itself | Prominent disclosure that the user is interacting with AI, not a human; dynamic pop‑ups at first use, on re‑login, or if over‑dependence is detected. | Art. 16 |
| Two‑hour break reminder | If someone uses the service continuously for more than two hours, show a pop‑up reminding them to pause. | Art. 17 |
| Addiction and crisis safeguards | Detect extreme emotions/over‑reliance; intervene as needed. Human handoff and outreach to guardian/emergency contact if a user expresses suicidal or self‑harm intent. | Art. 11 |
| “Minors mode” | Offer a dedicated mode with time limits, real‑world reminders, guardian controls, and identity checks; obtain explicit guardian consent for emotional‑companionship services to minors. | Art. 12 |
| Elder‑user protections | Encourage setting an emergency contact; do not simulate an older user’s real relatives or special relationships. | Art. 13 |
| Interaction‑data limits | Encrypt interaction data; give users deletion options; don’t use interaction logs or sensitive personal data for training without separate consent (and guardian’s consent for minors). | Arts. 14–15 |
| Lifecycle safety | Maintain audit trails, algorithm/ethics reviews, data and cyber security, fraud prevention, emergency plans; monitor safety in operation. | Arts. 8–9 |
| Safety assessment triggers | Conduct and file a safety assessment with the provincial CAC when features launch or change materially, or when the service reaches ≥1M registered users or ≥100k monthly active users (MAU), among other triggers. | Arts. 21–22 |
| App‑store duties | App stores must verify filings/assessments and take down non‑compliant apps. | Art. 24 |
| Algorithm filing | Comply with China’s algorithm filing regime and annual review. | Art. 25 |
| Values and datasets | Training data should reflect “core Socialist values” and Chinese cultural norms; diversify datasets and mitigate data poisoning. | Art. 10 |
| Exit must be easy | For companionship services, provide a clear exit and honor a user’s “stop/exit” command. | Art. 18 |
The draft also floats an official “AI sandbox safety service platform” to support testing and safer rollouts. Art. 27.
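To make the pop‑up mechanics concrete, here is a minimal sketch of the trigger logic behind the Art. 16 identity disclosures and the Art. 17 break reminder. Everything in it (the names, the session model, how “continuous use” is measured) is an assumption for illustration; the draft prescribes the outcomes, not the implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative constant: the draft requires a reminder after two hours of
# continuous use (Art. 17) but leaves details, such as what counts as a
# break in continuity, to providers.
CONTINUOUS_USE_LIMIT = timedelta(hours=2)

@dataclass
class Session:
    user_id: str
    started_at: datetime
    is_first_use: bool = False        # user's first time in the service
    is_relogin: bool = False          # returning after logout (Art. 16)
    dependency_flagged: bool = False  # set by a separate over-reliance detector
    break_reminder_shown: bool = False

def popups_due(session: Session, now: datetime) -> list[str]:
    """Return the notices that should be displayed at this moment."""
    due = []
    # Art. 16: prominent "you are talking to an AI" disclosure at first
    # use, on re-login, and whenever over-dependence is detected.
    if session.is_first_use or session.is_relogin or session.dependency_flagged:
        due.append("AI_IDENTITY_DISCLOSURE")
    # Art. 17: break reminder after two hours of continuous use.
    if (now - session.started_at >= CONTINUOUS_USE_LIMIT
            and not session.break_reminder_shown):
        session.break_reminder_shown = True
        due.append("TAKE_A_BREAK_REMINDER")
    return due
```

Calling `popups_due` for a re‑login session that has already run past two hours would return both notices; the hard part in practice is the dependency flag, which has to come from the Art. 11 monitoring pipeline.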
Why now: a surge in AI companions — and users
China’s public use of generative AI has exploded. By June 2025, the country counted an estimated 515 million generative‑AI users (a 36.5% adoption rate), according to the national internet statistical authority. CNNIC 2025 report (Chinese).
At the same time, “AI companions” — apps that befriend, flirt, coach, or console — have become mainstream globally and in China, raising recurring concerns about addiction, blurred boundaries, and youth safety. Chinese state media and academics have flagged these risks for several years; the new draft moves them into enforceable obligations. For background coverage and context, see Xinhua/China Daily and Reuters.
What this means for product and policy teams
If you operate an emotionally interactive AI in China — or plan to — treat this draft as a near‑term blueprint. The most impactful shifts are operational rather than purely legal.
- Build “minors mode” early: You’ll need time‑limit controls, guardian consent and dashboards, and automatic switching when a user is likely a minor. Art. 12.
- Instrument “user‑state” detection carefully: The rule expects you to identify extreme emotions/over‑reliance and escalate to humans in crisis scenarios. Expect to document models, thresholds, false‑positive handling, and triage flows (see the sketch after this list). Art. 11.
- Display the right pop‑ups at the right time: First use, re‑login, and when you detect dependency; plus the two‑hour continuous‑use break reminder. Arts. 16–17.
- Separate training and interaction data: Plan for data‑minimization, encryption, per‑purpose consent, and user deletion of chat histories. Arts. 14–15.
- Prepare for scale‑up oversight: Model your user growth against the 1M registered/100k MAU assessment thresholds; line up provincial filings and algorithm‑record obligations. Arts. 21, 25.
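As a sketch of how the Art. 11 escalation flow might be instrumented, consider the toy triage below. The risk labels, crisis markers, usage threshold, and downstream handlers are all assumptions; a real deployment would replace the keyword check with evaluated models and the documented thresholds the draft expects.

```python
from enum import Enum

class RiskLevel(Enum):
    NONE = 0
    OVER_RELIANCE = 1  # heavy usage, dependency language
    CRISIS = 2         # expressions of suicidal or self-harm intent

def classify_risk(message: str, daily_minutes: int) -> RiskLevel:
    """Toy stand-in for a real, evaluated detector; production systems
    need documented models, thresholds, and false-positive handling."""
    crisis_markers = ("want to die", "hurt myself", "end it all")
    if any(marker in message.lower() for marker in crisis_markers):
        return RiskLevel.CRISIS
    if daily_minutes > 240:  # illustrative over-reliance threshold
        return RiskLevel.OVER_RELIANCE
    return RiskLevel.NONE

# Placeholder handlers standing in for real operational systems.
def route_to_human_operator(user_id: str) -> None:
    print(f"[handoff] routing {user_id} to a trained human operator")

def notify_emergency_contact(user_id: str) -> None:
    print(f"[outreach] contacting guardian/emergency contact for {user_id}")

def flag_dependency(user_id: str) -> None:
    print(f"[flag] over-reliance recorded for {user_id}")

def escalate(user_id: str, risk: RiskLevel) -> None:
    if risk is RiskLevel.CRISIS:
        # Art. 11: human handoff plus guardian/emergency-contact outreach;
        # the event should also land in the audit trail (Arts. 8-9).
        route_to_human_operator(user_id)
        notify_emergency_contact(user_id)
    elif risk is RiskLevel.OVER_RELIANCE:
        flag_dependency(user_id)  # can feed the Art. 16 disclosure pop-up
```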
How it compares: China, the EU, and the U.S.
- European Union: The EU AI Act already bans certain emotion‑recognition uses (e.g., in workplaces and schools) and requires transparency when people interact with AI or are exposed to AI‑generated content. China’s draft shares transparency goals but goes further on usage‑time reminders, crisis management, and prescribed youth/elder safeguards specific to companion‑style systems. EU AI Act official text; European Parliament summary.
- United States: There’s no comprehensive federal AI law yet. Enforcement relies on existing consumer‑protection, privacy, and safety statutes, plus executive‑branch guidance and state bills. Momentum is growing around youth protections and deceptive design, but China’s proposal is markedly more prescriptive for emotionally interactive services. For context, see the U.S. executive‑branch and regulator advisories as well as state‑level activity.
Nuances, trade‑offs, and open questions
- Emotion detection vs. privacy: Article 11 expects providers to assess user emotions and dependency — a safety aim that could push systems to collect more sensitive signals. Providers will need privacy‑by‑design controls and careful documentation of what they infer and store (see the sketch after this list). Art. 11.
- Values alignment in training data: Article 10 encourages datasets that reflect “core Socialist values.” Multinational developers may need China‑specific data governance lanes and content policies to comply without perturbing global weights. Art. 10.
- Practicality of human handoffs: Crisis handoffs and contacting guardians/emergency contacts are strong safety goals but operationally complex at scale. Expect regulators to scrutinize false negatives/positives, response times, and staff training. Art. 11.
- App‑store co‑enforcement: App stores become accountable gatekeepers under Article 24. This will likely reshape China launches and updates — with stores asking for safety assessments and filings up front. Art. 24.
- Timelines: The comment window closes on January 25, 2026; the final effective date and any revisions will follow. Teams should design to the draft now and adjust when CAC publishes the final text. CAC notice.
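On the first point, “documentation of what they infer and store” can be made tangible as a structured audit record per inference, with an explicit retention limit. The schema below is purely illustrative; the draft mandates no particular format.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass(frozen=True)
class InferenceAuditRecord:
    """One row per emotional-state inference, supporting the audit trail
    (Arts. 8-9) and privacy review. Every field is an illustrative
    assumption; the draft prescribes no specific schema."""
    user_id: str
    inferred_label: str            # e.g., "over_reliance", "acute_distress"
    model_version: str             # which detector produced the inference
    confidence: float
    signals_used: tuple[str, ...]  # e.g., ("session_length", "message_text")
    created_at: datetime
    retention: timedelta = timedelta(days=90)  # illustrative retention cap

    def expired(self, now: datetime) -> bool:
        """Records past their retention window should be purged."""
        return now - self.created_at > self.retention
```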
The bigger picture for automation and productivity
AI that can empathize, cajole, or console is powerful — and risky. China’s draft reframes “engagement at all costs” as something to meter, label, and occasionally interrupt. Whether or not you operate in China, these patterns are worth studying: explicit AI identity cues; default time‑outs; age‑aware modes; crisp crisis playbooks; and data silos between chatting and training. Many of these guardrails can increase user trust, reduce support burden, and preempt policy surprises in other markets.
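As one sketch of the last pattern, the silo between chatting and training, a per‑purpose consent gate can sit between the interaction store and any training corpus. The guardian‑consent rule for minors mirrors Arts. 14–15, but the field names and code itself are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChatRecord:
    user_id: str
    text: str
    is_minor: bool
    training_consent: bool           # separate, per-purpose consent (Art. 14)
    guardian_training_consent: bool  # additionally required for minors

def eligible_for_training(record: ChatRecord) -> bool:
    """Gate between the (encrypted, user-deletable) interaction store and
    any training corpus: only explicitly consented records cross over."""
    if not record.training_consent:
        return False
    if record.is_minor and not record.guardian_training_consent:
        return False
    return True

def build_training_batch(chat_store: list[ChatRecord]) -> list[ChatRecord]:
    # User deletion requests must also propagate: a record removed from
    # the interaction store can never re-enter a future batch.
    return [r for r in chat_store if eligible_for_training(r)]
```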
Sources
- CAC: Draft Interim Measures on Anthropomorphic AI Interaction Services (Chinese)
- CAC expert explainers (Chinese): Explainer 1, Explainer 2, Explainer 3
- Xinhua/China Daily summary: China seeks public feedback on draft rules
- Reuters: China issues draft rules to regulate AI with human‑like interaction
- CNNIC: Generative AI user adoption (2025)
- EU AI Act: Official text on EUR‑Lex; Parliament press release