What happened, and why it matters
China’s top cyber regulator, the Cyberspace Administration of China (CAC), has released a draft rule set aimed squarely at “anthropomorphic” or human‑like AI—think AI companions, chatbots with personalities, and virtual agents that build emotional rapport. Published on December 27, 2025, the draft Interim Measures for the Administration of Anthropomorphic AI Interaction Services is open for public comment until January 25, 2026. It would apply to any product that simulates human personality traits, thinking patterns, and communication styles and engages in emotional interaction via text, images, audio, or video.
The big changes at a glance
If adopted as written, the draft would require providers of AI companions and other human‑like agents to:
- Clearly disclose that users are interacting with AI, not a human, including dynamic pop‑ups on first use, re‑login, or when over‑reliance is detected.
- Nudge healthy use and step in when risks appear: for sessions exceeding two hours, show a pause reminder; if users display extreme emotions or addiction signs, intervene and escalate appropriately. (The disclosure and pause-reminder logic is sketched in code after this list.)
- Build “life‑cycle safety” into design, training, deployment, operations, and shutdown, including algorithm reviews, data security, personal‑data protections, and emergency plans.
- Protect minors and older adults with dedicated safeguards. For minors: a “minor mode,” guardian consent for emotional‑companionship services, time limits, spending controls, and guardian dashboards. For older adults: require an emergency contact and prohibit simulating a user’s relatives.
- Treat sensitive chat data with extra care: encrypt and restrict access; provide deletion controls; don’t use interaction logs or sensitive personal information to train models without separate consent (and for minors, separate guardian consent).
- Pass safety reviews when certain thresholds or changes occur (e.g., when launching anthropomorphic features; when user counts exceed 1 million registered users or 100,000 monthly actives; or when major technical changes occur). App stores would also need to check compliance before listing.
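To make those disclosure and pause-reminder duties concrete, here is a minimal, hypothetical sketch of how a provider might wire them into session handling. The two-hour threshold and the first-use/re-login/over-reliance triggers come from the draft; every name in the code (Session, on_heartbeat, the returned action strings) is an illustrative assumption, not anything the CAC specifies.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Threshold taken from the draft: sessions exceeding two hours get a pause reminder.
PAUSE_REMINDER_AFTER = timedelta(hours=2)

@dataclass
class Session:
    user_id: str
    started_at: datetime
    is_first_use: bool = False
    is_relogin: bool = False
    overreliance_flagged: bool = False  # set by whatever risk heuristics the provider runs
    disclosures_shown: int = 0

def needs_ai_disclosure(session: Session) -> bool:
    """Recurring 'you are talking to an AI' prompt: first use, re-login,
    or detected over-reliance, mirroring the draft's disclosure rule."""
    return session.is_first_use or session.is_relogin or session.overreliance_flagged

def needs_pause_reminder(session: Session, now: datetime) -> bool:
    """Pause reminder once the continuous session passes two hours."""
    return now - session.started_at >= PAUSE_REMINDER_AFTER

def on_heartbeat(session: Session, now: datetime) -> list[str]:
    """Return the UX actions the client should render on this tick."""
    actions = []
    if needs_ai_disclosure(session):
        actions.append("show_ai_identity_popup")
        session.disclosures_shown += 1
    if needs_pause_reminder(session, now):
        actions.append("show_pause_reminder")
    return actions
```

In practice the hard engineering sits behind the `overreliance_flagged` signal: the draft leaves the detection method to providers, which is one of the open questions flagged in "What to watch next" below.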
The draft repeats familiar “red lines”—no content that endangers national security or promotes violence or obscenity—and adds behavioral guardrails against psychological harm, manipulation, and “emotional traps.”
Where this fits in China’s AI rulebook
Beijing isn't starting from zero. The proposal plugs into a framework that already includes the 2023 Generative AI Measures, the 2023 Deep Synthesis rules covering deepfakes, and the mandatory labeling requirements for AI-generated content that took effect on September 1, 2025. Together they form a stack covering transparency, provenance, and platform responsibilities, now extended to the psychology of human-like interaction.
What is newly explicit is the focus on mental-health risk, dependency, and "quasi-social bonds." CAC's expert commentaries accompanying the draft emphasize preventing blurred human-machine boundaries, over-attachment, and covert value shaping, while still encouraging "responsible innovation" and sandbox testing.
What changes for builders of AI companions and agents
China’s draft is prescriptive, with consequences for product design, data governance, and operations:
- Identity and boundary design: Prominent, repeated AI disclosures; easy exits for emotional‑companionship sessions; human takeover when self‑harm risks are explicit; routing to crisis support and guardian/contacts for minors and older users.
- Safety by design: Safety objectives must be present from model design through end‑of‑life—not tacked on later. Providers need to maintain logs, run risk assessments, and keep incident response plans ready.
- Affective monitoring vs. privacy: Providers are told to "assess user emotions and dependence" so they can intervene early, yet they also face strict limits on using chat data for training without separate consent, plus encryption and access controls. Walking that line will take careful feature scoping and data protection impact assessments (DPIAs).
- Children and seniors as priority users: Separate modes, consent flows, configurable blocklists, time limits, spending protections, and emergency‑contact workflows elevate safety for vulnerable groups.
- Threshold-triggered reviews: Launching anthropomorphic features or crossing user thresholds (≥1,000,000 registered or ≥100,000 MAU) triggers safety assessments, while app stores must verify compliance before listing. (A sketch of this gating logic follows the list.)
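As a rough illustration of how a compliance team might turn those triggers into a release gate, here is a hypothetical sketch. The 1,000,000-registered-user and 100,000-MAU figures come from the draft; the data structure and function names are assumptions made for illustration.

```python
from dataclasses import dataclass

# Thresholds taken from the draft measures.
REGISTERED_USERS_THRESHOLD = 1_000_000
MONTHLY_ACTIVE_THRESHOLD = 100_000

@dataclass
class ProductState:
    registered_users: int
    monthly_active_users: int
    launching_anthropomorphic_feature: bool = False
    major_technical_change: bool = False
    assessment_on_file: bool = False  # a completed, filed safety assessment

def safety_assessment_required(state: ProductState) -> bool:
    """True when any of the draft's triggers is hit: launching human-like
    features, crossing a user-count threshold, or a major technical change."""
    return (
        state.launching_anthropomorphic_feature
        or state.major_technical_change
        or state.registered_users >= REGISTERED_USERS_THRESHOLD
        or state.monthly_active_users >= MONTHLY_ACTIVE_THRESHOLD
    )

def release_gate(state: ProductState) -> str:
    """CI/CD-style gate: block rollout until a required assessment is on file."""
    if safety_assessment_required(state) and not state.assessment_on_file:
        return "blocked: complete and file a safety assessment before release"
    return "clear to release"
```

Treating the assessment as a release-blocking artifact, rather than a document produced after launch, keeps the obligation visible to engineering as well as legal.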
Global context: how this compares to the EU and U.S.
Europe's AI Act already bans certain emotion-recognition uses (e.g., in workplaces and schools) and outlaws manipulative systems that exploit vulnerabilities, reflecting a rights-first approach. China's draft pushes in a different, more "public-health plus platform duty" direction, requiring state-of-mind monitoring and time-use nudges alongside content rules. In the U.S., rules are piecemeal; California offers a recent example, where a broader minors-chatbot bill was vetoed while a lighter-touch chatbot transparency and youth-safety law passed. For global products, that means threading a needle: monitoring user wellbeing where required (China) without violating restrictions on emotion tracking (EU).
Comparing emerging guardrails for “human‑like” AI
| Topic | China (Draft anthropomorphic AI measures) | EU AI Act | U.S. (example: California 2025) |
|---|---|---|---|
| Identity disclosure | Mandatory, recurring AI labels and prompts | Transparency obligations vary by risk; outright bans for specific manipulative practices | New law requires periodic disclosures to minors that they are talking to a bot |
| Emotion/behavior monitoring | Providers must assess user emotions/dependence and intervene | Bans emotion recognition in workplaces/schools; limits manipulative systems | Focus on disclosures and crisis referral; no broader statewide bans |
| Minors’ protections | Minor mode; guardian consent for companionship; time/spend limits; guardian tools | Child‑safety duties embedded in risk‑based controls | Youth‑oriented disclosures and safeguards debated; some measures enacted |
| Data use for training | No training on chat logs/sensitive personal info without separate consent | Strict data‑protection baseline via GDPR + AI Act requirements | Varies by state and company policy |
| Threshold reviews | Safety assessment when launching features or crossing user thresholds | Conformity assessments for high-risk systems | No statewide equivalent |
Sources: CAC draft; EU AI Act; AP reporting on California.
Why China is moving now
AI companions and “emotional agents” are surging in popularity, especially among young users, and China has been explicit about wanting to prevent over‑attachment, manipulation, and harm while still encouraging innovation. The draft also dovetails with 2025’s AI‑content labeling regime and broader algorithm filing rules—signaling a full‑stack approach from provenance to mental wellbeing.
Practical takeaways for product, trust & safety, and data teams
- Treat “companionship mode” as a regulated feature set. You’ll need distinct UX, safeguards, and escalation.
- Build cross-jurisdictional toggles: enable emotional-state checks and timers in China; disable or constrain emotion recognition where the EU bans it. (A configuration sketch follows this list.)
- Minimize and compartmentalize data. Keep interaction logs separate, encrypted, and access‑controlled. Default to non‑training unless there is separate consent.
- Instrument risk signals responsibly. Start with lightweight heuristics (session length, rebound rates), layer opt‑in self‑reports, and only then consider narrow affect signals—with clear explanations and off‑ramps.
- Prepare documentation now. Safety assessments, algorithm filings, and app‑store checks all demand clear evidence of controls in place.
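One way to make the toggle and non-training defaults above tangible is a per-jurisdiction policy object that the product reads at runtime. This is a hypothetical sketch: the Policy fields and the three-entry POLICIES map are assumptions that compress the rules discussed in this article, not a compliance-complete mapping (the EU ban, for instance, applies to specific contexts such as workplaces and schools, not everywhere).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    emotional_state_checks: bool       # affect/dependence monitoring (required by the China draft)
    session_timers: bool               # e.g., two-hour pause reminders
    emotion_recognition_allowed: bool  # constrain where the EU AI Act bans it (simplified here)
    # Training on chat logs is never a default anywhere in this sketch; consent gates it.

POLICIES = {
    "CN": Policy(emotional_state_checks=True,  session_timers=True,  emotion_recognition_allowed=True),
    "EU": Policy(emotional_state_checks=False, session_timers=False, emotion_recognition_allowed=False),
    "US": Policy(emotional_state_checks=False, session_timers=False, emotion_recognition_allowed=True),
}

def active_policy(jurisdiction: str) -> Policy:
    """Resolve feature toggles for a user's jurisdiction; fall back to the most
    restrictive combination when the jurisdiction is unknown."""
    strictest = Policy(emotional_state_checks=False, session_timers=True,
                       emotion_recognition_allowed=False)
    return POLICIES.get(jurisdiction, strictest)

def may_train_on_chat_logs(has_separate_consent: bool,
                           is_minor: bool = False,
                           has_guardian_consent: bool = False) -> bool:
    """Default to non-training: interaction logs enter training only with separate
    consent, and for minors, separate guardian consent as well (per the China draft)."""
    if is_minor:
        return has_separate_consent and has_guardian_consent
    return has_separate_consent
```

Keeping the consent gate in one place also makes it easier to evidence in the safety assessments and app-store checks mentioned above.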

What to watch next
- How “user‑state” detection gets defined in practice. Expect technical guidance or standards on acceptable signals and privacy boundaries.
- Industry sandboxes. The draft explicitly encourages sandboxed testing; watch for pilots that clarify compliance patterns.
- Convergence (or conflict) with global norms. Europe’s bans on emotion recognition in certain contexts may force product segmentation; China’s rules could become a blueprint for “companion safety” features elsewhere.
Sources
- Cyberspace Administration of China (official draft, Chinese): notice and full text of the Interim Measures for Anthropomorphic AI Interaction Services (Draft for Comment).
- Xinhua/China Daily (announcement): China seeks public feedback on draft rules for anthropomorphic AI services.
- Reuters coverage: China issues draft rules to regulate AI with human‑like interaction.
- Labeling regime (2025): CAC Q&A on AI‑generated content labeling (Chinese); Reuters summary.
- EU AI Act context: European Parliament press release; EU Publications OJ entry.
- U.S. example (California): AP News on minors and chatbots legislation.