The short version

OpenAI and Anthropic are both moving beyond simple “Are you over 18?” checkboxes. OpenAI is rolling out an age-prediction system that infers when a user is likely under 18 and, if there’s doubt, defaults them into a teen-guarded experience with new Under‑18 (U18) behavior rules for ChatGPT. Anthropic, which prohibits under‑18s from using Claude, is building classifiers to spot minors (beyond self‑reports) and will disable accounts confirmed to belong to under‑18 users. Together these shifts point to an era of age‑gated AI shaped by new laws, litigation, and a tougher regulatory climate.
What OpenAI is changing
OpenAI has updated its public Model Spec with a dedicated Under‑18 section and four guiding commitments: put teen safety first, promote real‑world support, treat teens like teens, and be transparent about what the assistant can and can’t do. In practice, ChatGPT will no longer engage teens in sexual or flirtatious roleplay and won’t discuss suicide or self‑harm even in creative contexts. The policy emphasizes encouraging offline help and escalating to crisis resources when risk appears imminent.
How detection and verification work
OpenAI’s new age prediction uses a mix of account and behavioral signals (for example, topics discussed or time‑of‑day usage) to estimate whether an account likely belongs to someone under 18. If the system isn’t confident, it defaults to the teen experience. Adults who are misidentified can verify through Persona, first with a selfie check and, if needed, a government ID; Persona deletes the images within hours, and OpenAI states it never receives users’ selfies or ID images.
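To make the “default to teen when uncertain” behavior concrete, here is a minimal Python sketch of that routing logic. The signal structure, confidence threshold, and function names are illustrative assumptions, not OpenAI’s published implementation.

```python
from dataclasses import dataclass


@dataclass
class AgeEstimate:
    """Hypothetical output of an age-prediction model built on
    account and behavioral signals (topics, time-of-day usage, etc.)."""
    likely_adult: bool
    confidence: float  # 0.0 - 1.0


# Illustrative threshold only; OpenAI has not published a figure.
ADULT_CONFIDENCE_THRESHOLD = 0.90


def select_experience(estimate: AgeEstimate, verified_adult: bool) -> str:
    """Route an account to the adult or teen-guarded experience.

    Mirrors the stated policy: a completed age verification (e.g. via
    Persona) overrides the prediction, and any doubt about the
    prediction falls back to the teen default.
    """
    if verified_adult:
        return "adult"
    if estimate.likely_adult and estimate.confidence >= ADULT_CONFIDENCE_THRESHOLD:
        return "adult"
    # Not confident the user is an adult: safety-first teen default.
    return "teen"


# Example: an uncertain prediction lands in the teen experience.
print(select_experience(AgeEstimate(likely_adult=True, confidence=0.6),
                        verified_adult=False))  # -> "teen"
```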
Beyond the model: parental controls and literacy
OpenAI now offers optional parental controls that let families link accounts, tune sensitive content settings, set quiet hours, and receive notifications in rare cases when trained reviewers confirm signs of acute self‑harm risk. The company has also published AI literacy resources for families to set norms together.
What Anthropic is changing
Anthropic’s consumer terms require Claude users to be 18 or older. Today the company flags chats where users self‑identify as minors, and it is developing a classifier to detect subtler conversational hints of under‑18 use; accounts confirmed as belonging to minors are disabled. Anthropic says this work is informed by collaborations with online safety groups such as the Family Online Safety Institute (FOSI).
For safety‑critical topics like self‑harm, Anthropic describes product and training measures to provide empathetic, honest responses and surface localized helplines (implemented with ThroughLine). Separately, newer Claude models can end “persistently harmful or abusive” conversations as a last resort—though not when a user signals self‑harm, where the model stays engaged and points to human help.
Why now: the legal and regulatory backdrop
- California’s new “companion chatbot” law (SB 243), signed October 13, 2025, requires operators to disclose that chatbots aren’t human, remind minor users at least every three hours to take a break and that they are talking to an AI, and maintain protocols for suicide and self‑harm, including referrals to crisis services. These provisions take effect in 2026.
- Texas’ App Store Accountability Act (SB 2420) requires app stores to verify users’ age categories and obtain parental consent for minors; it is slated to take effect January 1, 2026, but faces active court challenges. The enrolled bill text details the age bands and verification duties.
- The FTC launched a 6(b) study in September 2025 into how seven companies (OpenAI, Alphabet, Meta, Snap, xAI, Instagram, and Character.AI) assess harms to kids and teens from companion chatbots. Lawmakers have pressed for stronger oversight following high‑profile incidents.
Litigation is also reshaping product choices. After the family of 16‑year‑old Adam Raine sued OpenAI, the company pledged stricter teen safeguards and later defended itself in court while pointing to planned age estimation, parental controls, and updated policies. These cases have sharpened industry focus on under‑18 interactions.
Two strategies for age‑gating, side by side
How OpenAI and Anthropic currently gate teen access
| Company | Who can use it? | Detection approach | If a teen is detected | Extra notes |
|---|---|---|---|---|
| OpenAI (ChatGPT) | Intended for 13+; U18 experience applies to teens | Age‑prediction signals; default to teen mode if uncertain; optional age verification via Persona for adults | Stronger guardrails on sex, self‑harm, risky challenges; pushes to trusted offline support; parental controls available | New U18 Model Spec clarifies behavior; AI literacy resources for families. |
| Anthropic (Claude) | 18+ only (consumer) | Attestation plus classifiers to spot under‑18 patterns; flags self‑identified minors | Accounts confirmed to belong to minors are disabled | ThroughLine helpline surfacing; separate “end harmful chat” feature excludes self‑harm scenarios; API guidance exists for orgs serving minors. |
What this means for product and policy teams
- Design for multiple age experiences. If your app plugs into ChatGPT or Claude, assume teen vs. adult policy differences (and country‑specific duties) will intensify. OpenAI’s app ecosystem guidelines already state that 18+ “mature” experiences will be supported only once age verification and controls are in place.
- Expect more proof‑of‑age flows. Even where companies prefer “estimation,” local laws or app‑store rules may nudge you toward stronger age assurance. Texas’ SB 2420 makes age categories and parental consent a platform duty; California’s SB 243 adds disclosure, break reminders, and self‑harm protocols.
- If you serve minors with Anthropic’s API, implement additional safeguards: age checks, content filtering, monitoring and reporting, and clear AI disclosure; Anthropic can audit compliance. A minimal sketch of such a wrapper follows this list.
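As one way to read that guidance, here is a minimal Python sketch of a wrapper an organization serving minors might put around Anthropic’s Messages API, layering an age check, a content filter, a monitoring hook, and an AI disclosure around each call. The filter, logging function, model name, and system prompt are illustrative placeholders, not Anthropic’s requirements or defaults.

```python
import anthropic

AI_DISCLOSURE = "You're chatting with an AI assistant, not a human."  # shown to the user

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def passes_content_filter(text: str) -> bool:
    """Placeholder filter; a real deployment needs a proper policy check."""
    blocked_terms = ("graphic violence",)  # illustrative only
    return not any(term in text.lower() for term in blocked_terms)


def log_for_review(user_id: str, prompt: str, reply: str) -> None:
    """Placeholder monitoring/reporting hook."""
    print(f"[audit] user={user_id} prompt_chars={len(prompt)} reply_chars={len(reply)}")


def ask_claude_for_minor(user_id: str, age_verified: bool, user_message: str) -> str:
    """Wrap a Messages API call with the safeguards listed above."""
    if not age_verified:
        raise PermissionError("Age check failed; request not forwarded to the API.")
    if not passes_content_filter(user_message):
        return "Sorry, that topic isn't available here."

    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=512,
        system="The end user is a minor; keep all content age-appropriate.",  # illustrative
        messages=[{"role": "user", "content": user_message}],
    )
    reply = response.content[0].text
    log_for_review(user_id, user_message, reply)
    return f"{AI_DISCLOSURE}\n\n{reply}"
```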
The hard part: accuracy, privacy, and fairness
Age estimation isn’t perfect, and civil‑liberties groups warn that ID checks or behavioral inference can chill speech and compromise privacy—especially for marginalized youth without reliable documentation or safe access to adult supervision. Policymakers and builders will have to balance teen protection with over‑blocking risks and data‑minimization.
For OpenAI, the mitigation is a “safety‑first” teen default with a voluntary off‑ramp for adults via age verification. For Anthropic, it is an 18+ rule, product guardrails for sensitive topics, and improved detection that reduces reliance on simple checkboxes. Neither approach will satisfy everyone, but both markedly raise the bar compared with 2023–2024 norms.
What to watch next
- OpenAI’s age prediction is “still rolling out,” and the company is expanding U18 principles across new features like group chats and Sora; expect iteration on signals and appeals.
- California’s SB 243 disclosures and reporting kick in during 2026; the Texas SB 2420 regime is scheduled for January 1, 2026 but could be delayed by litigation.
- The FTC’s 6(b) study and any follow‑on actions may set practical benchmarks for teen safety testing and disclosures industry‑wide.
If you build or buy AI, now’s the moment to treat age as a first‑class system requirement—spanning product design, policy, data flows, and user education—not an afterthought.
Sources
- OpenAI — Updating our Model Spec with teen protections (Dec 18, 2025); Model Spec U18 section; Age prediction help; Parental controls; AI literacy resources.
- Anthropic — Protecting the well‑being of our users (Dec 18, 2025); Is there an age requirement to use Claude?; Crisis Helpline support.
- Law & policy — California SB 243 (Governor’s office, Skadden analysis, AP); Texas SB 2420 (Texas Legislature site; Austin American‑Statesman).
- Regulation — FTC 6(b) inquiry into companion chatbots; CNBC recap.
- Context — Common Sense Media on Claude’s 18+ scope; The Verge coverage on under‑18 detection and “end harmful chat” feature; TechCrunch reporting on OpenAI’s teen restrictions.