The short version

California starts 2026 by flipping on several new AI rules that make transparency the default. Developers now face training‑data disclosure (AB 2013) and “frontier model” reporting (SB 53). Operators of AI companions must clearly tell users they’re bots and maintain crisis protocols (SB 243). Agencies have to disclose when police reports are drafted with AI (SB 524). And if AI causes harm, defendants can’t claim “the AI did it” as a legal defense (AB 316). Texas’ AI law arrives the same day with prohibited uses and an advisory council, while bigger provenance and watermarking duties in California and comprehensive rules in Colorado are slated for mid‑year and beyond.


Why this matters now

For AI builders, product leaders, and operations teams, January 1, 2026 is the moment U.S. state rules stop being theoretical. California’s updates emphasize transparency—how models are trained, how risks are handled, and when users are engaging with AI—setting a practical bar for disclosures and governance even if you operate nationally. Texas takes a narrower, prohibited‑practices approach. And Colorado’s broad “high‑risk AI” regime is queued up for June 30. Expect customers, partners, and regulators to start asking for documentation that aligns to these laws.

What changed in California on January 1, 2026

Training‑data transparency (AB 2013)

Developers who make a generative AI system available to Californians must publish a high‑level summary of the data used to train it (including sources, licensing, presence of personal or copyrighted data, and whether synthetic data was used). This applies to systems released or substantially modified on or after January 1, 2022, with disclosures due by January 1, 2026 and updated on subsequent releases.
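Many teams find it easier to keep the AB 2013 summary as structured data that renders to the public page. A minimal sketch of that idea follows; the field names and the `TrainingDataSummary` class are illustrative assumptions, not the statute's language, so map them to counsel-approved wording before publishing.

```python
from dataclasses import dataclass, asdict, field

# Hypothetical internal schema covering the disclosure topics AB 2013 names
# (sources, licensing, personal/copyrighted data, synthetic data).
# Field names are illustrative, not statutory text.
@dataclass
class TrainingDataSummary:
    system_name: str
    release_date: str  # ISO date of release or substantial modification
    data_sources: list = field(default_factory=list)
    licensing: str = "unspecified"
    contains_personal_data: bool = False
    contains_copyrighted_data: bool = False
    uses_synthetic_data: bool = False

    def to_publishable_dict(self) -> dict:
        """Render the summary for the public disclosure page."""
        return asdict(self)

summary = TrainingDataSummary(
    system_name="example-genai-v2",
    release_date="2026-01-01",
    data_sources=["licensed news corpus", "public web crawl"],
    licensing="mixed (licensed + publicly available)",
    contains_copyrighted_data=True,
    uses_synthetic_data=True,
)
print(summary.to_publishable_dict()["uses_synthetic_data"])  # → True
```

Keeping the record structured means the same object can feed the public page, customer questionnaires, and the update-on-release workflow the law implies.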

Frontier developer transparency (SB 53)

California’s Transparency in Frontier AI Act requires “frontier developers” to publish an AI safety framework for covered models and to report critical safety incidents. “Large frontier developers” (models trained with at least 10^26 FLOPs of compute, plus ≥$500M in annual revenue) face heightened duties and potential civil penalties of up to $1M per violation. Core disclosure and whistleblower protections begin January 1, 2026; additional state reporting ramps up in 2027.

Accountability and disclosures around AI use (AB 316, AB 489, SB 524, SB 243)

  • AB 316 bars defendants from arguing that autonomous AI—not people—caused harm; responsibility remains with the developers, modifiers, or users of the AI. Effective January 1, 2026.
  • AB 489 prohibits AI systems from implying they are licensed health professionals or presenting AI advice as clinician‑provided. Effective January 1, 2026.
  • SB 524 requires law enforcement to disclose, on each page of an official report, when AI was used to generate any draft or the final report and to retain AI‑generated drafts. Effective January 1, 2026.
  • SB 243 regulates “companion chatbots”: clear disclosure that the user is interacting with AI; crisis intervention protocols for suicidal ideation; break reminders for minors; and public reporting to the Office of Suicide Prevention starting July 1, 2027. Effective January 1, 2026.
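The SB 243 duties translate into gating logic in the chat loop: disclose identity up front, check for crisis signals, and remind minors to take breaks. The sketch below is illustrative only, assuming a naive keyword check; a production system would use clinically reviewed protocols and a trained classifier, not a regex.

```python
import re

# Illustrative only: real crisis detection uses trained classifiers and
# clinically reviewed escalation protocols, not a keyword list.
CRISIS_PATTERNS = re.compile(r"\b(suicide|kill myself|self[- ]harm)\b", re.IGNORECASE)

AI_DISCLOSURE = "You are chatting with an AI, not a human."
CRISIS_RESOURCE = "If you are in crisis, you can call or text 988 (US)."

def required_notices(user_message: str, turn_number: int, is_minor: bool) -> list:
    """Assemble notices to show before the model's reply on this turn."""
    notices = []
    if turn_number == 1:
        notices.append(AI_DISCLOSURE)       # identity disclosure up front
    if CRISIS_PATTERNS.search(user_message):
        notices.append(CRISIS_RESOURCE)     # crisis-referral protocol
    if is_minor and turn_number % 20 == 0:
        notices.append("Reminder: take a break, and remember I'm an AI.")
    return notices
```

The point of isolating this as a pure function is testability: each notice rule maps to one statutory duty and can be unit-tested independently of the model.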

What’s coming later in 2026 (and beyond)

  • California AI Transparency Act (SB 942 as amended by AB 853): Requires major GenAI providers with ≥1M monthly users to embed durable provenance signals and offer a free public detection tool (including an API) to check whether audio, image, or video was produced by their system. Originally operative January 1, 2026; now operative August 2, 2026, with platform and device duties phased through 2027–2028.
  • Colorado AI Act (SB 24‑205): The nation’s first comprehensive “high‑risk AI” statute, delayed to June 30, 2026. It imposes risk management, notices, and duties on developers and deployers aimed at preventing algorithmic discrimination in consequential decisions (e.g., employment, credit, housing). Further amendments are being debated in 2026.

Other states turning the key on January 1, 2026

  • Texas Responsible AI Governance Act (HB 149): Prohibits developing or deploying AI for certain restricted purposes (e.g., unlawful discrimination, incitement to self‑harm/violence, child sexual abuse material), establishes an AI advisory council and regulatory sandbox, and empowers the Attorney General to investigate. Effective January 1, 2026.
  • New York: Not new today, but worth noting—New York’s companion‑chatbot safeguards took effect in November 2025, requiring crisis protocols and periodic reminders that users are not chatting with a human. A separate frontier‑model framework bill was signed in December 2025.

Practical playbook for AI, product, and compliance teams

Timeline and responsibilities

Key 2026–2028 state AI dates (selected)

Date | Jurisdiction | Requirement | Who’s on the hook
Jan 1, 2026 | California | Training‑data summaries live (AB 2013) | GenAI developers making systems available in CA
Jan 1, 2026 | California | Frontier model safety frameworks and core duties (SB 53) | Frontier/large frontier developers
Jan 1, 2026 | California | Police reports: AI‑use disclosure + draft retention (SB 524) | Law enforcement agencies/vendors
Jan 1, 2026 | California | Companion chatbots: identity disclosures and crisis protocols (SB 243) | Companion chatbot operators
Jan 1, 2026 | California | No “AI did it” liability defense (AB 316); healthcare title protections (AB 489) | Defendants using/deploying AI; health AI deployers
Jan 1, 2026 | Texas | Prohibited AI practices; AG investigative powers; council/sandbox (HB 149) | Developers/deployers in TX
Jun 30, 2026 | Colorado | High‑risk AI Act operative (SB 24‑205) | Developers/deployers of high‑risk AI
Aug 2, 2026 | California | AI Transparency Act operative; free detection tool + provenance duties begin (SB 942/AB 853) | Covered GenAI providers
Jul 1, 2027 | California | Annual companion‑chatbot safety reports begin (SB 243) | Companion chatbot operators
Jan 1, 2028 | California | Capture‑device provenance options (e.g., phones/cameras) (AB 853) | Capture‑device manufacturers

Sources for dates and scopes include the California Lawyers Association bill tracker, official bill texts, and state notices.

What it means for automation and productivity

If you already practice model documentation, red‑teaming, watermarking/provenance, and user disclosures, you’re largely on‑track—these laws codify that operational hygiene. The main lift is making your documentation public (AB 2013; SB 53), ensuring provenance signals are durable (SB 942/AB 853), and tuning user experience for clear identity disclosures and crisis‑response UX (SB 243). For multi‑state deployments, implement one policy set that satisfies the strictest common denominator and feature‑flag the rest.
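One way to implement the "strictest common denominator plus feature flags" pattern is to model each state's duties as a set and take the union across your deployment footprint. The sketch below is a hypothetical illustration; the state-to-duty mappings and flag names are assumptions you would replace with counsel-reviewed mappings.

```python
# Hypothetical per-state duty flags; mappings are illustrative, not legal advice.
STATE_REQUIREMENTS = {
    "CA": {"training_data_summary", "ai_identity_disclosure", "provenance_signals"},
    "TX": {"prohibited_use_screening"},
    "CO": {"impact_assessment", "adverse_decision_notice"},
}

def baseline_policy(deployed_states: set) -> set:
    """Union of all applicable duties: one policy set covering every state."""
    duties = set()
    for state in deployed_states:
        duties |= STATE_REQUIREMENTS.get(state, set())
    return duties

def is_enabled(duty: str, deployed_states: set) -> bool:
    """Feature-flag check used by product code at runtime."""
    return duty in baseline_policy(deployed_states)
```

Taking the union means adding a new state is a data change, not a code change; jurisdiction-specific behavior stays behind `is_enabled` checks.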

Tip: Good patterns that scale
  • Align your governance to NIST AI RMF and ISO/IEC 23894, then map those controls to SB 53’s framework expectations.
  • Treat provenance like observability: embed it in the pipeline, test it regularly, and expose it to users and platforms.
  • Build a “state law binder” for sales and procurement—one‑pagers that show how your product meets CA/TX/CO requirements.
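The "provenance like observability" tip can be made concrete with a pipeline check that fails when embedded provenance no longer matches the output. This is a toy content-hash manifest, not the C2PA specification or any vendor's API; it only illustrates the embed-then-verify pattern.

```python
import hashlib

# Toy provenance manifest (assumption: not C2PA or any real standard).
def attach_provenance(content: bytes, generator_id: str) -> dict:
    return {
        "generator": generator_id,
        "sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """CI/pipeline check: fail the build if provenance no longer matches."""
    return manifest["sha256"] == hashlib.sha256(content).hexdigest()

audio = b"\x00\x01fake-audio-bytes"
manifest = attach_provenance(audio, "example-genai-v2")
assert verify_provenance(audio, manifest)              # intact: matches
assert not verify_provenance(audio + b"x", manifest)   # tampering detected
```

Running this kind of verification on every release is the "test it regularly" half of the tip: durability claims about provenance signals become assertions, not assumptions.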

Frequently asked

Do these laws apply if my company isn’t based in California? Yes—many duties trigger if your system is available to Californians, regardless of where you’re headquartered.

What if my model is released as open‑source weights? California’s provenance rules add duties for hosting platforms that make models available; ensure disclosures travel with the weights and remain detectable downstream.

Will these rules conflict with federal action? States are moving first. If federal standards arrive later, expect alignment rather than wholesale preemption; plan for documentation you can re‑use.


Sources

  • California Lawyers Association bill status roundup (effective‑date summary for AB 2013, AB 316, AB 489, SB 524, SB 53, AB 853).
  • California SB 53: Governor’s signing note and legal analyses with effective dates and duties.
  • California AB 2013: bill text and client alerts describing January 1, 2026 disclosure deadline.
  • California SB 942 (AI Transparency Act) and AB 853 amendment (operative date shift to August 2, 2026; platform/device phases).
  • Colorado AI Act: effective date extension to June 30, 2026 and scope of “high‑risk AI.”
  • Texas Responsible AI Governance Act: scope, prohibitions, AG powers, and Jan 1, 2026 effective date.
  • Companion chatbot laws: California SB 243 (effective 2026; reporting July 2027) and New York’s 2025 safeguards.