What happened

A bipartisan coalition of 35 state attorneys general — joined by the District of Columbia and several U.S. territories — urged congressional leaders on November 25, 2025, not to pass any federal measure that would block states from enacting or enforcing their own AI rules. The letter, led by New York Attorney General Letitia James with counterparts in North Carolina, Utah, and New Hampshire, warns that sweeping preemption would leave consumers exposed to AI‑driven harms ranging from scams to unsafe chatbots. Reuters first reported the coalition's letter; James' office posted the full text and list of signatories. Read the letter (PDF).


The timing is not accidental. After the Senate stripped a 10‑year state AI moratorium from a budget bill in July by a 99–1 vote, some in Washington floated new preemption language for the National Defense Authorization Act (NDAA). Meanwhile, the White House recently paused a draft executive order that would have sought to preempt state AI laws via litigation and funding conditions. NCSL and the Senate Commerce Committee hailed the July vote, and Reuters reported the executive‑order pause.

1,100+ state AI bills introduced in 2025 (source: Business Roundtable)

Why this matters

The core fight is over federalism. Tech firms and some federal policymakers argue that a patchwork of state AI laws will raise costs, chill innovation, and invite constitutional conflicts over interstate commerce. Business coalitions have asked Congress to preempt state AI rules, and prominent investors have warned that expansive state regimes (for example, California and Colorado proposals) risk becoming de facto national standards. See the U.S. Chamber coalition letter backing a moratorium on state AI enforcement and Business Roundtable’s call for broad preemption. (U.S. Chamber, Business Roundtable, and context from a16z.)

State officials counter that they are closer to real‑world harms and can move faster than Congress. The attorneys general point to AI‑enabled fraud, discriminatory algorithms, and unsafe conversational agents — concerns underscored by investigative reporting and litigation tied to chatbot misuse and tragic outcomes. See Reuters’ investigation into a fatal incident involving a Meta AI persona and a separate lawsuit alleging a chatbot’s role in a teen’s suicide. (Reuters investigation; Reuters legal coverage.)

The July 2025 Senate vote offers a recent benchmark: after bipartisan blowback, senators removed a state AI moratorium tied to broadband funds from a budget package. An NCSL press release and a statement from Senate Commerce leaders document the 99–1 result. (Senate Commerce release).


What states are already doing

Even as Congress debates national standards, states have been legislating across distinct risk areas. A few examples that the AGs cite — and that companies should track — are below.

  • Colorado’s AI Act targets “high‑risk” AI in consequential decisions (housing, employment, lending, education, health care). It requires risk management, impact assessments, notices, and documentation; implementation was recently delayed to June 30, 2026. (Colorado AG overview; statute summary; delay analysis).
  • California passed multiple measures effective beginning in 2026, including the Transparency in Frontier AI Act (SB 53) and content provenance/disclosure requirements (SB 942), plus training‑data transparency (AB 2013). (AP News; JD Supra; Pillsbury; Mondaq).
  • New York now requires “AI companion” apps to detect and respond to suicidal ideation and to remind users at least every three hours that they are not interacting with a human; a minimal sketch of those duties follows this list. Effective November 5, 2025; enforced by the AG. (Governor’s notice; law firm summary).
  • States continue to curb deceptive political deepfakes, especially around elections, often via disclosure and pre‑election bans, while a new federal law (the TAKE IT DOWN Act) requires platforms to remove non‑consensual intimate deepfakes within 48 hours of a victim’s request by May 19, 2026. (NCSL election AI tracker; CRS explainer on TAKE IT DOWN).
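
Two of those duties, New York's three‑hour reminder cadence and its ideation‑detection trigger, are concrete enough to sketch in code. The following is a minimal illustration, not legal advice and not any vendor's actual implementation; the function names are invented, and the keyword screen is a placeholder for the vetted classifier and clinically reviewed escalation flow a real product would need.

```python
from datetime import datetime, timedelta, timezone

# Cadence reported for New York's AI-companion law: remind users at least
# every three hours that they are not talking to a human.
DISCLOSURE_INTERVAL = timedelta(hours=3)

# Placeholder phrases only; a production system would use a vetted
# classifier, not keyword matching.
CRISIS_PHRASES = ("suicide", "kill myself", "end my life", "self-harm")

def disclosure_due(last_disclosure: datetime) -> bool:
    """True when the periodic 'not a human' reminder is due again."""
    return datetime.now(timezone.utc) - last_disclosure >= DISCLOSURE_INTERVAL

def needs_crisis_referral(message: str) -> bool:
    """Naive screen for suicidal-ideation language; a hit should route the
    session into the app's crisis protocol (hotline info, human handoff)."""
    text = message.lower()
    return any(phrase in text for phrase in CRISIS_PHRASES)

# Example: a session last reminded four hours ago is overdue for disclosure.
last = datetime.now(timezone.utc) - timedelta(hours=4)
print(disclosure_due(last))                       # True
print(needs_crisis_referral("I want to end it"))  # False: the keywords miss
                                                  # this phrasing, which is
                                                  # why real systems need
                                                  # classifiers.
```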

Near‑term AI obligations to watch

| Jurisdiction | What’s required | When it hits | Who it affects |
| --- | --- | --- | --- |
| New York | AI companions: crisis protocols; periodic “not a human” disclosures | In force Nov 5, 2025 | Providers of AI companion/chat apps |
| California (SB 53) | Frontier‑model transparency and incident reporting | Jan 1, 2026 (core obligations) | Developers of advanced models meeting thresholds |
| California (AB 2013) | Training‑data documentation for gen‑AI | Jan 1, 2026 | Developers releasing gen‑AI in CA |
| California (SB 942) | Detection tool and content provenance/watermarking | Jan 1, 2026 | Large consumer gen‑AI providers |
| Colorado (SB 24‑205) | High‑risk AI risk management, impact assessments, notices | June 30, 2026 (delayed) | Developers and deployers in consequential decisions |
| Federal (TAKE IT DOWN) | 48‑hour takedown process for NCII deepfakes | May 19, 2026 (platform process deadline) | Covered platforms |
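
For planning, the table reduces to six dates. Below is a minimal sketch, assuming the effective dates above, of how a team might track which obligations are in force on a given day; the data structure and function are illustrative, not a standard compliance tool.

```python
from datetime import date

# Effective dates from the table above; names and structure are illustrative.
OBLIGATIONS = [
    ("New York", "AI-companion crisis protocols and disclosures", date(2025, 11, 5)),
    ("California (SB 53)", "frontier-model transparency and incident reporting", date(2026, 1, 1)),
    ("California (AB 2013)", "training-data documentation", date(2026, 1, 1)),
    ("California (SB 942)", "provenance/watermarking and detection tool", date(2026, 1, 1)),
    ("Colorado (SB 24-205)", "high-risk AI risk management program", date(2026, 6, 30)),
    ("Federal (TAKE IT DOWN)", "48-hour NCII takedown process", date(2026, 5, 19)),
]

def in_force(as_of: date) -> list[tuple[str, str]]:
    """Obligations whose effective date has passed as of the given day."""
    return [(j, o) for j, o, eff in OBLIGATIONS if eff <= as_of]

for jurisdiction, obligation in in_force(date(2026, 1, 15)):
    print(f"{jurisdiction}: {obligation}")
# -> New York and the three California measures, but not yet Colorado
#    or the federal takedown deadline.
```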

The preemption push — and the backlash

  • The administration considered an executive order that would challenge state AI laws in court and condition certain federal funds on state compliance; that draft was put on hold after bipartisan criticism. (Reuters).
  • Industry groups and investors continue to urge Congress to create a single national framework that explicitly preempts state AI rules. (Reuters; Business Roundtable; U.S. Chamber letter).
  • State lawmakers and civil society organizations — including NCSL, EFF, CDT and others — have filed their own letters urging Congress not to handcuff the states. (NCSL letter; EFF letter; CDT coalition letter).

What this means for product, policy, and legal teams

Even as Congress debates preemption, the state rules already on the books are not going away overnight. Expect overlapping obligations (transparency, testing, provenance, youth safety, and anti‑discrimination) to become the default due‑diligence checklist for AI operations in 2026.
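
One way to make that checklist concrete is to map each theme to the evidence a diligence review would look for. The mapping below is a hypothetical sketch; the themes come from the paragraph above, while the artifact names are illustrative examples, not a legally vetted checklist.

```python
# Themes from the paragraph above; artifact names are illustrative.
DUE_DILIGENCE = {
    "transparency": ["model card", "training-data summary (AB 2013-style)"],
    "testing": ["pre-deployment evals", "red-team report"],
    "provenance": ["watermark/metadata pipeline (SB 942-style)"],
    "youth safety": ["crisis protocol", "'not a human' disclosure flow"],
    "anti-discrimination": ["impact assessment (Colorado-style)", "bias audit"],
}

def diligence_gaps(on_file: set[str]) -> dict[str, list[str]]:
    """Per theme, the artifacts not yet collected for a given AI system."""
    gaps = {}
    for theme, artifacts in DUE_DILIGENCE.items():
        missing = [a for a in artifacts if a not in on_file]
        if missing:
            gaps[theme] = missing
    return gaps

# Example: a system with only a model card and a bias audit on file.
print(diligence_gaps({"model card", "bias audit"}))
```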


The bottom line

The coalition letter doesn’t end the federal preemption debate — but it raises the political cost of sidelining states just as major state laws are about to take effect. For AI builders and buyers, that means planning for state‑led guardrails in 2026 while staying nimble if Congress lands on a national framework. Either way, the era of “AI with no paperwork” is closing fast.
