What happened
The White House has put on hold a draft executive order that would have sought to preempt state artificial intelligence laws by directing the Justice Department to challenge them in court and by tying some federal funds to state compliance, according to Reuters reporting on November 22, 2025. The leaked order would have stood up a DOJ “AI Litigation Task Force” and prodded the Commerce Department to flag state AI rules seen as burdensome. It followed days of backlash from both parties and state officials. Reuters; earlier leak coverage: Politico, Washington Post.

How we got here: a short timeline
- Spring–Summer 2025: House Republicans advanced a 10‑year moratorium on state AI regulation as part of a sprawling budget package. The Senate voted 99–1 in July to strip the provision and sent the bill back without the moratorium. Covington’s Global Policy Watch.
- November 19–21, 2025: Multiple outlets reported the White House was drafting an executive order to challenge or override state AI laws via DOJ lawsuits and administrative levers, including potential conditions on broadband funding. Politico; Washington Post.
- November 22, 2025: Facing bipartisan resistance—including from some Republican lawmakers and state attorneys general—the White House paused the order. Reuters.
The funding nexus matters because the administration’s earlier legislative effort flirted with conditioning state access to the $42.45 billion BEAD broadband program on AI policy choices. See NTIA’s overview of BEAD funding. NTIA.
Why the pause matters for builders and operators
For now, the “50‑state patchwork” remains the operating reality. If you ship AI features nationwide, you are still navigating state and local obligations that differ by sector and use case. A few that product, policy, and legal teams should keep on the radar:
- Colorado’s AI Act (SB 24‑205): establishes duties for developers and deployers of high‑risk AI systems (e.g., hiring, credit, housing), with enforcement by the Attorney General; key obligations now start in 2026 after 2025 legislation pushed back the dates. Colorado General Assembly summary; timing update: SB 25B‑004.
- New York City’s AEDT law (Local Law 144): requires annual bias audits and notices for automated employment decision tools used in hiring/promotion. Enforcement began July 5, 2023, and continues. NYC DCWP.
- Tennessee’s ELVIS Act: first‑in‑the‑nation protection of voice likeness against unauthorized AI cloning. TN Governor’s office; AP.
- California deepfake election content: new platform duties to label/remove deceptive election deepfakes (AB 2655) even as a broader deepfake ban (AB 2839) was enjoined on First Amendment grounds. Governor’s office; injunction coverage: Los Angeles Times.
The competing narratives
- The case for preemption: Industry groups argue that a growing patchwork creates heavy, conflicting compliance burdens, especially for startups that can’t geofence features or run state‑by‑state legal playbooks. Trade associations like NetChoice have publicly urged Congress to create a federal framework and pause state rules until it’s in place. NetChoice letter.
- The case against: State AGs from both parties, civil society groups, and some Republican members of Congress framed the draft order and the earlier moratorium as federal overreach that would nullify protections against harms like deepfakes, fraud, and discriminatory algorithms. Forty state AGs opposed the 2025 moratorium; the Senate later rejected it 99–1. Reuters; Covington recap.
- Not all tech is aligned: Some leading lab voices warned a sweeping preemption push is “too blunt,” urging targeted federal standards (e.g., transparency) instead. Reuters summary of Dario Amodei’s op‑ed.
NetChoice’s tally of new state AI measures underscores the velocity of state action, though counts vary by tracker. NetChoice analysis.
Practical impacts: what product and compliance teams should do now
Even with the federal pause, enforcement risk persists—and in some areas (hiring, deceptive media) it’s increasing.
- Map your AI use cases to jurisdictional obligations.
  - Build and maintain a register of “high‑risk” uses (credit, employment, housing, health, education, public benefits), aligned with NIST’s AI RMF “Govern‑Map‑Measure‑Manage” functions; a register sketch follows this list. NIST AI RMF.
- Stand up (or refresh) your risk management program.
  - If you deploy high‑stakes models, document intended use, training data sources, evaluation methods, post‑deployment monitoring, and user disclosures. The NIST AI RMF Playbook offers concrete checklists. NIST Playbook.
- For hiring tools, prepare for audits, even if you’re outside NYC.
  - NYC’s AEDT law set a template: vendors and employers should contractually allocate audit responsibilities, ensure appropriate validation, and publish audit summaries when required. NYC DCWP.
- For generative media, implement provenance and labeling.
  - Adopt C2PA Content Credentials for images and video, and add visible AI labels where required or prudent; it is becoming a de facto standard across major platforms and tools. A minimal manifest sketch follows this list. C2PA; Adobe overview.
- Tighten incident response for deceptive AI content.
  - Election‑adjacent rules are evolving quickly; some are enforceable and others are being litigated. Ensure you can act fast on takedowns, labels, and user notices in geographies with active mandates; a jurisdiction‑gated playbook sketch follows this list. California AB 2655.
- Watch state AG activity.
  - Colorado’s AG will have exclusive enforcement authority under its AI Act, and Illinois’ retooled BIPA still drives privacy litigation. Plan for inquiries and preserve audit trails. Colorado SB 24‑205; Reuters on BIPA update.
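To make the register concrete, here is a minimal sketch, assuming a Python‑based internal tooling stack; the class, the helper, and the example entry are illustrative names, not drawn from any statute, framework, or library.

```python
# Minimal sketch of an AI use-case register. Domain names mirror the
# high-risk areas listed above; this is an illustrative pattern, not a
# legal taxonomy or an official NIST artifact.
from dataclasses import dataclass, field

HIGH_RISK_DOMAINS = {
    "credit", "employment", "housing", "health", "education", "public_benefits",
}

@dataclass
class AIUseCase:
    name: str                # e.g., "resume screening model v3"
    domain: str              # business domain the system touches
    owner: str               # accountable team (the RMF "Govern" hook)
    jurisdictions: set = field(default_factory=set)  # states/cities where live
    rmf_notes: dict = field(default_factory=dict)    # Map/Measure/Manage notes

    @property
    def high_risk(self) -> bool:
        return self.domain in HIGH_RISK_DOMAINS

def needs_nyc_bias_audit(uc: AIUseCase) -> bool:
    """Hypothetical trigger: Local Law 144-style audit duties attach to
    employment tools used in New York City."""
    return uc.domain == "employment" and "NYC" in uc.jurisdictions

# Example entry (hypothetical system).
screening = AIUseCase(
    name="resume screening model v3",
    domain="employment",
    owner="talent-platform",
    jurisdictions={"NYC", "CO", "CA"},
    rmf_notes={"Measure": "quarterly adverse-impact testing"},
)
assert screening.high_risk and needs_nyc_bias_audit(screening)
```

A register like this doubles as the inventory most state regimes presuppose: the jurisdiction set drives which obligations attach, and the notes field keeps evaluation and monitoring evidence discoverable when an inquiry arrives.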
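For provenance, C2PA expresses Content Credentials as a signed manifest of assertions. The sketch below builds the manifest shape as a plain dict rather than calling any particular SDK (signing and embedding the credential require one); `claim_generator` is a hypothetical tool name, and `trainedAlgorithmicMedia` is the IPTC digital source type for fully AI‑generated media.

```python
# Sketch of a C2PA-style manifest declaring AI-generated provenance.
# The dict mirrors the spec's manifest shape; a C2PA SDK would sign it
# and embed it in the asset, which this sketch does not attempt.
import json

manifest = {
    "claim_generator": "acme-media-pipeline/1.0",  # hypothetical generator
    "title": "product-hero.png",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC term for fully AI-generated media
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/digitalsourcetype/"
                            "trainedAlgorithmicMedia"
                        ),
                    }
                ]
            },
        }
    ],
}

print(json.dumps(manifest, indent=2))
```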
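And for incident response, a jurisdiction‑gated playbook can live as a small policy table that counsel maintains as rules change or are enjoined. The sketch below shows the pattern only; the state‑to‑action entries are placeholders, not statements of current law.

```python
# Sketch of a jurisdiction-gated response playbook for flagged synthetic
# media. The table entries are placeholders for what legal counsel would
# maintain; they are not a statement of current law.
from enum import Enum

class Action(Enum):
    LABEL = "apply AI-content label"
    REMOVE = "remove or restrict distribution"
    NOTIFY = "notify the uploader and reporters"

PLAYBOOK = {
    "CA": [Action.LABEL, Action.REMOVE, Action.NOTIFY],  # AB 2655-style duties
    "*": [Action.LABEL],  # default posture everywhere else
}

def response_plan(state: str) -> list:
    """Actions to queue for a confirmed deceptive-media report in a state."""
    return PLAYBOOK.get(state, PLAYBOOK["*"])

print([a.value for a in response_plan("CA")])
```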
What could come next
- A narrower federal effort: Reporting indicates the administration may revisit a tailored order or seek congressional language in a must‑pass bill (e.g., NDAA) to curb specific state provisions. For now, the broader attempt is on ice. Reuters; Bloomberg.
- More state activity in 2026: Colorado’s framework phases in; more states are drafting targeted rules (voice likeness, deepfakes in elections, sector‑specific disclosures). Expect continued divergence.
- Courts as the referee: With no comprehensive federal statute, litigation will keep shaping the boundaries—especially around compelled speech and content regulation.
What the leaked federal order signaled vs. the status quo
| Area | Leaked EO concept (reporting) | Status after the pause |
|---|---|---|
| DOJ role | Create “AI Litigation Task Force” to sue states over certain AI laws | No task force created; states keep enforcing their laws. Reuters, Politico |
| Federal funds | Consider conditioning some broadband funds on state AI policies | No new conditions; BEAD remains governed by NTIA program rules. NTIA |
| Agency harmonization | Explore federal disclosure standards that could override conflicting state rules | No change; no uniform federal standards adopted via EO |
| Congressional path | Prior moratorium attempt failed (99–1 Senate vote to strip) | High bar for broad preemption without new legislation. Covington |
Bottom line
The federal bid to short‑circuit state AI rules is paused. Until Congress acts or agencies promulgate clear, well‑grounded standards, enterprises should operate on the assumption that state and local AI obligations will continue to expand—and diverge. The teams that win in 2026 will be those that treat AI governance like any other scaling discipline: documented, automated where possible, and designed to absorb regulatory variance without derailing the roadmap.
Sources
- Reuters: White House pauses draft order to preempt state AI laws (Nov. 22, 2025). Link
- Politico: Leaked draft EO to block state AI laws (Nov. 19, 2025). Link
- Washington Post: DOJ‑led litigation concept against state AI rules (Nov. 19, 2025). Link
- Covington Global Policy Watch: Senate strips state‑AI moratorium from reconciliation (July 2025). Link
- NTIA: BEAD program overview and funding amount. Link
- Colorado AI Act (SB 24‑205). Link; Timing update (SB 25B‑004). Link
- NYC Local Law 144 (AEDT). Link
- Tennessee ELVIS Act. Link
- California AB 2655 deepfake election content duties; governor release. Link; AB 2839 injunction coverage. Link
- NIST AI Risk Management Framework and Playbook. Link; Playbook
- NetChoice statements on state AI preemption. Letter; Analysis
- Anthropic CEO perspective on preemption (Reuters summary). Link