What New York just passed — and why it matters

On December 19, 2025, Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act (S6953B/A6453B) into law, giving New York a formal framework for governing “frontier” AI models. The law requires the biggest model developers to publish safety plans, withhold systems that present an unreasonable risk of “critical harm,” and report major safety incidents to the state within 72 hours. It also creates a new oversight office inside the Department of Financial Services (DFS) to assess large developers and publish annual reports. Governor’s announcement.

Who’s in scope: New York’s definition of “frontier” and “large developer”

New York’s law is carefully scoped at the top end of the market:

  • Frontier model: A model trained with more than 10^26 computational operations and costing over $100 million to train, or a model produced by knowledge distillation from such a model (with a $5 million compute-cost threshold for the distilled model). Legislative text, S6953B.
  • Large developer: Any person or company that has trained at least one frontier model and spent over $100 million in aggregate compute costs training frontier models. Accredited colleges and universities engaged in academic research are excluded. S6953B.
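
Because the scope turns entirely on numeric thresholds, the definitions above can be sketched as simple checks. This is an illustrative reading only, not a legal test; all function and constant names here are assumptions, and the statute's full definitions in S6953B control.

```python
# Illustrative sketch of the RAISE Act scope thresholds (S6953B).
# Names and simplified logic are assumptions for illustration only.

FLOP_THRESHOLD = 10**26            # training compute, in operations
COST_THRESHOLD = 100_000_000       # training compute cost, USD
DISTILL_COST_THRESHOLD = 5_000_000 # compute-cost threshold for distilled models

def is_frontier_model(ops: float, cost_usd: float,
                      distilled_from_frontier: bool = False) -> bool:
    """Rough reading of the 'frontier model' definition."""
    if distilled_from_frontier:
        return cost_usd > DISTILL_COST_THRESHOLD
    return ops > FLOP_THRESHOLD and cost_usd > COST_THRESHOLD

def is_large_developer(trained_frontier_model: bool,
                       aggregate_compute_cost_usd: float,
                       is_accredited_academic: bool = False) -> bool:
    """Rough reading of the 'large developer' definition."""
    if is_accredited_academic:  # academic research is excluded
        return False
    return (trained_frontier_model
            and aggregate_compute_cost_usd > COST_THRESHOLD)
```

For example, a model trained with 2×10^26 operations at a $150M compute cost would satisfy both prongs, while the same compute at a $50M cost would not.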

These thresholds mean the law targets a small set of labs operating at the frontier; for context, widely cited estimates place training costs for state-of-the-art systems in the tens to hundreds of millions of dollars. Forbes.

What the RAISE Act requires

The core obligations for large developers include:

  • Publish and maintain a safety and security protocol (with appropriate redactions to protect security and trade secrets), and retain an unredacted copy and test records for as long as a model is deployed plus five years. S6953B.
  • Do not deploy a frontier model if doing so would create an unreasonable risk of critical harm. S6953B.
  • Report a “safety incident” within 72 hours of learning of it—or of learning facts sufficient to reasonably believe one occurred—to both the New York Attorney General and the Division of Homeland Security and Emergency Services (DHSES). S6953B.
  • Expect DFS oversight: a new office within DFS will assess large developers and publish annual transparency reports. Governor’s announcement.

Enforcement and penalties

  • The Attorney General can bring civil actions for violations. In the final chaptered law, penalties are up to $1 million for a first violation and up to $3 million for subsequent violations. Governor’s announcement.
  • The bill as originally passed by the Legislature set higher penalties (up to $10 million for a first violation and $30 million for subsequent ones), but the Governor’s office confirms that chapter amendments lowered them to the final levels above. S6953 history.

When it takes effect

The act takes effect 90 days after it becomes law, which places the effective date around March 19, 2026. S6953B. Agencies will now begin implementation and rulemaking (including standing up the DFS oversight office) ahead of the first annual reports.

How New York compares with California’s SB 53

California became the first state to adopt a frontier-AI transparency law—SB 53—on September 29, 2025. New York’s approach explicitly “builds on” that framework but differs in key ways. CA Governor’s SB 53 release; Latham & Watkins analysis.

California SB 53 vs. New York RAISE (at a glance)

| Feature | California SB 53 | New York RAISE |
| --- | --- | --- |
| Who’s covered | “Frontier developers,” with additional duties for “large” frontier developers (>$500M prior-year revenue) | “Large developers” defined by compute spend (>$100M aggregate) who trained at least one frontier model |
| Frontier model threshold | >10^26 operations | >10^26 operations and >$100M compute cost; distilled models >$5M compute |
| Transparency artifacts | Public “frontier AI framework” and transparency reports | Public safety and security protocol; annual updates; retention of test records |
| Incident reporting | Critical safety incidents to California OES | Safety incidents to NY AG and DHSES within 72 hours |
| Withholding risky models | Emphasis on frameworks and transparency | Explicit ban on deploying models that present an unreasonable risk of critical harm |
| Penalties | Up to $1M per violation (AG enforcement) | Up to $1M (first) / $3M (subsequent) per violation (AG enforcement) |

The federal backdrop: a brewing preemption fight

On December 11, 2025, the White House issued an executive order directing DOJ to form an “AI Litigation Task Force” to challenge state AI laws deemed inconsistent with a national policy, setting up an inevitable test of federal preemption. New York’s enactment arrives despite that pressure, and California leaders have already signaled opposition to federal attempts to nullify state AI rules. White House executive order; The Guardian; Time.

What it means for builders and buyers of AI

For frontier labs

  • Treat New York and California as the new baseline: alignment across both regimes—frameworks/safety plans, risk testing, model-weight security, and rapid incident reporting—will drive compliance strategies. CA SB 53; NY RAISE.
  • Build “holdback” criteria into launch gates: New York’s explicit prohibition on deploying models that present unreasonable risk requires documented go/no-go thresholds and escalation paths. S6953B.
  • Prepare incident-response muscle memory: 72-hour reporting to AG and DHSES requires named owners, playbooks, and immutable logging for investigations. S6953B.
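
The 72-hour clock above starts when the developer learns of the incident (or of facts sufficient to reasonably believe one occurred), so response tooling needs to track the deadline from that moment. A minimal sketch, assuming a simple timezone-aware helper (the function names here are illustrative, not from the statute):

```python
# Minimal sketch of tracking the RAISE Act's 72-hour reporting window
# (S6953B). Helper names are assumptions for illustration only.
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(discovered_at: datetime) -> datetime:
    """Latest time to notify the NY AG and DHSES, per the 72-hour rule."""
    return discovered_at + REPORTING_WINDOW

def hours_remaining(discovered_at: datetime, now: datetime) -> float:
    """Hours left in the reporting window (negative if overdue)."""
    return (reporting_deadline(discovered_at) - now).total_seconds() / 3600
```

Using timezone-aware timestamps (e.g. UTC) avoids ambiguity when the discovery and the report happen in different jurisdictions.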

For enterprises procuring AI

  • Ask vendors for their safety and security protocol and incident-reporting commitments. Use these to update vendor due-diligence questionnaires and SLAs.
  • Expect more transparency over high-capability models (evals, red-teaming results, weight security measures), which can improve risk management in regulated industries.

What happens next

  • DFS will stand up its new oversight office and begin annual assessments and reporting. Governor’s announcement.
  • Watch for rulemaking/clarifications on definitions, reporting formats, and how the state will evaluate “unreasonable risk.”
  • Expect legal challenges testing federal preemption; until courts decide, companies should treat New York’s and California’s requirements as enforceable.

Sources

  • New York Governor’s Office: “Governor Hochul Signs Nation-Leading Legislation to Require AI Frameworks for AI Frontier Models” (Dec 19, 2025) — confirms signing, DFS oversight office, 72-hour reporting, and $1M/$3M penalties. Link
  • NY Assembly bill page and actions for S06953 (Chapter 699; Approval Memo 76). Link
  • Full bill text (enrolled S6953B) — definitions, thresholds, reporting, and prohibitions. Link
  • California SB 53 signing release (Sep 29, 2025). Link
  • Latham & Watkins analysis comparing SB 53 scope, duties, penalties. Link
  • Axios: overview of New York’s final bill and alignment with SB 53. Link
  • White House Executive Order on federal preemption posture and litigation task force (Dec 11, 2025). Link
  • Coverage of state pushback to federal preemption attempt. Guardian, Time
  • Training-cost context for frontier models. Forbes