Today in AI – 12-20-2025

A composite visual of a New York State capitol silhouette, semiconductors, and a data center row, representing policy, chips, and infrastructure

Key stories (past 48 hours)

  1. New York signs the RAISE Act, creating a state-level baseline for frontier AI safety — Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act on December 19. The law requires large frontier-model developers to publish safety frameworks, report “critical harm” incidents within 72 hours, and submit to oversight by a new office at the Department of Financial Services; penalties can reach $1M for a first violation and $3M for each subsequent violation. The Wall Street Journal reports that core provisions take effect on January 1, 2027. The move tracks California’s SB 53 but imposes stricter incident reporting, signaling that New York and California may set a de facto U.S. baseline while federal rules lag. Read our standalone brief.

  2. Washington eyes partial reopening of AI chip sales to China — The U.S. administration has launched an interagency review that could permit licensed sales of Nvidia’s H200 to China, with a proposed 25% fee on sales; decisions will follow a 30‑day process, with the final call resting with the President, according to an exclusive Reuters report. The move would break from prior blanket prohibitions and reshape supply assumptions for training and inference workloads across Chinese cloud and internet platforms. Read our standalone brief.

  3. SoftBank races to deliver $22.5B to OpenAI by year‑end — Reuters reports Masayoshi Son is selling assets (Nvidia, T‑Mobile US stakes), cutting expenses, and may tap $11.5B of margin loans secured by Arm to fulfill a huge follow‑on commitment that underpins OpenAI’s hyperscale data center buildout. The financing underscores ongoing capital intensity for foundation models and could influence valuations and secondary liquidity across the stack. Read our standalone brief.

  4. Google sues SerpApi over “astonishing scale” scraping of Search results — Google filed suit alleging SerpApi bypassed technical controls (including “SearchGuard”) and resold licensed or copyrighted material from Google properties. The case escalates the legal fight over AI-era data access and licensing, following Reddit’s earlier action against scrapers. Expect ripple effects for LLM retrieval pipelines that rely on third‑party SERP data.

  5. Google Cloud and Palo Alto Networks sign a landmark AI security pact reportedly near $10B — The expanded, multi‑year agreement deepens joint go‑to‑market efforts and embeds Palo Alto’s Prisma AIRS with Google Cloud AI (e.g., Vertex/Agent Engine); one source describes it as Google Cloud’s largest security deal to date. It’s an indicator that “AI security” has become a board‑level budget line as enterprises move from pilots to agentic applications.

  6. FTC clears Nvidia’s $5B investment in Intel — Antitrust agencies approved Nvidia’s strategic investment in Intel, first announced in September. The greenlight could reshape alliances from accelerators to memory and packaging, with second‑order effects for AMD and TSMC as ecosystem boundaries blur between chip design, foundry capacity, and systems integration.

$61B
Data‑center dealmaking (YTD). Source: Reuters/S&P Global (2025‑12‑19).

Emerging trends

  • States step into the policy vacuum — With New York’s RAISE Act now law, statehouses are defining enforceable obligations (transparency, incident reporting, oversight) for frontier models while federal preemption remains uncertain. For AI leaders, that means mapping multi‑state compliance programs, not just federal ones. Early signal: parallel California/NY frameworks and explicit 72‑hour reporting duties.

  • Security becomes the AI transformation’s “first‑class” spending category — The near‑$10B Google Cloud–Palo Alto agreement, plus DOE’s Genesis Mission collaborations across 24 organizations (AWS, Google, Microsoft, NVIDIA, OpenAI, Anthropic, xAI, Oracle, IBM, Intel, AMD, HPE and others), show buyers want native security controls from code to runtime as they deploy agentic systems in production.

  • Data rights and scraping fights intensify — Google’s suit against SerpApi adds to a flurry of platform–scraper litigation and will test how courts treat large‑scale circumvention of anti‑bot measures in the LLM era. Outcome will shape which data pipelines are legally “safe” for retrieval‑augmented generation and benchmarking.

  • Export‑control arbitrage is now a board risk — The U.S. H200 review suggests calibrated loosening; at the same time, reports that Tencent can access Nvidia Blackwell chips via non‑Chinese clouds (e.g., in Japan/Australia) highlight policy gaps around remote access. Expect new guidance on “use vs. possession,” monitoring, and enforcement.

  • Power and real estate constraints move from backdrop to bottleneck — S&P data shows a record $61B in 2025 data‑center dealmaking as utilities and regulators race to keep up (e.g., Georgia’s PSC approved a major generation increase largely for data center demand). AI leaders should model siting, power contracting, and on‑site generation options earlier in roadmaps.


Conversations & insights

  • “Is New York’s AI bill a blueprint or a compromise?” — Discussion spans policy Twitter/LinkedIn and mainstream tech policy outlets. Supporters (NY Governor’s office; Axios coverage) frame it as a pragmatic, nation‑leading baseline; critics (e.g., The American Prospect) argue late‑stage changes watered down bans on releasing high‑risk models and reduced penalties. Takeaway: regardless of politics, incident response and safety reporting processes are becoming table stakes for companies selling into NY.

  • “Who owns the open web in the LLM era?” — Google’s general counsel blog post and coverage in The Verge/Reuters sparked debate among SEOs, devs, and AI practitioners about scraping, licensing, and the line between “public” and “permissioned” data. Some applaud Google’s pushback as necessary to sustain the content ecosystem; others flag the irony given search engines’ own crawling history. Takeaway: teams relying on scraped SERP feeds for retrieval or evaluation should prepare contingencies (direct licensing, publisher APIs, or first‑party corpora).

  • “AI infra: bubble or durable build‑out?” — After S&P counted a record $61B in data‑center M&A/investments and Georgia approved a massive power increase to meet expected AI demand, the investor community is debating ROI timelines versus overbuild risk. Takeaway: procurement leaders should scrutinize long‑term utilization assumptions, power prices, and modular/portable capacity options before locking into multi‑decade commitments.


Quick takeaways

  • Start scoping a NY RAISE Act playbook now (map model inventory, define “critical harm” thresholds, create 72‑hour incident procedures, and assign an executive owner).
  • Expect tighter export‑control guidance on “remote access” to restricted chips; stress‑test supply scenarios with and without H200 access in China‑exposed demand.
  • Treat AI security as a program, not a product: integrate posture management, runtime defense, agent/tool permissions, and red‑team exercises into your SDLC as deployments scale.
  • Revisit data strategy: shift from SERP scraping dependencies to licensed content, publisher APIs, and first‑party knowledge bases to de‑risk legal exposure.
  • Capacity isn’t guaranteed: include power procurement and siting analyses in 2026–2028 AI roadmaps; consider modular data centers or on‑site generation where feasible.

Sources