Key Stories (past 48 hours)

[Image: a split scene of a spreadsheet filled with AI-generated insights beside a data center aisle of GPU and TPU servers, with regulation and geopolitics hinted in the background]

Anthropic launches Claude Opus 4.5 with memory and agentic upgrades

Full analysis: Anthropic launches Claude Opus 4.5 with memory and agentic upgrades

Anthropic released Opus 4.5 on November 24 with notable improvements to coding, “agentic” workflows, and computer-use tasks, alongside integrations for Chrome and Excel and a new “endless chat” memory experience. Pricing dropped to $5 (input) / $25 (output) per million tokens, broadening access to frontier-level capabilities. Anthropic says Opus 4.5 sets new marks on practical coding benchmarks; TechCrunch reports the model crosses 80% on SWE-bench Verified and emphasizes the productization of memory for longer tasks. For engineering, data, and ops teams, this is a signal that agent workflows and spreadsheet-native copilots are moving from pilots to production.

<<stat label="Opus 4.5 SWE-bench Verified" value=">80%" source="anthropic-2025-11-24">>
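As a rough cost sketch using the pricing quoted above ($5 per million input tokens, $25 per million output tokens), teams can ballpark monthly spend for a given workload; the request counts and per-request token profile below are illustrative assumptions, not benchmarks:

```python
# Cost sketch at Opus 4.5 list pricing; workload numbers are assumptions.
INPUT_PER_M = 5.00    # USD per 1M input tokens
OUTPUT_PER_M = 25.00  # USD per 1M output tokens

def monthly_cost(requests: int, in_tokens: int, out_tokens: int) -> float:
    """Estimate monthly spend for a fixed per-request token profile."""
    total_in = requests * in_tokens
    total_out = requests * out_tokens
    return (total_in / 1e6) * INPUT_PER_M + (total_out / 1e6) * OUTPUT_PER_M

# e.g. 10k agent runs/month, ~20k input + ~4k output tokens per run
print(f"${monthly_cost(10_000, 20_000, 4_000):,.2f}")  # → $2,000.00
```

At this profile, input and output each contribute $1,000, a reminder that agentic workloads with long tool transcripts can be input-dominated even at a 5x output price premium.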

35 state AGs to Congress: don’t preempt state AI laws

Full analysis: 35 state AGs tell Congress: don’t preempt state AI laws

A bipartisan coalition of attorneys general from 35 states and Washington, D.C., urged congressional leaders on November 25 to reject federal preemption of state AI rules, warning of “disastrous consequences” if states can’t address harms from deepfakes, child safety risks, and chatbot misuse. The New York AG’s office published the coalition letter the same day. For AI builders, this raises the likelihood of a multi-jurisdictional compliance map in 2026–2027, even if national standards eventually arrive.

<<stat label="AGs opposing AI preemption" value="36 offices" source="nyag-2025-11-25">>

China reportedly blocks ByteDance from using Nvidia chips in new data centers

Full analysis: China blocks ByteDance from using Nvidia chips in new data centers

Chinese regulators have reportedly barred ByteDance from deploying Nvidia accelerators in upcoming facilities, pushing the company toward domestic silicon amid U.S.–China tech tensions, according to a Reuters report on November 26. For multinational AI roadmaps, this underscores a broader shift toward regionalized compute stacks and supply-chain hedging between GPUs and local ASICs—implications that stretch beyond China.

AWS pledges up to $50B for U.S. government AI and supercomputing

Amazon announced on November 24 that it will invest up to $50B to expand AI/HPC capacity for federal customers across AWS GovCloud, Secret, and Top Secret regions, adding roughly 1.3 GW of specialized compute beginning in 2026. Expect procurement cycles to tilt toward secure agentic workflows, sovereign controls, and mixed accelerator stacks (Trainium + Nvidia) in classified and regulated environments.

<<stat label="AWS US Gov AI capacity add" value="~1.3 GW" source="amazon-2025-11-24">>

Perplexity rolls out an AI personal shopper with PayPal “Instant Buy”

Perplexity introduced a shopping assistant that surfaces product cards, reviews, and specs, and can complete purchases via PayPal’s Instant Buy, starting with desktop/web in the U.S. The launch intensifies the race to make AI assistants directly transactional and could reshape how brands measure attribution and conversions over the holidays.

Meta weighs Google TPUs as Nvidia stresses its lead

Following reports Meta may spend billions on Google’s TPUs (including rentals as early as 2026), Nvidia issued an unusual public statement asserting its platform is “a generation ahead.” Analysts highlighted that any shift is likely a hedge against supply constraints rather than a wholesale platform switch, but the competitive narrative pressured NVDA shares. For builders, this is another nudge to plan for multi-accelerator portability.


Emerging Trends

Agentic AI goes mainstream

Opus 4.5’s focus on longer-running plans, tool orchestration, and spreadsheet/computer use—plus “endless chat” memory—marks a practical turn from demos to deployable agent workflows. Early signals: expanded Chrome and Excel experiences and higher usage caps for paid tiers. Expect vendors to compete not just on tokens and context windows, but on plan quality, memory handling, and safe autonomy controls.
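The plan-then-act pattern behind these workflows can be sketched as a loop: the model proposes a tool call, an executor runs it, and results are fed back until the model returns a final answer. The planner below is a stub (names like `plan_step` and `TOOLS` are illustrative; in production the planner is a model call):

```python
# Minimal agent-loop sketch; plan_step stands in for a real model call.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda q: f"results for {q!r}",
    "calc": lambda expr: str(eval(expr, {"__builtins__": {}})),  # arithmetic only
}

def plan_step(history: list[str]) -> tuple[str, str]:
    """Stub planner: returns either ('final', answer) or a
    (tool_name, argument) pair, like a real model would."""
    if not any(h.startswith("calc:") for h in history):
        return ("calc", "6 * 7")
    return ("final", history[-1].split(":", 1)[1])

def run_agent(max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        tool, arg = plan_step(history)
        if tool == "final":
            return arg
        history.append(f"{tool}:{TOOLS[tool](arg)}")
    raise RuntimeError("agent exceeded step budget")

print(run_agent())  # → 42
```

The step budget and the feedback transcript are where "plan quality" and "safe autonomy controls" bite: vendors differentiate on how well the planner uses that history and when the loop is forced to stop.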

Hardware hedging and regionalization of compute

Over two days, we saw: (1) China pushing ByteDance off Nvidia chips in new builds, and (2) Meta evaluating Google TPUs, prompting Nvidia’s response. The throughline is risk mitigation—capacity, cost, export controls, and geopolitics—driving multi-accelerator strategies (GPU + TPU + domestic ASICs). Tooling portability and vendor neutrality are becoming table stakes.
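At the tooling layer, this kind of hedging often reduces to an ordered-preference fallback over accelerator backends. A minimal sketch, assuming a hypothetical `select_backend` helper and made-up backend names (real stacks would wrap vendor runtimes):

```python
# Backend-portability sketch for mixed GPU/TPU/ASIC fleets; all names
# here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Backend:
    name: str
    available: bool

def select_backend(preferred: list[str], fleet: dict[str, Backend]) -> Backend:
    """Pick the first available backend from an ordered preference list,
    so workloads degrade gracefully when one vendor's capacity is blocked."""
    for name in preferred:
        be = fleet.get(name)
        if be and be.available:
            return be
    raise RuntimeError("no accelerator backend available")

fleet = {
    "nvidia_gpu": Backend("nvidia_gpu", available=False),  # e.g. export-restricted
    "google_tpu": Backend("google_tpu", available=True),
    "domestic_asic": Backend("domestic_asic", available=True),
}
print(select_backend(["nvidia_gpu", "google_tpu", "domestic_asic"], fleet).name)
# → google_tpu
```

The hard part in practice is not the selection logic but keeping models and kernels portable enough that the fallback actually runs, which is why framework abstraction shows up as table stakes.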

Public-sector AI buildout accelerates

AWS’s up-to-$50B commitment for U.S. government AI/HPC from 2026 signals multi-year demand for accelerators, HBM, liquid cooling, and power. Agencies will have expanded access to Bedrock/Claude/Nova and secure Trainium+Nvidia configurations, likely catalyzing use cases from intel fusion to biomedical modeling. Vendors serving the public sector should prepare compliance artifacts and ATO-ready blueprints.

Shopping assistants become a conversion layer

Perplexity’s assistant and OpenAI’s new “shopping research” mode point to assistants owning more of the discovery-to-decision funnel, with memory and clarifying questions improving product-market fit. Expect attribution models to evolve and retail ops to test merchant integrations and instant checkout rails.


Conversations & Insights

“Benchmarks are nice; memory and price win deployments”

Where: r/Anthropic threads, TechCrunch interviews, product blogs (Nov 24–25). Voices: Anthropic PMs discussing memory vs. pure context scaling; developers noting price cuts and usage-limit changes; trade press highlighting >80% SWE-bench Verified. Takeaway: The conversation is shifting from headline benchmarks to total cost of capability—stable memory, plan quality, and integration surface area (Excel/Chrome) are emerging as decisive enterprise factors.

“TPU vs GPU isn’t binary—it's hedging, not replacement”

Where: X/financial press analysis (Nov 25–26). Voices: Nvidia’s newsroom post emphasizing platform breadth; market analysts arguing TPUs are a capacity hedge amid GPU lead times; coverage in Barron’s on investor reaction. Takeaway: Expect mixed fleets and software portability to matter more than single-vendor roadmaps; CUDA lock-in vs. alternative stacks will be a 2026 planning theme.

“Federal standard vs. state patchwork?”

Where: AG coalition letter; policy press (Nov 25–26). Voices: State AGs arguing for agility to address harms; industry voices pushing uniform national standards; reporting on (now-shelved) preemption efforts adding fuel to the debate. Takeaway: Compliance leaders should assume a multi-state regime through 2026 at least; invest in policy monitoring and adaptable AI assurance frameworks for disclosures, safety, and risk management.


Quick Takeaways

  • Budget for agentic pilots in 1H26: prioritize workflows where memory + tool orchestration yields clear ROI (e.g., code refactors, financial modeling, ops runbooks). Start with tiered rollouts using Opus 4.5-class models.
  • Assume multi-accelerator futures: design for portability (containerized runtimes, framework abstraction, model/tool decoupling) to hedge GPU/TPU/ASIC constraints and regional rules.
  • Prepare for state-by-state guardrails: map upcoming disclosures, AI literacy, and model-risk obligations; keep templates modular to slot in state variations without rework.
  • Public-sector partners: align offerings to ATO pathways and sovereign data patterns now; demand will spike as the AWS buildout progresses.

Sources