The short version

Nvidia just posted a record-smashing quarter: $57.0 billion in revenue for the three months ended October 26, 2025, with $51.2 billion coming from its data center business. Gross margins held in the mid‑70s and management guided next quarter to roughly $65 billion. In CEO Jensen Huang’s words, “Blackwell sales are off the charts,” and cloud GPUs are effectively sold out. Nvidia Q3 FY26 press release.

[Image: rows of GPU racks glowing green inside an AI data center, evoking an "AI factory" humming with activity]

Why this quarter matters for AI and automation

The headline numbers aren’t just about one company. They signal a step‑function increase in global AI compute:

  • Data center revenue jumped 25% quarter-over-quarter and 66% year-over-year, reflecting an accelerating buildout of what Huang calls “AI factories” that turn energy and data into tokens and software. Nvidia Q3 FY26 press release.
  • Nvidia and OpenAI signed a strategic partnership to deploy at least 10 gigawatts of Nvidia systems—equivalent to millions of GPUs—starting in 2026, with Nvidia intending to invest up to $100 billion as capacity is deployed. This is the most visible proof point that frontier AI teams are planning for unprecedented compute scale. Nvidia IR: OpenAI–Nvidia 10 GW LOI, OpenAI post.
  • Performance leadership continues at the rack scale: Nvidia’s Blackwell Ultra platform (GB300 NVL72) set new records in the MLPerf Inference v5.1 suite, particularly on reasoning workloads—relevant as teams push toward longer-context, agentic systems. Nvidia blog: Blackwell Ultra MLPerf.
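The "10 gigawatts = millions of GPUs" equivalence only holds under an assumption about per-accelerator power draw, which neither announcement specifies. A rough sanity check, assuming about 1.2 kW of facility power per deployed GPU (accelerator plus its share of CPUs, networking, and cooling; an assumed figure, not from Nvidia):

```python
# Back-of-envelope: how many GPUs fit in 10 GW of facility power?
# WATTS_PER_GPU is an illustrative assumption, not a quoted spec.
FACILITY_POWER_W = 10e9      # 10 gigawatts
WATTS_PER_GPU = 1_200        # assumed: GPU + share of CPU/network/cooling

gpus = FACILITY_POWER_W / WATTS_PER_GPU
print(f"~{gpus / 1e6:.1f} million GPUs")  # ~8.3 million GPUs
```

Varying the per-GPU assumption between 1 kW and 2 kW still lands in the 5–10 million range, consistent with the "millions of GPUs" framing.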

Under the hood: where the growth is coming from

Three pillars are driving this overdrive phase of AI compute:

  1. Blackwell-era accelerators and systems

    • Nvidia’s Blackwell family—now including Blackwell Ultra—anchors the training and (increasingly) inference buildout in the major clouds and specialized providers. Rack-scale NVL72 systems, high-bandwidth memory, fifth‑gen NVLink/NVSwitch, and Grace CPU pairing are pushing cluster utilization and TCO in the right direction. Nvidia blog: Blackwell Ultra MLPerf.
  2. Networking and full-stack plumbing

    • Hyperscalers such as Meta and Oracle are expanding adoption of Nvidia’s Spectrum‑X Ethernet to wire up “giga‑scale” AI factories—an alternative to InfiniBand with congestion control tuned for LLM training. Nvidia newsroom: Spectrum‑X for Meta and Oracle.
    • At the silicon-fabric layer, Arm announced Neoverse integration with Nvidia NVLink Fusion for coherent CPU‑accelerator designs, pointing to tighter CPU–GPU coupling across the ecosystem. Arm Newsroom.
    • Nvidia also introduced NVQLink to couple GPU supercomputers with quantum processors at leading U.S. labs—early, but a clear signal of hybrid compute roadmaps. Nvidia newsroom: NVQLink.
  3. System software and data movement

    • BlueField‑4—the DPU at the heart of Nvidia’s “AI factory” operating system—adds 800Gb/s networking and DOCA microservices to accelerate storage, security, and multi‑tenant scheduling at cluster scale. Nvidia blog: BlueField‑4.
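To put BlueField‑4's 800 Gb/s in context, a quick line-rate calculation (illustrative only; real transfers lose throughput to protocol overhead and storage backends):

```python
# Time to move a 1 TB model checkpoint at an 800 Gb/s line rate.
# Ignores protocol overhead, so real-world transfers will be slower.
LINK_GBPS = 800                        # gigabits per second
CHECKPOINT_BYTES = 1e12                # 1 TB checkpoint

bytes_per_sec = LINK_GBPS * 1e9 / 8    # 100 GB/s at line rate
seconds = CHECKPOINT_BYTES / bytes_per_sec
print(f"{seconds:.0f} s")              # 10 s
```

At that rate, checkpointing and restoring multi-terabyte model state becomes a seconds-scale operation rather than a minutes-scale one, which is why data movement shows up as a first-class concern at cluster scale.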

The numbers at a glance

Q3 FY2026 revenue by segment

Segment                       Revenue   Q/Q     Y/Y
Data Center                   $51.2B    +25%    +66%
Gaming                        $4.3B     −1%     +30%
Professional Visualization    $0.76B    +26%    +56%
Automotive & Robotics         $0.59B    +1%     +32%
Total                         $57.0B    +22%    +62%

Source: Nvidia Q3 FY26 press release.
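As a consistency check, the rounded segment figures should sum to roughly the reported total, and each growth rate implies a prior-period base. A quick sketch using the table's rounded numbers (so small discrepancies are expected):

```python
# Segment revenue in $B, taken from the rounded figures in the table above.
segments = {
    "Data Center": 51.2,
    "Gaming": 4.3,
    "Professional Visualization": 0.76,
    "Automotive & Robotics": 0.59,
}
total = sum(segments.values())
print(f"Segment sum: ${total:.2f}B vs reported $57.0B")  # ~$0.15B rounding gap

# The +25% Q/Q data center figure implies a prior-quarter base:
prior_q_dc = 51.2 / 1.25
print(f"Implied prior-quarter data center revenue: ~${prior_q_dc:.1f}B")
```

The implied prior-quarter data center base of roughly $41B matches the sequential-growth arithmetic; the small gap between the segment sum and $57.0B is rounding in the per-segment figures.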

Supply, demand—and the new rate limiters

Demand is clearly outrunning near‑term supply; Nvidia said “cloud GPUs are sold out.” The company simultaneously highlighted milestones meant to expand capacity and resilience:

  • Onshoring steps with foundry partners—Nvidia and TSMC celebrated the first U.S.-produced Blackwell wafer in Arizona this fall. Nvidia blog.
  • Sovereign and hyperscale projects are ballooning in scale: beyond OpenAI’s 10‑GW plan, governments and consortia are announcing national AI infrastructure programs. Reuters: Stargate sites expansion.

The practical bottlenecks are shifting from GPUs alone to land, power, and networking:

  • Power availability is emerging as the rate limiter in key metros; even turnkey facilities can sit idle awaiting grid upgrades. Fortune report on power constraints.
  • Nvidia’s own long‑term thesis is that next‑gen, “reasoning” AI will require far more compute—not less—intensifying that infrastructure push. CNBC interview, Feb 26, 2025.

A note on “circular deals” and concentration risk

The quarter also spotlighted a financing pattern drawing scrutiny: suppliers investing in (and selling to) their biggest customers.

  • Nvidia intends to invest up to $100B in OpenAI tied to 10 GW of deployments, while OpenAI commits to fill those sites with Nvidia systems. Nvidia IR. Separate announcements have Microsoft and Nvidia planning to invest up to $15B in Anthropic, alongside a large Azure compute commitment from Anthropic. Financial Times.
  • Analysts debate whether these are efficient ecosystem investments or “circular” vendor financing that could overstate durable end‑demand. Bloomberg and others have documented the concern. Bloomberg video explainer.

What this means for builders and automation leaders

If you’re planning AI‑driven automation at scale, Nvidia’s print has practical implications:

  • Expect better inference economics: Blackwell‑era systems are posting large step‑ups in tokens/second/Watt on real benchmarks, which should translate into lower unit cost for high‑throughput workloads. Nvidia blog: Blackwell Ultra MLPerf.
  • Design for long‑context and agentic patterns: hardware roadmaps (e.g., Rubin CPX for “massive context” inference) are explicitly targeting million‑token windows and complex tool use—opening the door to full‑codebase copilots, procedure‑aware RPA, and multi‑hour video understanding. Nvidia newsroom: Rubin CPX.
  • Plan for networking as a first‑class constraint: early adopters standardizing on Spectrum‑X or NVLink Fusion are doing so to keep training efficiency predictable as clusters scale. Nvidia Spectrum‑X, Arm + NVLink Fusion.
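The "better inference economics" point reduces to a simple unit-cost formula: cost per million tokens equals the hourly instance price divided by tokens generated per hour. The throughput and price below are hypothetical placeholders, not quoted figures:

```python
# Unit cost of inference: $/1M tokens = hourly price / (tokens/sec * 3600) * 1e6
# Both example inputs are hypothetical; substitute your provider's numbers.
def cost_per_million_tokens(price_per_hour: float, tokens_per_sec: float) -> float:
    tokens_per_hour = tokens_per_sec * 3600
    return price_per_hour / tokens_per_hour * 1e6

# Example: a $10/hr instance sustaining 5,000 tok/s across batched requests.
print(f"${cost_per_million_tokens(10.0, 5_000):.2f} per 1M tokens")  # $0.56
```

The formula makes the hardware story concrete: any step-up in sustained tokens/second at a fixed instance price (or in tokens/second/Watt at a fixed power budget) drops unit cost proportionally.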

Outlook

Nvidia guided Q4 FY26 revenue to $65B (±2%) with non‑GAAP gross margin around 75%, reaffirming a multi‑quarter growth arc powered by Blackwell shipments today and Rubin‑class systems in 2026. Whether you cheer or worry about the scale, one takeaway is clear: AI compute has shifted from “scarce and experimental” to “strategic and industrial.” For automation leaders, that means the capacity to run bigger, longer, and more capable AI is arriving faster than most roadmaps anticipated—plan accordingly. Nvidia Q3 FY26 press release.
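For reference, the ±2% guidance translates into a revenue band:

```python
# Q4 FY26 guidance: $65B midpoint, plus or minus 2%.
midpoint = 65.0   # $B
band = 0.02
low, high = midpoint * (1 - band), midpoint * (1 + band)
print(f"${low:.1f}B to ${high:.1f}B")  # $63.7B to $66.3B
```

Even the low end of that band would set another sequential revenue record.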


Sources