The short version
Nvidia just delivered another staggering print. For the quarter ended October 26, 2025 (fiscal Q3 2026), revenue hit $57.0 billion, up 22% sequentially and 62% year over year. Data center revenue alone reached $51.2 billion, rising 25% quarter over quarter and 66% year over year—powered by demand for Blackwell-generation AI systems and a surge in high-speed networking. Nvidia guided to $65.0 billion (±2%) for next quarter.

What happened, exactly?
- Headline revenue: $57.0B; GAAP and non‑GAAP EPS: $1.30; non‑GAAP gross margin: 73.6%.
- Segment mix: Data Center $51.215B; Gaming $4.265B; Pro Visualization $0.760B; Automotive $0.592B; OEM/Other $0.174B. Within Data Center, compute was $43.0B and networking $8.2B as customers scaled up NVLink, InfiniBand (Quantum) and Spectrum‑X Ethernet fabrics.
- Guidance: Q4 FY26 revenue of $65.0B (±2%) with gross margins around 75%.
Nvidia Q3 FY2026 revenue by segment
| Segment | Revenue (USD B) | Q/Q | Y/Y |
|---|---|---|---|
| Data Center | 51.215 | +25% | +66% |
| • Compute | 43.028 | +27% | +56% |
| • Networking | 8.187 | +13% | +162% |
| Gaming | 4.265 | −1% | +30% |
| Pro Visualization | 0.760 | +26% | +56% |
| Automotive | 0.592 | +1% | +32% |
| OEM & Other | 0.174 | +1% | +79% |
| Total | 57.006 | +22% | +62% |
Source: Nvidia Q3 FY26 CFO commentary.
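
For readers who want to sanity‑check the math, a quick back‑of‑the‑envelope in Python. The prior‑period bases below are implied from the rounded growth rates in the release, not reported figures, so treat them as approximations:

```python
# Back-of-the-envelope check on the reported growth rates.
# Prior-period bases are implied from rounded percentages, so approximate.
q3_fy26_revenue_b = 57.006   # reported total revenue, USD billions
qoq_growth = 0.22            # reported sequential growth
yoy_growth = 0.62            # reported year-over-year growth

implied_q2_fy26 = q3_fy26_revenue_b / (1 + qoq_growth)   # ~46.7B
implied_q3_fy25 = q3_fy26_revenue_b / (1 + yoy_growth)   # ~35.2B

# Guidance band: $65.0B plus or minus 2%
guide_low, guide_high = 65.0 * 0.98, 65.0 * 1.02         # 63.7B to 66.3B

print(f"Implied Q2 FY26 revenue: ~${implied_q2_fy26:.1f}B")
print(f"Implied Q3 FY25 revenue: ~${implied_q3_fy25:.1f}B")
print(f"Q4 FY26 guidance range: ${guide_low:.1f}B to ${guide_high:.1f}B")
```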
The engine: AI compute plus high‑speed networks
Nvidia says “cloud GPUs are sold out,” reflecting sustained demand for training and increasingly large-scale inference. The company also highlighted a strategic plan to deploy “at least 10 gigawatts” of Nvidia systems with OpenAI, underscoring the shift toward giga‑scale “AI factories.”
Under the hood, this was the first quarter in which Blackwell‑generation systems (including Blackwell Ultra) ran at full throttle as the leading architecture across customer categories. That showed up not just in GPU compute, but in the $8.2B networking line as buyers moved to rack‑scale NVLink systems and AI‑optimized Ethernet/InfiniBand.
Jensen Huang framed the macro trend succinctly: accelerated computing is replacing general‑purpose compute; generative and agentic AI are spreading across industries; and AI is moving into “physical” domains like robotics.
Constraints and risks to watch
- Power and build‑out limits: Nvidia and analysts continue to flag power availability and capital intensity as pacing factors for AI capacity adds. The company remains optimistic, but acknowledged that customers without sufficient capital and energy access could see slower ramps.
- China exposure: Nvidia noted H20 sales were “insignificant” this quarter amid ongoing export restrictions, reinforcing how geopolitics influence product mix.
- Supply versus demand: Management reiterated that demand outstrips supply even as it ramps each new platform. The company still guided confidently for Q4, but the cadence depends on supply chain execution across GPUs, HBM memory, networking silicon and advanced packaging.
Competition and the evolving stack
AMD narrowed the gap this year with MI3xx deployments and unveiled MI400/Helios systems slated for 2026, backed by marquee interest from OpenAI. Analysts now model materially higher AMD data center revenue through the decade. Still, the Q3 print underscores Nvidia’s advantage in full‑stack hardware, networking, and the CUDA software ecosystem.
At the same time, hyperscalers continue investing in custom silicon. The near‑term takeaway for operators: multi‑vendor, multi‑chip strategies are becoming the norm, but Nvidia’s platform remains the default for the largest training runs and many high‑throughput inference workloads—especially where NVLink‑connected racks deliver superior TCO.
What it means for automation and productivity teams
The signal from this quarter is clear: AI is moving from pilots to platforms. If you’re scaling automation, plan around compute, networking and power as first‑class constraints, not afterthoughts.
- Prioritize inference efficiency. Model distillation, 8‑bit/FP4 quantization and retrieval strategies can slash per‑query cost with little quality loss—key to keeping unit economics in line as usage spikes (a minimal quantization sketch follows this list).
- Budget for networks, not just GPUs. The step‑function gains often arrive when you adopt NVLink‑class interconnects (or their Ethernet analogs) and align your software to exploit them.
- Design for portability. Even if you standardize on Nvidia, use frameworks (e.g., ONNX Runtime, Triton Inference Server) and abstraction layers that ease future migration to alternative accelerators where it makes sense (see the export sketch after this list).
- Treat power as a product requirement. Engage facilities and sustainability teams early; the difference between air and liquid cooling, or 30 MW versus 100 MW build‑outs, can dictate timelines (a rough power‑budget sketch closes this section).
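
To make the inference‑efficiency point concrete, here is a minimal sketch of post‑training dynamic quantization in PyTorch. The model is a stand‑in, and real deployments should validate accuracy on a held‑out set; FP4 paths are vendor‑specific and not shown here:

```python
# Minimal sketch: 8-bit dynamic quantization of a placeholder model.
# Weights of Linear layers are stored as int8; activations are quantized
# on the fly at inference time. No retraining or calibration data needed.
import torch
import torch.nn as nn

model = nn.Sequential(        # stand-in for a Linear-heavy inference model
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
).eval()

quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 1024)
with torch.no_grad():
    y = quantized(x)          # ~4x smaller weights, typically faster on CPU
print(y.shape)                # torch.Size([1, 1024])
```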
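
For the portability point, a sketch of exporting a PyTorch model to ONNX and serving it with ONNX Runtime. Swapping execution providers is how the same artifact targets Nvidia GPUs today and other backends later; the model and file name are illustrative:

```python
# Sketch: export to ONNX, then run via ONNX Runtime with provider fallback.
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Linear(1024, 256).eval()        # placeholder model
dummy = torch.randn(1, 1024)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Prefer CUDA when the installed build supports it, else fall back to CPU.
wanted = ["CUDAExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in wanted if p in ort.get_available_providers()]

session = ort.InferenceSession("model.onnx", providers=providers)
(result,) = session.run(["output"], {"input": dummy.numpy()})
print(result.shape)                        # (1, 256)
```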
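
Finally, a rough power‑budget sketch for the last bullet. Every constant here is an illustrative planning assumption, not a vendor specification:

```python
# Rough facility power estimate for a GPU cluster.
# watts_per_gpu and pue are assumed planning numbers, not vendor specs.
def facility_power_mw(num_gpus: int,
                      watts_per_gpu: float = 1000.0,  # assumed GPU + host share
                      pue: float = 1.3) -> float:     # assumed facility PUE
    """IT load times PUE, converted to megawatts."""
    return num_gpus * watts_per_gpu * pue / 1e6

# Example: a 16,384-GPU cluster under these assumptions needs ~21.3 MW,
# which is why the 30 MW vs. 100 MW question belongs in early planning.
print(f"{facility_power_mw(16_384):.1f} MW")
```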
The road ahead
Management’s tone suggests we’re still in the “build” phase of AI infrastructure. Nvidia’s guidance implies another record quarter, and commentary on “AI factories” points to multi‑year capex on an annual product cadence (Blackwell now, Rubin next). For now, the scoreboard is unambiguous: a $57B quarter with $51.2B from data centers is the clearest sign yet that AI is not just hype—it’s becoming the new substrate for software and automation.
Investor side notes
- Shareholder returns: Nvidia returned $37B in the first nine months of FY26 via buybacks and dividends; $62.2B remains under authorization. The next $0.01/share dividend is slated for December 26, 2025.