What happened

The U.S. Department of Commerce has authorized export licenses for NVIDIA’s latest Blackwell-generation GPUs to two Gulf buyers: Abu Dhabi’s G42 and Saudi Arabia’s HUMAIN. Each company is cleared to procure the equivalent of up to 35,000 GB300-class Blackwell chips, with strict security and reporting conditions attached to the licenses. Commerce Department press release; see also Reuters.

Up to 35,000
Per-buyer export license (GB300). Source: commerce-2025-11-19

The timing dovetails with fresh capacity announcements in both countries. HUMAIN says it plans to deploy up to 600,000 NVIDIA GPUs over the next three years across Saudi Arabia and the U.S., and it unveiled a joint plan with Elon Musk’s xAI for a 500 MW data center in the Kingdom. HUMAIN release; see also context via Bloomberg and Reuters.

Up to 600,000
HUMAIN GPU target (3 years). Source: humain-pr-2025-11-19

[Image: Map-style editorial illustration of U.S.-to-Gulf GPU exports and new AI campuses in Abu Dhabi and Riyadh]

Why this matters for AI, automation, and productivity

For enterprises, the approvals mean two things at once:

  • Access to top-tier compute in the Middle East is increasing. Buyers that need sovereign or low-latency AI services in the region will see more options for training and inference close to users and data. This follows a broader pivot to U.S.-aligned infrastructure in the UAE, including Microsoft’s $1.5B investment in G42 under an intergovernmental security framework. Microsoft; Reuters on Huawei ties.
  • Supply relief for frontier workloads. GB300 “Blackwell Ultra” platforms are designed for reasoning-intensive and test-time-scaling inference, promising a step-change in throughput versus Hopper-era systems. NVIDIA GB300 platform; NVIDIA investor release.

If realized on the scale HUMAIN and G42 describe, these builds would accelerate LLM/agentic automation, multimodal video generation, and “physical AI” (digital twins and robotics) across energy, logistics, and public-sector workloads—areas where compute scarcity has been a primary blocker.


The two big build-outs

1) HUMAIN (Saudi Arabia)

  • Licenses: U.S. approval for GB300 exports (up to 35,000 units, license-equivalent). U.S. Commerce.
  • Capacity roadmap: Up to 600,000 NVIDIA GPUs within three years, spanning Saudi Arabia and a newly announced U.S. footprint. HUMAIN.
  • Early phases: First GB300 supercomputer tranche of ~18,000 GPUs disclosed in May; a flagship 500 MW Saudi data center planned with xAI to train future Grok models and handle inference at scale. NVIDIA + HUMAIN (May); Bloomberg on xAI 500 MW.
  • Cloud partnerships: AWS and HUMAIN disclosed plans to deploy and manage up to 150,000 accelerators in a dedicated “AI Zone” in Riyadh (mixing GB300-class systems and AWS silicon). Business Wire; HUMAIN summary site.

2) G42 (United Arab Emirates)

  • Licenses: U.S. approval for GB300 exports (up to 35,000 units, license-equivalent). U.S. Commerce.
  • Flagship project: Stargate UAE—a 1 GW AI cluster inside a planned 5 GW UAE–U.S. AI Campus in Abu Dhabi, built with OpenAI, Oracle, NVIDIA, Cisco, and SoftBank; the first 200 MW is slated for 2026. SoftBank release; Cisco; regional coverage via The National.
  • Additional context: Microsoft will ship more than 60,000 NVIDIA AI chips, including GB300-class, to the UAE under a separate U.S.-approved deal—helping prime the pump for regional AI services. AP News.

Who’s getting what

| Entity | Country | License scope (U.S.) | Stated GPU plan | Flagship build | Earliest go‑live |
| --- | --- | --- | --- | --- | --- |
| HUMAIN | Saudi Arabia | Up to 35,000 GB300 (license-equivalent) | Up to 600,000 GPUs over 3 years | 500 MW xAI-aligned DC; initial ~18k GB300 cluster | 2026 (initial phases) |
| G42 | UAE | Up to 35,000 GB300 (license-equivalent) | 1 GW Stargate UAE (within 5 GW campus) | 200 MW phase in 2026 with OpenAI/Oracle/NVIDIA/Cisco/SoftBank | 2026 |

Policy backdrop: from restrictions to managed openness

Washington’s stance has shifted from the 2023 curbs on exporting NVIDIA’s A100/H100-class GPUs to parts of the Middle East toward a more managed, license-based openness with compliance guardrails. The Guardian backgrounder. In the UAE’s case, G42’s tie-up with Microsoft includes a first-of-its-kind intergovernmental assurance agreement designed to enforce security standards and reduce the risk of diversion. Microsoft.


Technology angle: what “GB300 Blackwell” unlocks

NVIDIA’s GB300 NVL72 rack-scale systems (part of the Blackwell Ultra stack) consolidate 72 Blackwell GPUs and 36 Grace CPUs into a single, liquid‑cooled “AI factory” building block. NVIDIA positions GB300 as delivering materially higher reasoning inference throughput versus prior-generation Blackwell/Hopper platforms—important for agentic workflows, long‑context LLMs, and real‑time video generation. NVIDIA GB300 overview; NVIDIA investor release.
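
For a rough sense of scale, the sketch below is a minimal back-of-envelope estimate, not a disclosed deployment plan: it converts the announced GPU counts into NVL72 rack counts, assuming for illustration that every chip lands in a fully populated 72-GPU rack.

```python
# Back-of-envelope sketch: NVL72 rack counts implied by the announced GPU figures.
# Assumes full 72-GPU racks for illustration; actual deployment mixes are not disclosed.
import math

GPUS_PER_NVL72_RACK = 72  # 72 Blackwell GPUs + 36 Grace CPUs per rack, per NVIDIA's description

announced_gpu_counts = {
    "Per-buyer export license (G42 / HUMAIN)": 35_000,
    "HUMAIN three-year GPU target": 600_000,
}

for label, gpus in announced_gpu_counts.items():
    racks = math.ceil(gpus / GPUS_PER_NVL72_RACK)
    print(f"{label}: {gpus:,} GPUs ≈ {racks:,} NVL72 racks")
```

Under that assumption, a 35,000-GPU license works out to roughly 490 racks and HUMAIN’s three-year target to more than 8,300, which helps explain why these announcements lead with campus power figures rather than chip counts.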

[Image: Liquid‑cooled GB300 racks in a futuristic Middle Eastern data center, with engineers and subtle geometric motifs]

Practical takeaways for AI & automation leaders

  • Co‑locate where data and demand live. If you operate in the Gulf, regional GB300 capacity—whether via HUMAIN, G42, or hyperscalers—will reduce data‑movement overhead and latency for LLM, RAG, and vision pipelines.
  • Plan for reasoning‑heavy inference. Agentic systems and test‑time scaling are compute‑hungry; GB300‑class fleets can shift cost curves for high‑concurrency inference and synthetic data loops.
  • Budget for power and cooling early. 500 MW‑class campuses and 1 GW clusters highlight the energy realities of frontier AI. Coordinate with facilities and sustainability teams before committing to multi‑rack builds (a rough sizing sketch follows this list).
  • Don’t skip governance. License terms come with obligations—vendor attestations, end‑use restrictions, and reporting. Bake compliance into your MSA and deployment playbooks.
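
To make the power-and-cooling point concrete, here is an illustrative planning sketch. The per-rack draw and PUE values are assumptions for illustration only, not figures from these announcements; swap in your vendor’s and site’s numbers before relying on the output.

```python
# Illustrative capacity-planning sketch for the power-and-cooling takeaway above.
# The per-rack IT draw and PUE defaults are assumptions, not published figures
# from this article; substitute vendor/site numbers as needed.

def facility_power_mw(num_racks: int,
                      kw_per_rack: float = 130.0,  # assumed IT draw per liquid-cooled rack (kW)
                      pue: float = 1.25) -> float:  # assumed power usage effectiveness
    """Estimate total facility power (MW) needed for a given rack count."""
    return num_racks * kw_per_rack * pue / 1_000.0

def racks_within_budget(budget_mw: float,
                        kw_per_rack: float = 130.0,
                        pue: float = 1.25) -> int:
    """Estimate how many racks fit inside a facility power budget (MW)."""
    return int(budget_mw * 1_000.0 / (kw_per_rack * pue))

if __name__ == "__main__":
    print(f"500 MW campus ≈ {racks_within_budget(500):,} racks (assumed inputs)")
    print(f"1,000 racks ≈ {facility_power_mw(1_000):.0f} MW facility load (assumed inputs)")
```

Under these assumed inputs, a 500 MW campus tops out around 3,000 rack-scale systems, a useful sanity check when translating the headline megawatt figures into deployable capacity.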

The bottom line

Washington’s green light for Blackwell exports marks a pragmatic shift: keep the most advanced silicon within a U.S.-aligned ecosystem while enabling trusted partners to build “AI factories” at home. If HUMAIN’s 600k‑GPU goal and G42’s Stargate UAE deliver on schedule, the Middle East will become one of the most consequential new regions for AI compute—expanding choice for builders and, ultimately, speeding up real-world automation.


Sources