The short version

Tencent is reportedly renting access to cutting‑edge Nvidia Blackwell GPUs from a Japanese cloud partner, allowing its AI teams to train models on state‑of‑the‑art hardware without importing the chips into mainland China. Multiple reports identify Tokyo‑listed DataSection as the provider, running clusters in Osaka (Japan) and Sydney (Australia). Tencent has not publicly confirmed the arrangement; DataSection also avoids naming the end customer. Still, public procurement notices and press releases show massive Blackwell deployments in both locations.

*Illustration: a map of China, Japan and Australia linked by fiber routes, with Blackwell GPU racks inside a Japanese data center and export‑control iconography.*

<<stat label="Osaka cluster (B200)" value=">100 ExaFLOPS (FP4)" source="prtimes-2025-07-04">>

<<stat label="Sydney cluster (B300)" value=">200 ExaFLOPS (FP4)" source="prtimes-2025-12-11">>

What, exactly, is Tencent tapping?

  • Osaka: DataSection announced on July 4, 2025, that it would build “Japan’s first, among Asia’s largest” Blackwell B200 super‑cluster: 5,000 B200 GPUs across 625 servers, with shipments starting in August.
  • Sydney: On December 11, 2025, the company disclosed a 10,000‑GPU B300 deployment (1,250 servers), which it calls the world’s first hyper‑scale B300 cluster, quoting >200 EF (FP4). Taiwanese ODM Inventec supplies the servers.

Financial Times and Barron’s report that these fleets are reserved for Tencent under long‑term capacity deals, putting the overall spend in the billion‑dollar range. The FT also cites larger totals that extend beyond DataSection’s published phase‑one figures. Tencent has said the setup complies with relevant regulations; Nvidia told Barron’s the rules were designed to permit clouds operating outside controlled countries.

Why Blackwell matters

Nvidia’s Blackwell platform (B200/B300 and the Grace‑Blackwell systems) is the company’s flagship for training and serving multi‑trillion‑parameter models, with architectural updates (new Tensor Cores, FP4 precision, high‑bandwidth memory, and faster NVLink) that can collapse training time and cost compared with prior generations. For builders, that means faster iteration cycles and more ambitious multi‑agent, tool‑using systems.
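
To put the headline ExaFLOPS figures in perspective, here is a back‑of‑envelope check in Python. The per‑GPU FP4 throughputs are illustrative assumptions loosely based on Nvidia’s peak‑spec marketing numbers (they vary by SKU, clocks, and whether sparsity is counted), not measured performance:

```python
# Back-of-envelope: do the published cluster sizes square with the
# headline ExaFLOPS claims? Per-GPU figures below are ASSUMPTIONS
# (peak FP4, SKU- and sparsity-dependent), not vendor-confirmed specs.

PF, EF = 1e15, 1e18  # petaFLOPS, exaFLOPS

clusters = {
    # site: (GPU count from press releases, assumed peak FP4 PFLOPS per GPU)
    "Osaka (B200)":  (5_000, 20),   # ~20 PF/GPU assumed for B200
    "Sydney (B300)": (10_000, 25),  # assumed "Blackwell Ultra" uplift
}

for site, (gpus, pf_per_gpu) in clusters.items():
    print(f"{site}: ~{gpus * pf_per_gpu * PF / EF:.0f} EF peak FP4")

# Osaka (B200): ~100 EF peak FP4   -> consistent with the published ">100 EF"
# Sydney (B300): ~250 EF peak FP4  -> consistent with the published ">200 EF"
```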

How the “Japan cloud” workaround works

U.S. export controls restrict direct sales and re‑exports of top‑tier Nvidia accelerators (including Blackwell) to China without a license. But today’s rules do not categorically ban Chinese companies from renting time on those GPUs when they physically reside in allied countries and are operated by non‑U.S. providers. In practice, Tencent can submit jobs over the network to data centers in Japan or Australia and never “take possession” of the chips in China. That’s the loophole policymakers have been debating for the last two years.
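
Neither company has described the access interface, but the generic pattern is ordinary remote job submission. A minimal sketch, assuming a Slurm‑managed cluster reachable over SSH; the hostname, partition name, and script paths are hypothetical:

```python
# Minimal sketch of "compute as a network service": submit a training job
# to a remote, Slurm-managed GPU cluster over SSH. Everything here is
# hypothetical; DataSection has not published its access interface.
import subprocess

REMOTE = "user@cluster.example-osaka-dc.jp"  # hypothetical login node

SBATCH_SCRIPT = """#!/bin/bash
#SBATCH --job-name=llm-pretrain
#SBATCH --partition=b200            # hypothetical partition name
#SBATCH --nodes=64                  # 64 x 8-GPU HGX servers = 512 GPUs
#SBATCH --gpus-per-node=8
srun python train.py --config configs/pretrain.yaml
"""

def submit(script: str) -> str:
    """Pipe a batch script to sbatch on the remote login node; return job ID."""
    result = subprocess.run(
        ["ssh", REMOTE, "sbatch"],
        input=script, capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()  # e.g. "Submitted batch job 12345"

if __name__ == "__main__":
    print(submit(SBATCH_SCRIPT))
```

The chips never cross a border; only job scripts, checkpoints, and gradients do, which is precisely the gap in rules written around physical shipments.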

Is it legal? A fast primer on the rules

  • U.S. export controls on advanced AI chips to China were tightened repeatedly from 2022 through 2024 and expanded in early 2025, before parts of a broader “AI diffusion” framework were rescinded in May 2025. That rollback did not reopen the Blackwell floodgates to China; it reverted to pre‑January‑2025 baselines while the Commerce Department drafts replacements.
  • Separately, the 2023 AI Executive Order pushed for “know‑your‑customer” reporting by U.S. IaaS providers when foreign persons train large models—but those obligations target U.S. clouds and resellers, not a Japanese provider operating on Japanese soil.
  • Lawmakers in both parties have floated bills to explicitly bar support for remote access to controlled accelerators from China or Macau, but those proposals have not been enacted. Expect fresh attempts to “close the cloud loophole.”

Bottom line: Renting compute from allied‑country data centers is not the same as importing chips into China—and under today’s rules, that distinction matters. Whether it remains permissible will depend on how Washington rewrites the guardrails in 2026.

Why Japan?

Japan is rapidly becoming an AI‑compute hub. Local operators are rolling out Blackwell‑class capacity, backed by strong power/cooling, policy support, and proximity to Asian demand:

  • SoftBank, KDDI and others are building clusters on Nvidia’s latest platforms; KDDI and HPE plan a GB200 NVL72 deployment at the Osaka‑Sakai facility in 2026.
  • Japan’s cloud ecosystem—SoftBank, GMO Internet Group, SAKURA Internet and others—has partnered with Nvidia since 2024 to expand national AI infrastructure. GMO, for example, is bringing B300 “Blackwell Ultra” instances online in 2025.

DataSection’s published Blackwell builds (2025)

| Location | GPU and count | Servers | Published perf. | Server partner |
| --- | --- | --- | --- | --- |
| Osaka, Japan | B200 × 5,000 | 625 | >100 EF (FP4) | Giga Computing (GIGABYTE) |
| Sydney, Australia | B300 × 10,000 | 1,250 | >200 EF (FP4) | Inventec |

Sources: Company press releases, July 4 and Dec. 11, 2025.
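
One consistency check the table invites: both builds work out to eight GPUs per server, matching Nvidia’s standard 8‑GPU HGX baseboard (our inference from the published counts, not a claim DataSection makes):

```python
# Sanity-check the published GPU and server counts against the standard
# 8-GPU HGX baseboard layout (an inference, not a company statement).
builds = {"Osaka": (5_000, 625), "Sydney": (10_000, 1_250)}
for site, (gpus, servers) in builds.items():
    print(f"{site}: {gpus / servers:.0f} GPUs per server")
# Osaka: 8 GPUs per server
# Sydney: 8 GPUs per server
```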

What this means for AI, automation and productivity

  • For model builders in China: Near‑frontier compute becomes a network service. Expect faster training schedules, larger context windows, and more experimentation with multi‑agent and tool‑use pipelines—without the latency penalties of hopping across the Pacific.
  • For U.S. vendors and regulators: The episode underscores that control regimes focused on “shipping boxes” are easier to route around than ones focused on “access to capability.” Policymakers now face a hard trade‑off: keep allied clouds open (to preserve U.S. tech standards) or extend controls to the act of renting compute.
  • For Japanese clouds: Demand tailwinds look strong, but execution risk is real—power, cooling, networking, and financing must scale in lockstep, and scrutiny of customer rosters will intensify.

What to watch next

  1. New U.S. rules on cross‑border AI compute access and reporting. Drafts are expected to revisit “who can use what, from where,” not just “who can buy what.”
  2. Whether other Chinese tech giants follow Tencent to Japan, Australia or the EU for frontier‑grade compute.
  3. Japan’s data‑center buildout: GB200 “AI factories,” liquid cooling retrofits, and grid upgrades around Osaka, Hokkaido and Kyushu.

Sources

  • Financial Times: “Inside Tencent’s deal to use Nvidia’s best AI chips in Japan.” [FT.com] (paywall).
  • Barron’s: “China’s Tencent Is Accessing Banned Nvidia Chips Through the Cloud.”
  • DataSection (PR TIMES): Osaka B200 cluster (July 4, 2025) and first shipments (Aug. 14, 2025).
  • DataSection (PR TIMES & IR site): Sydney B300 cluster (Dec. 11, 2025).
  • Nvidia: Blackwell platform and DGX B200 SuperPOD technical overview.
  • HPE + KDDI: Osaka‑Sakai GB200 NVL72 announcement (June 25, 2025).
  • Nvidia (Newsroom): “Japan cloud leaders build NVIDIA AI infrastructure” (Nov. 12, 2024).
  • U.S. policy background: BIS rescinds AI Diffusion Rule (Kirkland & Ellis; Akin Gump) and EO 14110 IaaS reporting framework (Federal Register).