What happened

Nvidia used the SC25 supercomputing conference in St. Louis on November 17, 2025, to unveil Apollo — a family of open AI physics models aimed at speeding up industrial simulation and engineering workflows. The models will ship with pretrained checkpoints and reference workflows, and Nvidia says they’ll be available on build.nvidia.com, Hugging Face, and as NVIDIA NIM microservices.

[Image: a modern factory as a photorealistic digital twin, with translucent overlays of airflow and stress fields visualizing AI physics running in real time]

Why it matters

Most real-world productivity gains from AI won’t come from chatbots — they’ll come from physical AI: models that understand and predict how things move, flow, heat up, bend, and break. Traditional high‑fidelity simulations are immensely powerful but slow and expensive; AI “surrogate” models trained on physics data can return accurate predictions in seconds, enabling interactive design, broader design‑space exploration, and faster iteration. Apollo is Nvidia’s bid to make those surrogates broadly accessible to engineering teams.

What’s in Apollo

Apollo is positioned as a library of physics‑optimized, domain‑specific models that blend state‑of‑the‑art ML architectures (neural operators, transformers, diffusion) with domain knowledge. Initial focus areas include:

  • Electronic design automation (EDA) and semiconductors (e.g., computational lithography, electrothermal modeling)
  • Structural mechanics
  • Weather and climate
  • Computational fluid dynamics (CFD)
  • Electromagnetics
  • Multiphysics problems (e.g., plasma, fusion)

Each release includes checkpoints plus training/inference/benchmark workflows to help teams adapt models to their own data and constraints.

Who’s kicking the tires

Early adopters span the industrial stack:

  • Applied Materials reports up to 35× acceleration in parts of its ACE+ multiphysics software and is building surrogate models for near‑real‑time chamber modeling.
  • Cadence generated thousands of high‑fidelity aircraft simulations on NVIDIA systems, then trained an AI physics model for an interactive aircraft digital twin.
  • Northrop Grumman and Luminary Cloud are developing a physics‑AI foundation model to rapidly design spacecraft thruster nozzles — an example of Apollo‑style surrogates moving into aerospace workflows.
  • Siemens, Synopsys, KLA, Lam Research, PhysicsX, Rescale, and others say they’ll integrate the new open models into their tools and platforms.

How “open” are these models?

Nvidia describes Apollo as an “open model family.” Practically, that means weights and checkpoints will be released under the NVIDIA Open Model License — a permissive license that allows commercial use and derivatives, while imposing some guardrail conditions (for example, you can’t remove safety controls without equivalent safeguards). It’s not an OSI open‑source software license, but it’s materially more permissive than many “research‑only” model terms.

How it plugs into the stack

Apollo doesn’t live in isolation. It’s designed to sit alongside:

  • PhysicsNeMo, Nvidia’s open‑source framework for training and validating physics‑ML models at scale.
  • Warp, Nvidia’s Apache‑licensed Python framework that JIT‑compiles Python kernels to CUDA for fast simulation and data generation — handy for creating training data and building differentiable pipelines.
  • Omniverse and OpenUSD for interoperable industrial digital twins, including Nvidia’s emerging “AI factory” blueprints used by major CAE vendors.
  • NIM microservices to deploy tuned Apollo variants behind stable, enterprise‑friendly inference APIs.

Where Apollo fits in an engineering workflow

| Stage | Traditional approach | With Apollo‑style surrogates |
|---|---|---|
| Data generation | HPC solvers (hours–days per run) | Generate a smaller set of high‑fidelity runs; augment with synthetic data; curate with PhysicsNeMo |
| Model building | Manual calibration, reduced‑order models | Train/fine‑tune a domain model (neural operator/transformer/diffusion) on curated physics data |
| Design iteration | Limited design points due to cost | Interactive “what‑if” loops and broad design‑space exploration in seconds |
| Deployment | Offline reports, batch compute | Package as a NIM microservice; call from CAD/CAE or factory systems |
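The four stages in the table can be sketched end to end in a few lines of plain Python. This is a toy illustration, not an Apollo API: `high_fidelity_solve` is a hypothetical stand-in for an expensive HPC run, and the “surrogate” is just linear interpolation over a handful of those runs (a real one would be a trained neural operator).

```python
import bisect

def high_fidelity_solve(angle_deg):
    # Stand-in for an HPC solver run (hours per call in reality).
    return 1.2 * angle_deg - 0.04 * angle_deg ** 2  # toy lift curve

# Data generation: a small set of expensive high-fidelity runs.
train_angles = [0.0, 5.0, 10.0, 15.0, 20.0]
train_lift = [high_fidelity_solve(a) for a in train_angles]

# Model building: the "surrogate" here is linear interpolation over
# those runs; a real one would be a trained physics-ML model.
def surrogate(angle):
    i = bisect.bisect_right(train_angles, angle) - 1
    i = min(max(i, 0), len(train_angles) - 2)
    a0, a1 = train_angles[i], train_angles[i + 1]
    y0, y1 = train_lift[i], train_lift[i + 1]
    return y0 + (y1 - y0) * (angle - a0) / (a1 - a0)

# Design iteration: sweep 201 design points cheaply with the surrogate.
candidates = [a / 10 for a in range(201)]
best = max(candidates, key=surrogate)

# Deployment-time check: confirm the shortlisted design with one full run.
confirmed = high_fidelity_solve(best)
print(best, confirmed)
```

The key economics are in the sweep: 201 surrogate evaluations cost almost nothing, and the expensive solver is only invoked for the five training runs plus one confirmation.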

What this could change for teams

  • Product teams get interactive design: swapping geometries, boundary conditions, or materials and seeing plausible physics responses immediately. That unlocks generative exploration, not just validation at the end.
  • CAE and HPC admins can reserve big clusters for fewer full‑fidelity runs, using surrogates for the bulk of iteration — a speed/cost win.
  • Platform teams can serve surrogates behind standard APIs, making “AI physics” a service that downstream apps can call without wrestling with GPUs and kernels.
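The last bullet, serving surrogates behind standard APIs, can be sketched with nothing but the Python standard library. Everything here is hypothetical: `surrogate_predict` is a toy stand-in for a trained model, and a production deployment would package the model as a NIM microservice rather than hand-rolling `http.server`.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def surrogate_predict(velocity, diameter):
    # Toy stand-in for a trained physics surrogate (not real physics).
    return 0.5 * velocity ** 2 / max(diameter, 1e-6)

class SurrogateHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        n = int(self.headers.get("Content-Length", 0))
        params = json.loads(self.rfile.read(n))
        body = json.dumps(
            {"pressure_drop": surrogate_predict(params["velocity"], params["diameter"])}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), SurrogateHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A downstream CAD/CAE app calls the surrogate like any other JSON API,
# with no knowledge of GPUs or kernels.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/predict",
    data=json.dumps({"velocity": 2.0, "diameter": 0.1}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
print(result)
server.shutdown()
```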

Getting started (practical path)

  1. Identify one high‑value bottleneck (for example, a CFD sub‑problem that blocks design sprints) and capture a representative dataset of high‑fidelity runs.
  2. Use Warp or your solver’s Python hooks to automate data generation and curation; log metadata rigorously.
  3. Fine‑tune an Apollo model with PhysicsNeMo, incorporating constraints or PDE residuals if appropriate.
  4. Validate against withheld cases and edge conditions; quantify error bars and acceptable operating envelopes.
  5. Deploy the model as a NIM microservice with rate limits and monitoring; keep a “fall‑back” path to the full solver for out‑of‑distribution cases.
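Step 5’s fall-back path can be sketched as a simple training-envelope check. All names and numbers below are illustrative, not from any Apollo release: the idea is only that out-of-distribution queries get routed back to the trusted full solver.

```python
# Input range observed while fitting the surrogate (illustrative numbers).
TRAIN_MIN, TRAIN_MAX = 0.5, 4.0

def surrogate(x):
    # Toy fitted model: fast, but only trusted inside the envelope.
    return 0.3 * x + 0.1

def full_solver(x):
    # Stand-in for the slow, always-valid high-fidelity solver.
    return 0.3 * x + 0.1 + 0.02 * x ** 2

def predict(x):
    """Serve the surrogate in-distribution; route OOD queries to the solver."""
    if TRAIN_MIN <= x <= TRAIN_MAX:
        return surrogate(x), "surrogate"
    return full_solver(x), "full_solver"

print(predict(2.0))  # in-distribution: served by the fast surrogate
print(predict(9.0))  # out-of-distribution: falls back to the full solver
```

In practice the envelope check would be multidimensional (and often statistical, e.g. a density or distance test in feature space), but the routing pattern is the same, and logging which path served each query gives the monitoring signal step 5 asks for.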

Tip: Where to watch for drops

Nvidia says Apollo models will roll out via build.nvidia.com and Hugging Face, and as NIM microservices. Bookmark the SC25 press kit and Apollo blog for availability updates.

The bigger picture

Apollo continues Nvidia’s push to turn digital twins into real‑time, robot‑ready “physical AI” systems. With CAE vendors and industrials already wiring GPU‑accelerated solvers into Omniverse pipelines — and with examples like Cadence’s aircraft digital twin and Northrop’s AI‑designed thruster — the stack for AI‑accelerated engineering is solidifying. The open release of Apollo’s models under a permissive license should pull more practitioners into that stack — and shorten the distance between an idea and a validated design.