TL;DR and what changed
On December 11, 2025, OpenAI removed automatic routing to “reasoning” models for ChatGPT’s Free and low‑cost Go tiers. Those users now default to GPT‑5.2 Instant for every new message, and can opt into deeper reasoning by manually choosing Thinking from the composer’s tools menu. Paid plans keep the router.

Why this matters
Model routing (“Auto”) was meant to decide when your prompt deserved a faster model or a slower, more capable reasoning model without you thinking about it. The rollback simplifies expectations for Free/Go users, reduces surprise latency spikes, and gives teams clearer control over cost and speed in everyday workflows. OpenAI says Free/Go users preferred the predictable baseline—even if that means opting into reasoning only when needed.
Quick background: what the router was doing
With GPT‑5’s launch in August 2025, ChatGPT introduced an “Auto” mode that analyzed your query and dynamically chose between fast and reasoning models, while also restoring the model picker for people who wanted direct control. That design was later refined with explicit modes (Auto, Fast, Thinking).
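To make that background concrete, here is a toy sketch of what query‑based routing looks like in principle. The heuristics, threshold, and model names below are invented purely for illustration; OpenAI's actual router logic is not public and is certainly more sophisticated than this.

```python
# Toy illustration of query-based model routing. The heuristics and model
# names are made up for this example; this is not OpenAI's implementation.

REASONING_HINTS = ("prove", "step by step", "debug", "plan", "analyze")

def route(prompt: str) -> str:
    """Pick a (hypothetical) model name based on crude prompt features."""
    is_long = len(prompt.split()) > 150                      # long prompts often need depth
    wants_depth = any(h in prompt.lower() for h in REASONING_HINTS)
    return "thinking-model" if (is_long or wants_depth) else "instant-model"

if __name__ == "__main__":
    print(route("What's the capital of France?"))            # -> instant-model
    print(route("Debug this race condition step by step."))  # -> thinking-model
```

For Free and Go users, the equivalent of this decision now simply defaults to the fast path unless they pick Thinking themselves.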
What OpenAI actually changed (and when)
- Date: December 11, 2025 (rolled out with GPT‑5.2).
- Scope: Free and Go tiers only. The automatic switch to Thinking is off by default for them.
- How to get reasoning: Click the tools menu in the composer and choose Thinking.
- Paid tiers: Plus, Pro, Business, and Enterprise continue to have the router alongside manual model selection. These details appear in OpenAI’s release notes and product docs.
Why revert now? The trade‑off between speed, cost, and control
Reporting indicates the router nudged many more Free users into the costlier reasoning path, increased wait times, and frustrated a sizable share of users who value snappy replies. OpenAI told WIRED that user feedback favored predictability; the company also claims GPT‑5.2 Instant closes much of the gap on safety and quality that originally justified routing sensitive prompts to reasoning models.
On OpenAI’s own benchmarks, GPT‑5.2 Instant shows materially better safety behavior on mental‑health‑related evaluations than earlier Instant models, supporting the decision to rely less on automatic escalation for most queries.
What changes for you and your team
Defaults and options by ChatGPT tier
| Tier | Default model now | Auto routing | Can choose Thinking? |
|---|---|---|---|
| Free | GPT‑5.2 Instant | Off | Yes, manually from the tools menu |
| Go | GPT‑5.2 Instant | Off | Yes, manually |
| Plus / Business | GPT‑5.2 (with router + picker) | On | Yes |
| Pro / Enterprise | GPT‑5.2 (with router + picker) | On | Yes |
Sources: OpenAI release notes and GPT‑5.2 docs.
Practical implications for productivity and automation
- Latency becomes more predictable on Free/Go. If your automations piggyback on Free/Go accounts, expect fewer surprise "long‑think" responses but also fewer automatic deep‑reasoning assists. Instrument critical flows accordingly (see the sketch after this list).
- Human‑in‑the‑loop prompts matter more. When you genuinely need chain‑of‑thought‑style depth, be explicit and manually choose Thinking for that message/thread.
- Train teammates on the picker. Many users never noticed the router; others didn’t know how to override it. A quick refresher on Instant vs. Thinking can save minutes per task across a team.
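To make the "instrument critical flows" advice actionable, here is a minimal sketch of a latency logger you could wrap around whatever function your automation uses to reach a model. The `call_model` argument and the 10‑second threshold are placeholders for this example, not OpenAI tooling.

```python
# Minimal latency instrumentation for an automation step. `call_model` is a
# placeholder for however your workflow reaches ChatGPT or the API; the
# 10-second threshold is an arbitrary example value.

import time
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
SLOW_THRESHOLD_S = 10.0

def timed_call(call_model, prompt: str) -> str:
    """Run a model call and log its wall-clock latency."""
    start = time.perf_counter()
    reply = call_model(prompt)
    elapsed = time.perf_counter() - start
    if elapsed > SLOW_THRESHOLD_S:
        logging.warning("Slow response (%.1fs) for prompt: %.40s", elapsed, prompt)
    else:
        logging.info("Response in %.1fs", elapsed)
    return reply

if __name__ == "__main__":
    echo = lambda p: f"echo: {p}"  # stand-in for a real model call
    print(timed_call(echo, "Summarize today's standup notes"))
```

If slow responses drop off after the change, that's the predictability win described above; if a flow starts coming up short on quality, that's the cue to route it through Thinking manually.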
A note on limits and availability
- Free users still get a small quota of GPT‑5.2 messages per window; when they hit it, ChatGPT falls back to a “mini” variant until the limit resets.
- Plus/Business users can use up to 3,000 Thinking messages per week; Pro/Enterprise have broader access subject to abuse guardrails.
- Feature availability beyond chat (Canvas, image generation, and similar tools) varies by tier. Confirm specifics in OpenAI's tier documentation, which can change over time.
The bigger picture: routers aren’t dead
OpenAI frames this as a product tuning, not a philosophy change. Paid users still have Auto, and the company signals routing will evolve and likely return to broader audiences once the experience is clearer, faster, and better aligned with user expectations. In other words: this is a UX and trust iteration, not a retreat from model selection intelligence.