
How to Switch Models in Codex CLI

Learn how to switch between GPT-5.2-Codex, GPT-5.1-Codex-Mini, and other models in OpenAI Codex CLI. Covers command-line flags, config.toml settings, and mid-session model switching with performance and cost considerations.

Updated January 2026


OpenAI Codex CLI supports multiple AI models optimized for different coding tasks. Choosing the right model can significantly impact both performance and cost. This guide covers all methods for switching models and helps you select the best model for your specific use case.

Available Models

Codex CLI supports several models in the GPT-5 family, each optimized for different scenarios.

Model                Best For                                                  Credits/Task
gpt-5.2-codex        Complex refactoring, multi-file changes, cybersecurity   ~5 (local)
gpt-5.1-codex-mini   Simple tasks, high-volume work, cost-conscious usage     ~1 (local)

Full Model List

  • gpt-5.2-codex - Most advanced agentic coding model with improvements in context compaction, large refactors, Windows support, and cybersecurity
  • gpt-5.1-codex-mini - Smaller, cost-effective model for routine tasks (up to 4x more usage)
  • gpt-5.1-codex-max - Extended reasoning for long-running, project-scale work
  • gpt-5.2 - General-purpose agentic model (not coding-specific)
  • gpt-5.1-codex - Previous generation coding model
  • gpt-5.1 - General-purpose reasoning model
  • gpt-5-codex - Legacy coding model (superseded by 5.1)
  • gpt-5 - Base reasoning model

Method 1: Command-Line Flag

Override the default model for a single invocation using the --model or -m flag:

# Use the latest model for a complex task
codex --model gpt-5.2-codex "refactor this codebase to use TypeScript"

# Use the mini model for a simple task
codex --model gpt-5.1-codex-mini "add a comment to this function"

# The -m short form works the same way
codex -m gpt-5.2-codex "explain this algorithm"

This method is ideal when you want to temporarily use a different model without changing your default settings.
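
The same flag works for non-interactive runs. A quick sketch, assuming codex exec (the CLI's non-interactive subcommand):

# Run a single task and exit without opening the interactive UI
codex exec -m gpt-5.1-codex-mini "add docstrings to utils.py"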

Method 2: Configuration File

For permanent changes, edit your ~/.codex/config.toml file.

Setting the Default Model

# ~/.codex/config.toml

model = "gpt-5.2-codex"

Creating Profiles for Different Use Cases

You can create named profiles for different workflows:

# ~/.codex/config.toml

# Default settings
model = "gpt-5.2-codex"

# Profile for quick tasks
[profiles.quick]
model = "gpt-5.1-codex-mini"
model_reasoning_effort = "low"

# Profile for complex work
[profiles.deep]
model = "gpt-5.1-codex-max"
model_reasoning_effort = "high"

# Profile for cost-conscious development
[profiles.budget]
model = "gpt-5.1-codex-mini"
model_reasoning_effort = "minimal"

Use profiles with the --profile flag:

codex --profile quick "fix this typo"
codex --profile deep "architect a microservices system"
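
If one profile covers most of your work, you can make it the default rather than passing --profile every time. A minimal sketch, assuming config.toml's top-level profile key:

# ~/.codex/config.toml
profile = "quick"  # apply [profiles.quick] unless --profile overrides it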

Configuration File Location

  • macOS/Linux: ~/.codex/config.toml
  • Windows: %USERPROFILE%\.codex\config.toml

To open the config file from the IDE extension, select the gear icon and navigate to Codex Settings > Open config.toml.

Method 3: Mid-Session Switching

Switch models during an active interactive session using the /model slash command:

  1. Start an interactive Codex session:

    codex
    
  2. Type /model and press Enter

  3. Select your preferred model from the popup menu

  4. Codex confirms the change in the transcript

  5. Verify with /status to see the updated configuration

This method is useful when you realize mid-task that a different model would be more appropriate.
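
If you already know which model a task needs, you can also skip the menu and launch the session with that model directly, as in Method 1:

codex -m gpt-5.1-codex-max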

Reasoning Effort Configuration

Some models support adjustable reasoning effort, which affects both response quality and cost.

Available Reasoning Levels

Level     Best For                                 Latency    Cost
minimal   Simple lookups, formatting               Fastest    Lowest
low       Routine tasks, small changes             Fast       Low
medium    Standard development work (default)      Moderate   Moderate
high      Complex logic, architecture decisions    Slower     Higher
xhigh     Project-scale refactoring                Slowest    Highest

Note: xhigh is only available on gpt-5.1-codex-max and gpt-5.2-codex.

Setting Reasoning Effort

In config.toml:

model = "gpt-5.2-codex"
model_reasoning_effort = "high"

Via command line:

codex -c model_reasoning_effort="high" "design a database schema"

Combined with model selection:

codex -m gpt-5.1-codex-max -c model_reasoning_effort="xhigh" "migrate this Python 2 codebase to Python 3"

Platform-Specific Default Models

Codex uses different default models based on your operating system:

Platform   Default Model (API Key)   Default Model (ChatGPT Sign-in)
macOS      gpt-5-codex               gpt-5.2-codex
Linux      gpt-5-codex               gpt-5.2-codex
Windows    gpt-5                     gpt-5.2-codex

Windows defaults to the general-purpose model rather than the coding-specific variant due to historical compatibility considerations. For optimal Windows performance, explicitly set model = "gpt-5.2-codex" in your config file, as GPT-5.2-Codex includes specific improvements for Windows environments.
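
For example, a minimal Windows config that opts into the coding-specific model:

# %USERPROFILE%\.codex\config.toml
model = "gpt-5.2-codex"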

Cost and Performance Considerations

Subscription Limits

If you are using a ChatGPT subscription, your model choice affects usage limits:

ChatGPT Plus ($20/month):

  • Local messages: 45-225 per 5-hour window
  • Cloud tasks: 10-60 per 5-hour window

ChatGPT Pro ($200/month):

  • Local messages: 300-1,500 per 5-hour window (6x higher)
  • Cloud tasks: 50-400 per 5-hour window (6x higher)

Using gpt-5.1-codex-mini provides up to 4x higher local message limits.
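
As a rough illustration of that multiplier: a Plus window that allows about 45 local messages on gpt-5.2-codex could stretch to somewhere around 180 with gpt-5.1-codex-mini (45 × 4), which is why the mini model suits high-volume work.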

API Pricing (Per Million Tokens)

If you are using an API key, costs vary significantly by model:

Model                Input    Output
gpt-5.2-codex        $1.75    $14.00
gpt-5.1-codex-mini   $1.50    $6.00
gpt-5-mini           $0.25    $2.00
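
For example, a hypothetical task that reads 50,000 input tokens and generates 5,000 output tokens would cost about (0.05 × $1.75) + (0.005 × $14.00) ≈ $0.16 on gpt-5.2-codex, versus (0.05 × $1.50) + (0.005 × $6.00) ≈ $0.11 on gpt-5.1-codex-mini.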

Choosing the Right Model

Use this decision framework (a shell-alias sketch follows the list):

  1. Simple edits, formatting, comments - Use gpt-5.1-codex-mini with low reasoning
  2. Standard development tasks - Use gpt-5.2-codex with medium reasoning (default)
  3. Complex refactoring, migrations - Use gpt-5.2-codex with high reasoning
  4. Project-scale architecture work - Use gpt-5.1-codex-max with xhigh reasoning
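
One way to keep these tiers at your fingertips is a set of shell aliases. A sketch, assuming the quick profile defined in Method 2 (the alias names themselves are arbitrary):

# ~/.bashrc or ~/.zshrc
alias cxq='codex --profile quick'                                        # 1. simple edits
alias cxs='codex'                                                        # 2. standard tasks (config default)
alias cxh='codex -m gpt-5.2-codex -c model_reasoning_effort="high"'      # 3. complex refactoring
alias cxx='codex -m gpt-5.1-codex-max -c model_reasoning_effort="xhigh"' # 4. project-scale work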

Using Alternative Providers

Codex can connect to other model providers, including local models via Ollama.

Local Models with Ollama

# Start Codex with the local open source provider
codex --oss "explain this code"

# Or configure in config.toml
# model_provider = "oss"

Note: This requires a running Ollama instance on your machine.
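
A slightly fuller config sketch for local use (the model tag gpt-oss:20b is just an example; use whatever model you have pulled into Ollama):

# ~/.codex/config.toml
model_provider = "oss"
model = "gpt-oss:20b"  # example tag; any locally served model works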

Custom Providers

You can configure custom model providers in your config.toml:

model_provider = "openai"  # Default provider

# Configure alternative providers in model_providers section
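
As a sketch of what such a section can look like (the provider name, URL, and environment variable below are placeholders, not a real service):

# ~/.codex/config.toml
model_provider = "myprovider"

[model_providers.myprovider]
name = "My Provider"                      # display name
base_url = "https://api.example.com/v1"   # OpenAI-compatible endpoint
env_key = "MYPROVIDER_API_KEY"            # environment variable holding the API key

See the Codex configuration documentation for the full set of provider options.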

Verifying Your Current Model

Check your active model and configuration at any time:

# From the command line (this prints the CLI version, not the model)
codex --version

# During an interactive session
/status

The /status command displays the currently active model, approval policy, and token usage.

Troubleshooting

Model Not Available Error

If you receive an error that a model is not available:

  1. Ensure your subscription or API credits support the requested model
  2. Check for typos in the model name
  3. Verify you are signed in (recent CLI versions provide codex login status for this)

Reasoning Effort Not Applied

Reasoning effort only works with models using the Responses API. If your setting is ignored:

  1. Verify you are using a compatible model (GPT-5 family)
  2. Check that model_reasoning_effort is spelled correctly in config.toml

Config File Not Being Read

If your config.toml changes are not taking effect:

  1. Verify the file location is correct (~/.codex/config.toml)
  2. Check for TOML syntax errors (a quick check is shown below)
  3. Remember that command-line flags override config file settings by design, so a stray flag may be masking your change
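
To rule out syntax problems quickly, you can parse the file with any TOML tool. A sketch using the tomllib module from Python 3.11+'s standard library:

# Exits silently if the file is valid TOML; raises an error pointing at the problem otherwise
python3 -c "import tomllib; tomllib.load(open('$HOME/.codex/config.toml', 'rb'))"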

Frequently Asked Questions

What model does Codex CLI use by default?

Codex CLI defaults to gpt-5.2-codex for users signed in with ChatGPT. For API key users, it defaults to gpt-5-codex on macOS and Linux, and gpt-5 on Windows.
