OpenAI Codex CLI supports multiple AI models optimized for different coding tasks. Choosing the right model can significantly impact both performance and cost. This guide covers all methods for switching models and helps you select the best model for your specific use case.
Available Models
Codex CLI supports several models in the GPT-5 family, each optimized for different scenarios.
Recommended Models
| Model | Best For | Credits/Task |
|---|---|---|
| gpt-5.2-codex | Complex refactoring, multi-file changes, cybersecurity | ~5 (local) |
| gpt-5.1-codex-mini | Simple tasks, high-volume work, cost-conscious usage | ~1 (local) |
Full Model List
- gpt-5.2-codex - Most advanced agentic coding model with improvements in context compaction, large refactors, Windows support, and cybersecurity
- gpt-5.1-codex-mini - Smaller, cost-effective model for routine tasks (up to 4x more usage)
- gpt-5.1-codex-max - Extended reasoning for long-running, project-scale work
- gpt-5.2 - General-purpose agentic model (not coding-specific)
- gpt-5.1-codex - Previous generation coding model
- gpt-5.1 - General-purpose reasoning model
- gpt-5-codex - Legacy coding model (superseded by 5.1)
- gpt-5 - Base reasoning model
Method 1: Command-Line Flag
Override the default model for a single invocation using the --model or -m flag:
# Use the latest model for a complex task
codex --model gpt-5.2-codex "refactor this codebase to use TypeScript"
# Use the mini model for a simple task
codex -m gpt-5.1-codex-mini "add a comment to this function"
# Short form works the same way
codex -m gpt-5.2-codex "explain this algorithm"
This method is ideal when you want to temporarily use a different model without changing your default settings.
Method 2: Configuration File
For permanent changes, edit your ~/.codex/config.toml file.
Setting the Default Model
# ~/.codex/config.toml
model = "gpt-5.2-codex"
Creating Profiles for Different Use Cases
You can create named profiles for different workflows:
# ~/.codex/config.toml
# Default settings
model = "gpt-5.2-codex"
# Profile for quick tasks
[profiles.quick]
model = "gpt-5.1-codex-mini"
model_reasoning_effort = "low"
# Profile for complex work
[profiles.deep]
model = "gpt-5.1-codex-max"
model_reasoning_effort = "high"
# Profile for cost-conscious development
[profiles.budget]
model = "gpt-5.1-codex-mini"
model_reasoning_effort = "minimal"
Use profiles with the --profile flag:
codex --profile quick "fix this typo"
codex --profile deep "architect a microservices system"
Configuration File Location
- macOS/Linux: ~/.codex/config.toml
- Windows: %USERPROFILE%\.codex\config.toml
To open the config file from the IDE extension, select the gear icon and navigate to Codex Settings > Open config.toml.
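If the file does not exist yet, you can create it from a terminal before adding settings (standard shell commands, using the macOS/Linux path listed above):
# Create the config directory and an empty config file (macOS/Linux)
mkdir -p ~/.codex
touch ~/.codex/config.toml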
Method 3: Mid-Session Switching
Switch models during an active interactive session using the /model slash command:
1. Start an interactive Codex session: codex
2. Type /model and press Enter
3. Select your preferred model from the popup menu
4. Codex confirms the change in the transcript
5. Verify with /status to see the updated configuration
This method is useful when you realize mid-task that a different model would be more appropriate.
Reasoning Effort Configuration
Some models support adjustable reasoning effort, which affects both response quality and cost.
Available Reasoning Levels
| Level | Best For | Latency | Cost |
|---|---|---|---|
| minimal | Simple lookups, formatting | Fastest | Lowest |
| low | Routine tasks, small changes | Fast | Low |
| medium | Standard development work (default) | Moderate | Moderate |
| high | Complex logic, architecture decisions | Slower | Higher |
| xhigh | Project-scale refactoring | Slowest | Highest |
Note: xhigh is only available on gpt-5.1-codex-max and gpt-5.2-codex.
Setting Reasoning Effort
In config.toml:
model = "gpt-5.2-codex"
model_reasoning_effort = "high"
Via command line:
codex -c model_reasoning_effort="high" "design a database schema"
Combined with model selection:
codex -m gpt-5.1-codex-max -c model_reasoning_effort="xhigh" "migrate this Python 2 codebase to Python 3"
Platform-Specific Default Models
Codex uses different default models based on your operating system:
| Platform | Default Model (API Key) | Default Model (ChatGPT Sign-in) |
|---|---|---|
| macOS | gpt-5-codex | gpt-5.2-codex |
| Linux | gpt-5-codex | gpt-5.2-codex |
| Windows | gpt-5 | gpt-5.2-codex |
Windows defaults to the general-purpose model rather than the coding-specific variant due to historical compatibility considerations. For optimal Windows performance, explicitly set model = "gpt-5.2-codex" in your config file, as GPT-5.2-Codex includes specific improvements for Windows environments.
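For example, a minimal Windows config using the path listed earlier looks like this:
# %USERPROFILE%\.codex\config.toml
model = "gpt-5.2-codex"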
Cost and Performance Considerations
Subscription Limits
If you are using a ChatGPT subscription, your model choice affects usage limits:
ChatGPT Plus ($20/month):
- Local messages: 45-225 per 5-hour window
- Cloud tasks: 10-60 per 5-hour window
ChatGPT Pro ($200/month):
- Local messages: 300-1,500 per 5-hour window (6x higher)
- Cloud tasks: 50-400 per 5-hour window (6x higher)
Using gpt-5.1-codex-mini provides up to 4x higher local message limits.
API Pricing (Per Million Tokens)
If you are using an API key, costs vary significantly by model:
| Model | Input | Output |
|---|---|---|
| gpt-5.2-codex | $1.75 | $14.00 |
| gpt-5.1-codex-mini | $1.50 | $6.00 |
| gpt-5-mini | $0.25 | $2.00 |
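To put the table in concrete terms, here is a back-of-the-envelope estimate for a hypothetical task; the token counts are illustrative, not measured:
# Hypothetical task on gpt-5.2-codex: 50,000 input tokens, 5,000 output tokens
echo "scale=4; (50000 * 1.75 + 5000 * 14.00) / 1000000" | bc
# Prints .1575, i.e. roughly $0.16 for the task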
Choosing the Right Model
Use this decision framework (example invocations follow the list):
- Simple edits, formatting, comments: use gpt-5.1-codex-mini with low reasoning
- Standard development tasks: use gpt-5.2-codex with medium reasoning (the default)
- Complex refactoring, migrations: use gpt-5.2-codex with high reasoning
- Project-scale architecture work: use gpt-5.1-codex-max with xhigh reasoning
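As a sketch, the framework maps onto invocations like these, using the flags covered earlier; the task strings are placeholders:
codex -m gpt-5.1-codex-mini -c model_reasoning_effort="low" "fix the typo in README.md"
codex -m gpt-5.2-codex "add input validation to the signup form"  # medium is the default
codex -m gpt-5.2-codex -c model_reasoning_effort="high" "migrate the ORM layer to async"
codex -m gpt-5.1-codex-max -c model_reasoning_effort="xhigh" "restructure the monolith into services"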
Using Alternative Providers
Codex can connect to other model providers, including local models via Ollama.
Local Models with Ollama
# Start Codex with the local open source provider
codex --oss "explain this code"
# Or configure in config.toml
# model_provider = "oss"
Note: This requires a running Ollama instance on your machine.
Custom Providers
You can configure custom model providers in your config.toml:
model_provider = "openai" # Default provider
# Configure alternative providers in model_providers section
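As an illustrative sketch, a provider entry for a local Ollama server might look like the following. The key names follow the provider schema described in the Codex configuration docs, and the model name assumes you have already pulled it in Ollama; verify both against your installed version:
# ~/.codex/config.toml
model = "llama3.1"                      # assumes this model is pulled in Ollama
model_provider = "ollama"

[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint
wire_api = "chat"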
Verifying Your Current Model
Check your active model and configuration at any time:
# Confirm which CLI version you are running
codex --version
# Show the active model during an interactive session
/status
The /status command displays the currently active model, approval policy, and token usage.
Troubleshooting
Model Not Available Error
If you receive an error that a model is not available:
- Ensure your subscription or API credits support the requested model
- Check for typos in the model name
- Verify you are signed in: codex auth status
Reasoning Effort Not Applied
Reasoning effort only works with models using the Responses API. If your setting is ignored:
- Verify you are using a compatible model (GPT-5 family)
- Check that model_reasoning_effort is spelled correctly in config.toml (a quick check follows this list)
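For example, on macOS/Linux you can confirm the key is present with a standard grep:
# Prints the matching line if the key is set
grep model_reasoning_effort ~/.codex/config.toml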
Config File Not Being Read
If your config.toml changes are not taking effect:
- Verify the file location is correct (~/.codex/config.toml on macOS/Linux, %USERPROFILE%\.codex\config.toml on Windows)
- Check for TOML syntax errors
- Remember that command-line flags override config file settings (this is by design; see the example below)
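For instance, a flag-based override always wins for that invocation, regardless of what config.toml says:
# Runs gpt-5.1-codex-mini even if config.toml sets model = "gpt-5.2-codex"
codex -m gpt-5.1-codex-mini "quick sanity check"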
Next Steps
- Read the official Codex models documentation
- Explore Codex configuration options
- Learn about Codex pricing and credits
- Compare with Google Gemini CLI which offers a 1M token context window on its free tier