feat: codex uses high reasoning effort by default

gpt-5.2-codex is the only model available with ChatGPT login.
All commands now use model_reasoning_effort="high" for maximum
depth — the whole point is a thorough second opinion.
Author: Garry Tan
Date: 2026-03-18 21:26:26 -07:00
parent 0b009d2e84
commit 4e7e5de74d
2 changed files with 38 additions and 10 deletions
+19 -5
@@ -221,13 +221,13 @@ TMPERR=$(mktemp /tmp/codex-err-XXXXXX.txt)
2. Run the review (5-minute timeout):
```bash
-codex review --base <base> 2>"$TMPERR"
+codex review --base <base> -c 'model_reasoning_effort="high"' 2>"$TMPERR"
```
Use `timeout: 300000` on the Bash call. If the user provided custom instructions
(e.g., `/codex review focus on security`), pass them as the prompt argument:
```bash
-codex review "focus on security" --base <base> 2>"$TMPERR"
+codex review "focus on security" --base <base> -c 'model_reasoning_effort="high"' 2>"$TMPERR"
```
3. Capture the output. Then parse cost from stderr:
@@ -306,7 +306,7 @@ With focus (e.g., "security"):
3. Run codex exec (5-minute timeout):
```bash
-codex exec "<prompt>" -s read-only -o "$TMPRESP" 2>"$TMPERR"
+codex exec "<prompt>" -s read-only -c 'model_reasoning_effort="high"' -o "$TMPRESP" 2>"$TMPERR"
```
4. Read the response and parse cost:
@@ -369,12 +369,12 @@ THE PLAN:
For a **new session:**
```bash
-codex exec "<prompt>" -s read-only -o "$TMPRESP" 2>"$TMPERR"
+codex exec "<prompt>" -s read-only -c 'model_reasoning_effort="high"' -o "$TMPRESP" 2>"$TMPERR"
```
For a **resumed session** (user chose "Continue"):
```bash
-codex exec resume <session-id> "<prompt>" -s read-only -o "$TMPRESP" 2>"$TMPERR"
+codex exec resume <session-id> "<prompt>" -s read-only -c 'model_reasoning_effort="high"' -o "$TMPRESP" 2>"$TMPERR"
```
5. Capture and save session ID:
@@ -408,6 +408,20 @@ Session saved — run /codex again to continue this conversation.
---
## Model & Reasoning
Codex with ChatGPT login only supports `gpt-5.2-codex` (the default). Other models
(o3, o4-mini, gpt-4o) require API key auth and are not available with ChatGPT login.
All codex commands use `-c 'model_reasoning_effort="high"'` because the whole point of
this skill is deep, thorough analysis. We want maximum reasoning power — this is your
"200 IQ autistic developer" second opinion, not a quick sanity check.
If the user has API key auth and wants a different model, they can say
`/codex review -m o3` and the `-m` flag should be passed through to codex.
---
## Cost Estimation
Parse token count from stderr. Codex prints `tokens used\nN` to stderr.
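That stderr parse can be sketched with a short shell snippet. This is a sketch, assuming only the `tokens used` marker format described above; the temp file written below is a stand-in for real codex stderr:

```shell
# Grab the line immediately after the "tokens used" marker in stderr.
TMPERR=$(mktemp /tmp/codex-err-XXXXXX.txt)
printf 'tokens used\n12345\n' > "$TMPERR"   # stand-in for real codex stderr
TOKENS=$(grep -A1 '^tokens used$' "$TMPERR" | tail -n1)
echo "tokens: $TOKENS"
rm -f "$TMPERR"
```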
+19 -5
@@ -75,13 +75,13 @@ TMPERR=$(mktemp /tmp/codex-err-XXXXXX.txt)
2. Run the review (5-minute timeout):
```bash
-codex review --base <base> 2>"$TMPERR"
+codex review --base <base> -c 'model_reasoning_effort="high"' 2>"$TMPERR"
```
Use `timeout: 300000` on the Bash call. If the user provided custom instructions
(e.g., `/codex review focus on security`), pass them as the prompt argument:
```bash
-codex review "focus on security" --base <base> 2>"$TMPERR"
+codex review "focus on security" --base <base> -c 'model_reasoning_effort="high"' 2>"$TMPERR"
```
3. Capture the output. Then parse cost from stderr:
@@ -160,7 +160,7 @@ With focus (e.g., "security"):
3. Run codex exec (5-minute timeout):
```bash
-codex exec "<prompt>" -s read-only -o "$TMPRESP" 2>"$TMPERR"
+codex exec "<prompt>" -s read-only -c 'model_reasoning_effort="high"' -o "$TMPRESP" 2>"$TMPERR"
```
4. Read the response and parse cost:
@@ -223,12 +223,12 @@ THE PLAN:
For a **new session:**
```bash
-codex exec "<prompt>" -s read-only -o "$TMPRESP" 2>"$TMPERR"
+codex exec "<prompt>" -s read-only -c 'model_reasoning_effort="high"' -o "$TMPRESP" 2>"$TMPERR"
```
For a **resumed session** (user chose "Continue"):
```bash
-codex exec resume <session-id> "<prompt>" -s read-only -o "$TMPRESP" 2>"$TMPERR"
+codex exec resume <session-id> "<prompt>" -s read-only -c 'model_reasoning_effort="high"' -o "$TMPRESP" 2>"$TMPERR"
```
5. Capture and save session ID:
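The save/resume step could look roughly like this. A sketch only: `SESSION_FILE` and the sample id are hypothetical stand-ins (the real id must be parsed from codex's output), not part of the skill file:

```shell
# Persist the session id so a later /codex run can offer "Continue".
SESSION_FILE=$(mktemp /tmp/codex-session-XXXXXX.txt)
SESSION_ID="0199a2b4-demo"   # stand-in; parse the real id from codex output
printf '%s\n' "$SESSION_ID" > "$SESSION_FILE"

# On the next run: resume when a saved session exists, else start fresh.
if [ -s "$SESSION_FILE" ]; then
  SAVED=$(cat "$SESSION_FILE")
  echo "codex exec resume $SAVED ..."
fi
rm -f "$SESSION_FILE"
```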
@@ -262,6 +262,20 @@ Session saved — run /codex again to continue this conversation.
---
## Model & Reasoning
Codex with ChatGPT login only supports `gpt-5.2-codex` (the default). Other models
(o3, o4-mini, gpt-4o) require API key auth and are not available with ChatGPT login.
All codex commands use `-c 'model_reasoning_effort="high"'` because the whole point of
this skill is deep, thorough analysis. We want maximum reasoning power — this is your
"200 IQ autistic developer" second opinion, not a quick sanity check.
If the user has API key auth and wants a different model, they can say
`/codex review -m o3` and the `-m` flag should be passed through to codex.
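That pass-through can be sketched as follows, assuming only the flags quoted in this section; `MODEL` is a hypothetical variable holding whatever the user typed after `-m`:

```shell
# Build the codex argv as an array, appending -m only when the user
# supplied a model override (which requires API key auth per the note above).
MODEL="o3"   # empty when the user did not pass -m
CMD=(codex review --base main -c 'model_reasoning_effort="high"')
if [ -n "$MODEL" ]; then
  CMD+=(-m "$MODEL")
fi
echo "${CMD[*]}"
```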
---
## Cost Estimation
Parse token count from stderr. Codex prints `tokens used\nN` to stderr.