Mirror of https://github.com/garrytan/gstack.git, synced 2026-05-05 13:15:24 +02:00
feat: codex uses high reasoning effort by default
gpt-5.2-codex is the only model available with ChatGPT login. All commands now use model_reasoning_effort="high" for maximum depth — the whole point is a thorough second opinion.
+19 -5
````diff
@@ -221,13 +221,13 @@ TMPERR=$(mktemp /tmp/codex-err-XXXXXX.txt)
 
 2. Run the review (5-minute timeout):
 ```bash
-codex review --base <base> 2>"$TMPERR"
+codex review --base <base> -c 'model_reasoning_effort="high"' 2>"$TMPERR"
 ```
 
 Use `timeout: 300000` on the Bash call. If the user provided custom instructions
 (e.g., `/codex review focus on security`), pass them as the prompt argument:
 ```bash
-codex review "focus on security" --base <base> 2>"$TMPERR"
+codex review "focus on security" --base <base> -c 'model_reasoning_effort="high"' 2>"$TMPERR"
 ```
 
 3. Capture the output. Then parse cost from stderr:
@@ -306,7 +306,7 @@ With focus (e.g., "security"):
 
 3. Run codex exec (5-minute timeout):
 ```bash
-codex exec "<prompt>" -s read-only -o "$TMPRESP" 2>"$TMPERR"
+codex exec "<prompt>" -s read-only -c 'model_reasoning_effort="high"' -o "$TMPRESP" 2>"$TMPERR"
 ```
 
 4. Read the response and parse cost:
@@ -369,12 +369,12 @@ THE PLAN:
 
 For a **new session:**
 ```bash
-codex exec "<prompt>" -s read-only -o "$TMPRESP" 2>"$TMPERR"
+codex exec "<prompt>" -s read-only -c 'model_reasoning_effort="high"' -o "$TMPRESP" 2>"$TMPERR"
 ```
 
 For a **resumed session** (user chose "Continue"):
 ```bash
-codex exec resume <session-id> "<prompt>" -s read-only -o "$TMPRESP" 2>"$TMPERR"
+codex exec resume <session-id> "<prompt>" -s read-only -c 'model_reasoning_effort="high"' -o "$TMPRESP" 2>"$TMPERR"
 ```
 
 5. Capture and save session ID:
@@ -408,6 +408,20 @@ Session saved — run /codex again to continue this conversation.
 
 ---
 
+## Model & Reasoning
+
+Codex with ChatGPT login only supports `gpt-5.2-codex` (the default). Other models
+(o3, o4-mini, gpt-4o) require API key auth and are not available with ChatGPT login.
+
+All codex commands use `-c 'model_reasoning_effort="high"'` because the whole point of
+this skill is deep, thorough analysis. We want maximum reasoning power — this is your
+"200 IQ autistic developer" second opinion, not a quick sanity check.
+
+If the user has API key auth and wants a different model, they can say
+`/codex review -m o3` and the `-m` flag should be passed through to codex.
+
+---
+
 ## Cost Estimation
 
 Parse token count from stderr. Codex prints `tokens used\nN` to stderr.
````
+19 -5

````diff
@@ -75,13 +75,13 @@ TMPERR=$(mktemp /tmp/codex-err-XXXXXX.txt)
 
 2. Run the review (5-minute timeout):
 ```bash
-codex review --base <base> 2>"$TMPERR"
+codex review --base <base> -c 'model_reasoning_effort="high"' 2>"$TMPERR"
 ```
 
 Use `timeout: 300000` on the Bash call. If the user provided custom instructions
 (e.g., `/codex review focus on security`), pass them as the prompt argument:
 ```bash
-codex review "focus on security" --base <base> 2>"$TMPERR"
+codex review "focus on security" --base <base> -c 'model_reasoning_effort="high"' 2>"$TMPERR"
 ```
 
 3. Capture the output. Then parse cost from stderr:
@@ -160,7 +160,7 @@ With focus (e.g., "security"):
 
 3. Run codex exec (5-minute timeout):
 ```bash
-codex exec "<prompt>" -s read-only -o "$TMPRESP" 2>"$TMPERR"
+codex exec "<prompt>" -s read-only -c 'model_reasoning_effort="high"' -o "$TMPRESP" 2>"$TMPERR"
 ```
 
 4. Read the response and parse cost:
@@ -223,12 +223,12 @@ THE PLAN:
 
 For a **new session:**
 ```bash
-codex exec "<prompt>" -s read-only -o "$TMPRESP" 2>"$TMPERR"
+codex exec "<prompt>" -s read-only -c 'model_reasoning_effort="high"' -o "$TMPRESP" 2>"$TMPERR"
 ```
 
 For a **resumed session** (user chose "Continue"):
 ```bash
-codex exec resume <session-id> "<prompt>" -s read-only -o "$TMPRESP" 2>"$TMPERR"
+codex exec resume <session-id> "<prompt>" -s read-only -c 'model_reasoning_effort="high"' -o "$TMPRESP" 2>"$TMPERR"
 ```
 
 5. Capture and save session ID:
@@ -262,6 +262,20 @@ Session saved — run /codex again to continue this conversation.
 
 ---
 
+## Model & Reasoning
+
+Codex with ChatGPT login only supports `gpt-5.2-codex` (the default). Other models
+(o3, o4-mini, gpt-4o) require API key auth and are not available with ChatGPT login.
+
+All codex commands use `-c 'model_reasoning_effort="high"'` because the whole point of
+this skill is deep, thorough analysis. We want maximum reasoning power — this is your
+"200 IQ autistic developer" second opinion, not a quick sanity check.
+
+If the user has API key auth and wants a different model, they can say
+`/codex review -m o3` and the `-m` flag should be passed through to codex.
+
+---
+
 ## Cost Estimation
 
 Parse token count from stderr. Codex prints `tokens used\nN` to stderr.
````
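The Cost Estimation step added by this commit can be sketched as a small shell snippet. The `tokens used\nN` stderr format is stated in the diff; the `grep`/`tail` extraction and the simulated stderr file are illustrative assumptions, not the skill's actual parsing code.

```shell
# Simulate the stderr a codex run would leave behind via 2>"$TMPERR":
# a "tokens used" line followed by the count on the next line.
TMPERR=$(mktemp /tmp/codex-err-XXXXXX.txt)
printf 'tokens used\n12345\n' > "$TMPERR"

# Take the line immediately after "tokens used" as the token count.
TOKENS=$(grep -A1 '^tokens used' "$TMPERR" | tail -n 1)
echo "tokens: ${TOKENS:-unknown}"

rm -f "$TMPERR"
```

With real output, `"$TMPERR"` would hold the stderr of the `codex review` or `codex exec` call instead of the printf stand-in.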