From 5ec2dd05a590f33cb06f543b9c7a2329e9393ac3 Mon Sep 17 00:00:00 2001
From: Garry Tan
Date: Wed, 18 Mar 2026 21:32:58 -0700
Subject: [PATCH] =?UTF-8?q?refactor:=20don't=20hardcode=20model=20?=
 =?UTF-8?q?=E2=80=94=20use=20codex=20default=20(always=20latest)?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 codex/SKILL.md      | 17 ++++++-----------
 codex/SKILL.md.tmpl | 17 ++++++-----------
 2 files changed, 12 insertions(+), 22 deletions(-)

diff --git a/codex/SKILL.md b/codex/SKILL.md
index 04e85664..c9b3e056 100644
--- a/codex/SKILL.md
+++ b/codex/SKILL.md
@@ -410,11 +410,9 @@ Session saved — run /codex again to continue this conversation.
 
 ## Model & Reasoning
 
-**Models** (with ChatGPT login):
-- `gpt-5.2-codex` (default) — frontier agentic coding model, best overall
-- `gpt-5.2` — deeper reasoning for non-coding analysis
-- `gpt-5.1-codex-max` — deep and fast reasoning, 30% cheaper
-- `gpt-5.1-codex-mini` — faster and cheaper, less capable
+**Model:** No model is hardcoded — codex uses whatever its current default is (the frontier
+agentic coding model). This means as OpenAI ships newer models, /codex automatically
+uses them. If the user wants a specific model, pass `-m` through to codex.
 
 **Reasoning effort** varies by mode — use the right level for each task:
 - **Review mode:** `high` — thorough but not slow. Diff review benefits from depth but doesn't need maximum compute.
@@ -424,19 +422,16 @@ Session saved — run /codex again to continue this conversation.
 
 **Web search:** All codex commands use `--enable web_search_cached` so Codex
 can look up docs and APIs during review. This is OpenAI's cached index — fast, no extra cost.
 
-If the user specifies a model (e.g., `/codex review -m gpt-5.1-codex-max`),
-pass the `-m` flag through to codex.
+If the user specifies a model (e.g., `/codex review -m gpt-5.1-codex-max`
+or `/codex challenge -m gpt-5.2`), pass the `-m` flag through to codex.
 
 ---
 
 ## Cost Estimation
 
 Parse token count from stderr. Codex prints `tokens used\nN` to stderr.
-Estimate cost based on the model:
-- gpt-5.2-codex: ~$0.008 per 1K tokens (estimate)
-- o3: ~$0.01 per 1K tokens (estimate)
 
-Display as: `Tokens: N | Est. cost: ~$X.XX`
+Display as: `Tokens: N`
 
 If token count is not available, display: `Tokens: unknown`
diff --git a/codex/SKILL.md.tmpl b/codex/SKILL.md.tmpl
index bfd1fa20..291c53fc 100644
--- a/codex/SKILL.md.tmpl
+++ b/codex/SKILL.md.tmpl
@@ -264,11 +264,9 @@ Session saved — run /codex again to continue this conversation.
 
 ## Model & Reasoning
 
-**Models** (with ChatGPT login):
-- `gpt-5.2-codex` (default) — frontier agentic coding model, best overall
-- `gpt-5.2` — deeper reasoning for non-coding analysis
-- `gpt-5.1-codex-max` — deep and fast reasoning, 30% cheaper
-- `gpt-5.1-codex-mini` — faster and cheaper, less capable
+**Model:** No model is hardcoded — codex uses whatever its current default is (the frontier
+agentic coding model). This means as OpenAI ships newer models, /codex automatically
+uses them. If the user wants a specific model, pass `-m` through to codex.
 
 **Reasoning effort** varies by mode — use the right level for each task:
 - **Review mode:** `high` — thorough but not slow. Diff review benefits from depth but doesn't need maximum compute.
@@ -278,19 +276,16 @@ Session saved — run /codex again to continue this conversation.
 
 **Web search:** All codex commands use `--enable web_search_cached` so Codex
 can look up docs and APIs during review. This is OpenAI's cached index — fast, no extra cost.
 
-If the user specifies a model (e.g., `/codex review -m gpt-5.1-codex-max`),
-pass the `-m` flag through to codex.
+If the user specifies a model (e.g., `/codex review -m gpt-5.1-codex-max`
+or `/codex challenge -m gpt-5.2`), pass the `-m` flag through to codex.
 
 ---
 
 ## Cost Estimation
 
 Parse token count from stderr. Codex prints `tokens used\nN` to stderr.
-Estimate cost based on the model:
-- gpt-5.2-codex: ~$0.008 per 1K tokens (estimate)
-- o3: ~$0.01 per 1K tokens (estimate)
 
-Display as: `Tokens: N | Est. cost: ~$X.XX`
+Display as: `Tokens: N`
 
 If token count is not available, display: `Tokens: unknown`
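Outside the patch itself: the Cost Estimation rule the patch leaves in place (parse `tokens used\nN` from codex's stderr, display `Tokens: N`, or `Tokens: unknown` if absent) could be sketched as below. This is a minimal illustration, not code from the repo, and the helper name `format_token_line` is hypothetical:

```python
import re

def format_token_line(stderr_text: str) -> str:
    """Extract the count codex prints to stderr as 'tokens used\\n<N>'
    and format the display line the skill describes."""
    match = re.search(r"tokens used\s*\n\s*(\d+)", stderr_text)
    if match:
        return f"Tokens: {match.group(1)}"
    # Fall back when codex printed no token summary.
    return "Tokens: unknown"

print(format_token_line("...\ntokens used\n48210\n"))  # Tokens: 48210
print(format_token_line("no token info here"))         # Tokens: unknown
```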