mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-01 19:25:10 +02:00
b805aa0113
* feat: add Confusion Protocol to preamble resolver. Injects a high-stakes ambiguity gate at preamble tier >= 2 so all workflow skills get it. Fires when Claude encounters architectural decisions, data model changes, destructive operations, or contradictory requirements. Does NOT fire on routine coding. Addresses Karpathy failure mode #1 (wrong assumptions) with an inline STOP gate instead of relying on workflow skill invocation.
* feat: add Hermes and GBrain host configs. Hermes: tool rewrites for terminal/read_file/patch/delegate_task, paths to ~/.hermes/skills/gstack, AGENTS.md config file. GBrain: coding skills become brain-aware when the GBrain mod is installed. Same tool rewrites as OpenClaw (agents spawn Claude Code via ACP). GBRAIN_CONTEXT_LOAD and GBRAIN_SAVE_RESULTS NOT suppressed on the gbrain host, enabling brain-first lookup and save-to-brain behavior. Both registered in hosts/index.ts with setup script redirect messages.
* feat: GBrain resolver — brain-first lookup and save-to-brain. New scripts/resolvers/gbrain.ts with two resolver functions: GBRAIN_CONTEXT_LOAD (search brain for context before the skill starts) and GBRAIN_SAVE_RESULTS (save skill output to brain after completion). Placeholders added to 4 thinking skill templates (office-hours, investigate, plan-ceo-review, retro). Resolves to empty string on all hosts except gbrain via suppressedResolvers. GBRAIN suppression added to all 9 non-gbrain host configs.
* feat: wire slop:diff into /review as advisory diagnostic. Adds Step 3.5 to the review template: runs bun run slop:diff against the base branch to catch AI code quality issues (empty catches, redundant return await, overcomplicated abstractions). Advisory only, never blocking. Skips silently if slop-scan is not installed.
* docs: add Karpathy compatibility note to README. Positions gstack as the workflow enforcement layer for Karpathy-style CLAUDE.md rules (17K stars). Links to forrestchang/andrej-karpathy-skills. Maps each Karpathy failure mode to the gstack skill that addresses it.
* fix: improve native OpenClaw thinking skills. office-hours: add design doc path visibility message after writing; ceo-review: add HARD GATE reminder at review section transitions; retro: add non-git context support (check memory for meeting notes). Mirrors template improvements to hand-crafted native skills.
* chore: update tests and golden fixtures for new hosts. Host count: 8 → 10 (hermes, gbrain); OpenClaw adapter test: expects undefined (dead code removed); golden ship fixtures: updated with Confusion Protocol + vendoring.
* chore: regenerate all SKILL.md files. Regenerated from templates after Confusion Protocol, GBrain resolver placeholders, slop:diff in review, HARD GATE reminders, investigation learnings, design doc visibility, and retro non-git context changes.
* docs: update project documentation for v0.18.0.0. CHANGELOG: add v0.18.0.0 entry (Confusion Protocol, Hermes, GBrain, slop in review, Karpathy note, skill improvements); CLAUDE.md: add hermes.ts and gbrain.ts to hosts listing; README.md: update agent count 8→10, add Hermes + GBrain to table; VERSION: bump to 0.18.0.0.
* chore: sync package.json version to 0.18.0.0.
* fix: extract Step 0 from review SKILL.md in E2E test. The review-base-branch E2E test was copying the full 1493-line review/SKILL.md into the test fixture. The agent spent 8+ turns reading it in chunks, leaving only 7 turns for actual work, causing error_max_turns on every attempt. Now extracts only Step 0 (base branch detection, ~50 lines), which is all the test actually needs. Follows the CLAUDE.md rule: "NEVER copy a full SKILL.md file into an E2E test fixture."
* feat: update GBrain and Hermes host configs for v0.10.0 integration. GBrain: add 'triggers' to keepFields so generated skills pass checkResolvable() validation; add version compat comment. Hermes: un-suppress GBRAIN_CONTEXT_LOAD and GBRAIN_SAVE_RESULTS. The resolvers handle GBrain-not-installed gracefully, so Hermes agents with GBrain as a mod get brain features automatically.
* feat: GBrain resolver DX improvements and preamble health check. Resolver changes: gbrain query → gbrain search (fast keyword search, not expensive hybrid); keyword extraction guidance for agents; explicit gbrain put_page syntax with --title, --tags, heredoc; entity enrichment with false-positive filter; named throttle error patterns (exit code 1, stderr keywords); data-research routing for the investigate skill; skillSaveMap expanded from 4 to 8 entries; brain operation telemetry summary. Preamble changes: gbrain doctor --fast --json health check for gbrain/hermes hosts; parse check failures/warnings count; show failing check details when score < 50.
* fix: preserve keepFields in allowlist frontmatter mode. The allowlist mode hard-coded name + description reconstruction but never iterated keepFields for additional fields, so adding 'triggers' to keepFields was a no-op because the field was silently stripped. Now iterates keepFields and preserves any field beyond name/description from the source template frontmatter, including YAML arrays.
* feat: add triggers to all 38 skill templates. Multi-word, skill-specific trigger keywords for GBrain's RESOLVER.md router. Each skill gets 3-6 triggers derived from its "Use when asked to..." description text. Avoids single generic words that would collide across skills (e.g., "debug this" not "debug"). These are distinct from voice-triggers (speech-to-text aliases) and serve GBrain's checkResolvable() validation.
* chore: regenerate all SKILL.md files and update golden fixtures. Regenerated from updated templates (triggers, brain placeholders, resolver DX improvements, preamble health check). Golden fixtures updated to match.
* fix: settings-hook remove exits 1 when nothing to remove. gstack-settings-hook remove was exiting 0 when settings.json didn't exist, causing gstack-uninstall to report "SessionStart hook" as removed on clean systems where nothing was installed.
* docs: update project documentation for GBrain v0.10.0 integration. ARCHITECTURE.md: added GBRAIN_CONTEXT_LOAD and GBRAIN_SAVE_RESULTS to the resolver table; CHANGELOG.md: expanded v0.18.0.0 entry with GBrain v0.10.0 integration details (triggers, expanded brain-awareness, DX improvements, Hermes brain support), updated date; CLAUDE.md: added gbrain to the resolvers/ directory comment.
* fix: routing E2E stops writing to user's ~/.claude/skills/. installSkills() was copying SKILL.md files to both project-level (.claude/skills/ in tmpDir) and user-level (~/.claude/skills/). Writing to the user's real install fails when symlinks point to different worktrees or dangling targets (ENOENT on copyFileSync). Now installs to project-level only; the test already sets cwd to the tmpDir, so project-level discovery works.
* chore: scale Gemini E2E back to smoke test. Gemini CLI gets lost in worktrees on complex tasks (review times out at 600s, discover-skill hits exit 124), and nobody uses Gemini for gstack skill execution. Replaces the two failing tests (gemini-discover-skill and gemini-review-findings) with a single smoke test that verifies Gemini can start and read the README. 90s timeout, no skill invocation.

Co-authored-by (all commits): Claude Opus 4.6 (1M context) <noreply@anthropic.com>
444 lines
20 KiB
Cheetah
---
name: codex
preamble-tier: 3
version: 1.0.0
description: |
  OpenAI Codex CLI wrapper — three modes. Code review: independent diff review via
  codex review with pass/fail gate. Challenge: adversarial mode that tries to break
  your code. Consult: ask codex anything with session continuity for follow-ups.
  The "200 IQ autistic developer" second opinion. Use when asked to "codex review",
  "codex challenge", "ask codex", "second opinion", or "consult codex". (gstack)
voice-triggers:
  - "code x"
  - "code ex"
  - "get another opinion"
triggers:
  - codex review
  - second opinion
  - outside voice challenge
allowed-tools:
  - Bash
  - Read
  - Write
  - Glob
  - Grep
  - AskUserQuestion
---

{{PREAMBLE}}

{{BASE_BRANCH_DETECT}}

# /codex — Multi-AI Second Opinion

You are running the `/codex` skill. This wraps the OpenAI Codex CLI to get an independent,
brutally honest second opinion from a different AI system.

Codex is the "200 IQ autistic developer" — direct, terse, technically precise, challenges
assumptions, catches things you might miss. Present its output faithfully, not summarized.

---

## Step 0: Check codex binary

```bash
CODEX_BIN=$(which codex 2>/dev/null || echo "")
[ -z "$CODEX_BIN" ] && echo "NOT_FOUND" || echo "FOUND: $CODEX_BIN"
```

If `NOT_FOUND`: stop and tell the user:
"Codex CLI not found. Install it: `npm install -g @openai/codex` or see https://github.com/openai/codex"

---

## Step 1: Detect mode

Parse the user's input to determine which mode to run:

1. `/codex review` or `/codex review <instructions>` — **Review mode** (Step 2A)
2. `/codex challenge` or `/codex challenge <focus>` — **Challenge mode** (Step 2B)
3. `/codex` with no arguments — **Auto-detect:**
   - Check for a diff (with fallback if origin isn't available):
     `git diff origin/<base> --stat 2>/dev/null | tail -1 || git diff <base> --stat 2>/dev/null | tail -1`
   - If a diff exists, use AskUserQuestion:
     ```
     Codex detected changes against the base branch. What should it do?
     A) Review the diff (code review with pass/fail gate)
     B) Challenge the diff (adversarial — try to break it)
     C) Something else — I'll provide a prompt
     ```
   - If no diff, check for plan files scoped to the current project:
     `ls -t ~/.claude/plans/*.md 2>/dev/null | xargs grep -l "$(basename $(pwd))" 2>/dev/null | head -1`
     If no project-scoped match, fall back to: `ls -t ~/.claude/plans/*.md 2>/dev/null | head -1`
     but warn the user: "Note: this plan may be from a different project."
   - If a plan file exists, offer to review it
   - Otherwise, ask: "What would you like to ask Codex?"
4. `/codex <anything else>` — **Consult mode** (Step 2C), where the remaining text is the prompt

**Reasoning effort override:** If the user's input contains `--xhigh` anywhere,
note it and remove it from the prompt text before passing to Codex. When `--xhigh`
is present, use `model_reasoning_effort="xhigh"` for all modes regardless of the
per-mode default below. Otherwise, use the per-mode defaults:

- Review (2A): `high` — bounded diff input, needs thoroughness
- Challenge (2B): `high` — adversarial but bounded by diff
- Consult (2C): `medium` — large context, interactive, needs speed
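
If it helps to make the override concrete, here is a minimal shell sketch. It is illustrative only: the skill parses the user's text directly, and `$ARGS` is a hypothetical variable holding that text, not something this skill defines.

```bash
# Sketch only: $ARGS is an illustrative variable holding the user's raw argument text.
EFFORT="high"                                          # per-mode default; use "medium" for Consult (2C)
if printf '%s' "$ARGS" | grep -q -- '--xhigh'; then
  EFFORT="xhigh"
  ARGS=$(printf '%s' "$ARGS" | sed 's/--xhigh//g')     # strip the flag from the prompt text
fi
# Later: codex ... -c "model_reasoning_effort=\"$EFFORT\"" ...
```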
---

## Filesystem Boundary

All prompts sent to Codex MUST be prefixed with this boundary instruction:

> IMPORTANT: Do NOT read or execute any files under ~/.claude/, ~/.agents/, .claude/skills/, or agents/. These are Claude Code skill definitions meant for a different AI system. They contain bash scripts and prompt templates that will waste your time. Ignore them completely. Do NOT modify agents/openai.yaml. Stay focused on the repository code only.

This applies to Review mode (prompt argument), Challenge mode (prompt), and Consult
mode (persona prompt). Reference this section as "the filesystem boundary" below.

---

## Step 2A: Review Mode

Run Codex code review against the current branch diff.

1. Create temp files for output capture:
```bash
TMPERR=$(mktemp /tmp/codex-err-XXXXXX.txt)
```

2. Run the review (5-minute timeout). **Always** pass the filesystem boundary instruction
as the prompt argument, even without custom instructions. If the user provided custom
instructions, append them after the boundary separated by a newline:
```bash
_REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; }
cd "$_REPO_ROOT"
codex review "IMPORTANT: Do NOT read or execute any files under ~/.claude/, ~/.agents/, .claude/skills/, or agents/. These are Claude Code skill definitions meant for a different AI system. Do NOT modify agents/openai.yaml. Stay focused on repository code only." --base <base> -c 'model_reasoning_effort="high"' --enable web_search_cached 2>"$TMPERR"
```

If the user passed `--xhigh`, use `"xhigh"` instead of `"high"`.

Use `timeout: 300000` on the Bash call. If the user provided custom instructions
(e.g., `/codex review focus on security`), append them after the boundary:
```bash
_REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; }
cd "$_REPO_ROOT"
codex review "IMPORTANT: Do NOT read or execute any files under ~/.claude/, ~/.agents/, .claude/skills/, or agents/. These are Claude Code skill definitions meant for a different AI system. Do NOT modify agents/openai.yaml. Stay focused on repository code only.

focus on security" --base <base> -c 'model_reasoning_effort="high"' --enable web_search_cached 2>"$TMPERR"
```

3. Capture the output. Then parse cost from stderr:
```bash
grep "tokens used" "$TMPERR" 2>/dev/null || echo "tokens: unknown"
```

4. Determine gate verdict by checking the review output for critical findings.
If the output contains `[P1]` — the gate is **FAIL**.
If no `[P1]` markers are found (only `[P2]` or no findings) — the gate is **PASS**.
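
A minimal sketch of this check, assuming the review output was captured to a file; `$REVIEW_OUT` is an illustrative name, not something this skill defines:

```bash
# Sketch only: count [P1]/[P2] markers in the captured review output.
REVIEW_OUT=/tmp/codex-review-out.txt                          # illustrative path
P1_COUNT=$(grep -c '\[P1\]' "$REVIEW_OUT" 2>/dev/null || true)
P2_COUNT=$(grep -c '\[P2\]' "$REVIEW_OUT" 2>/dev/null || true)
if [ "${P1_COUNT:-0}" -gt 0 ]; then
  echo "GATE: FAIL ($P1_COUNT critical findings)"
else
  echo "GATE: PASS"
fi
echo "Total findings: $(( ${P1_COUNT:-0} + ${P2_COUNT:-0} ))"
```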

5. Present the output:

```
CODEX SAYS (code review):
════════════════════════════════════════════════════════════
<full codex output, verbatim — do not truncate or summarize>
════════════════════════════════════════════════════════════
GATE: PASS Tokens: 14,331 | Est. cost: ~$0.12
```

or

```
GATE: FAIL (N critical findings)
```

6. **Cross-model comparison:** If `/review` (Claude's own review) was already run
earlier in this conversation, compare the two sets of findings:

```
CROSS-MODEL ANALYSIS:
Both found: [findings that overlap between Claude and Codex]
Only Codex found: [findings unique to Codex]
Only Claude found: [findings unique to Claude's /review]
Agreement rate: X% (N/M total unique findings overlap)
```

7. Persist the review result:
```bash
~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"codex-review","timestamp":"TIMESTAMP","status":"STATUS","gate":"GATE","findings":N,"findings_fixed":N,"commit":"'"$(git rev-parse --short HEAD)"'"}'
```

Substitute: TIMESTAMP (ISO 8601), STATUS ("clean" if PASS, "issues_found" if FAIL),
GATE ("pass" or "fail"), findings (count of [P1] + [P2] markers),
findings_fixed (count of findings that were addressed/fixed before shipping).
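
A hedged sketch of filling those substitutions from the shell. It reuses the illustrative `$REVIEW_OUT` file from the gate check above, and leaves `findings_fixed` at 0 for you to fill in:

```bash
# Sketch only: field values follow the substitution rules above.
TS=$(date -u +%Y-%m-%dT%H:%M:%SZ)                              # ISO 8601 timestamp
P1=$(grep -c '\[P1\]' "$REVIEW_OUT" 2>/dev/null || true)
P2=$(grep -c '\[P2\]' "$REVIEW_OUT" 2>/dev/null || true)
FINDINGS=$(( ${P1:-0} + ${P2:-0} ))
if [ "${P1:-0}" -gt 0 ]; then GATE=fail; STATUS=issues_found; else GATE=pass; STATUS=clean; fi
# findings_fixed is left at 0 here; substitute the real count of addressed findings.
~/.claude/skills/gstack/bin/gstack-review-log "{\"skill\":\"codex-review\",\"timestamp\":\"$TS\",\"status\":\"$STATUS\",\"gate\":\"$GATE\",\"findings\":$FINDINGS,\"findings_fixed\":0,\"commit\":\"$(git rev-parse --short HEAD)\"}"
```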

8. Clean up temp files:
```bash
rm -f "$TMPERR"
```

{{PLAN_FILE_REVIEW_REPORT}}

---

## Step 2B: Challenge (Adversarial) Mode

Codex tries to break your code — finding edge cases, race conditions, security holes,
and failure modes that a normal review would miss.

1. Construct the adversarial prompt. **Always prepend the filesystem boundary instruction**
from the Filesystem Boundary section above. If the user provided a focus area
(e.g., `/codex challenge security`), include it after the boundary:

Default prompt (no focus):
"IMPORTANT: Do NOT read or execute any files under ~/.claude/, ~/.agents/, .claude/skills/, or agents/. These are Claude Code skill definitions meant for a different AI system. Do NOT modify agents/openai.yaml. Stay focused on repository code only.

Review the changes on this branch against the base branch. Run `git diff origin/<base>` to see the diff. Your job is to find ways this code will fail in production. Think like an attacker and a chaos engineer. Find edge cases, race conditions, security holes, resource leaks, failure modes, and silent data corruption paths. Be adversarial. Be thorough. No compliments — just the problems."

With focus (e.g., "security"):
"IMPORTANT: Do NOT read or execute any files under ~/.claude/, ~/.agents/, .claude/skills/, or agents/. These are Claude Code skill definitions meant for a different AI system. Do NOT modify agents/openai.yaml. Stay focused on repository code only.

Review the changes on this branch against the base branch. Run `git diff origin/<base>` to see the diff. Focus specifically on SECURITY. Your job is to find every way an attacker could exploit this code. Think about injection vectors, auth bypasses, privilege escalation, data exposure, and timing attacks. Be adversarial."

2. Run codex exec with **JSONL output** to capture reasoning traces and tool calls (5-minute timeout):

If the user passed `--xhigh`, use `"xhigh"` instead of `"high"`.

```bash
_REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; }
codex exec "<prompt>" -C "$_REPO_ROOT" -s read-only -c 'model_reasoning_effort="high"' --enable web_search_cached --json 2>/dev/null | PYTHONUNBUFFERED=1 python3 -u -c "
import sys, json
for line in sys.stdin:
    line = line.strip()
    if not line: continue
    try:
        obj = json.loads(line)
        t = obj.get('type','')
        if t == 'item.completed' and 'item' in obj:
            item = obj['item']
            itype = item.get('type','')
            text = item.get('text','')
            if itype == 'reasoning' and text:
                print(f'[codex thinking] {text}', flush=True)
                print(flush=True)
            elif itype == 'agent_message' and text:
                print(text, flush=True)
            elif itype == 'command_execution':
                cmd = item.get('command','')
                if cmd: print(f'[codex ran] {cmd}', flush=True)
        elif t == 'turn.completed':
            usage = obj.get('usage',{})
            tokens = usage.get('input_tokens',0) + usage.get('output_tokens',0)
            if tokens: print(f'\ntokens used: {tokens}', flush=True)
    except: pass
"
```

This parses codex's JSONL events to extract reasoning traces, tool calls, and the final
response. The `[codex thinking]` lines show what codex reasoned through before its answer.

3. Present the full streamed output:

```
CODEX SAYS (adversarial challenge):
════════════════════════════════════════════════════════════
<full output from above, verbatim>
════════════════════════════════════════════════════════════
Tokens: N | Est. cost: ~$X.XX
```

---

## Step 2C: Consult Mode

Ask Codex anything about the codebase. Supports session continuity for follow-ups.

1. **Check for existing session:**
```bash
cat .context/codex-session-id 2>/dev/null || echo "NO_SESSION"
```

If a session file exists (not `NO_SESSION`), use AskUserQuestion:
```
You have an active Codex conversation from earlier. Continue it or start fresh?
A) Continue the conversation (Codex remembers the prior context)
B) Start a new conversation
```

2. Create temp files:
```bash
TMPRESP=$(mktemp /tmp/codex-resp-XXXXXX.txt)
TMPERR=$(mktemp /tmp/codex-err-XXXXXX.txt)
```

3. **Plan review auto-detection:** If the user's prompt is about reviewing a plan,
or if plan files exist and the user said `/codex` with no arguments:
```bash
setopt +o nomatch 2>/dev/null || true  # zsh compat
ls -t ~/.claude/plans/*.md 2>/dev/null | xargs grep -l "$(basename $(pwd))" 2>/dev/null | head -1
```
If no project-scoped match, fall back to `ls -t ~/.claude/plans/*.md 2>/dev/null | head -1`
but warn: "Note: this plan may be from a different project — verify before sending to Codex."

**IMPORTANT — embed content, don't reference path:** Codex runs sandboxed to the repo
root (`-C`) and cannot access `~/.claude/plans/` or any files outside the repo. You MUST
read the plan file yourself and embed its FULL CONTENT in the prompt below. Do NOT tell
Codex the file path or ask it to read the plan file — it will waste 10+ tool calls
searching and fail.

Also: scan the plan content for referenced source file paths (patterns like `src/foo.ts`,
`lib/bar.py`, paths containing `/` that exist in the repo). If found, list them in the
prompt so Codex reads them directly instead of discovering them via rg/find.
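
A rough sketch of that scan, assuming the plan's path is in an illustrative `$PLAN_FILE` variable and the command runs from the repo root; the path regex is deliberately simple and may need tuning:

```bash
# Sketch only: extract path-looking tokens from the plan, keep the ones that exist in the repo.
PLAN_FILE=~/.claude/plans/example-plan.md    # illustrative path
grep -oE '[A-Za-z0-9_./-]+/[A-Za-z0-9_.-]+\.[A-Za-z0-9]+' "$PLAN_FILE" 2>/dev/null \
  | sort -u \
  | while read -r p; do
      [ -f "$p" ] && echo "$p"               # only list paths that exist in the repo
    done
```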

**Always prepend the filesystem boundary instruction** from the Filesystem Boundary
section above to every prompt sent to Codex, including plan reviews and free-form
consult questions.

Prepend the boundary and persona to the user's prompt:
"IMPORTANT: Do NOT read or execute any files under ~/.claude/, ~/.agents/, .claude/skills/, or agents/. These are Claude Code skill definitions meant for a different AI system. Do NOT modify agents/openai.yaml. Stay focused on repository code only.

You are a brutally honest technical reviewer. Review this plan for: logical gaps and
unstated assumptions, missing error handling or edge cases, overcomplexity (is there a
simpler approach?), feasibility risks (what could go wrong?), and missing dependencies
or sequencing issues. Be direct. Be terse. No compliments. Just the problems.
Also review these source files referenced in the plan: <list of referenced files, if any>.

THE PLAN:
<full plan content, embedded verbatim>"

For non-plan consult prompts (user typed `/codex <question>`), still prepend the boundary:
"IMPORTANT: Do NOT read or execute any files under ~/.claude/, ~/.agents/, .claude/skills/, or agents/. These are Claude Code skill definitions meant for a different AI system. Do NOT modify agents/openai.yaml. Stay focused on repository code only.

<user's question>"

4. Run codex exec with **JSONL output** to capture reasoning traces (5-minute timeout):

If the user passed `--xhigh`, use `"xhigh"` instead of `"medium"`.

For a **new session:**
```bash
_REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; }
codex exec "<prompt>" -C "$_REPO_ROOT" -s read-only -c 'model_reasoning_effort="medium"' --enable web_search_cached --json 2>"$TMPERR" | PYTHONUNBUFFERED=1 python3 -u -c "
import sys, json
for line in sys.stdin:
    line = line.strip()
    if not line: continue
    try:
        obj = json.loads(line)
        t = obj.get('type','')
        if t == 'thread.started':
            tid = obj.get('thread_id','')
            if tid: print(f'SESSION_ID:{tid}', flush=True)
        elif t == 'item.completed' and 'item' in obj:
            item = obj['item']
            itype = item.get('type','')
            text = item.get('text','')
            if itype == 'reasoning' and text:
                print(f'[codex thinking] {text}', flush=True)
                print(flush=True)
            elif itype == 'agent_message' and text:
                print(text, flush=True)
            elif itype == 'command_execution':
                cmd = item.get('command','')
                if cmd: print(f'[codex ran] {cmd}', flush=True)
        elif t == 'turn.completed':
            usage = obj.get('usage',{})
            tokens = usage.get('input_tokens',0) + usage.get('output_tokens',0)
            if tokens: print(f'\ntokens used: {tokens}', flush=True)
    except: pass
"
```

For a **resumed session** (user chose "Continue"):
```bash
_REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; }
codex exec resume <session-id> "<prompt>" -C "$_REPO_ROOT" -s read-only -c 'model_reasoning_effort="medium"' --enable web_search_cached --json 2>"$TMPERR" | PYTHONUNBUFFERED=1 python3 -u -c "
<same python streaming parser as above, with flush=True on all print() calls>
"
```

5. Capture session ID from the streamed output. The parser prints `SESSION_ID:<id>`
from the `thread.started` event. Save it for follow-ups:
```bash
mkdir -p .context
```
Save the session ID printed by the parser (the line starting with `SESSION_ID:`)
to `.context/codex-session-id`.
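
A minimal sketch, assuming the parser's output was also captured to the `$TMPRESP` file created in step 2 (an assumption; the command above only streams it):

```bash
# Sketch only: pull the SESSION_ID line out of the captured stream and persist it.
mkdir -p .context
SESSION_ID=$(grep -m1 '^SESSION_ID:' "$TMPRESP" | cut -d: -f2-)
if [ -n "$SESSION_ID" ]; then
  printf '%s\n' "$SESSION_ID" > .context/codex-session-id
fi
```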

6. Present the full streamed output:

```
CODEX SAYS (consult):
════════════════════════════════════════════════════════════
<full output, verbatim — includes [codex thinking] traces>
════════════════════════════════════════════════════════════
Tokens: N | Est. cost: ~$X.XX
Session saved — run /codex again to continue this conversation.
```

7. After presenting, note any points where Codex's analysis differs from your own
understanding. If there is a disagreement, flag it:
"Note: Claude Code disagrees on X because Y."

---

## Model & Reasoning

**Model:** No model is hardcoded — codex uses whatever its current default is (the frontier
agentic coding model). This means as OpenAI ships newer models, /codex automatically
uses them. If the user wants a specific model, pass `-m` through to codex.

**Reasoning effort (per-mode defaults):**
- **Review (2A):** `high` — bounded diff input, needs thoroughness but not max tokens
- **Challenge (2B):** `high` — adversarial but bounded by diff size
- **Consult (2C):** `medium` — large context (plans, codebase), interactive, needs speed

`xhigh` uses ~23x more tokens than `high` and causes 50+ minute hangs on large context
tasks (OpenAI issues #8545, #8402, #6931). Users can override with the `--xhigh` flag
(e.g., `/codex review --xhigh`) when they want maximum reasoning and are willing to wait.

**Web search:** All codex commands use `--enable web_search_cached` so Codex can look up
docs and APIs during review. This is OpenAI's cached index — fast, no extra cost.

If the user specifies a model (e.g., `/codex review -m gpt-5.1-codex-max`
or `/codex challenge -m gpt-5.2`), pass the `-m` flag through to codex.
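
For example, a sketch of the passthrough using the same flags this skill already uses (it assumes `codex review` accepts `-m` the same way `codex exec` does; the model name comes from the user, not a default):

```bash
# Sketch only: user-specified model passed through alongside the usual flags.
codex review "<boundary + instructions>" --base <base> -m gpt-5.1-codex-max \
  -c 'model_reasoning_effort="high"' --enable web_search_cached 2>"$TMPERR"
```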
---

## Cost Estimation

Parse token count from stderr. Codex prints `tokens used\nN` to stderr.

Display as: `Tokens: N`

If token count is not available, display: `Tokens: unknown`
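
A minimal sketch of that parse, assuming the two-line `tokens used` / `N` format described above:

```bash
# Sketch only: relies on the "tokens used" line being followed by the count.
TOKENS=$(grep -A1 'tokens used' "$TMPERR" 2>/dev/null | tail -1 | tr -d '[:space:],')
if printf '%s' "$TOKENS" | grep -qE '^[0-9]+$'; then
  echo "Tokens: $TOKENS"
else
  echo "Tokens: unknown"
fi
```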
---

## Error Handling

- **Binary not found:** Detected in Step 0. Stop with install instructions.
- **Auth error:** Codex prints an auth error to stderr. Surface the error:
  "Codex authentication failed. Run `codex login` in your terminal to authenticate via ChatGPT."
- **Timeout:** If the Bash call times out (5 min), tell the user:
  "Codex timed out after 5 minutes. The diff may be too large or the API may be slow. Try again or use a smaller scope."
- **Empty response:** If `$TMPRESP` is empty or doesn't exist, tell the user:
  "Codex returned no response. Check stderr for errors."
- **Session resume failure:** If resume fails, delete the session file and start fresh.

---

## Important Rules

- **Never modify files.** This skill is read-only. Codex runs in read-only sandbox mode.
- **Present output verbatim.** Do not truncate, summarize, or editorialize Codex's output
  before showing it. Show it in full inside the CODEX SAYS block.
- **Add synthesis after, not instead of.** Any Claude commentary comes after the full output.
- **5-minute timeout** on all Bash calls to codex (`timeout: 300000`).
- **No double-reviewing.** If the user already ran `/review`, Codex provides a second
  independent opinion. Do not re-run Claude Code's own review.
- **Detect skill-file rabbit holes.** After receiving Codex output, scan for signs
  that Codex got distracted by skill files: `gstack-config`, `gstack-update-check`,
  `SKILL.md`, or `skills/gstack`. If any of these appear in the output, append a
  warning: "Codex appears to have read gstack skill files instead of reviewing your
  code. Consider retrying." A quick check is sketched below.
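
A minimal sketch of that scan, assuming the mode's Codex output was captured to a file; `$CODEX_OUT` is an illustrative name for whichever file holds it:

```bash
# Sketch only: flag output that mentions gstack skill internals.
CODEX_OUT=/tmp/codex-review-out.txt    # illustrative path
if grep -qE 'gstack-config|gstack-update-check|SKILL\.md|skills/gstack' "$CODEX_OUT" 2>/dev/null; then
  echo "WARNING: Codex appears to have read gstack skill files instead of reviewing your code. Consider retrying."
fi
```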