mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-07 05:56:41 +02:00
18bf4244ac
* refactor: remove 6 dead resolver function copies from gen-skill-docs.ts
These functions were moved to scripts/resolvers/{review,design}.ts but the
old copies in gen-skill-docs.ts were never deleted. They are defined but
never called — the RESOLVERS map from resolvers/index.ts is the live
dispatch. The dead copies had already diverged from the live versions.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: resolve codex exec -C repo root eagerly to prevent wrong-project reviews
When codex exec commands run in background bash tasks (e.g., Conductor
workspaces), $(git rev-parse --show-toplevel) evaluates in whatever cwd
the background shell inherits, which may be a different project. Fix by
resolving _REPO_ROOT once at the top of each bash block and referencing
the stored value in -C.
12 occurrences fixed across 4 source files:
- codex/SKILL.md.tmpl (3)
- autoplan/SKILL.md.tmpl (3)
- scripts/resolvers/review.ts (3)
- scripts/resolvers/design.ts (3)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* test: regression guard for codex exec inline git rev-parse in -C flag
Scans all .tmpl and resolver .ts source files for codex exec commands
that use inline $(git rev-parse --show-toplevel) in the -C flag. This
pattern causes wrong-project reviews in Conductor workspaces. The test
ensures nobody reintroduces the old pattern.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* chore: bump version and changelog (v0.12.6.0)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address adversarial review findings — codex review cwd, test scope, fail-loud
1. codex review commands now cd to $_REPO_ROOT (review doesn't support -C)
2. Autoplan codex commands converted from prose "Prerequisite" to fenced bash blocks
3. || pwd fallback replaced with hard fail — silent wrong-dir is worse than error
4. Regression test now scans all resolver .ts files + generated SKILL.md files
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* test: harden regression test — Bun.Glob, SKILL.md scan, codex review check
Fixes three gaps found by adversarial review:
1. fs.readdirSync recursive hits ELOOP on .claude/skills/gstack symlink.
Switched to Bun.Glob with followSymlinks:false.
2. Generated SKILL.md files now scanned (not just .tmpl sources).
3. New test: codex review commands must not use inline git rev-parse
(codex review doesn't support -C, so cd "$_REPO_ROOT" is the fix).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
641 lines
29 KiB
Cheetah
---
name: autoplan
preamble-tier: 3
version: 1.0.0
description: |
  Auto-review pipeline — reads the full CEO, design, and eng review skills from disk
  and runs them sequentially with auto-decisions using 6 decision principles. Surfaces
  taste decisions (close approaches, borderline scope, codex disagreements) at a final
  approval gate. One command, fully reviewed plan out.
  Use when asked to "auto review", "autoplan", "run all reviews", "review this plan
  automatically", or "make the decisions for me".
  Proactively suggest when the user has a plan file and wants to run the full review
  gauntlet without answering 15-30 intermediate questions.
benefits-from: [office-hours]
allowed-tools:
  - Bash
  - Read
  - Write
  - Edit
  - Glob
  - Grep
  - WebSearch
  - AskUserQuestion
---

{{PREAMBLE}}

{{BASE_BRANCH_DETECT}}

{{BENEFITS_FROM}}

# /autoplan — Auto-Review Pipeline

One command. Rough plan in, fully reviewed plan out.

/autoplan reads the full CEO, design, and eng review skill files from disk and follows
them at full depth — same rigor, same sections, same methodology as running each skill
manually. The only difference: intermediate AskUserQuestion calls are auto-decided using
the 6 principles below. Taste decisions (where reasonable people could disagree) are
surfaced at a final approval gate.

---

## The 6 Decision Principles

These rules auto-answer every intermediate question:

1. **Choose completeness** — Ship the whole thing. Pick the approach that covers more edge cases.
2. **Boil lakes** — Fix everything in the blast radius (files modified by this plan + direct importers). Auto-approve expansions that are in blast radius AND < 1 day CC effort (< 5 files, no new infra).
3. **Pragmatic** — If two options fix the same thing, pick the cleaner one. 5 seconds choosing, not 5 minutes.
4. **DRY** — Duplicates existing functionality? Reject. Reuse what exists.
5. **Explicit over clever** — 10-line obvious fix > 200-line abstraction. Pick what a new contributor reads in 30 seconds.
6. **Bias toward action** — Merge > review cycles > stale deliberation. Flag concerns but don't block.

**Conflict resolution (context-dependent tiebreakers):**
- **CEO phase:** P1 (completeness) + P2 (boil lakes) dominate.
- **Eng phase:** P5 (explicit) + P3 (pragmatic) dominate.
- **Design phase:** P5 (explicit) + P1 (completeness) dominate.

---

## Decision Classification

Every auto-decision is classified:

**Mechanical** — one clearly right answer. Auto-decide silently.
Examples: run codex (always yes), run evals (always yes), reduce scope on a complete plan (always no).

**Taste** — reasonable people could disagree. Auto-decide with recommendation, but surface at the final gate. Three natural sources:
1. **Close approaches** — top two are both viable with different tradeoffs.
2. **Borderline scope** — in blast radius but 3-5 files, or ambiguous radius.
3. **Codex disagreements** — codex recommends differently and has a valid point.

---

## Sequential Execution — MANDATORY

Phases MUST execute in strict order: CEO → Design → Eng.
Each phase MUST complete fully before the next begins.
NEVER run phases in parallel — each builds on the previous.

Between each phase, emit a phase-transition summary and verify that all required
outputs from the prior phase are written before starting the next.

---

## What "Auto-Decide" Means

Auto-decide replaces the USER'S judgment with the 6 principles. It does NOT replace
the ANALYSIS. Every section in the loaded skill files must still be executed at the
same depth as the interactive version. The only thing that changes is who answers the
AskUserQuestion: you do, using the 6 principles, instead of the user.

**You MUST still:**
- READ the actual code, diffs, and files each section references
- PRODUCE every output the section requires (diagrams, tables, registries, artifacts)
- IDENTIFY every issue the section is designed to catch
- DECIDE each issue using the 6 principles (instead of asking the user)
- LOG each decision in the audit trail
- WRITE all required artifacts to disk

**You MUST NOT:**
- Compress a review section into a one-liner table row
- Write "no issues found" without showing what you examined
- Skip a section because "it doesn't apply" without stating what you checked and why
- Produce a summary instead of the required output (e.g., "architecture looks good"
  instead of the ASCII dependency graph the section requires)

"No issues found" is a valid output for a section — but only after doing the analysis.
State what you examined and why nothing was flagged (1-2 sentences minimum).
"Skipped" is never valid for a non-skip-listed section.

---

## Phase 0: Intake + Restore Point

### Step 1: Capture restore point

Before doing anything, save the plan file's current state to an external file:

```bash
{{SLUG_SETUP}}
BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null | tr '/' '-')
DATETIME=$(date +%Y%m%d-%H%M%S)
echo "RESTORE_PATH=$HOME/.gstack/projects/$SLUG/${BRANCH}-autoplan-restore-${DATETIME}.md"
```

Write the plan file's full contents to the restore path with this header:
```
# /autoplan Restore Point
Captured: [timestamp] | Branch: [branch] | Commit: [short hash]

## Re-run Instructions
1. Copy "Original Plan State" below back to your plan file
2. Invoke /autoplan

## Original Plan State
[verbatim plan file contents]
```

Then prepend a one-line HTML comment to the plan file:
`<!-- /autoplan restore point: [RESTORE_PATH] -->`
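
Taken end to end, Step 1 can be sketched as a single script. This is a sketch, not the skill's literal implementation: `PLAN_FILE`, its contents, and `SLUG` are illustrative stand-ins (`SLUG` is normally set by `{{SLUG_SETUP}}`).

```bash
# Sketch of Step 1. PLAN_FILE, its contents, and SLUG are stand-ins.
cd "$(mktemp -d)"
PLAN_FILE="plan.md"
printf 'Rough plan: add a dashboard.\n' > "$PLAN_FILE"
SLUG="demo"

BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null | tr '/' '-')
DATETIME=$(date +%Y%m%d-%H%M%S)
RESTORE_PATH="$HOME/.gstack/projects/$SLUG/${BRANCH}-autoplan-restore-${DATETIME}.md"
mkdir -p "$(dirname "$RESTORE_PATH")"

# Restore file: header, re-run instructions, then the verbatim plan.
{
  echo "# /autoplan Restore Point"
  echo "Captured: $(date) | Branch: $BRANCH | Commit: $(git rev-parse --short HEAD 2>/dev/null)"
  echo
  echo "## Re-run Instructions"
  echo "1. Copy \"Original Plan State\" below back to your plan file"
  echo "2. Invoke /autoplan"
  echo
  echo "## Original Plan State"
  cat "$PLAN_FILE"
} > "$RESTORE_PATH"

# Prepend the one-line restore-point marker to the plan file.
printf '<!-- /autoplan restore point: %s -->\n%s\n' "$RESTORE_PATH" "$(cat "$PLAN_FILE")" > "$PLAN_FILE"
```

The command substitution in the final `printf` expands before the redirection truncates the file, so the original contents survive the rewrite.
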

### Step 2: Read context

- Read CLAUDE.md, TODOS.md, git log -30, git diff against the base branch --stat
- Discover design docs: `ls -t ~/.gstack/projects/$SLUG/*-design-*.md 2>/dev/null | head -1`
- Detect UI scope: grep the plan for view/rendering terms (component, screen, form,
  button, modal, layout, dashboard, sidebar, nav, dialog). Require 2+ matches. Exclude
  false positives ("page" alone, "UI" in acronyms).

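The UI-scope heuristic can be sketched as a grep count. The plan file and its contents here are stand-ins; the term list and the 2+ threshold come from this step, while the false-positive exclusions are left out of the sketch.

```bash
# Sketch of the UI-scope check. The plan file is a stand-in.
cd "$(mktemp -d)"
printf 'Add a dashboard with a sidebar nav.\n' > plan.md
UI_TERMS='component|screen|form|button|modal|layout|dashboard|sidebar|nav|dialog'
MATCHES=$(grep -oiE "\b($UI_TERMS)\b" plan.md | wc -l | tr -d ' ')
if [ "${MATCHES:-0}" -ge 2 ]; then
  echo "UI scope: yes ($MATCHES matches)"
else
  echo "UI scope: no"
fi
```

`grep -o` emits one line per match, so `wc -l` counts term hits rather than matching lines.
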
### Step 3: Load skill files from disk

Read each file using the Read tool:
- `~/.claude/skills/gstack/plan-ceo-review/SKILL.md`
- `~/.claude/skills/gstack/plan-design-review/SKILL.md` (only if UI scope detected)
- `~/.claude/skills/gstack/plan-eng-review/SKILL.md`

**Section skip list — when following a loaded skill file, SKIP these sections
(they are already handled by /autoplan):**
- Preamble (run first)
- AskUserQuestion Format
- Completeness Principle — Boil the Lake
- Search Before Building
- Contributor Mode
- Completion Status Protocol
- Telemetry (run last)
- Step 0: Detect base branch
- Review Readiness Dashboard
- Plan File Review Report
- Prerequisite Skill Offer (BENEFITS_FROM)
- Outside Voice — Independent Plan Challenge
- Design Outside Voices (parallel)

Follow ONLY the review-specific methodology, sections, and required outputs.

Output: "Here's what I'm working with: [plan summary]. UI scope: [yes/no].
Loaded review skills from disk. Starting full review pipeline with auto-decisions."

---

## Phase 1: CEO Review (Strategy & Scope)

Follow plan-ceo-review/SKILL.md — all sections, full depth.
Override: every AskUserQuestion → auto-decide using the 6 principles.

**Override rules:**
- Mode selection: SELECTIVE EXPANSION
- Premises: accept reasonable ones (P6), challenge only clearly wrong ones
- **GATE: Present premises to user for confirmation** — this is the ONE AskUserQuestion
  that is NOT auto-decided. Premises require human judgment.
- Alternatives: pick highest completeness (P1). If tied, pick simplest (P5).
  If top 2 are close → mark TASTE DECISION.
- Scope expansion: in blast radius + <1d CC → approve (P2). Outside → defer to TODOS.md (P3).
  Duplicates → reject (P4). Borderline (3-5 files) → mark TASTE DECISION.
- All 10 review sections: run fully, auto-decide each issue, log every decision.
- Dual voices: always run BOTH Claude subagent AND Codex if available (P6).
  Run them simultaneously (Agent tool for subagent, Bash for Codex).

**Codex CEO voice** (via Bash):
```bash
_REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; }
codex exec "You are a CEO/founder advisor reviewing a development plan.
Challenge the strategic foundations: Are the premises valid or assumed? Is this the
right problem to solve, or is there a reframing that would be 10x more impactful?
What alternatives were dismissed too quickly? What competitive or market risks are
unaddressed? What scope decisions will look foolish in 6 months? Be adversarial.
No compliments. Just the strategic blind spots.
File: <plan_path>" -C "$_REPO_ROOT" -s read-only --enable web_search_cached
```
Timeout: 10 minutes

**Claude CEO subagent** (via Agent tool):
"Read the plan file at <plan_path>. You are an independent CEO/strategist
reviewing this plan. You have NOT seen any prior review. Evaluate:
1. Is this the right problem to solve? Could a reframing yield 10x impact?
2. Are the premises stated or just assumed? Which ones could be wrong?
3. What's the 6-month regret scenario — what will look foolish?
4. What alternatives were dismissed without sufficient analysis?
5. What's the competitive risk — could someone else solve this first/better?
For each finding: what's wrong, severity (critical/high/medium), and the fix."

**Error handling:** All non-blocking. Codex auth/timeout/empty → proceed with
Claude subagent only, tagged `[subagent-only]`. If Claude subagent also fails →
"Outside voices unavailable — continuing with primary review."

**Degradation matrix:** Both fail → "single-reviewer mode". Codex only →
tag `[codex-only]`. Subagent only → tag `[subagent-only]`.

- Strategy choices: if codex disagrees with a premise or scope decision with valid
  strategic reason → TASTE DECISION.

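The degradation matrix reduces to a four-way case on which voices returned output. This is a sketch with hypothetical availability flags; in practice availability is known from whether each call succeeded.

```bash
CODEX_OK=1      # hypothetical flag: Codex call returned output
SUBAGENT_OK=0   # hypothetical flag: Claude subagent call failed
case "${CODEX_OK}${SUBAGENT_OK}" in
  11) TAG="" ;;                        # both voices ran, no tag needed
  10) TAG="[codex-only]" ;;
  01) TAG="[subagent-only]" ;;
  00) TAG="[single-reviewer mode]" ;;
esac
echo "tag: ${TAG:-none}"
```
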
**Required execution checklist (CEO):**

Step 0 (0A-0F) — run each sub-step and produce:
- 0A: Premise challenge with specific premises named and evaluated
- 0B: Existing code leverage map (sub-problems → existing code)
- 0C: Dream state diagram (CURRENT → THIS PLAN → 12-MONTH IDEAL)
- 0C-bis: Implementation alternatives table (2-3 approaches with effort/risk/pros/cons)
- 0D: Mode-specific analysis with scope decisions logged
- 0E: Temporal interrogation (HOUR 1 → HOUR 6+)
- 0F: Mode selection confirmation

Step 0.5 (Dual Voices): Run Claude subagent AND Codex simultaneously. Present
Codex output under CODEX SAYS (CEO — strategy challenge) header. Present subagent
output under CLAUDE SUBAGENT (CEO — strategic independence) header. Produce CEO
consensus table:

```
CEO DUAL VOICES — CONSENSUS TABLE:
═══════════════════════════════════════════════════════════════
Dimension                               Claude   Codex    Consensus
──────────────────────────────────────  ───────  ───────  ─────────
1. Premises valid?                      —        —        —
2. Right problem to solve?              —        —        —
3. Scope calibration correct?           —        —        —
4. Alternatives sufficiently explored?  —        —        —
5. Competitive/market risks covered?    —        —        —
6. 6-month trajectory sound?            —        —        —
═══════════════════════════════════════════════════════════════
CONFIRMED = both agree. DISAGREE = models differ (→ taste decision).
Missing voice = N/A (not CONFIRMED). Single critical finding from one voice = flagged regardless.
```

Sections 1-10 — for EACH section, run the evaluation criteria from the loaded skill file:
- Sections WITH findings: full analysis, auto-decide each issue, log to audit trail
- Sections with NO findings: 1-2 sentences stating what was examined and why nothing
  was flagged. NEVER compress a section to just its name in a table row.
- Section 11 (Design): run only if UI scope was detected in Phase 0

**Mandatory outputs from Phase 1:**
- "NOT in scope" section with deferred items and rationale
- "What already exists" section mapping sub-problems to existing code
- Error & Rescue Registry table (from Section 2)
- Failure Modes Registry table (from review sections)
- Dream state delta (where this plan leaves us vs 12-month ideal)
- Completion Summary (the full summary table from the CEO skill)

**PHASE 1 COMPLETE.** Emit phase-transition summary:
> **Phase 1 complete.** Codex: [N concerns]. Claude subagent: [N issues].
> Consensus: [X/6 confirmed, Y disagreements → surfaced at gate].
> Passing to Phase 2.

Do NOT begin Phase 2 until all Phase 1 outputs are written to the plan file
and the premise gate has been passed.

---

**Pre-Phase 2 checklist (verify before starting):**
- [ ] CEO completion summary written to plan file
- [ ] CEO dual voices ran (Codex + Claude subagent, or noted unavailable)
- [ ] CEO consensus table produced
- [ ] Premise gate passed (user confirmed)
- [ ] Phase-transition summary emitted

## Phase 2: Design Review (conditional — skip if no UI scope)

Follow plan-design-review/SKILL.md — all 7 dimensions, full depth.
Override: every AskUserQuestion → auto-decide using the 6 principles.

**Override rules:**
- Focus areas: all relevant dimensions (P1)
- Structural issues (missing states, broken hierarchy): auto-fix (P5)
- Aesthetic/taste issues: mark TASTE DECISION
- Design system alignment: auto-fix if DESIGN.md exists and fix is obvious
- Dual voices: always run BOTH Claude subagent AND Codex if available (P6).

**Codex design voice** (via Bash):
```bash
_REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; }
codex exec "Read the plan file at <plan_path>. Evaluate this plan's
UI/UX design decisions.

Also consider these findings from the CEO review phase:
<insert CEO dual voice findings summary — key concerns, disagreements>

Does the information hierarchy serve the user or the developer? Are interaction
states (loading, empty, error, partial) specified or left to the implementer's
imagination? Is the responsive strategy intentional or an afterthought? Are
accessibility requirements (keyboard nav, contrast, touch targets) specified or
aspirational? Does the plan describe specific UI decisions or generic patterns?
What design decisions will haunt the implementer if left ambiguous?
Be opinionated. No hedging." -C "$_REPO_ROOT" -s read-only --enable web_search_cached
```
Timeout: 10 minutes

**Claude design subagent** (via Agent tool):
"Read the plan file at <plan_path>. You are an independent senior product designer
reviewing this plan. You have NOT seen any prior review. Evaluate:
1. Information hierarchy: what does the user see first, second, third? Is it right?
2. Missing states: loading, empty, error, success, partial — which are unspecified?
3. User journey: what's the emotional arc? Where does it break?
4. Specificity: does the plan describe SPECIFIC UI or generic patterns?
5. What design decisions will haunt the implementer if left ambiguous?
For each finding: what's wrong, severity (critical/high/medium), and the fix."
NO prior-phase context — subagent must be truly independent.

Error handling: same as Phase 1 (non-blocking, degradation matrix applies).

- Design choices: if codex disagrees with a design decision with valid UX reasoning
  → TASTE DECISION.

**Required execution checklist (Design):**

1. Step 0 (Design Scope): Rate completeness 0-10. Check DESIGN.md. Map existing patterns.

2. Step 0.5 (Dual Voices): Run Claude subagent AND Codex simultaneously. Present under
   CODEX SAYS (design — UX challenge) and CLAUDE SUBAGENT (design — independent review)
   headers. Produce the design litmus scorecard (consensus table), using the litmus
   scorecard format from plan-design-review. Include CEO phase findings in the Codex
   prompt ONLY (not the Claude subagent — it stays independent).

3. Passes 1-7: Run each from the loaded skill. Rate 0-10. Auto-decide each issue.
   DISAGREE items from the scorecard → raised in the relevant pass with both perspectives.

**PHASE 2 COMPLETE.** Emit phase-transition summary:
> **Phase 2 complete.** Codex: [N concerns]. Claude subagent: [N issues].
> Consensus: [X/Y confirmed, Z disagreements → surfaced at gate].
> Passing to Phase 3.

Do NOT begin Phase 3 until all Phase 2 outputs (if run) are written to the plan file.

---

**Pre-Phase 3 checklist (verify before starting):**
- [ ] All Phase 1 items above confirmed
- [ ] Design completion summary written (or "skipped, no UI scope")
- [ ] Design dual voices ran (if Phase 2 ran)
- [ ] Design consensus table produced (if Phase 2 ran)
- [ ] Phase-transition summary emitted

## Phase 3: Eng Review + Dual Voices

Follow plan-eng-review/SKILL.md — all sections, full depth.
Override: every AskUserQuestion → auto-decide using the 6 principles.

**Override rules:**
- Scope challenge: never reduce (P2)
- Dual voices: always run BOTH Claude subagent AND Codex if available (P6).

**Codex eng voice** (via Bash):
```bash
_REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; }
codex exec "Review this plan for architectural issues, missing edge cases,
and hidden complexity. Be adversarial.

Also consider these findings from prior review phases:
CEO: <insert CEO consensus table summary — key concerns, DISAGREEs>
Design: <insert Design consensus table summary, or 'skipped, no UI scope'>

File: <plan_path>" -C "$_REPO_ROOT" -s read-only --enable web_search_cached
```
Timeout: 10 minutes

**Claude eng subagent** (via Agent tool):
"Read the plan file at <plan_path>. You are an independent senior engineer
reviewing this plan. You have NOT seen any prior review. Evaluate:
1. Architecture: Is the component structure sound? Coupling concerns?
2. Edge cases: What breaks under 10x load? What's the nil/empty/error path?
3. Tests: What's missing from the test plan? What would break at 2am Friday?
4. Security: New attack surface? Auth boundaries? Input validation?
5. Hidden complexity: What looks simple but isn't?
For each finding: what's wrong, severity, and the fix."
NO prior-phase context — subagent must be truly independent.

Error handling: same as Phase 1 (non-blocking, degradation matrix applies).

- Architecture choices: explicit over clever (P5). If codex disagrees with valid reason → TASTE DECISION.
- Evals: always include all relevant suites (P1)
- Test plan: generate artifact at `~/.gstack/projects/$SLUG/{user}-{branch}-test-plan-{datetime}.md`
- TODOS.md: collect all deferred scope expansions from Phase 1, auto-write

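The test plan artifact path can be assembled from the same variables used elsewhere in this skill. A sketch: `SLUG` is a stand-in (normally set by `{{SLUG_SETUP}}`), and `{user}` is read here as the OS username, which is an assumption.

```bash
SLUG="demo"                                   # stand-in; normally from {{SLUG_SETUP}}
USER_NAME=$(id -un)                           # assumption: {user} means the OS username
BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null | tr '/' '-')
DATETIME=$(date +%Y%m%d-%H%M%S)
TEST_PLAN_PATH="$HOME/.gstack/projects/$SLUG/${USER_NAME}-${BRANCH}-test-plan-${DATETIME}.md"
echo "$TEST_PLAN_PATH"
```
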
**Required execution checklist (Eng):**

1. Step 0 (Scope Challenge): Read actual code referenced by the plan. Map each
   sub-problem to existing code. Run the complexity check. Produce concrete findings.

2. Step 0.5 (Dual Voices): Run Claude subagent AND Codex simultaneously. Present
   Codex output under CODEX SAYS (eng — architecture challenge) header. Present subagent
   output under CLAUDE SUBAGENT (eng — independent review) header. Produce eng consensus
   table:

```
ENG DUAL VOICES — CONSENSUS TABLE:
═══════════════════════════════════════════════════════════════
Dimension                               Claude   Codex    Consensus
──────────────────────────────────────  ───────  ───────  ─────────
1. Architecture sound?                  —        —        —
2. Test coverage sufficient?            —        —        —
3. Performance risks addressed?         —        —        —
4. Security threats covered?            —        —        —
5. Error paths handled?                 —        —        —
6. Deployment risk manageable?          —        —        —
═══════════════════════════════════════════════════════════════
CONFIRMED = both agree. DISAGREE = models differ (→ taste decision).
Missing voice = N/A (not CONFIRMED). Single critical finding from one voice = flagged regardless.
```

3. Section 1 (Architecture): Produce ASCII dependency graph showing new components
   and their relationships to existing ones. Evaluate coupling, scaling, security.

4. Section 2 (Code Quality): Identify DRY violations, naming issues, complexity.
   Reference specific files and patterns. Auto-decide each finding.

5. **Section 3 (Test Review) — NEVER SKIP OR COMPRESS.**
   This section requires reading actual code, not summarizing from memory.
   - Read the diff or the plan's affected files
   - Build the test diagram: list every NEW UX flow, data flow, codepath, and branch
   - For EACH item in the diagram: what type of test covers it? Does one exist? Gaps?
   - For LLM/prompt changes: which eval suites must run?
   - Auto-deciding test gaps means: identify the gap → decide whether to add a test
     or defer (with rationale and principle) → log the decision. It does NOT mean
     skipping the analysis.
   - Write the test plan artifact to disk

6. Section 4 (Performance): Evaluate N+1 queries, memory, caching, slow paths.

**Mandatory outputs from Phase 3:**
- "NOT in scope" section
- "What already exists" section
- Architecture ASCII diagram (Section 1)
- Test diagram mapping codepaths to coverage (Section 3)
- Test plan artifact written to disk (Section 3)
- Failure modes registry with critical gap flags
- Completion Summary (the full summary from the Eng skill)
- TODOS.md updates (collected from all phases)

---

## Decision Audit Trail

After each auto-decision, append a row to the plan file using Edit:

```markdown
<!-- AUTONOMOUS DECISION LOG -->
## Decision Audit Trail

| # | Phase | Decision | Principle | Rationale | Rejected |
|---|-------|----------|-----------|-----------|----------|
```

Write one row per decision incrementally (via Edit). This keeps the audit on disk,
not accumulated in conversation context.

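A row append can be sketched in shell (the Edit tool does the equivalent in place; the header seed, plan path, and row values here are illustrative):

```bash
cd "$(mktemp -d)"
PLAN_FILE="plan.md"   # stand-in plan path
# Seed the log header once (normally already present in the plan file).
printf '%s\n' \
  '<!-- AUTONOMOUS DECISION LOG -->' \
  '## Decision Audit Trail' \
  '' \
  '| # | Phase | Decision | Principle | Rationale | Rejected |' \
  '|---|-------|----------|-----------|-----------|----------|' > "$PLAN_FILE"
# One row per decision, appended incrementally.
printf '| 1 | CEO | Approve scope expansion | P2 | In blast radius, <1 day effort | Defer alternatives to TODOS.md |\n' >> "$PLAN_FILE"
```
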
---

## Pre-Gate Verification

Before presenting the Final Approval Gate, verify that required outputs were actually
produced. Check the plan file and conversation for each item.

**Phase 1 (CEO) outputs:**
- [ ] Premise challenge with specific premises named (not just "premises accepted")
- [ ] All applicable review sections have findings OR explicit "examined X, nothing flagged"
- [ ] Error & Rescue Registry table produced (or noted N/A with reason)
- [ ] Failure Modes Registry table produced (or noted N/A with reason)
- [ ] "NOT in scope" section written
- [ ] "What already exists" section written
- [ ] Dream state delta written
- [ ] Completion Summary produced
- [ ] Dual voices ran (Codex + Claude subagent, or noted unavailable)
- [ ] CEO consensus table produced

**Phase 2 (Design) outputs — only if UI scope detected:**
- [ ] All 7 dimensions evaluated with scores
- [ ] Issues identified and auto-decided
- [ ] Dual voices ran (or noted unavailable/skipped with phase)
- [ ] Design litmus scorecard produced

**Phase 3 (Eng) outputs:**
- [ ] Scope challenge with actual code analysis (not just "scope is fine")
- [ ] Architecture ASCII diagram produced
- [ ] Test diagram mapping codepaths to test coverage
- [ ] Test plan artifact written to disk at ~/.gstack/projects/$SLUG/
- [ ] "NOT in scope" section written
- [ ] "What already exists" section written
- [ ] Failure modes registry with critical gap assessment
- [ ] Completion Summary produced
- [ ] Dual voices ran (Codex + Claude subagent, or noted unavailable)
- [ ] Eng consensus table produced

**Cross-phase:**
- [ ] Cross-phase themes section written

**Audit trail:**
- [ ] Decision Audit Trail has at least one row per auto-decision (not empty)

If ANY checkbox above is missing, go back and produce the missing output. Max 2
retries — if an item is still missing after retrying twice, proceed to the gate with
a warning noting which items are incomplete. Do not loop indefinitely.

---

## Phase 4: Final Approval Gate

**STOP here and present the final state to the user.**

Present as a message, then use AskUserQuestion:

```
## /autoplan Review Complete

### Plan Summary
[1-3 sentence summary]

### Decisions Made: [N] total ([M] auto-decided, [K] choices for you)

### Your Choices (taste decisions)
[For each taste decision:]
**Choice [N]: [title]** (from [phase])
I recommend [X] — [principle]. But [Y] is also viable:
[1-sentence downstream impact if you pick Y]

### Auto-Decided: [M] decisions [see Decision Audit Trail in plan file]

### Review Scores
- CEO: [summary]
- CEO Voices: Codex [summary], Claude subagent [summary], Consensus [X/6 confirmed]
- Design: [summary or "skipped, no UI scope"]
- Design Voices: Codex [summary], Claude subagent [summary], Consensus [X/7 confirmed] (or "skipped")
- Eng: [summary]
- Eng Voices: Codex [summary], Claude subagent [summary], Consensus [X/6 confirmed]

### Cross-Phase Themes
[For any concern that appeared in 2+ phases' dual voices independently:]
**Theme: [topic]** — flagged in [Phase 1, Phase 3]. High-confidence signal.
[If no themes span phases:] "No cross-phase themes — each phase's concerns were distinct."

### Deferred to TODOS.md
[Items auto-deferred with reasons]
```

**Cognitive load management:**
- 0 taste decisions: skip "Your Choices" section
- 1-7 taste decisions: flat list
- 8+: group by phase. Add warning: "This plan had unusually high ambiguity ([N] taste decisions). Review carefully."

AskUserQuestion options:
- A) Approve as-is (accept all recommendations)
- B) Approve with overrides (specify which taste decisions to change)
- C) Interrogate (ask about any specific decision)
- D) Revise (the plan itself needs changes)
- E) Reject (start over)

**Option handling:**
- A: mark APPROVED, write review logs, suggest /ship
- B: ask which overrides, apply, re-present gate
- C: answer freeform, re-present gate
- D: make changes, re-run affected phases (scope→1B, design→2, test plan→3, arch→3). Max 3 cycles.
- E: start over

---

## Completion: Write Review Logs

On approval, write 3 separate review log entries so /ship's dashboard recognizes them.
Replace TIMESTAMP, STATUS, and N with actual values from each review phase.
STATUS is "clean" if no unresolved issues, "issues_open" otherwise.

```bash
COMMIT=$(git rev-parse --short HEAD 2>/dev/null)
TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ)

~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"plan-ceo-review","timestamp":"'"$TIMESTAMP"'","status":"STATUS","unresolved":N,"critical_gaps":N,"mode":"SELECTIVE_EXPANSION","via":"autoplan","commit":"'"$COMMIT"'"}'

~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"plan-eng-review","timestamp":"'"$TIMESTAMP"'","status":"STATUS","unresolved":N,"critical_gaps":N,"issues_found":N,"mode":"FULL_REVIEW","via":"autoplan","commit":"'"$COMMIT"'"}'
```

If Phase 2 ran (UI scope):
```bash
~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"plan-design-review","timestamp":"'"$TIMESTAMP"'","status":"STATUS","unresolved":N,"via":"autoplan","commit":"'"$COMMIT"'"}'
```

Dual voice logs (one per phase that ran):
```bash
~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"autoplan-voices","timestamp":"'"$TIMESTAMP"'","status":"STATUS","source":"SOURCE","phase":"ceo","via":"autoplan","consensus_confirmed":N,"consensus_disagree":N,"commit":"'"$COMMIT"'"}'

~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"autoplan-voices","timestamp":"'"$TIMESTAMP"'","status":"STATUS","source":"SOURCE","phase":"eng","via":"autoplan","consensus_confirmed":N,"consensus_disagree":N,"commit":"'"$COMMIT"'"}'
```

If Phase 2 ran (UI scope), also log:
```bash
~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"autoplan-voices","timestamp":"'"$TIMESTAMP"'","status":"STATUS","source":"SOURCE","phase":"design","via":"autoplan","consensus_confirmed":N,"consensus_disagree":N,"commit":"'"$COMMIT"'"}'
```

SOURCE = "codex+subagent", "codex-only", "subagent-only", or "unavailable".
Replace N values with actual consensus counts from the tables.

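Filling those placeholders could look like this. A sketch: STATUS, SOURCE, and the counts are illustrative values that would come from the consensus tables, and the resulting payload would be passed to `gstack-review-log` exactly as in the blocks above.

```bash
STATUS="clean"; SOURCE="codex+subagent"   # illustrative values
CONFIRMED=5; DISAGREE=1                   # illustrative counts from the CEO table
COMMIT=$(git rev-parse --short HEAD 2>/dev/null)
TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ)
PAYLOAD=$(printf '{"skill":"autoplan-voices","timestamp":"%s","status":"%s","source":"%s","phase":"ceo","via":"autoplan","consensus_confirmed":%d,"consensus_disagree":%d,"commit":"%s"}' \
  "$TIMESTAMP" "$STATUS" "$SOURCE" "$CONFIRMED" "$DISAGREE" "$COMMIT")
echo "$PAYLOAD"
# Then: ~/.claude/skills/gstack/bin/gstack-review-log "$PAYLOAD"
```
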
Suggest next step: `/ship` when ready to create the PR.

---

## Important Rules

- **Never abort.** The user chose /autoplan. Respect that choice. Surface all taste decisions, never redirect to interactive review.
- **Premises are the one gate.** The only non-auto-decided AskUserQuestion is the premise confirmation in Phase 1.
- **Log every decision.** No silent auto-decisions. Every choice gets a row in the audit trail.
- **Full depth means full depth.** Do not compress or skip sections from the loaded skill files (except the skip list in Phase 0). "Full depth" means: read the code the section asks you to read, produce the outputs the section requires, identify every issue, and decide each one. A one-sentence summary of a section is not "full depth" — it is a skip. If you catch yourself writing fewer than 3 sentences for any review section, you are likely compressing.
- **Artifacts are deliverables.** Test plan artifact, failure modes registry, error/rescue table, ASCII diagrams — these must exist on disk or in the plan file when the review completes. If they don't exist, the review is incomplete.
- **Sequential order.** CEO → Design → Eng. Each phase builds on the last.