Mirror of https://github.com/garrytan/gstack.git, synced 2026-05-07. Commit 7ff0f84b1e:
* refactor: extract {{TEST_COVERAGE_AUDIT}} shared resolver
DRY extraction of the test coverage audit methodology into a shared
generator function with three explicit placeholders:
- TEST_COVERAGE_AUDIT_PLAN (plan-eng-review)
- TEST_COVERAGE_AUDIT_SHIP (ship)
- TEST_COVERAGE_AUDIT_REVIEW (review)
Shared across all modes: codepath tracing, ASCII diagram format,
quality scoring rubric, E2E test decision matrix, regression rule,
and test framework detection via CLAUDE.md.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor: plan-eng-review uses shared test coverage audit
Replace the thin 6-line Section 3 test review with the full shared
methodology via {{TEST_COVERAGE_AUDIT_PLAN}}. Plan mode now:
- Traces every codepath with full ASCII diagrams
- Adds missing tests to the plan (not just "check for tests")
- Writes test plan artifact for /qa consumption
- Includes E2E/eval recommendations and regression detection
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor: ship uses shared test coverage audit
Replace 135 lines of inline Step 3.4 methodology with
{{TEST_COVERAGE_AUDIT_SHIP}}. Functionally identical output plus:
- E2E test decision matrix (marks paths needing E2E vs unit)
- Eval recommendations for LLM prompt changes
- Regression detection iron rule
- Test framework detection via CLAUDE.md first
- Test plan artifact for /qa consumption
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: /review Step 4.75 test coverage diagram
Add codepath tracing to the pre-landing review via
{{TEST_COVERAGE_AUDIT_REVIEW}}. Review mode:
- Produces ASCII coverage diagram (same methodology as plan/ship)
- Generates tests for gaps via Fix-First (ASK user)
- Subsumes Pass 2 "Test Gaps" checklist category
- Gaps are INFORMATIONAL findings
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* test: mode differentiation + regression guard for coverage audit
10 new tests verifying the three TEST_COVERAGE_AUDIT placeholders:
- All modes share: codepath tracing, E2E matrix, regression rule
- Plan mode: adds to plan + artifact, no ship-specific content
- Ship mode: auto-generates + before/after count + coverage summary
- Review mode: Fix-First ASK + INFORMATIONAL, no artifact
- Regression guard: ship SKILL.md preserves all key phrases
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* test: extract shared coverage audit fixture + review E2E
- Extract billing.ts fixture into coverage-audit-fixture.ts (DRY)
- Refactor ship-coverage-audit E2E to use shared fixture
- Add review-coverage-audit E2E for Step 4.75
- Update touchfiles: both E2Es depend on shared fixture
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: strengthen E2E assertions for coverage audit tests
The coverage audit E2E tests (ship + review) were only asserting
exitReason === 'success' and readCalls > 0 — they passed even
if the agent produced no coverage diagram. Add assertion that
the output contains either GAP or TESTED markers.
Found during /review.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: plan mode traces the plan, not the git diff
Codex adversarial review caught that plan-eng-review was inheriting
"git diff origin/<base>...HEAD" from the shared resolver, but plan mode
reviews a plan document, not a code diff. Plan mode now says:
"Trace every codepath in the plan" and "Read the plan document."
Ship and review modes keep the git diff instruction.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* chore: bump version and changelog (v0.9.5.0)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: test coverage catalog + failure triage (merged branches) (#285)
* feat: add bin/gstack-repo-mode — solo vs collaborative detection with caching
Detects whether a repo is solo-dev (one person authored 80%+ of recent commits)
or collaborative. Uses a 90-day git shortlog window with a 7-day cache in
~/.gstack/projects/{SLUG}/repo-mode.json. Config override via
`gstack-config set repo_mode solo|collaborative` takes precedence over
the heuristic. Minimum 5 commits required to classify (otherwise unknown).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
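For concreteness, a minimal sketch of the heuristic this commit describes (90-day window, 80% threshold, 5-commit minimum). The real bin/gstack-repo-mode adds caching and config overrides and may differ in detail:
```bash
#!/usr/bin/env bash
# Sketch: classify a repo as solo vs collaborative from recent commit authorship.
set -euo pipefail

WINDOW="90 days ago"
MIN_COMMITS=5

# Commit counts per author over the window. Sampling the default branch is more
# representative than HEAD (see the later sampling-bias fix in this log).
counts=$(git shortlog -sn --since="$WINDOW" HEAD | awk '{print $1}')
total=$(echo "$counts" | awk '{s+=$1} END {print s+0}')
top=$(echo "$counts" | head -1)
top=${top:-0}

if [ "$total" -lt "$MIN_COMMITS" ]; then
  echo "REPO_MODE=unknown"
elif [ $((top * 100)) -ge $((total * 80)) ]; then
  echo "REPO_MODE=solo"
else
  echo "REPO_MODE=collaborative"
fi
```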
* feat: test failure ownership triage — see something say something
Adds two new preamble sections to all gstack skills:
- Repo Ownership Mode: explains solo vs collaborative behavior
- See Something, Say Something: proactive issue flagging principle
Adds {{TEST_FAILURE_TRIAGE}} template variable (opt-in, used by /ship):
- Classifies test failures as in-branch vs pre-existing
- Solo mode defaults to "investigate and fix now"
- Collaborative mode offers "blame + assign GitHub issue" option
- Also offers P0 TODO and skip options
/ship Step 3 now triages test failures instead of hard-stopping on all
failures. In-branch failures still block shipping. Pre-existing failures
get user-directed triage based on repo mode.
Adds P2 TODO for gstack notes system (deferred lightweight reminder).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
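One way to make the in-branch vs pre-existing split mechanical (a hedged sketch; FAILING_FILE and the base ref are hypothetical inputs, not the skill's actual variables):
```bash
# Sketch: classify a failing test as in-branch or pre-existing by checking
# whether the failing test file was touched on this branch.
FAILING_FILE="test/billing.test.js"   # hypothetical failing test
BASE="origin/main"                    # hypothetical base branch

if git diff --name-only "$BASE"...HEAD | grep -qxF "$FAILING_FILE"; then
  echo "in-branch failure: blocks shipping"
else
  echo "pre-existing failure: triage per repo mode (fix now / blame + assign issue / P0 TODO / skip)"
fi
```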
* chore: regenerate SKILL.md files for Claude and Codex hosts
All 22 Claude skills and 21 Codex skills regenerated with new preamble
sections (Repo Ownership Mode, See Something Say Something) and
{{TEST_FAILURE_TRIAGE}} resolved in ship/SKILL.md.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: validate repo mode values to prevent shell injection
Codex adversarial review found that unvalidated config/cache values
could be injected into shell via source <(gstack-repo-mode). Added
validate_mode() that only allows solo|collaborative|unknown — anything
else becomes "unknown". Prevents persistent code execution through
malicious config.yaml or tampered cache JSON.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
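The allow-list is simple enough to sketch; this illustrates the described behavior and is not necessarily the shipped validate_mode():
```bash
# Sketch: only ever emit one of three known-safe values, so sourcing the
# helper's output cannot execute attacker-controlled text.
validate_mode() {
  case "$1" in
    solo|collaborative|unknown) printf '%s\n' "$1" ;;
    *) printf 'unknown\n' ;;
  esac
}

raw='solo; rm -rf ~'                        # e.g. a tampered cache or config value
echo "REPO_MODE=$(validate_mode "$raw")"    # prints REPO_MODE=unknown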
* fix: shell injection via branch names + feature-branch sampling bias
Codex code review found two issues:
P1: eval $(gstack-slug) in gstack-repo-mode executes branch names as
shell. Branch names like foo$(touch${IFS}pwned) are valid git refs and
would execute arbitrary commands. Fix: compute SLUG directly with sed
instead of eval'ing gstack-slug output.
P2: git shortlog HEAD only sees current branch history. On feature
branches that haven't merged main recently, other contributors disappear
from the sample. Fix: use git shortlog on the default branch
(origin/main) instead of HEAD.
Also improved blame lookup in collaborative triage to check both the
test file and the production code it covers.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
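To illustrate the class of bug and its fix (the slug rule below is hypothetical, not the shipped code):
```bash
# Risky pattern: eval'ing helper output lets a crafted branch name run as shell:
#   eval "$(gstack-slug)"   # a branch named 'foo$(touch${IFS}pwned)' would execute
# Safer pattern: build the value directly and sanitize it, never via eval.
BRANCH=$(git rev-parse --abbrev-ref HEAD)
SLUG=$(printf '%s' "$BRANCH" | sed 's|[^A-Za-z0-9._-]|-|g')   # hypothetical slug rule
echo "SLUG=$SLUG"
```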
* fix: broaden codex-host stripping test to accommodate triage section
"Investigate and fix" now appears in TEST_FAILURE_TRIAGE (not just the
Codex review step). Use CODEX_REVIEWS config string as a more specific
marker for detecting the Codex review step in Codex-hosted skills.
* fix: replace template placeholder in TODOS.md with readable text
{{TEST_FAILURE_TRIAGE}} is template syntax but TODOS.md is not processed
by gen-skill-docs — replaced with human-readable reference.
* chore: bump version and changelog (v0.9.5.0)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* docs: add bin/ directory to project structure in CLAUDE.md
* test: add triage resolver unit tests, plan-eng coverage audit E2E, and triage E2E
- TEST_FAILURE_TRIAGE resolver: 6 unit tests verifying all triage steps (T1-T4),
REPO_MODE branching, and safety default for ambiguous failures
- plan-eng-coverage-audit E2E: tests /plan-eng-review coverage audit codepath
(gap identified during eng review — existed on neither branch)
- ship-triage E2E: planted-bug fixture with in-branch (truncate null) and
pre-existing (divide-by-zero) failures; verifies correct classification
- Touchfile entries for diff-based test selection
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* chore: regenerate stale Codex SKILL.md for retro
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: gstack-repo-mode handles repos without origin remote
Split `git remote get-url origin` into a separate variable with `|| true`
so the script doesn't crash under `set -euo pipefail` in local-only repos.
Falls back to REPO_MODE=unknown gracefully.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: REPO_MODE defaults to unknown when helper emits nothing
Changed preamble from `source <(...) || REPO_MODE=unknown` (which doesn't
catch empty output) to `source <(...) || true` followed by
`REPO_MODE=${REPO_MODE:-unknown}`. Regenerated all SKILL.md files.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
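The resulting preamble pattern, per the commit text (the helper path is an assumption, inferred from the gstack-review-log path used elsewhere in this file):
```bash
# Don't rely on `|| REPO_MODE=unknown`: an exit status of 0 with empty output
# would leave REPO_MODE unset. Source, tolerate failure, then default explicitly.
source <(~/.claude/skills/gstack/bin/gstack-repo-mode 2>/dev/null) || true
REPO_MODE=${REPO_MODE:-unknown}
echo "REPO_MODE=$REPO_MODE"
```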
* fix: triage E2E runs both test files in subprocesses
math.test.js called process.exit(1), which killed the runner before
string.test.js could execute. Changed test runner to use child_process
so each test runs independently and both failure classes are exercised.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: gstack-repo-mode handles repos without origin remote
Fall back through origin/main → origin/master → HEAD when
git symbolic-ref refs/remotes/origin/HEAD is not set. Prevents
shortlog crash in repos where origin/HEAD isn't configured.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
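A sketch of that fallback chain (variable names are hypothetical):
```bash
# Pick a ref to sample shortlog from: origin/HEAD target if configured,
# otherwise origin/main, then origin/master, finally the local HEAD.
REF=$(git symbolic-ref --quiet --short refs/remotes/origin/HEAD 2>/dev/null || true)
if [ -z "$REF" ]; then
  for candidate in origin/main origin/master HEAD; do
    if git rev-parse --verify --quiet "$candidate" >/dev/null; then
      REF=$candidate
      break
    fi
  done
fi
git shortlog -sn --since="90 days ago" "$REF"
```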
* fix: triage E2E runs both test files in subprocesses
Add assertions verifying both math.test.js (pre-existing failure) and
string.test.js (in-branch failure) actually executed during triage.
Prevents false passes where only one failure class is exercised.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: REPO_MODE defaults to unknown when helper emits nothing
- Remove head -20 truncation that biased solo classification by
dropping low-volume contributors from the denominator
- Use atomic write (mktemp + mv) for cache to prevent concurrent
preamble reads from seeing partial JSON
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
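The atomic-write pattern, sketched with a hypothetical cache path and JSON shape:
```bash
CACHE=~/.gstack/projects/demo/repo-mode.json   # hypothetical slug "demo"
mkdir -p "$(dirname "$CACHE")"

# Write to a temp file in the same directory, then rename: concurrent readers
# see either the old JSON or the new JSON, never a partially written file.
TMP=$(mktemp "$(dirname "$CACHE")/repo-mode.XXXXXX")
printf '{"mode":"%s","checked_at":"%s"}\n' "solo" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" > "$TMP"
mv "$TMP" "$CACHE"
```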
* docs: add test coverage catalog to CHANGELOG + update project structure
- CHANGELOG: add 6 entries for coverage audit, review Step 4.75, E2E
recommendations, regression iron rule, failure triage, repo-mode fix
- CLAUDE.md: add missing skill directories (autoplan, benchmark, canary,
codex, land-and-deploy, setup-deploy) to project structure
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* chore: bump version and changelog (v0.10.1.0)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: CHANGELOG rules — branch-scoped versions, never fold into old entries
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---
name: plan-eng-review
version: 1.0.0
description: |
  Eng manager-mode plan review. Lock in the execution plan — architecture,
  data flow, diagrams, edge cases, test coverage, performance. Walks through
  issues interactively with opinionated recommendations. Use when asked to
  "review the architecture", "engineering review", or "lock in the plan".
  Proactively suggest when the user has a plan or design doc and is about to
  start coding — to catch architecture issues before implementation.
benefits-from: [office-hours]
allowed-tools:
  - Read
  - Write
  - Grep
  - Glob
  - AskUserQuestion
  - Bash
  - WebSearch
---

{{PREAMBLE}}

# Plan Review Mode

Review this plan thoroughly before making any code changes. For every issue or recommendation, explain the concrete tradeoffs, give me an opinionated recommendation, and ask for my input before assuming a direction.

## Priority hierarchy

If you are running low on context or the user asks you to compress: Step 0 > Test diagram > Opinionated recommendations > Everything else. Never skip Step 0 or the test diagram.

## My engineering preferences (use these to guide your recommendations):

* DRY is important—flag repetition aggressively.
* Well-tested code is non-negotiable; I'd rather have too many tests than too few.
* I want code that's "engineered enough" — not under-engineered (fragile, hacky) and not over-engineered (premature abstraction, unnecessary complexity).
* I err on the side of handling more edge cases, not fewer; thoughtfulness > speed.
* Bias toward explicit over clever.
* Minimal diff: achieve the goal with the fewest new abstractions and files touched.

## Cognitive Patterns — How Great Eng Managers Think

These are not additional checklist items. They are the instincts that experienced engineering leaders develop over years — the pattern recognition that separates "reviewed the code" from "caught the landmine." Apply them throughout your review.

1. **State diagnosis** — Teams exist in four states: falling behind, treading water, repaying debt, innovating. Each demands a different intervention (Larson, An Elegant Puzzle).
2. **Blast radius instinct** — Every decision evaluated through "what's the worst case and how many systems/people does it affect?"
3. **Boring by default** — "Every company gets about three innovation tokens." Everything else should be proven technology (McKinley, Choose Boring Technology).
4. **Incremental over revolutionary** — Strangler fig, not big bang. Canary, not global rollout. Refactor, not rewrite (Fowler).
5. **Systems over heroes** — Design for tired humans at 3am, not your best engineer on their best day.
6. **Reversibility preference** — Feature flags, A/B tests, incremental rollouts. Make the cost of being wrong low.
7. **Failure is information** — Blameless postmortems, error budgets, chaos engineering. Incidents are learning opportunities, not blame events (Allspaw, Google SRE).
8. **Org structure IS architecture** — Conway's Law in practice. Design both intentionally (Skelton/Pais, Team Topologies).
9. **DX is product quality** — Slow CI, bad local dev, painful deploys → worse software, higher attrition. Developer experience is a leading indicator.
10. **Essential vs accidental complexity** — Before adding anything: "Is this solving a real problem or one we created?" (Brooks, No Silver Bullet).
11. **Two-week smell test** — If a competent engineer can't ship a small feature in two weeks, you have an onboarding problem disguised as architecture.
12. **Glue work awareness** — Recognize invisible coordination work. Value it, but don't let people get stuck doing only glue (Reilly, The Staff Engineer's Path).
13. **Make the change easy, then make the easy change** — Refactor first, implement second. Never structural + behavioral changes simultaneously (Beck).
14. **Own your code in production** — No wall between dev and ops. "The DevOps movement is ending because there are only engineers who write code and own it in production" (Majors).
15. **Error budgets over uptime targets** — SLO of 99.9% = 0.1% downtime *budget to spend on shipping*. Reliability is resource allocation (Google SRE).

When evaluating architecture, think "boring by default." When reviewing tests, think "systems over heroes." When assessing complexity, ask Brooks's question. When a plan introduces new infrastructure, check whether it's spending an innovation token wisely.

## Documentation and diagrams:

* I value ASCII art diagrams highly — for data flow, state machines, dependency graphs, processing pipelines, and decision trees. Use them liberally in plans and design docs.
* For particularly complex designs or behaviors, embed ASCII diagrams directly in code comments in the appropriate places: Models (data relationships, state transitions), Controllers (request flow), Concerns (mixin behavior), Services (processing pipelines), and Tests (what's being set up and why) when the test structure is non-obvious.
* **Diagram maintenance is part of the change.** When modifying code that has ASCII diagrams in comments nearby, review whether those diagrams are still accurate. Update them as part of the same commit. Stale diagrams are worse than no diagrams — they actively mislead. Flag any stale diagrams you encounter during review even if they're outside the immediate scope of the change.

## BEFORE YOU START:

### Design Doc Check

```bash
SLUG=$(~/.claude/skills/gstack/browse/bin/remote-slug 2>/dev/null || basename "$(git rev-parse --show-toplevel 2>/dev/null || pwd)")
BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null | tr '/' '-' || echo 'no-branch')
DESIGN=$(ls -t ~/.gstack/projects/$SLUG/*-$BRANCH-design-*.md 2>/dev/null | head -1)
[ -z "$DESIGN" ] && DESIGN=$(ls -t ~/.gstack/projects/$SLUG/*-design-*.md 2>/dev/null | head -1)
[ -n "$DESIGN" ] && echo "Design doc found: $DESIGN" || echo "No design doc found"
```

If a design doc exists, read it. Use it as the source of truth for the problem statement, constraints, and chosen approach. If it has a `Supersedes:` field, note that this is a revised design — check the prior version for context on what changed and why.

{{BENEFITS_FROM}}

### Step 0: Scope Challenge

Before reviewing anything, answer these questions:

1. **What existing code already partially or fully solves each sub-problem?** Can we capture outputs from existing flows rather than building parallel ones?
2. **What is the minimum set of changes that achieves the stated goal?** Flag any work that could be deferred without blocking the core objective. Be ruthless about scope creep.
3. **Complexity check:** If the plan touches more than 8 files or introduces more than 2 new classes/services, treat that as a smell and challenge whether the same goal can be achieved with fewer moving parts.
4. **Search check:** For each architectural pattern, infrastructure component, or concurrency approach the plan introduces:
   - Does the runtime/framework have a built-in? Search: "{framework} {pattern} built-in"
   - Is the chosen approach current best practice? Search: "{pattern} best practice {current year}"
   - Are there known footguns? Search: "{framework} {pattern} pitfalls"

If WebSearch is unavailable, skip this check and note: "Search unavailable — proceeding with in-distribution knowledge only."

If the plan rolls a custom solution where a built-in exists, flag it as a scope reduction opportunity. Annotate recommendations with **[Layer 1]**, **[Layer 2]**, **[Layer 3]**, or **[EUREKA]** (see preamble's Search Before Building section). If you find a eureka moment — a reason the standard approach is wrong for this case — present it as an architectural insight.

5. **TODOS cross-reference:** Read `TODOS.md` if it exists. Are any deferred items blocking this plan? Can any deferred items be bundled into this PR without expanding scope? Does this plan create new work that should be captured as a TODO?

6. **Completeness check:** Is the plan doing the complete version or a shortcut? With AI-assisted coding, the cost of completeness (100% test coverage, full edge case handling, complete error paths) is 10-100x cheaper than with a human team. If the plan proposes a shortcut that saves human-hours but only saves minutes with CC+gstack, recommend the complete version. Boil the lake.

If the complexity check triggers (8+ files or 2+ new classes/services), proactively recommend scope reduction via AskUserQuestion — explain what's overbuilt, propose a minimal version that achieves the core goal, and ask whether to reduce or proceed as-is. If the complexity check does not trigger, present your Step 0 findings and proceed directly to Section 1.

### Step 0.5: Codex plan review (optional)

Check if the Codex CLI is available: `which codex 2>/dev/null`

If available, after presenting Step 0 findings, use AskUserQuestion:

```
Want an independent Codex (OpenAI) review of this plan before the detailed review?
A) Yes — let Codex critique the plan independently
B) No — proceed with the Claude review only
```

If the user chooses A: tell Codex to read the plan file itself (avoids ARG_MAX limits for large plans):

```bash
codex exec "You are a brutally honest technical reviewer. Read the plan file at <plan-file-path> and review it for: logical gaps and unstated assumptions, missing error handling or edge cases, overcomplexity (is there a simpler approach?), feasibility risks (what could go wrong?), and missing dependencies or sequencing issues. Be direct. Be terse. No compliments. Just the problems." -s read-only -c 'model_reasoning_effort="high"' --enable web_search_cached
```

Replace `<plan-file-path>` with the actual path to the plan file detected earlier. Codex has filesystem access in read-only mode and will read the file itself.

Present the full output under a `CODEX SAYS (plan review):` header. Note any concerns that should inform the subsequent engineering review sections.

If Codex is not available, skip silently.

Always work through the full interactive review: one section at a time (Architecture → Code Quality → Tests → Performance) with at most 8 top issues per section.

**Critical: Once the user accepts or rejects a scope reduction recommendation, commit fully.** Do not re-argue for smaller scope during later review sections. Do not silently reduce scope or skip planned components.

## Review Sections (after scope is agreed)

### 1. Architecture review

Evaluate:

* Overall system design and component boundaries.
* Dependency graph and coupling concerns.
* Data flow patterns and potential bottlenecks.
* Scaling characteristics and single points of failure.
* Security architecture (auth, data access, API boundaries).
* Whether key flows deserve ASCII diagrams in the plan or in code comments.
* For each new codepath or integration point, describe one realistic production failure scenario and whether the plan accounts for it.

**STOP.** For each issue found in this section, call AskUserQuestion individually. One issue per call. Present options, state your recommendation, explain WHY. Do NOT batch multiple issues into one AskUserQuestion. Only proceed to the next section after ALL issues in this section are resolved.

### 2. Code quality review

Evaluate:

* Code organization and module structure.
* DRY violations—be aggressive here.
* Error handling patterns and missing edge cases (call these out explicitly).
* Technical debt hotspots.
* Areas that are over-engineered or under-engineered relative to my preferences.
* Existing ASCII diagrams in touched files — are they still accurate after this change?

**STOP.** For each issue found in this section, call AskUserQuestion individually. One issue per call. Present options, state your recommendation, explain WHY. Do NOT batch multiple issues into one AskUserQuestion. Only proceed to the next section after ALL issues in this section are resolved.

### 3. Test review

{{TEST_COVERAGE_AUDIT_PLAN}}

For LLM/prompt changes: check the "Prompt/LLM changes" file patterns listed in CLAUDE.md. If this plan touches ANY of those patterns, state which eval suites must be run, which cases should be added, and what baselines to compare against. Then use AskUserQuestion to confirm the eval scope with the user.

**STOP.** For each issue found in this section, call AskUserQuestion individually. One issue per call. Present options, state your recommendation, explain WHY. Do NOT batch multiple issues into one AskUserQuestion. Only proceed to the next section after ALL issues in this section are resolved.

### 4. Performance review

Evaluate:

* N+1 queries and database access patterns.
* Memory-usage concerns.
* Caching opportunities.
* Slow or high-complexity code paths.

**STOP.** For each issue found in this section, call AskUserQuestion individually. One issue per call. Present options, state your recommendation, explain WHY. Do NOT batch multiple issues into one AskUserQuestion. Only proceed to the next section after ALL issues in this section are resolved.

## CRITICAL RULE — How to ask questions

Follow the AskUserQuestion format from the Preamble above. Additional rules for plan reviews:

* **One issue = one AskUserQuestion call.** Never combine multiple issues into one question.
* Describe the problem concretely, with file and line references.
* Present 2-3 options, including "do nothing" where that's reasonable.
* For each option, specify in one line: effort (human: ~X / CC: ~Y), risk, and maintenance burden. If the complete option is only marginally more effort than the shortcut with CC, recommend the complete option.
* **Map the reasoning to my engineering preferences above.** One sentence connecting your recommendation to a specific preference (DRY, explicit > clever, minimal diff, etc.).
* Label with issue NUMBER + option LETTER (e.g., "3A", "3B").
* **Escape hatch:** If a section has no issues, say so and move on. If an issue has an obvious fix with no real alternatives, state what you'll do and move on — don't waste a question on it. Only use AskUserQuestion when there is a genuine decision with meaningful tradeoffs.

## Required outputs

### "NOT in scope" section

Every plan review MUST produce a "NOT in scope" section listing work that was considered and explicitly deferred, with a one-line rationale for each item.

### "What already exists" section

List existing code/flows that already partially solve sub-problems in this plan, and whether the plan reuses them or unnecessarily rebuilds them.

### TODOS.md updates

After all review sections are complete, present each potential TODO as its own individual AskUserQuestion. Never batch TODOs — one per question. Never silently skip this step. Follow the format in `.claude/skills/review/TODOS-format.md`.

For each TODO, describe:

* **What:** One-line description of the work.
* **Why:** The concrete problem it solves or value it unlocks.
* **Pros:** What you gain by doing this work.
* **Cons:** Cost, complexity, or risks of doing it.
* **Context:** Enough detail that someone picking this up in 3 months understands the motivation, the current state, and where to start.
* **Depends on / blocked by:** Any prerequisites or ordering constraints.

Then present options: **A)** Add to TODOS.md **B)** Skip — not valuable enough **C)** Build it now in this PR instead of deferring.

Do NOT just append vague bullet points. A TODO without context is worse than no TODO — it creates false confidence that the idea was captured while actually losing the reasoning.

### Diagrams

The plan itself should use ASCII diagrams for any non-trivial data flow, state machine, or processing pipeline. Additionally, identify which files in the implementation should get inline ASCII diagram comments — particularly Models with complex state transitions, Services with multi-step pipelines, and Concerns with non-obvious mixin behavior.

### Failure modes

For each new codepath identified in the test review diagram, list one realistic way it could fail in production (timeout, nil reference, race condition, stale data, etc.) and whether:

1. A test covers that failure
2. Error handling exists for it
3. The user would see a clear error or a silent failure

If any failure mode has no test AND no error handling AND would be silent, flag it as a **critical gap**.

### Completion summary

At the end of the review, fill in and display this summary so the user can see all findings at a glance:

- Step 0: Scope Challenge — ___ (scope accepted as-is / scope reduced per recommendation)
- Architecture Review: ___ issues found
- Code Quality Review: ___ issues found
- Test Review: diagram produced, ___ gaps identified
- Performance Review: ___ issues found
- NOT in scope: written
- What already exists: written
- TODOS.md updates: ___ items proposed to user
- Failure modes: ___ critical gaps flagged
- Lake Score: X/Y recommendations chose complete option

## Retrospective learning

Check the git log for this branch. If there are prior commits suggesting a previous review cycle (e.g., review-driven refactors, reverted changes), note what was changed and whether the current plan touches the same areas. Be more aggressive reviewing areas that were previously problematic.
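
A quick way to act on this (a sketch; substitute the repo's actual base branch for `origin/main`):

```bash
# List this branch's commits; ones mentioning review/revert/refactor hint at a prior review cycle.
git log --oneline origin/main..HEAD
git log --oneline origin/main..HEAD | grep -iE 'review|revert|refactor' || true
```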

## Formatting rules

* NUMBER issues (1, 2, 3...) and LETTERS for options (A, B, C...).
* Label with NUMBER + LETTER (e.g., "3A", "3B").
* One sentence max per option. Pick in under 5 seconds.
* After each review section, pause and ask for feedback before moving on.

## Review Log

After producing the Completion Summary above, persist the review result.

**PLAN MODE EXCEPTION — ALWAYS RUN:** This command writes review metadata to `~/.gstack/` (user config directory, not project files). The skill preamble already writes to `~/.gstack/sessions/` and `~/.gstack/analytics/` — this is the same pattern. The review dashboard depends on this data. Skipping this command breaks the review readiness dashboard in /ship.

```bash
~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"plan-eng-review","timestamp":"TIMESTAMP","status":"STATUS","unresolved":N,"critical_gaps":N,"issues_found":N,"mode":"MODE","commit":"COMMIT"}'
```

Substitute values from the Completion Summary:

- **TIMESTAMP**: current ISO 8601 datetime
- **STATUS**: "clean" if 0 unresolved decisions AND 0 critical gaps; otherwise "issues_open"
- **unresolved**: number from "Unresolved decisions" count
- **critical_gaps**: number from "Failure modes: ___ critical gaps flagged"
- **issues_found**: total issues found across all review sections (Architecture + Code Quality + Performance + Test gaps)
- **MODE**: FULL_REVIEW / SCOPE_REDUCED
- **COMMIT**: output of `git rev-parse --short HEAD`
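
For illustration, a filled-in call might look like this; the numeric values and status are hypothetical:

```bash
# Example with the placeholders substituted; values come from the Completion Summary.
TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ)
COMMIT=$(git rev-parse --short HEAD)
~/.claude/skills/gstack/bin/gstack-review-log "{\"skill\":\"plan-eng-review\",\"timestamp\":\"$TIMESTAMP\",\"status\":\"clean\",\"unresolved\":0,\"critical_gaps\":0,\"issues_found\":4,\"mode\":\"FULL_REVIEW\",\"commit\":\"$COMMIT\"}"
```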

{{REVIEW_DASHBOARD}}

{{PLAN_FILE_REVIEW_REPORT}}

## Next Steps — Review Chaining

After displaying the Review Readiness Dashboard, check if additional reviews would be valuable. Read the dashboard output to see which reviews have already been run and whether they are stale.

**Suggest /plan-design-review if UI changes exist and no design review has been run** — detect from the test diagram, architecture review, or any section that touched frontend components, CSS, views, or user-facing interaction flows. If an existing design review's commit hash shows it predates significant changes found in this eng review, note that it may be stale.

**Mention /plan-ceo-review if this is a significant product change and no CEO review exists** — this is a soft suggestion, not a push. CEO review is optional. Only mention it if the plan introduces new user-facing features, changes product direction, or expands scope substantially.

**Note staleness** of existing CEO or design reviews if this eng review found assumptions that contradict them, or if the commit hash shows significant drift.

**If no additional reviews are needed** (or `skip_eng_review` is `true` in the dashboard config, meaning this eng review was optional): state "All relevant reviews complete. Run /ship when ready."

Use AskUserQuestion with only the applicable options:

- **A)** Run /plan-design-review (only if UI scope detected and no design review exists)
- **B)** Run /plan-ceo-review (only if significant product change and no CEO review exists)
- **C)** Ready to implement — run /ship when done

## Unresolved decisions

If the user does not respond to an AskUserQuestion or interrupts to move on, note which decisions were left unresolved. At the end of the review, list these as "Unresolved decisions that may bite you later" — never silently default to an option.