mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-05 21:25:27 +02:00
Merge remote-tracking branch 'origin/main' into garrytan/gstack-eval-optimization
This commit is contained in:
@@ -203,6 +203,8 @@ Templates contain the workflows, tips, and examples that require human judgment.
| `{{BASE_BRANCH_DETECT}}` | `gen-skill-docs.ts` | Dynamic base branch detection for PR-targeting skills (ship, review, qa, plan-ceo-review) |
| `{{QA_METHODOLOGY}}` | `gen-skill-docs.ts` | Shared QA methodology block for /qa and /qa-only |
| `{{DESIGN_METHODOLOGY}}` | `gen-skill-docs.ts` | Shared design audit methodology for /plan-design-review and /qa-design-review |
| `{{REVIEW_DASHBOARD}}` | `gen-skill-docs.ts` | Review Readiness Dashboard for /ship pre-flight |
| `{{TEST_BOOTSTRAP}}` | `gen-skill-docs.ts` | Test framework detection, bootstrap, CI/CD setup for /qa, /ship, /qa-design-review |

This is structurally sound — if a command exists in code, it appears in docs. If it doesn't exist, it can't appear.
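That invariant can be spot-checked mechanically. The following is a hypothetical sanity check, not part of gstack; the `./skills` output directory is an assumed path, and the placeholder pattern is inferred from the table above:

```shell
# Hypothetical check: fail if any generated doc still contains an
# unresolved {{PLACEHOLDER}} token. DOCS_DIR is an assumed location.
DOCS_DIR="${DOCS_DIR:-./skills}"
leftover=$(grep -rn -o '{{[A-Z_]\{1,\}}}' "$DOCS_DIR" 2>/dev/null || true)
if [ -n "$leftover" ]; then
  echo "Unresolved placeholders found:"
  printf '%s\n' "$leftover"
  exit 1
fi
echo "OK: all placeholders resolved"
```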
@@ -1,5 +1,65 @@

# Changelog

## 0.6.0.1 — 2026-03-17

- **`/gstack-upgrade` now catches stale vendored copies automatically.** If your global gstack is up to date but the vendored copy in your project is behind, `/gstack-upgrade` detects the mismatch and syncs it. No more manually asking "did we vendor it?" — it just tells you and offers to update.
- **Upgrade sync is safer.** If `./setup` fails while syncing a vendored copy, gstack restores the previous version from backup instead of leaving a broken install.

### For contributors

- The standalone usage section in `gstack-upgrade/SKILL.md.tmpl` now references Steps 2 and 4.5 (DRY) instead of duplicating the detection/sync bash blocks. Added one new version-comparison bash block.
- The update check fallback in standalone mode now matches the preamble pattern (global path → local path → `|| true`).

## 0.6.0 — 2026-03-17

- **100% test coverage is the key to great vibe coding.** gstack now bootstraps test frameworks from scratch when your project doesn't have one. It detects your runtime, researches the best framework, asks you to pick, installs it, writes 3-5 real tests for your actual code, sets up CI/CD (GitHub Actions), creates TESTING.md, and adds test-culture instructions to CLAUDE.md. Every Claude Code session after that writes tests naturally.
- **Every bug fix now gets a regression test.** When `/qa` fixes a bug and verifies it, Phase 8e.5 automatically generates a regression test that catches the exact scenario that broke. Tests include full attribution tracing back to the QA report. Auto-incrementing filenames prevent collisions across sessions.
- **Ship with confidence — the coverage audit shows what's tested and what's not.** `/ship` Step 3.4 builds a code path map from your diff, searches for corresponding tests, and produces an ASCII coverage diagram with quality stars (★★★ = edge cases + errors, ★★ = happy path, ★ = smoke test). Gaps get tests auto-generated. The PR body shows "Tests: 42 → 47 (+5 new)".
- **Your retro tracks test health.** `/retro` now shows total test files, tests added this period, regression test commits, and trend deltas. If the test ratio drops below 20%, it flags it as a growth area.
- **Design reviews generate regression tests too.** `/qa-design-review` Phase 8e.5 skips CSS-only fixes (those are caught by re-running the design audit) but writes tests for JavaScript behavior changes like broken dropdowns or animation failures.

### For contributors

- Added a `generateTestBootstrap()` resolver to `gen-skill-docs.ts` (~155 lines). Registered as `{{TEST_BOOTSTRAP}}` in the RESOLVERS map. Inserted into the qa, ship (Step 2.5), and qa-design-review templates.
- Phase 8e.5 regression test generation added to `qa/SKILL.md.tmpl` (46 lines), with a CSS-aware variant in `qa-design-review/SKILL.md.tmpl` (12 lines). Rule 13 amended to allow creating new test files.
- Step 3.4 test coverage audit added to `ship/SKILL.md.tmpl` (88 lines) with a quality scoring rubric and ASCII diagram format.
- Test health tracking added to `retro/SKILL.md.tmpl`: 3 new data-gathering commands, a metrics row, a narrative section, and a JSON schema field.
- `qa-only/SKILL.md.tmpl` gets a recommendation note when no test framework is detected.
- `qa-report-template.md` gains a Regression Tests section with deferred test specs.
- ARCHITECTURE.md placeholder table updated with `{{TEST_BOOTSTRAP}}` and `{{REVIEW_DASHBOARD}}`.
- WebSearch added to allowed-tools for qa, ship, and qa-design-review.
- 26 new validation tests and 2 new E2E evals (bootstrap + coverage audit).
- 2 new P3 TODOs: CI/CD for non-GitHub providers, auto-upgrade weak tests.

## 0.5.4 — 2026-03-17

- **Engineering review is always the full review now.** `/plan-eng-review` no longer asks you to choose between "big change" and "small change" modes. Every plan gets the full interactive walkthrough (architecture, code quality, tests, performance). Scope reduction is only suggested when the complexity check actually triggers — not as a standing menu option.
- **Ship stops asking about reviews once you've answered.** When `/ship` asks about missing reviews and you say "ship anyway" or "not relevant," that decision is saved for the branch. No more getting re-asked every time you re-run `/ship` after a pre-landing fix.

### For contributors

- Removed the SMALL_CHANGE / BIG_CHANGE / SCOPE_REDUCTION menu from `plan-eng-review/SKILL.md.tmpl`. Scope reduction is now proactive (triggered by the complexity check) rather than a menu item.
- Added review gate override persistence to `ship/SKILL.md.tmpl` — writes `ship-review-override` entries to `$BRANCH-reviews.jsonl` so subsequent `/ship` runs skip the gate.
- Updated 2 E2E test prompts to match the new flow.

## 0.5.3 — 2026-03-17

- **You're always in control — even when dreaming big.** `/plan-ceo-review` now presents every scope expansion as an individual decision you opt into. EXPANSION mode recommends enthusiastically, but you say yes or no to each idea. No more "the agent went wild and added 5 features I didn't ask for."
- **New mode: SELECTIVE EXPANSION.** Hold your current scope as the baseline, but see what else is possible. The agent surfaces expansion opportunities one by one with neutral recommendations — you cherry-pick the ones worth doing. Perfect for iterating on existing features where you want rigor but also want to be tempted by adjacent improvements.
- **Your CEO review visions are saved, not lost.** Expansion ideas, cherry-pick decisions, and 10x visions are now persisted to `~/.gstack/projects/{repo}/ceo-plans/` as structured design documents. Stale plans get archived automatically. If a vision is exceptional, you can promote it to `docs/designs/` in your repo for the team.
- **Smarter ship gates.** `/ship` no longer nags you about CEO and Design reviews when they're not relevant. Eng Review is the only required gate (and you can disable even that with `gstack-config set skip_eng_review true`). CEO Review is recommended for big product changes; Design Review for UI work. The dashboard still shows all three — it just won't block you for the optional ones.

### For contributors

- Added SELECTIVE EXPANSION mode to `plan-ceo-review/SKILL.md.tmpl` with a cherry-pick ceremony, neutral recommendation posture, and HOLD SCOPE baseline.
- Rewrote EXPANSION mode's Step 0D to include an opt-in ceremony — distill the vision into discrete proposals, present each as an AskUserQuestion.
- Added CEO plan persistence (0D-POST step): structured markdown with YAML frontmatter (`status: ACTIVE/ARCHIVED/PROMOTED`), a scope decisions table, and an archival flow.
- Added a `docs/designs` promotion step after the Review Log.
- Mode Quick Reference table expanded to 4 columns.
- Review Readiness Dashboard: Eng Review required (overridable via the `skip_eng_review` config), CEO/Design optional with agent judgment.
- New tests: CEO review mode validation (4 modes, persistence, promotion) and a SELECTIVE EXPANSION E2E test.

## 0.5.2 — 2026-03-17

- **Your design consultant now takes creative risks.** `/design-consultation` doesn't just propose a safe, coherent system — it explicitly breaks down SAFE CHOICES (category baseline) vs. RISKS (where your product stands out). You pick which rules to break. Every risk comes with a rationale for why it works and what it costs.

@@ -263,6 +263,30 @@
**Effort:** S
**Priority:** P3

### CI/CD generation for non-GitHub providers

**What:** Extend CI/CD bootstrap to generate GitLab CI (`.gitlab-ci.yml`), CircleCI (`.circleci/config.yml`), and Bitrise pipelines.

**Why:** Not all projects use GitHub Actions. Universal CI/CD bootstrap would make test bootstrap work for everyone.

**Context:** v1 ships with GitHub Actions only. Detection logic already checks for `.gitlab-ci.yml`, `.circleci/`, `bitrise.yml` and skips with an informational note. Each provider needs ~20 lines of template text in `generateTestBootstrap()`.
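The detection half already exists; a minimal sketch of it, using the config file names from the Context note above (the function itself is illustrative, not gstack's actual code):

```shell
# Illustrative CI provider detection, keyed on the config files named above.
detect_ci_provider() {
  if [ -f .gitlab-ci.yml ]; then
    echo "gitlab"          # would need a .gitlab-ci.yml template
  elif [ -d .circleci ]; then
    echo "circleci"        # would need a .circleci/config.yml template
  elif [ -f bitrise.yml ]; then
    echo "bitrise"         # would need a bitrise.yml pipeline template
  elif [ -d .github/workflows ]; then
    echo "github-actions"  # the only provider v1 generates for
  else
    echo "none"
  fi
}
echo "CI provider: $(detect_ci_provider)"
```

Each non-GitHub branch is where the ~20 lines of template text per provider would plug in.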

**Effort:** M
**Priority:** P3
**Depends on:** Test bootstrap (shipped)

### Auto-upgrade weak tests (★) to strong tests (★★★)

**What:** When the Step 3.4 coverage audit identifies existing ★-rated tests (smoke/trivial assertions), generate improved versions testing edge cases and error paths.

**Why:** Many codebases have tests that technically exist but don't catch real bugs — `expect(component).toBeDefined()` isn't testing behavior. Upgrading these closes the gap between "has tests" and "has good tests."

**Context:** Requires the quality scoring rubric from the test coverage audit. Modifying existing test files is riskier than creating new ones — it needs careful diffing to ensure the upgraded test still passes. Consider creating a companion test file rather than modifying the original.

**Effort:** M
**Priority:** P3
**Depends on:** Test quality scoring (shipped)

## Retro

### Deployment health tracking (retro + browse)

@@ -156,6 +156,13 @@ rm -rf "$LOCAL_GSTACK.bak"
```
Tell user: "Also updated vendored copy at `$LOCAL_GSTACK` — commit `.claude/skills/gstack/` when you're ready."

If `./setup` fails, restore from backup and warn the user:

```bash
rm -rf "$LOCAL_GSTACK"
mv "$LOCAL_GSTACK.bak" "$LOCAL_GSTACK"
```

Tell user: "Sync failed — restored previous version at `$LOCAL_GSTACK`. Run `/gstack-upgrade` manually to retry."

### Step 5: Write marker + clear cache

```bash
@@ -193,9 +200,26 @@ When invoked directly as `/gstack-upgrade` (not from a preamble):

1. Force a fresh update check (bypass cache):

```bash
~/.claude/skills/gstack/bin/gstack-update-check --force 2>/dev/null || \
.claude/skills/gstack/bin/gstack-update-check --force 2>/dev/null || true
```

Use the output to determine if an upgrade is available.

2. If `UPGRADE_AVAILABLE <old> <new>`: follow Steps 2-6 above.
3. If no output (primary is up to date): check for a stale local vendored copy.

Run the Step 2 bash block above to detect the primary install type and directory (`INSTALL_TYPE` and `INSTALL_DIR`). Then run the Step 4.5 detection bash block above to check for a local vendored copy (`LOCAL_GSTACK`).

**If `LOCAL_GSTACK` is empty** (no local vendored copy): tell the user "You're already on the latest version (v{version})."

**If `LOCAL_GSTACK` is non-empty**, compare versions:

```bash
PRIMARY_VER=$(cat "$INSTALL_DIR/VERSION" 2>/dev/null || echo "unknown")
LOCAL_VER=$(cat "$LOCAL_GSTACK/VERSION" 2>/dev/null || echo "unknown")
echo "PRIMARY=$PRIMARY_VER LOCAL=$LOCAL_VER"
```

**If versions differ:** follow the Step 4.5 sync bash block above to update the local copy from the primary. Tell user: "Global v{PRIMARY_VER} is up to date. Updated local vendored copy from v{LOCAL_VER} → v{PRIMARY_VER}. Commit `.claude/skills/gstack/` when you're ready."

**If versions match:** tell the user "You're on the latest version (v{PRIMARY_VER}). Global and local vendored copy are both up to date."
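Taken together, the standalone branch reduces to the following decision sketch. It assumes `INSTALL_DIR` and `LOCAL_GSTACK` were already set by the Step 2 and Step 4.5 blocks; the messages are abbreviated versions of the ones above:

```shell
# Condensed sketch of the stale-vendored-copy check described above.
PRIMARY_VER=$(cat "$INSTALL_DIR/VERSION" 2>/dev/null || echo "unknown")
LOCAL_VER=$(cat "$LOCAL_GSTACK/VERSION" 2>/dev/null || echo "unknown")
if [ -z "$LOCAL_GSTACK" ]; then
  echo "Already on the latest version (v$PRIMARY_VER)."
elif [ "$PRIMARY_VER" != "$LOCAL_VER" ]; then
  echo "Global v$PRIMARY_VER is current; local copy is v$LOCAL_VER. Run the Step 4.5 sync."
else
  echo "Global and local vendored copy are both up to date (v$PRIMARY_VER)."
fi
```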
@@ -3,9 +3,9 @@ name: plan-ceo-review
version: 1.0.0
description: |
  CEO/founder-mode plan review. Rethink the problem, find the 10-star product,
  challenge premises, expand scope when it creates a better product. Four modes:
  SCOPE EXPANSION (dream big), SELECTIVE EXPANSION (hold scope + cherry-pick
  expansions), HOLD SCOPE (maximum rigor), SCOPE REDUCTION (strip to essentials).
allowed-tools:
  - Read
  - Grep
@@ -105,10 +105,11 @@ branch name wherever the instructions say "the base branch."

## Philosophy
You are not here to rubber-stamp this plan. You are here to make it extraordinary, catch every landmine before it explodes, and ensure that when this ships, it ships at the highest possible standard.
But your posture depends on what the user needs:
* SCOPE EXPANSION: You are building a cathedral. Envision the platonic ideal. Push scope UP. Ask "what would make this 10x better for 2x the effort?" You have permission to dream — and to recommend enthusiastically. But every expansion is the user's decision. Present each scope-expanding idea as an AskUserQuestion. The user opts in or out.
* SELECTIVE EXPANSION: You are a rigorous reviewer who also has taste. Hold the current scope as your baseline — make it bulletproof. But separately, surface every expansion opportunity you see and present each one individually as an AskUserQuestion so the user can cherry-pick. Neutral recommendation posture — present the opportunity, state effort and risk, let the user decide. Accepted expansions become part of the plan's scope for the remaining sections. Rejected ones go to "NOT in scope."
* HOLD SCOPE: You are a rigorous reviewer. The plan's scope is accepted. Your job is to make it bulletproof — catch every failure mode, test every edge case, ensure observability, map every error path. Do not silently reduce OR expand.
* SCOPE REDUCTION: You are a surgeon. Find the minimum viable version that achieves the core outcome. Cut everything else. Be ruthless.
Critical rule: In ALL modes, the user is 100% in control. Every scope change is an explicit opt-in via AskUserQuestion — never silently add or remove scope. Once the user selects a mode, COMMIT to it. Do not silently drift toward a different mode. If EXPANSION is selected, do not argue for less work during later sections. If SELECTIVE EXPANSION is selected, surface expansions as individual decisions — do not silently include or exclude them. If REDUCTION is selected, do not sneak scope back in. Raise concerns once in Step 0 — after that, execute the chosen mode faithfully.
Do NOT make any code changes. Do NOT start implementation. Your only job right now is to review the plan with maximum rigor and the appropriate level of ambition.

## Prime Directives

@@ -164,7 +165,7 @@ Map:

### Retrospective Check
Check the git log for this branch. If there are prior commits suggesting a previous review cycle (review-driven refactors, reverted changes), note what was changed and whether the current plan re-touches those areas. Be MORE aggressive reviewing areas that were previously problematic. Recurring problem areas are architectural smells — surface them as architectural concerns.

### Taste Calibration (EXPANSION and SELECTIVE EXPANSION modes)
Identify 2-3 files or patterns in the existing codebase that are particularly well-designed. Note them as style references for the review. Also note 1-2 patterns that are frustrating or poorly designed — these are anti-patterns to avoid repeating.
Report findings before proceeding to Step 0.

@@ -187,10 +188,20 @@ Describe the ideal end state of this system 12 months from now. Does this plan m
```

### 0D. Mode-Specific Analysis
**For SCOPE EXPANSION** — run all three, then the opt-in ceremony:
1. 10x check: What's the version that's 10x more ambitious and delivers 10x more value for 2x the effort? Describe it concretely.
2. Platonic ideal: If the best engineer in the world had unlimited time and perfect taste, what would this system look like? What would the user feel when using it? Start from experience, not architecture.
3. Delight opportunities: What adjacent 30-minute improvements would make this feature sing? Things where a user would think "oh nice, they thought of that." List at least 5.
4. **Expansion opt-in ceremony:** Describe the vision first (10x check, platonic ideal). Then distill concrete scope proposals from those visions — individual features, components, or improvements. Present each proposal as its own AskUserQuestion. Recommend enthusiastically — explain why it's worth doing. But the user decides. Options: **A)** Add to this plan's scope **B)** Defer to TODOS.md **C)** Skip. Accepted items become plan scope for all remaining review sections. Rejected items go to "NOT in scope."

**For SELECTIVE EXPANSION** — run the HOLD SCOPE analysis first, then surface expansions:
1. Complexity check: If the plan touches more than 8 files or introduces more than 2 new classes/services, treat that as a smell and challenge whether the same goal can be achieved with fewer moving parts.
2. What is the minimum set of changes that achieves the stated goal? Flag any work that could be deferred without blocking the core objective.
3. Then run the expansion scan (do NOT add these to scope yet — they are candidates):
   - 10x check: What's the version that's 10x more ambitious? Describe it concretely.
   - Delight opportunities: What adjacent 30-minute improvements would make this feature sing? List at least 5.
   - Platform potential: Would any expansion turn this feature into infrastructure other features can build on?
4. **Cherry-pick ceremony:** Present each expansion opportunity as its own individual AskUserQuestion. Neutral recommendation posture — present the opportunity, state effort (S/M/L) and risk, let the user decide without bias. Options: **A)** Add to this plan's scope **B)** Defer to TODOS.md **C)** Skip. If you have more than 8 candidates, present the top 5-6 and note the remainder as lower-priority options the user can request. Accepted items become plan scope for all remaining review sections. Rejected items go to "NOT in scope."

**For HOLD SCOPE** — run this:
1. Complexity check: If the plan touches more than 8 files or introduces more than 2 new classes/services, treat that as a smell and challenge whether the same goal can be achieved with fewer moving parts.

@@ -200,7 +211,57 @@ Describe the ideal end state of this system 12 months from now. Does this plan m
1. Ruthless cut: What is the absolute minimum that ships value to a user? Everything else is deferred. No exceptions.
2. What can be a follow-up PR? Separate "must ship together" from "nice to ship together."

### 0D-POST. Persist CEO Plan (EXPANSION and SELECTIVE EXPANSION only)

After the opt-in/cherry-pick ceremony, write the plan to disk so the vision and decisions survive beyond this conversation. Only run this step for EXPANSION and SELECTIVE EXPANSION modes.

```bash
eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
mkdir -p ~/.gstack/projects/$SLUG/ceo-plans
```

Before writing, check for existing CEO plans in the ceo-plans/ directory. If any are >30 days old or their branch has been merged/deleted, offer to archive them:

```bash
mkdir -p ~/.gstack/projects/$SLUG/ceo-plans/archive
# For each stale plan: mv ~/.gstack/projects/$SLUG/ceo-plans/{old-plan}.md ~/.gstack/projects/$SLUG/ceo-plans/archive/
```
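A minimal sketch of the age half of that check, assuming file mtime is an acceptable proxy for staleness (the merged/deleted-branch half still needs `git` and judgment, so it is not automated here):

```shell
# Move plan files untouched for more than 30 days into archive/.
# $SLUG is assumed to have been set by the gstack-slug eval above.
PLANS_DIR=~/.gstack/projects/$SLUG/ceo-plans
mkdir -p "$PLANS_DIR/archive"
find "$PLANS_DIR" -maxdepth 1 -name '*.md' -mtime +30 \
  -exec mv {} "$PLANS_DIR/archive/" \;
```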

Write to `~/.gstack/projects/$SLUG/ceo-plans/{date}-{feature-slug}.md` using this format:

```markdown
---
status: ACTIVE
---
# CEO Plan: {Feature Name}
Generated by /plan-ceo-review on {date}
Branch: {branch} | Mode: {EXPANSION / SELECTIVE EXPANSION}
Repo: {owner/repo}

## Vision

### 10x Check
{10x vision description}

### Platonic Ideal
{platonic ideal description — EXPANSION mode only}

## Scope Decisions

| # | Proposal | Effort | Decision | Reasoning |
|---|----------|--------|----------|-----------|
| 1 | {proposal} | S/M/L | ACCEPTED / DEFERRED / SKIPPED | {why} |

## Accepted Scope (added to this plan)
- {bullet list of what's now in scope}

## Deferred to TODOS.md
- {items with context}
```

Derive the feature slug from the plan being reviewed (e.g., "user-dashboard", "auth-refactor"). Use the date in YYYY-MM-DD format.
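For example, a sketch of the naming convention (the exact slugging rules are an assumption; gstack may derive the slug differently):

```shell
# Illustrative only: derive {date}-{feature-slug}.md from a feature name.
FEATURE_NAME="User Dashboard"
FEATURE_SLUG=$(printf '%s' "$FEATURE_NAME" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | sed 's/^-*//; s/-*$//')
PLAN_FILE="$(date +%Y-%m-%d)-$FEATURE_SLUG.md"
echo "$PLAN_FILE"   # e.g. 2026-03-17-user-dashboard.md
```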
### 0E. Temporal Interrogation (EXPANSION, SELECTIVE EXPANSION, and HOLD modes)
Think ahead to implementation: What decisions will need to be made during implementation that should be resolved NOW in the plan?
```
HOUR 1 (foundations): What does the implementer need to know?
@@ -211,17 +272,22 @@ Think ahead to implementation: What decisions will need to be made during implem
Surface these as questions for the user NOW, not as "figure it out later."

### 0F. Mode Selection
In every mode, you are 100% in control. No scope is added without your explicit approval.

Present four options:
1. **SCOPE EXPANSION:** The plan is good but could be great. Dream big — propose the ambitious version. Every expansion is presented individually for your approval. You opt in to each one.
2. **SELECTIVE EXPANSION:** The plan's scope is the baseline, but you want to see what else is possible. Every expansion opportunity is presented individually — you cherry-pick the ones worth doing. Neutral recommendations.
3. **HOLD SCOPE:** The plan's scope is right. Review it with maximum rigor — architecture, security, edge cases, observability, deployment. Make it bulletproof. No expansions surfaced.
4. **SCOPE REDUCTION:** The plan is overbuilt or wrong-headed. Propose a minimal version that achieves the core goal, then review that.

Context-dependent defaults:
* Greenfield feature → default EXPANSION
* Feature enhancement or iteration on existing system → default SELECTIVE EXPANSION
* Bug fix or hotfix → default HOLD SCOPE
* Refactor → default HOLD SCOPE
* Plan touching >15 files → suggest REDUCTION unless user pushes back
* User says "go big" / "ambitious" / "cathedral" → EXPANSION, no question
* User says "hold scope but tempt me" / "show me options" / "cherry-pick" → SELECTIVE EXPANSION, no question

Once selected, commit fully. Do not silently drift.
**STOP.** AskUserQuestion once per issue. Do NOT batch. Recommend + WHY. If no issues or fix is obvious, state what you'll do and move on — don't waste a question. Do NOT proceed until user responds.

@@ -244,10 +310,12 @@ Evaluate and diagram:
* Production failure scenarios. For each new integration point, describe one realistic production failure (timeout, cascade, data corruption, auth failure) and whether the plan accounts for it.
* Rollback posture. If this ships and immediately breaks, what's the rollback procedure? Git revert? Feature flag? DB migration rollback? How long?

**EXPANSION and SELECTIVE EXPANSION additions:**
* What would make this architecture beautiful? Not just correct — elegant. Is there a design that would make a new engineer joining in 6 months say "oh, that's clever and obvious at the same time"?
* What infrastructure would make this feature a platform that other features can build on?

**SELECTIVE EXPANSION:** If any accepted cherry-picks from Step 0D affect the architecture, evaluate their architectural fit here. Flag any that create coupling concerns or don't integrate cleanly — this is a chance to revisit the decision with new information.

Required ASCII diagram: full system architecture showing new components and their relationships to existing ones.
**STOP.** AskUserQuestion once per issue. Do NOT batch. Recommend + WHY. If no issues or fix is obvious, state what you'll do and move on — don't waste a question. Do NOT proceed until user responds.

@@ -406,8 +474,8 @@ Evaluate:
* Admin tooling. New operational tasks that need admin UI or rake tasks?
* Runbooks. For each new failure mode: what's the operational response?

**EXPANSION and SELECTIVE EXPANSION addition:**
* What observability would make this feature a joy to operate? (For SELECTIVE EXPANSION, include observability for any accepted cherry-picks.)
**STOP.** AskUserQuestion once per issue. Do NOT batch. Recommend + WHY. If no issues or fix is obvious, state what you'll do and move on — don't waste a question. Do NOT proceed until user responds.

### Section 9: Deployment & Rollout Review
@@ -421,8 +489,8 @@ Evaluate:
* Post-deploy verification checklist. First 5 minutes? First hour?
* Smoke tests. What automated checks should run immediately post-deploy?

**EXPANSION and SELECTIVE EXPANSION addition:**
* What deploy infrastructure would make shipping this feature routine? (For SELECTIVE EXPANSION, assess whether accepted cherry-picks change the deployment risk profile.)
**STOP.** AskUserQuestion once per issue. Do NOT batch. Recommend + WHY. If no issues or fix is obvious, state what you'll do and move on — don't waste a question. Do NOT proceed until user responds.

### Section 10: Long-Term Trajectory Review
@@ -434,9 +502,10 @@ Evaluate:
* Ecosystem fit. Aligns with Rails/JS ecosystem direction?
* The 1-year question. Read this plan as a new engineer in 12 months — obvious?

**EXPANSION and SELECTIVE EXPANSION additions:**
* What comes after this ships? Phase 2? Phase 3? Does the architecture support that trajectory?
* Platform potential. Does this create capabilities other features can leverage?
* (SELECTIVE EXPANSION only) Retrospective: Were the right cherry-picks accepted? Did any rejected expansions turn out to be load-bearing for the accepted ones?
**STOP.** AskUserQuestion once per issue. Do NOT batch. Recommend + WHY. If no issues or fix is obvious, state what you'll do and move on — don't waste a question. Do NOT proceed until user responds.

## CRITICAL RULE — How to ask questions
|
||||
@@ -485,8 +554,11 @@ For each TODO, describe:
|
||||
|
||||
Then present options: **A)** Add to TODOS.md **B)** Skip — not valuable enough **C)** Build it now in this PR instead of deferring.
|
||||
|
||||
### Delight Opportunities (EXPANSION mode only)
|
||||
Identify at least 5 "bonus chunk" opportunities (<30 min each) that would make users think "oh nice, they thought of that." Present each delight opportunity as its own individual AskUserQuestion. Never batch them. For each one, describe what it is, why it would delight users, and effort estimate. Then present options: **A)** Add to TODOS.md as a vision item **B)** Skip **C)** Build it now in this PR.
|
||||
### Scope Expansion Decisions (EXPANSION and SELECTIVE EXPANSION only)
|
||||
For EXPANSION and SELECTIVE EXPANSION modes: expansion opportunities and delight items were surfaced and decided in Step 0D (opt-in/cherry-pick ceremony). The decisions are persisted in the CEO plan document. Reference the CEO plan for the full record. Do not re-surface them here — list the accepted expansions for completeness:
|
||||
* Accepted: {list items added to scope}
|
||||
* Deferred: {list items sent to TODOS.md}
|
||||
* Skipped: {list items rejected}
|
||||
|
||||
### Diagrams (mandatory, produce all that apply)
|
||||
1. System architecture
|
||||
@@ -504,7 +576,7 @@ List every ASCII diagram in files this plan touches. Still accurate?
|
||||
+====================================================================+
|
||||
| MEGA PLAN REVIEW — COMPLETION SUMMARY |
|
||||
+====================================================================+
|
||||
| Mode selected | EXPANSION / HOLD / REDUCTION |
|
||||
| Mode selected | EXPANSION / SELECTIVE / HOLD / REDUCTION |
|
||||
| System Audit | [key findings] |
|
||||
| Step 0 | [mode + key decisions] |
|
||||
| Section 1 (Arch) | ___ issues found |
|
||||
@@ -524,7 +596,8 @@ List every ASCII diagram in files this plan touches. Still accurate?
|
||||
| Error/rescue registry| ___ methods, ___ CRITICAL GAPS |
|
||||
| Failure modes | ___ total, ___ CRITICAL GAPS |
|
||||
| TODOS.md updates | ___ items proposed |
|
||||
| Delight opportunities| ___ identified (EXPANSION only) |
|
||||
| Scope proposals | ___ proposed, ___ accepted (EXP + SEL) |
|
||||
| CEO plan | written / skipped (HOLD/REDUCTION) |
|
||||
| Diagrams produced | ___ (list types) |
|
||||
| Stale diagrams found | ___ |
|
||||
| Unresolved decisions | ___ (listed below) |
|
||||
@@ -549,15 +622,17 @@ Before running this command, substitute the placeholder values from the Completi
|
||||
- **STATUS**: "clean" if 0 unresolved decisions AND 0 critical gaps; otherwise "issues_open"
|
||||
- **unresolved**: number from "Unresolved decisions" in the summary
|
||||
- **critical_gaps**: number from "Failure modes: ___ CRITICAL GAPS" in the summary
|
||||
- **MODE**: the mode the user selected (SCOPE_EXPANSION / HOLD_SCOPE / SCOPE_REDUCTION)
|
||||
- **MODE**: the mode the user selected (SCOPE_EXPANSION / SELECTIVE_EXPANSION / HOLD_SCOPE / SCOPE_REDUCTION)
|
||||
|
||||
## Review Readiness Dashboard
|
||||
|
||||
After completing the review, read the review log to display the dashboard.
|
||||
After completing the review, read the review log and config to display the dashboard.
|
||||
|
||||
```bash
|
||||
eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
|
||||
cat ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl 2>/dev/null || echo "NO_REVIEWS"
|
||||
echo "---CONFIG---"
|
||||
~/.claude/skills/gstack/bin/gstack-config get skip_eng_review 2>/dev/null || echo "false"
|
||||
```
|
||||
|
||||
Parse the output. Find the most recent entry for each skill (plan-ceo-review, plan-eng-review, plan-design-review). Ignore entries with timestamps older than 7 days. Display:
|
||||
@@ -566,20 +641,37 @@ Parse the output. Find the most recent entry for each skill (plan-ceo-review, pl
|
||||
+====================================================================+
|
||||
| REVIEW READINESS DASHBOARD |
|
||||
+====================================================================+
|
||||
| Review | Runs | Last Run | Status |
|
||||
|-----------------|------|---------------------|----------------------|
|
||||
| CEO Review | 1 | 2026-03-16 14:30 | CLEAR |
|
||||
| Eng Review | 1 | 2026-03-16 15:00 | CLEAR |
|
||||
| Design Review | 0 | — | NOT YET RUN |
|
||||
| Review | Runs | Last Run | Status | Required |
|
||||
|-----------------|------|---------------------|-----------|----------|
|
||||
| Eng Review | 1 | 2026-03-16 15:00 | CLEAR | YES |
|
||||
| CEO Review | 0 | — | — | no |
|
||||
| Design Review | 0 | — | — | no |
|
||||
+--------------------------------------------------------------------+
|
||||
| VERDICT: 2/3 CLEAR — Design Review not yet run |
|
||||
| VERDICT: CLEARED — Eng Review passed |
|
||||
+====================================================================+
|
||||
```
|
||||
|
||||
**Review tiers:**
|
||||
- **Eng Review (required by default):** The only review that gates shipping. Covers architecture, code quality, tests, performance. Can be disabled globally with \`gstack-config set skip_eng_review true\` (the "don't bother me" setting).
|
||||
- **CEO Review (optional):** Use your judgment. Recommend it for big product/business changes, new user-facing features, or scope decisions. Skip for bug fixes, refactors, infra, and cleanup.
|
||||
- **Design Review (optional):** Use your judgment. Recommend it for UI/UX changes. Skip for backend-only, infra, or prompt-only changes.
|
||||
|
||||
**Verdict logic:**
|
||||
- **CLEARED TO SHIP (3/3)**: All three have >= 1 entry within 7 days AND most recent status is "clean"
|
||||
- **N/3 CLEAR**: Show count and list which are missing, have open issues, or are stale (>7 days)
|
||||
- Informational only — does NOT block.
|
||||
- **CLEARED**: Eng Review has >= 1 entry within 7 days with status "clean" (or \`skip_eng_review\` is \`true\`)
|
||||
- **NOT CLEARED**: Eng Review missing, stale (>7 days), or has open issues
|
||||
- CEO and Design reviews are shown for context but never block shipping
|
||||
- If \`skip_eng_review\` config is \`true\`, Eng Review shows "SKIPPED (global)" and verdict is CLEARED
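The verdict rule above can be sketched as a small function. This is a hedged sketch only: `verdict` and its three arguments are illustrative names, not real gstack identifiers, and the real skill evaluates the rule from the parsed review log rather than shell arguments.

```shell
# Hedged sketch of the verdict rule. `verdict`, `status`, `age_days`,
# and `skip` are illustrative names, not part of gstack itself.
verdict() {
  status="$1"    # most recent eng-review status: "clean" or "issues_open"
  age_days="$2"  # days since that entry was written
  skip="$3"      # value of the skip_eng_review config flag
  if [ "$skip" = "true" ]; then
    echo "CLEARED"        # global override: skip the gate entirely
  elif [ "$status" = "clean" ] && [ "$age_days" -le 7 ]; then
    echo "CLEARED"        # clean and fresh (within 7 days)
  else
    echo "NOT CLEARED"    # missing, stale, or has open issues
  fi
}

verdict clean 2 false        # CLEARED
verdict issues_open 1 false  # NOT CLEARED
verdict clean 10 false       # NOT CLEARED (stale)
verdict issues_open 1 true   # CLEARED (skip_eng_review override)
```

Note the ordering: the global override is checked first, so a stale or failed review never blocks a user who has opted out of the gate.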

## docs/designs Promotion (EXPANSION and SELECTIVE EXPANSION only)

At the end of the review, if the vision produced a compelling feature direction, offer to promote the CEO plan to the project repo. AskUserQuestion:

"The vision from this review produced {N} accepted scope expansions. Want to promote it to a design doc in the repo?"
- **A)** Promote to `docs/designs/{FEATURE}.md` (committed to repo, visible to the team)
- **B)** Keep in `~/.gstack/projects/` only (local, personal reference)
- **C)** Skip

If promoted, copy the CEO plan content to `docs/designs/{FEATURE}.md` (create the directory if needed) and update the `status` field in the original CEO plan from `ACTIVE` to `PROMOTED`.
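The promotion step can be sketched in a few lines of shell. This is a minimal sketch under stated assumptions: the paths and feature name are placeholders built in a temp directory, and the `sed` expression assumes the `status:` line sits in the front matter exactly as written in the plan format above.

```shell
# Hedged sketch of promotion: copy the CEO plan into docs/designs/ and
# flip the front-matter status. All paths here are illustrative.
work=$(mktemp -d)
plan="$work/2026-03-17-user-dashboard.md"
printf -- '---\nstatus: ACTIVE\n---\n# CEO Plan: User Dashboard\n' > "$plan"

feature="user-dashboard"
mkdir -p "$work/docs/designs"
cp "$plan" "$work/docs/designs/$feature.md"   # promoted copy, visible to the team

# Rewrite `status: ACTIVE` to `status: PROMOTED` in the original plan.
tmp=$(mktemp)
sed 's/^status: ACTIVE$/status: PROMOTED/' "$plan" > "$tmp" && mv "$tmp" "$plan"

grep '^status:' "$plan"   # status: PROMOTED
```

Writing to a temp file and moving it into place avoids truncating the plan if `sed` fails midway.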

## Formatting Rules
* NUMBER issues (1, 2, 3...) and LETTERS for options (A, B, C...).
@@ -590,30 +682,37 @@ Parse the output. Find the most recent entry for each skill (pl

## Mode Quick Reference
```
┌────────────────────────────────────────────────────────────────┐
│                        MODE COMPARISON                         │
├─────────────┬──────────────┬──────────────┬────────────────────┤
│             │ EXPANSION    │ HOLD SCOPE   │ REDUCTION          │
├─────────────┼──────────────┼──────────────┼────────────────────┤
│ Scope       │ Push UP      │ Maintain     │ Push DOWN          │
│ 10x check   │ Mandatory    │ Optional     │ Skip               │
│ Platonic    │ Yes          │ No           │ No                 │
│ ideal       │              │              │                    │
│ Delight     │ 5+ items     │ Note if seen │ Skip               │
│ opps        │              │              │                    │
│ Complexity  │ "Is it big   │ "Is it too   │ "Is it the bare    │
│ question    │ enough?"     │ complex?"    │ minimum?"          │
│ Taste       │ Yes          │ No           │ No                 │
│ calibration │              │              │                    │
│ Temporal    │ Full (hr 1-6)│ Key decisions│ Skip               │
│ interrogate │              │ only         │                    │
│ Observ.     │ "Joy to      │ "Can we      │ "Can we see if     │
│ standard    │ operate"     │ debug it?"   │ it's broken?"      │
│ Deploy      │ Infra as     │ Safe deploy  │ Simplest possible  │
│ standard    │ feature scope│ + rollback   │ deploy             │
│ Error map   │ Full + chaos │ Full         │ Critical paths     │
│             │ scenarios    │              │ only               │
│ Phase 2/3   │ Map it       │ Note it      │ Skip               │
│ planning    │              │              │                    │
└─────────────┴──────────────┴──────────────┴────────────────────┘
┌───────────────────────────────────────────────────────────────────────────────┐
│                                MODE COMPARISON                                │
├─────────────┬──────────────┬──────────────┬──────────────┬────────────────────┤
│             │ EXPANSION    │ SELECTIVE    │ HOLD SCOPE   │ REDUCTION          │
├─────────────┼──────────────┼──────────────┼──────────────┼────────────────────┤
│ Scope       │ Push UP      │ Hold + offer │ Maintain     │ Push DOWN          │
│             │ (opt-in)     │              │              │                    │
│ Recommend   │ Enthusiastic │ Neutral      │ N/A          │ N/A                │
│ posture     │              │              │              │                    │
│ 10x check   │ Mandatory    │ Surface as   │ Optional     │ Skip               │
│             │              │ cherry-pick  │              │                    │
│ Platonic    │ Yes          │ No           │ No           │ No                 │
│ ideal       │              │              │              │                    │
│ Delight     │ Opt-in       │ Cherry-pick  │ Note if seen │ Skip               │
│ opps        │ ceremony     │ ceremony     │              │                    │
│ Complexity  │ "Is it big   │ "Is it right │ "Is it too   │ "Is it the bare    │
│ question    │ enough?"     │ + what else  │ complex?"    │ minimum?"          │
│             │              │ is tempting" │              │                    │
│ Taste       │ Yes          │ Yes          │ No           │ No                 │
│ calibration │              │              │              │                    │
│ Temporal    │ Full (hr 1-6)│ Full (hr 1-6)│ Key decisions│ Skip               │
│ interrogate │              │              │ only         │                    │
│ Observ.     │ "Joy to      │ "Joy to      │ "Can we      │ "Can we see if     │
│ standard    │ operate"     │ operate"     │ debug it?"   │ it's broken?"      │
│ Deploy      │ Infra as     │ Safe deploy  │ Safe deploy  │ Simplest possible  │
│ standard    │ feature scope│ + cherry-pick│ + rollback   │ deploy             │
│             │              │ risk check   │              │                    │
│ Error map   │ Full + chaos │ Full + chaos │ Full         │ Critical paths     │
│             │ scenarios    │ for accepted │              │ only               │
│ CEO plan    │ Written      │ Written      │ Skipped      │ Skipped            │
│ Phase 2/3   │ Map accepted │ Map accepted │ Note it      │ Skip               │
│ planning    │              │ cherry-picks │              │                    │
└─────────────┴──────────────┴──────────────┴──────────────┴────────────────────┘
```

+141 -50
@@ -3,9 +3,9 @@ name: plan-ceo-review
version: 1.0.0
description: |
  CEO/founder-mode plan review. Rethink the problem, find the 10-star product,
  challenge premises, expand scope when it creates a better product. Three modes:
  SCOPE EXPANSION (dream big), HOLD SCOPE (maximum rigor), SCOPE REDUCTION
  (strip to essentials).
  challenge premises, expand scope when it creates a better product. Four modes:
  SCOPE EXPANSION (dream big), SELECTIVE EXPANSION (hold scope + cherry-pick
  expansions), HOLD SCOPE (maximum rigor), SCOPE REDUCTION (strip to essentials).
allowed-tools:
  - Read
  - Grep
@@ -23,10 +23,11 @@ allowed-tools:
## Philosophy
You are not here to rubber-stamp this plan. You are here to make it extraordinary, catch every landmine before it explodes, and ensure that when this ships, it ships at the highest possible standard.
But your posture depends on what the user needs:
* SCOPE EXPANSION: You are building a cathedral. Envision the platonic ideal. Push scope UP. Ask "what would make this 10x better for 2x the effort?" The answer to "should we also build X?" is "yes, if it serves the vision." You have permission to dream.
* SCOPE EXPANSION: You are building a cathedral. Envision the platonic ideal. Push scope UP. Ask "what would make this 10x better for 2x the effort?" You have permission to dream — and to recommend enthusiastically. But every expansion is the user's decision. Present each scope-expanding idea as an AskUserQuestion. The user opts in or out.
* SELECTIVE EXPANSION: You are a rigorous reviewer who also has taste. Hold the current scope as your baseline — make it bulletproof. But separately, surface every expansion opportunity you see and present each one individually as an AskUserQuestion so the user can cherry-pick. Neutral recommendation posture — present the opportunity, state effort and risk, let the user decide. Accepted expansions become part of the plan's scope for the remaining sections. Rejected ones go to "NOT in scope."
* HOLD SCOPE: You are a rigorous reviewer. The plan's scope is accepted. Your job is to make it bulletproof — catch every failure mode, test every edge case, ensure observability, map every error path. Do not silently reduce OR expand.
* SCOPE REDUCTION: You are a surgeon. Find the minimum viable version that achieves the core outcome. Cut everything else. Be ruthless.
Critical rule: Once the user selects a mode, COMMIT to it. Do not silently drift toward a different mode. If EXPANSION is selected, do not argue for less work during later sections. If REDUCTION is selected, do not sneak scope back in. Raise concerns once in Step 0 — after that, execute the chosen mode faithfully.
Critical rule: In ALL modes, the user is 100% in control. Every scope change is an explicit opt-in via AskUserQuestion — never silently add or remove scope. Once the user selects a mode, COMMIT to it. Do not silently drift toward a different mode. If EXPANSION is selected, do not argue for less work during later sections. If SELECTIVE EXPANSION is selected, surface expansions as individual decisions — do not silently include or exclude them. If REDUCTION is selected, do not sneak scope back in. Raise concerns once in Step 0 — after that, execute the chosen mode faithfully.
Do NOT make any code changes. Do NOT start implementation. Your only job right now is to review the plan with maximum rigor and the appropriate level of ambition.

## Prime Directives
@@ -82,7 +83,7 @@ Map:
### Retrospective Check
Check the git log for this branch. If there are prior commits suggesting a previous review cycle (review-driven refactors, reverted changes), note what was changed and whether the current plan re-touches those areas. Be MORE aggressive reviewing areas that were previously problematic. Recurring problem areas are architectural smells — surface them as architectural concerns.

### Taste Calibration (EXPANSION mode only)
### Taste Calibration (EXPANSION and SELECTIVE EXPANSION modes)
Identify 2-3 files or patterns in the existing codebase that are particularly well-designed. Note them as style references for the review. Also note 1-2 patterns that are frustrating or poorly designed — these are anti-patterns to avoid repeating.
Report findings before proceeding to Step 0.

@@ -105,10 +106,20 @@ Describe the ideal end state of this system 12 months from now. Does this plan m
```

### 0D. Mode-Specific Analysis
**For SCOPE EXPANSION** — run all three:
**For SCOPE EXPANSION** — run all three, then the opt-in ceremony:
1. 10x check: What's the version that's 10x more ambitious and delivers 10x more value for 2x the effort? Describe it concretely.
2. Platonic ideal: If the best engineer in the world had unlimited time and perfect taste, what would this system look like? What would the user feel when using it? Start from experience, not architecture.
3. Delight opportunities: What adjacent 30-minute improvements would make this feature sing? Things where a user would think "oh nice, they thought of that." List at least 3.
3. Delight opportunities: What adjacent 30-minute improvements would make this feature sing? Things where a user would think "oh nice, they thought of that." List at least 5.
4. **Expansion opt-in ceremony:** Describe the vision first (10x check, platonic ideal). Then distill concrete scope proposals from those visions — individual features, components, or improvements. Present each proposal as its own AskUserQuestion. Recommend enthusiastically — explain why it's worth doing. But the user decides. Options: **A)** Add to this plan's scope **B)** Defer to TODOS.md **C)** Skip. Accepted items become plan scope for all remaining review sections. Rejected items go to "NOT in scope."

**For SELECTIVE EXPANSION** — run the HOLD SCOPE analysis first, then surface expansions:
1. Complexity check: If the plan touches more than 8 files or introduces more than 2 new classes/services, treat that as a smell and challenge whether the same goal can be achieved with fewer moving parts.
2. What is the minimum set of changes that achieves the stated goal? Flag any work that could be deferred without blocking the core objective.
3. Then run the expansion scan (do NOT add these to scope yet — they are candidates):
   - 10x check: What's the version that's 10x more ambitious? Describe it concretely.
   - Delight opportunities: What adjacent 30-minute improvements would make this feature sing? List at least 5.
   - Platform potential: Would any expansion turn this feature into infrastructure other features can build on?
4. **Cherry-pick ceremony:** Present each expansion opportunity as its own individual AskUserQuestion. Neutral recommendation posture — present the opportunity, state effort (S/M/L) and risk, let the user decide without bias. Options: **A)** Add to this plan's scope **B)** Defer to TODOS.md **C)** Skip. If you have more than 8 candidates, present the top 5-6 and note the remainder as lower-priority options the user can request. Accepted items become plan scope for all remaining review sections. Rejected items go to "NOT in scope."

**For HOLD SCOPE** — run this:
1. Complexity check: If the plan touches more than 8 files or introduces more than 2 new classes/services, treat that as a smell and challenge whether the same goal can be achieved with fewer moving parts.
@@ -118,7 +129,57 @@ Describe the ideal end state of this system 12 months from now. Does this plan m
1. Ruthless cut: What is the absolute minimum that ships value to a user? Everything else is deferred. No exceptions.
2. What can be a follow-up PR? Separate "must ship together" from "nice to ship together."

### 0E. Temporal Interrogation (EXPANSION and HOLD modes)
### 0D-POST. Persist CEO Plan (EXPANSION and SELECTIVE EXPANSION only)

After the opt-in/cherry-pick ceremony, write the plan to disk so the vision and decisions survive beyond this conversation. Only run this step for EXPANSION and SELECTIVE EXPANSION modes.

```bash
eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
mkdir -p ~/.gstack/projects/$SLUG/ceo-plans
```

Before writing, check for existing CEO plans in the ceo-plans/ directory. If any are >30 days old or their branch has been merged/deleted, offer to archive them:

```bash
mkdir -p ~/.gstack/projects/$SLUG/ceo-plans/archive
# For each stale plan: mv ~/.gstack/projects/$SLUG/ceo-plans/{old-plan}.md ~/.gstack/projects/$SLUG/ceo-plans/archive/
```
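The ">30 days old" check above can be sketched with `find`. This is a hedged sketch, not gstack's actual implementation: it assumes file modification time is a fair proxy for plan age, uses GNU `touch -d` only to fabricate an old file for the demo, and builds everything in a temp directory rather than the real `~/.gstack` tree.

```shell
# Hedged sketch: move plans older than 30 days into archive/.
# Uses mtime as a proxy for plan age; all paths are illustrative.
plans=$(mktemp -d)
mkdir -p "$plans/archive"
touch "$plans/fresh-plan.md"
touch -d '40 days ago' "$plans/stale-plan.md"   # GNU touch; simulates an old plan

# -mtime +30 matches files last modified more than 30 days ago.
find "$plans" -maxdepth 1 -name '*.md' -mtime +30 \
  -exec mv {} "$plans/archive/" \;

ls "$plans/archive"   # stale-plan.md
```

`-maxdepth 1` keeps `find` from descending into `archive/` and re-matching plans that were already moved.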

Write to `~/.gstack/projects/$SLUG/ceo-plans/{date}-{feature-slug}.md` using this format:

```markdown
---
status: ACTIVE
---
# CEO Plan: {Feature Name}
Generated by /plan-ceo-review on {date}
Branch: {branch} | Mode: {EXPANSION / SELECTIVE EXPANSION}
Repo: {owner/repo}

## Vision

### 10x Check
{10x vision description}

### Platonic Ideal
{platonic ideal description — EXPANSION mode only}

## Scope Decisions

| # | Proposal | Effort | Decision | Reasoning |
|---|----------|--------|----------|-----------|
| 1 | {proposal} | S/M/L | ACCEPTED / DEFERRED / SKIPPED | {why} |

## Accepted Scope (added to this plan)
- {bullet list of what's now in scope}

## Deferred to TODOS.md
- {items with context}
```

Derive the feature slug from the plan being reviewed (e.g., "user-dashboard", "auth-refactor"). Use the date in YYYY-MM-DD format.
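The `{date}-{feature-slug}.md` convention can be sketched like this. Note the `slugify` helper is hypothetical, invented for illustration; gstack's actual slug derivation may differ.

```shell
# Hedged sketch of the {date}-{feature-slug}.md naming convention.
# `slugify` is a hypothetical helper, not part of gstack.
slugify() {
  # lowercase, collapse runs of non-alphanumerics to "-", trim edge hyphens
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '-' \
    | sed 's/^-//; s/-$//'
}

feature="User Dashboard"
name="$(date +%F)-$(slugify "$feature").md"   # e.g. 2026-03-17-user-dashboard.md
echo "$name"
```

`date +%F` emits the required YYYY-MM-DD form, so the filenames sort chronologically in a plain directory listing.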

### 0E. Temporal Interrogation (EXPANSION, SELECTIVE EXPANSION, and HOLD modes)
Think ahead to implementation: What decisions will need to be made during implementation that should be resolved NOW in the plan?
```
HOUR 1 (foundations): What does the implementer need to know?
@@ -129,17 +190,22 @@ Think ahead to implementation: What decisions will need to be made during implem
Surface these as questions for the user NOW, not as "figure it out later."

### 0F. Mode Selection
Present three options:
1. **SCOPE EXPANSION:** The plan is good but could be great. Propose the ambitious version, then review that. Push scope up. Build the cathedral.
2. **HOLD SCOPE:** The plan's scope is right. Review it with maximum rigor — architecture, security, edge cases, observability, deployment. Make it bulletproof.
3. **SCOPE REDUCTION:** The plan is overbuilt or wrong-headed. Propose a minimal version that achieves the core goal, then review that.
In every mode, you are 100% in control. No scope is added without your explicit approval.

Present four options:
1. **SCOPE EXPANSION:** The plan is good but could be great. Dream big — propose the ambitious version. Every expansion is presented individually for your approval. You opt in to each one.
2. **SELECTIVE EXPANSION:** The plan's scope is the baseline, but you want to see what else is possible. Every expansion opportunity presented individually — you cherry-pick the ones worth doing. Neutral recommendations.
3. **HOLD SCOPE:** The plan's scope is right. Review it with maximum rigor — architecture, security, edge cases, observability, deployment. Make it bulletproof. No expansions surfaced.
4. **SCOPE REDUCTION:** The plan is overbuilt or wrong-headed. Propose a minimal version that achieves the core goal, then review that.

Context-dependent defaults:
* Greenfield feature → default EXPANSION
* Feature enhancement or iteration on existing system → default SELECTIVE EXPANSION
* Bug fix or hotfix → default HOLD SCOPE
* Refactor → default HOLD SCOPE
* Plan touching >15 files → suggest REDUCTION unless user pushes back
* User says "go big" / "ambitious" / "cathedral" → EXPANSION, no question
* User says "hold scope but tempt me" / "show me options" / "cherry-pick" → SELECTIVE EXPANSION, no question

Once selected, commit fully. Do not silently drift.
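The defaults table above can be sketched as a lookup. This is purely illustrative: the `default_mode` function and its context labels are invented for this sketch, and the real skill makes the call from conversation context, not code.

```shell
# Hedged sketch of the context-to-default-mode table.
# `default_mode` and its labels are illustrative only.
default_mode() {
  case "$1" in
    greenfield)              echo "SCOPE EXPANSION" ;;
    enhancement)             echo "SELECTIVE EXPANSION" ;;
    bugfix|hotfix|refactor)  echo "HOLD SCOPE" ;;
    oversized)               echo "SCOPE REDUCTION" ;;   # plan touching >15 files
    *)                       echo "ASK" ;;               # no confident default
  esac
}

default_mode greenfield   # SCOPE EXPANSION
default_mode refactor     # HOLD SCOPE
```

The fall-through `ASK` arm mirrors the skill's behavior of presenting the four options when no default clearly applies.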
|
||||
**STOP.** AskUserQuestion once per issue. Do NOT batch. Recommend + WHY. If no issues or fix is obvious, state what you'll do and move on — don't waste a question. Do NOT proceed until user responds.
|
||||
@@ -162,10 +228,12 @@ Evaluate and diagram:
|
||||
* Production failure scenarios. For each new integration point, describe one realistic production failure (timeout, cascade, data corruption, auth failure) and whether the plan accounts for it.
|
||||
* Rollback posture. If this ships and immediately breaks, what's the rollback procedure? Git revert? Feature flag? DB migration rollback? How long?
|
||||
|
||||
**EXPANSION mode additions:**
|
||||
**EXPANSION and SELECTIVE EXPANSION additions:**
|
||||
* What would make this architecture beautiful? Not just correct — elegant. Is there a design that would make a new engineer joining in 6 months say "oh, that's clever and obvious at the same time"?
|
||||
* What infrastructure would make this feature a platform that other features can build on?
|
||||
|
||||
**SELECTIVE EXPANSION:** If any accepted cherry-picks from Step 0D affect the architecture, evaluate their architectural fit here. Flag any that create coupling concerns or don't integrate cleanly — this is a chance to revisit the decision with new information.
|
||||
|
||||
Required ASCII diagram: full system architecture showing new components and their relationships to existing ones.
|
||||
**STOP.** AskUserQuestion once per issue. Do NOT batch. Recommend + WHY. If no issues or fix is obvious, state what you'll do and move on — don't waste a question. Do NOT proceed until user responds.
|
||||
|
||||
@@ -324,8 +392,8 @@ Evaluate:
|
||||
* Admin tooling. New operational tasks that need admin UI or rake tasks?
|
||||
* Runbooks. For each new failure mode: what's the operational response?
|
||||
|
||||
**EXPANSION mode addition:**
|
||||
* What observability would make this feature a joy to operate?
|
||||
**EXPANSION and SELECTIVE EXPANSION addition:**
|
||||
* What observability would make this feature a joy to operate? (For SELECTIVE EXPANSION, include observability for any accepted cherry-picks.)
|
||||
**STOP.** AskUserQuestion once per issue. Do NOT batch. Recommend + WHY. If no issues or fix is obvious, state what you'll do and move on — don't waste a question. Do NOT proceed until user responds.
|
||||
|
||||
### Section 9: Deployment & Rollout Review
|
||||
@@ -339,8 +407,8 @@ Evaluate:
|
||||
* Post-deploy verification checklist. First 5 minutes? First hour?
|
||||
* Smoke tests. What automated checks should run immediately post-deploy?
|
||||
|
||||
**EXPANSION mode addition:**
|
||||
* What deploy infrastructure would make shipping this feature routine?
|
||||
**EXPANSION and SELECTIVE EXPANSION addition:**
|
||||
* What deploy infrastructure would make shipping this feature routine? (For SELECTIVE EXPANSION, assess whether accepted cherry-picks change the deployment risk profile.)
|
||||
**STOP.** AskUserQuestion once per issue. Do NOT batch. Recommend + WHY. If no issues or fix is obvious, state what you'll do and move on — don't waste a question. Do NOT proceed until user responds.
|
||||
|
||||
### Section 10: Long-Term Trajectory Review
|
||||
@@ -352,9 +420,10 @@ Evaluate:
|
||||
* Ecosystem fit. Aligns with Rails/JS ecosystem direction?
|
||||
* The 1-year question. Read this plan as a new engineer in 12 months — obvious?
|
||||
|
||||
**EXPANSION mode additions:**
|
||||
**EXPANSION and SELECTIVE EXPANSION additions:**
|
||||
* What comes after this ships? Phase 2? Phase 3? Does the architecture support that trajectory?
|
||||
* Platform potential. Does this create capabilities other features can leverage?
|
||||
* (SELECTIVE EXPANSION only) Retrospective: Were the right cherry-picks accepted? Did any rejected expansions turn out to be load-bearing for the accepted ones?
|
||||
**STOP.** AskUserQuestion once per issue. Do NOT batch. Recommend + WHY. If no issues or fix is obvious, state what you'll do and move on — don't waste a question. Do NOT proceed until user responds.
|
||||
|
||||
## CRITICAL RULE — How to ask questions
|
||||
@@ -403,8 +472,11 @@ For each TODO, describe:
|
||||
|
||||
Then present options: **A)** Add to TODOS.md **B)** Skip — not valuable enough **C)** Build it now in this PR instead of deferring.
|
||||
|
||||
### Delight Opportunities (EXPANSION mode only)
|
||||
Identify at least 5 "bonus chunk" opportunities (<30 min each) that would make users think "oh nice, they thought of that." Present each delight opportunity as its own individual AskUserQuestion. Never batch them. For each one, describe what it is, why it would delight users, and effort estimate. Then present options: **A)** Add to TODOS.md as a vision item **B)** Skip **C)** Build it now in this PR.
|
||||
### Scope Expansion Decisions (EXPANSION and SELECTIVE EXPANSION only)
|
||||
For EXPANSION and SELECTIVE EXPANSION modes: expansion opportunities and delight items were surfaced and decided in Step 0D (opt-in/cherry-pick ceremony). The decisions are persisted in the CEO plan document. Reference the CEO plan for the full record. Do not re-surface them here — list the accepted expansions for completeness:
|
||||
* Accepted: {list items added to scope}
|
||||
* Deferred: {list items sent to TODOS.md}
|
||||
* Skipped: {list items rejected}
|
||||
|
||||
### Diagrams (mandatory, produce all that apply)
|
||||
1. System architecture
|
||||
@@ -422,7 +494,7 @@ List every ASCII diagram in files this plan touches. Still accurate?
|
||||
+====================================================================+
|
||||
| MEGA PLAN REVIEW — COMPLETION SUMMARY |
|
||||
+====================================================================+
|
||||
| Mode selected | EXPANSION / HOLD / REDUCTION |
|
||||
| Mode selected | EXPANSION / SELECTIVE / HOLD / REDUCTION |
|
||||
| System Audit | [key findings] |
|
||||
| Step 0 | [mode + key decisions] |
|
||||
| Section 1 (Arch) | ___ issues found |
|
||||
@@ -442,7 +514,8 @@ List every ASCII diagram in files this plan touches. Still accurate?
|
||||
| Error/rescue registry| ___ methods, ___ CRITICAL GAPS |
|
||||
| Failure modes | ___ total, ___ CRITICAL GAPS |
|
||||
| TODOS.md updates | ___ items proposed |
|
||||
| Delight opportunities| ___ identified (EXPANSION only) |
|
||||
| Scope proposals | ___ proposed, ___ accepted (EXP + SEL) |
|
||||
| CEO plan | written / skipped (HOLD/REDUCTION) |
|
||||
| Diagrams produced | ___ (list types) |
|
||||
| Stale diagrams found | ___ |
|
||||
| Unresolved decisions | ___ (listed below) |
|
||||
@@ -467,10 +540,21 @@ Before running this command, substitute the placeholder values from the Completi
|
||||
- **STATUS**: "clean" if 0 unresolved decisions AND 0 critical gaps; otherwise "issues_open"
|
||||
- **unresolved**: number from "Unresolved decisions" in the summary
|
||||
- **critical_gaps**: number from "Failure modes: ___ CRITICAL GAPS" in the summary
|
||||
- **MODE**: the mode the user selected (SCOPE_EXPANSION / HOLD_SCOPE / SCOPE_REDUCTION)
|
||||
- **MODE**: the mode the user selected (SCOPE_EXPANSION / SELECTIVE_EXPANSION / HOLD_SCOPE / SCOPE_REDUCTION)
|
||||
|
||||
{{REVIEW_DASHBOARD}}
|
||||
|
||||
## docs/designs Promotion (EXPANSION and SELECTIVE EXPANSION only)
|
||||
|
||||
At the end of the review, if the vision produced a compelling feature direction, offer to promote the CEO plan to the project repo. AskUserQuestion:
|
||||
|
||||
"The vision from this review produced {N} accepted scope expansions. Want to promote it to a design doc in the repo?"
|
||||
- **A)** Promote to `docs/designs/{FEATURE}.md` (committed to repo, visible to the team)
|
||||
- **B)** Keep in `~/.gstack/projects/` only (local, personal reference)
|
||||
- **C)** Skip
|
||||
|
||||
If promoted, copy the CEO plan content to `docs/designs/{FEATURE}.md` (create the directory if needed) and update the `status` field in the original CEO plan from `ACTIVE` to `PROMOTED`.
|
||||
|
||||
## Formatting Rules
* NUMBER issues (1, 2, 3...) and LETTERS for options (A, B, C...).
* Label with NUMBER + LETTER (e.g., "3A", "3B").
@@ -480,30 +564,37 @@ Before running this command, substitute the placeholder values from the Completi

## Mode Quick Reference
```
┌─────────────────────────────────────────────────────────────────┐
│                        MODE COMPARISON                          │
├─────────────┬──────────────┬──────────────┬────────────────────┤
│             │ EXPANSION    │ HOLD SCOPE   │ REDUCTION          │
├─────────────┼──────────────┼──────────────┼────────────────────┤
│ Scope       │ Push UP      │ Maintain     │ Push DOWN          │
│ 10x check   │ Mandatory    │ Optional     │ Skip               │
│ Platonic    │ Yes          │ No           │ No                 │
│ ideal       │              │              │                    │
│ Delight     │ 5+ items     │ Note if seen │ Skip               │
│ opps        │              │              │                    │
│ Complexity  │ "Is it big   │ "Is it too   │ "Is it the bare    │
│ question    │ enough?"     │ complex?"    │ minimum?"          │
│ Taste       │ Yes          │ No           │ No                 │
│ calibration │              │              │                    │
│ Temporal    │ Full (hr 1-6)│ Key decisions│ Skip               │
│ interrogate │              │ only         │                    │
│ Observ.     │ "Joy to      │ "Can we      │ "Can we see if     │
│ standard    │ operate"     │ debug it?"   │ it's broken?"      │
│ Deploy      │ Infra as     │ Safe deploy  │ Simplest possible  │
│ standard    │ feature scope│ + rollback   │ deploy             │
│ Error map   │ Full + chaos │ Full         │ Critical paths     │
│             │ scenarios    │              │ only               │
│ Phase 2/3   │ Map it       │ Note it      │ Skip               │
│ planning    │              │              │                    │
└─────────────┴──────────────┴──────────────┴────────────────────┘
┌────────────────────────────────────────────────────────────────────────────────┐
│                                MODE COMPARISON                                 │
├─────────────┬──────────────┬──────────────┬──────────────┬────────────────────┤
│             │ EXPANSION    │ SELECTIVE    │ HOLD SCOPE   │ REDUCTION          │
├─────────────┼──────────────┼──────────────┼──────────────┼────────────────────┤
│ Scope       │ Push UP      │ Hold + offer │ Maintain     │ Push DOWN          │
│             │ (opt-in)     │              │              │                    │
│ Recommend   │ Enthusiastic │ Neutral      │ N/A          │ N/A                │
│ posture     │              │              │              │                    │
│ 10x check   │ Mandatory    │ Surface as   │ Optional     │ Skip               │
│             │              │ cherry-pick  │              │                    │
│ Platonic    │ Yes          │ No           │ No           │ No                 │
│ ideal       │              │              │              │                    │
│ Delight     │ Opt-in       │ Cherry-pick  │ Note if seen │ Skip               │
│ opps        │ ceremony     │ ceremony     │              │                    │
│ Complexity  │ "Is it big   │ "Is it right │ "Is it too   │ "Is it the bare    │
│ question    │ enough?"     │ + what else  │ complex?"    │ minimum?"          │
│             │              │ is tempting" │              │                    │
│ Taste       │ Yes          │ Yes          │ No           │ No                 │
│ calibration │              │              │              │                    │
│ Temporal    │ Full (hr 1-6)│ Full (hr 1-6)│ Key decisions│ Skip               │
│ interrogate │              │              │ only         │                    │
│ Observ.     │ "Joy to      │ "Joy to      │ "Can we      │ "Can we see if     │
│ standard    │ operate"     │ operate"     │ debug it?"   │ it's broken?"      │
│ Deploy      │ Infra as     │ Safe deploy  │ Safe deploy  │ Simplest possible  │
│ standard    │ feature scope│ + cherry-pick│ + rollback   │ deploy             │
│             │              │ risk check   │              │                    │
│ Error map   │ Full + chaos │ Full + chaos │ Full         │ Critical paths     │
│             │ scenarios    │ for accepted │              │ only               │
│ CEO plan    │ Written      │ Written      │ Skipped      │ Skipped            │
│ Phase 2/3   │ Map accepted │ Map accepted │ Note it      │ Skip               │
│ planning    │              │ cherry-picks │              │                    │
└─────────────┴──────────────┴──────────────┴──────────────┴────────────────────┘
```

+18 -10
@@ -576,11 +576,13 @@ Substitute values from the report:

## Review Readiness Dashboard

After completing the review, read the review log to display the dashboard.
After completing the review, read the review log and config to display the dashboard.

```bash
eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
cat ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl 2>/dev/null || echo "NO_REVIEWS"
echo "---CONFIG---"
~/.claude/skills/gstack/bin/gstack-config get skip_eng_review 2>/dev/null || echo "false"
```

Parse the output. Find the most recent entry for each skill (plan-ceo-review, plan-eng-review, plan-design-review). Ignore entries with timestamps older than 7 days. Display:
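The "most recent entry per skill" step needs only POSIX tools, provided each JSONL line carries a sortable ISO timestamp. A minimal sketch, assuming a hypothetical log shape with `skill`, `ts`, and `status` fields (not the actual log schema):

```shell
# Hypothetical log shape: one {"skill":...,"ts":...,"status":...} per line.
log=$(mktemp)
cat > "$log" <<'EOF'
{"skill":"plan-eng-review","ts":"2026-03-15T09:00:00Z","status":"issues_open"}
{"skill":"plan-eng-review","ts":"2026-03-16T15:00:00Z","status":"clean"}
{"skill":"plan-ceo-review","ts":"2026-03-16T14:30:00Z","status":"clean"}
EOF
# ISO-8601 timestamps sort lexicographically, so a plain line sort + tail
# yields the newest entry for one skill.
latest=$(grep '"skill":"plan-eng-review"' "$log" | sort | tail -n 1)
echo "$latest"
```

Because the sort is on whole lines, this relies on the `skill` and `ts` keys appearing in a fixed order, which is an assumption about the log writer.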
@@ -589,17 +591,23 @@ Parse the output. Find the most recent entry for each skill (plan-ceo-review, pl
+====================================================================+
|                     REVIEW READINESS DASHBOARD                     |
+====================================================================+
| Review          | Runs | Last Run            | Status               |
|-----------------|------|---------------------|----------------------|
| CEO Review      | 1    | 2026-03-16 14:30    | CLEAR                |
| Eng Review      | 1    | 2026-03-16 15:00    | CLEAR                |
| Design Review   | 0    | —                   | NOT YET RUN          |
| Review          | Runs | Last Run            | Status    | Required |
|-----------------|------|---------------------|-----------|----------|
| Eng Review      | 1    | 2026-03-16 15:00    | CLEAR     | YES      |
| CEO Review      | 0    | —                   | —         | no       |
| Design Review   | 0    | —                   | —         | no       |
+--------------------------------------------------------------------+
| VERDICT: 2/3 CLEAR — Design Review not yet run                     |
| VERDICT: CLEARED — Eng Review passed                               |
+====================================================================+
```

**Review tiers:**
- **Eng Review (required by default):** The only review that gates shipping. Covers architecture, code quality, tests, performance. Can be disabled globally with \`gstack-config set skip_eng_review true\` (the "don't bother me" setting).
- **CEO Review (optional):** Use your judgment. Recommend it for big product/business changes, new user-facing features, or scope decisions. Skip for bug fixes, refactors, infra, and cleanup.
- **Design Review (optional):** Use your judgment. Recommend it for UI/UX changes. Skip for backend-only, infra, or prompt-only changes.

**Verdict logic:**
- **CLEARED TO SHIP (3/3)**: All three have >= 1 entry within 7 days AND most recent status is "clean"
- **N/3 CLEAR**: Show count and list which are missing, have open issues, or are stale (>7 days)
- Informational only — does NOT block.
- **CLEARED**: Eng Review has >= 1 entry within 7 days with status "clean" (or \`skip_eng_review\` is \`true\`)
- **NOT CLEARED**: Eng Review missing, stale (>7 days), or has open issues
- CEO and Design reviews are shown for context but never block shipping
- If \`skip_eng_review\` config is \`true\`, Eng Review shows "SKIPPED (global)" and verdict is CLEARED

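The verdict rule above reduces to a short conditional. A sketch, assuming `status`, `ts`, and `skip` were already parsed from the log and config; the hard-coded `cutoff` stands in for "7 days ago" (with GNU date: `date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ`):

```shell
# Stand-in values for fields parsed from the review log and config.
status="clean"; ts="2026-03-16T15:00:00Z"; skip="false"
cutoff="2026-03-10T00:00:00Z"   # assumption: precomputed "7 days ago"

if [ "$skip" = "true" ]; then
  verdict="CLEARED"             # Eng Review shows "SKIPPED (global)"
elif [ "$status" = "clean" ] && [[ "$ts" > "$cutoff" ]]; then
  verdict="CLEARED"             # clean and not stale
else
  verdict="NOT CLEARED"         # missing, stale, or issues_open
fi
echo "$verdict"
```

The `[[ "$ts" > "$cutoff" ]]` string comparison is a bash-ism; it works here because ISO timestamps order lexicographically.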
+24 -18
@@ -110,12 +110,11 @@ Before reviewing anything, answer these questions:
3. **Complexity check:** If the plan touches more than 8 files or introduces more than 2 new classes/services, treat that as a smell and challenge whether the same goal can be achieved with fewer moving parts.
4. **TODOS cross-reference:** Read `TODOS.md` if it exists. Are any deferred items blocking this plan? Can any deferred items be bundled into this PR without expanding scope? Does this plan create new work that should be captured as a TODO?

Then ask if I want one of three options:
1. **SCOPE REDUCTION:** The plan is overbuilt. Propose a minimal version that achieves the core goal, then review that.
2. **BIG CHANGE:** Work through interactively, one section at a time (Architecture → Code Quality → Tests → Performance) with at most 8 top issues per section.
3. **SMALL CHANGE:** Compressed review — Step 0 + one combined pass covering all 4 sections. For each section, pick the single most important issue (think hard — this forces you to prioritize). Present as a single numbered list with lettered options + mandatory test diagram + completion summary. One AskUserQuestion round at the end. For each issue in the batch, state your recommendation and explain WHY, with lettered options.
If the complexity check triggers (more than 8 files or more than 2 new classes/services), proactively recommend scope reduction via AskUserQuestion — explain what's overbuilt, propose a minimal version that achieves the core goal, and ask whether to reduce or proceed as-is. If the complexity check does not trigger, present your Step 0 findings and proceed directly to Section 1.

**Critical: If I do not select SCOPE REDUCTION, respect that decision fully.** Your job becomes making the plan I chose succeed, not continuing to lobby for a smaller plan. Raise scope concerns once in Step 0 — after that, commit to my chosen scope and optimize within it. Do not silently reduce scope, skip planned components, or re-argue for less work during later review sections.
Always work through the full interactive review: one section at a time (Architecture → Code Quality → Tests → Performance) with at most 8 top issues per section.

**Critical: Once the user accepts or rejects a scope reduction recommendation, commit fully.** Do not re-argue for smaller scope during later review sections. Do not silently reduce scope or skip planned components.

## Review Sections (after scope is agreed)

@@ -201,7 +200,6 @@ Follow the AskUserQuestion format from the Preamble above. Additional rules for
* **Map the reasoning to my engineering preferences above.** One sentence connecting your recommendation to a specific preference (DRY, explicit > clever, minimal diff, etc.).
* Label with issue NUMBER + option LETTER (e.g., "3A", "3B").
* **Escape hatch:** If a section has no issues, say so and move on. If an issue has an obvious fix with no real alternatives, state what you'll do and move on — don't waste a question on it. Only use AskUserQuestion when there is a genuine decision with meaningful tradeoffs.
* **Exception:** SMALL CHANGE mode intentionally batches one issue per section into a single AskUserQuestion at the end — but each issue in that batch still requires its own recommendation + WHY + lettered options.

## Required outputs

@@ -239,7 +237,7 @@ If any failure mode has no test AND no error handling AND would be silent, flag

### Completion summary
At the end of the review, fill in and display this summary so the user can see all findings at a glance:
- Step 0: Scope Challenge (user chose: ___)
- Step 0: Scope Challenge — ___ (scope accepted as-is / scope reduced per recommendation)
- Architecture Review: ___ issues found
- Code Quality Review: ___ issues found
- Test Review: diagram produced, ___ gaps identified
@@ -273,15 +271,17 @@ Substitute values from the Completion Summary:
- **STATUS**: "clean" if 0 unresolved decisions AND 0 critical gaps; otherwise "issues_open"
- **unresolved**: number from "Unresolved decisions" count
- **critical_gaps**: number from "Failure modes: ___ critical gaps flagged"
- **MODE**: SCOPE_REDUCTION / BIG_CHANGE / SMALL_CHANGE
- **MODE**: FULL_REVIEW / SCOPE_REDUCED

## Review Readiness Dashboard

After completing the review, read the review log to display the dashboard.
After completing the review, read the review log and config to display the dashboard.

```bash
eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
cat ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl 2>/dev/null || echo "NO_REVIEWS"
echo "---CONFIG---"
~/.claude/skills/gstack/bin/gstack-config get skip_eng_review 2>/dev/null || echo "false"
```

Parse the output. Find the most recent entry for each skill (plan-ceo-review, plan-eng-review, plan-design-review). Ignore entries with timestamps older than 7 days. Display:
@@ -290,20 +290,26 @@ Parse the output. Find the most recent entry for each skill (plan-ceo-review, pl
+====================================================================+
|                     REVIEW READINESS DASHBOARD                     |
+====================================================================+
| Review          | Runs | Last Run            | Status               |
|-----------------|------|---------------------|----------------------|
| CEO Review      | 1    | 2026-03-16 14:30    | CLEAR                |
| Eng Review      | 1    | 2026-03-16 15:00    | CLEAR                |
| Design Review   | 0    | —                   | NOT YET RUN          |
| Review          | Runs | Last Run            | Status    | Required |
|-----------------|------|---------------------|-----------|----------|
| Eng Review      | 1    | 2026-03-16 15:00    | CLEAR     | YES      |
| CEO Review      | 0    | —                   | —         | no       |
| Design Review   | 0    | —                   | —         | no       |
+--------------------------------------------------------------------+
| VERDICT: 2/3 CLEAR — Design Review not yet run                     |
| VERDICT: CLEARED — Eng Review passed                               |
+====================================================================+
```

**Review tiers:**
- **Eng Review (required by default):** The only review that gates shipping. Covers architecture, code quality, tests, performance. Can be disabled globally with \`gstack-config set skip_eng_review true\` (the "don't bother me" setting).
- **CEO Review (optional):** Use your judgment. Recommend it for big product/business changes, new user-facing features, or scope decisions. Skip for bug fixes, refactors, infra, and cleanup.
- **Design Review (optional):** Use your judgment. Recommend it for UI/UX changes. Skip for backend-only, infra, or prompt-only changes.

**Verdict logic:**
- **CLEARED TO SHIP (3/3)**: All three have >= 1 entry within 7 days AND most recent status is "clean"
- **N/3 CLEAR**: Show count and list which are missing, have open issues, or are stale (>7 days)
- Informational only — does NOT block.
- **CLEARED**: Eng Review has >= 1 entry within 7 days with status "clean" (or \`skip_eng_review\` is \`true\`)
- **NOT CLEARED**: Eng Review missing, stale (>7 days), or has open issues
- CEO and Design reviews are shown for context but never block shipping
- If \`skip_eng_review\` config is \`true\`, Eng Review shows "SKIPPED (global)" and verdict is CLEARED

## Unresolved decisions
If the user does not respond to an AskUserQuestion or interrupts to move on, note which decisions were left unresolved. At the end of the review, list these as "Unresolved decisions that may bite you later" — never silently default to an option.

@@ -45,12 +45,11 @@ Before reviewing anything, answer these questions:
3. **Complexity check:** If the plan touches more than 8 files or introduces more than 2 new classes/services, treat that as a smell and challenge whether the same goal can be achieved with fewer moving parts.
4. **TODOS cross-reference:** Read `TODOS.md` if it exists. Are any deferred items blocking this plan? Can any deferred items be bundled into this PR without expanding scope? Does this plan create new work that should be captured as a TODO?

Then ask if I want one of three options:
1. **SCOPE REDUCTION:** The plan is overbuilt. Propose a minimal version that achieves the core goal, then review that.
2. **BIG CHANGE:** Work through interactively, one section at a time (Architecture → Code Quality → Tests → Performance) with at most 8 top issues per section.
3. **SMALL CHANGE:** Compressed review — Step 0 + one combined pass covering all 4 sections. For each section, pick the single most important issue (think hard — this forces you to prioritize). Present as a single numbered list with lettered options + mandatory test diagram + completion summary. One AskUserQuestion round at the end. For each issue in the batch, state your recommendation and explain WHY, with lettered options.
If the complexity check triggers (more than 8 files or more than 2 new classes/services), proactively recommend scope reduction via AskUserQuestion — explain what's overbuilt, propose a minimal version that achieves the core goal, and ask whether to reduce or proceed as-is. If the complexity check does not trigger, present your Step 0 findings and proceed directly to Section 1.

**Critical: If I do not select SCOPE REDUCTION, respect that decision fully.** Your job becomes making the plan I chose succeed, not continuing to lobby for a smaller plan. Raise scope concerns once in Step 0 — after that, commit to my chosen scope and optimize within it. Do not silently reduce scope, skip planned components, or re-argue for less work during later review sections.
Always work through the full interactive review: one section at a time (Architecture → Code Quality → Tests → Performance) with at most 8 top issues per section.

**Critical: Once the user accepts or rejects a scope reduction recommendation, commit fully.** Do not re-argue for smaller scope during later review sections. Do not silently reduce scope or skip planned components.

## Review Sections (after scope is agreed)

@@ -136,7 +135,6 @@ Follow the AskUserQuestion format from the Preamble above. Additional rules for
* **Map the reasoning to my engineering preferences above.** One sentence connecting your recommendation to a specific preference (DRY, explicit > clever, minimal diff, etc.).
* Label with issue NUMBER + option LETTER (e.g., "3A", "3B").
* **Escape hatch:** If a section has no issues, say so and move on. If an issue has an obvious fix with no real alternatives, state what you'll do and move on — don't waste a question on it. Only use AskUserQuestion when there is a genuine decision with meaningful tradeoffs.
* **Exception:** SMALL CHANGE mode intentionally batches one issue per section into a single AskUserQuestion at the end — but each issue in that batch still requires its own recommendation + WHY + lettered options.

## Required outputs

@@ -174,7 +172,7 @@ If any failure mode has no test AND no error handling AND would be silent, flag

### Completion summary
At the end of the review, fill in and display this summary so the user can see all findings at a glance:
- Step 0: Scope Challenge (user chose: ___)
- Step 0: Scope Challenge — ___ (scope accepted as-is / scope reduced per recommendation)
- Architecture Review: ___ issues found
- Code Quality Review: ___ issues found
- Test Review: diagram produced, ___ gaps identified
@@ -208,7 +206,7 @@ Substitute values from the Completion Summary:
- **STATUS**: "clean" if 0 unresolved decisions AND 0 critical gaps; otherwise "issues_open"
- **unresolved**: number from "Unresolved decisions" count
- **critical_gaps**: number from "Failure modes: ___ critical gaps flagged"
- **MODE**: SCOPE_REDUCTION / BIG_CHANGE / SMALL_CHANGE
- **MODE**: FULL_REVIEW / SCOPE_REDUCED

{{REVIEW_DASHBOARD}}

+169 -1
@@ -14,6 +14,7 @@ allowed-tools:
- Glob
- Grep
- AskUserQuestion
- WebSearch
---
<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
<!-- Regenerate: bun run gen:skill-docs -->
@@ -136,6 +137,161 @@ If `NEEDS_SETUP`:
2. Run: `cd <SKILL_DIR> && ./setup`
3. If `bun` is not installed: `curl -fsSL https://bun.sh/install | bash`

**Check test framework (bootstrap if needed):**

## Test Framework Bootstrap

**Detect existing test framework and project runtime:**

```bash
# Detect project runtime
[ -f Gemfile ] && echo "RUNTIME:ruby"
[ -f package.json ] && echo "RUNTIME:node"
[ -f requirements.txt ] || [ -f pyproject.toml ] && echo "RUNTIME:python"
[ -f go.mod ] && echo "RUNTIME:go"
[ -f Cargo.toml ] && echo "RUNTIME:rust"
[ -f composer.json ] && echo "RUNTIME:php"
[ -f mix.exs ] && echo "RUNTIME:elixir"
# Detect sub-frameworks
[ -f Gemfile ] && grep -q "rails" Gemfile 2>/dev/null && echo "FRAMEWORK:rails"
[ -f package.json ] && grep -q '"next"' package.json 2>/dev/null && echo "FRAMEWORK:nextjs"
# Check for existing test infrastructure
ls jest.config.* vitest.config.* playwright.config.* .rspec pytest.ini pyproject.toml phpunit.xml 2>/dev/null
ls -d test/ tests/ spec/ __tests__/ cypress/ e2e/ 2>/dev/null
# Check opt-out marker
[ -f .gstack/no-test-bootstrap ] && echo "BOOTSTRAP_DECLINED"
```

**If test framework detected** (config files or test directories found):
Print "Test framework detected: {name} ({N} existing tests). Skipping bootstrap."
Read 2-3 existing test files to learn conventions (naming, imports, assertion style, setup patterns).
Store conventions as prose context for use in Phase 8e.5 or Step 3.4. **Skip the rest of bootstrap.**

**If BOOTSTRAP_DECLINED** appears: Print "Test bootstrap previously declined — skipping." **Skip the rest of bootstrap.**

**If NO runtime detected** (no config files found): Use AskUserQuestion:
"I couldn't detect your project's language. What runtime are you using?"
Options: A) Node.js/TypeScript B) Ruby/Rails C) Python D) Go E) Rust F) PHP G) Elixir H) This project doesn't need tests.
If user picks H → write `.gstack/no-test-bootstrap` and continue without tests.

**If runtime detected but no test framework — bootstrap:**

### B2. Research best practices

Use WebSearch to find current best practices for the detected runtime:
- `"[runtime] best test framework 2025 2026"`
- `"[framework A] vs [framework B] comparison"`

If WebSearch is unavailable, use this built-in knowledge table:

| Runtime | Primary recommendation | Alternative |
|---------|------------------------|-------------|
| Ruby/Rails | minitest + fixtures + capybara | rspec + factory_bot + shoulda-matchers |
| Node.js | vitest + @testing-library | jest + @testing-library |
| Next.js | vitest + @testing-library/react + playwright | jest + cypress |
| Python | pytest + pytest-cov | unittest |
| Go | stdlib testing + testify | stdlib only |
| Rust | cargo test (built-in) + mockall | — |
| PHP | phpunit + mockery | pest |
| Elixir | ExUnit (built-in) + ex_machina | — |

### B3. Framework selection

Use AskUserQuestion:
"I detected this is a [Runtime/Framework] project with no test framework. I researched current best practices. Here are the options:
A) [Primary] — [rationale]. Includes: [packages]. Supports: unit, integration, smoke, e2e
B) [Alternative] — [rationale]. Includes: [packages]
C) Skip — don't set up testing right now
RECOMMENDATION: Choose A because [reason based on project context]"

If user picks C → write `.gstack/no-test-bootstrap`. Tell user: "If you change your mind later, delete `.gstack/no-test-bootstrap` and re-run." Continue without tests.

If multiple runtimes detected (monorepo) → ask which runtime to set up first, with option to do both sequentially.

### B4. Install and configure

1. Install the chosen packages (npm/bun/gem/pip/etc.)
2. Create minimal config file
3. Create directory structure (test/, spec/, etc.)
4. Create one example test matching the project's code to verify setup works

If package installation fails → debug once. If still failing → revert with `git checkout -- package.json package-lock.json` (or equivalent for the runtime). Warn user and continue without tests.

### B4.5. First real tests

Generate 3-5 real tests for existing code:

1. **Find recently changed files:** `git log --since=30.days --name-only --format="" | sort | uniq -c | sort -rn | head -10`
2. **Prioritize by risk:** Error handlers > business logic with conditionals > API endpoints > pure functions
3. **For each file:** Write one test that tests real behavior with meaningful assertions. Never `expect(x).toBeDefined()` — test what the code DOES.
4. Run each test. Passes → keep. Fails → fix once. Still fails → delete silently.
5. Generate at least 1 test, cap at 5.

Never import secrets, API keys, or credentials in test files. Use environment variables or test fixtures.

### B5. Verify

```bash
# Run the full test suite to confirm everything works
{detected test command}
```

If tests fail → debug once. If still failing → revert all bootstrap changes and warn user.

### B5.5. CI/CD pipeline

```bash
# Check CI provider
ls -d .github/ 2>/dev/null && echo "CI:github"
ls .gitlab-ci.yml .circleci/ bitrise.yml 2>/dev/null
```

If `.github/` exists (or no CI detected — default to GitHub Actions):
Create `.github/workflows/test.yml` with:
- `runs-on: ubuntu-latest`
- Appropriate setup action for the runtime (setup-node, setup-ruby, setup-python, etc.)
- The same test command verified in B5
- Trigger: push + pull_request

If non-GitHub CI detected → skip CI generation with note: "Detected {provider} — CI pipeline generation supports GitHub Actions only. Add test step to your existing pipeline manually."

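As a concrete sketch, the generated workflow for a Node.js project might look like the following. This is illustrative only (written into a throwaway directory here); the real step targets the repo root and substitutes the runtime's setup action and the test command verified in B5.

```shell
work=$(mktemp -d) && cd "$work"   # throwaway dir; real step runs in the repo
mkdir -p .github/workflows
cat > .github/workflows/test.yml <<'EOF'
name: test
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4   # swap for setup-ruby, setup-python, ...
      - run: npm ci
      - run: npm test                 # the command verified in B5
EOF
```

The quoted `'EOF'` heredoc delimiter keeps the YAML literal, so no shell expansion can corrupt the workflow file.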
### B6. Create TESTING.md

First check: If TESTING.md already exists → read it and update/append rather than overwriting. Never destroy existing content.

Write TESTING.md with:
- Philosophy: "100% test coverage is the key to great vibe coding. Tests let you move fast, trust your instincts, and ship with confidence — without them, vibe coding is just yolo coding. With tests, it's a superpower."
- Framework name and version
- How to run tests (the verified command from B5)
- Test layers: Unit tests (what, where, when), Integration tests, Smoke tests, E2E tests
- Conventions: file naming, assertion style, setup/teardown patterns

### B7. Update CLAUDE.md

First check: If CLAUDE.md already has a `## Testing` section → skip. Don't duplicate.

Append a `## Testing` section:
- Run command and test directory
- Reference to TESTING.md
- Test expectations:
  - 100% test coverage is the goal — tests make vibe coding safe
  - When writing new functions, write a corresponding test
  - When fixing a bug, write a regression test
  - When adding error handling, write a test that triggers the error
  - When adding a conditional (if/else, switch), write tests for BOTH paths
  - Never commit code that makes existing tests fail

### B8. Commit

```bash
git status --porcelain
```

Only commit if there are changes. Stage all bootstrap files (config, test directory, TESTING.md, CLAUDE.md, .github/workflows/test.yml if created):
`git commit -m "chore: bootstrap test framework ({framework name})"`

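The "only commit if there are changes" guard can be sketched as follows. This is a self-contained illustration in a throwaway repo; the real step stages the actual bootstrap files and substitutes the chosen framework name (vitest here is just an example).

```shell
# Throwaway repo so the sketch is runnable anywhere.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email qa@example.com && git config user.name qa
echo "# How we test" > TESTING.md

# Commit only when bootstrap actually changed something.
if [ -n "$(git status --porcelain)" ]; then
  git add TESTING.md
  git commit -q -m "chore: bootstrap test framework (vitest)"
else
  echo "nothing to commit"
fi
git log --format=%s
```

`git status --porcelain` is empty on a clean tree, so re-running the bootstrap is a no-op rather than an empty commit.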
---

**Create output directories:**

```bash
@@ -565,6 +721,18 @@ Take **before/after screenshot pair** for every fix.
- **best-effort**: fix applied but couldn't fully verify (e.g., needs specific browser state)
- **reverted**: regression detected → `git revert HEAD` → mark finding as "deferred"

### 8e.5. Regression Test (design-review variant)

Design fixes are typically CSS-only. Only generate regression tests for fixes involving
JavaScript behavior changes — broken dropdowns, animation failures, conditional rendering,
interactive state issues.

For CSS-only fixes: skip entirely. CSS regressions are caught by re-running /qa-design-review.

If the fix involved JS behavior: follow the same procedure as /qa Phase 8e.5 (study existing
test patterns, write a regression test encoding the exact bug condition, run it, commit if
passes or defer if fails). Commit format: `test(design): regression test for FINDING-NNN`.

### 8f. Self-Regulation (STOP AND EVALUATE)

Every 5 fixes (or after any revert), compute the design-fix risk level:
@@ -639,7 +807,7 @@ If the repo has a `TODOS.md`:

11. **Clean working tree required.** Refuse to start if `git status --porcelain` is non-empty.
12. **One commit per fix.** Never bundle multiple design fixes into one commit.
13. **Never modify tests or CI configuration.** Only fix application source code and styles.
13. **Only modify tests when generating regression tests in Phase 8e.5.** Never modify CI configuration. Never modify existing tests — only create new test files.
14. **Revert on regression.** If a fix makes things worse, `git revert HEAD` immediately.
15. **Self-regulate.** Follow the design-fix risk heuristic. When in doubt, stop and ask.
16. **CSS-first.** Prefer CSS/styling changes over structural component changes. CSS-only changes are safer and more reversible.

@@ -14,6 +14,7 @@ allowed-tools:
- Glob
- Grep
- AskUserQuestion
- WebSearch
---

{{PREAMBLE}}
@@ -54,6 +55,10 @@ fi

{{BROWSE_SETUP}}

**Check test framework (bootstrap if needed):**

{{TEST_BOOTSTRAP}}

**Create output directories:**

```bash
@@ -153,6 +158,18 @@ Take **before/after screenshot pair** for every fix.
- **best-effort**: fix applied but couldn't fully verify (e.g., needs specific browser state)
- **reverted**: regression detected → `git revert HEAD` → mark finding as "deferred"

### 8e.5. Regression Test (design-review variant)

Design fixes are typically CSS-only. Only generate regression tests for fixes involving
JavaScript behavior changes — broken dropdowns, animation failures, conditional rendering,
interactive state issues.

For CSS-only fixes: skip entirely. CSS regressions are caught by re-running /qa-design-review.

If the fix involved JS behavior: follow the same procedure as /qa Phase 8e.5 (study existing
test patterns, write a regression test encoding the exact bug condition, run it, commit if
passes or defer if fails). Commit format: `test(design): regression test for FINDING-NNN`.

### 8f. Self-Regulation (STOP AND EVALUATE)

Every 5 fixes (or after any revert), compute the design-fix risk level:
@@ -227,7 +244,7 @@ If the repo has a `TODOS.md`:

11. **Clean working tree required.** Refuse to start if `git status --porcelain` is non-empty.
12. **One commit per fix.** Never bundle multiple design fixes into one commit.
13. **Never modify tests or CI configuration.** Only fix application source code and styles.
13. **Only modify tests when generating regression tests in Phase 8e.5.** Never modify CI configuration. Never modify existing tests — only create new test files.
14. **Revert on regression.** If a fix makes things worse, `git revert HEAD` immediately.
15. **Self-regulate.** Follow the design-fix risk heuristic. When in doubt, stop and ask.
16. **CSS-first.** Prefer CSS/styling changes over structural component changes. CSS-only changes are safer and more reversible.

@@ -452,3 +452,4 @@ Report filenames use the domain and date: `qa-report-myapp-com-2026-03-12.md`
## Additional Rules (qa-only specific)

11. **Never fix bugs.** Find and document only. Do not read source code, edit files, or suggest fixes in the report. Your job is to report what's broken, not to fix it. Use `/qa` for the test-fix-verify loop.
12. **No test framework detected?** If the project has no test infrastructure (no test config files, no test directories), include in the report summary: "No test framework detected. Run `/qa` to bootstrap one and enable regression test generation."

@@ -97,3 +97,4 @@ Report filenames use the domain and date: `qa-report-myapp-com-2026-03-12.md`
## Additional Rules (qa-only specific)

11. **Never fix bugs.** Find and document only. Do not read source code, edit files, or suggest fixes in the report. Your job is to report what's broken, not to fix it. Use `/qa` for the test-fix-verify loop.
12. **No test framework detected?** If the project has no test infrastructure (no test config files, no test directories), include in the report summary: "No test framework detected. Run `/qa` to bootstrap one and enable regression test generation."

+210 -1
@@ -16,6 +16,7 @@ allowed-tools:
- Glob
- Grep
- AskUserQuestion
+- WebSearch
---
<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
<!-- Regenerate: bun run gen:skill-docs -->
@@ -157,6 +158,161 @@ If `NEEDS_SETUP`:
2. Run: `cd <SKILL_DIR> && ./setup`
3. If `bun` is not installed: `curl -fsSL https://bun.sh/install | bash`

**Check test framework (bootstrap if needed):**

## Test Framework Bootstrap

**Detect existing test framework and project runtime:**

```bash
# Detect project runtime
[ -f Gemfile ] && echo "RUNTIME:ruby"
[ -f package.json ] && echo "RUNTIME:node"
{ [ -f requirements.txt ] || [ -f pyproject.toml ]; } && echo "RUNTIME:python"
[ -f go.mod ] && echo "RUNTIME:go"
[ -f Cargo.toml ] && echo "RUNTIME:rust"
[ -f composer.json ] && echo "RUNTIME:php"
[ -f mix.exs ] && echo "RUNTIME:elixir"
# Detect sub-frameworks
[ -f Gemfile ] && grep -q "rails" Gemfile 2>/dev/null && echo "FRAMEWORK:rails"
[ -f package.json ] && grep -q '"next"' package.json 2>/dev/null && echo "FRAMEWORK:nextjs"
# Check for existing test infrastructure
ls jest.config.* vitest.config.* playwright.config.* .rspec pytest.ini pyproject.toml phpunit.xml 2>/dev/null
ls -d test/ tests/ spec/ __tests__/ cypress/ e2e/ 2>/dev/null
# Check opt-out marker
[ -f .gstack/no-test-bootstrap ] && echo "BOOTSTRAP_DECLINED"
```
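The chain of one-liners above can be folded into a single helper that prints only the first matching runtime, which is convenient when one value is needed rather than a log of matches. This is a sketch under the same marker-file assumptions; `detect_runtime` is a hypothetical name, not part of gstack.

```shell
# Hypothetical helper: print the first runtime whose marker file exists
# in the current directory, mirroring the detection table above.
detect_runtime() {
  [ -f Gemfile ]       && { echo "ruby"; return; }
  [ -f package.json ]  && { echo "node"; return; }
  { [ -f requirements.txt ] || [ -f pyproject.toml ]; } && { echo "python"; return; }
  [ -f go.mod ]        && { echo "go"; return; }
  [ -f Cargo.toml ]    && { echo "rust"; return; }
  [ -f composer.json ] && { echo "php"; return; }
  [ -f mix.exs ]       && { echo "elixir"; return; }
  echo "unknown"
}

detect_runtime
```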

**If test framework detected** (config files or test directories found):
Print "Test framework detected: {name} ({N} existing tests). Skipping bootstrap."
Read 2-3 existing test files to learn conventions (naming, imports, assertion style, setup patterns).
Store conventions as prose context for use in Phase 8e.5 or Step 3.4. **Skip the rest of bootstrap.**

**If BOOTSTRAP_DECLINED** appears: Print "Test bootstrap previously declined — skipping." **Skip the rest of bootstrap.**

**If NO runtime detected** (no config files found): Use AskUserQuestion:
"I couldn't detect your project's language. What runtime are you using?"
Options: A) Node.js/TypeScript B) Ruby/Rails C) Python D) Go E) Rust F) PHP G) Elixir H) This project doesn't need tests.
If user picks H → write `.gstack/no-test-bootstrap` and continue without tests.

**If runtime detected but no test framework — bootstrap:**

### B2. Research best practices

Use WebSearch to find current best practices for the detected runtime:
- `"[runtime] best test framework 2025 2026"`
- `"[framework A] vs [framework B] comparison"`

If WebSearch is unavailable, use this built-in knowledge table:

| Runtime | Primary recommendation | Alternative |
|---------|----------------------|-------------|
| Ruby/Rails | minitest + fixtures + capybara | rspec + factory_bot + shoulda-matchers |
| Node.js | vitest + @testing-library | jest + @testing-library |
| Next.js | vitest + @testing-library/react + playwright | jest + cypress |
| Python | pytest + pytest-cov | unittest |
| Go | stdlib testing + testify | stdlib only |
| Rust | cargo test (built-in) + mockall | — |
| PHP | phpunit + mockery | pest |
| Elixir | ExUnit (built-in) + ex_machina | — |
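The fallback table maps cleanly to a lookup if a script ever needs the recommendation programmatically. A sketch only: the strings simply mirror the table above, and `primary_framework` is an illustrative name.

```shell
# Sketch: map a detected runtime to the table's primary recommendation.
primary_framework() {
  case "$1" in
    ruby|rails) echo "minitest + fixtures + capybara" ;;
    node)       echo "vitest + @testing-library" ;;
    nextjs)     echo "vitest + @testing-library/react + playwright" ;;
    python)     echo "pytest + pytest-cov" ;;
    go)         echo "stdlib testing + testify" ;;
    rust)       echo "cargo test (built-in) + mockall" ;;
    php)        echo "phpunit + mockery" ;;
    elixir)     echo "ExUnit (built-in) + ex_machina" ;;
    *)          echo "unknown" ;;
  esac
}

primary_framework python
```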

### B3. Framework selection

Use AskUserQuestion:
"I detected this is a [Runtime/Framework] project with no test framework. I researched current best practices. Here are the options:
A) [Primary] — [rationale]. Includes: [packages]. Supports: unit, integration, smoke, e2e
B) [Alternative] — [rationale]. Includes: [packages]
C) Skip — don't set up testing right now
RECOMMENDATION: Choose A because [reason based on project context]"

If user picks C → write `.gstack/no-test-bootstrap`. Tell user: "If you change your mind later, delete `.gstack/no-test-bootstrap` and re-run." Continue without tests.

If multiple runtimes detected (monorepo) → ask which runtime to set up first, with option to do both sequentially.

### B4. Install and configure

1. Install the chosen packages (npm/bun/gem/pip/etc.)
2. Create minimal config file
3. Create directory structure (test/, spec/, etc.)
4. Create one example test matching the project's code to verify setup works

If package installation fails → debug once. If still failing → revert with `git checkout -- package.json package-lock.json` (or equivalent for the runtime). Warn user and continue without tests.

### B4.5. First real tests

Generate 3-5 real tests for existing code:

1. **Find recently changed files:** `git log --since=30.days --name-only --format="" | sort | uniq -c | sort -rn | head -10`
2. **Prioritize by risk:** Error handlers > business logic with conditionals > API endpoints > pure functions
3. **For each file:** Write one test that tests real behavior with meaningful assertions. Never `expect(x).toBeDefined()` — test what the code DOES.
4. Run each test. Passes → keep. Fails → fix once. Still fails → delete silently.
5. Generate at least 1 test, cap at 5.

Never import secrets, API keys, or credentials in test files. Use environment variables or test fixtures.

### B5. Verify

```bash
# Run the full test suite to confirm everything works
{detected test command}
```

If tests fail → debug once. If still failing → revert all bootstrap changes and warn user.

### B5.5. CI/CD pipeline

```bash
# Check CI provider
ls -d .github/ 2>/dev/null && echo "CI:github"
ls .gitlab-ci.yml .circleci/ bitrise.yml 2>/dev/null
```

If `.github/` exists (or no CI detected — default to GitHub Actions):
Create `.github/workflows/test.yml` with:
- `runs-on: ubuntu-latest`
- Appropriate setup action for the runtime (setup-node, setup-ruby, setup-python, etc.)
- The same test command verified in B5
- Trigger: push + pull_request

If non-GitHub CI detected → skip CI generation with note: "Detected {provider} — CI pipeline generation supports GitHub Actions only. Add a test step to your existing pipeline manually."
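A minimal version of the workflow B5.5 describes, assuming a Node project whose verified B5 command is `npm test` (swap the setup action and commands for other runtimes):

```shell
# Sketch: write the GitHub Actions workflow described above.
# "npm test" stands in for the command verified in B5.
mkdir -p .github/workflows
cat > .github/workflows/test.yml <<'YAML'
name: test
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npm test
YAML
```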

### B6. Create TESTING.md

First check: If TESTING.md already exists → read it and update/append rather than overwriting. Never destroy existing content.

Write TESTING.md with:
- Philosophy: "100% test coverage is the key to great vibe coding. Tests let you move fast, trust your instincts, and ship with confidence — without them, vibe coding is just yolo coding. With tests, it's a superpower."
- Framework name and version
- How to run tests (the verified command from B5)
- Test layers: Unit tests (what, where, when), Integration tests, Smoke tests, E2E tests
- Conventions: file naming, assertion style, setup/teardown patterns

### B7. Update CLAUDE.md

First check: If CLAUDE.md already has a `## Testing` section → skip. Don't duplicate.

Append a `## Testing` section:
- Run command and test directory
- Reference to TESTING.md
- Test expectations:
  - 100% test coverage is the goal — tests make vibe coding safe
  - When writing new functions, write a corresponding test
  - When fixing a bug, write a regression test
  - When adding error handling, write a test that triggers the error
  - When adding a conditional (if/else, switch), write tests for BOTH paths
  - Never commit code that makes existing tests fail

### B8. Commit

```bash
git status --porcelain
```

Only commit if there are changes. Stage all bootstrap files (config, test directory, TESTING.md, CLAUDE.md, .github/workflows/test.yml if created):
`git commit -m "chore: bootstrap test framework ({framework name})"`
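B8's "only commit if there are changes" rule can be sketched as a small guard; `commit_if_dirty` and the `vitest` argument below are illustrative names, not gstack helpers.

```shell
# Sketch: commit bootstrap files only when the working tree is dirty,
# matching B8. Takes the framework name as its argument.
commit_if_dirty() {
  if [ -n "$(git status --porcelain)" ]; then
    git add -A
    git commit -q -m "chore: bootstrap test framework ($1)"
  else
    echo "Nothing to commit"
  fi
}
```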

---

**Create output directories:**

```bash
@@ -541,6 +697,59 @@ $B snapshot -D
- **best-effort**: fix applied but couldn't fully verify (e.g., needs auth state, external service)
- **reverted**: regression detected → `git revert HEAD` → mark issue as "deferred"

### 8e.5. Regression Test

Skip if: classification is not "verified", OR the fix is purely visual/CSS with no JS behavior, OR no test framework was detected AND user declined bootstrap.

**1. Study the project's existing test patterns:**

Read 2-3 test files closest to the fix (same directory, same code type). Match exactly:
- File naming, imports, assertion style, describe/it nesting, setup/teardown patterns
The regression test must look like it was written by the same developer.

**2. Trace the bug's codepath, then write a regression test:**

Before writing the test, trace the data flow through the code you just fixed:
- What input/state triggered the bug? (the exact precondition)
- What codepath did it follow? (which branches, which function calls)
- Where did it break? (the exact line/condition that failed)
- What other inputs could hit the same codepath? (edge cases around the fix)

The test MUST:
- Set up the precondition that triggered the bug (the exact state that made it break)
- Perform the action that exposed the bug
- Assert the correct behavior (NOT "it renders" or "it doesn't throw")
- If you found adjacent edge cases while tracing, test those too (e.g., null input, empty array, boundary value)
- Include full attribution comment:
```
// Regression: ISSUE-NNN — {what broke}
// Found by /qa on {YYYY-MM-DD}
// Report: .gstack/qa-reports/qa-report-{domain}-{date}.md
```

Test type decision:
- Console error / JS exception / logic bug → unit or integration test
- Broken form / API failure / data flow bug → integration test with request/response
- Visual bug with JS behavior (broken dropdown, animation) → component test
- Pure CSS → skip (caught by QA reruns)

Generate unit tests. Mock all external dependencies (DB, API, Redis, file system).

Use auto-incrementing names to avoid collisions: check existing `{name}.regression-*.test.{ext}` files, take max number + 1.
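The max-plus-one naming rule can be sketched in shell. `next_regression_number` is a hypothetical helper, and `cart`/`ts` below are illustrative file names, not files in this repo.

```shell
# Sketch: next free number for {name}.regression-N.test.{ext}.
# Prints max existing N + 1, or 1 when no regression tests exist yet.
next_regression_number() {
  ls "$1".regression-*.test."$2" 2>/dev/null \
    | sed -E 's/.*regression-([0-9]+)\.test\..*/\1/' \
    | sort -n | tail -1 \
    | awk '{ print $1 + 1 }' \
    | grep . || echo 1
}
```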

**3. Run only the new test file:**

```bash
{detected test command} {new-test-file}
```

**4. Evaluate:**
- Passes → commit: `git commit -m "test(qa): regression test for ISSUE-NNN — {desc}"`
- Fails → fix test once. Still failing → delete test, defer.
- Taking >2 min exploration → skip and defer.

**5. WTF-likelihood exclusion:** Test commits don't count toward the heuristic.

### 8f. Self-Regulation (STOP AND EVALUATE)

Every 5 fixes (or after any revert), compute the WTF-likelihood:
@@ -614,6 +823,6 @@ If the repo has a `TODOS.md`:

11. **Clean working tree required.** Refuse to start if `git status --porcelain` is non-empty.
12. **One commit per fix.** Never bundle multiple fixes into one commit.
-13. **Never modify tests or CI configuration.** Only fix application source code.
+13. **Only modify tests when generating regression tests in Phase 8e.5.** Never modify CI configuration. Never modify existing tests — only create new test files.
14. **Revert on regression.** If a fix makes things worse, `git revert HEAD` immediately.
15. **Self-regulate.** Follow the WTF-likelihood heuristic. When in doubt, stop and ask.
+59 -1
@@ -16,6 +16,7 @@ allowed-tools:
- Glob
- Grep
- AskUserQuestion
+- WebSearch
---

{{PREAMBLE}}
@@ -58,6 +59,10 @@ fi

{{BROWSE_SETUP}}

+**Check test framework (bootstrap if needed):**
+
+{{TEST_BOOTSTRAP}}

**Create output directories:**

```bash
@@ -169,6 +174,59 @@ $B snapshot -D
- **best-effort**: fix applied but couldn't fully verify (e.g., needs auth state, external service)
- **reverted**: regression detected → `git revert HEAD` → mark issue as "deferred"

### 8e.5. Regression Test

Skip if: classification is not "verified", OR the fix is purely visual/CSS with no JS behavior, OR no test framework was detected AND user declined bootstrap.

**1. Study the project's existing test patterns:**

Read 2-3 test files closest to the fix (same directory, same code type). Match exactly:
- File naming, imports, assertion style, describe/it nesting, setup/teardown patterns
The regression test must look like it was written by the same developer.

**2. Trace the bug's codepath, then write a regression test:**

Before writing the test, trace the data flow through the code you just fixed:
- What input/state triggered the bug? (the exact precondition)
- What codepath did it follow? (which branches, which function calls)
- Where did it break? (the exact line/condition that failed)
- What other inputs could hit the same codepath? (edge cases around the fix)

The test MUST:
- Set up the precondition that triggered the bug (the exact state that made it break)
- Perform the action that exposed the bug
- Assert the correct behavior (NOT "it renders" or "it doesn't throw")
- If you found adjacent edge cases while tracing, test those too (e.g., null input, empty array, boundary value)
- Include full attribution comment:
```
// Regression: ISSUE-NNN — {what broke}
// Found by /qa on {YYYY-MM-DD}
// Report: .gstack/qa-reports/qa-report-{domain}-{date}.md
```

Test type decision:
- Console error / JS exception / logic bug → unit or integration test
- Broken form / API failure / data flow bug → integration test with request/response
- Visual bug with JS behavior (broken dropdown, animation) → component test
- Pure CSS → skip (caught by QA reruns)

Generate unit tests. Mock all external dependencies (DB, API, Redis, file system).

Use auto-incrementing names to avoid collisions: check existing `{name}.regression-*.test.{ext}` files, take max number + 1.

**3. Run only the new test file:**

```bash
{detected test command} {new-test-file}
```

**4. Evaluate:**
- Passes → commit: `git commit -m "test(qa): regression test for ISSUE-NNN — {desc}"`
- Fails → fix test once. Still failing → delete test, defer.
- Taking >2 min exploration → skip and defer.

**5. WTF-likelihood exclusion:** Test commits don't count toward the heuristic.

### 8f. Self-Regulation (STOP AND EVALUATE)

Every 5 fixes (or after any revert), compute the WTF-likelihood:
@@ -242,6 +300,6 @@ If the repo has a `TODOS.md`:

11. **Clean working tree required.** Refuse to start if `git status --porcelain` is non-empty.
12. **One commit per fix.** Never bundle multiple fixes into one commit.
-13. **Never modify tests or CI configuration.** Only fix application source code.
+13. **Only modify tests when generating regression tests in Phase 8e.5.** Never modify CI configuration. Never modify existing tests — only create new test files.
14. **Revert on regression.** If a fix makes things worse, `git revert HEAD` immediately.
15. **Self-regulate.** Follow the WTF-likelihood heuristic. When in doubt, stop and ask.
@@ -86,6 +86,22 @@

---

## Regression Tests

| Issue | Test File | Status | Description |
|-------|-----------|--------|-------------|
| ISSUE-NNN | path/to/test | committed / deferred / skipped | description |

### Deferred Tests

#### ISSUE-NNN: {title}
**Precondition:** {setup state that triggers the bug}
**Action:** {what the user does}
**Expected:** {correct behavior}
**Why deferred:** {reason}

---

## Ship Readiness

| Metric | Value |
+28 -1
@@ -164,6 +164,15 @@ cat ~/.gstack/greptile-history.md 2>/dev/null || true

# 9. TODOS.md backlog (if available)
cat TODOS.md 2>/dev/null || true

# 10. Test file count
find . -name '*.test.*' -o -name '*.spec.*' -o -name '*_test.*' -o -name '*_spec.*' 2>/dev/null | grep -v node_modules | wc -l

# 11. Regression test commits in window
git log origin/<default> --since="<window>" --oneline --grep="test(qa):" --grep="test(design):" --grep="test: coverage"

# 12. Test files changed in window
git log origin/<default> --since="<window>" --format="" --name-only | grep -E '\.(test|spec)\.' | sort -u | wc -l
```
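Commands 10 and 12 feed the Test Health metrics. One plausible reading of the "test ratio" flagged later in the report is test files as a share of all files; a sketch, reusing the same name patterns (the `_test`/`_spec` variants are omitted here for brevity):

```shell
# Sketch: percentage of files that are test files, one possible
# definition of the "test ratio" used as a <20% growth-area flag.
test_ratio() {
  tests=$(find . -path ./node_modules -prune -o \
            \( -name '*.test.*' -o -name '*.spec.*' \) -print | wc -l)
  total=$(find . -path ./node_modules -prune -o -type f -print | wc -l)
  [ "$total" -gt 0 ] && echo $(( tests * 100 / total )) || echo 0
}
```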
### Step 2: Compute Metrics
@@ -185,6 +194,7 @@ Calculate and present these metrics in a summary table:
| Detected sessions | N |
| Avg LOC/session-hour | N |
| Greptile signal | N% (Y catches, Z FPs) |
+| Test Health | N total tests · M added this period · K regression tests |

Then show a **per-author leaderboard** immediately below:
@@ -408,7 +418,17 @@ Use the Write tool to save the JSON file with this schema:
}
```

-**Note:** Only include the `greptile` field if `~/.gstack/greptile-history.md` exists and has entries within the time window. Only include the `backlog` field if `TODOS.md` exists. If either has no data, omit the field entirely.
+**Note:** Only include the `greptile` field if `~/.gstack/greptile-history.md` exists and has entries within the time window. Only include the `backlog` field if `TODOS.md` exists. Only include the `test_health` field if test files were found (command 10 returns > 0). If any has no data, omit the field entirely.

Include test health data in the JSON when test files exist:
```json
"test_health": {
  "total_test_files": 47,
  "tests_added_this_period": 5,
  "regression_test_commits": 3,
  "test_files_changed": 8
}
```

Include backlog data in the JSON when TODOS.md exists:
```json
@@ -464,6 +484,13 @@ Narrative covering:
- Any XL PRs that should have been split
- Greptile signal ratio and trend (if history exists): "Greptile: X% signal (Y valid catches, Z false positives)"

### Test Health
- Total test files: N (from command 10)
- Tests added this period: M (from command 12 — test files changed)
- Regression test commits: list `test(qa):` and `test(design):` and `test: coverage` commits from command 11
- If prior retro exists and has `test_health`: show delta "Test count: {last} → {now} (+{delta})"
- If test ratio < 20%: flag as growth area — "100% test coverage is the goal. Tests make vibe coding safe."

### Focus & Highlights
(from Step 8)
- Focus score with interpretation
+28 -1
@@ -99,6 +99,15 @@ cat ~/.gstack/greptile-history.md 2>/dev/null || true

# 9. TODOS.md backlog (if available)
cat TODOS.md 2>/dev/null || true

# 10. Test file count
find . -name '*.test.*' -o -name '*.spec.*' -o -name '*_test.*' -o -name '*_spec.*' 2>/dev/null | grep -v node_modules | wc -l

# 11. Regression test commits in window
git log origin/<default> --since="<window>" --oneline --grep="test(qa):" --grep="test(design):" --grep="test: coverage"

# 12. Test files changed in window
git log origin/<default> --since="<window>" --format="" --name-only | grep -E '\.(test|spec)\.' | sort -u | wc -l
```

### Step 2: Compute Metrics
@@ -120,6 +129,7 @@ Calculate and present these metrics in a summary table:
| Detected sessions | N |
| Avg LOC/session-hour | N |
| Greptile signal | N% (Y catches, Z FPs) |
+| Test Health | N total tests · M added this period · K regression tests |

Then show a **per-author leaderboard** immediately below:

@@ -343,7 +353,17 @@ Use the Write tool to save the JSON file with this schema:
}
```

-**Note:** Only include the `greptile` field if `~/.gstack/greptile-history.md` exists and has entries within the time window. Only include the `backlog` field if `TODOS.md` exists. If either has no data, omit the field entirely.
+**Note:** Only include the `greptile` field if `~/.gstack/greptile-history.md` exists and has entries within the time window. Only include the `backlog` field if `TODOS.md` exists. Only include the `test_health` field if test files were found (command 10 returns > 0). If any has no data, omit the field entirely.

Include test health data in the JSON when test files exist:
```json
"test_health": {
  "total_test_files": 47,
  "tests_added_this_period": 5,
  "regression_test_commits": 3,
  "test_files_changed": 8
}
```

Include backlog data in the JSON when TODOS.md exists:
```json
@@ -399,6 +419,13 @@ Narrative covering:
- Any XL PRs that should have been split
- Greptile signal ratio and trend (if history exists): "Greptile: X% signal (Y valid catches, Z false positives)"

### Test Health
- Total test files: N (from command 10)
- Tests added this period: M (from command 12 — test files changed)
- Regression test commits: list `test(qa):` and `test(design):` and `test: coverage` commits from command 11
- If prior retro exists and has `test_health`: show delta "Test count: {last} → {now} (+{delta})"
- If test ratio < 20%: flag as growth area — "100% test coverage is the goal. Tests make vibe coding safe."

### Focus & Highlights
(from Step 8)
- Focus score with interpretation
+174 -10
@@ -817,11 +817,13 @@ Tie everything to user goals and product objectives. Always suggest specific imp

function generateReviewDashboard(): string {
  return `## Review Readiness Dashboard

-After completing the review, read the review log to display the dashboard.
+After completing the review, read the review log and config to display the dashboard.

\`\`\`bash
eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
cat ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl 2>/dev/null || echo "NO_REVIEWS"
+echo "---CONFIG---"
+~/.claude/skills/gstack/bin/gstack-config get skip_eng_review 2>/dev/null || echo "false"
\`\`\`

Parse the output. Find the most recent entry for each skill (plan-ceo-review, plan-eng-review, plan-design-review). Ignore entries with timestamps older than 7 days. Display:
@@ -830,20 +832,181 @@ Parse the output. Find the most recent entry for each skill (pl
\`\`\`
+====================================================================+
| REVIEW READINESS DASHBOARD |
+====================================================================+
-| Review | Runs | Last Run | Status |
-|-----------------|------|---------------------|----------------------|
-| CEO Review | 1 | 2026-03-16 14:30 | CLEAR |
-| Eng Review | 1 | 2026-03-16 15:00 | CLEAR |
-| Design Review | 0 | — | NOT YET RUN |
+| Review | Runs | Last Run | Status | Required |
+|-----------------|------|---------------------|-----------|----------|
+| Eng Review | 1 | 2026-03-16 15:00 | CLEAR | YES |
+| CEO Review | 0 | — | — | no |
+| Design Review | 0 | — | — | no |
+--------------------------------------------------------------------+
-| VERDICT: 2/3 CLEAR — Design Review not yet run |
+| VERDICT: CLEARED — Eng Review passed |
+====================================================================+
\`\`\`

+**Review tiers:**
+- **Eng Review (required by default):** The only review that gates shipping. Covers architecture, code quality, tests, performance. Can be disabled globally with \`gstack-config set skip_eng_review true\` (the "don't bother me" setting).
+- **CEO Review (optional):** Use your judgment. Recommend it for big product/business changes, new user-facing features, or scope decisions. Skip for bug fixes, refactors, infra, and cleanup.
+- **Design Review (optional):** Use your judgment. Recommend it for UI/UX changes. Skip for backend-only, infra, or prompt-only changes.

**Verdict logic:**
-- **CLEARED TO SHIP (3/3)**: All three have >= 1 entry within 7 days AND most recent status is "clean"
-- **N/3 CLEAR**: Show count and list which are missing, have open issues, or are stale (>7 days)
-- Informational only — does NOT block.`;
+- **CLEARED**: Eng Review has >= 1 entry within 7 days with status "clean" (or \`skip_eng_review\` is \`true\`)
+- **NOT CLEARED**: Eng Review missing, stale (>7 days), or has open issues
+- CEO and Design reviews are shown for context but never block shipping
+- If \`skip_eng_review\` config is \`true\`, Eng Review shows "SKIPPED (global)" and verdict is CLEARED`;
}
function generateTestBootstrap(): string {
  return `## Test Framework Bootstrap

**Detect existing test framework and project runtime:**

\`\`\`bash
# Detect project runtime
[ -f Gemfile ] && echo "RUNTIME:ruby"
[ -f package.json ] && echo "RUNTIME:node"
{ [ -f requirements.txt ] || [ -f pyproject.toml ]; } && echo "RUNTIME:python"
[ -f go.mod ] && echo "RUNTIME:go"
[ -f Cargo.toml ] && echo "RUNTIME:rust"
[ -f composer.json ] && echo "RUNTIME:php"
[ -f mix.exs ] && echo "RUNTIME:elixir"
# Detect sub-frameworks
[ -f Gemfile ] && grep -q "rails" Gemfile 2>/dev/null && echo "FRAMEWORK:rails"
[ -f package.json ] && grep -q '"next"' package.json 2>/dev/null && echo "FRAMEWORK:nextjs"
# Check for existing test infrastructure
ls jest.config.* vitest.config.* playwright.config.* .rspec pytest.ini pyproject.toml phpunit.xml 2>/dev/null
ls -d test/ tests/ spec/ __tests__/ cypress/ e2e/ 2>/dev/null
# Check opt-out marker
[ -f .gstack/no-test-bootstrap ] && echo "BOOTSTRAP_DECLINED"
\`\`\`
**If test framework detected** (config files or test directories found):
Print "Test framework detected: {name} ({N} existing tests). Skipping bootstrap."
Read 2-3 existing test files to learn conventions (naming, imports, assertion style, setup patterns).
Store conventions as prose context for use in Phase 8e.5 or Step 3.4. **Skip the rest of bootstrap.**

**If BOOTSTRAP_DECLINED** appears: Print "Test bootstrap previously declined — skipping." **Skip the rest of bootstrap.**

**If NO runtime detected** (no config files found): Use AskUserQuestion:
"I couldn't detect your project's language. What runtime are you using?"
Options: A) Node.js/TypeScript B) Ruby/Rails C) Python D) Go E) Rust F) PHP G) Elixir H) This project doesn't need tests.
If user picks H → write \`.gstack/no-test-bootstrap\` and continue without tests.

**If runtime detected but no test framework — bootstrap:**
### B2. Research best practices

Use WebSearch to find current best practices for the detected runtime:
- \`"[runtime] best test framework 2025 2026"\`
- \`"[framework A] vs [framework B] comparison"\`

If WebSearch is unavailable, use this built-in knowledge table:

| Runtime | Primary recommendation | Alternative |
|---------|----------------------|-------------|
| Ruby/Rails | minitest + fixtures + capybara | rspec + factory_bot + shoulda-matchers |
| Node.js | vitest + @testing-library | jest + @testing-library |
| Next.js | vitest + @testing-library/react + playwright | jest + cypress |
| Python | pytest + pytest-cov | unittest |
| Go | stdlib testing + testify | stdlib only |
| Rust | cargo test (built-in) + mockall | — |
| PHP | phpunit + mockery | pest |
| Elixir | ExUnit (built-in) + ex_machina | — |
### B3. Framework selection

Use AskUserQuestion:
"I detected this is a [Runtime/Framework] project with no test framework. I researched current best practices. Here are the options:
A) [Primary] — [rationale]. Includes: [packages]. Supports: unit, integration, smoke, e2e
B) [Alternative] — [rationale]. Includes: [packages]
C) Skip — don't set up testing right now
RECOMMENDATION: Choose A because [reason based on project context]"

If user picks C → write \`.gstack/no-test-bootstrap\`. Tell user: "If you change your mind later, delete \`.gstack/no-test-bootstrap\` and re-run." Continue without tests.

If multiple runtimes detected (monorepo) → ask which runtime to set up first, with option to do both sequentially.

### B4. Install and configure

1. Install the chosen packages (npm/bun/gem/pip/etc.)
2. Create minimal config file
3. Create directory structure (test/, spec/, etc.)
4. Create one example test matching the project's code to verify setup works

If package installation fails → debug once. If still failing → revert with \`git checkout -- package.json package-lock.json\` (or equivalent for the runtime). Warn user and continue without tests.
### B4.5. First real tests

Generate 3-5 real tests for existing code:

1. **Find recently changed files:** \`git log --since=30.days --name-only --format="" | sort | uniq -c | sort -rn | head -10\`
2. **Prioritize by risk:** Error handlers > business logic with conditionals > API endpoints > pure functions
3. **For each file:** Write one test that tests real behavior with meaningful assertions. Never \`expect(x).toBeDefined()\` — test what the code DOES.
4. Run each test. Passes → keep. Fails → fix once. Still fails → delete silently.
5. Generate at least 1 test, cap at 5.

Never import secrets, API keys, or credentials in test files. Use environment variables or test fixtures.

### B5. Verify

\`\`\`bash
# Run the full test suite to confirm everything works
{detected test command}
\`\`\`

If tests fail → debug once. If still failing → revert all bootstrap changes and warn user.
### B5.5. CI/CD pipeline

\`\`\`bash
# Check CI provider
ls -d .github/ 2>/dev/null && echo "CI:github"
ls .gitlab-ci.yml .circleci/ bitrise.yml 2>/dev/null
\`\`\`

If \`.github/\` exists (or no CI detected — default to GitHub Actions):
Create \`.github/workflows/test.yml\` with:
- \`runs-on: ubuntu-latest\`
- Appropriate setup action for the runtime (setup-node, setup-ruby, setup-python, etc.)
- The same test command verified in B5
- Trigger: push + pull_request

If non-GitHub CI detected → skip CI generation with note: "Detected {provider} — CI pipeline generation supports GitHub Actions only. Add a test step to your existing pipeline manually."
### B6. Create TESTING.md
|
||||
|
||||
First check: If TESTING.md already exists → read it and update/append rather than overwriting. Never destroy existing content.
|
||||
|
||||
Write TESTING.md with:
|
||||
- Philosophy: "100% test coverage is the key to great vibe coding. Tests let you move fast, trust your instincts, and ship with confidence — without them, vibe coding is just yolo coding. With tests, it's a superpower."
|
||||
- Framework name and version
|
||||
- How to run tests (the verified command from B5)
|
||||
- Test layers: Unit tests (what, where, when), Integration tests, Smoke tests, E2E tests
|
||||
- Conventions: file naming, assertion style, setup/teardown patterns
|
||||
|
||||
### B7. Update CLAUDE.md
|
||||
|
||||
First check: If CLAUDE.md already has a \`## Testing\` section → skip. Don't duplicate.
|
||||
|
||||
Append a \`## Testing\` section:
|
||||
- Run command and test directory
|
||||
- Reference to TESTING.md
|
||||
- Test expectations:
|
||||
- 100% test coverage is the goal — tests make vibe coding safe
|
||||
- When writing new functions, write a corresponding test
|
||||
- When fixing a bug, write a regression test
|
||||
- When adding error handling, write a test that triggers the error
|
||||
- When adding a conditional (if/else, switch), write tests for BOTH paths
|
||||
- Never commit code that makes existing tests fail
|
||||
|
||||
### B8. Commit
|
||||
|
||||
\`\`\`bash
|
||||
git status --porcelain
|
||||
\`\`\`
|
||||
|
||||
Only commit if there are changes. Stage all bootstrap files (config, test directory, TESTING.md, CLAUDE.md, .github/workflows/test.yml if created):
|
||||
\`git commit -m "chore: bootstrap test framework ({framework name})"\`
|
||||
|
||||
---`;
|
||||
}

const RESOLVERS: Record<string, () => string> = {
@@ -855,6 +1018,7 @@ const RESOLVERS: Record<string, () => string> = {
  QA_METHODOLOGY: generateQAMethodology,
  DESIGN_METHODOLOGY: generateDesignMethodology,
  REVIEW_DASHBOARD: generateReviewDashboard,
  TEST_BOOTSTRAP: generateTestBootstrap,
};

// ─── Template Processing ────────────────────────────────────

+342 -15
@@ -11,6 +11,7 @@ allowed-tools:
- Grep
- Glob
- AskUserQuestion
- WebSearch
---
<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
<!-- Regenerate: bun run gen:skill-docs -->
@@ -121,6 +122,7 @@ You are running the `/ship` workflow. This is a **non-interactive, fully automat
- Multi-file changesets (auto-split into bisectable commits)
- TODOS.md completed-item detection (auto-mark)
- Auto-fixable review findings (dead code, N+1, stale comments — fixed automatically)
- Test coverage gaps (auto-generate and commit, or flag in PR body)

---

@@ -136,11 +138,13 @@ You are running the `/ship` workflow. This is a **non-interactive, fully automat

## Review Readiness Dashboard

After completing the review, read the review log to display the dashboard.
After completing the review, read the review log and config to display the dashboard.

```bash
eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
cat ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl 2>/dev/null || echo "NO_REVIEWS"
echo "---CONFIG---"
~/.claude/skills/gstack/bin/gstack-config get skip_eng_review 2>/dev/null || echo "false"
```

Parse the output. Find the most recent entry for each skill (plan-ceo-review, plan-eng-review, plan-design-review). Ignore entries with timestamps older than 7 days. Display:
@@ -149,25 +153,48 @@ Parse the output. Find the most recent entry for each skill (plan-ceo-review, pl
+====================================================================+
|                    REVIEW READINESS DASHBOARD                      |
+====================================================================+
| Review          | Runs | Last Run            | Status               |
|-----------------|------|---------------------|----------------------|
| CEO Review      | 1    | 2026-03-16 14:30    | CLEAR                |
| Eng Review      | 1    | 2026-03-16 15:00    | CLEAR                |
| Design Review   | 0    | —                   | NOT YET RUN          |
| Review          | Runs | Last Run            | Status    | Required |
|-----------------|------|---------------------|-----------|----------|
| Eng Review      | 1    | 2026-03-16 15:00    | CLEAR     | YES      |
| CEO Review      | 0    | —                   | —         | no       |
| Design Review   | 0    | —                   | —         | no       |
+--------------------------------------------------------------------+
| VERDICT: 2/3 CLEAR — Design Review not yet run                     |
| VERDICT: CLEARED — Eng Review passed                               |
+====================================================================+
```

**Verdict logic:**
- **CLEARED TO SHIP (3/3)**: All three have >= 1 entry within 7 days AND most recent status is "clean"
- **N/3 CLEAR**: Show count and list which are missing, have open issues, or are stale (>7 days)
- Informational only — does NOT block.
**Review tiers:**
- **Eng Review (required by default):** The only review that gates shipping. Covers architecture, code quality, tests, performance. Can be disabled globally with `gstack-config set skip_eng_review true` (the "don't bother me" setting).
- **CEO Review (optional):** Use your judgment. Recommend it for big product/business changes, new user-facing features, or scope decisions. Skip for bug fixes, refactors, infra, and cleanup.
- **Design Review (optional):** Use your judgment. Recommend it for UI/UX changes. Skip for backend-only, infra, or prompt-only changes.

If the verdict is NOT "CLEARED TO SHIP (3/3)", use AskUserQuestion:
- Show which reviews are missing or have open issues
- RECOMMENDATION: Choose B (run missing reviews first) unless the change is trivial
- Options: A) Ship anyway B) Abort — run missing review(s) first C) Reviews not relevant for this change
**Verdict logic:**
- **CLEARED**: Eng Review has >= 1 entry within 7 days with status "clean" (or `skip_eng_review` is `true`)
- **NOT CLEARED**: Eng Review missing, stale (>7 days), or has open issues
- CEO and Design reviews are shown for context but never block shipping
- If `skip_eng_review` config is `true`, Eng Review shows "SKIPPED (global)" and verdict is CLEARED
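
The verdict computation can be sketched in shell with `jq` — a minimal, self-contained illustration of the logic above, not the actual gstack implementation. It assumes `jq` is installed, writes a demo fixture instead of reading the real review log, and the `/tmp` path is hypothetical:

```shell
# Sketch of the CLEARED / NOT CLEARED decision for the Eng Review gate.
# Assumes jq; entries are JSONL: {"skill":..., "timestamp":..., "status":...}
LOG=/tmp/demo-reviews.jsonl

# Demo fixture so this sketch runs standalone: one fresh clean eng review,
# one ancient CEO review that must be ignored as stale.
cat > "$LOG" <<EOF
{"skill":"plan-eng-review","timestamp":"$(date -u +%Y-%m-%dT%H:%M:%SZ)","status":"clean"}
{"skill":"plan-ceo-review","timestamp":"2020-01-01T00:00:00Z","status":"clean"}
EOF

# Freshness cutoff: 7 days ago (GNU date, with a BSD/macOS fallback).
CUTOFF=$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ 2>/dev/null \
  || date -u -v-7d +%Y-%m-%dT%H:%M:%SZ)

# Most recent fresh eng-review status (ISO-8601 strings compare lexically).
STATUS=$(jq -r --arg cutoff "$CUTOFF" \
  'select(.skill=="plan-eng-review" and .timestamp > $cutoff) | .status' \
  "$LOG" | tail -1)

if [ "$STATUS" = "clean" ]; then
  echo "VERDICT: CLEARED"
else
  echo "VERDICT: NOT CLEARED"
fi
```

The stale CEO entry is filtered out by the timestamp comparison, so only the fresh Eng Review drives the verdict.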

If the Eng Review is NOT "CLEAR":

1. **Check for a prior override on this branch:**
   ```bash
   eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
   grep '"skill":"ship-review-override"' ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl 2>/dev/null || echo "NO_OVERRIDE"
   ```
   If an override exists, display the dashboard and note "Review gate previously accepted — continuing." Do NOT ask again.

2. **If no override exists,** use AskUserQuestion:
   - Show that Eng Review is missing or has open issues
   - RECOMMENDATION: Choose C if the change is obviously trivial (< 20 lines, typo fix, config-only); Choose B for larger changes
   - Options: A) Ship anyway B) Abort — run /plan-eng-review first C) Change is too small to need eng review
   - If CEO/Design reviews are missing, mention them as informational ("CEO Review not run — recommended for product changes") but do NOT block or recommend aborting for them

3. **If the user chooses A or C,** persist the decision so future `/ship` runs on this branch skip the gate:
   ```bash
   eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
   echo '{"skill":"ship-review-override","timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'","decision":"USER_CHOICE"}' >> ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl
   ```
   Substitute USER_CHOICE with "ship_anyway" or "not_relevant".

---

@@ -185,6 +212,163 @@ git fetch origin <base> && git merge origin/<base> --no-edit

---

## Step 2.5: Test Framework Bootstrap

## Test Framework Bootstrap

**Detect existing test framework and project runtime:**

```bash
# Detect project runtime
[ -f Gemfile ] && echo "RUNTIME:ruby"
[ -f package.json ] && echo "RUNTIME:node"
[ -f requirements.txt ] || [ -f pyproject.toml ] && echo "RUNTIME:python"
[ -f go.mod ] && echo "RUNTIME:go"
[ -f Cargo.toml ] && echo "RUNTIME:rust"
[ -f composer.json ] && echo "RUNTIME:php"
[ -f mix.exs ] && echo "RUNTIME:elixir"
# Detect sub-frameworks
[ -f Gemfile ] && grep -q "rails" Gemfile 2>/dev/null && echo "FRAMEWORK:rails"
[ -f package.json ] && grep -q '"next"' package.json 2>/dev/null && echo "FRAMEWORK:nextjs"
# Check for existing test infrastructure
ls jest.config.* vitest.config.* playwright.config.* .rspec pytest.ini pyproject.toml phpunit.xml 2>/dev/null
ls -d test/ tests/ spec/ __tests__/ cypress/ e2e/ 2>/dev/null
# Check opt-out marker
[ -f .gstack/no-test-bootstrap ] && echo "BOOTSTRAP_DECLINED"
```

**If test framework detected** (config files or test directories found):
Print "Test framework detected: {name} ({N} existing tests). Skipping bootstrap."
Read 2-3 existing test files to learn conventions (naming, imports, assertion style, setup patterns).
Store conventions as prose context for use in Phase 8e.5 or Step 3.4. **Skip the rest of bootstrap.**

**If BOOTSTRAP_DECLINED** appears: Print "Test bootstrap previously declined — skipping." **Skip the rest of bootstrap.**

**If NO runtime detected** (no config files found): Use AskUserQuestion:
"I couldn't detect your project's language. What runtime are you using?"
Options: A) Node.js/TypeScript B) Ruby/Rails C) Python D) Go E) Rust F) PHP G) Elixir H) This project doesn't need tests.
If user picks H → write `.gstack/no-test-bootstrap` and continue without tests.
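
Writing the opt-out marker is a one-liner, but note that `.gstack/` may not exist yet in a fresh project, so create the directory first. A minimal sketch using the same marker path the detection block checks:

```shell
# Record the user's "no tests" choice; the marker's presence is the only signal.
mkdir -p .gstack
touch .gstack/no-test-bootstrap

# Later runs detect it exactly as the bootstrap detection block does:
[ -f .gstack/no-test-bootstrap ] && echo "BOOTSTRAP_DECLINED"
```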

**If runtime detected but no test framework — bootstrap:**

### B2. Research best practices

Use WebSearch to find current best practices for the detected runtime:
- `"[runtime] best test framework 2025 2026"`
- `"[framework A] vs [framework B] comparison"`

If WebSearch is unavailable, use this built-in knowledge table:

| Runtime | Primary recommendation | Alternative |
|---------|------------------------|-------------|
| Ruby/Rails | minitest + fixtures + capybara | rspec + factory_bot + shoulda-matchers |
| Node.js | vitest + @testing-library | jest + @testing-library |
| Next.js | vitest + @testing-library/react + playwright | jest + cypress |
| Python | pytest + pytest-cov | unittest |
| Go | stdlib testing + testify | stdlib only |
| Rust | cargo test (built-in) + mockall | — |
| PHP | phpunit + mockery | pest |
| Elixir | ExUnit (built-in) + ex_machina | — |

### B3. Framework selection

Use AskUserQuestion:
"I detected this is a [Runtime/Framework] project with no test framework. I researched current best practices. Here are the options:
A) [Primary] — [rationale]. Includes: [packages]. Supports: unit, integration, smoke, e2e
B) [Alternative] — [rationale]. Includes: [packages]
C) Skip — don't set up testing right now
RECOMMENDATION: Choose A because [reason based on project context]"

If user picks C → write `.gstack/no-test-bootstrap`. Tell user: "If you change your mind later, delete `.gstack/no-test-bootstrap` and re-run." Continue without tests.

If multiple runtimes detected (monorepo) → ask which runtime to set up first, with option to do both sequentially.

### B4. Install and configure

1. Install the chosen packages (npm/bun/gem/pip/etc.)
2. Create minimal config file
3. Create directory structure (test/, spec/, etc.)
4. Create one example test matching the project's code to verify setup works

If package installation fails → debug once. If still failing → revert with `git checkout -- package.json package-lock.json` (or equivalent for the runtime). Warn user and continue without tests.

### B4.5. First real tests

Generate 3-5 real tests for existing code:

1. **Find recently changed files:** `git log --since=30.days --name-only --format="" | sort | uniq -c | sort -rn | head -10`
2. **Prioritize by risk:** Error handlers > business logic with conditionals > API endpoints > pure functions
3. **For each file:** Write one test that tests real behavior with meaningful assertions. Never `expect(x).toBeDefined()` — test what the code DOES.
4. Run each test. Passes → keep. Fails → fix once. Still fails → delete silently.
5. Generate at least 1 test, cap at 5.

Never import secrets, API keys, or credentials in test files. Use environment variables or test fixtures.

### B5. Verify

```bash
# Run the full test suite to confirm everything works
{detected test command}
```

If tests fail → debug once. If still failing → revert all bootstrap changes and warn user.

### B5.5. CI/CD pipeline

```bash
# Check CI provider
ls -d .github/ 2>/dev/null && echo "CI:github"
ls .gitlab-ci.yml .circleci/ bitrise.yml 2>/dev/null
```

If `.github/` exists (or no CI detected — default to GitHub Actions):
Create `.github/workflows/test.yml` with:
- `runs-on: ubuntu-latest`
- Appropriate setup action for the runtime (setup-node, setup-ruby, setup-python, etc.)
- The same test command verified in B5
- Trigger: push + pull_request
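
For a Node.js project, the requirements above can be sketched as the workflow below. This is a minimal illustration, not the exact file gstack emits — `npm test` stands in for the command verified in B5, and the Node version is an assumption:

```shell
# Write a minimal GitHub Actions workflow covering the listed requirements:
# ubuntu-latest runner, runtime setup action, verified test command,
# push + pull_request triggers.
mkdir -p .github/workflows
cat > .github/workflows/test.yml <<'EOF'
name: test
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - run: npm ci
      - run: npm test   # replace with the command verified in B5
EOF
```

For other runtimes, swap the setup action (`setup-ruby`, `setup-python`, etc.) and the install/test commands.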

If non-GitHub CI detected → skip CI generation with note: "Detected {provider} — CI pipeline generation supports GitHub Actions only. Add test step to your existing pipeline manually."

### B6. Create TESTING.md

First check: If TESTING.md already exists → read it and update/append rather than overwriting. Never destroy existing content.

Write TESTING.md with:
- Philosophy: "100% test coverage is the key to great vibe coding. Tests let you move fast, trust your instincts, and ship with confidence — without them, vibe coding is just yolo coding. With tests, it's a superpower."
- Framework name and version
- How to run tests (the verified command from B5)
- Test layers: Unit tests (what, where, when), Integration tests, Smoke tests, E2E tests
- Conventions: file naming, assertion style, setup/teardown patterns

### B7. Update CLAUDE.md

First check: If CLAUDE.md already has a `## Testing` section → skip. Don't duplicate.

Append a `## Testing` section:
- Run command and test directory
- Reference to TESTING.md
- Test expectations:
  - 100% test coverage is the goal — tests make vibe coding safe
  - When writing new functions, write a corresponding test
  - When fixing a bug, write a regression test
  - When adding error handling, write a test that triggers the error
  - When adding a conditional (if/else, switch), write tests for BOTH paths
  - Never commit code that makes existing tests fail

### B8. Commit

```bash
git status --porcelain
```

Only commit if there are changes. Stage all bootstrap files (config, test directory, TESTING.md, CLAUDE.md, .github/workflows/test.yml if created):
`git commit -m "chore: bootstrap test framework ({framework name})"`

---

---

## Step 3: Run tests (on merged code)

**Do NOT run `RAILS_ENV=test bin/rails db:migrate`** — `bin/test-lane` already calls
@@ -269,6 +453,144 @@ If multiple suites need to run, run them sequentially (each needs a test lane).

---

## Step 3.4: Test Coverage Audit

100% coverage is the goal — every untested path is a path where bugs hide and vibe coding becomes yolo coding. Evaluate what was ACTUALLY coded (from the diff), not what was planned.

**0. Before/after test count:**

```bash
# Count test files before any generation
find . -name '*.test.*' -o -name '*.spec.*' -o -name '*_test.*' -o -name '*_spec.*' | grep -v node_modules | wc -l
```

Store this number for the PR body.

**1. Trace every codepath changed** using `git diff origin/<base>...HEAD`:

Read every changed file. For each one, trace how data flows through the code — don't just list functions, actually follow the execution:

1. **Read the diff.** For each changed file, read the full file (not just the diff hunk) to understand context.
2. **Trace data flow.** Starting from each entry point (route handler, exported function, event listener, component render), follow the data through every branch:
   - Where does input come from? (request params, props, database, API call)
   - What transforms it? (validation, mapping, computation)
   - Where does it go? (database write, API response, rendered output, side effect)
   - What can go wrong at each step? (null/undefined, invalid input, network failure, empty collection)
3. **Diagram the execution.** For each changed file, draw an ASCII diagram showing:
   - Every function/method that was added or modified
   - Every conditional branch (if/else, switch, ternary, guard clause, early return)
   - Every error path (try/catch, rescue, error boundary, fallback)
   - Every call to another function (trace into it — does IT have untested branches?)
   - Every edge: what happens with null input? Empty array? Invalid type?

This is the critical step — you're building a map of every line of code that can execute differently based on input. Every branch in this diagram needs a test.

**2. Map user flows, interactions, and error states:**

Code coverage isn't enough — you need to cover how real users interact with the changed code. For each changed feature, think through:

- **User flows:** What sequence of actions does a user take that touches this code? Map the full journey (e.g., "user clicks 'Pay' → form validates → API call → success/failure screen"). Each step in the journey needs a test.
- **Interaction edge cases:** What happens when the user does something unexpected?
  - Double-click/rapid resubmit
  - Navigate away mid-operation (back button, close tab, click another link)
  - Submit with stale data (page sat open for 30 minutes, session expired)
  - Slow connection (API takes 10 seconds — what does the user see?)
  - Concurrent actions (two tabs, same form)
- **Error states the user can see:** For every error the code handles, what does the user actually experience?
  - Is there a clear error message or a silent failure?
  - Can the user recover (retry, go back, fix input) or are they stuck?
  - What happens with no network? With a 500 from the API? With invalid data from the server?
- **Empty/zero/boundary states:** What does the UI show with zero results? With 10,000 results? With a single character input? With maximum-length input?

Add these to your diagram alongside the code branches. A user flow with no test is just as much a gap as an untested if/else.

**3. Check each branch against existing tests:**

Go through your diagram branch by branch — both code paths AND user flows. For each one, search for a test that exercises it:
- Function `processPayment()` → look for `billing.test.ts`, `billing.spec.ts`, `test/billing_test.rb`
- An if/else → look for tests covering BOTH the true AND false path
- An error handler → look for a test that triggers that specific error condition
- A call to `helperFn()` that has its own branches → those branches need tests too
- A user flow → look for an integration or E2E test that walks through the journey
- An interaction edge case → look for a test that simulates the unexpected action

Quality scoring rubric:
- ★★★ Tests behavior with edge cases AND error paths
- ★★ Tests correct behavior, happy path only
- ★ Smoke test / existence check / trivial assertion (e.g., "it renders", "it doesn't throw")

**4. Output ASCII coverage diagram:**

Include BOTH code paths and user flows in the same diagram:

```
CODE PATH COVERAGE
===========================
[+] src/services/billing.ts
│
├── processPayment()
│   ├── [★★★ TESTED] Happy path + card declined + timeout — billing.test.ts:42
│   ├── [GAP] Network timeout — NO TEST
│   └── [GAP] Invalid currency — NO TEST
│
└── refundPayment()
    ├── [★★ TESTED] Full refund — billing.test.ts:89
    └── [★ TESTED] Partial refund (checks non-throw only) — billing.test.ts:101

USER FLOW COVERAGE
===========================
[+] Payment checkout flow
│
├── [★★★ TESTED] Complete purchase — checkout.e2e.ts:15
├── [GAP] Double-click submit — NO TEST
├── [GAP] Navigate away during payment — NO TEST
└── [★ TESTED] Form validation errors (checks render only) — checkout.test.ts:40

[+] Error states
│
├── [★★ TESTED] Card declined message — billing.test.ts:58
├── [GAP] Network timeout UX (what does user see?) — NO TEST
└── [GAP] Empty cart submission — NO TEST

─────────────────────────────────
COVERAGE: 6/12 paths tested (50%)
  Code paths: 3/5 (60%)
  User flows: 3/7 (43%)
QUALITY: ★★★: 2  ★★: 2  ★: 2
GAPS: 6 paths need tests
─────────────────────────────────
```

**Fast path:** All paths covered → "Step 3.4: All new code paths have test coverage ✓" Continue.

**5. Generate tests for uncovered paths:**

If test framework detected (or bootstrapped in Step 2.5):
- Prioritize error handlers and edge cases first (happy paths are more likely already tested)
- Read 2-3 existing test files to match conventions exactly
- Generate unit tests. Mock all external dependencies (DB, API, Redis).
- Write tests that exercise the specific uncovered path with real assertions
- Run each test. Passes → commit as `test: coverage for {feature}`
- Fails → fix once. Still fails → revert, note gap in diagram.

Caps: 30 code paths max, 20 tests generated max (code + user flow combined), 2-min per-test exploration cap.

If no test framework AND user declined bootstrap → diagram only, no generation. Note: "Test generation skipped — no test framework configured."

**Diff is test-only changes:** Skip Step 3.4 entirely: "No new application code paths to audit."

**6. After-count and coverage summary:**

```bash
# Count test files after generation
find . -name '*.test.*' -o -name '*.spec.*' -o -name '*_test.*' -o -name '*_spec.*' | grep -v node_modules | wc -l
```

For PR body: `Tests: {before} → {after} (+{delta} new)`
Coverage line: `Test Coverage Audit: N new code paths. M covered (X%). K tests generated, J committed.`
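
The delta for the PR body line is plain shell arithmetic over the two `wc -l` counts. A minimal sketch with illustrative counts standing in for the captured values:

```shell
# BEFORE/AFTER stand in for the two `wc -l` test-file counts from steps 0 and 6.
BEFORE=12   # illustrative value
AFTER=17    # illustrative value
DELTA=$((AFTER - BEFORE))
echo "Tests: ${BEFORE} → ${AFTER} (+${DELTA} new)"
# → Tests: 12 → 17 (+5 new)
```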

---

## Step 3.5: Pre-Landing Review

Review the diff for structural issues that tests don't catch.
@@ -497,6 +819,10 @@ gh pr create --base <base> --title "<type>: <summary>" --body "$(cat <<'EOF'
## Summary
<bullet points from CHANGELOG>

## Test Coverage
<coverage diagram from Step 3.4, or "All new code paths have test coverage.">
<If Step 3.4 ran: "Tests: {before} → {after} (+{delta} new)">

## Pre-Landing Review
<findings from Step 3.5, or "No issues found.">

@@ -538,4 +864,5 @@ EOF
- **Split commits for bisectability** — each commit = one logical change.
- **TODOS.md completion detection must be conservative.** Only mark items as completed when the diff clearly shows the work is done.
- **Use Greptile reply templates from greptile-triage.md.** Every reply includes evidence (inline diff, code references, re-rank suggestion). Never post vague replies.
- **Step 3.4 generates coverage tests.** They must pass before committing. Never commit failing tests.
- **The goal is: user says `/ship`, next thing they see is the review + PR URL.**

+172 -4
@@ -11,6 +11,7 @@ allowed-tools:
- Grep
- Glob
- AskUserQuestion
- WebSearch
---

{{PREAMBLE}}
@@ -39,6 +40,7 @@ You are running the `/ship` workflow. This is a **non-interactive, fully automat
- Multi-file changesets (auto-split into bisectable commits)
- TODOS.md completed-item detection (auto-mark)
- Auto-fixable review findings (dead code, N+1, stale comments — fixed automatically)
- Test coverage gaps (auto-generate and commit, or flag in PR body)

---

@@ -54,10 +56,27 @@ You are running the `/ship` workflow. This is a **non-interactive, fully automat

{{REVIEW_DASHBOARD}}

If the verdict is NOT "CLEARED TO SHIP (3/3)", use AskUserQuestion:
- Show which reviews are missing or have open issues
- RECOMMENDATION: Choose B (run missing reviews first) unless the change is trivial
- Options: A) Ship anyway B) Abort — run missing review(s) first C) Reviews not relevant for this change
If the Eng Review is NOT "CLEAR":

1. **Check for a prior override on this branch:**
   ```bash
   eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
   grep '"skill":"ship-review-override"' ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl 2>/dev/null || echo "NO_OVERRIDE"
   ```
   If an override exists, display the dashboard and note "Review gate previously accepted — continuing." Do NOT ask again.

2. **If no override exists,** use AskUserQuestion:
   - Show that Eng Review is missing or has open issues
   - RECOMMENDATION: Choose C if the change is obviously trivial (< 20 lines, typo fix, config-only); Choose B for larger changes
   - Options: A) Ship anyway B) Abort — run /plan-eng-review first C) Change is too small to need eng review
   - If CEO/Design reviews are missing, mention them as informational ("CEO Review not run — recommended for product changes") but do NOT block or recommend aborting for them

3. **If the user chooses A or C,** persist the decision so future `/ship` runs on this branch skip the gate:
   ```bash
   eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
   echo '{"skill":"ship-review-override","timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'","decision":"USER_CHOICE"}' >> ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl
   ```
   Substitute USER_CHOICE with "ship_anyway" or "not_relevant".

---

@@ -75,6 +94,12 @@ git fetch origin <base> && git merge origin/<base> --no-edit

---

## Step 2.5: Test Framework Bootstrap

{{TEST_BOOTSTRAP}}

---

## Step 3: Run tests (on merged code)

**Do NOT run `RAILS_ENV=test bin/rails db:migrate`** — `bin/test-lane` already calls
@@ -159,6 +184,144 @@ If multiple suites need to run, run them sequentially (each needs a test lane).
|
||||
|
||||
---
|
||||
|
||||
## Step 3.4: Test Coverage Audit
|
||||
|
||||
100% coverage is the goal — every untested path is a path where bugs hide and vibe coding becomes yolo coding. Evaluate what was ACTUALLY coded (from the diff), not what was planned.
|
||||
|
||||
**0. Before/after test count:**
|
||||
|
||||
```bash
|
||||
# Count test files before any generation
|
||||
find . -name '*.test.*' -o -name '*.spec.*' -o -name '*_test.*' -o -name '*_spec.*' | grep -v node_modules | wc -l
|
||||
```
|
||||
|
||||
Store this number for the PR body.
|
||||
|
||||
**1. Trace every codepath changed** using `git diff origin/<base>...HEAD`:
|
||||
|
||||
Read every changed file. For each one, trace how data flows through the code — don't just list functions, actually follow the execution:
|
||||
|
||||
1. **Read the diff.** For each changed file, read the full file (not just the diff hunk) to understand context.
|
||||
2. **Trace data flow.** Starting from each entry point (route handler, exported function, event listener, component render), follow the data through every branch:
|
||||
- Where does input come from? (request params, props, database, API call)
|
||||
- What transforms it? (validation, mapping, computation)
|
||||
- Where does it go? (database write, API response, rendered output, side effect)
|
||||
- What can go wrong at each step? (null/undefined, invalid input, network failure, empty collection)
|
||||
3. **Diagram the execution.** For each changed file, draw an ASCII diagram showing:
|
||||
- Every function/method that was added or modified
|
||||
- Every conditional branch (if/else, switch, ternary, guard clause, early return)
|
||||
- Every error path (try/catch, rescue, error boundary, fallback)
|
||||
- Every call to another function (trace into it — does IT have untested branches?)
|
||||
- Every edge: what happens with null input? Empty array? Invalid type?
|
||||
|
||||
This is the critical step — you're building a map of every line of code that can execute differently based on input. Every branch in this diagram needs a test.
|
||||
|
||||
**2. Map user flows, interactions, and error states:**

Code coverage isn't enough — you need to cover how real users interact with the changed code. For each changed feature, think through:

- **User flows:** What sequence of actions does a user take that touches this code? Map the full journey (e.g., "user clicks 'Pay' → form validates → API call → success/failure screen"). Each step in the journey needs a test.
- **Interaction edge cases:** What happens when the user does something unexpected?
  - Double-click/rapid resubmit
  - Navigate away mid-operation (back button, close tab, click another link)
  - Submit with stale data (page sat open for 30 minutes, session expired)
  - Slow connection (API takes 10 seconds — what does the user see?)
  - Concurrent actions (two tabs, same form)
- **Error states the user can see:** For every error the code handles, what does the user actually experience?
  - Is there a clear error message or a silent failure?
  - Can the user recover (retry, go back, fix input) or are they stuck?
  - What happens with no network? With a 500 from the API? With invalid data from the server?
- **Empty/zero/boundary states:** What does the UI show with zero results? With 10,000 results? With a single-character input? With maximum-length input?

Add these to your diagram alongside the code branches. A user flow with no test is just as much a gap as an untested if/else.
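One of the edge cases above, double-click/rapid resubmit, can be pinned down with a small guard and a test. This is a sketch around a hypothetical submit handler, not code from this repo:

```typescript
// Hypothetical double-submit guard: while a submit is in flight,
// further clicks are ignored instead of firing a second request.
type Submit = () => Promise<string>;

function makeGuardedSubmit(submit: Submit) {
  let inFlight = false;
  let calls = 0;
  return {
    calls: () => calls,
    async click(): Promise<string | null> {
      if (inFlight) return null;  // rapid resubmit: ignored
      inFlight = true;
      calls += 1;
      try { return await submit(); } finally { inFlight = false; }
    },
  };
}
```

A UI test would drive the same invariant through two rapid click events: exactly one network call, second click ignored.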
**3. Check each branch against existing tests:**

Go through your diagram branch by branch — both code paths AND user flows. For each one, search for a test that exercises it:

- Function `processPayment()` → look for `billing.test.ts`, `billing.spec.ts`, `test/billing_test.rb`
- An if/else → look for tests covering BOTH the true AND false path
- An error handler → look for a test that triggers that specific error condition
- A call to `helperFn()` that has its own branches → those branches need tests too
- A user flow → look for an integration or E2E test that walks through the journey
- An interaction edge case → look for a test that simulates the unexpected action

Quality scoring rubric:

- ★★★ Tests behavior with edge cases AND error paths
- ★★ Tests correct behavior, happy path only
- ★ Smoke test / existence check / trivial assertion (e.g., "it renders", "it doesn't throw")
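To make the rubric concrete, here is a ★ test next to a ★★★ test for the same hypothetical function (all names invented for illustration):

```typescript
// Hypothetical validator, used only to illustrate the rubric.
function parseAmount(input: string): number {
  const n = Number(input);
  if (!Number.isFinite(n) || n <= 0) throw new Error('invalid amount');
  return Math.round(n * 100) / 100;
}

// ★ — trivial assertion: proves the function exists, not that it works
function starOne() {
  parseAmount('10'); // "it doesn't throw"
}

// ★★★ — behavior plus edge cases AND error paths
function starThree() {
  if (parseAmount('10.5') !== 10.5) throw new Error('behavior');
  if (parseAmount('0.01') !== 0.01) throw new Error('boundary');
  for (const bad of ['', 'abc', '-5', '0', 'Infinity']) {
    let threw = false;
    try { parseAmount(bad); } catch { threw = true; }
    if (!threw) throw new Error(`expected throw for "${bad}"`);
  }
}
```

The ★ version keeps passing even if validation or rounding regresses; the ★★★ version fails the moment any branch breaks.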
**4. Output ASCII coverage diagram:**

Include BOTH code paths and user flows in the same diagram:

```
CODE PATH COVERAGE
===========================
[+] src/services/billing.ts
    │
    ├── processPayment()
    │   ├── [★★★ TESTED] Happy path + card declined — billing.test.ts:42
    │   ├── [GAP] Network timeout — NO TEST
    │   └── [GAP] Invalid currency — NO TEST
    │
    └── refundPayment()
        ├── [★★ TESTED] Full refund — billing.test.ts:89
        └── [★ TESTED] Partial refund (checks non-throw only) — billing.test.ts:101

USER FLOW COVERAGE
===========================
[+] Payment checkout flow
    │
    ├── [★★★ TESTED] Complete purchase — checkout.e2e.ts:15
    ├── [GAP] Double-click submit — NO TEST
    ├── [GAP] Navigate away during payment — NO TEST
    └── [★ TESTED] Form validation errors (checks render only) — checkout.test.ts:40

[+] Error states
    │
    ├── [★★ TESTED] Card declined message — billing.test.ts:58
    ├── [GAP] Network timeout UX (what does user see?) — NO TEST
    └── [GAP] Empty cart submission — NO TEST

─────────────────────────────────
COVERAGE: 6/12 paths tested (50%)
  Code paths: 3/5 (60%)
  User flows: 3/7 (43%)
QUALITY: ★★★: 2  ★★: 2  ★: 2
GAPS: 6 paths need tests
─────────────────────────────────
```

**Fast path:** All paths covered → "Step 3.4: All new code paths have test coverage ✓" Continue.
**5. Generate tests for uncovered paths:**

If test framework detected (or bootstrapped in Step 2.5):

- Prioritize error handlers and edge cases first (happy paths are more likely already tested)
- Read 2-3 existing test files to match conventions exactly
- Generate unit tests. Mock all external dependencies (DB, API, Redis).
- Write tests that exercise the specific uncovered path with real assertions
- Run each test. Passes → commit as `test: coverage for {feature}`
- Fails → fix once. Still fails → revert, note the gap in the diagram.

Caps: 30 code paths max, 20 tests generated max (code + user flow combined), 2-min per-test exploration cap.

If no test framework AND user declined bootstrap → diagram only, no generation. Note: "Test generation skipped — no test framework configured."

**Diff is test-only changes:** Skip Step 3.4 entirely: "No new application code paths to audit."
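As a sketch of what a generated coverage test might look like for the two `processPayment` gaps in the diagram above. `processPayment` is inlined here so the example is self-contained; a real generated test would import it and follow the repo's framework conventions (e.g., vitest's `expect(...).toThrow(...)`):

```typescript
// Sketch of generated coverage tests for the GAP branches above.
// processPayment mirrors the billing fixture in this diff.
function processPayment(amount: number, currency: string) {
  if (amount <= 0) throw new Error('Invalid amount');
  if (currency !== 'USD' && currency !== 'EUR') throw new Error('Unsupported currency');
  return { status: 'success', amount, currency };
}

// Minimal assertion helper so the sketch runs without a framework.
function expectThrow(fn: () => unknown, message: string) {
  try { fn(); } catch (e) {
    if ((e as Error).message === message) return;
    throw new Error(`wrong error: ${(e as Error).message}`);
  }
  throw new Error(`expected throw: ${message}`);
}

// GAP: unsupported currency
expectThrow(() => processPayment(100, 'GBP'), 'Unsupported currency');
// GAP: non-positive amount
expectThrow(() => processPayment(0, 'USD'), 'Invalid amount');
```

Each assertion exercises one specific uncovered branch with a real expectation, not a smoke check.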
**6. After-count and coverage summary:**

```bash
# Count test files after generation
find . -name '*.test.*' -o -name '*.spec.*' -o -name '*_test.*' -o -name '*_spec.*' | grep -v node_modules | wc -l
```

For PR body: `Tests: {before} → {after} (+{delta} new)`
Coverage line: `Test Coverage Audit: N new code paths. M covered (X%). K tests generated, J committed.`

---
## Step 3.5: Pre-Landing Review

Review the diff for structural issues that tests don't catch.
@@ -387,6 +550,10 @@ gh pr create --base <base> --title "<type>: <summary>" --body "$(cat <<'EOF'
## Summary
<bullet points from CHANGELOG>

## Test Coverage
<coverage diagram from Step 3.4, or "All new code paths have test coverage.">
<If Step 3.4 ran: "Tests: {before} → {after} (+{delta} new)">

## Pre-Landing Review
<findings from Step 3.5, or "No issues found.">

@@ -428,4 +595,5 @@ EOF
- **Split commits for bisectability** — each commit = one logical change.
- **TODOS.md completion detection must be conservative.** Only mark items as completed when the diff clearly shows the work is done.
- **Use Greptile reply templates from greptile-triage.md.** Every reply includes evidence (inline diff, code references, re-rank suggestion). Never post vague replies.
- **Step 3.4 generates coverage tests.** They must pass before committing. Never commit failing tests.
- **The goal:** user says `/ship`; the next thing they see is the review + PR URL.
@@ -343,9 +343,10 @@ describe('REVIEW_DASHBOARD resolver', () => {
  test('resolver output contains key dashboard elements', () => {
    const content = fs.readFileSync(path.join(ROOT, 'plan-ceo-review', 'SKILL.md'), 'utf-8');
    expect(content).toContain('VERDICT');
    expect(content).toContain('CLEARED TO SHIP');
    expect(content).toContain('NOT YET RUN');
    expect(content).toContain('CLEARED');
    expect(content).toContain('Eng Review');
    expect(content).toContain('7 days');
    expect(content).toContain('Design Review');
    expect(content).toContain('skip_eng_review');
  });
});
@@ -894,6 +894,89 @@ Focus on reviewing the plan content: architecture, error handling, security, and
  }, 420_000);
});

// --- Plan CEO Review (SELECTIVE EXPANSION) E2E ---

describeE2E('Plan CEO Review SELECTIVE EXPANSION E2E', () => {
  let planDir: string;

  beforeAll(() => {
    planDir = fs.mkdtempSync(path.join(os.tmpdir(), 'skill-e2e-plan-ceo-sel-'));
    const { spawnSync } = require('child_process');
    const run = (cmd: string, args: string[]) =>
      spawnSync(cmd, args, { cwd: planDir, stdio: 'pipe', timeout: 5000 });

    run('git', ['init']);
    run('git', ['config', 'user.email', 'test@test.com']);
    run('git', ['config', 'user.name', 'Test']);

    fs.writeFileSync(path.join(planDir, 'plan.md'), `# Plan: Add User Dashboard

## Context
We're building a new user dashboard that shows recent activity, notifications, and quick actions.

## Changes
1. New React component \`UserDashboard\` in \`src/components/\`
2. REST API endpoint \`GET /api/dashboard\` returning user stats
3. PostgreSQL query for activity aggregation
4. Redis cache layer for dashboard data (5min TTL)

## Architecture
- Frontend: React + TailwindCSS
- Backend: Express.js REST API
- Database: PostgreSQL with existing user/activity tables
- Cache: Redis for dashboard aggregates

## Open questions
- Should we use WebSocket for real-time updates?
- How do we handle users with 100k+ activity records?
`);

    run('git', ['add', '.']);
    run('git', ['commit', '-m', 'add plan']);

    fs.mkdirSync(path.join(planDir, 'plan-ceo-review'), { recursive: true });
    fs.copyFileSync(
      path.join(ROOT, 'plan-ceo-review', 'SKILL.md'),
      path.join(planDir, 'plan-ceo-review', 'SKILL.md'),
    );
  });

  afterAll(() => {
    try { fs.rmSync(planDir, { recursive: true, force: true }); } catch {}
  });

  test('/plan-ceo-review SELECTIVE EXPANSION produces structured review output', async () => {
    const result = await runSkillTest({
      prompt: `Read plan-ceo-review/SKILL.md for the review workflow.

Read plan.md — that's the plan to review. This is a standalone plan document, not a codebase — skip any codebase exploration or system audit steps.

Choose SELECTIVE EXPANSION mode. Skip any AskUserQuestion calls — this is non-interactive.
For the cherry-pick ceremony, accept all expansion proposals automatically.
Write your complete review directly to ${planDir}/review-output-selective.md

Focus on reviewing the plan content: architecture, error handling, security, and performance.`,
      workingDirectory: planDir,
      maxTurns: 15,
      timeout: 360_000,
      testName: 'plan-ceo-review-selective',
      runId,
    });

    logCost('/plan-ceo-review (SELECTIVE)', result);
    recordE2E('/plan-ceo-review-selective', 'Plan CEO Review SELECTIVE EXPANSION E2E', result, {
      passed: ['success', 'error_max_turns'].includes(result.exitReason),
    });
    expect(['success', 'error_max_turns']).toContain(result.exitReason);

    const reviewPath = path.join(planDir, 'review-output-selective.md');
    if (fs.existsSync(reviewPath)) {
      const review = fs.readFileSync(reviewPath, 'utf-8');
      expect(review.length).toBeGreaterThan(200);
    }
  }, 420_000);
});

// --- Plan Eng Review E2E ---

describeIfSelected('Plan Eng Review E2E', ['plan-eng-review'], () => {
@@ -962,7 +1045,7 @@ Replace session-cookie auth with JWT tokens. Currently using express-session + R

Read plan.md — that's the plan to review. This is a standalone plan document, not a codebase — skip any codebase exploration steps.

Choose SMALL CHANGE mode. Skip any AskUserQuestion calls — this is non-interactive.
Proceed directly to the full review. Skip any AskUserQuestion calls — this is non-interactive.
Write your complete review directly to ${planDir}/review-output.md

Focus on architecture, code quality, tests, and performance sections.`,
@@ -1363,7 +1446,7 @@ export function main() { return Dashboard(); }

Read plan.md — that's the plan to review. This is a standalone plan with source code in app.ts and dashboard.ts.

Choose SMALL CHANGE mode. Skip any AskUserQuestion calls — this is non-interactive.
Proceed directly to the full review. Skip any AskUserQuestion calls — this is non-interactive.

IMPORTANT: After your review, you MUST write the test-plan artifact as described in the "Test Plan Artifact" section of SKILL.md. The remote-slug shim is at ${planDir}/browse/bin/remote-slug.

@@ -2261,6 +2344,269 @@ Review the site at ${serverUrl}. Use --quick mode. Skip any AskUserQuestion call
  }, 420_000);
});

// --- Test Bootstrap E2E ---

describeE2E('Test Bootstrap E2E', () => {
  let bootstrapDir: string;
  let bootstrapServer: ReturnType<typeof Bun.serve>;

  beforeAll(() => {
    bootstrapDir = fs.mkdtempSync(path.join(os.tmpdir(), 'skill-e2e-bootstrap-'));
    setupBrowseShims(bootstrapDir);

    // Copy qa skill files
    copyDirSync(path.join(ROOT, 'qa'), path.join(bootstrapDir, 'qa'));

    // Create a minimal Node.js project with NO test framework
    fs.writeFileSync(path.join(bootstrapDir, 'package.json'), JSON.stringify({
      name: 'test-bootstrap-app',
      version: '1.0.0',
      type: 'module',
    }, null, 2));

    // Create a simple app file with a bug
    fs.writeFileSync(path.join(bootstrapDir, 'app.js'), `
export function add(a, b) { return a + b; }
export function subtract(a, b) { return a - b; }
export function divide(a, b) { return a / b; } // BUG: no zero check
`);

    // Create a simple HTML page with a bug
    fs.writeFileSync(path.join(bootstrapDir, 'index.html'), `<!DOCTYPE html>
<html lang="en">
<head><meta charset="utf-8"><title>Bootstrap Test</title></head>
<body>
<h1>Test App</h1>
<a href="/nonexistent-page">Broken Link</a>
<script>console.error("ReferenceError: undefinedVar is not defined");</script>
</body>
</html>
`);

    // Init git repo
    const run = (cmd: string, args: string[]) =>
      spawnSync(cmd, args, { cwd: bootstrapDir, stdio: 'pipe', timeout: 5000 });
    run('git', ['init']);
    run('git', ['config', 'user.email', 'test@test.com']);
    run('git', ['config', 'user.name', 'Test']);
    run('git', ['add', '.']);
    run('git', ['commit', '-m', 'initial commit']);

    // Serve from working directory
    bootstrapServer = Bun.serve({
      port: 0,
      hostname: '127.0.0.1',
      fetch(req) {
        const url = new URL(req.url);
        let filePath = url.pathname === '/' ? '/index.html' : url.pathname;
        filePath = filePath.replace(/^\//, '');
        const fullPath = path.join(bootstrapDir, filePath);
        if (!fs.existsSync(fullPath)) {
          return new Response('Not Found', { status: 404 });
        }
        const content = fs.readFileSync(fullPath, 'utf-8');
        return new Response(content, {
          headers: { 'Content-Type': 'text/html' },
        });
      },
    });
  });

  afterAll(() => {
    bootstrapServer?.stop();
    try { fs.rmSync(bootstrapDir, { recursive: true, force: true }); } catch {}
  });

  test('/qa bootstrap + regression test on zero-test project', async () => {
    const serverUrl = `http://127.0.0.1:${bootstrapServer!.port}`;

    const result = await runSkillTest({
      prompt: `You have a browse binary at ${browseBin}. Assign it to B variable like: B="${browseBin}"

Read the file qa/SKILL.md for the QA workflow instructions.

Run a Quick-tier QA test on ${serverUrl}
The source code for this page is at ${bootstrapDir}/index.html — you can fix bugs there.
Do NOT use AskUserQuestion — for any AskUserQuestion prompts, choose the RECOMMENDED option automatically.
Write your report to ${bootstrapDir}/qa-reports/qa-report.md

This project has NO test framework. When the bootstrap asks, pick vitest (option A).
This is a test+fix loop: find bugs, fix them, write regression tests, commit each fix.`,
      workingDirectory: bootstrapDir,
      maxTurns: 50,
      allowedTools: ['Bash', 'Read', 'Write', 'Edit', 'Glob', 'Grep'],
      timeout: 420_000,
      testName: 'qa-bootstrap',
      runId,
    });

    logCost('/qa bootstrap', result);
    recordE2E('/qa bootstrap + regression test', 'Test Bootstrap E2E', result, {
      passed: ['success', 'error_max_turns'].includes(result.exitReason),
    });

    expect(['success', 'error_max_turns']).toContain(result.exitReason);

    // Verify bootstrap created test infrastructure
    const hasTestConfig = fs.existsSync(path.join(bootstrapDir, 'vitest.config.ts'))
      || fs.existsSync(path.join(bootstrapDir, 'vitest.config.js'))
      || fs.existsSync(path.join(bootstrapDir, 'jest.config.js'))
      || fs.existsSync(path.join(bootstrapDir, 'jest.config.ts'));
    console.log(`Test config created: ${hasTestConfig}`);

    const hasTestingMd = fs.existsSync(path.join(bootstrapDir, 'TESTING.md'));
    console.log(`TESTING.md created: ${hasTestingMd}`);

    // Check for bootstrap commit
    const gitLog = spawnSync('git', ['log', '--oneline', '--grep=bootstrap'], {
      cwd: bootstrapDir, stdio: 'pipe',
    });
    const bootstrapCommits = gitLog.stdout.toString().trim();
    console.log(`Bootstrap commits: ${bootstrapCommits || 'none'}`);

    // Check for regression test commits
    const regressionLog = spawnSync('git', ['log', '--oneline', '--grep=test(qa)'], {
      cwd: bootstrapDir, stdio: 'pipe',
    });
    const regressionCommits = regressionLog.stdout.toString().trim();
    console.log(`Regression test commits: ${regressionCommits || 'none'}`);

    // Verify at least the bootstrap happened (fix commits are bonus)
    const allCommits = spawnSync('git', ['log', '--oneline'], {
      cwd: bootstrapDir, stdio: 'pipe',
    });
    const totalCommits = allCommits.stdout.toString().trim().split('\n').length;
    console.log(`Total commits: ${totalCommits}`);
    expect(totalCommits).toBeGreaterThan(1); // At least initial + bootstrap
  }, 420_000);
});
// --- Test Coverage Audit E2E ---

describeE2E('Test Coverage Audit E2E', () => {
  let coverageDir: string;

  beforeAll(() => {
    coverageDir = fs.mkdtempSync(path.join(os.tmpdir(), 'skill-e2e-coverage-'));

    // Copy ship skill files
    copyDirSync(path.join(ROOT, 'ship'), path.join(coverageDir, 'ship'));
    copyDirSync(path.join(ROOT, 'review'), path.join(coverageDir, 'review'));

    // Create a Node.js project WITH test framework but coverage gaps
    fs.writeFileSync(path.join(coverageDir, 'package.json'), JSON.stringify({
      name: 'test-coverage-app',
      version: '1.0.0',
      type: 'module',
      scripts: { test: 'echo "no tests yet"' },
      devDependencies: { vitest: '^1.0.0' },
    }, null, 2));

    // Create vitest config
    fs.writeFileSync(path.join(coverageDir, 'vitest.config.ts'),
      `import { defineConfig } from 'vitest/config';\nexport default defineConfig({ test: {} });\n`);

    fs.writeFileSync(path.join(coverageDir, 'VERSION'), '0.1.0.0\n');
    fs.writeFileSync(path.join(coverageDir, 'CHANGELOG.md'), '# Changelog\n');

    // Create source file with multiple code paths
    fs.mkdirSync(path.join(coverageDir, 'src'), { recursive: true });
    fs.writeFileSync(path.join(coverageDir, 'src', 'billing.ts'), `
export function processPayment(amount: number, currency: string) {
  if (amount <= 0) throw new Error('Invalid amount');
  if (currency !== 'USD' && currency !== 'EUR') throw new Error('Unsupported currency');
  return { status: 'success', amount, currency };
}

export function refundPayment(paymentId: string, reason: string) {
  if (!paymentId) throw new Error('Payment ID required');
  if (!reason) throw new Error('Reason required');
  return { status: 'refunded', paymentId, reason };
}
`);

    // Create a test directory with ONE test (partial coverage)
    fs.mkdirSync(path.join(coverageDir, 'test'), { recursive: true });
    fs.writeFileSync(path.join(coverageDir, 'test', 'billing.test.ts'), `
import { describe, test, expect } from 'vitest';
import { processPayment } from '../src/billing';

describe('processPayment', () => {
  test('processes valid payment', () => {
    const result = processPayment(100, 'USD');
    expect(result.status).toBe('success');
  });
  // GAP: no test for invalid amount
  // GAP: no test for unsupported currency
  // GAP: refundPayment not tested at all
});
`);

    // Init git repo with main branch
    const run = (cmd: string, args: string[]) =>
      spawnSync(cmd, args, { cwd: coverageDir, stdio: 'pipe', timeout: 5000 });
    run('git', ['init', '-b', 'main']);
    run('git', ['config', 'user.email', 'test@test.com']);
    run('git', ['config', 'user.name', 'Test']);
    run('git', ['add', '.']);
    run('git', ['commit', '-m', 'initial commit']);

    // Create feature branch
    run('git', ['checkout', '-b', 'feature/billing']);
  });

  afterAll(() => {
    try { fs.rmSync(coverageDir, { recursive: true, force: true }); } catch {}
  });

  test('/ship Step 3.4 produces coverage diagram', async () => {
    const result = await runSkillTest({
      prompt: `Read the file ship/SKILL.md for the ship workflow instructions.

You are on the feature/billing branch. The base branch is main.
This is a test project — there is no remote, no PR to create.

ONLY run Step 3.4 (Test Coverage Audit) from the ship workflow.
Skip all other steps (tests, evals, review, version, changelog, commit, push, PR).

The source code is in ${coverageDir}/src/billing.ts.
Existing tests are in ${coverageDir}/test/billing.test.ts.
The test command is: echo "tests pass" (mocked — just pretend tests pass).

Produce the ASCII coverage diagram showing which code paths are tested and which have gaps.
Do NOT generate new tests — just produce the diagram and coverage summary.
Output the diagram directly.`,
      workingDirectory: coverageDir,
      maxTurns: 15,
      allowedTools: ['Bash', 'Read', 'Write', 'Edit', 'Glob', 'Grep'],
      timeout: 120_000,
      testName: 'ship-coverage-audit',
      runId,
    });

    logCost('/ship coverage audit', result);
    recordE2E('/ship Step 3.4 coverage audit', 'Test Coverage Audit E2E', result, {
      passed: result.exitReason === 'success',
    });

    expect(result.exitReason).toBe('success');

    // Check output contains coverage diagram elements
    const output = result.output || '';
    const hasGap = output.includes('GAP') || output.includes('gap') || output.includes('NO TEST');
    const hasTested = output.includes('TESTED') || output.includes('tested') || output.includes('✓');
    const hasCoverage = output.includes('COVERAGE') || output.includes('coverage') || output.includes('paths tested');

    console.log(`Output has GAP markers: ${hasGap}`);
    console.log(`Output has TESTED markers: ${hasTested}`);
    console.log(`Output has coverage summary: ${hasCoverage}`);

    // At minimum, the agent should have read the source and test files
    const readCalls = result.toolCalls.filter(tc => tc.tool === 'Read');
    expect(readCalls.length).toBeGreaterThan(0);
  }, 180_000);
});

// Module-level afterAll — finalize eval collector after all tests complete
afterAll(async () => {
  if (evalCollector) {
@@ -666,6 +666,36 @@ describe('Planted-bug fixture validation', () => {
  });
});

// --- CEO review mode validation ---

describe('CEO review mode validation', () => {
  const content = fs.readFileSync(path.join(ROOT, 'plan-ceo-review', 'SKILL.md'), 'utf-8');

  test('has all four CEO review modes defined', () => {
    const modes = ['SCOPE EXPANSION', 'SELECTIVE EXPANSION', 'HOLD SCOPE', 'SCOPE REDUCTION'];
    for (const mode of modes) {
      expect(content).toContain(mode);
    }
  });

  test('has CEO plan persistence step', () => {
    expect(content).toContain('ceo-plans');
    expect(content).toContain('status: ACTIVE');
  });

  test('has docs/designs promotion section', () => {
    expect(content).toContain('docs/designs');
    expect(content).toContain('PROMOTED');
  });

  test('mode quick reference has four columns', () => {
    expect(content).toContain('EXPANSION');
    expect(content).toContain('SELECTIVE');
    expect(content).toContain('HOLD SCOPE');
    expect(content).toContain('REDUCTION');
  });
});

// --- gstack-slug helper ---

describe('gstack-slug', () => {
@@ -707,3 +737,225 @@ describe('gstack-slug', () => {
    expect(lines[1]).toMatch(/^BRANCH=.+/);
  });
});

// --- Test Bootstrap validation ---

describe('Test Bootstrap ({{TEST_BOOTSTRAP}}) integration', () => {
  test('TEST_BOOTSTRAP resolver produces valid content', () => {
    const qaContent = fs.readFileSync(path.join(ROOT, 'qa', 'SKILL.md'), 'utf-8');
    expect(qaContent).toContain('Test Framework Bootstrap');
    expect(qaContent).toContain('RUNTIME:ruby');
    expect(qaContent).toContain('RUNTIME:node');
    expect(qaContent).toContain('RUNTIME:python');
    expect(qaContent).toContain('no-test-bootstrap');
    expect(qaContent).toContain('BOOTSTRAP_DECLINED');
  });

  test('TEST_BOOTSTRAP appears in qa/SKILL.md', () => {
    const content = fs.readFileSync(path.join(ROOT, 'qa', 'SKILL.md'), 'utf-8');
    expect(content).toContain('Test Framework Bootstrap');
    expect(content).toContain('TESTING.md');
    expect(content).toContain('CLAUDE.md');
  });

  test('TEST_BOOTSTRAP appears in ship/SKILL.md', () => {
    const content = fs.readFileSync(path.join(ROOT, 'ship', 'SKILL.md'), 'utf-8');
    expect(content).toContain('Test Framework Bootstrap');
    expect(content).toContain('Step 2.5');
  });

  test('TEST_BOOTSTRAP appears in qa-design-review/SKILL.md', () => {
    const content = fs.readFileSync(path.join(ROOT, 'qa-design-review', 'SKILL.md'), 'utf-8');
    expect(content).toContain('Test Framework Bootstrap');
  });

  test('TEST_BOOTSTRAP does NOT appear in qa-only/SKILL.md', () => {
    const content = fs.readFileSync(path.join(ROOT, 'qa-only', 'SKILL.md'), 'utf-8');
    expect(content).not.toContain('Test Framework Bootstrap');
    // But should have the recommendation note
    expect(content).toContain('No test framework detected');
    expect(content).toContain('Run `/qa` to bootstrap');
  });

  test('bootstrap includes framework knowledge table', () => {
    const content = fs.readFileSync(path.join(ROOT, 'qa', 'SKILL.md'), 'utf-8');
    expect(content).toContain('vitest');
    expect(content).toContain('minitest');
    expect(content).toContain('pytest');
    expect(content).toContain('cargo test');
    expect(content).toContain('phpunit');
    expect(content).toContain('ExUnit');
  });

  test('bootstrap includes CI/CD pipeline generation', () => {
    const content = fs.readFileSync(path.join(ROOT, 'qa', 'SKILL.md'), 'utf-8');
    expect(content).toContain('.github/workflows/test.yml');
    expect(content).toContain('GitHub Actions');
  });

  test('bootstrap includes first real tests step', () => {
    const content = fs.readFileSync(path.join(ROOT, 'qa', 'SKILL.md'), 'utf-8');
    expect(content).toContain('First real tests');
    expect(content).toContain('git log --since=30.days');
    expect(content).toContain('Prioritize by risk');
  });

  test('bootstrap includes vibe coding philosophy', () => {
    const content = fs.readFileSync(path.join(ROOT, 'qa', 'SKILL.md'), 'utf-8');
    expect(content).toContain('vibe coding');
    expect(content).toContain('100% test coverage');
  });

  test('WebSearch is in allowed-tools for qa, ship, qa-design-review', () => {
    const qa = fs.readFileSync(path.join(ROOT, 'qa', 'SKILL.md'), 'utf-8');
    const ship = fs.readFileSync(path.join(ROOT, 'ship', 'SKILL.md'), 'utf-8');
    const qaDesign = fs.readFileSync(path.join(ROOT, 'qa-design-review', 'SKILL.md'), 'utf-8');
    expect(qa).toContain('WebSearch');
    expect(ship).toContain('WebSearch');
    expect(qaDesign).toContain('WebSearch');
  });
});

// --- Phase 8e.5 regression test validation ---

describe('Phase 8e.5 regression test generation', () => {
  test('qa/SKILL.md contains Phase 8e.5', () => {
    const content = fs.readFileSync(path.join(ROOT, 'qa', 'SKILL.md'), 'utf-8');
    expect(content).toContain('8e.5. Regression Test');
    expect(content).toContain('test(qa): regression test');
    expect(content).toContain('WTF-likelihood exclusion');
  });

  test('qa/SKILL.md Rule 13 is amended for regression tests', () => {
    const content = fs.readFileSync(path.join(ROOT, 'qa', 'SKILL.md'), 'utf-8');
    expect(content).toContain('Only modify tests when generating regression tests in Phase 8e.5');
    expect(content).not.toContain('Never modify tests or CI configuration');
  });

  test('qa-design-review has CSS-aware Phase 8e.5 variant', () => {
    const content = fs.readFileSync(path.join(ROOT, 'qa-design-review', 'SKILL.md'), 'utf-8');
    expect(content).toContain('8e.5. Regression Test (design-review variant)');
    expect(content).toContain('CSS-only');
    expect(content).toContain('test(design): regression test');
  });

  test('regression test includes full attribution comment format', () => {
    const content = fs.readFileSync(path.join(ROOT, 'qa', 'SKILL.md'), 'utf-8');
    expect(content).toContain('// Regression: ISSUE-NNN');
    expect(content).toContain('// Found by /qa on');
    expect(content).toContain('// Report: .gstack/qa-reports/');
  });

  test('regression test uses auto-incrementing names', () => {
    const content = fs.readFileSync(path.join(ROOT, 'qa', 'SKILL.md'), 'utf-8');
    expect(content).toContain('auto-incrementing');
    expect(content).toContain('max number + 1');
  });
});

// --- Step 3.4 coverage audit validation ---

describe('Step 3.4 test coverage audit', () => {
  test('ship/SKILL.md contains Step 3.4', () => {
    const content = fs.readFileSync(path.join(ROOT, 'ship', 'SKILL.md'), 'utf-8');
    expect(content).toContain('Step 3.4: Test Coverage Audit');
    expect(content).toContain('CODE PATH COVERAGE');
  });

  test('Step 3.4 includes quality scoring rubric', () => {
    const content = fs.readFileSync(path.join(ROOT, 'ship', 'SKILL.md'), 'utf-8');
    expect(content).toContain('★★★');
    expect(content).toContain('★★');
    expect(content).toContain('edge cases AND error paths');
    expect(content).toContain('happy path only');
  });

  test('Step 3.4 includes before/after test count', () => {
    const content = fs.readFileSync(path.join(ROOT, 'ship', 'SKILL.md'), 'utf-8');
    expect(content).toContain('Count test files before');
    expect(content).toContain('Count test files after');
  });

  test('ship PR body includes Test Coverage section', () => {
    const content = fs.readFileSync(path.join(ROOT, 'ship', 'SKILL.md'), 'utf-8');
    expect(content).toContain('## Test Coverage');
  });

  test('ship rules include test generation rule', () => {
    const content = fs.readFileSync(path.join(ROOT, 'ship', 'SKILL.md'), 'utf-8');
    expect(content).toContain('Step 3.4 generates coverage tests');
    expect(content).toContain('Never commit failing tests');
  });

  test('Step 3.4 includes vibe coding philosophy', () => {
    const content = fs.readFileSync(path.join(ROOT, 'ship', 'SKILL.md'), 'utf-8');
    expect(content).toContain('vibe coding becomes yolo coding');
  });

  test('Step 3.4 traces actual codepaths, not just syntax', () => {
    const content = fs.readFileSync(path.join(ROOT, 'ship', 'SKILL.md'), 'utf-8');
    expect(content).toContain('Trace every codepath');
    expect(content).toContain('Trace data flow');
    expect(content).toContain('Diagram the execution');
  });

  test('Step 3.4 maps user flows and interaction edge cases', () => {
    const content = fs.readFileSync(path.join(ROOT, 'ship', 'SKILL.md'), 'utf-8');
    expect(content).toContain('Map user flows');
    expect(content).toContain('Interaction edge cases');
    expect(content).toContain('Double-click');
    expect(content).toContain('Navigate away');
    expect(content).toContain('Error states the user can see');
    expect(content).toContain('Empty/zero/boundary states');
  });

  test('Step 3.4 diagram includes USER FLOW COVERAGE section', () => {
    const content = fs.readFileSync(path.join(ROOT, 'ship', 'SKILL.md'), 'utf-8');
    expect(content).toContain('USER FLOW COVERAGE');
    expect(content).toContain('Code paths:');
    expect(content).toContain('User flows:');
  });
});

// --- Retro test health validation ---

describe('Retro test health tracking', () => {
  test('retro/SKILL.md has test health data gathering commands', () => {
    const content = fs.readFileSync(path.join(ROOT, 'retro', 'SKILL.md'), 'utf-8');
    expect(content).toContain('# 10. Test file count');
    expect(content).toContain('# 11. Regression test commits');
    expect(content).toContain('# 12. Test files changed');
  });

  test('retro/SKILL.md has Test Health metrics row', () => {
    const content = fs.readFileSync(path.join(ROOT, 'retro', 'SKILL.md'), 'utf-8');
    expect(content).toContain('Test Health');
    expect(content).toContain('regression tests');
  });

  test('retro/SKILL.md has Test Health narrative section', () => {
    const content = fs.readFileSync(path.join(ROOT, 'retro', 'SKILL.md'), 'utf-8');
    expect(content).toContain('### Test Health');
    expect(content).toContain('Total test files');
    expect(content).toContain('vibe coding safe');
  });

  test('retro JSON schema includes test_health field', () => {
    const content = fs.readFileSync(path.join(ROOT, 'retro', 'SKILL.md'), 'utf-8');
    expect(content).toContain('test_health');
    expect(content).toContain('total_test_files');
expect(content).toContain('regression_test_commits');
|
||||
});
|
||||
});
|
||||
|
||||
// --- QA report template regression tests section ---
|
||||
|
||||
describe('QA report template', () => {
|
||||
test('qa-report-template.md has Regression Tests section', () => {
|
||||
const content = fs.readFileSync(path.join(ROOT, 'qa', 'templates', 'qa-report-template.md'), 'utf-8');
|
||||
expect(content).toContain('## Regression Tests');
|
||||
expect(content).toContain('committed / deferred / skipped');
|
||||
expect(content).toContain('### Deferred Tests');
|
||||
expect(content).toContain('**Precondition:**');
|
||||
});
|
||||
});