Mirror of https://github.com/garrytan/gstack.git (synced 2026-05-07 05:56:41 +02:00)
fix: resolve merge conflicts with origin/main (v0.6.0 + v0.6.0.1 + v0.6.1)

Merge main's test bootstrap, boil-the-lake completeness principle, selective expansion, ship gate overrides, and gstack-upgrade vendor sync.

Conflicts resolved:
- CHANGELOG: keep main's 0.6.1/0.6.0.1/0.6.0/0.5.4/0.5.3 entries
- VERSION: take main's 0.6.1
- design-consultation: office-hours naming + main's "what's out there" phrasing
- ship: keep both verification rules (fresh evidence + coverage tests)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
+383 -17
@@ -11,6 +11,7 @@ allowed-tools:
- Grep
- Glob
- AskUserQuestion
- WebSearch
---
<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
<!-- Regenerate: bun run gen:skill-docs -->
@@ -27,22 +28,61 @@ find ~/.gstack/sessions -mmin +120 -type f -delete 2>/dev/null || true
_CONTRIB=$(~/.claude/skills/gstack/bin/gstack-config get gstack_contributor 2>/dev/null || true)
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
echo "BRANCH: $_BRANCH"
_LAKE_SEEN=$([ -f ~/.gstack/.completeness-intro-seen ] && echo "yes" || echo "no")
echo "LAKE_INTRO: $_LAKE_SEEN"
```

If output shows `UPGRADE_AVAILABLE <old> <new>`: read `~/.claude/skills/gstack/gstack-upgrade/SKILL.md` and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined). If `JUST_UPGRADED <from> <to>`: tell the user "Running gstack v{to} (just updated!)" and continue.

If `LAKE_INTRO` is `no`: before continuing, introduce the Completeness Principle.
Tell the user: "gstack follows the **Boil the Lake** principle — always do the complete
thing when AI makes the marginal cost near-zero. Read more: https://garryslist.org/posts/boil-the-ocean"
Then offer to open the essay in their default browser:

```bash
open https://garryslist.org/posts/boil-the-ocean
touch ~/.gstack/.completeness-intro-seen
```

Only run `open` if the user says yes. Always run `touch` to mark the intro as seen. This only happens once.

## AskUserQuestion Format

**ALWAYS follow this structure for every AskUserQuestion call:**

1. **Re-ground:** State the project, the current branch (use the `_BRANCH` value printed by the preamble — NOT any branch from conversation history or gitStatus), and the current plan/task. (1-2 sentences)
2. **Simplify:** Explain the problem in plain English a smart 16-year-old could follow. No raw function names, no internal jargon, no implementation details. Use concrete examples and analogies. Say what it DOES, not what it's called.
3. **Recommend:** `RECOMMENDATION: Choose [X] because [one-line reason]`
4. **Options:** Lettered options: `A) ... B) ... C) ...`
3. **Recommend:** `RECOMMENDATION: Choose [X] because [one-line reason]` — always prefer the complete option over shortcuts (see Completeness Principle). Include `Completeness: X/10` for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers the happy path but skips some edges, 3 = shortcut that defers significant work. If both options are 8+, pick the higher; if one is ≤5, flag it.
4. **Options:** Lettered options: `A) ... B) ... C) ...` — when an option involves effort, show both scales: `(human: ~X / CC: ~Y)`

Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.

Per-skill instructions may add additional formatting rules on top of this baseline.
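
For illustration, a call that follows this structure might look like the sketch below (project, branch, numbers, and estimates are all hypothetical):

```
We're in gstack on branch feature/billing-retries, working through the payment-retry plan.
Right now, when a charge fails the app gives up immediately; we need to decide how much of
the retry behavior to build in this pass.
RECOMMENDATION: Choose A because the extra edge cases cost minutes with CC.
A) Full retry with backoff, covering timeouts and double-submits. Completeness: 9/10 (human: ~2 days / CC: ~30 min)
B) Retry once, defer the edge cases to a follow-up. Completeness: 6/10 (human: ~1 day / CC: ~15 min)
C) Ship without retries. Completeness: 3/10 (human: ~1 hour / CC: ~5 min)
```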

## Completeness Principle — Boil the Lake

AI-assisted coding makes the marginal cost of completeness near-zero. When you present options:

- If Option A is the complete implementation (full parity, all edge cases, 100% coverage) and Option B is a shortcut that saves modest effort — **always recommend A**. The delta between 80 lines and 150 lines is meaningless with CC+gstack. "Good enough" is the wrong instinct when "complete" costs minutes more.
- **Lake vs. ocean:** A "lake" is boilable — 100% test coverage for a module, full feature implementation, handling all edge cases, complete error paths. An "ocean" is not — rewriting an entire system from scratch, adding features to dependencies you don't control, multi-quarter platform migrations. Recommend boiling lakes. Flag oceans as out of scope.
- **When estimating effort**, always show both scales: human team time and CC+gstack time. The compression ratio varies by task type — use this reference:

| Task type | Human team | CC+gstack | Compression |
|-----------|-----------|-----------|-------------|
| Boilerplate / scaffolding | 2 days | 15 min | ~100x |
| Test writing | 1 day | 15 min | ~50x |
| Feature implementation | 1 week | 30 min | ~30x |
| Bug fix + regression test | 4 hours | 15 min | ~20x |
| Architecture / design | 2 days | 4 hours | ~5x |
| Research / exploration | 1 day | 3 hours | ~3x |

- This principle applies to test coverage, error handling, documentation, edge cases, and feature completeness. Don't skip the last 10% to "save time" — with AI, that 10% costs seconds.

**Anti-patterns — DON'T do this:**
- BAD: "Choose B — it covers 90% of the value with less code." (If A is only 70 lines more, choose A.)
- BAD: "We can skip edge case handling to save time." (Edge case handling costs minutes with CC.)
- BAD: "Let's defer test coverage to a follow-up PR." (Tests are the cheapest lake to boil.)
- BAD: Quoting only human-team effort: "This would take 2 weeks." (Say: "2 weeks human / ~1 hour CC.")

## Contributor Mode

If `_CONTRIB` is `true`: you are in **contributor mode**. You're a gstack user who also helps make it better.
@@ -146,6 +186,7 @@ You are running the `/ship` workflow. This is a **non-interactive, fully automat
- Multi-file changesets (auto-split into bisectable commits)
- TODOS.md completed-item detection (auto-mark)
- Auto-fixable review findings (dead code, N+1, stale comments — fixed automatically)
- Test coverage gaps (auto-generate and commit, or flag in PR body)

---

@@ -161,11 +202,13 @@ You are running the `/ship` workflow. This is a **non-interactive, fully automat

## Review Readiness Dashboard

After completing the review, read the review log to display the dashboard.
After completing the review, read the review log and config to display the dashboard.

```bash
eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
cat ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl 2>/dev/null || echo "NO_REVIEWS"
echo "---CONFIG---"
~/.claude/skills/gstack/bin/gstack-config get skip_eng_review 2>/dev/null || echo "false"
```
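
A minimal sketch of the per-skill filtering described below, assuming `jq` is available and that each log line carries `skill`, `timestamp`, and `status` fields (both are assumptions; adapt to the actual log shape):

```bash
# Hypothetical helper: latest entry per review skill, ignoring entries older than 7 days.
cutoff=$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u -v-7d +%Y-%m-%dT%H:%M:%SZ)
for skill in plan-ceo-review plan-eng-review plan-design-review; do
  jq -r --arg s "$skill" --arg cutoff "$cutoff" \
    'select(.skill == $s and .timestamp >= $cutoff) | "\(.skill) \(.timestamp) \(.status)"' \
    ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl 2>/dev/null | sort -k2 | tail -1
done
```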

Parse the output. Find the most recent entry for each skill (plan-ceo-review, plan-eng-review, plan-design-review). Ignore entries with timestamps older than 7 days. Display:
@@ -174,25 +217,48 @@ Parse the output. Find the most recent entry for each skill (pl
+====================================================================+
| REVIEW READINESS DASHBOARD                                         |
+====================================================================+
| Review          | Runs | Last Run            | Status               |
|-----------------|------|---------------------|----------------------|
| CEO Review      | 1    | 2026-03-16 14:30    | CLEAR                |
| Eng Review      | 1    | 2026-03-16 15:00    | CLEAR                |
| Design Review   | 0    | —                   | NOT YET RUN          |
| Review          | Runs | Last Run            | Status    | Required |
|-----------------|------|---------------------|-----------|----------|
| Eng Review      | 1    | 2026-03-16 15:00    | CLEAR     | YES      |
| CEO Review      | 0    | —                   | —         | no       |
| Design Review   | 0    | —                   | —         | no       |
+--------------------------------------------------------------------+
| VERDICT: 2/3 CLEAR — Design Review not yet run                      |
| VERDICT: CLEARED — Eng Review passed                                |
+====================================================================+
```

**Verdict logic:**
- **CLEARED TO SHIP (3/3)**: All three have >= 1 entry within 7 days AND most recent status is "clean"
- **N/3 CLEAR**: Show count and list which are missing, have open issues, or are stale (>7 days)
- Informational only — does NOT block.

**Review tiers:**
- **Eng Review (required by default):** The only review that gates shipping. Covers architecture, code quality, tests, performance. Can be disabled globally with `gstack-config set skip_eng_review true` (the "don't bother me" setting; see the example after this list).
- **CEO Review (optional):** Use your judgment. Recommend it for big product/business changes, new user-facing features, or scope decisions. Skip for bug fixes, refactors, infra, and cleanup.
- **Design Review (optional):** Use your judgment. Recommend it for UI/UX changes. Skip for backend-only, infra, or prompt-only changes.
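
For example, to flip that global setting (using the `gstack-config` binary shown in the preamble):

```bash
# Disable the eng-review ship gate everywhere (the "don't bother me" setting)
~/.claude/skills/gstack/bin/gstack-config set skip_eng_review true
```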

If the verdict is NOT "CLEARED TO SHIP (3/3)", use AskUserQuestion:
- Show which reviews are missing or have open issues
- RECOMMENDATION: Choose B (run missing reviews first) unless the change is trivial
- Options: A) Ship anyway B) Abort — run missing review(s) first C) Reviews not relevant for this change
**Verdict logic:**
- **CLEARED**: Eng Review has >= 1 entry within 7 days with status "clean" (or `skip_eng_review` is `true`)
- **NOT CLEARED**: Eng Review missing, stale (>7 days), or has open issues
- CEO and Design reviews are shown for context but never block shipping
- If `skip_eng_review` config is `true`, Eng Review shows "SKIPPED (global)" and verdict is CLEARED

If the Eng Review is NOT "CLEAR":

1. **Check for a prior override on this branch:**
```bash
eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
grep '"skill":"ship-review-override"' ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl 2>/dev/null || echo "NO_OVERRIDE"
```
If an override exists, display the dashboard and note "Review gate previously accepted — continuing." Do NOT ask again.

2. **If no override exists,** use AskUserQuestion:
- Show that Eng Review is missing or has open issues
- RECOMMENDATION: Choose C if the change is obviously trivial (< 20 lines, typo fix, config-only); Choose B for larger changes
- Options: A) Ship anyway B) Abort — run /plan-eng-review first C) Change is too small to need eng review
- If CEO/Design reviews are missing, mention them as informational ("CEO Review not run — recommended for product changes") but do NOT block or recommend aborting for them

3. **If the user chooses A or C,** persist the decision so future `/ship` runs on this branch skip the gate:
```bash
eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
echo '{"skill":"ship-review-override","timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'","decision":"USER_CHOICE"}' >> ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl
```
Substitute USER_CHOICE with "ship_anyway" or "not_relevant".
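
For example, choosing "Ship anyway" appends a line like this (timestamp illustrative):

```
{"skill":"ship-review-override","timestamp":"2026-03-16T15:12:03Z","decision":"ship_anyway"}
```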

---

@@ -210,6 +276,163 @@ git fetch origin <base> && git merge origin/<base> --no-edit

---

## Step 2.5: Test Framework Bootstrap

## Test Framework Bootstrap

**Detect existing test framework and project runtime:**

```bash
# Detect project runtime
[ -f Gemfile ] && echo "RUNTIME:ruby"
[ -f package.json ] && echo "RUNTIME:node"
[ -f requirements.txt ] || [ -f pyproject.toml ] && echo "RUNTIME:python"
[ -f go.mod ] && echo "RUNTIME:go"
[ -f Cargo.toml ] && echo "RUNTIME:rust"
[ -f composer.json ] && echo "RUNTIME:php"
[ -f mix.exs ] && echo "RUNTIME:elixir"
# Detect sub-frameworks
[ -f Gemfile ] && grep -q "rails" Gemfile 2>/dev/null && echo "FRAMEWORK:rails"
[ -f package.json ] && grep -q '"next"' package.json 2>/dev/null && echo "FRAMEWORK:nextjs"
# Check for existing test infrastructure
ls jest.config.* vitest.config.* playwright.config.* .rspec pytest.ini pyproject.toml phpunit.xml 2>/dev/null
ls -d test/ tests/ spec/ __tests__/ cypress/ e2e/ 2>/dev/null
# Check opt-out marker
[ -f .gstack/no-test-bootstrap ] && echo "BOOTSTRAP_DECLINED"
```

**If test framework detected** (config files or test directories found):
Print "Test framework detected: {name} ({N} existing tests). Skipping bootstrap."
Read 2-3 existing test files to learn conventions (naming, imports, assertion style, setup patterns).
Store conventions as prose context for use in Phase 8e.5 or Step 3.4. **Skip the rest of bootstrap.**

**If BOOTSTRAP_DECLINED** appears: Print "Test bootstrap previously declined — skipping." **Skip the rest of bootstrap.**

**If NO runtime detected** (no config files found): Use AskUserQuestion:
"I couldn't detect your project's language. What runtime are you using?"
Options: A) Node.js/TypeScript B) Ruby/Rails C) Python D) Go E) Rust F) PHP G) Elixir H) This project doesn't need tests.
If user picks H → write `.gstack/no-test-bootstrap` and continue without tests.
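
One way to write that opt-out marker (creating the directory in case it doesn't exist yet):

```bash
mkdir -p .gstack && touch .gstack/no-test-bootstrap
```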

**If runtime detected but no test framework — bootstrap:**

### B2. Research best practices

Use WebSearch to find current best practices for the detected runtime:
- `"[runtime] best test framework 2025 2026"`
- `"[framework A] vs [framework B] comparison"`

If WebSearch is unavailable, use this built-in knowledge table:

| Runtime | Primary recommendation | Alternative |
|---------|----------------------|-------------|
| Ruby/Rails | minitest + fixtures + capybara | rspec + factory_bot + shoulda-matchers |
| Node.js | vitest + @testing-library | jest + @testing-library |
| Next.js | vitest + @testing-library/react + playwright | jest + cypress |
| Python | pytest + pytest-cov | unittest |
| Go | stdlib testing + testify | stdlib only |
| Rust | cargo test (built-in) + mockall | — |
| PHP | phpunit + mockery | pest |
| Elixir | ExUnit (built-in) + ex_machina | — |

### B3. Framework selection

Use AskUserQuestion:
"I detected this is a [Runtime/Framework] project with no test framework. I researched current best practices. Here are the options:
A) [Primary] — [rationale]. Includes: [packages]. Supports: unit, integration, smoke, e2e
B) [Alternative] — [rationale]. Includes: [packages]
C) Skip — don't set up testing right now
RECOMMENDATION: Choose A because [reason based on project context]"

If user picks C → write `.gstack/no-test-bootstrap`. Tell the user: "If you change your mind later, delete `.gstack/no-test-bootstrap` and re-run." Continue without tests.

If multiple runtimes detected (monorepo) → ask which runtime to set up first, with option to do both sequentially.

### B4. Install and configure

1. Install the chosen packages (npm/bun/gem/pip/etc.)
2. Create minimal config file
3. Create directory structure (test/, spec/, etc.)
4. Create one example test matching the project's code to verify setup works (see the sketch after this list)
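
As a concrete sketch, a Node.js + vitest bootstrap might look roughly like this (package, config, and file names are illustrative; substitute whatever framework the user chose):

```bash
# 1. Install the chosen packages
npm install --save-dev vitest

# 2. Minimal config file
cat > vitest.config.ts <<'EOF'
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: { environment: 'node' },
})
EOF

# 3. Directory structure
mkdir -p test

# 4. One example test to prove the runner works
cat > test/setup-check.test.ts <<'EOF'
import { describe, expect, it } from 'vitest'

describe('test framework bootstrap', () => {
  it('runs and asserts', () => {
    expect([1, 2, 3].includes(2)).toBe(true)
  })
})
EOF

npx vitest run
```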

If package installation fails → debug once. If still failing → revert with `git checkout -- package.json package-lock.json` (or equivalent for the runtime). Warn user and continue without tests.

### B4.5. First real tests

Generate 3-5 real tests for existing code:

1. **Find recently changed files:** `git log --since=30.days --name-only --format="" | sort | uniq -c | sort -rn | head -10`
2. **Prioritize by risk:** Error handlers > business logic with conditionals > API endpoints > pure functions
3. **For each file:** Write one test that tests real behavior with meaningful assertions. Never `expect(x).toBeDefined()` — test what the code DOES.
4. Run each test. Passes → keep. Fails → fix once. Still fails → delete silently.
5. Generate at least 1 test, cap at 5.

Never import secrets, API keys, or credentials in test files. Use environment variables or test fixtures.
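
To make point 3 concrete, this is the flavor of test to aim for: assertions about behavior, not existence. The module path and `applyDiscount()` helper below are invented for illustration.

```bash
# Hypothetical example: write a behavior-asserting vitest test for an invented helper.
cat > test/apply-discount.test.ts <<'EOF'
import { describe, expect, it } from 'vitest'
import { applyDiscount } from '../src/pricing'  // hypothetical module

describe('applyDiscount', () => {
  it('applies a percentage discount to the subtotal', () => {
    expect(applyDiscount(200, 0.1)).toBe(180)
  })

  it('rejects discounts above 100%', () => {
    expect(() => applyDiscount(200, 1.5)).toThrow()
  })
})
EOF
```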

### B5. Verify

```bash
# Run the full test suite to confirm everything works
{detected test command}
```

If tests fail → debug once. If still failing → revert all bootstrap changes and warn user.

### B5.5. CI/CD pipeline

```bash
# Check CI provider
ls -d .github/ 2>/dev/null && echo "CI:github"
ls .gitlab-ci.yml .circleci/ bitrise.yml 2>/dev/null
```

If `.github/` exists (or no CI detected — default to GitHub Actions):
Create `.github/workflows/test.yml` (see the sketch after this list) with:
- `runs-on: ubuntu-latest`
- Appropriate setup action for the runtime (setup-node, setup-ruby, setup-python, etc.)
- The same test command verified in B5
- Trigger: push + pull_request
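
A minimal sketch of that workflow for a Node.js project (the runtime and test command are assumptions; swap the setup action and commands for what B5 actually verified):

```bash
mkdir -p .github/workflows
cat > .github/workflows/test.yml <<'EOF'
name: test
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npm test
EOF
```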

If non-GitHub CI detected → skip CI generation with note: "Detected {provider} — CI pipeline generation supports GitHub Actions only. Add test step to your existing pipeline manually."

### B6. Create TESTING.md

First check: If TESTING.md already exists → read it and update/append rather than overwriting. Never destroy existing content.

Write TESTING.md with:
- Philosophy: "100% test coverage is the key to great vibe coding. Tests let you move fast, trust your instincts, and ship with confidence — without them, vibe coding is just yolo coding. With tests, it's a superpower."
- Framework name and version
- How to run tests (the verified command from B5)
- Test layers: Unit tests (what, where, when), Integration tests, Smoke tests, E2E tests
- Conventions: file naming, assertion style, setup/teardown patterns

### B7. Update CLAUDE.md

First check: If CLAUDE.md already has a `## Testing` section → skip. Don't duplicate.

Append a `## Testing` section (see the sketch after this list):
- Run command and test directory
- Reference to TESTING.md
- Test expectations:
  - 100% test coverage is the goal — tests make vibe coding safe
  - When writing new functions, write a corresponding test
  - When fixing a bug, write a regression test
  - When adding error handling, write a test that triggers the error
  - When adding a conditional (if/else, switch), write tests for BOTH paths
  - Never commit code that makes existing tests fail
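
For instance, the appended section might read like this (commands and paths illustrative):

```bash
cat >> CLAUDE.md <<'EOF'

## Testing

- Run tests with `npx vitest run`; tests live in `test/`
- See TESTING.md for conventions and test layers
- Expectations: 100% coverage is the goal; every new function, bug fix,
  error handler, and conditional branch gets a test; never commit code
  that breaks existing tests
EOF
```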

### B8. Commit

```bash
git status --porcelain
```

Only commit if there are changes. Stage all bootstrap files (config, test directory, TESTING.md, CLAUDE.md, .github/workflows/test.yml if created):
`git commit -m "chore: bootstrap test framework ({framework name})"`

---

---

## Step 3: Run tests (on merged code)

**Do NOT run `RAILS_ENV=test bin/rails db:migrate`** — `bin/test-lane` already calls
@@ -294,6 +517,144 @@ If multiple suites need to run, run them sequentially (each needs a test lane).

---

## Step 3.4: Test Coverage Audit

100% coverage is the goal — every untested path is a path where bugs hide and vibe coding becomes yolo coding. Evaluate what was ACTUALLY coded (from the diff), not what was planned.

**0. Before/after test count:**

```bash
# Count test files before any generation
find . -name '*.test.*' -o -name '*.spec.*' -o -name '*_test.*' -o -name '*_spec.*' | grep -v node_modules | wc -l
```

Store this number for the PR body.
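
One way to capture it, reusing the same `find` expression as above (the variable name is hypothetical):

```bash
# Hypothetical variable; reuse it when writing the PR body in step 6.
TESTS_BEFORE=$(find . -name '*.test.*' -o -name '*.spec.*' -o -name '*_test.*' -o -name '*_spec.*' | grep -v node_modules | wc -l)
```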

**1. Trace every codepath changed** using `git diff origin/<base>...HEAD`:

Read every changed file. For each one, trace how data flows through the code — don't just list functions, actually follow the execution:

1. **Read the diff.** For each changed file, read the full file (not just the diff hunk) to understand context.
2. **Trace data flow.** Starting from each entry point (route handler, exported function, event listener, component render), follow the data through every branch:
   - Where does input come from? (request params, props, database, API call)
   - What transforms it? (validation, mapping, computation)
   - Where does it go? (database write, API response, rendered output, side effect)
   - What can go wrong at each step? (null/undefined, invalid input, network failure, empty collection)
3. **Diagram the execution.** For each changed file, draw an ASCII diagram showing:
   - Every function/method that was added or modified
   - Every conditional branch (if/else, switch, ternary, guard clause, early return)
   - Every error path (try/catch, rescue, error boundary, fallback)
   - Every call to another function (trace into it — does IT have untested branches?)
   - Every edge: what happens with null input? Empty array? Invalid type?

This is the critical step — you're building a map of every line of code that can execute differently based on input. Every branch in this diagram needs a test.

**2. Map user flows, interactions, and error states:**

Code coverage isn't enough — you need to cover how real users interact with the changed code. For each changed feature, think through:

- **User flows:** What sequence of actions does a user take that touches this code? Map the full journey (e.g., "user clicks 'Pay' → form validates → API call → success/failure screen"). Each step in the journey needs a test.
- **Interaction edge cases:** What happens when the user does something unexpected?
  - Double-click/rapid resubmit
  - Navigate away mid-operation (back button, close tab, click another link)
  - Submit with stale data (page sat open for 30 minutes, session expired)
  - Slow connection (API takes 10 seconds — what does the user see?)
  - Concurrent actions (two tabs, same form)
- **Error states the user can see:** For every error the code handles, what does the user actually experience?
  - Is there a clear error message or a silent failure?
  - Can the user recover (retry, go back, fix input) or are they stuck?
  - What happens with no network? With a 500 from the API? With invalid data from the server?
- **Empty/zero/boundary states:** What does the UI show with zero results? With 10,000 results? With a single character input? With maximum-length input?

Add these to your diagram alongside the code branches. A user flow with no test is just as much a gap as an untested if/else.

**3. Check each branch against existing tests:**

Go through your diagram branch by branch — both code paths AND user flows. For each one, search for a test that exercises it (e.g., the grep sketch after this list):
- Function `processPayment()` → look for `billing.test.ts`, `billing.spec.ts`, `test/billing_test.rb`
- An if/else → look for tests covering BOTH the true AND false path
- An error handler → look for a test that triggers that specific error condition
- A call to `helperFn()` that has its own branches → those branches need tests too
- A user flow → look for an integration or E2E test that walks through the journey
- An interaction edge case → look for a test that simulates the unexpected action
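
A quick way to run that search for a single function (the function name is illustrative):

```bash
# Does any test file mention processPayment?
grep -rn "processPayment" --include='*.test.*' --include='*.spec.*' \
  --include='*_test.*' --include='*_spec.*' . | grep -v node_modules
```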

Quality scoring rubric:
- ★★★ Tests behavior with edge cases AND error paths
- ★★ Tests correct behavior, happy path only
- ★ Smoke test / existence check / trivial assertion (e.g., "it renders", "it doesn't throw")

**4. Output ASCII coverage diagram:**

Include BOTH code paths and user flows in the same diagram:

```
CODE PATH COVERAGE
===========================
[+] src/services/billing.ts
│
├── processPayment()
│   ├── [★★★ TESTED] Happy path + card declined + timeout — billing.test.ts:42
│   ├── [GAP] Network timeout — NO TEST
│   └── [GAP] Invalid currency — NO TEST
│
└── refundPayment()
    ├── [★★ TESTED] Full refund — billing.test.ts:89
    └── [★ TESTED] Partial refund (checks non-throw only) — billing.test.ts:101

USER FLOW COVERAGE
===========================
[+] Payment checkout flow
│
├── [★★★ TESTED] Complete purchase — checkout.e2e.ts:15
├── [GAP] Double-click submit — NO TEST
├── [GAP] Navigate away during payment — NO TEST
└── [★ TESTED] Form validation errors (checks render only) — checkout.test.ts:40

[+] Error states
│
├── [★★ TESTED] Card declined message — billing.test.ts:58
├── [GAP] Network timeout UX (what does user see?) — NO TEST
└── [GAP] Empty cart submission — NO TEST

─────────────────────────────────
COVERAGE: 6/12 paths tested (50%)
Code paths: 3/5 (60%)
User flows: 3/7 (43%)
QUALITY: ★★★: 2 ★★: 2 ★: 2
GAPS: 6 paths need tests
─────────────────────────────────
```

**Fast path:** All paths covered → "Step 3.4: All new code paths have test coverage ✓" Continue.

**5. Generate tests for uncovered paths:**

If test framework detected (or bootstrapped in Step 2.5):
- Prioritize error handlers and edge cases first (happy paths are more likely already tested)
- Read 2-3 existing test files to match conventions exactly
- Generate unit tests. Mock all external dependencies (DB, API, Redis).
- Write tests that exercise the specific uncovered path with real assertions
- Run each test. Passes → commit as `test: coverage for {feature}`
- Fails → fix once. Still fails → revert, note gap in diagram.

Caps: 30 code paths max, 20 tests generated max (code + user flow combined), 2-min per-test exploration cap.

If no test framework AND user declined bootstrap → diagram only, no generation. Note: "Test generation skipped — no test framework configured."

**Diff is test-only changes:** Skip Step 3.4 entirely: "No new application code paths to audit."

**6. After-count and coverage summary:**

```bash
# Count test files after generation
find . -name '*.test.*' -o -name '*.spec.*' -o -name '*_test.*' -o -name '*_spec.*' | grep -v node_modules | wc -l
```

For PR body: `Tests: {before} → {after} (+{delta} new)`
Coverage line: `Test Coverage Audit: N new code paths. M covered (X%). K tests generated, J committed.`
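
Filled in with the example diagram above, those two lines might read (test-file counts illustrative):

```
Tests: 42 → 48 (+6 new)
Test Coverage Audit: 12 new code paths. 6 covered (50%). 6 tests generated, 6 committed.
```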

---

## Step 3.5: Pre-Landing Review

Review the diff for structural issues that tests don't catch.
@@ -544,6 +905,10 @@ gh pr create --base <base> --title "<type>: <summary>" --body "$(cat <<'EOF'
## Summary
<bullet points from CHANGELOG>

## Test Coverage
<coverage diagram from Step 3.4, or "All new code paths have test coverage.">
<If Step 3.4 ran: "Tests: {before} → {after} (+{delta} new)">

## Pre-Landing Review
<findings from Step 3.5, or "No issues found.">

@@ -586,4 +951,5 @@ EOF
- **TODOS.md completion detection must be conservative.** Only mark items as completed when the diff clearly shows the work is done.
- **Use Greptile reply templates from greptile-triage.md.** Every reply includes evidence (inline diff, code references, re-rank suggestion). Never post vague replies.
- **Never push without fresh verification evidence.** If code changed after Step 3 tests, re-run before pushing.
- **Step 3.4 generates coverage tests.** They must pass before committing. Never commit failing tests.
- **The goal is: user says `/ship`, next thing they see is the review + PR URL.**

+172 -4
@@ -11,6 +11,7 @@ allowed-tools:
- Grep
- Glob
- AskUserQuestion
- WebSearch
---

{{PREAMBLE}}
@@ -39,6 +40,7 @@ You are running the `/ship` workflow. This is a **non-interactive, fully automat
- Multi-file changesets (auto-split into bisectable commits)
- TODOS.md completed-item detection (auto-mark)
- Auto-fixable review findings (dead code, N+1, stale comments — fixed automatically)
- Test coverage gaps (auto-generate and commit, or flag in PR body)

---

@@ -54,10 +56,27 @@ You are running the `/ship` workflow. This is a **non-interactive, fully automat

{{REVIEW_DASHBOARD}}

---

@@ -75,6 +94,12 @@ git fetch origin <base> && git merge origin/<base> --no-edit

---

## Step 2.5: Test Framework Bootstrap

{{TEST_BOOTSTRAP}}

---