mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-06 21:46:40 +02:00
fix: resolve merge conflicts with origin/main (v0.6.0 + v0.6.0.1 + v0.6.1)
Merge main's test bootstrap, boil-the-lake completeness principle, selective expansion, ship gate overrides, and gstack-upgrade vendor sync.

Conflicts resolved:
- CHANGELOG: keep main's 0.6.1/0.6.0.1/0.6.0/0.5.4/0.5.3 entries
- VERSION: take main's 0.6.1
- design-consultation: office-hours naming + main's "what's out there" phrasing
- ship: keep both verification rules (fresh evidence + coverage tests)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
+251
-3
@@ -16,6 +16,7 @@ allowed-tools:
- Glob
- Grep
- AskUserQuestion
- WebSearch
---

<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
<!-- Regenerate: bun run gen:skill-docs -->
@@ -32,22 +33,61 @@ find ~/.gstack/sessions -mmin +120 -type f -delete 2>/dev/null || true
_CONTRIB=$(~/.claude/skills/gstack/bin/gstack-config get gstack_contributor 2>/dev/null || true)
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
echo "BRANCH: $_BRANCH"
_LAKE_SEEN=$([ -f ~/.gstack/.completeness-intro-seen ] && echo "yes" || echo "no")
echo "LAKE_INTRO: $_LAKE_SEEN"
```

If output shows `UPGRADE_AVAILABLE <old> <new>`: read `~/.claude/skills/gstack/gstack-upgrade/SKILL.md` and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined). If `JUST_UPGRADED <from> <to>`: tell user "Running gstack v{to} (just updated!)" and continue.

If `LAKE_INTRO` is `no`: Before continuing, introduce the Completeness Principle.
Tell the user: "gstack follows the **Boil the Lake** principle — always do the complete thing when AI makes the marginal cost near-zero. Read more: https://garryslist.org/posts/boil-the-ocean"
Then offer to open the essay in their default browser:

```bash
open https://garryslist.org/posts/boil-the-ocean
touch ~/.gstack/.completeness-intro-seen
```

Only run `open` if the user says yes. Always run `touch` to mark as seen. This only happens once.
## AskUserQuestion Format

**ALWAYS follow this structure for every AskUserQuestion call:**

1. **Re-ground:** State the project, the current branch (use the `_BRANCH` value printed by the preamble — NOT any branch from conversation history or gitStatus), and the current plan/task. (1-2 sentences)
2. **Simplify:** Explain the problem in plain English a smart 16-year-old could follow. No raw function names, no internal jargon, no implementation details. Use concrete examples and analogies. Say what it DOES, not what it's called.
3. **Recommend:** `RECOMMENDATION: Choose [X] because [one-line reason]` — always prefer the complete option over shortcuts (see Completeness Principle). Include `Completeness: X/10` for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers happy path but skips some edges, 3 = shortcut that defers significant work. If both options are 8+, pick the higher; if one is ≤5, flag it.
4. **Options:** Lettered options: `A) ... B) ... C) ...` — when an option involves effort, show both scales: `(human: ~X / CC: ~Y)`

Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.

Per-skill instructions may add additional formatting rules on top of this baseline.
## Completeness Principle — Boil the Lake

AI-assisted coding makes the marginal cost of completeness near-zero. When you present options:

- If Option A is the complete implementation (full parity, all edge cases, 100% coverage) and Option B is a shortcut that saves modest effort — **always recommend A**. The delta between 80 lines and 150 lines is meaningless with CC+gstack. "Good enough" is the wrong instinct when "complete" costs minutes more.
- **Lake vs. ocean:** A "lake" is boilable — 100% test coverage for a module, full feature implementation, handling all edge cases, complete error paths. An "ocean" is not — rewriting an entire system from scratch, adding features to dependencies you don't control, multi-quarter platform migrations. Recommend boiling lakes. Flag oceans as out of scope.
- **When estimating effort**, always show both scales: human team time and CC+gstack time. The compression ratio varies by task type — use this reference:

| Task type | Human team | CC+gstack | Compression |
|-----------|------------|-----------|-------------|
| Boilerplate / scaffolding | 2 days | 15 min | ~100x |
| Test writing | 1 day | 15 min | ~50x |
| Feature implementation | 1 week | 30 min | ~30x |
| Bug fix + regression test | 4 hours | 15 min | ~20x |
| Architecture / design | 2 days | 4 hours | ~5x |
| Research / exploration | 1 day | 3 hours | ~3x |

- This principle applies to test coverage, error handling, documentation, edge cases, and feature completeness. Don't skip the last 10% to "save time" — with AI, that 10% costs seconds.

**Anti-patterns — DON'T do this:**
- BAD: "Choose B — it covers 90% of the value with less code." (If A is only 70 lines more, choose A.)
- BAD: "We can skip edge case handling to save time." (Edge case handling costs minutes with CC.)
- BAD: "Let's defer test coverage to a follow-up PR." (Tests are the cheapest lake to boil.)
- BAD: Quoting only human-team effort: "This would take 2 weeks." (Say: "2 weeks human / ~1 hour CC.")
## Contributor Mode

If `_CONTRIB` is `true`: you are in **contributor mode**. You're a gstack user who also helps make it better.
@@ -182,6 +222,161 @@ If `NEEDS_SETUP`:
2. Run: `cd <SKILL_DIR> && ./setup`
3. If `bun` is not installed: `curl -fsSL https://bun.sh/install | bash`

**Check test framework (bootstrap if needed):**

## Test Framework Bootstrap

**Detect existing test framework and project runtime:**

```bash
# Detect project runtime
[ -f Gemfile ] && echo "RUNTIME:ruby"
[ -f package.json ] && echo "RUNTIME:node"
{ [ -f requirements.txt ] || [ -f pyproject.toml ]; } && echo "RUNTIME:python"
[ -f go.mod ] && echo "RUNTIME:go"
[ -f Cargo.toml ] && echo "RUNTIME:rust"
[ -f composer.json ] && echo "RUNTIME:php"
[ -f mix.exs ] && echo "RUNTIME:elixir"

# Detect sub-frameworks
[ -f Gemfile ] && grep -q "rails" Gemfile 2>/dev/null && echo "FRAMEWORK:rails"
[ -f package.json ] && grep -q '"next"' package.json 2>/dev/null && echo "FRAMEWORK:nextjs"

# Check for existing test infrastructure
ls jest.config.* vitest.config.* playwright.config.* .rspec pytest.ini pyproject.toml phpunit.xml 2>/dev/null
ls -d test/ tests/ spec/ __tests__/ cypress/ e2e/ 2>/dev/null

# Check opt-out marker
[ -f .gstack/no-test-bootstrap ] && echo "BOOTSTRAP_DECLINED"
```
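The probes above print tagged markers on stdout. A minimal sketch of consuming that output follows; `detect_runtime` and the fallback value are illustrative names, not part of gstack:

```shell
# Hypothetical consumer: run a few of the probes above and take the first runtime tag
detect_runtime() {
  [ -f Gemfile ]      && echo "RUNTIME:ruby"
  [ -f package.json ] && echo "RUNTIME:node"
  [ -f go.mod ]       && echo "RUNTIME:go"
  return 0  # a missing marker file is not an error
}
runtime=$(detect_runtime | head -n 1 | cut -d: -f2)
echo "runtime: ${runtime:-unknown}"
```

If no probe matches, `runtime` stays empty and falls back to `unknown`, which maps onto the "NO runtime detected" branch below.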

**If test framework detected** (config files or test directories found):
Print "Test framework detected: {name} ({N} existing tests). Skipping bootstrap."
Read 2-3 existing test files to learn conventions (naming, imports, assertion style, setup patterns).
Store conventions as prose context for use in Phase 8e.5 or Step 3.4. **Skip the rest of bootstrap.**

**If BOOTSTRAP_DECLINED** appears: Print "Test bootstrap previously declined — skipping." **Skip the rest of bootstrap.**

**If NO runtime detected** (no config files found): Use AskUserQuestion:
"I couldn't detect your project's language. What runtime are you using?"
Options: A) Node.js/TypeScript B) Ruby/Rails C) Python D) Go E) Rust F) PHP G) Elixir H) This project doesn't need tests.
If user picks H → write `.gstack/no-test-bootstrap` and continue without tests.

**If runtime detected but no test framework — bootstrap:**

### B2. Research best practices

Use WebSearch to find current best practices for the detected runtime:
- `"[runtime] best test framework 2025 2026"`
- `"[framework A] vs [framework B] comparison"`

If WebSearch is unavailable, use this built-in knowledge table:

| Runtime | Primary recommendation | Alternative |
|---------|------------------------|-------------|
| Ruby/Rails | minitest + fixtures + capybara | rspec + factory_bot + shoulda-matchers |
| Node.js | vitest + @testing-library | jest + @testing-library |
| Next.js | vitest + @testing-library/react + playwright | jest + cypress |
| Python | pytest + pytest-cov | unittest |
| Go | stdlib testing + testify | stdlib only |
| Rust | cargo test (built-in) + mockall | — |
| PHP | phpunit + mockery | pest |
| Elixir | ExUnit (built-in) + ex_machina | — |

### B3. Framework selection

Use AskUserQuestion:
"I detected this is a [Runtime/Framework] project with no test framework. I researched current best practices. Here are the options:
A) [Primary] — [rationale]. Includes: [packages]. Supports: unit, integration, smoke, e2e
B) [Alternative] — [rationale]. Includes: [packages]
C) Skip — don't set up testing right now
RECOMMENDATION: Choose A because [reason based on project context]"

If user picks C → write `.gstack/no-test-bootstrap`. Tell user: "If you change your mind later, delete `.gstack/no-test-bootstrap` and re-run." Continue without tests.

If multiple runtimes detected (monorepo) → ask which runtime to set up first, with option to do both sequentially.

### B4. Install and configure

1. Install the chosen packages (npm/bun/gem/pip/etc.)
2. Create minimal config file
3. Create directory structure (test/, spec/, etc.)
4. Create one example test matching the project's code to verify setup works

If package installation fails → debug once. If still failing → revert with `git checkout -- package.json package-lock.json` (or equivalent for the runtime). Warn user and continue without tests.

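As a concrete sketch of steps 2-3, assuming the user chose vitest in a Node project (the config contents are illustrative, not gstack's actual output):

```shell
# Hypothetical scaffold for a vitest setup: minimal config plus test directory
mkdir -p test
cat > vitest.config.ts <<'EOF'
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: { include: ['test/**/*.test.ts'] },
})
EOF
```

Step 4's example test would then go in `test/`, named to match the `include` glob.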
### B4.5. First real tests

Generate 3-5 real tests for existing code:

1. **Find recently changed files:** `git log --since=30.days --name-only --format="" | sort | uniq -c | sort -rn | head -10`
2. **Prioritize by risk:** Error handlers > business logic with conditionals > API endpoints > pure functions
3. **For each file:** Write one test that tests real behavior with meaningful assertions. Never `expect(x).toBeDefined()` — test what the code DOES.
4. Run each test. Passes → keep. Fails → fix once. Still fails → delete silently.
5. Generate at least 1 test, cap at 5.

Never import secrets, API keys, or credentials in test files. Use environment variables or test fixtures.

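Step 1's churn command can be extended to drop blank lines and files that are already tests; the extra filter is an assumption, not part of the skill:

```shell
# Hypothetical variant of step 1: most-churned files, excluding existing test files
git log --since=30.days --name-only --format="" 2>/dev/null \
  | grep -vE '^$|\.test\.|_test\.|^spec/' \
  | sort | uniq -c | sort -rn | head -10
```

Each output line is a change count followed by a path; the top entries are the best candidates for step 2's risk triage.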
### B5. Verify

```bash
# Run the full test suite to confirm everything works
{detected test command}
```

If tests fail → debug once. If still failing → revert all bootstrap changes and warn user.

### B5.5. CI/CD pipeline

```bash
# Check CI provider
ls -d .github/ 2>/dev/null && echo "CI:github"
ls .gitlab-ci.yml .circleci/ bitrise.yml 2>/dev/null
```

If `.github/` exists (or no CI detected — default to GitHub Actions):
Create `.github/workflows/test.yml` with:
- `runs-on: ubuntu-latest`
- Appropriate setup action for the runtime (setup-node, setup-ruby, setup-python, etc.)
- The same test command verified in B5
- Trigger: push + pull_request

If non-GitHub CI detected → skip CI generation with note: "Detected {provider} — CI pipeline generation supports GitHub Actions only. Add test step to your existing pipeline manually."

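For a Node project, generating the workflow described above might look like this sketch (the setup actions are real GitHub-maintained actions; the exact file gstack writes is an assumption):

```shell
# Hypothetical sketch: write the minimal workflow described above (Node variant)
mkdir -p .github/workflows
cat > .github/workflows/test.yml <<'EOF'
name: test
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npm test
EOF
```

For other runtimes, swap `setup-node` for `setup-ruby`, `setup-python`, etc., and replace `npm test` with the command verified in B5.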
### B6. Create TESTING.md

First check: If TESTING.md already exists → read it and update/append rather than overwriting. Never destroy existing content.

Write TESTING.md with:
- Philosophy: "100% test coverage is the key to great vibe coding. Tests let you move fast, trust your instincts, and ship with confidence — without them, vibe coding is just yolo coding. With tests, it's a superpower."
- Framework name and version
- How to run tests (the verified command from B5)
- Test layers: Unit tests (what, where, when), Integration tests, Smoke tests, E2E tests
- Conventions: file naming, assertion style, setup/teardown patterns

### B7. Update CLAUDE.md

First check: If CLAUDE.md already has a `## Testing` section → skip. Don't duplicate.

Append a `## Testing` section:
- Run command and test directory
- Reference to TESTING.md
- Test expectations:
  - 100% test coverage is the goal — tests make vibe coding safe
  - When writing new functions, write a corresponding test
  - When fixing a bug, write a regression test
  - When adding error handling, write a test that triggers the error
  - When adding a conditional (if/else, switch), write tests for BOTH paths
  - Never commit code that makes existing tests fail

### B8. Commit

```bash
git status --porcelain
```

Only commit if there are changes. Stage all bootstrap files (config, test directory, TESTING.md, CLAUDE.md, .github/workflows/test.yml if created):
`git commit -m "chore: bootstrap test framework ({framework name})"`

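A guarded version of this step might look like the following sketch; the staged file list is illustrative and should match whatever B4-B5.5 actually created:

```shell
# Hypothetical sketch: commit bootstrap files only when the tree actually changed
commit_bootstrap() {
  if [ -n "$(git status --porcelain)" ]; then
    git add TESTING.md CLAUDE.md  # plus the config, test dir, and workflow if created
    git commit -m "chore: bootstrap test framework (vitest)"  # framework name varies
  else
    echo "nothing to commit"
  fi
}
```

The `--porcelain` check keeps the step idempotent: re-running bootstrap on a clean tree produces no empty commit.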
---

**Create output directories:**

```bash
@@ -566,6 +761,59 @@ $B snapshot -D
- **best-effort**: fix applied but couldn't fully verify (e.g., needs auth state, external service)
- **reverted**: regression detected → `git revert HEAD` → mark issue as "deferred"

### 8e.5. Regression Test

Skip if: classification is not "verified", OR the fix is purely visual/CSS with no JS behavior, OR no test framework was detected AND user declined bootstrap.

**1. Study the project's existing test patterns:**

Read 2-3 test files closest to the fix (same directory, same code type). Match exactly:
- File naming, imports, assertion style, describe/it nesting, setup/teardown patterns
The regression test must look like it was written by the same developer.

**2. Trace the bug's codepath, then write a regression test:**

Before writing the test, trace the data flow through the code you just fixed:
- What input/state triggered the bug? (the exact precondition)
- What codepath did it follow? (which branches, which function calls)
- Where did it break? (the exact line/condition that failed)
- What other inputs could hit the same codepath? (edge cases around the fix)

The test MUST:
- Set up the precondition that triggered the bug (the exact state that made it break)
- Perform the action that exposed the bug
- Assert the correct behavior (NOT "it renders" or "it doesn't throw")
- If you found adjacent edge cases while tracing, test those too (e.g., null input, empty array, boundary value)
- Include full attribution comment:

```
// Regression: ISSUE-NNN — {what broke}
// Found by /qa on {YYYY-MM-DD}
// Report: .gstack/qa-reports/qa-report-{domain}-{date}.md
```

Test type decision:
- Console error / JS exception / logic bug → unit or integration test
- Broken form / API failure / data flow bug → integration test with request/response
- Visual bug with JS behavior (broken dropdown, animation) → component test
- Pure CSS → skip (caught by QA reruns)

Generate unit tests. Mock all external dependencies (DB, API, Redis, file system).

Use auto-incrementing names to avoid collisions: check existing `{name}.regression-*.test.{ext}` files, take max number + 1.

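The max-plus-one rule can be sketched in shell; `name` and `ext` are placeholder values:

```shell
# Hypothetical sketch: next free index for {name}.regression-*.test.{ext}
name="cart"; ext="ts"
max=0
for f in "${name}".regression-*.test."${ext}"; do
  [ -e "$f" ] || continue              # unmatched glob stays literal; skip it
  n=${f#"${name}.regression-"}         # strip the prefix...
  n=${n%%.*}                           # ...and the suffix, leaving the number
  [ "$n" -gt "$max" ] 2>/dev/null && max=$n
done
echo "${name}.regression-$((max + 1)).test.${ext}"
```

With no existing files this prints index 1; with `cart.regression-1` and `cart.regression-3` present it prints index 4, so deleted gaps are never reused downward.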
**3. Run only the new test file:**

```bash
{detected test command} {new-test-file}
```

**4. Evaluate:**
- Passes → commit: `git commit -m "test(qa): regression test for ISSUE-NNN — {desc}"`
- Fails → fix test once. Still failing → delete test, defer.
- Taking >2 min exploration → skip and defer.

**5. WTF-likelihood exclusion:** Test commits don't count toward the heuristic.

### 8f. Self-Regulation (STOP AND EVALUATE)

Every 5 fixes (or after any revert), compute the WTF-likelihood:

@@ -639,6 +887,6 @@ If the repo has a `TODOS.md`:

11. **Clean working tree required.** Refuse to start if `git status --porcelain` is non-empty.
12. **One commit per fix.** Never bundle multiple fixes into one commit.
13. **Only modify tests when generating regression tests in Phase 8e.5.** Never modify CI configuration. Never modify existing tests — only create new test files.
14. **Revert on regression.** If a fix makes things worse, `git revert HEAD` immediately.
15. **Self-regulate.** Follow the WTF-likelihood heuristic. When in doubt, stop and ask.

+59
-1
@@ -16,6 +16,7 @@ allowed-tools:
- Glob
- Grep
- AskUserQuestion
- WebSearch
---

{{PREAMBLE}}

@@ -58,6 +59,10 @@ fi

{{BROWSE_SETUP}}

**Check test framework (bootstrap if needed):**

{{TEST_BOOTSTRAP}}

**Create output directories:**

```bash
@@ -169,6 +174,59 @@ $B snapshot -D
- **best-effort**: fix applied but couldn't fully verify (e.g., needs auth state, external service)
- **reverted**: regression detected → `git revert HEAD` → mark issue as "deferred"

### 8e.5. Regression Test

Skip if: classification is not "verified", OR the fix is purely visual/CSS with no JS behavior, OR no test framework was detected AND user declined bootstrap.

**1. Study the project's existing test patterns:**

Read 2-3 test files closest to the fix (same directory, same code type). Match exactly:
- File naming, imports, assertion style, describe/it nesting, setup/teardown patterns
The regression test must look like it was written by the same developer.

**2. Trace the bug's codepath, then write a regression test:**

Before writing the test, trace the data flow through the code you just fixed:
- What input/state triggered the bug? (the exact precondition)
- What codepath did it follow? (which branches, which function calls)
- Where did it break? (the exact line/condition that failed)
- What other inputs could hit the same codepath? (edge cases around the fix)

The test MUST:
- Set up the precondition that triggered the bug (the exact state that made it break)
- Perform the action that exposed the bug
- Assert the correct behavior (NOT "it renders" or "it doesn't throw")
- If you found adjacent edge cases while tracing, test those too (e.g., null input, empty array, boundary value)
- Include full attribution comment:

```
// Regression: ISSUE-NNN — {what broke}
// Found by /qa on {YYYY-MM-DD}
// Report: .gstack/qa-reports/qa-report-{domain}-{date}.md
```

Test type decision:
- Console error / JS exception / logic bug → unit or integration test
- Broken form / API failure / data flow bug → integration test with request/response
- Visual bug with JS behavior (broken dropdown, animation) → component test
- Pure CSS → skip (caught by QA reruns)

Generate unit tests. Mock all external dependencies (DB, API, Redis, file system).

Use auto-incrementing names to avoid collisions: check existing `{name}.regression-*.test.{ext}` files, take max number + 1.

**3. Run only the new test file:**

```bash
{detected test command} {new-test-file}
```

**4. Evaluate:**
- Passes → commit: `git commit -m "test(qa): regression test for ISSUE-NNN — {desc}"`
- Fails → fix test once. Still failing → delete test, defer.
- Taking >2 min exploration → skip and defer.

**5. WTF-likelihood exclusion:** Test commits don't count toward the heuristic.

### 8f. Self-Regulation (STOP AND EVALUATE)

Every 5 fixes (or after any revert), compute the WTF-likelihood:

@@ -242,6 +300,6 @@ If the repo has a `TODOS.md`:

11. **Clean working tree required.** Refuse to start if `git status --porcelain` is non-empty.
12. **One commit per fix.** Never bundle multiple fixes into one commit.
13. **Only modify tests when generating regression tests in Phase 8e.5.** Never modify CI configuration. Never modify existing tests — only create new test files.
14. **Revert on regression.** If a fix makes things worse, `git revert HEAD` immediately.
15. **Self-regulate.** Follow the WTF-likelihood heuristic. When in doubt, stop and ask.

@@ -86,6 +86,22 @@

---

## Regression Tests

| Issue | Test File | Status | Description |
|-------|-----------|--------|-------------|
| ISSUE-NNN | path/to/test | committed / deferred / skipped | description |

### Deferred Tests

#### ISSUE-NNN: {title}
**Precondition:** {setup state that triggers the bug}
**Action:** {what the user does}
**Expected:** {correct behavior}
**Why deferred:** {reason}

---

## Ship Readiness

| Metric | Value |