| name | version | description | allowed-tools |
|---|---|---|---|
| review | 1.0.0 | Pre-landing PR review. Analyzes diff against the base branch for SQL safety, LLM trust boundary violations, conditional side effects, and other structural issues. | |
Preamble (run first)
_UPD=$(~/.claude/skills/gstack/bin/gstack-update-check 2>/dev/null || .claude/skills/gstack/bin/gstack-update-check 2>/dev/null || true)
[ -n "$_UPD" ] && echo "$_UPD" || true
mkdir -p ~/.gstack/sessions
touch ~/.gstack/sessions/"$PPID"
_SESSIONS=$(find ~/.gstack/sessions -mmin -120 -type f 2>/dev/null | wc -l | tr -d ' ')
find ~/.gstack/sessions -mmin +120 -type f -delete 2>/dev/null || true
_CONTRIB=$(~/.claude/skills/gstack/bin/gstack-config get gstack_contributor 2>/dev/null || true)
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
echo "BRANCH: $_BRANCH"
_LAKE_SEEN=$([ -f ~/.gstack/.completeness-intro-seen ] && echo "yes" || echo "no")
echo "LAKE_INTRO: $_LAKE_SEEN"
If output shows UPGRADE_AVAILABLE <old> <new>: read ~/.claude/skills/gstack/gstack-upgrade/SKILL.md and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined). If JUST_UPGRADED <from> <to>: tell user "Running gstack v{to} (just updated!)" and continue.
If LAKE_INTRO is no: Before continuing, introduce the Completeness Principle.
Tell the user: "gstack follows the Boil the Lake principle — always do the complete thing when AI makes the marginal cost near-zero. Read more: https://garryslist.org/posts/boil-the-ocean"
Then offer to open the essay in their default browser with `open https://garryslist.org/posts/boil-the-ocean`, and mark the intro as seen with `touch ~/.gstack/.completeness-intro-seen`.
Only run `open` if the user says yes. Always run `touch` to mark as seen. This only happens once.
AskUserQuestion Format
ALWAYS follow this structure for every AskUserQuestion call:
- Re-ground: State the project, the current branch (use the `_BRANCH` value printed by the preamble — NOT any branch from conversation history or gitStatus), and the current plan/task. (1-2 sentences)
- Simplify: Explain the problem in plain English a smart 16-year-old could follow. No raw function names, no internal jargon, no implementation details. Use concrete examples and analogies. Say what it DOES, not what it's called.
- Recommend: `RECOMMENDATION: Choose [X] because [one-line reason]` — always prefer the complete option over shortcuts (see Completeness Principle). Include `Completeness: X/10` for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers happy path but skips some edges, 3 = shortcut that defers significant work. If both options are 8+, pick the higher; if one is ≤5, flag it.
- Options: Lettered options: `A) ... B) ... C) ...` — when an option involves effort, show both scales: `(human: ~X / CC: ~Y)`
Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.
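A hypothetical example following this structure (project, branch, numbers, and options invented for illustration):
We're in acme-app on branch `feature/csv-export` (the `_BRANCH` value), implementing the CSV export plan.
The export currently drops any row whose name contains a comma — the file opens fine, but those customers silently disappear from it.
RECOMMENDATION: Choose [A] because complete quoting costs only a few extra CC minutes.
A) Use a real CSV encoder — commas, quotes, and newlines all handled (human: ~1 day / CC: ~20 min) — Completeness: 10/10
B) Escape commas only (human: ~2 hours / CC: ~10 min) — Completeness: 5/10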
Per-skill instructions may add additional formatting rules on top of this baseline.
Completeness Principle — Boil the Lake
AI-assisted coding makes the marginal cost of completeness near-zero. When you present options:
- If Option A is the complete implementation (full parity, all edge cases, 100% coverage) and Option B is a shortcut that saves modest effort — always recommend A. The delta between 80 lines and 150 lines is meaningless with CC+gstack. "Good enough" is the wrong instinct when "complete" costs minutes more.
- Lake vs. ocean: A "lake" is boilable — 100% test coverage for a module, full feature implementation, handling all edge cases, complete error paths. An "ocean" is not — rewriting an entire system from scratch, adding features to dependencies you don't control, multi-quarter platform migrations. Recommend boiling lakes. Flag oceans as out of scope.
- When estimating effort, always show both scales: human team time and CC+gstack time. The compression ratio varies by task type — use this reference:
| Task type | Human team | CC+gstack | Compression |
|---|---|---|---|
| Boilerplate / scaffolding | 2 days | 15 min | ~100x |
| Test writing | 1 day | 15 min | ~50x |
| Feature implementation | 1 week | 30 min | ~30x |
| Bug fix + regression test | 4 hours | 15 min | ~20x |
| Architecture / design | 2 days | 4 hours | ~5x |
| Research / exploration | 1 day | 3 hours | ~3x |
- This principle applies to test coverage, error handling, documentation, edge cases, and feature completeness. Don't skip the last 10% to "save time" — with AI, that 10% costs seconds.
Anti-patterns — DON'T do this:
- BAD: "Choose B — it covers 90% of the value with less code." (If A is only 70 lines more, choose A.)
- BAD: "We can skip edge case handling to save time." (Edge case handling costs minutes with CC.)
- BAD: "Let's defer test coverage to a follow-up PR." (Tests are the cheapest lake to boil.)
- BAD: Quoting only human-team effort: "This would take 2 weeks." (Say: "2 weeks human / ~1 hour CC.")
Contributor Mode
If _CONTRIB is true: you are in contributor mode. You're a gstack user who also helps make it better.
At the end of each major workflow step (not after every single command), reflect on the gstack tooling you used. Rate your experience 0 to 10. If it wasn't a 10, think about why. If there is an obvious, actionable bug OR an insightful, interesting thing that could have been done better by gstack code or skill markdown — file a field report. Maybe our contributor will help make us better!
Calibration — this is the bar: For example, `$B js "await fetch(...)"` used to fail with `SyntaxError: await is only valid in async functions` because gstack didn't wrap expressions in async context. Small, but the input was reasonable and gstack should have handled it — that's the kind of thing worth filing. Things less consequential than this, ignore.
NOT worth filing: user's app bugs, network errors to user's URL, auth failures on user's site, user's own JS logic bugs.
To file: write ~/.gstack/contributor-logs/{slug}.md with all sections below (do not truncate — include every section through the Date/Version footer):
# {Title}
Hey gstack team — ran into this while using /{skill-name}:
**What I was trying to do:** {what the user/agent was attempting}
**What happened instead:** {what actually happened}
**My rating:** {0-10} — {one sentence on why it wasn't a 10}
## Steps to reproduce
1. {step}
## Raw output
{paste the actual error or unexpected output here}
## What would make this a 10
{one sentence: what gstack should have done differently}
**Date:** {YYYY-MM-DD} | **Version:** {gstack version} | **Skill:** /{skill}
Slug: lowercase, hyphens, max 60 chars (e.g. browse-js-no-await). Skip if file already exists. Max 3 reports per session. File inline and continue — don't stop the workflow. Tell user: "Filed gstack field report: {title}"
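As a sketch of the filing mechanics (with hypothetical `$SLUG` and `$REPORT` variables holding the slug and the rendered template):

```bash
# Skip if a report with this slug already exists; otherwise write it and keep working
SLUG="browse-js-no-await"                      # lowercase, hyphens, max 60 chars
DEST=~/.gstack/contributor-logs/"$SLUG".md
mkdir -p ~/.gstack/contributor-logs
[ -f "$DEST" ] || printf '%s\n' "$REPORT" > "$DEST"
```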
Step 0: Detect base branch
Determine which branch this PR targets. Use the result as "the base branch" in all subsequent steps.
- Check if a PR already exists for this branch: `gh pr view --json baseRefName -q .baseRefName`. If this succeeds, use the printed branch name as the base branch.
- If no PR exists (command fails), detect the repo's default branch: `gh repo view --json defaultBranchRef -q .defaultBranchRef.name`
- If both commands fail, fall back to `main`.
Print the detected base branch name. In every subsequent git diff, git log,
git fetch, git merge, and gh pr create command, substitute the detected
branch name wherever the instructions say "the base branch."
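To make the fallback chain concrete, a minimal sketch (using a hypothetical `$BASE` variable name — the steps above just call it "the base branch"):

```bash
# Try the PR's target branch, then the repo default, then fall back to main
BASE=$(gh pr view --json baseRefName -q .baseRefName 2>/dev/null) ||
  BASE=$(gh repo view --json defaultBranchRef -q .defaultBranchRef.name 2>/dev/null) ||
  BASE=main
echo "Base branch: $BASE"
```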
Pre-Landing PR Review
You are running the /review workflow. Analyze the current branch's diff against the base branch for structural issues that tests don't catch.
Step 1: Check branch
- Run `git branch --show-current` to get the current branch.
- If on the base branch, output: "Nothing to review — you're on the base branch or have no changes against it." and stop.
- Run `git fetch origin <base> --quiet && git diff origin/<base> --stat` to check if there's a diff. If no diff, output the same message and stop.
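A minimal sketch of this guard, assuming `$BASE` holds the branch detected in Step 0:

```bash
# Stop early when there is nothing to diff against the base branch
if [ "$(git branch --show-current)" = "$BASE" ]; then
  echo "Nothing to review — you're on the base branch or have no changes against it."
else
  git fetch origin "$BASE" --quiet
  if [ -z "$(git diff origin/"$BASE" --stat)" ]; then
    echo "Nothing to review — you're on the base branch or have no changes against it."
  fi
fi
```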
Step 2: Read the checklist
Read .claude/skills/review/checklist.md.
If the file cannot be read, STOP and report the error. Do not proceed without the checklist.
Step 2.5: Check for Greptile review comments
Read .claude/skills/review/greptile-triage.md and follow the fetch, filter, classify, and escalation detection steps.
If no PR exists, gh fails, API returns an error, or there are zero Greptile comments: Skip this step silently. Greptile integration is additive — the review works without it.
If Greptile comments are found: Store the classifications (VALID & ACTIONABLE, VALID BUT ALREADY FIXED, FALSE POSITIVE, SUPPRESSED) — you will need them in Step 5.
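The authoritative fetch and filter steps live in greptile-triage.md. Purely as an illustration of the kind of call involved — and assuming the reviewer bot's login contains "greptile", which that file may define differently — a sketch might look like:

```bash
# Illustrative only; the real fetch/filter logic is in greptile-triage.md
PR=$(gh pr view --json number -q .number 2>/dev/null) || exit 0   # no PR → skip silently
gh api "repos/{owner}/{repo}/pulls/$PR/comments" \
  --jq '[.[] | select(.user.login | test("greptile"; "i")) | {path, line, body}]' 2>/dev/null || true
```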
Step 3: Get the diff
Fetch the latest base branch to avoid false positives from stale local state:
git fetch origin <base> --quiet
Run git diff origin/<base> to get the full diff. This includes both committed and uncommitted changes against the latest base branch.
Step 4: Two-pass review
Apply the checklist against the diff in two passes:
- Pass 1 (CRITICAL): SQL & Data Safety, Race Conditions & Concurrency, LLM Output Trust Boundary, Enum & Value Completeness
- Pass 2 (INFORMATIONAL): Conditional Side Effects, Magic Numbers & String Coupling, Dead Code & Consistency, LLM Prompt Issues, Test Gaps, View/Frontend
Enum & Value Completeness requires reading code OUTSIDE the diff. When the diff introduces a new enum value, status, tier, or type constant, use Grep to find all files that reference sibling values, then Read those files to check if the new value is handled. This is the one category where within-diff review is insufficient.
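For example (with hypothetical sibling values — the real ones come from the diff), if the diff adds an `archived` status alongside existing `draft` and `published` values:

```bash
# Find every site that handles the sibling values, then Read each hit
# to check whether the new "archived" value is handled there too
git grep -nE "draft|published"
```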
Follow the output format specified in the checklist. Respect the suppressions — do NOT flag items listed in the "DO NOT flag" section.
Step 5: Fix-First Review
Every finding gets action — not just critical ones.
Output a summary header: Pre-Landing Review: N issues (X critical, Y informational)
Step 5a: Classify each finding
For each finding, classify as AUTO-FIX or ASK per the Fix-First Heuristic in checklist.md. Critical findings lean toward ASK; informational findings lean toward AUTO-FIX.
Step 5b: Auto-fix all AUTO-FIX items
Apply each fix directly. For each one, output a one-line summary:
[AUTO-FIXED] [file:line] Problem → what you did
Step 5c: Batch-ask about ASK items
If there are ASK items remaining, present them in ONE AskUserQuestion:
- List each item with a number, the severity label, the problem, and a recommended fix
- For each item, provide options: A) Fix as recommended, B) Skip
- Include an overall RECOMMENDATION
Example format:
I auto-fixed 5 issues. 2 need your input:
1. [CRITICAL] app/models/post.rb:42 — Race condition in status transition
Fix: Add `WHERE status = 'draft'` to the UPDATE
→ A) Fix B) Skip
2. [INFORMATIONAL] app/services/generator.rb:88 — LLM output not type-checked before DB write
Fix: Add JSON schema validation
→ A) Fix B) Skip
RECOMMENDATION: Fix both — #1 is a real race condition, #2 prevents silent data corruption.
If 3 or fewer ASK items, you may use individual AskUserQuestion calls instead of batching.
Step 5d: Apply user-approved fixes
Apply fixes for items where the user chose "Fix." Output what was fixed.
If no ASK items exist (everything was AUTO-FIX), skip the question entirely.
Greptile comment resolution
After outputting your own findings, if Greptile comments were classified in Step 2.5:
Include a Greptile summary in your output header: + N Greptile comments (X valid, Y fixed, Z FP)
Before replying to any comment, run the Escalation Detection algorithm from greptile-triage.md to determine whether to use Tier 1 (friendly) or Tier 2 (firm) reply templates.
- VALID & ACTIONABLE comments: These are included in your findings — they follow the Fix-First flow: auto-fixed if mechanical, batched into ASK if not (A: Fix it now, B: Acknowledge, C: False positive). If the user chooses A (fix), reply using the Fix reply template from greptile-triage.md (include inline diff + explanation). If the user chooses C (false positive), reply using the False Positive reply template (include evidence + suggested re-rank) and save to both per-project and global greptile-history.
- FALSE POSITIVE comments: Present each one via AskUserQuestion:
  - Show the Greptile comment: file:line (or [top-level]) + body summary + permalink URL
  - Explain concisely why it's a false positive
  - Options:
    - A) Reply to Greptile explaining why this is incorrect (recommended if clearly wrong)
    - B) Fix it anyway (if low-effort and harmless)
    - C) Ignore — don't reply, don't fix
  If the user chooses A, reply using the False Positive reply template from greptile-triage.md (include evidence + suggested re-rank), save to both per-project and global greptile-history.
- VALID BUT ALREADY FIXED comments: Reply using the Already Fixed reply template from greptile-triage.md — no AskUserQuestion needed:
  - Include what was done and the fixing commit SHA
  - Save to both per-project and global greptile-history
- SUPPRESSED comments: Skip silently — these are known false positives from previous triage.
Step 5.5: TODOS cross-reference
Read TODOS.md in the repository root (if it exists). Cross-reference the PR against open TODOs:
- Does this PR close any open TODOs? If yes, note which items in your output: "This PR addresses TODO: