| name | version | description | allowed-tools |
|---|---|---|---|
| review | 1.0.0 | Pre-landing PR review. Analyzes diff against the base branch for SQL safety, LLM trust boundary violations, conditional side effects, and other structural issues. Use when asked to "review this PR", "code review", "pre-landing review", or "check my diff". | |
Preamble (run first)
_UPD=$(~/.claude/skills/gstack/bin/gstack-update-check 2>/dev/null || .claude/skills/gstack/bin/gstack-update-check 2>/dev/null || true)
[ -n "$_UPD" ] && echo "$_UPD" || true
mkdir -p ~/.gstack/sessions
touch ~/.gstack/sessions/"$PPID"
_SESSIONS=$(find ~/.gstack/sessions -mmin -120 -type f 2>/dev/null | wc -l | tr -d ' ')
find ~/.gstack/sessions -mmin +120 -type f -delete 2>/dev/null || true
_CONTRIB=$(~/.claude/skills/gstack/bin/gstack-config get gstack_contributor 2>/dev/null || true)
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
echo "BRANCH: $_BRANCH"
_LAKE_SEEN=$([ -f ~/.gstack/.completeness-intro-seen ] && echo "yes" || echo "no")
echo "LAKE_INTRO: $_LAKE_SEEN"
If output shows UPGRADE_AVAILABLE <old> <new>: read ~/.claude/skills/gstack/gstack-upgrade/SKILL.md and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined). If JUST_UPGRADED <from> <to>: tell user "Running gstack v{to} (just updated!)" and continue.
If LAKE_INTRO is no: Before continuing, introduce the Completeness Principle.
Tell the user: "gstack follows the Boil the Lake principle — always do the complete
thing when AI makes the marginal cost near-zero. Read more: https://garryslist.org/posts/boil-the-ocean"
Then offer to open the essay in their default browser:
open https://garryslist.org/posts/boil-the-ocean
touch ~/.gstack/.completeness-intro-seen
Only run open if the user says yes. Always run touch to mark as seen. This only happens once.
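A minimal sketch of that one-time flow (the `read` prompt here only stands in for asking the user; in practice the answer comes from the conversation):

```bash
# Open the essay only if the user agreed; always mark the intro as seen.
read -r -p "Open the Boil the Lake essay in your browser? (y/n) " _ANSWER
[ "$_ANSWER" = "y" ] && open https://garryslist.org/posts/boil-the-ocean
touch ~/.gstack/.completeness-intro-seen
```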
AskUserQuestion Format
ALWAYS follow this structure for every AskUserQuestion call:
- Re-ground: State the project, the current branch (use the `_BRANCH` value printed by the preamble — NOT any branch from conversation history or gitStatus), and the current plan/task. (1-2 sentences)
- Simplify: Explain the problem in plain English a smart 16-year-old could follow. No raw function names, no internal jargon, no implementation details. Use concrete examples and analogies. Say what it DOES, not what it's called.
- Recommend: `RECOMMENDATION: Choose [X] because [one-line reason]` — always prefer the complete option over shortcuts (see Completeness Principle). Include `Completeness: X/10` for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers happy path but skips some edges, 3 = shortcut that defers significant work. If both options are 8+, pick the higher; if one is ≤5, flag it.
- Options: Lettered options: `A) ... B) ... C) ...` — when an option involves effort, show both scales: `(human: ~X / CC: ~Y)`
Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.
Per-skill instructions may add additional formatting rules on top of this baseline.
Completeness Principle — Boil the Lake
AI-assisted coding makes the marginal cost of completeness near-zero. When you present options:
- If Option A is the complete implementation (full parity, all edge cases, 100% coverage) and Option B is a shortcut that saves modest effort — always recommend A. The delta between 80 lines and 150 lines is meaningless with CC+gstack. "Good enough" is the wrong instinct when "complete" costs minutes more.
- Lake vs. ocean: A "lake" is boilable — 100% test coverage for a module, full feature implementation, handling all edge cases, complete error paths. An "ocean" is not — rewriting an entire system from scratch, adding features to dependencies you don't control, multi-quarter platform migrations. Recommend boiling lakes. Flag oceans as out of scope.
- When estimating effort, always show both scales: human team time and CC+gstack time. The compression ratio varies by task type — use this reference:
| Task type | Human team | CC+gstack | Compression |
|---|---|---|---|
| Boilerplate / scaffolding | 2 days | 15 min | ~100x |
| Test writing | 1 day | 15 min | ~50x |
| Feature implementation | 1 week | 30 min | ~30x |
| Bug fix + regression test | 4 hours | 15 min | ~20x |
| Architecture / design | 2 days | 4 hours | ~5x |
| Research / exploration | 1 day | 3 hours | ~3x |
- This principle applies to test coverage, error handling, documentation, edge cases, and feature completeness. Don't skip the last 10% to "save time" — with AI, that 10% costs seconds.
Anti-patterns — DON'T do this:
- BAD: "Choose B — it covers 90% of the value with less code." (If A is only 70 lines more, choose A.)
- BAD: "We can skip edge case handling to save time." (Edge case handling costs minutes with CC.)
- BAD: "Let's defer test coverage to a follow-up PR." (Tests are the cheapest lake to boil.)
- BAD: Quoting only human-team effort: "This would take 2 weeks." (Say: "2 weeks human / ~1 hour CC.")
Contributor Mode
If _CONTRIB is true: you are in contributor mode. You're a gstack user who also helps make it better.
At the end of each major workflow step (not after every single command), reflect on the gstack tooling you used. Rate your experience 0 to 10. If it wasn't a 10, think about why. If there is an obvious, actionable bug OR an insightful, interesting thing that could have been done better by gstack code or skill markdown — file a field report. Maybe our contributor will help make us better!
Calibration — this is the bar: For example, `$B js "await fetch(...)"` used to fail with `SyntaxError: await is only valid in async functions` because gstack didn't wrap expressions in async context. Small, but the input was reasonable and gstack should have handled it — that's the kind of thing worth filing. Things less consequential than this, ignore.
NOT worth filing: user's app bugs, network errors to user's URL, auth failures on user's site, user's own JS logic bugs.
To file: write ~/.gstack/contributor-logs/{slug}.md with all sections below (do not truncate — include every section through the Date/Version footer):
# {Title}
Hey gstack team — ran into this while using /{skill-name}:
**What I was trying to do:** {what the user/agent was attempting}
**What happened instead:** {what actually happened}
**My rating:** {0-10} — {one sentence on why it wasn't a 10}
## Steps to reproduce
1. {step}
## Raw output
{paste the actual error or unexpected output here}
## What would make this a 10
{one sentence: what gstack should have done differently}
**Date:** {YYYY-MM-DD} | **Version:** {gstack version} | **Skill:** /{skill}
Slug: lowercase, hyphens, max 60 chars (e.g. browse-js-no-await). Skip if file already exists. Max 3 reports per session. File inline and continue — don't stop the workflow. Tell user: "Filed gstack field report: {title}"
Completion Status Protocol
When completing a skill workflow, report status using one of:
- DONE — All steps completed successfully. Evidence provided for each claim.
- DONE_WITH_CONCERNS — Completed, but with issues the user should know about. List each concern.
- BLOCKED — Cannot proceed. State what is blocking and what was tried.
- NEEDS_CONTEXT — Missing information required to continue. State exactly what you need.
Escalation
It is always OK to stop and say "this is too hard for me" or "I'm not confident in this result."
Bad work is worse than no work. You will not be penalized for escalating.
- If you have attempted a task 3 times without success, STOP and escalate.
- If you are uncertain about a security-sensitive change, STOP and escalate.
- If the scope of work exceeds what you can verify, STOP and escalate.
Escalation format:
STATUS: BLOCKED | NEEDS_CONTEXT
REASON: [1-2 sentences]
ATTEMPTED: [what you tried]
RECOMMENDATION: [what the user should do next]
Step 0: Detect base branch
Determine which branch this PR targets. Use the result as "the base branch" in all subsequent steps.
- Check if a PR already exists for this branch: `gh pr view --json baseRefName -q .baseRefName`. If this succeeds, use the printed branch name as the base branch.
- If no PR exists (command fails), detect the repo's default branch: `gh repo view --json defaultBranchRef -q .defaultBranchRef.name`
- If both commands fail, fall back to `main`.
Print the detected base branch name. In every subsequent git diff, git log,
git fetch, git merge, and gh pr create command, substitute the detected
branch name wherever the instructions say "the base branch."
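A minimal sketch of this fallback chain, assuming `gh` is installed and authenticated (the `BASE` variable name is illustrative):

```bash
# Detect the base branch: open PR target, else repo default branch, else main.
BASE=$(gh pr view --json baseRefName -q .baseRefName 2>/dev/null \
  || gh repo view --json defaultBranchRef -q .defaultBranchRef.name 2>/dev/null \
  || echo "main")
echo "BASE BRANCH: $BASE"
```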
Pre-Landing PR Review
You are running the /review workflow. Analyze the current branch's diff against the base branch for structural issues that tests don't catch.
Step 1: Check branch
- Run `git branch --show-current` to get the current branch.
- If on the base branch, output: "Nothing to review — you're on the base branch or have no changes against it." and stop.
- Run `git fetch origin <base> --quiet && git diff origin/<base> --stat` to check if there's a diff. If no diff, output the same message and stop.
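A minimal sketch of this early-exit check, reusing the `BASE` variable from the Step 0 sketch (illustrative, not prescriptive):

```bash
# Stop early when there is nothing to review against the base branch.
CURRENT=$(git branch --show-current)
git fetch origin "$BASE" --quiet
if [ "$CURRENT" = "$BASE" ] || [ -z "$(git diff "origin/$BASE" --stat)" ]; then
  echo "Nothing to review — you're on the base branch or have no changes against it."
  exit 0
fi
```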
Step 1.5: Scope Drift Detection
Before reviewing code quality, check: did they build what was requested — nothing more, nothing less?
- Read `TODOS.md` (if it exists). Read PR description (`gh pr view --json body --jq .body 2>/dev/null || true`). Read commit messages (`git log origin/<base>..HEAD --oneline`). If no PR exists: rely on commit messages and TODOS.md for stated intent — this is the common case since /review runs before /ship creates the PR.
- Identify the stated intent — what was this branch supposed to accomplish?
- Run `git diff origin/<base> --stat` and compare the files changed against the stated intent.
- Evaluate with skepticism:
SCOPE CREEP detection:
- Files changed that are unrelated to the stated intent
- New features or refactors not mentioned in the plan
- "While I was in there..." changes that expand blast radius
MISSING REQUIREMENTS detection:
- Requirements from TODOS.md/PR description not addressed in the diff
- Test coverage gaps for stated requirements
- Partial implementations (started but not finished)
- Output (before the main review begins):
  Scope Check: [CLEAN / DRIFT DETECTED / REQUIREMENTS MISSING]
  Intent: <1-line summary of what was requested>
  Delivered: <1-line summary of what the diff actually does>
  [If drift: list each out-of-scope change]
  [If missing: list each unaddressed requirement]
- This is INFORMATIONAL — does not block the review. Proceed to Step 2.
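The intent signals above can be gathered with a few commands (a sketch; interpreting the output against the stated intent is still the reviewer's job):

```bash
# Collect stated-intent sources and the actual change surface for comparison.
[ -f TODOS.md ] && cat TODOS.md                        # open TODOs, if tracked
gh pr view --json body --jq .body 2>/dev/null || true  # PR description, if a PR exists
git log "origin/$BASE"..HEAD --oneline                 # commit messages on this branch
git diff "origin/$BASE" --stat                         # files actually changed
```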
Step 2: Read the checklist
Read .claude/skills/review/checklist.md.
If the file cannot be read, STOP and report the error. Do not proceed without the checklist.
Step 2.5: Check for Greptile review comments
Read .claude/skills/review/greptile-triage.md and follow the fetch, filter, classify, and escalation detection steps.
If no PR exists, gh fails, API returns an error, or there are zero Greptile comments: Skip this step silently. Greptile integration is additive — the review works without it.
If Greptile comments are found: Store the classifications (VALID & ACTIONABLE, VALID BUT ALREADY FIXED, FALSE POSITIVE, SUPPRESSED) — you will need them in Step 5.
Step 3: Get the diff
Fetch the latest base branch to avoid false positives from stale local state:
git fetch origin <base> --quiet
Run git diff origin/<base> to get the full diff. This includes both committed and uncommitted changes against the latest base branch.
Step 4: Two-pass review
Apply the checklist against the diff in two passes:
- Pass 1 (CRITICAL): SQL & Data Safety, Race Conditions & Concurrency, LLM Output Trust Boundary, Enum & Value Completeness
- Pass 2 (INFORMATIONAL): Conditional Side Effects, Magic Numbers & String Coupling, Dead Code & Consistency, LLM Prompt Issues, Test Gaps, View/Frontend
Enum & Value Completeness requires reading code OUTSIDE the diff. When the diff introduces a new enum value, status, tier, or type constant, use Grep to find all files that reference sibling values, then Read those files to check if the new value is handled. This is the one category where within-diff review is insufficient.
Follow the output format specified in the checklist. Respect the suppressions — do NOT flag items listed in the "DO NOT flag" section.
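For the Enum & Value Completeness pass, a grep along these lines can surface sibling-value handlers outside the diff (the value names and directories are hypothetical examples, not part of the checklist):

```bash
# Find every file that references sibling enum values, then read each hit
# to confirm the newly introduced value (e.g. 'archived') is handled there too.
grep -rln -e "'draft'" -e "'published'" app/ lib/ spec/ 2>/dev/null
```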
Step 4.5: Design Review (conditional, diff-scoped)
Check if the diff touches frontend files using gstack-diff-scope:
eval $(~/.claude/skills/gstack/bin/gstack-diff-scope <base> 2>/dev/null)
If SCOPE_FRONTEND=false: Skip design review silently. No output.
If SCOPE_FRONTEND=true:
- Check for DESIGN.md. If `DESIGN.md` or `design-system.md` exists in the repo root, read it. All design findings are calibrated against it — patterns blessed in DESIGN.md are not flagged. If not found, use universal design principles.
- Read `.claude/skills/review/design-checklist.md`. If the file cannot be read, skip design review with a note: "Design checklist not found — skipping design review."
- Read each changed frontend file (full file, not just diff hunks). Frontend files are identified by the patterns listed in the checklist.
- Apply the design checklist against the changed files. For each item:
  - [HIGH] mechanical CSS fix (`outline: none`, `!important`, `font-size < 16px`): classify as AUTO-FIX
  - [HIGH/MEDIUM] design judgment needed: classify as ASK
  - [LOW] intent-based detection: present as "Possible — verify visually or run /design-review"
- Include findings in the review output under a "Design Review" header, following the output format in the checklist. Design findings merge with code review findings into the same Fix-First flow.
- Log the result for the Review Readiness Dashboard:
eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
mkdir -p ~/.gstack/projects/$SLUG
echo '{"skill":"design-review-lite","timestamp":"TIMESTAMP","status":"STATUS","findings":N,"auto_fixed":M}' >> ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl
Substitute: TIMESTAMP = ISO 8601 datetime, STATUS = "clean" if 0 findings or "issues_found", N = total findings, M = auto-fixed count.
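For example, a substituted log line might look like this (values are illustrative):

```bash
echo '{"skill":"design-review-lite","timestamp":"2025-06-01T14:32:00Z","status":"issues_found","findings":3,"auto_fixed":2}' \
  >> ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl
```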
Include any design findings alongside the findings from Step 4. They follow the same Fix-First flow in Step 5 — AUTO-FIX for mechanical CSS fixes, ASK for everything else.
Step 5: Fix-First Review
Every finding gets action — not just critical ones.
Output a summary header: Pre-Landing Review: N issues (X critical, Y informational)
Step 5a: Classify each finding
For each finding, classify as AUTO-FIX or ASK per the Fix-First Heuristic in checklist.md. Critical findings lean toward ASK; informational findings lean toward AUTO-FIX.
Step 5b: Auto-fix all AUTO-FIX items
Apply each fix directly. For each one, output a one-line summary:
[AUTO-FIXED] [file:line] Problem → what you did
Step 5c: Batch-ask about ASK items
If there are ASK items remaining, present them in ONE AskUserQuestion:
- List each item with a number, the severity label, the problem, and a recommended fix
- For each item, provide options: A) Fix as recommended, B) Skip
- Include an overall RECOMMENDATION
Example format:
I auto-fixed 5 issues. 2 need your input:
1. [CRITICAL] app/models/post.rb:42 — Race condition in status transition
Fix: Add `WHERE status = 'draft'` to the UPDATE
→ A) Fix B) Skip
2. [INFORMATIONAL] app/services/generator.rb:88 — LLM output not type-checked before DB write
Fix: Add JSON schema validation
→ A) Fix B) Skip
RECOMMENDATION: Fix both — #1 is a real race condition, #2 prevents silent data corruption.
If 3 or fewer ASK items, you may use individual AskUserQuestion calls instead of batching.
Step 5d: Apply user-approved fixes
Apply fixes for items where the user chose "Fix." Output what was fixed.
If no ASK items exist (everything was AUTO-FIX), skip the question entirely.
Verification of claims
Before producing the final review output:
- If you claim "this pattern is safe" → cite the specific line proving safety
- If you claim "this is handled elsewhere" → read and cite the handling code
- If you claim "tests cover this" → name the test file and method
- Never say "likely handled" or "probably tested" — verify or flag as unknown
Rationalization prevention: "This looks fine" is not a finding. Either cite evidence it IS fine, or flag it as unverified.
Greptile comment resolution
After outputting your own findings, if Greptile comments were classified in Step 2.5:
Include a Greptile summary in your output header: + N Greptile comments (X valid, Y fixed, Z FP)
Before replying to any comment, run the Escalation Detection algorithm from greptile-triage.md to determine whether to use Tier 1 (friendly) or Tier 2 (firm) reply templates.
- VALID & ACTIONABLE comments: These are included in your findings — they follow the Fix-First flow (auto-fixed if mechanical, batched into ASK if not), with options A: Fix it now, B: Acknowledge, C: False positive. If the user chooses A (fix), reply using the Fix reply template from greptile-triage.md (include inline diff + explanation). If the user chooses C (false positive), reply using the False Positive reply template (include evidence + suggested re-rank) and save to both per-project and global greptile-history.
- FALSE POSITIVE comments: Present each one via AskUserQuestion:
- Show the Greptile comment: file:line (or [top-level]) + body summary + permalink URL
- Explain concisely why it's a false positive
- Options:
- A) Reply to Greptile explaining why this is incorrect (recommended if clearly wrong)
- B) Fix it anyway (if low-effort and harmless)
- C) Ignore — don't reply, don't fix
If the user chooses A, reply using the False Positive reply template from greptile-triage.md (include evidence + suggested re-rank), save to both per-project and global greptile-history.
- VALID BUT ALREADY FIXED comments: Reply using the Already Fixed reply template from greptile-triage.md — no AskUserQuestion needed:
- Include what was done and the fixing commit SHA
- Save to both per-project and global greptile-history
- SUPPRESSED comments: Skip silently — these are known false positives from previous triage.
Step 5.5: TODOS cross-reference
Read TODOS.md in the repository root (if it exists). Cross-reference the PR against open TODOs:
- Does this PR close any open TODOs? If yes, note which items in your output: "This PR addresses TODO: