* fix(preamble): reorder AskUserQuestion Format above model overlay + rewrite Opus 4.7 pacing directive
Root cause of plan-review regression (v1.6.4.0): model overlays rendered
ABOVE the pacing rule in every SKILL.md, so Opus 4.7 read "Batch your
questions" first and absorbed it as the ambient default. The overlay's
claimed subordination ("skill wins on pacing, always") didn't stick —
literal-interpretation mode reads physical order, not claimed hierarchy.
Part 1 of 4 (plan: ~/.claude/plans/system-instruction-you-are-working-polymorphic-twilight.md):
scripts/resolvers/preamble.ts
- Move generateAskUserFormat above generateModelOverlay in section array
- Comment explains why — prevents future refactors from silently reverting
model-overlays/opus-4-7.md
- Replace "Batch your questions" block with "Pace questions to the skill"
- New wording makes one-question-per-turn the default when the skill
contains STOP directives; batching becomes the explicit exception
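The fix is easy to picture. A minimal sketch of the positional change, with assumed names (the actual preamble.ts differs):

```ts
// Sketch only: names are assumptions based on this commit message, not the repo's API.
type PreambleContext = { model: string; tier: number };

const generateAskUserFormat = (_ctx: PreambleContext) =>
  "## AskUserQuestion Format\n...";
const generateModelOverlay = (ctx: PreambleContext) =>
  `## Model-Specific Behavioral Patch (${ctx.model})\n...`;

// Order matters: the format section must precede the overlay, because a
// literal-interpretation model weights physical order over claimed hierarchy.
const sections = [generateAskUserFormat, generateModelOverlay];

export const composePreamble = (ctx: PreambleContext): string =>
  sections.map((s) => s(ctx)).join("\n\n");
```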
Regenerated 30 SKILL.md files via bun run gen:skill-docs.
Verified:
- With --model opus-4-7: Format renders at line 359, Model-Specific
Patch at 373, "Pace questions" at 419 (Format comes first, overlay
second, pacing directive intact).
- bun test passes.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(plan-reviews): tighten STOP/escape-hatch directives across 4 templates
Part 2 of 4 (plan: ~/.claude/plans/system-instruction-you-are-working-polymorphic-twilight.md).
Codex caught that v1.6.3.0's reasoning collapsed on Opus 4.7: the old
escape-hatch wording ("If no issues or fix is obvious, state what
you'll do and move on — don't waste a question") let the literal
interpreter classify every finding as having an "obvious fix" and skip
AskUserQuestion entirely. Reviews became reports.
Per-template hardening (16 sites total, verified by rg):
plan-ceo-review/SKILL.md.tmpl (13 sites):
- 12 inline STOP directives: replace the full escape-hatch clause with
"zero findings → say so and proceed; findings → MUST call AskUserQuestion
as a tool_use, including for obvious fixes."
- 1 Escape hatch bullet in CRITICAL RULE section: tightened.
plan-eng-review, plan-design-review, plan-devex-review (1 site each):
- Each template's Escape hatch bullet tightened to match the new CEO wording,
adapted for each review's domain (issue/gap, decision/design/DX alternatives).
After regeneration: rg "don't waste a question" returns 0 across all
*SKILL.md.tmpl and *SKILL.md files. "zero findings, state" wording
present 16 times (matches prior count of escape-hatch sites).
bun test passes.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(preamble): upgrade AskUserQuestion format to Pros/Cons decision brief
Part 4 of 4 (plan: ~/.claude/plans/system-instruction-you-are-working-polymorphic-twilight.md).
Every AskUserQuestion now renders as a decision brief, not a bullet list:
D-numbered header, ELI10, Stakes-if-we-pick-wrong, Recommendation, Pros/Cons
with ✅/❌ markers per option, closing Net: tradeoff synthesis.
scripts/resolvers/preamble/generate-ask-user-format.ts
- Full rewrite. Preserves prior rules (Re-ground, ELI10, Recommend,
Completeness, Options) and adds:
- D-numbering per skill invocation (model-level, not runtime state)
- Stakes line (pain avoided / capability unlocked / consequence named)
- Pros/Cons block with min 2 ✅ + 1 ❌ per option, min 40 chars/bullet
- Hard-stop escape: "✅ No cons — this is a hard-stop choice" for
genuine one-sided choices (destructive-action confirmations)
- Neutral-posture handling (CT1-compliant): (recommended) label
STAYS on default option to preserve AUTO_DECIDE contract; neutrality
expressed as prose in Recommendation line only
- Net line closes the decision with a one-sentence tradeoff frame
- Rule 11: tool_use mandate (prose "Question:" blocks don't count)
- Self-check list before emitting
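To make the target shape concrete, here is a hypothetical renderer for the brief (labels are taken from this commit's description; the resolver's actual code differs):

```ts
// Illustrative only: field names are assumptions, the real resolver differs.
type Option = { label: string; pros: string[]; cons: string[] };
type Decision = {
  n: number;           // D-number within this skill invocation
  question: string;
  eli10: string;
  stakes: string;
  recommended: string; // label of the default option (preserves AUTO_DECIDE)
  net: string;
  options: Option[];
};

function renderBrief(d: Decision): string {
  const opts = d.options.map((o) => {
    const tag = o.label === d.recommended ? " (recommended)" : "";
    const pros = o.pros.map((p) => `  ✅ ${p}`).join("\n");
    const cons = o.cons.length
      ? o.cons.map((c) => `  ❌ ${c}`).join("\n")
      : "  ✅ No cons — this is a hard-stop choice"; // the exact escape phrase
    return `${o.label}${tag}\nPros / cons:\n${pros}\n${cons}`;
  }).join("\n\n");

  return [
    `D${d.n}: ${d.question}`,
    `ELI10: ${d.eli10}`,
    `Stakes if we pick wrong: ${d.stakes}`,
    `Recommendation: ${d.recommended}`,
    opts,
    `Net: ${d.net}`,
  ].join("\n");
}
```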
test/skill-validation.test.ts
- Update format assertions to check for new Pros/Cons tokens
(Pros / cons:, Recommendation: <choice>, Net:, ELI10, Stakes if we
pick wrong:, ✅, ❌) across all tier-2+ skills
- Old "RECOMMENDATION: Choose" expectation removed (the new format uses
mixed-case "Recommendation:" with no literal "Choose")
test/skill-e2e-plan-format.test.ts
- Add v1.7.0.0 format token regexes (PROS_CONS_HEADER_RE, PRO_BULLET_RE,
CON_BULLET_RE, NET_LINE_RE, D_NUMBER_RE, STAKES_RE)
- Existing RECOMMENDATION_RE loosened to accept mixed-case "Recommendation:"
(canonical v1.7.0.0 form) alongside all-caps (legacy). Tests are
additive — the strict new-format gate is the upcoming cadence eval.
Regenerated 30 SKILL.md files via bun run gen:skill-docs.
Verified:
- bun test: 319 pass (1 pre-existing security-bench fixture oversize
failure on main, unrelated — confirmed via git stash test on main HEAD)
- New format tokens render in all tier-2+ skills (plan-ceo-review,
plan-eng-review, ship, office-hours verified)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test: gate-tier units + periodic Pros/Cons evals for AskUserQuestion format
Part 3 of 4 (plan: ~/.claude/plans/system-instruction-you-are-working-polymorphic-twilight.md).
Gate-tier (E1, free, runs on every `bun test`):
test/preamble-compose.test.ts — pins the composition order
Asserts AskUserQuestion Format section renders BEFORE Model-Specific
Behavioral Patch in tier-≥2 preamble output. Covers claude default,
opus-4-7 overlay, tier 2/3, and codex host. Catches any future edit
to scripts/resolvers/preamble.ts that silently reverts the order.
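The core assertion might look like this (import path and helper signature are assumed):

```ts
import { describe, expect, test } from "bun:test";
// Path and signature assumed for illustration.
import { composePreamble } from "../scripts/resolvers/preamble";

describe("preamble composition order", () => {
  test("AskUserQuestion Format renders before the model overlay", () => {
    const out = composePreamble({ model: "opus-4-7", tier: 2 });
    const format = out.indexOf("AskUserQuestion Format");
    const overlay = out.indexOf("Model-Specific Behavioral Patch");
    expect(format).toBeGreaterThan(-1);
    expect(overlay).toBeGreaterThan(-1);
    // The v1.6.4.0 regression inverted this ordering.
    expect(format).toBeLessThan(overlay);
  });
});
```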
test/resolver-ask-user-format.test.ts — pins the Pros/Cons contract
14 assertions against generateAskUserFormat output: D<N>, ELI10,
Stakes if we pick wrong:, Recommendation: <choice>, Pros / cons:,
✅/❌ markers, min 2 pros + 1 con rules, hard-stop escape exact
phrase, neutral-posture CT1 rule ((recommended) label preserved for
AUTO_DECIDE), Completeness coverage-vs-kind, tool_use mandate
(rule 11), self-check list, D-numbering model-level caveat.
test/model-overlay-opus-4-7.test.ts — pins the pacing directive
Asserts raw overlay file + resolved overlay output contain "Pace
questions to the skill" and NOT "Batch your questions". Verifies
INHERIT:claude chain still works (Todo-list, subordination wrapper),
Fan out / Effort-match / Literal interpretation nudges preserved.
Also asserts claude base overlay does NOT carry the Opus-specific
pacing directive (no cross-contamination).
Periodic-tier (E2, Opus-dependent, ~$1-2/run):
test/skill-e2e-plan-prosons.test.ts — 4 cases extending v1.6.3.0 harness
1. Format positive — every token present when plan has real tradeoff
2. Hard-stop NEGATIVE — plan with genuine tradeoff must NOT dodge to
"No cons — hard-stop choice" escape
3. Neutral-posture NEGATIVE — plan where one option dominates must emit
(recommended) label + "because <reason>", must NOT dodge to
"taste call" / "no preference"
4. Hard-stop POSITIVE — destructive-action plan may legitimately use
the hard-stop escape
test/helpers/touchfiles.ts — entries for all new eval cases
Dependencies: overlay, preamble.ts, generate-ask-user-format.ts, and
the 4 plan-review templates. Diff-based selection triggers the evals
whenever those files change. Also added entries for 7 expanded-coverage
cases (ship, office-hours, investigate, qa, review, design-review,
document-release) — test cases will land in follow-up PRs per skill.
Follow-ups noted in test file header:
- True multi-turn cadence eval (3 findings → 3 distinct asks) — current
harness captures one $OUT_FILE per session; multi-turn capture needs
new harness support.
- Expanded-coverage test cases for the 7 non-plan-review skills.
Verified:
- bun test: 349 pass (30 new + 319 baseline), 1 pre-existing security-bench
oversize failure on main (unrelated, unchanged).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test: regenerate golden fixtures + update ELI10 phrase check for v1.7.0.0
Pros/Cons format rewrite (6b99df9d) changed the resolver output across all
tier-2+ SKILL.md files. Three golden-file regression tests in
test/host-config.test.ts and one phrase-check test in test/gen-skill-docs.test.ts
were failing as expected.
- test/fixtures/golden/claude-ship-SKILL.md
- test/fixtures/golden/codex-ship-SKILL.md
- test/fixtures/golden/factory-ship-SKILL.md
Regenerated via `bun run gen:skill-docs --host all` + cp into fixtures.
- test/gen-skill-docs.test.ts line 244: rename test from "ELI16 simplification
rules" to "ELI10 simplification rules" and match the new phrase pattern.
v1.7.0.0 uses "ELI10 (ALWAYS)" rather than legacy "Simplify (ELI10, ALWAYS)".
bun test: 744 pass, 1 fail (pre-existing security-bench fixture oversize,
unrelated to this branch).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* v1.7.0.0: plan reviews walk you through each issue with Pros/Cons
Restores AskUserQuestion cadence on Opus 4.7 (v1.6.4.0 regression) and
upgrades the format to a numbered decision brief — D<N> header, ELI10,
Stakes, Recommendation, per-option ✅/❌ bullets, Net: closing line.
Fix: composition reorder + overlay rewrite + 16-site escape-hatch hardening
across the 4 plan-review templates.
Feature: Pros/Cons format in the preamble resolver, inherited by every
tier-2+ skill automatically.
30 new gate-tier unit tests pin the format contract (runs in <100ms, $0).
4 new periodic-tier eval cases defend against escape-hatch abuse
(2 positive, 2 negative). Golden fixtures regenerated.
CEO + Eng + Codex reviews completed. 5 of 8 Codex findings incorporated;
CT2 (16 sites, not 31) and CT1 (AUTO_DECIDE contract break) were
load-bearing catches the primary reviews missed.
bun test: 774 pass, 1 fail (pre-existing security-bench oversize, unrelated).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* v1.10.0.0: bump VERSION (was v1.7.0.0, align with branch discipline)
Per user direction — jumping to 1.10.0.0 for versioning alignment.
No functional changes from the prior ship commit (5f038ab7). The
regression fix + Pros/Cons format are identical.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
gstack
"I don't think I've typed like a line of code probably since December, basically, which is an extremely large change." — Andrej Karpathy, No Priors podcast, March 2026
When I heard Karpathy say this, I wanted to find out how. How does one person ship like a team of twenty? Peter Steinberger built OpenClaw — 247K GitHub stars — essentially solo with AI agents. The revolution is here. A single builder with the right tooling can move faster than a traditional team.
I'm Garry Tan, President & CEO of Y Combinator. I've worked with thousands of startups — Coinbase, Instacart, Rippling — when they were one or two people in a garage. Before YC, I was one of the first eng/PM/designers at Palantir, cofounded Posterous (sold to Twitter), and built Bookface, YC's internal social network.
gstack is my answer. I've been building products for twenty years, and right now I'm shipping more products than I ever have. In the last 60 days: 3 production services, 40+ shipped features, part-time, while running YC full-time. On logical code change — not raw LOC, which AI inflates — my 2026 run rate is ~810× my 2013 pace (11,417 vs 14 logical lines/day). Year-to-date (through April 18), 2026 has already produced 240× the entire 2013 year. Measured across 40 public + private garrytan/* repos including Bookface, after excluding one demo repo. AI wrote most of it. The point isn't who typed it, it's what shipped.
The LOC critics aren't wrong that raw line counts inflate with AI. They are wrong that, normalized for that inflation, I'm less productive. I'm more productive, by a lot. Full methodology, caveats, and reproduction script: On the LOC Controversy.
2026 — 1,237 contributions and counting:
2013 — when I built Bookface at YC (772 contributions):
Same person. Different era. The difference is the tooling.
gstack is how I do it. It turns Claude Code into a virtual engineering team — a CEO who rethinks the product, an eng manager who locks architecture, a designer who catches AI slop, a reviewer who finds production bugs, a QA lead who opens a real browser, a security officer who runs OWASP + STRIDE audits, and a release engineer who ships the PR. Twenty-three specialists and eight power tools, all slash commands, all Markdown, all free, MIT license.
This is my open source software factory. I use it every day. I'm sharing it because these tools should be available to everyone.
Fork it. Improve it. Make it yours. And if you want to hate on free open source software — you're welcome to, but I'd rather you just try it first.
Who this is for:
- Founders and CEOs — especially technical ones who still want to ship
- First-time Claude Code users — structured roles instead of a blank prompt
- Tech leads and staff engineers — rigorous review, QA, and release automation on every PR
Quick start
- Install gstack (30 seconds — see below)
- Run `/office-hours` — describe what you're building
- Run `/plan-ceo-review` on any feature idea
- Run `/review` on any branch with changes
- Run `/qa` on your staging URL
- Stop there. You'll know if this is for you.
Install — 30 seconds
Requirements: Claude Code, Git, Bun v1.0+, Node.js (Windows only)
Step 1: Install on your machine
Open Claude Code and paste this. Claude does the rest.
Install gstack: run `git clone --single-branch --depth 1 https://github.com/garrytan/gstack.git ~/.claude/skills/gstack && cd ~/.claude/skills/gstack && ./setup`, then add a "gstack" section to CLAUDE.md that says to use the /browse skill from gstack for all web browsing, never use mcp__claude-in-chrome__* tools, and lists the available skills: /office-hours, /plan-ceo-review, /plan-eng-review, /plan-design-review, /design-consultation, /design-shotgun, /design-html, /review, /ship, /land-and-deploy, /canary, /benchmark, /browse, /connect-chrome, /qa, /qa-only, /design-review, /setup-browser-cookies, /setup-deploy, /retro, /investigate, /document-release, /codex, /cso, /autoplan, /plan-devex-review, /devex-review, /careful, /freeze, /guard, /unfreeze, /gstack-upgrade, /learn. Then ask the user if they also want to add gstack to the current project so teammates get it.
Step 2: Team mode — auto-update for shared repos (recommended)
From inside your repo, paste this. Switches you to team mode, bootstraps the repo so teammates get gstack automatically, and commits the change:
```bash
(cd ~/.claude/skills/gstack && ./setup --team) && ~/.claude/skills/gstack/bin/gstack-team-init required && git add .claude/ CLAUDE.md && git commit -m "require gstack for AI-assisted work"
```
No vendored files in your repo, no version drift, no manual upgrades. Every Claude Code session starts with a fast auto-update check (throttled to once/hour, network-failure-safe, completely silent).
Swap `required` for `optional` if you'd rather nudge teammates than block them.
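That auto-update check is simple to reason about. A sketch of a once-per-hour, failure-safe check (file location and function name are assumptions; the real session hook differs in detail):

```ts
import { existsSync, readFileSync, writeFileSync } from "node:fs";

const STAMP = `${process.env.HOME}/.gstack/last-update-check`; // assumed location
const HOUR_MS = 60 * 60 * 1000;

export async function maybeCheckForUpdates(): Promise<void> {
  const last = existsSync(STAMP) ? Number(readFileSync(STAMP, "utf8")) : 0;
  if (Date.now() - last < HOUR_MS) return; // throttle: at most once an hour
  writeFileSync(STAMP, String(Date.now())); // stamp first so failures still throttle
  try {
    await fetch("https://api.github.com/repos/garrytan/gstack/commits/main");
    // ...compare and fast-forward the clone if behind (elided)
  } catch {
    // network-failure-safe: never block or warn the session
  }
}
```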
OpenClaw
OpenClaw spawns Claude Code sessions via ACP, so every gstack skill just works when Claude Code has gstack installed. Paste this to your OpenClaw agent:
Install gstack: run `git clone --single-branch --depth 1 https://github.com/garrytan/gstack.git ~/.claude/skills/gstack && cd ~/.claude/skills/gstack && ./setup` to install gstack for Claude Code. Then add a "Coding Tasks" section to AGENTS.md that says: when spawning Claude Code sessions for coding work, tell the session to use gstack skills. Include these examples — security audit: "Load gstack. Run /cso", code review: "Load gstack. Run /review", QA test a URL: "Load gstack. Run /qa https://...", build a feature end-to-end: "Load gstack. Run /autoplan, implement the plan, then run /ship", plan before building: "Load gstack. Run /office-hours then /autoplan. Save the plan, don't implement."
After setup, just talk to your OpenClaw agent naturally:
| You say | What happens |
|---|---|
| "Fix the typo in README" | Simple — Claude Code session, no gstack needed |
| "Run a security audit on this repo" | Spawns Claude Code with Run /cso |
| "Build me a notifications feature" | Spawns Claude Code with /autoplan → implement → /ship |
| "Help me plan the v2 API redesign" | Spawns Claude Code with /office-hours → /autoplan, saves plan |
See docs/OPENCLAW.md for advanced dispatch routing and the gstack-lite/gstack-full prompt templates.
Native OpenClaw Skills (via ClawHub)
Four methodology skills that work directly in your OpenClaw agent, no Claude Code session needed. Install from ClawHub:
```bash
clawhub install gstack-openclaw-office-hours gstack-openclaw-ceo-review gstack-openclaw-investigate gstack-openclaw-retro
```
| Skill | What it does |
|---|---|
| `gstack-openclaw-office-hours` | Product interrogation with 6 forcing questions |
| `gstack-openclaw-ceo-review` | Strategic challenge with 4 scope modes |
| `gstack-openclaw-investigate` | Root cause debugging methodology |
| `gstack-openclaw-retro` | Weekly engineering retrospective |
These are conversational skills. Your OpenClaw agent runs them directly via chat.
Other AI Agents
gstack works on 10 AI coding agents, not just Claude. Setup auto-detects which agents you have installed:
```bash
git clone --single-branch --depth 1 https://github.com/garrytan/gstack.git ~/gstack
cd ~/gstack && ./setup
```
Or target a specific agent with `./setup --host <name>`:
| Agent | Flag | Skills install to |
|---|---|---|
| OpenAI Codex CLI | `--host codex` | `~/.codex/skills/gstack-*/` |
| OpenCode | `--host opencode` | `~/.config/opencode/skills/gstack-*/` |
| Cursor | `--host cursor` | `~/.cursor/skills/gstack-*/` |
| Factory Droid | `--host factory` | `~/.factory/skills/gstack-*/` |
| Slate | `--host slate` | `~/.slate/skills/gstack-*/` |
| Kiro | `--host kiro` | `~/.kiro/skills/gstack-*/` |
| Hermes | `--host hermes` | `~/.hermes/skills/gstack-*/` |
| GBrain (mod) | `--host gbrain` | `~/.gbrain/skills/gstack-*/` |
Want to add support for another agent? See docs/ADDING_A_HOST.md. It's one TypeScript config file, zero code changes.
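As a rough illustration of what such a config might contain (field names here are guesses; docs/ADDING_A_HOST.md has the real contract):

```ts
import { homedir } from "node:os";
import { join } from "node:path";

// Hypothetical host entry: every field name below is illustrative.
export const myAgentHost = {
  name: "myagent",                                  // selected via ./setup --host myagent
  skillsDir: join(homedir(), ".myagent", "skills"), // where gstack-*/ directories land
  skillPrefix: "gstack-",
  commandPrefix: "/",                               // how skills are invoked in the agent
  supportsFrontmatter: true,                        // whether SKILL.md metadata is honored
};
```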
See it work
```text
You: I want to build a daily briefing app for my calendar.

You: /office-hours

Claude: [asks about the pain — specific examples, not hypotheticals]

You: Multiple Google calendars, events with stale info, wrong locations.
     Prep takes forever and the results aren't good enough...

Claude: I'm going to push back on the framing. You said "daily briefing
        app." But what you actually described is a personal chief of
        staff AI.

        [extracts 5 capabilities you didn't realize you were describing]
        [challenges 4 premises — you agree, disagree, or adjust]
        [generates 3 implementation approaches with effort estimates]

        RECOMMENDATION: Ship the narrowest wedge tomorrow, learn from
        real usage. The full vision is a 3-month project — start with
        the daily briefing that actually works.

        [writes design doc → feeds into downstream skills automatically]

You: /plan-ceo-review
     [reads the design doc, challenges scope, runs 10-section review]

You: /plan-eng-review
     [ASCII diagrams for data flow, state machines, error paths]
     [test matrix, failure modes, security concerns]

You: Approve plan. Exit plan mode.
     [writes 2,400 lines across 11 files. ~8 minutes.]

You: /review
     [AUTO-FIXED] 2 issues. [ASK] Race condition → you approve fix.

You: /qa https://staging.myapp.com
     [opens real browser, clicks through flows, finds and fixes a bug]

You: /ship
     Tests: 42 → 51 (+9 new). PR: github.com/you/app/pull/42
```
You said "daily briefing app." The agent said "you're building a chief of staff AI" — because it listened to your pain, not your feature request. Eight commands, end to end. That is not a copilot. That is a team.
The sprint
gstack is a process, not a collection of tools. The skills run in the order a sprint runs:
Think → Plan → Build → Review → Test → Ship → Reflect
Each skill feeds into the next. /office-hours writes a design doc that /plan-ceo-review reads. /plan-eng-review writes a test plan that /qa picks up. /review catches bugs that /ship verifies are fixed. Nothing falls through the cracks because every step knows what came before it.
| Skill | Your specialist | What they do |
|---|---|---|
| `/office-hours` | YC Office Hours | Start here. Six forcing questions that reframe your product before you write code. Pushes back on your framing, challenges premises, generates implementation alternatives. Design doc feeds into every downstream skill. |
| `/plan-ceo-review` | CEO / Founder | Rethink the problem. Find the 10-star product hiding inside the request. Four modes: Expansion, Selective Expansion, Hold Scope, Reduction. |
| `/plan-eng-review` | Eng Manager | Lock in architecture, data flow, diagrams, edge cases, and tests. Forces hidden assumptions into the open. |
| `/plan-design-review` | Senior Designer | Rates each design dimension 0-10, explains what a 10 looks like, then edits the plan to get there. AI Slop detection. Interactive — one AskUserQuestion per design choice. |
| `/plan-devex-review` | Developer Experience Lead | Interactive DX review: explores developer personas, benchmarks against competitors' TTHW, designs your magical moment, traces friction points step by step. Three modes: DX EXPANSION, DX POLISH, DX TRIAGE. 20-45 forcing questions. |
| `/design-consultation` | Design Partner | Build a complete design system from scratch. Researches the landscape, proposes creative risks, generates realistic product mockups. |
| `/review` | Staff Engineer | Find the bugs that pass CI but blow up in production. Auto-fixes the obvious ones. Flags completeness gaps. |
| `/investigate` | Debugger | Systematic root-cause debugging. Iron Law: no fixes without investigation. Traces data flow, tests hypotheses, stops after 3 failed fixes. |
| `/design-review` | Designer Who Codes | Same audit as /plan-design-review, then fixes what it finds. Atomic commits, before/after screenshots. |
| `/devex-review` | DX Tester | Live developer experience audit. Actually tests your onboarding: navigates docs, tries the getting started flow, times TTHW, screenshots errors. Compares against /plan-devex-review scores — the boomerang that shows if your plan matched reality. |
| `/design-shotgun` | Design Explorer | "Show me options." Generates 4-6 AI mockup variants, opens a comparison board in your browser, collects your feedback, and iterates. Taste memory learns what you like. Repeat until you love something, then hand it to /design-html. |
| `/design-html` | Design Engineer | Turn a mockup into production HTML that actually works. Pretext computed layout: text reflows, heights adjust, layouts are dynamic. 30KB, zero deps. Detects React/Svelte/Vue. Smart API routing per design type (landing page vs dashboard vs form). The output is shippable, not a demo. |
| `/qa` | QA Lead | Test your app, find bugs, fix them with atomic commits, re-verify. Auto-generates regression tests for every fix. |
| `/qa-only` | QA Reporter | Same methodology as /qa but report only. Pure bug report without code changes. |
| `/pair-agent` | Multi-Agent Coordinator | Share your browser with any AI agent. One command, one paste, connected. Works with OpenClaw, Hermes, Codex, Cursor, or anything that can curl. Each agent gets its own tab. Auto-launches headed mode so you watch everything. Auto-starts ngrok tunnel for remote agents. Scoped tokens, tab isolation, rate limiting, activity attribution. |
| `/cso` | Chief Security Officer | OWASP Top 10 + STRIDE threat model. Zero-noise: 17 false positive exclusions, 8/10+ confidence gate, independent finding verification. Each finding includes a concrete exploit scenario. |
| `/ship` | Release Engineer | Sync main, run tests, audit coverage, push, open PR. Bootstraps test frameworks if you don't have one. |
| `/land-and-deploy` | Release Engineer | Merge the PR, wait for CI and deploy, verify production health. One command from "approved" to "verified in production." |
| `/canary` | SRE | Post-deploy monitoring loop. Watches for console errors, performance regressions, and page failures. |
| `/benchmark` | Performance Engineer | Baseline page load times, Core Web Vitals, and resource sizes. Compare before/after on every PR. |
| `/document-release` | Technical Writer | Update all project docs to match what you just shipped. Catches stale READMEs automatically. |
| `/retro` | Eng Manager | Team-aware weekly retro. Per-person breakdowns, shipping streaks, test health trends, growth opportunities. /retro global runs across all your projects and AI tools (Claude Code, Codex, Gemini). |
| `/browse` | QA Engineer | Give the agent eyes. Real Chromium browser, real clicks, real screenshots. ~100ms per command. /open-gstack-browser launches GStack Browser with sidebar, anti-bot stealth, and auto model routing. |
| `/setup-browser-cookies` | Session Manager | Import cookies from your real browser (Chrome, Arc, Brave, Edge) into the headless session. Test authenticated pages. |
| `/autoplan` | Review Pipeline | One command, fully reviewed plan. Runs CEO → design → eng review automatically with encoded decision principles. Surfaces only taste decisions for your approval. |
| `/learn` | Memory | Manage what gstack learned across sessions. Review, search, prune, and export project-specific patterns, pitfalls, and preferences. Learnings compound across sessions so gstack gets smarter on your codebase over time. |
Which review should I use?
| Building for... | Plan stage (before code) | Live audit (after shipping) |
|---|---|---|
| End users (UI, web app, mobile) | `/plan-design-review` | `/design-review` |
| Developers (API, CLI, SDK, docs) | `/plan-devex-review` | `/devex-review` |
| Architecture (data flow, perf, tests) | `/plan-eng-review` | `/review` |
| All of the above | `/autoplan` (runs CEO → design → eng → DX, auto-detects which apply) | — |
Power tools
| Skill | What it does |
|---|---|
| `/codex` | Second Opinion — independent code review from OpenAI Codex CLI. Three modes: review (pass/fail gate), adversarial challenge, and open consultation. Cross-model analysis when both /review and /codex have run. |
| `/careful` | Safety Guardrails — warns before destructive commands (rm -rf, DROP TABLE, force-push). Say "be careful" to activate. Override any warning. |
| `/freeze` | Edit Lock — restrict file edits to one directory. Prevents accidental changes outside scope while debugging. |
| `/guard` | Full Safety — /careful + /freeze in one command. Maximum safety for prod work. |
| `/unfreeze` | Unlock — remove the /freeze boundary. |
| `/open-gstack-browser` | GStack Browser — launch GStack Browser with sidebar, anti-bot stealth, auto model routing (Sonnet for actions, Opus for analysis), one-click cookie import, and Claude Code integration. Clean up pages, take smart screenshots, edit CSS, and pass info back to your terminal. |
| `/setup-deploy` | Deploy Configurator — one-time setup for /land-and-deploy. Detects your platform, production URL, and deploy commands. |
| `/gstack-upgrade` | Self-Updater — upgrade gstack to latest. Detects global vs vendored install, syncs both, shows what changed. |
New binaries (v0.19)
Beyond the slash-command skills, gstack ships standalone CLIs for workflows that don't belong inside a session:
| Command | What it does |
|---|---|
| `gstack-model-benchmark` | Cross-model benchmark — run the same prompt through Claude, GPT (via Codex CLI), and Gemini; compare latency, tokens, cost, and (optionally) LLM-judge quality score. Auth detected per provider, unavailable providers skip cleanly. Output as table, JSON, or markdown. `--dry-run` validates flags + auth without spending API calls. |
| `gstack-taste-update` | Design taste learning — writes approvals and rejections from /design-shotgun into a persistent per-project taste profile. Decays 5%/week. Feeds back into future variant generation so the system learns what you actually pick. |
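The 5%/week decay means old taste signals fade rather than vanish. A sketch of the weighting it implies (the actual gstack-taste-update implementation may differ):

```ts
const DECAY_PER_WEEK = 0.95; // keep 95% of a signal's weight each week
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

// Weight of a past approval/rejection after ageMs milliseconds.
function tasteWeight(ageMs: number): number {
  return Math.pow(DECAY_PER_WEEK, ageMs / WEEK_MS);
}

// A signal from 4 weeks ago still contributes 0.95^4 ≈ 0.81 of its original weight.
console.log(tasteWeight(4 * WEEK_MS).toFixed(2)); // "0.81"
```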
Continuous checkpoint mode (opt-in, local by default)
Set `gstack-config set checkpoint_mode continuous` and skills auto-commit your work as you go with a `WIP:` prefix plus a structured `[gstack-context]` body (decisions, remaining work, failed approaches). Survives crashes and context switches. /context-restore reads those commits to reconstruct session state. /ship filter-squashes WIP commits before the PR (preserving non-WIP commits) so bisect stays clean. Push is opt-in via `checkpoint_push=true` — default is local-only so you don't trigger CI on every WIP commit.
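For a feel of what those checkpoint commits carry, here is an assumed shape (illustrative, not the actual format):

```ts
// All field names below are illustrative assumptions.
type GstackContext = {
  decisions: string[];        // choices made so far and why
  remaining: string[];        // work still to do
  failedApproaches: string[]; // dead ends, so a restored session doesn't retry them
};

// A WIP commit message a continuous-checkpoint session might produce:
const message = [
  "WIP: wire telemetry opt-in prompt",
  "",
  "[gstack-context]",
  JSON.stringify(
    {
      decisions: ["prompt on first run, not at install"],
      remaining: ["persist answer to config.yaml"],
      failedApproaches: ["env-var gate (lost across shells)"],
    } satisfies GstackContext,
    null,
    2,
  ),
].join("\n");
```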
Deep dives with examples and philosophy for every skill →
Karpathy's four failure modes? Already covered.
Andrej Karpathy's AI coding rules (17K stars) nail four failure modes: wrong assumptions, overcomplexity, orthogonal edits, imperative over declarative. gstack's workflow skills enforce all four. /office-hours forces assumptions into the open before code is written. The Confusion Protocol stops Claude from guessing on architectural decisions. /review catches unnecessary complexity and drive-by edits. /ship transforms tasks into verifiable goals with test-first execution. If you already use Karpathy-style CLAUDE.md rules, gstack is the workflow enforcement layer that makes them stick across entire sprints, not just single prompts.
Parallel sprints
gstack works well with one sprint. It gets interesting with ten running at once.
Design is at the heart. /design-consultation builds your design system from scratch, researches what's out there, proposes creative risks, and writes DESIGN.md. But the real magic is the shotgun-to-HTML pipeline.
/design-shotgun is how you explore. You describe what you want. It generates 4-6 AI mockup variants using GPT Image. Then it opens a comparison board in your browser with all variants side by side. You pick favorites, leave feedback ("more whitespace", "bolder headline", "lose the gradient"), and it generates a new round. Repeat until you love something. Taste memory kicks in after a few rounds so it starts biasing toward what you actually like. No more describing your vision in words and hoping the AI gets it. You see options, pick the good ones, and iterate visually.
/design-html makes it real. Take that approved mockup (from /design-shotgun, a CEO plan, a design review, or just a description) and turn it into production-quality HTML/CSS. Not the kind of AI HTML that looks fine at one viewport width and breaks everywhere else. This uses Pretext for computed text layout: text actually reflows on resize, heights adjust to content, layouts are dynamic. 30KB overhead, zero dependencies. It detects your framework (React, Svelte, Vue) and outputs the right format. Smart API routing picks different Pretext patterns depending on whether it's a landing page, dashboard, form, or card layout. The output is something you'd actually ship, not a demo.
/qa was a massive unlock. It let me go from 6 to 12 parallel workers. Claude Code saying "I SEE THE ISSUE" and then actually fixing it, generating a regression test, and verifying the fix — that changed how I work. The agent has eyes now.
Smart review routing. Just like at a well-run startup: CEO doesn't have to look at infra bug fixes, design review isn't needed for backend changes. gstack tracks what reviews are run, figures out what's appropriate, and just does the smart thing. The Review Readiness Dashboard tells you where you stand before you ship.
Test everything. /ship bootstraps test frameworks from scratch if your project doesn't have one. Every /ship run produces a coverage audit. Every /qa bug fix generates a regression test. 100% test coverage is the goal — tests make vibe coding safe instead of yolo coding.
/document-release is the engineer you never had. It reads every doc file in your project, cross-references the diff, and updates everything that drifted. README, ARCHITECTURE, CONTRIBUTING, CLAUDE.md, TODOS — all kept current automatically. And now /ship auto-invokes it — docs stay current without an extra command.
Real browser mode. /open-gstack-browser launches GStack Browser, an AI-controlled Chromium with anti-bot stealth, custom branding, and the sidebar extension baked in. Sites like Google and NYTimes work without captchas. The menu bar says "GStack Browser" instead of "Chrome for Testing." Your regular Chrome stays untouched. All existing browse commands work unchanged. $B disconnect returns to headless. The browser stays alive as long as the window is open... no idle timeout killing it while you're working.
Sidebar agent — your AI browser assistant. Type natural language in the Chrome side panel and a child Claude instance executes it. "Navigate to the settings page and screenshot it." "Fill out this form with test data." "Go through every item in this list and extract the prices." The sidebar auto-routes to the right model: Sonnet for fast actions (click, navigate, screenshot) and Opus for reading and analysis. Each task gets up to 5 minutes. The sidebar agent runs in an isolated session, so it won't interfere with your main Claude Code window. One-click cookie import right from the sidebar footer.
Personal automation. The sidebar agent isn't just for dev workflows. Example: "Browse my kid's school parent portal and add all the other parents' names, phone numbers, and photos to my Google Contacts." Two ways to get authenticated: (1) log in once in the headed browser, your session persists, or (2) click the "cookies" button in the sidebar footer to import cookies from your real Chrome. Once authenticated, Claude navigates the directory, extracts the data, and creates the contacts.
Prompt injection defense. Hostile web pages try to hijack your sidebar agent. gstack ships a layered defense: a 22MB ML classifier bundled with the browser scans every page and tool output locally, a Claude Haiku transcript check votes on the full conversation shape, a random canary token in the system prompt catches session exfil attempts across text, tool args, URLs, and file writes, and a verdict combiner requires two classifiers to agree before blocking (prevents single-model false positives on Stack Overflow-style instruction pages). A shield icon in the sidebar header shows status (green/amber/red). Opt in to a 721MB DeBERTa-v3 ensemble via GSTACK_SECURITY_ENSEMBLE=deberta for 2-of-3 agreement. Emergency kill switch: GSTACK_SECURITY_OFF=1. See ARCHITECTURE.md for the full stack.
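A minimal sketch of that two-vote rule (types are assumed; ARCHITECTURE.md documents the shipped combiner):

```ts
type Verdict = { classifier: string; malicious: boolean; confidence: number };

// Two independent classifiers must agree before blocking; a single
// confident flag alone is not enough to hard-block the page.
function shouldBlock(verdicts: Verdict[], minConfidence = 0.8): boolean {
  const agreeing = verdicts.filter(
    (v) => v.malicious && v.confidence >= minConfidence,
  );
  return agreeing.length >= 2;
}

shouldBlock([
  { classifier: "local-ml", malicious: true, confidence: 0.93 },
  { classifier: "haiku-transcript", malicious: false, confidence: 0.4 },
]); // false: one vote is a warning, not a block
```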
Browser handoff when the AI gets stuck. Hit a CAPTCHA, auth wall, or MFA prompt? $B handoff opens a visible Chrome at the exact same page with all your cookies and tabs intact. Solve the problem, tell Claude you're done, $B resume picks up right where it left off. The agent even suggests it automatically after 3 consecutive failures.
/pair-agent is cross-agent coordination. You're in Claude Code. You also have OpenClaw running. Or Hermes. Or Codex. You want them both looking at the same website. Type /pair-agent, pick your agent, and a GStack Browser window opens so you can watch. The skill prints a block of instructions. Paste that block into the other agent's chat. It exchanges a one-time setup key for a session token, creates its own tab, and starts browsing. You see both agents working in the same browser, each in their own tab, neither able to interfere with the other. If ngrok is installed, the tunnel starts automatically so the other agent can be on a completely different machine. Same-machine agents get a zero-friction shortcut that writes credentials directly. This is the first time AI agents from different vendors can coordinate through a shared browser with real security: scoped tokens, tab isolation, rate limiting, domain restrictions, and activity attribution.
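From the other agent's side, the handshake amounts to something like this (endpoint paths and payload shapes are invented for illustration; the printed instruction block is authoritative):

```ts
// Hypothetical client for the one-time-key to session-token exchange.
async function joinSharedBrowser(baseUrl: string, setupKey: string) {
  // Exchange the single-use setup key for a scoped session token.
  const res = await fetch(`${baseUrl}/pair/exchange`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ setupKey }),
  });
  if (!res.ok) throw new Error(`exchange failed: ${res.status}`);
  const { token, tabId } = (await res.json()) as { token: string; tabId: string };

  // All subsequent commands are scoped to this agent's own tab.
  await fetch(`${baseUrl}/tabs/${tabId}/navigate`, {
    method: "POST",
    headers: { authorization: `Bearer ${token}` },
    body: JSON.stringify({ url: "https://example.com" }),
  });
  return { token, tabId };
}
```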
Multi-AI second opinion. /codex gets an independent review from OpenAI's Codex CLI — a completely different AI looking at the same diff. Three modes: code review with a pass/fail gate, adversarial challenge that actively tries to break your code, and open consultation with session continuity. When both /review (Claude) and /codex (OpenAI) have reviewed the same branch, you get a cross-model analysis showing which findings overlap and which are unique to each.
Safety guardrails on demand. Say "be careful" and /careful warns before any destructive command — rm -rf, DROP TABLE, force-push, git reset --hard. /freeze locks edits to one directory while debugging so Claude can't accidentally "fix" unrelated code. /guard activates both. /investigate auto-freezes to the module being investigated.
Proactive skill suggestions. gstack notices what stage you're in — brainstorming, reviewing, debugging, testing — and suggests the right skill. Don't like it? Say "stop suggesting" and it remembers across sessions.
10-15 parallel sprints
gstack is powerful with one sprint. It is transformative with ten running at once.
Conductor runs multiple Claude Code sessions in parallel — each in its own isolated workspace. One session running /office-hours on a new idea, another doing /review on a PR, a third implementing a feature, a fourth running /qa on staging, and six more on other branches. All at the same time. I regularly run 10-15 parallel sprints — that's the practical max right now.
The sprint structure is what makes parallelism work. Without a process, ten agents is ten sources of chaos. With a process — think, plan, build, review, test, ship — each agent knows exactly what to do and when to stop. You manage them the way a CEO manages a team: check in on the decisions that matter, let the rest run.
Voice input (AquaVoice, Whisper, etc.)
gstack skills have voice-friendly trigger phrases. Say what you want naturally — "run a security check", "test the website", "do an engineering review" — and the right skill activates. You don't need to remember slash command names or acronyms.
Uninstall
Option 1: Run the uninstall script
If gstack is installed on your machine:
```bash
~/.claude/skills/gstack/bin/gstack-uninstall
```
This handles skills, symlinks, global state (~/.gstack/), project-local state, browse daemons, and temp files. Use `--keep-state` to preserve config and analytics. Use `--force` to skip confirmation.
Option 2: Manual removal (no local repo)
If you don't have the repo cloned (e.g. you installed via a Claude Code paste and later deleted the clone):
```bash
# 1. Stop browse daemons
pkill -f "gstack.*browse" 2>/dev/null || true

# 2. Remove per-skill symlinks pointing into gstack/
find ~/.claude/skills -maxdepth 1 -type l 2>/dev/null | while read -r link; do
  case "$(readlink "$link" 2>/dev/null)" in gstack/*|*/gstack/*) rm -f "$link" ;; esac
done

# 3. Remove gstack
rm -rf ~/.claude/skills/gstack

# 4. Remove global state
rm -rf ~/.gstack

# 5. Remove integrations (skip any you never installed)
rm -rf ~/.codex/skills/gstack* 2>/dev/null
rm -rf ~/.factory/skills/gstack* 2>/dev/null
rm -rf ~/.kiro/skills/gstack* 2>/dev/null
rm -rf ~/.openclaw/skills/gstack* 2>/dev/null

# 6. Remove temp files
rm -f /tmp/gstack-* 2>/dev/null

# 7. Per-project cleanup (run from each project root)
rm -rf .gstack .gstack-worktrees .claude/skills/gstack 2>/dev/null
rm -rf .agents/skills/gstack* .factory/skills/gstack* 2>/dev/null
```
Clean up CLAUDE.md
The uninstall script does not edit CLAUDE.md. In each project where gstack was added, remove the `## gstack` and `## Skill routing` sections.
Playwright
~/Library/Caches/ms-playwright/ (macOS) is left in place because other tools may share it. Remove it if nothing else needs it.
Free, MIT licensed, open source. No premium tier, no waitlist.
I open sourced how I build software. You can fork it and make it your own.
We're hiring. Want to ship real products at AI-coding speed and help harden gstack? Come work at YC — ycombinator.com/software. Extremely competitive salary and equity. San Francisco, Dogpatch District.
Cross-machine memory with GBrain sync
gstack accumulates a lot of useful state on your laptop: learnings, CEO plans, design docs, retros, developer profile. Today, all of that dies when you switch machines. GBrain sync optionally pushes a curated, secret-scanned subset to a private git repo so your memory follows you, and (if you use GBrain) becomes indexable there.
One command to turn it on:
```bash
gstack-brain-init
```
That creates a private GitHub repo (or any git remote you prefer — GitLab, Gitea, self-hosted). Every skill run syncs the queue at its start and end boundaries. No daemon, no background process. A one-time privacy prompt asks how much you want to share (everything allowlisted / artifacts only / off). Secret-shaped content (AWS keys, GitHub tokens, PEM blocks, JWTs, etc.) is blocked from sync before it leaves your machine.
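The secret gate amounts to shape-matching before anything is pushed. A sketch (patterns abbreviated; the shipped scanner covers more shapes):

```ts
// Illustrative patterns only; the real scanner's list is longer.
const SECRET_PATTERNS: RegExp[] = [
  /AKIA[0-9A-Z]{16}/,                                  // AWS access key id
  /ghp_[A-Za-z0-9]{36}/,                               // GitHub personal access token
  /-----BEGIN [A-Z ]*PRIVATE KEY-----/,                // PEM block
  /eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+/, // JWT-shaped string
];

// Return true if the payload is safe to leave the machine.
function passesSecretScan(payload: string): boolean {
  return !SECRET_PATTERNS.some((re) => re.test(payload));
}
```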
New machine? Copy `~/.gstack-brain-remote.txt` over, run `gstack-brain-restore`, and yesterday's learnings surface on today's laptop.
Full guide: docs/gbrain-sync.md • Error index: docs/gbrain-sync-errors.md
Docs
| Doc | What it covers |
|---|---|
| Skill Deep Dives | Philosophy, examples, and workflow for every skill (includes Greptile integration) |
| Builder Ethos | Builder philosophy: Boil the Lake, Search Before Building, three layers of knowledge |
| GBrain Sync | Cross-machine memory setup, privacy modes, troubleshooting |
| Architecture | Design decisions and system internals |
| Browser Reference | Full command reference for /browse |
| Contributing | Dev setup, testing, contributor mode, and dev mode |
| Changelog | What's new in every version |
Privacy & Telemetry
gstack includes opt-in usage telemetry to help improve the project. Here's exactly what happens:
- Default is off. Nothing is sent anywhere unless you explicitly say yes.
- On first run, gstack asks if you want to share anonymous usage data. You can say no.
- What's sent (if you opt in): skill name, duration, success/fail, gstack version, OS. That's it.
- What's never sent: code, file paths, repo names, branch names, prompts, or any user-generated content.
- Change anytime: `gstack-config set telemetry off` disables everything instantly.
Data is stored in Supabase (open source Firebase alternative). The schema is in supabase/migrations/ — you can verify exactly what's collected. The Supabase publishable key in the repo is a public key (like a Firebase API key) — row-level security policies deny all direct access. Telemetry flows through validated edge functions that enforce schema checks, event type allowlists, and field length limits.
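Under those constraints an event is tiny. A sketch of the payload shape (field names assumed; the authoritative schema lives in supabase/migrations/):

```ts
// Sketch: the migration files define the real schema.
type TelemetryEvent = {
  skill: string;         // e.g. "qa"
  durationMs: number;
  success: boolean;
  gstackVersion: string; // e.g. "1.10.0.0"
  os: string;            // e.g. "darwin"
};
```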
Local analytics are always available. Run gstack-analytics to see your personal usage dashboard from the local JSONL file — no remote data needed.
Troubleshooting
Skill not showing up? `cd ~/.claude/skills/gstack && ./setup`
/browse fails? `cd ~/.claude/skills/gstack && bun install && bun run build`
Stale install? Run /gstack-upgrade — or set `auto_upgrade: true` in ~/.gstack/config.yaml
Want shorter commands? `cd ~/.claude/skills/gstack && ./setup --no-prefix` — switches from /gstack-qa to /qa. Your choice is remembered for future upgrades.
Want namespaced commands? `cd ~/.claude/skills/gstack && ./setup --prefix` — switches from /qa to /gstack-qa. Useful if you run other skill packs alongside gstack.
Codex says "Skipped loading skill(s) due to invalid SKILL.md"? Your Codex skill descriptions are stale. Fix: `cd ~/.codex/skills/gstack && git pull && ./setup --host codex` — or for repo-local installs: `cd "$(readlink -f .agents/skills/gstack)" && git pull && ./setup --host codex`
Windows users: gstack works on Windows 11 via Git Bash or WSL. Node.js is required in addition to Bun — Bun has a known bug with Playwright's pipe transport on Windows (bun#4253). The browse server automatically falls back to Node.js. Make sure both bun and node are on your PATH.
Claude says it can't see the skills? Make sure your project's CLAUDE.md has a gstack section. Add this:
```markdown
## gstack
Use /browse from gstack for all web browsing. Never use mcp__claude-in-chrome__* tools.
Available skills: /office-hours, /plan-ceo-review, /plan-eng-review, /plan-design-review,
/design-consultation, /design-shotgun, /design-html, /review, /ship, /land-and-deploy,
/canary, /benchmark, /browse, /open-gstack-browser, /qa, /qa-only, /design-review,
/setup-browser-cookies, /setup-deploy, /retro, /investigate, /document-release, /codex,
/cso, /autoplan, /pair-agent, /careful, /freeze, /guard, /unfreeze, /gstack-upgrade, /learn.
```
License
MIT. Free forever. Go build something.

