mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-01 19:25:10 +02:00
22a4451e0e
* chore: regenerate stale ship golden fixtures
Golden fixtures were missing the VENDORED_GSTACK preamble section that
landed on main. Regression tests failed on all three hosts (claude, codex,
factory). Regenerated from current preamble output.
No code changes, unblocks test suite.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: anti-slop design constraints + delete duplicate constants
Tightens design-consultation and design-shotgun to push back on the
convergence traps every AI design tool falls into.
Changes:
- scripts/resolvers/constants.ts: add "system-ui as primary font" to
AI_SLOP_BLACKLIST. Document Space Grotesk as the new "safe alternative
to Inter" convergence trap alongside the existing overused fonts.
- scripts/gen-skill-docs.ts: delete duplicate AI slop constants block
(dead code — scripts/resolvers/constants.ts is the live source).
Prevents drift between the two definitions.
- design-consultation/SKILL.md.tmpl: add Space Grotesk + system-ui to
overused/slop lists. Add "anti-convergence directive" — vary across
generations in the same project. Add Phase 1 "memorable-thing forcing
question" (what's the one thing someone will remember?). Add Phase 5
"would a human designer be embarrassed by this?" self-gate before
presenting variants.
- design-shotgun/SKILL.md.tmpl: anti-convergence directive — each
variant must use a different font, palette, and layout. If two
variants look like siblings, one of them failed.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: context health soft directive in preamble (T2+)
Adds a "periodically self-summarize" nudge to long-running skills.
Soft directive only — no thresholds, no enforcement, no auto-commit.
Goal: self-awareness during /qa, /investigate, /cso etc. If you notice
yourself going in circles, STOP and reassess instead of thrashing.
Codex review caught that fake precision thresholds (15/30/45 tool calls)
were unimplementable — SKILL.md is a static prompt, not runtime code.
This ships the soft version only.
Changes:
- scripts/resolvers/preamble.ts: add generateContextHealth(), wire into
T2+ tier. Format: [PROGRESS] ... summary line. Explicit rule that
progress reporting must never mutate git state.
- All T2+ skill SKILL.md files regenerated to include the new section.
- Golden ship fixtures updated (T4 skill, picks up the change).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: model overlays with explicit --model flag (no auto-detect)
Adds a per-model behavioral patch layer orthogonal to the host axis.
Different LLMs have different tendencies (GPT won't stop, Gemini
over-explains, o-series wants structured output). Overlays nudge each
model toward better defaults for gstack workflows.
Codex review caught three landmines the prior reviews missed:
1. Host != model — Claude Code can run any Claude model, Codex runs
GPT/o-series, Cursor fronts multiple providers. Auto-detecting from
host would lie. Dropped auto-detect. --model is explicit (default
claude). Missing overlay file → empty string (graceful).
2. Import cycle — putting Model in resolvers/types.ts would cycle
through hosts/index. Created neutral scripts/models.ts instead.
3. "Final say" is dangerous — overlay at the end of preamble could
override STOP points, AskUserQuestion gates, /ship review gates.
Placed overlay after spawned-session-check but before voice + tier
sections. Wrapper heading adds explicit subordination language on
every overlay: "subordinate to skill workflow, STOP points,
AskUserQuestion gates, plan-mode safety, and /ship review gates."
Changes:
- scripts/models.ts: new neutral module. ALL_MODEL_NAMES, Model type,
resolveModel() for family heuristics (gpt-5.4-mini → gpt-5.4, o3 →
o-series, claude-opus-4-7 → claude), validateModel() helper.
- scripts/resolvers/types.ts: import Model, add ctx.model field.
- scripts/resolvers/model-overlay.ts: new resolver. Reads
model-overlays/{model}.md. Supports {{INHERIT:base}} directive at
top of file for concat (gpt-5.4 inherits gpt). Cycle guard.
- scripts/resolvers/index.ts: register MODEL_OVERLAY resolver.
- scripts/resolvers/preamble.ts: wire generateModelOverlay into
composition before voice. Print MODEL_OVERLAY: {model} in preamble
bash so users can see which overlay is active. Filter empty sections.
- scripts/gen-skill-docs.ts: parse --model CLI flag. Default claude.
Unknown model → throw with list of valid options.
- model-overlays/{claude,gpt,gpt-5.4,gemini,o-series}.md: behavioral
patches per model family. gpt-5.4.md uses {{INHERIT:gpt}} to extend
gpt.md without duplication.
- test/gen-skill-docs.test.ts: fix qa-only guardrail regex scope.
Was matching Edit/Glob/Grep anywhere after `allowed-tools:` in the
whole file. Now scoped to frontmatter only. Body prose (Claude
overlay references Edit as a tool) correctly no longer breaks it.
Verification:
- bun run gen:skill-docs --host all --dry-run → all fresh
- bun run gen:skill-docs --model gpt-5.4 → concat works, gpt.md +
gpt-5.4.md content appears in order
- bun run gen:skill-docs --model unknown → errors with valid list
- All generated skills contain MODEL_OVERLAY: claude in preamble
- Golden ship fixtures regenerated
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: continuous checkpoint mode with non-destructive WIP squash
Adds opt-in auto-commit during long sessions so work survives Claude
Code crashes, Conductor workspace handoffs, and context switches.
Local-only by default — pushing requires explicit opt-in.
Codex review caught multiple landmines that would have shipped:
1. checkpoint_push=true default would push WIP commits to shared
branches, trigger CI/deploys, expose secrets. Now default false.
2. Plan's original /ship squash (git reset --soft to merge base) was
destructive — uncommitted ALL branch commits, not just WIP, and
caused non-fast-forward pushes. Redesigned: rebase --autosquash
scoped to WIP commits only, with explicit fallback for WIP-only
branches and STOP-and-ask for conflicts.
3. gstack-config get returned empty for missing keys with exit 0,
ignoring the annotated defaults in the header comments. Fixed:
get now falls back to a lookup_default() table that is the
canonical source for defaults.
4. Telemetry default mismatched: header said 'anonymous' but runtime
treated empty as 'off'. Aligned: default is 'off' everywhere.
5. /checkpoint resume only read markdown checkpoint files, not the
WIP commit [gstack-context] bodies the plan referenced. Wired up
parsing of [gstack-context] blocks from WIP commits as a second
recovery trail alongside the markdown checkpoints.
Changes:
- bin/gstack-config: add checkpoint_mode (default explicit) and
checkpoint_push (default false) to CONFIG_HEADER. Add lookup_default()
as canonical default source. get() falls back to defaults when key
absent. list now shows value + source (set/default). New 'defaults'
subcommand to inspect the table.
- scripts/resolvers/preamble.ts: preamble bash reads _CHECKPOINT_MODE
and _CHECKPOINT_PUSH, prints CHECKPOINT_MODE: and CHECKPOINT_PUSH: so
the mode is visible. New generateContinuousCheckpoint() section in
T2+ tier describes WIP commit format with [gstack-context] body and
the rules (never git add -A, never commit broken tests, push only
if opted in). Example deliberately shows a clean-state context so
it doesn't contradict the rules.
- ship/SKILL.md.tmpl: new Step 5.75 WIP Commit Squash. Detects WIP
count, exports [gstack-context] blocks before squash (as backup),
uses rebase --autosquash for mixed branches and soft-reset only when
VERIFIED WIP-only. Explicit anti-footgun rules against blind soft-
reset. Aborts with BLOCKED status on conflict instead of destroying
non-WIP commits.
- checkpoint/SKILL.md.tmpl: new Step 1.5 to parse [gstack-context]
blocks from WIP commits via git log --grep="^WIP:". Merges with
markdown checkpoint for fuller session recovery.
- Golden ship fixtures regenerated (ship is T4, preamble change shows up).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: feature discovery flow gated by per-feature markers
Extends generateUpgradeCheck() to surface new features once per user
after a just-upgraded session. No more silent features.
Codex review caught: spawned sessions (OpenClaw, etc.) must skip the
discovery prompt entirely — they can't interactively answer. Feature
discovery now checks SPAWNED_SESSION first and is silent in those.
Discovery is per-feature, not per-upgrade. Each feature has its own
marker file at ~/.claude/skills/gstack/.feature-prompted-{name}. Once
the user has been shown a feature (accepted, shown docs, or skipped),
the marker is touched and the prompt never fires again for that
feature. Future features get their own markers.
V1 features surfaced:
- continuous-checkpoint: offer to enable checkpoint_mode=continuous
- model-overlay: inform-only note about --model flag and MODEL_OVERLAY
line in preamble output
Max one prompt per session to avoid nagging. Fires only on JUST_UPGRADED
(not every session), plus spawned-session skip.
Changes:
- scripts/resolvers/preamble.ts: extend generateUpgradeCheck() with
feature discovery rules, per-marker-file semantics, spawned-session
exclusion, and max-one-per-session cap.
- All skill SKILL.md files regenerated to include the new section.
- Golden ship fixtures regenerated.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: design taste engine with persistent schema
Adds a cross-session taste profile that learns from design-shotgun
approval/rejection decisions. Biases future design-consultation and
design-shotgun proposals toward the user's demonstrated preferences.
Codex review caught that the plan had "taste engine" as a vague goal
without schema, decay, migration, or placeholder insertion points. This
commit ships the full spec.
Schema v1 at ~/.gstack/projects/$SLUG/taste-profile.json:
- version, updated_at
- dimensions: fonts, colors, layouts, aesthetics — each with approved[]
and rejected[] preference lists
- sessions: last 50 (FIFO truncation), each with ts/action/variant/reason
- Preference: { value, confidence, approved_count, rejected_count, last_seen }
- Confidence: Laplace-smoothed approved/(total+1)
- Decay: 5% per week of inactivity, computed at read time (not write)
Changes:
- bin/gstack-taste-update: new CLI. Subcommands approved/rejected/show/
migrate. Parses reason string for dimension signals (e.g.,
"fonts: Geist; colors: slate; aesthetics: minimal"). Emits taste-drift
NOTE when a new signal contradicts a strong opposing signal. Legacy
approved.json aggregates migrate to v1 on next write.
- scripts/resolvers/design.ts: new generateTasteProfile() resolver.
Produces the prose that skills see: how to read the profile, how to
factor into proposals, conflict handling, schema migration.
- scripts/resolvers/index.ts: register TASTE_PROFILE and a BIN_DIR
resolver (returns ctx.paths.binDir, used by templates that shell out
to gstack-* binaries).
- design-consultation/SKILL.md.tmpl: insert {{TASTE_PROFILE}} placeholder
in Phase 1 right after the memorable-thing forcing question so the
Phase 3 proposal can factor in learned preferences.
- design-shotgun/SKILL.md.tmpl: taste memory section now reads
taste-profile.json via {{TASTE_PROFILE}}, falls back to per-session
approved.json (legacy). Approval flow documented to call
gstack-taste-update after user picks/rejects a variant.
Known gap: v1 extracts dimension signals from a reason string passed
by the caller ("fonts: X; colors: Y"). Future v2 can read EXIF or an
accompanying manifest written by design-shotgun alongside each variant
for automatic dimension extraction without needing the reason argument.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: multi-provider model benchmark (boil the ocean)
Adds the full spec Codex asked for: real provider adapters with auth
detection, normalized RunResult, pricing tables, tool compatibility
maps, parallel execution with error isolation, and table/JSON/markdown
output. Judge stays on Anthropic SDK as the single stable source of
quality scoring, gated behind --judge.
Codex flagged the original plan as massively under-scoped — the
existing runner is Claude-only and the judge is Anthropic-only. You
can't benchmark GPT or Gemini without real provider infrastructure.
This commit ships it.
New architecture:
test/helpers/providers/types.ts ProviderAdapter interface
test/helpers/providers/claude.ts wraps `claude -p --output-format json`
test/helpers/providers/gpt.ts wraps `codex exec --json`
test/helpers/providers/gemini.ts wraps `gemini -p --output-format stream-json --yolo`
test/helpers/pricing.ts per-model USD cost tables (quarterly)
test/helpers/tool-map.ts which tools each CLI exposes
test/helpers/benchmark-runner.ts orchestrator (Promise.allSettled)
test/helpers/benchmark-judge.ts Anthropic SDK quality scorer
bin/gstack-model-benchmark CLI entry
test/benchmark-runner.test.ts 9 unit tests (cost math, formatters, tool-map)
Per-provider error isolation:
- auth → record reason, don't abort batch
- timeout → record reason, don't abort batch
- rate_limit → record reason, don't abort batch
- binary_missing → record in available() check, skip if --skip-unavailable
Pricing correction: cached input tokens are disjoint from uncached
input tokens (Anthropic/OpenAI report them separately). Original
math subtracted them, producing negative costs. Now adds cached at
the 10% discount alongside the full uncached input cost.
CLI:
gstack-model-benchmark --prompt "..." --models claude,gpt,gemini
gstack-model-benchmark ./prompt.txt --output json --judge
gstack-model-benchmark ./prompt.txt --models claude --timeout-ms 60000
Output formats: table (default), json, markdown. Each shows model,
latency, in→out tokens, cost, quality (when --judge used), tool calls,
and any errors.
Known limitations for v1:
- Claude adapter approximates toolCalls as num_turns (stream-json
would give exact counts; v2 can upgrade).
- Live E2E tests (test/providers.e2e.test.ts) not included — they
require CI secrets for all three providers. Unit tests cover the
shape and math.
- Provider CLIs sometimes return non-JSON error text to stdout; the
parsers fall back to treating raw output as plain text in that case.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: standalone methodology skill publishing via gstack-publish
Ships the marketplace-distribution half of Item 5 (reframed): publish
the existing standalone OpenClaw methodology skills to multiple
marketplaces with one command.
Codex review caught that the original plan assumed raw generated
multi-host skills could be published directly. They can't — those
depend on gstack binaries, generated host paths, tool names, and
telemetry. The correct artifact class is hand-crafted standalone
skills in openclaw/skills/gstack-openclaw-* (already exist and work
without gstack runtime). This commit adds the wrapper that publishes
them to ClawHub + SkillsMP + Vercel Skills.sh with per-marketplace
error isolation and dry-run validation.
Changes:
- skills.json: root manifest with 4 skills (office-hours, ceo-review,
investigate, retro) each pointing at its openclaw/skills source.
Each skill declares per-marketplace targets with a slug, a publish
flag, and a compatible-hosts list. Marketplace configs include CLI
name, login command, publish command template (with placeholder
substitution), docs URL, and auth_check command.
- bin/gstack-publish: new CLI. Subcommands:
gstack-publish Publish all skills
gstack-publish <slug> Publish one skill
gstack-publish --dry-run Validate + auth-check without publishing
gstack-publish --list List skills + marketplace targets
Features:
* Manifest validation (missing source files, missing slugs, empty
marketplace list all reported).
* Per-marketplace auth check before any publish attempt.
* Per-skill / per-marketplace error isolation: one failure doesn't
abort the batch.
* Idempotent — re-running with the same version is safe; markets
that reject duplicate versions report it as a failure for that
single target without affecting others.
* --dry-run walks the full pipeline but skips execSync; useful in
CI to validate manifest before bumping version.
Tested locally: clawhub auth detected, skillsmp/vercel CLIs not
installed (marked NOT READY and skipped cleanly in dry-run).
Follow-up work (tracked in TODOS.md later):
- Version-bump helper that reads openclaw/skills/*/SKILL.md frontmatter
and updates skills.json in lockstep.
- CI workflow that runs gstack-publish --dry-run on every PR and
gstack-publish on tags.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: split preamble.ts into submodules (byte-identical output)
Splits scripts/resolvers/preamble.ts (841 lines, 18 generator functions +
composition root) into one file per generator under
scripts/resolvers/preamble/. Root preamble.ts becomes a thin composition
layer (~80 lines of imports + generatePreamble).
Before:
scripts/resolvers/preamble.ts 841 lines
After:
scripts/resolvers/preamble.ts 83 lines
scripts/resolvers/preamble/generate-preamble-bash.ts 97 lines
scripts/resolvers/preamble/generate-upgrade-check.ts 48 lines
scripts/resolvers/preamble/generate-lake-intro.ts 16 lines
scripts/resolvers/preamble/generate-telemetry-prompt.ts 37 lines
scripts/resolvers/preamble/generate-proactive-prompt.ts 25 lines
scripts/resolvers/preamble/generate-routing-injection.ts 49 lines
scripts/resolvers/preamble/generate-vendoring-deprecation.ts 36 lines
scripts/resolvers/preamble/generate-spawned-session-check.ts 11 lines
scripts/resolvers/preamble/generate-ask-user-format.ts 16 lines
scripts/resolvers/preamble/generate-completeness-section.ts 19 lines
scripts/resolvers/preamble/generate-repo-mode-section.ts 12 lines
scripts/resolvers/preamble/generate-test-failure-triage.ts 108 lines
scripts/resolvers/preamble/generate-search-before-building.ts 14 lines
scripts/resolvers/preamble/generate-completion-status.ts 161 lines
scripts/resolvers/preamble/generate-voice-directive.ts 60 lines
scripts/resolvers/preamble/generate-context-recovery.ts 51 lines
scripts/resolvers/preamble/generate-continuous-checkpoint.ts 48 lines
scripts/resolvers/preamble/generate-context-health.ts 31 lines
Byte-identity verification (the real gate per Codex correction):
- Before refactor: snapshotted 135 generated SKILL.md files via
`find -name SKILL.md -type f | grep -v /gstack/` across all hosts.
- After refactor: regenerated with `bun run gen:skill-docs --host all`
and re-snapshotted.
- `diff -r baseline after` returned zero differences and exit 0.
The `--host all --dry-run` gate passes too. No template or host behavior
changes — purely a code-organization refactor.
Test fix: audit-compliance.test.ts's telemetry check previously grepped
preamble.ts directly for `_TEL != "off"`. After the refactor that logic
lives in preamble/generate-preamble-bash.ts. Test now concatenates all
preamble submodule sources before asserting — tracks the semantic contract,
not the file layout. Doing the minimum rewrite preserves the test's intent
(conditional telemetry) without coupling it to file boundaries.
Why now: we were in-session with full context. Codex had downgraded this
from mandatory to optional, but the preamble had grown to 841 lines and
was getting harder to navigate. User asked "why not?" given the context
was hot. Shipping it as a clean bisectable commit while all the prior
preamble.ts changes are fresh reduces rebase pain later.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: bump version and changelog (v0.19.0.0)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: trim verbose preamble + coverage audit prose
Compress without removing behavior or voice. Three targeted cuts:
1. scripts/resolvers/testing.ts coverage diagram example: 40 lines → 14
lines. Two-column ASCII layout instead of stacked sections.
Preserves all required regression-guard phrases (processPayment,
refundPayment, billing.test.ts, checkout.e2e.ts, COVERAGE, QUALITY,
GAPS, Code paths, User flows, ASCII coverage diagram).
2. scripts/resolvers/preamble/generate-completion-status.ts Plan Status
Footer: was 35 lines with embedded markdown table example, now 7
lines that describe the table inline. The footer fires only at
ExitPlanMode time — Claude can construct the placeholder table from
the inline description without copying a literal example.
3. Same file's Plan Mode Safe Operations + Skill Invocation During Plan
Mode sections compressed from ~25 lines combined to ~12. Preserves
all required test phrases (precedence over generic plan mode behavior,
Do not continue the workflow, cancel the skill or leave plan mode,
PLAN MODE EXCEPTION).
NOT touched:
- Voice directive (Garry's voice — protected per CLAUDE.md)
- Office-hours Phase 6 Handoff (Garry's voice + YC pitch)
- Test bootstrap, review army, plan completion (carefully tuned behavior)
Token savings (per skill, system-wide):
ship/SKILL.md 35474 → 34992 tokens (-482)
plan-ceo-review 29436 → 28940 (-496)
office-hours 26700 → 26204 (-496)
Still over the 25K ceiling. Bigger reduction requires restructure
(move large resolvers to externally-referenced docs, split /ship into
ship-quick + ship-full, or refactor the coverage audit + review army
into shorter prose). That's a follow-up — added to TODOS.
Tests: 420/420 pass on gen-skill-docs.test.ts + host-config.test.ts.
Goldens regenerated for claude/codex/factory ship.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(ci): install Node.js from official tarball instead of NodeSource apt setup
The CI Dockerfile's Node install was failing on ubicloud runners. NodeSource's
setup_22.x script runs two internal apt operations that both depend on
archive.ubuntu.com + security.ubuntu.com being reachable:
1. apt-get update (to refresh package lists)
2. apt-get install gnupg (as a prerequisite for its gpg keyring)
Ubicloud's CI runners frequently can't reach those mirrors — last build hit
~2min of connection timeouts to every security.ubuntu.com IP (185.125.190.82,
91.189.91.83, 91.189.92.24, etc.) plus archive.ubuntu.com mirrors. Compounding
this: on Ubuntu 24.04 (noble) "gnupg" was renamed to "gpg" and "gpgconf".
NodeSource's setup script still looks for "gnupg", so even when apt works,
it fails with "Package 'gnupg' has no installation candidate." The subsequent
apt-get install nodejs then fails because the NodeSource repo was never added.
Fix: drop NodeSource entirely. Download Node.js v22.20.0 from nodejs.org as a
tarball, extract to /usr/local. One host, no apt, no script, no keyring.
Before:
RUN curl -fsSL https://deb.nodesource.com/setup_22.x | bash - \
&& apt-get install -y --no-install-recommends nodejs ...
After:
ENV NODE_VERSION=22.20.0
RUN curl -fsSL "https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.xz" -o /tmp/node.tar.xz \
&& tar -xJ -C /usr/local --strip-components=1 --no-same-owner -f /tmp/node.tar.xz \
&& rm -f /tmp/node.tar.xz \
&& node --version && npm --version
Same installed path (/usr/local/bin/node and npm). Pinned version for
reproducibility. Version is bump-visible in the Dockerfile now.
Does not address the separate apt flakiness that affects the GitHub CLI
install (line 17) or `npx playwright install-deps chromium` (line 33) —
those use apt too. If those fail on a future build we can address then.
Failing job: build-image (71777913820)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: raise skill token ceiling warning from 25K to 40K
The 25K ceiling predated flagship models with 200K-1M windows and assumed
every skill prompt dominates context cost. Modern reality: prompt caching
amortizes the skill load across invocations, and three carefully-tuned
skills (ship, plan-ceo-review, office-hours) legitimately pack 25-35K
tokens of behavior that can't be cut without degrading quality or removing
protected content (Garry's voice, YC pitch, specialist review instructions).
We made the safe prose cuts earlier (coverage diagram, plan status footer,
plan mode operations). The remaining gap is structural — real compression
would require splitting /ship into ship-quick vs ship-full, externalizing
large resolvers to reference docs, or removing detailed skill behavior.
Each is 1-2 days of work. The cost of the warning firing is zero (it's
a warning, not an error). The cost of hitting it is ~15¢ per invocation
at worst, amortized further by prompt caching.
Raising to 40K catches what it's supposed to catch — a runaway 10K+ token
growth in a single release — without crying wolf on legitimately big
skills. Reference doc in CLAUDE.md updated to reflect the new philosophy:
when you hit 40K, ask WHAT grew, don't blindly compress tuned prose.
scripts/gen-skill-docs.ts: TOKEN_CEILING_BYTES 100_000 → 160_000.
CLAUDE.md: document the "watch for feature bloat, not force compression"
intent of the ceiling.
Verification: `bun run gen:skill-docs --host all` shows zero TOKEN
CEILING warnings under the new 40K threshold.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(ci): install xz-utils so Node tarball extraction works
The direct-tarball Node install (switched from NodeSource apt in the last
CI fix) failed with "xz: Cannot exec: No such file or directory" because
Ubuntu 24.04 base doesn't include xz-utils. Node ships .tar.xz by default,
and `tar -xJ` shells out to xz, which was missing.
Add xz-utils to the base apt install alongside git/curl/unzip/etc.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(benchmark): pass --skip-git-repo-check to codex adapter
The gpt provider adapter spawns `codex exec -C <workdir>` with arbitrary
working directories (benchmark temp dirs, non-git paths). Without
`--skip-git-repo-check`, codex refuses to run and returns "Not inside a
trusted directory" — surfaced as a generic error.code='unknown' that
looks like an API failure.
Benchmarks don't care about codex's git-repo trust model; we just want
the prompt executed. Surfaced by the new provider live E2E test on a
temp workdir.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(benchmark): add --dry-run flag to gstack-model-benchmark
Matches gstack-publish --dry-run semantics. Validates the provider list,
resolves per-adapter auth, echoes the resolved flag values, and exits
without invoking any provider CLI. Zero-cost pre-flight for CI pipelines
and for catching auth drift before starting a paid benchmark run.
Output shape:
== gstack-model-benchmark --dry-run ==
prompt: <truncated>
providers: claude, gpt, gemini
workdir: /tmp/...
timeout_ms: 300000
output: table
judge: off
Adapter availability:
claude: OK
gpt: NOT READY — <reason>
gemini: NOT READY — <reason>
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test: lite E2E coverage for benchmark, taste engine, publish
Fills real coverage gaps in v0.19.0.0 primitives. 44 new deterministic
tests (gate tier, ~3s) + 8 live-API tests (periodic tier).
New gate-tier test files (free, <3s total):
- test/taste-engine.test.ts — 24 tests against gstack-taste-update:
schema shape, Laplace-smoothed confidence, 5%/week decay clamped at 0,
multi-dimension extraction, case-insensitive matching, session cap,
legacy profile migration with session truncation, taste-drift conflict
warning, malformed-JSON recovery, missing-variant exit code.
- test/publish-dry-run.test.ts — 13 tests against gstack-publish --dry-run:
manifest parsing, missing/malformed JSON, per-skill validation errors
(missing source file / slug / version / marketplaces), slug filter,
unknown-skill exit, per-marketplace auth isolation (fake marketplaces
with always-pass / always-fail / missing-binary CLIs), and a sanity
check against the real repo manifest.
- test/benchmark-cli.test.ts — 11 tests against gstack-model-benchmark
--dry-run: provider default, unknown-provider WARN, empty list
fallback, flag passthrough (timeout/workdir/judge/output), long-prompt
truncation, prompt resolution (inline vs file vs positional), missing
prompt exit.
New periodic-tier test file (paid, gated EVALS=1):
- test/skill-e2e-benchmark-providers.test.ts — 8 tests hitting real
claude, codex, gemini CLIs with a trivial prompt (~$0.001/provider).
Verifies output parsing, token accounting, cost estimation, timeout
error.code semantics, Promise.allSettled parallel isolation.
Per-provider availability gate — unauthed providers skip cleanly.
This suite already caught one real bug (codex adapter missing
--skip-git-repo-check, fixed in 5260987d).
Registered `benchmark-providers-live` in touchfiles.ts (periodic tier,
triggered by changes to bin/gstack-model-benchmark, providers/**,
benchmark-runner.ts, pricing.ts).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(benchmark): dedupe providers in --models
`--models claude,claude,gpt` previously produced a list with a duplicate
entry, meaning the benchmark would run claude twice and bill for two
runs. Surfaced by /review on this branch.
Use a Set internally; return Array.from(seen) to preserve type + order
of first occurrence.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test: /review hardening — NOT-READY env isolation, workdir cleanup, perf
Applied from the adversarial subagent pass during /review on this branch:
- test/benchmark-cli.test.ts — new "NOT READY path fires when auth env
vars are stripped" test. The default dry-run test always showed OK on
dev machines with auth, hiding regressions in the remediation-hint
branch. Stripped env (no auth vars, HOME→empty tmpdir) now force-
exercises gpt + gemini NOT READY paths and asserts every NOT READY
line includes a concrete remediation hint (install/login/export).
(claude adapter's os.homedir() call is Bun-cached; the 2-of-3 adapter
coverage is sufficient to exercise the branch.)
- test/taste-engine.test.ts — session-cap test rewritten to seed the
profile with 50 entries + one real CLI call, instead of 55 sequential
subprocess spawns. Same coverage (FIFO eviction at the boundary), ~5s
faster CI time. Also pins first-casing-wins on the Geist/GEIST merge
assertion — bumpPref() keeps the first-arrival casing, so the test
documents that policy.
- test/skill-e2e-benchmark-providers.test.ts — workdir creation moved
from module-load into beforeAll, cleanup added in afterAll. Previous
shape leaked a /tmp/bench-e2e-* dir every CI run.
- test/publish-dry-run.test.ts — removed unused empty test/helpers
mkdirSync from the sandbox setup. The bin doesn't import from there,
so the empty dir was a footgun for future maintainers.
- test/helpers/providers/gpt.ts — expanded the inline comment on
`--skip-git-repo-check` to explicitly note that `-s read-only` is now
load-bearing safety (the trust prompt was the secondary boundary;
removing read-only while keeping skip-git-repo-check would be unsafe).
Net: 45 passing tests (was 44), session-cap test 5s faster, one real
regression surface covered that didn't exist before.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: surface v0.19 binaries and continuous checkpoint in README
The /review doc-staleness check flagged that v0.19.0.0 ships three new CLIs
(gstack-model-benchmark, gstack-publish, gstack-taste-update) and an opt-in
continuous checkpoint mode, none of which were visible in README's Power
tools section. New users couldn't find them without reading CHANGELOG.
Added:
- "New binaries (v0.19)" subsection with one-row descriptions for each CLI
- "Continuous checkpoint mode (opt-in, local by default)" subsection
explaining WIP auto-commit + [gstack-context] body + /ship squash +
/checkpoint resume
CHANGELOG entry already has good voice from /ship; no polish needed.
VERSION already at 0.19.0.0. Other docs (ARCHITECTURE/CONTRIBUTING/BROWSER)
don't reference this surface — scoped intentionally.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(ship): Step 19.5 — offer gstack-publish for methodology skill changes
Wires the orphaned gstack-publish binary into /ship. When a PR touches
any standalone methodology skill (openclaw/skills/gstack-*/SKILL.md) or
skills.json, /ship now runs gstack-publish --dry-run after PR creation
and asks the user if they want to actually publish.
Previously, the only way to discover gstack-publish was reading the
CHANGELOG or README. Most methodology skill updates landed on main
without ever being pushed to ClawHub / SkillsMP / Vercel Skills.sh,
defeating the whole point of having a marketplace publisher.
The check is conditional — for PRs that don't touch methodology skills
(the common case), this step is a silent no-op. Dry-run runs first so
the user sees the full list of what would publish and which marketplaces
are authed before committing.
Golden fixtures (claude/codex/factory) regenerated.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(benchmark-models): new skill wrapping gstack-model-benchmark
Wires the orphaned gstack-model-benchmark binary into a dedicated skill
so users can discover cross-model benchmarking via /benchmark-models or
voice triggers ("compare models", "which model is best").
Deliberately separate from /benchmark (page performance) because the
two surfaces test completely different things — confusing them would
muddy both.
Flow:
1. Pick a prompt (an existing SKILL.md file, inline text, or file path)
2. Confirm providers (dry-run shows auth status per provider)
3. Decide on --judge (adds ~$0.05, scores output quality 0-10)
4. Run the benchmark — table output
5. Interpret results (fastest / cheapest / highest quality)
6. Offer to save to ~/.gstack/benchmarks/<date>.json for trend tracking
Uses gstack-model-benchmark --dry-run as a safety gate — auth status is
visible BEFORE the user spends API calls. If zero providers are authed,
the skill stops cleanly rather than attempting a run that produces no
useful output.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: v1.3.0.0 — complete CHANGELOG + bump for post-1.2 scope additions
VERSION 1.2.0.0 → 1.3.0.0. The original 1.2 entry was written before I
added substantial new scope: the /benchmark-models skill, /ship Step 19.5
gstack-publish integration, --dry-run on gstack-model-benchmark, and the
lite E2E test coverage (4 new test files). A minor bump gives those
changes their own version line instead of silently folding them into
1.2's scope.
CHANGELOG additions under 1.3.0.0:
- /benchmark-models skill (new Added)
- /ship Step 19.5 publish check (new Added)
- gstack-model-benchmark --dry-run (new Added)
- Token ceiling 25K → 40K (moved to Changed)
- New Fixed section — codex adapter --skip-git-repo-check, --models
dedupe, CI Dockerfile xz-utils + nodejs.org tarball
- 4 new test files documented under contributors (taste-engine,
publish-dry-run, benchmark-cli, skill-e2e-benchmark-providers)
- Ship golden fixtures for claude/codex/factory hosts
Pre-existing 1.2 content preserved verbatim — no entries clobbered or
reordered. Sequence remains contiguous (1.3.0.0 → 1.1.3.0 → 1.1.2.0 →
1.1.1.0 → 1.1.0.0 → 1.0.0.0 → 0.19.0.0 → ...).
package.json and VERSION both at 1.3.0.0. No drift.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: adopt gbrain's release-summary CHANGELOG format + apply to v1.3
Ported the "release-summary format" rules from ~/git/gbrain/CLAUDE.md
(lines 291-354) into gstack's CLAUDE.md under the existing
"CHANGELOG + VERSION style" section. Every future `## [X.Y.Z]` entry
now needs a verdict-style release summary at the top:
1. Two-line bold headline (10-14 words)
2. Lead paragraph (3-5 sentences)
3. "Numbers that matter" with BEFORE / AFTER / Δ table
4. "What this means for [audience]" closer
5. `### Itemized changes` header
6. Existing itemized subsections below
Rewrote v1.3.0.0 entry to match. Preserved every existing bullet in
Added / Changed / Fixed / For contributors (no content clobbered per
the CLAUDE.md CHANGELOG rule).
Numbers in the v1.3 release summary are verifiable — every row of the
BEFORE / AFTER table has a reproducible command listed in the setup
paragraph (git log, bun test, grep for wiring status). No made-up
metrics.
Also added the gbrain "always credit community contributions" rule to
the itemized-changes section. `Contributed by @username` for every
community PR that lands in a CHANGELOG entry.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: remove gstack-publish — no real user need
User feedback: "i don't think i would use gstack-publish, i think we
should remove it." Agreed. The CLI + marketplace wiring was an
ambitious but speculative primitive. Zero users, zero validated demand,
and the existing manual `clawhub publish` workflow already covers the
real case (OpenClaw methodology skill publishing).
Deleted:
- bin/gstack-publish (the CLI)
- skills.json (the marketplace manifest)
- test/publish-dry-run.test.ts (13 tests)
- ship/SKILL.md.tmpl Step 19.5 — the methodology-skill publish-on-ship
check. No target to dispatch to anymore.
- README.md Power tools row for gstack-publish
Updated:
- bin/gstack-model-benchmark doc comment: dropped "matches gstack-publish
--dry-run semantics" reference (self-describing flag now)
- CHANGELOG 1.3.0.0 entry:
* Release summary: "three new binaries" → "two new binaries".
Dropped the /ship publish-check narrative.
* Numbers table: "1 of 3 → 3 of 3 wired" → "1 of 2 → 2 of 2 wired".
Deterministic test count: 45 → 32 (removed publish-dry-run's 13).
* Added section: removed gstack-publish CLI bullet + /ship Step 19.5
bullet.
* "What this means for users" closer: replaced the /ship publish
paragraph with the design-taste-engine learning loop, which IS
real, wired, and something users hit every week via /design-shotgun.
* Contributors section: "Four new test files" → "Three new test files"
Retained:
- openclaw/skills/gstack-openclaw-* skill dirs (pre-existed this PR,
still publishable manually via `clawhub publish`, useful standalone
for ClawHub installs)
- CLAUDE.md publishing-native-skills section (same rationale)
Regenerated SKILL.md across all hosts. Ship golden fixtures refreshed
for claude/codex/factory. 455 tests pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs(CHANGELOG): reorder v1.3 entry around day-to-day user wins
Previous entry led with internal metrics (CLIs wired to skills, preamble
line count, adapter bugs caught in CI). Useful to contributors, invisible
to users. Rewrote the release summary and Added section to lead with
what a day-to-day gstack user actually experiences.
Release summary changes:
- Headline: "Every new CLI wired to a slash command" → "Your design
skills learn your taste. Your session state survives a laptop close."
- Lead paragraph: shifted from "primitives discoverable from /commands"
to concrete day-to-day wins (design-shotgun taste memory, design-
consultation anti-slop gates, continuous checkpoint survival).
- Numbers table: swapped internal metrics (CLI wiring %, test counts,
preamble line count) for user-visible ones:
- Design-variant convergence gate (0 → 3 axes required)
- AI-slop font blacklist (~8 → 10+ fonts)
- Taste memory across sessions (none → per-project JSON with decay)
- Session state after crash (lost → auto-WIP with structured body)
- /context-restore sources (markdown only → + WIP commits)
- Models with behavioral overlays (1 → 5)
- "Most striking" interpretation: reframed around the mid-session
crash survival story instead of the codex adapter bug catch.
- "What this means" closer: reframed around /design-shotgun + /design-
consultation + continuous checkpoint workflow instead of
/benchmark-models.
Added section — reorganized into six subsections by user value:
1. Design skills that stop looking like AI
(anti-slop constraints, taste engine)
2. Session state that survives a crash
(continuous checkpoint, /context-restore WIP reading,
/ship non-destructive squash)
3. Quality-of-life
(feature discovery prompt, context health soft directive)
4. Cross-host support
(--model flag + 5 overlays)
5. Config
(gstack-config list/defaults, checkpoint_mode/push keys)
6. Power-user / internal
(gstack-model-benchmark + /benchmark-models skill — grouped and
pushed to the bottom since it's more of a research tool than a
daily workflow piece)
Changed / Fixed / For contributors sections unchanged. No content
clobbered per CLAUDE.md CHANGELOG rules — every existing bullet is
preserved, just reordered and grouped.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs(CHANGELOG): reframe v1.3 entry around transparency vs laptop-close
User feedback: "'closing your laptop' in the changelog is overstated, i
mean claude code does already have session management. i think the use
of the context save restore is mainly just another tool that is more in
your control instead of opaque and a part of CC." Correct. CC handles
session persistence on its own; continuous checkpoint isn't filling a
gap there, it's giving users a parallel, inspectable, portable track.
Reframed every place the old copy overstated:
- Headline: "Your session state survives a laptop close" → "Your
session state lives in git, not a black box."
- Lead paragraph: dropped the "closing your laptop mid-refactor doesn't
vaporize your decisions" line. Now frames continuous checkpoint as
explicitly running alongside CC's built-in session management, not
replacing it. Emphasizes grep-ability, portability across tools and
branches.
- Numbers table row: "Session state after mid-refactor crash: lost
since last manual commit → auto-WIP commits" → "Session state
format: Claude Code's opaque session store → git commits +
[gstack-context] bodies + markdown (parallel track)". Honest about
what's actually changing.
- "Most striking" interpretation: replaced the "used to cost you every
decision" framing with the real user value — session state stops
being a black box, `git log --grep "WIP:"` shows the whole thread,
any tool reading git can see it.
- "What this means" closer: replaced "survives crashes, context
switches, and forgotten laptops" with accurate framing — parallel
track alongside CC's own, inspectable, portable, useful when you
want to review or hand off work.
- Added section: "Session state that survives a crash" subsection
renamed to "Session state you can see, grep, and move". Lead bullet
now explicitly notes continuous checkpoint runs alongside CC session
management, not instead.
No content clobbered. All other bullets and sections unchanged.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs(CHANGELOG): correct session-state location — home dir by default, git only on opt-in
User correction: "wait is our session management really checked into
git? i don't think that's right, isn't it just saved in your home
dir?" Right. I had the location wrong. The default session-save
mechanism (`/context-save` + `/context-restore`) writes markdown
files to `~/.gstack/projects/$SLUG/checkpoints/` — HOME, not git.
Continuous checkpoint mode (opt-in) is what writes git commits.
Previous copy conflated the two and implied "lives in git" as the
default state, which is wrong.
Every affected location updated:
- Headline: "lives in git, not a black box" → "becomes files you
can grep, not a black box." Removes the false implication that
session state lands in git by default.
- Lead paragraph: now explicitly names the two separate mechanisms.
`/context-save` writes plaintext markdown to `~/.gstack/projects/
$SLUG/checkpoints/` (the default). Continuous checkpoint mode
(opt-in) additionally drops WIP: commits into the git log.
- Numbers table row: "Session state format" now reads "markdown in
`~/.gstack/` by default, plus WIP: git commits if you opt into
continuous mode (parallel track)." Tells the truth about which
path is default vs opt-in.
- "Most striking" row interpretation: now names both paths. Default
path = markdown files in home dir. Opt-in continuous mode = WIP:
commits in project git log. Either way, plain text the user owns.
- "What this means" closer: similarly names both paths explicitly.
"markdown files in your home directory by default, plus git
commits if you opt into continuous mode."
- Continuous checkpoint mode Added bullet: clarifies the commits
land in "your project's git log" (not implied to be the default),
and notes it runs alongside BOTH Claude Code's built-in session
management AND the default `/context-save` markdown flow.
No other bullets or sections touched. No content clobbered.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
481 lines
25 KiB
Cheetah
481 lines
25 KiB
Cheetah
---
|
|
name: design-consultation
|
|
preamble-tier: 3
|
|
version: 1.0.0
|
|
description: |
|
|
Design consultation: understands your product, researches the landscape, proposes a
|
|
complete design system (aesthetic, typography, color, layout, spacing, motion), and
|
|
generates font+color preview pages. Creates DESIGN.md as your project's design source
|
|
of truth. For existing sites, use /plan-design-review to infer the system instead.
|
|
Use when asked to "design system", "brand guidelines", or "create DESIGN.md".
|
|
Proactively suggest when starting a new project's UI with no existing
|
|
design system or DESIGN.md. (gstack)
|
|
allowed-tools:
|
|
- Bash
|
|
- Read
|
|
- Write
|
|
- Edit
|
|
- Glob
|
|
- Grep
|
|
- AskUserQuestion
|
|
- WebSearch
|
|
triggers:
|
|
- design system
|
|
- create a brand
|
|
- design from scratch
|
|
---
|
|
|
|
{{PREAMBLE}}
|
|
|
|
# /design-consultation: Your Design System, Built Together
|
|
|
|
You are a senior product designer with strong opinions about typography, color, and visual systems. You don't present menus — you listen, think, research, and propose. You're opinionated but not dogmatic. You explain your reasoning and welcome pushback.
|
|
|
|
**Your posture:** Design consultant, not form wizard. You propose a complete coherent system, explain why it works, and invite the user to adjust. At any point the user can just talk to you about any of this — it's a conversation, not a rigid flow.
|
|
|
|
---
|
|
|
|
## Phase 0: Pre-checks
|
|
|
|
**Check for existing DESIGN.md:**
|
|
|
|
```bash
|
|
ls DESIGN.md design-system.md 2>/dev/null || echo "NO_DESIGN_FILE"
|
|
```
|
|
|
|
- If a DESIGN.md exists: Read it. Ask the user: "You already have a design system. Want to **update** it, **start fresh**, or **cancel**?"
|
|
- If no DESIGN.md: continue.
|
|
|
|
**Gather product context from the codebase:**
|
|
|
|
```bash
|
|
cat README.md 2>/dev/null | head -50
|
|
cat package.json 2>/dev/null | head -20
|
|
ls src/ app/ pages/ components/ 2>/dev/null | head -30
|
|
```
|
|
|
|
Look for office-hours output:
|
|
|
|
```bash
|
|
setopt +o nomatch 2>/dev/null || true # zsh compat
|
|
{{SLUG_EVAL}}
|
|
ls ~/.gstack/projects/$SLUG/*office-hours* 2>/dev/null | head -5
|
|
ls .context/*office-hours* .context/attachments/*office-hours* 2>/dev/null | head -5
|
|
```
|
|
|
|
If office-hours output exists, read it — the product context is pre-filled.
|
|
|
|
If the codebase is empty and purpose is unclear, say: *"I don't have a clear picture of what you're building yet. Want to explore first with `/office-hours`? Once we know the product direction, we can set up the design system."*
|
|
|
|
**Find the browse binary (optional — enables visual competitive research):**
|
|
|
|
{{BROWSE_SETUP}}
|
|
|
|
If browse is not available, that's fine — visual research is optional. The skill works without it using WebSearch and your built-in design knowledge.
|
|
|
|
**Find the gstack designer (optional — enables AI mockup generation):**
|
|
|
|
{{DESIGN_SETUP}}
|
|
|
|
If `DESIGN_READY`: Phase 5 will generate AI mockups of your proposed design system applied to real screens, instead of just an HTML preview page. Much more powerful — the user sees what their product could actually look like.
|
|
|
|
If `DESIGN_NOT_AVAILABLE`: Phase 5 falls back to the HTML preview page (still good).
|
|
|
|
---
|
|
|
|
{{GBRAIN_CONTEXT_LOAD}}
|
|
|
|
{{LEARNINGS_SEARCH}}
|
|
|
|
## Phase 1: Product Context
|
|
|
|
Ask the user a single question that covers everything you need to know. Pre-fill what you can infer from the codebase.
|
|
|
|
**AskUserQuestion Q1 — include ALL of these:**
|
|
1. Confirm what the product is, who it's for, what space/industry
|
|
2. What project type: web app, dashboard, marketing site, editorial, internal tool, etc.
|
|
3. "Want me to research what top products in your space are doing for design, or should I work from my design knowledge?"
|
|
4. **Explicitly say:** "At any point you can just drop into chat and we'll talk through anything — this isn't a rigid form, it's a conversation."
|
|
|
|
If the README or office-hours output gives you enough context, pre-fill and confirm: *"From what I can see, this is [X] for [Y] in the [Z] space. Sound right? And would you like me to research what's out there in this space, or should I work from what I know?"*
|
|
|
|
**Memorable-thing forcing question.** Before moving on, ask the user: *"What's the one
|
|
thing you want someone to remember after they see this product for the first time?"*
|
|
|
|
One sentence answer. Could be a feeling ("this is serious software for serious work"),
|
|
a visual ("the blue that's almost black"), a claim ("faster than anything else"), or
|
|
a posture ("for builders, not managers"). Write it down. Every subsequent design
|
|
decision should serve this memorable thing. Design that tries to be memorable for
|
|
everything is memorable for nothing.
|
|
|
|
### Taste profile (if this user has prior sessions)
|
|
|
|
{{TASTE_PROFILE}}
|
|
|
|
If a taste profile exists for this project, factor it into your Phase 3 proposal.
|
|
The profile reflects what the user has actually approved in prior sessions — treat
|
|
it as a demonstrated preference, not a constraint. You may still deliberately
|
|
depart from it if the product direction demands something different; when you do,
|
|
say so explicitly and connect the departure to the memorable-thing answer above.
|
|
|
|
---
|
|
|
|
## Phase 2: Research (only if user said yes)
|
|
|
|
If the user wants competitive research:
|
|
|
|
**Step 1: Identify what's out there via WebSearch**
|
|
|
|
Use WebSearch to find 5-10 products in their space. Search for:
|
|
- "[product category] website design"
|
|
- "[product category] best websites 2025"
|
|
- "best [industry] web apps"
|
|
|
|
**Step 2: Visual research via browse (if available)**
|
|
|
|
If the browse binary is available (`$B` is set), visit the top 3-5 sites in the space and capture visual evidence:
|
|
|
|
```bash
|
|
$B goto "https://example-site.com"
|
|
$B screenshot "/tmp/design-research-site-name.png"
|
|
$B snapshot
|
|
```
|
|
|
|
For each site, analyze: fonts actually used, color palette, layout approach, spacing density, aesthetic direction. The screenshot gives you the feel; the snapshot gives you structural data.
|
|
|
|
If a site blocks the headless browser or requires login, skip it and note why.
|
|
|
|
If browse is not available, rely on WebSearch results and your built-in design knowledge — this is fine.
|
|
|
|
**Step 3: Synthesize findings**
|
|
|
|
**Three-layer synthesis:**
|
|
- **Layer 1 (tried and true):** What design patterns does every product in this category share? These are table stakes — users expect them.
|
|
- **Layer 2 (new and popular):** What are the search results and current design discourse saying? What's trending? What new patterns are emerging?
|
|
- **Layer 3 (first principles):** Given what we know about THIS product's users and positioning — is there a reason the conventional design approach is wrong? Where should we deliberately break from the category norms?
|
|
|
|
**Eureka check:** If Layer 3 reasoning reveals a genuine design insight — a reason the category's visual language fails THIS product — name it: "EUREKA: Every [category] product does X because they assume [assumption]. But this product's users [evidence] — so we should do Y instead." Log the eureka moment (see preamble).
|
|
|
|
Summarize conversationally:
|
|
> "I looked at what's out there. Here's the landscape: they converge on [patterns]. Most of them feel [observation — e.g., interchangeable, polished but generic, etc.]. The opportunity to stand out is [gap]. Here's where I'd play it safe and where I'd take a risk..."
|
|
|
|
**Graceful degradation:**
|
|
- Browse available → screenshots + snapshots + WebSearch (richest research)
|
|
- Browse unavailable → WebSearch only (still good)
|
|
- WebSearch also unavailable → agent's built-in design knowledge (always works)
|
|
|
|
If the user said no research, skip entirely and proceed to Phase 3 using your built-in design knowledge.
|
|
|
|
---
|
|
|
|
{{DESIGN_OUTSIDE_VOICES}}
|
|
|
|
## Phase 3: The Complete Proposal
|
|
|
|
This is the soul of the skill. Propose EVERYTHING as one coherent package.
|
|
|
|
**AskUserQuestion Q2 — present the full proposal with SAFE/RISK breakdown:**
|
|
|
|
```
|
|
Based on [product context] and [research findings / my design knowledge]:
|
|
|
|
AESTHETIC: [direction] — [one-line rationale]
|
|
DECORATION: [level] — [why this pairs with the aesthetic]
|
|
LAYOUT: [approach] — [why this fits the product type]
|
|
COLOR: [approach] + proposed palette (hex values) — [rationale]
|
|
TYPOGRAPHY: [3 font recommendations with roles] — [why these fonts]
|
|
SPACING: [base unit + density] — [rationale]
|
|
MOTION: [approach] — [rationale]
|
|
|
|
This system is coherent because [explain how choices reinforce each other].
|
|
|
|
SAFE CHOICES (category baseline — your users expect these):
|
|
- [2-3 decisions that match category conventions, with rationale for playing safe]
|
|
|
|
RISKS (where your product gets its own face):
|
|
- [2-3 deliberate departures from convention]
|
|
- For each risk: what it is, why it works, what you gain, what it costs
|
|
|
|
The safe choices keep you literate in your category. The risks are where
|
|
your product becomes memorable. Which risks appeal to you? Want to see
|
|
different ones? Or adjust anything else?
|
|
```
|
|
|
|
The SAFE/RISK breakdown is critical. Design coherence is table stakes — every product in a category can be coherent and still look identical. The real question is: where do you take creative risks? The agent should always propose at least 2 risks, each with a clear rationale for why the risk is worth taking and what the user gives up. Risks might include: an unexpected typeface for the category, a bold accent color nobody else uses, tighter or looser spacing than the norm, a layout approach that breaks from convention, motion choices that add personality.
|
|
|
|
**Options:** A) Looks great — generate the preview page. B) I want to adjust [section]. C) I want different risks — show me wilder options. D) Start over with a different direction. E) Skip the preview, just write DESIGN.md.
|
|
|
|
### Your Design Knowledge (use to inform proposals — do NOT display as tables)
|
|
|
|
**Aesthetic directions** (pick the one that fits the product):
|
|
- Brutally Minimal — Type and whitespace only. No decoration. Modernist.
|
|
- Maximalist Chaos — Dense, layered, pattern-heavy. Y2K meets contemporary.
|
|
- Retro-Futuristic — Vintage tech nostalgia. CRT glow, pixel grids, warm monospace.
|
|
- Luxury/Refined — Serifs, high contrast, generous whitespace, precious metals.
|
|
- Playful/Toy-like — Rounded, bouncy, bold primaries. Approachable and fun.
|
|
- Editorial/Magazine — Strong typographic hierarchy, asymmetric grids, pull quotes.
|
|
- Brutalist/Raw — Exposed structure, system fonts, visible grid, no polish.
|
|
- Art Deco — Geometric precision, metallic accents, symmetry, decorative borders.
|
|
- Organic/Natural — Earth tones, rounded forms, hand-drawn texture, grain.
|
|
- Industrial/Utilitarian — Function-first, data-dense, monospace accents, muted palette.
|
|
|
|
**Decoration levels:** minimal (typography does all the work) / intentional (subtle texture, grain, or background treatment) / expressive (full creative direction, layered depth, patterns)
|
|
|
|
**Layout approaches:** grid-disciplined (strict columns, predictable alignment) / creative-editorial (asymmetry, overlap, grid-breaking) / hybrid (grid for app, creative for marketing)
|
|
|
|
**Color approaches:** restrained (1 accent + neutrals, color is rare and meaningful) / balanced (primary + secondary, semantic colors for hierarchy) / expressive (color as a primary design tool, bold palettes)
|
|
|
|
**Motion approaches:** minimal-functional (only transitions that aid comprehension) / intentional (subtle entrance animations, meaningful state transitions) / expressive (full choreography, scroll-driven, playful)
|
|
|
|
**Font recommendations by purpose:**
|
|
- Display/Hero: Satoshi, General Sans, Instrument Serif, Fraunces, Clash Grotesk, Cabinet Grotesk
|
|
- Body: Instrument Sans, DM Sans, Source Sans 3, Geist, Plus Jakarta Sans, Outfit
|
|
- Data/Tables: Geist (tabular-nums), DM Sans (tabular-nums), JetBrains Mono, IBM Plex Mono
|
|
- Code: JetBrains Mono, Fira Code, Berkeley Mono, Geist Mono
|
|
|
|
**Font blacklist** (never recommend):
|
|
Papyrus, Comic Sans, Lobster, Impact, Jokerman, Bleeding Cowboys, Permanent Marker, Bradley Hand, Brush Script, Hobo, Trajan, Raleway, Clash Display, Courier New (for body)
|
|
|
|
**Overused fonts** (never recommend as primary — use only if user specifically requests):
|
|
Inter, Roboto, Arial, Helvetica, Open Sans, Lato, Montserrat, Poppins, Space Grotesk.
|
|
|
|
Space Grotesk is on the list specifically because every AI design tool converges on it
|
|
as "the safe alternative to Inter." That's the convergence trap. Treat it the same as
|
|
Inter: only use if the user asks for it by name.
|
|
|
|
**Anti-convergence directive:** Across multiple generations in the same project, VARY
|
|
light/dark, fonts, and aesthetic directions. Never propose the same choices twice
|
|
without explicit justification. If the user's prior session used Geist + dark + editorial,
|
|
propose something different this time (or explicitly acknowledge you're doubling down
|
|
because it fits the brief). Convergence across generations is slop.
|
|
|
|
**AI slop anti-patterns** (never include in your recommendations):
|
|
- Purple/violet gradients as default accent
|
|
- 3-column feature grid with icons in colored circles
|
|
- Centered everything with uniform spacing
|
|
- Uniform bubbly border-radius on all elements
|
|
- Gradient buttons as the primary CTA pattern
|
|
- Generic stock-photo-style hero sections
|
|
- system-ui / -apple-system as the primary display or body font (the "I gave up on typography" signal)
|
|
- "Built for X" / "Designed for Y" marketing copy patterns
|
|
|
|
### Coherence Validation
|
|
|
|
When the user overrides one section, check if the rest still coheres. Flag mismatches with a gentle nudge — never block:
|
|
|
|
- Brutalist/Minimal aesthetic + expressive motion → "Heads up: brutalist aesthetics usually pair with minimal motion. Your combo is unusual — which is fine if intentional. Want me to suggest motion that fits, or keep it?"
|
|
- Expressive color + restrained decoration → "Bold palette with minimal decoration can work, but the colors will carry a lot of weight. Want me to suggest decoration that supports the palette?"
|
|
- Creative-editorial layout + data-heavy product → "Editorial layouts are gorgeous but can fight data density. Want me to show how a hybrid approach keeps both?"
|
|
- Always accept the user's final choice. Never refuse to proceed.
|
|
|
|
---
|
|
|
|
## Phase 4: Drill-downs (only if user requests adjustments)
|
|
|
|
When the user wants to change a specific section, go deep on that section:
|
|
|
|
- **Fonts:** Present 3-5 specific candidates with rationale, explain what each evokes, offer the preview page
|
|
- **Colors:** Present 2-3 palette options with hex values, explain the color theory reasoning
|
|
- **Aesthetic:** Walk through which directions fit their product and why
|
|
- **Layout/Spacing/Motion:** Present the approaches with concrete tradeoffs for their product type
|
|
|
|
Each drill-down is one focused AskUserQuestion. After the user decides, re-check coherence with the rest of the system.
|
|
|
|
---
|
|
|
|
## Phase 5: Design System Preview (default ON)
|
|
|
|
This phase generates visual previews of the proposed design system. Two paths depending on whether the gstack designer is available.
|
|
|
|
### Path A: AI Mockups (if DESIGN_READY)
|
|
|
|
Generate AI-rendered mockups showing the proposed design system applied to realistic screens for this product. This is far more powerful than an HTML preview — the user sees what their product could actually look like.
|
|
|
|
```bash
|
|
eval "$(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)"
|
|
_DESIGN_DIR="$HOME/.gstack/projects/$SLUG/designs/design-system-$(date +%Y%m%d)"
|
|
mkdir -p "$_DESIGN_DIR"
|
|
echo "DESIGN_DIR: $_DESIGN_DIR"
|
|
```
|
|
|
|
Construct a design brief from the Phase 3 proposal (aesthetic, colors, typography, spacing, layout) and the product context from Phase 1:
|
|
|
|
```bash
|
|
$D variants --brief "<product name: [name]. Product type: [type]. Aesthetic: [direction]. Colors: primary [hex], secondary [hex], neutrals [range]. Typography: display [font], body [font]. Layout: [approach]. Show a realistic [page type] screen with [specific content for this product].>" --count 3 --output-dir "$_DESIGN_DIR/"
|
|
```
|
|
|
|
Run quality check on each variant:
|
|
|
|
```bash
|
|
$D check --image "$_DESIGN_DIR/variant-A.png" --brief "<the original brief>"
|
|
```
|
|
|
|
Show each variant inline (Read tool on each PNG) for instant preview.
|
|
|
|
**Before presenting to the user, self-gate:** For each variant, ask yourself: *"Would
|
|
a human designer be embarrassed to put their name on this?"* If yes, discard the
|
|
variant and regenerate. This is a hard gate. A mediocre AI mockup is worse than no
|
|
mockup. Embarrassment triggers include: purple gradient hero, 3-column SaaS grid,
|
|
centered-everything, Inter body text, generic stock-photo vibe, system-ui font,
|
|
gradient CTA button, bubble-radius everything. Any of those = reject and regenerate.
|
|
|
|
Tell the user: "I've generated 3 visual directions applying your design system to a realistic [product type] screen. Pick your favorite in the comparison board that just opened in your browser. You can also remix elements across variants."
|
|
|
|
{{DESIGN_SHOTGUN_LOOP}}
|
|
|
|
After the user picks a direction:
|
|
|
|
- Use `$D extract --image "$_DESIGN_DIR/variant-<CHOSEN>.png"` to analyze the approved mockup and extract design tokens (colors, typography, spacing) that will populate DESIGN.md in Phase 6. This grounds the design system in what was actually approved visually, not just what was described in text.
|
|
- If the user wants to iterate further: `$D iterate --feedback "<user's feedback>" --output "$_DESIGN_DIR/refined.png"`
|
|
|
|
**Plan mode vs. implementation mode:**
|
|
- **If in plan mode:** Add the approved mockup path (the full `$_DESIGN_DIR` path) and extracted tokens to the plan file under an "## Approved Design Direction" section. The design system gets written to DESIGN.md when the plan is implemented.
|
|
- **If NOT in plan mode:** Proceed directly to Phase 6 and write DESIGN.md with the extracted tokens.
|
|
|
|
### Path B: HTML Preview Page (fallback if DESIGN_NOT_AVAILABLE)
|
|
|
|
Generate a polished HTML preview page and open it in the user's browser. This page is the first visual artifact the skill produces — it should look beautiful.
|
|
|
|
```bash
|
|
PREVIEW_FILE="/tmp/design-consultation-preview-$(date +%s).html"
|
|
```
|
|
|
|
Write the preview HTML to `$PREVIEW_FILE`, then open it:
|
|
|
|
```bash
|
|
open "$PREVIEW_FILE"
|
|
```
|
|
|
|
### Preview Page Requirements (Path B only)
|
|
|
|
The agent writes a **single, self-contained HTML file** (no framework dependencies) that:
|
|
|
|
1. **Loads proposed fonts** from Google Fonts (or Bunny Fonts) via `<link>` tags
|
|
2. **Uses the proposed color palette** throughout — dogfood the design system
|
|
3. **Shows the product name** (not "Lorem Ipsum") as the hero heading
|
|
4. **Font specimen section:**
|
|
- Each font candidate shown in its proposed role (hero heading, body paragraph, button label, data table row)
|
|
- Side-by-side comparison if multiple candidates for one role
|
|
- Real content that matches the product (e.g., civic tech → government data examples)
|
|
5. **Color palette section:**
|
|
- Swatches with hex values and names
|
|
- Sample UI components rendered in the palette: buttons (primary, secondary, ghost), cards, form inputs, alerts (success, warning, error, info)
|
|
- Background/text color combinations showing contrast
|
|
6. **Realistic product mockups** — this is what makes the preview page powerful. Based on the project type from Phase 1, render 2-3 realistic page layouts using the full design system:
|
|
- **Dashboard / web app:** sample data table with metrics, sidebar nav, header with user avatar, stat cards
|
|
- **Marketing site:** hero section with real copy, feature highlights, testimonial block, CTA
|
|
- **Settings / admin:** form with labeled inputs, toggle switches, dropdowns, save button
|
|
- **Auth / onboarding:** login form with social buttons, branding, input validation states
|
|
- Use the product name, realistic content for the domain, and the proposed spacing/layout/border-radius. The user should see their product (roughly) before writing any code.
|
|
7. **Light/dark mode toggle** using CSS custom properties and a JS toggle button
|
|
8. **Clean, professional layout** — the preview page IS a taste signal for the skill
|
|
9. **Responsive** — looks good on any screen width
|
|
|
|
The page should make the user think "oh nice, they thought of this." It's selling the design system by showing what the product could feel like, not just listing hex codes and font names.
|
|
|
|
If `open` fails (headless environment), tell the user: *"I wrote the preview to [path] — open it in your browser to see the fonts and colors rendered."*
|
|
|
|
If the user says skip the preview, go directly to Phase 6.
|
|
|
|
---
|
|
|
|
## Phase 6: Write DESIGN.md & Confirm
|
|
|
|
If `$D extract` was used in Phase 5 (Path A), use the extracted tokens as the primary source for DESIGN.md values — colors, typography, and spacing grounded in the approved mockup rather than text descriptions alone. Merge extracted tokens with the Phase 3 proposal (the proposal provides rationale and context; the extraction provides exact values).
|
|
|
|
**If in plan mode:** Write the DESIGN.md content into the plan file as a "## Proposed DESIGN.md" section. Do NOT write the actual file — that happens at implementation time.
|
|
|
|
**If NOT in plan mode:** Write `DESIGN.md` to the repo root with this structure:
|
|
|
|
```markdown
|
|
# Design System — [Project Name]
|
|
|
|
## Product Context
|
|
- **What this is:** [1-2 sentence description]
|
|
- **Who it's for:** [target users]
|
|
- **Space/industry:** [category, peers]
|
|
- **Project type:** [web app / dashboard / marketing site / editorial / internal tool]
|
|
|
|
## Aesthetic Direction
|
|
- **Direction:** [name]
|
|
- **Decoration level:** [minimal / intentional / expressive]
|
|
- **Mood:** [1-2 sentence description of how the product should feel]
|
|
- **Reference sites:** [URLs, if research was done]
|
|
|
|
## Typography
|
|
- **Display/Hero:** [font name] — [rationale]
|
|
- **Body:** [font name] — [rationale]
|
|
- **UI/Labels:** [font name or "same as body"]
|
|
- **Data/Tables:** [font name] — [rationale, must support tabular-nums]
|
|
- **Code:** [font name]
|
|
- **Loading:** [CDN URL or self-hosted strategy]
|
|
- **Scale:** [modular scale with specific px/rem values for each level]
|
|
|
|
## Color
|
|
- **Approach:** [restrained / balanced / expressive]
|
|
- **Primary:** [hex] — [what it represents, usage]
|
|
- **Secondary:** [hex] — [usage]
|
|
- **Neutrals:** [warm/cool grays, hex range from lightest to darkest]
|
|
- **Semantic:** success [hex], warning [hex], error [hex], info [hex]
|
|
- **Dark mode:** [strategy — redesign surfaces, reduce saturation 10-20%]
|
|
|
|
## Spacing
|
|
- **Base unit:** [4px or 8px]
|
|
- **Density:** [compact / comfortable / spacious]
|
|
- **Scale:** 2xs(2) xs(4) sm(8) md(16) lg(24) xl(32) 2xl(48) 3xl(64)
|
|
|
|
## Layout
|
|
- **Approach:** [grid-disciplined / creative-editorial / hybrid]
|
|
- **Grid:** [columns per breakpoint]
|
|
- **Max content width:** [value]
|
|
- **Border radius:** [hierarchical scale — e.g., sm:4px, md:8px, lg:12px, full:9999px]
|
|
|
|
## Motion
|
|
- **Approach:** [minimal-functional / intentional / expressive]
|
|
- **Easing:** enter(ease-out) exit(ease-in) move(ease-in-out)
|
|
- **Duration:** micro(50-100ms) short(150-250ms) medium(250-400ms) long(400-700ms)
|
|
|
|
## Decisions Log
|
|
| Date | Decision | Rationale |
|
|
|------|----------|-----------|
|
|
| [today] | Initial design system created | Created by /design-consultation based on [product context / research] |
|
|
```
|
|
|
|
**Update CLAUDE.md** (or create it if it doesn't exist) — append this section:
|
|
|
|
```markdown
|
|
## Design System
|
|
Always read DESIGN.md before making any visual or UI decisions.
|
|
All font choices, colors, spacing, and aesthetic direction are defined there.
|
|
Do not deviate without explicit user approval.
|
|
In QA mode, flag any code that doesn't match DESIGN.md.
|
|
```
|
|
|
|
**AskUserQuestion Q-final — show summary and confirm:**
|
|
|
|
List all decisions. Flag any that used agent defaults without explicit user confirmation (the user should know what they're shipping). Options:
|
|
- A) Ship it — write DESIGN.md and CLAUDE.md
|
|
- B) I want to change something (specify what)
|
|
- C) Start over
|
|
|
|
After shipping DESIGN.md, if the session produced screen-level mockups or page layouts
|
|
(not just system-level tokens), suggest:
|
|
"Want to see this design system as working Pretext-native HTML? Run /design-html."
|
|
|
|
---
|
|
|
|
{{LEARNINGS_LOG}}
|
|
|
|
{{GBRAIN_SAVE_RESULTS}}
|
|
|
|
## Important Rules
|
|
|
|
1. **Propose, don't present menus.** You are a consultant, not a form. Make opinionated recommendations based on the product context, then let the user adjust.
|
|
2. **Every recommendation needs a rationale.** Never say "I recommend X" without "because Y."
|
|
3. **Coherence over individual choices.** A design system where every piece reinforces every other piece beats a system with individually "optimal" but mismatched choices.
|
|
4. **Never recommend blacklisted or overused fonts as primary.** If the user specifically requests one, comply but explain the tradeoff.
|
|
5. **The preview page must be beautiful.** It's the first visual output and sets the tone for the whole skill.
|
|
6. **Conversational tone.** This isn't a rigid workflow. If the user wants to talk through a decision, engage as a thoughtful design partner.
|
|
7. **Accept the user's final choice.** Nudge on coherence issues, but never block or refuse to write a DESIGN.md because you disagree with a choice.
|
|
8. **No AI slop in your own output.** Your recommendations, your preview page, your DESIGN.md — all should demonstrate the taste you're asking the user to adopt.
|