gstack/scripts/resolvers/design.ts
Garry Tan 22a4451e0e feat(v1.3.0.0): open agents learnings + cross-model benchmark skill (#1040)
* chore: regenerate stale ship golden fixtures

Golden fixtures were missing the VENDORED_GSTACK preamble section that
landed on main. Regression tests failed on all three hosts (claude, codex,
factory). Regenerated from current preamble output.

No code changes, unblocks test suite.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: anti-slop design constraints + delete duplicate constants

Tightens design-consultation and design-shotgun to push back on the
convergence traps every AI design tool falls into.

Changes:
- scripts/resolvers/constants.ts: add "system-ui as primary font" to
  AI_SLOP_BLACKLIST. Document Space Grotesk as the new "safe alternative
  to Inter" convergence trap alongside the existing overused fonts.
- scripts/gen-skill-docs.ts: delete duplicate AI slop constants block
  (dead code — scripts/resolvers/constants.ts is the live source).
  Prevents drift between the two definitions.
- design-consultation/SKILL.md.tmpl: add Space Grotesk + system-ui to
  overused/slop lists. Add "anti-convergence directive" — vary across
  generations in the same project. Add Phase 1 "memorable-thing forcing
  question" (what's the one thing someone will remember?). Add Phase 5
  "would a human designer be embarrassed by this?" self-gate before
  presenting variants.
- design-shotgun/SKILL.md.tmpl: anti-convergence directive — each
  variant must use a different font, palette, and layout. If two
  variants look like siblings, one of them failed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: context health soft directive in preamble (T2+)

Adds a "periodically self-summarize" nudge to long-running skills.
Soft directive only — no thresholds, no enforcement, no auto-commit.

Goal: self-awareness during /qa, /investigate, /cso etc. If you notice
yourself going in circles, STOP and reassess instead of thrashing.

Codex review caught that fake precision thresholds (15/30/45 tool calls)
were unimplementable — SKILL.md is a static prompt, not runtime code.
This ships the soft version only.

Changes:
- scripts/resolvers/preamble.ts: add generateContextHealth(), wire into
  T2+ tier. Format: [PROGRESS] ... summary line. Explicit rule that
  progress reporting must never mutate git state.
- All T2+ skill SKILL.md files regenerated to include the new section.
- Golden ship fixtures updated (T4 skill, picks up the change).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: model overlays with explicit --model flag (no auto-detect)

Adds a per-model behavioral patch layer orthogonal to the host axis.
Different LLMs have different tendencies (GPT won't stop, Gemini
over-explains, o-series wants structured output). Overlays nudge each
model toward better defaults for gstack workflows.

Codex review caught three landmines the prior reviews missed:
1. Host != model — Claude Code can run any Claude model, Codex runs
   GPT/o-series, Cursor fronts multiple providers. Auto-detecting from
   host would lie. Dropped auto-detect. --model is explicit (default
   claude). Missing overlay file → empty string (graceful).
2. Import cycle — putting Model in resolvers/types.ts would cycle
   through hosts/index. Created neutral scripts/models.ts instead.
3. "Final say" is dangerous — overlay at the end of preamble could
   override STOP points, AskUserQuestion gates, /ship review gates.
   Placed overlay after spawned-session-check but before voice + tier
   sections. Wrapper heading adds explicit subordination language on
   every overlay: "subordinate to skill workflow, STOP points,
   AskUserQuestion gates, plan-mode safety, and /ship review gates."

Changes:
- scripts/models.ts: new neutral module. ALL_MODEL_NAMES, Model type,
  resolveModel() for family heuristics (gpt-5.4-mini → gpt-5.4, o3 →
  o-series, claude-opus-4-7 → claude; sketched after this list),
  validateModel() helper.
- scripts/resolvers/types.ts: import Model, add ctx.model field.
- scripts/resolvers/model-overlay.ts: new resolver. Reads
  model-overlays/{model}.md. Supports {{INHERIT:base}} directive at
  top of file for concat (gpt-5.4 inherits gpt). Cycle guard.
- scripts/resolvers/index.ts: register MODEL_OVERLAY resolver.
- scripts/resolvers/preamble.ts: wire generateModelOverlay into
  composition before voice. Print MODEL_OVERLAY: {model} in preamble
  bash so users can see which overlay is active. Filter empty sections.
- scripts/gen-skill-docs.ts: parse --model CLI flag. Default claude.
  Unknown model → throw with list of valid options.
- model-overlays/{claude,gpt,gpt-5.4,gemini,o-series}.md: behavioral
  patches per model family. gpt-5.4.md uses {{INHERIT:gpt}} to extend
  gpt.md without duplication.
- test/gen-skill-docs.test.ts: fix qa-only guardrail regex scope.
  Was matching Edit/Glob/Grep anywhere after `allowed-tools:` in the
  whole file. Now scoped to frontmatter only. Body prose (the Claude
  overlay references Edit as a tool) no longer trips it.
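
A minimal sketch of the resolveModel() family heuristic from the
scripts/models.ts bullet above (the function shape here is assumed;
the real module may differ in detail):

  // Illustrative only — not the actual scripts/models.ts implementation.
  type Model = 'claude' | 'gpt' | 'gpt-5.4' | 'gemini' | 'o-series';

  function resolveModel(raw: string): Model {
    const name = raw.toLowerCase();
    if (/^o\d/.test(name)) return 'o-series';         // o3, o4-mini → o-series
    if (name.startsWith('gpt-5.4')) return 'gpt-5.4'; // gpt-5.4-mini → gpt-5.4
    if (name.startsWith('gpt')) return 'gpt';         // other GPT variants
    if (name.startsWith('gemini')) return 'gemini';
    return 'claude';                                   // claude-opus-4-7 → claude (default)
  }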

Verification:
- bun run gen:skill-docs --host all --dry-run → all fresh
- bun run gen:skill-docs --model gpt-5.4 → concat works, gpt.md +
  gpt-5.4.md content appears in order
- bun run gen:skill-docs --model unknown → errors with valid list
- All generated skills contain MODEL_OVERLAY: claude in preamble
- Golden ship fixtures regenerated

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: continuous checkpoint mode with non-destructive WIP squash

Adds opt-in auto-commit during long sessions so work survives Claude
Code crashes, Conductor workspace handoffs, and context switches.
Local-only by default — pushing requires explicit opt-in.

Codex review caught multiple landmines that would have shipped:
1. checkpoint_push=true default would push WIP commits to shared
   branches, trigger CI/deploys, expose secrets. Now default false.
2. Plan's original /ship squash (git reset --soft to merge base) was
   destructive — uncommitted ALL branch commits, not just WIP, and
   caused non-fast-forward pushes. Redesigned: rebase --autosquash
   scoped to WIP commits only, with explicit fallback for WIP-only
   branches and STOP-and-ask for conflicts.
3. gstack-config get returned empty for missing keys with exit 0,
   ignoring the annotated defaults in the header comments. Fixed:
   get now falls back to a lookup_default() table that is the
   canonical source for defaults.
4. Telemetry default mismatched: header said 'anonymous' but runtime
   treated empty as 'off'. Aligned: default is 'off' everywhere.
5. /checkpoint resume only read markdown checkpoint files, not the
   WIP commit [gstack-context] bodies the plan referenced. Wired up
   parsing of [gstack-context] blocks from WIP commits as a second
   recovery trail alongside the markdown checkpoints.

Changes:
- bin/gstack-config: add checkpoint_mode (default explicit) and
  checkpoint_push (default false) to CONFIG_HEADER. Add lookup_default()
  as canonical default source. get() falls back to defaults when key
  absent. list now shows value + source (set/default). New 'defaults'
  subcommand to inspect the table.
- scripts/resolvers/preamble.ts: preamble bash reads _CHECKPOINT_MODE
  and _CHECKPOINT_PUSH, prints CHECKPOINT_MODE: and CHECKPOINT_PUSH: so
  the mode is visible. New generateContinuousCheckpoint() section in
  T2+ tier describes WIP commit format with [gstack-context] body and
  the rules (never git add -A, never commit broken tests, push only
  if opted in). Example deliberately shows a clean-state context so
  it doesn't contradict the rules.
- ship/SKILL.md.tmpl: new Step 5.75 WIP Commit Squash. Detects WIP
  count, exports [gstack-context] blocks before squash (as backup),
  uses rebase --autosquash for mixed branches and soft-reset only when
  VERIFIED WIP-only. Explicit anti-footgun rules against blind soft-
  reset. Aborts with BLOCKED status on conflict instead of destroying
  non-WIP commits.
- checkpoint/SKILL.md.tmpl: new Step 1.5 to parse [gstack-context]
  blocks from WIP commits via git log --grep="^WIP:". Merges with
  markdown checkpoint for fuller session recovery.
- Golden ship fixtures regenerated (ship is T4, preamble change shows up).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: feature discovery flow gated by per-feature markers

Extends generateUpgradeCheck() to surface new features once per user
after a just-upgraded session. No more silent features.

Codex review caught: spawned sessions (OpenClaw, etc.) must skip the
discovery prompt entirely — they can't interactively answer. Feature
discovery now checks SPAWNED_SESSION first and stays silent in those sessions.

Discovery is per-feature, not per-upgrade. Each feature has its own
marker file at ~/.claude/skills/gstack/.feature-prompted-{name}. Once
the user has been shown a feature (accepted, shown docs, or skipped),
the marker is touched and the prompt never fires again for that
feature. Future features get their own markers.

V1 features surfaced:
- continuous-checkpoint: offer to enable checkpoint_mode=continuous
- model-overlay: inform-only note about --model flag and MODEL_OVERLAY
  line in preamble output

Max one prompt per session to avoid nagging. Fires only on JUST_UPGRADED
(not every session), and spawned sessions are skipped entirely.

Changes:
- scripts/resolvers/preamble.ts: extend generateUpgradeCheck() with
  feature discovery rules, per-marker-file semantics, spawned-session
  exclusion, and max-one-per-session cap.
- All skill SKILL.md files regenerated to include the new section.
- Golden ship fixtures regenerated.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: design taste engine with persistent schema

Adds a cross-session taste profile that learns from design-shotgun
approval/rejection decisions. Biases future design-consultation and
design-shotgun proposals toward the user's demonstrated preferences.

Codex review caught that the plan had "taste engine" as a vague goal
without schema, decay, migration, or placeholder insertion points. This
commit ships the full spec.

Schema v1 at ~/.gstack/projects/$SLUG/taste-profile.json:
- version, updated_at
- dimensions: fonts, colors, layouts, aesthetics — each with approved[]
  and rejected[] preference lists
- sessions: last 50 (FIFO truncation), each with ts/action/variant/reason
- Preference: { value, confidence, approved_count, rejected_count, last_seen }
- Confidence: Laplace-smoothed approved/(total+1)
- Decay: 5% per week of inactivity, computed at read time (not write)
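
A minimal sketch of the confidence and decay rules above. Field names
follow the schema bullets; the exact decay shape (subtractive, clamped
at zero) is an assumption about how gstack-taste-update computes it:

  // Sketch only — not the actual gstack-taste-update math.
  interface Preference {
    value: string;
    confidence: number;
    approved_count: number;
    rejected_count: number;
    last_seen: string; // ISO 8601
  }

  // Laplace-smoothed confidence: approved / (total + 1).
  function rawConfidence(p: Preference): number {
    return p.approved_count / (p.approved_count + p.rejected_count + 1);
  }

  // 5% per full week of inactivity, applied at read time, never below zero.
  function decayedConfidence(p: Preference, now: number = Date.now()): number {
    const weeksIdle = Math.floor((now - Date.parse(p.last_seen)) / (7 * 86_400_000));
    return Math.max(0, rawConfidence(p) - 0.05 * Math.max(0, weeksIdle));
  }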

Changes:
- bin/gstack-taste-update: new CLI. Subcommands approved/rejected/show/
  migrate. Parses reason string for dimension signals (e.g.,
  "fonts: Geist; colors: slate; aesthetics: minimal"). Emits taste-drift
  NOTE when a new signal contradicts a strong opposing signal. Legacy
  approved.json aggregates migrate to v1 on next write.
- scripts/resolvers/design.ts: new generateTasteProfile() resolver.
  Produces the prose that skills see: how to read the profile, how to
  factor into proposals, conflict handling, schema migration.
- scripts/resolvers/index.ts: register TASTE_PROFILE and a BIN_DIR
  resolver (returns ctx.paths.binDir, used by templates that shell out
  to gstack-* binaries).
- design-consultation/SKILL.md.tmpl: insert {{TASTE_PROFILE}} placeholder
  in Phase 1 right after the memorable-thing forcing question so the
  Phase 3 proposal can factor in learned preferences.
- design-shotgun/SKILL.md.tmpl: taste memory section now reads
  taste-profile.json via {{TASTE_PROFILE}}, falls back to per-session
  approved.json (legacy). Approval flow documented to call
  gstack-taste-update after user picks/rejects a variant.

Known gap: v1 extracts dimension signals from a reason string passed
by the caller ("fonts: X; colors: Y"). Future v2 can read EXIF or an
accompanying manifest written by design-shotgun alongside each variant
for automatic dimension extraction without needing the reason argument.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: multi-provider model benchmark (boil the ocean)

Adds the full spec Codex asked for: real provider adapters with auth
detection, normalized RunResult, pricing tables, tool compatibility
maps, parallel execution with error isolation, and table/JSON/markdown
output. Judge stays on Anthropic SDK as the single stable source of
quality scoring, gated behind --judge.

Codex flagged the original plan as massively under-scoped — the
existing runner is Claude-only and the judge is Anthropic-only. You
can't benchmark GPT or Gemini without real provider infrastructure.
This commit ships it.

New architecture:

  test/helpers/providers/types.ts       ProviderAdapter interface
  test/helpers/providers/claude.ts      wraps `claude -p --output-format json`
  test/helpers/providers/gpt.ts         wraps `codex exec --json`
  test/helpers/providers/gemini.ts      wraps `gemini -p --output-format stream-json --yolo`
  test/helpers/pricing.ts               per-model USD cost tables (quarterly)
  test/helpers/tool-map.ts              which tools each CLI exposes
  test/helpers/benchmark-runner.ts      orchestrator (Promise.allSettled)
  test/helpers/benchmark-judge.ts       Anthropic SDK quality scorer
  bin/gstack-model-benchmark            CLI entry
  test/benchmark-runner.test.ts         9 unit tests (cost math, formatters, tool-map)

Per-provider error isolation:
  - auth → record reason, don't abort batch
  - timeout → record reason, don't abort batch
  - rate_limit → record reason, don't abort batch
  - binary_missing → record in available() check, skip if --skip-unavailable
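
A hedged sketch of how that isolation can be wired; the real
ProviderAdapter and RunResult types in test/helpers/providers/types.ts
are richer than this, and the field names here are assumptions:

  type BenchErrorCode = 'auth' | 'timeout' | 'rate_limit' | 'binary_missing' | 'unknown';

  interface RunResult {
    model: string;
    ok: boolean;
    error?: { code: BenchErrorCode; reason: string };
  }

  interface ProviderAdapter {
    name: string;
    available(): Promise<boolean>;            // binary on PATH + auth present
    run(prompt: string): Promise<RunResult>;
  }

  // One failing provider records its reason; it never aborts the batch.
  async function runAll(adapters: ProviderAdapter[], prompt: string): Promise<RunResult[]> {
    const settled = await Promise.allSettled(adapters.map((a) => a.run(prompt)));
    return settled.map((s, i) =>
      s.status === 'fulfilled'
        ? s.value
        : { model: adapters[i].name, ok: false, error: { code: 'unknown', reason: String(s.reason) } },
    );
  }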

Pricing correction: cached input tokens are disjoint from uncached
input tokens (Anthropic/OpenAI report them separately). Original
math subtracted them, producing negative costs. Now adds cached at
the 10% discount alongside the full uncached input cost.
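
In code, the corrected math is roughly this (rate names are
illustrative; test/helpers/pricing.ts defines the real table shape):

  // Cached and uncached input tokens are disjoint counts: add both, never subtract.
  interface Pricing {
    inputPerMTok: number;        // USD per 1M uncached input tokens
    cachedInputPerMTok: number;  // USD per 1M cached input tokens (discounted rate)
    outputPerMTok: number;       // USD per 1M output tokens
  }

  function estimateCostUSD(p: Pricing, input: number, cached: number, output: number): number {
    return (
      (input / 1e6) * p.inputPerMTok +
      (cached / 1e6) * p.cachedInputPerMTok +
      (output / 1e6) * p.outputPerMTok
    );
  }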

CLI:
  gstack-model-benchmark --prompt "..." --models claude,gpt,gemini
  gstack-model-benchmark ./prompt.txt --output json --judge
  gstack-model-benchmark ./prompt.txt --models claude --timeout-ms 60000

Output formats: table (default), json, markdown. Each shows model,
latency, in→out tokens, cost, quality (when --judge used), tool calls,
and any errors.

Known limitations for v1:
- Claude adapter approximates toolCalls as num_turns (stream-json
  would give exact counts; v2 can upgrade).
- Live E2E tests (test/providers.e2e.test.ts) not included — they
  require CI secrets for all three providers. Unit tests cover the
  shape and math.
- Provider CLIs sometimes return non-JSON error text to stdout; the
  parsers fall back to treating raw output as plain text in that case.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: standalone methodology skill publishing via gstack-publish

Ships the marketplace-distribution half of Item 5 (reframed): publish
the existing standalone OpenClaw methodology skills to multiple
marketplaces with one command.

Codex review caught that the original plan assumed raw generated
multi-host skills could be published directly. They can't — those
depend on gstack binaries, generated host paths, tool names, and
telemetry. The correct artifact class is hand-crafted standalone
skills in openclaw/skills/gstack-openclaw-* (already exist and work
without gstack runtime). This commit adds the wrapper that publishes
them to ClawHub + SkillsMP + Vercel Skills.sh with per-marketplace
error isolation and dry-run validation.

Changes:
- skills.json: root manifest with 4 skills (office-hours, ceo-review,
  investigate, retro) each pointing at its openclaw/skills source.
  Each skill declares per-marketplace targets with a slug, a publish
  flag, and a compatible-hosts list. Marketplace configs include CLI
  name, login command, publish command template (with placeholder
  substitution), docs URL, and auth_check command.
- bin/gstack-publish: new CLI. Subcommands:
    gstack-publish              Publish all skills
    gstack-publish <slug>       Publish one skill
    gstack-publish --dry-run    Validate + auth-check without publishing
    gstack-publish --list       List skills + marketplace targets
  Features:
    * Manifest validation (missing source files, missing slugs, empty
      marketplace list all reported).
    * Per-marketplace auth check before any publish attempt.
    * Per-skill / per-marketplace error isolation: one failure doesn't
      abort the batch.
    * Idempotent — re-running with the same version is safe; markets
      that reject duplicate versions report it as a failure for that
      single target without affecting others.
    * --dry-run walks the full pipeline but skips execSync; useful in
      CI to validate manifest before bumping version.

Tested locally: clawhub auth detected, skillsmp/vercel CLIs not
installed (marked NOT READY and skipped cleanly in dry-run).

Follow-up work (tracked in TODOS.md later):
- Version-bump helper that reads openclaw/skills/*/SKILL.md frontmatter
  and updates skills.json in lockstep.
- CI workflow that runs gstack-publish --dry-run on every PR and
  gstack-publish on tags.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: split preamble.ts into submodules (byte-identical output)

Splits scripts/resolvers/preamble.ts (841 lines, 18 generator functions +
composition root) into one file per generator under
scripts/resolvers/preamble/. Root preamble.ts becomes a thin composition
layer (~80 lines of imports + generatePreamble).

Before:
  scripts/resolvers/preamble.ts  841 lines

After:
  scripts/resolvers/preamble.ts                                   83 lines
  scripts/resolvers/preamble/generate-preamble-bash.ts            97 lines
  scripts/resolvers/preamble/generate-upgrade-check.ts            48 lines
  scripts/resolvers/preamble/generate-lake-intro.ts               16 lines
  scripts/resolvers/preamble/generate-telemetry-prompt.ts         37 lines
  scripts/resolvers/preamble/generate-proactive-prompt.ts         25 lines
  scripts/resolvers/preamble/generate-routing-injection.ts        49 lines
  scripts/resolvers/preamble/generate-vendoring-deprecation.ts    36 lines
  scripts/resolvers/preamble/generate-spawned-session-check.ts    11 lines
  scripts/resolvers/preamble/generate-ask-user-format.ts          16 lines
  scripts/resolvers/preamble/generate-completeness-section.ts     19 lines
  scripts/resolvers/preamble/generate-repo-mode-section.ts        12 lines
  scripts/resolvers/preamble/generate-test-failure-triage.ts     108 lines
  scripts/resolvers/preamble/generate-search-before-building.ts   14 lines
  scripts/resolvers/preamble/generate-completion-status.ts       161 lines
  scripts/resolvers/preamble/generate-voice-directive.ts          60 lines
  scripts/resolvers/preamble/generate-context-recovery.ts         51 lines
  scripts/resolvers/preamble/generate-continuous-checkpoint.ts    48 lines
  scripts/resolvers/preamble/generate-context-health.ts           31 lines

Byte-identity verification (the real gate per Codex correction):
- Before refactor: snapshotted 135 generated SKILL.md files via
  `find -name SKILL.md -type f | grep -v /gstack/` across all hosts.
- After refactor: regenerated with `bun run gen:skill-docs --host all`
  and re-snapshotted.
- `diff -r baseline after` returned zero differences and exit 0.

The `--host all --dry-run` gate passes too. No template or host behavior
changes — purely a code-organization refactor.

Test fix: audit-compliance.test.ts's telemetry check previously grepped
preamble.ts directly for `_TEL != "off"`. After the refactor that logic
lives in preamble/generate-preamble-bash.ts. Test now concatenates all
preamble submodule sources before asserting — tracks the semantic contract,
not the file layout. This minimal rewrite preserves the test's intent
(conditional telemetry) without coupling it to file boundaries.
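
Roughly what the layout-agnostic assertion looks like (paths and
matcher here are assumptions, not the literal test code):

  import { expect, test } from 'bun:test';
  import { readdirSync, readFileSync } from 'node:fs';
  import { join } from 'node:path';

  test('telemetry logging stays conditional', () => {
    // Concatenate every preamble submodule, then assert the semantic contract.
    const dir = 'scripts/resolvers/preamble';
    const combined = readdirSync(dir)
      .filter((f) => f.endsWith('.ts'))
      .map((f) => readFileSync(join(dir, f), 'utf8'))
      .join('\n');
    expect(combined).toContain('_TEL != "off"');
  });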

Why now: we were in-session with full context. Codex had downgraded this
from mandatory to optional, but the preamble had grown to 841 lines and
was getting harder to navigate. User asked "why not?" given the context
was hot. Shipping it as a clean bisectable commit while all the prior
preamble.ts changes are fresh reduces rebase pain later.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: bump version and changelog (v0.19.0.0)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: trim verbose preamble + coverage audit prose

Compress without removing behavior or voice. Three targeted cuts:

1. scripts/resolvers/testing.ts coverage diagram example: 40 lines → 14
   lines. Two-column ASCII layout instead of stacked sections.
   Preserves all required regression-guard phrases (processPayment,
   refundPayment, billing.test.ts, checkout.e2e.ts, COVERAGE, QUALITY,
   GAPS, Code paths, User flows, ASCII coverage diagram).

2. scripts/resolvers/preamble/generate-completion-status.ts Plan Status
   Footer: was 35 lines with embedded markdown table example, now 7
   lines that describe the table inline. The footer fires only at
   ExitPlanMode time — Claude can construct the placeholder table from
   the inline description without copying a literal example.

3. Same file's Plan Mode Safe Operations + Skill Invocation During Plan
   Mode sections compressed from ~25 lines combined to ~12. Preserves
   all required test phrases (precedence over generic plan mode behavior,
   Do not continue the workflow, cancel the skill or leave plan mode,
   PLAN MODE EXCEPTION).

NOT touched:
- Voice directive (Garry's voice — protected per CLAUDE.md)
- Office-hours Phase 6 Handoff (Garry's voice + YC pitch)
- Test bootstrap, review army, plan completion (carefully tuned behavior)

Token savings (per skill, system-wide):
  ship/SKILL.md           35474 → 34992 tokens (-482)
  plan-ceo-review         29436 → 28940 (-496)
  office-hours            26700 → 26204 (-496)

Still over the 25K ceiling. Bigger reduction requires restructure
(move large resolvers to externally-referenced docs, split /ship into
ship-quick + ship-full, or refactor the coverage audit + review army
into shorter prose). That's a follow-up — added to TODOS.

Tests: 420/420 pass on gen-skill-docs.test.ts + host-config.test.ts.
Goldens regenerated for claude/codex/factory ship.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(ci): install Node.js from official tarball instead of NodeSource apt setup

The CI Dockerfile's Node install was failing on ubicloud runners. NodeSource's
setup_22.x script runs two internal apt operations that both depend on
archive.ubuntu.com + security.ubuntu.com being reachable:
1. apt-get update (to refresh package lists)
2. apt-get install gnupg (as a prerequisite for its gpg keyring)

Ubicloud's CI runners frequently can't reach those mirrors — last build hit
~2min of connection timeouts to every security.ubuntu.com IP (185.125.190.82,
91.189.91.83, 91.189.92.24, etc.) plus archive.ubuntu.com mirrors. Compounding
this: on Ubuntu 24.04 (noble) "gnupg" was renamed to "gpg" and "gpgconf".
NodeSource's setup script still looks for "gnupg", so even when apt works,
it fails with "Package 'gnupg' has no installation candidate." The subsequent
apt-get install nodejs then fails because the NodeSource repo was never added.

Fix: drop NodeSource entirely. Download Node.js v22.20.0 from nodejs.org as a
tarball, extract to /usr/local. One host, no apt, no script, no keyring.

Before:
  RUN curl -fsSL https://deb.nodesource.com/setup_22.x | bash - \
      && apt-get install -y --no-install-recommends nodejs ...

After:
  ENV NODE_VERSION=22.20.0
  RUN curl -fsSL "https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.xz" -o /tmp/node.tar.xz \
      && tar -xJ -C /usr/local --strip-components=1 --no-same-owner -f /tmp/node.tar.xz \
      && rm -f /tmp/node.tar.xz \
      && node --version && npm --version

Same installed path (/usr/local/bin/node and npm). Pinned version for
reproducibility. Version is bump-visible in the Dockerfile now.

Does not address the separate apt flakiness that affects the GitHub CLI
install (line 17) or `npx playwright install-deps chromium` (line 33) —
those use apt too. If those fail on a future build we can address then.

Failing job: build-image (71777913820)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: raise skill token ceiling warning from 25K to 40K

The 25K ceiling predated flagship models with 200K-1M windows and assumed
every skill prompt dominates context cost. Modern reality: prompt caching
amortizes the skill load across invocations, and three carefully-tuned
skills (ship, plan-ceo-review, office-hours) legitimately pack 25-35K
tokens of behavior that can't be cut without degrading quality or removing
protected content (Garry's voice, YC pitch, specialist review instructions).

We made the safe prose cuts earlier (coverage diagram, plan status footer,
plan mode operations). The remaining gap is structural — real compression
would require splitting /ship into ship-quick vs ship-full, externalizing
large resolvers to reference docs, or removing detailed skill behavior.
Each is 1-2 days of work. The cost of the warning firing is zero (it's
a warning, not an error). The cost of hitting it is ~15¢ per invocation
at worst, amortized further by prompt caching.

Raising to 40K catches what it's supposed to catch — a runaway 10K+ token
growth in a single release — without crying wolf on legitimately big
skills. Reference doc in CLAUDE.md updated to reflect the new philosophy:
when you hit 40K, ask WHAT grew, don't blindly compress tuned prose.

scripts/gen-skill-docs.ts: TOKEN_CEILING_BYTES 100_000 → 160_000.
CLAUDE.md: document the "watch for feature bloat, not force compression"
intent of the ceiling.

Verification: `bun run gen:skill-docs --host all` shows zero TOKEN
CEILING warnings under the new 40K threshold.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(ci): install xz-utils so Node tarball extraction works

The direct-tarball Node install (switched from NodeSource apt in the last
CI fix) failed with "xz: Cannot exec: No such file or directory" because
Ubuntu 24.04 base doesn't include xz-utils. Node ships .tar.xz by default,
and `tar -xJ` shells out to xz, which was missing.

Add xz-utils to the base apt install alongside git/curl/unzip/etc.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(benchmark): pass --skip-git-repo-check to codex adapter

The gpt provider adapter spawns `codex exec -C <workdir>` with arbitrary
working directories (benchmark temp dirs, non-git paths). Without
`--skip-git-repo-check`, codex refuses to run and returns "Not inside a
trusted directory" — surfaced as a generic error.code='unknown' that
looks like an API failure.

Benchmarks don't care about codex's git-repo trust model; we just want
the prompt executed. Surfaced by the new provider live E2E test on a
temp workdir.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(benchmark): add --dry-run flag to gstack-model-benchmark

Matches gstack-publish --dry-run semantics. Validates the provider list,
resolves per-adapter auth, echoes the resolved flag values, and exits
without invoking any provider CLI. Zero-cost pre-flight for CI pipelines
and for catching auth drift before starting a paid benchmark run.

Output shape:
  == gstack-model-benchmark --dry-run ==
    prompt:     <truncated>
    providers:  claude, gpt, gemini
    workdir:    /tmp/...
    timeout_ms: 300000
    output:     table
    judge:      off

  Adapter availability:
    claude: OK
    gpt:    NOT READY — <reason>
    gemini: NOT READY — <reason>

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: lite E2E coverage for benchmark, taste engine, publish

Fills real coverage gaps in v0.19.0.0 primitives. 44 new deterministic
tests (gate tier, ~3s) + 8 live-API tests (periodic tier).

New gate-tier test files (free, <3s total):
- test/taste-engine.test.ts — 24 tests against gstack-taste-update:
  schema shape, Laplace-smoothed confidence, 5%/week decay clamped at 0,
  multi-dimension extraction, case-insensitive matching, session cap,
  legacy profile migration with session truncation, taste-drift conflict
  warning, malformed-JSON recovery, missing-variant exit code.
- test/publish-dry-run.test.ts — 13 tests against gstack-publish --dry-run:
  manifest parsing, missing/malformed JSON, per-skill validation errors
  (missing source file / slug / version / marketplaces), slug filter,
  unknown-skill exit, per-marketplace auth isolation (fake marketplaces
  with always-pass / always-fail / missing-binary CLIs), and a sanity
  check against the real repo manifest.
- test/benchmark-cli.test.ts — 11 tests against gstack-model-benchmark
  --dry-run: provider default, unknown-provider WARN, empty list
  fallback, flag passthrough (timeout/workdir/judge/output), long-prompt
  truncation, prompt resolution (inline vs file vs positional), missing
  prompt exit.

New periodic-tier test file (paid, gated EVALS=1):
- test/skill-e2e-benchmark-providers.test.ts — 8 tests hitting real
  claude, codex, gemini CLIs with a trivial prompt (~$0.001/provider).
  Verifies output parsing, token accounting, cost estimation, timeout
  error.code semantics, Promise.allSettled parallel isolation.
  Per-provider availability gate — unauthed providers skip cleanly.

This suite already caught one real bug (codex adapter missing
--skip-git-repo-check, fixed in 5260987d).

Registered `benchmark-providers-live` in touchfiles.ts (periodic tier,
triggered by changes to bin/gstack-model-benchmark, providers/**,
benchmark-runner.ts, pricing.ts).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(benchmark): dedupe providers in --models

`--models claude,claude,gpt` previously produced a list with a duplicate
entry, meaning the benchmark would run claude twice and bill for two
runs. Surfaced by /review on this branch.

Use a Set internally; return Array.from(seen) to preserve type + order
of first occurrence.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: /review hardening — NOT-READY env isolation, workdir cleanup, perf

Applied from the adversarial subagent pass during /review on this branch:

- test/benchmark-cli.test.ts — new "NOT READY path fires when auth env
  vars are stripped" test. The default dry-run test always showed OK on
  dev machines with auth, hiding regressions in the remediation-hint
  branch. Stripped env (no auth vars, HOME→empty tmpdir) now force-
  exercises gpt + gemini NOT READY paths and asserts every NOT READY
  line includes a concrete remediation hint (install/login/export).
  (claude adapter's os.homedir() call is Bun-cached; the 2-of-3 adapter
  coverage is sufficient to exercise the branch.)

- test/taste-engine.test.ts — session-cap test rewritten to seed the
  profile with 50 entries + one real CLI call, instead of 55 sequential
  subprocess spawns. Same coverage (FIFO eviction at the boundary), ~5s
  faster CI time. Also pins first-casing-wins on the Geist/GEIST merge
  assertion — bumpPref() keeps the first-arrival casing, so the test
  documents that policy.

- test/skill-e2e-benchmark-providers.test.ts — workdir creation moved
  from module-load into beforeAll, cleanup added in afterAll. Previous
  shape leaked a /tmp/bench-e2e-* dir every CI run.

- test/publish-dry-run.test.ts — removed unused empty test/helpers
  mkdirSync from the sandbox setup. The bin doesn't import from there,
  so the empty dir was a footgun for future maintainers.

- test/helpers/providers/gpt.ts — expanded the inline comment on
  `--skip-git-repo-check` to explicitly note that `-s read-only` is now
  load-bearing safety (the trust prompt was the secondary boundary;
  removing read-only while keeping skip-git-repo-check would be unsafe).

Net: 45 passing tests (was 44), session-cap test 5s faster, one real
regression surface covered that didn't exist before.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: surface v0.19 binaries and continuous checkpoint in README

The /review doc-staleness check flagged that v0.19.0.0 ships three new CLIs
(gstack-model-benchmark, gstack-publish, gstack-taste-update) and an opt-in
continuous checkpoint mode, none of which were visible in README's Power
tools section. New users couldn't find them without reading CHANGELOG.

Added:
- "New binaries (v0.19)" subsection with one-row descriptions for each CLI
- "Continuous checkpoint mode (opt-in, local by default)" subsection
  explaining WIP auto-commit + [gstack-context] body + /ship squash +
  /checkpoint resume

CHANGELOG entry already has good voice from /ship; no polish needed.
VERSION already at 0.19.0.0. Other docs (ARCHITECTURE/CONTRIBUTING/BROWSER)
don't reference this surface — scoped intentionally.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(ship): Step 19.5 — offer gstack-publish for methodology skill changes

Wires the orphaned gstack-publish binary into /ship. When a PR touches
any standalone methodology skill (openclaw/skills/gstack-*/SKILL.md) or
skills.json, /ship now runs gstack-publish --dry-run after PR creation
and asks the user if they want to actually publish.

Previously, the only way to discover gstack-publish was reading the
CHANGELOG or README. Most methodology skill updates landed on main
without ever being pushed to ClawHub / SkillsMP / Vercel Skills.sh,
defeating the whole point of having a marketplace publisher.

The check is conditional — for PRs that don't touch methodology skills
(the common case), this step is a silent no-op. Dry-run runs first so
the user sees the full list of what would publish and which marketplaces
are authed before committing.

Golden fixtures (claude/codex/factory) regenerated.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(benchmark-models): new skill wrapping gstack-model-benchmark

Wires the orphaned gstack-model-benchmark binary into a dedicated skill
so users can discover cross-model benchmarking via /benchmark-models or
voice triggers ("compare models", "which model is best").

Deliberately separate from /benchmark (page performance) because the
two surfaces test completely different things — confusing them would
muddy both.

Flow:
  1. Pick a prompt (an existing SKILL.md file, inline text, or file path)
  2. Confirm providers (dry-run shows auth status per provider)
  3. Decide on --judge (adds ~$0.05, scores output quality 0-10)
  4. Run the benchmark — table output
  5. Interpret results (fastest / cheapest / highest quality)
  6. Offer to save to ~/.gstack/benchmarks/<date>.json for trend tracking

Uses gstack-model-benchmark --dry-run as a safety gate — auth status is
visible BEFORE the user spends API calls. If zero providers are authed,
the skill stops cleanly rather than attempting a run that produces no
useful output.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: v1.3.0.0 — complete CHANGELOG + bump for post-1.2 scope additions

VERSION 1.2.0.0 → 1.3.0.0. The original 1.2 entry was written before I
added substantial new scope: the /benchmark-models skill, /ship Step 19.5
gstack-publish integration, --dry-run on gstack-model-benchmark, and the
lite E2E test coverage (4 new test files). A minor bump gives those
changes their own version line instead of silently folding them into
1.2's scope.

CHANGELOG additions under 1.3.0.0:
- /benchmark-models skill (new Added)
- /ship Step 19.5 publish check (new Added)
- gstack-model-benchmark --dry-run (new Added)
- Token ceiling 25K → 40K (moved to Changed)
- New Fixed section — codex adapter --skip-git-repo-check, --models
  dedupe, CI Dockerfile xz-utils + nodejs.org tarball
- 4 new test files documented under contributors (taste-engine,
  publish-dry-run, benchmark-cli, skill-e2e-benchmark-providers)
- Ship golden fixtures for claude/codex/factory hosts

Pre-existing 1.2 content preserved verbatim — no entries clobbered or
reordered. Sequence remains contiguous (1.3.0.0 → 1.2.0.0 → 1.1.3.0 →
1.1.2.0 → 1.1.1.0 → 1.1.0.0 → 1.0.0.0 → 0.19.0.0 → ...).

package.json and VERSION both at 1.3.0.0. No drift.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: adopt gbrain's release-summary CHANGELOG format + apply to v1.3

Ported the "release-summary format" rules from ~/git/gbrain/CLAUDE.md
(lines 291-354) into gstack's CLAUDE.md under the existing
"CHANGELOG + VERSION style" section. Every future `## [X.Y.Z]` entry
now needs a verdict-style release summary at the top:
1. Two-line bold headline (10-14 words)
2. Lead paragraph (3-5 sentences)
3. "Numbers that matter" with BEFORE / AFTER / Δ table
4. "What this means for [audience]" closer
5. `### Itemized changes` header
6. Existing itemized subsections below

Rewrote v1.3.0.0 entry to match. Preserved every existing bullet in
Added / Changed / Fixed / For contributors (no content clobbered per
the CLAUDE.md CHANGELOG rule).

Numbers in the v1.3 release summary are verifiable — every row of the
BEFORE / AFTER table has a reproducible command listed in the setup
paragraph (git log, bun test, grep for wiring status). No made-up
metrics.

Also added the gbrain "always credit community contributions" rule to
the itemized-changes section. `Contributed by @username` for every
community PR that lands in a CHANGELOG entry.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: remove gstack-publish — no real user need

User feedback: "i don't think i would use gstack-publish, i think we
should remove it." Agreed. The CLI + marketplace wiring was an
ambitious but speculative primitive. Zero users, zero validated demand,
and the existing manual `clawhub publish` workflow already covers the
real case (OpenClaw methodology skill publishing).

Deleted:
- bin/gstack-publish (the CLI)
- skills.json (the marketplace manifest)
- test/publish-dry-run.test.ts (13 tests)
- ship/SKILL.md.tmpl Step 19.5 — the methodology-skill publish-on-ship
  check. No target to dispatch to anymore.
- README.md Power tools row for gstack-publish

Updated:
- bin/gstack-model-benchmark doc comment: dropped "matches gstack-publish
  --dry-run semantics" reference (self-describing flag now)
- CHANGELOG 1.3.0.0 entry:
  * Release summary: "three new binaries" → "two new binaries".
    Dropped the /ship publish-check narrative.
  * Numbers table: "1 of 3 → 3 of 3 wired" → "1 of 2 → 2 of 2 wired".
    Deterministic test count: 45 → 32 (removed publish-dry-run's 13).
  * Added section: removed gstack-publish CLI bullet + /ship Step 19.5
    bullet.
  * "What this means for users" closer: replaced the /ship publish
    paragraph with the design-taste-engine learning loop, which IS
    real, wired, and something users hit every week via /design-shotgun.
  * Contributors section: "Four new test files" → "Three new test files"

Retained:
- openclaw/skills/gstack-openclaw-* skill dirs (pre-existed this PR,
  still publishable manually via `clawhub publish`, useful standalone
  for ClawHub installs)
- CLAUDE.md publishing-native-skills section (same rationale)

Regenerated SKILL.md across all hosts. Ship golden fixtures refreshed
for claude/codex/factory. 455 tests pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(CHANGELOG): reorder v1.3 entry around day-to-day user wins

Previous entry led with internal metrics (CLIs wired to skills, preamble
line count, adapter bugs caught in CI). Useful to contributors, invisible
to users. Rewrote the release summary and Added section to lead with
what a day-to-day gstack user actually experiences.

Release summary changes:
- Headline: "Every new CLI wired to a slash command" → "Your design
  skills learn your taste. Your session state survives a laptop close."
- Lead paragraph: shifted from "primitives discoverable from /commands"
  to concrete day-to-day wins (design-shotgun taste memory, design-
  consultation anti-slop gates, continuous checkpoint survival).
- Numbers table: swapped internal metrics (CLI wiring %, test counts,
  preamble line count) for user-visible ones:
    - Design-variant convergence gate (0 → 3 axes required)
    - AI-slop font blacklist (~8 → 10+ fonts)
    - Taste memory across sessions (none → per-project JSON with decay)
    - Session state after crash (lost → auto-WIP with structured body)
    - /context-restore sources (markdown only → + WIP commits)
    - Models with behavioral overlays (1 → 5)
- "Most striking" interpretation: reframed around the mid-session
  crash survival story instead of the codex adapter bug catch.
- "What this means" closer: reframed around /design-shotgun + /design-
  consultation + continuous checkpoint workflow instead of
  /benchmark-models.

Added section — reorganized into six subsections by user value:
  1. Design skills that stop looking like AI
     (anti-slop constraints, taste engine)
  2. Session state that survives a crash
     (continuous checkpoint, /context-restore WIP reading,
     /ship non-destructive squash)
  3. Quality-of-life
     (feature discovery prompt, context health soft directive)
  4. Cross-host support
     (--model flag + 5 overlays)
  5. Config
     (gstack-config list/defaults, checkpoint_mode/push keys)
  6. Power-user / internal
     (gstack-model-benchmark + /benchmark-models skill — grouped and
     pushed to the bottom since it's more of a research tool than a
     daily workflow piece)

Changed / Fixed / For contributors sections unchanged. No content
clobbered per CLAUDE.md CHANGELOG rules — every existing bullet is
preserved, just reordered and grouped.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(CHANGELOG): reframe v1.3 entry around transparency vs laptop-close

User feedback: "'closing your laptop' in the changelog is overstated, i
mean claude code does already have session management. i think the use
of the context save restore is mainly just another tool that is more in
your control instead of opaque and a part of CC." Correct. CC handles
session persistence on its own; continuous checkpoint isn't filling a
gap there, it's giving users a parallel, inspectable, portable track.

Reframed every place the old copy overstated:

- Headline: "Your session state survives a laptop close" → "Your
  session state lives in git, not a black box."
- Lead paragraph: dropped the "closing your laptop mid-refactor doesn't
  vaporize your decisions" line. Now frames continuous checkpoint as
  explicitly running alongside CC's built-in session management, not
  replacing it. Emphasizes grep-ability, portability across tools and
  branches.
- Numbers table row: "Session state after mid-refactor crash: lost
  since last manual commit → auto-WIP commits" → "Session state
  format: Claude Code's opaque session store → git commits +
  [gstack-context] bodies + markdown (parallel track)". Honest about
  what's actually changing.
- "Most striking" interpretation: replaced the "used to cost you every
  decision" framing with the real user value — session state stops
  being a black box, `git log --grep "WIP:"` shows the whole thread,
  any tool reading git can see it.
- "What this means" closer: replaced "survives crashes, context
  switches, and forgotten laptops" with accurate framing — parallel
  track alongside CC's own, inspectable, portable, useful when you
  want to review or hand off work.
- Added section: "Session state that survives a crash" subsection
  renamed to "Session state you can see, grep, and move". Lead bullet
  now explicitly notes continuous checkpoint runs alongside CC session
  management, not instead.

No content clobbered. All other bullets and sections unchanged.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(CHANGELOG): correct session-state location — home dir by default, git only on opt-in

User correction: "wait is our session management really checked into
git? i don't think that's right, isn't it just saved in your home
dir?" Right. I had the location wrong. The default session-save
mechanism (`/context-save` + `/context-restore`) writes markdown
files to `~/.gstack/projects/$SLUG/checkpoints/` — HOME, not git.
Continuous checkpoint mode (opt-in) is what writes git commits.
Previous copy conflated the two and implied "lives in git" as the
default state, which is wrong.

Every affected location updated:

- Headline: "lives in git, not a black box" → "becomes files you
  can grep, not a black box." Removes the false implication that
  session state lands in git by default.
- Lead paragraph: now explicitly names the two separate mechanisms.
  `/context-save` writes plaintext markdown to `~/.gstack/projects/
  $SLUG/checkpoints/` (the default). Continuous checkpoint mode
  (opt-in) additionally drops WIP: commits into the git log.
- Numbers table row: "Session state format" now reads "markdown in
  `~/.gstack/` by default, plus WIP: git commits if you opt into
  continuous mode (parallel track)." Tells the truth about which
  path is default vs opt-in.
- "Most striking" row interpretation: now names both paths. Default
  path = markdown files in home dir. Opt-in continuous mode = WIP:
  commits in project git log. Either way, plain text the user owns.
- "What this means" closer: similarly names both paths explicitly.
  "markdown files in your home directory by default, plus git
  commits if you opt into continuous mode."
- Continuous checkpoint mode Added bullet: clarifies the commits
  land in "your project's git log" (not implied to be the default),
  and notes it runs alongside BOTH Claude Code's built-in session
  management AND the default `/context-save` markdown flow.

No other bullets or sections touched. No content clobbered.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-19 17:50:31 +08:00


import type { TemplateContext } from './types';
import { AI_SLOP_BLACKLIST, OPENAI_HARD_REJECTIONS, OPENAI_LITMUS_CHECKS } from './constants';
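/**
 * Diff-scoped design review section for the review skill. Skips silently when
 * the diff touches no frontend files; otherwise applies the design checklist
 * and, on non-codex hosts, layers in an optional Codex design-voice pass.
 */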
export function generateDesignReviewLite(ctx: TemplateContext): string {
  const litmusList = OPENAI_LITMUS_CHECKS.map((item, i) => `${i + 1}. ${item}`).join(' ');
  const rejectionList = OPENAI_HARD_REJECTIONS.map((item, i) => `${i + 1}. ${item}`).join(' ');
  // Codex design-voice block for all hosts except codex
  const codexBlock = ctx.host === 'codex' ? '' : `
7. **Codex design voice** (optional, automatic if available):
\`\`\`bash
which codex 2>/dev/null && echo "CODEX_AVAILABLE" || echo "CODEX_NOT_AVAILABLE"
\`\`\`
If Codex is available, run a lightweight design check on the diff:
\`\`\`bash
TMPERR_DRL=$(mktemp /tmp/codex-drl-XXXXXXXX)
_REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; }
codex exec "Review the git diff on this branch. Run 7 litmus checks (YES/NO each): ${litmusList} Flag any hard rejections: ${rejectionList} 5 most important design findings only. Reference file:line." -C "$_REPO_ROOT" -s read-only -c 'model_reasoning_effort="high"' --enable web_search_cached < /dev/null 2>"$TMPERR_DRL"
\`\`\`
Use a 5-minute timeout (\`timeout: 300000\`). After the command completes, read stderr:
\`\`\`bash
cat "$TMPERR_DRL" && rm -f "$TMPERR_DRL"
\`\`\`
**Error handling:** All errors are non-blocking. On auth failure, timeout, or empty response — skip with a brief note and continue.
Present Codex output under a \`CODEX (design):\` header, merged with the checklist findings above.`;
  return `## Design Review (conditional, diff-scoped)
Check if the diff touches frontend files using \`gstack-diff-scope\`:
\`\`\`bash
source <(${ctx.paths.binDir}/gstack-diff-scope <base> 2>/dev/null)
\`\`\`
**If \`SCOPE_FRONTEND=false\`:** Skip design review silently. No output.
**If \`SCOPE_FRONTEND=true\`:**
1. **Check for DESIGN.md.** If \`DESIGN.md\` or \`design-system.md\` exists in the repo root, read it. All design findings are calibrated against it — patterns blessed in DESIGN.md are not flagged. If not found, use universal design principles.
2. **Read \`.claude/skills/review/design-checklist.md\`.** If the file cannot be read, skip design review with a note: "Design checklist not found — skipping design review."
3. **Read each changed frontend file** (full file, not just diff hunks). Frontend files are identified by the patterns listed in the checklist.
4. **Apply the design checklist** against the changed files. For each item:
- **[HIGH] mechanical CSS fix** (\`outline: none\`, \`!important\`, \`font-size < 16px\`): classify as AUTO-FIX
- **[HIGH/MEDIUM] design judgment needed**: classify as ASK
- **[LOW] intent-based detection**: present as "Possible — verify visually or run /design-review"
5. **Include findings** in the review output under a "Design Review" header, following the output format in the checklist. Design findings merge with code review findings into the same Fix-First flow.
6. **Log the result** for the Review Readiness Dashboard:
\`\`\`bash
${ctx.paths.binDir}/gstack-review-log '{"skill":"design-review-lite","timestamp":"TIMESTAMP","status":"STATUS","findings":N,"auto_fixed":M,"commit":"COMMIT"}'
\`\`\`
Substitute: TIMESTAMP = ISO 8601 datetime, STATUS = "clean" if 0 findings or "issues_found", N = total findings, M = auto-fixed count, COMMIT = output of \`git rev-parse --short HEAD\`.${codexBlock}`;
}
// NOTE: design-checklist.md is a subset of this methodology for code-level detection.
// When adding items here, also update review/design-checklist.md, and vice versa.
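/**
 * Shared design-audit methodology prose: run modes, First Impression,
 * design-system extraction, and the page-by-page audit checklist.
 */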
export function generateDesignMethodology(_ctx: TemplateContext): string {
  return `## Modes
### Full (default)
Systematic review of all pages reachable from homepage. Visit 5-8 pages. Full checklist evaluation, responsive screenshots, interaction flow testing. Produces complete design audit report with letter grades.
### Quick (\`--quick\`)
Homepage + 2 key pages only. First Impression + Design System Extraction + abbreviated checklist. Fastest path to a design score.
### Deep (\`--deep\`)
Comprehensive review: 10-15 pages, every interaction flow, exhaustive checklist. For pre-launch audits or major redesigns.
### Diff-aware (automatic when on a feature branch with no URL)
When on a feature branch, scope to pages affected by the branch changes:
1. Analyze the branch diff: \`git diff main...HEAD --name-only\`
2. Map changed files to affected pages/routes
3. Detect running app on common local ports (3000, 4000, 8080)
4. Audit only affected pages, compare design quality before/after
### Regression (\`--regression\` or previous \`design-baseline.json\` found)
Run full audit, then load previous \`design-baseline.json\`. Compare: per-category grade deltas, new findings, resolved findings. Output regression table in report.
---
## Phase 1: First Impression
The most uniquely designer-like output. Form a gut reaction before analyzing anything.
1. Navigate to the target URL
2. Take a full-page desktop screenshot: \`$B screenshot "$REPORT_DIR/screenshots/first-impression.png"\`
3. Write the **First Impression** using this structured critique format:
- "The site communicates **[what]**." (what it says at a glance — competence? playfulness? confusion?)
- "I notice **[observation]**." (what stands out, positive or negative — be specific)
- "The first 3 things my eye goes to are: **[1]**, **[2]**, **[3]**." (hierarchy check — are these the 3 things the designer intended? If not, the visual hierarchy is lying.)
- "If I had to describe this in one word: **[word]**." (gut verdict)
**Narration mode:** Write this section in first person, as if you are a user scanning the page for the first time. "I'm looking at this page... my eye goes to the logo, then a wall of text I skip entirely, then... wait, is that a button?" Name the specific element, its position, its visual weight. If you can't name it specifically, you're not actually scanning, you're generating platitudes.
**Page Area Test:** Point at each clearly defined area of the page. Can you instantly name its purpose? ("Things I can buy," "Today's deals," "How to search.") Areas you can't name in 2 seconds are poorly defined. List them.
This is the section users read first. Be opinionated. A designer doesn't hedge — they react.
---
## Phase 2: Design System Extraction
Extract the actual design system the site uses (not what a DESIGN.md says, but what's rendered):
\`\`\`bash
# Fonts in use (capped at 500 elements to avoid timeout)
$B js "JSON.stringify([...new Set([...document.querySelectorAll('*')].slice(0,500).map(e => getComputedStyle(e).fontFamily))])"
# Color palette in use
$B js "JSON.stringify([...new Set([...document.querySelectorAll('*')].slice(0,500).flatMap(e => [getComputedStyle(e).color, getComputedStyle(e).backgroundColor]).filter(c => c !== 'rgba(0, 0, 0, 0)'))])"
# Heading hierarchy
$B js "JSON.stringify([...document.querySelectorAll('h1,h2,h3,h4,h5,h6')].map(h => ({tag:h.tagName, text:h.textContent.trim().slice(0,50), size:getComputedStyle(h).fontSize, weight:getComputedStyle(h).fontWeight})))"
# Touch target audit (find undersized interactive elements)
$B js "JSON.stringify([...document.querySelectorAll('a,button,input,[role=button]')].filter(e => {const r=e.getBoundingClientRect(); return r.width>0 && (r.width<44||r.height<44)}).map(e => ({tag:e.tagName, text:(e.textContent||'').trim().slice(0,30), w:Math.round(e.getBoundingClientRect().width), h:Math.round(e.getBoundingClientRect().height)})).slice(0,20))"
# Performance baseline
$B perf
\`\`\`
Structure findings as an **Inferred Design System**:
- **Fonts:** list with usage counts. Flag if >3 distinct font families.
- **Colors:** palette extracted. Flag if >12 unique non-gray colors. Note warm/cool/mixed.
- **Heading Scale:** h1-h6 sizes. Flag skipped levels, non-systematic size jumps.
- **Spacing Patterns:** sample padding/margin values. Flag non-scale values.
After extraction, offer: *"Want me to save this as your DESIGN.md? I can lock in these observations as your project's design system baseline."*
---
## Phase 3: Page-by-Page Visual Audit
For each page in scope:
\`\`\`bash
$B goto <url>
$B snapshot -i -a -o "$REPORT_DIR/screenshots/{page}-annotated.png"
$B responsive "$REPORT_DIR/screenshots/{page}"
$B console --errors
$B perf
\`\`\`
### Auth Detection
After the first navigation, check if the URL changed to a login-like path:
\`\`\`bash
$B url
\`\`\`
If URL contains \`/login\`, \`/signin\`, \`/auth\`, or \`/sso\`: the site requires authentication. AskUserQuestion: "This site requires authentication. Want to import cookies from your browser? Run \`/setup-browser-cookies\` first if needed."
### Trunk Test (run on every page)
Imagine being dropped on this page with no context. Can you immediately answer:
1. What site is this? (Site ID visible and identifiable)
2. What page am I on? (Page name prominent, matches what I clicked)
3. What are the major sections? (Primary nav visible and clear)
4. What are my options at this level? (Local nav or content choices obvious)
5. Where am I in the scheme of things? ("You are here" indicator, breadcrumbs)
6. How can I search? (Search box findable without hunting)
Score: PASS (all 6 clear) / PARTIAL (4-5 clear) / FAIL (3 or fewer clear).
A FAIL on the trunk test is a HIGH-impact finding regardless of how polished the visual design is.
### Design Audit Checklist (10 categories, ~80 items)
Apply these at each page. Each finding gets an impact rating (high/medium/polish) and category.
**1. Visual Hierarchy & Composition** (8 items)
- Clear focal point? One primary CTA per view?
- Eye flows naturally top-left to bottom-right?
- Visual noise — competing elements fighting for attention?
- Information density appropriate for content type?
- Z-index clarity — nothing unexpectedly overlapping?
- Above-the-fold content communicates purpose in 3 seconds?
- Squint test: hierarchy still visible when blurred?
- White space is intentional, not leftover?
**2. Typography** (15 items)
- Font count <=3 (flag if more)
- Scale follows ratio (1.25 major third or 1.333 perfect fourth)
- Line-height: 1.5x body, 1.15-1.25x headings
- Measure: 45-75 chars per line (66 ideal)
- Heading hierarchy: no skipped levels (h1→h3 without h2)
- Weight contrast: >=2 weights used for hierarchy
- No blacklisted fonts (Papyrus, Comic Sans, Lobster, Impact, Jokerman)
- If primary font is Inter/Roboto/Open Sans/Poppins → flag as potentially generic
- \`text-wrap: balance\` or \`text-pretty\` on headings (check via \`$B css <heading> text-wrap\`)
- Curly quotes used, not straight quotes
- Ellipsis character (\`…\`) not three dots (\`...\`)
- \`font-variant-numeric: tabular-nums\` on number columns
- Body text >= 16px
- Caption/label >= 12px
- No letterspacing on lowercase text
**3. Color & Contrast** (10 items)
- Palette coherent (<=12 unique non-gray colors)
- WCAG AA: body text 4.5:1, large text (24px+, or ~18.5px bold) 3:1, UI components 3:1 (see the contrast spot-check after this list)
- Semantic colors consistent (success=green, error=red, warning=yellow/amber)
- No color-only encoding (always add labels, icons, or patterns)
- Dark mode: surfaces use elevation, not just lightness inversion
- Dark mode: text off-white (~#E0E0E0), not pure white
- Primary accent desaturated 10-20% in dark mode
- \`color-scheme: dark\` on html element (if dark mode present)
- No red/green only combinations (8% of men have red-green deficiency)
- Neutral palette is warm or cool consistently — not mixed
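A rough automated spot-check for the contrast items above. It assumes solid element backgrounds (transparent and semi-transparent backgrounds are skipped or mis-scored), so treat hits as leads to verify, not verdicts:
\`\`\`bash
# Low-contrast text candidates (WCAG relative luminance; solid backgrounds only)
$B js "JSON.stringify([...document.querySelectorAll('p,li,a,button,span,td,th,label')].slice(0,300).map(e=>{const t=(e.textContent||'').trim();if(t==='')return null;const cs=getComputedStyle(e);const parse=c=>{const v=c.match(/[0-9.]+/g);return v&&v.length>=3?v.map(Number):null};const bg=parse(cs.backgroundColor);if(bg===null||(bg.length===4&&bg[3]===0))return null;const fg=parse(cs.color);if(fg===null)return null;const lum=a=>{const m=a.slice(0,3).map(x=>x/255).map(x=>x<=0.03928?x/12.92:Math.pow((x+0.055)/1.055,2.4));return 0.2126*m[0]+0.7152*m[1]+0.0722*m[2]};const ratio=(Math.max(lum(fg),lum(bg))+0.05)/(Math.min(lum(fg),lum(bg))+0.05);return ratio<4.5?{tag:e.tagName,text:t.slice(0,30),ratio:Math.round(ratio*100)/100}:null}).filter(Boolean).slice(0,20))"
\`\`\`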
**4. Spacing & Layout** (12 items)
- Grid consistent at all breakpoints
- Spacing uses a scale (4px or 8px base), not arbitrary values
- Alignment is consistent — nothing floats outside the grid
- Rhythm: related items closer together, distinct sections further apart
- Border-radius hierarchy (not uniform bubbly radius on everything)
- Inner radius = outer radius - gap (nested elements)
- No horizontal scroll on mobile (see the overflow check after this list)
- Max content width set (no full-bleed body text)
- \`env(safe-area-inset-*)\` for notch devices
- URL reflects state (filters, tabs, pagination in query params)
- Flex/grid used for layout (not JS measurement)
- Breakpoints: mobile (375), tablet (768), desktop (1024), wide (1440)
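A quick check for the horizontal-scroll item above. It is only meaningful at the viewport you are auditing, so pair it with the \`$B responsive\` screenshots rather than trusting a desktop-width run:
\`\`\`bash
# Horizontal overflow check (overflow: true = content wider than the viewport)
$B js "JSON.stringify({docWidth:document.documentElement.scrollWidth,viewport:window.innerWidth,overflow:document.documentElement.scrollWidth>window.innerWidth})"
\`\`\`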
**5. Interaction States** (11 items)
- Hover state on all interactive elements
- \`focus-visible\` ring present (never \`outline: none\` without replacement)
- Active/pressed state with depth effect or color shift
- Disabled state: reduced opacity + \`cursor: not-allowed\`
- Loading: skeleton shapes match real content layout
- Empty states: warm message + primary action + visual (not just "No items.")
- Error messages: specific + include fix/next step
- Success: confirmation animation or color, auto-dismiss
- Touch targets >= 44px on all interactive elements
- \`cursor: pointer\` on all clickable elements
- Mindless choice audit: every decision point (button, link, dropdown, modal choice) is a mindless click (obvious what happens). If a click requires thought about whether it's the right choice, flag as HIGH.
**6. Responsive Design** (8 items)
- Mobile layout makes *design* sense (not just stacked desktop columns)
- Touch targets sufficient on mobile (>= 44px)
- No horizontal scroll on any viewport
- Images handle responsive (srcset, sizes, or CSS containment)
- Text readable without zooming on mobile (>= 16px body)
- Navigation collapses appropriately (hamburger, bottom nav, etc.)
- Forms usable on mobile (correct input types, no autoFocus on mobile)
- No \`user-scalable=no\` or \`maximum-scale=1\` in viewport meta
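A quick way to check that last item (an empty array means the viewport meta tag is missing, which is its own finding):
\`\`\`bash
# Viewport meta content (inspect for user-scalable=no / maximum-scale=1)
$B js "JSON.stringify([...document.querySelectorAll('meta[name=viewport]')].map(m=>m.content))"
\`\`\`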
**7. Motion & Animation** (6 items)
- Easing: ease-out for entering, ease-in for exiting, ease-in-out for moving
- Duration: 50-700ms range (nothing slower unless page transition)
- Purpose: every animation communicates something (state change, attention, spatial relationship)
- \`prefers-reduced-motion\` respected (check: \`$B js "matchMedia('(prefers-reduced-motion: reduce)').matches"\`)
- No \`transition: all\` — properties listed explicitly (a stylesheet scan sketch follows this list)
- Only \`transform\` and \`opacity\` animated (not layout properties like width, height, top, left)
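A rough scan for the \`transition: all\` item above. It only sees same-origin stylesheets (cross-origin sheets throw and are skipped) and also flags shorthand declarations that expand to \`all\`, so verify hits manually:
\`\`\`bash
# Selectors whose declared transition-property resolves to 'all'
$B js "JSON.stringify([...document.styleSheets].flatMap(s=>{try{return [...s.cssRules]}catch(e){return []}}).filter(r=>r.style&&r.style.transitionProperty==='all').map(r=>r.selectorText).slice(0,10))"
\`\`\`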
**8. Content & Microcopy** (11 items)
- Empty states designed with warmth (message + action + illustration/icon)
- Error messages specific: what happened + why + what to do next
- Button labels specific ("Save API Key" not "Continue" or "Submit")
- No placeholder/lorem ipsum text visible in production
- Truncation handled (\`text-overflow: ellipsis\`, \`line-clamp\`, or \`break-words\`)
- Active voice ("Install the CLI" not "The CLI will be installed")
- Loading states end with \`…\` ("Saving…" not "Saving...")
- Destructive actions have confirmation modal or undo window
- Happy talk detection: scan for introductory paragraphs that start with "Welcome to..." or tell users how great the site is. If you can hear "blah blah blah", it's happy talk. Flag for removal.
- Instructions detection: any visible instructions longer than one sentence. If users need to read instructions, the design has failed. Flag the instructions AND the interaction they're compensating for.
- Happy talk word count: count total visible words on the page. Classify each text block as "useful content" vs "happy talk" (welcome paragraphs, self-congratulatory text, instructions nobody reads). Report: "This page has X words. Y (Z%) are happy talk."
**9. AI Slop Detection** (10 anti-patterns — the blacklist)
The test: would a human designer at a respected studio ever ship this?
${AI_SLOP_BLACKLIST.map(item => `- ${item}`).join('\n')}
**10. Performance as Design** (6 items)
- LCP < 2.0s (web apps), < 1.5s (informational sites)
- CLS < 0.1 (no visible layout shifts during load)
- Skeleton quality: shapes match real content layout, shimmer animation
- Images: \`loading="lazy"\`, width/height dimensions set, WebP/AVIF format (see the spot-check after this list)
- Fonts: \`font-display: swap\`, preconnect to CDN origins
- No visible font swap flash (FOUT) — critical fonts preloaded
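A spot-check for the image items above. Treat \`lazy: false\` as a lead only: the LCP/hero image should NOT be lazy-loaded, so judge each hit in context:
\`\`\`bash
# Images missing lazy loading or explicit width/height attributes
$B js "JSON.stringify([...document.images].map(i=>({src:(i.currentSrc||i.src).split('/').pop().slice(0,40),lazy:i.loading==='lazy',sized:i.hasAttribute('width')&&i.hasAttribute('height')})).filter(r=>r.lazy===false||r.sized===false).slice(0,20))"
\`\`\`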
---
## Phase 4: Interaction Flow Review
Walk 2-3 key user flows and evaluate the *feel*, not just the function:
\`\`\`bash
$B snapshot -i
$B click @e3 # perform action
$B snapshot -D # diff to see what changed
\`\`\`
Evaluate:
- **Response feel:** Does clicking feel responsive? Any delays or missing loading states?
- **Transition quality:** Are transitions intentional or generic/absent?
- **Feedback clarity:** Did the action clearly succeed or fail? Is the feedback immediate?
- **Form polish:** Focus states visible? Validation timing correct? Errors near the source?
**Narration mode:** Narrate the flow in first person. "I click 'Sign Up'... spinner appears... 3 seconds pass... still spinning... I'm getting nervous. Finally the dashboard loads, but where am I? The nav doesn't highlight anything." Name the specific element, its position, its visual weight. If you can't name it specifically, you're not actually experiencing the flow, you're generating platitudes.
### Goodwill Reservoir (track across the flow)
As you walk the user flow, maintain a mental goodwill meter (starts at 70/100).
These scores are heuristic, not measured. The value is in identifying specific
drains and fills, not in the final number.
Subtract points for:
- Hidden information the user would want (pricing, contact, shipping): subtract 15
- Format punishment (rejecting valid input like dashes in phone numbers): subtract 10
- Unnecessary information requests: subtract 10
- Interstitials, splash screens, forced tours blocking the task: subtract 15
- Sloppy or unprofessional appearance: subtract 10
- Ambiguous choices that require thinking: subtract 5 each
Add points for:
- Top user tasks are obvious and prominent: add 10
- Upfront about costs and limitations: add 5
- Saves steps (direct links, smart defaults, autofill): add 5 each
- Graceful error recovery with specific fix instructions: add 10
- Apologizes when things go wrong: add 5
Report the final goodwill score with a visual dashboard:
\`\`\`
Goodwill: 70 ████████████████████░░░░░░░░░░
Step 1: Login page 70 → 75 (+5 obvious primary action)
Step 2: Dashboard 75 → 60 (-15 interstitial tour popup)
Step 3: Settings 60 → 50 (-10 format punishment on phone)
Step 4: Billing 50 → 35 (-15 hidden pricing info)
FINAL: 35/100 ⚠️ NEEDS WORK
\`\`\`
Below 30 = critical UX debt. 30-60 = needs work. Above 60 = healthy.
Include the biggest drains and fills as specific findings.
---
## Phase 5: Cross-Page Consistency
Compare screenshots and observations across pages for:
- Navigation bar consistent across all pages?
- Footer consistent?
- Component reuse vs one-off designs (same button styled differently on different pages?)
- Tone consistency (one page playful while another is corporate?)
- Spacing rhythm carries across pages?
---
## Phase 6: Compile Report
### Output Locations
**Local:** \`.gstack/design-reports/design-audit-{domain}-{YYYY-MM-DD}.md\`
**Project-scoped:**
\`\`\`bash
eval "$(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)" && mkdir -p ~/.gstack/projects/$SLUG
\`\`\`
Write to: \`~/.gstack/projects/{slug}/{user}-{branch}-design-audit-{datetime}.md\`
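A sketch of assembling that path (the datetime format is illustrative, and branch names containing slashes may need sanitizing; \`$SLUG\` comes from the \`gstack-slug\` eval above):
\`\`\`bash
_USER=$(whoami)
_BRANCH=$(git branch --show-current 2>/dev/null)
REPORT_PATH=~/.gstack/projects/$SLUG/$_USER-$_BRANCH-design-audit-$(date +%Y-%m-%d-%H%M).md
echo "REPORT_PATH: $REPORT_PATH"
\`\`\`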
**Baseline:** Write \`design-baseline.json\` for regression mode:
\`\`\`json
{
"date": "YYYY-MM-DD",
"url": "<target>",
"designScore": "B",
"aiSlopScore": "C",
"categoryGrades": { "hierarchy": "A", "typography": "B", ... },
"findings": [{ "id": "FINDING-001", "title": "...", "impact": "high", "category": "typography" }]
}
\`\`\`
### Scoring System
**Dual headline scores:**
- **Design Score: {A-F}** — weighted average of all 10 categories
- **AI Slop Score: {A-F}** — standalone grade with pithy verdict
**Per-category grades:**
- **A:** Intentional, polished, delightful. Shows design thinking.
- **B:** Solid fundamentals, minor inconsistencies. Looks professional.
- **C:** Functional but generic. No major problems, no design point of view.
- **D:** Noticeable problems. Feels unfinished or careless.
- **F:** Actively hurting user experience. Needs significant rework.
**Grade computation:** Each category starts at A. Each High-impact finding drops one letter grade. Each Medium-impact finding drops half a letter grade. Polish findings are noted but do not affect the grade. Minimum is F. For example, a category with one High and two Medium findings drops from A to C (one full letter plus two half letters).
**Category weights for Design Score:**
| Category | Weight |
|----------|--------|
| Visual Hierarchy | 15% |
| Typography | 15% |
| Spacing & Layout | 15% |
| Color & Contrast | 10% |
| Interaction States | 10% |
| Responsive | 10% |
| Content Quality | 10% |
| AI Slop | 5% |
| Motion | 5% |
| Performance Feel | 5% |
AI Slop is 5% of Design Score but also graded independently as a headline metric.
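One way to turn per-category letters into the weighted Design Score (a sketch only; the skill does not mandate a specific letter-to-number mapping, so A=4 through F=0 here is an assumption):
\`\`\`bash
# Assumed mapping A=4, B=3, C=2, D=1, F=0, then weighted average.
# Example: Typography at C (2), all other categories at A (4)
awk 'BEGIN { print 0.85*4 + 0.15*2 }'   # 3.7 on a 4-point scale, roughly an A-
\`\`\`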
### Regression Output
When previous \`design-baseline.json\` exists or \`--regression\` flag is used:
- Load baseline grades
- Compare: per-category deltas, new findings, resolved findings
- Append regression table to report
---
## Design Critique Format
Use structured feedback, not opinions:
- "I notice..." — observation (e.g., "I notice the primary CTA competes with the secondary action")
- "I wonder..." — question (e.g., "I wonder if users will understand what 'Process' means here")
- "What if..." — suggestion (e.g., "What if we moved search to a more prominent position?")
- "I think... because..." — reasoned opinion (e.g., "I think the spacing between sections is too uniform because it doesn't create hierarchy")
Tie everything to user goals and product objectives. Always suggest specific improvements alongside problems.
---
## Important Rules
1. **Think like a designer, not a QA engineer.** You care whether things feel right, look intentional, and respect the user. You do NOT just care whether things "work."
2. **Screenshots are evidence.** Every finding needs at least one screenshot. Use annotated screenshots (\`snapshot -a\`) to highlight elements.
3. **Be specific and actionable.** "Change X to Y because Z" — not "the spacing feels off."
4. **Never read source code.** Evaluate the rendered site, not the implementation. (Exception: offer to write DESIGN.md from extracted observations.)
5. **AI Slop detection is your superpower.** Most developers can't evaluate whether their site looks AI-generated. You can. Be direct about it.
6. **Quick wins matter.** Always include a "Quick Wins" section — the 3-5 highest-impact fixes that take <30 minutes each.
7. **Use \`snapshot -C\` for tricky UIs.** Finds clickable divs that the accessibility tree misses.
8. **Responsive is design, not just "not broken."** A stacked desktop layout on mobile is not responsive design — it's lazy. Evaluate whether the mobile layout makes *design* sense.
9. **Document incrementally.** Write each finding to the report as you find it. Don't batch.
10. **Depth over breadth.** 5-10 well-documented findings with screenshots and specific suggestions > 20 vague observations.
11. **Show screenshots to the user.** After every \`$B screenshot\`, \`$B snapshot -a -o\`, or \`$B responsive\` command, use the Read tool on the output file(s) so the user can see them inline. For \`responsive\` (3 files), Read all three. This is critical — without it, screenshots are invisible to the user.`;
}
export function generateDesignSketch(_ctx: TemplateContext): string {
return `## Visual Sketch (UI ideas only)
If the chosen approach involves user-facing UI (screens, pages, forms, dashboards,
or interactive elements), generate a rough wireframe to help the user visualize it.
If the idea is backend-only, infrastructure, or has no UI component — skip this
section silently.
**Step 1: Gather design context**
1. Check if \`DESIGN.md\` exists in the repo root. If it does, read it for design
system constraints (colors, typography, spacing, component patterns). Use these
constraints in the wireframe.
2. Apply core design principles:
- **Information hierarchy** — what does the user see first, second, third?
- **Interaction states** — loading, empty, error, success, partial
- **Edge case paranoia** — what if the name is 47 chars? Zero results? Network fails?
- **Subtraction default** — "as little design as possible" (Rams). Every element earns its pixels.
- **Design for trust** — every interface element builds or erodes user trust.
**Step 2: Generate wireframe HTML**
Generate a single-page HTML file with these constraints:
- **Intentionally rough aesthetic** — use system fonts, thin gray borders, no color,
hand-drawn-style elements. This is a sketch, not a polished mockup.
- Self-contained — no external dependencies, no CDN links, inline CSS only
- Show the core interaction flow (1-3 screens/states max)
- Include realistic placeholder content (not "Lorem ipsum" — use content that
matches the actual use case)
- Add HTML comments explaining design decisions
Write to a temp file:
\`\`\`bash
SKETCH_FILE="/tmp/gstack-sketch-$(date +%s).html"
\`\`\`
**Step 3: Render and capture**
\`\`\`bash
$B goto "file://$SKETCH_FILE"
$B screenshot /tmp/gstack-sketch.png
\`\`\`
If \`$B\` is not available (browse binary not set up), skip the render step. Tell the
user: "Visual sketch requires the browse binary. Run the setup script to enable it."
**Step 4: Present and iterate**
Show the screenshot to the user. Ask: "Does this feel right? Want to iterate on the layout?"
If they want changes, regenerate the HTML with their feedback and re-render.
If they approve or say "good enough," proceed.
**Step 5: Include in design doc**
Reference the wireframe screenshot in the design doc's "Recommended Approach" section.
The screenshot file at \`/tmp/gstack-sketch.png\` can be referenced by downstream skills
(\`/plan-design-review\`, \`/design-review\`) to see what was originally envisioned.
**Step 6: Outside design voices** (optional)
After the wireframe is approved, offer outside design perspectives:
\`\`\`bash
which codex 2>/dev/null && echo "CODEX_AVAILABLE" || echo "CODEX_NOT_AVAILABLE"
\`\`\`
If Codex is available, use AskUserQuestion:
> "Want outside design perspectives on the chosen approach? Codex proposes a visual thesis, content plan, and interaction ideas. A Claude subagent proposes an alternative aesthetic direction."
>
> A) Yes — get outside design voices
> B) No — proceed without
If user chooses A, launch both voices simultaneously:
1. **Codex** (via Bash, \`model_reasoning_effort="medium"\`):
\`\`\`bash
TMPERR_SKETCH=$(mktemp /tmp/codex-sketch-XXXXXXXX)
_REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; }
codex exec "For this product approach, provide: a visual thesis (one sentence — mood, material, energy), a content plan (hero → support → detail → CTA), and 2 interaction ideas that change page feel. Apply beautiful defaults: composition-first, brand-first, cardless, poster not document. Be opinionated." -C "$_REPO_ROOT" -s read-only -c 'model_reasoning_effort="medium"' --enable web_search_cached < /dev/null 2>"$TMPERR_SKETCH"
\`\`\`
Use a 5-minute timeout (\`timeout: 300000\`). After completion: \`cat "$TMPERR_SKETCH" && rm -f "$TMPERR_SKETCH"\`
2. **Claude subagent** (via Agent tool):
"For this product approach, what design direction would you recommend? What aesthetic, typography, and interaction patterns fit? What would make this approach feel inevitable to the user? Be specific — font names, hex colors, spacing values."
Present Codex output under \`CODEX SAYS (design sketch):\` and subagent output under \`CLAUDE SUBAGENT (design direction):\`.
Error handling: all non-blocking. On failure, skip and continue.`;
}
export function generateDesignOutsideVoices(ctx: TemplateContext): string {
// Codex host: strip entirely — Codex should never invoke itself
if (ctx.host === 'codex') return '';
const rejectionList = OPENAI_HARD_REJECTIONS.map((item, i) => `${i + 1}. ${item}`).join('\n');
const litmusList = OPENAI_LITMUS_CHECKS.map((item, i) => `${i + 1}. ${item}`).join('\n');
// Skill-specific configuration
const isPlanDesignReview = ctx.skillName === 'plan-design-review';
const isDesignReview = ctx.skillName === 'design-review';
const isDesignConsultation = ctx.skillName === 'design-consultation';
// Determine opt-in behavior and reasoning effort
const isAutomatic = isDesignReview; // design-review runs automatically
const reasoningEffort = isDesignConsultation ? 'medium' : 'high'; // creative vs analytical
// Build skill-specific Codex prompt
let codexPrompt: string;
let subagentPrompt: string;
if (isPlanDesignReview) {
codexPrompt = `Read the plan file at [plan-file-path]. Evaluate this plan's UI/UX design against these criteria.
HARD REJECTION — flag if ANY apply:
${rejectionList}
LITMUS CHECKS — answer YES or NO for each:
${litmusList}
HARD RULES — first classify as MARKETING/LANDING PAGE vs APP UI vs HYBRID, then flag violations of the matching rule set:
- MARKETING: First viewport as one composition, brand-first hierarchy, full-bleed hero, 2-3 intentional motions, composition-first layout
- APP UI: Calm surface hierarchy, dense but readable, utility language, minimal chrome
- UNIVERSAL: CSS variables for colors, no default font stacks, one job per section, cards earn existence
For each finding: what's wrong, what will happen if it ships unresolved, and the specific fix. Be opinionated. No hedging.`;
subagentPrompt = `Read the plan file at [plan-file-path]. You are an independent senior product designer reviewing this plan. You have NOT seen any prior review. Evaluate:
1. Information hierarchy: what does the user see first, second, third? Is it right?
2. Missing states: loading, empty, error, success, partial — which are unspecified?
3. User journey: what's the emotional arc? Where does it break?
4. Specificity: does the plan describe SPECIFIC UI ("48px Söhne Bold header, #1a1a1a on white") or generic patterns ("clean modern card-based layout")?
5. What design decisions will haunt the implementer if left ambiguous?
For each finding: what's wrong, severity (critical/high/medium), and the fix.`;
} else if (isDesignReview) {
codexPrompt = `Review the frontend source code in this repo. Evaluate against these design hard rules:
- Spacing: systematic (design tokens / CSS variables) or magic numbers?
- Typography: expressive purposeful fonts or default stacks?
- Color: CSS variables with defined system, or hardcoded hex scattered?
- Responsive: breakpoints defined? calc(100svh - header) for heroes? Mobile tested?
- A11y: ARIA landmarks, alt text, contrast ratios, 44px touch targets?
- Motion: 2-3 intentional animations, or zero / ornamental only?
- Cards: used only when card IS the interaction? No decorative card grids?
First classify as MARKETING/LANDING PAGE vs APP UI vs HYBRID, then apply matching rules.
LITMUS CHECKS — answer YES/NO:
${litmusList}
HARD REJECTION — flag if ANY apply:
${rejectionList}
Be specific. Reference file:line for every finding.`;
subagentPrompt = `Review the frontend source code in this repo. You are an independent senior product designer doing a source-code design audit. Focus on CONSISTENCY PATTERNS across files rather than individual violations:
- Are spacing values systematic across the codebase?
- Is there ONE color system or scattered approaches?
- Do responsive breakpoints follow a consistent set?
- Is the accessibility approach consistent or spotty?
For each finding: what's wrong, severity (critical/high/medium), and the file:line.`;
} else if (isDesignConsultation) {
codexPrompt = `Given this product context, propose a complete design direction:
- Visual thesis: one sentence describing mood, material, and energy
- Typography: specific font names (not defaults — no Inter/Roboto/Arial/system) + hex colors
- Color system: CSS variables for background, surface, primary text, muted text, accent
- Layout: composition-first, not component-first. First viewport as poster, not document
- Differentiation: 2 deliberate departures from category norms
- Anti-slop: no purple gradients, no 3-column icon grids, no centered everything, no decorative blobs
Be opinionated. Be specific. Do not hedge. This is YOUR design direction — own it.`;
subagentPrompt = `Given this product context, propose a design direction that would SURPRISE. What would the cool indie studio do that the enterprise UI team wouldn't?
- Propose an aesthetic direction, typography stack (specific font names), color palette (hex values)
- 2 deliberate departures from category norms
- What emotional reaction should the user have in the first 3 seconds?
Be bold. Be specific. No hedging.`;
} else {
// Unknown skill — return empty
return '';
}
// Build the opt-in section
const optInSection = isAutomatic ? `
**Automatic:** Outside voices run automatically when Codex is available. No opt-in needed.` : `
Use AskUserQuestion:
> "Want outside design voices${isPlanDesignReview ? ' before the detailed review' : ''}? Codex evaluates against OpenAI's design hard rules + litmus checks; Claude subagent does an independent ${isDesignConsultation ? 'design direction proposal' : 'completeness review'}."
>
> A) Yes — run outside design voices
> B) No — proceed without
If user chooses B, skip this step and continue.`;
// Build the synthesis section
const synthesisSection = isPlanDesignReview ? `
**Synthesis — Litmus scorecard:**
\`\`\`
DESIGN OUTSIDE VOICES — LITMUS SCORECARD:
═══════════════════════════════════════════════════════════════
Check Claude Codex Consensus
─────────────────────────────────────── ─────── ─────── ─────────
1. Brand unmistakable in first screen? — — —
2. One strong visual anchor? — — —
3. Scannable by headlines only? — — —
4. Each section has one job? — — —
5. Cards actually necessary? — — —
6. Motion improves hierarchy? — — —
7. Premium without decorative shadows? — — —
─────────────────────────────────────── ─────── ─────── ─────────
Hard rejections triggered: — — —
═══════════════════════════════════════════════════════════════
\`\`\`
Fill in each cell from the Codex and subagent outputs. CONFIRMED = both agree. DISAGREE = models differ. NOT SPEC'D = not enough info to evaluate.
**Pass integration (respects existing 7-pass contract):**
- Hard rejections → raised as the FIRST items in Pass 1, tagged \`[HARD REJECTION]\`
- Litmus DISAGREE items → raised in the relevant pass with both perspectives
- Litmus CONFIRMED failures → pre-loaded as known issues in the relevant pass
- Passes can skip discovery and go straight to fixing for pre-identified issues` :
isDesignConsultation ? `
**Synthesis:** Claude main references both Codex and subagent proposals in the Phase 3 proposal. Present:
- Areas of agreement between all three voices (Claude main + Codex + subagent)
- Genuine divergences as creative alternatives for the user to choose from
- "Codex and I agree on X. Codex suggested Y where I'm proposing Z — here's why..."` : `
**Synthesis — Litmus scorecard:**
Use the same scorecard format as /plan-design-review (shown above). Fill in from both outputs.
Merge findings into the triage with \`[codex]\` / \`[subagent]\` / \`[cross-model]\` tags.`;
const escapedCodexPrompt = codexPrompt.replace(/`/g, '\\`').replace(/\$/g, '\\$');
return `## Design Outside Voices (parallel)
${optInSection}
**Check Codex availability:**
\`\`\`bash
which codex 2>/dev/null && echo "CODEX_AVAILABLE" || echo "CODEX_NOT_AVAILABLE"
\`\`\`
**If Codex is available**, launch both voices simultaneously:
1. **Codex design voice** (via Bash):
\`\`\`bash
TMPERR_DESIGN=$(mktemp /tmp/codex-design-XXXXXXXX)
_REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; }
codex exec "${escapedCodexPrompt}" -C "$_REPO_ROOT" -s read-only -c 'model_reasoning_effort="${reasoningEffort}"' --enable web_search_cached < /dev/null 2>"$TMPERR_DESIGN"
\`\`\`
Use a 5-minute timeout (\`timeout: 300000\`). After the command completes, read stderr:
\`\`\`bash
cat "$TMPERR_DESIGN" && rm -f "$TMPERR_DESIGN"
\`\`\`
2. **Claude design subagent** (via Agent tool):
Dispatch a subagent with this prompt:
"${subagentPrompt}"
**Error handling (all non-blocking):**
- **Auth failure:** If stderr contains "auth", "login", "unauthorized", or "API key": "Codex authentication failed. Run \`codex login\` to authenticate."
- **Timeout:** "Codex timed out after 5 minutes."
- **Empty response:** "Codex returned no response."
- On any Codex error: proceed with Claude subagent output only, tagged \`[single-model]\`.
- If Claude subagent also fails: "Outside voices unavailable — continuing with primary review."
Present Codex output under a \`CODEX SAYS (design ${isPlanDesignReview ? 'critique' : isDesignReview ? 'source audit' : 'direction'}):\` header.
Present subagent output under a \`CLAUDE SUBAGENT (design ${isPlanDesignReview ? 'completeness' : isDesignReview ? 'consistency' : 'direction'}):\` header.
${synthesisSection}
**Log the result:**
\`\`\`bash
${ctx.paths.binDir}/gstack-review-log '{"skill":"design-outside-voices","timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'","status":"STATUS","source":"SOURCE","commit":"'"$(git rev-parse --short HEAD)"'"}'
\`\`\`
Replace STATUS with "clean" or "issues_found", SOURCE with "codex+subagent", "codex-only", "subagent-only", or "unavailable".`;
}
// ─── Design Hard Rules (OpenAI framework + gstack slop blacklist) ───
export function generateDesignHardRules(_ctx: TemplateContext): string {
const slopItems = AI_SLOP_BLACKLIST.map((item, i) => `${i + 1}. ${item}`).join('\n');
const rejectionItems = OPENAI_HARD_REJECTIONS.map((item, i) => `${i + 1}. ${item}`).join('\n');
const litmusItems = OPENAI_LITMUS_CHECKS.map((item, i) => `${i + 1}. ${item}`).join('\n');
return `### Design Hard Rules
**Classifier — determine rule set before evaluating:**
- **MARKETING/LANDING PAGE** (hero-driven, brand-forward, conversion-focused) → apply Landing Page Rules
- **APP UI** (workspace-driven, data-dense, task-focused: dashboards, admin, settings) → apply App UI Rules
- **HYBRID** (marketing shell with app-like sections) → apply Landing Page Rules to hero/marketing sections, App UI Rules to functional sections
**Hard rejection criteria** (instant-fail patterns — flag if ANY apply):
${rejectionItems}
**Litmus checks** (answer YES/NO for each — used for cross-model consensus scoring):
${litmusItems}
**Landing page rules** (apply when classifier = MARKETING/LANDING):
- First viewport reads as one composition, not a dashboard
- Brand-first hierarchy: brand > headline > body > CTA
- Typography: expressive, purposeful — no default stacks (Inter, Roboto, Arial, system)
- No flat single-color backgrounds — use gradients, images, subtle patterns
- Hero: full-bleed, edge-to-edge, no inset/tiled/rounded variants
- Hero budget: brand, one headline, one supporting sentence, one CTA group, one image
- No cards in hero. Cards only when card IS the interaction
- One job per section: one purpose, one headline, one short supporting sentence
- Motion: 2-3 intentional motions minimum (entrance, scroll-linked, hover/reveal)
- Color: define CSS variables, avoid purple-on-white defaults, one accent color default
- Copy: product language not design commentary. "If deleting 30% improves it, keep deleting"
- Beautiful defaults: composition-first, brand as loudest text, two typefaces max, cardless by default, first viewport as poster not document
**App UI rules** (apply when classifier = APP UI):
- Calm surface hierarchy, strong typography, few colors
- Dense but readable, minimal chrome
- Organize: primary workspace, navigation, secondary context, one accent
- Avoid: dashboard-card mosaics, thick borders, decorative gradients, ornamental icons
- Copy: utility language — orientation, status, action. Not mood/brand/aspiration
- Cards only when card IS the interaction
- Section headings state what area is or what user can do ("Selected KPIs", "Plan status")
**Universal rules** (apply to ALL types):
- Define CSS variables for color system
- No default font stacks (Inter, Roboto, Arial, system)
- One job per section
- "If deleting 30% of the copy improves it, keep deleting"
- Cards earn their existence — no decorative card grids
- NEVER use small, low-contrast type (body text < 16px or contrast ratio < 4.5:1 on body text)
- NEVER put labels inside form fields as the only label (placeholder-as-label pattern — labels must be visible when the field has content)
- ALWAYS preserve visited vs unvisited link distinction (visited links must have a different color)
- NEVER float headings between paragraphs (heading must be visually closer to the section it introduces than to the preceding section)
**AI Slop blacklist** (the 10 patterns that scream "AI-generated"):
${slopItems}
Source: [OpenAI "Designing Delightful Frontends with GPT-5.4"](https://developers.openai.com/blog/designing-delightful-frontends-with-gpt-5-4) (Mar 2026) + gstack design methodology.`;
}
export function generateDesignSetup(ctx: TemplateContext): string {
return `## DESIGN SETUP (run this check BEFORE any design mockup command)
\`\`\`bash
_ROOT=$(git rev-parse --show-toplevel 2>/dev/null)
D=""
[ -n "$_ROOT" ] && [ -x "$_ROOT/${ctx.paths.localSkillRoot}/design/dist/design" ] && D="$_ROOT/${ctx.paths.localSkillRoot}/design/dist/design"
[ -z "$D" ] && D="$HOME${ctx.paths.designDir.replace(/^~/, '')}/design"
if [ -x "$D" ]; then
echo "DESIGN_READY: $D"
else
echo "DESIGN_NOT_AVAILABLE"
fi
B=""
[ -n "$_ROOT" ] && [ -x "$_ROOT/${ctx.paths.localSkillRoot}/browse/dist/browse" ] && B="$_ROOT/${ctx.paths.localSkillRoot}/browse/dist/browse"
[ -z "$B" ] && B="$HOME${ctx.paths.browseDir.replace(/^~/, '')}/browse"
if [ -x "$B" ]; then
echo "BROWSE_READY: $B"
else
echo "BROWSE_NOT_AVAILABLE (will use 'open' to view comparison boards)"
fi
\`\`\`
If \`DESIGN_NOT_AVAILABLE\`: skip visual mockup generation and fall back to the
existing HTML wireframe approach (\`DESIGN_SKETCH\`). Design mockups are a
progressive enhancement, not a hard requirement.
If \`BROWSE_NOT_AVAILABLE\`: use \`open file://...\` instead of \`$B goto\` to open
comparison boards. The user just needs to see the HTML file in any browser.
If \`DESIGN_READY\`: the design binary is available for visual mockup generation.
Commands:
- \`$D generate --brief "..." --output /path.png\` — generate a single mockup
- \`$D variants --brief "..." --count 3 --output-dir /path/\` — generate N style variants
- \`$D compare --images "a.png,b.png,c.png" --output /path/board.html --serve\` — comparison board + HTTP server
- \`$D serve --html /path/board.html\` — serve comparison board and collect feedback via HTTP
- \`$D check --image /path.png --brief "..."\` — vision quality gate
- \`$D iterate --session /path/session.json --feedback "..." --output /path.png\` — iterate
**CRITICAL PATH RULE:** All design artifacts (mockups, comparison boards, approved.json)
MUST be saved to \`~/.gstack/projects/$SLUG/designs/\`, NEVER to \`.context/\`,
\`docs/designs/\`, \`/tmp/\`, or any project-local directory. Design artifacts are USER
data, not project files. They persist across branches, conversations, and workspaces.`;
}
export function generateDesignMockup(ctx: TemplateContext): string {
return `## Visual Design Exploration
\`\`\`bash
_ROOT=$(git rev-parse --show-toplevel 2>/dev/null)
D=""
[ -n "$_ROOT" ] && [ -x "$_ROOT/${ctx.paths.localSkillRoot}/design/dist/design" ] && D="$_ROOT/${ctx.paths.localSkillRoot}/design/dist/design"
[ -z "$D" ] && D="$HOME${ctx.paths.designDir.replace(/^~/, '')}/design"
[ -x "$D" ] && echo "DESIGN_READY" || echo "DESIGN_NOT_AVAILABLE"
\`\`\`
**If \`DESIGN_NOT_AVAILABLE\`:** Fall back to the HTML wireframe approach below
(the existing DESIGN_SKETCH section). Visual mockups require the design binary.
**If \`DESIGN_READY\`:** Generate visual mockup explorations for the user.
Generating visual mockups of the proposed design... (say "skip" if you don't need visuals)
**Step 1: Set up the design directory**
\`\`\`bash
eval "$(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)"
_DESIGN_DIR="$HOME/.gstack/projects/$SLUG/designs/mockup-$(date +%Y%m%d)"
mkdir -p "$_DESIGN_DIR"
echo "DESIGN_DIR: $_DESIGN_DIR"
\`\`\`
**Step 2: Construct the design brief**
Read DESIGN.md if it exists — use it to constrain the visual style. If no DESIGN.md,
explore wide across diverse directions.
**Step 3: Generate 3 variants**
\`\`\`bash
$D variants --brief "<assembled brief>" --count 3 --output-dir "$_DESIGN_DIR/"
\`\`\`
This generates 3 style variations of the same brief (~40 seconds total).
**Step 4: Show variants inline, then open comparison board**
Show each variant to the user inline first (read the PNGs with Read tool), then
create and serve the comparison board:
\`\`\`bash
$D compare --images "$_DESIGN_DIR/variant-A.png,$_DESIGN_DIR/variant-B.png,$_DESIGN_DIR/variant-C.png" --output "$_DESIGN_DIR/design-board.html" --serve
\`\`\`
This opens the board in the user's default browser and blocks until feedback is
received. Read stdout for the structured JSON result. No polling needed.
If \`$D serve\` is not available or fails, fall back to AskUserQuestion:
"I've opened the design board. Which variant do you prefer? Any feedback?"
**Step 5: Handle feedback**
If the JSON contains \`"regenerated": true\`:
1. Read \`regenerateAction\` (or \`remixSpec\` for remix requests)
2. Generate new variants with \`$D iterate\` or \`$D variants\` using updated brief
3. Create new board with \`$D compare\`
4. POST the new HTML to the running server via \`curl -X POST http://127.0.0.1:PORT/api/reload -H 'Content-Type: application/json' -d '{"html":"'"$_DESIGN_DIR"'/design-board.html"}'\`
(parse the port from stderr: look for \`SERVE_STARTED: port=XXXXX\`)
5. Board auto-refreshes in the same tab
If \`"regenerated": false\`: proceed with the approved variant.
**Step 6: Save approved choice**
\`\`\`bash
echo '{"approved_variant":"<VARIANT>","feedback":"<FEEDBACK>","date":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","screen":"mockup","branch":"'$(git branch --show-current 2>/dev/null)'"}' > "$_DESIGN_DIR/approved.json"
\`\`\`
Reference the saved mockup in the design doc or plan.`;
}
export function generateDesignShotgunLoop(_ctx: TemplateContext): string {
return `### Comparison Board + Feedback Loop
Create the comparison board and serve it over HTTP:
\`\`\`bash
$D compare --images "$_DESIGN_DIR/variant-A.png,$_DESIGN_DIR/variant-B.png,$_DESIGN_DIR/variant-C.png" --output "$_DESIGN_DIR/design-board.html" --serve
\`\`\`
This command generates the board HTML, starts an HTTP server on a random port,
and opens it in the user's default browser. **Run it in the background** with \`&\`
because the server needs to stay running while the user interacts with the board.
Parse the port from stderr output: \`SERVE_STARTED: port=XXXXX\`. You need this
for the board URL and for reloading during regeneration cycles.
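A sketch of that sequence (the \`serve.log\` name and the \`sleep 2\` startup wait are illustrative, not fixed conventions):
\`\`\`bash
# Background the serve command, capture stderr, then extract the port
$D compare --images "$_DESIGN_DIR/variant-A.png,$_DESIGN_DIR/variant-B.png,$_DESIGN_DIR/variant-C.png" --output "$_DESIGN_DIR/design-board.html" --serve 2>"$_DESIGN_DIR/serve.log" &
sleep 2
_PORT=$(grep -o 'port=[0-9]*' "$_DESIGN_DIR/serve.log" | head -1 | cut -d= -f2)
echo "BOARD_URL: http://127.0.0.1:$_PORT/"
\`\`\`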
**PRIMARY WAIT: AskUserQuestion with board URL**
After the board is serving, use AskUserQuestion to wait for the user. Include the
board URL so they can click it if they lost the browser tab:
"I've opened a comparison board with the design variants:
http://127.0.0.1:<PORT>/ — Rate them, leave comments, remix
elements you like, and click Submit when you're done. Let me know when you've
submitted your feedback (or paste your preferences here). If you clicked
Regenerate or Remix on the board, tell me and I'll generate new variants."
**Do NOT use AskUserQuestion to ask which variant the user prefers.** The comparison
board IS the chooser. AskUserQuestion is just the blocking wait mechanism.
**After the user responds to AskUserQuestion:**
Check for feedback files next to the board HTML:
- \`$_DESIGN_DIR/feedback.json\` — written when user clicks Submit (final choice)
- \`$_DESIGN_DIR/feedback-pending.json\` — written when user clicks Regenerate/Remix/More Like This
\`\`\`bash
if [ -f "$_DESIGN_DIR/feedback.json" ]; then
echo "SUBMIT_RECEIVED"
cat "$_DESIGN_DIR/feedback.json"
elif [ -f "$_DESIGN_DIR/feedback-pending.json" ]; then
echo "REGENERATE_RECEIVED"
cat "$_DESIGN_DIR/feedback-pending.json"
rm "$_DESIGN_DIR/feedback-pending.json"
else
echo "NO_FEEDBACK_FILE"
fi
\`\`\`
The feedback JSON has this shape:
\`\`\`json
{
"preferred": "A",
"ratings": { "A": 4, "B": 3, "C": 2 },
"comments": { "A": "Love the spacing" },
"overall": "Go with A, bigger CTA",
"regenerated": false
}
\`\`\`
**If \`feedback.json\` found:** The user clicked Submit on the board.
Read \`preferred\`, \`ratings\`, \`comments\`, \`overall\` from the JSON. Proceed with
the approved variant.
**If \`feedback-pending.json\` found:** The user clicked Regenerate/Remix on the board.
1. Read \`regenerateAction\` from the JSON (\`"different"\`, \`"match"\`, \`"more_like_B"\`,
\`"remix"\`, or custom text)
2. If \`regenerateAction\` is \`"remix"\`, read \`remixSpec\` (e.g. \`{"layout":"A","colors":"B"}\`)
3. Generate new variants with \`$D iterate\` or \`$D variants\` using updated brief
4. Create new board: \`$D compare --images "..." --output "$_DESIGN_DIR/design-board.html"\`
5. Reload the board in the user's browser (same tab):
\`curl -s -X POST http://127.0.0.1:PORT/api/reload -H 'Content-Type: application/json' -d '{"html":"'"$_DESIGN_DIR"'/design-board.html"}'\`
6. The board auto-refreshes. **AskUserQuestion again** with the same board URL to
wait for the next round of feedback. Repeat until \`feedback.json\` appears.
**If \`NO_FEEDBACK_FILE\`:** The user typed their preferences directly in the
AskUserQuestion response instead of using the board. Use their text response
as the feedback.
**POLLING FALLBACK:** Only use polling if \`$D serve\` fails (no port available).
In that case, show each variant inline using the Read tool (so the user can see them),
then use AskUserQuestion:
"The comparison board server failed to start. I've shown the variants above.
Which do you prefer? Any feedback?"
**After receiving feedback (any path):** Output a clear summary confirming
what was understood:
"Here's what I understood from your feedback:
PREFERRED: Variant [X]
RATINGS: [list]
YOUR NOTES: [comments]
DIRECTION: [overall]
Is this right?"
Use AskUserQuestion to verify before proceeding.
**Save the approved choice:**
\`\`\`bash
echo '{"approved_variant":"<V>","feedback":"<FB>","date":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","screen":"<SCREEN>","branch":"'$(git branch --show-current 2>/dev/null)'"}' > "$_DESIGN_DIR/approved.json"
\`\`\``;
}
export function generateTasteProfile(ctx: TemplateContext): string {
return `Read the persistent taste profile if it exists:
\`\`\`bash
_TASTE_PROFILE=~/.gstack/projects/$SLUG/taste-profile.json
if [ -f "$_TASTE_PROFILE" ]; then
# Schema v1: { dimensions: { fonts, colors, layouts, aesthetics }, sessions: [] }
# Each dimension has approved[] and rejected[] entries with
# { value, confidence, approved_count, rejected_count, last_seen }
# Confidence decays 5% per week of inactivity — computed at read time.
cat "$_TASTE_PROFILE" 2>/dev/null | head -200
echo "TASTE_PROFILE_FOUND"
else
echo "NO_TASTE_PROFILE"
fi
\`\`\`
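For reference, a minimal file matching that shape might look like this (field values are illustrative, not real data):
\`\`\`json
{
  "version": 1,
  "dimensions": {
    "fonts": {
      "approved": [
        { "value": "Söhne", "confidence": 0.8, "approved_count": 4, "rejected_count": 0, "last_seen": "2026-01-15" }
      ],
      "rejected": [
        { "value": "Inter", "confidence": 0.6, "approved_count": 0, "rejected_count": 2, "last_seen": "2026-01-08" }
      ]
    },
    "colors": { "approved": [], "rejected": [] },
    "layouts": { "approved": [], "rejected": [] },
    "aesthetics": { "approved": [], "rejected": [] }
  },
  "sessions": []
}
\`\`\`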
**If TASTE_PROFILE_FOUND:** Summarize the strongest signals (top 3 approved entries
per dimension by confidence * approved_count). Include them in the design brief:
"Based on ${'\\${SESSION_COUNT}'} prior sessions, this user's taste leans toward:
fonts [top-3], colors [top-3], layouts [top-3], aesthetics [top-3]. Bias
generation toward these unless the user explicitly requests a different direction.
Also avoid their strong rejections: [top-3 rejected per dimension]."
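If \`jq\` is available (an assumption, not something gstack requires), one way to pull the strongest approved signals for a single dimension:
\`\`\`bash
# Hypothetical one-liner: top 3 approved fonts by confidence x approved_count
jq -r '.dimensions.fonts.approved | sort_by(.confidence * .approved_count) | reverse | .[0:3] | .[] | .value' "$_TASTE_PROFILE"
\`\`\`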
**If NO_TASTE_PROFILE:** Fall through to per-session approved.json files (legacy).
**Conflict handling:** If the current user request contradicts a strong persistent
signal (e.g., "make it playful" when taste profile strongly prefers minimal), flag
it: "Note: your taste profile strongly prefers minimal. You're asking for playful
this time — I'll proceed, but want me to update the taste profile, or treat this
as a one-off?"
**Decay:** Confidence scores decay 5% per week. A font approved 6 months ago with
10 approvals has less weight than one approved last week. The decay calculation
happens at read time, not write time, so the file only grows on change.
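A minimal sketch of that arithmetic, assuming compound weekly decay (the numbers are illustrative):
\`\`\`bash
# effective confidence = stored confidence * 0.95 ^ weeks_since_last_seen
awk -v conf=0.8 -v weeks=6 'BEGIN { print conf * (0.95 ^ weeks) }'   # prints ~0.588
\`\`\`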
**Schema migration:** If the file has no \`version\` field or \`version: 0\`, it's
the legacy approved.json aggregate — \`${ctx.paths.binDir}/gstack-taste-update\`
will migrate it to schema v1 on the next write.`;
}
// ─── UX Behavioral Foundations (Krug + HCI research) ───
export function generateUXPrinciples(_ctx: TemplateContext): string {
return `## UX Principles: How Users Actually Behave
These principles govern how real humans interact with interfaces. They are observed
behavior, not preferences. Apply them before, during, and after every design decision.
### The Three Laws of Usability
1. **Don't make me think.** Every page should be self-evident. If a user stops
to think "What do I click?" or "What does this mean?", the design has failed.
Self-evident > self-explanatory > requires explanation.
2. **Clicks don't matter, thinking does.** Three mindless, unambiguous clicks
beat one click that requires thought. Each step should feel like an obvious
choice (animal, vegetable, or mineral), not a puzzle.
3. **Omit, then omit again.** Get rid of half the words on each page, then get
rid of half of what's left. Happy talk (self-congratulatory text) must die.
Instructions must die. If they need reading, the design has failed.
### How Users Actually Behave
- **Users scan, they don't read.** Design for scanning: visual hierarchy
(prominence = importance), clearly defined areas, headings and bullet lists,
highlighted key terms. We're designing billboards going by at 60 mph, not
product brochures people will study.
- **Users satisfice.** They pick the first reasonable option, not the best.
Make the right choice the most visible choice.
- **Users muddle through.** They don't figure out how things work. They wing
it. If they accomplish their goal by accident, they won't seek the "right" way.
Once they find something that works, no matter how badly, they stick to it.
- **Users don't read instructions.** They dive in. Guidance must be brief,
timely, and unavoidable, or it won't be seen.
### Billboard Design for Interfaces
- **Use conventions.** Logo top-left, nav top/left, search = magnifying glass.
Don't innovate on navigation to be clever. Innovate when you KNOW you have a
better idea, otherwise use conventions. Even across languages and cultures,
web conventions let people identify the logo, nav, search, and main content.
- **Visual hierarchy is everything.** Related things are visually grouped. Nested
things are visually contained. More important = more prominent. If everything
shouts, nothing is heard. Start with the assumption everything is visual noise,
guilty until proven innocent.
- **Make clickable things obviously clickable.** No relying on hover states for
discoverability, especially on mobile where hover doesn't exist. Shape, location,
and formatting (color, underlining) must signal clickability without interaction.
- **Eliminate noise.** Three sources: too many things shouting for attention
(shouting), things not organized logically (disorganization), and too much stuff
(clutter). Fix noise by removal, not addition.
- **Clarity trumps consistency.** If making something significantly clearer
requires making it slightly inconsistent, choose clarity every time.
### Navigation as Wayfinding
Users on the web have no sense of scale, direction, or location. Navigation
must always answer: What site is this? What page am I on? What are the major
sections? What are my options at this level? Where am I? How can I search?
Persistent navigation on every page. Breadcrumbs for deep hierarchies.
Current section visually indicated. The "trunk test": cover everything except
the navigation. You should still know what site this is, what page you're on,
and what the major sections are. If not, the navigation has failed.
### The Goodwill Reservoir
Users start with a reservoir of goodwill. Every friction point depletes it.
**Deplete faster:** Hiding info users want (pricing, contact, shipping). Punishing
users for not doing things your way (formatting requirements on phone numbers).
Asking for unnecessary information. Putting sizzle in their way (splash screens,
forced tours, interstitials). Unprofessional or sloppy appearance.
**Replenish:** Know what users want to do and make it obvious. Tell them what they
want to know upfront. Save them steps wherever possible. Make it easy to recover
from errors. When in doubt, apologize.
### Mobile: Same Rules, Higher Stakes
All the above applies on mobile, just more so. Real estate is scarce, but never
sacrifice usability for space savings. Affordances must be VISIBLE: no cursor
means no hover-to-discover. Touch targets must be big enough (44px minimum).
Flat design can strip away useful visual information that signals interactivity.
Prioritize ruthlessly: things needed in a hurry go close at hand, everything
else a few taps away with an obvious path to get there.`;
}