mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-01 19:25:10 +02:00
feat(v1.3.0.0): open agents learnings + cross-model benchmark skill (#1040)
* chore: regenerate stale ship golden fixtures
Golden fixtures were missing the VENDORED_GSTACK preamble section that
landed on main. Regression tests failed on all three hosts (claude, codex,
factory). Regenerated from current preamble output.
No code changes, unblocks test suite.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: anti-slop design constraints + delete duplicate constants
Tightens design-consultation and design-shotgun to push back on the
convergence traps every AI design tool falls into.
Changes:
- scripts/resolvers/constants.ts: add "system-ui as primary font" to
AI_SLOP_BLACKLIST. Document Space Grotesk as the new "safe alternative
to Inter" convergence trap alongside the existing overused fonts.
Shape sketched after this list.
- scripts/gen-skill-docs.ts: delete duplicate AI slop constants block
(dead code — scripts/resolvers/constants.ts is the live source).
Prevents drift between the two definitions.
- design-consultation/SKILL.md.tmpl: add Space Grotesk + system-ui to
overused/slop lists. Add "anti-convergence directive" — vary across
generations in the same project. Add Phase 1 "memorable-thing forcing
question" (what's the one thing someone will remember?). Add Phase 5
"would a human designer be embarrassed by this?" self-gate before
presenting variants.
- design-shotgun/SKILL.md.tmpl: anti-convergence directive — each
variant must use a different font, palette, and layout. If two
variants look like siblings, one of them failed.
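A minimal sketch of the constants shape this implies. Only the
system-ui entry and the Space Grotesk note are grounded in this
commit; the surrounding entries are elided, not invented:

    // scripts/resolvers/constants.ts (sketch)
    export const AI_SLOP_BLACKLIST: readonly string[] = [
      // ...existing overused fonts...
      'system-ui as primary font', // added in this commit
      // Space Grotesk is documented alongside the list as the new
      // "safe alternative to Inter" convergence trap.
    ];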
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: context health soft directive in preamble (T2+)
Adds a "periodically self-summarize" nudge to long-running skills.
Soft directive only — no thresholds, no enforcement, no auto-commit.
Goal: self-awareness during /qa, /investigate, /cso etc. If you notice
yourself going in circles, STOP and reassess instead of thrashing.
Codex review caught that fake precision thresholds (15/30/45 tool calls)
were unimplementable — SKILL.md is a static prompt, not runtime code.
This ships the soft version only.
Changes:
- scripts/resolvers/preamble.ts: add generateContextHealth(), wire into
T2+ tier. Format: [PROGRESS] ... summary line. Explicit rule that
progress reporting must never mutate git state. Sketched after this
list.
- All T2+ skill SKILL.md files regenerated to include the new section.
- Golden ship fixtures updated (T4 skill, picks up the change).
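A sketch of the generator's likely shape, assuming it returns a static
prose section like the other preamble generators (the exact wording of
the emitted text is not confirmed by this commit):

    // scripts/resolvers/preamble.ts (sketch)
    function generateContextHealth(): string {
      return [
        '## Context health (soft directive)',
        'On long-running skills, periodically emit a one-line summary:',
        '  [PROGRESS] <what is done, what is next>',
        'If you notice yourself going in circles, STOP and reassess',
        'instead of thrashing. Progress reporting must never mutate',
        'git state.',
      ].join('\n');
    }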
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: model overlays with explicit --model flag (no auto-detect)
Adds a per-model behavioral patch layer orthogonal to the host axis.
Different LLMs have different tendencies (GPT won't stop, Gemini
over-explains, o-series wants structured output). Overlays nudge each
model toward better defaults for gstack workflows.
Codex review caught three landmines the prior reviews missed:
1. Host != model — Claude Code can run any Claude model, Codex runs
GPT/o-series, Cursor fronts multiple providers. Auto-detecting from
host would lie. Dropped auto-detect. --model is explicit (default
claude). Missing overlay file → empty string (graceful).
2. Import cycle — putting Model in resolvers/types.ts would cycle
through hosts/index. Created neutral scripts/models.ts instead.
3. "Final say" is dangerous — overlay at the end of preamble could
override STOP points, AskUserQuestion gates, /ship review gates.
Placed overlay after spawned-session-check but before voice + tier
sections. Wrapper heading adds explicit subordination language on
every overlay: "subordinate to skill workflow, STOP points,
AskUserQuestion gates, plan-mode safety, and /ship review gates."
Changes:
- scripts/models.ts: new neutral module. ALL_MODEL_NAMES, Model type,
resolveModel() for family heuristics (gpt-5.4-mini → gpt-5.4, o3 →
o-series, claude-opus-4-7 → claude), validateModel() helper.
Sketched after this list.
- scripts/resolvers/types.ts: import Model, add ctx.model field.
- scripts/resolvers/model-overlay.ts: new resolver. Reads
model-overlays/{model}.md. Supports {{INHERIT:base}} directive at
top of file for concat (gpt-5.4 inherits gpt). Cycle guard.
- scripts/resolvers/index.ts: register MODEL_OVERLAY resolver.
- scripts/resolvers/preamble.ts: wire generateModelOverlay into
composition before voice. Print MODEL_OVERLAY: {model} in preamble
bash so users can see which overlay is active. Filter empty sections.
- scripts/gen-skill-docs.ts: parse --model CLI flag. Default claude.
Unknown model → throw with list of valid options.
- model-overlays/{claude,gpt,gpt-5.4,gemini,o-series}.md: behavioral
patches per model family. gpt-5.4.md uses {{INHERIT:gpt}} to extend
gpt.md without duplication.
- test/gen-skill-docs.test.ts: fix qa-only guardrail regex scope.
Was matching Edit/Glob/Grep anywhere after `allowed-tools:` in the
whole file. Now scoped to frontmatter only. Body prose (Claude
overlay references Edit as a tool) correctly no longer breaks it.
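A sketch of the family heuristics. The three mappings come from the
models.ts bullet above; the exact prefix checks are an assumption:

    // scripts/models.ts (sketch)
    export type Model = 'claude' | 'gpt' | 'gpt-5.4' | 'gemini' | 'o-series';
    export const ALL_MODEL_NAMES: Model[] = ['claude', 'gpt', 'gpt-5.4', 'gemini', 'o-series'];

    export function resolveModel(raw: string): Model | undefined {
      const name = raw.toLowerCase();
      if (name.startsWith('claude')) return 'claude';   // claude-opus-4-7 → claude
      if (name.startsWith('gpt-5.4')) return 'gpt-5.4'; // gpt-5.4-mini → gpt-5.4 (check before bare gpt)
      if (name.startsWith('gpt')) return 'gpt';
      if (name.startsWith('gemini')) return 'gemini';
      if (/^o\d/.test(name)) return 'o-series';         // o3 → o-series
      return undefined;
    }

    export function validateModel(raw: string): Model {
      const model = resolveModel(raw);
      if (!model) {
        throw new Error(`Unknown model "${raw}". Valid: ${ALL_MODEL_NAMES.join(', ')}`);
      }
      return model;
    }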
Verification:
- bun run gen:skill-docs --host all --dry-run → all fresh
- bun run gen:skill-docs --model gpt-5.4 → concat works, gpt.md +
gpt-5.4.md content appears in order
- bun run gen:skill-docs --model unknown → errors with valid list
- All generated skills contain MODEL_OVERLAY: claude in preamble
- Golden ship fixtures regenerated
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: continuous checkpoint mode with non-destructive WIP squash
Adds opt-in auto-commit during long sessions so work survives Claude
Code crashes, Conductor workspace handoffs, and context switches.
Local-only by default — pushing requires explicit opt-in.
Codex review caught multiple landmines that would have shipped:
1. checkpoint_push=true default would push WIP commits to shared
branches, trigger CI/deploys, expose secrets. Now default false.
2. Plan's original /ship squash (git reset --soft to merge base) was
destructive — uncommitted ALL branch commits, not just WIP, and
caused non-fast-forward pushes. Redesigned: rebase --autosquash
scoped to WIP commits only, with explicit fallback for WIP-only
branches and STOP-and-ask for conflicts.
3. gstack-config get returned empty for missing keys with exit 0,
ignoring the annotated defaults in the header comments. Fixed:
get now falls back to a lookup_default() table that is the
canonical source for defaults.
4. Telemetry default mismatched: header said 'anonymous' but runtime
treated empty as 'off'. Aligned: default is 'off' everywhere.
5. /checkpoint resume only read markdown checkpoint files, not the
WIP commit [gstack-context] bodies the plan referenced. Wired up
parsing of [gstack-context] blocks from WIP commits as a second
recovery trail alongside the markdown checkpoints.
Changes:
- bin/gstack-config: add checkpoint_mode (default explicit) and
checkpoint_push (default false) to CONFIG_HEADER. Add lookup_default()
as canonical default source. get() falls back to defaults when key
absent. list now shows value + source (set/default). New 'defaults'
subcommand to inspect the table.
- scripts/resolvers/preamble.ts: preamble bash reads _CHECKPOINT_MODE
and _CHECKPOINT_PUSH, prints CHECKPOINT_MODE: and CHECKPOINT_PUSH: so
the mode is visible. New generateContinuousCheckpoint() section in
T2+ tier describes WIP commit format with [gstack-context] body and
the rules (never git add -A, never commit broken tests, push only
if opted in). Example deliberately shows a clean-state context so
it doesn't contradict the rules.
- ship/SKILL.md.tmpl: new Step 5.75 WIP Commit Squash. Detects WIP
count, exports [gstack-context] blocks before squash (as backup),
uses rebase --autosquash for mixed branches and soft-reset only when
VERIFIED WIP-only. Explicit anti-footgun rules against blind
soft-reset. Aborts with BLOCKED status on conflict instead of
destroying non-WIP commits.
- checkpoint/SKILL.md.tmpl: new Step 1.5 to parse [gstack-context]
blocks from WIP commits via git log --grep="^WIP:". Merges with
markdown checkpoint for fuller session recovery. Sketched after
this list.
- Golden ship fixtures regenerated (ship is T4, preamble change shows up).
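A sketch of the Step 1.5 parsing, assuming the [gstack-context] block
sits at the end of each WIP commit body as described above (the helper
name is illustrative, not from the repo):

    import { execSync } from 'child_process';

    // Pull [gstack-context] blocks out of WIP commits as a second recovery
    // trail alongside the markdown checkpoints. %B is the raw body; %x00
    // NUL-separates commits so multi-line bodies split safely.
    function readWipContexts(): string[] {
      const log = execSync('git log --grep="^WIP:" --format="%B%x00"', { encoding: 'utf-8' });
      return log
        .split('\0')
        .map(body => body.match(/\[gstack-context\]([\s\S]*)$/)?.[1]?.trim())
        .filter((block): block is string => Boolean(block));
    }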
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: feature discovery flow gated by per-feature markers
Extends generateUpgradeCheck() to surface new features once per user
after a just-upgraded session. No more silent features.
Codex review caught: spawned sessions (OpenClaw, etc.) must skip the
discovery prompt entirely — they can't interactively answer. Feature
discovery now checks SPAWNED_SESSION first and is silent in those.
Discovery is per-feature, not per-upgrade. Each feature has its own
marker file at ~/.claude/skills/gstack/.feature-prompted-{name}. Once
the user has been shown a feature (accepted, shown docs, or skipped),
the marker is touched and the prompt never fires again for that
feature. Future features get their own markers.
V1 features surfaced:
- continuous-checkpoint: offer to enable checkpoint_mode=continuous
- model-overlay: inform-only note about --model flag and MODEL_OVERLAY
line in preamble output
Max one prompt per session to avoid nagging. Fires only on JUST_UPGRADED
(not every session), plus spawned-session skip.
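The gating logic, sketched in TypeScript for clarity. In reality this
lives as prose rules in the generated preamble, not runtime code; the
function and type names here are illustrative, only the marker path and
the three gates are from this commit:

    import { closeSync, existsSync, openSync } from 'fs';
    import { homedir } from 'os';
    import { join } from 'path';

    const MARKER_DIR = join(homedir(), '.claude', 'skills', 'gstack');

    type SessionCtx = { justUpgraded: boolean; spawnedSession: boolean; promptedThisSession: boolean };

    function shouldPromptFeature(feature: string, ctx: SessionCtx): boolean {
      if (ctx.spawnedSession) return false;      // spawned sessions can't answer interactively
      if (!ctx.justUpgraded) return false;       // fires only on JUST_UPGRADED
      if (ctx.promptedThisSession) return false; // max one prompt per session
      return !existsSync(join(MARKER_DIR, `.feature-prompted-${feature}`));
    }

    function markFeaturePrompted(feature: string): void {
      // touch the marker; this feature never prompts this user again
      closeSync(openSync(join(MARKER_DIR, `.feature-prompted-${feature}`), 'a'));
    }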
Changes:
- scripts/resolvers/preamble.ts: extend generateUpgradeCheck() with
feature discovery rules, per-marker-file semantics, spawned-session
exclusion, and max-one-per-session cap.
- All skill SKILL.md files regenerated to include the new section.
- Golden ship fixtures regenerated.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: design taste engine with persistent schema
Adds a cross-session taste profile that learns from design-shotgun
approval/rejection decisions. Biases future design-consultation and
design-shotgun proposals toward the user's demonstrated preferences.
Codex review caught that the plan had "taste engine" as a vague goal
without schema, decay, migration, or placeholder insertion points. This
commit ships the full spec.
Schema v1 at ~/.gstack/projects/$SLUG/taste-profile.json:
- version, updated_at
- dimensions: fonts, colors, layouts, aesthetics — each with approved[]
and rejected[] preference lists
- sessions: last 50 (FIFO truncation), each with ts/action/variant/reason
- Preference: { value, confidence, approved_count, rejected_count, last_seen }
- Confidence: Laplace-smoothed approved/(total+1)
- Decay: 5% per week of inactivity, computed at read time (not write);
both formulas sketched after this list
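A worked sketch of the two formulas, using the field names from the
schema above. Whether the 5%/week decay compounds or is linear isn't
stated here; the later test commit says it's "clamped at 0", which
implies linear, so that's what this sketch assumes:

    // Sketch only: field names from the schema above, helper names assumed.
    type Preference = {
      value: string;
      confidence: number;
      approved_count: number;
      rejected_count: number;
      last_seen: string; // ISO timestamp
    };

    // Laplace-smoothed confidence: approved / (total + 1).
    // 3 approvals, 0 rejections → 3/4 = 0.75, never a premature 1.0.
    function confidence(p: Preference): number {
      return p.approved_count / (p.approved_count + p.rejected_count + 1);
    }

    // 5% decay per full week of inactivity, applied at read time (the
    // stored profile is never rewritten by a read), clamped at 0.
    function decayedConfidence(p: Preference, now: Date = new Date()): number {
      const msPerWeek = 7 * 24 * 60 * 60 * 1000;
      const weeksIdle = Math.max(0, Math.floor((now.getTime() - Date.parse(p.last_seen)) / msPerWeek));
      return Math.max(0, confidence(p) * (1 - 0.05 * weeksIdle));
    }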
Changes:
- bin/gstack-taste-update: new CLI. Subcommands approved/rejected/show/
migrate. Parses reason string for dimension signals (e.g.,
"fonts: Geist; colors: slate; aesthetics: minimal"). Emits taste-drift
NOTE when a new signal contradicts a strong opposing signal. Legacy
approved.json aggregates migrate to v1 on next write.
- scripts/resolvers/design.ts: new generateTasteProfile() resolver.
Produces the prose that skills see: how to read the profile, how to
factor into proposals, conflict handling, schema migration.
- scripts/resolvers/index.ts: register TASTE_PROFILE and a BIN_DIR
resolver (returns ctx.paths.binDir, used by templates that shell out
to gstack-* binaries).
- design-consultation/SKILL.md.tmpl: insert {{TASTE_PROFILE}} placeholder
in Phase 1 right after the memorable-thing forcing question so the
Phase 3 proposal can factor in learned preferences.
- design-shotgun/SKILL.md.tmpl: taste memory section now reads
taste-profile.json via {{TASTE_PROFILE}}, falls back to per-session
approved.json (legacy). Approval flow documented to call
gstack-taste-update after user picks/rejects a variant.
Known gap: v1 extracts dimension signals from a reason string passed
by the caller ("fonts: X; colors: Y"). Future v2 can read EXIF or an
accompanying manifest written by design-shotgun alongside each variant
for automatic dimension extraction without needing the reason argument.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: multi-provider model benchmark (boil the ocean)
Adds the full spec Codex asked for: real provider adapters with auth
detection, normalized RunResult, pricing tables, tool compatibility
maps, parallel execution with error isolation, and table/JSON/markdown
output. Judge stays on Anthropic SDK as the single stable source of
quality scoring, gated behind --judge.
Codex flagged the original plan as massively under-scoped — the
existing runner is Claude-only and the judge is Anthropic-only. You
can't benchmark GPT or Gemini without real provider infrastructure.
This commit ships it.
New architecture (adapter contract sketched after this list):
test/helpers/providers/types.ts ProviderAdapter interface
test/helpers/providers/claude.ts wraps `claude -p --output-format json`
test/helpers/providers/gpt.ts wraps `codex exec --json`
test/helpers/providers/gemini.ts wraps `gemini -p --output-format stream-json --yolo`
test/helpers/pricing.ts per-model USD cost tables (quarterly)
test/helpers/tool-map.ts which tools each CLI exposes
test/helpers/benchmark-runner.ts orchestrator (Promise.allSettled)
test/helpers/benchmark-judge.ts Anthropic SDK quality scorer
bin/gstack-model-benchmark CLI entry
test/benchmark-runner.test.ts 9 unit tests (cost math, formatters, tool-map)
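A sketch of the adapter contract. The RunResult fields are inferred
from the unit-test fixtures later in this diff; the option-bag shape
and anything beyond the ProviderAdapter and RunResult names are
assumptions:

    // test/helpers/providers/types.ts (sketch)
    export type RunResult = {
      output: string;
      tokens: { input: number; output: number; cached?: number };
      durationMs: number;
      toolCalls: number;
      modelUsed: string;
      error?: { code: 'auth' | 'timeout' | 'rate_limit' | 'unknown'; reason: string };
    };

    export interface ProviderAdapter {
      name: string;
      // auth + binary_missing detection happens here, before any paid call
      available(): Promise<{ ok: boolean; reason?: string }>;
      run(prompt: string, opts: { workdir: string; timeoutMs: number }): Promise<RunResult>;
    }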
Per-provider error isolation:
- auth → record reason, don't abort batch
- timeout → record reason, don't abort batch
- rate_limit → record reason, don't abort batch
- binary_missing → record in available() check, skip if --skip-unavailable
Pricing correction: cached input tokens are disjoint from uncached
input tokens (Anthropic/OpenAI report them separately). Original
math subtracted them, producing negative costs. Now adds cached at
the 10% discount alongside the full uncached input cost.
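The corrected math, sketched with the signature and the claude-opus-4-7
rates that appear in the unit tests later in this diff (the real table
lives in test/helpers/pricing.ts; other entries elided):

    // Cached tokens are disjoint from tokens.input, so they are ADDED at
    // the 10% rate, never subtracted from the uncached input cost.
    type Tokens = { input: number; output: number; cached?: number };

    const PRICING: Record<string, { inputPerMTok: number; outputPerMTok: number }> = {
      'claude-opus-4-7': { inputPerMTok: 15, outputPerMTok: 75 },
      // ...other models elided...
    };

    function estimateCostUsd(tokens: Tokens, model: string): number {
      const p = PRICING[model];
      if (!p) return 0; // unknown model: report zero cost rather than crash
      const M = 1_000_000;
      return (tokens.input / M) * p.inputPerMTok             // uncached input, full price
        + ((tokens.cached ?? 0) / M) * p.inputPerMTok * 0.1  // cache reads at 10%
        + (tokens.output / M) * p.outputPerMTok;
    }

    // 500K uncached + 500K cached on claude-opus-4-7: $7.50 + $0.75 = $8.25.
    // The old subtraction produced negative input costs on cache-heavy runs.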
CLI:
gstack-model-benchmark --prompt "..." --models claude,gpt,gemini
gstack-model-benchmark ./prompt.txt --output json --judge
gstack-model-benchmark ./prompt.txt --models claude --timeout-ms 60000
Output formats: table (default), json, markdown. Each shows model,
latency, in→out tokens, cost, quality (when --judge used), tool calls,
and any errors.
Known limitations for v1:
- Claude adapter approximates toolCalls as num_turns (stream-json
would give exact counts; v2 can upgrade).
- Live E2E tests (test/providers.e2e.test.ts) not included — they
require CI secrets for all three providers. Unit tests cover the
shape and math.
- Provider CLIs sometimes return non-JSON error text to stdout; the
parsers fall back to treating raw output as plain text in that case.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: standalone methodology skill publishing via gstack-publish
Ships the marketplace-distribution half of Item 5 (reframed): publish
the existing standalone OpenClaw methodology skills to multiple
marketplaces with one command.
Codex review caught that the original plan assumed raw generated
multi-host skills could be published directly. They can't — those
depend on gstack binaries, generated host paths, tool names, and
telemetry. The correct artifact class is hand-crafted standalone
skills in openclaw/skills/gstack-openclaw-* (already exist and work
without gstack runtime). This commit adds the wrapper that publishes
them to ClawHub + SkillsMP + Vercel Skills.sh with per-marketplace
error isolation and dry-run validation.
Changes:
- skills.json: root manifest with 4 skills (office-hours, ceo-review,
investigate, retro) each pointing at its openclaw/skills source.
Each skill declares per-marketplace targets with a slug, a publish
flag, and a compatible-hosts list. Marketplace configs include CLI
name, login command, publish command template (with placeholder
substitution), docs URL, and auth_check command.
- bin/gstack-publish: new CLI. Subcommands:
gstack-publish Publish all skills
gstack-publish <slug> Publish one skill
gstack-publish --dry-run Validate + auth-check without publishing
gstack-publish --list List skills + marketplace targets
Features:
* Manifest validation (missing source files, missing slugs, empty
marketplace list all reported).
* Per-marketplace auth check before any publish attempt.
* Per-skill / per-marketplace error isolation: one failure doesn't
abort the batch (sketched after this list).
* Idempotent — re-running with the same version is safe; markets
that reject duplicate versions report it as a failure for that
single target without affecting others.
* --dry-run walks the full pipeline but skips execSync; useful in
CI to validate manifest before bumping version.
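A sketch of the isolation loop. Manifest field names follow the
description above, and publishCommandFor() is a hypothetical helper
standing in for the per-marketplace command-template substitution:

    import { execSync } from 'child_process';

    type Target = { marketplace: string; slug: string; publish: boolean };
    type Skill = { name: string; source: string; version: string; targets: Target[] };
    type Result = { skill: string; marketplace: string; ok: boolean; reason?: string };

    declare function publishCommandFor(skill: Skill, target: Target): string; // hypothetical

    function publishAll(skills: Skill[], dryRun: boolean): Result[] {
      const results: Result[] = [];
      for (const skill of skills) {
        for (const target of skill.targets.filter(t => t.publish)) {
          try {
            // --dry-run walks this whole pipeline but skips the execSync
            if (!dryRun) execSync(publishCommandFor(skill, target), { stdio: 'pipe' });
            results.push({ skill: skill.name, marketplace: target.marketplace, ok: true });
          } catch (err) {
            // e.g. a marketplace rejecting a duplicate version: fail this
            // one target only; the rest of the batch keeps going
            results.push({ skill: skill.name, marketplace: target.marketplace, ok: false, reason: String(err) });
          }
        }
      }
      return results;
    }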
Tested locally: clawhub auth detected, skillsmp/vercel CLIs not
installed (marked NOT READY and skipped cleanly in dry-run).
Follow-up work (tracked in TODOS.md later):
- Version-bump helper that reads openclaw/skills/*/SKILL.md frontmatter
and updates skills.json in lockstep.
- CI workflow that runs gstack-publish --dry-run on every PR and
gstack-publish on tags.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: split preamble.ts into submodules (byte-identical output)
Splits scripts/resolvers/preamble.ts (841 lines, 18 generator functions +
composition root) into one file per generator under
scripts/resolvers/preamble/. Root preamble.ts becomes a thin composition
layer (~80 lines of imports + generatePreamble; sketched after the
file listing below).
Before:
scripts/resolvers/preamble.ts 841 lines
After:
scripts/resolvers/preamble.ts 83 lines
scripts/resolvers/preamble/generate-preamble-bash.ts 97 lines
scripts/resolvers/preamble/generate-upgrade-check.ts 48 lines
scripts/resolvers/preamble/generate-lake-intro.ts 16 lines
scripts/resolvers/preamble/generate-telemetry-prompt.ts 37 lines
scripts/resolvers/preamble/generate-proactive-prompt.ts 25 lines
scripts/resolvers/preamble/generate-routing-injection.ts 49 lines
scripts/resolvers/preamble/generate-vendoring-deprecation.ts 36 lines
scripts/resolvers/preamble/generate-spawned-session-check.ts 11 lines
scripts/resolvers/preamble/generate-ask-user-format.ts 16 lines
scripts/resolvers/preamble/generate-completeness-section.ts 19 lines
scripts/resolvers/preamble/generate-repo-mode-section.ts 12 lines
scripts/resolvers/preamble/generate-test-failure-triage.ts 108 lines
scripts/resolvers/preamble/generate-search-before-building.ts 14 lines
scripts/resolvers/preamble/generate-completion-status.ts 161 lines
scripts/resolvers/preamble/generate-voice-directive.ts 60 lines
scripts/resolvers/preamble/generate-context-recovery.ts 51 lines
scripts/resolvers/preamble/generate-continuous-checkpoint.ts 48 lines
scripts/resolvers/preamble/generate-context-health.ts 31 lines
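A sketch of what the thin composition root looks like after the split.
Generator names come from the listing above; the ResolverContext type
name and the exact join/filter are assumptions consistent with the
empty-section filtering mentioned in the model-overlay commit:

    // scripts/resolvers/preamble.ts after the split (sketch, abridged)
    import type { ResolverContext } from './types';
    import { generatePreambleBash } from './preamble/generate-preamble-bash';
    import { generateUpgradeCheck } from './preamble/generate-upgrade-check';
    // ...one import per submodule listed above...
    import { generateContextHealth } from './preamble/generate-context-health';

    export function generatePreamble(ctx: ResolverContext): string {
      const sections = [
        generatePreambleBash(ctx),
        generateUpgradeCheck(ctx),
        // ...remaining generators, in the exact pre-split order...
        generateContextHealth(ctx),
      ];
      // same composition as before the split, so output stays byte-identical
      return sections.filter(Boolean).join('\n\n');
    }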
Byte-identity verification (the real gate per Codex correction):
- Before refactor: snapshotted 135 generated SKILL.md files via
`find -name SKILL.md -type f | grep -v /gstack/` across all hosts.
- After refactor: regenerated with `bun run gen:skill-docs --host all`
and re-snapshotted.
- `diff -r baseline after` returned zero differences and exit 0.
The `--host all --dry-run` gate passes too. No template or host behavior
changes — purely a code-organization refactor.
Test fix: audit-compliance.test.ts's telemetry check previously grepped
preamble.ts directly for `_TEL != "off"`. After the refactor that logic
lives in preamble/generate-preamble-bash.ts. Test now concatenates all
preamble submodule sources before asserting — tracks the semantic contract,
not the file layout. Doing the minimum rewrite preserves the test's intent
(conditional telemetry) without coupling it to file boundaries.
Why now: we were in-session with full context. Codex had downgraded this
from mandatory to optional, but the preamble had grown to 841 lines and
was getting harder to navigate. User asked "why not?" given the context
was hot. Shipping it as a clean bisectable commit while all the prior
preamble.ts changes are fresh reduces rebase pain later.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: bump version and changelog (v0.19.0.0)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: trim verbose preamble + coverage audit prose
Compress without removing behavior or voice. Three targeted cuts:
1. scripts/resolvers/testing.ts coverage diagram example: 40 lines → 14
lines. Two-column ASCII layout instead of stacked sections.
Preserves all required regression-guard phrases (processPayment,
refundPayment, billing.test.ts, checkout.e2e.ts, COVERAGE, QUALITY,
GAPS, Code paths, User flows, ASCII coverage diagram).
2. scripts/resolvers/preamble/generate-completion-status.ts Plan Status
Footer: was 35 lines with embedded markdown table example, now 7
lines that describe the table inline. The footer fires only at
ExitPlanMode time — Claude can construct the placeholder table from
the inline description without copying a literal example.
3. Same file's Plan Mode Safe Operations + Skill Invocation During Plan
Mode sections compressed from ~25 lines combined to ~12. Preserves
all required test phrases (precedence over generic plan mode behavior,
Do not continue the workflow, cancel the skill or leave plan mode,
PLAN MODE EXCEPTION).
NOT touched:
- Voice directive (Garry's voice — protected per CLAUDE.md)
- Office-hours Phase 6 Handoff (Garry's voice + YC pitch)
- Test bootstrap, review army, plan completion (carefully tuned behavior)
Token savings (per skill, system-wide):
ship/SKILL.md 35474 → 34992 tokens (-482)
plan-ceo-review 29436 → 28940 (-496)
office-hours 26700 → 26204 (-496)
Still over the 25K ceiling. Bigger reduction requires restructure
(move large resolvers to externally-referenced docs, split /ship into
ship-quick + ship-full, or refactor the coverage audit + review army
into shorter prose). That's a follow-up — added to TODOS.
Tests: 420/420 pass on gen-skill-docs.test.ts + host-config.test.ts.
Goldens regenerated for claude/codex/factory ship.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(ci): install Node.js from official tarball instead of NodeSource apt setup
The CI Dockerfile's Node install was failing on ubicloud runners. NodeSource's
setup_22.x script runs two internal apt operations that both depend on
archive.ubuntu.com + security.ubuntu.com being reachable:
1. apt-get update (to refresh package lists)
2. apt-get install gnupg (as a prerequisite for its gpg keyring)
Ubicloud's CI runners frequently can't reach those mirrors — last build hit
~2min of connection timeouts to every security.ubuntu.com IP (185.125.190.82,
91.189.91.83, 91.189.92.24, etc.) plus archive.ubuntu.com mirrors. Compounding
this: on Ubuntu 24.04 (noble) "gnupg" was renamed to "gpg" and "gpgconf".
NodeSource's setup script still looks for "gnupg", so even when apt works,
it fails with "Package 'gnupg' has no installation candidate." The subsequent
apt-get install nodejs then fails because the NodeSource repo was never added.
Fix: drop NodeSource entirely. Download Node.js v22.20.0 from nodejs.org as a
tarball, extract to /usr/local. One host, no apt, no script, no keyring.
Before:
RUN curl -fsSL https://deb.nodesource.com/setup_22.x | bash - \
&& apt-get install -y --no-install-recommends nodejs ...
After:
ENV NODE_VERSION=22.20.0
RUN curl -fsSL "https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.xz" -o /tmp/node.tar.xz \
&& tar -xJ -C /usr/local --strip-components=1 --no-same-owner -f /tmp/node.tar.xz \
&& rm -f /tmp/node.tar.xz \
&& node --version && npm --version
Same installed path (/usr/local/bin/node and npm). Pinned version for
reproducibility. Version is bump-visible in the Dockerfile now.
Does not address the separate apt flakiness that affects the GitHub CLI
install (line 17) or `npx playwright install-deps chromium` (line 33) —
those use apt too. If those fail on a future build we can address then.
Failing job: build-image (71777913820)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: raise skill token ceiling warning from 25K to 40K
The 25K ceiling predated flagship models with 200K-1M windows and assumed
every skill prompt dominates context cost. Modern reality: prompt caching
amortizes the skill load across invocations, and three carefully-tuned
skills (ship, plan-ceo-review, office-hours) legitimately pack 25-35K
tokens of behavior that can't be cut without degrading quality or removing
protected content (Garry's voice, YC pitch, specialist review instructions).
We made the safe prose cuts earlier (coverage diagram, plan status footer,
plan mode operations). The remaining gap is structural — real compression
would require splitting /ship into ship-quick vs ship-full, externalizing
large resolvers to reference docs, or removing detailed skill behavior.
Each is 1-2 days of work. The cost of the warning firing is zero (it's
a warning, not an error). The cost of hitting it is ~15¢ per invocation
at worst, amortized further by prompt caching.
Raising to 40K catches what it's supposed to catch — a runaway 10K+ token
growth in a single release — without crying wolf on legitimately big
skills. Reference doc in CLAUDE.md updated to reflect the new philosophy:
when you hit 40K, ask WHAT grew, don't blindly compress tuned prose.
scripts/gen-skill-docs.ts: TOKEN_CEILING_BYTES 100_000 → 160_000
(the constant is bytes; the ceiling heuristic treats ~4 bytes as one
token, so 100_000 bytes ≈ 25K tokens and 160_000 ≈ 40K).
CLAUDE.md: document the "watch for feature bloat, not force compression"
intent of the ceiling.
Verification: `bun run gen:skill-docs --host all` shows zero TOKEN
CEILING warnings under the new 40K threshold.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(ci): install xz-utils so Node tarball extraction works
The direct-tarball Node install (switched from NodeSource apt in the last
CI fix) failed with "xz: Cannot exec: No such file or directory" because
Ubuntu 24.04 base doesn't include xz-utils. Node ships .tar.xz by default,
and `tar -xJ` shells out to xz, which was missing.
Add xz-utils to the base apt install alongside git/curl/unzip/etc.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(benchmark): pass --skip-git-repo-check to codex adapter
The gpt provider adapter spawns `codex exec -C <workdir>` with arbitrary
working directories (benchmark temp dirs, non-git paths). Without
`--skip-git-repo-check`, codex refuses to run and returns "Not inside a
trusted directory" — surfaced as a generic error.code='unknown' that
looks like an API failure.
Benchmarks don't care about codex's git-repo trust model; we just want
the prompt executed. Surfaced by the new provider live E2E test on a
temp workdir.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(benchmark): add --dry-run flag to gstack-model-benchmark
Matches gstack-publish --dry-run semantics. Validates the provider list,
resolves per-adapter auth, echoes the resolved flag values, and exits
without invoking any provider CLI. Zero-cost pre-flight for CI pipelines
and for catching auth drift before starting a paid benchmark run.
Output shape:
== gstack-model-benchmark --dry-run ==
prompt: <truncated>
providers: claude, gpt, gemini
workdir: /tmp/...
timeout_ms: 300000
output: table
judge: off
Adapter availability:
claude: OK
gpt: NOT READY — <reason>
gemini: NOT READY — <reason>
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test: lite E2E coverage for benchmark, taste engine, publish
Fills real coverage gaps in v0.19.0.0 primitives. 44 new deterministic
tests (gate tier, ~3s) + 8 live-API tests (periodic tier).
New gate-tier test files (free, <3s total):
- test/taste-engine.test.ts — 24 tests against gstack-taste-update:
schema shape, Laplace-smoothed confidence, 5%/week decay clamped at 0,
multi-dimension extraction, case-insensitive matching, session cap,
legacy profile migration with session truncation, taste-drift conflict
warning, malformed-JSON recovery, missing-variant exit code.
- test/publish-dry-run.test.ts — 13 tests against gstack-publish --dry-run:
manifest parsing, missing/malformed JSON, per-skill validation errors
(missing source file / slug / version / marketplaces), slug filter,
unknown-skill exit, per-marketplace auth isolation (fake marketplaces
with always-pass / always-fail / missing-binary CLIs), and a sanity
check against the real repo manifest.
- test/benchmark-cli.test.ts — 11 tests against gstack-model-benchmark
--dry-run: provider default, unknown-provider WARN, empty list
fallback, flag passthrough (timeout/workdir/judge/output), long-prompt
truncation, prompt resolution (inline vs file vs positional), missing
prompt exit.
New periodic-tier test file (paid, gated EVALS=1):
- test/skill-e2e-benchmark-providers.test.ts — 8 tests hitting real
claude, codex, gemini CLIs with a trivial prompt (~$0.001/provider).
Verifies output parsing, token accounting, cost estimation, timeout
error.code semantics, Promise.allSettled parallel isolation.
Per-provider availability gate — unauthed providers skip cleanly.
This suite already caught one real bug (codex adapter missing
--skip-git-repo-check, fixed in 5260987d).
Registered `benchmark-providers-live` in touchfiles.ts (periodic tier,
triggered by changes to bin/gstack-model-benchmark, providers/**,
benchmark-runner.ts, pricing.ts).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(benchmark): dedupe providers in --models
`--models claude,claude,gpt` previously produced a list with a duplicate
entry, meaning the benchmark would run claude twice and bill for two
runs. Surfaced by /review on this branch.
Use a Set internally; return Array.from(seen) to preserve type + order
of first occurrence.
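The fix, sketched (function and parameter names assumed):

    // Set preserves insertion order, so Array.from(seen) keeps the type
    // and the order of first occurrence while dropping duplicates.
    function parseModels(flag: string | undefined, fallback: string[] = ['claude']): string[] {
      const seen = new Set((flag ?? '').split(',').map(s => s.trim()).filter(Boolean));
      return seen.size > 0 ? Array.from(seen) : fallback;
    }

    // parseModels('claude,claude,gpt') → ['claude', 'gpt']: one billed run per provider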
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test: /review hardening — NOT-READY env isolation, workdir cleanup, perf
Applied from the adversarial subagent pass during /review on this branch:
- test/benchmark-cli.test.ts — new "NOT READY path fires when auth env
vars are stripped" test. The default dry-run test always showed OK on
dev machines with auth, hiding regressions in the remediation-hint
branch. Stripped env (no auth vars, HOME→empty tmpdir) now force-
exercises gpt + gemini NOT READY paths and asserts every NOT READY
line includes a concrete remediation hint (install/login/export).
(claude adapter's os.homedir() call is Bun-cached; the 2-of-3 adapter
coverage is sufficient to exercise the branch.)
- test/taste-engine.test.ts — session-cap test rewritten to seed the
profile with 50 entries + one real CLI call, instead of 55 sequential
subprocess spawns. Same coverage (FIFO eviction at the boundary), ~5s
faster CI time. Also pins first-casing-wins on the Geist/GEIST merge
assertion — bumpPref() keeps the first-arrival casing, so the test
documents that policy.
- test/skill-e2e-benchmark-providers.test.ts — workdir creation moved
from module-load into beforeAll, cleanup added in afterAll. Previous
shape leaked a /tmp/bench-e2e-* dir every CI run.
- test/publish-dry-run.test.ts — removed unused empty test/helpers
mkdirSync from the sandbox setup. The bin doesn't import from there,
so the empty dir was a footgun for future maintainers.
- test/helpers/providers/gpt.ts — expanded the inline comment on
`--skip-git-repo-check` to explicitly note that `-s read-only` is now
load-bearing safety (the trust prompt was the secondary boundary;
removing read-only while keeping skip-git-repo-check would be unsafe).
Net: 45 passing tests (was 44), session-cap test 5s faster, one real
regression surface covered that didn't exist before.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: surface v0.19 binaries and continuous checkpoint in README
The /review doc-staleness check flagged that v0.19.0.0 ships three new CLIs
(gstack-model-benchmark, gstack-publish, gstack-taste-update) and an opt-in
continuous checkpoint mode, none of which were visible in README's Power
tools section. New users couldn't find them without reading CHANGELOG.
Added:
- "New binaries (v0.19)" subsection with one-row descriptions for each CLI
- "Continuous checkpoint mode (opt-in, local by default)" subsection
explaining WIP auto-commit + [gstack-context] body + /ship squash +
/checkpoint resume
CHANGELOG entry already has good voice from /ship; no polish needed.
VERSION already at 0.19.0.0. Other docs (ARCHITECTURE/CONTRIBUTING/BROWSER)
don't reference this surface — scoped intentionally.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(ship): Step 19.5 — offer gstack-publish for methodology skill changes
Wires the orphaned gstack-publish binary into /ship. When a PR touches
any standalone methodology skill (openclaw/skills/gstack-*/SKILL.md) or
skills.json, /ship now runs gstack-publish --dry-run after PR creation
and asks the user if they want to actually publish.
Previously, the only way to discover gstack-publish was reading the
CHANGELOG or README. Most methodology skill updates landed on main
without ever being pushed to ClawHub / SkillsMP / Vercel Skills.sh,
defeating the whole point of having a marketplace publisher.
The check is conditional — for PRs that don't touch methodology skills
(the common case), this step is a silent no-op. Dry-run runs first so
the user sees the full list of what would publish and which marketplaces
are authed before committing.
Golden fixtures (claude/codex/factory) regenerated.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(benchmark-models): new skill wrapping gstack-model-benchmark
Wires the orphaned gstack-model-benchmark binary into a dedicated skill
so users can discover cross-model benchmarking via /benchmark-models or
voice triggers ("compare models", "which model is best").
Deliberately separate from /benchmark (page performance) because the
two surfaces test completely different things — confusing them would
muddy both.
Flow:
1. Pick a prompt (an existing SKILL.md file, inline text, or file path)
2. Confirm providers (dry-run shows auth status per provider)
3. Decide on --judge (adds ~$0.05, scores output quality 0-10)
4. Run the benchmark — table output
5. Interpret results (fastest / cheapest / highest quality)
6. Offer to save to ~/.gstack/benchmarks/<date>.json for trend tracking
Uses gstack-model-benchmark --dry-run as a safety gate — auth status is
visible BEFORE the user spends API calls. If zero providers are authed,
the skill stops cleanly rather than attempting a run that produces no
useful output.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: v1.3.0.0 — complete CHANGELOG + bump for post-1.2 scope additions
VERSION 1.2.0.0 → 1.3.0.0. The original 1.2 entry was written before I
added substantial new scope: the /benchmark-models skill, /ship Step 19.5
gstack-publish integration, --dry-run on gstack-model-benchmark, and the
lite E2E test coverage (4 new test files). A minor bump gives those
changes their own version line instead of silently folding them into
1.2's scope.
CHANGELOG additions under 1.3.0.0:
- /benchmark-models skill (new Added)
- /ship Step 19.5 publish check (new Added)
- gstack-model-benchmark --dry-run (new Added)
- Token ceiling 25K → 40K (moved to Changed)
- New Fixed section — codex adapter --skip-git-repo-check, --models
dedupe, CI Dockerfile xz-utils + nodejs.org tarball
- 4 new test files documented under contributors (taste-engine,
publish-dry-run, benchmark-cli, skill-e2e-benchmark-providers)
- Ship golden fixtures for claude/codex/factory hosts
Pre-existing 1.2 content preserved verbatim — no entries clobbered or
reordered. Sequence remains contiguous (1.3.0.0 → 1.1.3.0 → 1.1.2.0 →
1.1.1.0 → 1.1.0.0 → 1.0.0.0 → 0.19.0.0 → ...).
package.json and VERSION both at 1.3.0.0. No drift.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: adopt gbrain's release-summary CHANGELOG format + apply to v1.3
Ported the "release-summary format" rules from ~/git/gbrain/CLAUDE.md
(lines 291-354) into gstack's CLAUDE.md under the existing
"CHANGELOG + VERSION style" section. Every future `## [X.Y.Z]` entry
now needs a verdict-style release summary at the top:
1. Two-line bold headline (10-14 words)
2. Lead paragraph (3-5 sentences)
3. "Numbers that matter" with BEFORE / AFTER / Δ table
4. "What this means for [audience]" closer
5. `### Itemized changes` header
6. Existing itemized subsections below
Rewrote v1.3.0.0 entry to match. Preserved every existing bullet in
Added / Changed / Fixed / For contributors (no content clobbered per
the CLAUDE.md CHANGELOG rule).
Numbers in the v1.3 release summary are verifiable — every row of the
BEFORE / AFTER table has a reproducible command listed in the setup
paragraph (git log, bun test, grep for wiring status). No made-up
metrics.
Also added the gbrain "always credit community contributions" rule to
the itemized-changes section. `Contributed by @username` for every
community PR that lands in a CHANGELOG entry.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: remove gstack-publish — no real user need
User feedback: "i don't think i would use gstack-publish, i think we
should remove it." Agreed. The CLI + marketplace wiring was an
ambitious but speculative primitive. Zero users, zero validated demand,
and the existing manual `clawhub publish` workflow already covers the
real case (OpenClaw methodology skill publishing).
Deleted:
- bin/gstack-publish (the CLI)
- skills.json (the marketplace manifest)
- test/publish-dry-run.test.ts (13 tests)
- ship/SKILL.md.tmpl Step 19.5 — the methodology-skill publish-on-ship
check. No target to dispatch to anymore.
- README.md Power tools row for gstack-publish
Updated:
- bin/gstack-model-benchmark doc comment: dropped "matches gstack-publish
--dry-run semantics" reference (self-describing flag now)
- CHANGELOG 1.3.0.0 entry:
* Release summary: "three new binaries" → "two new binaries".
Dropped the /ship publish-check narrative.
* Numbers table: "1 of 3 → 3 of 3 wired" → "1 of 2 → 2 of 2 wired".
Deterministic test count: 45 → 32 (removed publish-dry-run's 13).
* Added section: removed gstack-publish CLI bullet + /ship Step 19.5
bullet.
* "What this means for users" closer: replaced the /ship publish
paragraph with the design-taste-engine learning loop, which IS
real, wired, and something users hit every week via /design-shotgun.
* Contributors section: "Four new test files" → "Three new test files"
Retained:
- openclaw/skills/gstack-openclaw-* skill dirs (pre-existed this PR,
still publishable manually via `clawhub publish`, useful standalone
for ClawHub installs)
- CLAUDE.md publishing-native-skills section (same rationale)
Regenerated SKILL.md across all hosts. Ship golden fixtures refreshed
for claude/codex/factory. 455 tests pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs(CHANGELOG): reorder v1.3 entry around day-to-day user wins
Previous entry led with internal metrics (CLIs wired to skills, preamble
line count, adapter bugs caught in CI). Useful to contributors, invisible
to users. Rewrote the release summary and Added section to lead with
what a day-to-day gstack user actually experiences.
Release summary changes:
- Headline: "Every new CLI wired to a slash command" → "Your design
skills learn your taste. Your session state survives a laptop close."
- Lead paragraph: shifted from "primitives discoverable from /commands"
to concrete day-to-day wins (design-shotgun taste memory, design-
consultation anti-slop gates, continuous checkpoint survival).
- Numbers table: swapped internal metrics (CLI wiring %, test counts,
preamble line count) for user-visible ones:
- Design-variant convergence gate (0 → 3 axes required)
- AI-slop font blacklist (~8 → 10+ fonts)
- Taste memory across sessions (none → per-project JSON with decay)
- Session state after crash (lost → auto-WIP with structured body)
- /context-restore sources (markdown only → + WIP commits)
- Models with behavioral overlays (1 → 5)
- "Most striking" interpretation: reframed around the mid-session
crash survival story instead of the codex adapter bug catch.
- "What this means" closer: reframed around /design-shotgun + /design-
consultation + continuous checkpoint workflow instead of
/benchmark-models.
Added section — reorganized into six subsections by user value:
1. Design skills that stop looking like AI
(anti-slop constraints, taste engine)
2. Session state that survives a crash
(continuous checkpoint, /context-restore WIP reading,
/ship non-destructive squash)
3. Quality-of-life
(feature discovery prompt, context health soft directive)
4. Cross-host support
(--model flag + 5 overlays)
5. Config
(gstack-config list/defaults, checkpoint_mode/push keys)
6. Power-user / internal
(gstack-model-benchmark + /benchmark-models skill — grouped and
pushed to the bottom since it's more of a research tool than a
daily workflow piece)
Changed / Fixed / For contributors sections unchanged. No content
clobbered per CLAUDE.md CHANGELOG rules — every existing bullet is
preserved, just reordered and grouped.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs(CHANGELOG): reframe v1.3 entry around transparency vs laptop-close
User feedback: "'closing your laptop' in the changelog is overstated, i
mean claude code does already have session management. i think the use
of the context save restore is mainly just another tool that is more in
your control instead of opaque and a part of CC." Correct. CC handles
session persistence on its own; continuous checkpoint isn't filling a
gap there, it's giving users a parallel, inspectable, portable track.
Reframed every place the old copy overstated:
- Headline: "Your session state survives a laptop close" → "Your
session state lives in git, not a black box."
- Lead paragraph: dropped the "closing your laptop mid-refactor doesn't
vaporize your decisions" line. Now frames continuous checkpoint as
explicitly running alongside CC's built-in session management, not
replacing it. Emphasizes grep-ability, portability across tools and
branches.
- Numbers table row: "Session state after mid-refactor crash: lost
since last manual commit → auto-WIP commits" → "Session state
format: Claude Code's opaque session store → git commits +
[gstack-context] bodies + markdown (parallel track)". Honest about
what's actually changing.
- "Most striking" interpretation: replaced the "used to cost you every
decision" framing with the real user value — session state stops
being a black box, `git log --grep "WIP:"` shows the whole thread,
any tool reading git can see it.
- "What this means" closer: replaced "survives crashes, context
switches, and forgotten laptops" with accurate framing — parallel
track alongside CC's own, inspectable, portable, useful when you
want to review or hand off work.
- Added section: "Session state that survives a crash" subsection
renamed to "Session state you can see, grep, and move". Lead bullet
now explicitly notes continuous checkpoint runs alongside CC session
management, not instead.
No content clobbered. All other bullets and sections unchanged.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs(CHANGELOG): correct session-state location — home dir by default, git only on opt-in
User correction: "wait is our session management really checked into
git? i don't think that's right, isn't it just saved in your home
dir?" Right. I had the location wrong. The default session-save
mechanism (`/context-save` + `/context-restore`) writes markdown
files to `~/.gstack/projects/$SLUG/checkpoints/` — HOME, not git.
Continuous checkpoint mode (opt-in) is what writes git commits.
Previous copy conflated the two and implied "lives in git" as the
default state, which is wrong.
Every affected location updated:
- Headline: "lives in git, not a black box" → "becomes files you
can grep, not a black box." Removes the false implication that
session state lands in git by default.
- Lead paragraph: now explicitly names the two separate mechanisms.
`/context-save` writes plaintext markdown to
`~/.gstack/projects/$SLUG/checkpoints/` (the default). Continuous
checkpoint mode (opt-in) additionally drops WIP: commits into the
git log.
- Numbers table row: "Session state format" now reads "markdown in
`~/.gstack/` by default, plus WIP: git commits if you opt into
continuous mode (parallel track)." Tells the truth about which
path is default vs opt-in.
- "Most striking" row interpretation: now names both paths. Default
path = markdown files in home dir. Opt-in continuous mode = WIP:
commits in project git log. Either way, plain text the user owns.
- "What this means" closer: similarly names both paths explicitly.
"markdown files in your home directory by default, plus git
commits if you opt into continuous mode."
- Continuous checkpoint mode Added bullet: clarifies the commits
land in "your project's git log" (not implied to be the default),
and notes it runs alongside BOTH Claude Code's built-in session
management AND the default `/context-save` markdown flow.
No other bullets or sections touched. No content clobbered.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
This commit is contained in:
test/audit-compliance.test.ts
@@ -33,7 +33,16 @@ describe('Audit compliance', () => {
   // Fix 2: Conditional telemetry — binary calls wrapped with existence check
   test('preamble telemetry calls are conditional on _TEL and binary existence', () => {
-    const preamble = readFileSync(join(ROOT, 'scripts/resolvers/preamble.ts'), 'utf-8');
+    // After the preamble.ts refactor (Item 9), the bash/telemetry logic lives
+    // in submodules under scripts/resolvers/preamble/. Concatenate all preamble
+    // source (root + submodules) and assert against the combined text so this
+    // test tracks the semantic contract, not the file layout.
+    const preambleDir = join(ROOT, 'scripts/resolvers/preamble');
+    const submoduleFiles = existsSync(preambleDir)
+      ? readdirSync(preambleDir).filter(f => f.endsWith('.ts')).map(f => readFileSync(join(preambleDir, f), 'utf-8'))
+      : [];
+    const rootPreamble = readFileSync(join(ROOT, 'scripts/resolvers/preamble.ts'), 'utf-8');
+    const preamble = [rootPreamble, ...submoduleFiles].join('\n');
     // Pending finalization must check _TEL and binary existence
     expect(preamble).toContain('_TEL" != "off"');
     expect(preamble).toContain('-x ');
test/benchmark-cli.test.ts (new file)
@@ -0,0 +1,177 @@
/**
 * gstack-model-benchmark CLI tests (offline).
 *
 * Covers CLI wiring that unit tests against benchmark-runner.ts can't see:
 * - --dry-run auth/provider-list resolution
 * - unknown provider WARN path
 * - provider default (claude) when --models omitted
 * - prompt resolution (inline --prompt vs positional file path)
 * - output format flag wiring via --dry-run (avoids real CLI invocation)
 *
 * All tests use --dry-run so no API calls happen.
 */

import { describe, test, expect } from 'bun:test';
import { spawnSync } from 'child_process';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';

const ROOT = path.resolve(import.meta.dir, '..');
const BIN = path.join(ROOT, 'bin', 'gstack-model-benchmark');

function run(args: string[], opts: { env?: Record<string, string> } = {}): { status: number | null; stdout: string; stderr: string } {
  const result = spawnSync('bun', ['run', BIN, ...args], {
    cwd: ROOT,
    env: { ...process.env, ...opts.env },
    encoding: 'utf-8',
    timeout: 15000,
  });
  return {
    status: result.status,
    stdout: result.stdout?.toString() ?? '',
    stderr: result.stderr?.toString() ?? '',
  };
}

describe('gstack-model-benchmark --dry-run', () => {
  test('prints provider availability report and exits 0', () => {
    const r = run(['--prompt', 'hi', '--models', 'claude,gpt,gemini', '--dry-run']);
    expect(r.status).toBe(0);
    expect(r.stdout).toContain('gstack-model-benchmark --dry-run');
    expect(r.stdout).toContain('claude');
    expect(r.stdout).toContain('gpt');
    expect(r.stdout).toContain('gemini');
    expect(r.stdout).toContain('no prompts sent');
  });

  test('reports default provider when --models omitted', () => {
    const r = run(['--prompt', 'hi', '--dry-run']);
    expect(r.status).toBe(0);
    expect(r.stdout).toContain('providers: claude');
  });

  test('unknown provider in --models emits WARN and is dropped', () => {
    const r = run(['--prompt', 'hi', '--models', 'claude,gpt-42-fake', '--dry-run']);
    expect(r.status).toBe(0);
    expect(r.stderr).toContain('unknown provider');
    expect(r.stderr).toContain('gpt-42-fake');
    expect(r.stdout).toContain('providers: claude');
    expect(r.stdout).not.toContain('gpt-42-fake');
  });

  test('empty --models list falls back to claude default', () => {
    const r = run(['--prompt', 'hi', '--models', '', '--dry-run']);
    expect(r.status).toBe(0);
    expect(r.stdout).toContain('providers: claude');
  });

  test('--timeout-ms and --workdir flags flow through to dry-run report', () => {
    const r = run(['--prompt', 'hi', '--timeout-ms', '9999', '--workdir', '/tmp', '--dry-run']);
    expect(r.status).toBe(0);
    expect(r.stdout).toContain('timeout_ms: 9999');
    expect(r.stdout).toContain('workdir: /tmp');
  });

  test('--judge flag reported in dry-run output', () => {
    const r = run(['--prompt', 'hi', '--judge', '--dry-run']);
    expect(r.status).toBe(0);
    expect(r.stdout).toContain('judge: on');
  });

  test('--output flag reported in dry-run', () => {
    const r = run(['--prompt', 'hi', '--output', 'json', '--dry-run']);
    expect(r.status).toBe(0);
    expect(r.stdout).toContain('output: json');
  });

  test('each adapter reports either OK or NOT READY, never crashes', () => {
    const r = run(['--prompt', 'hi', '--models', 'claude,gpt,gemini', '--dry-run']);
    expect(r.status).toBe(0);
    // Each provider line must end in OK or NOT READY
    const lines = r.stdout.split('\n');
    const adapterLines = lines.filter(l => /^\s+(claude|gpt|gemini):/.test(l));
    expect(adapterLines.length).toBe(3);
    for (const line of adapterLines) {
      expect(line).toMatch(/(OK|NOT READY)/);
    }
  });

  test('NOT READY path fires when auth env vars are stripped', () => {
    // On a dev machine with full auth configured, the default --dry-run output
    // shows OK for every provider with credentials. Strip auth env vars AND
    // point HOME at an empty temp dir so adapters can't find file-based creds.
    // This test exists to catch regressions where the NOT READY branch itself
    // breaks (crash, missing remediation hint, wrong message format).
    //
    // Note: claude adapter's `os.homedir()` call is sometimes cached by Bun and
    // doesn't always pick up the HOME override, so this test asserts only on
    // gpt + gemini adapters where HOME redirection reliably makes the adapter's
    // credentials-path check fail. Two adapters hitting NOT READY with full
    // remediation messages is sufficient coverage for the branch.
    const emptyHome = fs.mkdtempSync(path.join(os.tmpdir(), 'bench-noauth-home-'));
    try {
      const minimalEnv: Record<string, string> = {
        PATH: process.env.PATH ?? '',
        TERM: process.env.TERM ?? 'xterm',
        HOME: emptyHome,
      };
      const result = spawnSync('bun', ['run', BIN, '--prompt', 'hi', '--models', 'claude,gpt,gemini', '--dry-run'], {
        cwd: ROOT,
        env: minimalEnv,
        encoding: 'utf-8',
        timeout: 15000,
      });
      expect(result.status).toBe(0);
      const out = result.stdout?.toString() ?? '';
      // gpt + gemini must report NOT READY in this clean env (their auth check
      // reads paths under the overridden HOME).
      expect(out).toMatch(/gpt:\s+NOT READY/);
      expect(out).toMatch(/gemini:\s+NOT READY/);
      // Every NOT READY line must include a concrete remediation hint so users
      // can resolve the missing auth. This is the regression we care about.
      const notReadyLines = out.split('\n').filter(l => l.includes('NOT READY'));
      expect(notReadyLines.length).toBeGreaterThanOrEqual(2);
      for (const line of notReadyLines) {
        expect(line).toMatch(/(install|Install|login|export|Run|Log in)/);
      }
    } finally {
      fs.rmSync(emptyHome, { recursive: true, force: true });
    }
  });

  test('long prompt is truncated in dry-run display', () => {
    const longPrompt = 'x'.repeat(200);
    const r = run(['--prompt', longPrompt, '--dry-run']);
    expect(r.status).toBe(0);
    // Summary truncates to 80 chars + ellipsis
    expect(r.stdout).toMatch(/prompt:\s+x{80}…/);
  });
});

describe('gstack-model-benchmark prompt resolution', () => {
  test('positional file path is read and passed as prompt', () => {
    const tmp = fs.mkdtempSync(path.join(os.tmpdir(), 'bench-prompt-'));
    const promptFile = path.join(tmp, 'prompt.txt');
    fs.writeFileSync(promptFile, 'hello from file');
    try {
      const r = run([promptFile, '--dry-run']);
      expect(r.status).toBe(0);
      expect(r.stdout).toContain('hello from file');
    } finally {
      fs.rmSync(tmp, { recursive: true, force: true });
    }
  });

  test('positional non-file arg is treated as inline prompt', () => {
    const r = run(['treat-me-as-inline', '--dry-run']);
    expect(r.status).toBe(0);
    expect(r.stdout).toContain('treat-me-as-inline');
  });

  test('missing prompt exits non-zero', () => {
    const r = run(['--dry-run']);
    expect(r.status).not.toBe(0);
    expect(r.stderr).toContain('specify a prompt');
  });
});
@@ -0,0 +1,137 @@
|
||||
/**
|
||||
* Unit tests for the benchmark runner.
|
||||
*
|
||||
* Mocks adapters to verify:
|
||||
* - All adapters run in parallel (Promise.allSettled not serial)
|
||||
* - Unavailable adapters are skipped or marked depending on flag
|
||||
* - Per-adapter errors don't abort the batch
|
||||
* - Output formatters (table, json, markdown) produce non-empty strings
|
||||
*
|
||||
* Does NOT exercise live CLIs — see test/providers.e2e.test.ts for those.
|
||||
*/
|
||||
|
||||
import { test, expect } from 'bun:test';
|
||||
import { formatTable, formatJson, formatMarkdown, type BenchmarkReport } from './helpers/benchmark-runner';
|
||||
import { estimateCostUsd, PRICING } from './helpers/pricing';
|
||||
import { missingTools, TOOL_COMPATIBILITY } from './helpers/tool-map';
|
||||
|
||||
test('estimateCostUsd returns 0 for unknown model (no crash)', () => {
|
||||
const cost = estimateCostUsd({ input: 1000, output: 500 }, 'unknown-model-7b');
|
||||
expect(cost).toBe(0);
|
||||
});
|
||||
|
||||
test('estimateCostUsd computes correctly for known Claude model', () => {
|
||||
// claude-opus-4-7: $15/MTok input, $75/MTok output
|
||||
// 1M input + 0.5M output = $15 + $37.50 = $52.50
|
||||
const cost = estimateCostUsd({ input: 1_000_000, output: 500_000 }, 'claude-opus-4-7');
|
||||
expect(cost).toBeCloseTo(52.50, 2);
|
||||
});
|
||||
|
||||
test('estimateCostUsd applies cached input discount alongside uncached input', () => {
|
||||
// tokens.input is uncached-only; tokens.cached is disjoint cache-reads at 10%.
|
||||
// 0 uncached input, 1M cached → 10% of 15 = $1.50
|
||||
const cost1 = estimateCostUsd({ input: 0, output: 0, cached: 1_000_000 }, 'claude-opus-4-7');
|
||||
expect(cost1).toBeCloseTo(1.50, 2);
|
||||
// 500K uncached input + 500K cached → $7.50 + $0.75 = $8.25
|
||||
const cost2 = estimateCostUsd({ input: 500_000, output: 0, cached: 500_000 }, 'claude-opus-4-7');
|
||||
expect(cost2).toBeCloseTo(8.25, 2);
|
||||
});
|
||||
|
||||
test('PRICING table covers the key model families', () => {
|
||||
expect(PRICING['claude-opus-4-7']).toBeDefined();
|
||||
expect(PRICING['claude-sonnet-4-6']).toBeDefined();
|
||||
expect(PRICING['gpt-5.4']).toBeDefined();
|
||||
expect(PRICING['gemini-2.5-pro']).toBeDefined();
|
||||
});
|
||||
|
||||
test('missingTools reports unsupported tools per provider', () => {
|
||||
// GPT/Codex doesn't expose Edit, Glob, Grep
|
||||
expect(missingTools('gpt', ['Edit', 'Glob', 'Grep'])).toEqual(['Edit', 'Glob', 'Grep']);
|
||||
// Claude supports all core tools
|
||||
expect(missingTools('claude', ['Edit', 'Glob', 'Grep', 'Bash', 'Read'])).toEqual([]);
|
||||
// Gemini has very limited agentic surface
|
||||
expect(missingTools('gemini', ['Bash', 'Edit'])).toEqual(['Bash', 'Edit']);
|
||||
});
|
||||
|
||||
test('TOOL_COMPATIBILITY is populated for all three families', () => {
|
||||
expect(TOOL_COMPATIBILITY.claude).toBeDefined();
|
||||
expect(TOOL_COMPATIBILITY.gpt).toBeDefined();
|
||||
expect(TOOL_COMPATIBILITY.gemini).toBeDefined();
|
||||
});
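
For context, a minimal sketch of helpers consistent with the tool-map tests above. This is hypothetical: the real `./helpers/tool-map` may use a different shape, and the per-family tool sets here are assumptions inferred only from the assertions.

```ts
// Sketch only: shapes inferred from the tests, not from the real helpers/tool-map.
export const TOOL_COMPATIBILITY: Record<string, Set<string>> = {
  claude: new Set(['Edit', 'Glob', 'Grep', 'Bash', 'Read', 'Write']),
  gpt: new Set(['Bash', 'Read', 'Write']), // assumption: no Edit/Glob/Grep surface
  gemini: new Set(['Read']),               // assumption: very limited agentic surface
};

// A tool is "missing" for a family when the family's set doesn't contain it.
export function missingTools(family: string, wanted: string[]): string[] {
  const supported = TOOL_COMPATIBILITY[family] ?? new Set<string>();
  return wanted.filter((tool) => !supported.has(tool));
}
```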

test('formatTable handles a report with mixed success/error/unavailable entries', () => {
  const report: BenchmarkReport = {
    prompt: 'test prompt',
    workdir: '/tmp',
    startedAt: '2026-04-16T20:00:00Z',
    durationMs: 1500,
    entries: [
      {
        provider: 'claude',
        family: 'claude',
        available: true,
        result: {
          output: 'ok',
          tokens: { input: 100, output: 200 },
          durationMs: 800,
          toolCalls: 3,
          modelUsed: 'claude-opus-4-7',
        },
        costUsd: 0.0165,
        qualityScore: 9.2,
      },
      {
        provider: 'gpt',
        family: 'gpt',
        available: true,
        result: {
          output: '',
          tokens: { input: 0, output: 0 },
          durationMs: 200,
          toolCalls: 0,
          modelUsed: 'gpt-5.4',
          error: { code: 'auth', reason: 'codex login required' },
        },
      },
      {
        provider: 'gemini',
        family: 'gemini',
        available: false,
        unavailable_reason: 'gemini CLI not on PATH',
      },
    ],
  };

  const table = formatTable(report);
  expect(table).toContain('claude-opus-4-7');
  expect(table).toContain('ERROR auth');
  expect(table).toContain('unavailable');
  expect(table).toContain('9.2/10');
});

test('formatJson produces parseable JSON', () => {
  const report: BenchmarkReport = {
    prompt: 'x',
    workdir: '/tmp',
    startedAt: '2026-04-16T20:00:00Z',
    durationMs: 100,
    entries: [],
  };
  const json = formatJson(report);
  const parsed = JSON.parse(json);
  expect(parsed.prompt).toBe('x');
  expect(parsed.entries).toEqual([]);
});

test('formatMarkdown produces a table header', () => {
  const report: BenchmarkReport = {
    prompt: 'x',
    workdir: '/tmp',
    startedAt: '2026-04-16T20:00:00Z',
    durationMs: 100,
    entries: [],
  };
  const md = formatMarkdown(report);
  expect(md).toContain('# Benchmark report');
  expect(md).toContain('| Model | Latency |');
});
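
The pricing math the tests pin down is simple. Here is a sketch of an `estimateCostUsd` consistent with them; the rates, field names, and 10% cache-read multiplier are assumptions taken from the test comments, not from the real `./helpers/pricing`:

```ts
// Sketch only. Rates and shapes assumed from the tests above.
type TokenCounts = { input: number; output: number; cached?: number };
type ModelPrice = { inputPerMTok: number; outputPerMTok: number };

const PRICING_SKETCH: Record<string, ModelPrice> = {
  'claude-opus-4-7': { inputPerMTok: 15, outputPerMTok: 75 },
};

function estimateCostUsdSketch(tokens: TokenCounts, model: string): number {
  const price = PRICING_SKETCH[model];
  if (!price) return 0; // unknown model → $0, never crash
  const M = 1_000_000;
  const uncachedIn = (tokens.input / M) * price.inputPerMTok;
  const cachedIn = ((tokens.cached ?? 0) / M) * price.inputPerMTok * 0.1; // cache reads at 10%
  const out = (tokens.output / M) * price.outputPerMTok;
  return uncachedIn + cachedIn + out;
}
```

This reproduces the expectations above: 1M input + 0.5M output on claude-opus-4-7 is $15 + $37.50 = $52.50, and 500K uncached + 500K cached input is $7.50 + $0.75 = $8.25.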
+212 -117
@@ -55,16 +55,6 @@ _TEL_START=$(date +%s)
_SESSION_ID="$$-$(date +%s)"
echo "TELEMETRY: ${_TEL:-off}"
echo "TEL_PROMPTED: $_TEL_PROMPTED"
# Question tuning (opt-in; see /plan-tune + docs/designs/PLAN_TUNING_V0.md)
_QUESTION_TUNING=$(~/.claude/skills/gstack/bin/gstack-config get question_tuning 2>/dev/null || echo "false")
echo "QUESTION_TUNING: $_QUESTION_TUNING"
# Writing style (V1: default = ELI10-style, terse = V0 prose. See docs/designs/PLAN_TUNING_V1.md)
_EXPLAIN_LEVEL=$(~/.claude/skills/gstack/bin/gstack-config get explain_level 2>/dev/null || echo "default")
if [ "$_EXPLAIN_LEVEL" != "default" ] && [ "$_EXPLAIN_LEVEL" != "terse" ]; then _EXPLAIN_LEVEL="default"; fi
echo "EXPLAIN_LEVEL: $_EXPLAIN_LEVEL"
# V1 upgrade migration pending-prompt flag
_WRITING_STYLE_PENDING=$([ -f ~/.gstack/.writing-style-prompt-pending ] && echo "yes" || echo "no")
echo "WRITING_STYLE_PENDING: $_WRITING_STYLE_PENDING"
mkdir -p ~/.gstack/analytics
if [ "$_TEL" != "off" ]; then
  echo '{"skill":"ship","ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","repo":"'$(basename "$(git rev-parse --show-toplevel 2>/dev/null)" 2>/dev/null || echo "unknown")'"}' >> ~/.gstack/analytics/skill-usage.jsonl 2>/dev/null || true
@@ -109,6 +99,12 @@ if [ -d ".claude/skills/gstack" ] && [ ! -L ".claude/skills/gstack" ]; then
fi
fi
echo "VENDORED_GSTACK: $_VENDORED"
echo "MODEL_OVERLAY: claude"
# Checkpoint mode (explicit = no auto-commit, continuous = WIP commits as you go)
_CHECKPOINT_MODE=$(~/.claude/skills/gstack/bin/gstack-config get checkpoint_mode 2>/dev/null || echo "explicit")
_CHECKPOINT_PUSH=$(~/.claude/skills/gstack/bin/gstack-config get checkpoint_push 2>/dev/null || echo "false")
echo "CHECKPOINT_MODE: $_CHECKPOINT_MODE"
echo "CHECKPOINT_PUSH: $_CHECKPOINT_PUSH"
# Detect spawned session (OpenClaw or other orchestrator)
[ -n "$OPENCLAW_SESSION" ] && echo "SPAWNED_SESSION: true" || true
```
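
The preamble's contract is line-oriented `KEY: value` pairs on stdout. A hedged sketch of how a consumer could read them; hypothetical, since in practice the model reads this output as prose rather than through code:

```ts
// Parse "KEY: value" lines from captured preamble stdout (illustrative only).
function parsePreambleFlags(stdout: string): Record<string, string> {
  const flags: Record<string, string> = {};
  for (const line of stdout.split('\n')) {
    const m = line.match(/^([A-Z_]+):\s*(.*)$/); // e.g. "CHECKPOINT_MODE: explicit"
    if (m) flags[m[1]] = m[2];
  }
  return flags;
}

const flags = parsePreambleFlags('TELEMETRY: off\nCHECKPOINT_MODE: continuous\n');
if (flags.CHECKPOINT_MODE === 'continuous') {
  // follow the Continuous Checkpoint Mode rules described below
}
```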
@@ -124,7 +120,38 @@ or invoking other gstack skills, use the `/gstack-` prefix (e.g., `/gstack-qa` instead
of `/qa`, `/gstack-ship` instead of `/ship`). Disk paths are unaffected — always use
`~/.claude/skills/gstack/[skill-name]/SKILL.md` for reading skill files.

If output shows `UPGRADE_AVAILABLE <old> <new>`: read `~/.claude/skills/gstack/gstack-upgrade/SKILL.md` and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined). If `JUST_UPGRADED <from> <to>`: tell user "Running gstack v{to} (just updated!)" and continue.
If output shows `UPGRADE_AVAILABLE <old> <new>`: read `~/.claude/skills/gstack/gstack-upgrade/SKILL.md` and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined).

If output shows `JUST_UPGRADED <from> <to>` AND `SPAWNED_SESSION` is NOT set: tell
the user "Running gstack v{to} (just updated!)" and then check for new features to
surface. For each per-feature marker below, if the marker file is missing AND the
feature is plausibly useful for this user, use AskUserQuestion to let them try it.
Fire once per feature per user, NOT once per upgrade.

**In spawned sessions (`SPAWNED_SESSION` = "true"): SKIP feature discovery entirely.**
Just print "Running gstack v{to}" and continue. Orchestrators do not want interactive
prompts from sub-sessions.

**Feature discovery markers and prompts** (one at a time, max one per session):

1. `~/.claude/skills/gstack/.feature-prompted-continuous-checkpoint` →
   Prompt: "Continuous checkpoint auto-commits your work as you go with `WIP:` prefix
   so you never lose progress to a crash. Local-only by default — doesn't push
   anywhere unless you turn that on. Want to try it?"
   Options: A) Enable continuous mode, B) Show me first (print the section from
   the preamble Continuous Checkpoint Mode), C) Skip.
   If A: run `~/.claude/skills/gstack/bin/gstack-config set checkpoint_mode continuous`.
   Always: `touch ~/.claude/skills/gstack/.feature-prompted-continuous-checkpoint`

2. `~/.claude/skills/gstack/.feature-prompted-model-overlay` →
   Inform only (no prompt): "Model overlays are active. `MODEL_OVERLAY: {model}`
   shown in the preamble output tells you which behavioral patch is applied.
   Override with `--model` when regenerating skills (e.g., `bun run gen:skill-docs
   --model gpt-5.4`). Default is claude."
   Always: `touch ~/.claude/skills/gstack/.feature-prompted-model-overlay`

After handling JUST_UPGRADED (prompts done or skipped), continue with the skill
workflow.
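
The marker-file pattern above is just an existence check plus `touch`. An illustrative TypeScript equivalent, where `promptFeature` is a hypothetical stand-in for the AskUserQuestion step (the real gate lives in SKILL.md prose, not in shipped code):

```ts
import { existsSync } from 'node:fs';
import { writeFile } from 'node:fs/promises';
import { homedir } from 'node:os';
import { join } from 'node:path';

async function promptFeature(name: string): Promise<void> {
  console.log(`(would AskUserQuestion about: ${name})`); // placeholder
}

const marker = join(homedir(), '.claude/skills/gstack/.feature-prompted-model-overlay');
if (!existsSync(marker)) {
  await promptFeature('model-overlay');
  await writeFile(marker, ''); // the `touch`: fires once per feature per user
}
```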

If `WRITING_STYLE_PENDING` is `yes`: You're on the first skill run after upgrading
to gstack v1. Ask the user once about the new default writing style. Use AskUserQuestion:
@@ -249,8 +276,7 @@ Key routing rules:
- Design system, brand → invoke design-consultation
- Visual audit, design polish → invoke design-review
- Architecture review → invoke plan-eng-review
- Save progress, save state, save my work → invoke context-save
- Resume, where was I, pick up where I left off → invoke context-restore
- Save progress, checkpoint, resume → invoke checkpoint
- Code quality, health check → invoke health
```

@@ -300,7 +326,23 @@ AI orchestrator (e.g., OpenClaw). In spawned sessions:
- Focus on completing the task and reporting results via prose output.
- End with a completion report: what shipped, decisions made, anything uncertain.

## Model-Specific Behavioral Patch (claude)

The following nudges are tuned for the claude model family. They are
**subordinate** to skill workflow, STOP points, AskUserQuestion gates, plan-mode
safety, and /ship review gates. If a nudge below conflicts with skill instructions,
the skill wins. Treat these as preferences, not rules.

**Todo-list discipline.** When working through a multi-step plan, mark each task
complete individually as you finish it. Do not batch-complete at the end. If a task
turns out to be unnecessary, mark it skipped with a one-line reason.

**Think before heavy actions.** For complex operations (refactors, migrations,
non-trivial new features), briefly state your approach before executing. This lets
the user course-correct cheaply instead of mid-flight.

**Dedicated tools over Bash.** Prefer Read, Edit, Write, Glob, Grep over shell
equivalents (cat, sed, find, grep). The dedicated tools are cheaper and clearer.

## Voice

@@ -534,6 +576,65 @@ Ask the user. Do not guess on architectural or data model decisions.

This does NOT apply to routine coding, small features, or obvious changes.

## Continuous Checkpoint Mode

If `CHECKPOINT_MODE` is `"continuous"` (from preamble output): auto-commit work as
you go with `WIP:` prefix so session state survives crashes and context switches.

**When to commit (continuous mode only):**
- After creating a new file (not scratch/temp files)
- After finishing a function/component/module
- After fixing a bug that's verified by a passing test
- Before any long-running operation (install, full build, full test suite)

**Commit format** — include structured context in the body:

```
WIP: <concise description of what changed>

[gstack-context]
Decisions: <key choices made this step>
Remaining: <what's left in the logical unit>
Tried: <failed approaches worth recording> (omit if none)
Skill: </skill-name-if-running>
[/gstack-context]
```

**Rules:**
- Stage only files you intentionally changed. NEVER `git add -A` in continuous mode.
- Do NOT commit with known-broken tests. Fix first, then commit. The `[gstack-context]`
  values MUST reflect a clean state.
- Do NOT commit mid-edit. Finish the logical unit.
- Push ONLY if `CHECKPOINT_PUSH` is `"true"` (default is false). Pushing WIP commits
  to a shared remote can trigger CI, deploys, and expose secrets — that is why push
  is opt-in, not default.
- Background discipline — do NOT announce each commit to the user. They can see
`git log` whenever they want.
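
Mechanically, one continuous-mode checkpoint is a scoped `git add` plus a commit whose body carries the context block. A sketch using Bun's shell; the file paths and context values are illustrative only:

```ts
import { $ } from 'bun';

const files = ['src/auth/session.ts', 'test/session.test.ts']; // only intentional changes
const message = [
  'WIP: fix session expiry race',
  '',
  '[gstack-context]',
  'Decisions: refresh token 60s before expiry',
  'Remaining: regression test for concurrent refresh',
  'Skill: /qa',
  '[/gstack-context]',
].join('\n');

for (const file of files) await $`git add ${file}`; // never `git add -A`
await $`git commit -m ${message}`;
```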

**When `/context-restore` runs,** it parses `[gstack-context]` blocks from WIP
commits on the current branch to reconstruct session state. When `/ship` runs, it
filter-squashes WIP commits only (preserving non-WIP commits) via a WIP-scoped
`git rebase` (see Step 15.0) so the PR contains clean bisectable commits.

If `CHECKPOINT_MODE` is `"explicit"` (the default): no auto-commit behavior. Commit
only when the user explicitly asks, or when a skill workflow (like /ship) runs a
commit step. Ignore this section entirely.

## Context Health (soft directive)

During long-running skill sessions, periodically write a brief `[PROGRESS]` summary
(2-3 sentences: what's done, what's next, any surprises). Example:

`[PROGRESS] Found 3 auth bugs. Fixed 2. Remaining: session expiry race in auth.ts:147. Next: write regression test.`

If you notice you're going in circles — repeating the same diagnostic, re-reading the
same file, or trying variants of a failed fix — STOP and reassess. Consider escalating
or calling /context-save to save progress and start fresh.

This is a soft nudge, not a measurable feature. No thresholds, no enforcement. The
goal is self-awareness during long sessions. If the session stays short, skip it.
Progress summaries must NEVER mutate git state — they are reporting, not committing.

## Question Tuning (skip entirely if `QUESTION_TUNING: false`)

**Before each AskUserQuestion.** Pick a registered `question_id` (see
@@ -669,80 +770,29 @@ remote binary only runs if telemetry is not off and the binary exists.

## Plan Mode Safe Operations

When in plan mode, these operations are always allowed because they produce
artifacts that inform the plan, not code changes:

- `$B` commands (browse: screenshots, page inspection, navigation, snapshots)
- `$D` commands (design: generate mockups, variants, comparison boards, iterate)
- `codex exec` / `codex review` (outside voice, plan review, adversarial challenge)
- Writing to `~/.gstack/` (config, analytics, review logs, design artifacts, learnings)
- Writing to the plan file (already allowed by plan mode)
- `open` commands for viewing generated artifacts (comparison boards, HTML previews)

These are read-only in spirit — they inspect the live site, generate visual artifacts,
or get independent opinions. They do NOT modify project source files.
In plan mode, these are always allowed (they inform the plan, don't modify source):
`$B` (browse), `$D` (design), `codex exec`/`codex review`, writes to `~/.gstack/`,
writes to the plan file, `open` for generated artifacts.

## Skill Invocation During Plan Mode

If a user invokes a skill during plan mode, that invoked skill workflow takes
precedence over generic plan mode behavior until it finishes or the user explicitly
cancels that skill.

Treat the loaded skill as executable instructions, not reference material. Follow
it step by step. Do not summarize, skip, reorder, or shortcut its steps.

If the skill says to use AskUserQuestion, do that. Those AskUserQuestion calls
satisfy plan mode's requirement to end turns with AskUserQuestion.

If the skill reaches a STOP point, stop immediately at that point, ask the required
question if any, and wait for the user's response. Do not continue the workflow
past a STOP point, and do not call ExitPlanMode at that point.

If the skill includes commands marked "PLAN MODE EXCEPTION — ALWAYS RUN," execute
them. The skill may edit the plan file, and other writes are allowed only if they
are already permitted by Plan Mode Safe Operations or explicitly marked as a plan
mode exception.

Only call ExitPlanMode after the active skill workflow is complete and there are no
other invoked skill workflows left to run, or if the user explicitly tells you to
cancel the skill or leave plan mode.
If the user invokes a skill in plan mode, that skill takes precedence over generic
plan mode behavior. Treat it as executable instructions, not reference. Follow it
step by step. AskUserQuestion calls satisfy plan mode's end-of-turn requirement.
At a STOP point, stop immediately; do not continue the workflow past it and do not
call ExitPlanMode there. Commands marked "PLAN MODE EXCEPTION — ALWAYS RUN" execute.
Other writes must already be permitted above or explicitly exception-marked. Call
ExitPlanMode only after the skill workflow completes, or when the user tells you to
cancel the skill or leave plan mode.

## Plan Status Footer

When you are in plan mode and about to call ExitPlanMode:
In plan mode, before ExitPlanMode: if the plan file lacks a `## GSTACK REVIEW REPORT`
section, run `~/.claude/skills/gstack/bin/gstack-review-read` and append a report.
With JSONL entries (before `---CONFIG---`), format the standard runs/status/findings
table. With `NO_REVIEWS` or empty, append a 5-row placeholder table (CEO/Codex/Eng/
Design/DX Review) with all zeros and verdict "NO REVIEWS YET — run `/autoplan`".
If a richer review report already exists, skip — review skills wrote it.

1. Check if the plan file already has a `## GSTACK REVIEW REPORT` section.
2. If it DOES — skip (a review skill already wrote a richer report).
3. If it does NOT — run this command:

\`\`\`bash
~/.claude/skills/gstack/bin/gstack-review-read
\`\`\`

Then write a `## GSTACK REVIEW REPORT` section to the end of the plan file:

- If the output contains review entries (JSONL lines before `---CONFIG---`): format the
  standard report table with runs/status/findings per skill, same format as the review
  skills use.
- If the output is `NO_REVIEWS` or empty: write this placeholder table:

\`\`\`markdown
## GSTACK REVIEW REPORT

| Review | Trigger | Why | Runs | Status | Findings |
|--------|---------|-----|------|--------|----------|
| CEO Review | \`/plan-ceo-review\` | Scope & strategy | 0 | — | — |
| Codex Review | \`/codex review\` | Independent 2nd opinion | 0 | — | — |
| Eng Review | \`/plan-eng-review\` | Architecture & tests (required) | 0 | — | — |
| Design Review | \`/plan-design-review\` | UI/UX gaps | 0 | — | — |
| DX Review | \`/plan-devex-review\` | Developer experience gaps | 0 | — | — |

**VERDICT:** NO REVIEWS YET — run \`/autoplan\` for full review pipeline, or individual reviews above.
\`\`\`

**PLAN MODE EXCEPTION — ALWAYS RUN:** This writes to the plan file, which is the one
file you are allowed to edit in plan mode. The plan file review report is part of the
plan's living status.
PLAN MODE EXCEPTION — always allowed (it's the plan file).
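
A sketch of the parsing rule just described: JSONL entries, then a `---CONFIG---` separator. The entry fields here are assumptions; match whatever `gstack-review-read` actually emits:

```ts
type ReviewEntry = { skill: string; status: string; findings: number }; // assumed shape

function parseReviewRead(output: string): ReviewEntry[] | 'NO_REVIEWS' {
  const trimmed = output.trim();
  if (trimmed === '' || trimmed === 'NO_REVIEWS') return 'NO_REVIEWS';
  const jsonl = trimmed.split('---CONFIG---')[0].trim(); // keep only the entries
  const entries = jsonl
    .split('\n')
    .filter((line) => line.startsWith('{'))
    .map((line) => JSON.parse(line) as ReviewEntry);
  return entries.length > 0 ? entries : 'NO_REVIEWS';
}
```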

## Step 0: Detect platform and base branch

@@ -1420,47 +1470,25 @@ Format: commit as `test: regression test for {what broke}`
Include BOTH code paths and user flows in the same diagram. Mark E2E-worthy and eval-worthy paths:

```
CODE PATH COVERAGE
===========================
[+] src/services/billing.ts
│
├── processPayment()
│   ├── [★★★ TESTED] Happy path + card declined + timeout — billing.test.ts:42
│   ├── [GAP] Network timeout — NO TEST
│   └── [GAP] Invalid currency — NO TEST
│
└── refundPayment()
    ├── [★★ TESTED] Full refund — billing.test.ts:89
    └── [★ TESTED] Partial refund (checks non-throw only) — billing.test.ts:101
CODE PATHS                                           USER FLOWS
[+] src/services/billing.ts                          [+] Payment checkout
├── processPayment()                                 ├── [★★★ TESTED] Complete purchase — checkout.e2e.ts:15
│   ├── [★★★ TESTED] happy + declined + timeout      ├── [GAP] [→E2E] Double-click submit
│   ├── [GAP] Network timeout                        └── [GAP] Navigate away mid-payment
│   └── [GAP] Invalid currency
└── refundPayment()                                  [+] Error states
    ├── [★★ TESTED] Full refund — :89                ├── [★★ TESTED] Card declined message
    └── [★ TESTED] Partial (non-throw only) — :101   └── [GAP] Network timeout UX

USER FLOW COVERAGE
===========================
[+] Payment checkout flow
│
├── [★★★ TESTED] Complete purchase — checkout.e2e.ts:15
├── [GAP] [→E2E] Double-click submit — needs E2E, not just unit
├── [GAP] Navigate away during payment — unit test sufficient
└── [★ TESTED] Form validation errors (checks render only) — checkout.test.ts:40
LLM integration: [GAP] [→EVAL] Prompt template change — needs eval test

[+] Error states
│
├── [★★ TESTED] Card declined message — billing.test.ts:58
├── [GAP] Network timeout UX (what does user see?) — NO TEST
└── [GAP] Empty cart submission — NO TEST

[+] LLM integration
│
└── [GAP] [→EVAL] Prompt template change — needs eval test

─────────────────────────────────
COVERAGE: 5/13 paths tested (38%)
Code paths: 3/5 (60%)
User flows: 2/8 (25%)
QUALITY: ★★★: 2  ★★: 2  ★: 1
GAPS: 8 paths need tests (2 need E2E, 1 needs eval)
─────────────────────────────────
COVERAGE: 5/13 paths tested (38%) | Code paths: 3/5 (60%) | User flows: 2/8 (25%)
QUALITY: ★★★:2 ★★:2 ★:1 | GAPS: 8 (2 E2E, 1 eval)
```

Legend: ★★★ behavior + edge + error | ★★ happy path | ★ smoke check
[→E2E] = needs integration test | [→EVAL] = needs LLM eval

**Fast path:** All paths covered → "Step 7: All new code paths have test coverage ✓" Continue.

**5. Generate tests for uncovered paths:**
@@ -2628,6 +2656,73 @@ Save this summary — it goes into the PR body in Step 19.

## Step 15: Commit (bisectable chunks)

### Step 15.0: WIP Commit Squash (continuous checkpoint mode only)

If `CHECKPOINT_MODE` is `"continuous"`, the branch likely contains `WIP:` commits
from auto-checkpointing. These must be squashed INTO the corresponding logical
commits before the bisectable-grouping logic in Step 15.1 runs. Non-WIP commits
on the branch (earlier landed work) must be preserved.

**Detection:**
```bash
WIP_COUNT=$(git log <base>..HEAD --oneline --grep="^WIP:" 2>/dev/null | wc -l | tr -d ' ')
echo "WIP_COMMITS: $WIP_COUNT"
```

If `WIP_COUNT` is 0: skip this sub-step entirely.

If `WIP_COUNT` > 0, collect the WIP context first so it survives the squash:

```bash
# Export [gstack-context] blocks from all WIP commits on this branch.
# This file becomes input to the CHANGELOG entry and may inform PR body context.
mkdir -p "$(git rev-parse --show-toplevel)/.gstack"
git log <base>..HEAD --grep="^WIP:" --format="%H%n%B%n---END---" > \
  "$(git rev-parse --show-toplevel)/.gstack/wip-context-before-squash.md" 2>/dev/null || true
```

**Non-destructive squash strategy:**

`git reset --soft <merge-base>` WOULD uncommit everything including non-WIP commits.
DO NOT DO THAT. Instead, use `git rebase` scoped to filter WIP commits only.

Option 1 (preferred, if there are non-WIP commits mixed in):
```bash
# Non-interactive rebase that marks every WIP commit as 'fixup': its message is
# dropped and its changes fold into the prior commit. GIT_SEQUENCE_EDITOR rewrites
# the rebase todo list, so no editor opens. (GNU sed shown; BSD/macOS sed needs -i '')
# If the FIRST commit in the range is itself a WIP commit there is nothing to fold
# it into; the rebase stops, so fall back to Option 2 or ask the user.
GIT_SEQUENCE_EDITOR="sed -i -e '/ WIP:/ s/^pick /fixup /'" \
  git rebase -i "$(git merge-base HEAD origin/<base>)" || {
  echo "Rebase conflict. Aborting: git rebase --abort"
  git rebase --abort
  echo "STATUS: BLOCKED — manual WIP squash required"
  exit 1
}
```

Option 2 (simpler, if the branch is ALL WIP commits so far — no landed work):
```bash
# Branch contains only WIP commits. Reset-soft is safe here because there's
# nothing non-WIP to preserve. Verify first.
NON_WIP=$(git log <base>..HEAD --oneline --invert-grep --grep="^WIP:" 2>/dev/null | wc -l | tr -d ' ')
if [ "$NON_WIP" -eq 0 ]; then
  git reset --soft $(git merge-base HEAD origin/<base>)
  echo "WIP-only branch, reset-soft to merge base. Step 15.1 will create clean commits."
fi
```

Decide at runtime which option applies. If unsure, prefer stopping and asking the
user via AskUserQuestion rather than destroying non-WIP commits.
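
The runtime decision reduces to the `NON_WIP` count already computed in Option 2. A sketch of the branch logic, where the base branch name is a placeholder for whatever Step 0 detected:

```ts
import { $ } from 'bun';

const base = 'main'; // assumption: use the base branch detected in Step 0
const nonWip = (
  await $`git log origin/${base}..HEAD --oneline --invert-grep --grep=^WIP:`.text()
).trim();

// Any non-WIP commit on the branch forces the scoped rebase (Option 1);
// a pure-WIP branch can take the simpler reset-soft path (Option 2).
const strategy = nonWip === '' ? 'option-2-reset-soft' : 'option-1-scoped-rebase';
console.log(`WIP squash strategy: ${strategy}`);
```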

**Anti-footgun rules:**
- NEVER blind `git reset --soft` if there are non-WIP commits. Codex flagged this
  as destructive — it would uncommit real landed work and turn the push step into
  a non-fast-forward push for anyone who already pushed.
- Only proceed to Step 15.1 after WIP commits are successfully squashed/absorbed
  or the branch has been verified to contain only WIP work.

### Step 15.1: Bisectable Commits

**Goal:** Create small, logical commits that work well with `git bisect` and help LLMs understand what changed.

1. Analyze the diff and group changes into logical commits. Each commit should represent **one coherent change** — not one file, but one logical unit.

+212 -117
@@ -44,16 +44,6 @@ _TEL_START=$(date +%s)
_SESSION_ID="$$-$(date +%s)"
echo "TELEMETRY: ${_TEL:-off}"
echo "TEL_PROMPTED: $_TEL_PROMPTED"
# Question tuning (opt-in; see /plan-tune + docs/designs/PLAN_TUNING_V0.md)
_QUESTION_TUNING=$($GSTACK_BIN/gstack-config get question_tuning 2>/dev/null || echo "false")
echo "QUESTION_TUNING: $_QUESTION_TUNING"
# Writing style (V1: default = ELI10-style, terse = V0 prose. See docs/designs/PLAN_TUNING_V1.md)
_EXPLAIN_LEVEL=$($GSTACK_BIN/gstack-config get explain_level 2>/dev/null || echo "default")
if [ "$_EXPLAIN_LEVEL" != "default" ] && [ "$_EXPLAIN_LEVEL" != "terse" ]; then _EXPLAIN_LEVEL="default"; fi
echo "EXPLAIN_LEVEL: $_EXPLAIN_LEVEL"
# V1 upgrade migration pending-prompt flag
_WRITING_STYLE_PENDING=$([ -f ~/.gstack/.writing-style-prompt-pending ] && echo "yes" || echo "no")
echo "WRITING_STYLE_PENDING: $_WRITING_STYLE_PENDING"
mkdir -p ~/.gstack/analytics
if [ "$_TEL" != "off" ]; then
  echo '{"skill":"ship","ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","repo":"'$(basename "$(git rev-parse --show-toplevel 2>/dev/null)" 2>/dev/null || echo "unknown")'"}' >> ~/.gstack/analytics/skill-usage.jsonl 2>/dev/null || true
@@ -98,6 +88,12 @@ if [ -d ".agents/skills/gstack" ] && [ ! -L ".agents/skills/gstack" ]; then
fi
fi
echo "VENDORED_GSTACK: $_VENDORED"
echo "MODEL_OVERLAY: claude"
# Checkpoint mode (explicit = no auto-commit, continuous = WIP commits as you go)
_CHECKPOINT_MODE=$($GSTACK_BIN/gstack-config get checkpoint_mode 2>/dev/null || echo "explicit")
_CHECKPOINT_PUSH=$($GSTACK_BIN/gstack-config get checkpoint_push 2>/dev/null || echo "false")
echo "CHECKPOINT_MODE: $_CHECKPOINT_MODE"
echo "CHECKPOINT_PUSH: $_CHECKPOINT_PUSH"
# Detect spawned session (OpenClaw or other orchestrator)
[ -n "$OPENCLAW_SESSION" ] && echo "SPAWNED_SESSION: true" || true
```
@@ -113,7 +109,38 @@ or invoking other gstack skills, use the `/gstack-` prefix (e.g., `/gstack-qa` instead
of `/qa`, `/gstack-ship` instead of `/ship`). Disk paths are unaffected — always use
`$GSTACK_ROOT/[skill-name]/SKILL.md` for reading skill files.

If output shows `UPGRADE_AVAILABLE <old> <new>`: read `$GSTACK_ROOT/gstack-upgrade/SKILL.md` and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined). If `JUST_UPGRADED <from> <to>`: tell user "Running gstack v{to} (just updated!)" and continue.
If output shows `UPGRADE_AVAILABLE <old> <new>`: read `$GSTACK_ROOT/gstack-upgrade/SKILL.md` and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined).

If output shows `JUST_UPGRADED <from> <to>` AND `SPAWNED_SESSION` is NOT set: tell
the user "Running gstack v{to} (just updated!)" and then check for new features to
surface. For each per-feature marker below, if the marker file is missing AND the
feature is plausibly useful for this user, use AskUserQuestion to let them try it.
Fire once per feature per user, NOT once per upgrade.

**In spawned sessions (`SPAWNED_SESSION` = "true"): SKIP feature discovery entirely.**
Just print "Running gstack v{to}" and continue. Orchestrators do not want interactive
prompts from sub-sessions.

**Feature discovery markers and prompts** (one at a time, max one per session):

1. `$GSTACK_ROOT/.feature-prompted-continuous-checkpoint` →
   Prompt: "Continuous checkpoint auto-commits your work as you go with `WIP:` prefix
   so you never lose progress to a crash. Local-only by default — doesn't push
   anywhere unless you turn that on. Want to try it?"
   Options: A) Enable continuous mode, B) Show me first (print the section from
   the preamble Continuous Checkpoint Mode), C) Skip.
   If A: run `$GSTACK_BIN/gstack-config set checkpoint_mode continuous`.
   Always: `touch $GSTACK_ROOT/.feature-prompted-continuous-checkpoint`

2. `$GSTACK_ROOT/.feature-prompted-model-overlay` →
   Inform only (no prompt): "Model overlays are active. `MODEL_OVERLAY: {model}`
   shown in the preamble output tells you which behavioral patch is applied.
   Override with `--model` when regenerating skills (e.g., `bun run gen:skill-docs
   --model gpt-5.4`). Default is claude."
   Always: `touch $GSTACK_ROOT/.feature-prompted-model-overlay`

After handling JUST_UPGRADED (prompts done or skipped), continue with the skill
workflow.

If `WRITING_STYLE_PENDING` is `yes`: You're on the first skill run after upgrading
to gstack v1. Ask the user once about the new default writing style. Use AskUserQuestion:
@@ -238,8 +265,7 @@ Key routing rules:
- Design system, brand → invoke design-consultation
- Visual audit, design polish → invoke design-review
- Architecture review → invoke plan-eng-review
- Save progress, save state, save my work → invoke context-save
- Resume, where was I, pick up where I left off → invoke context-restore
- Save progress, checkpoint, resume → invoke checkpoint
- Code quality, health check → invoke health
```

@@ -289,7 +315,23 @@ AI orchestrator (e.g., OpenClaw). In spawned sessions:
- Focus on completing the task and reporting results via prose output.
- End with a completion report: what shipped, decisions made, anything uncertain.

## Model-Specific Behavioral Patch (claude)

The following nudges are tuned for the claude model family. They are
**subordinate** to skill workflow, STOP points, AskUserQuestion gates, plan-mode
safety, and /ship review gates. If a nudge below conflicts with skill instructions,
the skill wins. Treat these as preferences, not rules.

**Todo-list discipline.** When working through a multi-step plan, mark each task
complete individually as you finish it. Do not batch-complete at the end. If a task
turns out to be unnecessary, mark it skipped with a one-line reason.

**Think before heavy actions.** For complex operations (refactors, migrations,
non-trivial new features), briefly state your approach before executing. This lets
the user course-correct cheaply instead of mid-flight.

**Dedicated tools over Bash.** Prefer Read, Edit, Write, Glob, Grep over shell
equivalents (cat, sed, find, grep). The dedicated tools are cheaper and clearer.

## Voice

@@ -523,6 +565,65 @@ Ask the user. Do not guess on architectural or data model decisions.

This does NOT apply to routine coding, small features, or obvious changes.

## Continuous Checkpoint Mode

If `CHECKPOINT_MODE` is `"continuous"` (from preamble output): auto-commit work as
you go with `WIP:` prefix so session state survives crashes and context switches.

**When to commit (continuous mode only):**
- After creating a new file (not scratch/temp files)
- After finishing a function/component/module
- After fixing a bug that's verified by a passing test
- Before any long-running operation (install, full build, full test suite)

**Commit format** — include structured context in the body:

```
WIP: <concise description of what changed>

[gstack-context]
Decisions: <key choices made this step>
Remaining: <what's left in the logical unit>
Tried: <failed approaches worth recording> (omit if none)
Skill: </skill-name-if-running>
[/gstack-context]
```

**Rules:**
- Stage only files you intentionally changed. NEVER `git add -A` in continuous mode.
- Do NOT commit with known-broken tests. Fix first, then commit. The `[gstack-context]`
  values MUST reflect a clean state.
- Do NOT commit mid-edit. Finish the logical unit.
- Push ONLY if `CHECKPOINT_PUSH` is `"true"` (default is false). Pushing WIP commits
  to a shared remote can trigger CI, deploys, and expose secrets — that is why push
  is opt-in, not default.
- Background discipline — do NOT announce each commit to the user. They can see
  `git log` whenever they want.

**When `/context-restore` runs,** it parses `[gstack-context]` blocks from WIP
commits on the current branch to reconstruct session state. When `/ship` runs, it
filter-squashes WIP commits only (preserving non-WIP commits) via a WIP-scoped
`git rebase` (see Step 15.0) so the PR contains clean bisectable commits.

If `CHECKPOINT_MODE` is `"explicit"` (the default): no auto-commit behavior. Commit
only when the user explicitly asks, or when a skill workflow (like /ship) runs a
commit step. Ignore this section entirely.

## Context Health (soft directive)

During long-running skill sessions, periodically write a brief `[PROGRESS]` summary
(2-3 sentences: what's done, what's next, any surprises). Example:

`[PROGRESS] Found 3 auth bugs. Fixed 2. Remaining: session expiry race in auth.ts:147. Next: write regression test.`

If you notice you're going in circles — repeating the same diagnostic, re-reading the
same file, or trying variants of a failed fix — STOP and reassess. Consider escalating
or calling /context-save to save progress and start fresh.

This is a soft nudge, not a measurable feature. No thresholds, no enforcement. The
goal is self-awareness during long sessions. If the session stays short, skip it.
Progress summaries must NEVER mutate git state — they are reporting, not committing.

## Question Tuning (skip entirely if `QUESTION_TUNING: false`)

**Before each AskUserQuestion.** Pick a registered `question_id` (see
@@ -658,80 +759,29 @@ remote binary only runs if telemetry is not off and the binary exists.

## Plan Mode Safe Operations

When in plan mode, these operations are always allowed because they produce
artifacts that inform the plan, not code changes:

- `$B` commands (browse: screenshots, page inspection, navigation, snapshots)
- `$D` commands (design: generate mockups, variants, comparison boards, iterate)
- `codex exec` / `codex review` (outside voice, plan review, adversarial challenge)
- Writing to `~/.gstack/` (config, analytics, review logs, design artifacts, learnings)
- Writing to the plan file (already allowed by plan mode)
- `open` commands for viewing generated artifacts (comparison boards, HTML previews)

These are read-only in spirit — they inspect the live site, generate visual artifacts,
or get independent opinions. They do NOT modify project source files.
In plan mode, these are always allowed (they inform the plan, don't modify source):
`$B` (browse), `$D` (design), `codex exec`/`codex review`, writes to `~/.gstack/`,
writes to the plan file, `open` for generated artifacts.

## Skill Invocation During Plan Mode

If a user invokes a skill during plan mode, that invoked skill workflow takes
precedence over generic plan mode behavior until it finishes or the user explicitly
cancels that skill.

Treat the loaded skill as executable instructions, not reference material. Follow
it step by step. Do not summarize, skip, reorder, or shortcut its steps.

If the skill says to use AskUserQuestion, do that. Those AskUserQuestion calls
satisfy plan mode's requirement to end turns with AskUserQuestion.

If the skill reaches a STOP point, stop immediately at that point, ask the required
question if any, and wait for the user's response. Do not continue the workflow
past a STOP point, and do not call ExitPlanMode at that point.

If the skill includes commands marked "PLAN MODE EXCEPTION — ALWAYS RUN," execute
them. The skill may edit the plan file, and other writes are allowed only if they
are already permitted by Plan Mode Safe Operations or explicitly marked as a plan
mode exception.

Only call ExitPlanMode after the active skill workflow is complete and there are no
other invoked skill workflows left to run, or if the user explicitly tells you to
cancel the skill or leave plan mode.
If the user invokes a skill in plan mode, that skill takes precedence over generic
plan mode behavior. Treat it as executable instructions, not reference. Follow it
step by step. AskUserQuestion calls satisfy plan mode's end-of-turn requirement.
At a STOP point, stop immediately; do not continue the workflow past it and do not
call ExitPlanMode there. Commands marked "PLAN MODE EXCEPTION — ALWAYS RUN" execute.
Other writes must already be permitted above or explicitly exception-marked. Call
ExitPlanMode only after the skill workflow completes, or when the user tells you to
cancel the skill or leave plan mode.

## Plan Status Footer

When you are in plan mode and about to call ExitPlanMode:
In plan mode, before ExitPlanMode: if the plan file lacks a `## GSTACK REVIEW REPORT`
section, run `$GSTACK_ROOT/bin/gstack-review-read` and append a report.
With JSONL entries (before `---CONFIG---`), format the standard runs/status/findings
table. With `NO_REVIEWS` or empty, append a 5-row placeholder table (CEO/Codex/Eng/
Design/DX Review) with all zeros and verdict "NO REVIEWS YET — run `/autoplan`".
If a richer review report already exists, skip — review skills wrote it.

1. Check if the plan file already has a `## GSTACK REVIEW REPORT` section.
2. If it DOES — skip (a review skill already wrote a richer report).
3. If it does NOT — run this command:

\`\`\`bash
$GSTACK_ROOT/bin/gstack-review-read
\`\`\`

Then write a `## GSTACK REVIEW REPORT` section to the end of the plan file:

- If the output contains review entries (JSONL lines before `---CONFIG---`): format the
  standard report table with runs/status/findings per skill, same format as the review
  skills use.
- If the output is `NO_REVIEWS` or empty: write this placeholder table:

\`\`\`markdown
## GSTACK REVIEW REPORT

| Review | Trigger | Why | Runs | Status | Findings |
|--------|---------|-----|------|--------|----------|
| CEO Review | \`/plan-ceo-review\` | Scope & strategy | 0 | — | — |
| Codex Review | \`/codex review\` | Independent 2nd opinion | 0 | — | — |
| Eng Review | \`/plan-eng-review\` | Architecture & tests (required) | 0 | — | — |
| Design Review | \`/plan-design-review\` | UI/UX gaps | 0 | — | — |
| DX Review | \`/plan-devex-review\` | Developer experience gaps | 0 | — | — |

**VERDICT:** NO REVIEWS YET — run \`/autoplan\` for full review pipeline, or individual reviews above.
\`\`\`

**PLAN MODE EXCEPTION — ALWAYS RUN:** This writes to the plan file, which is the one
file you are allowed to edit in plan mode. The plan file review report is part of the
plan's living status.
PLAN MODE EXCEPTION — always allowed (it's the plan file).

## Step 0: Detect platform and base branch

@@ -1409,47 +1459,25 @@ Format: commit as `test: regression test for {what broke}`
Include BOTH code paths and user flows in the same diagram. Mark E2E-worthy and eval-worthy paths:

```
CODE PATH COVERAGE
===========================
[+] src/services/billing.ts
│
├── processPayment()
│   ├── [★★★ TESTED] Happy path + card declined + timeout — billing.test.ts:42
│   ├── [GAP] Network timeout — NO TEST
│   └── [GAP] Invalid currency — NO TEST
│
└── refundPayment()
    ├── [★★ TESTED] Full refund — billing.test.ts:89
    └── [★ TESTED] Partial refund (checks non-throw only) — billing.test.ts:101
CODE PATHS                                           USER FLOWS
[+] src/services/billing.ts                          [+] Payment checkout
├── processPayment()                                 ├── [★★★ TESTED] Complete purchase — checkout.e2e.ts:15
│   ├── [★★★ TESTED] happy + declined + timeout      ├── [GAP] [→E2E] Double-click submit
│   ├── [GAP] Network timeout                        └── [GAP] Navigate away mid-payment
│   └── [GAP] Invalid currency
└── refundPayment()                                  [+] Error states
    ├── [★★ TESTED] Full refund — :89                ├── [★★ TESTED] Card declined message
    └── [★ TESTED] Partial (non-throw only) — :101   └── [GAP] Network timeout UX

USER FLOW COVERAGE
===========================
[+] Payment checkout flow
│
├── [★★★ TESTED] Complete purchase — checkout.e2e.ts:15
├── [GAP] [→E2E] Double-click submit — needs E2E, not just unit
├── [GAP] Navigate away during payment — unit test sufficient
└── [★ TESTED] Form validation errors (checks render only) — checkout.test.ts:40
LLM integration: [GAP] [→EVAL] Prompt template change — needs eval test

[+] Error states
│
├── [★★ TESTED] Card declined message — billing.test.ts:58
├── [GAP] Network timeout UX (what does user see?) — NO TEST
└── [GAP] Empty cart submission — NO TEST

[+] LLM integration
│
└── [GAP] [→EVAL] Prompt template change — needs eval test

─────────────────────────────────
COVERAGE: 5/13 paths tested (38%)
Code paths: 3/5 (60%)
User flows: 2/8 (25%)
QUALITY: ★★★: 2  ★★: 2  ★: 1
GAPS: 8 paths need tests (2 need E2E, 1 needs eval)
─────────────────────────────────
COVERAGE: 5/13 paths tested (38%) | Code paths: 3/5 (60%) | User flows: 2/8 (25%)
QUALITY: ★★★:2 ★★:2 ★:1 | GAPS: 8 (2 E2E, 1 eval)
```

Legend: ★★★ behavior + edge + error | ★★ happy path | ★ smoke check
[→E2E] = needs integration test | [→EVAL] = needs LLM eval

**Fast path:** All paths covered → "Step 7: All new code paths have test coverage ✓" Continue.

**5. Generate tests for uncovered paths:**
@@ -2243,6 +2271,73 @@ Save this summary — it goes into the PR body in Step 19.

## Step 15: Commit (bisectable chunks)

### Step 15.0: WIP Commit Squash (continuous checkpoint mode only)

If `CHECKPOINT_MODE` is `"continuous"`, the branch likely contains `WIP:` commits
from auto-checkpointing. These must be squashed INTO the corresponding logical
commits before the bisectable-grouping logic in Step 15.1 runs. Non-WIP commits
on the branch (earlier landed work) must be preserved.

**Detection:**
```bash
WIP_COUNT=$(git log <base>..HEAD --oneline --grep="^WIP:" 2>/dev/null | wc -l | tr -d ' ')
echo "WIP_COMMITS: $WIP_COUNT"
```

If `WIP_COUNT` is 0: skip this sub-step entirely.

If `WIP_COUNT` > 0, collect the WIP context first so it survives the squash:

```bash
# Export [gstack-context] blocks from all WIP commits on this branch.
# This file becomes input to the CHANGELOG entry and may inform PR body context.
mkdir -p "$(git rev-parse --show-toplevel)/.gstack"
git log <base>..HEAD --grep="^WIP:" --format="%H%n%B%n---END---" > \
  "$(git rev-parse --show-toplevel)/.gstack/wip-context-before-squash.md" 2>/dev/null || true
```

**Non-destructive squash strategy:**

`git reset --soft <merge-base>` WOULD uncommit everything including non-WIP commits.
DO NOT DO THAT. Instead, use `git rebase` scoped to filter WIP commits only.

Option 1 (preferred, if there are non-WIP commits mixed in):
```bash
# Non-interactive rebase that marks every WIP commit as 'fixup': its message is
# dropped and its changes fold into the prior commit. GIT_SEQUENCE_EDITOR rewrites
# the rebase todo list, so no editor opens. (GNU sed shown; BSD/macOS sed needs -i '')
# If the FIRST commit in the range is itself a WIP commit there is nothing to fold
# it into; the rebase stops, so fall back to Option 2 or ask the user.
GIT_SEQUENCE_EDITOR="sed -i -e '/ WIP:/ s/^pick /fixup /'" \
  git rebase -i "$(git merge-base HEAD origin/<base>)" || {
  echo "Rebase conflict. Aborting: git rebase --abort"
  git rebase --abort
  echo "STATUS: BLOCKED — manual WIP squash required"
  exit 1
}
```

Option 2 (simpler, if the branch is ALL WIP commits so far — no landed work):
```bash
# Branch contains only WIP commits. Reset-soft is safe here because there's
# nothing non-WIP to preserve. Verify first.
NON_WIP=$(git log <base>..HEAD --oneline --invert-grep --grep="^WIP:" 2>/dev/null | wc -l | tr -d ' ')
if [ "$NON_WIP" -eq 0 ]; then
  git reset --soft $(git merge-base HEAD origin/<base>)
  echo "WIP-only branch, reset-soft to merge base. Step 15.1 will create clean commits."
fi
```

Decide at runtime which option applies. If unsure, prefer stopping and asking the
user via AskUserQuestion rather than destroying non-WIP commits.

**Anti-footgun rules:**
- NEVER blind `git reset --soft` if there are non-WIP commits. Codex flagged this
  as destructive — it would uncommit real landed work and turn the push step into
  a non-fast-forward push for anyone who already pushed.
- Only proceed to Step 15.1 after WIP commits are successfully squashed/absorbed
  or the branch has been verified to contain only WIP work.

### Step 15.1: Bisectable Commits

**Goal:** Create small, logical commits that work well with `git bisect` and help LLMs understand what changed.

1. Analyze the diff and group changes into logical commits. Each commit should represent **one coherent change** — not one file, but one logical unit.

+212 -117
@@ -46,16 +46,6 @@ _TEL_START=$(date +%s)
_SESSION_ID="$$-$(date +%s)"
echo "TELEMETRY: ${_TEL:-off}"
echo "TEL_PROMPTED: $_TEL_PROMPTED"
# Question tuning (opt-in; see /plan-tune + docs/designs/PLAN_TUNING_V0.md)
_QUESTION_TUNING=$($GSTACK_BIN/gstack-config get question_tuning 2>/dev/null || echo "false")
echo "QUESTION_TUNING: $_QUESTION_TUNING"
# Writing style (V1: default = ELI10-style, terse = V0 prose. See docs/designs/PLAN_TUNING_V1.md)
_EXPLAIN_LEVEL=$($GSTACK_BIN/gstack-config get explain_level 2>/dev/null || echo "default")
if [ "$_EXPLAIN_LEVEL" != "default" ] && [ "$_EXPLAIN_LEVEL" != "terse" ]; then _EXPLAIN_LEVEL="default"; fi
echo "EXPLAIN_LEVEL: $_EXPLAIN_LEVEL"
# V1 upgrade migration pending-prompt flag
_WRITING_STYLE_PENDING=$([ -f ~/.gstack/.writing-style-prompt-pending ] && echo "yes" || echo "no")
echo "WRITING_STYLE_PENDING: $_WRITING_STYLE_PENDING"
mkdir -p ~/.gstack/analytics
if [ "$_TEL" != "off" ]; then
  echo '{"skill":"ship","ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","repo":"'$(basename "$(git rev-parse --show-toplevel 2>/dev/null)" 2>/dev/null || echo "unknown")'"}' >> ~/.gstack/analytics/skill-usage.jsonl 2>/dev/null || true
@@ -100,6 +90,12 @@ if [ -d ".factory/skills/gstack" ] && [ ! -L ".factory/skills/gstack" ]; then
fi
fi
echo "VENDORED_GSTACK: $_VENDORED"
echo "MODEL_OVERLAY: claude"
# Checkpoint mode (explicit = no auto-commit, continuous = WIP commits as you go)
_CHECKPOINT_MODE=$($GSTACK_BIN/gstack-config get checkpoint_mode 2>/dev/null || echo "explicit")
_CHECKPOINT_PUSH=$($GSTACK_BIN/gstack-config get checkpoint_push 2>/dev/null || echo "false")
echo "CHECKPOINT_MODE: $_CHECKPOINT_MODE"
echo "CHECKPOINT_PUSH: $_CHECKPOINT_PUSH"
# Detect spawned session (OpenClaw or other orchestrator)
[ -n "$OPENCLAW_SESSION" ] && echo "SPAWNED_SESSION: true" || true
```
@@ -115,7 +111,38 @@ or invoking other gstack skills, use the `/gstack-` prefix (e.g., `/gstack-qa` instead
of `/qa`, `/gstack-ship` instead of `/ship`). Disk paths are unaffected — always use
`$GSTACK_ROOT/[skill-name]/SKILL.md` for reading skill files.

If output shows `UPGRADE_AVAILABLE <old> <new>`: read `$GSTACK_ROOT/gstack-upgrade/SKILL.md` and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined). If `JUST_UPGRADED <from> <to>`: tell user "Running gstack v{to} (just updated!)" and continue.
If output shows `UPGRADE_AVAILABLE <old> <new>`: read `$GSTACK_ROOT/gstack-upgrade/SKILL.md` and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined).

If output shows `JUST_UPGRADED <from> <to>` AND `SPAWNED_SESSION` is NOT set: tell
the user "Running gstack v{to} (just updated!)" and then check for new features to
surface. For each per-feature marker below, if the marker file is missing AND the
feature is plausibly useful for this user, use AskUserQuestion to let them try it.
Fire once per feature per user, NOT once per upgrade.

**In spawned sessions (`SPAWNED_SESSION` = "true"): SKIP feature discovery entirely.**
Just print "Running gstack v{to}" and continue. Orchestrators do not want interactive
prompts from sub-sessions.

**Feature discovery markers and prompts** (one at a time, max one per session):

1. `$GSTACK_ROOT/.feature-prompted-continuous-checkpoint` →
   Prompt: "Continuous checkpoint auto-commits your work as you go with `WIP:` prefix
   so you never lose progress to a crash. Local-only by default — doesn't push
   anywhere unless you turn that on. Want to try it?"
   Options: A) Enable continuous mode, B) Show me first (print the section from
   the preamble Continuous Checkpoint Mode), C) Skip.
   If A: run `$GSTACK_BIN/gstack-config set checkpoint_mode continuous`.
   Always: `touch $GSTACK_ROOT/.feature-prompted-continuous-checkpoint`

2. `$GSTACK_ROOT/.feature-prompted-model-overlay` →
   Inform only (no prompt): "Model overlays are active. `MODEL_OVERLAY: {model}`
   shown in the preamble output tells you which behavioral patch is applied.
   Override with `--model` when regenerating skills (e.g., `bun run gen:skill-docs
   --model gpt-5.4`). Default is claude."
   Always: `touch $GSTACK_ROOT/.feature-prompted-model-overlay`

After handling JUST_UPGRADED (prompts done or skipped), continue with the skill
workflow.

If `WRITING_STYLE_PENDING` is `yes`: You're on the first skill run after upgrading
to gstack v1. Ask the user once about the new default writing style. Use AskUserQuestion:
@@ -240,8 +267,7 @@ Key routing rules:
- Design system, brand → invoke design-consultation
- Visual audit, design polish → invoke design-review
- Architecture review → invoke plan-eng-review
- Save progress, save state, save my work → invoke context-save
- Resume, where was I, pick up where I left off → invoke context-restore
- Save progress, checkpoint, resume → invoke checkpoint
- Code quality, health check → invoke health
```

@@ -291,7 +317,23 @@ AI orchestrator (e.g., OpenClaw). In spawned sessions:
- Focus on completing the task and reporting results via prose output.
- End with a completion report: what shipped, decisions made, anything uncertain.

## Model-Specific Behavioral Patch (claude)

The following nudges are tuned for the claude model family. They are
**subordinate** to skill workflow, STOP points, AskUserQuestion gates, plan-mode
safety, and /ship review gates. If a nudge below conflicts with skill instructions,
the skill wins. Treat these as preferences, not rules.

**Todo-list discipline.** When working through a multi-step plan, mark each task
complete individually as you finish it. Do not batch-complete at the end. If a task
turns out to be unnecessary, mark it skipped with a one-line reason.

**Think before heavy actions.** For complex operations (refactors, migrations,
non-trivial new features), briefly state your approach before executing. This lets
the user course-correct cheaply instead of mid-flight.

**Dedicated tools over Bash.** Prefer Read, Edit, Write, Glob, Grep over shell
equivalents (cat, sed, find, grep). The dedicated tools are cheaper and clearer.

## Voice

@@ -525,6 +567,65 @@ Ask the user. Do not guess on architectural or data model decisions.

This does NOT apply to routine coding, small features, or obvious changes.

## Continuous Checkpoint Mode

If `CHECKPOINT_MODE` is `"continuous"` (from preamble output): auto-commit work as
you go with `WIP:` prefix so session state survives crashes and context switches.

**When to commit (continuous mode only):**
- After creating a new file (not scratch/temp files)
- After finishing a function/component/module
- After fixing a bug that's verified by a passing test
- Before any long-running operation (install, full build, full test suite)

**Commit format** — include structured context in the body:

```
WIP: <concise description of what changed>

[gstack-context]
Decisions: <key choices made this step>
Remaining: <what's left in the logical unit>
Tried: <failed approaches worth recording> (omit if none)
Skill: </skill-name-if-running>
[/gstack-context]
```

**Rules:**
- Stage only files you intentionally changed. NEVER `git add -A` in continuous mode.
- Do NOT commit with known-broken tests. Fix first, then commit. The `[gstack-context]`
  values MUST reflect a clean state.
- Do NOT commit mid-edit. Finish the logical unit.
- Push ONLY if `CHECKPOINT_PUSH` is `"true"` (default is false). Pushing WIP commits
  to a shared remote can trigger CI, deploys, and expose secrets — that is why push
  is opt-in, not default.
- Background discipline — do NOT announce each commit to the user. They can see
  `git log` whenever they want.

**When `/context-restore` runs,** it parses `[gstack-context]` blocks from WIP
commits on the current branch to reconstruct session state. When `/ship` runs, it
filter-squashes WIP commits only (preserving non-WIP commits) via a WIP-scoped
`git rebase` (see Step 15.0) so the PR contains clean bisectable commits.
|
||||
|
||||
If `CHECKPOINT_MODE` is `"explicit"` (the default): no auto-commit behavior. Commit
only when the user explicitly asks, or when a skill workflow (like /ship) runs a
commit step. Ignore this section entirely.

## Context Health (soft directive)

During long-running skill sessions, periodically write a brief `[PROGRESS]` summary
(2-3 sentences: what's done, what's next, any surprises). Example:

`[PROGRESS] Found 3 auth bugs. Fixed 2. Remaining: session expiry race in auth.ts:147. Next: write regression test.`

If you notice you're going in circles — repeating the same diagnostic, re-reading the
same file, or trying variants of a failed fix — STOP and reassess. Consider escalating
or calling /context-save to save progress and start fresh.

This is a soft nudge, not a measurable feature. No thresholds, no enforcement. The
goal is self-awareness during long sessions. If the session stays short, skip it.
Progress summaries must NEVER mutate git state — they are reporting, not committing.

## Question Tuning (skip entirely if `QUESTION_TUNING: false`)

**Before each AskUserQuestion.** Pick a registered `question_id` (see
@@ -660,80 +761,29 @@ remote binary only runs if telemetry is not off and the binary exists.

## Plan Mode Safe Operations

When in plan mode, these operations are always allowed because they produce
artifacts that inform the plan, not code changes:

- `$B` commands (browse: screenshots, page inspection, navigation, snapshots)
- `$D` commands (design: generate mockups, variants, comparison boards, iterate)
- `codex exec` / `codex review` (outside voice, plan review, adversarial challenge)
- Writing to `~/.gstack/` (config, analytics, review logs, design artifacts, learnings)
- Writing to the plan file (already allowed by plan mode)
- `open` commands for viewing generated artifacts (comparison boards, HTML previews)

These are read-only in spirit — they inspect the live site, generate visual artifacts,
or get independent opinions. They do NOT modify project source files.

## Skill Invocation During Plan Mode

If a user invokes a skill during plan mode, that invoked skill workflow takes
precedence over generic plan mode behavior until it finishes or the user explicitly
cancels that skill.

Treat the loaded skill as executable instructions, not reference material. Follow
it step by step. Do not summarize, skip, reorder, or shortcut its steps.

If the skill says to use AskUserQuestion, do that. Those AskUserQuestion calls
satisfy plan mode's requirement to end turns with AskUserQuestion.

If the skill reaches a STOP point, stop immediately at that point, ask the required
question if any, and wait for the user's response. Do not continue the workflow
past a STOP point, and do not call ExitPlanMode at that point.

If the skill includes commands marked "PLAN MODE EXCEPTION — ALWAYS RUN," execute
them. The skill may edit the plan file, and other writes are allowed only if they
are already permitted by Plan Mode Safe Operations or explicitly marked as a plan
mode exception.

Only call ExitPlanMode after the active skill workflow is complete and there are no
other invoked skill workflows left to run, or if the user explicitly tells you to
cancel the skill or leave plan mode.

## Plan Status Footer

When you are in plan mode and about to call ExitPlanMode:

1. Check if the plan file already has a `## GSTACK REVIEW REPORT` section.
2. If it DOES — skip (a review skill already wrote a richer report).
3. If it does NOT — run this command:

\`\`\`bash
$GSTACK_ROOT/bin/gstack-review-read
\`\`\`

Then write a `## GSTACK REVIEW REPORT` section to the end of the plan file:

- If the output contains review entries (JSONL lines before `---CONFIG---`): format the
  standard report table with runs/status/findings per skill, same format as the review
  skills use.
- If the output is `NO_REVIEWS` or empty: write this placeholder table:

\`\`\`markdown
## GSTACK REVIEW REPORT

| Review | Trigger | Why | Runs | Status | Findings |
|--------|---------|-----|------|--------|----------|
| CEO Review | \`/plan-ceo-review\` | Scope & strategy | 0 | — | — |
| Codex Review | \`/codex review\` | Independent 2nd opinion | 0 | — | — |
| Eng Review | \`/plan-eng-review\` | Architecture & tests (required) | 0 | — | — |
| Design Review | \`/plan-design-review\` | UI/UX gaps | 0 | — | — |
| DX Review | \`/plan-devex-review\` | Developer experience gaps | 0 | — | — |

**VERDICT:** NO REVIEWS YET — run \`/autoplan\` for full review pipeline, or individual reviews above.
\`\`\`

**PLAN MODE EXCEPTION — ALWAYS RUN:** This writes to the plan file, which is the one
file you are allowed to edit in plan mode. The plan file review report is part of the
plan's living status.

## Step 0: Detect platform and base branch

@@ -1411,47 +1461,25 @@ Format: commit as `test: regression test for {what broke}`
Include BOTH code paths and user flows in the same diagram. Mark E2E-worthy and eval-worthy paths:

```
CODE PATH COVERAGE
===========================
[+] src/services/billing.ts
 │
 ├── processPayment()
 │    ├── [★★★ TESTED] Happy path + card declined + timeout — billing.test.ts:42
 │    ├── [GAP] Network timeout — NO TEST
 │    └── [GAP] Invalid currency — NO TEST
 │
 └── refundPayment()
      ├── [★★ TESTED] Full refund — billing.test.ts:89
      └── [★ TESTED] Partial refund (checks non-throw only) — billing.test.ts:101

USER FLOW COVERAGE
===========================
[+] Payment checkout flow
 │
 ├── [★★★ TESTED] Complete purchase — checkout.e2e.ts:15
 ├── [GAP] [→E2E] Double-click submit — needs E2E, not just unit
 ├── [GAP] Navigate away during payment — unit test sufficient
 └── [★ TESTED] Form validation errors (checks render only) — checkout.test.ts:40

[+] Error states
 │
 ├── [★★ TESTED] Card declined message — billing.test.ts:58
 ├── [GAP] Network timeout UX (what does user see?) — NO TEST
 └── [GAP] Empty cart submission — NO TEST

[+] LLM integration
 │
 └── [GAP] [→EVAL] Prompt template change — needs eval test

─────────────────────────────────
COVERAGE: 6/13 paths tested (46%)
  Code paths: 3/5 (60%)
  User flows: 3/8 (38%)
QUALITY: ★★★: 2  ★★: 2  ★: 2
GAPS: 7 paths need tests (1 needs E2E, 1 needs eval)
─────────────────────────────────
```

Legend: ★★★ behavior + edge + error | ★★ happy path | ★ smoke check
[→E2E] = needs integration test | [→EVAL] = needs LLM eval

**Fast path:** All paths covered → "Step 7: All new code paths have test coverage ✓" Continue.

**5. Generate tests for uncovered paths:**

@@ -2619,6 +2647,73 @@ Save this summary — it goes into the PR body in Step 19.

## Step 15: Commit (bisectable chunks)

### Step 15.0: WIP Commit Squash (continuous checkpoint mode only)

If `CHECKPOINT_MODE` is `"continuous"`, the branch likely contains `WIP:` commits
from auto-checkpointing. These must be squashed INTO the corresponding logical
commits before the bisectable-grouping logic in Step 15.1 runs. Non-WIP commits
on the branch (earlier landed work) must be preserved.

**Detection:**
```bash
WIP_COUNT=$(git log <base>..HEAD --oneline --grep="^WIP:" 2>/dev/null | wc -l | tr -d ' ')
echo "WIP_COMMITS: $WIP_COUNT"
```

If `WIP_COUNT` is 0: skip this sub-step entirely.

If `WIP_COUNT` > 0, collect the WIP context first so it survives the squash:

```bash
# Export [gstack-context] blocks from all WIP commits on this branch.
# This file becomes input to the CHANGELOG entry and may inform PR body context.
mkdir -p "$(git rev-parse --show-toplevel)/.gstack"
git log <base>..HEAD --grep="^WIP:" --format="%H%n%B%n---END---" > \
  "$(git rev-parse --show-toplevel)/.gstack/wip-context-before-squash.md" 2>/dev/null || true
```

**Non-destructive squash strategy:**

`git reset --soft <merge-base>` WOULD uncommit everything including non-WIP commits.
DO NOT DO THAT. Instead, use `git rebase` scoped to filter WIP commits only.

Option 1 (preferred, if there are non-WIP commits mixed in):
```bash
# Automated WIP squashing: GIT_SEQUENCE_EDITOR rewrites the rebase todo list,
# marking every WIP commit as 'fixup' (drop its message, fold changes into the
# prior commit). Note: if the OLDEST commit in the range is itself a WIP commit,
# there is nothing before it to fold into; git will refuse. Stop and ask the user.
GIT_SEQUENCE_EDITOR="sed -i.bak -e 's/^pick \([0-9a-f]*\) WIP:/fixup \1 WIP:/'" \
git rebase -i $(git merge-base HEAD origin/<base>) || {
  echo "Rebase conflict. Aborting: git rebase --abort"
  git rebase --abort
  echo "STATUS: BLOCKED — manual WIP squash required"
  exit 1
}
```

Option 2 (simpler, if the branch is ALL WIP commits so far — no landed work):
```bash
# Branch contains only WIP commits. Reset-soft is safe here because there's
# nothing non-WIP to preserve. Verify first.
NON_WIP=$(git log <base>..HEAD --oneline --invert-grep --grep="^WIP:" 2>/dev/null | wc -l | tr -d ' ')
if [ "$NON_WIP" -eq 0 ]; then
  git reset --soft $(git merge-base HEAD origin/<base>)
  echo "WIP-only branch, reset-soft to merge base. Step 15.1 will create clean commits."
fi
```

Decide at runtime which option applies. If unsure, prefer stopping and asking the
user via AskUserQuestion rather than destroying non-WIP commits.

**Anti-footgun rules:**
- NEVER blind `git reset --soft` if there are non-WIP commits. Codex flagged this
  as destructive — it would uncommit real landed work and turn the push step into
  a non-fast-forward push for anyone who already pushed.
- Only proceed to Step 15.1 after WIP commits are successfully squashed/absorbed
  or the branch has been verified to contain only WIP work.

### Step 15.1: Bisectable Commits

**Goal:** Create small, logical commits that work well with `git bisect` and help LLMs understand what changed.

1. Analyze the diff and group changes into logical commits. Each commit should represent **one coherent change** — not one file, but one logical unit.

@@ -358,10 +358,17 @@ describe('gen-skill-docs', () => {
    const qaOnlyContent = fs.readFileSync(path.join(ROOT, 'qa-only', 'SKILL.md'), 'utf-8');
    expect(qaOnlyContent).toContain('Never fix bugs');
    expect(qaOnlyContent).toContain('NEVER fix anything');
    // Should not have Edit, Glob, or Grep in allowed-tools.
    // Scope to frontmatter (between the first two --- lines) — the body can
    // legitimately mention these tool names in prose (e.g., Claude model
    // overlay says "prefer Read, Edit, Write, Glob, Grep over Bash").
    const fmMatch = qaOnlyContent.match(/^---\n([\s\S]*?)\n---/);
    expect(fmMatch).not.toBeNull();
    const frontmatter = fmMatch![1];
    expect(frontmatter).toMatch(/allowed-tools:/);
    expect(frontmatter).not.toMatch(/allowed-tools:[\s\S]*?- Edit/);
    expect(frontmatter).not.toMatch(/allowed-tools:[\s\S]*?- Glob/);
    expect(frontmatter).not.toMatch(/allowed-tools:[\s\S]*?- Grep/);
  });

  test('qa has fix-loop tools and phases', () => {

@@ -0,0 +1,101 @@
/**
 * Benchmark quality judge — wraps llm-judge.ts for multi-provider scoring.
 *
 * The judge is always Anthropic SDK (claude-sonnet-4-6) for stability. It sees
 * the prompt + N provider outputs and scores each on: correctness, completeness,
 * code quality, edge case handling. 0-10 per dimension; overall = average.
 *
 * Judge adds ~$0.05 per benchmark run. Gated by --judge CLI flag.
 */

import type { BenchmarkReport, BenchmarkEntry } from './benchmark-runner';

export async function judgeEntries(report: BenchmarkReport): Promise<void> {
  if (!process.env.ANTHROPIC_API_KEY) {
    throw new Error('ANTHROPIC_API_KEY not set — judge requires Anthropic access.');
  }
  const { default: Anthropic } = await import('@anthropic-ai/sdk').catch(() => {
    throw new Error('@anthropic-ai/sdk not installed — run `bun add @anthropic-ai/sdk` if you want the judge.');
  });
  const client = new (Anthropic as unknown as new (opts: { apiKey: string }) => {
    messages: { create: (params: Record<string, unknown>) => Promise<{ content: Array<{ type: string; text: string }> }> };
  })({ apiKey: process.env.ANTHROPIC_API_KEY! });

  const successful = report.entries.filter(e => e.available && e.result && !e.result.error);
  if (successful.length === 0) return;

  const judgePrompt = buildJudgePrompt(report.prompt, successful);
  const msg = await client.messages.create({
    model: 'claude-sonnet-4-6',
    max_tokens: 2048,
    messages: [{ role: 'user', content: judgePrompt }],
  });
  const textBlock = msg.content.find(c => c.type === 'text');
  if (!textBlock) return;

  const scores = parseScores(textBlock.text, successful.length);
  for (let i = 0; i < successful.length; i++) {
    const s = scores[i];
    if (!s) continue;
    successful[i].qualityScore = s.overall;
    successful[i].qualityDetails = s.dimensions;
  }
}

function buildJudgePrompt(prompt: string, entries: BenchmarkEntry[]): string {
  const lines: string[] = [
    'You are a strict, fair technical reviewer scoring N model outputs against the same prompt.',
    '',
    '--- PROMPT ---',
    prompt.length > 4000 ? prompt.slice(0, 4000) + '\n[...truncated for judge budget...]' : prompt,
    '',
    '--- OUTPUTS ---',
  ];
  entries.forEach((e, i) => {
    const r = e.result!;
    const out = r.output.length > 3000 ? r.output.slice(0, 3000) + '\n[...truncated...]' : r.output;
    lines.push(`=== Output ${i + 1}: ${r.modelUsed} ===`);
    lines.push(out);
    lines.push('');
  });
  lines.push('');
  lines.push('Score each output on these dimensions (0-10 per dimension):');
  lines.push('  - correctness: does it solve what the prompt asked?');
  lines.push('  - completeness: are edge cases and error paths addressed?');
  lines.push('  - code_quality: naming, structure, explicitness');
  lines.push('  - edge_cases: handling of nil/empty/invalid input');
  lines.push('');
  lines.push('Return JSON only, in this exact shape:');
  lines.push('{"scores":[');
  lines.push('  {"output":1,"correctness":N,"completeness":N,"code_quality":N,"edge_cases":N,"overall":N,"notes":"..."},');
  lines.push('  ...');
  lines.push(']}');
  lines.push('');
  lines.push('overall = rounded average of the 4 dimensions. No other commentary.');
  return lines.join('\n');
}

interface ParsedScore {
  overall: number;
  dimensions: Record<string, number>;
}

function parseScores(raw: string, expectedCount: number): ParsedScore[] {
  const match = raw.match(/\{[\s\S]*\}/);
  if (!match) return [];
  try {
    const obj = JSON.parse(match[0]);
    if (!Array.isArray(obj.scores)) return [];
    return obj.scores.slice(0, expectedCount).map((s: Record<string, number>) => ({
      overall: Number(s.overall ?? 0),
      dimensions: {
        correctness: Number(s.correctness ?? 0),
        completeness: Number(s.completeness ?? 0),
        code_quality: Number(s.code_quality ?? 0),
        edge_cases: Number(s.edge_cases ?? 0),
      },
    }));
  } catch {
    return [];
  }
}
@@ -0,0 +1,165 @@
/**
 * Multi-provider benchmark runner.
 *
 * Orchestrates running the same prompt across multiple provider adapters and
 * aggregates RunResult outputs + judge scores into a single report. Adapters
 * run in parallel (Promise.allSettled) so a slow provider doesn't block a fast
 * one. Per-provider auth/timeout/rate-limit errors don't abort the batch.
 */

import type { ProviderAdapter, RunOpts, RunResult } from './providers/types';
import { ClaudeAdapter } from './providers/claude';
import { GptAdapter } from './providers/gpt';
import { GeminiAdapter } from './providers/gemini';

export interface BenchmarkInput {
  prompt: string;
  workdir: string;
  timeoutMs?: number;
  /** Adapter names to run (e.g., ['claude', 'gpt', 'gemini']). */
  providers: Array<'claude' | 'gpt' | 'gemini'>;
  /** Optional per-provider model overrides. */
  models?: Partial<Record<'claude' | 'gpt' | 'gemini', string>>;
  /** If true, skip providers whose available() returns !ok. If false, include them with error. */
  skipUnavailable?: boolean;
}

export interface BenchmarkEntry {
  provider: string;
  family: 'claude' | 'gpt' | 'gemini';
  available: boolean;
  unavailable_reason?: string;
  result?: RunResult;
  costUsd?: number;
  /** Judge score 0-10 across dimensions. Populated separately by the judge step. */
  qualityScore?: number;
  qualityDetails?: Record<string, number>;
}

export interface BenchmarkReport {
  prompt: string;
  workdir: string;
  startedAt: string;
  durationMs: number;
  entries: BenchmarkEntry[];
}

const ADAPTERS: Record<'claude' | 'gpt' | 'gemini', () => ProviderAdapter> = {
  claude: () => new ClaudeAdapter(),
  gpt: () => new GptAdapter(),
  gemini: () => new GeminiAdapter(),
};

export async function runBenchmark(input: BenchmarkInput): Promise<BenchmarkReport> {
  const startedAtMs = Date.now();
  const startedAt = new Date(startedAtMs).toISOString();
  const timeoutMs = input.timeoutMs ?? 300_000;

  const entries: BenchmarkEntry[] = [];
  const runPromises: Array<Promise<void>> = [];

  for (const name of input.providers) {
    const factory = ADAPTERS[name];
    if (!factory) {
      entries.push({ provider: name, family: 'claude', available: false, unavailable_reason: `unknown provider: ${name}` });
      continue;
    }
    const adapter = factory();
    const entry: BenchmarkEntry = { provider: adapter.name, family: adapter.family, available: true };
    entries.push(entry);

    runPromises.push((async () => {
      const check = await adapter.available();
      entry.available = check.ok;
      if (!check.ok) {
        entry.unavailable_reason = check.reason;
        if (input.skipUnavailable) return;
      }
      const opts: RunOpts = {
        prompt: input.prompt,
        workdir: input.workdir,
        timeoutMs,
        model: input.models?.[name],
      };
      const res = await adapter.run(opts);
      entry.result = res;
      entry.costUsd = adapter.estimateCost(res.tokens, res.modelUsed);
    })());
  }

  await Promise.allSettled(runPromises);

  return {
    prompt: input.prompt,
    workdir: input.workdir,
    startedAt,
    durationMs: Date.now() - startedAtMs,
    entries,
  };
}

export function formatTable(report: BenchmarkReport): string {
  const header = `Model                Latency   In→Out Tokens        Cost       Quality   Tool Calls   Notes`;
  const sep = '-'.repeat(header.length);
  const rows: string[] = [header, sep];
  for (const e of report.entries) {
    if (!e.available) {
      rows.push(`${pad(e.provider, 20)} ${pad('-', 9)} ${pad('-', 20)} ${pad('-', 10)} ${pad('-', 9)} ${pad('-', 12)} unavailable: ${e.unavailable_reason ?? 'unknown'}`);
      continue;
    }
    const r = e.result!;
    if (r.error) {
      rows.push(`${pad(r.modelUsed, 20)} ${pad(msToStr(r.durationMs), 9)} ${pad(`${r.tokens.input}→${r.tokens.output}`, 20)} ${pad(fmtCost(e.costUsd), 10)} ${pad('-', 9)} ${pad(String(r.toolCalls), 12)} ERROR ${r.error.code}: ${r.error.reason.slice(0, 40)}`);
      continue;
    }
    const quality = e.qualityScore !== undefined ? `${e.qualityScore.toFixed(1)}/10` : '-';
    rows.push(`${pad(r.modelUsed, 20)} ${pad(msToStr(r.durationMs), 9)} ${pad(`${r.tokens.input}→${r.tokens.output}`, 20)} ${pad(fmtCost(e.costUsd), 10)} ${pad(quality, 9)} ${pad(String(r.toolCalls), 12)}`);
  }
  return rows.join('\n');
}

export function formatJson(report: BenchmarkReport): string {
  return JSON.stringify(report, null, 2);
}

export function formatMarkdown(report: BenchmarkReport): string {
  const lines: string[] = [
    `# Benchmark report — ${report.startedAt}`,
    '',
    `**Prompt:** ${report.prompt.length > 200 ? report.prompt.slice(0, 200) + '…' : report.prompt}`,
    `**Workdir:** \`${report.workdir}\``,
    `**Total duration:** ${msToStr(report.durationMs)}`,
    '',
    '| Model | Latency | Tokens (in→out) | Cost | Quality | Tools | Notes |',
    '|-------|---------|-----------------|------|---------|-------|-------|',
  ];
  for (const e of report.entries) {
    if (!e.available) {
      lines.push(`| ${e.provider} | - | - | - | - | - | unavailable: ${e.unavailable_reason ?? 'unknown'} |`);
      continue;
    }
    const r = e.result!;
    if (r.error) {
      lines.push(`| ${r.modelUsed} | ${msToStr(r.durationMs)} | ${r.tokens.input}→${r.tokens.output} | ${fmtCost(e.costUsd)} | - | ${r.toolCalls} | ERROR ${r.error.code}: ${r.error.reason.slice(0, 80)} |`);
      continue;
    }
    const quality = e.qualityScore !== undefined ? `${e.qualityScore.toFixed(1)}/10` : '-';
    lines.push(`| ${r.modelUsed} | ${msToStr(r.durationMs)} | ${r.tokens.input}→${r.tokens.output} | ${fmtCost(e.costUsd)} | ${quality} | ${r.toolCalls} | |`);
  }
  return lines.join('\n');
}

function pad(s: string, n: number): string {
  return s.length >= n ? s.slice(0, n) : s + ' '.repeat(n - s.length);
}

function msToStr(ms: number): string {
  if (ms < 1000) return `${ms}ms`;
  return `${(ms / 1000).toFixed(1)}s`;
}

function fmtCost(usd?: number): string {
  if (usd === undefined) return '-';
  if (usd < 0.01) return `$${usd.toFixed(4)}`;
  return `$${usd.toFixed(2)}`;
}
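A usage sketch tying runner and judge together; the prompt and CLI wiring are hypothetical, and judging stays off unless `--judge` is passed, matching the gate described in the judge header:

```typescript
import { runBenchmark, formatTable } from './benchmark-runner';
import { judgeEntries } from './benchmark-judge';

async function main() {
  const report = await runBenchmark({
    prompt: 'Write a function that deduplicates an array of strings.', // hypothetical
    workdir: process.cwd(),
    providers: ['claude', 'gpt', 'gemini'],
    skipUnavailable: true,
  });
  // Judge is opt-in: it costs ~$0.05/run and needs ANTHROPIC_API_KEY.
  if (process.argv.includes('--judge')) {
    await judgeEntries(report); // fills qualityScore/qualityDetails in place
  }
  console.log(formatTable(report));
}

main().catch(console.error);
```
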
@@ -0,0 +1,61 @@
/**
 * Per-model pricing tables.
 *
 * Prices are USD per million tokens as of `as_of`. Update quarterly.
 * Link to provider pricing pages:
 * - Anthropic: https://www.anthropic.com/pricing#api
 * - OpenAI: https://openai.com/api/pricing/
 * - Google AI: https://ai.google.dev/pricing
 *
 * When a model isn't in the table, estimateCost returns 0 with a console warning.
 * Prefer adding a new row to the table over guessing.
 */

export interface ModelPricing {
  input_per_mtok: number;
  output_per_mtok: number;
  as_of: string; // YYYY-MM
}

export const PRICING: Record<string, ModelPricing> = {
  // Claude (Anthropic)
  'claude-opus-4-7':   { input_per_mtok: 15.00, output_per_mtok: 75.00, as_of: '2026-04' },
  'claude-sonnet-4-6': { input_per_mtok: 3.00,  output_per_mtok: 15.00, as_of: '2026-04' },
  'claude-haiku-4-5':  { input_per_mtok: 1.00,  output_per_mtok: 5.00,  as_of: '2026-04' },

  // OpenAI (GPT + o-series)
  'gpt-5.4':      { input_per_mtok: 2.50,  output_per_mtok: 10.00, as_of: '2026-04' },
  'gpt-5.4-mini': { input_per_mtok: 0.60,  output_per_mtok: 2.40,  as_of: '2026-04' },
  'o3':           { input_per_mtok: 15.00, output_per_mtok: 60.00, as_of: '2026-04' },
  'o4-mini':      { input_per_mtok: 1.10,  output_per_mtok: 4.40,  as_of: '2026-04' },

  // Google
  'gemini-2.5-pro':   { input_per_mtok: 1.25, output_per_mtok: 5.00, as_of: '2026-04' },
  'gemini-2.5-flash': { input_per_mtok: 0.30, output_per_mtok: 1.20, as_of: '2026-04' },
};

const WARNED = new Set<string>();

export function estimateCostUsd(
  tokens: { input: number; output: number; cached?: number },
  model: string | undefined
): number {
  if (!model) return 0;
  const row = PRICING[model];
  if (!row) {
    if (!WARNED.has(model)) {
      WARNED.add(model);
      console.error(`WARN: no pricing for model ${model}; returning 0. Add it to test/helpers/pricing.ts.`);
    }
    return 0;
  }
  // Anthropic and OpenAI report cached tokens as a separate (disjoint) field from
  // uncached input tokens. tokens.input is already the uncached portion; tokens.cached
  // is the cache-read count billed at 10% of the regular input rate. Do NOT subtract
  // cached from input — they don't overlap.
  const cachedDiscount = 0.1;
  const inputCost = tokens.input * row.input_per_mtok / 1_000_000;
  const cachedCost = (tokens.cached ?? 0) * row.input_per_mtok * cachedDiscount / 1_000_000;
  const outputCost = tokens.output * row.output_per_mtok / 1_000_000;
  return +(inputCost + cachedCost + outputCost).toFixed(6);
}
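To make the arithmetic concrete, a worked example on claude-sonnet-4-6 ($3/M input, $15/M output per the table above) with illustrative token counts:

```typescript
import { estimateCostUsd } from './pricing';

// 12,000 × $3/M            = $0.036 input
// 50,000 × $3/M × 0.1      = $0.015 cache reads (10% of the input rate)
//  2,000 × $15/M           = $0.030 output
//                     total = $0.081
const cost = estimateCostUsd(
  { input: 12_000, output: 2_000, cached: 50_000 },
  'claude-sonnet-4-6',
);
console.log(cost); // 0.081
```
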
@@ -0,0 +1,116 @@
import type { ProviderAdapter, RunOpts, RunResult, AvailabilityCheck } from './types';
import { estimateCostUsd } from '../pricing';
import { execFileSync, spawnSync } from 'child_process';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';

/**
 * Claude adapter — wraps the `claude` CLI via claude -p.
 *
 * For brevity and to avoid duplicating the full stream-json parser, this adapter
 * uses claude CLI in non-interactive mode (--print) with the simpler JSON output
 * format. If richer event-level metrics are needed (per-tool timing etc.),
 * swap to session-runner's full stream-json parser.
 */
export class ClaudeAdapter implements ProviderAdapter {
  readonly name = 'claude';
  readonly family = 'claude' as const;

  async available(): Promise<AvailabilityCheck> {
    // Binary on PATH?
    const res = spawnSync('sh', ['-c', 'command -v claude'], { timeout: 2000 });
    if (res.status !== 0) {
      return { ok: false, reason: 'claude CLI not found on PATH. Install from https://claude.ai/download or npm i -g @anthropic-ai/claude-code' };
    }
    // Auth sniff: ~/.claude/.credentials.json OR ANTHROPIC_API_KEY
    const credsPath = path.join(os.homedir(), '.claude', '.credentials.json');
    const hasCreds = fs.existsSync(credsPath);
    const hasKey = !!process.env.ANTHROPIC_API_KEY;
    if (!hasCreds && !hasKey) {
      return { ok: false, reason: 'No Claude auth found. Log in via `claude` interactive session, or export ANTHROPIC_API_KEY.' };
    }
    return { ok: true };
  }

  async run(opts: RunOpts): Promise<RunResult> {
    const start = Date.now();
    const args = ['-p', '--output-format', 'json'];
    if (opts.model) args.push('--model', opts.model);
    if (opts.extraArgs) args.push(...opts.extraArgs);

    try {
      const out = execFileSync('claude', args, {
        input: opts.prompt,
        cwd: opts.workdir,
        timeout: opts.timeoutMs,
        encoding: 'utf-8',
        maxBuffer: 32 * 1024 * 1024,
      });
      const parsed = this.parseOutput(out);
      return {
        output: parsed.output,
        tokens: parsed.tokens,
        durationMs: Date.now() - start,
        toolCalls: parsed.toolCalls,
        modelUsed: parsed.modelUsed || opts.model || 'claude-opus-4-7',
      };
    } catch (err: unknown) {
      const durationMs = Date.now() - start;
      const e = err as { code?: string; stderr?: Buffer; signal?: string; message?: string };
      const stderr = e.stderr?.toString() ?? '';
      if (e.signal === 'SIGTERM' || e.code === 'ETIMEDOUT') {
        return this.emptyResult(durationMs, { code: 'timeout', reason: `exceeded ${opts.timeoutMs}ms` }, opts.model);
      }
      if (/unauthorized|auth|login/i.test(stderr)) {
        return this.emptyResult(durationMs, { code: 'auth', reason: stderr.slice(0, 400) }, opts.model);
      }
      if (/rate[- ]?limit|429/i.test(stderr)) {
        return this.emptyResult(durationMs, { code: 'rate_limit', reason: stderr.slice(0, 400) }, opts.model);
      }
      return this.emptyResult(durationMs, { code: 'unknown', reason: (e.message ?? stderr ?? 'unknown').slice(0, 400) }, opts.model);
    }
  }

  estimateCost(tokens: { input: number; output: number; cached?: number }, model?: string): number {
    return estimateCostUsd(tokens, model ?? 'claude-opus-4-7');
  }

  /**
   * Parse claude -p --output-format json output. Shape (as of 2026-04):
   *   { type: "result", result: "<assistant text>", usage: { input_tokens, output_tokens, ... },
   *     num_turns, session_id, ... }
   * Older formats may differ — adapter is best-effort.
   */
  private parseOutput(raw: string): { output: string; tokens: { input: number; output: number; cached?: number }; toolCalls: number; modelUsed?: string } {
    try {
      const obj = JSON.parse(raw);
      const result = typeof obj.result === 'string' ? obj.result : String(obj.result ?? '');
      const u = obj.usage ?? {};
      return {
        output: result,
        tokens: {
          input: u.input_tokens ?? 0,
          output: u.output_tokens ?? 0,
          cached: u.cache_read_input_tokens,
        },
        toolCalls: obj.num_turns ?? 0,
        modelUsed: obj.model,
      };
    } catch {
      // Non-JSON output: treat as plain text.
      return { output: raw, tokens: { input: 0, output: 0 }, toolCalls: 0 };
    }
  }

  private emptyResult(durationMs: number, error: RunResult['error'], model?: string): RunResult {
    return {
      output: '',
      tokens: { input: 0, output: 0 },
      durationMs,
      toolCalls: 0,
      modelUsed: model ?? 'claude-opus-4-7',
      error,
    };
  }
}
@@ -0,0 +1,123 @@
import type { ProviderAdapter, RunOpts, RunResult, AvailabilityCheck } from './types';
import { estimateCostUsd } from '../pricing';
import { execFileSync, spawnSync } from 'child_process';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';

/**
 * Gemini adapter — wraps the `gemini` CLI.
 *
 * Gemini CLI auth comes from either ~/.config/gemini/ or GOOGLE_API_KEY. Output
 * format is NDJSON with `message`/`tool_use`/`result` events when `--output-format
 * stream-json` is requested. This adapter uses a single-response form for simplicity
 * in benchmarks; richer streaming lives in gemini-session-runner.ts.
 */
export class GeminiAdapter implements ProviderAdapter {
  readonly name = 'gemini';
  readonly family = 'gemini' as const;

  async available(): Promise<AvailabilityCheck> {
    const res = spawnSync('sh', ['-c', 'command -v gemini'], { timeout: 2000 });
    if (res.status !== 0) {
      return { ok: false, reason: 'gemini CLI not found on PATH. Install per https://github.com/google-gemini/gemini-cli' };
    }
    const cfgDir = path.join(os.homedir(), '.config', 'gemini');
    const hasCfg = fs.existsSync(cfgDir);
    const hasKey = !!process.env.GOOGLE_API_KEY;
    if (!hasCfg && !hasKey) {
      return { ok: false, reason: 'No Gemini auth found. Log in via `gemini login` or export GOOGLE_API_KEY.' };
    }
    return { ok: true };
  }

  async run(opts: RunOpts): Promise<RunResult> {
    const start = Date.now();
    // Default to --yolo (non-interactive) and stream-json output so we can parse
    // tokens + tool calls. Callers can override via extraArgs.
    const args = ['-p', opts.prompt, '--output-format', 'stream-json', '--yolo'];
    if (opts.model) args.push('--model', opts.model);
    if (opts.extraArgs) args.push(...opts.extraArgs);

    try {
      const out = execFileSync('gemini', args, {
        cwd: opts.workdir,
        timeout: opts.timeoutMs,
        encoding: 'utf-8',
        maxBuffer: 32 * 1024 * 1024,
      });
      const parsed = this.parseStreamJson(out);
      return {
        output: parsed.output,
        tokens: parsed.tokens,
        durationMs: Date.now() - start,
        toolCalls: parsed.toolCalls,
        modelUsed: parsed.modelUsed || opts.model || 'gemini-2.5-pro',
      };
    } catch (err: unknown) {
      const durationMs = Date.now() - start;
      const e = err as { code?: string; stderr?: Buffer; signal?: string; message?: string };
      const stderr = e.stderr?.toString() ?? '';
      if (e.signal === 'SIGTERM' || e.code === 'ETIMEDOUT') {
        return this.emptyResult(durationMs, { code: 'timeout', reason: `exceeded ${opts.timeoutMs}ms` }, opts.model);
      }
      if (/unauthorized|auth|login|api key/i.test(stderr)) {
        return this.emptyResult(durationMs, { code: 'auth', reason: stderr.slice(0, 400) }, opts.model);
      }
      if (/rate[- ]?limit|429|quota/i.test(stderr)) {
        return this.emptyResult(durationMs, { code: 'rate_limit', reason: stderr.slice(0, 400) }, opts.model);
      }
      return this.emptyResult(durationMs, { code: 'unknown', reason: (e.message ?? stderr ?? 'unknown').slice(0, 400) }, opts.model);
    }
  }

  estimateCost(tokens: { input: number; output: number; cached?: number }, model?: string): number {
    return estimateCostUsd(tokens, model ?? 'gemini-2.5-pro');
  }

  /**
   * Parse gemini NDJSON stream events:
   *   init → session id (discarded here)
   *   message { delta: true, text } → concat to output
   *   tool_use { name } → increment toolCalls
   *   result { usage: { input_token_count, output_token_count } } → tokens
   */
  private parseStreamJson(raw: string): { output: string; tokens: { input: number; output: number }; toolCalls: number; modelUsed?: string } {
    let output = '';
    let input = 0;
    let out = 0;
    let toolCalls = 0;
    let modelUsed: string | undefined;
    for (const line of raw.split('\n')) {
      const s = line.trim();
      if (!s) continue;
      try {
        const obj = JSON.parse(s);
        if (obj.type === 'message' && typeof obj.text === 'string') {
          output += obj.text;
        } else if (obj.type === 'tool_use') {
          toolCalls += 1;
        } else if (obj.type === 'result') {
          const u = obj.usage ?? {};
          input += u.input_token_count ?? u.prompt_tokens ?? 0;
          out += u.output_token_count ?? u.completion_tokens ?? 0;
          if (obj.model) modelUsed = obj.model;
        }
      } catch {
        // skip malformed lines
      }
    }
    return { output, tokens: { input, output: out }, toolCalls, modelUsed };
  }

  private emptyResult(durationMs: number, error: RunResult['error'], model?: string): RunResult {
    return {
      output: '',
      tokens: { input: 0, output: 0 },
      durationMs,
      toolCalls: 0,
      modelUsed: model ?? 'gemini-2.5-pro',
      error,
    };
  }
}
@@ -0,0 +1,127 @@
import type { ProviderAdapter, RunOpts, RunResult, AvailabilityCheck } from './types';
import { estimateCostUsd } from '../pricing';
import { execFileSync, spawnSync } from 'child_process';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';

/**
 * GPT adapter — wraps the OpenAI `codex` CLI (codex exec with --json output).
 *
 * Codex uses ~/.codex/ for auth (not OPENAI_API_KEY). The --json flag emits
 * JSONL events; we parse `turn.completed` for usage and `agent_message` / etc.
 * for output aggregation.
 */
export class GptAdapter implements ProviderAdapter {
  readonly name = 'gpt';
  readonly family = 'gpt' as const;

  async available(): Promise<AvailabilityCheck> {
    const res = spawnSync('sh', ['-c', 'command -v codex'], { timeout: 2000 });
    if (res.status !== 0) {
      return { ok: false, reason: 'codex CLI not found on PATH. Install: npm i -g @openai/codex' };
    }
    // Auth sniff: ~/.codex/ should contain auth state after `codex login`
    const codexDir = path.join(os.homedir(), '.codex');
    if (!fs.existsSync(codexDir)) {
      return { ok: false, reason: 'No ~/.codex/ found. Run `codex login` to authenticate via ChatGPT.' };
    }
    return { ok: true };
  }

  async run(opts: RunOpts): Promise<RunResult> {
    const start = Date.now();
    // `-s read-only` is load-bearing safety. With `--skip-git-repo-check` we
    // bypass codex's interactive trust prompt for unknown directories (benchmarks
    // often run in temp dirs / non-git paths), so the read-only sandbox is now
    // the only boundary preventing codex from mutating the workdir. If you ever
    // remove `-s read-only`, drop `--skip-git-repo-check` too.
    const args = ['exec', opts.prompt, '-C', opts.workdir, '-s', 'read-only', '--skip-git-repo-check', '--json'];
    if (opts.model) args.push('-m', opts.model);
    if (opts.extraArgs) args.push(...opts.extraArgs);

    try {
      const out = execFileSync('codex', args, {
        cwd: opts.workdir,
        timeout: opts.timeoutMs,
        encoding: 'utf-8',
        maxBuffer: 32 * 1024 * 1024,
      });
      const parsed = this.parseJsonl(out);
      return {
        output: parsed.output,
        tokens: parsed.tokens,
        durationMs: Date.now() - start,
        toolCalls: parsed.toolCalls,
        modelUsed: parsed.modelUsed || opts.model || 'gpt-5.4',
      };
    } catch (err: unknown) {
      const durationMs = Date.now() - start;
      const e = err as { code?: string; stderr?: Buffer; signal?: string; message?: string };
      const stderr = e.stderr?.toString() ?? '';
      if (e.signal === 'SIGTERM' || e.code === 'ETIMEDOUT') {
        return this.emptyResult(durationMs, { code: 'timeout', reason: `exceeded ${opts.timeoutMs}ms` }, opts.model);
      }
      if (/unauthorized|auth|login/i.test(stderr)) {
        return this.emptyResult(durationMs, { code: 'auth', reason: stderr.slice(0, 400) }, opts.model);
      }
      if (/rate[- ]?limit|429/i.test(stderr)) {
        return this.emptyResult(durationMs, { code: 'rate_limit', reason: stderr.slice(0, 400) }, opts.model);
      }
      return this.emptyResult(durationMs, { code: 'unknown', reason: (e.message ?? stderr ?? 'unknown').slice(0, 400) }, opts.model);
    }
  }

  estimateCost(tokens: { input: number; output: number; cached?: number }, model?: string): number {
    return estimateCostUsd(tokens, model ?? 'gpt-5.4');
  }

  /**
   * Parse codex exec --json JSONL stream.
   * Key events:
   *   - item.completed with item.type === 'agent_message' → text output
   *   - item.completed with item.type === 'command_execution' → tool call
   *   - turn.completed → usage.input_tokens, usage.output_tokens
   *   - thread.started → session id (not used here)
   */
  private parseJsonl(raw: string): { output: string; tokens: { input: number; output: number }; toolCalls: number; modelUsed?: string } {
    let output = '';
    let input = 0;
    let out = 0;
    let toolCalls = 0;
    let modelUsed: string | undefined;
    for (const line of raw.split('\n')) {
      const s = line.trim();
      if (!s) continue;
      try {
        const obj = JSON.parse(s);
        if (obj.type === 'item.completed' && obj.item) {
          if (obj.item.type === 'agent_message' && typeof obj.item.text === 'string') {
            output += (output ? '\n' : '') + obj.item.text;
          } else if (obj.item.type === 'command_execution') {
            toolCalls += 1;
          }
        } else if (obj.type === 'turn.completed') {
          const u = obj.usage ?? {};
          input += u.input_tokens ?? 0;
          out += u.output_tokens ?? 0;
          if (obj.model) modelUsed = obj.model;
        }
      } catch {
        // skip malformed lines — codex stderr can leak in
      }
    }
    return { output, tokens: { input, output: out }, toolCalls, modelUsed };
  }

  private emptyResult(durationMs: number, error: RunResult['error'], model?: string): RunResult {
    return {
      output: '',
      tokens: { input: 0, output: 0 },
      durationMs,
      toolCalls: 0,
      modelUsed: model ?? 'gpt-5.4',
      error,
    };
  }
}
@@ -0,0 +1,74 @@
/**
 * Provider adapter interface — uniform contract for Claude, GPT, Gemini.
 *
 * Each adapter wraps an existing runner (session-runner.ts, codex-session-runner.ts,
 * gemini-session-runner.ts) and normalizes its per-provider result shape into the
 * RunResult below. The benchmark harness only talks to adapters through this
 * interface, never to the underlying runners directly.
 */

export interface RunOpts {
  /** The prompt to send to the model. */
  prompt: string;
  /** Working directory passed to the underlying CLI. */
  workdir: string;
  /** Hard wall-clock timeout in ms. Default: 300000 (5 min). */
  timeoutMs: number;
  /** Specific model within the family, optional. Adapters pass through to provider. */
  model?: string;
  /** Extra flags per-provider (escape hatch for rare cases). Prefer staying generic. */
  extraArgs?: string[];
}

export interface TokenUsage {
  input: number;
  output: number;
  /** Cached input tokens (Anthropic/OpenAI support). Undefined if provider doesn't report. */
  cached?: number;
}

export type RunError =
  | 'auth'           // Credentials missing or invalid.
  | 'timeout'        // Exceeded timeoutMs.
  | 'rate_limit'     // Provider rate-limited us; backoff exceeded.
  | 'binary_missing' // CLI not found on PATH.
  | 'unknown';       // Catch-all with reason populated.

export interface RunResult {
  /** Provider's textual output for the prompt. */
  output: string;
  /** Normalized token usage. 0s if unreported. */
  tokens: TokenUsage;
  /** Wall-clock duration. */
  durationMs: number;
  /** Count of tool/function calls made during the run (0 if unsupported). */
  toolCalls: number;
  /** Actual model ID the provider reports using (may be a variant of the family). */
  modelUsed: string;
  /** If the run failed, error code + human reason. output/tokens may be partial. */
  error?: { code: RunError; reason: string };
}

export interface AvailabilityCheck {
  ok: boolean;
  /** When !ok: short reason shown to user. Includes install / login / env var hint. */
  reason?: string;
}

export type Family = 'claude' | 'gpt' | 'gemini';

export interface ProviderAdapter {
  /** Stable name used in output tables and config (e.g., 'claude', 'gpt', 'gemini'). */
  readonly name: string;
  /** Model family this adapter targets. */
  readonly family: Family;
  /**
   * Check whether the provider's CLI binary is present and authenticated.
   * Should never block >2s. Non-throwing: returns { ok: false, reason } on failure.
   */
  available(): Promise<AvailabilityCheck>;
  /** Run a prompt and return normalized RunResult. Non-throwing. Errors go in result.error. */
  run(opts: RunOpts): Promise<RunResult>;
  /** Estimate USD cost for the reported token usage and model. */
  estimateCost(tokens: TokenUsage, model?: string): number;
}
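To show the contract end to end, a minimal in-process adapter that fakes a provider, which can be handy for harness tests that should not touch any CLI. This stub is entirely hypothetical and not one of the shipped adapters:

```typescript
import type { ProviderAdapter, RunOpts, RunResult, AvailabilityCheck, TokenUsage } from './types';

// Hypothetical echo adapter: always available, echoes the prompt back.
class EchoAdapter implements ProviderAdapter {
  readonly name = 'echo';
  readonly family = 'claude' as const; // Family has no stub member, so borrow one

  async available(): Promise<AvailabilityCheck> {
    return { ok: true };
  }

  async run(opts: RunOpts): Promise<RunResult> {
    return {
      output: opts.prompt,
      tokens: { input: opts.prompt.length, output: opts.prompt.length },
      durationMs: 0, // instantaneous by construction
      toolCalls: 0,
      modelUsed: 'echo-1',
    };
  }

  estimateCost(_tokens: TokenUsage, _model?: string): number {
    return 0; // free by construction
  }
}
```
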
@@ -0,0 +1,82 @@
/**
 * Tool compatibility map across provider CLIs.
 *
 * Not all provider CLIs expose equivalent tools. A benchmark that uses Edit, Glob,
 * or Grep won't run cleanly on CLIs that don't have those. The map answers:
 * "which tools does each provider's CLI expose by default?"
 *
 * When a benchmark is scoped to a tool a provider lacks, the harness records
 * `unsupported_tool` in the result and continues with the other providers.
 *
 * Source-of-truth references:
 * - Claude Code: https://code.claude.com/docs/en/tools
 * - Codex CLI: `codex exec --help` tool listing
 * - Gemini CLI: `gemini --help` (limited tool surface as of 2026-04)
 */

export type ToolName =
  | 'Read'
  | 'Write'
  | 'Edit'
  | 'Bash'
  | 'Agent'
  | 'Glob'
  | 'Grep'
  | 'AskUserQuestion'
  | 'WebSearch'
  | 'WebFetch';

export const TOOL_COMPATIBILITY: Record<'claude' | 'gpt' | 'gemini', Record<ToolName, boolean>> = {
  claude: {
    Read: true,
    Write: true,
    Edit: true,
    Bash: true,
    Agent: true,
    Glob: true,
    Grep: true,
    AskUserQuestion: true,
    WebSearch: true,
    WebFetch: true,
  },
  gpt: {
    // Codex CLI has a narrower tool surface: it uses shell + apply_patch.
    // Read/Glob/Grep-style operations happen via shell pipelines.
    Read: true,
    Write: false, // apply_patch handles writes; no standalone Write tool
    Edit: false,  // apply_patch handles edits; no standalone Edit tool
    Bash: true,
    Agent: false,
    Glob: false,
    Grep: false,
    AskUserQuestion: false,
    WebSearch: true, // --enable web_search_cached
    WebFetch: false,
  },
  gemini: {
    // Gemini CLI (as of 2026-04) has a limited tool surface in --yolo mode.
    // Shell access depends on flags; most agentic tools are not exposed.
    Read: true,
    Write: false,
    Edit: false,
    Bash: false,
    Agent: false,
    Glob: false,
    Grep: false,
    AskUserQuestion: false,
    WebSearch: true,
    WebFetch: false,
  },
};

/**
 * Determine which tools from a required-set are missing for a given provider.
 * Empty array means full compatibility.
 */
export function missingTools(
  provider: 'claude' | 'gpt' | 'gemini',
  requiredTools: ToolName[]
): ToolName[] {
  const map = TOOL_COMPATIBILITY[provider];
  return requiredTools.filter(t => !map[t]);
}
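A usage sketch: gate a benchmark on provider compatibility before running it, recording `unsupported_tool` as the header describes. The module path and required-tool set are hypothetical:

```typescript
import { missingTools, type ToolName } from './tool-compatibility'; // path hypothetical

const required: ToolName[] = ['Read', 'Edit', 'Grep'];

for (const provider of ['claude', 'gpt', 'gemini'] as const) {
  const missing = missingTools(provider, required);
  if (missing.length > 0) {
    // Record unsupported_tool and continue with the other providers.
    console.log(`${provider}: unsupported_tool (missing ${missing.join(', ')})`);
    continue;
  }
  console.log(`${provider}: compatible`);
}
```
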
@@ -192,6 +192,9 @@ export const E2E_TOUCHFILES: Record<string, string[]> = {
  'autoplan-core': ['autoplan/**', 'plan-ceo-review/**', 'plan-eng-review/**', 'plan-design-review/**'],
  'autoplan-dual-voice': ['autoplan/**', 'codex/**', 'bin/gstack-codex-probe', 'scripts/resolvers/review.ts', 'scripts/resolvers/design.ts'],

  // Multi-provider benchmark adapters — live API smoke against real claude/codex/gemini CLIs
  'benchmark-providers-live': ['bin/gstack-model-benchmark', 'test/helpers/providers/**', 'test/helpers/benchmark-runner.ts', 'test/helpers/pricing.ts'],

  // Skill routing — journey-stage tests (depend on ALL skill descriptions)
  'journey-ideation': ['*/SKILL.md.tmpl', 'SKILL.md.tmpl', 'scripts/gen-skill-docs.ts'],
  'journey-plan-eng': ['*/SKILL.md.tmpl', 'SKILL.md.tmpl', 'scripts/gen-skill-docs.ts'],

@@ -355,6 +358,9 @@ export const E2E_TIERS: Record<string, 'gate' | 'periodic'> = {
  'autoplan-core': 'periodic',
  'autoplan-dual-voice': 'periodic',

  // Multi-provider benchmark — periodic (requires external CLIs + auth, paid)
  'benchmark-providers-live': 'periodic',

  // Skill routing — periodic (LLM routing is non-deterministic)
  'journey-ideation': 'periodic',
  'journey-plan-eng': 'periodic',

@@ -0,0 +1,186 @@
|
||||
/**
|
||||
* Multi-provider benchmark adapter E2E — hit real claude, codex, gemini CLIs.
|
||||
*
|
||||
* Periodic tier: runs under `bun run test:e2e` with EVALS=1. Each provider gated
|
||||
* on its own `available()` check so missing auth skips that provider (doesn't
|
||||
* abort the batch). Uses the simplest possible prompt ("Reply with exactly: ok")
|
||||
* to keep cost near $0.001/provider/run.
|
||||
*
|
||||
* What this catches that unit tests don't:
|
||||
* - CLI output-format drift (the #1 silent breakage path)
|
||||
* - Token parsing from real provider responses
|
||||
* - Auth-failure vs timeout vs rate-limit error code routing
|
||||
* - Cost estimation on real token counts
|
||||
* - Parallel execution via Promise.allSettled — slow provider doesn't block fast
|
||||
*
|
||||
* NOT covered here (would need dedicated test files):
|
||||
* - Quality judge integration (benchmark-judge.ts, adds ~$0.05/run)
|
||||
* - Multi-turn tool-using prompts — our single-turn smoke skips `toolCalls > 0`
|
||||
*/
|
||||
|
||||
import { describe, test, expect, beforeAll, afterAll } from 'bun:test';
|
||||
import { ClaudeAdapter } from './helpers/providers/claude';
|
||||
import { GptAdapter } from './helpers/providers/gpt';
|
||||
import { GeminiAdapter } from './helpers/providers/gemini';
|
||||
import { runBenchmark } from './helpers/benchmark-runner';
|
||||
import * as fs from 'fs';
|
||||
import * as path from 'path';
|
||||
import * as os from 'os';
|
||||
|
||||
// --- Prerequisites / gating ---
|
||||
|
||||
const evalsEnabled = !!process.env.EVALS;
|
||||
const describeIfEvals = evalsEnabled ? describe : describe.skip;
|
||||
|
||||
const PROMPT = 'Reply with exactly this text and nothing else: ok';
|
||||
|
||||
// Per-provider gate — each test checks its own availability and skips cleanly.
|
||||
// We construct adapters outside `test` so Bun's test reporter shows the skip reason.
|
||||
const claude = new ClaudeAdapter();
|
||||
const gpt = new GptAdapter();
|
||||
const gemini = new GeminiAdapter();
|
||||
|
||||
// Use a temp working directory so provider CLIs can't accidentally touch the repo.
|
||||
// Created in beforeAll / cleaned in afterAll so concurrent CI runs don't leak.
|
||||
let workdir: string;
|
||||
|
||||
describeIfEvals('multi-provider benchmark adapters (live)', () => {
|
||||
beforeAll(() => {
|
||||
workdir = fs.mkdtempSync(path.join(os.tmpdir(), 'bench-e2e-'));
|
||||
});
|
||||
|
||||
afterAll(() => {
|
||||
if (workdir && fs.existsSync(workdir)) {
|
||||
fs.rmSync(workdir, { recursive: true, force: true });
|
||||
}
|
||||
});
|
||||
|
||||
test('claude: available() returns structured ok/reason', async () => {
|
||||
const check = await claude.available();
|
||||
expect(check).toHaveProperty('ok');
|
||||
if (!check.ok) {
|
||||
expect(typeof check.reason).toBe('string');
|
||||
expect(check.reason!.length).toBeGreaterThan(0);
|
||||
}
|
||||
});
|
||||
|
||||
test('gpt: available() returns structured ok/reason', async () => {
|
||||
const check = await gpt.available();
|
||||
expect(check).toHaveProperty('ok');
|
||||
if (!check.ok) {
|
||||
expect(typeof check.reason).toBe('string');
|
||||
}
|
||||
});
|
||||
|
||||
test('gemini: available() returns structured ok/reason', async () => {
|
||||
const check = await gemini.available();
|
||||
expect(check).toHaveProperty('ok');
|
||||
if (!check.ok) {
|
||||
expect(typeof check.reason).toBe('string');
|
||||
}
|
||||
});
|
||||
|
||||
test('claude: trivial prompt produces parseable output', async () => {
|
||||
const check = await claude.available();
|
||||
if (!check.ok) {
|
||||
process.stderr.write(`\nclaude live smoke: SKIPPED — ${check.reason}\n`);
|
||||
return;
|
||||
}
|
||||
const result = await claude.run({ prompt: PROMPT, workdir, timeoutMs: 120_000 });
|
||||
if (result.error) {
|
||||
throw new Error(`claude errored: ${result.error.code} — ${result.error.reason}`);
|
||||
}
|
||||
expect(result.output.toLowerCase()).toContain('ok');
|
||||
expect(result.tokens.input).toBeGreaterThan(0);
|
||||
expect(result.tokens.output).toBeGreaterThan(0);
|
||||
expect(result.durationMs).toBeGreaterThan(0);
|
||||
expect(typeof result.modelUsed).toBe('string');
|
||||
expect(result.modelUsed.length).toBeGreaterThan(0);
|
||||
const cost = claude.estimateCost(result.tokens, result.modelUsed);
|
||||
expect(cost).toBeGreaterThan(0);
|
||||
}, 150_000);
|
||||
|
||||
test('gpt: trivial prompt produces parseable output', async () => {
|
||||
const check = await gpt.available();
|
||||
if (!check.ok) {
|
||||
process.stderr.write(`\ngpt live smoke: SKIPPED — ${check.reason}\n`);
|
||||
return;
|
||||
}
|
||||
const result = await gpt.run({ prompt: PROMPT, workdir, timeoutMs: 120_000 });
|
||||
if (result.error) {
|
||||
throw new Error(`gpt errored: ${result.error.code} — ${result.error.reason}`);
|
||||
}
|
||||
expect(result.output.toLowerCase()).toContain('ok');
|
||||
expect(result.tokens.input).toBeGreaterThan(0);
|
||||
expect(result.tokens.output).toBeGreaterThan(0);
|
||||
expect(result.durationMs).toBeGreaterThan(0);
|
||||
expect(typeof result.modelUsed).toBe('string');
|
||||
const cost = gpt.estimateCost(result.tokens, result.modelUsed);
|
||||
expect(cost).toBeGreaterThan(0);
|
||||
}, 150_000);
|
||||
|
||||
  test('gemini: trivial prompt produces parseable output', async () => {
    const check = await gemini.available();
    if (!check.ok) {
      process.stderr.write(`\ngemini live smoke: SKIPPED — ${check.reason}\n`);
      return;
    }
    const result = await gemini.run({ prompt: PROMPT, workdir, timeoutMs: 120_000 });
    if (result.error) {
      throw new Error(`gemini errored: ${result.error.code} — ${result.error.reason}`);
    }
    expect(result.output.toLowerCase()).toContain('ok');
    // Gemini CLI sometimes returns 0 tokens in the result event (older responses);
    // assert non-negative instead of strictly positive.
    expect(result.tokens.input).toBeGreaterThanOrEqual(0);
    expect(result.tokens.output).toBeGreaterThanOrEqual(0);
    expect(result.durationMs).toBeGreaterThan(0);
    expect(typeof result.modelUsed).toBe('string');
  }, 150_000);

  test('timeout error surfaces as error.code=timeout (no exception)', async () => {
    // Use whichever adapter is available first — all three share timeout semantics.
    const adapter = (await claude.available()).ok ? claude
      : (await gpt.available()).ok ? gpt
      : (await gemini.available()).ok ? gemini
      : null;
    if (!adapter) {
      process.stderr.write('\ntimeout smoke: SKIPPED — no provider available\n');
      return;
    }
    // 100ms is far too short for any real CLI startup, so the run should time out.
    const result = await adapter.run({ prompt: PROMPT, workdir, timeoutMs: 100 });
    expect(result.error).toBeDefined();
    // Timeout, binary_missing, or unknown (if the CLI dies differently) are all
    // acceptable non-crash outcomes. The point is that the adapter returns a
    // RunResult rather than throwing.
    expect(['timeout', 'unknown', 'binary_missing']).toContain(result.error!.code);
    expect(result.durationMs).toBeGreaterThan(0);
  }, 30_000);

  test('runBenchmark: Promise.allSettled means one unavailable provider does not block others', async () => {
    // Use the full runner with all three providers — whichever are unauthed should
    // return entries with available=false and not crash the batch.
    const report = await runBenchmark({
      prompt: PROMPT,
      workdir,
      providers: ['claude', 'gpt', 'gemini'],
      timeoutMs: 120_000,
      skipUnavailable: false,
    });
    expect(report.entries).toHaveLength(3);
    for (const e of report.entries) {
      expect(['claude', 'gpt', 'gemini']).toContain(e.family);
      if (e.available) {
        expect(e.result).toBeDefined();
      } else {
        expect(typeof e.unavailable_reason).toBe('string');
      }
    }
    // At least one available provider should produce a clean result in a healthy
    // CI env. We don't hard-assert this: if no providers are authed, just log a
    // note instead of failing.
    const hadSuccess = report.entries.some(e => e.available && e.result && !e.result.error);
    if (!hadSuccess) {
      process.stderr.write('\nrunBenchmark live: no provider produced a clean result (no auth?)\n');
    }
  }, 300_000);
});

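// Illustrative sketch of the isolation property the last test exercises, with
// hypothetical names (the real fan-out lives in the benchmark runner, not here):
// Promise.allSettled keeps one rejected provider from sinking the whole batch.
async function sketchFanOut<T>(
  families: string[],
  run: (family: string) => Promise<T>,
): Promise<Array<{ family: string; available: boolean; unavailable_reason?: string }>> {
  const settled = await Promise.allSettled(families.map(run));
  return settled.map((s, i) =>
    s.status === 'fulfilled'
      ? { family: families[i], available: true }
      : { family: families[i], available: false, unavailable_reason: String(s.reason) });
}
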
@@ -0,0 +1,392 @@
/**
 * Taste engine — end-to-end tests for `gstack-taste-update`.
 *
 * Covers the v1 taste profile contract: schema shape, Laplace-smoothed confidence,
 * 5%/week decay, dimension extraction from reason strings, session cap, schema
 * migration, conflict detection (taste drift), malformed-input recovery.
 *
 * All tests use GSTACK_STATE_DIR pointing at a temp dir so no real home dir is
 * touched. Each test isolates its own state directory.
 */

import { describe, test, expect, beforeEach, afterEach } from 'bun:test';
import { spawnSync } from 'child_process';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';

const ROOT = path.resolve(import.meta.dir, '..');
const BIN = path.join(ROOT, 'bin', 'gstack-taste-update');

interface Preference {
  value: string;
  confidence: number;
  approved_count: number;
  rejected_count: number;
  last_seen: string;
}

interface TasteProfile {
  version: number;
  updated_at: string;
  dimensions: Record<'fonts' | 'colors' | 'layouts' | 'aesthetics', { approved: Preference[]; rejected: Preference[] }>;
  sessions: Array<{ ts: string; action: 'approved' | 'rejected'; variant: string; reason?: string }>;
}

let stateDir: string;
let workdir: string;

beforeEach(() => {
  stateDir = fs.mkdtempSync(path.join(os.tmpdir(), 'taste-state-'));
  workdir = fs.mkdtempSync(path.join(os.tmpdir(), 'taste-work-'));
  // Initialize a git repo so gstack-taste-update's getSlug() finds a toplevel.
  spawnSync('git', ['init', '-b', 'main'], { cwd: workdir, stdio: 'pipe' });
});

afterEach(() => {
  fs.rmSync(stateDir, { recursive: true, force: true });
  fs.rmSync(workdir, { recursive: true, force: true });
});

function run(args: string[]): { status: number | null; stdout: string; stderr: string } {
  const result = spawnSync('bun', ['run', BIN, ...args], {
    cwd: workdir,
    env: { ...process.env, GSTACK_STATE_DIR: stateDir, HOME: stateDir },
    encoding: 'utf-8',
    timeout: 10000,
  });
  return {
    status: result.status,
    stdout: result.stdout?.toString() ?? '',
    stderr: result.stderr?.toString() ?? '',
  };
}

function profilePath(): string {
  const slug = path.basename(workdir);
  return path.join(stateDir, 'projects', slug, 'taste-profile.json');
}

function readProfile(): TasteProfile {
  return JSON.parse(fs.readFileSync(profilePath(), 'utf-8'));
}

function writeProfile(p: unknown): void {
  const pp = profilePath();
  fs.mkdirSync(path.dirname(pp), { recursive: true });
  fs.writeFileSync(pp, JSON.stringify(p, null, 2));
}

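// For reference while reading the assertions below, the two formulas the tests
// encode. These restate the test comments; the authoritative implementation is
// bin/gstack-taste-update, which this diff does not include.
const laplaceConfidence = (approved: number, rejected: number): number =>
  approved / (approved + rejected + 1);
const weeklyDecay = (confidence: number, weeks: number): number =>
  confidence * Math.pow(0.95, weeks); // 5% compounding decay per week
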
describe('taste-engine: first-write lifecycle', () => {
  test('approved creates profile with correct v1 schema', () => {
    const r = run(['approved', 'variant-A', '--reason', 'fonts: Geist Sans; colors: emerald']);
    expect(r.status).toBe(0);

    const p = readProfile();
    expect(p.version).toBe(1);
    expect(p.dimensions.fonts.approved).toHaveLength(1);
    expect(p.dimensions.fonts.approved[0].value).toBe('Geist Sans');
    expect(p.dimensions.fonts.approved[0].approved_count).toBe(1);
    expect(p.dimensions.fonts.approved[0].rejected_count).toBe(0);
    // Laplace: 1 / (1 + 0 + 1) = 0.5
    expect(p.dimensions.fonts.approved[0].confidence).toBeCloseTo(0.5, 5);
    expect(p.dimensions.colors.approved[0].value).toBe('emerald');
    expect(p.sessions).toHaveLength(1);
    expect(p.sessions[0].action).toBe('approved');
    expect(p.sessions[0].variant).toBe('variant-A');
  });

  test('rejected bumps rejected_count not approved_count', () => {
    run(['rejected', 'variant-B', '--reason', 'fonts: Comic Sans']);
    const p = readProfile();
    expect(p.dimensions.fonts.rejected).toHaveLength(1);
    expect(p.dimensions.fonts.rejected[0].rejected_count).toBe(1);
    expect(p.dimensions.fonts.rejected[0].approved_count).toBe(0);
    expect(p.dimensions.fonts.approved).toHaveLength(0);
  });

  test('session recorded even when no dimensions extractable from reason', () => {
    const r = run(['approved', 'variant-C']); // no --reason
    expect(r.status).toBe(0);
    const p = readProfile();
    expect(p.sessions).toHaveLength(1);
    for (const dim of ['fonts', 'colors', 'layouts', 'aesthetics'] as const) {
      expect(p.dimensions[dim].approved).toHaveLength(0);
      expect(p.dimensions[dim].rejected).toHaveLength(0);
    }
  });
});

describe('taste-engine: Laplace-smoothed confidence', () => {
  test('repeated approvals raise confidence toward 1', () => {
    for (let i = 0; i < 5; i++) {
      run(['approved', `variant-${i}`, '--reason', 'fonts: Geist Sans']);
    }
    const p = readProfile();
    const pref = p.dimensions.fonts.approved[0];
    expect(pref.approved_count).toBe(5);
    // Laplace: 5 / (5 + 0 + 1) ≈ 0.833
    expect(pref.confidence).toBeCloseTo(5 / 6, 5);
  });

  test('mixed approvals + rejections balance out', () => {
    run(['approved', 'v1', '--reason', 'fonts: Inter']);
    run(['approved', 'v2', '--reason', 'fonts: Inter']);
    run(['rejected', 'v3', '--reason', 'fonts: Inter']);
    const p = readProfile();
    const approved = p.dimensions.fonts.approved[0];
    const rejected = p.dimensions.fonts.rejected[0];
    expect(approved.approved_count).toBe(2);
    expect(approved.rejected_count).toBe(0);
    expect(rejected.rejected_count).toBe(1);
    expect(rejected.approved_count).toBe(0);
  });
});

describe('taste-engine: decay math', () => {
  test('show applies 5%/week decay to stored confidence', () => {
    // Seed a profile where the single approved font was last_seen 4 weeks ago.
    const fourWeeksAgo = new Date(Date.now() - 4 * 7 * 24 * 60 * 60 * 1000).toISOString();
    writeProfile({
      version: 1,
      updated_at: new Date().toISOString(),
      dimensions: {
        fonts: {
          approved: [{ value: 'Aged Font', confidence: 0.8, approved_count: 4, rejected_count: 0, last_seen: fourWeeksAgo }],
          rejected: [],
        },
        colors: { approved: [], rejected: [] },
        layouts: { approved: [], rejected: [] },
        aesthetics: { approved: [], rejected: [] },
      },
      sessions: [],
    });
    const r = run(['show']);
    expect(r.status).toBe(0);
    // After 4 weeks: 0.8 * (0.95)^4 ≈ 0.652
    const expectedConf = 0.8 * Math.pow(0.95, 4);
    const match = r.stdout.match(/Aged Font — conf (\d+\.\d+)/);
    expect(match).toBeTruthy();
    const displayedConf = parseFloat(match![1]);
    expect(displayedConf).toBeCloseTo(expectedConf, 2);
  });

  test('decay never goes below zero', () => {
    // 3 years ≈ 156 weeks. 0.95^156 ≈ 0.0003, well below 0.01.
    const yearsAgo = new Date(Date.now() - 3 * 365 * 24 * 60 * 60 * 1000).toISOString();
    writeProfile({
      version: 1,
      updated_at: new Date().toISOString(),
      dimensions: {
        fonts: {
          approved: [{ value: 'Ancient', confidence: 1.0, approved_count: 1, rejected_count: 0, last_seen: yearsAgo }],
          rejected: [],
        },
        colors: { approved: [], rejected: [] },
        layouts: { approved: [], rejected: [] },
        aesthetics: { approved: [], rejected: [] },
      },
      sessions: [],
    });
    const r = run(['show']);
    expect(r.status).toBe(0);
    const match = r.stdout.match(/Ancient — conf (\d+\.\d+)/);
    expect(match).toBeTruthy();
    const conf = parseFloat(match![1]);
    expect(conf).toBeGreaterThanOrEqual(0);
    expect(conf).toBeLessThan(0.01);
  });
});

describe('taste-engine: dimension extraction', () => {
  test('parses multiple dimensions from one reason string', () => {
    run(['approved', 'v1', '--reason', 'fonts: Geist, IBM Plex; colors: emerald; layouts: grid-12; aesthetics: brutalist']);
    const p = readProfile();
    expect(p.dimensions.fonts.approved.map(x => x.value).sort()).toEqual(['Geist', 'IBM Plex']);
    expect(p.dimensions.colors.approved[0].value).toBe('emerald');
    expect(p.dimensions.layouts.approved[0].value).toBe('grid-12');
    expect(p.dimensions.aesthetics.approved[0].value).toBe('brutalist');
  });

  test('value matching is case-insensitive (first casing wins)', () => {
    run(['approved', 'v1', '--reason', 'fonts: Geist']);
    run(['approved', 'v2', '--reason', 'fonts: GEIST']);
    const p = readProfile();
    // Should merge into a single entry
    expect(p.dimensions.fonts.approved).toHaveLength(1);
    expect(p.dimensions.fonts.approved[0].approved_count).toBe(2);
    // Canonical value is the first-arrival casing. bumpPref() stores value on
    // insert and never overwrites on subsequent bumps.
    expect(p.dimensions.fonts.approved[0].value).toBe('Geist');
  });

  test('unknown dimension labels are silently ignored', () => {
    run(['approved', 'v1', '--reason', 'weather: sunny; mood: happy']);
    const p = readProfile();
    // Session still recorded
    expect(p.sessions).toHaveLength(1);
    // No dimensions populated
    for (const dim of ['fonts', 'colors', 'layouts', 'aesthetics'] as const) {
      expect(p.dimensions[dim].approved).toHaveLength(0);
    }
  });
});

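// A hypothetical reconstruction of the reason-string grammar these tests pin down
// ("fonts: Geist, IBM Plex; colors: emerald"): semicolons separate dimensions,
// commas separate values. Not the shipped parser, just the behavior the
// assertions above require.
const KNOWN_DIMS = new Set(['fonts', 'colors', 'layouts', 'aesthetics']);
function sketchParseReason(reason: string): Map<string, string[]> {
  const out = new Map<string, string[]>();
  for (const clause of reason.split(';')) {
    const idx = clause.indexOf(':');
    if (idx < 0) continue;
    const label = clause.slice(0, idx).trim();
    const values = clause.slice(idx + 1).split(',').map(v => v.trim()).filter(Boolean);
    if (!KNOWN_DIMS.has(label) || values.length === 0) continue; // unknown labels ignored
    out.set(label, values);
  }
  return out;
}
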
describe('taste-engine: session cap', () => {
  test('sessions truncate to last 50 entries (FIFO)', () => {
    // Seed the profile with 50 existing sessions, then one real CLI call writes
    // the 51st → the oldest must drop. Seeding keeps this to one subprocess spawn
    // instead of 51.
    const seededSessions = Array.from({ length: 50 }, (_, i) => ({
      ts: new Date(Date.now() - (50 - i) * 1000).toISOString(),
      action: 'approved' as const,
      variant: `seed-${i}`,
    }));
    writeProfile({
      version: 1,
      updated_at: new Date().toISOString(),
      dimensions: {
        fonts: { approved: [], rejected: [] },
        colors: { approved: [], rejected: [] },
        layouts: { approved: [], rejected: [] },
        aesthetics: { approved: [], rejected: [] },
      },
      sessions: seededSessions,
    });
    const r = run(['approved', 'new-one', '--reason', 'fonts: Geist']);
    expect(r.status).toBe(0);
    const p = readProfile();
    expect(p.sessions).toHaveLength(50);
    // The oldest seed (seed-0) must have been evicted FIFO; seed-1 is now first;
    // the new entry is last.
    expect(p.sessions[0].variant).toBe('seed-1');
    expect(p.sessions[48].variant).toBe('seed-49');
    expect(p.sessions[49].variant).toBe('new-one');
  });
});

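// The cap the test above locks in, as a one-liner (hypothetical name; the shipped
// code lives in bin/gstack-taste-update): keep only the newest 50 sessions.
const capSessions = <T>(sessions: T[], max = 50): T[] => sessions.slice(-max);
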
describe('taste-engine: taste drift conflict detection', () => {
  test('warns when approved value has strong opposite signal', () => {
    // Seed a strong rejected signal directly (confidence 0.8 from 4 rejections,
    // 0 approvals) so it sits above the 0.6 drift threshold regardless of how
    // the CLI computes rejected-side Laplace confidence.
    writeProfile({
      version: 1,
      updated_at: new Date().toISOString(),
      dimensions: {
        fonts: {
          approved: [],
          rejected: [{ value: 'Comic Sans', confidence: 0.8, approved_count: 0, rejected_count: 4, last_seen: new Date().toISOString() }],
        },
        colors: { approved: [], rejected: [] },
        layouts: { approved: [], rejected: [] },
        aesthetics: { approved: [], rejected: [] },
      },
      sessions: [],
    });
    const r = run(['approved', 'v1', '--reason', 'fonts: Comic Sans']);
    expect(r.status).toBe(0);
    // The "taste drift" note should go to stderr
    expect(r.stderr).toContain('taste drift');
    expect(r.stderr).toContain('Comic Sans');
  });

  test('does NOT warn when signal is weak', () => {
    writeProfile({
      version: 1,
      updated_at: new Date().toISOString(),
      dimensions: {
        fonts: {
          approved: [],
          // Single rejection (< 3) — shouldn't trigger the drift warning
          rejected: [{ value: 'Inter', confidence: 0.5, approved_count: 0, rejected_count: 1, last_seen: new Date().toISOString() }],
        },
        colors: { approved: [], rejected: [] },
        layouts: { approved: [], rejected: [] },
        aesthetics: { approved: [], rejected: [] },
      },
      sessions: [],
    });
    const r = run(['approved', 'v1', '--reason', 'fonts: Inter']);
    expect(r.status).toBe(0);
    expect(r.stderr).not.toContain('taste drift');
  });
});

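// Hypothetical reconstruction of the drift gate the two tests above bracket:
// warn only when the opposite list holds the same value with confidence above
// 0.6 and at least 3 opposite-direction events. Both thresholds come from the
// test comments, not from the shipped source.
function sketchIsDrift(opposite: Preference | undefined): boolean {
  return !!opposite
    && opposite.confidence > 0.6
    && opposite.approved_count + opposite.rejected_count >= 3;
}
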
describe('taste-engine: migration', () => {
  test('legacy profile without version gets migrated to v1', () => {
    // Simulate a legacy approved.json-style structure
    writeProfile({
      // no version field
      dimensions: {
        fonts: {
          approved: [{ value: 'Legacy', confidence: 0.7, approved_count: 3, rejected_count: 1, last_seen: new Date().toISOString() }],
          rejected: [],
        },
      },
      sessions: [
        { ts: new Date().toISOString(), action: 'approved', variant: 'legacy-v1' },
      ],
    });

    const r = run(['migrate']);
    expect(r.status).toBe(0);

    const p = readProfile();
    expect(p.version).toBe(1);
    expect(p.dimensions.fonts.approved[0].value).toBe('Legacy');
    expect(p.dimensions.colors).toBeDefined();
    expect(p.dimensions.layouts).toBeDefined();
    expect(p.dimensions.aesthetics).toBeDefined();
    expect(p.sessions).toHaveLength(1);
    expect(p.sessions[0].variant).toBe('legacy-v1');
  });

  test('migration truncates oversized sessions array to last 50', () => {
    const sessions = Array.from({ length: 100 }, (_, i) => ({
      ts: new Date().toISOString(),
      action: 'approved' as const,
      variant: `legacy-${i}`,
    }));
    writeProfile({ dimensions: {}, sessions });
    const r = run(['migrate']);
    expect(r.status).toBe(0);
    const p = readProfile();
    expect(p.sessions).toHaveLength(50);
    expect(p.sessions[0].variant).toBe('legacy-50');
    expect(p.sessions[49].variant).toBe('legacy-99');
  });
});

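// What the two migration tests jointly require of `migrate`, sketched under the
// assumption that the real command does roughly this (names hypothetical):
function sketchMigrate(raw: any): TasteProfile {
  const dims = ['fonts', 'colors', 'layouts', 'aesthetics'] as const;
  const dimensions = {} as TasteProfile['dimensions'];
  for (const d of dims) {
    dimensions[d] = raw?.dimensions?.[d] ?? { approved: [], rejected: [] };
  }
  return {
    version: 1,
    updated_at: new Date().toISOString(),
    dimensions,
    sessions: (raw?.sessions ?? []).slice(-50), // the cap applies during migration too
  };
}
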
describe('taste-engine: resilience', () => {
  test('malformed JSON profile falls back to empty and does not crash', () => {
    const pp = profilePath();
    fs.mkdirSync(path.dirname(pp), { recursive: true });
    fs.writeFileSync(pp, '{ this is not json');
    const r = run(['approved', 'v1', '--reason', 'fonts: Geist']);
    // Should succeed (graceful fallback)
    expect(r.status).toBe(0);
    // Warning on stderr
    expect(r.stderr).toContain('WARN');
    // File should now be valid JSON
    const p = readProfile();
    expect(p.version).toBe(1);
    expect(p.dimensions.fonts.approved[0].value).toBe('Geist');
  });

  test('show on nonexistent profile prints empty summary without error', () => {
    const r = run(['show']);
    expect(r.status).toBe(0);
    expect(r.stdout).toContain('taste-profile.json');
  });

  test('approved without variant arg exits non-zero with usage hint', () => {
    const r = run(['approved']);
    expect(r.status).not.toBe(0);
    expect(r.stderr).toContain('Usage');
  });

  test('unknown command exits non-zero', () => {
    const r = run(['banana']);
    expect(r.status).not.toBe(0);
    expect(r.stderr).toContain('Usage');
  });
});