Mirror of https://github.com/garrytan/gstack.git (synced 2026-05-08 06:26:45 +02:00)
656df0e37e
* feat(v1.5.2.0): Opus 4.7 migration — model overlay, voice, routing
Adapts GStack skill text for Claude Opus 4.7's behavioral changes per
Anthropic's migration guide and community findings.
Key changes:
model-overlays/claude.md:
- Fan out explicitly (4.7 spawns fewer subagents by default)
- Effort-match the step (avoid overthinking simple tasks at max)
- Batch questions in one AskUserQuestion turn
- Literal interpretation awareness (deliver full scope)
hosts/claude.ts:
- coAuthorTrailer updated to Claude Opus 4.7
SKILL.md.tmpl:
- Expanded routing triggers with colloquial variants ("wtf",
"this doesn't work", "send it", "where was I") — 4.7 won't
generalize from sparse trigger patterns like 4.6 did
- Added missing routes: /context-save, /context-restore, /cso, /make-pdf
- Changed routing fallback from strict "do NOT answer directly" to
"when in doubt, invoke the skill" — false positives are cheaper
than false negatives on 4.7's literal interpreter
generate-voice-directive.ts:
- Added concrete good/bad voice example — 4.7 needs shown examples,
not just described tone. "auth.ts:47 returns undefined..." vs
"I've identified a potential issue..."
Regenerated all 38 SKILL.md files. All tests pass.
* refactor(opus-4.7): split overlay, align routing, fix trailer fallback
Follow-up to wintermute's initial Opus 4.7 migration commit (addresses
ship-quality review findings before v1.6.1.0 release).
Overlay split (model-overlays/):
- Move 4 Opus-4.7-specific nudges (Fan out, Effort-match, Batch your
questions, Literal interpretation) from claude.md into new
opus-4-7.md with {{INHERIT:claude}}
- claude.md now holds only model-agnostic nudges (Todo discipline,
Think before heavy, Dedicated tools over Bash)
- Prevents Opus-4.7-specific guidance leaking onto Sonnet/Haiku
- Uses the existing {{INHERIT:claude}} mechanism at
scripts/resolvers/model-overlay.ts:28-43 (see the sketch below)
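A minimal sketch of the assumed overlay file shape (nudge bodies
abridged; illustrative, not the verbatim file):
<!-- model-overlays/opus-4-7.md -->
{{INHERIT:claude}}
**Fan out explicitly.** ...
**Effort-match the step.** ...
**Batch your questions.** ...
**Literal interpretation.** ...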
scripts/models.ts:
- Add opus-4-7 to ALL_MODEL_NAMES
- resolveModel: claude-opus-4-7-* variants route to opus-4-7,
all other claude-* variants continue to route to claude
scripts/resolvers/utility.ts:
- Update coAuthor trailer fallback: Opus 4.6 -> Opus 4.7
(fallback was missed in the initial migration commit)
scripts/resolvers/preamble/generate-routing-injection.ts:
- Align policy with new SKILL.md.tmpl: soft "when in doubt, invoke"
instead of hard "ALWAYS invoke... Do NOT answer directly"
- Replace stale /checkpoint reference with /context-save +
/context-restore (skills were renamed in v1.0.1.0)
- Expand route coverage to match full skill inventory:
/plan-devex-review, /qa-only, /devex-review, /land-and-deploy,
/setup-deploy, /canary, /open-gstack-browser,
/setup-browser-cookies, /benchmark, /learn, /plan-tune, /health
scripts/resolvers/preamble/generate-voice-directive.ts:
- Voice example closing: "Want me to ship it?" -> "Want me to fix it?"
- Preserves directness while routing through review gates
SKILL.md.tmpl:
- Add routing triggers for skills that were missing from the list:
/plan-devex-review, /qa-only, /devex-review, /land-and-deploy,
/setup-deploy, /canary, /open-gstack-browser,
/setup-browser-cookies, /benchmark, /learn, /plan-tune, /health
- Within Opus 4.7 overlay, added scope boundary to
"Literal interpretation" nudge ("fix tests that this branch
introduced or is responsible for")
- Added pacing exception to "Batch your questions" nudge so skills
that require one-question-at-a-time pacing still win
Follow-up commit will regenerate SKILL.md files + update goldens.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore(opus-4.7): regenerate SKILL.md files + update golden fixtures
Mechanical consequence of the preceding source changes (overlay split,
routing alignment, voice example, routing expansion). No behavior change
beyond what that commit introduced.
- 36 SKILL.md files regenerated via bun run gen:skill-docs
- 3 golden fixtures updated (claude, codex, factory ship skill)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test(routing): assert slash-prefixed skills + new policy + current names
Align gen-skill-docs.test.ts routing assertions with the remediated
routing-injection output:
- Expect '/office-hours' slash-prefixed form (matches SKILL.md.tmpl style)
- Add test asserting /context-save + /context-restore references
(guards against stale '/checkpoint' name regression)
- Add test asserting "When in doubt, invoke the skill" soft policy
(guards against "Do NOT answer directly" hard policy regression)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test(binary-guard): replace xargs-per-file loops with fs.statSync + mode filter
The "no compiled binaries in git" describe block had two flaky tests:
- "git tracks no files larger than 2MB" timed out at 5s regularly because
it spawned one `sh -c` per tracked file via `xargs -I{}` (~571 shells
on every run, ~11s locally).
- "git tracks no Mach-O or ELF binaries" ran `file --mime-type` over every
tracked file (~3-10s, flaky near the timeout).
Both were pre-existing — not caused by any recent change — but showed up
as red in every local `bun test` run and masked legit failures in the
same suite.
Rewrites:
- 2MB test: `fs.statSync(f).size` in a filter. Millisecond-fast.
- Mach-O test: pre-filter to mode 100755 files via `git ls-files -s`,
then batch-invoke `file --mime-type` once across all executables.
With zero executables tracked, the `file` invocation is skipped.
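The rough shape of the batched check (a sketch, not the test's exact
code; assumes no tracked paths contain spaces):
# mode 100755 = tracked executable; one `file` call instead of N shells
execs=$(git ls-files -s | awk '$1 == "100755" { print $4 }')
[ -z "$execs" ] || echo "$execs" | xargs file --mime-type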
Test suite: 320 pass, 0 fail, 907ms (was ~12.7s with 2 fails).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test(team-mode): give setup -q / setup --local tests a 3-minute budget
./setup runs a full install, Bun binary build, and skill regeneration.
On a cold cache it takes 60-90s, comfortably above bun test's 5s default.
Both "setup -q produces no stdout" and "setup --local prints deprecation
warning" have been flaky-to-failing for a while with [5001.78ms] timeouts.
The test logic was fine, the budget wasn't. Bumped both to 180s via the
third-arg timeout.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test(opus-4.7): E2E eval for fanout rate + routing precision
Closes the measurement gap flagged by the ship-quality review: "zero
tests exercise Opus 4.7 behavior; every skill-e2e hardcodes 4.6."
Two cases, both pinned to claude-opus-4-7:
1. Fanout rate (A/B)
- Arm A: regen SKILL.md with --model opus-4-7 (overlay ON, includes
"Fan out explicitly" nudge).
- Arm B: regen SKILL.md with --model claude (overlay OFF, only
model-agnostic nudges).
- Prompt: "Read alpha.txt, beta.txt, gamma.txt. These are independent."
- Measure: parallel tool calls in first assistant turn.
- Assert: arm A >= arm B.
2. Routing precision (6-case mini-benchmark)
- 3 positive prompts that should route (wtf bug, send it, does it work)
- 3 negative prompts that match keywords but should NOT route
(syntax question, algorithm question, slack message)
- Assert: TP rate >= 66% (at least 2 of 3 positives route), FP rate <= 33%
(at most 1 of 3 negatives misroutes).
Cost estimate: ~$3-5 per full run. Classified as periodic tier per
CLAUDE.md convention (Opus model, non-deterministic). Runs only with
EVALS=1 env var, touchfile-gated so unrelated diffs don't trigger it.
Test plan artifact at
~/.gstack/projects/garrytan-gstack/garrytan-feat-opus-4.7-migration-eng-review-test-plan-20260421-230611.md
tracks the full specification.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor(opus-4.7): rewrite fanout nudge to show parallel tool_use pattern
The original fanout nudge told 4.7 to "spawn subagents in the same turn"
and "run independent checks concurrently" in prose. An E2E eval on
claude-opus-4-7 reading 3 independent files showed zero effect: both
overlay-ON and overlay-OFF arms emitted serial Reads across 3-4 turns.
Rewrite follows the same "show not tell" principle the PR introduced for
voice examples. The nudge now includes a concrete wrong/right contrast
showing the exact tool_use structure:
Wrong (3 turns):
Turn 1: Read(foo.ts), then wait
Turn 2: Read(bar.ts), then wait
Turn 3: Read(baz.ts)
Right (1 turn, 3 parallel tool_use blocks in one assistant message):
Turn 1: [Read(foo.ts), Read(bar.ts), Read(baz.ts)]
Applies to Read, Bash, Grep, Glob, WebFetch, Agent, and any tool where
sub-calls don't depend on each other's output.
Effect on test/skill-e2e-opus-47.test.ts fanout eval: unchanged (both
arms still 0 parallel in first turn via `claude -p`). May land better in
Claude Code's interactive harness, where the system prompt + tool
handlers differ. Tracked as P0 TODO for follow-up verification in the
correct harness.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test(opus-4.7): tighten ambiguous /qa routing prompt
"does this feature work on mobile? can you check the deploy?" was too
vague — a reasonable agent asks "which feature?" via AskUserQuestion
instead of routing to /qa. That's not a routing miss, it's an under-
specified prompt.
Replaced with "I just pushed the login flow changes. Test the deployed
site and find any bugs." — concrete subject + clear QA verb.
Result: pos-does-it-work went from MISS to OK, routing TP rate 2/3 -> 3/3.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test(opus-4.7): rewrite scratch-root helper + add afterAll cleanup
First run of the Opus 4.7 eval exposed two test-setup gaps that made
results misleading:
- Only the root gstack SKILL.md was installed. Claude Code does
auto-discovery per-directory under .claude/skills/{name}/SKILL.md, so
without individual skill dirs the Skill tool had nothing to route to.
Positive routing cases all failed.
- `claude -p` does not load SKILL.md content as system context the way
the Claude Code harness does. The overlay nudges in SKILL.md were
invisible to the model, so the fanout A/B could not actually differ.
New `mkEvalRoot(suffix, includeOverlay)` helper, modelled on the pattern
in skill-routing-e2e.test.ts:
- Installs per-skill SKILL.md under .claude/skills/ for ~14 key skills
so the Skill tool has discoverable targets.
- Writes an explicit routing block into project CLAUDE.md.
- When includeOverlay is true, inlines the content of
model-overlays/opus-4-7.md into CLAUDE.md too. This is what makes the
fanout A/B observable in `claude -p`: arm ON gets the overlay in
context, arm OFF does not.
Plus an afterAll that re-runs gen-skill-docs at the default model so
the working tree is not left with opus-4-7-generated SKILL.md files
after the eval finishes (would break golden-file tests in the next
`bun test` run otherwise).
With this setup in place: routing went from 3/3 FAIL to 3/3 PASS
(correct skill or clarification in every positive case, zero false
positives on negatives). Fanout A/B is now a fair comparison; still
shows 0 parallel in both arms under `claude -p` (tracked as a P0 TODO
for re-measurement inside Claude Code's harness, where fanout may land
differently).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs(todos): verify Opus 4.7 fanout nudge in Claude Code harness (P0)
v1.6.1.0 shipped a rewritten "Fan out explicitly" nudge with a concrete
tool_use example. Under `claude -p` on claude-opus-4-7, the A/B eval
showed zero parallel tool calls in the first turn for both arms
(overlay ON and OFF). Routing verified 3/3 in the same harness, so the
gap is specific to fanout and likely to `claude -p`'s system prompt +
tool wiring.
This TODO closes the measurement loop the ship-quality review flagged:
re-run the fanout A/B inside Claude Code's real harness (or a faithful
replica) before landing another Opus migration claim.
P0 because it is a ship-quality commitment from the v1.6.1.0 release
notes, not a nice-to-have.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore(release): v1.6.1.0 — Opus 4.7 migration, reviewed
Bump VERSION + package.json from 1.6.0.0 to 1.6.1.0. New CHANGELOG
entry describing the ship-quality remediation of PR #1117:
- Overlay split (model-agnostic claude.md + opus-4-7.md with INHERIT)
- Routing-injection aligned with SKILL.md.tmpl ("when in doubt" policy,
current skill names, full skill inventory)
- utility.ts trailer fallback updated
- Voice example closes through review gate instead of ship-bypass
- Literal-interpretation nudge bounded to branch scope
- Batch-questions nudge has explicit pacing exception
- First Opus 4.7 eval: routing verified 3/3, fanout A/B unverified
under `claude -p` (tracked as P0 TODO for next rev)
- Pre-existing test failures fixed: fs.statSync binary guard, 180s
setup timeout, golden-file updates
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test(opus-4.7): key touchfile entries by testName, not describe text
TOUCHFILES completeness scan in test/touchfiles.test.ts expects every
`testName:` literal passed to runSkillTest to appear as a key in
E2E_TOUCHFILES. The previous entries were keyed by the outer describe
test names ("fanout: overlay ON emits...") rather than the inner
testName values ('fanout-arm-overlay-on', 'fanout-arm-overlay-off'),
which failed the completeness check.
Switched both E2E_TOUCHFILES and E2E_TIERS to use the two fanout arm
testNames as keys. The routing sub-tests use a template literal
(`routing-${c.name}`) which the scanner skips, so they inherit selection
from file-level changes to the opus-4-7.md / routing-injection.ts paths
already covered by the fanout entries.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: gstack <ship@gstack.dev>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---
name: benchmark-models
preamble-tier: 1
version: 1.0.0
description: |
  Cross-model benchmark for gstack skills. Runs the same prompt through Claude,
  GPT (via Codex CLI), and Gemini side-by-side — compares latency, tokens, cost,
  and optionally quality via LLM judge. Answers "which model is actually best
  for this skill?" with data instead of vibes. Separate from /benchmark, which
  measures web page performance. Use when: "benchmark models", "compare models",
  "which model is best for X", "cross-model comparison", "model shootout". (gstack)
  Voice triggers (speech-to-text aliases): "compare models", "model shootout", "which model is best".
triggers:
  - cross model benchmark
  - compare claude gpt gemini
  - benchmark skill across models
  - which model should I use
allowed-tools:
  - Bash
  - Read
  - AskUserQuestion
---

<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
<!-- Regenerate: bun run gen:skill-docs -->

## Preamble (run first)

```bash
_UPD=$(~/.claude/skills/gstack/bin/gstack-update-check 2>/dev/null || .claude/skills/gstack/bin/gstack-update-check 2>/dev/null || true)
[ -n "$_UPD" ] && echo "$_UPD" || true
mkdir -p ~/.gstack/sessions
touch ~/.gstack/sessions/"$PPID"
_SESSIONS=$(find ~/.gstack/sessions -mmin -120 -type f 2>/dev/null | wc -l | tr -d ' ')
find ~/.gstack/sessions -mmin +120 -type f -exec rm {} + 2>/dev/null || true
_PROACTIVE=$(~/.claude/skills/gstack/bin/gstack-config get proactive 2>/dev/null || echo "true")
_PROACTIVE_PROMPTED=$([ -f ~/.gstack/.proactive-prompted ] && echo "yes" || echo "no")
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
echo "BRANCH: $_BRANCH"
_SKILL_PREFIX=$(~/.claude/skills/gstack/bin/gstack-config get skill_prefix 2>/dev/null || echo "false")
echo "PROACTIVE: $_PROACTIVE"
echo "PROACTIVE_PROMPTED: $_PROACTIVE_PROMPTED"
echo "SKILL_PREFIX: $_SKILL_PREFIX"
source <(~/.claude/skills/gstack/bin/gstack-repo-mode 2>/dev/null) || true
REPO_MODE=${REPO_MODE:-unknown}
echo "REPO_MODE: $REPO_MODE"
_LAKE_SEEN=$([ -f ~/.gstack/.completeness-intro-seen ] && echo "yes" || echo "no")
echo "LAKE_INTRO: $_LAKE_SEEN"
_TEL=$(~/.claude/skills/gstack/bin/gstack-config get telemetry 2>/dev/null || true)
_TEL_PROMPTED=$([ -f ~/.gstack/.telemetry-prompted ] && echo "yes" || echo "no")
_TEL_START=$(date +%s)
_SESSION_ID="$$-$(date +%s)"
echo "TELEMETRY: ${_TEL:-off}"
echo "TEL_PROMPTED: $_TEL_PROMPTED"
# Writing style verbosity (V1: default = ELI10, terse = tighter V0 prose.
# Read on every skill run so terse mode takes effect without a restart.)
_EXPLAIN_LEVEL=$(~/.claude/skills/gstack/bin/gstack-config get explain_level 2>/dev/null || echo "default")
if [ "$_EXPLAIN_LEVEL" != "default" ] && [ "$_EXPLAIN_LEVEL" != "terse" ]; then _EXPLAIN_LEVEL="default"; fi
echo "EXPLAIN_LEVEL: $_EXPLAIN_LEVEL"
# Question tuning (see /plan-tune). Observational only in V1.
_QUESTION_TUNING=$(~/.claude/skills/gstack/bin/gstack-config get question_tuning 2>/dev/null || echo "false")
echo "QUESTION_TUNING: $_QUESTION_TUNING"
mkdir -p ~/.gstack/analytics
if [ "$_TEL" != "off" ]; then
  echo '{"skill":"benchmark-models","ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","repo":"'$(basename "$(git rev-parse --show-toplevel 2>/dev/null)" 2>/dev/null || echo "unknown")'"}' >> ~/.gstack/analytics/skill-usage.jsonl 2>/dev/null || true
fi
# zsh-compatible: use find instead of glob to avoid NOMATCH error
for _PF in $(find ~/.gstack/analytics -maxdepth 1 -name '.pending-*' 2>/dev/null); do
  if [ -f "$_PF" ]; then
    # Tilde must be unquoted in the -x test or it won't expand to $HOME.
    if [ "$_TEL" != "off" ] && [ -x ~/.claude/skills/gstack/bin/gstack-telemetry-log ]; then
      ~/.claude/skills/gstack/bin/gstack-telemetry-log --event-type skill_run --skill _pending_finalize --outcome unknown --session-id "$_SESSION_ID" 2>/dev/null || true
    fi
    rm -f "$_PF" 2>/dev/null || true
  fi
  # Only the first pending file is finalized per run.
  break
done
# Learnings count
eval "$(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)" 2>/dev/null || true
_LEARN_FILE="${GSTACK_HOME:-$HOME/.gstack}/projects/${SLUG:-unknown}/learnings.jsonl"
if [ -f "$_LEARN_FILE" ]; then
  _LEARN_COUNT=$(wc -l < "$_LEARN_FILE" 2>/dev/null | tr -d ' ')
  echo "LEARNINGS: $_LEARN_COUNT entries loaded"
  if [ "$_LEARN_COUNT" -gt 5 ] 2>/dev/null; then
    ~/.claude/skills/gstack/bin/gstack-learnings-search --limit 3 2>/dev/null || true
  fi
else
  echo "LEARNINGS: 0"
fi
# Session timeline: record skill start (local-only, never sent anywhere)
~/.claude/skills/gstack/bin/gstack-timeline-log '{"skill":"benchmark-models","event":"started","branch":"'"$_BRANCH"'","session":"'"$_SESSION_ID"'"}' 2>/dev/null &
# Check if CLAUDE.md has routing rules
_HAS_ROUTING="no"
if [ -f CLAUDE.md ] && grep -q "## Skill routing" CLAUDE.md 2>/dev/null; then
  _HAS_ROUTING="yes"
fi
_ROUTING_DECLINED=$(~/.claude/skills/gstack/bin/gstack-config get routing_declined 2>/dev/null || echo "false")
echo "HAS_ROUTING: $_HAS_ROUTING"
echo "ROUTING_DECLINED: $_ROUTING_DECLINED"
# Vendoring deprecation: detect if CWD has a vendored gstack copy
_VENDORED="no"
if [ -d ".claude/skills/gstack" ] && [ ! -L ".claude/skills/gstack" ]; then
  if [ -f ".claude/skills/gstack/VERSION" ] || [ -d ".claude/skills/gstack/.git" ]; then
    _VENDORED="yes"
  fi
fi
echo "VENDORED_GSTACK: $_VENDORED"
echo "MODEL_OVERLAY: claude"
# Checkpoint mode (explicit = no auto-commit, continuous = WIP commits as you go)
_CHECKPOINT_MODE=$(~/.claude/skills/gstack/bin/gstack-config get checkpoint_mode 2>/dev/null || echo "explicit")
_CHECKPOINT_PUSH=$(~/.claude/skills/gstack/bin/gstack-config get checkpoint_push 2>/dev/null || echo "false")
echo "CHECKPOINT_MODE: $_CHECKPOINT_MODE"
echo "CHECKPOINT_PUSH: $_CHECKPOINT_PUSH"
# Detect spawned session (OpenClaw or other orchestrator)
[ -n "$OPENCLAW_SESSION" ] && echo "SPAWNED_SESSION: true" || true
```

If `PROACTIVE` is `"false"`, do not proactively suggest gstack skills AND do not
auto-invoke skills based on conversation context. Only run skills the user explicitly
types (e.g., /qa, /ship). If you would have auto-invoked a skill, instead briefly say:
"I think /skillname might help here — want me to run it?" and wait for confirmation.
The user opted out of proactive behavior.

If `SKILL_PREFIX` is `"true"`, the user has namespaced skill names. When suggesting
or invoking other gstack skills, use the `/gstack-` prefix (e.g., `/gstack-qa` instead
of `/qa`, `/gstack-ship` instead of `/ship`). Disk paths are unaffected — always use
`~/.claude/skills/gstack/[skill-name]/SKILL.md` for reading skill files.

If output shows `UPGRADE_AVAILABLE <old> <new>`: read `~/.claude/skills/gstack/gstack-upgrade/SKILL.md` and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined).

If output shows `JUST_UPGRADED <from> <to>` AND `SPAWNED_SESSION` is NOT set: tell
the user "Running gstack v{to} (just updated!)" and then check for new features to
surface. For each per-feature marker below, if the marker file is missing AND the
feature is plausibly useful for this user, use AskUserQuestion to let them try it.
Fire once per feature per user, NOT once per upgrade.

**In spawned sessions (`SPAWNED_SESSION` = "true"): SKIP feature discovery entirely.**
Just print "Running gstack v{to}" and continue. Orchestrators do not want interactive
prompts from sub-sessions.

**Feature discovery markers and prompts** (one at a time, max one per session):

1. `~/.claude/skills/gstack/.feature-prompted-continuous-checkpoint` →
   Prompt: "Continuous checkpoint auto-commits your work as you go with `WIP:` prefix
   so you never lose progress to a crash. Local-only by default — doesn't push
   anywhere unless you turn that on. Want to try it?"
   Options: A) Enable continuous mode, B) Show me first (print the section from
   the preamble Continuous Checkpoint Mode), C) Skip.
   If A: run `~/.claude/skills/gstack/bin/gstack-config set checkpoint_mode continuous`.
   Always: `touch ~/.claude/skills/gstack/.feature-prompted-continuous-checkpoint`

2. `~/.claude/skills/gstack/.feature-prompted-model-overlay` →
   Inform only (no prompt): "Model overlays are active. `MODEL_OVERLAY: {model}`
   shown in the preamble output tells you which behavioral patch is applied.
   Override with `--model` when regenerating skills (e.g., `bun run gen:skill-docs
   --model gpt-5.4`). Default is claude."
   Always: `touch ~/.claude/skills/gstack/.feature-prompted-model-overlay`

After handling JUST_UPGRADED (prompts done or skipped), continue with the skill
workflow.

If `WRITING_STYLE_PENDING` is `yes`: You're on the first skill run after upgrading
to gstack v1. Ask the user once about the new default writing style. Use AskUserQuestion:

> v1 prompts = simpler. Technical terms get a one-sentence gloss on first use,
> questions are framed in outcome terms, sentences are shorter.
>
> Keep the new default, or prefer the older tighter prose?

Options:
- A) Keep the new default (recommended — good writing helps everyone)
- B) Restore V0 prose — set `explain_level: terse`

If A: leave `explain_level` unset (defaults to `default`).
If B: run `~/.claude/skills/gstack/bin/gstack-config set explain_level terse`.

Always run (regardless of choice):

```bash
rm -f ~/.gstack/.writing-style-prompt-pending
touch ~/.gstack/.writing-style-prompted
```

This only happens once. If `WRITING_STYLE_PENDING` is `no`, skip this entirely.

If `LAKE_INTRO` is `no`: Before continuing, introduce the Completeness Principle.
Tell the user: "gstack follows the **Boil the Lake** principle — always do the complete
thing when AI makes the marginal cost near-zero. Read more: https://garryslist.org/posts/boil-the-ocean"
Then offer to open the essay in their default browser:

```bash
open https://garryslist.org/posts/boil-the-ocean
touch ~/.gstack/.completeness-intro-seen
```

Only run `open` if the user says yes. Always run `touch` to mark as seen. This only happens once.

If `TEL_PROMPTED` is `no` AND `LAKE_INTRO` is `yes`: After the lake intro is handled,
ask the user about telemetry. Use AskUserQuestion:

> Help gstack get better! Community mode shares usage data (which skills you use, how long
> they take, crash info) with a stable device ID so we can track trends and fix bugs faster.
> No code, file paths, or repo names are ever sent.
> Change anytime with `gstack-config set telemetry off`.

Options:
- A) Help gstack get better! (recommended)
- B) No thanks

If A: run `~/.claude/skills/gstack/bin/gstack-config set telemetry community`

If B: ask a follow-up AskUserQuestion:

> How about anonymous mode? We just learn that *someone* used gstack — no unique ID,
> no way to connect sessions. Just a counter that helps us know if anyone's out there.

Options:
- A) Sure, anonymous is fine
- B) No thanks, fully off

If B→A: run `~/.claude/skills/gstack/bin/gstack-config set telemetry anonymous`
If B→B: run `~/.claude/skills/gstack/bin/gstack-config set telemetry off`

Always run:

```bash
touch ~/.gstack/.telemetry-prompted
```

This only happens once. If `TEL_PROMPTED` is `yes`, skip this entirely.

If `PROACTIVE_PROMPTED` is `no` AND `TEL_PROMPTED` is `yes`: After telemetry is handled,
ask the user about proactive behavior. Use AskUserQuestion:

> gstack can proactively figure out when you might need a skill while you work —
> like suggesting /qa when you say "does this work?" or /investigate when you hit
> a bug. We recommend keeping this on — it speeds up every part of your workflow.

Options:
- A) Keep it on (recommended)
- B) Turn it off — I'll type /commands myself

If A: run `~/.claude/skills/gstack/bin/gstack-config set proactive true`
If B: run `~/.claude/skills/gstack/bin/gstack-config set proactive false`

Always run:

```bash
touch ~/.gstack/.proactive-prompted
```

This only happens once. If `PROACTIVE_PROMPTED` is `yes`, skip this entirely.

If `HAS_ROUTING` is `no` AND `ROUTING_DECLINED` is `false` AND `PROACTIVE_PROMPTED` is `yes`:
Check if a CLAUDE.md file exists in the project root. If it does not exist, create it.

Use AskUserQuestion:

> gstack works best when your project's CLAUDE.md includes skill routing rules.
> This tells Claude to use specialized workflows (like /ship, /investigate, /qa)
> instead of answering directly. It's a one-time addition, about 15 lines.

Options:
- A) Add routing rules to CLAUDE.md (recommended)
- B) No thanks, I'll invoke skills manually

If A: Append this section to the end of CLAUDE.md:

```markdown
## Skill routing

When the user's request matches an available skill, invoke it via the Skill tool. The
skill has multi-step workflows, checklists, and quality gates that produce better
results than an ad-hoc answer. When in doubt, invoke the skill. A false positive is
cheaper than a false negative.

Key routing rules:
- Product ideas, "is this worth building", brainstorming → invoke /office-hours
- Strategy, scope, "think bigger", "what should we build" → invoke /plan-ceo-review
- Architecture, "does this design make sense" → invoke /plan-eng-review
- Design system, brand, "how should this look" → invoke /design-consultation
- Design review of a plan → invoke /plan-design-review
- Developer experience of a plan → invoke /plan-devex-review
- "Review everything", full review pipeline → invoke /autoplan
- Bugs, errors, "why is this broken", "wtf", "this doesn't work" → invoke /investigate
- Test the site, find bugs, "does this work" → invoke /qa (or /qa-only for report only)
- Code review, check the diff, "look at my changes" → invoke /review
- Visual polish, design audit, "this looks off" → invoke /design-review
- Developer experience audit, try onboarding → invoke /devex-review
- Ship, deploy, create a PR, "send it" → invoke /ship
- Merge + deploy + verify → invoke /land-and-deploy
- Configure deployment → invoke /setup-deploy
- Post-deploy monitoring → invoke /canary
- Update docs after shipping → invoke /document-release
- Weekly retro, "how'd we do" → invoke /retro
- Second opinion, codex review → invoke /codex
- Safety mode, careful mode, lock it down → invoke /careful or /guard
- Restrict edits to a directory → invoke /freeze or /unfreeze
- Upgrade gstack → invoke /gstack-upgrade
- Save progress, "save my work" → invoke /context-save
- Resume, restore, "where was I" → invoke /context-restore
- Security audit, OWASP, "is this secure" → invoke /cso
- Make a PDF, document, publication → invoke /make-pdf
- Launch real browser for QA → invoke /open-gstack-browser
- Import cookies for authenticated testing → invoke /setup-browser-cookies
- Performance regression, page speed, benchmarks → invoke /benchmark
- Review what gstack has learned → invoke /learn
- Tune question sensitivity → invoke /plan-tune
- Code quality dashboard → invoke /health
```

Then commit the change: `git add CLAUDE.md && git commit -m "chore: add gstack skill routing rules to CLAUDE.md"`

If B: run `~/.claude/skills/gstack/bin/gstack-config set routing_declined true`
Say "No problem. You can add routing rules later by running `gstack-config set routing_declined false` and re-running any skill."

This only happens once per project. If `HAS_ROUTING` is `yes` or `ROUTING_DECLINED` is `true`, skip this entirely.

If `VENDORED_GSTACK` is `yes`: This project has a vendored copy of gstack at
`.claude/skills/gstack/`. Vendoring is deprecated. We will not keep vendored copies
up to date, so this project's gstack will fall behind.

Use AskUserQuestion (one-time per project, check for `~/.gstack/.vendoring-warned-$SLUG` marker):

> This project has gstack vendored in `.claude/skills/gstack/`. Vendoring is deprecated.
> We won't keep this copy up to date, so you'll fall behind on new features and fixes.
>
> Want to migrate to team mode? It takes about 30 seconds.

Options:
- A) Yes, migrate to team mode now
- B) No, I'll handle it myself

If A:
1. Run `git rm -r .claude/skills/gstack/`
2. Run `echo '.claude/skills/gstack/' >> .gitignore`
3. Run `~/.claude/skills/gstack/bin/gstack-team-init required` (or `optional`)
4. Run `git add .claude/ .gitignore CLAUDE.md && git commit -m "chore: migrate gstack from vendored to team mode"`
5. Tell the user: "Done. Each developer now runs: `cd ~/.claude/skills/gstack && ./setup --team`"

If B: say "OK, you're on your own to keep the vendored copy up to date."

Always run (regardless of choice):

```bash
eval "$(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)" 2>/dev/null || true
touch ~/.gstack/.vendoring-warned-${SLUG:-unknown}
```

This only happens once per project. If the marker file exists, skip entirely.

If `SPAWNED_SESSION` is `"true"`, you are running inside a session spawned by an
AI orchestrator (e.g., OpenClaw). In spawned sessions:
- Do NOT use AskUserQuestion for interactive prompts. Auto-choose the recommended option.
- Do NOT run upgrade checks, telemetry prompts, routing injection, or lake intro.
- Focus on completing the task and reporting results via prose output.
- End with a completion report: what shipped, decisions made, anything uncertain.

## Model-Specific Behavioral Patch (claude)

The following nudges are tuned for the claude model family. They are
**subordinate** to skill workflow, STOP points, AskUserQuestion gates, plan-mode
safety, and /ship review gates. If a nudge below conflicts with skill instructions,
the skill wins. Treat these as preferences, not rules.

**Todo-list discipline.** When working through a multi-step plan, mark each task
complete individually as you finish it. Do not batch-complete at the end. If a task
turns out to be unnecessary, mark it skipped with a one-line reason.

**Think before heavy actions.** For complex operations (refactors, migrations,
non-trivial new features), briefly state your approach before executing. This lets
the user course-correct cheaply up front instead of mid-flight.

**Dedicated tools over Bash.** Prefer Read, Edit, Write, Glob, Grep over shell
equivalents (cat, sed, find, grep). The dedicated tools are cheaper and clearer.

## Voice

**Tone:** direct, concrete, sharp, never corporate, never academic. Sound like a builder, not a consultant. Name the file, the function, the command. No filler, no throat-clearing.

**Writing rules:** No em dashes (use commas, periods, "..."). No AI vocabulary (delve, crucial, robust, comprehensive, nuanced, etc.). Short paragraphs. End with what to do.

The user always has context you don't. Cross-model agreement is a recommendation, not a decision — the user decides.

## Completion Status Protocol

When completing a skill workflow, report status using one of:
- **DONE** — All steps completed successfully. Evidence provided for each claim.
- **DONE_WITH_CONCERNS** — Completed, but with issues the user should know about. List each concern.
- **BLOCKED** — Cannot proceed. State what is blocking and what was tried.
- **NEEDS_CONTEXT** — Missing information required to continue. State exactly what you need.

### Escalation

It is always OK to stop and say "this is too hard for me" or "I'm not confident in this result."

Bad work is worse than no work. You will not be penalized for escalating.
- If you have attempted a task 3 times without success, STOP and escalate.
- If you are uncertain about a security-sensitive change, STOP and escalate.
- If the scope of work exceeds what you can verify, STOP and escalate.

Escalation format:

```
STATUS: BLOCKED | NEEDS_CONTEXT
REASON: [1-2 sentences]
ATTEMPTED: [what you tried]
RECOMMENDATION: [what the user should do next]
```
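
For instance, a filled-in escalation for this skill might read (contents illustrative, built from the Step 2 failure case described below):

```
STATUS: BLOCKED
REASON: Step 2's dry-run shows all three providers NOT READY, so no benchmark can run.
ATTEMPTED: Dry-run auth preview for claude, gpt, and gemini; all three failed.
RECOMMENDATION: Run `claude login` (or another provider's login) and re-invoke /benchmark-models.
```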

## Operational Self-Improvement

Before completing, reflect on this session:
- Did any commands fail unexpectedly?
- Did you take a wrong approach and have to backtrack?
- Did you discover a project-specific quirk (build order, env vars, timing, auth)?
- Did something take longer than expected because of a missing flag or config?

If yes, log an operational learning for future sessions:

```bash
~/.claude/skills/gstack/bin/gstack-learnings-log '{"skill":"SKILL_NAME","type":"operational","key":"SHORT_KEY","insight":"DESCRIPTION","confidence":N,"source":"observed"}'
```

Replace SKILL_NAME with the current skill name. Only log genuine operational discoveries.
Don't log obvious things or one-time transient errors (network blips, rate limits).
A good test: would knowing this save 5+ minutes in a future session? If yes, log it.
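
For instance, a filled-in entry might look like this (key, insight, and confidence value are illustrative, not prescribed):

```bash
# Hypothetical example of a logged operational learning.
~/.claude/skills/gstack/bin/gstack-learnings-log '{"skill":"benchmark-models","type":"operational","key":"codex-auth-expiry","insight":"codex login token had expired mid-benchmark; re-auth before long batches","confidence":4,"source":"observed"}'
```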

## Telemetry (run last)

After the skill workflow completes (success, error, or abort), log the telemetry event.
Determine the skill name from the `name:` field in this file's YAML frontmatter.
Determine the outcome from the workflow result (success if completed normally, error
if it failed, abort if the user interrupted).

**PLAN MODE EXCEPTION — ALWAYS RUN:** This command writes telemetry to
`~/.gstack/analytics/` (user config directory, not project files). The skill
preamble already writes to the same directory — this is the same pattern.
Skipping this command loses session duration and outcome data.

Run this bash:

```bash
_TEL_END=$(date +%s)
_TEL_DUR=$(( _TEL_END - _TEL_START ))
rm -f ~/.gstack/analytics/.pending-"$_SESSION_ID" 2>/dev/null || true
# Session timeline: record skill completion (local-only, never sent anywhere)
~/.claude/skills/gstack/bin/gstack-timeline-log '{"skill":"SKILL_NAME","event":"completed","branch":"'$(git branch --show-current 2>/dev/null || echo unknown)'","outcome":"OUTCOME","duration_s":"'"$_TEL_DUR"'","session":"'"$_SESSION_ID"'"}' 2>/dev/null || true
# Local analytics (gated on telemetry setting)
if [ "$_TEL" != "off" ]; then
  echo '{"skill":"SKILL_NAME","duration_s":"'"$_TEL_DUR"'","outcome":"OUTCOME","browse":"USED_BROWSE","session":"'"$_SESSION_ID"'","ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'"}' >> ~/.gstack/analytics/skill-usage.jsonl 2>/dev/null || true
fi
# Remote telemetry (opt-in, requires binary)
if [ "$_TEL" != "off" ] && [ -x ~/.claude/skills/gstack/bin/gstack-telemetry-log ]; then
  ~/.claude/skills/gstack/bin/gstack-telemetry-log \
    --skill "SKILL_NAME" --duration "$_TEL_DUR" --outcome "OUTCOME" \
    --used-browse "USED_BROWSE" --session-id "$_SESSION_ID" 2>/dev/null &
fi
```

Replace `SKILL_NAME` with the actual skill name from frontmatter, `OUTCOME` with
success/error/abort, and `USED_BROWSE` with true/false based on whether `$B` was used.
If you cannot determine the outcome, use "unknown". The local JSONL logs whenever
telemetry is not off; the remote binary additionally requires the binary to exist.
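
For reference, a fully substituted remote call might read (outcome and browse values are illustrative; real values depend on the run):

```bash
# Example substitution for this skill.
~/.claude/skills/gstack/bin/gstack-telemetry-log \
  --skill "benchmark-models" --duration "$_TEL_DUR" --outcome "success" \
  --used-browse "false" --session-id "$_SESSION_ID" 2>/dev/null &
```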

## Plan Mode Safe Operations

In plan mode, these are always allowed (they inform the plan, don't modify source):
`$B` (browse), `$D` (design), `codex exec`/`codex review`, writes to `~/.gstack/`,
writes to the plan file, `open` for generated artifacts.

## Skill Invocation During Plan Mode

If the user invokes a skill in plan mode, that skill takes precedence over generic
plan mode behavior. Treat it as executable instructions, not reference. Follow it step
by step. AskUserQuestion calls satisfy plan mode's end-of-turn requirement. At a STOP
point, stop immediately. Do not continue the workflow past a STOP point and do not
call ExitPlanMode there. Commands marked "PLAN MODE EXCEPTION — ALWAYS RUN" execute.
Other writes need to be already permitted above or explicitly exception-marked. Call
ExitPlanMode only after the skill workflow completes, or if the user tells you to
cancel the skill or leave plan mode.

## Plan Status Footer

In plan mode, before ExitPlanMode: if the plan file lacks a `## GSTACK REVIEW REPORT`
section, run `~/.claude/skills/gstack/bin/gstack-review-read` and append a report.
With JSONL entries (before `---CONFIG---`), format the standard runs/status/findings
table. With `NO_REVIEWS` or empty, append a 5-row placeholder table (CEO/Codex/Eng/
Design/DX Review) with all zeros and verdict "NO REVIEWS YET — run `/autoplan`".
If a richer review report already exists, skip — review skills wrote it.

PLAN MODE EXCEPTION — always allowed (it's the plan file).

# /benchmark-models — Cross-Model Skill Benchmark

You are running the `/benchmark-models` workflow. It wraps the `gstack-model-benchmark`
binary with an interactive flow that picks a prompt, confirms providers, previews auth,
and runs the benchmark.

Different from `/benchmark` — that skill measures web page performance (Core Web Vitals,
load times). This skill measures AI model performance on gstack skills or arbitrary prompts.

---

## Step 0: Locate the binary

```bash
BIN="$HOME/.claude/skills/gstack/bin/gstack-model-benchmark"
[ -x "$BIN" ] || BIN=".claude/skills/gstack/bin/gstack-model-benchmark"
[ -x "$BIN" ] || { echo "ERROR: gstack-model-benchmark not found. Run ./setup in the gstack install dir." >&2; exit 1; }
echo "BIN: $BIN"
```

If not found, stop and tell the user to reinstall gstack.

---

## Step 1: Choose a prompt

Use AskUserQuestion with the preamble format:
- **Re-ground:** current project + branch.
- **Simplify:** "A cross-model benchmark runs the same prompt through 2-3 AI models and shows you how they compare on speed, cost, and output quality. What prompt should we use?"
- **RECOMMENDATION:** A because benchmarking against a real skill exposes tool-use differences, not just raw generation.
- **Options:**
  - A) Benchmark one of my gstack skills (we'll pick which skill next). Completeness: 10/10.
  - B) Use an inline prompt — type it on the next turn. Completeness: 8/10.
  - C) Point at a prompt file on disk — specify path on the next turn. Completeness: 8/10.

If A: list top-level gstack skills that have SKILL.md files (from `find . -maxdepth 2 -name SKILL.md -not -path './.*'`), ask the user to pick one via a second AskUserQuestion. Use the picked SKILL.md path as the prompt file.

If B: ask the user for the inline prompt. Use it verbatim via `--prompt "<text>"`.

If C: ask for the path. Verify it exists. Use it as the positional argument.

---

## Step 2: Choose providers

```bash
"$BIN" --prompt "unused, dry-run" --models claude,gpt,gemini --dry-run
```

Show the dry-run output. The "Adapter availability" section tells the user which providers will actually run (OK) vs skip (NOT READY — remediation hint included).

If ALL three show NOT READY: stop with a clear message — the benchmark can't run without at least one authed provider. Suggest `claude login`, `codex login`, or `gemini login` / `export GOOGLE_API_KEY`.

If at least one is OK: AskUserQuestion:
- **Simplify:** "Which models should we include? The dry-run above showed which are authed. Unauthed ones will be skipped cleanly — they won't abort the batch."
- **RECOMMENDATION:** A (all authed providers) because running as many as possible gives the richest comparison.
- **Options:**
  - A) All authed providers. Completeness: 10/10.
  - B) Only Claude. Completeness: 6/10 (no cross-model signal — use /ship's review for solo claude benchmarks instead).
  - C) Pick two — specify on next turn. Completeness: 8/10.

---

## Step 3: Decide on judge

```bash
if [ -n "$ANTHROPIC_API_KEY" ] || grep -q 'ANTHROPIC' "$HOME/.claude/.credentials.json" 2>/dev/null; then
  echo "JUDGE_AVAILABLE"
else
  echo "JUDGE_UNAVAILABLE"
fi
```

If the judge is available, AskUserQuestion:
- **Simplify:** "The quality judge scores each model's output on a 0-10 scale using Anthropic's Claude as a tiebreaker. Adds ~$0.05/run. Recommended if you care about output quality, not just latency and cost."
- **RECOMMENDATION:** A — the whole point is comparing quality, not just speed.
- **Options:**
  - A) Enable judge (adds ~$0.05). Completeness: 10/10.
  - B) Skip judge — speed/cost/tokens only. Completeness: 7/10.

If the judge is NOT available, skip this question and omit the `--judge` flag.

---

## Step 4: Run the benchmark

Construct the command from the Step 1, 2, and 3 decisions:

```bash
"$BIN" <prompt-spec> --models <picked-models> [--judge] --output table
```

Where `<prompt-spec>` is either `--prompt "<text>"` (Step 1B) or a file path (Step 1A or 1C), and `<picked-models>` is the comma-separated list from Step 2.

Stream the output as it arrives. This is slow — each provider runs the prompt fully. Expect 30s-5min depending on prompt complexity and whether `--judge` is on.
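
For example, a Step 1A run over a picked skill file, with all three providers and the judge enabled, might look like this (the path is illustrative; it comes from the user's Step 1 pick):

```bash
# Hypothetical invocation following the template above.
"$BIN" qa/SKILL.md --models claude,gpt,gemini --judge --output table
```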

---

## Step 5: Interpret results

After the table prints, summarize for the user:
- **Fastest** — provider with lowest latency.
- **Cheapest** — provider with lowest cost.
- **Highest quality** (if `--judge` ran) — provider with highest score.
- **Best overall** — use judgment. If judge ran: quality-weighted. Otherwise: note the tradeoff the user needs to make.

If any provider hit an error (auth/timeout/rate_limit), call it out with the remediation path.

---

## Step 6: Offer to save results

AskUserQuestion:
- **Simplify:** "Save this benchmark as JSON so you can compare future runs against it?"
- **RECOMMENDATION:** A — skill performance drifts as providers update their models; a saved baseline catches quality regressions.
- **Options:**
  - A) Save to `~/.gstack/benchmarks/<date>-<skill-or-prompt-slug>.json`. Completeness: 10/10.
  - B) Just print, don't save. Completeness: 5/10 (loses trend data).

If A: re-run with `--output json` and tee to the dated file. Print the path so the user can diff future runs against it.
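
A minimal way to do that, assuming option A's filename scheme (the slug here is illustrative, and the placeholders are the same ones Step 4 defines):

```bash
# Save the JSON run to a dated baseline file and print its path.
OUT=~/.gstack/benchmarks/$(date +%Y-%m-%d)-qa-skill.json
mkdir -p ~/.gstack/benchmarks
"$BIN" <prompt-spec> --models <picked-models> --output json | tee "$OUT"
echo "Saved: $OUT"
```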

---

## Important Rules

- **Never run a real benchmark without Step 2's dry-run first.** Users need to see auth status before spending API calls.
- **Never hardcode model names.** Always pass providers from the user's Step 2 choice — the binary handles the rest.
- **Never auto-include `--judge`.** It adds real cost; the user must opt in.
- **If zero providers are authed, STOP.** Don't attempt the benchmark — it produces no useful output.
- **Cost is visible.** Every run shows per-provider cost in the table. Users should see it before deciding on the next run.