feat(benchmark-models): new skill wrapping gstack-model-benchmark
Wires the orphaned gstack-model-benchmark binary into a dedicated skill
so users can discover cross-model benchmarking via /benchmark-models or
voice triggers ("compare models", "which model is best").
Deliberately separate from /benchmark (page performance) because the
two surfaces test completely different things — confusing them would
muddy both.
Flow:
1. Pick a prompt (an existing SKILL.md file, inline text, or file path)
2. Confirm providers (dry-run shows auth status per provider)
3. Decide on --judge (adds ~$0.05, scores output quality 0-10)
4. Run the benchmark — table output
5. Interpret results (fastest / cheapest / highest quality)
6. Offer to save to ~/.gstack/benchmarks/<date>.json for trend tracking
Uses gstack-model-benchmark --dry-run as a safety gate — auth status is
visible BEFORE the user spends API calls. If zero providers are authed,
the skill stops cleanly rather than attempting a run that produces no
useful output.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@@ -0,0 +1,556 @@
---
name: benchmark-models
preamble-tier: 1
version: 1.0.0
description: |
  Cross-model benchmark for gstack skills. Runs the same prompt through Claude,
  GPT (via Codex CLI), and Gemini side-by-side — compares latency, tokens, cost,
  and optionally quality via LLM judge. Answers "which model is actually best
  for this skill?" with data instead of vibes. Separate from /benchmark, which
  measures web page performance. Use when: "benchmark models", "compare models",
  "which model is best for X", "cross-model comparison", "model shootout". (gstack)
  Voice triggers (speech-to-text aliases): "compare models", "model shootout", "which model is best".
triggers:
  - cross model benchmark
  - compare claude gpt gemini
  - benchmark skill across models
  - which model should I use
allowed-tools:
  - Bash
  - Read
  - AskUserQuestion
---

<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
<!-- Regenerate: bun run gen:skill-docs -->

## Preamble (run first)

```bash
_UPD=$(~/.claude/skills/gstack/bin/gstack-update-check 2>/dev/null || .claude/skills/gstack/bin/gstack-update-check 2>/dev/null || true)
[ -n "$_UPD" ] && echo "$_UPD" || true
mkdir -p ~/.gstack/sessions
touch ~/.gstack/sessions/"$PPID"
_SESSIONS=$(find ~/.gstack/sessions -mmin -120 -type f 2>/dev/null | wc -l | tr -d ' ')
find ~/.gstack/sessions -mmin +120 -type f -exec rm {} + 2>/dev/null || true
_PROACTIVE=$(~/.claude/skills/gstack/bin/gstack-config get proactive 2>/dev/null || echo "true")
_PROACTIVE_PROMPTED=$([ -f ~/.gstack/.proactive-prompted ] && echo "yes" || echo "no")
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
echo "BRANCH: $_BRANCH"
_SKILL_PREFIX=$(~/.claude/skills/gstack/bin/gstack-config get skill_prefix 2>/dev/null || echo "false")
echo "PROACTIVE: $_PROACTIVE"
echo "PROACTIVE_PROMPTED: $_PROACTIVE_PROMPTED"
echo "SKILL_PREFIX: $_SKILL_PREFIX"
source <(~/.claude/skills/gstack/bin/gstack-repo-mode 2>/dev/null) || true
REPO_MODE=${REPO_MODE:-unknown}
echo "REPO_MODE: $REPO_MODE"
_LAKE_SEEN=$([ -f ~/.gstack/.completeness-intro-seen ] && echo "yes" || echo "no")
echo "LAKE_INTRO: $_LAKE_SEEN"
_TEL=$(~/.claude/skills/gstack/bin/gstack-config get telemetry 2>/dev/null || true)
_TEL_PROMPTED=$([ -f ~/.gstack/.telemetry-prompted ] && echo "yes" || echo "no")
_TEL_START=$(date +%s)
_SESSION_ID="$$-$(date +%s)"
echo "TELEMETRY: ${_TEL:-off}"
echo "TEL_PROMPTED: $_TEL_PROMPTED"
mkdir -p ~/.gstack/analytics
if [ "$_TEL" != "off" ]; then
  echo '{"skill":"benchmark-models","ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","repo":"'$(basename "$(git rev-parse --show-toplevel 2>/dev/null)" 2>/dev/null || echo "unknown")'"}' >> ~/.gstack/analytics/skill-usage.jsonl 2>/dev/null || true
fi
# zsh-compatible: use find instead of glob to avoid NOMATCH error
for _PF in $(find ~/.gstack/analytics -maxdepth 1 -name '.pending-*' 2>/dev/null); do
  if [ -f "$_PF" ]; then
    if [ "$_TEL" != "off" ] && [ -x "$HOME/.claude/skills/gstack/bin/gstack-telemetry-log" ]; then
      ~/.claude/skills/gstack/bin/gstack-telemetry-log --event-type skill_run --skill _pending_finalize --outcome unknown --session-id "$_SESSION_ID" 2>/dev/null || true
    fi
    rm -f "$_PF" 2>/dev/null || true
  fi
  break
done
# Learnings count
eval "$(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)" 2>/dev/null || true
_LEARN_FILE="${GSTACK_HOME:-$HOME/.gstack}/projects/${SLUG:-unknown}/learnings.jsonl"
if [ -f "$_LEARN_FILE" ]; then
  _LEARN_COUNT=$(wc -l < "$_LEARN_FILE" 2>/dev/null | tr -d ' ')
  echo "LEARNINGS: $_LEARN_COUNT entries loaded"
  if [ "$_LEARN_COUNT" -gt 5 ] 2>/dev/null; then
    ~/.claude/skills/gstack/bin/gstack-learnings-search --limit 3 2>/dev/null || true
  fi
else
  echo "LEARNINGS: 0"
fi
# Session timeline: record skill start (local-only, never sent anywhere)
~/.claude/skills/gstack/bin/gstack-timeline-log '{"skill":"benchmark-models","event":"started","branch":"'"$_BRANCH"'","session":"'"$_SESSION_ID"'"}' 2>/dev/null &
# Check if CLAUDE.md has routing rules
_HAS_ROUTING="no"
if [ -f CLAUDE.md ] && grep -q "## Skill routing" CLAUDE.md 2>/dev/null; then
  _HAS_ROUTING="yes"
fi
_ROUTING_DECLINED=$(~/.claude/skills/gstack/bin/gstack-config get routing_declined 2>/dev/null || echo "false")
echo "HAS_ROUTING: $_HAS_ROUTING"
echo "ROUTING_DECLINED: $_ROUTING_DECLINED"
# Vendoring deprecation: detect if CWD has a vendored gstack copy
_VENDORED="no"
if [ -d ".claude/skills/gstack" ] && [ ! -L ".claude/skills/gstack" ]; then
  if [ -f ".claude/skills/gstack/VERSION" ] || [ -d ".claude/skills/gstack/.git" ]; then
    _VENDORED="yes"
  fi
fi
echo "VENDORED_GSTACK: $_VENDORED"
echo "MODEL_OVERLAY: claude"
# Checkpoint mode (explicit = no auto-commit, continuous = WIP commits as you go)
_CHECKPOINT_MODE=$(~/.claude/skills/gstack/bin/gstack-config get checkpoint_mode 2>/dev/null || echo "explicit")
_CHECKPOINT_PUSH=$(~/.claude/skills/gstack/bin/gstack-config get checkpoint_push 2>/dev/null || echo "false")
echo "CHECKPOINT_MODE: $_CHECKPOINT_MODE"
echo "CHECKPOINT_PUSH: $_CHECKPOINT_PUSH"
# Detect spawned session (OpenClaw or other orchestrator)
[ -n "$OPENCLAW_SESSION" ] && echo "SPAWNED_SESSION: true" || true
```

If `PROACTIVE` is `"false"`, do not proactively suggest gstack skills AND do not
auto-invoke skills based on conversation context. Only run skills the user explicitly
types (e.g., /qa, /ship). If you would have auto-invoked a skill, instead briefly say:
"I think /skillname might help here — want me to run it?" and wait for confirmation.
The user opted out of proactive behavior.

If `SKILL_PREFIX` is `"true"`, the user has namespaced skill names. When suggesting
or invoking other gstack skills, use the `/gstack-` prefix (e.g., `/gstack-qa` instead
of `/qa`, `/gstack-ship` instead of `/ship`). Disk paths are unaffected — always use
`~/.claude/skills/gstack/[skill-name]/SKILL.md` for reading skill files.

If output shows `UPGRADE_AVAILABLE <old> <new>`: read `~/.claude/skills/gstack/gstack-upgrade/SKILL.md` and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined).

If output shows `JUST_UPGRADED <from> <to>` AND `SPAWNED_SESSION` is NOT set: tell
the user "Running gstack v{to} (just updated!)" and then check for new features to
surface. For each per-feature marker below, if the marker file is missing AND the
feature is plausibly useful for this user, use AskUserQuestion to let them try it.
Fire once per feature per user, NOT once per upgrade.

**In spawned sessions (`SPAWNED_SESSION` = "true"): SKIP feature discovery entirely.**
Just print "Running gstack v{to}" and continue. Orchestrators do not want interactive
prompts from sub-sessions.

**Feature discovery markers and prompts** (one at a time, max one per session):

1. `~/.claude/skills/gstack/.feature-prompted-continuous-checkpoint` →
   Prompt: "Continuous checkpoint auto-commits your work as you go with `WIP:` prefix
   so you never lose progress to a crash. Local-only by default — doesn't push
   anywhere unless you turn that on. Want to try it?"
   Options: A) Enable continuous mode, B) Show me first (print the section from
   the preamble Continuous Checkpoint Mode), C) Skip.
   If A: run `~/.claude/skills/gstack/bin/gstack-config set checkpoint_mode continuous`.
   Always: `touch ~/.claude/skills/gstack/.feature-prompted-continuous-checkpoint`

2. `~/.claude/skills/gstack/.feature-prompted-model-overlay` →
   Inform only (no prompt): "Model overlays are active. `MODEL_OVERLAY: {model}`
   shown in the preamble output tells you which behavioral patch is applied.
   Override with `--model` when regenerating skills (e.g., `bun run gen:skill-docs
   --model gpt-5.4`). Default is claude."
   Always: `touch ~/.claude/skills/gstack/.feature-prompted-model-overlay`

After handling JUST_UPGRADED (prompts done or skipped), continue with the skill
workflow.

If `LAKE_INTRO` is `no`: Before continuing, introduce the Completeness Principle.
Tell the user: "gstack follows the **Boil the Lake** principle — always do the complete
thing when AI makes the marginal cost near-zero. Read more: https://garryslist.org/posts/boil-the-ocean"
Then offer to open the essay in their default browser:

```bash
open https://garryslist.org/posts/boil-the-ocean
touch ~/.gstack/.completeness-intro-seen
```

Only run `open` if the user says yes. Always run `touch` to mark as seen. This only happens once.

If `TEL_PROMPTED` is `no` AND `LAKE_INTRO` is `yes`: After the lake intro is handled,
ask the user about telemetry. Use AskUserQuestion:

> Help gstack get better! Community mode shares usage data (which skills you use, how long
> they take, crash info) with a stable device ID so we can track trends and fix bugs faster.
> No code, file paths, or repo names are ever sent.
> Change anytime with `gstack-config set telemetry off`.

Options:
- A) Help gstack get better! (recommended)
- B) No thanks

If A: run `~/.claude/skills/gstack/bin/gstack-config set telemetry community`

If B: ask a follow-up AskUserQuestion:

> How about anonymous mode? We just learn that *someone* used gstack — no unique ID,
> no way to connect sessions. Just a counter that helps us know if anyone's out there.

Options:
- A) Sure, anonymous is fine
- B) No thanks, fully off

If B→A: run `~/.claude/skills/gstack/bin/gstack-config set telemetry anonymous`
If B→B: run `~/.claude/skills/gstack/bin/gstack-config set telemetry off`

Always run:

```bash
touch ~/.gstack/.telemetry-prompted
```

This only happens once. If `TEL_PROMPTED` is `yes`, skip this entirely.

If `PROACTIVE_PROMPTED` is `no` AND `TEL_PROMPTED` is `yes`: After telemetry is handled,
ask the user about proactive behavior. Use AskUserQuestion:

> gstack can proactively figure out when you might need a skill while you work —
> like suggesting /qa when you say "does this work?" or /investigate when you hit
> a bug. We recommend keeping this on — it speeds up every part of your workflow.

Options:
- A) Keep it on (recommended)
- B) Turn it off — I'll type /commands myself

If A: run `~/.claude/skills/gstack/bin/gstack-config set proactive true`
If B: run `~/.claude/skills/gstack/bin/gstack-config set proactive false`

Always run:

```bash
touch ~/.gstack/.proactive-prompted
```

This only happens once. If `PROACTIVE_PROMPTED` is `yes`, skip this entirely.

If `HAS_ROUTING` is `no` AND `ROUTING_DECLINED` is `false` AND `PROACTIVE_PROMPTED` is `yes`:
Check if a CLAUDE.md file exists in the project root. If it does not exist, create it.

Use AskUserQuestion:

> gstack works best when your project's CLAUDE.md includes skill routing rules.
> This tells Claude to use specialized workflows (like /ship, /investigate, /qa)
> instead of answering directly. It's a one-time addition, about 15 lines.

Options:
- A) Add routing rules to CLAUDE.md (recommended)
- B) No thanks, I'll invoke skills manually

If A: Append this section to the end of CLAUDE.md:

```markdown

## Skill routing

When the user's request matches an available skill, ALWAYS invoke it using the Skill
tool as your FIRST action. Do NOT answer directly, do NOT use other tools first.
The skill has specialized workflows that produce better results than ad-hoc answers.

Key routing rules:
- Product ideas, "is this worth building", brainstorming → invoke office-hours
- Bugs, errors, "why is this broken", 500 errors → invoke investigate
- Ship, deploy, push, create PR → invoke ship
- QA, test the site, find bugs → invoke qa
- Code review, check my diff → invoke review
- Update docs after shipping → invoke document-release
- Weekly retro → invoke retro
- Design system, brand → invoke design-consultation
- Visual audit, design polish → invoke design-review
- Architecture review → invoke plan-eng-review
- Save progress, checkpoint, resume → invoke checkpoint
- Code quality, health check → invoke health
```

Then commit the change: `git add CLAUDE.md && git commit -m "chore: add gstack skill routing rules to CLAUDE.md"`

If B: run `~/.claude/skills/gstack/bin/gstack-config set routing_declined true`
Say "No problem. You can add routing rules later by running `gstack-config set routing_declined false` and re-running any skill."

This only happens once per project. If `HAS_ROUTING` is `yes` or `ROUTING_DECLINED` is `true`, skip this entirely.

If `VENDORED_GSTACK` is `yes`: This project has a vendored copy of gstack at
`.claude/skills/gstack/`. Vendoring is deprecated. We will not keep vendored copies
up to date, so this project's gstack will fall behind.

Use AskUserQuestion (one-time per project, check for `~/.gstack/.vendoring-warned-$SLUG` marker):

> This project has gstack vendored in `.claude/skills/gstack/`. Vendoring is deprecated.
> We won't keep this copy up to date, so you'll fall behind on new features and fixes.
>
> Want to migrate to team mode? It takes about 30 seconds.

Options:
- A) Yes, migrate to team mode now
- B) No, I'll handle it myself

If A:
1. Run `git rm -r .claude/skills/gstack/`
2. Run `echo '.claude/skills/gstack/' >> .gitignore`
3. Run `~/.claude/skills/gstack/bin/gstack-team-init required` (or `optional`)
4. Run `git add .claude/ .gitignore CLAUDE.md && git commit -m "chore: migrate gstack from vendored to team mode"`
5. Tell the user: "Done. Each developer now runs: `cd ~/.claude/skills/gstack && ./setup --team`"

If B: say "OK, you're on your own to keep the vendored copy up to date."

Always run (regardless of choice):

```bash
eval "$(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)" 2>/dev/null || true
touch ~/.gstack/.vendoring-warned-${SLUG:-unknown}
```

This only happens once per project. If the marker file exists, skip entirely.

If `SPAWNED_SESSION` is `"true"`, you are running inside a session spawned by an
AI orchestrator (e.g., OpenClaw). In spawned sessions:
- Do NOT use AskUserQuestion for interactive prompts. Auto-choose the recommended option.
- Do NOT run upgrade checks, telemetry prompts, routing injection, or lake intro.
- Focus on completing the task and reporting results via prose output.
- End with a completion report: what shipped, decisions made, anything uncertain.

## Model-Specific Behavioral Patch (claude)

The following nudges are tuned for the claude model family. They are
**subordinate** to skill workflow, STOP points, AskUserQuestion gates, plan-mode
safety, and /ship review gates. If a nudge below conflicts with skill instructions,
the skill wins. Treat these as preferences, not rules.

**Todo-list discipline.** When working through a multi-step plan, mark each task
complete individually as you finish it. Do not batch-complete at the end. If a task
turns out to be unnecessary, mark it skipped with a one-line reason.

**Think before heavy actions.** For complex operations (refactors, migrations,
non-trivial new features), briefly state your approach before executing. This lets
the user course-correct cheaply instead of mid-flight.

**Dedicated tools over Bash.** Prefer Read, Edit, Write, Glob, Grep over shell
equivalents (cat, sed, find, grep). The dedicated tools are cheaper and clearer.

## Voice

**Tone:** direct, concrete, sharp, never corporate, never academic. Sound like a builder, not a consultant. Name the file, the function, the command. No filler, no throat-clearing.

**Writing rules:** No em dashes (use commas, periods, "..."). No AI vocabulary (delve, crucial, robust, comprehensive, nuanced, etc.). Short paragraphs. End with what to do.

The user always has context you don't. Cross-model agreement is a recommendation, not a decision — the user decides.

## Completion Status Protocol

When completing a skill workflow, report status using one of:
- **DONE** — All steps completed successfully. Evidence provided for each claim.
- **DONE_WITH_CONCERNS** — Completed, but with issues the user should know about. List each concern.
- **BLOCKED** — Cannot proceed. State what is blocking and what was tried.
- **NEEDS_CONTEXT** — Missing information required to continue. State exactly what you need.

### Escalation

It is always OK to stop and say "this is too hard for me" or "I'm not confident in this result."

Bad work is worse than no work. You will not be penalized for escalating.
- If you have attempted a task 3 times without success, STOP and escalate.
- If you are uncertain about a security-sensitive change, STOP and escalate.
- If the scope of work exceeds what you can verify, STOP and escalate.

Escalation format:

```
STATUS: BLOCKED | NEEDS_CONTEXT
REASON: [1-2 sentences]
ATTEMPTED: [what you tried]
RECOMMENDATION: [what the user should do next]
```

## Operational Self-Improvement

Before completing, reflect on this session:
- Did any commands fail unexpectedly?
- Did you take a wrong approach and have to backtrack?
- Did you discover a project-specific quirk (build order, env vars, timing, auth)?
- Did something take longer than expected because of a missing flag or config?

If yes, log an operational learning for future sessions:

```bash
~/.claude/skills/gstack/bin/gstack-learnings-log '{"skill":"SKILL_NAME","type":"operational","key":"SHORT_KEY","insight":"DESCRIPTION","confidence":N,"source":"observed"}'
```

Replace SKILL_NAME with the current skill name. Only log genuine operational discoveries.
Don't log obvious things or one-time transient errors (network blips, rate limits).
A good test: would knowing this save 5+ minutes in a future session? If yes, log it.

## Telemetry (run last)

After the skill workflow completes (success, error, or abort), log the telemetry event.
Determine the skill name from the `name:` field in this file's YAML frontmatter.
Determine the outcome from the workflow result (success if completed normally, error
if it failed, abort if the user interrupted).

**PLAN MODE EXCEPTION — ALWAYS RUN:** This command writes telemetry to
`~/.gstack/analytics/` (user config directory, not project files). The skill
preamble already writes to the same directory — this is the same pattern.
Skipping this command loses session duration and outcome data.

Run this bash:

```bash
_TEL_END=$(date +%s)
_TEL_DUR=$(( _TEL_END - _TEL_START ))
rm -f ~/.gstack/analytics/.pending-"$_SESSION_ID" 2>/dev/null || true
# Session timeline: record skill completion (local-only, never sent anywhere)
~/.claude/skills/gstack/bin/gstack-timeline-log '{"skill":"SKILL_NAME","event":"completed","branch":"'$(git branch --show-current 2>/dev/null || echo unknown)'","outcome":"OUTCOME","duration_s":"'"$_TEL_DUR"'","session":"'"$_SESSION_ID"'"}' 2>/dev/null || true
# Local analytics (gated on telemetry setting)
if [ "$_TEL" != "off" ]; then
  echo '{"skill":"SKILL_NAME","duration_s":"'"$_TEL_DUR"'","outcome":"OUTCOME","browse":"USED_BROWSE","session":"'"$_SESSION_ID"'","ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'"}' >> ~/.gstack/analytics/skill-usage.jsonl 2>/dev/null || true
fi
# Remote telemetry (opt-in, requires binary)
if [ "$_TEL" != "off" ] && [ -x ~/.claude/skills/gstack/bin/gstack-telemetry-log ]; then
  ~/.claude/skills/gstack/bin/gstack-telemetry-log \
    --skill "SKILL_NAME" --duration "$_TEL_DUR" --outcome "OUTCOME" \
    --used-browse "USED_BROWSE" --session-id "$_SESSION_ID" 2>/dev/null &
fi
```

Replace `SKILL_NAME` with the actual skill name from frontmatter, `OUTCOME` with
success/error/abort, and `USED_BROWSE` with true/false based on whether `$B` was used.
If you cannot determine the outcome, use "unknown". The local JSONL always logs. The
remote binary only runs if telemetry is not off and the binary exists.

## Plan Mode Safe Operations

In plan mode, these are always allowed (they inform the plan, don't modify source):
`$B` (browse), `$D` (design), `codex exec`/`codex review`, writes to `~/.gstack/`,
writes to the plan file, `open` for generated artifacts.

## Skill Invocation During Plan Mode

If the user invokes a skill in plan mode, that skill takes precedence over generic
plan mode behavior. Treat it as executable instructions, not reference. Follow step
by step. AskUserQuestion calls satisfy plan mode's end-of-turn requirement. At a STOP
point, stop immediately. Do not continue the workflow past a STOP point and do not
call ExitPlanMode there. Commands marked "PLAN MODE EXCEPTION — ALWAYS RUN" execute.
Other writes need to be already permitted above or explicitly exception-marked. Call
ExitPlanMode only after the skill workflow completes, or if the user tells you to
cancel the skill or leave plan mode.

## Plan Status Footer

In plan mode, before ExitPlanMode: if the plan file lacks a `## GSTACK REVIEW REPORT`
section, run `~/.claude/skills/gstack/bin/gstack-review-read` and append a report.
With JSONL entries (before `---CONFIG---`), format the standard runs/status/findings
table. With `NO_REVIEWS` or empty, append a 5-row placeholder table
(CEO/Codex/Eng/Design/DX Review) with all zeros and verdict "NO REVIEWS YET — run `/autoplan`".
If a richer review report already exists, skip — review skills wrote it.
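
For the `NO_REVIEWS` case, the placeholder could look like this. A sketch only: the rows and verdict come from the rules above, but the exact column layout is an assumption, so match whatever format populated reports use.

```
## GSTACK REVIEW REPORT

| Review    | Runs | Status | Findings |
|-----------|------|--------|----------|
| CEO       | 0    | -      | 0        |
| Codex     | 0    | -      | 0        |
| Eng       | 0    | -      | 0        |
| Design    | 0    | -      | 0        |
| DX Review | 0    | -      | 0        |

Verdict: NO REVIEWS YET — run `/autoplan`
```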

PLAN MODE EXCEPTION — always allowed (it's the plan file).

# /benchmark-models — Cross-Model Skill Benchmark

You are running the `/benchmark-models` workflow. It wraps the `gstack-model-benchmark` binary with an interactive flow that picks a prompt, confirms providers, previews auth, and runs the benchmark.

Different from `/benchmark` — that skill measures web page performance (Core Web Vitals, load times). This skill measures AI model performance on gstack skills or arbitrary prompts.

---

## Step 0: Locate the binary

```bash
BIN="$HOME/.claude/skills/gstack/bin/gstack-model-benchmark"
[ -x "$BIN" ] || BIN=".claude/skills/gstack/bin/gstack-model-benchmark"
[ -x "$BIN" ] || { echo "ERROR: gstack-model-benchmark not found. Run ./setup in the gstack install dir." >&2; exit 1; }
echo "BIN: $BIN"
```

If not found, stop and tell the user to reinstall gstack.

---

## Step 1: Choose a prompt

Use AskUserQuestion with the preamble format:
- **Re-ground:** current project + branch.
- **Simplify:** "A cross-model benchmark runs the same prompt through 2-3 AI models and shows you how they compare on speed, cost, and output quality. What prompt should we use?"
- **RECOMMENDATION:** A because benchmarking against a real skill exposes tool-use differences, not just raw generation.
- **Options:**
  - A) Benchmark one of my gstack skills (we'll pick which skill next). Completeness: 10/10.
  - B) Use an inline prompt — type it on the next turn. Completeness: 8/10.
  - C) Point at a prompt file on disk — specify path on the next turn. Completeness: 8/10.

If A: list top-level gstack skills that have SKILL.md files (from `find . -maxdepth 2 -name SKILL.md -not -path './.*'`), ask the user to pick one via a second AskUserQuestion. Use the picked SKILL.md path as the prompt file.

If B: ask the user for the inline prompt. Use it verbatim via `--prompt "<text>"`.

If C: ask for the path. Verify it exists, then use it as a positional argument.
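
A minimal existence check, as a sketch (`$PROMPT_FILE` is an illustrative variable holding the user-supplied path):

```bash
# Illustrative variable: set $PROMPT_FILE from the user's answer
if [ ! -f "$PROMPT_FILE" ]; then
  echo "ERROR: prompt file not found: $PROMPT_FILE" >&2
else
  echo "PROMPT_FILE: $PROMPT_FILE ($(wc -c < "$PROMPT_FILE" | tr -d ' ') bytes)"
fi
```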

---

## Step 2: Choose providers

```bash
"$BIN" --prompt "unused, dry-run" --models claude,gpt,gemini --dry-run
```

Show the dry-run output. The "Adapter availability" section tells the user which providers will actually run (OK) vs skip (NOT READY — remediation hint included).

If ALL three show NOT READY: stop with a clear message — the benchmark can't run without at least one authed provider. Suggest `claude login`, `codex login`, or `gemini login` / `export GOOGLE_API_KEY`.

If at least one is OK: AskUserQuestion:
- **Simplify:** "Which models should we include? The dry-run above showed which are authed. Unauthed ones will be skipped cleanly — they won't abort the batch."
- **RECOMMENDATION:** A (all authed providers) because running as many as possible gives the richest comparison.
- **Options:**
  - A) All authed providers. Completeness: 10/10.
  - B) Only Claude. Completeness: 6/10 (no cross-model signal — use /ship's review for solo claude benchmarks instead).
  - C) Pick two — specify on next turn. Completeness: 8/10.

---

## Step 3: Decide on judge

```bash
[ -n "$ANTHROPIC_API_KEY" ] || grep -q 'ANTHROPIC' "$HOME/.claude/.credentials.json" 2>/dev/null && echo "JUDGE_AVAILABLE" || echo "JUDGE_UNAVAILABLE"
```

If judge is available, AskUserQuestion:
- **Simplify:** "The quality judge scores each model's output on a 0-10 scale using Anthropic's Claude as a tiebreaker. Adds ~$0.05/run. Recommended if you care about output quality, not just latency and cost."
- **RECOMMENDATION:** A — the whole point is comparing quality, not just speed.
- **Options:**
  - A) Enable judge (adds ~$0.05). Completeness: 10/10.
  - B) Skip judge — speed/cost/tokens only. Completeness: 7/10.

If judge is NOT available, skip this question and omit the `--judge` flag.

---

## Step 4: Run the benchmark

Construct the command from the Step 1, 2, and 3 decisions:

```bash
"$BIN" <prompt-spec> --models <picked-models> [--judge] --output table
```

Where `<prompt-spec>` is either `--prompt "<text>"` (Step 1B) or a file path (Step 1A or 1C), and `<picked-models>` is the comma-separated list from Step 2.
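
For example, a run against a skill file with two providers and the judge enabled might look like this (the file path and model list are illustrative):

```bash
# Illustrative: the path comes from Step 1, the models from Step 2
"$BIN" ./qa/SKILL.md --models claude,gemini --judge --output table
```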

Stream the output as it arrives. This is slow — each provider runs the prompt fully. Expect 30s-5min depending on prompt complexity and whether `--judge` is on.

---

## Step 5: Interpret results

After the table prints, summarize for the user:
- **Fastest** — provider with lowest latency.
- **Cheapest** — provider with lowest cost.
- **Highest quality** (if `--judge` ran) — provider with highest score.
- **Best overall** — use judgment. If judge ran: quality-weighted. Otherwise: note the tradeoff the user needs to make.

If any provider hit an error (auth/timeout/rate_limit), call it out with the remediation path.

---

## Step 6: Offer to save results

AskUserQuestion:
- **Simplify:** "Save this benchmark as JSON so you can compare future runs against it?"
- **RECOMMENDATION:** A — skill performance drifts as providers update their models; a saved baseline catches quality regressions.
- **Options:**
  - A) Save to `~/.gstack/benchmarks/<date>-<skill-or-prompt-slug>.json`. Completeness: 10/10.
  - B) Just print, don't save. Completeness: 5/10 (loses trend data).

If A: re-run with `--output json` and tee to the dated file. Print the path so the user can diff future runs against it.
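
A sketch of the save step, reusing the Step 4 invocation (the `qa` slug and path are illustrative; derive the slug from the skill name or prompt):

```bash
mkdir -p ~/.gstack/benchmarks
_OUT=~/.gstack/benchmarks/$(date +%Y-%m-%d)-qa.json   # "qa" is an illustrative slug
"$BIN" ./qa/SKILL.md --models claude,gemini --judge --output json | tee "$_OUT"
echo "Saved: $_OUT"
```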

---

## Important Rules

- **Never run a real benchmark without Step 2's dry-run first.** Users need to see auth status before spending API calls.
- **Never hardcode model names.** Always pass providers from the user's Step 2 choice — the binary handles the rest.
- **Never auto-include `--judge`.** It adds real cost; the user must opt in.
- **If zero providers are authed, STOP.** Don't attempt the benchmark — it produces no useful output.
- **Cost is visible.** Every run shows per-provider cost in the table. Users should see it before the next run.

@@ -0,0 +1,151 @@
---
name: benchmark-models
preamble-tier: 1
version: 1.0.0
description: |
  Cross-model benchmark for gstack skills. Runs the same prompt through Claude,
  GPT (via Codex CLI), and Gemini side-by-side — compares latency, tokens, cost,
  and optionally quality via LLM judge. Answers "which model is actually best
  for this skill?" with data instead of vibes. Separate from /benchmark, which
  measures web page performance. Use when: "benchmark models", "compare models",
  "which model is best for X", "cross-model comparison", "model shootout". (gstack)
voice-triggers:
  - "compare models"
  - "model shootout"
  - "which model is best"
triggers:
  - cross model benchmark
  - compare claude gpt gemini
  - benchmark skill across models
  - which model should I use
allowed-tools:
  - Bash
  - Read
  - AskUserQuestion
---

{{PREAMBLE}}

# /benchmark-models — Cross-Model Skill Benchmark

You are running the `/benchmark-models` workflow. It wraps the `gstack-model-benchmark` binary with an interactive flow that picks a prompt, confirms providers, previews auth, and runs the benchmark.

Different from `/benchmark` — that skill measures web page performance (Core Web Vitals, load times). This skill measures AI model performance on gstack skills or arbitrary prompts.

---

## Step 0: Locate the binary

```bash
BIN="$HOME/.claude/skills/gstack/bin/gstack-model-benchmark"
[ -x "$BIN" ] || BIN=".claude/skills/gstack/bin/gstack-model-benchmark"
[ -x "$BIN" ] || { echo "ERROR: gstack-model-benchmark not found. Run ./setup in the gstack install dir." >&2; exit 1; }
echo "BIN: $BIN"
```

If not found, stop and tell the user to reinstall gstack.

---

## Step 1: Choose a prompt

Use AskUserQuestion with the preamble format:
- **Re-ground:** current project + branch.
- **Simplify:** "A cross-model benchmark runs the same prompt through 2-3 AI models and shows you how they compare on speed, cost, and output quality. What prompt should we use?"
- **RECOMMENDATION:** A because benchmarking against a real skill exposes tool-use differences, not just raw generation.
- **Options:**
  - A) Benchmark one of my gstack skills (we'll pick which skill next). Completeness: 10/10.
  - B) Use an inline prompt — type it on the next turn. Completeness: 8/10.
  - C) Point at a prompt file on disk — specify path on the next turn. Completeness: 8/10.

If A: list top-level gstack skills that have SKILL.md files (from `find . -maxdepth 2 -name SKILL.md -not -path './.*'`), ask the user to pick one via a second AskUserQuestion. Use the picked SKILL.md path as the prompt file.

If B: ask the user for the inline prompt. Use it verbatim via `--prompt "<text>"`.

If C: ask for the path. Verify it exists, then use it as a positional argument.
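
A minimal existence check, as a sketch (`$PROMPT_FILE` is an illustrative variable holding the user-supplied path):

```bash
# Illustrative variable: set $PROMPT_FILE from the user's answer
if [ ! -f "$PROMPT_FILE" ]; then
  echo "ERROR: prompt file not found: $PROMPT_FILE" >&2
else
  echo "PROMPT_FILE: $PROMPT_FILE ($(wc -c < "$PROMPT_FILE" | tr -d ' ') bytes)"
fi
```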

---

## Step 2: Choose providers

```bash
"$BIN" --prompt "unused, dry-run" --models claude,gpt,gemini --dry-run
```

Show the dry-run output. The "Adapter availability" section tells the user which providers will actually run (OK) vs skip (NOT READY — remediation hint included).

If ALL three show NOT READY: stop with a clear message — the benchmark can't run without at least one authed provider. Suggest `claude login`, `codex login`, or `gemini login` / `export GOOGLE_API_KEY`.

If at least one is OK: AskUserQuestion:
- **Simplify:** "Which models should we include? The dry-run above showed which are authed. Unauthed ones will be skipped cleanly — they won't abort the batch."
- **RECOMMENDATION:** A (all authed providers) because running as many as possible gives the richest comparison.
- **Options:**
  - A) All authed providers. Completeness: 10/10.
  - B) Only Claude. Completeness: 6/10 (no cross-model signal — use /ship's review for solo claude benchmarks instead).
  - C) Pick two — specify on next turn. Completeness: 8/10.

---

## Step 3: Decide on judge

```bash
[ -n "$ANTHROPIC_API_KEY" ] || grep -q 'ANTHROPIC' "$HOME/.claude/.credentials.json" 2>/dev/null && echo "JUDGE_AVAILABLE" || echo "JUDGE_UNAVAILABLE"
```

If judge is available, AskUserQuestion:
- **Simplify:** "The quality judge scores each model's output on a 0-10 scale using Anthropic's Claude as a tiebreaker. Adds ~$0.05/run. Recommended if you care about output quality, not just latency and cost."
- **RECOMMENDATION:** A — the whole point is comparing quality, not just speed.
- **Options:**
  - A) Enable judge (adds ~$0.05). Completeness: 10/10.
  - B) Skip judge — speed/cost/tokens only. Completeness: 7/10.

If judge is NOT available, skip this question and omit the `--judge` flag.

---

## Step 4: Run the benchmark

Construct the command from the Step 1, 2, and 3 decisions:

```bash
"$BIN" <prompt-spec> --models <picked-models> [--judge] --output table
```

Where `<prompt-spec>` is either `--prompt "<text>"` (Step 1B) or a file path (Step 1A or 1C), and `<picked-models>` is the comma-separated list from Step 2.
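
For example, a run against a skill file with two providers and the judge enabled might look like this (the file path and model list are illustrative):

```bash
# Illustrative: the path comes from Step 1, the models from Step 2
"$BIN" ./qa/SKILL.md --models claude,gemini --judge --output table
```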

Stream the output as it arrives. This is slow — each provider runs the prompt fully. Expect 30s-5min depending on prompt complexity and whether `--judge` is on.

---

## Step 5: Interpret results

After the table prints, summarize for the user:
- **Fastest** — provider with lowest latency.
- **Cheapest** — provider with lowest cost.
- **Highest quality** (if `--judge` ran) — provider with highest score.
- **Best overall** — use judgment. If judge ran: quality-weighted. Otherwise: note the tradeoff the user needs to make.

If any provider hit an error (auth/timeout/rate_limit), call it out with the remediation path.

---

## Step 6: Offer to save results

AskUserQuestion:
- **Simplify:** "Save this benchmark as JSON so you can compare future runs against it?"
- **RECOMMENDATION:** A — skill performance drifts as providers update their models; a saved baseline catches quality regressions.
- **Options:**
  - A) Save to `~/.gstack/benchmarks/<date>-<skill-or-prompt-slug>.json`. Completeness: 10/10.
  - B) Just print, don't save. Completeness: 5/10 (loses trend data).

If A: re-run with `--output json` and tee to the dated file. Print the path so the user can diff future runs against it.
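
A sketch of the save step, reusing the Step 4 invocation (the `qa` slug and path are illustrative; derive the slug from the skill name or prompt):

```bash
mkdir -p ~/.gstack/benchmarks
_OUT=~/.gstack/benchmarks/$(date +%Y-%m-%d)-qa.json   # "qa" is an illustrative slug
"$BIN" ./qa/SKILL.md --models claude,gemini --judge --output json | tee "$_OUT"
echo "Saved: $_OUT"
```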

---

## Important Rules

- **Never run a real benchmark without Step 2's dry-run first.** Users need to see auth status before spending API calls.
- **Never hardcode model names.** Always pass providers from the user's Step 2 choice — the binary handles the rest.
- **Never auto-include `--judge`.** It adds real cost; the user must opt in.
- **If zero providers are authed, STOP.** Don't attempt the benchmark — it produces no useful output.
- **Cost is visible.** Every run shows per-provider cost in the table. Users should see it before the next run.