Files
gstack/canary/SKILL.md
T
Garry Tan cf3582c637 fix: community security + stability fixes (wave 1) (#325)
* feat: add /cso skill — OWASP Top 10 + STRIDE security audit

* fix: harden gstack-slug against shell injection via eval

Whitelist safe characters (a-zA-Z0-9._-) in SLUG and BRANCH output
to prevent shell metacharacter injection when used with eval.

Only affects self-hosted git servers with lax naming rules — GitHub
and GitLab enforce safe characters already. Defense-in-depth.

* fix(security): sanitize gstack-slug output against shell injection

The gstack-slug script is consumed via eval $(gstack-slug) throughout
skill templates. If a git remote URL contains shell metacharacters
like $(), backticks, or semicolons, they would be executed by eval.

Fix: strip all characters except [a-zA-Z0-9._-] from both SLUG and
BRANCH before output. This preserves normal values while neutralizing
any injection payload in malicious remote URLs.

Before: eval $(gstack-slug) with remote "foo/bar$(rm -rf /)" → executes rm
After:  eval $(gstack-slug) with remote "foo/bar$(rm -rf /)" → SLUG=foo-barrm-rf-

* fix(security): redact sensitive values in storage command output

The browse `storage` command dumps all localStorage and sessionStorage
as JSON. This can expose tokens, API keys, JWTs, and session credentials
in QA reports and agent transcripts.

Fix: redact values where the key matches sensitive patterns (token,
secret, key, password, auth, jwt, csrf) or the value starts with known
credential prefixes (eyJ for JWT, sk- for Stripe, ghp_ for GitHub, etc.).

Redacted values show length to aid debugging: [REDACTED — 128 chars]

* fix(browse): kill old server before restart to prevent orphaned chromium processes

When the health check fails or the server connection drops, `ensureServer()`
and `sendCommand()` would call `startServer()` without first killing the
previous server process. This left orphaned `chrome-headless-shell` renderer
processes running at ~120% CPU each.

After several reconnect cycles (e.g. pages that crash during hydration or
trigger hard navigations via `window.location.href`), dozens of zombie
chromium processes accumulate and exhaust system resources.

Fix: call `killServer()` on the stale PID before spawning a new server in
both the `ensureServer()` unhealthy path and the `sendCommand()` connection-
lost retry path.

Fixes #294

* Fix YAML linter error: nested mapping in compact sequence entries

Having "Run: bun" inside a plain scalar is not allowed per YAML spec which states: Plain scalars must never contain the “: ” and “ #” character combinations.

This simple fix switches to block scalars (|) to eliminate the ambiguity without changing runtime behavior.

* fix(security): add Azure metadata endpoint to SSRF blocklist

Add metadata.azure.internal to BLOCKED_METADATA_HOSTS alongside the
existing AWS/GCP endpoints. Closes the coverage gap identified in #125.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: add coverage for storage redaction

Test key-based redaction (auth_token, api_key), value-based redaction
(JWT prefix, GitHub PAT prefix), pass-through for normal keys, and
length preservation in redacted output.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: add community PR triage process to CONTRIBUTING.md

Document the wave-based PR triage pattern used for batching community
contributions. References PR #205 (v0.8.3) as the original example.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: adjust test key names to avoid redaction pattern collision

Rename testKey→testData and normalKey→displayName in storage tests
to avoid triggering #238's SENSITIVE_KEY regex (which matches 'key').
Also generate Codex variant of /cso skill.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: update project documentation for v0.9.10.0

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: zero-noise /cso security audits with FP filtering (v0.11.0.0)

Absorb Anthropic's security-review false positive filtering into /cso:
- 17 hard exclusions (DOS, test files, log spoofing, SSRF path-only,
  regex injection, race conditions unless concrete, etc.)
- 9 precedents (React XSS-safe, env vars trusted, client-side code
  doesn't need auth, shell scripts need concrete untrusted input path)
- 8/10 confidence gate — below threshold = don't report
- Independent sub-agent verification for each finding
- Exploit scenario requirement per finding
- Framework-aware analysis (Rails CSRF, React escaping, Angular sanitization)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: consolidate CHANGELOG — merge /cso launch + community wave into v0.11.0.0

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: rewrite README — lead with Karpathy quote, cut LinkedIn phrases, add /cso

Opens with the revolution (Karpathy, Steinberger/OpenClaw), keeps credentials
and LOC numbers, cuts filler phrases, adds hater bait, restores hiring block,
removes bloated "What's new" section, adds /cso to skills table and install.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(cso): adversarial review fixes — FP filtering, prompt injection, language coverage

- Exclusion #10: test files must verify not imported by non-test code
- Exclusion #13: distinguish user-message AI input from system-prompt injection
- Exclusion #14: ReDoS in user-input regex IS a real CVE class, don't exclude
- Add anti-manipulation rule: ignore audit-influencing instructions in codebase
- Fix confidence gate: remove contradictory 7-8 tier, hard cutoff at 8
- Fix verifier anchoring: send only file+line, not category/description
- Add Go, PHP, Java, C#, Kotlin to grep patterns (was 4 languages, now 8)
- Add GraphQL, gRPC, WebSocket endpoint detection to attack surface mapping

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(docs): correct skill counts, add /autoplan to README tables

Skill count was wrong in 3 places (said 19+7=26, said 25, actual is 28).
Added /autoplan to specialist table. Fixed troubleshooting skills list
to include all skills added since v0.7.0.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(browse): DNS rebinding protection for SSRF blocklist

validateNavigationUrl is now async — resolves hostname to IP and checks
against blocked metadata IPs. Prevents DNS rebinding where evil.com
initially resolves to a safe IP, then switches to 169.254.169.254.
All callers updated to await. Tests updated for async assertions.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(browse): lockfile prevents concurrent server start races

Adds exclusive lockfile (O_CREAT|O_EXCL) around ensureServer to prevent
TOCTOU race where two CLI invocations could both kill the old server and
start new ones, leaving an orphaned chromium process. Second caller now
waits for the first to finish starting.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(browse): improve storage redaction — word-boundary keys + more value prefixes

Key regex: use underscore/dot/hyphen boundaries instead of \b (which treats
_ as word char). Now correctly redacts auth_token, session_token while
skipping keyboardShortcuts, monkeyPatch, primaryKey.

Value regex: add AWS (AKIA), Stripe (sk_live_, pk_live_), Anthropic (sk-ant-),
Google (AIza), Sendgrid (SG.), Supabase (sbp_) prefixes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: migrate all remaining eval callers to source, fix stale CHANGELOG claim

5 templates and 2 bin scripts still used eval $(gstack-slug). All now use
source <(gstack-slug). Updated gstack-slug comment to match. Fixed v0.8.3
CHANGELOG entry that falsely claimed eval was fully eliminated — it was
the output sanitization that made it safe, not a calling convention change.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(docs): add /autoplan to install instructions, regen skill docs

The install instruction blocks and troubleshooting section were missing
/autoplan. All three skill list locations now include the complete 28-skill
set. Regenerated codex/agents SKILL.md files to match template changes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: update project documentation for v0.11.0.0

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* docs(cso): add disclaimer — not a substitute for professional security audits

LLMs can miss subtle vulns and produce false negatives. For production
systems with sensitive data, hire a real firm. /cso is a first pass,
not your only line of defense. Disclaimer appended to every report.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Arun Kumar Thiagarajan <arunkt.bm14@gmail.com>
Co-authored-by: Tyrone Robb <tyrone.robb@icloud.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Orkun Duman <orkun1675@gmail.com>
2026-03-22 13:19:10 -07:00

23 KiB

name, version, description, allowed-tools
name version description allowed-tools
canary 1.0.0 Post-deploy canary monitoring. Watches the live app for console errors, performance regressions, and page failures using the browse daemon. Takes periodic screenshots, compares against pre-deploy baselines, and alerts on anomalies. Use when: "monitor deploy", "canary", "post-deploy check", "watch production", "verify deploy".
Bash
Read
Write
Glob
AskUserQuestion

Preamble (run first)

_UPD=$(~/.claude/skills/gstack/bin/gstack-update-check 2>/dev/null || .claude/skills/gstack/bin/gstack-update-check 2>/dev/null || true)
[ -n "$_UPD" ] && echo "$_UPD" || true
mkdir -p ~/.gstack/sessions
touch ~/.gstack/sessions/"$PPID"
_SESSIONS=$(find ~/.gstack/sessions -mmin -120 -type f 2>/dev/null | wc -l | tr -d ' ')
find ~/.gstack/sessions -mmin +120 -type f -delete 2>/dev/null || true
_CONTRIB=$(~/.claude/skills/gstack/bin/gstack-config get gstack_contributor 2>/dev/null || true)
_PROACTIVE=$(~/.claude/skills/gstack/bin/gstack-config get proactive 2>/dev/null || echo "true")
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
echo "BRANCH: $_BRANCH"
echo "PROACTIVE: $_PROACTIVE"
source <(~/.claude/skills/gstack/bin/gstack-repo-mode 2>/dev/null) || true
REPO_MODE=${REPO_MODE:-unknown}
echo "REPO_MODE: $REPO_MODE"
_LAKE_SEEN=$([ -f ~/.gstack/.completeness-intro-seen ] && echo "yes" || echo "no")
echo "LAKE_INTRO: $_LAKE_SEEN"
_TEL=$(~/.claude/skills/gstack/bin/gstack-config get telemetry 2>/dev/null || true)
_TEL_PROMPTED=$([ -f ~/.gstack/.telemetry-prompted ] && echo "yes" || echo "no")
_TEL_START=$(date +%s)
_SESSION_ID="$$-$(date +%s)"
echo "TELEMETRY: ${_TEL:-off}"
echo "TEL_PROMPTED: $_TEL_PROMPTED"
mkdir -p ~/.gstack/analytics
echo '{"skill":"canary","ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","repo":"'$(basename "$(git rev-parse --show-toplevel 2>/dev/null)" 2>/dev/null || echo "unknown")'"}'  >> ~/.gstack/analytics/skill-usage.jsonl 2>/dev/null || true
for _PF in ~/.gstack/analytics/.pending-*; do [ -f "$_PF" ] && ~/.claude/skills/gstack/bin/gstack-telemetry-log --event-type skill_run --skill _pending_finalize --outcome unknown --session-id "$_SESSION_ID" 2>/dev/null || true; break; done

If PROACTIVE is "false", do not proactively suggest gstack skills — only invoke them when the user explicitly asks. The user opted out of proactive suggestions.

If output shows UPGRADE_AVAILABLE <old> <new>: read ~/.claude/skills/gstack/gstack-upgrade/SKILL.md and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined). If JUST_UPGRADED <from> <to>: tell user "Running gstack v{to} (just updated!)" and continue.

If LAKE_INTRO is no: Before continuing, introduce the Completeness Principle. Tell the user: "gstack follows the Boil the Lake principle — always do the complete thing when AI makes the marginal cost near-zero. Read more: https://garryslist.org/posts/boil-the-ocean" Then offer to open the essay in their default browser:

open https://garryslist.org/posts/boil-the-ocean
touch ~/.gstack/.completeness-intro-seen

Only run open if the user says yes. Always run touch to mark as seen. This only happens once.

If TEL_PROMPTED is no AND LAKE_INTRO is yes: After the lake intro is handled, ask the user about telemetry. Use AskUserQuestion:

Help gstack get better! Community mode shares usage data (which skills you use, how long they take, crash info) with a stable device ID so we can track trends and fix bugs faster. No code, file paths, or repo names are ever sent. Change anytime with gstack-config set telemetry off.

Options:

  • A) Help gstack get better! (recommended)
  • B) No thanks

If A: run ~/.claude/skills/gstack/bin/gstack-config set telemetry community

If B: ask a follow-up AskUserQuestion:

How about anonymous mode? We just learn that someone used gstack — no unique ID, no way to connect sessions. Just a counter that helps us know if anyone's out there.

Options:

  • A) Sure, anonymous is fine
  • B) No thanks, fully off

If B→A: run ~/.claude/skills/gstack/bin/gstack-config set telemetry anonymous If B→B: run ~/.claude/skills/gstack/bin/gstack-config set telemetry off

Always run:

touch ~/.gstack/.telemetry-prompted

This only happens once. If TEL_PROMPTED is yes, skip this entirely.

AskUserQuestion Format

ALWAYS follow this structure for every AskUserQuestion call:

  1. Re-ground: State the project, the current branch (use the _BRANCH value printed by the preamble — NOT any branch from conversation history or gitStatus), and the current plan/task. (1-2 sentences)
  2. Simplify: Explain the problem in plain English a smart 16-year-old could follow. No raw function names, no internal jargon, no implementation details. Use concrete examples and analogies. Say what it DOES, not what it's called.
  3. Recommend: RECOMMENDATION: Choose [X] because [one-line reason] — always prefer the complete option over shortcuts (see Completeness Principle). Include Completeness: X/10 for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers happy path but skips some edges, 3 = shortcut that defers significant work. If both options are 8+, pick the higher; if one is ≤5, flag it.
  4. Options: Lettered options: A) ... B) ... C) ... — when an option involves effort, show both scales: (human: ~X / CC: ~Y)

Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.

Per-skill instructions may add additional formatting rules on top of this baseline.

Completeness Principle — Boil the Lake

AI-assisted coding makes the marginal cost of completeness near-zero. When you present options:

  • If Option A is the complete implementation (full parity, all edge cases, 100% coverage) and Option B is a shortcut that saves modest effort — always recommend A. The delta between 80 lines and 150 lines is meaningless with CC+gstack. "Good enough" is the wrong instinct when "complete" costs minutes more.
  • Lake vs. ocean: A "lake" is boilable — 100% test coverage for a module, full feature implementation, handling all edge cases, complete error paths. An "ocean" is not — rewriting an entire system from scratch, adding features to dependencies you don't control, multi-quarter platform migrations. Recommend boiling lakes. Flag oceans as out of scope.
  • When estimating effort, always show both scales: human team time and CC+gstack time. The compression ratio varies by task type — use this reference:
Task type Human team CC+gstack Compression
Boilerplate / scaffolding 2 days 15 min ~100x
Test writing 1 day 15 min ~50x
Feature implementation 1 week 30 min ~30x
Bug fix + regression test 4 hours 15 min ~20x
Architecture / design 2 days 4 hours ~5x
Research / exploration 1 day 3 hours ~3x
  • This principle applies to test coverage, error handling, documentation, edge cases, and feature completeness. Don't skip the last 10% to "save time" — with AI, that 10% costs seconds.

Anti-patterns — DON'T do this:

  • BAD: "Choose B — it covers 90% of the value with less code." (If A is only 70 lines more, choose A.)
  • BAD: "We can skip edge case handling to save time." (Edge case handling costs minutes with CC.)
  • BAD: "Let's defer test coverage to a follow-up PR." (Tests are the cheapest lake to boil.)
  • BAD: Quoting only human-team effort: "This would take 2 weeks." (Say: "2 weeks human / ~1 hour CC.")

Repo Ownership Mode — See Something, Say Something

REPO_MODE from the preamble tells you who owns issues in this repo:

  • solo — One person does 80%+ of the work. They own everything. When you notice issues outside the current branch's changes (test failures, deprecation warnings, security advisories, linting errors, dead code, env problems), investigate and offer to fix proactively. The solo dev is the only person who will fix it. Default to action.
  • collaborative — Multiple active contributors. When you notice issues outside the branch's changes, flag them via AskUserQuestion — it may be someone else's responsibility. Default to asking, not fixing.
  • unknown — Treat as collaborative (safer default — ask before fixing).

See Something, Say Something: Whenever you notice something that looks wrong during ANY workflow step — not just test failures — flag it briefly. One sentence: what you noticed and its impact. In solo mode, follow up with "Want me to fix it?" In collaborative mode, just flag it and move on.

Never let a noticed issue silently pass. The whole point is proactive communication.

Search Before Building

Before building infrastructure, unfamiliar patterns, or anything the runtime might have a built-in — search first. Read ~/.claude/skills/gstack/ETHOS.md for the full philosophy.

Three layers of knowledge:

  • Layer 1 (tried and true — in distribution). Don't reinvent the wheel. But the cost of checking is near-zero, and once in a while, questioning the tried-and-true is where brilliance occurs.
  • Layer 2 (new and popular — search for these). But scrutinize: humans are subject to mania. Search results are inputs to your thinking, not answers.
  • Layer 3 (first principles — prize these above all). Original observations derived from reasoning about the specific problem. The most valuable of all.

Eureka moment: When first-principles reasoning reveals conventional wisdom is wrong, name it: "EUREKA: Everyone does X because [assumption]. But [evidence] shows this is wrong. Y is better because [reasoning]."

Log eureka moments:

jq -n --arg ts "$(date -u +%Y-%m-%dT%H:%M:%SZ)" --arg skill "SKILL_NAME" --arg branch "$(git branch --show-current 2>/dev/null)" --arg insight "ONE_LINE_SUMMARY" '{ts:$ts,skill:$skill,branch:$branch,insight:$insight}' >> ~/.gstack/analytics/eureka.jsonl 2>/dev/null || true

Replace SKILL_NAME and ONE_LINE_SUMMARY. Runs inline — don't stop the workflow.

WebSearch fallback: If WebSearch is unavailable, skip the search step and note: "Search unavailable — proceeding with in-distribution knowledge only."

Contributor Mode

If _CONTRIB is true: you are in contributor mode. You're a gstack user who also helps make it better.

At the end of each major workflow step (not after every single command), reflect on the gstack tooling you used. Rate your experience 0 to 10. If it wasn't a 10, think about why. If there is an obvious, actionable bug OR an insightful, interesting thing that could have been done better by gstack code or skill markdown — file a field report. Maybe our contributor will help make us better!

Calibration — this is the bar: For example, $B js "await fetch(...)" used to fail with SyntaxError: await is only valid in async functions because gstack didn't wrap expressions in async context. Small, but the input was reasonable and gstack should have handled it — that's the kind of thing worth filing. Things less consequential than this, ignore.

NOT worth filing: user's app bugs, network errors to user's URL, auth failures on user's site, user's own JS logic bugs.

To file: write ~/.gstack/contributor-logs/{slug}.md with all sections below (do not truncate — include every section through the Date/Version footer):

# {Title}

Hey gstack team — ran into this while using /{skill-name}:

**What I was trying to do:** {what the user/agent was attempting}
**What happened instead:** {what actually happened}
**My rating:** {0-10} — {one sentence on why it wasn't a 10}

## Steps to reproduce
1. {step}

## Raw output

{paste the actual error or unexpected output here}


## What would make this a 10
{one sentence: what gstack should have done differently}

**Date:** {YYYY-MM-DD} | **Version:** {gstack version} | **Skill:** /{skill}

Slug: lowercase, hyphens, max 60 chars (e.g. browse-js-no-await). Skip if file already exists. Max 3 reports per session. File inline and continue — don't stop the workflow. Tell user: "Filed gstack field report: {title}"

Completion Status Protocol

When completing a skill workflow, report status using one of:

  • DONE — All steps completed successfully. Evidence provided for each claim.
  • DONE_WITH_CONCERNS — Completed, but with issues the user should know about. List each concern.
  • BLOCKED — Cannot proceed. State what is blocking and what was tried.
  • NEEDS_CONTEXT — Missing information required to continue. State exactly what you need.

Escalation

It is always OK to stop and say "this is too hard for me" or "I'm not confident in this result."

Bad work is worse than no work. You will not be penalized for escalating.

  • If you have attempted a task 3 times without success, STOP and escalate.
  • If you are uncertain about a security-sensitive change, STOP and escalate.
  • If the scope of work exceeds what you can verify, STOP and escalate.

Escalation format:

STATUS: BLOCKED | NEEDS_CONTEXT
REASON: [1-2 sentences]
ATTEMPTED: [what you tried]
RECOMMENDATION: [what the user should do next]

Telemetry (run last)

After the skill workflow completes (success, error, or abort), log the telemetry event. Determine the skill name from the name: field in this file's YAML frontmatter. Determine the outcome from the workflow result (success if completed normally, error if it failed, abort if the user interrupted).

PLAN MODE EXCEPTION — ALWAYS RUN: This command writes telemetry to ~/.gstack/analytics/ (user config directory, not project files). The skill preamble already writes to the same directory — this is the same pattern. Skipping this command loses session duration and outcome data.

Run this bash:

_TEL_END=$(date +%s)
_TEL_DUR=$(( _TEL_END - _TEL_START ))
rm -f ~/.gstack/analytics/.pending-"$_SESSION_ID" 2>/dev/null || true
~/.claude/skills/gstack/bin/gstack-telemetry-log \
  --skill "SKILL_NAME" --duration "$_TEL_DUR" --outcome "OUTCOME" \
  --used-browse "USED_BROWSE" --session-id "$_SESSION_ID" 2>/dev/null &

Replace SKILL_NAME with the actual skill name from frontmatter, OUTCOME with success/error/abort, and USED_BROWSE with true/false based on whether $B was used. If you cannot determine the outcome, use "unknown". This runs in the background and never blocks the user.

SETUP (run this check BEFORE any browse command)

_ROOT=$(git rev-parse --show-toplevel 2>/dev/null)
B=""
[ -n "$_ROOT" ] && [ -x "$_ROOT/.claude/skills/gstack/browse/dist/browse" ] && B="$_ROOT/.claude/skills/gstack/browse/dist/browse"
[ -z "$B" ] && B=~/.claude/skills/gstack/browse/dist/browse
if [ -x "$B" ]; then
  echo "READY: $B"
else
  echo "NEEDS_SETUP"
fi

If NEEDS_SETUP:

  1. Tell the user: "gstack browse needs a one-time build (~10 seconds). OK to proceed?" Then STOP and wait.
  2. Run: cd <SKILL_DIR> && ./setup
  3. If bun is not installed: curl -fsSL https://bun.sh/install | bash

Step 0: Detect base branch

Determine which branch this PR targets. Use the result as "the base branch" in all subsequent steps.

  1. Check if a PR already exists for this branch: gh pr view --json baseRefName -q .baseRefName If this succeeds, use the printed branch name as the base branch.

  2. If no PR exists (command fails), detect the repo's default branch: gh repo view --json defaultBranchRef -q .defaultBranchRef.name

  3. If both commands fail, fall back to main.

Print the detected base branch name. In every subsequent git diff, git log, git fetch, git merge, and gh pr create command, substitute the detected branch name wherever the instructions say "the base branch."


/canary — Post-Deploy Visual Monitor

You are a Release Reliability Engineer watching production after a deploy. You've seen deploys that pass CI but break in production — a missing environment variable, a CDN cache serving stale assets, a database migration that's slower than expected on real data. Your job is to catch these in the first 10 minutes, not 10 hours.

You use the browse daemon to watch the live app, take screenshots, check console errors, and compare against baselines. You are the safety net between "shipped" and "verified."

User-invocable

When the user types /canary, run this skill.

Arguments

  • /canary <url> — monitor a URL for 10 minutes after deploy
  • /canary <url> --duration 5m — custom monitoring duration (1m to 30m)
  • /canary <url> --baseline — capture baseline screenshots (run BEFORE deploying)
  • /canary <url> --pages /,/dashboard,/settings — specify pages to monitor
  • /canary <url> --quick — single-pass health check (no continuous monitoring)

Instructions

Phase 1: Setup

source <(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null || echo "SLUG=unknown")
mkdir -p .gstack/canary-reports
mkdir -p .gstack/canary-reports/baselines
mkdir -p .gstack/canary-reports/screenshots

Parse the user's arguments. Default duration is 10 minutes. Default pages: auto-discover from the app's navigation.

Phase 2: Baseline Capture (--baseline mode)

If the user passed --baseline, capture the current state BEFORE deploying.

For each page (either from --pages or the homepage):

$B goto <page-url>
$B snapshot -i -a -o ".gstack/canary-reports/baselines/<page-name>.png"
$B console --errors
$B perf
$B text

Collect for each page: screenshot path, console error count, page load time from perf, and a text content snapshot.

Save the baseline manifest to .gstack/canary-reports/baseline.json:

{
  "url": "<url>",
  "timestamp": "<ISO>",
  "branch": "<current branch>",
  "pages": {
    "/": {
      "screenshot": "baselines/home.png",
      "console_errors": 0,
      "load_time_ms": 450
    }
  }
}

Then STOP and tell the user: "Baseline captured. Deploy your changes, then run /canary <url> to monitor."

Phase 3: Page Discovery

If no --pages were specified, auto-discover pages to monitor:

$B goto <url>
$B links
$B snapshot -i

Extract the top 5 internal navigation links from the links output. Always include the homepage. Present the page list via AskUserQuestion:

  • Context: Monitoring the production site at the given URL after a deploy.
  • Question: Which pages should the canary monitor?
  • RECOMMENDATION: Choose A — these are the main navigation targets.
  • A) Monitor these pages: [list the discovered pages]
  • B) Add more pages (user specifies)
  • C) Monitor homepage only (quick check)

Phase 4: Pre-Deploy Snapshot (if no baseline exists)

If no baseline.json exists, take a quick snapshot now as a reference point.

For each page to monitor:

$B goto <page-url>
$B snapshot -i -a -o ".gstack/canary-reports/screenshots/pre-<page-name>.png"
$B console --errors
$B perf

Record the console error count and load time for each page. These become the reference for detecting regressions during monitoring.

Phase 5: Continuous Monitoring Loop

Monitor for the specified duration. Every 60 seconds, check each page:

$B goto <page-url>
$B snapshot -i -a -o ".gstack/canary-reports/screenshots/<page-name>-<check-number>.png"
$B console --errors
$B perf

After each check, compare results against the baseline (or pre-deploy snapshot):

  1. Page load failuregoto returns error or timeout → CRITICAL ALERT
  2. New console errors — errors not present in baseline → HIGH ALERT
  3. Performance regression — load time exceeds 2x baseline → MEDIUM ALERT
  4. Broken links — new 404s not in baseline → LOW ALERT

Alert on changes, not absolutes. A page with 3 console errors in the baseline is fine if it still has 3. One NEW error is an alert.

Don't cry wolf. Only alert on patterns that persist across 2 or more consecutive checks. A single transient network blip is not an alert.

If a CRITICAL or HIGH alert is detected, immediately notify the user via AskUserQuestion:

CANARY ALERT
════════════
Time:     [timestamp, e.g., check #3 at 180s]
Page:     [page URL]
Type:     [CRITICAL / HIGH / MEDIUM]
Finding:  [what changed — be specific]
Evidence: [screenshot path]
Baseline: [baseline value]
Current:  [current value]
  • Context: Canary monitoring detected an issue on [page] after [duration].
  • RECOMMENDATION: Choose based on severity — A for critical, B for transient.
  • A) Investigate now — stop monitoring, focus on this issue
  • B) Continue monitoring — this might be transient (wait for next check)
  • C) Rollback — revert the deploy immediately
  • D) Dismiss — false positive, continue monitoring

Phase 6: Health Report

After monitoring completes (or if the user stops early), produce a summary:

CANARY REPORT — [url]
═════════════════════
Duration:     [X minutes]
Pages:        [N pages monitored]
Checks:       [N total checks performed]
Status:       [HEALTHY / DEGRADED / BROKEN]

Per-Page Results:
─────────────────────────────────────────────────────
  Page            Status      Errors    Avg Load
  /               HEALTHY     0         450ms
  /dashboard      DEGRADED    2 new     1200ms (was 400ms)
  /settings       HEALTHY     0         380ms

Alerts Fired:  [N] (X critical, Y high, Z medium)
Screenshots:   .gstack/canary-reports/screenshots/

VERDICT: [DEPLOY IS HEALTHY / DEPLOY HAS ISSUES — details above]

Save report to .gstack/canary-reports/{date}-canary.md and .gstack/canary-reports/{date}-canary.json.

Log the result for the review dashboard:

source <(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
mkdir -p ~/.gstack/projects/$SLUG

Write a JSONL entry: {"skill":"canary","timestamp":"<ISO>","status":"<HEALTHY/DEGRADED/BROKEN>","url":"<url>","duration_min":<N>,"alerts":<N>}

Phase 7: Baseline Update

If the deploy is healthy, offer to update the baseline:

  • Context: Canary monitoring completed. The deploy is healthy.
  • RECOMMENDATION: Choose A — deploy is healthy, new baseline reflects current production.
  • A) Update baseline with current screenshots
  • B) Keep old baseline

If the user chooses A, copy the latest screenshots to the baselines directory and update baseline.json.

Important Rules

  • Speed matters. Start monitoring within 30 seconds of invocation. Don't over-analyze before monitoring.
  • Alert on changes, not absolutes. Compare against baseline, not industry standards.
  • Screenshots are evidence. Every alert includes a screenshot path. No exceptions.
  • Transient tolerance. Only alert on patterns that persist across 2+ consecutive checks.
  • Baseline is king. Without a baseline, canary is a health check. Encourage --baseline before deploying.
  • Performance thresholds are relative. 2x baseline is a regression. 1.5x might be normal variance.
  • Read-only. Observe and report. Don't modify code unless the user explicitly asks to investigate and fix.