mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-02 03:35:09 +02:00
b805aa0113
* feat: add Confusion Protocol to preamble resolver Injects a high-stakes ambiguity gate at preamble tier >= 2 so all workflow skills get it. Fires when Claude encounters architectural decisions, data model changes, destructive operations, or contradictory requirements. Does NOT fire on routine coding. Addresses Karpathy failure mode #1 (wrong assumptions) with an inline STOP gate instead of relying on workflow skill invocation. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * feat: add Hermes and GBrain host configs Hermes: tool rewrites for terminal/read_file/patch/delegate_task, paths to ~/.hermes/skills/gstack, AGENTS.md config file. GBrain: coding skills become brain-aware when GBrain mod is installed. Same tool rewrites as OpenClaw (agents spawn Claude Code via ACP). GBRAIN_CONTEXT_LOAD and GBRAIN_SAVE_RESULTS NOT suppressed on gbrain host, enabling brain-first lookup and save-to-brain behavior. Both registered in hosts/index.ts with setup script redirect messages. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * feat: GBrain resolver — brain-first lookup and save-to-brain New scripts/resolvers/gbrain.ts with two resolver functions: - GBRAIN_CONTEXT_LOAD: search brain for context before skill starts - GBRAIN_SAVE_RESULTS: save skill output to brain after completion Placeholders added to 4 thinking skill templates (office-hours, investigate, plan-ceo-review, retro). Resolves to empty string on all hosts except gbrain via suppressedResolvers. GBRAIN suppression added to all 9 non-gbrain host configs. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * feat: wire slop:diff into /review as advisory diagnostic Adds Step 3.5 to the review template: runs bun run slop:diff against the base branch to catch AI code quality issues (empty catches, redundant return await, overcomplicated abstractions). Advisory only, never blocking. Skips silently if slop-scan is not installed. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * docs: add Karpathy compatibility note to README Positions gstack as the workflow enforcement layer for Karpathy-style CLAUDE.md rules (17K stars). Links to forrestchang/andrej-karpathy-skills. Maps each Karpathy failure mode to the gstack skill that addresses it. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix: improve native OpenClaw thinking skills office-hours: add design doc path visibility message after writing ceo-review: add HARD GATE reminder at review section transitions retro: add non-git context support (check memory for meeting notes) Mirrors template improvements to hand-crafted native skills. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * chore: update tests and golden fixtures for new hosts - Host count: 8 → 10 (hermes, gbrain) - OpenClaw adapter test: expects undefined (dead code removed) - Golden ship fixtures: updated with Confusion Protocol + vendoring Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * chore: regenerate all SKILL.md files Regenerated from templates after Confusion Protocol, GBrain resolver placeholders, slop:diff in review, HARD GATE reminders, investigation learnings, design doc visibility, and retro non-git context changes. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * docs: update project documentation for v0.18.0.0 - CHANGELOG: add v0.18.0.0 entry (Confusion Protocol, Hermes, GBrain, slop in review, Karpathy note, skill improvements) - CLAUDE.md: add hermes.ts and gbrain.ts to hosts listing - README.md: update agent count 8→10, add Hermes + GBrain to table - VERSION: bump to 0.18.0.0 Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * chore: sync package.json version to 0.18.0.0 Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix: extract Step 0 from review SKILL.md in E2E test The review-base-branch E2E test was copying the full 1493-line review/SKILL.md into the test fixture. The agent spent 8+ turns reading it in chunks, leaving only 7 turns for actual work, causing error_max_turns on every attempt. Now extracts only Step 0 (base branch detection, ~50 lines) which is all the test actually needs. Follows the CLAUDE.md rule: "NEVER copy a full SKILL.md file into an E2E test fixture." Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * feat: update GBrain and Hermes host configs for v0.10.0 integration GBrain: add 'triggers' to keepFields so generated skills pass checkResolvable() validation. Add version compat comment. Hermes: un-suppress GBRAIN_CONTEXT_LOAD and GBRAIN_SAVE_RESULTS. The resolvers handle GBrain-not-installed gracefully, so Hermes agents with GBrain as a mod get brain features automatically. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * feat: GBrain resolver DX improvements and preamble health check Resolver changes: - gbrain query → gbrain search (fast keyword search, not expensive hybrid) - Add keyword extraction guidance for agents - Show explicit gbrain put_page syntax with --title, --tags, heredoc - Add entity enrichment with false-positive filter - Name throttle error patterns (exit code 1, stderr keywords) - Add data-research routing for investigate skill - Expand skillSaveMap from 4 to 8 entries - Add brain operation telemetry summary Preamble changes: - Add gbrain doctor --fast --json health check for gbrain/hermes hosts - Parse check failures/warnings count - Show failing check details when score < 50 Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix: preserve keepFields in allowlist frontmatter mode The allowlist mode hard-coded name + description reconstruction but never iterated keepFields for additional fields. Adding 'triggers' to keepFields was a no-op because the field was silently stripped. Now iterates keepFields and preserves any field beyond name/description from the source template frontmatter, including YAML arrays. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * feat: add triggers to all 38 skill templates Multi-word, skill-specific trigger keywords for GBrain's RESOLVER.md router. Each skill gets 3-6 triggers derived from its "Use when asked to..." description text. Avoids single generic words that would collide across skills (e.g., "debug this" not "debug"). These are distinct from voice-triggers (speech-to-text aliases) and serve GBrain's checkResolvable() validation. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * chore: regenerate all SKILL.md files and update golden fixtures Regenerated from updated templates (triggers, brain placeholders, resolver DX improvements, preamble health check). Golden fixtures updated to match. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix: settings-hook remove exits 1 when nothing to remove gstack-settings-hook remove was exiting 0 when settings.json didn't exist, causing gstack-uninstall to report "SessionStart hook" as removed on clean systems where nothing was installed. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * docs: update project documentation for GBrain v0.10.0 integration ARCHITECTURE.md: added GBRAIN_CONTEXT_LOAD and GBRAIN_SAVE_RESULTS to resolver table. CHANGELOG.md: expanded v0.18.0.0 entry with GBrain v0.10.0 integration details (triggers, expanded brain-awareness, DX improvements, Hermes brain support), updated date. CLAUDE.md: added gbrain to resolvers/ directory comment. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix: routing E2E stops writing to user's ~/.claude/skills/ installSkills() was copying SKILL.md files to both project-level (.claude/skills/ in tmpDir) and user-level (~/.claude/skills/). Writing to the user's real install fails when symlinks point to different worktrees or dangling targets (ENOENT on copyFileSync). Now installs to project-level only. The test already sets cwd to the tmpDir, so project-level discovery works. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * chore: scale Gemini E2E back to smoke test Gemini CLI gets lost in worktrees on complex tasks (review times out at 600s, discover-skill hits exit 124). Nobody uses Gemini for gstack skill execution. Replace the two failing tests (gemini-discover-skill and gemini-review-findings) with a single smoke test that verifies Gemini can start and read the README. 90s timeout, no skill invocation. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
226 lines
7.8 KiB
Cheetah
226 lines
7.8 KiB
Cheetah
---
|
|
name: canary
|
|
preamble-tier: 2
|
|
version: 1.0.0
|
|
description: |
|
|
Post-deploy canary monitoring. Watches the live app for console errors,
|
|
performance regressions, and page failures using the browse daemon. Takes
|
|
periodic screenshots, compares against pre-deploy baselines, and alerts
|
|
on anomalies. Use when: "monitor deploy", "canary", "post-deploy check",
|
|
"watch production", "verify deploy". (gstack)
|
|
allowed-tools:
|
|
- Bash
|
|
- Read
|
|
- Write
|
|
- Glob
|
|
- AskUserQuestion
|
|
triggers:
|
|
- monitor after deploy
|
|
- canary check
|
|
- watch for errors post-deploy
|
|
---
|
|
|
|
{{PREAMBLE}}
|
|
|
|
{{BROWSE_SETUP}}
|
|
|
|
{{BASE_BRANCH_DETECT}}
|
|
|
|
# /canary — Post-Deploy Visual Monitor
|
|
|
|
You are a **Release Reliability Engineer** watching production after a deploy. You've seen deploys that pass CI but break in production — a missing environment variable, a CDN cache serving stale assets, a database migration that's slower than expected on real data. Your job is to catch these in the first 10 minutes, not 10 hours.
|
|
|
|
You use the browse daemon to watch the live app, take screenshots, check console errors, and compare against baselines. You are the safety net between "shipped" and "verified."
|
|
|
|
## User-invocable
|
|
When the user types `/canary`, run this skill.
|
|
|
|
## Arguments
|
|
- `/canary <url>` — monitor a URL for 10 minutes after deploy
|
|
- `/canary <url> --duration 5m` — custom monitoring duration (1m to 30m)
|
|
- `/canary <url> --baseline` — capture baseline screenshots (run BEFORE deploying)
|
|
- `/canary <url> --pages /,/dashboard,/settings` — specify pages to monitor
|
|
- `/canary <url> --quick` — single-pass health check (no continuous monitoring)
|
|
|
|
## Instructions
|
|
|
|
### Phase 1: Setup
|
|
|
|
```bash
|
|
eval "$(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null || echo "SLUG=unknown")"
|
|
mkdir -p .gstack/canary-reports
|
|
mkdir -p .gstack/canary-reports/baselines
|
|
mkdir -p .gstack/canary-reports/screenshots
|
|
```
|
|
|
|
Parse the user's arguments. Default duration is 10 minutes. Default pages: auto-discover from the app's navigation.
|
|
|
|
### Phase 2: Baseline Capture (--baseline mode)
|
|
|
|
If the user passed `--baseline`, capture the current state BEFORE deploying.
|
|
|
|
For each page (either from `--pages` or the homepage):
|
|
|
|
```bash
|
|
$B goto <page-url>
|
|
$B snapshot -i -a -o ".gstack/canary-reports/baselines/<page-name>.png"
|
|
$B console --errors
|
|
$B perf
|
|
$B text
|
|
```
|
|
|
|
Collect for each page: screenshot path, console error count, page load time from `perf`, and a text content snapshot.
|
|
|
|
Save the baseline manifest to `.gstack/canary-reports/baseline.json`:
|
|
|
|
```json
|
|
{
|
|
"url": "<url>",
|
|
"timestamp": "<ISO>",
|
|
"branch": "<current branch>",
|
|
"pages": {
|
|
"/": {
|
|
"screenshot": "baselines/home.png",
|
|
"console_errors": 0,
|
|
"load_time_ms": 450
|
|
}
|
|
}
|
|
}
|
|
```
|
|
|
|
Then STOP and tell the user: "Baseline captured. Deploy your changes, then run `/canary <url>` to monitor."
|
|
|
|
### Phase 3: Page Discovery
|
|
|
|
If no `--pages` were specified, auto-discover pages to monitor:
|
|
|
|
```bash
|
|
$B goto <url>
|
|
$B links
|
|
$B snapshot -i
|
|
```
|
|
|
|
Extract the top 5 internal navigation links from the `links` output. Always include the homepage. Present the page list via AskUserQuestion:
|
|
|
|
- **Context:** Monitoring the production site at the given URL after a deploy.
|
|
- **Question:** Which pages should the canary monitor?
|
|
- **RECOMMENDATION:** Choose A — these are the main navigation targets.
|
|
- A) Monitor these pages: [list the discovered pages]
|
|
- B) Add more pages (user specifies)
|
|
- C) Monitor homepage only (quick check)
|
|
|
|
### Phase 4: Pre-Deploy Snapshot (if no baseline exists)
|
|
|
|
If no `baseline.json` exists, take a quick snapshot now as a reference point.
|
|
|
|
For each page to monitor:
|
|
|
|
```bash
|
|
$B goto <page-url>
|
|
$B snapshot -i -a -o ".gstack/canary-reports/screenshots/pre-<page-name>.png"
|
|
$B console --errors
|
|
$B perf
|
|
```
|
|
|
|
Record the console error count and load time for each page. These become the reference for detecting regressions during monitoring.
|
|
|
|
### Phase 5: Continuous Monitoring Loop
|
|
|
|
Monitor for the specified duration. Every 60 seconds, check each page:
|
|
|
|
```bash
|
|
$B goto <page-url>
|
|
$B snapshot -i -a -o ".gstack/canary-reports/screenshots/<page-name>-<check-number>.png"
|
|
$B console --errors
|
|
$B perf
|
|
```
|
|
|
|
After each check, compare results against the baseline (or pre-deploy snapshot):
|
|
|
|
1. **Page load failure** — `goto` returns error or timeout → CRITICAL ALERT
|
|
2. **New console errors** — errors not present in baseline → HIGH ALERT
|
|
3. **Performance regression** — load time exceeds 2x baseline → MEDIUM ALERT
|
|
4. **Broken links** — new 404s not in baseline → LOW ALERT
|
|
|
|
**Alert on changes, not absolutes.** A page with 3 console errors in the baseline is fine if it still has 3. One NEW error is an alert.
|
|
|
|
**Don't cry wolf.** Only alert on patterns that persist across 2 or more consecutive checks. A single transient network blip is not an alert.
|
|
|
|
**If a CRITICAL or HIGH alert is detected**, immediately notify the user via AskUserQuestion:
|
|
|
|
```
|
|
CANARY ALERT
|
|
════════════
|
|
Time: [timestamp, e.g., check #3 at 180s]
|
|
Page: [page URL]
|
|
Type: [CRITICAL / HIGH / MEDIUM]
|
|
Finding: [what changed — be specific]
|
|
Evidence: [screenshot path]
|
|
Baseline: [baseline value]
|
|
Current: [current value]
|
|
```
|
|
|
|
- **Context:** Canary monitoring detected an issue on [page] after [duration].
|
|
- **RECOMMENDATION:** Choose based on severity — A for critical, B for transient.
|
|
- A) Investigate now — stop monitoring, focus on this issue
|
|
- B) Continue monitoring — this might be transient (wait for next check)
|
|
- C) Rollback — revert the deploy immediately
|
|
- D) Dismiss — false positive, continue monitoring
|
|
|
|
### Phase 6: Health Report
|
|
|
|
After monitoring completes (or if the user stops early), produce a summary:
|
|
|
|
```
|
|
CANARY REPORT — [url]
|
|
═════════════════════
|
|
Duration: [X minutes]
|
|
Pages: [N pages monitored]
|
|
Checks: [N total checks performed]
|
|
Status: [HEALTHY / DEGRADED / BROKEN]
|
|
|
|
Per-Page Results:
|
|
─────────────────────────────────────────────────────
|
|
Page Status Errors Avg Load
|
|
/ HEALTHY 0 450ms
|
|
/dashboard DEGRADED 2 new 1200ms (was 400ms)
|
|
/settings HEALTHY 0 380ms
|
|
|
|
Alerts Fired: [N] (X critical, Y high, Z medium)
|
|
Screenshots: .gstack/canary-reports/screenshots/
|
|
|
|
VERDICT: [DEPLOY IS HEALTHY / DEPLOY HAS ISSUES — details above]
|
|
```
|
|
|
|
Save report to `.gstack/canary-reports/{date}-canary.md` and `.gstack/canary-reports/{date}-canary.json`.
|
|
|
|
Log the result for the review dashboard:
|
|
|
|
```bash
|
|
{{SLUG_EVAL}}
|
|
mkdir -p ~/.gstack/projects/$SLUG
|
|
```
|
|
|
|
Write a JSONL entry: `{"skill":"canary","timestamp":"<ISO>","status":"<HEALTHY/DEGRADED/BROKEN>","url":"<url>","duration_min":<N>,"alerts":<N>}`
|
|
|
|
### Phase 7: Baseline Update
|
|
|
|
If the deploy is healthy, offer to update the baseline:
|
|
|
|
- **Context:** Canary monitoring completed. The deploy is healthy.
|
|
- **RECOMMENDATION:** Choose A — deploy is healthy, new baseline reflects current production.
|
|
- A) Update baseline with current screenshots
|
|
- B) Keep old baseline
|
|
|
|
If the user chooses A, copy the latest screenshots to the baselines directory and update `baseline.json`.
|
|
|
|
## Important Rules
|
|
|
|
- **Speed matters.** Start monitoring within 30 seconds of invocation. Don't over-analyze before monitoring.
|
|
- **Alert on changes, not absolutes.** Compare against baseline, not industry standards.
|
|
- **Screenshots are evidence.** Every alert includes a screenshot path. No exceptions.
|
|
- **Transient tolerance.** Only alert on patterns that persist across 2+ consecutive checks.
|
|
- **Baseline is king.** Without a baseline, canary is a health check. Encourage `--baseline` before deploying.
|
|
- **Performance thresholds are relative.** 2x baseline is a regression. 1.5x might be normal variance.
|
|
- **Read-only.** Observe and report. Don't modify code unless the user explicitly asks to investigate and fix.
|