diff --git a/ARCHITECTURE.md b/ARCHITECTURE.md index daa64a8c..07d03bad 100644 --- a/ARCHITECTURE.md +++ b/ARCHITECTURE.md @@ -189,15 +189,15 @@ Three reasons: 2. **CI can validate freshness.** `gen:skill-docs --dry-run` + `git diff --exit-code` catches stale docs before merge. 3. **Git blame works.** You can see when a command was added and in which commit. -### Test tiers +### Template test tiers | Tier | What | Cost | Speed | |------|------|------|-------| | 1 — Static validation | Parse every `$B` command in SKILL.md, validate against registry | Free | <2s | -| 2 — E2E via Agent SDK | Spawn real Claude session, run `/qa`, check for errors | ~$0.50 | ~60s | -| 3 — LLM-as-judge | Haiku scores docs on clarity/completeness/actionability | ~$0.03 | ~10s | +| 2 — E2E via `claude -p` | Spawn real Claude session, run each skill, check for errors | ~$3.85 | ~20min | +| 3 — LLM-as-judge | Sonnet scores docs on clarity/completeness/actionability | ~$0.15 | ~30s | -Tier 1 runs on every `bun test`. Tier 2 and 3 are gated behind env vars. The idea is: catch 95% of issues for free, use LLMs only for the judgment calls. +Tier 1 runs on every `bun test`. Tiers 2+3 are gated behind `EVALS=1`. The idea is: catch 95% of issues for free, use LLMs only for judgment calls. ## Command dispatch @@ -231,6 +231,88 @@ Playwright's native errors are rewritten through `wrapError()` to strip internal The server doesn't try to self-heal. If Chromium crashes (`browser.on('disconnected')`), the server exits immediately. The CLI detects the dead server on the next command and auto-restarts. This is simpler and more reliable than trying to reconnect to a half-dead browser process. +## E2E test infrastructure + +### Session runner (`test/helpers/session-runner.ts`) + +E2E tests spawn `claude -p` as a completely independent subprocess — not via the Agent SDK, which can't nest inside Claude Code sessions. The runner: + +1. Writes the prompt to a temp file (avoids shell escaping issues) +2. 
Spawns `sh -c 'cat prompt | claude -p --output-format stream-json --verbose'` +3. Streams NDJSON from stdout for real-time progress +4. Races against a configurable timeout +5. Parses the full NDJSON transcript into structured results + +The `parseNDJSON()` function is pure — no I/O, no side effects — making it independently testable. + +### Observability data flow + +``` + skill-e2e.test.ts + │ + │ generates runId, passes testName + runId to each call + │ + ┌─────┼──────────────────────────────┐ + │ │ │ + │ runSkillTest() evalCollector + │ (session-runner.ts) (eval-store.ts) + │ │ │ + │ per tool call: per addTest(): + │ ┌──┼──────────┐ savePartial() + │ │ │ │ │ + │ ▼ ▼ ▼ ▼ + │ [HB] [PL] [NJ] _partial-e2e.json + │ │ │ │ (atomic overwrite) + │ │ │ │ + │ ▼ ▼ ▼ + │ e2e- prog- {name} + │ live ress .ndjson + │ .json .log + │ + │ on failure: + │ {name}-failure.json + │ + │ ALL files in ~/.gstack-dev/ + │ Run dir: e2e-runs/{runId}/ + │ + │ eval-watch.ts + │ │ + │ ┌─────┴─────┐ + │ read HB read partial + │ └─────┬─────┘ + │ ▼ + │ render dashboard + │ (stale >10min? warn) +``` + +**Split ownership:** session-runner owns the heartbeat (current test state), eval-store owns partial results (completed test state). The watcher reads both. Neither component knows about the other — they share data only through the filesystem. + +**Non-fatal everything:** All observability I/O is wrapped in try/catch. A write failure never causes a test to fail. The tests themselves are the source of truth; observability is best-effort. + +**Machine-readable diagnostics:** Each test result includes `exit_reason` (success, timeout, error_max_turns, error_api, exit_code_N), `timeout_at_turn`, and `last_tool_call`. This enables `jq` queries like: +```bash +jq '.tests[] | select(.exit_reason == "timeout") | .last_tool_call' ~/.gstack-dev/evals/_partial-e2e.json +``` + +### Eval persistence (`test/helpers/eval-store.ts`) + +The `EvalCollector` accumulates test results and writes them in two ways: + +1. 
**Incremental:** `savePartial()` writes `_partial-e2e.json` after each test (atomic: write `.tmp`, `fs.renameSync`). Survives kills. +2. **Final:** `finalize()` writes a timestamped eval file (e.g. `e2e-20260314-143022.json`). The partial file is never cleaned up — it persists alongside the final file for observability. + +`eval:compare` diffs two eval runs. `eval:summary` aggregates stats across all runs in `~/.gstack-dev/evals/`. + +### Test tiers + +| Tier | What | Cost | Speed | +|------|------|------|-------| +| 1 — Static validation | Parse `$B` commands, validate against registry, observability unit tests | Free | <5s | +| 2 — E2E via `claude -p` | Spawn real Claude session, run each skill, scan for errors | ~$3.85 | ~20min | +| 3 — LLM-as-judge | Sonnet scores docs on clarity/completeness/actionability | ~$0.15 | ~30s | + +Tier 1 runs on every `bun test`. Tiers 2+3 are gated behind `EVALS=1`. The idea: catch 95% of issues for free, use LLMs only for judgment calls and integration testing. + ## What's intentionally not here - **No WebSocket streaming.** HTTP request/response is simpler, debuggable with curl, and fast enough. Streaming would add complexity for marginal benefit. diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index d98489ee..e85413ac 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -79,15 +79,14 @@ Bun auto-loads `.env` — no extra config. 
Conductor workspaces inherit `.env` as well.

| Tier | Command | Cost | What it tests |
|------|---------|------|---------------|
-| 1 — Static | `bun test` | Free | Command validation, snapshot flags, SKILL.md correctness |
-| 2 — E2E | `bun run test:e2e` | ~$0.50 | Full skill execution via Agent SDK |
-| 3 — LLM eval | `bun run test:eval` | ~$0.03 | Doc quality scoring via LLM-as-judge |
+| 1 — Static | `bun test` | Free | Command validation, snapshot flags, SKILL.md correctness, observability unit tests |
+| 2 — E2E | `bun run test:e2e` | ~$3.85 | Full skill execution via `claude -p` subprocess |
+| 3 — LLM eval | `bun run test:evals` | ~$4 | E2E + LLM-as-judge combined |

```bash
bun test              # Tier 1 only (runs on every commit, <5s)
-bun run test:eval     # Tier 3: LLM-as-judge (needs ANTHROPIC_API_KEY in .env)
-bun run test:e2e      # Tier 2: E2E (needs SKILL_E2E=1, can't run inside Claude Code)
-bun run test:all      # Tier 1 + Tier 2
+bun run test:e2e      # Tier 2: E2E (needs EVALS=1, can't run inside Claude Code)
+bun run test:evals    # Tier 2 + 3 combined (~$4/run)
```

### Tier 1: Static validation (free)

@@ -98,23 +97,49 @@ Runs automatically with `bun test`. No API keys needed.

- **Skill validation tests** (`test/skill-validation.test.ts`) — Validates that SKILL.md files reference only real commands and flags, and that command descriptions meet quality thresholds.
- **Generator tests** (`test/gen-skill-docs.test.ts`) — Tests the template system: verifies placeholders resolve correctly, output includes value hints for flags (e.g. `-d ` not just `-d`), enriched descriptions for key commands (e.g. `is` lists valid states, `press` lists key examples).

-### Tier 2: E2E via Agent SDK (~$0.50/run)
+### Tier 2: E2E via `claude -p` (~$3.85/run)

-Spawns a real Claude Code session, invokes `/qa` or `/browse`, and scans tool results for errors. This is the closest thing to "does this skill actually work end-to-end?"
+Spawns `claude -p` as a subprocess with `--output-format stream-json --verbose`, streams NDJSON for real-time progress, and scans for browse errors. This is the closest thing to "does this skill actually work end-to-end?" ```bash # Must run from a plain terminal — can't nest inside Claude Code or Conductor -SKILL_E2E=1 bun test test/skill-e2e.test.ts +EVALS=1 bun test test/skill-e2e.test.ts ``` -- Gated by `SKILL_E2E=1` env var (prevents accidental expensive runs) -- Auto-skips if it detects it's running inside Claude Code (Agent SDK can't nest) -- Saves full conversation transcripts on failure for debugging +- Gated by `EVALS=1` env var (prevents accidental expensive runs) +- Auto-skips if running inside Claude Code (`claude -p` can't nest) +- API connectivity pre-check — fails fast on ConnectionRefused before burning budget +- Real-time progress to stderr: `[Ns] turn T tool #C: Name(...)` +- Saves full NDJSON transcripts and failure JSON for debugging - Tests live in `test/skill-e2e.test.ts`, runner logic in `test/helpers/session-runner.ts` -### Tier 3: LLM-as-judge (~$0.03/run) +### E2E observability -Uses Claude Haiku to score generated SKILL.md docs on three dimensions: +When E2E tests run, they produce machine-readable artifacts in `~/.gstack-dev/`: + +| Artifact | Path | Purpose | +|----------|------|---------| +| Heartbeat | `e2e-live.json` | Current test status (updated per tool call) | +| Partial results | `evals/_partial-e2e.json` | Completed tests (survives kills) | +| Progress log | `e2e-runs/{runId}/progress.log` | Append-only text log | +| NDJSON transcripts | `e2e-runs/{runId}/{test}.ndjson` | Raw `claude -p` output per test | +| Failure JSON | `e2e-runs/{runId}/{test}-failure.json` | Diagnostic data on failure | + +**Live dashboard:** Run `bun run eval:watch` in a second terminal to see a live dashboard showing completed tests, the currently running test, and cost. Use `--tail` to also show the last 10 lines of progress.log. 
+ +**Eval history tools:** + +```bash +bun run eval:list # list all eval runs +bun run eval:compare # compare two runs (auto-picks most recent) +bun run eval:summary # aggregate stats across all runs +``` + +Artifacts are never cleaned up — they accumulate in `~/.gstack-dev/` for post-mortem debugging and trend analysis. + +### Tier 3: LLM-as-judge (~$0.15/run) + +Uses Claude Sonnet to score generated SKILL.md docs on three dimensions: - **Clarity** — Can an AI agent understand the instructions without ambiguity? - **Completeness** — Are all commands, flags, and usage patterns documented? @@ -123,13 +148,12 @@ Uses Claude Haiku to score generated SKILL.md docs on three dimensions: Each dimension is scored 1-5. Threshold: every dimension must score **≥ 4**. There's also a regression test that compares generated docs against the hand-maintained baseline from `origin/main` — generated must score equal or higher. ```bash -# Needs ANTHROPIC_API_KEY in .env -bun run test:eval +# Needs ANTHROPIC_API_KEY in .env — included in bun run test:evals ``` -- Uses `claude-haiku-4-5` for cost efficiency +- Uses `claude-sonnet-4-6` for scoring stability - Tests live in `test/skill-llm-eval.test.ts` -- Calls the Anthropic API directly (not Agent SDK), so it works from anywhere including inside Claude Code +- Calls the Anthropic API directly (not `claude -p`), so it works from anywhere including inside Claude Code ### CI diff --git a/README.md b/README.md index e0c94428..3e32fa89 100644 --- a/README.md +++ b/README.md @@ -619,7 +619,17 @@ Paste this into Claude Code: ## Development -See [BROWSER.md](BROWSER.md) for the full development guide, architecture, and command reference. +See [CONTRIBUTING.md](CONTRIBUTING.md) for setup, testing, and dev mode. See [ARCHITECTURE.md](ARCHITECTURE.md) for design decisions and system internals. See [BROWSER.md](BROWSER.md) for the browse command reference. 
+ +### Testing + +```bash +bun test # free static tests (<5s) +EVALS=1 bun run test:evals # full E2E + LLM evals (~$4, ~20min) +bun run eval:watch # live dashboard during E2E runs +``` + +E2E tests stream real-time progress, write machine-readable diagnostics, and persist partial results that survive kills. See CONTRIBUTING.md for the full eval infrastructure. ## License