Mirror of https://github.com/garrytan/gstack.git (synced 2026-05-02 11:45:20 +02:00)
00bc482fe1
* feat: add /canary, /benchmark, /land-and-deploy skills (v0.7.0)

  Three new skills that close the deploy loop:
  - /canary: standalone post-deploy monitoring with browse daemon
  - /benchmark: performance regression detection with Web Vitals
  - /land-and-deploy: merge PR, wait for deploy, canary verify production

  Incorporates patterns from community PR #151.

  Co-Authored-By: HMAKT99 <HMAKT99@users.noreply.github.com>
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add Performance & Bundle Impact category to review checklist

  New Pass 2 (INFORMATIONAL) category catching heavy dependencies (moment.js,
  lodash full), missing lazy loading, synchronous scripts, CSS @import
  blocking, fetch waterfalls, and tree-shaking breaks. Both /review and /ship
  automatically pick this up via checklist.md.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add {{DEPLOY_BOOTSTRAP}} resolver + deployed row in dashboard

  - New generateDeployBootstrap() resolver auto-detects deploy platform
    (Vercel, Netlify, Fly.io, GH Actions, etc.), production URL, and merge
    method. Persists to CLAUDE.md like test bootstrap.
  - Review Readiness Dashboard now shows a "Deployed" row from
    /land-and-deploy JSONL entries (informational, never gates shipping).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: mark 3 TODOs completed, bump v0.7.0, update CHANGELOG

  Superseded by /land-and-deploy:
  - /merge skill — review-gated PR merge
  - Deploy-verify skill
  - Post-deploy verification (ship + browse)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: /setup-deploy skill + platform-specific deploy verification

  - New /setup-deploy skill: interactive guided setup for deploy
    configuration. Detects Fly.io, Render, Vercel, Netlify, Heroku, Railway,
    GitHub Actions, and custom deploy scripts. Writes config to CLAUDE.md
    with custom hooks section for non-standard setups.
  - Enhanced deploy bootstrap: platform-specific URL resolution (fly.toml
    app → {app}.fly.dev, render.yaml → {service}.onrender.com, etc.), deploy
    status commands (fly status, heroku releases), and custom deploy hooks
    section in CLAUDE.md for manual/scripted deploys.
  - Platform-specific deploy verification in /land-and-deploy Step 6:
    Strategy A (GitHub Actions polling), Strategy B (platform CLI:
    fly/render/heroku), Strategy C (auto-deploy: vercel/netlify), Strategy D
    (custom hooks from CLAUDE.md).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: E2E + LLM-judge evals for deploy skills

  - 4 E2E tests: land-and-deploy (Fly.io detection + deploy report), canary
    (monitoring report structure), benchmark (perf report schema),
    setup-deploy (platform detection → CLAUDE.md config)
  - 4 LLM-judge evals: workflow quality for all 4 new skills
  - Touchfile entries for diff-based test selection (E2E + LLM-judge)
  - 460 free tests pass, 0 fail

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: harden E2E tests — server lifecycle, timeouts, preamble budget, skip flaky

  Cross-cutting fixes:
  - Pre-seed ~/.gstack/.completeness-intro-seen and
    ~/.gstack/.telemetry-prompted so preamble doesn't burn 3-7 turns on lake
    intro + telemetry in every test
  - Each describe block creates its own test server instance instead of
    sharing a global that dies between suites

  Test fixes (5 tests):
  - /qa quick: own server instance + preamble skip
  - /review SQL injection: timeout 90→180s, maxTurns 15→20, added assertion
    that review output actually mentions SQL injection
  - /review design-lite: maxTurns 25→35 + preamble skip (now detects 7/7)
  - ship-base-branch: both timeouts 90→150/180s + preamble skip
  - plan-eng artifact: clean stale state in beforeAll, maxTurns 20→25

  Skipped (4 flaky/redundant tests):
  - contributor-mode: tests prompt compliance, not skill functionality
  - design-consultation-research: WebSearch-dependent, redundant with core
  - design-consultation-preview: redundant with core test
  - /qa bootstrap: too ambitious (65 turns, installs vitest)

  Also: preamble skip added to qa-only, qa-fix-loop, design-consultation-core,
  and design-consultation-existing prompts. Updated touchfiles entries and
  touchfiles.test.ts. Added honest comment to codex-review-findings.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: redesign 6 skipped/todo E2E tests + add test.concurrent support

  Redesigned tests (previously skipped/todo):
  - contributor-mode: pre-fail approach, 5 turns/30s (was 10 turns/90s)
  - design-consultation-research: WebSearch-only, 8 turns/90s (was 45/480s)
  - design-consultation-preview: preview HTML only, 8 turns/90s (was 30/480s)
  - qa-bootstrap: bootstrap-only, 12 turns/90s (was 65/420s)
  - /ship workflow: local bare remote, 15 turns/120s (was test.todo)
  - /setup-browser-cookies: browser detection smoke, 5 turns/45s (was
    test.todo)

  Added testConcurrentIfSelected() helper for future parallelization.
  Updated touchfiles entries for all 6 re-enabled tests.
  Target: 0 skip, 0 todo, 0 fail across all E2E tests.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: relax contributor-mode assertions — test structure not exact phrasing

* perf: enable test.concurrent for 31 independent E2E tests

  Convert 18 skill-e2e, 11 routing, and 2 codex tests from sequential to
  test.concurrent. Only design-consultation tests (4) remain sequential due
  to shared designDir state. Expected ~6x speedup on Teams high-burst.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add --concurrent flag to bun test + convert remaining 4 sequential tests

  bun's test.concurrent only works within a describe block, not across
  describe blocks. Adding --concurrent to the CLI command makes ALL tests
  concurrent regardless of describe boundaries. Also converted the 4
  design-consultation tests to concurrent (each already independent).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* perf: split monolithic E2E test into 8 parallel files

  Split test/skill-e2e.test.ts (3442 lines) into 8 category files:
  - skill-e2e-browse.test.ts (7 tests)
  - skill-e2e-review.test.ts (7 tests)
  - skill-e2e-qa-bugs.test.ts (3 tests)
  - skill-e2e-qa-workflow.test.ts (4 tests)
  - skill-e2e-plan.test.ts (6 tests)
  - skill-e2e-design.test.ts (7 tests)
  - skill-e2e-workflow.test.ts (6 tests)
  - skill-e2e-deploy.test.ts (4 tests)

  Bun runs each file in its own worker = 10 parallel workers (8 split +
  routing + codex). Expected: 78 min → ~12 min. Extracted shared helpers to
  test/helpers/e2e-helpers.ts.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* perf: bump default E2E concurrency to 15

* perf: add model pinning infrastructure + rate-limit telemetry to E2E runner

  Default E2E model changed from Opus to Sonnet (5x faster, 5x cheaper).
  Session runner now accepts `model` option with EVALS_MODEL env var
  override. Added timing telemetry (first_response_ms, max_inter_turn_ms)
  and wall_clock_ms to eval-store for diagnosing rate-limit impact. Added
  EVALS_FAST test filtering.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve 3 E2E test failures — tmpdir race, wasted turns, brittle assertions

  plan-design-review-plan-mode: give each test its own tmpdir to eliminate
  race condition where concurrent tests pollute each other's working
  directory.

  ship-local-workflow: inline ship workflow steps in prompt instead of having
  agent read 700+ line SKILL.md (was wasting 6 of 15 turns on file I/O).

  design-consultation-core: replace exact section name matching with fuzzy
  synonym-based matching (e.g. "Colors" matches "Color", "Type System"
  matches "Typography"). All 7 sections still required, LLM judge still hard
  fail.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* perf: pin quality tests to Opus, add --retry 2 and test:e2e:fast tier

  ~10 quality-sensitive tests (planted-bug detection, design quality judge,
  strategic review, retro analysis) explicitly pinned to Opus. ~30 structure
  tests default to Sonnet for 5x speed improvement. Added --retry 2 to all
  E2E scripts for flaky test resilience. Added test:e2e:fast script that
  excludes 8 slowest tests for quick feedback.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: mark E2E model pinning TODO as shipped

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: add SKILL.md merge conflict directive to CLAUDE.md

  When resolving merge conflicts on generated SKILL.md files, always merge
  the .tmpl templates first, then regenerate — never accept either side's
  generated output directly.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add DEPLOY_BOOTSTRAP resolver to gen-skill-docs

  The land-and-deploy template referenced {{DEPLOY_BOOTSTRAP}} but no
  resolver existed, causing gen-skill-docs to fail. Added
  generateDeployBootstrap() that generates the deploy config detection bash
  block (check CLAUDE.md for persisted config, auto-detect platform from
  config files, detect deploy workflows).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: regenerate SKILL.md files after DEPLOY_BOOTSTRAP fix

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: move prompt temp file outside workingDirectory to prevent race condition

  The .prompt-tmp file was written inside workingDirectory, which gets
  deleted by afterAll cleanup. With --concurrent --retry, afterAll can
  interleave with retries, causing "No such file or directory" crashes at 0s
  (seen in review-design-lite and office-hours-spec-review). Fix: write
  prompt file to os.tmpdir() with a unique suffix so it survives directory
  cleanup.

  Also convert review-design-lite from describeE2E to describeIfSelected for
  proper diff-based test selection.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add --retry 2 --concurrent flags to test:evals scripts for consistency

  test:evals and test:evals:all were missing the retry and concurrency flags
  that test:e2e already had, causing inconsistent behavior between the two
  script families.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: HMAKT99 <HMAKT99@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
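The prompt temp file fix described in the log above can be sketched as follows. This is an illustrative assumption, not the repo's actual helper: writePromptFile and the hex-suffix scheme are invented names, but the mechanism (write to os.tmpdir() with a unique suffix so an interleaved afterAll cleanup of the working directory cannot delete the file between retries) matches the commit description.

```typescript
import * as crypto from 'crypto';
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

// Hypothetical helper: write the prompt to the OS temp dir, NOT the test's
// workingDirectory, so concurrent afterAll cleanup cannot race with retries.
function writePromptFile(prompt: string): string {
  const suffix = crypto.randomBytes(6).toString('hex'); // unique per call
  const promptPath = path.join(os.tmpdir(), `.prompt-tmp-${suffix}`);
  fs.writeFileSync(promptPath, prompt, 'utf-8');
  return promptPath;
}
```

A caller would pass the returned path to the session runner and unlink it itself once the run (including any retries) has finished.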
173 lines
5.4 KiB
TypeScript
/**
 * Live E2E test watcher dashboard.
 *
 * Reads heartbeat (e2e-live.json) for current test status and
 * partial eval results (_partial-e2e.json) for completed tests.
 * Renders a terminal dashboard every 1s.
 *
 * Usage: bun run eval:watch [--tail]
 */

import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';

const GSTACK_DEV_DIR = path.join(os.homedir(), '.gstack-dev');
const HEARTBEAT_PATH = path.join(GSTACK_DEV_DIR, 'e2e-live.json');
const PARTIAL_PATH = path.join(GSTACK_DEV_DIR, 'evals', '_partial-e2e.json');
const STALE_THRESHOLD_SEC = 600; // 10 minutes

export interface HeartbeatData {
  runId: string;
  pid?: number;
  startedAt: string;
  currentTest: string;
  status: string;
  turn: number;
  toolCount: number;
  lastTool: string;
  lastToolAt: string;
  elapsedSec: number;
}

export interface PartialData {
  tests: Array<{
    name: string;
    passed: boolean;
    cost_usd: number;
    duration_ms: number;
    turns_used?: number;
    exit_reason?: string;
  }>;
  total_cost_usd: number;
  _partial?: boolean;
}

/** Read and parse a JSON file, returning null on any error. */
function readJSON<T>(filePath: string): T | null {
  try {
    return JSON.parse(fs.readFileSync(filePath, 'utf-8'));
  } catch {
    return null;
  }
}

/** Check if a process is alive (signal 0 = existence check, doesn't kill). */
function isProcessAlive(pid: number): boolean {
  try {
    process.kill(pid, 0);
    return true;
  } catch {
    return false;
  }
}

/** Format seconds as Xm Ys */
function formatDuration(sec: number): string {
  if (sec < 60) return `${sec}s`;
  const m = Math.floor(sec / 60);
  const s = sec % 60;
  return `${m}m ${s}s`;
}

/** Render dashboard from heartbeat + partial data. Pure function for testability. */
export function renderDashboard(heartbeat: HeartbeatData | null, partial: PartialData | null): string {
  const lines: string[] = [];

  if (!heartbeat && !partial) {
    lines.push('E2E Watch — No active run detected');
    lines.push('');
    lines.push(`Heartbeat: ${HEARTBEAT_PATH} (not found)`);
    lines.push(`Partial: ${PARTIAL_PATH} (not found)`);
    lines.push('');
    lines.push('Start a run with: EVALS=1 bun test test/skill-e2e-*.test.ts');
    return lines.join('\n');
  }

  const runId = heartbeat?.runId || 'unknown';
  const elapsed = heartbeat?.elapsedSec || 0;
  lines.push(`E2E Watch \u2014 Run ${runId} \u2014 ${formatDuration(elapsed)}`);
  lines.push('\u2550'.repeat(55));

  // Completed tests from partial
  if (partial?.tests) {
    for (const t of partial.tests) {
      const icon = t.passed ? '\u2713' : '\u2717';
      const cost = `$${t.cost_usd.toFixed(2)}`;
      const dur = `${Math.round(t.duration_ms / 1000)}s`;
      const turns = t.turns_used !== undefined ? `${t.turns_used} turns` : '';
      const name = t.name.length > 30 ? t.name.slice(0, 27) + '...' : t.name.padEnd(30);
      lines.push(` ${icon} ${name} ${cost.padStart(6)} ${dur.padStart(5)} ${turns}`);
    }
  }

  // Current test from heartbeat
  if (heartbeat && heartbeat.status === 'running') {
    const name = heartbeat.currentTest.length > 30
      ? heartbeat.currentTest.slice(0, 27) + '...'
      : heartbeat.currentTest.padEnd(30);
    lines.push(` \u29D6 ${name} ${formatDuration(heartbeat.elapsedSec).padStart(6)} turn ${heartbeat.turn} last: ${heartbeat.lastTool}`);

    // Stale detection
    const lastToolTime = new Date(heartbeat.lastToolAt).getTime();
    const staleSec = Math.round((Date.now() - lastToolTime) / 1000);
    if (staleSec > STALE_THRESHOLD_SEC) {
      lines.push(` \u26A0 STALE: last tool call was ${formatDuration(staleSec)} ago \u2014 run may have crashed`);
    }
  }

  lines.push('\u2500'.repeat(55));

  // Summary
  const completedCount = partial?.tests?.length || 0;
  const totalCost = partial?.total_cost_usd || 0;
  const running = heartbeat?.status === 'running' ? 1 : 0;
  lines.push(` Completed: ${completedCount} Running: ${running} Cost: $${totalCost.toFixed(2)} Elapsed: ${formatDuration(elapsed)}`);

  if (heartbeat?.runId) {
    const logPath = path.join(GSTACK_DEV_DIR, 'e2e-runs', heartbeat.runId, 'progress.log');
    lines.push(` Logs: ${logPath}`);
  }

  return lines.join('\n');
}

// --- Main ---

if (import.meta.main) {
  const showTail = process.argv.includes('--tail');

  const render = () => {
    let heartbeat = readJSON<HeartbeatData>(HEARTBEAT_PATH);
    const partial = readJSON<PartialData>(PARTIAL_PATH);

    // Auto-clear heartbeat if the process is dead
    if (heartbeat?.pid && !isProcessAlive(heartbeat.pid)) {
      try { fs.unlinkSync(HEARTBEAT_PATH); } catch { /* already gone */ }
      process.stdout.write('\x1B[2J\x1B[H');
      process.stdout.write(`Cleared stale heartbeat — PID ${heartbeat.pid} is no longer running.\n\n`);
      heartbeat = null;
    }

    // Clear screen
    process.stdout.write('\x1B[2J\x1B[H');
    process.stdout.write(renderDashboard(heartbeat, partial) + '\n');

    // --tail: show last 10 lines of progress.log
    if (showTail && heartbeat?.runId) {
      const logPath = path.join(GSTACK_DEV_DIR, 'e2e-runs', heartbeat.runId, 'progress.log');
      try {
        const content = fs.readFileSync(logPath, 'utf-8');
        const tail = content.split('\n').filter(l => l.trim()).slice(-10);
        process.stdout.write('\nRecent progress:\n');
        for (const line of tail) {
          process.stdout.write(line + '\n');
        }
      } catch { /* log file may not exist yet */ }
    }
  };

  render();
  setInterval(render, 1000);
}