gstack/test/helpers/session-runner.ts
Garry Tan dc5e0538e5 feat: worktree isolation for E2E tests + infrastructure elegance (v0.11.12.0) (#425)
* refactor: extract gen-skill-docs into modular resolver architecture

Break the 3000-line monolith into 10 domain modules under scripts/resolvers/:
types, constants, preamble, utility, browse, design, testing, review,
codex-helpers, and index. Each module owns one domain of template generation.

The preamble module introduces a 4-tier composition system (T1-T4) so skills
only pay for the preamble sections they actually need, reducing token usage
for lightweight skills by ~40%.

Adds a token budget dashboard that prints after every generation run showing
per-skill and total token counts.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: tiered preamble — skills only pay for what they use

Tag all 23 templates with preamble-tier (T1-T4). Lightweight skills
like /browse and /benchmark get a minimal preamble (~40% fewer tokens),
while review skills get the full stack. Regenerate all SKILL.md files.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
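The tier mechanics aren't shown in this file, but the composition idea can be sketched roughly as follows. All names here (tier labels aside) are illustrative, not the actual resolver API:

```typescript
type PreambleTier = 'T1' | 'T2' | 'T3' | 'T4';

// Hypothetical section names; the real preamble module defines its own.
const TIER_SECTIONS: Record<PreambleTier, string[]> = {
  T1: ['identity'],
  T2: ['identity', 'conventions'],
  T3: ['identity', 'conventions', 'tooling'],
  T4: ['identity', 'conventions', 'tooling', 'review-checklist'],
};

// Compose only the sections a skill's tier requires, so lightweight
// skills don't pay tokens for review-grade preamble material.
function composePreamble(tier: PreambleTier, sections: Record<string, string>): string {
  return TIER_SECTIONS[tier].map(name => sections[name] ?? '').join('\n\n');
}
```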

* feat: migrate eval storage to project-scoped paths

Move eval results and E2E run artifacts from ~/.gstack-dev/evals/ to
~/.gstack/projects/$SLUG/evals/ so each project's eval history lives
alongside its other gstack data. Falls back to legacy path if slug
detection fails.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
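A minimal sketch of the path resolution this describes, assuming a hypothetical `detectProjectSlug` helper (the real logic lives in `eval-store`'s `getProjectEvalDir`):

```typescript
import * as path from 'path';
import * as os from 'os';

// Prefer the project-scoped eval path when a slug can be detected;
// fall back to the legacy global path otherwise.
function resolveEvalDir(detectProjectSlug: () => string | null): string {
  const slug = detectProjectSlug();
  if (slug) {
    return path.join(os.homedir(), '.gstack', 'projects', slug, 'evals');
  }
  return path.join(os.homedir(), '.gstack-dev', 'evals'); // legacy fallback
}
```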

* fix: sync package.json version with VERSION after merge

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add WorktreeManager for isolated test environments

Reusable platform module (lib/worktree.ts) that creates git worktrees
for test isolation and harvests useful changes as patches. Includes
SHA-256 dedup, original SHA tracking for committed change detection,
and automatic gitignored artifact copying (.agents/, browse/dist/).

12 unit tests covering lifecycle, harvest, dedup, and error handling.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
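The SHA-256 dedup could look roughly like this (a sketch, not the actual lib/worktree.ts implementation): a patch is skipped when an identical patch body was already harvested.

```typescript
import { createHash } from 'crypto';

// Deduplicate harvested patches by content hash: identical patch
// bodies produce identical SHA-256 digests and are kept only once.
function dedupePatches(patches: string[]): string[] {
  const seen = new Set<string>();
  const unique: string[] = [];
  for (const patch of patches) {
    const digest = createHash('sha256').update(patch).digest('hex');
    if (seen.has(digest)) continue;
    seen.add(digest);
    unique.push(patch);
  }
  return unique;
}
```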

* feat: integrate worktree isolation into E2E test infrastructure

Add createTestWorktree(), harvestAndCleanup(), and describeWithWorktree()
helpers to e2e-helpers.ts. Add harvest field to EvalTestEntry for
eval-store integration. Register lib/worktree.ts as a global touchfile.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: run Gemini and Codex E2E tests in worktrees

Switch both test suites from cwd: ROOT to worktree isolation.
Gemini (--yolo) no longer pollutes the working tree. Codex
(read-only) gets a worktree for consistency. Useful changes are
harvested as patches for cherry-picking.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: skip symlinks in copyDirSync to prevent infinite recursion

Adversarial review caught that .claude/skills/gstack may be a symlink
back to the repo root, causing copyDirSync to recurse infinitely
when copying gitignored artifacts into worktrees.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: bump version and changelog (v0.11.12.0)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: relax session-awareness assertion to accept structured options

The LLM consistently presents well-formatted A/B choices with pros/cons
but doesn't always use the exact string "RECOMMENDATION". Accept
case-insensitive "recommend", "option a", "which do you want", or
"which approach" as equivalent signals of a structured recommendation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
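The relaxed assertion might be expressed roughly as follows (illustrative, not the actual test code):

```typescript
// Any of these case-insensitive signals counts as a structured
// recommendation, instead of requiring the literal "RECOMMENDATION".
const RECOMMENDATION_SIGNALS = [/recommend/i, /option a/i, /which do you want/i, /which approach/i];

function looksLikeStructuredRecommendation(output: string): boolean {
  return RECOMMENDATION_SIGNALS.some(pattern => pattern.test(output));
}
```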

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 23:05:22 -07:00


/**
 * Claude CLI subprocess runner for skill E2E testing.
 *
 * Spawns `claude -p` as a completely independent process (not via Agent SDK),
 * so it works inside Claude Code sessions. Pipes prompt via stdin, streams
 * NDJSON output for real-time progress, scans for browse errors.
 */
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';
import { getProjectEvalDir } from './eval-store';

const GSTACK_DEV_DIR = path.join(os.homedir(), '.gstack-dev');
const HEARTBEAT_PATH = path.join(GSTACK_DEV_DIR, 'e2e-live.json'); // heartbeat stays global
const PROJECT_DIR = path.dirname(getProjectEvalDir()); // ~/.gstack/projects/$SLUG/

/** Sanitize test name for use as filename: strip leading slashes, replace / with - */
export function sanitizeTestName(name: string): string {
  return name.replace(/^\/+/, '').replace(/\//g, '-');
}

/** Atomic write: write to .tmp then rename. Non-fatal on error. */
function atomicWriteSync(filePath: string, data: string): void {
  const tmp = filePath + '.tmp';
  fs.writeFileSync(tmp, data);
  fs.renameSync(tmp, filePath);
}

export interface CostEstimate {
  inputChars: number;
  outputChars: number;
  estimatedTokens: number;
  estimatedCost: number; // USD
  turnsUsed: number;
}

export interface SkillTestResult {
  toolCalls: Array<{ tool: string; input: any; output: string }>;
  browseErrors: string[];
  exitReason: string;
  duration: number;
  output: string;
  costEstimate: CostEstimate;
  transcript: any[];
  /** Which model was used for this test (added for Sonnet/Opus split diagnostics) */
  model: string;
  /** Time from spawn to the first tool call, in ms (added for rate-limit diagnostics) */
  firstResponseMs: number;
  /** Peak latency between consecutive tool calls, in ms */
  maxInterTurnMs: number;
}

const BROWSE_ERROR_PATTERNS = [
  /Unknown command: \w+/,
  /Unknown snapshot flag: .+/,
  /ERROR: browse binary not found/,
  /Server failed to start/,
  /no such file or directory.*browse/i,
];

// --- Testable NDJSON parser ---

export interface ParsedNDJSON {
  transcript: any[];
  resultLine: any | null;
  turnCount: number;
  toolCallCount: number;
  toolCalls: Array<{ tool: string; input: any; output: string }>;
}

/**
 * Parse an array of NDJSON lines into structured transcript data.
 * Pure function — no I/O, no side effects. Used by both the streaming
 * reader and unit tests.
 */
export function parseNDJSON(lines: string[]): ParsedNDJSON {
  const transcript: any[] = [];
  let resultLine: any = null;
  let turnCount = 0;
  let toolCallCount = 0;
  const toolCalls: ParsedNDJSON['toolCalls'] = [];
  for (const line of lines) {
    if (!line.trim()) continue;
    try {
      const event = JSON.parse(line);
      transcript.push(event);
      // Track turns and tool calls from assistant events
      if (event.type === 'assistant') {
        turnCount++;
        const content = event.message?.content || [];
        for (const item of content) {
          if (item.type === 'tool_use') {
            toolCallCount++;
            toolCalls.push({
              tool: item.name || 'unknown',
              input: item.input || {},
              output: '',
            });
          }
        }
      }
      if (event.type === 'result') resultLine = event;
    } catch { /* skip malformed lines */ }
  }
  return { transcript, resultLine, turnCount, toolCallCount, toolCalls };
}
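For illustration, here is how `parseNDJSON` behaves on a small sample stream. This is a standalone copy of the function above (types trimmed so the snippet runs on its own), fed synthetic events rather than real CLI output:

```typescript
// Standalone copy of parseNDJSON, trimmed for illustration.
function parseNDJSON(lines: string[]) {
  const transcript: any[] = [];
  let resultLine: any = null;
  let turnCount = 0;
  let toolCallCount = 0;
  const toolCalls: Array<{ tool: string; input: any; output: string }> = [];
  for (const line of lines) {
    if (!line.trim()) continue;
    try {
      const event = JSON.parse(line);
      transcript.push(event);
      if (event.type === 'assistant') {
        turnCount++;
        for (const item of event.message?.content || []) {
          if (item.type === 'tool_use') {
            toolCallCount++;
            toolCalls.push({ tool: item.name || 'unknown', input: item.input || {}, output: '' });
          }
        }
      }
      if (event.type === 'result') resultLine = event;
    } catch { /* skip malformed lines */ }
  }
  return { transcript, resultLine, turnCount, toolCallCount, toolCalls };
}

// Sample stream: one assistant turn with a tool call, one malformed
// line (skipped), and a final result event.
const sample = [
  JSON.stringify({ type: 'assistant', message: { content: [{ type: 'tool_use', name: 'Bash', input: { command: 'ls' } }] } }),
  '{not valid json',
  JSON.stringify({ type: 'result', subtype: 'success' }),
];
const demo = parseNDJSON(sample);
// demo.turnCount === 1, demo.toolCallCount === 1, demo.resultLine.subtype === 'success'
```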

function truncate(s: string, max: number): string {
  return s.length > max ? s.slice(0, max) + '…' : s;
}

// --- Main runner ---

export async function runSkillTest(options: {
  prompt: string;
  workingDirectory: string;
  maxTurns?: number;
  allowedTools?: string[];
  timeout?: number;
  testName?: string;
  runId?: string;
  /** Model to use. Defaults to claude-sonnet-4-6 (overridable via EVALS_MODEL env). */
  model?: string;
}): Promise<SkillTestResult> {
  const {
    prompt,
    workingDirectory,
    maxTurns = 15,
    allowedTools = ['Bash', 'Read', 'Write'],
    timeout = 120_000,
    testName,
    runId,
  } = options;
  const model = options.model ?? process.env.EVALS_MODEL ?? 'claude-sonnet-4-6';
  const startTime = Date.now();
  const startedAt = new Date().toISOString();

  // Set up per-run log directory if runId is provided
  let runDir: string | null = null;
  const safeName = testName ? sanitizeTestName(testName) : null;
  if (runId) {
    try {
      runDir = path.join(PROJECT_DIR, 'e2e-runs', runId);
      fs.mkdirSync(runDir, { recursive: true });
    } catch { /* non-fatal */ }
  }

  // Spawn claude -p with streaming NDJSON output. Prompt piped via stdin to
  // avoid shell escaping issues. --verbose is required for stream-json mode.
  const args = [
    '-p',
    '--model', model,
    '--output-format', 'stream-json',
    '--verbose',
    '--dangerously-skip-permissions',
    '--max-turns', String(maxTurns),
    '--allowed-tools', ...allowedTools,
  ];

  // Write prompt to a temp file OUTSIDE workingDirectory to avoid race conditions
  // where afterAll cleanup deletes the dir before cat reads the file (especially
  // with --concurrent --retry). Using os.tmpdir() + unique suffix keeps it stable.
  const promptFile = path.join(os.tmpdir(), `.prompt-${process.pid}-${Date.now()}-${Math.random().toString(36).slice(2)}`);
  fs.writeFileSync(promptFile, prompt);
  const proc = Bun.spawn(['sh', '-c', `cat "${promptFile}" | claude ${args.map(a => `"${a}"`).join(' ')}`], {
    cwd: workingDirectory,
    stdout: 'pipe',
    stderr: 'pipe',
  });

  // Race against timeout
  let stderr = '';
  let exitReason = 'unknown';
  let timedOut = false;
  const timeoutId = setTimeout(() => {
    timedOut = true;
    proc.kill();
  }, timeout);

  // Stream NDJSON from stdout for real-time progress
  const collectedLines: string[] = [];
  let liveTurnCount = 0;
  let liveToolCount = 0;
  let firstResponseMs = 0;
  let lastToolTime = 0;
  let maxInterTurnMs = 0;
  const stderrPromise = new Response(proc.stderr).text();
  const reader = proc.stdout.getReader();
  const decoder = new TextDecoder();
  let buf = '';
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      buf += decoder.decode(value, { stream: true });
      const lines = buf.split('\n');
      buf = lines.pop() || '';
      for (const line of lines) {
        if (!line.trim()) continue;
        collectedLines.push(line);
        // Real-time progress to stderr + persistent logs
        try {
          const event = JSON.parse(line);
          if (event.type === 'assistant') {
            liveTurnCount++;
            const content = event.message?.content || [];
            for (const item of content) {
              if (item.type === 'tool_use') {
                liveToolCount++;
                const now = Date.now();
                const elapsed = Math.round((now - startTime) / 1000);
                // Track timing telemetry
                if (firstResponseMs === 0) firstResponseMs = now - startTime;
                if (lastToolTime > 0) {
                  const interTurn = now - lastToolTime;
                  if (interTurn > maxInterTurnMs) maxInterTurnMs = interTurn;
                }
                lastToolTime = now;
                const progressLine = ` [${elapsed}s] turn ${liveTurnCount} tool #${liveToolCount}: ${item.name}(${truncate(JSON.stringify(item.input || {}), 80)})\n`;
                process.stderr.write(progressLine);
                // Persist progress.log
                if (runDir) {
                  try { fs.appendFileSync(path.join(runDir, 'progress.log'), progressLine); } catch { /* non-fatal */ }
                }
                // Write heartbeat (atomic)
                if (runId && testName) {
                  try {
                    const toolDesc = `${item.name}(${truncate(JSON.stringify(item.input || {}), 60)})`;
                    atomicWriteSync(HEARTBEAT_PATH, JSON.stringify({
                      runId,
                      pid: proc.pid,
                      startedAt,
                      currentTest: testName,
                      status: 'running',
                      turn: liveTurnCount,
                      toolCount: liveToolCount,
                      lastTool: toolDesc,
                      lastToolAt: new Date().toISOString(),
                      elapsedSec: elapsed,
                    }, null, 2) + '\n');
                  } catch { /* non-fatal */ }
                }
              }
            }
          }
        } catch { /* skip — parseNDJSON will handle it later */ }
        // Append raw NDJSON line to per-test transcript file
        if (runDir && safeName) {
          try { fs.appendFileSync(path.join(runDir, `${safeName}.ndjson`), line + '\n'); } catch { /* non-fatal */ }
        }
      }
    }
  } catch { /* stream read error — fall through to exit code handling */ }

  // Flush remaining buffer
  if (buf.trim()) {
    collectedLines.push(buf);
  }
  stderr = await stderrPromise;
  const exitCode = await proc.exited;
  clearTimeout(timeoutId);
  try { fs.unlinkSync(promptFile); } catch { /* non-fatal */ }

  if (timedOut) {
    exitReason = 'timeout';
  } else if (exitCode === 0) {
    exitReason = 'success';
  } else {
    exitReason = `exit_code_${exitCode}`;
  }
  const duration = Date.now() - startTime;

  // Parse all collected NDJSON lines
  const parsed = parseNDJSON(collectedLines);
  const { transcript, resultLine, toolCalls } = parsed;

  // Scan transcript + stderr for browse errors
  const browseErrors: string[] = [];
  const allText = transcript.map(e => JSON.stringify(e)).join('\n') + '\n' + stderr;
  for (const pattern of BROWSE_ERROR_PATTERNS) {
    const match = allText.match(pattern);
    if (match) {
      browseErrors.push(match[0].slice(0, 200));
    }
  }

  // Use resultLine for structured result data
  if (resultLine) {
    if (resultLine.is_error) {
      // claude -p can return subtype=success with is_error=true (e.g. API connection failure)
      exitReason = 'error_api';
    } else if (resultLine.subtype === 'success') {
      exitReason = 'success';
    } else if (resultLine.subtype) {
      exitReason = resultLine.subtype;
    }
  }

  // Save failure transcript to persistent run directory (or fallback to workingDirectory)
  if (browseErrors.length > 0 || exitReason !== 'success') {
    try {
      const failureDir = runDir || path.join(workingDirectory, '.gstack', 'test-transcripts');
      fs.mkdirSync(failureDir, { recursive: true });
      const failureName = safeName
        ? `${safeName}-failure.json`
        : `e2e-${new Date().toISOString().replace(/[:.]/g, '-')}.json`;
      fs.writeFileSync(
        path.join(failureDir, failureName),
        JSON.stringify({
          prompt: prompt.slice(0, 500),
          testName: testName || 'unknown',
          exitReason,
          browseErrors,
          duration,
          turnAtTimeout: timedOut ? liveTurnCount : undefined,
          lastToolCall: liveToolCount > 0 ? `tool #${liveToolCount}` : undefined,
          stderr: stderr.slice(0, 2000),
          result: resultLine ? { type: resultLine.type, subtype: resultLine.subtype, result: resultLine.result?.slice?.(0, 500) } : null,
        }, null, 2),
      );
    } catch { /* non-fatal */ }
  }

  // Cost from result line (exact) or estimate from chars
  const turnsUsed = resultLine?.num_turns || 0;
  const estimatedCost = resultLine?.total_cost_usd || 0;
  const inputChars = prompt.length;
  const outputChars = (resultLine?.result || '').length;
  const estimatedTokens = (resultLine?.usage?.input_tokens || 0)
    + (resultLine?.usage?.output_tokens || 0)
    + (resultLine?.usage?.cache_read_input_tokens || 0);
  const costEstimate: CostEstimate = {
    inputChars,
    outputChars,
    estimatedTokens,
    estimatedCost: Math.round(estimatedCost * 100) / 100,
    turnsUsed,
  };

  return { toolCalls, browseErrors, exitReason, duration, output: resultLine?.result || '', costEstimate, transcript, model, firstResponseMs, maxInterTurnMs };
}