mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-02 03:35:09 +02:00
feat(v1.10.1.0): overlay efficacy harness + Opus 4.7 fanout nudge removal (#1166)

* refactor: export readOverlay from model-overlay resolver

  Needed by the overlay-efficacy eval harness to resolve INHERIT directives
  without going through generateModelOverlay's full TemplateContext.

* chore: add @anthropic-ai/claude-agent-sdk@0.2.117 dep

  Pinned exact for SDK event-shape stability. Used by the overlay-efficacy
  harness to drive the model through a closer-to-real Claude Code harness
  than `claude -p`.

* feat(preflight): sanity check for agent-sdk + overlay resolver

  Verifies that the SDK loads, claude-opus-4-7 is a live API model, the
  SDKMessage event shape matches assumptions, and readOverlay resolves
  INHERIT directives and includes the expected content. Run with
  `bun run scripts/preflight-agent-sdk.ts`. PREFLIGHT OK on first run,
  $0.013 API spend.

* feat(eval): parametric overlay-efficacy harness (runner + fixtures)

  `test/helpers/agent-sdk-runner.ts` wraps @anthropic-ai/claude-agent-sdk
  with explicit `AgentSdkResult` types, a process-level API concurrency
  semaphore, and 3-shape 429 retry (thrown error, result-message error,
  mid-stream SDKRateLimitEvent). Pins the local claude binary via
  `pathToClaudeCodeExecutable`.

  `test/fixtures/overlay-nudges.ts` holds the typed registry. Two fixtures
  for the first measurement: `opus-4-7-fanout-toy` (3-file read) and
  `opus-4-7-fanout-realistic` (mixed-tool audit). A strict validator
  rejects duplicate ids, non-integer trials, unsafe overlay paths, unsafe
  id characters, and missing overlay files at module load. Adding a future
  overlay nudge eval = one fixture entry.

* test(eval): unit tests for agent-sdk-runner (36 tests, free tier)

  A stub `queryProvider` feeds hand-crafted SDKMessage streams. Covers:
  happy-path shape, all 3 rate-limit shapes + retry, workspace reset on
  retry, persistent 429 -> `RateLimitExhaustedError`, non-429 propagation,
  process-level concurrency cap, options propagation, artifact path
  uniqueness, cost/turn mapping, and every validator rejection case.
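The 3-shape 429 retry described above can be sketched as three standalone predicates. This is a simplified, hypothetical sketch based only on the SDKMessage shapes exercised in the unit tests in this commit; the real detectors (`isRateLimitThrown`, `isRateLimitResult`, `isRateLimitEvent`) live in `test/helpers/agent-sdk-runner.ts` and may differ in detail:

```typescript
// Hypothetical sketch of the three rate-limit shapes the runner retries on.
// Event fields below mirror the stub events in the unit tests, not a full
// SDKMessage type.
type AnyEvent = {
  type?: string;
  subtype?: string;
  errors?: string[];
  rate_limit_info?: { status?: string };
};

// Shape 1: the query generator throws an error carrying status 429,
// a RateLimitError name, or a 429/rate-limit message.
function looksLikeThrown429(err: unknown): boolean {
  if (err === null || typeof err !== 'object') return false;
  const e = err as { status?: number; message?: string; name?: string };
  if (e.status === 429) return true;
  if (e.name === 'RateLimitError') return true;
  return /429|rate.?limit/i.test(e.message ?? '');
}

// Shape 2: a result message with subtype 'error_during_execution' whose
// errors array mentions a 429 / rate limit.
function looksLikeResult429(ev: AnyEvent): boolean {
  return (
    ev.type === 'result' &&
    ev.subtype === 'error_during_execution' &&
    (ev.errors ?? []).some((m) => /429|rate.?limit/i.test(m))
  );
}

// Shape 3: a mid-stream rate_limit_event with status 'rejected'.
function looksLikeRateLimitEvent(ev: AnyEvent): boolean {
  return ev.type === 'rate_limit_event' && ev.rate_limit_info?.status === 'rejected';
}
```

Keeping the three shapes as separate predicates lets the retry loop treat a thrown error, a terminal result message, and a mid-stream event uniformly.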
* test(eval): paid periodic overlay-efficacy harness

  `test/skill-e2e-overlay-harness.test.ts` iterates OVERLAY_FIXTURES and
  runs two arms per fixture (overlay-ON, overlay-OFF) at N=10 trials with
  bounded concurrency. Both arms use the SDK preset `claude_code`, so both
  include the real Claude Code system prompt; overlay-ON appends the
  resolved overlay text. Saves per-trial raw event streams to
  `~/.gstack/projects/<slug>/transcripts/` for forensic recovery. Gated on
  `EVALS=1 && EVALS_TIER=periodic`. ~$3/run (40 trials).

* test: register overlay harness in touchfiles (both maps)

  Entries for `overlay-harness-opus-4-7-fanout-toy` and
  `opus-4-7-fanout-realistic` in E2E_TOUCHFILES (deps: model-overlays/, the
  fixtures file, the runner, the resolver) and E2E_TIERS (`periodic`).
  Passes the `test/touchfiles.test.ts` completeness check.

* fix(opus-4.7): remove "Fan out explicitly" overlay nudge

  Measured counterproductive under the new SDK harness. Baseline Opus 4.7
  emits first-turn parallel tool_use blocks 70% of the time on a 3-file
  read prompt. With the custom nudge: 10%. With Anthropic's own canonical
  `<use_parallel_tool_calls>` block from their parallel-tool-use docs: 0%.
  Both overlays suppress fanout; neither improves it.

  On realistic multi-tool prompts (audit a project: read files + glob +
  summarize), Opus 4.7 never fans out in the first turn regardless of
  overlay: zero of 20 trials. Not a prompt problem.

  Keeping the other three nudges (effort-match, batch questions, literal
  interpretation) pending their own measurement. The harness is ready for
  follow-up fixtures: add one entry to `test/fixtures/overlay-nudges.ts`
  to measure any overlay bullet. Cost of investigation: ~$7 total across
  3 eval runs.

* chore: bump version and changelog (v1.6.5.0)

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(eval): extend OverlayFixture with allowedTools, maxTurns, direction

  A per-fixture tool allowlist unblocks measuring nudges that need
  Edit/Write (e.g. literal-interpretation's 'fix the failing tests' needs
  write access). Per-fixture maxTurns lets harder prompts run longer
  without changing the default. `direction` is cosmetic metadata for test
  output labeling.

  Also adds reusable predicates and metrics:
  - lowerIsBetter20Pct / higherIsBetter20Pct — 20% lift threshold vs baseline
  - bashToolCallCount — count of Bash tool_use across the session
  - turnsToCompletion — SDK-reported num_turns at result
  - uniqueFilesEdited — Edit/Write/MultiEdit file_path set size

  test/skill-e2e-overlay-harness.test.ts now threads fixture.allowedTools
  and fixture.maxTurns through runArm.

* test(eval): 3 more overlay fixtures to measure remaining Claude nudges

  Measures three overlay bullets that haven't been tested yet:
  - claude-dedicated-tools-vs-bash — claude.md says 'prefer
    Read/Edit/Write/Glob/Grep over cat/sed/find/grep'. The fixture prompts
    'list every TypeScript file under src/ and tell me what each exports'
    and counts Bash tool_use across the session. Overlay-ON should drop it
    by >=20%.
  - opus-4-7-effort-match-trivial — opus-4-7.md says 'simple file reads
    don't need deep reasoning.' The fixture uses a trivial one-file prompt
    (config.json lookup) and measures turns_used. Overlay-ON should be
    <=80% of baseline turns.
  - opus-4-7-literal-interpretation — opus-4-7.md says 'fix ALL failing
    tests, not just the obvious one.' The fixture seeds three failing test
    files with deliberately distinct failure modes and counts unique files
    edited. Overlay-ON should touch >=20% more files.

  Adding a fourth fixture for any remaining overlay nudge is a single
  entry. The harness is now proven on: fanout (deleted after measurement),
  dedicated tools, effort-match, and literal-interpretation.

* fix(eval): handle SDK max-turns throw gracefully

  Some @anthropic-ai/claude-agent-sdk versions throw from the query
  generator when maxTurns is reached, instead of emitting a result message
  with subtype='error_max_turns'.
  The runner treated that as a non-retryable error and killed the whole
  periodic run on the first fixture that exceeded its turn cap. Added an
  isMaxTurnsError() detector and a catch branch that synthesizes an
  AgentSdkResult from events captured before the throw, with
  exitReason='error_max_turns' and costUsd=0 (unknown from the thrown
  path). The metric function still runs against whatever assistant turns
  were collected, so the trial produces a usable number.

  Hoisted events/assistantTurns/toolCalls/assistantTextParts and the timing
  counters out of the inner try so the catch branch can read them. No
  behavior change on the success path or on the rate-limit retry paths.

* test(eval): bump maxTurns to 15 for claude-dedicated-tools-vs-bash

  The prompt 'list every TypeScript file under src/ and tell me what each
  exports' needs 1 turn for Glob + ~5 for Reads + 1 for the summary. The
  default maxTurns=5 was not enough; a prior run threw from the SDK on this
  fixture and tanked the whole periodic eval. Bumping to 15 gives headroom.
  The runner now also handles max-turns gracefully even if a future fixture
  underestimates, so this is belt and suspenders.

* test(eval): Sonnet 4.6 variants of the 5 Opus-4.7 fixtures

  Same overlays, same prompts, same metrics, `model: 'claude-sonnet-4-6'`.
  Tests whether the overlays behave differently on a weaker Claude model
  where baseline behavior is shakier. Sonnet trials cost ~3-4x less than
  Opus, so these 5 add ~$4.50 to a full run.

  Measurement result from the first paired run (100 trials total, ~$14.55):
  - **Sonnet + effort-match shows real overlay benefit.** With the overlay
    on, Sonnet averages 2.5 turns on a trivial `What's the version in
    config.json?` prompt. Without it, Sonnet takes exactly 3.0 turns in
    all 10 trials. A ~17% reduction, below the 20% pass threshold, but the
    signal is clean: overlay-ON distribution [2,2,2,2,2,3,3,3,3,3] vs
    overlay-OFF [3,3,3,3,3,3,3,3,3,3].
  - All other Sonnet dimensions are flat (fanout, dedicated-tools, literal
    interpretation). Same as Opus on those axes.
  - Opus effort-match remains flat (2.60 vs 2.50, +4% slower with overlay).

  Implication: model-stratified. The overlay stack helps Sonnet on some
  axes where it does nothing on Opus. Wholesale removal would hurt Sonnet.
  Per-nudge per-model measurement is the right move going forward.

* chore: bump version to 1.10.1.0

  Updates VERSION, package.json, the CHANGELOG header, and the TODOS
  completion marker from 1.6.5.0 to 1.10.1.0.

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
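Per the "one fixture entry" claim above, a new nudge measurement reduces to a single registry entry. The following is a hypothetical sketch of such an entry; field names and constraints are inferred from the validateFixtures() unit tests in this commit, and the real `OverlayFixture` type lives in `test/fixtures/overlay-nudges.ts`:

```typescript
// Hypothetical sketch of one overlay-nudge fixture entry. The interface
// below is a local stand-in, inferred from the validator's rejection rules,
// not the real OverlayFixture type.
interface OverlayFixtureSketch {
  id: string;                      // lowercase, safe chars only, unique
  overlayPath: string;             // repo-relative, no absolute paths or '..'
  model: string;
  trials: number;                  // integer >= 3
  concurrency?: number;            // integer >= 1
  setupWorkspace: (dir: string) => void;
  userPrompt: string;
  metric: (result: unknown) => number;
  pass: (samples: { overlay: number[]; off: number[] }) => boolean;
}

const exampleFixture: OverlayFixtureSketch = {
  id: 'opus-4-7-example-nudge',                // hypothetical id
  overlayPath: 'model-overlays/opus-4-7.md',
  model: 'claude-opus-4-7',
  trials: 10,
  setupWorkspace: () => {},                    // seed files the prompt needs
  userPrompt: 'go',
  metric: () => 0,                             // e.g. Bash tool_use count
  // e.g. a lower-is-better 20% lift threshold vs the overlay-OFF baseline:
  pass: ({ overlay, off }) => {
    const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
    return mean(overlay) <= 0.8 * mean(off);
  },
};
```

The validator runs at module load, so a malformed entry fails the free-tier test run before any paid trials start.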
@@ -0,0 +1,725 @@
/**
 * Unit tests for test/helpers/agent-sdk-runner.ts.
 *
 * Runs in free `bun test` (no API calls). Uses a stub QueryProvider to
 * simulate SDK event streams — happy path, rate-limit retries across all
 * three shapes, persistent failure, non-retryable error, options
 * propagation, concurrency cap.
 *
 * Also covers validateFixtures() rejections.
 */

import { describe, test, expect } from 'bun:test';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';
import type {
  SDKMessage,
  Options,
  Query,
} from '@anthropic-ai/claude-agent-sdk';
import {
  runAgentSdkTest,
  toSkillTestResult,
  firstTurnParallelism,
  isRateLimitThrown,
  isRateLimitResult,
  isRateLimitEvent,
  RateLimitExhaustedError,
  __resetSemaphoreForTests,
  type QueryProvider,
  type AgentSdkResult,
} from '../test/helpers/agent-sdk-runner';
import {
  validateFixtures,
  fanoutPass,
  type OverlayFixture,
} from '../test/fixtures/overlay-nudges';

// ---------------------------------------------------------------------------
// Stub SDK event builders
// ---------------------------------------------------------------------------

let uuidCounter = 0;
function uuid(): string {
  return `00000000-0000-0000-0000-${String(++uuidCounter).padStart(12, '0')}`;
}

function systemInit(model = 'claude-opus-4-7', version = '2.1.117'): SDKMessage {
  return {
    type: 'system',
    subtype: 'init',
    apiKeySource: 'user',
    claude_code_version: version,
    cwd: '/tmp/x',
    tools: ['Read'],
    mcp_servers: [],
    model,
    permissionMode: 'bypassPermissions',
    slash_commands: [],
    output_style: 'default',
    skills: [],
    plugins: [],
    uuid: uuid(),
    session_id: 'test-session',
  } as unknown as SDKMessage;
}

function assistantTurn(
  blocks: Array<{ type: 'text'; text: string } | { type: 'tool_use'; name: string; input: unknown }>,
): SDKMessage {
  return {
    type: 'assistant',
    parent_tool_use_id: null,
    uuid: uuid(),
    session_id: 'test-session',
    message: {
      id: 'msg_' + uuid(),
      type: 'message',
      role: 'assistant',
      model: 'claude-opus-4-7',
      content: blocks.map((b) => ({ ...b })),
      stop_reason: 'end_turn',
      stop_sequence: null,
      usage: {
        input_tokens: 10,
        output_tokens: 20,
        cache_creation_input_tokens: 0,
        cache_read_input_tokens: 0,
        service_tier: 'standard',
      },
    },
  } as unknown as SDKMessage;
}

function resultSuccess(cost = 0.01, turns = 1): SDKMessage {
  return {
    type: 'result',
    subtype: 'success',
    duration_ms: 100,
    duration_api_ms: 50,
    is_error: false,
    num_turns: turns,
    result: 'done',
    stop_reason: 'end_turn',
    total_cost_usd: cost,
    usage: {
      input_tokens: 10,
      output_tokens: 20,
      cache_creation_input_tokens: 0,
      cache_read_input_tokens: 0,
      server_tool_use: {},
      service_tier: 'standard',
    },
    modelUsage: {},
    permission_denials: [],
    uuid: uuid(),
    session_id: 'test-session',
  } as unknown as SDKMessage;
}

function resultRateLimit(): SDKMessage {
  return {
    type: 'result',
    subtype: 'error_during_execution',
    duration_ms: 100,
    duration_api_ms: 50,
    is_error: true,
    num_turns: 0,
    stop_reason: null,
    total_cost_usd: 0,
    usage: {
      input_tokens: 0,
      output_tokens: 0,
      cache_creation_input_tokens: 0,
      cache_read_input_tokens: 0,
      server_tool_use: {},
      service_tier: 'standard',
    },
    modelUsage: {},
    permission_denials: [],
    errors: ['rate limit exceeded (429)'],
    uuid: uuid(),
    session_id: 'test-session',
  } as unknown as SDKMessage;
}

function rateLimitEvent(): SDKMessage {
  return {
    type: 'rate_limit_event',
    rate_limit_info: {
      status: 'rejected',
      rateLimitType: 'five_hour',
    },
    uuid: uuid(),
    session_id: 'test-session',
  } as unknown as SDKMessage;
}

// ---------------------------------------------------------------------------
// Stub query provider
// ---------------------------------------------------------------------------

interface StubConfig {
  /** One event stream per call. Exhausted calls throw. */
  streams: SDKMessage[][];
  /** Throw this error on the Nth call (0-indexed). */
  throwAt?: number;
  throwError?: unknown;
  /** Track calls for assertions. */
  calls: Array<{ prompt: string; options: Options | undefined; startedAt: number; endedAt?: number }>;
}

function makeStubProvider(config: StubConfig): QueryProvider {
  let callIdx = -1;
  const provider: QueryProvider = (params) => {
    callIdx++;
    const idx = callIdx;
    const startedAt = Date.now();
    const prompt = typeof params.prompt === 'string' ? params.prompt : '<iterable>';
    config.calls.push({ prompt, options: params.options, startedAt });

    if (config.throwAt !== undefined && idx === config.throwAt) {
      const err = config.throwError ?? new Error('stub throw');
      // Return an async generator that throws on first next().
      const gen = (async function* (): AsyncGenerator<SDKMessage, void> {
        throw err;
      })();
      return gen as unknown as Query;
    }

    const stream = config.streams[idx];
    if (!stream) {
      const gen = (async function* (): AsyncGenerator<SDKMessage, void> {
        throw new Error(`stub has no stream for call ${idx}`);
      })();
      return gen as unknown as Query;
    }

    const gen = (async function* (): AsyncGenerator<SDKMessage, void> {
      try {
        for (const ev of stream) {
          yield ev;
        }
      } finally {
        config.calls[idx]!.endedAt = Date.now();
      }
    })();
    return gen as unknown as Query;
  };
  return provider;
}

const BASE_OPTS = {
  systemPrompt: '',
  userPrompt: 'test prompt',
  workingDirectory: '/tmp/test-dir',
  maxRetries: 3,
};

// Reset semaphore before each test that depends on fresh capacity.
function freshSem(cap = 10): void {
  __resetSemaphoreForTests(cap);
}

// ---------------------------------------------------------------------------
// Happy path
// ---------------------------------------------------------------------------

describe('runAgentSdkTest — happy path', () => {
  test('collects events, assistantTurns, toolCalls, and result fields', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [
        [
          systemInit(),
          assistantTurn([
            { type: 'text', text: 'reading files' },
            { type: 'tool_use', name: 'Read', input: { path: 'a.txt' } },
            { type: 'tool_use', name: 'Read', input: { path: 'b.txt' } },
          ]),
          assistantTurn([{ type: 'text', text: 'done' }]),
          resultSuccess(0.05, 2),
        ],
      ],
      calls: [],
    };
    const result = await runAgentSdkTest({
      ...BASE_OPTS,
      queryProvider: makeStubProvider(stub),
    });

    expect(result.events.length).toBe(4);
    expect(result.assistantTurns.length).toBe(2);
    expect(result.toolCalls.length).toBe(2);
    expect(result.toolCalls[0]!.tool).toBe('Read');
    expect(result.output).toContain('reading files');
    expect(result.output).toContain('done');
    expect(result.exitReason).toBe('success');
    expect(result.turnsUsed).toBe(2);
    expect(result.costUsd).toBe(0.05);
    expect(result.sdkClaudeCodeVersion).toBe('2.1.117');
    expect(result.model).toBe('claude-opus-4-7');
    expect(result.firstResponseMs).toBeGreaterThanOrEqual(0);
  });

  test('first-turn parallelism: 3 tool_use blocks in first assistant turn', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [
        [
          systemInit(),
          assistantTurn([
            { type: 'tool_use', name: 'Read', input: { path: 'a' } },
            { type: 'tool_use', name: 'Read', input: { path: 'b' } },
            { type: 'tool_use', name: 'Read', input: { path: 'c' } },
          ]),
          resultSuccess(),
        ],
      ],
      calls: [],
    };
    const result = await runAgentSdkTest({
      ...BASE_OPTS,
      queryProvider: makeStubProvider(stub),
    });
    expect(firstTurnParallelism(result.assistantTurns[0])).toBe(3);
  });

  test('first-turn parallelism: 0 when first turn is text-only', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [
        [
          systemInit(),
          assistantTurn([{ type: 'text', text: 'thinking' }]),
          resultSuccess(),
        ],
      ],
      calls: [],
    };
    const result = await runAgentSdkTest({
      ...BASE_OPTS,
      queryProvider: makeStubProvider(stub),
    });
    expect(firstTurnParallelism(result.assistantTurns[0])).toBe(0);
  });

  test('first-turn parallelism: 0 when no first turn', () => {
    expect(firstTurnParallelism(undefined)).toBe(0);
  });
});

// ---------------------------------------------------------------------------
// Options propagation
// ---------------------------------------------------------------------------

describe('runAgentSdkTest — options propagation', () => {
  test('systemPrompt, model, cwd, allowedTools, disallowedTools, permissionMode, settingSources, env, pathToClaudeCodeExecutable reach query()', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [[systemInit(), assistantTurn([{ type: 'text', text: 'ok' }]), resultSuccess()]],
      calls: [],
    };
    await runAgentSdkTest({
      systemPrompt: 'you are a test overlay',
      userPrompt: 'go',
      workingDirectory: '/tmp/spec-dir',
      model: 'claude-opus-4-7',
      maxTurns: 7,
      allowedTools: ['Read', 'Glob'],
      disallowedTools: ['Bash', 'Write'],
      permissionMode: 'bypassPermissions',
      settingSources: [],
      env: { ANTHROPIC_API_KEY: 'fake' },
      pathToClaudeCodeExecutable: '/fake/path/claude',
      queryProvider: makeStubProvider(stub),
    });

    const opts = stub.calls[0]!.options!;
    expect(opts.systemPrompt).toBe('you are a test overlay');
    expect(opts.model).toBe('claude-opus-4-7');
    expect(opts.cwd).toBe('/tmp/spec-dir');
    expect(opts.maxTurns).toBe(7);
    expect(opts.tools).toEqual(['Read', 'Glob']);
    expect(opts.allowedTools).toEqual(['Read', 'Glob']);
    expect(opts.disallowedTools).toEqual(['Bash', 'Write']);
    expect(opts.permissionMode).toBe('bypassPermissions');
    expect(opts.allowDangerouslySkipPermissions).toBe(true);
    expect(opts.settingSources).toEqual([]);
    expect(opts.env).toEqual({ ANTHROPIC_API_KEY: 'fake' });
    expect(opts.pathToClaudeCodeExecutable).toBe('/fake/path/claude');
  });

  test('empty systemPrompt means no systemPrompt option passed', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [[systemInit(), assistantTurn([{ type: 'text', text: 'ok' }]), resultSuccess()]],
      calls: [],
    };
    await runAgentSdkTest({
      ...BASE_OPTS,
      queryProvider: makeStubProvider(stub),
    });
    // systemPrompt is undefined when empty string passed (so SDK uses no override)
    expect(stub.calls[0]!.options!.systemPrompt).toBeUndefined();
  });
});

// ---------------------------------------------------------------------------
// Rate-limit retry (three shapes)
// ---------------------------------------------------------------------------

describe('runAgentSdkTest — rate-limit retry', () => {
  test('retryable on thrown 429-shaped error, then succeeds on 2nd attempt', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [
        // call 0: throws (handled via throwAt below)
        [],
        // call 1: success
        [systemInit(), assistantTurn([{ type: 'text', text: 'ok' }]), resultSuccess()],
      ],
      throwAt: 0,
      throwError: Object.assign(new Error('429 too many requests'), { status: 429 }),
      calls: [],
    };
    const result = await runAgentSdkTest({
      ...BASE_OPTS,
      queryProvider: makeStubProvider(stub),
      maxRetries: 2,
    });
    expect(result.exitReason).toBe('success');
    expect(stub.calls.length).toBe(2);
  });

  test('retryable on result-message rate-limit, then succeeds', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [
        [systemInit(), resultRateLimit()],
        [systemInit(), assistantTurn([{ type: 'text', text: 'ok' }]), resultSuccess()],
      ],
      calls: [],
    };
    const result = await runAgentSdkTest({
      ...BASE_OPTS,
      queryProvider: makeStubProvider(stub),
      maxRetries: 2,
    });
    expect(result.exitReason).toBe('success');
    expect(stub.calls.length).toBe(2);
  });

  test('retryable on mid-stream SDKRateLimitEvent, then succeeds', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [
        [systemInit(), rateLimitEvent()],
        [systemInit(), assistantTurn([{ type: 'text', text: 'ok' }]), resultSuccess()],
      ],
      calls: [],
    };
    const result = await runAgentSdkTest({
      ...BASE_OPTS,
      queryProvider: makeStubProvider(stub),
      maxRetries: 2,
    });
    expect(result.exitReason).toBe('success');
    expect(stub.calls.length).toBe(2);
  });

  test('onRetry callback is invoked between attempts', async () => {
    freshSem();
    const resets: string[] = [];
    const stub: StubConfig = {
      streams: [
        [],
        [systemInit(), assistantTurn([{ type: 'text', text: 'ok' }]), resultSuccess()],
      ],
      throwAt: 0,
      throwError: Object.assign(new Error('429'), { status: 429 }),
      calls: [],
    };
    await runAgentSdkTest({
      ...BASE_OPTS,
      queryProvider: makeStubProvider(stub),
      maxRetries: 2,
      onRetry: (dir) => resets.push(dir),
    });
    expect(resets.length).toBe(1);
    expect(resets[0]).toBe('/tmp/test-dir');
  });

  test('persistent 429 throws RateLimitExhaustedError after maxRetries', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [[], [], [], []], // 4 empty streams; throw on each
      calls: [],
    };
    // Every call throws:
    let callCount = 0;
    const alwaysThrowProvider: QueryProvider = (params) => {
      callCount++;
      stub.calls.push({
        prompt: typeof params.prompt === 'string' ? params.prompt : '',
        options: params.options,
        startedAt: Date.now(),
      });
      const gen = (async function* (): AsyncGenerator<SDKMessage, void> {
        throw Object.assign(new Error('429 always'), { status: 429 });
      })();
      return gen as unknown as Query;
    };

    let caught: unknown = null;
    try {
      await runAgentSdkTest({
        ...BASE_OPTS,
        queryProvider: alwaysThrowProvider,
        maxRetries: 2,
      });
    } catch (err) {
      caught = err;
    }
    expect(caught).toBeInstanceOf(RateLimitExhaustedError);
    expect((caught as RateLimitExhaustedError).attempts).toBe(3); // initial + 2 retries
    expect(callCount).toBe(3);
  });

  test('non-429 error is NOT retried, propagates immediately', async () => {
    __resetSemaphoreForTests(10);
    let callCount = 0;
    const throwOnce: QueryProvider = () => {
      callCount++;
      const gen = (async function* (): AsyncGenerator<SDKMessage, void> {
        throw new Error('generic auth failure');
      })();
      return gen as unknown as Query;
    };
    let caught: unknown = null;
    try {
      await runAgentSdkTest({
        ...BASE_OPTS,
        queryProvider: throwOnce,
        maxRetries: 3,
      });
    } catch (err) {
      caught = err;
    }
    expect(caught).toBeInstanceOf(Error);
    expect((caught as Error).message).toBe('generic auth failure');
    expect(callCount).toBe(1);
  });
});

// ---------------------------------------------------------------------------
// Rate-limit detectors (unit)
// ---------------------------------------------------------------------------

describe('rate-limit detectors', () => {
  test('isRateLimitThrown matches status 429, message, name', () => {
    expect(isRateLimitThrown(Object.assign(new Error('boom'), { status: 429 }))).toBe(true);
    expect(isRateLimitThrown(new Error('429 Too Many Requests'))).toBe(true);
    expect(isRateLimitThrown(new Error('rate-limit exceeded'))).toBe(true);
    expect(isRateLimitThrown(Object.assign(new Error('x'), { name: 'RateLimitError' }))).toBe(true);
    expect(isRateLimitThrown(new Error('auth failed'))).toBe(false);
    expect(isRateLimitThrown(null)).toBe(false);
  });

  test('isRateLimitResult matches error_during_execution with 429-shaped errors', () => {
    expect(isRateLimitResult(resultRateLimit())).toBe(true);
    expect(isRateLimitResult(resultSuccess())).toBe(false);
  });

  test('isRateLimitEvent matches rate_limit_event with status=rejected', () => {
    expect(isRateLimitEvent(rateLimitEvent())).toBe(true);
    expect(isRateLimitEvent(resultSuccess())).toBe(false);
  });
});

// ---------------------------------------------------------------------------
// Semaphore concurrency cap
// ---------------------------------------------------------------------------

describe('runAgentSdkTest — concurrency', () => {
  test('process-level semaphore caps concurrent queries', async () => {
    __resetSemaphoreForTests(2);
    let inFlight = 0;
    let peakInFlight = 0;
    const slowStub: QueryProvider = () => {
      const gen = (async function* (): AsyncGenerator<SDKMessage, void> {
        inFlight++;
        if (inFlight > peakInFlight) peakInFlight = inFlight;
        yield systemInit();
        await new Promise((r) => setTimeout(r, 30));
        yield assistantTurn([{ type: 'text', text: 'ok' }]);
        yield resultSuccess();
        inFlight--;
      })();
      return gen as unknown as Query;
    };

    await Promise.all(
      Array.from({ length: 6 }, (_, i) =>
        runAgentSdkTest({
          ...BASE_OPTS,
          userPrompt: `trial-${i}`,
          queryProvider: slowStub,
        }),
      ),
    );

    expect(peakInFlight).toBeLessThanOrEqual(2);
    expect(peakInFlight).toBeGreaterThan(0);
  });
});

// ---------------------------------------------------------------------------
// toSkillTestResult shape
// ---------------------------------------------------------------------------

describe('toSkillTestResult', () => {
  test('produces a SkillTestResult-shaped object', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [[systemInit(), assistantTurn([{ type: 'text', text: 'hi' }]), resultSuccess(0.02, 1)]],
      calls: [],
    };
    const r = await runAgentSdkTest({
      ...BASE_OPTS,
      queryProvider: makeStubProvider(stub),
    });
    const s = toSkillTestResult(r);
    expect(s.toolCalls).toBeArray();
    expect(s.browseErrors).toBeArray();
    expect(s.exitReason).toBe('success');
    expect(s.duration).toBeNumber();
    expect(s.output).toBe('hi');
    expect(s.costEstimate.estimatedCost).toBe(0.02);
    expect(s.costEstimate.turnsUsed).toBe(1);
    expect(s.model).toBe('claude-opus-4-7');
    expect(s.firstResponseMs).toBeNumber();
    expect(s.maxInterTurnMs).toBeNumber();
    expect(s.transcript).toBeArray();
  });
});

// ---------------------------------------------------------------------------
// Fixture validator
// ---------------------------------------------------------------------------

describe('validateFixtures', () => {
  function base(overrides: Partial<OverlayFixture> = {}): OverlayFixture {
    return {
      id: 'test-fixture',
      overlayPath: 'model-overlays/opus-4-7.md',
      model: 'claude-opus-4-7',
      trials: 10,
      setupWorkspace: () => {},
      userPrompt: 'go',
      metric: () => 0,
      pass: fanoutPass,
      ...overrides,
    };
  }

  test('passes for a valid fixture', () => {
    expect(() => validateFixtures([base()])).not.toThrow();
  });

  test('rejects empty id', () => {
    expect(() => validateFixtures([base({ id: '' })])).toThrow(/id must be/);
  });

  test('rejects id with uppercase or unsafe chars', () => {
    expect(() => validateFixtures([base({ id: 'Test_Fixture' })])).toThrow(/id must be/);
  });

  test('rejects duplicate ids', () => {
    expect(() => validateFixtures([base(), base()])).toThrow(/duplicate fixture id/);
  });

  test('rejects non-integer trials', () => {
    expect(() => validateFixtures([base({ trials: 3.5 })])).toThrow(/trials must be/);
  });

  test('rejects trials < 3', () => {
    expect(() => validateFixtures([base({ trials: 2 })])).toThrow(/trials must be/);
  });

  test('rejects concurrency < 1', () => {
    expect(() => validateFixtures([base({ concurrency: 0 })])).toThrow(/concurrency must be/);
  });

  test('rejects non-integer concurrency', () => {
    expect(() => validateFixtures([base({ concurrency: 2.5 })])).toThrow(/concurrency must be/);
  });

  test('rejects empty model', () => {
    expect(() => validateFixtures([base({ model: '' })])).toThrow(/model must be/);
  });

  test('rejects empty userPrompt', () => {
    expect(() => validateFixtures([base({ userPrompt: '' })])).toThrow(/userPrompt must be/);
  });

  test('rejects absolute overlayPath', () => {
    expect(() => validateFixtures([base({ overlayPath: '/etc/passwd' })])).toThrow(/overlayPath must be/);
  });

  test("rejects overlayPath containing '..'", () => {
    expect(() =>
      validateFixtures([base({ overlayPath: '../outside/file.md' })]),
    ).toThrow(/overlayPath must be/);
  });

  test('rejects missing overlay file', () => {
    expect(() =>
      validateFixtures([base({ overlayPath: 'model-overlays/nonexistent.md' })]),
    ).toThrow(/overlay file not found/);
  });

  test('rejects non-function setupWorkspace', () => {
    expect(() =>
      validateFixtures([base({ setupWorkspace: 'not a function' as unknown as (d: string) => void })]),
    ).toThrow(/setupWorkspace must be a function/);
  });

  test('rejects non-function metric', () => {
    expect(() =>
      validateFixtures([base({ metric: null as unknown as (r: AgentSdkResult) => number })]),
    ).toThrow(/metric must be a function/);
  });

  test('rejects non-function pass', () => {
    expect(() =>
      validateFixtures([base({ pass: undefined as unknown as OverlayFixture['pass'] })]),
    ).toThrow(/pass must be a function/);
  });
});

// ---------------------------------------------------------------------------
|
||||
// fanoutPass predicate
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
describe('fanoutPass predicate', () => {
|
||||
test('accepts mean lift >= 0.5 AND >=3/10 overlay trials >= 2', () => {
|
||||
const overlay = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2];
|
||||
const off = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
|
||||
expect(fanoutPass({ overlay, off })).toBe(true);
|
||||
});
|
||||
|
||||
test('rejects when mean lift < 0.5', () => {
|
||||
const overlay = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1];
|
||||
const off = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1];
|
||||
expect(fanoutPass({ overlay, off })).toBe(false);
|
||||
});
|
||||
|
||||
test('rejects when mean lift >= 0.5 but <3 overlay trials emit >=2', () => {
|
||||
// Mean overlay = 1.2, off = 0.0, lift 1.2 but only 2 trials at >=2
|
||||
const overlay = [2, 2, 1, 1, 1, 1, 1, 1, 1, 1];
|
||||
const off = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
|
||||
expect(fanoutPass({ overlay, off })).toBe(false);
|
||||
});
|
||||
});
|
||||
@@ -0,0 +1,487 @@
/**
 * Overlay-efficacy fixture registry.
 *
 * Each fixture defines a reproducible A/B test for one behavioral nudge
 * embedded in a model-overlays/*.md file. The harness at
 * test/skill-e2e-overlay-harness.test.ts iterates this registry and runs
 * `fixture.trials` A/B trials per fixture, asserting `fixture.pass(arms)`.
 *
 * Adding a new overlay eval = one entry in this list. The harness handles
 * arm wiring, concurrency, artifact storage, rate-limit retries, and the
 * cross-harness diagnostic.
 */

import * as fs from 'fs';
import * as path from 'path';
import {
  firstTurnParallelism,
  type AgentSdkResult,
} from '../helpers/agent-sdk-runner';

const REPO_ROOT = path.resolve(__dirname, '..', '..');

// ---------------------------------------------------------------------------
// Types
// ---------------------------------------------------------------------------

export interface OverlayFixture {
  /** Unique, lowercase/digits/dash only. Used in artifact paths. */
  id: string;
  /** Path to the overlay file, relative to repo root. */
  overlayPath: string;
  /** API model ID, not the overlay family name. */
  model: string;
  /** Integer >= 3. Trials per arm. */
  trials: number;
  /** Max concurrent queries for this fixture's arms. Default 3. */
  concurrency?: number;
  /** Populate the workspace dir before each trial. */
  setupWorkspace: (dir: string) => void;
  /** The prompt the model receives. Non-empty. */
  userPrompt: string;
  /** Per-fixture tool allowlist. Omit to use runner default [Read, Glob, Grep, Bash]. */
  allowedTools?: string[];
  /** Max turns per trial. Omit to use runner default (5). */
  maxTurns?: number;
  /**
   * Direction of the expected effect. `higher_is_better` = overlay should
   * increase the metric (e.g. fanout, files touched for literal scope).
   * `lower_is_better` = overlay should decrease it (e.g. Bash count, turn count).
   * Used only for cosmetic logging in the test output; `pass` is the actual gate.
   */
  direction?: 'higher_is_better' | 'lower_is_better';
  /** Compute the per-trial metric from the typed SDK result. */
  metric: (r: AgentSdkResult) => number;
  /** Acceptance predicate across all arms' per-trial metrics. */
  pass: (arms: { overlay: number[]; off: number[] }) => boolean;
}

// ---------------------------------------------------------------------------
// Validation
// ---------------------------------------------------------------------------

export function validateFixtures(fixtures: OverlayFixture[]): void {
  const ids = new Set<string>();
  for (const f of fixtures) {
    if (!f.id || !/^[a-z0-9-]+$/.test(f.id)) {
      throw new Error(
        `fixture id must be non-empty, lowercase/digits/dash only: ${JSON.stringify(f.id)}`,
      );
    }
    if (ids.has(f.id)) {
      throw new Error(`duplicate fixture id: ${f.id}`);
    }
    ids.add(f.id);

    if (!Number.isInteger(f.trials) || f.trials < 3) {
      throw new Error(`${f.id}: trials must be an integer >= 3 (got ${f.trials})`);
    }
    if (
      f.concurrency !== undefined &&
      (!Number.isInteger(f.concurrency) || f.concurrency < 1)
    ) {
      throw new Error(
        `${f.id}: concurrency must be an integer >= 1 (got ${f.concurrency})`,
      );
    }

    if (!f.model) throw new Error(`${f.id}: model must be non-empty`);
    if (!f.userPrompt) throw new Error(`${f.id}: userPrompt must be non-empty`);

    if (path.isAbsolute(f.overlayPath) || f.overlayPath.includes('..')) {
      throw new Error(
        `${f.id}: overlayPath must be relative and must not contain '..' (got ${f.overlayPath})`,
      );
    }
    const fullPath = path.resolve(REPO_ROOT, f.overlayPath);
    if (!fs.existsSync(fullPath)) {
      throw new Error(`${f.id}: overlay file not found at ${f.overlayPath}`);
    }

    for (const fn of ['setupWorkspace', 'metric', 'pass'] as const) {
      if (typeof f[fn] !== 'function') {
        throw new Error(`${f.id}: ${fn} must be a function`);
      }
    }
  }
}
// ---------------------------------------------------------------------------
// Metric + predicate helpers
// ---------------------------------------------------------------------------

function mean(xs: number[]): number {
  if (xs.length === 0) return 0;
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

/**
 * Standard fanout predicate: overlay mean beats off mean by at least 0.5
 * parallel tool_use blocks in first turn, AND at least 3 of the overlay
 * trials emit >= 2 parallel tool_use blocks.
 *
 * The combined rule catches both "overlay nudges every trial slightly"
 * (mean) and "overlay sometimes triggers real fanout" (floor). A single
 * 0.5 lift with every trial still emitting 1 call would be suspicious;
 * this predicate rejects it.
 */
export function fanoutPass(arms: { overlay: number[]; off: number[] }): boolean {
  const lift = mean(arms.overlay) - mean(arms.off);
  const floorHits = arms.overlay.filter((n) => n >= 2).length;
  return lift >= 0.5 && floorHits >= 3;
}
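For reference, the two gates can be exercised standalone. This is a sketch inlining the same mean/floor logic as fanoutPass; the arrays are illustrative, not measured trial data.

```typescript
// Standalone sketch of the fanoutPass thresholds (same mean/floor logic
// inlined; arrays are made-up illustration data).
const meanOf = (xs: number[]): number =>
  xs.length === 0 ? 0 : xs.reduce((a, b) => a + b, 0) / xs.length;
const passes = (overlay: number[], off: number[]): boolean =>
  meanOf(overlay) - meanOf(off) >= 0.5 &&
  overlay.filter((n) => n >= 2).length >= 3;

// Lift 1.3 with exactly 3 trials at >= 2: both gates clear.
console.log(passes([2, 2, 2, 1, 1, 1, 1, 1, 1, 1], new Array(10).fill(0))); // true
// Lift 1.2 but only 2 trials at >= 2: the floor gate rejects.
console.log(passes([2, 2, 1, 1, 1, 1, 1, 1, 1, 1], new Array(10).fill(0))); // false
```

Note that the floor gate is what makes a uniform "+0.5 everywhere" lift insufficient on its own.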
/**
 * Generic "lower is better" pass predicate: overlay mean should drop the
 * metric by at least 20% vs baseline. Used for nudges like "effort-match"
 * (fewer turns) and "dedicated tools vs Bash" (fewer Bash calls).
 */
export function lowerIsBetter20Pct(arms: { overlay: number[]; off: number[] }): boolean {
  const meanOff = mean(arms.off);
  if (meanOff === 0) return mean(arms.overlay) <= meanOff;
  return mean(arms.overlay) <= meanOff * 0.8;
}

/**
 * Generic "higher is better" pass predicate: overlay mean should lift the
 * metric by at least 20% vs baseline. Used for nudges like "literal
 * interpretation" (more files touched when scope is ambiguous).
 */
export function higherIsBetter20Pct(arms: { overlay: number[]; off: number[] }): boolean {
  const meanOff = mean(arms.off);
  const meanOn = mean(arms.overlay);
  if (meanOff === 0) return meanOn > 0;
  return meanOn >= meanOff * 1.2;
}
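A worked check of the 20% thresholds, with the predicate logic inlined and illustrative numbers: given a baseline mean of 5, lower-is-better requires an overlay mean at or below 4.0, and higher-is-better requires 6.0 or above.

```typescript
// Inlined copies of the 20% predicates above, exercised on illustrative data.
const avg = (xs: number[]): number => xs.reduce((a, b) => a + b, 0) / xs.length;
const lowerOk = (on: number[], off: number[]): boolean =>
  avg(off) === 0 ? avg(on) <= 0 : avg(on) <= avg(off) * 0.8;
const higherOk = (on: number[], off: number[]): boolean =>
  avg(off) === 0 ? avg(on) > 0 : avg(on) >= avg(off) * 1.2;

console.log(lowerOk([4, 4, 4], [5, 5, 5]));       // true: 20% drop
console.log(lowerOk([4.5, 4.5, 4.5], [5, 5, 5])); // false: a 10% drop isn't enough
console.log(higherOk([6, 6, 6], [5, 5, 5]));      // true: 20% lift
```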
// ---------------------------------------------------------------------------
// Metrics
// ---------------------------------------------------------------------------

/**
 * Count the total number of Bash tool_use blocks across ALL assistant turns.
 * Signal for the "dedicated tools over Bash" nudge in claude.md.
 */
export function bashToolCallCount(r: AgentSdkResult): number {
  return r.toolCalls.filter((c) => c.tool === 'Bash').length;
}

/**
 * Total turns the session used to complete. Signal for the "effort-match the
 * step" nudge in opus-4-7.md — trivial prompts should complete quickly.
 */
export function turnsToCompletion(r: AgentSdkResult): number {
  return r.turnsUsed;
}

/**
 * Count of unique files the model edited or wrote. Signal for the "literal
 * interpretation" nudge in opus-4-7.md — "fix the tests" with multiple
 * failures should touch all of them.
 */
export function uniqueFilesEdited(r: AgentSdkResult): number {
  const touched = new Set<string>();
  for (const call of r.toolCalls) {
    if (call.tool === 'Edit' || call.tool === 'Write' || call.tool === 'MultiEdit') {
      const input = call.input as { file_path?: string } | null;
      if (input?.file_path) touched.add(input.file_path);
    }
  }
  return touched.size;
}
// ---------------------------------------------------------------------------
// Fixtures
// ---------------------------------------------------------------------------

export const OVERLAY_FIXTURES: OverlayFixture[] = [
  {
    id: 'opus-4-7-fanout-toy',
    overlayPath: 'model-overlays/opus-4-7.md',
    model: 'claude-opus-4-7',
    trials: 10,
    concurrency: 3,
    setupWorkspace: (dir) => {
      fs.writeFileSync(path.join(dir, 'alpha.txt'), 'Alpha file: used in module A.\n');
      fs.writeFileSync(path.join(dir, 'beta.txt'), 'Beta file: used in module B.\n');
      fs.writeFileSync(path.join(dir, 'gamma.txt'), 'Gamma file: used in module C.\n');
    },
    userPrompt:
      'Read alpha.txt, beta.txt, and gamma.txt and summarize each in one line.',
    metric: (r) => firstTurnParallelism(r.assistantTurns[0]),
    pass: fanoutPass,
  },
  {
    id: 'opus-4-7-fanout-realistic',
    overlayPath: 'model-overlays/opus-4-7.md',
    model: 'claude-opus-4-7',
    trials: 10,
    concurrency: 3,
    setupWorkspace: (dir) => {
      fs.writeFileSync(
        path.join(dir, 'app.ts'),
        "import { config } from './config';\nimport { util } from './src/util';\n\nexport function main() { return config.name + ':' + util(); }\n",
      );
      fs.writeFileSync(
        path.join(dir, 'config.ts'),
        "export const config = { name: 'demo', version: 1 };\n",
      );
      fs.writeFileSync(
        path.join(dir, 'README.md'),
        '# demo project\n\nA small demo. Entry: `app.ts`. Config: `config.ts`.\n',
      );
      fs.mkdirSync(path.join(dir, 'src'), { recursive: true });
      fs.writeFileSync(
        path.join(dir, 'src', 'util.ts'),
        "export function util() { return 'util-result'; }\n",
      );
    },
    userPrompt:
      'Audit this project: read app.ts, config.ts, and README.md, and glob for ' +
      'every .ts file under src/. Summarize what you find in 3 bullet points.',
    metric: (r) => firstTurnParallelism(r.assistantTurns[0]),
    pass: fanoutPass,
  },

  // -------------------------------------------------------------------------
  // claude.md / "Dedicated tools over Bash"
  // -------------------------------------------------------------------------
  {
    id: 'claude-dedicated-tools-vs-bash',
    overlayPath: 'model-overlays/claude.md',
    model: 'claude-opus-4-7',
    trials: 10,
    concurrency: 3,
    direction: 'lower_is_better',
    // 5 files + summary = needs more than the default 5 turns. The SDK throws
    // instead of returning a result when it hits the cap.
    maxTurns: 15,
    setupWorkspace: (dir) => {
      fs.mkdirSync(path.join(dir, 'src'), { recursive: true });
      fs.writeFileSync(path.join(dir, 'src', 'index.ts'), "export const x = 1;\n");
      fs.writeFileSync(path.join(dir, 'src', 'util.ts'), "export function util() { return 42; }\n");
      fs.writeFileSync(path.join(dir, 'src', 'types.ts'), "export type Foo = { a: number };\n");
      fs.writeFileSync(path.join(dir, 'src', 'config.ts'), "export const c = { n: 'demo' };\n");
      fs.writeFileSync(path.join(dir, 'src', 'api.ts'), "export async function fetchFoo() { return null; }\n");
    },
    userPrompt:
      "List every TypeScript file under src/ and tell me what each exports. " +
      "You may use any tools available.",
    // Metric: total Bash tool_use count across the whole session.
    // The overlay says "prefer Read/Glob/Grep over cat/find/grep shell."
    // A model following that should emit Glob + Read, not Bash ls/find/cat.
    metric: bashToolCallCount,
    pass: lowerIsBetter20Pct,
  },

  // -------------------------------------------------------------------------
  // opus-4-7.md / "Effort-match the step"
  // -------------------------------------------------------------------------
  {
    id: 'opus-4-7-effort-match-trivial',
    overlayPath: 'model-overlays/opus-4-7.md',
    model: 'claude-opus-4-7',
    trials: 10,
    concurrency: 3,
    direction: 'lower_is_better',
    maxTurns: 8,
    setupWorkspace: (dir) => {
      fs.writeFileSync(
        path.join(dir, 'config.json'),
        '{"name": "demo", "version": "1.0.0"}\n',
      );
    },
    userPrompt: "What's the version in config.json?",
    // Metric: turns used to reach a result.
    // The overlay says "simple lookups don't need deep reasoning."
    // A trivial prompt like this should complete in 2 turns
    // (Read, respond). Extended deliberation bumps this up.
    metric: turnsToCompletion,
    pass: lowerIsBetter20Pct,
  },

  // -------------------------------------------------------------------------
  // opus-4-7.md / "Literal interpretation awareness"
  // -------------------------------------------------------------------------
  {
    id: 'opus-4-7-literal-interpretation',
    overlayPath: 'model-overlays/opus-4-7.md',
    model: 'claude-opus-4-7',
    trials: 10,
    concurrency: 3,
    direction: 'higher_is_better',
    // Allow Edit and Write so the model can fix the tests.
    allowedTools: ['Read', 'Glob', 'Grep', 'Bash', 'Edit', 'Write'],
    maxTurns: 15,
    setupWorkspace: (dir) => {
      // Three failing test files. The "obvious" one is named first; the
      // other two require the model to enumerate to notice them.
      fs.writeFileSync(
        path.join(dir, 'auth.test.ts'),
        "import { test, expect } from 'bun:test';\n" +
          "test('auth returns true', () => { expect(1).toBe(2); });\n",
      );
      fs.writeFileSync(
        path.join(dir, 'billing.test.ts'),
        "import { test, expect } from 'bun:test';\n" +
          "test('billing computes total', () => { expect('ten').toBe('10'); });\n",
      );
      fs.writeFileSync(
        path.join(dir, 'notifications.test.ts'),
        "import { test, expect } from 'bun:test';\n" +
          "test('notifications fire on save', () => { expect([1,2]).toEqual([1,2,3]); });\n",
      );
      fs.writeFileSync(
        path.join(dir, 'README.md'),
        "# demo\n\nSmall project with three failing test files.\n",
      );
    },
    userPrompt: "Fix the failing tests.",
    // Metric: number of unique test files the model edited/wrote.
    // The overlay says "fix ALL failing tests, not just the obvious one."
    // Overlay-ON should touch all 3 test files. Overlay-OFF might stop
    // at the first one after making it pass.
    metric: uniqueFilesEdited,
    pass: higherIsBetter20Pct,
  },

  // =========================================================================
  // Sonnet 4.6 variants of the Opus-4.7 fixtures.
  //
  // Rationale: /claude.md + /opus-4-7.md overlays measured as no-op or
  // counterproductive on Opus 4.7. Before deleting the whole overlay stack,
  // check whether weaker Claude models (Sonnet, Haiku) benefit from the same
  // nudges. Same overlays, same prompts, same metrics, different model ID.
  // Sonnet is ~4x cheaper than Opus, so these 5 add ~$3 to a run.
  // =========================================================================

  {
    id: 'opus-4-7-fanout-toy-sonnet',
    overlayPath: 'model-overlays/opus-4-7.md',
    model: 'claude-sonnet-4-6',
    trials: 10,
    concurrency: 3,
    setupWorkspace: (dir) => {
      fs.writeFileSync(path.join(dir, 'alpha.txt'), 'Alpha file: used in module A.\n');
      fs.writeFileSync(path.join(dir, 'beta.txt'), 'Beta file: used in module B.\n');
      fs.writeFileSync(path.join(dir, 'gamma.txt'), 'Gamma file: used in module C.\n');
    },
    userPrompt:
      'Read alpha.txt, beta.txt, and gamma.txt and summarize each in one line.',
    metric: (r) => firstTurnParallelism(r.assistantTurns[0]),
    pass: fanoutPass,
  },

  {
    id: 'opus-4-7-fanout-realistic-sonnet',
    overlayPath: 'model-overlays/opus-4-7.md',
    model: 'claude-sonnet-4-6',
    trials: 10,
    concurrency: 3,
    setupWorkspace: (dir) => {
      fs.writeFileSync(
        path.join(dir, 'app.ts'),
        "import { config } from './config';\nimport { util } from './src/util';\n\nexport function main() { return config.name + ':' + util(); }\n",
      );
      fs.writeFileSync(
        path.join(dir, 'config.ts'),
        "export const config = { name: 'demo', version: 1 };\n",
      );
      fs.writeFileSync(
        path.join(dir, 'README.md'),
        '# demo project\n\nA small demo. Entry: `app.ts`. Config: `config.ts`.\n',
      );
      fs.mkdirSync(path.join(dir, 'src'), { recursive: true });
      fs.writeFileSync(
        path.join(dir, 'src', 'util.ts'),
        "export function util() { return 'util-result'; }\n",
      );
    },
    userPrompt:
      'Audit this project: read app.ts, config.ts, and README.md, and glob for ' +
      'every .ts file under src/. Summarize what you find in 3 bullet points.',
    metric: (r) => firstTurnParallelism(r.assistantTurns[0]),
    pass: fanoutPass,
  },

  {
    id: 'claude-dedicated-tools-vs-bash-sonnet',
    overlayPath: 'model-overlays/claude.md',
    model: 'claude-sonnet-4-6',
    trials: 10,
    concurrency: 3,
    direction: 'lower_is_better',
    maxTurns: 15,
    setupWorkspace: (dir) => {
      fs.mkdirSync(path.join(dir, 'src'), { recursive: true });
      fs.writeFileSync(path.join(dir, 'src', 'index.ts'), "export const x = 1;\n");
      fs.writeFileSync(path.join(dir, 'src', 'util.ts'), "export function util() { return 42; }\n");
      fs.writeFileSync(path.join(dir, 'src', 'types.ts'), "export type Foo = { a: number };\n");
      fs.writeFileSync(path.join(dir, 'src', 'config.ts'), "export const c = { n: 'demo' };\n");
      fs.writeFileSync(path.join(dir, 'src', 'api.ts'), "export async function fetchFoo() { return null; }\n");
    },
    userPrompt:
      "List every TypeScript file under src/ and tell me what each exports. " +
      "You may use any tools available.",
    metric: bashToolCallCount,
    pass: lowerIsBetter20Pct,
  },

  {
    id: 'opus-4-7-effort-match-trivial-sonnet',
    overlayPath: 'model-overlays/opus-4-7.md',
    model: 'claude-sonnet-4-6',
    trials: 10,
    concurrency: 3,
    direction: 'lower_is_better',
    maxTurns: 8,
    setupWorkspace: (dir) => {
      fs.writeFileSync(
        path.join(dir, 'config.json'),
        '{"name": "demo", "version": "1.0.0"}\n',
      );
    },
    userPrompt: "What's the version in config.json?",
    metric: turnsToCompletion,
    pass: lowerIsBetter20Pct,
  },

  {
    id: 'opus-4-7-literal-interpretation-sonnet',
    overlayPath: 'model-overlays/opus-4-7.md',
    model: 'claude-sonnet-4-6',
    trials: 10,
    concurrency: 3,
    direction: 'higher_is_better',
    allowedTools: ['Read', 'Glob', 'Grep', 'Bash', 'Edit', 'Write'],
    maxTurns: 15,
    setupWorkspace: (dir) => {
      fs.writeFileSync(
        path.join(dir, 'auth.test.ts'),
        "import { test, expect } from 'bun:test';\n" +
          "test('auth returns true', () => { expect(1).toBe(2); });\n",
      );
      fs.writeFileSync(
        path.join(dir, 'billing.test.ts'),
        "import { test, expect } from 'bun:test';\n" +
          "test('billing computes total', () => { expect('ten').toBe('10'); });\n",
      );
      fs.writeFileSync(
        path.join(dir, 'notifications.test.ts'),
        "import { test, expect } from 'bun:test';\n" +
          "test('notifications fire on save', () => { expect([1,2]).toEqual([1,2,3]); });\n",
      );
      fs.writeFileSync(
        path.join(dir, 'README.md'),
        "# demo\n\nSmall project with three failing test files.\n",
      );
    },
    userPrompt: "Fix the failing tests.",
    metric: uniqueFilesEdited,
    pass: higherIsBetter20Pct,
  },
];

// Validate at module load so a broken fixture fails fast at test startup,
// not mid-run after burning API dollars.
validateFixtures(OVERLAY_FIXTURES);
@@ -0,0 +1,509 @@
/**
 * Claude Agent SDK wrapper for the overlay-efficacy harness.
 *
 * This sits alongside session-runner.ts (which drives `claude -p` as a
 * subprocess) but runs the model via the published @anthropic-ai/claude-agent-sdk
 * instead. The SDK exposes the same harness primitives Claude Code itself uses,
 * so overlay-driven behavior change is measured against a closer approximation
 * of real Claude Code than the `claude -p` subprocess path provides.
 *
 * Explicit design rules (from plan review):
 * - Use SDK-exported SDKMessage types. No `| unknown` union collapse.
 * - Permission surface is explicit: bypassPermissions + settingSources:[] +
 *   disallowedTools inverse. Without these, the SDK inherits user settings,
 *   project .claude/, and local hooks, and arms are no longer comparable.
 * - Binary pinning via pathToClaudeCodeExecutable. Resolve with `which claude`
 *   at setup time; the SDK would otherwise use its bundled binary.
 * - 3-shape rate-limit detection: thrown error, result-message error subtype,
 *   mid-stream SDKRateLimitEvent. All three recover on retry.
 * - On retry, the caller resets the workspace via a setupWorkspace callback so
 *   partial Bash side effects don't contaminate the next attempt.
 * - A process-level semaphore caps concurrent queries across all callers in
 *   the same bun-test process. Composes with bun's own --concurrent flag.
 */

import {
  query,
  type SDKMessage,
  type SDKAssistantMessage,
  type SDKResultMessage,
  type SDKSystemMessage,
  type PermissionMode,
  type SettingSource,
  type Options,
} from '@anthropic-ai/claude-agent-sdk';
import * as fs from 'fs';
import * as path from 'path';
import { execSync } from 'child_process';
import type { SkillTestResult } from './session-runner';

// ---------------------------------------------------------------------------
// Types
// ---------------------------------------------------------------------------

export interface AgentSdkResult {
  /** Full raw event stream for forensic recovery. */
  events: SDKMessage[];
  /** Assistant-typed subset, in order. */
  assistantTurns: SDKAssistantMessage[];
  /** Flat tool-call list, in order of emission. */
  toolCalls: Array<{ tool: string; input: unknown; output: string }>;
  /** Concatenated assistant text, newline-joined. */
  output: string;
  /** 'success' | 'error_during_execution' | 'error_max_turns' | ... */
  exitReason: string;
  turnsUsed: number;
  durationMs: number;
  firstResponseMs: number;
  maxInterTurnMs: number;
  costUsd: number;
  model: string;
  sdkVersion: string;
  /** claude_code_version from the SDK's system/init event (authoritative). */
  sdkClaudeCodeVersion: string;
  /** Path to the claude binary we pinned. */
  resolvedBinaryPath: string;
  /** browse-error pattern scan for SkillTestResult parity. Always empty here. */
  browseErrors: string[];
}
/** Signature matching `query()` from the SDK. DI hook for unit tests. */
export type QueryProvider = typeof query;

/** Subset of SDK Options['systemPrompt'] we support. */
export type SystemPromptOption =
  | string
  | { type: 'preset'; preset: 'claude_code'; append?: string; excludeDynamicSections?: boolean };

export interface RunAgentSdkOptions {
  /**
   * System prompt surface.
   * - bare string "" -> omit entirely (SDK default: no system prompt)
   * - bare string "...text..." -> REPLACE the default with the given text (use sparingly)
   * - { type:'preset', preset:'claude_code' } -> use the Claude Code default
   * - { type:'preset', preset:'claude_code', append: "..." } -> default + append
   *
   * For overlay-efficacy measurement, the preset+append pattern is the right
   * one: it measures "does adding overlay text to the REAL Claude Code system
   * prompt change behavior" rather than "does the overlay alone (stripped of
   * base scaffolding) change behavior".
   */
  systemPrompt: SystemPromptOption;
  userPrompt: string;
  workingDirectory: string;
  model?: string;
  maxTurns?: number;
  allowedTools?: string[];
  disallowedTools?: string[];
  permissionMode?: PermissionMode;
  settingSources?: SettingSource[];
  env?: Record<string, string>;
  pathToClaudeCodeExecutable?: string;
  testName?: string;
  runId?: string;
  fixtureId?: string;
  queryProvider?: QueryProvider;
  /** Max 429 retries per call. Default 3. */
  maxRetries?: number;
  /**
   * The caller provides this when a retry should reset the workspace. The
   * harness invokes it with a fresh dir after a rate-limit failure. When
   * omitted, retries reuse the original workingDirectory (fine for
   * read-only tests).
   */
  onRetry?: (freshDir: string) => void;
}

export class RateLimitExhaustedError extends Error {
  readonly attempts: number;
  constructor(attempts: number, cause?: unknown) {
    super(`rate limit exhausted after ${attempts} attempts`);
    this.name = 'RateLimitExhaustedError';
    this.attempts = attempts;
    if (cause !== undefined) (this as { cause?: unknown }).cause = cause;
  }
}
// ---------------------------------------------------------------------------
// Process-level semaphore for API concurrency
// ---------------------------------------------------------------------------

/**
 * Bounded token bucket. Shared across all runAgentSdkTest calls in this
 * process so that bun's --concurrent flag does not compound with in-test
 * concurrency to blow past Anthropic's rate limits.
 *
 * Default capacity 3. Override via the GSTACK_SDK_MAX_CONCURRENCY env var.
 */
class Semaphore {
  private available: number;
  private readonly queue: Array<() => void> = [];
  constructor(capacity: number) {
    this.available = capacity;
  }
  async acquire(): Promise<void> {
    if (this.available > 0) {
      this.available--;
      return;
    }
    await new Promise<void>((resolve) => this.queue.push(resolve));
  }
  release(): void {
    const next = this.queue.shift();
    if (next) {
      next();
    } else {
      this.available++;
    }
  }
  /**
   * For tests. The class does not track token holders directly, so this
   * approximates load by returning the number of queued waiters.
   */
  inFlight(): number {
    return this.queue.length;
  }
}
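A usage sketch of the acquire/release discipline (trimmed re-declaration of the class above; `withToken` is a hypothetical helper, not part of this module). The point is that release lives in `finally`, so a throwing task can never leak a token.

```typescript
// Trimmed re-declaration of the Semaphore above, plus a hypothetical
// `withToken` wrapper illustrating the acquire/release pairing.
class Sem {
  private available: number;
  private readonly queue: Array<() => void> = [];
  constructor(capacity: number) {
    this.available = capacity;
  }
  async acquire(): Promise<void> {
    if (this.available > 0) {
      this.available--;
      return;
    }
    await new Promise<void>((resolve) => this.queue.push(resolve));
  }
  release(): void {
    const next = this.queue.shift();
    if (next) next();
    else this.available++;
  }
}

// Every acquire pairs with a release in `finally`: even if `task` throws,
// the token returns to the pool and queued waiters still get served.
async function withToken<T>(sem: Sem, task: () => Promise<T>): Promise<T> {
  await sem.acquire();
  try {
    return await task();
  } finally {
    sem.release();
  }
}
```

Note that release hands the token straight to the next queued waiter instead of incrementing `available`, which is why the waiter does not decrement it on wake-up.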
const DEFAULT_SDK_CONCURRENCY = Number(process.env.GSTACK_SDK_MAX_CONCURRENCY ?? 3);
let _apiSemaphore: Semaphore | null = null;
function getApiSemaphore(): Semaphore {
  if (!_apiSemaphore) _apiSemaphore = new Semaphore(DEFAULT_SDK_CONCURRENCY);
  return _apiSemaphore;
}

/** Test-only. Resets the process-level semaphore. */
export function __resetSemaphoreForTests(capacity: number): void {
  _apiSemaphore = new Semaphore(capacity);
}

// ---------------------------------------------------------------------------
// Rate-limit detection
// ---------------------------------------------------------------------------

/** True if `err` looks like a rate-limit error thrown from the SDK. */
export function isRateLimitThrown(err: unknown): boolean {
  if (!err || typeof err !== 'object') return false;
  const msg = (err as { message?: string }).message ?? '';
  const name = (err as { name?: string }).name ?? '';
  const status = (err as { status?: number }).status;
  return (
    status === 429 ||
    /rate.?limit|429|too many requests/i.test(msg) ||
    /RateLimit/i.test(name)
  );
}
/** True if a SDKResultMessage is a rate-limit-shaped error. */
|
||||
export function isRateLimitResult(msg: SDKMessage): boolean {
|
||||
if (msg.type !== 'result') return false;
|
||||
const r = msg as SDKResultMessage;
|
||||
if (r.subtype === 'success') return false;
|
||||
// subtype === 'error_during_execution' | 'error_max_turns' | 'error_max_budget_usd' | ...
|
||||
if (r.subtype !== 'error_during_execution') return false;
|
||||
const errs = (r as { errors?: string[] }).errors ?? [];
|
||||
return errs.some((e) => /rate.?limit|429|too many requests/i.test(e));
|
||||
}
|
||||
|
||||
/** True if a mid-stream SDKRateLimitEvent indicates a blocking rate-limit. */
export function isRateLimitEvent(msg: SDKMessage): boolean {
  if (msg.type !== 'rate_limit_event') return false;
  const info = (msg as { rate_limit_info?: { status?: string } }).rate_limit_info;
  return info?.status === 'rejected';
}

/**
 * True if `err` is the SDK's "max turns reached" throw. Some SDK versions
 * raise this as an exception from the generator instead of emitting a
 * result message with subtype='error_max_turns'. We treat it as terminal-
 * but-recoverable: record what we collected and continue, rather than
 * failing the whole run.
 */
export function isMaxTurnsError(err: unknown): boolean {
  if (!err || typeof err !== 'object') return false;
  const msg = (err as { message?: string }).message ?? '';
  return /reached maximum number of turns|max.?turns/i.test(msg);
}

// ---------------------------------------------------------------------------
// Version resolution (cached)
// ---------------------------------------------------------------------------

let _sdkVersionCache: string | null = null;
function resolveSdkVersion(): string {
  if (_sdkVersionCache) return _sdkVersionCache;
  try {
    const pkgPath = require.resolve('@anthropic-ai/claude-agent-sdk/package.json');
    const pkg = JSON.parse(fs.readFileSync(pkgPath, 'utf-8')) as { version?: string };
    _sdkVersionCache = pkg.version ?? 'unknown';
  } catch {
    _sdkVersionCache = 'unknown';
  }
  return _sdkVersionCache;
}

export function resolveClaudeBinary(): string | null {
  try {
    return execSync('which claude', { encoding: 'utf-8' }).trim() || null;
  } catch {
    return null;
  }
}

// ---------------------------------------------------------------------------
// Main runner
// ---------------------------------------------------------------------------

/**
 * Execute a single SDK query with retries. Returns a typed result.
 *
 * The retry loop treats 429 as recoverable and any other error as fatal.
 * Exponential backoff: 1s, 2s, 4s. After maxRetries failures, throws
 * RateLimitExhaustedError so the caller can decide what to do with the run.
 */
export async function runAgentSdkTest(
  opts: RunAgentSdkOptions,
): Promise<AgentSdkResult> {
  const sem = getApiSemaphore();
  const maxRetries = opts.maxRetries ?? 3;
  const queryImpl: QueryProvider = opts.queryProvider ?? query;
  const model = opts.model ?? 'claude-opus-4-7';

  let attempt = 0;
  let lastErr: unknown = null;

  while (attempt <= maxRetries) {
    await sem.acquire();
    const startMs = Date.now();

    // Hoisted so the max-turns catch branch can synthesize a result from
    // whatever we captured before the SDK threw.
    const events: SDKMessage[] = [];
    const assistantTurns: SDKAssistantMessage[] = [];
    const toolCalls: Array<{ tool: string; input: unknown; output: string }> = [];
    const assistantTextParts: string[] = [];
    let firstResponseMs = 0;
    let lastEventMs = startMs;
    let maxInterTurnMs = 0;
    let systemInitVersion = 'unknown';
    let rateLimited: unknown = null;
    let terminalResult: SDKResultMessage | null = null;

    try {
      const sdkOpts: Options = {
        model,
        cwd: opts.workingDirectory,
        maxTurns: opts.maxTurns ?? 5,
        disallowedTools: opts.disallowedTools,
        allowedTools: opts.allowedTools ?? ['Read', 'Glob', 'Grep', 'Bash'],
        permissionMode: opts.permissionMode ?? 'bypassPermissions',
        allowDangerouslySkipPermissions:
          (opts.permissionMode ?? 'bypassPermissions') === 'bypassPermissions',
        settingSources: opts.settingSources ?? [],
        env: opts.env,
        pathToClaudeCodeExecutable: opts.pathToClaudeCodeExecutable,
      };
      // An empty string means "omit entirely" (SDK runs with no override).
      // Any object or non-empty string is passed through.
      if (opts.systemPrompt !== undefined && opts.systemPrompt !== '') {
        sdkOpts.systemPrompt = opts.systemPrompt;
      }

      const q = queryImpl({
        prompt: opts.userPrompt,
        options: sdkOpts,
      });

      for await (const ev of q) {
        const now = Date.now();
        if (firstResponseMs === 0) firstResponseMs = now - startMs;
        const interTurn = now - lastEventMs;
        if (interTurn > maxInterTurnMs) maxInterTurnMs = interTurn;
        lastEventMs = now;

        events.push(ev);

        if (ev.type === 'system' && (ev as SDKSystemMessage).subtype === 'init') {
          systemInitVersion =
            (ev as SDKSystemMessage).claude_code_version ?? 'unknown';
        } else if (ev.type === 'assistant') {
          const am = ev as SDKAssistantMessage;
          assistantTurns.push(am);
          const content = am.message?.content;
          if (Array.isArray(content)) {
            for (const block of content as Array<
              | { type: 'text'; text?: string }
              | { type: 'tool_use'; name?: string; input?: unknown }
              | { type: string }
            >) {
              if (block.type === 'text') {
                const t = (block as { text?: string }).text;
                if (t) assistantTextParts.push(t);
              } else if (block.type === 'tool_use') {
                const tb = block as { name?: string; input?: unknown };
                toolCalls.push({
                  tool: tb.name ?? 'unknown',
                  input: tb.input ?? {},
                  output: '',
                });
              }
            }
          }
        } else if (isRateLimitEvent(ev)) {
          rateLimited = new Error(
            `mid-stream rate limit: ${JSON.stringify(
              (ev as { rate_limit_info?: unknown }).rate_limit_info,
            )}`,
          );
        } else if (ev.type === 'result') {
          terminalResult = ev as SDKResultMessage;
          if (isRateLimitResult(ev)) {
            rateLimited = new Error(
              `result-message rate limit: ${((ev as { errors?: string[] }).errors ?? []).join('; ')}`,
            );
          }
        }
      }

      if (rateLimited) {
        throw rateLimited;
      }
      if (!terminalResult) {
        throw new Error('query stream ended without a result event');
      }

      const durationMs = Date.now() - startMs;
      const costUsd =
        (terminalResult as { total_cost_usd?: number }).total_cost_usd ?? 0;
      const turnsUsed =
        (terminalResult as { num_turns?: number }).num_turns ??
        assistantTurns.length;
      const exitReason =
        (terminalResult as { subtype?: string }).subtype ?? 'unknown';

      return {
        events,
        assistantTurns,
        toolCalls,
        output: assistantTextParts.join('\n'),
        exitReason,
        turnsUsed,
        durationMs,
        firstResponseMs,
        maxInterTurnMs,
        costUsd,
        model,
        sdkVersion: resolveSdkVersion(),
        sdkClaudeCodeVersion: systemInitVersion,
        resolvedBinaryPath: opts.pathToClaudeCodeExecutable ?? 'sdk-default',
        browseErrors: [],
      };
    } catch (err) {
      lastErr = err;

      // "Max turns reached" is the SDK's way of saying "this session ran
      // out of turns." It's thrown from the generator instead of emitted
      // as a result message. Treat as a successful-but-capped trial: the
      // assistant turns we collected are real and carry a metric. Record
      // them with exitReason='error_max_turns' rather than failing the
      // whole run.
      if (isMaxTurnsError(err)) {
        const durationMs = Date.now() - startMs;
        return {
          events,
          assistantTurns,
          toolCalls,
          output: assistantTextParts.join('\n'),
          exitReason: 'error_max_turns',
          turnsUsed: assistantTurns.length,
          durationMs,
          firstResponseMs,
          maxInterTurnMs,
          costUsd: 0, // unknown from thrown-error path
          model,
          sdkVersion: resolveSdkVersion(),
          sdkClaudeCodeVersion: systemInitVersion,
          resolvedBinaryPath: opts.pathToClaudeCodeExecutable ?? 'sdk-default',
          browseErrors: [],
        };
      }

      const isRetryable = isRateLimitThrown(err);
      if (!isRetryable || attempt >= maxRetries) {
        if (isRetryable) {
          throw new RateLimitExhaustedError(attempt + 1, err);
        }
        throw err;
      }
      attempt++;
      // backoff: 1s, 2s, 4s
      await new Promise((r) => setTimeout(r, 1000 * Math.pow(2, attempt - 1)));
      // Let caller reset workspace since prior attempt may have partially
      // mutated files via Bash.
      if (opts.onRetry) {
        opts.onRetry(opts.workingDirectory);
      }
    } finally {
      sem.release();
    }
  }

  throw new RateLimitExhaustedError(attempt + 1, lastErr);
}

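The retry loop's backoff arithmetic can be checked in isolation. A minimal sketch (hypothetical helper name; the runner inlines this expression):

```typescript
// Delay before retry attempt k (1-based): 1s, 2s, 4s, ...
// Mirrors the runner's `1000 * Math.pow(2, attempt - 1)` expression.
function backoffDelayMs(attempt: number): number {
  return 1000 * Math.pow(2, attempt - 1);
}

const schedule = [1, 2, 3].map(backoffDelayMs);
console.log(schedule); // [1000, 2000, 4000]
```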
// ---------------------------------------------------------------------------
// Legacy shape mapper
// ---------------------------------------------------------------------------

/**
 * Adapt AgentSdkResult to the legacy SkillTestResult shape so helpers that
 * expect the old `claude -p` output (extractToolSummary, etc) work unchanged.
 */
export function toSkillTestResult(r: AgentSdkResult): SkillTestResult {
  // Cost estimate: use SDK's authoritative cost; back-compute chars.
  // session-runner.ts:30 requires inputChars/outputChars/estimatedTokens.
  // These are rough; real consumers of CostEstimate use cost + turns.
  const outputChars = r.output.length;
  const inputChars = 0; // unknown from SDK path; not used for pass/fail
  const estimatedTokens = Math.round((inputChars + outputChars) / 4);

  // Build a flat transcript list mimicking the NDJSON shape:
  // parseNDJSON emits [{ type: 'assistant', message: {...} }, ...].
  // The SDK's events already match that shape, so pass them through directly.
  const transcript: unknown[] = r.events.slice();

  return {
    toolCalls: r.toolCalls,
    browseErrors: r.browseErrors,
    exitReason: r.exitReason,
    duration: r.durationMs,
    output: r.output,
    costEstimate: {
      inputChars,
      outputChars,
      estimatedTokens,
      estimatedCost: r.costUsd,
      turnsUsed: r.turnsUsed,
    },
    transcript,
    model: r.model,
    firstResponseMs: r.firstResponseMs,
    maxInterTurnMs: r.maxInterTurnMs,
  };
}

// ---------------------------------------------------------------------------
// Metric helpers (re-exported for fixtures)
// ---------------------------------------------------------------------------

/**
 * Count `tool_use` blocks in the first assistant turn of an SDK result.
 * Returns 0 if there is no first turn or no content array.
 *
 * This is the core "fanout" metric. A turn with N tool_use blocks = N
 * parallel tool invocations.
 */
export function firstTurnParallelism(firstTurn: SDKAssistantMessage | undefined): number {
  if (!firstTurn) return 0;
  const content = firstTurn.message?.content;
  if (!Array.isArray(content)) return 0;
  return (content as Array<{ type: string }>).filter((b) => b.type === 'tool_use').length;
}
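A quick sketch of how this metric reads a turn, using a hand-built content array in the SDK's block shape (illustrative literal, not a real SDK payload):

```typescript
// Count tool_use blocks in one assistant turn — the same filter
// firstTurnParallelism applies to the SDK's content array.
type Block = { type: string };

function countToolUse(content: Block[]): number {
  return content.filter((b) => b.type === 'tool_use').length;
}

const firstTurnContent: Block[] = [
  { type: 'text' },     // "I'll read all three files in parallel."
  { type: 'tool_use' }, // Read file A
  { type: 'tool_use' }, // Read file B
  { type: 'tool_use' }, // Read file C
];
console.log(countToolUse(firstTurnContent)); // 3 — a fanout of 3
```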
@@ -239,6 +239,24 @@ export const E2E_TOUCHFILES: Record<string, string[]> = {
    ['model-overlays/claude.md', 'model-overlays/opus-4-7.md', 'scripts/models.ts', 'scripts/resolvers/model-overlay.ts'],
  'fanout-arm-overlay-off':
    ['model-overlays/claude.md', 'model-overlays/opus-4-7.md', 'scripts/models.ts', 'scripts/resolvers/model-overlay.ts'],

  // Overlay efficacy harness (SDK) — measures whether overlay nudges change
  // behavior under @anthropic-ai/claude-agent-sdk (closer to real Claude Code
  // than `claude -p`). testNames in the file are template literals so the
  // completeness scanner doesn't require them; these entries exist for
  // diff-based selection accuracy.
  'overlay-harness-opus-4-7-fanout-toy': [
    'model-overlays/**',
    'test/fixtures/overlay-nudges.ts',
    'test/helpers/agent-sdk-runner.ts',
    'scripts/resolvers/model-overlay.ts',
  ],
  'overlay-harness-opus-4-7-fanout-realistic': [
    'model-overlays/**',
    'test/fixtures/overlay-nudges.ts',
    'test/helpers/agent-sdk-runner.ts',
    'scripts/resolvers/model-overlay.ts',
  ],
};

/**
@@ -430,6 +448,10 @@ export const E2E_TIERS: Record<string, 'gate' | 'periodic'> = {
  // Opus 4.7 overlay evals — periodic (non-deterministic LLM behavior + Opus cost)
  'fanout-arm-overlay-on': 'periodic',
  'fanout-arm-overlay-off': 'periodic',

  // Overlay efficacy harness (SDK, paid) — periodic only
  'overlay-harness-opus-4-7-fanout-toy': 'periodic',
  'overlay-harness-opus-4-7-fanout-realistic': 'periodic',
};

/**

@@ -0,0 +1,320 @@
/**
 * Overlay-efficacy harness (periodic tier, paid).
 *
 * Measures whether a model-specific overlay nudge actually changes model
 * behavior when run through the real Claude Agent SDK — the harness
 * Claude Code itself is built on. This complements test/skill-e2e-opus-47.test.ts,
 * which measures the same thing via a `claude -p` subprocess (a different
 * harness with different prompt composition).
 *
 * For each fixture in test/fixtures/overlay-nudges.ts, runs two arms at
 * `fixture.trials` trials per arm with bounded concurrency:
 *   - overlay-on: SDK systemPrompt = Claude Code preset + overlay text appended
 *   - overlay-off: SDK systemPrompt = Claude Code preset alone
 *
 * Both arms have no CLAUDE.md, no skills directory, no setting-source
 * inheritance (settingSources: []). The only variable between the arms
 * is the overlay text.
 *
 * Budget ~$20 per run at 40 trials (2 fixtures × 2 arms × 10 trials).
 * Gated by EVALS=1 AND EVALS_TIER=periodic. Never runs under test:gate.
 */

import { describe, test, expect, afterAll } from 'bun:test';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';
import {
  runAgentSdkTest,
  resolveClaudeBinary,
  type AgentSdkResult,
  type SystemPromptOption,
} from './helpers/agent-sdk-runner';
import { EvalCollector, getProjectEvalDir } from './helpers/eval-store';
import {
  OVERLAY_FIXTURES,
  type OverlayFixture,
} from './fixtures/overlay-nudges';
import { readOverlay } from '../scripts/resolvers/model-overlay';

const evalsEnabled = !!process.env.EVALS;
const periodicTier = process.env.EVALS_TIER === 'periodic';
const shouldRun = evalsEnabled && periodicTier;

const describeE2E = shouldRun ? describe : describe.skip;
// EvalCollector's tier must be 'e2e' | 'llm-judge' per its type signature.
// The existing paid evals violate this by passing descriptive names like
// 'e2e-opus-47' — a pre-existing pattern that only works because bun-test
// runs without strict typechecking. We stay conforming here.
const evalCollector = shouldRun ? new EvalCollector('e2e') : null;

const REPO_ROOT = path.resolve(import.meta.dir, '..');
const runId = new Date()
  .toISOString()
  .replace(/[:.]/g, '')
  .replace('T', '-')
  .slice(0, 15);
const TRANSCRIPTS_DIR = path.join(
  path.dirname(getProjectEvalDir()),
  'transcripts',
  `overlay-harness-${runId}`,
);

// ---------------------------------------------------------------------------
// Per-arm helpers
// ---------------------------------------------------------------------------

type Arm = 'overlay-on' | 'overlay-off';

function mkTrialDir(fixtureId: string, arm: Arm, n: number): string {
  const dir = fs.mkdtempSync(
    path.join(os.tmpdir(), `overlay-harness-${fixtureId}-${arm}-${n}-`),
  );
  return dir;
}

function saveRawTranscript(
  fixtureId: string,
  arm: Arm,
  n: number,
  result: AgentSdkResult,
): void {
  fs.mkdirSync(TRANSCRIPTS_DIR, { recursive: true });
  const out = path.join(TRANSCRIPTS_DIR, `${fixtureId}-${arm}-${n}.jsonl`);
  const lines = result.events.map((e) => JSON.stringify(e));
  fs.writeFileSync(out, lines.join('\n') + '\n');
}

function overlayContentFor(fixture: OverlayFixture): string {
  const family = path.basename(fixture.overlayPath, '.md');
  const resolved = readOverlay(family);
  if (!resolved) {
    throw new Error(
      `fixture ${fixture.id}: resolver returned empty content for ${family}`,
    );
  }
  return resolved;
}

// ---------------------------------------------------------------------------
// Per-fixture runner
// ---------------------------------------------------------------------------

interface ArmResult {
  metrics: number[];
  costs: number[];
  durations: number[];
  rateLimitExhausted: number;
  sdkClaudeCodeVersions: Set<string>;
}

async function runArm(
  fixture: OverlayFixture,
  arm: Arm,
  systemPrompt: SystemPromptOption,
  claudeBinary: string | null,
): Promise<ArmResult> {
  const result: ArmResult = {
    metrics: [],
    costs: [],
    durations: [],
    rateLimitExhausted: 0,
    sdkClaudeCodeVersions: new Set(),
  };

  const trials = fixture.trials;
  const concurrency = fixture.concurrency ?? 3;

  // Simple bounded executor: run trials in chunks of `concurrency`.
  // The process-level semaphore in agent-sdk-runner.ts enforces the true cap.
  let nextTrial = 0;
  const workers = Array.from({ length: concurrency }, async () => {
    while (true) {
      const n = nextTrial++;
      if (n >= trials) return;

      const dir = mkTrialDir(fixture.id, arm, n);
      fixture.setupWorkspace(dir);
      try {
        const sdkResult = await runAgentSdkTest({
          systemPrompt,
          userPrompt: fixture.userPrompt,
          workingDirectory: dir,
          model: fixture.model,
          maxTurns: fixture.maxTurns ?? 5,
          allowedTools: fixture.allowedTools ?? ['Read', 'Glob', 'Grep', 'Bash'],
          permissionMode: 'bypassPermissions',
          settingSources: [],
          env: { ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY ?? '' },
          pathToClaudeCodeExecutable: claudeBinary ?? undefined,
          testName: `${fixture.id}-${arm}-${n}`,
          runId,
          fixtureId: fixture.id,
          onRetry: (_) => {
            // Reset the workspace before the retry so partial Bash side effects
            // from the failed attempt don't contaminate.
            fs.rmSync(dir, { recursive: true, force: true });
            fs.mkdirSync(dir, { recursive: true });
            fixture.setupWorkspace(dir);
          },
        });

        saveRawTranscript(fixture.id, arm, n, sdkResult);

        const metric = fixture.metric(sdkResult);
        result.metrics.push(metric);
        result.costs.push(sdkResult.costUsd);
        result.durations.push(sdkResult.durationMs);
        result.sdkClaudeCodeVersions.add(sdkResult.sdkClaudeCodeVersion);

        evalCollector?.addTest({
          name: `${fixture.id}-${arm}-${n}`,
          suite: 'overlay-harness',
          tier: 'e2e',
          passed: true,
          duration_ms: sdkResult.durationMs,
          cost_usd: sdkResult.costUsd,
          transcript: sdkResult.events,
          prompt: fixture.userPrompt,
          output: sdkResult.output,
          turns_used: sdkResult.turnsUsed,
          browse_errors: sdkResult.browseErrors,
          exit_reason: sdkResult.exitReason,
          model: sdkResult.model,
          first_response_ms: sdkResult.firstResponseMs,
          max_inter_turn_ms: sdkResult.maxInterTurnMs,
        });
      } catch (err) {
        if (err instanceof Error && err.name === 'RateLimitExhaustedError') {
          result.rateLimitExhausted++;
          // Record a failed trial so the collector captures the attempt.
          evalCollector?.addTest({
            name: `${fixture.id}-${arm}-${n}`,
            suite: 'overlay-harness',
            tier: 'e2e',
            passed: false,
            duration_ms: 0,
            cost_usd: 0,
            exit_reason: 'rate_limit_exhausted',
            error: err.message,
          });
        } else {
          throw err;
        }
      } finally {
        try {
          fs.rmSync(dir, { recursive: true, force: true });
        } catch {
          // best-effort cleanup
        }
      }
    }
  });

  await Promise.all(workers);
  return result;
}
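The shared-counter worker pool above generalizes. A self-contained sketch (illustrative names, not part of the harness) showing N workers draining a counter so at most `concurrency` tasks are in flight:

```typescript
// Run `total` async tasks with at most `concurrency` in flight,
// using the same shared-counter pattern as runArm's workers.
async function runBounded<T>(
  total: number,
  concurrency: number,
  task: (n: number) => Promise<T>,
): Promise<T[]> {
  const results: T[] = new Array(total);
  let next = 0;
  const workers = Array.from({ length: concurrency }, async () => {
    while (true) {
      const n = next++; // safe: single-threaded event loop, no await before this read
      if (n >= total) return;
      results[n] = await task(n);
    }
  });
  await Promise.all(workers);
  return results;
}

// Example: square 0..4 with at most 2 concurrent tasks.
runBounded(5, 2, async (n) => n * n).then((r) => console.log(r)); // [0, 1, 4, 9, 16]
```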

function mean(xs: number[]): number {
  if (xs.length === 0) return 0;
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function sum(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0);
}

// ---------------------------------------------------------------------------
// Test bodies
// ---------------------------------------------------------------------------

describeE2E('overlay efficacy harness (SDK)', () => {
  // Resolve binary once
  const claudeBinary = resolveClaudeBinary();

  if (!claudeBinary) {
    test.skip(
      'no local `claude` binary on PATH — cannot pin for harness parity',
      () => {},
    );
    return;
  }

  for (const fixture of OVERLAY_FIXTURES) {
    test(
      `${fixture.id}: overlay-ON vs overlay-OFF, N=${fixture.trials} per arm`,
      async () => {
        const overlayText = overlayContentFor(fixture);
        expect(overlayText.length).toBeGreaterThan(100);

        // Arm composition: both arms use the real Claude Code default system
        // prompt (preset). Overlay-ON APPENDS the overlay text; overlay-OFF
        // uses the default alone. This measures the overlay's marginal effect
        // ON TOP of Claude Code's normal behavioral scaffolding — which is
        // the only measurement that matches how real Claude Code composes
        // overlays into its system prompt stack.
        const [onArm, offArm] = await Promise.all([
          runArm(
            fixture,
            'overlay-on',
            { type: 'preset', preset: 'claude_code', append: overlayText },
            claudeBinary,
          ),
          runArm(
            fixture,
            'overlay-off',
            { type: 'preset', preset: 'claude_code' },
            claudeBinary,
          ),
        ]);

        const arms = {
          overlay: onArm.metrics,
          off: offArm.metrics,
        };

        const meanOn = mean(arms.overlay);
        const meanOff = mean(arms.off);
        const lift = meanOn - meanOff;
        const floorHits = arms.overlay.filter((n) => n >= 2).length;
        const totalCost = sum(onArm.costs) + sum(offArm.costs);
        const versionSet = new Set([
          ...onArm.sdkClaudeCodeVersions,
          ...offArm.sdkClaudeCodeVersions,
        ]);

        // Loud output for the next person reading the eval JSON:
        // eslint-disable-next-line no-console
        console.log(
          `\n[${fixture.id}]\n` +
            `  binary: ${claudeBinary}\n` +
            `  claude_code_version(s): ${[...versionSet].join(', ')}\n` +
            `  overlay-ON metrics:  [${arms.overlay.join(', ')}] mean=${meanOn.toFixed(2)}\n` +
            `  overlay-OFF metrics: [${arms.off.join(', ')}] mean=${meanOff.toFixed(2)}\n` +
            `  lift: ${lift.toFixed(2)}  floor_hits(>=2): ${floorHits}/${fixture.trials}\n` +
            `  rate_limit_exhausted: on=${onArm.rateLimitExhausted} off=${offArm.rateLimitExhausted}\n` +
            `  total_cost_usd: $${totalCost.toFixed(4)}\n` +
            `  transcripts: ${TRANSCRIPTS_DIR}`,
        );

        // Demand enough trials actually completed to make the assertion
        // meaningful. If rate-limit exhaustion took out more than half of an
        // arm, fail loudly rather than pass/fail on a fragment.
        const minTrials = Math.ceil(fixture.trials / 2);
        expect(arms.overlay.length).toBeGreaterThanOrEqual(minTrials);
        expect(arms.off.length).toBeGreaterThanOrEqual(minTrials);

        expect(fixture.pass(arms)).toBe(true);
      },
      30 * 60 * 1000, // 30 minute timeout per fixture
    );
  }
});

afterAll(async () => {
  if (evalCollector) {
    const filepath = await evalCollector.finalize();
    // eslint-disable-next-line no-console
    console.log(`\n[overlay-harness] eval results: ${filepath}`);
  }
});