mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-02 03:35:09 +02:00
e3d7f49c74
* refactor: export readOverlay from model-overlay resolver

  Needed by the overlay-efficacy eval harness to resolve INHERIT directives
  without going through generateModelOverlay's full TemplateContext.

* chore: add @anthropic-ai/claude-agent-sdk@0.2.117 dep

  Pinned to an exact version for SDK event-shape stability. Used by the
  overlay-efficacy harness to drive the model through a closer-to-real
  Claude Code harness than `claude -p`.

* feat(preflight): sanity check for agent-sdk + overlay resolver

  Verifies: the SDK loads, claude-opus-4-7 is a live API model, the
  SDKMessage event shape matches assumptions, and readOverlay resolves
  INHERIT directives and includes expected content. Run with
  `bun run scripts/preflight-agent-sdk.ts`. PREFLIGHT OK on first run,
  $0.013 API spend.

* feat(eval): parametric overlay-efficacy harness (runner + fixtures)

  `test/helpers/agent-sdk-runner.ts` wraps @anthropic-ai/claude-agent-sdk
  with explicit `AgentSdkResult` types, a process-level API concurrency
  semaphore, and 3-shape 429 retry (thrown error, result-message error,
  mid-stream SDKRateLimitEvent). Pins the local claude binary via
  `pathToClaudeCodeExecutable`.

  `test/fixtures/overlay-nudges.ts` holds the typed registry. Two fixtures
  for the first measurement: `opus-4-7-fanout-toy` (3-file read) and
  `opus-4-7-fanout-realistic` (mixed-tool audit). A strict validator rejects
  duplicate ids, non-integer trials, unsafe overlay paths, non-safe id
  chars, and missing overlay files at module load. Adding a future overlay
  nudge eval = one fixture entry.

* test(eval): unit tests for agent-sdk-runner (36 tests, free tier)

  A stub `queryProvider` feeds hand-crafted SDKMessage streams. Covers:
  happy-path shape, all 3 rate-limit shapes + retry, workspace reset on
  retry, persistent 429 -> `RateLimitExhaustedError`, non-429 propagation,
  process-level concurrency cap, options propagation, artifact path
  uniqueness, cost/turn mapping, and every validator rejection case.
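The 3-shape 429 classification described above can be sketched roughly as follows. This is an illustrative sketch only: the names and exact field checks are assumptions, not the repo's actual implementation in `test/helpers/agent-sdk-runner.ts`.

```typescript
// Illustrative sketch of three-way 429 detection; field checks are assumptions.
type SdkEventLike = {
  type: string;
  subtype?: string;
  errors?: string[];
  rate_limit_info?: { status?: string };
};

// Shape 1: the query generator throws a 429-flavored error.
export function looksLikeThrown429(err: unknown): boolean {
  if (!(err instanceof Error)) return false;
  const status = (err as Error & { status?: number }).status;
  return (
    status === 429 ||
    err.name === 'RateLimitError' ||
    /429|rate.?limit/i.test(err.message)
  );
}

// Shape 2: the stream ends in an error result whose errors mention a 429.
export function looksLikeResult429(ev: SdkEventLike): boolean {
  return (
    ev.type === 'result' &&
    ev.subtype === 'error_during_execution' &&
    (ev.errors ?? []).some((e) => /429|rate.?limit/i.test(e))
  );
}

// Shape 3: a mid-stream rate_limit_event with status 'rejected'.
export function looksLikeRateLimitEvent(ev: SdkEventLike): boolean {
  return ev.type === 'rate_limit_event' && ev.rate_limit_info?.status === 'rejected';
}
```

Any of the three triggers the same retry path; everything else propagates immediately, which is what the non-429 unit test below exercises.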
* test(eval): paid periodic overlay-efficacy harness

  `test/skill-e2e-overlay-harness.test.ts` iterates OVERLAY_FIXTURES,
  running two arms per fixture (overlay-ON, overlay-OFF) at N=10 trials with
  bounded concurrency. Arms use the SDK preset `claude_code`, so both
  include the real Claude Code system prompt; overlay-ON appends the
  resolved overlay text. Saves per-trial raw event streams to
  `~/.gstack/projects/<slug>/transcripts/` for forensic recovery. Gated on
  `EVALS=1 && EVALS_TIER=periodic`. ~$3/run (40 trials).

* test: register overlay harness in touchfiles (both maps)

  Entries for `overlay-harness-opus-4-7-fanout-toy` and
  `opus-4-7-fanout-realistic` in E2E_TOUCHFILES (deps: model-overlays/,
  fixtures file, runner, resolver) and E2E_TIERS (`periodic`). Passes the
  `test/touchfiles.test.ts` completeness check.

* fix(opus-4.7): remove "Fan out explicitly" overlay nudge

  Measured counterproductive under the new SDK harness. Baseline Opus 4.7
  emits first-turn parallel tool_use blocks 70% of the time on a 3-file read
  prompt. With the custom nudge: 10%. With Anthropic's own canonical
  `<use_parallel_tool_calls>` block from their parallel-tool-use docs: 0%.
  Both overlays suppress fanout; neither improves it.

  On realistic multi-tool prompts (audit a project: read files + glob +
  summarize), Opus 4.7 never fans out in the first turn regardless of
  overlay. Zero of 20 trials. Not a prompt problem.

  Keeping the other three nudges (effort-match, batch questions, literal
  interpretation) pending their own measurement. The harness is ready for
  follow-up fixtures — add one entry to `test/fixtures/overlay-nudges.ts` to
  measure any overlay bullet. Cost of investigation: ~$7 total across 3 eval
  runs.

* chore: bump version and changelog (v1.6.5.0)

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(eval): extend OverlayFixture with allowedTools, maxTurns, direction

  A per-fixture tool allowlist unblocks measuring nudges that need
  Edit/Write (e.g. literal-interpretation 'fix the failing tests' needs
  write access). Per-fixture maxTurns lets harder prompts run longer without
  changing the default. `direction` is cosmetic metadata for test output
  labeling.

  Also adds reusable predicates and metrics:

  - lowerIsBetter20Pct / higherIsBetter20Pct — 20% lift threshold vs baseline
  - bashToolCallCount — count of Bash tool_use across the session
  - turnsToCompletion — SDK-reported num_turns at result
  - uniqueFilesEdited — Edit/Write/MultiEdit file_path set size

  test/skill-e2e-overlay-harness.test.ts now threads fixture.allowedTools
  and fixture.maxTurns through runArm.

* test(eval): 3 more overlay fixtures to measure remaining Claude nudges

  Measures three overlay bullets that haven't been tested yet:

  - claude-dedicated-tools-vs-bash — claude.md says 'prefer
    Read/Edit/Write/Glob/Grep over cat/sed/find/grep'. The fixture prompts
    'list every TypeScript file under src/ and tell me what each exports'
    and counts Bash tool_use across the session. Overlay-ON should drop it
    by >=20%.
  - opus-4-7-effort-match-trivial — opus-4-7.md says 'simple file reads
    don't need deep reasoning.' The fixture uses a trivial one-file prompt
    (config.json lookup) and measures turns_used. Overlay-ON should be
    <=80% of baseline turns.
  - opus-4-7-literal-interpretation — opus-4-7.md says 'fix ALL failing
    tests, not just the obvious one.' The fixture seeds three failing test
    files with deliberately distinct failure modes and counts unique files
    edited. Overlay-ON should touch >=20% more files.

  Adding a fourth fixture for any remaining overlay nudge is a single
  entry. The harness is now proven on: fanout (deleted after measurement),
  dedicated tools, effort-match, and literal-interpretation.

* fix(eval): handle SDK max-turns throw gracefully

  Some @anthropic-ai/claude-agent-sdk versions throw from the query
  generator when maxTurns is reached, instead of emitting a result message
  with subtype='error_max_turns'. The runner treated that as a
  non-retryable error and killed the whole periodic run on the first
  fixture that exceeded its turn cap.

  Added an isMaxTurnsError() detector and a catch branch that synthesizes
  an AgentSdkResult from events captured before the throw, with
  exitReason='error_max_turns' and costUsd=0 (unknown from the thrown
  path). The metric function still runs against whatever assistant turns
  were collected, so the trial produces a usable number.

  Hoisted events/assistantTurns/toolCalls/assistantTextParts and the timing
  counters out of the inner try so the catch branch can read them. No
  behavior change on the success path or on rate-limit retry paths.

* test(eval): bump maxTurns to 15 for claude-dedicated-tools-vs-bash

  The prompt 'list every TypeScript file under src/ and tell me what each
  exports' needs 1 turn for Glob + ~5 for Reads + 1 for a summary. The
  default maxTurns=5 was not enough; a prior run threw from the SDK on this
  fixture and tanked the whole periodic eval. Bumping to 15 gives headroom.
  The runner now also handles max-turns gracefully even if a future fixture
  underestimates, so this is belt and suspenders.

* test(eval): Sonnet 4.6 variants of the 5 Opus-4.7 fixtures

  Same overlays, same prompts, same metrics, `model: 'claude-sonnet-4-6'`.
  Tests whether the overlays behave differently on a weaker Claude model
  where baseline behavior is shakier. Sonnet trials cost ~3-4x less than
  Opus, so these 5 add ~$4.50 to a full run.

  Measurement result from the first paired run (100 trials total, ~$14.55):

  - **Sonnet + effort-match shows real overlay benefit.** With the overlay
    on, Sonnet averages 2.5 turns on a trivial `What's the version in
    config.json?` prompt. Without it, it takes exactly 3.0 turns in all 10
    trials. A ~17% reduction, below the 20% pass threshold, but the signal
    is clean: overlay-ON distribution [2,2,2,2,2,3,3,3,3,3] vs overlay-OFF
    [3,3,3,3,3,3,3,3,3,3].
  - All other Sonnet dimensions are flat (fanout, dedicated-tools, literal
    interpretation). Same as Opus on those axes.
  - Opus effort-match remains flat (2.60 vs 2.50, +4% slower with overlay).

  Implication: the effect is model-stratified. The overlay stack helps
  Sonnet on some axes where it does nothing on Opus. Wholesale removal
  would hurt Sonnet. Per-nudge per-model measurement is the right move
  going forward.

* chore: bump version to 1.10.1.0

  Updates VERSION, package.json, the CHANGELOG header, and the TODOS
  completion marker from 1.6.5.0 to 1.10.1.0.

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
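The 20% lift predicates named in the commit log above could look roughly like the following. This is a hypothetical sketch: the actual `lowerIsBetter20Pct`/`higherIsBetter20Pct` in `test/fixtures/overlay-nudges.ts` may differ in detail (for instance in how they handle ties or empty arms).

```typescript
// Hypothetical sketch of the 20% lift threshold predicates; illustrative only.
const mean = (xs: number[]): number =>
  xs.reduce((a, b) => a + b, 0) / xs.length;

// Lower is better: the overlay arm must average at most 80% of the baseline.
export function lowerIsBetter20Pct(overlay: number[], off: number[]): boolean {
  return mean(overlay) <= 0.8 * mean(off);
}

// Higher is better: the overlay arm must average at least 120% of the baseline.
export function higherIsBetter20Pct(overlay: number[], off: number[]): boolean {
  return mean(overlay) >= 1.2 * mean(off);
}
```

Against the Sonnet effort-match numbers reported in the log (overlay mean 2.5 turns vs baseline 3.0), a predicate of this shape would fail, since 2.5 > 0.8 × 3.0 = 2.4, which matches the "below the 20% pass threshold" call above.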
726 lines
23 KiB
TypeScript
/**
 * Unit tests for test/helpers/agent-sdk-runner.ts.
 *
 * Runs in free `bun test` (no API calls). Uses a stub QueryProvider to
 * simulate SDK event streams — happy path, rate-limit retries across all
 * three shapes, persistent failure, non-retryable error, options
 * propagation, concurrency cap.
 *
 * Also covers validateFixtures() rejections.
 */

import { describe, test, expect } from 'bun:test';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';
import type {
  SDKMessage,
  Options,
  Query,
} from '@anthropic-ai/claude-agent-sdk';
import {
  runAgentSdkTest,
  toSkillTestResult,
  firstTurnParallelism,
  isRateLimitThrown,
  isRateLimitResult,
  isRateLimitEvent,
  RateLimitExhaustedError,
  __resetSemaphoreForTests,
  type QueryProvider,
  type AgentSdkResult,
} from '../test/helpers/agent-sdk-runner';
import {
  validateFixtures,
  fanoutPass,
  type OverlayFixture,
} from '../test/fixtures/overlay-nudges';

// ---------------------------------------------------------------------------
// Stub SDK event builders
// ---------------------------------------------------------------------------

let uuidCounter = 0;
function uuid(): string {
  return `00000000-0000-0000-0000-${String(++uuidCounter).padStart(12, '0')}`;
}

function systemInit(model = 'claude-opus-4-7', version = '2.1.117'): SDKMessage {
  return {
    type: 'system',
    subtype: 'init',
    apiKeySource: 'user',
    claude_code_version: version,
    cwd: '/tmp/x',
    tools: ['Read'],
    mcp_servers: [],
    model,
    permissionMode: 'bypassPermissions',
    slash_commands: [],
    output_style: 'default',
    skills: [],
    plugins: [],
    uuid: uuid(),
    session_id: 'test-session',
  } as unknown as SDKMessage;
}

function assistantTurn(
  blocks: Array<{ type: 'text'; text: string } | { type: 'tool_use'; name: string; input: unknown }>,
): SDKMessage {
  return {
    type: 'assistant',
    parent_tool_use_id: null,
    uuid: uuid(),
    session_id: 'test-session',
    message: {
      id: 'msg_' + uuid(),
      type: 'message',
      role: 'assistant',
      model: 'claude-opus-4-7',
      content: blocks.map((b) => ({ ...b })),
      stop_reason: 'end_turn',
      stop_sequence: null,
      usage: {
        input_tokens: 10,
        output_tokens: 20,
        cache_creation_input_tokens: 0,
        cache_read_input_tokens: 0,
        service_tier: 'standard',
      },
    },
  } as unknown as SDKMessage;
}

function resultSuccess(cost = 0.01, turns = 1): SDKMessage {
  return {
    type: 'result',
    subtype: 'success',
    duration_ms: 100,
    duration_api_ms: 50,
    is_error: false,
    num_turns: turns,
    result: 'done',
    stop_reason: 'end_turn',
    total_cost_usd: cost,
    usage: {
      input_tokens: 10,
      output_tokens: 20,
      cache_creation_input_tokens: 0,
      cache_read_input_tokens: 0,
      server_tool_use: {},
      service_tier: 'standard',
    },
    modelUsage: {},
    permission_denials: [],
    uuid: uuid(),
    session_id: 'test-session',
  } as unknown as SDKMessage;
}

function resultRateLimit(): SDKMessage {
  return {
    type: 'result',
    subtype: 'error_during_execution',
    duration_ms: 100,
    duration_api_ms: 50,
    is_error: true,
    num_turns: 0,
    stop_reason: null,
    total_cost_usd: 0,
    usage: {
      input_tokens: 0,
      output_tokens: 0,
      cache_creation_input_tokens: 0,
      cache_read_input_tokens: 0,
      server_tool_use: {},
      service_tier: 'standard',
    },
    modelUsage: {},
    permission_denials: [],
    errors: ['rate limit exceeded (429)'],
    uuid: uuid(),
    session_id: 'test-session',
  } as unknown as SDKMessage;
}

function rateLimitEvent(): SDKMessage {
  return {
    type: 'rate_limit_event',
    rate_limit_info: {
      status: 'rejected',
      rateLimitType: 'five_hour',
    },
    uuid: uuid(),
    session_id: 'test-session',
  } as unknown as SDKMessage;
}

// ---------------------------------------------------------------------------
// Stub query provider
// ---------------------------------------------------------------------------

interface StubConfig {
  /** One event stream per call. Exhausted calls throw. */
  streams: SDKMessage[][];
  /** Throw this error on the Nth call (0-indexed). */
  throwAt?: number;
  throwError?: unknown;
  /** Track calls for assertions. */
  calls: Array<{ prompt: string; options: Options | undefined; startedAt: number; endedAt?: number }>;
}

function makeStubProvider(config: StubConfig): QueryProvider {
  let callIdx = -1;
  const provider: QueryProvider = (params) => {
    callIdx++;
    const idx = callIdx;
    const startedAt = Date.now();
    const prompt = typeof params.prompt === 'string' ? params.prompt : '<iterable>';
    config.calls.push({ prompt, options: params.options, startedAt });

    if (config.throwAt !== undefined && idx === config.throwAt) {
      const err = config.throwError ?? new Error('stub throw');
      // Return an async generator that throws on first next().
      const gen = (async function* (): AsyncGenerator<SDKMessage, void> {
        throw err;
      })();
      return gen as unknown as Query;
    }

    const stream = config.streams[idx];
    if (!stream) {
      const gen = (async function* (): AsyncGenerator<SDKMessage, void> {
        throw new Error(`stub has no stream for call ${idx}`);
      })();
      return gen as unknown as Query;
    }

    const gen = (async function* (): AsyncGenerator<SDKMessage, void> {
      try {
        for (const ev of stream) {
          yield ev;
        }
      } finally {
        config.calls[idx]!.endedAt = Date.now();
      }
    })();
    return gen as unknown as Query;
  };
  return provider;
}

const BASE_OPTS = {
  systemPrompt: '',
  userPrompt: 'test prompt',
  workingDirectory: '/tmp/test-dir',
  maxRetries: 3,
};

// Reset semaphore before each test that depends on fresh capacity.
function freshSem(cap = 10): void {
  __resetSemaphoreForTests(cap);
}

// ---------------------------------------------------------------------------
// Happy path
// ---------------------------------------------------------------------------

describe('runAgentSdkTest — happy path', () => {
  test('collects events, assistantTurns, toolCalls, and result fields', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [
        [
          systemInit(),
          assistantTurn([
            { type: 'text', text: 'reading files' },
            { type: 'tool_use', name: 'Read', input: { path: 'a.txt' } },
            { type: 'tool_use', name: 'Read', input: { path: 'b.txt' } },
          ]),
          assistantTurn([{ type: 'text', text: 'done' }]),
          resultSuccess(0.05, 2),
        ],
      ],
      calls: [],
    };
    const result = await runAgentSdkTest({
      ...BASE_OPTS,
      queryProvider: makeStubProvider(stub),
    });

    expect(result.events.length).toBe(4);
    expect(result.assistantTurns.length).toBe(2);
    expect(result.toolCalls.length).toBe(2);
    expect(result.toolCalls[0]!.tool).toBe('Read');
    expect(result.output).toContain('reading files');
    expect(result.output).toContain('done');
    expect(result.exitReason).toBe('success');
    expect(result.turnsUsed).toBe(2);
    expect(result.costUsd).toBe(0.05);
    expect(result.sdkClaudeCodeVersion).toBe('2.1.117');
    expect(result.model).toBe('claude-opus-4-7');
    expect(result.firstResponseMs).toBeGreaterThanOrEqual(0);
  });

  test('first-turn parallelism: 3 tool_use blocks in first assistant turn', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [
        [
          systemInit(),
          assistantTurn([
            { type: 'tool_use', name: 'Read', input: { path: 'a' } },
            { type: 'tool_use', name: 'Read', input: { path: 'b' } },
            { type: 'tool_use', name: 'Read', input: { path: 'c' } },
          ]),
          resultSuccess(),
        ],
      ],
      calls: [],
    };
    const result = await runAgentSdkTest({
      ...BASE_OPTS,
      queryProvider: makeStubProvider(stub),
    });
    expect(firstTurnParallelism(result.assistantTurns[0])).toBe(3);
  });

  test('first-turn parallelism: 0 when first turn is text-only', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [
        [
          systemInit(),
          assistantTurn([{ type: 'text', text: 'thinking' }]),
          resultSuccess(),
        ],
      ],
      calls: [],
    };
    const result = await runAgentSdkTest({
      ...BASE_OPTS,
      queryProvider: makeStubProvider(stub),
    });
    expect(firstTurnParallelism(result.assistantTurns[0])).toBe(0);
  });

  test('first-turn parallelism: 0 when no first turn', () => {
    expect(firstTurnParallelism(undefined)).toBe(0);
  });
});

// ---------------------------------------------------------------------------
// Options propagation
// ---------------------------------------------------------------------------

describe('runAgentSdkTest — options propagation', () => {
  test('systemPrompt, model, cwd, allowedTools, disallowedTools, permissionMode, settingSources, env, pathToClaudeCodeExecutable reach query()', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [[systemInit(), assistantTurn([{ type: 'text', text: 'ok' }]), resultSuccess()]],
      calls: [],
    };
    await runAgentSdkTest({
      systemPrompt: 'you are a test overlay',
      userPrompt: 'go',
      workingDirectory: '/tmp/spec-dir',
      model: 'claude-opus-4-7',
      maxTurns: 7,
      allowedTools: ['Read', 'Glob'],
      disallowedTools: ['Bash', 'Write'],
      permissionMode: 'bypassPermissions',
      settingSources: [],
      env: { ANTHROPIC_API_KEY: 'fake' },
      pathToClaudeCodeExecutable: '/fake/path/claude',
      queryProvider: makeStubProvider(stub),
    });

    const opts = stub.calls[0]!.options!;
    expect(opts.systemPrompt).toBe('you are a test overlay');
    expect(opts.model).toBe('claude-opus-4-7');
    expect(opts.cwd).toBe('/tmp/spec-dir');
    expect(opts.maxTurns).toBe(7);
    expect(opts.tools).toEqual(['Read', 'Glob']);
    expect(opts.allowedTools).toEqual(['Read', 'Glob']);
    expect(opts.disallowedTools).toEqual(['Bash', 'Write']);
    expect(opts.permissionMode).toBe('bypassPermissions');
    expect(opts.allowDangerouslySkipPermissions).toBe(true);
    expect(opts.settingSources).toEqual([]);
    expect(opts.env).toEqual({ ANTHROPIC_API_KEY: 'fake' });
    expect(opts.pathToClaudeCodeExecutable).toBe('/fake/path/claude');
  });

  test('empty systemPrompt means no systemPrompt option passed', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [[systemInit(), assistantTurn([{ type: 'text', text: 'ok' }]), resultSuccess()]],
      calls: [],
    };
    await runAgentSdkTest({
      ...BASE_OPTS,
      queryProvider: makeStubProvider(stub),
    });
    // systemPrompt is undefined when empty string passed (so SDK uses no override)
    expect(stub.calls[0]!.options!.systemPrompt).toBeUndefined();
  });
});

// ---------------------------------------------------------------------------
// Rate-limit retry (three shapes)
// ---------------------------------------------------------------------------

describe('runAgentSdkTest — rate-limit retry', () => {
  test('retryable on thrown 429-shaped error, then succeeds on 2nd attempt', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [
        // call 0: throws (handled via throwAt below)
        [],
        // call 1: success
        [systemInit(), assistantTurn([{ type: 'text', text: 'ok' }]), resultSuccess()],
      ],
      throwAt: 0,
      throwError: Object.assign(new Error('429 too many requests'), { status: 429 }),
      calls: [],
    };
    const result = await runAgentSdkTest({
      ...BASE_OPTS,
      queryProvider: makeStubProvider(stub),
      maxRetries: 2,
    });
    expect(result.exitReason).toBe('success');
    expect(stub.calls.length).toBe(2);
  });

  test('retryable on result-message rate-limit, then succeeds', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [
        [systemInit(), resultRateLimit()],
        [systemInit(), assistantTurn([{ type: 'text', text: 'ok' }]), resultSuccess()],
      ],
      calls: [],
    };
    const result = await runAgentSdkTest({
      ...BASE_OPTS,
      queryProvider: makeStubProvider(stub),
      maxRetries: 2,
    });
    expect(result.exitReason).toBe('success');
    expect(stub.calls.length).toBe(2);
  });

  test('retryable on mid-stream SDKRateLimitEvent, then succeeds', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [
        [systemInit(), rateLimitEvent()],
        [systemInit(), assistantTurn([{ type: 'text', text: 'ok' }]), resultSuccess()],
      ],
      calls: [],
    };
    const result = await runAgentSdkTest({
      ...BASE_OPTS,
      queryProvider: makeStubProvider(stub),
      maxRetries: 2,
    });
    expect(result.exitReason).toBe('success');
    expect(stub.calls.length).toBe(2);
  });

  test('onRetry callback is invoked between attempts', async () => {
    freshSem();
    const resets: string[] = [];
    const stub: StubConfig = {
      streams: [
        [],
        [systemInit(), assistantTurn([{ type: 'text', text: 'ok' }]), resultSuccess()],
      ],
      throwAt: 0,
      throwError: Object.assign(new Error('429'), { status: 429 }),
      calls: [],
    };
    await runAgentSdkTest({
      ...BASE_OPTS,
      queryProvider: makeStubProvider(stub),
      maxRetries: 2,
      onRetry: (dir) => resets.push(dir),
    });
    expect(resets.length).toBe(1);
    expect(resets[0]).toBe('/tmp/test-dir');
  });

  test('persistent 429 throws RateLimitExhaustedError after maxRetries', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [[], [], [], []], // 4 empty streams; throw on each
      calls: [],
    };
    // Every call throws:
    let callCount = 0;
    const alwaysThrowProvider: QueryProvider = (params) => {
      callCount++;
      stub.calls.push({
        prompt: typeof params.prompt === 'string' ? params.prompt : '',
        options: params.options,
        startedAt: Date.now(),
      });
      const gen = (async function* (): AsyncGenerator<SDKMessage, void> {
        throw Object.assign(new Error('429 always'), { status: 429 });
      })();
      return gen as unknown as Query;
    };

    let caught: unknown = null;
    try {
      await runAgentSdkTest({
        ...BASE_OPTS,
        queryProvider: alwaysThrowProvider,
        maxRetries: 2,
      });
    } catch (err) {
      caught = err;
    }
    expect(caught).toBeInstanceOf(RateLimitExhaustedError);
    expect((caught as RateLimitExhaustedError).attempts).toBe(3); // initial + 2 retries
    expect(callCount).toBe(3);
  });

  test('non-429 error is NOT retried, propagates immediately', async () => {
    __resetSemaphoreForTests(10);
    let callCount = 0;
    const throwOnce: QueryProvider = () => {
      callCount++;
      const gen = (async function* (): AsyncGenerator<SDKMessage, void> {
        throw new Error('generic auth failure');
      })();
      return gen as unknown as Query;
    };
    let caught: unknown = null;
    try {
      await runAgentSdkTest({
        ...BASE_OPTS,
        queryProvider: throwOnce,
        maxRetries: 3,
      });
    } catch (err) {
      caught = err;
    }
    expect(caught).toBeInstanceOf(Error);
    expect((caught as Error).message).toBe('generic auth failure');
    expect(callCount).toBe(1);
  });
});

// ---------------------------------------------------------------------------
// Rate-limit detectors (unit)
// ---------------------------------------------------------------------------

describe('rate-limit detectors', () => {
  test('isRateLimitThrown matches status 429, message, name', () => {
    expect(isRateLimitThrown(Object.assign(new Error('boom'), { status: 429 }))).toBe(true);
    expect(isRateLimitThrown(new Error('429 Too Many Requests'))).toBe(true);
    expect(isRateLimitThrown(new Error('rate-limit exceeded'))).toBe(true);
    expect(isRateLimitThrown(Object.assign(new Error('x'), { name: 'RateLimitError' }))).toBe(true);
    expect(isRateLimitThrown(new Error('auth failed'))).toBe(false);
    expect(isRateLimitThrown(null)).toBe(false);
  });

  test('isRateLimitResult matches error_during_execution with 429-shaped errors', () => {
    expect(isRateLimitResult(resultRateLimit())).toBe(true);
    expect(isRateLimitResult(resultSuccess())).toBe(false);
  });

  test('isRateLimitEvent matches rate_limit_event with status=rejected', () => {
    expect(isRateLimitEvent(rateLimitEvent())).toBe(true);
    expect(isRateLimitEvent(resultSuccess())).toBe(false);
  });
});

// ---------------------------------------------------------------------------
// Semaphore concurrency cap
// ---------------------------------------------------------------------------

describe('runAgentSdkTest — concurrency', () => {
  test('process-level semaphore caps concurrent queries', async () => {
    __resetSemaphoreForTests(2);
    let inFlight = 0;
    let peakInFlight = 0;
    const slowStub: QueryProvider = () => {
      const gen = (async function* (): AsyncGenerator<SDKMessage, void> {
        inFlight++;
        if (inFlight > peakInFlight) peakInFlight = inFlight;
        yield systemInit();
        await new Promise((r) => setTimeout(r, 30));
        yield assistantTurn([{ type: 'text', text: 'ok' }]);
        yield resultSuccess();
        inFlight--;
      })();
      return gen as unknown as Query;
    };

    await Promise.all(
      Array.from({ length: 6 }, (_, i) =>
        runAgentSdkTest({
          ...BASE_OPTS,
          userPrompt: `trial-${i}`,
          queryProvider: slowStub,
        }),
      ),
    );

    expect(peakInFlight).toBeLessThanOrEqual(2);
    expect(peakInFlight).toBeGreaterThan(0);
  });
});

// ---------------------------------------------------------------------------
// toSkillTestResult shape
// ---------------------------------------------------------------------------

describe('toSkillTestResult', () => {
  test('produces a SkillTestResult-shaped object', async () => {
    freshSem();
    const stub: StubConfig = {
      streams: [[systemInit(), assistantTurn([{ type: 'text', text: 'hi' }]), resultSuccess(0.02, 1)]],
      calls: [],
    };
    const r = await runAgentSdkTest({
      ...BASE_OPTS,
      queryProvider: makeStubProvider(stub),
    });
    const s = toSkillTestResult(r);
    expect(s.toolCalls).toBeArray();
    expect(s.browseErrors).toBeArray();
    expect(s.exitReason).toBe('success');
    expect(s.duration).toBeNumber();
    expect(s.output).toBe('hi');
    expect(s.costEstimate.estimatedCost).toBe(0.02);
    expect(s.costEstimate.turnsUsed).toBe(1);
    expect(s.model).toBe('claude-opus-4-7');
    expect(s.firstResponseMs).toBeNumber();
    expect(s.maxInterTurnMs).toBeNumber();
    expect(s.transcript).toBeArray();
  });
});

// ---------------------------------------------------------------------------
// Fixture validator
// ---------------------------------------------------------------------------

describe('validateFixtures', () => {
  function base(overrides: Partial<OverlayFixture> = {}): OverlayFixture {
    return {
      id: 'test-fixture',
      overlayPath: 'model-overlays/opus-4-7.md',
      model: 'claude-opus-4-7',
      trials: 10,
      setupWorkspace: () => {},
      userPrompt: 'go',
      metric: () => 0,
      pass: fanoutPass,
      ...overrides,
    };
  }

  test('passes for a valid fixture', () => {
    expect(() => validateFixtures([base()])).not.toThrow();
  });

  test('rejects empty id', () => {
    expect(() => validateFixtures([base({ id: '' })])).toThrow(/id must be/);
  });

  test('rejects id with uppercase or unsafe chars', () => {
    expect(() => validateFixtures([base({ id: 'Test_Fixture' })])).toThrow(/id must be/);
  });

  test('rejects duplicate ids', () => {
    expect(() => validateFixtures([base(), base()])).toThrow(/duplicate fixture id/);
  });

  test('rejects non-integer trials', () => {
    expect(() => validateFixtures([base({ trials: 3.5 })])).toThrow(/trials must be/);
  });

  test('rejects trials < 3', () => {
    expect(() => validateFixtures([base({ trials: 2 })])).toThrow(/trials must be/);
  });

  test('rejects concurrency < 1', () => {
    expect(() => validateFixtures([base({ concurrency: 0 })])).toThrow(/concurrency must be/);
  });

  test('rejects non-integer concurrency', () => {
    expect(() => validateFixtures([base({ concurrency: 2.5 })])).toThrow(/concurrency must be/);
  });

  test('rejects empty model', () => {
    expect(() => validateFixtures([base({ model: '' })])).toThrow(/model must be/);
  });

  test('rejects empty userPrompt', () => {
    expect(() => validateFixtures([base({ userPrompt: '' })])).toThrow(/userPrompt must be/);
  });

  test('rejects absolute overlayPath', () => {
    expect(() => validateFixtures([base({ overlayPath: '/etc/passwd' })])).toThrow(/overlayPath must be/);
  });

  test("rejects overlayPath containing '..'", () => {
    expect(() =>
      validateFixtures([base({ overlayPath: '../outside/file.md' })]),
    ).toThrow(/overlayPath must be/);
  });

  test('rejects missing overlay file', () => {
    expect(() =>
      validateFixtures([base({ overlayPath: 'model-overlays/nonexistent.md' })]),
    ).toThrow(/overlay file not found/);
  });

  test('rejects non-function setupWorkspace', () => {
    expect(() =>
      validateFixtures([base({ setupWorkspace: 'not a function' as unknown as (d: string) => void })]),
    ).toThrow(/setupWorkspace must be a function/);
  });

  test('rejects non-function metric', () => {
    expect(() =>
      validateFixtures([base({ metric: null as unknown as (r: AgentSdkResult) => number })]),
    ).toThrow(/metric must be a function/);
  });

  test('rejects non-function pass', () => {
    expect(() =>
      validateFixtures([base({ pass: undefined as unknown as OverlayFixture['pass'] })]),
    ).toThrow(/pass must be a function/);
  });
});

// ---------------------------------------------------------------------------
// fanoutPass predicate
// ---------------------------------------------------------------------------

describe('fanoutPass predicate', () => {
  test('accepts mean lift >= 0.5 AND >=3/10 overlay trials >= 2', () => {
    const overlay = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2];
    const off = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
    expect(fanoutPass({ overlay, off })).toBe(true);
  });

  test('rejects when mean lift < 0.5', () => {
    const overlay = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1];
    const off = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1];
    expect(fanoutPass({ overlay, off })).toBe(false);
  });

  test('rejects when mean lift >= 0.5 but <3 overlay trials emit >=2', () => {
    // Mean overlay = 1.2, off = 0.0, lift 1.2 but only 2 trials at >=2
    const overlay = [2, 2, 1, 1, 1, 1, 1, 1, 1, 1];
    const off = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
    expect(fanoutPass({ overlay, off })).toBe(false);
  });
});