mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-11 07:17:12 +02:00
a647064734
Three gate-tier E2E tests detect when preamble / template changes flatten the distinctive posture of /plan-ceo-review SCOPE EXPANSION or /office-hours (startup Q3, builder mode). The V1 regression that this PR fixes shipped without anyone catching it at ship time; these tests are the ongoing signal so the same thing doesn't happen again.

Pieces:

- `judgePosture(mode, text)` in `test/helpers/llm-judge.ts`. Sonnet judge with a mode-specific dual-axis rubric (expansion: surface_framing + decision_preservation; forcing: stacking_preserved + domain_matched_consequence; builder: unexpected_combinations + excitement_over_optimization). Pass threshold: 4/5 on both axes.
- Three fixtures in `test/fixtures/mode-posture/`: deterministic input for expansion proposal generation, the Q3 forcing question, and builder adjacent-unlock riffing.
- `plan-ceo-review-expansion-energy` case appended to `test/skill-e2e-plan.test.ts`. Generator: Opus (skill default). Judge: Sonnet.
- New `test/skill-e2e-office-hours.test.ts` with `office-hours-forcing-energy` and `office-hours-builder-wildness` cases. Generator: Sonnet. Judge: Sonnet.
- Touchfile registration in `test/helpers/touchfiles.ts`: all three registered as `gate` tier in `E2E_TIERS`, triggered by changes to `scripts/resolvers/preamble.ts`, the relevant skill template, the judge helper, or any mode-posture fixture.

Cost: ~$0.50-$1.50 per triggered PR. The Sonnet judge is cheap; the Opus generator for the plan-ceo-review case dominates.

Known V1.1 tradeoff: judges test prose markers more than deep behavior. The V1.2 candidate is a cross-provider (Codex) adversarial judge on the same output to decouple house-style bias.
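The touchfile wiring in the last bullet might look something like the sketch below. The actual shape of `E2E_TIERS` in `test/helpers/touchfiles.ts` is not shown on this page, so the record structure (case name → tier + trigger paths) is an assumption; only the case names, the `gate` tier, and the trigger paths come from the description above.

```typescript
// Hypothetical sketch of the gate-tier registration described above.
// The real E2E_TIERS shape in test/helpers/touchfiles.ts may differ.
type Tier = 'gate' | 'advisory';

interface TierEntry {
  tier: Tier;
  triggers: string[]; // paths whose changes trigger this E2E case
}

// Shared triggers: the preamble resolver, the judge helper, and any
// mode-posture fixture. Each case also triggers on its relevant skill
// template; those paths are not given here, so they are omitted.
const POSTURE_TRIGGERS = [
  'scripts/resolvers/preamble.ts',
  'test/helpers/llm-judge.ts',
  'test/fixtures/mode-posture/**',
];

const E2E_TIERS: Record<string, TierEntry> = {
  'plan-ceo-review-expansion-energy': { tier: 'gate', triggers: POSTURE_TRIGGERS },
  'office-hours-forcing-energy': { tier: 'gate', triggers: POSTURE_TRIGGERS },
  'office-hours-builder-wildness': { tier: 'gate', triggers: POSTURE_TRIGGERS },
};
```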
193 lines
8.2 KiB
TypeScript
/**
 * Shared LLM-as-judge helpers for eval and E2E tests.
 *
 * Provides callJudge (generic JSON-from-LLM), judge (doc quality scorer),
 * outcomeJudge (planted-bug detection scorer), and judgePosture
 * (mode-posture regression scorer).
 *
 * Requires: ANTHROPIC_API_KEY env var
 */

import Anthropic from '@anthropic-ai/sdk';

export interface JudgeScore {
  clarity: number; // 1-5
  completeness: number; // 1-5
  actionability: number; // 1-5
  reasoning: string;
}

export interface OutcomeJudgeResult {
  detected: string[];
  missed: string[];
  false_positives: number;
  detection_rate: number;
  evidence_quality: number;
  reasoning: string;
}

export interface PostureScore {
  axis_a: number; // 1-5 — mode-specific primary rubric axis
  axis_b: number; // 1-5 — mode-specific secondary rubric axis
  reasoning: string;
}

export type PostureMode = 'expansion' | 'forcing' | 'builder';
/**
 * Call claude-sonnet-4-6 with a prompt, extract JSON response.
 * Retries once on 429 rate limit errors.
 */
export async function callJudge<T>(prompt: string): Promise<T> {
  const client = new Anthropic();

  const makeRequest = () => client.messages.create({
    model: 'claude-sonnet-4-6',
    max_tokens: 1024,
    messages: [{ role: 'user', content: prompt }],
  });

  let response;
  try {
    response = await makeRequest();
  } catch (err: any) {
    if (err.status === 429) {
      await new Promise(r => setTimeout(r, 1000));
      response = await makeRequest();
    } else {
      throw err;
    }
  }

  const text = response.content[0].type === 'text' ? response.content[0].text : '';
  const jsonMatch = text.match(/\{[\s\S]*\}/);
  if (!jsonMatch) throw new Error(`Judge returned non-JSON: ${text.slice(0, 200)}`);
  return JSON.parse(jsonMatch[0]) as T;
}
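The greedy `\{[\s\S]*\}` match in callJudge tolerates a judge that wraps its JSON in prose, grabbing everything from the first `{` to the last `}`. A minimal standalone sketch of the same extraction (the `extractJson` helper name is hypothetical, not part of this module):

```typescript
// Hypothetical standalone version of callJudge's JSON extraction.
// Grabs from the first '{' to the last '}', so leading/trailing prose
// ("Sure! {...} Done.") is tolerated.
function extractJson<T>(text: string): T {
  const match = text.match(/\{[\s\S]*\}/);
  if (!match) throw new Error(`non-JSON: ${text.slice(0, 200)}`);
  return JSON.parse(match[0]) as T;
}

const raw = 'Sure! {"clarity": 4, "completeness": 5, "actionability": 4, "reasoning": "ok"} Done.';
const score = extractJson<{ clarity: number }>(raw);
// score.clarity === 4
```

One tradeoff worth knowing: because the match is greedy, a stray `}` in trailing prose would be swallowed into the candidate and make `JSON.parse` throw, and the one-shot retry in `callJudge` covers only 429s, not that case.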

/**
 * Score documentation quality on clarity/completeness/actionability (1-5).
 */
export async function judge(section: string, content: string): Promise<JudgeScore> {
  return callJudge<JudgeScore>(`You are evaluating documentation quality for an AI coding agent's CLI tool reference.

The agent reads this documentation to learn how to use a headless browser CLI. It needs to:
1. Understand what each command does
2. Know what arguments to pass
3. Know valid values for enum-like parameters
4. Construct correct command invocations without guessing

Rate the following ${section} on three dimensions (1-5 scale):

- **clarity** (1-5): Can an agent understand what each command/flag does from the description alone?
- **completeness** (1-5): Are arguments, valid values, and important behaviors documented? Would an agent need to guess anything?
- **actionability** (1-5): Can an agent construct correct command invocations from this reference alone?

Scoring guide:
- 5: Excellent — no ambiguity, all info present
- 4: Good — minor gaps an experienced agent could infer
- 3: Adequate — some guessing required
- 2: Poor — significant info missing
- 1: Unusable — agent would fail without external help

Respond with ONLY valid JSON in this exact format:
{"clarity": N, "completeness": N, "actionability": N, "reasoning": "brief explanation"}

Here is the ${section} to evaluate:

${content}`);
}
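LLM judges are noisy scorers; a common mitigation (not something this module does itself) is to run the judge several times and average the axes before comparing against a threshold. A sketch, assuming the `JudgeScore` axis names above; the `meanJudgeScore` helper is hypothetical:

```typescript
// Hypothetical variance-reduction helper: average several JudgeScore
// results from repeated judge calls before thresholding.
function meanJudgeScore(
  scores: { clarity: number; completeness: number; actionability: number }[],
) {
  const n = scores.length;
  if (n === 0) throw new Error('no scores to average');
  const mean = (k: 'clarity' | 'completeness' | 'actionability') =>
    scores.reduce((acc, s) => acc + s[k], 0) / n;
  return { clarity: mean('clarity'), completeness: mean('completeness'), actionability: mean('actionability') };
}

const mean = meanJudgeScore([
  { clarity: 4, completeness: 5, actionability: 4 },
  { clarity: 5, completeness: 4, actionability: 4 },
]);
// mean.clarity === 4.5
```

Each extra run multiplies judge cost, so in practice this only makes sense for cheap judges like the Sonnet one used here.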

/**
 * Evaluate a QA report against planted-bug ground truth.
 * Returns detection metrics for the planted bugs.
 */
export async function outcomeJudge(
  groundTruth: any,
  report: string,
): Promise<OutcomeJudgeResult> {
  return callJudge<OutcomeJudgeResult>(`You are evaluating a QA testing report against known ground truth bugs.

GROUND TRUTH (${groundTruth.total_bugs} planted bugs):
${JSON.stringify(groundTruth.bugs, null, 2)}

QA REPORT (generated by an AI agent):
${report}

For each planted bug, determine if the report identified it. A bug counts as
"detected" if the report describes the same defect, even if the wording differs.
Use the detection_hint keywords as guidance.

Also count false positives: issues in the report that don't correspond to any
planted bug AND aren't legitimate issues with the page.

Respond with ONLY valid JSON:
{
  "detected": ["bug-id-1", "bug-id-2"],
  "missed": ["bug-id-3"],
  "false_positives": 0,
  "detection_rate": 2,
  "evidence_quality": 4,
  "reasoning": "brief explanation"
}

Rules:
- "detected" and "missed" arrays must only contain IDs from the ground truth: ${groundTruth.bugs.map((b: any) => b.id).join(', ')}
- detection_rate = length of detected array
- evidence_quality (1-5): Do detected bugs have screenshots, repro steps, or specific element references?
  5 = excellent evidence for every bug, 1 = no evidence at all`);
}
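The Rules section above tells the judge that detected/missed must come from the ground-truth IDs and that detection_rate equals the detected count, but nothing in this module verifies the judge obeyed. A hedged sketch of a local sanity check a caller could run on the result (the `checkOutcome` helper is hypothetical, enforcing exactly the prompt's rules):

```typescript
// Hypothetical sanity check on an OutcomeJudgeResult, enforcing the same
// rules the prompt gives the judge: detected/missed must partition the
// ground-truth IDs, and detection_rate must equal detected.length.
// Returns a list of problems; empty means the result is self-consistent.
function checkOutcome(
  result: { detected: string[]; missed: string[]; detection_rate: number },
  groundTruthIds: string[],
): string[] {
  const problems: string[] = [];
  const accounted = new Set([...result.detected, ...result.missed]);
  for (const id of accounted) {
    if (!groundTruthIds.includes(id)) problems.push(`unknown id: ${id}`);
  }
  for (const id of groundTruthIds) {
    if (!accounted.has(id)) problems.push(`unaccounted id: ${id}`);
  }
  if (result.detected.some(id => result.missed.includes(id))) {
    problems.push('id appears in both detected and missed');
  }
  if (result.detection_rate !== result.detected.length) {
    problems.push('detection_rate does not match detected.length');
  }
  return problems;
}

const problems = checkOutcome(
  { detected: ['bug-1'], missed: ['bug-2'], detection_rate: 1 },
  ['bug-1', 'bug-2'],
);
// problems.length === 0
```

Catching a judge arithmetic slip this way is cheaper than re-running the judge, and a non-empty problem list is a reasonable trigger for one retry.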
|
|
|
|
/**
|
|
* Score mode-specific prose posture on two mode-dependent axes (1-5 each).
|
|
*
|
|
* Used by mode-posture regression tests to detect whether V1's Writing Style
|
|
* rules have flattened the distinctive energy of expansion / forcing / builder
|
|
* modes. See docs/designs/PLAN_TUNING_V1.md and the V1.1 mode-posture fix.
|
|
*
|
|
* The generator model is whatever the skill runs with (often Opus for
|
|
* plan-ceo-review). The judge is always Sonnet via callJudge() for cost.
|
|
*/
|
|
export async function judgePosture(mode: PostureMode, text: string): Promise<PostureScore> {
|
|
const rubrics: Record<PostureMode, { axis_a: string; axis_b: string; context: string }> = {
|
|
expansion: {
|
|
context: 'This text is expansion proposals emitted by /plan-ceo-review in SCOPE EXPANSION or SELECTIVE EXPANSION mode. The skill is supposed to lead with felt-experience vision, then close with concrete effort and impact.',
|
|
axis_a: 'surface_framing (1-5): Does each proposal lead with felt-experience framing ("imagine", "when the user sees", "the moment X happens", or equivalent) BEFORE closing with concrete metrics? Penalize pure feature bullets ("Add X. Improves Y by Z%").',
|
|
axis_b: 'decision_preservation (1-5): Does each proposal contain the elements a scope-expansion decision needs — what to build (concrete shape), effort (ideally both human and CC scales), risk or integration note? Penalize pure prose with no actionable content.',
|
|
},
|
|
forcing: {
|
|
context: 'This text is the Q3 Desperate Specificity question emitted by /office-hours startup mode. The skill is supposed to force the founder to name a specific person and consequence, stacking multiple pressures.',
|
|
axis_a: 'stacking_preserved (1-5): Does the question include at least 3 distinct sub-pressures (e.g., title? promoted? fired? up at night? OR career? day? weekend?) rather than a single neutral ask? Penalize "Who is your target user?" style collapses.',
|
|
axis_b: 'domain_matched_consequence (1-5): Does the named consequence match the domain context in the input (B2B → career impact, consumer → daily pain, hobby/open-source → weekend project)? Penalize one-size-fits-all B2B career framing for non-B2B ideas.',
|
|
},
|
|
builder: {
|
|
context: 'This text is builder-mode response from /office-hours. The skill is supposed to riff creatively — "what if you also..." adjacent unlocks, cross-domain combinations, the "whoa" moment — not emit a structured product roadmap.',
|
|
axis_a: 'unexpected_combinations (1-5): Does the output include at least 2 cross-domain or surprising adjacent unlocks ("what if you also...", "pipe it into X", etc.)? Penalize structured feature lists with no creative leaps.',
|
|
axis_b: 'excitement_over_optimization (1-5): Does the output read as a creative riff (enthusiastic, opinionated, evocative) or as a PRD / product roadmap (structured, metric-driven, conservative)? Penalize PRD-voice language like "improve retention", "enable virality", "consider adding".',
|
|
},
|
|
};
|
|
|
|
const r = rubrics[mode];
|
|
return callJudge<PostureScore>(`You are evaluating prose quality for a mode-specific posture regression test.
|
|
|
|
Context: ${r.context}
|
|
|
|
Rate the following output on two dimensions (1-5 scale each):
|
|
|
|
- **axis_a** — ${r.axis_a}
|
|
- **axis_b** — ${r.axis_b}
|
|
|
|
Scoring guide:
|
|
- 5: Excellent — strong, unambiguous match for the posture
|
|
- 4: Good — matches posture with minor weakness
|
|
- 3: Adequate — partial match, noticeable flatness or structure
|
|
- 2: Poor — posture mostly flattened / collapsed
|
|
- 1: Fail — posture entirely missing, reads as the opposite mode
|
|
|
|
Respond with ONLY valid JSON in this exact format:
|
|
{"axis_a": N, "axis_b": N, "reasoning": "brief explanation naming specific phrases that drove the score"}
|
|
|
|
Here is the output to evaluate:
|
|
|
|
${text}`);
|
|
}
|
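The commit message above says the gate tests pass at 4/5 on both axes. A sketch of how a consuming test might apply that threshold; the `passesPostureGate` predicate and the test-harness shape are assumptions, not code from this repo:

```typescript
// Hypothetical gate predicate matching the stated pass threshold:
// both rubric axes must score at least 4 out of 5.
function passesPostureGate(score: { axis_a: number; axis_b: number }): boolean {
  return score.axis_a >= 4 && score.axis_b >= 4;
}

// Assumed usage inside an E2E test (harness shape is a sketch):
//   const score = await judgePosture('forcing', generatedOutput);
//   if (!passesPostureGate(score)) throw new Error(score.reasoning);

const pass = passesPostureGate({ axis_a: 4, axis_b: 5 }); // true
const fail = passesPostureGate({ axis_a: 5, axis_b: 3 }); // false
```

Requiring both axes (rather than an average) is what keeps a proposal from passing on vivid framing alone while losing the decision-relevant content, and vice versa.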