mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-01 19:25:10 +02:00
8ee16b867b
* feat: restore mode-posture energy to expansion + forcing + builder output
Rewrites Writing Style rule 2-4 examples in scripts/resolvers/preamble.ts
to cover three framing families (pain reduction, upside/delight, forcing
pressure) instead of diagnostic-pain only. Adds inline exemplars to
plan-ceo-review (0D-prelude shared between SCOPE + SELECTIVE EXPANSION)
and office-hours (Q3 forcing exemplar with career/day/weekend domain
gating, builder operating principles wild exemplar).
V1 shipped rule 2-4 examples that all pointed to diagnostic-pain framing
("3-second spinner", "double-click button"). Models follow concrete
examples over abstract taxonomies, so any skill with a non-diagnostic
mode posture (expansion, forcing, delight) got flattened at runtime
even when the template itself said "dream big" or "direct to the point
of discomfort." This change targets the actual lever: swap the single
diagnostic example for three paired framings, one per posture family.
Preserves V1 clarity gains — rules 2, 3, 4 principles unchanged, only
examples expanded. Terse mode (EXPLAIN_LEVEL: terse) still skips the
block entirely.
* chore: regenerate SKILL.md after preamble + template changes
Mechanical cascade from `bun run gen:skill-docs --host all` after the
Writing Style rule 2-4 example rewrite and the plan-ceo-review /
office-hours template exemplar additions. No hand edits — every change
flows from the prior commit's templates.
* test: add gate-tier mode-posture regression tests
Three gate-tier E2E tests detect when preamble / template changes
flatten the distinctive posture of /plan-ceo-review SCOPE EXPANSION or
/office-hours (startup Q3, builder mode). The V1 regression this PR
fixes shipped uncaught; these tests are the ongoing signal so the same
thing doesn't happen again.
Pieces:
- `judgePosture(mode, text)` in `test/helpers/llm-judge.ts`. Sonnet
judge with a mode-specific dual-axis rubric (expansion: surface_framing
+ decision_preservation; forcing: stacking_preserved +
domain_matched_consequence; builder: unexpected_combinations +
excitement_over_optimization). Pass threshold 4/5 on both axes; a
rough sketch of the assertion follows this list.
- Three fixtures in `test/fixtures/mode-posture/` — deterministic input
for expansion proposal generation, Q3 forcing question, and builder
adjacent-unlock riffing.
- `plan-ceo-review-expansion-energy` case appended to
`test/skill-e2e-plan.test.ts`. Generator: Opus (skill default). Judge:
Sonnet.
- New `test/skill-e2e-office-hours.test.ts` with
`office-hours-forcing-energy` + `office-hours-builder-wildness`
cases. Generator: Sonnet. Judge: Sonnet.
- Touchfile registration in `test/helpers/touchfiles.ts` — all three as
`gate` tier in `E2E_TIERS`, triggered by changes to
`scripts/resolvers/preamble.ts`, the relevant skill template, the
judge helper, or any mode-posture fixture.
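Shape of a gate assertion, roughly (exact fixture and case wiring
lives in the test files; variable names here are illustrative):

    const score = await judgePosture(mode, generatedOutput);
    expect(score.axis_a).toBeGreaterThanOrEqual(4);
    expect(score.axis_b).toBeGreaterThanOrEqual(4);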
Cost: ~$0.50-$1.50 per triggered PR. Sonnet judge is cheap; Opus
generator for the plan-ceo-review case dominates.
Known V1.1 tradeoff: judges test prose markers more than deep behavior.
A V1.2 candidate is a cross-provider (Codex) adversarial judge scoring
the same output, to control for house-style bias.
* test: update golden ship baselines + touchfile count for mode-posture entries
Mechanical test updates after the mode-posture work:
- Golden ship SKILL.md baselines (claude + codex + factory hosts) regenerate with
the rewritten Writing Style rule 2-4 examples from preamble.ts.
- Touchfile selection test expects 6 matches for a plan-ceo-review/ change (was 5)
because E2E_TOUCHFILES now includes plan-ceo-review-expansion-energy.
* chore: bump version and changelog (v1.1.2.0)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
193 lines
8.2 KiB
TypeScript
/**
 * Shared LLM-as-judge helpers for eval and E2E tests.
 *
 * Provides callJudge (generic JSON-from-LLM), judge (doc quality scorer),
 * outcomeJudge (planted-bug detection scorer), and judgePosture
 * (mode-posture scorer).
 *
 * Requires: ANTHROPIC_API_KEY env var
 */

import Anthropic from '@anthropic-ai/sdk';
export interface JudgeScore {
  clarity: number; // 1-5
  completeness: number; // 1-5
  actionability: number; // 1-5
  reasoning: string;
}

export interface OutcomeJudgeResult {
  detected: string[];
  missed: string[];
  false_positives: number;
  detection_rate: number;
  evidence_quality: number;
  reasoning: string;
}

export interface PostureScore {
  axis_a: number; // 1-5 — mode-specific primary rubric axis
  axis_b: number; // 1-5 — mode-specific secondary rubric axis
  reasoning: string;
}

export type PostureMode = 'expansion' | 'forcing' | 'builder';
/**
 * Call claude-sonnet-4-6 with a prompt, extract JSON response.
 * Retries once on 429 rate limit errors.
 */
export async function callJudge<T>(prompt: string): Promise<T> {
  const client = new Anthropic();

  const makeRequest = () => client.messages.create({
    model: 'claude-sonnet-4-6',
    max_tokens: 1024,
    messages: [{ role: 'user', content: prompt }],
  });

  let response;
  try {
    response = await makeRequest();
  } catch (err: any) {
    if (err.status === 429) {
      await new Promise(r => setTimeout(r, 1000));
      response = await makeRequest();
    } else {
      throw err;
    }
  }

  const text = response.content[0].type === 'text' ? response.content[0].text : '';
  const jsonMatch = text.match(/\{[\s\S]*\}/);
  if (!jsonMatch) throw new Error(`Judge returned non-JSON: ${text.slice(0, 200)}`);
  return JSON.parse(jsonMatch[0]) as T;
}
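
// Usage sketch (illustrative; the type parameter mirrors whatever JSON shape
// the prompt asks for):
//
//   const verdict = await callJudge<{ pass: boolean; reasoning: string }>(
//     `Respond with ONLY valid JSON {"pass": true|false, "reasoning": "..."}: <text to check>`,
//   );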

/**
 * Score documentation quality on clarity/completeness/actionability (1-5).
 */
export async function judge(section: string, content: string): Promise<JudgeScore> {
  return callJudge<JudgeScore>(`You are evaluating documentation quality for an AI coding agent's CLI tool reference.

The agent reads this documentation to learn how to use a headless browser CLI. It needs to:
1. Understand what each command does
2. Know what arguments to pass
3. Know valid values for enum-like parameters
4. Construct correct command invocations without guessing

Rate the following ${section} on three dimensions (1-5 scale):

- **clarity** (1-5): Can an agent understand what each command/flag does from the description alone?
- **completeness** (1-5): Are arguments, valid values, and important behaviors documented? Would an agent need to guess anything?
- **actionability** (1-5): Can an agent construct correct command invocations from this reference alone?

Scoring guide:
- 5: Excellent — no ambiguity, all info present
- 4: Good — minor gaps an experienced agent could infer
- 3: Adequate — some guessing required
- 2: Poor — significant info missing
- 1: Unusable — agent would fail without external help

Respond with ONLY valid JSON in this exact format:
{"clarity": N, "completeness": N, "actionability": N, "reasoning": "brief explanation"}

Here is the ${section} to evaluate:

${content}`);
}
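
// Usage sketch (illustrative; `section` is a human-readable label that gets
// interpolated into the rubric, e.g. 'commands reference'):
//
//   const score = await judge('commands reference', docsMarkdown);
//   // score.clarity / score.completeness / score.actionability are 1-5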

/**
 * Evaluate a QA report against planted-bug ground truth.
 * Returns detection metrics for the planted bugs.
 */
export async function outcomeJudge(
  groundTruth: any,
  report: string,
): Promise<OutcomeJudgeResult> {
  return callJudge<OutcomeJudgeResult>(`You are evaluating a QA testing report against known ground truth bugs.

GROUND TRUTH (${groundTruth.total_bugs} planted bugs):
${JSON.stringify(groundTruth.bugs, null, 2)}

QA REPORT (generated by an AI agent):
${report}

For each planted bug, determine if the report identified it. A bug counts as
"detected" if the report describes the same defect, even if the wording differs.
Use the detection_hint keywords as guidance.

Also count false positives: issues in the report that don't correspond to any
planted bug AND aren't legitimate issues with the page.

Respond with ONLY valid JSON:
{
  "detected": ["bug-id-1", "bug-id-2"],
  "missed": ["bug-id-3"],
  "false_positives": 0,
  "detection_rate": 2,
  "evidence_quality": 4,
  "reasoning": "brief explanation"
}

Rules:
- "detected" and "missed" arrays must only contain IDs from the ground truth: ${groundTruth.bugs.map((b: any) => b.id).join(', ')}
- detection_rate = length of detected array
- evidence_quality (1-5): Do detected bugs have screenshots, repro steps, or specific element references?
  5 = excellent evidence for every bug, 1 = no evidence at all`);
}
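
// groundTruth is untyped; from the template above it is assumed to carry at
// least this shape (detection_hint per the prompt's guidance line):
//
//   { total_bugs: 3, bugs: [{ id: 'bug-1', detection_hint: 'spinner', ... }] }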

/**
 * Score mode-specific prose posture on two mode-dependent axes (1-5 each).
 *
 * Used by mode-posture regression tests to detect whether V1's Writing Style
 * rules have flattened the distinctive energy of expansion / forcing / builder
 * modes. See docs/designs/PLAN_TUNING_V1.md and the V1.1 mode-posture fix.
 *
 * The generator model is whatever the skill runs with (often Opus for
 * plan-ceo-review). The judge is always Sonnet via callJudge() for cost.
 */
export async function judgePosture(mode: PostureMode, text: string): Promise<PostureScore> {
  const rubrics: Record<PostureMode, { axis_a: string; axis_b: string; context: string }> = {
    expansion: {
      context: 'This text is expansion proposals emitted by /plan-ceo-review in SCOPE EXPANSION or SELECTIVE EXPANSION mode. The skill is supposed to lead with felt-experience vision, then close with concrete effort and impact.',
      axis_a: 'surface_framing (1-5): Does each proposal lead with felt-experience framing ("imagine", "when the user sees", "the moment X happens", or equivalent) BEFORE closing with concrete metrics? Penalize pure feature bullets ("Add X. Improves Y by Z%").',
      axis_b: 'decision_preservation (1-5): Does each proposal contain the elements a scope-expansion decision needs — what to build (concrete shape), effort (ideally both human and CC scales), risk or integration note? Penalize pure prose with no actionable content.',
    },
    forcing: {
      context: 'This text is the Q3 Desperate Specificity question emitted by /office-hours startup mode. The skill is supposed to force the founder to name a specific person and consequence, stacking multiple pressures.',
      axis_a: 'stacking_preserved (1-5): Does the question include at least 3 distinct sub-pressures (e.g., title? promoted? fired? up at night? OR career? day? weekend?) rather than a single neutral ask? Penalize "Who is your target user?" style collapses.',
      axis_b: 'domain_matched_consequence (1-5): Does the named consequence match the domain context in the input (B2B → career impact, consumer → daily pain, hobby/open-source → weekend project)? Penalize one-size-fits-all B2B career framing for non-B2B ideas.',
    },
    builder: {
      context: 'This text is a builder-mode response from /office-hours. The skill is supposed to riff creatively — "what if you also..." adjacent unlocks, cross-domain combinations, the "whoa" moment — not emit a structured product roadmap.',
      axis_a: 'unexpected_combinations (1-5): Does the output include at least 2 cross-domain or surprising adjacent unlocks ("what if you also...", "pipe it into X", etc.)? Penalize structured feature lists with no creative leaps.',
      axis_b: 'excitement_over_optimization (1-5): Does the output read as a creative riff (enthusiastic, opinionated, evocative) or as a PRD / product roadmap (structured, metric-driven, conservative)? Penalize PRD-voice language like "improve retention", "enable virality", "consider adding".',
    },
  };

  const r = rubrics[mode];
  return callJudge<PostureScore>(`You are evaluating prose quality for a mode-specific posture regression test.

Context: ${r.context}

Rate the following output on two dimensions (1-5 scale each):

- **axis_a** — ${r.axis_a}
- **axis_b** — ${r.axis_b}

Scoring guide:
- 5: Excellent — strong, unambiguous match for the posture
- 4: Good — matches posture with minor weakness
- 3: Adequate — partial match, noticeable flatness or structure
- 2: Poor — posture mostly flattened / collapsed
- 1: Fail — posture entirely missing, reads as the opposite mode

Respond with ONLY valid JSON in this exact format:
{"axis_a": N, "axis_b": N, "reasoning": "brief explanation naming specific phrases that drove the score"}

Here is the output to evaluate:

${text}`);
}