feat: wire eval-cache + eval-tier into LLM judge, pin E2E model

callJudge/judge now return `{result, meta}`, with SHA-based result
caching (~$0.18/run savings when SKILL.md is unchanged) and dynamic
model selection via the EVAL_JUDGE_TIER env var. E2E tests pass
`--model` from EVAL_TIER to `claude -p`. outcomeJudge keeps its simple
return type. All 8 LLM eval test sites updated with real cost figures
and `costs[]` entries.
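The caching and tier-selection behavior described above might look roughly like the sketch below. This is a minimal illustration, not the repo's actual code: `TIER_MODELS`, `pickModel`, the model names, and the injected `run` callback are all assumptions; only `callJudge`, the `{result, meta}` shape, and the `EVAL_JUDGE_TIER` env var come from the commit message.

```typescript
import { createHash } from "node:crypto";

// Hypothetical tier -> model mapping; names are illustrative.
const TIER_MODELS: Record<string, string> = {
  fast: "claude-haiku",
  default: "claude-sonnet",
  max: "claude-opus",
};

interface JudgeResult {
  result: string;
  meta: { model: string; cached: boolean; cacheKey: string };
}

// In-memory cache keyed by SHA-256 of model + inputs.
const cache = new Map<string, string>();

function pickModel(): string {
  const tier = process.env.EVAL_JUDGE_TIER ?? "default";
  return TIER_MODELS[tier] ?? TIER_MODELS["default"];
}

// Sketch of callJudge: hash the skill text + prompt, return the cached
// verdict on a hit, otherwise invoke the (injected) LLM call and store it.
async function callJudge(
  skillText: string,
  prompt: string,
  run: (model: string, input: string) => Promise<string>,
): Promise<JudgeResult> {
  const model = pickModel();
  const cacheKey = createHash("sha256")
    .update(model)
    .update(skillText)
    .update(prompt)
    .digest("hex");
  const hit = cache.get(cacheKey);
  if (hit !== undefined) {
    return { result: hit, meta: { model, cached: true, cacheKey } };
  }
  const result = await run(model, `${skillText}\n\n${prompt}`);
  cache.set(cacheKey, result);
  return { result, meta: { model, cached: false, cacheKey } };
}
```

Because the key includes the model, switching EVAL_JUDGE_TIER never returns a verdict produced by a different model; an unchanged SKILL.md plus an unchanged prompt hits the cache and skips the paid call.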

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Garry Tan
2026-03-15 16:47:35 -05:00
parent 02925cfc7a
commit 59752fc510
4 changed files with 227 additions and 52 deletions
+3 -1
@@ -231,7 +231,7 @@
 **Why:** Spot quality trends — is the app getting better or worse?
-**Context:** QA already writes structured reports. This adds cross-run comparison.
+**Context:** `eval:trend` now tracks test-level pass rates (eval infrastructure). QA-run-level trending (health scores over time across QA report files) is a separate feature that could reuse `computeTrends` pattern from `lib/cli-eval.ts`.
 **Effort:** S
 **Priority:** P2
@@ -335,6 +335,8 @@
 **Why:** Reduce E2E test cost and flakiness.
+**Status:** Model pinning shipped (session-runner.ts passes `--model` from `EVAL_TIER` env). Retry:2 still TODO.
 **Effort:** XS
 **Priority:** P2