mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-07 14:06:42 +02:00
chore(release): v1.5.1.0 — cut Haiku FP 44% → 23%
- VERSION: 1.5.0.0 → 1.5.1.0 (TUNING bump)
- CHANGELOG: [1.5.1.0] entry with measured numbers, knob list, and stop-loss rule spec
- TODOS: mark "Cut Haiku FP 44% → ~15%" P0 as SHIPPED, with pointers to CHANGELOG and the v1 plan

Measured: 56.2% detection (CI 50.1-62.1) / 22.9% FP (CI 18.1-28.6) on the 500-case BrowseSafe-Bench smoke. Gate passes (floor 55%, ceiling 25%).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@@ -241,7 +241,13 @@ defend the compiled-side ingress.
### ML Prompt Injection Classifier — v2 Follow-ups
-#### Cut Haiku false-positive rate from 44% toward ~15% (P0)
+#### ~~Cut Haiku false-positive rate from 44% toward ~15% (P0)~~ — SHIPPED in v1.5.1.0
+Measured result (500-case BrowseSafe-Bench smoke): detection 67.3% → **56.2%**, FP 44.1% → **22.9%**. Gate passes (detection ≥ 55%, FP ≤ 25%). Knobs that landed:
+
+- label-first ensemble voting (the verdict label trumps numeric confidence for the transcript layer)
+- hallucination guard (`verdict=block` at conf < 0.40 becomes a warn-vote)
+- new `THRESHOLDS.SOLO_CONTENT_BLOCK = 0.92` for label-less content classifiers
+- label-first extension to the toolOutput path
+- tighter Haiku prompt plus 8 few-shot exemplars
+- pinned Haiku model
+- `claude -p` spawned from `os.tmpdir()` so a project CLAUDE.md can't poison the classifier
+- classifier timeout bumped 15s → 45s
+
+CI gate: `browse/test/security-bench-ensemble.test.ts` replays the fixture and fails closed on a missing fixture or a security-layer diff. The original plan's stop-loss revert order didn't move the FP needle (the FPs came from single-layer-BLOCK paths, not the ensemble); the real levers turned out to be architectural (label-first) plus a new decoupled threshold.
+See CHANGELOG.md [1.5.1.0] for the full shipped summary.
+#### Original spec (pre-ship, retained for archive)
**What:** v1 ships the Haiku transcript classifier on every tool output (Read/Grep/Bash/Glob/WebFetch). The BrowseSafe-Bench smoke measured detection 67.3% and FP 44.1% — a 4.4x detection lift over L4-only, but FP tripled because Haiku is more aggressive than L4 on edge cases (phishing-style benign content, borderline social engineering). The review banner makes FPs recoverable, but 44% is too high for a delightful default.
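The label-first vote, hallucination guard, and decoupled solo-block threshold that shipped in v1.5.1.0 can be sketched roughly as below. Only the 0.40 guard floor and the 0.92 `SOLO_CONTENT_BLOCK` value come from the text above; the type and function names (`LayerVote`, `resolveVote`) and the 0.5 warn cutoff are illustrative assumptions, not the repo's actual API.

```typescript
// Sketch of label-first ensemble voting. Assumption: each classifier layer
// emits an optional verdict label plus a numeric confidence in [0, 1].
type Verdict = "block" | "warn" | "allow";

interface LayerVote {
  verdict?: Verdict;   // label from the classifier layer (absent for label-less layers)
  confidence: number;  // numeric confidence in [0, 1]
}

const THRESHOLDS = {
  // From the shipped summary: label-less content classifiers need very high
  // confidence to block on their own.
  SOLO_CONTENT_BLOCK: 0.92,
};

// From the shipped summary: a block verdict below this confidence is treated
// as a likely hallucination and downgraded to a warn-vote.
const HALLUCINATION_GUARD_FLOOR = 0.40;

function resolveVote(vote: LayerVote): Verdict {
  if (vote.verdict !== undefined) {
    // Label-first: the verdict label trumps the numeric confidence...
    if (vote.verdict === "block" && vote.confidence < HALLUCINATION_GUARD_FLOOR) {
      return "warn"; // ...except when the guard catches a low-confidence block
    }
    return vote.verdict;
  }
  // Label-less layer: only the score is available, so blocking alone requires
  // the decoupled high threshold. The 0.5 warn cutoff is an assumption.
  if (vote.confidence >= THRESHOLDS.SOLO_CONTENT_BLOCK) return "block";
  return vote.confidence >= 0.5 ? "warn" : "allow";
}
```

For example, `resolveVote({ verdict: "block", confidence: 0.30 })` yields `"warn"` (guard fires), while `resolveVote({ verdict: "block", confidence: 0.80 })` yields `"block"` because the label wins over the raw score.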
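A minimal sketch of the CI gate semantics described in the commit: enforce the 55% detection floor and 25% FP ceiling, and fail closed when the benchmark fixture is missing. The function names and fixture shape here are hypothetical stand-ins, not the actual contents of `security-bench-ensemble.test.ts`.

```typescript
import fs from "fs";

const DETECTION_FLOOR = 0.55; // gate floor from the commit message
const FP_CEILING = 0.25;      // gate ceiling from the commit message

// Assumed fixture shape: aggregate rates from a benchmark replay.
interface BenchResult {
  detectionRate: number; // fraction of injection cases caught
  fpRate: number;        // fraction of benign cases flagged
}

function gatePasses(r: BenchResult): boolean {
  return r.detectionRate >= DETECTION_FLOOR && r.fpRate <= FP_CEILING;
}

// Fail closed: a missing fixture is a hard gate failure, never a silent skip.
function loadFixtureOrFail(path: string): BenchResult {
  if (!fs.existsSync(path)) {
    throw new Error(`benchmark fixture missing: ${path} (gate fails closed)`);
  }
  return JSON.parse(fs.readFileSync(path, "utf8")) as BenchResult;
}
```

Under this sketch, the shipped v1.5.1.0 numbers (56.2% / 22.9%) pass the gate, while the v1 numbers (67.3% / 44.1%) fail on the FP ceiling despite the higher detection rate.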