Before 2026-03-20 the token filter defaulted to off (filter_ids=False, no
forbidden tokens). The subsequent split into filter_ascii/filter_special/
filter_retok introduced filter_ascii=True as the new default, silently
narrowing the optimization vocabulary by ~50% for Qwen and invalidating
comparisons against historical numbers (verified on claude_v63: avg loss
0.98 with the filter vs 0.49 without; ~12/20 samples match the legacy
hmcGCG numbers bit-exact once the filter is disabled). Revert the default
to False so fresh runs reproduce the earlier leaderboard out of the box;
presets that want ASCII filtering can still opt in explicitly.
Assisted-by: Claude <noreply@anthropic.com>
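The before/after defaults can be sketched as follows; only the three filter flag names come from this message, and the surrounding config class is hypothetical:

```python
from dataclasses import dataclass

# Hypothetical config container; the three flags are the ones named in the
# commit. After the revert, every filter defaults to off again.
@dataclass
class TokenFilterConfig:
    filter_ascii: bool = False    # was silently True after the filter_ids split
    filter_special: bool = False
    filter_retok: bool = False

# Presets that want the narrowed ASCII vocabulary must now opt in explicitly:
ascii_preset = TokenFilterConfig(filter_ascii=True)
```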
Add `claudini.leaderboard` module that scans benchmark result files and
generates per-track, per-model leaderboard JSONs ranking methods by
average loss. Output: results/loss_leaderboard/<preset>/<model_tag>.json
Also rename `_build_input_spec` -> `build_input_spec` in run_bench.py.
Assisted-by: Claude <noreply@anthropic.com>
Co-authored-by: Alexander Panfilov <apanfilov@g003.internal.cluster.is.localnet>
Co-authored-by: Peter Romov <peter@romov.com>
- **PEFT adapter merging.** `model_loader.py` auto-detects PEFT adapters (e.g. `facebook/Meta-SecAlign-8B`), merges on CPU in bf16, and caches the merged model to disk. No config flags needed.
- **Configurable quantization.** `quantization:` field in YAML or `--quantization` on CLI, accepting `nf4`, `fp4`, or `int8`. Replaces the old `load_in_4bit` boolean.
- **Multi-GPU sharding.** `device_map:` in configs or `--device-map` on CLI. Config value is now correctly read from YAML presets (was previously ignored).
- **CLI overrides.** New `--model`, `--device-map`, `--quantization` flags to override preset values from the command line.
- **SecAlign injection presets.** Configs for prompt injection on Meta-SecAlign-8B and 70B (default + Optuna-tuned), using the new `AlpacaInjectionSource`, which generates 3-role prompts from AlpacaFarm data with trusted/untrusted separation.
- **Fixes.** `BenchmarkRunner.summarize()` crash when all runs are skipped. System prompt suppression now works correctly (`""` suppresses model defaults, `None` omits the turn).
Co-authored-by: Peter Romov <peter@romov.com>
Co-authored-by: Alexander Panfilov <39771221+kotekjedi@users.noreply.github.com>
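The three accepted `quantization` values could plausibly map onto bitsandbytes-style loader kwargs along these lines. This is a sketch: the function name and the exact wiring inside `model_loader.py` are assumptions, though the kwarg names mirror common bitsandbytes options:

```python
def quantization_kwargs(name):
    """Translate the `quantization` preset/CLI value into model-loading kwargs.
    Hypothetical helper; key names follow common bitsandbytes conventions."""
    if name is None:
        return {}  # full precision, no quantization
    if name in ("nf4", "fp4"):
        # 4-bit loading with the requested quantization data type
        return {"load_in_4bit": True, "bnb_4bit_quant_type": name}
    if name == "int8":
        return {"load_in_8bit": True}
    raise ValueError(f"unknown quantization: {name!r}")
```

This replaces the old `load_in_4bit` boolean with a single string that selects among the three schemes.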
Add .claude/skills/claudini/SKILL.md to drive the autoresearch loop
via /claudini slash command. Update CLAUDE.md with skill docs. Replace
PROMPT.txt with the skill-based workflow. Rewrite README to feature
the autoresearch loop prominently. Add easy_1e16 and easy_1e17 preset
configs and update safeguard configs.
Assisted-by: Claude <noreply@anthropic.com>
Rename claude_v53 to claude_oss_v53 to match safeguard track naming
convention. Add README documenting what unrolled methods are and how
to create them.