Add `claudini.leaderboard` module that scans benchmark result files and
generates per-track, per-model leaderboard JSONs ranking methods by
average loss. Output: results/loss_leaderboard/<preset>/<model_tag>.json
Also rename `_build_input_spec` -> `build_input_spec` in run_bench.py.
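A minimal sketch of the scan-and-rank step, assuming a hypothetical
record schema ({"preset", "model_tag", "method", "loss"}) and the
output layout named above; the actual module's field names and file
format may differ:

```python
import json
from collections import defaultdict
from pathlib import Path


def build_leaderboards(results_dir, out_dir):
    """Scan result JSONs under results_dir and write one leaderboard
    per (preset, model_tag), ranking methods by average loss.

    Assumes each result file contains a JSON list of records with
    'preset', 'model_tag', 'method', and 'loss' keys (hypothetical).
    """
    # (preset, model_tag) -> method -> list of observed losses
    losses = defaultdict(lambda: defaultdict(list))
    for path in Path(results_dir).rglob("*.json"):
        for rec in json.loads(path.read_text()):
            key = (rec["preset"], rec["model_tag"])
            losses[key][rec["method"]].append(rec["loss"])

    for (preset, model_tag), per_method in losses.items():
        # Lower average loss ranks first.
        board = sorted(
            ({"method": m, "avg_loss": sum(v) / len(v)}
             for m, v in per_method.items()),
            key=lambda row: row["avg_loss"],
        )
        dest = Path(out_dir) / preset / f"{model_tag}.json"
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(json.dumps(board, indent=2))
```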
Assisted-by: Claude <noreply@anthropic.com>
Co-authored-by: Alexander Panfilov <apanfilov@g003.internal.cluster.is.localnet>
Co-authored-by: Peter Romov <peter@romov.com>
Add .claude/skills/claudini/SKILL.md to drive the autoresearch loop
via the /claudini slash command. Update CLAUDE.md with skill docs. Replace
PROMPT.txt with the skill-based workflow. Rewrite README to feature
the autoresearch loop prominently. Add easy_1e16 and easy_1e17 preset
configs and update safeguard configs.
Assisted-by: Claude <noreply@anthropic.com>