3 Commits

Author SHA1 Message Date
Peter Romov 59106bdf3c SecAlign-70B support: configs, quantization, multi-GPU (#1)
- **PEFT adapter merging.** `model_loader.py` auto-detects PEFT adapters (e.g. `facebook/Meta-SecAlign-8B`), merges on CPU in bf16, and caches the merged model to disk. No config flags needed.
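
  The detection-and-cache step could be sketched like this (heuristic and cache layout are assumptions, not the real `model_loader.py` logic — PEFT adapter checkpoints do ship an `adapter_config.json`, which makes a cheap detection signal):

  ```python
  from pathlib import Path

  def looks_like_peft_adapter(model_dir: str) -> bool:
      # PEFT adapter checkpoints contain adapter_config.json (plus adapter
      # weights) rather than a full base-model checkpoint, so its presence
      # is a cheap detection signal.
      return (Path(model_dir) / "adapter_config.json").exists()

  def merged_cache_path(adapter_id: str, cache_root: str = "merged_models") -> Path:
      # Hypothetical on-disk cache layout: one directory per adapter id,
      # with slashes flattened so "org/name" stays a single path component.
      return Path(cache_root) / adapter_id.replace("/", "--")
  ```

  After the first merge, subsequent runs would load the merged model straight from `merged_cache_path(...)` instead of re-merging.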

- **Configurable quantization.** `quantization:` field in YAML or `--quantization` on CLI, accepting `nf4`, `fp4`, or `int8`. Replaces the old `load_in_4bit` boolean.
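
  A minimal sketch of how the three mode names could map onto loader kwargs (kwarg names follow the transformers/bitsandbytes convention; the exact wiring inside `model_loader.py` is an assumption):

  ```python
  def quantization_kwargs(mode: str | None) -> dict:
      # Translate the preset's `quantization:` value into model-loading kwargs.
      if mode is None:
          return {}
      if mode in ("nf4", "fp4"):
          # Both 4-bit variants load in 4 bit; they differ only in quant type.
          return {"load_in_4bit": True, "bnb_4bit_quant_type": mode}
      if mode == "int8":
          return {"load_in_8bit": True}
      raise ValueError(f"unsupported quantization mode: {mode!r}")
  ```

  Collapsing the old `load_in_4bit` boolean into a string field keeps one knob instead of three mutually exclusive flags.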

- **Multi-GPU sharding.** `device_map:` in configs or `--device-map` on CLI. Config value is now correctly read from YAML presets (was previously ignored).
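
  A preset carrying both new fields might look like this (key names come from the commit message; the surrounding structure is hypothetical):

  ```yaml
  # hypothetical preset excerpt
  model: facebook/Meta-SecAlign-8B
  quantization: nf4      # nf4 | fp4 | int8
  device_map: auto       # now actually read from the YAML preset
  ```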

- **CLI overrides.** New `--model`, `--device-map`, `--quantization` flags to override preset values from the command line.
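
  The override precedence (CLI flag beats preset, unset flags leave the preset alone) can be sketched with `argparse`; flag names are from the commit message, the real parser may differ:

  ```python
  import argparse

  def apply_cli_overrides(preset: dict, argv: list[str]) -> dict:
      # Only flags the user actually passed replace YAML preset values.
      parser = argparse.ArgumentParser()
      parser.add_argument("--model")
      parser.add_argument("--device-map", dest="device_map")
      parser.add_argument("--quantization", choices=["nf4", "fp4", "int8"])
      args = parser.parse_args(argv)
      merged = dict(preset)
      merged.update({k: v for k, v in vars(args).items() if v is not None})
      return merged
  ```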

- **SecAlign injection presets.** Configs for prompt injection on Meta-SecAlign-8B and 70B (default + Optuna-tuned), using the new `AlpacaInjectionSource`, which generates 3-role prompts from AlpacaFarm data with trusted/untrusted separation.
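
  The 3-role layout with trusted/untrusted separation might look like this (role names and field layout are illustrative assumptions, not necessarily the real `AlpacaInjectionSource` output or SecAlign's template roles):

  ```python
  def build_injection_sample(instruction: str, data: str, injection: str) -> list[dict]:
      # Trusted instruction goes in the user turn; the untrusted data, with
      # the injected instruction appended, lands in a separate third turn so
      # the model can treat the two provenances differently.
      return [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": instruction},                # trusted
          {"role": "input", "content": f"{data}\n\n{injection}"},  # untrusted
      ]
  ```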

- **Fixes.** Resolved a `BenchmarkRunner.summarize()` crash when all runs are skipped. System prompt suppression now works correctly (`""` suppresses model defaults, `None` omits the turn).
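
  The `""` vs `None` distinction can be sketched as (helper name is hypothetical):

  ```python
  def system_turn(system_prompt: str | None) -> list[dict]:
      # None -> omit the system turn entirely (the chat template may then
      #         insert the model's own default);
      # ""   -> emit an explicit empty system turn, suppressing that default.
      if system_prompt is None:
          return []
      return [{"role": "system", "content": system_prompt}]
  ```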

Co-authored-by: Peter Romov <peter@romov.com>
Co-authored-by: Alexander Panfilov <39771221+kotekjedi@users.noreply.github.com>
2026-04-06 13:36:08 +00:00
Peter Romov 69c04a2b9e Add autoresearch skill, update configs and README
Add .claude/skills/claudini/SKILL.md to drive the autoresearch loop
via /claudini slash command. Update CLAUDE.md with skill docs. Replace
PROMPT.txt with the skill-based workflow. Rewrite README to feature
the autoresearch loop prominently. Add easy_1e16 and easy_1e17 preset
configs and update safeguard configs.

Assisted-by: Claude <noreply@anthropic.com>
2026-03-26 17:19:04 +00:00
Peter Romov 5b6058b3c4 Initial commit
Co-Authored-By: Alexander Panfilov <sasha_pusha@mail.de>
Co-Authored-By: Claude <noreply@anthropic.com>
2026-03-25 02:09:26 +00:00