OBLITERATUS/examples/remote_gpu_node.yaml
Stella Biderman cbdb772eb9 Add multi-GPU support with --gpus flag
Adds --gpus flag to obliterate, run, and tourney commands for controlling
which GPUs to use (sets CUDA_VISIBLE_DEVICES). Works both locally and with
--remote. Models are automatically split across selected GPUs via
accelerate's device_map="auto". Also adds gpus field to remote YAML config.
2026-03-12 12:35:29 -04:00
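The mechanism described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual Obliteratus implementation: the `select_gpus` helper name is invented, and the model-loading step is shown only as a comment because it requires a GPU machine with `transformers` and `accelerate` installed.

```python
import os

def select_gpus(gpus: str) -> None:
    """Restrict which GPUs CUDA sees, as --gpus presumably does.

    Must run before torch/CUDA is initialized, otherwise the
    setting has no effect on already-created CUDA contexts.
    """
    os.environ["CUDA_VISIBLE_DEVICES"] = gpus

# Equivalent of `--gpus 0,2`: only GPUs 0 and 2 are visible.
select_gpus("0,2")

# With the visible set restricted, accelerate's device_map="auto"
# shards the model across exactly those GPUs, e.g.:
#
#   from transformers import AutoModelForCausalLM
#   model = AutoModelForCausalLM.from_pretrained(
#       "meta-llama/Llama-3.1-8B-Instruct", device_map="auto"
#   )
```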


# Example: Run an ablation study on a remote GPU node via SSH.
#
# Usage:
#   obliteratus run examples/remote_gpu_node.yaml
#
# The 'remote' section tells Obliteratus to SSH into the specified host,
# install obliteratus if needed, run the pipeline there, and copy results
# back to the local machine.
#
# You can also use --remote on any command instead of a YAML section:
#   obliteratus obliterate meta-llama/Llama-3.1-8B-Instruct --remote root@gpu-node --ssh-key ~/.ssh/id_rsa
#
# Multi-GPU: Models are automatically split across all available GPUs via
# accelerate's device_map="auto". Use --gpus or the gpus: field to select
# specific GPUs:
#   obliteratus obliterate model --remote root@gpu-node --gpus 0,1,2,3
model:
  name: meta-llama/Llama-3.1-8B-Instruct
  task: causal_lm
  dtype: float16
  device: auto

dataset:
  name: wikitext
  split: test
  max_samples: 500

strategies:
  - name: layer_removal
    params:
      layer_indices: [10, 11, 12]

metrics: [perplexity]
batch_size: 8
max_length: 512
output_dir: results/remote_example

remote:
  host: gpu-node.example.com
  user: root
  port: 22
  ssh_key: ~/.ssh/id_rsa
  remote_dir: /tmp/obliteratus_run
  python: python3
  sync_results: true
  # gpus: "0,1,2,3"  # uncomment to select specific GPUs (default: all)