
Introduction

Welcome to the AI LLM Red Team Handbook.

We designed this toolkit for security consultants, red teamers, and AI engineers. It provides end-to-end methodologies for identifying, assessing, and mitigating risks in Large Language Models (LLMs) and Generative AI systems.


🚀 Choose Your Path

🔬 The Consultant's Handbook

The foundational work. Theoretical deep-dives, detailed methodologies, compliance frameworks, and strategies for building a program.

📖 Browse Handbook Chapters

⚔️ The Field Manual

The hands-on work. Operational playbooks, copy-paste payloads, quick reference cards, and checklists for live engagements.

⚔️ Go to Field Manuals

📚 Handbook Structure

Part I: Foundations (Ethics, Legal, Mindset)
Part II: Project Preparation (Scoping, Threat Modeling)
Part III: Technical Fundamentals (Architecture, Tokenization)
Part IV: Pipeline Security (RAG, Supply Chain)
Part V: Attacks & Techniques (The Red Team Core)
Part VI: Defense & Mitigation
Part VII: Advanced Operations
Part VIII: Advanced Topics

🧩 Reference & Resources