mirror of
https://github.com/Shiva108/ai-llm-red-team-handbook.git
synced 2026-02-12 14:42:46 +00:00
6.9 KiB
Introduction
Welcome to the AI LLM Red Team Handbook.
We designed this handbook for security consultants, red teamers, and AI engineers. It provides end-to-end methodologies for identifying, assessing, and mitigating risks in Large Language Models (LLMs) and Generative AI systems.
🚀 Choose Your Path
| 🔬 The Consultant's Handbook | ⚔️ The Field Manual |
|---|---|
| The foundational work. Theoretical deep-dives, detailed methodologies, compliance frameworks, and strategies for building a program. | The hands-on work. Operational playbooks, copy-paste payloads, quick reference cards, and checklists for live engagements. |
| 📖 Browse Handbook Chapters | ⚡ Go to Field Manuals |
📚 Handbook Structure
Part I: Foundations (Ethics, Legal, Mindset)
Part II: Project Preparation (Scoping, Threat Modeling)
Part III: Technical Fundamentals (Architecture, Tokenization)
Part IV: Pipeline Security (RAG, Supply Chain)
Part V: Attacks & Techniques (The Red Team Core)
- Chapter 14: Prompt Injection
- Chapter 15: Data Leakage and Extraction
- Chapter 16: Jailbreaks and Bypass Techniques
- Chapter 17: Plugin and API Exploitation
- Chapter 18: Evasion, Obfuscation, and Adversarial Inputs
- Chapter 19: Training Data Poisoning
- Chapter 20: Model Theft and Membership Inference
- Chapter 21: Model DoS and Resource Exhaustion
- Chapter 22: Cross-Modal and Multimodal Attacks
- Chapter 23: Advanced Persistence and Chaining
- Chapter 24: Social Engineering with LLMs
Part VI: Defense & Mitigation
Part VII: Advanced Operations
- Chapter 31: AI System Reconnaissance
- Chapter 32: Automated Attack Frameworks
- Chapter 33: Red Team Automation
- Chapter 34: Defense Evasion Techniques
- Chapter 35: Post-Exploitation in AI Systems
- Chapter 36: Reporting and Communication
- Chapter 37: Remediation Strategies
- Chapter 38: Continuous Red Teaming
- Chapter 39: AI Bug Bounty Programs