
Introduction

Welcome to the AI LLM Red Team Handbook.

We designed this handbook for security consultants, red teamers, and AI engineers. It provides end-to-end methodologies for identifying, assessing, and mitigating risks in Large Language Models (LLMs) and Generative AI systems.


🚀 Choose Your Path

🔬 The Consultant's Handbook

The foundational work. Theoretical deep-dives, detailed methodologies, compliance frameworks, and strategies for building a red-team program.

📖 Browse Handbook Chapters

⚔️ The Field Manual

The hands-on work. Operational playbooks, copy-paste payloads, quick reference cards, and checklists for live engagements.

Go to Field Manuals

📚 Handbook Structure

Part I: Foundations (Ethics, Legal, Mindset)
Part II: Project Preparation (Scoping, Threat Modeling)
Part III: Technical Fundamentals (Architecture, Tokenization)
Part IV: Pipeline Security (RAG, Supply Chain)
Part V: Attacks & Techniques (The Red Team Core)
Part VI: Defense & Mitigation
Part VII: Advanced Operations
Part VIII: Advanced Topics

🧩 Reference & Resources