OWASP AI Testing Guide Table of Contents
1. Introduction
2. Threat Modeling AI Systems
- 2.1.2 Identify AI System Responsible AI (RAI)/Trustworthy AI Threats
- 2.2 Appendix A: Rationale For Using SAIF (Secure AI Framework)
- 2.2 Appendix B: Distributed, Immutable, Ephemeral (DIE) Threat Identification
- 2.2 Appendix D: Threat Enumeration to AI Architecture Components
- 2.2 Appendix E: Mapping AI Threats Against AI Systems Vulnerabilities (CVEs & CWEs)
3. OWASP AI Testing Guide Framework
- 3.1 🟦 AI Application Testing
- 3.1.1 | AITG-APP-01 | Testing for Prompt Injection |
- 3.1.2 | AITG-APP-02 | Testing for Indirect Prompt Injection |
- 3.1.3 | AITG-APP-03 | Testing for Sensitive Data Leak |
- 3.1.4 | AITG-APP-04 | Testing for Input Leakage |
- 3.1.5 | AITG-APP-05 | Testing for Unsafe Outputs |
- 3.1.6 | AITG-APP-06 | Testing for Agentic Behavior Limits |
- 3.1.7 | AITG-APP-07 | Testing for Prompt Disclosure |
- 3.1.8 | AITG-APP-08 | Testing for Embedding Manipulation |
- 3.1.9 | AITG-APP-09 | Testing for Model Extraction |
- 3.1.10 | AITG-APP-10 | Testing for Content Bias |
- 3.1.11 | AITG-APP-11 | Testing for Hallucinations |
- 3.1.12 | AITG-APP-12 | Testing for Toxic Output |
- 3.1.13 | AITG-APP-13 | Testing for Over-Reliance on AI |
- 3.1.14 | AITG-APP-14 | Testing for Explainability and Interpretability |
- 3.2 🟪 AI Model Testing
- 3.2.1 | AITG-MOD-01 | Testing for Evasion Attacks |
- 3.2.2 | AITG-MOD-02 | Testing for Runtime Model Poisoning |
- 3.2.3 | AITG-MOD-03 | Testing for Poisoned Training Sets |
- 3.2.4 | AITG-MOD-04 | Testing for Membership Inference |
- 3.2.5 | AITG-MOD-05 | Testing for Inversion Attacks |
- 3.2.6 | AITG-MOD-06 | Testing for Robustness to New Data |
- 3.2.7 | AITG-MOD-07 | Testing for Goal Alignment |
- 3.3 🟥 AI Infrastructure Testing
- 3.3.1 | AITG-INF-01 | Testing for Supply Chain Tampering |
- 3.3.2 | AITG-INF-02 | Testing for Resource Exhaustion |
- 3.3.3 | AITG-INF-03 | Testing for Plugin Boundary Violations |
- 3.3.4 | AITG-INF-04 | Testing for Capability Misuse |
- 3.3.5 | AITG-INF-05 | Testing for Fine-tuning Poisoning |
- 3.3.6 | AITG-INF-06 | Testing for Dev-Time Model Theft |
- 3.4 🟨 AI Data Testing
- 3.4.1 | AITG-DAT-01 | Testing for Training Data Exposure |
- 3.4.2 | AITG-DAT-02 | Testing for Runtime Exfiltration |
- 3.4.3 | AITG-DAT-03 | Testing for Dataset Diversity & Coverage |
- 3.4.4 | AITG-DAT-04 | Testing for Harmful Content in Data |
- 3.4.5 | AITG-DAT-05 | Testing for Data Minimization & Consent |
- 3.4.6 | AITG-DAT-06 | Testing for Robustness to New Data |
- 3.4.7 | AITG-DAT-07 | Testing for Goal Alignment |
4. Domain Specific Testing
- 4.1 References