4. Appendixes and References

Introduction

This chapter provides all supporting materials that complement the main body of the OWASP AI Testing Guide. The appendixes offer structured frameworks, threat models, risk lifecycles, and domain-specific guidance that reinforce the methodology proposed in the guide.

These resources serve three primary goals:

  1. Deepen the concepts presented earlier in the document.
  2. Operationalize AI testing through models, mappings, and methodologies.
  3. Ground the guide in recognized industry standards, security taxonomies, and academic literature.

The chapter concludes with a complete References section that documents all sources used throughout the guide.


4.1 Appendix A: Rationale for Using SAIF (Secure AI Framework)

Appendix A introduces the rationale for adopting the Secure AI Framework (SAIF) as a foundational model for trustworthy AI development and testing.

SAIF provides:

  • a holistic structure covering data, model, application, and infrastructure layers,
  • a secure-by-design perspective tailored to AI systems,
  • alignment with modern risk taxonomies and governance frameworks, and
  • conceptual continuity with the appendixes on threats, risk, and architecture.

This appendix explains why AI requires a framework beyond traditional software testing paradigms.


4.2 Appendix B: Distributed, Immutable, Ephemeral (DIE) Threat Identification

This appendix presents the DIE model—Distributed, Immutable, Ephemeral—as a lens for identifying threats in cloud-native and modern AI environments.

AI systems often include:

  • distributed compute clusters,
  • immutable artifacts (e.g., containers, model binaries),
  • ephemeral jobs (e.g., training pipelines, microservices).

These characteristics create unique attack surfaces. The DIE framework helps testers recognize threats such as supply-chain injection, poisoned artifacts, workflow manipulation, and cloud-environment exploitation.
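The DIE lens can be used as a simple enumeration aid: given the properties a system exhibits, list the threat classes to probe. The sketch below is a minimal, hypothetical illustration using only the threat classes named above; the assignment of each threat to a DIE property is an illustrative assumption, not the appendix's authoritative mapping.

```python
# Hypothetical sketch: map each DIE property to example threat classes,
# then enumerate which threats apply to a given system profile.
# The property-to-threat assignments are illustrative assumptions.
DIE_THREATS = {
    "distributed": ["workflow manipulation", "cloud environment exploitation"],
    "immutable": ["supply-chain injection", "poisoned artifacts"],
    "ephemeral": ["workflow manipulation"],
}


def enumerate_threats(properties):
    """Return deduplicated threat classes for a system's DIE properties."""
    threats = []
    for prop in properties:
        for threat in DIE_THREATS.get(prop, []):
            if threat not in threats:
                threats.append(threat)
    return threats


print(enumerate_threats(["distributed", "immutable"]))
```

A tester would feed in the properties observed during architecture review and use the resulting list as a starting checklist, deduplicating threats that arise from more than one property.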


4.3 Appendix C: Risk Lifecycle for Secure AI Systems

Appendix C describes the AI-specific risk lifecycle, reflecting the dynamic and evolving nature of AI systems.

The lifecycle includes:

  • identifying risks,
  • assessing likelihood and impact,
  • designing mitigation strategies,
  • monitoring for drift or adversarial manipulation,
  • reviewing residual risk and updating controls.

Special attention is given to phenomena unique to AI systems, such as data drift, model drift, and feedback-loop risks.
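The lifecycle stages above can be sketched as an iterative loop in which residual risk is recomputed after each mitigation pass. The `Risk` class, the likelihood-times-impact scoring, and the `mitigation_factor` parameter below are hypothetical simplifications for illustration, not the guide's prescribed risk model.

```python
from dataclasses import dataclass

# Stage names mirror the bullet list above; the scoring scheme is a
# hypothetical simplification (likelihood x impact, reduced by mitigation).
STAGES = ["identify", "assess", "mitigate", "monitor", "review"]


@dataclass
class Risk:
    name: str
    likelihood: float  # estimated probability, 0.0-1.0
    impact: float      # estimated severity, 0.0-1.0
    residual: float = 1.0

    def score(self):
        return self.likelihood * self.impact


def run_cycle(risk, mitigation_factor):
    """One pass through the lifecycle: assess, apply a mitigation of the
    given effectiveness, and record the residual risk for the next review."""
    risk.residual = risk.score() * (1.0 - mitigation_factor)
    return risk.residual


drift = Risk("data drift", likelihood=0.6, impact=0.5)
print(run_cycle(drift, mitigation_factor=0.5))
```

Because AI risks such as drift re-emerge over time, the point of the loop is that `run_cycle` is repeated on monitoring evidence rather than executed once at design time.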


4.4 Appendix D: Threat Enumeration to AI Architecture Components

This appendix provides a structured mapping of threats across AI architectural components, including:

  • data layer,
  • model layer,
  • application/API layer,
  • infrastructure and deployment environment.

For each component, the appendix details:

  • key threat vectors,
  • typical vulnerabilities,
  • propagation effects across layers.

This enumeration forms the basis for the testing procedures defined earlier in the guide.
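The per-component enumeration described above can be represented as a structured mapping: for each layer, its threat vectors, typical vulnerabilities, and the layers its effects propagate to. The entries below are abbreviated, illustrative examples, not the appendix's full tables, and the single-successor propagation model is a deliberate simplification.

```python
# Hypothetical sketch of the enumeration structure: per-layer threat
# vectors, vulnerabilities, and propagation targets. Entries are examples.
THREAT_MAP = {
    "data": {
        "threat_vectors": ["training data poisoning"],
        "vulnerabilities": ["unvalidated data sources"],
        "propagates_to": ["model"],
    },
    "model": {
        "threat_vectors": ["model extraction", "adversarial inputs"],
        "vulnerabilities": ["overexposed inference endpoints"],
        "propagates_to": ["application"],
    },
    "application": {
        "threat_vectors": ["prompt injection"],
        "vulnerabilities": ["missing output filtering"],
        "propagates_to": ["infrastructure"],
    },
    "infrastructure": {
        "threat_vectors": ["supply-chain injection"],
        "vulnerabilities": ["unsigned build artifacts"],
        "propagates_to": [],
    },
}


def propagation_chain(layer):
    """Follow propagation effects from one layer downstream through the stack."""
    chain = [layer]
    while THREAT_MAP[layer]["propagates_to"]:
        layer = THREAT_MAP[layer]["propagates_to"][0]
        chain.append(layer)
    return chain


print(propagation_chain("data"))
```

Tracing a chain this way makes the cross-layer propagation effects explicit: a poisoned data source is not only a data-layer finding but a potential model-, application-, and infrastructure-layer issue.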


4.5 Appendix E: Mapping AI Threats Against AI System Vulnerabilities (CVEs & CWEs)

Appendix E connects AI-specific threats to established vulnerability taxonomies such as:

  • CWE (Common Weakness Enumeration),
  • CVE (Common Vulnerabilities and Exposures),
  • relevant MITRE classifications.

This mapping demonstrates how threats like model extraction, prompt injection, and training data leakage relate to traditional software weakness classes. The goal is to integrate AI-security testing with existing enterprise vulnerability management workflows.
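In practice, such a mapping lets AI findings be filed in the same tracker as conventional vulnerabilities. The sketch below shows the shape of that integration; the specific CWE pairings (e.g., training data leakage to CWE-200, Exposure of Sensitive Information; prompt injection to CWE-77, Command Injection) are illustrative assumptions, not the appendix's authoritative mapping.

```python
# Hypothetical sketch: pair AI-specific threats with CWE weakness classes so
# findings can enter an existing vulnerability-management workflow.
# The CWE assignments are illustrative examples only.
THREAT_TO_CWE = {
    "training data leakage": ["CWE-200"],  # Exposure of Sensitive Information
    "model extraction": ["CWE-200"],
    "prompt injection": ["CWE-77"],        # Command Injection
}


def to_tracker_findings(observed_threats):
    """Emit (threat, CWE) pairs in the shape a vulnerability tracker expects;
    unmapped threats are flagged for manual triage."""
    findings = []
    for threat in observed_threats:
        for cwe in THREAT_TO_CWE.get(threat, ["CWE-unmapped"]):
            findings.append((threat, cwe))
    return findings


print(to_tracker_findings(["prompt injection", "model extraction"]))
```

The unmapped fallback matters: AI-specific threats without an established weakness class still need to surface for triage rather than silently dropping out of the workflow.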


4.6 Appendix F: Domain-Specific Testing

This appendix outlines considerations for testing AI systems in specific industry domains, including:

  • healthcare,
  • finance,
  • automotive and autonomous systems,
  • critical infrastructure,
  • defense and aerospace.

Each domain presents unique risks, regulatory frameworks, and operational constraints. The appendix provides guidance on tailoring AI testing strategies to sector-specific requirements and workflows.


4.7 References

The final section compiles all sources cited throughout this guide, including standards, academic research, industry papers, and open-source projects. These references provide the foundational material supporting the frameworks, methodologies, and recommendations outlined in the OWASP AI Testing Guide.