3.5 KiB
3.1 AI Application Testing
The AI Application Testing category addresses security, safety, and trust risks arising specifically from interactions between AI systems, end-users, and external data sources. This testing category evaluates the behavior of AI applications when processing user inputs, generating outputs, and handling runtime interactions, with the goal of uncovering and mitigating vulnerabilities unique to AI-driven interactions, such as prompt injections, sensitive data leaks, and unsafe or biased outputs.
Given the direct exposure of AI applications to users and external environments, testing at this layer is critical to prevent unauthorized access, manipulation of AI behavior, and compliance violations. The category covers comprehensive evaluation against well-defined threat scenarios, including adversarial prompt manipulation, unsafe outputs, agentic misbehavior, and risks related to model extraction or embedding manipulation.
Scope of This Testing Category
This category evaluates whether the AI application:
-
Is resistant to prompt manipulation and injection attacks
→ AITG-APP-01: Testing for Prompt Injection
→ AITG-APP-02: Testing for Indirect Prompt Injection -
Maintains information boundaries to avoid sensitive data leaks
→ AITG-APP-03: Testing for Sensitive Data Leak
→ AITG-APP-04: Testing for Input Leakage
→ AITG-APP-07: Testing for Prompt Disclosure -
Generates safe, unbiased, and properly aligned outputs
→ AITG-APP-05: Testing for Unsafe Outputs
→ AITG-APP-10: Testing for Harmful Content Bias
→ AITG-APP-11: Testing for Hallucinations
→ AITG-APP-12: Testing for Toxic Output -
Manages agentic behavior and operational limits effectively
→ AITG-APP-06: Testing for Agentic Behavior Limits
→ AITG-APP-13: Testing for Over-Reliance on AI -
Provides explainability and interpretability for AI decisions
→ AITG-APP-14: Testing for Explainability and Interpretability -
Is protected against embedding-based attacks and model extraction attempts
→ AITG-APP-08: Testing for Embedding Manipulation
→ AITG-APP-09: Testing for Model Extraction
Each test within the AI Application Testing category contributes to the holistic security posture of AI systems by systematically addressing application-level risks, ensuring robust operation in real-world environments, and helping organizations comply with ethical standards and regulations.