Update AITG-APP-04_Testing_for_Input_Leakage.md
@@ -141,10 +141,8 @@ A vulnerability is confirmed if the AI model:
 - Ensure guardrails normalize inputs prior to filtering and detect obfuscated sensitive data and contextual cues in both inputs and outputs.

 ### Suggested Tools
-- **Garak – Input Leakage Probe**: Specialized Garak module designed to detect sensitive input data leaks.
-  - **URL**: [https://github.com/NVIDIA/garak/blob/main/garak/probes/leakreveal.py](https://github.com/NVIDIA/garak/blob/main/garak/probes/leakreveal.py)
-- **Microsoft Counterfit**: An AI security tool capable of testing for input leakage issues in model interactions.
-  - **URL**: [https://github.com/Azure/counterfit](https://github.com/Azure/counterfit)
+- **Garak – Input Leakage Probe**: Specialized Garak module designed to detect sensitive input data leaks - [Link](https://github.com/NVIDIA/garak/blob/main/garak/probes/leakreveal.py)
+- **Microsoft Counterfit**: An AI security tool capable of testing for input leakage issues in model interactions - [Link](https://github.com/Azure/counterfit)

 ### References
 - OWASP Top 10 LLM02:2025 Sensitive Information Disclosure - [https://genai.owasp.org](https://genai.owasp.org/llmrisk/llm02-insecure-output-handling)
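The context line above recommends normalizing inputs before filtering so that obfuscated sensitive data is still caught. As a rough illustration of that idea (not part of the guide and not tied to either tool listed in the diff), the following Python sketch folds Unicode tricks and spaced-out digits before pattern matching; the pattern set and the `normalize` / `contains_sensitive_data` helper names are hypothetical.

```python
import re
import unicodedata

# Zero-width characters commonly inserted to break up secrets so naive filters miss them.
ZERO_WIDTH = dict.fromkeys(map(ord, "\u200b\u200c\u200d\ufeff"))

# Hypothetical pattern set; a real deployment would use a vetted PII/secret detector.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,}"),
    "us_ssn": re.compile(r"\b\d{3}-?\d{2}-?\d{4}\b"),
    "api_key_hint": re.compile(r"\b(sk|api|token)[-_][a-z0-9]{16,}\b"),
}

def normalize(text: str) -> str:
    """Fold full-width/zero-width Unicode tricks and casing before filtering."""
    text = unicodedata.normalize("NFKC", text).translate(ZERO_WIDTH).lower()
    # Aggressively join digits split by spaces or hyphens ("1 2 3 - 4 5 ..." style obfuscation).
    return re.sub(r"(?<=\d)[\s-]+(?=\d)", "", text)

def contains_sensitive_data(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the normalized text."""
    normalized = normalize(text)
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(normalized)]

# The same check is applied to prompts (inputs) and completions (outputs),
# mirroring the guidance to inspect both sides of the exchange.
for sample in ["My SSN is 1 2 3 - 4 5 - 6 7 8 9", "Reach me at ｍｅ＠ｅｘａｍｐｌｅ．ｃｏｍ"]:
    print(sample, "->", contains_sensitive_data(sample))
```

Normalizing first and matching second keeps the filter logic independent of whatever obfuscation trick an attacker uses; only the normalization step needs to grow as new evasions appear.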