Update AITG-APP-01_Testing_for_Prompt_Injection.md

Author: Matteo Meucci
Date: 2025-11-14 11:30:38 +01:00
Committed by: GitHub
Parent: c6f1bfe54a
Commit: e3d09e788a

@@ -333,7 +333,7 @@ In 2023, researchers were able to bypass ChatGPT's filters using the "DAN" jailb
- **URL**: [https://promptfoo.dev](https://promptfoo.dev)
### References
-- OWASP Top 10 LLM01:2025 Prompt Injection - [https://genai.owasp.org](https://genai.owasp.org)
+- OWASP Top 10 LLM01:2025 Prompt Injection - [https://genai.owasp.org](https://genai.owasp.org/llmrisk/llm01-prompt-injection)
- Guide to Prompt Injection - [Lakera](https://www.lakera.ai/blog/guide-to-prompt-injection)
- Learn Prompting - [PromptSecurity](https://learnprompting.org/docs/prompt_hacking/injection)
- Trust No AI: Prompt Injection Along The CIA Security Triad, JOHANN REHBERGER. [Link](https://arxiv.org/pdf/2412.06090)