Update AITG-APP-02_Testing_for_Indirect_Prompt_Injection.md

This commit is contained in:
Matteo Meucci
2025-11-14 11:36:10 +01:00
committed by GitHub
parent c4d7bd50e6
commit 73844d7cf8

View File

@@ -19,7 +19,7 @@ Technically verify whether an LLM or AI application can be indirectly manipulate
The following is a diagram that represents this kind of test:
<p align="center">
<img src="/Document/images/IndirectPromptInjection.png" alt="Description" width="800"/>
<img src="/Document/images/IndirectPromptInjection.png" alt="Description" width="400"/>
</p>
For this kind of test you need to craft a web page with the malicious payload that will be executed in the user prompt and observe if the AI system will execute your payload.