Update AITG-APP-02_Testing_for_Indirect_Prompt_Injection.md

This commit is contained in:
Matteo Meucci
2025-11-14 11:37:34 +01:00
committed by GitHub
parent 73844d7cf8
commit 11463f42e5

View File

@@ -19,7 +19,7 @@ Technically verify whether an LLM or AI application can be indirectly manipulate
The following is a diagram that represents this kind of test:
<p align="center">
<img src="/Document/images/IndirectPromptInjection.png" alt="Description" width="400"/>
<img src="/Document/images/IndirectPromptInjection.png" alt="Description" width="600"/>
</p>
For this kind of test you need to craft a web page with the malicious payload that will be executed in the user prompt and observe if the AI system will execute your payload.