mirror of
https://github.com/OWASP/www-project-ai-testing-guide.git
synced 2026-03-19 08:44:06 +00:00
Update AITG-APP-02_Testing_for_Indirect_Prompt_Injection.md
This commit is contained in:
@@ -19,7 +19,7 @@ Technically verify whether an LLM or AI application can be indirectly manipulate
|
||||
The following is a diagram that represents this kind of test:
|
||||
|
||||
<p align="center">
|
||||
<img src="/Document/images/IndirectPromptInjection.png" alt="Description" width="400"/>
|
||||
<img src="/Document/images/IndirectPromptInjection.png" alt="Description" width="600"/>
|
||||
</p>
|
||||
|
||||
For this kind of test you need to craft a web page with the malicious payload that will be executed in the user prompt and observe if the AI system will execute your payload.
|
||||
|
||||
Reference in New Issue
Block a user