diff --git a/Document/content/tests/AITG-APP-02_Testing_for_Indirect_Prompt_Injection.md b/Document/content/tests/AITG-APP-02_Testing_for_Indirect_Prompt_Injection.md index 2856b31..151d25e 100644 --- a/Document/content/tests/AITG-APP-02_Testing_for_Indirect_Prompt_Injection.md +++ b/Document/content/tests/AITG-APP-02_Testing_for_Indirect_Prompt_Injection.md @@ -19,7 +19,7 @@ Technically verify whether an LLM or AI application can be indirectly manipulate The following is a diagram that represents this kind of test:

- Description + Description

For this kind of test you need to craft a web page with the malicious payload that will be executed in the user prompt and observe if the AI system will execute your payload.