From 11463f42e570699ef8ca08c2f7c5541a34f5fd75 Mon Sep 17 00:00:00 2001 From: Matteo Meucci Date: Fri, 14 Nov 2025 11:37:34 +0100 Subject: [PATCH] Update AITG-APP-02_Testing_for_Indirect_Prompt_Injection.md --- .../tests/AITG-APP-02_Testing_for_Indirect_Prompt_Injection.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Document/content/tests/AITG-APP-02_Testing_for_Indirect_Prompt_Injection.md b/Document/content/tests/AITG-APP-02_Testing_for_Indirect_Prompt_Injection.md index 2856b31..151d25e 100644 --- a/Document/content/tests/AITG-APP-02_Testing_for_Indirect_Prompt_Injection.md +++ b/Document/content/tests/AITG-APP-02_Testing_for_Indirect_Prompt_Injection.md @@ -19,7 +19,7 @@ Technically verify whether an LLM or AI application can be indirectly manipulate The following is a diagram that represents this kind of test:

- Description + Description

For this kind of test you need to craft a web page with the malicious payload that will be executed in the user prompt and observe if the AI system will execute your payload.