From b42b6849749027a5918ce09fe077d55ce1c4c5fa Mon Sep 17 00:00:00 2001 From: Adam Wilson <42450907+lightbroker@users.noreply.github.com> Date: Wed, 12 Nov 2025 15:07:15 -0700 Subject: [PATCH] Revise README for clarity and updated references Updated the README to reflect the author's name and provide clearer links to the research paper. Removed redundant sections and clarified repository contents and usage notes. --- README.md | 37 +++++++------------------------------ 1 file changed, 7 insertions(+), 30 deletions(-) diff --git a/README.md b/README.md index 7fcd276d4..18324a078 100644 --- a/README.md +++ b/README.md @@ -2,45 +2,22 @@ ## Overview -This repository contains code, workflows, and experiments that support the research conducted for the paper titled "Automating Generative AI Guidelines: Reducing Prompt Injection Risk with 'Shift-Left' MITRE ATLAS Mitigation Testing". The paper is available at: https://www.sans.edu/cyber-research/automating-generative-ai-guidelines-reducing-prompt-injection-risk-shift-left-mitre-atlas-mitigation-testing/ +This repository contains code, workflows, and experiments that support graduate research conducted by Adam Wilson for the M.Sc., Information Security Engineering program at SANS Technology Institute. The full paper, "Automating Generative AI Guidelines: Reducing Prompt Injection Risk with 'Shift-Left' MITRE ATLAS Mitigation Testing," is available at [sans.org](https://www.sans.org/white-papers/automating-generative-ai-guidelines-reducing-prompt-injection-risk-shift-left-mitre-atlas-mitigation-testing) and [sans.edu](https://www.sans.edu/cyber-research/automating-generative-ai-guidelines-reducing-prompt-injection-risk-shift-left-mitre-atlas-mitigation-testing/). -This work also supports graduate research conducted by Adam Wilson for the M.Sc., Information Security Engineering program at SANS Technology Institute. - -Paper ------ -Title: Automating Generative AI Guidelines: Reducing Prompt Injection Risk with 'Shift-Left' MITRE ATLAS Mitigation Testing - -Abstract +Research Paper Abstract -------- -Automated testing during the build stage of the AI engineering life cycle can evaluate the -effectiveness of generative AI guidelines against prompt injection attacks. This technique -provides early feedback for developers and defenders when assessing the mitigation -performance of an LLM-integrated application. This research combines prompt -engineering techniques and automated policy violation checks in the GitHub Actions -cloud-native build system to demonstrate a practical “shift-left” approach to securing -apps based on foundation models. +Automated testing during the build stage of the AI engineering life cycle can evaluate the effectiveness of generative AI guidelines against prompt injection attacks. This technique provides early feedback for developers and defenders when assessing the mitigation performance of an LLM-integrated application. This research combines prompt engineering techniques and automated policy violation checks in the GitHub Actions cloud-native build system to demonstrate a practical “shift-left” approach to securing apps based on foundation models. -What you'll find in this repo +Repository Contents ---------------------------- - Example prompt payloads and test harnesses used to evaluate prompt injection mitigations. - GitHub Actions workflows that run automated tests during the build stage to detect policy violations. - Scripts and tooling that demonstrate how to integrate automated checks into an AI engineering pipeline. -Usage notes +Usage Notes ----------- Refer to individual directories and workflow files for details on running tests and customizing checks for your environment. The code is intended to reproduce and extend the experiments described in the paper. -# ⚠️ Disclaimer and Note on Offensive Content - -Some prompts and text generation responses stored in this repository may contain offensive, biased, or otherwise harmful content. This is due to the nature of the research, which involved testing potentially adversarial inputs and model outputs. - -The presence of such content is strictly for research and testing purposes only. The authors and contributors of this repository disclaim any responsibility or liability for the use, interpretation, or distribution of this material. - -By accessing or using this repository, you acknowledge that: -- Offensive or harmful content may be present due to the research methodology. -- The authors are not responsible for any consequences resulting from the use of this material. -- The repository is intended solely for academic, research, and testing purposes. - -License and citation +License and Citation -------------------- -If you use this repository for research or production, please cite the accompanying paper and follow any licensing terms included with the code. \ No newline at end of file +If you use this repository for research or production, please cite the accompanying paper and follow any licensing terms included with the code.