mirror of
https://github.com/Shiva108/ai-llm-red-team-handbook.git
synced 2026-02-12 14:42:46 +00:00
docs: add arXiv links to seminal papers and correct a publication year.
@@ -1079,11 +1079,11 @@ This setup allows testing **"Indirect Prompt Injection"**, where the Attacker po
 ### Seminal Papers

-| Paper | Year | Contribution |
-| :--- | :--- | :--- |
-| **"LLMSmith: A Tool for Investigating RCE"** | 2024 | Demonstrated "Code Escape" where prompt injection leads to Remote Code Execution in LLM frameworks [1]. |
-| **"Whisper Leak: Side-channel attacks"** | 2025 | Showed how network traffic patterns (packet size/timing) can leak the topic of LLM prompts even over encrypted connections [2]. |
-| **"SandboxEval"** | 2025 | Introduced a benchmark for evaluating the security of sandboxes against code generated by LLMs [3]. |
+| Paper | Year | Contribution |
+| :--- | :--- | :--- |
+| **["Demystifying RCE (LLMSmith)"](https://arxiv.org/abs/2309.02926)** | 2024 | Demonstrated "Code Escape" where prompt injection leads to Remote Code Execution in LLM frameworks [1]. |
+| **["Whisper Leak: Side-channel attacks"](https://arxiv.org/abs/2511.03675)** | 2024 | Showed how network traffic patterns (packet size/timing) can leak the topic of LLM prompts even over encrypted connections [2]. |
+| **["SandboxEval"](https://arxiv.org/abs/2504.00018)** | 2025 | Introduced a benchmark for evaluating the security of sandboxes against code generated by LLMs [3]. |

 ### Current Research Gaps
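As an aside on the side-channel result cited above [2]: even when the transport is encrypted, the sizes of streamed response chunks form a coarse fingerprint that an on-path observer can compare against recorded traffic for known topics. The sketch below is a minimal toy illustration of that matching idea only — the `burst_profile`/`match_topic` names, the bucket size, and all traffic traces are hypothetical illustrative data, not taken from the paper.

```python
# Toy illustration of traffic-analysis topic inference over encrypted
# LLM streaming: quantize observed ciphertext chunk sizes and match
# them against previously recorded per-topic profiles.
# All values and names here are hypothetical, for illustration only.

def burst_profile(packet_sizes, bucket=64):
    """Quantize observed packet sizes into coarse buckets."""
    return [s // bucket for s in packet_sizes]

def match_topic(observed, fingerprints, bucket=64):
    """Return the topic whose recorded trace is closest to the observation."""
    obs = burst_profile(observed, bucket)

    def dist(profile):
        # L1 distance over the overlapping prefix, plus a length penalty.
        core = sum(abs(a - b) for a, b in zip(obs, profile))
        return core + abs(len(obs) - len(profile))

    return min(fingerprints,
               key=lambda topic: dist(burst_profile(fingerprints[topic], bucket)))

# Hypothetical recorded traces (bytes per streamed chunk) per topic.
fingerprints = {
    "medical": [180, 420, 410, 400, 950],
    "finance": [180, 260, 255, 250, 240, 300],
}
observed = [175, 415, 405, 398, 940]
print(match_topic(observed, fingerprints))  # → medical (closest toy trace)
```

The point of the toy example is that no plaintext is needed: chunk-size sequences alone separate the two hypothetical topics, which is the class of leakage the paper measures on real providers.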