docs: add arXiv links to seminal papers and correct a publication year.

shiva108
2026-02-03 18:09:16 +01:00
parent 986856bef9
commit 07778c2ddb


@@ -1079,11 +1079,11 @@ This setup allows testing **"Indirect Prompt Injection"**, where the Attacker po
### Seminal Papers
-| Paper | Year | Contribution |
-| :------------------------------------------- | :--- | :------------------------------------------------------------------------------------------------------------------------------ |
-| **"LLMSmith: A Tool for Investigating RCE"** | 2024 | Demonstrated "Code Escape" where prompt injection leads to Remote Code Execution in LLM frameworks [1]. |
-| **"Whisper Leak: Side-channel attacks"** | 2025 | Showed how network traffic patterns (packet size/timing) can leak the topic of LLM prompts even over encrypted connections [2]. |
-| **"SandboxEval"** | 2025 | Introduced a benchmark for evaluating the security of sandboxes against code generated by LLMs [3]. |
+| Paper | Year | Contribution |
+| :--------------------------------------------------------------------------- | :--- | :------------------------------------------------------------------------------------------------------------------------------ |
+| **["Demystifying RCE (LLMSmith)"](https://arxiv.org/abs/2309.02926)** | 2023 | Demonstrated "Code Escape" where prompt injection leads to Remote Code Execution in LLM frameworks [1]. |
+| **["Whisper Leak: Side-channel attacks"](https://arxiv.org/abs/2511.03675)** | 2025 | Showed how network traffic patterns (packet size/timing) can leak the topic of LLM prompts even over encrypted connections [2]. |
+| **["SandboxEval"](https://arxiv.org/abs/2504.00018)** | 2025 | Introduced a benchmark for evaluating the security of sandboxes against code generated by LLMs [3]. |
### Current Research Gaps