mirror of https://github.com/jiaxiaojunQAQ/OmniSafeBench-MM.git
Update
@@ -54,7 +54,7 @@ The benchmark unifies multi-modal jailbreak attack–defense evaluation, 13 atta
 ## 🛡️Integrated Defense Methods
 | Name | Title | Venue | Paper | Code |
 |---|--------------------------------------------------------------------------------------------------------------------|---|---|---|
-| JailGuard | JailGuard: A Universal Detection Framework for Prompt-based Attacks on LLM Systems | arXiv | [link](https://arxiv.org/abs/2312.10766) | [link](https://github.com/shiningrain/JailGuard) |
+| JailGuard | JailGuard: A Universal Detection Framework for Prompt-based Attacks on LLM Systems | TOSEM2025 | [link](https://arxiv.org/abs/2312.10766) | [link](https://github.com/shiningrain/JailGuard) |
 | MLLM-Protector | MLLM-Protector: Ensuring MLLM's Safety without Hurting Performance | EMNLP 2024 | [link](https://arxiv.org/abs/2401.02906) | [link](https://github.com/pipilurj/MLLM-protector) |
 | ECSO | Eyes Closed, Safety On: Protecting Multimodal LLMs via Image-to-Text Transformation | ECCV 2024 | [link](https://arxiv.org/abs/2403.09572) | [link](https://gyhdog99.github.io/projects/ecso/) |
 | ShieldLM | ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors | EMNLP 2024 | [link](https://arxiv.org/abs/2402.16444) | [link](https://github.com/thu-coai/ShieldLM) |
@@ -66,7 +66,7 @@ The benchmark unifies multi-modal jailbreak attack–defense evaluation, 13 atta
 | Llama-Guard-4 | Llama Guard 4 | Model Card | [link](https://www.llama.com/docs/model-cards-and-prompt-formats/llama-guard-4/) | [link](https://huggingface.co/meta-llama/Llama-Guard-4-12B) |
 | QGuard | QGuard: Question-based Zero-shot Guard for Multi-modal LLM Safety | arXiv | [link](https://arxiv.org/abs/2506.12299) | [link](https://github.com/taegyeong-lee/QGuard-Question-based-Zero-shot-Guard-for-Multi-modal-LLM-Safety) |
 | LlavaGuard | LlavaGuard: An Open VLM-based Framework for Safeguarding Vision Datasets and Models | ICML 2025 | [link](https://arxiv.org/abs/2406.05113) | [link](https://github.com/ml-research/LlavaGuard) |
-| Llama-Guard-3 | Llama Guard 3 | Model Card | [link](https://www.llama.com/docs/model-cards-and-prompt-formats/llama-guard-3/) | [model link](https://huggingface.co/meta-llama/Llama-Guard-3-8B) |
+| Llama-Guard-3 | Llama Guard 3 | Model Card | [link](https://www.llama.com/docs/model-cards-and-prompt-formats/llama-guard-3/) | [link](https://huggingface.co/meta-llama/Llama-Guard-3-8B) |
 | HiddenDetect | HiddenDetect: Detecting Jailbreak Attacks against Multimodal Large Language Models via Monitoring Hidden States | ACL 2025 | [link](https://arxiv.org/abs/2502.14744) | [link](https://github.com/leigest519/HiddenDetect) |
 | CoCA | CoCA: Regaining Safety-awareness of Multimodal Large Language Models with Constitutional Calibration | COLM 2024 | [link](https://arxiv.org/abs/2409.11365) | Null |
 | VLGuard | Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models | ICML 2024 | [link](https://arxiv.org/abs/2402.02207) | [link](https://github.com/ys-zong/VLGuard) |