mirror of
https://github.com/liudaizong/Awesome-LVLM-Attack.git
synced 2026-02-12 17:42:44 +00:00
update 9.22.2025
@@ -206,6 +206,10 @@ Here, we've summarized existing LVLM Attack methods in our survey paper👍.
* Hanxun Huang, Sarah Erfani, Yige Li, Xingjun Ma, James Bailey
* The University of Melbourne, Singapore Management University, Fudan University
* [ICML2025] https://arxiv.org/abs/2505.05528
* **Phi: Preference Hijacking in Multi-modal Large Language Models at Inference Time**
* Yifan Lan, Yuanpu Cao, Weitong Zhang, Lu Lin, Jinghui Chen
* The Pennsylvania State University, The University of North Carolina at Chapel Hill
* [EMNLP2025] https://arxiv.org/abs/2509.12521

## Jailbreak-Attack
* **Are aligned neural networks adversarially aligned?**