From 157adc1f3a6c4bdede0da23b2c5443c525281e65 Mon Sep 17 00:00:00 2001
From: Zixuan Hu <44722490+Egg-Hu@users.noreply.github.com>
Date: Tue, 27 May 2025 14:10:54 +0800
Subject: [PATCH] Add paper: SMI (ICML 2024)

Add paper: Sparse Model Inversion (ICML 2024)
---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 5e9de75..eb66af9 100644
--- a/README.md
+++ b/README.md
@@ -83,6 +83,8 @@ In model inversion attacks, a malicious user attempts to recover the private dat
 
 - [NDSS 2025] CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling [[paper]](https://arxiv.org/pdf/2501.15718) [[code]](https://github.com/KaiyuanZh/censor) [[project]](https://censor-gradient.github.io/)
 
+- [ICML 2024] (white-box) Sparse Model Inversion: Efficient Inversion of Vision Transformers for Data-Free Applications [[paper]](https://openreview.net/pdf?id=T0lFfO8HaK) [[code]](https://github.com/Egg-Hu/SMI)
+
 - [CVPR 2024] Model Inversion Robustness: Can Transfer Learning Help? [[paper]](https://openaccess.thecvf.com/content/CVPR2024/papers/Ho_Model_Inversion_Robustness_Can_Transfer_Learning_Help_CVPR_2024_paper.pdf) [[code]](https://hosytuyen.github.io/projects/TL-DMI)
 
 - [ICLR 2024] Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks [[paper]](https://arxiv.org/pdf/2310.06549) [[code]](https://github.com/LukasStruppek/Plug-and-Play-Attacks)