Awesome-model-inversion-attack
A curated list of resources on model inversion attacks (MIA).
What is a model inversion attack?
In model inversion attacks, a malicious user attempts to recover the private dataset used to train a supervised neural network. A successful model inversion attack should generate realistic and diverse samples that accurately describe each of the classes in the private dataset.
(from Variational Model Inversion Attacks, Wang et al., 2021.)
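To make the threat model concrete, here is a minimal white-box sketch in the spirit of the confidence-based attack of Fredrikson et al. (CCS 2015, listed below): gradient ascent on the input to maximize the classifier's confidence in a target class. The toy architecture, input shape, and hyperparameters are illustrative assumptions, not code from any listed paper.

```python
# Minimal sketch of a white-box, confidence-based model inversion attack:
# optimize an input so the trained classifier assigns it to a target class
# with high confidence. All shapes/hyperparameters here are assumptions.
import torch
import torch.nn as nn

def invert_class(model: nn.Module, target_class: int,
                 input_shape=(1, 1, 32, 32),
                 steps: int = 500, lr: float = 0.1) -> torch.Tensor:
    """Reconstruct a representative input for `target_class` by gradient ascent."""
    model.eval()
    x = torch.zeros(input_shape, requires_grad=True)  # start from a blank image
    optimizer = torch.optim.Adam([x], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        optimizer.zero_grad()
        # Minimizing cross-entropy w.r.t. the *input* maximizes the
        # model's confidence in the target class.
        loss = nn.functional.cross_entropy(model(x), target)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)  # keep the reconstruction in a valid pixel range
    return x.detach()

# Usage with a toy classifier (hypothetical stand-in for the victim model):
toy_model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 10))
reconstruction = invert_class(toy_model, target_class=0)
print(reconstruction.shape)  # torch.Size([1, 1, 32, 32])
```

Many of the papers below refine this basic recipe, e.g. by searching the latent space of a generative model instead of raw pixel space so that reconstructions stay realistic and diverse.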
Survey
arXiv 2021 - A Survey of Privacy Attacks in Machine Learning. [paper]
arXiv 2022 - A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability. [paper]
arXiv 2022 - Trustworthy Graph Neural Networks: Aspects, Methods and Trends. [paper]
arXiv 2022 - A Survey of Trustworthy Graph Learning: Reliability, Explainability, and Privacy Protection. [paper]
General domain (including computer vision)
USENIX Security 2014 - Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing. [paper]
CCS 2015 - Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures. [paper] [code1] [code2] [code3] [code4]
CSF 2016 - A Methodology for Formalizing Model-Inversion Attacks. [paper]
CCS 2019 - Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment. [paper] [code]
arXiv 2019 - Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment. [paper]
CVPR 2020 - The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks. [paper] [code] [video]
APSIPA ASC 2020 - Deep Face Recognizer Privacy Attack: Model Inversion Initialization by a Deep Generative Adversarial Data Space Discriminator. [paper]
CCS 2021 - Membership Leakage in Label-Only Exposures. [paper] [code]
CCS 2021 - Black-Box Adversarial Attacks on Commercial Speech Platforms with Minimal Information. [paper]
CCS 2021 - Unleashing the Tiger: Inference Attacks on Split Learning. [paper] [code]
CVPR 2021 - See Through Gradients: Image Batch Recovery via GradInversion. [paper]
CVPR 2021 - Soteria: Provable Defense Against Privacy Leakage in Federated Learning From Representation Perspective. [paper] [code]
CVPR 2021 - IMAGINE: Image Synthesis by Image-Guided Model Inversion. [paper]
NeurIPS 2021 - Variational Model Inversion Attacks. [paper] [code]
ICCV 2021 - Exploiting Explanations for Model Inversion Attacks. [paper]
ICCV 2021 - Knowledge-Enriched Distributional Model Inversion Attacks. [paper] [code]
AAAI 2021 - Improving Robustness to Model Inversion Attacks via Mutual Information Regularization. [paper]
ICML 2021 - Label-Only Membership Inference Attacks. [paper] [code]
ICML 2021 - When Does Data Augmentation Help With Membership Inference Attacks? [paper]
ICLR 2021 Workshop - Practical Defences Against Model Inversion Attacks for Split Neural Networks. [paper] [code] [video]
ICDE 2021 - Feature Inference Attack on Model Predictions in Vertical Federated Learning. [paper] [code]
DAC 2021 - PRID: Model Inversion Privacy Attacks in Hyperdimensional Learning Systems. [paper]
ICSE 2021 - Robustness of On-Device Models: Adversarial Attack to Deep Learning Models on Android Apps. [paper]
ICML 2022 - Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks. [paper] [code]
CVPR 2022 - Label-Only Model Inversion Attacks via Boundary Repulsion. [paper] [code]
CVPR 2022 - ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning. [paper] [code]
KDD 2022 - Bilateral Dependency Optimization: Defending Against Model-inversion Attacks. [paper] [code]
USENIX Security 2022 - ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models. [paper] [code]
IEEE 2022 - An Approximate Memory-Based Defense Against Model Inversion Attacks to Neural Networks. [paper] [code]
TIFS 2022 - Model Inversion Attack by Integration of Deep Generative Models: Privacy-Sensitive Face Generation From a Face Recognition System. [paper]
Graph learning domain
USENIX Security 2020 - Stealing Links from Graph Neural Networks. [paper]
IJCAI 2021 - GraphMI: Extracting Private Graph Data from Graph Neural Networks. [paper] [code]
arXiv 2021 - Node-Level Membership Inference Attacks Against Graph Neural Networks. [paper]
WWW 2022 - Learning Privacy-Preserving Graph Convolutional Network with Partially Observed Sensitive Attributes. [paper]
USENIX Security 2022 - Inference Attacks Against Graph Neural Networks. [paper] [code]
IEEE S&P 2022 - Model Stealing Attacks Against Inductive Graph Neural Networks. [paper] [code]
arXiv 2022 - Differentially Private Graph Classification with GNNs. [paper]
arXiv 2022 - GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation. [paper]
arXiv 2022 - SoK: Differential Privacy on Graph-Structured Data. [paper]
arXiv 2022 - Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy. [paper]
arXiv 2022 - Private Graph Extraction via Feature Explanations. [paper]
NLP domain
USENIX Security 2021 - Extracting Training Data from Large Language Models. [paper]
Tools
AIJack: Implementation of algorithms for AI security.
Privacy-Attacks-in-Machine-Learning: Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch.
ml-attack-framework: Universität des Saarlandes - Privacy Enhancing Technologies 2021 - Semester Project.
Others
2021 - Machine Learning and Differential Privacy. [slides]