Awesome-model-inversion-attack

A curated list of resources for model inversion attack (MIA). If some related papers are missing, please contact us via pull requests.

Outline:

[TOC]

What is a model inversion attack?

A model inversion attack is a privacy attack in which the attacker reconstructs, from a generated synthetic dataset, the original samples that were used to train the synthetic model. (Mostly.ai)

The goal of model inversion attacks is to recreate training data or sensitive attributes. (Chen et al., 2021)

In model inversion attacks, a malicious user attempts to recover the private dataset used to train a supervised neural network. A successful model inversion attack should generate realistic and diverse samples that accurately describe each of the classes in the private dataset. (Wang et al., 2021)
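
In its simplest white-box form, model inversion can be cast as gradient-based optimization over the input space: starting from a blank input, the attacker climbs the model's confidence in a target class until a class-representative sample emerges (the approach popularized by Fredrikson et al., CCS 2015). The sketch below is a minimal illustration of that idea; the classifier `model`, the 28×28 grayscale input shape, and the hyperparameters are illustrative assumptions, not taken from any specific paper. Generative attacks such as The Secret Revealer follow the same recipe but optimize a GAN latent code instead of raw pixels.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of a white-box model inversion attack: recover a
# class-representative input by maximizing the model's confidence in
# the target class. `model` is any PyTorch classifier (an assumption
# for illustration), here taking (1, 1, 28, 28) grayscale inputs.
def invert_class(model, target_class, steps=500, lr=0.1, shape=(1, 1, 28, 28)):
    model.eval()
    x = torch.zeros(shape, requires_grad=True)  # start from a blank image
    optimizer = torch.optim.SGD([x], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        optimizer.zero_grad()
        # Cross-entropy toward the target class; minimizing it maximizes
        # the model's predicted probability for that class.
        loss = F.cross_entropy(model(x), target)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)  # keep the reconstruction in valid pixel range
    return x.detach()
```

Many of the defenses collected below (prediction purification, mutual-information regularization, differential privacy) aim precisely at making this optimization uninformative, e.g., by degrading the confidence signal the attacker climbs.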

Survey

Arxiv 2021 - A Survey of Privacy Attacks in Machine Learning. [paper]

Arxiv 2022 - A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability. [paper]

Arxiv 2022 - Trustworthy Graph Neural Networks: Aspects, Methods and Trends. [paper]

Arxiv 2022 - A Survey of Trustworthy Graph Learning: Reliability, Explainability, and Privacy Protection. [paper]

Philosophical Transactions of the Royal Society A 2018 - Algorithms that remember: model inversion attacks and data protection law. [paper]

Computer vision domain

| Year | Title | Adversarial Knowledge | Venue | Paper Link | Code Link |
| --- | --- | --- | --- | --- | --- |
| 2014 | Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing | white-box | USENIX Security | Paper | |
| 2015 | Model inversion attacks that exploit confidence information and basic countermeasures | | CCS | Paper | Code1, Code2, Code3, Code4 |
| 2015 | Regression Model Fitting under Differential Privacy and Model Inversion Attack | | IJCAI | Paper | Code |
| 2016 | A Methodology for Formalizing Model Inversion Attacks | | CSF | Paper | |
| 2017 | Machine Learning Models that Remember Too Much | | CCS | Paper | Code |
| 2017 | Model inversion attacks for prediction systems: Without knowledge of non-sensitive attributes | | PST | Paper | |
| 2018 | Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting | | CSF | Paper | |
| 2019 | MLPrivacyGuard: Defeating Confidence Information based Model Inversion Attacks on Machine Learning Systems | | GLSVLSI | Paper | |
| 2019 | Model inversion attacks against collaborative inference | | ACSAC | Paper | Code |
| 2019 | Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment | | CCS | Paper | Code |
| 2019 | Exploiting Unintended Feature Leakage in Collaborative Learning | | S&P | Paper | Code |
| 2019 | Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment | | Arxiv | Paper | |
| 2019 | GAMIN: An Adversarial Approach to Black-Box Model Inversion | | Arxiv | Paper | |
| 2020 | The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks | | CVPR | Paper | Code, Video |
| 2020 | Overlearning Reveals Sensitive Attributes | | ICLR | Paper | |
| 2020 | Deep Face Recognizer Privacy Attack: Model Inversion Initialization by a Deep Generative Adversarial Data Space Discriminator | | APSIPA ASC | Paper | |
| 2020 | Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning | | USENIX Security | Paper | |
| 2020 | Attacking and Protecting Data Privacy in Edge-Cloud Collaborative Inference Systems | | IoTJ | Paper | Code |
| 2020 | Black-Box Face Recovery from Identity Features | | ECCV Workshop | Paper | |
| 2020 | Defending model inversion and membership inference attacks via prediction purification | | Arxiv | Paper | |
| 2021 | Black-box adversarial attacks on commercial speech platforms with minimal information | | CCS | Paper | |
| 2021 | Unleashing the tiger: Inference attacks on split learning | | CCS | Paper | Code |
| 2021 | Soteria: Provable defense against privacy leakage in federated learning from representation perspective | | CVPR | Paper | Code |
| 2021 | Variational Model Inversion Attacks | | NeurIPS | Paper | Code |
| 2021 | Exploiting Explanations for Model Inversion Attacks | | ICCV | Paper | |
| 2021 | Knowledge-Enriched Distributional Model Inversion Attacks | | ICCV | Paper | Code |
| 2021 | Improving Robustness to Model Inversion Attacks via Mutual Information Regularization | | AAAI | Paper | |
| 2021 | Practical Defences Against Model Inversion Attacks for Split Neural Networks | | ICLR Workshop | Paper | Code, Video |
| 2021 | Feature Inference Attack on Model Predictions in Vertical Federated Learning | | ICDE | Paper | Code |
| 2021 | PRID: Model Inversion Privacy Attacks in Hyperdimensional Learning Systems | | DAC | Paper | |
| 2021 | Robustness of On-Device Models: Adversarial Attack to Deep Learning Models on Android Apps | | ICSE | Paper | |
| 2021 | Defending Against Model Inversion Attack by Adversarial Examples | | CSR Workshops | Paper | |
| 2022 | Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks | | ICML | Paper | Code |
| 2022 | Label-Only Model Inversion Attacks via Boundary Repulsion | | CVPR | Paper | Code |
| 2022 | ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning | | CVPR | Paper | Code |
| 2022 | Bilateral Dependency Optimization: Defending Against Model-Inversion Attacks | | KDD | Paper | Code |
| 2022 | ML-DOCTOR: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models | | USENIX Security | Paper | Code |
| 2022 | An Approximate Memory Based Defense Against Model Inversion Attacks to Neural Networks | | IEEE | Paper | Code |
| 2022 | Model Inversion Attack by Integration of Deep Generative Models: Privacy-Sensitive Face Generation From a Face Recognition System | | TIFS | Paper | |
| 2022 | Defending Against Reconstruction Attacks Through Differentially Private Federated Learning for Classification of Heterogeneous Chest X-Ray Data | | Arxiv | Paper | |
| 2023 | Sparse Black-Box Inversion Attack with Limited Information | black-box | IEEE | Paper | Code |
| 2023 | Breaching FedMD: Image Recovery via Paired-Logits Inversion Attack | black-box | Arxiv | Paper | Code |
| 2023 | Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network | white-box | AAAI | Paper | Code |
| 2023 | C2FMI: Coarse-to-Fine Black-box Model Inversion Attack | black-box | IEEE | Paper | |
| 2023 | Boosting Model Inversion Attacks with Adversarial Examples | black-box | IEEE | Paper | |
| 2023 | Reinforcement Learning-Based Black-Box Model Inversion Attacks | black-box | CVPR | Paper | Code |
| 2023 | Re-thinking Model Inversion Attacks Against Deep Neural Networks | white-box | CVPR | Paper | Code |
| 2022 | One Parameter Defense—Defending Against Data Inference Attacks via Differential Privacy | black-box | IEEE | Paper | |
| 2022 | Reconstructing Training Data from Diverse ML Models by Ensemble Inversion | white-box | WACV | Paper | |
| 2022 | SecretGen: Privacy Recovery on Pre-trained Models via Distribution Discrimination | white-box | ECCV | Paper | |
| 2022 | UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning | | WPES | Paper | Code |
| 2022 | MIRROR: Model Inversion for Deep Learning Network with High Fidelity | white-box | NDSS | Paper | Code |
| 2021 | Practical Black Box Model Inversion Attacks Against Neural Nets | black-box | ECML PKDD | Paper | |
| 2021 | Model Inversion Attack against a Face Recognition System in a Black-Box Setting | black-box | APSIPA | Paper | |
| 2020 | Generative model-inversion attacks against deep neural networks | white-box | CVPR | Paper | Code |
| 2020 | Privacy Preserving Facial Recognition Against Model Inversion Attacks | white-box | IEEE | Paper | |
| 2020 | Broadening Differential Privacy for Deep Learning Against Model Inversion Attacks | white-box | IEEE | Paper | |

USENIX Security 2014 - Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing. [paper]

CCS 2015 - Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures. [paper] [code1] [code2] [code3] [code4]

IJCAI 2015 - Regression model fitting under differential privacy and model inversion attack. [paper] [code]

CSF 2016 - A Methodology for Formalizing Model-Inversion Attacks. [paper]

CCS 2017 - Machine Learning Models that Remember Too Much. [paper] [code]

PST 2017 - Model inversion attacks for prediction systems: Without knowledge of non-sensitive attributes. [paper]

CSF 2018 - Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting. [paper]

GLSVLSI 2019 - MLPrivacyGuard: Defeating Confidence Information based Model Inversion Attacks on Machine Learning Systems. [paper]

ACSAC 2019 - Model inversion attacks against collaborative inference. [paper] [code]

CCS 2019 - Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment. [paper] [code]

IEEE S&P 2019 - Exploiting Unintended Feature Leakage in Collaborative Learning. [paper] [code]

Arxiv 2019 - Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment. [paper]

Arxiv 2019 - GAMIN: An Adversarial Approach to Black-Box Model Inversion. [paper]

CVPR 2020 - The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks. [paper] [code] [video]

ICLR 2020 - Overlearning Reveals Sensitive Attributes. [paper]

APSIPA ASC 2020 - Deep Face Recognizer Privacy Attack: Model Inversion Initialization by a Deep Generative Adversarial Data Space Discriminator. [paper]

USENIX Security 2020 - Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning. [paper]

IoTJ 2020 - Attacking and Protecting Data Privacy in Edge-Cloud Collaborative Inference Systems. [paper] [code]

ECCV 2020 Workshop - Black-Box Face Recovery from Identity Features. [paper]

Arxiv 2020 - Defending model inversion and membership inference attacks via prediction purification. [paper]

IJCAI 2021 - Contrastive Model Inversion for Data-Free Knowledge Distillation. [paper] [code]

CCS 2021 - Membership Leakage in Label-Only Exposures. [paper] [code]

CCS 2021 - Black-box adversarial attacks on commercial speech platforms with minimal information. [paper]

CCS 2021 - Unleashing the tiger: Inference attacks on split learning. [paper] [code]

CVPR 2021 - See Through Gradients: Image Batch Recovery via GradInversion. [paper]

CVPR 2021 - Soteria: Provable defense against privacy leakage in federated learning from representation perspective. [paper] [code]

CVPR 2021 - IMAGINE: Image Synthesis by Image-Guided Model Inversion. [paper]

NeurIPS 2021 - Variational Model Inversion Attacks. [paper] [code]

ICCV 2021 - Exploiting Explanations for Model Inversion Attacks. [paper]

ICCV 2021 - Knowledge-Enriched Distributional Model Inversion Attacks. [paper] [code]

AAAI 2021 - Improving Robustness to Model Inversion Attacks via Mutual Information Regularization. [paper]

ICML 2021 - Label-Only Membership Inference Attack. [paper] [code]

ICML 2021 - When Does Data Augmentation Help With Membership Inference Attacks? [paper]

ICLR 2021 Workshop - Practical Defences Against Model Inversion Attacks for Split Neural Networks. [paper] [code] [video]

ICDE 2021 - Feature Inference Attack on Model Predictions in Vertical Federated Learning. [paper] [code]

DAC 2021 - PRID: Model Inversion Privacy Attacks in Hyperdimensional Learning Systems. [paper]

ICSE 2021 - Robustness of on-device models: Adversarial attack to deep learning models on android apps. [paper]

CSR Workshops 2021 - Defending Against Model Inversion Attack by Adversarial Examples. [paper]

ICML 2022 - Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks. [paper] [code]

CVPR 2022 - Label-Only Model Inversion Attacks via Boundary Repulsion. [paper] [code]

CVPR 2022 - ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning. [paper] [code]

KDD 2022 - Bilateral Dependency Optimization: Defending Against Model-inversion Attacks. [paper] [code]

USENIX Security 2022 - ML-DOCTOR: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models. [paper] [code]

IEEE 2022 - An Approximate Memory based Defense against Model Inversion Attacks to Neural Networks. [paper] [code]

TIFS 2022 - Model Inversion Attack by Integration of Deep Generative Models: Privacy-Sensitive Face Generation From a Face Recognition System. [paper]

Arxiv 2022 - Defending against Reconstruction Attacks through Differentially Private Federated Learning for Classification of Heterogeneous Chest X-Ray Data. [paper]

IEEE 2023 - Sparse Black-Box Inversion Attack with Limited Information. [paper] [code]

Arxiv 2023 - Breaching FedMD: Image Recovery via Paired-Logits Inversion Attack. [paper] [code]

AAAI 2023 - Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network. [paper] [code]

IEEE 2023 - C2FMI: Coarse-to-Fine Black-box Model Inversion Attack. [paper]

IEEE 2023 - Boosting Model Inversion Attacks with Adversarial Examples. [paper]

CVPR 2023 - Reinforcement Learning-Based Black-Box Model Inversion Attacks. [paper] [code]

CVPR 2023 - Re-thinking Model Inversion Attacks Against Deep Neural Networks. [paper] [code]

IEEE 2022 - One Parameter Defense—Defending Against Data Inference Attacks via Differential Privacy. [paper]

WACV 2022 - Reconstructing Training Data from Diverse ML Models by Ensemble Inversion. [paper]

ECCV 2022 - SecretGen: Privacy Recovery on Pre-trained Models via Distribution Discrimination. [paper]

WPES 2022 - UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning. [paper] [code]

NDSS 2022 - MIRROR: Model Inversion for Deep Learning Network with High Fidelity. [paper] [code]

ECML PKDD 2021 - Practical Black Box Model Inversion Attacks Against Neural Nets. [paper]

APSIPA 2021 - Model Inversion Attack against a Face Recognition System in a Black-Box Setting. [paper]

CVPR 2020 - Generative model-inversion attacks against deep neural networks. [paper] [code]

IEEE 2020 - Privacy Preserving Facial Recognition Against Model Inversion Attacks. [paper]

IEEE 2020 - Broadening Differential Privacy for Deep Learning Against Model Inversion Attacks. [paper]

Graph learning domain

| Year | Title | Adversarial Knowledge | Venue | Paper Link | Code Link |
| --- | --- | --- | --- | --- | --- |
| 2023 | NetGuard: Protecting Commercial Web APIs from Model Inversion Attacks using GAN-generated Fake Samples | white-box | ACM | Paper | |
| 2021 | Improving Robustness to Model Inversion Attacks via Mutual Information Regularization | black & white-box | AAAI | Paper | |
| 2021 | A Survey on Gradient Inversion: Attacks, Defenses and Future Directions | white-box | IJCAI | Paper | |
| 2021 | GraphMI: Extracting Private Graph Data from Graph Neural Networks | white-box | IJCAI | Paper | Code |
| 2021 | NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data | black-box | IEEE | Paper | Code |
| 2020 | Reducing Risk of Model Inversion Using Privacy-Guided Training | black & white-box | Arxiv | Paper | |
| 2022 | A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability | black & white-box | Arxiv | Paper | |

USENIX Security 2020 - Stealing Links from Graph Neural Networks. [paper] [code]

MobiQuitous 2020 - Quantifying Privacy Leakage in Graph Embedding. [paper] [code]

ICML 2021 - DeepWalking Backwards: From Node Embeddings Back to Graphs. [paper] [code]

IJCAI 2021 - GraphMI: Extracting Private Graph Data from Graph Neural Networks. [paper] [code]

Arxiv 2021 - Node-Level Membership Inference Attacks Against Graph Neural Networks. [paper]

WWW 2022 - Learning Privacy-Preserving Graph Convolutional Network with Partially Observed Sensitive Attributes. [paper]

USENIX Security 2022 - Inference Attacks Against Graph Neural Networks. [paper] [code]

IEEE S&P 2022 - Model Stealing Attacks Against Inductive Graph Neural Networks. [paper] [code]

Arxiv 2022 - Differentially Private Graph Classification with GNNs. [paper]

Arxiv 2022 - GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation. [paper]

Arxiv 2022 - SoK: Differential Privacy on Graph-Structured Data. [paper]

Arxiv 2022 - Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy. [paper]

Arxiv 2022 - Private Graph Extraction via Feature Explanations. [paper]

Arxiv 2022 - Privacy and Transparency in Graph Machine Learning: A Unified Perspective. [paper]

CCS 2022 - Finding MNEMON: Reviving Memories of Node Embeddings. [paper]

IJIS 2022 - Defense against membership inference attack in graph neural networks through graph perturbation. [paper]

TKDE 2022 - Model Inversion Attacks against Graph Neural Networks. [paper]

ICML 2023 - On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation. [paper] [code]

Natural language processing domain

| Year | Title | Adversarial Knowledge | Venue | Paper Link | Code Link |
| --- | --- | --- | --- | --- | --- |
| 2021 | Extracting Training Data from Large Language Models | black-box | USENIX Security | Paper | Code |
| 2020 | Privacy Risks of General-Purpose Language Models | black & white-box | IEEE | Paper | |
| 2022 | Defending against Reconstruction Attacks with Rényi Differential Privacy | white-box | Arxiv | Paper | |
| 2018 | Privacy-preserving Neural Representations of Text | white-box | Arxiv | Paper | Code |
| 2021 | TAG: Gradient Attack on Transformer-based Language Models | white-box | EMNLP | Paper | |
| 2020 | Information Leakage in Embedding Models | black & white-box | CCS | Paper | |
| 2022 | Text Revealer: Private Text Reconstruction via Model Inversion Attacks against Transformers | white-box | Arxiv | Paper | |

CCS 2020 - Information Leakage in Embedding Models. [paper]

USENIX Security 2021 - Extracting training data from large language models. [paper]

Arxiv 2022 - Text Revealer: Private Text Reconstruction via Model Inversion Attacks against Transformers. [paper]

Tools

AIJack: Implementation of algorithms for AI security.

Privacy-Attacks-in-Machine-Learning: Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch.

ml-attack-framework: Universität des Saarlandes - Privacy Enhancing Technologies 2021 - Semester Project.

Others

2019 - Uncovering a model's secrets. [blog1] [blog2]

2019 - Model Inversion Attacks Against Collaborative Inference. [slides]

2020 - Attacks against Machine Learning Privacy (Part 1): Model Inversion Attacks with the IBM-ART Framework. [blog]

2021 - ML and DP. [slides]

awesome-ml-privacy-attacks [repo]