Mirror of https://github.com/AndrewZhou924/Awesome-model-inversion-attack.git (synced 2026-01-25).
<a href="https://awesome.re"><img src="https://awesome.re/badge.svg" alt="awesome"></a>
<img src="https://img.shields.io/github/stars/AndrewZhou924/Awesome-model-inversion-attack?color=yellow&label=Star" alt="Stars">
</p>

A curated list of resources for model inversion attacks (MIA).

Please star or watch this repository to keep tracking the latest updates! Contributions are welcome!

- **[Nov/2024]** We released a comprehensive survey of model inversion attacks. Check the [paper](https://arxiv.org/pdf/2411.10023).

**Outlines:**

<!-- generated by MarkdownAllInOne -->

- [NEWS](#news)
- [What is the model inversion attack?](#what-is-the-model-inversion-attack)
- [Survey](#survey)
- [Computer vision domain](#computer-vision-domain)
- [Tools](#tools)
- [Others](#others)
- [Related repositories](#related-repositories)
- [Star History](#star-history)

## What is the model inversion attack?

The goal of model inversion attacks is to recreate training data or sensitive attributes of the training data.

In model inversion attacks, a malicious user attempts to recover the private dataset used to train a supervised neural network. A successful model inversion attack should generate realistic and diverse samples that accurately describe each of the classes in the private dataset. (Wang et al., 2021)

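The canonical formulation treats inversion as an optimization problem over the model's input space. Below is a minimal, illustrative sketch (not code from this repository) of a white-box attack in the spirit of Fredrikson et al. (CCS 2015): starting from a blank input, gradient ascent pushes the input toward whatever the target classifier considers a prototypical example of a chosen class. The `target_model`, input shape, and hyperparameters are placeholders picked only for illustration.

```python
# Illustrative sketch only: reconstruct a class prototype from a trained
# classifier by optimizing the *input* so the model assigns it high confidence.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder victim model; a real attack targets the actual trained network.
target_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
target_model.eval()

target_class = 3                                    # class whose training data we try to approximate
x = torch.zeros(1, 1, 28, 28, requires_grad=True)   # reconstruction, initialized to a blank image
optimizer = torch.optim.Adam([x], lr=0.1)

for _ in range(500):
    optimizer.zero_grad()
    loss = F.cross_entropy(target_model(x), torch.tensor([target_class]))
    loss.backward()
    optimizer.step()                                # updates only x: the input is the attack variable
    x.data.clamp_(0.0, 1.0)                         # keep pixels in a valid range

reconstruction = x.detach()  # ideally resembles private training samples of the target class
```

The GAN-based attacks listed below (e.g., GMI, KED-MI, PLG-MI) follow the same idea but search over a generator's latent space instead of raw pixels, which keeps reconstructions closer to the natural-image manifold.
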
## Survey

- [arXiv 2024] Model Inversion Attacks: A Survey of Approaches and Countermeasures. [[paper]](https://arxiv.org/pdf/2411.10023)
- [Philosophical Transactions of the Royal Society A 2018] Algorithms that remember: model inversion attacks and data protection law. [[paper]](https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.2018.0083)
- [CSF 2023] SoK: Model Inversion Attack Landscape: Taxonomy, Challenges, and Future Roadmap. [[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10221914)
- [arXiv 2022] Trustworthy Graph Neural Networks: Aspects, Methods and Trends. [[paper]](https://arxiv.org/pdf/2205.07424.pdf)
- [arXiv 2022] A Survey of Trustworthy Graph Learning: Reliability, Explainability, and Privacy Protection. [[paper]](https://arxiv.org/pdf/2205.10014.pdf)
- [arXiv 2022] A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability. [[paper]](https://arxiv.org/pdf/2204.08570.pdf)
- [arXiv 2022] Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups. [[paper]](https://arxiv.org/pdf/2111.03363)
- [arXiv 2022] I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences. [[paper]](https://arxiv.org/pdf/2206.08451)
- [arXiv 2021] Survey: Leakage and Privacy at Inference Time. [[paper]](https://arxiv.org/pdf/2107.01614)
- [arXiv 2021] A Review of Confidentiality Threats Against Embedded Neural Network Models. [[paper]](https://arxiv.org/pdf/2105.01401)
- [arXiv 2021] Membership Inference Attacks on Machine Learning: A Survey. [[paper]](https://arxiv.org/pdf/2103.07853)
- [arXiv 2021] ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models. [[paper]](https://arxiv.org/pdf/2102.02551)
- [IEEE Access 2020] Privacy and Security Issues in Deep Learning: A Survey. [[paper]](https://ieeexplore.ieee.org/abstract/document/9294026)
- [arXiv 2020] A Survey of Privacy Attacks in Machine Learning. [[paper]](https://arxiv.org/pdf/2007.07646)
- [arXiv 2020] Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks. [[paper]](https://arxiv.org/pdf/2006.11601)
- [arXiv 2020] An Overview of Privacy in Machine Learning. [[paper]](https://arxiv.org/pdf/2005.08679)

## Computer vision domain
| Year | Title | Adversarial Knowledge | Venue | Paper Link | Code Link |
| ---- | ------------------------------------------------------------ | --------------------- | --------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| 2014 | Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing | white-box (both) | USENIX Security | [paper](https://www.usenix.org/system/files/conference/usenixsecurity14/sec14-paper-fredrikson-privacy.pdf) | |
| 2015 | Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures | white-box (both) | CCS | [paper](https://dl.acm.org/doi/pdf/10.1145/2810103.2813677) | [code1](http://www.cs.cmu.edu/~mfredrik/mi-2016.zip), [code2](https://github.com/yashkant/Model-Inversion-Attack), [code3](https://github.com/zhangzp9970/MIA), [code4](https://github.com/sarahsimionescu/simple-model-inversion) |
| 2015 | Regression model fitting under differential privacy and model inversion attack | white-box (defense) | IJCAI | [paper](http://www.csce.uark.edu/~xintaowu/publ/ijcai15.pdf) | [code](https://github.com/cxs040/Regression-Model-Fitting-under-Differential-Privacy-and-Model-Inversion-Attack-Source-Code) |
| 2016 | A Methodology for Formalizing Model-Inversion Attacks | black & white-box | CSF | [paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7536387&casa_token=ClIVAMYo6dcAAAAA:u75HHyFHj5lBRec9h5SqOZyAsL2dICcWIuQPCj6ltk8McREFCaM4ex42mv3S-oNPiGJLDfUqg0qL) | |
| 2017 | Machine Learning Models that Remember Too Much | white-box | CCS | [paper](https://arxiv.org/pdf/1709.07886.pdf) | [code](https://github.com/csong27/ml-model-remember) |
| 2017 | Model inversion attacks for prediction systems: Without knowledge of non-sensitive attributes | white-box | PST | [paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8476925) | |
| 2018 | Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting | white-box | CSF | [paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8429311) | |
| 2019 | An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack | white-box | arXiv | [Paper](https://ieeexplore.ieee.org/document/8822435) | |
| 2019 | MLPrivacyGuard: Defeating Confidence Information based Model Inversion Attacks on Machine Learning Systems | black-box (defense) | GLSVLSI | [paper](https://www.researchgate.net/profile/Tiago-Alves-13/publication/333136362_MLPrivacyGuard_Defeating_Confidence_Information_based_Model_Inversion_Attacks_on_Machine_Learning_Systems/links/5cddb94d92851c4eaba682d7/MLPrivacyGuard-Defeating-Confidence-Information-based-Model-Inversion-Attacks-on-Machine-Learning-Systems.pdf) | |
| 2019 | Model inversion attacks against collaborative inference | black & white-box (collaborative inference) | ACSAC | [Paper](http://palms.ee.princeton.edu/system/files/Model+Inversion+Attack+against+Collaborative+Inference.pdf) | |
| 2019 | Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment | black-box | CCS | [Paper](https://dl.acm.org/doi/pdf/10.1145/3319535.3354261?casa_token=J81Ps-ZWXHkAAAAA:FYnXo7DQoHpdhqns8x2TclKFeHpAQlXVxMBW2hTrhJ5c20XKdsounqdT1Viw1g6Xsu9FtKj85elxQaA) | [Code](https://github.com/zhangzp9970/TB-MIA) | - |
| 2019 | GAMIN: An Adversarial Approach to Black-Box Model Inversion | black-box | Arxiv | [Paper](https://arxiv.org/pdf/1909.11835.pdf) | - | - |
| 2020 | The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks | white-box | CVPR | [Paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhang_The_Secret_Revealer_Generative_Model-Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2020_paper.pdf) | [Code](https://github.com/AI-secure/GMI-Attack), [Video](https://www.youtube.com/watch?v=_g-oXYMhz4M) |
| 2020 | Overlearning Reveals Sensitive Attributes | white-box | ICLR | [Paper](https://arxiv.org/pdf/1905.11742.pdf) | - | - |
| 2020 | Deep Face Recognizer Privacy Attack: Model Inversion Initialization by a Deep Generative Adversarial Data Space Discriminator | white-box | APSIPA ASC | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9306253&casa_token=AWugOvIe0I0AAAAA:9wICCkMcfoljMqooM-lgl8m-6F6-cEl-ClHgNkE1SV8mZwqvBIaJ1HDjT1RWLyBz_P7tdB51jQVL&tag=1) | - | - |
| 2020 | Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning | black-box | USENIX Security | [Paper](https://www.usenix.org/system/files/sec20-salem.pdf) | - | - |
| 2020 | Attacking and Protecting Data Privacy in Edge-Cloud Collaborative Inference Systems | black-box (collaborative inference) | IoT-J | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9187880) | [Code](https://github.com/zechenghe/Inverse_Collaborative_Inference) | - |
| 2020 | Black-Box Face Recovery from Identity Features | black-box | ECCV Workshop | [Paper](https://arxiv.org/pdf/2007.13635.pdf) | - | - |
| 2020 | MixCon: Adjusting the Separability of Data Representations for Harder Data Recovery | white-box | arXiv | [Paper](https://arxiv.org/abs/2010.11463) | |
| 2020 | Privacy Preserving Facial Recognition Against Model Inversion Attacks | white-box (defense) | Globecom | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9322508) | - | - |
| 2020 | Broadening Differential Privacy for Deep Learning Against Model Inversion Attacks | white-box (defense) | Big Data | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9378274) | - | - |
| 2020 | Evaluation Indicator for Model Inversion Attack | metric | AdvML | [Paper](https://drive.google.com/file/d/1rl77BGtGHzZ8obWUEOoqunXCjgvpzE8d/view) | |
| 2021 | Variational Model Inversion Attacks | white-box | NeurIPS | [Paper](https://proceedings.neurips.cc/paper/2021/file/50a074e6a8da4662ae0a29edde722179-Paper.pdf) | [Code](https://github.com/wangkua1/vmi) | - |
| 2021 | Exploiting Explanations for Model Inversion Attacks | white-box | ICCV | [Paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Zhao_Exploiting_Explanations_for_Model_Inversion_Attacks_ICCV_2021_paper.pdf) | - |
| 2021 | Knowledge-Enriched Distributional Model Inversion Attacks | white-box | ICCV | [Paper](https://arxiv.org/pdf/2010.04092.pdf) | [Code](https://github.com/SCccc21/Knowledge-Enriched-DMI) |
| 2021 | Improving Robustness to Model Inversion Attacks via Mutual Information Regularization | white-box (defense) | AAAI | [Paper](https://arxiv.org/pdf/2009.05241.pdf) | - |
| 2021 | Practical Defences Against Model Inversion Attacks for Split Neural Networks | black-box (defense, collaborative inference) | ICLR workshop | [Paper](https://arxiv.org/pdf/2104.05743.pdf) | [Code](https://github.com/TTitcombe/Model-Inversion-SplitNN) |
| 2021 | Feature inference attack on model predictions in vertical federated learning | white-box (VFL) | ICDE | [Paper](https://arxiv.org/pdf/2010.10152) | [Code](https://github.com/xj231/featureinference-vfl) |
| 2021 | PRID: Model Inversion Privacy Attacks in Hyperdimensional Learning Systems | black-box (both, collaborative inference) | DAC | [Paper](https://dl.acm.org/doi/abs/10.1109/DAC18074.2021.9586217) | - |
| 2021 | Defending Against Model Inversion Attack by Adversarial Examples | black-box (defense) | CSR Workshops | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9527945) | - |
| 2021 | Practical Black Box Model Inversion Attacks Against Neural Nets | black-box | ECML PKDD | [Paper](https://link.springer.com/chapter/10.1007/978-3-030-93733-1_3) | - |
| 2021 | Model Inversion Attack against a Face Recognition System in a Black-Box Setting | black-box | APSIPA | [Paper](http://www.apsipa.org/proceedings/2021/pdfs/0001800.pdf) | - |
| 2022 | Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks | white-box | ICML | [Paper](https://arxiv.org/pdf/2201.12179.pdf) | [Code](https://github.com/LukasStruppek/Plug-and-Play-Attacks) |
| 2022 | Label-Only Model Inversion Attacks via Boundary Repulsion | black-box | CVPR | [Paper](https://arxiv.org/pdf/2203.01925.pdf) | [Code](https://github.com/m-kahla/Label-Only-Model-Inversion-Attacks-via-Boundary-Repulsion) |
| 2022 | ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning | white-box (defense, SFL) | CVPR | [Paper](https://openaccess.thecvf.com/content/CVPR2022/html/Li_ResSFL_A_Resistance_Transfer_Framework_for_Defending_Model_Inversion_Attack_CVPR_2022_paper.html) | [Code](https://github.com/zlijingtao/ResSFL) |
| 2022 | Bilateral Dependency Optimization: Defending Against Model-inversion Attacks | white-box (defense) | KDD | [Paper](https://arxiv.org/pdf/2206.05483.pdf) | [Code](https://github.com/xpeng9719/Defend_MI) |
| 2022 | ML-DOCTOR: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models | holistic risk assessment | USENIX Security | [Paper](https://www.usenix.org/system/files/sec22summer_liu-yugeng.pdf) | [Code](https://github.com/liuyugeng/ML-Doctor) |
| 2022 | Model Inversion Attack by Integration of Deep Generative Models: Privacy-Sensitive Face Generation From a Face Recognition System | white-box | TIFS | [Paper](https://dl.acm.org/doi/abs/10.1109/TIFS.2022.3140687) | - |
| 2022 | One Parameter Defense—Defending Against Data Inference Attacks via Differential Privacy | black-box (defense) | TIFS | [Paper](https://arxiv.org/pdf/2203.06580.pdf) | |
| 2022 | Reconstructing Training Data from Diverse ML Models by Ensemble Inversion | white-box | WACV | [Paper](https://arxiv.org/pdf/2111.03702.pdf) | |
| 2022 | SecretGen: Privacy Recovery on Pre-trained Models via Distribution Discrimination | white-box | ECCV | [Paper](https://arxiv.org/pdf/2207.12263.pdf) | |
| 2022 | UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning | black-box (split learning) | WPES | [Paper](https://arxiv.org/pdf/2108.09033.pdf) | [code](https://github.com/ege-erdogan/unsplit) |
| 2022 | MIRROR: Model Inversion for Deep Learning Network with High Fidelity | white-box | NDSS | [Paper](https://www.cs.purdue.edu/homes/an93/static/papers/ndss2022_model_inversion.pdf) | [code](https://github.com/njuaplusplus/mirror) |
| 2022 | Reconstructing Training Data with Informed Adversaries | white-box | SP | [Paper](https://arxiv.org/abs/2201.04845) | |
| 2022 | Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks | white-box | BMVC | [Paper](https://arxiv.org/abs/2107.06304) | |
| 2022 | Reconstructing Training Data from Trained Neural Networks | white-box | NeurIPS | [Paper](https://arxiv.org/abs/2206.07758) | |
| 2023 | Sparse Black-Box Inversion Attack with Limited Information | black-box | ICASSP | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10095514) | [code](https://github.com/Tencent/TFace/tree/master/recognition) |
| 2023 | Breaching FedMD: Image Recovery via Paired-Logits Inversion Attack | black-box | CVPR | [Paper](https://arxiv.org/pdf/2304.11436.pdf) | [code](https://github.com/FLAIR-THU/PairedLogitsInversion) |
| 2023 | Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network | white-box | AAAI | [Paper](https://arxiv.org/pdf/2302.09814.pdf) | [code](https://github.com/lethesec/plg-mi-attack) |
| 2023 | C2FMI: Corse-to-Fine Black-box Model Inversion Attack | black-box | TDSC | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10148574) | |
| 2023 | Boosting Model Inversion Attacks with Adversarial Examples | black-box | TDSC | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10148576) | |
| 2023 | Reinforcement Learning-Based Black-Box Model Inversion Attacks | black-box | CVPR | [Paper](https://arxiv.org/pdf/2304.04625.pdf) | [code](https://github.com/HanGyojin/RLB-MI) |
| 2023 | Re-thinking Model Inversion Attacks Against Deep Neural Networks | white-box | CVPR | [Paper](https://arxiv.org/pdf/2304.01669.pdf) | [code](https://github.com/sutd-visual-computing-group/Re-thinking_MI) |
| 2023 | Purifier: Defending Data Inference Attacks via Transforming Confidence Scores | black-box (defense) | AAAI | [Paper](https://arxiv.org/pdf/2005.03915.pdf) | - | - |
| 2023 | Unstoppable Attack: Label-Only Model Inversion via Conditional Diffusion Model | black-box | CCS | [Paper](https://arxiv.org/pdf/2307.08424.pdf) | - | - |
- [ICLR 2014] Intriguing properties of neural networks [[paper]](https://arxiv.org/pdf/1312.6199)
- [ICLR 2014] Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps [[paper]](https://arxiv.org/pdf/1312.6034)
- [CVPR 2015] Understanding Deep Image Representations by Inverting Them [[paper]](https://openaccess.thecvf.com/content_cvpr_2015/papers/Mahendran_Understanding_Deep_Image_2015_CVPR_paper.pdf)
- [CVPR 2016] Inverting visual representations with convolutional networks [[paper]](https://arxiv.org/pdf/1506.02753)
- [NIPS 2016] Generating Images with Perceptual Similarity Metrics based on Deep Networks [[paper]](https://proceedings.neurips.cc/paper/2016/file/371bce7dc83817b7893bcdeed13799b5-Paper.pdf)
- [ICML 2022] Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations [[paper]](https://proceedings.mlr.press/v162/ghiasi22a/ghiasi22a.pdf)
- [CVPR 2020] Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion [[paper]](https://arxiv.org/pdf/1912.08795) [[code]](https://github.com/NVlabs/DeepInversion)
- [USENIX Security 2014] (black & white-box) Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing [[paper]](https://www.usenix.org/system/files/conference/usenixsecurity14/sec14-paper-fredrikson-privacy.pdf)
- [CCS 2015] (black & white-box) Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures [[paper]](https://dl.acm.org/doi/pdf/10.1145/2810103.2813677) [[code1]](http://www.cs.cmu.edu/~mfredrik/mi-2016.zip) [[code2]](https://github.com/yashkant/Model-Inversion-Attack) [[code3]](https://github.com/zhangzp9970/MIA) [[code4]](https://github.com/sarahsimionescu/simple-model-inversion)
- [IJCAI 2015] (white-box (defense)) Regression Model Fitting under Differential Privacy and Model Inversion Attack [[paper]](http://www.csce.uark.edu/~xintaowu/publ/ijcai15.pdf) [[code]](https://github.com/cxs040/Regression-Model-Fitting-under-Differential-Privacy-and-Model-Inversion-Attack-Source-Code)
- [CSF 2016] (black & white-box) A Methodology for Formalizing Model-Inversion Attacks [[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7536387&casa_token=ClIVAMYo6dcAAAAA:u75HHyFHj5lBRec9h5SqOZyAsL2dICcWIuQPCj6ltk8McREFCaM4ex42mv3S-oNPiGJLDfUqg0qL)
- [CCS 2017] (white-box) Machine Learning Models that Remember Too Much [[paper]](https://arxiv.org/pdf/1709.07886.pdf) [[code]](https://github.com/csong27/ml-model-remember)
- [PST 2017] (white-box) Model Inversion Attacks for Prediction Systems: Without knowledge of Non-Sensitive Attributes [[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8476925)
- [CSF 2018] (white-box) Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting [[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8429311)
- [arXiv 2019] (white-box) An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack [[paper]](https://ieeexplore.ieee.org/document/8822435)
- [CVPR 2019] A style-based generator architecture for generative adversarial networks [[paper]](https://arxiv.org/pdf/1812.04948)
- [GLSVLSI 2019] (black-box (defense)) MLPrivacyGuard: Defeating Confidence Information based Model Inversion Attacks on Machine Learning Systems [[paper]](https://www.researchgate.net/profile/Tiago-Alves-13/publication/333136362_MLPrivacyGuard_Defeating_Confidence_Information_based_Model_Inversion_Attacks_on_Machine_Learning_Systems/links/5cddb94d92851c4eaba682d7/MLPrivacyGuard-Defeating-Confidence-Information-based-Model-Inversion-Attacks-on-Machine-Learning-Systems.pdf)
- [ACSAC 2019] (black & white-box) Model Inversion Attacks Against Collaborative Inference [[paper]](http://palms.ee.princeton.edu/system/files/Model+Inversion+Attack+against+Collaborative+Inference.pdf)
- [CCS 2019] (black-box) Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment [[paper]](https://dl.acm.org/doi/pdf/10.1145/3319535.3354261?casa_token=J81Ps-ZWXHkAAAAA:FYnXo7DQoHpdhqns8x2TclKFeHpAQlXVxMBW2hTrhJ5c20XKdsounqdT1Viw1g6Xsu9FtKj85elxQaA) [[code]](https://github.com/zhangzp9970/TB-MIA)
- [arXiv 2019] (black-box) GAMIN: An Adversarial Approach to Black-Box Model Inversion [[paper]](https://arxiv.org/pdf/1909.11835.pdf)
- [CVPR 2020] (white-box) The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks [[paper]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhang_The_Secret_Revealer_Generative_Model-Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2020_paper.pdf) [[code]](https://github.com/AI-secure/GMI-Attack) [[video]](https://www.youtube.com/watch?v=_g-oXYMhz4M)
- [ICLR 2020] (white-box) Overlearning Reveals Sensitive Attributes [[paper]](https://arxiv.org/pdf/1905.11742.pdf)
- [APSIPA ASC 2020] (white-box) Deep Face Recognizer Privacy Attack: Model Inversion Initialization by a Deep Generative Adversarial Data Space Discriminator [[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9306253&casa_token=AWugOvIe0I0AAAAA:9wICCkMcfoljMqooM-lgl8m-6F6-cEl-ClHgNkE1SV8mZwqvBIaJ1HDjT1RWLyBz_P7tdB51jQVL&tag=1)
- [USENIX Security 2020] (black-box) Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning [[paper]](https://www.usenix.org/system/files/sec20-salem.pdf)
- [IoT-J 2020] (black-box) Attacking and Protecting Data Privacy in Edge-Cloud Collaborative Inference Systems [[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9187880) [[code]](https://github.com/zechenghe/Inverse_Collaborative_Inference)
- [ECCV Workshop 2020] (black-box) Black-Box Face Recovery from Identity Features [[paper]](https://arxiv.org/pdf/2007.13635.pdf)
- [arXiv 2020] (white-box) MixCon: Adjusting the Separability of Data Representations for Harder Data Recovery [[paper]](https://arxiv.org/abs/2010.11463)
- [Globecom 2020] (white-box (defense)) Privacy Preserving Facial Recognition Against Model Inversion Attacks [[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9322508)
- [Big Data 2020] (white-box (defense)) Broadening Differential Privacy for Deep Learning Against Model Inversion Attacks [[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9378274)
- [AdvML 2020] (metric) Evaluation Indicator for Model Inversion Attack [[paper]](https://drive.google.com/file/d/1rl77BGtGHzZ8obWUEOoqunXCjgvpzE8d/view)
- [NeurIPS 2021] (white-box) Variational Model Inversion Attacks [[paper]](https://proceedings.neurips.cc/paper/2021/file/50a074e6a8da4662ae0a29edde722179-Paper.pdf) [[code]](https://github.com/wangkua1/vmi)
- [ICCV 2021] (white-box) Exploiting Explanations for Model Inversion Attacks [[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Zhao_Exploiting_Explanations_for_Model_Inversion_Attacks_ICCV_2021_paper.pdf)
- [ICCV 2021] (white-box) Knowledge-Enriched Distributional Model Inversion Attacks [[paper]](https://arxiv.org/pdf/2010.04092.pdf) [[code]](https://github.com/SCccc21/Knowledge-Enriched-DMI)
- [AAAI 2021] (white-box (defense)) Improving Robustness to Model Inversion Attacks via Mutual Information Regularization [[paper]](https://arxiv.org/pdf/2009.05241.pdf)
- [ICLR Workshop 2021] (black-box (defense)) Practical Defences Against Model Inversion Attacks for Split Neural Networks [[paper]](https://arxiv.org/pdf/2104.05743.pdf) [[code]](https://github.com/TTitcombe/Model-Inversion-SplitNN)
- [ICDE 2021] (white-box) Feature inference attack on model predictions in vertical federated learning [[paper]](https://arxiv.org/pdf/2010.10152) [[code]](https://github.com/xj231/featureinference-vfl)
- [DAC 2021] (black & white-box) PRID: Model Inversion Privacy Attacks in Hyperdimensional Learning Systems [[paper]](https://dl.acm.org/doi/abs/10.1109/DAC18074.2021.9586217)
- [CSR Workshops 2021] (black-box (defense)) Defending Against Model Inversion Attack by Adversarial Examples [[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9527945)
- [ECML PKDD 2021] (black-box) Practical Black Box Model Inversion Attacks Against Neural Nets [[paper]](https://link.springer.com/chapter/10.1007/978-3-030-93733-1_3)
- [APSIPA 2021] (black-box) Model Inversion Attack against a Face Recognition System in a Black-Box Setting [[paper]](http://www.apsipa.org/proceedings/2021/pdfs/0001800.pdf)
- [ICML 2022] (white-box) Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks [[paper]](https://arxiv.org/pdf/2201.12179.pdf) [[code]](https://github.com/LukasStruppek/Plug-and-Play-Attacks)
- [CVPR 2022] (black-box) Label-Only Model Inversion Attacks via Boundary Repulsion [[paper]](https://arxiv.org/pdf/2203.01925.pdf) [[code]](https://github.com/m-kahla/Label-Only-Model-Inversion-Attacks-via-Boundary-Repulsion)
- [CVPR 2022] (white-box (defense)) ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning [[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Li_ResSFL_A_Resistance_Transfer_Framework_for_Defending_Model_Inversion_Attack_CVPR_2022_paper.html) [[code]](https://github.com/zlijingtao/ResSFL)
- [KDD 2022] (white-box (defense)) Bilateral Dependency Optimization: Defending Against Model-inversion Attacks [[paper]](https://arxiv.org/pdf/2206.05483.pdf) [[code]](https://github.com/xpeng9719/Defend_MI)
- [USENIX Security 2022] (holistic risk assessment) ML-DOCTOR: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [[paper]](https://www.usenix.org/system/files/sec22summer_liu-yugeng.pdf) [[code]](https://github.com/liuyugeng/ML-Doctor)
- [TIFS 2022] (white-box) Model Inversion Attack by Integration of Deep Generative Models: Privacy-Sensitive Face Generation From a Face Recognition System [[paper]](https://dl.acm.org/doi/abs/10.1109/TIFS.2022.3140687)
- [TIFS 2022] (black-box (defense)) One Parameter Defense—Defending Against Data Inference Attacks via Differential Privacy [[paper]](https://arxiv.org/pdf/2203.06580.pdf)
- [WACV 2022] (white-box) Reconstructing Training Data from Diverse ML Models by Ensemble Inversion [[paper]](https://arxiv.org/pdf/2111.03702.pdf)
- [ECCV 2022] (white-box) SecretGen: Privacy Recovery on Pre-trained Models via Distribution Discrimination [[paper]](https://arxiv.org/pdf/2207.12263.pdf)
- [WPES 2022] (black-box) UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning [[paper]](https://arxiv.org/pdf/2108.09033.pdf) [[code]](https://github.com/ege-erdogan/unsplit)
- [NDSS 2022] (white-box) MIRROR: Model Inversion for Deep Learning Network with High Fidelity [[paper]](https://www.cs.purdue.edu/homes/an93/static/papers/ndss2022_model_inversion.pdf) [[code]](https://github.com/njuaplusplus/mirror)
- [S&P 2022] (white-box) Reconstructing Training Data with Informed Adversaries [[paper]](https://arxiv.org/abs/2201.04845)
- [BMVC 2022] (white-box) Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks [[paper]](https://arxiv.org/abs/2107.06304)
- [NeurIPS 2022] (white-box) Reconstructing Training Data from Trained Neural Networks [[paper]](https://arxiv.org/abs/2206.07758)
- [ICASSP 2023] (black-box) Sparse Black-Box Inversion Attack with Limited Information [[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10095514) [[code]](https://github.com/Tencent/TFace/tree/master/recognition)
- [CVPR 2023] (black-box) Breaching FedMD: Image Recovery via Paired-Logits Inversion Attack [[paper]](https://arxiv.org/pdf/2304.11436.pdf) [[code]](https://github.com/FLAIR-THU/PairedLogitsInversion)
- [AAAI 2023] (white-box) Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network [[paper]](https://arxiv.org/pdf/2302.09814.pdf) [[code]](https://github.com/lethesec/plg-mi-attack)
- [TDSC 2023] (black-box) C2FMI: Coarse-to-Fine Black-box Model Inversion Attack [[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10148574) [[code]](https://github.com/MiLabHITSZ/2022YeC2FMI)
- [TDSC 2023] (black-box) Boosting Model Inversion Attacks with Adversarial Examples [[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10148576)
- [CVPR 2023] (black-box) Reinforcement Learning-Based Black-Box Model Inversion Attacks [[paper]](https://arxiv.org/pdf/2304.04625.pdf) [[code]](https://github.com/HanGyojin/RLB-MI)
- [CVPR 2023] (white-box) Re-thinking Model Inversion Attacks Against Deep Neural Networks [[paper]](https://arxiv.org/pdf/2304.01669.pdf) [[code]](https://github.com/sutd-visual-computing-group/Re-thinking_MI)
- [AAAI 2023] (black-box (defense)) Purifier: Defending Data Inference Attacks via Transforming Confidence Scores [[paper]](https://arxiv.org/pdf/2005.03915.pdf)
- [CCS 2023] (black-box) Unstoppable Attack: Label-Only Model Inversion via Conditional Diffusion Model [[paper]](https://arxiv.org/pdf/2307.08424.pdf)
- [ACSAC 2019] Model Inversion Attacks Against Collaborative Inference [[paper]](https://www.acsac.org/2019/program/final/1/167.pdf) [[code]](https://github.com/zechenghe/Inverse_Collaborative_Inference)
- [CCS 2021] Unleashing the tiger: Inference attacks on split learning [[paper]](https://arxiv.org/pdf/2012.02670) [[code]](https://github.com/pasquini-dario/SplitNN_FSHA)
- [USENIX Security 2014] Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing [[paper]](https://www.usenix.org/system/files/conference/usenixsecurity14/sec14-paper-fredrikson-privacy.pdf)
- [CVPR 2020] The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks [[paper]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhang_The_Secret_Revealer_Generative_Model-Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2020_paper.pdf)
- [TIFS 2023] A GAN-Based Defense Framework Against Model Inversion Attacks [[paper]](https://ieeexplore.ieee.org/document/10184476)
- [CVPR 2024] Model Inversion Robustness: Can Transfer Learning Help? [[paper]](https://openaccess.thecvf.com/content/CVPR2024/papers/Ho_Model_Inversion_Robustness_Can_Transfer_Learning_Help_CVPR_2024_paper.pdf) [[code]](https://hosytuyen.github.io/projects/TL-DMI)
- [ICLR 2024] Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks [[paper]](https://arxiv.org/pdf/2310.06549) [[code]](https://github.com/LukasStruppek/Plug-and-Play-Attacks)
- [arXiv 2020] Defending Model Inversion and Membership Inference Attacks via Prediction Purification [[paper]](https://arxiv.org/pdf/2005.03915)
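
Many of the white-box attacks listed above share the same core recipe: treat the input as a free variable and run gradient ascent on the target model's confidence for a chosen class. The snippet below is a minimal, illustrative PyTorch sketch of that bare optimization loop only; it is not the method of any specific paper, and the function name, shapes, and hyperparameters are placeholders.

```python
# Minimal sketch of the plain gradient-based (white-box) inversion loop.
# Real attacks in the list above add GAN priors, better initialisation,
# identity losses, etc.; everything here is illustrative only.
import torch
import torch.nn.functional as F

def invert_class(model, target_class, input_shape=(1, 3, 64, 64), steps=500, lr=0.1):
    """Synthesise an input that the trained `model` assigns to `target_class`."""
    model.eval()
    x = torch.randn(input_shape, requires_grad=True)      # random starting point
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), torch.tensor([target_class]))
        loss.backward()                                    # gradients w.r.t. the input
        optimizer.step()
        x.data.clamp_(0.0, 1.0)                            # keep a valid image range
    return x.detach()
```

Generative attacks such as GMI, KED-MI, and Plug & Play Attacks replace this raw pixel-space search with optimization in a GAN latent space so that reconstructions stay on the natural-image manifold.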
## Graph learning domain
| Year | Title | Adversarial Knowledge | Venue | Paper Link | Code Link |
| ---- | ----- | -------------------- | ----- | ---------- | --------- |
| 2020 | Stealing Links from Graph Neural Networks | - | USENIX Security | [Paper](https://www.usenix.org/system/files/sec21-he-xinlei.pdf) | [Code](https://github.com/xinleihe/link_stealing_attack) |
| 2020 | Improving Robustness to Model Inversion Attacks via Mutual Information Regularization | black & white-box | AAAI | [Paper](https://arxiv.org/pdf/2009.05241v1.pdf) | |
| 2020 | Reducing Risk of Model Inversion Using Privacy-Guided Training | black & white-box | Arxiv | [Paper](https://arxiv.org/pdf/2006.15877.pdf) | |
| 2020 | Quantifying Privacy Leakage in Graph Embedding | - | MobiQuitous | [Paper](https://arxiv.org/pdf/2010.00906.pdf) | [Code](https://github.com/vasishtduddu/GraphLeaks) |
| 2022 | A Survey on Gradient Inversion: Attacks, Defenses and Future Directions | white-box | IJCAI | [Paper](https://arxiv.org/pdf/2206.07284.pdf) | |
| 2021 | NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data | black-box | ICDE | [Paper](https://arxiv.org/pdf/2106.11865.pdf) | [Code](https://github.com/ICHproject/NetFense) |
| 2021 | DeepWalking Backwards: From Node Embeddings Back to Graphs | - | ICML | [Paper](http://proceedings.mlr.press/v139/chanpuriya21a/chanpuriya21a.pdf) | [Code](https://github.com/konsotirop/Invert_Embeddings) |
| 2021 | GraphMI: Extracting Private Graph Data from Graph Neural Networks | white-box | IJCAI | [Paper](https://arxiv.org/pdf/2106.02820v1.pdf) | [Code](https://github.com/zaixizhang/GraphMI) |
| 2021 | Node-Level Membership Inference Attacks Against Graph Neural Networks | - | Arxiv | [Paper](https://arxiv.org/pdf/2102.05429.pdf) | - |
| 2022 | A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability | black & white-box | Arxiv | [Paper](https://arxiv.org/pdf/2204.08570.pdf) | |
| 2022 | Learning Privacy-Preserving Graph Convolutional Network with Partially Observed Sensitive Attributes | - | WWW | [Paper](https://dl.acm.org/doi/pdf/10.1145/3485447.3511975) | - |
| 2022 | Inference Attacks Against Graph Neural Networks | - | USENIX Security | [Paper](https://www.usenix.org/system/files/sec22summer_zhang-zhikun.pdf) | [Code](https://github.com/Zhangzhk0819/GNN-Embedding-Leaks) |
| 2022 | Model Stealing Attacks Against Inductive Graph Neural Networks | - | IEEE S&P | [Paper](https://arxiv.org/pdf/2112.08331.pdf) | [Code](https://github.com/xinleihe/GNNStealing) |
| 2022 | Differentially Private Graph Classification with GNNs | - | Arxiv | [Paper](https://arxiv.org/pdf/2202.02575.pdf) | - |
| 2022 | GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation | - | Arxiv | [Paper](https://arxiv.org/pdf/2203.00949.pdf) | - |
| 2022 | SoK: Differential Privacy on Graph-Structured Data | - | Arxiv | [Paper](https://arxiv.org/pdf/2203.09205.pdf) | - |
| 2022 | Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy | - | Arxiv | [Paper](https://arxiv.org/pdf/2202.10209.pdf) | - |
| 2022 | Private Graph Extraction via Feature Explanations | - | Arxiv | [Paper](https://arxiv.org/pdf/2206.14724.pdf) | - |
| 2022 | Privacy and Transparency in Graph Machine Learning: A Unified Perspective | - | Arxiv | [Paper](https://arxiv.org/pdf/2207.10896.pdf) | - |
| 2022 | Finding MNEMON: Reviving Memories of Node Embeddings | - | CCS | [Paper](https://arxiv.org/pdf/2204.06963.pdf) | - |
| 2022 | Defense against membership inference attack in graph neural networks through graph perturbation | - | IJIS | [Paper](https://link.springer.com/article/10.1007/s10207-022-00646-y) | - |
| 2022 | Model Inversion Attacks against Graph Neural Networks | - | TKDE | [Paper](https://arxiv.org/pdf/2209.07807.pdf) | - |
| 2023 | On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation | white-box | ICML | [Paper](https://openreview.net/pdf?id=Vcl3qckVyh) | [Code](https://github.com/tmlr-group/MC-GRA) |
| 2023 | Model Inversion Attacks on Homogeneous and Heterogeneous Graph Neural Networks | white-box | SecureComm | [Paper](https://arxiv.org/pdf/2310.09800) | - |
- [USENIX Security 2020] Stealing Links from Graph Neural Networks [[paper]](https://www.usenix.org/system/files/sec21-he-xinlei.pdf) [[code]](https://github.com/xinleihe/link_stealing_attack)
- [arXiv 2020] (black & white-box) Reducing Risk of Model Inversion Using Privacy-Guided Training [[paper]](https://arxiv.org/pdf/2006.15877.pdf)
- [MobiQuitous 2020] Quantifying Privacy Leakage in Graph Embedding [[paper]](https://arxiv.org/pdf/2010.00906.pdf) [[code]](https://github.com/vasishtduddu/GraphLeaks)
- [IJCAI 2022] (white-box) A Survey on Gradient Inversion: Attacks, Defenses and Future Directions [[paper]](https://arxiv.org/pdf/2206.07284.pdf)
- [ICDE 2021] (black-box) NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data [[paper]](https://arxiv.org/pdf/2106.11865.pdf) [[code]](https://github.com/ICHproject/NetFense)
- [ICML 2021] DeepWalking Backwards: From Node Embeddings Back to Graphs [[paper]](http://proceedings.mlr.press/v139/chanpuriya21a/chanpuriya21a.pdf) [[code]](https://github.com/konsotirop/Invert_Embeddings)
- [IJCAI 2021] (white-box) GraphMI: Extracting Private Graph Data from Graph Neural Networks [[paper]](https://arxiv.org/pdf/2106.02820v1.pdf) [[code]](https://github.com/zaixizhang/GraphMI)
- [arXiv 2021] Node-Level Membership Inference Attacks Against Graph Neural Networks [[paper]](https://arxiv.org/pdf/2102.05429.pdf)
- [arXiv 2022] (black & white-box) A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [[paper]](https://arxiv.org/pdf/2204.08570.pdf)
- [WWW 2022] Learning Privacy-Preserving Graph Convolutional Network with Partially Observed Sensitive Attributes [[paper]](https://dl.acm.org/doi/pdf/10.1145/3485447.3511975?casa_token=Xsle4t9cLdcAAAAA:Gmij-qWaTJ2esGVa-yzKNHqVOMzYyaIgdNcgGmEzHrVyMdwwf9idn3qBjkhCcQeRTvbAkaT6OxiwXsk)
- [USENIX Security 2022] Inference Attacks Against Graph Neural Networks [[paper]](https://www.usenix.org/system/files/sec22summer_zhang-zhikun.pdf) [[code]](https://github.com/Zhangzhk0819/GNN-Embedding-Leaks)
- [IEEE S&P 2022] Model Stealing Attacks Against Inductive Graph Neural Networks [[paper]](https://arxiv.org/pdf/2112.08331.pdf) [[code]](https://github.com/xinleihe/GNNStealing)
- [arXiv 2022] Differentially Private Graph Classification With GNNs [[paper]](https://arxiv.org/pdf/2202.02575.pdf)
- [arXiv 2022] GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation [[paper]](https://arxiv.org/pdf/2203.00949.pdf)
- [arXiv 2022] SoK: Differential Privacy on Graph-Structured Data [[paper]](https://arxiv.org/pdf/2203.09205.pdf)
- [arXiv 2022] Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy [[paper]](https://arxiv.org/pdf/2202.10209.pdf)
- [arXiv 2022] Private Graph Extraction via Feature Explanations [[paper]](https://arxiv.org/pdf/2206.14724.pdf)
- [arXiv 2022] Privacy and Transparency in Graph Machine Learning: A Unified Perspective [[paper]](https://arxiv.org/pdf/2207.10896.pdf)
- [CCS 2022] Finding MNEMON: Reviving Memories of Node Embeddings [[paper]](https://arxiv.org/pdf/2204.06963.pdf)
- [IJIS 2022] Defense Against Membership Inference Attack in Graph Neural Networks Through Graph Perturbation [[paper]](https://link.springer.com/article/10.1007/s10207-022-00646-y)
- [TKDE 2022] Model Inversion Attacks against Graph Neural Networks [[paper]](https://arxiv.org/pdf/2209.07807.pdf)
- [ICML 2023] (white-box) On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation [[paper]](https://openreview.net/pdf?id=Vcl3qckVyh) [[code]](https://github.com/tmlr-group/MC-GRA)
- [SecureComm 2023] (white-box) Model Inversion Attacks on Homogeneous and Heterogeneous Graph Neural Networks [[paper]](https://arxiv.org/pdf/2310.09800)
## Natural language processing domain
| Year | Title | Adversarial Knowledge | Venue | Paper Link | Code Link |
| ---- | ----- | -------------------- | ----- | ---------- | --------- |
| 2020 | Extracting Training Data from Large Language Models | black-box | USENIX Security | [Paper](https://arxiv.org/pdf/2012.07805.pdf) | |
| 2020 | Privacy Risks of General-Purpose Language Models | black & white-box | S&P | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9152761) | |
| 2020 | Information Leakage in Embedding Models | black & white-box | CCS | [Paper](https://arxiv.org/pdf/2004.00053.pdf) | |
| 2021 | TAG: Gradient Attack on Transformer-based Language Models | white-box | EMNLP | [Paper](https://arxiv.org/pdf/2103.06819.pdf) | |
| 2021 | Dataset Reconstruction Attack against Language Models | black-box | CEUR workshop | [Paper](https://ceur-ws.org/Vol-2942/paper1.pdf) | |
| 2022 | KART: Parameterization of Privacy Leakage Scenarios from Pre-trained Language Models | black-box | Arxiv | [Paper](https://arxiv.org/pdf/2101.00036v1.pdf) | [Code](https://github.com/yutanakamura-tky/kart) |
| 2022 | Text Revealer: Private Text Reconstruction via Model Inversion Attacks against Transformers | white-box | Arxiv | [Paper](https://arxiv.org/pdf/2209.10505.pdf) | |
| 2022 | Canary Extraction in Natural Language Understanding Models | white-box | ACL | [Paper](https://arxiv.org/pdf/2203.13920.pdf) | |
| 2022 | Are Large Pre-Trained Language Models Leaking Your Personal Information? | white-box | EMNLP Findings | [Paper](https://aclanthology.org/2022.findings-emnlp.148.pdf) | [Code](https://github.com/jeffhj/LM_PersonalInfoLeak) |
| 2022 | Recovering Private Text in Federated Learning of Language Models | white-box | NeurIPS | [Paper](https://arxiv.org/pdf/2205.08514.pdf) | [Code](https://github.com/princeton-sysml/film) |
| 2023 | Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Recover the Whole Sentence | black-box | ACL | [Paper](https://arxiv.org/pdf/2305.03010.pdf) | [Code](https://github.com/hkust-knowcomp/geia) |
| 2023 | Deconstructing Classifiers: Towards A Data Reconstruction Attack Against Text Classification Models | white-box | Arxiv | [Paper](https://arxiv.org/pdf/2306.13789.pdf) | |
| 2023 | Model Inversion Attack with Least Information and an In-depth Analysis of its Disparate Vulnerability | black-box | SaTML | [Paper](https://openreview.net/pdf?id=x42Lo6Mkcrf) | - |
| 2023 | Text Embeddings Reveal (Almost) As Much As Text | black-box | EMNLP | [Paper](https://arxiv.org/abs/2310.06816) | [Code](https://github.com/jxmorris12/vec2text) |
| 2024 | Extracting Prompts by Inverting LLM Outputs | black-box | EMNLP | [Paper](https://arxiv.org/pdf/2405.15012) | [Code](https://github.com/collinzrj/output2prompt) |
| 2024 | Do Membership Inference Attacks Work on Large Language Models? | white-box | Arxiv | [Paper](https://arxiv.org/pdf/2402.07841.pdf) | |
| 2024 | Language Model Inversion | black-box | ICLR | [Paper](https://arxiv.org/abs/2311.13647) | [Code](https://github.com/jxmorris12/vec2text) |
- [USENIX Security 2019] The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks [[paper]](https://www.usenix.org/system/files/sec19-carlini.pdf)
- [USENIX Security 2020] (black-box) Extracting Training Data from Large Language Models [[paper]](https://arxiv.org/pdf/2012.07805.pdf)
- [S&P 2020] (black & white-box) Privacy Risks of General-Purpose Language Models [[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9152761)
- [arXiv 2018] Privacy-preserving Neural Representations of Text [[paper]](https://arxiv.org/pdf/1808.09408)
- [arXiv 2015] Censoring Representations with an Adversary [[paper]](https://arxiv.org/pdf/1511.05897)
- [arXiv 2018] Adversarial Removal of Demographic Attributes from Text Data [[paper]](https://arxiv.org/pdf/1808.06640)
- [arXiv 2018] Towards Robust and Privacy-preserving Text Representations [[paper]](https://arxiv.org/pdf/1805.06093)
- [NIPS 2017] Controllable Invariance through Adversarial Feature Learning [[paper]](https://proceedings.neurips.cc/paper_files/paper/2017/file/8cb22bdd0b7ba1ab13d742e22eed8da2-Paper.pdf)
- [CCS 2020] (black & white-box) Information Leakage in Embedding Models [[paper]](https://arxiv.org/pdf/2004.00053.pdf)
- [EMNLP 2021] (white-box) TAG: Gradient Attack on Transformer-based Language Models [[paper]](https://arxiv.org/pdf/2103.06819.pdf)
- [CEUR Workshop 2021] (black-box) Dataset Reconstruction Attack against Language Models [[paper]](https://ceur-ws.org/Vol-2942/paper1.pdf)
- [arXiv 2022] (black-box) KART: Parameterization of Privacy Leakage Scenarios from Pre-trained Language Models [[paper]](https://arxiv.org/pdf/2101.00036v1.pdf) [[code]](https://github.com/yutanakamura-tky/kart)
- [arXiv 2022] (white-box) Text Revealer: Private Text Reconstruction via Model Inversion Attacks against Transformers [[paper]](https://arxiv.org/pdf/2209.10505.pdf)
- [ACL 2022] (white-box) Canary Extraction in Natural Language Understanding Models [[paper]](https://arxiv.org/pdf/2203.13920.pdf)
- [EMNLP Findings 2022] (white-box) Are Large Pre-Trained Language Models Leaking Your Personal Information? [[paper]](https://aclanthology.org/2022.findings-emnlp.148.pdf) [[code]](https://github.com/jeffhj/LM_PersonalInfoLeak)
- [NeurIPS 2022] (white-box) Recovering Private Text in Federated Learning of Language Models [[paper]](https://arxiv.org/pdf/2205.08514.pdf) [[code]](https://github.com/princeton-sysml/film)
- [ACL 2023] (black-box) Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Recover the Whole Sentence [[paper]](https://arxiv.org/pdf/2305.03010.pdf) [[code]](https://github.com/hkust-knowcomp/geia)
- [arXiv 2023] (white-box) Deconstructing Classifiers: Towards A Data Reconstruction Attack Against Text Classification Models [[paper]](https://arxiv.org/pdf/2306.13789.pdf)
- [SaTML 2023] (black-box) Model Inversion Attack with Least Information and an In-depth Analysis of its Disparate Vulnerability [[paper]](https://openreview.net/pdf?id=x42Lo6Mkcrf)
- [EMNLP 2023] (black-box) Text Embeddings Reveal (Almost) As Much As Text [[paper]](https://arxiv.org/abs/2310.06816) [[code]](https://github.com/jxmorris12/vec2text)
- [ACL 2024] (black-box) Text Embedding Inversion Security for Multilingual Language Models [[paper]](https://arxiv.org/abs/2401.12192) [[code]](https://github.com/siebeniris/multivec2text)
- [EMNLP 2024] (black-box) Extracting Prompts by Inverting LLM Outputs [[paper]](https://arxiv.org/pdf/2405.15012) [[code]](https://github.com/collinzrj/output2prompt)
- [arXiv 2024] (white-box) Do Membership Inference Attacks Work on Large Language Models? [[paper]](https://arxiv.org/pdf/2402.07841.pdf)
- [ICLR 2024] (black-box) Language Model Inversion [[paper]](https://arxiv.org/abs/2311.13647) [[code]](https://github.com/jxmorris12/vec2text)
- [ACL 2024] (black-box) Transferable Embedding Inversion Attack: Uncovering Privacy Risks in Text Embeddings without Model Queries [[paper]](https://aclanthology.org/2024.acl-long.230/)
- [COLM 2024] Effective Prompt Extraction from Language Models [[paper]](https://openreview.net/forum?id=0o95CVdNuz#discussion)
## Tools
- [AIJack](https://github.com/Koukyosyumei/AIJack): Implementation of algorithms for AI security.
- [Privacy-Attacks-in-Machine-Learning](https://github.com/shrebox/Privacy-Attacks-in-Machine-Learning): Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch.
- [ml-attack-framework](https://github.com/Pilladian/ml-attack-framework): Universität des Saarlandes - Privacy Enhancing Technologies 2021 - Semester Project.
- (Trail of Bits) PrivacyRaven [[GitHub]](https://github.com/trailofbits/PrivacyRaven)
- (TensorFlow) TensorFlow Privacy [[GitHub]](https://github.com/tensorflow/privacy/tree/master/tensorflow_privacy/privacy/membership_inference_attack)
- (NUS Data Privacy and Trustworthy Machine Learning Lab) Machine Learning Privacy Meter [[GitHub]](https://github.com/privacytrustlab/ml_privacy_meter)
- (IQT Labs/Lab 41) CypherCat (archive-only) [[GitHub]](https://github.com/Lab41/cyphercat)
- (IBM) Adversarial Robustness Toolbox (ART) [[GitHub]](https://github.com/Trusted-AI/adversarial-robustness-toolbox): ships a built-in model inversion attack (MIFace); see the sketch below.
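
As a quick start with the tools above, here is a minimal sketch (not a verified, version-pinned recipe) of running ART's built-in MIFace model inversion attack against a toy PyTorch classifier. The toy model, shapes, and hyperparameters are placeholders, and argument names may differ slightly across ART releases.

```python
# Hedged sketch: invert a toy classifier with ART's MIFace attack.
# Assumes a recent adversarial-robustness-toolbox (ART) install.
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.inference.model_inversion import MIFace

# Stand-in model: replace with your own trained network.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Ask MIFace to reconstruct one representative input per class label.
attack = MIFace(classifier, max_iter=1000, learning_rate=0.1)
reconstructions = attack.infer(None, y=np.arange(10))
print(reconstructions.shape)  # roughly (10, 1, 28, 28) for this setup
```

MIFace follows the Fredrikson-style confidence-maximisation formulation, so it is most informative on models whose classes correspond to individual identities (for example, face recognition).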
## Attacks against synthetic data
- [arXiv 2023] A Linear Reconstruction Approach for Attribute Inference Attacks against Synthetic Data [[paper]](https://arxiv.org/pdf/2301.10053) [[code]](https://github.com/synthetic-society/recon-synth)
- [USENIX Security 2022] Synthetic Data - Anonymisation Groundhog Day [[paper]](https://www.usenix.org/system/files/sec22summer_stadler.pdf) [[code]](https://github.com/spring-epfl/synthetic_data_release)
## Others
- [Slides 2019] Model Inversion Attacks Against Collaborative Inference [[slides]](https://www.acsac.org/2019/program/final/1/167.pdf)
- [Blog 2020] Uncovering a model's secrets [[blog1]](https://gab41.lab41.org/uncovering-a-models-secrets-model-inversion-part-i-ce460eab93d6) [[blog2]](https://gab41.lab41.org/robust-or-private-model-inversion-part-ii-94d54fd8d4a5)
- [Blog 2020] Attacks against Machine Learning Privacy (Part 1): Model Inversion Attacks with the IBM-ART Framework [[blog]](https://franziska-boenisch.de/posts/2020/12/model-inversion/)
- [Slides 2020] ML and DP [[slides]](https://www.cs.toronto.edu/~toni/Courses/Fairness/Lectures/ML-and-DP-v2.pdf)
## Related repositories
- awesome-ml-privacy-attacks [[repo]](https://github.com/stratosphereips/awesome-ml-privacy-attacks#reconstruction)
## Star History