Mirror of https://github.com/AndrewZhou924/Awesome-model-inversion-attack.git, synced 2026-01-25 22:11:26 +00:00
Update README.md
@@ -134,7 +134,8 @@ Philosophical Transactions of the Royal Society A 2018. Algorithms that remember
| 2016 | A Methodology for Formalizing Model-Inversion Attacks | black & white-box | CSF | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7536387&casa_token=ClIVAMYo6dcAAAAA:u75HHyFHj5lBRec9h5SqOZyAsL2dICcWIuQPCj6ltk8McREFCaM4ex42mv3S-oNPiGJLDfUqg0qL) | - |
| 2017 | Machine Learning Models that Remember Too Much | white-box | CCS | [Paper](https://arxiv.org/pdf/1709.07886.pdf) | [Code](https://github.com/csong27/ml-model-remember) |
| 2017 | Model inversion attacks for prediction systems: Without knowledge of non-sensitive attributes | white-box | PST | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8476925) | - |
| 2018 | Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting | white-box | CSF | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8429311) | - |
| 2019 | An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack | white-box | arXiv | [Paper](https://ieeexplore.ieee.org/document/8822435) | - |
| 2019 | MLPrivacyGuard: Defeating Confidence Information based Model Inversion Attacks on Machine Learning Systems | black-box (defense) | GLSVLSI | [Paper](https://www.researchgate.net/profile/Tiago-Alves-13/publication/333136362_MLPrivacyGuard_Defeating_Confidence_Information_based_Model_Inversion_Attacks_on_Machine_Learning_Systems/links/5cddb94d92851c4eaba682d7/MLPrivacyGuard-Defeating-Confidence-Information-based-Model-Inversion-Attacks-on-Machine-Learning-Systems.pdf) | - |
| 2019 | Model inversion attacks against collaborative inference | black & white-box (collaborative inference) | ACSAC | [Paper](http://palms.ee.princeton.edu/system/files/Model+Inversion+Attack+against+Collaborative+Inference.pdf) | - |
| 2019 | Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment | black-box | CCS | [Paper](https://dl.acm.org/doi/pdf/10.1145/3319535.3354261?casa_token=J81Ps-ZWXHkAAAAA:FYnXo7DQoHpdhqns8x2TclKFeHpAQlXVxMBW2hTrhJ5c20XKdsounqdT1Viw1g6Xsu9FtKj85elxQaA) | [Code](https://github.com/zhangzp9970/TB-MIA) |
@@ -145,32 +146,35 @@ Philosophical Transactions of the Royal Society A 2018. Algorithms that remember
| 2020 | Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning | black-box | USENIX Security | [Paper](https://www.usenix.org/system/files/sec20-salem.pdf) | - |
| 2020 | Attacking and Protecting Data Privacy in Edge-Cloud Collaborative Inference Systems | (collaborative inference) | IoT-J | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9187880) | [Code](https://github.com/zechenghe/Inverse_Collaborative_Inference) |
| 2020 | Black-Box Face Recovery from Identity Features | black-box | ECCV Workshop | [Paper](https://arxiv.org/pdf/2007.13635.pdf) | - |
| 2020 | MixCon: Adjusting the Separability of Data Representations for Harder Data Recovery | white-box | arXiv | [Paper](https://arxiv.org/abs/2010.11463) | - |
| 2020 | Privacy Preserving Facial Recognition Against Model Inversion Attacks | white-box (defense) | Globecom | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9322508) | - |
| 2020 | Broadening Differential Privacy for Deep Learning Against Model Inversion Attacks | white-box (defense) | Big Data | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9378274) | - |
| 2021 | Unleashing the tiger: Inference attacks on split learning | - | CCS | [Paper](https://dl.acm.org/doi/pdf/10.1145/3460120.3485259) | [Code](https://github.com/pasquini-dario/SplitNN_FSHA) |
| 2021 | Soteria: Provable defense against privacy leakage in federated learning from representation perspective | - | CVPR | [Paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Sun_Soteria_Provable_Defense_Against_Privacy_Leakage_in_Federated_Learning_From_CVPR_2021_paper.pdf) | [Code](https://github.com/jeremy313/Soteria) |
| 2020 | Evaluation Indicator for Model Inversion Attack | metric | AdvML | [Paper](https://drive.google.com/file/d/1rl77BGtGHzZ8obWUEOoqunXCjgvpzE8d/view) | - |
| 2021 | Variational Model Inversion Attacks | white-box | NeurIPS | [Paper](https://proceedings.neurips.cc/paper/2021/file/50a074e6a8da4662ae0a29edde722179-Paper.pdf) | [Code](https://github.com/wangkua1/vmi) |
| 2021 | Exploiting Explanations for Model Inversion Attacks | white-box | ICCV | [Paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Zhao_Exploiting_Explanations_for_Model_Inversion_Attacks_ICCV_2021_paper.pdf) | - |
| 2021 | Knowledge-Enriched Distributional Model Inversion Attacks | white-box | ICCV | [Paper](https://arxiv.org/pdf/2010.04092.pdf) | [Code](https://github.com/SCccc21/Knowledge-Enriched-DMI) |
| 2021 | Improving Robustness to Model Inversion Attacks via Mutual Information Regularization | white-box (defense) | AAAI | [Paper](https://arxiv.org/pdf/2009.05241.pdf) | - |
| 2021 | Practical Defences Against Model Inversion Attacks for Split Neural Networks | black-box (defense, collaborative inference) | ICLR Workshop | [Paper](https://arxiv.org/pdf/2104.05743.pdf) | [Code](https://github.com/TTitcombe/Model-Inversion-SplitNN) |
| 2021 | Feature inference attack on model predictions in vertical federated learning | white-box (VFL) | ICDE | [Paper](https://arxiv.org/pdf/2010.10152) | [Code](https://github.com/xj231/featureinference-vfl) |
| 2021 | PRID: Model Inversion Privacy Attacks in Hyperdimensional Learning Systems | black-box (both, collaborative inference) | DAC | [Paper](https://dl.acm.org/doi/abs/10.1109/DAC18074.2021.9586217) | - |
| 2021 | Defending Against Model Inversion Attack by Adversarial Examples | black-box (defense) | CSR Workshops | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9527945) | - |
| 2021 | Practical Black Box Model Inversion Attacks Against Neural Nets | black-box | ECML PKDD | [Paper](https://link.springer.com/chapter/10.1007/978-3-030-93733-1_3) | - |
| 2021 | Model Inversion Attack against a Face Recognition System in a Black-Box Setting | black-box | APSIPA | [Paper](http://www.apsipa.org/proceedings/2021/pdfs/0001800.pdf) | - |
| 2022 | Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks | white-box | ICML | [Paper](https://arxiv.org/pdf/2201.12179.pdf) | [Code](https://github.com/LukasStruppek/Plug-and-Play-Attacks) |
| 2022 | Label-Only Model Inversion Attacks via Boundary Repulsion | black-box | CVPR | [Paper](https://arxiv.org/pdf/2203.01925.pdf) | [Code](https://github.com/m-kahla/Label-Only-Model-Inversion-Attacks-via-Boundary-Repulsion) |
| 2022 | ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning | white-box (defense, SFL) | CVPR | [Paper](https://openaccess.thecvf.com/content/CVPR2022/html/Li_ResSFL_A_Resistance_Transfer_Framework_for_Defending_Model_Inversion_Attack_CVPR_2022_paper.html) | [Code](https://github.com/zlijingtao/ResSFL) |
| 2022 | Bilateral Dependency Optimization: Defending Against Model-inversion Attacks | white-box (defense) | KDD | [Paper](https://arxiv.org/pdf/2206.05483.pdf) | [Code](https://github.com/xpeng9719/Defend_MI) |
| 2022 | ML-DOCTOR: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models | holistic risk assessment | USENIX Security | [Paper](https://www.usenix.org/system/files/sec22summer_liu-yugeng.pdf) | [Code](https://github.com/liuyugeng/ML-Doctor) |
| 2022 | Model Inversion Attack by Integration of Deep Generative Models: Privacy-Sensitive Face Generation From a Face Recognition System | white-box | TIFS | [Paper](https://dl.acm.org/doi/abs/10.1109/TIFS.2022.3140687) | - |
| 2022 | Defending against Reconstruction Attacks through Differentially Private Federated Learning for Classification of Heterogeneous Chest X-Ray Data | - | arXiv | [Paper](https://arxiv.org/pdf/2205.03168.pdf) | - |
| 2022 | One Parameter Defense—Defending Against Data Inference Attacks via Differential Privacy | black-box | TIFS | [Paper](https://arxiv.org/pdf/2203.06580.pdf) | - |
| 2022 | Reconstructing Training Data from Diverse ML Models by Ensemble Inversion | white-box | WACV | [Paper](https://arxiv.org/pdf/2111.03702.pdf) | - |
| 2022 | SecretGen: Privacy Recovery on Pre-trained Models via Distribution Discrimination | white-box | ECCV | [Paper](https://arxiv.org/pdf/2207.12263.pdf) | - |
| 2022 | UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning | - | WPES | [Paper](https://arxiv.org/pdf/2108.09033.pdf) | [Code](https://github.com/ege-erdogan/unsplit) |
| 2022 | MIRROR: Model Inversion for Deep Learning Network with High Fidelity | white-box | NDSS | [Paper](https://www.cs.purdue.edu/homes/an93/static/papers/ndss2022_model_inversion.pdf) | [Code](https://github.com/njuaplusplus/mirror) |
| 2022 | Reconstructing Training Data with Informed Adversaries | white-box | SP | [Paper](https://arxiv.org/abs/2201.04845) | - |
| 2022 | Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks | white-box | BMVC | [Paper](https://arxiv.org/abs/2107.06304) | - |
| 2022 | Reconstructing Training Data from Trained Neural Networks | white-box | NeurIPS | [Paper](https://arxiv.org/abs/2206.07758) | - |
| 2023 | Sparse Black-Box Inversion Attack with Limited Information | black-box | ICASSP | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10095514) | [Code](https://github.com/Tencent/TFace/tree/master/recognition) |
| 2023 | Breaching FedMD: Image Recovery via Paired-Logits Inversion Attack | black-box | CVPR | [Paper](https://arxiv.org/pdf/2304.11436.pdf) | [Code](https://github.com/FLAIR-THU/PairedLogitsInversion) |
| 2023 | Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network | white-box | AAAI | [Paper](https://arxiv.org/pdf/2302.09814.pdf) | [Code](https://github.com/lethesec/plg-mi-attack) |
@@ -185,19 +189,20 @@ Philosophical Transactions of the Royal Society A 2018. Algorithms that remember
### TODO

| Year | Title | Adversarial Knowledge | Venue | Paper Link | Code Link |
| :---: | --- | --- | --- | --- | --- |
| xxx 2021 | Unleashing the tiger: Inference attacks on split learning | - | CCS | [Paper](https://dl.acm.org/doi/pdf/10.1145/3460120.3485259) | [Code](https://github.com/pasquini-dario/SplitNN_FSHA) |
| xxx 2021 | Soteria: Provable defense against privacy leakage in federated learning from representation perspective | - | CVPR | [Paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Sun_Soteria_Provable_Defense_Against_Privacy_Leakage_in_Federated_Learning_From_CVPR_2021_paper.pdf) | [Code](https://github.com/jeremy313/Soteria) |
| xxx 2019 | The secret sharer: Evaluating and testing unintended memorization in neural networks | white-box | USENIX | [Paper](https://www.usenix.org/system/files/sec19-carlini.pdf) | - |
| xxx 2019 | Deep leakage from gradients | white-box | NeurIPS | [Paper](https://papers.nips.cc/paper/9617-deep-leakage-from-gradients.pdf) | [Code](https://github.com/mit-han-lab/dlg) |
| xxx 2020 | Inverting Gradients - How easy is it to break privacy in federated learning? | white-box | NeurIPS | [Paper](https://arxiv.org/abs/2003.14053) | - |
| xxx 2018 | Reconstruction of training samples from loss functions | - | arXiv | [Paper](https://arxiv.org/pdf/1805.07337.pdf) | - |
| vvv 2022 | Reconstructing Training Data from Trained Neural Networks | white-box | NeurIPS | [Paper](https://arxiv.org/abs/2206.07758) | - |
| xxx 2020 | A Framework for Evaluating Gradient Leakage Attacks in Federated Learning | white-box | CoRR | [Paper](https://arxiv.org/pdf/2004.10397.pdf) | - |
| xxx 2017 | Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning | both | CCS | [Paper](https://arxiv.org/pdf/1702.07464.pdf) | - |
| xxx 2019 | Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning | white-box | IEEE INFOCOM | [Paper](https://arxiv.org/pdf/1812.00535.pdf) | - |
| vvv 2020 | Evaluation Indicator for Model Inversion Attack | metric | AdvML | [Paper](https://drive.google.com/file/d/1rl77BGtGHzZ8obWUEOoqunXCjgvpzE8d/view) | - |
| vvv 2019 | An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack | white-box | arXiv | [Paper](https://ieeexplore.ieee.org/document/8822435) | - |
| xxx 2020 | SAPAG: A Self-Adaptive Privacy Attack From Gradients | white-box | arXiv | [Paper](https://arxiv.org/abs/2009.06228) | - |
| xxx 2022 | Exploring the Security Boundary of Data Reconstruction via Neuron Exclusivity Analysis | white-box | USENIX | [Paper](https://arxiv.org/abs/2010.13356) | - |
| vvv 2020 | MixCon: Adjusting the Separability of Data Representations for Harder Data Recovery | white-box | arXiv | [Paper](https://arxiv.org/abs/2010.11463) | - |
| xxx 2020 | Evaluation of Inference Attack Models for Deep Learning on Medical Data | black-box | arXiv | [Paper](https://arxiv.org/abs/2011.00177) | - |
| xxx 2020 | FaceLeaks: Inference Attacks against Transfer Learning Models via Black-box Queries | black-box | arXiv | [Paper](https://arxiv.org/abs/2010.14023) | - |
| xxx | Derivation of Constraints from Machine Learning Models and Applications to Security and Privacy | Theory | A | [Paper](https://hal.archives-ouvertes.fr/hal-03091740/) | - |
@@ -205,8 +210,7 @@ Philosophical Transactions of the Royal Society A 2018. Algorithms that remember
| xxx 2021 | R-GAP: Recursive Gradient Attack on Privacy | white-box | ICLR | [Paper](https://arxiv.org/abs/2010.07733) | - |
| xxx 2021 | PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage | white-box | arXiv | [Paper](https://arxiv.org/abs/2108.04725) | - |
| xxx 2021 | On the Importance of Encrypting Deep Features | black-box | arXiv | [Paper](https://arxiv.org/abs/2108.07147) | - |
| vvv 2022 | Reconstructing Training Data with Informed Adversaries | white-box | SP | [Paper](https://arxiv.org/abs/2201.04845) | - |
| vvv 2022 | Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks | white-box | BMVC | [Paper](https://arxiv.org/abs/2107.06304) | - |

-->

<!-- USENIX Security 2014 - Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing.