Update README.md

Xiong PENG
2023-07-17 19:49:29 +08:00
committed by GitHub
parent 4626e74016
commit 12e9b41701


@@ -142,40 +142,34 @@ Philosophical Transactions of the Royal Society A 2018. Algorithms that remember
| 2020 | The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks | white-box | CVPR | [Paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhang_The_Secret_Revealer_Generative_Model-Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2020_paper.pdf) | [Code](https://github.com/AI-secure/GMI-Attack) | [Video](https://www.youtube.com/watch?v=_g-oXYMhz4M) |
| 2020 | Overlearning Reveals Sensitive Attributes | white-box | ICLR | [Paper](https://arxiv.org/pdf/1905.11742.pdf) | - | - |
| 2020 | Deep Face Recognizer Privacy Attack: Model Inversion Initialization by a Deep Generative Adversarial Data Space Discriminator | white-box | APSIPA ASC | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9306253&casa_token=AWugOvIe0I0AAAAA:9wICCkMcfoljMqooM-lgl8m-6F6-cEl-ClHgNkE1SV8mZwqvBIaJ1HDjT1RWLyBz_P7tdB51jQVL&tag=1) | - | - |
| 2020 | Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning | black-box | USENIX Security | [Paper](https://www.usenix.org/system/files/sec20-salem.pdf) | - | - |
| 2020 | Attacking and Protecting Data Privacy in Edge-Cloud Collaborative Inference Systems | (collaborative inference) | IoT-J | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9187880) | [Code](https://github.com/zechenghe/Inverse_Collaborative_Inference) | - |
| 2020 | Black-Box Face Recovery from Identity Features | black-box | ECCV Workshop | [Paper](https://arxiv.org/pdf/2007.13635.pdf) | - | - |
| 2020 | Privacy Preserving Facial Recognition Against Model Inversion Attacks | white-box (defense) | Globecom | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9322508) | - | - |
| 2020 | Broadening Differential Privacy for Deep Learning Against Model Inversion Attacks | white-box (defense) | Big Data | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9378274) | - | - |
| 2021 | Black-Box Adversarial Attacks on Commercial Speech Platforms with Minimal Information | black-box | CCS | [Paper](https://dl.acm.org/doi/pdf/10.1145/3460120.3485383) | - | - |
| 2021 | Unleashing the Tiger: Inference Attacks on Split Learning | - | CCS | [Paper](https://dl.acm.org/doi/pdf/10.1145/3460120.3485259) | [Code](https://github.com/pasquini-dario/SplitNN_FSHA) | - |
| 2021 | See Through Gradients: Image Batch Recovery via GradInversion | - | CVPR | [Paper](http://openaccess.thecvf.com/content/CVPR2021/papers/Yin_See_Through_Gradients_Image_Batch_Recovery_via_GradInversion_CVPR_2021_paper.pdf) | - | - |
| 2021 | Soteria: Provable Defense Against Privacy Leakage in Federated Learning From Representation Perspective | (defense) | CVPR | [Paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Sun_Soteria_Provable_Defense_Against_Privacy_Leakage_in_Federated_Learning_From_CVPR_2021_paper.pdf) | [Code](https://github.com/jeremy313/Soteria) | - |
| 2021 | Variational Model Inversion Attacks | white-box | NeurIPS | [Paper](https://proceedings.neurips.cc/paper/2021/file/50a074e6a8da4662ae0a29edde722179-Paper.pdf) | [Code](https://github.com/wangkua1/vmi) | - |
| 2021 | Exploiting Explanations for Model Inversion Attacks | white-box | ICCV | [Paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Zhao_Exploiting_Explanations_for_Model_Inversion_Attacks_ICCV_2021_paper.pdf) | - | - |
| 2021 | Knowledge-Enriched Distributional Model Inversion Attacks | white-box | ICCV | [Paper](https://arxiv.org/pdf/2010.04092.pdf) | [Code](https://github.com/SCccc21/Knowledge-Enriched-DMI) | - |
| 2021 | Improving Robustness to Model Inversion Attacks via Mutual Information Regularization | white-box (defense) | AAAI | [Paper](https://arxiv.org/pdf/2009.05241.pdf) | - | - |
| 2021 | When Does Data Augmentation Help With Membership Inference Attacks? | - | ICML | [Paper](http://proceedings.mlr.press/v139/kaya21a/kaya21a.pdf) | - | - |
| 2021 | Practical Defences Against Model Inversion Attacks for Split Neural Networks | (defense) | ICLR Workshop | [Paper](https://arxiv.org/pdf/2104.05743.pdf) | [Code](https://github.com/TTitcombe/Model-Inversion-SplitNN) | - |
| 2021 | Feature Inference Attack on Model Predictions in Vertical Federated Learning | - | ICDE | [Paper](https://arxiv.org/pdf/2010.10152) | [Code](https://github.com/xj231/featureinference-vfl) | - |
| 2021 | PRID: Model Inversion Privacy Attacks in Hyperdimensional Learning Systems | - | DAC | [Paper](https://dl.acm.org/doi/abs/10.1109/DAC18074.2021.9586217) | - | - |
| 2021 | Robustness of On-Device Models: Adversarial Attack to Deep Learning Models on Android Apps | - | ICSE | [Paper](https://arxiv.org/pdf/2101.04401) | - | - |
| 2021 | Defending Against Model Inversion Attack by Adversarial Examples | (defense) | CSR Workshops | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9527945) | - | - |
| 2021 | Practical Black Box Model Inversion Attacks Against Neural Nets | black-box | ECML PKDD | [Paper](https://link.springer.com/content/pdf/10.1007/978-3-030-93733-1.pdf?pdf=button) | - | - |
| 2021 | Model Inversion Attack against a Face Recognition System in a Black-Box Setting | black-box | APSIPA | [Paper](http://www.apsipa.org/proceedings/2021/pdfs/0001800.pdf) | - | - |
| 2022 | Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks | white-box | ICML | [Paper](https://arxiv.org/pdf/2201.12179.pdf) | [Code](https://github.com/LukasStruppek/Plug-and-Play-Attacks) | - |
| 2022 | Label-Only Model Inversion Attacks via Boundary Repulsion | black-box | CVPR | [Paper](https://arxiv.org/pdf/2203.01925.pdf) | [Code](https://github.com/m-kahla/Label-Only-Model-Inversion-Attacks-via-Boundary-Repulsion) | - |
| 2022 | ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning | (defense) | CVPR | [Paper](https://openaccess.thecvf.com/content/CVPR2022/html/Li_ResSFL_A_Resistance_Transfer_Framework_for_Defending_Model_Inversion_Attack_CVPR_2022_paper.html) | [Code](https://github.com/zlijingtao/ResSFL) | - |
| 2022 | Bilateral Dependency Optimization: Defending Against Model-inversion Attacks | white-box (defense) | KDD | [Paper](https://arxiv.org/pdf/2206.05483.pdf) | [Code](https://github.com/xpeng9719/Defend_MI) | - |
| 2022 | ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models | - | USENIX Security | [Paper](https://www.usenix.org/system/files/sec22summer_liu-yugeng.pdf) | [Code](https://github.com/liuyugeng/ML-Doctor) | - |
| 2022 | An Approximate Memory Based Defense Against Model Inversion Attacks to Neural Networks | (defense) | IEEE | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9792582&casa_token=ymT57RhNhGEAAAAA:WsQHMpv77-4uyIp7l-p4hc7_Qmxvn5TeNpS5F7LHBFHyLay2O8Pe5eWqsKN2fu56v98NZsRrqeit) | [Code](https://github.com/katekemu/model_inversion_defense) | - |
| 2022 | Model Inversion Attack by Integration of Deep Generative Models: Privacy-Sensitive Face Generation From a Face Recognition System | - | TIFS | [Paper](https://dl.acm.org/doi/abs/10.1109/TIFS.2022.3140687) | - | - |
| 2022 | Defending Against Reconstruction Attacks Through Differentially Private Federated Learning for Classification of Heterogeneous Chest X-Ray Data | (defense) | arXiv | [Paper](https://arxiv.org/pdf/2205.03168.pdf) | - | - |
| 2022 | One Parameter Defense—Defending Against Data Inference Attacks via Differential Privacy | black-box (defense) | TIFS | [Paper](https://arxiv.org/pdf/2203.06580.pdf) | - | - |
| 2022 | Reconstructing Training Data from Diverse ML Models by Ensemble Inversion | white-box | WACV | [Paper](https://arxiv.org/pdf/2111.03702.pdf) | - | - |
| 2022 | SecretGen: Privacy Recovery on Pre-trained Models via Distribution Discrimination | white-box | ECCV | [Paper](https://arxiv.org/pdf/2207.12263.pdf) | - | - |
| 2022 | UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning | - | WPES | [Paper](https://arxiv.org/pdf/2108.09033.pdf) | [Code](https://github.com/ege-erdogan/unsplit) | - |
| 2022 | MIRROR: Model Inversion for Deep Learning Network with High Fidelity | white-box | NDSS | [Paper](https://www.cs.purdue.edu/homes/an93/static/papers/ndss2022_model_inversion.pdf) | [Code](https://github.com/njuaplusplus/mirror) | - |
| 2023 | Sparse Black-Box Inversion Attack with Limited Information | black-box | ICASSP | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10095514) | [Code](https://github.com/Tencent/TFace/tree/master/recognition) | - |
| 2023 | Breaching FedMD: Image Recovery via Paired-Logits Inversion Attack | black-box | CVPR | [Paper](https://arxiv.org/pdf/2304.11436.pdf) | [Code](https://github.com/FLAIR-THU/PairedLogitsInversion) | - |
@@ -184,6 +178,7 @@ Philosophical Transactions of the Royal Society A 2018. Algorithms that remember
| 2023 | Boosting Model Inversion Attacks with Adversarial Examples | black-box | TDSC | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10148576) | - | - |
| 2023 | Reinforcement Learning-Based Black-Box Model Inversion Attacks | black-box | CVPR | [Paper](https://arxiv.org/pdf/2304.04625.pdf) | [Code](https://github.com/HanGyojin/RLB-MI) | - |
| 2023 | Re-thinking Model Inversion Attacks Against Deep Neural Networks | white-box | CVPR | [Paper](https://arxiv.org/pdf/2304.01669.pdf) | [Code](https://github.com/sutd-visual-computing-group/Re-thinking_MI) | - |
| 2023 | Purifier: Defending Data Inference Attacks via Transforming Confidence Scores | black-box (defense) | AAAI | [Paper](https://arxiv.org/pdf/2005.03915.pdf) | - | - |
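
The Setting column above boils down to whether the attacker can backpropagate through the target model (white-box) or only query its outputs (black-box). As a rough orientation, the snippet below is a minimal sketch of classic white-box gradient-based inversion in the spirit of Fredrikson et al. (CCS 2015); it is not the method of any specific paper listed, and `model`, the input shape, and the hyperparameters are illustrative placeholders.

```python
# Minimal white-box model inversion sketch: optimize an input to maximize
# the target class logit of a trained classifier. Illustrative only.
import torch

def invert_class(model, target_class, shape=(1, 3, 64, 64), steps=500, lr=0.1):
    model.eval()
    x = torch.zeros(shape, requires_grad=True)  # start from a blank input
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(x)[0, target_class]       # maximize target-class logit
        loss.backward()                         # white-box: gradient w.r.t. x
        opt.step()
        with torch.no_grad():
            x.clamp_(0, 1)                      # keep pixels in a valid range
    return x.detach()
```

Black-box attacks in the table replace the gradient step with query-only search, e.g. boundary repulsion or reinforcement-learning-based exploration, while the (defense) entries aim to make such reconstructions uninformative.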
<!--