Update README.md

Author: Xiong PENG
Date: 2023-07-16 23:46:44 +08:00
Committed by: GitHub
Parent: c31ceacd78
Commit: 4626e74016

@@ -140,27 +140,22 @@ Philosophical Transactions of the Royal Society A 2018. Algorithms that remember
| 2019 | Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment | black-box | CCS | [Paper](https://dl.acm.org/doi/pdf/10.1145/3319535.3354261?casa_token=J81Ps-ZWXHkAAAAA:FYnXo7DQoHpdhqns8x2TclKFeHpAQlXVxMBW2hTrhJ5c20XKdsounqdT1Viw1g6Xsu9FtKj85elxQaA) | [Code](https://github.com/zhangzp9970/TB-MIA) | - |
| 2019 | GAMIN: An Adversarial Approach to Black-Box Model Inversion | black-box | arXiv | [Paper](https://arxiv.org/pdf/1909.11835.pdf) | - | - |
| 2020 | The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks | white-box | CVPR | [Paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhang_The_Secret_Revealer_Generative_Model-Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2020_paper.pdf) | [Code](https://github.com/AI-secure/GMI-Attack) | [Video](https://www.youtube.com/watch?v=_g-oXYMhz4M) |
-| 2020 | Overlearning Reveals Sensitive Attributes | - | ICLR | [Paper](https://arxiv.org/pdf/1905.11742.pdf) | - | - |
-| 2020 | Deep Face Recognizer Privacy Attack: Model Inversion Initialization by a Deep Generative Adversarial Data Space Discriminator | - | APSIPA ASC | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9306253&casa_token=AWugOvIe0I0AAAAA:9wICCkMcfoljMqooM-lgl8m-6F6-cEl-ClHgNkE1SV8mZwqvBIaJ1HDjT1RWLyBz_P7tdB51jQVL&tag=1) | - | - |
+| 2020 | Overlearning Reveals Sensitive Attributes | white-box | ICLR | [Paper](https://arxiv.org/pdf/1905.11742.pdf) | - | - |
+| 2020 | Deep Face Recognizer Privacy Attack: Model Inversion Initialization by a Deep Generative Adversarial Data Space Discriminator | white-box | APSIPA ASC | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9306253&casa_token=AWugOvIe0I0AAAAA:9wICCkMcfoljMqooM-lgl8m-6F6-cEl-ClHgNkE1SV8mZwqvBIaJ1HDjT1RWLyBz_P7tdB51jQVL&tag=1) | - | - |
| 2020 | Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning | - | USENIX Security | [Paper](https://www.usenix.org/system/files/sec20-salem.pdf) | - | - |
| 2020 | Attacking and Protecting Data Privacy in Edge-Cloud Collaborative Inference Systems | - | IoTJ | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9187880) | [Code](https://github.com/zechenghe/Inverse_Collaborative_Inference) | - |
| 2020 | Black-Box Face Recovery from Identity Features | - | ECCV Workshop | [Paper](https://arxiv.org/pdf/2007.13635.pdf) | - | - |
| 2020 | Defending model inversion and membership inference attacks via prediction purification | - | arXiv | [Paper](https://arxiv.org/pdf/2005.03915.pdf) | - | - |
-| 2020 | Generative model-inversion attacks against deep neural networks | - | CVPR | [Paper](https://arxiv.org/pdf/1911.07135.pdf) | [Code](https://github.com/AI-secure/GMI-Attack) | - |
| 2020 | Privacy Preserving Facial Recognition Against Model Inversion Attacks | - | Globecom | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9322508) | - | - |
| 2020 | Broadening Differential Privacy for Deep Learning Against Model Inversion Attacks | - | Big Data | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9378274) | - | - |
| 2021 | Contrastive Model Inversion for Data-Free Knowledge Distillation | - | IJCAI | [Paper](https://www.ijcai.org/proceedings/2021/0327.pdf) | [Code](https://github.com/zju-vipa/CMI) | - |
-| 2021 | Membership Leakage in Label-Only Exposures | - | CCS | [Paper](https://yangzhangalmo.github.io/papers/CCS21-Label.pdf) | [Code](https://github.com/zhenglisec/decision-based-mia) | - |
-| 2021 | Black-box adversarial attacks on commercial speech platforms with minimal information | - | CCS | [Paper](https://dl.acm.org/doi/pdf/10.1145/3460120.3485383) | - | - |
| 2021 | Unleashing the tiger: Inference attacks on split learning | - | CCS | [Paper](https://dl.acm.org/doi/pdf/10.1145/3460120.3485259) | [Code](https://github.com/pasquini-dario/SplitNN_FSHA) | - |
| 2021 | See Through Gradients: Image Batch Recovery via GradInversion | - | CVPR | [Paper](http://openaccess.thecvf.com/content/CVPR2021/papers/Yin_See_Through_Gradients_Image_Batch_Recovery_via_GradInversion_CVPR_2021_paper.pdf) | - | - |
| 2021 | Soteria: Provable defense against privacy leakage in federated learning from representation perspective | - | CVPR | [Paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Sun_Soteria_Provable_Defense_Against_Privacy_Leakage_in_Federated_Learning_From_CVPR_2021_paper.pdf) | [Code](https://github.com/jeremy313/Soteria) | - |
| 2021 | IMAGINE: Image Synthesis by Image-Guided Model Inversion | - | CVPR | [Paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_IMAGINE_Image_Synthesis_by_Image-Guided_Model_Inversion_CVPR_2021_paper.pdf) | - | - |
-| 2021 | Variational Model Inversion Attacks | - | NeurIPS | [Paper](https://proceedings.neurips.cc/paper/2021/file/50a074e6a8da4662ae0a29edde722179-Paper.pdf) | [Code](https://github.com/wangkua1/vmi) | - |
-| 2021 | Exploiting Explanations for Model Inversion Attacks | - | ICCV | [Paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Zhao_Exploiting_Explanations_for_Model_Inversion_Attacks_ICCV_2021_paper.pdf) | - | - |
-| 2021 | Knowledge-Enriched Distributional Model Inversion Attacks | - | ICCV | [Paper](https://arxiv.org/pdf/2010.04092.pdf) | [Code](https://github.com/SCccc21/Knowledge-Enriched-DMI) | - |
-| 2021 | Improving Robustness to Model Inversion Attacks via Mutual Information Regularization | - | AAAI | [Paper](https://arxiv.org/pdf/2009.05241.pdf) | - | - |
-| 2021 | Label-Only Membership Inference Attack | - | ICML | [Paper](http://proceedings.mlr.press/v139/choquette-choo21a/choquette-choo21a.pdf) | [Code](https://github.com/cchoquette/membership-inference) | - |
+| 2021 | Variational Model Inversion Attacks | white-box | NeurIPS | [Paper](https://proceedings.neurips.cc/paper/2021/file/50a074e6a8da4662ae0a29edde722179-Paper.pdf) | [Code](https://github.com/wangkua1/vmi) | - |
+| 2021 | Exploiting Explanations for Model Inversion Attacks | white-box | ICCV | [Paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Zhao_Exploiting_Explanations_for_Model_Inversion_Attacks_ICCV_2021_paper.pdf) | - | - |
+| 2021 | Knowledge-Enriched Distributional Model Inversion Attacks | white-box | ICCV | [Paper](https://arxiv.org/pdf/2010.04092.pdf) | [Code](https://github.com/SCccc21/Knowledge-Enriched-DMI) | - |
+| 2021 | Improving Robustness to Model Inversion Attacks via Mutual Information Regularization | white-box (defense) | AAAI | [Paper](https://arxiv.org/pdf/2009.05241.pdf) | - | - |
-| 2021 | When Does Data Augmentation Help With Membership Inference Attacks? | - | ICML | [Paper](http://proceedings.mlr.press/v139/kaya21a/kaya21a.pdf) | - | - |
| 2021 | Practical Defences Against Model Inversion Attacks for Split Neural Networks | - | ICLR Workshop | [Paper](https://arxiv.org/pdf/2104.05743.pdf) | [Code](https://github.com/TTitcombe/Model-Inversion-SplitNN) | - |
| 2021 | Feature Inference Attack on Model Predictions in Vertical Federated Learning | - | ICDE | [Paper](https://arxiv.org/pdf/2010.10152) | [Code](https://github.com/xj231/featureinference-vfl) | - |
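
A note on the threat-model column: white-box attacks assume the adversary holds the target model's parameters and can backpropagate through them, while black-box attacks only get query access. Below is a minimal, purely illustrative PyTorch sketch of the white-box setting; the toy `target_model`, image shape, and hyperparameters are stand-ins for demonstration, not the method of any paper above. GAN-based attacks in the table (e.g., GMI, KED-MI, VMI) instead optimize the latent code of a generator trained on public data, which regularizes reconstructions toward realistic images.

```python
# Illustrative sketch only (not from any paper above): the canonical
# white-box model inversion loop, assuming full gradient access.
import torch
import torch.nn as nn

# Toy stand-in for the victim classifier; a real attack targets a trained model.
target_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
target_model.eval()

def invert(model: nn.Module, target_class: int,
           steps: int = 500, lr: float = 0.1) -> torch.Tensor:
    """Gradient-ascend a synthetic input toward high confidence on target_class."""
    x = torch.zeros(1, 1, 28, 28, requires_grad=True)  # blank starting image
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = -model(x)[0, target_class]  # maximize the target logit
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)  # keep the reconstruction in a valid pixel range
    return x.detach()

reconstruction = invert(target_model, target_class=3)
print(reconstruction.shape)  # torch.Size([1, 1, 28, 28])
```
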
@@ -170,7 +165,7 @@ Philosophical Transactions of the Royal Society A 2018. Algorithms that remember
| 2021 | Practical Black Box Model Inversion Attacks Against Neural Nets | - | ECML PKDD | [Paper](https://link.springer.com/content/pdf/10.1007/978-3-030-93733-1.pdf?pdf=button) | - | - |
| 2021 | Model Inversion Attack against a Face Recognition System in a Black-Box Setting | - | APSIPA | [Paper](http://www.apsipa.org/proceedings/2021/pdfs/0001800.pdf) | - | - |
| 2022 | Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks | - | ICML | [Paper](https://arxiv.org/pdf/2201.12179.pdf) | [Code](https://github.com/LukasStruppek/Plug-and-Play-Attacks) | - |
-| 2022 | Label-Only Model Inversion Attacks via Boundary Repulsion | - | CVPR | [Paper](https://arxiv.org/pdf/2203.01925.pdf) | [Code](https://github.com/m-kahla/Label-Only-Model-Inversion-Attacks-via-Boundary-Repulsion) | - |
+| 2022 | Label-Only Model Inversion Attacks via Boundary Repulsion | black-box | CVPR | [Paper](https://arxiv.org/pdf/2203.01925.pdf) | [Code](https://github.com/m-kahla/Label-Only-Model-Inversion-Attacks-via-Boundary-Repulsion) | - |
| 2022 | ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning | - | CVPR | [Paper](https://openaccess.thecvf.com/content/CVPR2022/html/Li_ResSFL_A_Resistance_Transfer_Framework_for_Defending_Model_Inversion_Attack_CVPR_2022_paper.html) | [Code](https://github.com/zlijingtao/ResSFL) | - |
| 2022 | Bilateral Dependency Optimization: Defending Against Model-inversion Attacks | - | KDD | [Paper](https://arxiv.org/pdf/2206.05483.pdf) | [Code](https://github.com/xpeng9719/Defend_MI) | - |
| 2022 | ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models | - | USENIX Security | [Paper](https://www.usenix.org/system/files/sec22summer_liu-yugeng.pdf) | [Code](https://github.com/liuyugeng/ML-Doctor) | - |
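
In the label-only black-box setting (e.g., Boundary Repulsion above), the attacker sees only the top-1 label per query, so gradient ascent is unavailable. The sketch below is a toy random-search illustration of that query interface, with assumed shapes and constants; it is not the BREP-MI algorithm from the CVPR 2022 paper, which searches a GAN's latent space and estimates update directions from decision-boundary queries.

```python
# Illustrative sketch only: the label-only ("black-box") setting, where the
# attacker sees nothing but the predicted class for each query.
import torch
import torch.nn as nn

target_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy victim
target_model.eval()

def query_label(model: nn.Module, x: torch.Tensor) -> int:
    """The attacker's only oracle: top-1 label, no logits, no gradients."""
    with torch.no_grad():
        return int(model(x).argmax(dim=1).item())

def label_only_invert(model: nn.Module, target_class: int,
                      restarts: int = 200, steps: int = 1000,
                      step_size: float = 0.05) -> torch.Tensor:
    # Phase 1: sample random inputs until one lands in the target class.
    x = torch.rand(1, 1, 28, 28)
    for _ in range(restarts):
        if query_label(model, x) == target_class:
            break
        x = torch.rand(1, 1, 28, 28)
    # Phase 2: random-walk refinement that never leaves the target region.
    for _ in range(steps):
        candidate = (x + step_size * torch.randn_like(x)).clamp(0.0, 1.0)
        if query_label(model, candidate) == target_class:
            x = candidate
    return x

reconstruction = label_only_invert(target_model, target_class=3)
print(query_label(target_model, reconstruction))  # should print 3
```
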