commit c1b2613ef3 (parent c8123ba2e4)
Author: Xiong PENG (committed by GitHub)
Date: 2023-08-21 20:44:33 +08:00

Update README.md

@@ -173,7 +173,7 @@ Philosophical Transactions of the Royal Society A 2018. Algorithms that remember
| 2022 | MIRROR: Model Inversion for Deep Learning Network with High Fidelity | white-box | NDSS | [Paper](https://www.cs.purdue.edu/homes/an93/static/papers/ndss2022_model_inversion.pdf) | [code](https://github.com/njuaplusplus/mirror) |
| 2022 | Reconstructing Training Data with Informed Adversaries | white-box | SP | [Paper](https://arxiv.org/abs/2201.04845) | |
| 2022 | Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks | white-box | BMVC | [Paper](https://arxiv.org/abs/2107.06304) | |
-| 2022 | Reconstructing Training Data from Trained Neural Networks | white-box | NIPS | [Paper](https://arxiv.org/abs/2206.07758) | |
+| 2022 | Reconstructing Training Data from Trained Neural Networks | white-box | NeurIPS | [Paper](https://arxiv.org/abs/2206.07758) | |
| 2023 | Sparse Black-Box Inversion Attack with Limited Information | black-box | ICASSP | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10095514) | [code](https://github.com/Tencent/TFace/tree/master/recognition) |
| 2023 | Breaching FedMD: Image Recovery via Paired-Logits Inversion Attack | black-box | CVPR | [Paper](https://arxiv.org/pdf/2304.11436.pdf) | [code](https://github.com/FLAIR-THU/PairedLogitsInversion) |
| 2023 | Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network | white-box | AAAI | [Paper](https://arxiv.org/pdf/2302.09814.pdf) | [code](https://github.com/lethesec/plg-mi-attack) |
@@ -194,8 +194,8 @@ Philosophical Transactions of the Royal Society A 2018. Algorithms that remember
| xxx 2021 | Soteria: Provable defense against privacy leakage in federated learning from representation perspective | - | CVPR | [Paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Sun_Soteria_Provable_Defense_Against_Privacy_Leakage_in_Federated_Learning_From_CVPR_2021_paper.pdf) | [Code](https://github.com/jeremy313/Soteria) | - |
| xxx 2019 | The secret sharer: Evaluating and testing unintended memorization in neural networks | white-box | USENIX | [Paper](https://www.usenix.org/system/files/sec19-carlini.pdf) | |
-| xxx 2019 | Deep leakage from gradients | white-box | NIPS | [Paper](https://papers.nips.cc/paper/9617-deep-leakage-from-gradients.pdf) | [code](https://github.com/mit-han-lab/dlg) |
-| xxx 2020 | Inverting Gradients - How easy is it to break privacy in federated learning? | white-box | NIPS | [Paper](https://arxiv.org/abs/2003.14053) | |
+| xxx 2019 | Deep leakage from gradients | white-box | NeurIPS | [Paper](https://papers.nips.cc/paper/9617-deep-leakage-from-gradients.pdf) | [code](https://github.com/mit-han-lab/dlg) |
+| xxx 2020 | Inverting Gradients - How easy is it to break privacy in federated learning? | white-box | NeurIPS | [Paper](https://arxiv.org/abs/2003.14053) | |
| xxx 2018 | Reconstruction of training samples from loss functions | - | arXiv | [Paper](https://arxiv.org/pdf/1805.07337.pdf) | |
| xxx 2020 | A Framework for Evaluating Gradient Leakage Attacks in Federated Learning | white-box | CoRR | [Paper](https://arxiv.org/pdf/2004.10397.pdf) | |
| xxx 2017 | Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning | both | CCS | [Paper](https://arxiv.org/pdf/1702.07464.pdf) | |
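For readers new to this table, the venue fix above touches "Deep leakage from gradients" (Zhu et al., NeurIPS 2019), whose core technique is gradient matching: the attacker optimizes a dummy input and a soft label until the gradient they induce on the shared model matches the gradient observed from the victim. Below is a minimal PyTorch sketch of that loop under a hypothetical toy model and input shape; the authors' full implementation is the [code](https://github.com/mit-han-lab/dlg) linked in the table.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy victim classifier (hypothetical; any differentiable model works).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))

# Gradient the attacker observes, e.g. a federated-learning update.
x_true = torch.randn(1, 3, 32, 32)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(
    F.cross_entropy(model(x_true), y_true), model.parameters()
)

# Dummy input and soft label, optimized until their gradient matches.
x_dummy = torch.randn(1, 3, 32, 32, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    # Soft-label cross-entropy keeps the label itself differentiable.
    dummy_loss = torch.mean(torch.sum(
        -F.softmax(y_dummy, dim=-1) * F.log_softmax(model(x_dummy), dim=-1), dim=-1
    ))
    dummy_grads = torch.autograd.grad(
        dummy_loss, model.parameters(), create_graph=True
    )
    # Attack objective: L2 distance between dummy and observed gradients.
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    opt.step(closure)
# x_dummy now approximates the victim's private sample x_true.
```

The same gradient-matching objective, with a different regularizer and cosine instead of L2 distance, underlies "Inverting Gradients" (NeurIPS 2020) in the row below it.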