Update README.md

Xiong PENG
2023-07-15 21:59:04 +08:00
committed by GitHub
parent 8fbfbd76d8
commit b6edaed0fe


@@ -197,33 +197,33 @@ Philosophical Transactions of the Royal Society A 2018. Algorithms that remember
| 2023 | Re-thinking Model Inversion Attacks Against Deep Neural Networks | white-box | CVPR | [Paper](https://arxiv.org/pdf/2304.01669.pdf) | [code](https://github.com/sutd-visual-computing-group/Re-thinking_MI) |
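For intuition, here is a minimal sketch of the basic white-box inversion recipe that the papers above build on: gradient ascent on the model input to maximize the target class's confidence. It assumes a trained PyTorch classifier; the function name `invert_class`, the input shape, and the hyperparameters are illustrative placeholders, not any listed paper's method.

```python
# Hedged sketch of naive white-box model inversion: synthesize an input the
# model is confident belongs to a chosen class. All names/shapes illustrative.
import torch
import torch.nn.functional as F

def invert_class(model, target_class, input_shape=(1, 3, 64, 64),
                 steps=500, lr=0.1):
    """Reconstruct a class-representative input from a white-box classifier."""
    model.eval()
    x = torch.zeros(input_shape, requires_grad=True)  # blank-image starting point
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Minimizing cross-entropy to the target label == maximizing its log-prob.
        loss = F.cross_entropy(model(x), torch.tensor([target_class]))
        loss.backward()
        opt.step()
        x.data.clamp_(0.0, 1.0)  # keep pixels in a valid [0, 1] range
    return x.detach()
```

GAN-based attacks in this list (e.g. the knowledge-enriched distributional attack) improve on this by searching a generator's latent space instead of raw pixel space, which is largely what makes their reconstructions recognizable.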
<!--
### TODO
| Status | Year | Title | Adversarial Knowledge | Venue | Paper Link | Code Link |
| :----: | :--: | --------------------------------------------------------------------------------------------- | --------------------- | ------------------ | ---------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------- |
|  | 2020 | Knowledge-Enriched Distributional Model Inversion Attacks | white-box | arXiv | [Paper](https://arxiv.org/abs/2010.04092) | |
| xxx | 2019 | The secret sharer: Evaluating and testing unintended memorization in neural networks | white-box | USENIX | [Paper](https://www.usenix.org/system/files/sec19-carlini.pdf) | |
| xxx | 2019 | Deep leakage from gradients | white-box | NeurIPS | [Paper](https://papers.nips.cc/paper/9617-deep-leakage-from-gradients.pdf) | [code](https://github.com/mit-han-lab/dlg) |
| xxx | 2020 | Inverting Gradients - How easy is it to break privacy in federated learning? | white-box | NeurIPS | [Paper](https://arxiv.org/abs/2003.14053) | |
| xxx | 2018 | Reconstruction of training samples from loss functions | - | arXiv | [Paper](https://arxiv.org/pdf/1805.07337.pdf) | |
| vvv | 2022 | Reconstructing Training Data from Trained Neural Networks | white-box | NeurIPS | [Paper](https://arxiv.org/abs/2206.07758) | |
| xxx | 2020 | A Framework for Evaluating Gradient Leakage Attacks in Federated Learning | white-box | arXiv | [Paper](https://arxiv.org/pdf/2004.10397.pdf) | |
| xxx | 2017 | Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning | both | CCS | [Paper](https://arxiv.org/pdf/1702.07464.pdf) | |
| xxx | 2019 | Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning | white-box | IEEE INFOCOM | [Paper](https://arxiv.org/pdf/1812.00535.pdf) | |
| vvv | 2020 | Evaluation Indicator for Model Inversion Attack | metric | AdvML | [Paper](https://drive.google.com/file/d/1rl77BGtGHzZ8obWUEOoqunXCjgvpzE8d/view) | |
| vvv | 2019 | An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack | white-box | IEEE Access | [Paper](https://ieeexplore.ieee.org/document/8822435) | |
| xxx | 2020 | SAPAG: A Self-Adaptive Privacy Attack From Gradients | white-box | arXiv | [Paper](https://arxiv.org/abs/2009.06228) | |
| xxx | 2022 | Exploring the Security Boundary of Data Reconstruction via Neuron Exclusivity Analysis | white-box | USENIX | [Paper](https://arxiv.org/abs/2010.13356) | |
| vvv | 2020 | MixCon: Adjusting the Separability of Data Representations for Harder Data Recovery | white-box | arXiv | [Paper](https://arxiv.org/abs/2010.11463) | |
| xxx | 2020 | Evaluation of Inference Attack Models for Deep Learning on Medical Data | black-box | arXiv | [Paper](https://arxiv.org/abs/2011.00177) | |
| xxx | 2020 | FaceLeaks: Inference Attacks against Transfer Learning Models via Black-box Queries | black-box | arXiv | [Paper](https://arxiv.org/abs/2010.14023) | |
| xxx | - | Derivation of Constraints from Machine Learning Models and Applications to Security and Privacy | theory | - | [Paper](https://hal.archives-ouvertes.fr/hal-03091740/) | |
| xxx | 2021 | On the (In)Feasibility of Attribute Inference Attacks on Machine Learning Models | both | IEEE EuroS&P | [Paper](https://arxiv.org/abs/2103.07101) | |
| xxx | 2021 | R-GAP: Recursive Gradient Attack on Privacy | white-box | ICLR | [Paper](https://arxiv.org/abs/2010.07733) | |
| xxx | 2021 | PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage | white-box | arXiv | [Paper](https://arxiv.org/abs/2108.04725) | |
| xxx | 2021 | On the Importance of Encrypting Deep Features | black-box | arXiv | [Paper](https://arxiv.org/abs/2108.07147) | |
| vvv | 2022 | Reconstructing Training Data with Informed Adversaries | white-box | IEEE S&P | [Paper](https://arxiv.org/abs/2201.04845) | |
| vvv | 2022 | Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks | white-box | BMVC | [Paper](https://arxiv.org/abs/2107.06304) | |
-->
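Several of the candidates above are gradient-leakage attacks. Below is a hedged sketch of the core idea behind "Deep leakage from gradients" (NeurIPS 2019; the authors' code is the mit-han-lab/dlg repo linked above): optimize a dummy (input, label) pair until its gradient matches the gradient the victim shared. It is simplified to a single example, and the names and hyperparameters are illustrative, not the reference implementation.

```python
# Hedged sketch of DLG-style gradient matching: recover a training example
# from a leaked gradient by matching the dummy example's gradient to it.
import torch
import torch.nn.functional as F

def dlg_reconstruct(model, leaked_grads, input_shape, num_classes, steps=300):
    params = tuple(model.parameters())
    dummy_x = torch.randn((1,) + input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)  # soft dummy label
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        logits = model(dummy_x)
        # Cross-entropy against the softmaxed dummy label, as in the paper.
        loss = torch.sum(-F.softmax(dummy_y, dim=-1) * F.log_softmax(logits, dim=-1))
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # L2 distance between the dummy gradient and the leaked gradient.
        diff = sum(((g - lg) ** 2).sum() for g, lg in zip(grads, leaked_grads))
        diff.backward()
        return diff

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```

Follow-ups in the list mostly vary this objective: Inverting Gradients swaps the L2 distance for cosine similarity, SAPAG adapts the distance weighting, and R-GAP replaces the optimization with a recursive analytic solve.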
<!-- USENIX Security 2014 - Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing.
[[paper]](https://www.usenix.org/system/files/conference/usenixsecurity14/sec14-paper-fredrikson-privacy.pdf)