mirror of
https://github.com/AndrewZhou924/Awesome-model-inversion-attack.git
synced 2026-01-25 22:11:26 +00:00
Update README.md
This commit is contained in:

README.md (22 changed lines)
@@ -128,21 +128,19 @@ Philosophical Transactions of the Royal Society A 2018. Algorithms that remember
| Year | Title | Adversarial Knowledge | Venue | Paper Link | Code Link |
| ---- | ------------------------------------------------------------ | --------------------- | --------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| 2014 | Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing | white-box (both) | USENIX Security | [paper](https://www.usenix.org/system/files/conference/usenixsecurity14/sec14-paper-fredrikson-privacy.pdf) | |
| 2015 | Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures | white-box (both) | CCS | [paper](https://dl.acm.org/doi/pdf/10.1145/2810103.2813677) | [code1](http://www.cs.cmu.edu/~mfredrik/mi-2016.zip), [code2](https://github.com/yashkant/Model-Inversion-Attack), [code3](https://github.com/zhangzp9970/MIA), [code4](https://github.com/sarahsimionescu/simple-model-inversion) |
| 2015 | Regression model fitting under differential privacy and model inversion attack | white-box (defense) | IJCAI | [paper](http://www.csce.uark.edu/~xintaowu/publ/ijcai15.pdf) | [code](https://github.com/cxs040/Regression-Model-Fitting-under-Differential-Privacy-and-Model-Inversion-Attack-Source-Code) |
| 2016 | A Methodology for Formalizing Model-Inversion Attacks | black & white-box | CSF | [paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7536387&casa_token=ClIVAMYo6dcAAAAA:u75HHyFHj5lBRec9h5SqOZyAsL2dICcWIuQPCj6ltk8McREFCaM4ex42mv3S-oNPiGJLDfUqg0qL) | |
| 2017 | Machine Learning Models that Remember Too Much | white-box | CCS | [paper](https://arxiv.org/pdf/1709.07886.pdf) | [code](https://github.com/csong27/ml-model-remember) |
| 2017 | Model inversion attacks for prediction systems: Without knowledge of non-sensitive attributes | white-box | PST | [paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8476925) | |
| 2018 | Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting | white-box | CSF | [paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8429311) | |
| 2019 | MLPrivacyGuard: Defeating Confidence Information based Model Inversion Attacks on Machine Learning Systems | black-box (defense) | GLSVLSI | [paper](https://www.researchgate.net/profile/Tiago-Alves-13/publication/333136362_MLPrivacyGuard_Defeating_Confidence_Information_based_Model_Inversion_Attacks_on_Machine_Learning_Systems/links/5cddb94d92851c4eaba682d7/MLPrivacyGuard-Defeating-Confidence-Information-based-Model-Inversion-Attacks-on-Machine-Learning-Systems.pdf) | |
| 2019 | Model inversion attacks against collaborative inference | black & white-box (collaborative inference) | ACSAC | [paper](http://palms.ee.princeton.edu/system/files/Model+Inversion+Attack+against+Collaborative+Inference.pdf) | |
| 2019 | Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment | black-box | CCS | [paper](https://dl.acm.org/doi/pdf/10.1145/3319535.3354261?casa_token=J81Ps-ZWXHkAAAAA:FYnXo7DQoHpdhqns8x2TclKFeHpAQlXVxMBW2hTrhJ5c20XKdsounqdT1Viw1g6Xsu9FtKj85elxQaA) | [code](https://github.com/zhangzp9970/TB-MIA) |
| 2019 | Exploiting Unintended Feature Leakage in Collaborative Learning | - | IEEE S&P | [paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8835269) | [code](https://github.com/csong27/property-inference-collaborative-ml) |
| 2019 | Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment | - | arXiv | [paper](https://arxiv.org/pdf/1902.08552.pdf) | |
| 2019 | GAMIN: An Adversarial Approach to Black-Box Model Inversion | black-box | arXiv | [paper](https://arxiv.org/pdf/1909.11835.pdf) | |
| 2020 | The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks | white-box | CVPR | [paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhang_The_Secret_Revealer_Generative_Model-Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2020_paper.pdf) | [code](https://github.com/AI-secure/GMI-Attack), [video](https://www.youtube.com/watch?v=_g-oXYMhz4M) |
| 2020 | Overlearning Reveals Sensitive Attributes | - | ICLR | [paper](https://arxiv.org/pdf/1905.11742.pdf) | |
| 2020 | Deep Face Recognizer Privacy Attack: Model Inversion Initialization by a Deep Generative Adversarial Data Space Discriminator | - | APSIPA ASC | [paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9306253&casa_token=AWugOvIe0I0AAAAA:9wICCkMcfoljMqooM-lgl8m-6F6-cEl-ClHgNkE1SV8mZwqvBIaJ1HDjT1RWLyBz_P7tdB51jQVL&tag=1) | |
| 2020 | Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning | - | USENIX Security | [paper](https://www.usenix.org/system/files/sec20-salem.pdf) | |
| 2020 | Attacking and Protecting Data Privacy in Edge-Cloud Collaborative Inference Systems | - | IoTJ | [paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9187880) | [code](https://github.com/zechenghe/Inverse_Collaborative_Inference) |
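To make the white-box, confidence-guided attack family above concrete (in the spirit of Fredrikson et al., CCS 2015), here is a minimal sketch. It assumes a hypothetical toy target: a linear softmax classifier whose weights `W, b` the attacker knows. The attack runs gradient ascent on the *input* to maximize the target class's confidence, recovering a class-representative input; real attacks do the same against deep networks, often with a generative prior.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def invert(W, b, target_class, steps=200, lr=0.5):
    """MI-style inversion sketch: gradient ascent on x to maximize
    p(target_class | x) under a known linear softmax model."""
    x = np.zeros(W.shape[1])
    for _ in range(steps):
        p = softmax(W @ x + b)
        # Gradient of log p(target | x) w.r.t. x for a linear softmax:
        # d/dx log p_t = W_t - sum_k p_k * W_k
        grad = W[target_class] - p @ W
        x += lr * grad
    return x, softmax(W @ x + b)[target_class]

# Toy 3-class model over 8 features (stand-in for the victim model).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
b = np.zeros(3)
x_rec, conf = invert(W, b, target_class=1)
```

The recovered `x_rec` is the input the model is most confident belongs to the target class; the defenses in the table (e.g. MLPrivacyGuard, differential privacy) work by degrading exactly the confidence signal this loop climbs.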