Update README.md
## Computer vision domain
| Year | Title | Adversarial Knowledge | Venue | Paper Link | Code Link |
| ---- | ----- | -------------------- | ----- | ---------- | --------- |
| 2014 | Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing | white-box | USENIX Security | [Paper](https://www.usenix.org/system/files/conference/usenixsecurity14/sec14-paper-fredrikson-privacy.pdf) | |
| 2015 | Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures | | CCS | [Paper](https://dl.acm.org/doi/pdf/10.1145/2810103.2813677) | [Code1](http://www.cs.cmu.edu/~mfredrik/mi-2016.zip), [Code2](https://github.com/yashkant/Model-Inversion-Attack), [Code3](https://github.com/zhangzp9970/MIA), [Code4](https://github.com/sarahsimionescu/simple-model-inversion) |
| 2015 | Regression Model Fitting under Differential Privacy and Model Inversion Attack | | IJCAI | [Paper](http://www.csce.uark.edu/~xintaowu/publ/ijcai15.pdf) | [Code](https://github.com/cxs040/Regression-Model-Fitting-under-Differential-Privacy-and-Model-Inversion-Attack-Source-Code) |
| 2016 | A Methodology for Formalizing Model-Inversion Attacks | | CSF | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7536387&casa_token=ClIVAMYo6dcAAAAA:u75HHyFHj5lBRec9h5SqOZyAsL2dICcWIuQPCj6ltk8McREFCaM4ex42mv3S-oNPiGJLDfUqg0qL) | |
| 2017 | Machine Learning Models that Remember Too Much | | CCS | [Paper](https://arxiv.org/pdf/1709.07886.pdf) | [Code](https://github.com/csong27/ml-model-remember) |
| 2017 | Model Inversion Attacks for Prediction Systems: Without Knowledge of Non-Sensitive Attributes | | PST | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8476925) | |
| 2018 | Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting | | CSF | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8429311) | |
| 2019 | MLPrivacyGuard: Defeating Confidence Information based Model Inversion Attacks on Machine Learning Systems | | GLSVLSI | [Paper](https://www.researchgate.net/profile/Tiago-Alves-13/publication/333136362_MLPrivacyGuard_Defeating_Confidence_Information_based_Model_Inversion_Attacks_on_Machine_Learning_Systems/links/5cddb94d92851c4eaba682d7/MLPrivacyGuard-Defeating_Confidence-Information-based-Model-Inversion-Attacks-on-Machine-Learning-Systems.pdf) | |
| 2019 | Model Inversion Attacks Against Collaborative Inference | | ACSAC | [Paper](https://par.nsf.gov/servlets/purl/10208164) | [Code](https://github.com/zechenghe/Inverse_Collaborative_Inference) |
| 2019 | Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment | | CCS | [Paper](https://dl.acm.org/doi/pdf/10.1145/3319535.3354261?casa_token=J81Ps-ZWXHkAAAAA:FYnXo7DQoHpdhqns8x2TclKFeHpAQlXVxMBW2hTrhJ5c20XKdsounqdT1Viw1g6Xsu9FtKj85elxQaA) | [Code](https://github.com/zhangzp9970/TB-MIA) |
| 2019 | Exploiting Unintended Feature Leakage in Collaborative Learning | | IEEE S&P | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8835269) | [Code](https://github.com/csong27/property-inference-collaborative-ml) |
| 2019 | Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment | | arXiv | [Paper](https://arxiv.org/pdf/1902.08552.pdf) | |
| 2019 | GAMIN: An Adversarial Approach to Black-Box Model Inversion | black-box | arXiv | [Paper](https://arxiv.org/pdf/1909.11835.pdf) | |
| 2020 | The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks | | CVPR | [Paper](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhang_The_Secret_Revealer_Generative_Model-Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2020_paper.pdf) | [Code](https://github.com/AI-secure/GMI-Attack), [Video](https://www.youtube.com/watch?v=_g-oXYMhz4M) |
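
The confidence-exploiting attacks near the top of this table share one core idea: given white-box access to a classifier, treat the input itself as the optimization variable and ascend the gradient of the target class's confidence. Below is a minimal PyTorch sketch of that loop in the spirit of Fredrikson et al. (CCS 2015); the function name `invert_class`, the blank-image initialization, and the hyperparameters are illustrative assumptions, not code from any listed paper's release.

```python
import torch
import torch.nn.functional as F

def invert_class(model, target_class, input_shape, steps=500, lr=0.1):
    """Gradient-based model inversion sketch: reconstruct a
    class-representative input from a white-box classifier."""
    model.eval()
    # Treat the input as the optimization variable, starting from a blank image.
    x = torch.zeros(1, *input_shape, requires_grad=True)
    optimizer = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Confidence the model assigns to the target class for the current guess.
        confidence = F.softmax(model(x), dim=1)[0, target_class]
        # Minimizing (1 - confidence) ascends the target class's confidence.
        loss = 1.0 - confidence
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)  # keep the reconstruction in a valid pixel range
    return x.detach()
```

Raw pixel-space search like this tends to produce noisy reconstructions on deep networks; the generative attacks later in the table (e.g., GMI, CVPR 2020) instead optimize over the latent space of a GAN trained on public data to keep reconstructions on the natural image manifold.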