# Awesome-model-inversion-attack

### Survey
Arxiv 2021 - A Survey of Privacy Attacks in Machine Learning.
[[paper]](https://arxiv.org/pdf/2007.07646.pdf)

Arxiv 2022 - A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability.
[[paper]](https://arxiv.org/pdf/2204.08570.pdf)

Arxiv 2022 - A Survey of Trustworthy Graph Learning: Reliability, Explainability, and Privacy.
[[paper]](https://arxiv.org/pdf/2205.10014.pdf)

### Computer Vision
CCS 2015 - Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures.
[[paper]](https://dl.acm.org/doi/pdf/10.1145/2810103.2813677)
[[code1]](http://www.cs.cmu.edu/~mfredrik/mi-2016.zip)
[[code2]](https://github.com/yashkant/Model-Inversion-Attack)
[[code3]](https://github.com/zhangzp9970/MIA)
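The core idea behind this line of attacks is compact enough to sketch. Below is a minimal, illustrative gradient-ascent inversion in PyTorch: it maximizes the model's confidence in a target class with respect to the input pixels. The function name, input shape, and hyperparameters are our assumptions, not the paper's reference implementation (see the code links above for that):

```python
# Illustrative gradient-ascent model inversion (not the paper's exact algorithm).
import torch

def invert_class(model, target_class, shape=(1, 1, 32, 32), steps=500, lr=0.1):
    """Recover a representative input for `target_class` by maximizing
    the white-box model's confidence in that class w.r.t. the input."""
    model.eval()
    x = torch.zeros(shape, requires_grad=True)  # start from a blank image
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)
        # Loss = 1 - confidence of the target class (confidence-exploiting MIA).
        loss = 1.0 - torch.softmax(logits, dim=1)[0, target_class]
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)  # keep pixels in a valid range
    return x.detach()
```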
CSF 2016 - A Methodology for Formalizing Model-Inversion Attacks.
[[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7536387)

CCS 2019 - Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment.
[[paper]](https://dl.acm.org/doi/pdf/10.1145/3319535.3354261)
[[code]](https://github.com/zhangzp9970/TB-MIA)

CVPR 2020 - The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks.
[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhang_The_Secret_Revealer_Generative_Model-Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2020_paper.pdf)
[[code]](https://github.com/AI-secure/GMI-Attack)
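For flavor, here is a simplified sketch of the GAN-prior idea: search the latent space of a pretrained `generator` for an image that the `target_model` assigns to the target identity, while a `discriminator` keeps the image realistic. The network handles, loss weighting, and hyperparameters are illustrative assumptions; the official repository above is the real reference:

```python
# Sketch of GAN-based model inversion in the spirit of GMI (illustrative only).
import torch
import torch.nn.functional as F

def gan_prior_inversion(generator, discriminator, target_model, target_class,
                        z_dim=100, steps=1500, lr=0.02, lam=100.0):
    """Optimize a latent code so the generated image is both realistic
    (low prior loss) and confidently classified as `target_class`."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = generator(z)
        prior_loss = -discriminator(x).mean()  # stay on the image manifold
        identity_loss = F.cross_entropy(target_model(x),
                                        torch.tensor([target_class]))
        (prior_loss + lam * identity_loss).backward()
        opt.step()
    return generator(z).detach()
```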
NeurIPS 2021 - Variational Model Inversion Attacks.
[[paper]](https://proceedings.neurips.cc/paper/2021/file/50a074e6a8da4662ae0a29edde722179-Paper.pdf)
[[code]](https://github.com/wangkua1/vmi)

ICCV 2021 - Exploiting Explanations for Model Inversion Attacks.
[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Zhao_Exploiting_Explanations_for_Model_Inversion_Attacks_ICCV_2021_paper.pdf)

ICCV 2021 - Knowledge-Enriched Distributional Model Inversion Attacks.
[[paper]](https://arxiv.org/pdf/2010.04092.pdf)
[[code]](https://github.com/SCccc21/Knowledge-Enriched-DMI)

AAAI 2021 - Improving Robustness to Model Inversion Attacks via Mutual Information Regularization.
[[paper]](https://arxiv.org/pdf/2009.05241.pdf)

CVPR 2022 - Label-Only Model Inversion Attacks via Boundary Repulsion.
[[paper]](https://arxiv.org/pdf/2203.01925.pdf)

KDD 2022 - Bilateral Dependency Optimization: Defending Against Model-inversion Attacks.
[[paper]](https://arxiv.org/pdf/2206.05483.pdf)
[[code]](https://github.com/xpeng9719/Defend_MI)

USENIX Security 2022 - ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models.
[[paper]](https://www.usenix.org/system/files/sec22summer_liu-yugeng.pdf)
[[code]](https://github.com/liuyugeng/ML-Doctor)
### Graph Learning

USENIX Security 2021 - Stealing Links from Graph Neural Networks.
[[paper]](https://www.usenix.org/system/files/sec21-he-xinlei.pdf)
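The unsupervised variant of this attack fits in a few lines: because GNNs aggregate information over edges, connected nodes tend to receive similar posteriors, so posterior similarity alone leaks links. The sketch below assumes a dense tensor of per-node posteriors and a hand-picked threshold; both are illustrative choices, not the authors' code:

```python
# Illustrative posterior-similarity link-stealing attack.
import torch
import torch.nn.functional as F

def steal_links(posteriors, threshold=0.9):
    """Guess which node pairs are connected from the victim GNN's
    posteriors alone: high cosine similarity suggests an edge."""
    p = F.normalize(posteriors, dim=1)  # unit-norm rows
    sim = p @ p.t()                     # pairwise cosine similarity
    sim = sim.triu(diagonal=1)          # upper triangle: no self/duplicate pairs
    return (sim > threshold).nonzero()  # predicted edges as (i, j) index pairs
```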
IJCAI 2021 - GraphMI: Extracting Private Graph Data from Graph Neural Networks.
[[paper]](https://arxiv.org/pdf/2106.02820.pdf)
[[code]](https://github.com/zaixizhang/GraphMI)
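A rough sketch of the structure-recovery idea, under strong simplifying assumptions: a white-box `gnn` that accepts a dense adjacency matrix, plus known node features and labels. The signature and regularizer are hypothetical; the official repository above is the authoritative implementation:

```python
# Sketch of GraphMI-style adjacency recovery (illustrative only).
import torch
import torch.nn.functional as F

def recover_adjacency(gnn, features, labels, steps=200, lr=0.1, sparsity=1e-3):
    """Learn a symmetric, continuous adjacency matrix whose induced
    predictions match the known labels, then threshold it to guess edges."""
    n = features.size(0)
    a = torch.zeros(n, n, requires_grad=True)  # logits over candidate edges
    opt = torch.optim.Adam([a], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        adj = torch.sigmoid((a + a.t()) / 2)  # symmetric, entries in [0, 1]
        logits = gnn(features, adj)           # assumed dense-adjacency forward
        loss = F.cross_entropy(logits, labels) + sparsity * adj.sum()
        loss.backward()
        opt.step()
    with torch.no_grad():
        adj = torch.sigmoid((a + a.t()) / 2)
    return (adj > 0.5).float()  # binarized edge guesses
```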
Arxiv 2021 - Node-Level Membership Inference Attacks Against Graph Neural Networks.
[[paper]](https://arxiv.org/pdf/2102.05429.pdf)

WWW 2022 - Learning Privacy-Preserving Graph Convolutional Network with Partially Observed Sensitive Attributes.
[[paper]](https://dl.acm.org/doi/pdf/10.1145/3485447.3511975)

USENIX Security 2022 - Inference Attacks Against Graph Neural Networks.
[[paper]](https://www.usenix.org/system/files/sec22summer_zhang-zhikun.pdf)
[[code]](https://github.com/Zhangzhk0819/GNN-Embedding-Leaks)

Arxiv 2022 - Differentially Private Graph Classification with GNNs.
[[paper]](https://arxiv.org/pdf/2202.02575.pdf)