mirror of
https://github.com/AndrewZhou924/Awesome-model-inversion-attack.git
synced 2026-01-25 22:11:26 +00:00
Update README.md
@@ -567,6 +567,7 @@ ICML 2023 - On Strengthening and Defending Graph Reconstruction Attack with Mark
| 2020 | Information Leakage in Embedding Models | black & white-box | CCS | [Paper](https://arxiv.org/pdf/2004.00053.pdf) | |
| 2020 | KART: Privacy Leakage Framework of Language Models Pre-trained with Clinical Records | black-box | arXiv | [Paper](https://arxiv.org/abs/2101.00036) | |
| 2021 | TAG: Gradient Attack on Transformer-based Language Models | white-box | EMNLP | [Paper](https://arxiv.org/pdf/2103.06819.pdf) | |
| 2021 | Dataset Reconstruction Attack against Language Models | black-box | CEUR Workshop | [Paper](https://ceur-ws.org/Vol-2942/paper1.pdf) | |
| 2022 | KART: Parameterization of Privacy Leakage Scenarios from Pre-trained Language Models | black-box | arXiv | [Paper](https://arxiv.org/pdf/2101.00036v1.pdf) | [Code](https://github.com/yutanakamura-tky/kart) |
| 2022 | Text Revealer: Private Text Reconstruction via Model Inversion Attacks against Transformers | white-box | arXiv | [Paper](https://arxiv.org/pdf/2209.10505.pdf) | |
| 2022 | Canary Extraction in Natural Language Understanding Models | white-box | ACL | [Paper](https://arxiv.org/pdf/2203.13920.pdf) | |
@@ -588,6 +589,9 @@ CCS 2020 - Information Leakage in Embedding Models
EMNLP 2021 - TAG: Gradient Attack on Transformer-based Language Models
[[paper]](https://arxiv.org/pdf/2103.06819.pdf)

CEUR Workshop 2021 - Dataset Reconstruction Attack against Language Models
[[paper]](https://ceur-ws.org/Vol-2942/paper1.pdf)

arXiv 2022 - KART: Parameterization of Privacy Leakage Scenarios from Pre-trained Language Models
[[paper]](https://arxiv.org/abs/2101.00036v1)
[[code]](https://github.com/yutanakamura-tky/kart)