Update README.md

AlvinYu
2023-07-08 14:54:50 +08:00
committed by GitHub
parent df19adc8ad
commit d7de4cfdcc


@@ -562,7 +562,6 @@ ICML 2023 - On Strengthening and Defending Graph Reconstruction Attack with Mark
## Natural language processing domain
| Year | Title | Adversarial Knowledge | Venue | Paper Link | Code Link |
| ---- | ----- | -------------------- | ----- | ---------- | --------- |
| 2018 | Privacy-preserving Neural Representations of Text | white-box | EMNLP | [Paper](https://arxiv.org/pdf/1808.09408.pdf) | [code](https://github.com/mcoavoux/pnet) |
| 2021 | Extracting Training Data from Large Language Models | black-box | USENIX Security | [Paper](https://www.usenix.org/system/files/sec21-carlini-extracting.pdf) | [code](https://github.com/ftramer/LM_Memorization) |
| 2020 | Privacy Risks of General-Purpose Language Models | black & white-box | S&P | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9152761) | |
| 2020 | Information Leakage in Embedding Models | black & white-box | CCS | [Paper](https://arxiv.org/pdf/2004.00053.pdf) | |
@@ -573,15 +572,43 @@ ICML 2023 - On Strengthening and Defending Graph Reconstruction Attack with Mark
| 2022 | Canary Extraction in Natural Language Understanding Models | white-box | ACL | [Paper](https://arxiv.org/pdf/2203.13920.pdf) | |
| 2022 | Recovering Private Text in Federated Learning of Language Models | white-box | NeurIPS | [Paper](https://arxiv.org/pdf/2205.08514.pdf) | [code](https://github.com/princeton-sysml/film) |
| 2023 | Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Recover the Whole Sentence | black-box | ACL | [Paper](https://arxiv.org/pdf/2305.03010.pdf) | [code](https://github.com/hkust-knowcomp/geia) |
| 2023 | Deconstructing Classifiers: Towards A Data Reconstruction Attack Against Text Classification Models | white-box | Arxiv | [Paper](https://arxiv.org/pdf/2306.13789.pdf) | |
<!--
EMNLP 2021 - TAG: Gradient Attack on Transformer-based Language Models
[[paper]](https://arxiv.org/pdf/2103.06819.pdf)
Arxiv 2022 - KART: Parameterization of Privacy Leakage Scenarios from Pre-trained Language Models
[[paper]](https://arxiv.org/abs/2101.00036v1)
[[code]](https://github.com/yutanakamura-tky/kart)
Arxiv 2022 - Text Revealer: Private Text Reconstruction via Model Inversion Attacks against Transformers
[[paper]](https://arxiv.org/pdf/2209.10505.pdf)
-->
## Tools