From d7de4cfdcc2788c4d57843c00a195047bdce786d Mon Sep 17 00:00:00 2001
From: AlvinYu <125203137+AlvinYu025@users.noreply.github.com>
Date: Sat, 8 Jul 2023 14:54:50 +0800
Subject: [PATCH] Update README.md

---
 README.md | 41 ++++++++++++++++++++++++++++++++++-------
 1 file changed, 34 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index 3f8a4f7..fd5c5c5 100644
--- a/README.md
+++ b/README.md
@@ -562,7 +562,6 @@ ICML 2023 - On Strengthening and Defending Graph Reconstruction Attack with Mark
 ## Natural language processing domain
 | Year | Title | Adversarial Knowledge | Venue | Paper Link | Code Link |
 | ---- | ----- | -------------------- | ----- | ---------- | --------- |
-| 2018 | Privacy-preserving Neural Representations of Text | white-box | EMNLP | [Paper](https://arxiv.org/pdf/1808.09408.pdf) | [code](https://github.com/mcoavoux/pnet) |
 | 2020 | Extracting Training Data from Large Language Models | black-box | USENIX Security | [Paper](https://arxiv.org/pdf/2012.07805.pdf) | [code](https://arxiv.org/pdf/2012.07805.pdf) |
 | 2020 | Privacy Risks of General-Purpose Language Models | black & white-box | S&P | [Paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9152761) | |
 | 2020 | Information Leakage in Embedding Models | black & white-box | CCS | [Paper](https://arxiv.org/pdf/2004.00053.pdf) | |
@@ -573,15 +572,43 @@ ICML 2023 - On Strengthening and Defending Graph Reconstruction Attack with Mark
 | 2022 | Canary Extraction in Natural Language Understanding Models | white-box | ACL | [paper](https://arxiv.org/pdf/2203.13920.pdf) | |
 | 2022 | Recovering Private Text in Federated Learning of Language Models | white-box | NeurIPS | [paper](https://arxiv.org/pdf/2205.08514.pdf) | [code](https://github.com/princeton-sysml/film) |
 | 2023 | Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Recover the Whole Sentence | black-box | ACL | [paper](https://arxiv.org/pdf/2305.03010.pdf) | [code](https://github.com/hkust-knowcomp/geia) |
+| 2023 | Deconstructing Classifiers: Towards A Data Reconstruction Attack Against Text Classification Models | white-box | Arxiv | [Paper](https://arxiv.org/pdf/2306.13789.pdf) | |
-
+CCS 2020 - Information Leakage in Embedding Models
+[[paper]](https://arxiv.org/pdf/2004.00053.pdf)
+
+EMNLP 2021 - TAG: Gradient Attack on Transformer-based Language Models
+[[paper]](https://arxiv.org/pdf/2103.06819.pdf)
+
+Arxiv 2022 - KART: Parameterization of Privacy Leakage Scenarios from Pre-trained Language Models
+[[paper]](https://arxiv.org/abs/2101.00036v1)
+[[code]](https://github.com/yutanakamura-tky/kart)
+
+Arxiv 2022 - Text Revealer: Private Text Reconstruction via Model Inversion Attacks against Transformers
+[[paper]](https://arxiv.org/pdf/2209.10505.pdf)
+
+ACL 2022 - Canary Extraction in Natural Language Understanding Models
+[[paper]](https://arxiv.org/pdf/2203.13920.pdf)
+
+NeurIPS 2022 - Recovering Private Text in Federated Learning of Language Models
+[[paper]](https://arxiv.org/pdf/2205.08514.pdf)
+[[code]](https://github.com/princeton-sysml/film)
+
+ACL 2023 - Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Recover the Whole Sentence
+[[paper]](https://arxiv.org/pdf/2305.03010.pdf)
+[[code]](https://github.com/hkust-knowcomp/geia)
+
+Arxiv 2023 - Deconstructing Classifiers: Towards A Data Reconstruction Attack Against Text Classification Models
+[[paper]](https://arxiv.org/pdf/2306.13789.pdf)
+-->
 
 ## Tools