Mirror of https://github.com/stratosphereips/awesome-ml-privacy-attacks.git (synced 2025-12-15 22:59:39 +00:00)
@@ -236,6 +236,9 @@ Reconstruction attacks also cover attacks known as *model inversion* and *attribute inference*
- [**Stateful Detection of Model Extraction Attacks**](https://arxiv.org/abs/2107.05166) (Pal et al., 2021)
- [**MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI**](https://arxiv.org/abs/2107.08909) (Miura et al., 2021)
- [**INVERSENET: Augmenting Model Extraction Attacks with Training Data Inversion**](https://www.ijcai.org/proceedings/2021/0336.pdf) (Gong et al., 2021)
- [**Increasing the Cost of Model Extraction with Calibrated Proof of Work**](https://openreview.net/forum?id=EAy7C1cgE1L) (Dziedzic et al., 2022) [code](https://github.com/cleverhans-lab/model-extraction-iclr)
- [**On the Difficulty of Defending Self-Supervised Learning against Model Extraction**](https://proceedings.mlr.press/v162/dziedzic22a/dziedzic22a.pdf) (Dziedzic et al., 2022) [code](https://github.com/cleverhans-lab/ssl-attacks-defenses)
- [**Dataset Inference for Self-Supervised Models**](https://proceedings.neurips.cc/paper_files/paper/2022/hash/4ebf0617b32da2cd083c3b17c7285cce-Abstract-Conference.html) (Dziedzic et al., 2022) [code](https://github.com/cleverhans-lab/DatasetInferenceForSelfSupervisedModels)
- [**Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders**](https://arxiv.org/abs/2201.07513) (Sha et al., 2022)
- [**StolenEncoder: Stealing Pre-trained Encoders**](https://arxiv.org/abs/2201.05889) (Liu et al., 2022)