From 6dd9114766e31c7050e11ae039b52bb7ed522587 Mon Sep 17 00:00:00 2001
From: Suha Sabi Hussain
Date: Thu, 16 Jul 2020 16:24:48 -0400
Subject: [PATCH 01/15] Fix spelling of title

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index a29a758..86c9944 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-# Awesome atacks on machine learning privacy [![Awesome](https://awesome.re/badge.svg)](https://awesome.re)
+# Awesome Attacks on Machine Learning Privacy [![Awesome](https://awesome.re/badge.svg)](https://awesome.re)
 This repository contains a curated list of papers related to privacy attacks against machine learning. A code repository is provided when available by the authors. For corrections, suggestions or missing papers, please either open an issue or submit a pull request.
 
 # Surveys and Overviews

From e8f77008352d99586037263f01c4da15a057ecc2 Mon Sep 17 00:00:00 2001
From: Suha Sabi Hussain
Date: Thu, 16 Jul 2020 17:25:12 -0400
Subject: [PATCH 02/15] Add new overview paper; Fix 1 reference

[**Test**](Link)(Ref)
---
 README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 86c9944..3fe63ab 100644
--- a/README.md
+++ b/README.md
@@ -4,6 +4,7 @@ This repository contains a curated list of papers related to privacy attacks aga
 # Surveys and Overviews
 - [**A Survey of Privacy Attacks in Machine Learning**](https://arxiv.org/abs/2007.07646) (Rigaki and Garcia, 2020)
 - [**An Overview of Privacy in Machine Learning**](https://arxiv.org/pdf/2005.08679) (De Cristofaro, 2020)
+- [**Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks**](https://arxiv.org/abs/2006.11601)(Fan et al., 2020)
 
 # Papers and Code
 
@@ -11,7 +12,7 @@ ## Membership inference
 - [**Membership inference attacks against machine learning models**](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7958568) (Shokri et al., 2017) ([code](https://github.com/csong27/membership-inference))
 - [**Understanding membership inferences on well-generalized learning models**](https://arxiv.org/pdf/1802.04889)(Long et al., 2018)
 - [**Privacy risk in machine learning:Analyzing the connection to overfitting**](https://ieeexplore.ieee.org/document/8429311), (Yeom et al., 2018) ([code](https://github.com/samuel-yeom/ml-privacy-csf18))
-- [**Membership inference attack against differentially private deep learning model**](http://www.tdp.cat/issues16/tdp.a289a17.pdf) (Rahman ett al., 2018)
+- [**Membership inference attack against differentially private deep learning model**](http://www.tdp.cat/issues16/tdp.a289a17.pdf) (Rahman et al., 2018)
 - [**Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning.**](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8835245) (Nasr et al., 2019) ([code](https://github.com/privacytrustlab/ml_privacy_meter))
 - [**Logan: Membership inference attacks against generative models.**](https://content.sciendo.com/downloadpdf/journals/popets/2019/1/article-p133.xml) (Hayes et al. 2019) ([code](https://github.com/jhayes14/gen_mem_inf))
 - [**Evaluating differentially private machine learning in practice**](https://www.usenix.org/system/files/sec19-jayaraman.pdf) (Jayaraman and Evans, 2019) ([code](https://github.com/bargavj/EvaluatingDPML))

From 1360b6b6b712e959f3b74fdc6f6a7f0bccdebabb Mon Sep 17 00:00:00 2001
From: Suha Sabi Hussain
Date: Thu, 16 Jul 2020 17:26:36 -0400
Subject: [PATCH 03/15] Add space after paper title

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 3fe63ab..91adcb4 100644
--- a/README.md
+++ b/README.md
@@ -4,7 +4,7 @@ This repository contains a curated list of papers related to privacy attacks aga
 # Surveys and Overviews
 - [**A Survey of Privacy Attacks in Machine Learning**](https://arxiv.org/abs/2007.07646) (Rigaki and Garcia, 2020)
 - [**An Overview of Privacy in Machine Learning**](https://arxiv.org/pdf/2005.08679) (De Cristofaro, 2020)
-- [**Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks**](https://arxiv.org/abs/2006.11601)(Fan et al., 2020)
+- [**Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks**](https://arxiv.org/abs/2006.11601) (Fan et al., 2020)
 
 # Papers and Code
 

From dcb175b10a8a4a9f845d46ba42141e569c7d785a Mon Sep 17 00:00:00 2001
From: Suha Sabi Hussain
Date: Thu, 16 Jul 2020 17:40:49 -0400
Subject: [PATCH 04/15] Add 2 new membership inference papers

---
 README.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 91adcb4..3443eee 100644
--- a/README.md
+++ b/README.md
@@ -26,7 +26,9 @@ This repository contains a curated list of papers related to privacy attacks aga
 - [**Gan-leaks: A taxonomy of membership inference attacks against gans**](https://arxiv.org/pdf/1909.03935.pdf) (Chen,et al., 2019))
 - [**Auditing Data Provenance in Text-Generation Models**](https://dl.acm.org/doi/pdf/10.1145/3292500.3330885) (Song and Shmatikov, 2019)
 - [**Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System?**](https://www.mitpressjournals.org/doi/full/10.1162/tacl_a_00299) (Hisamoto et al., 2020)
-- [**Revisiting Membership InferenceUnder Realistic Assumptions**](https://arxiv.org/pdf/2005.10881.pdf) (Jayaraman et al., 2020)
+- [**Revisiting Membership Inference Under Realistic Assumptions**](https://arxiv.org/pdf/2005.10881.pdf) (Jayaraman et al., 2020)
+- [**When Machine Unlearning Jeopardizes Privacy**](https://arxiv.org/pdf/2005.02205.pdf) (Chen et al., 2020)
+- [**Modelling and Quantifying Membership Information Leakage in Machine Learning **](https://arxiv.org/pdf/2001.10648.pdf) (Farokhi and Kaafar, 2020)
 
 ## Reconstruction
 Reconstruction attacks cover also attacks known as *model inversion* and *attribute inference*.

From 31e54ea575a9d05b792c181c0700e8e49958b473 Mon Sep 17 00:00:00 2001
From: Suha Sabi Hussain
Date: Thu, 16 Jul 2020 22:46:38 -0400
Subject: [PATCH 05/15] Fix formatting of Farokhi's paper

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 3443eee..d4703ab 100644
--- a/README.md
+++ b/README.md
@@ -28,7 +28,7 @@ This repository contains a curated list of papers related to privacy attacks aga
 - [**Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System?**](https://www.mitpressjournals.org/doi/full/10.1162/tacl_a_00299) (Hisamoto et al., 2020)
 - [**Revisiting Membership Inference Under Realistic Assumptions**](https://arxiv.org/pdf/2005.10881.pdf) (Jayaraman et al., 2020)
 - [**When Machine Unlearning Jeopardizes Privacy**](https://arxiv.org/pdf/2005.02205.pdf) (Chen et al., 2020)
-- [**Modelling and Quantifying Membership Information Leakage in Machine Learning **](https://arxiv.org/pdf/2001.10648.pdf) (Farokhi and Kaafar, 2020)
+- [**Modelling and Quantifying Membership Information Leakage in Machine Learning**](https://arxiv.org/pdf/2001.10648.pdf) (Farokhi and Kaafar, 2020)
 
 ## Reconstruction
 Reconstruction attacks cover also attacks known as *model inversion* and *attribute inference*.

From 65c6e69f781c8348f78c4b81706b71e2aa6ed453 Mon Sep 17 00:00:00 2001
From: Suha Sabi Hussain
Date: Thu, 16 Jul 2020 23:08:12 -0400
Subject: [PATCH 06/15] Add more MI and extraction papers

---
 README.md | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index d4703ab..4de6234 100644
--- a/README.md
+++ b/README.md
@@ -11,7 +11,7 @@ This repository contains a curated list of papers related to privacy attacks aga
 ## Membership inference
 - [**Membership inference attacks against machine learning models**](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7958568) (Shokri et al., 2017) ([code](https://github.com/csong27/membership-inference))
 - [**Understanding membership inferences on well-generalized learning models**](https://arxiv.org/pdf/1802.04889)(Long et al., 2018)
-- [**Privacy risk in machine learning:Analyzing the connection to overfitting**](https://ieeexplore.ieee.org/document/8429311), (Yeom et al., 2018) ([code](https://github.com/samuel-yeom/ml-privacy-csf18))
+- [**Privacy risk in machine learning: Analyzing the connection to overfitting**](https://ieeexplore.ieee.org/document/8429311), (Yeom et al., 2018) ([code](https://github.com/samuel-yeom/ml-privacy-csf18))
 - [**Membership inference attack against differentially private deep learning model**](http://www.tdp.cat/issues16/tdp.a289a17.pdf) (Rahman et al., 2018)
 - [**Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning.**](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8835245) (Nasr et al., 2019) ([code](https://github.com/privacytrustlab/ml_privacy_meter))
 - [**Logan: Membership inference attacks against generative models.**](https://content.sciendo.com/downloadpdf/journals/popets/2019/1/article-p133.xml) (Hayes et al. 2019) ([code](https://github.com/jhayes14/gen_mem_inf))
@@ -29,6 +29,9 @@ This repository contains a curated list of papers related to privacy attacks aga
 - [**Revisiting Membership Inference Under Realistic Assumptions**](https://arxiv.org/pdf/2005.10881.pdf) (Jayaraman et al., 2020)
 - [**When Machine Unlearning Jeopardizes Privacy**](https://arxiv.org/pdf/2005.02205.pdf) (Chen et al., 2020)
 - [**Modelling and Quantifying Membership Information Leakage in Machine Learning**](https://arxiv.org/pdf/2001.10648.pdf) (Farokhi and Kaafar, 2020)
+- [**Privacy Risks of Securing Machine Learning Models against Adversarial Examples**](https://arxiv.org/abs/1905.10291) (Song et al., 2019) ([code](https://github.com/inspire-group/privacy-vs-robustness))
+- [**Systematic Evaluation of Privacy Risks of Machine Learning Models**](https://arxiv.org/abs/2003.10595) (Song and Mittal, 2020) ([code](https://github.com/inspire-group/membership-inference-evaluation))
+- [**Towards the Infeasibility of Membership Inference on Deep Models**](https://arxiv.org/pdf/2005.13702.pdf) (Rezaei and Liu, 2020) ([code](https://github.com/shrezaei/MI-Attack))
 
 ## Reconstruction
 Reconstruction attacks cover also attacks known as *model inversion* and *attribute inference*.
@@ -56,7 +59,7 @@ Reconstruction attacks cover also attacks known as *model inversion* and *attrib
 - [**Overlearning Reveals Sensitive Attributes**](https://openreview.net/pdf?id=SJeNz04tDS) (Song C. et al., 2020) ([code](https://drive.google.com/file/d/1hu0PhN3pWXe6LobxiPFeYBm8L-vQX2zJ/view?usp=sharing))
 
 ## Model extraction
-- [**Stealing machine learning models via prediction apis**](https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_tramer.pdf) (Tramèr et al., 2016) ([code](https://github.com/ftramer/Steal-ML))
+- [**Stealing machine learning models via prediction apis**](https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_tramer.pdf) (Tramèr et al., 2016) ([code](https://github.com/ftramer/Steal-ML)) ([code](https://github.com/ftramer/Steal-ML))
 - [**Stealing hyperparameters in machine learning**](https://ieeexplore.ieee.org/iel7/8418581/8418583/08418595.pdf)(Wang B. et al., 2018)
 - [**Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data**](https://ieeexplore.ieee.org/document/8489592) (Correia-Silva et al., 2018) ([code](https://github.com/jeiks/Stealing_DL_Models))
 - [**Towards reverse-engineering black-box neural networks.**](https://openreview.net/forum?id=BydjJte0-)(Oh et al., 2018) ([code](https://github.com/coallaoh/WhitenBlackBox))
@@ -66,3 +69,11 @@ Reconstruction attacks cover also attacks known as *model inversion* and *attrib
 - [**Exploring connections between active learning and model extraction**](https://www.usenix.org/system/files/sec20summer_chandrasekaran_prepub.pdf) (Chandrasekaran et al., 2020)
 - [**High Accuracy and High Fidelity Extraction of Neural Networks**](https://www.usenix.org/conference/usenixsecurity20/presentation/jagielski) (Jagielski et al., 2020)
 - [**Thieves on Sesame Street! Model Extraction of BERT-based APIs**](https://openreview.net/attachment?id=Byl5NREFDr&name=original_pdf) (Krishna et al., 2020) ([code](https://github.com/google-research/language/tree/master/language/bert_extraction))
+- [**Cryptanalytic Extraction of Neural Network Models**](https://arxiv.org/pdf/2003.04884.pdf) (Carlini et al., 2020)
+- [**CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples**](https://www.ndss-symposium.org/ndss-paper/cloudleak-large-scale-deep-learning-models-stealing-through-adversarial-examples/) (Yu et al., 2020)
+- [**ACTIVETHIEF: Model Extraction Using Active Learning and Unannotated Public Data**](https://aaai.org/Papers/AAAI/2020GB/AAAI-PalS.7093.pdf) (Pal et al., 2020)
+- [**Efficiently Stealing your Machine Learning Models**](https://encrypto.de/papers/RST19.pdf) (Reith et al., 2019)
+- [**A framework for the extraction of Deep Neural Networks by leveraging public data**](https://arxiv.org/abs/1905.09165) (Pal et al., 2019)
+- [**Extraction of Complex DNN Models: Real Threat or Boogeyman?**](https://arxiv.org/pdf/1910.05429.pdf) (Atli et al., 2020)
+
+

From 6b95a00e8c81b74dbba1c6325d568e201d1b403a Mon Sep 17 00:00:00 2001
From: Suha Sabi Hussain
Date: Thu, 16 Jul 2020 23:21:42 -0400
Subject: [PATCH 07/15] Add more inversion papers

---
 README.md | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 4de6234..dc4cd82 100644
--- a/README.md
+++ b/README.md
@@ -50,7 +50,13 @@ Reconstruction attacks cover also attacks known as *model inversion* and *attrib
 - [**Privacy Risks of General-Purpose Language Models**](https://www.researchgate.net/profile/Xudong_Pan3/publication/340965355_Privacy_Risks_of_General-Purpose_Language_Models/links/5ea7ca55a6fdccd7945b6a7d/Privacy-Risks-of-General-Purpose-Language-Models.pdf) (Pan et al., 2020)
 - [**The secret revealer: generative model-inversion attacks against deep neural networks**](http://openaccess.thecvf.com/content_CVPR_2020/papers/Zhang_The_Secret_Revealer_Generative_Model-Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2020_paper.pdf)) (Zhang et al., 2020)
 - [**Inverting Gradients - How easy is it to break privacy in federated learning?**](https://arxiv.org/abs/2003.14053) (Geiping et al., 2020)
-
+- [**GAMIN: An Adversarial Approach to Black-Box Model Inversion**](https://arxiv.org/abs/1909.11835) (Aivodji et al., 2019)
+- [**Adversarial Privacy Preservation under Attribute Inference Attack**](https://arxiv.org/abs/1906.07902) (Zhao et al., 2019)
+- [**Reconstruction of training samples from loss functions**](https://arxiv.org/pdf/1805.07337.pdf) (Sannai, 2018)
+- [**A Framework for Evaluating Gradient Leakage Attacks in Federated Learning**](https://arxiv.org/pdf/2004.10397.pdf) (Wei et al., 2020)
+- [**Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning**](https://arxiv.org/pdf/1702.07464.pdf) (Hitaj et al., 2017)
+- [**Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning**](https://arxiv.org/pdf/1812.00535.pdf) (Wang et al., 2018)
+- [**Exploring Image Reconstruction Attack in Deep Learning Computation Offloading**](https://dl.acm.org/doi/pdf/10.1145/3325413.3329791) (Oh and Lee, 2019)
 
 ## Property inference
 - [**Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers**](https://dl.acm.org/doi/10.1504/IJSN.2015.071829) (Ateniese et al., 2015)

From eb65c9eb7f10c1a7fb65676fea668292bd6b9529 Mon Sep 17 00:00:00 2001
From: Suha Sabi Hussain
Date: Thu, 16 Jul 2020 23:26:38 -0400
Subject: [PATCH 08/15] Add side-channel papers

---
 README.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index dc4cd82..9f82a4e 100644
--- a/README.md
+++ b/README.md
@@ -57,6 +57,7 @@ Reconstruction attacks cover also attacks known as *model inversion* and *attrib
 - [**Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning**](https://arxiv.org/pdf/1702.07464.pdf) (Hitaj et al., 2017)
 - [**Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning**](https://arxiv.org/pdf/1812.00535.pdf) (Wang et al., 2018)
 - [**Exploring Image Reconstruction Attack in Deep Learning Computation Offloading**](https://dl.acm.org/doi/pdf/10.1145/3325413.3329791) (Oh and Lee, 2019)
+- [**I Know What You See: Power Side-Channel Attack on Convolutional Neural Network Accelerators**](https://arxiv.org/pdf/1803.05847.pdf) (Wei et al., 2019)
 
 ## Property inference
 - [**Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers**](https://dl.acm.org/doi/10.1504/IJSN.2015.071829) (Ateniese et al., 2015)
@@ -81,5 +82,5 @@ Reconstruction attacks cover also attacks known as *model inversion* and *attrib
 - [**Efficiently Stealing your Machine Learning Models**](https://encrypto.de/papers/RST19.pdf) (Reith et al., 2019)
 - [**A framework for the extraction of Deep Neural Networks by leveraging public data**](https://arxiv.org/abs/1905.09165) (Pal et al., 2019)
 - [**Extraction of Complex DNN Models: Real Threat or Boogeyman?**](https://arxiv.org/pdf/1910.05429.pdf) (Atli et al., 2020)
-
-
+- [**Stealing Neural Networks via Timing Side Channels**](https://arxiv.org/pdf/1812.11720.pdf) (Duddu et al., 2019)
+- [**DeepSniffer: A DNN Model Extraction Framework Based on Learning Architectural Hints**](https://dl.acm.org/doi/pdf/10.1145/3373376.3378460) (Hu et al., 2020)

From ab8f28f9ff3419663f7ee37614e707ddb36fd9dd Mon Sep 17 00:00:00 2001
From: Suha Sabi Hussain
Date: Thu, 16 Jul 2020 23:31:05 -0400
Subject: [PATCH 09/15] Added metrics and tools sections

---
 README.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/README.md b/README.md
index 9f82a4e..6d664b4 100644
--- a/README.md
+++ b/README.md
@@ -84,3 +84,9 @@ Reconstruction attacks cover also attacks known as *model inversion* and *attrib
 - [**Extraction of Complex DNN Models: Real Threat or Boogeyman?**](https://arxiv.org/pdf/1910.05429.pdf) (Atli et al., 2020)
 - [**Stealing Neural Networks via Timing Side Channels**](https://arxiv.org/pdf/1812.11720.pdf) (Duddu et al., 2019)
 - [**DeepSniffer: A DNN Model Extraction Framework Based on Learning Architectural Hints**](https://dl.acm.org/doi/pdf/10.1145/3373376.3378460) (Hu et al., 2020)
+
+## Privacy Metrics and Estimations
+- [**Estimating g-Leakage via Machine Learning**](https://arxiv.org/abs/2005.04399) (Romanelli et al., 2020)
+
+# Tools
+Coming Soon

From b0cc1a5f09e2b2f7a885437f05a59c4782247953 Mon Sep 17 00:00:00 2001
From: Suha Sabi Hussain
Date: Thu, 16 Jul 2020 23:32:50 -0400
Subject: [PATCH 10/15] Add 1 privacy metric paper

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index 6d664b4..c63c04b 100644
--- a/README.md
+++ b/README.md
@@ -87,6 +87,7 @@ Reconstruction attacks cover also attacks known as *model inversion* and *attrib
 
 ## Privacy Metrics and Estimations
 - [**Estimating g-Leakage via Machine Learning**](https://arxiv.org/abs/2005.04399) (Romanelli et al., 2020)
+- [**Information-Theoretic Bounds on the Generalization Error and Privacy Leakage in Federated Learning**](https://arxiv.org/abs/2005.02503) (Yagli et al., 2020)
 
 # Tools
 Coming Soon

From 840d51e9b2c6891ccdba92bf914e534b354a0ce6 Mon Sep 17 00:00:00 2001
From: Suha Sabi Hussain
Date: Thu, 16 Jul 2020 23:51:29 -0400
Subject: [PATCH 11/15] Add 1 reconstruction paper

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index c63c04b..01e27db 100644
--- a/README.md
+++ b/README.md
@@ -58,6 +58,7 @@ Reconstruction attacks cover also attacks known as *model inversion* and *attrib
 - [**Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning**](https://arxiv.org/pdf/1812.00535.pdf) (Wang et al., 2018)
 - [**Exploring Image Reconstruction Attack in Deep Learning Computation Offloading**](https://dl.acm.org/doi/pdf/10.1145/3325413.3329791) (Oh and Lee, 2019)
 - [**I Know What You See: Power Side-Channel Attack on Convolutional Neural Network Accelerators**](https://arxiv.org/pdf/1803.05847.pdf) (Wei et al., 2019)
+- [**Illuminating the Dark or how to recover what should not be seen in FE-based classifiers**](https://eprint.iacr.org/2018/1001) (Carpov et al., 2020)
 
 ## Property inference
 - [**Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers**](https://dl.acm.org/doi/10.1504/IJSN.2015.071829) (Ateniese et al., 2015)

From 07b29f0d224aea2aa89b324c7aa723564a426a30 Mon Sep 17 00:00:00 2001
From: Suha Sabi Hussain
Date: Thu, 16 Jul 2020 23:52:59 -0400
Subject: [PATCH 12/15] Briefly remove estimation and tools

---
 README.md | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/README.md b/README.md
index 01e27db..3b287aa 100644
--- a/README.md
+++ b/README.md
@@ -85,10 +85,3 @@ Reconstruction attacks cover also attacks known as *model inversion* and *attrib
 - [**Extraction of Complex DNN Models: Real Threat or Boogeyman?**](https://arxiv.org/pdf/1910.05429.pdf) (Atli et al., 2020)
 - [**Stealing Neural Networks via Timing Side Channels**](https://arxiv.org/pdf/1812.11720.pdf) (Duddu et al., 2019)
 - [**DeepSniffer: A DNN Model Extraction Framework Based on Learning Architectural Hints**](https://dl.acm.org/doi/pdf/10.1145/3373376.3378460) (Hu et al., 2020)
-
-## Privacy Metrics and Estimations
-- [**Estimating g-Leakage via Machine Learning**](https://arxiv.org/abs/2005.04399) (Romanelli et al., 2020)
-- [**Information-Theoretic Bounds on the Generalization Error and Privacy Leakage in Federated Learning**](https://arxiv.org/abs/2005.02503) (Yagli et al., 2020)
-
-# Tools
-Coming Soon

From 13d84bb8d6000eebe8be6488941812ea3dac268f Mon Sep 17 00:00:00 2001
From: Suha Sabi Hussain
Date: Fri, 17 Jul 2020 20:46:48 -0400
Subject: [PATCH 13/15] Remove extra code link

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 3b287aa..5e293d5 100644
--- a/README.md
+++ b/README.md
@@ -67,7 +67,7 @@ Reconstruction attacks cover also attacks known as *model inversion* and *attrib
 - [**Overlearning Reveals Sensitive Attributes**](https://openreview.net/pdf?id=SJeNz04tDS) (Song C. et al., 2020) ([code](https://drive.google.com/file/d/1hu0PhN3pWXe6LobxiPFeYBm8L-vQX2zJ/view?usp=sharing))
 
 ## Model extraction
-- [**Stealing machine learning models via prediction apis**](https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_tramer.pdf) (Tramèr et al., 2016) ([code](https://github.com/ftramer/Steal-ML)) ([code](https://github.com/ftramer/Steal-ML))
+- [**Stealing machine learning models via prediction apis**](https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_tramer.pdf) (Tramèr et al., 2016) ([code](https://github.com/ftramer/Steal-ML))
 - [**Stealing hyperparameters in machine learning**](https://ieeexplore.ieee.org/iel7/8418581/8418583/08418595.pdf)(Wang B. et al., 2018)
 - [**Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data**](https://ieeexplore.ieee.org/document/8489592) (Correia-Silva et al., 2018) ([code](https://github.com/jeiks/Stealing_DL_Models))
 - [**Towards reverse-engineering black-box neural networks.**](https://openreview.net/forum?id=BydjJte0-)(Oh et al., 2018) ([code](https://github.com/coallaoh/WhitenBlackBox))

From 8e62d3df8b2cba3ce687a544125fe182fee38124 Mon Sep 17 00:00:00 2001
From: Suha Sabi Hussain
Date: Fri, 17 Jul 2020 20:47:39 -0400
Subject: [PATCH 14/15] Punctuation

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 5e293d5..9727248 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
 # Awesome Attacks on Machine Learning Privacy [![Awesome](https://awesome.re/badge.svg)](https://awesome.re)
-This repository contains a curated list of papers related to privacy attacks against machine learning. A code repository is provided when available by the authors. For corrections, suggestions or missing papers, please either open an issue or submit a pull request.
+This repository contains a curated list of papers related to privacy attacks against machine learning. A code repository is provided when available by the authors. For corrections, suggestions, or missing papers, please either open an issue or submit a pull request.
 
 # Surveys and Overviews
 - [**A Survey of Privacy Attacks in Machine Learning**](https://arxiv.org/abs/2007.07646) (Rigaki and Garcia, 2020)

From ef504ce38d2e9239a891f403d0837440fa3d4c09 Mon Sep 17 00:00:00 2001
From: Suha Sabi Hussain
Date: Fri, 17 Jul 2020 20:51:20 -0400
Subject: [PATCH 15/15] Include space before reference

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 9727248..46f2ffb 100644
--- a/README.md
+++ b/README.md
@@ -68,7 +68,7 @@ Reconstruction attacks cover also attacks known as *model inversion* and *attrib
 
 ## Model extraction
 - [**Stealing machine learning models via prediction apis**](https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_tramer.pdf) (Tramèr et al., 2016) ([code](https://github.com/ftramer/Steal-ML))
-- [**Stealing hyperparameters in machine learning**](https://ieeexplore.ieee.org/iel7/8418581/8418583/08418595.pdf)(Wang B. et al., 2018)
+- [**Stealing hyperparameters in machine learning**](https://ieeexplore.ieee.org/iel7/8418581/8418583/08418595.pdf) (Wang B. et al., 2018)
 - [**Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data**](https://ieeexplore.ieee.org/document/8489592) (Correia-Silva et al., 2018) ([code](https://github.com/jeiks/Stealing_DL_Models))
 - [**Towards reverse-engineering black-box neural networks.**](https://openreview.net/forum?id=BydjJte0-)(Oh et al., 2018) ([code](https://github.com/coallaoh/WhitenBlackBox))
 - [**Knockoff nets: Stealing functionality of black-box models**](http://openaccess.thecvf.com/content_CVPR_2019/papers/Orekondy_Knockoff_Nets_Stealing_Functionality_of_Black-Box_Models_CVPR_2019_paper.pdf) (Orekondy et al., 2019) ([code](https://github.com/tribhuvanesh/knockoffnets))