From 6c1a3b6fa8b8a48e8b6032b9c8a9f57dfbdb4b7a Mon Sep 17 00:00:00 2001
From: Zhanke Zhou <45969108+AndrewZhou924@users.noreply.github.com>
Date: Mon, 31 Oct 2022 21:35:07 +0800
Subject: [PATCH] Update README.md

---
 README.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index a835d68..3d7d1b3 100644
--- a/README.md
+++ b/README.md
@@ -3,9 +3,11 @@
 A curated list of resources for model inversion attack (MIA).
 
 ### What is the model inversion attack?
-In model inversion attacks, a malicious user attempts to recover the private dataset used to train a supervised neural network. A successful model inversion attack should generate realistic and diverse samples that accurately describe each of the classes in the private dataset.
-(by Variational Model Inversion Attacks, Wang et al, 2021.)
+The goal of model inversion attacks is to recreate training data or sensitive attributes.
+(Chen et al, 2021.)
+
+In model inversion attacks, a malicious user attempts to recover the private dataset used to train a supervised neural network. A successful model inversion attack should generate realistic and diverse samples that accurately describe each of the classes in the private dataset. (Wang et al, 2021.)
 
 ### Survey
 Arxiv 2021 - A Survey of Privacy Attacks in Machine Learning.
 