From 9e851d95b5d585d3da186abad3cd78424c2f8db9 Mon Sep 17 00:00:00 2001
From: Zhanke Zhou
Date: Mon, 18 Nov 2024 14:34:15 +0800
Subject: [PATCH] Update README.md

---
 README.md | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index d2bf9c8..b6ef39d 100644
--- a/README.md
+++ b/README.md
@@ -8,9 +8,13 @@ A curated list of resources for model inversion attack (MIA).
 
 Please star or watch this repository to keep tracking the latest updates! Contributions are welcome!
 
+## NEWS
+
+- **[Nov/2024]** We release a comprehensive survey of model inversion attacks. Check the [paper](https://arxiv.org/pdf/2411.10023).
+
+
 
 **Outlines:**
-
 - [What is the model inversion attack?](#what-is-the-model-inversion-attack)
 - [Survey](#survey)
 - [Computer vision domain](#computer-vision-domain)
@@ -20,6 +24,7 @@ Please star or watch this repository to keep tracking the latest updates! Contri
 - [Others](#others)
 - [Related repositories](#related-repositories)
 
+
 ## What is the model inversion attack?
 
 A model inversion attack is a privacy attack where the attacker is able to reconstruct the original samples that were used to train the synthetic model from the generated synthetic data set. (Mostly.ai)