From be0385d8cf5a9c106d4e410b2d5752f28454940f Mon Sep 17 00:00:00 2001
From: Luca Demetrio
Date: Fri, 8 Aug 2025 09:57:15 +0200
Subject: [PATCH 1/3] Update AITG-MOD-01_Testing_for_Evasion_Attacks.md

Update AI security testing tools by adding the difference between general-purpose and domain-specific libraries
---
 ...AITG-MOD-01_Testing_for_Evasion_Attacks.md | 33 +++++++++++++++----
 1 file changed, 26 insertions(+), 7 deletions(-)

diff --git a/Document/content/tests/AITG-MOD-01_Testing_for_Evasion_Attacks.md b/Document/content/tests/AITG-MOD-01_Testing_for_Evasion_Attacks.md
index ae55dc3..6c3eb62 100644
--- a/Document/content/tests/AITG-MOD-01_Testing_for_Evasion_Attacks.md
+++ b/Document/content/tests/AITG-MOD-01_Testing_for_Evasion_Attacks.md
@@ -30,18 +30,37 @@ AI-generated outputs must:
 - Regularly evaluate models using adversarial robustness tools to proactively detect and mitigate vulnerabilities.
 - Continuously update and refine input validation and sanitization strategies to counter evolving adversarial techniques.
 
-#### Suggested Tools for this Specific Test
-- **Adversarial Robustness Toolbox (ART)**
-  - Framework for adversarial attack generation, detection, and mitigation for AI models.
-  - Tool Link: [Adversarial Robustness Toolbox](https://github.com/Trusted-AI/adversarial-robustness-toolbox)
+#### Suggested Tools for this Specific Test
+AI security testing tools can be divided into *general-purpose* tools, which can be used to test a variety of adversarial attacks in the image domain or at the feature level of any model, and *domain-specific* tools, which enable security testing directly on the input source; *jack-of-all-trades* frameworks cover both.
+##### General-purpose tools
+- **Foolbox**
+  - Tool for creating adversarial examples and evaluating model robustness, compatible with PyTorch, TensorFlow, and JAX.
+  - Tool Link: [Foolbox on GitHub](https://github.com/bethgelab/foolbox)
+- TODO SECML
+- TODO ADVLIB
+
+##### Domain-specific tools
+- **Maltorch**
+  - Python library for running security evaluations against Windows malware detectors implemented in PyTorch. It covers most of the attacks proposed in the literature and ships pre-trained models that can be used to test them.
+  - Tool Link: [Maltorch on GitHub](https://github.com/zangobot/maltorch)
+- **Waf-a-MoLE**
+  - Python library for generating adversarial SQL injection payloads that evade Web Application Firewalls.
+  - Tool Link: [Waf-a-MoLE on GitHub](https://github.com/AvalZ/WAF-A-MoLE)
 - **TextAttack**
   - Python framework specifically designed to evaluate and enhance the adversarial robustness of NLP models.
   - Tool Link: [TextAttack on GitHub](https://github.com/QData/TextAttack)
-- **Foolbox**
-  - Tool for creating adversarial examples and evaluating model robustness, compatible with PyTorch, TensorFlow, and JAX.
-  - Tool Link: [Foolbox on GitHub](https://github.com/bethgelab/foolbox)
+##### Jack-of-all-trades
+- **Adversarial Robustness Toolbox (ART)**
+  - Framework for adversarial attack generation, detection, and mitigation for AI models.
+  - Tool Link: [Adversarial Robustness Toolbox](https://github.com/Trusted-AI/adversarial-robustness-toolbox)
+
+##### Outdated libraries
+We also list some libraries that were used years ago but are now inactive, unmaintained, and likely buggy.
+- **CleverHans**
+- Library for computing adversarial evasion attacks against models deployed in PyTorch, TensorFlow/Keras, and JAX.
+- Tool Link: [CleverHans on GitHub](https://github.com/cleverhans-lab/cleverhans)
 - **DeepSec**
   - Security evaluation toolkit focused on deep learning models for adversarial example detection and defense.

From 0ed6bb99ad1915a0e849d4cdd06d2f0b80f79cc1 Mon Sep 17 00:00:00 2001
From: maurapintor
Date: Fri, 8 Aug 2025 10:16:15 +0200
Subject: [PATCH 2/3] added secml-torch and adv-lib, updated description of deepsec
---
 .../tests/AITG-MOD-01_Testing_for_Evasion_Attacks.md | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/Document/content/tests/AITG-MOD-01_Testing_for_Evasion_Attacks.md b/Document/content/tests/AITG-MOD-01_Testing_for_Evasion_Attacks.md
index 6c3eb62..25f39bc 100644
--- a/Document/content/tests/AITG-MOD-01_Testing_for_Evasion_Attacks.md
+++ b/Document/content/tests/AITG-MOD-01_Testing_for_Evasion_Attacks.md
@@ -34,11 +34,15 @@ AI security testing tools can be divided into *general-purpose* tools, which can be used to test a variety of adversarial attacks in the image domain or at the feature level of any model, and *domain-specific* tools, which enable security testing directly on the input source; *jack-of-all-trades* frameworks cover both.
 ##### General-purpose tools
+- **Adversarial Library**
+  - A library of adversarial attack resources in PyTorch. It contains highly efficient implementations of several state-of-the-art attacks, at the expense of a less object-oriented structure.
+  - Tool Link: [Adversarial Library on GitHub](https://github.com/jeromerony/adversarial-library)
 - **Foolbox**
   - Tool for creating adversarial examples and evaluating model robustness, compatible with PyTorch, TensorFlow, and JAX.
   - Tool Link: [Foolbox on GitHub](https://github.com/bethgelab/foolbox)
-- TODO SECML
-- TODO ADVLIB
+- **SecML-Torch**
+  - Tool for evaluating the adversarial robustness of deep learning models. Built on PyTorch, it includes debugging functionality and interfaces for customizing attacks and conducting trustworthy security evaluations.
+  - Tool Link: [SecML-Torch on GitHub](https://github.com/pralab/secml-torch)
 
 ##### Domain-specific tools
 - **Maltorch**
@@ -62,8 +66,8 @@ We also list some libraries that were used years ago but are now inactive, unmai
 - Library for computing adversarial evasion attacks against models deployed in PyTorch, TensorFlow/Keras, and JAX.
 - Tool Link: [CleverHans on GitHub](https://github.com/cleverhans-lab/cleverhans)
-- **DeepSec**
-  - Security evaluation toolkit focused on deep learning models for adversarial example detection and defense.
+- **DeepSec** (BUGGY)
+  - Security evaluation toolkit focused on deep learning models for adversarial example detection and defense. It has been strongly criticized as buggy, as shown by the still-open [issues](https://github.com/ryderling/DEEPSEC/issues).
   - Tool Link: [DeepSec on GitHub](https://github.com/ryderling/DEEPSEC)
 
 #### References

From 0749eeda553f292ad1c19051d0ee08c79a3cdf67 Mon Sep 17 00:00:00 2001
From: Luca Demetrio
Date: Tue, 2 Sep 2025 11:21:23 +0200
Subject: [PATCH 3/3] Update AITG-MOD-01_Testing_for_Evasion_Attacks.md

Removed typo
---
 .../content/tests/AITG-MOD-01_Testing_for_Evasion_Attacks.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Document/content/tests/AITG-MOD-01_Testing_for_Evasion_Attacks.md b/Document/content/tests/AITG-MOD-01_Testing_for_Evasion_Attacks.md
index 25f39bc..81a1088 100644
--- a/Document/content/tests/AITG-MOD-01_Testing_for_Evasion_Attacks.md
+++ b/Document/content/tests/AITG-MOD-01_Testing_for_Evasion_Attacks.md
@@ -63,8 +63,8 @@ AI security testing tools can be divided into *general-purpose* tools, which can
 ##### Outdated libraries
 We also list some libraries that were used years ago but are now inactive, unmaintained, and likely buggy.
 - **CleverHans**
-- Library for computing adversarial evasion attacks against models deployed in PyTorch, TensorFlow/Keras, and JAX.
-- Tool Link: [CleverHans on GitHub](https://github.com/cleverhans-lab/cleverhans)
+  - Library for computing adversarial evasion attacks against models deployed in PyTorch, TensorFlow/Keras, and JAX.
+  - Tool Link: [CleverHans on GitHub](https://github.com/cleverhans-lab/cleverhans)
 - **DeepSec** (BUGGY)
   - Security evaluation toolkit focused on deep learning models for adversarial example detection and defense. It has been strongly criticized as buggy, as shown by the still-open [issues](https://github.com/ryderling/DEEPSEC/issues).
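
The general-purpose tools added in these patches share the same evaluation loop: wrap the trained model, run a gradient-based evasion attack such as PGD at increasing perturbation budgets, and report the robust accuracy at each budget. Below is a minimal sketch of that loop with Foolbox, assuming Foolbox 3.x with PyTorch and torchvision installed; the ResNet-18 model, the bundled sample data, and the epsilon values are illustrative choices, not taken from the guide itself.

```python
# Hedged sketch (assumes foolbox>=3, torch, and torchvision are available).
import foolbox as fb
import torchvision.models as models

# Wrap a pretrained ImageNet classifier; bounds and preprocessing must
# match the input format the model was trained on.
model = models.resnet18(weights="IMAGENET1K_V1").eval()
preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

# Small batch of labeled samples shipped with Foolbox, for demonstration only.
images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=16)
print("clean accuracy:", fb.utils.accuracy(fmodel, images, labels))

# L-infinity PGD evasion attack, evaluated at several perturbation budgets.
attack = fb.attacks.LinfPGD()
epsilons = [0.001, 0.01, 0.03, 0.1]
_, _, success = attack(fmodel, images, labels, epsilons=epsilons)

# Robust accuracy per budget: fraction of samples the attack did NOT evade.
for eps, adv in zip(epsilons, success):
    print(f"eps={eps}: robust accuracy = {1.0 - adv.float().mean().item():.2f}")
```

The same structure broadly carries over to the other general-purpose libraries such as SecML-Torch and ART: only the model wrapper and the attack class change, so a robustness evaluation scripted this way can be re-run as part of the regular adversarial evaluations the guide recommends.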