Merge pull request #32 from zangobot/main
Include more testing tools, dividing them between general-purpose and domain-specific
@@ -30,21 +30,44 @@ AI-generated outputs must:
- Regularly evaluate models using adversarial robustness tools to proactively detect and mitigate vulnerabilities.
- Continuously update and refine input validation and sanitization strategies to counter evolving adversarial techniques.
#### Suggested Tools for this Specific Test
AI security testing tools can be divided into *general-purpose* tools, which can mount a wide variety of adversarial attacks in the image domain or at the feature level of any model, and *domain-specific* tools, which enable security testing directly on the input source (for example, Windows binaries, SQL queries, or natural-language text).
##### General-purpose tools
- **Adversarial Library**
- A powerful PyTorch library of adversarial attack resources. It contains highly efficient implementations of several state-of-the-art attacks, at the expense of a less OOP-structured interface.
- Tool Link: [Adversarial Library on GitHub](https://github.com/jeromerony/adversarial-library)
- **Foolbox**
- Tool for creating adversarial examples and evaluating model robustness, compatible with PyTorch, TensorFlow, and JAX (see the sketch after this list).
- Tool Link: [Foolbox on GitHub](https://github.com/bethgelab/foolbox)
- **SecML-Torch**
- Tool for evaluating the adversarial robustness of deep learning models. Built on PyTorch, it includes debugging functionality and interfaces for customizing attacks and conducting trustworthy security evaluations.
- Tool Link: [SecML-Torch on GitHub](https://github.com/pralab/secml-torch)
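As an illustration of how these general-purpose tools are driven, below is a minimal Foolbox sketch, assuming a pre-trained torchvision ResNet-18 and a small ImageNet sample batch; the attack choice (L-infinity PGD) and the epsilon budget are illustrative placeholders.

```python
# Minimal Foolbox sketch: L-infinity PGD against a pre-trained classifier.
# The model, sample batch, and epsilon budget below are assumptions;
# substitute your own model and evaluation data.
import foolbox as fb
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()
preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

# Fetch a small batch of bundled sample images with their labels.
images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=8)

# Run an L-infinity PGD attack at a fixed perturbation budget.
attack = fb.attacks.LinfPGD()
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=8 / 255)

# Robust accuracy = fraction of samples the attack failed to flip.
print(f"Robust accuracy: {1 - is_adv.float().mean().item():.2%}")
```

Evaluating over several `epsilons` values yields a robustness curve rather than a single point, which is usually more informative for a security sign-off.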
##### Domain-specific tools
- **Maltorch**
- Python library for running security evaluations against Windows malware detectors implemented in PyTorch. It covers most of the attacks proposed in the literature and ships pre-trained models that can be used to test them.
- Tool Link: [Maltorch on GitHub](https://github.com/zangobot/maltorch)
- **Waf-a-MoLE**
- Python library for crafting adversarial SQL injection payloads that evade Web Application Firewalls.
- Tool Link: [Waf-a-MoLE on GitHub](https://github.com/AvalZ/WAF-A-MoLE)
- **TextAttack**
- Python framework specifically designed to evaluate and enhance the adversarial robustness of NLP models (see the sketch after this list).
- Tool Link: [TextAttack on GitHub](https://github.com/QData/TextAttack)
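To make the domain-specific workflow concrete, here is a minimal TextAttack sketch running the TextFooler recipe against a sentiment classifier; the checkpoint and dataset names are illustrative assumptions, and any Hugging Face sequence-classification model can be wrapped the same way.

```python
# Minimal TextAttack sketch: TextFooler word-substitution attack on a
# sentiment model. The checkpoint and dataset names are assumptions;
# swap in the model you actually want to test.
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "textattack/bert-base-uncased-SST-2"  # example checkpoint
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Build the TextFooler attack and run it on a handful of validation examples.
attack = TextFoolerJin2019.build(wrapper)
dataset = HuggingFaceDataset("glue", "sst2", split="validation")
attacker = Attacker(attack, dataset, AttackArgs(num_examples=10))
results = attacker.attack_dataset()
```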
##### Jack-of-all-trades
- **Adversarial Robustness Toolbox (ART)**
- Framework for adversarial attack generation, detection, and mitigation for AI models (see the sketch below).
- Tool Link: [Adversarial Robustness Toolbox](https://github.com/Trusted-AI/adversarial-robustness-toolbox)
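Since ART wraps models behind framework-agnostic estimators, the flow is always the same: wrap the model, instantiate an attack, generate adversarial inputs, and re-score. Below is a minimal sketch, assuming a PyTorch classifier over 28x28 grayscale inputs; the toy network and random data are placeholders.

```python
# Minimal ART sketch: FGSM evasion against a PyTorch classifier.
# The toy network and random test data are placeholders; plug in your
# own trained model and real evaluation set.
import numpy as np
import torch.nn as nn
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 28 * 28, 10),
)
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

x_test = np.random.rand(16, 1, 28, 28).astype(np.float32)  # placeholder data
y_test = np.random.randint(0, 10, size=16)                 # placeholder labels

# Craft adversarial examples with FGSM at eps = 0.1 and re-evaluate.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)
preds = classifier.predict(x_adv).argmax(axis=1)
print(f"Accuracy under FGSM (eps=0.1): {(preds == y_test).mean():.2%}")
```

The same wrapped classifier can be handed to any other ART evasion attack (for example, Projected Gradient Descent) without changing the surrounding evaluation code.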
##### Outdated libraries
We also list here some libraries that were widely used years ago but are now inactive, unmaintained, and likely buggy.
- **CleverHans**
- Library for computing adversarial evasion attacks against models implemented in PyTorch, TensorFlow / Keras, and JAX.
- Tool Link: [CleverHans on GitHub](https://github.com/cleverhans-lab/cleverhans)
- **DeepSec** (BUGGED)
- Security evaluation toolkit focused on deep learning models for adversarial example detection and defense. It has been strongly criticized as buggy, as is visible from the (still) open [issues](https://github.com/ryderling/DEEPSEC/issues).
- Tool Link: [DeepSec on GitHub](https://github.com/ryderling/DEEPSEC)
#### References