Update AI_MODEL_TESTING.md

Commit 37b2f28bae by Matteo Meucci, committed via GitHub
Date: 2025-06-18 11:32:00 +02:00
Parent: 0ab1cc2bee

@@ -10,23 +10,23 @@ Testing at the model level helps detect fundamental weaknesses such as susceptib
 This category evaluates whether the AI model:
 - Is robust and resilient against **adversarial evasion attacks**
-  → [AITG-MOD-01: Testing for Evasion Attacks](https://github.com/MatOwasp/AI-Testing-Guide/blob/main/Document/content/tests/AITG-MOD-01_Testing_for_Evasion_Attacks.md)
+  → [AITG-MOD-01: Testing for Evasion Attacks](/Document/content/tests/AITG-MOD-01_Testing_for_Evasion_Attacks.md)
 - Protects effectively against **runtime model poisoning**
-  → [AITG-MOD-02: Testing for Runtime Model Poisoning](https://github.com/MatOwasp/AI-Testing-Guide/blob/main/Document/content/tests/AITG-MOD-02_Testing_for_Runtime_Model_Poisoning.md)
+  → [AITG-MOD-02: Testing for Runtime Model Poisoning](/Document/content/tests/AITG-MOD-02_Testing_for_Runtime_Model_Poisoning.md)
 - Is resistant to **training-time poisoning attacks**
-  → [AITG-MOD-03: Testing for Poisoned Training Sets](https://github.com/MatOwasp/AI-Testing-Guide/blob/main/Document/content/tests/AITG-MOD-03_Testing_for_Poisoned_Training_Sets.md)
+  → [AITG-MOD-03: Testing for Poisoned Training Sets](/Document/content/tests/AITG-MOD-03_Testing_for_Poisoned_Training_Sets.md)
 - Preserves **data privacy** against inference and inversion attacks
-  → [AITG-MOD-04: Testing for Membership Inference](https://github.com/MatOwasp/AI-Testing-Guide/blob/main/Document/content/tests/AITG-MOD-04_Testing_for_Membership_Inference.md)
-  → [AITG-MOD-05: Testing for Inversion Attacks](https://github.com/MatOwasp/AI-Testing-Guide/blob/main/Document/content/tests/AITG-MOD-05_Testing_for_Inversion_Attacks.md)
+  → [AITG-MOD-04: Testing for Membership Inference](/Document/content/tests/AITG-MOD-04_Testing_for_Membership_Inference.md)
+  → [AITG-MOD-05: Testing for Inversion Attacks](/Document/content/tests/AITG-MOD-05_Testing_for_Inversion_Attacks.md)
 - Maintains **robustness when presented with new or adversarial data**
-  → [AITG-MOD-06: Testing for Robustness to New Data](https://github.com/MatOwasp/AI-Testing-Guide/blob/main/Document/content/tests/AITG-MOD-06_Testing_for_Robustness_to_New_Data.md)
+  → [AITG-MOD-06: Testing for Robustness to New Data](/Document/content/tests/AITG-MOD-06_Testing_for_Robustness_to_New_Data.md)
 - Remains consistently **aligned with predefined goals and constraints**
-  → [AITG-MOD-07: Testing for Goal Alignment](https://github.com/MatOwasp/AI-Testing-Guide/blob/main/Document/content/tests/AITG-MOD-07_Testing_for_Goal_Alignment.md)
+  → [AITG-MOD-07: Testing for Goal Alignment](/Document/content/tests/AITG-MOD-07_Testing_for_Goal_Alignment.md)

 Each test within the AI Model Testing category helps ensure the fundamental resilience, reliability, and safety of AI models, reducing systemic risk across all deployments and applications.
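
As an illustration of the kind of model-level check the first category (evasion testing, AITG-MOD-01) targets, here is a minimal FGSM-style probe against a toy linear classifier. Everything in this sketch — the model, its weights, and the epsilon budget — is a hypothetical assumption for demonstration, not taken from the AI Testing Guide itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier (hypothetical): p(y=1 | x) = sigmoid(w.x + b)
w = np.array([2.0, -1.0])
b = 0.1

def predict(x):
    return sigmoid(w @ x + b)

def fgsm_perturb(x, y_true, epsilon):
    """One FGSM step: nudge x in the direction that increases the loss.
    For logistic loss on a linear model, dL/dx = (p - y) * w."""
    grad = (predict(x) - y_true) * w
    return x + epsilon * np.sign(grad)

x = np.array([0.5, 0.2])   # clean input, classified as class 1
y = 1.0
x_adv = fgsm_perturb(x, y, epsilon=0.6)

clean_pred = predict(x) > 0.5      # True: correct on clean input
adv_pred = predict(x_adv) > 0.5    # False: flipped by the perturbation
print(clean_pred, adv_pred)
```

A real evasion test would sweep the perturbation budget over a held-out set and report the attack success rate; a model that flips under small, bounded perturbations like this one fails the robustness criterion the category describes.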