Refactor threat and testing focus sections

This commit is contained in:
Matteo Meucci
2025-11-13 19:16:52 +01:00
committed by GitHub
parent 0559169c01
commit eeaa84828d

View File

@@ -92,62 +92,62 @@ For broader security assurance across networks, infrastructure, and traditional
* Scope: Front-end UX, APIs, agents/plugins, user-AI interactions
* Threats:
* Prompt injection (LLM01)
* Output handling (LLM05)
* Excessive agency (LLM06)
* Misinformation (LLM09)
* Automation bias
* Hallucinations
* Toxic content
* Explainability gaps
- Prompt injection (LLM01)
- Output handling (LLM05)
- Excessive agency (LLM06)
- Misinformation (LLM09)
- Automation bias
- Hallucinations
- Toxic content
- Explainability gaps
* Testing Focus:
* Behavior consistency
* Ethical content validation
* User interface abuse (e.g., phishing via AI)
* Interpretability & transparency evaluation
- Behavior consistency
- Ethical content validation
- User interface abuse (e.g., phishing via AI)
- Interpretability & transparency evaluation
### **2\. AI Model Testing**
* Scope: Model training, fine-tuning, inference behavior
* Threats:
* Model & data poisoning (LLM04)
* Inversion/inference attacks
* Bias/discrimination
* Model exfiltration
* Overfitting / generalization issues
* Explainability & fairness gaps
- Model & data poisoning (LLM04)
- Inversion/inference attacks
- Bias/discrimination
- Model exfiltration
- Overfitting / generalization issues
- Explainability & fairness gaps
* Testing Focus:
* Adversarial robustness
* Fairness auditing
* Membership inference testing
* Alignment and behavior under edge cases
- Adversarial robustness
- Fairness auditing
- Membership inference testing
- Alignment and behavior under edge cases
### **3\. AI Infrastructure Testing**
* Scope: Hosting, serving, orchestration, APIs, plugin permissions
* Threats:
* System prompt leakage (LLM07)
* Resource abuse (LLM10)
* Supply chain poisoning (LLM03)
* Unauthorized API control
* Insecure agent capabilities
- System prompt leakage (LLM07)
- Resource abuse (LLM10)
- Supply chain poisoning (LLM03)
- Unauthorized API control
- Insecure agent capabilities
* Testing Focus:
* Least privilege enforcement
* Resource sandboxing
* Plugin/agent boundary testing
* Environment security (CI/CD, containers)
- Least privilege enforcement
- Resource sandboxing
- Plugin/agent boundary testing
- Environment security (CI/CD, containers)
### **4\. AI Data Testing**
* Scope: Data collection, curation, storage, labeling, filtering
* Threats:
** Data poisoning (LLM04)
** Training data leaks
** Toxic/unrepresentative data
** Bias introduced during preprocessing
** Mislabeling or filtering inconsistencies
- Data poisoning (LLM04)
- Training data leaks
- Toxic/unrepresentative data
- Bias introduced during preprocessing
- Mislabeling or filtering inconsistencies
* Testing Focus:
* Dataset integrity & labeling accuracy
* Bias and diversity analysis
* Data provenance validation
* Filtering robustness (toxicity, duplication)
- Dataset integrity & labeling accuracy
- Bias and diversity analysis
- Data provenance validation
- Filtering robustness (toxicity, duplication)