fuzzforge_ai/backend/toolbox/modules/analyzer/security_analyzer.py
tduhamel42 ec812461d6 CI/CD Integration with Ephemeral Deployment Model (#14)
* feat: Complete migration from Prefect to Temporal

BREAKING CHANGE: Replaces Prefect workflow orchestration with Temporal

## Major Changes
- Replace Prefect with Temporal for workflow orchestration
- Implement vertical worker architecture (rust, android)
- Replace Docker registry with MinIO for unified storage
- Refactor activities to be co-located with workflows
- Update all API endpoints for Temporal compatibility

## Infrastructure
- New: docker-compose.temporal.yaml (Temporal + MinIO + workers)
- New: workers/ directory with rust and android vertical workers
- New: backend/src/temporal/ (manager, discovery)
- New: backend/src/storage/ (S3-cached storage with MinIO)
- New: backend/toolbox/common/ (shared storage activities)
- Deleted: docker-compose.yaml (old Prefect setup)
- Deleted: backend/src/core/prefect_manager.py
- Deleted: backend/src/services/prefect_stats_monitor.py
- Deleted: Docker registry and insecure-registries requirement

## Workflows
- Migrated: security_assessment workflow to Temporal
- New: rust_test workflow (example/test workflow)
- Deleted: secret_detection_scan (Prefect-based, to be reimplemented)
- Activities now co-located with workflows for independent testing
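
As a rough illustration of the co-location pattern (the workflow, activity, and timeout below are illustrative, not the actual FuzzForge code), a Temporal workflow and its activity can live in the same module, so the activity is unit-testable without a server:

```python
from datetime import timedelta
from temporalio import activity, workflow


@activity.defn
async def scan_target(target_id: str) -> list:
    # Hypothetical activity; the real FuzzForge activities do storage
    # and scanning work. Kept trivial for the sketch.
    return [f"finding-for-{target_id}"]


@workflow.defn
class ExampleAssessmentWorkflow:
    @workflow.run
    async def run(self, target_id: str) -> list:
        # The workflow only orchestrates; the co-located activity does
        # the work and can be tested independently.
        return await workflow.execute_activity(
            scan_target,
            target_id,
            start_to_close_timeout=timedelta(minutes=10),
        )
```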

## API Changes
- Updated: backend/src/api/workflows.py (Temporal submission)
- Updated: backend/src/api/runs.py (Temporal status/results)
- Updated: backend/src/main.py (727 lines, TemporalManager integration)
- Updated: All 16 MCP tools to use TemporalManager

## Testing
- All services healthy (Temporal, PostgreSQL, MinIO, workers, backend)
- All API endpoints functional
- End-to-end workflow test passed (72 findings from vulnerable_app)
- MinIO storage integration working (target upload/download, results)
- Worker activity discovery working (6 activities registered)
- Tarball extraction working
- SARIF report generation working

## Documentation
- ARCHITECTURE.md: Complete Temporal architecture documentation
- QUICKSTART_TEMPORAL.md: Getting started guide
- MIGRATION_DECISION.md: Why we chose Temporal over Prefect
- IMPLEMENTATION_STATUS.md: Migration progress tracking
- workers/README.md: Worker development guide

## Dependencies
- Added: temporalio>=1.6.0
- Added: boto3>=1.34.0 (MinIO S3 client)
- Removed: prefect>=3.4.18

* feat: Add Python fuzzing vertical with Atheris integration

This commit implements a complete Python fuzzing workflow using Atheris:

## Python Worker (workers/python/)
- Dockerfile with Python 3.11, Atheris, and build tools
- Generic worker.py for dynamic workflow discovery
- requirements.txt with temporalio, boto3, atheris dependencies
- Added to docker-compose.temporal.yaml with dedicated cache volume

## AtherisFuzzer Module (backend/toolbox/modules/fuzzer/)
- Reusable module extending BaseModule
- Auto-discovers fuzz targets (fuzz_*.py, *_fuzz.py, fuzz_target.py)
- Recursive search to find targets in nested directories
- Dynamically loads the TestOneInput() function (see the sketch after this list)
- Configurable max_iterations and timeout
- Real-time stats callback support for live monitoring
- Returns findings as ModuleFinding objects
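
A minimal sketch of the discovery and dynamic-loading steps (the helper names are illustrative; the actual AtherisFuzzer internals may differ):

```python
import importlib.util
from pathlib import Path
from typing import Callable, Optional

# The three filename patterns named above.
TARGET_GLOBS = ("fuzz_*.py", "*_fuzz.py", "fuzz_target.py")


def discover_fuzz_targets(workspace: Path) -> list:
    """Recursively collect candidate harness files, nested dirs included."""
    found = set()
    for pattern in TARGET_GLOBS:
        found.update(workspace.rglob(pattern))
    return sorted(found)


def load_test_one_input(target: Path) -> Optional[Callable]:
    """Import a harness module dynamically and return its TestOneInput()."""
    spec = importlib.util.spec_from_file_location(target.stem, target)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, "TestOneInput", None)
```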

## Atheris Fuzzing Workflow (backend/toolbox/workflows/atheris_fuzzing/)
- Temporal workflow for orchestrating fuzzing
- Downloads user code from MinIO
- Executes AtherisFuzzer module
- Uploads results to MinIO
- Cleans up cache after execution
- metadata.yaml with vertical: python for routing

## Test Project (test_projects/python_fuzz_waterfall/)
- Demonstrates stateful waterfall vulnerability
- main.py with check_secret() that leaks progress
- fuzz_target.py with Atheris TestOneInput() harness (sketched below)
- Complete README with usage instructions
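
For reference, an Atheris harness of the shape this test project uses might look like the following (the exact check_secret() signature is an assumption):

```python
import sys

import atheris

with atheris.instrument_imports():
    import main  # the target module with check_secret()


def TestOneInput(data: bytes):
    fdp = atheris.FuzzedDataProvider(data)
    guess = fdp.ConsumeUnicodeNoSurrogates(32)
    # check_secret() leaks per-character progress, guiding the fuzzer
    # down the waterfall one character at a time.
    main.check_secret(guess)


if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```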

## Backend Fixes
- Fixed parameter merging in REST API endpoints (workflows.py)
- Changed workflow parameter passing from positional args to kwargs (manager.py)
- Default parameters now properly merged with user parameters
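
The merge semantics are the usual dict-override pattern; a minimal sketch with illustrative parameter names:

```python
# Defaults from metadata.yaml are applied first; user values win.
defaults = {"max_iterations": 100_000, "timeout_seconds": 300}
user_params = {"max_iterations": 10_000}

merged = {**defaults, **user_params}
assert merged == {"max_iterations": 10_000, "timeout_seconds": 300}
```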

## Testing
- Worker discovered AtherisFuzzingWorkflow
- Workflow executed end-to-end successfully
- Fuzz target auto-discovered in nested directories
- Atheris ran 100,000 iterations
- Results uploaded and cache cleaned

* chore: Complete Temporal migration with updated CLI/SDK/docs

This commit includes all remaining Temporal migration changes:

## CLI Updates (cli/)
- Updated workflow execution commands for Temporal
- Enhanced error handling and exceptions
- Updated dependencies in uv.lock

## SDK Updates (sdk/)
- Client methods updated for Temporal workflows
- Updated models for new workflow execution
- Updated dependencies in uv.lock

## Documentation Updates (docs/)
- Architecture documentation for Temporal
- Workflow concept documentation
- Resource management documentation (new)
- Debugging guide (new)
- Updated tutorials and how-to guides
- Troubleshooting updates

## README Updates
- Main README with Temporal instructions
- Backend README
- CLI README
- SDK README

## Other
- Updated IMPLEMENTATION_STATUS.md
- Removed old vulnerable_app.tar.gz

These changes complete the Temporal migration and ensure the
CLI/SDK work correctly with the new backend.

* fix: Use positional args instead of kwargs for Temporal workflows

The Temporal Python SDK's start_workflow() method doesn't accept
a 'kwargs' parameter. Workflows must receive parameters as positional
arguments via the 'args' parameter.

Changed to:
  args=workflow_args  # Positional arguments

This fixes the error:
  TypeError: Client.start_workflow() got an unexpected keyword argument 'kwargs'

Workflows now correctly receive parameters in order:
- security_assessment: [target_id, scanner_config, analyzer_config, reporter_config]
- atheris_fuzzing: [target_id, target_file, max_iterations, timeout_seconds]
- rust_test: [target_id, test_message]
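
A hedged sketch of the corrected submission call using the Temporal Python SDK's `args` parameter (the workflow id and task queue names are illustrative):

```python
from temporalio.client import Client


async def submit(client: Client, target_id: str, test_message: str):
    handle = await client.start_workflow(
        "rust_test",                      # workflow type name
        args=[target_id, test_message],  # positional, in declared order
        id=f"rust-test-{target_id}",
        task_queue="rust-queue",
    )
    return await handle.result()
```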

* fix: Filter metadata-only parameters from workflow arguments

SecurityAssessmentWorkflow was receiving 7 arguments instead of 2-5.
The issue was that target_path and volume_mode from default_parameters
were being passed to the workflow, when they should only be used by
the system for configuration.

Now filters out metadata-only parameters (target_path, volume_mode)
before passing arguments to workflow execution.
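
A minimal sketch of that filter (the helper name is illustrative; the two keys are the ones named above):

```python
METADATA_ONLY_PARAMS = {"target_path", "volume_mode"}


def workflow_arguments(params: dict) -> dict:
    """Drop system-level configuration before building workflow args."""
    return {k: v for k, v in params.items() if k not in METADATA_ONLY_PARAMS}
```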

* refactor: Remove Prefect leftovers and volume mounting legacy

Complete cleanup of Prefect migration artifacts:

Backend:
- Delete registry.py and workflow_discovery.py (Prefect-specific files)
- Remove Docker validation from setup.py (no longer needed)
- Remove ResourceLimits and VolumeMount models
- Remove target_path and volume_mode from WorkflowSubmission
- Remove supported_volume_modes from API and discovery
- Clean up metadata.yaml files (remove volume/path fields)
- Simplify parameter filtering in manager.py

SDK:
- Remove volume_mode parameter from client methods
- Remove ResourceLimits and VolumeMount models
- Remove Prefect error patterns from docker_logs.py
- Clean up WorkflowSubmission and WorkflowMetadata models

CLI:
- Remove Volume Modes display from workflow info

All removed features are Prefect-specific or Docker volume mounting
artifacts. Temporal workflows use MinIO storage exclusively.

* feat: Add comprehensive test suite and benchmark infrastructure

- Add 68 unit tests for fuzzer, scanner, and analyzer modules
- Implement pytest-based test infrastructure with fixtures
- Add 6 performance benchmarks with category-specific thresholds
- Configure GitHub Actions for automated testing and benchmarking
- Add test and benchmark documentation

Test coverage:
- AtherisFuzzer: 8 tests
- CargoFuzzer: 14 tests
- FileScanner: 22 tests
- SecurityAnalyzer: 24 tests

All tests passing (68/68)
All benchmarks passing (6/6)

* fix: Resolve all ruff linting violations across codebase

Fixed 27 ruff violations in 12 files:
- Removed unused imports (Depends, Dict, Any, Optional, etc.)
- Fixed undefined workflow_info variable in workflows.py
- Removed dead code with undefined variables in atheris_fuzzer.py
- Changed f-string to regular string where no placeholders used

All files now pass ruff checks for CI/CD compliance.

* fix: Configure CI for unit tests only

- Renamed docker-compose.temporal.yaml → docker-compose.yml for CI compatibility
- Commented out integration-tests job (no integration tests yet)
- Updated test-summary to only depend on lint and unit-tests

CI will now run successfully with 68 unit tests. Integration tests can be added later.

* feat: Add CI/CD integration with ephemeral deployment model

Implements comprehensive CI/CD support for FuzzForge with on-demand worker management:

**Worker Management (v0.7.0)**
- Add WorkerManager for automatic worker lifecycle control
- Auto-start workers from stopped state when workflows execute
- Auto-stop workers after workflow completion
- Health checks and startup timeout handling (90s default)

**CI/CD Features**
- `--fail-on` flag: Fail builds based on SARIF severity levels (error/warning/note/info); see the gating sketch after this list
- `--export-sarif` flag: Export findings in SARIF 2.1.0 format
- `--auto-start`/`--auto-stop` flags: Control worker lifecycle
- Exit code propagation: Returns 1 on blocking findings, 0 on success
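
The `--fail-on` gate can be pictured as a severity-rank comparison over SARIF results; a sketch under the assumption that findings carry SARIF `level` values (treating "info" as an alias for "note" is also an assumption):

```python
import json
from pathlib import Path

LEVEL_RANK = {"none": 0, "info": 1, "note": 1, "warning": 2, "error": 3}


def should_fail(sarif_path: Path, fail_on: str) -> bool:
    """Return True if any finding is at or above the --fail-on level."""
    report = json.loads(sarif_path.read_text())
    threshold = LEVEL_RANK[fail_on]
    for run in report.get("runs", []):
        for result in run.get("results", []):
            level = result.get("level", "warning")  # SARIF default level
            if LEVEL_RANK.get(level, 0) >= threshold:
                return True
    return False
```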

**Exit Code Fix**
- Add `except typer.Exit: raise` handlers at 3 critical locations
- Move worker cleanup to finally block for guaranteed execution
- Exit codes now propagate correctly even when build fails
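
The pattern these handlers implement, sketched with placeholder helpers (only `typer.Exit` and its re-raise are taken from the fix itself):

```python
import typer


def execute_and_evaluate(fail_on: str) -> bool:
    """Placeholder for running the workflow and gating findings."""
    return False


def stop_workers() -> None:
    """Placeholder for the auto-stop worker cleanup."""


def run_command(fail_on: str = "error") -> None:
    try:
        if execute_and_evaluate(fail_on):
            raise typer.Exit(code=1)  # blocking findings fail the build
    except typer.Exit:
        raise  # re-raise so a broad handler cannot swallow the exit code
    except Exception as exc:
        typer.echo(f"Workflow failed: {exc}", err=True)
        raise typer.Exit(code=1)
    finally:
        stop_workers()  # cleanup runs even when the build fails
```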

**CI Scripts & Examples**
- ci-start.sh: Start FuzzForge services with health checks
- ci-stop.sh: Clean shutdown with volume preservation option
- GitHub Actions workflow example (security-scan.yml)
- GitLab CI pipeline example (.gitlab-ci.example.yml)
- docker-compose.ci.yml: CI-optimized compose file with profiles

**OSS-Fuzz Integration**
- New ossfuzz_campaign workflow for running OSS-Fuzz projects
- OSS-Fuzz worker with Docker-in-Docker support
- Configurable campaign duration and project selection

**Documentation**
- Comprehensive CI/CD integration guide (docs/how-to/cicd-integration.md)
- Updated architecture docs with worker lifecycle details
- Updated workspace isolation documentation
- CLI README with worker management examples

**SDK Enhancements**
- Add get_workflow_worker_info() endpoint
- Worker vertical metadata in workflow responses

**Testing**
- All workflows tested: security_assessment, atheris_fuzzing, secret_detection, cargo_fuzzing
- All monitoring commands tested: stats, crashes, status, finding
- Full CI pipeline simulation verified
- Exit codes verified for success/failure scenarios

Ephemeral CI/CD model: ~3-4GB RAM, ~60-90s startup, runs entirely in CI containers.

* fix: Resolve ruff linting violations in CI/CD code

- Remove unused variables (run_id, defaults, result)
- Remove unused imports
- Fix f-string without placeholders

All CI/CD integration files now pass ruff checks.
2025-10-14 10:13:45 +02:00


"""
Security Analyzer Module - Analyzes code for security vulnerabilities
"""
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.
import logging
import re
from pathlib import Path
from typing import Dict, Any, List
try:
    from toolbox.modules.base import BaseModule, ModuleMetadata, ModuleResult, ModuleFinding
except ImportError:
    try:
        from modules.base import BaseModule, ModuleMetadata, ModuleResult, ModuleFinding
    except ImportError:
        from src.toolbox.modules.base import BaseModule, ModuleMetadata, ModuleResult, ModuleFinding
logger = logging.getLogger(__name__)


class SecurityAnalyzer(BaseModule):
    """
    Analyzes source code for common security vulnerabilities.

    This module:
    - Detects hardcoded secrets and credentials
    - Identifies dangerous function calls
    - Finds SQL injection vulnerabilities
    - Detects insecure configurations
    """
    def get_metadata(self) -> ModuleMetadata:
        """Get module metadata"""
        return ModuleMetadata(
            name="security_analyzer",
            version="1.0.0",
            description="Analyzes code for security vulnerabilities",
            author="FuzzForge Team",
            category="analyzer",
            tags=["security", "vulnerabilities", "static-analysis"],
            input_schema={
                "file_extensions": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "File extensions to analyze",
                    "default": [".py", ".js", ".java", ".php", ".rb", ".go"]
                },
                "check_secrets": {
                    "type": "boolean",
                    "description": "Check for hardcoded secrets",
                    "default": True
                },
                "check_sql": {
                    "type": "boolean",
                    "description": "Check for SQL injection risks",
                    "default": True
                },
                "check_dangerous_functions": {
                    "type": "boolean",
                    "description": "Check for dangerous function calls",
                    "default": True
                }
            },
            output_schema={
                "findings": {
                    "type": "array",
                    "description": "List of security findings"
                }
            },
            requires_workspace=True
        )
    def validate_config(self, config: Dict[str, Any]) -> bool:
        """Validate module configuration"""
        extensions = config.get("file_extensions", [])
        if not isinstance(extensions, list):
            raise ValueError("file_extensions must be a list")
        return True
    async def execute(self, config: Dict[str, Any], workspace: Path) -> ModuleResult:
        """
        Execute the security analysis module.

        Args:
            config: Module configuration
            workspace: Path to the workspace directory

        Returns:
            ModuleResult with security findings
        """
        self.start_timer()
        self.validate_workspace(workspace)
        self.validate_config(config)

        findings = []
        files_analyzed = 0

        # Get configuration
        file_extensions = config.get("file_extensions", [".py", ".js", ".java", ".php", ".rb", ".go"])
        check_secrets = config.get("check_secrets", True)
        check_sql = config.get("check_sql", True)
        check_dangerous = config.get("check_dangerous_functions", True)

        logger.info(f"Analyzing files with extensions: {file_extensions}")

        try:
            # Analyze each file
            for ext in file_extensions:
                for file_path in workspace.rglob(f"*{ext}"):
                    if not file_path.is_file():
                        continue

                    files_analyzed += 1
                    relative_path = file_path.relative_to(workspace)

                    try:
                        content = file_path.read_text(encoding='utf-8', errors='ignore')
                        lines = content.splitlines()

                        # Check for secrets
                        if check_secrets:
                            secret_findings = self._check_hardcoded_secrets(
                                content, lines, relative_path
                            )
                            findings.extend(secret_findings)

                        # Check for SQL injection
                        if check_sql and ext in [".py", ".php", ".java", ".js"]:
                            sql_findings = self._check_sql_injection(
                                content, lines, relative_path
                            )
                            findings.extend(sql_findings)

                        # Check for dangerous functions
                        if check_dangerous:
                            dangerous_findings = self._check_dangerous_functions(
                                content, lines, relative_path, ext
                            )
                            findings.extend(dangerous_findings)

                    except Exception as e:
                        logger.error(f"Error analyzing file {relative_path}: {e}")

            # Create summary
            summary = {
                "files_analyzed": files_analyzed,
                "total_findings": len(findings),
                "extensions_scanned": file_extensions
            }

            return self.create_result(
                findings=findings,
                status="success" if files_analyzed > 0 else "partial",
                summary=summary,
                metadata={
                    "workspace": str(workspace),
                    "config": config
                }
            )

        except Exception as e:
            logger.error(f"Security analyzer failed: {e}")
            return self.create_result(
                findings=findings,
                status="failed",
                error=str(e)
            )
    def _check_hardcoded_secrets(
        self, content: str, lines: List[str], file_path: Path
    ) -> List[ModuleFinding]:
        """
        Check for hardcoded secrets in code.

        Args:
            content: File content
            lines: File lines
            file_path: Relative file path

        Returns:
            List of findings
        """
        findings = []

        # Patterns for secrets
        secret_patterns = [
            (r'api[_-]?key\s*=\s*["\']([^"\']{20,})["\']', 'API Key'),
            (r'api[_-]?secret\s*=\s*["\']([^"\']{20,})["\']', 'API Secret'),
            (r'password\s*=\s*["\']([^"\']+)["\']', 'Hardcoded Password'),
            (r'token\s*=\s*["\']([^"\']{20,})["\']', 'Authentication Token'),
            (r'aws[_-]?access[_-]?key\s*=\s*["\']([^"\']+)["\']', 'AWS Access Key'),
            (r'aws[_-]?secret[_-]?key\s*=\s*["\']([^"\']+)["\']', 'AWS Secret Key'),
            (r'private[_-]?key\s*=\s*["\']([^"\']+)["\']', 'Private Key'),
            (r'["\']([A-Za-z0-9]{32,})["\']', 'Potential Secret Hash'),
            (r'Bearer\s+([A-Za-z0-9\-_]+\.[A-Za-z0-9\-_]+\.[A-Za-z0-9\-_]+)', 'JWT Token'),
        ]

        for pattern, secret_type in secret_patterns:
            for match in re.finditer(pattern, content, re.IGNORECASE):
                # Find line number
                line_num = content[:match.start()].count('\n') + 1
                line_content = lines[line_num - 1] if line_num <= len(lines) else ""

                # Skip common false positives
                if self._is_false_positive_secret(match.group(0)):
                    continue

                findings.append(self.create_finding(
                    title=f"Hardcoded {secret_type} detected",
                    description=f"Found potential hardcoded {secret_type} in {file_path}",
                    severity="high" if "key" in secret_type.lower() else "medium",
                    category="hardcoded_secret",
                    file_path=str(file_path),
                    line_start=line_num,
                    code_snippet=line_content.strip()[:100],
                    recommendation=f"Remove hardcoded {secret_type} and use environment variables or secure vault",
                    metadata={"secret_type": secret_type}
                ))

        return findings
    def _check_sql_injection(
        self, content: str, lines: List[str], file_path: Path
    ) -> List[ModuleFinding]:
        """
        Check for potential SQL injection vulnerabilities.

        Args:
            content: File content
            lines: File lines
            file_path: Relative file path

        Returns:
            List of findings
        """
        findings = []

        # SQL injection patterns
        sql_patterns = [
            (r'(SELECT|INSERT|UPDATE|DELETE).*\+\s*[\'"]?\s*\+?\s*\w+', 'String concatenation in SQL'),
            (r'(SELECT|INSERT|UPDATE|DELETE).*%\s*[\'"]?\s*%?\s*\w+', 'String formatting in SQL'),
            (r'f[\'"].*?(SELECT|INSERT|UPDATE|DELETE).*?\{.*?\}', 'F-string in SQL query'),
            (r'query\s*=.*?\+', 'Dynamic query building'),
            (r'execute\s*\(.*?\+.*?\)', 'Dynamic execute statement'),
        ]

        for pattern, vuln_type in sql_patterns:
            for match in re.finditer(pattern, content, re.IGNORECASE):
                line_num = content[:match.start()].count('\n') + 1
                line_content = lines[line_num - 1] if line_num <= len(lines) else ""

                findings.append(self.create_finding(
                    title=f"Potential SQL Injection: {vuln_type}",
                    description=f"Detected potential SQL injection vulnerability via {vuln_type}",
                    severity="high",
                    category="sql_injection",
                    file_path=str(file_path),
                    line_start=line_num,
                    code_snippet=line_content.strip()[:100],
                    recommendation="Use parameterized queries or prepared statements instead",
                    metadata={"vulnerability_type": vuln_type}
                ))

        return findings
    def _check_dangerous_functions(
        self, content: str, lines: List[str], file_path: Path, ext: str
    ) -> List[ModuleFinding]:
        """
        Check for dangerous function calls.

        Args:
            content: File content
            lines: File lines
            file_path: Relative file path
            ext: File extension

        Returns:
            List of findings
        """
        findings = []

        # Language-specific dangerous functions
        dangerous_functions = {
            ".py": [
                (r'eval\s*\(', 'eval()', 'Arbitrary code execution'),
                (r'exec\s*\(', 'exec()', 'Arbitrary code execution'),
                (r'os\.system\s*\(', 'os.system()', 'Command injection risk'),
                (r'subprocess\.call\s*\(.*shell=True', 'subprocess with shell=True', 'Command injection risk'),
                (r'pickle\.loads?\s*\(', 'pickle.load()', 'Deserialization vulnerability'),
            ],
            ".js": [
                (r'eval\s*\(', 'eval()', 'Arbitrary code execution'),
                (r'new\s+Function\s*\(', 'new Function()', 'Arbitrary code execution'),
                (r'innerHTML\s*=', 'innerHTML', 'XSS vulnerability'),
                (r'document\.write\s*\(', 'document.write()', 'XSS vulnerability'),
            ],
            ".php": [
                (r'eval\s*\(', 'eval()', 'Arbitrary code execution'),
                (r'exec\s*\(', 'exec()', 'Command execution'),
                (r'system\s*\(', 'system()', 'Command execution'),
                (r'shell_exec\s*\(', 'shell_exec()', 'Command execution'),
                (r'\$_GET\[', 'Direct $_GET usage', 'Input validation missing'),
                (r'\$_POST\[', 'Direct $_POST usage', 'Input validation missing'),
            ]
        }

        if ext in dangerous_functions:
            for pattern, func_name, risk_type in dangerous_functions[ext]:
                for match in re.finditer(pattern, content):
                    line_num = content[:match.start()].count('\n') + 1
                    line_content = lines[line_num - 1] if line_num <= len(lines) else ""

                    findings.append(self.create_finding(
                        title=f"Dangerous function: {func_name}",
                        description=f"Use of potentially dangerous function {func_name}: {risk_type}",
                        severity="medium",
                        category="dangerous_function",
                        file_path=str(file_path),
                        line_start=line_num,
                        code_snippet=line_content.strip()[:100],
                        recommendation=f"Consider safer alternatives to {func_name}",
                        metadata={
                            "function": func_name,
                            "risk": risk_type
                        }
                    ))

        return findings
    def _is_false_positive_secret(self, value: str) -> bool:
        """
        Check if a potential secret is likely a false positive.

        Args:
            value: Potential secret value

        Returns:
            True if likely false positive
        """
        false_positive_patterns = [
            'example',
            'test',
            'demo',
            'sample',
            'dummy',
            'placeholder',
            'xxx',
            '123',
            'change',
            'your',
            'here'
        ]

        value_lower = value.lower()
        return any(pattern in value_lower for pattern in false_positive_patterns)