Mirror of https://github.com/FuzzingLabs/fuzzforge_ai.git — synced 2026-02-12 20:32:46 +00:00
* feat: Complete migration from Prefect to Temporal

  BREAKING CHANGE: Replaces Prefect workflow orchestration with Temporal

  ## Major Changes
  - Replace Prefect with Temporal for workflow orchestration
  - Implement vertical worker architecture (rust, android)
  - Replace Docker registry with MinIO for unified storage
  - Refactor activities to be co-located with workflows
  - Update all API endpoints for Temporal compatibility

  ## Infrastructure
  - New: docker-compose.temporal.yaml (Temporal + MinIO + workers)
  - New: workers/ directory with rust and android vertical workers
  - New: backend/src/temporal/ (manager, discovery)
  - New: backend/src/storage/ (S3-cached storage with MinIO)
  - New: backend/toolbox/common/ (shared storage activities)
  - Deleted: docker-compose.yaml (old Prefect setup)
  - Deleted: backend/src/core/prefect_manager.py
  - Deleted: backend/src/services/prefect_stats_monitor.py
  - Deleted: Docker registry and insecure-registries requirement

  ## Workflows
  - Migrated: security_assessment workflow to Temporal
  - New: rust_test workflow (example/test workflow)
  - Deleted: secret_detection_scan (Prefect-based, to be reimplemented)
  - Activities now co-located with workflows for independent testing

  ## API Changes
  - Updated: backend/src/api/workflows.py (Temporal submission)
  - Updated: backend/src/api/runs.py (Temporal status/results)
  - Updated: backend/src/main.py (727 lines, TemporalManager integration)
  - Updated: All 16 MCP tools to use TemporalManager

  ## Testing
  - ✅ All services healthy (Temporal, PostgreSQL, MinIO, workers, backend)
  - ✅ All API endpoints functional
  - ✅ End-to-end workflow test passed (72 findings from vulnerable_app)
  - ✅ MinIO storage integration working (target upload/download, results)
  - ✅ Worker activity discovery working (6 activities registered)
  - ✅ Tarball extraction working
  - ✅ SARIF report generation working

  ## Documentation
  - ARCHITECTURE.md: Complete Temporal architecture documentation
  - QUICKSTART_TEMPORAL.md: Getting started guide
  - MIGRATION_DECISION.md: Why we chose Temporal over Prefect
  - IMPLEMENTATION_STATUS.md: Migration progress tracking
  - workers/README.md: Worker development guide

  ## Dependencies
  - Added: temporalio>=1.6.0
  - Added: boto3>=1.34.0 (MinIO S3 client)
  - Removed: prefect>=3.4.18

* feat: Add Python fuzzing vertical with Atheris integration

  This commit implements a complete Python fuzzing workflow using Atheris.

  ## Python Worker (workers/python/)
  - Dockerfile with Python 3.11, Atheris, and build tools
  - Generic worker.py for dynamic workflow discovery
  - requirements.txt with temporalio, boto3, atheris dependencies
  - Added to docker-compose.temporal.yaml with dedicated cache volume

  ## AtherisFuzzer Module (backend/toolbox/modules/fuzzer/)
  - Reusable module extending BaseModule
  - Auto-discovers fuzz targets (fuzz_*.py, *_fuzz.py, fuzz_target.py)
  - Recursive search to find targets in nested directories
  - Dynamically loads TestOneInput() function
  - Configurable max_iterations and timeout
  - Real-time stats callback support for live monitoring
  - Returns findings as ModuleFinding objects

  ## Atheris Fuzzing Workflow (backend/toolbox/workflows/atheris_fuzzing/)
  - Temporal workflow for orchestrating fuzzing
  - Downloads user code from MinIO
  - Executes AtherisFuzzer module
  - Uploads results to MinIO
  - Cleans up cache after execution
  - metadata.yaml with vertical: python for routing

  ## Test Project (test_projects/python_fuzz_waterfall/)
  - Demonstrates stateful waterfall vulnerability
  - main.py with check_secret() that leaks progress
  - fuzz_target.py with Atheris TestOneInput() harness (a harness sketch follows this commit)
  - Complete README with usage instructions

  ## Backend Fixes
  - Fixed parameter merging in REST API endpoints (workflows.py)
  - Changed workflow parameter passing from positional args to kwargs (manager.py)
  - Default parameters now properly merged with user parameters

  ## Testing
  - ✅ Worker discovered AtherisFuzzingWorkflow
  - ✅ Workflow executed end-to-end successfully
  - ✅ Fuzz target auto-discovered in nested directories
  - ✅ Atheris ran 100,000 iterations
  - ✅ Results uploaded and cache cleaned
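For context, a minimal Atheris harness of the kind this test project describes looks roughly like the sketch below. It is only an illustration, not the repository's actual fuzz_target.py: the `main.check_secret()` call and its string argument are assumptions based on the commit description.

```python
# Illustrative sketch only -- the real fuzz_target.py may differ.
import sys

import atheris

with atheris.instrument_imports():
    from main import check_secret  # assumed API of the test project's main.py


def TestOneInput(data: bytes) -> None:
    """Atheris entry point: feed fuzzer-generated bytes to the target."""
    fdp = atheris.FuzzedDataProvider(data)
    check_secret(fdp.ConsumeUnicodeNoSurrogates(64))


if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```

The AtherisFuzzer module dynamically loads a `TestOneInput()` function like this one from any file matching the fuzz target naming patterns listed above.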
* chore: Complete Temporal migration with updated CLI/SDK/docs

  This commit includes all remaining Temporal migration changes.

  ## CLI Updates (cli/)
  - Updated workflow execution commands for Temporal
  - Enhanced error handling and exceptions
  - Updated dependencies in uv.lock

  ## SDK Updates (sdk/)
  - Client methods updated for Temporal workflows
  - Updated models for new workflow execution
  - Updated dependencies in uv.lock

  ## Documentation Updates (docs/)
  - Architecture documentation for Temporal
  - Workflow concept documentation
  - Resource management documentation (new)
  - Debugging guide (new)
  - Updated tutorials and how-to guides
  - Troubleshooting updates

  ## README Updates
  - Main README with Temporal instructions
  - Backend README
  - CLI README
  - SDK README

  ## Other
  - Updated IMPLEMENTATION_STATUS.md
  - Removed old vulnerable_app.tar.gz

  These changes complete the Temporal migration and ensure the CLI/SDK work correctly with the new backend.

* fix: Use positional args instead of kwargs for Temporal workflows

  The Temporal Python SDK's start_workflow() method doesn't accept a 'kwargs'
  parameter. Workflows must receive their parameters as positional arguments
  via the 'args' parameter (args=workflow_args). This fixes the error:

      TypeError: Client.start_workflow() got an unexpected keyword argument 'kwargs'

  Workflows now correctly receive parameters in order:
  - security_assessment: [target_id, scanner_config, analyzer_config, reporter_config]
  - atheris_fuzzing: [target_id, target_file, max_iterations, timeout_seconds]
  - rust_test: [target_id, test_message]

  (A sketch of the submission pattern follows this group of commits.)

* fix: Filter metadata-only parameters from workflow arguments

  SecurityAssessmentWorkflow was receiving 7 arguments instead of 2-5. The
  issue was that target_path and volume_mode from default_parameters were
  being passed to the workflow, when they should only be used by the system
  for configuration.

  The manager now filters out metadata-only parameters (target_path,
  volume_mode) before passing arguments to workflow execution.

* refactor: Remove Prefect leftovers and volume mounting legacy

  Complete cleanup of Prefect migration artifacts.

  Backend:
  - Delete registry.py and workflow_discovery.py (Prefect-specific files)
  - Remove Docker validation from setup.py (no longer needed)
  - Remove ResourceLimits and VolumeMount models
  - Remove target_path and volume_mode from WorkflowSubmission
  - Remove supported_volume_modes from API and discovery
  - Clean up metadata.yaml files (remove volume/path fields)
  - Simplify parameter filtering in manager.py

  SDK:
  - Remove volume_mode parameter from client methods
  - Remove ResourceLimits and VolumeMount models
  - Remove Prefect error patterns from docker_logs.py
  - Clean up WorkflowSubmission and WorkflowMetadata models

  CLI:
  - Remove Volume Modes display from workflow info

  All removed features are Prefect-specific or Docker volume mounting artifacts. Temporal workflows use MinIO storage exclusively.
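The two fixes above come down to how workflow arguments are assembled and handed to the Temporal client. Here is a minimal sketch, assuming an already-connected temporalio Client; the workflow name, task queue, default values, and the METADATA_ONLY_PARAMS set are illustrative stand-ins, not the actual manager.py code.

```python
from temporalio.client import Client

# Assumed set of parameters that configure the system rather than the workflow.
METADATA_ONLY_PARAMS = {"target_path", "volume_mode"}


async def submit_atheris_fuzzing(client: Client, target_id: str, params: dict) -> str:
    # Drop metadata-only keys before building the positional argument list.
    workflow_params = {k: v for k, v in params.items() if k not in METADATA_ONLY_PARAMS}

    # Order matters: atheris_fuzzing expects
    # [target_id, target_file, max_iterations, timeout_seconds].
    workflow_args = [
        target_id,
        workflow_params.get("target_file", "fuzz_target.py"),
        workflow_params.get("max_iterations", 100_000),
        workflow_params.get("timeout_seconds", 300),
    ]

    handle = await client.start_workflow(
        "AtherisFuzzingWorkflow",        # workflow name as assumed to be registered on the worker
        args=workflow_args,              # positional arguments, not kwargs
        id=f"atheris-fuzzing-{target_id}",
        task_queue="python",             # assumed task queue for the python vertical
    )
    return handle.id
```

Passing a list via `args=` is how the Temporal Python SDK delivers multiple positional parameters; there is no `kwargs` parameter on `Client.start_workflow()`, which is exactly the error the fix above addresses.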
* feat: Add comprehensive test suite and benchmark infrastructure

  - Add 68 unit tests for fuzzer, scanner, and analyzer modules
  - Implement pytest-based test infrastructure with fixtures
  - Add 6 performance benchmarks with category-specific thresholds
  - Configure GitHub Actions for automated testing and benchmarking
  - Add test and benchmark documentation

  Test coverage:
  - AtherisFuzzer: 8 tests
  - CargoFuzzer: 14 tests
  - FileScanner: 22 tests
  - SecurityAnalyzer: 24 tests

  All tests passing (68/68)
  All benchmarks passing (6/6)

* fix: Resolve all ruff linting violations across codebase

  Fixed 27 ruff violations in 12 files:
  - Removed unused imports (Depends, Dict, Any, Optional, etc.)
  - Fixed undefined workflow_info variable in workflows.py
  - Removed dead code with undefined variables in atheris_fuzzer.py
  - Changed f-string to regular string where no placeholders used

  All files now pass ruff checks for CI/CD compliance.

* fix: Configure CI for unit tests only

  - Renamed docker-compose.temporal.yaml → docker-compose.yml for CI compatibility
  - Commented out integration-tests job (no integration tests yet)
  - Updated test-summary to only depend on lint and unit-tests

  CI will now run successfully with 68 unit tests. Integration tests can be added later.

* feat: Add CI/CD integration with ephemeral deployment model

  Implements comprehensive CI/CD support for FuzzForge with on-demand worker management.

  **Worker Management (v0.7.0)**
  - Add WorkerManager for automatic worker lifecycle control
  - Auto-start workers from stopped state when workflows execute
  - Auto-stop workers after workflow completion
  - Health checks and startup timeout handling (90s default)

  **CI/CD Features**
  - `--fail-on` flag: Fail builds based on SARIF severity levels (error/warning/note/info)
  - `--export-sarif` flag: Export findings in SARIF 2.1.0 format
  - `--auto-start`/`--auto-stop` flags: Control worker lifecycle
  - Exit code propagation: Returns 1 on blocking findings, 0 on success

  **Exit Code Fix**
  - Add `except typer.Exit: raise` handlers at 3 critical locations
  - Move worker cleanup to finally block for guaranteed execution
  - Exit codes now propagate correctly even when build fails (see the sketch after this commit)

  **CI Scripts & Examples**
  - ci-start.sh: Start FuzzForge services with health checks
  - ci-stop.sh: Clean shutdown with volume preservation option
  - GitHub Actions workflow example (security-scan.yml)
  - GitLab CI pipeline example (.gitlab-ci.example.yml)
  - docker-compose.ci.yml: CI-optimized compose file with profiles

  **OSS-Fuzz Integration**
  - New ossfuzz_campaign workflow for running OSS-Fuzz projects
  - OSS-Fuzz worker with Docker-in-Docker support
  - Configurable campaign duration and project selection

  **Documentation**
  - Comprehensive CI/CD integration guide (docs/how-to/cicd-integration.md)
  - Updated architecture docs with worker lifecycle details
  - Updated workspace isolation documentation
  - CLI README with worker management examples

  **SDK Enhancements**
  - Add get_workflow_worker_info() endpoint
  - Worker vertical metadata in workflow responses

  **Testing**
  - All workflows tested: security_assessment, atheris_fuzzing, secret_detection, cargo_fuzzing
  - All monitoring commands tested: stats, crashes, status, finding
  - Full CI pipeline simulation verified
  - Exit codes verified for success/failure scenarios

  Ephemeral CI/CD model: ~3-4GB RAM, ~60-90s startup, runs entirely in CI containers.
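To illustrate the exit-code behaviour and `--fail-on` gating described above, here is a minimal sketch of the pattern, assuming a Typer-based CLI. The command name, helper functions, and severity ordering are hypothetical stand-ins, not the actual fuzzforge CLI code.

```python
import typer

app = typer.Typer()

# Hypothetical severity ordering for SARIF levels, most severe first.
SEVERITY_ORDER = ["error", "warning", "note", "info"]


def start_worker() -> bool:
    """Stand-in for WorkerManager auto-start (hypothetical helper)."""
    return True


def stop_worker() -> None:
    """Stand-in for WorkerManager auto-stop (hypothetical helper)."""


def execute_workflow() -> list[dict]:
    """Stand-in returning SARIF-style findings (hypothetical helper)."""
    return [{"level": "warning", "message": "example finding"}]


def blocking_findings(findings: list[dict], fail_on: str) -> list[dict]:
    """Return findings at or above the --fail-on severity threshold."""
    threshold = SEVERITY_ORDER.index(fail_on)
    return [
        f for f in findings
        if SEVERITY_ORDER.index(f.get("level", "info")) <= threshold
    ]


@app.command()
def run(fail_on: str = "error", auto_stop: bool = True) -> None:
    worker_started = False
    try:
        worker_started = start_worker()
        findings = execute_workflow()
        if blocking_findings(findings, fail_on):
            raise typer.Exit(code=1)  # blocking findings fail the build
    except typer.Exit:
        raise  # let Typer exit codes propagate instead of being swallowed
    except Exception as exc:
        typer.echo(f"Workflow failed: {exc}", err=True)
        raise typer.Exit(code=1)
    finally:
        if auto_stop and worker_started:
            stop_worker()  # cleanup runs even when the command is failing


if __name__ == "__main__":
    app()
```

The two key points are re-raising `typer.Exit` before the generic exception handler can swallow it, and performing worker cleanup in `finally` so it runs on both success and failure paths.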
* fix: Resolve ruff linting violations in CI/CD code

  - Remove unused variables (run_id, defaults, result)
  - Remove unused imports
  - Fix f-string without placeholders

  All CI/CD integration files now pass ruff checks.
387 lines
13 KiB
Python
"""
|
|
Docker log integration for enhanced error reporting.
|
|
|
|
This module provides functionality to fetch and parse Docker container logs
|
|
to provide better context for deployment and workflow execution errors.
|
|
"""
|
|
# Copyright (c) 2025 FuzzingLabs
|
|
#
|
|
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
|
|
# at the root of this repository for details.
|
|
#
|
|
# After the Change Date (four years from publication), this version of the
|
|
# Licensed Work will be made available under the Apache License, Version 2.0.
|
|
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
|
|
#
|
|
# Additional attribution and requirements are provided in the NOTICE file.
|
|
|
|
|
|
import logging
|
|
import re
|
|
import subprocess
|
|
import json
|
|
from typing import Dict, Any, List, Optional
|
|
from datetime import datetime, timezone
|
|
from dataclasses import dataclass
|
|
|
|
logger = logging.getLogger(__name__)
|
|
|
|
|
|
@dataclass
|
|
class ContainerLogEntry:
|
|
"""A single log entry from a container."""
|
|
timestamp: datetime
|
|
level: str
|
|
message: str
|
|
stream: str # 'stdout' or 'stderr'
|
|
raw: str
|
|
|
|
|
|
@dataclass
|
|
class ContainerDiagnostics:
|
|
"""Complete diagnostics for a container."""
|
|
container_id: Optional[str]
|
|
status: str
|
|
exit_code: Optional[int]
|
|
error: Optional[str]
|
|
logs: List[ContainerLogEntry]
|
|
resource_usage: Dict[str, Any]
|
|
volume_mounts: List[Dict[str, str]]
|
|
|
|
|
|
class DockerLogIntegration:
|
|
"""
|
|
Integration with Docker to fetch container logs and diagnostics.
|
|
|
|
This class provides methods to fetch container logs, parse common error
|
|
patterns, and extract meaningful diagnostic information from Docker
|
|
containers related to FuzzForge workflow execution.
|
|
"""
|
|
|
|
def __init__(self):
|
|
self.docker_available = self._check_docker_availability()
|
|
|
|
# Common error patterns in container logs
|
|
self.error_patterns = {
|
|
'permission_denied': [
|
|
r'permission denied',
|
|
r'operation not permitted',
|
|
r'cannot access.*permission denied'
|
|
],
|
|
'out_of_memory': [
|
|
r'out of memory',
|
|
r'oom killed',
|
|
r'cannot allocate memory'
|
|
],
|
|
'image_pull_failed': [
|
|
r'failed to pull image',
|
|
r'pull access denied',
|
|
r'image not found'
|
|
],
|
|
'volume_mount_failed': [
|
|
r'invalid mount config',
|
|
r'mount denied',
|
|
r'no such file or directory.*mount'
|
|
],
|
|
'network_error': [
|
|
r'network is unreachable',
|
|
r'connection refused',
|
|
r'timeout.*connect'
|
|
]
|
|
}
|
|
|
|
def _check_docker_availability(self) -> bool:
|
|
"""Check if Docker is available and accessible."""
|
|
try:
|
|
result = subprocess.run(['docker', 'version', '--format', 'json'],
|
|
capture_output=True, text=True, timeout=5)
|
|
return result.returncode == 0
|
|
except (subprocess.TimeoutExpired, FileNotFoundError, subprocess.SubprocessError):
|
|
return False
|
|
|
|
def get_container_logs(self, container_name_or_id: str, tail: int = 100) -> List[ContainerLogEntry]:
|
|
"""
|
|
Fetch logs from a Docker container.
|
|
|
|
Args:
|
|
container_name_or_id: Container name or ID
|
|
tail: Number of log lines to retrieve
|
|
|
|
Returns:
|
|
List of parsed log entries
|
|
"""
|
|
if not self.docker_available:
|
|
logger.warning("Docker not available, cannot fetch container logs")
|
|
return []
|
|
|
|
try:
|
|
cmd = ['docker', 'logs', '--timestamps', '--tail', str(tail), container_name_or_id]
|
|
result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
|
|
|
|
if result.returncode != 0:
|
|
logger.error(f"Failed to fetch logs for container {container_name_or_id}: {result.stderr}")
|
|
return []
|
|
|
|
return self._parse_docker_logs(result.stdout + result.stderr)
|
|
|
|
except subprocess.TimeoutExpired:
|
|
logger.error(f"Timeout fetching logs for container {container_name_or_id}")
|
|
return []
|
|
except Exception as e:
|
|
logger.error(f"Error fetching container logs: {e}")
|
|
return []
|
|
|
|
def _parse_docker_logs(self, raw_logs: str) -> List[ContainerLogEntry]:
|
|
"""Parse raw Docker logs into structured entries."""
|
|
entries = []
|
|
|
|
for line in raw_logs.strip().split('\n'):
|
|
if not line.strip():
|
|
continue
|
|
|
|
entry = self._parse_log_line(line)
|
|
if entry:
|
|
entries.append(entry)
|
|
|
|
return entries
|
|
|
|
def _parse_log_line(self, line: str) -> Optional[ContainerLogEntry]:
|
|
"""Parse a single log line with timestamp."""
|
|
# Docker log format: 2023-10-01T12:00:00.000000000Z message
|
|
timestamp_match = re.match(r'^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+Z)\s+(.*)', line)
|
|
|
|
if timestamp_match:
|
|
timestamp_str, message = timestamp_match.groups()
|
|
try:
|
|
timestamp = datetime.fromisoformat(timestamp_str.replace('Z', '+00:00'))
|
|
except ValueError:
|
|
timestamp = datetime.now(timezone.utc)
|
|
else:
|
|
timestamp = datetime.now(timezone.utc)
|
|
message = line
|
|
|
|
# Determine log level from message content
|
|
level = self._extract_log_level(message)
|
|
|
|
# Determine stream (simplified - Docker doesn't clearly separate in combined output)
|
|
stream = 'stderr' if any(keyword in message.lower() for keyword in ['error', 'failed', 'exception']) else 'stdout'
|
|
|
|
return ContainerLogEntry(
|
|
timestamp=timestamp,
|
|
level=level,
|
|
message=message.strip(),
|
|
stream=stream,
|
|
raw=line
|
|
)
|
|
|
|
def _extract_log_level(self, message: str) -> str:
|
|
"""Extract log level from message content."""
|
|
message_lower = message.lower()
|
|
|
|
if any(keyword in message_lower for keyword in ['error', 'failed', 'exception', 'fatal']):
|
|
return 'ERROR'
|
|
elif any(keyword in message_lower for keyword in ['warning', 'warn']):
|
|
return 'WARNING'
|
|
elif any(keyword in message_lower for keyword in ['info', 'information']):
|
|
return 'INFO'
|
|
elif any(keyword in message_lower for keyword in ['debug']):
|
|
return 'DEBUG'
|
|
else:
|
|
return 'INFO'
|
|
|
|
def get_container_diagnostics(self, container_name_or_id: str) -> ContainerDiagnostics:
|
|
"""
|
|
Get complete diagnostics for a container including logs, status, and resource usage.
|
|
|
|
Args:
|
|
container_name_or_id: Container name or ID
|
|
|
|
Returns:
|
|
Complete container diagnostics
|
|
"""
|
|
if not self.docker_available:
|
|
return ContainerDiagnostics(
|
|
container_id=None,
|
|
status="unknown",
|
|
exit_code=None,
|
|
error="Docker not available",
|
|
logs=[],
|
|
resource_usage={},
|
|
volume_mounts=[]
|
|
)
|
|
|
|
# Get container inspect data
|
|
inspect_data = self._get_container_inspect(container_name_or_id)
|
|
|
|
# Get logs
|
|
logs = self.get_container_logs(container_name_or_id)
|
|
|
|
# Extract key information
|
|
if inspect_data:
|
|
state = inspect_data.get('State', {})
|
|
config = inspect_data.get('Config', {})
|
|
host_config = inspect_data.get('HostConfig', {})
|
|
|
|
status = state.get('Status', 'unknown')
|
|
exit_code = state.get('ExitCode')
|
|
error = state.get('Error', '')
|
|
|
|
# Get volume mounts
|
|
mounts = inspect_data.get('Mounts', [])
|
|
volume_mounts = [
|
|
{
|
|
'source': mount.get('Source', ''),
|
|
'destination': mount.get('Destination', ''),
|
|
'mode': mount.get('Mode', ''),
|
|
'type': mount.get('Type', '')
|
|
}
|
|
for mount in mounts
|
|
]
|
|
|
|
# Get resource limits
|
|
resource_usage = {
|
|
'memory_limit': host_config.get('Memory', 0),
|
|
'cpu_limit': host_config.get('CpuQuota', 0),
|
|
'cpu_period': host_config.get('CpuPeriod', 0)
|
|
}
|
|
|
|
else:
|
|
status = "not_found"
|
|
exit_code = None
|
|
error = f"Container {container_name_or_id} not found"
|
|
volume_mounts = []
|
|
resource_usage = {}
|
|
|
|
return ContainerDiagnostics(
|
|
container_id=container_name_or_id,
|
|
status=status,
|
|
exit_code=exit_code,
|
|
error=error,
|
|
logs=logs,
|
|
resource_usage=resource_usage,
|
|
volume_mounts=volume_mounts
|
|
)
|
|
|
|
def _get_container_inspect(self, container_name_or_id: str) -> Optional[Dict[str, Any]]:
|
|
"""Get container inspection data."""
|
|
try:
|
|
cmd = ['docker', 'inspect', container_name_or_id]
|
|
result = subprocess.run(cmd, capture_output=True, text=True, timeout=5)
|
|
|
|
if result.returncode != 0:
|
|
return None
|
|
|
|
data = json.loads(result.stdout)
|
|
return data[0] if data else None
|
|
|
|
except (subprocess.TimeoutExpired, json.JSONDecodeError, Exception) as e:
|
|
logger.debug(f"Failed to inspect container {container_name_or_id}: {e}")
|
|
return None
|
|
|
|
def analyze_error_patterns(self, logs: List[ContainerLogEntry]) -> Dict[str, List[str]]:
|
|
"""
|
|
Analyze logs for common error patterns.
|
|
|
|
Args:
|
|
logs: List of log entries to analyze
|
|
|
|
Returns:
|
|
Dictionary mapping error types to matching log messages
|
|
"""
|
|
detected_errors = {}
|
|
|
|
for error_type, patterns in self.error_patterns.items():
|
|
matches = []
|
|
|
|
for log_entry in logs:
|
|
for pattern in patterns:
|
|
if re.search(pattern, log_entry.message, re.IGNORECASE):
|
|
matches.append(log_entry.message)
|
|
break # Don't match the same message multiple times
|
|
|
|
if matches:
|
|
detected_errors[error_type] = matches
|
|
|
|
return detected_errors
|
|
|
|
def get_container_names_by_label(self, label_filter: str) -> List[str]:
|
|
"""
|
|
Get container names that match a specific label filter.
|
|
|
|
Args:
|
|
label_filter: Label filter (e.g., "prefect.flow-run-id=12345")
|
|
|
|
Returns:
|
|
List of container names
|
|
"""
|
|
if not self.docker_available:
|
|
return []
|
|
|
|
try:
|
|
cmd = ['docker', 'ps', '-a', '--filter', f'label={label_filter}', '--format', '{{.Names}}']
|
|
result = subprocess.run(cmd, capture_output=True, text=True, timeout=5)
|
|
|
|
if result.returncode != 0:
|
|
return []
|
|
|
|
return [name.strip() for name in result.stdout.strip().split('\n') if name.strip()]
|
|
|
|
except Exception as e:
|
|
logger.debug(f"Failed to get containers by label {label_filter}: {e}")
|
|
return []
|
|
|
|
def suggest_fixes(self, error_analysis: Dict[str, List[str]]) -> List[str]:
|
|
"""
|
|
Suggest fixes based on detected error patterns.
|
|
|
|
Args:
|
|
error_analysis: Result from analyze_error_patterns()
|
|
|
|
Returns:
|
|
List of suggested fixes
|
|
"""
|
|
suggestions = []
|
|
|
|
if 'permission_denied' in error_analysis:
|
|
suggestions.extend([
|
|
"Check file permissions on the target path",
|
|
"Ensure the Docker daemon has access to the mounted volumes",
|
|
"Try running with elevated privileges or adjust volume ownership"
|
|
])
|
|
|
|
if 'out_of_memory' in error_analysis:
|
|
suggestions.extend([
|
|
"Increase memory limits for the workflow",
|
|
"Check if the target files are too large for available memory",
|
|
"Consider using streaming processing for large datasets"
|
|
])
|
|
|
|
if 'image_pull_failed' in error_analysis:
|
|
suggestions.extend([
|
|
"Check network connectivity to Docker registry",
|
|
"Verify image name and tag are correct",
|
|
"Ensure Docker registry credentials are configured"
|
|
])
|
|
|
|
if 'volume_mount_failed' in error_analysis:
|
|
suggestions.extend([
|
|
"Verify the target path exists and is accessible",
|
|
"Check volume mount syntax and permissions",
|
|
"Ensure the path is not already in use by another process"
|
|
])
|
|
|
|
if 'network_error' in error_analysis:
|
|
suggestions.extend([
|
|
"Check network connectivity",
|
|
"Verify backend services are running (docker-compose up -d)",
|
|
"Check firewall settings and port availability"
|
|
])
|
|
|
|
if not suggestions:
|
|
suggestions.append("Review the container logs above for specific error details")
|
|
|
|
return suggestions
|
|
|
|
|
|
# Global instance for easy access
|
|
docker_integration = DockerLogIntegration() |
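

# ---------------------------------------------------------------------------
# Illustrative usage sketch (editorial addition, not part of the original
# module): fetch diagnostics for a container, look for known error patterns,
# and print the suggested fixes. The container name below is a hypothetical
# placeholder, not a name the backend is known to use.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    container = "fuzzforge-backend"  # placeholder container name

    diagnostics = docker_integration.get_container_diagnostics(container)
    print(f"status={diagnostics.status} exit_code={diagnostics.exit_code}")

    detected = docker_integration.analyze_error_patterns(diagnostics.logs)
    for suggestion in docker_integration.suggest_fixes(detected):
        print(f"- {suggestion}")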