fuzzforge_ai/backend/toolbox/modules/base.py
tduhamel42 60ca088ecf CI/CD Integration with Ephemeral Deployment Model (#14)
* feat: Complete migration from Prefect to Temporal

BREAKING CHANGE: Replaces Prefect workflow orchestration with Temporal

## Major Changes
- Replace Prefect with Temporal for workflow orchestration
- Implement vertical worker architecture (rust, android)
- Replace Docker registry with MinIO for unified storage
- Refactor activities to be co-located with workflows
- Update all API endpoints for Temporal compatibility

## Infrastructure
- New: docker-compose.temporal.yaml (Temporal + MinIO + workers)
- New: workers/ directory with rust and android vertical workers
- New: backend/src/temporal/ (manager, discovery)
- New: backend/src/storage/ (S3-cached storage with MinIO)
- New: backend/toolbox/common/ (shared storage activities)
- Deleted: docker-compose.yaml (old Prefect setup)
- Deleted: backend/src/core/prefect_manager.py
- Deleted: backend/src/services/prefect_stats_monitor.py
- Deleted: Docker registry and insecure-registries requirement

## Workflows
- Migrated: security_assessment workflow to Temporal
- New: rust_test workflow (example/test workflow)
- Deleted: secret_detection_scan (Prefect-based, to be reimplemented)
- Activities now co-located with workflows for independent testing

## API Changes
- Updated: backend/src/api/workflows.py (Temporal submission)
- Updated: backend/src/api/runs.py (Temporal status/results)
- Updated: backend/src/main.py (727 lines, TemporalManager integration)
- Updated: All 16 MCP tools to use TemporalManager

## Testing
- All services healthy (Temporal, PostgreSQL, MinIO, workers, backend)
- All API endpoints functional
- End-to-end workflow test passed (72 findings from vulnerable_app)
- MinIO storage integration working (target upload/download, results)
- Worker activity discovery working (6 activities registered)
- Tarball extraction working
- SARIF report generation working

## Documentation
- ARCHITECTURE.md: Complete Temporal architecture documentation
- QUICKSTART_TEMPORAL.md: Getting started guide
- MIGRATION_DECISION.md: Why we chose Temporal over Prefect
- IMPLEMENTATION_STATUS.md: Migration progress tracking
- workers/README.md: Worker development guide

## Dependencies
- Added: temporalio>=1.6.0
- Added: boto3>=1.34.0 (MinIO S3 client)
- Removed: prefect>=3.4.18

* feat: Add Python fuzzing vertical with Atheris integration

This commit implements a complete Python fuzzing workflow using Atheris:

## Python Worker (workers/python/)
- Dockerfile with Python 3.11, Atheris, and build tools
- Generic worker.py for dynamic workflow discovery
- requirements.txt with temporalio, boto3, atheris dependencies
- Added to docker-compose.temporal.yaml with dedicated cache volume

## AtherisFuzzer Module (backend/toolbox/modules/fuzzer/)
- Reusable module extending BaseModule
- Auto-discovers fuzz targets (fuzz_*.py, *_fuzz.py, fuzz_target.py)
- Recursive search to find targets in nested directories (discovery sketched after this list)
- Dynamically loads TestOneInput() function
- Configurable max_iterations and timeout
- Real-time stats callback support for live monitoring
- Returns findings as ModuleFinding objects
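
The discovery rules above can be pictured as a recursive glob over the documented patterns (an illustrative sketch of the behavior, not the module's actual code):

```python
from pathlib import Path
from typing import List

# Illustrative sketch of the documented discovery rules.
TARGET_PATTERNS = ("fuzz_*.py", "*_fuzz.py", "fuzz_target.py")

def discover_fuzz_targets(workspace: Path) -> List[Path]:
    """Recursively collect candidate Atheris harnesses under the workspace."""
    targets: List[Path] = []
    for pattern in TARGET_PATTERNS:
        targets.extend(workspace.rglob(pattern))  # rglob descends into nested directories
    return sorted(set(targets))
```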

## Atheris Fuzzing Workflow (backend/toolbox/workflows/atheris_fuzzing/)
- Temporal workflow for orchestrating fuzzing
- Downloads user code from MinIO
- Executes AtherisFuzzer module
- Uploads results to MinIO
- Cleans up cache after execution
- metadata.yaml with vertical: python for routing

## Test Project (test_projects/python_fuzz_waterfall/)
- Demonstrates stateful waterfall vulnerability
- main.py with check_secret() that leaks progress
- fuzz_target.py with Atheris TestOneInput() harness (sketched after this list)
- Complete README with usage instructions
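
A minimal Atheris harness of the kind described might look like this (a sketch, not the repository's fuzz_target.py; check_secret's signature is assumed):

```python
import sys

import atheris

# Sketch of an Atheris harness; the real fuzz_target.py may differ.
with atheris.instrument_imports():
    from main import check_secret  # instruments main.py at import time

def TestOneInput(data: bytes) -> None:
    # Atheris calls this once per generated input.
    fdp = atheris.FuzzedDataProvider(data)
    check_secret(fdp.ConsumeUnicodeNoSurrogates(64))  # assumes a string argument

if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```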

## Backend Fixes
- Fixed parameter merging in REST API endpoints (workflows.py)
- Changed workflow parameter passing from positional args to kwargs (manager.py)
- Default parameters now properly merged with user parameters
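
The merge semantics amount to user-supplied values overriding defaults (a sketch of the behavior, not the endpoint code):

```python
from typing import Any, Dict

def merge_parameters(defaults: Dict[str, Any], user: Dict[str, Any]) -> Dict[str, Any]:
    # Later keys win, so user parameters override workflow defaults.
    return {**defaults, **user}
```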

## Testing
- Worker discovered AtherisFuzzingWorkflow
- Workflow executed end-to-end successfully
- Fuzz target auto-discovered in nested directories
- Atheris ran 100,000 iterations
- Results uploaded and cache cleaned

* chore: Complete Temporal migration with updated CLI/SDK/docs

This commit includes all remaining Temporal migration changes:

## CLI Updates (cli/)
- Updated workflow execution commands for Temporal
- Enhanced error handling and exceptions
- Updated dependencies in uv.lock

## SDK Updates (sdk/)
- Client methods updated for Temporal workflows
- Updated models for new workflow execution
- Updated dependencies in uv.lock

## Documentation Updates (docs/)
- Architecture documentation for Temporal
- Workflow concept documentation
- Resource management documentation (new)
- Debugging guide (new)
- Updated tutorials and how-to guides
- Troubleshooting updates

## README Updates
- Main README with Temporal instructions
- Backend README
- CLI README
- SDK README

## Other
- Updated IMPLEMENTATION_STATUS.md
- Removed old vulnerable_app.tar.gz

These changes complete the Temporal migration and ensure the
CLI/SDK work correctly with the new backend.

* fix: Use positional args instead of kwargs for Temporal workflows

The Temporal Python SDK's start_workflow() method doesn't accept
a 'kwargs' parameter. Workflows must receive parameters as positional
arguments via the 'args' parameter.

Changed to:
  args=workflow_args  # Positional arguments

This fixes the error:
  TypeError: Client.start_workflow() got an unexpected keyword argument 'kwargs'

Workflows now correctly receive parameters in order:
- security_assessment: [target_id, scanner_config, analyzer_config, reporter_config]
- atheris_fuzzing: [target_id, target_file, max_iterations, timeout_seconds]
- rust_test: [target_id, test_message]
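
With Temporal's Python SDK the corrected submission looks roughly like this (a sketch; the workflow name, ID, and task queue are illustrative):

```python
from typing import Any, Dict

from temporalio.client import Client

# Parameters are passed positionally via `args`; Client.start_workflow()
# has no `kwargs` parameter.
async def submit(client: Client, target_id: str, scanner: Dict[str, Any],
                 analyzer: Dict[str, Any], reporter: Dict[str, Any]) -> None:
    await client.start_workflow(
        "SecurityAssessmentWorkflow",            # illustrative workflow type name
        args=[target_id, scanner, analyzer, reporter],
        id=f"security-assessment-{target_id}",   # illustrative workflow ID
        task_queue="security",                   # illustrative task queue
    )
```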

* fix: Filter metadata-only parameters from workflow arguments

SecurityAssessmentWorkflow was receiving 7 arguments instead of 2-5.
The issue was that target_path and volume_mode from default_parameters
were being passed to the workflow, when they should only be used by
the system for configuration.

Now filters out metadata-only parameters (target_path, volume_mode)
before passing arguments to workflow execution.
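
The filtering reduces to dropping the system-only keys before building the positional argument list (a sketch of the behavior described above):

```python
from typing import Any, Dict

# System-only keys that must not reach workflow arguments.
METADATA_ONLY_KEYS = {"target_path", "volume_mode"}

def filter_workflow_params(params: Dict[str, Any]) -> Dict[str, Any]:
    return {k: v for k, v in params.items() if k not in METADATA_ONLY_KEYS}
```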

* refactor: Remove Prefect leftovers and volume mounting legacy

Complete cleanup of Prefect migration artifacts:

Backend:
- Delete registry.py and workflow_discovery.py (Prefect-specific files)
- Remove Docker validation from setup.py (no longer needed)
- Remove ResourceLimits and VolumeMount models
- Remove target_path and volume_mode from WorkflowSubmission
- Remove supported_volume_modes from API and discovery
- Clean up metadata.yaml files (remove volume/path fields)
- Simplify parameter filtering in manager.py

SDK:
- Remove volume_mode parameter from client methods
- Remove ResourceLimits and VolumeMount models
- Remove Prefect error patterns from docker_logs.py
- Clean up WorkflowSubmission and WorkflowMetadata models

CLI:
- Remove Volume Modes display from workflow info

All removed features are Prefect-specific or Docker volume mounting
artifacts. Temporal workflows use MinIO storage exclusively.

* feat: Add comprehensive test suite and benchmark infrastructure

- Add 68 unit tests for fuzzer, scanner, and analyzer modules
- Implement pytest-based test infrastructure with fixtures
- Add 6 performance benchmarks with category-specific thresholds
- Configure GitHub Actions for automated testing and benchmarking
- Add test and benchmark documentation

Test coverage:
- AtherisFuzzer: 8 tests
- CargoFuzzer: 14 tests
- FileScanner: 22 tests
- SecurityAnalyzer: 24 tests

All tests passing (68/68)
All benchmarks passing (6/6)

* fix: Resolve all ruff linting violations across codebase

Fixed 27 ruff violations in 12 files:
- Removed unused imports (Depends, Dict, Any, Optional, etc.)
- Fixed undefined workflow_info variable in workflows.py
- Removed dead code with undefined variables in atheris_fuzzer.py
- Changed f-string to regular string where no placeholders used

All files now pass ruff checks for CI/CD compliance.

* fix: Configure CI for unit tests only

- Renamed docker-compose.temporal.yaml → docker-compose.yml for CI compatibility
- Commented out integration-tests job (no integration tests yet)
- Updated test-summary to only depend on lint and unit-tests

CI will now run successfully with 68 unit tests. Integration tests can be added later.

* feat: Add CI/CD integration with ephemeral deployment model

Implements comprehensive CI/CD support for FuzzForge with on-demand worker management:

**Worker Management (v0.7.0)**
- Add WorkerManager for automatic worker lifecycle control
- Auto-start workers from stopped state when workflows execute
- Auto-stop workers after workflow completion
- Health checks and startup timeout handling (90s default)

**CI/CD Features**
- `--fail-on` flag: Fail builds based on SARIF severity levels (error/warning/note/info); gating sketched after this list
- `--export-sarif` flag: Export findings in SARIF 2.1.0 format
- `--auto-start`/`--auto-stop` flags: Control worker lifecycle
- Exit code propagation: Returns 1 on blocking findings, 0 on success
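
The `--fail-on` gate reduces to mapping SARIF result levels to an exit code (a sketch, assuming findings carry standard SARIF `level` values):

```python
from typing import Any, Dict

# SARIF 2.1.0 levels, most severe first.
LEVELS = ["error", "warning", "note", "info"]

def exit_code(sarif: Dict[str, Any], fail_on: str) -> int:
    """Return 1 if any finding is at or above the --fail-on threshold, else 0."""
    threshold = LEVELS.index(fail_on)
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            level = result.get("level", "warning")  # "warning" is the SARIF default
            if level in LEVELS and LEVELS.index(level) <= threshold:
                return 1
    return 0
```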

**Exit Code Fix**
- Add `except typer.Exit: raise` handlers at 3 critical locations (pattern sketched after this list)
- Move worker cleanup to finally block for guaranteed execution
- Exit codes now propagate correctly even when build fails
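
The fix follows the standard re-raise-then-cleanup idiom (a sketch; the helpers are illustrative stand-ins):

```python
import typer

def execute_workflow() -> list:
    return []  # placeholder for the real workflow run

def stop_workers() -> None:
    pass  # placeholder for worker cleanup

def run_workflow_command() -> None:
    try:
        findings = execute_workflow()
        if findings:
            raise typer.Exit(code=1)  # blocking findings fail the build
    except typer.Exit:
        raise  # re-raise so the exit code propagates instead of being swallowed
    except Exception as exc:
        typer.echo(f"Workflow failed: {exc}", err=True)
        raise typer.Exit(code=1)
    finally:
        stop_workers()  # cleanup runs even when the build fails
```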

**CI Scripts & Examples**
- ci-start.sh: Start FuzzForge services with health checks
- ci-stop.sh: Clean shutdown with volume preservation option
- GitHub Actions workflow example (security-scan.yml)
- GitLab CI pipeline example (.gitlab-ci.example.yml)
- docker-compose.ci.yml: CI-optimized compose file with profiles

**OSS-Fuzz Integration**
- New ossfuzz_campaign workflow for running OSS-Fuzz projects
- OSS-Fuzz worker with Docker-in-Docker support
- Configurable campaign duration and project selection

**Documentation**
- Comprehensive CI/CD integration guide (docs/how-to/cicd-integration.md)
- Updated architecture docs with worker lifecycle details
- Updated workspace isolation documentation
- CLI README with worker management examples

**SDK Enhancements**
- Add get_workflow_worker_info() endpoint
- Worker vertical metadata in workflow responses

**Testing**
- All workflows tested: security_assessment, atheris_fuzzing, secret_detection, cargo_fuzzing
- All monitoring commands tested: stats, crashes, status, finding
- Full CI pipeline simulation verified
- Exit codes verified for success/failure scenarios

Ephemeral CI/CD model: ~3-4GB RAM, ~60-90s startup, runs entirely in CI containers.

* fix: Resolve ruff linting violations in CI/CD code

- Remove unused variables (run_id, defaults, result)
- Remove unused imports
- Fix f-string without placeholders

All CI/CD integration files now pass ruff checks.
2025-10-14 10:13:45 +02:00

271 lines · 8.9 KiB · Python

"""
Base module interface for all FuzzForge modules
"""
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Dict, Any, List, Optional
from pydantic import BaseModel, Field
import logging
logger = logging.getLogger(__name__)


class ModuleMetadata(BaseModel):
    """Metadata describing a module's capabilities and requirements"""

    name: str = Field(..., description="Module name")
    version: str = Field(..., description="Module version")
    description: str = Field(..., description="Module description")
    author: Optional[str] = Field(None, description="Module author")
    category: str = Field(..., description="Module category (scanner, analyzer, reporter, etc.)")
    tags: List[str] = Field(default_factory=list, description="Module tags")
    input_schema: Dict[str, Any] = Field(default_factory=dict, description="Expected input schema")
    output_schema: Dict[str, Any] = Field(default_factory=dict, description="Output schema")
    requires_workspace: bool = Field(True, description="Whether module requires workspace access")


class ModuleFinding(BaseModel):
    """Individual finding from a module"""

    id: str = Field(..., description="Unique finding ID")
    title: str = Field(..., description="Finding title")
    description: str = Field(..., description="Detailed description")
    severity: str = Field(..., description="Severity level (info, low, medium, high, critical)")
    category: str = Field(..., description="Finding category")
    file_path: Optional[str] = Field(None, description="Affected file path relative to workspace")
    line_start: Optional[int] = Field(None, description="Starting line number")
    line_end: Optional[int] = Field(None, description="Ending line number")
    code_snippet: Optional[str] = Field(None, description="Relevant code snippet")
    recommendation: Optional[str] = Field(None, description="Remediation recommendation")
    metadata: Dict[str, Any] = Field(default_factory=dict, description="Additional metadata")


class ModuleResult(BaseModel):
    """Standard result format from module execution"""

    module: str = Field(..., description="Module name")
    version: str = Field(..., description="Module version")
    status: str = Field(default="success", description="Execution status (success, partial, failed)")
    execution_time: float = Field(..., description="Execution time in seconds")
    findings: List[ModuleFinding] = Field(default_factory=list, description="List of findings")
    summary: Dict[str, Any] = Field(default_factory=dict, description="Summary statistics")
    metadata: Dict[str, Any] = Field(default_factory=dict, description="Additional metadata")
    error: Optional[str] = Field(None, description="Error message if failed")
    sarif: Optional[Dict[str, Any]] = Field(None, description="SARIF report if generated by reporter module")


class BaseModule(ABC):
    """
    Base interface for all security testing modules.

    All modules must inherit from this class and implement the required methods.
    Modules are designed to be stateless and reusable across different workflows.
    """

    def __init__(self):
        """Initialize the module"""
        self._metadata = self.get_metadata()
        self._start_time = None
        logger.info(f"Initialized module: {self._metadata.name} v{self._metadata.version}")

    @abstractmethod
    def get_metadata(self) -> ModuleMetadata:
        """
        Get module metadata.

        Returns:
            ModuleMetadata object describing the module
        """
        pass

    @abstractmethod
    async def execute(self, config: Dict[str, Any], workspace: Path) -> ModuleResult:
        """
        Execute the module with given configuration and workspace.

        Args:
            config: Module-specific configuration parameters
            workspace: Path to the mounted workspace directory

        Returns:
            ModuleResult containing findings and metadata
        """
        pass

    @abstractmethod
    def validate_config(self, config: Dict[str, Any]) -> bool:
        """
        Validate the provided configuration against module requirements.

        Args:
            config: Configuration to validate

        Returns:
            True if configuration is valid, False otherwise

        Raises:
            ValueError: If configuration is invalid with details
        """
        pass

    def validate_workspace(self, workspace: Path) -> bool:
        """
        Validate that the workspace exists and is accessible.

        Args:
            workspace: Path to the workspace

        Returns:
            True if workspace is valid

        Raises:
            ValueError: If workspace is invalid
        """
        if not workspace.exists():
            raise ValueError(f"Workspace does not exist: {workspace}")
        if not workspace.is_dir():
            raise ValueError(f"Workspace is not a directory: {workspace}")
        return True

    def create_finding(
        self,
        title: str,
        description: str,
        severity: str,
        category: str,
        **kwargs
    ) -> ModuleFinding:
        """
        Helper method to create a standardized finding.

        Args:
            title: Finding title
            description: Detailed description
            severity: Severity level
            category: Finding category
            **kwargs: Additional finding fields

        Returns:
            ModuleFinding object
        """
        import uuid

        finding_id = str(uuid.uuid4())
        return ModuleFinding(
            id=finding_id,
            title=title,
            description=description,
            severity=severity,
            category=category,
            **kwargs
        )

    def start_timer(self):
        """Start the execution timer"""
        from time import time
        self._start_time = time()

    def get_execution_time(self) -> float:
        """Get the execution time in seconds"""
        from time import time
        if self._start_time is None:
            return 0.0
        return time() - self._start_time

    def create_result(
        self,
        findings: List[ModuleFinding],
        status: str = "success",
        summary: Optional[Dict[str, Any]] = None,
        metadata: Optional[Dict[str, Any]] = None,
        error: Optional[str] = None
    ) -> ModuleResult:
        """
        Helper method to create a module result.

        Args:
            findings: List of findings
            status: Execution status
            summary: Summary statistics
            metadata: Additional metadata
            error: Error message if failed

        Returns:
            ModuleResult object
        """
        return ModuleResult(
            module=self._metadata.name,
            version=self._metadata.version,
            status=status,
            execution_time=self.get_execution_time(),
            findings=findings,
            summary=summary or self._generate_summary(findings),
            metadata=metadata or {},
            error=error
        )

    def _generate_summary(self, findings: List[ModuleFinding]) -> Dict[str, Any]:
        """
        Generate summary statistics from findings.

        Args:
            findings: List of findings

        Returns:
            Summary dictionary
        """
        severity_counts = {
            "info": 0,
            "low": 0,
            "medium": 0,
            "high": 0,
            "critical": 0
        }
        category_counts = {}

        for finding in findings:
            # Count by severity
            if finding.severity in severity_counts:
                severity_counts[finding.severity] += 1
            # Count by category
            if finding.category not in category_counts:
                category_counts[finding.category] = 0
            category_counts[finding.category] += 1

        return {
            "total_findings": len(findings),
            "severity_counts": severity_counts,
            "category_counts": category_counts,
            "highest_severity": self._get_highest_severity(findings)
        }

    def _get_highest_severity(self, findings: List[ModuleFinding]) -> str:
        """
        Get the highest severity from findings.

        Args:
            findings: List of findings

        Returns:
            Highest severity level
        """
        severity_order = ["critical", "high", "medium", "low", "info"]
        for severity in severity_order:
            if any(f.severity == severity for f in findings):
                return severity
        return "none"
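
To show the contract in use, here is a minimal hypothetical subclass (ExampleScanner and its TODO check are illustrative, not part of the toolbox):

```python
from pathlib import Path
from typing import Any, Dict

# Assumes the classes defined in base.py above are in scope.
class ExampleScanner(BaseModule):
    def get_metadata(self) -> ModuleMetadata:
        return ModuleMetadata(
            name="example_scanner",
            version="0.1.0",
            description="Flags TODO markers as informational findings",
            category="scanner",
        )

    def validate_config(self, config: Dict[str, Any]) -> bool:
        return True  # this sketch has no required options

    async def execute(self, config: Dict[str, Any], workspace: Path) -> ModuleResult:
        self.start_timer()
        self.validate_workspace(workspace)
        findings = [
            self.create_finding(
                title="TODO marker",
                description=f"TODO found in {path.name}",
                severity="info",
                category="code-quality",
                file_path=str(path.relative_to(workspace)),
            )
            for path in workspace.rglob("*.py")
            if "TODO" in path.read_text(errors="ignore")
        ]
        return self.create_result(findings)
```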