Files
fuzzforge_ai/sdk/examples/fuzzing_monitor.py
tduhamel42 60ca088ecf CI/CD Integration with Ephemeral Deployment Model (#14)
* feat: Complete migration from Prefect to Temporal

BREAKING CHANGE: Replaces Prefect workflow orchestration with Temporal

## Major Changes
- Replace Prefect with Temporal for workflow orchestration
- Implement vertical worker architecture (rust, android)
- Replace Docker registry with MinIO for unified storage
- Refactor activities to be co-located with workflows
- Update all API endpoints for Temporal compatibility

## Infrastructure
- New: docker-compose.temporal.yaml (Temporal + MinIO + workers)
- New: workers/ directory with rust and android vertical workers
- New: backend/src/temporal/ (manager, discovery)
- New: backend/src/storage/ (S3-cached storage with MinIO)
- New: backend/toolbox/common/ (shared storage activities)
- Deleted: docker-compose.yaml (old Prefect setup)
- Deleted: backend/src/core/prefect_manager.py
- Deleted: backend/src/services/prefect_stats_monitor.py
- Deleted: Docker registry and insecure-registries requirement

## Workflows
- Migrated: security_assessment workflow to Temporal
- New: rust_test workflow (example/test workflow)
- Deleted: secret_detection_scan (Prefect-based, to be reimplemented)
- Activities now co-located with workflows for independent testing
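
As an illustration of the co-located pattern, here is a minimal temporalio sketch (the workflow and activity names are hypothetical, not the actual FuzzForge code):

```python
from datetime import timedelta
from temporalio import activity, workflow


@activity.defn
async def scan_target(target_id: str) -> dict:
    # Placeholder activity body; the real activities live next to the workflow.
    return {"target_id": target_id, "findings": []}


@workflow.defn
class ExampleAssessmentWorkflow:
    @workflow.run
    async def run(self, target_id: str) -> dict:
        # Co-located activity invoked from the workflow definition.
        return await workflow.execute_activity(
            scan_target,
            target_id,
            start_to_close_timeout=timedelta(minutes=10),
        )
```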

## API Changes
- Updated: backend/src/api/workflows.py (Temporal submission)
- Updated: backend/src/api/runs.py (Temporal status/results)
- Updated: backend/src/main.py (727 lines, TemporalManager integration)
- Updated: All 16 MCP tools to use TemporalManager

## Testing
- All services healthy (Temporal, PostgreSQL, MinIO, workers, backend)
- All API endpoints functional
- End-to-end workflow test passed (72 findings from vulnerable_app)
- MinIO storage integration working (target upload/download, results)
- Worker activity discovery working (6 activities registered)
- Tarball extraction working
- SARIF report generation working

## Documentation
- ARCHITECTURE.md: Complete Temporal architecture documentation
- QUICKSTART_TEMPORAL.md: Getting started guide
- MIGRATION_DECISION.md: Why we chose Temporal over Prefect
- IMPLEMENTATION_STATUS.md: Migration progress tracking
- workers/README.md: Worker development guide

## Dependencies
- Added: temporalio>=1.6.0
- Added: boto3>=1.34.0 (MinIO S3 client)
- Removed: prefect>=3.4.18
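
For reference, a minimal boto3 sketch of talking to MinIO through the S3 API (the endpoint, credentials, bucket, and key below are placeholder assumptions, not the project's actual configuration):

```python
import boto3

# Placeholder endpoint/credentials; real values come from the compose environment.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin",
)

# Upload a target tarball, then fetch it back on a worker.
s3.upload_file("vulnerable_app.tar.gz", "targets", "run-id/vulnerable_app.tar.gz")
s3.download_file("targets", "run-id/vulnerable_app.tar.gz", "/tmp/target.tar.gz")
```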

* feat: Add Python fuzzing vertical with Atheris integration

This commit implements a complete Python fuzzing workflow using Atheris:

## Python Worker (workers/python/)
- Dockerfile with Python 3.11, Atheris, and build tools
- Generic worker.py for dynamic workflow discovery
- requirements.txt with temporalio, boto3, atheris dependencies
- Added to docker-compose.temporal.yaml with dedicated cache volume

## AtherisFuzzer Module (backend/toolbox/modules/fuzzer/)
- Reusable module extending BaseModule
- Auto-discovers fuzz targets (fuzz_*.py, *_fuzz.py, fuzz_target.py)
- Recursive search to find targets in nested directories
- Dynamically loads TestOneInput() function
- Configurable max_iterations and timeout
- Real-time stats callback support for live monitoring
- Returns findings as ModuleFinding objects
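
A rough sketch of how such discovery and dynamic loading can work (illustrative only; the module's real implementation may differ):

```python
import importlib.util
from pathlib import Path

FUZZ_PATTERNS = ("fuzz_*.py", "*_fuzz.py", "fuzz_target.py")


def find_fuzz_targets(root: Path) -> list[Path]:
    """Recursively collect files matching the fuzz-target naming patterns."""
    targets: set[Path] = set()
    for pattern in FUZZ_PATTERNS:
        targets.update(root.rglob(pattern))
    return sorted(targets)


def load_test_one_input(target: Path):
    """Import the target module and return its TestOneInput() callable."""
    spec = importlib.util.spec_from_file_location(target.stem, target)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module.TestOneInput
```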

## Atheris Fuzzing Workflow (backend/toolbox/workflows/atheris_fuzzing/)
- Temporal workflow for orchestrating fuzzing
- Downloads user code from MinIO
- Executes AtherisFuzzer module
- Uploads results to MinIO
- Cleans up cache after execution
- metadata.yaml with vertical: python for routing

## Test Project (test_projects/python_fuzz_waterfall/)
- Demonstrates stateful waterfall vulnerability
- main.py with check_secret() that leaks progress
- fuzz_target.py with Atheris TestOneInput() harness
- Complete README with usage instructions
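
A minimal sketch of what such a harness looks like (the shipped test project may differ in detail; the secret value here is made up):

```python
# fuzz_target.py -- illustrative Atheris harness for a waterfall-style check
import sys
import atheris

SECRET = b"FUZZ"  # made-up secret for illustration


def check_secret(data: bytes) -> None:
    # Byte-by-byte comparison leaks progress: each correct prefix byte yields
    # new coverage, which the coverage-guided fuzzer exploits step by step.
    for i, expected in enumerate(SECRET):
        if i >= len(data) or data[i] != expected:
            return
    raise RuntimeError("secret reached")


def TestOneInput(data: bytes) -> None:
    check_secret(data)


atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()
```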

## Backend Fixes
- Fixed parameter merging in REST API endpoints (workflows.py)
- Changed workflow parameter passing from positional args to kwargs (manager.py)
- Default parameters now properly merged with user parameters

## Testing
- Worker discovered AtherisFuzzingWorkflow
- Workflow executed end-to-end successfully
- Fuzz target auto-discovered in nested directories
- Atheris ran 100,000 iterations
- Results uploaded and cache cleaned

* chore: Complete Temporal migration with updated CLI/SDK/docs

This commit includes all remaining Temporal migration changes:

## CLI Updates (cli/)
- Updated workflow execution commands for Temporal
- Enhanced error handling and exceptions
- Updated dependencies in uv.lock

## SDK Updates (sdk/)
- Client methods updated for Temporal workflows
- Updated models for new workflow execution
- Updated dependencies in uv.lock

## Documentation Updates (docs/)
- Architecture documentation for Temporal
- Workflow concept documentation
- Resource management documentation (new)
- Debugging guide (new)
- Updated tutorials and how-to guides
- Troubleshooting updates

## README Updates
- Main README with Temporal instructions
- Backend README
- CLI README
- SDK README

## Other
- Updated IMPLEMENTATION_STATUS.md
- Removed old vulnerable_app.tar.gz

These changes complete the Temporal migration and ensure the
CLI/SDK work correctly with the new backend.

* fix: Use positional args instead of kwargs for Temporal workflows

The Temporal Python SDK's start_workflow() method doesn't accept
a 'kwargs' parameter. Workflows must receive parameters as positional
arguments via the 'args' parameter.

Changed to:
  args=workflow_args  # Positional arguments
This fixes the error:
  TypeError: Client.start_workflow() got an unexpected keyword argument 'kwargs'

Workflows now correctly receive parameters in order:
- security_assessment: [target_id, scanner_config, analyzer_config, reporter_config]
- atheris_fuzzing: [target_id, target_file, max_iterations, timeout_seconds]
- rust_test: [target_id, test_message]
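
For reference, a minimal sketch of the corrected call shape with the temporalio client (the workflow id and task queue below are placeholders):

```python
from temporalio.client import Client


async def submit(client: Client, workflow_args: list) -> None:
    # temporalio's Client.start_workflow() has no kwargs= parameter;
    # multiple positional workflow arguments go in the args= sequence.
    await client.start_workflow(
        "SecurityAssessmentWorkflow",   # workflow type name
        args=workflow_args,             # e.g. [target_id, scanner_config, ...]
        id="example-run-id",            # placeholder run id
        task_queue="example-queue",     # placeholder task queue
    )
```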

* fix: Filter metadata-only parameters from workflow arguments

SecurityAssessmentWorkflow was receiving 7 arguments instead of 2-5.
The issue was that target_path and volume_mode from default_parameters
were being passed to the workflow, when they should only be used by
the system for configuration.

Now filters out metadata-only parameters (target_path, volume_mode)
before passing arguments to workflow execution.
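
Illustratively, the filtering step amounts to something like this (names are a sketch, not the exact manager.py code):

```python
METADATA_ONLY_PARAMS = {"target_path", "volume_mode"}


def build_workflow_args(merged_params: dict) -> list:
    """Drop system-only keys so the workflow sees only its declared parameters."""
    return [value for key, value in merged_params.items()
            if key not in METADATA_ONLY_PARAMS]
```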

* refactor: Remove Prefect leftovers and volume mounting legacy

Complete cleanup of Prefect migration artifacts:

Backend:
- Delete registry.py and workflow_discovery.py (Prefect-specific files)
- Remove Docker validation from setup.py (no longer needed)
- Remove ResourceLimits and VolumeMount models
- Remove target_path and volume_mode from WorkflowSubmission
- Remove supported_volume_modes from API and discovery
- Clean up metadata.yaml files (remove volume/path fields)
- Simplify parameter filtering in manager.py

SDK:
- Remove volume_mode parameter from client methods
- Remove ResourceLimits and VolumeMount models
- Remove Prefect error patterns from docker_logs.py
- Clean up WorkflowSubmission and WorkflowMetadata models

CLI:
- Remove Volume Modes display from workflow info

All removed features are Prefect-specific or Docker volume mounting
artifacts. Temporal workflows use MinIO storage exclusively.

* feat: Add comprehensive test suite and benchmark infrastructure

- Add 68 unit tests for fuzzer, scanner, and analyzer modules
- Implement pytest-based test infrastructure with fixtures
- Add 6 performance benchmarks with category-specific thresholds
- Configure GitHub Actions for automated testing and benchmarking
- Add test and benchmark documentation

Test coverage:
- AtherisFuzzer: 8 tests
- CargoFuzzer: 14 tests
- FileScanner: 22 tests
- SecurityAnalyzer: 24 tests

All tests passing (68/68)
All benchmarks passing (6/6)
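
The tests follow the usual pytest fixture style; a hypothetical example of the shape (not one of the actual 68 tests):

```python
import pytest
from pathlib import Path


@pytest.fixture
def project_with_target(tmp_path: Path) -> Path:
    (tmp_path / "fuzz_target.py").write_text("def TestOneInput(data): pass\n")
    return tmp_path


def test_fuzz_target_is_discovered(project_with_target: Path):
    targets = list(project_with_target.rglob("fuzz_*.py"))
    assert targets, "expected at least one fuzz target to be discovered"
```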

* fix: Resolve all ruff linting violations across codebase

Fixed 27 ruff violations in 12 files:
- Removed unused imports (Depends, Dict, Any, Optional, etc.)
- Fixed undefined workflow_info variable in workflows.py
- Removed dead code with undefined variables in atheris_fuzzer.py
- Changed f-string to regular string where no placeholders used

All files now pass ruff checks for CI/CD compliance.

* fix: Configure CI for unit tests only

- Renamed docker-compose.temporal.yaml → docker-compose.yml for CI compatibility
- Commented out integration-tests job (no integration tests yet)
- Updated test-summary to only depend on lint and unit-tests

CI will now run successfully with 68 unit tests. Integration tests can be added later.

* feat: Add CI/CD integration with ephemeral deployment model

Implements comprehensive CI/CD support for FuzzForge with on-demand worker management:

**Worker Management (v0.7.0)**
- Add WorkerManager for automatic worker lifecycle control
- Auto-start workers from stopped state when workflows execute
- Auto-stop workers after workflow completion
- Health checks and startup timeout handling (90s default)
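
Conceptually, the auto-start/auto-stop behaviour wraps workflow execution like this (a sketch assuming a WorkerManager with start/stop coroutines; the real interface may differ):

```python
import contextlib


@contextlib.asynccontextmanager
async def ephemeral_worker(manager, vertical: str, timeout: float = 90.0):
    # Start the vertical's worker from a stopped state, wait for its health
    # check, run the workflow body, then guarantee shutdown afterwards.
    await manager.start_worker(vertical, timeout=timeout)
    try:
        yield
    finally:
        await manager.stop_worker(vertical)
```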

**CI/CD Features**
- `--fail-on` flag: Fail builds based on SARIF severity levels (error/warning/note/info)
- `--export-sarif` flag: Export findings in SARIF 2.1.0 format
- `--auto-start`/`--auto-stop` flags: Control worker lifecycle
- Exit code propagation: Returns 1 on blocking findings, 0 on success
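
A sketch of the severity gate's logic (the SARIF field names are standard; the CLI's exact implementation may differ):

```python
import json

# SARIF result levels, most to least severe, as listed for --fail-on.
LEVELS = ["error", "warning", "note", "info"]


def should_fail(sarif_path: str, fail_on: str) -> bool:
    """Return True if any finding is at or above the --fail-on level."""
    with open(sarif_path) as f:
        sarif = json.load(f)
    threshold = LEVELS.index(fail_on)
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            level = result.get("level", "warning")
            if level in LEVELS and LEVELS.index(level) <= threshold:
                return True
    return False
```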

**Exit Code Fix**
- Add `except typer.Exit: raise` handlers at 3 critical locations
- Move worker cleanup to finally block for guaranteed execution
- Exit codes now propagate correctly even when build fails
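
The pattern is the standard typer idiom; a minimal sketch (the scan and cleanup helpers are hypothetical stand-ins):

```python
import typer


def do_scan() -> bool:
    """Hypothetical scan step; returns True if findings should block the build."""
    return True


def cleanup_workers() -> None:
    """Hypothetical worker cleanup."""


def run_command() -> None:
    try:
        if do_scan():
            raise typer.Exit(code=1)
    except typer.Exit:
        raise  # re-raise so the exit code propagates instead of being swallowed
    except Exception as exc:
        typer.echo(f"Error: {exc}")
        raise typer.Exit(code=1)
    finally:
        cleanup_workers()  # guaranteed to run even when the build fails
```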

**CI Scripts & Examples**
- ci-start.sh: Start FuzzForge services with health checks
- ci-stop.sh: Clean shutdown with volume preservation option
- GitHub Actions workflow example (security-scan.yml)
- GitLab CI pipeline example (.gitlab-ci.example.yml)
- docker-compose.ci.yml: CI-optimized compose file with profiles

**OSS-Fuzz Integration**
- New ossfuzz_campaign workflow for running OSS-Fuzz projects
- OSS-Fuzz worker with Docker-in-Docker support
- Configurable campaign duration and project selection

**Documentation**
- Comprehensive CI/CD integration guide (docs/how-to/cicd-integration.md)
- Updated architecture docs with worker lifecycle details
- Updated workspace isolation documentation
- CLI README with worker management examples

**SDK Enhancements**
- Add get_workflow_worker_info() endpoint
- Worker vertical metadata in workflow responses

**Testing**
- All workflows tested: security_assessment, atheris_fuzzing, secret_detection, cargo_fuzzing
- All monitoring commands tested: stats, crashes, status, finding
- Full CI pipeline simulation verified
- Exit codes verified for success/failure scenarios

Ephemeral CI/CD model: ~3-4GB RAM, ~60-90s startup, runs entirely in CI containers.

* fix: Resolve ruff linting violations in CI/CD code

- Remove unused variables (run_id, defaults, result)
- Remove unused imports
- Fix f-string without placeholders

All CI/CD integration files now pass ruff checks.
2025-10-14 10:13:45 +02:00

285 lines
9.8 KiB
Python

#!/usr/bin/env python3
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.

"""
Real-time fuzzing monitoring example.

This example demonstrates how to:
1. Submit a fuzzing workflow
2. Monitor fuzzing progress in real-time using WebSocket or SSE
3. Display live statistics and crash reports
4. Handle real-time data updates
"""

import asyncio
import signal
import sys
from pathlib import Path
from datetime import datetime

from fuzzforge_sdk import FuzzForgeClient
from fuzzforge_sdk.utils import (
    create_workflow_submission,
    create_resource_limits,
    format_duration,
    format_execution_rate
)


class FuzzingMonitor:
    """Real-time fuzzing monitor with graceful shutdown."""

    def __init__(self, client: FuzzForgeClient):
        self.client = client
        self.running = True
        self.run_id = None

    def signal_handler(self, signum, frame):
        """Handle shutdown signals gracefully."""
        print(f"\n🛑 Received signal {signum}, shutting down...")
        self.running = False

    async def monitor_websocket(self, run_id: str):
        """Monitor fuzzing via WebSocket."""
        print("🔌 Starting WebSocket monitoring...")
        try:
            async for message in self.client.monitor_fuzzing_websocket(run_id):
                if not self.running:
                    break
                if message.type == "stats_update":
                    self.display_stats(message.data)
                elif message.type == "crash_report":
                    self.display_crash(message.data)
                elif message.type == "heartbeat":
                    print("💓 Heartbeat")
                else:
                    print(f"📨 Received: {message.type}")
        except KeyboardInterrupt:
            print("\n⏹️ Interrupted by user")
        except Exception as e:
            print(f"❌ WebSocket error: {e}")

    def monitor_sse(self, run_id: str):
        """Monitor fuzzing via Server-Sent Events."""
        print("📡 Starting SSE monitoring...")
        try:
            for message in self.client.monitor_fuzzing_sse(run_id):
                if not self.running:
                    break
                if message.type == "stats":
                    self.display_stats(message.data)
                elif message.type == "crash":
                    self.display_crash(message.data)
                else:
                    print(f"📨 Received: {message.type}")
        except KeyboardInterrupt:
            print("\n⏹️ Interrupted by user")
        except Exception as e:
            print(f"❌ SSE error: {e}")

    def display_stats(self, stats_data):
        """Display fuzzing statistics."""
        # Clear screen and move cursor to top
        print("\033[2J\033[H", end="")

        print("🎯 FuzzForge Live Fuzzing Monitor")
        print("=" * 50)
        print(f"Run ID: {stats_data.get('run_id', 'unknown')}")
        print(f"Workflow: {stats_data.get('workflow', 'unknown')}")
        print()

        # Statistics
        executions = stats_data.get('executions', 0)
        exec_per_sec = stats_data.get('executions_per_sec', 0.0)
        crashes = stats_data.get('crashes', 0)
        unique_crashes = stats_data.get('unique_crashes', 0)
        coverage = stats_data.get('coverage')
        corpus_size = stats_data.get('corpus_size', 0)
        elapsed_time = stats_data.get('elapsed_time', 0)

        print("📊 Statistics:")
        print(f" Executions: {executions:,}")
        print(f" Rate: {format_execution_rate(exec_per_sec)}")
        print(f" Runtime: {format_duration(elapsed_time)}")
        print(f" Corpus size: {corpus_size:,}")
        if coverage is not None:
            print(f" Coverage: {coverage:.1f}%")
        print()

        print("💥 Crashes:")
        print(f" Total crashes: {crashes}")
        print(f" Unique crashes: {unique_crashes}")

        last_crash = stats_data.get('last_crash_time')
        if last_crash:
            crash_time = datetime.fromisoformat(last_crash.replace('Z', '+00:00'))
            print(f" Last crash: {crash_time.strftime('%H:%M:%S')}")
        print()
        print("Press Ctrl+C to stop monitoring")
        print("-" * 50)

    def display_crash(self, crash_data):
        """Display new crash report."""
        print("\n🚨 NEW CRASH DETECTED!")
        print(f" Crash ID: {crash_data.get('crash_id')}")
        print(f" Signal: {crash_data.get('signal', 'unknown')}")
        print(f" Type: {crash_data.get('crash_type', 'unknown')}")
        print(f" Severity: {crash_data.get('severity', 'unknown')}")
        if crash_data.get('input_file'):
            print(f" Input file: {crash_data['input_file']}")
        print("-" * 30)


async def main():
    """Main fuzzing monitoring example."""
    # Initialize client
    client = FuzzForgeClient(base_url="http://localhost:8000")
    monitor = FuzzingMonitor(client)

    # Set up signal handlers
    signal.signal(signal.SIGINT, monitor.signal_handler)
    signal.signal(signal.SIGTERM, monitor.signal_handler)

    try:
        # Check API status
        print("🔗 Connecting to FuzzForge API...")
        status = await client.aget_api_status()
        print(f"✅ Connected to {status.name} v{status.version}\n")

        # List workflows and find fuzzing ones
        workflows = await client.alist_workflows()
        fuzzing_workflows = [
            w for w in workflows
            if "fuzz" in w.name.lower() or "fuzzing" in w.tags
        ]

        if not fuzzing_workflows:
            print("❌ No fuzzing workflows found")
            print("Available workflows:")
            for w in workflows:
                print(f"{w.name} (tags: {w.tags})")
            return

        # Select first fuzzing workflow
        selected_workflow = fuzzing_workflows[0]
        print(f"🎯 Selected fuzzing workflow: {selected_workflow.name}")

        # Create submission with fuzzing-appropriate settings
        target_path = Path.cwd().absolute()

        # Set longer timeout and resource limits for fuzzing
        resource_limits = create_resource_limits(
            cpu_limit="2",        # 2 CPU cores
            memory_limit="4Gi",   # 4GB memory
            cpu_request="1",      # Guarantee 1 core
            memory_request="2Gi"  # Guarantee 2GB
        )

        submission = create_workflow_submission(
            target_path=target_path,
            volume_mode="rw",  # Fuzzing may need to write files
            timeout=3600,      # 1 hour timeout
            resource_limits=resource_limits,
            parameters={
                "max_len": 1024,  # Maximum input length
                "timeout": 10,    # Per-execution timeout
                "runs": 1000000,  # Number of executions
            }
        )

        print("🚀 Submitting fuzzing workflow...")
        response = await client.asubmit_workflow(selected_workflow.name, submission)
        monitor.run_id = response.run_id

        print("✅ Fuzzing started!")
        print(f" Run ID: {response.run_id}")
        print(f" Initial status: {response.status}")
        print()

        # Wait a moment for fuzzing to initialize
        await asyncio.sleep(5)

        # Get initial stats to verify fuzzing is tracked
        try:
            stats = await client.aget_fuzzing_stats(response.run_id)
            print(f"📊 Fuzzing tracking initialized for workflow: {stats.workflow}")
        except Exception as e:
            print(f"⚠️ Warning: Fuzzing tracking not available: {e}")
            print(" Monitoring will show run status updates only")

        # Choose monitoring method
        if len(sys.argv) > 1 and sys.argv[1] == "--sse":
            print("📡 Using Server-Sent Events for monitoring...")
            monitor.monitor_sse(response.run_id)
        else:
            print("🔌 Using WebSocket for monitoring...")
            await monitor.monitor_websocket(response.run_id)

    except KeyboardInterrupt:
        print("\n⏹️ Interrupted by user")
    except Exception as e:
        print(f"❌ Error: {e}")
    finally:
        # Cleanup
        if monitor.run_id:
            try:
                print(f"\n🧹 Cleaning up fuzzing run {monitor.run_id}...")
                await client.acleanup_fuzzing_run(monitor.run_id)
                print("✅ Cleanup completed")
            except Exception as e:
                print(f"⚠️ Cleanup failed: {e}")
        await client.aclose()


def sync_monitor_example():
    """Example of synchronous SSE monitoring."""
    client = FuzzForgeClient(base_url="http://localhost:8000")
    try:
        # This would require a pre-existing fuzzing run
        run_id = input("Enter fuzzing run ID to monitor: ").strip()
        if not run_id:
            print("❌ Run ID required")
            return

        print(f"📡 Monitoring fuzzing run: {run_id}")
        print("Press Ctrl+C to stop")
        print()

        monitor = FuzzingMonitor(client)
        monitor.monitor_sse(run_id)
    except KeyboardInterrupt:
        print("\n⏹️ Monitoring stopped")
    except Exception as e:
        print(f"❌ Error: {e}")
    finally:
        client.close()


if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--sync":
        print("🔄 Running synchronous SSE monitoring...")
        sync_monitor_example()
    else:
        print("🔄 Running async WebSocket monitoring...")
        print("💡 Use --sse flag for Server-Sent Events")
        print("💡 Use --sync flag for synchronous monitoring")
        asyncio.run(main())