# FuzzForge Vertical Workers
This directory contains vertical-specific worker implementations for the Temporal architecture.
## Architecture
Each vertical worker is a long-lived container pre-built with domain-specific security toolchains:
```
workers/
├── rust/         # Rust/Native security (AFL++, cargo-fuzz, gdb, valgrind)
├── android/      # Android security (apktool, Frida, jadx, MobSF)
├── web/          # Web security (OWASP ZAP, semgrep, eslint)
├── ios/          # iOS security (class-dump, Clutch, Frida)
├── blockchain/   # Smart contract security (mythril, slither, echidna)
└── go/           # Go security (go-fuzz, staticcheck, gosec)
```
## How It Works
- Worker Startup: Worker discovers workflows from `/app/toolbox/workflows`
- Filtering: Only loads workflows where `metadata.yaml` has `vertical: <name>`
- Dynamic Import: Dynamically imports workflow Python modules (see the sketch after this list)
- Registration: Registers discovered workflows with Temporal
- Processing: Polls the Temporal task queue for work
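The discovery and filtering steps boil down to a loop like the one below. This is a minimal sketch, not the shipped `worker.py`: it assumes PyYAML is available in the worker image and uses a naming convention (`*Workflow` classes) in place of the SDK's `@workflow.defn` marker check.

```python
# Sketch of the discovery/filtering step (illustrative, not the real worker.py).
import importlib.util
from pathlib import Path

import yaml  # assumption: PyYAML is installed in the worker image


def discover_workflows(toolbox: Path, vertical: str) -> list[type]:
    """Return workflow classes whose metadata.yaml declares this worker's vertical."""
    discovered: list[type] = []
    for meta_file in toolbox.glob("workflows/*/metadata.yaml"):
        meta = yaml.safe_load(meta_file.read_text()) or {}
        if meta.get("vertical") != vertical:
            continue  # Filtering: this workflow belongs to another vertical

        # Dynamic Import: load the sibling workflow.py as a module
        wf_path = meta_file.parent / "workflow.py"
        spec = importlib.util.spec_from_file_location(meta_file.parent.name, wf_path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)

        # Collect workflow classes (simplified; the real worker inspects @workflow.defn)
        for obj in vars(module).values():
            if isinstance(obj, type) and obj.__name__.endswith("Workflow"):
                discovered.append(obj)
    return discovered
```

The worker then hands the discovered classes to the Temporal `Worker` at registration time (see the `worker.py` sketch further down).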
## Adding a New Vertical

### Step 1: Create Worker Directory

```bash
mkdir -p workers/my_vertical
cd workers/my_vertical
```
### Step 2: Create Dockerfile

```dockerfile
# workers/my_vertical/Dockerfile
FROM python:3.11-slim

# Install your vertical-specific tools
RUN apt-get update && apt-get install -y \
    tool1 \
    tool2 \
    tool3 \
    && rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt /tmp/
RUN pip install --no-cache-dir -r /tmp/requirements.txt

# Copy worker files
COPY worker.py /app/worker.py
COPY activities.py /app/activities.py

WORKDIR /app
ENV PYTHONPATH="/app:/app/toolbox:${PYTHONPATH}"
ENV PYTHONUNBUFFERED=1

CMD ["python", "worker.py"]
```
### Step 3: Copy Worker Files

```bash
# Copy from the rust worker as a template
cp workers/rust/worker.py workers/my_vertical/
cp workers/rust/activities.py workers/my_vertical/
cp workers/rust/requirements.txt workers/my_vertical/
```
Note: The `worker.py` and `activities.py` files are generic and work for all verticals. You only need to customize the Dockerfile with your tools.
### Step 4: Add to docker-compose.yml

Add `profiles` to prevent auto-start:
```yaml
worker-my-vertical:
  build:
    context: ./workers/my_vertical
    dockerfile: Dockerfile
  container_name: fuzzforge-worker-my-vertical
  profiles:                 # ← Prevents auto-start (saves RAM)
    - workers
    - my_vertical
  depends_on:
    temporal:
      condition: service_healthy
    minio:
      condition: service_healthy
  environment:
    TEMPORAL_ADDRESS: temporal:7233
    WORKER_VERTICAL: my_vertical          # ← Important: matches metadata.yaml
    WORKER_TASK_QUEUE: my-vertical-queue
    MAX_CONCURRENT_ACTIVITIES: 5
    # MinIO configuration (same for all workers)
    STORAGE_BACKEND: s3
    S3_ENDPOINT: http://minio:9000
    S3_ACCESS_KEY: fuzzforge
    S3_SECRET_KEY: fuzzforge123
    S3_BUCKET: targets
    CACHE_DIR: /cache
  volumes:
    - ./backend/toolbox:/app/toolbox:ro
    - worker_my_vertical_cache:/cache
  networks:
    - fuzzforge-network
  restart: "no"             # ← Don't auto-restart
```
**Why profiles?** Workers are pre-built but don't auto-start, saving ~1-2GB RAM per worker when idle.
### Step 5: Add Volume

```yaml
volumes:
  worker_my_vertical_cache:
    name: fuzzforge_worker_my_vertical_cache
```
### Step 6: Create Workflows for Your Vertical

```bash
mkdir -p backend/toolbox/workflows/my_workflow
```
`metadata.yaml`:
```yaml
name: my_workflow
version: 1.0.0
vertical: my_vertical   # ← Must match WORKER_VERTICAL
```
`workflow.py`:
```python
from datetime import timedelta

from temporalio import workflow


@workflow.defn
class MyWorkflow:
    @workflow.run
    async def run(self, target_id: str) -> dict:
        # Download target
        target_path = await workflow.execute_activity(
            "get_target",
            target_id,
            start_to_close_timeout=timedelta(minutes=5),
        )

        # Your analysis logic here
        results = {"status": "success"}

        # Cleanup
        await workflow.execute_activity(
            "cleanup_cache",
            target_path,
            start_to_close_timeout=timedelta(minutes=1),
        )
        return results
```
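For context, this is roughly how a Temporal client submits the workflow onto the vertical's task queue; in FuzzForge the backend does this for you when you run a workflow, and the address and workflow ID below are illustrative.

```python
# Illustrative client-side submission; normally the FuzzForge backend handles this.
import asyncio

from temporalio.client import Client


async def main() -> None:
    client = await Client.connect("localhost:7233")
    result = await client.execute_workflow(
        "MyWorkflow",                    # workflow type registered by the worker
        "my-target-id",                  # positional argument for run(target_id)
        id="my-workflow-example-run",    # illustrative workflow ID
        task_queue="my-vertical-queue",  # must match WORKER_TASK_QUEUE
    )
    print(result)


asyncio.run(main())
```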
### Step 7: Test

```bash
# Start services
docker-compose -f docker-compose.temporal.yaml up -d

# Check worker logs
docker logs -f fuzzforge-worker-my-vertical

# You should see:
# "Discovered workflow: MyWorkflow from my_workflow (vertical: my_vertical)"
```
## Worker Components

### worker.py
Generic worker entrypoint. Handles:
- Workflow discovery from the mounted `/app/toolbox`
- Dynamic import of workflow modules
- Connection to Temporal
- Task queue polling
No customization needed - works for all verticals.
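Condensed, the entrypoint does roughly the following. This is a sketch that reuses the `discover_workflows` helper sketched under "How It Works" and assumes the shared activities in `activities.py` are plain functions with the names listed in the next section; the shipped `worker.py` adds logging and error handling on top.

```python
# Simplified worker entrypoint (sketch; the shipped worker.py does more).
import asyncio
import os
from pathlib import Path

from temporalio.client import Client
from temporalio.worker import Worker

# Shared storage activities (names assumed to match the section below)
from activities import cleanup_cache, get_target, upload_results


async def main() -> None:
    vertical = os.environ.get("WORKER_VERTICAL", "rust")
    client = await Client.connect(
        os.environ.get("TEMPORAL_ADDRESS", "localhost:7233"),
        namespace=os.environ.get("TEMPORAL_NAMESPACE", "default"),
    )
    worker = Worker(
        client,
        task_queue=os.environ.get("WORKER_TASK_QUEUE", f"{vertical}-queue"),
        workflows=discover_workflows(Path("/app/toolbox"), vertical),  # see earlier sketch
        activities=[get_target, cleanup_cache, upload_results],
        max_concurrent_activities=int(os.environ.get("MAX_CONCURRENT_ACTIVITIES", "5")),
    )
    await worker.run()  # polls the task queue until the process is stopped


if __name__ == "__main__":
    asyncio.run(main())
```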
### activities.py
Common activities available to all workflows:
- `get_target(target_id: str) -> str`: Download target from MinIO (sketched below)
- `cleanup_cache(target_path: str) -> None`: Remove cached target
- `upload_results(workflow_id, results, format) -> str`: Upload results to MinIO
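For orientation, a storage activity like `get_target` might look roughly like the sketch below. This is an assumption about the implementation (boto3 against MinIO, target stored as a gzipped tarball keyed by `target_id`), not the shipped code.

```python
# Hypothetical sketch of a MinIO-backed get_target activity (not the shipped code).
import os
import tarfile
from pathlib import Path

import boto3
from temporalio import activity


@activity.defn(name="get_target")
async def get_target(target_id: str) -> str:
    """Download the uploaded target from MinIO and unpack it into the local cache."""
    s3 = boto3.client(
        "s3",
        endpoint_url=os.environ.get("S3_ENDPOINT", "http://minio:9000"),
        aws_access_key_id=os.environ.get("S3_ACCESS_KEY", "fuzzforge"),
        aws_secret_access_key=os.environ.get("S3_SECRET_KEY", "fuzzforge123"),
    )
    cache_dir = Path(os.environ.get("CACHE_DIR", "/cache")) / target_id
    cache_dir.mkdir(parents=True, exist_ok=True)

    # Assumed object layout: bucket "targets", object key == target_id, gzipped tarball
    archive = cache_dir / "target.tar.gz"
    s3.download_file(os.environ.get("S3_BUCKET", "targets"), target_id, str(archive))
    with tarfile.open(archive) as tar:
        tar.extractall(cache_dir)  # blocking I/O kept simple for the sketch
    return str(cache_dir)
```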
`activities.py` can also be extended with vertical-specific activities:
```python
# workers/my_vertical/activities.py
from temporalio import activity


@activity.defn(name="my_custom_activity")
async def my_custom_activity(input_data: str) -> str:
    # Your vertical-specific logic
    return "result"

# Add to the activities list in worker.py:
# activities=[..., my_custom_activity]
```
### Dockerfile

The only component that needs customization for each vertical. Install your tools here.
## Configuration

### Environment Variables
All workers support these environment variables:
| Variable | Default | Description |
|---|---|---|
| `TEMPORAL_ADDRESS` | `localhost:7233` | Temporal server address |
| `TEMPORAL_NAMESPACE` | `default` | Temporal namespace |
| `WORKER_VERTICAL` | `rust` | Vertical name (must match `metadata.yaml`) |
| `WORKER_TASK_QUEUE` | `{vertical}-queue` | Task queue name |
| `MAX_CONCURRENT_ACTIVITIES` | `5` | Max concurrent activities per worker |
| `S3_ENDPOINT` | `http://minio:9000` | MinIO/S3 endpoint |
| `S3_ACCESS_KEY` | `fuzzforge` | S3 access key |
| `S3_SECRET_KEY` | `fuzzforge123` | S3 secret key |
| `S3_BUCKET` | `targets` | Bucket for uploaded targets |
| `CACHE_DIR` | `/cache` | Local cache directory |
| `CACHE_MAX_SIZE` | `10GB` | Max cache size (not enforced yet) |
| `LOG_LEVEL` | `INFO` | Logging level |
## Scaling

### Vertical Scaling (More Work Per Worker)
Increase concurrent activities:
```yaml
environment:
  MAX_CONCURRENT_ACTIVITIES: 10   # Handle 10 tasks at once
```
### Horizontal Scaling (More Workers)

```bash
# Scale to 3 workers for the rust vertical
docker-compose -f docker-compose.temporal.yaml up -d --scale worker-rust=3

# Each worker polls the same task queue;
# Temporal automatically load balances between them
```
## Troubleshooting

### Worker Not Discovering Workflows
Check:
- Volume mount is correct: `./backend/toolbox:/app/toolbox:ro`
- Workflow has `metadata.yaml` with the correct `vertical:` field
- Workflow has `workflow.py` with a `@workflow.defn`-decorated class
- Worker logs show the discovery attempt
### Cannot Connect to Temporal
Check:
- Temporal container is healthy: `docker ps`
- Network connectivity: `docker exec worker-rust ping temporal`
- `TEMPORAL_ADDRESS` environment variable is correct
### Cannot Download from MinIO
Check:
- MinIO is healthy: `docker ps`
- Buckets exist: `docker exec fuzzforge-minio mc ls fuzzforge/targets`
- S3 credentials are correct
- Target was uploaded: check the MinIO console at http://localhost:9001
### Activity Timeouts
Increase timeout in workflow:
```python
await workflow.execute_activity(
    "my_activity",
    args,
    start_to_close_timeout=timedelta(hours=2),  # Increase from the default
)
```
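Activities are also retried by Temporal according to a retry policy. If a long-running tool legitimately needs more time, pair the longer timeout with an explicit policy so failures aren't masked by repeated retries; the values below are illustrative, not FuzzForge defaults.

```python
from datetime import timedelta

from temporalio.common import RetryPolicy

await workflow.execute_activity(
    "my_activity",
    args,
    start_to_close_timeout=timedelta(hours=2),
    retry_policy=RetryPolicy(
        initial_interval=timedelta(seconds=10),  # wait before the first retry
        maximum_attempts=3,                      # stop retrying after three attempts
    ),
)
```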
## Best Practices
- Keep Dockerfiles lean: Only install necessary tools
- Use multi-stage builds: Reduce final image size
- Pin tool versions: Ensure reproducibility
- Log liberally: Helps debugging workflow issues
- Handle errors gracefully: Don't fail workflow for non-critical issues
- Test locally first: Use docker-compose before deploying
## On-Demand Worker Management
Workers use Docker Compose profiles and a CLI-managed lifecycle to keep resource usage low.
### How It Works
- Build Time: `docker-compose build` creates all worker images
- Startup: Workers DON'T auto-start with `docker-compose up -d`
- On Demand: The CLI starts workers automatically when workflows need them (sketched below)
- Shutdown: Optional auto-stop after workflow completion
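Under the hood this is ordinary Docker container lifecycle management. Below is a minimal sketch of the idea using the Docker SDK for Python (`pip install docker`); it illustrates the model, not the CLI's actual implementation.

```python
# Illustrative only: start a pre-built worker container on demand, stop it afterwards.
import docker  # Docker SDK for Python


def run_with_worker(container_name: str = "fuzzforge-worker-rust") -> None:
    client = docker.from_env()
    worker = client.containers.get(container_name)  # image was built by docker-compose build
    try:
        if worker.status != "running":
            worker.start()  # auto-start: bring the worker up only when needed
        # ... submit the workflow and wait for completion here ...
    finally:
        worker.stop()  # auto-stop: release the worker's RAM after completion
```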
### Manual Control

```bash
# Start specific worker
docker start fuzzforge-worker-ossfuzz

# Stop specific worker
docker stop fuzzforge-worker-ossfuzz

# Check worker status
docker ps --filter "name=fuzzforge-worker"
```
### CLI Auto-Management

```bash
# Auto-start enabled by default
ff workflow run ossfuzz_campaign . project_name=zlib

# Disable auto-start
ff workflow run ossfuzz_campaign . project_name=zlib --no-auto-start

# Auto-stop after completion
ff workflow run ossfuzz_campaign . project_name=zlib --wait --auto-stop
```
### Resource Savings
- Before: All workers running = ~8GB RAM
- After: Only core services running = ~1.2GB RAM
- Savings: ~6-7GB RAM when idle
## Examples
See existing verticals for examples:
- `workers/rust/` - Complete working example
- `backend/toolbox/workflows/rust_test/` - Simple test workflow