Mirror of https://github.com/FuzzingLabs/fuzzforge_ai.git (synced 2026-02-13 20:32:45 +00:00)
* feat: Complete migration from Prefect to Temporal

BREAKING CHANGE: Replaces Prefect workflow orchestration with Temporal

## Major Changes
- Replace Prefect with Temporal for workflow orchestration
- Implement vertical worker architecture (rust, android)
- Replace Docker registry with MinIO for unified storage
- Refactor activities to be co-located with workflows
- Update all API endpoints for Temporal compatibility

## Infrastructure
- New: docker-compose.temporal.yaml (Temporal + MinIO + workers)
- New: workers/ directory with rust and android vertical workers
- New: backend/src/temporal/ (manager, discovery)
- New: backend/src/storage/ (S3-cached storage with MinIO)
- New: backend/toolbox/common/ (shared storage activities)
- Deleted: docker-compose.yaml (old Prefect setup)
- Deleted: backend/src/core/prefect_manager.py
- Deleted: backend/src/services/prefect_stats_monitor.py
- Deleted: Docker registry and insecure-registries requirement

## Workflows
- Migrated: security_assessment workflow to Temporal
- New: rust_test workflow (example/test workflow)
- Deleted: secret_detection_scan (Prefect-based, to be reimplemented)
- Activities now co-located with workflows for independent testing

## API Changes
- Updated: backend/src/api/workflows.py (Temporal submission)
- Updated: backend/src/api/runs.py (Temporal status/results)
- Updated: backend/src/main.py (727 lines, TemporalManager integration)
- Updated: All 16 MCP tools to use TemporalManager

## Testing
- ✅ All services healthy (Temporal, PostgreSQL, MinIO, workers, backend)
- ✅ All API endpoints functional
- ✅ End-to-end workflow test passed (72 findings from vulnerable_app)
- ✅ MinIO storage integration working (target upload/download, results)
- ✅ Worker activity discovery working (6 activities registered)
- ✅ Tarball extraction working
- ✅ SARIF report generation working

## Documentation
- ARCHITECTURE.md: Complete Temporal architecture documentation
- QUICKSTART_TEMPORAL.md: Getting started guide
- MIGRATION_DECISION.md: Why we chose Temporal over Prefect
- IMPLEMENTATION_STATUS.md: Migration progress tracking
- workers/README.md: Worker development guide

## Dependencies
- Added: temporalio>=1.6.0
- Added: boto3>=1.34.0 (MinIO S3 client)
- Removed: prefect>=3.4.18

* feat: Add Python fuzzing vertical with Atheris integration

This commit implements a complete Python fuzzing workflow using Atheris:

## Python Worker (workers/python/)
- Dockerfile with Python 3.11, Atheris, and build tools
- Generic worker.py for dynamic workflow discovery
- requirements.txt with temporalio, boto3, atheris dependencies
- Added to docker-compose.temporal.yaml with dedicated cache volume

## AtherisFuzzer Module (backend/toolbox/modules/fuzzer/)
- Reusable module extending BaseModule
- Auto-discovers fuzz targets (fuzz_*.py, *_fuzz.py, fuzz_target.py)
- Recursive search to find targets in nested directories
- Dynamically loads TestOneInput() function
- Configurable max_iterations and timeout
- Real-time stats callback support for live monitoring
- Returns findings as ModuleFinding objects

## Atheris Fuzzing Workflow (backend/toolbox/workflows/atheris_fuzzing/)
- Temporal workflow for orchestrating fuzzing
- Downloads user code from MinIO
- Executes AtherisFuzzer module
- Uploads results to MinIO
- Cleans up cache after execution
- metadata.yaml with vertical: python for routing

## Test Project (test_projects/python_fuzz_waterfall/)
- Demonstrates stateful waterfall vulnerability
- main.py with check_secret() that leaks progress
- fuzz_target.py with Atheris TestOneInput() harness
- Complete README with usage instructions

## Backend Fixes
- Fixed parameter merging in REST API endpoints (workflows.py)
- Changed workflow parameter passing from positional args to kwargs (manager.py)
- Default parameters now properly merged with user parameters

## Testing
✅ Worker discovered AtherisFuzzingWorkflow
✅ Workflow executed end-to-end successfully
✅ Fuzz target auto-discovered in nested directories
✅ Atheris ran 100,000 iterations
✅ Results uploaded and cache cleaned

* chore: Complete Temporal migration with updated CLI/SDK/docs

This commit includes all remaining Temporal migration changes:

## CLI Updates (cli/)
- Updated workflow execution commands for Temporal
- Enhanced error handling and exceptions
- Updated dependencies in uv.lock

## SDK Updates (sdk/)
- Client methods updated for Temporal workflows
- Updated models for new workflow execution
- Updated dependencies in uv.lock

## Documentation Updates (docs/)
- Architecture documentation for Temporal
- Workflow concept documentation
- Resource management documentation (new)
- Debugging guide (new)
- Updated tutorials and how-to guides
- Troubleshooting updates

## README Updates
- Main README with Temporal instructions
- Backend README
- CLI README
- SDK README

## Other
- Updated IMPLEMENTATION_STATUS.md
- Removed old vulnerable_app.tar.gz

These changes complete the Temporal migration and ensure the CLI/SDK work correctly with the new backend.

* fix: Use positional args instead of kwargs for Temporal workflows

The Temporal Python SDK's start_workflow() method doesn't accept a 'kwargs' parameter. Workflows must receive parameters as positional arguments via the 'args' parameter.

Changed to: args=workflow_args  # Positional arguments

This fixes the error:
TypeError: Client.start_workflow() got an unexpected keyword argument 'kwargs'

Workflows now correctly receive parameters in order:
- security_assessment: [target_id, scanner_config, analyzer_config, reporter_config]
- atheris_fuzzing: [target_id, target_file, max_iterations, timeout_seconds]
- rust_test: [target_id, test_message]

* fix: Filter metadata-only parameters from workflow arguments

SecurityAssessmentWorkflow was receiving 7 arguments instead of 2-5. The issue was that target_path and volume_mode from default_parameters were being passed to the workflow, when they should only be used by the system for configuration.

Now filters out metadata-only parameters (target_path, volume_mode) before passing arguments to workflow execution.

* refactor: Remove Prefect leftovers and volume mounting legacy

Complete cleanup of Prefect migration artifacts:

Backend:
- Delete registry.py and workflow_discovery.py (Prefect-specific files)
- Remove Docker validation from setup.py (no longer needed)
- Remove ResourceLimits and VolumeMount models
- Remove target_path and volume_mode from WorkflowSubmission
- Remove supported_volume_modes from API and discovery
- Clean up metadata.yaml files (remove volume/path fields)
- Simplify parameter filtering in manager.py

SDK:
- Remove volume_mode parameter from client methods
- Remove ResourceLimits and VolumeMount models
- Remove Prefect error patterns from docker_logs.py
- Clean up WorkflowSubmission and WorkflowMetadata models

CLI:
- Remove Volume Modes display from workflow info

All removed features are Prefect-specific or Docker volume mounting artifacts. Temporal workflows use MinIO storage exclusively.
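A minimal sketch of the corrected submission call described in the "Use positional args" fix above: in the Temporal Python SDK, `Client.start_workflow()` takes workflow arguments as a sequence via the `args` keyword, and there is no `kwargs` parameter. The argument order follows the security_assessment list in the commit message; the server address, workflow name string, workflow ID scheme, and task queue name are illustrative assumptions, not FuzzForge's actual values.

```python
import asyncio

from temporalio.client import Client


async def submit_security_assessment(target_id: str, scanner: dict, analyzer: dict, reporter: dict) -> str:
    # Connection details and task queue are placeholders, not FuzzForge's real configuration.
    client = await Client.connect("localhost:7233")
    handle = await client.start_workflow(
        "SecurityAssessmentWorkflow",                   # workflow referenced by name (assumed)
        args=[target_id, scanner, analyzer, reporter],  # positional args, in the order the workflow expects
        id=f"security-assessment-{target_id}",
        task_queue="security-tasks",
    )
    return handle.id


if __name__ == "__main__":
    asyncio.run(submit_security_assessment("demo-target", {}, {}, {}))
```

Passing the same values as keyword arguments has no equivalent in this API, which is why the earlier kwargs-based call failed with the TypeError quoted above.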
* feat: Add comprehensive test suite and benchmark infrastructure

- Add 68 unit tests for fuzzer, scanner, and analyzer modules
- Implement pytest-based test infrastructure with fixtures
- Add 6 performance benchmarks with category-specific thresholds
- Configure GitHub Actions for automated testing and benchmarking
- Add test and benchmark documentation

Test coverage:
- AtherisFuzzer: 8 tests
- CargoFuzzer: 14 tests
- FileScanner: 22 tests
- SecurityAnalyzer: 24 tests

All tests passing (68/68)
All benchmarks passing (6/6)

* fix: Resolve all ruff linting violations across codebase

Fixed 27 ruff violations in 12 files:
- Removed unused imports (Depends, Dict, Any, Optional, etc.)
- Fixed undefined workflow_info variable in workflows.py
- Removed dead code with undefined variables in atheris_fuzzer.py
- Changed f-string to regular string where no placeholders used

All files now pass ruff checks for CI/CD compliance.

* fix: Configure CI for unit tests only

- Renamed docker-compose.temporal.yaml → docker-compose.yml for CI compatibility
- Commented out integration-tests job (no integration tests yet)
- Updated test-summary to only depend on lint and unit-tests

CI will now run successfully with 68 unit tests. Integration tests can be added later.

* feat: Add CI/CD integration with ephemeral deployment model

Implements comprehensive CI/CD support for FuzzForge with on-demand worker management:

**Worker Management (v0.7.0)**
- Add WorkerManager for automatic worker lifecycle control
- Auto-start workers from stopped state when workflows execute
- Auto-stop workers after workflow completion
- Health checks and startup timeout handling (90s default)

**CI/CD Features**
- `--fail-on` flag: Fail builds based on SARIF severity levels (error/warning/note/info)
- `--export-sarif` flag: Export findings in SARIF 2.1.0 format
- `--auto-start`/`--auto-stop` flags: Control worker lifecycle
- Exit code propagation: Returns 1 on blocking findings, 0 on success

**Exit Code Fix**
- Add `except typer.Exit: raise` handlers at 3 critical locations
- Move worker cleanup to finally block for guaranteed execution
- Exit codes now propagate correctly even when build fails

**CI Scripts & Examples**
- ci-start.sh: Start FuzzForge services with health checks
- ci-stop.sh: Clean shutdown with volume preservation option
- GitHub Actions workflow example (security-scan.yml)
- GitLab CI pipeline example (.gitlab-ci.example.yml)
- docker-compose.ci.yml: CI-optimized compose file with profiles

**OSS-Fuzz Integration**
- New ossfuzz_campaign workflow for running OSS-Fuzz projects
- OSS-Fuzz worker with Docker-in-Docker support
- Configurable campaign duration and project selection

**Documentation**
- Comprehensive CI/CD integration guide (docs/how-to/cicd-integration.md)
- Updated architecture docs with worker lifecycle details
- Updated workspace isolation documentation
- CLI README with worker management examples

**SDK Enhancements**
- Add get_workflow_worker_info() endpoint
- Worker vertical metadata in workflow responses

**Testing**
- All workflows tested: security_assessment, atheris_fuzzing, secret_detection, cargo_fuzzing
- All monitoring commands tested: stats, crashes, status, finding
- Full CI pipeline simulation verified
- Exit codes verified for success/failure scenarios

Ephemeral CI/CD model: ~3-4GB RAM, ~60-90s startup, runs entirely in CI containers.

* fix: Resolve ruff linting violations in CI/CD code

- Remove unused variables (run_id, defaults, result)
- Remove unused imports
- Fix f-string without placeholders

All CI/CD integration files now pass ruff checks.
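The `--fail-on` gating described in the CI/CD integration commit maps SARIF 2.1.0 result levels onto a pass/fail exit code. The sketch below only illustrates that logic; it is not FuzzForge's implementation, and the findings.sarif path and the treatment of "info" as the lowest rank are assumptions. SARIF itself defines the result levels none, note, warning, and error, with warning as the default.

```python
import json
import sys

# Rank SARIF result levels so that, e.g., --fail-on=warning also blocks "error" findings.
# "info"/"none" are treated as the lowest severity (assumption for this sketch).
_LEVEL_RANK = {"none": 0, "info": 0, "note": 1, "warning": 2, "error": 3}


def count_blocking_findings(sarif_path: str, fail_on: str) -> int:
    """Count results at or above the fail_on severity in a SARIF 2.1.0 report."""
    threshold = _LEVEL_RANK[fail_on]
    with open(sarif_path, encoding="utf-8") as fh:
        report = json.load(fh)
    blocking = 0
    for run in report.get("runs", []):
        for result in run.get("results", []):
            level = result.get("level", "warning")  # SARIF defaults a result's level to "warning"
            if _LEVEL_RANK.get(level, 0) >= threshold:
                blocking += 1
    return blocking


if __name__ == "__main__":
    findings = count_blocking_findings("findings.sarif", fail_on="warning")
    # Mirror the documented CI contract: exit 1 on blocking findings, 0 on success.
    sys.exit(1 if findings else 0)
```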
382 lines · 13 KiB · Python
"""
|
|
Configuration management commands.
|
|
"""
|
|
# Copyright (c) 2025 FuzzingLabs
|
|
#
|
|
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
|
|
# at the root of this repository for details.
|
|
#
|
|
# After the Change Date (four years from publication), this version of the
|
|
# Licensed Work will be made available under the Apache License, Version 2.0.
|
|
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
|
|
#
|
|
# Additional attribution and requirements are provided in the NOTICE file.
|
|
|
|
|
|
import typer
from pathlib import Path
from rich.console import Console
from rich.table import Table
from rich.panel import Panel
from rich.prompt import Confirm
from rich import box

from ..config import (
    get_project_config,
    get_global_config,
    save_global_config,
    FuzzForgeConfig
)
from ..exceptions import require_project, ValidationError, handle_error

console = Console()
app = typer.Typer()

@app.command("show")
|
|
def show_config(
|
|
global_config: bool = typer.Option(
|
|
False, "--global", "-g",
|
|
help="Show global configuration instead of project config"
|
|
)
|
|
):
|
|
"""
|
|
📋 Display current configuration settings
|
|
"""
|
|
if global_config:
|
|
config = get_global_config()
|
|
config_type = "Global"
|
|
config_path = Path.home() / ".config" / "fuzzforge" / "config.yaml"
|
|
else:
|
|
try:
|
|
require_project()
|
|
config = get_project_config()
|
|
if not config:
|
|
raise ValidationError("project configuration", "missing", "initialized project")
|
|
except Exception as e:
|
|
handle_error(e, "loading project configuration")
|
|
return # Unreachable, but makes static analysis happy
|
|
config_type = "Project"
|
|
config_path = Path.cwd() / ".fuzzforge" / "config.yaml"
|
|
|
|
console.print(f"\n⚙️ [bold]{config_type} Configuration[/bold]\n")
|
|
|
|
# Project settings
|
|
project_table = Table(show_header=False, box=box.SIMPLE)
|
|
project_table.add_column("Setting", style="bold cyan")
|
|
project_table.add_column("Value")
|
|
|
|
project_table.add_row("Project Name", config.project.name)
|
|
project_table.add_row("API URL", config.project.api_url)
|
|
project_table.add_row("Default Timeout", f"{config.project.default_timeout}s")
|
|
if config.project.default_workflow:
|
|
project_table.add_row("Default Workflow", config.project.default_workflow)
|
|
|
|
console.print(
|
|
Panel.fit(
|
|
project_table,
|
|
title="📁 Project Settings",
|
|
box=box.ROUNDED
|
|
)
|
|
)
|
|
|
|
# Retention settings
|
|
retention_table = Table(show_header=False, box=box.SIMPLE)
|
|
retention_table.add_column("Setting", style="bold cyan")
|
|
retention_table.add_column("Value")
|
|
|
|
retention_table.add_row("Max Runs", str(config.retention.max_runs))
|
|
retention_table.add_row("Keep Findings (days)", str(config.retention.keep_findings_days))
|
|
|
|
console.print(
|
|
Panel.fit(
|
|
retention_table,
|
|
title="🗄️ Data Retention",
|
|
box=box.ROUNDED
|
|
)
|
|
)
|
|
|
|
# Preferences
|
|
prefs_table = Table(show_header=False, box=box.SIMPLE)
|
|
prefs_table.add_column("Setting", style="bold cyan")
|
|
prefs_table.add_column("Value")
|
|
|
|
prefs_table.add_row("Auto Save Findings", "✅ Yes" if config.preferences.auto_save_findings else "❌ No")
|
|
prefs_table.add_row("Show Progress Bars", "✅ Yes" if config.preferences.show_progress_bars else "❌ No")
|
|
prefs_table.add_row("Table Style", config.preferences.table_style)
|
|
prefs_table.add_row("Color Output", "✅ Yes" if config.preferences.color_output else "❌ No")
|
|
|
|
console.print(
|
|
Panel.fit(
|
|
prefs_table,
|
|
title="🎨 Preferences",
|
|
box=box.ROUNDED
|
|
)
|
|
)
|
|
|
|
console.print(f"\n📍 Config file: [dim]{config_path}[/dim]")
|
|
|
|
|
|
@app.command("set")
|
|
def set_config(
|
|
key: str = typer.Argument(..., help="Configuration key to set (e.g., 'project.name', 'project.api_url')"),
|
|
value: str = typer.Argument(..., help="Value to set"),
|
|
global_config: bool = typer.Option(
|
|
False, "--global", "-g",
|
|
help="Set in global configuration instead of project config"
|
|
)
|
|
):
|
|
"""
|
|
⚙️ Set a configuration value
|
|
"""
|
|
if global_config:
|
|
config = get_global_config()
|
|
config_type = "global"
|
|
else:
|
|
config = get_project_config()
|
|
if not config:
|
|
console.print("❌ No project configuration found. Run 'ff init' first.", style="red")
|
|
raise typer.Exit(1)
|
|
config_type = "project"
|
|
|
|
# Parse the key path
|
|
key_parts = key.split('.')
|
|
if len(key_parts) != 2:
|
|
console.print("❌ Key must be in format 'section.setting' (e.g., 'project.name')", style="red")
|
|
raise typer.Exit(1)
|
|
|
|
section, setting = key_parts
|
|
|
|
try:
|
|
# Update configuration
|
|
if section == "project":
|
|
if setting == "name":
|
|
config.project.name = value
|
|
elif setting == "api_url":
|
|
config.project.api_url = value
|
|
elif setting == "default_timeout":
|
|
config.project.default_timeout = int(value)
|
|
elif setting == "default_workflow":
|
|
config.project.default_workflow = value if value.lower() != "none" else None
|
|
else:
|
|
console.print(f"❌ Unknown project setting: {setting}", style="red")
|
|
raise typer.Exit(1)
|
|
|
|
elif section == "retention":
|
|
if setting == "max_runs":
|
|
config.retention.max_runs = int(value)
|
|
elif setting == "keep_findings_days":
|
|
config.retention.keep_findings_days = int(value)
|
|
else:
|
|
console.print(f"❌ Unknown retention setting: {setting}", style="red")
|
|
raise typer.Exit(1)
|
|
|
|
elif section == "preferences":
|
|
if setting == "auto_save_findings":
|
|
config.preferences.auto_save_findings = value.lower() in ("true", "yes", "1", "on")
|
|
elif setting == "show_progress_bars":
|
|
config.preferences.show_progress_bars = value.lower() in ("true", "yes", "1", "on")
|
|
elif setting == "table_style":
|
|
config.preferences.table_style = value
|
|
elif setting == "color_output":
|
|
config.preferences.color_output = value.lower() in ("true", "yes", "1", "on")
|
|
else:
|
|
console.print(f"❌ Unknown preferences setting: {setting}", style="red")
|
|
raise typer.Exit(1)
|
|
|
|
else:
|
|
console.print(f"❌ Unknown configuration section: {section}", style="red")
|
|
console.print("Valid sections: project, retention, preferences", style="dim")
|
|
raise typer.Exit(1)
|
|
|
|
# Save configuration
|
|
if global_config:
|
|
save_global_config(config)
|
|
else:
|
|
config_path = Path.cwd() / ".fuzzforge" / "config.yaml"
|
|
config.save_to_file(config_path)
|
|
|
|
console.print(f"✅ Set {config_type} configuration: [bold cyan]{key}[/bold cyan] = [bold]{value}[/bold]", style="green")
|
|
|
|
except ValueError as e:
|
|
console.print(f"❌ Invalid value for {key}: {e}", style="red")
|
|
raise typer.Exit(1)
|
|
except Exception as e:
|
|
console.print(f"❌ Failed to set configuration: {e}", style="red")
|
|
raise typer.Exit(1)
|
|
|
|
|
|
@app.command("get")
|
|
def get_config(
|
|
key: str = typer.Argument(..., help="Configuration key to get (e.g., 'project.name')"),
|
|
global_config: bool = typer.Option(
|
|
False, "--global", "-g",
|
|
help="Get from global configuration instead of project config"
|
|
)
|
|
):
|
|
"""
|
|
📖 Get a specific configuration value
|
|
"""
|
|
if global_config:
|
|
config = get_global_config()
|
|
else:
|
|
config = get_project_config()
|
|
if not config:
|
|
console.print("❌ No project configuration found. Run 'ff init' first.", style="red")
|
|
raise typer.Exit(1)
|
|
|
|
# Parse the key path
|
|
key_parts = key.split('.')
|
|
if len(key_parts) != 2:
|
|
console.print("❌ Key must be in format 'section.setting' (e.g., 'project.name')", style="red")
|
|
raise typer.Exit(1)
|
|
|
|
section, setting = key_parts
|
|
|
|
try:
|
|
# Get configuration value
|
|
if section == "project":
|
|
if setting == "name":
|
|
value = config.project.name
|
|
elif setting == "api_url":
|
|
value = config.project.api_url
|
|
elif setting == "default_timeout":
|
|
value = config.project.default_timeout
|
|
elif setting == "default_workflow":
|
|
value = config.project.default_workflow or "none"
|
|
else:
|
|
console.print(f"❌ Unknown project setting: {setting}", style="red")
|
|
raise typer.Exit(1)
|
|
|
|
elif section == "retention":
|
|
if setting == "max_runs":
|
|
value = config.retention.max_runs
|
|
elif setting == "keep_findings_days":
|
|
value = config.retention.keep_findings_days
|
|
else:
|
|
console.print(f"❌ Unknown retention setting: {setting}", style="red")
|
|
raise typer.Exit(1)
|
|
|
|
elif section == "preferences":
|
|
if setting == "auto_save_findings":
|
|
value = config.preferences.auto_save_findings
|
|
elif setting == "show_progress_bars":
|
|
value = config.preferences.show_progress_bars
|
|
elif setting == "table_style":
|
|
value = config.preferences.table_style
|
|
elif setting == "color_output":
|
|
value = config.preferences.color_output
|
|
else:
|
|
console.print(f"❌ Unknown preferences setting: {setting}", style="red")
|
|
raise typer.Exit(1)
|
|
|
|
else:
|
|
console.print(f"❌ Unknown configuration section: {section}", style="red")
|
|
raise typer.Exit(1)
|
|
|
|
console.print(f"{key}: [bold cyan]{value}[/bold cyan]")
|
|
|
|
except Exception as e:
|
|
console.print(f"❌ Failed to get configuration: {e}", style="red")
|
|
raise typer.Exit(1)
|
|
|
|
|
|
@app.command("reset")
|
|
def reset_config(
|
|
global_config: bool = typer.Option(
|
|
False, "--global", "-g",
|
|
help="Reset global configuration instead of project config"
|
|
),
|
|
force: bool = typer.Option(
|
|
False, "--force", "-f",
|
|
help="Skip confirmation prompt"
|
|
)
|
|
):
|
|
"""
|
|
🔄 Reset configuration to defaults
|
|
"""
|
|
config_type = "global" if global_config else "project"
|
|
|
|
if not force:
|
|
if not Confirm.ask(f"Reset {config_type} configuration to defaults?", default=False, console=console):
|
|
console.print("❌ Reset cancelled", style="yellow")
|
|
raise typer.Exit(0)
|
|
|
|
try:
|
|
# Create new default configuration
|
|
new_config = FuzzForgeConfig()
|
|
|
|
if global_config:
|
|
save_global_config(new_config)
|
|
else:
|
|
if not Path.cwd().joinpath(".fuzzforge").exists():
|
|
console.print("❌ No project configuration found. Run 'ff init' first.", style="red")
|
|
raise typer.Exit(1)
|
|
|
|
config_path = Path.cwd() / ".fuzzforge" / "config.yaml"
|
|
new_config.save_to_file(config_path)
|
|
|
|
console.print(f"✅ {config_type.title()} configuration reset to defaults", style="green")
|
|
|
|
except Exception as e:
|
|
console.print(f"❌ Failed to reset configuration: {e}", style="red")
|
|
raise typer.Exit(1)
|
|
|
|
|
|
@app.command("edit")
|
|
def edit_config(
|
|
global_config: bool = typer.Option(
|
|
False, "--global", "-g",
|
|
help="Edit global configuration instead of project config"
|
|
)
|
|
):
|
|
"""
|
|
📝 Open configuration file in default editor
|
|
"""
|
|
import subprocess
|
|
|
|
if global_config:
|
|
config_path = Path.home() / ".config" / "fuzzforge" / "config.yaml"
|
|
config_type = "global"
|
|
else:
|
|
config_path = Path.cwd() / ".fuzzforge" / "config.yaml"
|
|
config_type = "project"
|
|
|
|
if not config_path.exists():
|
|
console.print("❌ No project configuration found. Run 'ff init' first.", style="red")
|
|
raise typer.Exit(1)
|
|
|
|
# Try to find a suitable editor
|
|
editors = ["code", "vim", "nano", "notepad"]
|
|
editor = None
|
|
|
|
for e in editors:
|
|
try:
|
|
subprocess.run([e, "--version"], capture_output=True, check=True)
|
|
editor = e
|
|
break
|
|
except (subprocess.CalledProcessError, FileNotFoundError):
|
|
continue
|
|
|
|
if not editor:
|
|
console.print(f"📍 Configuration file: [bold cyan]{config_path}[/bold cyan]")
|
|
console.print("❌ No suitable editor found. Please edit the file manually.", style="red")
|
|
raise typer.Exit(1)
|
|
|
|
try:
|
|
console.print(f"📝 Opening {config_type} configuration in {editor}...")
|
|
subprocess.run([editor, str(config_path)], check=True)
|
|
console.print("✅ Configuration file edited", style="green")
|
|
|
|
except subprocess.CalledProcessError as e:
|
|
console.print(f"❌ Failed to open editor: {e}", style="red")
|
|
raise typer.Exit(1)
|
|
|
|
|
|
@app.callback()
def config_callback():
    """
    ⚙️ Manage configuration settings
    """
    pass
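The test-suite commit in the log above is pytest-based; a minimal sketch of how these config commands could be exercised with Typer's CliRunner follows. The import path `fuzzforge_cli.commands.config` is an assumption about the package layout, and both checks rely on behaviour visible in the code above (malformed keys and non-integer timeouts exit with code 1), assuming `get_global_config()` returns a default config when no file exists.

```python
# Illustrative sketch only -- not part of config.py.
# Assumes the module is importable as fuzzforge_cli.commands.config; adjust if the layout differs.
from typer.testing import CliRunner

from fuzzforge_cli.commands.config import app  # hypothetical import path

runner = CliRunner()


def test_set_rejects_malformed_key():
    # Keys must look like "section.setting"; anything else exits with code 1.
    result = runner.invoke(app, ["set", "not-a-dotted-key", "value", "--global"])
    assert result.exit_code == 1


def test_set_rejects_non_integer_timeout():
    # project.default_timeout is coerced with int(); a non-numeric value exits with code 1.
    result = runner.invoke(app, ["set", "project.default_timeout", "soon", "--global"])
    assert result.exit_code == 1
```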