CI/CD Integration with Ephemeral Deployment Model (#14)

* feat: Complete migration from Prefect to Temporal

BREAKING CHANGE: Replaces Prefect workflow orchestration with Temporal

## Major Changes
- Replace Prefect with Temporal for workflow orchestration
- Implement vertical worker architecture (rust, android)
- Replace Docker registry with MinIO for unified storage
- Refactor activities to be co-located with workflows
- Update all API endpoints for Temporal compatibility

## Infrastructure
- New: docker-compose.temporal.yaml (Temporal + MinIO + workers)
- New: workers/ directory with rust and android vertical workers
- New: backend/src/temporal/ (manager, discovery)
- New: backend/src/storage/ (S3-cached storage with MinIO)
- New: backend/toolbox/common/ (shared storage activities)
- Deleted: docker-compose.yaml (old Prefect setup)
- Deleted: backend/src/core/prefect_manager.py
- Deleted: backend/src/services/prefect_stats_monitor.py
- Deleted: Docker registry and insecure-registries requirement
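The S3-cached storage layer mentioned above might look like this minimal sketch (class and method names are hypothetical, not the actual `backend/src/storage/` API; the client is injected so any boto3-compatible object works, e.g. `boto3.client("s3", endpoint_url="http://minio:9000")` for MinIO):

```python
from pathlib import Path


class S3CachedStorage:
    """Minimal sketch: S3-backed object store with a local file cache."""

    def __init__(self, client, bucket: str, cache_dir: str):
        self.client = client          # any boto3-compatible S3 client
        self.bucket = bucket
        self.cache_dir = Path(cache_dir)
        self.cache_dir.mkdir(parents=True, exist_ok=True)

    def download(self, key: str) -> Path:
        """Return a local path for `key`, hitting S3 only on cache miss."""
        local = self.cache_dir / key.replace("/", "_")
        if not local.exists():
            self.client.download_file(self.bucket, key, str(local))
        return local
```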

## Workflows
- Migrated: security_assessment workflow to Temporal
- New: rust_test workflow (example/test workflow)
- Deleted: secret_detection_scan (Prefect-based, to be reimplemented)
- Activities now co-located with workflows for independent testing

## API Changes
- Updated: backend/src/api/workflows.py (Temporal submission)
- Updated: backend/src/api/runs.py (Temporal status/results)
- Updated: backend/src/main.py (727 lines, TemporalManager integration)
- Updated: All 16 MCP tools to use TemporalManager

## Testing
- All services healthy (Temporal, PostgreSQL, MinIO, workers, backend)
- All API endpoints functional
- End-to-end workflow test passed (72 findings from vulnerable_app)
- MinIO storage integration working (target upload/download, results)
- Worker activity discovery working (6 activities registered)
- Tarball extraction working
- SARIF report generation working
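The SARIF report generation step can be illustrated with a minimal SARIF 2.1.0 builder. This is a hedged sketch: the finding shape (`rule_id`, `message`, `level`, `file`) is hypothetical and not FuzzForge's actual result model.

```python
def make_sarif_report(tool_name: str, findings: list[dict]) -> dict:
    """Build a minimal SARIF 2.1.0 document from simple finding dicts."""
    return {
        "version": "2.1.0",
        "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
        "runs": [{
            "tool": {"driver": {"name": tool_name}},
            "results": [
                {
                    "ruleId": f["rule_id"],
                    "level": f.get("level", "note"),  # "note" is the SARIF default
                    "message": {"text": f["message"]},
                    "locations": [{
                        "physicalLocation": {
                            "artifactLocation": {"uri": f["file"]},
                        },
                    }],
                }
                for f in findings
            ],
        }],
    }
```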

## Documentation
- ARCHITECTURE.md: Complete Temporal architecture documentation
- QUICKSTART_TEMPORAL.md: Getting started guide
- MIGRATION_DECISION.md: Why we chose Temporal over Prefect
- IMPLEMENTATION_STATUS.md: Migration progress tracking
- workers/README.md: Worker development guide

## Dependencies
- Added: temporalio>=1.6.0
- Added: boto3>=1.34.0 (MinIO S3 client)
- Removed: prefect>=3.4.18

* feat: Add Python fuzzing vertical with Atheris integration

This commit implements a complete Python fuzzing workflow using Atheris:

## Python Worker (workers/python/)
- Dockerfile with Python 3.11, Atheris, and build tools
- Generic worker.py for dynamic workflow discovery
- requirements.txt with temporalio, boto3, atheris dependencies
- Added to docker-compose.temporal.yaml with dedicated cache volume

## AtherisFuzzer Module (backend/toolbox/modules/fuzzer/)
- Reusable module extending BaseModule
- Auto-discovers fuzz targets (fuzz_*.py, *_fuzz.py, fuzz_target.py)
- Recursive search to find targets in nested directories
- Dynamically loads TestOneInput() function
- Configurable max_iterations and timeout
- Real-time stats callback support for live monitoring
- Returns findings as ModuleFinding objects
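The auto-discovery behaviour described above can be sketched as a recursive glob over the documented filename patterns (function name hypothetical, not the module's actual API):

```python
from pathlib import Path

# Patterns the module looks for, per the description above
FUZZ_PATTERNS = ("fuzz_*.py", "*_fuzz.py", "fuzz_target.py")


def discover_fuzz_targets(root: str) -> list[Path]:
    """Recursively collect candidate fuzz harness files under `root`."""
    root_path = Path(root)
    found: set[Path] = set()  # dedupe: fuzz_target.py matches two patterns
    for pattern in FUZZ_PATTERNS:
        found.update(root_path.rglob(pattern))
    return sorted(found)
```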

## Atheris Fuzzing Workflow (backend/toolbox/workflows/atheris_fuzzing/)
- Temporal workflow for orchestrating fuzzing
- Downloads user code from MinIO
- Executes AtherisFuzzer module
- Uploads results to MinIO
- Cleans up cache after execution
- metadata.yaml with vertical: python for routing

## Test Project (test_projects/python_fuzz_waterfall/)
- Demonstrates stateful waterfall vulnerability
- main.py with check_secret() that leaks progress
- fuzz_target.py with Atheris TestOneInput() harness
- Complete README with usage instructions
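The "stateful waterfall" vulnerability works roughly like this illustrative sketch (not the actual test-project code): each correctly guessed byte takes a new branch, so a coverage-guided fuzzer like Atheris can recover the secret prefix by prefix.

```python
SECRET = b"FUZZ"  # hypothetical secret for illustration


def check_secret(data: bytes) -> int:
    """Compare input to the secret one byte at a time, leaking progress.

    The per-byte early exit creates distinct coverage for each matched
    prefix length, which is exactly what guides the fuzzer forward.
    """
    matched = 0
    for expected, got in zip(SECRET, data):
        if got != expected:
            break
        matched += 1
    if matched == len(SECRET):
        raise ValueError("secret reached - simulated crash")
    return matched
```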

## Backend Fixes
- Fixed parameter merging in REST API endpoints (workflows.py)
- Changed workflow parameter passing from positional args to kwargs (manager.py)
- Default parameters now properly merged with user parameters

## Testing
- Worker discovered AtherisFuzzingWorkflow
- Workflow executed end-to-end successfully
- Fuzz target auto-discovered in nested directories
- Atheris ran 100,000 iterations
- Results uploaded and cache cleaned

* chore: Complete Temporal migration with updated CLI/SDK/docs

This commit includes all remaining Temporal migration changes:

## CLI Updates (cli/)
- Updated workflow execution commands for Temporal
- Enhanced error handling and exceptions
- Updated dependencies in uv.lock

## SDK Updates (sdk/)
- Client methods updated for Temporal workflows
- Updated models for new workflow execution
- Updated dependencies in uv.lock

## Documentation Updates (docs/)
- Architecture documentation for Temporal
- Workflow concept documentation
- Resource management documentation (new)
- Debugging guide (new)
- Updated tutorials and how-to guides
- Troubleshooting updates

## README Updates
- Main README with Temporal instructions
- Backend README
- CLI README
- SDK README

## Other
- Updated IMPLEMENTATION_STATUS.md
- Removed old vulnerable_app.tar.gz

These changes complete the Temporal migration and ensure the
CLI/SDK work correctly with the new backend.

* fix: Use positional args instead of kwargs for Temporal workflows

The Temporal Python SDK's start_workflow() method doesn't accept
a 'kwargs' parameter. Workflows must receive parameters as positional
arguments via the 'args' parameter.

Changed from passing parameters via a 'kwargs' keyword argument to:
  args=workflow_args  # Positional arguments
This fixes the error:
  TypeError: Client.start_workflow() got an unexpected keyword argument 'kwargs'

Workflows now correctly receive parameters in order:
- security_assessment: [target_id, scanner_config, analyzer_config, reporter_config]
- atheris_fuzzing: [target_id, target_file, max_iterations, timeout_seconds]
- rust_test: [target_id, test_message]
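The ordering contract above can be sketched as a small helper that flattens a parameter dict into the positional list passed via `args=` (the helper name is hypothetical; the per-workflow order is taken from the list above):

```python
# Positional argument order per workflow, as listed above
WORKFLOW_ARG_ORDER = {
    "security_assessment": ["target_id", "scanner_config", "analyzer_config", "reporter_config"],
    "atheris_fuzzing": ["target_id", "target_file", "max_iterations", "timeout_seconds"],
    "rust_test": ["target_id", "test_message"],
}


def build_workflow_args(workflow: str, params: dict) -> list:
    """Flatten a parameter dict into the ordered positional list that
    start_workflow(..., args=workflow_args) expects."""
    order = WORKFLOW_ARG_ORDER[workflow]
    return [params.get(name) for name in order]
```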

* fix: Filter metadata-only parameters from workflow arguments

SecurityAssessmentWorkflow was receiving 7 arguments instead of 2-5.
The issue was that target_path and volume_mode from default_parameters
were being passed to the workflow, when they should only be used by
the system for configuration.

Now filters out metadata-only parameters (target_path, volume_mode)
before passing arguments to workflow execution.
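Combined with the default-parameter merge from the earlier fix, the filtering might look like this sketch (function name hypothetical):

```python
# System-only keys that must not reach the workflow, per the fix above
METADATA_ONLY_PARAMS = {"target_path", "volume_mode"}


def prepare_workflow_params(defaults: dict, user_params: dict) -> dict:
    """Merge defaults with user parameters (user values win), then drop
    metadata-only keys before building the positional argument list."""
    merged = {**defaults, **user_params}
    return {k: v for k, v in merged.items() if k not in METADATA_ONLY_PARAMS}
```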

* refactor: Remove Prefect leftovers and volume mounting legacy

Complete cleanup of Prefect migration artifacts:

Backend:
- Delete registry.py and workflow_discovery.py (Prefect-specific files)
- Remove Docker validation from setup.py (no longer needed)
- Remove ResourceLimits and VolumeMount models
- Remove target_path and volume_mode from WorkflowSubmission
- Remove supported_volume_modes from API and discovery
- Clean up metadata.yaml files (remove volume/path fields)
- Simplify parameter filtering in manager.py

SDK:
- Remove volume_mode parameter from client methods
- Remove ResourceLimits and VolumeMount models
- Remove Prefect error patterns from docker_logs.py
- Clean up WorkflowSubmission and WorkflowMetadata models

CLI:
- Remove Volume Modes display from workflow info

All removed features are Prefect-specific or Docker volume mounting
artifacts. Temporal workflows use MinIO storage exclusively.

* feat: Add comprehensive test suite and benchmark infrastructure

- Add 68 unit tests for fuzzer, scanner, and analyzer modules
- Implement pytest-based test infrastructure with fixtures
- Add 6 performance benchmarks with category-specific thresholds
- Configure GitHub Actions for automated testing and benchmarking
- Add test and benchmark documentation

Test coverage:
- AtherisFuzzer: 8 tests
- CargoFuzzer: 14 tests
- FileScanner: 22 tests
- SecurityAnalyzer: 24 tests

All tests passing (68/68)
All benchmarks passing (6/6)

* fix: Resolve all ruff linting violations across codebase

Fixed 27 ruff violations in 12 files:
- Removed unused imports (Depends, Dict, Any, Optional, etc.)
- Fixed undefined workflow_info variable in workflows.py
- Removed dead code with undefined variables in atheris_fuzzer.py
- Changed f-string to regular string where no placeholders used

All files now pass ruff checks for CI/CD compliance.

* fix: Configure CI for unit tests only

- Renamed docker-compose.temporal.yaml → docker-compose.yml for CI compatibility
- Commented out integration-tests job (no integration tests yet)
- Updated test-summary to only depend on lint and unit-tests

CI will now run successfully with 68 unit tests. Integration tests can be added later.

* feat: Add CI/CD integration with ephemeral deployment model

Implements comprehensive CI/CD support for FuzzForge with on-demand worker management:

**Worker Management (v0.7.0)**
- Add WorkerManager for automatic worker lifecycle control
- Auto-start workers from stopped state when workflows execute
- Auto-stop workers after workflow completion
- Health checks and startup timeout handling (90s default)

**CI/CD Features**
- `--fail-on` flag: Fail builds based on SARIF severity levels (error/warning/note/info)
- `--export-sarif` flag: Export findings in SARIF 2.1.0 format
- `--auto-start`/`--auto-stop` flags: Control worker lifecycle
- Exit code propagation: Returns 1 on blocking findings, 0 on success

**Exit Code Fix**
- Add `except typer.Exit: raise` handlers at 3 critical locations
- Move worker cleanup to finally block for guaranteed execution
- Exit codes now propagate correctly even when build fails
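The exit-code pattern can be sketched as follows; `typer.Exit` is modelled with a stand-in exception so the sketch runs without typer, and the function name is hypothetical:

```python
class Exit(Exception):
    """Stand-in for typer.Exit so this sketch runs without typer."""
    def __init__(self, code: int = 0):
        self.code = code


def run_workflow_command(execute, cleanup):
    """Run a CLI step, letting deliberate Exit codes propagate while
    guaranteeing worker cleanup via the finally block."""
    try:
        execute()
    except Exit:
        raise              # propagate exit codes untouched (the fix above)
    except Exception as exc:
        print(f"unexpected error: {exc}")
        raise Exit(1)
    finally:
        cleanup()          # auto-stop workers even when the build fails
```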

**CI Scripts & Examples**
- ci-start.sh: Start FuzzForge services with health checks
- ci-stop.sh: Clean shutdown with volume preservation option
- GitHub Actions workflow example (security-scan.yml)
- GitLab CI pipeline example (.gitlab-ci.example.yml)
- docker-compose.ci.yml: CI-optimized compose file with profiles

**OSS-Fuzz Integration**
- New ossfuzz_campaign workflow for running OSS-Fuzz projects
- OSS-Fuzz worker with Docker-in-Docker support
- Configurable campaign duration and project selection

**Documentation**
- Comprehensive CI/CD integration guide (docs/how-to/cicd-integration.md)
- Updated architecture docs with worker lifecycle details
- Updated workspace isolation documentation
- CLI README with worker management examples

**SDK Enhancements**
- Add get_workflow_worker_info() endpoint
- Worker vertical metadata in workflow responses

**Testing**
- All workflows tested: security_assessment, atheris_fuzzing, secret_detection, cargo_fuzzing
- All monitoring commands tested: stats, crashes, status, finding
- Full CI pipeline simulation verified
- Exit codes verified for success/failure scenarios

Ephemeral CI/CD model: ~3-4GB RAM, ~60-90s startup, runs entirely in CI containers.

* fix: Resolve ruff linting violations in CI/CD code

- Remove unused variables (run_id, defaults, result)
- Remove unused imports
- Fix f-string without placeholders

All CI/CD integration files now pass ruff checks.
Commit 60ca088ecf by tduhamel42, 2025-10-14 10:13:45 +02:00 (committed via GitHub; parent 987c49569c)
167 changed files with 26101 additions and 5703 deletions


@@ -14,10 +14,10 @@ API response validation and graceful degradation utilities.
import logging
-from typing import Any, Dict, List, Optional, Union
+from typing import Any, Dict, List, Optional
from pydantic import BaseModel, ValidationError as PydanticValidationError
-from .exceptions import ValidationError, APIConnectionError
+from .exceptions import ValidationError
logger = logging.getLogger(__name__)
@@ -29,7 +29,6 @@ class WorkflowMetadata(BaseModel):
author: Optional[str] = None
description: Optional[str] = None
parameters: Dict[str, Any] = {}
-supported_volume_modes: List[str] = ["ro", "rw"]
class RunStatus(BaseModel):


@@ -15,15 +15,11 @@ from __future__ import annotations
import asyncio
import os
from datetime import datetime
from typing import Optional
import typer
from rich.console import Console
from rich.panel import Panel
from rich.table import Table
from ..config import ProjectConfigManager
console = Console()
app = typer.Typer(name="ai", help="Interact with the FuzzForge AI system")


@@ -18,13 +18,11 @@ from pathlib import Path
from rich.console import Console
from rich.table import Table
from rich.panel import Panel
-from rich.prompt import Prompt, Confirm
+from rich.prompt import Confirm
from rich import box
from typing import Optional
from ..config import (
get_project_config,
ensure_project_config,
get_global_config,
save_global_config,
FuzzForgeConfig
@@ -335,7 +333,6 @@ def edit_config(
"""
📝 Open configuration file in default editor
"""
-import os
import subprocess
if global_config:
@@ -369,7 +366,7 @@ def edit_config(
try:
console.print(f"📝 Opening {config_type} configuration in {editor}...")
subprocess.run([editor, str(config_path)], check=True)
-console.print(f"✅ Configuration file edited", style="green")
+console.print("✅ Configuration file edited", style="green")
except subprocess.CalledProcessError as e:
console.print(f"❌ Failed to open editor: {e}", style="red")


@@ -21,18 +21,17 @@ from typing import Optional, Dict, Any, List
import typer
from rich.console import Console
-from rich.table import Table, Column
+from rich.table import Table
from rich.panel import Panel
from rich.syntax import Syntax
from rich.tree import Tree
from rich.text import Text
from rich import box
from ..config import get_project_config, FuzzForgeConfig
from ..database import get_project_db, ensure_project_db, FindingRecord
from ..exceptions import (
-handle_error, retry_on_network_error, validate_run_id,
-require_project, ValidationError, DatabaseError
+retry_on_network_error, validate_run_id,
+require_project, ValidationError
)
from fuzzforge_sdk import FuzzForgeClient
@@ -159,7 +158,7 @@ def display_findings_table(sarif_data: Dict[str, Any]):
driver = tool.get("driver", {})
# Tool information
-console.print(f"\n🔍 [bold]Security Analysis Results[/bold]")
+console.print("\n🔍 [bold]Security Analysis Results[/bold]")
if driver.get("name"):
console.print(f"Tool: {driver.get('name')} v{driver.get('version', 'unknown')}")
@@ -241,7 +240,7 @@ def display_findings_table(sarif_data: Dict[str, Any]):
location_text
)
-console.print(f"\n📋 [bold]Detailed Results[/bold]")
+console.print("\n📋 [bold]Detailed Results[/bold]")
if len(results) > 50:
console.print(f"Showing first 50 of {len(results)} results")
console.print()
@@ -297,7 +296,7 @@ def findings_history(
console.print(f"\n📚 [bold]Findings History ({len(findings)})[/bold]\n")
console.print(table)
-console.print(f"\n💡 Use [bold cyan]fuzzforge finding <run-id>[/bold cyan] to view detailed findings")
+console.print("\n💡 Use [bold cyan]fuzzforge finding <run-id>[/bold cyan] to view detailed findings")
except Exception as e:
console.print(f"❌ Failed to get findings history: {e}", style="red")
@@ -710,10 +709,10 @@ def all_findings(
if show_findings:
display_detailed_findings(findings, max_findings)
-console.print(f"\n💡 Use filters to refine results: --workflow, --severity, --since")
-console.print(f"💡 Show findings content: --show-findings")
-console.print(f"💡 Export findings: --export json --output report.json")
-console.print(f"💡 View specific findings: [bold cyan]fuzzforge finding <run-id>[/bold cyan]")
+console.print("\n💡 Use filters to refine results: --workflow, --severity, --since")
+console.print("💡 Show findings content: --show-findings")
+console.print("💡 Export findings: --export json --output report.json")
+console.print("💡 View specific findings: [bold cyan]fuzzforge finding <run-id>[/bold cyan]")
except Exception as e:
console.print(f"❌ Failed to get all findings: {e}", style="red")


@@ -164,7 +164,7 @@ fuzzforge finding <run-id>
console.print("📚 Created README.md")
console.print("\n✅ FuzzForge project initialized successfully!", style="green")
-console.print(f"\n🎯 Next steps:")
+console.print("\n🎯 Next steps:")
console.print(" • ff workflows - See available workflows")
console.print(" • ff status - Check API connectivity")
console.print(" • ff workflow <workflow> <path> - Start your first analysis")


@@ -13,23 +13,18 @@ Real-time monitoring and statistics commands.
# Additional attribution and requirements are provided in the NOTICE file.
import asyncio
import time
-from datetime import datetime, timedelta
-from typing import Optional
+from datetime import datetime
import typer
from rich.console import Console
from rich.table import Table
from rich.panel import Panel
from rich.live import Live
-from rich.layout import Layout
-from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
-from rich.align import Align
from rich import box
-from ..config import get_project_config, FuzzForgeConfig
-from ..database import get_project_db, ensure_project_db, CrashRecord
+from ..database import ensure_project_db, CrashRecord
from fuzzforge_sdk import FuzzForgeClient
console = Console()
@@ -93,9 +88,21 @@ def fuzzing_stats(
with Live(auto_refresh=False, console=console) as live:
while True:
try:
# Check workflow status
run_status = client.get_run_status(run_id)
stats = client.get_fuzzing_stats(run_id)
table = create_stats_table(stats)
live.update(table, refresh=True)
# Exit if workflow completed or failed
if getattr(run_status, 'is_completed', False) or getattr(run_status, 'is_failed', False):
final_status = getattr(run_status, 'status', 'Unknown')
if getattr(run_status, 'is_completed', False):
console.print("\n✅ [bold green]Workflow completed[/bold green]", style="green")
else:
console.print(f"\n⚠️ [bold yellow]Workflow ended[/bold yellow] | Status: {final_status}", style="yellow")
break
time.sleep(refresh)
except KeyboardInterrupt:
console.print("\n📊 Monitoring stopped", style="yellow")
@@ -124,8 +131,8 @@ def create_stats_table(stats) -> Panel:
stats_table.add_row("Total Crashes", format_number(stats.crashes))
stats_table.add_row("Unique Crashes", format_number(stats.unique_crashes))
-if stats.coverage is not None:
-stats_table.add_row("Code Coverage", f"{stats.coverage:.1f}%")
+if stats.coverage is not None and stats.coverage > 0:
+stats_table.add_row("Code Coverage", f"{stats.coverage} edges")
stats_table.add_row("Corpus Size", format_number(stats.corpus_size))
stats_table.add_row("Elapsed Time", format_duration(stats.elapsed_time))
@@ -206,7 +213,7 @@ def crash_reports(
console.print(
Panel.fit(
summary_table,
-title=f"🐛 Crash Summary",
+title="🐛 Crash Summary",
box=box.ROUNDED
)
)
@@ -246,7 +253,7 @@ def crash_reports(
input_display
)
-console.print(f"\n🐛 [bold]Crash Details[/bold]")
+console.print("\n🐛 [bold]Crash Details[/bold]")
if len(crashes) > limit:
console.print(f"Showing first {limit} of {len(crashes)} crashes")
console.print()
@@ -260,78 +267,70 @@ def crash_reports(
def _live_monitor(run_id: str, refresh: int):
-"""Helper for live monitoring to allow for cleaner exit handling"""
+"""Helper for live monitoring with inline real-time display"""
with get_client() as client:
start_time = time.time()
def render_layout(run_status, stats):
layout = Layout()
layout.split_column(
Layout(name="header", size=3),
Layout(name="main", ratio=1),
Layout(name="footer", size=3)
)
layout["main"].split_row(
Layout(name="stats", ratio=1),
Layout(name="progress", ratio=1)
)
header = Panel(
f"[bold]FuzzForge Live Monitor[/bold]\n"
f"Run: {run_id[:12]}... | Status: {run_status.status} | "
f"Uptime: {format_duration(int(time.time() - start_time))}",
box=box.ROUNDED,
style="cyan"
)
layout["header"].update(header)
layout["stats"].update(create_stats_table(stats))
def render_inline_stats(run_status, stats):
"""Render inline stats display (non-dashboard)"""
lines = []
progress_table = Table(show_header=False, box=box.SIMPLE)
progress_table.add_column("Metric", style="bold")
progress_table.add_column("Progress")
if stats.executions > 0:
exec_rate_percent = min(100, (stats.executions_per_sec / 1000) * 100)
progress_table.add_row("Exec Rate", create_progress_bar(exec_rate_percent, "green"))
crash_rate = (stats.crashes / stats.executions) * 100000
crash_rate_percent = min(100, crash_rate * 10)
progress_table.add_row("Crash Rate", create_progress_bar(crash_rate_percent, "red"))
if stats.coverage is not None:
progress_table.add_row("Coverage", create_progress_bar(stats.coverage, "blue"))
layout["progress"].update(Panel.fit(progress_table, title="📊 Progress Indicators", box=box.ROUNDED))
# Header line
workflow_name = getattr(stats, 'workflow', 'unknown')
status_emoji = "🔄" if not getattr(run_status, 'is_completed', False) else ""
status_color = "yellow" if not getattr(run_status, 'is_completed', False) else "green"
footer = Panel(
f"Last updated: {datetime.now().strftime('%H:%M:%S')} | "
f"Refresh interval: {refresh}s | Press Ctrl+C to exit",
box=box.ROUNDED,
style="dim"
)
layout["footer"].update(footer)
return layout
lines.append(f"\n[bold cyan]📊 Live Fuzzing Monitor[/bold cyan] - {workflow_name} (Run: {run_id[:12]}...)\n")
with Live(auto_refresh=False, console=console, screen=True) as live:
# Stats lines with emojis
lines.append(f" [bold]⚡ Executions[/bold] {format_number(stats.executions):>8} [dim]({stats.executions_per_sec:,.1f}/sec)[/dim]")
lines.append(f" [bold]💥 Crashes[/bold] {stats.crashes:>8} [dim](unique: {stats.unique_crashes})[/dim]")
lines.append(f" [bold]📦 Corpus[/bold] {stats.corpus_size:>8} inputs")
if stats.coverage is not None and stats.coverage > 0:
lines.append(f" [bold]📈 Coverage[/bold] {stats.coverage:>8} edges")
lines.append(f" [bold]⏱️ Elapsed[/bold] {format_duration(stats.elapsed_time):>8}")
# Last crash info
if stats.last_crash_time:
time_since = datetime.now() - stats.last_crash_time
crash_ago = format_duration(int(time_since.total_seconds()))
lines.append(f" [bold red]🐛 Last Crash[/bold red] {crash_ago:>8} ago")
# Status line
status_text = getattr(run_status, 'status', 'Unknown')
current_time = datetime.now().strftime('%H:%M:%S')
lines.append(f"\n[{status_color}]{status_emoji} Status: {status_text}[/{status_color}] | Last update: [dim]{current_time}[/dim] | Refresh: {refresh}s | [dim]Press Ctrl+C to stop[/dim]")
return "\n".join(lines)
# Fallback stats class
class FallbackStats:
def __init__(self, run_id):
self.run_id = run_id
self.workflow = "unknown"
self.executions = 0
self.executions_per_sec = 0.0
self.crashes = 0
self.unique_crashes = 0
self.coverage = None
self.corpus_size = 0
self.elapsed_time = 0
self.last_crash_time = None
with Live(auto_refresh=False, console=console) as live:
# Initial fetch
try:
run_status = client.get_run_status(run_id)
stats = client.get_fuzzing_stats(run_id)
except Exception:
# Minimal fallback stats
class FallbackStats:
def __init__(self, run_id):
self.run_id = run_id
self.workflow = "unknown"
self.executions = 0
self.executions_per_sec = 0.0
self.crashes = 0
self.unique_crashes = 0
self.coverage = None
self.corpus_size = 0
self.elapsed_time = 0
self.last_crash_time = None
stats = FallbackStats(run_id)
run_status = type("RS", (), {"status":"Unknown","is_completed":False,"is_failed":False})()
-live.update(render_layout(run_status, stats), refresh=True)
+live.update(render_inline_stats(run_status, stats), refresh=True)
-# Simple polling approach that actually works
+# Polling loop
consecutive_errors = 0
max_errors = 5
@@ -344,7 +343,7 @@ def _live_monitor(run_id: str, refresh: int):
except Exception as e:
consecutive_errors += 1
if consecutive_errors >= max_errors:
-console.print(f"❌ Too many errors getting run status: {e}", style="red")
+console.print(f"\n❌ Too many errors getting run status: {e}", style="red")
break
time.sleep(refresh)
continue
@@ -352,18 +351,14 @@ def _live_monitor(run_id: str, refresh: int):
# Try to get fuzzing stats
try:
stats = client.get_fuzzing_stats(run_id)
-except Exception as e:
-# Create fallback stats if not available
+except Exception:
stats = FallbackStats(run_id)
# Update display
-live.update(render_layout(run_status, stats), refresh=True)
+live.update(render_inline_stats(run_status, stats), refresh=True)
# Check if completed
if getattr(run_status, 'is_completed', False) or getattr(run_status, 'is_failed', False):
# Show final state for a few seconds
console.print("\n🏁 Run completed. Showing final state for 10 seconds...")
time.sleep(10)
break
# Wait before next poll
@@ -372,17 +367,17 @@ def _live_monitor(run_id: str, refresh: int):
except KeyboardInterrupt:
raise
except Exception as e:
-console.print(f"⚠️ Monitoring error: {e}", style="yellow")
+console.print(f"\n⚠️ Monitoring error: {e}", style="yellow")
time.sleep(refresh)
# Completed status update
final_message = (
f"[bold]FuzzForge Live Monitor - COMPLETED[/bold]\n"
f"Run: {run_id[:12]}... | Status: {run_status.status} | "
f"Total runtime: {format_duration(int(time.time() - start_time))}"
)
style = "green" if getattr(run_status, 'is_completed', False) else "red"
live.update(Panel(final_message, box=box.ROUNDED, style=style), refresh=True)
# Final status
final_status = getattr(run_status, 'status', 'Unknown')
total_time = format_duration(int(time.time() - start_time))
if getattr(run_status, 'is_completed', False):
console.print(f"\n✅ [bold green]Run completed successfully[/bold green] | Total runtime: {total_time}")
else:
console.print(f"\n⚠️ [bold yellow]Run ended[/bold yellow] | Status: {final_status} | Total runtime: {total_time}")
@app.command("live")
@@ -390,21 +385,18 @@ def live_monitor(
run_id: str = typer.Argument(..., help="Run ID to monitor live"),
refresh: int = typer.Option(
2, "--refresh", "-r",
-help="Refresh interval in seconds (fallback when streaming unavailable)"
+help="Refresh interval in seconds"
)
):
"""
-📺 Real-time monitoring dashboard with live updates (WebSocket/SSE with REST fallback)
+📺 Real-time inline monitoring with live statistics updates
"""
console.print(f"📺 [bold]Live Monitoring Dashboard[/bold]")
console.print(f"Run: {run_id}")
console.print(f"Press Ctrl+C to stop monitoring\n")
try:
_live_monitor(run_id, refresh)
except KeyboardInterrupt:
-console.print("\n📊 Monitoring stopped by user.", style="yellow")
+console.print("\n\n📊 Monitoring stopped by user.", style="yellow")
except Exception as e:
-console.print(f"❌ Failed to start live monitoring: {e}", style="red")
+console.print(f"\n❌ Failed to start live monitoring: {e}", style="red")
raise typer.Exit(1)
@@ -426,11 +418,11 @@ def monitor_callback(ctx: typer.Context):
# Let the subcommand handle it
return
# Show not implemented message for default command
# Show help message for default command
from rich.console import Console
console = Console()
-console.print("🚧 [yellow]Monitor command is not fully implemented yet.[/yellow]")
-console.print("Please use specific subcommands:")
+console.print("📊 [bold cyan]Monitor Command[/bold cyan]")
+console.print("\nAvailable subcommands:")
console.print(" • [cyan]ff monitor stats <run-id>[/cyan] - Show execution statistics")
console.print(" • [cyan]ff monitor crashes <run-id>[/cyan] - Show crash reports")
-console.print(" • [cyan]ff monitor live <run-id>[/cyan] - Live monitoring dashboard")
+console.print(" • [cyan]ff monitor live <run-id>[/cyan] - Real-time inline monitoring")


@@ -115,7 +115,7 @@ def show_status():
api_table.add_column("Property", style="bold cyan")
api_table.add_column("Value")
-api_table.add_row("Status", f"✅ Connected")
+api_table.add_row("Status", "✅ Connected")
api_table.add_row("Service", f"{api_status.name} v{api_status.version}")
api_table.add_row("Workflows", str(len(workflows)))


@@ -24,27 +24,25 @@ import typer
from rich.console import Console
from rich.table import Table
from rich.panel import Panel
-from rich.progress import Progress, SpinnerColumn, TextColumn, BarColumn, TaskProgressColumn
from rich.prompt import Prompt, Confirm
from rich.live import Live
from rich import box
from ..config import get_project_config, FuzzForgeConfig
from ..database import get_project_db, ensure_project_db, RunRecord
from ..exceptions import (
-handle_error, retry_on_network_error, safe_json_load, require_project,
-APIConnectionError, ValidationError, DatabaseError, FileOperationError
+ValidationError, DatabaseError
)
from ..validation import (
validate_run_id, validate_workflow_name, validate_target_path,
-validate_volume_mode, validate_parameters, validate_timeout
+validate_parameters, validate_timeout
)
-from ..progress import progress_manager, spinner, step_progress
-from ..completion import WorkflowNameComplete, TargetPathComplete, VolumeModetComplete
+from ..progress import step_progress
from ..constants import (
STATUS_EMOJIS, MAX_RUN_ID_DISPLAY_LENGTH, DEFAULT_VOLUME_MODE,
PROGRESS_STEP_DELAYS, MAX_RETRIES, RETRY_DELAY, POLL_INTERVAL
)
+from ..worker_manager import WorkerManager
from fuzzforge_sdk import FuzzForgeClient, WorkflowSubmission
console = Console()
@@ -63,6 +61,47 @@ def status_emoji(status: str) -> str:
return STATUS_EMOJIS.get(status.lower(), STATUS_EMOJIS["unknown"])
def should_fail_build(sarif_data: Dict[str, Any], fail_on: str) -> bool:
"""
Check if findings warrant build failure based on SARIF severity levels.
Args:
sarif_data: SARIF format findings data
fail_on: Comma-separated SARIF levels (error,warning,note,info,all,none)
Returns:
True if build should fail, False otherwise
"""
if fail_on == "none":
return False
# Parse fail_on parameter - accept SARIF levels
if fail_on == "all":
check_levels = {"error", "warning", "note", "info"}
else:
check_levels = {s.strip().lower() for s in fail_on.split(",")}
# Validate levels
valid_levels = {"error", "warning", "note", "info", "none"}
invalid = check_levels - valid_levels
if invalid:
console.print(f"⚠️ Invalid SARIF levels: {', '.join(invalid)}", style="yellow")
console.print("Valid levels: error, warning, note, info, all, none")
# Check SARIF results
runs = sarif_data.get("runs", [])
if not runs:
return False
results = runs[0].get("results", [])
for result in results:
level = result.get("level", "note") # SARIF default is "note"
if level in check_levels:
return True
return False
def parse_inline_parameters(params: List[str]) -> Dict[str, Any]:
"""Parse inline key=value parameters using improved validation"""
return validate_parameters(params)
@@ -77,17 +116,15 @@ def execute_workflow_submission(
timeout: Optional[int],
interactive: bool
) -> Any:
-"""Handle the workflow submission process"""
+"""Handle the workflow submission process with file upload"""
# Get workflow metadata for parameter validation
console.print(f"🔧 Getting workflow information for: {workflow}")
workflow_meta = client.get_workflow_metadata(workflow)
param_response = client.get_workflow_parameters(workflow)
# Interactive parameter input
if interactive and workflow_meta.parameters.get("properties"):
properties = workflow_meta.parameters.get("properties", {})
required_params = set(workflow_meta.parameters.get("required", []))
defaults = param_response.defaults
missing_required = required_params - set(parameters.keys())
@@ -123,24 +160,10 @@ def execute_workflow_submission(
except ValueError as e:
console.print(f"❌ Invalid {param_type}: {e}", style="red")
-# Validate volume mode
-validate_volume_mode(volume_mode)
-if volume_mode not in workflow_meta.supported_volume_modes:
-raise ValidationError(
-"volume mode", volume_mode,
-f"one of: {', '.join(workflow_meta.supported_volume_modes)}"
-)
-# Create submission
-submission = WorkflowSubmission(
-target_path=target_path,
-volume_mode=volume_mode,
-parameters=parameters,
-timeout=timeout
-)
+# Note: volume_mode is no longer used (Temporal uses MinIO storage)
# Show submission summary
-console.print(f"\n🎯 [bold]Executing workflow:[/bold]")
+console.print("\n🎯 [bold]Executing workflow:[/bold]")
console.print(f" Workflow: {workflow}")
console.print(f" Target: {target_path}")
console.print(f" Volume Mode: {volume_mode}")
@@ -149,6 +172,22 @@ def execute_workflow_submission(
if timeout:
console.print(f" Timeout: {timeout}s")
# Check if target path exists locally
target_path_obj = Path(target_path)
use_upload = target_path_obj.exists()
if use_upload:
# Show file/directory info
if target_path_obj.is_dir():
num_files = sum(1 for _ in target_path_obj.rglob("*") if _.is_file())
console.print(f" Upload: Directory with {num_files} files")
else:
size_mb = target_path_obj.stat().st_size / (1024 * 1024)
console.print(f" Upload: File ({size_mb:.2f} MB)")
else:
console.print(" [yellow]⚠️ Warning: Target path does not exist locally[/yellow]")
console.print(" [yellow] Attempting to use path-based submission (backend must have access)[/yellow]")
# Only ask for confirmation in interactive mode
if interactive:
if not Confirm.ask("\nExecute workflow?", default=True, console=console):
@@ -160,32 +199,74 @@ def execute_workflow_submission(
# Submit the workflow with enhanced progress
console.print(f"\n🚀 Executing workflow: [bold yellow]{workflow}[/bold yellow]")
if use_upload:
# Use new upload-based submission
steps = [
"Validating workflow configuration",
"Creating tarball (if directory)",
"Uploading target to backend",
"Starting workflow execution",
"Initializing execution environment"
]
with step_progress(steps, f"Executing {workflow}") as progress:
progress.next_step() # Validating
time.sleep(PROGRESS_STEP_DELAYS["validating"])
progress.next_step() # Creating tarball
time.sleep(PROGRESS_STEP_DELAYS["connecting"])
progress.next_step() # Uploading
# Use the new upload method
response = client.submit_workflow_with_upload(
workflow_name=workflow,
target_path=target_path,
parameters=parameters,
timeout=timeout
)
time.sleep(PROGRESS_STEP_DELAYS["uploading"])
progress.next_step() # Starting
time.sleep(PROGRESS_STEP_DELAYS["creating"])
progress.next_step() # Initializing
time.sleep(PROGRESS_STEP_DELAYS["initializing"])
progress.complete("Workflow started successfully!")
else:
# Fall back to path-based submission (for backward compatibility)
steps = [
"Validating workflow configuration",
"Connecting to FuzzForge API",
"Submitting workflow parameters",
"Creating workflow deployment",
"Initializing execution environment"
]
with step_progress(steps, f"Executing {workflow}") as progress:
progress.next_step() # Validating
time.sleep(PROGRESS_STEP_DELAYS["validating"])
progress.next_step() # Connecting
time.sleep(PROGRESS_STEP_DELAYS["connecting"])
progress.next_step() # Submitting
submission = WorkflowSubmission(
target_path=target_path,
volume_mode=volume_mode,
parameters=parameters,
timeout=timeout
)
response = client.submit_workflow(workflow, submission)
time.sleep(PROGRESS_STEP_DELAYS["uploading"])
progress.next_step() # Creating deployment
time.sleep(PROGRESS_STEP_DELAYS["creating"])
progress.next_step() # Initializing
time.sleep(PROGRESS_STEP_DELAYS["initializing"])
progress.complete("Workflow started successfully!")
return response
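The "Creating tarball (if directory)" step above packs a directory target before `submit_workflow_with_upload` sends it. The SDK handles this internally; a minimal sketch of the packing idea (function name and layout are illustrative, not the SDK's actual implementation):

```python
import tarfile
import tempfile
from pathlib import Path

def pack_target(target_path: Path) -> Path:
    """Pack a directory into a gzipped tarball for upload (illustrative sketch)."""
    tmp = tempfile.NamedTemporaryFile(suffix=".tar.gz", delete=False)
    tmp.close()  # close the handle so tarfile can write to the path
    with tarfile.open(tmp.name, "w:gz") as tar:
        # Store entries relative to the target root so extraction is clean
        tar.add(target_path, arcname=".")
    return Path(tmp.name)
```

On the backend side, the matching "Tarball extraction" step (noted as tested in this PR) unpacks the archive into the workflow's working directory.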
@@ -219,6 +300,22 @@ def execute_workflow(
live: bool = typer.Option(
False, "--live", "-l",
help="Start live monitoring after execution (useful for fuzzing workflows)"
),
auto_start: Optional[bool] = typer.Option(
None, "--auto-start/--no-auto-start",
help="Automatically start required worker if not running (default: from config)"
),
auto_stop: Optional[bool] = typer.Option(
None, "--auto-stop/--no-auto-stop",
help="Automatically stop worker after execution completes (default: from config)"
),
fail_on: Optional[str] = typer.Option(
None, "--fail-on",
help="Fail build if findings match severity (critical,high,medium,low,all,none). Use with --wait"
),
export_sarif: Optional[str] = typer.Option(
None, "--export-sarif",
help="Export SARIF results to file after completion. Use with --wait"
)
):
"""
@@ -226,6 +323,8 @@ def execute_workflow(
Use --live for fuzzing workflows to see real-time progress.
Use --wait to wait for completion without live dashboard.
Use --fail-on with --wait to fail CI builds based on finding severity.
Use --export-sarif with --wait to export SARIF findings to a file.
"""
try:
# Validate inputs
@@ -261,14 +360,60 @@ def execute_workflow(
except Exception as e:
handle_error(e, "parsing parameters")
# Get config for worker management settings
config = get_project_config() or FuzzForgeConfig()
should_auto_start = auto_start if auto_start is not None else config.workers.auto_start_workers
should_auto_stop = auto_stop if auto_stop is not None else config.workers.auto_stop_workers
worker_container = None # Track for cleanup
worker_mgr = None
wait_completed = False # Track if wait completed successfully
try:
with get_client() as client:
# Get worker information for this workflow
try:
console.print(f"🔍 Checking worker requirements for: {workflow}")
worker_info = client.get_workflow_worker_info(workflow)
# Initialize worker manager
compose_file = config.workers.docker_compose_file
worker_mgr = WorkerManager(
compose_file=Path(compose_file) if compose_file else None,
startup_timeout=config.workers.worker_startup_timeout
)
# Ensure worker is running
worker_container = worker_info["worker_container"]
if not worker_mgr.ensure_worker_running(worker_info, auto_start=should_auto_start):
console.print(
f"❌ Worker not available: {worker_info['vertical']}",
style="red"
)
console.print(
f"💡 Start the worker manually: docker-compose start {worker_container}"
)
raise typer.Exit(1)
except typer.Exit:
raise # Re-raise Exit to preserve exit code
except Exception as e:
# If we can't get worker info, warn but continue (might be old backend)
console.print(
f"⚠️ Could not check worker requirements: {e}",
style="yellow"
)
console.print(
" Continuing without worker management...",
style="yellow"
)
response = execute_workflow_submission(
client, workflow, target_path, parameters,
volume_mode, timeout, interactive
)
console.print("✅ Workflow execution started!", style="green")
console.print(f" Execution ID: [bold cyan]{response.run_id}[/bold cyan]")
console.print(f" Status: {status_emoji(response.status)} {response.status}")
@@ -288,22 +433,22 @@ def execute_workflow(
# Don't fail the whole operation if database save fails
console.print(f"⚠️ Failed to save execution to database: {e}", style="yellow")
console.print(f"\n💡 Monitor progress: [bold cyan]fuzzforge monitor stats {response.run_id}[/bold cyan]")
console.print(f"💡 Check status: [bold cyan]fuzzforge workflow status {response.run_id}[/bold cyan]")
# Suggest --live for fuzzing workflows
if not live and not wait and "fuzzing" in workflow.lower():
console.print(f"💡 Next time try: [bold cyan]fuzzforge workflow {workflow} {target_path} --live[/bold cyan] for real-time monitoring", style="dim")
# Start live monitoring if requested
if live:
# Check if this is a fuzzing workflow to show appropriate messaging
is_fuzzing = "fuzzing" in workflow.lower()
if is_fuzzing:
console.print("\n📺 Starting live fuzzing monitor...")
console.print("💡 You'll see real-time crash discovery, execution stats, and coverage data.")
else:
console.print("\n📺 Starting live monitoring...")
console.print("Press Ctrl+C to stop monitoring (execution continues in background).\n")
@@ -312,14 +457,14 @@ def execute_workflow(
# Import monitor command and run it
live_monitor(response.run_id, refresh=3)
except KeyboardInterrupt:
console.print("\n⏹️ Live monitoring stopped (execution continues in background)", style="yellow")
except Exception as e:
console.print(f"⚠️ Failed to start live monitoring: {e}", style="yellow")
console.print(f"💡 You can still monitor manually: [bold cyan]fuzzforge monitor {response.run_id}[/bold cyan]")
# Wait for completion if requested
elif wait:
console.print("\n⏳ Waiting for execution to complete...")
try:
final_status = client.wait_for_completion(response.run_id, poll_interval=POLL_INTERVAL)
@@ -334,17 +479,63 @@ def execute_workflow(
console.print(f"⚠️ Failed to update database: {e}", style="yellow")
console.print(f"🏁 Execution completed with status: {status_emoji(final_status.status)} {final_status.status}")
wait_completed = True # Mark wait as completed
if final_status.is_completed:
# Export SARIF if requested
if export_sarif:
try:
console.print("\n📤 Exporting SARIF results...")
findings = client.get_run_findings(response.run_id)
output_path = Path(export_sarif)
with open(output_path, 'w') as f:
json.dump(findings.sarif, f, indent=2)
console.print(f"✅ SARIF exported to: [bold cyan]{output_path}[/bold cyan]")
except Exception as e:
console.print(f"⚠️ Failed to export SARIF: {e}", style="yellow")
# Check if build should fail based on findings
if fail_on:
try:
console.print(f"\n🔍 Checking findings against severity threshold: {fail_on}")
findings = client.get_run_findings(response.run_id)
if should_fail_build(findings.sarif, fail_on):
console.print("❌ [bold red]Build failed: Found blocking security issues[/bold red]")
console.print(f"💡 View details: [bold cyan]fuzzforge finding {response.run_id}[/bold cyan]")
raise typer.Exit(1)
else:
console.print("✅ [bold green]No blocking security issues found[/bold green]")
except typer.Exit:
raise # Re-raise Exit to preserve exit code
except Exception as e:
console.print(f"⚠️ Failed to check findings: {e}", style="yellow")
if not fail_on and not export_sarif:
console.print(f"💡 View findings: [bold cyan]fuzzforge findings {response.run_id}[/bold cyan]")
except KeyboardInterrupt:
console.print("\n⏹️ Monitoring cancelled (execution continues in background)", style="yellow")
except typer.Exit:
raise # Re-raise Exit to preserve exit code
except Exception as e:
handle_error(e, "waiting for completion")
except typer.Exit:
raise # Re-raise Exit to preserve exit code
except Exception as e:
handle_error(e, "executing workflow")
finally:
# Stop worker if auto-stop is enabled and wait completed
if should_auto_stop and worker_container and worker_mgr and wait_completed:
try:
console.print("\n🛑 Stopping worker (auto-stop enabled)...")
if worker_mgr.stop_worker(worker_container):
console.print(f"✅ Worker stopped: {worker_container}")
except Exception as e:
console.print(
f"⚠️ Failed to stop worker: {e}",
style="yellow"
)
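The `--fail-on` gate above delegates to `should_fail_build`, which is defined elsewhere in the codebase. A minimal sketch of the idea, under the assumption that findings are SARIF `results` whose `level` maps onto a severity ladder (the mapping below is assumed, not FuzzForge's actual table):

```python
# Illustrative severity ladder; "all" fails on any finding, "none" never fails.
SEVERITY_ORDER = ["low", "medium", "high", "critical"]

# Assumed mapping from SARIF result levels to severities (illustrative).
LEVEL_TO_SEVERITY = {"note": "low", "warning": "medium", "error": "high"}

def should_fail_build(sarif: dict, fail_on: str) -> bool:
    """Return True if any SARIF result meets or exceeds the threshold (sketch)."""
    if fail_on == "none":
        return False
    results = [r for run in sarif.get("runs", []) for r in run.get("results", [])]
    if fail_on == "all":
        return bool(results)
    threshold = SEVERITY_ORDER.index(fail_on)
    return any(
        SEVERITY_ORDER.index(LEVEL_TO_SEVERITY.get(r.get("level"), "medium")) >= threshold
        for r in results
    )
```

Because the command exits non-zero via `typer.Exit(1)` when the threshold is met, CI systems can gate merges directly on the process exit code.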
@app.command("status")
@@ -409,7 +600,7 @@ def workflow_status(
console.print(
Panel.fit(
status_table,
title="📊 Status Information",
box=box.ROUNDED
)
)
@@ -479,7 +670,7 @@ def workflow_history(
console.print()
console.print(table)
console.print("\n💡 Use [bold cyan]fuzzforge workflow status <execution-id>[/bold cyan] for detailed status")
except Exception as e:
handle_error(e, "listing execution history")
@@ -527,7 +718,7 @@ def retry_workflow(
# Modify parameters if requested
if modify_params and parameters:
console.print("\n📝 [bold]Current parameters:[/bold]")
for key, value in parameters.items():
new_value = Prompt.ask(
f"{key}",
@@ -559,7 +750,7 @@ def retry_workflow(
response = client.submit_workflow(original_run.workflow, submission)
console.print("\n✅ Retry submitted successfully!", style="green")
console.print(f" New Execution ID: [bold cyan]{response.run_id}[/bold cyan]")
console.print(f" Status: {status_emoji(response.status)} {response.status}")
@@ -578,7 +769,7 @@ def retry_workflow(
except Exception as e:
console.print(f"⚠️ Failed to save execution to database: {e}", style="yellow")
console.print(f"\n💡 Monitor progress: [bold cyan]fuzzforge monitor stats {response.run_id}[/bold cyan]")
except Exception as e:
handle_error(e, "retrying workflow")

View File

@@ -18,10 +18,10 @@ import typer
from rich.console import Console
from rich.table import Table
from rich.panel import Panel
from rich.prompt import Prompt
from rich.syntax import Syntax
from rich import box
from typing import Optional
from ..config import get_project_config, FuzzForgeConfig
from ..fuzzy import enhanced_workflow_not_found_handler
@@ -68,7 +68,7 @@ def list_workflows():
console.print(f"\n🔧 [bold]Available Workflows ({len(workflows)})[/bold]\n")
console.print(table)
console.print("\n💡 Use [bold cyan]fuzzforge workflows info <name>[/bold cyan] for detailed information")
except Exception as e:
console.print(f"❌ Failed to fetch workflows: {e}", style="red")
@@ -100,7 +100,6 @@ def workflow_info(
info_table.add_row("Author", workflow.author)
if workflow.tags:
info_table.add_row("Tags", ", ".join(workflow.tags))
info_table.add_row("Custom Docker", "✅ Yes" if workflow.has_custom_docker else "❌ No")
console.print(
@@ -193,7 +192,7 @@ def workflow_parameters(
parameters = {}
properties = workflow.parameters.get("properties", {})
required_params = set(workflow.parameters.get("required", []))
defaults = param_response.default_parameters
if interactive:
console.print("🔧 Enter parameter values (press Enter for default):\n")

View File

@@ -16,7 +16,7 @@ Provides intelligent tab completion for commands, workflows, run IDs, and parame
import typer
from typing import List
from pathlib import Path
from .config import get_project_config, FuzzForgeConfig

View File

@@ -66,6 +66,15 @@ class PreferencesConfig(BaseModel):
color_output: bool = True
class WorkerConfig(BaseModel):
"""Worker lifecycle management configuration."""
auto_start_workers: bool = True
auto_stop_workers: bool = False
worker_startup_timeout: int = 60
docker_compose_file: Optional[str] = None
class CogneeConfig(BaseModel):
"""Cognee integration metadata."""
@@ -84,6 +93,7 @@ class FuzzForgeConfig(BaseModel):
project: ProjectConfig = Field(default_factory=ProjectConfig)
retention: RetentionConfig = Field(default_factory=RetentionConfig)
preferences: PreferencesConfig = Field(default_factory=PreferencesConfig)
workers: WorkerConfig = Field(default_factory=WorkerConfig)
cognee: CogneeConfig = Field(default_factory=CogneeConfig)
@classmethod

View File
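The `WorkerConfig` defaults above combine with the tri-state `--auto-start/--no-auto-start` and `--auto-stop/--no-auto-stop` flags: an unset flag arrives as `None` and falls back to the config value, while an explicit flag always wins. Sketched with a plain dataclass standing in for the pydantic model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkerConfig:
    """Stand-in for the pydantic WorkerConfig model (same defaults)."""
    auto_start_workers: bool = True
    auto_stop_workers: bool = False

def resolve(flag: Optional[bool], default: bool) -> bool:
    """CLI flag wins when given; otherwise use the config default."""
    return flag if flag is not None else default

cfg = WorkerConfig()
should_auto_start = resolve(None, cfg.auto_start_workers)   # unset flag -> config default
should_auto_stop = resolve(True, cfg.auto_stop_workers)     # --auto-stop overrides config
```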

@@ -163,7 +163,7 @@ class FuzzForgeDatabase:
"Database is corrupted. Use 'ff init --force' to reset."
) from e
raise
except Exception:
if conn:
try:
conn.rollback()

View File

@@ -15,7 +15,7 @@ Enhanced exception handling and error utilities for FuzzForge CLI with rich cont
import time
import functools
from typing import Any, Callable, Optional, Union, List
from pathlib import Path
import typer
@@ -24,20 +24,10 @@ from rich.console import Console
from rich.panel import Panel
from rich.text import Text
from rich.table import Table
# Import SDK exceptions for rich handling
from fuzzforge_sdk.exceptions import (
FuzzForgeError as SDKFuzzForgeError
)
console = Console()
@@ -335,7 +325,7 @@ def handle_error(error: Exception, context: str = "") -> None:
# Show error details for debugging
console.print(f"\n[dim yellow]Error type: {type(error).__name__}[/dim yellow]")
console.print("[dim yellow]Please report this issue if it persists[/dim yellow]")
console.print()
raise typer.Exit(1)
@@ -430,8 +420,9 @@ def validate_run_id(run_id: str) -> str:
if not run_id or len(run_id) < 8:
raise ValidationError("run_id", run_id, "at least 8 characters")
# Allow alphanumeric characters, hyphens, and underscores
if not run_id.replace('-', '').replace('_', '').isalnum():
raise ValidationError("run_id", run_id, "alphanumeric characters, hyphens, and underscores only")
return run_id
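The relaxed rule now accepts Temporal-style run IDs that mix hyphens and underscores. A standalone mirror of the check, with a few example inputs:

```python
def is_valid_run_id(run_id: str) -> bool:
    """Mirror of the validation rule above, returning bool instead of raising."""
    if not run_id or len(run_id) < 8:
        return False
    # Allow alphanumeric characters, hyphens, and underscores
    return run_id.replace("-", "").replace("_", "").isalnum()

assert is_valid_run_id("security_assessment-abc123")  # underscores now accepted
assert not is_valid_run_id("short")                   # fewer than 8 characters
assert not is_valid_run_id("bad id 1234")             # spaces rejected
```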

View File

@@ -117,7 +117,6 @@ def config(
"""
⚙️ Manage configuration (show all, get, or set values)
"""
if key is None:
# No arguments: show all config
@@ -205,10 +204,29 @@ def run_workflow(
live: bool = typer.Option(
False, "--live", "-l",
help="Start live monitoring after execution (useful for fuzzing workflows)"
),
auto_start: Optional[bool] = typer.Option(
None, "--auto-start/--no-auto-start",
help="Automatically start required worker if not running (default: from config)"
),
auto_stop: Optional[bool] = typer.Option(
None, "--auto-stop/--no-auto-stop",
help="Automatically stop worker after execution completes (default: from config)"
),
fail_on: Optional[str] = typer.Option(
None, "--fail-on",
help="Fail build if findings match severity (critical,high,medium,low,all,none). Use with --wait"
),
export_sarif: Optional[str] = typer.Option(
None, "--export-sarif",
help="Export SARIF results to file after completion. Use with --wait"
)
):
"""
🚀 Execute a security testing workflow
Use --fail-on with --wait to fail CI builds based on finding severity.
Use --export-sarif with --wait to export SARIF findings to a file.
"""
from .commands.workflow_exec import execute_workflow
@@ -221,7 +239,11 @@ def run_workflow(
timeout=timeout,
interactive=interactive,
wait=wait,
live=live,
auto_start=auto_start,
auto_stop=auto_stop,
fail_on=fail_on,
export_sarif=export_sarif
)
@workflow_app.callback()
@@ -356,43 +378,6 @@ app.add_typer(ai.app, name="ai", help="🤖 AI integration features")
app.add_typer(ingest.app, name="ingest", help="🧠 Ingest knowledge into AI")
# Help and utility commands
@app.command()
def version():
"""
@@ -400,7 +385,7 @@ def version():
"""
from . import __version__
console.print(f"FuzzForge CLI v{__version__}")
console.print("Short command: ff")
@app.callback()
@@ -418,7 +403,6 @@ def main_callback(
• ff init - Initialize a new project
• ff workflows - See available workflows
• ff workflow <name> <target> - Execute a workflow
"""
if version:
from . import __version__
@@ -468,7 +452,7 @@ def main():
'workflows', 'workflow',
'findings', 'finding',
'monitor', 'ai', 'ingest',
'version'
]
if main_cmd not in valid_commands:

View File

@@ -16,10 +16,9 @@ Provides rich progress bars, spinners, and status displays for all long-running
import time
import asyncio
from contextlib import contextmanager
from typing import Optional, Any, Dict, List
from datetime import datetime
from rich.console import Console
from rich.progress import (

View File

@@ -15,7 +15,7 @@ Input validation utilities for FuzzForge CLI.
import re
from pathlib import Path
from typing import Any, Dict, List, Optional
from .constants import SUPPORTED_VOLUME_MODES, SUPPORTED_EXPORT_FORMATS
from .exceptions import ValidationError

View File

@@ -0,0 +1,286 @@
"""
Worker lifecycle management for FuzzForge CLI.
Manages on-demand startup and shutdown of Temporal workers using Docker Compose.
"""
# Copyright (c) 2025 FuzzingLabs
#
# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
# at the root of this repository for details.
#
# After the Change Date (four years from publication), this version of the
# Licensed Work will be made available under the Apache License, Version 2.0.
# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
#
# Additional attribution and requirements are provided in the NOTICE file.
import logging
import subprocess
import time
from pathlib import Path
from typing import Optional, Dict, Any
from rich.console import Console
logger = logging.getLogger(__name__)
console = Console()
class WorkerManager:
"""
Manages Temporal worker lifecycle using docker-compose.
This class handles:
- Checking if workers are running
- Starting workers on demand
- Waiting for workers to be ready
- Stopping workers when done
"""
def __init__(
self,
compose_file: Optional[Path] = None,
startup_timeout: int = 60,
health_check_interval: float = 2.0
):
"""
Initialize WorkerManager.
Args:
compose_file: Path to docker-compose.yml (defaults to auto-detect)
startup_timeout: Maximum seconds to wait for worker startup
health_check_interval: Seconds between health checks
"""
self.compose_file = compose_file or self._find_compose_file()
self.startup_timeout = startup_timeout
self.health_check_interval = health_check_interval
def _find_compose_file(self) -> Path:
"""
Auto-detect docker-compose.yml location.
Searches upward from current directory to find the compose file.
"""
current = Path.cwd()
# Try current directory and parents
for parent in [current] + list(current.parents):
compose_path = parent / "docker-compose.yml"
if compose_path.exists():
return compose_path
# Fallback to default location
return Path("docker-compose.yml")
def _run_docker_compose(self, *args: str) -> subprocess.CompletedProcess:
"""
Run docker-compose command.
Args:
*args: Arguments to pass to docker-compose
Returns:
CompletedProcess with result
Raises:
subprocess.CalledProcessError: If command fails
"""
cmd = ["docker-compose", "-f", str(self.compose_file)] + list(args)
logger.debug(f"Running: {' '.join(cmd)}")
return subprocess.run(
cmd,
capture_output=True,
text=True,
check=True
)
def is_worker_running(self, container_name: str) -> bool:
"""
Check if a worker container is running.
Args:
container_name: Name of the Docker container (e.g., "fuzzforge-worker-ossfuzz")
Returns:
True if container is running, False otherwise
"""
try:
result = subprocess.run(
["docker", "inspect", "-f", "{{.State.Running}}", container_name],
capture_output=True,
text=True,
check=False
)
# Output is "true" or "false"
return result.stdout.strip().lower() == "true"
except Exception as e:
logger.debug(f"Failed to check worker status: {e}")
return False
def start_worker(self, container_name: str) -> bool:
"""
Start a worker container using docker.
Args:
container_name: Name of the Docker container to start
Returns:
True if started successfully, False otherwise
"""
try:
console.print(f"🚀 Starting worker: {container_name}")
# Use docker start directly (works with container name)
subprocess.run(
["docker", "start", container_name],
capture_output=True,
text=True,
check=True
)
logger.info(f"Worker {container_name} started")
return True
except subprocess.CalledProcessError as e:
logger.error(f"Failed to start worker {container_name}: {e.stderr}")
console.print(f"❌ Failed to start worker: {e.stderr}", style="red")
return False
except Exception as e:
logger.error(f"Unexpected error starting worker {container_name}: {e}")
console.print(f"❌ Unexpected error: {e}", style="red")
return False
def wait_for_worker_ready(self, container_name: str, timeout: Optional[int] = None) -> bool:
"""
Wait for a worker to be healthy and ready to process tasks.
Args:
container_name: Name of the Docker container
timeout: Maximum seconds to wait (uses instance default if not specified)
Returns:
True if worker is ready, False if timeout reached
Raises:
TimeoutError: If worker doesn't become ready within timeout
"""
timeout = timeout or self.startup_timeout
start_time = time.time()
console.print("⏳ Waiting for worker to be ready...")
while time.time() - start_time < timeout:
# Check if container is running
if not self.is_worker_running(container_name):
logger.debug(f"Worker {container_name} not running yet")
time.sleep(self.health_check_interval)
continue
# Check container health status
try:
result = subprocess.run(
["docker", "inspect", "-f", "{{.State.Health.Status}}", container_name],
capture_output=True,
text=True,
check=False
)
health_status = result.stdout.strip()
# If no health check is defined, assume healthy after running
if health_status == "<no value>" or health_status == "":
logger.info(f"Worker {container_name} is running (no health check)")
console.print(f"✅ Worker ready: {container_name}")
return True
if health_status == "healthy":
logger.info(f"Worker {container_name} is healthy")
console.print(f"✅ Worker ready: {container_name}")
return True
logger.debug(f"Worker {container_name} health: {health_status}")
except Exception as e:
logger.debug(f"Failed to check health: {e}")
time.sleep(self.health_check_interval)
elapsed = time.time() - start_time
logger.warning(f"Worker {container_name} did not become ready within {elapsed:.1f}s")
console.print(f"⚠️ Worker startup timeout after {elapsed:.1f}s", style="yellow")
return False
def stop_worker(self, container_name: str) -> bool:
"""
Stop a worker container using docker.
Args:
container_name: Name of the Docker container to stop
Returns:
True if stopped successfully, False otherwise
"""
try:
console.print(f"🛑 Stopping worker: {container_name}")
# Use docker stop directly (works with container name)
subprocess.run(
["docker", "stop", container_name],
capture_output=True,
text=True,
check=True
)
logger.info(f"Worker {container_name} stopped")
return True
except subprocess.CalledProcessError as e:
logger.error(f"Failed to stop worker {container_name}: {e.stderr}")
console.print(f"❌ Failed to stop worker: {e.stderr}", style="red")
return False
except Exception as e:
logger.error(f"Unexpected error stopping worker {container_name}: {e}")
console.print(f"❌ Unexpected error: {e}", style="red")
return False
def ensure_worker_running(
self,
worker_info: Dict[str, Any],
auto_start: bool = True
) -> bool:
"""
Ensure a worker is running, starting it if necessary.
Args:
worker_info: Worker information dict from API (contains worker_container, etc.)
auto_start: Whether to automatically start the worker if not running
Returns:
True if worker is running, False otherwise
"""
container_name = worker_info["worker_container"]
vertical = worker_info["vertical"]
# Check if already running
if self.is_worker_running(container_name):
console.print(f"✓ Worker already running: {vertical}")
return True
if not auto_start:
console.print(
f"⚠️ Worker not running: {vertical}. Use --auto-start to start automatically.",
style="yellow"
)
return False
# Start the worker
if not self.start_worker(container_name):
return False
# Wait for it to be ready
return self.wait_for_worker_ready(container_name)
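The running-state check in `is_worker_running` shells out to `docker inspect` with a Go template and parses the `"true"`/`"false"` string it prints. That parsing can be exercised without a Docker daemon by stubbing `subprocess.run` (a test sketch of the same logic, reimplemented standalone here):

```python
import subprocess
from unittest import mock

def container_is_running(name: str) -> bool:
    """Parse `docker inspect` Go-template output into a bool (mirrors is_worker_running)."""
    result = subprocess.run(
        ["docker", "inspect", "-f", "{{.State.Running}}", name],
        capture_output=True, text=True, check=False,
    )
    # docker prints "true" or "false" followed by a newline
    return result.stdout.strip().lower() == "true"

# Exercise the parsing without Docker by stubbing subprocess.run
with mock.patch("subprocess.run") as run:
    run.return_value = mock.Mock(stdout="true\n")
    assert container_is_running("fuzzforge-worker-rust")
```

The same stubbing approach works for the health-status check in `wait_for_worker_ready`, including the `"<no value>"` case docker prints when a container defines no HEALTHCHECK.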