FuzzForge CLI

🛡️ FuzzForge CLI - Command-line interface for the FuzzForge security testing platform

A comprehensive CLI for managing security testing workflows, monitoring runs in real time, and analyzing findings, with a rich terminal interface and persistent project management.

Features

  • 📁 Project Management - Initialize and manage FuzzForge projects with local databases
  • 🔧 Workflow Management - Browse workflows, inspect their metadata, and build parameter sets
  • 🚀 Workflow Execution - Submit workflows, check status, review history, and retry runs
  • 🔍 Findings Analysis - View, export, and analyze security findings in multiple formats
  • 📊 Real-time Monitoring - Live dashboards for fuzzing statistics and crash reports
  • ⚙️ Configuration - Flexible project and global configuration management
  • 🎨 Rich UI - Beautiful tables, progress bars, and interactive prompts
  • 💾 Persistent Storage - SQLite database for runs, findings, and crash data
  • 🛡️ Error Handling - Comprehensive error handling with user-friendly messages
  • 🔄 Network Resilience - Automatic retries and graceful degradation

🚀 Quick Start

Installation

Prerequisites

  • Python 3.11 or higher
  • uv package manager
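
If uv is not installed yet, it can typically be installed with Astral's official installer script (or via pip):

# Install uv (official installer script)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Alternative: install via pip
pip install uv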

Install FuzzForge CLI

# Clone the repository
git clone https://github.com/FuzzingLabs/fuzzforge_alpha.git
cd fuzzforge_alpha/cli

# Install globally with uv (recommended)
uv tool install .

# Alternative: Install in development mode
uv sync
uv add --editable ../sdk
uv tool install --editable .

# Verify installation
fuzzforge --help

Shell Completion (Optional)

# Install completion for your shell
fuzzforge --install-completion

Initialize Your First Project

# Create a new project directory
mkdir my-security-project
cd my-security-project

# Initialize FuzzForge project
ff init

# Check status
fuzzforge status

This creates a .fuzzforge/ directory with:

  • SQLite database for persistent storage
  • Configuration file (config.yaml)
  • Project metadata
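
You can confirm the layout with a quick directory listing (the filenames in the comment are the defaults described later in this README):

# Inspect the generated project files
ls -la .fuzzforge/
# Expected entries include config.yaml and findings.db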

Run Your First Analysis

# List available workflows
fuzzforge workflows list

# Get workflow details
fuzzforge workflows info security_assessment

# Submit a workflow for analysis
fuzzforge workflow security_assessment /path/to/your/code

# Monitor progress in real-time
fuzzforge monitor live <execution-id>

# View findings when complete
fuzzforge finding <execution-id>

📚 Command Reference

Project Management

ff init

Initialize a new FuzzForge project in the current directory.

ff init --name "My Security Project" --api-url "http://localhost:8000"

Options:

  • --name, -n - Project name (defaults to directory name)
  • --api-url, -u - FuzzForge API URL (defaults to http://localhost:8000)
  • --force, -f - Force initialization even if project exists

fuzzforge status

Show comprehensive project and API status information.

fuzzforge status

Displays:

  • Project information and configuration
  • Database statistics (runs, findings, crashes)
  • API connectivity and available workflows

Workflow Management

fuzzforge workflows list

List all available security testing workflows.

fuzzforge workflows list

fuzzforge workflows info <workflow-name>

Show detailed information about a specific workflow.

fuzzforge workflows info security_assessment

Displays:

  • Workflow metadata (version, author, description)
  • Parameter schema and requirements
  • Supported volume modes and features

fuzzforge workflows parameters <workflow-name>

Interactive parameter builder for workflows.

# Interactive mode
fuzzforge workflows parameters security_assessment

# Save parameters to file
fuzzforge workflows parameters security_assessment --output params.json

# Non-interactive mode (show schema only)
fuzzforge workflows parameters security_assessment --no-interactive

Workflow Execution

fuzzforge workflow <workflow> <target-path>

Execute a security testing workflow with automatic file upload.

# Basic execution - CLI automatically detects local files and uploads them
fuzzforge workflow security_assessment /path/to/code

# With parameters
fuzzforge workflow security_assessment /path/to/binary \
  --param timeout=3600 \
  --param iterations=10000

# With parameter file
fuzzforge workflow security_assessment /path/to/code \
  --param-file my-params.json

# Wait for completion
fuzzforge workflow security_assessment /path/to/code --wait

Automatic File Upload Behavior:

The CLI intelligently handles target files based on whether they exist locally:

  1. Local file/directory exists → Automatic upload to MinIO:

    • CLI creates a compressed tarball (.tar.gz) for directories
    • Uploads via HTTP to backend API
    • Backend stores in MinIO with unique target_id
    • Worker downloads from MinIO when ready to analyze
    • Works from any machine (no shared filesystem needed)
  2. Path doesn't exist locally → Path-based submission (legacy):

    • Path is sent to backend as-is
    • Backend expects target to be accessible on its filesystem
    • ⚠️ Only works when CLI and backend share filesystem

Example workflow:

$ ff workflow security_assessment ./my-project

🔧 Getting workflow information for: security_assessment
📦 Detected local directory: ./my-project (21 files)
🗜️  Creating compressed tarball...
📤 Uploading to backend (0.01 MB)...
✅ Upload complete! Target ID: 548193a1-f73f-4ec1-8068-19ec2660b8e4

🎯 Executing workflow:
   Workflow: security_assessment
   Target: my-project.tar.gz (uploaded)
   Volume Mode: ro
   Status: 🔄 RUNNING

✅ Workflow started successfully!
   Execution ID: security_assessment-52781925

Upload Details:

  • Max file size: 10 GB (configurable on backend)
  • Compression: Automatic for directories (reduces upload time)
  • Storage: Files stored in MinIO (S3-compatible)
  • Lifecycle: Automatic cleanup after 7 days
  • Caching: Workers cache downloaded targets for faster repeated workflows

Options:

  • --param, -p - Parameter in key=value format (can be used multiple times)
  • --param-file, -f - JSON file containing parameters
  • --volume-mode, -v - Volume mount mode: ro (read-only) or rw (read-write)
  • --timeout, -t - Execution timeout in seconds
  • --interactive/--no-interactive, -i/-n - Interactive parameter input
  • --wait, -w - Wait for execution to complete
  • --live, -l - Show live monitoring during execution
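
These options can be combined freely; the following is an illustrative combination of the flags documented above:

# Illustrative: parameter file, custom timeout, wait for completion with live monitoring
fuzzforge workflow security_assessment ./src \
  --param-file params.json \
  --timeout 1800 \
  --wait --live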

Worker Lifecycle Options (v0.7.0):

  • --auto-start/--no-auto-start - Auto-start required worker (default: from config)
  • --auto-stop/--no-auto-stop - Auto-stop worker after completion (default: from config)

Examples:

# Worker starts automatically (default behavior)
fuzzforge workflow ossfuzz_campaign . project_name=zlib

# Disable auto-start (worker must be running already)
fuzzforge workflow ossfuzz_campaign . --no-auto-start

# Auto-stop worker after completion
fuzzforge workflow ossfuzz_campaign . --wait --auto-stop

fuzzforge workflow status [execution-id]

Check the status of a workflow execution.

# Check specific execution
fuzzforge workflow status abc123def456

# Check most recent execution
fuzzforge workflow status

fuzzforge workflow history

Show workflow execution history from local database.

# List all executions
fuzzforge workflow history

# Filter by workflow
fuzzforge workflow history --workflow security_assessment

# Filter by status
fuzzforge workflow history --status completed

# Limit results
fuzzforge workflow history --limit 10

fuzzforge workflow retry <execution-id>

Retry a workflow with the same or modified parameters.

# Retry with same parameters
fuzzforge workflow retry abc123def456

# Modify parameters interactively
fuzzforge workflow retry abc123def456 --modify-params

Findings Management

fuzzforge finding [execution-id]

View security findings for a specific execution.

# Display latest findings
fuzzforge finding

# Display specific execution findings
fuzzforge finding abc123def456

fuzzforge findings

Browse all security findings from local database.

# List all findings
fuzzforge findings

# Show findings history
fuzzforge findings history --limit 20

fuzzforge finding export [execution-id]

Export security findings in various formats.

# Export latest findings
fuzzforge finding export --format json

# Export specific execution findings
fuzzforge finding export abc123def456 --format sarif

# Export as CSV with output file
fuzzforge finding export abc123def456 --format csv --output report.csv

# Export as HTML report
fuzzforge finding export --format html --output report.html

Real-time Monitoring

fuzzforge monitor stats <execution-id>

Show current fuzzing statistics.

# Show stats once
fuzzforge monitor stats abc123def456 --once

# Live updating stats (default)
fuzzforge monitor stats abc123def456 --refresh 5

fuzzforge monitor crashes <run-id>

Display crash reports for a fuzzing run.

fuzzforge monitor crashes abc123def456 --limit 50

fuzzforge monitor live <run-id>

Real-time monitoring dashboard with live updates.

fuzzforge monitor live abc123def456 --refresh 3

Features:

  • Live updating statistics
  • Progress indicators and bars
  • Run status monitoring
  • Automatic completion detection

Configuration Management

fuzzforge config show

Display current configuration settings.

# Show project configuration
fuzzforge config show

# Show global configuration
fuzzforge config show --global

fuzzforge config set <key> <value>

Set a configuration value.

# Project settings
fuzzforge config set project.api_url "http://api.fuzzforge.com"
fuzzforge config set project.default_timeout 7200
fuzzforge config set project.default_workflow "security_assessment"

# Retention settings
fuzzforge config set retention.max_runs 200
fuzzforge config set retention.keep_findings_days 120

# Preferences
fuzzforge config set preferences.auto_save_findings true
fuzzforge config set preferences.show_progress_bars false

# Global configuration
fuzzforge config set project.api_url "http://global.api.com" --global

fuzzforge config get <key>

Get a specific configuration value.

fuzzforge config get project.api_url
fuzzforge config get retention.max_runs --global

fuzzforge config reset

Reset configuration to defaults.

# Reset project configuration
fuzzforge config reset

# Reset global configuration
fuzzforge config reset --global

# Skip confirmation
fuzzforge config reset --force

fuzzforge config edit

Open configuration file in default editor.

# Edit project configuration
fuzzforge config edit

# Edit global configuration
fuzzforge config edit --global

🏗️ Project Structure

When you initialize a FuzzForge project, the following structure is created:

my-project/
├── .fuzzforge/
│   ├── config.yaml          # Project configuration
│   └── findings.db          # SQLite database
├── .gitignore               # Updated with FuzzForge entries
└── README.md                # Project README (if created)

Database Schema

The SQLite database stores:

  • runs - Workflow run history and metadata
  • findings - Security findings and SARIF data
  • crashes - Crash reports and fuzzing data
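
Because this is a plain SQLite file, it can be inspected directly with the sqlite3 command-line tool. The table names below follow the list above; exact column layouts may vary between CLI versions:

# Read-only inspection of the local database
sqlite3 .fuzzforge/findings.db ".tables"
sqlite3 .fuzzforge/findings.db "SELECT COUNT(*) FROM runs;"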

Configuration Format

Project configuration (.fuzzforge/config.yaml):

project:
  name: "My Security Project"
  api_url: "http://localhost:8000"
  default_timeout: 3600
  default_workflow: null

retention:
  max_runs: 100
  keep_findings_days: 90

preferences:
  auto_save_findings: true
  show_progress_bars: true
  table_style: "rich"
  color_output: true

workers:
  auto_start_workers: true    # Auto-start workers when needed
  auto_stop_workers: false    # Auto-stop workers after completion
  worker_startup_timeout: 60  # Worker startup timeout (seconds)
  docker_compose_file: null   # Custom docker-compose.yml path

🔧 Advanced Usage

Parameter Handling

FuzzForge CLI supports flexible parameter input:

  1. Command line parameters:

    ff workflow workflow-name /path key1=value1 key2=value2
    
  2. Parameter files:

    echo '{"timeout": 3600, "threads": 4}' > params.json
    ff workflow workflow-name /path --param-file params.json
    
  3. Interactive prompts:

    ff workflow workflow-name /path --interactive
    
  4. Parameter builder:

    ff workflows parameters workflow-name --output my-params.json
    ff workflow workflow-name /path --param-file my-params.json
    

Environment Variables

Override configuration with environment variables:

export FUZZFORGE_API_URL="http://production.api.com"
export FUZZFORGE_TIMEOUT="7200"
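
Variables can also be set inline for a single invocation, which is convenient in CI jobs or when switching between backends (the hostname below is only an example):

# One-off override without touching project configuration
FUZZFORGE_API_URL="http://staging.example.com:8000" fuzzforge status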

Data Retention

Configure automatic cleanup of old data:

# Keep only 50 runs
fuzzforge config set retention.max_runs 50

# Keep findings for 30 days
fuzzforge config set retention.keep_findings_days 30

Export Formats

Support for multiple export formats:

  • JSON - Simplified findings structure
  • CSV - Tabular data for spreadsheets
  • HTML - Interactive web report
  • SARIF - Standard security analysis format
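
JSON and SARIF exports are plain text and can be post-processed with standard tools such as jq. The query below relies on the standard SARIF 2.1.0 layout (runs[].results[]); adjust it if your export differs:

# Export as SARIF, then count results with jq
fuzzforge finding export --format sarif --output report.sarif
jq '[.runs[].results[]] | length' report.sarif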

🛠️ Development

Setup Development Environment

# Clone repository
git clone https://github.com/FuzzingLabs/fuzzforge_alpha.git
cd fuzzforge_alpha/cli

# Install in development mode
uv sync
uv add --editable ../sdk

# Install CLI in editable mode
uv tool install --editable .

Project Structure

cli/
├── src/fuzzforge_cli/
│   ├── __init__.py
│   ├── main.py              # Main CLI app
│   ├── config.py            # Configuration management
│   ├── database.py          # Database operations
│   ├── exceptions.py        # Error handling
│   ├── api_validation.py    # API response validation
│   └── commands/            # Command implementations
│       ├── init.py          # Project initialization
│       ├── workflows.py     # Workflow management
│       ├── runs.py          # Run management
│       ├── findings.py      # Findings management
│       ├── monitor.py       # Real-time monitoring
│       ├── config.py        # Configuration commands
│       └── status.py        # Status information
├── pyproject.toml           # Project configuration
└── README.md               # This file

Running Tests

# Run tests (when available)
uv run pytest

# Code formatting
uv run black src/
uv run isort src/

# Type checking
uv run mypy src/

⚠️ Troubleshooting

Common Issues

"No FuzzForge project found"

# Initialize a project first
ff init

API Connection Failed

# Check API URL configuration
fuzzforge config get project.api_url

# Test API connectivity
fuzzforge status

# Update API URL if needed
fuzzforge config set project.api_url "http://correct-url:8000"

Permission Errors

# Ensure proper permissions for project directory
chmod -R 755 .fuzzforge/

# Check file ownership
ls -la .fuzzforge/

Database Issues

# Check database file exists
ls -la .fuzzforge/findings.db

# Reinitialize if corrupted (will lose data)
rm .fuzzforge/findings.db
ff init --force

Environment Variables

Set these environment variables for debugging:

export FUZZFORGE_DEBUG=1           # Enable debug logging
export FUZZFORGE_API_URL="..."     # Override API URL
export FUZZFORGE_TIMEOUT="30"      # Override timeout

Getting Help

# General help
fuzzforge --help

# Command-specific help
ff workflows --help
ff workflow --help
ff monitor live --help

# Show version
fuzzforge --version

🏆 Example Workflow

Here's a complete example of analyzing a project:

# 1. Initialize project
mkdir my-security-audit
cd my-security-audit
ff init --name "Security Audit 2024"

# 2. Check available workflows
fuzzforge workflows list

# 3. Submit comprehensive security assessment
ff workflow security_assessment /path/to/source/code --wait

# 4. View findings in table format
fuzzforge finding <run-id>

# 5. Export detailed report
fuzzforge finding export <run-id> --format html --output security_report.html

# 6. Check project statistics
fuzzforge status

📜 License

This project is licensed under the terms specified in the main FuzzForge repository.

🤝 Contributing

Contributions are welcome! Please see the main FuzzForge repository for contribution guidelines.


FuzzForge CLI - Making security testing workflows accessible and efficient from the command line.