* ci: add worker validation and Docker build checks

  Add automated validation to prevent worker-related issues:

  **Worker Validation Script:**
  - New script: .github/scripts/validate-workers.sh
  - Validates all workers in docker-compose.yml exist
  - Checks required files: Dockerfile, requirements.txt, worker.py
  - Verifies files are tracked by git (not gitignored)
  - Detects gitignore issues that could hide workers

  **CI Workflow Updates:**
  - Added validate-workers job (runs on every PR)
  - Added build-workers job (runs if workers/ modified)
  - Uses Docker Buildx for caching
  - Validates Docker images build successfully
  - Updated test-summary to check validation results

  **PR Template:**
  - New pull request template with comprehensive checklist
  - Specific section for worker-related changes
  - Reminds contributors to validate worker files
  - Includes documentation and changelog reminders

  These checks would have caught the secrets worker gitignore issue.
  Implements Phase 1 improvements from the CI/CD quality assessment.

* fix: add dev branch to test workflow triggers

  The test workflow was configured for 'develop', but the actual branch is
  named 'dev'. This caused tests not to run on PRs to the dev branch.

  Now tests will run on:
  - PRs to: main, master, dev, develop
  - Pushes to: main, master, dev, develop, feature/**

* fix: properly detect worker file changes in CI

  The previous condition used an invalid GitHub context field. Now uses
  git diff to properly detect changes to workers/ or docker-compose.yml.

  Behavior:
  - Job always runs the check step
  - Detects if workers/ or docker-compose.yml modified
  - Only builds Docker images if workers actually changed
  - Shows a clear skip message when no worker changes detected

* feat: Add Python SAST workflow with three security analysis tools

  Implements Issue #5 - Python SAST workflow that combines:
  - Dependency scanning (pip-audit) for CVE detection
  - Security linting (Bandit) for vulnerability patterns
  - Type checking (Mypy) for type safety issues

  ## Changes

  **New Modules:**
  - `DependencyScanner`: Scans Python dependencies for known CVEs using pip-audit
  - `BanditAnalyzer`: Analyzes Python code for security issues using Bandit
  - `MypyAnalyzer`: Checks Python code for type safety issues using Mypy

  **New Workflow:**
  - `python_sast`: Temporal workflow that orchestrates all three SAST tools
  - Runs tools in parallel for fast feedback (3-5 min vs hours for fuzzing)
  - Generates unified SARIF report with findings from all tools
  - Supports configurable severity/confidence thresholds

  **Updates:**
  - Added SAST dependencies to Python worker (bandit, pip-audit, mypy)
  - Updated module __init__.py files to export new analyzers
  - Added type_errors.py test file to vulnerable_app for Mypy validation

  ## Testing

  Workflow tested successfully on vulnerable_app:
  - ✅ Bandit: Detected 9 security issues (command injection, unsafe functions)
  - ✅ Mypy: Detected 5 type errors
  - ✅ DependencyScanner: Ran successfully (no CVEs in test dependencies)
  - ✅ SARIF export: Generated valid SARIF with 14 total findings

* fix: Remove unused imports to pass linter

* fix: resolve live monitoring bug, remove deprecated parameters, and auto-start Python worker

  - Fix live monitoring style error by calling _live_monitor() helper directly
  - Remove default_parameters duplication from 10 workflow metadata files
  - Remove deprecated volume_mode parameter from 26 files across CLI, SDK, backend, and docs
  - Configure Python worker to start automatically with docker compose up
  - Clean up constants, validation, completion, and example files

  Fixes #
  - Live monitoring now works correctly with --live flag
  - Workflow metadata follows JSON Schema standard
  - Cleaner codebase without deprecated volume_mode
  - Python worker (most commonly used) starts by default

* fix: resolve linter errors and optimize CI worker builds

  - Remove unused Literal import from backend findings model
  - Remove unnecessary f-string prefixes in CLI findings command
  - Optimize GitHub Actions to build only modified workers
  - Detect specific worker changes (python, secrets, rust, android, ossfuzz)
  - Build only changed workers instead of all 5
  - Build all workers if docker-compose.yml changes
  - Significantly reduces CI build time

* feat: Add Android static analysis workflow with Jadx, OpenGrep, and MobSF

  Comprehensive Android security testing workflow converted from Prefect
  to Temporal architecture.

  Modules (3):
  - JadxDecompiler: APK to Java source code decompilation
  - OpenGrepAndroid: Static analysis with Android-specific security rules
  - MobSFScanner: Comprehensive mobile security framework integration

  Custom Rules (13):
  - clipboard-sensitive-data, hardcoded-secrets, insecure-data-storage
  - insecure-deeplink, insecure-logging, intent-redirection
  - sensitive_data_sharedPreferences, sqlite-injection
  - vulnerable-activity, vulnerable-content-provider, vulnerable-service
  - webview-javascript-enabled, webview-load-arbitrary-url

  Workflow:
  - 6-phase Temporal workflow: download → Jadx → OpenGrep → MobSF → SARIF → upload
  - 4 activities: decompile_with_jadx, scan_with_opengrep, scan_with_mobsf, generate_android_sarif
  - SARIF output combining findings from all security tools

  Docker Worker:
  - ARM64 Mac compatibility via amd64 platform emulation
  - Pre-installed: Android SDK, Jadx 1.4.7, OpenGrep 1.45.0, MobSF 3.9.7
  - MobSF runs as a background service with API key auto-generation
  - Added aiohttp for async HTTP communication

  Test APKs:
  - BeetleBug.apk and shopnest.apk for workflow validation

* fix(android): correct activity names and MobSF API key generation

  - Fix activity names in workflow.py (get_target, upload_results, cleanup_cache)
  - Fix MobSF API key generation in Dockerfile startup script (cut delimiter)
  - Update activity parameter signatures to match actual implementations
  - Workflow now executes successfully with Jadx and OpenGrep

* feat: add platform-aware worker architecture with ARM64 support

  Implement platform-specific Dockerfile selection and graceful tool
  degradation to support both x86_64 and ARM64 (Apple Silicon) platforms.

  **Backend Changes:**
  - Add system info API endpoint (/system/info) exposing host filesystem paths
  - Add FUZZFORGE_HOST_ROOT environment variable to backend service
  - Add graceful degradation in MobSF activity for ARM64 platforms

  **CLI Changes:**
  - Implement multi-strategy path resolution (backend API, .fuzzforge marker, env var)
  - Add platform detection (linux/amd64 vs linux/arm64)
  - Add worker metadata.yaml reading for platform capabilities
  - Auto-select appropriate Dockerfile based on detected platform
  - Pass platform-specific env vars to docker-compose

  **Worker Changes:**
  - Create workers/android/metadata.yaml defining platform capabilities
  - Rename Dockerfile -> Dockerfile.amd64 (full toolchain with MobSF)
  - Create Dockerfile.arm64 (excludes MobSF due to Rosetta 2 incompatibility)
  - Update docker-compose.yml to use ${ANDROID_DOCKERFILE} variable

  **Workflow Changes:**
  - Handle MobSF "skipped" status gracefully in workflow
  - Log clear warnings when tools are unavailable on platform

  **Key Features:**
  - Automatic platform detection and Dockerfile selection
  - Graceful degradation when tools unavailable (MobSF on ARM64)
  - Works from any directory (backend API provides paths)
  - Manual override via environment variables
  - Clear user feedback about platform and selected Dockerfile

  **Benefits:**
  - Android workflow now works on Apple Silicon Macs
  - No code changes needed for other workflows
  - Convention established for future platform-specific workers

  Closes: MobSF Rosetta 2 incompatibility issue
  Implements: Platform-aware worker architecture (Option B)

* fix: make MobSFScanner import conditional for ARM64 compatibility

  - Add try-except block to conditionally import MobSFScanner in modules/android/__init__.py
  - Allows Android worker to start on ARM64 without MobSF dependencies (aiohttp)
  - MobSF activity gracefully skips on ARM64 with clear warning message
  - Remove workflow path detection logic (not needed - workflows receive directories)

  Platform-aware architecture fully functional on ARM64:
  - CLI detects ARM64 and selects Dockerfile.arm64 automatically
  - Worker builds and runs without MobSF on ARM64
  - Jadx successfully decompiles APKs (4145 files from BeetleBug.apk)
  - OpenGrep finds security vulnerabilities (8 issues found)
  - MobSF gracefully skips with warning on ARM64
  - Graceful degradation working as designed

  Tested with:
  ff workflow run android_static_analysis test_projects/android_test/ \
    --wait --no-interactive apk_path=BeetleBug.apk decompile_apk=true

  Results: 8 security findings (1 ERROR, 7 WARNINGS)

* docs: update CHANGELOG with Android workflow and ARM64 support

  Added [Unreleased] section documenting:
  - Android Static Analysis Workflow (Jadx, OpenGrep, MobSF)
  - Platform-Aware Worker Architecture with ARM64 support
  - Python SAST Workflow
  - CI/CD improvements and worker validation
  - CLI enhancements
  - Bug fixes and technical changes

  Fixed date typo: 2025-01-16 → 2025-10-16

* fix: resolve linter errors in Android modules

  - Remove unused imports from mobsf_scanner.py (asyncio, hashlib, json, Optional)
  - Remove unused variables from opengrep_android.py (start_col, end_col)
  - Remove duplicate Path import from workflow.py

* ci: support multi-platform Dockerfiles in worker validation

  Updated worker validation script to accept both:
  - Single Dockerfile pattern (existing workers)
  - Multi-platform Dockerfile pattern (Dockerfile.amd64, Dockerfile.arm64, etc.)

  This enables platform-aware worker architectures like the Android
  worker, which uses different Dockerfiles for x86_64 and ARM64 platforms.

* Feature/litellm proxy (#27)

  * feat: seed governance config and responses routing
  * Add env-configurable timeout for proxy providers
  * Integrate LiteLLM OTEL collector and update docs
  * Make .env.litellm optional for LiteLLM proxy

  * Add LiteLLM proxy integration with model-agnostic virtual keys

    Changes:
    - Bootstrap generates 3 virtual keys with individual budgets (CLI: $100, Task-Agent: $25, Cognee: $50)
    - Task-agent loads config at runtime via entrypoint script to wait for bootstrap completion
    - All keys are model-agnostic by default (no LITELLM_DEFAULT_MODELS restrictions)
    - Bootstrap handles database/env mismatch after docker prune by deleting stale aliases
    - CLI and Cognee configured to use LiteLLM proxy with virtual keys
    - Added comprehensive documentation in volumes/env/README.md

    Technical details:
    - task-agent entrypoint waits for keys in .env file before starting uvicorn
    - Bootstrap creates/updates TASK_AGENT_API_KEY, COGNEE_API_KEY, and OPENAI_API_KEY
    - Removed hardcoded API keys from docker-compose.yml
    - All services route through the http://localhost:10999 proxy

  * Fix CLI not loading virtual keys from global .env

    Project .env files with empty OPENAI_API_KEY values were overriding
    the global virtual keys. Updated _load_env_file_if_exists to only
    override with non-empty values.

  * Fix agent executor not passing API key to LiteLLM

    The agent was initializing LiteLlm without api_key or api_base,
    causing authentication errors when using the LiteLLM proxy. Now reads
    from OPENAI_API_KEY/LLM_API_KEY and LLM_ENDPOINT environment variables
    and passes them to the LiteLlm constructor.

  * Auto-populate project .env with virtual key from global config

    When running 'ff init', the command now checks for a global
    volumes/env/.env file and automatically uses the OPENAI_API_KEY
    virtual key if found. This ensures projects work with the LiteLLM
    proxy out of the box without manual key configuration.

  * docs: Update README with LiteLLM configuration instructions

    Add a note about LITELLM_GEMINI_API_KEY configuration and clarify that
    the OPENAI_API_KEY default value should not be changed, as it is used
    for the LLM proxy.

  * Refactor workflow parameters to use JSON Schema defaults

    Consolidates parameter defaults into JSON Schema format, removing the
    separate default_parameters field. Adds extract_defaults_from_json_schema()
    helper to extract defaults from the standard schema structure. Updates
    LiteLLM proxy config to use the LITELLM_OPENAI_API_KEY environment
    variable.

  * Remove .env.example from task_agent
  * Fix MDX syntax error in llm-proxy.md

  * fix: apply default parameters from metadata.yaml automatically

    Fixed TemporalManager.run_workflow() to correctly apply default
    parameter values from workflow metadata.yaml files when parameters are
    not provided by the caller.

    Previous behavior:
    - When workflow_params was empty {}, the condition `if workflow_params and 'parameters' in metadata` would fail
    - Parameters would not be extracted from the schema, so workflows received only target_id with no other parameters

    New behavior:
    - Removed the `workflow_params and` requirement from the condition
    - Now explicitly checks for defaults in the parameter spec
    - Applies defaults from metadata.yaml automatically when a param is not provided
    - Workflows receive all parameters with proper fallback: provided value > metadata default > None

    This makes metadata.yaml the single source of truth for parameter
    defaults, removing the need for workflows to implement defensive
    default handling.

    Affected workflows:
    - llm_secret_detection (was failing with KeyError)
    - All other workflows now benefit from automatic default application

    Co-authored-by: tduhamel42 <tduhamel@fuzzinglabs.com>

  * fix: add default values to llm_analysis workflow parameters

    Resolves a validation error where agent_url was None when not
    explicitly provided. The TemporalManager applies defaults from
    metadata.yaml, not from module input schemas, so all parameters need
    defaults in the workflow metadata.

    Changes:
    - Add default agent_url, llm_model (gpt-5-mini), llm_provider (openai)
    - Expand file_patterns to 45 comprehensive patterns covering code, configs, secrets, and Docker files
    - Increase default limits: max_files (10), max_file_size (100KB), timeout (90s)

  * refactor: replace .env.example with .env.template in documentation

    - Remove volumes/env/.env.example file
    - Update all documentation references to use .env.template instead
    - Update bootstrap script error message
    - Update .gitignore comment

  * feat(cli): add worker management commands with improved progress feedback

    Add comprehensive CLI commands for managing Temporal workers:
    - ff worker list - List workers with status and uptime
    - ff worker start <name> - Start a specific worker with optional rebuild
    - ff worker stop - Safely stop all workers without affecting core services

    Improvements:
    - Live progress display during worker startup with Rich Status spinner
    - Real-time elapsed time counter and container state updates
    - Health check status tracking (starting → unhealthy → healthy)
    - Helpful contextual hints at 10s, 30s, 60s intervals
    - Better timeout messages showing the last known state

    Worker management enhancements:
    - Use 'docker compose' (space) instead of 'docker-compose' (hyphen)
    - Stop workers individually with 'docker stop' to avoid stopping core services
    - Platform detection and Dockerfile selection (ARM64/AMD64)

    Documentation:
    - Updated docker-setup.md with CLI commands as the primary method
    - Created a comprehensive cli-reference.md with all commands and examples
    - Added worker management best practices

  * fix: MobSF scanner now properly parses the files dict structure

    MobSF returns 'files' as a dict (not a list): {"filename": "line_numbers"}.
    The parser was treating it as a list, causing zero findings to be
    extracted. Now it properly iterates over the dict and creates one
    finding per affected file with correct line numbers and metadata (CWE,
    OWASP, MASVS, CVSS). Fixed in both the code_analysis and behaviour
    sections.

  * chore: bump version to 0.7.3
  * docs: fix broken documentation links in cli-reference

  * chore: add worker startup documentation and clean up .gitignore

    - Add workflow-to-worker mapping tables across documentation
    - Update troubleshooting guide with a worker requirements section
    - Enhance getting started guide with worker examples
    - Add quick reference to docker setup guide
    - Add WEEK_SUMMARY*.md pattern to .gitignore

  * docs: update CHANGELOG with missing versions and recent changes

    - Add Unreleased section for post-v0.7.3 documentation updates
    - Add v0.7.2 entry with bug fixes and worker improvements
    - Document that v0.7.1 was re-tagged as v0.7.2
    - Fix v0.6.0 date to "Undocumented" (no tag exists)
    - Add version comparison links for easier navigation

  * chore: bump all package versions to 0.7.3 for consistency
  * Update GitHub link to fuzzforge_ai

  ---------

  Co-authored-by: Songbird99 <150154823+Songbird99@users.noreply.github.com>
  Co-authored-by: Songbird <Songbirdx99@gmail.com>
🛡️ FuzzForge CLI - Command-line interface for the FuzzForge security testing platform
A comprehensive CLI for managing security testing workflows, monitoring runs in real-time, and analyzing findings with beautiful terminal interfaces and persistent project management.
✨ Features
- 📁 Project Management - Initialize and manage FuzzForge projects with local databases
- 🔧 Workflow Management - Browse, configure, and run security testing workflows
- 🚀 Workflow Execution - Execute and manage security testing workflows
- 🔍 Findings Analysis - View, export, and analyze security findings in multiple formats
- 📊 Real-time Monitoring - Live dashboards for fuzzing statistics and crash reports
- ⚙️ Configuration - Flexible project and global configuration management
- 🎨 Rich UI - Beautiful tables, progress bars, and interactive prompts
- 💾 Persistent Storage - SQLite database for runs, findings, and crash data
- 🛡️ Error Handling - Comprehensive error handling with user-friendly messages
- 🔄 Network Resilience - Automatic retries and graceful degradation
🚀 Quick Start
Installation
Prerequisites
- Python 3.11 or higher
- uv package manager
Install FuzzForge CLI
# Clone the repository
git clone https://github.com/FuzzingLabs/fuzzforge_alpha.git
cd fuzzforge_alpha/cli
# Install globally with uv (recommended)
uv tool install .
# Alternative: Install in development mode
uv sync
uv add --editable ../sdk
uv tool install --editable .
# Verify installation
fuzzforge --help
Shell Completion (Optional)
# Install completion for your shell
fuzzforge --install-completion
Initialize Your First Project
# Create a new project directory
mkdir my-security-project
cd my-security-project
# Initialize FuzzForge project
ff init
# Check status
fuzzforge status
This creates a .fuzzforge/ directory with:
- SQLite database for persistent storage
- Configuration file (config.yaml)
- Project metadata
Run Your First Analysis
# List available workflows
fuzzforge workflows list
# Get workflow details
fuzzforge workflows info security_assessment
# Submit a workflow for analysis
fuzzforge workflow run security_assessment /path/to/your/code
# View findings when complete
fuzzforge finding <execution-id>
📚 Command Reference
Project Management
ff init
Initialize a new FuzzForge project in the current directory.
ff init --name "My Security Project" --api-url "http://localhost:8000"
Options:
- --name, -n - Project name (defaults to directory name)
- --api-url, -u - FuzzForge API URL (defaults to http://localhost:8000)
- --force, -f - Force initialization even if project exists
fuzzforge status
Show comprehensive project and API status information.
fuzzforge status
Displays:
- Project information and configuration
- Database statistics (runs, findings, crashes)
- API connectivity and available workflows
Workflow Management
fuzzforge workflows list
List all available security testing workflows.
fuzzforge workflows list
fuzzforge workflows info <workflow-name>
Show detailed information about a specific workflow.
fuzzforge workflows info security_assessment
Displays:
- Workflow metadata (version, author, description)
- Parameter schema and requirements
- Supported volume modes and features
fuzzforge workflows parameters <workflow-name>
Interactive parameter builder for workflows.
# Interactive mode
fuzzforge workflows parameters security_assessment
# Save parameters to file
fuzzforge workflows parameters security_assessment --output params.json
# Non-interactive mode (show schema only)
fuzzforge workflows parameters security_assessment --no-interactive
Workflow Execution
fuzzforge workflow run <workflow> <target-path>
Execute a security testing workflow with automatic file upload.
# Basic execution - CLI automatically detects local files and uploads them
fuzzforge workflow run security_assessment /path/to/code
# With parameters
fuzzforge workflow run security_assessment /path/to/binary \
--param timeout=3600 \
--param iterations=10000
# With parameter file
fuzzforge workflow run security_assessment /path/to/code \
--param-file my-params.json
# Wait for completion
fuzzforge workflow run security_assessment /path/to/code --wait
Automatic File Upload Behavior:
The CLI intelligently handles target files based on whether they exist locally:
- Local file/directory exists → Automatic upload to MinIO:
  - CLI creates a compressed tarball (.tar.gz) for directories
  - Uploads via HTTP to backend API
  - Backend stores in MinIO with a unique target_id
  - Worker downloads from MinIO when ready to analyze
  - ✅ Works from any machine (no shared filesystem needed)
- Path doesn't exist locally → Path-based submission (legacy):
  - Path is sent to backend as-is
  - Backend expects target to be accessible on its filesystem
  - ⚠️ Only works when CLI and backend share a filesystem
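The upload path above can be approximated in a few lines of Python. This is an illustrative sketch only: the `/targets` route and the raw-gzip payload are assumptions for the example, not the documented backend API.

```python
import tarfile
import tempfile
import urllib.request
from pathlib import Path


def make_tarball(directory: Path) -> Path:
    """Pack a directory into a .tar.gz, mirroring the CLI's compression step."""
    tmp = tempfile.NamedTemporaryFile(suffix=".tar.gz", delete=False)
    tmp.close()
    with tarfile.open(tmp.name, "w:gz") as tar:
        tar.add(directory, arcname=directory.name)
    return Path(tmp.name)


def upload_target(tarball: Path, api_url: str = "http://localhost:8000") -> bytes:
    """POST the tarball to the backend and return the raw response body."""
    request = urllib.request.Request(
        f"{api_url}/targets",  # hypothetical route -- the real endpoint may differ
        data=tarball.read_bytes(),
        headers={"Content-Type": "application/gzip"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.read()
```

The `arcname` argument keeps the archive rooted at the directory name (as in the `my-project.tar.gz` example output), rather than embedding the full absolute path.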
Example workflow:
$ ff workflow run security_assessment ./my-project
🔧 Getting workflow information for: security_assessment
📦 Detected local directory: ./my-project (21 files)
🗜️ Creating compressed tarball...
📤 Uploading to backend (0.01 MB)...
✅ Upload complete! Target ID: 548193a1-f73f-4ec1-8068-19ec2660b8e4
🎯 Executing workflow:
Workflow: security_assessment
Target: my-project.tar.gz (uploaded)
Volume Mode: ro
Status: 🔄 RUNNING
✅ Workflow started successfully!
Execution ID: security_assessment-52781925
Upload Details:
- Max file size: 10 GB (configurable on backend)
- Compression: Automatic for directories (reduces upload time)
- Storage: Files stored in MinIO (S3-compatible)
- Lifecycle: Automatic cleanup after 7 days
- Caching: Workers cache downloaded targets for faster repeated workflows
Options:
- --param, -p - Parameter in key=value format (can be used multiple times)
- --param-file, -f - JSON file containing parameters
- --volume-mode, -v - Volume mount mode: ro (read-only) or rw (read-write)
- --timeout, -t - Execution timeout in seconds
- --interactive/--no-interactive, -i/-n - Interactive parameter input
- --wait, -w - Wait for execution to complete
Worker Lifecycle Options (v0.7.0):
- --auto-start/--no-auto-start - Auto-start required worker (default: from config)
- --auto-stop/--no-auto-stop - Auto-stop worker after completion (default: from config)
Examples:
# Worker starts automatically (default behavior)
fuzzforge workflow run ossfuzz_campaign . project_name=zlib
# Disable auto-start (worker must be running already)
fuzzforge workflow run ossfuzz_campaign . --no-auto-start
# Auto-stop worker after completion
fuzzforge workflow run ossfuzz_campaign . --wait --auto-stop
fuzzforge workflow status [execution-id]
Check the status of a workflow execution.
# Check specific execution
fuzzforge workflow status abc123def456
# Check most recent execution
fuzzforge workflow status
fuzzforge workflow history
Show workflow execution history from local database.
# List all executions
fuzzforge workflow history
# Filter by workflow
fuzzforge workflow history --workflow security_assessment
# Filter by status
fuzzforge workflow history --status completed
# Limit results
fuzzforge workflow history --limit 10
fuzzforge workflow retry <execution-id>
Retry a workflow with the same or modified parameters.
# Retry with same parameters
fuzzforge workflow retry abc123def456
# Modify parameters interactively
fuzzforge workflow retry abc123def456 --modify-params
Findings Management
fuzzforge finding [execution-id]
View security findings for a specific execution.
# Display latest findings
fuzzforge finding
# Display specific execution findings
fuzzforge finding abc123def456
fuzzforge findings
Browse all security findings from local database.
# List all findings
fuzzforge findings
# Show findings history
fuzzforge findings history --limit 20
fuzzforge finding export [execution-id]
Export security findings in various formats.
# Export latest findings
fuzzforge finding export --format json
# Export specific execution findings
fuzzforge finding export abc123def456 --format sarif
# Export as CSV with output file
fuzzforge finding export abc123def456 --format csv --output report.csv
# Export as HTML report
fuzzforge finding export --format html --output report.html
Configuration Management
fuzzforge config show
Display current configuration settings.
# Show project configuration
fuzzforge config show
# Show global configuration
fuzzforge config show --global
fuzzforge config set <key> <value>
Set a configuration value.
# Project settings
fuzzforge config set project.api_url "http://api.fuzzforge.com"
fuzzforge config set project.default_timeout 7200
fuzzforge config set project.default_workflow "security_assessment"
# Retention settings
fuzzforge config set retention.max_runs 200
fuzzforge config set retention.keep_findings_days 120
# Preferences
fuzzforge config set preferences.auto_save_findings true
fuzzforge config set preferences.show_progress_bars false
# Global configuration
fuzzforge config set project.api_url "http://global.api.com" --global
fuzzforge config get <key>
Get a specific configuration value.
fuzzforge config get project.api_url
fuzzforge config get retention.max_runs --global
fuzzforge config reset
Reset configuration to defaults.
# Reset project configuration
fuzzforge config reset
# Reset global configuration
fuzzforge config reset --global
# Skip confirmation
fuzzforge config reset --force
fuzzforge config edit
Open configuration file in default editor.
# Edit project configuration
fuzzforge config edit
# Edit global configuration
fuzzforge config edit --global
🏗️ Project Structure
When you initialize a FuzzForge project, the following structure is created:
my-project/
├── .fuzzforge/
│ ├── config.yaml # Project configuration
│ └── findings.db # SQLite database
├── .gitignore # Updated with FuzzForge entries
└── README.md # Project README (if created)
Database Schema
The SQLite database stores:
- runs - Workflow run history and metadata
- findings - Security findings and SARIF data
- crashes - Crash reports and fuzzing data
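Since the database is plain SQLite, its contents can be inspected directly with Python's built-in sqlite3 module. A read-only sketch (the exact table schema is an internal detail of the CLI and may change between versions):

```python
import sqlite3


def list_tables(db_path: str = ".fuzzforge/findings.db") -> list[str]:
    """Return the table names in the project database, without modifying it."""
    # mode=ro opens the database read-only, so inspection can't corrupt it
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
        return [name for (name,) in rows]
    finally:
        conn.close()
```

Running it inside an initialized project should list tables such as runs, findings, and crashes alongside any internal bookkeeping tables.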
Configuration Format
Project configuration (.fuzzforge/config.yaml):
project:
name: "My Security Project"
api_url: "http://localhost:8000"
default_timeout: 3600
default_workflow: null
retention:
max_runs: 100
keep_findings_days: 90
preferences:
auto_save_findings: true
show_progress_bars: true
table_style: "rich"
color_output: true
workers:
auto_start_workers: true # Auto-start workers when needed
auto_stop_workers: false # Auto-stop workers after completion
worker_startup_timeout: 60 # Worker startup timeout (seconds)
docker_compose_file: null # Custom docker-compose.yml path
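The dotted keys accepted by `fuzzforge config set`/`get` (for example project.api_url) map directly onto this nested structure. Resolving one is a simple walk over nested dicts; the helper below is an illustrative sketch, not the CLI's actual implementation:

```python
def get_config_value(config: dict, dotted_key: str, default=None):
    """Resolve a dotted key like 'project.api_url' against nested config data."""
    node = config
    for part in dotted_key.split("."):
        # Bail out with the default as soon as the path doesn't exist
        if not isinstance(node, dict) or part not in node:
            return default
        node = node[part]
    return node


config = {
    "project": {"api_url": "http://localhost:8000", "default_timeout": 3600},
    "retention": {"max_runs": 100},
}
print(get_config_value(config, "project.api_url"))      # http://localhost:8000
print(get_config_value(config, "missing.key", "n/a"))   # n/a
```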
🔧 Advanced Usage
Parameter Handling
FuzzForge CLI supports flexible parameter input:
1. Command line parameters:
   ff workflow run workflow-name /path key1=value1 key2=value2
2. Parameter files:
   echo '{"timeout": 3600, "threads": 4}' > params.json
   ff workflow run workflow-name /path --param-file params.json
3. Interactive prompts:
   ff workflow run workflow-name /path --interactive
4. Parameter builder:
   ff workflows parameters workflow-name --output my-params.json
   ff workflow run workflow-name /path --param-file my-params.json
Environment Variables
Override configuration with environment variables:
export FUZZFORGE_API_URL="http://production.api.com"
export FUZZFORGE_TIMEOUT="7200"
Data Retention
Configure automatic cleanup of old data:
# Keep only 50 runs
fuzzforge config set retention.max_runs 50
# Keep findings for 30 days
fuzzforge config set retention.keep_findings_days 30
Export Formats
Support for multiple export formats:
- JSON - Simplified findings structure
- CSV - Tabular data for spreadsheets
- HTML - Interactive web report
- SARIF - Standard security analysis format
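Because SARIF is plain JSON, exported findings can also be post-processed outside the CLI. A minimal sketch that flattens SARIF 2.1.0 results into CSV rows, independent of the CLI's built-in CSV exporter:

```python
import csv
import io
import json


def sarif_to_csv(sarif_text: str) -> str:
    """Flatten SARIF results into CSV rows (rule, level, file, message)."""
    doc = json.loads(sarif_text)
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["rule_id", "level", "file", "message"])
    for run in doc.get("runs", []):
        for result in run.get("results", []):
            locations = result.get("locations", [])
            uri = ""
            if locations:
                # SARIF nests the file path several levels deep
                uri = (
                    locations[0]
                    .get("physicalLocation", {})
                    .get("artifactLocation", {})
                    .get("uri", "")
                )
            writer.writerow([
                result.get("ruleId", ""),
                result.get("level", ""),
                uri,
                result.get("message", {}).get("text", ""),
            ])
    return out.getvalue()
```

Usage: `csv_text = sarif_to_csv(Path("findings.sarif").read_text())`.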
🛠️ Development
Setup Development Environment
# Clone repository
git clone https://github.com/FuzzingLabs/fuzzforge_alpha.git
cd fuzzforge_alpha/cli
# Install in development mode
uv sync
uv add --editable ../sdk
# Install CLI in editable mode
uv tool install --editable .
Project Structure
cli/
├── src/fuzzforge_cli/
│ ├── __init__.py
│ ├── main.py # Main CLI app
│ ├── config.py # Configuration management
│ ├── database.py # Database operations
│ ├── exceptions.py # Error handling
│ ├── api_validation.py # API response validation
│ └── commands/ # Command implementations
│ ├── init.py # Project initialization
│ ├── workflows.py # Workflow management
│ ├── runs.py # Run management
│ ├── findings.py # Findings management
│ ├── config.py # Configuration commands
│ └── status.py # Status information
├── pyproject.toml # Project configuration
└── README.md # This file
Running Tests
# Run tests (when available)
uv run pytest
# Code formatting
uv run black src/
uv run isort src/
# Type checking
uv run mypy src/
⚠️ Troubleshooting
Common Issues
"No FuzzForge project found"
# Initialize a project first
ff init
API Connection Failed
# Check API URL configuration
fuzzforge config get project.api_url
# Test API connectivity
fuzzforge status
# Update API URL if needed
fuzzforge config set project.api_url "http://correct-url:8000"
Permission Errors
# Ensure proper permissions for project directory
chmod -R 755 .fuzzforge/
# Check file ownership
ls -la .fuzzforge/
Database Issues
# Check database file exists
ls -la .fuzzforge/findings.db
# Reinitialize if corrupted (will lose data)
rm .fuzzforge/findings.db
ff init --force
Environment Variables
Set these environment variables for debugging:
export FUZZFORGE_DEBUG=1 # Enable debug logging
export FUZZFORGE_API_URL="..." # Override API URL
export FUZZFORGE_TIMEOUT="30" # Override timeout
Getting Help
# General help
fuzzforge --help
# Command-specific help
ff workflows --help
ff workflow run --help
# Show version
fuzzforge --version
🏆 Example Workflow
Here's a complete example of analyzing a project:
# 1. Initialize project
mkdir my-security-audit
cd my-security-audit
ff init --name "Security Audit 2024"
# 2. Check available workflows
fuzzforge workflows list
# 3. Submit comprehensive security assessment
ff workflow run security_assessment /path/to/source/code --wait
# 4. View findings in table format
fuzzforge finding <run-id>
# 5. Export detailed report
fuzzforge finding export <run-id> --format html --output security_report.html
# 6. Check project statistics
fuzzforge status
📜 License
This project is licensed under the terms specified in the main FuzzForge repository.
🤝 Contributing
Contributions are welcome! Please see the main FuzzForge repository for contribution guidelines.
FuzzForge CLI - Making security testing workflows accessible and efficient from the command line.