From 943bc9a114e94c5069b094b64bb10e4f083d2e16 Mon Sep 17 00:00:00 2001
From: tduhamel42
Date: Thu, 6 Nov 2025 11:07:50 +0100
Subject: [PATCH] Release v0.7.3 - Android workflows, LiteLLM integration,
ARM64 support (#32)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* ci: add worker validation and Docker build checks
Add automated validation to prevent worker-related issues:
**Worker Validation Script:**
- New script: .github/scripts/validate-workers.sh
- Validates all workers in docker-compose.yml exist
- Checks required files: Dockerfile, requirements.txt, worker.py
- Verifies files are tracked by git (not gitignored)
- Detects gitignore issues that could hide workers
**CI Workflow Updates:**
- Added validate-workers job (runs on every PR)
- Added build-workers job (runs if workers/ modified)
- Uses Docker Buildx for caching
- Validates Docker images build successfully
- Updated test-summary to check validation results
**PR Template:**
- New pull request template with comprehensive checklist
- Specific section for worker-related changes
- Reminds contributors to validate worker files
- Includes documentation and changelog reminders
These checks would have caught the secrets worker gitignore issue.
Implements Phase 1 improvements from CI/CD quality assessment.
* fix: add dev branch to test workflow triggers
The test workflow was configured for 'develop', but the actual branch is named 'dev'.
This caused tests not to run on PRs to the dev branch.
Now tests will run on:
- PRs to: main, master, dev, develop
- Pushes to: main, master, dev, develop, feature/**
* fix: properly detect worker file changes in CI
The previous condition used an invalid GitHub context field.
Now uses git diff to properly detect changes to workers/ or docker-compose.yml.
Behavior:
- Job always runs the check step
- Detects if workers/ or docker-compose.yml modified
- Only builds Docker images if workers actually changed
- Shows clear skip message when no worker changes detected
* feat: Add Python SAST workflow with three security analysis tools
Implements Issue #5 - Python SAST workflow that combines:
- Dependency scanning (pip-audit) for CVE detection
- Security linting (Bandit) for vulnerability patterns
- Type checking (Mypy) for type safety issues
## Changes
**New Modules:**
- `DependencyScanner`: Scans Python dependencies for known CVEs using pip-audit
- `BanditAnalyzer`: Analyzes Python code for security issues using Bandit
- `MypyAnalyzer`: Checks Python code for type safety issues using Mypy
**New Workflow:**
- `python_sast`: Temporal workflow that orchestrates all three SAST tools
- Runs tools in parallel for fast feedback (3-5 min vs hours for fuzzing)
- Generates unified SARIF report with findings from all tools
- Supports configurable severity/confidence thresholds
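A minimal sketch of how that parallel fan-out could look inside a Temporal workflow; the activity names and result shapes here are illustrative assumptions, not the exact `python_sast` implementation:
```python
# Hedged sketch only: "run_bandit", "run_mypy", and "run_dependency_scan" are
# placeholder activity names, and the merged result shape is illustrative.
import asyncio
from datetime import timedelta

from temporalio import workflow


@workflow.defn
class PythonSastSketch:
    @workflow.run
    async def run(self, target_path: str) -> dict:
        opts = {"start_to_close_timeout": timedelta(minutes=10)}
        # Fan out the three analyzers concurrently instead of running them serially.
        bandit, mypy, deps = await asyncio.gather(
            workflow.execute_activity("run_bandit", target_path, **opts),
            workflow.execute_activity("run_mypy", target_path, **opts),
            workflow.execute_activity("run_dependency_scan", target_path, **opts),
        )
        # Each activity returns a list of findings; merge them for SARIF export.
        return {"findings": bandit + mypy + deps}
```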
**Updates:**
- Added SAST dependencies to Python worker (bandit, pip-audit, mypy)
- Updated module __init__.py files to export new analyzers
- Added type_errors.py test file to vulnerable_app for Mypy validation
## Testing
Workflow tested successfully on vulnerable_app:
- ✅ Bandit: Detected 9 security issues (command injection, unsafe functions)
- ✅ Mypy: Detected 5 type errors
- ✅ DependencyScanner: Ran successfully (no CVEs in test dependencies)
- ✅ SARIF export: Generated valid SARIF with 14 total findings
* fix: Remove unused imports to pass linter
* fix: resolve live monitoring bug, remove deprecated parameters, and auto-start Python worker
- Fix live monitoring style error by calling _live_monitor() helper directly
- Remove default_parameters duplication from 10 workflow metadata files
- Remove deprecated volume_mode parameter from 26 files across CLI, SDK, backend, and docs
- Configure Python worker to start automatically with docker compose up
- Clean up constants, validation, completion, and example files
Fixes #
- Live monitoring now works correctly with --live flag
- Workflow metadata follows JSON Schema standard
- Cleaner codebase without deprecated volume_mode
- Python worker (most commonly used) starts by default
* fix: resolve linter errors and optimize CI worker builds
- Remove unused Literal import from backend findings model
- Remove unnecessary f-string prefixes in CLI findings command
- Optimize GitHub Actions to build only modified workers
- Detect specific worker changes (python, secrets, rust, android, ossfuzz)
- Build only changed workers instead of all 5
- Build all workers if docker-compose.yml changes
- Significantly reduces CI build time
* feat: Add Android static analysis workflow with Jadx, OpenGrep, and MobSF
Comprehensive Android security testing workflow converted from Prefect to Temporal architecture:
Modules (3):
- JadxDecompiler: APK to Java source code decompilation
- OpenGrepAndroid: Static analysis with Android-specific security rules
- MobSFScanner: Comprehensive mobile security framework integration
Custom Rules (13):
- clipboard-sensitive-data, hardcoded-secrets, insecure-data-storage
- insecure-deeplink, insecure-logging, intent-redirection
- sensitive_data_sharedPreferences, sqlite-injection
- vulnerable-activity, vulnerable-content-provider, vulnerable-service
- webview-javascript-enabled, webview-load-arbitrary-url
Workflow:
- 6-phase Temporal workflow: download → Jadx → OpenGrep → MobSF → SARIF → upload
- 4 activities: decompile_with_jadx, scan_with_opengrep, scan_with_mobsf, generate_android_sarif
- SARIF output combining findings from all security tools
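A hedged sketch of that activity chain, reusing the activity names listed above; argument and return shapes (and timeouts) are illustrative:
```python
# Sketch only: real activity signatures and timeouts may differ.
from datetime import timedelta

from temporalio import workflow


@workflow.defn
class AndroidStaticAnalysisSketch:
    @workflow.run
    async def run(self, apk_path: str) -> dict:
        opts = {"start_to_close_timeout": timedelta(minutes=30)}
        # Decompile the APK to Java sources with Jadx.
        sources = await workflow.execute_activity("decompile_with_jadx", apk_path, **opts)
        # Run the two scanners over the decompiled sources / the APK itself.
        opengrep = await workflow.execute_activity("scan_with_opengrep", sources, **opts)
        mobsf = await workflow.execute_activity("scan_with_mobsf", apk_path, **opts)
        # Merge findings from both tools into a single SARIF report.
        return await workflow.execute_activity(
            "generate_android_sarif", args=[opengrep, mobsf], **opts
        )
```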
Docker Worker:
- ARM64 Mac compatibility via amd64 platform emulation
- Pre-installed: Android SDK, Jadx 1.4.7, OpenGrep 1.45.0, MobSF 3.9.7
- MobSF runs as background service with API key auto-generation
- Added aiohttp for async HTTP communication
Test APKs:
- BeetleBug.apk and shopnest.apk for workflow validation
* fix(android): correct activity names and MobSF API key generation
- Fix activity names in workflow.py (get_target, upload_results, cleanup_cache)
- Fix MobSF API key generation in Dockerfile startup script (cut delimiter)
- Update activity parameter signatures to match actual implementations
- Workflow now executes successfully with Jadx and OpenGrep
* feat: add platform-aware worker architecture with ARM64 support
Implement platform-specific Dockerfile selection and graceful tool degradation to support both x86_64 and ARM64 (Apple Silicon) platforms.
**Backend Changes:**
- Add system info API endpoint (/system/info) exposing host filesystem paths
- Add FUZZFORGE_HOST_ROOT environment variable to backend service
- Add graceful degradation in MobSF activity for ARM64 platforms
**CLI Changes:**
- Implement multi-strategy path resolution (backend API, .fuzzforge marker, env var)
- Add platform detection (linux/amd64 vs linux/arm64)
- Add worker metadata.yaml reading for platform capabilities
- Auto-select appropriate Dockerfile based on detected platform
- Pass platform-specific env vars to docker-compose
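A minimal sketch of the platform detection and Dockerfile selection, assuming a simple `platforms:` layout in metadata.yaml (the real CLI schema and fallbacks may differ):
```python
# Sketch under assumptions: the metadata.yaml layout and fallback naming are illustrative.
import platform
from pathlib import Path

import yaml


def select_dockerfile(worker_dir: Path) -> str:
    """Pick a platform-specific Dockerfile for a worker directory."""
    arch = platform.machine().lower()
    plat = "arm64" if arch in ("arm64", "aarch64") else "amd64"

    meta_file = worker_dir / "metadata.yaml"
    if meta_file.exists():
        meta = yaml.safe_load(meta_file.read_text()) or {}
        # Hypothetical layout: platforms: {amd64: {dockerfile: ...}, arm64: {...}}
        entry = meta.get("platforms", {}).get(plat, {})
        if "dockerfile" in entry:
            return entry["dockerfile"]

    # Fall back to the Dockerfile.<platform> naming convention.
    return f"Dockerfile.{plat}"
```
The CLI would then export the result (e.g. `ANDROID_DOCKERFILE=Dockerfile.arm64`) before invoking docker compose.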
**Worker Changes:**
- Create workers/android/metadata.yaml defining platform capabilities
- Rename Dockerfile -> Dockerfile.amd64 (full toolchain with MobSF)
- Create Dockerfile.arm64 (excludes MobSF due to Rosetta 2 incompatibility)
- Update docker-compose.yml to use ${ANDROID_DOCKERFILE} variable
**Workflow Changes:**
- Handle MobSF "skipped" status gracefully in workflow
- Log clear warnings when tools are unavailable on platform
**Key Features:**
- Automatic platform detection and Dockerfile selection
- Graceful degradation when tools unavailable (MobSF on ARM64)
- Works from any directory (backend API provides paths)
- Manual override via environment variables
- Clear user feedback about platform and selected Dockerfile
**Benefits:**
- Android workflow now works on Apple Silicon Macs
- No code changes needed for other workflows
- Convention established for future platform-specific workers
Closes: MobSF Rosetta 2 incompatibility issue
Implements: Platform-aware worker architecture (Option B)
* fix: make MobSFScanner import conditional for ARM64 compatibility
- Add try-except block to conditionally import MobSFScanner in modules/android/__init__.py
- Allows Android worker to start on ARM64 without MobSF dependencies (aiohttp)
- MobSF activity gracefully skips on ARM64 with clear warning message
- Remove workflow path detection logic (not needed - workflows receive directories)
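The conditional import pattern, roughly as described above (a sketch of `modules/android/__init__.py`):
```python
# Sketch of backend/toolbox/modules/android/__init__.py
from .jadx_decompiler import JadxDecompiler
from .opengrep_android import OpenGrepAndroid

try:
    # MobSF support needs aiohttp, which is only installed in the amd64 image.
    from .mobsf_scanner import MobSFScanner
except ImportError:
    MobSFScanner = None  # ARM64 image: the MobSF activity skips with a warning

__all__ = ["JadxDecompiler", "OpenGrepAndroid", "MobSFScanner"]
```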
Platform-aware architecture fully functional on ARM64:
- CLI detects ARM64 and selects Dockerfile.arm64 automatically
- Worker builds and runs without MobSF on ARM64
- Jadx successfully decompiles APKs (4145 files from BeetleBug.apk)
- OpenGrep finds security vulnerabilities (8 issues found)
- MobSF gracefully skips with warning on ARM64
- Graceful degradation working as designed
Tested with:
ff workflow run android_static_analysis test_projects/android_test/ \
--wait --no-interactive apk_path=BeetleBug.apk decompile_apk=true
Results: 8 security findings (1 ERROR, 7 WARNINGS)
* docs: update CHANGELOG with Android workflow and ARM64 support
Added [Unreleased] section documenting:
- Android Static Analysis Workflow (Jadx, OpenGrep, MobSF)
- Platform-Aware Worker Architecture with ARM64 support
- Python SAST Workflow
- CI/CD improvements and worker validation
- CLI enhancements
- Bug fixes and technical changes
Fixed date typo: 2025-01-16 → 2025-10-16
* fix: resolve linter errors in Android modules
- Remove unused imports from mobsf_scanner.py (asyncio, hashlib, json, Optional)
- Remove unused variables from opengrep_android.py (start_col, end_col)
- Remove duplicate Path import from workflow.py
* ci: support multi-platform Dockerfiles in worker validation
Updated worker validation script to accept both:
- Single Dockerfile pattern (existing workers)
- Multi-platform Dockerfile pattern (Dockerfile.amd64, Dockerfile.arm64, etc.)
This enables platform-aware worker architectures like the Android worker
which uses different Dockerfiles for x86_64 and ARM64 platforms.
* Feature/litellm proxy (#27)
* feat: seed governance config and responses routing
* Add env-configurable timeout for proxy providers
* Integrate LiteLLM OTEL collector and update docs
* Make .env.litellm optional for LiteLLM proxy
* Add LiteLLM proxy integration with model-agnostic virtual keys
Changes:
- Bootstrap generates 3 virtual keys with individual budgets (CLI: $100, Task-Agent: $25, Cognee: $50)
- Task-agent loads config at runtime via entrypoint script to wait for bootstrap completion
- All keys are model-agnostic by default (no LITELLM_DEFAULT_MODELS restrictions)
- Bootstrap handles database/env mismatch after docker prune by deleting stale aliases
- CLI and Cognee configured to use LiteLLM proxy with virtual keys
- Added comprehensive documentation in volumes/env/README.md
Technical details:
- task-agent entrypoint waits for keys in .env file before starting uvicorn
- Bootstrap creates/updates TASK_AGENT_API_KEY, COGNEE_API_KEY, and OPENAI_API_KEY
- Removed hardcoded API keys from docker-compose.yml
- All services route through http://localhost:10999 proxy
* Fix CLI not loading virtual keys from global .env
Project .env files with empty OPENAI_API_KEY values were overriding
the global virtual keys. Updated _load_env_file_if_exists to only
override with non-empty values.
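A hedged sketch of the fix; the real `_load_env_file_if_exists` parsing is likely more involved:
```python
import os
from pathlib import Path


def _load_env_file_if_exists(path: Path) -> None:
    """Load KEY=VALUE pairs, but never clobber existing values with empty ones."""
    if not path.is_file():
        return
    for raw in path.read_text().splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, value = (part.strip() for part in line.split("=", 1))
        # An empty OPENAI_API_KEY= in a project .env must not override the
        # global virtual key that was loaded earlier.
        if value:
            os.environ[key] = value
```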
* Fix agent executor not passing API key to LiteLLM
The agent was initializing LiteLlm without api_key or api_base,
causing authentication errors when using the LiteLLM proxy. Now
reads from OPENAI_API_KEY/LLM_API_KEY and LLM_ENDPOINT environment
variables and passes them to the LiteLlm constructor.
* Auto-populate project .env with virtual key from global config
When running 'ff init', the command now checks for a global
volumes/env/.env file and automatically uses the OPENAI_API_KEY
virtual key if found. This ensures projects work with LiteLLM
proxy out of the box without manual key configuration.
* docs: Update README with LiteLLM configuration instructions
Add a note about LITELLM_GEMINI_API_KEY configuration and clarify that the OPENAI_API_KEY default value should not be changed, as it's used for the LLM proxy.
* Refactor workflow parameters to use JSON Schema defaults
Consolidates parameter defaults into JSON Schema format, removing the separate default_parameters field. Adds extract_defaults_from_json_schema() helper to extract defaults from the standard schema structure. Updates LiteLLM proxy config to use LITELLM_OPENAI_API_KEY environment variable.
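The helper is roughly equivalent to walking the schema's `properties` block; a sketch (the real helper may handle nested cases):
```python
from typing import Any


def extract_defaults_from_json_schema(schema: dict[str, Any]) -> dict[str, Any]:
    """Collect per-parameter defaults from a JSON Schema object."""
    defaults: dict[str, Any] = {}
    for name, spec in schema.get("properties", {}).items():
        if isinstance(spec, dict) and "default" in spec:
            defaults[name] = spec["default"]
    return defaults
```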
* Remove .env.example from task_agent
* Fix MDX syntax error in llm-proxy.md
* fix: apply default parameters from metadata.yaml automatically
Fixed TemporalManager.run_workflow() to correctly apply default parameter
values from workflow metadata.yaml files when parameters are not provided
by the caller.
Previous behavior:
- When workflow_params was empty {}, the condition
`if workflow_params and 'parameters' in metadata` would short-circuit to false
- Parameters would not be extracted from schema, resulting in workflows
receiving only target_id with no other parameters
New behavior:
- Removed the `workflow_params and` requirement from the condition
- Now explicitly checks for defaults in parameter spec
- Applies defaults from metadata.yaml automatically when param not provided
- Workflows receive all parameters with proper fallback:
provided value > metadata default > None
This makes metadata.yaml the single source of truth for parameter defaults,
removing the need for workflows to implement defensive default handling.
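An illustrative sketch of the corrected fallback, assuming the JSON Schema layout described above (not the literal run_workflow() code):
```python
def resolve_parameters(workflow_params: dict, metadata: dict) -> dict:
    """Apply provided value > metadata default > None for each declared parameter."""
    properties = metadata.get("parameters", {}).get("properties", {})
    resolved = {}
    for name, spec in properties.items():
        if name in workflow_params:        # explicit value from the caller wins
            resolved[name] = workflow_params[name]
        elif "default" in spec:            # otherwise use the metadata.yaml default
            resolved[name] = spec["default"]
        else:
            resolved[name] = None          # declared but neither provided nor defaulted
    return resolved
```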
Affected workflows:
- llm_secret_detection (was failing with KeyError)
- All other workflows now benefit from automatic default application
Co-authored-by: tduhamel42
* fix: add default values to llm_analysis workflow parameters
Resolves validation error where agent_url was None when not explicitly provided. The TemporalManager applies defaults from metadata.yaml, not from module input schemas, so all parameters need defaults in the workflow metadata.
Changes:
- Add default agent_url, llm_model (gpt-5-mini), llm_provider (openai)
- Expand file_patterns to 45 comprehensive patterns covering code, configs, secrets, and Docker files
- Increase default limits: max_files (10), max_file_size (100KB), timeout (90s)
* refactor: replace .env.example with .env.template in documentation
- Remove volumes/env/.env.example file
- Update all documentation references to use .env.template instead
- Update bootstrap script error message
- Update .gitignore comment
* feat(cli): add worker management commands with improved progress feedback
Add comprehensive CLI commands for managing Temporal workers:
- ff worker list - List workers with status and uptime
- ff worker start - Start specific worker with optional rebuild
- ff worker stop - Safely stop all workers without affecting core services
Improvements:
- Live progress display during worker startup with Rich Status spinner
- Real-time elapsed time counter and container state updates
- Health check status tracking (starting → unhealthy → healthy)
- Helpful contextual hints at 10s, 30s, 60s intervals
- Better timeout messages showing last known state
Worker management enhancements:
- Use 'docker compose' (space) instead of 'docker-compose' (hyphen)
- Stop workers individually with 'docker stop' to avoid stopping core services
- Platform detection and Dockerfile selection (ARM64/AMD64)
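A sketch of how `ff worker start` might drive Docker Compose; the command structure is illustrative, not the actual worker_manager implementation:
```python
import os
import platform
import subprocess


def start_worker(name: str, rebuild: bool = False) -> None:
    service = f"worker-{name}"
    env = os.environ.copy()

    # Platform-aware Dockerfile selection for the android worker (see the earlier sketch).
    if name == "android":
        arch = platform.machine().lower()
        env["ANDROID_DOCKERFILE"] = (
            "Dockerfile.arm64" if arch in ("arm64", "aarch64") else "Dockerfile.amd64"
        )

    if rebuild:
        subprocess.run(["docker", "compose", "build", service], check=True, env=env)
    # 'docker compose' (the v2 plugin) rather than the legacy 'docker-compose' binary.
    subprocess.run(["docker", "compose", "up", "-d", service], check=True, env=env)
```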
Documentation:
- Updated docker-setup.md with CLI commands as primary method
- Created comprehensive cli-reference.md with all commands and examples
- Added worker management best practices
* fix: MobSF scanner now properly parses files dict structure
MobSF returns 'files' as a dict (not list):
{"filename": "line_numbers"}
The parser was treating it as a list, causing zero findings
to be extracted. Now properly iterates over the dict and
creates one finding per affected file with correct line numbers
and metadata (CWE, OWASP, MASVS, CVSS).
Fixed in both code_analysis and behaviour sections.
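Sketch of the corrected iteration; the metadata field names are assumptions based on MobSF's report format:
```python
def findings_from_mobsf_rule(rule_id: str, rule: dict) -> list[dict]:
    """Emit one finding per affected file from a MobSF rule entry."""
    metadata = rule.get("metadata", {})
    findings = []
    # 'files' is a dict of {filename: line_numbers}, not a list.
    for filename, line_numbers in rule.get("files", {}).items():
        findings.append({
            "rule_id": rule_id,
            "file": filename,
            "lines": line_numbers,
            "cwe": metadata.get("cwe"),
            "owasp": metadata.get("owasp-mobile"),
            "masvs": metadata.get("masvs"),
            "cvss": metadata.get("cvss"),
        })
    return findings
```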
* chore: bump version to 0.7.3
* docs: fix broken documentation links in cli-reference
* chore: add worker startup documentation and cleanup .gitignore
- Add workflow-to-worker mapping tables across documentation
- Update troubleshooting guide with worker requirements section
- Enhance getting started guide with worker examples
- Add quick reference to docker setup guide
- Add WEEK_SUMMARY*.md pattern to .gitignore
* docs: update CHANGELOG with missing versions and recent changes
- Add Unreleased section for post-v0.7.3 documentation updates
- Add v0.7.2 entry with bug fixes and worker improvements
- Document that v0.7.1 was re-tagged as v0.7.2
- Fix v0.6.0 date to "Undocumented" (no tag exists)
- Add version comparison links for easier navigation
* chore: bump all package versions to 0.7.3 for consistency
* Update GitHub link to fuzzforge_ai
---------
Co-authored-by: Songbird99 <150154823+Songbird99@users.noreply.github.com>
Co-authored-by: Songbird
---
.github/pull_request_template.md | 79 ++
.github/scripts/validate-workers.sh | 127 ++++
.github/workflows/test.yml | 99 ++-
.gitignore | 10 +-
CHANGELOG.md | 125 +++-
README.md | 18 +-
ai/agents/task_agent/.env.example | 10 -
ai/agents/task_agent/Dockerfile | 5 +
ai/agents/task_agent/README.md | 32 +-
ai/agents/task_agent/docker-entrypoint.sh | 31 +
ai/agents/task_agent/litellm_agent/config.py | 19 +-
ai/agents/task_agent/litellm_agent/state.py | 170 ++++-
ai/proxy/README.md | 5 +
ai/pyproject.toml | 2 +-
ai/src/fuzzforge_ai/__init__.py | 2 +-
ai/src/fuzzforge_ai/agent_executor.py | 27 +-
ai/src/fuzzforge_ai/cognee_service.py | 40 +-
backend/mcp-config.json | 1 -
backend/pyproject.toml | 2 +-
backend/src/api/system.py | 47 ++
backend/src/api/workflows.py | 49 +-
backend/src/main.py | 19 +-
backend/src/models/findings.py | 6 +-
backend/src/temporal/manager.py | 20 +-
backend/toolbox/modules/analyzer/__init__.py | 4 +-
.../modules/analyzer/bandit_analyzer.py | 328 +++++++++
.../toolbox/modules/analyzer/mypy_analyzer.py | 269 +++++++
backend/toolbox/modules/android/__init__.py | 31 +
.../clipboard-sensitive-data.yaml | 15 +
.../custom_rules/hardcoded-secrets.yaml | 23 +
.../custom_rules/insecure-data-storage.yaml | 18 +
.../custom_rules/insecure-deeplink.yaml | 16 +
.../custom_rules/insecure-logging.yaml | 21 +
.../custom_rules/intent-redirection.yaml | 15 +
.../sensitive_data_sharedPreferences.yaml | 18 +
.../custom_rules/sqlite-injection.yaml | 21 +
.../custom_rules/vulnerable-activity.yaml | 16 +
.../vulnerable-content-provider.yaml | 16 +
.../custom_rules/vulnerable-service.yaml | 16 +
.../webview-javascript-enabled.yaml | 16 +
.../webview-load-arbitrary-url.yaml | 16 +
.../modules/android/jadx_decompiler.py | 270 +++++++
.../toolbox/modules/android/mobsf_scanner.py | 437 +++++++++++
.../modules/android/opengrep_android.py | 440 +++++++++++
backend/toolbox/modules/scanner/__init__.py | 3 +-
.../modules/scanner/dependency_scanner.py | 302 ++++++++
.../secret_detection/llm_secret_detector.py | 19 +-
.../android_static_analysis/__init__.py | 35 +
.../android_static_analysis/activities.py | 213 ++++++
.../android_static_analysis/metadata.yaml | 172 +++++
.../android_static_analysis/workflow.py | 289 ++++++++
.../workflows/atheris_fuzzing/metadata.yaml | 5 -
.../workflows/cargo_fuzzing/metadata.yaml | 6 -
.../gitleaks_detection/metadata.yaml | 8 -
.../workflows/llm_analysis/metadata.yaml | 60 +-
.../llm_secret_detection/metadata.yaml | 44 +-
.../llm_secret_detection/workflow.py | 3 +
.../workflows/ossfuzz_campaign/metadata.yaml | 7 -
.../toolbox/workflows/python_sast/__init__.py | 10 +
.../workflows/python_sast/activities.py | 191 +++++
.../workflows/python_sast/metadata.yaml | 110 +++
.../toolbox/workflows/python_sast/workflow.py | 265 +++++++
.../security_assessment/metadata.yaml | 5 -
.../trufflehog_detection/metadata.yaml | 7 -
cli/pyproject.toml | 2 +-
cli/src/fuzzforge_cli/__init__.py | 2 +-
cli/src/fuzzforge_cli/commands/__init__.py | 3 +
cli/src/fuzzforge_cli/commands/findings.py | 8 +-
cli/src/fuzzforge_cli/commands/init.py | 39 +-
cli/src/fuzzforge_cli/commands/worker.py | 225 ++++++
.../fuzzforge_cli/commands/workflow_exec.py | 20 +-
cli/src/fuzzforge_cli/completion.py | 12 -
cli/src/fuzzforge_cli/config.py | 95 ++-
cli/src/fuzzforge_cli/constants.py | 4 -
cli/src/fuzzforge_cli/fuzzy.py | 2 -
cli/src/fuzzforge_cli/main.py | 15 +-
cli/src/fuzzforge_cli/validation.py | 11 +-
cli/src/fuzzforge_cli/worker_manager.py | 470 ++++++++++--
docker-compose.yml | 115 ++-
docker/scripts/bootstrap_llm_proxy.py | 636 ++++++++++++++++
...5-01-16-v0.7.0-temporal-workers-release.md | 2 +-
docs/docs/how-to/docker-setup.md | 56 +-
docs/docs/how-to/litellm-hot-swap.md | 179 +++++
docs/docs/how-to/llm-proxy.md | 194 +++++
docs/docs/how-to/troubleshooting.md | 42 +-
docs/docs/reference/cli-reference.md | 616 ++++++++++++++++
docs/docs/tutorial/getting-started.md | 38 +-
docs/docusaurus.config.ts | 4 +-
docs/index.md | 2 +-
pyproject.toml | 2 +-
sdk/examples/basic_workflow.py | 3 -
sdk/examples/batch_analysis.py | 5 -
sdk/examples/fuzzing_monitor.py | 1 -
sdk/examples/save_findings_demo.py | 1 -
sdk/pyproject.toml | 2 +-
sdk/src/fuzzforge_sdk/__init__.py | 2 +-
sdk/src/fuzzforge_sdk/client.py | 2 -
sdk/src/fuzzforge_sdk/testing.py | 2 -
src/fuzzforge/__init__.py | 2 +-
test_projects/android_test/BeetleBug.apk | Bin 0 -> 9429128 bytes
test_projects/android_test/shopnest.apk | Bin 0 -> 12329881 bytes
.../vulnerable_app/findings-security.json | 695 ++++++++++++++++++
test_projects/vulnerable_app/type_errors.py | 62 ++
volumes/env/.env.example | 17 -
volumes/env/.env.template | 65 ++
volumes/env/README.md | 95 ++-
volumes/litellm/proxy_config.yaml | 26 +
volumes/otel/collector-config.yaml | 25 +
workers/android/Dockerfile.amd64 | 148 ++++
.../android/{Dockerfile => Dockerfile.arm64} | 30 +-
workers/android/metadata.yaml | 42 ++
workers/python/requirements.txt | 5 +
112 files changed, 8358 insertions(+), 371 deletions(-)
create mode 100644 .github/pull_request_template.md
create mode 100755 .github/scripts/validate-workers.sh
delete mode 100644 ai/agents/task_agent/.env.example
create mode 100644 ai/agents/task_agent/docker-entrypoint.sh
create mode 100644 ai/proxy/README.md
create mode 100644 backend/src/api/system.py
create mode 100644 backend/toolbox/modules/analyzer/bandit_analyzer.py
create mode 100644 backend/toolbox/modules/analyzer/mypy_analyzer.py
create mode 100644 backend/toolbox/modules/android/__init__.py
create mode 100644 backend/toolbox/modules/android/custom_rules/clipboard-sensitive-data.yaml
create mode 100644 backend/toolbox/modules/android/custom_rules/hardcoded-secrets.yaml
create mode 100644 backend/toolbox/modules/android/custom_rules/insecure-data-storage.yaml
create mode 100644 backend/toolbox/modules/android/custom_rules/insecure-deeplink.yaml
create mode 100644 backend/toolbox/modules/android/custom_rules/insecure-logging.yaml
create mode 100644 backend/toolbox/modules/android/custom_rules/intent-redirection.yaml
create mode 100644 backend/toolbox/modules/android/custom_rules/sensitive_data_sharedPreferences.yaml
create mode 100644 backend/toolbox/modules/android/custom_rules/sqlite-injection.yaml
create mode 100644 backend/toolbox/modules/android/custom_rules/vulnerable-activity.yaml
create mode 100644 backend/toolbox/modules/android/custom_rules/vulnerable-content-provider.yaml
create mode 100644 backend/toolbox/modules/android/custom_rules/vulnerable-service.yaml
create mode 100644 backend/toolbox/modules/android/custom_rules/webview-javascript-enabled.yaml
create mode 100644 backend/toolbox/modules/android/custom_rules/webview-load-arbitrary-url.yaml
create mode 100644 backend/toolbox/modules/android/jadx_decompiler.py
create mode 100644 backend/toolbox/modules/android/mobsf_scanner.py
create mode 100644 backend/toolbox/modules/android/opengrep_android.py
create mode 100644 backend/toolbox/modules/scanner/dependency_scanner.py
create mode 100644 backend/toolbox/workflows/android_static_analysis/__init__.py
create mode 100644 backend/toolbox/workflows/android_static_analysis/activities.py
create mode 100644 backend/toolbox/workflows/android_static_analysis/metadata.yaml
create mode 100644 backend/toolbox/workflows/android_static_analysis/workflow.py
create mode 100644 backend/toolbox/workflows/python_sast/__init__.py
create mode 100644 backend/toolbox/workflows/python_sast/activities.py
create mode 100644 backend/toolbox/workflows/python_sast/metadata.yaml
create mode 100644 backend/toolbox/workflows/python_sast/workflow.py
create mode 100644 cli/src/fuzzforge_cli/commands/worker.py
create mode 100644 docker/scripts/bootstrap_llm_proxy.py
create mode 100644 docs/docs/how-to/litellm-hot-swap.md
create mode 100644 docs/docs/how-to/llm-proxy.md
create mode 100644 docs/docs/reference/cli-reference.md
create mode 100644 test_projects/android_test/BeetleBug.apk
create mode 100644 test_projects/android_test/shopnest.apk
create mode 100644 test_projects/vulnerable_app/findings-security.json
create mode 100644 test_projects/vulnerable_app/type_errors.py
delete mode 100644 volumes/env/.env.example
create mode 100644 volumes/env/.env.template
create mode 100644 volumes/litellm/proxy_config.yaml
create mode 100644 volumes/otel/collector-config.yaml
create mode 100644 workers/android/Dockerfile.amd64
rename workers/android/{Dockerfile => Dockerfile.arm64} (73%)
create mode 100644 workers/android/metadata.yaml
diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
new file mode 100644
index 0000000..04ece70
--- /dev/null
+++ b/.github/pull_request_template.md
@@ -0,0 +1,79 @@
+## Description
+
+
+
+## Type of Change
+
+
+
+- [ ] 🐛 Bug fix (non-breaking change which fixes an issue)
+- [ ] ✨ New feature (non-breaking change which adds functionality)
+- [ ] 💥 Breaking change (fix or feature that would cause existing functionality to not work as expected)
+- [ ] 📝 Documentation update
+- [ ] 🔧 Configuration change
+- [ ] ♻️ Refactoring (no functional changes)
+- [ ] 🎨 Style/formatting changes
+- [ ] ✅ Test additions or updates
+
+## Related Issues
+
+
+
+
+## Changes Made
+
+
+
+-
+-
+-
+
+## Testing
+
+
+
+### Tested Locally
+
+- [ ] All tests pass (`pytest`, `uv build`, etc.)
+- [ ] Linting passes (`ruff check`)
+- [ ] Code builds successfully
+
+### Worker Changes (if applicable)
+
+- [ ] Docker images build successfully (`docker compose build`)
+- [ ] Worker containers start correctly
+- [ ] Tested with actual workflow execution
+
+### Documentation
+
+- [ ] Documentation updated (if needed)
+- [ ] README updated (if needed)
+- [ ] CHANGELOG.md updated (if user-facing changes)
+
+## Pre-Merge Checklist
+
+
+
+- [ ] My code follows the project's coding standards
+- [ ] I have performed a self-review of my code
+- [ ] I have commented my code, particularly in hard-to-understand areas
+- [ ] I have made corresponding changes to the documentation
+- [ ] My changes generate no new warnings
+- [ ] I have added tests that prove my fix is effective or that my feature works
+- [ ] New and existing unit tests pass locally with my changes
+- [ ] Any dependent changes have been merged and published
+
+### Worker-Specific Checks (if workers/ modified)
+
+- [ ] All worker files properly tracked by git (not gitignored)
+- [ ] Worker validation script passes (`.github/scripts/validate-workers.sh`)
+- [ ] Docker images build without errors
+- [ ] Worker configuration updated in `docker-compose.yml` (if needed)
+
+## Screenshots (if applicable)
+
+
+
+## Additional Notes
+
+
diff --git a/.github/scripts/validate-workers.sh b/.github/scripts/validate-workers.sh
new file mode 100755
index 0000000..6b2c5f6
--- /dev/null
+++ b/.github/scripts/validate-workers.sh
@@ -0,0 +1,127 @@
+#!/bin/bash
+# Worker Validation Script
+# Ensures all workers defined in docker-compose.yml exist in the repository
+# and are properly tracked by git.
+
+set -e
+
+echo "🔍 Validating worker completeness..."
+
+# Colors for output
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+NC='\033[0m' # No Color
+
+ERRORS=0
+WARNINGS=0
+
+# Extract worker service names from docker-compose.yml
+echo ""
+echo "📋 Checking workers defined in docker-compose.yml..."
+WORKERS=$(grep -E "^\s+worker-" docker-compose.yml | grep -v "#" | cut -d: -f1 | tr -d ' ' | sort -u)
+
+if [ -z "$WORKERS" ]; then
+ echo -e "${RED}❌ No workers found in docker-compose.yml${NC}"
+ exit 1
+fi
+
+echo "Found workers:"
+for worker in $WORKERS; do
+ echo " - $worker"
+done
+
+# Check each worker
+echo ""
+echo "🔎 Validating worker files..."
+for worker in $WORKERS; do
+ WORKER_DIR="workers/${worker#worker-}"
+
+ echo ""
+ echo "Checking $worker ($WORKER_DIR)..."
+
+ # Check if directory exists
+ if [ ! -d "$WORKER_DIR" ]; then
+ echo -e "${RED} ❌ Directory not found: $WORKER_DIR${NC}"
+ ERRORS=$((ERRORS + 1))
+ continue
+ fi
+
+ # Check Dockerfile (single file or multi-platform pattern)
+ if [ -f "$WORKER_DIR/Dockerfile" ]; then
+ # Single Dockerfile
+ if ! git ls-files --error-unmatch "$WORKER_DIR/Dockerfile" &> /dev/null; then
+ echo -e "${RED} ❌ File not tracked by git: $WORKER_DIR/Dockerfile${NC}"
+ echo -e "${YELLOW} Check .gitignore patterns!${NC}"
+ ERRORS=$((ERRORS + 1))
+ else
+ echo -e "${GREEN} ✓ Dockerfile (tracked)${NC}"
+ fi
+ elif compgen -G "$WORKER_DIR/Dockerfile.*" > /dev/null; then
+ # Multi-platform Dockerfiles (e.g., Dockerfile.amd64, Dockerfile.arm64)
+ PLATFORM_DOCKERFILES=$(ls "$WORKER_DIR"/Dockerfile.* 2>/dev/null)
+ DOCKERFILE_FOUND=false
+ for dockerfile in $PLATFORM_DOCKERFILES; do
+ if git ls-files --error-unmatch "$dockerfile" &> /dev/null; then
+ echo -e "${GREEN} ✓ $(basename "$dockerfile") (tracked)${NC}"
+ DOCKERFILE_FOUND=true
+ else
+ echo -e "${RED} ❌ File not tracked by git: $dockerfile${NC}"
+ ERRORS=$((ERRORS + 1))
+ fi
+ done
+ if [ "$DOCKERFILE_FOUND" = false ]; then
+ echo -e "${RED} ❌ No platform-specific Dockerfiles found${NC}"
+ ERRORS=$((ERRORS + 1))
+ fi
+ else
+ echo -e "${RED} ❌ Missing Dockerfile or Dockerfile.* files${NC}"
+ ERRORS=$((ERRORS + 1))
+ fi
+
+ # Check other required files
+ REQUIRED_FILES=("requirements.txt" "worker.py")
+ for file in "${REQUIRED_FILES[@]}"; do
+ FILE_PATH="$WORKER_DIR/$file"
+
+ if [ ! -f "$FILE_PATH" ]; then
+ echo -e "${RED} ❌ Missing file: $FILE_PATH${NC}"
+ ERRORS=$((ERRORS + 1))
+ else
+ # Check if file is tracked by git
+ if ! git ls-files --error-unmatch "$FILE_PATH" &> /dev/null; then
+ echo -e "${RED} ❌ File not tracked by git: $FILE_PATH${NC}"
+ echo -e "${YELLOW} Check .gitignore patterns!${NC}"
+ ERRORS=$((ERRORS + 1))
+ else
+ echo -e "${GREEN} ✓ $file (tracked)${NC}"
+ fi
+ fi
+ done
+done
+
+# Check for any ignored worker files
+echo ""
+echo "🚫 Checking for gitignored worker files..."
+IGNORED_FILES=$(git check-ignore workers/*/* 2>/dev/null || true)
+if [ -n "$IGNORED_FILES" ]; then
+ echo -e "${YELLOW}⚠️ Warning: Some worker files are being ignored:${NC}"
+ echo "$IGNORED_FILES" | while read -r file; do
+ echo -e "${YELLOW} - $file${NC}"
+ done
+ WARNINGS=$((WARNINGS + 1))
+fi
+
+# Summary
+echo ""
+echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
+if [ $ERRORS -eq 0 ] && [ $WARNINGS -eq 0 ]; then
+ echo -e "${GREEN}✅ All workers validated successfully!${NC}"
+ exit 0
+elif [ $ERRORS -eq 0 ]; then
+ echo -e "${YELLOW}⚠️ Validation passed with $WARNINGS warning(s)${NC}"
+ exit 0
+else
+ echo -e "${RED}❌ Validation failed with $ERRORS error(s) and $WARNINGS warning(s)${NC}"
+ exit 1
+fi
diff --git a/.github/workflows/test.yml b/.github/workflows/test.yml
index 03581ef..9f79b46 100644
--- a/.github/workflows/test.yml
+++ b/.github/workflows/test.yml
@@ -2,11 +2,100 @@ name: Tests
on:
push:
- branches: [ main, master, develop, feature/** ]
+ branches: [ main, master, dev, develop, feature/** ]
pull_request:
- branches: [ main, master, develop ]
+ branches: [ main, master, dev, develop ]
jobs:
+ validate-workers:
+ name: Validate Workers
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+
+ - name: Run worker validation
+ run: |
+ chmod +x .github/scripts/validate-workers.sh
+ .github/scripts/validate-workers.sh
+
+ build-workers:
+ name: Build Worker Docker Images
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ fetch-depth: 0 # Fetch all history for proper diff
+
+ - name: Check which workers were modified
+ id: check-workers
+ run: |
+ if [ "${{ github.event_name }}" == "pull_request" ]; then
+ # For PRs, check changed files
+ CHANGED_FILES=$(git diff --name-only origin/${{ github.base_ref }}...HEAD)
+ echo "Changed files:"
+ echo "$CHANGED_FILES"
+ else
+ # For direct pushes, check last commit
+ CHANGED_FILES=$(git diff --name-only HEAD~1 HEAD)
+ fi
+
+ # Check if docker-compose.yml changed (build all workers)
+ if echo "$CHANGED_FILES" | grep -q "^docker-compose.yml"; then
+ echo "workers_to_build=worker-python worker-secrets worker-rust worker-android worker-ossfuzz" >> $GITHUB_OUTPUT
+ echo "workers_modified=true" >> $GITHUB_OUTPUT
+ echo "✅ docker-compose.yml modified - building all workers"
+ exit 0
+ fi
+
+ # Detect which specific workers changed
+ WORKERS_TO_BUILD=""
+
+ if echo "$CHANGED_FILES" | grep -q "^workers/python/"; then
+ WORKERS_TO_BUILD="$WORKERS_TO_BUILD worker-python"
+ echo "✅ Python worker modified"
+ fi
+
+ if echo "$CHANGED_FILES" | grep -q "^workers/secrets/"; then
+ WORKERS_TO_BUILD="$WORKERS_TO_BUILD worker-secrets"
+ echo "✅ Secrets worker modified"
+ fi
+
+ if echo "$CHANGED_FILES" | grep -q "^workers/rust/"; then
+ WORKERS_TO_BUILD="$WORKERS_TO_BUILD worker-rust"
+ echo "✅ Rust worker modified"
+ fi
+
+ if echo "$CHANGED_FILES" | grep -q "^workers/android/"; then
+ WORKERS_TO_BUILD="$WORKERS_TO_BUILD worker-android"
+ echo "✅ Android worker modified"
+ fi
+
+ if echo "$CHANGED_FILES" | grep -q "^workers/ossfuzz/"; then
+ WORKERS_TO_BUILD="$WORKERS_TO_BUILD worker-ossfuzz"
+ echo "✅ OSS-Fuzz worker modified"
+ fi
+
+ if [ -z "$WORKERS_TO_BUILD" ]; then
+ echo "workers_modified=false" >> $GITHUB_OUTPUT
+ echo "⏭️ No worker changes detected - skipping build"
+ else
+ echo "workers_to_build=$WORKERS_TO_BUILD" >> $GITHUB_OUTPUT
+ echo "workers_modified=true" >> $GITHUB_OUTPUT
+ echo "Building workers:$WORKERS_TO_BUILD"
+ fi
+
+ - name: Set up Docker Buildx
+ if: steps.check-workers.outputs.workers_modified == 'true'
+ uses: docker/setup-buildx-action@v3
+
+ - name: Build worker images
+ if: steps.check-workers.outputs.workers_modified == 'true'
+ run: |
+ WORKERS="${{ steps.check-workers.outputs.workers_to_build }}"
+ echo "Building worker Docker images: $WORKERS"
+ docker compose build $WORKERS --no-cache
+ continue-on-error: false
+
lint:
name: Lint
runs-on: ubuntu-latest
@@ -143,11 +232,15 @@ jobs:
test-summary:
name: Test Summary
runs-on: ubuntu-latest
- needs: [lint, unit-tests]
+ needs: [validate-workers, lint, unit-tests]
if: always()
steps:
- name: Check test results
run: |
+ if [ "${{ needs.validate-workers.result }}" != "success" ]; then
+ echo "Worker validation failed"
+ exit 1
+ fi
if [ "${{ needs.unit-tests.result }}" != "success" ]; then
echo "Unit tests failed"
exit 1
diff --git a/.gitignore b/.gitignore
index da918ac..a8d6e44 100644
--- a/.gitignore
+++ b/.gitignore
@@ -188,6 +188,10 @@ logs/
# Docker volume configs (keep .env.example but ignore actual .env)
volumes/env/.env
+# Vendored proxy sources (kept locally for reference)
+ai/proxy/bifrost/
+ai/proxy/litellm/
+
# Test project databases and configurations
test_projects/*/.fuzzforge/
test_projects/*/findings.db*
@@ -304,4 +308,8 @@ test_projects/*/.npmrc
test_projects/*/.git-credentials
test_projects/*/credentials.*
test_projects/*/api_keys.*
-test_projects/*/ci-*.sh
\ No newline at end of file
+test_projects/*/ci-*.sh
+
+# -------------------- Internal Documentation --------------------
+# Weekly summaries and temporary project documentation
+WEEK_SUMMARY*.md
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 649d8fb..c852469 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,7 +5,118 @@ All notable changes to FuzzForge will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
-## [0.7.0] - 2025-01-16
+## [Unreleased]
+
+### 📝 Documentation
+- Added comprehensive worker startup documentation across all guides
+- Added workflow-to-worker mapping tables in README, troubleshooting guide, getting started guide, and docker setup guide
+- Fixed broken documentation links in CLI reference
+- Added WEEK_SUMMARY*.md pattern to .gitignore
+
+---
+
+## [0.7.3] - 2025-10-30
+
+### 🎯 Major Features
+
+#### Android Static Analysis Workflow
+- **Added comprehensive Android security testing workflow** (`android_static_analysis`):
+ - Jadx decompiler for APK → Java source code decompilation
+ - OpenGrep/Semgrep static analysis with custom Android security rules
+ - MobSF integration for comprehensive mobile security scanning
+ - SARIF report generation with unified findings format
+ - Test results: Successfully decompiled 4,145 Java files, found 8 security vulnerabilities
+ - Full workflow completes in ~1.5 minutes
+
+#### Platform-Aware Worker Architecture
+- **ARM64 (Apple Silicon) support**:
+ - Automatic platform detection (ARM64 vs x86_64) in CLI using `platform.machine()`
+ - Worker metadata convention (`metadata.yaml`) for platform-specific capabilities
+ - Multi-Dockerfile support: `Dockerfile.amd64` (full toolchain) and `Dockerfile.arm64` (optimized)
+ - Conditional module imports for graceful degradation (MobSF skips on ARM64)
+ - Backend path resolution via `FUZZFORGE_HOST_ROOT` for CLI worker management
+- **Worker selection logic**:
+ - CLI automatically selects appropriate Dockerfile based on detected platform
+ - Multi-strategy path resolution (API → .fuzzforge marker → environment variable)
+ - Platform-specific tool availability documented in metadata
+
+#### Python SAST Workflow
+- **Added Python Static Application Security Testing workflow** (`python_sast`):
+ - Bandit for Python security linting (SAST)
+ - MyPy for static type checking
+ - pip-audit for dependency vulnerability scanning (CVE detection)
+ - Integrated SARIF reporter for unified findings format
+ - Auto-start Python worker on-demand
+
+### ✨ Enhancements
+
+#### CI/CD Improvements
+- Added automated worker validation in CI pipeline
+- Docker build checks for all workers before merge
+- Worker file change detection for selective builds
+- Optimized Docker layer caching for faster builds
+- Dev branch testing workflow triggers
+
+#### CLI Improvements
+- Fixed live monitoring bug in `ff monitor live` command
+- Enhanced `ff findings` command with better table formatting
+- Improved `ff monitor` with clearer status displays
+- Auto-start workers on-demand when workflows require them
+- Better error messages with actionable manual start commands
+
+#### Worker Management
+- Standardized worker service names (`worker-python`, `worker-android`, etc.)
+- Added missing `worker-secrets` to repository
+- Improved worker naming consistency across codebase
+
+#### LiteLLM Integration
+- Centralized LLM provider management with proxy
+- Governance and request/response routing
+- OTEL collector integration for observability
+- Environment-based configurable timeouts
+- Optional `.env.litellm` configuration
+
+### 🐛 Bug Fixes
+
+- Fixed MobSF API key generation from secret file (SHA256 hash)
+- Corrected Temporal activity names (decompile_with_jadx, scan_with_opengrep, scan_with_mobsf)
+- Resolved linter errors across codebase
+- Fixed unused import issues to pass CI checks
+- Removed deprecated workflow parameters
+- Docker Compose version compatibility fixes
+
+### 🔧 Technical Changes
+
+- Conditional import pattern for optional dependencies (MobSF on ARM64)
+- Multi-platform Dockerfile architecture
+- Worker metadata convention for capability declaration
+- Improved CI worker build optimization
+- Enhanced storage activity error handling
+
+### 📝 Test Projects
+
+- Added `test_projects/android_test/` with BeetleBug.apk and shopnest.apk
+- Android workflow validation with real APK samples
+- ARM64 platform testing and validation
+
+---
+
+## [0.7.2] - 2025-10-22
+
+### 🐛 Bug Fixes
+- Fixed worker naming inconsistencies across codebase
+- Improved monitor command consolidation and usability
+- Enhanced findings CLI with better formatting and display
+- Added missing secrets worker to repository
+
+### 📝 Documentation
+- Added benchmark results files to git for secret detection workflows
+
+**Note:** v0.7.1 was re-tagged as v0.7.2 (both point to the same commit)
+
+---
+
+## [0.7.0] - 2025-10-16
### 🎯 Major Features
@@ -40,7 +151,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
#### Documentation
- Updated README for Temporal + MinIO architecture
-- Removed obsolete `volume_mode` references across all documentation
- Added `.env` configuration guide for AI agent API keys
- Fixed worker startup instructions with correct service names
- Updated docker compose commands to modern syntax
@@ -52,6 +162,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### 🐛 Bug Fixes
+- Fixed default parameters from metadata.yaml not being applied to workflows when no parameters provided
- Fixed gitleaks workflow failing on uploaded directories without Git history
- Fixed worker startup command suggestions (now uses `docker compose up -d` with service names)
- Fixed missing `cognify_text` method in CogneeProjectIntegration
@@ -71,7 +182,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
---
-## [0.6.0] - 2024-12-XX
+## [0.6.0] - Undocumented
### Features
- Initial Temporal migration
@@ -79,7 +190,11 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Security assessment workflow
- Basic CLI commands
+**Note:** No git tag exists for v0.6.0. Release date undocumented.
+
---
-[0.7.0]: https://github.com/FuzzingLabs/fuzzforge_ai/compare/v0.6.0...v0.7.0
-[0.6.0]: https://github.com/FuzzingLabs/fuzzforge_ai/releases/tag/v0.6.0
+[0.7.3]: https://github.com/FuzzingLabs/fuzzforge_ai/compare/v0.7.2...v0.7.3
+[0.7.2]: https://github.com/FuzzingLabs/fuzzforge_ai/compare/v0.7.0...v0.7.2
+[0.7.0]: https://github.com/FuzzingLabs/fuzzforge_ai/releases/tag/v0.7.0
+[0.6.0]: https://github.com/FuzzingLabs/fuzzforge_ai/tree/v0.6.0
diff --git a/README.md b/README.md
index 9b8eaaf..f76dcce 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@
-
+
@@ -115,9 +115,11 @@ For containerized workflows, see the [Docker Installation Guide](https://docs.do
For AI-powered workflows, configure your LLM API keys:
```bash
-cp volumes/env/.env.example volumes/env/.env
+cp volumes/env/.env.template volumes/env/.env
# Edit volumes/env/.env and add your API keys (OpenAI, Anthropic, Google, etc.)
+# Add your key to LITELLM_GEMINI_API_KEY
```
+> Don't change the OPENAI_API_KEY default value, as it is used for the LLM proxy.
This is required for:
- `llm_secret_detection` workflow
@@ -150,7 +152,7 @@ git clone https://github.com/fuzzinglabs/fuzzforge_ai.git
cd fuzzforge_ai
# 2. Copy the default LLM env config
-cp volumes/env/.env.example volumes/env/.env
+cp volumes/env/.env.template volumes/env/.env
# 3. Start FuzzForge with Temporal
docker compose up -d
@@ -163,6 +165,16 @@ docker compose up -d worker-python
>
> Workers don't auto-start by default (saves RAM). Start the worker you need before running workflows.
+**Workflow-to-Worker Quick Reference:**
+
+| Workflow | Worker Required | Startup Command |
+|----------|----------------|-----------------|
+| `security_assessment`, `python_sast`, `llm_analysis`, `atheris_fuzzing` | worker-python | `docker compose up -d worker-python` |
+| `android_static_analysis` | worker-android | `docker compose up -d worker-android` |
+| `cargo_fuzzing` | worker-rust | `docker compose up -d worker-rust` |
+| `ossfuzz_campaign` | worker-ossfuzz | `docker compose up -d worker-ossfuzz` |
+| `llm_secret_detection`, `trufflehog_detection`, `gitleaks_detection` | worker-secrets | `docker compose up -d worker-secrets` |
+
```bash
# 5. Run your first workflow (files are automatically uploaded)
cd test_projects/vulnerable_app/
diff --git a/ai/agents/task_agent/.env.example b/ai/agents/task_agent/.env.example
deleted file mode 100644
index c71d59a..0000000
--- a/ai/agents/task_agent/.env.example
+++ /dev/null
@@ -1,10 +0,0 @@
-# Default LiteLLM configuration
-LITELLM_MODEL=gemini/gemini-2.0-flash-001
-# LITELLM_PROVIDER=gemini
-
-# API keys (uncomment and fill as needed)
-# GOOGLE_API_KEY=
-# OPENAI_API_KEY=
-# ANTHROPIC_API_KEY=
-# OPENROUTER_API_KEY=
-# MISTRAL_API_KEY=
diff --git a/ai/agents/task_agent/Dockerfile b/ai/agents/task_agent/Dockerfile
index eaf734b..c2b6686 100644
--- a/ai/agents/task_agent/Dockerfile
+++ b/ai/agents/task_agent/Dockerfile
@@ -16,4 +16,9 @@ COPY . /app/agent_with_adk_format
WORKDIR /app/agent_with_adk_format
ENV PYTHONPATH=/app
+# Copy and set up entrypoint
+COPY docker-entrypoint.sh /docker-entrypoint.sh
+RUN chmod +x /docker-entrypoint.sh
+
+ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
diff --git a/ai/agents/task_agent/README.md b/ai/agents/task_agent/README.md
index 769ce33..692e4e6 100644
--- a/ai/agents/task_agent/README.md
+++ b/ai/agents/task_agent/README.md
@@ -43,18 +43,34 @@ cd task_agent
# cp .env.example .env
```
-Edit `.env` (or `.env.example`) and add your API keys. The agent must be restarted after changes so the values are picked up:
+Edit `.env` (or `.env.example`) and add your proxy + API keys. The agent must be restarted after changes so the values are picked up:
```bash
-# Set default model
-LITELLM_MODEL=gemini/gemini-2.0-flash-001
+# Route every request through the proxy container (use http://localhost:10999 from the host)
+FF_LLM_PROXY_BASE_URL=http://llm-proxy:4000
-# Add API keys for providers you want to use
-GOOGLE_API_KEY=your_google_api_key
-OPENAI_API_KEY=your_openai_api_key
-ANTHROPIC_API_KEY=your_anthropic_api_key
-OPENROUTER_API_KEY=your_openrouter_api_key
+# Default model + provider the agent boots with
+LITELLM_MODEL=openai/gpt-4o-mini
+LITELLM_PROVIDER=openai
+
+# Virtual key issued by the proxy to the task agent (bootstrap replaces the placeholder)
+OPENAI_API_KEY=sk-proxy-default
+
+# Upstream keys stay inside the proxy. Store real secrets under the LiteLLM
+# aliases and the bootstrapper mirrors them into .env.litellm for the proxy container.
+LITELLM_OPENAI_API_KEY=your_real_openai_api_key
+LITELLM_ANTHROPIC_API_KEY=your_real_anthropic_key
+LITELLM_GEMINI_API_KEY=your_real_gemini_key
+LITELLM_MISTRAL_API_KEY=your_real_mistral_key
+LITELLM_OPENROUTER_API_KEY=your_real_openrouter_key
```
+> When running the agent outside of Docker, swap `FF_LLM_PROXY_BASE_URL` to the host port (default `http://localhost:10999`).
+
+The bootstrap container provisions LiteLLM, copies provider secrets into
+`volumes/env/.env.litellm`, and rewrites `volumes/env/.env` with the virtual key.
+Populate the `LITELLM_*_API_KEY` values before the first launch so the proxy can
+reach your upstream providers as soon as the bootstrap script runs.
+
### 2. Install Dependencies
```bash
diff --git a/ai/agents/task_agent/docker-entrypoint.sh b/ai/agents/task_agent/docker-entrypoint.sh
new file mode 100644
index 0000000..88e3733
--- /dev/null
+++ b/ai/agents/task_agent/docker-entrypoint.sh
@@ -0,0 +1,31 @@
+#!/bin/bash
+set -e
+
+# Wait for .env file to have keys (max 30 seconds)
+echo "[task-agent] Waiting for virtual keys to be provisioned..."
+for i in $(seq 1 30); do
+ if [ -f /app/config/.env ]; then
+ # Check if TASK_AGENT_API_KEY has a value (not empty)
+ KEY=$(grep -E '^TASK_AGENT_API_KEY=' /app/config/.env | cut -d'=' -f2)
+ if [ -n "$KEY" ] && [ "$KEY" != "" ]; then
+ echo "[task-agent] Virtual keys found, loading environment..."
+ # Export keys from .env file
+ export TASK_AGENT_API_KEY="$KEY"
+ export OPENAI_API_KEY=$(grep -E '^OPENAI_API_KEY=' /app/config/.env | cut -d'=' -f2)
+ export FF_LLM_PROXY_BASE_URL=$(grep -E '^FF_LLM_PROXY_BASE_URL=' /app/config/.env | cut -d'=' -f2)
+ echo "[task-agent] Loaded TASK_AGENT_API_KEY: ${TASK_AGENT_API_KEY:0:15}..."
+ echo "[task-agent] Loaded FF_LLM_PROXY_BASE_URL: $FF_LLM_PROXY_BASE_URL"
+ break
+ fi
+ fi
+ echo "[task-agent] Keys not ready yet, waiting... ($i/30)"
+ sleep 1
+done
+
+if [ -z "$TASK_AGENT_API_KEY" ]; then
+ echo "[task-agent] ERROR: Virtual keys were not provisioned within 30 seconds!"
+ exit 1
+fi
+
+echo "[task-agent] Starting uvicorn..."
+exec "$@"
diff --git a/ai/agents/task_agent/litellm_agent/config.py b/ai/agents/task_agent/litellm_agent/config.py
index 9b404bf..54ab609 100644
--- a/ai/agents/task_agent/litellm_agent/config.py
+++ b/ai/agents/task_agent/litellm_agent/config.py
@@ -4,13 +4,28 @@ from __future__ import annotations
import os
+
+def _normalize_proxy_base_url(raw_value: str | None) -> str | None:
+ if not raw_value:
+ return None
+ cleaned = raw_value.strip()
+ if not cleaned:
+ return None
+ # Avoid double slashes in downstream requests
+ return cleaned.rstrip("/")
+
AGENT_NAME = "litellm_agent"
AGENT_DESCRIPTION = (
"A LiteLLM-backed shell that exposes hot-swappable model and prompt controls."
)
-DEFAULT_MODEL = os.getenv("LITELLM_MODEL", "gemini-2.0-flash-001")
-DEFAULT_PROVIDER = os.getenv("LITELLM_PROVIDER")
+DEFAULT_MODEL = os.getenv("LITELLM_MODEL", "openai/gpt-4o-mini")
+DEFAULT_PROVIDER = os.getenv("LITELLM_PROVIDER") or None
+PROXY_BASE_URL = _normalize_proxy_base_url(
+ os.getenv("FF_LLM_PROXY_BASE_URL")
+ or os.getenv("LITELLM_API_BASE")
+ or os.getenv("LITELLM_BASE_URL")
+)
STATE_PREFIX = "app:litellm_agent/"
STATE_MODEL_KEY = f"{STATE_PREFIX}model"
diff --git a/ai/agents/task_agent/litellm_agent/state.py b/ai/agents/task_agent/litellm_agent/state.py
index 460d961..54f1308 100644
--- a/ai/agents/task_agent/litellm_agent/state.py
+++ b/ai/agents/task_agent/litellm_agent/state.py
@@ -3,11 +3,15 @@
from __future__ import annotations
from dataclasses import dataclass
+import os
from typing import Any, Mapping, MutableMapping, Optional
+import httpx
+
from .config import (
DEFAULT_MODEL,
DEFAULT_PROVIDER,
+ PROXY_BASE_URL,
STATE_MODEL_KEY,
STATE_PROMPT_KEY,
STATE_PROVIDER_KEY,
@@ -66,11 +70,109 @@ class HotSwapState:
"""Create a LiteLlm instance for the current state."""
from google.adk.models.lite_llm import LiteLlm # Lazy import to avoid cycle
+ from google.adk.models.lite_llm import LiteLLMClient
+ from litellm.types.utils import Choices, Message, ModelResponse, Usage
kwargs = {"model": self.model}
if self.provider:
kwargs["custom_llm_provider"] = self.provider
- return LiteLlm(**kwargs)
+ if PROXY_BASE_URL:
+ provider = (self.provider or DEFAULT_PROVIDER or "").lower()
+ if provider and provider != "openai":
+ kwargs["api_base"] = f"{PROXY_BASE_URL.rstrip('/')}/{provider}"
+ else:
+ kwargs["api_base"] = PROXY_BASE_URL
+ kwargs.setdefault("api_key", os.environ.get("TASK_AGENT_API_KEY") or os.environ.get("OPENAI_API_KEY"))
+
+ provider = (self.provider or DEFAULT_PROVIDER or "").lower()
+ model_suffix = self.model.split("/", 1)[-1]
+ use_responses = provider == "openai" and (
+ model_suffix.startswith("gpt-5") or model_suffix.startswith("o1")
+ )
+ if use_responses:
+ kwargs.setdefault("use_responses_api", True)
+
+ llm = LiteLlm(**kwargs)
+
+ if use_responses and PROXY_BASE_URL:
+
+ class _ResponsesAwareClient(LiteLLMClient):
+ def __init__(self, base_client: LiteLLMClient, api_base: str, api_key: str):
+ self._base_client = base_client
+ self._api_base = api_base.rstrip("/")
+ self._api_key = api_key
+
+ async def acompletion(self, model, messages, tools, **kwargs): # type: ignore[override]
+ use_responses_api = kwargs.pop("use_responses_api", False)
+ if not use_responses_api:
+ return await self._base_client.acompletion(
+ model=model,
+ messages=messages,
+ tools=tools,
+ **kwargs,
+ )
+
+ resolved_model = model
+ if "/" not in resolved_model:
+ resolved_model = f"openai/{resolved_model}"
+
+ payload = {
+ "model": resolved_model,
+ "input": _messages_to_responses_input(messages),
+ }
+
+ timeout = kwargs.get("timeout", 60)
+ headers = {
+ "Authorization": f"Bearer {self._api_key}",
+ "Content-Type": "application/json",
+ }
+
+ async with httpx.AsyncClient(timeout=timeout) as client:
+ response = await client.post(
+ f"{self._api_base}/v1/responses",
+ json=payload,
+ headers=headers,
+ )
+ try:
+ response.raise_for_status()
+ except httpx.HTTPStatusError as exc:
+ text = exc.response.text
+ raise RuntimeError(
+ f"LiteLLM responses request failed: {text}"
+ ) from exc
+ data = response.json()
+
+ text_output = _extract_output_text(data)
+ usage = data.get("usage", {})
+
+ return ModelResponse(
+ id=data.get("id"),
+ model=model,
+ choices=[
+ Choices(
+ finish_reason="stop",
+ index=0,
+ message=Message(role="assistant", content=text_output),
+ provider_specific_fields={"bifrost_response": data},
+ )
+ ],
+ usage=Usage(
+ prompt_tokens=usage.get("input_tokens"),
+ completion_tokens=usage.get("output_tokens"),
+ reasoning_tokens=usage.get("output_tokens_details", {}).get(
+ "reasoning_tokens"
+ ),
+ total_tokens=usage.get("total_tokens"),
+ ),
+ )
+
+ llm.llm_client = _ResponsesAwareClient(
+ llm.llm_client,
+ PROXY_BASE_URL,
+ os.environ.get("TASK_AGENT_API_KEY") or os.environ.get("OPENAI_API_KEY", ""),
+ )
+
+ return llm
@property
def display_model(self) -> str:
@@ -84,3 +186,69 @@ def apply_state_to_agent(invocation_context, state: HotSwapState) -> None:
agent = invocation_context.agent
agent.model = state.instantiate_llm()
+
+
+def _messages_to_responses_input(messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
+ inputs: list[dict[str, Any]] = []
+ for message in messages:
+ role = message.get("role", "user")
+ content = message.get("content", "")
+ text_segments: list[str] = []
+
+ if isinstance(content, list):
+ for item in content:
+ if isinstance(item, dict):
+ text = item.get("text") or item.get("content")
+ if text:
+ text_segments.append(str(text))
+ elif isinstance(item, str):
+ text_segments.append(item)
+ elif isinstance(content, str):
+ text_segments.append(content)
+
+ text = "\n".join(segment.strip() for segment in text_segments if segment)
+ if not text:
+ continue
+
+ entry_type = "input_text"
+ if role == "assistant":
+ entry_type = "output_text"
+
+ inputs.append(
+ {
+ "role": role,
+ "content": [
+ {
+ "type": entry_type,
+ "text": text,
+ }
+ ],
+ }
+ )
+
+ if not inputs:
+ inputs.append(
+ {
+ "role": "user",
+ "content": [
+ {
+ "type": "input_text",
+ "text": "",
+ }
+ ],
+ }
+ )
+ return inputs
+
+
+def _extract_output_text(response_json: dict[str, Any]) -> str:
+ outputs = response_json.get("output", [])
+ collected: list[str] = []
+ for item in outputs:
+ if isinstance(item, dict) and item.get("type") == "message":
+ for part in item.get("content", []):
+ if isinstance(part, dict) and part.get("type") == "output_text":
+ text = part.get("text", "")
+ if text:
+ collected.append(str(text))
+ return "\n\n".join(collected).strip()
diff --git a/ai/proxy/README.md b/ai/proxy/README.md
new file mode 100644
index 0000000..fc941eb
--- /dev/null
+++ b/ai/proxy/README.md
@@ -0,0 +1,5 @@
+# LLM Proxy Integrations
+
+This directory contains vendor source trees that were vendored only for reference when integrating LLM gateways. The actual FuzzForge deployment uses the official Docker images for each project.
+
+See `docs/docs/how-to/llm-proxy.md` for up-to-date instructions on running the proxy services and issuing keys for the agents.
diff --git a/ai/pyproject.toml b/ai/pyproject.toml
index d5c0e77..120b9cc 100644
--- a/ai/pyproject.toml
+++ b/ai/pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "fuzzforge-ai"
-version = "0.7.0"
+version = "0.7.3"
description = "FuzzForge AI orchestration module"
readme = "README.md"
requires-python = ">=3.11"
diff --git a/ai/src/fuzzforge_ai/__init__.py b/ai/src/fuzzforge_ai/__init__.py
index cca81fc..eefecd9 100644
--- a/ai/src/fuzzforge_ai/__init__.py
+++ b/ai/src/fuzzforge_ai/__init__.py
@@ -21,4 +21,4 @@ Usage:
# Additional attribution and requirements are provided in the NOTICE file.
-__version__ = "0.6.0"
\ No newline at end of file
+__version__ = "0.7.3"
\ No newline at end of file
diff --git a/ai/src/fuzzforge_ai/agent_executor.py b/ai/src/fuzzforge_ai/agent_executor.py
index fd1f1d9..41613c0 100644
--- a/ai/src/fuzzforge_ai/agent_executor.py
+++ b/ai/src/fuzzforge_ai/agent_executor.py
@@ -831,20 +831,9 @@ class FuzzForgeExecutor:
async def submit_security_scan_mcp(
workflow_name: str,
target_path: str = "",
- volume_mode: str = "",
parameters: Dict[str, Any] | None = None,
tool_context: ToolContext | None = None,
) -> Any:
- # Normalise volume mode to supported values
- normalised_mode = (volume_mode or "ro").strip().lower().replace("-", "_")
- if normalised_mode in {"read_only", "readonly", "ro"}:
- normalised_mode = "ro"
- elif normalised_mode in {"read_write", "readwrite", "rw"}:
- normalised_mode = "rw"
- else:
- # Fall back to read-only if we can't recognise the input
- normalised_mode = "ro"
-
# Resolve the target path to an absolute path for validation
resolved_path = target_path or "."
try:
@@ -883,7 +872,6 @@ class FuzzForgeExecutor:
payload = {
"workflow_name": workflow_name,
"target_path": resolved_path,
- "volume_mode": normalised_mode,
"parameters": cleaned_parameters,
}
result = await _call_fuzzforge_mcp("submit_security_scan_mcp", payload)
@@ -1061,10 +1049,19 @@ class FuzzForgeExecutor:
FunctionTool(get_task_list)
])
-
- # Create the agent
+
+ # Create the agent with LiteLLM configuration
+ llm_kwargs = {}
+ api_key = os.getenv('OPENAI_API_KEY') or os.getenv('LLM_API_KEY')
+ api_base = os.getenv('LLM_ENDPOINT') or os.getenv('LLM_API_BASE') or os.getenv('OPENAI_API_BASE')
+
+ if api_key:
+ llm_kwargs['api_key'] = api_key
+ if api_base:
+ llm_kwargs['api_base'] = api_base
+
self.agent = LlmAgent(
- model=LiteLlm(model=self.model),
+ model=LiteLlm(model=self.model, **llm_kwargs),
name="fuzzforge_executor",
description="Intelligent A2A orchestrator with memory",
instruction=self._build_instruction(),
diff --git a/ai/src/fuzzforge_ai/cognee_service.py b/ai/src/fuzzforge_ai/cognee_service.py
index 968e956..ba14a30 100644
--- a/ai/src/fuzzforge_ai/cognee_service.py
+++ b/ai/src/fuzzforge_ai/cognee_service.py
@@ -56,7 +56,7 @@ class CogneeService:
# Configure LLM with API key BEFORE any other cognee operations
provider = os.getenv("LLM_PROVIDER", "openai")
model = os.getenv("LLM_MODEL") or os.getenv("LITELLM_MODEL", "gpt-4o-mini")
- api_key = os.getenv("LLM_API_KEY") or os.getenv("OPENAI_API_KEY")
+ api_key = os.getenv("COGNEE_API_KEY") or os.getenv("LLM_API_KEY") or os.getenv("OPENAI_API_KEY")
endpoint = os.getenv("LLM_ENDPOINT")
api_version = os.getenv("LLM_API_VERSION")
max_tokens = os.getenv("LLM_MAX_TOKENS")
@@ -78,48 +78,62 @@ class CogneeService:
os.environ.setdefault("OPENAI_API_KEY", api_key)
if endpoint:
os.environ["LLM_ENDPOINT"] = endpoint
+ os.environ.setdefault("LLM_API_BASE", endpoint)
+ os.environ.setdefault("OPENAI_API_BASE", endpoint)
+ os.environ.setdefault("LITELLM_PROXY_API_BASE", endpoint)
+ if api_key:
+ os.environ.setdefault("LITELLM_PROXY_API_KEY", api_key)
if api_version:
os.environ["LLM_API_VERSION"] = api_version
if max_tokens:
os.environ["LLM_MAX_TOKENS"] = str(max_tokens)
# Configure Cognee's runtime using its configuration helpers when available
+ embedding_model = os.getenv("LLM_EMBEDDING_MODEL")
+ embedding_endpoint = os.getenv("LLM_EMBEDDING_ENDPOINT")
+ if embedding_endpoint:
+ os.environ.setdefault("LLM_EMBEDDING_API_BASE", embedding_endpoint)
+
if hasattr(cognee.config, "set_llm_provider"):
cognee.config.set_llm_provider(provider)
- if hasattr(cognee.config, "set_llm_model"):
- cognee.config.set_llm_model(model)
- if api_key and hasattr(cognee.config, "set_llm_api_key"):
- cognee.config.set_llm_api_key(api_key)
- if endpoint and hasattr(cognee.config, "set_llm_endpoint"):
- cognee.config.set_llm_endpoint(endpoint)
+ if hasattr(cognee.config, "set_llm_model"):
+ cognee.config.set_llm_model(model)
+ if api_key and hasattr(cognee.config, "set_llm_api_key"):
+ cognee.config.set_llm_api_key(api_key)
+ if endpoint and hasattr(cognee.config, "set_llm_endpoint"):
+ cognee.config.set_llm_endpoint(endpoint)
+ if embedding_model and hasattr(cognee.config, "set_llm_embedding_model"):
+ cognee.config.set_llm_embedding_model(embedding_model)
+ if embedding_endpoint and hasattr(cognee.config, "set_llm_embedding_endpoint"):
+ cognee.config.set_llm_embedding_endpoint(embedding_endpoint)
if api_version and hasattr(cognee.config, "set_llm_api_version"):
cognee.config.set_llm_api_version(api_version)
if max_tokens and hasattr(cognee.config, "set_llm_max_tokens"):
cognee.config.set_llm_max_tokens(int(max_tokens))
-
+
# Configure graph database
cognee.config.set_graph_db_config({
"graph_database_provider": self.cognee_config.get("graph_database_provider", "kuzu"),
})
-
+
# Set data directories
data_dir = self.cognee_config.get("data_directory")
system_dir = self.cognee_config.get("system_directory")
-
+
if data_dir:
logger.debug("Setting cognee data root", extra={"path": data_dir})
cognee.config.data_root_directory(data_dir)
if system_dir:
logger.debug("Setting cognee system root", extra={"path": system_dir})
cognee.config.system_root_directory(system_dir)
-
+
# Setup multi-tenant user context
await self._setup_user_context()
-
+
self._initialized = True
logger.info(f"Cognee initialized for project {self.project_context['project_name']} "
f"with Kuzu at {system_dir}")
-
+
except ImportError:
logger.error("Cognee not installed. Install with: pip install cognee")
raise
diff --git a/backend/mcp-config.json b/backend/mcp-config.json
index 1b6e783..4f06ce4 100644
--- a/backend/mcp-config.json
+++ b/backend/mcp-config.json
@@ -22,7 +22,6 @@
"parameters": {
"workflow_name": "string",
"target_path": "string",
- "volume_mode": "string (ro|rw)",
"parameters": "object"
}
},
diff --git a/backend/pyproject.toml b/backend/pyproject.toml
index 03a7307..595d473 100644
--- a/backend/pyproject.toml
+++ b/backend/pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "backend"
-version = "0.7.0"
+version = "0.7.3"
description = "FuzzForge OSS backend"
authors = []
readme = "README.md"
diff --git a/backend/src/api/system.py b/backend/src/api/system.py
new file mode 100644
index 0000000..a4ee1a6
--- /dev/null
+++ b/backend/src/api/system.py
@@ -0,0 +1,47 @@
+# Copyright (c) 2025 FuzzingLabs
+#
+# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
+# at the root of this repository for details.
+#
+# After the Change Date (four years from publication), this version of the
+# Licensed Work will be made available under the Apache License, Version 2.0.
+# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
+#
+# Additional attribution and requirements are provided in the NOTICE file.
+
+"""
+System information endpoints for FuzzForge API.
+
+Provides system configuration and filesystem paths to CLI for worker management.
+"""
+
+import os
+from typing import Dict
+
+from fastapi import APIRouter
+
+router = APIRouter(prefix="/system", tags=["system"])
+
+
+@router.get("/info")
+async def get_system_info() -> Dict[str, str]:
+ """
+ Get system information including host filesystem paths.
+
+ This endpoint exposes paths needed by the CLI to manage workers via docker-compose.
+ The FUZZFORGE_HOST_ROOT environment variable is set by docker-compose and points
+ to the FuzzForge installation directory on the host machine.
+
+ Returns:
+ Dictionary containing:
+ - host_root: Absolute path to FuzzForge root on host
+ - docker_compose_path: Path to docker-compose.yml on host
+ - workers_dir: Path to workers directory on host
+ """
+ host_root = os.getenv("FUZZFORGE_HOST_ROOT", "")
+
+ return {
+ "host_root": host_root,
+ "docker_compose_path": f"{host_root}/docker-compose.yml" if host_root else "",
+ "workers_dir": f"{host_root}/workers" if host_root else "",
+ }
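
A minimal sketch of how a CLI-side caller could consume this endpoint; only the /system/info path and the three response fields come from the route above, while the base URL and the use of httpx are assumptions:

    import httpx

    def fetch_system_info(base_url: str = "http://localhost:8000") -> dict:
        """Fetch the host paths the backend exposes for docker-compose worker management."""
        response = httpx.get(f"{base_url}/system/info", timeout=10.0)
        response.raise_for_status()
        # Empty strings mean FUZZFORGE_HOST_ROOT was not set in the backend container.
        return response.json()

    info = fetch_system_info()
    print(info["docker_compose_path"], info["workers_dir"])
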
diff --git a/backend/src/api/workflows.py b/backend/src/api/workflows.py
index 3ffda9d..a4d1b7c 100644
--- a/backend/src/api/workflows.py
+++ b/backend/src/api/workflows.py
@@ -43,6 +43,42 @@ ALLOWED_CONTENT_TYPES = [
router = APIRouter(prefix="/workflows", tags=["workflows"])
+def extract_defaults_from_json_schema(metadata: Dict[str, Any]) -> Dict[str, Any]:
+ """
+ Extract default parameter values from JSON Schema format.
+
+ Converts from:
+ parameters:
+ properties:
+ param_name:
+ default: value
+
+ To:
+ {param_name: value}
+
+ Args:
+ metadata: Workflow metadata dictionary
+
+ Returns:
+ Dictionary of parameter defaults
+ """
+ defaults = {}
+
+ # Check if there's a legacy default_parameters field
+ if "default_parameters" in metadata:
+ defaults.update(metadata["default_parameters"])
+
+ # Extract defaults from JSON Schema parameters
+ parameters = metadata.get("parameters", {})
+ properties = parameters.get("properties", {})
+
+ for param_name, param_spec in properties.items():
+ if "default" in param_spec:
+ defaults[param_name] = param_spec["default"]
+
+ return defaults
+
+
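
A small worked example of the conversion this helper performs; the metadata dict is hypothetical:

    sample_metadata = {
        "default_parameters": {"timeout": 30},  # legacy field, still honoured
        "parameters": {
            "properties": {
                "target_path": {"type": "string", "default": "."},
                "timeout": {"type": "integer"},  # no default declared here
            }
        },
    }
    assert extract_defaults_from_json_schema(sample_metadata) == {"timeout": 30, "target_path": "."}
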
def create_structured_error_response(
error_type: str,
message: str,
@@ -164,7 +200,7 @@ async def get_workflow_metadata(
author=metadata.get("author"),
tags=metadata.get("tags", []),
parameters=metadata.get("parameters", {}),
- default_parameters=metadata.get("default_parameters", {}),
+ default_parameters=extract_defaults_from_json_schema(metadata),
required_modules=metadata.get("required_modules", [])
)
@@ -221,7 +257,7 @@ async def submit_workflow(
# Merge default parameters with user parameters
workflow_info = temporal_mgr.workflows[workflow_name]
metadata = workflow_info.metadata or {}
- defaults = metadata.get("default_parameters", {})
+ defaults = extract_defaults_from_json_schema(metadata)
user_params = submission.parameters or {}
workflow_params = {**defaults, **user_params}
@@ -450,7 +486,7 @@ async def upload_and_submit_workflow(
# Merge default parameters with user parameters
workflow_info = temporal_mgr.workflows.get(workflow_name)
metadata = workflow_info.metadata or {}
- defaults = metadata.get("default_parameters", {})
+ defaults = extract_defaults_from_json_schema(metadata)
workflow_params = {**defaults, **workflow_params}
# Start workflow execution
@@ -617,11 +653,8 @@ async def get_workflow_parameters(
else:
param_definitions = parameters_schema
- # Add default values to the schema
- default_params = metadata.get("default_parameters", {})
- for param_name, param_schema in param_definitions.items():
- if isinstance(param_schema, dict) and param_name in default_params:
- param_schema["default"] = default_params[param_name]
+ # Extract default values from JSON Schema
+ default_params = extract_defaults_from_json_schema(metadata)
return {
"workflow": workflow_name,
diff --git a/backend/src/main.py b/backend/src/main.py
index 9866c43..c219742 100644
--- a/backend/src/main.py
+++ b/backend/src/main.py
@@ -24,7 +24,7 @@ from fastmcp.server.http import create_sse_app
from src.temporal.manager import TemporalManager
from src.core.setup import setup_result_storage, validate_infrastructure
-from src.api import workflows, runs, fuzzing
+from src.api import workflows, runs, fuzzing, system
from fastmcp import FastMCP
@@ -76,6 +76,7 @@ app = FastAPI(
app.include_router(workflows.router)
app.include_router(runs.router)
app.include_router(fuzzing.router)
+app.include_router(system.router)
def get_temporal_status() -> Dict[str, Any]:
@@ -212,14 +213,6 @@ def _lookup_workflow(workflow_name: str):
metadata = info.metadata
defaults = metadata.get("default_parameters", {})
default_target_path = metadata.get("default_target_path") or defaults.get("target_path")
- supported_modes = metadata.get("supported_volume_modes") or ["ro", "rw"]
- if not isinstance(supported_modes, list) or not supported_modes:
- supported_modes = ["ro", "rw"]
- default_volume_mode = (
- metadata.get("default_volume_mode")
- or defaults.get("volume_mode")
- or supported_modes[0]
- )
return {
"name": workflow_name,
"version": metadata.get("version", "0.6.0"),
@@ -229,9 +222,7 @@ def _lookup_workflow(workflow_name: str):
"parameters": metadata.get("parameters", {}),
"default_parameters": metadata.get("default_parameters", {}),
"required_modules": metadata.get("required_modules", []),
- "supported_volume_modes": supported_modes,
- "default_target_path": default_target_path,
- "default_volume_mode": default_volume_mode
+ "default_target_path": default_target_path
}
@@ -256,10 +247,6 @@ async def list_workflows_mcp() -> Dict[str, Any]:
"description": metadata.get("description", ""),
"author": metadata.get("author"),
"tags": metadata.get("tags", []),
- "supported_volume_modes": metadata.get("supported_volume_modes", ["ro", "rw"]),
- "default_volume_mode": metadata.get("default_volume_mode")
- or defaults.get("volume_mode")
- or "ro",
"default_target_path": metadata.get("default_target_path")
or defaults.get("target_path")
})
diff --git a/backend/src/models/findings.py b/backend/src/models/findings.py
index ddc756a..b71a9b6 100644
--- a/backend/src/models/findings.py
+++ b/backend/src/models/findings.py
@@ -14,7 +14,7 @@ Models for workflow findings and submissions
# Additional attribution and requirements are provided in the NOTICE file.
from pydantic import BaseModel, Field
-from typing import Dict, Any, Optional, Literal, List
+from typing import Dict, Any, Optional, List
from datetime import datetime
@@ -73,10 +73,6 @@ class WorkflowMetadata(BaseModel):
default_factory=list,
description="Required module names"
)
- supported_volume_modes: List[Literal["ro", "rw"]] = Field(
- default=["ro", "rw"],
- description="Supported volume mount modes"
- )
class WorkflowListItem(BaseModel):
diff --git a/backend/src/temporal/manager.py b/backend/src/temporal/manager.py
index 9a44e8b..96d9a84 100644
--- a/backend/src/temporal/manager.py
+++ b/backend/src/temporal/manager.py
@@ -187,12 +187,28 @@ class TemporalManager:
# Add parameters in order based on metadata schema
# This ensures parameters match the workflow signature order
- if workflow_params and 'parameters' in workflow_info.metadata:
+ # Apply defaults from metadata.yaml if parameter not provided
+ if 'parameters' in workflow_info.metadata:
param_schema = workflow_info.metadata['parameters'].get('properties', {})
+ logger.debug(f"Found {len(param_schema)} parameters in schema")
# Iterate parameters in schema order and add values
for param_name in param_schema.keys():
- param_value = workflow_params.get(param_name)
+ param_spec = param_schema[param_name]
+
+ # Use provided param, or fall back to default from metadata
+ if workflow_params and param_name in workflow_params:
+ param_value = workflow_params[param_name]
+ logger.debug(f"Using provided value for {param_name}: {param_value}")
+ elif 'default' in param_spec:
+ param_value = param_spec['default']
+ logger.debug(f"Using default for {param_name}: {param_value}")
+ else:
+ param_value = None
+ logger.debug(f"No value or default for {param_name}, using None")
+
workflow_args.append(param_value)
+ else:
+ logger.debug("No 'parameters' section found in workflow metadata")
# Determine task queue from workflow vertical
vertical = workflow_info.metadata.get("vertical", "default")
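
The resolution order above (explicit parameter, then metadata default, then None) can be seen in isolation with a hypothetical schema:

    param_schema = {
        "target_path": {"type": "string", "default": "."},
        "timeout": {"type": "integer"},
    }
    workflow_params = {"timeout": 120}

    workflow_args = []
    for param_name, param_spec in param_schema.items():
        if workflow_params and param_name in workflow_params:
            value = workflow_params[param_name]  # explicit value wins
        elif "default" in param_spec:
            value = param_spec["default"]        # fall back to the metadata default
        else:
            value = None                         # nothing provided, nothing declared
        workflow_args.append(value)

    assert workflow_args == [".", 120]
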
diff --git a/backend/toolbox/modules/analyzer/__init__.py b/backend/toolbox/modules/analyzer/__init__.py
index 527dab7..8bffdab 100644
--- a/backend/toolbox/modules/analyzer/__init__.py
+++ b/backend/toolbox/modules/analyzer/__init__.py
@@ -10,5 +10,7 @@
# Additional attribution and requirements are provided in the NOTICE file.
from .security_analyzer import SecurityAnalyzer
+from .bandit_analyzer import BanditAnalyzer
+from .mypy_analyzer import MypyAnalyzer
-__all__ = ["SecurityAnalyzer"]
\ No newline at end of file
+__all__ = ["SecurityAnalyzer", "BanditAnalyzer", "MypyAnalyzer"]
\ No newline at end of file
diff --git a/backend/toolbox/modules/analyzer/bandit_analyzer.py b/backend/toolbox/modules/analyzer/bandit_analyzer.py
new file mode 100644
index 0000000..ecf81a8
--- /dev/null
+++ b/backend/toolbox/modules/analyzer/bandit_analyzer.py
@@ -0,0 +1,328 @@
+"""
+Bandit Analyzer Module - Analyzes Python code for security issues using Bandit
+"""
+
+# Copyright (c) 2025 FuzzingLabs
+#
+# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
+# at the root of this repository for details.
+#
+# After the Change Date (four years from publication), this version of the
+# Licensed Work will be made available under the Apache License, Version 2.0.
+# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
+#
+# Additional attribution and requirements are provided in the NOTICE file.
+
+import asyncio
+import json
+import logging
+import time
+from pathlib import Path
+from typing import Dict, Any, List
+
+try:
+ from toolbox.modules.base import BaseModule, ModuleMetadata, ModuleResult, ModuleFinding
+except ImportError:
+ try:
+ from modules.base import BaseModule, ModuleMetadata, ModuleResult, ModuleFinding
+ except ImportError:
+ from src.toolbox.modules.base import BaseModule, ModuleMetadata, ModuleResult, ModuleFinding
+
+logger = logging.getLogger(__name__)
+
+
+class BanditAnalyzer(BaseModule):
+ """
+ Analyzes Python code for security issues using Bandit.
+
+ This module:
+ - Runs Bandit security linter on Python files
+ - Detects common security issues (SQL injection, hardcoded secrets, etc.)
+ - Reports findings with severity levels
+ """
+
+ # Severity mapping from Bandit levels to our standard
+ SEVERITY_MAP = {
+ "LOW": "low",
+ "MEDIUM": "medium",
+ "HIGH": "high"
+ }
+
+ def get_metadata(self) -> ModuleMetadata:
+ """Get module metadata"""
+ return ModuleMetadata(
+ name="bandit_analyzer",
+ version="1.0.0",
+ description="Analyzes Python code for security issues using Bandit",
+ author="FuzzForge Team",
+ category="analyzer",
+ tags=["python", "security", "bandit", "sast"],
+ input_schema={
+ "severity_level": {
+ "type": "string",
+ "enum": ["low", "medium", "high"],
+ "description": "Minimum severity level to report",
+ "default": "low"
+ },
+ "confidence_level": {
+ "type": "string",
+ "enum": ["low", "medium", "high"],
+ "description": "Minimum confidence level to report",
+ "default": "medium"
+ },
+ "exclude_tests": {
+ "type": "boolean",
+ "description": "Exclude test files from analysis",
+ "default": True
+ },
+ "skip_ids": {
+ "type": "array",
+ "items": {"type": "string"},
+ "description": "List of Bandit test IDs to skip",
+ "default": []
+ }
+ },
+ output_schema={
+ "findings": {
+ "type": "array",
+ "description": "List of security issues found by Bandit"
+ }
+ },
+ requires_workspace=True
+ )
+
+ def validate_config(self, config: Dict[str, Any]) -> bool:
+ """Validate module configuration"""
+ severity = config.get("severity_level", "low")
+ if severity not in ["low", "medium", "high"]:
+ raise ValueError("severity_level must be one of: low, medium, high")
+
+ confidence = config.get("confidence_level", "medium")
+ if confidence not in ["low", "medium", "high"]:
+ raise ValueError("confidence_level must be one of: low, medium, high")
+
+ skip_ids = config.get("skip_ids", [])
+ if not isinstance(skip_ids, list):
+ raise ValueError("skip_ids must be a list")
+
+ return True
+
+ async def _run_bandit(
+ self,
+ workspace: Path,
+ severity_level: str,
+ confidence_level: str,
+ exclude_tests: bool,
+ skip_ids: List[str]
+ ) -> Dict[str, Any]:
+ """
+ Run Bandit on the workspace.
+
+ Args:
+ workspace: Path to workspace
+ severity_level: Minimum severity to report
+ confidence_level: Minimum confidence to report
+ exclude_tests: Whether to exclude test files
+ skip_ids: List of test IDs to skip
+
+ Returns:
+ Bandit JSON output as dict
+ """
+ try:
+ # Build bandit command
+ cmd = [
+ "bandit",
+ "-r", str(workspace),
+ "-f", "json",
+                "-l",  # Report LOW severity and above; findings are filtered later against the configured thresholds
+ ]
+
+ # Add exclude patterns for test files
+ if exclude_tests:
+ cmd.extend(["-x", "*/test_*.py,*/tests/*,*_test.py"])
+
+ # Add skip IDs if specified
+ if skip_ids:
+ cmd.extend(["-s", ",".join(skip_ids)])
+
+ logger.info(f"Running Bandit on: {workspace}")
+ process = await asyncio.create_subprocess_exec(
+ *cmd,
+ stdout=asyncio.subprocess.PIPE,
+ stderr=asyncio.subprocess.PIPE
+ )
+
+ stdout, stderr = await process.communicate()
+
+ # Bandit returns non-zero if issues found, which is expected
+ if process.returncode not in [0, 1]:
+ logger.error(f"Bandit failed: {stderr.decode()}")
+ return {"results": []}
+
+ # Parse JSON output
+ result = json.loads(stdout.decode())
+ return result
+
+ except Exception as e:
+ logger.error(f"Error running Bandit: {e}")
+ return {"results": []}
+
+ def _should_include_finding(
+ self,
+ issue: Dict[str, Any],
+ min_severity: str,
+ min_confidence: str
+ ) -> bool:
+ """
+ Determine if a Bandit issue should be included based on severity/confidence.
+
+ Args:
+ issue: Bandit issue dict
+ min_severity: Minimum severity threshold
+ min_confidence: Minimum confidence threshold
+
+ Returns:
+ True if issue should be included
+ """
+ severity_order = ["low", "medium", "high"]
+ issue_severity = issue.get("issue_severity", "LOW").lower()
+ issue_confidence = issue.get("issue_confidence", "LOW").lower()
+
+ severity_meets_threshold = severity_order.index(issue_severity) >= severity_order.index(min_severity)
+ confidence_meets_threshold = severity_order.index(issue_confidence) >= severity_order.index(min_confidence)
+
+ return severity_meets_threshold and confidence_meets_threshold
+
+ def _convert_to_findings(
+ self,
+ bandit_result: Dict[str, Any],
+ workspace: Path,
+ min_severity: str,
+ min_confidence: str
+ ) -> List[ModuleFinding]:
+ """
+ Convert Bandit results to ModuleFindings.
+
+ Args:
+ bandit_result: Bandit JSON output
+ workspace: Workspace path for relative paths
+ min_severity: Minimum severity to include
+ min_confidence: Minimum confidence to include
+
+ Returns:
+ List of ModuleFindings
+ """
+ findings = []
+
+ for issue in bandit_result.get("results", []):
+ # Filter by severity and confidence
+ if not self._should_include_finding(issue, min_severity, min_confidence):
+ continue
+
+ # Extract issue details
+ test_id = issue.get("test_id", "B000")
+ test_name = issue.get("test_name", "unknown")
+ issue_text = issue.get("issue_text", "No description")
+ severity = self.SEVERITY_MAP.get(issue.get("issue_severity", "LOW"), "low")
+
+ # File location
+ filename = issue.get("filename", "")
+ line_number = issue.get("line_number", 0)
+ code = issue.get("code", "")
+
+ # Try to get relative path
+ try:
+ file_path = Path(filename)
+ rel_path = file_path.relative_to(workspace)
+ except (ValueError, TypeError):
+ rel_path = Path(filename).name
+
+ # Create finding
+ finding = self.create_finding(
+ title=f"{test_name} ({test_id})",
+ description=issue_text,
+ severity=severity,
+ category="security-issue",
+ file_path=str(rel_path),
+ line_start=line_number,
+ line_end=line_number,
+ code_snippet=code.strip() if code else None,
+ recommendation=f"Review and fix the security issue identified by Bandit test {test_id}",
+ metadata={
+ "test_id": test_id,
+ "test_name": test_name,
+ "confidence": issue.get("issue_confidence", "LOW").lower(),
+ "cwe": issue.get("issue_cwe", {}).get("id") if issue.get("issue_cwe") else None,
+ "more_info": issue.get("more_info", "")
+ }
+ )
+ findings.append(finding)
+
+ return findings
+
+ async def execute(self, config: Dict[str, Any], workspace: Path) -> ModuleResult:
+ """
+ Execute the Bandit analyzer module.
+
+ Args:
+ config: Module configuration
+ workspace: Path to workspace
+
+ Returns:
+ ModuleResult with security findings
+ """
+ start_time = time.time()
+ metadata = self.get_metadata()
+
+ # Validate inputs
+ self.validate_config(config)
+ self.validate_workspace(workspace)
+
+ # Get configuration
+ severity_level = config.get("severity_level", "low")
+ confidence_level = config.get("confidence_level", "medium")
+ exclude_tests = config.get("exclude_tests", True)
+ skip_ids = config.get("skip_ids", [])
+
+ # Run Bandit
+ logger.info("Starting Bandit analysis...")
+ bandit_result = await self._run_bandit(
+ workspace,
+ severity_level,
+ confidence_level,
+ exclude_tests,
+ skip_ids
+ )
+
+ # Convert to findings
+ findings = self._convert_to_findings(
+ bandit_result,
+ workspace,
+ severity_level,
+ confidence_level
+ )
+
+ # Calculate summary
+ severity_counts = {}
+ for finding in findings:
+ sev = finding.severity
+ severity_counts[sev] = severity_counts.get(sev, 0) + 1
+
+ execution_time = time.time() - start_time
+
+ return ModuleResult(
+ module=metadata.name,
+ version=metadata.version,
+ status="success",
+ execution_time=execution_time,
+ findings=findings,
+ summary={
+ "total_issues": len(findings),
+ "by_severity": severity_counts,
+ "files_analyzed": len(set(f.file_path for f in findings if f.file_path))
+ },
+ metadata={
+ "bandit_version": bandit_result.get("generated_at", "unknown"),
+ "metrics": bandit_result.get("metrics", {})
+ }
+ )
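
For local debugging, the module can be driven directly with a sketch like the one below; the workspace path and thresholds are placeholders, and it assumes BanditAnalyzer can be constructed without arguments:

    import asyncio
    from pathlib import Path

    async def run_bandit_locally() -> None:
        analyzer = BanditAnalyzer()  # assumes a no-argument constructor
        result = await analyzer.execute(
            config={"severity_level": "medium", "confidence_level": "medium"},
            workspace=Path("/tmp/project-under-test"),  # placeholder path
        )
        print(result.summary)

    asyncio.run(run_bandit_locally())
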
diff --git a/backend/toolbox/modules/analyzer/mypy_analyzer.py b/backend/toolbox/modules/analyzer/mypy_analyzer.py
new file mode 100644
index 0000000..9d3e39f
--- /dev/null
+++ b/backend/toolbox/modules/analyzer/mypy_analyzer.py
@@ -0,0 +1,269 @@
+"""
+Mypy Analyzer Module - Analyzes Python code for type safety issues using Mypy
+"""
+
+# Copyright (c) 2025 FuzzingLabs
+#
+# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
+# at the root of this repository for details.
+#
+# After the Change Date (four years from publication), this version of the
+# Licensed Work will be made available under the Apache License, Version 2.0.
+# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
+#
+# Additional attribution and requirements are provided in the NOTICE file.
+
+import asyncio
+import logging
+import re
+import time
+from pathlib import Path
+from typing import Dict, Any, List
+
+try:
+ from toolbox.modules.base import BaseModule, ModuleMetadata, ModuleResult, ModuleFinding
+except ImportError:
+ try:
+ from modules.base import BaseModule, ModuleMetadata, ModuleResult, ModuleFinding
+ except ImportError:
+ from src.toolbox.modules.base import BaseModule, ModuleMetadata, ModuleResult, ModuleFinding
+
+logger = logging.getLogger(__name__)
+
+
+class MypyAnalyzer(BaseModule):
+ """
+ Analyzes Python code for type safety issues using Mypy.
+
+ This module:
+ - Runs Mypy type checker on Python files
+ - Detects type errors and inconsistencies
+ - Reports findings with configurable strictness
+ """
+
+ # Map Mypy error codes to severity
+ ERROR_SEVERITY_MAP = {
+ "error": "medium",
+ "note": "info"
+ }
+
+ def get_metadata(self) -> ModuleMetadata:
+ """Get module metadata"""
+ return ModuleMetadata(
+ name="mypy_analyzer",
+ version="1.0.0",
+ description="Analyzes Python code for type safety issues using Mypy",
+ author="FuzzForge Team",
+ category="analyzer",
+ tags=["python", "type-checking", "mypy", "sast"],
+ input_schema={
+ "strict_mode": {
+ "type": "boolean",
+ "description": "Enable strict type checking",
+ "default": False
+ },
+ "ignore_missing_imports": {
+ "type": "boolean",
+ "description": "Ignore errors about missing imports",
+ "default": True
+ },
+ "follow_imports": {
+ "type": "string",
+ "enum": ["normal", "silent", "skip", "error"],
+ "description": "How to handle imports",
+ "default": "silent"
+ }
+ },
+ output_schema={
+ "findings": {
+ "type": "array",
+ "description": "List of type errors found by Mypy"
+ }
+ },
+ requires_workspace=True
+ )
+
+ def validate_config(self, config: Dict[str, Any]) -> bool:
+ """Validate module configuration"""
+ follow_imports = config.get("follow_imports", "silent")
+ if follow_imports not in ["normal", "silent", "skip", "error"]:
+ raise ValueError("follow_imports must be one of: normal, silent, skip, error")
+
+ return True
+
+ async def _run_mypy(
+ self,
+ workspace: Path,
+ strict_mode: bool,
+ ignore_missing_imports: bool,
+ follow_imports: str
+ ) -> str:
+ """
+ Run Mypy on the workspace.
+
+ Args:
+ workspace: Path to workspace
+ strict_mode: Enable strict checking
+ ignore_missing_imports: Ignore missing import errors
+ follow_imports: How to handle imports
+
+ Returns:
+ Mypy output as string
+ """
+ try:
+ # Build mypy command
+ cmd = [
+ "mypy",
+ str(workspace),
+ "--show-column-numbers",
+ "--no-error-summary",
+ f"--follow-imports={follow_imports}"
+ ]
+
+ if strict_mode:
+ cmd.append("--strict")
+
+ if ignore_missing_imports:
+ cmd.append("--ignore-missing-imports")
+
+ logger.info(f"Running Mypy on: {workspace}")
+ process = await asyncio.create_subprocess_exec(
+ *cmd,
+ stdout=asyncio.subprocess.PIPE,
+ stderr=asyncio.subprocess.PIPE
+ )
+
+ stdout, stderr = await process.communicate()
+
+ # Mypy returns non-zero if errors found, which is expected
+ output = stdout.decode()
+ return output
+
+ except Exception as e:
+ logger.error(f"Error running Mypy: {e}")
+ return ""
+
+ def _parse_mypy_output(self, output: str, workspace: Path) -> List[ModuleFinding]:
+ """
+ Parse Mypy output and convert to findings.
+
+ Mypy output format:
+ file.py:10:5: error: Incompatible return value type [return-value]
+ file.py:15: note: See https://...
+
+ Args:
+ output: Mypy stdout
+ workspace: Workspace path for relative paths
+
+ Returns:
+ List of ModuleFindings
+ """
+ findings = []
+
+ # Regex to parse mypy output lines
+ # Format: filename:line:column: level: message [error-code]
+ pattern = r'^(.+?):(\d+)(?::(\d+))?: (error|note): (.+?)(?:\s+\[([^\]]+)\])?$'
+
+ for line in output.splitlines():
+ match = re.match(pattern, line.strip())
+ if not match:
+ continue
+
+ filename, line_num, column, level, message, error_code = match.groups()
+
+ # Convert to relative path
+ try:
+ file_path = Path(filename)
+ rel_path = file_path.relative_to(workspace)
+ except (ValueError, TypeError):
+ rel_path = Path(filename).name
+
+            # Skip bare notes that carry no error code; notes tagged with a code are kept
+ if level == "note" and not error_code:
+ continue
+
+ # Map severity
+ severity = self.ERROR_SEVERITY_MAP.get(level, "medium")
+
+ # Create finding
+ title = f"Type error: {error_code or 'type-issue'}"
+ description = message
+
+ finding = self.create_finding(
+ title=title,
+ description=description,
+ severity=severity,
+ category="type-error",
+ file_path=str(rel_path),
+ line_start=int(line_num),
+ line_end=int(line_num),
+ recommendation="Review and fix the type inconsistency or add appropriate type annotations",
+ metadata={
+ "error_code": error_code or "unknown",
+ "column": int(column) if column else None,
+ "level": level
+ }
+ )
+ findings.append(finding)
+
+ return findings
+
+ async def execute(self, config: Dict[str, Any], workspace: Path) -> ModuleResult:
+ """
+ Execute the Mypy analyzer module.
+
+ Args:
+ config: Module configuration
+ workspace: Path to workspace
+
+ Returns:
+ ModuleResult with type checking findings
+ """
+ start_time = time.time()
+ metadata = self.get_metadata()
+
+ # Validate inputs
+ self.validate_config(config)
+ self.validate_workspace(workspace)
+
+ # Get configuration
+ strict_mode = config.get("strict_mode", False)
+ ignore_missing_imports = config.get("ignore_missing_imports", True)
+ follow_imports = config.get("follow_imports", "silent")
+
+ # Run Mypy
+ logger.info("Starting Mypy analysis...")
+ mypy_output = await self._run_mypy(
+ workspace,
+ strict_mode,
+ ignore_missing_imports,
+ follow_imports
+ )
+
+ # Parse output to findings
+ findings = self._parse_mypy_output(mypy_output, workspace)
+
+ # Calculate summary
+ error_code_counts = {}
+ for finding in findings:
+ code = finding.metadata.get("error_code", "unknown")
+ error_code_counts[code] = error_code_counts.get(code, 0) + 1
+
+ execution_time = time.time() - start_time
+
+ return ModuleResult(
+ module=metadata.name,
+ version=metadata.version,
+ status="success",
+ execution_time=execution_time,
+ findings=findings,
+ summary={
+ "total_errors": len(findings),
+ "by_error_code": error_code_counts,
+ "files_with_errors": len(set(f.file_path for f in findings if f.file_path))
+ },
+ metadata={
+ "strict_mode": strict_mode,
+ "ignore_missing_imports": ignore_missing_imports
+ }
+ )
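
To make the parsing above concrete, here is the kind of line the regex is written to match, checked stand-alone; the sample Mypy output line is invented:

    import re

    pattern = r'^(.+?):(\d+)(?::(\d+))?: (error|note): (.+?)(?:\s+\[([^\]]+)\])?$'
    line = "app/models.py:42:13: error: Incompatible return value type [return-value]"

    filename, line_num, column, level, message, error_code = re.match(pattern, line).groups()
    assert (filename, line_num, column) == ("app/models.py", "42", "13")
    assert level == "error" and error_code == "return-value"
    assert message == "Incompatible return value type"
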
diff --git a/backend/toolbox/modules/android/__init__.py b/backend/toolbox/modules/android/__init__.py
new file mode 100644
index 0000000..ef2c74c
--- /dev/null
+++ b/backend/toolbox/modules/android/__init__.py
@@ -0,0 +1,31 @@
+"""
+Android Security Analysis Modules
+
+Modules for Android application security testing:
+- JadxDecompiler: APK decompilation using Jadx
+- MobSFScanner: Mobile security analysis using MobSF
+- OpenGrepAndroid: Static analysis using OpenGrep/Semgrep with Android-specific rules
+"""
+
+# Copyright (c) 2025 FuzzingLabs
+#
+# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
+# at the root of this repository for details.
+#
+# After the Change Date (four years from publication), this version of the
+# Licensed Work will be made available under the Apache License, Version 2.0.
+# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
+#
+# Additional attribution and requirements are provided in the NOTICE file.
+
+from .jadx_decompiler import JadxDecompiler
+from .opengrep_android import OpenGrepAndroid
+
+# MobSF is optional (not available on ARM64 platform)
+try:
+ from .mobsf_scanner import MobSFScanner
+ __all__ = ["JadxDecompiler", "MobSFScanner", "OpenGrepAndroid"]
+except ImportError:
+ # MobSF dependencies not available (e.g., ARM64 platform)
+ MobSFScanner = None
+ __all__ = ["JadxDecompiler", "OpenGrepAndroid"]
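
Because MobSFScanner is set to None when its dependencies are missing, callers can feature-detect it instead of importing it unconditionally; a minimal sketch, assuming the package is importable as toolbox.modules.android:

    from toolbox.modules.android import JadxDecompiler, MobSFScanner, OpenGrepAndroid

    def available_android_scanners() -> list:
        scanners = [JadxDecompiler, OpenGrepAndroid]
        if MobSFScanner is not None:  # None on platforms without MobSF deps (e.g. ARM64)
            scanners.append(MobSFScanner)
        return scanners
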
diff --git a/backend/toolbox/modules/android/custom_rules/clipboard-sensitive-data.yaml b/backend/toolbox/modules/android/custom_rules/clipboard-sensitive-data.yaml
new file mode 100644
index 0000000..df7944e
--- /dev/null
+++ b/backend/toolbox/modules/android/custom_rules/clipboard-sensitive-data.yaml
@@ -0,0 +1,15 @@
+rules:
+ - id: clipboard-sensitive-data
+ severity: WARNING
+ languages: [java]
+ message: "Sensitive data may be copied to the clipboard."
+ metadata:
+ authors:
+ - Guerric ELOI (FuzzingLabs)
+ category: security
+ area: clipboard
+ verification-level: [L1]
+ paths:
+ include:
+ - "**/*.java"
+ pattern: "$CLIPBOARD.setPrimaryClip($CLIP)"
diff --git a/backend/toolbox/modules/android/custom_rules/hardcoded-secrets.yaml b/backend/toolbox/modules/android/custom_rules/hardcoded-secrets.yaml
new file mode 100644
index 0000000..c353c96
--- /dev/null
+++ b/backend/toolbox/modules/android/custom_rules/hardcoded-secrets.yaml
@@ -0,0 +1,23 @@
+rules:
+ - id: hardcoded-secrets
+ severity: WARNING
+ languages: [java]
+ message: "Possible hardcoded secret found in variable '$NAME'."
+ metadata:
+ authors:
+ - Guerric ELOI (FuzzingLabs)
+ owasp-mobile: M2
+ category: secrets
+ verification-level: [L1]
+ paths:
+ include:
+ - "**/*.java"
+ patterns:
+ - pattern-either:
+ - pattern: 'String $NAME = "$VAL";'
+ - pattern: 'final String $NAME = "$VAL";'
+ - pattern: 'private String $NAME = "$VAL";'
+ - pattern: 'public static String $NAME = "$VAL";'
+ - pattern: 'static final String $NAME = "$VAL";'
+ - pattern-regex: "$NAME =~ /(?i).*(api|key|token|secret|pass|auth|session|bearer|access|private).*/"
+
diff --git a/backend/toolbox/modules/android/custom_rules/insecure-data-storage.yaml b/backend/toolbox/modules/android/custom_rules/insecure-data-storage.yaml
new file mode 100644
index 0000000..c22546d
--- /dev/null
+++ b/backend/toolbox/modules/android/custom_rules/insecure-data-storage.yaml
@@ -0,0 +1,18 @@
+rules:
+ - id: insecure-data-storage
+ severity: WARNING
+ languages: [java]
+ message: "Potential insecure data storage (external storage)."
+ metadata:
+ authors:
+ - Guerric ELOI (FuzzingLabs)
+ owasp-mobile: M2
+ category: security
+ area: storage
+ verification-level: [L1]
+ paths:
+ include:
+ - "**/*.java"
+ pattern-either:
+ - pattern: "$CTX.openFileOutput($NAME, $MODE)"
+ - pattern: "Environment.getExternalStorageDirectory()"
diff --git a/backend/toolbox/modules/android/custom_rules/insecure-deeplink.yaml b/backend/toolbox/modules/android/custom_rules/insecure-deeplink.yaml
new file mode 100644
index 0000000..4be31ad
--- /dev/null
+++ b/backend/toolbox/modules/android/custom_rules/insecure-deeplink.yaml
@@ -0,0 +1,16 @@
+rules:
+ - id: insecure-deeplink
+ severity: WARNING
+ languages: [xml]
+ message: "Potential insecure deeplink found in intent-filter."
+ metadata:
+ authors:
+ - Guerric ELOI (FuzzingLabs)
+ category: component
+ area: manifest
+ verification-level: [L1]
+ paths:
+ include:
+ - "**/AndroidManifest.xml"
+ pattern: |
+
diff --git a/backend/toolbox/modules/android/custom_rules/insecure-logging.yaml b/backend/toolbox/modules/android/custom_rules/insecure-logging.yaml
new file mode 100644
index 0000000..f36f2a7
--- /dev/null
+++ b/backend/toolbox/modules/android/custom_rules/insecure-logging.yaml
@@ -0,0 +1,21 @@
+rules:
+ - id: insecure-logging
+ severity: WARNING
+ languages: [java]
+ message: "Sensitive data logged via Android Log API."
+ metadata:
+ authors:
+ - Guerric ELOI (FuzzingLabs)
+ owasp-mobile: M2
+ category: logging
+ verification-level: [L1]
+ paths:
+ include:
+ - "**/*.java"
+ patterns:
+ - pattern-either:
+ - pattern: "Log.d($TAG, $MSG)"
+ - pattern: "Log.e($TAG, $MSG)"
+ - pattern: "System.out.println($MSG)"
+ - pattern-regex: "$MSG =~ /(?i).*(password|token|secret|api|auth|session).*/"
+
diff --git a/backend/toolbox/modules/android/custom_rules/intent-redirection.yaml b/backend/toolbox/modules/android/custom_rules/intent-redirection.yaml
new file mode 100644
index 0000000..ade522a
--- /dev/null
+++ b/backend/toolbox/modules/android/custom_rules/intent-redirection.yaml
@@ -0,0 +1,15 @@
+rules:
+ - id: intent-redirection
+ severity: WARNING
+ languages: [java]
+ message: "Potential intent redirection: using getIntent().getExtras() without validation."
+ metadata:
+ authors:
+ - Guerric ELOI (FuzzingLabs)
+ category: intent
+ area: intercomponent
+ verification-level: [L1]
+ paths:
+ include:
+ - "**/*.java"
+ pattern: "$ACT.getIntent().getExtras()"
diff --git a/backend/toolbox/modules/android/custom_rules/sensitive_data_sharedPreferences.yaml b/backend/toolbox/modules/android/custom_rules/sensitive_data_sharedPreferences.yaml
new file mode 100644
index 0000000..4f8f28f
--- /dev/null
+++ b/backend/toolbox/modules/android/custom_rules/sensitive_data_sharedPreferences.yaml
@@ -0,0 +1,18 @@
+rules:
+ - id: sensitive-data-in-shared-preferences
+ severity: WARNING
+ languages: [java]
+ message: "Sensitive data may be stored in SharedPreferences. Please review the key '$KEY'."
+ metadata:
+ authors:
+ - Guerric ELOI (FuzzingLabs)
+ owasp-mobile: M2
+ category: security
+ area: storage
+ verification-level: [L1]
+ paths:
+ include:
+ - "**/*.java"
+ patterns:
+ - pattern: "$EDITOR.putString($KEY, $VAL);"
+ - pattern-regex: "$KEY =~ /(?i).*(username|password|pass|token|auth_token|api_key|secret|sessionid|email).*/"
diff --git a/backend/toolbox/modules/android/custom_rules/sqlite-injection.yaml b/backend/toolbox/modules/android/custom_rules/sqlite-injection.yaml
new file mode 100644
index 0000000..5d07e22
--- /dev/null
+++ b/backend/toolbox/modules/android/custom_rules/sqlite-injection.yaml
@@ -0,0 +1,21 @@
+rules:
+ - id: sqlite-injection
+ severity: ERROR
+ languages: [java]
+ message: "Possible SQL injection: concatenated input in rawQuery or execSQL."
+ metadata:
+ authors:
+ - Guerric ELOI (FuzzingLabs)
+ owasp-mobile: M7
+ category: injection
+ area: database
+ verification-level: [L1]
+ paths:
+ include:
+ - "**/*.java"
+ patterns:
+ - pattern-either:
+ - pattern: "$DB.rawQuery($QUERY, ...)"
+ - pattern: "$DB.execSQL($QUERY)"
+ - pattern-regex: "$QUERY =~ /.*\".*\".*\\+.*/"
+
diff --git a/backend/toolbox/modules/android/custom_rules/vulnerable-activity.yaml b/backend/toolbox/modules/android/custom_rules/vulnerable-activity.yaml
new file mode 100644
index 0000000..0cef4fc
--- /dev/null
+++ b/backend/toolbox/modules/android/custom_rules/vulnerable-activity.yaml
@@ -0,0 +1,16 @@
+rules:
+ - id: vulnerable-activity
+ severity: WARNING
+ languages: [xml]
+ message: "Activity exported without permission."
+ metadata:
+ authors:
+ - Guerric ELOI (FuzzingLabs)
+ category: component
+ area: manifest
+ verification-level: [L1]
+ paths:
+ include:
+ - "**/AndroidManifest.xml"
+    pattern: |
diff --git a/backend/toolbox/modules/android/jadx_decompiler.py b/backend/toolbox/modules/android/jadx_decompiler.py
new file mode 100644
--- /dev/null
+++ b/backend/toolbox/modules/android/jadx_decompiler.py
+    def get_metadata(self) -> ModuleMetadata:
+ return ModuleMetadata(
+ name="jadx_decompiler",
+ version="1.5.0",
+ description="Android APK decompilation using Jadx - converts DEX bytecode to Java source",
+ author="FuzzForge Team",
+ category="android",
+ tags=["android", "jadx", "decompilation", "reverse", "apk"],
+ input_schema={
+ "type": "object",
+ "properties": {
+ "apk_path": {
+ "type": "string",
+ "description": "Path to the APK to decompile (absolute or relative to workspace)",
+ },
+ "output_dir": {
+ "type": "string",
+ "description": "Directory (relative to workspace) where Jadx output should be written",
+ "default": "jadx_output",
+ },
+ "overwrite": {
+ "type": "boolean",
+ "description": "Overwrite existing output directory if present",
+ "default": True,
+ },
+ "threads": {
+ "type": "integer",
+ "description": "Number of Jadx decompilation threads",
+ "default": 4,
+ "minimum": 1,
+ "maximum": 32,
+ },
+ "decompiler_args": {
+ "type": "array",
+ "items": {"type": "string"},
+ "description": "Additional arguments passed directly to Jadx",
+ "default": [],
+ },
+ },
+ "required": ["apk_path"],
+ },
+ output_schema={
+ "type": "object",
+ "properties": {
+ "output_dir": {
+ "type": "string",
+ "description": "Path to decompiled output directory",
+ },
+ "source_dir": {
+ "type": "string",
+ "description": "Path to decompiled Java sources",
+ },
+ "resource_dir": {
+ "type": "string",
+ "description": "Path to extracted resources",
+ },
+ "java_files": {
+ "type": "integer",
+ "description": "Number of Java files decompiled",
+ },
+ },
+ },
+ requires_workspace=True,
+ )
+
+ def validate_config(self, config: Dict[str, Any]) -> bool:
+ """Validate module configuration"""
+ apk_path = config.get("apk_path")
+ if not apk_path:
+ raise ValueError("'apk_path' must be provided for Jadx decompilation")
+
+ threads = config.get("threads", 4)
+ if not isinstance(threads, int) or threads < 1 or threads > 32:
+ raise ValueError("threads must be between 1 and 32")
+
+ return True
+
+ async def execute(self, config: Dict[str, Any], workspace: Path) -> ModuleResult:
+ """
+ Execute Jadx decompilation on an APK file.
+
+ Args:
+ config: Configuration dict with apk_path, output_dir, etc.
+ workspace: Workspace directory path
+
+ Returns:
+ ModuleResult with decompilation summary and metadata
+ """
+ self.start_timer()
+
+ try:
+ self.validate_config(config)
+ self.validate_workspace(workspace)
+
+ workspace = workspace.resolve()
+
+ # Resolve APK path
+ apk_path = Path(config["apk_path"])
+ if not apk_path.is_absolute():
+ apk_path = (workspace / apk_path).resolve()
+
+ if not apk_path.exists():
+ raise ValueError(f"APK not found: {apk_path}")
+
+ if apk_path.is_dir():
+ raise ValueError(f"APK path must be a file, not a directory: {apk_path}")
+
+ logger.info(f"Decompiling APK: {apk_path}")
+
+ # Resolve output directory
+ output_dir = Path(config.get("output_dir", "jadx_output"))
+ if not output_dir.is_absolute():
+ output_dir = (workspace / output_dir).resolve()
+
+ # Handle existing output directory
+ if output_dir.exists():
+ if config.get("overwrite", True):
+ logger.info(f"Removing existing output directory: {output_dir}")
+ shutil.rmtree(output_dir)
+ else:
+ raise ValueError(
+ f"Output directory already exists: {output_dir}. Set overwrite=true to replace it."
+ )
+
+ output_dir.mkdir(parents=True, exist_ok=True)
+
+ # Build Jadx command
+ threads = str(config.get("threads", 4))
+ extra_args = config.get("decompiler_args", []) or []
+
+ cmd = [
+ "jadx",
+ "--threads-count",
+ threads,
+ "--deobf", # Deobfuscate code
+ "--output-dir",
+ str(output_dir),
+ ]
+ cmd.extend(extra_args)
+ cmd.append(str(apk_path))
+
+ logger.info(f"Running Jadx: {' '.join(cmd)}")
+
+ # Execute Jadx
+ process = await asyncio.create_subprocess_exec(
+ *cmd,
+ stdout=asyncio.subprocess.PIPE,
+ stderr=asyncio.subprocess.PIPE,
+ cwd=str(workspace),
+ )
+
+ stdout, stderr = await process.communicate()
+ stdout_str = stdout.decode(errors="ignore") if stdout else ""
+ stderr_str = stderr.decode(errors="ignore") if stderr else ""
+
+ if stdout_str:
+ logger.debug(f"Jadx stdout: {stdout_str[:200]}...")
+ if stderr_str:
+ logger.debug(f"Jadx stderr: {stderr_str[:200]}...")
+
+ if process.returncode != 0:
+ error_output = stderr_str or stdout_str or "No error output"
+ raise RuntimeError(
+ f"Jadx failed with exit code {process.returncode}: {error_output[:500]}"
+ )
+
+ # Verify output structure
+ source_dir = output_dir / "sources"
+ resource_dir = output_dir / "resources"
+
+ if not source_dir.exists():
+ logger.warning(
+ f"Jadx sources directory not found at expected path: {source_dir}"
+ )
+ # Use output_dir as fallback
+ source_dir = output_dir
+
+ # Count decompiled Java files
+ java_files = 0
+ if source_dir.exists():
+ java_files = sum(1 for _ in source_dir.rglob("*.java"))
+ logger.info(f"Decompiled {java_files} Java files")
+
+ # Log sample files for debugging
+ sample_files = []
+ for idx, file_path in enumerate(source_dir.rglob("*.java")):
+ sample_files.append(str(file_path.relative_to(workspace)))
+ if idx >= 4:
+ break
+ if sample_files:
+ logger.debug(f"Sample Java files: {sample_files}")
+
+ # Create summary
+ summary = {
+ "output_dir": str(output_dir),
+ "source_dir": str(source_dir if source_dir.exists() else output_dir),
+ "resource_dir": str(
+ resource_dir if resource_dir.exists() else output_dir
+ ),
+ "java_files": java_files,
+ "apk_name": apk_path.name,
+ "apk_size_bytes": apk_path.stat().st_size,
+ }
+
+ metadata = {
+ "apk_path": str(apk_path),
+ "output_dir": str(output_dir),
+ "source_dir": summary["source_dir"],
+ "resource_dir": summary["resource_dir"],
+ "threads": threads,
+ "decompiler": "jadx",
+ "decompiler_version": "1.5.0",
+ }
+
+ logger.info(
+ f"✓ Jadx decompilation completed: {java_files} Java files generated"
+ )
+
+ return self.create_result(
+ findings=[], # Jadx doesn't generate findings, only decompiles
+ status="success",
+ summary=summary,
+ metadata=metadata,
+ )
+
+ except Exception as exc:
+ logger.error(f"Jadx decompilation failed: {exc}", exc_info=True)
+ return self.create_result(
+ findings=[],
+ status="failed",
+ error=str(exc),
+ metadata={"decompiler": "jadx", "apk_path": config.get("apk_path")},
+ )
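
A hypothetical configuration for this decompiler module, matching the input schema above; the APK name and output directory are placeholders:

    jadx_config = {
        "apk_path": "app-release.apk",   # resolved relative to the workspace if not absolute
        "output_dir": "jadx_output",
        "overwrite": True,
        "threads": 4,
        "decompiler_args": [],           # extra flags are passed through to jadx verbatim
    }
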
diff --git a/backend/toolbox/modules/android/mobsf_scanner.py b/backend/toolbox/modules/android/mobsf_scanner.py
new file mode 100644
index 0000000..3b16e1b
--- /dev/null
+++ b/backend/toolbox/modules/android/mobsf_scanner.py
@@ -0,0 +1,437 @@
+"""
+MobSF Scanner Module
+
+Mobile Security Framework (MobSF) integration for comprehensive Android app security analysis.
+Performs static analysis on APK files including permissions, manifest analysis, code analysis, and behavior checks.
+"""
+
+# Copyright (c) 2025 FuzzingLabs
+#
+# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
+# at the root of this repository for details.
+#
+# After the Change Date (four years from publication), this version of the
+# Licensed Work will be made available under the Apache License, Version 2.0.
+# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
+#
+# Additional attribution and requirements are provided in the NOTICE file.
+
+import logging
+import os
+from collections import Counter
+from pathlib import Path
+from typing import Dict, Any, List
+import aiohttp
+
+try:
+ from toolbox.modules.base import BaseModule, ModuleMetadata, ModuleFinding, ModuleResult
+except ImportError:
+ try:
+ from modules.base import BaseModule, ModuleMetadata, ModuleFinding, ModuleResult
+ except ImportError:
+ from src.toolbox.modules.base import BaseModule, ModuleMetadata, ModuleFinding, ModuleResult
+
+logger = logging.getLogger(__name__)
+
+
+class MobSFScanner(BaseModule):
+ """Mobile Security Framework (MobSF) scanner module for Android applications"""
+
+ SEVERITY_MAP = {
+ "dangerous": "critical",
+ "high": "high",
+ "warning": "medium",
+ "medium": "medium",
+ "low": "low",
+ "info": "low",
+ "secure": "low",
+ }
+
+ def get_metadata(self) -> ModuleMetadata:
+ return ModuleMetadata(
+ name="mobsf_scanner",
+ version="3.9.7",
+ description="Comprehensive Android security analysis using Mobile Security Framework (MobSF)",
+ author="FuzzForge Team",
+ category="android",
+ tags=["mobile", "android", "mobsf", "sast", "scanner", "security"],
+ input_schema={
+ "type": "object",
+ "properties": {
+ "mobsf_url": {
+ "type": "string",
+ "description": "MobSF server URL",
+ "default": "http://localhost:8877",
+ },
+ "file_path": {
+ "type": "string",
+ "description": "Path to the APK file to scan (absolute or relative to workspace)",
+ },
+ "api_key": {
+ "type": "string",
+ "description": "MobSF API key (if not provided, will try MOBSF_API_KEY env var)",
+ "default": None,
+ },
+ "rescan": {
+ "type": "boolean",
+ "description": "Force rescan even if file was previously analyzed",
+ "default": False,
+ },
+ },
+ "required": ["file_path"],
+ },
+ output_schema={
+ "type": "object",
+ "properties": {
+ "findings": {
+ "type": "array",
+ "description": "Security findings from MobSF analysis"
+ },
+ "scan_hash": {"type": "string"},
+ "total_findings": {"type": "integer"},
+ "severity_counts": {"type": "object"},
+ }
+ },
+ requires_workspace=True,
+ )
+
+ def validate_config(self, config: Dict[str, Any]) -> bool:
+ """Validate module configuration"""
+ if "mobsf_url" in config and not isinstance(config["mobsf_url"], str):
+ raise ValueError("mobsf_url must be a string")
+
+ file_path = config.get("file_path")
+ if not file_path:
+ raise ValueError("file_path is required for MobSF scanning")
+
+ return True
+
+ async def execute(self, config: Dict[str, Any], workspace: Path) -> ModuleResult:
+ """
+ Execute MobSF security analysis on an APK file.
+
+ Args:
+ config: Configuration dict with file_path, mobsf_url, api_key
+ workspace: Workspace directory path
+
+ Returns:
+ ModuleResult with security findings from MobSF
+ """
+ self.start_timer()
+
+ try:
+ self.validate_config(config)
+ self.validate_workspace(workspace)
+
+ # Get configuration
+ mobsf_url = config.get("mobsf_url", "http://localhost:8877")
+ file_path_str = config["file_path"]
+ rescan = config.get("rescan", False)
+
+ # Get API key from config or environment
+ api_key = config.get("api_key") or os.environ.get("MOBSF_API_KEY", "")
+ if not api_key:
+ logger.warning("No MobSF API key provided. Some functionality may be limited.")
+
+ # Resolve APK file path
+ file_path = Path(file_path_str)
+ if not file_path.is_absolute():
+ file_path = (workspace / file_path).resolve()
+
+ if not file_path.exists():
+ raise FileNotFoundError(f"APK file not found: {file_path}")
+
+ if not file_path.is_file():
+ raise ValueError(f"APK path must be a file: {file_path}")
+
+ logger.info(f"Starting MobSF scan of APK: {file_path}")
+
+ # Upload and scan APK
+ scan_hash = await self._upload_file(mobsf_url, file_path, api_key)
+ logger.info(f"APK uploaded to MobSF with hash: {scan_hash}")
+
+ # Start scan
+ await self._start_scan(mobsf_url, scan_hash, api_key, rescan=rescan)
+ logger.info(f"MobSF scan completed for hash: {scan_hash}")
+
+ # Get JSON results
+ scan_results = await self._get_json_results(mobsf_url, scan_hash, api_key)
+
+ # Parse results into findings
+ findings = self._parse_scan_results(scan_results, file_path)
+
+ # Create summary
+ summary = self._create_summary(findings, scan_hash)
+
+ logger.info(f"✓ MobSF scan completed: {len(findings)} findings")
+
+ return self.create_result(
+ findings=findings,
+ status="success",
+ summary=summary,
+ metadata={
+ "tool": "mobsf",
+ "tool_version": "3.9.7",
+ "scan_hash": scan_hash,
+ "apk_file": str(file_path),
+ "mobsf_url": mobsf_url,
+ }
+ )
+
+ except Exception as exc:
+ logger.error(f"MobSF scanner failed: {exc}", exc_info=True)
+ return self.create_result(
+ findings=[],
+ status="failed",
+ error=str(exc),
+ metadata={"tool": "mobsf", "file_path": config.get("file_path")}
+ )
+
+ async def _upload_file(self, mobsf_url: str, file_path: Path, api_key: str) -> str:
+ """
+ Upload APK file to MobSF server.
+
+ Returns:
+ Scan hash for the uploaded file
+ """
+ headers = {'X-Mobsf-Api-Key': api_key} if api_key else {}
+
+ # Create multipart form data
+ filename = file_path.name
+
+ async with aiohttp.ClientSession() as session:
+ with open(file_path, 'rb') as f:
+ data = aiohttp.FormData()
+ data.add_field('file',
+ f,
+ filename=filename,
+ content_type='application/vnd.android.package-archive')
+
+ async with session.post(
+ f"{mobsf_url}/api/v1/upload",
+ headers=headers,
+ data=data,
+ timeout=aiohttp.ClientTimeout(total=300)
+ ) as response:
+ if response.status != 200:
+ error_text = await response.text()
+ raise Exception(f"Failed to upload file to MobSF: {error_text}")
+
+ result = await response.json()
+ scan_hash = result.get('hash')
+ if not scan_hash:
+ raise Exception(f"MobSF upload failed: {result}")
+
+ return scan_hash
+
+ async def _start_scan(self, mobsf_url: str, scan_hash: str, api_key: str, rescan: bool = False) -> Dict[str, Any]:
+ """
+ Start MobSF scan for uploaded file.
+
+ Returns:
+ Scan result dictionary
+ """
+ headers = {'X-Mobsf-Api-Key': api_key} if api_key else {}
+ data = {
+ 'hash': scan_hash,
+ 're_scan': '1' if rescan else '0'
+ }
+
+ async with aiohttp.ClientSession() as session:
+ async with session.post(
+ f"{mobsf_url}/api/v1/scan",
+ headers=headers,
+ data=data,
+ timeout=aiohttp.ClientTimeout(total=600) # 10 minutes for scan
+ ) as response:
+ if response.status != 200:
+ error_text = await response.text()
+ raise Exception(f"MobSF scan failed: {error_text}")
+
+ result = await response.json()
+ return result
+
+ async def _get_json_results(self, mobsf_url: str, scan_hash: str, api_key: str) -> Dict[str, Any]:
+ """
+ Retrieve JSON scan results from MobSF.
+
+ Returns:
+ Scan results dictionary
+ """
+ headers = {'X-Mobsf-Api-Key': api_key} if api_key else {}
+ data = {'hash': scan_hash}
+
+ async with aiohttp.ClientSession() as session:
+ async with session.post(
+ f"{mobsf_url}/api/v1/report_json",
+ headers=headers,
+ data=data,
+ timeout=aiohttp.ClientTimeout(total=60)
+ ) as response:
+ if response.status != 200:
+ error_text = await response.text()
+ raise Exception(f"Failed to retrieve MobSF results: {error_text}")
+
+ return await response.json()
+
+ def _parse_scan_results(self, scan_data: Dict[str, Any], apk_path: Path) -> List[ModuleFinding]:
+ """Parse MobSF JSON results into standardized findings"""
+ findings = []
+
+ # Parse permissions
+ if 'permissions' in scan_data:
+ for perm_name, perm_attrs in scan_data['permissions'].items():
+ if isinstance(perm_attrs, dict):
+ severity = self.SEVERITY_MAP.get(
+ perm_attrs.get('status', '').lower(), 'low'
+ )
+
+ finding = self.create_finding(
+ title=f"Android Permission: {perm_name}",
+ description=perm_attrs.get('description', 'No description'),
+ severity=severity,
+ category="android-permission",
+ metadata={
+ 'permission': perm_name,
+ 'status': perm_attrs.get('status'),
+ 'info': perm_attrs.get('info'),
+ 'tool': 'mobsf',
+ }
+ )
+ findings.append(finding)
+
+ # Parse manifest analysis
+ if 'manifest_analysis' in scan_data:
+ manifest_findings = scan_data['manifest_analysis'].get('manifest_findings', [])
+ for item in manifest_findings:
+ if isinstance(item, dict):
+ severity = self.SEVERITY_MAP.get(item.get('severity', '').lower(), 'medium')
+
+ finding = self.create_finding(
+ title=item.get('title') or item.get('name') or "Manifest Issue",
+ description=item.get('description', 'No description'),
+ severity=severity,
+ category="android-manifest",
+ metadata={
+ 'rule': item.get('rule'),
+ 'tool': 'mobsf',
+ }
+ )
+ findings.append(finding)
+
+ # Parse code analysis
+ if 'code_analysis' in scan_data:
+ code_findings = scan_data['code_analysis'].get('findings', {})
+ for finding_name, finding_data in code_findings.items():
+ if isinstance(finding_data, dict):
+ metadata_dict = finding_data.get('metadata', {})
+ severity = self.SEVERITY_MAP.get(
+ metadata_dict.get('severity', '').lower(), 'medium'
+ )
+
+ # MobSF returns 'files' as a dict: {filename: line_numbers}
+ files_dict = finding_data.get('files', {})
+
+ # Create a finding for each affected file
+ if isinstance(files_dict, dict) and files_dict:
+ for file_path, line_numbers in files_dict.items():
+ finding = self.create_finding(
+ title=finding_name,
+ description=metadata_dict.get('description', 'No description'),
+ severity=severity,
+ category="android-code-analysis",
+ file_path=file_path,
+ line_number=line_numbers, # Can be string like "28" or "65,81"
+ metadata={
+ 'cwe': metadata_dict.get('cwe'),
+ 'owasp': metadata_dict.get('owasp'),
+ 'masvs': metadata_dict.get('masvs'),
+ 'cvss': metadata_dict.get('cvss'),
+ 'ref': metadata_dict.get('ref'),
+ 'line_numbers': line_numbers,
+ 'tool': 'mobsf',
+ }
+ )
+ findings.append(finding)
+ else:
+ # Fallback: create one finding without file info
+ finding = self.create_finding(
+ title=finding_name,
+ description=metadata_dict.get('description', 'No description'),
+ severity=severity,
+ category="android-code-analysis",
+ metadata={
+ 'cwe': metadata_dict.get('cwe'),
+ 'owasp': metadata_dict.get('owasp'),
+ 'masvs': metadata_dict.get('masvs'),
+ 'cvss': metadata_dict.get('cvss'),
+ 'ref': metadata_dict.get('ref'),
+ 'tool': 'mobsf',
+ }
+ )
+ findings.append(finding)
+
+ # Parse behavior analysis
+ if 'behaviour' in scan_data:
+ for key, value in scan_data['behaviour'].items():
+ if isinstance(value, dict):
+ metadata_dict = value.get('metadata', {})
+ labels = metadata_dict.get('label', [])
+ label = labels[0] if labels else 'Unknown Behavior'
+
+ severity = self.SEVERITY_MAP.get(
+ metadata_dict.get('severity', '').lower(), 'medium'
+ )
+
+ # MobSF returns 'files' as a dict: {filename: line_numbers}
+ files_dict = value.get('files', {})
+
+ # Create a finding for each affected file
+ if isinstance(files_dict, dict) and files_dict:
+ for file_path, line_numbers in files_dict.items():
+ finding = self.create_finding(
+ title=f"Behavior: {label}",
+ description=metadata_dict.get('description', 'No description'),
+ severity=severity,
+ category="android-behavior",
+ file_path=file_path,
+ line_number=line_numbers,
+ metadata={
+ 'line_numbers': line_numbers,
+ 'behavior_key': key,
+ 'tool': 'mobsf',
+ }
+ )
+ findings.append(finding)
+ else:
+ # Fallback: create one finding without file info
+ finding = self.create_finding(
+ title=f"Behavior: {label}",
+ description=metadata_dict.get('description', 'No description'),
+ severity=severity,
+ category="android-behavior",
+ metadata={
+ 'behavior_key': key,
+ 'tool': 'mobsf',
+ }
+ )
+ findings.append(finding)
+
+ logger.debug(f"Parsed {len(findings)} findings from MobSF results")
+ return findings
+
+ def _create_summary(self, findings: List[ModuleFinding], scan_hash: str) -> Dict[str, Any]:
+ """Create analysis summary"""
+ severity_counter = Counter()
+ category_counter = Counter()
+
+ for finding in findings:
+ severity_counter[finding.severity] += 1
+ category_counter[finding.category] += 1
+
+ return {
+ "scan_hash": scan_hash,
+ "total_findings": len(findings),
+ "severity_counts": dict(severity_counter),
+ "category_counts": dict(category_counter),
+ }
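MobSF encodes line numbers as strings such as "28" or "65,81", and the parser above stores them verbatim in `line_number` and `line_numbers`. A downstream consumer that needs integers could normalize them with a small helper along these lines (a hypothetical sketch, not part of this patch):

```python
# Hypothetical helper for consumers of these findings: split MobSF line-number
# strings ("28", "65,81") into a list of ints. Not part of the module above.
def parse_mobsf_line_numbers(raw) -> list:
    numbers = []
    for part in str(raw).split(","):
        part = part.strip()
        if part.isdigit():
            numbers.append(int(part))
    return numbers


assert parse_mobsf_line_numbers("65,81") == [65, 81]
assert parse_mobsf_line_numbers("28") == [28]
```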
diff --git a/backend/toolbox/modules/android/opengrep_android.py b/backend/toolbox/modules/android/opengrep_android.py
new file mode 100644
index 0000000..01e32c4
--- /dev/null
+++ b/backend/toolbox/modules/android/opengrep_android.py
@@ -0,0 +1,440 @@
+"""
+OpenGrep Android Static Analysis Module
+
+Pattern-based static analysis for Android applications using OpenGrep/Semgrep
+with Android-specific security rules.
+"""
+
+# Copyright (c) 2025 FuzzingLabs
+#
+# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
+# at the root of this repository for details.
+#
+# After the Change Date (four years from publication), this version of the
+# Licensed Work will be made available under the Apache License, Version 2.0.
+# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
+#
+# Additional attribution and requirements are provided in the NOTICE file.
+
+import asyncio
+import json
+import logging
+from pathlib import Path
+from typing import Dict, Any, List
+
+try:
+ from toolbox.modules.base import BaseModule, ModuleMetadata, ModuleFinding, ModuleResult
+except ImportError:
+ try:
+ from modules.base import BaseModule, ModuleMetadata, ModuleFinding, ModuleResult
+ except ImportError:
+ from src.toolbox.modules.base import BaseModule, ModuleMetadata, ModuleFinding, ModuleResult
+
+logger = logging.getLogger(__name__)
+
+
+class OpenGrepAndroid(BaseModule):
+ """OpenGrep static analysis module specialized for Android security"""
+
+ def get_metadata(self) -> ModuleMetadata:
+ """Get module metadata"""
+ return ModuleMetadata(
+ name="opengrep_android",
+ version="1.45.0",
+ description="Android-focused static analysis using OpenGrep/Semgrep with custom security rules for Java/Kotlin",
+ author="FuzzForge Team",
+ category="android",
+ tags=["sast", "android", "opengrep", "semgrep", "java", "kotlin", "security"],
+ input_schema={
+ "type": "object",
+ "properties": {
+ "config": {
+ "type": "string",
+ "enum": ["auto", "p/security-audit", "p/owasp-top-ten", "p/cwe-top-25"],
+ "default": "auto",
+ "description": "Rule configuration to use"
+ },
+ "custom_rules_path": {
+ "type": "string",
+ "description": "Path to a directory containing custom OpenGrep rules (Android-specific rules recommended)",
+ "default": None,
+ },
+ "languages": {
+ "type": "array",
+ "items": {"type": "string"},
+ "description": "Specific languages to analyze (defaults to java, kotlin for Android)",
+ "default": ["java", "kotlin"],
+ },
+ "include_patterns": {
+ "type": "array",
+ "items": {"type": "string"},
+ "description": "File patterns to include",
+ "default": [],
+ },
+ "exclude_patterns": {
+ "type": "array",
+ "items": {"type": "string"},
+ "description": "File patterns to exclude",
+ "default": [],
+ },
+ "max_target_bytes": {
+ "type": "integer",
+ "default": 1000000,
+ "description": "Maximum file size to analyze (bytes)"
+ },
+ "timeout": {
+ "type": "integer",
+ "default": 300,
+ "description": "Analysis timeout in seconds"
+ },
+ "severity": {
+ "type": "array",
+ "items": {"type": "string", "enum": ["ERROR", "WARNING", "INFO"]},
+ "default": ["ERROR", "WARNING", "INFO"],
+                        "description": "Severity levels to include in results"
+ },
+ "confidence": {
+ "type": "array",
+ "items": {"type": "string", "enum": ["HIGH", "MEDIUM", "LOW"]},
+ "default": ["HIGH", "MEDIUM", "LOW"],
+                        "description": "Confidence levels to include in results"
+ }
+ }
+ },
+ output_schema={
+ "type": "object",
+ "properties": {
+ "findings": {
+ "type": "array",
+ "description": "Security findings from OpenGrep analysis"
+ },
+ "total_findings": {"type": "integer"},
+ "severity_counts": {"type": "object"},
+ "files_analyzed": {"type": "integer"},
+ }
+ },
+ requires_workspace=True,
+ )
+
+ def validate_config(self, config: Dict[str, Any]) -> bool:
+ """Validate configuration"""
+ timeout = config.get("timeout", 300)
+ if not isinstance(timeout, int) or timeout < 30 or timeout > 3600:
+ raise ValueError("Timeout must be between 30 and 3600 seconds")
+
+ max_bytes = config.get("max_target_bytes", 1000000)
+ if not isinstance(max_bytes, int) or max_bytes < 1000 or max_bytes > 10000000:
+ raise ValueError("max_target_bytes must be between 1000 and 10000000")
+
+ custom_rules_path = config.get("custom_rules_path")
+ if custom_rules_path:
+ rules_path = Path(custom_rules_path)
+ if not rules_path.exists():
+ logger.warning(f"Custom rules path does not exist: {custom_rules_path}")
+
+ return True
+
+ async def execute(self, config: Dict[str, Any], workspace: Path) -> ModuleResult:
+ """Execute OpenGrep static analysis on Android code"""
+ self.start_timer()
+
+ try:
+ # Validate inputs
+ self.validate_config(config)
+ self.validate_workspace(workspace)
+
+ logger.info(f"Running OpenGrep Android analysis on {workspace}")
+
+ # Build opengrep command
+ cmd = ["opengrep", "scan", "--json"]
+
+ # Add configuration
+ custom_rules_path = config.get("custom_rules_path")
+ use_custom_rules = False
+ if custom_rules_path and Path(custom_rules_path).exists():
+ cmd.extend(["--config", custom_rules_path])
+ use_custom_rules = True
+ logger.info(f"Using custom Android rules from: {custom_rules_path}")
+ else:
+ config_type = config.get("config", "auto")
+ if config_type == "auto":
+ cmd.extend(["--config", "auto"])
+ else:
+ cmd.extend(["--config", config_type])
+
+ # Add timeout
+ cmd.extend(["--timeout", str(config.get("timeout", 300))])
+
+ # Add max target bytes
+ cmd.extend(["--max-target-bytes", str(config.get("max_target_bytes", 1000000))])
+
+ # Add languages if specified (but NOT when using custom rules)
+ languages = config.get("languages", ["java", "kotlin"])
+ if languages and not use_custom_rules:
+ langs = ",".join(languages)
+ cmd.extend(["--lang", langs])
+ logger.debug(f"Analyzing languages: {langs}")
+
+ # Add include patterns
+ include_patterns = config.get("include_patterns", [])
+ for pattern in include_patterns:
+ cmd.extend(["--include", pattern])
+
+ # Add exclude patterns
+ exclude_patterns = config.get("exclude_patterns", [])
+ for pattern in exclude_patterns:
+ cmd.extend(["--exclude", pattern])
+
+ # Add severity filter if single level requested
+ severity_levels = config.get("severity", ["ERROR", "WARNING", "INFO"])
+ if severity_levels and len(severity_levels) == 1:
+ cmd.extend(["--severity", severity_levels[0]])
+
+            # Skip the version check and scan files even if they are matched by .gitignore
+ cmd.append("--disable-version-check")
+ cmd.append("--no-git-ignore")
+
+ # Add target directory
+ cmd.append(str(workspace))
+
+ logger.debug(f"Running command: {' '.join(cmd)}")
+
+ # Run OpenGrep
+ process = await asyncio.create_subprocess_exec(
+ *cmd,
+ stdout=asyncio.subprocess.PIPE,
+ stderr=asyncio.subprocess.PIPE,
+ cwd=workspace
+ )
+
+ stdout, stderr = await process.communicate()
+
+ # Parse results
+ findings = []
+ if process.returncode in [0, 1]: # 0 = no findings, 1 = findings found
+ findings = self._parse_opengrep_output(stdout.decode(), workspace, config)
+ logger.info(f"OpenGrep found {len(findings)} potential security issues")
+ else:
+ error_msg = stderr.decode()
+ logger.error(f"OpenGrep failed: {error_msg}")
+ return self.create_result(
+ findings=[],
+ status="failed",
+ error=f"OpenGrep execution failed (exit code {process.returncode}): {error_msg[:500]}"
+ )
+
+ # Create summary
+ summary = self._create_summary(findings)
+
+ return self.create_result(
+ findings=findings,
+ status="success",
+ summary=summary,
+ metadata={
+ "tool": "opengrep",
+ "tool_version": "1.45.0",
+ "languages": languages,
+ "custom_rules": bool(custom_rules_path),
+ }
+ )
+
+ except Exception as e:
+ logger.error(f"OpenGrep Android module failed: {e}", exc_info=True)
+ return self.create_result(
+ findings=[],
+ status="failed",
+ error=str(e)
+ )
+
+ def _parse_opengrep_output(self, output: str, workspace: Path, config: Dict[str, Any]) -> List[ModuleFinding]:
+ """Parse OpenGrep JSON output into findings"""
+ findings = []
+
+ if not output.strip():
+ return findings
+
+ try:
+ data = json.loads(output)
+ results = data.get("results", [])
+ logger.debug(f"OpenGrep returned {len(results)} raw results")
+
+ # Get filtering criteria
+ allowed_severities = set(config.get("severity", ["ERROR", "WARNING", "INFO"]))
+ allowed_confidences = set(config.get("confidence", ["HIGH", "MEDIUM", "LOW"]))
+
+ for result in results:
+ # Extract basic info
+ rule_id = result.get("check_id", "unknown")
+ message = result.get("message", "")
+ extra = result.get("extra", {})
+ severity = extra.get("severity", "INFO").upper()
+
+ # File location info
+ path_info = result.get("path", "")
+ start_line = result.get("start", {}).get("line", 0)
+ end_line = result.get("end", {}).get("line", 0)
+
+ # Code snippet
+ lines = extra.get("lines", "")
+
+ # Metadata
+ rule_metadata = extra.get("metadata", {})
+ cwe = rule_metadata.get("cwe", [])
+ owasp = rule_metadata.get("owasp", [])
+ confidence = extra.get("confidence", rule_metadata.get("confidence", "MEDIUM")).upper()
+
+ # Apply severity filter
+ if severity not in allowed_severities:
+ continue
+
+ # Apply confidence filter
+ if confidence not in allowed_confidences:
+ continue
+
+ # Make file path relative to workspace
+ if path_info:
+ try:
+ rel_path = Path(path_info).relative_to(workspace)
+ path_info = str(rel_path)
+ except ValueError:
+ pass
+
+ # Map severity to our standard levels
+ finding_severity = self._map_severity(severity)
+
+ # Create finding
+ finding = self.create_finding(
+ title=f"Android Security: {rule_id}",
+ description=message or f"OpenGrep rule {rule_id} triggered",
+ severity=finding_severity,
+ category=self._get_category(rule_id, extra),
+ file_path=path_info if path_info else None,
+ line_start=start_line if start_line > 0 else None,
+ line_end=end_line if end_line > 0 and end_line != start_line else None,
+ code_snippet=lines.strip() if lines else None,
+ recommendation=self._get_recommendation(rule_id, extra),
+ metadata={
+ "rule_id": rule_id,
+ "opengrep_severity": severity,
+ "confidence": confidence,
+ "cwe": cwe,
+ "owasp": owasp,
+ "fix": extra.get("fix", ""),
+ "impact": extra.get("impact", ""),
+ "likelihood": extra.get("likelihood", ""),
+ "references": extra.get("references", []),
+ "tool": "opengrep",
+ }
+ )
+
+ findings.append(finding)
+
+ except json.JSONDecodeError as e:
+ logger.warning(f"Failed to parse OpenGrep output: {e}. Output snippet: {output[:200]}...")
+ except Exception as e:
+ logger.warning(f"Error processing OpenGrep results: {e}", exc_info=True)
+
+ return findings
+
+ def _map_severity(self, opengrep_severity: str) -> str:
+ """Map OpenGrep severity to our standard severity levels"""
+ severity_map = {
+ "ERROR": "high",
+ "WARNING": "medium",
+ "INFO": "low"
+ }
+ return severity_map.get(opengrep_severity.upper(), "medium")
+
+ def _get_category(self, rule_id: str, extra: Dict[str, Any]) -> str:
+ """Determine finding category based on rule and metadata"""
+ rule_metadata = extra.get("metadata", {})
+ cwe_list = rule_metadata.get("cwe", [])
+ owasp_list = rule_metadata.get("owasp", [])
+
+ rule_lower = rule_id.lower()
+
+ # Android-specific categories
+ if "injection" in rule_lower or "sql" in rule_lower:
+ return "injection"
+ elif "intent" in rule_lower:
+ return "android-intent"
+ elif "webview" in rule_lower:
+ return "android-webview"
+ elif "deeplink" in rule_lower:
+ return "android-deeplink"
+ elif "storage" in rule_lower or "sharedpreferences" in rule_lower:
+ return "android-storage"
+ elif "logging" in rule_lower or "log" in rule_lower:
+ return "android-logging"
+ elif "clipboard" in rule_lower:
+ return "android-clipboard"
+ elif "activity" in rule_lower or "service" in rule_lower or "provider" in rule_lower:
+ return "android-component"
+ elif "crypto" in rule_lower or "encrypt" in rule_lower:
+ return "cryptography"
+ elif "hardcode" in rule_lower or "secret" in rule_lower:
+ return "secrets"
+ elif "auth" in rule_lower:
+ return "authentication"
+ elif cwe_list:
+ return f"cwe-{cwe_list[0]}"
+ elif owasp_list:
+ return f"owasp-{owasp_list[0].replace(' ', '-').lower()}"
+ else:
+ return "android-security"
+
+ def _get_recommendation(self, rule_id: str, extra: Dict[str, Any]) -> str:
+ """Generate recommendation based on rule and metadata"""
+ fix_suggestion = extra.get("fix", "")
+ if fix_suggestion:
+ return fix_suggestion
+
+ rule_lower = rule_id.lower()
+
+ # Android-specific recommendations
+ if "injection" in rule_lower or "sql" in rule_lower:
+ return "Use parameterized queries or Room database with type-safe queries to prevent SQL injection."
+ elif "intent" in rule_lower:
+ return "Validate all incoming Intent data and use explicit Intents when possible to prevent Intent manipulation attacks."
+ elif "webview" in rule_lower and "javascript" in rule_lower:
+ return "Disable JavaScript in WebView if not needed, or implement proper JavaScript interfaces with @JavascriptInterface annotation."
+ elif "deeplink" in rule_lower:
+ return "Validate all deeplink URLs and sanitize user input to prevent deeplink hijacking attacks."
+ elif "storage" in rule_lower or "sharedpreferences" in rule_lower:
+ return "Encrypt sensitive data before storing in SharedPreferences or use EncryptedSharedPreferences for Android API 23+."
+ elif "logging" in rule_lower:
+ return "Remove sensitive data from logs in production builds. Use ProGuard/R8 to strip logging statements."
+ elif "clipboard" in rule_lower:
+ return "Avoid placing sensitive data on the clipboard. If necessary, clear clipboard data when no longer needed."
+ elif "crypto" in rule_lower:
+ return "Use modern cryptographic algorithms (AES-GCM, RSA-OAEP) and Android Keystore for key management."
+ elif "hardcode" in rule_lower or "secret" in rule_lower:
+ return "Remove hardcoded secrets. Use Android Keystore, environment variables, or secure configuration management."
+ else:
+ return "Review this Android security issue and apply appropriate fixes based on Android security best practices."
+
+ def _create_summary(self, findings: List[ModuleFinding]) -> Dict[str, Any]:
+ """Create analysis summary"""
+ severity_counts = {"critical": 0, "high": 0, "medium": 0, "low": 0}
+ category_counts = {}
+ rule_counts = {}
+
+ for finding in findings:
+ # Count by severity
+ severity_counts[finding.severity] += 1
+
+ # Count by category
+ category = finding.category
+ category_counts[category] = category_counts.get(category, 0) + 1
+
+ # Count by rule
+ rule_id = finding.metadata.get("rule_id", "unknown")
+ rule_counts[rule_id] = rule_counts.get(rule_id, 0) + 1
+
+ return {
+ "total_findings": len(findings),
+ "severity_counts": severity_counts,
+ "category_counts": category_counts,
+ "top_rules": dict(sorted(rule_counts.items(), key=lambda x: x[1], reverse=True)[:10]),
+            # Counts distinct files that produced findings, not every file scanned
+            "files_analyzed": len(set(f.file_path for f in findings if f.file_path))
+ }
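For reference, with the default configuration and the custom Android rules directory present, `execute()` above assembles an opengrep invocation roughly like the following; note that `--lang` is intentionally omitted when custom rules are used. The workspace path is illustrative, not taken from this patch:

```python
# Illustrative reconstruction of the command built by OpenGrepAndroid.execute()
# when custom rules exist; the target directory is an example path.
cmd = [
    "opengrep", "scan", "--json",
    "--config", "/app/toolbox/modules/android/custom_rules",
    "--timeout", "300",
    "--max-target-bytes", "1000000",
    "--disable-version-check",
    "--no-git-ignore",
    "/workspace/jadx_output/sources",  # target directory (example)
]
print(" ".join(cmd))
```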
diff --git a/backend/toolbox/modules/scanner/__init__.py b/backend/toolbox/modules/scanner/__init__.py
index ae02119..3efefe6 100644
--- a/backend/toolbox/modules/scanner/__init__.py
+++ b/backend/toolbox/modules/scanner/__init__.py
@@ -10,5 +10,6 @@
# Additional attribution and requirements are provided in the NOTICE file.
from .file_scanner import FileScanner
+from .dependency_scanner import DependencyScanner
-__all__ = ["FileScanner"]
\ No newline at end of file
+__all__ = ["FileScanner", "DependencyScanner"]
\ No newline at end of file
diff --git a/backend/toolbox/modules/scanner/dependency_scanner.py b/backend/toolbox/modules/scanner/dependency_scanner.py
new file mode 100644
index 0000000..4c7791c
--- /dev/null
+++ b/backend/toolbox/modules/scanner/dependency_scanner.py
@@ -0,0 +1,302 @@
+"""
+Dependency Scanner Module - Scans Python dependencies for known vulnerabilities using pip-audit
+"""
+
+# Copyright (c) 2025 FuzzingLabs
+#
+# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
+# at the root of this repository for details.
+#
+# After the Change Date (four years from publication), this version of the
+# Licensed Work will be made available under the Apache License, Version 2.0.
+# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
+#
+# Additional attribution and requirements are provided in the NOTICE file.
+
+import asyncio
+import json
+import logging
+import time
+from pathlib import Path
+from typing import Dict, Any, List
+
+try:
+ from toolbox.modules.base import BaseModule, ModuleMetadata, ModuleResult, ModuleFinding
+except ImportError:
+ try:
+ from modules.base import BaseModule, ModuleMetadata, ModuleResult, ModuleFinding
+ except ImportError:
+ from src.toolbox.modules.base import BaseModule, ModuleMetadata, ModuleResult, ModuleFinding
+
+logger = logging.getLogger(__name__)
+
+
+class DependencyScanner(BaseModule):
+ """
+ Scans Python dependencies for known vulnerabilities using pip-audit.
+
+ This module:
+ - Discovers dependency files (requirements.txt, pyproject.toml, setup.py, Pipfile)
+ - Runs pip-audit to check for vulnerable dependencies
+ - Reports CVEs with severity and affected versions
+ """
+
+ def get_metadata(self) -> ModuleMetadata:
+ """Get module metadata"""
+ return ModuleMetadata(
+ name="dependency_scanner",
+ version="1.0.0",
+ description="Scans Python dependencies for known vulnerabilities",
+ author="FuzzForge Team",
+ category="scanner",
+ tags=["dependencies", "cve", "vulnerabilities", "pip-audit"],
+ input_schema={
+ "dependency_files": {
+ "type": "array",
+ "items": {"type": "string"},
+ "description": "List of dependency files to scan (auto-discovered if empty)",
+ "default": []
+ },
+ "ignore_vulns": {
+ "type": "array",
+ "items": {"type": "string"},
+ "description": "List of vulnerability IDs to ignore",
+ "default": []
+ }
+ },
+ output_schema={
+ "findings": {
+ "type": "array",
+ "description": "List of vulnerable dependencies with CVE information"
+ }
+ },
+ requires_workspace=True
+ )
+
+ def validate_config(self, config: Dict[str, Any]) -> bool:
+ """Validate module configuration"""
+ dep_files = config.get("dependency_files", [])
+ if not isinstance(dep_files, list):
+ raise ValueError("dependency_files must be a list")
+
+ ignore_vulns = config.get("ignore_vulns", [])
+ if not isinstance(ignore_vulns, list):
+ raise ValueError("ignore_vulns must be a list")
+
+ return True
+
+ def _discover_dependency_files(self, workspace: Path) -> List[Path]:
+ """
+ Discover Python dependency files in workspace.
+
+ Returns:
+ List of discovered dependency file paths
+ """
+        dependency_patterns = [
+            "*requirements*.txt",  # also matches plain requirements.txt
+            "pyproject.toml",
+            "setup.py",
+            "Pipfile",
+            "poetry.lock"
+        ]
+
+ found_files = []
+ for pattern in dependency_patterns:
+ found_files.extend(workspace.rglob(pattern))
+
+ # Deduplicate and return
+ unique_files = list(set(found_files))
+ logger.info(f"Discovered {len(unique_files)} dependency files")
+ return unique_files
+
+ async def _run_pip_audit(self, file_path: Path) -> Dict[str, Any]:
+ """
+ Run pip-audit on a specific dependency file.
+
+ Args:
+ file_path: Path to dependency file
+
+ Returns:
+ pip-audit JSON output as dict
+ """
+ try:
+            # Run pip-audit with JSON output.
+            # Note: --requirement expects pip requirements syntax; files in other
+            # formats (pyproject.toml, Pipfile, poetry.lock) may fail to parse and
+            # then surface as empty results via the error handling below.
+ cmd = [
+ "pip-audit",
+ "--requirement", str(file_path),
+ "--format", "json",
+ "--progress-spinner", "off"
+ ]
+
+ logger.info(f"Running pip-audit on: {file_path.name}")
+ process = await asyncio.create_subprocess_exec(
+ *cmd,
+ stdout=asyncio.subprocess.PIPE,
+ stderr=asyncio.subprocess.PIPE
+ )
+
+ stdout, stderr = await process.communicate()
+
+ # pip-audit returns 0 if no vulns, 1 if vulns found
+ if process.returncode not in [0, 1]:
+ logger.error(f"pip-audit failed: {stderr.decode()}")
+ return {"dependencies": []}
+
+ # Parse JSON output
+ result = json.loads(stdout.decode())
+ return result
+
+ except Exception as e:
+ logger.error(f"Error running pip-audit on {file_path}: {e}")
+ return {"dependencies": []}
+
+ def _convert_to_findings(
+ self,
+ audit_result: Dict[str, Any],
+ file_path: Path,
+ workspace: Path,
+ ignore_vulns: List[str]
+ ) -> List[ModuleFinding]:
+ """
+ Convert pip-audit results to ModuleFindings.
+
+ Args:
+ audit_result: pip-audit JSON output
+ file_path: Path to scanned file
+ workspace: Workspace path for relative path calculation
+ ignore_vulns: List of vulnerability IDs to ignore
+
+ Returns:
+ List of ModuleFindings
+ """
+ findings = []
+
+ # pip-audit format: {"dependencies": [{package, version, vulns: []}]}
+ for dep in audit_result.get("dependencies", []):
+ package_name = dep.get("name", "unknown")
+ package_version = dep.get("version", "unknown")
+ vulnerabilities = dep.get("vulns", [])
+
+ for vuln in vulnerabilities:
+ vuln_id = vuln.get("id", "UNKNOWN")
+
+ # Skip if in ignore list
+ if vuln_id in ignore_vulns:
+ logger.debug(f"Ignoring vulnerability: {vuln_id}")
+ continue
+
+ description = vuln.get("description", "No description available")
+ fix_versions = vuln.get("fix_versions", [])
+
+                # pip-audit output does not include a severity rating, so default to medium
+                severity = "medium"
+
+ # Try to get relative path
+ try:
+ rel_path = file_path.relative_to(workspace)
+ except ValueError:
+ rel_path = file_path
+
+ recommendation = f"Upgrade {package_name} to a fixed version: {', '.join(fix_versions)}" if fix_versions else f"Check for updates to {package_name}"
+
+ finding = self.create_finding(
+ title=f"Vulnerable dependency: {package_name} ({vuln_id})",
+ description=f"{description}\n\nAffected package: {package_name} {package_version}",
+ severity=severity,
+ category="vulnerable-dependency",
+ file_path=str(rel_path),
+ recommendation=recommendation,
+ metadata={
+ "cve_id": vuln_id,
+ "package": package_name,
+ "installed_version": package_version,
+ "fix_versions": fix_versions,
+ "aliases": vuln.get("aliases", []),
+ "link": vuln.get("link", "")
+ }
+ )
+ findings.append(finding)
+
+ return findings
+
+ async def execute(self, config: Dict[str, Any], workspace: Path) -> ModuleResult:
+ """
+ Execute the dependency scanning module.
+
+ Args:
+ config: Module configuration
+ workspace: Path to workspace
+
+ Returns:
+ ModuleResult with vulnerability findings
+ """
+ start_time = time.time()
+ metadata = self.get_metadata()
+
+ # Validate inputs
+ self.validate_config(config)
+ self.validate_workspace(workspace)
+
+ # Get configuration
+ specified_files = config.get("dependency_files", [])
+ ignore_vulns = config.get("ignore_vulns", [])
+
+ # Discover or use specified dependency files
+ if specified_files:
+ dep_files = [workspace / f for f in specified_files]
+ else:
+ dep_files = self._discover_dependency_files(workspace)
+
+ if not dep_files:
+ logger.warning("No dependency files found in workspace")
+ return ModuleResult(
+ module=metadata.name,
+ version=metadata.version,
+ status="success",
+ execution_time=time.time() - start_time,
+ findings=[],
+ summary={
+ "total_files": 0,
+ "total_vulnerabilities": 0,
+ "vulnerable_packages": 0
+ }
+ )
+
+ # Scan each dependency file
+ all_findings = []
+ files_scanned = 0
+
+ for dep_file in dep_files:
+ if not dep_file.exists():
+ logger.warning(f"Dependency file not found: {dep_file}")
+ continue
+
+ logger.info(f"Scanning dependencies in: {dep_file.name}")
+ audit_result = await self._run_pip_audit(dep_file)
+ findings = self._convert_to_findings(audit_result, dep_file, workspace, ignore_vulns)
+
+ all_findings.extend(findings)
+ files_scanned += 1
+
+ # Calculate summary
+ unique_packages = len(set(f.metadata.get("package") for f in all_findings))
+
+ execution_time = time.time() - start_time
+
+ return ModuleResult(
+ module=metadata.name,
+ version=metadata.version,
+ status="success",
+ execution_time=execution_time,
+ findings=all_findings,
+ summary={
+ "total_files": files_scanned,
+ "total_vulnerabilities": len(all_findings),
+ "vulnerable_packages": unique_packages
+ },
+ metadata={
+ "scanned_files": [str(f.name) for f in dep_files if f.exists()]
+ }
+ )
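The parsing in `_convert_to_findings()` assumes the pip-audit JSON layout sketched below. Field names follow the code above; the concrete values are placeholders for illustration only:

```python
# Placeholder example of the pip-audit JSON structure consumed by
# DependencyScanner._convert_to_findings(); all values are illustrative.
audit_result = {
    "dependencies": [
        {
            "name": "example-package",
            "version": "1.0.0",
            "vulns": [
                {
                    "id": "PYSEC-0000-0000",        # placeholder advisory ID
                    "description": "Example vulnerability description",
                    "fix_versions": ["1.0.1"],
                    "aliases": ["CVE-0000-00000"],  # placeholder alias
                    "link": "https://example.invalid/advisory",
                }
            ],
        }
    ]
}
```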
diff --git a/backend/toolbox/modules/secret_detection/llm_secret_detector.py b/backend/toolbox/modules/secret_detection/llm_secret_detector.py
index 3ba96f8..1adf341 100644
--- a/backend/toolbox/modules/secret_detection/llm_secret_detector.py
+++ b/backend/toolbox/modules/secret_detection/llm_secret_detector.py
@@ -107,7 +107,8 @@ class LLMSecretDetectorModule(BaseModule):
)
agent_url = config.get("agent_url")
- if not agent_url or not isinstance(agent_url, str):
+ # agent_url is optional - will have default from metadata.yaml
+ if agent_url is not None and not isinstance(agent_url, str):
raise ValueError("agent_url must be a valid URL string")
max_files = config.get("max_files", 20)
@@ -131,14 +132,14 @@ class LLMSecretDetectorModule(BaseModule):
logger.info(f"Starting LLM secret detection in workspace: {workspace}")
- # Extract configuration
- agent_url = config.get("agent_url", "http://fuzzforge-task-agent:8000/a2a/litellm_agent")
- llm_model = config.get("llm_model", "gpt-4o-mini")
- llm_provider = config.get("llm_provider", "openai")
- file_patterns = config.get("file_patterns", ["*.py", "*.js", "*.ts", "*.java", "*.go", "*.env", "*.yaml", "*.yml", "*.json", "*.xml", "*.ini", "*.sql", "*.properties", "*.sh", "*.bat", "*.config", "*.conf", "*.toml", "*id_rsa*", "*.txt"])
- max_files = config.get("max_files", 20)
- max_file_size = config.get("max_file_size", 30000)
- timeout = config.get("timeout", 30) # Reduced from 45s
+ # Extract configuration (defaults come from metadata.yaml via API)
+ agent_url = config["agent_url"]
+ llm_model = config["llm_model"]
+ llm_provider = config["llm_provider"]
+ file_patterns = config["file_patterns"]
+ max_files = config["max_files"]
+ max_file_size = config["max_file_size"]
+ timeout = config["timeout"]
# Find files to analyze
# Skip files that are unlikely to contain secrets
diff --git a/backend/toolbox/workflows/android_static_analysis/__init__.py b/backend/toolbox/workflows/android_static_analysis/__init__.py
new file mode 100644
index 0000000..aec13c5
--- /dev/null
+++ b/backend/toolbox/workflows/android_static_analysis/__init__.py
@@ -0,0 +1,35 @@
+"""
+Android Static Analysis Workflow
+
+Comprehensive Android application security testing combining:
+- Jadx APK decompilation
+- OpenGrep/Semgrep static analysis with Android-specific rules
+- MobSF mobile security framework analysis
+"""
+
+# Copyright (c) 2025 FuzzingLabs
+#
+# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
+# at the root of this repository for details.
+#
+# After the Change Date (four years from publication), this version of the
+# Licensed Work will be made available under the Apache License, Version 2.0.
+# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
+#
+# Additional attribution and requirements are provided in the NOTICE file.
+
+from .workflow import AndroidStaticAnalysisWorkflow
+from .activities import (
+ decompile_with_jadx_activity,
+ scan_with_opengrep_activity,
+ scan_with_mobsf_activity,
+ generate_android_sarif_activity,
+)
+
+__all__ = [
+ "AndroidStaticAnalysisWorkflow",
+ "decompile_with_jadx_activity",
+ "scan_with_opengrep_activity",
+ "scan_with_mobsf_activity",
+ "generate_android_sarif_activity",
+]
diff --git a/backend/toolbox/workflows/android_static_analysis/activities.py b/backend/toolbox/workflows/android_static_analysis/activities.py
new file mode 100644
index 0000000..5d37729
--- /dev/null
+++ b/backend/toolbox/workflows/android_static_analysis/activities.py
@@ -0,0 +1,213 @@
+"""
+Android Static Analysis Workflow Activities
+
+Activities for the Android security testing workflow:
+- decompile_with_jadx_activity: Decompile APK using Jadx
+- scan_with_opengrep_activity: Analyze code with OpenGrep/Semgrep
+- scan_with_mobsf_activity: Scan APK with MobSF
+- generate_android_sarif_activity: Generate combined SARIF report
+"""
+
+# Copyright (c) 2025 FuzzingLabs
+#
+# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
+# at the root of this repository for details.
+#
+# After the Change Date (four years from publication), this version of the
+# Licensed Work will be made available under the Apache License, Version 2.0.
+# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
+#
+# Additional attribution and requirements are provided in the NOTICE file.
+
+import logging
+import sys
+from pathlib import Path
+
+from temporalio import activity
+
+# Configure logging
+logger = logging.getLogger(__name__)
+
+# Add toolbox to path for module imports
+sys.path.insert(0, '/app/toolbox')
+
+
+@activity.defn(name="decompile_with_jadx")
+async def decompile_with_jadx_activity(workspace_path: str, config: dict) -> dict:
+ """
+ Decompile Android APK to Java source code using Jadx.
+
+ Args:
+ workspace_path: Path to the workspace directory
+ config: JadxDecompiler configuration
+
+ Returns:
+ Decompilation results dictionary
+ """
+ logger.info(f"Activity: decompile_with_jadx (workspace={workspace_path})")
+
+ try:
+ from modules.android import JadxDecompiler
+
+ workspace = Path(workspace_path)
+ if not workspace.exists():
+ raise FileNotFoundError(f"Workspace not found: {workspace_path}")
+
+ decompiler = JadxDecompiler()
+ result = await decompiler.execute(config, workspace)
+
+ logger.info(
+ f"✓ Jadx decompilation completed: "
+ f"{result.summary.get('java_files', 0)} Java files generated"
+ )
+ return result.dict()
+
+ except Exception as e:
+ logger.error(f"Jadx decompilation failed: {e}", exc_info=True)
+ raise
+
+
+@activity.defn(name="scan_with_opengrep")
+async def scan_with_opengrep_activity(workspace_path: str, config: dict) -> dict:
+ """
+ Analyze Android code for security issues using OpenGrep/Semgrep.
+
+ Args:
+ workspace_path: Path to the workspace directory
+ config: OpenGrepAndroid configuration
+
+ Returns:
+ Analysis results dictionary
+ """
+ logger.info(f"Activity: scan_with_opengrep (workspace={workspace_path})")
+
+ try:
+ from modules.android import OpenGrepAndroid
+
+ workspace = Path(workspace_path)
+ if not workspace.exists():
+ raise FileNotFoundError(f"Workspace not found: {workspace_path}")
+
+ analyzer = OpenGrepAndroid()
+ result = await analyzer.execute(config, workspace)
+
+ logger.info(
+ f"✓ OpenGrep analysis completed: "
+ f"{result.summary.get('total_findings', 0)} security issues found"
+ )
+ return result.dict()
+
+ except Exception as e:
+ logger.error(f"OpenGrep analysis failed: {e}", exc_info=True)
+ raise
+
+
+@activity.defn(name="scan_with_mobsf")
+async def scan_with_mobsf_activity(workspace_path: str, config: dict) -> dict:
+ """
+ Analyze Android APK for security issues using MobSF.
+
+ Args:
+ workspace_path: Path to the workspace directory
+ config: MobSFScanner configuration
+
+ Returns:
+ Scan results dictionary (or skipped status if MobSF unavailable)
+ """
+ logger.info(f"Activity: scan_with_mobsf (workspace={workspace_path})")
+
+ # Check if MobSF is installed (graceful degradation for ARM64 platform)
+ mobsf_path = Path("/app/mobsf")
+ if not mobsf_path.exists():
+ logger.warning("MobSF not installed on this platform (ARM64/Rosetta limitation)")
+ return {
+ "status": "skipped",
+ "findings": [],
+ "summary": {
+ "total_findings": 0,
+ "skip_reason": "MobSF unavailable on ARM64 platform (Rosetta 2 incompatibility)"
+ }
+ }
+
+ try:
+ from modules.android import MobSFScanner
+
+ workspace = Path(workspace_path)
+ if not workspace.exists():
+ raise FileNotFoundError(f"Workspace not found: {workspace_path}")
+
+ scanner = MobSFScanner()
+ result = await scanner.execute(config, workspace)
+
+ logger.info(
+ f"✓ MobSF scan completed: "
+ f"{result.summary.get('total_findings', 0)} findings"
+ )
+ return result.dict()
+
+ except Exception as e:
+ logger.error(f"MobSF scan failed: {e}", exc_info=True)
+ raise
+
+
+@activity.defn(name="generate_android_sarif")
+async def generate_android_sarif_activity(
+ jadx_result: dict,
+ opengrep_result: dict,
+ mobsf_result: dict,
+ config: dict,
+ workspace_path: str
+) -> dict:
+ """
+ Generate combined SARIF report from all Android security findings.
+
+ Args:
+ jadx_result: Jadx decompilation results
+ opengrep_result: OpenGrep analysis results
+ mobsf_result: MobSF scan results (may be None if disabled)
+ config: Reporter configuration
+ workspace_path: Workspace path
+
+ Returns:
+ SARIF report dictionary
+ """
+ logger.info("Activity: generate_android_sarif")
+
+ try:
+ from modules.reporter import SARIFReporter
+
+ workspace = Path(workspace_path)
+
+ # Collect all findings
+ all_findings = []
+ all_findings.extend(opengrep_result.get("findings", []))
+
+ if mobsf_result:
+ all_findings.extend(mobsf_result.get("findings", []))
+
+ # Prepare reporter config
+ reporter_config = {
+ **(config or {}),
+ "findings": all_findings,
+ "tool_name": "FuzzForge Android Static Analysis",
+ "tool_version": "1.0.0",
+ "metadata": {
+ "jadx_version": "1.5.0",
+ "opengrep_version": "1.45.0",
+ "mobsf_version": "3.9.7",
+ "java_files_decompiled": jadx_result.get("summary", {}).get("java_files", 0),
+ }
+ }
+
+ reporter = SARIFReporter()
+ result = await reporter.execute(reporter_config, workspace)
+
+ sarif_report = result.dict().get("sarif", {})
+
+ logger.info(f"✓ SARIF report generated with {len(all_findings)} findings")
+
+ return sarif_report
+
+ except Exception as e:
+ logger.error(f"SARIF report generation failed: {e}", exc_info=True)
+ raise
diff --git a/backend/toolbox/workflows/android_static_analysis/metadata.yaml b/backend/toolbox/workflows/android_static_analysis/metadata.yaml
new file mode 100644
index 0000000..cd77e48
--- /dev/null
+++ b/backend/toolbox/workflows/android_static_analysis/metadata.yaml
@@ -0,0 +1,172 @@
+name: android_static_analysis
+version: "1.0.0"
+vertical: android
+description: "Comprehensive Android application security testing using Jadx decompilation, OpenGrep static analysis, and MobSF mobile security framework"
+author: "FuzzForge Team"
+tags:
+ - "android"
+ - "mobile"
+ - "static-analysis"
+ - "security"
+ - "opengrep"
+ - "semgrep"
+ - "mobsf"
+ - "jadx"
+ - "apk"
+ - "sarif"
+
+# Workspace isolation mode
+# Using "shared" mode: the APK is only read, and Jadx writes its decompilation output inside the run workspace
+workspace_isolation: "shared"
+
+parameters:
+ type: object
+ properties:
+ apk_path:
+ type: string
+ description: "Path to the APK file to analyze (relative to uploaded target or absolute within workspace)"
+ default: ""
+
+ decompile_apk:
+ type: boolean
+ description: "Whether to decompile APK with Jadx before OpenGrep analysis"
+ default: true
+
+ jadx_config:
+ type: object
+ description: "Jadx decompiler configuration"
+ properties:
+ output_dir:
+ type: string
+ description: "Output directory for decompiled sources"
+ default: "jadx_output"
+ overwrite:
+ type: boolean
+ description: "Overwrite existing decompilation output"
+ default: true
+ threads:
+ type: integer
+ description: "Number of decompilation threads"
+ default: 4
+ minimum: 1
+ maximum: 32
+ decompiler_args:
+ type: array
+ items:
+ type: string
+ description: "Additional Jadx arguments"
+ default: []
+
+ opengrep_config:
+ type: object
+ description: "OpenGrep/Semgrep static analysis configuration"
+ properties:
+ config:
+ type: string
+ enum: ["auto", "p/security-audit", "p/owasp-top-ten", "p/cwe-top-25"]
+ description: "Preset OpenGrep ruleset (ignored if custom_rules_path is set)"
+ default: "auto"
+ custom_rules_path:
+ type: string
+ description: "Path to custom OpenGrep rules directory (use Android-specific rules for best results)"
+ default: "/app/toolbox/modules/android/custom_rules"
+ languages:
+ type: array
+ items:
+ type: string
+ description: "Programming languages to analyze (defaults to java, kotlin for Android)"
+ default: ["java", "kotlin"]
+ include_patterns:
+ type: array
+ items:
+ type: string
+ description: "File patterns to include in scan"
+ default: []
+ exclude_patterns:
+ type: array
+ items:
+ type: string
+ description: "File patterns to exclude from scan"
+ default: []
+ max_target_bytes:
+ type: integer
+ description: "Maximum file size to analyze (bytes)"
+ default: 1000000
+ timeout:
+ type: integer
+ description: "Analysis timeout in seconds"
+ default: 300
+ severity:
+ type: array
+ items:
+ type: string
+ enum: ["ERROR", "WARNING", "INFO"]
+ description: "Severity levels to include in results"
+ default: ["ERROR", "WARNING", "INFO"]
+ confidence:
+ type: array
+ items:
+ type: string
+ enum: ["HIGH", "MEDIUM", "LOW"]
+ description: "Confidence levels to include in results"
+ default: ["HIGH", "MEDIUM", "LOW"]
+
+ mobsf_config:
+ type: object
+ description: "MobSF scanner configuration"
+ properties:
+ enabled:
+ type: boolean
+ description: "Enable MobSF analysis (requires APK file)"
+ default: true
+ mobsf_url:
+ type: string
+ description: "MobSF server URL"
+ default: "http://localhost:8877"
+ api_key:
+ type: string
+ description: "MobSF API key (if not provided, uses MOBSF_API_KEY env var)"
+ default: null
+ rescan:
+ type: boolean
+ description: "Force rescan even if APK was previously analyzed"
+ default: false
+
+ reporter_config:
+ type: object
+ description: "SARIF reporter configuration"
+ properties:
+ include_code_flows:
+ type: boolean
+ description: "Include code flow information in SARIF output"
+ default: false
+ logical_id:
+ type: string
+ description: "Custom identifier for the SARIF report"
+ default: null
+
+output_schema:
+ type: object
+ properties:
+ sarif:
+ type: object
+ description: "SARIF-formatted findings from all Android security tools"
+ summary:
+ type: object
+ description: "Android security analysis summary"
+ properties:
+ total_findings:
+ type: integer
+ decompiled_java_files:
+ type: integer
+ description: "Number of Java files decompiled by Jadx"
+ opengrep_findings:
+ type: integer
+ description: "Findings from OpenGrep/Semgrep analysis"
+ mobsf_findings:
+ type: integer
+ description: "Findings from MobSF analysis"
+ severity_distribution:
+ type: object
+ category_distribution:
+ type: object
diff --git a/backend/toolbox/workflows/android_static_analysis/workflow.py b/backend/toolbox/workflows/android_static_analysis/workflow.py
new file mode 100644
index 0000000..8376cd2
--- /dev/null
+++ b/backend/toolbox/workflows/android_static_analysis/workflow.py
@@ -0,0 +1,289 @@
+"""
+Android Static Analysis Workflow - Temporal Version
+
+Comprehensive security testing for Android applications using Jadx, OpenGrep, and MobSF.
+"""
+
+# Copyright (c) 2025 FuzzingLabs
+#
+# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
+# at the root of this repository for details.
+#
+# After the Change Date (four years from publication), this version of the
+# Licensed Work will be made available under the Apache License, Version 2.0.
+# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
+#
+# Additional attribution and requirements are provided in the NOTICE file.
+
+from datetime import timedelta
+from typing import Dict, Any, Optional
+from pathlib import Path
+
+from temporalio import workflow
+from temporalio.common import RetryPolicy
+
+# Import activity interfaces (will be executed by worker)
+with workflow.unsafe.imports_passed_through():
+ import logging
+
+logger = logging.getLogger(__name__)
+
+
+@workflow.defn
+class AndroidStaticAnalysisWorkflow:
+ """
+ Android Static Application Security Testing workflow.
+
+ This workflow:
+ 1. Downloads target (APK) from MinIO
+ 2. (Optional) Decompiles APK using Jadx
+ 3. Runs OpenGrep/Semgrep static analysis on decompiled code
+ 4. (Optional) Runs MobSF comprehensive security scan
+ 5. Generates a SARIF report with all findings
+ 6. Uploads results to MinIO
+ 7. Cleans up cache
+ """
+
+ @workflow.run
+ async def run(
+ self,
+ target_id: str,
+ apk_path: Optional[str] = None,
+ decompile_apk: bool = True,
+ jadx_config: Optional[Dict[str, Any]] = None,
+ opengrep_config: Optional[Dict[str, Any]] = None,
+ mobsf_config: Optional[Dict[str, Any]] = None,
+ reporter_config: Optional[Dict[str, Any]] = None
+ ) -> Dict[str, Any]:
+ """
+ Main workflow execution.
+
+ Args:
+ target_id: UUID of the uploaded target (APK) in MinIO
+ apk_path: Path to APK file within target (if target is not a single APK)
+ decompile_apk: Whether to decompile APK with Jadx before OpenGrep
+ jadx_config: Configuration for Jadx decompiler
+ opengrep_config: Configuration for OpenGrep analyzer
+ mobsf_config: Configuration for MobSF scanner
+ reporter_config: Configuration for SARIF reporter
+
+ Returns:
+ Dictionary containing SARIF report and summary
+ """
+ workflow_id = workflow.info().workflow_id
+
+ workflow.logger.info(
+ f"Starting AndroidStaticAnalysisWorkflow "
+ f"(workflow_id={workflow_id}, target_id={target_id})"
+ )
+
+ # Default configurations
+ if not jadx_config:
+ jadx_config = {
+ "output_dir": "jadx_output",
+ "overwrite": True,
+ "threads": 4,
+ "decompiler_args": []
+ }
+
+ if not opengrep_config:
+ opengrep_config = {
+ "config": "auto",
+ "custom_rules_path": "/app/toolbox/modules/android/custom_rules",
+ "languages": ["java", "kotlin"],
+ "severity": ["ERROR", "WARNING", "INFO"],
+ "confidence": ["HIGH", "MEDIUM", "LOW"],
+ "timeout": 300,
+ }
+
+ if not mobsf_config:
+ mobsf_config = {
+ "enabled": True,
+ "mobsf_url": "http://localhost:8877",
+ "api_key": None,
+ "rescan": False,
+ }
+
+ if not reporter_config:
+ reporter_config = {
+ "include_code_flows": False
+ }
+
+ # Activity retry policy
+ retry_policy = RetryPolicy(
+ initial_interval=timedelta(seconds=1),
+ maximum_interval=timedelta(seconds=60),
+ maximum_attempts=3,
+ backoff_coefficient=2.0,
+ )
+
+ # Phase 0: Download target from MinIO
+ workflow.logger.info(f"Phase 0: Downloading target from MinIO (target_id={target_id})")
+ workspace_path = await workflow.execute_activity(
+ "get_target",
+ args=[target_id, workflow.info().workflow_id, "shared"],
+ start_to_close_timeout=timedelta(minutes=10),
+ retry_policy=retry_policy,
+ )
+ workflow.logger.info(f"✓ Target downloaded to: {workspace_path}")
+
+ # Handle case where workspace_path is a file (single APK upload)
+ # vs. a directory containing files
+ workspace_path_obj = Path(workspace_path)
+
+ # Determine actual workspace directory and APK path
+ if apk_path:
+ # User explicitly provided apk_path
+ actual_apk_path = apk_path
+ # workspace_path could be either a file or directory
+ # If it's a file and apk_path matches the filename, use parent as workspace
+ if workspace_path_obj.name == apk_path:
+ workspace_path = str(workspace_path_obj.parent)
+ workflow.logger.info(f"Adjusted workspace to parent directory: {workspace_path}")
+ else:
+ # No apk_path provided - check if workspace_path is an APK file
+            if workspace_path_obj.suffix.lower() == '.apk':
+ # workspace_path is the APK file itself
+ actual_apk_path = workspace_path_obj.name
+ workspace_path = str(workspace_path_obj.parent)
+ workflow.logger.info(f"Detected single APK file: {actual_apk_path}, workspace: {workspace_path}")
+ else:
+ # workspace_path is a directory, need to find APK within it
+ actual_apk_path = None
+ workflow.logger.info("Workspace is a directory, APK detection will be handled by modules")
+
+ # Phase 1: Jadx decompilation (if enabled and APK provided)
+ jadx_result = None
+ analysis_workspace = workspace_path
+
+ if decompile_apk and actual_apk_path:
+ workflow.logger.info(f"Phase 1: Decompiling APK with Jadx (apk={actual_apk_path})")
+
+ jadx_activity_config = {
+ **jadx_config,
+ "apk_path": actual_apk_path
+ }
+
+ jadx_result = await workflow.execute_activity(
+ "decompile_with_jadx",
+ args=[workspace_path, jadx_activity_config],
+ start_to_close_timeout=timedelta(minutes=15),
+ retry_policy=retry_policy,
+ )
+
+ if jadx_result.get("status") == "success":
+ # Use decompiled sources as workspace for OpenGrep
+ source_dir = jadx_result.get("summary", {}).get("source_dir")
+ if source_dir:
+ analysis_workspace = source_dir
+ workflow.logger.info(
+ f"✓ Jadx decompiled {jadx_result.get('summary', {}).get('java_files', 0)} Java files"
+ )
+ else:
+ workflow.logger.warning(f"Jadx decompilation failed: {jadx_result.get('error')}")
+ else:
+ workflow.logger.info("Phase 1: Jadx decompilation skipped")
+
+ # Phase 2: OpenGrep static analysis
+ workflow.logger.info(f"Phase 2: OpenGrep analysis on {analysis_workspace}")
+
+ opengrep_result = await workflow.execute_activity(
+ "scan_with_opengrep",
+ args=[analysis_workspace, opengrep_config],
+ start_to_close_timeout=timedelta(minutes=20),
+ retry_policy=retry_policy,
+ )
+
+ workflow.logger.info(
+ f"✓ OpenGrep completed: {opengrep_result.get('summary', {}).get('total_findings', 0)} findings"
+ )
+
+ # Phase 3: MobSF analysis (if enabled and APK provided)
+ mobsf_result = None
+
+ if mobsf_config.get("enabled", True) and actual_apk_path:
+ workflow.logger.info(f"Phase 3: MobSF scan on APK: {actual_apk_path}")
+
+ mobsf_activity_config = {
+ **mobsf_config,
+ "file_path": actual_apk_path
+ }
+
+ try:
+ mobsf_result = await workflow.execute_activity(
+ "scan_with_mobsf",
+ args=[workspace_path, mobsf_activity_config],
+ start_to_close_timeout=timedelta(minutes=30),
+ retry_policy=RetryPolicy(
+ maximum_attempts=2 # MobSF can be flaky, limit retries
+ ),
+ )
+
+ # Handle skipped or completed status
+ if mobsf_result.get("status") == "skipped":
+ workflow.logger.warning(
+ f"⚠️ MobSF skipped: {mobsf_result.get('summary', {}).get('skip_reason', 'Unknown reason')}"
+ )
+ else:
+ workflow.logger.info(
+ f"✓ MobSF completed: {mobsf_result.get('summary', {}).get('total_findings', 0)} findings"
+ )
+ except Exception as e:
+ workflow.logger.warning(f"MobSF scan failed (continuing without it): {e}")
+ mobsf_result = None
+ else:
+ workflow.logger.info("Phase 3: MobSF scan skipped (disabled or no APK)")
+
+ # Phase 4: Generate SARIF report
+ workflow.logger.info("Phase 4: Generating SARIF report")
+
+ sarif_report = await workflow.execute_activity(
+ "generate_android_sarif",
+ args=[jadx_result or {}, opengrep_result, mobsf_result, reporter_config, workspace_path],
+ start_to_close_timeout=timedelta(minutes=5),
+ retry_policy=retry_policy,
+ )
+
+ # Phase 5: Upload results to MinIO
+ workflow.logger.info("Phase 5: Uploading results to MinIO")
+
+ result_url = await workflow.execute_activity(
+ "upload_results",
+ args=[workflow.info().workflow_id, sarif_report, "sarif"],
+ start_to_close_timeout=timedelta(minutes=10),
+ retry_policy=retry_policy,
+ )
+
+ workflow.logger.info(f"✓ Results uploaded: {result_url}")
+
+ # Phase 6: Cleanup cache
+ workflow.logger.info("Phase 6: Cleaning up cache")
+
+ await workflow.execute_activity(
+ "cleanup_cache",
+ args=[workspace_path, "shared"],
+ start_to_close_timeout=timedelta(minutes=5),
+ retry_policy=RetryPolicy(maximum_attempts=1), # Don't retry cleanup
+ )
+
+ # Calculate summary
+        runs = sarif_report.get("runs") or [{}]
+        total_findings = len(runs[0].get("results", []))
+
+ summary = {
+ "workflow": "android_static_analysis",
+ "target_id": target_id,
+ "total_findings": total_findings,
+            "decompiled_java_files": jadx_result.get("summary", {}).get("java_files", 0) if jadx_result else 0,
+ "opengrep_findings": opengrep_result.get("summary", {}).get("total_findings", 0),
+ "mobsf_findings": mobsf_result.get("summary", {}).get("total_findings", 0) if mobsf_result else 0,
+ "result_url": result_url,
+ }
+
+ workflow.logger.info(
+ f"✅ AndroidStaticAnalysisWorkflow completed successfully: {total_findings} findings"
+ )
+
+ return {
+ "sarif": sarif_report,
+ "summary": summary,
+ }
diff --git a/backend/toolbox/workflows/atheris_fuzzing/metadata.yaml b/backend/toolbox/workflows/atheris_fuzzing/metadata.yaml
index b079804..c119aad 100644
--- a/backend/toolbox/workflows/atheris_fuzzing/metadata.yaml
+++ b/backend/toolbox/workflows/atheris_fuzzing/metadata.yaml
@@ -16,11 +16,6 @@ tags:
# - "copy-on-write": Download once, copy for each run (balances performance and isolation)
workspace_isolation: "isolated"
-default_parameters:
- target_file: null
- max_iterations: 1000000
- timeout_seconds: 1800
-
parameters:
type: object
properties:
diff --git a/backend/toolbox/workflows/cargo_fuzzing/metadata.yaml b/backend/toolbox/workflows/cargo_fuzzing/metadata.yaml
index 39ff622..829a1f3 100644
--- a/backend/toolbox/workflows/cargo_fuzzing/metadata.yaml
+++ b/backend/toolbox/workflows/cargo_fuzzing/metadata.yaml
@@ -16,12 +16,6 @@ tags:
# - "copy-on-write": Download once, copy for each run (balances performance and isolation)
workspace_isolation: "isolated"
-default_parameters:
- target_name: null
- max_iterations: 1000000
- timeout_seconds: 1800
- sanitizer: "address"
-
parameters:
type: object
properties:
diff --git a/backend/toolbox/workflows/gitleaks_detection/metadata.yaml b/backend/toolbox/workflows/gitleaks_detection/metadata.yaml
index d2c343c..ad4ae45 100644
--- a/backend/toolbox/workflows/gitleaks_detection/metadata.yaml
+++ b/backend/toolbox/workflows/gitleaks_detection/metadata.yaml
@@ -30,13 +30,5 @@ parameters:
default: false
description: "Scan files without Git context"
-default_parameters:
- scan_mode: "detect"
- redact: true
- no_git: false
-
required_modules:
- "gitleaks"
-
-supported_volume_modes:
- - "ro"
diff --git a/backend/toolbox/workflows/llm_analysis/metadata.yaml b/backend/toolbox/workflows/llm_analysis/metadata.yaml
index 0a388bf..2631b59 100644
--- a/backend/toolbox/workflows/llm_analysis/metadata.yaml
+++ b/backend/toolbox/workflows/llm_analysis/metadata.yaml
@@ -13,38 +13,84 @@ tags:
# Workspace isolation mode
workspace_isolation: "shared"
-default_parameters:
- agent_url: "http://fuzzforge-task-agent:8000/a2a/litellm_agent"
- llm_model: "gpt-5-mini"
- llm_provider: "openai"
- max_files: 5
-
parameters:
type: object
properties:
agent_url:
type: string
description: "A2A agent endpoint URL"
+ default: "http://fuzzforge-task-agent:8000/a2a/litellm_agent"
llm_model:
type: string
description: "LLM model to use (e.g., gpt-4o-mini, claude-3-5-sonnet)"
+ default: "gpt-5-mini"
llm_provider:
type: string
description: "LLM provider (openai, anthropic, etc.)"
+ default: "openai"
file_patterns:
type: array
items:
type: string
- description: "File patterns to analyze (e.g., ['*.py', '*.js'])"
+ default:
+ - "*.py"
+ - "*.js"
+ - "*.ts"
+ - "*.jsx"
+ - "*.tsx"
+ - "*.java"
+ - "*.go"
+ - "*.rs"
+ - "*.c"
+ - "*.cpp"
+ - "*.h"
+ - "*.hpp"
+ - "*.cs"
+ - "*.php"
+ - "*.rb"
+ - "*.swift"
+ - "*.kt"
+ - "*.scala"
+ - "*.env"
+ - "*.yaml"
+ - "*.yml"
+ - "*.json"
+ - "*.xml"
+ - "*.ini"
+ - "*.sql"
+ - "*.properties"
+ - "*.sh"
+ - "*.bat"
+ - "*.ps1"
+ - "*.config"
+ - "*.conf"
+ - "*.toml"
+ - "*id_rsa*"
+ - "*id_dsa*"
+ - "*id_ecdsa*"
+ - "*id_ed25519*"
+ - "*.pem"
+ - "*.key"
+ - "*.pub"
+ - "*.txt"
+ - "*.md"
+ - "Dockerfile"
+ - "docker-compose.yml"
+ - ".gitignore"
+ - ".dockerignore"
+ description: "File patterns to analyze for security issues and secrets"
max_files:
type: integer
description: "Maximum number of files to analyze"
+ default: 10
max_file_size:
type: integer
description: "Maximum file size in bytes"
+ default: 100000
timeout:
type: integer
description: "Timeout per file in seconds"
+ default: 90
output_schema:
type: object
diff --git a/backend/toolbox/workflows/llm_secret_detection/metadata.yaml b/backend/toolbox/workflows/llm_secret_detection/metadata.yaml
index cf761ef..a97b859 100644
--- a/backend/toolbox/workflows/llm_secret_detection/metadata.yaml
+++ b/backend/toolbox/workflows/llm_secret_detection/metadata.yaml
@@ -30,14 +30,42 @@ parameters:
type: integer
default: 20
-default_parameters:
- agent_url: "http://fuzzforge-task-agent:8000/a2a/litellm_agent"
- llm_model: "gpt-5-mini"
- llm_provider: "openai"
- max_files: 20
+ max_file_size:
+ type: integer
+ default: 30000
+ description: "Maximum file size in bytes"
+
+ timeout:
+ type: integer
+ default: 30
+ description: "Timeout per file in seconds"
+
+ file_patterns:
+ type: array
+ items:
+ type: string
+ default:
+ - "*.py"
+ - "*.js"
+ - "*.ts"
+ - "*.java"
+ - "*.go"
+ - "*.env"
+ - "*.yaml"
+ - "*.yml"
+ - "*.json"
+ - "*.xml"
+ - "*.ini"
+ - "*.sql"
+ - "*.properties"
+ - "*.sh"
+ - "*.bat"
+ - "*.config"
+ - "*.conf"
+ - "*.toml"
+ - "*id_rsa*"
+ - "*.txt"
+ description: "File patterns to scan for secrets"
required_modules:
- "llm_secret_detector"
-
-supported_volume_modes:
- - "ro"
diff --git a/backend/toolbox/workflows/llm_secret_detection/workflow.py b/backend/toolbox/workflows/llm_secret_detection/workflow.py
index 4f693d0..a0c66d2 100644
--- a/backend/toolbox/workflows/llm_secret_detection/workflow.py
+++ b/backend/toolbox/workflows/llm_secret_detection/workflow.py
@@ -17,6 +17,7 @@ class LlmSecretDetectionWorkflow:
llm_model: Optional[str] = None,
llm_provider: Optional[str] = None,
max_files: Optional[int] = None,
+ max_file_size: Optional[int] = None,
timeout: Optional[int] = None,
file_patterns: Optional[list] = None
) -> Dict[str, Any]:
@@ -67,6 +68,8 @@ class LlmSecretDetectionWorkflow:
config["llm_provider"] = llm_provider
if max_files:
config["max_files"] = max_files
+ if max_file_size:
+ config["max_file_size"] = max_file_size
if timeout:
config["timeout"] = timeout
if file_patterns:
diff --git a/backend/toolbox/workflows/ossfuzz_campaign/metadata.yaml b/backend/toolbox/workflows/ossfuzz_campaign/metadata.yaml
index fbc1d51..d6766f9 100644
--- a/backend/toolbox/workflows/ossfuzz_campaign/metadata.yaml
+++ b/backend/toolbox/workflows/ossfuzz_campaign/metadata.yaml
@@ -16,13 +16,6 @@ tags:
# OSS-Fuzz campaigns use isolated mode for safe concurrent campaigns
workspace_isolation: "isolated"
-default_parameters:
- project_name: null
- campaign_duration_hours: 1
- override_engine: null
- override_sanitizer: null
- max_iterations: null
-
parameters:
type: object
required:
diff --git a/backend/toolbox/workflows/python_sast/__init__.py b/backend/toolbox/workflows/python_sast/__init__.py
new file mode 100644
index 0000000..e436884
--- /dev/null
+++ b/backend/toolbox/workflows/python_sast/__init__.py
@@ -0,0 +1,10 @@
+# Copyright (c) 2025 FuzzingLabs
+#
+# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
+# at the root of this repository for details.
+#
+# After the Change Date (four years from publication), this version of the
+# Licensed Work will be made available under the Apache License, Version 2.0.
+# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
+#
+# Additional attribution and requirements are provided in the NOTICE file.
diff --git a/backend/toolbox/workflows/python_sast/activities.py b/backend/toolbox/workflows/python_sast/activities.py
new file mode 100644
index 0000000..fea884f
--- /dev/null
+++ b/backend/toolbox/workflows/python_sast/activities.py
@@ -0,0 +1,191 @@
+"""
+Python SAST Workflow Activities
+
+Activities specific to the Python SAST workflow:
+- scan_dependencies_activity: Scan Python dependencies for CVEs using pip-audit
+- analyze_with_bandit_activity: Analyze Python code for security issues using Bandit
+- analyze_with_mypy_activity: Analyze Python code for type safety using Mypy
+- generate_python_sast_sarif_activity: Generate SARIF report from all findings
+"""
+
+import logging
+import sys
+from pathlib import Path
+
+from temporalio import activity
+
+# Configure logging
+logger = logging.getLogger(__name__)
+
+# Add toolbox to path for module imports
+sys.path.insert(0, '/app/toolbox')
+
+
+@activity.defn(name="scan_dependencies")
+async def scan_dependencies_activity(workspace_path: str, config: dict) -> dict:
+ """
+ Scan Python dependencies for known vulnerabilities using pip-audit.
+
+ Args:
+ workspace_path: Path to the workspace directory
+ config: DependencyScanner configuration
+
+ Returns:
+ Scanner results dictionary
+ """
+ logger.info(f"Activity: scan_dependencies (workspace={workspace_path})")
+
+ try:
+ from modules.scanner import DependencyScanner
+
+ workspace = Path(workspace_path)
+ if not workspace.exists():
+ raise FileNotFoundError(f"Workspace not found: {workspace_path}")
+
+ scanner = DependencyScanner()
+ result = await scanner.execute(config, workspace)
+
+ logger.info(
+ f"✓ Dependency scanning completed: "
+ f"{result.summary.get('total_vulnerabilities', 0)} vulnerabilities found"
+ )
+ return result.dict()
+
+ except Exception as e:
+ logger.error(f"Dependency scanning failed: {e}", exc_info=True)
+ raise
+
+
+@activity.defn(name="analyze_with_bandit")
+async def analyze_with_bandit_activity(workspace_path: str, config: dict) -> dict:
+ """
+ Analyze Python code for security issues using Bandit.
+
+ Args:
+ workspace_path: Path to the workspace directory
+ config: BanditAnalyzer configuration
+
+ Returns:
+ Analysis results dictionary
+ """
+ logger.info(f"Activity: analyze_with_bandit (workspace={workspace_path})")
+
+ try:
+ from modules.analyzer import BanditAnalyzer
+
+ workspace = Path(workspace_path)
+ if not workspace.exists():
+ raise FileNotFoundError(f"Workspace not found: {workspace_path}")
+
+ analyzer = BanditAnalyzer()
+ result = await analyzer.execute(config, workspace)
+
+ logger.info(
+ f"✓ Bandit analysis completed: "
+ f"{result.summary.get('total_issues', 0)} security issues found"
+ )
+ return result.dict()
+
+ except Exception as e:
+ logger.error(f"Bandit analysis failed: {e}", exc_info=True)
+ raise
+
+
+@activity.defn(name="analyze_with_mypy")
+async def analyze_with_mypy_activity(workspace_path: str, config: dict) -> dict:
+ """
+ Analyze Python code for type safety issues using Mypy.
+
+ Args:
+ workspace_path: Path to the workspace directory
+ config: MypyAnalyzer configuration
+
+ Returns:
+ Analysis results dictionary
+ """
+ logger.info(f"Activity: analyze_with_mypy (workspace={workspace_path})")
+
+ try:
+ from modules.analyzer import MypyAnalyzer
+
+ workspace = Path(workspace_path)
+ if not workspace.exists():
+ raise FileNotFoundError(f"Workspace not found: {workspace_path}")
+
+ analyzer = MypyAnalyzer()
+ result = await analyzer.execute(config, workspace)
+
+ logger.info(
+ f"✓ Mypy analysis completed: "
+ f"{result.summary.get('total_errors', 0)} type errors found"
+ )
+ return result.dict()
+
+ except Exception as e:
+ logger.error(f"Mypy analysis failed: {e}", exc_info=True)
+ raise
+
+
+@activity.defn(name="generate_python_sast_sarif")
+async def generate_python_sast_sarif_activity(
+ dependency_results: dict,
+ bandit_results: dict,
+ mypy_results: dict,
+ config: dict,
+ workspace_path: str
+) -> dict:
+ """
+ Generate SARIF report from all SAST analysis results.
+
+ Args:
+ dependency_results: Results from dependency scanner
+ bandit_results: Results from Bandit analyzer
+ mypy_results: Results from Mypy analyzer
+ config: Reporter configuration
+ workspace_path: Path to the workspace
+
+ Returns:
+ SARIF report dictionary
+ """
+ logger.info("Activity: generate_python_sast_sarif")
+
+ try:
+ from modules.reporter import SARIFReporter
+
+ workspace = Path(workspace_path)
+
+ # Combine findings from all modules
+ all_findings = []
+
+ # Add dependency scanner findings
+ dependency_findings = dependency_results.get("findings", [])
+ all_findings.extend(dependency_findings)
+
+ # Add Bandit findings
+ bandit_findings = bandit_results.get("findings", [])
+ all_findings.extend(bandit_findings)
+
+ # Add Mypy findings
+ mypy_findings = mypy_results.get("findings", [])
+ all_findings.extend(mypy_findings)
+
+ # Prepare reporter config
+ reporter_config = {
+ **config,
+ "findings": all_findings,
+ "tool_name": "FuzzForge Python SAST",
+ "tool_version": "1.0.0"
+ }
+
+ reporter = SARIFReporter()
+ result = await reporter.execute(reporter_config, workspace)
+
+ # Extract SARIF from result
+ sarif = result.dict().get("sarif", {})
+
+ logger.info(f"✓ SARIF report generated with {len(all_findings)} findings")
+ return sarif
+
+ except Exception as e:
+ logger.error(f"SARIF report generation failed: {e}", exc_info=True)
+ raise
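The activities above are plain Temporal activity definitions; a worker still has to register them next to the workflow before Temporal can dispatch them. A minimal sketch follows, assuming the `temporalio` SDK, a task queue named `python-sast`, a local Temporal server, and these import paths — none of which are fixed by this patch (the real wiring lives in the worker image). The shared activities the workflow also calls (`get_target`, `upload_results`, `cleanup_cache`) would need to be registered the same way.

```python
# Minimal sketch: registering the SAST activities with a Temporal worker.
# Task queue name, server address, and import paths are assumptions.
import asyncio

from temporalio.client import Client
from temporalio.worker import Worker

from workflows.python_sast.workflow import PythonSastWorkflow
from workflows.python_sast.activities import (
    scan_dependencies_activity,
    analyze_with_bandit_activity,
    analyze_with_mypy_activity,
    generate_python_sast_sarif_activity,
)


async def main() -> None:
    client = await Client.connect("localhost:7233")  # assumed Temporal address
    worker = Worker(
        client,
        task_queue="python-sast",  # assumed queue name
        workflows=[PythonSastWorkflow],
        activities=[
            scan_dependencies_activity,
            analyze_with_bandit_activity,
            analyze_with_mypy_activity,
            generate_python_sast_sarif_activity,
            # plus the shared get_target / upload_results / cleanup_cache activities
        ],
    )
    await worker.run()


if __name__ == "__main__":
    asyncio.run(main())
```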
diff --git a/backend/toolbox/workflows/python_sast/metadata.yaml b/backend/toolbox/workflows/python_sast/metadata.yaml
new file mode 100644
index 0000000..c7e209c
--- /dev/null
+++ b/backend/toolbox/workflows/python_sast/metadata.yaml
@@ -0,0 +1,110 @@
+name: python_sast
+version: "1.0.0"
+vertical: python
+description: "Python Static Application Security Testing (SAST) workflow combining dependency scanning (pip-audit), security linting (Bandit), and type checking (Mypy)"
+author: "FuzzForge Team"
+tags:
+ - "python"
+ - "sast"
+ - "security"
+ - "type-checking"
+ - "dependencies"
+ - "bandit"
+ - "mypy"
+ - "pip-audit"
+ - "sarif"
+
+# Workspace isolation mode (system-level configuration)
+# Using "shared" mode for read-only SAST analysis (no file modifications)
+workspace_isolation: "shared"
+
+parameters:
+ type: object
+ properties:
+ dependency_config:
+ type: object
+ description: "Dependency scanner (pip-audit) configuration"
+ properties:
+ dependency_files:
+ type: array
+ items:
+ type: string
+ description: "List of dependency files to scan (auto-discovered if empty)"
+ default: []
+ ignore_vulns:
+ type: array
+ items:
+ type: string
+ description: "List of vulnerability IDs to ignore"
+ default: []
+ bandit_config:
+ type: object
+ description: "Bandit security analyzer configuration"
+ properties:
+ severity_level:
+ type: string
+ enum: ["low", "medium", "high"]
+ description: "Minimum severity level to report"
+ default: "low"
+ confidence_level:
+ type: string
+ enum: ["low", "medium", "high"]
+ description: "Minimum confidence level to report"
+ default: "medium"
+ exclude_tests:
+ type: boolean
+ description: "Exclude test files from analysis"
+ default: true
+ skip_ids:
+ type: array
+ items:
+ type: string
+ description: "List of Bandit test IDs to skip"
+ default: []
+ mypy_config:
+ type: object
+ description: "Mypy type checker configuration"
+ properties:
+ strict_mode:
+ type: boolean
+ description: "Enable strict type checking"
+ default: false
+ ignore_missing_imports:
+ type: boolean
+ description: "Ignore errors about missing imports"
+ default: true
+ follow_imports:
+ type: string
+ enum: ["normal", "silent", "skip", "error"]
+ description: "How to handle imports"
+ default: "silent"
+ reporter_config:
+ type: object
+ description: "SARIF reporter configuration"
+ properties:
+ include_code_flows:
+ type: boolean
+ description: "Include code flow information"
+ default: false
+
+output_schema:
+ type: object
+ properties:
+ sarif:
+ type: object
+ description: "SARIF-formatted SAST findings from all tools"
+ summary:
+ type: object
+ description: "SAST execution summary"
+ properties:
+ total_findings:
+ type: integer
+ vulnerabilities:
+ type: integer
+ description: "CVEs found in dependencies"
+ security_issues:
+ type: integer
+ description: "Security issues found by Bandit"
+ type_errors:
+ type: integer
+ description: "Type errors found by Mypy"
diff --git a/backend/toolbox/workflows/python_sast/workflow.py b/backend/toolbox/workflows/python_sast/workflow.py
new file mode 100644
index 0000000..6d56a47
--- /dev/null
+++ b/backend/toolbox/workflows/python_sast/workflow.py
@@ -0,0 +1,265 @@
+"""
+Python SAST Workflow - Temporal Version
+
+Static Application Security Testing for Python projects using multiple tools.
+"""
+
+# Copyright (c) 2025 FuzzingLabs
+#
+# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
+# at the root of this repository for details.
+#
+# After the Change Date (four years from publication), this version of the
+# Licensed Work will be made available under the Apache License, Version 2.0.
+# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
+#
+# Additional attribution and requirements are provided in the NOTICE file.
+
+from datetime import timedelta
+from typing import Dict, Any, Optional
+
+from temporalio import workflow
+from temporalio.common import RetryPolicy
+
+# Import activity interfaces (will be executed by worker)
+with workflow.unsafe.imports_passed_through():
+ import logging
+
+logger = logging.getLogger(__name__)
+
+
+@workflow.defn
+class PythonSastWorkflow:
+ """
+ Python Static Application Security Testing workflow.
+
+ This workflow:
+ 1. Downloads target from MinIO
+ 2. Runs dependency scanning (pip-audit for CVEs)
+ 3. Runs security linting (Bandit for security issues)
+ 4. Runs type checking (Mypy for type safety)
+ 5. Generates a SARIF report with all findings
+ 6. Uploads results to MinIO
+ 7. Cleans up cache
+ """
+
+ @workflow.run
+ async def run(
+ self,
+ target_id: str,
+ dependency_config: Optional[Dict[str, Any]] = None,
+ bandit_config: Optional[Dict[str, Any]] = None,
+ mypy_config: Optional[Dict[str, Any]] = None,
+ reporter_config: Optional[Dict[str, Any]] = None
+ ) -> Dict[str, Any]:
+ """
+ Main workflow execution.
+
+ Args:
+ target_id: UUID of the uploaded target in MinIO
+ dependency_config: Configuration for dependency scanner
+ bandit_config: Configuration for Bandit analyzer
+ mypy_config: Configuration for Mypy analyzer
+ reporter_config: Configuration for SARIF reporter
+
+ Returns:
+ Dictionary containing SARIF report and summary
+ """
+ workflow_id = workflow.info().workflow_id
+
+ workflow.logger.info(
+ f"Starting PythonSASTWorkflow "
+ f"(workflow_id={workflow_id}, target_id={target_id})"
+ )
+
+ # Default configurations
+ if not dependency_config:
+ dependency_config = {
+ "dependency_files": [], # Auto-discover
+ "ignore_vulns": []
+ }
+
+ if not bandit_config:
+ bandit_config = {
+ "severity_level": "low",
+ "confidence_level": "medium",
+ "exclude_tests": True,
+ "skip_ids": []
+ }
+
+ if not mypy_config:
+ mypy_config = {
+ "strict_mode": False,
+ "ignore_missing_imports": True,
+ "follow_imports": "silent"
+ }
+
+ if not reporter_config:
+ reporter_config = {
+ "include_code_flows": False
+ }
+
+ results = {
+ "workflow_id": workflow_id,
+ "target_id": target_id,
+ "status": "running",
+ "steps": []
+ }
+
+ try:
+ # Get run ID for workspace isolation (using shared mode for read-only analysis)
+ run_id = workflow.info().run_id
+
+ # Step 1: Download target from MinIO
+ workflow.logger.info("Step 1: Downloading target from MinIO")
+ target_path = await workflow.execute_activity(
+ "get_target",
+ args=[target_id, run_id, "shared"], # target_id, run_id, workspace_isolation
+ start_to_close_timeout=timedelta(minutes=5),
+ retry_policy=RetryPolicy(
+ initial_interval=timedelta(seconds=1),
+ maximum_interval=timedelta(seconds=30),
+ maximum_attempts=3
+ )
+ )
+ results["steps"].append({
+ "step": "download_target",
+ "status": "success",
+ "target_path": target_path
+ })
+ workflow.logger.info(f"✓ Target downloaded to: {target_path}")
+
+ # Step 2: Dependency scanning (pip-audit)
+ workflow.logger.info("Step 2: Scanning dependencies for vulnerabilities")
+ dependency_results = await workflow.execute_activity(
+ "scan_dependencies",
+ args=[target_path, dependency_config],
+ start_to_close_timeout=timedelta(minutes=10),
+ retry_policy=RetryPolicy(
+ initial_interval=timedelta(seconds=2),
+ maximum_interval=timedelta(seconds=60),
+ maximum_attempts=2
+ )
+ )
+ results["steps"].append({
+ "step": "dependency_scanning",
+ "status": "success",
+ "vulnerabilities": dependency_results.get("summary", {}).get("total_vulnerabilities", 0)
+ })
+ workflow.logger.info(
+ f"✓ Dependency scanning completed: "
+ f"{dependency_results.get('summary', {}).get('total_vulnerabilities', 0)} vulnerabilities"
+ )
+
+ # Step 3: Security linting (Bandit)
+ workflow.logger.info("Step 3: Analyzing security issues with Bandit")
+ bandit_results = await workflow.execute_activity(
+ "analyze_with_bandit",
+ args=[target_path, bandit_config],
+ start_to_close_timeout=timedelta(minutes=10),
+ retry_policy=RetryPolicy(
+ initial_interval=timedelta(seconds=2),
+ maximum_interval=timedelta(seconds=60),
+ maximum_attempts=2
+ )
+ )
+ results["steps"].append({
+ "step": "bandit_analysis",
+ "status": "success",
+ "issues": bandit_results.get("summary", {}).get("total_issues", 0)
+ })
+ workflow.logger.info(
+ f"✓ Bandit analysis completed: "
+ f"{bandit_results.get('summary', {}).get('total_issues', 0)} security issues"
+ )
+
+ # Step 4: Type checking (Mypy)
+ workflow.logger.info("Step 4: Type checking with Mypy")
+ mypy_results = await workflow.execute_activity(
+ "analyze_with_mypy",
+ args=[target_path, mypy_config],
+ start_to_close_timeout=timedelta(minutes=10),
+ retry_policy=RetryPolicy(
+ initial_interval=timedelta(seconds=2),
+ maximum_interval=timedelta(seconds=60),
+ maximum_attempts=2
+ )
+ )
+ results["steps"].append({
+ "step": "mypy_analysis",
+ "status": "success",
+ "type_errors": mypy_results.get("summary", {}).get("total_errors", 0)
+ })
+ workflow.logger.info(
+ f"✓ Mypy analysis completed: "
+ f"{mypy_results.get('summary', {}).get('total_errors', 0)} type errors"
+ )
+
+ # Step 5: Generate SARIF report
+ workflow.logger.info("Step 5: Generating SARIF report")
+ sarif_report = await workflow.execute_activity(
+ "generate_python_sast_sarif",
+ args=[dependency_results, bandit_results, mypy_results, reporter_config, target_path],
+ start_to_close_timeout=timedelta(minutes=5)
+ )
+ results["steps"].append({
+ "step": "report_generation",
+ "status": "success"
+ })
+
+ # Count total findings in SARIF
+ total_findings = 0
+ if sarif_report and "runs" in sarif_report:
+ total_findings = len(sarif_report["runs"][0].get("results", []))
+
+ workflow.logger.info(f"✓ SARIF report generated with {total_findings} findings")
+
+ # Step 6: Upload results to MinIO
+ workflow.logger.info("Step 6: Uploading results")
+ try:
+ results_url = await workflow.execute_activity(
+ "upload_results",
+ args=[workflow_id, sarif_report, "sarif"],
+ start_to_close_timeout=timedelta(minutes=2)
+ )
+ results["results_url"] = results_url
+ workflow.logger.info(f"✓ Results uploaded to: {results_url}")
+ except Exception as e:
+ workflow.logger.warning(f"Failed to upload results: {e}")
+ results["results_url"] = None
+
+ # Step 7: Cleanup cache
+ workflow.logger.info("Step 7: Cleaning up cache")
+ try:
+ await workflow.execute_activity(
+ "cleanup_cache",
+ args=[target_path, "shared"], # target_path, workspace_isolation
+ start_to_close_timeout=timedelta(minutes=1)
+ )
+ workflow.logger.info("✓ Cache cleaned up (skipped for shared mode)")
+ except Exception as e:
+ workflow.logger.warning(f"Cache cleanup failed: {e}")
+
+ # Mark workflow as successful
+ results["status"] = "success"
+ results["sarif"] = sarif_report
+ results["summary"] = {
+ "total_findings": total_findings,
+ "vulnerabilities": dependency_results.get("summary", {}).get("total_vulnerabilities", 0),
+ "security_issues": bandit_results.get("summary", {}).get("total_issues", 0),
+ "type_errors": mypy_results.get("summary", {}).get("total_errors", 0)
+ }
+ workflow.logger.info(f"✓ Workflow completed successfully: {workflow_id}")
+
+ return results
+
+ except Exception as e:
+ workflow.logger.error(f"Workflow failed: {e}")
+ results["status"] = "error"
+ results["error"] = str(e)
+ results["steps"].append({
+ "step": "error",
+ "status": "failed",
+ "error": str(e)
+ })
+ raise
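For orientation, a client-side sketch of starting this workflow directly through the Temporal SDK. In FuzzForge the backend performs this submission itself; the task queue name, workflow ID format, and `target_id` below are illustrative assumptions, and only `target_id` is passed so the config defaults above apply.

```python
# Minimal sketch of starting PythonSastWorkflow from a Temporal client.
import asyncio
import uuid

from temporalio.client import Client


async def submit(target_id: str) -> dict:
    client = await Client.connect("localhost:7233")   # assumed Temporal address
    result = await client.execute_workflow(
        "PythonSastWorkflow",                          # workflow type name
        args=[target_id],                              # remaining configs use defaults
        id=f"python-sast-{uuid.uuid4()}",              # illustrative ID scheme
        task_queue="python-sast",                      # assumed queue name
    )
    return result


if __name__ == "__main__":
    print(asyncio.run(submit("00000000-0000-0000-0000-000000000000")))
```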
diff --git a/backend/toolbox/workflows/security_assessment/metadata.yaml b/backend/toolbox/workflows/security_assessment/metadata.yaml
index 572e50c..09addbd 100644
--- a/backend/toolbox/workflows/security_assessment/metadata.yaml
+++ b/backend/toolbox/workflows/security_assessment/metadata.yaml
@@ -18,11 +18,6 @@ tags:
# Using "shared" mode for read-only security analysis (no file modifications)
workspace_isolation: "shared"
-default_parameters:
- scanner_config: {}
- analyzer_config: {}
- reporter_config: {}
-
parameters:
type: object
properties:
diff --git a/backend/toolbox/workflows/trufflehog_detection/metadata.yaml b/backend/toolbox/workflows/trufflehog_detection/metadata.yaml
index 1a147f0..d725061 100644
--- a/backend/toolbox/workflows/trufflehog_detection/metadata.yaml
+++ b/backend/toolbox/workflows/trufflehog_detection/metadata.yaml
@@ -23,12 +23,5 @@ parameters:
default: 10
description: "Maximum directory depth to scan"
-default_parameters:
- verify: true
- max_depth: 10
-
required_modules:
- "trufflehog"
-
-supported_volume_modes:
- - "ro"
diff --git a/cli/pyproject.toml b/cli/pyproject.toml
index 1b8ddd9..4a71d1e 100644
--- a/cli/pyproject.toml
+++ b/cli/pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "fuzzforge-cli"
-version = "0.7.0"
+version = "0.7.3"
description = "FuzzForge CLI - Command-line interface for FuzzForge security testing platform"
readme = "README.md"
authors = [
diff --git a/cli/src/fuzzforge_cli/__init__.py b/cli/src/fuzzforge_cli/__init__.py
index 9d26c75..cc4a071 100644
--- a/cli/src/fuzzforge_cli/__init__.py
+++ b/cli/src/fuzzforge_cli/__init__.py
@@ -16,4 +16,4 @@ with local project management and persistent storage.
# Additional attribution and requirements are provided in the NOTICE file.
-__version__ = "0.6.0"
\ No newline at end of file
+__version__ = "0.7.3"
\ No newline at end of file
diff --git a/cli/src/fuzzforge_cli/commands/__init__.py b/cli/src/fuzzforge_cli/commands/__init__.py
index 7e53182..afcf0d9 100644
--- a/cli/src/fuzzforge_cli/commands/__init__.py
+++ b/cli/src/fuzzforge_cli/commands/__init__.py
@@ -12,3 +12,6 @@ Command modules for FuzzForge CLI.
#
# Additional attribution and requirements are provided in the NOTICE file.
+from . import worker
+
+__all__ = ["worker"]
diff --git a/cli/src/fuzzforge_cli/commands/findings.py b/cli/src/fuzzforge_cli/commands/findings.py
index 6335db1..7058527 100644
--- a/cli/src/fuzzforge_cli/commands/findings.py
+++ b/cli/src/fuzzforge_cli/commands/findings.py
@@ -253,15 +253,15 @@ def display_finding_detail(finding: Dict[str, Any], tool: Dict[str, Any], run_id
content_lines.append(f"[bold]Tool:[/bold] {tool.get('name', 'Unknown')} v{tool.get('version', 'unknown')}")
content_lines.append(f"[bold]Run ID:[/bold] {run_id}")
content_lines.append("")
- content_lines.append(f"[bold]Summary:[/bold]")
+ content_lines.append("[bold]Summary:[/bold]")
content_lines.append(message_text)
content_lines.append("")
- content_lines.append(f"[bold]Description:[/bold]")
+ content_lines.append("[bold]Description:[/bold]")
content_lines.append(message_markdown)
if code_snippet:
content_lines.append("")
- content_lines.append(f"[bold]Code Snippet:[/bold]")
+ content_lines.append("[bold]Code Snippet:[/bold]")
content_lines.append(f"[dim]{code_snippet}[/dim]")
content = "\n".join(content_lines)
@@ -270,7 +270,7 @@ def display_finding_detail(finding: Dict[str, Any], tool: Dict[str, Any], run_id
console.print()
console.print(Panel(
content,
- title=f"🔍 Finding Detail",
+ title="🔍 Finding Detail",
border_style=severity_color,
box=box.ROUNDED,
padding=(1, 2)
diff --git a/cli/src/fuzzforge_cli/commands/init.py b/cli/src/fuzzforge_cli/commands/init.py
index 9aa4ca7..ceb3586 100644
--- a/cli/src/fuzzforge_cli/commands/init.py
+++ b/cli/src/fuzzforge_cli/commands/init.py
@@ -187,19 +187,40 @@ def _ensure_env_file(fuzzforge_dir: Path, force: bool) -> None:
console.print("🧠 Configuring AI environment...")
console.print(" • Default LLM provider: openai")
- console.print(" • Default LLM model: gpt-5-mini")
+ console.print(" • Default LLM model: litellm_proxy/gpt-5-mini")
console.print(" • To customise provider/model later, edit .fuzzforge/.env")
llm_provider = "openai"
- llm_model = "gpt-5-mini"
+ llm_model = "litellm_proxy/gpt-5-mini"
+
+ # Check for global virtual keys from volumes/env/.env
+ global_env_key = None
+ for parent in fuzzforge_dir.parents:
+ global_env = parent / "volumes" / "env" / ".env"
+ if global_env.exists():
+ try:
+ for line in global_env.read_text(encoding="utf-8").splitlines():
+ if line.strip().startswith("OPENAI_API_KEY=") and "=" in line:
+ key_value = line.split("=", 1)[1].strip()
+ if key_value and not key_value.startswith("your-") and key_value.startswith("sk-"):
+ global_env_key = key_value
+ console.print(f" • Found virtual key in {global_env.relative_to(parent)}")
+ break
+ except Exception:
+ pass
+ break
api_key = Prompt.ask(
- "OpenAI API key (leave blank to fill manually)",
+ "OpenAI API key (leave blank to use global virtual key)" if global_env_key else "OpenAI API key (leave blank to fill manually)",
default="",
show_default=False,
console=console,
)
+ # Use global key if user didn't provide one
+ if not api_key and global_env_key:
+ api_key = global_env_key
+
session_db_path = fuzzforge_dir / "fuzzforge_sessions.db"
session_db_rel = session_db_path.relative_to(fuzzforge_dir.parent)
@@ -210,14 +231,20 @@ def _ensure_env_file(fuzzforge_dir: Path, force: bool) -> None:
f"LLM_PROVIDER={llm_provider}",
f"LLM_MODEL={llm_model}",
f"LITELLM_MODEL={llm_model}",
+ "LLM_ENDPOINT=http://localhost:10999",
+ "LLM_API_KEY=",
+ "LLM_EMBEDDING_MODEL=litellm_proxy/text-embedding-3-large",
+ "LLM_EMBEDDING_ENDPOINT=http://localhost:10999",
f"OPENAI_API_KEY={api_key}",
"FUZZFORGE_MCP_URL=http://localhost:8010/mcp",
"",
"# Cognee configuration mirrors the primary LLM by default",
f"LLM_COGNEE_PROVIDER={llm_provider}",
f"LLM_COGNEE_MODEL={llm_model}",
- f"LLM_COGNEE_API_KEY={api_key}",
- "LLM_COGNEE_ENDPOINT=",
+ "LLM_COGNEE_ENDPOINT=http://localhost:10999",
+ "LLM_COGNEE_API_KEY=",
+ "LLM_COGNEE_EMBEDDING_MODEL=litellm_proxy/text-embedding-3-large",
+ "LLM_COGNEE_EMBEDDING_ENDPOINT=http://localhost:10999",
"COGNEE_MCP_URL=",
"",
"# Session persistence options: inmemory | sqlite",
@@ -239,6 +266,8 @@ def _ensure_env_file(fuzzforge_dir: Path, force: bool) -> None:
for line in env_lines:
if line.startswith("OPENAI_API_KEY="):
template_lines.append("OPENAI_API_KEY=")
+ elif line.startswith("LLM_API_KEY="):
+ template_lines.append("LLM_API_KEY=")
elif line.startswith("LLM_COGNEE_API_KEY="):
template_lines.append("LLM_COGNEE_API_KEY=")
else:
diff --git a/cli/src/fuzzforge_cli/commands/worker.py b/cli/src/fuzzforge_cli/commands/worker.py
new file mode 100644
index 0000000..06b8b03
--- /dev/null
+++ b/cli/src/fuzzforge_cli/commands/worker.py
@@ -0,0 +1,225 @@
+"""
+Worker management commands for FuzzForge CLI.
+
+Provides commands to start, stop, and list Temporal workers.
+"""
+# Copyright (c) 2025 FuzzingLabs
+#
+# Licensed under the Business Source License 1.1 (BSL). See the LICENSE file
+# at the root of this repository for details.
+#
+# After the Change Date (four years from publication), this version of the
+# Licensed Work will be made available under the Apache License, Version 2.0.
+# See the LICENSE-APACHE file or http://www.apache.org/licenses/LICENSE-2.0
+#
+# Additional attribution and requirements are provided in the NOTICE file.
+
+import subprocess
+import sys
+import typer
+from pathlib import Path
+from rich.console import Console
+from rich.table import Table
+from typing import Optional
+
+from ..worker_manager import WorkerManager
+
+console = Console()
+app = typer.Typer(
+ name="worker",
+ help="🔧 Manage Temporal workers",
+ no_args_is_help=True,
+)
+
+
+@app.command("stop")
+def stop_workers(
+ all: bool = typer.Option(
+ False, "--all",
+ help="Stop all workers (default behavior, flag for clarity)"
+ )
+):
+ """
+ 🛑 Stop all running FuzzForge workers.
+
+    This command stops all worker containers individually with `docker stop`,
+    since workers sit behind Docker Compose profiles and a plain compose command
+    would not reliably stop them without also touching core services.
+
+ Examples:
+ $ ff worker stop
+ $ ff worker stop --all
+ """
+ try:
+ worker_mgr = WorkerManager()
+ success = worker_mgr.stop_all_workers()
+
+ if success:
+ sys.exit(0)
+ else:
+ console.print("⚠️ Some workers may not have stopped properly", style="yellow")
+ sys.exit(1)
+
+ except Exception as e:
+ console.print(f"❌ Error: {e}", style="red")
+ sys.exit(1)
+
+
+@app.command("list")
+def list_workers(
+ all: bool = typer.Option(
+ False, "--all", "-a",
+ help="Show all workers (including stopped)"
+ )
+):
+ """
+ 📋 List FuzzForge workers and their status.
+
+ By default, shows only running workers. Use --all to see all workers.
+
+ Examples:
+ $ ff worker list
+ $ ff worker list --all
+ """
+ try:
+ # Get list of running workers
+ result = subprocess.run(
+ ["docker", "ps", "--filter", "name=fuzzforge-worker-",
+ "--format", "{{.Names}}\t{{.Status}}\t{{.RunningFor}}"],
+ capture_output=True,
+ text=True,
+ check=False
+ )
+
+ running_workers = []
+ if result.stdout.strip():
+ for line in result.stdout.strip().splitlines():
+ parts = line.split('\t')
+ if len(parts) >= 3:
+ running_workers.append({
+ "name": parts[0].replace("fuzzforge-worker-", ""),
+ "status": "Running",
+ "uptime": parts[2]
+ })
+
+ # If --all, also get stopped workers
+ stopped_workers = []
+ if all:
+ result_all = subprocess.run(
+ ["docker", "ps", "-a", "--filter", "name=fuzzforge-worker-",
+ "--format", "{{.Names}}\t{{.Status}}"],
+ capture_output=True,
+ text=True,
+ check=False
+ )
+
+ all_worker_names = set()
+ for line in result_all.stdout.strip().splitlines():
+ parts = line.split('\t')
+ if len(parts) >= 2:
+ worker_name = parts[0].replace("fuzzforge-worker-", "")
+ all_worker_names.add(worker_name)
+ # If not running, it's stopped
+ if not any(w["name"] == worker_name for w in running_workers):
+ stopped_workers.append({
+ "name": worker_name,
+ "status": "Stopped",
+ "uptime": "-"
+ })
+
+ # Display results
+ if not running_workers and not stopped_workers:
+ console.print("ℹ️ No workers found", style="cyan")
+ console.print("\n💡 Start a worker with: [cyan]docker compose up -d worker-[/cyan]")
+ console.print(" Or run a workflow, which auto-starts workers: [cyan]ff workflow run [/cyan]")
+ return
+
+ # Create table
+ table = Table(title="FuzzForge Workers", show_header=True, header_style="bold cyan")
+ table.add_column("Worker", style="cyan", no_wrap=True)
+ table.add_column("Status", style="green")
+ table.add_column("Uptime", style="dim")
+
+ # Add running workers
+ for worker in running_workers:
+ table.add_row(
+ worker["name"],
+ f"[green]●[/green] {worker['status']}",
+ worker["uptime"]
+ )
+
+ # Add stopped workers if --all
+ for worker in stopped_workers:
+ table.add_row(
+ worker["name"],
+ f"[red]●[/red] {worker['status']}",
+ worker["uptime"]
+ )
+
+ console.print(table)
+
+ # Summary
+ if running_workers:
+ console.print(f"\n✅ {len(running_workers)} worker(s) running")
+ if stopped_workers:
+ console.print(f"⏹️ {len(stopped_workers)} worker(s) stopped")
+
+ except Exception as e:
+ console.print(f"❌ Error listing workers: {e}", style="red")
+ sys.exit(1)
+
+
+@app.command("start")
+def start_worker(
+ name: str = typer.Argument(
+ ...,
+ help="Worker name (e.g., 'python', 'android', 'secrets')"
+ ),
+ build: bool = typer.Option(
+ False, "--build",
+ help="Rebuild worker image before starting"
+ )
+):
+ """
+ 🚀 Start a specific worker.
+
+ The worker name should be the vertical name (e.g., 'python', 'android', 'rust').
+
+ Examples:
+ $ ff worker start python
+ $ ff worker start android --build
+ """
+ try:
+ service_name = f"worker-{name}"
+
+ console.print(f"🚀 Starting worker: [cyan]{service_name}[/cyan]")
+
+ # Build docker compose command
+ cmd = ["docker", "compose", "up", "-d"]
+ if build:
+ cmd.append("--build")
+ cmd.append(service_name)
+
+ result = subprocess.run(
+ cmd,
+ capture_output=True,
+ text=True,
+ check=False
+ )
+
+ if result.returncode == 0:
+ console.print(f"✅ Worker [cyan]{service_name}[/cyan] started successfully")
+ else:
+ console.print(f"❌ Failed to start worker: {result.stderr}", style="red")
+ console.print(
+ f"\n💡 Try manually: [yellow]docker compose up -d {service_name}[/yellow]",
+ style="dim"
+ )
+ sys.exit(1)
+
+ except Exception as e:
+ console.print(f"❌ Error: {e}", style="red")
+ sys.exit(1)
+
+
+if __name__ == "__main__":
+ app()
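As a sanity check on the `docker ps` parsing used by `ff worker list`, here is a small, self-contained sketch with fabricated sample output; the container names and uptimes are illustrative only.

```python
# The command emits tab-separated Name/Status/RunningFor columns; the
# "fuzzforge-worker-" prefix is stripped to recover the vertical name.
sample_stdout = (
    "fuzzforge-worker-python\tUp 2 hours (healthy)\t2 hours ago\n"
    "fuzzforge-worker-android\tUp 5 minutes\t5 minutes ago\n"
)

workers = []
for line in sample_stdout.strip().splitlines():
    name, status, running_for = line.split("\t")
    workers.append({
        "name": name.replace("fuzzforge-worker-", ""),
        "status": "Running",
        "uptime": running_for,
    })

assert [w["name"] for w in workers] == ["python", "android"]
```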
diff --git a/cli/src/fuzzforge_cli/commands/workflow_exec.py b/cli/src/fuzzforge_cli/commands/workflow_exec.py
index d1633e8..bfa7f12 100644
--- a/cli/src/fuzzforge_cli/commands/workflow_exec.py
+++ b/cli/src/fuzzforge_cli/commands/workflow_exec.py
@@ -39,7 +39,7 @@ from ..validation import (
)
from ..progress import step_progress
from ..constants import (
- STATUS_EMOJIS, MAX_RUN_ID_DISPLAY_LENGTH, DEFAULT_VOLUME_MODE,
+ STATUS_EMOJIS, MAX_RUN_ID_DISPLAY_LENGTH,
PROGRESS_STEP_DELAYS, MAX_RETRIES, RETRY_DELAY, POLL_INTERVAL
)
from ..worker_manager import WorkerManager
@@ -112,7 +112,6 @@ def execute_workflow_submission(
workflow: str,
target_path: str,
parameters: Dict[str, Any],
- volume_mode: str,
timeout: Optional[int],
interactive: bool
) -> Any:
@@ -160,13 +159,10 @@ def execute_workflow_submission(
except ValueError as e:
console.print(f"❌ Invalid {param_type}: {e}", style="red")
- # Note: volume_mode is no longer used (Temporal uses MinIO storage)
-
# Show submission summary
console.print("\n🎯 [bold]Executing workflow:[/bold]")
console.print(f" Workflow: {workflow}")
console.print(f" Target: {target_path}")
- console.print(f" Volume Mode: {volume_mode}")
if parameters:
console.print(f" Parameters: {len(parameters)} provided")
if timeout:
@@ -252,8 +248,6 @@ def execute_workflow_submission(
progress.next_step() # Submitting
submission = WorkflowSubmission(
- target_path=target_path,
- volume_mode=volume_mode,
parameters=parameters,
timeout=timeout
)
@@ -281,10 +275,6 @@ def execute_workflow(
None, "--param-file", "-f",
help="JSON file containing workflow parameters"
),
- volume_mode: str = typer.Option(
- DEFAULT_VOLUME_MODE, "--volume-mode", "-v",
- help="Volume mount mode: ro (read-only) or rw (read-write)"
- ),
timeout: Optional[int] = typer.Option(
None, "--timeout", "-t",
help="Execution timeout in seconds"
@@ -410,7 +400,7 @@ def execute_workflow(
response = execute_workflow_submission(
client, workflow, target_path, parameters,
- volume_mode, timeout, interactive
+ timeout, interactive
)
console.print("✅ Workflow execution started!", style="green")
@@ -453,9 +443,9 @@ def execute_workflow(
console.print("Press Ctrl+C to stop monitoring (execution continues in background).\n")
try:
- from ..commands.monitor import live_monitor
- # Import monitor command and run it
- live_monitor(response.run_id, refresh=3)
+ from ..commands.monitor import _live_monitor
+ # Call helper function directly with proper parameters
+ _live_monitor(response.run_id, refresh=3, once=False, style="inline")
except KeyboardInterrupt:
console.print("\n⏹️ Live monitoring stopped (execution continues in background)", style="yellow")
except Exception as e:
diff --git a/cli/src/fuzzforge_cli/completion.py b/cli/src/fuzzforge_cli/completion.py
index bd717cd..7bd7c5b 100644
--- a/cli/src/fuzzforge_cli/completion.py
+++ b/cli/src/fuzzforge_cli/completion.py
@@ -95,12 +95,6 @@ def complete_target_paths(incomplete: str) -> List[str]:
return []
-def complete_volume_modes(incomplete: str) -> List[str]:
- """Auto-complete volume mount modes."""
- modes = ["ro", "rw"]
- return [mode for mode in modes if mode.startswith(incomplete)]
-
-
def complete_export_formats(incomplete: str) -> List[str]:
"""Auto-complete export formats."""
formats = ["json", "csv", "html", "sarif"]
@@ -139,7 +133,6 @@ def complete_config_keys(incomplete: str) -> List[str]:
"api_url",
"api_timeout",
"default_workflow",
- "default_volume_mode",
"project_name",
"data_retention_days",
"auto_save_findings",
@@ -164,11 +157,6 @@ TargetPathComplete = typer.Argument(
help="Target path (tab completion available)"
)
-VolumeModetComplete = typer.Option(
- autocompletion=complete_volume_modes,
- help="Volume mode: ro or rw (tab completion available)"
-)
-
ExportFormatComplete = typer.Option(
autocompletion=complete_export_formats,
help="Export format (tab completion available)"
diff --git a/cli/src/fuzzforge_cli/config.py b/cli/src/fuzzforge_cli/config.py
index f21b87d..1a0ae28 100644
--- a/cli/src/fuzzforge_cli/config.py
+++ b/cli/src/fuzzforge_cli/config.py
@@ -28,6 +28,58 @@ try: # Optional dependency; fall back if not installed
except ImportError: # pragma: no cover - optional dependency
load_dotenv = None
+
+def _load_env_file_if_exists(path: Path, override: bool = False) -> bool:
+ if not path.exists():
+ return False
+ # Always use manual parsing to handle empty values correctly
+ try:
+ for line in path.read_text(encoding="utf-8").splitlines():
+ stripped = line.strip()
+ if not stripped or stripped.startswith("#") or "=" not in stripped:
+ continue
+ key, value = stripped.split("=", 1)
+ key = key.strip()
+ value = value.strip()
+ if override:
+ # Only override if value is non-empty
+ if value:
+ os.environ[key] = value
+ else:
+ # Set if not already in environment and value is non-empty
+ if key not in os.environ and value:
+ os.environ[key] = value
+ return True
+ except Exception: # pragma: no cover - best effort fallback
+ return False
+
+
+def _find_shared_env_file(project_dir: Path) -> Path | None:
+ for directory in [project_dir] + list(project_dir.parents):
+ candidate = directory / "volumes" / "env" / ".env"
+ if candidate.exists():
+ return candidate
+ return None
+
+
+def load_project_env(project_dir: Optional[Path] = None) -> Path | None:
+ """Load project-local env, falling back to shared volumes/env/.env."""
+
+ project_dir = Path(project_dir or Path.cwd())
+ shared_env = _find_shared_env_file(project_dir)
+ loaded_shared = False
+ if shared_env:
+ loaded_shared = _load_env_file_if_exists(shared_env, override=False)
+
+ project_env = project_dir / ".fuzzforge" / ".env"
+ if _load_env_file_if_exists(project_env, override=True):
+ return project_env
+
+ if loaded_shared:
+ return shared_env
+
+ return None
+
import yaml
from pydantic import BaseModel, Field
@@ -312,23 +364,7 @@ class ProjectConfigManager:
if not cognee.get("enabled", True):
return
- # Load project-specific environment overrides from .fuzzforge/.env if available
- env_file = self.project_dir / ".fuzzforge" / ".env"
- if env_file.exists():
- if load_dotenv:
- load_dotenv(env_file, override=False)
- else:
- try:
- for line in env_file.read_text(encoding="utf-8").splitlines():
- stripped = line.strip()
- if not stripped or stripped.startswith("#"):
- continue
- if "=" not in stripped:
- continue
- key, value = stripped.split("=", 1)
- os.environ.setdefault(key.strip(), value.strip())
- except Exception: # pragma: no cover - best effort fallback
- pass
+ load_project_env(self.project_dir)
backend_access = "true" if cognee.get("backend_access_control", True) else "false"
os.environ["ENABLE_BACKEND_ACCESS_CONTROL"] = backend_access
@@ -374,6 +410,17 @@ class ProjectConfigManager:
"OPENAI_API_KEY",
)
endpoint = _env("LLM_COGNEE_ENDPOINT", "COGNEE_LLM_ENDPOINT", "LLM_ENDPOINT")
+ embedding_model = _env(
+ "LLM_COGNEE_EMBEDDING_MODEL",
+ "COGNEE_LLM_EMBEDDING_MODEL",
+ "LLM_EMBEDDING_MODEL",
+ )
+ embedding_endpoint = _env(
+ "LLM_COGNEE_EMBEDDING_ENDPOINT",
+ "COGNEE_LLM_EMBEDDING_ENDPOINT",
+ "LLM_EMBEDDING_ENDPOINT",
+ "LLM_ENDPOINT",
+ )
api_version = _env(
"LLM_COGNEE_API_VERSION",
"COGNEE_LLM_API_VERSION",
@@ -398,6 +445,20 @@ class ProjectConfigManager:
os.environ.setdefault("OPENAI_API_KEY", api_key)
if endpoint:
os.environ["LLM_ENDPOINT"] = endpoint
+ os.environ.setdefault("LLM_API_BASE", endpoint)
+ os.environ.setdefault("LLM_EMBEDDING_ENDPOINT", endpoint)
+ os.environ.setdefault("LLM_EMBEDDING_API_BASE", endpoint)
+ os.environ.setdefault("OPENAI_API_BASE", endpoint)
+ # Set LiteLLM proxy environment variables for SDK usage
+ os.environ.setdefault("LITELLM_PROXY_API_BASE", endpoint)
+ if api_key:
+ # Set LiteLLM proxy API key from the virtual key
+ os.environ.setdefault("LITELLM_PROXY_API_KEY", api_key)
+ if embedding_model:
+ os.environ["LLM_EMBEDDING_MODEL"] = embedding_model
+ if embedding_endpoint:
+ os.environ["LLM_EMBEDDING_ENDPOINT"] = embedding_endpoint
+ os.environ.setdefault("LLM_EMBEDDING_API_BASE", embedding_endpoint)
if api_version:
os.environ["LLM_API_VERSION"] = api_version
if max_tokens:
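A minimal sketch of the precedence `load_project_env()` implements, assuming the `fuzzforge-cli` package is importable: the shared `volumes/env/.env` seeds values, the project `.fuzzforge/.env` overrides them, and empty values never clobber what is already set. Paths and keys below are illustrative.

```python
import os
from pathlib import Path
from tempfile import TemporaryDirectory

from fuzzforge_cli.config import load_project_env

with TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "volumes" / "env").mkdir(parents=True)
    (root / "project" / ".fuzzforge").mkdir(parents=True)

    # Shared env provides the virtual key.
    (root / "volumes" / "env" / ".env").write_text(
        "OPENAI_API_KEY=sk-shared-virtual-key\n"
    )
    # Project env sets its own values; the empty key must not clobber the shared one.
    (root / "project" / ".fuzzforge" / ".env").write_text(
        "LLM_PROVIDER=openai\n"
        "OPENAI_API_KEY=\n"
    )

    for key in ("OPENAI_API_KEY", "LLM_PROVIDER"):
        os.environ.pop(key, None)  # clean slate for the demo

    loaded = load_project_env(root / "project")

    assert loaded == root / "project" / ".fuzzforge" / ".env"
    assert os.environ["OPENAI_API_KEY"] == "sk-shared-virtual-key"
    assert os.environ["LLM_PROVIDER"] == "openai"
```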
diff --git a/cli/src/fuzzforge_cli/constants.py b/cli/src/fuzzforge_cli/constants.py
index 231f5b7..493dfb0 100644
--- a/cli/src/fuzzforge_cli/constants.py
+++ b/cli/src/fuzzforge_cli/constants.py
@@ -57,10 +57,6 @@ SEVERITY_STYLES = {
"info": "bold cyan"
}
-# Default volume modes
-DEFAULT_VOLUME_MODE = "ro"
-SUPPORTED_VOLUME_MODES = ["ro", "rw"]
-
# Default export formats
DEFAULT_EXPORT_FORMAT = "sarif"
SUPPORTED_EXPORT_FORMATS = ["sarif", "json", "csv"]
diff --git a/cli/src/fuzzforge_cli/fuzzy.py b/cli/src/fuzzforge_cli/fuzzy.py
index 48f16a5..4cec4de 100644
--- a/cli/src/fuzzforge_cli/fuzzy.py
+++ b/cli/src/fuzzforge_cli/fuzzy.py
@@ -52,7 +52,6 @@ class FuzzyMatcher:
# Common parameter names
self.parameter_names = [
"target_path",
- "volume_mode",
"timeout",
"workflow",
"param",
@@ -70,7 +69,6 @@ class FuzzyMatcher:
# Common values
self.common_values = {
- "volume_mode": ["ro", "rw"],
"format": ["json", "csv", "html", "sarif"],
"severity": ["critical", "high", "medium", "low", "info"],
}
diff --git a/cli/src/fuzzforge_cli/main.py b/cli/src/fuzzforge_cli/main.py
index 24baa9c..f869c8c 100644
--- a/cli/src/fuzzforge_cli/main.py
+++ b/cli/src/fuzzforge_cli/main.py
@@ -19,6 +19,8 @@ from rich.traceback import install
from typing import Optional, List
import sys
+from .config import load_project_env
+
from .commands import (
workflows,
workflow_exec,
@@ -27,13 +29,16 @@ from .commands import (
config as config_cmd,
ai,
ingest,
+ worker,
)
-from .constants import DEFAULT_VOLUME_MODE
from .fuzzy import enhanced_command_not_found_handler
# Install rich traceback handler
install(show_locals=True)
+# Ensure environment variables are available before command execution
+load_project_env()
+
# Create console for rich output
console = Console()
@@ -184,10 +189,6 @@ def run_workflow(
None, "--param-file", "-f",
help="JSON file containing workflow parameters"
),
- volume_mode: str = typer.Option(
- DEFAULT_VOLUME_MODE, "--volume-mode", "-v",
- help="Volume mount mode: ro (read-only) or rw (read-write)"
- ),
timeout: Optional[int] = typer.Option(
None, "--timeout", "-t",
help="Execution timeout in seconds"
@@ -234,7 +235,6 @@ def run_workflow(
target_path=target,
params=params,
param_file=param_file,
- volume_mode=volume_mode,
timeout=timeout,
interactive=interactive,
wait=wait,
@@ -335,6 +335,7 @@ app.add_typer(finding_app, name="finding", help="🔍 View and analyze findings"
app.add_typer(monitor.app, name="monitor", help="📊 Real-time monitoring")
app.add_typer(ai.app, name="ai", help="🤖 AI integration features")
app.add_typer(ingest.app, name="ingest", help="🧠 Ingest knowledge into AI")
+app.add_typer(worker.app, name="worker", help="🔧 Manage Temporal workers")
# Help and utility commands
@app.command()
@@ -410,7 +411,7 @@ def main():
'init', 'status', 'config', 'clean',
'workflows', 'workflow',
'findings', 'finding',
- 'monitor', 'ai', 'ingest',
+ 'monitor', 'ai', 'ingest', 'worker',
'version'
]
diff --git a/cli/src/fuzzforge_cli/validation.py b/cli/src/fuzzforge_cli/validation.py
index 1f524f6..b8fdfb7 100644
--- a/cli/src/fuzzforge_cli/validation.py
+++ b/cli/src/fuzzforge_cli/validation.py
@@ -17,7 +17,7 @@ import re
from pathlib import Path
from typing import Any, Dict, List, Optional
-from .constants import SUPPORTED_VOLUME_MODES, SUPPORTED_EXPORT_FORMATS
+from .constants import SUPPORTED_EXPORT_FORMATS
from .exceptions import ValidationError
@@ -65,15 +65,6 @@ def validate_target_path(target_path: str, must_exist: bool = True) -> Path:
return path
-def validate_volume_mode(volume_mode: str) -> None:
- """Validate volume mode"""
- if volume_mode not in SUPPORTED_VOLUME_MODES:
- raise ValidationError(
- "volume_mode", volume_mode,
- f"one of: {', '.join(SUPPORTED_VOLUME_MODES)}"
- )
-
-
def validate_export_format(export_format: str) -> None:
"""Validate export format"""
if export_format not in SUPPORTED_EXPORT_FORMATS:
diff --git a/cli/src/fuzzforge_cli/worker_manager.py b/cli/src/fuzzforge_cli/worker_manager.py
index b6102e0..a9b3eaf 100644
--- a/cli/src/fuzzforge_cli/worker_manager.py
+++ b/cli/src/fuzzforge_cli/worker_manager.py
@@ -15,12 +15,17 @@ Manages on-demand startup and shutdown of Temporal workers using Docker Compose.
# Additional attribution and requirements are provided in the NOTICE file.
import logging
+import os
+import platform
import subprocess
import time
from pathlib import Path
from typing import Optional, Dict, Any
+import requests
+import yaml
from rich.console import Console
+from rich.status import Status
logger = logging.getLogger(__name__)
console = Console()
@@ -57,27 +62,206 @@ class WorkerManager:
def _find_compose_file(self) -> Path:
"""
- Auto-detect docker-compose.yml location.
+ Auto-detect docker-compose.yml location using multiple strategies.
- Searches upward from current directory to find the compose file.
+ Strategies (in order):
+ 1. Query backend API for host path
+ 2. Search upward for .fuzzforge marker directory
+ 3. Use FUZZFORGE_ROOT environment variable
+ 4. Fallback to current directory
+
+ Returns:
+ Path to docker-compose.yml
+
+ Raises:
+ FileNotFoundError: If docker-compose.yml cannot be located
"""
- current = Path.cwd()
+ # Strategy 1: Ask backend for location
+ try:
+ backend_url = os.getenv("FUZZFORGE_API_URL", "http://localhost:8000")
+ response = requests.get(f"{backend_url}/system/info", timeout=2)
+ if response.ok:
+ info = response.json()
+ if compose_path_str := info.get("docker_compose_path"):
+ compose_path = Path(compose_path_str)
+ if compose_path.exists():
+ logger.debug(f"Found docker-compose.yml via backend API: {compose_path}")
+ return compose_path
+ except Exception as e:
+ logger.debug(f"Backend API not reachable for path lookup: {e}")
- # Try current directory and parents
+ # Strategy 2: Search upward for .fuzzforge marker directory
+ current = Path.cwd()
for parent in [current] + list(current.parents):
- compose_path = parent / "docker-compose.yml"
+ if (parent / ".fuzzforge").exists():
+ compose_path = parent / "docker-compose.yml"
+ if compose_path.exists():
+ logger.debug(f"Found docker-compose.yml via .fuzzforge marker: {compose_path}")
+ return compose_path
+
+ # Strategy 3: Environment variable
+ if fuzzforge_root := os.getenv("FUZZFORGE_ROOT"):
+ compose_path = Path(fuzzforge_root) / "docker-compose.yml"
if compose_path.exists():
+ logger.debug(f"Found docker-compose.yml via FUZZFORGE_ROOT: {compose_path}")
return compose_path
- # Fallback to default location
- return Path("docker-compose.yml")
+ # Strategy 4: Fallback to current directory
+ compose_path = Path("docker-compose.yml")
+ if compose_path.exists():
+ return compose_path
- def _run_docker_compose(self, *args: str) -> subprocess.CompletedProcess:
+ raise FileNotFoundError(
+ "Cannot find docker-compose.yml. Ensure backend is running, "
+ "run from FuzzForge directory, or set FUZZFORGE_ROOT environment variable."
+ )
+
+ def _get_workers_dir(self) -> Path:
"""
- Run docker-compose command.
+ Get the workers directory path.
+
+ Uses same strategy as _find_compose_file():
+ 1. Query backend API
+ 2. Derive from compose_file location
+ 3. Use FUZZFORGE_ROOT
+
+ Returns:
+ Path to workers directory
+ """
+ # Strategy 1: Ask backend
+ try:
+ backend_url = os.getenv("FUZZFORGE_API_URL", "http://localhost:8000")
+ response = requests.get(f"{backend_url}/system/info", timeout=2)
+ if response.ok:
+ info = response.json()
+ if workers_dir_str := info.get("workers_dir"):
+ workers_dir = Path(workers_dir_str)
+ if workers_dir.exists():
+ return workers_dir
+ except Exception:
+ pass
+
+ # Strategy 2: Derive from compose file location
+ if self.compose_file.exists():
+ workers_dir = self.compose_file.parent / "workers"
+ if workers_dir.exists():
+ return workers_dir
+
+ # Strategy 3: Use environment variable
+ if fuzzforge_root := os.getenv("FUZZFORGE_ROOT"):
+ workers_dir = Path(fuzzforge_root) / "workers"
+ if workers_dir.exists():
+ return workers_dir
+
+ # Fallback
+ return Path("workers")
+
+ def _detect_platform(self) -> str:
+ """
+ Detect the current platform.
+
+ Returns:
+ Platform string: "linux/amd64" or "linux/arm64"
+ """
+ machine = platform.machine().lower()
+ system = platform.system().lower()
+
+ logger.debug(f"Platform detection: machine={machine}, system={system}")
+
+ # Normalize machine architecture
+ if machine in ["x86_64", "amd64", "x64"]:
+ detected = "linux/amd64"
+ elif machine in ["arm64", "aarch64", "armv8", "arm64v8"]:
+ detected = "linux/arm64"
+ else:
+ # Fallback to amd64 for unknown architectures
+ logger.warning(
+ f"Unknown architecture '{machine}' detected, falling back to linux/amd64. "
+ f"Please report this issue if you're experiencing problems."
+ )
+ detected = "linux/amd64"
+
+ logger.info(f"Detected platform: {detected}")
+ return detected
+
+ def _read_worker_metadata(self, vertical: str) -> dict:
+ """
+ Read worker metadata.yaml for a vertical.
Args:
- *args: Arguments to pass to docker-compose
+ vertical: Worker vertical name (e.g., "android", "python")
+
+ Returns:
+ Dictionary containing metadata, or empty dict if not found
+ """
+ try:
+ workers_dir = self._get_workers_dir()
+ metadata_file = workers_dir / vertical / "metadata.yaml"
+
+ if not metadata_file.exists():
+ logger.debug(f"No metadata.yaml found for {vertical}")
+ return {}
+
+ with open(metadata_file, 'r') as f:
+ return yaml.safe_load(f) or {}
+ except Exception as e:
+ logger.debug(f"Failed to read metadata for {vertical}: {e}")
+ return {}
+
+ def _select_dockerfile(self, vertical: str) -> str:
+ """
+ Select the appropriate Dockerfile for the current platform.
+
+ Args:
+ vertical: Worker vertical name
+
+ Returns:
+ Dockerfile name (e.g., "Dockerfile.amd64", "Dockerfile.arm64")
+ """
+ detected_platform = self._detect_platform()
+ metadata = self._read_worker_metadata(vertical)
+
+ if not metadata:
+ # No metadata: use default Dockerfile
+ logger.debug(f"No metadata for {vertical}, using Dockerfile")
+ return "Dockerfile"
+
+ platforms = metadata.get("platforms", {})
+
+ if not platforms:
+ # Metadata exists but no platform definitions
+ logger.debug(f"No platform definitions in metadata for {vertical}, using Dockerfile")
+ return "Dockerfile"
+
+ # Try detected platform first
+ if detected_platform in platforms:
+ dockerfile = platforms[detected_platform].get("dockerfile", "Dockerfile")
+ logger.info(f"✓ Selected {dockerfile} for {vertical} on {detected_platform}")
+ return dockerfile
+
+ # Fallback to default platform
+ default_platform = metadata.get("default_platform", "linux/amd64")
+ logger.warning(
+ f"Platform {detected_platform} not found in metadata for {vertical}, "
+ f"falling back to default: {default_platform}"
+ )
+
+ if default_platform in platforms:
+ dockerfile = platforms[default_platform].get("dockerfile", "Dockerfile.amd64")
+ logger.info(f"Using default platform {default_platform}: {dockerfile}")
+ return dockerfile
+
+ # Last resort: just use Dockerfile
+ logger.warning(f"No suitable Dockerfile found for {vertical}, using 'Dockerfile'")
+ return "Dockerfile"
+
+ def _run_docker_compose(self, *args: str, env: Optional[Dict[str, str]] = None) -> subprocess.CompletedProcess:
+ """
+ Run docker compose command with optional environment variables.
+
+ Args:
+ *args: Arguments to pass to docker compose
+ env: Optional environment variables to set
Returns:
CompletedProcess with result
@@ -85,14 +269,21 @@ class WorkerManager:
Raises:
subprocess.CalledProcessError: If command fails
"""
- cmd = ["docker-compose", "-f", str(self.compose_file)] + list(args)
+ cmd = ["docker", "compose", "-f", str(self.compose_file)] + list(args)
logger.debug(f"Running: {' '.join(cmd)}")
+ # Merge with current environment
+ full_env = os.environ.copy()
+ if env:
+ full_env.update(env)
+ logger.debug(f"Environment overrides: {env}")
+
return subprocess.run(
cmd,
capture_output=True,
text=True,
- check=True
+ check=True,
+ env=full_env
)
def _service_to_container_name(self, service_name: str) -> str:
@@ -135,21 +326,35 @@ class WorkerManager:
def start_worker(self, service_name: str) -> bool:
"""
- Start a worker service using docker-compose.
+ Start a worker service using docker-compose with platform-specific Dockerfile.
Args:
- service_name: Name of the Docker Compose service to start (e.g., "worker-python")
+ service_name: Name of the Docker Compose service to start (e.g., "worker-android")
Returns:
True if started successfully, False otherwise
"""
try:
- console.print(f"🚀 Starting worker: {service_name}")
+ # Extract vertical name from service name
+ vertical = service_name.replace("worker-", "")
- # Use docker-compose up to create and start the service
- result = self._run_docker_compose("up", "-d", service_name)
+ # Detect platform and select appropriate Dockerfile
+ detected_platform = self._detect_platform()
+ dockerfile = self._select_dockerfile(vertical)
- logger.info(f"Worker {service_name} started")
+ # Set environment variable for docker-compose
+ env_var_name = f"{vertical.upper()}_DOCKERFILE"
+ env = {env_var_name: dockerfile}
+
+ console.print(
+ f"🚀 Starting worker: {service_name} "
+ f"(platform: {detected_platform}, using {dockerfile})"
+ )
+
+ # Use docker-compose up with --build to ensure correct Dockerfile is used
+ result = self._run_docker_compose("up", "-d", "--build", service_name, env=env)
+
+ logger.info(f"Worker {service_name} started with {dockerfile}")
return True
except subprocess.CalledProcessError as e:
@@ -163,9 +368,67 @@ class WorkerManager:
console.print(f"❌ Unexpected error: {e}", style="red")
return False
+ def _get_container_state(self, service_name: str) -> str:
+ """
+ Get the current state of a container (running, created, restarting, etc.).
+
+ Args:
+ service_name: Name of the Docker Compose service
+
+ Returns:
+ Container state string (running, created, restarting, exited, etc.) or "unknown"
+ """
+ try:
+ container_name = self._service_to_container_name(service_name)
+ result = subprocess.run(
+ ["docker", "inspect", "-f", "{{.State.Status}}", container_name],
+ capture_output=True,
+ text=True,
+ check=False
+ )
+ if result.returncode == 0:
+ return result.stdout.strip()
+ return "unknown"
+ except Exception as e:
+ logger.debug(f"Failed to get container state: {e}")
+ return "unknown"
+
+ def _get_health_status(self, container_name: str) -> str:
+ """
+ Get container health status.
+
+ Args:
+ container_name: Docker container name
+
+ Returns:
+ Health status: "healthy", "unhealthy", "starting", "none", or "unknown"
+ """
+ try:
+ result = subprocess.run(
+ ["docker", "inspect", "-f", "{{.State.Health.Status}}", container_name],
+ capture_output=True,
+ text=True,
+ check=False
+ )
+
+ if result.returncode != 0:
+ return "unknown"
+
+ health_status = result.stdout.strip()
+
+ if health_status == "" or health_status == "":
+ return "none" # No health check defined
+
+ return health_status # healthy, unhealthy, starting
+
+ except Exception as e:
+ logger.debug(f"Failed to check health: {e}")
+ return "unknown"
+
def wait_for_worker_ready(self, service_name: str, timeout: Optional[int] = None) -> bool:
"""
Wait for a worker to be healthy and ready to process tasks.
+ Shows live progress updates during startup.
Args:
service_name: Name of the Docker Compose service
@@ -173,56 +436,74 @@ class WorkerManager:
Returns:
True if worker is ready, False if timeout reached
-
- Raises:
- TimeoutError: If worker doesn't become ready within timeout
"""
timeout = timeout or self.startup_timeout
start_time = time.time()
container_name = self._service_to_container_name(service_name)
+ last_status_msg = ""
- console.print("⏳ Waiting for worker to be ready...")
+ with Status("[bold cyan]Starting worker...", console=console, spinner="dots") as status:
+ while time.time() - start_time < timeout:
+ elapsed = int(time.time() - start_time)
+
+ # Get container state
+ container_state = self._get_container_state(service_name)
+
+ # Get health status
+ health_status = self._get_health_status(container_name)
+
+ # Build status message based on current state
+ if container_state == "created":
+ status_msg = f"[cyan]Worker starting... ({elapsed}s)[/cyan]"
+ elif container_state == "restarting":
+ status_msg = f"[yellow]Worker restarting... ({elapsed}s)[/yellow]"
+ elif container_state == "running":
+ if health_status == "starting":
+ status_msg = f"[cyan]Worker running, health check starting... ({elapsed}s)[/cyan]"
+ elif health_status == "unhealthy":
+ status_msg = f"[yellow]Worker running, health check: unhealthy ({elapsed}s)[/yellow]"
+ elif health_status == "healthy":
+ status_msg = f"[green]Worker healthy! ({elapsed}s)[/green]"
+ status.update(status_msg)
+ console.print(f"✅ Worker ready: {service_name} (took {elapsed}s)")
+ logger.info(f"Worker {service_name} is healthy (took {elapsed}s)")
+ return True
+ elif health_status == "none":
+ # No health check defined, assume ready
+ status_msg = f"[green]Worker running (no health check) ({elapsed}s)[/green]"
+ status.update(status_msg)
+ console.print(f"✅ Worker ready: {service_name} (took {elapsed}s)")
+ logger.info(f"Worker {service_name} is running, no health check (took {elapsed}s)")
+ return True
+ else:
+ status_msg = f"[cyan]Worker running ({elapsed}s)[/cyan]"
+ elif not container_state or container_state == "exited":
+ status_msg = f"[yellow]Waiting for container to start... ({elapsed}s)[/yellow]"
+ else:
+ status_msg = f"[cyan]Worker state: {container_state} ({elapsed}s)[/cyan]"
+
+ # Show helpful hints at certain intervals
+ if elapsed == 10:
+ status_msg += " [dim](pulling image if not cached)[/dim]"
+ elif elapsed == 30:
+ status_msg += " [dim](large images can take time)[/dim]"
+ elif elapsed == 60:
+ status_msg += " [dim](still working...)[/dim]"
+
+ # Update status if changed
+ if status_msg != last_status_msg:
+ status.update(status_msg)
+ last_status_msg = status_msg
+ logger.debug(f"Worker {service_name} - state: {container_state}, health: {health_status}")
- while time.time() - start_time < timeout:
- # Check if container is running
- if not self.is_worker_running(service_name):
- logger.debug(f"Worker {service_name} not running yet")
time.sleep(self.health_check_interval)
- continue
- # Check container health status
- try:
- result = subprocess.run(
- ["docker", "inspect", "-f", "{{.State.Health.Status}}", container_name],
- capture_output=True,
- text=True,
- check=False
- )
-
- health_status = result.stdout.strip()
-
- # If no health check is defined, assume healthy after running
- if health_status == "" or health_status == "":
- logger.info(f"Worker {service_name} is running (no health check)")
- console.print(f"✅ Worker ready: {service_name}")
- return True
-
- if health_status == "healthy":
- logger.info(f"Worker {service_name} is healthy")
- console.print(f"✅ Worker ready: {service_name}")
- return True
-
- logger.debug(f"Worker {service_name} health: {health_status}")
-
- except Exception as e:
- logger.debug(f"Failed to check health: {e}")
-
- time.sleep(self.health_check_interval)
-
- elapsed = time.time() - start_time
- logger.warning(f"Worker {service_name} did not become ready within {elapsed:.1f}s")
- console.print(f"⚠️ Worker startup timeout after {elapsed:.1f}s", style="yellow")
- return False
+ # Timeout reached
+ elapsed = int(time.time() - start_time)
+ logger.warning(f"Worker {service_name} did not become ready within {elapsed}s")
+ console.print(f"⚠️ Worker startup timeout after {elapsed}s", style="yellow")
+ console.print(f" Last state: {container_state}, health: {health_status}", style="dim")
+ return False
def stop_worker(self, service_name: str) -> bool:
"""
@@ -253,6 +534,75 @@ class WorkerManager:
console.print(f"❌ Unexpected error: {e}", style="red")
return False
+ def stop_all_workers(self) -> bool:
+ """
+ Stop all running FuzzForge worker containers.
+
+ This uses `docker stop` to stop worker containers individually,
+ avoiding the Docker Compose profile issue and preventing accidental
+ shutdown of core services.
+
+ Returns:
+ True if all workers stopped successfully, False otherwise
+ """
+ try:
+ console.print("🛑 Stopping all FuzzForge workers...")
+
+ # Get list of all running worker containers
+ result = subprocess.run(
+ ["docker", "ps", "--filter", "name=fuzzforge-worker-", "--format", "{{.Names}}"],
+ capture_output=True,
+ text=True,
+ check=False
+ )
+
+ running_workers = [name.strip() for name in result.stdout.splitlines() if name.strip()]
+
+ if not running_workers:
+ console.print("✓ No workers running")
+ return True
+
+ console.print(f"Found {len(running_workers)} running worker(s):")
+ for worker in running_workers:
+ console.print(f" - {worker}")
+
+ # Stop each worker container individually using docker stop
+ # This is safer than docker compose down and won't affect core services
+ failed_workers = []
+ for worker in running_workers:
+ try:
+ logger.info(f"Stopping {worker}...")
+ result = subprocess.run(
+ ["docker", "stop", worker],
+ capture_output=True,
+ text=True,
+ check=True,
+ timeout=30
+ )
+ console.print(f" ✓ Stopped {worker}")
+ except subprocess.CalledProcessError as e:
+ logger.error(f"Failed to stop {worker}: {e.stderr}")
+ failed_workers.append(worker)
+ console.print(f" ✗ Failed to stop {worker}", style="red")
+ except subprocess.TimeoutExpired:
+ logger.error(f"Timeout stopping {worker}")
+ failed_workers.append(worker)
+ console.print(f" ✗ Timeout stopping {worker}", style="red")
+
+ if failed_workers:
+ console.print(f"\n⚠️ {len(failed_workers)} worker(s) failed to stop", style="yellow")
+ console.print("💡 Try manually: docker stop " + " ".join(failed_workers), style="dim")
+ return False
+
+ console.print("\n✅ All workers stopped")
+ logger.info("All workers stopped successfully")
+ return True
+
+ except Exception as e:
+ logger.error(f"Unexpected error stopping workers: {e}")
+ console.print(f"❌ Unexpected error: {e}", style="red")
+ return False
+
def ensure_worker_running(
self,
worker_info: Dict[str, Any],
diff --git a/docker-compose.yml b/docker-compose.yml
index 271f7e6..aae0fb5 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -144,6 +144,103 @@ services:
networks:
- fuzzforge-network
+ # ============================================================================
+ # LLM Proxy - LiteLLM Gateway
+ # ============================================================================
+ llm-proxy:
+ image: ghcr.io/berriai/litellm:main-stable
+ container_name: fuzzforge-llm-proxy
+ depends_on:
+ llm-proxy-db:
+ condition: service_healthy
+ otel-collector:
+ condition: service_started
+ env_file:
+ - ./volumes/env/.env
+ environment:
+ PORT: 4000
+ DATABASE_URL: postgresql://litellm:litellm@llm-proxy-db:5432/litellm
+ STORE_MODEL_IN_DB: "True"
+ UI_USERNAME: ${UI_USERNAME:-fuzzforge}
+ UI_PASSWORD: ${UI_PASSWORD:-fuzzforge123}
+ OTEL_EXPORTER_OTLP_ENDPOINT: http://otel-collector:4317
+ OTEL_EXPORTER_OTLP_PROTOCOL: grpc
+ ANTHROPIC_API_KEY: ${LITELLM_ANTHROPIC_API_KEY:-}
+ OPENAI_API_KEY: ${LITELLM_OPENAI_API_KEY:-}
+ command:
+ - "--config"
+ - "/etc/litellm/proxy_config.yaml"
+ ports:
+ - "10999:4000" # Web UI + OpenAI-compatible API
+ volumes:
+ - litellm_proxy_data:/var/lib/litellm
+ - ./volumes/litellm/proxy_config.yaml:/etc/litellm/proxy_config.yaml:ro
+ networks:
+ - fuzzforge-network
+ healthcheck:
+ test: ["CMD-SHELL", "wget --no-verbose --tries=1 http://localhost:4000/health/liveliness || exit 1"]
+ interval: 30s
+ timeout: 10s
+ retries: 3
+ start_period: 40s
+ restart: unless-stopped
+
+ otel-collector:
+ image: otel/opentelemetry-collector:latest
+ container_name: fuzzforge-otel-collector
+ command: ["--config=/etc/otel-collector/config.yaml"]
+ volumes:
+ - ./volumes/otel/collector-config.yaml:/etc/otel-collector/config.yaml:ro
+ ports:
+ - "4317:4317"
+ - "4318:4318"
+ networks:
+ - fuzzforge-network
+ restart: unless-stopped
+
+ llm-proxy-db:
+ image: postgres:16
+ container_name: fuzzforge-llm-proxy-db
+ environment:
+ POSTGRES_DB: litellm
+ POSTGRES_USER: litellm
+ POSTGRES_PASSWORD: litellm
+ healthcheck:
+ test: ["CMD-SHELL", "pg_isready -d litellm -U litellm"]
+ interval: 5s
+ timeout: 5s
+ retries: 12
+ volumes:
+ - litellm_proxy_db:/var/lib/postgresql/data
+ networks:
+ - fuzzforge-network
+ restart: unless-stopped
+
+ # ============================================================================
+ # LLM Proxy Bootstrap - Seed providers and virtual keys
+ # ============================================================================
+ llm-proxy-bootstrap:
+ image: python:3.11-slim
+ container_name: fuzzforge-llm-proxy-bootstrap
+ depends_on:
+ llm-proxy:
+ condition: service_started
+ env_file:
+ - ./volumes/env/.env
+ environment:
+ PROXY_BASE_URL: http://llm-proxy:4000
+ ENV_FILE_PATH: /bootstrap/env/.env
+ UI_USERNAME: ${UI_USERNAME:-fuzzforge}
+ UI_PASSWORD: ${UI_PASSWORD:-fuzzforge123}
+ volumes:
+ - ./docker/scripts/bootstrap_llm_proxy.py:/app/bootstrap.py:ro
+ - ./volumes/env:/bootstrap/env
+ - litellm_proxy_data:/bootstrap/data
+ networks:
+ - fuzzforge-network
+ command: ["python", "/app/bootstrap.py"]
+ restart: "no"
+
# ============================================================================
# Vertical Worker: Rust/Native Security
# ============================================================================
@@ -217,9 +314,6 @@ services:
context: ./workers/python
dockerfile: Dockerfile
container_name: fuzzforge-worker-python
- profiles:
- - workers
- - python
depends_on:
postgresql:
condition: service_healthy
@@ -345,7 +439,7 @@ services:
worker-android:
build:
context: ./workers/android
- dockerfile: Dockerfile
+ dockerfile: ${ANDROID_DOCKERFILE:-Dockerfile.amd64}
container_name: fuzzforge-worker-android
profiles:
- workers
@@ -433,6 +527,9 @@ services:
PYTHONPATH: /app
PYTHONUNBUFFERED: 1
+ # Host filesystem paths (for CLI worker management)
+ FUZZFORGE_HOST_ROOT: ${PWD}
+
# Logging
LOG_LEVEL: INFO
ports:
@@ -458,10 +555,11 @@ services:
context: ./ai/agents/task_agent
dockerfile: Dockerfile
container_name: fuzzforge-task-agent
+ depends_on:
+ llm-proxy-bootstrap:
+ condition: service_completed_successfully
ports:
- "10900:8000"
- env_file:
- - ./volumes/env/.env
environment:
- PORT=8000
- PYTHONUNBUFFERED=1
@@ -558,6 +656,10 @@ volumes:
name: fuzzforge_worker_ossfuzz_cache
worker_ossfuzz_build:
name: fuzzforge_worker_ossfuzz_build
+ litellm_proxy_data:
+ name: fuzzforge_litellm_proxy_data
+ litellm_proxy_db:
+ name: fuzzforge_litellm_proxy_db
# Add more worker caches as you add verticals:
# worker_web_cache:
# worker_ios_cache:
@@ -591,6 +693,7 @@ networks:
# 4. Web UIs:
# - Temporal UI: http://localhost:8233
# - MinIO Console: http://localhost:9001 (user: fuzzforge, pass: fuzzforge123)
+# - LiteLLM Proxy: http://localhost:10999
#
# 5. Resource Usage (Baseline):
# - Temporal: ~500MB
diff --git a/docker/scripts/bootstrap_llm_proxy.py b/docker/scripts/bootstrap_llm_proxy.py
new file mode 100644
index 0000000..68f6745
--- /dev/null
+++ b/docker/scripts/bootstrap_llm_proxy.py
@@ -0,0 +1,636 @@
+"""Bootstrap the LiteLLM proxy with provider secrets and default virtual keys.
+
+The bootstrapper runs as a one-shot container during docker-compose startup.
+It performs the following actions:
+
+ 1. Waits for the proxy health endpoint to respond.
+ 2. Collects upstream provider API keys from the shared .env file (plus any
+ legacy copies) and mirrors them into a proxy-specific env file
+ (volumes/env/.env.litellm) so only the proxy container can access them.
+ 3. Emits a default virtual key for the task agent by calling /key/generate,
+ persisting the generated token back into volumes/env/.env so the agent can
+ authenticate through the proxy instead of using raw provider secrets.
+ 4. Keeps the process idempotent: existing keys are reused and their allowed
+ model list is refreshed instead of issuing duplicates on every run.
+"""
+
+from __future__ import annotations
+
+import json
+import os
+import sys
+import time
+import urllib.error
+import urllib.parse
+import urllib.request
+from dataclasses import dataclass
+from pathlib import Path
+from typing import Iterable, Mapping
+
+PROXY_BASE_URL = os.getenv("PROXY_BASE_URL", "http://llm-proxy:4000").rstrip("/")
+ENV_FILE_PATH = Path(os.getenv("ENV_FILE_PATH", "/bootstrap/env/.env"))
+LITELLM_ENV_FILE_PATH = Path(
+ os.getenv("LITELLM_ENV_FILE_PATH", "/bootstrap/env/.env.litellm")
+)
+LEGACY_ENV_FILE_PATH = Path(
+ os.getenv("LEGACY_ENV_FILE_PATH", "/bootstrap/env/.env.bifrost")
+)
+MAX_WAIT_SECONDS = int(os.getenv("LITELLM_PROXY_WAIT_SECONDS", "120"))
+
+
+@dataclass(frozen=True)
+class VirtualKeySpec:
+ """Configuration for a virtual key to be provisioned."""
+ env_var: str
+ alias: str
+ user_id: str
+ budget_env_var: str
+ duration_env_var: str
+ default_budget: float
+ default_duration: str
+
+
+# Multiple virtual keys for different services
+VIRTUAL_KEYS: tuple[VirtualKeySpec, ...] = (
+ VirtualKeySpec(
+ env_var="OPENAI_API_KEY",
+ alias="fuzzforge-cli",
+ user_id="fuzzforge-cli",
+ budget_env_var="CLI_BUDGET",
+ duration_env_var="CLI_DURATION",
+ default_budget=100.0,
+ default_duration="30d",
+ ),
+ VirtualKeySpec(
+ env_var="TASK_AGENT_API_KEY",
+ alias="fuzzforge-task-agent",
+ user_id="fuzzforge-task-agent",
+ budget_env_var="TASK_AGENT_BUDGET",
+ duration_env_var="TASK_AGENT_DURATION",
+ default_budget=25.0,
+ default_duration="30d",
+ ),
+ VirtualKeySpec(
+ env_var="COGNEE_API_KEY",
+ alias="fuzzforge-cognee",
+ user_id="fuzzforge-cognee",
+ budget_env_var="COGNEE_BUDGET",
+ duration_env_var="COGNEE_DURATION",
+ default_budget=50.0,
+ default_duration="30d",
+ ),
+)
+
+
+@dataclass(frozen=True)
+class ProviderSpec:
+ name: str
+ litellm_env_var: str
+ alias_env_var: str
+ source_env_vars: tuple[str, ...]
+
+
+# Support fresh LiteLLM variables while gracefully migrating legacy env
+# aliases on first boot.
+PROVIDERS: tuple[ProviderSpec, ...] = (
+ ProviderSpec(
+ "openai",
+ "OPENAI_API_KEY",
+ "LITELLM_OPENAI_API_KEY",
+ ("LITELLM_OPENAI_API_KEY", "BIFROST_OPENAI_KEY"),
+ ),
+ ProviderSpec(
+ "anthropic",
+ "ANTHROPIC_API_KEY",
+ "LITELLM_ANTHROPIC_API_KEY",
+ ("LITELLM_ANTHROPIC_API_KEY", "BIFROST_ANTHROPIC_KEY"),
+ ),
+ ProviderSpec(
+ "gemini",
+ "GEMINI_API_KEY",
+ "LITELLM_GEMINI_API_KEY",
+ ("LITELLM_GEMINI_API_KEY", "BIFROST_GEMINI_KEY"),
+ ),
+ ProviderSpec(
+ "mistral",
+ "MISTRAL_API_KEY",
+ "LITELLM_MISTRAL_API_KEY",
+ ("LITELLM_MISTRAL_API_KEY", "BIFROST_MISTRAL_KEY"),
+ ),
+ ProviderSpec(
+ "openrouter",
+ "OPENROUTER_API_KEY",
+ "LITELLM_OPENROUTER_API_KEY",
+ ("LITELLM_OPENROUTER_API_KEY", "BIFROST_OPENROUTER_KEY"),
+ ),
+)
+
+PROVIDER_LOOKUP: dict[str, ProviderSpec] = {spec.name: spec for spec in PROVIDERS}
+
+
+def log(message: str) -> None:
+ print(f"[litellm-bootstrap] {message}", flush=True)
+
+
+def read_lines(path: Path) -> list[str]:
+ if not path.exists():
+ return []
+ return path.read_text().splitlines()
+
+
+def write_lines(path: Path, lines: Iterable[str]) -> None:
+ material = "\n".join(lines)
+ if material and not material.endswith("\n"):
+ material += "\n"
+ path.parent.mkdir(parents=True, exist_ok=True)
+ path.write_text(material)
+
+
+def read_env_file() -> list[str]:
+ if not ENV_FILE_PATH.exists():
+ raise FileNotFoundError(
+ f"Expected env file at {ENV_FILE_PATH}. Copy volumes/env/.env.template first."
+ )
+ return read_lines(ENV_FILE_PATH)
+
+
+def write_env_file(lines: Iterable[str]) -> None:
+ write_lines(ENV_FILE_PATH, lines)
+
+
+def read_litellm_env_file() -> list[str]:
+ return read_lines(LITELLM_ENV_FILE_PATH)
+
+
+def write_litellm_env_file(lines: Iterable[str]) -> None:
+ write_lines(LITELLM_ENV_FILE_PATH, lines)
+
+
+def read_legacy_env_file() -> Mapping[str, str]:
+ lines = read_lines(LEGACY_ENV_FILE_PATH)
+ return parse_env_lines(lines)
+
+
+def set_env_value(lines: list[str], key: str, value: str) -> tuple[list[str], bool]:
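+ """Insert or update key=value in the env lines in place, preserving indentation; return (lines, changed)."""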
+ prefix = f"{key}="
+ new_line = f"{prefix}{value}"
+ for idx, line in enumerate(lines):
+ stripped = line.lstrip()
+ if not stripped or stripped.startswith("#"):
+ continue
+ if stripped.startswith(prefix):
+ if stripped == new_line:
+ return lines, False
+ indent = line[: len(line) - len(stripped)]
+ lines[idx] = f"{indent}{new_line}"
+ return lines, True
+ lines.append(new_line)
+ return lines, True
+
+
+def parse_env_lines(lines: list[str]) -> dict[str, str]:
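+ """Parse KEY=VALUE pairs from env-file lines, ignoring blanks and comments."""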
+ mapping: dict[str, str] = {}
+ for raw_line in lines:
+ stripped = raw_line.strip()
+ if not stripped or stripped.startswith("#") or "=" not in stripped:
+ continue
+ key, value = stripped.split("=", 1)
+ mapping[key] = value
+ return mapping
+
+
+def wait_for_proxy() -> None:
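+ """Poll the proxy health endpoints until one answers, raising TimeoutError after MAX_WAIT_SECONDS."""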
+ health_paths = ("/health/liveliness", "/health", "/")
+ deadline = time.time() + MAX_WAIT_SECONDS
+ attempt = 0
+ while time.time() < deadline:
+ attempt += 1
+ for path in health_paths:
+ url = f"{PROXY_BASE_URL}{path}"
+ try:
+ with urllib.request.urlopen(url) as response: # noqa: S310
+ if response.status < 400:
+ log(f"Proxy responded on {path} (attempt {attempt})")
+ return
+ except urllib.error.URLError as exc:
+ log(f"Proxy not ready yet ({path}): {exc}")
+ time.sleep(3)
+ raise TimeoutError(f"Timed out waiting for proxy at {PROXY_BASE_URL}")
+
+
+def request_json(
+ path: str,
+ *,
+ method: str = "GET",
+ payload: Mapping[str, object] | None = None,
+ auth_token: str | None = None,
+) -> tuple[int, str]:
+ url = f"{PROXY_BASE_URL}{path}"
+ data = None
+ headers = {"Accept": "application/json"}
+ if auth_token:
+ headers["Authorization"] = f"Bearer {auth_token}"
+ if payload is not None:
+ data = json.dumps(payload).encode("utf-8")
+ headers["Content-Type"] = "application/json"
+ request = urllib.request.Request(url, data=data, headers=headers, method=method)
+ try:
+ with urllib.request.urlopen(request) as response: # noqa: S310
+ body = response.read().decode("utf-8")
+ return response.status, body
+ except urllib.error.HTTPError as exc:
+ body = exc.read().decode("utf-8")
+ return exc.code, body
+
+
+def get_master_key(env_map: Mapping[str, str]) -> str:
+ candidate = os.getenv("LITELLM_MASTER_KEY") or env_map.get("LITELLM_MASTER_KEY")
+ if not candidate:
+ raise RuntimeError(
+ "LITELLM_MASTER_KEY is not set. Add it to volumes/env/.env before starting Docker."
+ )
+ value = candidate.strip()
+ if not value:
+ raise RuntimeError(
+ "LITELLM_MASTER_KEY is blank. Provide a non-empty value in the env file."
+ )
+ return value
+
+
+def gather_provider_keys(
+ env_lines: list[str],
+ env_map: dict[str, str],
+ legacy_map: Mapping[str, str],
+) -> tuple[dict[str, str], list[str], bool]:
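+ """Discover provider API keys from the shared env, legacy env, and process environment, mirroring them into LITELLM_* alias entries."""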
+ updated_lines = list(env_lines)
+ discovered: dict[str, str] = {}
+ changed = False
+
+ for spec in PROVIDERS:
+ value: str | None = None
+ for source_var in spec.source_env_vars:
+ candidate = env_map.get(source_var) or legacy_map.get(source_var) or os.getenv(
+ source_var
+ )
+ if not candidate:
+ continue
+ stripped = candidate.strip()
+ if stripped:
+ value = stripped
+ break
+ if not value:
+ continue
+
+ discovered[spec.litellm_env_var] = value
+ updated_lines, alias_changed = set_env_value(
+ updated_lines, spec.alias_env_var, value
+ )
+ if alias_changed:
+ env_map[spec.alias_env_var] = value
+ changed = True
+
+ return discovered, updated_lines, changed
+
+
+def ensure_litellm_env(provider_values: Mapping[str, str]) -> None:
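+ """Mirror discovered provider secrets into the proxy-only env file (.env.litellm)."""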
+ if not provider_values:
+ log("No provider secrets discovered; skipping LiteLLM env update")
+ return
+ lines = read_litellm_env_file()
+ updated_lines = list(lines)
+ changed = False
+ for env_var, value in provider_values.items():
+ updated_lines, var_changed = set_env_value(updated_lines, env_var, value)
+ if var_changed:
+ changed = True
+ if changed or not lines:
+ write_litellm_env_file(updated_lines)
+ log(f"Wrote provider secrets to {LITELLM_ENV_FILE_PATH}")
+
+
+def current_env_key(env_map: Mapping[str, str], env_var: str) -> str | None:
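+ """Return the key stored under env_var, or None when it is blank or still a sk-proxy- placeholder."""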
+ candidate = os.getenv(env_var) or env_map.get(env_var)
+ if not candidate:
+ return None
+ value = candidate.strip()
+ if not value or value.startswith("sk-proxy-"):
+ return None
+ return value
+
+
+def collect_default_models(env_map: Mapping[str, str]) -> list[str]:
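+ """Resolve the default model list from LITELLM_DEFAULT_MODELS, falling back to LITELLM_PROVIDER/LITELLM_MODEL."""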
+ explicit = (
+ os.getenv("LITELLM_DEFAULT_MODELS")
+ or env_map.get("LITELLM_DEFAULT_MODELS")
+ or ""
+ )
+ models: list[str] = []
+ if explicit:
+ models.extend(
+ model.strip()
+ for model in explicit.split(",")
+ if model.strip()
+ )
+ if models:
+ return sorted(dict.fromkeys(models))
+
+ configured_model = (
+ os.getenv("LITELLM_MODEL") or env_map.get("LITELLM_MODEL") or ""
+ ).strip()
+ configured_provider = (
+ os.getenv("LITELLM_PROVIDER") or env_map.get("LITELLM_PROVIDER") or ""
+ ).strip()
+
+ if configured_model:
+ if "/" in configured_model:
+ models.append(configured_model)
+ elif configured_provider:
+ models.append(f"{configured_provider}/{configured_model}")
+ else:
+ log(
+ "LITELLM_MODEL is set without a provider; configure LITELLM_PROVIDER or "
+ "use the provider/model format (e.g. openai/gpt-4o-mini)."
+ )
+ elif configured_provider:
+ log(
+ "LITELLM_PROVIDER configured without a default model. Bootstrap will issue an "
+ "unrestricted virtual key allowing any proxy-registered model."
+ )
+
+ return sorted(dict.fromkeys(models))
+
+
+def fetch_existing_key_record(master_key: str, key_value: str) -> Mapping[str, object] | None:
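+ """Return the proxy's /key/info record for key_value, or None if the key is unknown or the lookup fails."""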
+ encoded = urllib.parse.quote_plus(key_value)
+ status, body = request_json(f"/key/info?key={encoded}", auth_token=master_key)
+ if status != 200:
+ log(f"Key lookup failed ({status}); treating OPENAI_API_KEY as new")
+ return None
+ try:
+ data = json.loads(body)
+ except json.JSONDecodeError:
+ log("Key info response was not valid JSON; ignoring")
+ return None
+ if isinstance(data, Mapping) and data.get("key"):
+ return data
+ return None
+
+
+def fetch_key_by_alias(master_key: str, alias: str) -> str | None:
+ """Fetch existing key value by alias from LiteLLM proxy."""
+ status, body = request_json("/key/info", auth_token=master_key)
+ if status != 200:
+ return None
+ try:
+ data = json.loads(body)
+ except json.JSONDecodeError:
+ return None
+ if isinstance(data, dict) and "keys" in data:
+ for key_info in data.get("keys", []):
+ if isinstance(key_info, dict) and key_info.get("key_alias") == alias:
+ return str(key_info.get("key", "")).strip() or None
+ return None
+
+
+def generate_virtual_key(
+ master_key: str,
+ models: list[str],
+ spec: VirtualKeySpec,
+ env_map: Mapping[str, str],
+) -> str:
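+ """Create a virtual key for spec via /key/generate, deleting a stale alias and retrying if the alias already exists."""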
+ budget_str = os.getenv(spec.budget_env_var) or env_map.get(spec.budget_env_var) or str(spec.default_budget)
+ try:
+ budget = float(budget_str)
+ except ValueError:
+ budget = spec.default_budget
+
+ duration = os.getenv(spec.duration_env_var) or env_map.get(spec.duration_env_var) or spec.default_duration
+
+ payload: dict[str, object] = {
+ "key_alias": spec.alias,
+ "user_id": spec.user_id,
+ "duration": duration,
+ "max_budget": budget,
+ "metadata": {
+ "provisioned_by": "bootstrap",
+ "service": spec.alias,
+ "default_models": models,
+ },
+ "key_type": "llm_api",
+ }
+ if models:
+ payload["models"] = models
+ status, body = request_json(
+ "/key/generate", method="POST", payload=payload, auth_token=master_key
+ )
+ if status == 400 and "already exists" in body.lower():
+ # Key alias already exists but .env is out of sync (e.g., after docker prune)
+ # Delete the old key and regenerate
+ log(f"Key alias '{spec.alias}' already exists in database but not in .env; deleting and regenerating")
+ # Try to delete by alias using POST /key/delete with key_aliases array
+ delete_payload = {"key_aliases": [spec.alias]}
+ delete_status, delete_body = request_json(
+ "/key/delete", method="POST", payload=delete_payload, auth_token=master_key
+ )
+ if delete_status not in {200, 201}:
+ log(f"Warning: Could not delete existing key alias {spec.alias} ({delete_status}): {delete_body}")
+ # Continue anyway and try to generate
+ else:
+ log(f"Deleted existing key alias {spec.alias}")
+
+ # Retry generation
+ status, body = request_json(
+ "/key/generate", method="POST", payload=payload, auth_token=master_key
+ )
+ if status not in {200, 201}:
+ raise RuntimeError(f"Failed to generate virtual key for {spec.alias} ({status}): {body}")
+ try:
+ data = json.loads(body)
+ except json.JSONDecodeError as exc:
+ raise RuntimeError(f"Virtual key response for {spec.alias} was not valid JSON") from exc
+ if isinstance(data, Mapping):
+ key_value = str(data.get("key") or data.get("token") or "").strip()
+ if key_value:
+ log(f"Generated new LiteLLM virtual key for {spec.alias} (budget: ${budget}, duration: {duration})")
+ return key_value
+ raise RuntimeError(f"Virtual key response for {spec.alias} did not include a key field")
+
+
+def update_virtual_key(
+ master_key: str,
+ key_value: str,
+ models: list[str],
+ spec: VirtualKeySpec,
+) -> None:
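+ """Refresh the allowed-model list of an existing virtual key via /key/update."""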
+ if not models:
+ return
+ payload: dict[str, object] = {
+ "key": key_value,
+ "models": models,
+ }
+ status, body = request_json(
+ "/key/update", method="POST", payload=payload, auth_token=master_key
+ )
+ if status != 200:
+ log(f"Virtual key update for {spec.alias} skipped ({status}): {body}")
+ else:
+ log(f"Refreshed allowed models for {spec.alias}")
+
+
+def persist_key_to_env(new_key: str, env_var: str) -> None:
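+ """Write the issued key into the shared env file and export it into the current process environment."""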
+ lines = read_env_file()
+ updated_lines, changed = set_env_value(lines, env_var, new_key)
+ # Always update the environment variable, even if file wasn't changed
+ os.environ[env_var] = new_key
+ if changed:
+ write_env_file(updated_lines)
+ log(f"Persisted {env_var} to {ENV_FILE_PATH}")
+ else:
+ log(f"{env_var} already up-to-date in env file")
+
+
+def ensure_virtual_key(
+ master_key: str,
+ models: list[str],
+ env_map: Mapping[str, str],
+ spec: VirtualKeySpec,
+) -> str:
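+ """Reuse the existing virtual key for spec if the proxy still knows it, otherwise generate a new one."""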
+ allowed_models: list[str] = []
+ sync_flag = os.getenv("LITELLM_SYNC_VIRTUAL_KEY_MODELS", "").strip().lower()
+ if models and (sync_flag in {"1", "true", "yes", "on"} or models == ["*"]):
+ allowed_models = models
+ existing_key = current_env_key(env_map, spec.env_var)
+ if existing_key:
+ record = fetch_existing_key_record(master_key, existing_key)
+ if record:
+ log(f"Reusing existing LiteLLM virtual key for {spec.alias}")
+ if allowed_models:
+ update_virtual_key(master_key, existing_key, allowed_models, spec)
+ return existing_key
+ log(f"Existing {spec.env_var} not registered with proxy; generating new key")
+
+ new_key = generate_virtual_key(master_key, models, spec, env_map)
+ if allowed_models:
+ update_virtual_key(master_key, new_key, allowed_models, spec)
+ return new_key
+
+
+def _split_model_identifier(model: str) -> tuple[str | None, str]:
+ if "/" in model:
+ provider, short_name = model.split("/", 1)
+ return provider.lower().strip() or None, short_name.strip()
+ return None, model.strip()
+
+
+def ensure_models_registered(
+ master_key: str,
+ models: list[str],
+ env_map: Mapping[str, str],
+) -> None:
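+ """Register each provider/model pair via /model/new, skipping models without a provider secret or already present."""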
+ if not models:
+ return
+ for model in models:
+ provider, short_name = _split_model_identifier(model)
+ if not provider or not short_name:
+ log(f"Skipping model '{model}' (no provider segment)")
+ continue
+ spec = PROVIDER_LOOKUP.get(provider)
+ if not spec:
+ log(f"No provider spec registered for '{provider}'; skipping model '{model}'")
+ continue
+ provider_secret = (
+ env_map.get(spec.alias_env_var)
+ or env_map.get(spec.litellm_env_var)
+ or os.getenv(spec.alias_env_var)
+ or os.getenv(spec.litellm_env_var)
+ )
+ if not provider_secret:
+ log(
+ f"Provider secret for '{provider}' not found; skipping model registration"
+ )
+ continue
+
+ api_key_reference = f"os.environ/{spec.alias_env_var}"
+ payload: dict[str, object] = {
+ "model_name": model,
+ "litellm_params": {
+ "model": short_name,
+ "custom_llm_provider": provider,
+ "api_key": api_key_reference,
+ },
+ "model_info": {
+ "provider": provider,
+ "description": "Auto-registered during bootstrap",
+ },
+ }
+
+ status, body = request_json(
+ "/model/new", method="POST", payload=payload, auth_token=master_key
+ )
+ if status in {200, 201}:
+ log(f"Registered LiteLLM model '{model}'")
+ continue
+ try:
+ data = json.loads(body)
+ except json.JSONDecodeError:
+ data = body
+ error_message = (
+ data.get("error") if isinstance(data, Mapping) else str(data)
+ )
+ if status == 409 or (
+ isinstance(error_message, str)
+ and "already" in error_message.lower()
+ ):
+ log(f"Model '{model}' already present; skipping")
+ continue
+ log(f"Failed to register model '{model}' ({status}): {error_message}")
+
+
+def main() -> int:
+ log("Bootstrapping LiteLLM proxy")
+ try:
+ wait_for_proxy()
+ env_lines = read_env_file()
+ env_map = parse_env_lines(env_lines)
+ legacy_map = read_legacy_env_file()
+ master_key = get_master_key(env_map)
+
+ provider_values, updated_env_lines, env_changed = gather_provider_keys(
+ env_lines, env_map, legacy_map
+ )
+ if env_changed:
+ write_env_file(updated_env_lines)
+ env_map = parse_env_lines(updated_env_lines)
+ log("Updated LiteLLM provider aliases in shared env file")
+
+ ensure_litellm_env(provider_values)
+
+ models = collect_default_models(env_map)
+ if models:
+ log("Default models for virtual keys: %s" % ", ".join(models))
+ models_for_key = models
+ else:
+ log(
+ "No default models configured; provisioning virtual keys without model "
+ "restrictions (model-agnostic)."
+ )
+ models_for_key = ["*"]
+
+ # Generate virtual keys for each service
+ for spec in VIRTUAL_KEYS:
+ virtual_key = ensure_virtual_key(master_key, models_for_key, env_map, spec)
+ persist_key_to_env(virtual_key, spec.env_var)
+
+ # Register models if any were specified
+ if models:
+ ensure_models_registered(master_key, models, env_map)
+
+ log("Bootstrap complete")
+ return 0
+ except Exception as exc: # pragma: no cover - startup failure reported to logs
+ log(f"Bootstrap failed: {exc}")
+ return 1
+
+
+if __name__ == "__main__":
+ sys.exit(main())
diff --git a/docs/blog/2025-01-16-v0.7.0-temporal-workers-release.md b/docs/blog/2025-01-16-v0.7.0-temporal-workers-release.md
index ef8a641..329ca7a 100644
--- a/docs/blog/2025-01-16-v0.7.0-temporal-workers-release.md
+++ b/docs/blog/2025-01-16-v0.7.0-temporal-workers-release.md
@@ -225,7 +225,7 @@ docker compose up -d # All workers start
Set up AI workflows with API keys:
```bash
-cp volumes/env/.env.example volumes/env/.env
+cp volumes/env/.env.template volumes/env/.env
# Edit .env and add your API keys (OpenAI, Anthropic, etc.)
```
diff --git a/docs/docs/how-to/docker-setup.md b/docs/docs/how-to/docker-setup.md
index 39c0de9..f4c2fa0 100644
--- a/docs/docs/how-to/docker-setup.md
+++ b/docs/docs/how-to/docker-setup.md
@@ -110,7 +110,32 @@ fuzzforge workflow run secret_detection ./codebase
### Manual Worker Management
-Start specific workers when needed:
+**Quick Reference - Workflow to Worker Mapping:**
+
+| Workflow | Worker Service | Docker Command |
+|----------|----------------|----------------|
+| `security_assessment`, `python_sast`, `llm_analysis`, `atheris_fuzzing` | worker-python | `docker compose up -d worker-python` |
+| `android_static_analysis` | worker-android | `docker compose up -d worker-android` |
+| `cargo_fuzzing` | worker-rust | `docker compose up -d worker-rust` |
+| `ossfuzz_campaign` | worker-ossfuzz | `docker compose up -d worker-ossfuzz` |
+| `llm_secret_detection`, `trufflehog_detection`, `gitleaks_detection` | worker-secrets | `docker compose up -d worker-secrets` |
+
+FuzzForge CLI provides convenient commands for managing workers:
+
+```bash
+# List all workers and their status
+ff worker list
+ff worker list --all # Include stopped workers
+
+# Start a specific worker
+ff worker start python
+ff worker start android --build # Rebuild before starting
+
+# Stop all workers
+ff worker stop
+```
+
+You can also use Docker commands directly:
```bash
# Start a single worker
@@ -123,6 +148,33 @@ docker compose --profile workers up -d
docker stop fuzzforge-worker-ossfuzz
```
+### Stopping Workers Properly
+
+The easiest way to stop workers is using the CLI:
+
+```bash
+# Stop all running workers (recommended)
+ff worker stop
+```
+
+This command safely stops all worker containers without affecting core services.
+
+Alternatively, you can use Docker commands:
+
+```bash
+# Stop individual worker
+docker stop fuzzforge-worker-python
+
+# Stop all workers using docker compose
+# Note: this requires the --profile flag because workers are in profiles,
+# and it also brings down core services, not just the workers
+docker compose --profile workers down
+```
+
+**Important:** Workers use Docker Compose profiles to prevent auto-starting. When using Docker commands directly:
+- `docker compose down` (without `--profile workers`) does NOT stop workers
+- Workers remain running unless explicitly stopped with the profile flag or `docker stop`
+- Use `ff worker stop` for the safest option that won't affect core services
+
### Resource Comparison
| Command | Workers Started | RAM Usage |
@@ -171,7 +223,7 @@ FuzzForge requires `volumes/env/.env` to start. This file contains API keys and
```bash
# Copy the example file
-cp volumes/env/.env.example volumes/env/.env
+cp volumes/env/.env.template volumes/env/.env
# Edit to add your API keys (if using AI features)
nano volumes/env/.env
diff --git a/docs/docs/how-to/litellm-hot-swap.md b/docs/docs/how-to/litellm-hot-swap.md
new file mode 100644
index 0000000..8c1d138
--- /dev/null
+++ b/docs/docs/how-to/litellm-hot-swap.md
@@ -0,0 +1,179 @@
+---
+title: "Hot-Swap LiteLLM Models"
+description: "Register OpenAI and Anthropic models with the bundled LiteLLM proxy and switch them on the task agent without downtime."
+---
+
+LiteLLM sits between the task agent and upstream providers, so every model change
+is just an API call. This guide walks through registering OpenAI and Anthropic
+models, updating the virtual key, and exercising the A2A hot-swap flow.
+
+## Prerequisites
+
+- `docker compose up llm-proxy llm-proxy-db task-agent`
+- Provider secrets in `volumes/env/.env`:
+ - `LITELLM_OPENAI_API_KEY`
+ - `LITELLM_ANTHROPIC_API_KEY`
+- Master key (`LITELLM_MASTER_KEY`) and task-agent virtual key (auto-generated
+ during bootstrap)
+
+> UI access uses `UI_USERNAME` / `UI_PASSWORD` (defaults: `fuzzforge` /
+> `fuzzforge123`). Change them by exporting new values before running compose.
+
+## Register Provider Models
+
+Use the admin API to register the models the proxy should expose. The snippet
+below creates aliases for OpenAI `gpt-5`, `gpt-5-mini`, and Anthropic
+`claude-sonnet-4-5`.
+
+```bash
+export MASTER_KEY=$(awk -F= '$1=="LITELLM_MASTER_KEY"{print $2}' volumes/env/.env)
+export OPENAI_API_KEY=$(awk -F= '$1=="OPENAI_API_KEY"{print $2}' volumes/env/.env)
+python - <<'PY'
+import os, requests
+master = os.environ['MASTER_KEY'].strip()
+base = 'http://localhost:10999'
+models = [
+ {
+ "model_name": "openai/gpt-5",
+ "litellm_params": {
+ "model": "gpt-5",
+ "custom_llm_provider": "openai",
+ "api_key": "os.environ/LITELLM_OPENAI_API_KEY"
+ },
+ "model_info": {
+ "provider": "openai",
+ "description": "OpenAI GPT-5"
+ }
+ },
+ {
+ "model_name": "openai/gpt-5-mini",
+ "litellm_params": {
+ "model": "gpt-5-mini",
+ "custom_llm_provider": "openai",
+ "api_key": "os.environ/LITELLM_OPENAI_API_KEY"
+ },
+ "model_info": {
+ "provider": "openai",
+ "description": "OpenAI GPT-5 mini"
+ }
+ },
+ {
+ "model_name": "anthropic/claude-sonnet-4-5",
+ "litellm_params": {
+ "model": "claude-sonnet-4-5",
+ "custom_llm_provider": "anthropic",
+ "api_key": "os.environ/LITELLM_ANTHROPIC_API_KEY"
+ },
+ "model_info": {
+ "provider": "anthropic",
+ "description": "Anthropic Claude Sonnet 4.5"
+ }
+ }
+]
+for payload in models:
+ resp = requests.post(
+ f"{base}/model/new",
+ headers={"Authorization": f"Bearer {master}", "Content-Type": "application/json"},
+ json=payload,
+ timeout=60,
+ )
+ if resp.status_code not in (200, 201, 409):
+ raise SystemExit(f"Failed to register {payload['model_name']}: {resp.status_code} {resp.text}")
+ print(payload['model_name'], '=>', resp.status_code)
+PY
+```
+
+Each entry stores the upstream secret by reference (`os.environ/...`) so the
+raw API key never leaves the container environment.
+
+## Relax Virtual Key Model Restrictions
+
+Let the agent key call every model on the proxy:
+
+```bash
+export MASTER_KEY=$(awk -F= '$1=="LITELLM_MASTER_KEY"{print $2}' volumes/env/.env)
+export VK=$(awk -F= '$1=="OPENAI_API_KEY"{print $2}' volumes/env/.env)
+python - <<'PY'
+import os, requests, json
+resp = requests.post(
+ 'http://localhost:10999/key/update',
+ headers={
+ 'Authorization': f"Bearer {os.environ['MASTER_KEY'].strip()}",
+ 'Content-Type': 'application/json'
+ },
+ json={'key': os.environ['VK'].strip(), 'models': []},
+ timeout=60,
+)
+print(json.dumps(resp.json(), indent=2))
+PY
+```
+
+Restart the task agent so it sees the refreshed key:
+
+```bash
+docker compose restart task-agent
+```
+
+## Hot-Swap With The A2A Helper
+
+Switch models without restarting the service:
+
+```bash
+# Ensure the CLI reads the latest virtual key
+export OPENAI_API_KEY=$(awk -F= '$1=="OPENAI_API_KEY"{print $2}' volumes/env/.env)
+
+# OpenAI gpt-5 alias
+python ai/agents/task_agent/a2a_hot_swap.py \
+ --url http://localhost:10900/a2a/litellm_agent \
+ --model openai gpt-5 \
+ --context switch-demo
+
+# Confirm the response comes from the new model
+python ai/agents/task_agent/a2a_hot_swap.py \
+ --url http://localhost:10900/a2a/litellm_agent \
+ --message "Which model am I using?" \
+ --context switch-demo
+
+# Swap to gpt-5-mini
+python ai/agents/task_agent/a2a_hot_swap.py --url http://localhost:10900/a2a/litellm_agent --model openai gpt-5-mini --context switch-demo
+
+# Swap to Anthropic Claude Sonnet 4.5
+python ai/agents/task_agent/a2a_hot_swap.py --url http://localhost:10900/a2a/litellm_agent --model anthropic claude-sonnet-4-5 --context switch-demo
+```
+
+> Each invocation reuses the same conversation context (`switch-demo`) so you
+> can confirm the active provider by asking follow-up questions.
+
+## Resetting The Proxy (Optional)
+
+To wipe the LiteLLM state and rerun bootstrap:
+
+```bash
+docker compose down llm-proxy llm-proxy-db llm-proxy-bootstrap
+
+docker volume rm fuzzforge_litellm_proxy_data fuzzforge_litellm_proxy_db
+
+docker compose up -d llm-proxy-db llm-proxy llm-proxy-bootstrap
+```
+
+After the proxy is healthy, rerun the registration script and key update. The
+bootstrap container mirrors secrets into `.env.litellm` and reissues the task
+agent key automatically.
+
+## How The Pieces Fit Together
+
+1. **LiteLLM Proxy** exposes OpenAI-compatible routes and stores provider
+ metadata in Postgres.
+2. **Bootstrap Container** waits for `/health/liveliness`, mirrors secrets into
+ `.env.litellm`, registers any models you script, and keeps the virtual key in
+ sync with the discovered model list.
+3. **Task Agent** calls the proxy via `FF_LLM_PROXY_BASE_URL`. The hot-swap tool
+ updates the agent’s runtime configuration, so switching providers is just a
+ control message.
+4. **Virtual Keys** carry quotas and allowed models. Setting the `models` array
+ to `[]` lets the key use anything registered on the proxy.
+
+Keep the master key and generated virtual keys somewhere safe—they grant full
+admin and agent access respectively. When you add a new provider (e.g., Ollama)
+just register the model via `/model/new`, update the key if needed, and repeat
+the hot-swap steps.
diff --git a/docs/docs/how-to/llm-proxy.md b/docs/docs/how-to/llm-proxy.md
new file mode 100644
index 0000000..4d6a0db
--- /dev/null
+++ b/docs/docs/how-to/llm-proxy.md
@@ -0,0 +1,194 @@
+---
+title: "Run the LLM Proxy"
+description: "Run the LiteLLM gateway that ships with FuzzForge and connect it to the task agent."
+---
+
+## Overview
+
+FuzzForge routes every LLM request through a LiteLLM proxy so that usage can be
+metered, priced, and rate limited per user. Docker Compose starts the proxy in a
+hardened container, while a bootstrap job seeds upstream provider secrets and
+issues a virtual key for the task agent automatically.
+
+LiteLLM exposes the OpenAI-compatible APIs (`/v1/*`) plus a rich admin UI. All
+traffic stays on your network and upstream credentials never leave the proxy
+container.
+
+## Before You Start
+
+1. Copy `volumes/env/.env.template` to `volumes/env/.env` and set the basics:
+ - `LITELLM_MASTER_KEY` — admin token used to manage the proxy
+ - `LITELLM_SALT_KEY` — random string used to encrypt provider credentials
+ - Provider secrets under `LITELLM_<PROVIDER>_API_KEY` (for example
+ `LITELLM_OPENAI_API_KEY`)
+ - Leave `OPENAI_API_KEY=sk-proxy-default`; the bootstrap job replaces it with a
+ LiteLLM-issued virtual key
+2. When running tools outside Docker, change `FF_LLM_PROXY_BASE_URL` to the
+ published host port (`http://localhost:10999`). Inside Docker the default
+ value `http://llm-proxy:4000` already resolves to the container.
+
+## Start the Proxy
+
+```bash
+docker compose up llm-proxy
+```
+
+The service publishes two things:
+
+- HTTP API + admin UI on `http://localhost:10999`
+- Persistent SQLite state inside the named volume
+ `fuzzforge_litellm_proxy_data`
+
+The UI login uses the `UI_USERNAME` / `UI_PASSWORD` pair (defaults to
+`fuzzforge` / `fuzzforge123`). To change them, set the environment variables
+before you run `docker compose up`:
+
+```bash
+export UI_USERNAME=myadmin
+export UI_PASSWORD=super-secret
+docker compose up llm-proxy
+```
+
+You can also edit the values directly in `docker-compose.yml` if you prefer to
+check them into a different secrets manager.
+
+Proxy-wide settings now live in `volumes/litellm/proxy_config.yaml`. By
+default it enables `store_model_in_db` and `store_prompts_in_spend_logs`, which
+lets the UI display request/response payloads for new calls. Update this file
+if you need additional LiteLLM options and restart the `llm-proxy` container.
+
+LiteLLM's health endpoint lives at `/health/liveliness`. You can verify it from
+another terminal:
+
+```bash
+curl http://localhost:10999/health/liveliness
+```
+
+## What the Bootstrapper Does
+
+During startup the `llm-proxy-bootstrap` container performs three actions:
+
+1. **Wait for the proxy** — Blocks until `/health/liveliness` becomes healthy.
+2. **Mirror provider secrets** — Reads `volumes/env/.env` and writes any
+ `LITELLM_*_API_KEY` values into `volumes/env/.env.litellm`. The file is
+ created automatically on first boot; if you delete it, bootstrap will
+ recreate it and the proxy continues to read secrets from `.env`.
+3. **Issue the default virtual key** — Calls `/key/generate` with the master key
+ and persists the generated token back into `volumes/env/.env` (replacing the
+ `sk-proxy-default` placeholder). The key is scoped to
+ `LITELLM_DEFAULT_MODELS` when that variable is set; otherwise it uses the
+ model from `LITELLM_MODEL`.
+
+The sequence is idempotent. Existing provider secrets and virtual keys are
+reused on subsequent runs, and the allowed-model list is refreshed via
+`/key/update` if you change the defaults.
+
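+If you want to confirm what the bootstrapper produced, the proxy's admin API can
+report on the issued key. The snippet below is a minimal sketch, assuming you run
+it from the repository root, have `requests` installed, and publish the proxy on
+`localhost:10999`:
+
+```python
+"""Sketch: check that the bootstrap-issued virtual key is known to the proxy."""
+import requests
+
+PROXY = "http://localhost:10999"
+
+# Parse the shared env file the same way the bootstrap job does
+env: dict[str, str] = {}
+with open("volumes/env/.env") as fh:
+    for raw in fh:
+        line = raw.strip()
+        if line and not line.startswith("#") and "=" in line:
+            key, _, value = line.partition("=")
+            env[key] = value
+
+resp = requests.get(
+    f"{PROXY}/key/info",
+    params={"key": env["OPENAI_API_KEY"]},  # bootstrap-issued virtual key
+    headers={"Authorization": f"Bearer {env['LITELLM_MASTER_KEY']}"},
+    timeout=30,
+)
+resp.raise_for_status()
+print(resp.json())  # alias, budget, and allowed models for the key
+```
+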
+## Managing Virtual Keys
+
+LiteLLM keys act as per-user credentials. The task agent's default key (alias
+`fuzzforge-task-agent`) is created automatically during bootstrap. You can issue
+more keys for teammates or CI jobs with the same management API:
+
+```bash
+curl http://localhost:10999/key/generate \
+ -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "key_alias": "demo-user",
+ "user_id": "demo",
+ "models": ["openai/gpt-4o-mini"],
+ "duration": "30d",
+ "max_budget": 50,
+ "metadata": {"team": "sandbox"}
+ }'
+```
+
+Use `/key/update` to adjust budgets or the allowed-model list on existing keys:
+
+```bash
+curl http://localhost:10999/key/update \
+ -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "key": "sk-...",
+ "models": ["openai/*", "anthropic/*"],
+ "max_budget": 100
+ }'
+```
+
+The admin UI (navigate to `http://localhost:10999/ui`) provides equivalent
+controls for creating keys, routing models, auditing spend, and exporting logs.
+
+## Wiring the Task Agent
+
+The task agent already expects to talk to the proxy. Confirm these values in
+`volumes/env/.env` before launching the stack:
+
+```bash
+FF_LLM_PROXY_BASE_URL=http://llm-proxy:4000 # or http://localhost:10999 when outside Docker
+OPENAI_API_KEY=sk-proxy-default # replaced automatically by the bootstrap job with a LiteLLM virtual key
+LITELLM_MODEL=openai/gpt-5
+LITELLM_PROVIDER=openai
+```
+
+Restart the agent container after changing environment variables so the process
+picks up the updates.
+
+To validate the integration end to end, call the proxy directly:
+
+```bash
+curl -X POST http://localhost:10999/v1/chat/completions \
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "model": "openai/gpt-4o-mini",
+ "messages": [{"role": "user", "content": "Proxy health check"}]
+ }'
+```
+
+A JSON response indicates the proxy can reach your upstream provider using the
+mirrored secrets.
+
+## Local Runtimes (Ollama, etc.)
+
+LiteLLM supports non-hosted providers as well. To route requests to a local
+runtime such as Ollama:
+
+1. Set the appropriate provider key in the env file
+ (for Ollama, point LiteLLM at `OLLAMA_API_BASE` inside the container).
+2. Add the passthrough model either from the UI (**Models → Add Model**) or
+ by calling `/model/new` with the master key.
+3. Update `LITELLM_DEFAULT_MODELS` (and regenerate the virtual key if you want
+the default key to include it).
+
+The task agent keeps using the same OpenAI-compatible surface while LiteLLM
+handles the translation to your runtime.
+
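+As a concrete illustration, the sketch below registers a local Ollama model
+through `/model/new`. The model name (`llama3`), the `api_base` address, and the
+master-key lookup are assumptions; adjust them to match your runtime:
+
+```python
+"""Sketch: register a local Ollama model with the LiteLLM proxy."""
+import os
+import requests
+
+PROXY = "http://localhost:10999"
+MASTER_KEY = os.environ["LITELLM_MASTER_KEY"]  # export it before running
+
+payload = {
+    "model_name": "ollama/llama3",
+    "litellm_params": {
+        "model": "llama3",
+        "custom_llm_provider": "ollama",
+        # Where the proxy container can reach your Ollama instance
+        "api_base": "http://host.docker.internal:11434",
+    },
+    "model_info": {"provider": "ollama", "description": "Local Ollama runtime"},
+}
+
+resp = requests.post(
+    f"{PROXY}/model/new",
+    headers={"Authorization": f"Bearer {MASTER_KEY}"},
+    json=payload,
+    timeout=60,
+)
+print(resp.status_code, resp.text)
+```
+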
+## Next Steps
+
+- Explore [LiteLLM's documentation](https://docs.litellm.ai/docs/simple_proxy)
+ for advanced routing, cost controls, and observability hooks.
+- Configure Slack/Prometheus integrations from the UI to monitor usage.
+- Rotate the master key periodically and store it in your secrets manager, as it
+ grants full admin access to the proxy.
+
+## Observability
+
+LiteLLM ships with OpenTelemetry hooks for traces and metrics. This repository
+already includes an OTLP collector (`otel-collector` service) and mounts a
+default configuration that forwards traces to standard output. To wire it up:
+
+1. Edit `volumes/otel/collector-config.yaml` if you want to forward to Jaeger,
+ Datadog, etc. The initial config uses the logging exporter so you can see
+ spans immediately via `docker compose logs -f otel-collector`.
+2. Customize `volumes/litellm/proxy_config.yaml` if you need additional
+ callbacks; `general_settings.otel: true` and `litellm_settings.callbacks:
+ ["otel"]` are already present so no extra code changes are required.
+3. (Optional) Override `OTEL_EXPORTER_OTLP_*` environment variables in
+ `docker-compose.yml` or your shell to point at a remote collector.
+
+After updating the configs, run `docker compose up -d otel-collector llm-proxy`
+and generate a request (for example, trigger `ff workflow run llm_analysis`).
+New traces will show up in the collector logs or whichever backend you
+configured. See the official LiteLLM guide for advanced exporter options:
+https://docs.litellm.ai/docs/observability/opentelemetry_integration.
diff --git a/docs/docs/how-to/troubleshooting.md b/docs/docs/how-to/troubleshooting.md
index 8784ef3..38165ae 100644
--- a/docs/docs/how-to/troubleshooting.md
+++ b/docs/docs/how-to/troubleshooting.md
@@ -33,7 +33,7 @@ The required `volumes/env/.env` file is missing. Docker Compose needs this file
**How to fix:**
```bash
# Create the environment file from the template
-cp volumes/env/.env.example volumes/env/.env
+cp volumes/env/.env.template volumes/env/.env
# Restart Docker Compose
docker compose -f docker-compose.yml down
@@ -106,6 +106,46 @@ File upload to MinIO failed or worker can't download target.
```
- Reduce the number of concurrent workflows if your system is resource-constrained.
+### Workflow requires worker not running
+
+**What's happening?**
+You see a warning message like:
+```
+⚠️ Could not check worker requirements: Cannot find docker-compose.yml.
+ Ensure backend is running, run from FuzzForge directory, or set
+ FUZZFORGE_ROOT environment variable.
+ Continuing without worker management...
+```
+
+Or the workflow fails to start because the required worker isn't running.
+
+**How to fix:**
+Start the worker required for your workflow before running it:
+
+| Workflow | Worker Required | Startup Command |
+|----------|----------------|-----------------|
+| `android_static_analysis` | worker-android | `docker compose up -d worker-android` |
+| `security_assessment` | worker-python | `docker compose up -d worker-python` |
+| `python_sast` | worker-python | `docker compose up -d worker-python` |
+| `llm_analysis` | worker-python | `docker compose up -d worker-python` |
+| `atheris_fuzzing` | worker-python | `docker compose up -d worker-python` |
+| `ossfuzz_campaign` | worker-ossfuzz | `docker compose up -d worker-ossfuzz` |
+| `cargo_fuzzing` | worker-rust | `docker compose up -d worker-rust` |
+| `llm_secret_detection` | worker-secrets | `docker compose up -d worker-secrets` |
+| `trufflehog_detection` | worker-secrets | `docker compose up -d worker-secrets` |
+| `gitleaks_detection` | worker-secrets | `docker compose up -d worker-secrets` |
+
+**Check worker status:**
+```bash
+# Check if a specific worker is running
+docker compose ps worker-android
+
+# Check all workers
+docker compose ps | grep worker
+```
+
+**Note:** Workers don't auto-start by default to save system resources. For more details on worker management, see the [Docker Setup guide](docker-setup.md#worker-management).
+
---
## Service Connectivity Issues
diff --git a/docs/docs/reference/cli-reference.md b/docs/docs/reference/cli-reference.md
new file mode 100644
index 0000000..dd7b4d2
--- /dev/null
+++ b/docs/docs/reference/cli-reference.md
@@ -0,0 +1,616 @@
+# FuzzForge CLI Reference
+
+Complete reference for the FuzzForge CLI (`ff` command). Use this as your quick lookup for all commands, options, and examples.
+
+---
+
+## Global Options
+
+| Option | Description |
+|--------|-------------|
+| `--help`, `-h` | Show help message |
+| `--version`, `-v` | Show version information |
+
+---
+
+## Core Commands
+
+### `ff init`
+
+Initialize a new FuzzForge project in the current directory.
+
+**Usage:**
+```bash
+ff init [OPTIONS]
+```
+
+**Options:**
+- `--name`, `-n` — Project name (defaults to current directory name)
+- `--api-url`, `-u` — FuzzForge API URL (defaults to http://localhost:8000)
+- `--force`, `-f` — Force initialization even if project already exists
+
+**Examples:**
+```bash
+ff init # Initialize with defaults
+ff init --name my-project # Set custom project name
+ff init --api-url http://prod:8000 # Use custom API URL
+```
+
+---
+
+### `ff status`
+
+Show project and latest execution status.
+
+**Usage:**
+```bash
+ff status
+```
+
+**Example Output:**
+```
+📊 Project Status
+ Project: my-security-project
+ API URL: http://localhost:8000
+
+Latest Execution:
+ Run ID: security_scan-a1b2c3
+ Workflow: security_assessment
+ Status: COMPLETED
+ Started: 2 hours ago
+```
+
+---
+
+### `ff config`
+
+Manage project configuration.
+
+**Usage:**
+```bash
+ff config # Show all config
+ff config <key>          # Get specific value
+ff config <key> <value>  # Set value
+```
+
+**Examples:**
+```bash
+ff config # Display all settings
+ff config api_url # Get API URL
+ff config api_url http://prod:8000 # Set API URL
+```
+
+---
+
+### `ff clean`
+
+Clean old execution data and findings.
+
+**Usage:**
+```bash
+ff clean [OPTIONS]
+```
+
+**Options:**
+- `--days`, `-d` — Remove data older than this many days (default: 90)
+- `--dry-run` — Show what would be deleted without deleting
+
+**Examples:**
+```bash
+ff clean # Clean data older than 90 days
+ff clean --days 30 # Clean data older than 30 days
+ff clean --dry-run # Preview what would be deleted
+```
+
+---
+
+## Workflow Commands
+
+### `ff workflows`
+
+Browse and list available workflows.
+
+**Usage:**
+```bash
+ff workflows [COMMAND]
+```
+
+**Subcommands:**
+- `list` — List all available workflows
+- `info <workflow>` — Show detailed workflow information
+- `params <workflow>` — Show workflow parameters
+
+**Examples:**
+```bash
+ff workflows list # List all workflows
+ff workflows info python_sast # Show workflow details
+ff workflows params python_sast # Show parameters
+```
+
+---
+
+### `ff workflow`
+
+Execute and manage individual workflows.
+
+**Usage:**
+```bash
+ff workflow <subcommand>
+```
+
+**Subcommands:**
+
+#### `ff workflow run`
+
+Execute a security testing workflow.
+
+**Usage:**
+```bash
+ff workflow run <workflow> <target> [params...] [OPTIONS]
+```
+
+**Arguments:**
+- `<workflow>` — Workflow name
+- `<target>` — Target path to analyze
+- `[params...]` — Parameters as `key=value` pairs
+
+**Options:**
+- `--param-file`, `-f` — JSON file containing workflow parameters
+- `--timeout`, `-t` — Execution timeout in seconds
+- `--interactive` / `--no-interactive`, `-i` / `-n` — Interactive parameter input (default: interactive)
+- `--wait`, `-w` — Wait for execution to complete
+- `--live`, `-l` — Start live monitoring after execution
+- `--auto-start` / `--no-auto-start` — Automatically start required worker
+- `--auto-stop` / `--no-auto-stop` — Automatically stop worker after completion
+- `--fail-on` — Fail build if findings match SARIF level (error, warning, note, info, all, none)
+- `--export-sarif` — Export SARIF results to file after completion
+
+**Examples:**
+```bash
+# Basic workflow execution
+ff workflow run python_sast ./project
+
+# With parameters
+ff workflow run python_sast ./project check_secrets=true
+
+# CI/CD integration - fail on errors
+ff workflow run python_sast ./project --wait --no-interactive \
+ --fail-on error --export-sarif results.sarif
+
+# With parameter file
+ff workflow run python_sast ./project --param-file config.json
+
+# Live monitoring for fuzzing
+ff workflow run atheris_fuzzing ./project --live
+```
+
+#### `ff workflow status`
+
+Check status of latest or specific workflow execution.
+
+**Usage:**
+```bash
+ff workflow status [run_id]
+```
+
+**Examples:**
+```bash
+ff workflow status # Show latest execution status
+ff workflow status python_sast-abc123 # Show specific execution
+```
+
+#### `ff workflow history`
+
+Show execution history.
+
+**Usage:**
+```bash
+ff workflow history [OPTIONS]
+```
+
+**Options:**
+- `--limit`, `-l` — Number of executions to show (default: 10)
+
+**Example:**
+```bash
+ff workflow history --limit 20
+```
+
+#### `ff workflow retry`
+
+Retry a failed workflow execution.
+
+**Usage:**
+```bash
+ff workflow retry <run_id>
+```
+
+**Example:**
+```bash
+ff workflow retry python_sast-abc123
+```
+
+---
+
+## Finding Commands
+
+### `ff findings`
+
+Browse all findings across executions.
+
+**Usage:**
+```bash
+ff findings [COMMAND]
+```
+
+**Subcommands:**
+
+#### `ff findings list`
+
+List findings from a specific run.
+
+**Usage:**
+```bash
+ff findings list [run_id] [OPTIONS]
+```
+
+**Options:**
+- `--format` — Output format: table, json, sarif (default: table)
+- `--save` — Save findings to file
+
+**Examples:**
+```bash
+ff findings list # Show latest findings
+ff findings list python_sast-abc123 # Show specific run
+ff findings list --format json # JSON output
+ff findings list --format sarif --save # Export SARIF
+```
+
+#### `ff findings export`
+
+Export findings to various formats.
+
+**Usage:**
+```bash
+ff findings export <run_id> [OPTIONS]
+```
+
+**Options:**
+- `--format` — Output format: json, sarif, csv
+- `--output`, `-o` — Output file path
+
+**Example:**
+```bash
+ff findings export python_sast-abc123 --format sarif --output results.sarif
+```
+
+#### `ff findings history`
+
+Show finding history across multiple runs.
+
+**Usage:**
+```bash
+ff findings history [OPTIONS]
+```
+
+**Options:**
+- `--limit`, `-l` — Number of runs to include (default: 10)
+
+---
+
+### `ff finding`
+
+View and analyze individual findings.
+
+**Usage:**
+```bash
+ff finding [id] # Show latest or specific finding
+ff finding show <run_id> --rule <rule_id>  # Show specific finding detail
+```
+
+**Examples:**
+```bash
+ff finding # Show latest finding
+ff finding python_sast-abc123 # Show specific run findings
+ff finding show python_sast-abc123 --rule f2cf5e3e # Show specific finding
+```
+
+---
+
+## Worker Management Commands
+
+### `ff worker`
+
+Manage Temporal workers for workflow execution.
+
+**Usage:**
+```bash
+ff worker <subcommand>
+```
+
+**Subcommands:**
+
+#### `ff worker list`
+
+List FuzzForge workers and their status.
+
+**Usage:**
+```bash
+ff worker list [OPTIONS]
+```
+
+**Options:**
+- `--all`, `-a` — Show all workers (including stopped)
+
+**Examples:**
+```bash
+ff worker list # Show running workers
+ff worker list --all # Show all workers
+```
+
+**Example Output:**
+```
+FuzzForge Workers
+┏━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┓
+┃ Worker ┃ Status ┃ Uptime ┃
+┡━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━━━┩
+│ android │ ● Running │ 5 minutes ago │
+│ python │ ● Running │ 10 minutes ago │
+└─────────┴───────────┴────────────────┘
+
+✅ 2 worker(s) running
+```
+
+#### `ff worker start`
+
+Start a specific worker.
+
+**Usage:**
+```bash
+ff worker start <worker> [OPTIONS]
+```
+
+**Arguments:**
+- `<worker>` — Worker name (e.g., python, android, rust, secrets)
+
+**Options:**
+- `--build` — Rebuild worker image before starting
+
+**Examples:**
+```bash
+ff worker start python # Start Python worker
+ff worker start android --build # Rebuild and start Android worker
+```
+
+**Available Workers:**
+- `python` — Python security analysis and fuzzing
+- `android` — Android APK analysis
+- `rust` — Rust fuzzing and analysis
+- `secrets` — Secret detection workflows
+- `ossfuzz` — OSS-Fuzz integration
+
+#### `ff worker stop`
+
+Stop all running FuzzForge workers.
+
+**Usage:**
+```bash
+ff worker stop [OPTIONS]
+```
+
+**Options:**
+- `--all` — Stop all workers (default behavior, flag for clarity)
+
+**Example:**
+```bash
+ff worker stop
+```
+
+**Note:** This command stops only worker containers, leaving core services (backend, temporal, minio) running.
+
+---
+
+## Monitoring Commands
+
+### `ff monitor`
+
+Real-time monitoring for running workflows.
+
+**Usage:**
+```bash
+ff monitor [COMMAND]
+```
+
+**Subcommands:**
+- `live <run_id>` — Live monitoring for a specific execution
+- `stats <run_id>` — Show statistics for fuzzing workflows
+
+**Examples:**
+```bash
+ff monitor live atheris-abc123 # Monitor fuzzing campaign
+ff monitor stats atheris-abc123 # Show fuzzing statistics
+```
+
+---
+
+## AI Integration Commands
+
+### `ff ai`
+
+AI-powered analysis and assistance.
+
+**Usage:**
+```bash
+ff ai [COMMAND]
+```
+
+**Subcommands:**
+- `analyze <run_id>` — Analyze findings with AI
+- `explain <finding_id>` — Get AI explanation of a finding
+- `remediate <finding_id>` — Get remediation suggestions
+
+**Examples:**
+```bash
+ff ai analyze python_sast-abc123 # Analyze all findings
+ff ai explain python_sast-abc123:finding1 # Explain specific finding
+ff ai remediate python_sast-abc123:finding1 # Get fix suggestions
+```
+
+---
+
+## Knowledge Ingestion Commands
+
+### `ff ingest`
+
+Ingest knowledge into the AI knowledge base.
+
+**Usage:**
+```bash
+ff ingest [COMMAND]
+```
+
+**Subcommands:**
+- `file <path>` — Ingest a file
+- `directory <path>` — Ingest directory contents
+- `workflow <workflow>` — Ingest workflow documentation
+
+**Examples:**
+```bash
+ff ingest file ./docs/security.md # Ingest single file
+ff ingest directory ./docs # Ingest directory
+ff ingest workflow python_sast # Ingest workflow docs
+```
+
+---
+
+## Common Workflow Examples
+
+### CI/CD Integration
+
+```bash
+# Run security scan in CI, fail on errors
+ff workflow run python_sast . \
+ --wait \
+ --no-interactive \
+ --fail-on error \
+ --export-sarif results.sarif
+```
+
+### Local Development
+
+```bash
+# Quick security check
+ff workflow run python_sast ./my-code
+
+# Check specific file types
+ff workflow run python_sast . file_extensions='[".py",".js"]'
+
+# Interactive parameter configuration
+ff workflow run python_sast . --interactive
+```
+
+### Fuzzing Workflows
+
+```bash
+# Start fuzzing with live monitoring
+ff workflow run atheris_fuzzing ./project --live
+
+# Long-running fuzzing campaign
+ff workflow run ossfuzz_campaign ./project \
+ --auto-start \
+ duration=3600 \
+ --live
+```
+
+### Worker Management
+
+```bash
+# Check which workers are running
+ff worker list
+
+# Start needed worker manually
+ff worker start python --build
+
+# Stop all workers when done
+ff worker stop
+```
+
+---
+
+## Configuration Files
+
+### Project Config (`.fuzzforge/config.json`)
+
+```json
+{
+ "project_name": "my-security-project",
+ "api_url": "http://localhost:8000",
+ "default_workflow": "python_sast",
+ "auto_start_workers": true,
+ "auto_stop_workers": false
+}
+```
+
+### Parameter File Example
+
+```json
+{
+ "check_secrets": true,
+ "file_extensions": [".py", ".js", ".go"],
+ "severity_threshold": "medium",
+ "exclude_patterns": ["**/test/**", "**/vendor/**"]
+}
+```
+
+---
+
+## Exit Codes
+
+| Code | Meaning |
+|------|---------|
+| 0 | Success |
+| 1 | General error |
+| 2 | Findings matched `--fail-on` criteria |
+| 3 | Worker startup failed |
+| 4 | Workflow execution failed |
+
+---
+
+## Environment Variables
+
+| Variable | Description | Default |
+|----------|-------------|---------|
+| `FUZZFORGE_API_URL` | Backend API URL | http://localhost:8000 |
+| `FUZZFORGE_ROOT` | FuzzForge installation directory | Auto-detected |
+| `FUZZFORGE_DEBUG` | Enable debug logging | false |
+
+---
+
+## Tips and Best Practices
+
+1. **Use `--no-interactive` in CI/CD** — Prevents prompts that would hang automated pipelines
+2. **Use `--fail-on` for quality gates** — Fail builds based on finding severity
+3. **Export SARIF for tool integration** — Most security tools support SARIF format
+4. **Let workflows auto-start workers** — More efficient than manually managing workers
+5. **Use `--wait` with `--export-sarif`** — Ensures results are available before export
+6. **Check `ff worker list` regularly** — Helps manage system resources
+7. **Use parameter files for complex configs** — Easier to version control and reuse
+
+---
+
+## Related Documentation
+
+- [Docker Setup](../how-to/docker-setup.md) — Worker management and Docker configuration
+- [Getting Started](../tutorial/getting-started.md) — Complete setup guide
+- [Workflow Guide](../how-to/create-workflow.md) — Detailed workflow documentation
+- [CI/CD Integration](../how-to/cicd-integration.md) — CI/CD setup examples
+
+---
+
+**Need Help?**
+
+```bash
+ff --help # General help
+ff workflow run --help # Command-specific help
+ff worker --help # Worker management help
+```
diff --git a/docs/docs/tutorial/getting-started.md b/docs/docs/tutorial/getting-started.md
index 2049963..b376258 100644
--- a/docs/docs/tutorial/getting-started.md
+++ b/docs/docs/tutorial/getting-started.md
@@ -28,7 +28,7 @@ cd fuzzforge_ai
Create the environment configuration file:
```bash
-cp volumes/env/.env.example volumes/env/.env
+cp volumes/env/.env.template volumes/env/.env
```
This file is required for FuzzForge to start. You can leave it with default values if you're only using basic workflows.
@@ -89,9 +89,26 @@ curl http://localhost:8000/health
# Should return: {"status":"healthy"}
```
-### Start the Python Worker
+### Start Workers for Your Workflows
-Workers don't auto-start by default (saves RAM). Start the Python worker for your first workflow:
+Workers don't auto-start by default (saves RAM). You need to start the worker required for the workflow you want to run.
+
+**Workflow-to-Worker Mapping:**
+
+| Workflow | Worker Required | Startup Command |
+|----------|----------------|-----------------|
+| `security_assessment` | worker-python | `docker compose up -d worker-python` |
+| `python_sast` | worker-python | `docker compose up -d worker-python` |
+| `llm_analysis` | worker-python | `docker compose up -d worker-python` |
+| `atheris_fuzzing` | worker-python | `docker compose up -d worker-python` |
+| `android_static_analysis` | worker-android | `docker compose up -d worker-android` |
+| `cargo_fuzzing` | worker-rust | `docker compose up -d worker-rust` |
+| `ossfuzz_campaign` | worker-ossfuzz | `docker compose up -d worker-ossfuzz` |
+| `llm_secret_detection` | worker-secrets | `docker compose up -d worker-secrets` |
+| `trufflehog_detection` | worker-secrets | `docker compose up -d worker-secrets` |
+| `gitleaks_detection` | worker-secrets | `docker compose up -d worker-secrets` |
+
+**For your first workflow (security_assessment), start the Python worker:**
```bash
# Start the Python worker
@@ -102,7 +119,20 @@ docker compose ps worker-python
# Should show: Up (healthy)
```
-**Note:** Workers use Docker Compose profiles and only start when needed. For your first workflow run, it's safer to start the worker manually. Later, the CLI can auto-start workers on demand.
+**For other workflows, start the appropriate worker:**
+
+```bash
+# Example: For Android analysis
+docker compose up -d worker-android
+
+# Example: For Rust fuzzing
+docker compose up -d worker-rust
+
+# Check all running workers
+docker compose ps | grep worker
+```
+
+**Note:** Workers use Docker Compose profiles and only start when needed. For your first workflow run, it's safer to start the worker manually. Later, the CLI can auto-start workers on demand. If you see a warning about worker requirements, ensure you've started the correct worker for your workflow.
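+
+When a workflow is finished, you can stop its worker again to free the RAM (substitute the service name from the table above):
+
+```bash
+# Stop the Python worker once you no longer need it
+docker compose stop worker-python
+```
+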
## Step 4: Install the CLI (Optional but Recommended)
diff --git a/docs/docusaurus.config.ts b/docs/docusaurus.config.ts
index 3630bd1..1068406 100644
--- a/docs/docusaurus.config.ts
+++ b/docs/docusaurus.config.ts
@@ -100,7 +100,7 @@ const config: Config = {
label: "AI",
},
{
- href: "https://github.com/FuzzingLabs/fuzzforge_alpha",
+ href: "https://github.com/FuzzingLabs/fuzzforge_ai",
label: "GitHub",
position: "right",
},
@@ -160,7 +160,7 @@ const config: Config = {
},
{
label: "GitHub",
- href: "https://github.com/FuzzingLabs/fuzzforge_alpha",
+ href: "https://github.com/FuzzingLabs/fuzzforge_ai",
},
],
},
diff --git a/docs/index.md b/docs/index.md
index dc0a13e..7d2cd85 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -89,7 +89,7 @@ Technical reference materials and specifications.
Before starting FuzzForge, you **must** create the environment configuration file:
```bash
-cp volumes/env/.env.example volumes/env/.env
+cp volumes/env/.env.template volumes/env/.env
```
Docker Compose will fail without this file. You can leave it with default values if you're only using basic workflows (no AI features).
diff --git a/pyproject.toml b/pyproject.toml
index 2ee4e56..2e54464 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "fuzzforge"
-version = "0.6.0"
+version = "0.7.3"
description = "FuzzForge Platform - Complete fuzzing and security testing platform with AI capabilities"
readme = "README.md"
license = { text = "BSL-1.1" }
diff --git a/sdk/examples/basic_workflow.py b/sdk/examples/basic_workflow.py
index 74b3a49..df55d38 100644
--- a/sdk/examples/basic_workflow.py
+++ b/sdk/examples/basic_workflow.py
@@ -64,7 +64,6 @@ def main():
print("📝 Workflow metadata:")
print(f" Author: {metadata.author}")
print(f" Required modules: {metadata.required_modules}")
- print(f" Supported volume modes: {metadata.supported_volume_modes}")
print()
# Prepare target path (use current directory as example)
@@ -74,7 +73,6 @@ def main():
# Create workflow submission
submission = create_workflow_submission(
target_path=target_path,
- volume_mode="ro",
timeout=300, # 5 minutes
)
@@ -234,7 +232,6 @@ async def async_main():
target_path = Path.cwd().absolute()
submission = create_workflow_submission(
target_path=target_path,
- volume_mode="ro",
timeout=300,
)
diff --git a/sdk/examples/batch_analysis.py b/sdk/examples/batch_analysis.py
index 5ac46bc..77aab78 100644
--- a/sdk/examples/batch_analysis.py
+++ b/sdk/examples/batch_analysis.py
@@ -135,23 +135,18 @@ class BatchAnalyzer:
# Determine appropriate timeout based on workflow type
if "fuzzing" in metadata.tags:
timeout = 1800 # 30 minutes for fuzzing
- volume_mode = "rw"
elif "dynamic" in metadata.tags:
timeout = 900 # 15 minutes for dynamic analysis
- volume_mode = "rw"
else:
timeout = 300 # 5 minutes for static analysis
- volume_mode = "ro"
except Exception:
# Fallback settings
timeout = 600
- volume_mode = "ro"
# Create submission
submission = create_workflow_submission(
target_path=project_path,
- volume_mode=volume_mode,
timeout=timeout
)
diff --git a/sdk/examples/fuzzing_monitor.py b/sdk/examples/fuzzing_monitor.py
index 096574d..07c4e06 100644
--- a/sdk/examples/fuzzing_monitor.py
+++ b/sdk/examples/fuzzing_monitor.py
@@ -193,7 +193,6 @@ async def main():
submission = create_workflow_submission(
target_path=target_path,
- volume_mode="rw", # Fuzzing may need to write files
timeout=3600, # 1 hour timeout
resource_limits=resource_limits,
parameters={
diff --git a/sdk/examples/save_findings_demo.py b/sdk/examples/save_findings_demo.py
index 304b17a..1d9568d 100644
--- a/sdk/examples/save_findings_demo.py
+++ b/sdk/examples/save_findings_demo.py
@@ -33,7 +33,6 @@ def main():
workflow_name = workflows[0].name
submission = create_workflow_submission(
target_path=Path.cwd().absolute(),
- volume_mode="ro",
timeout=300
)
diff --git a/sdk/pyproject.toml b/sdk/pyproject.toml
index 2afc681..694807f 100644
--- a/sdk/pyproject.toml
+++ b/sdk/pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "fuzzforge-sdk"
-version = "0.7.0"
+version = "0.7.3"
description = "Python SDK for FuzzForge security testing workflow orchestration platform"
readme = "README.md"
authors = [
diff --git a/sdk/src/fuzzforge_sdk/__init__.py b/sdk/src/fuzzforge_sdk/__init__.py
index b0da889..d50f599 100644
--- a/sdk/src/fuzzforge_sdk/__init__.py
+++ b/sdk/src/fuzzforge_sdk/__init__.py
@@ -42,7 +42,7 @@ from .testing import (
DEFAULT_TEST_CONFIG,
)
-__version__ = "0.6.0"
+__version__ = "0.7.3"
__all__ = [
"FuzzForgeClient",
"WorkflowSubmission",
diff --git a/sdk/src/fuzzforge_sdk/client.py b/sdk/src/fuzzforge_sdk/client.py
index 1319389..c4f29d3 100644
--- a/sdk/src/fuzzforge_sdk/client.py
+++ b/sdk/src/fuzzforge_sdk/client.py
@@ -440,7 +440,6 @@ class FuzzForgeClient:
workflow_name: str,
target_path: Union[str, Path],
parameters: Optional[Dict[str, Any]] = None,
- volume_mode: str = "ro",
timeout: Optional[int] = None,
progress_callback: Optional[Callable[[int, int], None]] = None
) -> RunSubmissionResponse:
@@ -454,7 +453,6 @@ class FuzzForgeClient:
workflow_name: Name of the workflow to execute
target_path: Local path to file or directory to analyze
parameters: Workflow-specific parameters
- volume_mode: Volume mount mode ("ro" or "rw")
timeout: Timeout in seconds
progress_callback: Optional callback(bytes_uploaded, total_bytes) for progress
diff --git a/sdk/src/fuzzforge_sdk/testing.py b/sdk/src/fuzzforge_sdk/testing.py
index 9f9297b..6d191b8 100644
--- a/sdk/src/fuzzforge_sdk/testing.py
+++ b/sdk/src/fuzzforge_sdk/testing.py
@@ -193,8 +193,6 @@ class WorkflowTester:
# Create workflow submission
submission = create_workflow_submission(
- target_path=str(test_path),
- volume_mode="ro",
**workflow_params
)
diff --git a/src/fuzzforge/__init__.py b/src/fuzzforge/__init__.py
index 3846ea1..87a087f 100644
--- a/src/fuzzforge/__init__.py
+++ b/src/fuzzforge/__init__.py
@@ -1,3 +1,3 @@
"""FuzzForge Platform - Complete security testing platform with AI capabilities."""
-__version__ = "0.6.0"
\ No newline at end of file
+__version__ = "0.7.3"
\ No newline at end of file
diff --git a/test_projects/android_test/BeetleBug.apk b/test_projects/android_test/BeetleBug.apk
new file mode 100644
index 0000000000000000000000000000000000000000..05f8f51d9750b208aa8c43db1ec442ca7fd3e391
GIT binary patch
literal 9429128
zcmeFZRajhIvo4AT3+_%JSkM5$8fYLPK=9zforK2SEx0DQTM`o73GNcCkpK;iyE~1;
zo_zoM_F4PxTDTyfWt|1(nFwEp$omSo&I)#C<%wpnd+LmT
z%g+ydugFg>b%pb8k}&NAPNIm<)45#~^42ICSyw9^8?*~Q!lkZbG(^6ae3^N-6GQxe
zwo;lP)4T*|-9E75|39sCaQ{!^xKtGIf*ABGHLt7J8LkV2m}e~
zam7PI{#X2u9dod=LW)pTmc_-U!UnpGD=+uvJrWWMa3XRR<`ZB~c{R_YG9f#^mz72;
z8==_-zPz!}k+)P*LV5vgV@n-=ipyT-1NJ
zk+U98l*XW*L_(58l7Azm;eouDg^{VRIn%qi&ulT_-O2Ke5c63Gov559+DFFq0(mv?
z8^UfCH9``pj?i&wLIRQ`mX^;%UJO{wEZ;5`ZY&2c#N022vK!K}vgWh6_)jy97UsMU
z#4qNY@7*mAst-R}Vly(J{Lk`#MFO}C@sj~H?_Qv!y))*~Xl~KI+J-G|_0+;$P)z4s
zLI?8piD9@hd;6r$goqRm$NfmjEVSpaoi_=Dtg(_@AkSBKXgAKy9wqY!!omw6wa`1O
z77x(_zkql9yPA7!du<#4$E|+T%G{P|5A9Yy;P4!{;dd#MlXoC&Vefd#=AS-~&j@tY
zO0vP)i+);F5iD<)la>d;>xP;Nr*Ye-OnR+q~%b+uBOg
z^WgUr`*yI58@gNWIPx%bOLBakw@do?n12>IKIDh>jJ1*^gGd4ox^x(qWs;Gad3pYZ2k!nk6+dYx?|QVT@3Yi=`;*5m;j{?u$~iNtIA
zw%z8%Eoz>xAwQDf@7_jJ5Ph!{Znd$E$$5yQ+|odI4Sknz|1+A|djR>Al-^
z3bT1PdPHBA?^lF={;}30CBYet<1?jTa?5$-JOm}Kzr8&O`!G2LA*+{YfpwI
zTTq~zT|Md4DNCz>8fO?Eqk#K-47q)RXxrwp2sf=FOmOxljLQ*i3$Xc8!B2yg;KeBC
zX&z_h6`G%&?aea=!M;wQPO5A1Zsv|dMLzH;1_QLk}dH5oC`!O+5vk~aq4egNC
z`ld`rH`JZ+8Kr0C-2I(M_@)a5dV>Glezi|w`&U|@g&d~SV_s}OcO(BMvOb=G9vmOT
z`&We_L^*p|i$T6db1&Kb@o#spXmK#wh7n52i7hR(bISh0h&9T9Lbh16~zL^20AWc4(qo@UM!|%DENsd45IH-%f8KhkP
z^Z})uLpA)UZ~S{){W^~4Vk2MYaor2p%H*G{0_!O9PR0oXavN_qT1xS#Rln=QsV@g(
zs*Srlau@tGDr+;x&U_WIVpQJ3w|t0Q&FTBv^Err6W)6PwuGhomp&na#h4JdeA0F8f
zUu>eytfInHtQ>?V>&cLAnMZV4w`bd4eb_rEudQ3hyi;}Q&25XPKo6U>8TWQ>=0)vua0|I27HI-D%=L9k@G;k^5rVijg~tERnJef6?=-C$6xHNY2Z3a@t*72&I+)9f3@
zYW5iz_XSB^jF15Zw`${9+WNw<@yjvfaAEFy$AjvIaK;hn?w}YUHc8{4v0r0Hp1|hk
z5iq6bVZOcB_SV%}HFj9ea)5t1QzuqB0e)L_`D;THseV2+*rmm#0_}wuU*bV1#kS}Y
zh4>#pDAPL6(mb;QsDIx$Hj(y%BRgI1=xI(Td}BTMUZ-B5_KV$m?nRjK8~cgeM|v1^
zJaC@Wcde(y_&(5_W~^E`2_Wvg#g?*8-McI{PN~*d+x=d*hS@W=G})_zPcOVZU#dv(
z`1Re5Qhtfxbyu<5C4#7gjK{pzmqL3ZiI$J4N^uEhyT5;U`#n0A=C9&FymkZPfGamFaoT
zJlD(dBUNzfAyz|GtV@gtjUVN7&!xWk0l!~cY{uB`_IRb8ve@T@8)!6L^1q#Sa#R>2
z;5h4*0@H>V==|+zAGoYTF7nxWa4Sy6;C31xXMNZ#d41D#zQz>)E5>~{8C)fMl&|L*b~`Qlp>P1FB4In
z0~elyT`rgw?pt823Vrvxu(rw#A0bHGubE9ttUXV_Id?&0^C9%^d^HBazd7L9dCpvf
z>k4rE;M?%}L;oLB%fh=99Gn+;Zk>Mgu=j@)?g?qyY`dZb%TS^RhLyut;@Vu43pn#7
z0wfm&D?O)3cL)fU7fw}l;>lcv*jL-cv>B>C)g
zN-pI{SPzK|`(UMr3s0Leq&J61)JR)EE9+~Tw!wvMabH(6`zKI4ERhD!Bu}QJ*WuGS
z%jHAiWe%Of6YM&;2?pr@X+pl`R%xSe&^!-zeOk3{Y%dWeV?m_OEflYeri8j5%&7;{UG
z$g`@h$g%#-ns~JP%5f3i;jM`I-Fd76=GX^csWbSgN^-RfeUb3ITKUfqhsLcS{kga-
zCxjBBK>OV=eD!`aSsG%b!yX;j#?clgQYkCdC&0{0-kA@jRY3@*K?QEp=Hfqho(PTM
zFaP#ezM+(Ef7k`RE-LvXNF2x`eRU8jjE)r|Eu0vNcO)yJC5jL~d9AZ`wfWwn4UFv-
zs|}m3^e2OaxJNg!c4a@ME#|mJQ?B?`Wut_$A*<%0s+5_bZG755%%q4<>y38YC*5Z%
zlhQ2;8|u3bJt>q6XGZ82-hCQ>Z?4+yq-PYaS!lkYxp)Vk_4JFop
zKGV9N16|?o9R`S_f~b=*ZYYQFxtL*JuhMchwUKV_q>p>wOt0pH{m
zOS|0TLvVAyjm^&k1BZ(YI=&^+DKau`VH8SIXpCr8Oy0AEfjtN5L8eHJ^HGGpq8AaVdaTEP9L|_Gq2CsC*W}P-!~l*+z+X3U_wIda);DV
z)PtK_6olEUV!!CsR&Dl7S-w?Z8kfK6AoB`Yr7yi`zmzSezr@%Rx<#EOf5hG91WWiY
z{QqW#`1_d72b+E>aF1ycTtHmuP)^dAO&DH&XniUX;EM*QTos$h!BDVVy1R8!4d}&g
z>y!e|E4?JDoEhi2aO-9lZN=hDn#Xt3M=W2CFn!~XRrzw*nhO#IWqKfV&D5FK9=
zKBCX%cWPVtZGCdi&TLA;RNT($tzko;w4AH*XKfBySK@y72o#7(^r_+lu}(?*AtN|t
zRIMw6#Rl~+&$a(j4YPz=v6*YiX((h+*RVjk6!YZMW~
zEafuRYV)K2P)8VbLj7cSMUk5Zz3#-EHp}W#%A}x$bo}yJJ0d@
zQbNZ6GQ-kugkoH^&6K3<8Fs=rlVpDlcHJ
zD$Lw
z99EDIV8H*1K9PW3+s^;(EZfWgG(Bp6$rsuGo)Un$h8_5yP_>uvlK<0OU-3DOy;0Nu
z6*A#}Zt_18_vfC2^HF`{fM=km>S{Jf5k)O*IzF{PFLdEg5+
znv-h(yh&K++pZI1Oi?2HT1=yO?)MyuJk~Eqt#S{bNNr5FWo*Fp0;tI6T_2zP9N^o>qzR_V
z%5tlHAM%Ew0qv4@*dD>Z6cU{TJrwwWq%B;9oXyqF|pygMglD^jW0pc&@ekGIeFIT{)8M}Wf2wJeu&-1PXvh#
zuVajV4xk))lwXkXWXWfrrY0w24QpEvvm69ACuJfpW-S?!?>jmVQQC;JjaS`67AesI
zFY}x@-dKT8)gr|WP1qNi3sYBDw|67=4a$V5^LBAj(NG#FxqoWCs}zTu-j13oHO?sR
z$J*qmH;3;v!ID+~gJQm=+G5JtuV0@SQA$-LgG$eH0^h}3qFoXdAbqH}%ksV6E)=wx
z;)K`P>Mcc3yfpocOD{1O85I@$VVborQx)FqSv{QklBqxL29i!I=1~zt^=d%WZ4Z{)
ztdl%bZ3VZ=@wF`0E*BKE#ibHp>o3-;w@Xe)C}{Qkh)C0L1+3xw9Keq4fL~AM;u7ZS
zn5mCt*tdSf2j4m{%=)OkMXtL!Xo9C%Wq-WcuQyuC`dR!j$ulV>1-|wtVZXAEuKMb9
z6GM7bGsW|KcN(fJey%|c_VZzAzV&W!u^YT)yQcG4LcTfle`r!L+=tiKCn$^lQ5IfC
z<6vj6jKm%e-hqAOc;uT)l^FO=N^5cQZ=xirnMbMDMd``T2J?h=e-^}6b#`{Lb*v63
ze>vSMh?@<@m!R;NoOASVa@+rK1MDeL$&r|9^Y?$yFkkeXGW1wRZ#o?Tp{nSteXFl_
zTzz}96pW8jPSN7t%&nC>y|)?%u3Nl2aIZ7&2-1-+DJkJRYAn^M?0g$XXN!)G{&JV%
z)RcBe80EHPA6D6A=zFfYx>>;j=}SJXnKFbtFg#q#uKojdcd=*>uRo>LanIF`VLS#uT*a+#
zZXSM(Urz_Qm)&iv_*ZEbYwbHp_$+0)FJSFRB5S80JvA!${U48VN~D@#eR`sKKq(#2
z8G6^jmVP_R4xVl`ob7Tu)G@X%fm!o#>Dqu^6k}jPW^35xz%E;(Z1!a6WShUew!f#K
zl&1`qGk^z`C2Cq?sFJOWIN9o1S?ua!^ge9{nq32}TlBg(pq=BrMWdrD53M##~ZW#T}0Hp(&|E67N
z>OR`3J$zbTwfNv(d1pO4%z9{RTekPj@@oZB&GiA)7y6nWb`GrD$upC6oI5Mw4b&^H6!6
z2Br8ROkP6HMlX;OJzcQiP-xNt|BY<6eg3s)R2+#I&iKaGyJE{@VL?faA9
zON$TMaal9hO&%7FgH53=D1{08Y
zoPK{=US4fc0S01?5HJvQ3hRj-Xl&82y)B$19JfQ0Fa&2MO|!t2#BdJ+-gQ0u+F5(1
zaJQ&MR}o5H!2{b8&oe&FkiM7`hn5K!pkivb1vr908v%?UA4s4$=%b!wwo)n%Id`;I
z!}#k?vjs8TWbN5WY-QL)32Ni|ehH=_!qXd3mA{Ip7`|xJZh8OQDU;tyS(QF=>>9=B
zAoDys;b#>Mwi4^n07p5RKT@XmK?Ab7MA&zQlv!H$Il^eM4O+JG^4}|wlANdf{a}iN
z#@#YW0ihN&G7r!4wtKHfyTX{yOL_GIALwH!u_}FCDE|a0BS8XE%X>eH@ydQ|Cx(im
zEcy->3`e>P)SkpctUUKFaveo@y>~V#sZ(S84o3r{mV6UrcS^)-PWAWGp_D%1*}oxs
z3$$Ft6u$GHnteJP_pHtj(ZqFjznFkQn+=ZBQ?=iDGew+#bbl}ipmNOIYr9;DlB1WE
zQ1CLl3Jb)cIp0NrIuDQ=CX}>;>4dprf&hMd7qHMacUHMX-;j}8_2+%8%S7qRX3X^l
zIB>b*1C63*ZHdJ`sJFD5D#9C_H`z?ys9+UN9(C>YSk!&8MQ^s6OpjD9*W4|JERWAh
zKok#!;$Dt89aD%Kb0WP~=A<
zFYb+aik6$fZPMGEg{W?thO6)#M1RS(v;Y#m
zJjl^%^(ol**g_p_bZK69`%%v`IKVPeUCJ}*W#b{fn;_@RHc|CJWwZy8D46c!L4|S%
zWVW{lq6S+nrKetTviHN#20EDM=tZ2&*G}I$j$oznv|g?(=^j`qyQ&OTO=$1={6?Qib*_Y{fsT1{ZTI;)abZf=7+^vNWk7
zy`dihdjv9%1oxE83`rmubRM}az>q7F)fg&!I0Y+ZRdylf#1}1jxa-}2bSP|~E9VdL
zsfsm7E_YAjAm-hT^lGfXfFK~2k-C92opoak`{LP(`8pLtYfXz<;`NjCmWu_@(Mp9_
z#FNO^k#9qO5a(LwUj;%F42KHhBt(VDC7{CMMY1%GpZ#?UF3ghh#?rQdv}zy=FsW^b
zBn&uFCebcUSKlEQX1RSBWuJ>+X|*#ls-&yqV2`W<9F0Wkco(@0cez=tNUty}RpEof
zaseegh72!r4Z&e_^^VtRbRPcQB(s>@;V(Oe@b3!eZLfk5RB<1gk5t)o&ojBT^m#6I
zG`@P5Nc5g(sn~r?%m6f6fzUd$t@-=A!kchOY{J}&;A~t$d9~&q9C5F+3c%Oibck71
zrsK0$h2m0MKHRiDPBw-%?_j&?xl{&x&Fw`B?~TAScn}soDS}3r1gP$RyAf
zbGY~}w~CF`v|pED;4&uTHs{!f9!z42^~d94v|)&`9h#T?W1z&UY_^UY`6=!r(hPrX
z{B)!R8z&T8l>_7PzZ%S4)HMhS<>7mhog-v#Xq@>W*f(zL)$T%zH$!8lF*Z@(RovA}
zu1(9;{CyJpvm!N$_vQ_!j#c5ArXQ%Z#h1?T=ibXz9a0X*uqf3)zC_n1IHGL@u1hA<
z-)+f$@yiHk7eo{o*H%kd(NJpYprwZeTvxop*IrNWK+A;X@#gpz};T>9Sh+{%z7?%p`efN1-_5}p~4SlyB?1pDg?v`oxZEdal!1+=23-JWS)B(ng=
z#=BkH5?Lzj=0!7jO{95AFXN|h{s0z`#c?6Mcx9!kLE?T0e8RH
zvY|b33OW~uD|ToQ5L%m{%%s{mOpIXP%=@RnBKIx0eB9@i(fHy3WVs>kJoy_7O_Y2M
z#{Ak_R~u7E(Rk`}rpqtPZ^Fn=MRic~5KD9G1Q2#Za#AC-_W{5q*kaE~D{TpX=BloW
zSrS2PcRPLx{9$T`3Ko`jbPuHd60vO6pH^KAOtp4VMN1EP92Oicix-dW{q-()pi0!O
zvSj&2M+>8@pGC!~NJ2gd)PmXHm23!AuO)m`>|Wwrt5QAtL5Xc@%l&tw^0rx;x0{63
zwH!NVHLvb18{Y$Q$uqPy6B|sf@BbvlU04D(pnLPOsjk~5$thM-wX7@N-<)GKg$U&+
zQBEqp4;8$Z0z!~Cx#*Z{+e+G{3NBv?f`bL0B$P^Dr@z4-~TMRlaI8-|J>~V&oT>~yDW75M>9TzNh)h4
zQ{Jb+`8qvHRC|Iia;aro4fTh85}~heKvwKnM!cT
zYuymrj$_;PB!Z8;;WdqK$a@SW__+H2)S2q2*~PjJ4c6r`_XrlLdxO5o%EFrzKpdSS
zKZZlc9-izM{zIp+O92*#nJxjF9p#rwKxx{{Cqj)fq1C*f~|7(+NJ8Xms?e~jHgzR
z+0&%oE%U
z)sZ-p5;zamC(VG#OR!69q_TanI`;?f8fRf;`^ynF6|JZXCS4(|@1FG7rCeNn%yPlKwU3biM#kOi$
zdg{fN?xL!{eO%ayjE|`H9UvXeGAik*;>D_V>z!GF|Z@kY$2ArL;QX#=SNtdT#iM
zJLl7{{i_=ZKgvBTGPPiC=l
zAA{aX(-XbU|3adUM@;+7Z;UOUit(FICPZ{y5*a*{0|rw(+iA9eY^*JX*!!$fcOan|
z&0S4T)*fCDRjSO@P8(7S%-EjdS=qf|w_xeM*VeQ@)!P4J?Nc$%
zuUH^e@jMN%Uta@XfC%N>Q6K>&0R|VxK0{`8d_NDWlW6z<BiBC4qexABw9zskHrFW#hGYHL;f*J#UgU3UTU%pO8KtI=QUk-SW=Nj
z{fYI5L8DI^*N4WxPNP0)L+u-?BZV93+^g$KmFHKRIT*wF$K%XSvB8w@-o4xFrM9oZ
zp!eb3+TAd^m>R%!o2BN5Twil_%znEU*#~+GJdHr^pxnkeQPne2a+e4e0EfEEY-y(2
z8(^OXjRhU({IG2>_I^pG^)Ox`mC94fOCYz(tJLT4l-ofm>W``Pa?Qj24ICMRh9LUj
zTi>wIgG6RmOo0k9rO1wv_Ot{#!>;pV$l*JiT2H^wW!_HSau+h(*WD!I0($Y!6ySd(
z-1s{+r#?yub8T4>?4u@X=q!F&Q*ZBZ
zs%^$pISxz3`T**leKH^hjgh=wOZkzsIuq1${^YyQDTgsa3l%^l%MgCrb}-+BaR>X#
zPUYz@SdK?8A;#8q(Rv~7y`H4ac;SQZJZx|>!YL4kS8u1IXy`l3XkEhuaB1d`H1(6B
zz42J2R|BVdBN86C=AEai*ak1Cc(Gp^Sg)Z))fWg7@Fs~X*zSH@tSPxi-B<9%K-2st
zG;sd#&Mk{Q9iWJ<`)HgMpW7_#_K+JzcQd;cht_
z|0%NkTVlsmePd?-2Di@C!PE;Nz5y0g5cyREOUBO=1qBpPaZwd-4C=kb)F4GHr?p0P
zm=#YwKdW**y-(F&GQVw_;yJTui+xTUu}^13uY|}qpf{szQ^`C=tx?9S>G{LVR9AT$
zE#Bul7C9SXHZgAUr`X7qf5M6mX+}U_2o@}hd&0e+If@Vj!_!
zoP8-4u=8(~QmOxZLk69#IDX_3>@akju~5^NXfiLoGrKR!1yuj@qdR6CeFG?v657wf
ziEK&sPnS$*`nwWlt5fa4RV_5T+CMyNb=#Qr++c@4cp{3}L5Ljx`zx&BC^A=*uZ&k$
zR~OvG?nX=K%@0mb^tDc&mVV;SXua)
zPI2Ph;0L~VWZCk2fZ455&JvZe6StahYr3lLc^mpqt$I8s%C(FUPO6lYd@jRg7~o$mAA!~rtfv&b?OupI%IM~A7;y@!J;a|g
z>$;47L{6*o_GEa+`xI8v02C1BNn}cnStpka
zBRcC%wfNOqf}mM%^pe2L9jUr+^ug52*M+(W_K!wDC`t(woPU*4^ZX_rxUi2sxE!l&
z_DkyyvwGP;54rOA`s7>o2o)f2DXMnpDY0A;wVPZSjt-5Gtc`KF0e
z!Q+`{a+tbng+CpJ28|ocWUAhNQBosxkcwl(371X`HX?BYX}Z7OWs+aesqE0Qn>P`!
zL6QC(;RZNnDiz5)Q);NPwk1Jem2=-=j;$`#c=_}|UxD8MRa?NrZTS=E{Crz0w4sM$
z%QyeRdaeIU$5#LKj`Fjfjh?oLyTgtFBzpf_N8c2*&26mJPfq||zZAeK+&>CqJSU1@
zS>fpwaS?nu&~a$LiL*$JW|xB<#tzE&oX&ZspyRX{`CY9C@~*WIrA60QRE$botr3IP
z6V*|^4BVbP^Luc;!~}hm6q*B##$-!H_$(jRK{}c!dw21{
zm+ur!oM2HC0MjXFB>Up7z&JwO1^oKa1$Hy%N!?6++sO=rlc-Lh;Cte{~s$yys^8mQxPbIG>W68+y15hr
z7QZAQyf%X2y)L$k=Nm7B_l^v+X4e=OX*etJi02H&Jq>x?XFKLWv-Z8XHBz*ZT!iJu
zWL|oX-szSC@WbY5G}ZaVP&DirvcGenzQcgZ*pyEn#%GlVb}
ze!A{X$-Dw{>sEEQo@O1Qp`s{@9d{d?-#spGb?Q>z(4@OLTI5)x&@%$q~Niux}v8mF{rXlXZ*TeZ4OC
z4-clwo7!zwWOVu&)gNt+o-XRvjG}P2geqEac(hSS-V1j%_gH_pgWZj3e>%%=lsTUH3%zD*~GflX;a;)|c-q@jrLc
zVwhktdwA~P9_?|?f3Q^NmHvyatle>M5fT1smC5rL!EKpK@%sqn0hWUsw
z$vX4~b2l`(OK+|7Xt^s`{^Kq&1QN8i1*Lzu?5q1Y!Q$@88?clg#a#uUcrk)cLwy&q
zjw0-lKZEf^`?LM7KYH&~_A#PTZ;WKw2rc?l&-*{zb;u1}Uw^H9O5^J_$W;&9o2h29
z$lZ(zW>E%aZm!repQ`Hnct=6J2ziGBD8GN`~e3$_Ud^!(|)uh1TG8!Mrz`mYs
z%7Ko=-Kf~z^rb3Yf`tzLsKkXm!|=NA9<3Kr9?nRpRT=;=>X$@mawGa9iCI*I+5N12
z+H8$}QB)1q`baAyh3N{ERdEJjDxpa~)2)^z2(m6_cdKG>weYb*n8Y%a2h{
z%@>+woB1lDIXMQ0e947V1A=0vS&6ok&bl-_X8Hxcv|II931UM>@S!GuI3!kIfd+Uz
z`F-sky=dB7Akdk}{=JhSh~~648ML{+;%+IEKB~Wt*wKLC$I)AQC+0qU2r>=Ku7yt9
zyXvC(_6sjDWNSx6Y|rGgMsg5bX5Fy+!7|^eX&KQq$6h&<2`SGncV3_Gq7v#Xo-0BH
zAX*01Kiu{w?(c3e3dGslZ>t(^@3_jXbagEeGR}cY1truswUyDwD5nhk;Q44=ctLaW
zad|m*_DomU)9No-Q3YSC(#{ygcA53+&7>DD*TWlC2?0f`VaT=j{UG*MlW_u5<3cxB
z$NM5Gw>;hG;mgs8M%xiQZ)OUtej(v2NRzsxl01lv23f)J?&)l(NB|ekIH=nmiEV;G
zmrKE=ufy=pv6)loLkL+r*R2)&%{Lx{RRP!QzPVR+gU)+Vy*1oNFhlQsQcTH>EV5;!T|2!ax_8}Q5Ln_4{`1hzvAIWu%bM@E=92O3t5kB|TCz&Oxv=0YcC-8#>B-oO{QcvxDt
zn-VPH&`sZlQhO;?S
zA-V~fAd1sl8)M|jW!2@YGd$l5W8UcjoFD@ez%E&In`tpmgdiD>+VE~IM_!nn>vGh2
z<58Igznf*AdrGw&Di)pHnRnM^s)Y6fqpqN*)!A1q6LNw%9RqZI)r4A?W#s
zrPzh|eO(`C-cU-AwX)D`zTyOl+j6URGmk$XINMX`+cjAF9fu1N$;kkHh1H=b&7gwF
z;4K(<^F0#?lqV_rhy{pVi*Ml@JyYSNcWCdkfw*`8EUn3)j#+^f%|D1SoQpI4Rx?A%
z8b1#TaCWrIYSKm}KdaAWe=^r4tAF?#y0WtJxf>5%4HhMHNEV`?<1@KI=N(5^A`?X>
z69M+R@`=;j=C{{
zA1+}tmdVgqmG6&7v@O&t2K(HP<>p&X#M=G+5*sIF%lmC#Dq>RGC}szz&r)~9COLe`
zlb8QeHE-@S@`x{d)U0s%7j3Q!k#V1xhmAz=XV(R
zuMp~IU8kuhs~GgdA-US8KcO-=jPZ78`TltmrEW!Wk7=T7^QnS|3C{$}S#cu`mEN7A
zZo9*OFoRHG-!VH*8oSpww3j(bCC{KRCCvkgbQ
zSKNGu-gUxZRN{~b0oke?N^!_)Uz~dn=84zNufm{aVp3ZY2xk*GAaEkM!Y=|-kdFxj
zc6pi}cE^OfSXgNLG@1H_H5&l3r(K_QIrf_Ct+XzZPF1WQVYDqUH@*)SXvmCzkrYgq
z&M|84y;rZ+_)@%j^p$a_T5#@(D<}4KNyDnrj9c%xY)ByMPf_x+((UpOs@e-A>NGKb
z&}R&CE0(1X&K!DdK3J$Tp%YEaJ440Qqzk<-4iM+%{^Hg1bDVAXL;ZFP3E_iyrJpvVA+tX?Mz_|G}Czxx67&>
z|3NCJj-0GKOsL9Hsoc--v3T6t8&HSY1%9=NHw-|%M_`9cXYr+>@=8;hQn_#Br%gJu
zwhD9{vTjmgPPU24=*+Vz``FvF36jHv+aps?<2!zE1fr0kh6AMSj96o{=oy3-R>kJik0(%
zRnHB6|NGmYF1}HxoI+`BHNRyY%60I;#d%#yK_zE7K&XXHAh!XP!`9H2Aqo7;*sh5k
zyF1a-I@e~S(NSn76LiWlT+u*X-|>3VWwLi#3?06NS8GV^RRR;B04QVhUlFQoKz)m&
z+u9e|w^=_@Vr&9OGK6>t-kk+FIv|0Ls^j1|Jk7>!*OTQzH<#p2gy+TOt)h7>au1@|
zFT5_w8aC1m209y$458o$9|!y7Kns&JwiTV;M>^-|>9wa*&y%TP`=WE`&5e1_%_W@_
zA2Zxs?iY=R*Jvp#7;j-
z*uu`UtB)DvNI>qFdo&A_L5}KgMG=5w;Dy=|w5K!8oJgB5QHuLuyI*}dz9(C1A~5V
zd~*;mhU^eL;U%Od0Kh|l*mQZqwO2%L2`I;?g4?%=6Zl48eiP*{7U>XUV@TUf(l-2P
z(DYV_BP8q3OC%(Yl*j)C(ASiW^Shco#Z|tBbC3Mu{^k&4;dP3TE?x%%a+NJ4*J^*_
zTyJkr$ZCmFHmWvZOk)K4W>Y$%w)bSxl`;MWFy-H`%m2!xR0+s0l~v20_gx|YHTb8n
z(dDd$_>Jfp!DNet@>KNPN)EQvw#{Tlzgo|~+*(?qs~vpd*p4Kt7|PlkSphFiRJNr}~q<-VJpZncaPJCPeb>%TXTP+VT#B(%@*thJfuwl4C$Tn^h$
zR9CKiFA&m;4H==~GY>a#>>)+ex7`i*MpGtK=$QG@)Eu1*@?fBn>))?>C+oy@`Ure!
zWWt4?==wEVCS>7ADUWSnAat8Baze5%75l3<{^Mr@XTjems{M+RFu#!^yQvpUZ%8R_
zI}WckxYbV=#_*h}8|B0{%pfWpQ_G{Wpw#)qFJ>-qR#wnl^f#YgKZShoxjUZ@W`DVr
z_fA7mt)^1`>q6;&mC=%&X&3axc7}T%i?uiaU=jD)XL9|IsX7w|ro*76uG_u)aP{#>
z0|vBa=qCO(-pi5QN7^M6FKwU;?o>!%yIpd1`yK)X`yj>YCKIzH2}4!a&Tb98Klm
zz?ja_B&n(?SW%jeC!!2JKHiDx;5Vr74D$yyQTR^T;a-%{eTv&UzQmn+_+isls6pujwd@XYU!OS$Bjh^)+9i90k0z#g%8M
zT2n_SGZ4kI5Ht
z?12wZ{TnRrIMXbzLem9gFZTgvwGPN$qHhZK=T=nkTKYBPpHfTED2V0#NpM7nm>K(J
z77TO!BBg<}`;#P^cN6Aw$1hWy84Y6JX5p%9dOSt2r~l^hxxvASss-}3c-g*%D4tU^
zHnuVh#E(F;Q(cR8B39Bhr_LwCVM(=Vlg`Hn|--FHQ
zQ?#T;9q~$jM(sZ2`=oR{U8z;JXYW5{s-9yxRg6w1nY7qUtjTk?RznHWoQAoqu_!yU
zHs!Abt;XRO2Tg}h3`GQO`5b0`0a4W}r`UuAK4oOm;!Kjzp@g@`8!3g{V-)4lVtr0!
zhApDTYbtkAq27TOq@O>%M%m=PCqaFB`LyA@OMBF6A9{n2CzeNwJ2VZIy?GoaSxH~u
z7n}es!bm`{gwy2i*g7Aqyt(IdfPwf3avvw}y_%)sI~C8+KRtEYEAB5xlI{-izdy|h
z#E3Bb+4}^WF_P_r?@BmhILiJA=zLc2wM0{zt?UiQ*j*gd3?%2NLR;?yjCb65#?bVd~ahk}{BM+b0Tt
zh?;U6Ed2`n&?NvVLu{sSp9kY<5Kx06xvD>o7_MuHrz(Vr=dPb%my`39b3PezBQE
z`QvUW%rnJT?zxyNFK{02FWL(1dd9vzR*0)U!RCuyIZMeiH7%1Jxc{g`n4%^WL;MVf
zTSi9k-bsTAL<(pBZV|k}2lkC{j1>nT&D^gCbWGt6myi;){u7SM0_uI5-S8>X
z5{+%v%gkky$r{DN$_0$(OGkAXw7G;DOLo_hea9J>_3Kk7fRo6hHy~Kg-W+O&;n6;p
zgC8I|7(%=cTc&}cZrZ0}#-~gdp3$fj`djl&?hdxThGHqL?0QumTX|v85Pb&zs@+gz
z8uOXlfV0frE_|OKK6NcoOS(43ZhjJvzFE!|CB}YHZr`i;j4_@qa6!&Pq9mStw5vmR
z9lpLHV2#F7ZadO`;qj7;0{qQNRxhFhc>v(73afdj=xba)~J4VJt1^$poarp8|u+ffPOV#6>B$nSAgMmL2=t
z)}5Is!joFST2-UbJZTUbHsRrL#<3X(l>a-gk)#ma{P~I8p&H!Zf#&|3Wb5dwX-3#)
zy{UIvmUu*=Jt2?z0
z+1#yQFOSPBEDi$d0R8>*jNqIDeV0PyUW}P6y0cAf&ky!i
zBN2wY%xtPRPA0gdHYKd6<^JxIPe2EgP>7kXEBGbdrf3c^iy~$^qeyD52x
z{OSm~s_>G9F
z2&3i)F%I=6%s@R9#=;_6Kv`0fnnr3)<
z4)sw!?}r5_s8$4mQZ67VOY|Gn}`86v&6z*RX@SSN%{?!
zkiAE_Dj3hN*M9JqsXsg&m7L0STMsk%Z9*5yJssQ2zhc*o-U&fO%)_&?fy<3b8&b0v
zHXH{yM3iB#=suE)W|$~HP7!46_eW3Xdi^MJGqU%7ok8d%+;C*anZV}f4=2g7PP4V{
zB-_D&7(!_aDy_jeQzc)mDLtB#*(@hHE9R@dMj}fXh#FmCPq+i1i$TtG4Wdf}Mbu$5
zUB`Tt?7>V04=aApwpciEgM%nP)wlG;OR`d?gF_gwNCj+s4MZw_j-PQp9Ox%Rz(Xg2
zXP5D1TAa#!zAP*<88ebUa+nm(ci9SUw?I(1LM%Iv0Cop4AuRCZap4Rw)*+&a-k#Au
zyA^fXfAgAkz!%ZJkGZ-4aL-adAnN!byU(saj`o$xFiQc1-x;bBQQ0=olQ)jFJ^v~m
zE-Y>-KLbxd$}1hRCbPqW4F;VWlJdaEGyBq2a=FolsNw0BqO+Q2br|D&J!iTAuw_yw
zhI21Vt1tlXV^S%YD6*pCSz6~Iw68cytjX!d1&b)J{TeqldzC^UK)!qd3g!A*#%Zyc
zIa@A6T`Q=A8h3{CwdO8;WUd;9hLya5zKm;6-y`|0WyzTZz@nOjB9~nbDh($Be|-F#
zO#?tuf%i_Yfzh!vV4zdu%g+fwoU~A7FjYn_XO+t?Ct@G1oszDVxpZ6Is;n34S>6Z{
zJLhB|fvMglGSn~DXUMtm9qDU-{fdVcJWH#b2HToptl#Wp{L%e8ztZSNlMJ^9@;$ORK&sK3d{lIKt95yFxVEuYMoDYZllUO
zKr8_u+De6S14LU!7$Pys7*@MFVA;}5YRy+)Y(ieG0Op2|sB%4+W?7S*jfjXiO4PPN
zPNhZ$A|J~&<+?2hlfeLkii;}aa$w1MM)x)H3vU^tn9AAz$~yo+bWABvx;W&*(^c{F
z(g4&_?dTlA(hAL~K*>TKn^yyUjpqI7p0+b-7Kf97&_hj4Lq}vJL3FZQ!{Z0J^C*ZW
zM>2#Kj13QWMF7$d0=nL+0kqym0sb1tml~5{iz5OOl)S@5ZNZJgo2(Dmf
znDYx+useG=ayK1^-uJ9Y(TOzn^GBo~Hr-m!hn_z=#WR8OtfeyZgJ%s7#*
z{NZ(W@%yu5B9Ar%`92x_TrlVRgo_V|X#CQh66Y?J?K4i0(PN%VHU7u+g@c{cY}P
z4ECZz%=
)0xD
z7UwhZ%DX?E#TOZrl$N3qMBTu@^2Gwg;KDXLIq@ugD7d$$0JY#yx?0NEq`m)eCTEiK
z{sKk-)n5<`Rj}+^!1@K@EPQpr^uB;qQpZridF0MjBH<1-ivqo)E|A3&d6Cs53f=2t
z9)?nH_gdX&Op?K&>J|upfA_YT8K7|+(}J_`L~c++cG3qX(7*IhCn_iFtqh&@UUv?;
zA(2miaIkd(MQz`J-|L>jYkRw&3c`a=48Q3O#b>tzSgx@#fHo5{%z8LZ=+uSrzx_7^
zFraz;UbV(1o7cIN4sNW&Qq&%VFM8Stxsos)Ni(OBij*;%9)SWF8uKTop$PZj9al$VG+VyYS2NLN2sAi|ZvIrkNB!>?5P(6R&MNl|7X^Sz
zC&DFwEh=*FF}z&MQbZOsslc=J9bgAE)!9PXL?wPCzvceqQAm-?Fq6HFESP39u)jzpq70S
zC9j{(tMH5fTFLOIzgoNZrD5G?c^EPNAcDZ0{e=ae8t!
zn8n33Fz;l5#|WMxHJlm7Isqz3!IOE-X>#|AK(;L*xX~+}Wg&bD<%`7Grt!BpolA9r
z9CcHHBGW#Dcg1+`JhY!=igbWm{6r1J%HGBdNCA&{DpX5_YF6J&Czc@M&2iuz=a)z{
zZISom<*8yShfTj4sbjW*d3fRxy2Rtvq+mK#G++Jz#Y<1v@ilPy#KrxBMVgVzhCsdT@4rlg0cM@v~?PsyBd
zjtlHT$|yg8Qxs^hU|pnXk62TER5$g9t+sUOrndM-Kfs`Sjkq(B%Z@iyVsNv+h&I2i
zUIt3xPE%D6*H@c4ci=4y(W|fdNM2h!Hw3r)PP=^!ZG9+wVQRV5Dr7G1?zC|{K3H^g
zA<9~<&I&qJ!nJ~r5|d<{p_UfrF-yG2ddVn3j<7c0{}(fLOch|c9M|I|>z`<2;=d(i
zdL!v3e$#A2g!7)=l^@iqSu8GSM?WE6vP8nO(dO2ivQ*NX5=PWi^8vM5O?|)g)ux6T
z)xrl$?u=F!OF9<6jG4)8sc1@2!oCLYS3jyJfCHceam)zHQXl#Coa%B9i+9~oqj9zd
z?RuzdOevy>BgRPczhqF4nZYt+2|Bwu@df4*Q8aIkM*
z9c*5iXFfGF##$uW#-^zMxePOYNiIi(=sD8Q4b;w)K*Bc>!t3dOfh38d8^}eN+8c9w
z14ERMPgBznc9QbBaO-f(o6=`8ZZvb|rS5>zV4-%ZzAD#=&EifE^Ab~Xj>I%6>DQLIM0lJ^T1W^VQB5A)R7&R-lnvm^s
zh%2>)YT%8Xg=R0ba*!k-ekl@AL=~N8`H}(YFX|LAyoaUVqR3|)i-hCN9G)>y;@S}U
zS%>tU8G4v0rYc$w-xdngBNH)aXZ;nnT=~mu{(yrpxG8nG)Cl079Aj16
z_ayw-vcf8_D#t60l3z8QT%f7hMW`0+`d{}kyc5|K=i^!aTy7y3SIuqcnIWOZ|rS
z7w^ShZOKLBhQB62J5bj=!l4L#)kVEc0_t{fXKWav%6ezdv7=d1;EX$_vDXY>z5fR5
zKN_t6{rkT?@c-Haq60X8KF!6ItWu9%BQ1tf<98E480YoQzEb~_)&VlJSEX5!p8?6C
z>MUO+YbEom;r@43q9jzOf&XY-Hm1|o6(tKn!_q~v5)h*?0O
zFTBO6vi+OX?D2-Q_CtoQ)j_jDz4ucu>7~QBj`~T1zAoNsCn~EC|DBJ=1<*LwqvDWZ
zfuwMY2&@7gOs`g4l?MBV>xu^$ONxs)FD+~IJTa;|Lw?ki2e{UxYr8@Y>Z5i%Hf->$
z@ob+A2MtRM4j=xrkOSsUXlP_^Z!8H)cQ_y&8S4S#A2yVRv{u0Y#1RB?{@(j2;^0pr
z)m9e+Tx!boHMkSinG183%9ZHs+({O~JffffjhZ|?upZmXgqjjbf!oKkdRAbhTCA?F
zDa}*DWD>HoC9#7rFqx?bLZcj5_a#~f$qa&!^+U)IHXv|O981vul@;aBXP}pf?pIZh
zOsHaF3>;m{i%V-68<)dderj}7(O8WEdztm-h59S7^+GfHoEaXAw06|M_pAL;i2yOASX%J4;O)(C}k8zv*y?+szN1->LM9h>P=0V7c(*t`$GU_1FV7i-Jmnxz_C$PPlIXwJ=vi8O$caq=!#y_qYD
zAl%HF=35J{c$9l)vrX(Myyg1*hy~Cnep&xkoO?2qJY-Dy{9&S6Sblv!a^-5+<2WlQ
zS5O4%ce1{SY*KZ#TXUABuEv{#p8DhIcN9`MjO|=pTwc=B()a+N^jSxc$z4F(ug!)E
zCPjwwen$isxoCtsMi53v?sC?!kNl_e>Xxa7k0I4OBgxrwFyjWPO#=Q_y~i~*_-fyi
z+jRZ=LUZB=Z#WQnU6R2KL=VI@i1Y<5^p6LC0N*DiC1I8o6=}=?1k@DzG&}K{MO5b^
z9zlTZ3qP2|b`s%m6`1fwx-QEJ
zw0wYEH$Oure}^k2vuDHZ3&Z6&Lraw0tM&Q=>1Si$06Y5RSHvul6)YMLM*okBXfeQ5ycIDKy4nDi|!V=8wu6+B7utD#s!+
z+EkZuWiG`dz}Mg*y(K)PLn>FwQ4FSL7)53nt;z0Yz3;a5xd3;##MZ01sF$A-P+zkn
zIflPvO~DJ6BZlY)PQEhmx?PVK-C;&ByZNqjNl^&AlQH~{WW4C;9Q0qgMR#!i9R+cj
zWe#|NEB`6#q8PQcxibeWJgj;z;t!iBo|ZFS&N8iz4=DJmxN+T|Nb5xoQ8a2ox!g2l
zkJ}%PrPrD~pxD;zD?>H9h4|o+A6hXwuQ~Ek!jl9CCl8@-9*Awal&f1pxt2+HTmznv
z{Wq36p06sDjkn!{fvhw)knJxT@2ryi4jc_kSATzHzwKp3&cR&7A*&<8Vt4H#4`w+V
zr5Xj=*WuPFEMY8C`G81$-svTxpgc{xt||&sRdwQ0d+N
zpa^M(WSdP-;u8Zt;OewOMK?nVgN_RqLPhy;-N+Ln_;&r|l(#l><{Ssrw7HK^!FTHm
z^>*hwotuqZqKSqNoRQ>p{gnjiIk3ZiFP%b4f$-35nNT>RA2NlV=Y^tj;4w2D`AZi%n&6nY~~#qKk7A^1I-qOx~#e9wYAZG0a@xf
z(&Nn4OUxZGA|88V?>n%K=3-b@iNLmWe)uS8LcuM<=4Ij&cI$kcd|%0jGaoK`0pQi@jK$k_1id^(sNiAYfE14eHC6?m#YBmElgME&^#
zi?oYRxwTI7e8KaPyXW@)!r|}&c{P6K=-L;RgrZ}CN;!i|v?Zim^_3X8gyq#sWcoQf
z0~MMd$NSc+lN;NA2V8Z29$jZL_v8Qpdwf`zK7X^%pP!evN&V7#4aBSFSpj)k{gP>J
zTs{McjD$y^iW^n+7XA=vcB>N8z*@F6ThPnCVPhITG^H`4qq@4f=t73Vxoxn|n##;4
z+SAS=j*{|eZPva+=zy|zME-jM^yYXWe@OBRp%~s1rl1V%`%w=>`xpL;&ZehVWh@f$
zNgUlteNgpB-pr_Q9bJ`_DTc{+SUJk0)&2OY2*65gSPJrF#=U_4vkK`uC5Y|wj%_za
zz&H&jnG~jiHhRR9zgX?>^mtUV%iNKT9^h6olNO|F4uWKOMthbBjE=;Sn&DZ=Z6h8O
zCCStLH{=?R-Dce@5$zNx{gsa|R@=Z`CLL~fSj7Sy9Q*up};gA=L_^>wJyvxNKe5%vNcTwQRbeX=Z`_xF7dV
zBn#JE2^tZmx4L#fHr6lST5nzLo|(P4S^X+=O*LMfRKjaeW&l&PL5T_GuvTg--eRTX
zlrrGsx)A>5S~x%E8h*P^(je8Fb`<3|*p~88(4FJq{aBL9Fla`baxHjS8L(Kima*T1
zlMb;i)tN5WhwBy&jr)GSZ88x0TLLANr(?u85L1vT*KHc`gzZ*u4{p)*cny1|W{MWK
zX{2U`9s^5hDJb$nRjP3=6c|eFtoyoKgXvA<1(x{TuKG2m4lXCx
zIRhm{Lm5Z*~Is*4(Jv~@;
zXxr|n0Y;X3%mhi4!8#;4Co8NJ$1ll6V0Ml7+8RiFAJj7s7g__AR${lR*!ji!8x^dZ!wB>@Ul1rs1b{*B35
z1RCZFc{9I8B<}R*l|kWPj*`(O&!=G*DFfk^=w-oo8Q+hrrE1%+SszEfZWcZejxA>>
zPPoK8V0s!0oBTaEi;pn&a1dGeRNq-=&Zn5zFcy$#(qX#xOkJ+=)pTD
zz~F2uFeJ0_@VjoIg>#?XS&ihOD7YmB3Y37_s^R{F@$FbSL^)M2qm^mW+;c8N{$y)$
z{w3iid3fjIe6OJFt?t&R&f!R@r-AlTQ~tqc)3{WTmlmqYy~`~tZqN8U3mPT=c3B*k
zbhwN2CK?Of*7*L-E2FAe8Sud63H3oO>-p0-xc0ziq>ibv%X_%0B1TLV4hf4=)PMy}
zwo8qmnBw~Z!CZx;fj_0F9~7t=UOsGk$v1}{C_ux&5o5(va*g$;%KR%K{bivvENOX>
zQ2I}ilakWVGAo`d_VuesSVh{RU23i`VHy7^nu&^bO^ec7Qvdsq@^gbm&nlRO7XDb7(@9g5I3sFO%Xj)x{B
zziDgINd`ntcF&rI89P9;CCn>E-m)}1V3Cf9tCPQ+Ab(Dsm;`vu$l95~;dX3_+_8!e12Pqu*I8Ni|_df1-@3oy%0MF;Ml}
zvL;c-Jod^s?pEc=$7*oydgJkF3x-fqD%ZPcymo@cAyd2>B_CF=V3vbwvbt!1
z0>QaYi@Ca(nWJM9ic$xveN9EZ3$KXJ(&}fPfN6@4b9g5f)fYQa8eXJ!BWEaE*J
zfx!YElF=>JWy%UiqLb7buj@`oU#BPc^OiF!?hhgSC6(rziqn-L>JcJ~4Rf0AB{wC`
zU1=2Z10t|M5iY+7nGo-Me6x@cyWT7g3pm#FQ!KTkUhYfM(96;z{QrCm45cEk6bYc9
z{s=%pp#cxeH>=mv4juVezZ<)yYoprfOu0Xw99z(qktbT;QQbM$d1xfk9$C23mbG$M
zu`Fm%Lh1w2xqmgyw^_3NX)$trqNFlFgZ>C&4u-K#TV;b1
zkRr@kL_xSmqEo6$3!!;*QuucGM|?XXvwhlqXm02hKR7!WUf5O)z*;aL;ZJE+)gdde
zFK{{t!q65Ep@~pFS(vj(k+3|_{4wx~(58OsAPQ(-)JJ;J1HLg#G$;~3YX~$Y@^_G@UkIqs&lZXessf78uLI5p?Gf!3
z>eh8tC5uX?76u7s0rZIydI;(UHX3HfPXyWrsTI`=$_vR0#tmVN+kmQ)R0I7IiUz`M
zK)5P-W9dg{Kn3A}7-bP@z%awlo+QDth%!~eXdqXvR3Jyn_nKVX<0&VDbQYB`9^addZf>a3&Df>hMQh=yJ2>pg3
z1O`m(C|0obumXN1pmfL^J9Il1b_^Xv9n3|jLl`H!~I
zDwb``*;Xi9%o&FHAh*{P^kO}wTY1RBpP3&}eYvaj#$Gin&sF?=C$(oSR%c+VGi~d$
z?)uwE7N0Hx5%Y*lDX$8vkI-je>>YbAgfp(>DO;-Yfl7s7AENr)kQK}Z5uXGITQ8Ybm%hrOh+j{(K(QI!
zP?m~~*ar$PPANrHMrB|det4ql;(xSnhNhisSKcp!?%v}{o{`2b<|rF*0?Rk_*XkEM
zaV)W3&5y^ib+)G$@4g2bNZN(*_xC+nwO66M>wr^I?Md9V?nK5@(9z6#)VGJaYOe-)
z*TGBv6g~P$p)VEE`&W?r!OD}^=0oYy7IG%)gYqPxsY^_g-W*HxlR))slhB$O2Lq8tNe>9Cv+_`9bvF#(iX}r!Y~=YA@p|gBZm<4
zRdD`t<==0<-`f~mpr<1S&S^M=s3%Fjdj!b$<>|@q_vDd3FdbO;7T=S0u~aXr9WuM;
zxu2|BqCzamD(aMsVo$ca$%u;h$p;crBMSX0`@j^juwV(Od{b
z=1mFnOpNAwV-7x!pxo&v-L<~VUCm`wXr^>b%V^zutz?8eYjbq8h(C`S3!nV*vW9kh
zlz;I3dDpr!f3V}#{;a)yp*NIdt{-tL`Bz_n+N?@RV5?!^Q8-s4<;f#uMW4zycW9?=
zWm&j9v*O^pa3<<4$3g?i+0B@+?)F6rsAA&r8%O3Su2;gg2*br)sbNLt)t!gb_61)c
z+oTWXdeY~Y65w+N;PaQy+ZT61zy+E8Ucw8r2d3WbJ}ehooSN4UF*n%zFBB5>el@(u0;zgv{ob0%;MM^P*E6wFze1FCg2t<-Xk;cRM!Bn8L(
zeB^q0$CCU#*2zW9yuEsJZtYY9mSOnDk9Xv0flEqQ+#8&;cB9m^ZF34&CHueT5@=V1
z%%?ui8tPi=2=kvd6I!)LmLYi7MwVf-6a`q9JSfT{dgFcB*G1H;$XC0uuKg2_-psIh
z)^u!^`djdCi*^aS{=Qyq*N$~rc}`J72Hnc3e@RY0hnhkkR)Dxd>g+Rg^yuKVUU7xt
z0{D+QNmvEs^$S{`6(a`+E>eRRh}uA?{iJO7e5{vTUNdC{{>)u*=6DR7Ln1O5Tdq;?6pd
zw!wx4soQwI=HL@U
zryEXQ_f7I4B8SIgjL0R9Ta0K=BpsnfMLQP%3zTx~j6wl-^3*?i?6Sg2eXH-fGQGQvJYI`ZJiwK_NiYQ*3RZPzsw!PB?3o{e9!0&nz)`ri}RqpdwUf1op
zCermbN?c4*57U87BDg3F200JxX5UYb1sSKJ#_}`-v(}ok)^m1hrCvEisdpaRzwh%b
z<14q`a?Z12Rbp{S9aM?v+?Jh*RT(zx)JmHga)wjdZjDOo5<(&C@MJM@M_&pd2uleG
zUs*ygSFkhM&R%n^fSs|~NQ97l;kHd~#gw0l)S#df6y~YG)Ib*6Zss)CPa@Zdy#L+&
z)~J=WA4qYTQ9Glf)E+)fa?syKx^eZ}NB^WNS$8fvF>!eG`}iYs&u(Bo1xv#=Vky7I
zcywZdIl;p?2bgX@>49#RW0+4we2KPQ^H6%wI<-jO0x;=T{HG6IxWYO)Da`wx5ekkr
zzKE9Qsh&CnE+Y8t?loKB*1ogZgp0Z}Ys*1u79wqUk?d>pDcfw6^bjr&onwUz5
z*%xMogtkU>%JV0C@H3VhYW@eXGqxLQpaVogJRNka;p@?(@{+YFBCxt#MQLSAeGYo>
z@xc3Uu0i6%YooKe>PQ01x_l;w<&_spXGARrvW9&vOns3pojNKj5K_J+R9}lb*z0K=
zU$>G!6w?NR+zDmB45rwAE~1`jrOtxoIDM7#e$8_dYWkqo%U{6C;d*+8ZRb-DbA#!o
z_1)S0>NrU@Q$2`W9;5P7Y^olo8@kS{TzXG7!%XKTb?c`0-zgcI@+&u6=?YImW1@C-
z2k(BW4HmM(Y4&MtPglUi%YlXD&!069e
zE#!Ja(ES%I7NTbYNMIUvlV0y`ZYxmk7fYvNShJXB+p*mZ@1OXc|bhtu#)p&`bq!
z44>eeBUq)Duqad}7VLer?H=JhFsBv%T3?Wc(#bR+#Dr9D6(Yz*srC6-(=hEy{4|>3
zaf2940pIzHLQU
zj4i(W*xsKz3BCG_?~dn`W>8HUe$3G)KgE5@8WQW2|0IPtxz(=TbY-7^CdT8f{Y%LA
zxobh+p+)iCK%I@N7?HQ(3kzm*o05I?wz)B9&!_0%uSaQK33EwjDOxnn$Xm`|
z?#LN#P)#7y8C=!nlR4!PJ8VNE!bdJkUC!S2mz_`L_Zy8rh<|CnDQ_S`fXA&5lZMBw
zdEc|XGumNq@8WX(uHis&t9IcD{TQ?Z&iOdg+_th-P6}xslQH31Tpt#9`b$?#P9h;;^j2&zxuYI5
z3SZ96N&(qqk+UVrnHM+yO6(K{nb@|D-XJ$ig@QR>o8Vv#x_I6pa?(KEj6XXf&EWHC50OvtWptW;wd@?2jfmgTG8W-5rT;m(TKfu0w1I=zU^(+
zCeGBj7{;~{8uQ!8L@O4qH$)dYn2{5v;?|?gkyys6n|_csHL4vdK6hiZml@4vx$~kU
z%blKxtfF}%T>im?gut9LjOu;$*U3W(%6IgvS;IqcL%+h6Z%lvA;*1tX<)6Funr-oF
zEADytXl3$yFxaIh%8qVbCv5g!^!sgbnp9ag3x7zLT_Mla3t3-?mfs5t%t#*F&=eAh
zkedQ$VzgwM6l`07X-JnI=$co}FHp2sWa4O4yDT>Aq`aoR_gHSVt-;vo=$_xT
zk@7KQ%x|=iIo~fqrhoN>fZ1H2`S#*5UjVX@Co_J7MVN9@z_YM-
zc-wui(1v5RI)m%v@83F26U$`14g_{Or~C*5>ANWTA1@fmGSv5EB0Xe#vmN9Z8MJS*
z+y^hZbgQCA^2eq7TzC^%JCr2A!=4qjMEN|bTU};7J8Y34R$Wd;#y*Bk&H-9iRrBs0
z>wxRg1%##K2J}`s-qGi;$@<@@MiQG)z8^WFluLQUDS#`Jftt^XRG4(>aaTGAGwk`7FH+pF8;F0fA7jL*q0v2?al1$
zDJoa^!TkqMH00XJi$A`*-8DT!B<<60v$~JjRh-u`g%+Qj6d8DkGI)m^6am3OU!6Jppw04JbWb#ks&NyPp{oW9j7I)H6n95RT8t*ip@tZas
zvj(gX7%ijR-{ae>Xd}nh8C%E6ipue2ZzkN+t+e{C-U(YSEVxo9(Xc%#fk%|vc27Ff
z#)BnIPEulOlr!b?alIaAJ$5wwiHED$3-K`FLn7Ux_U-I=l*l(f8vS*X%_jaGM5FS-
z$T*rRFRzH-SWy4-kI3)lpZebnGFrZG=h(+p&OLILuD3W__|b<@EX0wjq*!2QK)(_j
zpz?>8H$h1Se;uin8mS*k-xhZ-lp$3qVtwHFx?KOJofB7<339gya5Bk+vh&c%FC4B?nFqeZ9ER~)rfx}f3`
zi@;)YN659Xdz6qsb)4?h5WY?MX~*EmK9};nbtIJJiqoFDV!bBJG8A_qA(>kapoFs2^q)0NXUz1#R9JeLPPRcLZ7G_0QTDXI
z+u!`xXky)``nfRB;X^{Gv$IjS
z)OeY@?bu`AkWXC7s_dCDXcY(fpShVaaAXqqNCby$bbkLrk`Y36W2d2nCLO1eAhkUZ
zVtH)l2Br78s!hF`N00|EGWa9x8p?2cT-&Y--UpITY7UP~DX8@1MB_QiM=#@^GiPK@bnTpJ?`evE13wy5XL*-o^IWcU2~ak1)ePR^R0pDU^z37w;7>oM*9>m?HP)d&@DbD*pP(WH@fy`
z2MU)4#XncXSkf$~H?Z1UI~?7N(F|D7v<)CN|DZI2Qzzq(97r~f=uPGydAif(6MNfs-Vy}4w(1J25c3ZAQ&F4FpNLd4G-wK#QULAMs`XO{Mx
zyq)MN+5wu9#-fTW){o<^f42r`6T+JP=V8{Zv`fxSNSg4yW@1^?A^T<5ssmfxthHHL
z0|?QisuX{FO+^S{8UcOE!yrC@ewR6
zMM4ZN<(_Tl0XB4r_D^}#wRMP*2W8~^e-fO6y-jlp)^6(`(Z
zEY_FGOk|EinRUYzFTA=S3nP2g>ZS5rQ3TgQ?1w)u(HrFvgk&?OA`;P7!q9g~XSgJi
zXL8v;uHz;mpvl6(;-n(qeY{hJexx^f4M#heKLT2typTB);2X@qt_
z`g{c_IR#o(?%sB}c*9>2%^dm#2GeTlGjiG-RB
zy>3ewC>(yIyiB|{D)x*@@%<44D>H0WZqB52n!vWYCuwfI#QIE;XvNCsJ;}V+kEA?X
z#!vLJ6CY^zT4W~*(l3k}pp5mO?LXj_z94Q5dbH9>3YK`W>vV^mt%Zh^6(#W0{gYmxYjQ
z-PJIX6SBEQU7U!X))0@IJPRGaK>ttNIYTef#v2d7o$3Dv?#%OwI|pf@JCRN~x7(Tk
zFmvA2p={!&h&wG2Gs*XU-&rY+ajj}mnd+@6t*>#NsY_)2mK7#RPn|d&_3TwL>AeNN
zW7ZpE&S!dmLeZf^PhcTIcY*RkGQ>gLg`nDZmp?f#-yUV>uXukbNnlQi8sS;-zG-(q
zSpt$zTu6QSn{3BpJ1dcJQZC?Gul@Ld7+}hJqxjPP+#hdQfcf||O;b@yEsHb$8zMy^
zqlZ4$h8%WP0;*wDrfD1%+eGZsm6&-?h{f0Gxg=4+Mstt0sH0E|zYBd09pZ-z(Ok7(
zB|YUhWjsYVbzEgxwO$okC4_`PI$%1WeGr*-*1dVJ0Ep0LYeJ^b98_0t8iw*@%9N
zaEb?9f&97ory0H-7dyftnjk_eEK+0(0uod%WX8|b&(%)}dd4A#7NTXq!w!=yO3;YT
zi{ORch{_A^h3kfO%Y8}){4(09;OVzju2oZrHh@P1hbKY_!}*{;lAMx2R0Hf`E&bFC
zFk@h%VERC|uYfP~3Vb_eE6gLxXLYOus40kr0d6I}8Dg{F5`+sPOo_3Gyoi_v>x1!#
zd)gt2PT+}NiM!3!$hWH8iwfG_WIYMe6
zN~1IOM5A{#a#
z%Srk`O-k5BuzwXN45ZsLf?7x+hFTu9^po?GgI=Dr1knLNH)1IwH`FUW&An=2KsbhI
zFgQ2}5nNsJXVBz(RFuVLe-2kY1}?2<12$14G5v5#Pv
zkFRRcWM=L8cAdM4rF{S@(esdhkw?GOl<)h2yU(rBid)oeGmL>d+rC$SVs6ZW6(2I6
z!647aWt+jtm<8*UzB$7|K}-RwyWh-rdVkdC?;oi2jR&JgCNjaQFHM@~ay6SXrv0{J
zBXxJ=n~E*Nm?l$qPKl31n>rmBK-O7Z{eUzQN3KnqT|S@d9G_^GhwPKOErdS;K+S65
znPm(g8vKNA&EeT3FW;?hg${UgNBdb~y;tl<|I5}CDPHdrJq*i@IZs-}$Jhi~j~g^w
zoCb5#TRm3~>(|`{4%5@ph{#OuYR9n^gwUdTjBh7d`H;23Y&_BQZ#nVIM(er}HWG|K
zz-Z7SE-`85_q}B+pr`fRlQy8opD2r4azZ)m8iiej(ddN)g`RpsBxkeD`%#SF;w~8^
z+1B3C_4_}6u10RT{bW(P&b@gGkz}t!(X|M
z4dDN=TU?>`W}H(a|&SH##A@Lk|VZ^=u)`--&p1m@n!Q;t|S;{hU>G
zAo*Gn>hst53^m@dWv;W2(e+Ew91C~peKRHzGNsjlT38E&3-^Q8<+VpBrB){rh_tJD
zE-V9byBx&RX{+LuuzcYMXHx+lg1uR4Z&!%;o{ovyJfnOJq$-BTF-LdX94~1;+@t(*
z%{GO9A{x8!pnbB8c^T+e2(Z>Wze2BXdcR}*hKF&?PJ1rx3)S_wfi|tn=;gb6ugbDC
zMmTSbcUfYo85zM}C4_C;DnMQqJn!~TT`;FSNn&P#_(>VEI1Fh#kQHh&@3;_RKJm$^NLECt`TJQBn~i5@nOkA53$KNGlgC
zqI^rU8jax#_c*>5UHQ27yc+E%(j3txPWxdq?;8lmkQxD{NTSTHt6@Rjmugv()?&;q
zeojZWPO*XGjf)C98m3F7NkbPg8)w4iVCryBeE|;f4==)$VcL4BfC8kl#&?vN?{b9J
zEEru0#MR#(97^)N(UTCSdHG9i#`rlmFwkG-4XD(e{lMn11%a6S%wWX&M|HoxY<8ve
zbvFOvh)tom?Kc%pFCz=%Pzybtq&6Gz-g#6FaG3^J^C0q@dQnrP|3NF~7T41tHnh&Q
zSb6z{jiy?&0hnTw9k?5c7AmNJK?87}v3LkqP
zIjU-xWM`s7{CG}z{Pl6wD!QJevtSyf4P1U>+dD^av8#`wI~ZB6u#wVU3?x^k4_<-ovzdm^1(Trsh9YJMp>c#Z2gE%v4
zdXj}yT401UAO6fgWM-3RDj^;$;fWtAl+~y~{=%4r9n`qq38OMPdV6eNX(E89=%-;1
ze6xj1lt=d`BZLRrE0vrSoQp;s1sE8&BaMFuC5kD(1F|Lwyw@vlf#x205iTQNp%5NF
zqO6iwRGQ(r@a%xATF;@;tI>=p_#0+%I}w`l{t1P8cZDy}iI9Xu``^#KCHMq_O752Bn_?A-nV)KJ2p&AUKGt;=
z-VS(24IeaD?#Qu^KGNdzhbh)&906>;9MX;N2Y++Yg0x3P+HjO==q{Zxoq5wyq_=D=H1y<1
zVvNPh@7|xL9i?T1p6UV|}+??A*g1#1ZWmbG6g;ntszcG;R%9Kge)(&~D
zg57yg5w4@!QM&N*9boZI?O7~j@kM#}f06bUV09#07bxx)T!Xt?aCdhnxVyUqcMq
zH9(L+aCe6UcM0xp?__3@$;{>5`@io2b-KFG$6C90?b^Glx~lr34}AJV@nrVA#uEWz
zmhjilk87FP=cf8nPY?Fk`@M1a(@6u5sa$P~h&+9^f?83HwW9TqJo~~c`t)};VoOxk!jLz8AYBXtrl#3Jn@+EqFs+*s)AzHkS}LgwBKC-M
z8+UaDaiEQjBQ*IVrn~qU9;tH}AKgIvabt%ZeIu?oc1}`co~eYL#I>Ri8gY{bURfEtXVP1k->U*dw5KrO=stPm70Qy73bNRPv&-so0jt1B<^$rbZ)0Z
zXaP1IHTGV@n)ax9B`{-UMIfd&0%cWTIB1Srx|HGr=4c;fRG)&NRJTyA@SI?Y3C0p}uZCV3gUN<_!l6
zBhR9wE}Fr3&0&wHvWG@!+lSh-;1u+%kk?s0n5=E2!u?aTlBQ^ktLC^C1!6&NB5v4*
z6%|gJq4xDWIiZ2I-g2S$VK4kt;;VLZ1RPB-*H{yD0?qzT&Enil!a}H5gWP60!%2
zZo3)RR9U2bKc;&Z_@opn7HE3dtwu(+8&V~MSmq@7-p6OGWOlQ}Gh>?hRrV|w_dza4
zntq0foeya9UWZywRZ(4a9Cjk}I(g?*#bkD9xTthcgB%5QM(#IKgHi|n-J`?h1<4tM
zX=WB-*7e8}IDJYUC;>$Dz4`rdsCv-n^$tgvsBV}W&*+W7jSd6qH|pChsxh`i%lO(l
zVfYahoDXn)a-^bZ5tPwVU(b&dQf}8d8>#7&N512XgMJ>8+nhJrPBCdg;W<%H(Xic3
zl1RNq-@myJYMeINmu^2kb(o8W;K?q%X&JqwveJRxu2;(iOxtNbPsIVU)JBI@*REh>
zPHOb`I)rv)d6Oiwo2uTa?aALJKyK^#CQ~Bvx&Yabk~nkuV^Og~)?lfo@vcyUM4nO%
z=A*=tjiE$e^k{ol)_80C=*MOLV#{x%-gut=iUbo97N%LXSlQafqEJLCxLCfvbyX$>Q;`wAI
z(Wv);Nk$igcoQjBg-_3w3xd?!DxKVH;89LQ>4Ju|o4QpXV-75_lDi2tAviP49_n&9
zGKX`@9v{gKDR~(Y14NddZ&x}hKC1Tc;*CIF+N?H
zl5L2>s_3@2=8%SN?P}XGq*>GeV@`Pz3XW@LDiAYNPe<3}Y!k}LQUF^!;B^O1
z7i^Hw)hu?|W;B;L|DNuE=x{M=!;v1=lMaG4EXSVCcn9yK#e4n$&$uB!1!O}z&7zhX
z(~UrE&<$@@ac*EumP3|yVZ;=VDYu7}lm;?j)I8#D
zKIH`*!MEQtFA$A3$i70HZy>63bjR$*^bs*d$eY_hspeA8xo;fdcp@Im&Em|~M`d)p
z^>lHOCk)n4ef!vS4d&0)KZ6f*w6`o^@b0~ztOJ76`X{JVSW;PI(9C{IP;pbxe#1e2
z#6}1YPOgwykQz`nRm3<%Eb_(
z3y?LK!#6V*SPsM7KtyS6tzrs@>;p>A(s;RchQ8D+3}6ZNlS9fhqcvpK+RusRw+TqTIfw1Q1Sau6@lI(>3YZ7JptJ`oOnZtO-*v8##s}qTzu#A+L07k}gGb3AzyNnI`;cljF
zzntw{N*r|e23K#wX?MP#PBMSBx5?X`dgYoN+Le^_!!Y5+aT%&WtIK1*X?+=2-vd!0
zP0lzYrpBcgjmqkx*qJ9a`iVlEaBhj|i#M7X@*w175U0zx~aEn;B5sT*Ux#frFa461~
zi}mU|8Rq?n%Ts$lWV)?3B9wTjuBEA)v^gVeTNG5Ohs7Z3<=fkj4r2%y9U+FS~WJ&o*CZ
z=9}vfe6N(jI0BLOqj6nUlcy8uvZY>O+?{6HYQz-DszERMs*YU8ltsMXHBNJ=l~v&V
zxI*%}9Cf`>QluID36ouO%L!MP4)^(!HV4si>8o{=)n&BxP&eZsBz&O0N4L%x^7ia1
z?Xu9R%Y2GxW8SrWA$;3Po?f@P1xC8XHD#Y#+^9GLsT>N>T|$`t@+bE$TK~z@BTgTlP~*IQd_j~
z14x+fSdZ~j8~fVlCybD1HrYDkCr^?mnxM9_T=)ljWxuTZ_2HW7cQJQz*ZrBH
zb1`;s1k4_1q+_CEf%!S=`6rxNvCrdAIPRC>Y;0_u{+NBPtN$xY3wT&z>|kzfY~!SF
z^*25P9mBuy``(OS{lsSj0)9zcJ3B*LYdd`>UHzXqS_flCTW1GDW5=J+K#X+Ebd3K6
zn-&Q4|Azew5$HwCycX52pYk%itPcl6Gde?C2V>nIAG8k6HckLN{DftqWBd;t{H|kx
zm*sSHw*ky^cDHqQGPg0Y(swd;(AEFXYd=WKDt;P~XAG#oX9c_h;r8;{TG7=S5u@+S)o8ncL_)**g5G5rCfmb>$bs
z4?X;+%$n*sgFp2^{IbkHi|-c_BOPE5x_{Z%@8!n&vaHSyj-vNd%6QD;ret&DYlKl%TQ>!9yy{Ij@>
zemnU;<8r+$ue*)DwYj0bjk)!o9?0ll7yc_gP)=OqPdj+oc1;}gO@HV^_h;reJZ8E#
z|HU3&)~~q@zz*ze0TtNRLHGBQ-*Er4-u_;0FZ)Oqw14;?
zXY>CvHYX6s3qJC{VLO>y{q2HZ^f|xF&h-BQ--YFu*e~Lm3k%!*EW4L&!^+&m*wEb&
zV9396v{vRW#zy*1`hTeA7Xi}?v7>i>&+{k#o0sMOFZjQpftX(K!@tY^vfub6@o%#K
zE06$u{7Z8GjxG3qmivbnv9>ibwxV^^2WZF<5YK-_tpCmP|6-Fb=;s&DF9p4H55L?ya{PC+G6#fzQ#*Yd!>g
z7t8;$JIWVr0kCA#5O5#5e;t3Jy)5&Fo~znVfAq4h0XQ~}=08?}{($@|`wR6)+y39}
zm*2~e|HZbfi?JafQvC?n`UU_;`p;9^Kf|uE(NB{9Wg{>9iGPCri{mHsi|sh+r#bDP
zI(fO>{8z`u*vL*F&|9$j>n8r_?EHT17t0Ub1AhNYTz^^jKkAL$55K{r`*-#i>i@Ej
zm)qjMfc}R3h4`|K@bTd~{M5%k;tHVuL}zMiYieao_g62F4$xi#^w0FIbbn_4fz9%p
zt^B$8Sg=GfTo_;@Vt}OfyRDc4nzvTQw02g`rsg(|^e<4ET07DK7ATt=I??`B9<%^b
zLo;KCzy6EP&cPN?-JHyg9pMKBVBYaV0kwPj4&Yw|54peAd;ws@}Oh#Bp3`@~is
zTxq0Uryv~*#Rzc`_W{%hJtI@n+y1MrtJN0{L{Aa#TcDpE*gplYVS(xV(Qy?AB!T~%
z;6DWZQ(d%<#sHuAqo4V=h(p7){SqTGBh(T!1H)reGNTeSboDZ$V>41?V{&5>fCFtX
zQ-o5)AX9=;#N}egq8u$
zi_dU|fQtVI4sf`wu2Ke%Zq#7D(tM!${>
zQ%lee(yw+>q9D$MTfF|41lm(WKCA#{c|aoghXl?}X8(=7b{cTY>;FkhJ23u>k_sDf
z1_Xfq&+_jD(@>!WQ2j7o?B6K?mTC=HG5>eL{<>ABaZtgJn;FB?&BzP2!@1M>-RsIr
zn-7bN58QMtKHSP!$|(OLxqQ~2WCDOo1Jdh%k=WVV0D2HF7v#4W3vzUfwtoy12F+3b
zRYV9?hy}#I*f8}Hve^MFI)DV2jP|$P{Ak+#VL*nqR#pJl{bNUf7w)fi{6$&`8VNAw
z_;W`G#RRE$l8q-4y+ZKsup$3Nu3>i4MF$}LQJ-*MB7cdI=?CNXe-z&jM#csP;q?!I
z$-@7dZl@JtHAN5D?H~&j2#5y2V`=MTWo`o)#rjz_e#_GR`K7Lbv$+*ufW$Eh-~vhP3@ZC#H@_>|JEV*|ZFm2V
z8??K?B}qPMnK93irbvQRf>3#TWgYZrk(E;hOoPCmxU}WA5YB>O)4_
zq9Dk9G{07WN}!DoP5?a>y*^LKpEm3BOVLZR$A)0>^I#ebc`$TNoTtqm89JN^K!nuu
zNH8C*8ACy!8ad|2PvTYIiEv*#%iFcHCywr^1p3+sB$SnxDnmAP{7~w!0~G1Ip;5hO
z@^P$u#Uue7GIP{qE|UkMM=keSc%W1B+weRMJ7ot}ACa;4bErmKd}Rlmm@X0d!mrDvp6
z**c4%mdi($ESTA&SW=whR7sjml7tVPMc`^^IyH;&RKh(${HMbhW5B&Q1qT8$MgRi(
zVd#Gu-rrT~N09qjy&V5-dM&C;E>ogNUnM`0R2t{b`cd$Q+J`cJD1ewJ0x2Nno4y5B
z1fjIvjS)vZ5r|Wil9j;E`POoPass4%ioAGr!RYY-mbrxY7=$ZJ?~&%1J7>GIFf+`?
zEJ-4vNFqTLe>1VGm1#lWIeQFV`w8u?^)uRaCqXXpx&C)gn<`vSZ#IPcX>m2qFU8ny
z^oE-HV@5+6e4#DxOPGY=W%Z~X*BTebn&F`7brUfj7)}<8TE!9Oc6pqz$=pY$c>pKZtJdr*h-7bHG)JinkuRm4-b?sP*NOHMe2er6}XOE^025q2^
zl0r#QM92DA=K3%mr6d_9q-g?IUCxi0DrlA6Bl24Du@PsS0Qp0hw;}yp
zIT`gD!?(^=4ON6-VU8Qg#{Hv-Nkyb
zY*&%gi?a4gGJwNmCVaks6tV5YyJ2Mg+#`5Hh0~%pq=mLreC+ucI-}j_wacVT{f-^<
zTT$MAE6#D?Dr0A|hHUcYJCK^gRCPfW+herO9?OC{qd6gM_
zn8a4JWF5BIyaKXCi8!sP@*#r?z2ZouMgBBOuWXdK3)TbiDn1S}+C%Lv^8&exaZ(W1
zJOQ=J{tem;=Y%R=!M&soV}<=YmnN_(@DTEbnTmvHh$F}^D#wE{x+OxBg$}N$x_vr;
zZJY+8yRWBZ!L6|`s|}_TG^U9|P-?@cMMd#=q$|)$Mj|z3N-%=Z$;RBV9ZYunV@Hwh
z(1a!O$oP+-KE)GfJ3=J{w}@si^-#vyp34!jR6KLZl&Qjw-_{=#cg~4^^a@;EZI!nh
z05Odz+JgnccoB&)2X#FO%uZu*X
zTNH9qA6Tcix3TjicJ}0IHy?R7oiIHAU0($F0}1tNSna#$;xKr>{-h}aJ+ys~0RAM$
zw;&pw>kx$zb@r>E+Y}>hq3oQzL%}Pl#$7OwJN>NVC$Qs|A7xT>-r0bDA)M~QX4FkJ
z{`P4bJa7j@e)H&N|C4z}IaaTt3~S_7<~QRx_K`C3?zTA<5%tesNUMuoS*vfF%g*_2
z3o6LD0u{gdoKU-3Wz|onwkKtXdunW7DwSBEgs?p8l?({ile^pG#dccEJJWoq>m*nS
z@tun!G3IkenGBI!+kg7w_^+ChF
zIPUdix${>eFL78NmYIjQy^CY7Q$3^d&aDBn5xs0_mueO~1c-TNiUWmAHiM;woZjmV
z@WjNHY;U5peyP(44MOy0s>aS%hyMaM*_=An812Hu#-wCUY$UWG>{)*chi2{|Qiw>|
zN%r}@SGO`_^*-jkNiR;ksO&ytE&Qp&>%jTufjyFiomC!&$df>YK&@6ktyXL5b#ztZ
ztEI-#d(Cxkt8>4H^w5nd_E;*MevY%0GTm)wk4{sWwT)uV_kA}!Ir#Hvqkx&<5Z86c
zfnMj|=Qf#3NSRB>^UO{Zyc~oZHi9OU8wPh82nt5wqQAqH@(qBm8%oHyQ3^b$3?~>3z-Dz<^-|kW-V{-0u%&QC;Qr%e&hate>Jh^>TUnJ|0Ewjr#0(7R0P<
zxiGn~*g#Q*&tLn%h$0qMS=(UW_N2a2cofnhOngTKE)gz%XNcSnDw3Kkc*c6L8)26(
zbv5i(C%%xuFMuHUDR!H2g6*F%GUGcI&j2tGVD>){5XIjy@{jI=jlLEA&&
zur(Db38_Y*j#aY4RL3yeSz
zlsYa16U(l>MvO-bAjI>P({=?Vfa>`|lvL>&gI}Wv@|-n)Km%+byib77EBCQzLk=aB
z>ElSIbMs|X7<@>c_w_PtWp4sWQ;5wMO84Ff5q-5Ind7O6WR`*WrU|F_z#|bo{>j>-
zx0!=%f_{8&U=1PMAlwi-D&nQsS*i02G?SW3hqpeLPZD>iT~28xniN2+G^6@Up0hel
zj!2`!^zHmXP2mHP^*vOb@)i0{|)Z4dA?^x(3ZpnT0bPKJ~jd^)0BkqmD;nV9^-F)u4*`$Wk
zfo75a^-Z`0*HIh=#AG8hw08Vj#~X`48*NwyuWH!W4{UBXawq>X2QJ<>o@_`!KzD#X
z5Z>SB@Mp{E-!{;rspW>#=iGaM2a1gA$Ci>r3x#ejUT?(zX`_y42t-`|owOef(B68!
zihvzyCj>AyhKA&nqy$dsNPWcu=Cx@TJn;$iX?p`5p@?j4{JLzbO&XLL1ejL`&Z
zP|~d!e!RfrI69oc;pnoR&CCoHBdvhdAG2k{4uOKVxxi(IVOuv}LtV
z_q81HzRm<=ztN@1d`45<`ntXx;Z428Qbo^IVbGRASxA&l)8syj=I1K1;EuS*kI@2nzp*{V&iTT81us88w(BpM{Pkd%M>dMecKUx`Dr2Cl%
zt8Ry8Nm~o>$H}C5@A=j*i&o!qFsQBfvIbR~rn=4wRYKG@-XwiMR9~{Gy?a}Jz_M21
zYEhvzEgwZ%%%$1FB(u&i(~;oNH(%&IqF48AHds-!a>NS!OM?pN&>RkLEQ8Q-T_d4I
zr3@e5dqZ@;jD^7a3eBA(V-ty#f^Wqf`lX4UrjcZL-@1#HV0t2ga1t*a+%FC4)A~Y+
zt&A^ArC4M|K;MWyvMIwFJ!ukPSLx4Vt%0VYUXAA)B5qJI*wA94satARXtmJd?q0Fs
z5*}SyWa`Al@RZdJD6RFFG#OV{=YBOu-XOIsGMusGu|(NlU9^G6j&>@~G^wz8Yb{@A
zjFI=IVnG!Jr3oFb3yrL!uA_FcvbXsa2pyDKW#v{4
zb&nNz!qB4)y(w+o{-br7NE0W|_j*r@C~xKcT0I+PtX|cksiPAoqq6Eo^E!6WfK%u4
zO|3`ka@~<6sh&xZ^6fR)q-s?PI@&F%8+l5^>>4*x-#m3qx}@-N;zniA+j)zE?}=@n
zmrJ{(4@mY7lTzl3kY`+SRaY$D*`fembZBF}An890F;}NFA1)RsFzcgq5W-m(T
z+y