Songbird99 a2c760ea2b Feature/litellm proxy (#27)
* feat: seed governance config and responses routing

* Add env-configurable timeout for proxy providers

* Integrate LiteLLM OTEL collector and update docs

* Make .env.litellm optional for LiteLLM proxy

* Add LiteLLM proxy integration with model-agnostic virtual keys

Changes:
- Bootstrap generates 3 virtual keys with individual budgets (CLI: $100, Task-Agent: $25, Cognee: $50); see the sketch after this list
- Task-agent loads config at runtime via entrypoint script to wait for bootstrap completion
- All keys are model-agnostic by default (no LITELLM_DEFAULT_MODELS restrictions)
- Bootstrap handles database/env mismatch after docker prune by deleting stale aliases
- CLI and Cognee configured to use LiteLLM proxy with virtual keys
- Added comprehensive documentation in volumes/env/README.md
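
A hedged sketch of how the bootstrap could mint these keys against LiteLLM's /key/generate endpoint (budgets and proxy port come from this PR; the master-key variable name is an assumption):

import os
import requests

PROXY_URL = "http://localhost:10999"           # proxy address used by all services
MASTER_KEY = os.environ["LITELLM_MASTER_KEY"]  # assumed name for the proxy admin key

def generate_virtual_key(alias: str, max_budget: float) -> str:
    # An empty "models" list leaves the key model-agnostic,
    # matching the no-restrictions behavior described above.
    resp = requests.post(
        f"{PROXY_URL}/key/generate",
        headers={"Authorization": f"Bearer {MASTER_KEY}"},
        json={"key_alias": alias, "max_budget": max_budget, "models": []},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["key"]

for alias, budget in [("cli", 100.0), ("task-agent", 25.0), ("cognee", 50.0)]:
    print(alias, generate_virtual_key(alias, budget))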

Technical details:
- task-agent entrypoint waits for keys in .env file before starting uvicorn
- Bootstrap creates/updates TASK_AGENT_API_KEY, COGNEE_API_KEY, and OPENAI_API_KEY
- Removed hardcoded API keys from docker-compose.yml
- All services route through http://localhost:10999 proxy

* Fix CLI not loading virtual keys from global .env

Project .env files with empty OPENAI_API_KEY values were overriding
the global virtual keys. Updated _load_env_file_if_exists to only
override with non-empty values.
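
A minimal sketch of that guard, assuming a simple dotenv-style loader (the real helper may differ in detail):

import os

def _load_env_file_if_exists(path: str) -> None:
    if not os.path.isfile(path):
        return
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            value = value.strip().strip('"')
            # Only override with a non-empty value, so a blank OPENAI_API_KEY=
            # in a project .env cannot mask the global virtual key.
            if value:
                os.environ[key.strip()] = value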

* Fix agent executor not passing API key to LiteLLM

The agent was initializing LiteLlm without api_key or api_base,
causing authentication errors when using the LiteLLM proxy. It now
reads the OPENAI_API_KEY/LLM_API_KEY and LLM_ENDPOINT environment
variables and passes them to the LiteLlm constructor.
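
A sketch of that wiring (the LiteLlm import path is an assumption based on Google's ADK; the wrapper forwards these kwargs to litellm):

import os
from google.adk.models.lite_llm import LiteLlm  # assumed import path

api_key = os.getenv("OPENAI_API_KEY") or os.getenv("LLM_API_KEY")
api_base = os.getenv("LLM_ENDPOINT")  # e.g. the proxy at http://localhost:10999

model = LiteLlm(
    model=os.getenv("LLM_MODEL", "openai/gpt-4o-mini"),  # default model is a placeholder
    api_key=api_key,
    api_base=api_base,
)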

* Auto-populate project .env with virtual key from global config

When running 'ff init', the command now checks for a global
volumes/env/.env file and automatically uses the OPENAI_API_KEY
virtual key if found. This ensures projects work with LiteLLM
proxy out of the box without manual key configuration.
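
Roughly, the init-time check could look like this sketch (the path comes from the commit; the function name and dotenv parsing are hypothetical):

from pathlib import Path

GLOBAL_ENV = Path("volumes/env/.env")

def seed_project_env(project_dir: Path) -> None:
    # Hypothetical helper: copy the proxy virtual key into a fresh project .env
    if not GLOBAL_ENV.is_file():
        return
    for line in GLOBAL_ENV.read_text().splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "OPENAI_API_KEY" and value.strip():
            (project_dir / ".env").write_text(f"OPENAI_API_KEY={value.strip()}\n")
            return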

* docs: Update README with LiteLLM configuration instructions

Add a note about LITELLM_GEMINI_API_KEY configuration and clarify that the OPENAI_API_KEY default value should not be changed, as it is used for the LLM proxy.

* Refactor workflow parameters to use JSON Schema defaults

Consolidates parameter defaults into JSON Schema format, removing the separate default_parameters field. Adds an extract_defaults_from_json_schema() helper to extract defaults from the standard schema structure. Updates the LiteLLM proxy config to use the LITELLM_OPENAI_API_KEY environment variable.
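
A minimal sketch of that helper, assuming defaults live on each property in the schema:

def extract_defaults_from_json_schema(schema: dict) -> dict:
    # Pull {parameter: default} pairs out of a JSON Schema's "properties" block.
    return {
        name: spec["default"]
        for name, spec in schema.get("properties", {}).items()
        if isinstance(spec, dict) and "default" in spec
    }

For example, {"properties": {"depth": {"type": "integer", "default": 3}}} yields {"depth": 3}.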

* Remove .env.example from task_agent

* Fix MDX syntax error in llm-proxy.md

* fix: apply default parameters from metadata.yaml automatically

Fixed TemporalManager.run_workflow() to correctly apply default parameter
values from workflow metadata.yaml files when parameters are not provided
by the caller.

Previous behavior:
- When workflow_params was an empty dict ({}), the condition
  `if workflow_params and 'parameters' in metadata` short-circuited to false
- Parameters would not be extracted from schema, resulting in workflows
  receiving only target_id with no other parameters

New behavior:
- Removed the `workflow_params and` requirement from the condition
- Now explicitly checks for defaults in parameter spec
- Applies defaults from metadata.yaml automatically when param not provided
- Workflows receive all parameters with proper fallback:
  provided value > metadata default > None

This makes metadata.yaml the single source of truth for parameter defaults,
removing the need for workflows to implement defensive default handling.
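
A condensed sketch of that fallback (function and argument names are illustrative):

def resolve_workflow_params(provided: dict, parameters_schema: dict) -> dict:
    props = parameters_schema.get("properties", {})
    defaults = {n: s["default"] for n, s in props.items()
                if isinstance(s, dict) and "default" in s}
    # provided value wins, then the metadata.yaml default, then None
    return {name: provided.get(name, defaults.get(name))
            for name in set(provided) | set(props)}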

Affected workflows:
- llm_secret_detection (was failing with KeyError)
- All other workflows now benefit from automatic default application

Co-authored-by: tduhamel42 <tduhamel@fuzzinglabs.com>
2025-10-26 12:51:53 +01:00


🚧 FuzzForge is under active development

AI-powered workflow automation and AI Agents for AppSec, Fuzzing & Offensive Security


Overview • Features • Installation • Quickstart • AI Demo • Contributing • Roadmap


🚀 Overview

FuzzForge helps security researchers and engineers automate application security and offensive security workflows with the power of AI and fuzzing frameworks.

  • Orchestrate static & dynamic analysis
  • Automate vulnerability research
  • Scale AppSec testing with AI agents
  • Build, share & reuse workflows across teams

FuzzForge is open source, built to empower security teams, researchers, and the community.

🚧 FuzzForge is under active development. Expect breaking changes.

Note: Fuzzing workflows (atheris_fuzzing, cargo_fuzzing, ossfuzz_campaign) are in early development. OSS-Fuzz integration is under heavy active development. For stable workflows, use: security_assessment, gitleaks_detection, trufflehog_detection, or llm_secret_detection.


Demo - Manual Workflow Setup


Setting up and running security workflows through the interface

👉 More installation options in the Documentation.


Key Features

  • 🤖 AI Agents for Security Specialized agents for AppSec, reversing, and fuzzing
  • 🛠 Workflow Automation Define & execute AppSec workflows as code
  • 📈 Vulnerability Research at Scale Rediscover 1-days & find 0-days with automation
  • 🔗 Fuzzer Integration Atheris (Python), cargo-fuzz (Rust), OSS-Fuzz campaigns
  • 🌐 Community Marketplace Share workflows, corpora, PoCs, and modules
  • 🔒 Enterprise Ready Team/Corp cloud tiers for scaling offensive security

Support the Project


If you find FuzzForge useful, please star the repo to support development 🚀


🔍 Secret Detection Benchmarks

FuzzForge includes three secret detection workflows benchmarked on a controlled dataset of 32 documented secrets (12 Easy, 10 Medium, 10 Hard):

Tool                 Recall   Secrets Found   Speed
LLM (gpt-5-mini)     84.4%    41              618s
LLM (gpt-4o-mini)    56.2%    30              297s
Gitleaks             37.5%    12              5s
TruffleHog           0.0%     1               5s

📊 Full benchmark results and analysis

The LLM-based detector excels at finding obfuscated and hidden secrets through semantic analysis, while pattern-based tools (Gitleaks) offer speed for standard secret formats.


📦 Installation

Requirements

Python 3.11+ Python 3.11 or higher is required.

uv Package Manager

curl -LsSf https://astral.sh/uv/install.sh | sh

Docker For containerized workflows, see the Docker Installation Guide.

Configure AI Agent API Keys (Optional)

For AI-powered workflows, configure your LLM API keys:

cp volumes/env/.env.example volumes/env/.env
# Edit volumes/env/.env and add your API keys (OpenAI, Anthropic, Google, etc.)
# Add your key to LITELLM_GEMINI_API_KEY 

Don't change the OPENAI_API_KEY default value, as it is used for the LLM proxy.

This is required for:

  • llm_secret_detection workflow
  • AI agent features (ff ai agent)

Basic security workflows (gitleaks, trufflehog, security_assessment) work without this configuration.

CLI Installation

After installing the requirements, install the FuzzForge CLI:

# Clone the repository
git clone https://github.com/fuzzinglabs/fuzzforge_ai.git
cd fuzzforge_ai

# Install CLI with uv (from the root directory)
uv tool install --python python3.12 .

Quickstart

Run your first workflow with Temporal orchestration and automatic file upload:

# 1. Clone the repo
git clone https://github.com/fuzzinglabs/fuzzforge_ai.git
cd fuzzforge_ai

# 2. Copy the default LLM env config
cp volumes/env/.env.example volumes/env/.env

# 3. Start FuzzForge with Temporal
docker compose up -d

# 4. Start the Python worker (needed for security_assessment workflow)
docker compose up -d worker-python

The first launch can take 2-3 minutes for services to initialize.

Workers don't auto-start by default (saves RAM). Start the worker you need before running workflows.

# 5. Run your first workflow (files are automatically uploaded)
cd test_projects/vulnerable_app/
fuzzforge init                           # Initialize FuzzForge project
ff workflow run security_assessment .    # Start workflow - CLI uploads files automatically!

# The CLI will:
# - Detect the local directory
# - Create a compressed tarball
# - Upload to backend (via MinIO)
# - Start the workflow on vertical worker
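
Under the hood, the upload amounts to something like this sketch (the backend URL, endpoint, and field name are assumptions, not the actual API):

import tarfile
import tempfile
from pathlib import Path

import requests

def upload_and_run(project_dir: Path, backend: str = "http://localhost:8000") -> None:
    # Compress the project directory into a tarball...
    tarball = Path(tempfile.mkdtemp()) / f"{project_dir.name}.tar.gz"
    with tarfile.open(tarball, "w:gz") as tar:
        tar.add(project_dir, arcname=project_dir.name)
    # ...then hand it to the backend, which stages it in MinIO and starts the workflow.
    with open(tarball, "rb") as fh:
        resp = requests.post(
            f"{backend}/workflows/security_assessment/run",  # hypothetical endpoint
            files={"archive": fh},
            timeout=120,
        )
    resp.raise_for_status()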


AI-Powered Workflow Execution


AI agents automatically analyzing code and providing security insights

📚 Resources


🤝 Contributing

We welcome contributions from the community!
There are many ways to help:

  • Report bugs by opening an issue
  • Suggest new features or improvements
  • Submit pull requests with fixes or enhancements
  • Share workflows, corpora, or modules with the community

See our Contributing Guide for details.


🗺️ Roadmap

Planned features and improvements:

  • 📦 Public workflow & module marketplace
  • 🤖 New specialized AI agents (Rust, Go, Android, Automotive)
  • 🔗 Expanded fuzzer integrations (LibFuzzer, Jazzer, more network fuzzers)
  • ☁️ Multi-tenant SaaS platform with team collaboration
  • 📊 Advanced reporting & analytics

👉 Follow updates in the GitHub issues and Discord


📜 License

FuzzForge is released under the Business Source License (BSL) 1.1, with an automatic fallback to Apache 2.0 after 4 years.
See LICENSE and LICENSE-APACHE for details.
