Mirror of https://github.com/FuzzingLabs/fuzzforge_ai.git (synced 2026-02-13 09:52:47 +00:00)
* feat: seed governance config and responses routing

* Add env-configurable timeout for proxy providers

* Integrate LiteLLM OTEL collector and update docs

* Make .env.litellm optional for LiteLLM proxy

* Add LiteLLM proxy integration with model-agnostic virtual keys

  Changes:
  - Bootstrap generates 3 virtual keys with individual budgets (CLI: $100, Task-Agent: $25, Cognee: $50)
  - Task-agent loads config at runtime via an entrypoint script that waits for bootstrap completion
  - All keys are model-agnostic by default (no LITELLM_DEFAULT_MODELS restrictions)
  - Bootstrap handles database/env mismatches after a docker prune by deleting stale aliases
  - CLI and Cognee are configured to use the LiteLLM proxy with virtual keys
  - Added comprehensive documentation in volumes/env/README.md

  Technical details:
  - The task-agent entrypoint waits for keys in the .env file before starting uvicorn
  - Bootstrap creates/updates TASK_AGENT_API_KEY, COGNEE_API_KEY, and OPENAI_API_KEY
  - Removed hardcoded API keys from docker-compose.yml
  - All services route through the proxy at http://localhost:10999

* Fix CLI not loading virtual keys from global .env

  Project .env files with empty OPENAI_API_KEY values were overriding the global virtual keys. Updated _load_env_file_if_exists to only override with non-empty values (sketched below).

* Fix agent executor not passing API key to LiteLLM

  The agent was initializing LiteLlm without api_key or api_base, causing authentication errors when using the LiteLLM proxy. It now reads the OPENAI_API_KEY/LLM_API_KEY and LLM_ENDPOINT environment variables and passes them to the LiteLlm constructor (sketched below).

* Auto-populate project .env with the virtual key from the global config

  When running 'ff init', the command now checks for a global volumes/env/.env file and automatically uses the OPENAI_API_KEY virtual key if found. This ensures projects work with the LiteLLM proxy out of the box, without manual key configuration.

* docs: Update README with LiteLLM configuration instructions

  Add a note about LITELLM_GEMINI_API_KEY configuration and clarify that the OPENAI_API_KEY default value should not be changed, as it is used for the LLM proxy.

* Refactor workflow parameters to use JSON Schema defaults

  Consolidates parameter defaults into JSON Schema format, removing the separate default_parameters field. Adds an extract_defaults_from_json_schema() helper to extract defaults from the standard schema structure (sketched below). Updates the LiteLLM proxy config to use the LITELLM_OPENAI_API_KEY environment variable.

* Remove .env.example from task_agent

* Fix MDX syntax error in llm-proxy.md

* fix: apply default parameters from metadata.yaml automatically

  Fixed TemporalManager.run_workflow() to correctly apply default parameter values from workflow metadata.yaml files when parameters are not provided by the caller.

  Previous behavior:
  - When workflow_params was an empty {}, the condition `if workflow_params and 'parameters' in metadata` evaluated false
  - Parameters were not extracted from the schema, so workflows received only target_id and no other parameters

  New behavior:
  - Removed the `workflow_params and` requirement from the condition
  - Explicitly checks for defaults in each parameter spec
  - Applies defaults from metadata.yaml automatically when a parameter is not provided
  - Workflows receive all parameters with the fallback order: provided value > metadata default > None (sketched below)

  This makes metadata.yaml the single source of truth for parameter defaults, removing the need for workflows to implement defensive default handling.

  Affected workflows:
  - llm_secret_detection (was failing with a KeyError)
  - All other workflows now benefit from automatic default application

  Co-authored-by: tduhamel42 <tduhamel@fuzzinglabs.com>
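The non-empty-override fix can be pictured with a minimal sketch. The function name _load_env_file_if_exists comes from the commit message; the body below is an assumption, not the project's actual code:

import os
from pathlib import Path


def _load_env_file_if_exists(path: Path) -> None:
    """Load KEY=VALUE pairs; only non-empty values override the environment."""
    if not path.is_file():
        return
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip().strip('"').strip("'")
        # An empty value (e.g. `OPENAI_API_KEY=`) is skipped instead of
        # clobbering a key already loaded from the global .env.
        if value:
            os.environ[key] = value

With this guard, a project .env containing a blank OPENAI_API_KEY= line no longer overrides the virtual key inherited from the global volumes/env/.env.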
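The agent-executor fix amounts to passing credentials explicitly when constructing the model wrapper. A sketch, assuming Google ADK's LiteLlm class, which forwards extra keyword arguments such as api_key and api_base to LiteLLM; the environment variable names are from the commit message:

import os

from google.adk.models.lite_llm import LiteLlm

# Fallback chain per the commit message: OPENAI_API_KEY, then LLM_API_KEY.
api_key = os.getenv("OPENAI_API_KEY") or os.getenv("LLM_API_KEY")
api_base = os.getenv("LLM_ENDPOINT")  # e.g. the proxy at http://localhost:10999

model = LiteLlm(
    model=os.getenv("LITELLM_MODEL", "openai/gpt-4o-mini"),
    api_key=api_key,
    api_base=api_base,
)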
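A helper like extract_defaults_from_json_schema() follows directly from the standard JSON Schema layout. Only the name is from the commit message; this signature and body are an illustrative assumption:

from typing import Any


def extract_defaults_from_json_schema(schema: dict[str, Any]) -> dict[str, Any]:
    """Collect `default` values from a JSON Schema's top-level properties."""
    defaults: dict[str, Any] = {}
    for name, spec in schema.get("properties", {}).items():
        if isinstance(spec, dict) and "default" in spec:
            defaults[name] = spec["default"]
    return defaults

For example, extract_defaults_from_json_schema({"properties": {"max_depth": {"type": "integer", "default": 3}}}) would return {"max_depth": 3}.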
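The fallback order "provided value > metadata default > None" reduces to a small merge step. A hypothetical illustration, not the actual TemporalManager.run_workflow() code:

def resolve_parameters(
    provided: dict[str, object],
    parameter_specs: dict[str, dict],
) -> dict[str, object]:
    """Merge caller-provided params with metadata.yaml defaults."""
    resolved: dict[str, object] = {}
    for name, spec in parameter_specs.items():
        if name in provided:
            resolved[name] = provided[name]       # provided value wins
        elif "default" in spec:
            resolved[name] = spec["default"]      # metadata.yaml default
        else:
            resolved[name] = None                 # explicit None fallback
    return resolved

Note that the merge runs even when provided is an empty dict, which is exactly the case the old `if workflow_params and ...` guard skipped.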
36 lines · 990 B · Python
"""Configuration constants for the LiteLLM hot-swap agent."""
|
|
|
|
from __future__ import annotations
|
|
|
|
import os
|
|
|
|
|
|
def _normalize_proxy_base_url(raw_value: str | None) -> str | None:
|
|
if not raw_value:
|
|
return None
|
|
cleaned = raw_value.strip()
|
|
if not cleaned:
|
|
return None
|
|
# Avoid double slashes in downstream requests
|
|
return cleaned.rstrip("/")
|
|
|
|
AGENT_NAME = "litellm_agent"
|
|
AGENT_DESCRIPTION = (
|
|
"A LiteLLM-backed shell that exposes hot-swappable model and prompt controls."
|
|
)
|
|
|
|
DEFAULT_MODEL = os.getenv("LITELLM_MODEL", "openai/gpt-4o-mini")
|
|
DEFAULT_PROVIDER = os.getenv("LITELLM_PROVIDER") or None
|
|
PROXY_BASE_URL = _normalize_proxy_base_url(
|
|
os.getenv("FF_LLM_PROXY_BASE_URL")
|
|
or os.getenv("LITELLM_API_BASE")
|
|
or os.getenv("LITELLM_BASE_URL")
|
|
)
|
|
|
|
STATE_PREFIX = "app:litellm_agent/"
|
|
STATE_MODEL_KEY = f"{STATE_PREFIX}model"
|
|
STATE_PROVIDER_KEY = f"{STATE_PREFIX}provider"
|
|
STATE_PROMPT_KEY = f"{STATE_PREFIX}prompt"
|
|
|
|
CONTROL_PREFIX = "[HOTSWAP"
|