FuzzForge AI LLM Configuration Guide
====================================

This note summarises the environment variables and libraries that drive LiteLLM (via the Google ADK runtime) inside the FuzzForge AI module. For complete matrices and advanced examples, read `docs/advanced/configuration.md`.

Core Libraries
--------------
- `google-adk` – hosts the agent runtime, memory services, and LiteLLM bridge.
- `litellm` – provider-agnostic LLM client used by ADK and the executor.
- Provider SDKs – install the SDK that matches your target backend (`openai`, `anthropic`, `google-cloud-aiplatform`, `groq`, etc.).
- Optional extras: `agentops` for tracing, `cognee[all]` for knowledge-graph ingestion, and the `ollama` CLI for running local models.

Quick install of the base stack::

```
pip install google-adk litellm openai
```

Add any provider-specific SDKs (for example `pip install anthropic groq`) on top of that base.
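
To confirm the base install can reach a provider, a minimal LiteLLM smoke test looks like this (a sketch, assuming an OpenAI key is exported; substitute whatever model you set in `LITELLM_MODEL`):

```
# Minimal smoke test -- assumes OPENAI_API_KEY is exported in your shell.
from litellm import completion

response = completion(
    model="gpt-5-mini",  # match your LITELLM_MODEL value
    messages=[{"role": "user", "content": "Reply with the word: pong"}],
)
print(response.choices[0].message.content)
```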

Baseline Setup
--------------
Copy `.fuzzforge/.env.template` to `.fuzzforge/.env` and set the core fields:

```
LLM_PROVIDER=openai
LITELLM_MODEL=gpt-5-mini
OPENAI_API_KEY=sk-your-key
FUZZFORGE_MCP_URL=http://localhost:8010/mcp
SESSION_PERSISTENCE=sqlite
MEMORY_SERVICE=inmemory
```
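
To sanity-check the file before booting the agent, a quick sketch (assuming `python-dotenv` is installed; the required list mirrors the baseline fields above and is not exhaustive):

```
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv(".fuzzforge/.env")

required = ["LLM_PROVIDER", "LITELLM_MODEL", "OPENAI_API_KEY", "FUZZFORGE_MCP_URL"]
missing = [name for name in required if not os.getenv(name)]
if missing:
    raise SystemExit(f"Missing baseline settings: {', '.join(missing)}")
print("Baseline configuration looks complete.")
```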

LiteLLM Provider Examples
-------------------------

OpenAI-compatible (Azure, etc.)::
```
LLM_PROVIDER=azure_openai
LITELLM_MODEL=gpt-4o-mini
LLM_API_KEY=sk-your-azure-key
LLM_ENDPOINT=https://your-resource.openai.azure.com
```
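
LiteLLM addresses Azure deployments as `azure/<deployment>`, so the settings above correspond roughly to the direct call below (the deployment name and API version are placeholders, not FuzzForge defaults):

```
from litellm import completion

response = completion(
    model="azure/my-gpt4o-mini",  # azure/<your deployment name> -- placeholder
    api_base="https://your-resource.openai.azure.com",
    api_version="2024-02-15-preview",  # use the API version of your resource
    api_key="sk-your-azure-key",
    messages=[{"role": "user", "content": "ping"}],
)
```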

Anthropic::
```
LLM_PROVIDER=anthropic
LITELLM_MODEL=claude-3-haiku-20240307
ANTHROPIC_API_KEY=sk-your-key
```

Ollama (local)::
```
LLM_PROVIDER=ollama_chat
LITELLM_MODEL=codellama:latest
OLLAMA_API_BASE=http://localhost:11434
```
Run `ollama pull codellama:latest` first so the model is already available locally and the adapter can respond immediately.
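
To verify the local server is reachable and the model is present, a small stdlib-only check against Ollama's standard `/api/tags` endpoint:

```
import json
import urllib.request

# /api/tags lists the models the local Ollama server can serve.
with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    models = [m["name"] for m in json.load(resp)["models"]]

if "codellama:latest" in models:
    print("codellama:latest is ready.")
else:
    print("Model missing; run `ollama pull codellama:latest`. Found:", models)
```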

Vertex AI::
```
LLM_PROVIDER=vertex_ai
LITELLM_MODEL=gemini-1.5-pro
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
GOOGLE_CLOUD_PROJECT=your-project-id
```

Provider Checklist
------------------
- **OpenAI / Azure OpenAI**: `LLM_PROVIDER`, `LITELLM_MODEL`, an API key, plus the endpoint and API version for Azure.
- **Anthropic**: `LLM_PROVIDER=anthropic`, `LITELLM_MODEL`, `ANTHROPIC_API_KEY`.
- **Google Vertex AI**: `LLM_PROVIDER=vertex_ai`, `LITELLM_MODEL`, `GOOGLE_APPLICATION_CREDENTIALS`, `GOOGLE_CLOUD_PROJECT`.
- **Groq**: `LLM_PROVIDER=groq`, `LITELLM_MODEL`, `GROQ_API_KEY`.
- **Ollama / Local**: `LLM_PROVIDER=ollama_chat`, `LITELLM_MODEL`, `OLLAMA_API_BASE`, and the model pulled locally (`ollama pull <model>`).
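
A sketch that turns this checklist into a startup check (the per-provider variable lists summarise the bullets above and are not an exhaustive contract):

```
import os

# Per-provider required variables, as summarised in the checklist above.
REQUIRED = {
    "openai": ["OPENAI_API_KEY"],
    "azure_openai": ["LLM_API_KEY", "LLM_ENDPOINT"],
    "anthropic": ["ANTHROPIC_API_KEY"],
    "vertex_ai": ["GOOGLE_APPLICATION_CREDENTIALS", "GOOGLE_CLOUD_PROJECT"],
    "groq": ["GROQ_API_KEY"],
    "ollama_chat": ["OLLAMA_API_BASE"],
}

provider = os.getenv("LLM_PROVIDER", "")
needed = ["LITELLM_MODEL"] + REQUIRED.get(provider, [])
missing = [name for name in needed if not os.getenv(name)]
print(f"{provider or '<unset>'}: " + ("OK" if not missing else f"missing {missing}"))
```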

Knowledge Graph Add-ons
-----------------------
Set these only if you plan to use Cognee project graphs:

```
LLM_COGNEE_PROVIDER=openai
LLM_COGNEE_MODEL=gpt-5-mini
LLM_COGNEE_API_KEY=sk-your-key
```
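
For orientation, the Cognee flow these variables feed looks roughly like this (`add` and `cognify` are Cognee's public entry points, but verify the exact names and signatures against your installed version):

```
import asyncio
import cognee

async def main():
    # Ingest content, then build the knowledge graph from it.
    await cognee.add("Notes or source files to index.")
    await cognee.cognify()

asyncio.run(main())
```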

Tracing & Debugging
-------------------
- Provide `AGENTOPS_API_KEY` to enable hosted traces for every conversation.
- Set `FUZZFORGE_DEBUG=1` (and optionally `LOG_LEVEL=DEBUG`) for verbose executor output.
- Restart the agent after changing environment variables; LiteLLM loads configuration on boot.
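
For reference, enabling AgentOps in plain Python amounts to the sketch below; the agent is expected to do the equivalent wiring on boot, so this is illustrative only:

```
import os
import agentops

# agentops.init() picks up AGENTOPS_API_KEY from the environment.
if os.getenv("AGENTOPS_API_KEY"):
    agentops.init()
```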

Further Reading
---------------
`docs/advanced/configuration.md` – provider comparison, debugging flags, and referenced modules.