Compare commits

8 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| AFredefon | 6cd8fd3cf5 | fix(hub): fix hub config wiring and volume expansion in client | 2026-02-25 23:54:15 +01:00 |
| AFredefon | f3899279d5 | feat(hub): add hub integration and rename project to FuzzForge AI | 2026-02-25 23:12:42 +01:00 |
| AFredefon | 04c8383739 | refactor(modules-sdk): write all events to stdout as JSONL | 2026-02-23 02:21:12 +01:00 |
| AFredefon | c6e9557541 | Merge pull request #41 from FuzzingLabs/cleanup/remove-dead-code (refactor: remove dead code from OSS) | 2026-02-18 01:39:40 +01:00 |
| AFredefon | 829e8b994b | refactor: remove dead code from OSS (fuzzforge-types and unused fuzzforge-common modules) | 2026-02-18 01:38:35 +01:00 |
| AFredefon | be55bd3426 | Merge pull request #40 from FuzzingLabs/fuzzforge-ai-new-version (Missing modifications for the new version) | 2026-02-16 10:11:10 +01:00 |
| tduhamel42 | 9ea4d66586 | fix: update license badge to BSL 1.1 and add roadmap section to README | 2026-02-10 18:36:30 +01:00 |
| tduhamel42 | ec16b37410 | Merge fuzzforge-ai-new-version: complete rewrite with MCP-native module architecture (Fuzzforge ai new version) | 2026-02-10 18:28:46 +01:00 |
76 changed files with 2255 additions and 1447 deletions

View File

@@ -1,6 +1,6 @@
-# Contributing to FuzzForge OSS
+# Contributing to FuzzForge AI
-Thank you for your interest in contributing to FuzzForge OSS! We welcome contributions from the community and are excited to collaborate with you.
+Thank you for your interest in contributing to FuzzForge AI! We welcome contributions from the community and are excited to collaborate with you.
**Our Vision**: FuzzForge aims to be a **universal platform for security research** across all cybersecurity domains. Through our modular architecture, any security tool—from fuzzing engines to cloud scanners, from mobile app analyzers to IoT security tools—can be integrated as a containerized module and controlled via AI agents.
@@ -360,8 +360,8 @@ Beyond modules, you can contribute to FuzzForge's core components.
1. **Clone and Install**
```bash
-git clone https://github.com/FuzzingLabs/fuzzforge-oss.git
-cd fuzzforge-oss
+git clone https://github.com/FuzzingLabs/fuzzforge_ai.git
+cd fuzzforge_ai
uv sync --all-extras
```
@@ -538,7 +538,7 @@ Before submitting a new module:
## License
-By contributing to FuzzForge OSS, you agree that your contributions will be licensed under the same license as the project (see [LICENSE](LICENSE)).
+By contributing to FuzzForge AI, you agree that your contributions will be licensed under the same license as the project (see [LICENSE](LICENSE)).
For module contributions:
- Modules you create remain under the project license

View File

@@ -1,10 +1,10 @@
-.PHONY: help install sync format lint typecheck test build-modules clean
+.PHONY: help install sync format lint typecheck test build-modules build-hub-images clean
SHELL := /bin/bash
# Default target
help:
-	@echo "FuzzForge OSS Development Commands"
+	@echo "FuzzForge AI Development Commands"
	@echo ""
	@echo " make install - Install all dependencies"
	@echo " make sync - Sync shared packages from upstream"
@@ -12,8 +12,9 @@ help:
	@echo " make lint - Lint code with ruff"
	@echo " make typecheck - Type check with mypy"
	@echo " make test - Run all tests"
-	@echo " make build-modules - Build all module container images"
-	@echo " make clean - Clean build artifacts"
+	@echo " make build-modules - Build all module container images"
+	@echo " make build-hub-images - Build all mcp-security-hub images"
+	@echo " make clean - Clean build artifacts"
	@echo ""
# Install all dependencies
@@ -93,6 +94,10 @@ build-modules:
	@echo ""
	@echo "✓ All modules built successfully!"
+# Build all mcp-security-hub images for the firmware analysis pipeline
+build-hub-images:
+	@bash scripts/build-hub-images.sh
# Clean build artifacts
clean:
	find . -type d -name "__pycache__" -exec rm -rf {} + 2>/dev/null || true

View File

@@ -1,4 +1,4 @@
-<h1 align="center"> FuzzForge OSS</h1>
+<h1 align="center"> FuzzForge AI</h1>
<h3 align="center">AI-Powered Security Research Orchestration via MCP</h3>
<p align="center">
@@ -26,13 +26,13 @@
---
-> 🚧 **FuzzForge OSS is under active development.** Expect breaking changes and new features!
+> 🚧 **FuzzForge AI is under active development.** Expect breaking changes and new features!
---
## 🚀 Overview
-**FuzzForge OSS** is an open-source runtime that enables AI agents (GitHub Copilot, Claude, etc.) to orchestrate security research workflows through the **Model Context Protocol (MCP)**.
+**FuzzForge AI** is an open-source runtime that enables AI agents (GitHub Copilot, Claude, etc.) to orchestrate security research workflows through the **Model Context Protocol (MCP)**.
### The Core: Modules
@@ -43,7 +43,7 @@ At the heart of FuzzForge are **modules** - containerized security tools that AI
- **🔗 Composable**: Chain modules together into automated workflows
- **📦 Extensible**: Build custom modules with the Python SDK
-The OSS runtime handles module discovery, execution, and result collection. Security modules (developed separately) provide the actual security tooling - from static analyzers to fuzzers to crash triagers.
+FuzzForge AI handles module discovery, execution, and result collection. Security modules (developed separately) provide the actual security tooling - from static analyzers to fuzzers to crash triagers.
Instead of manually running security tools, describe what you want and let your AI assistant handle it.
@@ -171,11 +171,11 @@ FuzzForge modules are containerized security tools that AI agents can orchestrat
### Module Ecosystem
-| | FuzzForge OSS | FuzzForge Enterprise Modules |
+| | FuzzForge AI | FuzzForge Enterprise Modules |
|---|---|---|
| **What** | Runtime & MCP server | Security research modules |
| **License** | Apache 2.0 | BSL 1.1 (Business Source License) |
-| **Compatibility** | ✅ Runs any compatible module | ✅ Works with OSS runtime |
+| **Compatibility** | ✅ Runs any compatible module | ✅ Works with FuzzForge AI |
**Enterprise modules** are developed separately and provide production-ready security tooling:
@@ -187,7 +187,7 @@ FuzzForge modules are containerized security tools that AI agents can orchestrat
| 🔐 **Vulnerability Detection** | Pattern Matcher, Taint Analyzer | Security vulnerability scanning |
| 📝 **Reporting** | Report Generator, SARIF Exporter | Automated security report generation |
-> 💡 **Build your own modules!** The FuzzForge SDK allows you to create custom modules that integrate seamlessly with the OSS runtime. See [Creating Custom Modules](#-creating-custom-modules).
+> 💡 **Build your own modules!** The FuzzForge SDK allows you to create custom modules that integrate seamlessly with FuzzForge AI. See [Creating Custom Modules](#-creating-custom-modules).
### Execution Modes
@@ -259,6 +259,14 @@ fuzzforge_ai/
---
+## 🗺️ What's Next
+**[MCP Security Hub](https://github.com/FuzzingLabs/mcp-security-hub) integration** — Bridge 175+ offensive security tools (Nmap, Nuclei, Ghidra, and more) into FuzzForge workflows, all orchestrated by AI agents.
+See [ROADMAP.md](ROADMAP.md) for the full roadmap.
---
## 🤝 Contributing
We welcome contributions from the community!

View File

@@ -1,6 +1,6 @@
-# FuzzForge OSS Roadmap
+# FuzzForge AI Roadmap
-This document outlines the planned features and development direction for FuzzForge OSS.
+This document outlines the planned features and development direction for FuzzForge AI.
---

View File

@@ -1,6 +1,6 @@
-# FuzzForge OSS Usage Guide
+# FuzzForge AI Usage Guide
-This guide covers everything you need to know to get started with FuzzForge OSS - from installation to running your first security research workflow with AI.
+This guide covers everything you need to know to get started with FuzzForge AI - from installation to running your first security research workflow with AI.
> **FuzzForge is designed to be used with AI agents** (GitHub Copilot, Claude, etc.) via MCP.
> The CLI is available for advanced users but the primary experience is through natural language interaction with your AI assistant.
@@ -31,8 +31,8 @@ This guide covers everything you need to know to get started with FuzzForge OSS
```bash
# 1. Clone and install
-git clone https://github.com/FuzzingLabs/fuzzforge-oss.git
-cd fuzzforge-oss
+git clone https://github.com/FuzzingLabs/fuzzforge_ai.git
+cd fuzzforge_ai
uv sync
# 2. Build the module images (one-time setup)
@@ -57,7 +57,7 @@ uv run fuzzforge mcp install claude-code # For Claude Code CLI
## Prerequisites
-Before installing FuzzForge OSS, ensure you have:
+Before installing FuzzForge AI, ensure you have:
- **Python 3.12+** - [Download Python](https://www.python.org/downloads/)
- **uv** package manager - [Install uv](https://docs.astral.sh/uv/)
@@ -95,8 +95,8 @@ sudo usermod -aG docker $USER
### 1. Clone the Repository
```bash
-git clone https://github.com/FuzzingLabs/fuzzforge-oss.git
-cd fuzzforge-oss
+git clone https://github.com/FuzzingLabs/fuzzforge_ai.git
+cd fuzzforge_ai
```
### 2. Install Dependencies
@@ -122,7 +122,7 @@ FuzzForge modules are containerized security tools. After cloning, you need to b
### Build All Modules
```bash
-# From the fuzzforge-oss directory
+# From the fuzzforge_ai directory
make build-modules
```
@@ -169,7 +169,7 @@ uv run fuzzforge mcp install copilot
The command auto-detects everything:
- **FuzzForge root** - Where FuzzForge is installed
-- **Modules path** - Defaults to `fuzzforge-oss/fuzzforge-modules`
+- **Modules path** - Defaults to `fuzzforge_ai/fuzzforge-modules`
- **Docker socket** - Auto-detects `/var/run/docker.sock`
**Optional overrides** (usually not needed):

View File

@@ -1,13 +1,12 @@
[project]
name = "fuzzforge-cli"
version = "0.0.1"
-description = "FuzzForge CLI - Command-line interface for FuzzForge OSS."
+description = "FuzzForge CLI - Command-line interface for FuzzForge AI."
authors = []
readme = "README.md"
requires-python = ">=3.14"
dependencies = [
"fuzzforge-runner==0.0.1",
-"fuzzforge-types==0.0.1",
"rich>=14.0.0",
"typer==0.20.1",
]
@@ -27,4 +26,3 @@ fuzzforge = "fuzzforge_cli.__main__:main"
[tool.uv.sources]
fuzzforge-runner = { workspace = true }
-fuzzforge-types = { workspace = true }

View File

@@ -12,7 +12,7 @@ from fuzzforge_cli.context import Context
application: Typer = Typer(
name="fuzzforge",
-help="FuzzForge OSS - Security research orchestration platform.",
+help="FuzzForge AI - Security research orchestration platform.",
)
@@ -62,7 +62,7 @@ def main(
] = "",
context: TyperContext = None,  # type: ignore[assignment]
) -> None:
-"""FuzzForge OSS - Security research orchestration platform.
+"""FuzzForge AI - Security research orchestration platform.
Execute security research modules in isolated containers.

View File

@@ -185,6 +185,8 @@ def _generate_mcp_config(
"FUZZFORGE_ENGINE__TYPE": engine_type,
"FUZZFORGE_ENGINE__GRAPHROOT": str(graphroot),
"FUZZFORGE_ENGINE__RUNROOT": str(runroot),
+"FUZZFORGE_HUB__ENABLED": "true",
+"FUZZFORGE_HUB__CONFIG_PATH": str(fuzzforge_root / "hub-config.json"),
},
}
@@ -454,6 +456,7 @@ def install(
console.print(f" Modules Path: {resolved_modules}")
console.print(f" Engine: {engine}")
console.print(f" Socket: {socket}")
+console.print(f" Hub Config: {fuzzforge_root / 'hub-config.json'}")
console.print()
console.print("[bold]Next steps:[/bold]")
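For reference, the two hub settings this hunk injects into the generated MCP server environment can be sketched as a small helper. This is an editorial illustration, not code from the changeset: the real `_generate_mcp_config` inlines these entries, and the `hub_env` name is made up here. The double-underscore keys follow the nested-settings convention already used by the `FUZZFORGE_ENGINE__*` variables above.

```python
from pathlib import Path


def hub_env(fuzzforge_root: Path) -> dict[str, str]:
    """Build the hub-related environment entries added by this diff.

    Mirrors the two keys inserted into the MCP config's env dict:
    hub support is switched on, and the config file is expected at
    <fuzzforge_root>/hub-config.json.
    """
    return {
        "FUZZFORGE_HUB__ENABLED": "true",
        "FUZZFORGE_HUB__CONFIG_PATH": str(fuzzforge_root / "hub-config.json"),
    }


print(hub_env(Path("/opt/fuzzforge")))
```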

View File

@@ -6,7 +6,6 @@ authors = []
readme = "README.md"
requires-python = ">=3.14"
dependencies = [
-"fuzzforge-types==0.0.1",
"podman==5.6.0",
"pydantic==2.12.4",
"structlog>=24.0.0",
@@ -22,5 +21,4 @@ tests = [
"pytest==9.0.2",
]
[tool.uv.sources]
-fuzzforge-types = { workspace = true }

View File

@@ -2,7 +2,6 @@
This package provides:
- Sandbox engine abstractions (Podman, Docker)
- Storage abstractions (S3) - requires 'storage' extra
- Common exceptions
Example usage:
@@ -12,9 +11,6 @@ Example usage:
Podman,
PodmanConfiguration,
)
# For storage (requires boto3):
from fuzzforge_common.storage import Storage
"""
from fuzzforge_common.exceptions import FuzzForgeError
@@ -29,14 +25,6 @@ from fuzzforge_common.sandboxes import (
PodmanConfiguration,
)
# Storage exceptions are always available (no boto3 required)
from fuzzforge_common.storage.exceptions import (
FuzzForgeStorageError,
StorageConnectionError,
StorageDownloadError,
StorageUploadError,
)
__all__ = [
"AbstractFuzzForgeEngineConfiguration",
"AbstractFuzzForgeSandboxEngine",
@@ -44,11 +32,7 @@ __all__ = [
"DockerConfiguration",
"FuzzForgeError",
"FuzzForgeSandboxEngines",
"FuzzForgeStorageError",
"ImageInfo",
"Podman",
"PodmanConfiguration",
"StorageConnectionError",
"StorageDownloadError",
"StorageUploadError",
]

View File

@@ -0,0 +1,42 @@
"""FuzzForge Hub - Generic MCP server bridge.
This module provides a generic bridge to connect FuzzForge with any MCP server.
It allows AI agents to discover and execute tools from external MCP servers
(like mcp-security-hub) through the same interface as native FuzzForge modules.
The hub is server-agnostic: it doesn't hardcode any specific tools or servers.
Instead, it dynamically discovers tools by connecting to configured MCP servers
and calling their `list_tools()` method.
Supported transport types:
- docker: Run MCP server as a Docker container with stdio transport
- command: Run MCP server as a local process with stdio transport
- sse: Connect to a remote MCP server via Server-Sent Events
"""
from fuzzforge_common.hub.client import HubClient, HubClientError
from fuzzforge_common.hub.executor import HubExecutionResult, HubExecutor
from fuzzforge_common.hub.models import (
HubConfig,
HubServer,
HubServerConfig,
HubServerType,
HubTool,
HubToolParameter,
)
from fuzzforge_common.hub.registry import HubRegistry
__all__ = [
"HubClient",
"HubClientError",
"HubConfig",
"HubExecutionResult",
"HubExecutor",
"HubRegistry",
"HubServer",
"HubServerConfig",
"HubServerType",
"HubTool",
"HubToolParameter",
]

View File

@@ -0,0 +1,444 @@
"""MCP client for communicating with hub servers.
This module provides a generic MCP client that can connect to any MCP server
via stdio (docker/command) or SSE transport. It handles:
- Starting containers/processes for stdio transport
- Connecting to SSE endpoints
- Discovering tools via list_tools()
- Executing tools via call_tool()
"""
from __future__ import annotations
import asyncio
import json
import os
import subprocess
from contextlib import asynccontextmanager
from typing import TYPE_CHECKING, Any, cast
from fuzzforge_common.hub.models import (
HubServer,
HubServerConfig,
HubServerType,
HubTool,
)
if TYPE_CHECKING:
from asyncio.subprocess import Process
from collections.abc import AsyncGenerator
from structlog.stdlib import BoundLogger
def get_logger() -> BoundLogger:
"""Get structlog logger instance.
:returns: Configured structlog logger.
"""
from structlog import get_logger # noqa: PLC0415
return cast("BoundLogger", get_logger())
class HubClientError(Exception):
"""Error in hub client operations."""
class HubClient:
"""Client for communicating with MCP hub servers.
Supports stdio (via docker/command) and SSE transports.
Uses the MCP protocol for tool discovery and execution.
"""
#: Default timeout for operations.
DEFAULT_TIMEOUT: int = 30
def __init__(self, timeout: int = DEFAULT_TIMEOUT) -> None:
"""Initialize the hub client.
:param timeout: Default timeout for operations in seconds.
"""
self._timeout = timeout
async def discover_tools(self, server: HubServer) -> list[HubTool]:
"""Discover tools from a hub server.
Connects to the server, calls list_tools(), and returns
parsed HubTool instances.
:param server: Hub server to discover tools from.
:returns: List of discovered tools.
:raises HubClientError: If discovery fails.
"""
logger = get_logger()
config = server.config
logger.info("Discovering tools", server=config.name, type=config.type.value)
try:
async with self._connect(config) as (reader, writer):
# Initialize MCP session
await self._initialize_session(reader, writer, config.name)
# List tools
tools_data = await self._call_method(
reader,
writer,
"tools/list",
{},
)
# Parse tools
tools = []
for tool_data in tools_data.get("tools", []):
tool = HubTool.from_mcp_tool(
server_name=config.name,
name=tool_data["name"],
description=tool_data.get("description"),
input_schema=tool_data.get("inputSchema", {}),
)
tools.append(tool)
logger.info(
"Discovered tools",
server=config.name,
count=len(tools),
)
return tools
except Exception as e:
logger.error(
"Tool discovery failed",
server=config.name,
error=str(e),
)
raise HubClientError(f"Discovery failed for {config.name}: {e}") from e
async def execute_tool(
self,
server: HubServer,
tool_name: str,
arguments: dict[str, Any],
*,
timeout: int | None = None,
) -> dict[str, Any]:
"""Execute a tool on a hub server.
:param server: Hub server to execute on.
:param tool_name: Name of the tool to execute.
:param arguments: Tool arguments.
:param timeout: Execution timeout (uses default if None).
:returns: Tool execution result.
:raises HubClientError: If execution fails.
"""
logger = get_logger()
config = server.config
exec_timeout = timeout or self._timeout
logger.info(
"Executing hub tool",
server=config.name,
tool=tool_name,
timeout=exec_timeout,
)
try:
async with self._connect(config) as (reader, writer):
# Initialize MCP session
await self._initialize_session(reader, writer, config.name)
# Call tool
result = await asyncio.wait_for(
self._call_method(
reader,
writer,
"tools/call",
{"name": tool_name, "arguments": arguments},
),
timeout=exec_timeout,
)
logger.info(
"Tool execution completed",
server=config.name,
tool=tool_name,
)
return result
except asyncio.TimeoutError as e:
logger.error(
"Tool execution timed out",
server=config.name,
tool=tool_name,
timeout=exec_timeout,
)
raise HubClientError(
f"Execution timed out for {config.name}:{tool_name}"
) from e
except Exception as e:
logger.error(
"Tool execution failed",
server=config.name,
tool=tool_name,
error=str(e),
)
raise HubClientError(
f"Execution failed for {config.name}:{tool_name}: {e}"
) from e
@asynccontextmanager
async def _connect(
self,
config: HubServerConfig,
) -> AsyncGenerator[tuple[asyncio.StreamReader, asyncio.StreamWriter], None]:
"""Connect to an MCP server.
:param config: Server configuration.
:yields: Tuple of (reader, writer) for communication.
"""
if config.type == HubServerType.DOCKER:
async with self._connect_docker(config) as streams:
yield streams
elif config.type == HubServerType.COMMAND:
async with self._connect_command(config) as streams:
yield streams
elif config.type == HubServerType.SSE:
async with self._connect_sse(config) as streams:
yield streams
else:
msg = f"Unsupported server type: {config.type}"
raise HubClientError(msg)
@asynccontextmanager
async def _connect_docker(
self,
config: HubServerConfig,
) -> AsyncGenerator[tuple[asyncio.StreamReader, asyncio.StreamWriter], None]:
"""Connect to a Docker-based MCP server.
:param config: Server configuration with image name.
:yields: Tuple of (reader, writer) for stdio communication.
"""
if not config.image:
msg = f"Docker image not specified for server '{config.name}'"
raise HubClientError(msg)
# Build docker command
cmd = ["docker", "run", "-i", "--rm"]
# Add capabilities
for cap in config.capabilities:
cmd.extend(["--cap-add", cap])
# Add volumes
for volume in config.volumes:
cmd.extend(["-v", os.path.expanduser(volume)])
# Add environment variables
for key, value in config.environment.items():
cmd.extend(["-e", f"{key}={value}"])
cmd.append(config.image)
process: Process = await asyncio.create_subprocess_exec(
*cmd,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
try:
if process.stdin is None or process.stdout is None:
msg = "Failed to get process streams"
raise HubClientError(msg)
# Create asyncio streams from process pipes
reader = process.stdout
writer = process.stdin
yield reader, writer # type: ignore[misc]
finally:
process.terminate()
try:
await asyncio.wait_for(process.wait(), timeout=5)
except asyncio.TimeoutError:
process.kill()
@asynccontextmanager
async def _connect_command(
self,
config: HubServerConfig,
) -> AsyncGenerator[tuple[asyncio.StreamReader, asyncio.StreamWriter], None]:
"""Connect to a command-based MCP server.
:param config: Server configuration with command.
:yields: Tuple of (reader, writer) for stdio communication.
"""
if not config.command:
msg = f"Command not specified for server '{config.name}'"
raise HubClientError(msg)
# Set up environment
env = dict(config.environment) if config.environment else None
process: Process = await asyncio.create_subprocess_exec(
*config.command,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
env=env,
)
try:
if process.stdin is None or process.stdout is None:
msg = "Failed to get process streams"
raise HubClientError(msg)
reader = process.stdout
writer = process.stdin
yield reader, writer # type: ignore[misc]
finally:
process.terminate()
try:
await asyncio.wait_for(process.wait(), timeout=5)
except asyncio.TimeoutError:
process.kill()
@asynccontextmanager
async def _connect_sse(
self,
config: HubServerConfig,
) -> AsyncGenerator[tuple[asyncio.StreamReader, asyncio.StreamWriter], None]:
"""Connect to an SSE-based MCP server.
:param config: Server configuration with URL.
:yields: Tuple of (reader, writer) for SSE communication.
"""
# SSE support requires additional dependencies
# For now, raise not implemented
msg = "SSE transport not yet implemented"
raise NotImplementedError(msg)
async def _initialize_session(
self,
reader: asyncio.StreamReader,
writer: asyncio.StreamWriter,
server_name: str,
) -> dict[str, Any]:
"""Initialize MCP session with the server.
:param reader: Stream reader.
:param writer: Stream writer.
:param server_name: Server name for logging.
:returns: Server capabilities.
"""
# Send initialize request
result = await self._call_method(
reader,
writer,
"initialize",
{
"protocolVersion": "2024-11-05",
"capabilities": {},
"clientInfo": {
"name": "fuzzforge-hub",
"version": "0.1.0",
},
},
)
# Send initialized notification
await self._send_notification(reader, writer, "notifications/initialized", {})
return result
async def _call_method(
self,
reader: asyncio.StreamReader,
writer: asyncio.StreamWriter,
method: str,
params: dict[str, Any],
) -> dict[str, Any]:
"""Call an MCP method.
:param reader: Stream reader.
:param writer: Stream writer.
:param method: Method name.
:param params: Method parameters.
:returns: Method result.
"""
# Create JSON-RPC request
request = {
"jsonrpc": "2.0",
"id": 1,
"method": method,
"params": params,
}
# Send request
request_line = json.dumps(request) + "\n"
writer.write(request_line.encode())
await writer.drain()
# Read response
response_line = await asyncio.wait_for(
reader.readline(),
timeout=self._timeout,
)
if not response_line:
msg = "Empty response from server"
raise HubClientError(msg)
response = json.loads(response_line.decode())
if "error" in response:
error = response["error"]
msg = f"MCP error: {error.get('message', 'Unknown error')}"
raise HubClientError(msg)
return response.get("result", {})
async def _send_notification(
self,
reader: asyncio.StreamReader,
writer: asyncio.StreamWriter,
method: str,
params: dict[str, Any],
) -> None:
"""Send an MCP notification (no response expected).
:param reader: Stream reader (unused but kept for consistency).
:param writer: Stream writer.
:param method: Notification method name.
:param params: Notification parameters.
"""
# Create JSON-RPC notification (no id)
notification = {
"jsonrpc": "2.0",
"method": method,
"params": params,
}
notification_line = json.dumps(notification) + "\n"
writer.write(notification_line.encode())
await writer.drain()
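The client above frames all MCP traffic as newline-delimited JSON-RPC over the stdio pipe. A minimal standalone sketch of that framing, mirroring `_call_method`'s request/response handling (the helper names are illustrative; error surfacing uses a plain `RuntimeError` in place of `HubClientError`):

```python
import json


def frame_request(method: str, params: dict, request_id: int = 1) -> bytes:
    """Encode a JSON-RPC 2.0 request as one JSON object per line (JSONL)."""
    request = {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}
    return (json.dumps(request) + "\n").encode()


def parse_response(line: bytes) -> dict:
    """Decode a one-line JSON-RPC response; raise on an error payload."""
    response = json.loads(line.decode())
    if "error" in response:
        raise RuntimeError(response["error"].get("message", "Unknown error"))
    return response.get("result", {})


raw = frame_request("tools/list", {})
# A server would answer on its stdout with a single line such as:
reply = b'{"jsonrpc": "2.0", "id": 1, "result": {"tools": []}}\n'
print(parse_response(reply))  # -> {'tools': []}
```

Notifications (like `notifications/initialized`) use the same framing but omit the `id` field, so no response line is read back.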

View File

@@ -0,0 +1,334 @@
"""Hub executor for managing MCP server lifecycle and tool execution.
This module provides a high-level interface for:
- Discovering tools from all registered hub servers
- Executing tools with proper error handling
- Managing the lifecycle of hub operations
"""
from __future__ import annotations
from pathlib import Path
from typing import TYPE_CHECKING, Any, cast
from fuzzforge_common.hub.client import HubClient, HubClientError
from fuzzforge_common.hub.models import HubServer, HubServerConfig, HubTool
from fuzzforge_common.hub.registry import HubRegistry
if TYPE_CHECKING:
from structlog.stdlib import BoundLogger
def get_logger() -> BoundLogger:
"""Get structlog logger instance.
:returns: Configured structlog logger.
"""
from structlog import get_logger # noqa: PLC0415
return cast("BoundLogger", get_logger())
class HubExecutionResult:
"""Result of a hub tool execution."""
def __init__(
self,
*,
success: bool,
server_name: str,
tool_name: str,
result: dict[str, Any] | None = None,
error: str | None = None,
) -> None:
"""Initialize execution result.
:param success: Whether execution succeeded.
:param server_name: Name of the hub server.
:param tool_name: Name of the executed tool.
:param result: Tool execution result data.
:param error: Error message if execution failed.
"""
self.success = success
self.server_name = server_name
self.tool_name = tool_name
self.result = result or {}
self.error = error
@property
def identifier(self) -> str:
"""Get full tool identifier."""
return f"hub:{self.server_name}:{self.tool_name}"
def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary.
:returns: Dictionary representation.
"""
return {
"success": self.success,
"identifier": self.identifier,
"server": self.server_name,
"tool": self.tool_name,
"result": self.result,
"error": self.error,
}
class HubExecutor:
"""Executor for hub server operations.
Provides high-level methods for discovering and executing
tools from hub servers.
"""
#: Hub registry instance.
_registry: HubRegistry
#: MCP client instance.
_client: HubClient
def __init__(
self,
config_path: Path | None = None,
timeout: int = 300,
) -> None:
"""Initialize the hub executor.
:param config_path: Path to hub-servers.json config file.
:param timeout: Default timeout for tool execution.
"""
self._registry = HubRegistry(config_path)
self._client = HubClient(timeout=timeout)
@property
def registry(self) -> HubRegistry:
"""Get the hub registry.
:returns: Hub registry instance.
"""
return self._registry
def add_server(self, config: HubServerConfig) -> HubServer:
"""Add a server to the registry.
:param config: Server configuration.
:returns: Created HubServer instance.
"""
return self._registry.add_server(config)
async def discover_all_tools(self) -> dict[str, list[HubTool]]:
"""Discover tools from all enabled servers.
:returns: Dict mapping server names to lists of discovered tools.
"""
logger = get_logger()
results: dict[str, list[HubTool]] = {}
for server in self._registry.enabled_servers:
try:
tools = await self._client.discover_tools(server)
self._registry.update_server_tools(server.name, tools)
results[server.name] = tools
except HubClientError as e:
logger.warning(
"Failed to discover tools",
server=server.name,
error=str(e),
)
self._registry.update_server_tools(server.name, [], error=str(e))
results[server.name] = []
return results
async def discover_server_tools(self, server_name: str) -> list[HubTool]:
"""Discover tools from a specific server.
:param server_name: Name of the server.
:returns: List of discovered tools.
:raises ValueError: If server not found.
"""
server = self._registry.get_server(server_name)
if not server:
msg = f"Server '{server_name}' not found"
raise ValueError(msg)
try:
tools = await self._client.discover_tools(server)
self._registry.update_server_tools(server_name, tools)
return tools
except HubClientError as e:
self._registry.update_server_tools(server_name, [], error=str(e))
raise
async def execute_tool(
self,
identifier: str,
arguments: dict[str, Any] | None = None,
*,
timeout: int | None = None,
) -> HubExecutionResult:
"""Execute a hub tool.
:param identifier: Tool identifier (hub:server:tool or server:tool).
:param arguments: Tool arguments.
:param timeout: Execution timeout.
:returns: Execution result.
"""
logger = get_logger()
arguments = arguments or {}
# Parse identifier and find tool
server, tool = self._registry.find_tool(identifier)
if not server or not tool:
# Try to parse as server:tool and discover
parts = identifier.replace("hub:", "").split(":")
if len(parts) == 2: # noqa: PLR2004
server_name, tool_name = parts
server = self._registry.get_server(server_name)
if server and not server.discovered:
# Try to discover tools first
try:
await self.discover_server_tools(server_name)
tool = server.get_tool(tool_name)
except HubClientError:
pass
if server and not tool:
# Tool not found, but server exists - try to execute anyway
# The server might have the tool even if discovery failed
tool_name_to_use = tool_name
else:
tool_name_to_use = tool.name if tool else ""
if not server:
return HubExecutionResult(
success=False,
server_name=server_name,
tool_name=tool_name,
error=f"Server '{server_name}' not found",
)
# Execute even if tool wasn't discovered (server might still have it)
try:
result = await self._client.execute_tool(
server,
tool_name_to_use or tool_name,
arguments,
timeout=timeout,
)
return HubExecutionResult(
success=True,
server_name=server.name,
tool_name=tool_name_to_use or tool_name,
result=result,
)
except HubClientError as e:
return HubExecutionResult(
success=False,
server_name=server.name,
tool_name=tool_name_to_use or tool_name,
error=str(e),
)
else:
return HubExecutionResult(
success=False,
server_name="unknown",
tool_name=identifier,
error=f"Invalid tool identifier: {identifier}",
)
# Execute the tool
logger.info(
"Executing hub tool",
server=server.name,
tool=tool.name,
arguments=arguments,
)
try:
result = await self._client.execute_tool(
server,
tool.name,
arguments,
timeout=timeout,
)
return HubExecutionResult(
success=True,
server_name=server.name,
tool_name=tool.name,
result=result,
)
except HubClientError as e:
return HubExecutionResult(
success=False,
server_name=server.name,
tool_name=tool.name,
error=str(e),
)
def list_servers(self) -> list[dict[str, Any]]:
"""List all registered servers with their status.
:returns: List of server info dicts.
"""
servers = []
for server in self._registry.servers:
servers.append({
"name": server.name,
"identifier": server.identifier,
"type": server.config.type.value,
"enabled": server.config.enabled,
"category": server.config.category,
"description": server.config.description,
"discovered": server.discovered,
"tool_count": len(server.tools),
"error": server.discovery_error,
})
return servers
def list_tools(self) -> list[dict[str, Any]]:
"""List all discovered tools.
:returns: List of tool info dicts.
"""
tools = []
for tool in self._registry.get_all_tools():
tools.append({
"identifier": tool.identifier,
"name": tool.name,
"server": tool.server_name,
"description": tool.description,
"parameters": [p.model_dump() for p in tool.parameters],
})
return tools
def get_tool_schema(self, identifier: str) -> dict[str, Any] | None:
"""Get the JSON Schema for a tool's input.
:param identifier: Tool identifier.
:returns: JSON Schema dict or None if not found.
"""
_, tool = self._registry.find_tool(identifier)
if tool:
return tool.input_schema
return None

View File

@@ -0,0 +1,284 @@
"""Data models for FuzzForge Hub.
This module defines the Pydantic models used to represent MCP servers
and their tools in the hub registry.
"""
from __future__ import annotations
from enum import Enum
from typing import Any
from pydantic import BaseModel, Field
class HubServerType(str, Enum):
"""Type of MCP server connection."""
#: Run as Docker container with stdio transport.
DOCKER = "docker"
#: Run as local command/process with stdio transport.
COMMAND = "command"
#: Connect via Server-Sent Events (HTTP).
SSE = "sse"
class HubServerConfig(BaseModel):
"""Configuration for an MCP server in the hub.
This defines how to connect to an MCP server, not what tools it provides.
Tools are discovered dynamically at runtime.
"""
#: Unique identifier for this server (e.g., "nmap", "nuclei").
name: str = Field(description="Unique server identifier")
#: Human-readable description of the server.
description: str | None = Field(
default=None,
description="Human-readable description",
)
#: Type of connection to use.
type: HubServerType = Field(description="Connection type")
#: Docker image name (for type=docker).
image: str | None = Field(
default=None,
description="Docker image name (for docker type)",
)
#: Command to run (for type=command).
command: list[str] | None = Field(
default=None,
description="Command and args (for command type)",
)
#: URL endpoint (for type=sse).
url: str | None = Field(
default=None,
description="SSE endpoint URL (for sse type)",
)
#: Environment variables to pass to the server.
environment: dict[str, str] = Field(
default_factory=dict,
description="Environment variables",
)
#: Docker capabilities to add (e.g., ["NET_RAW"] for nmap).
capabilities: list[str] = Field(
default_factory=list,
description="Docker capabilities to add",
)
#: Volume mounts for Docker (e.g., ["/host/path:/container/path:ro"]).
volumes: list[str] = Field(
default_factory=list,
description="Docker volume mounts",
)
#: Whether this server is enabled.
enabled: bool = Field(
default=True,
description="Whether server is enabled",
)
#: Category for grouping (e.g., "reconnaissance", "web-security").
category: str | None = Field(
default=None,
description="Category for grouping servers",
)
class HubToolParameter(BaseModel):
"""A parameter for an MCP tool.
Parsed from the tool's JSON Schema inputSchema.
"""
#: Parameter name.
name: str
#: Parameter type (string, integer, boolean, array, object).
type: str
#: Human-readable description.
description: str | None = None
#: Whether this parameter is required.
required: bool = False
#: Default value if any.
default: Any = None
#: Enum values if constrained.
enum: list[Any] | None = None
class HubTool(BaseModel):
"""An MCP tool discovered from a hub server.
This is populated by calling `list_tools()` on the MCP server.
"""
#: Tool name as defined by the MCP server.
name: str = Field(description="Tool name from MCP server")
#: Human-readable description.
description: str | None = Field(
default=None,
description="Tool description",
)
#: Name of the hub server this tool belongs to.
server_name: str = Field(description="Parent server name")
#: Parsed parameters from inputSchema.
parameters: list[HubToolParameter] = Field(
default_factory=list,
description="Tool parameters",
)
#: Raw JSON Schema for the tool input.
input_schema: dict[str, Any] = Field(
default_factory=dict,
description="Raw JSON Schema from MCP",
)
@property
def identifier(self) -> str:
"""Get the full tool identifier (hub:server:tool)."""
return f"hub:{self.server_name}:{self.name}"
@classmethod
def from_mcp_tool(
cls,
server_name: str,
name: str,
description: str | None,
input_schema: dict[str, Any],
) -> HubTool:
"""Create a HubTool from MCP tool metadata.
:param server_name: Name of the parent hub server.
:param name: Tool name.
:param description: Tool description.
:param input_schema: JSON Schema for tool input.
:returns: HubTool instance.
"""
parameters = cls._parse_parameters(input_schema)
return cls(
name=name,
description=description,
server_name=server_name,
parameters=parameters,
input_schema=input_schema,
)
@staticmethod
def _parse_parameters(schema: dict[str, Any]) -> list[HubToolParameter]:
"""Parse parameters from JSON Schema.
:param schema: JSON Schema dict.
:returns: List of parsed parameters.
"""
parameters: list[HubToolParameter] = []
properties = schema.get("properties", {})
required_params = set(schema.get("required", []))
for name, prop in properties.items():
param = HubToolParameter(
name=name,
type=prop.get("type", "string"),
description=prop.get("description"),
required=name in required_params,
default=prop.get("default"),
enum=prop.get("enum"),
)
parameters.append(param)
return parameters
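The schema-to-parameter mapping done by `_parse_parameters` can be sketched standalone; the `inputSchema` below is a hypothetical tool schema, not one shipped by any real MCP server:

```python
# Hypothetical JSON Schema as an MCP server might return it for a tool.
schema = {
    "properties": {
        "target": {"type": "string", "description": "Host to scan"},
        "ports": {"type": "string", "default": "1-1024"},
    },
    "required": ["target"],
}

# Same logic as _parse_parameters: walk properties, mark required names.
required = set(schema.get("required", []))
params = [
    {
        "name": name,
        "type": prop.get("type", "string"),
        "required": name in required,
        "default": prop.get("default"),
    }
    for name, prop in schema.get("properties", {}).items()
]

print(params[0]["required"], params[1]["default"])  # True 1-1024
```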
class HubServer(BaseModel):
"""A hub server with its discovered tools.
Combines configuration with dynamically discovered tools.
"""
#: Server configuration.
config: HubServerConfig
#: Tools discovered from the server (populated at runtime).
tools: list[HubTool] = Field(
default_factory=list,
description="Discovered tools",
)
#: Whether tools have been discovered.
discovered: bool = Field(
default=False,
description="Whether tools have been discovered",
)
#: Error message if discovery failed.
discovery_error: str | None = Field(
default=None,
description="Error message if discovery failed",
)
@property
def name(self) -> str:
"""Get server name."""
return self.config.name
@property
def identifier(self) -> str:
"""Get server identifier for module listing."""
return f"hub:{self.config.name}"
def get_tool(self, tool_name: str) -> HubTool | None:
"""Get a tool by name.
:param tool_name: Name of the tool.
:returns: HubTool if found, None otherwise.
"""
for tool in self.tools:
if tool.name == tool_name:
return tool
return None
class HubConfig(BaseModel):
"""Configuration for the entire hub.
Loaded from hub-servers.json or similar config file.
"""
#: List of configured servers.
servers: list[HubServerConfig] = Field(
default_factory=list,
description="Configured MCP servers",
)
#: Default timeout for tool execution (seconds).
default_timeout: int = Field(
default=300,
description="Default execution timeout",
)
#: Whether to cache discovered tools.
cache_tools: bool = Field(
default=True,
description="Cache discovered tools",
)
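The models above are populated from a user-provided config file (the registry below loads `hub-servers.json` or similar). A minimal sketch of what such a file might contain, validated here with only stdlib `json`; the server entry and its values are illustrative assumptions, not defaults shipped with FuzzForge:

```python
import json

# Illustrative hub-servers.json content matching the HubConfig and
# HubServerConfig fields above (names and image are made up).
raw = """
{
  "servers": [
    {
      "name": "nmap",
      "type": "docker",
      "image": "example/nmap-mcp:latest",
      "capabilities": ["NET_RAW"],
      "category": "reconnaissance",
      "enabled": true
    }
  ],
  "default_timeout": 300,
  "cache_tools": true
}
"""

config = json.loads(raw)
# The registry only instantiates servers with enabled=True.
enabled = [s["name"] for s in config["servers"] if s.get("enabled", True)]
print(enabled)  # ['nmap']
```

In the real code this dict would go through `HubConfig.model_validate(data)`, which additionally enforces the field types declared above.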

View File

@@ -0,0 +1,258 @@
"""Hub registry for managing MCP server configurations.
The registry loads server configurations from a JSON file and provides
methods to access and manage them. It does not hardcode any specific
servers or tools - everything is configured by the user.
"""
from __future__ import annotations
import json
from pathlib import Path
from typing import TYPE_CHECKING, cast
from fuzzforge_common.hub.models import (
HubConfig,
HubServer,
HubServerConfig,
HubTool,
)
if TYPE_CHECKING:
from structlog.stdlib import BoundLogger
def get_logger() -> BoundLogger:
"""Get structlog logger instance.
:returns: Configured structlog logger.
"""
from structlog import get_logger # noqa: PLC0415
return cast("BoundLogger", get_logger())
class HubRegistry:
"""Registry for MCP hub servers.
Manages the configuration and state of hub servers.
Configurations are loaded from a JSON file.
"""
#: Loaded hub configuration.
_config: HubConfig
#: Server instances with discovered tools.
_servers: dict[str, HubServer]
#: Path to the configuration file.
_config_path: Path | None
def __init__(self, config_path: Path | str | None = None) -> None:
"""Initialize the hub registry.
:param config_path: Path to hub-servers.json config file.
If None, starts with empty configuration.
"""
if config_path is not None:
self._config_path = Path(config_path)
else:
self._config_path = None
self._servers = {}
self._config = HubConfig()
if self._config_path and self._config_path.exists():
self._load_config(self._config_path)
def _load_config(self, config_path: Path) -> None:
"""Load configuration from JSON file.
:param config_path: Path to config file.
"""
logger = get_logger()
try:
with config_path.open() as f:
data = json.load(f)
self._config = HubConfig.model_validate(data)
# Create server instances from config
for server_config in self._config.servers:
if server_config.enabled:
self._servers[server_config.name] = HubServer(
config=server_config,
)
logger.info(
"Loaded hub configuration",
path=str(config_path),
servers=len(self._servers),
)
except Exception as e:
logger.error(
"Failed to load hub configuration",
path=str(config_path),
error=str(e),
)
raise
def reload(self) -> None:
"""Reload configuration from file."""
if self._config_path and self._config_path.exists():
self._servers.clear()
self._load_config(self._config_path)
@property
def servers(self) -> list[HubServer]:
"""Get all registered servers.
:returns: List of hub servers.
"""
return list(self._servers.values())
@property
def enabled_servers(self) -> list[HubServer]:
"""Get all enabled servers.
:returns: List of enabled hub servers.
"""
return [s for s in self._servers.values() if s.config.enabled]
def get_server(self, name: str) -> HubServer | None:
"""Get a server by name.
:param name: Server name.
:returns: HubServer if found, None otherwise.
"""
return self._servers.get(name)
def add_server(self, config: HubServerConfig) -> HubServer:
"""Add a server to the registry.
:param config: Server configuration.
:returns: Created HubServer instance.
:raises ValueError: If server with same name exists.
"""
if config.name in self._servers:
msg = f"Server '{config.name}' already exists"
raise ValueError(msg)
server = HubServer(config=config)
self._servers[config.name] = server
self._config.servers.append(config)
get_logger().info("Added hub server", name=config.name, type=config.type)
return server
def remove_server(self, name: str) -> bool:
"""Remove a server from the registry.
:param name: Server name.
:returns: True if removed, False if not found.
"""
if name not in self._servers:
return False
del self._servers[name]
self._config.servers = [s for s in self._config.servers if s.name != name]
get_logger().info("Removed hub server", name=name)
return True
def save_config(self, path: Path | None = None) -> None:
"""Save current configuration to file.
:param path: Path to save to. Uses original path if None.
"""
save_path = path or self._config_path
if not save_path:
msg = "No config path specified"
raise ValueError(msg)
with save_path.open("w") as f:
json.dump(
self._config.model_dump(mode="json"),
f,
indent=2,
)
get_logger().info("Saved hub configuration", path=str(save_path))
def update_server_tools(
self,
server_name: str,
tools: list[HubTool],
*,
error: str | None = None,
) -> None:
"""Update discovered tools for a server.
Called by the hub client after tool discovery.
:param server_name: Server name.
:param tools: List of HubTool instances.
:param error: Error message if discovery failed.
"""
server = self._servers.get(server_name)
if not server:
return
if error:
server.discovered = False
server.discovery_error = error
server.tools = []
else:
server.discovered = True
server.discovery_error = None
server.tools = tools
def get_all_tools(self) -> list[HubTool]:
"""Get all discovered tools from all servers.
:returns: Flat list of all HubTool instances.
"""
tools = []
for server in self._servers.values():
if server.discovered:
tools.extend(server.tools)
return tools
def find_tool(self, identifier: str) -> tuple[HubServer | None, HubTool | None]:
"""Find a tool by its full identifier.
:param identifier: Full identifier (hub:server:tool or server:tool).
:returns: Tuple of (HubServer, HubTool) if found, (None, None) otherwise.
"""
# Parse identifier
parts = identifier.split(":")
if len(parts) == 3 and parts[0] == "hub": # noqa: PLR2004
# hub:server:tool format
server_name = parts[1]
tool_name = parts[2]
elif len(parts) == 2: # noqa: PLR2004
# server:tool format
server_name = parts[0]
tool_name = parts[1]
else:
return None, None
server = self._servers.get(server_name)
if not server:
return None, None
tool = server.get_tool(tool_name)
return server, tool

View File

@@ -272,6 +272,23 @@ class AbstractFuzzForgeSandboxEngine(ABC):
message: str = f"method 'read_file_from_container' is not implemented for class '{self.__class__.__name__}'"
raise NotImplementedError(message)
@abstractmethod
def tail_file_from_container(self, identifier: str, path: str, start_line: int = 1) -> str:
"""Read a file from a running container starting at a given line number.
Uses ``tail -n +{start_line}`` to avoid re-reading the entire file on
every poll. This is the preferred method for incremental reads of
append-only files such as ``stream.jsonl``.
:param identifier: Container identifier.
:param path: Path to file inside container.
:param start_line: 1-based line number to start reading from.
:returns: File contents from *start_line* onwards (may be empty).
"""
message: str = f"method 'tail_file_from_container' is not implemented for class '{self.__class__.__name__}'"
raise NotImplementedError(message)
@abstractmethod
def list_containers(self, all_containers: bool = True) -> list[dict]:
"""List containers.

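The incremental-read contract described above can be exercised against a local file instead of a container. This sketch substitutes a plain `subprocess` call for the `docker`/`podman exec` used by the concrete engines; the JSONL contents are illustrative:

```python
import os
import subprocess
import tempfile

def tail_from(path: str, start_line: int = 1) -> str:
    # Same shape as the engine implementations: `tail -n +N` returns the
    # file from line N (1-based); any failure maps to an empty string.
    result = subprocess.run(
        ["tail", "-n", f"+{start_line}", path],
        capture_output=True, text=True, check=False,
    )
    return result.stdout if result.returncode == 0 else ""

# Simulate an append-only stream.jsonl with three events.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    f.write('{"event": 1}\n{"event": 2}\n{"event": 3}\n')
    path = f.name

first = tail_from(path, start_line=1)   # full file on the first poll
cursor = first.count("\n") + 1          # next poll starts at line 4
rest = tail_from(path, start_line=cursor)
os.remove(path)
print(cursor, repr(rest))  # 4 ''
```

Keeping a 1-based cursor and re-running `tail -n +cursor` is what lets callers poll an append-only file without re-transferring lines they have already consumed.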
View File

@@ -389,6 +389,24 @@ class DockerCLI(AbstractFuzzForgeSandboxEngine):
return ""
return result.stdout
def tail_file_from_container(self, identifier: str, path: str, start_line: int = 1) -> str:
"""Read a file from a container starting at a given line number.
:param identifier: Container identifier.
:param path: Path to file in container.
:param start_line: 1-based line number to start reading from.
:returns: File contents from *start_line* onwards.
"""
result = self._run(
["exec", identifier, "tail", "-n", f"+{start_line}", path],
check=False,
)
if result.returncode != 0:
get_logger().debug("failed to tail file from container", path=path, start_line=start_line)
return ""
return result.stdout
def list_containers(self, all_containers: bool = True) -> list[dict]:
"""List containers.

View File

@@ -168,6 +168,11 @@ class Docker(AbstractFuzzForgeSandboxEngine):
message: str = "Docker engine read_file_from_container is not yet implemented"
raise NotImplementedError(message)
def tail_file_from_container(self, identifier: str, path: str, start_line: int = 1) -> str:
"""Read a file from a container starting at a given line number."""
message: str = "Docker engine tail_file_from_container is not yet implemented"
raise NotImplementedError(message)
def list_containers(self, all_containers: bool = True) -> list[dict]:
"""List containers."""
message: str = "Docker engine list_containers is not yet implemented"

View File

@@ -449,6 +449,24 @@ class PodmanCLI(AbstractFuzzForgeSandboxEngine):
return ""
return result.stdout
def tail_file_from_container(self, identifier: str, path: str, start_line: int = 1) -> str:
"""Read a file from a container starting at a given line number.
:param identifier: Container identifier.
:param path: Path to file in container.
:param start_line: 1-based line number to start reading from.
:returns: File contents from *start_line* onwards.
"""
result = self._run(
["exec", identifier, "tail", "-n", f"+{start_line}", path],
check=False,
)
if result.returncode != 0:
get_logger().debug("failed to tail file from container", path=path, start_line=start_line)
return ""
return result.stdout
def list_containers(self, all_containers: bool = True) -> list[dict]:
"""List containers.

View File

@@ -475,6 +475,30 @@ class Podman(AbstractFuzzForgeSandboxEngine):
return ""
return stdout.decode("utf-8", errors="replace") if stdout else ""
def tail_file_from_container(self, identifier: str, path: str, start_line: int = 1) -> str:
"""Read a file from a container starting at a given line number.
:param identifier: Container identifier.
:param path: Path to file inside container.
:param start_line: 1-based line number to start reading from.
:returns: File contents from *start_line* onwards.
"""
client: PodmanClient = self.get_client()
with client:
container: Container = client.containers.get(key=identifier)
(status, (stdout, stderr)) = container.exec_run(
cmd=["tail", "-n", f"+{start_line}", path],
demux=True,
)
if status != 0:
error_msg = stderr.decode("utf-8", errors="replace") if stderr else "File not found"
get_logger().debug(
"failed to tail file from container", path=path, start_line=start_line, error=error_msg,
)
return ""
return stdout.decode("utf-8", errors="replace") if stdout else ""
def list_containers(self, all_containers: bool = True) -> list[dict]:
"""List containers.

View File

@@ -1,19 +0,0 @@
"""FuzzForge storage abstractions.
Storage class requires boto3. Import it explicitly:
from fuzzforge_common.storage.s3 import Storage
"""
from fuzzforge_common.storage.exceptions import (
FuzzForgeStorageError,
StorageConnectionError,
StorageDownloadError,
StorageUploadError,
)
__all__ = [
"FuzzForgeStorageError",
"StorageConnectionError",
"StorageDownloadError",
"StorageUploadError",
]

View File

@@ -1,20 +0,0 @@
from pydantic import BaseModel
from fuzzforge_common.storage.s3 import Storage
class StorageConfiguration(BaseModel):
"""TODO."""
#: S3 endpoint URL (e.g., "http://localhost:9000" for MinIO).
endpoint: str
#: S3 access key ID for authentication.
access_key: str
#: S3 secret access key for authentication.
secret_key: str
def into_storage(self) -> Storage:
"""TODO."""
return Storage(endpoint=self.endpoint, access_key=self.access_key, secret_key=self.secret_key)

View File

@@ -1,108 +0,0 @@
from fuzzforge_common.exceptions import FuzzForgeError
class FuzzForgeStorageError(FuzzForgeError):
"""Base exception for all storage-related errors.
Raised when storage operations (upload, download, connection) fail
during workflow execution.
"""
class StorageConnectionError(FuzzForgeStorageError):
"""Failed to connect to storage service.
:param endpoint: The storage endpoint that failed to connect.
:param reason: The underlying exception message.
"""
def __init__(self, endpoint: str, reason: str) -> None:
"""Initialize storage connection error.
:param endpoint: The storage endpoint that failed to connect.
:param reason: The underlying exception message.
"""
FuzzForgeStorageError.__init__(
self,
f"Failed to connect to storage at {endpoint}: {reason}",
)
self.endpoint = endpoint
self.reason = reason
class StorageUploadError(FuzzForgeStorageError):
"""Failed to upload object to storage.
:param bucket: The target bucket name.
:param object_key: The target object key.
:param reason: The underlying exception message.
"""
def __init__(self, bucket: str, object_key: str, reason: str) -> None:
"""Initialize storage upload error.
:param bucket: The target bucket name.
:param object_key: The target object key.
:param reason: The underlying exception message.
"""
FuzzForgeStorageError.__init__(
self,
f"Failed to upload to {bucket}/{object_key}: {reason}",
)
self.bucket = bucket
self.object_key = object_key
self.reason = reason
class StorageDownloadError(FuzzForgeStorageError):
"""Failed to download object from storage.
:param bucket: The source bucket name.
:param object_key: The source object key.
:param reason: The underlying exception message.
"""
def __init__(self, bucket: str, object_key: str, reason: str) -> None:
"""Initialize storage download error.
:param bucket: The source bucket name.
:param object_key: The source object key.
:param reason: The underlying exception message.
"""
FuzzForgeStorageError.__init__(
self,
f"Failed to download from {bucket}/{object_key}: {reason}",
)
self.bucket = bucket
self.object_key = object_key
self.reason = reason
class StorageDeletionError(FuzzForgeStorageError):
"""Failed to delete bucket from storage.
:param bucket: The bucket name that failed to delete.
:param reason: The underlying exception message.
"""
def __init__(self, bucket: str, reason: str) -> None:
"""Initialize storage deletion error.
:param bucket: The bucket name that failed to delete.
:param reason: The underlying exception message.
"""
FuzzForgeStorageError.__init__(
self,
f"Failed to delete bucket {bucket}: {reason}",
)
self.bucket = bucket
self.reason = reason

View File

@@ -1,351 +0,0 @@
from __future__ import annotations
from pathlib import Path, PurePath
from tarfile import TarInfo
from tarfile import open as Archive # noqa: N812
from tempfile import NamedTemporaryFile
from typing import TYPE_CHECKING, Any, cast
from botocore.exceptions import ClientError
from fuzzforge_common.storage.exceptions import StorageDeletionError, StorageDownloadError, StorageUploadError
if TYPE_CHECKING:
from botocore.client import BaseClient
from structlog.stdlib import BoundLogger
def get_logger() -> BoundLogger:
"""Get structlog logger instance.
Uses deferred import pattern required by Temporal for serialization.
:returns: Configured structlog logger.
"""
from structlog import get_logger # noqa: PLC0415 (required by temporal)
return cast("BoundLogger", get_logger())
class Storage:
"""S3-compatible storage backend implementation using boto3.
Supports MinIO, AWS S3, and other S3-compatible storage services.
Uses error-driven approach (EAFP) to handle bucket creation and
avoid race conditions.
"""
#: S3 endpoint URL (e.g., "http://localhost:9000" for MinIO).
__endpoint: str
#: S3 access key ID for authentication.
__access_key: str
#: S3 secret access key for authentication.
__secret_key: str
def __init__(self, endpoint: str, access_key: str, secret_key: str) -> None:
"""Initialize an instance of the class.
:param endpoint: TODO.
:param access_key: TODO.
:param secret_key: TODO.
"""
self.__endpoint = endpoint
self.__access_key = access_key
self.__secret_key = secret_key
def _get_client(self) -> BaseClient:
"""Create boto3 S3 client with configured credentials.
Uses deferred import pattern required by Temporal for serialization.
:returns: Configured boto3 S3 client.
"""
import boto3 # noqa: PLC0415 (required by temporal)
return boto3.client(
"s3",
endpoint_url=self.__endpoint,
aws_access_key_id=self.__access_key,
aws_secret_access_key=self.__secret_key,
)
def create_bucket(self, bucket: str) -> None:
"""Create the S3 bucket if it does not already exist.
Idempotent operation - succeeds if bucket already exists and is owned by you.
Fails if bucket exists but is owned by another account.
:raise ClientError: If bucket creation fails (permissions, name conflicts, etc.).
"""
logger = get_logger()
client = self._get_client()
logger.debug("creating_bucket", bucket=bucket)
try:
client.create_bucket(Bucket=bucket)
logger.info("bucket_created", bucket=bucket)
except ClientError as e:
error_code = e.response.get("Error", {}).get("Code")
# Bucket already exists and we own it - this is fine
if error_code in ("BucketAlreadyOwnedByYou", "BucketAlreadyExists"):
logger.debug(
"bucket_already_exists",
bucket=bucket,
error_code=error_code,
)
return
# Other errors are actual failures
logger.exception(
"bucket_creation_failed",
bucket=bucket,
error_code=error_code,
)
raise
def delete_bucket(self, bucket: str) -> None:
"""Delete an S3 bucket and all its contents.
Idempotent operation - succeeds if bucket doesn't exist.
Handles pagination for buckets with many objects.
:param bucket: The name of the bucket to delete.
:raises StorageDeletionError: If bucket deletion fails.
"""
logger = get_logger()
client = self._get_client()
logger.debug("deleting_bucket", bucket=bucket)
try:
# S3 requires bucket to be empty before deletion
# Delete all objects first with pagination support
continuation_token = None
while True:
# List objects (up to 1000 per request)
list_params = {"Bucket": bucket}
if continuation_token:
list_params["ContinuationToken"] = continuation_token
response = client.list_objects_v2(**list_params)
# Delete objects if any exist (max 1000 per delete_objects call)
if "Contents" in response:
objects = [{"Key": obj["Key"]} for obj in response["Contents"]]
client.delete_objects(Bucket=bucket, Delete={"Objects": objects})
logger.debug("deleted_objects", bucket=bucket, count=len(objects))
# Check if more objects exist
if not response.get("IsTruncated", False):
break
continuation_token = response.get("NextContinuationToken")
# Now delete the empty bucket
client.delete_bucket(Bucket=bucket)
logger.info("bucket_deleted", bucket=bucket)
except ClientError as error:
error_code = error.response.get("Error", {}).get("Code")
# Idempotent - bucket already doesn't exist
if error_code == "NoSuchBucket":
logger.debug("bucket_does_not_exist", bucket=bucket)
return
# Other errors are actual failures
logger.exception(
"bucket_deletion_failed",
bucket=bucket,
error_code=error_code,
)
raise StorageDeletionError(bucket=bucket, reason=str(error)) from error
def upload_file(
self,
bucket: str,
file: Path,
key: str,
) -> None:
"""Upload archive file to S3 storage at specified object key.
Assumes bucket exists. Fails gracefully if bucket or other resources missing.
:param bucket: TODO.
:param file: Local path to the archive file to upload.
:param key: Object key (path) in S3 where file should be uploaded.
:raise StorageUploadError: If upload operation fails.
"""
from boto3.exceptions import S3UploadFailedError # noqa: PLC0415 (required by 'temporal' at runtime)
logger = get_logger()
client = self._get_client()
logger.debug(
"uploading_archive_to_storage",
bucket=bucket,
object_key=key,
archive_path=str(file),
)
try:
client.upload_file(
Filename=str(file),
Bucket=bucket,
Key=key,
)
logger.info(
"archive_uploaded_successfully",
bucket=bucket,
object_key=key,
)
except S3UploadFailedError as e:
# Check if this is a NoSuchBucket error - create bucket and retry
if "NoSuchBucket" in str(e):
logger.info(
"bucket_does_not_exist_creating",
bucket=bucket,
)
self.create_bucket(bucket=bucket)
# Retry upload after creating bucket
try:
client.upload_file(
Filename=str(file),
Bucket=bucket,
Key=key,
)
logger.info(
"archive_uploaded_successfully_after_bucket_creation",
bucket=bucket,
object_key=key,
)
except S3UploadFailedError as retry_error:
logger.exception(
"upload_failed_after_bucket_creation",
bucket=bucket,
object_key=key,
)
raise StorageUploadError(
bucket=bucket,
object_key=key,
reason=str(retry_error),
) from retry_error
else:
logger.exception(
"upload_failed",
bucket=bucket,
object_key=key,
)
raise StorageUploadError(
bucket=bucket,
object_key=key,
reason=str(e),
) from e
def download_file(self, bucket: str, key: PurePath) -> Path:
"""Download a single file from S3 storage.
Downloads the file to a temporary location and returns the path.
:param bucket: S3 bucket name.
:param key: Object key (path) in S3 to download.
:returns: Path to the downloaded file.
:raise StorageDownloadError: If download operation fails.
"""
logger = get_logger()
client = self._get_client()
logger.debug(
"downloading_file_from_storage",
bucket=bucket,
object_key=str(key),
)
try:
# Create temporary file for download
with NamedTemporaryFile(delete=False, suffix=".tar.gz") as temp_file:
temp_path = Path(temp_file.name)
# Download object to temp file
client.download_file(
Bucket=bucket,
Key=str(key),
Filename=str(temp_path),
)
logger.info(
"file_downloaded_successfully",
bucket=bucket,
object_key=str(key),
local_path=str(temp_path),
)
return temp_path
except ClientError as error:
error_code = error.response.get("Error", {}).get("Code")
logger.exception(
"download_failed",
bucket=bucket,
object_key=str(key),
error_code=error_code,
)
raise StorageDownloadError(
bucket=bucket,
object_key=str(key),
reason=f"{error_code}: {error!s}",
) from error
def download_directory(self, bucket: str, directory: PurePath) -> Path:
"""TODO.
:param bucket: TODO.
:param directory: TODO.
:returns: TODO.
"""
with NamedTemporaryFile(delete=False) as file:
path: Path = Path(file.name)
# end-with
client: Any = self._get_client()
with Archive(name=str(path), mode="w:gz") as archive:
paginator = client.get_paginator("list_objects_v2")
try:
pages = paginator.paginate(Bucket=bucket, Prefix=str(directory))
except ClientError as exception:
raise StorageDownloadError(
bucket=bucket,
object_key=str(directory),
reason=exception.response["Error"]["Code"],
) from exception
for page in pages:
for entry in page.get("Contents", []):
key: str = entry["Key"]
try:
response: dict[str, Any] = client.get_object(Bucket=bucket, Key=key)
except ClientError as exception:
raise StorageDownloadError(
bucket=bucket,
object_key=key,
reason=exception.response["Error"]["Code"],
) from exception
archive.addfile(TarInfo(name=key), fileobj=response["Body"])
# end-for
# end-for
# end-with
return path

View File

@@ -1,8 +0,0 @@
from enum import StrEnum
class TemporalQueues(StrEnum):
"""Enumeration of available `Temporal Task Queues`."""
#: The default task queue.
DEFAULT = "default-task-queue"

View File

@@ -1,46 +0,0 @@
from enum import StrEnum
from typing import Literal
from fuzzforge_types import FuzzForgeWorkflowIdentifier # noqa: TC002 (required by 'pydantic' at runtime)
from pydantic import BaseModel
class Base(BaseModel):
"""TODO."""
class FuzzForgeWorkflowSteps(StrEnum):
"""Workflow step types."""
#: Execute a FuzzForge module
RUN_FUZZFORGE_MODULE = "run-fuzzforge-module"
class FuzzForgeWorkflowStep(Base):
"""TODO."""
#: The type of the workflow's step.
kind: FuzzForgeWorkflowSteps
class RunFuzzForgeModule(FuzzForgeWorkflowStep):
"""Execute a FuzzForge module."""
kind: Literal[FuzzForgeWorkflowSteps.RUN_FUZZFORGE_MODULE] = FuzzForgeWorkflowSteps.RUN_FUZZFORGE_MODULE
#: The name of the module.
module: str
#: The container of the module.
container: str
class FuzzForgeWorkflowDefinition(Base):
"""The definition of a FuzzForge workflow."""
#: The author of the workflow.
author: str
#: The identifier of the workflow.
identifier: FuzzForgeWorkflowIdentifier
#: The name of the workflow.
name: str
#: The collection of steps that compose the workflow.
steps: list[RunFuzzForgeModule]

View File

@@ -1,24 +0,0 @@
from pydantic import BaseModel
from fuzzforge_common.sandboxes.engines.docker.configuration import (
DockerConfiguration, # noqa: TC001 (required by pydantic at runtime)
)
from fuzzforge_common.sandboxes.engines.podman.configuration import (
PodmanConfiguration, # noqa: TC001 (required by pydantic at runtime)
)
from fuzzforge_common.storage.configuration import StorageConfiguration # noqa: TC001 (required by pydantic at runtime)
class TemporalWorkflowParameters(BaseModel):
"""Base parameters for Temporal workflows.
Provides common configuration shared across all workflow types,
including sandbox engine and storage backend instances.
"""
#: Sandbox engine for container operations (Docker or Podman).
engine_configuration: PodmanConfiguration | DockerConfiguration
#: Storage backend for uploading/downloading execution artifacts.
storage_configuration: StorageConfiguration

View File

@@ -1,108 +0,0 @@
"""Helper utilities for working with bridge transformations."""
from pathlib import Path
from typing import Any
def load_transform_from_file(file_path: str | Path) -> str:
"""Load bridge transformation code from a Python file.
This reads the transformation function from a .py file and extracts
the code as a string suitable for the bridge module.
Args:
file_path: Path to Python file containing transform() function
Returns:
Python code as a string
Example:
>>> code = load_transform_from_file("transformations/add_line_numbers.py")
>>> # code contains the transform() function as a string
"""
path = Path(file_path)
if not path.exists():
raise FileNotFoundError(f"Transformation file not found: {file_path}")
if path.suffix != ".py":
raise ValueError(f"Transformation file must be .py file, got: {path.suffix}")
# Read the entire file
code = path.read_text()
return code
def create_bridge_input(
transform_file: str | Path,
input_filename: str | None = None,
output_filename: str | None = None,
) -> dict[str, Any]:
"""Create bridge module input configuration from a transformation file.
Args:
transform_file: Path to Python file with transform() function
input_filename: Optional specific input file to transform
output_filename: Optional specific output filename
Returns:
Dictionary suitable for bridge module's input.json
Example:
>>> config = create_bridge_input("transformations/add_line_numbers.py")
>>> import json
>>> json.dump(config, open("input.json", "w"))
"""
code = load_transform_from_file(transform_file)
return {
"code": code,
"input_filename": input_filename,
"output_filename": output_filename,
}
def validate_transform_function(file_path: str | Path) -> bool:
"""Validate that a Python file contains a valid transform() function.
Args:
file_path: Path to Python file to validate
Returns:
True if valid, raises exception otherwise
Raises:
ValueError: If transform() function is not found or invalid
"""
code = load_transform_from_file(file_path)
# Check if transform function is defined
if "def transform(" not in code:
raise ValueError(
f"File {file_path} must contain a 'def transform(data)' function"
)
# Try to compile the code
try:
compile(code, str(file_path), "exec")
except SyntaxError as e:
raise ValueError(f"Syntax error in {file_path}: {e}") from e
# Try to execute and verify transform exists
namespace: dict[str, Any] = {"__builtins__": __builtins__}
try:
exec(code, namespace)
except Exception as e:
raise ValueError(f"Failed to execute {file_path}: {e}") from e
if "transform" not in namespace:
raise ValueError(f"No 'transform' function found in {file_path}")
if not callable(namespace["transform"]):
raise ValueError(f"'transform' in {file_path} is not callable")
return True
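Taken together, the helpers above turn a transformation file into a bridge module `input.json`. A condensed, self-contained sketch of that flow — `build_bridge_config` is a hypothetical stand-in that inlines the validation from `validate_transform_function` and the embedding from `create_bridge_input`:

```python
import json
import tempfile
from pathlib import Path

# A minimal transform file of the shape validate_transform_function() expects.
TRANSFORM_SRC = '''
def transform(data: str) -> str:
    """Prefix every line with its 1-based line number."""
    return "\\n".join(f"{i}: {line}" for i, line in enumerate(data.splitlines(), 1))
'''

def build_bridge_config(transform_file: Path) -> dict:
    # Mirrors create_bridge_input(): embed the file's source in the config,
    # after the same basic checks the validator performs.
    code = transform_file.read_text()
    if "def transform(" not in code:
        raise ValueError("missing transform() function")
    compile(code, str(transform_file), "exec")  # surface syntax errors early
    return {"code": code, "input_filename": None, "output_filename": None}

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp, "add_line_numbers.py")
    path.write_text(TRANSFORM_SRC)
    config = build_bridge_config(path)
    bridge_json = json.dumps(config)  # ready to be written as input.json
```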

View File

@@ -1,27 +0,0 @@
from fuzzforge_types import (
FuzzForgeExecutionIdentifier, # noqa: TC002 (required by pydantic at runtime)
FuzzForgeProjectIdentifier, # noqa: TC002 (required by pydantic at runtime)
)
from fuzzforge_common.workflows.base.definitions import (
FuzzForgeWorkflowDefinition, # noqa: TC001 (required by pydantic at runtime)
)
from fuzzforge_common.workflows.base.parameters import TemporalWorkflowParameters
class ExecuteFuzzForgeWorkflowParameters(TemporalWorkflowParameters):
"""Parameters for the default FuzzForge workflow orchestration.
Contains workflow definition and execution tracking identifiers
for coordinating multi-module workflows.
"""
#: UUID7 identifier of this specific workflow execution.
execution_identifier: FuzzForgeExecutionIdentifier
#: UUID7 identifier of the project this execution belongs to.
project_identifier: FuzzForgeProjectIdentifier
#: The definition of the FuzzForge workflow to run.
workflow_definition: FuzzForgeWorkflowDefinition

View File

@@ -1,80 +0,0 @@
from typing import Any, Literal
from fuzzforge_types import (
FuzzForgeExecutionIdentifier, # noqa: TC002 (required by pydantic at runtime)
FuzzForgeProjectIdentifier, # noqa: TC002 (required by pydantic at runtime)
)
from fuzzforge_common.workflows.base.parameters import TemporalWorkflowParameters
class ExecuteFuzzForgeModuleParameters(TemporalWorkflowParameters):
"""Parameters for executing a single FuzzForge module workflow.
Contains module execution configuration including container image,
project context, and execution tracking identifiers.
Supports workflow chaining where modules can be executed in sequence,
with each module's output becoming the next module's input.
"""
#: The identifier of this module execution.
execution_identifier: FuzzForgeExecutionIdentifier
#: The identifier/name of the module to execute.
#: FIXME: Currently accepts both UUID (for registry lookups) and container names (e.g., "text-generator:0.0.1").
#: This should be split into module_identifier (UUID) and container_image (string) in the future.
module_identifier: str
#: The identifier of the project this module execution belongs to.
project_identifier: FuzzForgeProjectIdentifier
#: Optional configuration dictionary for the module.
#: Will be written to /data/input/config.json in the sandbox.
module_configuration: dict[str, Any] | None = None
# Workflow chaining fields
#: The identifier of the parent workflow execution (if part of a multi-module workflow).
#: For standalone module executions, this equals execution_identifier.
workflow_execution_identifier: FuzzForgeExecutionIdentifier | None = None
#: Position of this module in the workflow (0-based).
#: 0 = first module (reads from project assets)
#: N > 0 = subsequent module (reads from previous module's output)
step_index: int = 0
#: Execution identifier of the previous module in the workflow chain.
#: None for first module (step_index=0).
#: Used to locate previous module's output in storage.
previous_step_execution_identifier: FuzzForgeExecutionIdentifier | None = None
class WorkflowStep(TemporalWorkflowParameters):
"""A step in a workflow - a module execution.
Steps are executed sequentially in a workflow. Each step runs a containerized module.
Examples:
# Module step
WorkflowStep(
step_index=0,
step_type="module",
module_identifier="text-generator:0.0.1"
)
"""
#: Position of this step in the workflow (0-based)
step_index: int
#: Type of step: "module" (bridges are also modules now)
step_type: Literal["module"]
#: Module identifier (container image name like "text-generator:0.0.1")
#: Required if step_type="module"
module_identifier: str | None = None
#: Optional module configuration
module_configuration: dict[str, Any] | None = None

View File

@@ -1,42 +0,0 @@
from pathlib import Path
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from fuzzforge_common.storage.configuration import StorageConfiguration
def test_download_directory(
storage_configuration: StorageConfiguration,
boto3_client: Any,
random_bucket: str,
tmp_path: Path,
) -> None:
"""TODO."""
bucket = random_bucket
storage = storage_configuration.into_storage()
d1 = tmp_path.joinpath("d1")
f1 = d1.joinpath("f1")
d2 = tmp_path.joinpath("d2")
f2 = d2.joinpath("f2")
d3 = d2.joinpath("d3")
f3 = d3.joinpath("d3")
d1.mkdir()
d2.mkdir()
d3.mkdir()
f1.touch()
f2.touch()
f3.touch()
for path in [f1, f2, f3]:
key: Path = Path("assets", path.relative_to(other=tmp_path))
boto3_client.upload_file(
Bucket=bucket,
Filename=str(path),
Key=str(key),
)
path = storage.download_directory(bucket=bucket, directory="assets")
assert path.is_file()

View File

@@ -45,11 +45,11 @@ For custom setups, you can manually configure the MCP server.
{
"mcpServers": {
"fuzzforge": {
"command": "/path/to/fuzzforge-oss/.venv/bin/python",
"command": "/path/to/fuzzforge_ai/.venv/bin/python",
"args": ["-m", "fuzzforge_mcp"],
"cwd": "/path/to/fuzzforge-oss",
"cwd": "/path/to/fuzzforge_ai",
"env": {
"FUZZFORGE_MODULES_PATH": "/path/to/fuzzforge-oss/fuzzforge-modules",
"FUZZFORGE_MODULES_PATH": "/path/to/fuzzforge_ai/fuzzforge-modules",
"FUZZFORGE_ENGINE__TYPE": "docker"
}
}
@@ -64,11 +64,11 @@ For custom setups, you can manually configure the MCP server.
"servers": {
"fuzzforge": {
"type": "stdio",
"command": "/path/to/fuzzforge-oss/.venv/bin/python",
"command": "/path/to/fuzzforge_ai/.venv/bin/python",
"args": ["-m", "fuzzforge_mcp"],
"cwd": "/path/to/fuzzforge-oss",
"cwd": "/path/to/fuzzforge_ai",
"env": {
"FUZZFORGE_MODULES_PATH": "/path/to/fuzzforge-oss/fuzzforge-modules",
"FUZZFORGE_MODULES_PATH": "/path/to/fuzzforge_ai/fuzzforge-modules",
"FUZZFORGE_ENGINE__TYPE": "docker"
}
}
@@ -83,11 +83,11 @@ For custom setups, you can manually configure the MCP server.
"mcpServers": {
"fuzzforge": {
"type": "stdio",
"command": "/path/to/fuzzforge-oss/.venv/bin/python",
"command": "/path/to/fuzzforge_ai/.venv/bin/python",
"args": ["-m", "fuzzforge_mcp"],
"cwd": "/path/to/fuzzforge-oss",
"cwd": "/path/to/fuzzforge_ai",
"env": {
"FUZZFORGE_MODULES_PATH": "/path/to/fuzzforge-oss/fuzzforge-modules",
"FUZZFORGE_MODULES_PATH": "/path/to/fuzzforge_ai/fuzzforge-modules",
"FUZZFORGE_ENGINE__TYPE": "docker"
}
}

View File

@@ -1,14 +1,14 @@
[project]
name = "fuzzforge-mcp"
version = "0.0.1"
description = "FuzzForge MCP Server - AI agent gateway for FuzzForge OSS."
description = "FuzzForge MCP Server - AI agent gateway for FuzzForge AI."
authors = []
readme = "README.md"
requires-python = ">=3.14"
dependencies = [
"fastmcp==2.14.1",
"fuzzforge-common==0.0.1",
"fuzzforge-runner==0.0.1",
"fuzzforge-types==0.0.1",
"pydantic==2.12.4",
"pydantic-settings==2.12.0",
"structlog==25.5.0",
@@ -24,11 +24,13 @@ lints = [
"ruff==0.14.4",
]
tests = [
"fuzzforge-tests==0.0.1",
"pytest==9.0.2",
"pytest-asyncio==1.3.0",
"pytest-httpx==0.36.0",
]
[tool.uv.sources]
fuzzforge-common = { workspace = true }
fuzzforge-runner = { workspace = true }
fuzzforge-types = { workspace = true }
fuzzforge-tests = { workspace = true }

View File

@@ -43,6 +43,7 @@ FuzzForge is a security research orchestration platform. Use these tools to:
3. **Execute workflows**: Chain multiple modules together
4. **Manage projects**: Initialize and configure projects
5. **Get results**: Retrieve execution results
6. **Hub tools**: Discover and execute tools from external MCP servers
Typical workflow:
1. Initialize a project with `init_project`
@@ -50,6 +51,11 @@ Typical workflow:
3. List available modules with `list_modules`
4. Execute a module with `execute_module` — use `assets_path` param to pass different inputs per module
5. Read outputs from `results_path` returned by `execute_module` — check module's `output_artifacts` metadata for filenames
Hub workflow:
1. List available hub servers with `list_hub_servers`
2. Discover tools from servers with `discover_hub_tools`
3. Execute hub tools with `execute_hub_tool`
""",
lifespan=lifespan,
)

View File

@@ -1,6 +1,6 @@
"""Workflow resources for FuzzForge MCP.
Note: In FuzzForge OSS, workflows are defined at runtime rather than
Note: In FuzzForge AI, workflows are defined at runtime rather than
stored. This resource provides documentation about workflow capabilities.
"""
@@ -19,7 +19,7 @@ mcp: FastMCP = FastMCP()
async def get_workflow_help() -> dict[str, Any]:
"""Get help information about creating workflows.
Workflows in FuzzForge OSS are defined at execution time rather
Workflows in FuzzForge AI are defined at execution time rather
than stored. Use the execute_workflow tool with step definitions.
:return: Workflow documentation.

View File

@@ -2,13 +2,14 @@
from fastmcp import FastMCP
from fuzzforge_mcp.tools import modules, projects, workflows
from fuzzforge_mcp.tools import hub, modules, projects, workflows
mcp: FastMCP = FastMCP()
mcp.mount(modules.mcp)
mcp.mount(projects.mcp)
mcp.mount(workflows.mcp)
mcp.mount(hub.mcp)
__all__ = [
"mcp",

View File

@@ -0,0 +1,315 @@
"""MCP Hub tools for FuzzForge MCP server.
This module provides tools for interacting with external MCP servers
through the FuzzForge hub. AI agents can:
- List available hub servers and their tools
- Discover tools from hub servers
- Execute hub tools
"""
from __future__ import annotations
from pathlib import Path
from typing import Any
from fastmcp import FastMCP
from fastmcp.exceptions import ToolError
from fuzzforge_common.hub import HubExecutor, HubServerConfig, HubServerType
from fuzzforge_mcp.dependencies import get_settings
mcp: FastMCP = FastMCP()
# Global hub executor instance (lazy initialization)
_hub_executor: HubExecutor | None = None
def _get_hub_executor() -> HubExecutor:
"""Get or create the hub executor instance.
:returns: Hub executor instance.
:raises ToolError: If hub is disabled.
"""
global _hub_executor
settings = get_settings()
if not settings.hub.enabled:
msg = "MCP Hub is disabled. Enable it via FUZZFORGE_HUB__ENABLED=true"
raise ToolError(msg)
if _hub_executor is None:
config_path = settings.hub.config_path
_hub_executor = HubExecutor(
config_path=config_path,
timeout=settings.hub.timeout,
)
return _hub_executor
@mcp.tool
async def list_hub_servers() -> dict[str, Any]:
"""List all registered MCP hub servers.
Returns information about configured hub servers, including
their connection type, status, and discovered tool count.
:return: Dictionary with list of hub servers.
"""
try:
executor = _get_hub_executor()
servers = executor.list_servers()
return {
"servers": servers,
"count": len(servers),
"enabled_count": len([s for s in servers if s["enabled"]]),
}
except Exception as e:
if isinstance(e, ToolError):
raise
msg = f"Failed to list hub servers: {e}"
raise ToolError(msg) from e
@mcp.tool
async def discover_hub_tools(server_name: str | None = None) -> dict[str, Any]:
"""Discover tools from hub servers.
Connects to hub servers and retrieves their available tools.
If server_name is provided, only discovers from that server.
Otherwise discovers from all enabled servers.
:param server_name: Optional specific server to discover from.
:return: Dictionary with discovered tools.
"""
try:
executor = _get_hub_executor()
if server_name:
tools = await executor.discover_server_tools(server_name)
return {
"server": server_name,
"tools": [
{
"identifier": t.identifier,
"name": t.name,
"description": t.description,
"parameters": [p.model_dump() for p in t.parameters],
}
for t in tools
],
"count": len(tools),
}
else:
results = await executor.discover_all_tools()
all_tools = []
for server, tools in results.items():
for tool in tools:
all_tools.append({
"identifier": tool.identifier,
"name": tool.name,
"server": server,
"description": tool.description,
"parameters": [p.model_dump() for p in tool.parameters],
})
return {
"servers_discovered": len(results),
"tools": all_tools,
"count": len(all_tools),
}
except Exception as e:
if isinstance(e, ToolError):
raise
msg = f"Failed to discover hub tools: {e}"
raise ToolError(msg) from e
@mcp.tool
async def list_hub_tools() -> dict[str, Any]:
"""List all discovered hub tools.
Returns tools that have been previously discovered from hub servers.
Run discover_hub_tools first if no tools are listed.
:return: Dictionary with list of discovered tools.
"""
try:
executor = _get_hub_executor()
tools = executor.list_tools()
return {
"tools": tools,
"count": len(tools),
}
except Exception as e:
if isinstance(e, ToolError):
raise
msg = f"Failed to list hub tools: {e}"
raise ToolError(msg) from e
@mcp.tool
async def execute_hub_tool(
identifier: str,
arguments: dict[str, Any] | None = None,
timeout: int | None = None,
) -> dict[str, Any]:
"""Execute a tool from a hub server.
:param identifier: Tool identifier (format: hub:server:tool or server:tool).
:param arguments: Tool arguments matching the tool's input schema.
:param timeout: Optional execution timeout in seconds.
:return: Tool execution result.
Example identifiers:
- "hub:nmap:nmap_scan"
- "nmap:nmap_scan"
- "hub:nuclei:nuclei_scan"
"""
try:
executor = _get_hub_executor()
result = await executor.execute_tool(
identifier=identifier,
arguments=arguments or {},
timeout=timeout,
)
return result.to_dict()
except Exception as e:
if isinstance(e, ToolError):
raise
msg = f"Hub tool execution failed: {e}"
raise ToolError(msg) from e
@mcp.tool
async def get_hub_tool_schema(identifier: str) -> dict[str, Any]:
"""Get the input schema for a hub tool.
Returns the JSON Schema that describes the tool's expected arguments.
:param identifier: Tool identifier (format: hub:server:tool or server:tool).
:return: JSON Schema for the tool's input.
"""
try:
executor = _get_hub_executor()
schema = executor.get_tool_schema(identifier)
if schema is None:
msg = f"Tool '{identifier}' not found. Run discover_hub_tools first."
raise ToolError(msg)
return {
"identifier": identifier,
"schema": schema,
}
except Exception as e:
if isinstance(e, ToolError):
raise
msg = f"Failed to get tool schema: {e}"
raise ToolError(msg) from e
@mcp.tool
async def add_hub_server(
name: str,
server_type: str,
image: str | None = None,
command: list[str] | None = None,
url: str | None = None,
category: str | None = None,
description: str | None = None,
capabilities: list[str] | None = None,
environment: dict[str, str] | None = None,
) -> dict[str, Any]:
"""Add a new MCP server to the hub.
Register a new external MCP server that can be used for tool discovery
and execution. Servers can be Docker images, local commands, or SSE endpoints.
:param name: Unique name for the server (e.g., "nmap", "nuclei").
:param server_type: Connection type ("docker", "command", or "sse").
:param image: Docker image name (for docker type).
:param command: Command and args (for command type).
:param url: SSE endpoint URL (for sse type).
:param category: Category for grouping (e.g., "reconnaissance").
:param description: Human-readable description.
:param capabilities: Docker capabilities to add (e.g., ["NET_RAW"]).
:param environment: Environment variables to pass.
:return: Information about the added server.
Examples:
- Docker: add_hub_server("nmap", "docker", image="nmap-mcp:latest", capabilities=["NET_RAW"])
- Command: add_hub_server("custom", "command", command=["python", "server.py"])
"""
try:
executor = _get_hub_executor()
# Parse server type
try:
stype = HubServerType(server_type)
except ValueError:
msg = f"Invalid server type: {server_type}. Use 'docker', 'command', or 'sse'."
raise ToolError(msg) from None
# Validate required fields based on type
if stype == HubServerType.DOCKER and not image:
msg = "Docker image required for docker type"
raise ToolError(msg)
if stype == HubServerType.COMMAND and not command:
msg = "Command required for command type"
raise ToolError(msg)
if stype == HubServerType.SSE and not url:
msg = "URL required for sse type"
raise ToolError(msg)
config = HubServerConfig(
name=name,
type=stype,
image=image,
command=command,
url=url,
category=category,
description=description,
capabilities=capabilities or [],
environment=environment or {},
)
server = executor.add_server(config)
return {
"success": True,
"server": {
"name": server.name,
"identifier": server.identifier,
"type": server.config.type.value,
"enabled": server.config.enabled,
},
"message": f"Server '{name}' added. Use discover_hub_tools('{name}') to discover its tools.",
}
except ValueError as e:
msg = f"Failed to add server: {e}"
raise ToolError(msg) from e
except Exception as e:
if isinstance(e, ToolError):
raise
msg = f"Failed to add hub server: {e}"
raise ToolError(msg) from e
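`execute_hub_tool` and `get_hub_tool_schema` accept identifiers in two shapes, `hub:server:tool` and `server:tool`. A sketch of how such an identifier could be normalized — illustrative only, since the actual resolution lives inside `HubExecutor`:

```python
def parse_hub_identifier(identifier: str) -> tuple[str, str]:
    """Split a hub tool identifier into (server, tool).

    Accepts both documented forms: "hub:server:tool" and "server:tool".
    """
    parts = identifier.split(":")
    if parts and parts[0] == "hub":
        parts = parts[1:]  # drop the optional "hub" prefix
    if len(parts) != 2 or not all(parts):
        raise ValueError(f"invalid hub tool identifier: {identifier!r}")
    return parts[0], parts[1]

server, tool = parse_hub_identifier("hub:nmap:nmap_scan")
```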

View File

@@ -187,6 +187,9 @@ async def start_continuous_module(
"container_id": result["container_id"],
"input_dir": result["input_dir"],
"project_path": str(project_path),
# Incremental stream.jsonl tracking
"stream_lines_read": 0,
"total_crashes": 0,
}
return {
@@ -204,24 +207,29 @@ async def start_continuous_module(
def _get_continuous_status_impl(session_id: str) -> dict[str, Any]:
"""Internal helper to get continuous session status (non-tool version)."""
"""Internal helper to get continuous session status (non-tool version).
Uses incremental reads of ``stream.jsonl`` via ``tail -n +offset`` so that
only new lines appended since the last poll are fetched and parsed. Crash
counts and latest metrics are accumulated across polls.
"""
if session_id not in _background_executions:
raise ToolError(f"Unknown session: {session_id}. Use list_continuous_sessions() to see active sessions.")
execution = _background_executions[session_id]
container_id = execution.get("container_id")
# Initialize metrics
# Carry forward accumulated state
metrics: dict[str, Any] = {
"total_executions": 0,
"total_crashes": 0,
"total_crashes": execution.get("total_crashes", 0),
"exec_per_sec": 0,
"coverage": 0,
"current_target": "",
"latest_events": [],
"new_events": [],
}
# Read stream.jsonl from inside the running container
if container_id:
try:
runner: Runner = get_runner()
@@ -232,34 +240,45 @@ def _get_continuous_status_impl(session_id: str) -> dict[str, Any]:
if container_status != "running":
execution["status"] = "stopped" if container_status == "exited" else container_status
# Read stream.jsonl from container
stream_content = executor.read_module_output(container_id, "/data/output/stream.jsonl")
# Incremental read: only fetch lines we haven't seen yet
lines_read: int = execution.get("stream_lines_read", 0)
stream_content = executor.read_module_output_incremental(
container_id,
start_line=lines_read + 1,
output_file="/data/output/stream.jsonl",
)
if stream_content:
lines = stream_content.strip().split("\n")
# Get last 20 events
recent_lines = lines[-20:] if len(lines) > 20 else lines
crash_count = 0
new_lines = stream_content.strip().split("\n")
new_line_count = 0
for line in recent_lines:
for line in new_lines:
if not line.strip():
continue
try:
event = json.loads(line)
metrics["latest_events"].append(event)
# Extract metrics from events
if event.get("event") == "metrics":
metrics["total_executions"] = event.get("executions", 0)
metrics["current_target"] = event.get("target", "")
metrics["exec_per_sec"] = event.get("exec_per_sec", 0)
metrics["coverage"] = event.get("coverage", 0)
if event.get("event") == "crash_detected":
crash_count += 1
except json.JSONDecodeError:
# Possible torn read on the very last line — skip it
# and do NOT advance the offset so it is re-read next
# poll when the write is complete.
continue
metrics["total_crashes"] = crash_count
new_line_count += 1
metrics["new_events"].append(event)
# Extract latest metrics snapshot
if event.get("event") == "metrics":
metrics["total_executions"] = event.get("executions", 0)
metrics["current_target"] = event.get("target", "")
metrics["exec_per_sec"] = event.get("exec_per_sec", 0)
metrics["coverage"] = event.get("coverage", 0)
if event.get("event") == "crash_detected":
metrics["total_crashes"] += 1
# Advance offset by successfully parsed lines only
execution["stream_lines_read"] = lines_read + new_line_count
execution["total_crashes"] = metrics["total_crashes"]
except Exception as e:
metrics["error"] = str(e)
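The offset/torn-line protocol above can be sketched without containers, using a list of lines in place of `tail -n +offset`. This simplified version stops at the first unparsable line (the real code skips it and continues), but preserves the key property: a half-written tail line is not consumed, so the next poll re-reads it once the writer has finished:

```python
import json

def poll_stream(all_lines: list[str], offset: int) -> tuple[list[dict], int]:
    # Parse only lines past `offset`; do not advance past a torn line.
    events: list[dict] = []
    new_offset = offset
    for line in all_lines[offset:]:
        try:
            events.append(json.loads(line))
        except json.JSONDecodeError:
            break  # torn tail line: retry on the next poll
        new_offset += 1
    return events, new_offset

stream = [
    '{"event": "metrics", "executions": 10}',
    '{"event": "crash_detected", "target": "t1"}',
    '{"event": "metr',  # writer is mid-line
]
events, off = poll_stream(stream, 0)
crashes = sum(e["event"] == "crash_detected" for e in events)
```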

View File

@@ -11,7 +11,7 @@ if TYPE_CHECKING:
from collections.abc import AsyncGenerator, Callable
from fastmcp.client import FastMCPTransport
from fuzzforge_types import FuzzForgeProjectIdentifier
from fuzzforge_tests.fixtures import FuzzForgeProjectIdentifier
pytest_plugins = ["fuzzforge_tests.fixtures"]

View File

@@ -1,6 +1,6 @@
"""MCP tool tests for FuzzForge OSS.
"""MCP tool tests for FuzzForge AI.
Tests the MCP tools that are available in the OSS version.
Tests the MCP tools that are available in FuzzForge AI.
"""
import pytest

View File

@@ -20,7 +20,7 @@ from typing import TYPE_CHECKING
import structlog
from fuzzforge_modules_sdk.api.constants import PATH_TO_INPUTS, PATH_TO_OUTPUTS
from fuzzforge_modules_sdk.api.models import FuzzForgeModuleResults
from fuzzforge_modules_sdk.api.models import FuzzForgeModuleResults, FuzzForgeModuleStatus
from fuzzforge_modules_sdk.api.modules.base import FuzzForgeModule
from module.models import Input, Output, CrashInfo, FuzzingStats, TargetResult
@@ -79,19 +79,19 @@ class Module(FuzzForgeModule):
logger.info("cargo-fuzzer starting", resource_count=len(resources))
# Emit initial progress
self.emit_progress(0, status="initializing", message="Setting up fuzzing environment")
self.emit_progress(0, status=FuzzForgeModuleStatus.INITIALIZING, message="Setting up fuzzing environment")
self.emit_event("module_started", resource_count=len(resources))
# Setup the fuzzing environment
if not self._setup_environment(resources):
self.emit_progress(100, status="failed", message="Failed to setup environment")
self.emit_progress(100, status=FuzzForgeModuleStatus.FAILED, message="Failed to setup environment")
return FuzzForgeModuleResults.FAILURE
# Get list of fuzz targets
targets = self._get_fuzz_targets()
if not targets:
logger.error("no fuzz targets found")
self.emit_progress(100, status="failed", message="No fuzz targets found")
self.emit_progress(100, status=FuzzForgeModuleStatus.FAILED, message="No fuzz targets found")
return FuzzForgeModuleResults.FAILURE
# Filter targets if specific ones were requested
@@ -100,7 +100,7 @@ class Module(FuzzForgeModule):
targets = [t for t in targets if t in requested]
if not targets:
logger.error("none of the requested targets found", requested=list(requested))
self.emit_progress(100, status="failed", message="Requested targets not found")
self.emit_progress(100, status=FuzzForgeModuleStatus.FAILED, message="Requested targets not found")
return FuzzForgeModuleResults.FAILURE
logger.info("found fuzz targets", targets=targets)
@@ -137,7 +137,7 @@ class Module(FuzzForgeModule):
progress = int((i / len(targets)) * 100) if not is_continuous else 50
self.emit_progress(
progress,
status="running",
status=FuzzForgeModuleStatus.RUNNING,
message=progress_msg,
current_task=target,
metrics={
@@ -177,7 +177,7 @@ class Module(FuzzForgeModule):
# Emit final progress
self.emit_progress(
100,
status="completed",
status=FuzzForgeModuleStatus.COMPLETED,
message=f"Fuzzing completed. Found {total_crashes} crashes.",
metrics={
"targets_fuzzed": len(self._target_results),

View File

@@ -1,13 +1,9 @@
from pathlib import Path
PATH_TO_DATA: Path = Path("/data")
PATH_TO_DATA: Path = Path("/fuzzforge")
PATH_TO_INPUTS: Path = PATH_TO_DATA.joinpath("input")
PATH_TO_INPUT: Path = PATH_TO_INPUTS.joinpath("input.json")
PATH_TO_OUTPUTS: Path = PATH_TO_DATA.joinpath("output")
PATH_TO_ARTIFACTS: Path = PATH_TO_OUTPUTS.joinpath("artifacts")
PATH_TO_RESULTS: Path = PATH_TO_OUTPUTS.joinpath("results.json")
PATH_TO_LOGS: Path = PATH_TO_OUTPUTS.joinpath("logs.jsonl")
# Streaming output paths for real-time progress
PATH_TO_PROGRESS: Path = PATH_TO_OUTPUTS.joinpath("progress.json")
PATH_TO_STREAM: Path = PATH_TO_OUTPUTS.joinpath("stream.jsonl")

View File

@@ -0,0 +1,90 @@
"""FuzzForge modules SDK models.
This module provides backward-compatible exports for all model types.
For Core SDK compatibility, use imports from `fuzzforge_modules_sdk.api.models.mod`.
"""
from enum import StrEnum
from pathlib import Path # noqa: TC003 (required by pydantic at runtime)
from pydantic import ConfigDict
# Re-export from mod.py for Core SDK compatibility
from fuzzforge_modules_sdk.api.models.mod import (
Base,
FuzzForgeModuleInputBase,
FuzzForgeModuleResource,
FuzzForgeModuleResources,
FuzzForgeModulesSettingsBase,
FuzzForgeModulesSettingsType,
)
class FuzzForgeModuleArtifacts(StrEnum):
"""Enumeration of artifact types."""
#: The artifact is an asset.
ASSET = "asset"
class FuzzForgeModuleArtifact(Base):
"""An artifact generated by the module during its run."""
#: The description of the artifact.
description: str
#: The type of the artifact.
kind: FuzzForgeModuleArtifacts
#: The name of the artifact.
name: str
#: The path to the artifact on disk.
path: Path
class FuzzForgeModuleResults(StrEnum):
"""Module execution result enumeration."""
SUCCESS = "success"
FAILURE = "failure"
class FuzzForgeModuleStatus(StrEnum):
"""Possible statuses emitted by a running module."""
#: Module is setting up its environment.
INITIALIZING = "initializing"
#: Module is actively running.
RUNNING = "running"
#: Module finished successfully.
COMPLETED = "completed"
#: Module encountered an error.
FAILED = "failed"
#: Module was stopped by the orchestrator (SIGTERM).
STOPPED = "stopped"
class FuzzForgeModuleOutputBase(Base):
"""The (standardized) output of a FuzzForge module."""
#: The collection of artifacts generated by the module during its run.
artifacts: list[FuzzForgeModuleArtifact]
#: The path to the logs.
logs: Path
#: The result of the module's run.
result: FuzzForgeModuleResults
__all__ = [
# Core SDK compatible exports
"Base",
"FuzzForgeModuleInputBase",
"FuzzForgeModuleResource",
"FuzzForgeModuleResources",
"FuzzForgeModulesSettingsBase",
"FuzzForgeModulesSettingsType",
# OSS-specific exports (also used in OSS modules)
"FuzzForgeModuleArtifact",
"FuzzForgeModuleArtifacts",
"FuzzForgeModuleOutputBase",
"FuzzForgeModuleResults",
"FuzzForgeModuleStatus",
]

View File

@@ -1,3 +1,9 @@
"""Core module models for FuzzForge modules SDK.
This module contains the base classes for module settings, inputs, and resources.
These are compatible with the fuzzforge-core SDK structure.
"""
from enum import StrEnum
from pathlib import Path # noqa: TC003 (required by pydantic at runtime)
from typing import TypeVar
@@ -6,27 +12,27 @@ from pydantic import BaseModel, ConfigDict
class Base(BaseModel):
"""TODO."""
"""Base model for all FuzzForge module types."""
model_config = ConfigDict(extra="forbid")
class FuzzForgeModulesSettingsBase(Base):
"""TODO."""
"""Base class for module settings."""
FuzzForgeModulesSettingsType = TypeVar("FuzzForgeModulesSettingsType", bound=FuzzForgeModulesSettingsBase)
class FuzzForgeModuleResources(StrEnum):
"""Enumeration of artifact types."""
"""Enumeration of resource types."""
#: The type of the resource is unknown or irrelevant.
UNKNOWN = "unknown"
class FuzzForgeModuleResource(Base):
"""TODO."""
"""A resource provided to a module as input."""
#: The description of the resource.
description: str
@@ -45,41 +51,3 @@ class FuzzForgeModuleInputBase[FuzzForgeModulesSettingsType: FuzzForgeModulesSet
resources: list[FuzzForgeModuleResource]
#: The settings of the module.
settings: FuzzForgeModulesSettingsType
class FuzzForgeModuleArtifacts(StrEnum):
"""Enumeration of artifact types."""
#: The artifact is an asset.
ASSET = "asset"
class FuzzForgeModuleArtifact(Base):
"""An artifact generated by the module during its run."""
#: The description of the artifact.
description: str
#: The type of the artifact.
kind: FuzzForgeModuleArtifacts
#: The name of the artifact.
name: str
#: The path to the artifact on disk.
path: Path
class FuzzForgeModuleResults(StrEnum):
"""TODO."""
SUCCESS = "success"
FAILURE = "failure"
class FuzzForgeModuleOutputBase(Base):
"""The (standardized) output of a FuzzForge module."""
#: The collection of artifacts generated by the module during its run.
artifacts: list[FuzzForgeModuleArtifacts]
#: The path to the logs.
logs: Path
#: The result of the module's run.
result: FuzzForgeModuleResults

View File

@@ -1,5 +1,7 @@
from abc import ABC, abstractmethod
import json
import signal
import threading
import time
from datetime import datetime, timezone
from shutil import rmtree
@@ -11,9 +13,7 @@ from fuzzforge_modules_sdk.api.constants import (
PATH_TO_ARTIFACTS,
PATH_TO_INPUT,
PATH_TO_LOGS,
PATH_TO_PROGRESS,
PATH_TO_RESULTS,
PATH_TO_STREAM,
)
from fuzzforge_modules_sdk.api.exceptions import FuzzForgeModuleError
from fuzzforge_modules_sdk.api.models import (
@@ -23,6 +23,7 @@ from fuzzforge_modules_sdk.api.models import (
FuzzForgeModuleOutputBase,
FuzzForgeModuleResource,
FuzzForgeModuleResults,
FuzzForgeModuleStatus,
FuzzForgeModulesSettingsType,
)
@@ -52,6 +53,11 @@ class FuzzForgeModule(ABC):
#: Custom output data set by the module.
__output_data: dict[str, Any]
#: Event set when stop is requested (SIGTERM received).
#: Using :class:`threading.Event` so multi-threaded modules can
    #: efficiently wait on it via :meth:`threading.Event.wait`.
__stop_requested: threading.Event
def __init__(self, name: str, version: str) -> None:
"""Initialize an instance of the class.
@@ -65,10 +71,10 @@ class FuzzForgeModule(ABC):
self.__version = version
self.__start_time = time.time()
self.__output_data = {}
# Initialize streaming output files
PATH_TO_PROGRESS.parent.mkdir(exist_ok=True, parents=True)
PATH_TO_STREAM.parent.mkdir(exist_ok=True, parents=True)
self.__stop_requested = threading.Event()
# Register SIGTERM handler for graceful shutdown
signal.signal(signal.SIGTERM, self._handle_sigterm)
@final
def get_logger(self) -> BoundLogger:
@@ -86,6 +92,58 @@ class FuzzForgeModule(ABC):
return self.__version
@final
def is_stop_requested(self) -> bool:
"""Check if stop was requested (SIGTERM received).
Long-running modules should check this periodically and exit gracefully
when True. Results will be written automatically on SIGTERM.
The underlying :class:`threading.Event` can be obtained via
:meth:`stop_event` for modules that need to *wait* on it.
:returns: True if SIGTERM was received.
"""
return self.__stop_requested.is_set()
@final
def stop_event(self) -> threading.Event:
"""Return the stop :class:`threading.Event`.
Multi-threaded modules can use ``self.stop_event().wait(timeout)``
instead of polling :meth:`is_stop_requested` in a busy-loop.
:returns: The threading event that is set on SIGTERM.
"""
return self.__stop_requested
@final
def _handle_sigterm(self, signum: int, frame: Any) -> None:
"""Handle SIGTERM signal for graceful shutdown.
Sets the stop event and emits a final progress update, then returns.
The normal :meth:`main` lifecycle (run → cleanup → write results) will
complete as usual once :meth:`_run` observes :meth:`is_stop_requested`
and returns, giving the module a chance to do any last-minute work
before the process exits.
:param signum: Signal number.
:param frame: Current stack frame.
"""
self.__stop_requested.set()
self.get_logger().info("received SIGTERM, stopping after current operation")
# Emit final progress update
self.emit_progress(
progress=100,
status=FuzzForgeModuleStatus.STOPPED,
message="Module stopped by orchestrator (SIGTERM)",
)
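A minimal sketch of the shutdown pattern this enables. `StopFlag` is a hypothetical stand-in for the SDK base class; in the real module the event is set by the SIGTERM handler above, while here another thread sets it directly:

```python
import threading
import time

class StopFlag:
    # Stand-in for FuzzForgeModule's stop handling.
    def __init__(self) -> None:
        self._stop = threading.Event()

    def stop_event(self) -> threading.Event:
        return self._stop

flag = StopFlag()
iterations = 0

def worker() -> None:
    global iterations
    # The recommended pattern: wait() on the event instead of a sleep/poll
    # busy-loop; wait() returns True as soon as stop is requested.
    while not flag.stop_event().wait(timeout=0.01):
        iterations += 1  # one unit of work per tick

t = threading.Thread(target=worker)
t.start()
time.sleep(0.05)        # let it run briefly
flag.stop_event().set() # simulate SIGTERM
t.join(timeout=1)
```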
@final
def set_output(self, **kwargs: Any) -> None:
"""Set custom output data to be included in results.json.
@@ -107,63 +165,53 @@ class FuzzForgeModule(ABC):
def emit_progress(
self,
progress: int,
status: str = "running",
status: FuzzForgeModuleStatus = FuzzForgeModuleStatus.RUNNING,
message: str = "",
metrics: dict[str, Any] | None = None,
current_task: str = "",
) -> None:
"""Emit a progress update to the progress file.
"""Emit a structured progress event to stdout (JSONL).
This method writes to /data/output/progress.json which can be polled
by the orchestrator or UI to show real-time progress.
Progress is written as a single JSON line to stdout so that the
orchestrator can capture it via ``kubectl logs`` without requiring
any file-system access inside the container.
:param progress: Progress percentage (0-100).
:param status: Current status ("initializing", "running", "completed", "failed").
:param status: Current module status.
:param message: Human-readable status message.
:param metrics: Dictionary of metrics (e.g., {"executions": 1000, "coverage": 50}).
:param current_task: Name of the current task being performed.
"""
elapsed = time.time() - self.__start_time
progress_data = {
"module": self.__name,
"version": self.__version,
"status": status,
"progress": max(0, min(100, progress)),
"message": message,
"current_task": current_task,
"elapsed_seconds": round(elapsed, 2),
"timestamp": datetime.now(timezone.utc).isoformat(),
"metrics": metrics or {},
}
PATH_TO_PROGRESS.write_text(json.dumps(progress_data, indent=2))
self.emit_event(
"progress",
status=status.value,
progress=max(0, min(100, progress)),
message=message,
current_task=current_task,
metrics=metrics or {},
)
@final
def emit_event(self, event: str, **data: Any) -> None:
"""Emit a streaming event to the stream file.
"""Emit a structured event to stdout as a single JSONL line.
This method appends to /data/output/stream.jsonl which can be tailed
by the orchestrator or UI for real-time event streaming.
All module events (including progress updates) are written to stdout
so the orchestrator can stream them in real time via ``kubectl logs``.
:param event: Event type (e.g., "crash_found", "target_started", "metrics").
:param event: Event type (e.g., ``"crash_found"``, ``"target_started"``,
``"progress"``, ``"metrics"``).
:param data: Additional event data as keyword arguments.
"""
elapsed = time.time() - self.__start_time
event_data = {
"timestamp": datetime.now(timezone.utc).isoformat(),
"elapsed_seconds": round(elapsed, 2),
"elapsed_seconds": round(self.get_elapsed_seconds(), 2),
"module": self.__name,
"event": event,
**data,
}
# Append to stream file (create if doesn't exist)
with PATH_TO_STREAM.open("a") as f:
f.write(json.dumps(event_data) + "\n")
print(json.dumps(event_data), flush=True)
@final
def get_elapsed_seconds(self) -> float:
@@ -208,7 +256,7 @@ class FuzzForgeModule(ABC):
@final
def main(self) -> None:
"""TODO."""
"""Execute the module lifecycle: prepare → run → cleanup → write results."""
result = FuzzForgeModuleResults.SUCCESS
try:
@@ -238,9 +286,8 @@ class FuzzForgeModule(ABC):
result=result,
**self.__output_data,
)
buffer = output.model_dump_json().encode("utf-8")
PATH_TO_RESULTS.parent.mkdir(exist_ok=True, parents=True)
PATH_TO_RESULTS.write_bytes(buffer)
PATH_TO_RESULTS.write_bytes(output.model_dump_json().encode("utf-8"))
@classmethod
@abstractmethod

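Since all module events now reach the orchestrator as JSONL on stdout, a consumer has to tolerate interleaved plain log lines. A minimal parser sketch — the function name and sample events are hypothetical, not part of the SDK:

```python
import json

def parse_module_events(raw_output: str) -> list[dict]:
    """Parse JSONL event lines emitted by a module on stdout.

    Non-JSON lines (e.g. plain log output) are skipped rather than
    raising, since module stdout may interleave both.
    """
    events = []
    for line in raw_output.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # plain log line, not a structured event
        if isinstance(record, dict) and "event" in record:
            events.append(record)
    return events

raw = (
    '{"event": "progress", "module": "demo", "progress": 50}\n'
    "plain log line\n"
    '{"event": "crash_found", "module": "demo", "input": "id:0001"}\n'
)
events = parse_module_events(raw)
print([e["event"] for e in events])  # → ['progress', 'crash_found']
```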
View File

@@ -1,6 +1,6 @@
# FuzzForge Runner
Direct execution engine for FuzzForge OSS. Provides simplified module and workflow execution without requiring Temporal or external infrastructure.
Direct execution engine for FuzzForge AI. Provides simplified module and workflow execution without requiring Temporal or external infrastructure.
## Overview

View File

@@ -1,13 +1,12 @@
[project]
name = "fuzzforge-runner"
version = "0.0.1"
description = "FuzzForge Runner - Direct execution engine for FuzzForge OSS."
description = "FuzzForge Runner - Direct execution engine for FuzzForge AI."
authors = []
readme = "README.md"
requires-python = ">=3.14"
dependencies = [
"fuzzforge-common",
"fuzzforge-types",
"structlog>=25.5.0",
"pydantic>=2.12.4",
"pydantic-settings>=2.8.1",
@@ -25,4 +24,3 @@ packages = ["src/fuzzforge_runner"]
[tool.uv.sources]
fuzzforge-common = { workspace = true }
fuzzforge-types = { workspace = true }

View File

@@ -1,4 +1,4 @@
"""FuzzForge Runner - Direct execution engine for FuzzForge OSS."""
"""FuzzForge Runner - Direct execution engine for FuzzForge AI."""
from fuzzforge_runner.runner import Runner
from fuzzforge_runner.settings import Settings

View File

@@ -1,10 +1,15 @@
"""FuzzForge Runner constants."""
from pydantic import UUID7
#: Type alias for execution identifiers.
type FuzzForgeExecutionIdentifier = UUID7
#: Default directory name for module input inside sandbox.
SANDBOX_INPUT_DIRECTORY: str = "/data/input"
SANDBOX_INPUT_DIRECTORY: str = "/fuzzforge/input"
#: Default directory name for module output inside sandbox.
SANDBOX_OUTPUT_DIRECTORY: str = "/data/output"
SANDBOX_OUTPUT_DIRECTORY: str = "/fuzzforge/output"
#: Default archive filename for results.
RESULTS_ARCHIVE_FILENAME: str = "results.tar.gz"

View File

@@ -18,13 +18,13 @@ from typing import TYPE_CHECKING, Any, cast
from fuzzforge_common.sandboxes.engines.docker.configuration import DockerConfiguration
from fuzzforge_common.sandboxes.engines.podman.configuration import PodmanConfiguration
from fuzzforge_types.executions import FuzzForgeExecutionIdentifier
from fuzzforge_runner.constants import (
MODULE_ENTRYPOINT,
RESULTS_ARCHIVE_FILENAME,
SANDBOX_INPUT_DIRECTORY,
SANDBOX_OUTPUT_DIRECTORY,
FuzzForgeExecutionIdentifier,
)
from fuzzforge_runner.exceptions import ModuleExecutionError, SandboxError
@@ -284,7 +284,7 @@ class ModuleExecutor:
Automatically pulls the module image from registry if it doesn't exist locally.
:param module_identifier: Name/identifier of the module image.
:param input_volume: Optional path to mount as /data/input in the container.
:param input_volume: Optional path to mount as /fuzzforge/input in the container.
:returns: The sandbox container identifier.
:raises SandboxError: If sandbox creation fails.
@@ -362,7 +362,7 @@ class ModuleExecutor:
"name": item.stem,
"description": f"Input file: {item.name}",
"kind": "unknown",
"path": f"/data/input/{item.name}",
"path": f"{SANDBOX_INPUT_DIRECTORY}/{item.name}",
}
)
elif item.is_dir():
@@ -371,7 +371,7 @@ class ModuleExecutor:
"name": item.name,
"description": f"Input directory: {item.name}",
"kind": "unknown",
"path": f"/data/input/{item.name}",
"path": f"{SANDBOX_INPUT_DIRECTORY}/{item.name}",
}
)
@@ -441,7 +441,7 @@ class ModuleExecutor:
"name": item.stem,
"description": f"Input file: {item.name}",
"kind": "unknown",
"path": f"/data/input/{item.name}",
"path": f"{SANDBOX_INPUT_DIRECTORY}/{item.name}",
}
)
elif item.is_dir():
@@ -450,7 +450,7 @@ class ModuleExecutor:
"name": item.name,
"description": f"Input directory: {item.name}",
"kind": "unknown",
"path": f"/data/input/{item.name}",
"path": f"{SANDBOX_INPUT_DIRECTORY}/{item.name}",
}
)
@@ -730,7 +730,7 @@ class ModuleExecutor:
"module": module_identifier,
}
def read_module_output(self, container_id: str, output_file: str = "/data/output/stream.jsonl") -> str:
def read_module_output(self, container_id: str, output_file: str = f"{SANDBOX_OUTPUT_DIRECTORY}/stream.jsonl") -> str:
"""Read output file from a running module container.
:param container_id: The container identifier.
@@ -741,6 +741,27 @@ class ModuleExecutor:
engine = self._get_engine()
return engine.read_file_from_container(container_id, output_file)
def read_module_output_incremental(
self,
container_id: str,
start_line: int = 1,
output_file: str = f"{SANDBOX_OUTPUT_DIRECTORY}/stream.jsonl",
) -> str:
"""Read new lines from an output file inside a running module container.
Uses ``tail -n +{start_line}`` so only lines appended since the last
read are returned. Callers should track the number of lines already
consumed and pass ``start_line = previous_count + 1`` on the next call.
:param container_id: The container identifier.
:param start_line: 1-based line number to start reading from.
:param output_file: Path to output file inside container.
:returns: New file contents from *start_line* onwards (may be empty).
"""
engine = self._get_engine()
return engine.tail_file_from_container(container_id, output_file, start_line=start_line)
def get_module_status(self, container_id: str) -> str:
"""Get the status of a running module container.

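The caller-side bookkeeping that `read_module_output_incremental` expects — track consumed lines and pass `start_line = previous_count + 1` — can be sketched like this; `drain_new_lines` and `fake_tail` are illustrative stand-ins for the executor call, not part of the runner:

```python
def drain_new_lines(read_tail, consumed: int) -> tuple[list[str], int]:
    """Fetch lines appended since the last read and update the count.

    ``read_tail(start_line)`` stands in for
    ``executor.read_module_output_incremental(container_id, start_line=...)``;
    it must return everything from the 1-based *start_line* onwards.
    """
    chunk = read_tail(consumed + 1)
    new_lines = chunk.splitlines()
    return new_lines, consumed + len(new_lines)

# Simulated container file growing between polls.
buffer = ["a", "b"]
fake_tail = lambda start: "\n".join(buffer[start - 1:])

lines, consumed = drain_new_lines(fake_tail, 0)        # → (['a', 'b'], 2)
buffer += ["c"]
lines, consumed = drain_new_lines(fake_tail, consumed)  # → (['c'], 3)
```

A poll that finds no new lines returns an empty list and leaves the count unchanged, so the loop is safe to run on a timer.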
View File

@@ -13,8 +13,7 @@ from pathlib import Path
from typing import TYPE_CHECKING, Any, cast
from uuid import uuid4
from fuzzforge_types.executions import FuzzForgeExecutionIdentifier
from fuzzforge_runner.constants import FuzzForgeExecutionIdentifier
from fuzzforge_runner.exceptions import WorkflowExecutionError
from fuzzforge_runner.executor import ModuleExecutor

View File

@@ -1,6 +1,6 @@
"""FuzzForge Runner - Main runner interface.
This module provides the high-level interface for FuzzForge OSS,
This module provides the high-level interface for FuzzForge AI,
coordinating module execution, workflow orchestration, and storage.
"""

View File

@@ -71,6 +71,29 @@ class RegistrySettings(BaseModel):
password: str | None = None
class HubSettings(BaseModel):
"""MCP Hub configuration for external tool servers.
Controls the hub that bridges FuzzForge with external MCP servers
(e.g., mcp-security-hub). When enabled, AI agents can discover
and execute tools from registered MCP servers.
Configure via environment variables:
``FUZZFORGE_HUB__ENABLED=true``
``FUZZFORGE_HUB__CONFIG_PATH=/path/to/hub-config.json``
``FUZZFORGE_HUB__TIMEOUT=300``
"""
#: Whether the MCP hub is enabled.
enabled: bool = Field(default=True)
#: Path to the hub configuration JSON file.
config_path: Path = Field(default=Path.home() / ".fuzzforge" / "hub-config.json")
#: Default timeout in seconds for hub tool execution.
timeout: int = Field(default=300)
class Settings(BaseSettings):
"""FuzzForge Runner settings.
@@ -102,6 +125,9 @@ class Settings(BaseSettings):
#: Container registry settings.
registry: RegistrySettings = Field(default_factory=RegistrySettings)
#: MCP Hub settings.
hub: HubSettings = Field(default_factory=HubSettings)
#: Path to modules directory (for development/local builds).
modules_path: Path = Field(default=Path.home() / ".fuzzforge" / "modules")

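For illustration, the `FUZZFORGE_HUB__*` mapping that pydantic-settings performs (via a `__` nested-key delimiter) works roughly like this stdlib-only sketch; `load_hub_settings` and its coercion rules are hypothetical, not the library's actual implementation:

```python
def load_hub_settings(environ: dict[str, str]) -> dict:
    """Sketch of how FUZZFORGE_HUB__* variables map onto nested hub settings.

    pydantic-settings does this via ``env_nested_delimiter="__"``; this
    version only illustrates the naming convention and type coercion.
    """
    defaults = {
        "enabled": True,
        "config_path": "~/.fuzzforge/hub-config.json",
        "timeout": 300,
    }
    prefix = "FUZZFORGE_HUB__"
    for key, raw in environ.items():
        if not key.startswith(prefix):
            continue
        field = key[len(prefix):].lower()
        if field == "enabled":
            defaults[field] = raw.lower() in ("1", "true", "yes")
        elif field == "timeout":
            defaults[field] = int(raw)
        elif field == "config_path":
            defaults[field] = raw
    return defaults

cfg = load_hub_settings({"FUZZFORGE_HUB__ENABLED": "false", "FUZZFORGE_HUB__TIMEOUT": "60"})
print(cfg["enabled"], cfg["timeout"])  # → False 60
```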
View File

@@ -39,7 +39,7 @@ def get_logger() -> BoundLogger:
class LocalStorage:
"""Local filesystem storage backend for FuzzForge OSS.
"""Local filesystem storage backend for FuzzForge AI.
Provides lightweight storage for execution results while using
direct source mounting (no copying) for input assets.

View File

@@ -6,12 +6,10 @@ authors = []
readme = "README.md"
requires-python = ">=3.14"
dependencies = [
"boto3==1.42.8",
"podman==5.6.0",
"pydantic>=2.12.4",
"pytest==9.0.2",
"fuzzforge-common==0.0.1",
"fuzzforge-types==0.0.1",
"testcontainers[minio]==4.13.3",
]
[project.optional-dependencies]
@@ -23,4 +21,3 @@ lints = [
[tool.uv.sources]
fuzzforge-common = { workspace = true }
fuzzforge-types = { workspace = true }

View File

@@ -8,17 +8,21 @@ common test utilities shared across multiple FuzzForge packages.
import random
import string
from os import environ
from typing import TYPE_CHECKING, Any, cast
from typing import TYPE_CHECKING
from uuid import uuid4, uuid7
import boto3
import pytest
from fuzzforge_common.sandboxes.engines.podman.configuration import PodmanConfiguration
from fuzzforge_common.storage.configuration import StorageConfiguration
from podman import PodmanClient
from testcontainers.minio import MinioContainer
from pydantic import UUID7
# Constants for validation (moved from enterprise SDK)
# Type aliases for identifiers (inlined from fuzzforge-types)
type FuzzForgeProjectIdentifier = UUID7
type FuzzForgeModuleIdentifier = UUID7
type FuzzForgeWorkflowIdentifier = UUID7
type FuzzForgeExecutionIdentifier = UUID7
# Constants for validation
FUZZFORGE_PROJECT_NAME_LENGTH_MIN: int = 3
FUZZFORGE_PROJECT_NAME_LENGTH_MAX: int = 64
FUZZFORGE_PROJECT_DESCRIPTION_LENGTH_MAX: int = 256
@@ -35,16 +39,6 @@ if TYPE_CHECKING:
from collections.abc import Callable, Generator
from pathlib import Path
from fuzzforge_types import (
FuzzForgeExecutionIdentifier,
FuzzForgeModuleIdentifier,
FuzzForgeProjectIdentifier,
FuzzForgeWorkflowIdentifier,
)
MINIO_DEFAULT_IMAGE: str = "minio/minio:RELEASE.2025-09-07T16-13-09Z"
def generate_random_string(
min_length: int,
@@ -201,65 +195,6 @@ def random_module_execution_identifier() -> Callable[[], FuzzForgeExecutionIdent
return inner
@pytest.fixture(scope="session")
def minio_container() -> Generator[MinioContainer]:
"""Provide MinIO testcontainer for test session.
Creates a MinIO container that persists for the entire test session.
All tests share the same container but use different buckets/keys.
:return: MinIO container instance.
"""
with MinioContainer(image=MINIO_DEFAULT_IMAGE) as container:
yield container
@pytest.fixture
def minio_container_configuration(minio_container: MinioContainer) -> dict[str, str]:
"""TODO."""
return cast("dict[str, str]", minio_container.get_config())
@pytest.fixture
def storage_configuration(minio_container_configuration: dict[str, str]) -> StorageConfiguration:
"""Provide S3 storage backend connected to MinIO testcontainer.
Creates the bucket in MinIO before returning the backend instance.
:param minio_container: MinIO testcontainer fixture.
:return: Configured S3StorageBackend instance with bucket already created.
"""
return StorageConfiguration(
endpoint=f"http://{minio_container_configuration['endpoint']}",
access_key=minio_container_configuration["access_key"],
secret_key=minio_container_configuration["secret_key"],
)
@pytest.fixture
def boto3_client(minio_container_configuration: dict[str, str]) -> Any:
"""TODO."""
return boto3.client(
"s3",
endpoint_url=f"http://{minio_container_configuration['endpoint']}",
aws_access_key_id=minio_container_configuration["access_key"],
aws_secret_access_key=minio_container_configuration["secret_key"],
)
@pytest.fixture
def random_bucket(
boto3_client: Any,
random_project_identifier: Callable[[], FuzzForgeProjectIdentifier],
) -> str:
"""TODO."""
project_identifier: FuzzForgeProjectIdentifier = random_project_identifier()
boto3_client.create_bucket(Bucket=str(project_identifier))
return str(project_identifier)
@pytest.fixture
def podman_socket() -> str:
"""TODO."""

View File

@@ -1,33 +0,0 @@
PACKAGE=$(word 1, $(shell uv version))
VERSION=$(word 2, $(shell uv version))
ARTIFACTS?=./dist
SOURCES=./src
.PHONY: clean format mypy ruff version wheel
clean:
@find . -type d \( \
-name '*.egg-info' \
-o -name '.mypy_cache' \
-o -name '.ruff_cache' \
-o -name '__pycache__' \
\) -printf 'removing directory %p\n' -exec rm -rf {} +
cloc:
cloc $(SOURCES)
format:
uv run ruff format $(SOURCES)
mypy:
uv run mypy $(SOURCES)
ruff:
uv run ruff check --fix $(SOURCES)
version:
@echo '$(PACKAGE)@$(VERSION)'
wheel:
uv build --out-dir $(ARTIFACTS)

View File

@@ -1,3 +0,0 @@
# FuzzForge types
...

View File

@@ -1,6 +0,0 @@
[mypy]
plugins = pydantic.mypy
strict = True
warn_unused_ignores = True
warn_redundant_casts = True
warn_return_any = True

View File

@@ -1,17 +0,0 @@
[project]
name = "fuzzforge-types"
version = "0.0.1"
description = "Collection of types for the FuzzForge API."
authors = []
readme = "README.md"
requires-python = ">=3.14"
dependencies = [
"pydantic==2.12.4",
]
[project.optional-dependencies]
lints = [
"bandit==1.8.6",
"mypy==1.18.2",
"ruff==0.14.4",
]

View File

@@ -1,19 +0,0 @@
line-length = 120
[lint]
select = [ "ALL" ]
ignore = [
"COM812", # conflicts with the formatter
"D100", # ignoring missing docstrings in public modules
"D104", # ignoring missing docstrings in public packages
"D203", # conflicts with 'D211'
"D213", # conflicts with 'D212'
"TD002", # ignoring missing author in 'TODO' statements
"TD003", # ignoring missing issue link in 'TODO' statements
]
[lint.per-file-ignores]
"tests/*" = [
"PLR2004", # allowing comparisons using unamed numerical constants in tests
"S101", # allowing 'assert' statements in tests
]

View File

@@ -1,37 +0,0 @@
"""FuzzForge types package.
This package exports all public types used across FuzzForge components.
"""
from fuzzforge_types.definitions import (
FuzzForgeDefinitionIdentifier,
FuzzForgeDefinitionTypes,
)
from fuzzforge_types.executions import (
FuzzForgeExecution,
FuzzForgeExecutionError,
FuzzForgeExecutionIdentifier,
FuzzForgeExecutionIncludeFilter,
FuzzForgeExecutionStatus,
)
from fuzzforge_types.identifiers import FuzzForgeProjectIdentifier
from fuzzforge_types.modules import FuzzForgeModule, FuzzForgeModuleIdentifier
from fuzzforge_types.projects import FuzzForgeProject
from fuzzforge_types.workflows import FuzzForgeWorkflow, FuzzForgeWorkflowIdentifier
__all__ = [
"FuzzForgeDefinitionIdentifier",
"FuzzForgeDefinitionTypes",
"FuzzForgeExecution",
"FuzzForgeExecutionError",
"FuzzForgeExecutionIdentifier",
"FuzzForgeExecutionIncludeFilter",
"FuzzForgeExecutionStatus",
"FuzzForgeModule",
"FuzzForgeModuleIdentifier",
"FuzzForgeProject",
"FuzzForgeProjectIdentifier",
"FuzzForgeWorkflow",
"FuzzForgeWorkflowIdentifier",
]

View File

@@ -1,11 +0,0 @@
"""TODO."""
from pydantic import BaseModel
class Base(BaseModel):
"""TODO."""
model_config = {
"from_attributes": True,
}

View File

@@ -1,26 +0,0 @@
"""Definition types for FuzzForge.
This module defines the base types and enums for FuzzForge definitions,
including modules and workflows.
"""
from enum import StrEnum
from pydantic import UUID7
class FuzzForgeDefinitionTypes(StrEnum):
"""Kind of FuzzForge definition.
Discriminator enum used to distinguish between module and workflow definitions
in the unified definitions table.
"""
MODULE_DEFINITION = "module"
WORKFLOW_DEFINITION = "workflow"
# Type aliases for definition identifiers
type FuzzForgeDefinitionIdentifier = UUID7

View File

@@ -1,75 +0,0 @@
"""TODO."""
from datetime import datetime # noqa: TC003
from enum import StrEnum
from pydantic import UUID7, Field
from fuzzforge_types.bases import Base
from fuzzforge_types.definitions import FuzzForgeDefinitionIdentifier, FuzzForgeDefinitionTypes # noqa: TC001
from fuzzforge_types.identifiers import FuzzForgeProjectIdentifier # noqa: TC001
class FuzzForgeExecutionStatus(StrEnum):
"""TODO."""
PENDING = "PENDING"
RUNNING = "RUNNING"
FINISHED = "FINISHED"
class FuzzForgeExecutionError(StrEnum):
"""TODO."""
GENERIC_ERROR = "GENERIC_ERROR"
class FuzzForgeExecutionIncludeFilter(StrEnum):
"""Filter for including specific execution types when listing.
Used to filter executions by their definition kind (module or workflow).
This filter is required when listing executions to ensure explicit intent.
"""
ALL = "all"
MODULES = "modules"
WORKFLOWS = "workflows"
# Type alias for unified execution identifiers
type FuzzForgeExecutionIdentifier = UUID7
class FuzzForgeExecution(Base):
"""DTO for unified execution data.
Represents both module and workflow executions in a single model.
The definition_kind field discriminates between the two types.
"""
execution_identifier: FuzzForgeExecutionIdentifier = Field(
description="The identifier of this execution.",
)
execution_status: FuzzForgeExecutionStatus = Field(
description="The current status of the execution.",
)
execution_error: FuzzForgeExecutionError | None = Field(
description="The error associated with the execution, if any.",
)
project_identifier: FuzzForgeProjectIdentifier = Field(
description="The identifier of the project this execution belongs to.",
)
definition_identifier: FuzzForgeDefinitionIdentifier = Field(
description="The identifier of the definition (module or workflow) being executed.",
)
definition_kind: FuzzForgeDefinitionTypes = Field(
description="The kind of definition being executed (module or workflow).",
)
created_at: datetime = Field(
description="The creation date of the execution.",
)
updated_at: datetime = Field(
description="The latest modification date of the execution.",
)

View File

@@ -1,5 +0,0 @@
"""TODO."""
from pydantic import UUID7
type FuzzForgeProjectIdentifier = UUID7

View File

@@ -1,30 +0,0 @@
"""TODO."""
from datetime import datetime # noqa: TC003
from pydantic import Field
from fuzzforge_types.bases import Base
from fuzzforge_types.definitions import FuzzForgeDefinitionIdentifier
type FuzzForgeModuleIdentifier = FuzzForgeDefinitionIdentifier
class FuzzForgeModule(Base):
"""TODO."""
module_description: str = Field(
description="The description of the module.",
)
module_identifier: FuzzForgeModuleIdentifier = Field(
description="The identifier of the module.",
)
module_name: str = Field(
description="The name of the module.",
)
created_at: datetime = Field(
description="The creation date of the module.",
)
updated_at: datetime = Field(
description="The latest modification date of the module.",
)

View File

@@ -1,34 +0,0 @@
"""TODO."""
from datetime import datetime # noqa: TC003
from pydantic import Field
from fuzzforge_types.bases import Base
from fuzzforge_types.executions import FuzzForgeExecution # noqa: TC001
from fuzzforge_types.identifiers import FuzzForgeProjectIdentifier # noqa: TC001
class FuzzForgeProject(Base):
"""TODO."""
project_description: str = Field(
description="The description of the project.",
)
project_identifier: FuzzForgeProjectIdentifier = Field(
description="The identifier of the project.",
)
project_name: str = Field(
description="The name of the project.",
)
created_at: datetime = Field(
description="The creation date of the project.",
)
updated_at: datetime = Field(
description="The latest modification date of the project.",
)
executions: list[FuzzForgeExecution] | None = Field(
default=None,
description="The module and workflow executions associated with the project.",
)

View File

@@ -1,30 +0,0 @@
"""TODO."""
from datetime import datetime # noqa: TC003
from pydantic import Field
from fuzzforge_types.bases import Base
from fuzzforge_types.definitions import FuzzForgeDefinitionIdentifier
type FuzzForgeWorkflowIdentifier = FuzzForgeDefinitionIdentifier
class FuzzForgeWorkflow(Base):
"""TODO."""
workflow_description: str = Field(
description="The description of the workflow.",
)
workflow_identifier: FuzzForgeWorkflowIdentifier = Field(
description="The identifier of the workflow.",
)
workflow_name: str = Field(
description="The name of the workflow.",
)
created_at: datetime = Field(
description="The creation date of the workflow.",
)
updated_at: datetime = Field(
description="The latest modification date of the workflow.",
)

105
hub-config.json Normal file
View File

@@ -0,0 +1,105 @@
{
"servers": [
{
"name": "nmap-mcp",
"description": "Network reconnaissance using Nmap - port scanning, service detection, OS fingerprinting",
"type": "docker",
"image": "nmap-mcp:latest",
"category": "reconnaissance",
"capabilities": ["NET_RAW"],
"enabled": true
},
{
"name": "binwalk-mcp",
"description": "Firmware extraction and analysis using Binwalk - file signatures, entropy analysis, embedded file extraction",
"type": "docker",
"image": "binwalk-mcp:latest",
"category": "binary-analysis",
"capabilities": [],
"volumes": ["~/.fuzzforge/hub/workspace:/data"],
"enabled": true
},
{
"name": "yara-mcp",
"description": "Pattern matching and malware classification using YARA rules",
"type": "docker",
"image": "yara-mcp:latest",
"category": "binary-analysis",
"capabilities": [],
"volumes": ["~/.fuzzforge/hub/workspace:/data"],
"enabled": true
},
{
"name": "capa-mcp",
"description": "Static capability detection using capa - identifies malware capabilities in binaries",
"type": "docker",
"image": "capa-mcp:latest",
"category": "binary-analysis",
"capabilities": [],
"volumes": ["~/.fuzzforge/hub/workspace:/data"],
"enabled": true
},
{
"name": "radare2-mcp",
"description": "Binary analysis and reverse engineering using radare2",
"type": "docker",
"image": "radare2-mcp:latest",
"category": "binary-analysis",
"capabilities": [],
"volumes": ["~/.fuzzforge/hub/workspace:/data"],
"enabled": true
},
{
"name": "ghidra-mcp",
"description": "Advanced binary decompilation and reverse engineering using Ghidra",
"type": "docker",
"image": "ghcr.io/clearbluejar/pyghidra-mcp:latest",
"category": "binary-analysis",
"capabilities": [],
"volumes": ["~/.fuzzforge/hub/workspace:/data"],
"enabled": true
},
{
"name": "searchsploit-mcp",
"description": "CVE and exploit search using SearchSploit / Exploit-DB",
"type": "docker",
"image": "searchsploit-mcp:latest",
"category": "exploitation",
"capabilities": [],
"volumes": ["~/.fuzzforge/hub/workspace:/data"],
"enabled": true
},
{
"name": "nuclei-mcp",
"description": "Vulnerability scanning using Nuclei templates",
"type": "docker",
"image": "nuclei-mcp:latest",
"category": "web-security",
"capabilities": ["NET_RAW"],
"volumes": ["~/.fuzzforge/hub/workspace:/data"],
"enabled": true
},
{
"name": "trivy-mcp",
"description": "Container and filesystem vulnerability scanning using Trivy",
"type": "docker",
"image": "trivy-mcp:latest",
"category": "cloud-security",
"capabilities": [],
"volumes": ["~/.fuzzforge/hub/workspace:/data"],
"enabled": true
},
{
"name": "gitleaks-mcp",
"description": "Secret and credential detection in code and firmware using Gitleaks",
"type": "docker",
"image": "gitleaks-mcp:latest",
"category": "secrets",
"capabilities": [],
"volumes": ["~/.fuzzforge/hub/workspace:/data"],
"enabled": true
}
],
"default_timeout": 300,
"cache_tools": true
}

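A client consuming this file has to expand the `~` on the host side of each `host:container` volume spec before handing the mount to Docker (the volume-expansion fix in this commit range addresses exactly that). A hedged sketch, assuming the field names shown above — `load_hub_config` itself is hypothetical:

```python
import json
from pathlib import Path

def load_hub_config(text: str) -> list[dict]:
    """Parse a hub-config document and expand ``~`` in volume mounts.

    Only enabled servers are returned; the host half of each
    ``host:container`` spec is run through ``Path.expanduser()``.
    """
    config = json.loads(text)
    servers = []
    for server in config.get("servers", []):
        if not server.get("enabled", False):
            continue
        expanded = []
        for volume in server.get("volumes", []):
            host, _, container = volume.partition(":")
            expanded.append(f"{Path(host).expanduser()}:{container}")
        servers.append({**server, "volumes": expanded})
    return servers

sample = (
    '{"servers": [{"name": "yara-mcp", "enabled": true,'
    ' "volumes": ["~/.fuzzforge/hub/workspace:/data"]}]}'
)
servers = load_hub_config(sample)
print(servers[0]["volumes"][0].endswith(":/data"))  # → True
```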
View File

@@ -1,7 +1,7 @@
[project]
name = "fuzzforge-oss"
version = "1.0.0"
description = "FuzzForge OSS - AI-driven security research platform for local execution"
description = "FuzzForge AI - AI-driven security research platform for local execution"
readme = "README.md"
requires-python = ">=3.14"
authors = [
@@ -15,14 +15,12 @@ dev = [
"pytest-httpx==0.36.0",
"fuzzforge-tests",
"fuzzforge-common",
"fuzzforge-types",
"fuzzforge-mcp",
]
[tool.uv.workspace]
members = [
"fuzzforge-common",
"fuzzforge-types",
"fuzzforge-modules/fuzzforge-modules-sdk",
"fuzzforge-runner",
"fuzzforge-mcp",
@@ -32,7 +30,6 @@ members = [
[tool.uv.sources]
fuzzforge-common = { workspace = true }
fuzzforge-types = { workspace = true }
fuzzforge-modules-sdk = { workspace = true }
fuzzforge-runner = { workspace = true }
fuzzforge-mcp = { workspace = true }