Compare commits


8 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| AFredefon | cd5bfc27ee | fix: pipeline module fixes and improved AI agent guidance | 2026-02-16 10:08:46 +01:00 |
| AFredefon | 8adc7a2e00 | refactor: simplify module metadata schema for AI discoverability | 2026-02-10 21:35:22 +01:00 |
| tduhamel42 | 3b521dba42 | fix: update license badge and footer from Apache 2.0 to BSL 1.1 | 2026-02-10 18:31:37 +01:00 |
| AFredefon | 66a10d1bc4 | docs: add ROADMAP.md with planned features | 2026-02-09 10:36:33 +01:00 |
| AFredefon | 48ad2a59af | refactor(modules): rename metadata fields and use natural | 2026-02-09 10:17:16 +01:00 |
| AFredefon | 8b8662d7af | feat(modules): add harness-tester module for Rust fuzzing pipeline | 2026-02-03 18:12:28 +01:00 |
| AFredefon | f099bd018d | chore(modules): remove redundant harness-validator module | 2026-02-03 18:12:20 +01:00 |
| tduhamel42 | d786c6dab1 | fix: block Podman on macOS and remove ghcr.io default (#39) | 2026-02-03 10:15:16 +01:00 |

Full commit message for d786c6dab1:

* fix: block Podman on macOS and remove ghcr.io default

- Add platform check in PodmanCLI.__init__() that raises FuzzForgeError
  on macOS with instructions to use Docker instead
- Change RegistrySettings.url default from "ghcr.io/fuzzinglabs" to ""
  (empty string) for local-only mode since no images are published yet
- Update _ensure_module_image() to show helpful error when image not
  found locally and no registry configured
- Update tests to mock Linux platform for Podman tests
- Add root ruff.toml to fix broken configuration in fuzzforge-runner

* rewrite guides for module architecture and update repo links

Co-authored-by: AFredefon <antoinefredefon@yahoo.fr>
52 changed files with 2799 additions and 846 deletions


@@ -3,7 +3,7 @@
<p align="center">
<a href="https://discord.gg/8XEX33UUwZ"><img src="https://img.shields.io/discord/1420767905255133267?logo=discord&label=Discord" alt="Discord"></a>
-<a href="LICENSE"><img src="https://img.shields.io/badge/license-Apache%202.0-blue" alt="License: Apache 2.0"></a>
+<a href="LICENSE"><img src="https://img.shields.io/badge/license-BSL%201.1-blue" alt="License: BSL 1.1"></a>
<a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.12%2B-blue" alt="Python 3.12+"/></a>
<a href="https://modelcontextprotocol.io"><img src="https://img.shields.io/badge/MCP-compatible-green" alt="MCP Compatible"/></a>
<a href="https://fuzzforge.ai"><img src="https://img.shields.io/badge/Website-fuzzforge.ai-purple" alt="Website"/></a>
@@ -274,10 +274,11 @@ See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## 📄 License
-Apache 2.0 - See [LICENSE](LICENSE) for details.
+BSL 1.1 - See [LICENSE](LICENSE) for details.
---
<p align="center">
-<strong>Built with ❤️ by <a href="https://fuzzinglabs.com">FuzzingLabs</a></strong>
+<strong>Maintained by <a href="https://fuzzinglabs.com">FuzzingLabs</a></strong>
<br>
</p>

ROADMAP.md (new file, 125 lines)

@@ -0,0 +1,125 @@
# FuzzForge OSS Roadmap
This document outlines the planned features and development direction for FuzzForge OSS.
---
## 🎯 Upcoming Features
### 1. MCP Security Hub Integration
**Status:** 🔄 Planned
Integrate [mcp-security-hub](https://github.com/FuzzingLabs/mcp-security-hub) tools into FuzzForge, giving AI agents access to 28 MCP servers and 163+ security tools through a unified interface.
#### How It Works
Unlike native FuzzForge modules (built with the SDK), mcp-security-hub tools are **standalone MCP servers**. The integration will bridge these tools so they can be:
- Discovered via `list_modules` alongside native modules
- Executed through FuzzForge's orchestration layer
- Chained with native modules in workflows
| Aspect | Native Modules | MCP Hub Tools |
|--------|----------------|---------------|
| **Runtime** | FuzzForge SDK container | Standalone MCP server container |
| **Protocol** | Direct execution | MCP-to-MCP bridge |
| **Configuration** | Module config | Tool-specific args |
| **Output** | FuzzForge results format | Tool-native format (normalized) |
#### Goals
- Unified discovery of all available tools (native + hub)
- Orchestrate hub tools through FuzzForge's workflow engine
- Normalize outputs for consistent result handling
- No modification required to mcp-security-hub tools
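The output-normalization goal can be illustrated with a hypothetical shim. The tool names, raw field layouts, and the target schema below are illustrative assumptions, not the planned implementation:

```python
from typing import Any

def normalize_finding(tool: str, raw: dict[str, Any]) -> dict[str, Any]:
    """Map a tool-native finding into one shared shape (hypothetical schema)."""
    if tool == "nuclei":
        # Assume nuclei-style JSON with metadata under an "info" object
        return {
            "tool": tool,
            "title": raw.get("info", {}).get("name", "unknown"),
            "severity": raw.get("info", {}).get("severity", "info"),
            "target": raw.get("host", ""),
        }
    if tool == "gitleaks":
        # Assume gitleaks-style reports with a rule ID and offending file
        return {
            "tool": tool,
            "title": raw.get("RuleID", "unknown"),
            "severity": "high",
            "target": raw.get("File", ""),
        }
    return {"tool": tool, "title": "unknown", "severity": "info", "target": ""}

finding = normalize_finding(
    "nuclei",
    {"info": {"name": "exposed-panel", "severity": "medium"}, "host": "https://example.com"},
)
print(finding)
```

A thin per-tool adapter like this is what lets hub tools keep their native output while FuzzForge handles results uniformly downstream.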
#### Planned Tool Categories
| Category | Tools | Example Use Cases |
|----------|-------|-------------------|
| **Reconnaissance** | nmap, masscan, whatweb, shodan | Network scanning, service discovery |
| **Web Security** | nuclei, sqlmap, ffuf, nikto | Vulnerability scanning, fuzzing |
| **Binary Analysis** | radare2, binwalk, yara, capa, ghidra | Reverse engineering, malware analysis |
| **Cloud Security** | trivy, prowler | Container scanning, cloud auditing |
| **Secrets Detection** | gitleaks | Credential scanning |
| **OSINT** | maigret, dnstwist | Username tracking, typosquatting |
| **Threat Intel** | virustotal, otx | Malware analysis, IOC lookup |
#### Example Workflow
```
You: "Scan example.com for vulnerabilities and analyze any suspicious binaries"
AI Agent:
1. Uses nmap module for port discovery
2. Uses nuclei module for vulnerability scanning
3. Uses binwalk module to extract firmware
4. Uses yara module for malware detection
5. Generates consolidated report
```
---
### 2. User Interface
**Status:** 🔄 Planned
A graphical interface to manage FuzzForge without the command line.
#### Goals
- Provide an alternative to CLI for users who prefer visual tools
- Make configuration and monitoring more accessible
- Complement (not replace) the CLI experience
#### Planned Capabilities
| Capability | Description |
|------------|-------------|
| **Configuration** | Change MCP server settings, engine options, paths |
| **Module Management** | Browse, configure, and launch modules |
| **Execution Monitoring** | View running tasks, logs, progress, metrics |
| **Project Overview** | Manage projects and browse execution results |
| **Workflow Management** | Create and run multi-module workflows |
---
## 📋 Backlog
Features under consideration for future releases:
| Feature | Description |
|---------|-------------|
| **Module Marketplace** | Browse and install community modules |
| **Scheduled Executions** | Run modules on a schedule (cron-style) |
| **Team Collaboration** | Share projects, results, and workflows |
| **Reporting Engine** | Generate PDF/HTML security reports |
| **Notifications** | Slack, Discord, email alerts for findings |
---
## ✅ Completed
| Feature | Version | Date |
|---------|---------|------|
| Docker as default engine | 0.1.0 | Jan 2026 |
| MCP server for AI agents | 0.1.0 | Jan 2026 |
| CLI for project management | 0.1.0 | Jan 2026 |
| Continuous execution mode | 0.1.0 | Jan 2026 |
| Workflow orchestration | 0.1.0 | Jan 2026 |
---
## 💬 Feedback
Have suggestions for the roadmap?
- Open an issue on [GitHub](https://github.com/FuzzingLabs/fuzzforge_ai/issues)
- Join our [Discord](https://discord.gg/8XEX33UUwZ)
---
<p align="center">
<strong>Built with ❤️ by <a href="https://fuzzinglabs.com">FuzzingLabs</a></strong>
</p>

USAGE.md (139 lines changed)

@@ -33,18 +33,9 @@ This guide covers everything you need to know to get started with FuzzForge OSS
# 1. Clone and install
git clone https://github.com/FuzzingLabs/fuzzforge-oss.git
cd fuzzforge-oss
-uv sync --all-extras
+uv sync
-# 2. Build the SDK and module images (one-time setup)
-# First, build the SDK base image and wheel
-cd fuzzforge-modules/fuzzforge-modules-sdk
-uv build
-mkdir -p .wheels
-cp ../../dist/fuzzforge_modules_sdk-*.whl .wheels/
-cd ../..
-docker build -t localhost/fuzzforge-modules-sdk:0.1.0 fuzzforge-modules/fuzzforge-modules-sdk/
-# Then build all modules
+# 2. Build the module images (one-time setup)
make build-modules
# 3. Install MCP for your AI agent
@@ -111,15 +102,10 @@ cd fuzzforge-oss
### 2. Install Dependencies
```bash
# Install all workspace dependencies including the CLI
-uv sync --all-extras
+uv sync
```
-This installs all FuzzForge components in a virtual environment, including:
-- `fuzzforge-cli` - Command-line interface
-- `fuzzforge-mcp` - MCP server
-- `fuzzforge-runner` - Module execution engine
-- All supporting libraries
+This installs all FuzzForge components in a virtual environment.
### 3. Verify Installation
@@ -131,30 +117,10 @@ uv run fuzzforge --help
## Building Modules
-FuzzForge modules are containerized security tools. After cloning, you need to build them once.
-> **Important:** The modules depend on a base SDK image that must be built first.
-### Build the SDK Base Image (Required First)
-```bash
-# 1. Build the SDK Python package wheel
-cd fuzzforge-modules/fuzzforge-modules-sdk
-uv build
-# 2. Copy wheel to the .wheels directory
-mkdir -p .wheels
-cp ../../dist/fuzzforge_modules_sdk-*.whl .wheels/
-# 3. Build the SDK Docker image
-cd ../..
-docker build -t localhost/fuzzforge-modules-sdk:0.1.0 fuzzforge-modules/fuzzforge-modules-sdk/
-```
+FuzzForge modules are containerized security tools. After cloning, you need to build them once:
### Build All Modules
-Once the SDK is built, build all modules:
```bash
# From the fuzzforge-oss directory
make build-modules
@@ -166,14 +132,12 @@ This builds all available modules:
- `fuzzforge-harness-validator` - Validates generated fuzzing harnesses
- `fuzzforge-crash-analyzer` - Analyzes crash inputs
> **Note:** The first build will take several minutes as it downloads Rust toolchains and dependencies.
### Build a Single Module
```bash
-# Build a specific module (after SDK is built)
+# Build a specific module
cd fuzzforge-modules/rust-analyzer
-docker build -t fuzzforge-rust-analyzer:0.1.0 .
+make build
```
### Verify Modules are Built
@@ -183,27 +147,13 @@ docker build -t fuzzforge-rust-analyzer:0.1.0 .
docker images | grep fuzzforge
```
-You should see at least 5 images:
+You should see something like:
```
-localhost/fuzzforge-modules-sdk 0.1.0 abc123def456 5 minutes ago 465 MB
-fuzzforge-rust-analyzer 0.1.0 def789ghi012 2 minutes ago 2.0 GB
-fuzzforge-cargo-fuzzer 0.1.0 ghi012jkl345 2 minutes ago 1.9 GB
-fuzzforge-harness-validator 0.1.0 jkl345mno678 2 minutes ago 1.9 GB
-fuzzforge-crash-analyzer 0.1.0 mno678pqr901 2 minutes ago 517 MB
+fuzzforge-rust-analyzer 0.1.0 abc123def456 2 minutes ago 850 MB
+fuzzforge-cargo-fuzzer 0.1.0 789ghi012jkl 2 minutes ago 1.2 GB
+...
```
-### Verify CLI Installation
-```bash
-# Test the CLI
-uv run fuzzforge --help
-# List modules (with environment variable for modules path)
-FUZZFORGE_MODULES_PATH=/path/to/fuzzforge-modules uv run fuzzforge modules list
-```
-You should see 4 available modules listed.
---
## MCP Server Configuration
@@ -295,21 +245,6 @@ uv run fuzzforge mcp uninstall claude-desktop
uv run fuzzforge mcp uninstall claude-code
```
-### Test MCP Server
-After installation, verify the MCP server is working:
-```bash
-# Check if MCP server process is running (in VS Code)
-ps aux | grep fuzzforge_mcp
-```
-You can also test the MCP integration directly in your AI agent:
-- **GitHub Copilot**: Ask "List available FuzzForge modules"
-- **Claude**: Ask "What FuzzForge modules are available?"
-The AI should respond with a list of 4 modules (rust-analyzer, cargo-fuzzer, harness-validator, crash-analyzer).
---
## Using FuzzForge with AI
@@ -457,39 +392,6 @@ sudo usermod -aG docker $USER
docker run --rm hello-world
```
-### Module Build Fails: "fuzzforge-modules-sdk not found"
-```
-ERROR: failed to solve: localhost/fuzzforge-modules-sdk:0.1.0: not found
-```
-**Solution:** You need to build the SDK base image first:
-```bash
-# 1. Build SDK wheel
-cd fuzzforge-modules/fuzzforge-modules-sdk
-uv build
-mkdir -p .wheels
-cp ../../dist/fuzzforge_modules_sdk-*.whl .wheels/
-# 2. Build SDK Docker image
-cd ../..
-docker build -t localhost/fuzzforge-modules-sdk:0.1.0 fuzzforge-modules/fuzzforge-modules-sdk/
-# 3. Now build modules
-make build-modules
-```
-### fuzzforge Command Not Found
-```
-error: Failed to spawn: `fuzzforge`
-```
-**Solution:** Install with `--all-extras` to include the CLI:
-```bash
-uv sync --all-extras
-```
### No Modules Found
```
@@ -497,13 +399,9 @@ No modules found.
```
**Solution:**
-1. Build the SDK first (see above)
-2. Build the modules: `make build-modules`
-3. Check the modules path with environment variable:
-```bash
-FUZZFORGE_MODULES_PATH=/path/to/fuzzforge-modules uv run fuzzforge modules list
-```
-4. Verify images exist: `docker images | grep fuzzforge`
+1. Build the modules first: `make build-modules`
+2. Check the modules path: `uv run fuzzforge modules list`
+3. Verify images exist: `docker images | grep fuzzforge`
### MCP Server Not Starting
@@ -514,15 +412,6 @@ uv run fuzzforge mcp status
Verify the configuration file path exists and contains valid JSON.
-If the server process isn't running:
-```bash
-# Check if MCP server is running
-ps aux | grep fuzzforge_mcp
-# Test the MCP server manually
-uv run python -m fuzzforge_mcp
-```
### Module Container Fails to Build
```bash


@@ -25,6 +25,9 @@ class ImageInfo:
#: Image size in bytes.
size: int | None = None
#: Image labels/metadata.
labels: dict[str, str] | None = None
class AbstractFuzzForgeSandboxEngine(ABC):
"""Abstract class used as a base for all FuzzForge sandbox engine classes."""
@@ -279,3 +282,17 @@ class AbstractFuzzForgeSandboxEngine(ABC):
"""
message: str = f"method 'list_containers' is not implemented for class '{self.__class__.__name__}'"
raise NotImplementedError(message)
@abstractmethod
def read_file_from_image(self, image: str, path: str) -> str:
"""Read a file from inside an image without starting a container.
Creates a temporary container, copies the file, and removes the container.
:param image: Image reference (e.g., "fuzzforge-rust-analyzer:latest").
:param path: Path to file inside image.
:returns: File contents as string.
"""
message: str = f"method 'read_file_from_image' is not implemented for class '{self.__class__.__name__}'"
raise NotImplementedError(message)


@@ -99,6 +99,17 @@ class DockerCLI(AbstractFuzzForgeSandboxEngine):
if filter_prefix and filter_prefix not in reference:
continue
# Try to get labels from image inspect
labels = {}
try:
inspect_result = self._run(["image", "inspect", reference], check=False)
if inspect_result.returncode == 0:
inspect_data = json.loads(inspect_result.stdout)
if inspect_data and len(inspect_data) > 0:
labels = inspect_data[0].get("Config", {}).get("Labels") or {}
except (json.JSONDecodeError, IndexError):
pass
images.append(
ImageInfo(
reference=reference,
@@ -106,6 +117,7 @@ class DockerCLI(AbstractFuzzForgeSandboxEngine):
tag=tag,
image_id=image.get("ID", "")[:12],
size=image.get("Size"),
labels=labels,
)
)
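The label-extraction logic above can be exercised standalone. The JSON below is a trimmed, illustrative sample of the shape `docker image inspect` returns (the real output carries many more fields, and the label key here is invented):

```python
import json

# Trimmed sample of `docker image inspect <ref>` output; "Labels" may be JSON null.
inspect_stdout = """
[
  {
    "Id": "sha256:abc123",
    "Config": {
      "Labels": {"org.example.title": "fuzzforge-rust-analyzer"}
    }
  }
]
"""

labels: dict[str, str] = {}
try:
    inspect_data = json.loads(inspect_stdout)
    if inspect_data and len(inspect_data) > 0:
        # "Labels" can be null, so fall back to an empty dict with `or {}`
        labels = inspect_data[0].get("Config", {}).get("Labels") or {}
except (json.JSONDecodeError, IndexError):
    pass

print(labels)
```

The `or {}` fallback matters: a freshly built image with no labels reports `"Labels": null`, and `.get("Labels")` alone would hand `None` to callers expecting a dict.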
@@ -404,3 +416,27 @@ class DockerCLI(AbstractFuzzForgeSandboxEngine):
]
except json.JSONDecodeError:
return []
def read_file_from_image(self, image: str, path: str) -> str:
"""Read a file from inside an image without starting a long-running container.
Uses docker run with --entrypoint override to read the file via cat.
:param image: Image reference (e.g., "fuzzforge-rust-analyzer:latest").
:param path: Path to file inside image.
:returns: File contents as string.
"""
logger = get_logger()
# Use docker run with --entrypoint to override any container entrypoint
result = self._run(
["run", "--rm", "--entrypoint", "cat", image, path],
check=False,
)
if result.returncode != 0:
logger.debug("failed to read file from image", image=image, path=path, stderr=result.stderr)
return ""
return result.stdout
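The same one-shot pattern can be sketched outside the class. The image name and file path are illustrative, and actually running the command requires Docker and the image to be present locally:

```python
def build_read_file_command(engine: str, image: str, path: str) -> list[str]:
    # Override the image entrypoint with `cat` so the container prints
    # a single file to stdout and exits immediately; --rm cleans it up.
    return [engine, "run", "--rm", "--entrypoint", "cat", image, path]

cmd = build_read_file_command(
    "docker", "fuzzforge-rust-analyzer:latest", "/app/pyproject.toml"
)
print(" ".join(cmd))
# To execute for real:
#   import subprocess
#   result = subprocess.run(cmd, capture_output=True, text=True, check=False)
#   contents = result.stdout if result.returncode == 0 else ""
```

Overriding the entrypoint is the key detail: without `--entrypoint cat`, an image whose entrypoint launches a long-running module process would never exit.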


@@ -172,3 +172,8 @@ class Docker(AbstractFuzzForgeSandboxEngine):
"""List containers."""
message: str = "Docker engine list_containers is not yet implemented"
raise NotImplementedError(message)
def read_file_from_image(self, image: str, path: str) -> str:
"""Read a file from inside an image without starting a long-running container."""
message: str = "Docker engine read_file_from_image is not yet implemented"
raise NotImplementedError(message)


@@ -166,6 +166,9 @@ class PodmanCLI(AbstractFuzzForgeSandboxEngine):
repo = name
tag = "latest"
# Get labels if available
labels = image.get("Labels") or {}
images.append(
ImageInfo(
reference=name,
@@ -173,6 +176,7 @@ class PodmanCLI(AbstractFuzzForgeSandboxEngine):
tag=tag,
image_id=image.get("Id", "")[:12],
size=image.get("Size"),
labels=labels,
)
)
@@ -474,6 +478,30 @@ class PodmanCLI(AbstractFuzzForgeSandboxEngine):
except json.JSONDecodeError:
return []
def read_file_from_image(self, image: str, path: str) -> str:
"""Read a file from inside an image without starting a long-running container.
Uses podman run with --entrypoint override to read the file via cat.
:param image: Image reference (e.g., "fuzzforge-rust-analyzer:latest").
:param path: Path to file inside image.
:returns: File contents as string.
"""
logger = get_logger()
# Use podman run with --entrypoint to override any container entrypoint
result = self._run(
["run", "--rm", "--entrypoint", "cat", image, path],
check=False,
)
if result.returncode != 0:
logger.debug("failed to read file from image", image=image, path=path, stderr=result.stderr)
return ""
return result.stdout
# -------------------------------------------------------------------------
# Utility Methods
# -------------------------------------------------------------------------


@@ -494,3 +494,40 @@ class Podman(AbstractFuzzForgeSandboxEngine):
}
for c in containers
]
def read_file_from_image(self, image: str, path: str) -> str:
"""Read a file from inside an image without starting a long-running container.
Creates a temporary container, reads the file, and removes the container.
:param image: Image reference (e.g., "fuzzforge-rust-analyzer:latest").
:param path: Path to file inside image.
:returns: File contents as string.
"""
logger = get_logger()
client: PodmanClient = self.get_client()
with client:
try:
# Create a container that just runs cat on the file
container = client.containers.create(
image=image,
command=["cat", path],
remove=True,
)
# Start it and wait for completion
container.start()
container.wait()
# Get the logs (which contain stdout)
output = container.logs(stdout=True, stderr=False)
if isinstance(output, bytes):
return output.decode("utf-8", errors="replace")
return str(output)
except Exception as exc:
logger.debug("failed to read file from image", image=image, path=path, error=str(exc))
return ""


@@ -46,10 +46,10 @@ FuzzForge is a security research orchestration platform. Use these tools to:
Typical workflow:
1. Initialize a project with `init_project`
-2. Set project assets with `set_project_assets` (optional)
+2. Set project assets with `set_project_assets` (optional, only needed once for the source directory)
3. List available modules with `list_modules`
-4. Execute a module with `execute_module`
-5. Get results with `get_execution_results`
+4. Execute a module with `execute_module` — use `assets_path` param to pass different inputs per module
+5. Read outputs from `results_path` returned by `execute_module` — check module's `output_artifacts` metadata for filenames
""",
lifespan=lifespan,
)


@@ -14,6 +14,22 @@ if TYPE_CHECKING:
from fastmcp import Context
# Track the current active project path (set by init_project)
_current_project_path: Path | None = None
def set_current_project_path(project_path: Path) -> None:
"""Set the current project path.
Called by init_project to track which project is active.
:param project_path: Path to the project directory.
"""
global _current_project_path
_current_project_path = project_path
def get_settings() -> Settings:
"""Get MCP server settings from context.
@@ -31,11 +47,17 @@ def get_settings() -> Settings:
def get_project_path() -> Path:
"""Get the current project path.
+Returns the project path set by init_project, or falls back to
+the current working directory if no project has been initialized.
:return: Path to the current project.
"""
-settings: Settings = get_settings()
-return Path(settings.project.default_path)
+global _current_project_path
+if _current_project_path is not None:
+    return _current_project_path
+# Fall back to current working directory (where the AI agent is working)
+return Path.cwd()
def get_runner() -> Runner:


@@ -29,7 +29,8 @@ async def list_modules() -> dict[str, Any]:
"""List all available FuzzForge modules.
Returns information about modules that can be executed,
-including their identifiers and availability status.
+including their identifiers, availability status, and metadata
+such as use cases, input requirements, and output artifacts.
:return: Dictionary with list of available modules and their details.
@@ -47,10 +48,26 @@ async def list_modules() -> dict[str, Any]:
"identifier": module.identifier,
"image": f"{module.identifier}:{module.version or 'latest'}",
"available": module.available,
"description": module.description,
# New metadata fields from pyproject.toml
"category": module.category,
"language": module.language,
"pipeline_stage": module.pipeline_stage,
"pipeline_order": module.pipeline_order,
"dependencies": module.dependencies,
"continuous_mode": module.continuous_mode,
"typical_duration": module.typical_duration,
# AI-discoverable metadata
"use_cases": module.use_cases,
"input_requirements": module.input_requirements,
"output_artifacts": module.output_artifacts,
}
for module in modules
]
# Sort by pipeline_order if available
available_modules.sort(key=lambda m: (m.get("pipeline_order") or 999, m["identifier"]))
return {
"modules": available_modules,
"count": len(available_modules),
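The sort key above is worth spelling out: modules with no `pipeline_order` fall back to 999 so they sort last, and ties break alphabetically by identifier. A small sketch (the order values here are illustrative):

```python
modules = [
    {"identifier": "fuzzforge-crash-analyzer", "pipeline_order": None},
    {"identifier": "fuzzforge-cargo-fuzzer", "pipeline_order": 3},
    {"identifier": "fuzzforge-rust-analyzer", "pipeline_order": 1},
]
# None (or a missing key) becomes 999, pushing unordered modules to the end.
modules.sort(key=lambda m: (m.get("pipeline_order") or 999, m["identifier"]))
print([m["identifier"] for m in modules])
```

One caveat of the `or 999` idiom: an explicit order of 0 would also evaluate as falsy and be pushed to the end; `x if x is not None else 999` avoids that if 0 ever becomes a valid stage.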
@@ -75,9 +92,14 @@ async def execute_module(
This tool runs a module in a sandboxed environment.
The module receives input assets and produces output results.
The response includes `results_path` pointing to the stored results archive.
Use this path directly to read outputs — no need to call `get_execution_results`.
:param module_identifier: The identifier of the module to execute.
:param configuration: Optional configuration dict to pass to the module.
-:param assets_path: Optional path to input assets. If not provided, uses project assets.
+:param assets_path: Optional path to input assets. Use this to pass specific
+inputs to a module (e.g. crash files to crash-analyzer) without changing
+the project's default assets. If not provided, uses project assets.
:return: Execution result including status and results path.
"""
@@ -151,6 +173,8 @@ async def start_continuous_module(
module_identifier=module_identifier,
assets_path=actual_assets_path,
configuration=configuration,
project_path=project_path,
execution_id=session_id,
)
# Store execution info for tracking
@@ -162,6 +186,7 @@ async def start_continuous_module(
"status": "running",
"container_id": result["container_id"],
"input_dir": result["input_dir"],
"project_path": str(project_path),
}
return {


@@ -8,7 +8,7 @@ from typing import TYPE_CHECKING, Any
from fastmcp import FastMCP
from fastmcp.exceptions import ToolError
-from fuzzforge_mcp.dependencies import get_project_path, get_runner
+from fuzzforge_mcp.dependencies import get_project_path, get_runner, set_current_project_path
if TYPE_CHECKING:
from fuzzforge_runner import Runner
@@ -21,8 +21,12 @@ mcp: FastMCP = FastMCP()
async def init_project(project_path: str | None = None) -> dict[str, Any]:
"""Initialize a new FuzzForge project.
-Creates the necessary storage directories for a project. This should
-be called before executing modules or workflows.
+Creates a `.fuzzforge/` directory inside the project for storing:
+- assets/: Input files (source code, etc.)
+- inputs/: Prepared module inputs (for debugging)
+- runs/: Execution results from each module
+This should be called before executing modules or workflows.
:param project_path: Path to the project directory. If not provided, uses current directory.
:return: Project initialization result.
@@ -32,13 +36,17 @@ async def init_project(project_path: str | None = None) -> dict[str, Any]:
try:
path = Path(project_path) if project_path else get_project_path()
# Track this as the current active project
set_current_project_path(path)
storage_path = runner.init_project(path)
return {
"success": True,
"project_path": str(path),
"storage_path": str(storage_path),
-"message": f"Project initialized at {path}",
+"message": f"Project initialized. Storage at {path}/.fuzzforge/",
}
except Exception as exception:
@@ -48,12 +56,17 @@ async def init_project(project_path: str | None = None) -> dict[str, Any]:
@mcp.tool
async def set_project_assets(assets_path: str) -> dict[str, Any]:
-"""Set the initial assets for a project.
+"""Set the initial assets (source code) for a project.
-Assets are input files that will be provided to modules during execution.
-This could be source code, contracts, binaries, etc.
+This sets the DEFAULT source directory mounted into modules.
+Usually this is the project root containing source code (e.g. Cargo.toml, src/).
-:param assets_path: Path to assets file (archive) or directory.
+IMPORTANT: This OVERWRITES the previous assets path. Only call this once
+during project setup. To pass different inputs to a specific module
+(e.g. crash files to crash-analyzer), use the `assets_path` parameter
+on `execute_module` instead.
+:param assets_path: Path to the project source directory or archive.
:return: Result including stored assets path.
"""


@@ -1,5 +1,7 @@
FROM localhost/fuzzforge-modules-sdk:0.1.0
# Module metadata is now read from pyproject.toml [tool.fuzzforge.module] section
# Install system dependencies for Rust compilation
RUN apt-get update && apt-get install -y \
curl \


@@ -1,7 +1,7 @@
[project]
-name = "cargo-fuzzer"
+name = "fuzzforge-cargo-fuzzer"
version = "0.1.0"
-description = "FuzzForge module that runs cargo-fuzz with libFuzzer on Rust targets"
+description = "Runs continuous coverage-guided fuzzing on Rust targets using cargo-fuzz"
authors = []
readme = "README.md"
requires-python = ">=3.14"
@@ -29,3 +29,30 @@ fuzzforge-modules-sdk = { workspace = true }
[tool.uv]
package = true
# FuzzForge module metadata for AI agent discovery
[tool.fuzzforge.module]
identifier = "fuzzforge-cargo-fuzzer"
suggested_predecessors = ["fuzzforge-harness-tester"]
continuous_mode = true
use_cases = [
"Run continuous coverage-guided fuzzing on Rust targets with libFuzzer",
"Execute cargo-fuzz on validated harnesses",
"Produce crash artifacts for analysis",
"Long-running fuzzing campaign"
]
common_inputs = [
"validated-harnesses",
"Cargo.toml",
"rust-source-code"
]
output_artifacts = [
"fuzzing_results.json",
"crashes/",
"results.json"
]
output_treatment = "Read fuzzing_results.json which contains: targets_fuzzed, total_crashes, total_executions, crashes_path, and results array with per-target crash info. Display summary of crashes found. The crashes/ directory contains crash inputs for downstream crash-analyzer."


@@ -458,34 +458,56 @@ class Module(FuzzForgeModule):
"""
crashes: list[CrashInfo] = []
+seen_hashes: set[str] = set()
+if self._fuzz_project_path is None or self._crashes_path is None:
+    return crashes
-# Check for crashes in the artifacts directory
-artifacts_dir = self._fuzz_project_path / "artifacts" / target
+# Check multiple possible crash locations:
+# 1. Standard artifacts directory (target-specific)
+# 2. Generic artifacts directory
+# 3. Fuzz project root (fork mode sometimes writes here)
+# 4. Project root (parent of fuzz directory)
+search_paths = [
+    self._fuzz_project_path / "artifacts" / target,
+    self._fuzz_project_path / "artifacts",
+    self._fuzz_project_path,
+    self._fuzz_project_path.parent,
+]
-if artifacts_dir.is_dir():
-    for crash_file in artifacts_dir.glob("crash-*"):
-        if crash_file.is_file():
-            # Copy crash to output
-            output_crash = self._crashes_path / target
-            output_crash.mkdir(parents=True, exist_ok=True)
-            dest = output_crash / crash_file.name
-            shutil.copy2(crash_file, dest)
+for search_dir in search_paths:
+    if not search_dir.is_dir():
+        continue
+    # Use rglob to recursively find crash files
+    for crash_file in search_dir.rglob("crash-*"):
+        if not crash_file.is_file():
+            continue
+        # Skip duplicates by hash
+        if crash_file.name in seen_hashes:
+            continue
+        seen_hashes.add(crash_file.name)
+        # Read crash input
+        crash_data = crash_file.read_bytes()
+        # Copy crash to output
+        output_crash = self._crashes_path / target
+        output_crash.mkdir(parents=True, exist_ok=True)
+        dest = output_crash / crash_file.name
+        shutil.copy2(crash_file, dest)
+        crash_info = CrashInfo(
+            file_path=str(dest),
+            input_hash=crash_file.name,
+            input_size=len(crash_data),
+        )
+        crashes.append(crash_info)
-            # Read crash input
-            crash_data = crash_file.read_bytes()
-            logger.info("found crash", target=target, file=crash_file.name)
-            crash_info = CrashInfo(
-                file_path=str(dest),
-                input_hash=crash_file.name,
-                input_size=len(crash_data),
-            )
-            crashes.append(crash_info)
+        logger.info("found crash", target=target, file=crash_file.name, source=str(search_dir))
+logger.info("crash collection complete", target=target, total_crashes=len(crashes))
return crashes
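The multi-location search with name-based deduplication can be tested in isolation. This self-contained sketch mimics the logic with temporary directories (paths and crash names are illustrative):

```python
import shutil
import tempfile
from pathlib import Path

def collect_crashes(search_paths: list[Path], out_dir: Path) -> list[Path]:
    """Recursively gather crash-* files from several candidate directories,
    skipping duplicates by file name and copying each unique crash out."""
    seen: set[str] = set()
    collected: list[Path] = []
    for search_dir in search_paths:
        if not search_dir.is_dir():
            continue
        for crash_file in sorted(search_dir.rglob("crash-*")):
            if not crash_file.is_file() or crash_file.name in seen:
                continue
            seen.add(crash_file.name)
            out_dir.mkdir(parents=True, exist_ok=True)
            dest = out_dir / crash_file.name
            shutil.copy2(crash_file, dest)
            collected.append(dest)
    return collected

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "artifacts" / "target_a").mkdir(parents=True)
    (root / "artifacts" / "target_a" / "crash-deadbeef").write_bytes(b"A")
    # Fork mode sometimes drops crashes at the project root: same name, skipped
    (root / "crash-deadbeef").write_bytes(b"A")
    (root / "artifacts" / "crash-cafebabe").write_bytes(b"B")
    found = collect_crashes(
        [root / "artifacts" / "target_a", root / "artifacts", root],
        root / "out",
    )
    found_names = sorted(p.name for p in found)
    print(found_names)
```

Because later search paths contain the earlier ones (and even the output directory), the `seen` set is what keeps the broadened `rglob` search from double-counting the same crash.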
def _write_output(self) -> None:


@@ -1,5 +1,7 @@
FROM localhost/fuzzforge-modules-sdk:0.1.0
# Module metadata is now read from pyproject.toml [tool.fuzzforge.module] section
COPY ./src /app/src
COPY ./pyproject.toml /app/pyproject.toml


@@ -1,7 +1,7 @@
[project]
-name = "crash-analyzer"
+name = "fuzzforge-crash-analyzer"
version = "0.1.0"
-description = "FuzzForge module that analyzes fuzzing crashes and generates security reports"
+description = "Analyzes fuzzing crashes, deduplicates them, and generates security reports"
authors = []
readme = "README.md"
requires-python = ">=3.14"
@@ -30,3 +30,29 @@ fuzzforge-modules-sdk = { workspace = true }
[tool.uv]
package = true
# FuzzForge module metadata for AI agent discovery
[tool.fuzzforge.module]
identifier = "fuzzforge-crash-analyzer"
suggested_predecessors = ["fuzzforge-cargo-fuzzer"]
continuous_mode = false
use_cases = [
"Analyze Rust crash artifacts from fuzzing",
"Deduplicate crashes by stack trace signature",
"Triage crashes by severity (critical, high, medium, low)",
"Generate security vulnerability reports"
]
common_inputs = [
"crash-artifacts",
"stack-traces",
"rust-source-code"
]
output_artifacts = [
"crash_analysis.json",
"results.json"
]
output_treatment = "Read crash_analysis.json which contains: total_crashes, unique_crashes, duplicate_crashes, severity_summary (high/medium/low/unknown counts), and unique_analyses array with details per crash. Display a summary table of unique crashes by severity."


@@ -1,4 +1,7 @@
-FROM localhost/fuzzforge-modules-sdk:0.0.1
+FROM localhost/fuzzforge-modules-sdk:0.1.0
# Module metadata is now read from pyproject.toml [tool.fuzzforge.module] section
# See MODULE_METADATA.md for documentation on configuring metadata
COPY ./src /app/src
COPY ./pyproject.toml /app/pyproject.toml


@@ -1,7 +1,7 @@
[project]
name = "fuzzforge-module-template"
-version = "0.0.1"
-description = "FIXME"
+version = "0.1.0"
+description = "FIXME: Add module description"
authors = []
readme = "README.md"
requires-python = ">=3.14"
@@ -29,3 +29,31 @@ fuzzforge-modules-sdk = { workspace = true }
[tool.uv]
package = true
# FuzzForge module metadata for AI agent discovery
[tool.fuzzforge.module]
# REQUIRED: Unique module identifier (should match Docker image name)
identifier = "fuzzforge-module-template"
# Optional: List of module identifiers that should run before this one
suggested_predecessors = []
# Optional: Whether this module supports continuous/background execution
continuous_mode = false
# REQUIRED: Use cases help AI agents understand when to use this module
# Include language/target info here (e.g., "Analyze Rust crate...")
use_cases = [
"FIXME: Describe what this module does",
"FIXME: Describe typical usage scenario"
]
# REQUIRED: What inputs the module expects
common_inputs = [
"FIXME: List required input files or artifacts"
]
# REQUIRED: What outputs the module produces
output_artifacts = [
"FIXME: List output files produced"
]


@@ -8,6 +8,7 @@ requires-python = ">=3.14"
dependencies = [
"podman==5.6.0",
"pydantic==2.12.4",
"structlog==25.5.0",
"tomlkit==0.13.3",
]


@@ -1,4 +1,7 @@
-FROM localhost/fuzzforge-modules-sdk:0.0.1
+FROM localhost/fuzzforge-modules-sdk:0.1.0
# Module metadata is read from pyproject.toml [tool.fuzzforge.module] section
# See MODULE_METADATA.md for documentation on configuring metadata
COPY ./src /app/src
COPY ./pyproject.toml /app/pyproject.toml

View File

@@ -1,7 +1,7 @@
[project]
name = "fuzzforge-module-template"
version = "0.0.1"
description = "FIXME"
version = "0.1.0"
description = "FIXME: Add module description"
authors = []
readme = "README.md"
requires-python = ">=3.14"
@@ -29,3 +29,34 @@ fuzzforge-modules-sdk = { workspace = true }
[tool.uv]
package = true
# FuzzForge module metadata for AI agent discovery
[tool.fuzzforge.module]
# REQUIRED: Unique module identifier (should match Docker image name)
identifier = "fuzzforge-module-template"
# Optional: List of module identifiers that should run before this one
suggested_predecessors = []
# Optional: Whether this module supports continuous/background execution
continuous_mode = false
# REQUIRED: Use cases help AI agents understand when to use this module
# Include language/target info here (e.g., "Analyze Rust crate...")
use_cases = [
"FIXME: Describe what this module does",
"FIXME: Describe typical usage scenario"
]
# REQUIRED: What inputs the module expects
common_inputs = [
"FIXME: List required input files or artifacts"
]
# REQUIRED: What outputs the module produces
output_artifacts = [
"FIXME: List output files produced"
]
# REQUIRED: How AI should display output to user
output_treatment = "FIXME: Describe how to present the output"

View File

@@ -1,6 +1,8 @@
FROM localhost/fuzzforge-modules-sdk:0.1.0
# Install build tools and Rust nightly for compiling fuzz harnesses
# Module metadata is now read from pyproject.toml [tool.fuzzforge.module] section
# Install build tools and Rust nightly for compiling and testing fuzz harnesses
RUN apt-get update && apt-get install -y \
curl \
build-essential \
@@ -11,11 +13,12 @@ RUN apt-get update && apt-get install -y \
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain nightly
ENV PATH="/root/.cargo/bin:${PATH}"
# Install cargo-fuzz for validation
# Install cargo-fuzz for testing harnesses
RUN cargo install cargo-fuzz --locked || true
COPY ./src /app/src
COPY ./pyproject.toml /app/pyproject.toml
COPY ./README.md /app/README.md
# Remove workspace reference since we're using wheels
RUN sed -i '/\[tool\.uv\.sources\]/,/^$/d' /app/pyproject.toml

View File

@@ -0,0 +1,289 @@
# Harness Tester Feedback Types
Complete reference of all feedback the `harness-tester` module provides to help AI agents improve fuzz harnesses.
## Overview
The harness-tester evaluates harnesses across **6 dimensions** and provides specific, actionable suggestions for each issue detected.
---
## 1. Compilation Feedback
### ✅ Success Cases
- **Compiles successfully** → Strength noted
### ❌ Error Cases
| Issue Type | Severity | Detection | Suggestion |
|------------|----------|-----------|------------|
| `undefined_variable` | CRITICAL | "cannot find" in error | Check variable names match function signature. Use exact names from fuzzable_functions.json |
| `type_mismatch` | CRITICAL | "mismatched types" in error | Check the types the function expects against what you're passing. Convert fuzzer input to the correct type (e.g., &[u8] to &str with from_utf8) |
| `trait_not_implemented` | CRITICAL | "trait" + "not implemented" | Ensure you're using correct types. Some functions require specific trait implementations |
| `compilation_error` | CRITICAL | Any other error | Review error message and fix syntax/type issues. Check function signatures in source code |
### ⚠️ Warning Cases
| Issue Type | Severity | Detection | Suggestion |
|------------|----------|-----------|------------|
| `unused_variable` | INFO | "unused" in warning | Remove unused variables or use underscore prefix (_variable) to suppress warning |
---
## 2. Execution Feedback
### ✅ Success Cases
- **Executes without crashing** → Strength noted
### ❌ Error Cases
| Issue Type | Severity | Detection | Suggestion |
|------------|----------|-----------|------------|
| `stack_overflow` | CRITICAL | "stack overflow" in crash | Check for infinite recursion or large stack allocations. Use heap allocation (Box, Vec) for large data structures |
| `panic_on_start` | CRITICAL | "panic" in crash | Check initialization code. Ensure required resources are available and input validation doesn't panic on empty input |
| `immediate_crash` | CRITICAL | Crashes on first run | Debug harness initialization. Add error handling and check for null/invalid pointers |
| `infinite_loop` | CRITICAL | Execution timeout | Check for loops that depend on fuzzer input. Add iteration limits or timeout mechanisms |
---
## 3. Coverage Feedback
### ✅ Success Cases
- **>50% coverage** → "Excellent coverage"
- **Good growth** → "Harness exploring code paths"
### ❌ Error Cases
| Issue Type | Severity | Detection | Suggestion |
|------------|----------|-----------|------------|
| `no_coverage` | CRITICAL | 0 new edges found | Ensure you're actually calling the target function with fuzzer-provided data. Check that 'data' parameter is passed to function |
| `very_low_coverage` | WARNING | <5% coverage or "none" growth | Harness may not be reaching target code. Verify correct entry point function. Check if input validation rejects all fuzzer data |
| `low_coverage` | WARNING | <20% coverage or "poor" growth | Try fuzzing multiple entry points or remove restrictive input validation. Consider using dictionary for structured inputs |
| `early_stagnation` | INFO | Coverage stops growing <10s | Harness may be hitting input validation barriers. Consider fuzzing with seed corpus of valid inputs |
---
## 4. Performance Feedback
### ✅ Success Cases
- **>1000 execs/s** → "Excellent performance"
- **>500 execs/s** → "Good performance"
### ❌ Error Cases
| Issue Type | Severity | Detection | Suggestion |
|------------|----------|-----------|------------|
| `extremely_slow` | CRITICAL | <10 execs/s | Remove file I/O, network operations, or expensive computations from harness loop. Move setup code outside fuzz target function |
| `slow_execution` | WARNING | <100 execs/s | Optimize harness: avoid allocations in hot path, reuse buffers, remove logging. Profile to find bottlenecks |
---
## 5. Stability Feedback
### ✅ Success Cases
- **Stable execution** → Strength noted
- **Found unique crashes** → "Found N potential bugs!"
### ⚠️ Warning Cases
| Issue Type | Severity | Detection | Suggestion |
|------------|----------|-----------|------------|
| `unstable_frequent_crashes` | WARNING | >10 crashes per 1000 execs | This might be expected if testing buggy code. If not, add error handling for edge cases or invalid inputs |
| `hangs_detected` | WARNING | Hangs found during trial | Add timeouts to prevent infinite loops. Check for blocking operations or resource exhaustion |
---
## 6. Code Quality Feedback
### Informational
| Issue Type | Severity | Detection | Suggestion |
|------------|----------|-----------|------------|
| `unused_variable` | INFO | Compiler warnings | Clean up code for better maintainability |
---
## Quality Scoring Formula
```
Base Score: 20 points (for compiling + running)
+ Coverage (0-40 points):
- Excellent growth: +40
- Good growth: +30
- Poor growth: +10
- No growth: +0
+ Performance (0-25 points):
- >1000 execs/s: +25
- >500 execs/s: +20
- >100 execs/s: +10
- >10 execs/s: +5
- <10 execs/s: +0
+ Stability (0-15 points):
- Stable: +15
- Unstable: +10
- Crashes frequently: +5
Maximum: 100 points
```
### Verdicts
- **70-100**: `production-ready` → Use for long-term fuzzing campaigns
- **30-69**: `needs-improvement` → Fix issues before production use
- **0-29**: `broken` → Critical issues block execution
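
The arithmetic above translates directly into code. A minimal sketch — `quality_score` and `verdict` are illustrative names rather than the module's actual API, with rating strings matching the tables in this document:

```python
COVERAGE_POINTS = {"excellent": 40, "good": 30, "poor": 10, "none": 0}
STABILITY_POINTS = {"stable": 15, "unstable": 10, "crashes_frequently": 5}

def quality_score(growth: str, execs_per_sec: float, stability: str) -> int:
    """Score a harness that already compiles and runs (base 20 points)."""
    score = 20
    score += COVERAGE_POINTS.get(growth, 0)
    if execs_per_sec > 1000:
        score += 25
    elif execs_per_sec > 500:
        score += 20
    elif execs_per_sec > 100:
        score += 10
    elif execs_per_sec > 10:
        score += 5
    score += STABILITY_POINTS.get(stability, 0)
    return score

def verdict(score: int) -> str:
    """Map a 0-100 score onto the three verdict bands."""
    if score >= 70:
        return "production-ready"
    if score >= 30:
        return "needs-improvement"
    return "broken"
```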
---
## Example Feedback Flow
### Scenario 1: Broken Harness (Type Mismatch)
```json
{
"quality": {
"score": 0,
"verdict": "broken",
"issues": [
{
"category": "compilation",
"severity": "critical",
"type": "type_mismatch",
"message": "Type mismatch: expected &[u8], found &str",
"suggestion": "Check function expects types you're passing. Convert fuzzer input to correct type (e.g., &[u8] to &str with from_utf8)"
}
],
"recommended_actions": [
"Fix 1 critical issue(s) preventing execution"
]
}
}
```
**AI Agent Action**: Regenerate harness with correct type conversion
---
### Scenario 2: Low Coverage Harness
```json
{
"quality": {
"score": 35,
"verdict": "needs-improvement",
"issues": [
{
"category": "coverage",
"severity": "warning",
"type": "low_coverage",
"message": "Low coverage: 12% - not exploring enough code paths",
"suggestion": "Try fuzzing multiple entry points or remove restrictive input validation"
},
{
"category": "performance",
"severity": "warning",
"type": "slow_execution",
"message": "Slow execution: 45 execs/sec (expected 500+)",
"suggestion": "Optimize harness: avoid allocations in hot path, reuse buffers"
}
],
"strengths": [
"Compiles successfully",
"Executes without crashing"
],
"recommended_actions": [
"Address 2 warning(s) to improve harness quality"
]
}
}
```
**AI Agent Action**: Remove input validation, optimize performance
---
### Scenario 3: Production-Ready Harness
```json
{
"quality": {
"score": 85,
"verdict": "production-ready",
"issues": [],
"strengths": [
"Compiles successfully",
"Executes without crashing",
"Excellent coverage: 67% of target code reached",
"Excellent performance: 1507 execs/sec",
"Stable execution - no crashes or hangs"
],
"recommended_actions": [
"Harness is ready for production fuzzing"
]
}
}
```
**AI Agent Action**: Proceed to long-term fuzzing with cargo-fuzzer
---
## Integration with AI Workflow
```python
def iterative_harness_generation(target_function):
"""AI agent iteratively improves harness based on feedback."""
max_iterations = 3
for iteration in range(max_iterations):
# Generate or improve harness
if iteration == 0:
harness = ai_generate_harness(target_function)
else:
harness = ai_improve_harness(previous_harness, feedback)
# Test harness
result = execute_module("harness-tester", harness)
evaluation = result["harnesses"][0]
# Check verdict
if evaluation["quality"]["verdict"] == "production-ready":
return harness # Success!
# Extract feedback for next iteration
feedback = {
"issues": evaluation["quality"]["issues"],
"suggestions": [issue["suggestion"] for issue in evaluation["quality"]["issues"]],
"score": evaluation["quality"]["score"],
"coverage": evaluation["fuzzing_trial"]["coverage"] if "fuzzing_trial" in evaluation else None,
"performance": evaluation["fuzzing_trial"]["performance"] if "fuzzing_trial" in evaluation else None
}
# Store for next iteration
previous_harness = harness
return harness # Return best attempt after max iterations
```
---
## Summary
The harness-tester provides **comprehensive, actionable feedback** across 6 dimensions:
1. **Compilation** - Syntax and type correctness
2. **Execution** - Runtime stability
3. **Coverage** - Code exploration effectiveness
4. **Performance** - Execution speed
5. **Stability** - Crash/hang frequency
6. **Code Quality** - Best practices
Each issue includes:
- **Clear detection** of what went wrong
- **Specific suggestion** on how to fix it
- **Severity level** to prioritize fixes
This enables AI agents to rapidly iterate and produce high-quality fuzz harnesses with minimal human intervention.

View File

@@ -0,0 +1,28 @@
.PHONY: help build clean format lint test
help:
@echo "Available targets:"
@echo " build - Build Docker image"
@echo " clean - Remove build artifacts"
@echo " format - Format code with ruff"
@echo " lint - Lint code with ruff and mypy"
@echo " test - Run tests"
build:
docker build -t fuzzforge-harness-tester:0.1.0 .
clean:
rm -rf .pytest_cache
rm -rf .mypy_cache
rm -rf .ruff_cache
find . -type d -name __pycache__ -exec rm -rf {} +
format:
uv run ruff format ./src ./tests
lint:
uv run ruff check ./src ./tests
uv run mypy ./src
test:
uv run pytest tests/ -v

View File

@@ -0,0 +1,155 @@
# Harness Tester Module
Tests and evaluates fuzz harnesses with comprehensive feedback for AI-driven iteration.
## Overview
The `harness-tester` module runs a battery of tests on fuzz harnesses to provide actionable feedback:
1. **Compilation Testing** - Validates harness compiles correctly
2. **Execution Testing** - Ensures harness runs without immediate crashes
3. **Fuzzing Trial** - Runs short fuzzing session (default: 30s) to measure:
- Coverage growth
- Execution performance (execs/sec)
- Stability (crashes, hangs)
4. **Quality Assessment** - Generates scored evaluation with specific issues and suggestions
## Feedback Categories
### 1. Compilation Feedback
- Undefined variables → "Check variable names match function signature"
- Type mismatches → "Convert fuzzer input to correct type"
- Missing traits → "Ensure you're using correct types"
### 2. Execution Feedback
- Stack overflow → "Check for infinite recursion, use heap allocation"
- Immediate panic → "Check initialization code and input validation"
- Timeout/infinite loop → "Add iteration limits"
### 3. Coverage Feedback
- No coverage → "Harness may not be using fuzzer input"
- Very low coverage (<5%) → "May not be reaching target code, check entry point"
- Low coverage (<20%) → "Try fuzzing multiple entry points"
- Good/Excellent coverage → "Harness is exploring code paths well"
### 4. Performance Feedback
- Extremely slow (<10 execs/s) → "Remove file I/O or network operations"
- Slow (<100 execs/s) → "Optimize harness, avoid allocations in hot path"
- Good (>500 execs/s) → Ready for production
- Excellent (>1000 execs/s) → Optimal performance
### 5. Stability Feedback
- Frequent crashes → "Add error handling for edge cases"
- Hangs detected → "Add timeouts to prevent infinite loops"
- Stable → Ready for production
## Usage
```python
# Via MCP
result = execute_module("harness-tester",
assets_path="/path/to/rust/project",
configuration={
"trial_duration_sec": 30,
"execution_timeout_sec": 10
})
```
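
After the run completes, a caller might gate on the verdict before committing to a long campaign. A sketch — `ready_harnesses` is a hypothetical helper over the `harness-evaluation.json` layout documented in Output Artifacts:

```python
import json

def ready_harnesses(evaluation_path: str) -> list[str]:
    """Return names of harnesses judged production-ready in the evaluation."""
    with open(evaluation_path) as f:
        report = json.load(f)
    return [
        h["name"]
        for h in report["harnesses"]
        if h["quality"]["verdict"] == "production-ready"
    ]
```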
## Input Requirements
- Rust project with `Cargo.toml`
- Fuzz harnesses in `fuzz/fuzz_targets/`
- Source code to analyze
## Output Artifacts
### `harness-evaluation.json`
Complete structured evaluation with:
```json
{
"harnesses": [
{
"name": "fuzz_png_decode",
"compilation": { "success": true, "time_ms": 4523 },
"execution": { "success": true },
"fuzzing_trial": {
"coverage": {
"final_edges": 891,
"growth_rate": "good",
"percentage_estimate": 67.0
},
"performance": {
"execs_per_sec": 1507.0,
"performance_rating": "excellent"
},
"stability": { "status": "stable" }
},
"quality": {
"score": 85,
"verdict": "production-ready",
"issues": [],
"strengths": ["Excellent performance", "Good coverage"],
"recommended_actions": ["Ready for production fuzzing"]
}
}
],
"summary": {
"total_harnesses": 1,
"production_ready": 1,
"average_score": 85.0
}
}
```
### `feedback-summary.md`
Human-readable summary with all issues and suggestions.
## Quality Scoring
Harnesses are scored 0-100 based on:
- **Compilation + Execution** (base 20 points): Must compile and run without crashing to score at all
- **Coverage** (40 points):
- Excellent growth: 40 pts
- Good growth: 30 pts
- Poor growth: 10 pts
- **Performance** (25 points):
- >1000 execs/s: 25 pts
- >500 execs/s: 20 pts
- >100 execs/s: 10 pts
- **Stability** (15 points):
- Stable: 15 pts
- Unstable: 10 pts
- Crashes frequently: 5 pts
**Verdicts:**
- 70-100: `production-ready`
- 30-69: `needs-improvement`
- 0-29: `broken`
## AI Agent Iteration Pattern
```
1. AI generates harness
2. harness-tester evaluates it
3. Returns: score=35, verdict="needs-improvement"
Issues: "Low coverage (8%), slow execution (7.8 execs/s)"
Suggestions: "Check entry point function, remove I/O operations"
4. AI fixes harness based on feedback
5. harness-tester re-evaluates
6. Returns: score=85, verdict="production-ready"
7. Proceed to production fuzzing
```
## Configuration Options
| Option | Default | Description |
|--------|---------|-------------|
| `trial_duration_sec` | 30 | How long to run fuzzing trial |
| `execution_timeout_sec` | 10 | Timeout for execution test |
## See Also
- [Module SDK Documentation](../fuzzforge-modules-sdk/README.md)
- [MODULE_METADATA.md](../MODULE_METADATA.md)

View File

@@ -0,0 +1,60 @@
[project]
name = "fuzzforge-harness-tester"
version = "0.1.0"
description = "Tests and evaluates fuzz harnesses with detailed feedback for AI-driven iteration"
readme = "README.md"
requires-python = ">=3.14"
dependencies = [
"fuzzforge-modules-sdk==0.0.1",
"pydantic==2.12.4",
"structlog==25.5.0",
]
[project.scripts]
module = "module.__main__:main"
[tool.uv.sources]
fuzzforge-modules-sdk = { workspace = true }
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.hatch.build.targets.wheel]
packages = ["src/module"]
[dependency-groups]
dev = [
"mypy>=1.8.0",
"pytest>=7.4.3",
"pytest-asyncio>=0.21.1",
"pytest-cov>=4.1.0",
"ruff>=0.1.9",
]
# FuzzForge module metadata for AI agent discovery
[tool.fuzzforge.module]
identifier = "fuzzforge-harness-tester"
suggested_predecessors = ["fuzzforge-rust-analyzer"]
continuous_mode = false
use_cases = [
"Validate Rust fuzz harnesses compile correctly",
"Run short fuzzing trials to assess harness quality",
"Provide detailed feedback for AI to improve harnesses",
"Gate before running expensive long fuzzing campaigns"
]
common_inputs = [
"fuzz-harnesses",
"Cargo.toml",
"rust-source-code"
]
output_artifacts = [
"artifacts/harness-evaluation.json",
"artifacts/feedback-summary.md",
"results.json"
]
output_treatment = "Display artifacts/feedback-summary.md as rendered markdown for quick review. Read artifacts/harness-evaluation.json for detailed per-harness results with verdict (production-ready/needs-improvement/broken), score, strengths, and issues with suggestions."

View File

@@ -0,0 +1,730 @@
"""Harness tester module - tests and evaluates fuzz harnesses."""
from __future__ import annotations
import json
import subprocess
import time
from pathlib import Path
from typing import TYPE_CHECKING, Any
from fuzzforge_modules_sdk.api.models import (
FuzzForgeModuleResource,
FuzzForgeModuleResults,
)
from fuzzforge_modules_sdk.api.modules.base import FuzzForgeModule
from module.analyzer import FeedbackGenerator
from module.feedback import (
CompilationResult,
CoverageMetrics,
EvaluationSummary,
ExecutionResult,
FuzzingTrial,
HarnessEvaluation,
HarnessTestReport,
PerformanceMetrics,
StabilityMetrics,
)
from module.models import Input, Output
from module.settings import Settings
class HarnessTesterModule(FuzzForgeModule):
"""Tests fuzz harnesses with compilation, execution, and short fuzzing trials."""
_settings: Settings | None
def __init__(self) -> None:
"""Initialize an instance of the class."""
name: str = "harness-tester"
version: str = "0.1.0"
FuzzForgeModule.__init__(self, name=name, version=version)
self._settings = None
self.configuration: dict[str, Any] = {}
@classmethod
def _get_input_type(cls) -> type[Input]:
"""Return the input type."""
return Input
@classmethod
def _get_output_type(cls) -> type[Output]:
"""Return the output type."""
return Output
def _prepare(self, settings: Settings) -> None: # type: ignore[override]
"""Prepare the module.
:param settings: Module settings.
"""
self._settings = settings
self.configuration = {
"trial_duration_sec": settings.trial_duration_sec,
"execution_timeout_sec": settings.execution_timeout_sec,
"enable_coverage": settings.enable_coverage,
"min_quality_score": settings.min_quality_score,
}
def _cleanup(self, settings: Settings) -> None: # type: ignore[override]
"""Cleanup after module execution.
:param settings: Module settings.
"""
pass # No cleanup needed
def _run(self, resources: list[FuzzForgeModuleResource]) -> FuzzForgeModuleResults:
"""Run harness testing on provided resources.
:param resources: List of resources (Rust project with fuzz harnesses)
:returns: Module execution result
"""
import shutil
self.emit_event("started", message="Beginning harness testing")
# Configuration
trial_duration = self.configuration.get("trial_duration_sec", 30)
timeout_sec = self.configuration.get("execution_timeout_sec", 10)
# Debug: Log resources
self.get_logger().info(
"Received resources",
count=len(resources),
resources=[str(r.path) for r in resources],
)
# Find Rust project
project_path = self._find_rust_project(resources)
if not project_path:
self.emit_event("error", message="No Rust project found in resources")
return FuzzForgeModuleResults.FAILURE
# Copy project to writable workspace (input is read-only)
workspace = Path("/tmp/harness-workspace")
if workspace.exists():
shutil.rmtree(workspace)
shutil.copytree(project_path, workspace)
project_path = workspace
self.get_logger().info("Copied project to writable workspace", path=str(project_path))
# Find fuzz harnesses
harnesses = self._find_fuzz_harnesses(project_path)
# Debug: Log fuzz directory status
fuzz_dir = project_path / "fuzz" / "fuzz_targets"
self.get_logger().info(
"Checking fuzz directory",
fuzz_dir=str(fuzz_dir),
exists=fuzz_dir.exists(),
)
if not harnesses:
self.emit_event("error", message="No fuzz harnesses found")
return FuzzForgeModuleResults.FAILURE
self.emit_event(
"found_harnesses",
count=len(harnesses),
harnesses=[h.name for h in harnesses],
)
# Test each harness
evaluations = []
total_harnesses = len(harnesses)
for idx, harness in enumerate(harnesses, 1):
self.emit_progress(
int((idx / total_harnesses) * 90),
status="testing",
message=f"Testing harness {idx}/{total_harnesses}: {harness.name}",
)
evaluation = self._test_harness(
project_path, harness, trial_duration, timeout_sec
)
evaluations.append(evaluation)
# Emit evaluation summary
self.emit_event(
"harness_tested",
harness=harness.name,
verdict=evaluation.quality.verdict,
score=evaluation.quality.score,
issues=len(evaluation.quality.issues),
)
# Generate summary
summary = self._generate_summary(evaluations)
# Create report
report = HarnessTestReport(
harnesses=evaluations,
summary=summary,
test_configuration={
"trial_duration_sec": trial_duration,
"execution_timeout_sec": timeout_sec,
},
)
# Save report
self._save_report(report)
self.emit_progress(100, status="completed", message="Harness testing complete")
self.emit_event(
"completed",
total_harnesses=total_harnesses,
production_ready=summary.production_ready,
needs_improvement=summary.needs_improvement,
broken=summary.broken,
)
return FuzzForgeModuleResults.SUCCESS
def _find_rust_project(self, resources: list[FuzzForgeModuleResource]) -> Path | None:
"""Find Rust project with Cargo.toml (the main project, not fuzz workspace).
:param resources: List of resources
:returns: Path to Rust project or None
"""
# First, try to find a directory with both Cargo.toml and src/
for resource in resources:
path = Path(resource.path)
cargo_toml = path / "Cargo.toml"
src_dir = path / "src"
if cargo_toml.exists() and src_dir.exists():
return path
# Fall back to finding parent of fuzz directory
for resource in resources:
path = Path(resource.path)
if path.name == "fuzz" and (path / "Cargo.toml").exists():
# This is the fuzz workspace, return parent
parent = path.parent
if (parent / "Cargo.toml").exists():
return parent
# Last resort: find any Cargo.toml
for resource in resources:
path = Path(resource.path)
cargo_toml = path / "Cargo.toml"
if cargo_toml.exists():
return path
return None
def _find_fuzz_harnesses(self, project_path: Path) -> list[Path]:
"""Find fuzz harnesses in project.
:param project_path: Path to Rust project
:returns: List of harness file paths
"""
fuzz_dir = project_path / "fuzz" / "fuzz_targets"
if not fuzz_dir.exists():
return []
harnesses = list(fuzz_dir.glob("*.rs"))
return harnesses
def _test_harness(
self,
project_path: Path,
harness_path: Path,
trial_duration: int,
timeout_sec: int,
) -> HarnessEvaluation:
"""Test a single harness comprehensively.
:param project_path: Path to Rust project
:param harness_path: Path to harness file
:param trial_duration: Duration for fuzzing trial in seconds
:param timeout_sec: Timeout for execution test
:returns: Harness evaluation
"""
harness_name = harness_path.stem
# Step 1: Compilation
self.emit_event("compiling", harness=harness_name)
compilation = self._test_compilation(project_path, harness_name)
# If compilation failed, generate feedback and return early
if not compilation.success:
quality = FeedbackGenerator.generate_quality_assessment(
compilation_result=compilation.model_dump(),
execution_result=None,
coverage=None,
performance=None,
stability=None,
)
return HarnessEvaluation(
name=harness_name,
path=str(harness_path),
compilation=compilation,
execution=None,
fuzzing_trial=None,
quality=quality,
)
# Step 2: Execution test
self.emit_event("testing_execution", harness=harness_name)
execution = self._test_execution(project_path, harness_name, timeout_sec)
if not execution.success:
quality = FeedbackGenerator.generate_quality_assessment(
compilation_result=compilation.model_dump(),
execution_result=execution.model_dump(),
coverage=None,
performance=None,
stability=None,
)
return HarnessEvaluation(
name=harness_name,
path=str(harness_path),
compilation=compilation,
execution=execution,
fuzzing_trial=None,
quality=quality,
)
# Step 3: Fuzzing trial
self.emit_event("running_trial", harness=harness_name, duration=trial_duration)
fuzzing_trial = self._run_fuzzing_trial(
project_path, harness_name, trial_duration
)
# Generate quality assessment
quality = FeedbackGenerator.generate_quality_assessment(
compilation_result=compilation.model_dump(),
execution_result=execution.model_dump(),
coverage=fuzzing_trial.coverage if fuzzing_trial else None,
performance=fuzzing_trial.performance if fuzzing_trial else None,
stability=fuzzing_trial.stability if fuzzing_trial else None,
)
return HarnessEvaluation(
name=harness_name,
path=str(harness_path),
compilation=compilation,
execution=execution,
fuzzing_trial=fuzzing_trial,
quality=quality,
)
def _test_compilation(self, project_path: Path, harness_name: str) -> CompilationResult:
"""Test harness compilation.
:param project_path: Path to Rust project
:param harness_name: Name of harness to compile
:returns: Compilation result
"""
start_time = time.time()
try:
result = subprocess.run(
["cargo", "fuzz", "build", harness_name],
cwd=project_path,
capture_output=True,
text=True,
timeout=300, # 5 min timeout for compilation
)
compilation_time = int((time.time() - start_time) * 1000)
if result.returncode == 0:
# Parse warnings
warnings = self._parse_compiler_warnings(result.stderr)
return CompilationResult(
success=True, time_ms=compilation_time, warnings=warnings
)
else:
# Parse errors
errors = self._parse_compiler_errors(result.stderr)
return CompilationResult(
success=False,
time_ms=compilation_time,
errors=errors,
stderr=result.stderr,
)
except subprocess.TimeoutExpired:
return CompilationResult(
success=False,
errors=["Compilation timed out after 5 minutes"],
stderr="Timeout",
)
except Exception as e:
return CompilationResult(
success=False, errors=[f"Compilation failed: {e!s}"], stderr=str(e)
)
def _test_execution(
self, project_path: Path, harness_name: str, timeout_sec: int
) -> ExecutionResult:
"""Test harness execution with minimal input.
:param project_path: Path to Rust project
:param harness_name: Name of harness
:param timeout_sec: Timeout for execution
:returns: Execution result
"""
try:
# Run with very short timeout and max runs
result = subprocess.run(
[
"cargo",
"fuzz",
"run",
harness_name,
"--",
"-runs=10",
f"-max_total_time={timeout_sec}",
],
cwd=project_path,
capture_output=True,
text=True,
timeout=timeout_sec + 5,
)
# Check if it crashed immediately
if "SUMMARY: libFuzzer: deadly signal" in result.stderr:
return ExecutionResult(
success=False,
immediate_crash=True,
crash_details=self._extract_crash_info(result.stderr),
)
# Success if completed runs
return ExecutionResult(success=True, runs_completed=10)
except subprocess.TimeoutExpired:
return ExecutionResult(success=False, timeout=True)
except Exception as e:
return ExecutionResult(
success=False, immediate_crash=True, crash_details=str(e)
)
def _run_fuzzing_trial(
self, project_path: Path, harness_name: str, duration_sec: int
) -> FuzzingTrial | None:
"""Run short fuzzing trial to gather metrics.
:param project_path: Path to Rust project
:param harness_name: Name of harness
:param duration_sec: Duration to run fuzzing
:returns: Fuzzing trial results or None if failed
"""
try:
result = subprocess.run(
[
"cargo",
"fuzz",
"run",
harness_name,
"--",
f"-max_total_time={duration_sec}",
"-print_final_stats=1",
],
cwd=project_path,
capture_output=True,
text=True,
timeout=duration_sec + 30,
)
# Parse fuzzing statistics
stats = self._parse_fuzzing_stats(result.stderr)
# Create metrics
coverage = CoverageMetrics(
initial_edges=stats.get("initial_edges", 0),
final_edges=stats.get("cov_edges", 0),
new_edges_found=stats.get("cov_edges", 0) - stats.get("initial_edges", 0),
growth_rate=self._assess_coverage_growth(stats),
percentage_estimate=self._estimate_coverage_percentage(stats),
stagnation_time_sec=stats.get("stagnation_time"),
)
performance = PerformanceMetrics(
total_execs=stats.get("total_execs", 0),
execs_per_sec=stats.get("exec_per_sec", 0.0),
performance_rating=self._assess_performance(stats.get("exec_per_sec", 0.0)),
)
stability = StabilityMetrics(
status=self._assess_stability(stats),
crashes_found=stats.get("crashes", 0),
unique_crashes=stats.get("unique_crashes", 0),
crash_rate=self._calculate_crash_rate(stats),
)
return FuzzingTrial(
duration_seconds=duration_sec,
coverage=coverage,
performance=performance,
stability=stability,
trial_successful=True,
)
except Exception:
return None
def _parse_compiler_errors(self, stderr: str) -> list[str]:
"""Parse compiler error messages.
:param stderr: Compiler stderr output
:returns: List of error messages
"""
errors = []
for line in stderr.split("\n"):
if "error:" in line or "error[" in line:
errors.append(line.strip())
return errors[:10] # Limit to first 10 errors
def _parse_compiler_warnings(self, stderr: str) -> list[str]:
"""Parse compiler warnings.
:param stderr: Compiler stderr output
:returns: List of warning messages
"""
warnings = []
for line in stderr.split("\n"):
if "warning:" in line:
warnings.append(line.strip())
return warnings[:5] # Limit to first 5 warnings
def _extract_crash_info(self, stderr: str) -> str:
"""Extract crash information from stderr.
:param stderr: Fuzzer stderr output
:returns: Crash details
"""
lines = stderr.split("\n")
for i, line in enumerate(lines):
if "SUMMARY:" in line or "deadly signal" in line:
return "\n".join(lines[max(0, i - 3) : i + 5])
return stderr[:500] # First 500 chars if no specific crash info
def _parse_fuzzing_stats(self, stderr: str) -> dict:
"""Parse fuzzing statistics from libFuzzer output.
:param stderr: Fuzzer stderr output
:returns: Dictionary of statistics
"""
stats = {
"total_execs": 0,
"exec_per_sec": 0.0,
"cov_edges": 0,
"initial_edges": 0,
"crashes": 0,
"unique_crashes": 0,
}
lines = stderr.split("\n")
# Find initial coverage
for line in lines[:20]:
if "cov:" in line:
try:
cov_part = line.split("cov:")[1].split()[0]
stats["initial_edges"] = int(cov_part)
break
except (IndexError, ValueError):
pass
# Parse final stats from the most recent matching line only; iterating
# in reverse means the first match is the last line libFuzzer printed,
# and without a flag every earlier line would overwrite it
final_stats_found = False
for line in reversed(lines):
    if not final_stats_found and "#" in line and "cov:" in line and "exec/s:" in line:
        try:
            # Parse line like: "#12345 cov: 891 ft: 1234 corp: 56/789b exec/s: 1507"
            parts = line.split()
            for i, part in enumerate(parts):
                if part.startswith("#"):
                    stats["total_execs"] = int(part[1:])
                elif part == "cov:":
                    stats["cov_edges"] = int(parts[i + 1])
                elif part == "exec/s:":
                    stats["exec_per_sec"] = float(parts[i + 1])
            final_stats_found = True
        except (IndexError, ValueError):
            pass
    # Count crashes
    if "crash-" in line or "leak-" in line or "timeout-" in line:
        stats["crashes"] += 1
# Estimate unique crashes (simplified)
stats["unique_crashes"] = min(stats["crashes"], 10)
return stats
def _assess_coverage_growth(self, stats: dict) -> str:
"""Assess coverage growth quality.
:param stats: Fuzzing statistics
:returns: Growth rate assessment
"""
new_edges = stats.get("cov_edges", 0) - stats.get("initial_edges", 0)
if new_edges == 0:
return "none"
elif new_edges < 50:
return "poor"
elif new_edges < 200:
return "good"
else:
return "excellent"
def _estimate_coverage_percentage(self, stats: dict) -> float | None:
"""Estimate coverage percentage (rough heuristic).
:param stats: Fuzzing statistics
:returns: Estimated percentage or None
"""
edges = stats.get("cov_edges", 0)
if edges == 0:
return 0.0
# Rough heuristic: assume medium-sized function has ~2000 edges
# This is very approximate
estimated = min((edges / 2000) * 100, 100)
return round(estimated, 1)
def _assess_performance(self, execs_per_sec: float) -> str:
"""Assess performance rating.
:param execs_per_sec: Executions per second
:returns: Performance rating
"""
if execs_per_sec > 1000:
return "excellent"
elif execs_per_sec > 100:
return "good"
else:
return "poor"
def _assess_stability(self, stats: dict) -> str:
"""Assess stability status.
:param stats: Fuzzing statistics
:returns: Stability status
"""
crashes = stats.get("crashes", 0)
total_execs = stats.get("total_execs", 0)
if total_execs == 0:
return "unknown"
crash_rate = (crashes / total_execs) * 1000
if crash_rate > 10:
return "crashes_frequently"
elif crash_rate > 1:
return "unstable"
else:
return "stable"
def _calculate_crash_rate(self, stats: dict) -> float:
"""Calculate crash rate per 1000 executions.
:param stats: Fuzzing statistics
:returns: Crash rate
"""
crashes = stats.get("crashes", 0)
total = stats.get("total_execs", 0)
if total == 0:
return 0.0
return (crashes / total) * 1000
def _generate_summary(self, evaluations: list[HarnessEvaluation]) -> EvaluationSummary:
"""Generate evaluation summary.
:param evaluations: List of harness evaluations
:returns: Summary statistics
"""
production_ready = sum(
1 for e in evaluations if e.quality.verdict == "production-ready"
)
needs_improvement = sum(
1 for e in evaluations if e.quality.verdict == "needs-improvement"
)
broken = sum(1 for e in evaluations if e.quality.verdict == "broken")
avg_score = (
sum(e.quality.score for e in evaluations) / len(evaluations)
if evaluations
else 0
)
# Generate recommendation
if broken > 0:
recommended_action = f"Fix {broken} broken harness(es) before proceeding."
elif needs_improvement > 0:
recommended_action = f"Improve {needs_improvement} harness(es) for better results."
else:
recommended_action = "All harnesses are production-ready!"
return EvaluationSummary(
total_harnesses=len(evaluations),
production_ready=production_ready,
needs_improvement=needs_improvement,
broken=broken,
average_score=round(avg_score, 1),
recommended_action=recommended_action,
)
def _save_report(self, report: HarnessTestReport) -> None:
"""Save test report to results directory.
:param report: Harness test report
"""
from fuzzforge_modules_sdk.api.constants import PATH_TO_ARTIFACTS
# Ensure artifacts directory exists
PATH_TO_ARTIFACTS.mkdir(parents=True, exist_ok=True)
# Save JSON report
results_path = PATH_TO_ARTIFACTS / "harness-evaluation.json"
with results_path.open("w") as f:
json.dump(report.model_dump(), f, indent=2)
# Save human-readable summary
summary_path = PATH_TO_ARTIFACTS / "feedback-summary.md"
with summary_path.open("w") as f:
f.write("# Harness Testing Report\n\n")
f.write(f"**Total Harnesses:** {report.summary.total_harnesses}\n")
f.write(f"**Production Ready:** {report.summary.production_ready}\n")
f.write(f"**Needs Improvement:** {report.summary.needs_improvement}\n")
f.write(f"**Broken:** {report.summary.broken}\n")
f.write(f"**Average Score:** {report.summary.average_score}/100\n\n")
f.write(f"**Recommendation:** {report.summary.recommended_action}\n\n")
f.write("## Individual Harness Results\n\n")
for harness in report.harnesses:
f.write(f"### {harness.name}\n\n")
f.write(f"- **Verdict:** {harness.quality.verdict}\n")
f.write(f"- **Score:** {harness.quality.score}/100\n\n")
if harness.quality.strengths:
f.write("**Strengths:**\n")
for strength in harness.quality.strengths:
f.write(f"- {strength}\n")
f.write("\n")
if harness.quality.issues:
f.write("**Issues:**\n")
for issue in harness.quality.issues:
f.write(f"- [{issue.severity.upper()}] {issue.message}\n")
f.write(f" - **Suggestion:** {issue.suggestion}\n")
f.write("\n")
if harness.quality.recommended_actions:
f.write("**Actions:**\n")
for action in harness.quality.recommended_actions:
f.write(f"- {action}\n")
f.write("\n")
# Export the module class for use by __main__.py
__all__ = ["HarnessTesterModule"]
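For reference, the stat-line heuristic in `_parse_fuzzing_stats` can be exercised in isolation. This is a minimal standalone sketch; the sample line is hypothetical libFuzzer output in the format the module expects, not taken from a real run:

```python
def parse_stat_line(line: str) -> dict:
    """Extract execution count, edge coverage, and speed from one libFuzzer status line."""
    stats = {"total_execs": 0, "cov_edges": 0, "exec_per_sec": 0.0}
    parts = line.split()
    for i, part in enumerate(parts):
        if part.startswith("#"):
            stats["total_execs"] = int(part[1:])   # "#12345" -> 12345
        elif part == "cov:":
            stats["cov_edges"] = int(parts[i + 1])
        elif part == "exec/s:":
            stats["exec_per_sec"] = float(parts[i + 1])
    return stats

# Hypothetical status line in the layout module.py parses
sample = "#12345 cov: 891 ft: 1234 corp: 56/789b exec/s: 1507"
print(parse_stat_line(sample))
```

The token-pair scan (`cov:` followed by its value) is why the module wraps the loop in `(IndexError, ValueError)` handling: a truncated line simply leaves the defaults in place.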

@@ -0,0 +1,16 @@
"""Harness tester module entrypoint."""
from fuzzforge_modules_sdk.api import logs
from module import HarnessTesterModule
def main() -> None:
"""Run the harness tester module."""
logs.configure()
module = HarnessTesterModule()
module.main()
if __name__ == "__main__":
main()

@@ -0,0 +1,486 @@
"""Feedback generator with actionable suggestions for AI agents."""
from module.feedback import (
CoverageMetrics,
FeedbackCategory,
FeedbackIssue,
FeedbackSeverity,
PerformanceMetrics,
QualityAssessment,
StabilityMetrics,
)
class FeedbackGenerator:
"""Generates actionable feedback based on harness test results."""
@staticmethod
def analyze_compilation(
compilation_result: dict,
) -> tuple[list[FeedbackIssue], list[str]]:
"""Analyze compilation results and generate feedback.
:param compilation_result: Compilation output and errors
:returns: Tuple of (issues, strengths)
"""
issues = []
strengths = []
if not compilation_result.get("success"):
errors = compilation_result.get("errors", [])
for error in errors:
# Analyze specific error types
if "cannot find" in error.lower():
issues.append(
FeedbackIssue(
category=FeedbackCategory.COMPILATION,
severity=FeedbackSeverity.CRITICAL,
type="undefined_variable",
message=f"Compilation error: {error}",
suggestion="Check variable names match the function signature. Use the exact names from fuzzable_functions.json.",
details={"error": error},
)
)
elif "mismatched types" in error.lower():
issues.append(
FeedbackIssue(
category=FeedbackCategory.COMPILATION,
severity=FeedbackSeverity.CRITICAL,
type="type_mismatch",
message=f"Type mismatch: {error}",
suggestion="Check the function expects the types you're passing. Convert fuzzer input to the correct type (e.g., &[u8] to &str with from_utf8).",
details={"error": error},
)
)
elif "trait" in error.lower() and "not implemented" in error.lower():
issues.append(
FeedbackIssue(
category=FeedbackCategory.COMPILATION,
severity=FeedbackSeverity.CRITICAL,
type="trait_not_implemented",
message=f"Trait not implemented: {error}",
suggestion="Ensure you're using the correct types. Some functions require specific trait implementations.",
details={"error": error},
)
)
else:
issues.append(
FeedbackIssue(
category=FeedbackCategory.COMPILATION,
severity=FeedbackSeverity.CRITICAL,
type="compilation_error",
message=f"Compilation failed: {error}",
suggestion="Review the error message and fix syntax/type issues. Check function signatures in the source code.",
details={"error": error},
)
)
else:
strengths.append("Compiles successfully")
# Check for warnings
warnings = compilation_result.get("warnings", [])
if warnings:
for warning in warnings[:3]: # Report only the first 3 warnings
if "unused" in warning.lower():
issues.append(
FeedbackIssue(
category=FeedbackCategory.CODE_QUALITY,
severity=FeedbackSeverity.INFO,
type="unused_variable",
message=f"Code quality: {warning}",
suggestion="Remove unused variables or use underscore prefix (_variable) to suppress warning.",
details={"warning": warning},
)
)
return issues, strengths
@staticmethod
def analyze_execution(
execution_result: dict,
) -> tuple[list[FeedbackIssue], list[str]]:
"""Analyze execution results.
:param execution_result: Execution test results
:returns: Tuple of (issues, strengths)
"""
issues = []
strengths = []
if not execution_result.get("success"):
if execution_result.get("immediate_crash"):
crash_details = execution_result.get("crash_details", "")
if "stack overflow" in crash_details.lower():
issues.append(
FeedbackIssue(
category=FeedbackCategory.EXECUTION,
severity=FeedbackSeverity.CRITICAL,
type="stack_overflow",
message="Harness crashes immediately with stack overflow",
suggestion="Check for infinite recursion or large stack allocations. Use heap allocation (Box, Vec) for large data structures.",
details={"crash": crash_details},
)
)
elif "panic" in crash_details.lower():
issues.append(
FeedbackIssue(
category=FeedbackCategory.EXECUTION,
severity=FeedbackSeverity.CRITICAL,
type="panic_on_start",
message="Harness panics immediately",
suggestion="Check initialization code. Ensure required resources are available and input validation doesn't panic on empty input.",
details={"crash": crash_details},
)
)
else:
issues.append(
FeedbackIssue(
category=FeedbackCategory.EXECUTION,
severity=FeedbackSeverity.CRITICAL,
type="immediate_crash",
message=f"Harness crashes immediately: {crash_details}",
suggestion="Debug the harness initialization. Add error handling and check for null/invalid pointers.",
details={"crash": crash_details},
)
)
elif execution_result.get("timeout"):
issues.append(
FeedbackIssue(
category=FeedbackCategory.EXECUTION,
severity=FeedbackSeverity.CRITICAL,
type="infinite_loop",
message="Harness times out - likely infinite loop",
suggestion="Check for loops that depend on fuzzer input. Add iteration limits or timeout mechanisms.",
details={},
)
)
else:
strengths.append("Executes without crashing")
return issues, strengths
@staticmethod
def analyze_coverage(
coverage: CoverageMetrics,
) -> tuple[list[FeedbackIssue], list[str]]:
"""Analyze coverage metrics.
:param coverage: Coverage metrics from fuzzing trial
:returns: Tuple of (issues, strengths)
"""
issues = []
strengths = []
# No coverage growth
if coverage.new_edges_found == 0:
issues.append(
FeedbackIssue(
category=FeedbackCategory.COVERAGE,
severity=FeedbackSeverity.CRITICAL,
type="no_coverage",
message="No coverage detected - harness may not be using fuzzer input",
suggestion="Ensure you're actually calling the target function with fuzzer-provided data. Check that 'data' parameter is passed to the function being fuzzed.",
details={"initial_edges": coverage.initial_edges},
)
)
# Very low coverage
elif coverage.growth_rate == "none" or (
coverage.percentage_estimate and coverage.percentage_estimate < 5
):
issues.append(
FeedbackIssue(
category=FeedbackCategory.COVERAGE,
severity=FeedbackSeverity.WARNING,
type="very_low_coverage",
message=f"Very low coverage: ~{coverage.percentage_estimate}%",
suggestion="Harness may not be reaching the target code. Verify you're calling the correct entry point function. Check if there's input validation that rejects all fuzzer data.",
details={
"percentage": coverage.percentage_estimate,
"edges": coverage.final_edges,
},
)
)
# Low coverage
elif coverage.growth_rate == "poor" or (
coverage.percentage_estimate and coverage.percentage_estimate < 20
):
issues.append(
FeedbackIssue(
category=FeedbackCategory.COVERAGE,
severity=FeedbackSeverity.WARNING,
type="low_coverage",
message=f"Low coverage: {coverage.percentage_estimate}% - not exploring enough code paths",
suggestion="Try fuzzing multiple entry points or remove restrictive input validation. Consider using a dictionary for structured inputs.",
details={
"percentage": coverage.percentage_estimate,
"new_edges": coverage.new_edges_found,
},
)
)
# Good coverage
elif coverage.growth_rate in ["good", "excellent"]:
if coverage.percentage_estimate and coverage.percentage_estimate > 50:
strengths.append(
f"Excellent coverage: {coverage.percentage_estimate}% of target code reached"
)
else:
strengths.append("Good coverage growth - harness is exploring code paths")
# Coverage stagnation
if (
coverage.stagnation_time_sec
and coverage.stagnation_time_sec < 10
and coverage.final_edges < 500
):
issues.append(
FeedbackIssue(
category=FeedbackCategory.COVERAGE,
severity=FeedbackSeverity.INFO,
type="early_stagnation",
message=f"Coverage stopped growing after {coverage.stagnation_time_sec}s",
suggestion="Harness may be hitting input validation barriers. Consider fuzzing with a seed corpus of valid inputs.",
details={"stagnation_time": coverage.stagnation_time_sec},
)
)
return issues, strengths
@staticmethod
def analyze_performance(
performance: PerformanceMetrics,
) -> tuple[list[FeedbackIssue], list[str]]:
"""Analyze performance metrics.
:param performance: Performance metrics from fuzzing trial
:returns: Tuple of (issues, strengths)
"""
issues = []
strengths = []
execs_per_sec = performance.execs_per_sec
# Very slow execution
if execs_per_sec < 10:
issues.append(
FeedbackIssue(
category=FeedbackCategory.PERFORMANCE,
severity=FeedbackSeverity.CRITICAL,
type="extremely_slow",
message=f"Extremely slow: {execs_per_sec:.1f} execs/sec",
suggestion="Remove file I/O, network operations, or expensive computations from the harness loop. Move setup code outside the fuzz target function.",
details={"execs_per_sec": execs_per_sec},
)
)
# Slow execution
elif execs_per_sec < 100:
issues.append(
FeedbackIssue(
category=FeedbackCategory.PERFORMANCE,
severity=FeedbackSeverity.WARNING,
type="slow_execution",
message=f"Slow execution: {execs_per_sec:.1f} execs/sec (expected 500+)",
suggestion="Optimize harness: avoid allocations in hot path, reuse buffers, remove logging. Profile to find bottlenecks.",
details={"execs_per_sec": execs_per_sec},
)
)
# Good performance
elif execs_per_sec > 1000:
strengths.append(f"Excellent performance: {execs_per_sec:.0f} execs/sec")
elif execs_per_sec > 500:
strengths.append(f"Good performance: {execs_per_sec:.0f} execs/sec")
return issues, strengths
@staticmethod
def analyze_stability(
stability: StabilityMetrics,
) -> tuple[list[FeedbackIssue], list[str]]:
"""Analyze stability metrics.
:param stability: Stability metrics from fuzzing trial
:returns: Tuple of (issues, strengths)
"""
issues = []
strengths = []
if stability.status == "crashes_frequently":
issues.append(
FeedbackIssue(
category=FeedbackCategory.STABILITY,
severity=FeedbackSeverity.WARNING,
type="unstable_frequent_crashes",
message=f"Harness crashes frequently: {stability.crash_rate:.1f} crashes per 1000 execs",
suggestion="This might be expected if testing buggy code. If not, add error handling for edge cases or invalid inputs.",
details={
"crashes": stability.crashes_found,
"crash_rate": stability.crash_rate,
},
)
)
elif stability.status == "hangs":
issues.append(
FeedbackIssue(
category=FeedbackCategory.STABILITY,
severity=FeedbackSeverity.WARNING,
type="hangs_detected",
message=f"Harness hangs: {stability.hangs_found} detected",
suggestion="Add timeouts to prevent infinite loops. Check for blocking operations or resource exhaustion.",
details={"hangs": stability.hangs_found},
)
)
elif stability.status == "stable":
strengths.append("Stable execution - no crashes or hangs")
# Finding crashes can be good!
if stability.unique_crashes > 0 and stability.status != "crashes_frequently":
strengths.append(
f"Found {stability.unique_crashes} potential bugs during trial!"
)
return issues, strengths
@staticmethod
def calculate_quality_score(
compilation_success: bool,
execution_success: bool,
coverage: CoverageMetrics | None,
performance: PerformanceMetrics | None,
stability: StabilityMetrics | None,
) -> int:
"""Calculate overall quality score (0-100).
:param compilation_success: Whether compilation succeeded
:param execution_success: Whether execution succeeded
:param coverage: Coverage metrics
:param performance: Performance metrics
:param stability: Stability metrics
:returns: Quality score 0-100
"""
if not compilation_success:
return 0
if not execution_success:
return 10
score = 20 # Base score for compiling and running
# Coverage contribution (0-40 points)
if coverage:
if coverage.growth_rate == "excellent":
score += 40
elif coverage.growth_rate == "good":
score += 30
elif coverage.growth_rate == "poor":
score += 10
# Performance contribution (0-25 points)
if performance:
if performance.execs_per_sec > 1000:
score += 25
elif performance.execs_per_sec > 500:
score += 20
elif performance.execs_per_sec > 100:
score += 10
elif performance.execs_per_sec > 10:
score += 5
# Stability contribution (0-15 points)
if stability:
if stability.status == "stable":
score += 15
elif stability.status == "unstable":
score += 10
elif stability.status == "crashes_frequently":
score += 5
return min(score, 100)
@classmethod
def generate_quality_assessment(
cls,
compilation_result: dict,
execution_result: dict | None,
coverage: CoverageMetrics | None,
performance: PerformanceMetrics | None,
stability: StabilityMetrics | None,
) -> QualityAssessment:
"""Generate complete quality assessment with all feedback.
:param compilation_result: Compilation results
:param execution_result: Execution results
:param coverage: Coverage metrics
:param performance: Performance metrics
:param stability: Stability metrics
:returns: Complete quality assessment
"""
all_issues = []
all_strengths = []
# Analyze each aspect
comp_issues, comp_strengths = cls.analyze_compilation(compilation_result)
all_issues.extend(comp_issues)
all_strengths.extend(comp_strengths)
if execution_result:
exec_issues, exec_strengths = cls.analyze_execution(execution_result)
all_issues.extend(exec_issues)
all_strengths.extend(exec_strengths)
if coverage:
cov_issues, cov_strengths = cls.analyze_coverage(coverage)
all_issues.extend(cov_issues)
all_strengths.extend(cov_strengths)
if performance:
perf_issues, perf_strengths = cls.analyze_performance(performance)
all_issues.extend(perf_issues)
all_strengths.extend(perf_strengths)
if stability:
stab_issues, stab_strengths = cls.analyze_stability(stability)
all_issues.extend(stab_issues)
all_strengths.extend(stab_strengths)
# Calculate score
score = cls.calculate_quality_score(
compilation_result.get("success", False),
execution_result.get("success", False) if execution_result else False,
coverage,
performance,
stability,
)
# Determine verdict
if score >= 70:
verdict = "production-ready"
elif score >= 30:
verdict = "needs-improvement"
else:
verdict = "broken"
# Generate recommended actions
recommended_actions = []
critical_issues = [i for i in all_issues if i.severity == FeedbackSeverity.CRITICAL]
warning_issues = [i for i in all_issues if i.severity == FeedbackSeverity.WARNING]
if critical_issues:
recommended_actions.append(
f"Fix {len(critical_issues)} critical issue(s) preventing execution"
)
if warning_issues:
recommended_actions.append(
f"Address {len(warning_issues)} warning(s) to improve harness quality"
)
if verdict == "production-ready":
recommended_actions.append("Harness is ready for production fuzzing")
return QualityAssessment(
score=score,
verdict=verdict,
issues=all_issues,
strengths=all_strengths,
recommended_actions=recommended_actions,
)
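The scoring scheme above (20 base points for compiling and running, up to 40 for coverage, 25 for performance, 15 for stability) and the 70/30 verdict thresholds can be condensed into a small standalone sketch, with the point values passed in directly:

```python
def score_harness(
    compiled: bool, ran: bool, coverage_pts: int, perf_pts: int, stab_pts: int
) -> tuple[int, str]:
    """Mirror calculate_quality_score / verdict mapping from feedback_generator."""
    if not compiled:
        score = 0
    elif not ran:
        score = 10
    else:
        # 20 base + coverage (0-40) + performance (0-25) + stability (0-15), capped at 100
        score = min(20 + coverage_pts + perf_pts + stab_pts, 100)
    if score >= 70:
        verdict = "production-ready"
    elif score >= 30:
        verdict = "needs-improvement"
    else:
        verdict = "broken"
    return score, verdict

# "good" coverage growth (+30), >500 execs/s (+20), stable execution (+15)
print(score_harness(True, True, 30, 20, 15))
```

One consequence of this weighting: a harness that compiles and runs but shows no coverage growth and poor speed stays well below 30 and is reported as "broken", which matches the generator's intent of pushing agents to fix dead harnesses first.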

@@ -0,0 +1,148 @@
"""Feedback types and schemas for harness testing."""
from enum import Enum
from typing import Any
from pydantic import BaseModel, Field
class FeedbackSeverity(str, Enum):
"""Severity levels for feedback issues."""
CRITICAL = "critical" # Blocks execution (compilation errors, crashes)
WARNING = "warning" # Should fix (low coverage, slow execution)
INFO = "info" # Nice to have (optimization suggestions)
class FeedbackCategory(str, Enum):
"""Categories of feedback."""
COMPILATION = "compilation"
EXECUTION = "execution"
PERFORMANCE = "performance"
COVERAGE = "coverage"
STABILITY = "stability"
CODE_QUALITY = "code_quality"
class FeedbackIssue(BaseModel):
"""A single feedback issue with actionable suggestion."""
category: FeedbackCategory
severity: FeedbackSeverity
type: str = Field(description="Specific issue type (e.g., 'low_coverage', 'compilation_error')")
message: str = Field(description="Human-readable description of the issue")
suggestion: str = Field(description="Actionable suggestion for AI agent to fix the issue")
details: dict[str, Any] = Field(default_factory=dict, description="Additional technical details")
class CompilationResult(BaseModel):
"""Results from compilation attempt."""
success: bool
time_ms: int | None = None
errors: list[str] = Field(default_factory=list)
warnings: list[str] = Field(default_factory=list)
stderr: str | None = None
class ExecutionResult(BaseModel):
"""Results from execution test."""
success: bool
runs_completed: int | None = None
immediate_crash: bool = False
timeout: bool = False
crash_details: str | None = None
class CoverageMetrics(BaseModel):
"""Coverage metrics from fuzzing trial."""
initial_edges: int = 0
final_edges: int = 0
new_edges_found: int = 0
growth_rate: str = Field(
description="Qualitative assessment: 'excellent', 'good', 'poor', 'none'"
)
percentage_estimate: float | None = Field(
None, description="Estimated percentage of target code covered"
)
stagnation_time_sec: float | None = Field(
None, description="Time until coverage stopped growing"
)
class PerformanceMetrics(BaseModel):
"""Performance metrics from fuzzing trial."""
total_execs: int
execs_per_sec: float
average_exec_time_us: float | None = None
performance_rating: str = Field(
description="'excellent' (>1000/s), 'good' (100-1000/s), 'poor' (<100/s)"
)
class StabilityMetrics(BaseModel):
"""Stability metrics from fuzzing trial."""
status: str = Field(
description="'stable', 'unstable', 'crashes_frequently', 'hangs'"
)
crashes_found: int = 0
hangs_found: int = 0
unique_crashes: int = 0
crash_rate: float = Field(0.0, description="Crashes per 1000 executions")
class FuzzingTrial(BaseModel):
"""Results from short fuzzing trial."""
duration_seconds: int
coverage: CoverageMetrics
performance: PerformanceMetrics
stability: StabilityMetrics
trial_successful: bool
class QualityAssessment(BaseModel):
"""Overall quality assessment of the harness."""
score: int = Field(ge=0, le=100, description="Quality score 0-100")
verdict: str = Field(
description="'production-ready', 'needs-improvement', 'broken'"
)
issues: list[FeedbackIssue] = Field(default_factory=list)
strengths: list[str] = Field(default_factory=list)
recommended_actions: list[str] = Field(default_factory=list)
class HarnessEvaluation(BaseModel):
"""Complete evaluation of a single harness."""
name: str
path: str | None = None
compilation: CompilationResult
execution: ExecutionResult | None = None
fuzzing_trial: FuzzingTrial | None = None
quality: QualityAssessment
class EvaluationSummary(BaseModel):
"""Summary of all harness evaluations."""
total_harnesses: int
production_ready: int
needs_improvement: int
broken: int
average_score: float
recommended_action: str
class HarnessTestReport(BaseModel):
"""Complete harness testing report."""
harnesses: list[HarnessEvaluation]
summary: EvaluationSummary
test_configuration: dict[str, Any] = Field(default_factory=dict)
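As a sketch, the `StabilityMetrics.status` values map onto raw crash counts the way `_assess_stability` in module.py computes them (crashes per 1000 executions; note that module.py additionally returns "unknown" when no executions were recorded):

```python
def classify_stability(crashes: int, total_execs: int) -> str:
    """Return a StabilityMetrics-style status from raw crash counts."""
    if total_execs == 0:
        return "unknown"
    crash_rate = (crashes / total_execs) * 1000  # crashes per 1000 execs
    if crash_rate > 10:
        return "crashes_frequently"
    if crash_rate > 1:
        return "unstable"
    return "stable"

print(classify_stability(0, 50_000))  # stable
print(classify_stability(5, 2_000))   # 2.5 per 1000 -> unstable
print(classify_stability(50, 1_000))  # 50 per 1000 -> crashes_frequently
```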

@@ -0,0 +1,27 @@
"""Models for harness-tester module."""
from pathlib import Path
from typing import Any
from pydantic import BaseModel
from fuzzforge_modules_sdk.api.models import (
FuzzForgeModuleInputBase,
FuzzForgeModuleOutputBase,
)
from module.settings import Settings
class Input(FuzzForgeModuleInputBase[Settings]):
"""Input for the harness-tester module."""
class Output(FuzzForgeModuleOutputBase):
"""Output for the harness-tester module."""
#: The test report data.
report: dict[str, Any] | None = None
#: Path to the report JSON file.
report_file: Path | None = None

@@ -0,0 +1,19 @@
"""Settings for harness-tester module."""
from pydantic import BaseModel, Field
class Settings(BaseModel):
"""Settings for the harness-tester module."""
#: Duration for each fuzzing trial in seconds.
trial_duration_sec: int = Field(default=30, ge=1, le=300)
#: Timeout for harness execution in seconds.
execution_timeout_sec: int = Field(default=10, ge=1, le=60)
#: Whether to generate coverage reports.
enable_coverage: bool = Field(default=True)
#: Minimum score threshold for harness to be considered "good".
min_quality_score: int = Field(default=50, ge=0, le=100)

@@ -1,45 +0,0 @@
PACKAGE=$(word 1, $(shell uv version))
VERSION=$(word 2, $(shell uv version))
PODMAN?=/usr/bin/podman
SOURCES=./src
TESTS=./tests
.PHONY: bandit build clean format mypy pytest ruff version
bandit:
uv run bandit --recursive $(SOURCES)
build:
$(PODMAN) build --file ./Dockerfile --no-cache --tag $(PACKAGE):$(VERSION)
save: build
$(PODMAN) save --format oci-archive --output /tmp/$(PACKAGE)-$(VERSION).oci $(PACKAGE):$(VERSION)
clean:
@find . -type d \( \
-name '*.egg-info' \
-o -name '.mypy_cache' \
-o -name '.pytest_cache' \
-o -name '.ruff_cache' \
-o -name '__pycache__' \
\) -printf 'removing directory %p\n' -exec rm -rf {} +
cloc:
cloc $(SOURCES)
format:
uv run ruff format $(SOURCES) $(TESTS)
mypy:
uv run mypy $(SOURCES)
pytest:
uv run pytest $(TESTS)
ruff:
uv run ruff check --fix $(SOURCES) $(TESTS)
version:
@echo '$(PACKAGE)@$(VERSION)'

@@ -1,46 +0,0 @@
# FuzzForge Modules - FIXME
## Installation
### Python
```shell
# install the package (users)
uv sync
# install the package and all development dependencies (developers)
uv sync --all-extras
```
### Container
```shell
# build the image
make build
# run the container
mkdir -p "${PWD}/data" "${PWD}/data/input" "${PWD}/data/output"
echo '{"settings":{},"resources":[]}' > "${PWD}/data/input/input.json"
podman run --rm \
--volume "${PWD}/data:/data" \
'<name>:<version>' 'uv run module'
```
## Usage
```shell
uv run module
```
## Development tools
```shell
# run ruff (formatter)
make format
# run mypy (type checker)
make mypy
# run tests (pytest)
make pytest
# run ruff (linter)
make ruff
```
See the file `Makefile` at the root of this directory for more tools.

@@ -1,31 +0,0 @@
[project]
name = "harness-validator"
version = "0.1.0"
description = "FuzzForge module that validates fuzz harnesses compile correctly"
authors = []
readme = "README.md"
requires-python = ">=3.14"
dependencies = [
"fuzzforge-modules-sdk==0.0.1",
"pydantic==2.12.4",
"structlog==25.5.0",
]
[project.optional-dependencies]
lints = [
"bandit==1.8.6",
"mypy==1.18.2",
"ruff==0.14.4",
]
tests = [
"pytest==9.0.2",
]
[project.scripts]
module = "module.__main__:main"
[tool.uv.sources]
fuzzforge-modules-sdk = { workspace = true }
[tool.uv]
package = true

@@ -1,19 +0,0 @@
from typing import TYPE_CHECKING
from fuzzforge_modules_sdk.api import logs
from module.mod import Module
if TYPE_CHECKING:
from fuzzforge_modules_sdk.api.modules.base import FuzzForgeModule
def main() -> None:
"""TODO."""
logs.configure()
module: FuzzForgeModule = Module()
module.main()
if __name__ == "__main__":
main()

@@ -1,309 +0,0 @@
"""Harness Validator module for FuzzForge.
This module validates that fuzz harnesses compile correctly.
It takes a Rust project with a fuzz directory containing harnesses
and runs cargo build to verify they compile.
"""
from __future__ import annotations
import json
import subprocess
import os
from pathlib import Path
from typing import TYPE_CHECKING
import structlog
from fuzzforge_modules_sdk.api.constants import PATH_TO_INPUTS, PATH_TO_OUTPUTS
from fuzzforge_modules_sdk.api.models import FuzzForgeModuleResults
from fuzzforge_modules_sdk.api.modules.base import FuzzForgeModule
from module.models import Input, Output, ValidationResult, HarnessStatus
from module.settings import Settings
if TYPE_CHECKING:
from fuzzforge_modules_sdk.api.models import FuzzForgeModuleResource
logger = structlog.get_logger()
class Module(FuzzForgeModule):
"""Harness Validator module - validates that fuzz harnesses compile."""
_settings: Settings | None
_results: list[ValidationResult]
def __init__(self) -> None:
"""Initialize an instance of the class."""
name: str = "harness-validator"
version: str = "0.1.0"
FuzzForgeModule.__init__(self, name=name, version=version)
self._settings = None
self._results = []
@classmethod
def _get_input_type(cls) -> type[Input]:
"""Return the input type."""
return Input
@classmethod
def _get_output_type(cls) -> type[Output]:
"""Return the output type."""
return Output
def _prepare(self, settings: Settings) -> None: # type: ignore[override]
"""Prepare the module.
:param settings: Module settings.
"""
self._settings = settings
logger.info("harness-validator preparing", settings=settings.model_dump() if settings else {})
def _run(self, resources: list[FuzzForgeModuleResource]) -> FuzzForgeModuleResults:
"""Run the harness validator.
:param resources: Input resources (fuzz project directory).
:returns: Module execution result.
"""
logger.info("harness-validator starting", resource_count=len(resources))
# Find the fuzz project directory
fuzz_project_src = self._find_fuzz_project(resources)
if fuzz_project_src is None:
logger.error("No fuzz project found in resources")
return FuzzForgeModuleResults.FAILURE
logger.info("Found fuzz project", path=str(fuzz_project_src))
# Copy the project to a writable location since /data/input is read-only
# and cargo needs to write Cargo.lock and build artifacts
import shutil
work_dir = Path("/tmp/fuzz-build")
if work_dir.exists():
shutil.rmtree(work_dir)
# Copy entire project root (parent of fuzz directory)
project_root = fuzz_project_src.parent
work_project = work_dir / project_root.name
shutil.copytree(project_root, work_project, dirs_exist_ok=True)
# Adjust fuzz_project to point to the copied location
fuzz_project = work_dir / project_root.name / fuzz_project_src.name
logger.info("Copied project to writable location", work_dir=str(fuzz_project))
# Find all harness targets
targets = self._find_harness_targets(fuzz_project)
if not targets:
logger.error("No harness targets found")
return FuzzForgeModuleResults.FAILURE
logger.info("Found harness targets", count=len(targets))
# Validate each harness
all_valid = True
for target in targets:
result = self._validate_harness(fuzz_project, target)
self._results.append(result)
if result.status != HarnessStatus.VALID:
all_valid = False
logger.warning("Harness validation failed",
target=target,
status=result.status.value,
errors=result.errors)
else:
logger.info("Harness valid", target=target)
# Set output data for results.json
valid_targets = [r.target for r in self._results if r.status == HarnessStatus.VALID]
invalid_targets = [r.target for r in self._results if r.status != HarnessStatus.VALID]
self.set_output(
fuzz_project=str(fuzz_project),
total_targets=len(self._results),
valid_count=len(valid_targets),
invalid_count=len(invalid_targets),
valid_targets=valid_targets,
invalid_targets=invalid_targets,
results=[r.model_dump() for r in self._results],
)
valid_count = sum(1 for r in self._results if r.status == HarnessStatus.VALID)
logger.info("harness-validator completed",
total=len(self._results),
valid=valid_count,
invalid=len(self._results) - valid_count)
return FuzzForgeModuleResults.SUCCESS
def _cleanup(self, settings: Settings) -> None: # type: ignore[override]
"""Clean up after execution.
:param settings: Module settings.
"""
pass
def _find_fuzz_project(self, resources: list[FuzzForgeModuleResource]) -> Path | None:
"""Find the fuzz project directory in the resources.
:param resources: List of input resources.
:returns: Path to fuzz project or None.
"""
for resource in resources:
path = Path(resource.path)
# Check if it's a fuzz directory with Cargo.toml
if path.is_dir():
cargo_toml = path / "Cargo.toml"
if cargo_toml.exists():
# Check if it has fuzz_targets directory
fuzz_targets = path / "fuzz_targets"
if fuzz_targets.is_dir():
return path
# Check for fuzz subdirectory
fuzz_dir = path / "fuzz"
if fuzz_dir.is_dir():
cargo_toml = fuzz_dir / "Cargo.toml"
if cargo_toml.exists():
return fuzz_dir
return None
def _find_harness_targets(self, fuzz_project: Path) -> list[str]:
"""Find all harness target names in the fuzz project.
:param fuzz_project: Path to the fuzz project.
:returns: List of target names.
"""
targets = []
fuzz_targets_dir = fuzz_project / "fuzz_targets"
if fuzz_targets_dir.is_dir():
for rs_file in fuzz_targets_dir.glob("*.rs"):
# Target name is the file name without extension
target_name = rs_file.stem
targets.append(target_name)
return targets
def _validate_harness(self, fuzz_project: Path, target: str) -> ValidationResult:
"""Validate a single harness by compiling it.
:param fuzz_project: Path to the fuzz project.
:param target: Name of the harness target.
:returns: Validation result.
"""
harness_file = fuzz_project / "fuzz_targets" / f"{target}.rs"
if not harness_file.exists():
return ValidationResult(
target=target,
file_path=str(harness_file),
status=HarnessStatus.NOT_FOUND,
errors=["Harness file not found"],
)
# Try to compile just this target
try:
env = os.environ.copy()
env["CARGO_INCREMENTAL"] = "0"
result = subprocess.run(
[
"cargo", "build",
"--bin", target,
"--message-format=json",
],
cwd=fuzz_project,
capture_output=True,
text=True,
timeout=self._settings.compile_timeout if self._settings else 120,
env=env,
)
# Parse cargo output for errors
errors = []
warnings = []
for line in result.stdout.splitlines():
try:
msg = json.loads(line)
if msg.get("reason") == "compiler-message":
message = msg.get("message", {})
level = message.get("level", "")
rendered = message.get("rendered", "")
if level == "error":
errors.append(rendered.strip())
elif level == "warning":
warnings.append(rendered.strip())
except json.JSONDecodeError:
pass
# Also check stderr for any cargo errors
if result.returncode != 0 and not errors:
errors.append(result.stderr.strip() if result.stderr else "Build failed with unknown error")
if result.returncode == 0:
return ValidationResult(
target=target,
file_path=str(harness_file),
status=HarnessStatus.VALID,
errors=[],
warnings=warnings,
)
else:
return ValidationResult(
target=target,
file_path=str(harness_file),
status=HarnessStatus.COMPILE_ERROR,
errors=errors,
warnings=warnings,
)
except subprocess.TimeoutExpired:
return ValidationResult(
target=target,
file_path=str(harness_file),
status=HarnessStatus.TIMEOUT,
errors=["Compilation timed out"],
)
except Exception as e:
return ValidationResult(
target=target,
file_path=str(harness_file),
status=HarnessStatus.ERROR,
errors=[str(e)],
)
def _write_output(self, fuzz_project: Path) -> None:
"""Write the validation results to output.
:param fuzz_project: Path to the fuzz project.
"""
output_path = PATH_TO_OUTPUTS / "validation.json"
output_path.parent.mkdir(parents=True, exist_ok=True)
valid_targets = [r.target for r in self._results if r.status == HarnessStatus.VALID]
invalid_targets = [r.target for r in self._results if r.status != HarnessStatus.VALID]
output_data = {
"fuzz_project": str(fuzz_project),
"total_targets": len(self._results),
"valid_count": len(valid_targets),
"invalid_count": len(invalid_targets),
"valid_targets": valid_targets,
"invalid_targets": invalid_targets,
"results": [r.model_dump() for r in self._results],
}
output_path.write_text(json.dumps(output_data, indent=2))
logger.info("wrote validation results", path=str(output_path))
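The `_validate_harness` method above relies on `cargo build --message-format=json` emitting one JSON record per line and collects the `rendered` diagnostics from `compiler-message` records. A standalone sketch of that parsing logic (the sample lines are illustrative, hand-built records in the shape cargo emits, not captured cargo output):

```python
import json


def parse_cargo_messages(stdout: str) -> tuple[list[str], list[str]]:
    """Collect rendered error/warning diagnostics from cargo's JSON stream."""
    errors: list[str] = []
    warnings: list[str] = []
    for line in stdout.splitlines():
        try:
            msg = json.loads(line)
        except json.JSONDecodeError:
            continue  # cargo may interleave non-JSON lines; skip them
        if msg.get("reason") != "compiler-message":
            continue  # ignore artifact/build-finished records
        message = msg.get("message", {})
        rendered = message.get("rendered", "")
        if message.get("level") == "error":
            errors.append(rendered.strip())
        elif message.get("level") == "warning":
            warnings.append(rendered.strip())
    return errors, warnings


# Illustrative input: one compiler-message and one build-finished record
sample = "\n".join([
    json.dumps({"reason": "compiler-message",
                "message": {"level": "error",
                            "rendered": "error[E0308]: mismatched types"}}),
    json.dumps({"reason": "build-finished", "success": False}),
])
errors, warnings = parse_cargo_messages(sample)
```

As in the module above, a non-zero cargo exit status with an empty `errors` list still has to be treated as a failure, since some build errors only appear on stderr.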

View File

@@ -1,71 +0,0 @@
"""Models for the harness-validator module."""
from enum import Enum
from pydantic import BaseModel, Field
from fuzzforge_modules_sdk.api.models import FuzzForgeModuleInputBase, FuzzForgeModuleOutputBase
from module.settings import Settings
class HarnessStatus(str, Enum):
"""Status of harness validation."""
VALID = "valid"
COMPILE_ERROR = "compile_error"
NOT_FOUND = "not_found"
TIMEOUT = "timeout"
ERROR = "error"
class ValidationResult(BaseModel):
"""Result of validating a single harness."""
#: Name of the harness target
target: str
#: Path to the harness file
file_path: str
#: Validation status
status: HarnessStatus
#: Compilation errors (if any)
errors: list[str] = Field(default_factory=list)
#: Compilation warnings (if any)
warnings: list[str] = Field(default_factory=list)
class Input(FuzzForgeModuleInputBase[Settings]):
"""Input for the harness-validator module.
Expects a fuzz project directory with:
- Cargo.toml
- fuzz_targets/ directory with .rs harness files
"""
class Output(FuzzForgeModuleOutputBase):
"""Output from the harness-validator module."""
#: Path to the fuzz project
fuzz_project: str = ""
#: Total number of harness targets
total_targets: int = 0
#: Number of valid (compilable) harnesses
valid_count: int = 0
#: Number of invalid harnesses
invalid_count: int = 0
#: List of valid target names (ready for fuzzing)
valid_targets: list[str] = Field(default_factory=list)
#: List of invalid target names (need fixes)
invalid_targets: list[str] = Field(default_factory=list)
#: Detailed validation results per target
results: list[ValidationResult] = Field(default_factory=list)

View File

@@ -1,13 +0,0 @@
"""Settings for the harness-validator module."""
from fuzzforge_modules_sdk.api.models import FuzzForgeModulesSettingsBase
class Settings(FuzzForgeModulesSettingsBase):
"""Settings for the harness-validator module."""
#: Timeout for compiling each harness (seconds)
compile_timeout: int = 120
#: Whether to stop on first error
fail_fast: bool = False

View File

@@ -1,5 +1,7 @@
FROM localhost/fuzzforge-modules-sdk:0.1.0
# Module metadata is now read from pyproject.toml [tool.fuzzforge.module] section
# Install system dependencies
RUN apt-get update && apt-get install -y \
curl \

View File

@@ -1,7 +1,7 @@
[project]
name = "rust-analyzer"
version = "0.0.1"
description = "FIXME"
name = "fuzzforge-rust-analyzer"
version = "0.1.0"
description = "Analyzes Rust projects to identify functions suitable for fuzzing"
authors = []
readme = "README.md"
requires-python = ">=3.14"
@@ -26,3 +26,27 @@ module = "module.__main__:main"
[tool.uv]
package = true
# FuzzForge module metadata for AI agent discovery
[tool.fuzzforge.module]
identifier = "fuzzforge-rust-analyzer"
suggested_predecessors = []
continuous_mode = false
use_cases = [
"Analyze Rust crate to find fuzzable functions",
"First step in Rust fuzzing pipeline before harness generation",
"Produces fuzzable_functions.json for AI harness generation"
]
common_inputs = [
"rust-source-code",
"Cargo.toml"
]
output_artifacts = [
"analysis.json",
"results.json"
]
output_treatment = "Read analysis.json which contains: project_info, fuzzable_functions (array with name, signature, file_path, fuzz_score), and vulnerabilities (array of known CVEs). Display fuzzable_functions as a table. Highlight any vulnerabilities found."

View File

@@ -322,14 +322,21 @@ class ModuleExecutor:
self,
assets_path: Path,
configuration: dict[str, Any] | None = None,
project_path: Path | None = None,
execution_id: str | None = None,
) -> Path:
"""Prepare input directory with assets and configuration.
Creates a temporary directory with input.json describing all resources.
Creates a directory with input.json describing all resources.
This directory can be volume-mounted into the container.
If assets_path is a directory, it is used directly (zero-copy mount).
If assets_path is a file (e.g., tar.gz), it is extracted first.
:param assets_path: Path to the assets (file or directory).
:param configuration: Optional module configuration dict.
:param project_path: Project directory for storing inputs in .fuzzforge/.
:param execution_id: Execution ID for organizing inputs.
:returns: Path to prepared input directory.
:raises SandboxError: If preparation fails.
@@ -339,12 +346,65 @@ class ModuleExecutor:
logger.info("preparing input directory", assets=str(assets_path))
try:
# Create temporary directory - caller must clean it up after container finishes
from tempfile import mkdtemp
# If assets_path is already a directory, use it directly (zero-copy mount)
if assets_path.exists() and assets_path.is_dir():
# Create input.json directly in the source directory
input_json_path = assets_path / "input.json"
# Scan files and build resource list
resources = []
for item in assets_path.iterdir():
if item.name == "input.json":
continue
if item.is_file():
resources.append(
{
"name": item.stem,
"description": f"Input file: {item.name}",
"kind": "unknown",
"path": f"/data/input/{item.name}",
}
)
elif item.is_dir():
resources.append(
{
"name": item.name,
"description": f"Input directory: {item.name}",
"kind": "unknown",
"path": f"/data/input/{item.name}",
}
)
temp_path = Path(mkdtemp(prefix="fuzzforge-input-"))
input_data = {
"settings": configuration or {},
"resources": resources,
}
input_json_path.write_text(json.dumps(input_data, indent=2))
# Copy assets to temp directory
logger.debug("using source directory directly", path=str(assets_path))
return assets_path
# File input: extract to a directory first
# Determine input directory location
if project_path:
# Store inputs in .fuzzforge/inputs/ for visibility
from fuzzforge_runner.storage import FUZZFORGE_DIR_NAME
exec_id = execution_id or "latest"
input_dir = project_path / FUZZFORGE_DIR_NAME / "inputs" / exec_id
input_dir.mkdir(parents=True, exist_ok=True)
# Clean previous contents if exists
import shutil
for item in input_dir.iterdir():
if item.is_file():
item.unlink()
elif item.is_dir():
shutil.rmtree(item)
else:
# Fallback to temporary directory
from tempfile import mkdtemp
input_dir = Path(mkdtemp(prefix="fuzzforge-input-"))
# Copy/extract assets to input directory
if assets_path.exists():
if assets_path.is_file():
# Check if it's a tar.gz archive that needs extraction
@@ -353,26 +413,26 @@ class ModuleExecutor:
import tarfile
with tarfile.open(assets_path, "r:gz") as tar:
tar.extractall(path=temp_path)
tar.extractall(path=input_dir)
logger.debug("extracted tar.gz archive", archive=str(assets_path))
else:
# Single file - copy it
import shutil
shutil.copy2(assets_path, temp_path / assets_path.name)
shutil.copy2(assets_path, input_dir / assets_path.name)
else:
# Directory - copy all files (including subdirectories)
import shutil
for item in assets_path.iterdir():
if item.is_file():
shutil.copy2(item, temp_path / item.name)
shutil.copy2(item, input_dir / item.name)
elif item.is_dir():
shutil.copytree(item, temp_path / item.name)
shutil.copytree(item, input_dir / item.name, dirs_exist_ok=True)
# Scan files and directories and build resource list
resources = []
for item in temp_path.iterdir():
for item in input_dir.iterdir():
if item.name == "input.json":
continue
if item.is_file():
@@ -399,11 +459,11 @@ class ModuleExecutor:
"settings": configuration or {},
"resources": resources,
}
input_json_path = temp_path / "input.json"
input_json_path = input_dir / "input.json"
input_json_path.write_text(json.dumps(input_data, indent=2))
logger.debug("prepared input directory", resources=len(resources), path=str(temp_path))
return temp_path
logger.debug("prepared input directory", resources=len(resources), path=str(input_dir))
return input_dir
except Exception as exc:
message = "Failed to prepare input directory"
@@ -542,6 +602,8 @@ class ModuleExecutor:
module_identifier: str,
assets_path: Path,
configuration: dict[str, Any] | None = None,
project_path: Path | None = None,
execution_id: str | None = None,
) -> Path:
"""Execute a module end-to-end.
@@ -552,9 +614,17 @@ class ModuleExecutor:
4. Pull results
5. Terminate sandbox
All intermediate files are stored in {project_path}/.fuzzforge/ for
easy debugging and visibility.
Source directories are mounted directly without tar.gz compression
for better performance.
:param module_identifier: Name/identifier of the module to execute.
:param assets_path: Path to the input assets archive.
:param assets_path: Path to the input assets (file or directory).
:param configuration: Optional module configuration.
:param project_path: Project directory for .fuzzforge/ storage.
:param execution_id: Execution ID for organizing files.
:returns: Path to the results archive.
:raises ModuleExecutionError: If any step fails.
@@ -562,10 +632,20 @@ class ModuleExecutor:
logger = get_logger()
sandbox: str | None = None
input_dir: Path | None = None
# Don't cleanup if we're using the source directory directly
cleanup_input = False
try:
# 1. Prepare input directory with assets
input_dir = self.prepare_input_directory(assets_path, configuration)
input_dir = self.prepare_input_directory(
assets_path,
configuration,
project_path=project_path,
execution_id=execution_id,
)
# Only cleanup if we created a temp directory (file input case)
cleanup_input = input_dir != assets_path and project_path is None
# 2. Spawn sandbox with volume mount
sandbox = self.spawn_sandbox(module_identifier, input_volume=input_dir)
@@ -585,12 +665,12 @@ class ModuleExecutor:
return results_path
finally:
# 5. Always cleanup
# 5. Always cleanup sandbox
if sandbox:
self.terminate_sandbox(sandbox)
if input_dir and input_dir.exists():
# Only cleanup input if it was a temp directory
if cleanup_input and input_dir and input_dir.exists():
import shutil
shutil.rmtree(input_dir, ignore_errors=True)
# -------------------------------------------------------------------------
@@ -602,22 +682,34 @@ class ModuleExecutor:
module_identifier: str,
assets_path: Path,
configuration: dict[str, Any] | None = None,
project_path: Path | None = None,
execution_id: str | None = None,
) -> dict[str, Any]:
"""Start a module in continuous/background mode without waiting.
Returns immediately with container info. Use read_module_output() to
get current status and stop_module_continuous() to stop.
Source directories are mounted directly without tar.gz compression
for better performance.
:param module_identifier: Name/identifier of the module to execute.
:param assets_path: Path to the input assets archive.
:param assets_path: Path to the input assets (file or directory).
:param configuration: Optional module configuration.
:param project_path: Project directory for .fuzzforge/ storage.
:param execution_id: Execution ID for organizing files.
:returns: Dict with container_id, input_dir for later cleanup.
"""
logger = get_logger()
# 1. Prepare input directory with assets
input_dir = self.prepare_input_directory(assets_path, configuration)
input_dir = self.prepare_input_directory(
assets_path,
configuration,
project_path=project_path,
execution_id=execution_id,
)
# 2. Spawn sandbox with volume mount
sandbox = self.spawn_sandbox(module_identifier, input_volume=input_dir)
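The input-preparation logic above scans a directory and describes each entry in an `input.json` manifest that is volume-mounted into the container. A self-contained sketch of that scan, assuming the `/data/input/` mount convention shown in the diff (the helper name `build_input_manifest` is illustrative, not part of the runner API):

```python
import json
import tempfile
from pathlib import Path


def build_input_manifest(input_dir: Path, settings: dict) -> Path:
    """Write input.json describing every file/directory in input_dir."""
    resources = []
    for item in sorted(input_dir.iterdir()):
        if item.name == "input.json":
            continue  # never describe the manifest itself
        resources.append({
            # Files are named by stem, directories by full name (as above)
            "name": item.stem if item.is_file() else item.name,
            "description": f"Input {'file' if item.is_file() else 'directory'}: {item.name}",
            "kind": "unknown",
            "path": f"/data/input/{item.name}",  # path as seen inside the container
        })
    manifest = input_dir / "input.json"
    manifest.write_text(json.dumps({"settings": settings, "resources": resources}, indent=2))
    return manifest


# Demo against a throwaway directory
work = Path(tempfile.mkdtemp(prefix="fuzzforge-input-"))
(work / "Cargo.toml").write_text('[package]\nname = "demo"\n')
(work / "src").mkdir()
manifest = build_input_manifest(work, settings={"compile_timeout": 120})
data = json.loads(manifest.read_text())
```

Writing the manifest into the source directory itself is what makes the zero-copy direct mount possible: the container sees the user's files and the manifest under the same mount point.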

View File

@@ -214,11 +214,13 @@ class WorkflowOrchestrator:
message = f"No assets available for step {step_index}"
raise WorkflowExecutionError(message)
# Execute the module
# Execute the module (inputs stored in .fuzzforge/inputs/)
results_path = await self._executor.execute(
module_identifier=step.module_identifier,
assets_path=current_assets,
configuration=step.configuration,
project_path=project_path,
execution_id=step_execution_id,
)
completed_at = datetime.now(UTC)

View File

@@ -53,6 +53,24 @@ class ModuleInfo:
#: Whether module image exists locally.
available: bool = True
#: Module identifiers that should run before this one.
suggested_predecessors: list[str] | None = None
#: Whether module supports continuous/background execution.
continuous_mode: bool = False
#: Typical use cases and scenarios for this module.
use_cases: list[str] | None = None
#: Common inputs (e.g., ["rust-source-code", "Cargo.toml"]).
common_inputs: list[str] | None = None
#: Output artifacts produced (e.g., ["fuzzable_functions.json"]).
output_artifacts: list[str] | None = None
#: How AI should display/treat outputs.
output_treatment: str | None = None
class Runner:
"""Main FuzzForge Runner interface.
@@ -125,16 +143,19 @@ class Runner:
return self._storage.init_project(project_path)
def set_project_assets(self, project_path: Path, assets_path: Path) -> Path:
"""Set initial assets for a project.
"""Set source path for a project (no copying).
Just stores a reference to the source directory.
The source is mounted directly into containers at runtime.
:param project_path: Path to the project directory.
:param assets_path: Path to assets (file or directory).
:returns: Path to stored assets.
:param assets_path: Path to source directory.
:returns: The assets path (unchanged).
"""
logger = get_logger()
logger.info("setting project assets", project=str(project_path), assets=str(assets_path))
return self._storage.store_assets(project_path, assets_path)
return self._storage.set_project_assets(project_path, assets_path)
# -------------------------------------------------------------------------
# Module Discovery
@@ -182,12 +203,15 @@ class Runner:
"""List available module images from the container engine.
Uses the container engine API to discover built module images.
Reads metadata from pyproject.toml inside each image.
:param filter_prefix: Prefix to filter images (default: "fuzzforge-").
:param include_all_tags: If True, include all image tags, not just 'latest'.
:returns: List of available module images.
"""
import tomllib # noqa: PLC0415
logger = get_logger()
modules: list[ModuleInfo] = []
seen: set[str] = set()
@@ -223,18 +247,63 @@ class Runner:
# Add unique modules
if module_name not in seen:
seen.add(module_name)
# Read metadata from pyproject.toml inside the image
image_ref = f"{image.repository}:{image.tag}"
module_meta = self._get_module_metadata_from_image(engine, image_ref)
# Get basic info from pyproject.toml [project] section
project_info = module_meta.get("_project", {})
fuzzforge_meta = module_meta.get("module", {})
modules.append(
ModuleInfo(
identifier=module_name,
description=None,
version=image.tag,
identifier=fuzzforge_meta.get("identifier", module_name),
description=project_info.get("description"),
version=project_info.get("version", image.tag),
available=True,
suggested_predecessors=fuzzforge_meta.get("suggested_predecessors", []),
continuous_mode=fuzzforge_meta.get("continuous_mode", False),
use_cases=fuzzforge_meta.get("use_cases", []),
common_inputs=fuzzforge_meta.get("common_inputs", []),
output_artifacts=fuzzforge_meta.get("output_artifacts", []),
output_treatment=fuzzforge_meta.get("output_treatment"),
)
)
logger.info("listed module images", count=len(modules))
return modules
def _get_module_metadata_from_image(self, engine: Any, image_ref: str) -> dict:
"""Read module metadata from pyproject.toml inside a container image.
:param engine: Container engine instance.
:param image_ref: Image reference (e.g., "fuzzforge-rust-analyzer:latest").
:returns: Dict with module metadata from [tool.fuzzforge] section.
"""
import tomllib # noqa: PLC0415
logger = get_logger()
try:
# Read pyproject.toml from the image
content = engine.read_file_from_image(image_ref, "/app/pyproject.toml")
if not content:
logger.debug("no pyproject.toml found in image", image=image_ref)
return {}
pyproject = tomllib.loads(content)
# Return the [tool.fuzzforge] section plus [project] info
result = pyproject.get("tool", {}).get("fuzzforge", {})
result["_project"] = pyproject.get("project", {})
return result
except Exception as exc:
logger.debug("failed to read metadata from image", image=image_ref, error=str(exc))
return {}
def get_module_info(self, module_identifier: str) -> ModuleInfo | None:
"""Get information about a specific module.

View File

@@ -34,23 +34,14 @@ class EngineSettings(BaseModel):
class StorageSettings(BaseModel):
"""Storage configuration for local or S3 storage."""
"""Storage configuration for local filesystem storage.
#: Storage backend type.
type: Literal["local", "s3"] = "local"
OSS uses direct file mounting without archiving for simplicity.
"""
#: Base path for local storage (used when type is "local").
#: Base path for local storage.
path: Path = Field(default=Path.home() / ".fuzzforge" / "storage")
#: S3 endpoint URL (used when type is "s3").
s3_endpoint: str | None = None
#: S3 access key (used when type is "s3").
s3_access_key: str | None = None
#: S3 secret key (used when type is "s3").
s3_secret_key: str | None = None
class ProjectSettings(BaseModel):
"""Project configuration."""

View File

@@ -1,16 +1,20 @@
"""FuzzForge Runner - Local filesystem storage.
This module provides local filesystem storage as an alternative to S3,
enabling zero-configuration operation for OSS deployments.
This module provides local filesystem storage for OSS deployments.
Storage is placed directly in the project directory as `.fuzzforge/`
for maximum visibility and ease of debugging.
In OSS mode, source files are referenced (not copied) and mounted
directly into containers at runtime for zero-copy performance.
"""
from __future__ import annotations
import shutil
from pathlib import Path, PurePath
from pathlib import Path
from tarfile import open as Archive # noqa: N812
from tempfile import NamedTemporaryFile, TemporaryDirectory
from typing import TYPE_CHECKING, cast
from fuzzforge_runner.constants import RESULTS_ARCHIVE_FILENAME
@@ -19,6 +23,9 @@ from fuzzforge_runner.exceptions import StorageError
if TYPE_CHECKING:
from structlog.stdlib import BoundLogger
#: Name of the FuzzForge storage directory within projects.
FUZZFORGE_DIR_NAME: str = ".fuzzforge"
def get_logger() -> BoundLogger:
"""Get structlog logger instance.
@@ -32,33 +39,36 @@ def get_logger() -> BoundLogger:
class LocalStorage:
"""Local filesystem storage backend.
"""Local filesystem storage backend for FuzzForge OSS.
Provides S3-like operations using local filesystem, enabling
FuzzForge operation without external storage infrastructure.
Provides lightweight storage for execution results while using
direct source mounting (no copying) for input assets.
Directory structure:
{base_path}/
projects/
{project_id}/
assets/ # Initial project assets
runs/
{execution_id}/
Storage is placed directly in the project directory as `.fuzzforge/`
so users can easily inspect outputs and configuration.
Directory structure (inside project directory):
{project_path}/.fuzzforge/
config.json # Project config (source path reference)
runs/ # Execution results
{execution_id}/
results.tar.gz
{workflow_id}/
modules/
step-0-{exec_id}/
results.tar.gz
{workflow_id}/
modules/
step-0-{exec_id}/
results.tar.gz
Source files are NOT copied - they are referenced and mounted directly.
"""
#: Base path for all storage operations.
#: Base path for global storage (only used for fallback/config).
_base_path: Path
def __init__(self, base_path: Path) -> None:
"""Initialize an instance of the class.
:param base_path: Root directory for storage.
:param base_path: Root directory for global storage (fallback only).
"""
self._base_path = base_path
@@ -71,17 +81,22 @@ class LocalStorage:
def _get_project_path(self, project_path: Path) -> Path:
"""Get the storage path for a project.
:param project_path: Original project path (used as identifier).
:returns: Storage path for the project.
Storage is placed directly inside the project as `.fuzzforge/`.
:param project_path: Path to the project directory.
:returns: Storage path for the project (.fuzzforge inside project).
"""
# Use project path name as identifier
project_id = project_path.name
return self._base_path / "projects" / project_id
return project_path / FUZZFORGE_DIR_NAME
def init_project(self, project_path: Path) -> Path:
"""Initialize storage for a new project.
Creates a .fuzzforge/ directory inside the project for storing:
- assets/: Input files (source code, etc.)
- inputs/: Prepared module inputs (for debugging)
- runs/: Execution results from each module
:param project_path: Path to the project directory.
:returns: Path to the project storage directory.
@@ -89,102 +104,91 @@ class LocalStorage:
logger = get_logger()
storage_path = self._get_project_path(project_path)
# Create directory structure
(storage_path / "assets").mkdir(parents=True, exist_ok=True)
# Create directory structure (minimal for OSS)
storage_path.mkdir(parents=True, exist_ok=True)
(storage_path / "runs").mkdir(parents=True, exist_ok=True)
# Create .gitignore to avoid committing large files
gitignore_path = storage_path / ".gitignore"
if not gitignore_path.exists():
gitignore_content = """# FuzzForge storage - ignore large/temporary files
# Execution results (can be very large)
runs/
# Project configuration
!config.json
"""
gitignore_path.write_text(gitignore_content)
logger.info("initialized project storage", project=project_path.name, storage=str(storage_path))
return storage_path
def get_project_assets_path(self, project_path: Path) -> Path | None:
"""Get the path to project assets archive.
"""Get the path to project assets (source directory).
Returns the configured source path for the project.
In OSS mode, this is just a reference to the user's source - no copying.
:param project_path: Path to the project directory.
:returns: Path to assets archive, or None if not found.
:returns: Path to source directory, or None if not configured.
"""
storage_path = self._get_project_path(project_path)
assets_dir = storage_path / "assets"
config_path = storage_path / "config.json"
# Look for assets archive
archive_path = assets_dir / "assets.tar.gz"
if archive_path.exists():
return archive_path
if config_path.exists():
import json
config = json.loads(config_path.read_text())
source_path = config.get("source_path")
if source_path:
path = Path(source_path)
if path.exists():
return path
# Check if there are any files in assets directory
if assets_dir.exists() and any(assets_dir.iterdir()):
# Create archive from directory contents
return self._create_archive_from_directory(assets_dir)
# Fallback: check if project_path itself is the source
# (common case: user runs from their project directory)
if (project_path / "Cargo.toml").exists() or (project_path / "src").exists():
return project_path
return None
def _create_archive_from_directory(self, directory: Path) -> Path:
"""Create a tar.gz archive from a directory's contents.
def set_project_assets(self, project_path: Path, assets_path: Path) -> Path:
"""Set the source path for a project (no copying).
:param directory: Directory to archive.
:returns: Path to the created archive.
"""
archive_path = directory.parent / f"{directory.name}.tar.gz"
with Archive(archive_path, "w:gz") as tar:
for item in directory.iterdir():
tar.add(item, arcname=item.name)
return archive_path
def create_empty_assets_archive(self, project_path: Path) -> Path:
"""Create an empty assets archive for a project.
Just stores a reference to the source directory.
The source is mounted directly into containers at runtime.
:param project_path: Path to the project directory.
:returns: Path to the empty archive.
:param assets_path: Path to source directory.
:returns: The assets path (unchanged).
:raises StorageError: If path doesn't exist.
"""
storage_path = self._get_project_path(project_path)
assets_dir = storage_path / "assets"
assets_dir.mkdir(parents=True, exist_ok=True)
import json
archive_path = assets_dir / "assets.tar.gz"
# Create empty archive
with Archive(archive_path, "w:gz") as tar:
pass # Empty archive
return archive_path
def store_assets(self, project_path: Path, assets_path: Path) -> Path:
"""Store project assets from a local path.
:param project_path: Path to the project directory.
:param assets_path: Source path (file or directory) to store.
:returns: Path to the stored assets.
:raises StorageError: If storage operation fails.
"""
logger = get_logger()
if not assets_path.exists():
raise StorageError(f"Assets path does not exist: {assets_path}")
# Resolve to absolute path
assets_path = assets_path.resolve()
# Store reference in config
storage_path = self._get_project_path(project_path)
assets_dir = storage_path / "assets"
assets_dir.mkdir(parents=True, exist_ok=True)
storage_path.mkdir(parents=True, exist_ok=True)
config_path = storage_path / "config.json"
try:
if assets_path.is_file():
# Copy archive directly
dest_path = assets_dir / "assets.tar.gz"
shutil.copy2(assets_path, dest_path)
else:
# Create archive from directory
dest_path = assets_dir / "assets.tar.gz"
with Archive(dest_path, "w:gz") as tar:
for item in assets_path.iterdir():
tar.add(item, arcname=item.name)
config: dict = {}
if config_path.exists():
config = json.loads(config_path.read_text())
logger.info("stored project assets", project=project_path.name, path=str(dest_path))
return dest_path
config["source_path"] = str(assets_path)
config_path.write_text(json.dumps(config, indent=2))
except Exception as exc:
message = f"Failed to store assets: {exc}"
raise StorageError(message) from exc
logger.info("set project assets", project=project_path.name, source=str(assets_path))
return assets_path
def store_execution_results(
self,

View File

@@ -17,7 +17,6 @@ dev = [
"fuzzforge-common",
"fuzzforge-types",
"fuzzforge-mcp",
"fuzzforge-cli",
]
[tool.uv.workspace]