Files
fuzzforge_ai/backend/toolbox/workflows/llm_analysis/metadata.yaml
tduhamel42 09951d68d7 fix: resolve live monitoring bug, remove deprecated parameters, and auto-start Python worker
- Fix live monitoring style error by calling _live_monitor() helper directly
- Remove default_parameters duplication from 10 workflow metadata files
- Remove deprecated volume_mode parameter from 26 files across CLI, SDK, backend, and docs
- Configure Python worker to start automatically with docker compose up
- Clean up constants, validation, completion, and example files

Fixes #
- Live monitoring now works correctly with --live flag
- Workflow metadata follows JSON Schema standard
- Cleaner codebase without deprecated volume_mode
- Python worker (most commonly used) starts by default
2025-10-22 16:26:58 +02:00
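The `default_parameters` removal above works because JSON Schema can carry defaults inline via its `default` keyword, so a second defaults block is redundant. A minimal sketch of pulling defaults straight out of the schema — `extract_defaults` is a hypothetical helper and the default values shown are illustrative, not FuzzForge's actual defaults:

```python
# Hedged sketch: defaults live inside the JSON Schema itself, so a separate
# default_parameters block is no longer needed.
def extract_defaults(schema: dict) -> dict:
    """Collect the 'default' value of every property that declares one."""
    return {
        name: spec["default"]
        for name, spec in schema.get("properties", {}).items()
        if "default" in spec
    }

# Illustrative schema fragment (values are assumptions, not the real metadata).
schema = {
    "type": "object",
    "properties": {
        "llm_model": {"type": "string", "default": "gpt-4o-mini"},
        "max_files": {"type": "integer", "default": 100},
        "timeout": {"type": "integer"},  # no default: caller must supply it
    },
}
```

`extract_defaults(schema)` here yields `{"llm_model": "gpt-4o-mini", "max_files": 100}`; properties without a `default` are simply omitted.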


name: llm_analysis
version: "1.0.0"
vertical: python
description: "Uses AI/LLM to analyze code for security vulnerabilities and code quality issues"
author: "FuzzForge Team"
tags:
  - "llm"
  - "ai"
  - "security"
  - "static-analysis"
  - "code-quality"

# Workspace isolation mode
workspace_isolation: "shared"

parameters:
  type: object
  properties:
    agent_url:
      type: string
      description: "A2A agent endpoint URL"
    llm_model:
      type: string
      description: "LLM model to use (e.g., gpt-4o-mini, claude-3-5-sonnet)"
    llm_provider:
      type: string
      description: "LLM provider (openai, anthropic, etc.)"
    file_patterns:
      type: array
      items:
        type: string
      description: "File patterns to analyze (e.g., ['*.py', '*.js'])"
    max_files:
      type: integer
      description: "Maximum number of files to analyze"
    max_file_size:
      type: integer
      description: "Maximum file size in bytes"
    timeout:
      type: integer
      description: "Timeout per file in seconds"

output_schema:
  type: object
  properties:
    sarif:
      type: object
      description: "SARIF-formatted security findings from LLM"
    summary:
      type: object
      description: "Analysis summary"
      properties:
        files_analyzed:
          type: integer
        total_findings:
          type: integer
        model_used:
          type: string
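Since the `parameters` block is an ordinary JSON Schema object, run parameters can be checked against it before a workflow is dispatched. A minimal stdlib-only sketch of that idea — `check_params` and `TYPE_MAP` are hypothetical helpers, not FuzzForge's actual validator, and a real deployment would more likely use a full JSON Schema library:

```python
# Hedged sketch: type-check candidate run parameters against the
# "parameters" JSON-Schema block from metadata.yaml (schema mirrored
# inline here; in practice it would come from yaml.safe_load).
TYPE_MAP = {"string": str, "integer": int, "array": list, "object": dict}

def check_params(schema: dict, params: dict) -> list[str]:
    """Return a list of human-readable validation errors (empty = valid)."""
    errors = []
    props = schema.get("properties", {})
    for name, value in params.items():
        spec = props.get(name)
        if spec is None:
            errors.append(f"unknown parameter: {name}")
            continue
        expected = TYPE_MAP.get(spec.get("type"))
        if expected and not isinstance(value, expected):
            errors.append(f"{name}: expected {spec['type']}")
            continue
        if spec.get("type") == "array":
            item_type = TYPE_MAP.get(spec.get("items", {}).get("type"))
            if item_type and not all(isinstance(v, item_type) for v in value):
                errors.append(f"{name}: items must be {spec['items']['type']}")
    return errors

# Subset of the llm_analysis parameter schema, for illustration.
schema = {
    "type": "object",
    "properties": {
        "llm_model": {"type": "string"},
        "max_files": {"type": "integer"},
        "file_patterns": {"type": "array", "items": {"type": "string"}},
    },
}
```

For example, `check_params(schema, {"llm_model": "gpt-4o-mini", "max_files": 50})` returns `[]`, while passing `"50"` as a string for `max_files` reports a type error.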