Mirror of https://github.com/CyberSecurityUP/NeuroSploit.git (synced 2026-02-12 14:02:45 +00:00)

Commit: "Add files via upload" (README.md, 716 lines changed)
**AI-Powered Autonomous Penetration Testing Platform**

NeuroSploit v3 is an advanced security assessment platform that combines AI-driven autonomous agents with 100 vulnerability types, per-scan isolated Kali Linux containers, false-positive hardening, exploit chaining, and a modern React web interface with real-time monitoring.

---

## Highlights

- **100 Vulnerability Types** across 10 categories with AI-driven testing prompts
- **Autonomous Agent** - 3-stream parallel pentest (recon + junior tester + tool runner)
- **Per-Scan Kali Containers** - Each scan runs in its own isolated Docker container
- **Anti-Hallucination Pipeline** - Negative controls, proof-of-execution, confidence scoring
- **Exploit Chain Engine** - Automatically chains findings (SSRF->internal, SQLi->DB-specific, etc.)
- **WAF Detection & Bypass** - 16 WAF signatures, 12 bypass techniques
- **Smart Strategy Adaptation** - Dead endpoint detection, diminishing returns, priority recomputation
- **Multi-Provider LLM** - Claude, GPT, Gemini, Ollama, LMStudio, OpenRouter
- **Real-Time Dashboard** - WebSocket-powered live scan progress, findings, and reports
- **Sandbox Dashboard** - Monitor running Kali containers, tools, and health checks in real time

---

## Table of Contents

- [Quick Start](#quick-start)
- [Features](#features)
- [Architecture](#architecture)
- [Autonomous Agent](#autonomous-agent)
- [100 Vulnerability Types](#100-vulnerability-types)
- [Kali Sandbox System](#kali-sandbox-system)
- [Anti-Hallucination & Validation](#anti-hallucination--validation)
- [Web GUI](#web-gui)
- [API Reference](#api-reference)
- [Vulnerability Engine](#vulnerability-engine)
- [Configuration](#configuration)
- [Development](#development)
- [Security Notice](#security-notice)
---

## Quick Start

### Option 1: Standard Setup

```bash
# Clone repository
git clone https://github.com/CyberSecurityUP/NeuroSploit.git
cd NeuroSploit

# Copy environment file and add your API keys
cp .env.example .env
nano .env  # Add ANTHROPIC_API_KEY, OPENAI_API_KEY, or GEMINI_API_KEY

# Build the Kali sandbox image (first time only, ~5 min)
./scripts/build-kali.sh

# Start backend
uvicorn backend.main:app --host 0.0.0.0 --port 8000
```
### Option 2: Manual Setup

```bash
# Backend
cd backend
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
uvicorn backend.main:app --host 0.0.0.0 --port 8000 --reload

# Frontend (new terminal)
cd frontend
npm install
npm run dev
```
### Build Kali Sandbox Image

```bash
# Normal build (uses Docker cache)
./scripts/build-kali.sh

# Full rebuild (no cache)
./scripts/build-kali.sh --fresh

# Build + run health check
./scripts/build-kali.sh --test

# Or via docker-compose
docker compose -f docker/docker-compose.kali.yml build
```

Access the web interface at **http://localhost:8000** (production build) or **http://localhost:5173** (dev mode).

---
## Architecture

```
NeuroSploitv3/
├── backend/                      # FastAPI Backend
│   ├── api/v1/                   # REST API (13 routers)
│   │   ├── scans.py              # Scan CRUD + pause/resume/stop
│   │   ├── agent.py              # AI Agent control
│   │   ├── agent_tasks.py        # Scan task tracking
│   │   ├── dashboard.py          # Stats + activity feed
│   │   ├── reports.py            # Report generation (HTML/PDF/JSON)
│   │   ├── scheduler.py          # Cron/interval scheduling
│   │   ├── vuln_lab.py           # Per-type vulnerability lab
│   │   ├── terminal.py           # Terminal agent (10 endpoints)
│   │   ├── sandbox.py            # Sandbox container monitoring
│   │   ├── targets.py            # Target validation
│   │   ├── prompts.py            # Preset prompts
│   │   ├── vulnerabilities.py    # Vulnerability management
│   │   └── settings.py           # Runtime settings
│   ├── core/
│   │   ├── autonomous_agent.py   # Main AI agent (~7000 lines)
│   │   ├── vuln_engine/          # 100-type vulnerability engine
│   │   │   ├── registry.py       # 100 VULNERABILITY_INFO entries
│   │   │   ├── payload_generator.py  # 526 payloads across 95 libraries
│   │   │   ├── ai_prompts.py     # Per-vuln AI decision prompts
│   │   │   ├── system_prompts.py # 12 anti-hallucination prompts
│   │   │   └── testers/          # 10 category tester modules
│   │   ├── validation/           # False-positive hardening
│   │   │   ├── negative_control.py    # Benign request control engine
│   │   │   ├── proof_of_execution.py  # Per-type proof checks (25+ methods)
│   │   │   ├── confidence_scorer.py   # Numeric 0-100 scoring
│   │   │   └── validation_judge.py    # Sole authority for finding approval
│   │   ├── request_engine.py     # Retry, rate limit, circuit breaker
│   │   ├── waf_detector.py       # 16 WAF signatures + bypass
│   │   ├── strategy_adapter.py   # Mid-scan strategy adaptation
│   │   ├── chain_engine.py       # 10 exploit chain rules
│   │   ├── auth_manager.py       # Multi-user auth management
│   │   ├── xss_context_analyzer.py   # 8-context XSS analysis
│   │   ├── poc_generator.py      # 20+ per-type PoC generators
│   │   ├── execution_history.py  # Cross-scan learning
│   │   ├── access_control_learner.py # Adaptive BOLA/BFLA/IDOR learning
│   │   ├── response_verifier.py  # 4-signal response verification
│   │   ├── agent_memory.py       # Bounded dedup agent memory
│   │   └── report_engine/        # OHVR report generator
│   ├── models/                   # SQLAlchemy ORM models
│   ├── db/                       # Database layer
│   ├── config.py                 # Pydantic settings
│   └── main.py                   # FastAPI app entry
│
├── core/                         # Shared core modules
│   ├── llm_manager.py            # Multi-provider LLM routing
│   ├── sandbox_manager.py        # BaseSandbox ABC + legacy shared sandbox
│   ├── kali_sandbox.py           # Per-scan Kali container manager
│   ├── container_pool.py         # Global container pool coordinator
│   ├── tool_registry.py          # 56 tool install recipes for Kali
│   ├── mcp_server.py             # MCP server (12 tools, stdio)
│   ├── scheduler.py              # APScheduler scan scheduling
│   └── browser_validator.py      # Playwright browser validation
│
├── frontend/                     # React + TypeScript Frontend
│   ├── src/
│   │   ├── pages/
│   │   │   ├── HomePage.tsx              # Dashboard with stats
│   │   │   ├── AutoPentestPage.tsx       # 3-stream auto pentest
│   │   │   ├── VulnLabPage.tsx           # Per-type vulnerability lab
│   │   │   ├── TerminalAgentPage.tsx     # AI terminal chat
│   │   │   ├── SandboxDashboardPage.tsx  # Container monitoring
│   │   │   ├── ScanDetailsPage.tsx       # Findings + validation
│   │   │   ├── SchedulerPage.tsx         # Cron/interval scheduling
│   │   │   ├── SettingsPage.tsx          # Configuration
│   │   │   └── ReportsPage.tsx           # Report management
│   │   ├── components/           # Reusable UI components
│   │   ├── services/api.ts       # API client layer
│   │   └── types/index.ts        # TypeScript interfaces
│   └── package.json
│
├── docker/
│   ├── Dockerfile.kali           # Multi-stage Kali sandbox (11 Go tools)
│   ├── Dockerfile.sandbox        # Legacy Debian sandbox
│   ├── Dockerfile.backend        # Backend container
│   ├── Dockerfile.frontend       # Frontend container
│   ├── docker-compose.kali.yml   # Kali sandbox build
│   └── docker-compose.sandbox.yml  # Legacy sandbox
│
├── config/config.json            # Profiles, tools, sandbox, MCP
├── data/
│   ├── vuln_knowledge_base.json  # 100 vuln type definitions
│   ├── execution_history.json    # Cross-scan learning data
│   └── access_control_learning.json  # BOLA/BFLA adaptive data
│
├── scripts/
│   └── build-kali.sh             # Build/rebuild Kali image
├── tools/
│   └── benchmark_runner.py       # 104 CTF challenges
├── agents/base_agent.py          # BaseAgent class
├── neurosploit.py                # CLI entry point
└── requirements.txt
```
---

## Autonomous Agent

The AI agent (`autonomous_agent.py`) orchestrates the entire penetration test autonomously.

### 3-Stream Parallel Architecture
```
             ┌─────────────────────┐
             │    Auto Pentest     │
             │    Target URL(s)    │
             └──────────┬──────────┘
                        │
       ┌────────────────┼────────────────┐
       ▼                ▼                ▼
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│   Stream 1   │ │   Stream 2   │ │   Stream 3   │
│    Recon     │ │ Junior Test  │ │ Tool Runner  │
│ ──────────── │ │ ──────────── │ │ ──────────── │
│ Crawl pages  │ │ Test target  │ │ Nuclei scan  │
│ Find params  │ │ AI-priority  │ │ Naabu ports  │
│ Tech detect  │ │ 3 payloads   │ │ AI decides   │
│ WAF detect   │ │ per endpoint │ │ extra tools  │
└──────┬───────┘ └──────┬───────┘ └──────┬───────┘
       │                │                │
       └────────────────┼────────────────┘
                        ▼
             ┌─────────────────────┐
             │    Deep Analysis    │
             │   100 vuln types    │
             │  Full payload sets  │
             │ Chain exploitation  │
             └──────────┬──────────┘
                        ▼
             ┌─────────────────────┐
             │  Report Generation  │
             │ AI executive brief  │
             │ PoC code per find   │
             └─────────────────────┘
```
### Agent Autonomy Modules

| Module | Description |
|--------|-------------|
| **Request Engine** | Retry with backoff, per-host rate limiting, circuit breaker, adaptive timeouts |
| **WAF Detector** | 16 WAF signatures (Cloudflare, AWS, Akamai, Imperva, etc.), 12 bypass techniques |
| **Strategy Adapter** | Dead endpoint detection, diminishing returns, 403 bypass, priority recomputation |
| **Chain Engine** | 10 chain rules (SSRF->internal, SQLi->DB-specific, LFI->config, IDOR pattern transfer) |
| **Auth Manager** | Multi-user contexts (user_a, user_b, admin), login form detection, session management |

### Scan Features

- **Pause / Resume / Stop** with checkpoints
- **Manual Validation** - Confirm or reject AI findings
- **Screenshot Capture** on confirmed findings (Playwright)
- **Cross-Scan Learning** - Historical success rates influence future priorities
- **CVE Testing** - Regex detection + AI-generated payloads
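The chain rules in the table above can be pictured as data: each confirmed finding type maps to follow-up tests worth queueing. A minimal sketch (the rule names, dictionary shape, and function below are illustrative assumptions, not the actual `chain_engine.py` API):

```python
# Illustrative exploit-chaining sketch: confirmed finding types expand
# into follow-up test names. All names here are hypothetical.
CHAIN_RULES = {
    "ssrf": ["internal_port_scan", "cloud_metadata_access"],
    "sqli": ["db_fingerprint", "db_specific_extraction"],
    "lfi": ["config_file_read", "log_poisoning"],
}

def next_steps(confirmed_findings: list[dict]) -> list[str]:
    """Expand confirmed findings into deduplicated follow-up test names."""
    queue: list[str] = []
    for finding in confirmed_findings:
        for step in CHAIN_RULES.get(finding["type"], []):
            if step not in queue:
                queue.append(step)
    return queue
```

The real engine also transfers learned patterns between findings (e.g. an IDOR pattern reused on sibling endpoints); this sketch only shows the type-to-follow-up expansion.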
---

## 100 Vulnerability Types

### Categories

| Category | Types | Examples |
|----------|-------|----------|
| **Injection** | 38 | XSS (reflected/stored/DOM), SQLi, NoSQLi, Command Injection, SSTI, LDAP, XPath, CRLF, Header Injection, Log Injection, GraphQL Injection |
| **Inspection** | 21 | Security Headers, CORS, Clickjacking, Info Disclosure, Debug Endpoints, Error Disclosure, Source Code Exposure |
| **AI-Driven** | 41 | BOLA, BFLA, IDOR, Race Condition, Business Logic, JWT Manipulation, OAuth Flaws, Prototype Pollution, WebSocket Hijacking, Cache Poisoning, HTTP Request Smuggling |
| **Authentication** | 8 | Auth Bypass, Session Fixation, Credential Stuffing, Password Reset Flaws, MFA Bypass, Default Credentials |
| **Authorization** | 6 | BOLA, BFLA, IDOR, Privilege Escalation, Forced Browsing, Function-Level Access Control |
| **File Access** | 5 | LFI, RFI, Path Traversal, File Upload, XXE |
| **Request Forgery** | 4 | SSRF, CSRF, Cloud Metadata, DNS Rebinding |
| **Client-Side** | 8 | CORS, Clickjacking, Open Redirect, DOM Clobbering, Prototype Pollution, PostMessage, CSS Injection |
| **Infrastructure** | 6 | SSL/TLS, HTTP Methods, Subdomain Takeover, Host Header, CNAME Hijacking |
| **Cloud/Supply** | 4 | Cloud Metadata, S3 Bucket Misconfiguration, Dependency Confusion, Third-Party Script |
### Payload Engine

- **526 payloads** across 95 libraries
- **73 stored-XSS payloads** + 5 context-specific sets
- Per-type AI decision prompts with anti-hallucination directives
- WAF-adaptive payload transformation (12 techniques)
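Payload transformation of this kind can be pictured as composable functions over a base payload. The sketch below shows three generic, well-known bypass transformations (case alternation, percent-encoding, inline SQL comments); these are illustrations of the idea, not the project's actual 12 techniques, and the function names are assumptions:

```python
# Generic, illustrative payload transformations. Real WAF bypass logic
# is adaptive and signature-specific; this only shows the shape.
def case_swap(payload: str) -> str:
    """Alternate character case, e.g. to dodge naive keyword filters."""
    return "".join(c.upper() if i % 2 else c.lower() for i, c in enumerate(payload))

def url_encode(payload: str) -> str:
    """Percent-encode every character."""
    return "".join(f"%{ord(c):02x}" for c in payload)

def inline_comments(payload: str) -> str:
    """Break up SQL keywords with inline comments."""
    return payload.replace(" ", "/**/")

def variants(payload: str) -> list[str]:
    """Produce one variant per transformation for the tester to try."""
    return [f(payload) for f in (case_swap, url_encode, inline_comments)]
```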
---

## Kali Sandbox System

Each scan runs in its own **isolated Kali Linux Docker container**, providing:

- **Complete Isolation** - No interference between concurrent scans
- **On-Demand Tools** - 56 tools installed only when needed
- **Auto Cleanup** - Containers destroyed when the scan completes
- **Resource Limits** - Per-container memory (2 GB) and CPU (2 cores) caps
### Pre-Installed Tools (28)

| Category | Tools |
|----------|-------|
| **Scanners** | nuclei, naabu, httpx, nmap, nikto, masscan, whatweb |
| **Discovery** | subfinder, katana, dnsx, uncover, ffuf, gobuster, waybackurls |
| **Exploitation** | dalfox, sqlmap |
| **System** | curl, wget, git, python3, pip3, go, jq, dig, whois, openssl, netcat, bash |

### On-Demand Tools (28 more)

Installed automatically inside the container when first requested:

- **APT**: wpscan, dirb, hydra, john, hashcat, testssl, sslscan, enum4linux, dnsrecon, amass, medusa, crackmapexec, etc.
- **Go**: gau, gitleaks, anew, httprobe
- **Pip**: dirsearch, wfuzz, arjun, wafw00f, sslyze, commix, trufflehog, retire
### Container Pool

```
ContainerPool (global coordinator, max 5 concurrent)
├── KaliSandbox(scan_id="abc")  → docker: neurosploit-abc
├── KaliSandbox(scan_id="def")  → docker: neurosploit-def
└── KaliSandbox(scan_id="ghi")  → docker: neurosploit-ghi
```

- **TTL enforcement** - Containers auto-destroyed after 60 min
- **Orphan cleanup** - Stale containers removed on server startup
- **Graceful fallback** - Falls back to a shared container if Docker is unavailable
---

## Anti-Hallucination & Validation

NeuroSploit uses a multi-layered validation pipeline to weed out false positives:

### Validation Pipeline
```
Finding Candidate
          │
          ▼
┌─────────────────────┐
│  Negative Controls  │  Send benign/empty requests as controls
│ Same behavior = FP  │  -60 confidence if same response
└──────────┬──────────┘
           ▼
┌─────────────────────┐
│ Proof of Execution  │  25+ per-vuln-type proof methods
│ XSS: context check  │  SSRF: metadata markers
│ SQLi: DB errors     │  BOLA: data comparison
└──────────┬──────────┘
           ▼
┌─────────────────────┐
│  AI Interpretation  │  LLM with anti-hallucination prompts
│ Per-type system msgs│  12 composable prompt templates
└──────────┬──────────┘
           ▼
┌─────────────────────┐
│  Confidence Scorer  │  0-100 numeric score
│ ≥90 = confirmed     │  +proof, +impact, +controls
│ ≥60 = likely        │  -baseline_only, -same_behavior
│ <60 = rejected      │  Breakdown visible in UI
└──────────┬──────────┘
           ▼
┌─────────────────────┐
│  Validation Judge   │  Final verdict authority
│  approve / reject   │  Records for adaptive learning
└─────────────────────┘
```
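The scorer stage can be sketched as additive signals clamped to 0-100. The verdict cutoffs (≥90 confirmed, ≥60 likely, otherwise rejected) and the -60 same-behavior penalty come from the pipeline above; every other weight and name below is an assumption, not the actual `confidence_scorer.py` implementation:

```python
# Illustrative additive confidence scoring. Only the -60 same-behavior
# penalty and the >=90 / >=60 cutoffs are documented; other weights
# are assumed for the sketch.
SIGNALS = {
    "proof_of_execution": 50,   # concrete evidence observed
    "impact_demonstrated": 30,  # e.g. data actually read
    "controls_differ": 20,      # negative control behaved differently
    "baseline_only": -30,       # behavior already present in baseline
    "same_behavior": -60,       # negative control behaved identically
}

def score(signals: list[str]) -> int:
    """Sum signal weights and clamp into the 0-100 range."""
    return max(0, min(100, sum(SIGNALS.get(s, 0) for s in signals)))

def verdict(signals: list[str]) -> str:
    s = score(signals)
    return "confirmed" if s >= 90 else "likely" if s >= 60 else "rejected"
```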
### Anti-Hallucination System Prompts

12 composable prompts applied across 7 task contexts, including:

- `anti_hallucination` - Core truthfulness directives
- `proof_of_execution` - Require concrete evidence
- `negative_controls` - Compare with benign requests
- `anti_severity_inflation` - Accurate severity ratings
- `access_control_intelligence` - BOLA/BFLA data-comparison methodology

### Access Control Adaptive Learning

- Records TP/FP outcomes per domain for BOLA/BFLA/IDOR
- 9 default response patterns, 6 known FP patterns (WSO2, Keycloak, etc.)
- Historical FP rate influences future confidence scoring

---
## Web GUI

### Pages

| Page | Route | Description |
|------|-------|-------------|
| **Dashboard** | `/` | Stats overview, severity distribution, recent activity feed |
| **Auto Pentest** | `/auto` | One-click autonomous pentest with 3-stream live display |
| **Vuln Lab** | `/vuln-lab` | Per-type vulnerability testing (100 types, 11 categories) |
| **Terminal Agent** | `/terminal` | AI-powered interactive security chat + tool execution |
| **Sandboxes** | `/sandboxes` | Real-time Docker container monitoring + management |
| **AI Agent** | `/scan/new` | Manual scan creation with prompt selection |
| **Scan Details** | `/scan/:id` | Findings with confidence badges, pause/resume/stop |
| **Scheduler** | `/scheduler` | Cron/interval automated scan scheduling |
| **Reports** | `/reports` | HTML/PDF/JSON report generation and viewing |
| **Settings** | `/settings` | LLM providers, model routing, feature toggles |
### Sandbox Dashboard

Real-time monitoring of per-scan Kali containers:

- **Pool stats** - Active/max containers, Docker status, TTL
- **Capacity bar** - Visual utilization indicator
- **Per-container cards** - Name, scan link, uptime, installed tools, status
- **Actions** - Health check, destroy (with confirmation), cleanup expired/orphans
- **5-second auto-polling** for real-time updates
---

## API Reference

Base URL: `http://localhost:8000/api/v1`

#### Scans

| Method | Endpoint | Description |
|--------|----------|-------------|
| `POST` | `/scans` | Create new scan |
| `GET` | `/scans` | List all scans |
| `GET` | `/scans/{id}` | Get scan details |
| `POST` | `/scans/{id}/start` | Start scan |
| `POST` | `/scans/{id}/stop` | Stop scan |
| `POST` | `/scans/{id}/pause` | Pause scan |
| `POST` | `/scans/{id}/resume` | Resume scan |
| `DELETE` | `/scans/{id}` | Delete scan |
| `GET` | `/scans/{id}/endpoints` | Get discovered endpoints |
| `GET` | `/scans/{id}/vulnerabilities` | Get found vulnerabilities |
#### AI Agent

| Method | Endpoint | Description |
|--------|----------|-------------|
| `POST` | `/agent/run` | Launch autonomous agent |
| `GET` | `/agent/status/{id}` | Get agent status + findings |
| `GET` | `/agent/by-scan/{scan_id}` | Get agent by scan ID |
| `POST` | `/agent/stop/{id}` | Stop agent |
| `POST` | `/agent/pause/{id}` | Pause agent |
| `POST` | `/agent/resume/{id}` | Resume agent |
| `GET` | `/agent/findings/{id}` | Get findings with details |
| `GET` | `/agent/logs/{id}` | Get agent logs |
#### Sandbox

| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/sandbox` | List containers + pool status |
| `GET` | `/sandbox/{scan_id}` | Health-check container |
| `DELETE` | `/sandbox/{scan_id}` | Destroy container |
| `POST` | `/sandbox/cleanup` | Remove expired containers |
| `POST` | `/sandbox/cleanup-orphans` | Remove orphan containers |
#### Scheduler

| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/scheduler` | List scheduled jobs |
| `POST` | `/scheduler` | Create scheduled job |
| `DELETE` | `/scheduler/{id}` | Delete job |
| `POST` | `/scheduler/{id}/pause` | Pause job |
| `POST` | `/scheduler/{id}/resume` | Resume job |
#### Vulnerability Lab

| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | `/vuln-lab/types` | List 100 vuln types by category |
| `POST` | `/vuln-lab/run` | Run per-type vulnerability test |
| `GET` | `/vuln-lab/challenges` | List challenge runs |
| `GET` | `/vuln-lab/stats` | Detection-rate stats |
#### Reports & Dashboard

| Method | Endpoint | Description |
|--------|----------|-------------|
| `POST` | `/reports` | Generate report |
| `POST` | `/reports/ai-generate` | AI-powered report |
| `GET` | `/reports/{id}/view` | View HTML report |
| `GET` | `/dashboard/stats` | Dashboard statistics |
| `GET` | `/dashboard/activity-feed` | Recent activity |
### WebSocket

```
ws://localhost:8000/ws/scan/{scan_id}
```

Events: `scan_started`, `progress_update`, `finding_discovered`, `scan_completed`, `scan_error`
---
## Vulnerability Engine

### How It Works

1. **Prompt Parsing** - The user prompt is analyzed for vulnerability keywords
2. **Type Extraction** - Relevant vulnerability types are identified
3. **Tester Selection** - Appropriate testers are loaded from the registry
4. **Payload Generation** - Context-aware payloads are generated
5. **Testing Execution** - Tests run against target endpoints
6. **Finding Reporting** - Results are streamed via WebSocket in real time
### Prompt Examples

```
"Test for SQL injection and XSS vulnerabilities"
→ Extracts: sql_injection, xss_reflected, xss_stored

"Check for OWASP Top 10 issues"
→ Extracts: All major vulnerability types

"Look for authentication bypass and IDOR"
→ Extracts: auth_bypass, idor, bola

"Find server-side request forgery and file inclusion"
→ Extracts: ssrf, lfi, rfi, path_traversal
```
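The extraction step can be pictured as keyword matching over the prompt (an illustrative sketch; the actual parser in the prompt engine is richer, and the keyword map below is an assumption built from the examples above):

```python
# Illustrative keyword -> vuln-type map; the real parser is richer.
KEYWORD_MAP = {
    "sql injection": ["sql_injection"],
    "xss": ["xss_reflected", "xss_stored"],
    "idor": ["idor", "bola"],
    "request forgery": ["ssrf"],
    "file inclusion": ["lfi", "rfi", "path_traversal"],
}

def extract_vuln_types(prompt: str) -> list[str]:
    """Return vuln types whose trigger keywords appear in the prompt."""
    text = prompt.lower()
    found: list[str] = []
    for keyword, types in KEYWORD_MAP.items():
        if keyword in text:
            found.extend(t for t in types if t not in found)
    return found
```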
### Adding Custom Testers

Create a new tester in `backend/core/vuln_engine/testers/`:

```python
from .base_tester import BaseTester, TestResult


class MyCustomTester(BaseTester):
    """Custom vulnerability tester"""

    async def test(self, url: str, endpoint: str, params: dict) -> list[TestResult]:
        results = []
        # Your testing logic here
        return results
```
Register it in `backend/core/vuln_engine/registry.py`:

```python
VULNERABILITY_REGISTRY["my_custom_vuln"] = {
    "name": "My Custom Vulnerability",
    "category": "custom",
    "severity": "high",
    "tester": "MyCustomTester",
    # ...
}
```
### API Docs

Interactive docs are available at:

- Swagger UI: `http://localhost:8000/api/docs`
- ReDoc: `http://localhost:8000/api/redoc`
---

## Configuration

### Environment Variables
```bash
# .env file

# LLM API Keys (at least one required for AI-powered testing)
ANTHROPIC_API_KEY=your-anthropic-api-key
OPENAI_API_KEY=your-openai-api-key
GEMINI_API_KEY=your-gemini-api-key

# Local LLM (optional)
OLLAMA_BASE_URL=http://localhost:11434
LMSTUDIO_BASE_URL=http://localhost:1234
OPENROUTER_API_KEY=your-key

# Database (default is SQLite)
DATABASE_URL=sqlite+aiosqlite:///./data/neurosploit.db

# Server
HOST=0.0.0.0
PORT=8000
DEBUG=false
```
### config/config.json

```json
{
  "llm": {
    "default_profile": "gemini_pro_default",
    "profiles": { ... }
  },
  "agent_roles": {
    "pentest_generalist": { "vuln_coverage": 100 },
    "bug_bounty_hunter": { "vuln_coverage": 100 }
  },
  "sandbox": {
    "mode": "per_scan",
    "kali": {
      "enabled": true,
      "image": "neurosploit-kali:latest",
      "max_concurrent": 5,
      "container_ttl_minutes": 60
    }
  },
  "mcp_servers": {
    "neurosploit_tools": {
      "transport": "stdio",
      "command": "python3",
      "args": ["-m", "core.mcp_server"]
    }
  }
}
```
---
|
||||
|
||||
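Reading this file and resolving the active LLM profile takes only a few lines. `load_default_profile` below is a hypothetical helper for illustration, not a function NeuroSploit ships:

```python
import json

def load_default_profile(path="config/config.json"):
    """Return (profile_name, profile_settings) for the default LLM profile (sketch)."""
    with open(path) as f:
        cfg = json.load(f)
    llm = cfg["llm"]
    name = llm["default_profile"]
    # Fall back to an empty dict if the named profile is missing.
    return name, llm["profiles"].get(name, {})
```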
## Development

### Backend

```bash
cd backend
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Run with hot reload
uvicorn backend.main:app --reload --host 0.0.0.0 --port 8000

# API docs: http://localhost:8000/api/docs
```

### Frontend

```bash
cd frontend
npm install
npm run dev    # Dev server at http://localhost:5173
npm run build  # Production build
```

### Running Tests

```bash
# Backend tests
cd backend
pytest

# Frontend tests
cd frontend
npm test
```

### Build Kali Sandbox

```bash
./scripts/build-kali.sh --test  # Build + health check
```

---

### MCP Server

```bash
python3 -m core.mcp_server  # Starts stdio MCP server (12 tools)
```

---

## Upgrading from v2

v3 is a complete rewrite with a new architecture. Key differences:

| Feature | v2 | v3 |
|---------|----|----|
| Interface | CLI only | Web GUI + API |
| Vulnerability Testing | Hardcoded (XSS, SQLi, LFI) | Dynamic, 100 types |
| Test Selection | Manual | Prompt-driven |
| Progress Updates | Terminal output | WebSocket real-time |
| Reports | HTML file | Web viewer + export |
| Deployment | Python script | Docker Compose |

**Migration:** v3 is a separate installation. Your v2 configurations and results are not compatible.

---

**This tool is for authorized security testing only.**

- Only test systems you own or have explicit written permission to test
- Follow responsible disclosure practices
- Comply with all applicable laws and regulations
- Unauthorized access to computer systems is illegal

---

## Contributing

1. Fork the repository
2. Create a feature branch
3. Submit a pull request

---

## Tech Stack

| Layer | Technologies |
|-------|-------------|
| **Backend** | Python, FastAPI, SQLAlchemy, Pydantic, aiohttp |
| **Frontend** | React 18, TypeScript, TailwindCSS, Vite |
| **AI/LLM** | Anthropic Claude, OpenAI GPT, Google Gemini, Ollama, LMStudio, OpenRouter |
| **Sandbox** | Docker, Kali Linux, ProjectDiscovery suite, Nmap, SQLMap, Nikto |
| **Tools** | Nuclei, Naabu, httpx, Subfinder, Katana, FFuf, Gobuster, Dalfox |
| **Infra** | Docker Compose, MCP Protocol, Playwright, APScheduler |

---

## Acknowledgements

### Technologies
- FastAPI, SQLAlchemy, Pydantic
- React, TypeScript, TailwindCSS, Zustand
- Docker, Nginx

### LLM Providers
- Anthropic Claude
- OpenAI GPT

---

**NeuroSploit v3** - *AI-Powered Autonomous Penetration Testing Platform*

Other files in this commit:

- `logs/backend.log`: new file, 1 line
- `logs/frontend.log`: new file, 1 line
- `models/bug-bounty/bugbounty_chat_finetuning.jsonl`: new file, 1826 lines (diff suppressed: lines too long)
- `models/bug-bounty/bugbounty_finetuning_dataset.json`: new file, 9132 lines (diff suppressed: lines too long)
- `models/bug-bounty/bugbounty_pentest_dataset.zip`: new binary file (not shown)
Report-template CSS added inside `class NeuroSploitv2` (braces are doubled because the stylesheet lives in a Python f-string):

```css
    margin-top: 3rem;
}}

/* OHVR Structure */
.ohvr-section {{
    margin: 1rem 0;
    padding: 1rem;
    background: rgba(0,0,0,0.2);
    border-radius: 8px;
}}
.ohvr-section h5 {{
    color: var(--accent);
    margin-bottom: 0.5rem;
    text-transform: uppercase;
    font-size: 0.8rem;
    letter-spacing: 1px;
}}
.screenshot-grid {{
    display: grid;
    grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
    gap: 1rem;
    margin: 1rem 0;
}}
.screenshot-card {{
    border: 1px solid var(--border-color);
    border-radius: 8px;
    overflow: hidden;
}}
.screenshot-card img {{
    width: 100%;
    height: auto;
    display: block;
}}
.screenshot-caption {{
    padding: 0.5rem;
    font-size: 0.8rem;
    color: var(--text-secondary);
    text-align: center;
}}

/* Print Styles */
@media print {{
    body {{ background: white; color: black; }}
```
The prompt template file is rewritten in this commit, replacing the long AI-analysis prompts with short task prompts. Final content:

```json
{
  "recon": {
    "network_scan": "Analyze network scan results and identify attack vectors",
    "web_enum": "Enumerate web application for vulnerabilities",
    "osint": "Perform OSINT analysis on target organization"
  },
  "exploitation": {
    "web_vuln": "Generate exploit for identified web vulnerability",
    "network_exploit": "Create network service exploitation strategy",
    "payload_generation": "Generate obfuscated payload for target system"
  },
  "privesc": {
    "linux": "Analyze Linux system for privilege escalation paths",
    "windows": "Identify Windows privilege escalation opportunities",
    "kernel": "Recommend kernel exploits for target version"
  },
  "persistence": {
    "backdoor": "Design stealthy persistence mechanism",
    "scheduled_task": "Create covert scheduled task for persistence"
  },
  "lateral_movement": {
    "ad_attack": "Plan Active Directory attack path",
    "credential_reuse": "Strategy for credential reuse across network"
  }
}
```
New file `prompts/task_library.json` (249 lines):

```json
{
  "version": "1.0",
  "updated_at": "2026-02-11T13:17:02.797476",
  "tasks": [
    {
      "id": "recon_full",
      "name": "Full Reconnaissance",
      "description": "Complete reconnaissance: subdomains, ports, technologies, endpoints",
      "category": "recon",
      "prompt": "Perform comprehensive reconnaissance on the target:\n\n1. **Subdomain Enumeration**: Find all subdomains\n2. **Port Scanning**: Identify open ports and services\n3. **Technology Detection**: Fingerprint web technologies, frameworks, servers\n4. **Endpoint Discovery**: Crawl and find all accessible endpoints\n5. **Parameter Discovery**: Find URL parameters and form inputs\n6. **JavaScript Analysis**: Extract endpoints from JS files\n7. **API Discovery**: Find API endpoints and documentation\n\nConsolidate all findings into a structured report.",
      "system_prompt": "You are a reconnaissance expert. Gather information systematically and thoroughly.",
      "tools_required": ["subfinder", "httpx", "nmap", "katana", "gau"],
      "estimated_tokens": 2000,
      "created_at": "2026-02-08T18:02:15.119727",
      "updated_at": "2026-02-08T18:02:15.119727",
      "author": "user",
      "tags": ["recon", "discovery", "enumeration"],
      "is_preset": true
    },
    {
      "id": "recon_passive",
      "name": "Passive Reconnaissance",
      "description": "Non-intrusive reconnaissance using public data only",
      "category": "recon",
      "prompt": "Perform PASSIVE reconnaissance only (no direct interaction with target):\n\n1. **OSINT**: Search for public information\n2. **DNS Records**: Enumerate DNS records\n3. **Historical Data**: Check Wayback Machine, archive.org\n4. **Certificate Transparency**: Find subdomains from CT logs\n5. **Google Dorking**: Search for exposed files/information\n6. **Social Media**: Find related accounts and information\n\nDo NOT send any requests directly to the target.",
      "system_prompt": "You are an OSINT expert. Only use passive techniques.",
      "tools_required": ["subfinder", "gau", "waybackurls"],
      "estimated_tokens": 1500,
      "created_at": "2026-02-08T18:02:15.119744",
      "updated_at": "2026-02-08T18:02:15.119744",
      "author": "user",
      "tags": ["recon", "passive", "osint"],
      "is_preset": true
    },
    {
      "id": "vuln_owasp_top10",
      "name": "OWASP Top 10 Assessment",
      "description": "Test for OWASP Top 10 vulnerabilities",
      "category": "vulnerability",
      "prompt": "Test the target for OWASP Top 10 vulnerabilities:\n\n1. **A01 - Broken Access Control**: Test for IDOR, privilege escalation\n2. **A02 - Cryptographic Failures**: Check for weak crypto, exposed secrets\n3. **A03 - Injection**: Test SQL, NoSQL, OS, LDAP injection\n4. **A04 - Insecure Design**: Analyze business logic flaws\n5. **A05 - Security Misconfiguration**: Check headers, default configs\n6. **A06 - Vulnerable Components**: Identify outdated libraries\n7. **A07 - Authentication Failures**: Test auth bypass, weak passwords\n8. **A08 - Data Integrity Failures**: Check for insecure deserialization\n9. **A09 - Security Logging Failures**: Test for logging gaps\n10. **A10 - SSRF**: Test for server-side request forgery\n\nFor each finding:\n- Provide CVSS score and calculation\n- Detailed description\n- Proof of Concept\n- Remediation recommendation",
      "system_prompt": "You are a web security expert specializing in OWASP vulnerabilities.",
      "tools_required": ["nuclei", "sqlmap", "xsstrike"],
      "estimated_tokens": 5000,
      "created_at": "2026-02-08T18:02:15.119754",
      "updated_at": "2026-02-08T18:02:15.119754",
      "author": "user",
      "tags": ["vulnerability", "owasp", "web"],
      "is_preset": true
    },
    {
      "id": "vuln_api_security",
      "name": "API Security Testing",
      "description": "Test API endpoints for security issues",
      "category": "vulnerability",
      "prompt": "Test the API for security vulnerabilities:\n\n1. **Authentication**: Test JWT, OAuth, API keys\n2. **Authorization**: Check for BOLA, BFLA, broken object level auth\n3. **Rate Limiting**: Test for missing rate limits\n4. **Input Validation**: Injection attacks on API params\n5. **Data Exposure**: Check for excessive data exposure\n6. **Mass Assignment**: Test for mass assignment vulnerabilities\n7. **Security Misconfiguration**: CORS, headers, error handling\n8. **Injection**: GraphQL, SQL, NoSQL injection\n\nFor each finding provide CVSS, PoC, and remediation.",
      "system_prompt": "You are an API security expert.",
      "tools_required": ["nuclei", "ffuf"],
      "estimated_tokens": 4000,
      "created_at": "2026-02-08T18:02:15.119761",
      "updated_at": "2026-02-08T18:02:15.119761",
      "author": "user",
      "tags": ["vulnerability", "api", "rest", "graphql"],
      "is_preset": true
    },
    {
      "id": "vuln_injection",
      "name": "Injection Testing",
      "description": "Comprehensive injection vulnerability testing",
      "category": "vulnerability",
      "prompt": "Test all input points for injection vulnerabilities:\n\n1. **SQL Injection**: Error-based, union, blind, time-based\n2. **NoSQL Injection**: MongoDB, CouchDB injections\n3. **Command Injection**: OS command execution\n4. **LDAP Injection**: Directory service injection\n5. **XPath Injection**: XML path injection\n6. **Template Injection (SSTI)**: Jinja2, Twig, Freemarker\n7. **Header Injection**: Host header, CRLF injection\n8. **Email Header Injection**: SMTP injection\n\nTest ALL parameters: URL, POST body, headers, cookies.\nProvide working PoC for each finding.",
      "system_prompt": "You are an injection attack specialist. Test thoroughly but safely.",
      "tools_required": ["sqlmap", "commix"],
      "estimated_tokens": 4000,
      "created_at": "2026-02-08T18:02:15.119768",
      "updated_at": "2026-02-08T18:02:15.119768",
      "author": "user",
      "tags": ["vulnerability", "injection", "sqli", "rce"],
      "is_preset": true
    },
    {
      "id": "full_bug_bounty",
      "name": "Bug Bounty Hunter Mode",
      "description": "Full automated bug bounty workflow: recon -> analyze -> test -> report",
      "category": "full_auto",
      "prompt": "Execute complete bug bounty workflow:\n\n## PHASE 1: RECONNAISSANCE\n- Enumerate all subdomains and assets\n- Probe for live hosts\n- Discover all endpoints\n- Identify technologies and frameworks\n\n## PHASE 2: ANALYSIS\n- Analyze attack surface\n- Identify high-value targets\n- Map authentication flows\n- Document API endpoints\n\n## PHASE 3: VULNERABILITY TESTING\n- Test for critical vulnerabilities first (RCE, SQLi, Auth Bypass)\n- Test for high severity (XSS, SSRF, IDOR)\n- Test for medium/low (Info disclosure, misconfigs)\n\n## PHASE 4: EXPLOITATION\n- Develop PoC for confirmed vulnerabilities\n- Calculate CVSS scores\n- Document impact and risk\n\n## PHASE 5: REPORTING\n- Generate professional report\n- Include all findings with evidence\n- Provide remediation steps\n\nFocus on impact. Prioritize critical findings.",
      "system_prompt": "You are an elite bug bounty hunter. Your goal is to find real, impactful vulnerabilities.\nBe thorough but efficient. Focus on high-severity issues first.\nEvery finding must have: Evidence, CVSS, Impact, PoC, Remediation.",
      "tools_required": ["subfinder", "httpx", "nuclei", "katana", "sqlmap"],
      "estimated_tokens": 10000,
      "created_at": "2026-02-08T18:02:15.119779",
      "updated_at": "2026-02-08T18:02:15.119779",
      "author": "user",
      "tags": ["full", "bug_bounty", "automated"],
      "is_preset": true
    },
    {
      "id": "full_pentest",
      "name": "Full Penetration Test",
      "description": "Complete penetration test workflow",
      "category": "full_auto",
      "prompt": "Execute comprehensive penetration test:\n\n## PHASE 1: INFORMATION GATHERING\n- Passive reconnaissance\n- Active reconnaissance\n- Network mapping\n- Service enumeration\n\n## PHASE 2: VULNERABILITY ANALYSIS\n- Automated scanning\n- Manual testing\n- Business logic analysis\n- Configuration review\n\n## PHASE 3: EXPLOITATION\n- Exploit confirmed vulnerabilities\n- Post-exploitation (if authorized)\n- Privilege escalation attempts\n- Lateral movement (if authorized)\n\n## PHASE 4: DOCUMENTATION\n- Document all findings\n- Calculate CVSS 3.1 scores\n- Create proof of concepts\n- Write remediation recommendations\n\n## PHASE 5: REPORTING\n- Executive summary\n- Technical findings\n- Risk assessment\n- Remediation roadmap\n\nThis is a full penetration test. Be thorough and professional.",
      "system_prompt": "You are a professional penetration tester conducting an authorized security assessment.\nDocument everything. Be thorough. Follow methodology.\nAll findings must include: Title, CVSS, Description, Evidence, Impact, Remediation.",
      "tools_required": ["nmap", "nuclei", "sqlmap", "nikto", "ffuf"],
      "estimated_tokens": 15000,
      "created_at": "2026-02-08T18:02:15.119785",
      "updated_at": "2026-02-08T18:02:15.119785",
      "author": "user",
      "tags": ["full", "pentest", "professional"],
      "is_preset": true
    },
    {
      "id": "custom_prompt",
      "name": "Custom Prompt (Full AI Mode)",
      "description": "Execute any custom prompt - AI decides what tools to use",
      "category": "custom",
      "prompt": "[USER_PROMPT_HERE]\n\nAnalyze this request and:\n1. Determine what information/tools are needed\n2. Plan the approach\n3. Execute the necessary tests\n4. Analyze results\n5. Report findings\n\nYou have full autonomy to use any tools and techniques needed.",
      "system_prompt": "You are an autonomous AI security agent.\nAnalyze the user's request and execute it completely.\nYou can use any tools available. Be creative and thorough.\nIf the task requires testing, test. If it requires analysis, analyze.\nAlways provide detailed results with evidence.",
      "tools_required": [],
      "estimated_tokens": 5000,
      "created_at": "2026-02-08T18:02:15.119794",
      "updated_at": "2026-02-08T18:02:15.119794",
      "author": "user",
      "tags": ["custom", "flexible", "ai"],
      "is_preset": true
    },
    {
      "id": "analyze_only",
      "name": "Analysis Only (No Testing)",
      "description": "AI analysis without active testing - uses provided data",
      "category": "custom",
      "prompt": "Analyze the provided data/context WITHOUT performing active tests:\n\n1. Review all provided information\n2. Identify potential security issues\n3. Assess risk levels\n4. Provide recommendations\n\nDo NOT send any requests to the target.\nBase your analysis only on provided data.",
      "system_prompt": "You are a security analyst. Analyze provided data without active testing.",
      "tools_required": [],
      "estimated_tokens": 2000,
      "created_at": "2026-02-08T18:02:15.119799",
      "updated_at": "2026-02-08T18:02:15.119799",
      "author": "user",
      "tags": ["analysis", "passive", "review"],
      "is_preset": true
    },
    {
      "id": "report_executive",
      "name": "Executive Summary Report",
      "description": "Generate executive-level security report",
      "category": "reporting",
      "prompt": "Generate an executive summary report from the findings:\n\n1. **Executive Summary**: High-level overview for management\n2. **Risk Assessment**: Overall security posture rating\n3. **Key Findings**: Top critical/high findings only\n4. **Business Impact**: How vulnerabilities affect the business\n5. **Recommendations**: Prioritized remediation roadmap\n6. **Metrics**: Charts and statistics\n\nKeep it concise and business-focused. Avoid technical jargon.",
      "system_prompt": "You are a security consultant writing for executives.",
      "tools_required": [],
      "estimated_tokens": 2000,
      "created_at": "2026-02-08T18:02:15.119804",
      "updated_at": "2026-02-08T18:02:15.119804",
      "author": "user",
      "tags": ["reporting", "executive", "summary"],
      "is_preset": true
    },
    {
      "id": "report_technical",
      "name": "Technical Security Report",
      "description": "Generate detailed technical security report",
      "category": "reporting",
      "prompt": "Generate a detailed technical security report:\n\nFor each vulnerability include:\n1. **Title**: Clear, descriptive title\n2. **Severity**: Critical/High/Medium/Low/Info\n3. **CVSS Score**: Calculate CVSS 3.1 score with vector\n4. **CWE ID**: Relevant CWE classification\n5. **Description**: Detailed technical explanation\n6. **Affected Component**: Endpoint, parameter, function\n7. **Proof of Concept**: Working PoC code/steps\n8. **Evidence**: Screenshots, requests, responses\n9. **Impact**: What an attacker could achieve\n10. **Remediation**: Specific fix recommendations\n11. **References**: OWASP, CWE, vendor docs\n\nBe thorough and technical.",
      "system_prompt": "You are a senior security engineer writing a technical report.",
      "tools_required": [],
      "estimated_tokens": 3000,
      "created_at": "2026-02-08T18:02:15.119809",
      "updated_at": "2026-02-08T18:02:15.119809",
      "author": "user",
      "tags": ["reporting", "technical", "detailed"],
      "is_preset": true
    }
  ]
}
```
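Consuming the library is straightforward. The loader below is an illustrative sketch (not shipped with NeuroSploit) that fetches a task by `id` and fills the `[USER_PROMPT_HERE]` placeholder used by the `custom_prompt` preset:

```python
import json
from typing import Optional

def load_task(path: str, task_id: str, user_prompt: Optional[str] = None) -> dict:
    """Load a task from the library, filling [USER_PROMPT_HERE] if a prompt is given."""
    with open(path) as f:
        library = json.load(f)
    task = next(t for t in library["tasks"] if t["id"] == task_id)
    if user_prompt is not None:
        # Return a copy so the cached library entry is not mutated.
        task = {**task, "prompt": task["prompt"].replace("[USER_PROMPT_HERE]", user_prompt)}
    return task
```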
New file `pyproject.toml` (44 lines):

```toml
# ============================================================================
# NeuroSploit v3 - Project Configuration
# ============================================================================
# This file is for tool configuration (pytest, linters, etc.) and dependency
# documentation. Dependencies are installed via requirements.txt files.
# Run: ./rebuild.sh --install
# ============================================================================

[project]
name = "neurosploitv2"
version = "3.0.0"
description = "AI-Powered Penetration Testing Framework"
requires-python = ">=3.8"
license = {text = "MIT"}

# Reference only: actual install uses requirements.txt / backend/requirements.txt
dependencies = [
    "fastapi>=0.109.0",
    "uvicorn[standard]>=0.27.0",
    "pydantic>=2.5.0",
    "pydantic-settings>=2.1.0",
    "sqlalchemy[asyncio]>=2.0.0",
    "aiosqlite>=0.19.0",
    "requests",
    "aiohttp>=3.9.0",
    "httpx>=0.26.0",
    "anthropic>=0.18.0",
    "openai>=1.10.0",
    "google-generativeai",
    "python-multipart>=0.0.6",
    "python-jose[cryptography]>=3.3.0",
    "python-dotenv>=1.0.0",
    "jinja2>=3.1.0",
    "weasyprint>=60.0; platform_system != 'Windows'",
    "apscheduler>=3.10.0",
]

[project.optional-dependencies]
extras = ["mcp>=1.0.0", "playwright>=1.40.0"]
dev = ["pytest>=7.4.0", "pytest-asyncio>=0.23.0"]

[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]
```
rebuild.sh (new file, 443 lines)
@@ -0,0 +1,443 @@
#!/usr/bin/env bash
# ============================================================================
# NeuroSploit v3 - Rebuild & Launch Script
# ============================================================================
# Usage: chmod +x rebuild.sh && ./rebuild.sh
# Options:
#   --backend-only    Only start the backend (skip frontend)
#   --frontend-only   Only start the frontend (skip backend)
#   --build           Build frontend for production instead of dev mode
#   --install         Force reinstall all dependencies
#   --reset-db        Delete and recreate the database (for schema changes)
# ============================================================================

set -e

PROJECT_DIR="/opt/NeuroSploitv2"
VENV_DIR="$PROJECT_DIR/venv"
FRONTEND_DIR="$PROJECT_DIR/frontend"
DATA_DIR="$PROJECT_DIR/data"
LOGS_DIR="$PROJECT_DIR/logs"
PID_DIR="$PROJECT_DIR/.pids"
DB_PATH="$DATA_DIR/neurosploit.db"

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'

# Parse args
BACKEND_ONLY=false
FRONTEND_ONLY=false
PRODUCTION_BUILD=false
FORCE_INSTALL=false
RESET_DB=false

for arg in "$@"; do
    case $arg in
        --backend-only) BACKEND_ONLY=true ;;
        --frontend-only) FRONTEND_ONLY=true ;;
        --build) PRODUCTION_BUILD=true ;;
        --install) FORCE_INSTALL=true ;;
        --reset-db) RESET_DB=true ;;
    esac
done

header() {
    echo ""
    echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo -e "${CYAN} $1${NC}"
    echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
}

step() {
    echo -e "${GREEN}[+]${NC} $1"
}

warn() {
    echo -e "${YELLOW}[!]${NC} $1"
}

fail() {
    echo -e "${RED}[x]${NC} $1"
    exit 1
}

# ============================================================================
# 0. Kill previous instances
# ============================================================================
header "Stopping previous instances"

mkdir -p "$PID_DIR"

# Kill by PID files if they exist
for pidfile in "$PID_DIR"/*.pid; do
    [ -f "$pidfile" ] || continue
    pid=$(cat "$pidfile" 2>/dev/null)
    if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
        step "Stopping process $pid ($(basename "$pidfile" .pid))"
        kill "$pid" 2>/dev/null || true
        sleep 1
        kill -9 "$pid" 2>/dev/null || true
    fi
    rm -f "$pidfile"
done

# Also kill any lingering uvicorn/vite on our ports
if lsof -ti:8000 >/dev/null 2>&1; then
    step "Killing process on port 8000"
    kill $(lsof -ti:8000) 2>/dev/null || true
fi
if lsof -ti:3000 >/dev/null 2>&1; then
    step "Killing process on port 3000"
    kill $(lsof -ti:3000) 2>/dev/null || true
fi

sleep 1
step "Previous instances stopped"

# ============================================================================
# 1. Ensure directories exist
# ============================================================================
header "Preparing directories"
mkdir -p "$DATA_DIR" "$LOGS_DIR" "$PID_DIR"
mkdir -p "$PROJECT_DIR/reports/screenshots"
mkdir -p "$PROJECT_DIR/reports/benchmark_results/logs"
step "Directories ready"

# ============================================================================
# 1b. Database reset (if requested)
# ============================================================================
if [ "$RESET_DB" = true ]; then
    header "Resetting database"
    if [ -f "$DB_PATH" ]; then
        BACKUP="$DB_PATH.backup.$(date +%Y%m%d%H%M%S)"
        step "Backing up existing DB to $BACKUP"
        cp "$DB_PATH" "$BACKUP"
        rm -f "$DB_PATH"
        step "Database deleted (will be recreated with new schema on startup)"
    else
        step "No existing database found"
    fi
fi

# ============================================================================
# 2. Environment check
# ============================================================================
header "Checking environment"

if [ ! -f "$PROJECT_DIR/.env" ]; then
    if [ -f "$PROJECT_DIR/.env.example" ]; then
        warn ".env not found, copying from .env.example"
        cp "$PROJECT_DIR/.env.example" "$PROJECT_DIR/.env"
    else
        fail ".env file not found and no .env.example to copy from"
    fi
fi
step ".env file present"

# Check Python
if command -v python3 &>/dev/null; then
    PYTHON=python3
elif command -v python &>/dev/null; then
    PYTHON=python
else
    fail "Python not found. Install Python 3.10+"
fi
step "Python: $($PYTHON --version)"

# Check Node
if command -v node &>/dev/null; then
    step "Node: $(node --version)"
else
    if [ "$BACKEND_ONLY" = false ]; then
        fail "Node.js not found. Install Node.js 18+"
    fi
fi

# Check Docker (optional - needed for sandbox & benchmarks)
if command -v docker &>/dev/null; then
    step "Docker: $(docker --version 2>/dev/null | head -1)"
    # Check compose
    if docker compose version &>/dev/null 2>&1; then
        step "Docker Compose: plugin (docker compose)"
    elif command -v docker-compose &>/dev/null; then
        step "Docker Compose: standalone ($(docker-compose version --short 2>/dev/null))"
    else
        warn "Docker Compose not found (needed for sandbox & benchmarks)"
    fi
else
    warn "Docker not found (optional - needed for sandbox & benchmarks)"
fi

# ============================================================================
# 3. Python virtual environment & dependencies
# ============================================================================
if [ "$FRONTEND_ONLY" = false ]; then
    header "Setting up Python backend"

    if [ ! -d "$VENV_DIR" ] || [ "$FORCE_INSTALL" = true ]; then
        step "Creating virtual environment..."
        $PYTHON -m venv "$VENV_DIR"
    fi

    source "$VENV_DIR/bin/activate"
    step "Virtual environment activated"

    if [ "$FORCE_INSTALL" = true ] || [ ! -f "$VENV_DIR/.deps_installed" ]; then
        step "Installing backend dependencies..."
        pip install --quiet --upgrade pip

        # Install from requirements files (pyproject.toml is for tool config only)
        pip install --quiet -r "$PROJECT_DIR/backend/requirements.txt" 2>&1 | tail -5
        pip install --quiet -r "$PROJECT_DIR/requirements.txt" 2>&1 | tail -5
        touch "$VENV_DIR/.deps_installed"
        step "Core dependencies installed"

        # Try optional deps (may fail on Python <3.10)
        if [ -f "$PROJECT_DIR/requirements-optional.txt" ]; then
            step "Installing optional dependencies (best-effort)..."
            pip install --quiet -r "$PROJECT_DIR/requirements-optional.txt" 2>/dev/null && \
                step "Optional deps installed (mcp, playwright)" || \
                warn "Some optional deps skipped (Python 3.10+ required for mcp/playwright)"
        fi
    else
        step "Dependencies already installed (use --install to force)"
    fi

    # Validate key modules
    step "Validating Python modules..."
    $PYTHON -c "
import sys
modules = [
    ('backend.main', 'FastAPI app'),
    ('backend.config', 'Settings'),
    ('backend.api.v1.vuln_lab', 'VulnLab API'),
    ('backend.models.vuln_lab', 'VulnLab Model'),
    ('core.llm_manager', 'LLM Manager'),
    ('core.model_router', 'Model Router'),
    ('core.scheduler', 'Scheduler'),
    ('core.knowledge_augmentor', 'Knowledge Augmentor'),
    ('core.browser_validator', 'Browser Validator'),
    ('core.mcp_client', 'MCP Client'),
    ('core.mcp_server', 'MCP Server'),
    ('core.sandbox_manager', 'Sandbox Manager'),
    ('backend.core.agent_memory', 'Agent Memory'),
    ('backend.core.response_verifier', 'Response Verifier'),
    ('backend.core.vuln_engine.registry', 'VulnEngine Registry'),
    ('backend.core.vuln_engine.payload_generator', 'VulnEngine Payloads'),
    ('backend.core.vuln_engine.ai_prompts', 'VulnEngine AI Prompts'),
]
errors = 0
for mod, name in modules:
    try:
        __import__(mod)
        print(f' OK   {name} ({mod})')
    except Exception as e:
        print(f' WARN {name} ({mod}): {e}')
        errors += 1
if errors > 0:
    print(f'\n {errors} module(s) had import warnings (optional deps may be missing)')
else:
    print('\n All modules loaded successfully')
" 2>&1 || true

    # Validate knowledge base
    step "Validating knowledge base..."
    $PYTHON -c "
import json, os
kb_path = os.path.join('$PROJECT_DIR', 'data', 'vuln_knowledge_base.json')
if os.path.exists(kb_path):
    kb = json.load(open(kb_path))
    types = len(kb.get('vulnerability_types', {}))
    insights = len(kb.get('xbow_insights', kb.get('attack_insights', {})))
    print(f' OK Knowledge base: {types} vuln types, {insights} insight categories')
else:
    print(' WARN Knowledge base not found at data/vuln_knowledge_base.json')
" 2>&1 || true

    # Validate VulnEngine coverage
    step "Validating VulnEngine coverage..."
    $PYTHON -c "
from backend.core.vuln_engine.registry import VulnerabilityRegistry
from backend.core.vuln_engine.payload_generator import PayloadGenerator
from backend.core.vuln_engine.ai_prompts import VULN_AI_PROMPTS
r = VulnerabilityRegistry()
p = PayloadGenerator()
total_payloads = sum(len(v) for v in p.payload_libraries.values())
print(f' OK Registry: {len(r.VULNERABILITY_INFO)} types, {len(r.TESTER_CLASSES)} testers')
print(f' OK Payloads: {total_payloads} across {len(p.payload_libraries)} categories')
print(f' OK AI Prompts: {len(VULN_AI_PROMPTS)} per-vuln decision prompts')
" 2>&1 || true
fi

# ============================================================================
# 4. Frontend dependencies
# ============================================================================
if [ "$BACKEND_ONLY" = false ]; then
    header "Setting up React frontend"

    cd "$FRONTEND_DIR"

    if [ ! -d "node_modules" ] || [ "$FORCE_INSTALL" = true ]; then
        step "Installing frontend dependencies..."
        npm install --silent 2>&1 | tail -3
        step "Frontend dependencies installed"
    else
        step "node_modules present (use --install to force)"
    fi

    cd "$PROJECT_DIR"
fi

# ============================================================================
# 5. Launch backend
# ============================================================================
if [ "$FRONTEND_ONLY" = false ]; then
    header "Starting FastAPI backend (port 8000)"

    source "$VENV_DIR/bin/activate"

    # Export env vars
    set -a
    source "$PROJECT_DIR/.env"
    set +a

    PYTHONPATH="$PROJECT_DIR" uvicorn backend.main:app \
        --host 0.0.0.0 \
        --port 8000 \
        --reload \
        --log-level info \
        > "$LOGS_DIR/backend.log" 2>&1 &

    BACKEND_PID=$!
    echo "$BACKEND_PID" > "$PID_DIR/backend.pid"
    step "Backend started (PID: $BACKEND_PID)"
    step "Backend logs: $LOGS_DIR/backend.log"

    # Wait for backend to be ready
    step "Waiting for backend..."
    for i in $(seq 1 15); do
        if curl -s http://localhost:8000/docs >/dev/null 2>&1; then
            step "Backend is ready"
            break
        fi
        if [ $i -eq 15 ]; then
            warn "Backend may still be starting. Check logs."
        fi
        sleep 1
    done
fi

# ============================================================================
# 6. Launch frontend
# ============================================================================
if [ "$BACKEND_ONLY" = false ]; then
    header "Starting React frontend (port 3000)"

    cd "$FRONTEND_DIR"

    if [ "$PRODUCTION_BUILD" = true ]; then
        step "Building production frontend..."
        npm run build 2>&1 | tail -5
        step "Build complete. Serving from dist/"
        npx vite preview --port 3000 \
            > "$LOGS_DIR/frontend.log" 2>&1 &
    else
        step "Starting development server..."
        npx vite --port 3000 \
            > "$LOGS_DIR/frontend.log" 2>&1 &
    fi

    FRONTEND_PID=$!
    echo "$FRONTEND_PID" > "$PID_DIR/frontend.pid"
    step "Frontend started (PID: $FRONTEND_PID)"
    step "Frontend logs: $LOGS_DIR/frontend.log"

    cd "$PROJECT_DIR"

    # Wait for frontend
    for i in $(seq 1 10); do
        if curl -s http://localhost:3000 >/dev/null 2>&1; then
            break
        fi
        sleep 1
    done
fi

# ============================================================================
# 7. Summary
# ============================================================================
header "NeuroSploit v3 is running"

echo ""
if [ "$FRONTEND_ONLY" = false ]; then
    echo -e "  ${GREEN}Backend API:${NC}   http://localhost:8000"
    echo -e "  ${GREEN}API Docs:${NC}      http://localhost:8000/docs"
    echo -e "  ${GREEN}Scheduler API:${NC} http://localhost:8000/api/v1/scheduler/"
    echo -e "  ${GREEN}VulnLab API:${NC}   http://localhost:8000/api/v1/vuln-lab/"
fi
if [ "$BACKEND_ONLY" = false ]; then
    echo -e "  ${GREEN}Frontend UI:${NC}   http://localhost:3000"
fi
echo ""
echo -e "  ${BLUE}Logs:${NC}"
[ "$FRONTEND_ONLY" = false ] && echo -e "    Backend:  tail -f $LOGS_DIR/backend.log"
[ "$BACKEND_ONLY" = false ] && echo -e "    Frontend: tail -f $LOGS_DIR/frontend.log"
echo ""
echo -e "  ${YELLOW}Stop:${NC} $0 (re-run kills previous)"
echo -e "        kill \$(cat $PID_DIR/backend.pid) \$(cat $PID_DIR/frontend.pid)"
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN} NeuroSploit v3 - Autonomous Security Agent${NC}"
echo -e ""
echo -e "  ${BLUE}VulnEngine (100-Type):${NC}"
echo -e "    - Registry: 100 vuln types, 428 payloads, 100 testers"
echo -e "    - AI Prompts: 100 per-vuln AI decision prompts"
echo -e "    - Agent Memory: Bounded dedup stores, baseline caching"
echo -e "    - Multi-Signal: 4-signal verification (tester+baseline+"
echo -e "      payload_effect+error_patterns)"
echo -e "    - Payload Effect: Baseline-compared checks (eliminates FP"
echo -e "      for NoSQL, HPP, type juggling, HTML inj)"
echo -e "    - Anti-Hallucination: AI cross-validation, evidence grounding"
echo -e "    - Knowledge Base: 100 vuln types + insight categories"
echo -e "    - Attack Plan: 5-tier priority (P1 critical -> P5 info)"
echo -e ""
echo -e "  ${BLUE}Autonomous Agent:${NC}"
echo -e "    - Full Auto: One-click full vulnerability assessment"
echo -e "    - Auto Pentest: 6-phase automated penetration testing"
echo -e "    - Pause/Resume/Stop: Real-time scan control (pause, resume, terminate)"
echo -e "    - MCP Server: 12 tools (screenshot, dns, port scan, etc.)"
echo -e "    - Security Sandbox: Docker-based tool isolation (22 tools)"
echo -e "    - Benchmark Runner: 104 CTF challenges for accuracy testing"
echo -e ""
echo -e "  ${BLUE}Vulnerability Lab:${NC}"
echo -e "    - Isolated Testing: Test individual vuln types one at a time"
echo -e "    - 100 Vuln Types: All VulnEngine types available for testing"
echo -e "    - Lab/CTF Support: PortSwigger, CTFs, custom targets"
echo -e "    - Auth Support: Cookie, Bearer, Basic, Custom headers"
echo -e "    - Detection Stats: Per-type & per-category detection rates"
echo -e "    - Challenge History: Full history with results tracking"
echo -e ""
echo -e "  ${BLUE}Verification & Reports:${NC}"
echo -e "    - Anti-FP: Baseline-compared payload effect checks"
echo -e "    - ZIP Reports: Download HTML report + screenshots as ZIP"
echo -e "    - OHVR Reports: Observation-Hypothesis-Validation-Result"
echo -e "    - Severity Sorting: Critical/High findings appear first"
echo -e ""
echo -e "  ${BLUE}Platform Features:${NC}"
echo -e "    - Scheduler: /scheduler (cron & interval scheduling)"
echo -e "    - OpenRouter: Settings > LLM Configuration > OpenRouter"
echo -e "    - Model Routing: Settings > Advanced Features toggle"
echo -e "    - Knowledge Aug: Settings > Advanced Features toggle"
echo -e "    - Browser Validation: Settings > Advanced Features toggle"
echo -e "    - Skip-to-Phase: Agent + Scan pages (skip ahead in pipeline)"
echo -e "    - Reset DB: ./rebuild.sh --reset-db (schema changes)"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

# Keep script running so bg processes stay alive
wait
reports/benchmark/NEUROSPLOIT_BENCHMARK_REPORT.html (new file, 1211 lines)
(diff suppressed: file too large)
scripts/build-kali.sh (new file, 81 lines)
@@ -0,0 +1,81 @@
#!/bin/bash
# NeuroSploit v3 - Build Kali Linux Sandbox Image
#
# Usage:
#   ./scripts/build-kali.sh          # Normal build (uses cache)
#   ./scripts/build-kali.sh --fresh  # Full rebuild (no cache)
#   ./scripts/build-kali.sh --test   # Build + run health check

set -e

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PROJECT_DIR="$(dirname "$SCRIPT_DIR")"
IMAGE_NAME="neurosploit-kali:latest"

cd "$PROJECT_DIR"

echo "================================================"
echo "  NeuroSploit Kali Sandbox Builder"
echo "================================================"
echo ""

# Check Docker
if ! docker info > /dev/null 2>&1; then
    echo "ERROR: Docker daemon is not running."
    echo "       Start Docker Desktop and try again."
    exit 1
fi

# Parse args
NO_CACHE=""
RUN_TEST=false

for arg in "$@"; do
    case $arg in
        --fresh|--no-cache)
            NO_CACHE="--no-cache"
            echo "[*] Full rebuild mode (no cache)"
            ;;
        --test)
            RUN_TEST=true
            echo "[*] Will run health check after build"
            ;;
    esac
done

echo "[*] Building image: $IMAGE_NAME"
echo "[*] Dockerfile: docker/Dockerfile.kali"
echo "[*] Context: docker/"
echo ""

# Build
docker build $NO_CACHE \
    -f docker/Dockerfile.kali \
    -t "$IMAGE_NAME" \
    docker/

echo ""
echo "[+] Build complete: $IMAGE_NAME"

# Show image info (note: docker inspect templates have no division function,
# so size is reported in bytes)
docker image inspect "$IMAGE_NAME" --format \
    "  Size: {{.Size}} bytes
  Created: {{.Created}}
  Arch: {{.Architecture}}" 2>/dev/null || true

# Run test if requested
if [ "$RUN_TEST" = true ]; then
    echo ""
    echo "[*] Running health check..."
    docker run --rm "$IMAGE_NAME" \
        "nuclei -version 2>&1; echo '---'; naabu -version 2>&1; echo '---'; httpx -version 2>&1; echo '---'; subfinder -version 2>&1; echo '---'; nmap --version 2>&1 | head -1; echo '---'; nikto -Version 2>&1 | head -1; echo '---'; sqlmap --version 2>&1; echo '---'; ffuf -V 2>&1; echo '---'; echo 'ALL OK'"
    echo ""
    echo "[+] Health check passed"
fi

echo ""
echo "================================================"
echo "  Build complete! To use:"
echo "  - Start NeuroSploit backend (it auto-creates containers per scan)"
echo "  - Monitor via Sandbox Dashboard: http://localhost:8000/sandboxes"
echo "================================================"
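The `--test` run above chains each tool's version command with `---` separators and ends with an `ALL OK` sentinel. A minimal sketch of how a caller could parse that output (the function name `parse_health_output` is hypothetical, not part of the repo):

```python
# Hypothetical helper: parse the '---'-delimited health-check output that
# build-kali.sh --test prints, and decide whether the image is usable.
def parse_health_output(text: str) -> dict:
    sections = [s.strip() for s in text.strip().split("---")]
    # The command chain in build-kali.sh ends with an 'ALL OK' sentinel;
    # if it is missing, an earlier tool aborted the chain.
    ok = bool(sections) and sections[-1] == "ALL OK"
    return {"ok": ok, "tool_outputs": sections[:-1] if ok else sections}


if __name__ == "__main__":
    sample = "nuclei v3.1.0\n---\nnaabu v2.3.0\n---\nALL OK"
    report = parse_health_output(sample)
    print(report["ok"], len(report["tool_outputs"]))  # True 2
```

Because the sentinel is last in the chain, its presence implies every preceding version command ran to completion.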
tools/benchmark_runner.py (new file, 1627 lines)
(diff suppressed: file too large)
tools/browser/__init__.py (new file, 1 line)
@@ -0,0 +1 @@
"""Browser-based security validation tools using Playwright."""
tools/browser/playwright_runner.py (new file, 211 lines)
@@ -0,0 +1,211 @@
#!/usr/bin/env python3
"""
Playwright Runner - Low-level browser automation helpers for security testing.

Provides convenience functions for common browser-based security validation tasks.
"""

import asyncio
import logging
from typing import Dict, List, Optional
from pathlib import Path

logger = logging.getLogger(__name__)

try:
    from playwright.async_api import async_playwright
    HAS_PLAYWRIGHT = True
except ImportError:
    HAS_PLAYWRIGHT = False


async def check_xss_reflection(url: str, payload: str, headless: bool = True) -> Dict:
    """Check if a payload is reflected in page content or triggers a dialog.

    Args:
        url: Target URL (payload should be in query params)
        payload: The XSS payload being tested
        headless: Run in headless mode

    Returns:
        Dict with reflection status, dialog detection, and page content snippet
    """
    if not HAS_PLAYWRIGHT:
        return {"error": "Playwright not installed"}

    result = {
        "url": url,
        "payload": payload,
        "reflected": False,
        "dialog_triggered": False,
        "dialog_message": None,
        "content_snippet": ""
    }

    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=headless)
        context = await browser.new_context(ignore_https_errors=True)
        page = await context.new_page()

        dialogs = []

        async def on_dialog(dialog):
            dialogs.append(dialog.message)
            await dialog.dismiss()

        page.on("dialog", on_dialog)

        try:
            await page.goto(url, wait_until="networkidle", timeout=15000)
            content = await page.content()

            if payload in content:
                result["reflected"] = True
                idx = content.find(payload)
                start = max(0, idx - 100)
                end = min(len(content), idx + len(payload) + 100)
                result["content_snippet"] = content[start:end]

            if dialogs:
                result["dialog_triggered"] = True
                result["dialog_message"] = dialogs[0]

        except Exception as e:
            result["error"] = str(e)
        finally:
            await browser.close()

    return result


async def capture_page_state(url: str, screenshot_path: str,
                             headless: bool = True) -> Dict:
    """Capture the full state of a page: screenshot, title, headers, cookies.

    Args:
        url: Page URL to capture
        screenshot_path: Path to save the screenshot
        headless: Run in headless mode

    Returns:
        Dict with page title, cookies, response headers, console messages
    """
    if not HAS_PLAYWRIGHT:
        return {"error": "Playwright not installed"}

    result = {
        "url": url,
        "title": "",
        "screenshot": screenshot_path,
        "cookies": [],
        "console_messages": [],
        "response_headers": {},
        "status_code": None
    }

    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=headless)
        context = await browser.new_context(ignore_https_errors=True)
        page = await context.new_page()

        console_msgs = []
        page.on("console", lambda msg: console_msgs.append({
            "type": msg.type, "text": msg.text
        }))

        try:
            response = await page.goto(url, wait_until="networkidle", timeout=20000)

            result["title"] = await page.title()
            result["status_code"] = response.status if response else None
            result["response_headers"] = dict(response.headers) if response else {}

            Path(screenshot_path).parent.mkdir(parents=True, exist_ok=True)
            await page.screenshot(path=screenshot_path, full_page=True)

            cookies = await context.cookies()
            result["cookies"] = [
                {"name": c["name"], "domain": c["domain"],
                 "secure": c["secure"], "httpOnly": c["httpOnly"],
                 "sameSite": c.get("sameSite", "None")}
                for c in cookies
            ]

            result["console_messages"] = console_msgs

        except Exception as e:
            result["error"] = str(e)
        finally:
            await browser.close()

    return result


async def test_form_submission(url: str, form_data: Dict[str, str],
                               submit_selector: str = "button[type=submit]",
                               screenshot_dir: str = "/tmp/form_test",
                               headless: bool = True) -> Dict:
    """Submit a form and capture before/after state.

    Args:
        url: URL containing the form
        form_data: Dict of selector -> value to fill
        submit_selector: CSS selector for the submit button
        screenshot_dir: Directory to store screenshots
        headless: Run in headless mode

    Returns:
        Dict with before/after screenshots, response info, and any triggered dialogs
    """
    if not HAS_PLAYWRIGHT:
        return {"error": "Playwright not installed"}

    ss_dir = Path(screenshot_dir)
    ss_dir.mkdir(parents=True, exist_ok=True)

    result = {
        "url": url,
        "before_screenshot": str(ss_dir / "before.png"),
        "after_screenshot": str(ss_dir / "after.png"),
        "dialogs": [],
        "response_url": "",
        "status": "unknown"
    }

    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=headless)
        context = await browser.new_context(ignore_https_errors=True)
        page = await context.new_page()

        dialogs = []

        async def on_dialog(dialog):
            dialogs.append({"type": dialog.type, "message": dialog.message})
            await dialog.dismiss()

        page.on("dialog", on_dialog)

        try:
            await page.goto(url, wait_until="networkidle", timeout=15000)
            await page.screenshot(path=result["before_screenshot"])

            # Fill form fields
            for selector, value in form_data.items():
                await page.fill(selector, value)

            # Submit
            await page.click(submit_selector)
            await page.wait_for_load_state("networkidle")

            await page.screenshot(path=result["after_screenshot"], full_page=True)
            result["response_url"] = page.url
            result["dialogs"] = dialogs
            result["status"] = "completed"

        except Exception as e:
            result["error"] = str(e)
            result["status"] = "error"
        finally:
            await browser.close()

    return result
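A caller would typically invoke these helpers with `asyncio.run(check_xss_reflection(url, payload))` and then interpret the returned dict. A minimal sketch of that interpretation step, usable without a browser (the `triage_xss_result` function is hypothetical, not part of the repo — only the dict keys come from `check_xss_reflection` above):

```python
# Hypothetical triage over the dict returned by check_xss_reflection:
# a dismissed dialog is strong evidence of execution; bare reflection is weaker.
def triage_xss_result(result: dict) -> str:
    if result.get("error"):
        return "inconclusive"
    if result.get("dialog_triggered"):
        return "confirmed"       # payload executed (alert/confirm/prompt fired)
    if result.get("reflected"):
        return "reflected-only"  # present in DOM, but no execution observed
    return "not-vulnerable"


if __name__ == "__main__":
    print(triage_xss_result({"reflected": True, "dialog_triggered": True}))  # confirmed
```

Separating detection (browser-side) from triage (pure logic) keeps the verdict testable and mirrors the project's baseline-compared, anti-false-positive approach.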