mirror of
https://github.com/Vyntral/god-eye.git
synced 2026-02-12 16:52:45 +00:00
v0.1.1: Major AI improvements, new security modules, and documentation fixes
## AI & CVE Improvements
- Fix AI report to display actual subdomain names instead of generic placeholders
- Add 10-year CVE filter to reduce false positives from outdated vulnerabilities
- Integrate CISA KEV (Known Exploited Vulnerabilities) database support
- Improve AI analysis prompt for more accurate security findings

## New Security Modules
- Add wildcard DNS detection with multi-phase validation (DNS + HTTP)
- Add TLS certificate analyzer for certificate chain inspection
- Add comprehensive rate limiting module for API requests
- Add retry mechanism with exponential backoff
- Add stealth mode for reduced detection during scans
- Add progress tracking module for better UX

## Code Refactoring
- Extract scanner output logic to dedicated module
- Add base source interface for consistent passive source implementation
- Reduce admin panel paths to common generic patterns only
- Improve HTTP client with connection pooling
- Add JSON output formatter

## Documentation Updates
- Correct passive source count to 20 (was incorrectly stated as 34)
- Fix AI model names: deepseek-r1:1.5b (fast) + qwen2.5-coder:7b (deep)
- Update all markdown files for consistency
- Relocate demo GIFs to assets/ directory
- Add benchmark disclaimer for test variability

## Files Changed
- 4 documentation files updated (README, AI_SETUP, BENCHMARK, EXAMPLES)
- 11 new source files added
- 12 existing files modified
AI_SETUP.md (28 lines changed)
@@ -22,8 +22,8 @@ ollama --version

### 2. Pull Recommended Models

```bash
-# Fast triage model (3GB) - REQUIRED
-ollama pull phi3.5:3.8b
+# Fast triage model (1.1GB) - REQUIRED
+ollama pull deepseek-r1:1.5b

# Deep analysis model (6GB) - REQUIRED
ollama pull qwen2.5-coder:7b
@@ -66,7 +66,7 @@ Leave this running in a terminal. Ollama will run on `http://localhost:11434`
│
▼
┌──────────────────────────────────────────────┐
-│ TIER 1: FAST TRIAGE (Phi-3.5:3.8b) │
+│ TIER 1: FAST TRIAGE (DeepSeek-R1:1.5b) │
│ • Quick classification: relevant vs skip │
│ • Completes in ~2-5 seconds │
│ • Filters false positives │
@@ -179,7 +179,7 @@ Function calling requires models that support tool use:

- ✅ **qwen2.5-coder:7b** (default deep model) - Full support
- ✅ **llama3.1:8b** - Excellent function calling
- ✅ **llama3.2:3b** - Basic support
-- ⚠️ **phi3.5:3.8b** (fast model) - No function calling (triage only)
+- ✅ **deepseek-r1:1.5b** (fast model) - Excellent reasoning for size

### Rate Limits
@@ -225,7 +225,7 @@ God's Eye automatically handles rate limiting and caches results.

```bash
# Use different models
./god-eye -d target.com --enable-ai \
-  --ai-fast-model phi3.5:3.8b \
+  --ai-fast-model deepseek-r1:1.5b \
  --ai-deep-model deepseek-coder-v2:16b

# Disable cascade (deep analysis only)
@@ -250,7 +250,7 @@ God's Eye automatically handles rate limiting and caches results.

|------|---------|-------------|
| `--enable-ai` | `false` | Enable AI analysis |
| `--ai-url` | `http://localhost:11434` | Ollama API URL |
-| `--ai-fast-model` | `phi3.5:3.8b` | Fast triage model |
+| `--ai-fast-model` | `deepseek-r1:1.5b` | Fast triage model |
| `--ai-deep-model` | `qwen2.5-coder:7b` | Deep analysis model |
| `--ai-cascade` | `true` | Use cascade mode |
| `--ai-deep` | `false` | Deep analysis on all findings |
@@ -282,7 +282,7 @@ ollama list

**Solution:**
```bash
# Pull missing model
-ollama pull phi3.5:3.8b
+ollama pull deepseek-r1:1.5b
ollama pull qwen2.5-coder:7b

# Verify
@@ -320,7 +320,7 @@ ollama list

**Solutions:**
- **Option 1:** Use smaller models
  ```bash
-  ollama pull phi3.5:3.8b  # 3GB instead of 7GB
+  ollama pull deepseek-r1:1.5b  # 3GB instead of 7GB
  ```

- **Option 2:** Disable cascade
@@ -350,7 +350,7 @@ ollama list

| **AI Analysis Time** | ~30-40 seconds |
| **AI Overhead** | ~20% of total scan time |
| **Memory Usage** | ~7GB (both models loaded) |
-| **Models Used** | phi3.5:3.8b + qwen2.5-coder:7b |
+| **Models Used** | deepseek-r1:1.5b + qwen2.5-coder:7b |
| **Cascade Mode** | Enabled (default) |

**Sample AI Findings:**
@@ -377,7 +377,7 @@ ollama list

| Model | Size | Speed | Accuracy | Use Case |
|-------|------|-------|----------|----------|
-| **phi3.5:3.8b** | 3GB | ⚡⚡⚡⚡⚡ | ⭐⭐⭐⭐ | Fast triage |
+| **deepseek-r1:1.5b** | 3GB | ⚡⚡⚡⚡⚡ | ⭐⭐⭐⭐ | Fast triage |
| **qwen2.5-coder:7b** | 6GB | ⚡⚡⚡⚡ | ⭐⭐⭐⭐⭐ | Deep analysis |
| **deepseek-coder-v2:16b** | 12GB | ⚡⚡⚡ | ⭐⭐⭐⭐⭐ | Maximum accuracy |
| **llama3.2:3b** | 2.5GB | ⚡⚡⚡⚡⚡ | ⭐⭐⭐ | Ultra-fast |
@@ -428,7 +428,7 @@ ollama list

## 📖 Example Output

```
-🧠 AI-POWERED ANALYSIS (cascade: phi3.5:3.8b + qwen2.5-coder:7b)
+🧠 AI-POWERED ANALYSIS (cascade: deepseek-r1:1.5b + qwen2.5-coder:7b)
Analyzing findings with local LLM

AI:C admin.example.com → 3 findings
@@ -498,7 +498,7 @@ killall ollama

rm -rf ~/.ollama/models

# Re-pull
-ollama pull phi3.5:3.8b
+ollama pull deepseek-r1:1.5b
ollama pull qwen2.5-coder:7b
```
@@ -544,14 +544,14 @@ ollama pull qwen2.5-coder:7b

| **Passive Enumeration** | ~25 seconds | 20 concurrent sources |
| **HTTP Probing** | ~35 seconds | 2 active subdomains |
| **Security Checks** | ~40 seconds | 13 checks per subdomain |
-| **AI Triage** | ~10 seconds | phi3.5:3.8b fast filtering |
+| **AI Triage** | ~10 seconds | deepseek-r1:1.5b fast filtering |
| **AI Deep Analysis** | ~25 seconds | qwen2.5-coder:7b analysis |
| **Report Generation** | ~3 seconds | Executive summary |
| **Total** | **2:18 min** | With AI enabled |

### AI Performance Characteristics

-**Fast Triage Model (Phi-3.5:3.8b):**
+**Fast Triage Model (DeepSeek-R1:1.5b):**
- Initial load time: ~3-5 seconds (first request)
- Analysis time: 2-5 seconds per finding
- Memory footprint: ~3.5GB
@@ -108,7 +108,7 @@ Average CPU usage during scan:

| DNSRepo | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Subdomain Center | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Wayback Machine | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ |
-| **Total Sources** | **11** | **25+** | **55+** | **14** | **9** | **6** |
+| **Total Sources** | **20** | **25+** | **55+** | **14** | **9** | **6** |

### Active Scanning Features
@@ -171,7 +171,7 @@ This eliminates the need to chain multiple tools together.

#### 2. Parallel Processing Architecture
God's Eye uses Go's goroutines for maximum parallelization:
-- 11 passive sources queried simultaneously
+- 20 passive sources queried simultaneously
- DNS brute-force with configurable concurrency
- 13 HTTP security checks run in parallel per subdomain
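The fan-out pattern the benchmark doc describes — every passive source queried at once, results merged — can be sketched as below. This is an illustrative stand-alone example, not god-eye's actual source code; `queryAll` and the stub source names are hypothetical.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// queryAll sends one domain to every passive source concurrently,
// deduplicating the merged results. Each source is just a function
// here; the real tool wraps HTTP APIs behind a common interface.
func queryAll(domain string, sources map[string]func(string) []string) []string {
	var (
		mu   sync.Mutex
		wg   sync.WaitGroup
		seen = map[string]bool{}
	)
	for name, fetch := range sources {
		wg.Add(1)
		go func(name string, fetch func(string) []string) {
			defer wg.Done()
			for _, sub := range fetch(domain) {
				mu.Lock()
				seen[sub] = true
				mu.Unlock()
			}
		}(name, fetch)
	}
	wg.Wait() // all sources finished

	out := make([]string, 0, len(seen))
	for s := range seen {
		out = append(out, s)
	}
	sort.Strings(out)
	return out
}

func main() {
	// Two stub sources with overlapping results.
	sources := map[string]func(string) []string{
		"crtsh":  func(d string) []string { return []string{"api." + d, "www." + d} },
		"anubis": func(d string) []string { return []string{"www." + d, "dev." + d} },
	}
	fmt.Println(queryAll("example.com", sources))
	// [api.example.com dev.example.com www.example.com]
}
```

With 20 real sources the wall-clock time is roughly that of the slowest source rather than the sum of all of them.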
@@ -352,5 +352,6 @@ Comprehensive security posture assessment:

---

*Benchmark conducted by Orizon Security Team*
+*Note: Benchmark data is based on internal testing and may vary depending on network conditions, target complexity, and hardware specifications. These numbers are meant to provide a general comparison rather than precise measurements.*

*Last updated: 2025*
@@ -92,7 +92,7 @@ time ./god-eye -d target.com --enable-ai --no-brute

React version 16.8.0 has known XSS vulnerability
Missing rate limiting on /api/v1/users endpoint
(1 more findings...)
-model: phi3.5:3.8b→qwen2.5-coder:7b
+model: deepseek-r1:1.5b→qwen2.5-coder:7b
CVE: React: CVE-2020-15168 - XSS vulnerability in development mode
═══════════════════════════════════════════════════
```
@@ -100,7 +100,7 @@ time ./god-eye -d target.com --enable-ai --no-brute

### AI Report Section

```
-🧠 AI-POWERED ANALYSIS (cascade: phi3.5:3.8b + qwen2.5-coder:7b)
+🧠 AI-POWERED ANALYSIS (cascade: deepseek-r1:1.5b + qwen2.5-coder:7b)
Analyzing findings with local LLM

AI:C api.example.com → 4 findings
@@ -336,7 +336,7 @@ time ./god-eye -d target.com --enable-ai --ai-deep --no-brute

# Use fast model only (skip deep analysis)
./god-eye -d large-target.com --enable-ai --ai-cascade=false \
-  --ai-deep-model phi3.5:3.8b
+  --ai-deep-model deepseek-r1:1.5b

# Disable AI for initial enumeration, enable for interesting findings
./god-eye -d large-target.com --no-brute -s > subdomains.txt
README.md (208 lines changed)
@@ -39,7 +39,7 @@

<td width="33%" align="center">

### ⚡ All-in-One
-**11 passive sources** + DNS brute-forcing + HTTP probing + security checks in **one tool**. No need to chain 5+ tools together.
+**20 passive sources** + DNS brute-forcing + HTTP probing + security checks in **one tool**. No need to chain 5+ tools together.

</td>
<td width="33%" align="center">
@@ -120,14 +120,14 @@ God's Eye now features **AI-powered security analysis** using local LLM models v

<td width="50%" align="center">

**Basic Scan**
-<img src="docs/images/demo.gif" alt="God's Eye Basic Demo" width="100%">
+<img src="assets/demo.gif" alt="God's Eye Basic Demo" width="100%">
<em>Standard subdomain enumeration</em>

</td>
<td width="50%" align="center">

**AI-Powered Scan**
-<img src="docs/images/demo-ai.gif" alt="God's Eye AI Demo" width="100%">
+<img src="assets/demo-ai.gif" alt="God's Eye AI Demo" width="100%">
<em>With real-time CVE detection & analysis</em>

</td>
@@ -140,7 +140,7 @@ God's Eye now features **AI-powered security analysis** using local LLM models v

curl https://ollama.ai/install.sh | sh

# Pull models (5-10 mins)
-ollama pull phi3.5:3.8b && ollama pull qwen2.5-coder:7b
+ollama pull deepseek-r1:1.5b && ollama pull qwen2.5-coder:7b

# Run with AI
ollama serve &
@@ -154,9 +154,9 @@ ollama serve &

## Features

### 🔍 Subdomain Discovery
-- **11 Passive Sources**: crt.sh, Certspotter, AlienVault, HackerTarget, URLScan, RapidDNS, Anubis, ThreatMiner, DNSRepo, SubdomainCenter, Wayback
+- **20 Passive Sources**: crt.sh, Certspotter, AlienVault, HackerTarget, URLScan, RapidDNS, Anubis, ThreatMiner, DNSRepo, SubdomainCenter, Wayback, CommonCrawl, Sitedossier, Riddler, Robtex, DNSHistory, ArchiveToday, JLDC, SynapsInt, CensysFree
- **DNS Brute-forcing**: Concurrent DNS resolution with customizable wordlists
-- **Wildcard Detection**: Improved detection using multiple random patterns
+- **Advanced Wildcard Detection**: Multi-layer detection using DNS + HTTP validation with confidence scoring
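The DNS phase of the wildcard check above can be sketched as follows. This is a simplified illustration under assumed behavior, not the tool's implementation; the resolver is injected so the logic stays testable offline, and the real tool adds an HTTP validation phase and confidence scoring on top.

```go
package main

import (
	"fmt"
	"math/rand"
)

// detectWildcard resolves several random labels that should not exist.
// If every probe resolves to the same address, the zone almost
// certainly has a DNS wildcard and brute-force hits must be filtered.
func detectWildcard(domain string, probes int, resolve func(host string) []string) (bool, string) {
	counts := map[string]int{}
	for i := 0; i < probes; i++ {
		// Random label, e.g. "gx1a2b3c4d.example.com"
		label := fmt.Sprintf("gx%08x", rand.Uint32())
		for _, ip := range resolve(label + "." + domain) {
			counts[ip]++
		}
	}
	for ip, n := range counts {
		if n == probes { // every random probe hit this IP
			return true, ip
		}
	}
	return false, ""
}

func main() {
	// Stub resolver simulating a wildcard zone.
	fake := func(string) []string { return []string{"203.0.113.7"} }
	wild, ip := detectWildcard("example.com", 3, fake)
	fmt.Println(wild, ip) // true 203.0.113.7
}
```

Multiple probes matter: a single random lookup can collide with a real host, while several agreeing answers make a false positive very unlikely.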
### 🌐 HTTP Probing
- Status code, content length, response time

@@ -164,6 +164,7 @@ ollama serve &

- Technology fingerprinting (WordPress, React, Next.js, Angular, Laravel, Django, etc.)
- Server header analysis
- TLS/SSL information (version, issuer, expiry)
+- **TLS Certificate Fingerprinting** (NEW!) - Detects firewalls, VPNs, and appliances from self-signed certificates

### 🛡️ Security Checks
- **Security Headers**: CSP, HSTS, X-Frame-Options, X-Content-Type-Options, etc.

@@ -187,14 +188,26 @@ ollama serve &

- **JavaScript Analysis**: Extracts secrets, API keys, and hidden endpoints from JS files
- **Port Scanning**: Quick TCP port scan on common ports
- **WAF Detection**: Identifies Cloudflare, AWS WAF, Akamai, Imperva, etc.
+- **TLS Appliance Detection**: Identifies 25+ security vendors from certificates (Fortinet, Palo Alto, Cisco, F5, etc.)

### ⚡ Performance
- **Parallel HTTP Checks**: All security checks run concurrently
+- **Connection Pooling**: Shared HTTP client with TCP/TLS reuse
- **High Concurrency**: Up to 1000+ concurrent workers
+- **Intelligent Rate Limiting**: Adaptive backoff based on error rates
+- **Retry Logic**: Automatic retry with exponential backoff for DNS/HTTP failures
+- **Progress Bars**: Real-time progress with ETA and speed indicators
+### 🥷 Stealth Mode
+- **4 Stealth Levels**: light, moderate, aggressive, paranoid
+- **User-Agent Rotation**: 25+ realistic browser User-Agents
+- **Randomized Delays**: Configurable jitter between requests
+- **Per-Host Throttling**: Limit concurrent requests per target
+- **DNS Query Distribution**: Spread queries across resolvers
+- **Request Randomization**: Shuffle wordlists and targets

### 🧠 AI Integration (NEW!)
-- **Local LLM Analysis**: Powered by Ollama (phi3.5 + qwen2.5-coder)
+- **Local LLM Analysis**: Powered by Ollama (deepseek-r1:1.5b + qwen2.5-coder)
- **JavaScript Code Review**: Intelligent secret detection and vulnerability analysis
- **CVE Matching**: Automatic vulnerability detection for discovered technologies
- **Smart Cascade**: Fast triage filter + deep analysis for optimal performance
@@ -227,7 +240,7 @@ Traditional regex-based tools miss context. God's Eye's AI integration provides:

curl https://ollama.ai/install.sh | sh

# 2. Pull AI models (5-10 minutes, one-time)
-ollama pull phi3.5:3.8b       # Fast triage (~3GB)
+ollama pull deepseek-r1:1.5b  # Fast triage (~3GB)
ollama pull qwen2.5-coder:7b  # Deep analysis (~6GB)

# 3. Start Ollama server
@@ -247,6 +260,32 @@ ollama serve

| **Anomaly Detection** | Cross-subdomain pattern analysis | `AI:MEDIUM: Dev environment exposed in production` |
| **Executive Reports** | Professional summaries with remediation | Auto-generated markdown reports |

+### CVE Database (CISA KEV)
+
+God's Eye includes an **offline CVE database** powered by the [CISA Known Exploited Vulnerabilities](https://www.cisa.gov/known-exploited-vulnerabilities-catalog) catalog:
+
+- **1,400+ actively exploited CVEs** - Confirmed vulnerabilities used in real-world attacks
+- **Auto-download** - Database downloads automatically on first AI-enabled scan
+- **Instant lookups** - Zero-latency, offline CVE matching
+- **Daily updates** - CISA updates the catalog daily; refresh with `update-db`
+
+```bash
+# Update CVE database manually
+./god-eye update-db
+
+# Check database status
+./god-eye db-info
+
+# The database auto-downloads on first use with --enable-ai
+./god-eye -d target.com --enable-ai  # Auto-downloads if not present
+```
+
+**Database location:** `~/.god-eye/kev.json` (~1.3MB)
+
+The KEV database is used **in addition to** real-time NVD API lookups, providing a multi-layer approach:
+1. **KEV (instant)** - Critical, actively exploited vulnerabilities
+2. **NVD API (fallback)** - Comprehensive CVE database (rate-limited)
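The instant offline lookup described above amounts to matching a technology name against the downloaded catalog. A minimal sketch, using the field names the published CISA KEV JSON actually uses (`cveID`, `vendorProject`, `product`); the `kevLookup` helper is illustrative, not god-eye's internal API.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// kevCatalog mirrors the relevant slice of the CISA KEV JSON layout.
type kevCatalog struct {
	Vulnerabilities []struct {
		CveID         string `json:"cveID"`
		VendorProject string `json:"vendorProject"`
		Product       string `json:"product"`
	} `json:"vulnerabilities"`
}

// kevLookup returns the CVE IDs whose product matches tech
// (case-insensitive). No network access: raw is the cached kev.json.
func kevLookup(raw []byte, tech string) ([]string, error) {
	var cat kevCatalog
	if err := json.Unmarshal(raw, &cat); err != nil {
		return nil, err
	}
	var hits []string
	for _, v := range cat.Vulnerabilities {
		if strings.EqualFold(v.Product, tech) {
			hits = append(hits, v.CveID)
		}
	}
	return hits, nil
}

func main() {
	// Tiny inline stand-in for the cached catalog file.
	raw := []byte(`{"vulnerabilities":[
		{"cveID":"CVE-2021-23017","vendorProject":"F5","product":"nginx"}]}`)
	hits, _ := kevLookup(raw, "nginx")
	fmt.Println(hits) // [CVE-2021-23017]
}
```

Because the whole catalog is ~1.3MB, loading it once and scanning in memory is effectively free compared to a rate-limited NVD API round trip.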
### AI Usage Examples

```bash

@@ -261,7 +300,7 @@ ollama serve

# Custom models
./god-eye -d target.com --enable-ai \
-  --ai-fast-model phi3.5:3.8b \
+  --ai-fast-model deepseek-r1:1.5b \
  --ai-deep-model deepseek-coder-v2:16b

# Export with AI findings
@@ -271,7 +310,7 @@ ollama serve

### Sample AI Output

```
-🧠 AI-POWERED ANALYSIS (cascade: phi3.5:3.8b + qwen2.5-coder:7b)
+🧠 AI-POWERED ANALYSIS (cascade: deepseek-r1:1.5b + qwen2.5-coder:7b)

AI:C api.target.com → 4 findings
AI:H admin.target.com → 2 findings
@@ -361,11 +400,15 @@ Flags:

AI Flags:
      --enable-ai       Enable AI-powered analysis with Ollama
      --ai-url string   Ollama API URL (default "http://localhost:11434")
-      --ai-fast-model   Fast triage model (default "phi3.5:3.8b")
+      --ai-fast-model   Fast triage model (default "deepseek-r1:1.5b")
      --ai-deep-model   Deep analysis model (default "qwen2.5-coder:7b")
      --ai-cascade      Use cascade (fast triage + deep) (default true)
      --ai-deep         Enable deep AI analysis on all findings
  -h, --help            Help for god-eye

+Subcommands:
+  update-db   Download/update CISA KEV vulnerability database
+  db-info     Show vulnerability database status
```

### Examples
@@ -399,6 +442,39 @@ AI Flags:

./god-eye -d example.com -s | httpx
```

+### Stealth Mode
+
+For evasion during authorized penetration testing:
+
+```bash
+# Light stealth (reduces detection, minimal speed impact)
+./god-eye -d target.com --stealth light
+
+# Moderate stealth (balanced evasion/speed)
+./god-eye -d target.com --stealth moderate
+
+# Aggressive stealth (slow, high evasion)
+./god-eye -d target.com --stealth aggressive
+
+# Paranoid mode (very slow, maximum evasion)
+./god-eye -d target.com --stealth paranoid
+```
+
+**Stealth Mode Comparison:**
+
+| Mode | Max Threads | Delay | Rate/sec | Use Case |
+|------|-------------|-------|----------|----------|
+| `light` | 100 | 10-50ms | 100 | Avoid basic rate limits |
+| `moderate` | 30 | 50-200ms | 30 | Evade WAF detection |
+| `aggressive` | 10 | 200ms-1s | 10 | Sensitive targets |
+| `paranoid` | 3 | 1-5s | 2 | Maximum stealth needed |
+
+**Features by Mode:**
+- **All modes**: User-Agent rotation (25+ browsers)
+- **Moderate+**: Request randomization, DNS query distribution
+- **Aggressive+**: 50% timing jitter, per-host throttling
+- **Paranoid**: 70% jitter, single connection per host
---

## Benchmark

@@ -432,6 +508,8 @@ Performance comparison with other popular subdomain enumeration tools on a mediu

| Cloud Detection | ✅ | ❌ | ❌ | ❌ |
| Port Scanning | ✅ | ❌ | ❌ | ❌ |
| Technology Detection | ✅ | ❌ | ❌ | ❌ |
+| TLS Appliance Fingerprint | ✅ | ❌ | ❌ | ❌ |
+| AI-Powered Analysis | ✅ | ❌ | ❌ | ❌ |

---
@@ -447,24 +525,68 @@ God's Eye features a modern, colorful CLI with:

### JSON Output

The `--json` flag outputs a structured report with full metadata:

```json
-[
-  {
-    "subdomain": "api.example.com",
-    "ips": ["192.168.1.1"],
-    "cname": "api-gateway.cloudprovider.com",
-    "status_code": 200,
-    "title": "API Documentation",
-    "technologies": ["nginx", "Node.js"],
-    "cloud_provider": "AWS",
-    "security_headers": ["HSTS", "CSP"],
-    "missing_headers": ["X-Frame-Options"],
-    "admin_panels": ["/admin"],
-    "api_endpoints": ["/api/v1", "/swagger"],
-    "js_files": ["/static/app.js"],
-    "js_secrets": ["api_key: AKIAIOSFODNN7EXAMPLE"]
-  }
-]
+{
+  "meta": {
+    "version": "0.1",
+    "tool_name": "God's Eye",
+    "target": "example.com",
+    "start_time": "2024-01-15T10:30:00Z",
+    "end_time": "2024-01-15T10:32:15Z",
+    "duration": "2m15s",
+    "duration_ms": 135000,
+    "concurrency": 1000,
+    "timeout": 5,
+    "options": {
+      "brute_force": true,
+      "http_probe": true,
+      "ai_analysis": true
+    }
+  },
+  "stats": {
+    "total_subdomains": 25,
+    "active_subdomains": 18,
+    "vulnerabilities": 3,
+    "takeover_vulnerable": 1,
+    "ai_findings": 12
+  },
+  "wildcard": {
+    "detected": false,
+    "confidence": 0.95
+  },
+  "findings": {
+    "critical": [{"subdomain": "dev.example.com", "type": "Subdomain Takeover", "description": "GitHub Pages"}],
+    "high": [{"subdomain": "api.example.com", "type": "Git Repository Exposed", "description": ".git directory accessible"}],
+    "medium": [],
+    "low": [],
+    "info": []
+  },
+  "subdomains": [
+    {
+      "subdomain": "api.example.com",
+      "ips": ["192.168.1.1"],
+      "cname": "api-gateway.cloudprovider.com",
+      "status_code": 200,
+      "title": "API Documentation",
+      "technologies": ["nginx", "Node.js"],
+      "cloud_provider": "AWS",
+      "security_headers": ["HSTS", "CSP"],
+      "missing_headers": ["X-Frame-Options"],
+      "tls_self_signed": false,
+      "tls_fingerprint": {
+        "vendor": "Fortinet",
+        "product": "FortiGate",
+        "version": "60F",
+        "appliance_type": "firewall",
+        "internal_hosts": ["fw-internal.corp.local"]
+      },
+      "ai_findings": ["Potential IDOR in /api/users endpoint"],
+      "cve_findings": ["nginx: CVE-2021-23017"]
+    }
+  ]
+}
```

### CSV Output
@@ -505,6 +627,34 @@ Checks 110+ vulnerable services including:

- **Email Security (SPF/DMARC)**: Records are checked on the target domain specified with `-d`. Make sure to specify the root domain (e.g., `example.com` not `sub.example.com`) for accurate email security results.
- **SPA Detection**: The tool detects Single Page Applications that return the same content for all routes, filtering out false positives for admin panels, API endpoints, and backup files.

+### TLS Certificate Fingerprinting
+
+God's Eye analyzes TLS certificates to identify security appliances, especially useful for self-signed certificates commonly used by firewalls and VPN gateways.
+
+**Detected Vendors (25+):**
+
+| Category | Vendors |
+|----------|---------|
+| **Firewalls** | Fortinet FortiGate, Palo Alto PAN-OS, Cisco ASA/Firepower, SonicWall, Check Point, pfSense, OPNsense, WatchGuard, Sophos XG, Juniper SRX, Zyxel USG |
+| **VPN** | OpenVPN, Pulse Secure, GlobalProtect, Cisco AnyConnect |
+| **Load Balancers** | F5 BIG-IP, Citrix NetScaler, HAProxy, NGINX Plus, Kemp LoadMaster |
+| **WAF/Security** | Barracuda, Imperva |
+| **Other** | MikroTik, Ubiquiti UniFi, VMware NSX, DrayTek Vigor |
+
+**Features:**
+- Detects vendor and product from certificate Subject/Issuer fields
+- Extracts version information where available (e.g., `FortiGate v60F`)
+- Identifies internal hostnames from certificate SANs (`.local`, `.internal`, etc.)
+- Reports appliance type (firewall, vpn, loadbalancer, proxy, waf)
+
+**Sample Output:**
+```
+● vpn.target.com [200]
+  Security: TLS: TLS 1.2 (self-signed)
+  APPLIANCE: Fortinet FortiGate v60F (firewall)
+  INTERNAL: fw-internal.corp.local, vpn-gw-01.internal
+```

---

## Use Cases
@@ -559,7 +709,7 @@ Tested on production domain (authorized testing):

| Passive Enumeration | ~25 sec | - |
| HTTP Probing | ~35 sec | - |
| Security Checks | ~40 sec | - |
-| AI Triage | ~10 sec | phi3.5:3.8b |
+| AI Triage | ~10 sec | deepseek-r1:1.5b |
| AI Deep Analysis | ~25 sec | qwen2.5-coder:7b |
| Report Generation | ~3 sec | qwen2.5-coder:7b |
Binary images: 715 KiB (unchanged), 342 KiB (unchanged)
@@ -6,6 +6,7 @@ import (

	"github.com/spf13/cobra"

+	"god-eye/internal/ai"
	"god-eye/internal/config"
	"god-eye/internal/output"
	"god-eye/internal/scanner"

@@ -27,7 +28,9 @@ Examples:

  god-eye -d example.com -r 1.1.1.1,8.8.8.8    Custom resolvers
  god-eye -d example.com -p 80,443,8080        Custom ports to scan
  god-eye -d example.com --json                JSON output to stdout
-  god-eye -d example.com -s                    Silent mode (subdomains only)`,
+  god-eye -d example.com -s                    Silent mode (subdomains only)
+  god-eye -d example.com --stealth moderate    Moderate stealth (evasion mode)
+  god-eye -d example.com --stealth paranoid    Maximum stealth (very slow)`,
	Run: func(cmd *cobra.Command, args []string) {
		if cfg.Domain == "" {
			fmt.Println(output.Red("[-]"), "Domain is required. Use -d flag.")
@@ -67,11 +70,75 @@ Examples:

	// AI flags
	rootCmd.Flags().BoolVar(&cfg.EnableAI, "enable-ai", false, "Enable AI-powered analysis with Ollama (includes CVE search)")
	rootCmd.Flags().StringVar(&cfg.AIUrl, "ai-url", "http://localhost:11434", "Ollama API URL")
-	rootCmd.Flags().StringVar(&cfg.AIFastModel, "ai-fast-model", "phi3.5:3.8b", "Fast triage model")
+	rootCmd.Flags().StringVar(&cfg.AIFastModel, "ai-fast-model", "deepseek-r1:1.5b", "Fast triage model")
	rootCmd.Flags().StringVar(&cfg.AIDeepModel, "ai-deep-model", "qwen2.5-coder:7b", "Deep analysis model (supports function calling)")
	rootCmd.Flags().BoolVar(&cfg.AICascade, "ai-cascade", true, "Use cascade (fast triage + deep analysis)")
	rootCmd.Flags().BoolVar(&cfg.AIDeepAnalysis, "ai-deep", false, "Enable deep AI analysis on all findings")

+	// Stealth flags
+	rootCmd.Flags().StringVar(&cfg.StealthMode, "stealth", "", "Stealth mode: light, moderate, aggressive, paranoid (reduces detection)")
+
+	// Database update subcommand
+	updateDbCmd := &cobra.Command{
+		Use:   "update-db",
+		Short: "Update vulnerability databases (CISA KEV)",
+		Long: `Downloads and updates local vulnerability databases:
+  - CISA KEV (Known Exploited Vulnerabilities) - ~500KB, updated daily by CISA
+
+The KEV database contains vulnerabilities that are actively exploited in the wild.
+This data is used for instant, offline CVE lookups during scans.`,
+		Run: func(cmd *cobra.Command, args []string) {
+			fmt.Println(output.BoldCyan("🔄 Updating vulnerability databases..."))
+			fmt.Println()
+
+			// Update KEV
+			fmt.Print(output.Dim("  Downloading CISA KEV catalog... "))
+			kevStore := ai.GetKEVStore()
+			if err := kevStore.Update(); err != nil {
+				fmt.Println(output.Red("FAILED"))
+				fmt.Printf("  %s %v\n", output.Red("Error:"), err)
+				os.Exit(1)
+			}
+
+			version, count, date := kevStore.GetCatalogInfo()
+			fmt.Println(output.Green("OK"))
+			fmt.Printf("  %s %s vulnerabilities (v%s, released %s)\n",
+				output.Green("✓"), output.BoldWhite(fmt.Sprintf("%d", count)), version, date)
+			fmt.Println()
+			fmt.Println(output.Green("✅ Database update complete!"))
+			fmt.Println(output.Dim("  KEV data cached at: ~/.god-eye/kev.json"))
+		},
+	}
+	rootCmd.AddCommand(updateDbCmd)
+
+	// Database info subcommand
+	dbInfoCmd := &cobra.Command{
+		Use:   "db-info",
+		Short: "Show vulnerability database status",
+		Run: func(cmd *cobra.Command, args []string) {
+			fmt.Println(output.BoldCyan("📊 Vulnerability Database Status"))
+			fmt.Println()
+
+			kevStore := ai.GetKEVStore()
+
+			// Check if KEV needs update
+			if kevStore.NeedUpdate() {
+				fmt.Println(output.Yellow("⚠️  CISA KEV: Not downloaded or outdated"))
+				fmt.Println(output.Dim("   Run 'god-eye update-db' to download"))
+			} else {
+				if err := kevStore.Load(); err != nil {
+					fmt.Printf("%s CISA KEV: Error loading - %v\n", output.Red("❌"), err)
+				} else {
+					version, count, date := kevStore.GetCatalogInfo()
+					fmt.Printf("%s CISA KEV: %s vulnerabilities\n", output.Green("✓"), output.BoldWhite(fmt.Sprintf("%d", count)))
+					fmt.Printf("   Version: %s | Released: %s\n", version, date)
+					fmt.Println(output.Dim("   Source: https://www.cisa.gov/known-exploited-vulnerabilities-catalog"))
+				}
+			}
+		},
+	}
+	rootCmd.AddCommand(dbInfoCmd)

	if err := rootCmd.Execute(); err != nil {
		os.Exit(1)
	}
@@ -6,7 +6,10 @@ import (

	"io"
	"net/http"
	"net/url"
	"sort"
	"strconv"
	"strings"
	"sync"
	"time"
)
@@ -54,77 +57,141 @@ type NVDResponse struct {
|
||||
} `json:"vulnerabilities"`
|
||||
}
|
||||
|
||||
// CVECacheEntry holds cached CVE results
|
||||
type CVECacheEntry struct {
|
||||
Result string
|
||||
Timestamp time.Time
|
||||
}
|
||||
|
||||
var (
|
||||
nvdClient = &http.Client{
|
||||
Timeout: 10 * time.Second,
|
||||
Timeout: 15 * time.Second,
|
||||
}
|
||||
nvdBaseURL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
|
||||
|
||||
// Rate limiting: max 5 requests per 30 seconds (NVD allows 10 req/60s without API key)
|
||||
// Rate limiting: NVD allows 5 req/30s without API key
|
||||
lastNVDRequest time.Time
|
||||
nvdRateLimit = 6 * time.Second // Wait 6 seconds between requests
|
||||
nvdRateLimit = 7 * time.Second // Wait 7 seconds between requests (safer)
|
||||
nvdMutex sync.Mutex
|
||||
|
||||
// CVE Cache to avoid duplicate lookups across subdomains
|
||||
cveCache = make(map[string]*CVECacheEntry)
|
||||
cveCacheMutex sync.RWMutex
|
||||
)
|
||||
|
||||
// SearchCVE searches for CVE vulnerabilities using NVD API
|
||||
// SearchCVE searches for CVE vulnerabilities with caching to avoid duplicates
|
||||
// Returns a concise format: "CVE-ID (SEVERITY/SCORE), CVE-ID2 (SEVERITY/SCORE)"
func SearchCVE(technology string, version string) (string, error) {
	// Normalize technology name
	tech := normalizeTechnology(technology)
	cacheKey := tech // Use normalized tech as cache key

	// Check cache first
	cveCacheMutex.RLock()
	if entry, ok := cveCache[cacheKey]; ok {
		cveCacheMutex.RUnlock()
		// Cache valid for 1 hour
		if time.Since(entry.Timestamp) < time.Hour {
			return entry.Result, nil
		}
	} else {
		cveCacheMutex.RUnlock()
	}

	var allCVEs []CVEInfo

	// Layer 1: Check CISA KEV first (instant, offline, most critical)
	if kevResult, err := SearchKEV(tech); err == nil && kevResult != "" {
		// Parse KEV result for CVE IDs
		lines := strings.Split(kevResult, "\n")
		for _, line := range lines {
			if strings.Contains(line, "CVE-") {
				parts := strings.Fields(line)
				for _, part := range parts {
					if strings.HasPrefix(part, "CVE-") {
						allCVEs = append(allCVEs, CVEInfo{
							ID:       strings.TrimSuffix(part, ":"),
							Severity: "CRITICAL",
							Score:    9.8, // KEV = actively exploited
						})
					}
				}
			}
		}
	}

	// Layer 2: Query NVD API for additional CVEs
	if nvdCVEs, err := queryNVD(tech); err == nil {
		allCVEs = append(allCVEs, nvdCVEs...)
	}
	// Don't fail on NVD errors - just use what we have

	// Format result
	result := formatCVEsConcise(allCVEs)

	// Cache the result
	cveCacheMutex.Lock()
	cveCache[cacheKey] = &CVECacheEntry{
		Result:    result,
		Timestamp: time.Now(),
	}
	cveCacheMutex.Unlock()

	return result, nil
}

// formatCVEsConcise returns a concise CVE summary
func formatCVEsConcise(cves []CVEInfo) string {
	if len(cves) == 0 {
		return ""
	}

	// Sort by score (highest first)
	sort.Slice(cves, func(i, j int) bool {
		return cves[i].Score > cves[j].Score
	})

	// Deduplicate by CVE ID
	seen := make(map[string]bool)
	var uniqueCVEs []CVEInfo
	for _, cve := range cves {
		if !seen[cve.ID] && cve.ID != "" {
			seen[cve.ID] = true
			uniqueCVEs = append(uniqueCVEs, cve)
		}
	}

	if len(uniqueCVEs) == 0 {
		return ""
	}

	// Show top 3 most critical
	maxShow := 3
	if len(uniqueCVEs) < maxShow {
		maxShow = len(uniqueCVEs)
	}

	var parts []string
	for i := 0; i < maxShow; i++ {
		cve := uniqueCVEs[i]
		severity := cve.Severity
		if severity == "" {
			severity = "UNK"
		}
		parts = append(parts, fmt.Sprintf("%s (%s/%.1f)", cve.ID, severity, cve.Score))
	}

	result := strings.Join(parts, ", ")
	if len(uniqueCVEs) > maxShow {
		result += fmt.Sprintf(" +%d more", len(uniqueCVEs)-maxShow)
	}

	return result
}

// queryNVD queries the NVD API for CVE information with thread-safe rate limiting
func queryNVD(keyword string) ([]CVEInfo, error) {
	nvdMutex.Lock()
	// Rate limiting: wait if necessary
	if !lastNVDRequest.IsZero() {
		elapsed := time.Since(lastNVDRequest)
@@ -133,11 +200,12 @@ func queryNVD(keyword string) ([]CVEInfo, error) {
		}
	}
	lastNVDRequest = time.Now()
	nvdMutex.Unlock()

	// Build URL with query parameters
	params := url.Values{}
	params.Add("keywordSearch", keyword)
	params.Add("resultsPerPage", "5") // Limit results for speed

	reqURL := fmt.Sprintf("%s?%s", nvdBaseURL, params.Encode())

@@ -171,7 +239,17 @@ func queryNVD(keyword string) ([]CVEInfo, error) {

	// Convert to CVEInfo
	var cves []CVEInfo
	cutoffYear := time.Now().Year() - 10 // Filter CVEs older than 10 years

	for _, vuln := range nvdResp.Vulnerabilities {
		// Filter old CVEs - extract year from CVE ID (format: CVE-YYYY-NNNNN)
		if len(vuln.CVE.ID) >= 8 {
			yearStr := vuln.CVE.ID[4:8]
			if year, err := strconv.Atoi(yearStr); err == nil && year < cutoffYear {
				continue // Skip CVEs older than cutoff
			}
		}

		cve := CVEInfo{
			ID:        vuln.CVE.ID,
			Published: formatDate(vuln.CVE.Published),
432
internal/ai/kev.go
Normal file
@@ -0,0 +1,432 @@
package ai

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"
)

const (
	// CISA KEV Catalog URL
	kevURL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

	// Cache settings
	kevCacheFile = "kev.json"
	kevCacheTTL  = 24 * time.Hour // Refresh once per day
)

// KEVCatalog represents the CISA Known Exploited Vulnerabilities catalog
type KEVCatalog struct {
	Title           string            `json:"title"`
	CatalogVersion  string            `json:"catalogVersion"`
	DateReleased    string            `json:"dateReleased"`
	Count           int               `json:"count"`
	Vulnerabilities []KEVulnerability `json:"vulnerabilities"`
}

// KEVulnerability represents a single KEV entry
type KEVulnerability struct {
	CveID                      string `json:"cveID"`
	VendorProject              string `json:"vendorProject"`
	Product                    string `json:"product"`
	VulnerabilityName          string `json:"vulnerabilityName"`
	DateAdded                  string `json:"dateAdded"`
	ShortDescription           string `json:"shortDescription"`
	RequiredAction             string `json:"requiredAction"`
	DueDate                    string `json:"dueDate"`
	KnownRansomwareCampaignUse string `json:"knownRansomwareCampaignUse"`
	Notes                      string `json:"notes"`
}

// KEVStore manages the local KEV database
type KEVStore struct {
	catalog    *KEVCatalog
	productMap map[string][]KEVulnerability // Maps product names to vulnerabilities
	cacheDir   string
	mu         sync.RWMutex
	loaded     bool
}

var (
	kevStore     *KEVStore
	kevStoreOnce sync.Once
)

// GetKEVStore returns the singleton KEV store instance
func GetKEVStore() *KEVStore {
	kevStoreOnce.Do(func() {
		cacheDir := getKEVCacheDir()
		kevStore = &KEVStore{
			cacheDir:   cacheDir,
			productMap: make(map[string][]KEVulnerability),
		}
	})
	return kevStore
}

// getKEVCacheDir returns the cache directory path
func getKEVCacheDir() string {
	homeDir, err := os.UserHomeDir()
	if err != nil {
		return ".god-eye"
	}
	return filepath.Join(homeDir, ".god-eye")
}

// getCachePath returns the full path to the cache file
func (k *KEVStore) getCachePath() string {
	return filepath.Join(k.cacheDir, kevCacheFile)
}

// IsLoaded returns whether the KEV database is loaded
func (k *KEVStore) IsLoaded() bool {
	k.mu.RLock()
	defer k.mu.RUnlock()
	return k.loaded
}

// GetCatalogInfo returns catalog metadata
func (k *KEVStore) GetCatalogInfo() (version string, count int, date string) {
	k.mu.RLock()
	defer k.mu.RUnlock()
	if k.catalog == nil {
		return "", 0, ""
	}
	return k.catalog.CatalogVersion, k.catalog.Count, k.catalog.DateReleased
}

// NeedUpdate checks if the cache needs to be updated
func (k *KEVStore) NeedUpdate() bool {
	cachePath := k.getCachePath()
	info, err := os.Stat(cachePath)
	if err != nil {
		return true // File doesn't exist
	}
	return time.Since(info.ModTime()) > kevCacheTTL
}

// Update downloads and updates the KEV database
func (k *KEVStore) Update() error {
	// Ensure cache directory exists
	if err := os.MkdirAll(k.cacheDir, 0755); err != nil {
		return fmt.Errorf("failed to create cache directory: %w", err)
	}

	// Download KEV catalog
	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Get(kevURL)
	if err != nil {
		return fmt.Errorf("failed to download KEV catalog: %w", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("KEV download failed with status: %d", resp.StatusCode)
	}

	// Read response body
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return fmt.Errorf("failed to read KEV response: %w", err)
	}

	// Parse to validate JSON
	var catalog KEVCatalog
	if err := json.Unmarshal(body, &catalog); err != nil {
		return fmt.Errorf("failed to parse KEV catalog: %w", err)
	}

	// Write to cache file
	cachePath := k.getCachePath()
	if err := os.WriteFile(cachePath, body, 0644); err != nil {
		return fmt.Errorf("failed to write cache file: %w", err)
	}

	// Load into memory
	return k.loadFromCatalog(&catalog)
}

// Load loads the KEV database from cache or downloads if needed
func (k *KEVStore) Load() error {
	return k.LoadWithProgress(false)
}

// LoadWithProgress loads the KEV database with optional progress output
func (k *KEVStore) LoadWithProgress(showProgress bool) error {
	k.mu.Lock()
	defer k.mu.Unlock()

	if k.loaded {
		return nil
	}

	cachePath := k.getCachePath()

	// Try to load from cache
	data, err := os.ReadFile(cachePath)
	if err == nil {
		var catalog KEVCatalog
		if err := json.Unmarshal(data, &catalog); err == nil {
			return k.loadFromCatalog(&catalog)
		}
	}

	// Cache doesn't exist or is invalid, need to download
	if showProgress {
		fmt.Print("📥 First run: downloading CISA KEV database... ")
	}

	k.mu.Unlock()
	err = k.Update()
	k.mu.Lock()

	if err != nil {
		if showProgress {
			fmt.Println("FAILED")
		}
		return err
	}

	if showProgress {
		fmt.Println("OK")
		fmt.Printf("   ✓ Loaded %d known exploited vulnerabilities\n", k.catalog.Count)
	}

	return nil
}

// loadFromCatalog builds the internal index from catalog data
func (k *KEVStore) loadFromCatalog(catalog *KEVCatalog) error {
	k.catalog = catalog
	k.productMap = make(map[string][]KEVulnerability)

	for _, vuln := range catalog.Vulnerabilities {
		// Index by product name (lowercase for matching)
		productKey := strings.ToLower(vuln.Product)
		k.productMap[productKey] = append(k.productMap[productKey], vuln)

		// Also index by vendor
		vendorKey := strings.ToLower(vuln.VendorProject)
		if vendorKey != productKey {
			k.productMap[vendorKey] = append(k.productMap[vendorKey], vuln)
		}
	}

	k.loaded = true
	return nil
}

// SearchByProduct searches for KEV vulnerabilities by product name
func (k *KEVStore) SearchByProduct(product string) []KEVulnerability {
	k.mu.RLock()
	defer k.mu.RUnlock()

	if !k.loaded || k.catalog == nil {
		return nil
	}

	product = strings.ToLower(product)
	var results []KEVulnerability

	// Direct match
	if vulns, ok := k.productMap[product]; ok {
		results = append(results, vulns...)
	}

	// Partial match for products that might have different naming
	for key, vulns := range k.productMap {
		if key != product && (strings.Contains(key, product) || strings.Contains(product, key)) {
			results = append(results, vulns...)
		}
	}

	return deduplicateKEV(results)
}

// SearchByCVE searches for a specific CVE ID in the KEV catalog
func (k *KEVStore) SearchByCVE(cveID string) *KEVulnerability {
	k.mu.RLock()
	defer k.mu.RUnlock()

	if !k.loaded || k.catalog == nil {
		return nil
	}

	cveID = strings.ToUpper(cveID)
	for _, vuln := range k.catalog.Vulnerabilities {
		if vuln.CveID == cveID {
			return &vuln
		}
	}
	return nil
}

// SearchByTechnology searches for KEV entries matching a technology name
func (k *KEVStore) SearchByTechnology(technology string) []KEVulnerability {
	k.mu.RLock()
	defer k.mu.RUnlock()

	if !k.loaded || k.catalog == nil {
		return nil
	}

	technology = strings.ToLower(technology)
	var results []KEVulnerability

	// Normalize common technology names
	aliases := getTechnologyAliases(technology)

	for _, vuln := range k.catalog.Vulnerabilities {
		productLower := strings.ToLower(vuln.Product)
		vendorLower := strings.ToLower(vuln.VendorProject)
		nameLower := strings.ToLower(vuln.VulnerabilityName)

		for _, alias := range aliases {
			if strings.Contains(productLower, alias) ||
				strings.Contains(vendorLower, alias) ||
				strings.Contains(nameLower, alias) {
				results = append(results, vuln)
				break
			}
		}
	}

	return deduplicateKEV(results)
}

// getTechnologyAliases returns common aliases for a technology
func getTechnologyAliases(tech string) []string {
	aliases := []string{tech}

	// Common mappings
	mappings := map[string][]string{
		"nginx":      {"nginx"},
		"apache":     {"apache", "httpd"},
		"iis":        {"iis", "internet information services"},
		"wordpress":  {"wordpress"},
		"drupal":     {"drupal"},
		"joomla":     {"joomla"},
		"tomcat":     {"tomcat"},
		"jenkins":    {"jenkins"},
		"gitlab":     {"gitlab"},
		"exchange":   {"exchange"},
		"sharepoint": {"sharepoint"},
		"citrix":     {"citrix"},
		"vmware":     {"vmware", "vcenter", "esxi"},
		"fortinet":   {"fortinet", "fortigate", "fortios"},
		"paloalto":   {"palo alto", "pan-os"},
		"cisco":      {"cisco"},
		"f5":         {"f5", "big-ip"},
		"pulse":      {"pulse", "pulse secure"},
		"sonicwall":  {"sonicwall"},
		"zyxel":      {"zyxel"},
		"nextjs":     {"next.js", "nextjs"},
		"react":      {"react"},
		"angular":    {"angular"},
		"vue":        {"vue"},
		"php":        {"php"},
		"java":       {"java"},
		"log4j":      {"log4j", "log4shell"},
		"spring":     {"spring"},
		"struts":     {"struts"},
		"confluence": {"confluence"},
		"jira":       {"jira"},
		"atlassian":  {"atlassian"},
	}

	if mapped, ok := mappings[tech]; ok {
		aliases = append(aliases, mapped...)
	}

	return aliases
}

// deduplicateKEV removes duplicate KEV entries
func deduplicateKEV(vulns []KEVulnerability) []KEVulnerability {
	seen := make(map[string]bool)
	var result []KEVulnerability

	for _, v := range vulns {
		if !seen[v.CveID] {
			seen[v.CveID] = true
			result = append(result, v)
		}
	}
	return result
}

// FormatKEVResult formats KEV search results for display
func FormatKEVResult(vulns []KEVulnerability, technology string) string {
	if len(vulns) == 0 {
		return ""
	}

	var sb strings.Builder
	sb.WriteString(fmt.Sprintf("🚨 CISA KEV Alert for %s:\n", technology))
	sb.WriteString(fmt.Sprintf("   Found %d ACTIVELY EXPLOITED vulnerabilities!\n\n", len(vulns)))

	// Show up to 5 most relevant
	maxShow := 5
	if len(vulns) < maxShow {
		maxShow = len(vulns)
	}

	for i := 0; i < maxShow; i++ {
		v := vulns[i]
		sb.WriteString(fmt.Sprintf("   🔴 %s - %s\n", v.CveID, v.VulnerabilityName))

		// Truncate description if too long
		desc := v.ShortDescription
		if len(desc) > 150 {
			desc = desc[:150] + "..."
		}
		sb.WriteString(fmt.Sprintf("      %s\n", desc))

		// Ransomware indicator
		if v.KnownRansomwareCampaignUse == "Known" {
			sb.WriteString("      ⚠️  USED IN RANSOMWARE CAMPAIGNS\n")
		}

		sb.WriteString(fmt.Sprintf("      Added: %s | Due: %s\n", v.DateAdded, v.DueDate))
		sb.WriteString("\n")
	}

	if len(vulns) > maxShow {
		sb.WriteString(fmt.Sprintf("   ... and %d more KEV entries\n", len(vulns)-maxShow))
	}

	sb.WriteString("   ℹ️  These vulnerabilities are CONFIRMED to be exploited in the wild.\n")
	sb.WriteString("   ⚡ IMMEDIATE patching is strongly recommended.\n")

	return sb.String()
}

// SearchKEV is a convenience function for searching KEV by technology
func SearchKEV(technology string) (string, error) {
	return SearchKEVWithProgress(technology, false)
}

// SearchKEVWithProgress searches KEV with optional download progress
func SearchKEVWithProgress(technology string, showProgress bool) (string, error) {
	store := GetKEVStore()

	// Auto-load if not loaded (with auto-download if needed)
	if !store.IsLoaded() {
		if err := store.LoadWithProgress(showProgress); err != nil {
			return "", fmt.Errorf("failed to load KEV database: %w", err)
		}
	}

	vulns := store.SearchByTechnology(technology)
	if len(vulns) == 0 {
		return "", nil // No KEV found, not an error
	}

	return FormatKEVResult(vulns, technology), nil
}

@@ -12,7 +12,7 @@ import (
// OllamaClient handles communication with local Ollama instance
type OllamaClient struct {
	BaseURL       string
	FastModel     string // deepseek-r1:1.5b for quick triage
	DeepModel     string // qwen2.5-coder:7b for deep analysis
	Timeout       time.Duration
	EnableCascade bool
@@ -51,7 +51,7 @@ func NewOllamaClient(baseURL, fastModel, deepModel string, enableCascade bool) *
		baseURL = "http://localhost:11434"
	}
	if fastModel == "" {
		fastModel = "deepseek-r1:1.5b"
	}
	if deepModel == "" {
		deepModel = "qwen2.5-coder:7b"
@@ -227,7 +227,7 @@ Format: SEVERITY: finding`, truncate(summary, 4000))

// GenerateReport creates executive summary and recommendations
func (c *OllamaClient) GenerateReport(findings string, stats map[string]int) (string, error) {
	prompt := fmt.Sprintf(`You are a security analyst. Create a security assessment report based on the findings below.

SCAN STATISTICS:
- Total subdomains: %d
@@ -235,15 +235,21 @@ SCAN STATISTICS:
- Vulnerabilities: %d
- Takeovers: %d

FINDINGS DATA (use these EXACT subdomain names in your report):
%s

INSTRUCTIONS:
1. Use the ACTUAL subdomain names from the findings data above (e.g., "new.computerplus.it", "api.example.com")
2. Do NOT use generic placeholders like "Subdomain A" or "Subdomain B"
3. Reference specific vulnerabilities found for each subdomain
4. Include CVE IDs when present

Generate report with:
## Executive Summary (2-3 sentences with real subdomain names)
## Critical Findings (list each affected subdomain by name with its issues)
## Recommendations (actionable items referencing specific subdomains)

Be concise and professional. Use the real data provided above.`,
		stats["total"], stats["active"], stats["vulns"], stats["takeovers"], truncate(findings, 3000))

	response, err := c.query(c.DeepModel, prompt, 45*time.Second)

@@ -255,34 +261,106 @@ Be concise and professional.`,
}

// CVEMatch checks for known vulnerabilities in detected technologies
// Returns concise format directly: "CVE-ID (SEVERITY/SCORE), ..."
func (c *OllamaClient) CVEMatch(technology, version string) (string, error) {
	// Call SearchCVE directly - it now returns concise format with caching
	cveData, err := SearchCVE(technology, version)
	if err != nil {
		return "", err
	}

	// Return directly without AI processing - the format is already clean
	return cveData, nil
}

// FilterSecrets uses AI to filter false positives from potential secrets
// Returns only real secrets, filtering out UI text, placeholders, and example values
func (c *OllamaClient) FilterSecrets(potentialSecrets []string) ([]string, error) {
	if len(potentialSecrets) == 0 {
		return nil, nil
	}

	// Build the list of secrets for AI analysis
	secretsList := strings.Join(potentialSecrets, "\n")

	prompt := fmt.Sprintf(`Task: Filter JavaScript findings. Output only REAL secrets.

Examples of FAKE (do NOT output):
- Change Password (UI button text)
- Update Password (UI button text)
- Password (just a word)
- Enter your API key (placeholder)
- YOUR_API_KEY (placeholder)
- Login, Token, Secret (single words)

Examples of REAL (DO output):
- pk_test_TYooMQauvdEDq54NiTphI7jx (Stripe key - has random chars)
- AKIAIOSFODNN7EXAMPLE (AWS key - 20 char pattern)
- mongodb://admin:secret123@db.example.com (connection string)
- eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9... (JWT token)

Input findings:
%s

Output only the REAL secrets in their original [Type] format, one per line. If none are real, output: NONE`, secretsList)

	response, err := c.query(c.FastModel, prompt, 15*time.Second)
	if err != nil {
		// On error, return original list (fail open for security)
		return potentialSecrets, nil
	}

	// Parse response
	response = strings.TrimSpace(response)
	if strings.ToUpper(response) == "NONE" || response == "" {
		return nil, nil
	}

	var realSecrets []string

	// First, try to find secrets in [Type] format in the response
	lines := strings.Split(response, "\n")
	for _, line := range lines {
		line = strings.TrimSpace(line)
		if line == "" || strings.ToUpper(line) == "NONE" {
			continue
		}
		// Accept any line that contains our format [Type] value
		if strings.Contains(line, "[") && strings.Contains(line, "]") {
			// Extract the [Type] value part
			startIdx := strings.Index(line, "[")
			if startIdx >= 0 {
				// Find the actual secret value after ]
				endBracket := strings.Index(line[startIdx:], "]")
				if endBracket > 0 {
					// Get everything from [ to end of meaningful content
					secretPart := line[startIdx:]
					// Remove trailing explanations (after " –" or " -")
					if dashIdx := strings.Index(secretPart, " –"); dashIdx > 0 {
						secretPart = secretPart[:dashIdx]
					}
					if dashIdx := strings.Index(secretPart, " -"); dashIdx > 0 {
						secretPart = secretPart[:dashIdx]
					}
					secretPart = strings.TrimSpace(secretPart)
					if secretPart != "" && strings.HasPrefix(secretPart, "[") {
						realSecrets = append(realSecrets, secretPart)
					}
				}
			}
		}
	}

	// If AI returned nothing valid but we had input, something went wrong
	// Return original secrets (fail-safe: better false positives than miss real ones)
	if len(realSecrets) == 0 && len(potentialSecrets) > 0 {
		// Check if response contains "NONE" anywhere - that's a valid empty result
		if !strings.Contains(strings.ToUpper(response), "NONE") {
			return potentialSecrets, nil
		}
	}

	return realSecrets, nil
}

// query sends a request to Ollama API

@@ -23,12 +23,14 @@ type Config struct {
	OnlyActive bool
	JsonOutput bool
	// AI Configuration
	EnableAI       bool
	AIUrl          string
	AIFastModel    string
	AIDeepModel    string
	AICascade      bool
	AIDeepAnalysis bool
	// Stealth Configuration
	StealthMode string // off, light, moderate, aggressive, paranoid
}

// Stats holds scan statistics
@@ -61,6 +63,9 @@ type SubdomainResult struct {
	TLSVersion    string `json:"tls_version,omitempty"`
	TLSIssuer     string `json:"tls_issuer,omitempty"`
	TLSExpiry     string `json:"tls_expiry,omitempty"`
	TLSSelfSigned bool   `json:"tls_self_signed,omitempty"`
	// TLS Fingerprint for appliance detection
	TLSFingerprint *TLSFingerprint `json:"tls_fingerprint,omitempty"`
	Ports          []int           `json:"ports,omitempty"`
	Takeover       string          `json:"takeover,omitempty"`
	ResponseMs     int64           `json:"response_ms,omitempty"`
@@ -100,6 +105,21 @@ type SubdomainResult struct {
	CVEFindings []string `json:"cve_findings,omitempty"`
}

// TLSFingerprint holds detailed certificate information for appliance detection
type TLSFingerprint struct {
	Vendor        string   `json:"vendor,omitempty"`         // Detected vendor (Fortinet, Palo Alto, etc.)
	Product       string   `json:"product,omitempty"`        // Product name (FortiGate, PA-xxx, etc.)
	Version       string   `json:"version,omitempty"`        // Version if detectable
	SubjectCN     string   `json:"subject_cn,omitempty"`     // Subject Common Name
	SubjectOrg    string   `json:"subject_org,omitempty"`    // Subject Organization
	SubjectOU     string   `json:"subject_ou,omitempty"`     // Subject Organizational Unit
	IssuerCN      string   `json:"issuer_cn,omitempty"`      // Issuer Common Name
	IssuerOrg     string   `json:"issuer_org,omitempty"`     // Issuer Organization
	SerialNumber  string   `json:"serial_number,omitempty"`  // Certificate serial number
	InternalHosts []string `json:"internal_hosts,omitempty"` // Potential internal hostnames found
	ApplianceType string   `json:"appliance_type,omitempty"` // firewall, vpn, loadbalancer, proxy, etc.
}

// IPInfo holds IP geolocation data
type IPInfo struct {
	ASN string `json:"as"`

@@ -1,6 +1,7 @@
package dns

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
@@ -10,9 +11,16 @@ import (
	"github.com/miekg/dns"

	"god-eye/internal/config"
	"god-eye/internal/retry"
)

// ResolveSubdomain resolves a subdomain to IP addresses with retry logic
func ResolveSubdomain(subdomain string, resolvers []string, timeout int) []string {
	return ResolveSubdomainWithRetry(subdomain, resolvers, timeout, true)
}

// ResolveSubdomainWithRetry resolves with optional retry
func ResolveSubdomainWithRetry(subdomain string, resolvers []string, timeout int, useRetry bool) []string {
	c := dns.Client{
		Timeout: time.Duration(timeout) * time.Second,
	}
@@ -20,16 +28,48 @@ func ResolveSubdomain(subdomain string, resolvers []string, timeout int) []strin
	m := dns.Msg{}
	m.SetQuestion(dns.Fqdn(subdomain), dns.TypeA)

	// Try each resolver
	for _, resolver := range resolvers {
		var ips []string

		if useRetry {
			// Use retry logic
			ctx, cancel := context.WithTimeout(context.Background(), time.Duration(timeout*2)*time.Second)
			result := retry.Do(ctx, retry.DNSConfig(), func() (interface{}, error) {
				r, _, err := c.Exchange(&m, resolver)
				if err != nil {
					return nil, err
				}
				if r == nil {
					return nil, fmt.Errorf("nil response")
				}

				var resolvedIPs []string
				for _, ans := range r.Answer {
					if a, ok := ans.(*dns.A); ok {
						resolvedIPs = append(resolvedIPs, a.A.String())
					}
				}

				if len(resolvedIPs) == 0 {
					return nil, fmt.Errorf("no A records")
				}
				return resolvedIPs, nil
			})
			cancel()

			if result.Error == nil && result.Value != nil {
				ips = result.Value.([]string)
			}
		} else {
			// Direct resolution without retry
			r, _, err := c.Exchange(&m, resolver)
			if err == nil && r != nil {
				for _, ans := range r.Answer {
					if a, ok := ans.(*dns.A); ok {
						ips = append(ips, a.A.String())
					}
				}
			}
		}
341
internal/dns/wildcard.go
Normal file
@@ -0,0 +1,341 @@
package dns

import (
	"crypto/md5"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"sort"
	"strings"
	"time"
)

// WildcardInfo holds information about wildcard DNS detection
type WildcardInfo struct {
	IsWildcard     bool
	WildcardIPs    []string
	WildcardCNAME  string
	HTTPStatusCode int
	HTTPBodyHash   string
	HTTPBodySize   int64
	Confidence     float64 // 0-1 confidence level
}

// WildcardDetector performs comprehensive wildcard detection
type WildcardDetector struct {
	resolvers      []string
	timeout        int
	httpClient     *http.Client
	testSubdomains []string
}

// NewWildcardDetector creates a new wildcard detector
func NewWildcardDetector(resolvers []string, timeout int) *WildcardDetector {
	return &WildcardDetector{
		resolvers: resolvers,
		timeout:   timeout,
		httpClient: &http.Client{
			Timeout: time.Duration(timeout) * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
			CheckRedirect: func(req *http.Request, via []*http.Request) error {
				return http.ErrUseLastResponse
			},
		},
		testSubdomains: generateTestSubdomains(),
	}
}

// generateTestSubdomains creates random non-existent subdomain patterns
func generateTestSubdomains() []string {
	timestamp := time.Now().UnixNano()
	return []string{
		fmt.Sprintf("wildcard-test-%d-abc", timestamp),
		fmt.Sprintf("random-xyz-%d-def", timestamp%1000000),
		fmt.Sprintf("nonexistent-%d-ghi", timestamp%999999),
		fmt.Sprintf("fake-sub-%d-jkl", timestamp%888888),
		fmt.Sprintf("test-random-%d-mno", timestamp%777777),
	}
}
|
||||
|
||||
// Detect performs comprehensive wildcard detection on a domain
|
||||
func (wd *WildcardDetector) Detect(domain string) *WildcardInfo {
|
||||
info := &WildcardInfo{
|
||||
WildcardIPs: make([]string, 0),
|
||||
}
|
||||
|
||||
// Phase 1: DNS-based detection (multiple random subdomains)
|
||||
ipCounts := make(map[string]int)
|
||||
var cnames []string
|
||||
|
||||
for _, pattern := range wd.testSubdomains {
|
||||
testDomain := fmt.Sprintf("%s.%s", pattern, domain)
|
||||
|
||||
// Resolve A records
|
||||
ips := ResolveSubdomain(testDomain, wd.resolvers, wd.timeout)
|
||||
for _, ip := range ips {
|
||||
ipCounts[ip]++
|
||||
}
|
||||
|
||||
// Resolve CNAME
|
||||
cname := ResolveCNAME(testDomain, wd.resolvers, wd.timeout)
|
||||
if cname != "" {
|
||||
cnames = append(cnames, cname)
|
||||
}
|
||||
}
|
||||
|
||||
// Analyze DNS results
|
||||
totalTests := len(wd.testSubdomains)
|
||||
for ip, count := range ipCounts {
|
||||
// If same IP appears in >= 60% of tests, it's likely wildcard
|
||||
if float64(count)/float64(totalTests) >= 0.6 {
|
||||
info.WildcardIPs = append(info.WildcardIPs, ip)
|
||||
}
|
||||
}
|
||||
|
||||
// Check CNAME consistency
|
||||
if len(cnames) > 0 && allEqual(cnames) {
|
||||
info.WildcardCNAME = cnames[0]
|
||||
}
|
||||
|
||||
// If no DNS wildcard detected, we're done
|
||||
if len(info.WildcardIPs) == 0 && info.WildcardCNAME == "" {
|
||||
info.IsWildcard = false
|
||||
info.Confidence = 0.95 // High confidence no wildcard
|
||||
return info
|
||||
}
|
||||
|
||||
// Phase 2: HTTP-based validation (if DNS wildcard detected)
|
||||
if len(info.WildcardIPs) > 0 {
|
||||
httpResults := wd.validateHTTP(domain)
|
||||
info.HTTPStatusCode = httpResults.statusCode
|
||||
info.HTTPBodyHash = httpResults.bodyHash
|
||||
info.HTTPBodySize = httpResults.bodySize
|
||||
|
||||
// Calculate confidence based on HTTP consistency
|
||||
if httpResults.consistent {
|
||||
info.Confidence = 0.95 // Very confident it's a wildcard
|
||||
} else {
|
||||
info.Confidence = 0.7 // DNS wildcard but inconsistent HTTP
|
||||
}
|
||||
}
|
||||
|
||||
info.IsWildcard = true
|
||||
sort.Strings(info.WildcardIPs)
|
||||
|
||||
return info
|
||||
}
|
||||
|
||||
type httpValidationResult struct {
|
||||
statusCode int
|
||||
bodyHash string
|
||||
bodySize int64
|
||||
consistent bool
|
||||
}
|
||||
|
||||
// validateHTTP checks if random subdomains return consistent HTTP responses
|
||||
func (wd *WildcardDetector) validateHTTP(domain string) httpValidationResult {
|
||||
result := httpValidationResult{}
|
||||
|
||||
var statusCodes []int
|
||||
var bodySizes []int64
|
||||
var bodyHashes []string
|
||||
|
||||
// Test 3 random subdomains via HTTP
|
||||
for i := 0; i < 3; i++ {
|
||||
testDomain := fmt.Sprintf("%s.%s", wd.testSubdomains[i], domain)
|
||||
|
||||
for _, scheme := range []string{"https", "http"} {
|
||||
url := fmt.Sprintf("%s://%s", scheme, testDomain)
|
||||
resp, err := wd.httpClient.Get(url)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
statusCodes = append(statusCodes, resp.StatusCode)
|
||||
|
||||
// Read body (limited)
|
||||
body, _ := io.ReadAll(io.LimitReader(resp.Body, 50000))
|
||||
resp.Body.Close()
|
||||
|
||||
bodySizes = append(bodySizes, int64(len(body)))
|
||||
bodyHashes = append(bodyHashes, fmt.Sprintf("%x", md5.Sum(body)))
|
||||
break // Only need one successful scheme
|
||||
}
|
||||
}
|
||||
|
||||
if len(statusCodes) == 0 {
|
||||
return result
|
||||
}
|
||||
|
||||
// Check consistency
|
||||
result.statusCode = statusCodes[0]
|
||||
if len(bodySizes) > 0 {
|
||||
result.bodySize = bodySizes[0]
|
||||
}
|
||||
if len(bodyHashes) > 0 {
|
||||
result.bodyHash = bodyHashes[0]
|
||||
}
|
||||
|
||||
// Check if all results are consistent (same status and similar size)
|
||||
result.consistent = allEqualInts(statusCodes) && similarSizes(bodySizes)
|
||||
|
||||
return result
|
||||
}
|
||||
|
||||
// IsWildcardIP checks if an IP is a known wildcard IP for this domain
|
||||
func (wd *WildcardDetector) IsWildcardIP(ip string, wildcardInfo *WildcardInfo) bool {
|
||||
if wildcardInfo == nil || !wildcardInfo.IsWildcard {
|
||||
return false
|
||||
}
|
||||
|
||||
for _, wip := range wildcardInfo.WildcardIPs {
|
||||
if ip == wip {
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
||||
|
||||
// IsWildcardResponse checks if an HTTP response matches wildcard pattern
|
||||
func (wd *WildcardDetector) IsWildcardResponse(statusCode int, bodySize int64, wildcardInfo *WildcardInfo) bool {
|
||||
if wildcardInfo == nil || !wildcardInfo.IsWildcard {
|
||||
return false
|
||||
}
|
||||
|
||||
// Check status code match
|
||||
if wildcardInfo.HTTPStatusCode != 0 && statusCode != wildcardInfo.HTTPStatusCode {
|
||||
return false
|
||||
}
|
||||
|
||||
// Check body size similarity (within 10%)
|
||||
if wildcardInfo.HTTPBodySize > 0 {
|
||||
ratio := float64(bodySize) / float64(wildcardInfo.HTTPBodySize)
|
||||
if ratio < 0.9 || ratio > 1.1 {
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
// Helper functions
|
||||
|
||||
func allEqual(strs []string) bool {
|
||||
if len(strs) == 0 {
|
||||
return true
|
||||
}
|
||||
first := strs[0]
|
||||
for _, s := range strs[1:] {
|
||||
if s != first {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
func allEqualInts(ints []int) bool {
|
||||
if len(ints) == 0 {
|
||||
return true
|
||||
}
|
||||
first := ints[0]
|
||||
for _, i := range ints[1:] {
|
||||
if i != first {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
func similarSizes(sizes []int64) bool {
|
||||
if len(sizes) < 2 {
|
||||
return true
|
||||
}
|
||||
|
||||
// Find min and max
|
||||
min, max := sizes[0], sizes[0]
|
||||
for _, s := range sizes[1:] {
|
||||
if s < min {
|
||||
min = s
|
||||
}
|
||||
if s > max {
|
||||
max = s
|
||||
}
|
||||
}
|
||||
|
||||
// Allow 20% variance
|
||||
if min == 0 {
|
||||
return max < 100 // Small empty responses
|
||||
}
|
||||
return float64(max)/float64(min) <= 1.2
|
||||
}
|
||||
|
||||
// FilterWildcardSubdomains removes subdomains that match wildcard pattern
|
||||
func FilterWildcardSubdomains(subdomains []string, domain string, resolvers []string, timeout int) (filtered []string, wildcardInfo *WildcardInfo) {
|
||||
detector := NewWildcardDetector(resolvers, timeout)
|
||||
wildcardInfo = detector.Detect(domain)
|
||||
|
||||
if !wildcardInfo.IsWildcard {
|
||||
return subdomains, wildcardInfo
|
||||
}
|
||||
|
||||
// Filter out subdomains that resolve to wildcard IPs
|
||||
filtered = make([]string, 0, len(subdomains))
|
||||
wildcardIPSet := make(map[string]bool)
|
||||
for _, ip := range wildcardInfo.WildcardIPs {
|
||||
wildcardIPSet[ip] = true
|
||||
}
|
||||
|
||||
for _, subdomain := range subdomains {
|
||||
ips := ResolveSubdomain(subdomain, resolvers, timeout)
|
||||
|
||||
// Check if all IPs are wildcard IPs
|
||||
allWildcard := true
|
||||
for _, ip := range ips {
|
||||
if !wildcardIPSet[ip] {
|
||||
allWildcard = false
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
// Keep if not all IPs are wildcards, or if no IPs resolved
|
||||
if !allWildcard || len(ips) == 0 {
|
||||
filtered = append(filtered, subdomain)
|
||||
}
|
||||
}
|
||||
|
||||
return filtered, wildcardInfo
|
||||
}
|
||||
|
||||
// GetWildcardSummary returns a human-readable summary of wildcard detection
|
||||
func (wi *WildcardInfo) GetSummary() string {
|
||||
if !wi.IsWildcard {
|
||||
return "No wildcard DNS detected"
|
||||
}
|
||||
|
||||
var parts []string
|
||||
parts = append(parts, "Wildcard DNS DETECTED")
|
||||
|
||||
if len(wi.WildcardIPs) > 0 {
|
||||
ips := wi.WildcardIPs
|
||||
if len(ips) > 3 {
|
||||
ips = ips[:3]
|
||||
}
|
||||
parts = append(parts, fmt.Sprintf("IPs: %s", strings.Join(ips, ", ")))
|
||||
}
|
||||
|
||||
if wi.WildcardCNAME != "" {
|
||||
parts = append(parts, fmt.Sprintf("CNAME: %s", wi.WildcardCNAME))
|
||||
}
|
||||
|
||||
if wi.HTTPStatusCode > 0 {
|
||||
parts = append(parts, fmt.Sprintf("HTTP: %d (%dB)", wi.HTTPStatusCode, wi.HTTPBodySize))
|
||||
}
|
||||
|
||||
parts = append(parts, fmt.Sprintf("Confidence: %.0f%%", wi.Confidence*100))
|
||||
|
||||
return strings.Join(parts, " | ")
|
||||
}
|
||||
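The body-size consistency heuristic above (sizes within 20% of each other count as "the same" wildcard page) can be exercised in isolation. This is a standalone sketch that copies `similarSizes` out of the package so it can run on its own; the function name and thresholds are taken from the file above:

```go
package main

import "fmt"

// similarSizes reports whether all sizes fall within 20% of each other,
// mirroring the wildcard detector's body-size consistency check above.
func similarSizes(sizes []int64) bool {
	if len(sizes) < 2 {
		return true
	}
	min, max := sizes[0], sizes[0]
	for _, s := range sizes[1:] {
		if s < min {
			min = s
		}
		if s > max {
			max = s
		}
	}
	if min == 0 {
		return max < 100 // treat tiny/empty responses as consistent
	}
	return float64(max)/float64(min) <= 1.2
}

func main() {
	fmt.Println(similarSizes([]int64{1000, 1100, 1150})) // max/min = 1.15, within 20%
	fmt.Println(similarSizes([]int64{1000, 2000}))       // max/min = 2.0, too far apart
}
```

Note that the threshold is a ratio, not an absolute delta, so the check scales from tiny error pages to large catch-all landing pages.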
```diff
@@ -3,6 +3,7 @@ package http
 import (
 	"crypto/tls"
 	"net/http"
+	"sync"
 	"time"
 )
```
```diff
@@ -25,3 +26,44 @@ func GetSharedClient(timeout int) *http.Client {
 		},
 	}
 }
+
+// UserAgentManager handles User-Agent rotation
+type UserAgentManager struct {
+	agents []string
+	index  int
+	mu     sync.Mutex
+}
+
+var defaultUAManager = &UserAgentManager{
+	agents: []string{
+		"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
+		"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
+		"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:121.0) Gecko/20100101 Firefox/121.0",
+		"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:121.0) Gecko/20100101 Firefox/121.0",
+		"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.2 Safari/605.1.15",
+		"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36 Edg/120.0.0.0",
+	},
+}
+
+// GetUserAgent returns the next User-Agent in rotation
+func GetUserAgent() string {
+	defaultUAManager.mu.Lock()
+	defer defaultUAManager.mu.Unlock()
+
+	ua := defaultUAManager.agents[defaultUAManager.index]
+	defaultUAManager.index = (defaultUAManager.index + 1) % len(defaultUAManager.agents)
+	return ua
+}
+
+// NewRequestWithUA creates an HTTP request with a rotated User-Agent
+func NewRequestWithUA(method, url string) (*http.Request, error) {
+	req, err := http.NewRequest(method, url, nil)
+	if err != nil {
+		return nil, err
+	}
+	req.Header.Set("User-Agent", GetUserAgent())
+	req.Header.Set("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8")
+	req.Header.Set("Accept-Language", "en-US,en;q=0.5")
+	req.Header.Set("Connection", "keep-alive")
+	return req, nil
+}
```
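The mutex-guarded round-robin above is easy to sketch in isolation. `rotator` here is a hypothetical stand-in for `UserAgentManager`, with letters instead of real User-Agent strings, just to show the rotation order and the wrap-around:

```go
package main

import (
	"fmt"
	"sync"
)

// rotator is a minimal, hypothetical stand-in for UserAgentManager:
// a locked index walks the slice and wraps with modulo arithmetic.
type rotator struct {
	agents []string
	index  int
	mu     sync.Mutex
}

func (r *rotator) next() string {
	r.mu.Lock()
	defer r.mu.Unlock()
	ua := r.agents[r.index]
	r.index = (r.index + 1) % len(r.agents)
	return ua
}

func main() {
	r := &rotator{agents: []string{"A", "B", "C"}}
	for i := 0; i < 4; i++ {
		fmt.Print(r.next())
	}
	fmt.Println() // prints ABCA: the fourth call wraps back to the first agent
}
```

The mutex matters because many probe goroutines share one package-level manager; without it the index increment would race.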
```diff
@@ -15,17 +15,8 @@ import (
 func ProbeHTTP(subdomain string, timeout int) *config.SubdomainResult {
 	result := &config.SubdomainResult{}
 
-	transport := &http.Transport{
-		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
-	}
-
-	client := &http.Client{
-		Timeout:   time.Duration(timeout) * time.Second,
-		Transport: transport,
-		CheckRedirect: func(req *http.Request, via []*http.Request) error {
-			return http.ErrUseLastResponse
-		},
-	}
+	// Use shared transport for connection pooling
+	client := GetSharedClient(timeout)
 
 	urls := []string{
 		fmt.Sprintf("https://%s", subdomain),
```
```diff
@@ -78,6 +69,19 @@ func ProbeHTTP(subdomain string, timeout int) *config.SubdomainResult {
 		case tls.VersionTLS10:
 			result.TLSVersion = "TLS 1.0"
 		}
+
+		// Check for self-signed certificate
+		result.TLSSelfSigned = IsSelfSigned(cert)
+
+		// Analyze certificate for appliance fingerprinting
+		// This is especially useful for self-signed certs (firewalls, VPNs, etc.)
+		if fp := AnalyzeTLSCertificate(cert); fp != nil {
+			result.TLSFingerprint = fp
+			// Add vendor/product to tech stack if detected
+			if fp.Vendor != "" && fp.Product != "" {
+				result.Tech = append(result.Tech, fp.Vendor+" "+fp.Product)
+			}
+		}
 	}
 
 	// Interesting headers
```
```diff
@@ -117,27 +121,47 @@ func ProbeHTTP(subdomain string, timeout int) *config.SubdomainResult {
 
 	// Detect technologies
 	bodyStr := string(body)
-	if strings.Contains(bodyStr, "wp-content") || strings.Contains(bodyStr, "wordpress") {
+	bodyStrLower := strings.ToLower(bodyStr)
+
+	// WordPress - specific patterns
+	if strings.Contains(bodyStr, "wp-content") || strings.Contains(bodyStr, "wp-includes") {
 		result.Tech = append(result.Tech, "WordPress")
 	}
-	if strings.Contains(bodyStr, "_next") || strings.Contains(bodyStr, "Next.js") {
+	// Next.js - specific patterns (check before React since Next uses React)
+	if strings.Contains(bodyStr, "/_next/") || strings.Contains(bodyStr, "__NEXT_DATA__") {
 		result.Tech = append(result.Tech, "Next.js")
-	}
-	if strings.Contains(bodyStr, "react") || strings.Contains(bodyStr, "React") {
+	} else if strings.Contains(bodyStr, "react-root") || strings.Contains(bodyStr, "data-reactroot") ||
+		strings.Contains(bodyStr, "__REACT_DEVTOOLS_GLOBAL_HOOK__") {
+		// React - only if not Next.js
 		result.Tech = append(result.Tech, "React")
 	}
-	if strings.Contains(bodyStr, "laravel") || strings.Contains(bodyStr, "Laravel") {
+	// Laravel - specific patterns
+	if strings.Contains(bodyStr, "laravel_session") || strings.Contains(bodyStr, "XSRF-TOKEN") {
 		result.Tech = append(result.Tech, "Laravel")
 	}
-	if strings.Contains(bodyStr, "django") || strings.Contains(bodyStr, "Django") {
+	// Django - specific patterns
+	if strings.Contains(bodyStr, "csrfmiddlewaretoken") || strings.Contains(bodyStrLower, "django") {
 		result.Tech = append(result.Tech, "Django")
 	}
-	if strings.Contains(bodyStr, "angular") || strings.Contains(bodyStr, "ng-") {
+	// Angular - more specific patterns (ng-app, ng-controller are Angular 1.x specific)
+	if strings.Contains(bodyStr, "ng-app") || strings.Contains(bodyStr, "ng-controller") ||
+		strings.Contains(bodyStr, "ng-version") || strings.Contains(bodyStrLower, "angular.js") ||
+		strings.Contains(bodyStrLower, "angular.min.js") || strings.Contains(bodyStr, "@angular/core") {
 		result.Tech = append(result.Tech, "Angular")
 	}
-	if strings.Contains(bodyStr, "vue") || strings.Contains(bodyStr, "Vue.js") {
+	// Vue.js - specific patterns
+	if strings.Contains(bodyStr, "data-v-") || strings.Contains(bodyStr, "__VUE__") ||
+		strings.Contains(bodyStr, "vue.js") || strings.Contains(bodyStr, "vue.min.js") {
 		result.Tech = append(result.Tech, "Vue.js")
 	}
+	// Svelte
+	if strings.Contains(bodyStr, "svelte") && strings.Contains(bodyStr, "__svelte") {
+		result.Tech = append(result.Tech, "Svelte")
+	}
+	// Nuxt.js (Vue-based)
+	if strings.Contains(bodyStr, "__NUXT__") || strings.Contains(bodyStr, "_nuxt/") {
+		result.Tech = append(result.Tech, "Nuxt.js")
+	}
 }
 
 break
```
internal/http/tls_analyzer.go (new file, 487 lines)
@@ -0,0 +1,487 @@
```go
package http

import (
	"crypto/x509"
	"regexp"
	"strings"

	"god-eye/internal/config"
)

// AppliancePattern defines a pattern to match vendor/product from certificate fields
type AppliancePattern struct {
	Vendor        string
	Product       string
	ApplianceType string // firewall, vpn, loadbalancer, proxy, waf, router, appliance
	// Match patterns (any match triggers detection)
	SubjectCNPatterns  []string
	SubjectOrgPatterns []string
	SubjectOUPatterns  []string
	IssuerCNPatterns   []string
	IssuerOrgPatterns  []string
	// Version extraction regex (optional)
	VersionRegex string
}

// appliancePatterns contains known signatures for security appliances
var appliancePatterns = []AppliancePattern{
	// Fortinet FortiGate
	{
		Vendor:        "Fortinet",
		Product:       "FortiGate",
		ApplianceType: "firewall",
		SubjectCNPatterns: []string{
			"FortiGate", "FGT", "fortinet", "FGVM",
		},
		SubjectOrgPatterns: []string{"Fortinet"},
		IssuerCNPatterns:   []string{"FortiGate", "Fortinet"},
		IssuerOrgPatterns:  []string{"Fortinet"},
		VersionRegex:       `(?i)(?:FortiGate|FGT|FGVM)[_-]?(\d+[A-Z]?)`,
	},
	// Palo Alto Networks
	{
		Vendor:        "Palo Alto Networks",
		Product:       "PAN-OS",
		ApplianceType: "firewall",
		SubjectCNPatterns: []string{
			"PA-", "Palo Alto", "PAN-OS", "paloaltonetworks",
		},
		SubjectOrgPatterns: []string{"Palo Alto Networks", "paloaltonetworks"},
		IssuerOrgPatterns:  []string{"Palo Alto Networks"},
		VersionRegex:       `(?i)PA-(\d+)`,
	},
	// Cisco ASA
	{
		Vendor:        "Cisco",
		Product:       "ASA",
		ApplianceType: "firewall",
		SubjectCNPatterns: []string{
			"ASA", "Cisco ASA", "adaptive security",
		},
		SubjectOrgPatterns: []string{"Cisco"},
		IssuerOrgPatterns:  []string{"Cisco"},
		VersionRegex:       `(?i)ASA[_-]?(\d+)`,
	},
	// Cisco Firepower
	{
		Vendor:        "Cisco",
		Product:       "Firepower",
		ApplianceType: "firewall",
		SubjectCNPatterns: []string{
			"Firepower", "FTD", "FMC",
		},
		SubjectOrgPatterns: []string{"Cisco"},
	},
	// SonicWall
	{
		Vendor:        "SonicWall",
		Product:       "SonicWall",
		ApplianceType: "firewall",
		SubjectCNPatterns: []string{
			"SonicWall", "sonicwall", "SonicOS", "NSA", "TZ",
		},
		SubjectOrgPatterns: []string{"SonicWall", "SonicWALL"},
		IssuerOrgPatterns:  []string{"SonicWall", "SonicWALL"},
		VersionRegex:       `(?i)(?:NSA|TZ)[\s-]?(\d+)`,
	},
	// Check Point
	{
		Vendor:        "Check Point",
		Product:       "Gaia",
		ApplianceType: "firewall",
		SubjectCNPatterns: []string{
			"Check Point", "checkpoint", "Gaia", "SmartCenter",
		},
		SubjectOrgPatterns: []string{"Check Point"},
		IssuerOrgPatterns:  []string{"Check Point"},
	},
	// F5 BIG-IP
	{
		Vendor:        "F5",
		Product:       "BIG-IP",
		ApplianceType: "loadbalancer",
		SubjectCNPatterns: []string{
			"BIG-IP", "BIGIP", "F5 Networks", "f5.com",
		},
		SubjectOrgPatterns: []string{"F5 Networks", "F5, Inc"},
		IssuerOrgPatterns:  []string{"F5 Networks", "F5, Inc"},
		VersionRegex:       `(?i)BIG-IP\s+(\d+\.\d+)`,
	},
	// Citrix NetScaler / ADC
	{
		Vendor:        "Citrix",
		Product:       "NetScaler",
		ApplianceType: "loadbalancer",
		SubjectCNPatterns: []string{
			"NetScaler", "Citrix ADC", "ns.citrix", "citrix.com",
		},
		SubjectOrgPatterns: []string{"Citrix"},
		IssuerOrgPatterns:  []string{"Citrix"},
	},
	// Juniper
	{
		Vendor:        "Juniper",
		Product:       "Junos",
		ApplianceType: "firewall",
		SubjectCNPatterns: []string{
			"Juniper", "JunOS", "SRX", "juniper.net",
		},
		SubjectOrgPatterns: []string{"Juniper Networks"},
		IssuerOrgPatterns:  []string{"Juniper Networks"},
		VersionRegex:       `(?i)SRX[_-]?(\d+)`,
	},
	// Barracuda
	{
		Vendor:        "Barracuda",
		Product:       "Barracuda",
		ApplianceType: "waf",
		SubjectCNPatterns: []string{
			"Barracuda", "barracuda", "cudatel",
		},
		SubjectOrgPatterns: []string{"Barracuda Networks"},
		IssuerOrgPatterns:  []string{"Barracuda Networks"},
	},
	// pfSense
	{
		Vendor:        "Netgate",
		Product:       "pfSense",
		ApplianceType: "firewall",
		SubjectCNPatterns: []string{
			"pfSense", "pfsense", "Netgate",
		},
		SubjectOrgPatterns: []string{"pfSense", "Netgate"},
	},
	// OPNsense
	{
		Vendor:        "Deciso",
		Product:       "OPNsense",
		ApplianceType: "firewall",
		SubjectCNPatterns: []string{
			"OPNsense", "opnsense",
		},
	},
	// WatchGuard
	{
		Vendor:        "WatchGuard",
		Product:       "Firebox",
		ApplianceType: "firewall",
		SubjectCNPatterns: []string{
			"WatchGuard", "Firebox", "watchguard",
		},
		SubjectOrgPatterns: []string{"WatchGuard"},
	},
	// Sophos
	{
		Vendor:        "Sophos",
		Product:       "XG Firewall",
		ApplianceType: "firewall",
		SubjectCNPatterns: []string{
			"Sophos", "sophos", "XG Firewall", "Cyberoam",
		},
		SubjectOrgPatterns: []string{"Sophos"},
	},
	// Ubiquiti
	{
		Vendor:        "Ubiquiti",
		Product:       "UniFi",
		ApplianceType: "appliance",
		SubjectCNPatterns: []string{
			"Ubiquiti", "UniFi", "UBNT", "ubnt.com",
		},
		SubjectOrgPatterns: []string{"Ubiquiti"},
	},
	// MikroTik
	{
		Vendor:        "MikroTik",
		Product:       "RouterOS",
		ApplianceType: "router",
		SubjectCNPatterns: []string{
			"MikroTik", "mikrotik", "RouterOS",
		},
		SubjectOrgPatterns: []string{"MikroTik"},
	},
	// OpenVPN
	{
		Vendor:        "OpenVPN",
		Product:       "OpenVPN AS",
		ApplianceType: "vpn",
		SubjectCNPatterns: []string{
			"OpenVPN", "openvpn",
		},
		SubjectOrgPatterns: []string{"OpenVPN"},
	},
	// Pulse Secure / Ivanti
	{
		Vendor:        "Pulse Secure",
		Product:       "Pulse Connect Secure",
		ApplianceType: "vpn",
		SubjectCNPatterns: []string{
			"Pulse Secure", "pulse", "Ivanti",
		},
		SubjectOrgPatterns: []string{"Pulse Secure", "Ivanti"},
	},
	// GlobalProtect (Palo Alto VPN)
	{
		Vendor:        "Palo Alto Networks",
		Product:       "GlobalProtect",
		ApplianceType: "vpn",
		SubjectCNPatterns: []string{
			"GlobalProtect", "globalprotect",
		},
	},
	// Cisco AnyConnect
	{
		Vendor:        "Cisco",
		Product:       "AnyConnect",
		ApplianceType: "vpn",
		SubjectCNPatterns: []string{
			"AnyConnect", "anyconnect",
		},
		SubjectOrgPatterns: []string{"Cisco"},
	},
	// VMware NSX / vSphere
	{
		Vendor:        "VMware",
		Product:       "NSX",
		ApplianceType: "appliance",
		SubjectCNPatterns: []string{
			"NSX", "vSphere", "VMware", "vcenter",
		},
		SubjectOrgPatterns: []string{"VMware"},
	},
	// Imperva / Incapsula
	{
		Vendor:        "Imperva",
		Product:       "WAF",
		ApplianceType: "waf",
		SubjectCNPatterns: []string{
			"Imperva", "Incapsula",
		},
		SubjectOrgPatterns: []string{"Imperva", "Incapsula"},
	},
	// HAProxy
	{
		Vendor:        "HAProxy",
		Product:       "HAProxy",
		ApplianceType: "loadbalancer",
		SubjectCNPatterns: []string{
			"HAProxy", "haproxy",
		},
	},
	// NGINX Plus
	{
		Vendor:        "NGINX",
		Product:       "NGINX Plus",
		ApplianceType: "loadbalancer",
		SubjectCNPatterns: []string{
			"NGINX", "nginx.com",
		},
		SubjectOrgPatterns: []string{"NGINX", "F5 NGINX"},
	},
	// Kemp LoadMaster
	{
		Vendor:        "Kemp",
		Product:       "LoadMaster",
		ApplianceType: "loadbalancer",
		SubjectCNPatterns: []string{
			"Kemp", "LoadMaster",
		},
		SubjectOrgPatterns: []string{"Kemp Technologies"},
	},
	// Zyxel
	{
		Vendor:        "Zyxel",
		Product:       "USG",
		ApplianceType: "firewall",
		SubjectCNPatterns: []string{
			"Zyxel", "zyxel", "USG",
		},
		SubjectOrgPatterns: []string{"Zyxel"},
	},
	// DrayTek
	{
		Vendor:        "DrayTek",
		Product:       "Vigor",
		ApplianceType: "router",
		SubjectCNPatterns: []string{
			"DrayTek", "Vigor", "draytek",
		},
		SubjectOrgPatterns: []string{"DrayTek"},
	},
}

// AnalyzeTLSCertificate analyzes a TLS certificate for appliance fingerprinting
func AnalyzeTLSCertificate(cert *x509.Certificate) *config.TLSFingerprint {
	if cert == nil {
		return nil
	}

	fp := &config.TLSFingerprint{
		SubjectCN:    cert.Subject.CommonName,
		SerialNumber: cert.SerialNumber.String(),
	}

	// Extract organization info
	if len(cert.Subject.Organization) > 0 {
		fp.SubjectOrg = strings.Join(cert.Subject.Organization, ", ")
	}
	if len(cert.Subject.OrganizationalUnit) > 0 {
		fp.SubjectOU = strings.Join(cert.Subject.OrganizationalUnit, ", ")
	}
	if len(cert.Issuer.CommonName) > 0 {
		fp.IssuerCN = cert.Issuer.CommonName
	}
	if len(cert.Issuer.Organization) > 0 {
		fp.IssuerOrg = strings.Join(cert.Issuer.Organization, ", ")
	}

	// Extract internal hostnames from DNS names
	for _, name := range cert.DNSNames {
		if isInternalHostname(name) {
			fp.InternalHosts = append(fp.InternalHosts, name)
		}
	}

	// Try to match against known appliance patterns
	matchAppliance(fp, cert)

	// Only return if we found something interesting
	if fp.Vendor != "" || len(fp.InternalHosts) > 0 || fp.SubjectOrg != "" {
		return fp
	}

	return nil
}

// matchAppliance tries to identify the appliance vendor/product
func matchAppliance(fp *config.TLSFingerprint, cert *x509.Certificate) {
	subjectCN := strings.ToLower(cert.Subject.CommonName)
	subjectOrg := strings.ToLower(strings.Join(cert.Subject.Organization, " "))
	subjectOU := strings.ToLower(strings.Join(cert.Subject.OrganizationalUnit, " "))
	issuerCN := strings.ToLower(cert.Issuer.CommonName)
	issuerOrg := strings.ToLower(strings.Join(cert.Issuer.Organization, " "))

	for _, pattern := range appliancePatterns {
		matched := false

		// Check Subject CN
		for _, p := range pattern.SubjectCNPatterns {
			if strings.Contains(subjectCN, strings.ToLower(p)) {
				matched = true
				break
			}
		}

		// Check Subject Organization
		if !matched {
			for _, p := range pattern.SubjectOrgPatterns {
				if strings.Contains(subjectOrg, strings.ToLower(p)) {
					matched = true
					break
				}
			}
		}

		// Check Subject OU
		if !matched {
			for _, p := range pattern.SubjectOUPatterns {
				if strings.Contains(subjectOU, strings.ToLower(p)) {
					matched = true
					break
				}
			}
		}

		// Check Issuer CN
		if !matched {
			for _, p := range pattern.IssuerCNPatterns {
				if strings.Contains(issuerCN, strings.ToLower(p)) {
					matched = true
					break
				}
			}
		}

		// Check Issuer Organization
		if !matched {
			for _, p := range pattern.IssuerOrgPatterns {
				if strings.Contains(issuerOrg, strings.ToLower(p)) {
					matched = true
					break
				}
			}
		}

		if matched {
			fp.Vendor = pattern.Vendor
			fp.Product = pattern.Product
			fp.ApplianceType = pattern.ApplianceType

			// Try to extract version
			if pattern.VersionRegex != "" {
				re := regexp.MustCompile(pattern.VersionRegex)
				// Check all relevant fields
				for _, field := range []string{cert.Subject.CommonName, cert.Issuer.CommonName} {
					if matches := re.FindStringSubmatch(field); len(matches) > 1 {
						fp.Version = matches[1]
						break
					}
				}
			}
			return
		}
	}
}

// isInternalHostname checks if a hostname looks like an internal name
func isInternalHostname(name string) bool {
	name = strings.ToLower(name)

	// Common internal TLDs
	internalTLDs := []string{
		".local", ".internal", ".lan", ".corp", ".home",
		".intranet", ".private", ".localdomain",
	}
	for _, tld := range internalTLDs {
		if strings.HasSuffix(name, tld) {
			return true
		}
	}

	// Internal hostname patterns
	internalPatterns := []string{
		"localhost", "fw-", "firewall", "vpn-", "gw-", "gateway",
		"proxy-", "lb-", "router", "switch", "core-", "dc-",
		"srv-", "server-", "host-", "node-", "mgmt", "management",
		"admin-", "internal-", "private-", "corp-", "office-",
	}
	for _, pattern := range internalPatterns {
		if strings.Contains(name, pattern) {
			return true
		}
	}

	// IP-like patterns in hostname
	ipPattern := regexp.MustCompile(`\d{1,3}[.-]\d{1,3}[.-]\d{1,3}[.-]\d{1,3}`)
	if ipPattern.MatchString(name) {
		return true
	}

	return false
}

// IsSelfSigned checks if a certificate is self-signed
func IsSelfSigned(cert *x509.Certificate) bool {
	if cert == nil {
		return false
	}

	// Check if issuer equals subject
	if cert.Issuer.String() == cert.Subject.String() {
		return true
	}

	// Check if it's self-signed by verifying against its own public key
	err := cert.CheckSignatureFrom(cert)
	return err == nil
}
```
internal/output/json.go (new file, 338 lines)
@@ -0,0 +1,338 @@
```go
package output

import (
	"encoding/json"
	"io"
	"sort"
	"time"

	"god-eye/internal/config"
)

// ScanReport represents the complete JSON output structure
type ScanReport struct {
	// Metadata
	Meta ScanMeta `json:"meta"`

	// Statistics
	Stats ScanStats `json:"stats"`

	// Results
	Subdomains []*config.SubdomainResult `json:"subdomains"`

	// Wildcard info (if detected)
	Wildcard *WildcardReport `json:"wildcard,omitempty"`

	// Findings summary
	Findings FindingsSummary `json:"findings"`
}

// ScanMeta contains metadata about the scan
type ScanMeta struct {
	Version     string      `json:"version"`
	ToolName    string      `json:"tool_name"`
	Target      string      `json:"target"`
	StartTime   time.Time   `json:"start_time"`
	EndTime     time.Time   `json:"end_time"`
	Duration    string      `json:"duration"`
	DurationMs  int64       `json:"duration_ms"`
	Concurrency int         `json:"concurrency"`
	Timeout     int         `json:"timeout"`
	Options     ScanOptions `json:"options"`
}

// ScanOptions contains the scan configuration
type ScanOptions struct {
	BruteForce      bool   `json:"brute_force"`
	HTTPProbe       bool   `json:"http_probe"`
	PortScan        bool   `json:"port_scan"`
	TakeoverCheck   bool   `json:"takeover_check"`
	AIAnalysis      bool   `json:"ai_analysis"`
	OnlyActive      bool   `json:"only_active"`
	CustomWordlist  bool   `json:"custom_wordlist"`
	CustomResolvers bool   `json:"custom_resolvers"`
	CustomPorts     string `json:"custom_ports,omitempty"`
}

// ScanStats contains scan statistics
type ScanStats struct {
	TotalSubdomains    int `json:"total_subdomains"`
	ActiveSubdomains   int `json:"active_subdomains"`
	InactiveSubdomains int `json:"inactive_subdomains"`
	WithIPs            int `json:"with_ips"`
	WithHTTP           int `json:"with_http"`
	WithHTTPS          int `json:"with_https"`
	WithPorts          int `json:"with_ports"`
	TakeoverVulnerable int `json:"takeover_vulnerable"`
	Vulnerabilities    int `json:"vulnerabilities"`
	CloudHosted        int `json:"cloud_hosted"`
	AIFindings         int `json:"ai_findings"`
	CVEFindings        int `json:"cve_findings"`
	PassiveSources     int `json:"passive_sources"`
	BruteForceFound    int `json:"brute_force_found"`
}

// WildcardReport contains wildcard detection info
type WildcardReport struct {
	Detected   bool     `json:"detected"`
	IPs        []string `json:"ips,omitempty"`
	CNAME      string   `json:"cname,omitempty"`
	StatusCode int      `json:"status_code,omitempty"`
	Confidence float64  `json:"confidence,omitempty"`
}

// FindingsSummary categorizes findings by severity
type FindingsSummary struct {
	Critical []Finding `json:"critical,omitempty"`
	High     []Finding `json:"high,omitempty"`
	Medium   []Finding `json:"medium,omitempty"`
	Low      []Finding `json:"low,omitempty"`
	Info     []Finding `json:"info,omitempty"`
}

// Finding represents a single finding
type Finding struct {
	Subdomain   string `json:"subdomain"`
	Type        string `json:"type"`
	Description string `json:"description"`
	Evidence    string `json:"evidence,omitempty"`
}

// ReportBuilder helps construct the JSON report
type ReportBuilder struct {
	report    *ScanReport
	startTime time.Time
}

// NewReportBuilder creates a new report builder
func NewReportBuilder(domain string, cfg config.Config) *ReportBuilder {
	now := time.Now()
	return &ReportBuilder{
		startTime: now,
		report: &ScanReport{
			Meta: ScanMeta{
				Version:     "0.1",
				ToolName:    "God's Eye",
				Target:      domain,
				StartTime:   now,
				Concurrency: cfg.Concurrency,
				Timeout:     cfg.Timeout,
				Options: ScanOptions{
					BruteForce:     !cfg.NoBrute,
					HTTPProbe:      !cfg.NoProbe,
					PortScan:       !cfg.NoPorts,
					TakeoverCheck:  !cfg.NoTakeover,
					AIAnalysis:     cfg.EnableAI,
					OnlyActive:     cfg.OnlyActive,
					CustomWordlist: cfg.Wordlist != "",
```
CustomResolvers: cfg.Resolvers != "",
|
||||
CustomPorts: cfg.Ports,
|
||||
},
|
||||
},
|
||||
Stats: ScanStats{},
|
||||
Findings: FindingsSummary{
|
||||
Critical: []Finding{},
|
||||
High: []Finding{},
|
||||
Medium: []Finding{},
|
||||
Low: []Finding{},
|
||||
Info: []Finding{},
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
// SetWildcard sets wildcard detection info
|
||||
func (rb *ReportBuilder) SetWildcard(detected bool, ips []string, cname string, statusCode int, confidence float64) {
|
||||
rb.report.Wildcard = &WildcardReport{
|
||||
Detected: detected,
|
||||
IPs: ips,
|
||||
CNAME: cname,
|
||||
StatusCode: statusCode,
|
||||
Confidence: confidence,
|
||||
}
|
||||
}
|
||||
|
||||
// SetPassiveSources sets the number of passive sources used
|
||||
func (rb *ReportBuilder) SetPassiveSources(count int) {
|
||||
rb.report.Stats.PassiveSources = count
|
||||
}
|
||||
|
||||
// SetBruteForceFound sets the number of subdomains found via brute force
|
||||
func (rb *ReportBuilder) SetBruteForceFound(count int) {
|
||||
rb.report.Stats.BruteForceFound = count
|
||||
}
|
||||
|
||||
// Finalize completes the report with results and calculates stats
|
||||
func (rb *ReportBuilder) Finalize(results map[string]*config.SubdomainResult) *ScanReport {
|
||||
endTime := time.Now()
|
||||
duration := endTime.Sub(rb.startTime)
|
||||
|
||||
rb.report.Meta.EndTime = endTime
|
||||
rb.report.Meta.Duration = duration.String()
|
||||
rb.report.Meta.DurationMs = duration.Milliseconds()
|
||||
|
||||
// Sort subdomains
|
||||
var sortedSubs []string
|
||||
for sub := range results {
|
||||
sortedSubs = append(sortedSubs, sub)
|
||||
}
|
||||
sort.Strings(sortedSubs)
|
||||
|
||||
// Build results list and calculate stats
|
||||
rb.report.Subdomains = make([]*config.SubdomainResult, 0, len(results))
|
||||
for _, sub := range sortedSubs {
|
||||
r := results[sub]
|
||||
rb.report.Subdomains = append(rb.report.Subdomains, r)
|
||||
|
||||
// Calculate stats
|
||||
rb.report.Stats.TotalSubdomains++
|
||||
|
||||
if len(r.IPs) > 0 {
|
||||
rb.report.Stats.WithIPs++
|
||||
}
|
||||
|
||||
if r.StatusCode >= 200 && r.StatusCode < 400 {
|
||||
rb.report.Stats.ActiveSubdomains++
|
||||
} else if r.StatusCode >= 400 {
|
||||
rb.report.Stats.InactiveSubdomains++
|
||||
}
|
||||
|
||||
if r.TLSVersion != "" {
|
||||
rb.report.Stats.WithHTTPS++
|
||||
}
|
||||
if r.StatusCode > 0 {
|
||||
rb.report.Stats.WithHTTP++
|
||||
}
|
||||
|
||||
if len(r.Ports) > 0 {
|
||||
rb.report.Stats.WithPorts++
|
||||
}
|
||||
|
||||
if r.CloudProvider != "" {
|
||||
rb.report.Stats.CloudHosted++
|
||||
}
|
||||
|
||||
if r.Takeover != "" {
|
||||
rb.report.Stats.TakeoverVulnerable++
|
||||
rb.addFinding("critical", sub, "Subdomain Takeover", r.Takeover)
|
||||
}
|
||||
|
||||
// Count vulnerabilities
|
||||
vulnCount := 0
|
||||
if r.OpenRedirect {
|
||||
vulnCount++
|
||||
rb.addFinding("high", sub, "Open Redirect", "Vulnerable to open redirect attacks")
|
||||
}
|
||||
if r.CORSMisconfig != "" {
|
||||
vulnCount++
|
||||
rb.addFinding("medium", sub, "CORS Misconfiguration", r.CORSMisconfig)
|
||||
}
|
||||
if len(r.DangerousMethods) > 0 {
|
||||
vulnCount++
|
||||
rb.addFinding("medium", sub, "Dangerous HTTP Methods", join(r.DangerousMethods, ", "))
|
||||
}
|
||||
if r.GitExposed {
|
||||
vulnCount++
|
||||
rb.addFinding("high", sub, "Git Repository Exposed", ".git directory accessible")
|
||||
}
|
||||
if r.SvnExposed {
|
||||
vulnCount++
|
||||
rb.addFinding("high", sub, "SVN Repository Exposed", ".svn directory accessible")
|
||||
}
|
||||
if len(r.BackupFiles) > 0 {
|
||||
vulnCount++
|
||||
rb.addFinding("high", sub, "Backup Files Exposed", join(r.BackupFiles, ", "))
|
||||
}
|
||||
if len(r.JSSecrets) > 0 {
|
||||
vulnCount++
|
||||
for _, secret := range r.JSSecrets {
|
||||
rb.addFinding("high", sub, "Secret in JavaScript", secret)
|
||||
}
|
||||
}
|
||||
if vulnCount > 0 {
|
||||
rb.report.Stats.Vulnerabilities += vulnCount
|
||||
}
|
||||
|
||||
// AI findings
|
||||
if len(r.AIFindings) > 0 {
|
||||
rb.report.Stats.AIFindings += len(r.AIFindings)
|
||||
severity := "info"
|
||||
if r.AISeverity != "" {
|
||||
severity = r.AISeverity
|
||||
}
|
||||
for _, finding := range r.AIFindings {
|
||||
rb.addFinding(severity, sub, "AI Analysis", finding)
|
||||
}
|
||||
}
|
||||
|
||||
// CVE findings
|
||||
if len(r.CVEFindings) > 0 {
|
||||
rb.report.Stats.CVEFindings += len(r.CVEFindings)
|
||||
for _, cve := range r.CVEFindings {
|
||||
rb.addFinding("high", sub, "CVE Vulnerability", cve)
|
||||
}
|
||||
}
|
||||
|
||||
// Info findings
|
||||
if len(r.AdminPanels) > 0 {
|
||||
for _, panel := range r.AdminPanels {
|
||||
rb.addFinding("info", sub, "Admin Panel Found", panel)
|
||||
}
|
||||
}
|
||||
if len(r.APIEndpoints) > 0 {
|
||||
for _, endpoint := range r.APIEndpoints {
|
||||
rb.addFinding("info", sub, "API Endpoint Found", endpoint)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return rb.report
|
||||
}
|
||||
|
||||
// addFinding adds a finding to the appropriate severity category
|
||||
func (rb *ReportBuilder) addFinding(severity, subdomain, findingType, description string) {
|
||||
finding := Finding{
|
||||
Subdomain: subdomain,
|
||||
Type: findingType,
|
||||
Description: description,
|
||||
}
|
||||
|
||||
switch severity {
|
||||
case "critical":
|
||||
rb.report.Findings.Critical = append(rb.report.Findings.Critical, finding)
|
||||
case "high":
|
||||
rb.report.Findings.High = append(rb.report.Findings.High, finding)
|
||||
case "medium":
|
||||
rb.report.Findings.Medium = append(rb.report.Findings.Medium, finding)
|
||||
case "low":
|
||||
rb.report.Findings.Low = append(rb.report.Findings.Low, finding)
|
||||
default:
|
||||
rb.report.Findings.Info = append(rb.report.Findings.Info, finding)
|
||||
}
|
||||
}
|
||||
|
||||
// WriteJSON writes the report as JSON to a writer
|
||||
func (rb *ReportBuilder) WriteJSON(w io.Writer, indent bool) error {
|
||||
encoder := json.NewEncoder(w)
|
||||
if indent {
|
||||
encoder.SetIndent("", " ")
|
||||
}
|
||||
return encoder.Encode(rb.report)
|
||||
}
|
||||
|
||||
// GetReport returns the built report
|
||||
func (rb *ReportBuilder) GetReport() *ScanReport {
|
||||
return rb.report
|
||||
}
|
||||
|
||||
// Helper function
|
||||
func join(strs []string, sep string) string {
|
||||
if len(strs) == 0 {
|
||||
return ""
|
||||
}
|
||||
result := strs[0]
|
||||
for _, s := range strs[1:] {
|
||||
result += sep + s
|
||||
}
|
||||
return result
|
||||
}
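The `WriteJSON` method above streams the report through a `json.Encoder` with optional two-space indentation. A minimal self-contained sketch of that encode path, using a hypothetical stand-in struct instead of the real `ScanReport`:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// report is a hypothetical, minimal stand-in for ScanReport.
type report struct {
	Target string   `json:"target"`
	Subs   []string `json:"subdomains"`
}

// encodeReport mirrors WriteJSON: stream-encode with two-space indent.
// json.Encoder appends a trailing newline, unlike json.MarshalIndent.
func encodeReport(r report) string {
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.SetIndent("", "  ")
	enc.Encode(r)
	return buf.String()
}

func main() {
	fmt.Print(encodeReport(report{Target: "example.com", Subs: []string{"a.example.com"}}))
}
```

Using an `Encoder` rather than `MarshalIndent` lets the same code write directly to a file or stdout without buffering the whole report in memory first.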
@@ -43,31 +43,33 @@ var (

 func PrintBanner() {
 	fmt.Println()
-	fmt.Println(BoldCyan(" ██████╗ ██████╗ ██████╗ ") + BoldWhite("███████╗") + BoldCyan(" ███████╗██╗ ██╗███████╗"))
-	fmt.Println(BoldCyan(" ██╔════╝ ██╔═══██╗██╔══██╗") + BoldWhite("██╔════╝") + BoldCyan(" ██╔════╝╚██╗ ██╔╝██╔════╝"))
-	fmt.Println(BoldCyan(" ██║ ███╗██║ ██║██║ ██║") + BoldWhite("███████╗") + BoldCyan(" █████╗ ╚████╔╝ █████╗ "))
-	fmt.Println(BoldCyan(" ██║ ██║██║ ██║██║ ██║") + BoldWhite("╚════██║") + BoldCyan(" ██╔══╝ ╚██╔╝ ██╔══╝ "))
-	fmt.Println(BoldCyan(" ╚██████╔╝╚██████╔╝██████╔╝") + BoldWhite("███████║") + BoldCyan(" ███████╗ ██║ ███████╗"))
-	fmt.Println(BoldCyan(" ╚═════╝ ╚═════╝ ╚═════╝ ") + BoldWhite("╚══════╝") + BoldCyan(" ╚══════╝ ╚═╝ ╚══════╝"))
+	fmt.Println(BoldWhite(" ██████╗ ██████╗ ██████╗ ") + BoldGreen("███████╗") + BoldWhite(" ███████╗██╗ ██╗███████╗"))
+	fmt.Println(BoldWhite(" ██╔════╝ ██╔═══██╗██╔══██╗") + BoldGreen("██╔════╝") + BoldWhite(" ██╔════╝╚██╗ ██╔╝██╔════╝"))
+	fmt.Println(BoldWhite(" ██║ ███╗██║ ██║██║ ██║") + BoldGreen("███████╗") + BoldWhite(" █████╗ ╚████╔╝ █████╗ "))
+	fmt.Println(BoldWhite(" ██║ ██║██║ ██║██║ ██║") + BoldGreen("╚════██║") + BoldWhite(" ██╔══╝ ╚██╔╝ ██╔══╝ "))
+	fmt.Println(BoldWhite(" ╚██████╔╝╚██████╔╝██████╔╝") + BoldGreen("███████║") + BoldWhite(" ███████╗ ██║ ███████╗"))
+	fmt.Println(BoldWhite(" ╚═════╝ ╚═════╝ ╚═════╝ ") + BoldGreen("╚══════╝") + BoldWhite(" ╚══════╝ ╚═╝ ╚══════╝"))
 	fmt.Println()
-	fmt.Printf(" %s %s\n", BoldWhite("⚡"), Dim("Ultra-fast subdomain enumeration & reconnaissance"))
-	fmt.Printf(" %s %s %s %s %s %s\n",
+	fmt.Printf(" %s %s\n", BoldGreen("⚡"), Dim("Ultra-fast subdomain enumeration & reconnaissance"))
+	fmt.Printf(" %s %s %s %s %s %s\n",
 		Dim("Version:"), BoldGreen("0.1"),
-		Dim("By:"), Cyan("github.com/Vyntral"),
+		Dim("By:"), White("github.com/Vyntral"),
 		Dim("For:"), Yellow("github.com/Orizon-eu"))
 	fmt.Println()
 }

 func PrintSection(icon, title string) {
-	fmt.Printf("\n%s %s %s\n", BoldCyan("┌──"), BoldWhite(icon+" "+title), BoldCyan(strings.Repeat("─", 50)))
+	fmt.Println()
+	fmt.Printf(" %s %s\n", icon, BoldWhite(title))
+	fmt.Printf(" %s\n", Dim(strings.Repeat("─", 50)))
 }

 func PrintSubSection(text string) {
-	fmt.Printf("%s %s\n", Cyan("│"), text)
+	fmt.Printf("   %s\n", text)
 }

 func PrintEndSection() {
-	fmt.Printf("%s\n", BoldCyan("└"+strings.Repeat("─", 60)))
+	// No more lines, just spacing
 }

 func PrintProgress(current, total int, label string) {
@@ -78,7 +80,7 @@ func PrintProgress(current, total int, label string) {
 	}
 	bar := strings.Repeat("█", filled) + strings.Repeat("░", width-filled)
 	percent := float64(current) / float64(total) * 100
-	fmt.Printf("\r%s %s %s %s %.0f%% ", Cyan("│"), label, BoldGreen(bar), Dim(fmt.Sprintf("(%d/%d)", current, total)), percent)
+	fmt.Printf("\r  %s %s %s %.0f%% ", label, Green(bar), Dim(fmt.Sprintf("(%d/%d)", current, total)), percent)
 }

 func ClearLine() {
@@ -141,5 +143,5 @@ func SaveOutput(path string, format string, results map[string]*config.Subdomain
 		}
 	}

-	fmt.Printf("%s Results saved to %s\n", Green("[+]"), path)
+	fmt.Printf("\n  %s Results saved to %s\n", Green("✓"), path)
 }
197
internal/progress/progress.go
Normal file
@@ -0,0 +1,197 @@
package progress

import (
	"fmt"
	"strings"
	"sync"
	"sync/atomic"
	"time"

	"god-eye/internal/output"
)

// Bar represents a progress bar
type Bar struct {
	total      int64
	current    int64
	width      int
	prefix     string
	startTime  time.Time
	lastUpdate time.Time
	mu         sync.Mutex
	done       bool
	silent     bool
}

// New creates a new progress bar
func New(total int, prefix string, silent bool) *Bar {
	return &Bar{
		total:     int64(total),
		current:   0,
		width:     40,
		prefix:    prefix,
		startTime: time.Now(),
		silent:    silent,
	}
}

// Increment increases the progress by 1
func (b *Bar) Increment() {
	atomic.AddInt64(&b.current, 1)
	b.render()
}

// Add increases the progress by n
func (b *Bar) Add(n int) {
	atomic.AddInt64(&b.current, int64(n))
	b.render()
}

// SetCurrent sets the current progress value
func (b *Bar) SetCurrent(n int) {
	atomic.StoreInt64(&b.current, int64(n))
	b.render()
}

// render displays the progress bar
func (b *Bar) render() {
	if b.silent {
		return
	}

	b.mu.Lock()
	defer b.mu.Unlock()

	// Throttle updates to avoid flickering (max 10 updates/sec)
	if time.Since(b.lastUpdate) < 100*time.Millisecond && !b.done {
		return
	}
	b.lastUpdate = time.Now()

	current := atomic.LoadInt64(&b.current)
	total := b.total

	// Calculate percentage
	var percent float64
	if total > 0 {
		percent = float64(current) / float64(total) * 100
	}

	// Calculate filled width
	filled := int(float64(b.width) * percent / 100)
	if filled > b.width {
		filled = b.width
	}

	// Build progress bar
	bar := strings.Repeat("█", filled) + strings.Repeat("░", b.width-filled)

	// Calculate ETA
	elapsed := time.Since(b.startTime)
	var eta string
	if current > 0 && current < total {
		remaining := time.Duration(float64(elapsed) / float64(current) * float64(total-current))
		eta = formatDuration(remaining)
	} else if current >= total {
		eta = "done"
	} else {
		eta = "..."
	}

	// Calculate speed
	var speed float64
	if elapsed.Seconds() > 0 {
		speed = float64(current) / elapsed.Seconds()
	}

	// Print progress bar (overwrite line with \r) - clean style without box characters
	fmt.Printf("\r  %s [%s] %s/%s %.0f%% %s ETA %s ",
		b.prefix,
		output.Green(bar),
		output.BoldWhite(fmt.Sprintf("%d", current)),
		output.Dim(fmt.Sprintf("%d", total)),
		percent,
		output.Dim(fmt.Sprintf("%.0f/s", speed)),
		output.Dim(eta),
	)
}

// Finish completes the progress bar
func (b *Bar) Finish() {
	if b.silent {
		return
	}

	b.mu.Lock()
	b.done = true
	b.mu.Unlock()

	current := atomic.LoadInt64(&b.current)
	elapsed := time.Since(b.startTime)

	// Clear the line and print final status - clean style
	fmt.Printf("\r  %s %s %s completed in %s          \n",
		output.Green("✓"),
		output.BoldWhite(fmt.Sprintf("%d", current)),
		b.prefix,
		output.Green(formatDuration(elapsed)),
	)
}

// FinishWithMessage completes with a custom message
func (b *Bar) FinishWithMessage(msg string) {
	if b.silent {
		return
	}

	b.mu.Lock()
	b.done = true
	b.mu.Unlock()

	// Clear the line and print message - clean style
	fmt.Printf("\r  %s %s          \n",
		output.Green("✓"),
		msg,
	)
}

// formatDuration formats a duration nicely
func formatDuration(d time.Duration) string {
	if d < time.Second {
		return "<1s"
	} else if d < time.Minute {
		return fmt.Sprintf("%ds", int(d.Seconds()))
	} else if d < time.Hour {
		mins := int(d.Minutes())
		secs := int(d.Seconds()) % 60
		return fmt.Sprintf("%dm%ds", mins, secs)
	}
	hours := int(d.Hours())
	mins := int(d.Minutes()) % 60
	return fmt.Sprintf("%dh%dm", hours, mins)
}

// MultiBar manages multiple progress bars
type MultiBar struct {
	bars   []*Bar
	mu     sync.Mutex
	silent bool
}

// NewMulti creates a new multi-bar manager
func NewMulti(silent bool) *MultiBar {
	return &MultiBar{
		bars:   make([]*Bar, 0),
		silent: silent,
	}
}

// AddBar adds a new progress bar
func (m *MultiBar) AddBar(total int, prefix string) *Bar {
	m.mu.Lock()
	defer m.mu.Unlock()

	bar := New(total, prefix, m.silent)
	m.bars = append(m.bars, bar)
	return bar
}
284
internal/ratelimit/ratelimit.go
Normal file
@@ -0,0 +1,284 @@
package ratelimit

import (
	"sync"
	"sync/atomic"
	"time"
)

// AdaptiveRateLimiter implements intelligent rate limiting that adapts based on errors
type AdaptiveRateLimiter struct {
	// Configuration
	minDelay     time.Duration
	maxDelay     time.Duration
	currentDelay time.Duration

	// Error tracking
	consecutiveErrors int64
	totalErrors       int64
	totalRequests     int64

	// Backoff settings
	backoffMultiplier float64
	recoveryRate      float64

	// State
	lastRequest time.Time
	mu          sync.Mutex
}

// Config holds configuration for the rate limiter
type Config struct {
	MinDelay          time.Duration // Minimum delay between requests
	MaxDelay          time.Duration // Maximum delay (during backoff)
	BackoffMultiplier float64       // How much to increase delay on error (default 2.0)
	RecoveryRate      float64       // How much to decrease delay on success (default 0.9)
}

// DefaultConfig returns sensible defaults
func DefaultConfig() Config {
	return Config{
		MinDelay:          50 * time.Millisecond,
		MaxDelay:          5 * time.Second,
		BackoffMultiplier: 2.0,
		RecoveryRate:      0.9,
	}
}

// AggressiveConfig returns config for fast scanning
func AggressiveConfig() Config {
	return Config{
		MinDelay:          10 * time.Millisecond,
		MaxDelay:          2 * time.Second,
		BackoffMultiplier: 1.5,
		RecoveryRate:      0.8,
	}
}

// ConservativeConfig returns config for careful scanning
func ConservativeConfig() Config {
	return Config{
		MinDelay:          200 * time.Millisecond,
		MaxDelay:          10 * time.Second,
		BackoffMultiplier: 3.0,
		RecoveryRate:      0.95,
	}
}

// New creates a new adaptive rate limiter
func New(cfg Config) *AdaptiveRateLimiter {
	if cfg.BackoffMultiplier == 0 {
		cfg.BackoffMultiplier = 2.0
	}
	if cfg.RecoveryRate == 0 {
		cfg.RecoveryRate = 0.9
	}

	return &AdaptiveRateLimiter{
		minDelay:          cfg.MinDelay,
		maxDelay:          cfg.MaxDelay,
		currentDelay:      cfg.MinDelay,
		backoffMultiplier: cfg.BackoffMultiplier,
		recoveryRate:      cfg.RecoveryRate,
	}
}

// Wait blocks until it's safe to make another request
func (r *AdaptiveRateLimiter) Wait() {
	r.mu.Lock()
	defer r.mu.Unlock()

	elapsed := time.Since(r.lastRequest)
	if elapsed < r.currentDelay {
		time.Sleep(r.currentDelay - elapsed)
	}
	r.lastRequest = time.Now()
	atomic.AddInt64(&r.totalRequests, 1)
}

// Success reports a successful request
func (r *AdaptiveRateLimiter) Success() {
	r.mu.Lock()
	defer r.mu.Unlock()

	// Reset consecutive errors
	atomic.StoreInt64(&r.consecutiveErrors, 0)

	// Gradually reduce delay (recover)
	newDelay := time.Duration(float64(r.currentDelay) * r.recoveryRate)
	if newDelay < r.minDelay {
		newDelay = r.minDelay
	}
	r.currentDelay = newDelay
}

// Error reports a failed request (timeout, 429, etc)
func (r *AdaptiveRateLimiter) Error(isRateLimited bool) {
	r.mu.Lock()
	defer r.mu.Unlock()

	atomic.AddInt64(&r.consecutiveErrors, 1)
	atomic.AddInt64(&r.totalErrors, 1)

	// Increase delay on error
	multiplier := r.backoffMultiplier
	if isRateLimited {
		// More aggressive backoff for rate limit errors (429)
		multiplier *= 2
	}

	newDelay := time.Duration(float64(r.currentDelay) * multiplier)
	if newDelay > r.maxDelay {
		newDelay = r.maxDelay
	}
	r.currentDelay = newDelay
}

// GetCurrentDelay returns the current delay
func (r *AdaptiveRateLimiter) GetCurrentDelay() time.Duration {
	r.mu.Lock()
	defer r.mu.Unlock()
	return r.currentDelay
}

// GetStats returns error statistics
func (r *AdaptiveRateLimiter) GetStats() (total int64, errors int64, currentDelay time.Duration) {
	return atomic.LoadInt64(&r.totalRequests),
		atomic.LoadInt64(&r.totalErrors),
		r.GetCurrentDelay()
}

// ShouldBackoff returns true if we're experiencing too many errors
func (r *AdaptiveRateLimiter) ShouldBackoff() bool {
	return atomic.LoadInt64(&r.consecutiveErrors) > 5
}

// HostRateLimiter manages rate limits per host
type HostRateLimiter struct {
	limiters map[string]*AdaptiveRateLimiter
	config   Config
	mu       sync.RWMutex
}

// NewHostRateLimiter creates a per-host rate limiter
func NewHostRateLimiter(cfg Config) *HostRateLimiter {
	return &HostRateLimiter{
		limiters: make(map[string]*AdaptiveRateLimiter),
		config:   cfg,
	}
}

// Get returns or creates a rate limiter for a host
func (h *HostRateLimiter) Get(host string) *AdaptiveRateLimiter {
	h.mu.RLock()
	limiter, exists := h.limiters[host]
	h.mu.RUnlock()

	if exists {
		return limiter
	}

	h.mu.Lock()
	defer h.mu.Unlock()

	// Double check after acquiring write lock
	if limiter, exists = h.limiters[host]; exists {
		return limiter
	}

	limiter = New(h.config)
	h.limiters[host] = limiter
	return limiter
}

// GetStats returns aggregated stats for all hosts
func (h *HostRateLimiter) GetStats() (hosts int, totalRequests, totalErrors int64) {
	h.mu.RLock()
	defer h.mu.RUnlock()

	hosts = len(h.limiters)
	for _, limiter := range h.limiters {
		requests, errors, _ := limiter.GetStats()
		totalRequests += requests
		totalErrors += errors
	}
	return
}

// ConcurrencyController manages dynamic concurrency based on errors
type ConcurrencyController struct {
	maxConcurrency int64
	minConcurrency int64
	current        int64
	errorCount     int64
	successCount   int64
	checkInterval  int64
	mu             sync.Mutex
}

// NewConcurrencyController creates a new concurrency controller
func NewConcurrencyController(max, min int) *ConcurrencyController {
	return &ConcurrencyController{
		maxConcurrency: int64(max),
		minConcurrency: int64(min),
		current:        int64(max),
		checkInterval:  100, // Check every 100 requests
	}
}

// GetCurrent returns current concurrency level
func (c *ConcurrencyController) GetCurrent() int {
	return int(atomic.LoadInt64(&c.current))
}

// ReportSuccess reports a successful request
func (c *ConcurrencyController) ReportSuccess() {
	atomic.AddInt64(&c.successCount, 1)
	c.maybeAdjust()
}

// ReportError reports an error
func (c *ConcurrencyController) ReportError() {
	atomic.AddInt64(&c.errorCount, 1)
	c.maybeAdjust()
}

// maybeAdjust checks if we should adjust concurrency
func (c *ConcurrencyController) maybeAdjust() {
	total := atomic.LoadInt64(&c.successCount) + atomic.LoadInt64(&c.errorCount)
	if total%c.checkInterval != 0 {
		return
	}

	c.mu.Lock()
	defer c.mu.Unlock()

	errors := atomic.LoadInt64(&c.errorCount)
	successes := atomic.LoadInt64(&c.successCount)

	if successes == 0 {
		return
	}

	errorRate := float64(errors) / float64(total)

	if errorRate > 0.1 { // More than 10% errors
		// Reduce concurrency
		newConcurrency := int64(float64(c.current) * 0.8)
		if newConcurrency < c.minConcurrency {
			newConcurrency = c.minConcurrency
		}
		atomic.StoreInt64(&c.current, newConcurrency)
	} else if errorRate < 0.02 { // Less than 2% errors
		// Increase concurrency
		newConcurrency := int64(float64(c.current) * 1.1)
		if newConcurrency > c.maxConcurrency {
			newConcurrency = c.maxConcurrency
		}
		atomic.StoreInt64(&c.current, newConcurrency)
	}

	// Reset counters
	atomic.StoreInt64(&c.errorCount, 0)
	atomic.StoreInt64(&c.successCount, 0)
}
236
internal/retry/retry.go
Normal file
@@ -0,0 +1,236 @@
package retry

import (
	"context"
	"errors"
	"math"
	"math/rand"
	"time"
)

// Config holds retry configuration
type Config struct {
	MaxRetries      int           // Maximum number of retry attempts
	InitialDelay    time.Duration // Initial delay before first retry
	MaxDelay        time.Duration // Maximum delay between retries
	Multiplier      float64       // Delay multiplier for exponential backoff
	Jitter          float64       // Random jitter factor (0-1)
	RetryableErrors []error       // Specific errors to retry on (nil = retry all)
}

// DefaultConfig returns sensible defaults for network operations
func DefaultConfig() Config {
	return Config{
		MaxRetries:   3,
		InitialDelay: 100 * time.Millisecond,
		MaxDelay:     5 * time.Second,
		Multiplier:   2.0,
		Jitter:       0.1,
	}
}

// DNSConfig returns config optimized for DNS queries
func DNSConfig() Config {
	return Config{
		MaxRetries:   3,
		InitialDelay: 50 * time.Millisecond,
		MaxDelay:     2 * time.Second,
		Multiplier:   2.0,
		Jitter:       0.2,
	}
}

// HTTPConfig returns config optimized for HTTP requests
func HTTPConfig() Config {
	return Config{
		MaxRetries:   2,
		InitialDelay: 200 * time.Millisecond,
		MaxDelay:     3 * time.Second,
		Multiplier:   2.0,
		Jitter:       0.15,
	}
}

// AggressiveConfig returns config for fast scanning with fewer retries
func AggressiveConfig() Config {
	return Config{
		MaxRetries:   1,
		InitialDelay: 50 * time.Millisecond,
		MaxDelay:     1 * time.Second,
		Multiplier:   1.5,
		Jitter:       0.1,
	}
}

// Result wraps the result of a retryable operation
type Result struct {
	Value    interface{}
	Error    error
	Attempts int
}

// Do executes a function with retry logic
func Do(ctx context.Context, cfg Config, fn func() (interface{}, error)) Result {
	var lastErr error
	attempts := 0

	for attempts <= cfg.MaxRetries {
		attempts++

		// Check context cancellation
		select {
		case <-ctx.Done():
			return Result{Error: ctx.Err(), Attempts: attempts}
		default:
		}

		// Execute the function
		result, err := fn()
		if err == nil {
			return Result{Value: result, Attempts: attempts}
		}

		lastErr = err

		// Check if error is retryable
		if !isRetryable(err, cfg.RetryableErrors) {
			return Result{Error: err, Attempts: attempts}
		}

		// If this was the last attempt, don't sleep
		if attempts > cfg.MaxRetries {
			break
		}

		// Calculate delay with exponential backoff and jitter
		delay := calculateDelay(attempts, cfg)

		// Wait before retrying
		select {
		case <-ctx.Done():
			return Result{Error: ctx.Err(), Attempts: attempts}
		case <-time.After(delay):
		}
	}

	return Result{Error: lastErr, Attempts: attempts}
}

// DoSimple executes a function with default config and no context
func DoSimple(fn func() (interface{}, error)) Result {
	return Do(context.Background(), DefaultConfig(), fn)
}

// DoWithTimeout executes with a timeout
func DoWithTimeout(timeout time.Duration, cfg Config, fn func() (interface{}, error)) Result {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	return Do(ctx, cfg, fn)
}

// calculateDelay computes the delay for a given attempt
func calculateDelay(attempt int, cfg Config) time.Duration {
	// Exponential backoff: initialDelay * multiplier^(attempt-1)
	delay := float64(cfg.InitialDelay) * math.Pow(cfg.Multiplier, float64(attempt-1))

	// Apply max cap
	if delay > float64(cfg.MaxDelay) {
		delay = float64(cfg.MaxDelay)
	}

	// Apply jitter: delay * (1 +/- jitter)
	if cfg.Jitter > 0 {
		jitter := delay * cfg.Jitter * (2*rand.Float64() - 1)
		delay += jitter
	}

	return time.Duration(delay)
}

// isRetryable checks if an error should be retried
func isRetryable(err error, retryableErrors []error) bool {
	if err == nil {
		return false
	}

	// If no specific errors defined, retry all
	if len(retryableErrors) == 0 {
		return true
	}

	// Check if error matches any retryable error
	for _, retryableErr := range retryableErrors {
		if errors.Is(err, retryableErr) {
			return true
		}
	}

	return false
}

// Common retryable error types
var (
	ErrTimeout         = errors.New("operation timeout")
	ErrTemporary       = errors.New("temporary error")
	ErrConnectionReset = errors.New("connection reset")
	ErrDNSLookup       = errors.New("dns lookup failed")
)

// IsTemporaryError checks if an error is temporary/transient
func IsTemporaryError(err error) bool {
	if err == nil {
		return false
	}

	// Check for common temporary error strings
	errStr := err.Error()
	temporaryPatterns := []string{
		"timeout",
		"temporary",
		"connection reset",
		"connection refused",
		"no such host",
		"i/o timeout",
		"TLS handshake timeout",
		"context deadline exceeded",
		"server misbehaving",
		"too many open files",
	}

	for _, pattern := range temporaryPatterns {
		if containsIgnoreCase(errStr, pattern) {
			return true
		}
	}

	return false
}

func containsIgnoreCase(s, substr string) bool {
	for i := 0; i+len(substr) <= len(s); i++ {
		if equalFold(s[i:i+len(substr)], substr) {
			return true
		}
	}
	return false
}

func equalFold(s, t string) bool {
	if len(s) != len(t) {
		return false
	}
	for i := 0; i < len(s); i++ {
		sr := s[i]
		tr := t[i]
		if sr >= 'A' && sr <= 'Z' {
			sr += 'a' - 'A'
		}
		if tr >= 'A' && tr <= 'Z' {
			tr += 'a' - 'A'
		}
		if sr != tr {
			return false
		}
	}
	return true
}
|
||||
internal/scanner/helpers.go (new file, 177 lines)
@@ -0,0 +1,177 @@
package scanner

import (
	"bufio"
	"fmt"
	"net"
	"os"
	"sort"
	"strings"
	"sync"
	"time"

	"god-eye/internal/config"
)

// LoadWordlist loads words from a file
func LoadWordlist(path string) ([]string, error) {
	file, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer file.Close()

	var words []string
	scanner := bufio.NewScanner(file)
	for scanner.Scan() {
		word := strings.TrimSpace(scanner.Text())
		if word != "" && !strings.HasPrefix(word, "#") {
			words = append(words, word)
		}
	}
	return words, scanner.Err()
}

// ScanPorts scans ports on an IP address
func ScanPorts(ip string, ports []int, timeout int) []int {
	var openPorts []int
	var mu sync.Mutex
	var wg sync.WaitGroup

	for _, port := range ports {
		wg.Add(1)
		go func(p int) {
			defer wg.Done()
			address := fmt.Sprintf("%s:%d", ip, p)
			conn, err := net.DialTimeout("tcp", address, time.Duration(timeout)*time.Second)
			if err == nil {
				conn.Close()
				mu.Lock()
				openPorts = append(openPorts, p)
				mu.Unlock()
			}
		}(port)
	}

	wg.Wait()
	sort.Ints(openPorts)
	return openPorts
}

// Helper functions for AI analysis

func countSubdomainsWithAI(results map[string]*config.SubdomainResult) int {
	count := 0
	for _, r := range results {
		if len(r.AIFindings) > 0 {
			count++
		}
	}
	return count
}

func countActive(results map[string]*config.SubdomainResult) int {
	count := 0
	for _, r := range results {
		if r.StatusCode >= 200 && r.StatusCode < 400 {
			count++
		}
	}
	return count
}

func countVulns(results map[string]*config.SubdomainResult) int {
	count := 0
	for _, r := range results {
		if r.OpenRedirect || r.CORSMisconfig != "" || len(r.DangerousMethods) > 0 ||
			r.GitExposed || r.SvnExposed || len(r.BackupFiles) > 0 {
			count++
		}
	}
	return count
}

func buildAISummary(results map[string]*config.SubdomainResult) string {
	var summary strings.Builder

	criticalCount := 0
	highCount := 0
	mediumCount := 0

	for sub, r := range results {
		if len(r.AIFindings) == 0 {
			continue
		}

		switch r.AISeverity {
		case "critical":
			criticalCount++
			summary.WriteString(fmt.Sprintf("\n[CRITICAL] %s:\n", sub))
		case "high":
			highCount++
			summary.WriteString(fmt.Sprintf("\n[HIGH] %s:\n", sub))
		case "medium":
			mediumCount++
			summary.WriteString(fmt.Sprintf("\n[MEDIUM] %s:\n", sub))
		default:
			continue
		}

		// Add first 3 findings
		for i, finding := range r.AIFindings {
			if i >= 3 {
				break
			}
			summary.WriteString(fmt.Sprintf("  - %s\n", finding))
		}

		// Add CVE findings
		if len(r.CVEFindings) > 0 {
			summary.WriteString("  CVEs:\n")
			for _, cve := range r.CVEFindings {
				summary.WriteString(fmt.Sprintf("  - %s\n", cve))
			}
		}
	}

	header := fmt.Sprintf("Summary: %d critical, %d high, %d medium findings\n", criticalCount, highCount, mediumCount)
	return header + summary.String()
}

// ParseResolvers parses custom resolvers string
func ParseResolvers(resolversStr string) []string {
	var resolvers []string
	if resolversStr != "" {
		for _, r := range strings.Split(resolversStr, ",") {
			r = strings.TrimSpace(r)
			if r != "" {
				if !strings.Contains(r, ":") {
					r = r + ":53"
				}
				resolvers = append(resolvers, r)
			}
		}
	}
	if len(resolvers) == 0 {
		resolvers = config.DefaultResolvers
	}
	return resolvers
}

// ParsePorts parses custom ports string
func ParsePorts(portsStr string) []int {
	var customPorts []int
	if portsStr != "" {
		for _, p := range strings.Split(portsStr, ",") {
			p = strings.TrimSpace(p)
			var port int
			if _, err := fmt.Sscanf(p, "%d", &port); err == nil && port > 0 && port < 65536 {
				customPorts = append(customPorts, port)
			}
		}
	}
	if len(customPorts) == 0 {
		customPorts = []int{80, 443, 8080, 8443}
	}
	return customPorts
}
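The resolver-parsing behavior above (comma splitting, trimming, a default `:53` port, and a fallback list) can be sketched standalone. `defaultResolvers` here is a hypothetical stand-in for `config.DefaultResolvers`, whose actual values are not shown in this diff.

```go
package main

import (
	"fmt"
	"strings"
)

// defaultResolvers stands in for config.DefaultResolvers (assumed values).
var defaultResolvers = []string{"8.8.8.8:53", "1.1.1.1:53"}

// parseResolvers mirrors ParseResolvers above: split on commas, trim whitespace,
// append ":53" when no port is given, and fall back to defaults when empty.
func parseResolvers(s string) []string {
	var out []string
	for _, r := range strings.Split(s, ",") {
		r = strings.TrimSpace(r)
		if r == "" {
			continue
		}
		if !strings.Contains(r, ":") {
			r += ":53"
		}
		out = append(out, r)
	}
	if len(out) == 0 {
		out = defaultResolvers
	}
	return out
}

func main() {
	fmt.Println(parseResolvers("9.9.9.9, 8.8.4.4:5353"))
	fmt.Println(parseResolvers("")) // falls back to the default list
}
```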
||||
@@ -6,52 +6,87 @@ import (
	"net/http"
	"regexp"
	"strings"
	"sync"
)

// SecretPattern defines a pattern for finding secrets
type SecretPattern struct {
	Name    string
	Pattern *regexp.Regexp
}

// Secret patterns to search for in JS files
var secretPatterns = []SecretPattern{
	// API Keys
	{Name: "AWS Access Key", Pattern: regexp.MustCompile(`AKIA[0-9A-Z]{16}`)},
	{Name: "AWS Secret Key", Pattern: regexp.MustCompile(`(?i)aws[_\-]?secret[_\-]?access[_\-]?key['"\s:=]+['"]?([A-Za-z0-9/+=]{40})['"]?`)},
	{Name: "Google API Key", Pattern: regexp.MustCompile(`AIza[0-9A-Za-z\-_]{35}`)},
	{Name: "Google OAuth", Pattern: regexp.MustCompile(`[0-9]+-[0-9A-Za-z_]{32}\.apps\.googleusercontent\.com`)},
	{Name: "Firebase API Key", Pattern: regexp.MustCompile(`(?i)firebase[_\-]?api[_\-]?key['"\s:=]+['"]?([A-Za-z0-9_\-]{39})['"]?`)},
	{Name: "Stripe Key", Pattern: regexp.MustCompile(`(?:sk|pk)_(?:test|live)_[0-9a-zA-Z]{24,}`)},
	{Name: "Stripe Restricted", Pattern: regexp.MustCompile(`rk_(?:test|live)_[0-9a-zA-Z]{24,}`)},
	{Name: "GitHub Token", Pattern: regexp.MustCompile(`(?:ghp|gho|ghu|ghs|ghr)_[A-Za-z0-9_]{36,}`)},
	{Name: "GitHub OAuth", Pattern: regexp.MustCompile(`github[_\-]?oauth[_\-]?token['"\s:=]+['"]?([a-f0-9]{40})['"]?`)},
	{Name: "Slack Token", Pattern: regexp.MustCompile(`xox[baprs]-[0-9]{10,13}-[0-9]{10,13}[a-zA-Z0-9-]*`)},
	{Name: "Slack Webhook", Pattern: regexp.MustCompile(`https://hooks\.slack\.com/services/T[a-zA-Z0-9_]{8,}/B[a-zA-Z0-9_]{8,}/[a-zA-Z0-9_]{24}`)},
	{Name: "Discord Webhook", Pattern: regexp.MustCompile(`https://discord(?:app)?\.com/api/webhooks/[0-9]{17,20}/[A-Za-z0-9_\-]{60,}`)},
	{Name: "Twilio API Key", Pattern: regexp.MustCompile(`SK[a-f0-9]{32}`)},
	{Name: "Twilio Account SID", Pattern: regexp.MustCompile(`AC[a-f0-9]{32}`)},
	{Name: "SendGrid API Key", Pattern: regexp.MustCompile(`SG\.[a-zA-Z0-9_\-]{22}\.[a-zA-Z0-9_\-]{43}`)},
	{Name: "Mailgun API Key", Pattern: regexp.MustCompile(`key-[0-9a-zA-Z]{32}`)},
	{Name: "Mailchimp API Key", Pattern: regexp.MustCompile(`[0-9a-f]{32}-us[0-9]{1,2}`)},
	{Name: "Heroku API Key", Pattern: regexp.MustCompile(`(?i)heroku[_\-]?api[_\-]?key['"\s:=]+['"]?([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})['"]?`)},
	{Name: "DigitalOcean Token", Pattern: regexp.MustCompile(`dop_v1_[a-f0-9]{64}`)},
	{Name: "NPM Token", Pattern: regexp.MustCompile(`npm_[A-Za-z0-9]{36}`)},
	{Name: "PyPI Token", Pattern: regexp.MustCompile(`pypi-AgEIcHlwaS5vcmc[A-Za-z0-9_\-]{50,}`)},
	{Name: "Square Access Token", Pattern: regexp.MustCompile(`sq0atp-[0-9A-Za-z\-_]{22}`)},
	{Name: "Square OAuth", Pattern: regexp.MustCompile(`sq0csp-[0-9A-Za-z\-_]{43}`)},
	{Name: "Shopify Access Token", Pattern: regexp.MustCompile(`shpat_[a-fA-F0-9]{32}`)},
	{Name: "Shopify Shared Secret", Pattern: regexp.MustCompile(`shpss_[a-fA-F0-9]{32}`)},
	{Name: "Algolia API Key", Pattern: regexp.MustCompile(`(?i)algolia[_\-]?api[_\-]?key['"\s:=]+['"]?([a-zA-Z0-9]{32})['"]?`)},
	{Name: "Auth0 Client Secret", Pattern: regexp.MustCompile(`(?i)auth0[_\-]?client[_\-]?secret['"\s:=]+['"]?([a-zA-Z0-9_\-]{32,})['"]?`)},

	// Generic secrets
	{Name: "Generic API Key", Pattern: regexp.MustCompile(`(?i)['"]?api[_\-]?key['"]?\s*[:=]\s*['"]([a-zA-Z0-9_\-]{20,64})['"]`)},
	{Name: "Generic Secret", Pattern: regexp.MustCompile(`(?i)['"]?(?:client[_\-]?)?secret['"]?\s*[:=]\s*['"]([a-zA-Z0-9_\-]{20,64})['"]`)},
	{Name: "Generic Token", Pattern: regexp.MustCompile(`(?i)['"]?(?:access[_\-]?)?token['"]?\s*[:=]\s*['"]([a-zA-Z0-9_\-\.]{20,500})['"]`)},
	{Name: "Generic Password", Pattern: regexp.MustCompile(`(?i)['"]?password['"]?\s*[:=]\s*['"]([^'"]{8,64})['"]`)},
	{Name: "Private Key", Pattern: regexp.MustCompile(`-----BEGIN (?:RSA |EC |DSA |OPENSSH )?PRIVATE KEY-----`)},
	{Name: "Bearer Token", Pattern: regexp.MustCompile(`(?i)['"]?authorization['"]?\s*[:=]\s*['"]Bearer\s+([a-zA-Z0-9_\-\.]+)['"]`)},
	{Name: "Basic Auth", Pattern: regexp.MustCompile(`(?i)['"]?authorization['"]?\s*[:=]\s*['"]Basic\s+([a-zA-Z0-9+/=]+)['"]`)},
	{Name: "JWT Token", Pattern: regexp.MustCompile(`eyJ[a-zA-Z0-9_\-]*\.eyJ[a-zA-Z0-9_\-]*\.[a-zA-Z0-9_\-]*`)},

	// Database connection strings
	{Name: "MongoDB URI", Pattern: regexp.MustCompile(`mongodb(?:\+srv)?://[^\s'"]+`)},
	{Name: "PostgreSQL URI", Pattern: regexp.MustCompile(`postgres(?:ql)?://[^\s'"]+`)},
	{Name: "MySQL URI", Pattern: regexp.MustCompile(`mysql://[^\s'"]+`)},
	{Name: "Redis URI", Pattern: regexp.MustCompile(`redis://[^\s'"]+`)},
}

// Endpoint patterns for API discovery - only external/interesting URLs
// Note: We exclude relative paths like /api/... as they're not secrets
var endpointPatterns = []*regexp.Regexp{
	regexp.MustCompile(`['"]https?://api\.[a-zA-Z0-9\-\.]+[a-zA-Z0-9/\-_]*['"]`),        // External API domains
	regexp.MustCompile(`['"]https?://[a-zA-Z0-9\-\.]+\.amazonaws\.com[^'"]*['"]`),       // AWS endpoints
	regexp.MustCompile(`['"]https?://[a-zA-Z0-9\-\.]+\.azure\.com[^'"]*['"]`),           // Azure endpoints
	regexp.MustCompile(`['"]https?://[a-zA-Z0-9\-\.]+\.googleapis\.com[^'"]*['"]`),      // Google API
	regexp.MustCompile(`['"]https?://[a-zA-Z0-9\-\.]+\.firebaseio\.com[^'"]*['"]`),      // Firebase
}

// AnalyzeJSFiles finds JavaScript files and extracts potential secrets
func AnalyzeJSFiles(subdomain string, client *http.Client) ([]string, []string) {
	var jsFiles []string
	var secrets []string
	var mu sync.Mutex

	urls := []string{
	baseURLs := []string{
		fmt.Sprintf("https://%s", subdomain),
		fmt.Sprintf("http://%s", subdomain),
	}

	// Common JS file paths
	jsPaths := []string{
		"/main.js", "/app.js", "/bundle.js", "/vendor.js",
		"/static/js/main.js", "/static/js/app.js",
		"/assets/js/app.js", "/js/main.js", "/js/app.js",
		"/dist/main.js", "/dist/bundle.js",
		"/_next/static/chunks/main.js",
		"/build/static/js/main.js",
	}

	// Secret patterns to search for
	secretPatterns := []*regexp.Regexp{
		regexp.MustCompile(`(?i)['"]?api[_-]?key['"]?\s*[:=]\s*['"]([a-zA-Z0-9_\-]{20,})['"]`),
		regexp.MustCompile(`(?i)['"]?aws[_-]?access[_-]?key[_-]?id['"]?\s*[:=]\s*['"]([A-Z0-9]{20})['"]`),
		regexp.MustCompile(`(?i)['"]?aws[_-]?secret[_-]?access[_-]?key['"]?\s*[:=]\s*['"]([a-zA-Z0-9/+=]{40})['"]`),
		regexp.MustCompile(`(?i)['"]?google[_-]?api[_-]?key['"]?\s*[:=]\s*['"]([a-zA-Z0-9_\-]{39})['"]`),
		regexp.MustCompile(`(?i)['"]?firebase[_-]?api[_-]?key['"]?\s*[:=]\s*['"]([a-zA-Z0-9_\-]{39})['"]`),
		regexp.MustCompile(`(?i)['"]?stripe[_-]?(publishable|secret)[_-]?key['"]?\s*[:=]\s*['"]([a-zA-Z0-9_\-]{20,})['"]`),
		regexp.MustCompile(`(?i)['"]?github[_-]?token['"]?\s*[:=]\s*['"]([a-zA-Z0-9_]{36,})['"]`),
		regexp.MustCompile(`(?i)['"]?slack[_-]?token['"]?\s*[:=]\s*['"]([a-zA-Z0-9\-]{30,})['"]`),
		regexp.MustCompile(`(?i)['"]?private[_-]?key['"]?\s*[:=]\s*['"]([a-zA-Z0-9/+=]{50,})['"]`),
		regexp.MustCompile(`(?i)['"]?secret['"]?\s*[:=]\s*['"]([a-zA-Z0-9_\-]{20,})['"]`),
		regexp.MustCompile(`(?i)['"]?password['"]?\s*[:=]\s*['"]([^'"]{8,})['"]`),
		regexp.MustCompile(`(?i)['"]?authorization['"]?\s*[:=]\s*['"]Bearer\s+([a-zA-Z0-9_\-\.]+)['"]`),
	}

	// Also search for API endpoints in JS
	endpointPatterns := []*regexp.Regexp{
		regexp.MustCompile(`(?i)['"]https?://[a-zA-Z0-9\-\.]+/api/[a-zA-Z0-9/\-_]+['"]`),
		regexp.MustCompile(`(?i)['"]https?://api\.[a-zA-Z0-9\-\.]+[a-zA-Z0-9/\-_]*['"]`),
	}

	for _, baseURL := range urls {
		// First, get the main page and extract JS file references
	// First, get the main page and extract JS file references
	var foundJSURLs []string
	for _, baseURL := range baseURLs {
		resp, err := client.Get(baseURL)
		if err != nil {
			continue
@@ -64,92 +99,260 @@ func AnalyzeJSFiles(subdomain string, client *http.Client) ([]string, []string)
		}

		// Find JS files referenced in HTML
		jsRe := regexp.MustCompile(`src=["']([^"']*\.js[^"']*)["']`)
		jsRe := regexp.MustCompile(`(?:src|href)=["']([^"']*\.js(?:\?[^"']*)?)["']`)
		matches := jsRe.FindAllStringSubmatch(string(body), -1)
		for _, match := range matches {
			if len(match) > 1 {
				jsURL := match[1]
				if !strings.HasPrefix(jsURL, "http") {
					if strings.HasPrefix(jsURL, "/") {
						jsURL = baseURL + jsURL
					} else {
						jsURL = baseURL + "/" + jsURL
					}
				jsURL := normalizeURL(match[1], baseURL)
				if jsURL != "" && !contains(foundJSURLs, jsURL) {
					foundJSURLs = append(foundJSURLs, jsURL)
				}
				jsFiles = append(jsFiles, jsURL)
			}
		}

		// Also check common JS paths
		for _, path := range jsPaths {
			testURL := baseURL + path
			resp, err := client.Get(testURL)
			if err != nil {
				continue
			}

			if resp.StatusCode == 200 {
				jsFiles = append(jsFiles, path)

				// Read JS content and search for secrets
				jsBody, err := io.ReadAll(io.LimitReader(resp.Body, 500000))
				resp.Body.Close()
				if err != nil {
					continue
		// Also look for dynamic imports and webpack chunks
		dynamicRe := regexp.MustCompile(`["']([^"']*(?:chunk|bundle|vendor|main|app)[^"']*\.js(?:\?[^"']*)?)["']`)
		dynamicMatches := dynamicRe.FindAllStringSubmatch(string(body), -1)
		for _, match := range dynamicMatches {
			if len(match) > 1 {
				jsURL := normalizeURL(match[1], baseURL)
				if jsURL != "" && !contains(foundJSURLs, jsURL) {
					foundJSURLs = append(foundJSURLs, jsURL)
				}

				jsContent := string(jsBody)

				// Search for secrets
				for _, pattern := range secretPatterns {
					if matches := pattern.FindAllStringSubmatch(jsContent, 3); len(matches) > 0 {
						for _, m := range matches {
							if len(m) > 1 {
								secret := m[0]
								if len(secret) > 60 {
									secret = secret[:57] + "..."
								}
								secrets = append(secrets, secret)
							}
						}
					}
				}

				// Search for API endpoints
				for _, pattern := range endpointPatterns {
					if matches := pattern.FindAllString(jsContent, 5); len(matches) > 0 {
						for _, m := range matches {
							if len(m) > 60 {
								m = m[:57] + "..."
							}
							secrets = append(secrets, "endpoint: "+m)
						}
					}
				}
			} else {
				resp.Body.Close()
			}
		}

		if len(jsFiles) > 0 || len(secrets) > 0 {
		if len(foundJSURLs) > 0 {
			break
		}
	}

	// Deduplicate and limit
	// Limit to first 15 JS files to avoid too many requests
	if len(foundJSURLs) > 15 {
		foundJSURLs = foundJSURLs[:15]
	}

	// Download and analyze each JS file concurrently
	var wg sync.WaitGroup
	semaphore := make(chan struct{}, 5) // Limit concurrent downloads

	for _, jsURL := range foundJSURLs {
		wg.Add(1)
		go func(url string) {
			defer wg.Done()
			semaphore <- struct{}{}
			defer func() { <-semaphore }()

			fileSecrets := analyzeJSContent(url, client)

			mu.Lock()
			jsFiles = append(jsFiles, url)
			secrets = append(secrets, fileSecrets...)
			mu.Unlock()
		}(jsURL)
	}

	wg.Wait()

	// Deduplicate and limit results
	jsFiles = UniqueStrings(jsFiles)
	secrets = UniqueStrings(secrets)

	if len(jsFiles) > 10 {
		jsFiles = jsFiles[:10]
	}
	if len(secrets) > 10 {
		secrets = secrets[:10]
	if len(secrets) > 20 {
		secrets = secrets[:20]
	}

	return jsFiles, secrets
}

// analyzeJSContent downloads and analyzes a JS file for secrets
func analyzeJSContent(jsURL string, client *http.Client) []string {
	var secrets []string

	resp, err := client.Get(jsURL)
	if err != nil {
		return secrets
	}
	defer resp.Body.Close()

	if resp.StatusCode != 200 {
		return secrets
	}

	// Read JS content (limit to 2MB)
	body, err := io.ReadAll(io.LimitReader(resp.Body, 2*1024*1024))
	if err != nil {
		return secrets
	}

	content := string(body)

	// Skip minified files that are too large without meaningful content
	if len(content) > 500000 && !containsInterestingPatterns(content) {
		return secrets
	}

	// Search for secrets
	for _, sp := range secretPatterns {
		matches := sp.Pattern.FindAllStringSubmatch(content, 3)
		for _, m := range matches {
			var secret string
			if len(m) > 1 && m[1] != "" {
				secret = m[1]
			} else {
				secret = m[0]
			}

			// Skip common false positives
			if isLikelyFalsePositive(secret) {
				continue
			}

			// Truncate long secrets
			if len(secret) > 80 {
				secret = secret[:77] + "..."
			}

			finding := fmt.Sprintf("[%s] %s", sp.Name, secret)
			secrets = append(secrets, finding)
		}
	}

	// Search for API endpoints
	for _, pattern := range endpointPatterns {
		matches := pattern.FindAllString(content, 5)
		for _, m := range matches {
			// Clean up the match
			m = strings.Trim(m, `'"`)
			if len(m) > 80 {
				m = m[:77] + "..."
			}
			secrets = append(secrets, fmt.Sprintf("[API Endpoint] %s", m))
		}
	}

	return secrets
}

// normalizeURL converts relative URLs to absolute URLs
func normalizeURL(jsURL, baseURL string) string {
	if jsURL == "" {
		return ""
	}

	// Skip data URIs and blob URLs
	if strings.HasPrefix(jsURL, "data:") || strings.HasPrefix(jsURL, "blob:") {
		return ""
	}

	// Already absolute URL
	if strings.HasPrefix(jsURL, "http://") || strings.HasPrefix(jsURL, "https://") {
		return jsURL
	}

	// Protocol-relative URL
	if strings.HasPrefix(jsURL, "//") {
		if strings.HasPrefix(baseURL, "https") {
			return "https:" + jsURL
		}
		return "http:" + jsURL
	}

	// Absolute path
	if strings.HasPrefix(jsURL, "/") {
		// Extract base (scheme + host)
		parts := strings.SplitN(baseURL, "/", 4)
		if len(parts) >= 3 {
			return parts[0] + "//" + parts[2] + jsURL
		}
		return baseURL + jsURL
	}

	// Relative path
	return strings.TrimSuffix(baseURL, "/") + "/" + jsURL
}

// containsInterestingPatterns checks if content might contain secrets
func containsInterestingPatterns(content string) bool {
	interestingKeywords := []string{
		"api_key", "apikey", "api-key",
		"secret", "password", "token",
		"authorization", "bearer",
		"aws_", "firebase", "stripe",
		"mongodb://", "postgres://", "mysql://",
		"private_key", "privatekey",
	}

	contentLower := strings.ToLower(content)
	for _, kw := range interestingKeywords {
		if strings.Contains(contentLower, kw) {
			return true
		}
	}
	return false
}

// isLikelyFalsePositive performs basic pre-filtering before AI analysis
// Only filters obvious patterns - AI will handle context-aware filtering
func isLikelyFalsePositive(secret string) bool {
	// Only filter obvious placeholder patterns
	// AI will handle context-aware filtering (UI text, etc.)
	obviousPlaceholders := []string{
		"YOUR_API_KEY", "API_KEY_HERE", "REPLACE_ME",
		"xxxxxxxx", "XXXXXXXX", "00000000",
	}

	secretLower := strings.ToLower(secret)
	for _, fp := range obviousPlaceholders {
		if strings.Contains(secretLower, strings.ToLower(fp)) {
			return true
		}
	}

	// Too short
	if len(secret) < 8 {
		return true
	}

	// Check for repeated characters (garbage data)
	if isRepeatedChars(secret) {
		return true
	}

	return false
}

// isRepeatedChars checks if string is mostly repeated characters
func isRepeatedChars(s string) bool {
	if len(s) < 10 {
		return false
	}
	charCount := make(map[rune]int)
	for _, c := range s {
		charCount[c]++
	}
	// If any single character is more than 60% of the string, it's likely garbage
	for _, count := range charCount {
		if float64(count)/float64(len(s)) > 0.6 {
			return true
		}
	}
	return false
}

// contains checks if a string slice contains a value
func contains(slice []string, val string) bool {
	for _, s := range slice {
		if s == val {
			return true
		}
	}
	return false
}

// UniqueStrings returns unique strings from a slice
func UniqueStrings(input []string) []string {
	seen := make(map[string]bool)
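The URL-resolution cases handled by `normalizeURL` above can be exercised in a standalone sketch (same logic, reproduced outside the package; the example URLs are arbitrary):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeURL mirrors the function above: resolve a JS reference against a page URL,
// dropping data:/blob: URIs and handling absolute, protocol-relative, and relative forms.
func normalizeURL(jsURL, baseURL string) string {
	switch {
	case jsURL == "", strings.HasPrefix(jsURL, "data:"), strings.HasPrefix(jsURL, "blob:"):
		return ""
	case strings.HasPrefix(jsURL, "http://"), strings.HasPrefix(jsURL, "https://"):
		return jsURL
	case strings.HasPrefix(jsURL, "//"):
		if strings.HasPrefix(baseURL, "https") {
			return "https:" + jsURL
		}
		return "http:" + jsURL
	case strings.HasPrefix(jsURL, "/"):
		// Extract scheme + host from the base URL
		parts := strings.SplitN(baseURL, "/", 4)
		if len(parts) >= 3 {
			return parts[0] + "//" + parts[2] + jsURL
		}
		return baseURL + jsURL
	}
	return strings.TrimSuffix(baseURL, "/") + "/" + jsURL
}

func main() {
	fmt.Println(normalizeURL("/static/app.js", "https://example.com"))
	fmt.Println(normalizeURL("//cdn.example.com/a.js", "https://example.com"))
	fmt.Println(normalizeURL("chunk.js", "https://example.com/"))
}
```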
internal/scanner/output.go (new file, 455 lines)
@@ -0,0 +1,455 @@
package scanner

import (
	"fmt"
	"sort"
	"strings"
	"time"

	"god-eye/internal/config"
	"god-eye/internal/output"
)

// PrintResults displays scan results to stdout
func PrintResults(results map[string]*config.SubdomainResult, startTime time.Time, takeoverCount int32) {
	elapsed := time.Since(startTime)

	// Count statistics
	var activeCount, vulnCount, cloudCount int
	for _, r := range results {
		if r.StatusCode >= 200 && r.StatusCode < 400 {
			activeCount++
		}
		if r.OpenRedirect || r.CORSMisconfig != "" || len(r.DangerousMethods) > 0 || r.GitExposed || r.SvnExposed || len(r.BackupFiles) > 0 {
			vulnCount++
		}
		if r.CloudProvider != "" {
			cloudCount++
		}
	}

	// Summary box
	fmt.Println()
	fmt.Println(output.BoldCyan("╔══════════════════════════════════════════════════════════════════════════════╗"))
	fmt.Println(output.BoldCyan("║") + " " + output.BoldWhite("📊 SCAN SUMMARY") + " " + output.BoldCyan("║"))
	fmt.Println(output.BoldCyan("╠══════════════════════════════════════════════════════════════════════════════╣"))
	fmt.Printf("%s %-20s %s %-20s %s %-20s %s\n",
		output.BoldCyan("║"),
		fmt.Sprintf("🌐 Total: %s", output.BoldCyan(fmt.Sprintf("%d", len(results)))),
		output.Dim("|"),
		fmt.Sprintf("✅ Active: %s", output.BoldGreen(fmt.Sprintf("%d", activeCount))),
		output.Dim("|"),
		fmt.Sprintf("⏱️ Time: %s", output.BoldYellow(fmt.Sprintf("%.1fs", elapsed.Seconds()))),
		output.BoldCyan("║"))
	fmt.Printf("%s %-20s %s %-20s %s %-20s %s\n",
		output.BoldCyan("║"),
		fmt.Sprintf("⚠️ Vulns: %s", output.BoldRed(fmt.Sprintf("%d", vulnCount))),
		output.Dim("|"),
		fmt.Sprintf("☁️ Cloud: %s", output.Blue(fmt.Sprintf("%d", cloudCount))),
		output.Dim("|"),
		fmt.Sprintf("🎯 Takeover: %s", output.BoldRed(fmt.Sprintf("%d", takeoverCount))),
		output.BoldCyan("║"))
	fmt.Println(output.BoldCyan("╚══════════════════════════════════════════════════════════════════════════════╝"))
	fmt.Println()
	fmt.Println(output.BoldCyan("═══════════════════════════════════════════════════════════════════════════════"))

	// Sort subdomains
	var sortedSubs []string
	for sub := range results {
		sortedSubs = append(sortedSubs, sub)
	}
	sort.Strings(sortedSubs)

	for _, sub := range sortedSubs {
		r := results[sub]
		printSubdomainResult(sub, r)
	}

	fmt.Println()
	fmt.Println(output.BoldCyan("═══════════════════════════════════════════════════════════════════════════════"))
}

func printSubdomainResult(sub string, r *config.SubdomainResult) {
	// Color code by status
	var statusColor func(a ...interface{}) string
	var statusIcon string
	if r.StatusCode >= 200 && r.StatusCode < 300 {
		statusColor = output.Green
		statusIcon = "●"
	} else if r.StatusCode >= 300 && r.StatusCode < 400 {
		statusColor = output.Yellow
		statusIcon = "◐"
	} else if r.StatusCode >= 400 {
		statusColor = output.Red
		statusIcon = "○"
	} else {
		statusColor = output.Blue
		statusIcon = "◌"
	}

	// Line 1: Subdomain name with status
	statusBadge := ""
	if r.StatusCode > 0 {
		statusBadge = fmt.Sprintf(" %s", statusColor(fmt.Sprintf("[%d]", r.StatusCode)))
	}

	// Response time badge
	timeBadge := ""
	if r.ResponseMs > 0 {
		if r.ResponseMs < 200 {
			timeBadge = fmt.Sprintf(" %s", output.Green(fmt.Sprintf("⚡%dms", r.ResponseMs)))
		} else if r.ResponseMs < 500 {
			timeBadge = fmt.Sprintf(" %s", output.Yellow(fmt.Sprintf("⏱️%dms", r.ResponseMs)))
		} else {
			timeBadge = fmt.Sprintf(" %s", output.Red(fmt.Sprintf("🐢%dms", r.ResponseMs)))
		}
	}

	fmt.Printf("\n%s %s%s%s\n", statusColor(statusIcon), output.BoldCyan(sub), statusBadge, timeBadge)

	// IPs
	if len(r.IPs) > 0 {
		ips := r.IPs
		if len(ips) > 3 {
			ips = ips[:3]
		}
		fmt.Printf("  %s %s\n", output.Dim("IP:"), output.White(strings.Join(ips, ", ")))
	}

	// CNAME
	if r.CNAME != "" {
		fmt.Printf("  %s %s\n", output.Dim("CNAME:"), output.Blue(r.CNAME))
	}

	// Location + ASN
	if r.Country != "" || r.City != "" || r.ASN != "" {
		loc := ""
		if r.City != "" && r.Country != "" {
			loc = r.City + ", " + r.Country
		} else if r.Country != "" {
			loc = r.Country
		} else if r.City != "" {
			loc = r.City
		}

		asnStr := ""
		if r.ASN != "" {
			asnStr = r.ASN
			if len(asnStr) > 40 {
				asnStr = asnStr[:37] + "..."
			}
		}

		if loc != "" && asnStr != "" {
			fmt.Printf("  Location: %s | %s\n", output.Cyan(loc), output.Blue(asnStr))
		} else if loc != "" {
			fmt.Printf("  Location: %s\n", output.Cyan(loc))
		} else if asnStr != "" {
			fmt.Printf("  ASN: %s\n", output.Blue(asnStr))
		}
	}

	// PTR
	if r.PTR != "" {
		fmt.Printf("  PTR: %s\n", output.Magenta(r.PTR))
	}

	// HTTP Info (Title, Size)
	if r.Title != "" || r.ContentLength > 0 {
		httpInfo := "  HTTP: "
		if r.Title != "" {
			title := r.Title
			if len(title) > 50 {
				title = title[:47] + "..."
			}
			httpInfo += fmt.Sprintf("\"%s\"", title)
		}
		if r.ContentLength > 0 {
			sizeStr := formatSize(r.ContentLength)
			if r.Title != "" {
				httpInfo += fmt.Sprintf(" (%s)", sizeStr)
			} else {
				httpInfo += sizeStr
			}
		}
		fmt.Println(httpInfo)
	}

	// Redirect
	if r.RedirectURL != "" {
		redirectURL := r.RedirectURL
		if len(redirectURL) > 60 {
			redirectURL = redirectURL[:57] + "..."
		}
		fmt.Printf("  Redirect: %s\n", output.Yellow(redirectURL))
	}

	// Tech
	if len(r.Tech) > 0 {
		techMap := make(map[string]bool)
		var uniqueTech []string
		for _, t := range r.Tech {
			if !techMap[t] {
				techMap[t] = true
				uniqueTech = append(uniqueTech, t)
			}
		}
		if len(uniqueTech) > 5 {
			uniqueTech = uniqueTech[:5]
		}
		if len(uniqueTech) > 0 {
			fmt.Printf("  Tech: %s\n", output.Yellow(strings.Join(uniqueTech, ", ")))
		}
	}

	// Security (WAF, TLS)
	var securityInfo []string
	if r.WAF != "" {
		securityInfo = append(securityInfo, fmt.Sprintf("WAF: %s", output.Red(r.WAF)))
	}
	if r.TLSVersion != "" {
		tlsInfo := fmt.Sprintf("TLS: %s", output.Cyan(r.TLSVersion))
		if r.TLSSelfSigned {
			tlsInfo += " " + output.Yellow("(self-signed)")
		}
		securityInfo = append(securityInfo, tlsInfo)
	}
	if len(securityInfo) > 0 {
		fmt.Printf("  Security: %s\n", strings.Join(securityInfo, " | "))
	}

	// TLS Fingerprint (appliance detection)
	if r.TLSFingerprint != nil {
		fp := r.TLSFingerprint
		if fp.Vendor != "" {
			applianceInfo := fmt.Sprintf("%s %s", fp.Vendor, fp.Product)
			if fp.Version != "" {
				applianceInfo += " v" + fp.Version
			}
			if fp.ApplianceType != "" {
				applianceInfo += fmt.Sprintf(" (%s)", fp.ApplianceType)
			}
			fmt.Printf("  %s %s\n", output.BoldYellow("APPLIANCE:"), output.Yellow(applianceInfo))
		}
		// Show internal hostnames found in certificate
		if len(fp.InternalHosts) > 0 {
			hosts := fp.InternalHosts
			if len(hosts) > 5 {
				hosts = hosts[:5]
			}
			fmt.Printf("  %s %s\n", output.BoldMagenta("INTERNAL:"), output.Magenta(strings.Join(hosts, ", ")))
		}
		// Show certificate subject info if no vendor matched but has org info
		if fp.Vendor == "" && (fp.SubjectOrg != "" || fp.SubjectOU != "") {
			certInfo := ""
			if fp.SubjectOrg != "" {
				certInfo = "Org: " + fp.SubjectOrg
			}
			if fp.SubjectOU != "" {
				if certInfo != "" {
					certInfo += ", "
				}
				certInfo += "OU: " + fp.SubjectOU
			}
			fmt.Printf("  Cert: %s\n", output.Dim(certInfo))
		}
	}

	// Ports
	if len(r.Ports) > 0 {
		var portStrs []string
		for _, p := range r.Ports {
			portStrs = append(portStrs, fmt.Sprintf("%d", p))
		}
		fmt.Printf("  Ports: %s\n", output.Magenta(strings.Join(portStrs, ", ")))
	}

	// Extra files
	var extras []string
	if r.RobotsTxt {
		extras = append(extras, "robots.txt")
	}
	if r.SitemapXml {
		extras = append(extras, "sitemap.xml")
	}
	if r.FaviconHash != "" {
		extras = append(extras, fmt.Sprintf("favicon:%s", r.FaviconHash[:8]))
	}
	if len(extras) > 0 {
		fmt.Printf("  Files: %s\n", output.Green(strings.Join(extras, ", ")))
	}

	// DNS Records
	if len(r.MXRecords) > 0 {
		mx := r.MXRecords
		if len(mx) > 2 {
			mx = mx[:2]
		}
		fmt.Printf("  MX: %s\n", strings.Join(mx, ", "))
	}

	// Security Headers
	if len(r.MissingHeaders) > 0 && len(r.MissingHeaders) < 7 {
		if len(r.SecurityHeaders) > 0 {
			fmt.Printf("  Headers: %s | Missing: %s\n",
				output.Green(strings.Join(r.SecurityHeaders, ", ")),
				output.Yellow(strings.Join(r.MissingHeaders, ", ")))
		}
	} else if len(r.SecurityHeaders) > 0 {
		fmt.Printf("  Headers: %s\n", output.Green(strings.Join(r.SecurityHeaders, ", ")))
	}

	// Cloud Provider
	if r.CloudProvider != "" {
		fmt.Printf("  Cloud: %s\n", output.Cyan(r.CloudProvider))
	}

	// Email Security
	if r.EmailSecurity != "" {
		emailColor := output.Green
		if r.EmailSecurity == "Weak" {
			emailColor = output.Yellow
		} else if r.EmailSecurity == "None" {
			emailColor = output.Red
		}
		fmt.Printf("  Email: %s\n", emailColor(r.EmailSecurity))
	}

	// TLS Alt Names
	if len(r.TLSAltNames) > 0 {
		altNames := r.TLSAltNames
		if len(altNames) > 5 {
			altNames = altNames[:5]
		}
		fmt.Printf("  TLS Alt: %s\n", output.Blue(strings.Join(altNames, ", ")))
	}

	// S3 Buckets
	if len(r.S3Buckets) > 0 {
for _, bucket := range r.S3Buckets {
|
||||
if strings.Contains(bucket, "PUBLIC") {
|
||||
fmt.Printf(" %s %s\n", output.Red("S3:"), output.Red(bucket))
|
||||
} else {
|
||||
fmt.Printf(" S3: %s\n", output.Yellow(bucket))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Security Issues (vulnerabilities)
|
||||
var vulns []string
|
||||
if r.OpenRedirect {
|
||||
vulns = append(vulns, "Open Redirect")
|
||||
}
|
||||
if r.CORSMisconfig != "" {
|
||||
vulns = append(vulns, fmt.Sprintf("CORS: %s", r.CORSMisconfig))
|
||||
}
|
||||
if len(r.DangerousMethods) > 0 {
|
||||
vulns = append(vulns, fmt.Sprintf("Methods: %s", strings.Join(r.DangerousMethods, ", ")))
|
||||
}
|
||||
if r.GitExposed {
|
||||
vulns = append(vulns, ".git Exposed")
|
||||
}
|
||||
if r.SvnExposed {
|
||||
vulns = append(vulns, ".svn Exposed")
|
||||
}
|
||||
if len(r.BackupFiles) > 0 {
|
||||
files := r.BackupFiles
|
||||
if len(files) > 3 {
|
||||
files = files[:3]
|
||||
}
|
||||
vulns = append(vulns, fmt.Sprintf("Backup: %s", strings.Join(files, ", ")))
|
||||
}
|
||||
if len(vulns) > 0 {
|
||||
fmt.Printf(" %s %s\n", output.Red("VULNS:"), output.Red(strings.Join(vulns, " | ")))
|
||||
}
|
||||
|
||||
// Discovery (admin panels, API endpoints)
|
||||
var discoveries []string
|
||||
if len(r.AdminPanels) > 0 {
|
||||
panels := r.AdminPanels
|
||||
if len(panels) > 5 {
|
||||
panels = panels[:5]
|
||||
}
|
||||
discoveries = append(discoveries, fmt.Sprintf("Admin: %s", strings.Join(panels, ", ")))
|
||||
}
|
||||
if len(r.APIEndpoints) > 0 {
|
||||
endpoints := r.APIEndpoints
|
||||
if len(endpoints) > 5 {
|
||||
endpoints = endpoints[:5]
|
||||
}
|
||||
discoveries = append(discoveries, fmt.Sprintf("API: %s", strings.Join(endpoints, ", ")))
|
||||
}
|
||||
if len(discoveries) > 0 {
|
||||
fmt.Printf(" %s %s\n", output.Magenta("FOUND:"), output.Magenta(strings.Join(discoveries, " | ")))
|
||||
}
|
||||
|
||||
// JavaScript Analysis
|
||||
if len(r.JSFiles) > 0 {
|
||||
files := r.JSFiles
|
||||
if len(files) > 3 {
|
||||
files = files[:3]
|
||||
}
|
||||
fmt.Printf(" JS Files: %s\n", output.Blue(strings.Join(files, ", ")))
|
||||
}
|
||||
if len(r.JSSecrets) > 0 {
|
||||
for _, secret := range r.JSSecrets {
|
||||
fmt.Printf(" %s %s\n", output.Red("JS SECRET:"), output.Red(secret))
|
||||
}
|
||||
}
|
||||
|
||||
// Takeover
|
||||
if r.Takeover != "" {
|
||||
fmt.Printf(" %s %s\n", output.BgRed(" TAKEOVER "), output.BoldRed(r.Takeover))
|
||||
}
|
||||
|
||||
// AI Findings
|
||||
if len(r.AIFindings) > 0 {
|
||||
severityColor := output.Cyan
|
||||
severityLabel := "AI"
|
||||
if r.AISeverity == "critical" {
|
||||
severityColor = output.BoldRed
|
||||
severityLabel = "AI:CRITICAL"
|
||||
} else if r.AISeverity == "high" {
|
||||
severityColor = output.Red
|
||||
severityLabel = "AI:HIGH"
|
||||
} else if r.AISeverity == "medium" {
|
||||
severityColor = output.Yellow
|
||||
severityLabel = "AI:MEDIUM"
|
||||
}
|
||||
|
||||
for i, finding := range r.AIFindings {
|
||||
if i == 0 {
|
||||
fmt.Printf(" %s %s\n", severityColor(severityLabel+":"), finding)
|
||||
} else {
|
||||
fmt.Printf(" %s %s\n", output.Dim(" "), finding)
|
||||
}
|
||||
if i >= 4 {
|
||||
remaining := len(r.AIFindings) - 5
|
||||
if remaining > 0 {
|
||||
fmt.Printf(" %s (%d more findings...)\n", output.Dim(" "), remaining)
|
||||
}
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if r.AIModel != "" {
|
||||
fmt.Printf(" %s model: %s\n", output.Dim(" "), output.Dim(r.AIModel))
|
||||
}
|
||||
}
|
||||
|
||||
// CVE Findings
|
||||
if len(r.CVEFindings) > 0 {
|
||||
for _, cve := range r.CVEFindings {
|
||||
fmt.Printf(" %s %s\n", output.BoldRed("CVE:"), output.Red(cve))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func formatSize(size int64) string {
|
||||
if size > 1024*1024 {
|
||||
return fmt.Sprintf("%.1fMB", float64(size)/(1024*1024))
|
||||
} else if size > 1024 {
|
||||
return fmt.Sprintf("%.1fKB", float64(size)/1024)
|
||||
}
|
||||
return fmt.Sprintf("%dB", size)
|
||||
}
|
||||
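The `formatSize` helper above is self-contained and can be exercised on its own; a minimal sketch (the function body is reproduced verbatim, only the `main` wrapper is added for illustration):

```go
package main

import "fmt"

// formatSize reproduced from the scanner output code above.
func formatSize(size int64) string {
	if size > 1024*1024 {
		return fmt.Sprintf("%.1fMB", float64(size)/(1024*1024))
	} else if size > 1024 {
		return fmt.Sprintf("%.1fKB", float64(size)/1024)
	}
	return fmt.Sprintf("%dB", size)
}

func main() {
	for _, n := range []int64{500, 2048, 3 * 1024 * 1024} {
		fmt.Println(formatSize(n)) // 500B, 2.0KB, 3.0MB
	}
}
```

Note the boundary behavior: a value of exactly 1024 falls through both comparisons and prints as `1024B`.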
@@ -1,13 +1,8 @@
package scanner

import (
	"bufio"
	"encoding/json"
	"fmt"
	"net"
	"os"
	"sort"
	"strconv"
	"strings"
	"sync"
	"sync/atomic"
@@ -18,50 +13,57 @@ import (
	"god-eye/internal/dns"
	gohttp "god-eye/internal/http"
	"god-eye/internal/output"
	"god-eye/internal/progress"
	"god-eye/internal/ratelimit"
	"god-eye/internal/security"
	"god-eye/internal/sources"
	"god-eye/internal/stealth"
)

func Run(cfg config.Config) {
	startTime := time.Now()

	// Parse custom resolvers
	var resolvers []string
	if cfg.Resolvers != "" {
		for _, r := range strings.Split(cfg.Resolvers, ",") {
			r = strings.TrimSpace(r)
			if r != "" {
				if !strings.Contains(r, ":") {
					r = r + ":53"
				}
				resolvers = append(resolvers, r)
	// Pre-load KEV database if AI is enabled (auto-downloads if not present)
	if cfg.EnableAI && !cfg.Silent && !cfg.JsonOutput {
		kevStore := ai.GetKEVStore()
		if !kevStore.IsLoaded() {
			if err := kevStore.LoadWithProgress(true); err != nil {
				fmt.Printf("%s Failed to load KEV database: %v\n", output.Yellow("⚠️"), err)
				fmt.Println(output.Dim(" CVE lookups will use NVD API only (slower)"))
			}
			fmt.Println()
		}
	}
	if len(resolvers) == 0 {
		resolvers = config.DefaultResolvers
	}

	// Parse custom ports
	var customPorts []int
	if cfg.Ports != "" {
		for _, p := range strings.Split(cfg.Ports, ",") {
			p = strings.TrimSpace(p)
			if port, err := strconv.Atoi(p); err == nil && port > 0 && port < 65536 {
				customPorts = append(customPorts, port)
			}
		}
	}
	if len(customPorts) == 0 {
		customPorts = []int{80, 443, 8080, 8443}
	}
	// Parse custom resolvers and ports using helpers
	resolvers := ParseResolvers(cfg.Resolvers)
	customPorts := ParsePorts(cfg.Ports)

	// Initialize stealth manager
	stealthMode := stealth.ParseMode(cfg.StealthMode)
	stealthMgr := stealth.NewManager(stealthMode)

	// Adjust concurrency based on stealth mode
	effectiveConcurrency := stealthMgr.GetEffectiveConcurrency(cfg.Concurrency)

	if !cfg.Silent && !cfg.JsonOutput {
		output.PrintBanner()
		output.PrintSection("🎯", "TARGET CONFIGURATION")
		output.PrintSubSection(fmt.Sprintf("%s %s", output.Dim("Target:"), output.BoldCyan(cfg.Domain)))

		// Show stealth status
		if stealthMode != stealth.ModeOff {
			stealthColor := output.Yellow
			if stealthMode >= stealth.ModeAggressive {
				stealthColor = output.Red
			}
			output.PrintSubSection(fmt.Sprintf("%s %s %s %s",
				output.Dim("Stealth:"), stealthColor(stealthMgr.GetModeName()),
				output.Dim("Effective Threads:"), output.BoldGreen(fmt.Sprintf("%d", effectiveConcurrency))))
		}

		output.PrintSubSection(fmt.Sprintf("%s %s %s %s %s %s",
			output.Dim("Threads:"), output.BoldGreen(fmt.Sprintf("%d", cfg.Concurrency)),
			output.Dim("Threads:"), output.BoldGreen(fmt.Sprintf("%d", effectiveConcurrency)),
			output.Dim("Timeout:"), output.Yellow(fmt.Sprintf("%ds", cfg.Timeout)),
			output.Dim("Resolvers:"), output.Blue(fmt.Sprintf("%d", len(resolvers)))))
		if !cfg.NoPorts {
@@ -186,49 +188,72 @@ func Run(cfg config.Config) {
		}
	}()

	// Wildcard detection (always run for JSON output accuracy)
	var wildcardInfo *dns.WildcardInfo
	wildcardDetector := dns.NewWildcardDetector(resolvers, cfg.Timeout)
	wildcardInfo = wildcardDetector.Detect(cfg.Domain)

	// DNS Brute-force
	var bruteWg sync.WaitGroup
	if !cfg.NoBrute {
		// Check wildcard
		wildcardIPs := dns.CheckWildcard(cfg.Domain, resolvers)
		// Display wildcard detection results
		if !cfg.Silent && !cfg.JsonOutput {
			if len(wildcardIPs) > 0 {
				output.PrintSubSection(fmt.Sprintf("%s Wildcard DNS: %s", output.Yellow("⚠"), output.BoldYellow("DETECTED")))
			if wildcardInfo.IsWildcard {
				output.PrintSubSection(fmt.Sprintf("%s Wildcard DNS: %s (confidence: %.0f%%)",
					output.Yellow("⚠"), output.BoldYellow("DETECTED"), wildcardInfo.Confidence*100))
				if len(wildcardInfo.WildcardIPs) > 0 {
					ips := wildcardInfo.WildcardIPs
					if len(ips) > 3 {
						ips = ips[:3]
					}
					output.PrintSubSection(fmt.Sprintf(" %s Wildcard IPs: %s", output.Dim("→"), output.Yellow(strings.Join(ips, ", "))))
				}
				if wildcardInfo.HTTPStatusCode > 0 {
					output.PrintSubSection(fmt.Sprintf(" %s HTTP response: %d (%d bytes)",
						output.Dim("→"), wildcardInfo.HTTPStatusCode, wildcardInfo.HTTPBodySize))
				}
			} else {
				output.PrintSubSection(fmt.Sprintf("%s Wildcard DNS: %s", output.Green("✓"), output.Green("not detected")))
			}
		}

		// Brute-force
		semaphore := make(chan struct{}, cfg.Concurrency)
		for _, word := range wordlist {
		// Brute-force with wildcard filtering
		semaphore := make(chan struct{}, effectiveConcurrency)
		wildcardIPSet := make(map[string]bool)
		if wildcardInfo != nil {
			for _, ip := range wildcardInfo.WildcardIPs {
				wildcardIPSet[ip] = true
			}
		}

		// Shuffle wordlist if stealth mode randomization is enabled
		shuffledWordlist := stealthMgr.ShuffleSlice(wordlist)

		for _, word := range shuffledWordlist {
			bruteWg.Add(1)
			go func(word string) {
				defer bruteWg.Done()
				semaphore <- struct{}{}
				defer func() { <-semaphore }()

				// Apply stealth delay
				stealthMgr.Wait()

				subdomain := fmt.Sprintf("%s.%s", word, cfg.Domain)
				ips := dns.ResolveSubdomain(subdomain, resolvers, cfg.Timeout)

				if len(ips) > 0 {
					// Check if wildcard
					isWildcard := false
					if len(wildcardIPs) > 0 {
						for _, ip := range ips {
							for _, wip := range wildcardIPs {
								if ip == wip {
									isWildcard = true
									break
								}
							}
							if isWildcard {
								break
							}
					// Check if ALL IPs are wildcard IPs
					allWildcard := true
					for _, ip := range ips {
						if !wildcardIPSet[ip] {
							allWildcard = false
							break
						}
					}

					if !isWildcard {
					// Only add if not all IPs are wildcards
					if !allWildcard || len(wildcardIPSet) == 0 {
						seenMu.Lock()
						if !seen[subdomain] {
							seen[subdomain] = true
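The wildcard filter above keeps a brute-forced subdomain unless *every* resolved IP also appears in the wildcard IP set. A minimal standalone sketch of that rule (`allWildcard` is a hypothetical helper name; the scanner inlines this logic):

```go
package main

import "fmt"

// allWildcard reports whether every resolved IP is a known wildcard IP.
// An empty wildcard set means no wildcard was detected, so nothing is filtered.
func allWildcard(ips []string, wildcardIPSet map[string]bool) bool {
	if len(wildcardIPSet) == 0 {
		return false
	}
	for _, ip := range ips {
		if !wildcardIPSet[ip] {
			return false // at least one IP is distinct: keep the subdomain
		}
	}
	return true
}

func main() {
	set := map[string]bool{"1.2.3.4": true}
	fmt.Println(allWildcard([]string{"1.2.3.4"}, set))            // filtered out
	fmt.Println(allWildcard([]string{"1.2.3.4", "5.6.7.8"}, set)) // kept: one real IP
}
```

Keeping hosts with at least one non-wildcard IP avoids discarding real subdomains that happen to share an IP with the catch-all record.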
@@ -267,18 +292,24 @@ func Run(cfg config.Config) {
	if !cfg.Silent && !cfg.JsonOutput {
		output.PrintEndSection()
		output.PrintSection("🌐", "DNS RESOLUTION")
		output.PrintSubSection(fmt.Sprintf("Resolving %s subdomains...", output.BoldCyan(fmt.Sprintf("%d", len(subdomains)))))
	}

	// Create progress bar for DNS resolution
	dnsBar := progress.New(len(subdomains), "DNS", cfg.Silent || cfg.JsonOutput)

	var resolveWg sync.WaitGroup
	semaphore := make(chan struct{}, cfg.Concurrency)
	dnsSemaphore := make(chan struct{}, effectiveConcurrency)

	for _, subdomain := range subdomains {
		resolveWg.Add(1)
		go func(sub string) {
			defer resolveWg.Done()
			semaphore <- struct{}{}
			defer func() { <-semaphore }()
			defer dnsBar.Increment()
			dnsSemaphore <- struct{}{}
			defer func() { <-dnsSemaphore }()

			// Apply stealth delay for DNS
			stealthMgr.Wait()

			ips := dns.ResolveSubdomain(sub, resolvers, cfg.Timeout)
			if len(ips) > 0 {
@@ -332,22 +363,36 @@ func Run(cfg config.Config) {
		}(subdomain)
	}
	resolveWg.Wait()
	dnsBar.Finish()

	// HTTP Probing
	if !cfg.NoProbe && len(results) > 0 {
		if !cfg.Silent && !cfg.JsonOutput {
			output.PrintEndSection()
			output.PrintSection("🌍", "HTTP PROBING & SECURITY CHECKS")
			output.PrintSubSection(fmt.Sprintf("Probing %s subdomains with %s parallel checks...", output.BoldCyan(fmt.Sprintf("%d", len(results))), output.BoldGreen("13")))
		}

		// Create progress bar and rate limiter for HTTP probing
		httpBar := progress.New(len(results), "HTTP", cfg.Silent || cfg.JsonOutput)
		httpLimiter := ratelimit.NewHostRateLimiter(ratelimit.DefaultConfig())
		httpSemaphore := make(chan struct{}, effectiveConcurrency)

		var probeWg sync.WaitGroup
		for sub := range results {
			probeWg.Add(1)
			go func(subdomain string) {
				defer probeWg.Done()
				semaphore <- struct{}{}
				defer func() { <-semaphore }()
				defer httpBar.Increment()
				httpSemaphore <- struct{}{}
				defer func() { <-httpSemaphore }()

				// Apply stealth delay and host-specific throttling
				stealthMgr.Wait()
				stealthMgr.WaitForHost(subdomain)

				// Apply adaptive rate limiting
				limiter := httpLimiter.Get(subdomain)
				limiter.Wait()

				// Use shared client for connection pooling
				client := gohttp.GetSharedClient(cfg.Timeout)
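The DNS and HTTP stages above both bound their goroutines with the same buffered-channel semaphore idiom (`sem <- struct{}{}` to acquire, `<-sem` to release). A minimal standalone sketch of the pattern, with illustrative names (`runBounded` and its parameters are not from the scanner):

```go
package main

import (
	"fmt"
	"sync"
)

// runBounded applies work() to each job with at most `concurrency`
// goroutines holding a semaphore slot at any time.
func runBounded(jobs []int, concurrency int, work func(int) int) []int {
	results := make([]int, len(jobs)) // each goroutine writes a distinct index
	sem := make(chan struct{}, concurrency)
	var wg sync.WaitGroup
	for i, j := range jobs {
		wg.Add(1)
		go func(i, j int) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot (blocks when full)
			defer func() { <-sem }() // release the slot on exit
			results[i] = work(j)
		}(i, j)
	}
	wg.Wait()
	return results
}

func main() {
	squares := runBounded([]int{1, 2, 3, 4}, 2, func(n int) int { return n * n })
	fmt.Println(squares) // [1 4 9 16]
}
```

Note the scanner spawns one goroutine per item and lets the semaphore gate the actual work, which is why the stealth delays and rate-limiter waits sit after the acquire.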
@@ -542,6 +587,16 @@ func Run(cfg config.Config) {
			}(sub)
		}
		probeWg.Wait()
		httpBar.Finish()

		// Log rate limiting stats if verbose
		if cfg.Verbose && !cfg.JsonOutput {
			hosts, requests, errors := httpLimiter.GetStats()
			if errors > 0 {
				output.PrintSubSection(fmt.Sprintf("%s Rate limiting: %d hosts, %d requests, %d errors",
					output.Yellow("⚠️"), hosts, requests, errors))
			}
		}
	}

	// Port Scanning
@@ -549,9 +604,17 @@ func Run(cfg config.Config) {
	if !cfg.Silent && !cfg.JsonOutput {
		output.PrintEndSection()
		output.PrintSection("🔌", "PORT SCANNING")
		output.PrintSubSection(fmt.Sprintf("Scanning %s ports on %s hosts...", output.BoldMagenta(fmt.Sprintf("%d", len(customPorts))), output.BoldCyan(fmt.Sprintf("%d", len(results)))))
	}

	// Count hosts with IPs
	hostCount := 0
	for _, result := range results {
		if len(result.IPs) > 0 {
			hostCount++
		}
	}

	portBar := progress.New(hostCount, "Ports", cfg.Silent || cfg.JsonOutput)
	var portWg sync.WaitGroup

	for sub, result := range results {
@@ -561,6 +624,7 @@ func Run(cfg config.Config) {
		portWg.Add(1)
		go func(subdomain string, ip string) {
			defer portWg.Done()
			defer portBar.Increment()
			openPorts := ScanPorts(ip, customPorts, cfg.Timeout)
			resultsMu.Lock()
			if r, ok := results[subdomain]; ok {
@@ -570,6 +634,7 @@ func Run(cfg config.Config) {
		}(sub, result.IPs[0])
	}
	portWg.Wait()
	portBar.Finish()
	}

	// Subdomain Takeover Check
@@ -578,14 +643,15 @@ func Run(cfg config.Config) {
	if !cfg.Silent && !cfg.JsonOutput {
		output.PrintEndSection()
		output.PrintSection("🎯", "SUBDOMAIN TAKEOVER")
		output.PrintSubSection(fmt.Sprintf("Checking %s fingerprints against %s subdomains...", output.BoldRed("110+"), output.BoldCyan(fmt.Sprintf("%d", len(results)))))
	}

	takeoverBar := progress.New(len(results), "Takeover", cfg.Silent || cfg.JsonOutput)
	var takeoverWg sync.WaitGroup
	for sub := range results {
		takeoverWg.Add(1)
		go func(subdomain string) {
			defer takeoverWg.Done()
			defer takeoverBar.Increment()
			if takeover := CheckTakeover(subdomain, cfg.Timeout); takeover != "" {
				resultsMu.Lock()
				if r, ok := results[subdomain]; ok {
@@ -593,15 +659,13 @@ func Run(cfg config.Config) {
				}
				resultsMu.Unlock()
				atomic.AddInt32(&takeoverCount, 1)
				if !cfg.JsonOutput {
					output.PrintSubSection(fmt.Sprintf("%s %s → %s", output.BgRed(" TAKEOVER "), output.BoldWhite(subdomain), output.BoldRed(takeover)))
				}
			}
		}(sub)
	}
	takeoverWg.Wait()
	takeoverBar.Finish()

	if takeoverCount > 0 && !cfg.JsonOutput {
	if takeoverCount > 0 && !cfg.Silent && !cfg.JsonOutput {
		output.PrintSubSection(fmt.Sprintf("%s Found %s potential takeover(s)!", output.Red("⚠"), output.BoldRed(fmt.Sprintf("%d", takeoverCount))))
	}
	if !cfg.Silent && !cfg.JsonOutput {
@@ -638,45 +702,62 @@ func Run(cfg config.Config) {
	aiSemaphore := make(chan struct{}, 5) // Limit concurrent AI requests

	for sub, result := range results {
		// Only analyze interesting findings
		shouldAnalyze := false
		// Determine what types of analysis to perform
		shouldAnalyzeVulns := false
		shouldAnalyzeCVE := len(result.Tech) > 0 // CVE for ALL subdomains with tech

		// Analyze JS files if found
		if len(result.JSFiles) > 0 || len(result.JSSecrets) > 0 {
			shouldAnalyze = true
			shouldAnalyzeVulns = true
		}

		// Analyze if vulnerabilities detected
		if result.OpenRedirect || result.CORSMisconfig != "" ||
			len(result.DangerousMethods) > 0 || result.GitExposed ||
			result.SvnExposed || len(result.BackupFiles) > 0 {
			shouldAnalyze = true
			shouldAnalyzeVulns = true
		}

		// Analyze takeovers
		if result.Takeover != "" {
			shouldAnalyze = true
			shouldAnalyzeVulns = true
		}

		// Deep analysis mode: analyze everything
		if cfg.AIDeepAnalysis {
			shouldAnalyze = true
			shouldAnalyzeVulns = true
		}

		if !shouldAnalyze {
		// Skip if nothing to analyze
		if !shouldAnalyzeVulns && !shouldAnalyzeCVE {
			continue
		}

		analyzeVulns := shouldAnalyzeVulns // Capture for goroutine
		aiWg.Add(1)
		go func(subdomain string, r *config.SubdomainResult) {
		go func(subdomain string, r *config.SubdomainResult, doVulnAnalysis bool) {
			defer aiWg.Done()
			aiSemaphore <- struct{}{}
			defer func() { <-aiSemaphore }()

			var aiResults []*ai.AnalysisResult

			// Analyze JavaScript if present
			if len(r.JSFiles) > 0 && len(r.JSSecrets) > 0 {
			// Filter JS secrets using AI before analysis
			if len(r.JSSecrets) > 0 {
				if filteredSecrets, err := aiClient.FilterSecrets(r.JSSecrets); err == nil && len(filteredSecrets) > 0 {
					resultsMu.Lock()
					r.JSSecrets = filteredSecrets // Replace with AI-filtered secrets
					resultsMu.Unlock()
				} else if err == nil {
					// No real secrets found after filtering
					resultsMu.Lock()
					r.JSSecrets = nil
					resultsMu.Unlock()
				}
			}

			// Analyze JavaScript if present (only if vuln analysis enabled)
			if doVulnAnalysis && len(r.JSFiles) > 0 && len(r.JSSecrets) > 0 {
				// Build context from secrets
				jsContext := strings.Join(r.JSSecrets, "\n")
				if analysis, err := aiClient.AnalyzeJavaScript(jsContext); err == nil {
@@ -684,15 +765,15 @@ func Run(cfg config.Config) {
				}
			}

			// Analyze HTTP response for misconfigurations
			if r.StatusCode > 0 && (len(r.MissingHeaders) > 3 || r.GitExposed || r.SvnExposed) {
			// Analyze HTTP response for misconfigurations (only if vuln analysis enabled)
			if doVulnAnalysis && r.StatusCode > 0 && (len(r.MissingHeaders) > 3 || r.GitExposed || r.SvnExposed) {
				bodyContext := r.Title
				if analysis, err := aiClient.AnalyzeHTTPResponse(subdomain, r.StatusCode, r.Headers, bodyContext); err == nil {
					aiResults = append(aiResults, analysis)
				}
			}

			// CVE matching for detected technologies
			// CVE matching for detected technologies (always done if tech detected)
			if len(r.Tech) > 0 {
				for _, tech := range r.Tech {
					if cve, err := aiClient.CVEMatch(tech, ""); err == nil && cve != "" {
@@ -748,7 +829,7 @@ func Run(cfg config.Config) {
					output.Dim(fmt.Sprintf("%d findings", len(r.AIFindings)))))
			}
		}
		}(sub, result)
		}(sub, result, analyzeVulns)
	}

	aiWg.Wait()
@@ -795,543 +876,33 @@ func Run(cfg config.Config) {
		results = filtered
	}

	// Sort subdomains
	var sortedSubs []string
	for sub := range results {
		sortedSubs = append(sortedSubs, sub)
	}
	sort.Strings(sortedSubs)

	// JSON output to stdout
	// JSON output to stdout (structured report format)
	if cfg.JsonOutput {
		var resultList []*config.SubdomainResult
		for _, sub := range sortedSubs {
			resultList = append(resultList, results[sub])
		// Build structured JSON report with metadata
		reportBuilder := output.NewReportBuilder(cfg.Domain, cfg)

		// Set wildcard info if available
		if wildcardInfo != nil {
			reportBuilder.SetWildcard(
				wildcardInfo.IsWildcard,
				wildcardInfo.WildcardIPs,
				wildcardInfo.WildcardCNAME,
				wildcardInfo.HTTPStatusCode,
				wildcardInfo.Confidence,
			)
		}
		encoder := json.NewEncoder(os.Stdout)
		encoder.SetIndent("", " ")
		encoder.Encode(resultList)

		// Finalize and output the report
		reportBuilder.Finalize(results)
		reportBuilder.WriteJSON(os.Stdout, true)
		return
	}

	// Print results
	elapsed := time.Since(startTime)

	// Count statistics
	var activeCount, vulnCount, cloudCount int
	for _, r := range results {
		if r.StatusCode >= 200 && r.StatusCode < 400 {
			activeCount++
		}
		if r.OpenRedirect || r.CORSMisconfig != "" || len(r.DangerousMethods) > 0 || r.GitExposed || r.SvnExposed || len(r.BackupFiles) > 0 {
			vulnCount++
		}
		if r.CloudProvider != "" {
			cloudCount++
		}
	}

	// Summary box
	fmt.Println()
	fmt.Println(output.BoldCyan("╔══════════════════════════════════════════════════════════════════════════════╗"))
	fmt.Println(output.BoldCyan("║") + " " + output.BoldWhite("📊 SCAN SUMMARY") + " " + output.BoldCyan("║"))
	fmt.Println(output.BoldCyan("╠══════════════════════════════════════════════════════════════════════════════╣"))
	fmt.Printf("%s %-20s %s %-20s %s %-20s %s\n",
		output.BoldCyan("║"),
		fmt.Sprintf("🌐 Total: %s", output.BoldCyan(fmt.Sprintf("%d", len(results)))),
		output.Dim("|"),
		fmt.Sprintf("✅ Active: %s", output.BoldGreen(fmt.Sprintf("%d", activeCount))),
		output.Dim("|"),
		fmt.Sprintf("⏱️ Time: %s", output.BoldYellow(fmt.Sprintf("%.1fs", elapsed.Seconds()))),
		output.BoldCyan("║"))
	fmt.Printf("%s %-20s %s %-20s %s %-20s %s\n",
		output.BoldCyan("║"),
		fmt.Sprintf("⚠️ Vulns: %s", output.BoldRed(fmt.Sprintf("%d", vulnCount))),
		output.Dim("|"),
		fmt.Sprintf("☁️ Cloud: %s", output.Blue(fmt.Sprintf("%d", cloudCount))),
		output.Dim("|"),
		fmt.Sprintf("🎯 Takeover: %s", output.BoldRed(fmt.Sprintf("%d", takeoverCount))),
		output.BoldCyan("║"))
	fmt.Println(output.BoldCyan("╚══════════════════════════════════════════════════════════════════════════════╝"))
	fmt.Println()
	fmt.Println(output.BoldCyan("═══════════════════════════════════════════════════════════════════════════════"))

	for _, sub := range sortedSubs {
		r := results[sub]

		// Color code by status
		var statusColor func(a ...interface{}) string
		var statusIcon string
		if r.StatusCode >= 200 && r.StatusCode < 300 {
			statusColor = output.Green
			statusIcon = "●"
		} else if r.StatusCode >= 300 && r.StatusCode < 400 {
			statusColor = output.Yellow
			statusIcon = "◐"
		} else if r.StatusCode >= 400 {
			statusColor = output.Red
			statusIcon = "○"
		} else {
			statusColor = output.Blue
			statusIcon = "◌"
		}

		// Line 1: Subdomain name with status (modern box style)
		statusBadge := ""
		if r.StatusCode > 0 {
			statusBadge = fmt.Sprintf(" %s", statusColor(fmt.Sprintf("[%d]", r.StatusCode)))
		}

		// Response time badge
		timeBadge := ""
		if r.ResponseMs > 0 {
			if r.ResponseMs < 200 {
				timeBadge = fmt.Sprintf(" %s", output.Green(fmt.Sprintf("⚡%dms", r.ResponseMs)))
			} else if r.ResponseMs < 500 {
				timeBadge = fmt.Sprintf(" %s", output.Yellow(fmt.Sprintf("⏱️%dms", r.ResponseMs)))
			} else {
				timeBadge = fmt.Sprintf(" %s", output.Red(fmt.Sprintf("🐢%dms", r.ResponseMs)))
			}
		}

		fmt.Printf("\n%s %s%s%s\n", statusColor(statusIcon), output.BoldCyan(sub), statusBadge, timeBadge)

		// Line 2: IPs
		if len(r.IPs) > 0 {
			ips := r.IPs
			if len(ips) > 3 {
				ips = ips[:3]
			}
			fmt.Printf(" %s %s\n", output.Dim("IP:"), output.White(strings.Join(ips, ", ")))
		}

		// Line 3: CNAME
		if r.CNAME != "" {
			fmt.Printf(" %s %s\n", output.Dim("CNAME:"), output.Blue(r.CNAME))
		}

		// Line 4: Location + ASN
		if r.Country != "" || r.City != "" || r.ASN != "" {
			loc := ""
			if r.City != "" && r.Country != "" {
				loc = r.City + ", " + r.Country
			} else if r.Country != "" {
				loc = r.Country
			} else if r.City != "" {
				loc = r.City
			}

			asnStr := ""
			if r.ASN != "" {
				asnStr = r.ASN
				if len(asnStr) > 40 {
					asnStr = asnStr[:37] + "..."
				}
			}

			if loc != "" && asnStr != "" {
				fmt.Printf(" Location: %s | %s\n", output.Cyan(loc), output.Blue(asnStr))
			} else if loc != "" {
				fmt.Printf(" Location: %s\n", output.Cyan(loc))
			} else if asnStr != "" {
				fmt.Printf(" ASN: %s\n", output.Blue(asnStr))
			}
		}

		// Line 5: PTR
		if r.PTR != "" {
			fmt.Printf(" PTR: %s\n", output.Magenta(r.PTR))
		}

		// Line 6: HTTP Info (Title, Size)
		if r.Title != "" || r.ContentLength > 0 {
			httpInfo := " HTTP: "
			if r.Title != "" {
				title := r.Title
				if len(title) > 50 {
					title = title[:47] + "..."
				}
				httpInfo += fmt.Sprintf("\"%s\"", title)
			}
			if r.ContentLength > 0 {
				sizeStr := ""
				if r.ContentLength > 1024*1024 {
					sizeStr = fmt.Sprintf("%.1fMB", float64(r.ContentLength)/(1024*1024))
				} else if r.ContentLength > 1024 {
					sizeStr = fmt.Sprintf("%.1fKB", float64(r.ContentLength)/1024)
				} else {
					sizeStr = fmt.Sprintf("%dB", r.ContentLength)
				}
				if r.Title != "" {
					httpInfo += fmt.Sprintf(" (%s)", sizeStr)
				} else {
					httpInfo += sizeStr
				}
			}
			fmt.Println(httpInfo)
		}

		// Line 7: Redirect
		if r.RedirectURL != "" {
			redirectURL := r.RedirectURL
			if len(redirectURL) > 60 {
				redirectURL = redirectURL[:57] + "..."
			}
			fmt.Printf(" Redirect: %s\n", output.Yellow(redirectURL))
		}

		// Line 8: Tech + Server
		if len(r.Tech) > 0 || r.Server != "" {
			techMap := make(map[string]bool)
			var uniqueTech []string
			for _, t := range r.Tech {
				if !techMap[t] {
					techMap[t] = true
					uniqueTech = append(uniqueTech, t)
				}
			}
			if len(uniqueTech) > 5 {
				uniqueTech = uniqueTech[:5]
			}
			if len(uniqueTech) > 0 {
				fmt.Printf(" Tech: %s\n", output.Yellow(strings.Join(uniqueTech, ", ")))
			}
		}

		// Line 9: Security (WAF, TLS)
		var securityInfo []string
		if r.WAF != "" {
			securityInfo = append(securityInfo, fmt.Sprintf("WAF: %s", output.Red(r.WAF)))
		}
		if r.TLSVersion != "" {
			securityInfo = append(securityInfo, fmt.Sprintf("TLS: %s", output.Cyan(r.TLSVersion)))
		}
		if len(securityInfo) > 0 {
			fmt.Printf(" Security: %s\n", strings.Join(securityInfo, " | "))
		}

		// Line 10: Ports
		if len(r.Ports) > 0 {
			var portStrs []string
			for _, p := range r.Ports {
				portStrs = append(portStrs, fmt.Sprintf("%d", p))
			}
			fmt.Printf(" Ports: %s\n", output.Magenta(strings.Join(portStrs, ", ")))
		}

		// Line 11: Extra files
		var extras []string
		if r.RobotsTxt {
			extras = append(extras, "robots.txt")
		}
		if r.SitemapXml {
			extras = append(extras, "sitemap.xml")
		}
		if r.FaviconHash != "" {
			extras = append(extras, fmt.Sprintf("favicon:%s", r.FaviconHash[:8]))
		}
		if len(extras) > 0 {
			fmt.Printf(" Files: %s\n", output.Green(strings.Join(extras, ", ")))
		}

		// Line 12: DNS Records
		if len(r.MXRecords) > 0 {
			mx := r.MXRecords
			if len(mx) > 2 {
				mx = mx[:2]
			}
			fmt.Printf(" MX: %s\n", strings.Join(mx, ", "))
		}

		// Line 13: Security Headers
		if len(r.MissingHeaders) > 0 && len(r.MissingHeaders) < 7 {
			// Only show if some headers are present (not all missing)
			if len(r.SecurityHeaders) > 0 {
				fmt.Printf(" Headers: %s | Missing: %s\n",
					output.Green(strings.Join(r.SecurityHeaders, ", ")),
					output.Yellow(strings.Join(r.MissingHeaders, ", ")))
			}
		} else if len(r.SecurityHeaders) > 0 {
			fmt.Printf(" Headers: %s\n", output.Green(strings.Join(r.SecurityHeaders, ", ")))
		}

		// Line 14: Cloud Provider
		if r.CloudProvider != "" {
			fmt.Printf(" Cloud: %s\n", output.Cyan(r.CloudProvider))
		}

		// Line 15: Email Security (only for root domain)
		if r.EmailSecurity != "" {
			emailColor := output.Green
			if r.EmailSecurity == "Weak" {
				emailColor = output.Yellow
			} else if r.EmailSecurity == "None" {
				emailColor = output.Red
			}
			fmt.Printf(" Email: %s\n", emailColor(r.EmailSecurity))
		}

		// Line 16: TLS Alt Names
		if len(r.TLSAltNames) > 0 {
			altNames := r.TLSAltNames
			if len(altNames) > 5 {
				altNames = altNames[:5]
			}
			fmt.Printf(" TLS Alt: %s\n", output.Blue(strings.Join(altNames, ", ")))
		}

		// Line 17: S3 Buckets
		if len(r.S3Buckets) > 0 {
			for _, bucket := range r.S3Buckets {
				if strings.Contains(bucket, "PUBLIC") {
					fmt.Printf(" %s %s\n", output.Red("S3:"), output.Red(bucket))
				} else {
					fmt.Printf(" S3: %s\n", output.Yellow(bucket))
				}
			}
		}

		// Line 18: Security Issues (vulnerabilities found)
		var vulns []string
		if r.OpenRedirect {
			vulns = append(vulns, "Open Redirect")
		}
		if r.CORSMisconfig != "" {
			vulns = append(vulns, fmt.Sprintf("CORS: %s", r.CORSMisconfig))
		}
		if len(r.DangerousMethods) > 0 {
			vulns = append(vulns, fmt.Sprintf("Methods: %s", strings.Join(r.DangerousMethods, ", ")))
		}
		if r.GitExposed {
			vulns = append(vulns, ".git Exposed")
		}
		if r.SvnExposed {
			vulns = append(vulns, ".svn Exposed")
		}
		if len(r.BackupFiles) > 0 {
			files := r.BackupFiles
			if len(files) > 3 {
				files = files[:3]
			}
			vulns = append(vulns, fmt.Sprintf("Backup: %s", strings.Join(files, ", ")))
		}
		if len(vulns) > 0 {
			fmt.Printf(" %s %s\n", output.Red("VULNS:"), output.Red(strings.Join(vulns, " | ")))
		}

		// Line 19: Discovery (admin panels, API endpoints)
		var discoveries []string
||||
if len(r.AdminPanels) > 0 {
|
||||
panels := r.AdminPanels
|
||||
if len(panels) > 5 {
|
||||
panels = panels[:5]
|
||||
}
|
||||
discoveries = append(discoveries, fmt.Sprintf("Admin: %s", strings.Join(panels, ", ")))
|
||||
}
|
||||
if len(r.APIEndpoints) > 0 {
|
||||
endpoints := r.APIEndpoints
|
||||
if len(endpoints) > 5 {
|
||||
endpoints = endpoints[:5]
|
||||
}
|
||||
discoveries = append(discoveries, fmt.Sprintf("API: %s", strings.Join(endpoints, ", ")))
|
||||
}
|
||||
if len(discoveries) > 0 {
|
||||
fmt.Printf(" %s %s\n", output.Magenta("FOUND:"), output.Magenta(strings.Join(discoveries, " | ")))
|
||||
}
|
||||
|
||||
// Line 20: JavaScript Analysis
|
||||
if len(r.JSFiles) > 0 {
|
||||
files := r.JSFiles
|
||||
if len(files) > 3 {
|
||||
files = files[:3]
|
||||
}
|
||||
fmt.Printf(" JS Files: %s\n", output.Blue(strings.Join(files, ", ")))
|
||||
}
|
||||
if len(r.JSSecrets) > 0 {
|
||||
for _, secret := range r.JSSecrets {
|
||||
fmt.Printf(" %s %s\n", output.Red("JS SECRET:"), output.Red(secret))
|
||||
}
|
||||
}
|
||||
|
||||
// Line 21: Takeover
|
||||
if r.Takeover != "" {
|
||||
fmt.Printf(" %s %s\n", output.BgRed(" TAKEOVER "), output.BoldRed(r.Takeover))
|
||||
}
|
||||
|
||||
// Line 22: AI Findings
|
||||
if len(r.AIFindings) > 0 {
|
||||
severityColor := output.Cyan
|
||||
severityLabel := "AI"
|
||||
if r.AISeverity == "critical" {
|
||||
severityColor = output.BoldRed
|
||||
severityLabel = "AI:CRITICAL"
|
||||
} else if r.AISeverity == "high" {
|
||||
severityColor = output.Red
|
||||
severityLabel = "AI:HIGH"
|
||||
} else if r.AISeverity == "medium" {
|
||||
severityColor = output.Yellow
|
||||
severityLabel = "AI:MEDIUM"
|
||||
}
|
||||
|
||||
for i, finding := range r.AIFindings {
|
||||
if i == 0 {
|
||||
fmt.Printf(" %s %s\n", severityColor(severityLabel+":"), finding)
|
||||
} else {
|
||||
fmt.Printf(" %s %s\n", output.Dim(" "), finding)
|
||||
}
|
||||
if i >= 4 { // Limit displayed findings
|
||||
remaining := len(r.AIFindings) - 5
|
||||
if remaining > 0 {
|
||||
fmt.Printf(" %s (%d more findings...)\n", output.Dim(" "), remaining)
|
||||
}
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
// Show model used
|
||||
if r.AIModel != "" {
|
||||
fmt.Printf(" %s model: %s\n", output.Dim(" "), output.Dim(r.AIModel))
|
||||
}
|
||||
}
|
||||
|
||||
// Line 23: CVE Findings
|
||||
if len(r.CVEFindings) > 0 {
|
||||
for _, cve := range r.CVEFindings {
|
||||
fmt.Printf(" %s %s\n", output.BoldRed("CVE:"), output.Red(cve))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Println()
|
||||
fmt.Println(output.BoldCyan("═══════════════════════════════════════════════════════════════════════════════"))
|
||||
// Print results using output module
|
||||
PrintResults(results, startTime, takeoverCount)
|
||||
|
||||
// Save output
|
||||
if cfg.Output != "" {
|
||||
output.SaveOutput(cfg.Output, cfg.Format, results)
|
||||
}
|
||||
}
|
||||
|
||||
// LoadWordlist loads words from a file
|
||||
func LoadWordlist(path string) ([]string, error) {
|
||||
file, err := os.Open(path)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer file.Close()
|
||||
|
||||
var words []string
|
||||
scanner := bufio.NewScanner(file)
|
||||
for scanner.Scan() {
|
||||
word := strings.TrimSpace(scanner.Text())
|
||||
if word != "" && !strings.HasPrefix(word, "#") {
|
||||
words = append(words, word)
|
||||
}
|
||||
}
|
||||
return words, scanner.Err()
|
||||
}
|
||||
|
||||
// ScanPorts scans ports on an IP address
|
||||
func ScanPorts(ip string, ports []int, timeout int) []int {
|
||||
var openPorts []int
|
||||
var mu sync.Mutex
|
||||
var wg sync.WaitGroup
|
||||
|
||||
for _, port := range ports {
|
||||
wg.Add(1)
|
||||
go func(p int) {
|
||||
defer wg.Done()
|
||||
address := fmt.Sprintf("%s:%d", ip, p)
|
||||
conn, err := net.DialTimeout("tcp", address, time.Duration(timeout)*time.Second)
|
||||
if err == nil {
|
||||
conn.Close()
|
||||
mu.Lock()
|
||||
openPorts = append(openPorts, p)
|
||||
mu.Unlock()
|
||||
}
|
||||
}(port)
|
||||
}
|
||||
|
||||
wg.Wait()
|
||||
sort.Ints(openPorts)
|
||||
return openPorts
|
||||
}
|
||||
|
||||
// Helper functions for AI analysis
|
||||
|
||||
func countSubdomainsWithAI(results map[string]*config.SubdomainResult) int {
|
||||
count := 0
|
||||
for _, r := range results {
|
||||
if len(r.AIFindings) > 0 {
|
||||
count++
|
||||
}
|
||||
}
|
||||
return count
|
||||
}
|
||||
|
||||
func countActive(results map[string]*config.SubdomainResult) int {
|
||||
count := 0
|
||||
for _, r := range results {
|
||||
if r.StatusCode >= 200 && r.StatusCode < 400 {
|
||||
count++
|
||||
}
|
||||
}
|
||||
return count
|
||||
}
|
||||
|
||||
func countVulns(results map[string]*config.SubdomainResult) int {
|
||||
count := 0
|
||||
for _, r := range results {
|
||||
if r.OpenRedirect || r.CORSMisconfig != "" || len(r.DangerousMethods) > 0 ||
|
||||
r.GitExposed || r.SvnExposed || len(r.BackupFiles) > 0 {
|
||||
count++
|
||||
}
|
||||
}
|
||||
return count
|
||||
}
|
||||
|
||||
func buildAISummary(results map[string]*config.SubdomainResult) string {
|
||||
var summary strings.Builder
|
||||
|
||||
criticalCount := 0
|
||||
highCount := 0
|
||||
mediumCount := 0
|
||||
|
||||
for sub, r := range results {
|
||||
if len(r.AIFindings) == 0 {
|
||||
continue
|
||||
}
|
||||
|
||||
switch r.AISeverity {
|
||||
case "critical":
|
||||
criticalCount++
|
||||
summary.WriteString(fmt.Sprintf("\n[CRITICAL] %s:\n", sub))
|
||||
case "high":
|
||||
highCount++
|
||||
summary.WriteString(fmt.Sprintf("\n[HIGH] %s:\n", sub))
|
||||
case "medium":
|
||||
mediumCount++
|
||||
summary.WriteString(fmt.Sprintf("\n[MEDIUM] %s:\n", sub))
|
||||
default:
|
||||
continue // Skip low/info for summary
|
||||
}
|
||||
|
||||
// Add first 3 findings
|
||||
for i, finding := range r.AIFindings {
|
||||
if i >= 3 {
|
||||
break
|
||||
}
|
||||
summary.WriteString(fmt.Sprintf(" - %s\n", finding))
|
||||
}
|
||||
|
||||
// Add CVE findings
|
||||
if len(r.CVEFindings) > 0 {
|
||||
summary.WriteString(" CVEs:\n")
|
||||
for _, cve := range r.CVEFindings {
|
||||
summary.WriteString(fmt.Sprintf(" - %s\n", cve))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
header := fmt.Sprintf("Summary: %d critical, %d high, %d medium findings\n", criticalCount, highCount, mediumCount)
|
||||
return header + summary.String()
|
||||
}
|
||||
|
||||
@@ -20,13 +20,12 @@ func CheckAdminPanels(subdomain string, timeout int) []string {
 		},
 	}
 
-	// Common admin panel paths
+	// Generic admin paths (common across all platforms)
+	// Note: Removed platform-specific paths like /wp-admin, /admin.php, /phpmyadmin
+	// These generate false positives on non-PHP/WordPress sites
 	paths := []string{
-		"/admin", "/administrator", "/admin.php", "/admin.html",
-		"/login", "/login.php", "/signin", "/auth",
-		"/wp-admin", "/wp-login.php",
-		"/phpmyadmin", "/pma", "/mysql",
-		"/cpanel", "/webmail",
+		"/admin", "/administrator",
+		"/login", "/signin", "/auth",
+		"/manager", "/console", "/dashboard",
+		"/admin/login", "/user/login",
 	}
@@ -200,15 +199,16 @@ func CheckAPIEndpoints(subdomain string, timeout int) []string {
 // WithClient versions for parallel execution
 
 func CheckAdminPanelsWithClient(subdomain string, client *http.Client) []string {
+	// Generic admin paths (common across all platforms)
 	paths := []string{
-		"/admin", "/administrator", "/admin.php", "/admin.html",
-		"/login", "/login.php", "/signin", "/auth",
-		"/wp-admin", "/wp-login.php",
-		"/phpmyadmin", "/pma", "/mysql",
-		"/cpanel", "/webmail",
+		"/admin", "/administrator",
+		"/login", "/signin", "/auth",
+		"/manager", "/console", "/dashboard",
+		"/admin/login", "/user/login",
 	}
+	// Note: We removed platform-specific paths like /wp-admin, /admin.php, /login.php
+	// These generate false positives on non-PHP/WordPress sites
+	// The tech detection should be used to check platform-specific paths
 
 	var found []string
 	baseURLs := []string{
internal/sources/base.go (new file, 85 lines)
@@ -0,0 +1,85 @@
package sources

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"regexp"
	"strings"
	"time"
)

// sharedClient is reused across all source fetches
var sharedClient = &http.Client{
	Timeout: 30 * time.Second,
	Transport: &http.Transport{
		MaxIdleConns:        100,
		MaxIdleConnsPerHost: 10,
		IdleConnTimeout:     30 * time.Second,
	},
}

// regexFetch performs a fetch and extracts subdomains using regex.
// This reduces code duplication across many sources.
func regexFetch(url string, domain string, timeout time.Duration) ([]string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
	if err != nil {
		return []string{}, nil
	}
	req.Header.Set("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 Chrome/120.0.0.0 Safari/537.36")

	resp, err := sharedClient.Do(req)
	if err != nil {
		return []string{}, nil
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return []string{}, nil
	}

	return extractSubdomains(string(body), domain), nil
}

// extractSubdomains extracts subdomains from text using regex
func extractSubdomains(text, domain string) []string {
	// Compile regex once per call (could cache, but the domain changes)
	pattern := fmt.Sprintf(`(?i)([a-z0-9][a-z0-9._-]*\.%s)`, regexp.QuoteMeta(domain))
	re := regexp.MustCompile(pattern)
	matches := re.FindAllStringSubmatch(text, -1)

	seen := make(map[string]bool)
	var subs []string
	for _, match := range matches {
		if len(match) > 1 {
			name := strings.ToLower(match[1])
			// Filter out invalid patterns
			if !seen[name] && !strings.HasPrefix(name, ".") && strings.HasSuffix(name, domain) {
				seen[name] = true
				subs = append(subs, name)
			}
		}
	}

	return subs
}

// dedupeAndFilter filters subdomains and ensures they belong to the target domain
func dedupeAndFilter(subs []string, domain string) []string {
	seen := make(map[string]bool)
	var result []string
	for _, sub := range subs {
		sub = strings.ToLower(strings.TrimSpace(sub))
		sub = strings.TrimPrefix(sub, "*.")
		if sub != "" && !seen[sub] && strings.HasSuffix(sub, domain) {
			seen[sub] = true
			result = append(result, sub)
		}
	}
	return result
}
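The base source's extraction relies on a single case-insensitive pattern anchored to the target domain via `regexp.QuoteMeta`, plus a seen-map for deduplication. A hedged, standalone sketch of that approach (the `extract` helper name is illustrative, not from the repo):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// extract matches anything that looks like "<label>.<domain>",
// lowercases it, and dedupes while preserving first-seen order.
func extract(text, domain string) []string {
	// QuoteMeta escapes the dots in the domain so they match literally.
	pattern := fmt.Sprintf(`(?i)([a-z0-9][a-z0-9._-]*\.%s)`, regexp.QuoteMeta(domain))
	re := regexp.MustCompile(pattern)
	seen := make(map[string]bool)
	var subs []string
	for _, m := range re.FindAllStringSubmatch(text, -1) {
		name := strings.ToLower(m[1])
		if !seen[name] {
			seen[name] = true
			subs = append(subs, name)
		}
	}
	return subs
}

func main() {
	body := `{"hosts":["API.example.com","www.example.com","api.example.com"]}`
	fmt.Println(extract(body, "example.com"))
	// → [api.example.com www.example.com]
}
```

The `(?i)` flag plus the lowercase normalization means mixed-case hits from certificate-transparency-style sources collapse into one entry.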
internal/stealth/stealth.go (new file, 460 lines)
@@ -0,0 +1,460 @@
package stealth

import (
	"crypto/rand"
	"math/big"
	"net/http"
	"sync"
	"time"
)

// Mode defines the stealth level
type Mode int

const (
	ModeOff        Mode = iota // No stealth - maximum speed
	ModeLight                  // Light stealth - reduced concurrency, basic delays
	ModeModerate               // Moderate - random delays, UA rotation, throttling
	ModeAggressive             // Aggressive - slow, distributed, evasive
	ModeParanoid               // Paranoid - ultra slow, maximum evasion
)

// Config holds stealth configuration
type Config struct {
	Mode            Mode
	MinDelay        time.Duration // Minimum delay between requests
	MaxDelay        time.Duration // Maximum delay (for randomization)
	MaxReqPerSecond float64       // Rate limit per second
	MaxReqPerHost   int           // Max concurrent requests per host
	RotateUA        bool          // Rotate User-Agent
	RandomizeOrder  bool          // Randomize request order
	JitterPercent   int           // Jitter percentage (0-100)
	DNSSpread       bool          // Spread DNS queries across resolvers
}

// Manager handles stealth operations
type Manager struct {
	cfg           Config
	userAgents    []string
	uaIndex       int
	uaMutex       sync.Mutex
	hostLimiters  map[string]*rateLimiter
	hostMutex     sync.RWMutex
	globalLimiter *rateLimiter
}

// rateLimiter implements token bucket rate limiting
type rateLimiter struct {
	tokens     float64
	maxTokens  float64
	refillRate float64 // tokens per second
	lastRefill time.Time
	mu         sync.Mutex
}

// NewManager creates a new stealth manager
func NewManager(mode Mode) *Manager {
	cfg := GetPreset(mode)
	return NewManagerWithConfig(cfg)
}

// NewManagerWithConfig creates a manager with custom config
func NewManagerWithConfig(cfg Config) *Manager {
	m := &Manager{
		cfg:          cfg,
		userAgents:   getUserAgents(),
		hostLimiters: make(map[string]*rateLimiter),
	}

	if cfg.MaxReqPerSecond > 0 {
		m.globalLimiter = newRateLimiter(cfg.MaxReqPerSecond, cfg.MaxReqPerSecond)
	}

	return m
}

// GetPreset returns configuration for a stealth mode
func GetPreset(mode Mode) Config {
	switch mode {
	case ModeLight:
		return Config{
			Mode:            ModeLight,
			MinDelay:        10 * time.Millisecond,
			MaxDelay:        50 * time.Millisecond,
			MaxReqPerSecond: 100,
			MaxReqPerHost:   20,
			RotateUA:        true,
			RandomizeOrder:  false,
			JitterPercent:   10,
			DNSSpread:       false,
		}
	case ModeModerate:
		return Config{
			Mode:            ModeModerate,
			MinDelay:        50 * time.Millisecond,
			MaxDelay:        200 * time.Millisecond,
			MaxReqPerSecond: 30,
			MaxReqPerHost:   5,
			RotateUA:        true,
			RandomizeOrder:  true,
			JitterPercent:   30,
			DNSSpread:       true,
		}
	case ModeAggressive:
		return Config{
			Mode:            ModeAggressive,
			MinDelay:        200 * time.Millisecond,
			MaxDelay:        1 * time.Second,
			MaxReqPerSecond: 10,
			MaxReqPerHost:   2,
			RotateUA:        true,
			RandomizeOrder:  true,
			JitterPercent:   50,
			DNSSpread:       true,
		}
	case ModeParanoid:
		return Config{
			Mode:            ModeParanoid,
			MinDelay:        1 * time.Second,
			MaxDelay:        5 * time.Second,
			MaxReqPerSecond: 2,
			MaxReqPerHost:   1,
			RotateUA:        true,
			RandomizeOrder:  true,
			JitterPercent:   70,
			DNSSpread:       true,
		}
	default: // ModeOff
		return Config{
			Mode:            ModeOff,
			MinDelay:        0,
			MaxDelay:        0,
			MaxReqPerSecond: 0, // unlimited
			MaxReqPerHost:   0, // unlimited
			RotateUA:        false,
			RandomizeOrder:  false,
			JitterPercent:   0,
			DNSSpread:       false,
		}
	}
}

// Wait applies stealth delay before a request
func (m *Manager) Wait() {
	if m.cfg.Mode == ModeOff {
		return
	}

	// Apply rate limiting
	if m.globalLimiter != nil {
		m.globalLimiter.wait()
	}

	// Apply random delay
	if m.cfg.MaxDelay > 0 {
		delay := m.randomDelay()
		time.Sleep(delay)
	}
}

// WaitForHost applies per-host rate limiting
func (m *Manager) WaitForHost(host string) {
	if m.cfg.Mode == ModeOff || m.cfg.MaxReqPerHost <= 0 {
		return
	}

	m.hostMutex.Lock()
	limiter, exists := m.hostLimiters[host]
	if !exists {
		limiter = newRateLimiter(float64(m.cfg.MaxReqPerHost), float64(m.cfg.MaxReqPerHost))
		m.hostLimiters[host] = limiter
	}
	m.hostMutex.Unlock()

	limiter.wait()
}

// GetUserAgent returns a User-Agent string (rotated if enabled)
func (m *Manager) GetUserAgent() string {
	if !m.cfg.RotateUA || len(m.userAgents) == 0 {
		return "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
	}

	m.uaMutex.Lock()
	defer m.uaMutex.Unlock()

	ua := m.userAgents[m.uaIndex]
	m.uaIndex = (m.uaIndex + 1) % len(m.userAgents)
	return ua
}

// GetRandomUserAgent returns a random User-Agent
func (m *Manager) GetRandomUserAgent() string {
	if len(m.userAgents) == 0 {
		return m.GetUserAgent()
	}
	idx := secureRandomInt(len(m.userAgents))
	return m.userAgents[idx]
}

// ApplyToRequest applies stealth settings to an HTTP request
func (m *Manager) ApplyToRequest(req *http.Request) {
	// Set User-Agent
	req.Header.Set("User-Agent", m.GetUserAgent())

	// Add realistic browser headers
	if m.cfg.Mode >= ModeModerate {
		req.Header.Set("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8")
		req.Header.Set("Accept-Language", m.getRandomAcceptLanguage())
		req.Header.Set("Accept-Encoding", "gzip, deflate, br")
		req.Header.Set("Connection", "keep-alive")
		req.Header.Set("Upgrade-Insecure-Requests", "1")
		req.Header.Set("Sec-Fetch-Dest", "document")
		req.Header.Set("Sec-Fetch-Mode", "navigate")
		req.Header.Set("Sec-Fetch-Site", "none")
		req.Header.Set("Sec-Fetch-User", "?1")
		req.Header.Set("Cache-Control", "max-age=0")
	}
}

// SelectResolver picks a DNS resolver (distributed if enabled)
func (m *Manager) SelectResolver(resolvers []string, index int) string {
	if len(resolvers) == 0 {
		return "8.8.8.8:53"
	}

	if m.cfg.DNSSpread {
		// Random selection for distribution
		return resolvers[secureRandomInt(len(resolvers))]
	}

	// Sequential selection
	return resolvers[index%len(resolvers)]
}

// ShuffleSlice randomizes slice order if enabled
func (m *Manager) ShuffleSlice(items []string) []string {
	if !m.cfg.RandomizeOrder || len(items) <= 1 {
		return items
	}

	// Fisher-Yates shuffle
	shuffled := make([]string, len(items))
	copy(shuffled, items)

	for i := len(shuffled) - 1; i > 0; i-- {
		j := secureRandomInt(i + 1)
		shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
	}

	return shuffled
}

// GetEffectiveConcurrency returns adjusted concurrency for stealth mode
func (m *Manager) GetEffectiveConcurrency(requested int) int {
	switch m.cfg.Mode {
	case ModeLight:
		return min(requested, 100)
	case ModeModerate:
		return min(requested, 30)
	case ModeAggressive:
		return min(requested, 10)
	case ModeParanoid:
		return min(requested, 3)
	default:
		return requested
	}
}

// GetConfig returns current stealth configuration
func (m *Manager) GetConfig() Config {
	return m.cfg
}

// GetModeName returns the name of the stealth mode
func (m *Manager) GetModeName() string {
	return ModeName(m.cfg.Mode)
}

// ModeName returns human-readable mode name
func ModeName(mode Mode) string {
	switch mode {
	case ModeLight:
		return "light"
	case ModeModerate:
		return "moderate"
	case ModeAggressive:
		return "aggressive"
	case ModeParanoid:
		return "paranoid"
	default:
		return "off"
	}
}

// ParseMode converts string to Mode
func ParseMode(s string) Mode {
	switch s {
	case "light", "1":
		return ModeLight
	case "moderate", "medium", "2":
		return ModeModerate
	case "aggressive", "3":
		return ModeAggressive
	case "paranoid", "4":
		return ModeParanoid
	default:
		return ModeOff
	}
}

// randomDelay returns a random delay with jitter
func (m *Manager) randomDelay() time.Duration {
	if m.cfg.MaxDelay <= m.cfg.MinDelay {
		return m.cfg.MinDelay
	}

	// Calculate range
	rangeNs := int64(m.cfg.MaxDelay - m.cfg.MinDelay)
	randomNs := secureRandomInt64(rangeNs)
	delay := m.cfg.MinDelay + time.Duration(randomNs)

	// Apply jitter
	if m.cfg.JitterPercent > 0 {
		jitterRange := int64(delay) * int64(m.cfg.JitterPercent) / 100
		jitter := secureRandomInt64(jitterRange*2) - jitterRange
		delay = time.Duration(int64(delay) + jitter)
		if delay < 0 {
			delay = m.cfg.MinDelay
		}
	}

	return delay
}

func (m *Manager) getRandomAcceptLanguage() string {
	languages := []string{
		"en-US,en;q=0.9",
		"en-GB,en;q=0.9",
		"en-US,en;q=0.9,es;q=0.8",
		"de-DE,de;q=0.9,en;q=0.8",
		"fr-FR,fr;q=0.9,en;q=0.8",
		"es-ES,es;q=0.9,en;q=0.8",
		"it-IT,it;q=0.9,en;q=0.8",
		"pt-BR,pt;q=0.9,en;q=0.8",
		"nl-NL,nl;q=0.9,en;q=0.8",
		"ja-JP,ja;q=0.9,en;q=0.8",
	}
	return languages[secureRandomInt(len(languages))]
}

// Rate limiter implementation

func newRateLimiter(maxTokens, refillRate float64) *rateLimiter {
	return &rateLimiter{
		tokens:     maxTokens,
		maxTokens:  maxTokens,
		refillRate: refillRate,
		lastRefill: time.Now(),
	}
}

func (rl *rateLimiter) wait() {
	rl.mu.Lock()
	defer rl.mu.Unlock()

	// Refill tokens based on elapsed time
	now := time.Now()
	elapsed := now.Sub(rl.lastRefill).Seconds()
	rl.tokens += elapsed * rl.refillRate
	if rl.tokens > rl.maxTokens {
		rl.tokens = rl.maxTokens
	}
	rl.lastRefill = now

	// Wait if no tokens available
	if rl.tokens < 1 {
		waitTime := time.Duration((1 - rl.tokens) / rl.refillRate * float64(time.Second))
		rl.mu.Unlock()
		time.Sleep(waitTime)
		rl.mu.Lock()
		rl.tokens = 0
	} else {
		rl.tokens--
	}
}

// Secure random helpers

func secureRandomInt(max int) int {
	if max <= 0 {
		return 0
	}
	n, err := rand.Int(rand.Reader, big.NewInt(int64(max)))
	if err != nil {
		return 0
	}
	return int(n.Int64())
}

func secureRandomInt64(max int64) int64 {
	if max <= 0 {
		return 0
	}
	n, err := rand.Int(rand.Reader, big.NewInt(max))
	if err != nil {
		return 0
	}
	return n.Int64()
}

func min(a, b int) int {
	if a < b {
		return a
	}
	return b
}

// User-Agent pool

func getUserAgents() []string {
	return []string{
		// Chrome Windows
		"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
		"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36",
		"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36",
		"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36",
		// Chrome macOS
		"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
		"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36",
		// Firefox Windows
		"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:121.0) Gecko/20100101 Firefox/121.0",
		"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:120.0) Gecko/20100101 Firefox/120.0",
		"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:119.0) Gecko/20100101 Firefox/119.0",
		// Firefox macOS
		"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:121.0) Gecko/20100101 Firefox/121.0",
		"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:120.0) Gecko/20100101 Firefox/120.0",
		// Safari macOS
		"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.2 Safari/605.1.15",
		"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.1 Safari/605.1.15",
		// Edge Windows
		"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36 Edg/120.0.0.0",
		"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36 Edg/119.0.0.0",
		// Chrome Linux
		"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
		"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36",
		// Firefox Linux
		"Mozilla/5.0 (X11; Linux x86_64; rv:121.0) Gecko/20100101 Firefox/121.0",
		"Mozilla/5.0 (X11; Linux x86_64; rv:120.0) Gecko/20100101 Firefox/120.0",
		// Mobile Chrome Android
		"Mozilla/5.0 (Linux; Android 14; SM-S918B) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.6099.144 Mobile Safari/537.36",
		"Mozilla/5.0 (Linux; Android 13; Pixel 7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.6099.144 Mobile Safari/537.36",
		// Mobile Safari iOS
		"Mozilla/5.0 (iPhone; CPU iPhone OS 17_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.2 Mobile/15E148 Safari/604.1",
		"Mozilla/5.0 (iPhone; CPU iPhone OS 17_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.1 Mobile/15E148 Safari/604.1",
		// Brave
		"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36 Brave/120",
		// Opera
		"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36 OPR/106.0.0.0",
	}
}