Acknowledgements
/cso v2 was informed by research across the security audit landscape. Credits to:
- Sentry Security Review — The confidence-based reporting system (only HIGH confidence findings get reported) and the "research before reporting" methodology (trace data flow, check upstream validation) validated our 8/10 daily confidence gate. TimOnWeb rated it the only security skill worth installing out of 5 tested.
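The confidence gate described above can be sketched in a few lines of shell. This is a minimal illustration with a synthetic findings file and a hypothetical tab-separated format (confidence score, then description); it is not Sentry's or /cso's actual implementation.

```shell
# Synthetic findings: column 1 is a 1-10 confidence score, column 2 a description.
printf '9\tSQL injection in /api/search\n8\tHardcoded AWS key in config.py\n5\tPossible XSS (unverified)\n3\tWeak regex, likely false positive\n' > /tmp/gate-demo.tsv

# Report only findings at or above the gate threshold (here 8/10);
# lower-confidence candidates are dropped instead of reported as noise.
awk -F'\t' '$1 >= 8' /tmp/gate-demo.tsv
```

Only the two high-confidence rows survive the filter; the design choice is to trade recall for precision so that every reported finding is worth a human's attention.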
- Trail of Bits Skills — The audit-context-building methodology (build a mental model before hunting bugs) directly inspired Phase 0. Their variant analysis concept (found one vuln? search the whole codebase for the same pattern) inspired Phase 12's variant analysis step.
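Variant analysis in its simplest form is a recursive search for the pattern behind a confirmed finding. A minimal sketch, using synthetic files and a hypothetical `eval(` sink as the pattern (the real skill uses richer patterns than a literal grep):

```shell
# Set up a toy source tree: two files share the dangerous pattern, one is clean.
mkdir -p /tmp/variant-demo/src
printf 'eval(userInput)\n' > /tmp/variant-demo/src/a.js
printf 'const x = eval(payload)\n' > /tmp/variant-demo/src/b.js
printf 'console.log("safe")\n' > /tmp/variant-demo/src/c.js

# One vuln was confirmed in a.js; now hunt for variants of the same sink.
# -r recurses the tree, -l lists only the files that contain the pattern.
grep -rl 'eval(' /tmp/variant-demo/src   # matches a.js and b.js; c.js is clean
```

The payoff is that one verified finding becomes a query, and the query surfaces every sibling bug introduced by the same coding habit.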
- Shannon by Keygraph — Autonomous AI pentester achieving 96.15% on the XBOW benchmark (100/104 exploits). Validated that AI can do real security testing, not just checklist scanning. Our Phase 12 active verification is the static-analysis version of what Shannon does live.
- afiqiqmal/claude-security-audit — The AI/LLM-specific security checks (prompt injection, RAG poisoning, tool calling permissions) inspired Phase 7. Their framework-level auto-detection (detecting "Next.js" not just "Node/TypeScript") inspired Phase 0's framework detection step.
- Snyk ToxicSkills Research — The finding that 36% of AI agent skills have security flaws and 13.4% are malicious inspired Phase 8 (Skill Supply Chain scanning).
- Daniel Miessler's Personal AI Infrastructure — The incident response playbooks and protection file concept informed the remediation and LLM security phases.
- McGo/claude-code-security-audit — The idea of generating shareable reports and actionable epics informed our report format evolution.
- Claude Code Security Pack — Modular approach (separate /security-audit, /secret-scanner, /deps-check skills) validated that these are distinct concerns. Our unified approach sacrifices modularity for cross-phase reasoning.
- Anthropic Claude Code Security — Multi-stage verification and confidence scoring validated our parallel finding verification approach. Found 500+ zero-days in open source.
- @gus_argon — Identified critical v1 blind spots: no stack detection (it ran all-language patterns), reliance on bash grep instead of Claude Code's Grep tool, `| head -20` silently truncating results, and preamble bloat. These directly shaped v2's stack-first approach and Grep tool mandate.
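The `head -20` blind spot is easy to demonstrate: piping grep through `head` caps the output with no warning, so a scan can silently under-report. A minimal sketch with a synthetic file:

```shell
# Build a file with 50 matching lines (synthetic data for illustration).
mkdir -p /tmp/head-demo
for i in $(seq 1 50); do echo "TODO: fix issue $i"; done > /tmp/head-demo/code.txt

# Actual number of matches in the file.
total=$(grep -c 'TODO' /tmp/head-demo/code.txt)

# What a `grep ... | head -20` pipeline surfaces: at most 20 lines,
# with no indication that anything was cut off.
shown=$(grep 'TODO' /tmp/head-demo/code.txt | head -20 | wc -l | tr -d ' ')

echo "total=$total shown=$shown"   # → total=50 shown=20
```

Thirty findings vanish without any error or exit-code hint, which is why v2 mandates the Grep tool (with explicit result limits and counts) over ad-hoc bash pipelines.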