gstack/review/checklist.md
Garry Tan a4a181ca92 feat: Review Army — parallel specialist reviewers for /review (v0.14.3.0) (#692)
* feat: extend gstack-diff-scope with SCOPE_MIGRATIONS, SCOPE_API, SCOPE_AUTH

Three new scope signals for Review Army specialist activation:
- SCOPE_MIGRATIONS: db/migrate/, prisma/migrations/, alembic/, *.sql
- SCOPE_API: *controller*, *route*, *endpoint*, *.graphql, openapi.*
- SCOPE_AUTH: *auth*, *session*, *jwt*, *oauth*, *permission*, *role*

* feat: add 7 specialist checklist files for Review Army

- testing.md (always-on): coverage gaps, flaky patterns, security enforcement
- maintainability.md (always-on): dead code, DRY, stale comments
- security.md (conditional): OWASP deep analysis, auth bypass, injection
- performance.md (conditional): N+1 queries, bundle impact, complexity
- data-migration.md (conditional): reversibility, lock duration, backfill
- api-contract.md (conditional): breaking changes, versioning, error format
- red-team.md (conditional): adversarial analysis, cross-cutting concerns

All use standard header with JSON output schema and NO FINDINGS fallback.

* feat: Review Army resolver — parallel specialist dispatch + merge

New resolver in review-army.ts generates template prose for:
- Stack detection and specialist selection
- Parallel Agent tool dispatch with learning-informed prompts
- JSON finding collection, fingerprint dedup, consensus highlighting
- PR quality score computation
- Red Team conditional dispatch

Registered as REVIEW_ARMY in resolvers/index.ts.

* refactor: restructure /review template for Review Army

- Replace Steps 4-4.75 with CRITICAL pass + {{REVIEW_ARMY}}
- Remove {{DESIGN_REVIEW_LITE}} and {{TEST_COVERAGE_AUDIT_REVIEW}}
  (subsumed into Design and Testing specialists respectively)
- Extract specialist-covered categories from checklist.md
- Keep CRITICAL + uncovered INFORMATIONAL in main agent pass

* test: Review Army — 14 diff-scope tests + 7 E2E tests

- test/diff-scope.test.ts: 14 tests for all 9 scope signals
- test/skill-e2e-review-army.test.ts: 7 E2E tests
  Gate: migration safety, N+1 detection, delivery audit,
        quality score, JSON findings
  Periodic: red team, consensus
- Updated gen-skill-docs tests for new review structure
- Added touchfile entries and tier classifications

* docs: update SELF_LEARNING_V0.md with Release 2 status + Release 2.5

Mark Release 2 (Review Army) as in-progress. Add Release 2.5 for
deferred expansions (E1 adaptive gating, E3 test stubs, E5 cross-review
dedup, E7 specialist tracking).

* chore: bump version and changelog (v0.14.3.0)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

2026-03-30 22:07:50 -06:00


Pre-Landing Review Checklist

Instructions

Review the git diff origin/main output for the issues listed below. Be specific — cite file:line and suggest fixes. Skip anything that's fine. Only flag real problems.

Two-pass review:

  • Pass 1 (CRITICAL): Run SQL & Data Safety, Race Conditions, LLM Output Trust Boundary, Shell Injection, and Enum Completeness first. Highest severity.
  • Pass 2 (INFORMATIONAL): Run remaining categories below. Lower severity but still actioned.
  • Specialist categories (handled by parallel subagents, NOT this checklist): Test Gaps, Dead Code, Magic Numbers, Conditional Side Effects, Performance & Bundle Impact, Crypto & Entropy. See review/specialists/ for these.

All findings are actioned via Fix-First Review: obvious mechanical fixes are applied automatically; genuinely ambiguous issues are batched into a single user question.

Output format:

Pre-Landing Review: N issues (X critical, Y informational)

**AUTO-FIXED:**
- [file:line] Problem → fix applied

**NEEDS INPUT:**
- [file:line] Problem description
  Recommended fix: suggested fix

If no issues found: Pre-Landing Review: No issues found.

Be terse. For each issue: one line describing the problem, one line with the fix. No preamble, no summaries, no "looks good overall."


Review Categories

Pass 1 — CRITICAL

SQL & Data Safety

  • String interpolation in SQL, even when values are coerced with .to_i/.to_f — use parameterized queries instead (Rails: sanitize_sql_array/Arel; Node: prepared statements; Python: DB-API placeholders); see the sketch after this list
  • TOCTOU races: check-then-set patterns that should be atomic WHERE + update_all
  • Bypassing model validations for direct DB writes (Rails: update_column; Django: QuerySet.update(); Prisma: raw queries)
  • N+1 queries: Missing eager loading (Rails: .includes(); SQLAlchemy: joinedload(); Prisma: include) for associations used in loops/views
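
To make the interpolation-vs-parameterization distinction concrete, a minimal Python sketch assuming a DB-API-style cursor (the table, variable names, and query are invented; placeholder syntax varies by driver, e.g. %s vs ?):

  # UNSAFE: the value is spliced into the SQL text, so it is injectable,
  # and coercing it first only narrows (not closes) the hole
  cursor.execute(f"SELECT * FROM orders WHERE user_id = {user_id}")

  # SAFE: the driver binds the value as a parameter, never as SQL text
  cursor.execute("SELECT * FROM orders WHERE user_id = %s", (user_id,))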

Race Conditions & Concurrency

  • Read-check-write without a uniqueness constraint or a duplicate-key rescue-and-retry (e.g., where(hash:).first then save! with no handling for a concurrent insert)
  • find-or-create without unique DB index — concurrent calls can create duplicates
  • Status transitions that don't use an atomic UPDATE SET new_status WHERE old_status = ? — concurrent updates can skip or double-apply transitions (sketched after this list)
  • Unsafe HTML rendering (Rails: .html_safe/raw(); React: dangerouslySetInnerHTML; Vue: v-html; Django: |safe/mark_safe) on user-controlled data (XSS)
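
A minimal sketch of the atomic-transition pattern, assuming SQLAlchemy 2.x and a hypothetical Order model (names are illustrative, not from this repo):

  from sqlalchemy import update

  # The WHERE clause makes check-and-write one atomic statement: the row is
  # claimed only if it is still in the expected state
  result = session.execute(
      update(Order)
      .where(Order.id == order_id, Order.status == "pending")
      .values(status="processing")
  )
  if result.rowcount == 0:
      pass  # a concurrent worker already transitioned it; skip, don't double-apply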

LLM Output Trust Boundary

  • LLM-generated values (emails, URLs, names) written to DB or passed to mailers without format validation. Add lightweight guards (EMAIL_REGEXP, URI.parse, .strip) before persisting.
  • Structured tool output (arrays, hashes) accepted without type/shape checks before database writes.
  • LLM-generated URLs fetched without an allowlist — SSRF risk if the URL points at the internal network (Python: urllib.parse.urlparse → check the hostname against an allowlist before requests.get/httpx.get); see the guard sketch after this list
  • LLM output stored in knowledge bases or vector DBs without sanitization — stored prompt injection risk
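
A minimal guard sketch for the email and URL bullets (Python; the regex, allowlist, and function names are invented for illustration):

  import re
  from urllib.parse import urlparse

  EMAIL_RE = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")
  ALLOWED_HOSTS = {"api.example.com"}  # hypothetical outbound allowlist

  def validated_email(value: str) -> str:
      value = value.strip()
      if not EMAIL_RE.fullmatch(value):
          raise ValueError(f"LLM returned malformed email: {value!r}")
      return value

  def validated_url(value: str) -> str:
      value = value.strip()
      if urlparse(value).hostname not in ALLOWED_HOSTS:
          raise ValueError("URL host not in allowlist")  # blocks SSRF to internal hosts
      return value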

Shell Injection (Python-specific)

  • subprocess.run() / subprocess.call() / subprocess.Popen() with shell=True AND f-string/.format() interpolation in the command string — use argument arrays instead (contrast sketched after this list)
  • os.system() with variable interpolation — replace with subprocess.run() using argument arrays
  • eval() / exec() on LLM-generated code without sandboxing
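
The contrast in one sketch (the convert invocation and filename variable are made-up examples):

  import subprocess

  # UNSAFE: the shell parses filename, so "x; rm -rf ~" would execute commands
  subprocess.run(f"convert {filename} thumb.png", shell=True)

  # SAFE: each element is passed to the program verbatim; no shell is involved
  subprocess.run(["convert", filename, "thumb.png"], check=True)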

Enum & Value Completeness

When the diff introduces a new enum value, status string, tier name, or type constant:

  • Trace it through every consumer. Read (don't just grep — READ) each file that switches on, filters by, or displays that value. If any consumer doesn't handle the new value, flag it. Common miss: adding a value to the frontend dropdown but the backend model/compute method doesn't persist it.
  • Check allowlists/filter arrays. Search for arrays or %w[] lists containing sibling values (e.g., if adding "revise" to tiers, find every %w[quick lfg mega] and verify "revise" is included where needed).
  • Check case/if-elsif chains. If existing code branches on the enum, does the new value fall through to a wrong default? To do this: use Grep to find all references to the sibling values (e.g., grep for "lfg" or "mega" to find all tier consumers). Read each match. This step requires reading code OUTSIDE the diff. A minimal fall-through sketch follows this list.
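
Illustrating the fall-through risk with the tier example above (Python 3.10+ match; the function and values are hypothetical):

  def limit_for(tier: str) -> int:
      match tier:
          case "quick":
              return 1
          case "lfg" | "mega":
              return 5
          case _:
              return 0  # a newly added "revise" tier silently lands here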

Pass 2 — INFORMATIONAL

Async/Sync Mixing (Python-specific)

  • Synchronous subprocess.run(), open(), requests.get() calls inside async def endpoints block the event loop. Use asyncio.to_thread(), aiofiles, or httpx.AsyncClient instead (see the sketch after this list).
  • time.sleep() inside async functions — use asyncio.sleep()
  • Sync DB calls in async context without run_in_executor() wrapping
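
A before/after sketch of the blocking-call bullet (function names and the git invocation are illustrative):

  import asyncio, subprocess
  import httpx, requests

  async def status_blocking(url: str):
      return requests.get(url)  # runs in the event loop thread: everything stalls

  async def status_ok(url: str):
      async with httpx.AsyncClient() as client:
          return await client.get(url)  # yields to the loop while waiting

  async def run_git_status():
      # off-load the sync call to a worker thread instead of blocking the loop
      return await asyncio.to_thread(subprocess.run, ["git", "status"], capture_output=True)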

Column/Field Name Safety

  • Verify column names in ORM queries (.select(), .eq(), .gte(), .order()) against the actual DB schema — wrong column names silently return empty results or throw swallowed errors (see the sketch after this list)
  • Check .get() calls on query results use the column name that was actually selected
  • Cross-reference with schema documentation when available
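
For example, in a supabase-py-style query builder (the client, table, and column names are invented for illustration):

  # If the real column is "inserted_at", this surfaces as an error or an empty
  # result set depending on how the client reports it; verify against the schema
  resp = (client.table("jobs")
          .select("created_at, status")
          .gte("created_at", cutoff)
          .order("created_at")
          .execute())
  first = resp.data[0]
  first.get("status")  # OK: "status" was selected above
  first.get("owner")   # bug: never selected, so this is always None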

Dead Code & Consistency (version/changelog only — other items handled by maintainability specialist)

  • Version mismatch between PR title and VERSION/CHANGELOG files
  • CHANGELOG entries that describe changes inaccurately (e.g., "changed from X to Y" when X never existed)

LLM Prompt Issues

  • 0-indexed lists in prompts (LLMs reliably return 1-indexed)
  • Prompt text listing available tools/capabilities that don't match what's actually wired up in the tool_classes/tools array
  • Word/token limits stated in multiple places that could drift

Completeness Gaps

  • Shortcut implementations where the complete version would cost <30 minutes CC time (e.g., partial enum handling, incomplete error paths, missing edge cases that are straightforward to add)
  • Options presented with only human-team effort estimates — should show both human and CC+gstack time
  • Test coverage gaps where adding the missing tests is a "lake" not an "ocean" (e.g., missing negative-path tests, missing edge case tests that mirror happy-path structure)
  • Features implemented at 80-90% when 100% is achievable with modest additional code

Time Window Safety

  • Date-key lookups that assume "today" covers 24h — a report run at 8am PT only sees midnight→8am under today's key (see the sketch after this list)
  • Mismatched time windows between related features — one uses hourly buckets, another uses daily keys for the same data
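
The difference in a short sketch (the store and events collections are hypothetical):

  from datetime import datetime, timedelta, timezone

  now = datetime.now(timezone.utc)

  # Date-key window: at 08:00 this bucket only holds midnight-to-08:00 events
  todays = store.get(now.date().isoformat(), [])

  # Rolling window: always a full 24 hours regardless of when the report runs
  cutoff = now - timedelta(hours=24)
  recent = [e for e in events if e.ts >= cutoff]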

Type Coercion at Boundaries

  • Values crossing Ruby→JSON→JS boundaries where type could change (numeric vs string) — hash/digest inputs must normalize types
  • Hash/digest inputs that don't call .to_s or equivalent before serialization — { cores: 8 } vs { cores: "8" } produce different hashes (normalization sketch after this list)
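
A minimal normalization sketch in Python (the fingerprint helper is illustrative, not from this repo):

  import hashlib, json

  def fingerprint(payload: dict) -> str:
      # Coerce every value to str so {"cores": 8} and {"cores": "8"} agree
      canonical = json.dumps({k: str(v) for k, v in payload.items()}, sort_keys=True)
      return hashlib.sha256(canonical.encode()).hexdigest()

  assert fingerprint({"cores": 8}) == fingerprint({"cores": "8"})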

View/Frontend

  • Inline <style> blocks in partials (re-parsed every render)
  • O(n*m) lookups in views (Array#find in a loop instead of an index_by hash; see the sketch after this list)
  • Ruby-side .select{} filtering on DB results that could be a WHERE clause (unless intentionally avoiding leading-wildcard LIKE)
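
The same lookup pattern in Python terms (users and rows are placeholder collections; Rails' index_by corresponds to the dict build below):

  # O(n*m): re-scans the whole users list for every row
  for row in rows:
      row.user = next((u for u in users if u.id == row.user_id), None)

  # O(n + m): build the index once, then each lookup is O(1)
  users_by_id = {u.id: u for u in users}
  for row in rows:
      row.user = users_by_id.get(row.user_id)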

Distribution & CI/CD Pipeline

  • CI/CD workflow changes (.github/workflows/): verify build tool versions match project requirements, artifact names/paths are correct, secrets use ${{ secrets.X }} not hardcoded values
  • New artifact types (CLI binary, library, package): verify a publish/release workflow exists and targets correct platforms
  • Cross-platform builds: verify CI matrix covers all target OS/arch combinations, or documents which are untested
  • Version tag format consistency: v1.2.3 vs 1.2.3 — must match across VERSION file, git tags, and publish scripts
  • Publish step idempotency: re-running the publish workflow should not fail (e.g., gh release delete before gh release create)

DO NOT flag:

  • Web services with existing auto-deploy pipelines (Docker build + K8s deploy)
  • Internal tools not distributed outside the team
  • Test-only CI changes (adding test steps, not publish steps)

Severity Classification

CRITICAL (highest severity):      INFORMATIONAL (main agent):       SPECIALIST (parallel subagents):
├─ SQL & Data Safety              ├─ Async/Sync Mixing              ├─ Testing specialist
├─ Race Conditions & Concurrency  ├─ Column/Field Name Safety       ├─ Maintainability specialist
├─ LLM Output Trust Boundary      ├─ Dead Code (version only)       ├─ Security specialist
├─ Shell Injection                ├─ LLM Prompt Issues              ├─ Performance specialist
└─ Enum & Value Completeness      ├─ Completeness Gaps              ├─ Data Migration specialist
                                  ├─ Time Window Safety             ├─ API Contract specialist
                                  ├─ Type Coercion at Boundaries    └─ Red Team (conditional)
                                  ├─ View/Frontend
                                  └─ Distribution & CI/CD Pipeline

All findings are actioned via Fix-First Review. Severity determines
presentation order and classification of AUTO-FIX vs ASK — critical
findings lean toward ASK (they're riskier), informational findings
lean toward AUTO-FIX (they're more mechanical).

Fix-First Heuristic

This heuristic is referenced by both /review and /ship. It determines whether the agent auto-fixes a finding or asks the user.

AUTO-FIX (agent fixes without asking):     ASK (needs human judgment):
├─ Dead code / unused variables            ├─ Security (auth, XSS, injection)
├─ N+1 queries (missing eager loading)     ├─ Race conditions
├─ Stale comments contradicting code       ├─ Design decisions
├─ Magic numbers → named constants         ├─ Large fixes (>20 lines)
├─ Missing LLM output validation           ├─ Enum completeness
├─ Version/path mismatches                 ├─ Removing functionality
├─ Variables assigned but never read       └─ Anything changing user-visible
└─ Inline styles, O(n*m) view lookups         behavior

Rule of thumb: If the fix is mechanical and a senior engineer would apply it without discussion, it's AUTO-FIX. If reasonable engineers could disagree about the fix, it's ASK.

Critical findings default toward ASK (they're inherently riskier). Informational findings default toward AUTO-FIX (they're more mechanical).


Suppressions — DO NOT flag these

  • "X is redundant with Y" when the redundancy is harmless and aids readability (e.g., present? redundant with length > 20)
  • "Add a comment explaining why this threshold/constant was chosen" — thresholds change during tuning, comments rot
  • "This assertion could be tighter" when the assertion already covers the behavior
  • Suggesting consistency-only changes (wrapping a value in a conditional to match how another constant is guarded)
  • "Regex doesn't handle edge case X" when the input is constrained and X never occurs in practice
  • "Test exercises multiple guards simultaneously" — that's fine, tests don't need to isolate every guard
  • Eval threshold changes (max_actionable, min scores) — these are tuned empirically and change constantly
  • Harmless no-ops (e.g., .reject on an element that's never in the array)
  • ANYTHING already addressed in the diff you're reviewing — read the FULL diff before commenting