mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-06 05:35:46 +02:00
Merge remote-tracking branch 'origin/main' into garrytan/learn-from-reviews
Resolved conflicts:
- VERSION: bumped to 0.14.6.0 (our branch on top of main's 0.14.5.0)
- CHANGELOG.md: kept our entry on top, main's 7 new entries below, updated version
- package.json: version synced to 0.14.6.0
- Regenerated all SKILL.md files from merged templates

Main brought: Review Army (parallel specialist reviewers), always-on adversarial, CSS inspector, per-tab agents, design-to-code, comparison board, ship idempotency, skill prefix fix, session intelligence roadmap.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This commit is contained in:
@@ -472,3 +472,16 @@ if (failures.length > 0 && HOST_ARG_VAL === 'all') {
  if (failures.some(f => f.host === 'claude')) process.exit(1);
}
// Single host dry-run failure already handled above

// After all hosts processed, warn if prefix patches may need re-applying
if (!DRY_RUN) {
  try {
    const configPath = path.join(process.env.HOME || '', '.gstack', 'config.yaml');
    if (fs.existsSync(configPath)) {
      const config = fs.readFileSync(configPath, 'utf-8');
      if (/^skill_prefix:\s*true/m.test(config)) {
        console.log('\nNote: skill_prefix is true. Run gstack-relink to re-apply name: patches.');
      }
    }
  } catch { /* non-fatal */ }
}
+46
-29
@@ -855,31 +855,42 @@ $D compare --images "$_DESIGN_DIR/variant-A.png,$_DESIGN_DIR/variant-B.png,$_DES

This command generates the board HTML, starts an HTTP server on a random port,
and opens it in the user's default browser. **Run it in the background** with \`&\`
because the server needs to stay running while the user interacts with the board.

Parse the port from stderr output: \`SERVE_STARTED: port=XXXXX\`. You need this
for the board URL and for reloading during regeneration cycles.

**PRIMARY WAIT: AskUserQuestion with board URL**

After the board is serving, use AskUserQuestion to wait for the user. Include the
board URL so they can click it if they lost the browser tab:

"I've opened a comparison board with the design variants:
http://127.0.0.1:<PORT>/ — Rate them, leave comments, remix
elements you like, and click Submit when you're done. Let me know when you've
submitted your feedback (or paste your preferences here). If you clicked
Regenerate or Remix on the board, tell me and I'll generate new variants."

**Do NOT use AskUserQuestion to ask which variant the user prefers.** The comparison
board IS the chooser. AskUserQuestion is just the blocking wait mechanism.

**After the user responds to AskUserQuestion:**

Check for feedback files next to the board HTML:
- \`$_DESIGN_DIR/feedback.json\` — written when the user clicks Submit (final choice)
- \`$_DESIGN_DIR/feedback-pending.json\` — written when the user clicks Regenerate/Remix/More Like This

\`\`\`bash
if [ -f "$_DESIGN_DIR/feedback.json" ]; then
  echo "SUBMIT_RECEIVED"
  cat "$_DESIGN_DIR/feedback.json"
elif [ -f "$_DESIGN_DIR/feedback-pending.json" ]; then
  echo "REGENERATE_RECEIVED"
  cat "$_DESIGN_DIR/feedback-pending.json"
  rm "$_DESIGN_DIR/feedback-pending.json"
else
  echo "NO_FEEDBACK_FILE"
fi
\`\`\`

The feedback JSON has this shape:
@@ -893,24 +904,30 @@ The feedback JSON has this shape:
}
\`\`\`

**If \`feedback.json\` found:** The user clicked Submit on the board.
Read \`preferred\`, \`ratings\`, \`comments\`, \`overall\` from the JSON. Proceed with
the approved variant.

**If \`feedback-pending.json\` found:** The user clicked Regenerate/Remix on the board.
1. Read \`regenerateAction\` from the JSON (\`"different"\`, \`"match"\`, \`"more_like_B"\`,
   \`"remix"\`, or custom text)
2. If \`regenerateAction\` is \`"remix"\`, read \`remixSpec\` (e.g. \`{"layout":"A","colors":"B"}\`)
3. Generate new variants with \`$D iterate\` or \`$D variants\` using the updated brief
4. Create a new board: \`$D compare --images "..." --output "$_DESIGN_DIR/design-board.html"\`
5. Reload the board in the user's browser (same tab):
   \`curl -s -X POST http://127.0.0.1:PORT/api/reload -H 'Content-Type: application/json' -d '{"html":"$_DESIGN_DIR/design-board.html"}'\`
6. The board auto-refreshes. **AskUserQuestion again** with the same board URL to
   wait for the next round of feedback. Repeat until \`feedback.json\` appears.

**If \`NO_FEEDBACK_FILE\`:** The user typed their preferences directly in the
AskUserQuestion response instead of using the board. Use their text response
as the feedback.

**POLLING FALLBACK:** Only use polling if \`$D serve\` fails (no port available).
In that case, show each variant inline using the Read tool (so the user can see them),
then use AskUserQuestion:
"The comparison board server failed to start. I've shown the variants above.
Which do you prefer? Any feedback?"

**After receiving feedback (any path):** Output a clear summary confirming
what was understood:
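The feedback-file branching above can be sketched as a small TypeScript function. This is a minimal sketch, not gstack code: the `BoardFeedback` interface, `NextAction` type, and `decideNextAction` name are hypothetical; only the file names and fields (`feedback.json`, `feedback-pending.json`, `preferred`, `regenerateAction`, `remixSpec`) come from the instructions above.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Hypothetical shape of the feedback files described above.
interface BoardFeedback {
  preferred?: string;                  // e.g. "B" — present in feedback.json
  ratings?: Record<string, number>;
  comments?: Record<string, string>;
  overall?: string;
  regenerateAction?: string;           // e.g. "remix" — present in feedback-pending.json
  remixSpec?: Record<string, string>;  // e.g. {"layout":"A","colors":"B"}
}

type NextAction =
  | { kind: "proceed"; variant: string }
  | { kind: "regenerate"; action: string; remixSpec?: Record<string, string> }
  | { kind: "ask_user" };              // the NO_FEEDBACK_FILE case

// Mirrors the bash check: Submit wins; a pending file is consumed (rm'd);
// no file means the user answered in the AskUserQuestion response instead.
function decideNextAction(designDir: string): NextAction {
  const submitted = path.join(designDir, "feedback.json");
  const pending = path.join(designDir, "feedback-pending.json");
  if (fs.existsSync(submitted)) {
    const fb: BoardFeedback = JSON.parse(fs.readFileSync(submitted, "utf-8"));
    return { kind: "proceed", variant: fb.preferred ?? "unknown" };
  }
  if (fs.existsSync(pending)) {
    const fb: BoardFeedback = JSON.parse(fs.readFileSync(pending, "utf-8"));
    fs.unlinkSync(pending); // consume, as the bash version does with rm
    return {
      kind: "regenerate",
      action: fb.regenerateAction ?? "different",
      remixSpec: fb.remixSpec,
    };
  }
  return { kind: "ask_user" };
}
```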
@@ -11,11 +11,12 @@ import { generateTestFailureTriage } from './preamble';
import { generateCommandReference, generateSnapshotFlags, generateBrowseSetup } from './browse';
import { generateDesignMethodology, generateDesignHardRules, generateDesignOutsideVoices, generateDesignReviewLite, generateDesignSketch, generateDesignSetup, generateDesignMockup, generateDesignShotgunLoop } from './design';
import { generateTestBootstrap, generateTestCoverageAuditPlan, generateTestCoverageAuditShip, generateTestCoverageAuditReview } from './testing';
import { generateReviewDashboard, generatePlanFileReviewReport, generateSpecReviewLoop, generateBenefitsFrom, generateCodexSecondOpinion, generateAdversarialStep, generateCodexPlanReview, generatePlanCompletionAuditShip, generatePlanCompletionAuditReview, generatePlanVerificationExec, generateScopeDrift } from './review';
import { generateSlugEval, generateSlugSetup, generateBaseBranchDetect, generateDeployBootstrap, generateQAMethodology, generateCoAuthorTrailer, generateChangelogWorkflow } from './utility';
import { generateLearningsSearch, generateLearningsLog } from './learnings';
import { generateConfidenceCalibration } from './confidence';
import { generateInvokeSkill } from './composition';
import { generateReviewArmy } from './review-army';

export const RESOLVERS: Record<string, ResolverFn> = {
  SLUG_EVAL: generateSlugEval,
@@ -45,6 +46,7 @@ export const RESOLVERS: Record<string, ResolverFn> = {
  BENEFITS_FROM: generateBenefitsFrom,
  CODEX_SECOND_OPINION: generateCodexSecondOpinion,
  ADVERSARIAL_STEP: generateAdversarialStep,
  SCOPE_DRIFT: generateScopeDrift,
  DEPLOY_BOOTSTRAP: generateDeployBootstrap,
  CODEX_PLAN_REVIEW: generateCodexPlanReview,
  PLAN_COMPLETION_AUDIT_SHIP: generatePlanCompletionAuditShip,
@@ -56,4 +58,5 @@ export const RESOLVERS: Record<string, ResolverFn> = {
  CONFIDENCE_CALIBRATION: generateConfidenceCalibration,
  INVOKE_SKILL: generateInvokeSkill,
  CHANGELOG_WORKFLOW: generateChangelogWorkflow,
  REVIEW_ARMY: generateReviewArmy,
};
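For context on how a registry like `RESOLVERS` gets consumed, here is a minimal sketch of token substitution over a template. The real `ResolverFn` signature lives in `./types` and is not shown in this diff; the `(ctx) => string` shape, the `{{TOKEN}}` syntax, and `renderTemplate` are assumptions for illustration only.

```typescript
// Hypothetical stand-ins for the real types in ./types.
type TemplateContext = { host: string; skillName: string };
type ResolverFn = (ctx: TemplateContext) => string;

// A two-entry registry in the same shape as the RESOLVERS map above.
const RESOLVERS: Record<string, ResolverFn> = {
  SCOPE_DRIFT: (ctx) => `## Scope Drift (${ctx.skillName})`,
  // Resolvers may return '' to strip a section for a given host.
  REVIEW_ARMY: (ctx) => (ctx.host === "codex" ? "" : "## Review Army"),
};

// Replace each {{TOKEN}} with its resolver output; unknown tokens stay as-is.
function renderTemplate(template: string, ctx: TemplateContext): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, token) => {
    const resolve = RESOLVERS[token];
    return resolve ? resolve(ctx) : match;
  });
}
```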
@@ -459,6 +459,21 @@ success/error/abort, and \`USED_BROWSE\` with true/false based on whether \`$B\`
If you cannot determine the outcome, use "unknown". The local JSONL always logs. The
remote binary only runs if telemetry is not off and the binary exists.

## Plan Mode Safe Operations

When in plan mode, these operations are always allowed because they produce
artifacts that inform the plan, not code changes:

- \`$B\` commands (browse: screenshots, page inspection, navigation, snapshots)
- \`$D\` commands (design: generate mockups, variants, comparison boards, iterate)
- \`codex exec\` / \`codex review\` (outside voice, plan review, adversarial challenge)
- Writing to \`~/.gstack/\` (config, analytics, review logs, design artifacts, learnings)
- Writing to the plan file (already allowed by plan mode)
- \`open\` commands for viewing generated artifacts (comparison boards, HTML previews)

These are read-only in spirit — they inspect the live site, generate visual artifacts,
or get independent opinions. They do NOT modify project source files.

## Plan Status Footer

When you are in plan mode and about to call ExitPlanMode:
@@ -0,0 +1,190 @@
/**
 * Review Army resolver — parallel specialist reviewers for /review
 *
 * Generates template prose that instructs Claude to:
 * 1. Detect stack and scope (via gstack-diff-scope)
 * 2. Select and dispatch specialist subagents in parallel
 * 3. Collect, parse, merge, and deduplicate JSON findings
 * 4. Feed merged findings into the existing Fix-First pipeline
 *
 * Shipped as Release 2 of the self-learning roadmap (SELF_LEARNING_V0.md).
 */
import type { TemplateContext } from './types';

function generateSpecialistSelection(ctx: TemplateContext): string {
  return `## Step 4.5: Review Army — Specialist Dispatch

### Detect stack and scope

\`\`\`bash
source <(${ctx.paths.binDir}/gstack-diff-scope <base> 2>/dev/null) || true
# Detect stack for specialist context
STACK=""
[ -f Gemfile ] && STACK="\${STACK}ruby "
[ -f package.json ] && STACK="\${STACK}node "
{ [ -f requirements.txt ] || [ -f pyproject.toml ]; } && STACK="\${STACK}python "
[ -f go.mod ] && STACK="\${STACK}go "
[ -f Cargo.toml ] && STACK="\${STACK}rust "
echo "STACK: \${STACK:-unknown}"
DIFF_LINES=$(git diff origin/<base> --stat | tail -1 | grep -oE '[0-9]+ insertion' | grep -oE '[0-9]+' || echo "0")
echo "DIFF_LINES: $DIFF_LINES"
\`\`\`

### Select specialists

Based on the scope signals above, select which specialists to dispatch.

**If DIFF_LINES < 50:** Skip all specialists. Print: "Small diff ($DIFF_LINES lines) — specialists skipped." Continue to Step 5.

**Always-on (dispatch on every review with 50+ changed lines):**
1. **Testing** — read \`${ctx.paths.skillRoot}/review/specialists/testing.md\`
2. **Maintainability** — read \`${ctx.paths.skillRoot}/review/specialists/maintainability.md\`

**Conditional (dispatch if the matching scope signal is true):**
3. **Security** — if SCOPE_AUTH=true, OR if SCOPE_BACKEND=true AND DIFF_LINES > 100. Read \`${ctx.paths.skillRoot}/review/specialists/security.md\`
4. **Performance** — if SCOPE_BACKEND=true OR SCOPE_FRONTEND=true. Read \`${ctx.paths.skillRoot}/review/specialists/performance.md\`
5. **Data Migration** — if SCOPE_MIGRATIONS=true. Read \`${ctx.paths.skillRoot}/review/specialists/data-migration.md\`
6. **API Contract** — if SCOPE_API=true. Read \`${ctx.paths.skillRoot}/review/specialists/api-contract.md\`
7. **Design** — if SCOPE_FRONTEND=true. Use the existing design review checklist at \`${ctx.paths.skillRoot}/review/design-checklist.md\`

Note which specialists were selected and which were skipped. Print the selection:
"Dispatching N specialists: [names]. Skipped: [names] (scope not detected)."`;
}
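The selection rules above can be restated as a pure function. A minimal sketch, not gstack code: the `Signals` interface and `selectSpecialists` name are hypothetical; the thresholds and the flag semantics (`SCOPE_AUTH`, `SCOPE_BACKEND`, etc.) are taken from the rules listed above.

```typescript
// Hypothetical bundle of the signals produced by the bash detection step.
interface Signals {
  diffLines: number;
  auth?: boolean;        // SCOPE_AUTH
  backend?: boolean;     // SCOPE_BACKEND
  frontend?: boolean;    // SCOPE_FRONTEND
  migrations?: boolean;  // SCOPE_MIGRATIONS
  api?: boolean;         // SCOPE_API
}

function selectSpecialists(s: Signals): string[] {
  if (s.diffLines < 50) return []; // small diff: skip all specialists
  const picked = ["testing", "maintainability"]; // always-on pair
  // Security: auth changes, or a sizable backend diff.
  if (s.auth || (s.backend && s.diffLines > 100)) picked.push("security");
  if (s.backend || s.frontend) picked.push("performance");
  if (s.migrations) picked.push("data-migration");
  if (s.api) picked.push("api-contract");
  if (s.frontend) picked.push("design");
  return picked;
}
```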
function generateSpecialistDispatch(ctx: TemplateContext): string {
  return `### Dispatch specialists in parallel

For each selected specialist, launch an independent subagent via the Agent tool.
**Launch ALL selected specialists in a single message** (multiple Agent tool calls)
so they run in parallel. Each subagent has fresh context — no prior review bias.

**Each specialist subagent prompt:**

Construct the prompt for each specialist. The prompt includes:

1. The specialist's checklist content (you already read the file above)
2. Stack context: "This is a {STACK} project."
3. Past learnings for this domain (if any exist):

\`\`\`bash
${ctx.paths.binDir}/gstack-learnings-search --type pitfall --query "{specialist domain}" --limit 5 2>/dev/null || true
\`\`\`

If learnings are found, include them: "Past learnings for this domain: {learnings}"

4. Instructions:

"You are a specialist code reviewer. Read the checklist below, then run
\`git diff origin/<base>\` to get the full diff. Apply the checklist against the diff.

For each finding, output a JSON object on its own line:
{\\"severity\\":\\"CRITICAL|INFORMATIONAL\\",\\"confidence\\":N,\\"path\\":\\"file\\",\\"line\\":N,\\"category\\":\\"category\\",\\"summary\\":\\"description\\",\\"fix\\":\\"recommended fix\\",\\"fingerprint\\":\\"path:line:category\\",\\"specialist\\":\\"name\\"}

Required fields: severity, confidence, path, category, summary, specialist.
Optional: line, fix, fingerprint, evidence.

If no findings: output \`NO FINDINGS\` and nothing else.
Do not output anything else — no preamble, no summary, no commentary.

Stack context: {STACK}
Past learnings: {learnings or 'none'}

CHECKLIST:
{checklist content}"

**Subagent configuration:**
- Use \`subagent_type: "general-purpose"\`
- Do NOT use \`run_in_background\` — all specialists must complete before merge
- If any specialist subagent fails or times out, log the failure and continue with results from successful specialists. Specialists are additive — partial results are better than no results.`;
}
function generateFindingsMerge(_ctx: TemplateContext): string {
  return `### Step 4.6: Collect and merge findings

After all specialist subagents complete, collect their outputs.

**Parse findings:**
For each specialist's output:
1. If output is "NO FINDINGS" — skip, this specialist found nothing
2. Otherwise, parse each line as a JSON object. Skip lines that are not valid JSON.
3. Collect all parsed findings into a single list, tagged with their specialist name.

**Fingerprint and deduplicate:**
For each finding, compute its fingerprint:
- If the \`fingerprint\` field is present, use it
- Otherwise: \`{path}:{line}:{category}\` (if line is present) or \`{path}:{category}\`

Group findings by fingerprint. For findings sharing the same fingerprint:
- Keep the finding with the highest confidence score
- Tag it: "MULTI-SPECIALIST CONFIRMED ({specialist1} + {specialist2})"
- Boost confidence by +1 (cap at 10)
- Note the confirming specialists in the output

**Apply confidence gates:**
- Confidence 7+: show normally in the findings output
- Confidence 5–6: show with caveat "Medium confidence — verify this is actually an issue"
- Confidence 3–4: move to appendix (suppress from main findings)
- Confidence 1–2: suppress entirely

**Compute PR Quality Score:**
After merging, compute the quality score:
\`quality_score = max(0, 10 - (critical_count * 2 + informational_count * 0.5))\`
Cap at 10. Log this in the review result at the end.

**Output merged findings:**
Present the merged findings in the same format as the current review:

\`\`\`
SPECIALIST REVIEW: N findings (X critical, Y informational) from Z specialists

[For each finding, in order: CRITICAL first, then INFORMATIONAL, sorted by confidence descending]
[SEVERITY] (confidence: N/10, specialist: name) path:line — summary
  Fix: recommended fix
  [If MULTI-SPECIALIST CONFIRMED: show confirmation note]

PR Quality Score: X/10
\`\`\`

These findings flow into Step 5 Fix-First alongside the CRITICAL pass findings from Step 4.
The Fix-First heuristic applies identically — specialist findings follow the same AUTO-FIX vs ASK classification.`;
}
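The fingerprint/dedup/score rules above can be sketched directly. This is a minimal sketch under stated assumptions: the `Finding` interface follows the JSON schema the specialists are told to emit; `mergeFindings`, `fingerprintOf`, and `qualityScore` are hypothetical names, not gstack functions.

```typescript
// Shape follows the per-line JSON schema in the specialist prompt.
interface Finding {
  severity: "CRITICAL" | "INFORMATIONAL";
  confidence: number;
  path: string;
  line?: number;
  category: string;
  summary: string;
  specialist: string;
  fingerprint?: string;
}

// Explicit fingerprint wins; otherwise path:line:category (or path:category).
function fingerprintOf(f: Finding): string {
  if (f.fingerprint) return f.fingerprint;
  return f.line !== undefined ? `${f.path}:${f.line}:${f.category}` : `${f.path}:${f.category}`;
}

function mergeFindings(all: Finding[]): Finding[] {
  const byPrint = new Map<string, Finding[]>();
  for (const f of all) {
    const key = fingerprintOf(f);
    byPrint.set(key, [...(byPrint.get(key) ?? []), f]);
  }
  const merged: Finding[] = [];
  for (const group of byPrint.values()) {
    // Keep the highest-confidence finding; multi-specialist hits get +1, capped at 10.
    const best = group.reduce((a, b) => (b.confidence > a.confidence ? b : a));
    if (group.length > 1) {
      const names = group.map((g) => g.specialist).join(" + ");
      merged.push({
        ...best,
        confidence: Math.min(10, best.confidence + 1),
        summary: `MULTI-SPECIALIST CONFIRMED (${names}): ${best.summary}`,
      });
    } else {
      merged.push(best);
    }
  }
  return merged;
}

// quality_score = max(0, 10 - (critical*2 + informational*0.5)), capped at 10.
function qualityScore(findings: Finding[]): number {
  const critical = findings.filter((f) => f.severity === "CRITICAL").length;
  const info = findings.length - critical;
  return Math.min(10, Math.max(0, 10 - (critical * 2 + info * 0.5)));
}
```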
function generateRedTeam(ctx: TemplateContext): string {
  return `### Red Team dispatch (conditional)

**Activation:** Only if DIFF_LINES > 200 OR any specialist produced a CRITICAL finding.

If activated, dispatch one more subagent via the Agent tool (foreground, not background).

The Red Team subagent receives:
1. The red-team checklist from \`${ctx.paths.skillRoot}/review/specialists/red-team.md\`
2. The merged specialist findings from Step 4.6 (so it knows what was already caught)
3. The git diff command

Prompt: "You are a red team reviewer. The code has already been reviewed by N specialists
who found the following issues: {merged findings summary}. Your job is to find what they
MISSED. Read the checklist, run \`git diff origin/<base>\`, and look for gaps.
Output findings as JSON objects (same schema as the specialists). Focus on cross-cutting
concerns, integration boundary issues, and failure modes that specialist checklists
don't cover."

If the Red Team finds additional issues, merge them into the findings list before
Step 5 Fix-First. Red Team findings are tagged with \`"specialist":"red-team"\`.

If the Red Team returns NO FINDINGS, note: "Red Team review: no additional issues found."
If the Red Team subagent fails or times out, skip silently and continue.`;
}
export function generateReviewArmy(ctx: TemplateContext): string {
  // Codex host: strip entirely — Codex should not run Review Army
  if (ctx.host === 'codex') return '';

  const sections = [
    generateSpecialistSelection(ctx),
    generateSpecialistDispatch(ctx),
    generateFindingsMerge(ctx),
    generateRedTeam(ctx),
  ];

  return sections.join('\n\n---\n\n');
}
+144
-56
@@ -54,7 +54,7 @@ Display:
- **Eng Review (required by default):** The only review that gates shipping. Covers architecture, code quality, tests, performance. Can be disabled globally with \\\`gstack-config set skip_eng_review true\\\` (the "don't bother me" setting).
- **CEO Review (optional):** Use your judgment. Recommend it for big product/business changes, new user-facing features, or scope decisions. Skip for bug fixes, refactors, infra, and cleanup.
- **Design Review (optional):** Use your judgment. Recommend it for UI/UX changes. Skip for backend-only, infra, or prompt-only changes.
- **Adversarial Review (automatic):** Always-on for every review. Every diff gets both Claude adversarial subagent and Codex adversarial challenge. Large diffs (200+ lines) additionally get Codex structured review with P1 gate. No configuration needed.
- **Outside Voice (optional):** Independent plan review from a different AI model. Offered after all review sections complete in /plan-ceo-review and /plan-eng-review. Falls back to Claude subagent if Codex is unavailable. Never gates shipping.

**Verdict logic:**
@@ -359,6 +359,50 @@ SECOND OPINION (Claude subagent):
If A: revise the premise and note the revision. If B: proceed (and note that the user defended this premise with reasoning — this is a founder signal if they articulate WHY they disagree, not just dismiss).`;
}

// ─── Scope Drift Detection (shared between /review and /ship) ────────

export function generateScopeDrift(ctx: TemplateContext): string {
  const isShip = ctx.skillName === 'ship';
  const stepNum = isShip ? '3.48' : '1.5';

  return `## Step ${stepNum}: Scope Drift Detection

Before reviewing code quality, check: **did they build what was requested — nothing more, nothing less?**

1. Read \`TODOS.md\` (if it exists). Read the PR description (\`gh pr view --json body --jq .body 2>/dev/null || true\`).
   Read commit messages (\`git log origin/<base>..HEAD --oneline\`).
   **If no PR exists:** rely on commit messages and TODOS.md for stated intent — this is the common case since /review runs before /ship creates the PR.
2. Identify the **stated intent** — what was this branch supposed to accomplish?
3. Run \`git diff origin/<base>...HEAD --stat\` and compare the files changed against the stated intent.

4. Evaluate with skepticism (incorporating plan completion results if available from an earlier step or adjacent section):

**SCOPE CREEP detection:**
- Files changed that are unrelated to the stated intent
- New features or refactors not mentioned in the plan
- "While I was in there..." changes that expand blast radius

**MISSING REQUIREMENTS detection:**
- Requirements from TODOS.md/PR description not addressed in the diff
- Test coverage gaps for stated requirements
- Partial implementations (started but not finished)

5. Output (before the main review begins):
\\\`\\\`\\\`
Scope Check: [CLEAN / DRIFT DETECTED / REQUIREMENTS MISSING]
Intent: <1-line summary of what was requested>
Delivered: <1-line summary of what the diff actually does>
[If drift: list each out-of-scope change]
[If missing: list each unaddressed requirement]
\\\`\\\`\\\`

6. This is **INFORMATIONAL** — does not block the review. Proceed to the next step.

---`;
}
// ─── Adversarial Review (always-on) ──────────────────────────────────

export function generateAdversarialStep(ctx: TemplateContext): string {
  // Codex host: strip entirely — Codex should never invoke itself
  if (ctx.host === 'codex') return '';
@@ -366,9 +410,9 @@ export function generateAdversarialStep(ctx: TemplateContext): string {
  const isShip = ctx.skillName === 'ship';
  const stepNum = isShip ? '3.8' : '5.7';

  return `## Step ${stepNum}: Adversarial review (always-on)

Every diff gets adversarial review from both Claude and Codex. LOC is not a proxy for risk — a 5-line auth change can be critical.

**Detect diff size and tool availability:**

@@ -377,30 +421,34 @@ DIFF_INS=$(git diff origin/<base> --stat | tail -1 | grep -oE '[0-9]+ insertion'
DIFF_DEL=$(git diff origin/<base> --stat | tail -1 | grep -oE '[0-9]+ deletion' | grep -oE '[0-9]+' || echo "0")
DIFF_TOTAL=$((DIFF_INS + DIFF_DEL))
which codex 2>/dev/null && echo "CODEX_AVAILABLE" || echo "CODEX_NOT_AVAILABLE"
# Legacy opt-out — only gates Codex passes, Claude always runs
OLD_CFG=$(~/.claude/skills/gstack/bin/gstack-config get codex_reviews 2>/dev/null || true)
echo "DIFF_SIZE: $DIFF_TOTAL"
echo "OLD_CFG: \${OLD_CFG:-not_set}"
\`\`\`

If \`OLD_CFG\` is \`disabled\`: skip Codex passes only. Claude adversarial subagent still runs (it's free and fast). Jump to the "Claude adversarial subagent" section.

**User override:** If the user explicitly requested "full review", "structured review", or "P1 gate", also run the Codex structured review regardless of diff size.

---
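The `DIFF_TOTAL` arithmetic above can be restated in TypeScript. A minimal sketch: `diffTotal` is a hypothetical helper; the `--stat` summary-line format ("N files changed, N insertions(+), N deletions(-)", with either count possibly absent) is the same assumption the bash version makes with its `|| echo "0"` fallback.

```typescript
// Parse the summary line of `git diff --stat`, e.g.
// "12 files changed, 340 insertions(+), 58 deletions(-)".
// Either count may be missing (pure additions or pure deletions).
function diffTotal(statTail: string): number {
  const ins = /(\d+) insertion/.exec(statTail)?.[1] ?? "0";
  const del = /(\d+) deletion/.exec(statTail)?.[1] ?? "0";
  return parseInt(ins, 10) + parseInt(del, 10);
}
```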
### Claude adversarial subagent (always runs)

Dispatch via the Agent tool. The subagent has fresh context — no checklist bias from the structured review. This genuine independence catches things the primary reviewer is blind to.

Subagent prompt:
"Read the diff for this branch with \`git diff origin/<base>\`. Think like an attacker and a chaos engineer. Your job is to find ways this code will fail in production. Look for: edge cases, race conditions, security holes, resource leaks, failure modes, silent data corruption, logic errors that produce wrong results silently, error handling that swallows failures, and trust boundary violations. Be adversarial. Be thorough. No compliments — just the problems. For each finding, classify as FIXABLE (you know how to fix it) or INVESTIGATE (needs human judgment)."

Present findings under an \`ADVERSARIAL REVIEW (Claude subagent):\` header. **FIXABLE findings** flow into the same Fix-First pipeline as the structured review. **INVESTIGATE findings** are presented as informational.

If the subagent fails or times out: "Claude adversarial subagent unavailable. Continuing."

---

### Codex adversarial challenge (always runs when available)

If Codex is available AND \`OLD_CFG\` is NOT \`disabled\`:

\`\`\`bash
TMPERR_ADV=$(mktemp /tmp/codex-adv-XXXXXXXX)
@@ -420,34 +468,16 @@ Present the full output verbatim. This is informational — it never blocks ship
- **Timeout:** "Codex timed out after 5 minutes."
- **Empty response:** "Codex returned no response. Stderr: <paste relevant error>."

**Cleanup:** Run \`rm -f "$TMPERR_ADV"\` after processing.

If Codex is NOT available: "Codex CLI not found — running Claude adversarial only. Install Codex for cross-model coverage: \`npm install -g @openai/codex\`"

---
### Codex structured review (large diffs only, 200+ lines)

If \`DIFF_TOTAL >= 200\` AND Codex is available AND \`OLD_CFG\` is NOT \`disabled\`:

\`\`\`bash
TMPERR=$(mktemp /tmp/codex-review-XXXXXXXX)
_REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; }
@@ -468,34 +498,34 @@ B) Continue — review will still complete
|
||||
|
||||
If A: address the findings${isShip ? '. After fixing, re-run tests (Step 3) since code has changed' : ''}. Re-run \`codex review\` to verify.
|
||||
|
||||
Read stderr for errors (same error handling as medium tier).
|
||||
Read stderr for errors (same error handling as Codex adversarial above).
|
||||
|
||||
After stderr: \`rm -f "$TMPERR"\`
|
||||
|
||||
**2. Claude adversarial subagent:** Dispatch a subagent with the adversarial prompt (same prompt as medium tier). This always runs regardless of Codex availability.
|
||||
|
||||
**3. Codex adversarial challenge (if available):** Run \`codex exec\` with the adversarial prompt (same as medium tier).
|
||||
|
||||
If Codex is not available for steps 1 and 3, note to the user: "Codex CLI not found — large-diff review ran Claude structured + Claude adversarial (2 of 4 passes). Install Codex for full 4-pass coverage: \`npm install -g @openai/codex\`"
|
||||
|
||||
**Persist the review result AFTER all passes complete** (not after each sub-step):
|
||||
\`\`\`bash
|
||||
~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"adversarial-review","timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'","status":"STATUS","source":"SOURCE","tier":"large","gate":"GATE","commit":"'"$(git rev-parse --short HEAD)"'"}'
|
||||
\`\`\`
|
||||
Substitute: STATUS = "clean" if no findings across ALL passes, "issues_found" if any pass found issues. SOURCE = "both" if Codex ran, "claude" if only Claude subagent ran. GATE = the Codex structured review gate result ("pass"/"fail"), or "informational" if Codex was unavailable. If all passes failed, do NOT persist.
|
||||
If \`DIFF_TOTAL < 200\`: skip this section silently. The Claude + Codex adversarial passes provide sufficient coverage for smaller diffs.
|
||||
|
||||
---
|
||||
|
||||
### Cross-model synthesis (medium and large tiers)
|
||||
### Persist the review result
|
||||
|
||||
After all passes complete, persist:
|
||||
\`\`\`bash
|
||||
~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"adversarial-review","timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'","status":"STATUS","source":"SOURCE","tier":"always","gate":"GATE","commit":"'"$(git rev-parse --short HEAD)"'"}'
|
||||
\`\`\`
|
||||
Substitute: STATUS = "clean" if no findings across ALL passes, "issues_found" if any pass found issues. SOURCE = "both" if Codex ran, "claude" if only Claude subagent ran. GATE = the Codex structured review gate result ("pass"/"fail"), "skipped" if diff < 200, or "informational" if Codex was unavailable. If all passes failed, do NOT persist.

---

### Cross-model synthesis

After all passes complete, synthesize findings across all sources:

\`\`\`
ADVERSARIAL REVIEW SYNTHESIS (always-on, N lines):
════════════════════════════════════════════════════════════
High confidence (found by multiple sources): [findings agreed on by >1 pass]
Unique to Claude structured review: [from earlier step]
Unique to Claude adversarial: [from subagent]
Unique to Codex: [from codex adversarial or code review, if ran]
Models used: Claude structured ✓ Claude adversarial ✓/✗ Codex ✓/✗
════════════════════════════════════════════════════════════
\`\`\`
For each substantive tension point, use AskUserQuestion:

> "Cross-model disagreement on [topic]. The review found [X] but the outside voice
> argues [Y]. [One sentence on what context you might be missing.]"
>
> RECOMMENDATION: Choose [A or B] because [one-line reason explaining which argument
> is more compelling and why]. Completeness: A=X/10, B=Y/10.

Options:
- A) Accept the outside voice's recommendation (I'll apply this change)

After producing the completion checklist:

**Include in PR body (Step 8):** Add a \`## Plan Completion\` section with the checklist summary.`);
} else {
// review mode — enhanced Delivery Integrity (Release 2: Review Army)
sections.push(`

### Fallback Intent Sources (when no plan file found)

When no plan file is detected, use these secondary intent sources:

1. **Commit messages:** Run \`git log origin/<base>..HEAD --oneline\`. Use judgment to extract real intent:
   - Commits with actionable verbs ("add", "implement", "fix", "create", "remove", "update") are intent signals
   - Skip noise: "WIP", "tmp", "squash", "merge", "chore", "typo", "fixup"
   - Extract the intent behind the commit, not the literal message
2. **TODOS.md:** If it exists, check for items related to this branch or recent dates
3. **PR description:** Run \`gh pr view --json body -q .body 2>/dev/null\` for intent context

**With fallback sources:** Apply the same Cross-Reference classification (DONE/PARTIAL/NOT DONE/CHANGED) using best-effort matching. Note that fallback-sourced items are lower confidence than plan-file items.
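
The commit-message noise filter can be approximated mechanically before applying judgment (the base name and patterns below are illustrative, not part of the skill):

```bash
# Illustrative filter: drop noise commits, keep subjects with actionable verbs.
BASE="origin/main"   # placeholder for origin/<base>
git log "$BASE"..HEAD --oneline \
  | grep -ivE 'wip|tmp|squash|merge|chore|typo|fixup' \
  | grep -iE 'add|implement|fix|create|remove|update'
```

This is only a first pass: substring matches ("prefix" contains "fix") mean the surviving lines still need human-style judgment, as the instructions above require.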

### Investigation Depth

For each PARTIAL or NOT DONE item, investigate WHY:

1. Check \`git log origin/<base>..HEAD --oneline\` for commits that suggest the work was started, attempted, or reverted
2. Read the relevant code to understand what was built instead
3. Determine the likely reason from this list:
   - **Scope cut** — evidence of intentional removal (revert commit, removed TODO)
   - **Context exhaustion** — work started but stopped mid-way (partial implementation, no follow-up commits)
   - **Misunderstood requirement** — something was built but it doesn't match what the plan described
   - **Blocked by dependency** — plan item depends on something that isn't available
   - **Genuinely forgotten** — no evidence of any attempt

Output for each discrepancy:
\`\`\`
DISCREPANCY: {PARTIAL|NOT_DONE} | {plan item} | {what was actually delivered}
INVESTIGATION: {likely reason with evidence from git log / code}
IMPACT: {HIGH|MEDIUM|LOW} — {what breaks or degrades if this stays undelivered}
\`\`\`

### Learnings Logging (plan-file discrepancies only)

**Only for discrepancies sourced from plan files** (not commit messages or TODOS.md), log a learning so future sessions know this pattern occurred:

\`\`\`bash
~/.claude/skills/gstack/bin/gstack-learnings-log '{
  "type": "pitfall",
  "key": "plan-delivery-gap-KEBAB_SUMMARY",
  "insight": "Planned X but delivered Y because Z",
  "confidence": 8,
  "source": "observed",
  "files": ["PLAN_FILE_PATH"]
}'
\`\`\`

Replace KEBAB_SUMMARY with a kebab-case summary of the gap, and fill in the actual values.
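
A minimal sketch of deriving KEBAB_SUMMARY from a free-text gap description (this helper pipeline is an assumption, not part of the skill):

```bash
# Illustrative: lowercase, squeeze non-alphanumerics to "-", trim the edges.
GAP_DESC="Retry logic missing from upload path"
KEBAB_SUMMARY=$(printf '%s' "$GAP_DESC" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | sed 's/^-//; s/-$//')
echo "plan-delivery-gap-$KEBAB_SUMMARY"
# → plan-delivery-gap-retry-logic-missing-from-upload-path
```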

**Do NOT log learnings from commit-message-derived or TODOS.md-derived discrepancies.** These are informational in the review output but too noisy for durable memory.

### Integration with Scope Drift Detection

The plan completion results augment the existing Scope Drift Detection. If a plan file is found:

- **NOT DONE items** become additional evidence for **MISSING REQUIREMENTS** in the scope drift report.
- **Items in the diff that don't match any plan item** become evidence for **SCOPE CREEP** detection.
- **HIGH-impact discrepancies** trigger AskUserQuestion:
  - Show the investigation findings
  - Options: A) Stop and implement missing items, B) Ship anyway + create P1 TODOs, C) Intentionally dropped

This is **INFORMATIONAL** unless HIGH-impact discrepancies are found (then it gates via AskUserQuestion).

Update the scope drift output to include plan file context:

\`\`\`
Intent: <from plan file — 1-line summary>
Plan: <plan file path>
Delivered: <1-line summary of what the diff actually does>
Plan items: N DONE, M PARTIAL, K NOT DONE
[If NOT DONE: list each missing item with investigation]
[If scope creep: list each out-of-scope change not in the plan]
\`\`\`

**No plan file found:** Use commit messages and TODOS.md as fallback sources (see above). If no intent sources at all, skip with: "No intent sources detected — skipping completion audit."`);
}

return sections.join('\n');