Files
gstack/scripts/resolvers/utility.ts
T
Garry Tan 656df0e37e feat(v1.5.2.0): Opus 4.7 migration — model overlay, voice, routing (#1117)
* feat(v1.5.2.0): Opus 4.7 migration — model overlay, voice, routing

Adapts GStack skill text for Claude Opus 4.7's behavioral changes per
Anthropic's migration guide and community findings.

Key changes:

model-overlays/claude.md:
  - Fan out explicitly (4.7 spawns fewer subagents by default)
  - Effort-match the step (avoid overthinking simple tasks at max)
  - Batch questions in one AskUserQuestion turn
  - Literal interpretation awareness (deliver full scope)

hosts/claude.ts:
  - coAuthorTrailer updated to Claude Opus 4.7

SKILL.md.tmpl:
  - Expanded routing triggers with colloquial variants ("wtf",
    "this doesn't work", "send it", "where was I") — 4.7 won't
    generalize from sparse trigger patterns like 4.6 did
  - Added missing routes: /context-save, /context-restore, /cso, /make-pdf
  - Changed routing fallback from strict "do NOT answer directly" to
    "when in doubt, invoke the skill" — false positives are cheaper
    than false negatives on 4.7's literal interpreter

generate-voice-directive.ts:
  - Added concrete good/bad voice example — 4.7 needs shown examples,
    not just described tone. "auth.ts:47 returns undefined..." vs
    "I've identified a potential issue..."

Regenerated all 38 SKILL.md files. All tests pass.

* refactor(opus-4.7): split overlay, align routing, fix trailer fallback

Follow-up to wintermute's initial Opus 4.7 migration commit (addresses
ship-quality review findings before v1.6.1.0 release).

Overlay split (model-overlays/):
  - Move 4 Opus-4.7-specific nudges (Fan out, Effort-match, Batch your
    questions, Literal interpretation) from claude.md into new
    opus-4-7.md with {{INHERIT:claude}}
  - claude.md now holds only model-agnostic nudges (Todo discipline,
    Think before heavy, Dedicated tools over Bash)
  - Prevents Opus-4.7-specific guidance leaking onto Sonnet/Haiku
  - Uses existing {{INHERIT:claude}} mechanism at
    scripts/resolvers/model-overlay.ts:28-43

scripts/models.ts:
  - Add opus-4-7 to ALL_MODEL_NAMES
  - resolveModel: claude-opus-4-7-* variants route to opus-4-7,
    all other claude-* variants continue to route to claude

scripts/resolvers/utility.ts:
  - Update coAuthor trailer fallback: Opus 4.6 -> Opus 4.7
    (fallback was missed in the initial migration commit)

scripts/resolvers/preamble/generate-routing-injection.ts:
  - Align policy with new SKILL.md.tmpl: soft "when in doubt, invoke"
    instead of hard "ALWAYS invoke... Do NOT answer directly"
  - Replace stale /checkpoint reference with /context-save +
    /context-restore (skills were renamed in v1.0.1.0)
  - Expand route coverage to match full skill inventory:
    /plan-devex-review, /qa-only, /devex-review, /land-and-deploy,
    /setup-deploy, /canary, /open-gstack-browser,
    /setup-browser-cookies, /benchmark, /learn, /plan-tune, /health

scripts/resolvers/preamble/generate-voice-directive.ts:
  - Voice example closing: "Want me to ship it?" -> "Want me to fix it?"
  - Preserves directness while routing through review gates

SKILL.md.tmpl:
  - Add routing triggers for skills that were missing from the list:
    /plan-devex-review, /qa-only, /devex-review, /land-and-deploy,
    /setup-deploy, /canary, /open-gstack-browser,
    /setup-browser-cookies, /benchmark, /learn, /plan-tune, /health
  - Within Opus 4.7 overlay, added scope boundary to
    "Literal interpretation" nudge ("fix tests that this branch
    introduced or is responsible for")
  - Added pacing exception to "Batch your questions" nudge so skills
    that require one-question-at-a-time pacing still win

Follow-up commit will regenerate SKILL.md files + update goldens.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(opus-4.7): regenerate SKILL.md files + update golden fixtures

Mechanical consequence of the preceding source changes (overlay split,
routing alignment, voice example, routing expansion). No behavior change
beyond what that commit introduced.

- 36 SKILL.md files regenerated via bun run gen:skill-docs
- 3 golden fixtures updated (claude, codex, factory ship skill)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(routing): assert slash-prefixed skills + new policy + current names

Align gen-skill-docs.test.ts routing assertions with the remediated
routing-injection output:

- Expect '/office-hours' slash-prefixed form (matches SKILL.md.tmpl style)
- Add test asserting /context-save + /context-restore references
  (guards against stale '/checkpoint' name regression)
- Add test asserting "When in doubt, invoke the skill" soft policy
  (guards against "Do NOT answer directly" hard policy regression)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(binary-guard): replace xargs-per-file loops with fs.statSync + mode filter

The "no compiled binaries in git" describe block had two flaky tests:

- "git tracks no files larger than 2MB" timed out at 5s regularly because
  it spawned one `sh -c` per tracked file via `xargs -I{}` (~571 shells
  on every run, ~11s locally).
- "git tracks no Mach-O or ELF binaries" ran `file --mime-type` over every
  tracked file (~3-10s, flaky near the timeout).

Both were pre-existing — not caused by any recent change — but showed up
as red in every local `bun test` run and masked legit failures in the
same suite.

Rewrites:

- 2MB test: `fs.statSync(f).size` in a filter. Millisecond-fast.
- Mach-O test: pre-filter to mode 100755 files via `git ls-files -s`,
  then batch-invoke `file --mime-type` once across all executables.
  With zero executables tracked, the `file` invocation is skipped.

Test suite: 320 pass, 0 fail, 907ms (was ~12.7s with 2 fails).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(team-mode): give setup -q / setup --local tests a 3-minute budget

./setup runs a full install, Bun binary build, and skill regeneration.
On a cold cache it takes 60-90s, comfortably above bun test's 5s default.
Both "setup -q produces no stdout" and "setup --local prints deprecation
warning" have been flaky-to-failing for a while with [5001.78ms] timeouts.

The test logic was fine, the budget wasn't. Bumped both to 180s via the
third-arg timeout.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(opus-4.7): E2E eval for fanout rate + routing precision

Closes the measurement gap flagged by the ship-quality review: "zero
tests exercise Opus 4.7 behavior; every skill-e2e hardcodes 4.6."

Two cases, both pinned to claude-opus-4-7:

1. Fanout rate (A/B)
   - Arm A: regen SKILL.md with --model opus-4-7 (overlay ON, includes
     "Fan out explicitly" nudge).
   - Arm B: regen SKILL.md with --model claude (overlay OFF, only
     model-agnostic nudges).
   - Prompt: "Read alpha.txt, beta.txt, gamma.txt. These are independent."
   - Measure: parallel tool calls in first assistant turn.
   - Assert: arm A >= arm B.

2. Routing precision (6-case mini-benchmark)
   - 3 positive prompts that should route (wtf bug, send it, does it work)
   - 3 negative prompts that match keywords but should NOT route
     (syntax question, algorithm question, slack message)
   - Assert: TP rate >= 66%, FP rate <= 33%.

Cost estimate: ~$3-5 per full run. Classified as periodic tier per
CLAUDE.md convention (Opus model, non-deterministic). Runs only with
EVALS=1 env var, touchfile-gated so unrelated diffs don't trigger it.

Test plan artifact at
~/.gstack/projects/garrytan-gstack/garrytan-feat-opus-4.7-migration-eng-review-test-plan-20260421-230611.md
tracks the full specification.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(opus-4.7): rewrite fanout nudge to show parallel tool_use pattern

The original fanout nudge told 4.7 to "spawn subagents in the same turn"
and "run independent checks concurrently" in prose. An E2E eval on
claude-opus-4-7 reading 3 independent files showed zero effect: both
overlay-ON and overlay-OFF arms emitted serial Reads across 3-4 turns.

Rewrite follows the same "show not tell" principle the PR introduced for
voice examples. The nudge now includes a concrete wrong/right contrast
showing the exact tool_use structure:

  Wrong (3 turns):
    Turn 1: Read(foo.ts), then wait
    Turn 2: Read(bar.ts), then wait
    Turn 3: Read(baz.ts)

  Right (1 turn, 3 parallel tool_use blocks in one assistant message):
    Turn 1: [Read(foo.ts), Read(bar.ts), Read(baz.ts)]

Applies to Read, Bash, Grep, Glob, WebFetch, Agent, and any tool where
sub-calls don't depend on each other's output.

Effect on test/skill-e2e-opus-47.test.ts fanout eval: unchanged (both
arms still 0 parallel in first turn via `claude -p`). May land better in
Claude Code's interactive harness, where the system prompt + tool
handlers differ. Tracked as P0 TODO for follow-up verification in the
correct harness.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(opus-4.7): tighten ambiguous /qa routing prompt

"does this feature work on mobile? can you check the deploy?" was too
vague — a reasonable agent asks "which feature?" via AskUserQuestion
instead of routing to /qa. That's not a routing miss, it's an under-
specified prompt.

Replaced with "I just pushed the login flow changes. Test the deployed
site and find any bugs." — concrete subject + clear QA verb.

Result: pos-does-it-work went from MISS to OK, routing TP rate 2/3 -> 3/3.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(opus-4.7): rewrite scratch-root helper + add afterAll cleanup

First run of the Opus 4.7 eval exposed two test-setup gaps that made
results misleading:

- Only the root gstack SKILL.md was installed. Claude Code does
  auto-discovery per-directory under .claude/skills/{name}/SKILL.md, so
  without individual skill dirs the Skill tool had nothing to route to.
  Positive routing cases all failed.
- `claude -p` does not load SKILL.md content as system context the way
  the Claude Code harness does. The overlay nudges in SKILL.md were
  invisible to the model, so the fanout A/B could not actually differ.

New `mkEvalRoot(suffix, includeOverlay)` helper, modelled on the pattern
in skill-routing-e2e.test.ts:

- Installs per-skill SKILL.md under .claude/skills/ for ~14 key skills
  so the Skill tool has discoverable targets.
- Writes an explicit routing block into project CLAUDE.md.
- When includeOverlay is true, inlines the content of
  model-overlays/opus-4-7.md into CLAUDE.md too. This is what makes the
  fanout A/B observable in `claude -p`: arm ON gets the overlay in
  context, arm OFF does not.

Plus an afterAll that re-runs gen-skill-docs at the default model so
the working tree is not left with opus-4-7-generated SKILL.md files
after the eval finishes (would break golden-file tests in the next
`bun test` run otherwise).

With this setup in place: routing went from 3/3 FAIL to 3/3 PASS
(correct skill or clarification in every positive case, zero false
positives on negatives). Fanout A/B is now a fair comparison; still
shows 0 parallel in both arms under `claude -p` (tracked as a P0 TODO
for re-measurement inside Claude Code's harness, where fanout may land
differently).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(todos): verify Opus 4.7 fanout nudge in Claude Code harness (P0)

v1.6.1.0 shipped a rewritten "Fan out explicitly" nudge with a concrete
tool_use example. Under `claude -p` on claude-opus-4-7, the A/B eval
showed zero parallel tool calls in the first turn for both arms
(overlay ON and OFF). Routing verified 3/3 in the same harness, so the
gap is specific to fanout and likely to `claude -p`'s system prompt +
tool wiring.

This TODO closes the measurement loop the ship-quality review flagged:
re-run the fanout A/B inside Claude Code's real harness (or a faithful
replica) before landing another Opus migration claim.

P0 because it is a ship-quality commitment from the v1.6.1.0 release
notes, not a nice-to-have.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(release): v1.6.1.0 — Opus 4.7 migration, reviewed

Bump VERSION + package.json from 1.6.0.0 to 1.6.1.0. New CHANGELOG
entry describing the ship-quality remediation of PR #1117:

- Overlay split (model-agnostic claude.md + opus-4-7.md with INHERIT)
- Routing-injection aligned with SKILL.md.tmpl ("when in doubt" policy,
  current skill names, full skill inventory)
- utility.ts trailer fallback updated
- Voice example closes through review gate instead of ship-bypass
- Literal-interpretation nudge bounded to branch scope
- Batch-questions nudge has explicit pacing exception
- First Opus 4.7 eval: routing verified 3/3, fanout A/B unverified
  under `claude -p` (tracked as P0 TODO for next rev)
- Pre-existing test failures fixed: fs.statSync binary guard, 180s
  setup timeout, golden-file updates

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(opus-4.7): key touchfile entries by testName, not describe text

TOUCHFILES completeness scan in test/touchfiles.test.ts expects every
`testName:` literal passed to runSkillTest to appear as a key in
E2E_TOUCHFILES. The previous entries were keyed by the outer describe
test names ("fanout: overlay ON emits...") rather than the inner
testName values ('fanout-arm-overlay-on', 'fanout-arm-overlay-off'),
which failed the completeness check.

Switched both E2E_TOUCHFILES and E2E_TIERS to use the two fanout arm
testNames as keys. The routing sub-tests use a template literal
(`routing-${c.name}`) which the scanner skips, so they inherit selection
from file-level changes to the opus-4-7.md / routing-injection.ts paths
already covered by the fanout entries.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: gstack <ship@gstack.dev>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-22 01:06:22 -07:00

418 lines
17 KiB
TypeScript
Raw Blame History

This file contains ambiguous Unicode characters
This file contains Unicode characters that might be confused with other characters. If you think that this is intentional, you can safely ignore this warning. Use the Escape button to reveal them.
import type { TemplateContext } from './types';
export function generateSlugEval(ctx: TemplateContext): string {
return `eval "$(${ctx.paths.binDir}/gstack-slug 2>/dev/null)"`;
}
export function generateSlugSetup(ctx: TemplateContext): string {
return `eval "$(${ctx.paths.binDir}/gstack-slug 2>/dev/null)" && mkdir -p ~/.gstack/projects/$SLUG`;
}
export function generateBaseBranchDetect(_ctx: TemplateContext): string {
return `## Step 0: Detect platform and base branch
First, detect the git hosting platform from the remote URL:
\`\`\`bash
git remote get-url origin 2>/dev/null
\`\`\`
- If the URL contains "github.com" → platform is **GitHub**
- If the URL contains "gitlab" → platform is **GitLab**
- Otherwise, check CLI availability:
- \`gh auth status 2>/dev/null\` succeeds → platform is **GitHub** (covers GitHub Enterprise)
- \`glab auth status 2>/dev/null\` succeeds → platform is **GitLab** (covers self-hosted)
- Neither → **unknown** (use git-native commands only)
Determine which branch this PR/MR targets, or the repo's default branch if no
PR/MR exists. Use the result as "the base branch" in all subsequent steps.
**If GitHub:**
1. \`gh pr view --json baseRefName -q .baseRefName\` — if succeeds, use it
2. \`gh repo view --json defaultBranchRef -q .defaultBranchRef.name\` — if succeeds, use it
**If GitLab:**
1. \`glab mr view -F json 2>/dev/null\` and extract the \`target_branch\` field — if succeeds, use it
2. \`glab repo view -F json 2>/dev/null\` and extract the \`default_branch\` field — if succeeds, use it
**Git-native fallback (if unknown platform, or CLI commands fail):**
1. \`git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null | sed 's|refs/remotes/origin/||'\`
2. If that fails: \`git rev-parse --verify origin/main 2>/dev/null\` → use \`main\`
3. If that fails: \`git rev-parse --verify origin/master 2>/dev/null\` → use \`master\`
If all fail, fall back to \`main\`.
Print the detected base branch name. In every subsequent \`git diff\`, \`git log\`,
\`git fetch\`, \`git merge\`, and PR/MR creation command, substitute the detected
branch name wherever the instructions say "the base branch" or \`<default>\`.
---`;
}
export function generateDeployBootstrap(_ctx: TemplateContext): string {
return `\`\`\`bash
# Check for persisted deploy config in CLAUDE.md
DEPLOY_CONFIG=$(grep -A 20 "## Deploy Configuration" CLAUDE.md 2>/dev/null || echo "NO_CONFIG")
echo "$DEPLOY_CONFIG"
# If config exists, parse it
if [ "$DEPLOY_CONFIG" != "NO_CONFIG" ]; then
PROD_URL=$(echo "$DEPLOY_CONFIG" | grep -i "production.*url" | head -1 | sed 's/.*: *//')
PLATFORM=$(echo "$DEPLOY_CONFIG" | grep -i "platform" | head -1 | sed 's/.*: *//')
echo "PERSISTED_PLATFORM:$PLATFORM"
echo "PERSISTED_URL:$PROD_URL"
fi
# Auto-detect platform from config files
[ -f fly.toml ] && echo "PLATFORM:fly"
[ -f render.yaml ] && echo "PLATFORM:render"
([ -f vercel.json ] || [ -d .vercel ]) && echo "PLATFORM:vercel"
[ -f netlify.toml ] && echo "PLATFORM:netlify"
[ -f Procfile ] && echo "PLATFORM:heroku"
([ -f railway.json ] || [ -f railway.toml ]) && echo "PLATFORM:railway"
# Detect deploy workflows
for f in $(find .github/workflows -maxdepth 1 \\( -name '*.yml' -o -name '*.yaml' \\) 2>/dev/null); do
[ -f "$f" ] && grep -qiE "deploy|release|production|cd" "$f" 2>/dev/null && echo "DEPLOY_WORKFLOW:$f"
[ -f "$f" ] && grep -qiE "staging" "$f" 2>/dev/null && echo "STAGING_WORKFLOW:$f"
done
\`\`\`
If \`PERSISTED_PLATFORM\` and \`PERSISTED_URL\` were found in CLAUDE.md, use them directly
and skip manual detection. If no persisted config exists, use the auto-detected platform
to guide deploy verification. If nothing is detected, ask the user via AskUserQuestion
in the decision tree below.
If you want to persist deploy settings for future runs, suggest the user run \`/setup-deploy\`.`;
}
export function generateQAMethodology(_ctx: TemplateContext): string {
return `## Modes
### Diff-aware (automatic when on a feature branch with no URL)
This is the **primary mode** for developers verifying their work. When the user says \`/qa\` without a URL and the repo is on a feature branch, automatically:
1. **Analyze the branch diff** to understand what changed:
\`\`\`bash
git diff main...HEAD --name-only
git log main..HEAD --oneline
\`\`\`
2. **Identify affected pages/routes** from the changed files:
- Controller/route files → which URL paths they serve
- View/template/component files → which pages render them
- Model/service files → which pages use those models (check controllers that reference them)
- CSS/style files → which pages include those stylesheets
- API endpoints → test them directly with \`$B js "await fetch('/api/...')"\`
- Static pages (markdown, HTML) → navigate to them directly
**If no obvious pages/routes are identified from the diff:** Do not skip browser testing. The user invoked /qa because they want browser-based verification. Fall back to Quick mode — navigate to the homepage, follow the top 5 navigation targets, check console for errors, and test any interactive elements found. Backend, config, and infrastructure changes affect app behavior — always verify the app still works.
3. **Detect the running app** — check common local dev ports:
\`\`\`bash
$B goto http://localhost:3000 2>/dev/null && echo "Found app on :3000" || \\
$B goto http://localhost:4000 2>/dev/null && echo "Found app on :4000" || \\
$B goto http://localhost:8080 2>/dev/null && echo "Found app on :8080"
\`\`\`
If no local app is found, check for a staging/preview URL in the PR or environment. If nothing works, ask the user for the URL.
4. **Test each affected page/route:**
- Navigate to the page
- Take a screenshot
- Check console for errors
- If the change was interactive (forms, buttons, flows), test the interaction end-to-end
- Use \`snapshot -D\` before and after actions to verify the change had the expected effect
5. **Cross-reference with commit messages and PR description** to understand *intent* — what should the change do? Verify it actually does that.
6. **Check TODOS.md** (if it exists) for known bugs or issues related to the changed files. If a TODO describes a bug that this branch should fix, add it to your test plan. If you find a new bug during QA that isn't in TODOS.md, note it in the report.
7. **Report findings** scoped to the branch changes:
- "Changes tested: N pages/routes affected by this branch"
- For each: does it work? Screenshot evidence.
- Any regressions on adjacent pages?
**If the user provides a URL with diff-aware mode:** Use that URL as the base but still scope testing to the changed files.
### Full (default when URL is provided)
Systematic exploration. Visit every reachable page. Document 5-10 well-evidenced issues. Produce health score. Takes 5-15 minutes depending on app size.
### Quick (\`--quick\`)
30-second smoke test. Visit homepage + top 5 navigation targets. Check: page loads? Console errors? Broken links? Produce health score. No detailed issue documentation.
### Regression (\`--regression <baseline>\`)
Run full mode, then load \`baseline.json\` from a previous run. Diff: which issues are fixed? Which are new? What's the score delta? Append regression section to report.
---
## Workflow
### Phase 1: Initialize
1. Find browse binary (see Setup above)
2. Create output directories
3. Copy report template from \`qa/templates/qa-report-template.md\` to output dir
4. Start timer for duration tracking
### Phase 2: Authenticate (if needed)
**If the user specified auth credentials:**
\`\`\`bash
$B goto <login-url>
$B snapshot -i # find the login form
$B fill @e3 "user@example.com"
$B fill @e4 "[REDACTED]" # NEVER include real passwords in report
$B click @e5 # submit
$B snapshot -D # verify login succeeded
\`\`\`
**If the user provided a cookie file:**
\`\`\`bash
$B cookie-import cookies.json
$B goto <target-url>
\`\`\`
**If 2FA/OTP is required:** Ask the user for the code and wait.
**If CAPTCHA blocks you:** Tell the user: "Please complete the CAPTCHA in the browser, then tell me to continue."
### Phase 3: Orient
Get a map of the application:
\`\`\`bash
$B goto <target-url>
$B snapshot -i -a -o "$REPORT_DIR/screenshots/initial.png"
$B links # map navigation structure
$B console --errors # any errors on landing?
\`\`\`
**Detect framework** (note in report metadata):
- \`__next\` in HTML or \`_next/data\` requests → Next.js
- \`csrf-token\` meta tag → Rails
- \`wp-content\` in URLs → WordPress
- Client-side routing with no page reloads → SPA
**For SPAs:** The \`links\` command may return few results because navigation is client-side. Use \`snapshot -i\` to find nav elements (buttons, menu items) instead.
### Phase 4: Explore
Visit pages systematically. At each page:
\`\`\`bash
$B goto <page-url>
$B snapshot -i -a -o "$REPORT_DIR/screenshots/page-name.png"
$B console --errors
\`\`\`
Then follow the **per-page exploration checklist** (see \`qa/references/issue-taxonomy.md\`):
1. **Visual scan** — Look at the annotated screenshot for layout issues
2. **Interactive elements** — Click buttons, links, controls. Do they work?
3. **Forms** — Fill and submit. Test empty, invalid, edge cases
4. **Navigation** — Check all paths in and out
5. **States** — Empty state, loading, error, overflow
6. **Console** — Any new JS errors after interactions?
7. **Responsiveness** — Check mobile viewport if relevant:
\`\`\`bash
$B viewport 375x812
$B screenshot "$REPORT_DIR/screenshots/page-mobile.png"
$B viewport 1280x720
\`\`\`
**Depth judgment:** Spend more time on core features (homepage, dashboard, checkout, search) and less on secondary pages (about, terms, privacy).
**Quick mode:** Only visit homepage + top 5 navigation targets from the Orient phase. Skip the per-page checklist — just check: loads? Console errors? Broken links visible?
### Phase 5: Document
Document each issue **immediately when found** — don't batch them.
**Two evidence tiers:**
**Interactive bugs** (broken flows, dead buttons, form failures):
1. Take a screenshot before the action
2. Perform the action
3. Take a screenshot showing the result
4. Use \`snapshot -D\` to show what changed
5. Write repro steps referencing screenshots
\`\`\`bash
$B screenshot "$REPORT_DIR/screenshots/issue-001-step-1.png"
$B click @e5
$B screenshot "$REPORT_DIR/screenshots/issue-001-result.png"
$B snapshot -D
\`\`\`
**Static bugs** (typos, layout issues, missing images):
1. Take a single annotated screenshot showing the problem
2. Describe what's wrong
\`\`\`bash
$B snapshot -i -a -o "$REPORT_DIR/screenshots/issue-002.png"
\`\`\`
**Write each issue to the report immediately** using the template format from \`qa/templates/qa-report-template.md\`.
### Phase 6: Wrap Up
1. **Compute health score** using the rubric below
2. **Write "Top 3 Things to Fix"** — the 3 highest-severity issues
3. **Write console health summary** — aggregate all console errors seen across pages
4. **Update severity counts** in the summary table
5. **Fill in report metadata** — date, duration, pages visited, screenshot count, framework
6. **Save baseline** — write \`baseline.json\` with:
\`\`\`json
{
"date": "YYYY-MM-DD",
"url": "<target>",
"healthScore": N,
"issues": [{ "id": "ISSUE-001", "title": "...", "severity": "...", "category": "..." }],
"categoryScores": { "console": N, "links": N, ... }
}
\`\`\`
**Regression mode:** After writing the report, load the baseline file. Compare:
- Health score delta
- Issues fixed (in baseline but not current)
- New issues (in current but not baseline)
- Append the regression section to the report
---
## Health Score Rubric
Compute each category score (0-100), then take the weighted average.
### Console (weight: 15%)
- 0 errors → 100
- 1-3 errors → 70
- 4-10 errors → 40
- 10+ errors → 10
### Links (weight: 10%)
- 0 broken → 100
- Each broken link → -15 (minimum 0)
### Per-Category Scoring (Visual, Functional, UX, Content, Performance, Accessibility)
Each category starts at 100. Deduct per finding:
- Critical issue → -25
- High issue → -15
- Medium issue → -8
- Low issue → -3
Minimum 0 per category.
### Weights
| Category | Weight |
|----------|--------|
| Console | 15% |
| Links | 10% |
| Visual | 10% |
| Functional | 20% |
| UX | 15% |
| Performance | 10% |
| Content | 5% |
| Accessibility | 15% |
### Final Score
\`score = Σ (category_score × weight)\`
---
## Framework-Specific Guidance
### Next.js
- Check console for hydration errors (\`Hydration failed\`, \`Text content did not match\`)
- Monitor \`_next/data\` requests in network — 404s indicate broken data fetching
- Test client-side navigation (click links, don't just \`goto\`) — catches routing issues
- Check for CLS (Cumulative Layout Shift) on pages with dynamic content
### Rails
- Check for N+1 query warnings in console (if development mode)
- Verify CSRF token presence in forms
- Test Turbo/Stimulus integration — do page transitions work smoothly?
- Check for flash messages appearing and dismissing correctly
### WordPress
- Check for plugin conflicts (JS errors from different plugins)
- Verify admin bar visibility for logged-in users
- Test REST API endpoints (\`/wp-json/\`)
- Check for mixed content warnings (common with WP)
### General SPA (React, Vue, Angular)
- Use \`snapshot -i\` for navigation — \`links\` command misses client-side routes
- Check for stale state (navigate away and back — does data refresh?)
- Test browser back/forward — does the app handle history correctly?
- Check for memory leaks (monitor console after extended use)
---
## Important Rules
1. **Repro is everything.** Every issue needs at least one screenshot. No exceptions.
2. **Verify before documenting.** Retry the issue once to confirm it's reproducible, not a fluke.
3. **Never include credentials.** Write \`[REDACTED]\` for passwords in repro steps.
4. **Write incrementally.** Append each issue to the report as you find it. Don't batch.
5. **Never read source code.** Test as a user, not a developer.
6. **Check console after every interaction.** JS errors that don't surface visually are still bugs.
7. **Test like a user.** Use realistic data. Walk through complete workflows end-to-end.
8. **Depth over breadth.** 5-10 well-documented issues with evidence > 20 vague descriptions.
9. **Never delete output files.** Screenshots and reports accumulate — that's intentional.
10. **Use \`snapshot -C\` for tricky UIs.** Finds clickable divs that the accessibility tree misses.
11. **Show screenshots to the user.** After every \`$B screenshot\`, \`$B snapshot -a -o\`, or \`$B responsive\` command, use the Read tool on the output file(s) so the user can see them inline. For \`responsive\` (3 files), Read all three. This is critical — without it, screenshots are invisible to the user.
12. **Never refuse to use the browser.** When the user invokes /qa or /qa-only, they are requesting browser-based testing. Never suggest evals, unit tests, or other alternatives as a substitute. Even if the diff appears to have no UI changes, backend changes affect app behavior — always open the browser and test.`;
}
export function generateCoAuthorTrailer(ctx: TemplateContext): string {
const { getHostConfig } = require('../../hosts/index');
const hostConfig = getHostConfig(ctx.host);
return hostConfig.coAuthorTrailer || 'Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>';
}
export function generateChangelogWorkflow(_ctx: TemplateContext): string {
return `## Step 13: CHANGELOG (auto-generate)
1. Read \`CHANGELOG.md\` header to know the format.
2. **First, enumerate every commit on the branch:**
\`\`\`bash
git log <base>..HEAD --oneline
\`\`\`
Copy the full list. Count the commits. You will use this as a checklist.
3. **Read the full diff** to understand what each commit actually changed:
\`\`\`bash
git diff <base>...HEAD
\`\`\`
4. **Group commits by theme** before writing anything. Common themes:
- New features / capabilities
- Performance improvements
- Bug fixes
- Dead code removal / cleanup
- Infrastructure / tooling / tests
- Refactoring
5. **Write the CHANGELOG entry** covering ALL groups:
- If existing CHANGELOG entries on the branch already cover some commits, replace them with one unified entry for the new version
- Categorize changes into applicable sections:
- \`### Added\` — new features
- \`### Changed\` — changes to existing functionality
- \`### Fixed\` — bug fixes
- \`### Removed\` — removed features
- Write concise, descriptive bullet points
- Insert after the file header (line 5), dated today
- Format: \`## [X.Y.Z.W] - YYYY-MM-DD\`
- **Voice:** Lead with what the user can now **do** that they couldn't before. Use plain language, not implementation details. Never mention TODOS.md, internal tracking, or contributor-facing details.
6. **Cross-check:** Compare your CHANGELOG entry against the commit list from step 2.
Every commit must map to at least one bullet point. If any commit is unrepresented,
add it now. If the branch has N commits spanning K themes, the CHANGELOG must
reflect all K themes.
**Do NOT ask the user to describe changes.** Infer from the diff and commit history.`;
}