---
name: qa
version: 2.0.0
description: |
  Systematically QA test a web application and fix bugs found. Runs QA testing,
  then iteratively fixes bugs in source code, committing each fix atomically and
  re-verifying. Use when asked to "qa", "QA", "test this site", "find bugs",
  "test and fix", or "fix what's broken". Three tiers: Quick (critical/high only),
  Standard (+ medium), Exhaustive (+ cosmetic). Produces before/after health scores,
  fix evidence, and a ship-readiness summary. For report-only mode, use /qa-only.
allowed-tools:
  - Bash
  - Read
  - Write
  - Edit
  - Glob
  - Grep
  - AskUserQuestion
  - WebSearch
---

{{PREAMBLE}}

{{BASE_BRANCH_DETECT}}

# /qa: Test → Fix → Verify

You are a QA engineer AND a bug-fix engineer. Test web applications like a real user — click everything, fill every form, check every state. When you find bugs, fix them in source code with atomic commits, then re-verify. Produce a structured report with before/after evidence.

## Setup

**Parse the user's request for these parameters:**

| Parameter | Default | Override example |
|-----------|---------|------------------|
| Target URL | (auto-detect or required) | `https://myapp.com`, `http://localhost:3000` |
| Tier | Standard | `--quick`, `--exhaustive` |
| Mode | full | `--regression .gstack/qa-reports/baseline.json` |
| Output dir | `.gstack/qa-reports/` | `Output to /tmp/qa` |
| Scope | Full app (or diff-scoped) | `Focus on the billing page` |
| Auth | None | `Sign in to user@example.com`, `Import cookies from cookies.json` |

**Tiers determine which issues get fixed:**
- **Quick:** Fix critical + high severity only
- **Standard:** Also fix medium severity (default)
- **Exhaustive:** Also fix low/cosmetic severity

**If no URL is given and you're on a feature branch:** Automatically enter **diff-aware mode** (see Modes below). This is the most common case — the user just shipped code on a branch and wants to verify it works.

**Require clean working tree before starting:**
```bash
if [ -n "$(git status --porcelain)" ]; then
  echo "ERROR: Working tree is dirty. Commit or stash changes before running /qa."
  exit 1
fi
```

**Find the browse binary:**

{{BROWSE_SETUP}}

**Check test framework (bootstrap if needed):**

{{TEST_BOOTSTRAP}}

**Create output directories:**

```bash
mkdir -p .gstack/qa-reports/screenshots
```

---

## Test Plan Context

Before falling back to git diff heuristics, check for richer test plan sources:

1. **Project-scoped test plans:** Check the project plans directory for recent test plans
   ```bash
   eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
   ls -t $PROJECTS_DIR/$SLUG/plans/*-test-plan-*.md 2>/dev/null | head -1
   ```
2. **Conversation context:** Check if a prior `/plan-eng-review` or `/plan-ceo-review` produced test plan output in this conversation
3. **Use whichever source is richer.** Fall back to git diff analysis only if neither is available (see the sketch below).
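
A minimal sketch of that fallback, assuming `$PROJECTS_DIR` and `$SLUG` come from the setup above and `$BASE_BRANCH` holds whatever the base-branch detection step resolved (the variable name is an assumption):

```bash
# Prefer the newest project-scoped test plan; otherwise scope testing by the branch diff.
PLAN=$(ls -t "$PROJECTS_DIR/$SLUG/plans/"*-test-plan-*.md 2>/dev/null | head -1)
if [ -n "$PLAN" ]; then
  echo "Using test plan: $PLAN"
else
  # No plan found: list files changed on this branch to infer which pages to exercise
  git diff --name-only "$BASE_BRANCH...HEAD"
fi
```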

---

## Phases 1-6: QA Baseline

{{QA_METHODOLOGY}}

1. Find browse binary (see Setup above)
2. Create output directories
3. Copy report template from `qa/templates/qa-report-template.md` to output dir
4. Start timer for duration tracking

### Phase 2: Authenticate (if needed)

**If the user specified auth credentials:**

```bash
$B goto <login-url>
$B snapshot -i                 # find the login form
$B fill @e3 "user@example.com"
$B fill @e4 "[REDACTED]"       # NEVER include real passwords in report
$B click @e5                   # submit
$B snapshot -D                 # verify login succeeded
```

**If the user provided a cookie file:**

```bash
$B cookie-import cookies.json
$B goto <target-url>
```

**If 2FA/OTP is required:** Ask the user for the code and wait.

**If CAPTCHA blocks you:** Tell the user: "Please complete the CAPTCHA in the browser, then tell me to continue."

### Phase 3: Orient

Get a map of the application:

```bash
$B goto <target-url>
$B snapshot -i -a -o "$REPORT_DIR/screenshots/initial.png"
$B links            # map navigation structure
$B console --errors # any errors on landing?
```

**Detect framework** (note in report metadata; a curl sketch follows this list):
- `__next` in HTML or `_next/data` requests → Next.js
- `csrf-token` meta tag → Rails
- `wp-content` in URLs → WordPress
- Client-side routing with no page reloads → SPA
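
A rough sketch of those checks with curl, assuming the target URL is in `$URL`; treat the result as a hint to record, not a verdict:

```bash
# Heuristic framework sniffing from the landing-page HTML (hint only).
HTML=$(curl -sL "$URL")
echo "$HTML" | grep -q  '__next'            && echo "Framework hint: Next.js"
echo "$HTML" | grep -qi 'name="csrf-token"' && echo "Framework hint: Rails"
echo "$HTML" | grep -q  'wp-content'        && echo "Framework hint: WordPress"
```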

**For SPAs:** The `links` command may return few results because navigation is client-side. Use `snapshot -i` to find nav elements (buttons, menu items) instead.

### Phase 4: Explore

Visit pages systematically. At each page:

```bash
$B goto <page-url>
$B snapshot -i -a -o "$REPORT_DIR/screenshots/page-name.png"
$B console --errors
```

Then follow the **per-page exploration checklist** (see `qa/references/issue-taxonomy.md`):

1. **Visual scan** — Look at the annotated screenshot for layout issues
2. **Interactive elements** — Click buttons, links, controls. Do they work?
3. **Forms** — Fill and submit. Test empty, invalid, edge cases
4. **Navigation** — Check all paths in and out
5. **States** — Empty state, loading, error, overflow
6. **Console** — Any new JS errors after interactions?
7. **Responsiveness** — Check mobile viewport if relevant:
   ```bash
   $B viewport 375x812
   $B screenshot "$REPORT_DIR/screenshots/page-mobile.png"
   $B viewport 1280x720
   ```

**Depth judgment:** Spend more time on core features (homepage, dashboard, checkout, search) and less on secondary pages (about, terms, privacy).

**Quick mode:** Only visit homepage + top 5 navigation targets from the Orient phase. Skip the per-page checklist — just check: loads? Console errors? Broken links visible?

### Phase 5: Document

Document each issue **immediately when found** — don't batch them.

**Two evidence tiers:**

**Interactive bugs** (broken flows, dead buttons, form failures):
1. Take a screenshot before the action
2. Perform the action
3. Take a screenshot showing the result
4. Use `snapshot -D` to show what changed
5. Write repro steps referencing screenshots

```bash
$B screenshot "$REPORT_DIR/screenshots/issue-001-step-1.png"
$B click @e5
$B screenshot "$REPORT_DIR/screenshots/issue-001-result.png"
$B snapshot -D
```

**Static bugs** (typos, layout issues, missing images):
1. Take a single annotated screenshot showing the problem
2. Describe what's wrong

```bash
$B snapshot -i -a -o "$REPORT_DIR/screenshots/issue-002.png"
```

**Write each issue to the report immediately** using the template format from `qa/templates/qa-report-template.md`.

### Phase 6: Wrap Up

1. **Compute health score** using the rubric below
2. **Write "Top 3 Things to Fix"** — the 3 highest-severity issues
3. **Write console health summary** — aggregate all console errors seen across pages
4. **Update severity counts** in the summary table
5. **Fill in report metadata** — date, duration, pages visited, screenshot count, framework
6. **Save baseline** — write `baseline.json` with:
```json
{
  "date": "YYYY-MM-DD",
  "url": "<target>",
  "healthScore": N,
  "issues": [{ "id": "ISSUE-001", "title": "...", "severity": "...", "category": "..." }],
  "categoryScores": { "console": N, "links": N, ... }
}
```

7. **Sync to team** (non-fatal, silent if not configured):
```bash
cat > .gstack/qa-reports/qa-sync.json << 'QAEOF'
{
  "url": "<target URL>",
  "mode": "<full|quick|diff-aware|regression>",
  "health_score": <N>,
  "issues": [<issues array from step 6 above>],
  "category_scores": {<category scores object>}
}
QAEOF
~/.claude/skills/gstack/bin/gstack-sync push-qa .gstack/qa-reports/qa-sync.json 2>/dev/null && echo "Synced to team ✓" || true
~/.claude/skills/gstack/bin/gstack-sync push-transcript 2>/dev/null || true
```
Substitute actual values. Uses snake_case keys matching the Supabase schema.

**Regression mode:** After writing the report, load the baseline file. Compare (see the jq sketch below):
- Health score delta
- Issues fixed (in baseline but not current)
- New issues (in current but not baseline)
- Append the regression section to the report
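
A sketch of that comparison with jq; the `current.json` filename is illustrative, standing in for wherever the current run's issues were saved in the same shape as `baseline.json`:

```bash
# Compare baseline vs. current run (current.json is an illustrative name).
BASE=.gstack/qa-reports/baseline.json
CURR=.gstack/qa-reports/current.json
echo "Health score: $(jq .healthScore "$BASE") -> $(jq .healthScore "$CURR")"
echo "Fixed (in baseline, not current):"
jq -r --slurpfile c "$CURR" '[.issues[].id] - [$c[0].issues[].id] | .[]' "$BASE"
echo "New (in current, not baseline):"
jq -r --slurpfile b "$BASE" '[.issues[].id] - [$b[0].issues[].id] | .[]' "$CURR"
```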

---

## Health Score Rubric

Compute each category score (0-100), then take the weighted average.

### Console (weight: 15%)
- 0 errors → 100
- 1-3 errors → 70
- 4-10 errors → 40
- 11+ errors → 10

### Links (weight: 10%)
- 0 broken → 100
- Each broken link → -15 (minimum 0)

### Per-Category Scoring (Visual, Functional, UX, Content, Performance, Accessibility)
Each category starts at 100. Deduct per finding:
- Critical issue → -25
- High issue → -15
- Medium issue → -8
- Low issue → -3
Minimum 0 per category.

### Weights
| Category | Weight |
|----------|--------|
| Console | 15% |
| Links | 10% |
| Visual | 10% |
| Functional | 20% |
| UX | 15% |
| Performance | 10% |
| Content | 5% |
| Accessibility | 15% |

### Final Score
`score = Σ (category_score × weight)`
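
As a worked example (the category scores below are made up), the weighted sum is a one-liner in awk:

```bash
# Weighted health score from the eight category scores (example values).
awk 'BEGIN {
  s  = 70*0.15 + 85*0.10 + 90*0.10 + 60*0.20   # console, links, visual, functional
  s += 75*0.15 + 80*0.10 + 95*0.05 + 65*0.15   # ux, performance, content, accessibility
  printf "Health score: %.0f\n", s             # prints 74
}'
```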

---

## Framework-Specific Guidance

### Next.js
- Check console for hydration errors (`Hydration failed`, `Text content did not match`); a grep sketch follows this list
- Monitor `_next/data` requests in network — 404s indicate broken data fetching
- Test client-side navigation (click links, don't just `goto`) — catches routing issues
- Check for CLS (Cumulative Layout Shift) on pages with dynamic content
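
A convenience sketch for that first check; the exact output format of `$B console --errors` is assumed, so treat a match as a prompt to look closer:

```bash
# Flag likely Next.js hydration problems in the captured console errors.
$B console --errors | grep -iE 'hydration failed|text content did not match' \
  && echo "Possible hydration issue: document as a Functional finding"
```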

### Rails
- Check for N+1 query warnings in the console (in development mode)
- Verify CSRF token presence in forms
- Test Turbo/Stimulus integration — do page transitions work smoothly?
- Check for flash messages appearing and dismissing correctly

### WordPress
- Check for plugin conflicts (JS errors from different plugins)
- Verify admin bar visibility for logged-in users
- Test REST API endpoints (`/wp-json/`); a curl probe follows this list
- Check for mixed content warnings (common with WP)
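
A quick availability probe for the REST index, assuming the site root is in `$URL`:

```bash
# Expect 200 from the WordPress REST index; anything else is worth a finding.
curl -s -o /dev/null -w "%{http_code}\n" "$URL/wp-json/"
```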

### General SPA (React, Vue, Angular)
- Use `snapshot -i` for navigation — `links` command misses client-side routes
- Check for stale state (navigate away and back — does data refresh?)
- Test browser back/forward — does the app handle history correctly?
- Check for memory leaks (monitor console after extended use)

---

## Important Rules

1. **Repro is everything.** Every issue needs at least one screenshot. No exceptions.
2. **Verify before documenting.** Retry the issue once to confirm it's reproducible, not a fluke.
3. **Never include credentials.** Write `[REDACTED]` for passwords in repro steps.
4. **Write incrementally.** Append each issue to the report as you find it. Don't batch.
5. **Never read source code while testing.** Test as a user, not a developer; source is only opened later, in the fix loop (Phase 8).
6. **Check console after every interaction.** JS errors that don't surface visually are still bugs.
7. **Test like a user.** Use realistic data. Walk through complete workflows end-to-end.
8. **Depth over breadth.** 5-10 well-documented issues with evidence > 20 vague descriptions.
9. **Never delete output files.** Screenshots and reports accumulate — that's intentional.
10. **Use `snapshot -C` for tricky UIs.** Finds clickable divs that the accessibility tree misses.

Record baseline health score at end of Phase 6.

---

## Output Structure

```
.gstack/qa-reports/
├── qa-report-{domain}-{YYYY-MM-DD}.md   # Structured report
├── screenshots/
│   ├── initial.png                      # Landing page annotated screenshot
│   ├── issue-001-step-1.png             # Per-issue evidence
│   ├── issue-001-result.png
│   ├── issue-001-before.png             # Before fix (if fixed)
│   ├── issue-001-after.png              # After fix (if fixed)
│   └── ...
└── baseline.json                        # For regression mode
```

Report filenames use the domain and date: `qa-report-myapp-com-2026-03-12.md`
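
A sketch of deriving that slug from the target URL (the sed pattern is an assumption, not an existing helper):

```bash
# e.g. https://myapp.com/pricing -> qa-report-myapp-com-2026-03-12.md
DOMAIN=$(echo "$URL" | sed -E 's#https?://##; s#/.*##; s#[^A-Za-z0-9]+#-#g')
REPORT="$REPORT_DIR/qa-report-$DOMAIN-$(date +%F).md"
```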

---

## Phase 7: Triage

Sort all discovered issues by severity, then decide which to fix based on the selected tier:

- **Quick:** Fix critical + high only. Mark medium/low as "deferred."
- **Standard:** Fix critical + high + medium. Mark low as "deferred."
- **Exhaustive:** Fix all, including cosmetic/low severity.

Mark issues that cannot be fixed from source code (e.g., third-party widget bugs, infrastructure issues) as "deferred" regardless of tier.
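
If the issues were captured in `baseline.json` (Phase 6, step 6), the fix order can be sketched with jq; the severity ranking map assumes the labels critical/high/medium/low:

```bash
# Order issues critical -> high -> medium -> low for the fix loop.
jq -r '.issues
       | sort_by({critical: 0, high: 1, medium: 2, low: 3}[.severity])
       | .[] | "\(.severity)\t\(.id)\t\(.title)"' .gstack/qa-reports/baseline.json
```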

---

## Phase 8: Fix Loop

For each fixable issue, in severity order:

### 8a. Locate source

```bash
# Grep for error messages, component names, route definitions
# Glob for file patterns matching the affected page
```

- Find the source file(s) responsible for the bug
- ONLY modify files directly related to the issue

### 8b. Fix

- Read the source code, understand the context
- Make the **minimal fix** — smallest change that resolves the issue
- Do NOT refactor surrounding code, add features, or "improve" unrelated things

### 8c. Commit

```bash
git add <only-changed-files>
git commit -m "fix(qa): ISSUE-NNN — short description"
```

- One commit per fix. Never bundle multiple fixes.
- Message format: `fix(qa): ISSUE-NNN — short description`

### 8d. Re-test

- Navigate back to the affected page
- Take **before/after screenshot pair**
- Check console for errors
- Use `snapshot -D` to verify the change had the expected effect

```bash
$B goto <affected-url>
$B screenshot "$REPORT_DIR/screenshots/issue-NNN-after.png"
$B console --errors
$B snapshot -D
```

### 8e. Classify

- **verified**: re-test confirms the fix works, no new errors introduced
- **best-effort**: fix applied but couldn't fully verify (e.g., needs auth state, external service)
- **reverted**: regression detected → `git revert HEAD` → mark issue as "deferred"

### 8e.5. Regression Test

Skip if: classification is not "verified", OR the fix is purely visual/CSS with no JS behavior, OR no test framework was detected AND the user declined bootstrap.

**1. Study the project's existing test patterns:**

Read 2-3 test files closest to the fix (same directory, same code type). Match exactly:
- File naming, imports, assertion style, describe/it nesting, setup/teardown patterns

The regression test must look like it was written by the same developer.

**2. Trace the bug's codepath, then write a regression test:**

Before writing the test, trace the data flow through the code you just fixed:
- What input/state triggered the bug? (the exact precondition)
- What codepath did it follow? (which branches, which function calls)
- Where did it break? (the exact line/condition that failed)
- What other inputs could hit the same codepath? (edge cases around the fix)

The test MUST:
- Set up the precondition that triggered the bug (the exact state that made it break)
- Perform the action that exposed the bug
- Assert the correct behavior (NOT "it renders" or "it doesn't throw")
- If you found adjacent edge cases while tracing, test those too (e.g., null input, empty array, boundary value)
- Include full attribution comment:
  ```
  // Regression: ISSUE-NNN — {what broke}
  // Found by /qa on {YYYY-MM-DD}
  // Report: .gstack/qa-reports/qa-report-{domain}-{date}.md
  ```

Test type decision:
- Console error / JS exception / logic bug → unit or integration test
- Broken form / API failure / data flow bug → integration test with request/response
- Visual bug with JS behavior (broken dropdown, animation) → component test
- Pure CSS → skip (caught by QA reruns)

Generate unit tests. Mock all external dependencies (DB, API, Redis, file system).

Use auto-incrementing names to avoid collisions: check existing `{name}.regression-*.test.{ext}` files, take max number + 1 (see the sketch below).
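
A sketch of that numbering; `$DIR`, `$NAME`, and `$EXT` are placeholders for the fixed file's directory, base name, and the project's test extension:

```bash
# Pick the next free index for {name}.regression-N.test.{ext}.
LAST=$(ls "$DIR/$NAME".regression-*.test."$EXT" 2>/dev/null \
       | sed -E 's/.*regression-([0-9]+).*/\1/' | sort -n | tail -1)
TEST_FILE="$DIR/$NAME.regression-$(( ${LAST:-0} + 1 )).test.$EXT"
```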

**3. Run only the new test file:**

```bash
{detected test command} {new-test-file}
```

**4. Evaluate:**
- Passes → commit: `git commit -m "test(qa): regression test for ISSUE-NNN — {desc}"`
- Fails → fix the test once. Still failing → delete the test and defer.
- Taking >2 min of exploration → skip and defer.

**5. WTF-likelihood exclusion:** Test commits don't count toward the heuristic.

### 8f. Self-Regulation (STOP AND EVALUATE)

Every 5 fixes (or after any revert), compute the WTF-likelihood:

```
WTF-LIKELIHOOD:
Start at 0%
Each revert: +15%
Each fix touching >3 files: +5%
After fix 15: +1% per additional fix
All remaining issues are Low severity: +10%
Touching unrelated files: +20%
```
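
One way to keep that tally as you go; the counter names are illustrative, not predefined anywhere:

```bash
# Running WTF-likelihood tally; bump the counters after each fix or revert.
# WIDE_FIXES = fixes touching >3 files; ONLY_LOW_LEFT / UNRELATED_TOUCHED = 0 or 1 flags.
EXTRA_FIXES=$(( FIX_COUNT > 15 ? FIX_COUNT - 15 : 0 ))
WTF=$(( REVERTS*15 + WIDE_FIXES*5 + EXTRA_FIXES + ONLY_LOW_LEFT*10 + UNRELATED_TOUCHED*20 ))
[ "$WTF" -gt 20 ] && echo "WTF ${WTF}% exceeds 20%: stop and check with the user"
```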

**If WTF > 20%:** STOP immediately. Show the user what you've done so far. Ask whether to continue.

**Hard cap: 50 fixes.** After 50 fixes, stop regardless of remaining issues.

---

## Phase 9: Final QA

After all fixes are applied:

1. Re-run QA on all affected pages
2. Compute final health score
3. **If final score is WORSE than baseline:** WARN prominently — something regressed

---

## Phase 10: Report

Write the report to both local and project-scoped locations:

**Local:** `.gstack/qa-reports/qa-report-{domain}-{YYYY-MM-DD}.md`

**Project-scoped:** Write a test-outcome artifact for cross-session context:

{{ARTIFACT_SETUP}}

```bash
mkdir -p $PROJECTS_DIR/$SLUG/reports
FILE="$PROJECTS_DIR/$SLUG/reports/$BRANCH-test-outcome-$DATE.md"
[ -f "$FILE" ] && FILE="$PROJECTS_DIR/$SLUG/reports/$BRANCH-test-outcome-$DATE-$(date +%H%M).md"
```

Write to the file path resolved above. Include YAML frontmatter:
```yaml
---
type: test-outcome
branch: {branch}
date: {YYYY-MM-DD}
skill: qa
---
```

After writing, register in manifest:
```bash
~/.claude/skills/gstack/bin/gstack-manifest-append test-outcome "reports/$(basename "$FILE")" qa "$BRANCH"
```

**Screenshot upload:** After compiling the report, upload all screenshots for team sharing:
```bash
for img in .gstack/qa-reports/screenshots/*.png; do
  [ -f "$img" ] && ~/.claude/skills/gstack/bin/gstack-upload "$img" 2>/dev/null
done
```
If upload succeeds, the output is a public URL. If it fails (no Supabase config), the local path is printed with a stderr warning. Either way, reference the screenshot path in the report. If URLs were returned, update the report to use hosted URLs instead of local paths (see the sketch below). If local paths remain, append: `(screenshot not uploaded — run gstack sync to share)`
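
If hosted URLs did come back, one way is to fold the rewrite into the upload loop itself. This assumes `gstack-upload` prints the URL on stdout (as described above) and uses `$REPORT_FILE` as an illustrative name for the compiled report path:

```bash
# Replace local screenshot references with hosted URLs in the compiled report.
for img in .gstack/qa-reports/screenshots/*.png; do
  url=$(~/.claude/skills/gstack/bin/gstack-upload "$img" 2>/dev/null)
  [ -n "$url" ] && sed -i.bak "s|screenshots/$(basename "$img")|$url|g" "$REPORT_FILE"
done
rm -f "$REPORT_FILE.bak"
```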

**Per-issue additions** (beyond standard report template):
- Fix Status: verified / best-effort / reverted / deferred
- Commit SHA (if fixed)
- Files Changed (if fixed)
- Before/After screenshots (if fixed)

**Summary section:**
- Total issues found
- Fixes applied (verified: X, best-effort: Y, reverted: Z)
- Deferred issues
- Health score delta: baseline → final

**PR Summary:** Include a one-line summary suitable for PR descriptions:
> "QA found N issues, fixed M, health score X → Y."

---

## Phase 11: TODOS.md Update

If the repo has a `TODOS.md`:

1. **New deferred bugs** → add as TODOs with severity, category, and repro steps
2. **Fixed bugs that were in TODOS.md** → annotate with "Fixed by /qa on {branch}, {date}"

---

## Additional Rules (qa-specific)

11. **Clean working tree required.** Refuse to start if `git status --porcelain` is non-empty.
12. **One commit per fix.** Never bundle multiple fixes into one commit.
13. **Only modify tests when generating regression tests in Phase 8e.5.** Never modify CI configuration. Never modify existing tests — only create new test files.
14. **Revert on regression.** If a fix makes things worse, `git revert HEAD` immediately.
15. **Self-regulate.** Follow the WTF-likelihood heuristic. When in doubt, stop and ask.