gstack/retro/SKILL.md

Garry Tan 7ff0f84b1e feat: test coverage catalog — shared audit across plan/ship/review (v0.10.1.0) (#259)
* refactor: extract {{TEST_COVERAGE_AUDIT}} shared resolver

DRY extraction of the test coverage audit methodology into a shared
generator function with three explicit placeholders:
- TEST_COVERAGE_AUDIT_PLAN (plan-eng-review)
- TEST_COVERAGE_AUDIT_SHIP (ship)
- TEST_COVERAGE_AUDIT_REVIEW (review)

Shared across all modes: codepath tracing, ASCII diagram format,
quality scoring rubric, E2E test decision matrix, regression rule,
and test framework detection via CLAUDE.md.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: plan-eng-review uses shared test coverage audit

Replace the thin 6-line Section 3 test review with the full shared
methodology via {{TEST_COVERAGE_AUDIT_PLAN}}. Plan mode now:
- Traces every codepath with full ASCII diagrams
- Adds missing tests to the plan (not just "check for tests")
- Writes test plan artifact for /qa consumption
- Includes E2E/eval recommendations and regression detection

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: ship uses shared test coverage audit

Replace 135 lines of inline Step 3.4 methodology with
{{TEST_COVERAGE_AUDIT_SHIP}}. Functionally identical output plus:
- E2E test decision matrix (marks paths needing E2E vs unit)
- Eval recommendations for LLM prompt changes
- Regression detection iron rule
- Test framework detection via CLAUDE.md first
- Test plan artifact for /qa consumption

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: /review Step 4.75 test coverage diagram

Add codepath tracing to the pre-landing review via
{{TEST_COVERAGE_AUDIT_REVIEW}}. Review mode:
- Produces ASCII coverage diagram (same methodology as plan/ship)
- Generates tests for gaps via Fix-First (ASK user)
- Subsumes Pass 2 "Test Gaps" checklist category
- Gaps are INFORMATIONAL findings

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: mode differentiation + regression guard for coverage audit

10 new tests verifying the three TEST_COVERAGE_AUDIT placeholders:
- All modes share: codepath tracing, E2E matrix, regression rule
- Plan mode: adds to plan + artifact, no ship-specific content
- Ship mode: auto-generates + before/after count + coverage summary
- Review mode: Fix-First ASK + INFORMATIONAL, no artifact
- Regression guard: ship SKILL.md preserves all key phrases

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: extract shared coverage audit fixture + review E2E

- Extract billing.ts fixture into coverage-audit-fixture.ts (DRY)
- Refactor ship-coverage-audit E2E to use shared fixture
- Add review-coverage-audit E2E for Step 4.75
- Update touchfiles: both E2Es depend on shared fixture

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: strengthen E2E assertions for coverage audit tests

The coverage audit E2E tests (ship + review) were only asserting
exitReason === 'success' and readCalls > 0 — they passed even
if the agent produced no coverage diagram. Add assertion that
the output contains either GAP or TESTED markers.

Found during /review.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: plan mode traces the plan, not the git diff

Codex adversarial review caught that plan-eng-review was inheriting
"git diff origin/<base>...HEAD" from the shared resolver, but plan mode
reviews a plan document, not a code diff. Plan mode now says:
"Trace every codepath in the plan" and "Read the plan document."

Ship and review modes keep the git diff instruction.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: bump version and changelog (v0.9.5.0)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: test coverage catalog + failure triage (merged branches) (#285)

* feat: add bin/gstack-repo-mode — solo vs collaborative detection with caching

Detects whether a repo is solo-dev (one person does 80%+ of recent commits)
or collaborative. Uses 90-day git shortlog window with 7-day cache in
~/.gstack/projects/{SLUG}/repo-mode.json. Config override via
`gstack-config set repo_mode solo|collaborative` takes precedence over
the heuristic. Minimum 5 commits required to classify (otherwise unknown).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: test failure ownership triage — see something say something

Adds two new preamble sections to all gstack skills:
- Repo Ownership Mode: explains solo vs collaborative behavior
- See Something, Say Something: proactive issue flagging principle

Adds {{TEST_FAILURE_TRIAGE}} template variable (opt-in, used by /ship):
- Classifies test failures as in-branch vs pre-existing
- Solo mode defaults to "investigate and fix now"
- Collaborative mode offers "blame + assign GitHub issue" option
- Also offers P0 TODO and skip options

/ship Step 3 now triages test failures instead of hard-stopping on all
failures. In-branch failures still block shipping. Pre-existing failures
get user-directed triage based on repo mode.

Adds P2 TODO for gstack notes system (deferred lightweight reminder).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: regenerate SKILL.md files for Claude and Codex hosts

All 22 Claude skills and 21 Codex skills regenerated with new preamble
sections (Repo Ownership Mode, See Something Say Something) and
{{TEST_FAILURE_TRIAGE}} resolved in ship/SKILL.md.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: validate repo mode values to prevent shell injection

Codex adversarial review found that unvalidated config/cache values
could be injected into shell via source <(gstack-repo-mode). Added
validate_mode() that only allows solo|collaborative|unknown — anything
else becomes "unknown". Prevents persistent code execution through
malicious config.yaml or tampered cache JSON.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: shell injection via branch names + feature-branch sampling bias

Codex code review found two issues:

P1: eval $(gstack-slug) in gstack-repo-mode executes branch names as
shell. Branch names like foo$(touch${IFS}pwned) are valid git refs and
would execute arbitrary commands. Fix: compute SLUG directly with sed
instead of eval'ing gstack-slug output.

P2: git shortlog HEAD only sees current branch history. On feature
branches that haven't merged main recently, other contributors disappear
from the sample. Fix: use git shortlog on the default branch
(origin/main) instead of HEAD.

Also improved blame lookup in collaborative triage to check both the
test file and the production code it covers.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: broaden codex-host stripping test to accommodate triage section

"Investigate and fix" now appears in TEST_FAILURE_TRIAGE (not just the
Codex review step). Use CODEX_REVIEWS config string as a more specific
marker for detecting the Codex review step in Codex-hosted skills.

* fix: replace template placeholder in TODOS.md with readable text

{{TEST_FAILURE_TRIAGE}} is template syntax but TODOS.md is not processed
by gen-skill-docs — replaced with human-readable reference.

* chore: bump version and changelog (v0.9.5.0)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* docs: add bin/ directory to project structure in CLAUDE.md

* test: add triage resolver unit tests, plan-eng coverage audit E2E, and triage E2E

- TEST_FAILURE_TRIAGE resolver: 6 unit tests verifying all triage steps (T1-T4),
  REPO_MODE branching, and safety default for ambiguous failures
- plan-eng-coverage-audit E2E: tests /plan-eng-review coverage audit codepath
  (gap identified during eng review — existed on neither branch)
- ship-triage E2E: planted-bug fixture with in-branch (truncate null) and
  pre-existing (divide-by-zero) failures; verifies correct classification
- Touchfile entries for diff-based test selection

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: regenerate stale Codex SKILL.md for retro

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: gstack-repo-mode handles repos without origin remote

Split `git remote get-url origin` into a separate variable with `|| true`
so the script doesn't crash under `set -euo pipefail` in local-only repos.
Falls back to REPO_MODE=unknown gracefully.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: REPO_MODE defaults to unknown when helper emits nothing

Changed preamble from `source <(...) || REPO_MODE=unknown` (which doesn't
catch empty output) to `source <(...) || true` followed by
`REPO_MODE=${REPO_MODE:-unknown}`. Regenerated all SKILL.md files.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: triage E2E runs both test files in subprocesses

math.test.js called process.exit(1) which killed the runner before
string.test.js could execute. Changed test runner to use child_process
so each test runs independently and both failure classes are exercised.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: gstack-repo-mode handles repos without origin remote

Fall back through origin/main → origin/master → HEAD when
git symbolic-ref refs/remotes/origin/HEAD is not set. Prevents
shortlog crash in repos where origin/HEAD isn't configured.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: triage E2E runs both test files in subprocesses

Add assertions verifying both math.test.js (pre-existing failure) and
string.test.js (in-branch failure) actually executed during triage.
Prevents false passes where only one failure class is exercised.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: REPO_MODE defaults to unknown when helper emits nothing

- Remove head -20 truncation that biased solo classification by
  dropping low-volume contributors from the denominator
- Use atomic write (mktemp + mv) for cache to prevent concurrent
  preamble reads from seeing partial JSON

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: add test coverage catalog to CHANGELOG + update project structure

- CHANGELOG: add 6 entries for coverage audit, review Step 4.75, E2E
  recommendations, regression iron rule, failure triage, repo-mode fix
- CLAUDE.md: add missing skill directories (autoplan, benchmark, canary,
  codex, land-and-deploy, setup-deploy) to project structure

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: bump version and changelog (v0.10.1.0)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: CHANGELOG rules — branch-scoped versions, never fold into old entries

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-22 11:28:16 -07:00


name: retro
version: 2.0.0
description: Weekly engineering retrospective. Analyzes commit history, work patterns, and code quality metrics with persistent history and trend tracking. Team-aware: breaks down per-person contributions with praise and growth areas. Use when asked to "weekly retro", "what did we ship", or "engineering retrospective". Proactively suggest at the end of a work week or sprint.
allowed-tools: Bash, Read, Write, Glob, AskUserQuestion

Preamble (run first)

_UPD=$(~/.claude/skills/gstack/bin/gstack-update-check 2>/dev/null || .claude/skills/gstack/bin/gstack-update-check 2>/dev/null || true)
[ -n "$_UPD" ] && echo "$_UPD" || true
mkdir -p ~/.gstack/sessions
touch ~/.gstack/sessions/"$PPID"
_SESSIONS=$(find ~/.gstack/sessions -mmin -120 -type f 2>/dev/null | wc -l | tr -d ' ')
find ~/.gstack/sessions -mmin +120 -type f -delete 2>/dev/null || true
_CONTRIB=$(~/.claude/skills/gstack/bin/gstack-config get gstack_contributor 2>/dev/null || true)
_PROACTIVE=$(~/.claude/skills/gstack/bin/gstack-config get proactive 2>/dev/null || echo "true")
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
echo "BRANCH: $_BRANCH"
echo "PROACTIVE: $_PROACTIVE"
source <(~/.claude/skills/gstack/bin/gstack-repo-mode 2>/dev/null) || true
REPO_MODE=${REPO_MODE:-unknown}
echo "REPO_MODE: $REPO_MODE"
_LAKE_SEEN=$([ -f ~/.gstack/.completeness-intro-seen ] && echo "yes" || echo "no")
echo "LAKE_INTRO: $_LAKE_SEEN"
_TEL=$(~/.claude/skills/gstack/bin/gstack-config get telemetry 2>/dev/null || true)
_TEL_PROMPTED=$([ -f ~/.gstack/.telemetry-prompted ] && echo "yes" || echo "no")
_TEL_START=$(date +%s)
_SESSION_ID="$$-$(date +%s)"
echo "TELEMETRY: ${_TEL:-off}"
echo "TEL_PROMPTED: $_TEL_PROMPTED"
mkdir -p ~/.gstack/analytics
echo '{"skill":"retro","ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","repo":"'$(basename "$(git rev-parse --show-toplevel 2>/dev/null)" 2>/dev/null || echo "unknown")'"}'  >> ~/.gstack/analytics/skill-usage.jsonl 2>/dev/null || true
for _PF in ~/.gstack/analytics/.pending-*; do [ -f "$_PF" ] && ~/.claude/skills/gstack/bin/gstack-telemetry-log --event-type skill_run --skill _pending_finalize --outcome unknown --session-id "$_SESSION_ID" 2>/dev/null || true; break; done

If PROACTIVE is "false", do not proactively suggest gstack skills — only invoke them when the user explicitly asks. The user opted out of proactive suggestions.

If output shows UPGRADE_AVAILABLE <old> <new>: read ~/.claude/skills/gstack/gstack-upgrade/SKILL.md and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined). If JUST_UPGRADED <from> <to>: tell user "Running gstack v{to} (just updated!)" and continue.

If LAKE_INTRO is no: Before continuing, introduce the Completeness Principle. Tell the user: "gstack follows the Boil the Lake principle — always do the complete thing when AI makes the marginal cost near-zero. Read more: https://garryslist.org/posts/boil-the-ocean" Then offer to open the essay in their default browser:

open https://garryslist.org/posts/boil-the-ocean
touch ~/.gstack/.completeness-intro-seen

Only run open if the user says yes. Always run touch to mark as seen. This only happens once.

If TEL_PROMPTED is no AND LAKE_INTRO is yes: After the lake intro is handled, ask the user about telemetry. Use AskUserQuestion:

Help gstack get better! Community mode shares usage data (which skills you use, how long they take, crash info) with a stable device ID so we can track trends and fix bugs faster. No code, file paths, or repo names are ever sent. Change anytime with gstack-config set telemetry off.

Options:

  • A) Help gstack get better! (recommended)
  • B) No thanks

If A: run ~/.claude/skills/gstack/bin/gstack-config set telemetry community

If B: ask a follow-up AskUserQuestion:

How about anonymous mode? We just learn that someone used gstack — no unique ID, no way to connect sessions. Just a counter that helps us know if anyone's out there.

Options:

  • A) Sure, anonymous is fine
  • B) No thanks, fully off

If B→A: run ~/.claude/skills/gstack/bin/gstack-config set telemetry anonymous If B→B: run ~/.claude/skills/gstack/bin/gstack-config set telemetry off

Always run:

touch ~/.gstack/.telemetry-prompted

This only happens once. If TEL_PROMPTED is yes, skip this entirely.

AskUserQuestion Format

ALWAYS follow this structure for every AskUserQuestion call:

  1. Re-ground: State the project, the current branch (use the _BRANCH value printed by the preamble — NOT any branch from conversation history or gitStatus), and the current plan/task. (1-2 sentences)
  2. Simplify: Explain the problem in plain English a smart 16-year-old could follow. No raw function names, no internal jargon, no implementation details. Use concrete examples and analogies. Say what it DOES, not what it's called.
  3. Recommend: RECOMMENDATION: Choose [X] because [one-line reason] — always prefer the complete option over shortcuts (see Completeness Principle). Include Completeness: X/10 for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers happy path but skips some edges, 3 = shortcut that defers significant work. If both options are 8+, pick the higher; if one is ≤5, flag it.
  4. Options: Lettered options: A) ... B) ... C) ... — when an option involves effort, show both scales: (human: ~X / CC: ~Y)

Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.

Per-skill instructions may add additional formatting rules on top of this baseline.

Completeness Principle — Boil the Lake

AI-assisted coding makes the marginal cost of completeness near-zero. When you present options:

  • If Option A is the complete implementation (full parity, all edge cases, 100% coverage) and Option B is a shortcut that saves modest effort — always recommend A. The delta between 80 lines and 150 lines is meaningless with CC+gstack. "Good enough" is the wrong instinct when "complete" costs minutes more.
  • Lake vs. ocean: A "lake" is boilable — 100% test coverage for a module, full feature implementation, handling all edge cases, complete error paths. An "ocean" is not — rewriting an entire system from scratch, adding features to dependencies you don't control, multi-quarter platform migrations. Recommend boiling lakes. Flag oceans as out of scope.
  • When estimating effort, always show both scales: human team time and CC+gstack time. The compression ratio varies by task type — use this reference:
    Task type                   Human team   CC+gstack   Compression
    Boilerplate / scaffolding   2 days       15 min      ~100x
    Test writing                1 day        15 min      ~50x
    Feature implementation      1 week       30 min      ~30x
    Bug fix + regression test   4 hours      15 min      ~20x
    Architecture / design       2 days       4 hours     ~5x
    Research / exploration      1 day        3 hours     ~3x
  • This principle applies to test coverage, error handling, documentation, edge cases, and feature completeness. Don't skip the last 10% to "save time" — with AI, that 10% costs seconds.

Anti-patterns — DON'T do this:

  • BAD: "Choose B — it covers 90% of the value with less code." (If A is only 70 lines more, choose A.)
  • BAD: "We can skip edge case handling to save time." (Edge case handling costs minutes with CC.)
  • BAD: "Let's defer test coverage to a follow-up PR." (Tests are the cheapest lake to boil.)
  • BAD: Quoting only human-team effort: "This would take 2 weeks." (Say: "2 weeks human / ~1 hour CC.")

Repo Ownership Mode — See Something, Say Something

REPO_MODE from the preamble tells you who owns issues in this repo:

  • solo — One person does 80%+ of the work. They own everything. When you notice issues outside the current branch's changes (test failures, deprecation warnings, security advisories, linting errors, dead code, env problems), investigate and offer to fix proactively. The solo dev is the only person who will fix it. Default to action.
  • collaborative — Multiple active contributors. When you notice issues outside the branch's changes, flag them via AskUserQuestion — it may be someone else's responsibility. Default to asking, not fixing.
  • unknown — Treat as collaborative (safer default — ask before fixing).

See Something, Say Something: Whenever you notice something that looks wrong during ANY workflow step — not just test failures — flag it briefly. One sentence: what you noticed and its impact. In solo mode, follow up with "Want me to fix it?" In collaborative mode, just flag it and move on.

Never let a noticed issue silently pass. The whole point is proactive communication.

Search Before Building

Before building infrastructure, unfamiliar patterns, or anything the runtime might have a built-in — search first. Read ~/.claude/skills/gstack/ETHOS.md for the full philosophy.

Three layers of knowledge:

  • Layer 1 (tried and true — in distribution). Don't reinvent the wheel. But the cost of checking is near-zero, and once in a while, questioning the tried-and-true is where brilliance occurs.
  • Layer 2 (new and popular — search for these). But scrutinize: humans are subject to mania. Search results are inputs to your thinking, not answers.
  • Layer 3 (first principles — prize these above all). Original observations derived from reasoning about the specific problem. The most valuable of all.

Eureka moment: When first-principles reasoning reveals conventional wisdom is wrong, name it: "EUREKA: Everyone does X because [assumption]. But [evidence] shows this is wrong. Y is better because [reasoning]."

Log eureka moments:

jq -n --arg ts "$(date -u +%Y-%m-%dT%H:%M:%SZ)" --arg skill "SKILL_NAME" --arg branch "$(git branch --show-current 2>/dev/null)" --arg insight "ONE_LINE_SUMMARY" '{ts:$ts,skill:$skill,branch:$branch,insight:$insight}' >> ~/.gstack/analytics/eureka.jsonl 2>/dev/null || true

Replace SKILL_NAME and ONE_LINE_SUMMARY. Runs inline — don't stop the workflow.

WebSearch fallback: If WebSearch is unavailable, skip the search step and note: "Search unavailable — proceeding with in-distribution knowledge only."

Contributor Mode

If _CONTRIB is true: you are in contributor mode. You're a gstack user who also helps make it better.

At the end of each major workflow step (not after every single command), reflect on the gstack tooling you used. Rate your experience 0 to 10. If it wasn't a 10, think about why. If there is an obvious, actionable bug OR an insightful, interesting thing that could have been done better by gstack code or skill markdown — file a field report. Maybe our contributor will help make us better!

Calibration — this is the bar: For example, $B js "await fetch(...)" used to fail with SyntaxError: await is only valid in async functions because gstack didn't wrap expressions in async context. Small, but the input was reasonable and gstack should have handled it — that's the kind of thing worth filing. Things less consequential than this, ignore.

NOT worth filing: user's app bugs, network errors to user's URL, auth failures on user's site, user's own JS logic bugs.

To file: write ~/.gstack/contributor-logs/{slug}.md with all sections below (do not truncate — include every section through the Date/Version footer):

# {Title}

Hey gstack team — ran into this while using /{skill-name}:

**What I was trying to do:** {what the user/agent was attempting}
**What happened instead:** {what actually happened}
**My rating:** {0-10} — {one sentence on why it wasn't a 10}

## Steps to reproduce
1. {step}

## Raw output

{paste the actual error or unexpected output here}


## What would make this a 10
{one sentence: what gstack should have done differently}

**Date:** {YYYY-MM-DD} | **Version:** {gstack version} | **Skill:** /{skill}

Slug: lowercase, hyphens, max 60 chars (e.g. browse-js-no-await). Skip if file already exists. Max 3 reports per session. File inline and continue — don't stop the workflow. Tell user: "Filed gstack field report: {title}"

Completion Status Protocol

When completing a skill workflow, report status using one of:

  • DONE — All steps completed successfully. Evidence provided for each claim.
  • DONE_WITH_CONCERNS — Completed, but with issues the user should know about. List each concern.
  • BLOCKED — Cannot proceed. State what is blocking and what was tried.
  • NEEDS_CONTEXT — Missing information required to continue. State exactly what you need.

Escalation

It is always OK to stop and say "this is too hard for me" or "I'm not confident in this result."

Bad work is worse than no work. You will not be penalized for escalating.

  • If you have attempted a task 3 times without success, STOP and escalate.
  • If you are uncertain about a security-sensitive change, STOP and escalate.
  • If the scope of work exceeds what you can verify, STOP and escalate.

Escalation format:

STATUS: BLOCKED | NEEDS_CONTEXT
REASON: [1-2 sentences]
ATTEMPTED: [what you tried]
RECOMMENDATION: [what the user should do next]

Telemetry (run last)

After the skill workflow completes (success, error, or abort), log the telemetry event. Determine the skill name from the name: field in this file's YAML frontmatter. Determine the outcome from the workflow result (success if completed normally, error if it failed, abort if the user interrupted).

PLAN MODE EXCEPTION — ALWAYS RUN: This command writes telemetry to ~/.gstack/analytics/ (user config directory, not project files). The skill preamble already writes to the same directory — this is the same pattern. Skipping this command loses session duration and outcome data.

Run this bash:

_TEL_END=$(date +%s)
_TEL_DUR=$(( _TEL_END - _TEL_START ))
rm -f ~/.gstack/analytics/.pending-"$_SESSION_ID" 2>/dev/null || true
~/.claude/skills/gstack/bin/gstack-telemetry-log \
  --skill "SKILL_NAME" --duration "$_TEL_DUR" --outcome "OUTCOME" \
  --used-browse "USED_BROWSE" --session-id "$_SESSION_ID" 2>/dev/null &

Replace SKILL_NAME with the actual skill name from frontmatter, OUTCOME with success/error/abort, and USED_BROWSE with true/false based on whether $B was used. If you cannot determine the outcome, use "unknown". This runs in the background and never blocks the user.

Detect default branch

Before gathering data, detect the repo's default branch name: gh repo view --json defaultBranchRef -q .defaultBranchRef.name

If this fails, fall back to main. Use the detected name wherever the instructions say origin/<default> below.
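A minimal sketch of the detection with the fallback applied (assumes gh is installed and authenticated):

_DEFAULT=$(gh repo view --json defaultBranchRef -q .defaultBranchRef.name 2>/dev/null || true)
_DEFAULT=${_DEFAULT:-main}
echo "DEFAULT_BRANCH: $_DEFAULT"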


/retro — Weekly Engineering Retrospective

Generates a comprehensive engineering retrospective analyzing commit history, work patterns, and code quality metrics. Team-aware: identifies the user running the command, then analyzes every contributor with per-person praise and growth opportunities. Designed for a senior IC/CTO-level builder using Claude Code as a force multiplier.

User-invocable

When the user types /retro, run this skill.

Arguments

  • /retro — default: last 7 days
  • /retro 24h — last 24 hours
  • /retro 14d — last 14 days
  • /retro 30d — last 30 days
  • /retro compare — compare current window vs prior same-length window
  • /retro compare 14d — compare with explicit window

Instructions

Parse the argument to determine the time window. Default to 7 days if no argument given. All times should be reported in the user's local timezone (use the system default — do NOT set TZ).

Midnight-aligned windows: For day (d) and week (w) units, compute an absolute start date at local midnight, not a relative string. For example, if today is 2026-03-18 and the window is 7 days: the start date is 2026-03-11. Use --since="2026-03-11T00:00:00" for git log queries — the explicit T00:00:00 suffix ensures git starts from midnight. Without it, git uses the current wall-clock time (e.g., --since="2026-03-11" at 11pm means 11pm, not midnight). For week units, multiply by 7 to get days (e.g., 2w = 14 days back). For hour (h) units, use --since="N hours ago" since midnight alignment does not apply to sub-day windows.
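A sketch of the start-date computation — the GNU/BSD branching is an assumption about which date binary the host has:

_DAYS=7   # from the parsed window: 7d → 7, 2w → 14
if date -v -1d +%Y-%m-%d >/dev/null 2>&1; then
  _START=$(date -v -"${_DAYS}"d +%Y-%m-%d)          # BSD/macOS date
else
  _START=$(date -d "${_DAYS} days ago" +%Y-%m-%d)   # GNU date
fi
echo "--since=${_START}T00:00:00"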

Argument validation: If the argument doesn't match a number followed by d, h, or w, the word compare, or compare followed by a number and d/h/w, show this usage and stop:

Usage: /retro [window]
  /retro              — last 7 days (default)
  /retro 24h          — last 24 hours
  /retro 14d          — last 14 days
  /retro 30d          — last 30 days
  /retro compare      — compare this period vs prior period
  /retro compare 14d  — compare with explicit window
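One way to enforce this, as a sketch — _ARG is a hypothetical stand-in for the raw argument text:

_ARG="${_ARG:-7d}"
if [[ ! "$_ARG" =~ ^([0-9]+[dhw]|compare([[:space:]]+[0-9]+[dhw])?)$ ]]; then
  echo "Usage: /retro [window]"   # print the full usage block above, then stop
fi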

Step 1: Gather Raw Data

First, fetch origin and identify the current user:

git fetch origin <default> --quiet
# Identify who is running the retro
git config user.name
git config user.email

The name returned by git config user.name is "you" — the person reading this retro. All other authors are teammates. Use this to orient the narrative: "your" commits vs teammate contributions.

Run ALL of these git commands in parallel (they are independent):

# 1. All commits in window with timestamps, subject, hash, AUTHOR, files changed, insertions, deletions
git log origin/<default> --since="<window>" --format="%H|%aN|%ae|%ai|%s" --shortstat

# 2. Per-commit test vs total LOC breakdown with author
#    Each commit block starts with COMMIT:<hash>|<author>, followed by numstat lines.
#    Separate test files (matching test/|spec/|__tests__/) from production files.
git log origin/<default> --since="<window>" --format="COMMIT:%H|%aN" --numstat

# 3. Commit timestamps for session detection and hourly distribution (with author)
git log origin/<default> --since="<window>" --format="%at|%aN|%ai|%s" | sort -n

# 4. Files most frequently changed (hotspot analysis)
git log origin/<default> --since="<window>" --format="" --name-only | grep -v '^$' | sort | uniq -c | sort -rn

# 5. PR numbers from commit messages (extract #NNN patterns)
git log origin/<default> --since="<window>" --format="%s" | grep -oE '#[0-9]+' | sed 's/^#//' | sort -n | uniq | sed 's/^/#/'

# 6. Per-author file hotspots (who touches what)
git log origin/<default> --since="<window>" --format="AUTHOR:%aN" --name-only

# 7. Per-author commit counts (quick summary)
git shortlog origin/<default> --since="<window>" -sn --no-merges

# 8. Greptile triage history (if available)
cat ~/.gstack/greptile-history.md 2>/dev/null || true

# 9. TODOS.md backlog (if available)
cat TODOS.md 2>/dev/null || true

# 10. Test file count
find . -name '*.test.*' -o -name '*.spec.*' -o -name '*_test.*' -o -name '*_spec.*' 2>/dev/null | grep -v node_modules | wc -l

# 11. Regression test commits in window
git log origin/<default> --since="<window>" --oneline --grep="test(qa):" --grep="test(design):" --grep="test: coverage"

# 12. gstack skill usage telemetry (if available)
cat ~/.gstack/analytics/skill-usage.jsonl 2>/dev/null || true

# 13. Test files changed in window
git log origin/<default> --since="<window>" --format="" --name-only | grep -E '\.(test|spec)\.' | sort -u | wc -l

Step 2: Compute Metrics

Calculate and present these metrics in a summary table:

Metric                   Value
Commits to <default>     N
Contributors             N
PRs merged               N
Total insertions         N
Total deletions          N
Net LOC added            N
Test LOC (insertions)    N
Test LOC ratio           N%
Version range            vX.Y.Z.W → vX.Y.Z.W
Active days              N
Detected sessions        N
Avg LOC/session-hour     N
Greptile signal          N% (Y catches, Z FPs)
Test Health              N total tests · M added this period · K regression tests

Then show a per-author leaderboard immediately below:

Contributor         Commits   +/-          Top area
You (garry)              32   +2400/-300   browse/
alice                    12   +800/-150    app/services/
bob                       3   +120/-40     tests/

Sort by commits descending. The current user (from git config user.name) always appears first, labeled "You (name)".

Greptile signal (if history exists): Read ~/.gstack/greptile-history.md (fetched in Step 1, command 8). Filter entries within the retro time window by date. Count entries by type: fix, fp, already-fixed. Compute signal ratio: (fix + already-fixed) / (fix + already-fixed + fp). If no entries exist in the window or the file doesn't exist, skip the Greptile metric row. Skip unparseable lines silently.
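The ratio itself is simple integer math once the counts are parsed (the counts here are illustrative):

_FIX=3; _AF=2; _FP=1   # fix / already-fixed / fp entry counts from the history file
_SIGNAL=$(( (_FIX + _AF) * 100 / (_FIX + _AF + _FP) ))
echo "Greptile: ${_SIGNAL}% signal (${_FIX} valid catches, ${_FP} false positives)"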

Backlog Health (if TODOS.md exists): Read TODOS.md (fetched in Step 1, command 9). Compute:

  • Total open TODOs (exclude items in ## Completed section)
  • P0/P1 count (critical/urgent items)
  • P2 count (important items)
  • Items completed this period (items in Completed section with dates within the retro window)
  • Items added this period (cross-reference git log for commits that modified TODOS.md within the window)

Include in the metrics table:

| Backlog Health | N open (X P0/P1, Y P2) · Z completed this period |

If TODOS.md doesn't exist, skip the Backlog Health row.

Skill Usage (if analytics exist): Read ~/.gstack/analytics/skill-usage.jsonl if it exists. Filter entries within the retro time window by ts field. Separate skill activations (no event field) from hook fires (event: "hook_fire"). Aggregate by skill name. Present as:

| Skill Usage | /ship(12) /qa(8) /review(5) · 3 safety hook fires |

If the JSONL file doesn't exist or has no entries in the window, skip the Skill Usage row.
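A jq sketch for the aggregation — field names follow the preamble's JSONL schema, and window filtering by ts is omitted for brevity:

# Skill activations (no event field) grouped by skill
jq -r 'select(.event != "hook_fire") | .skill' ~/.gstack/analytics/skill-usage.jsonl 2>/dev/null | sort | uniq -c | sort -rn
# Hook fires counted separately
jq -r 'select(.event == "hook_fire") | .skill' ~/.gstack/analytics/skill-usage.jsonl 2>/dev/null | wc -l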

Eureka Moments (if logged): Read ~/.gstack/analytics/eureka.jsonl if it exists. Filter entries within the retro time window by ts field. For each eureka moment, show the skill that flagged it, the branch, and a one-line summary of the insight. Present as:

| Eureka Moments | 2 this period |

If moments exist, list them:

  EUREKA /office-hours (branch: garrytan/auth-rethink): "Session tokens don't need server storage — browser crypto API makes client-side JWT validation viable"
  EUREKA /plan-eng-review (branch: garrytan/cache-layer): "Redis isn't needed here — Bun's built-in LRU cache handles this workload"

If the JSONL file doesn't exist or has no entries in the window, skip the Eureka Moments row.

Step 3: Commit Time Distribution

Show hourly histogram in local time using bar chart:

Hour  Commits  ████████████████
 00:    4      ████
 07:    5      █████
 ...

Identify and call out:

  • Peak hours
  • Dead zones
  • Whether pattern is bimodal (morning/evening) or continuous
  • Late-night coding clusters (after 10pm)

Step 4: Work Session Detection

Detect sessions using a 45-minute gap threshold between consecutive commits (a counting sketch follows the lists below). For each session, report:

  • Start/end time (local time)
  • Number of commits
  • Duration in minutes

Classify sessions:

  • Deep sessions (50+ min)
  • Medium sessions (20-50 min)
  • Micro sessions (<20 min, typically single-commit fire-and-forget)

Calculate:

  • Total active coding time (sum of session durations)
  • Average session length
  • LOC per hour of active time
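A counting sketch using only epoch timestamps (simplified from command 3's full format — durations and classification derive from these numbers):

git log origin/<default> --since="<window>" --format="%at" | sort -n | awk '
  NR == 1 { start = $1; prev = $1; n = 1; next }
  {
    if ($1 - prev > 45 * 60) {   # a gap over 45 min closes the session
      printf "session: %d commits, %d min\n", n, (prev - start) / 60
      start = $1; n = 0
    }
    prev = $1; n++
  }
  END { if (NR) printf "session: %d commits, %d min\n", n, (prev - start) / 60 }'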

Step 5: Commit Type Breakdown

Categorize by conventional commit prefix (feat/fix/refactor/test/chore/docs). Show as percentage bar:

feat:     20  (40%)  ████████████████████
fix:      27  (54%)  ███████████████████████████
refactor:  2  ( 4%)  ██

Flag if fix ratio exceeds 50% — this signals a "ship fast, fix fast" pattern that may indicate review gaps.
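The prefix counts can come straight from the subjects — a sketch; the percentage bars are rendered from these numbers:

git log origin/<default> --since="<window>" --format="%s" | grep -oE '^(feat|fix|refactor|test|chore|docs)' | sort | uniq -c | sort -rn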

Step 6: Hotspot Analysis

Show the top 10 most-changed files (see the sketch after this list). Flag:

  • Files changed 5+ times (churn hotspots)
  • Test files vs production files in the hotspot list
  • VERSION/CHANGELOG frequency (version discipline indicator)
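Churn hotspots fall out of command 4's output with one extra filter (sketch):

git log origin/<default> --since="<window>" --format="" --name-only | grep -v '^$' | sort | uniq -c | sort -rn |
  awk 'NR <= 10 { print ($1 >= 5 ? "HOTSPOT" : "       "), $0 }'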

Step 7: PR Size Distribution

From commit diffs, estimate PR sizes and bucket them (see the sketch after this list):

  • Small (<100 LOC)
  • Medium (100-500 LOC)
  • Large (500-1500 LOC)
  • XL (1500+ LOC)
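One way to get per-PR LOC is a first-parent diff per merge commit (a sketch — merge commits as PR boundaries, per the Important Rules):

git log origin/<default> --since="<window>" --merges --format="%H" | while read -r h; do
  git diff --shortstat "${h}^1" "$h"   # insertions + deletions ≈ PR size
done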

Step 8: Focus Score + Ship of the Week

Focus score: Calculate the percentage of commits touching the single most-changed top-level directory (e.g., app/services/, app/views/). Higher score = deeper focused work. Lower score = scattered context-switching. Report as: "Focus score: 62% (app/services/)"
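A counting sketch for the score, assuming "top-level directory" means the first path segment (root-level files count as their own bucket):

git log origin/<default> --since="<window>" --format="COMMIT:%H" --name-only | awk '
  /^COMMIT:/ { commits++; split("", seen); next }
  NF {
    split($0, parts, "/"); dir = parts[1]
    if (!seen[dir]++) touched[dir]++   # count each directory once per commit
  }
  END {
    for (d in touched) if (touched[d] > best) { best = touched[d]; top = d }
    if (commits) printf "Focus score: %d%% (%s/)\n", best * 100 / commits, top
  }'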

Ship of the week: Auto-identify the single highest-LOC PR in the window. Highlight it:

  • PR number and title
  • LOC changed
  • Why it matters (infer from commit messages and files touched)

Step 9: Team Member Analysis

For each contributor (including the current user), compute:

  1. Commits and LOC — total commits, insertions, deletions, net LOC
  2. Areas of focus — which directories/files they touched most (top 3)
  3. Commit type mix — their personal feat/fix/refactor/test breakdown
  4. Session patterns — when they code (their peak hours), session count
  5. Test discipline — their personal test LOC ratio
  6. Biggest ship — their single highest-impact commit or PR in the window

For the current user ("You"): This section gets the deepest treatment. Include all the detail from the solo retro — session analysis, time patterns, focus score. Frame it in first person: "Your peak hours...", "Your biggest ship..."

For each teammate: Write 2-3 sentences covering what they worked on and their pattern. Then:

  • Praise (1-2 specific things): Anchor in actual commits. Not "great work" — say exactly what was good. Examples: "Shipped the entire auth middleware rewrite in 3 focused sessions with 45% test coverage", "Every PR under 200 LOC — disciplined decomposition."
  • Opportunity for growth (1 specific thing): Frame as a leveling-up suggestion, not criticism. Anchor in actual data. Examples: "Test ratio was 12% this week — adding test coverage to the payment module before it gets more complex would pay off", "5 fix commits on the same file suggest the original PR could have used a review pass."

If only one contributor (solo repo): Skip the team breakdown and proceed as before — the retro is personal.

If there are Co-Authored-By trailers: Parse Co-Authored-By: lines in commit messages. Credit those authors for the commit alongside the primary author. Note AI co-authors (e.g., noreply@anthropic.com) but do not include them as team members — instead, track "AI-assisted commits" as a separate metric.
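A sketch for the AI-assisted count, matching the trailer anywhere in the message (the anthropic address is one example co-author):

git log origin/<default> --since="<window>" -i --grep='co-authored-by:.*anthropic' --format="%H" | wc -l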

Step 10: Weekly Trends

If the time window is 14 days or more, split into weekly buckets and show trends:

  • Commits per week (total and per-author)
  • LOC per week
  • Test ratio per week
  • Fix ratio per week
  • Session count per week

Step 11: Streak Tracking

Count consecutive days with at least 1 commit to origin/<default>, going back from today. Track both the team streak and your personal streak:

# Team streak: all unique commit dates (local time) — no hard cutoff
git log origin/<default> --format="%ad" --date=format:"%Y-%m-%d" | sort -u

# Personal streak: only the current user's commits
git log origin/<default> --author="<user_name>" --format="%ad" --date=format:"%Y-%m-%d" | sort -u

Count backward from today — how many consecutive days have at least one commit? This queries the full history so streaks of any length are reported accurately (a sketch follows the examples below). Display both:

  • "Team shipping streak: 47 consecutive days"
  • "Your shipping streak: 32 consecutive days"

Step 12: Load History & Compare

Before saving the new snapshot, check for prior retro history:

ls -t .context/retros/*.json 2>/dev/null

If prior retros exist: Load the most recent one using the Read tool. Calculate deltas for key metrics and include a Trends vs Last Retro section:

                    Last        Now         Delta
Test ratio:         22%    →    41%         ↑19pp
Sessions:           10     →    14          ↑4
LOC/hour:           200    →    350         ↑75%
Fix ratio:          54%    →    30%         ↓24pp (improving)
Commits:            32     →    47          ↑47%
Deep sessions:      3      →    5           ↑2

If no prior retros exist: Skip the comparison section and append: "First retro recorded — run again next week to see trends."

Step 13: Save Retro History

After computing all metrics (including streak) and loading any prior history for comparison, save a JSON snapshot:

mkdir -p .context/retros

Determine the next sequence number for today (substitute the actual date where the snippet uses $(date +%Y-%m-%d)):

# Count existing retros for today to get next sequence number
today=$(date +%Y-%m-%d)
existing=$(ls .context/retros/${today}-*.json 2>/dev/null | wc -l | tr -d ' ')
next=$((existing + 1))
# Save as .context/retros/${today}-${next}.json

Use the Write tool to save the JSON file with this schema:

{
  "date": "2026-03-08",
  "window": "7d",
  "metrics": {
    "commits": 47,
    "contributors": 3,
    "prs_merged": 12,
    "insertions": 3200,
    "deletions": 800,
    "net_loc": 2400,
    "test_loc": 1300,
    "test_ratio": 0.41,
    "active_days": 6,
    "sessions": 14,
    "deep_sessions": 5,
    "avg_session_minutes": 42,
    "loc_per_session_hour": 350,
    "feat_pct": 0.40,
    "fix_pct": 0.30,
    "peak_hour": 22,
    "ai_assisted_commits": 32
  },
  "authors": {
    "Garry Tan": { "commits": 32, "insertions": 2400, "deletions": 300, "test_ratio": 0.41, "top_area": "browse/" },
    "Alice": { "commits": 12, "insertions": 800, "deletions": 150, "test_ratio": 0.35, "top_area": "app/services/" }
  },
  "version_range": ["1.16.0.0", "1.16.1.0"],
  "streak_days": 47,
  "tweetable": "Week of Mar 1: 47 commits (3 contributors), 3.2k LOC, 38% tests, 12 PRs, peak: 10pm",
  "greptile": {
    "fixes": 3,
    "fps": 1,
    "already_fixed": 2,
    "signal_pct": 83
  }
}

Note: Only include the greptile field if ~/.gstack/greptile-history.md exists and has entries within the time window. Only include the backlog field if TODOS.md exists. Only include the test_health field if test files were found (command 10 returns > 0). If any has no data, omit the field entirely.

Include test health data in the JSON when test files exist:

  "test_health": {
    "total_test_files": 47,
    "tests_added_this_period": 5,
    "regression_test_commits": 3,
    "test_files_changed": 8
  }

Include backlog data in the JSON when TODOS.md exists:

  "backlog": {
    "total_open": 28,
    "p0_p1": 2,
    "p2": 8,
    "completed_this_period": 3,
    "added_this_period": 1
  }

Step 14: Write the Narrative

Structure the output as:


Tweetable summary (first line, before everything else):

Week of Mar 1: 47 commits (3 contributors), 3.2k LOC, 41% tests, 12 PRs, peak: 10pm | Streak: 47d

Engineering Retro: [date range]

Summary Table

(from Step 2)

Trends vs Last Retro

(from Step 12, loaded before save — skip if first retro)

Time & Session Patterns

(from Steps 3-4)

Narrative interpreting what the team-wide patterns mean:

  • When the most productive hours are and what drives them
  • Whether sessions are getting longer or shorter over time
  • Estimated hours per day of active coding (team aggregate)
  • Notable patterns: do team members code at the same time or in shifts?

Shipping Velocity

(from Steps 5-7)

Narrative covering:

  • Commit type mix and what it reveals
  • PR size distribution and what it reveals about shipping cadence
  • Fix-chain detection (sequences of fix commits on the same subsystem)
  • Version bump discipline

Code Quality Signals

  • Test LOC ratio trend
  • Hotspot analysis (are the same files churning?)
  • Greptile signal ratio and trend (if history exists): "Greptile: X% signal (Y valid catches, Z false positives)"

Test Health

  • Total test files: N (from command 10)
  • Tests added this period: M (from command 13 — test files changed)
  • Regression test commits: list test(qa): and test(design): and test: coverage commits from command 11
  • If prior retro exists and has test_health: show delta "Test count: {last} → {now} (+{delta})"
  • If test ratio < 20%: flag as growth area — "100% test coverage is the goal. Tests make vibe coding safe."

Focus & Highlights

(from Step 8)

  • Focus score with interpretation
  • Ship of the week callout

Your Week (personal deep-dive)

(from Step 9, for the current user only)

This is the section the user cares most about. Include:

  • Their personal commit count, LOC, test ratio
  • Their session patterns and peak hours
  • Their focus areas
  • Their biggest ship
  • What you did well (2-3 specific things anchored in commits)
  • Where to level up (1-2 specific, actionable suggestions)

Team Breakdown

(from Step 9, for each teammate — skip if solo repo)

For each teammate (sorted by commits descending), write a section:

[Name]

  • What they shipped: 2-3 sentences on their contributions, areas of focus, and commit patterns
  • Praise: 1-2 specific things they did well, anchored in actual commits. Be genuine — what would you actually say in a 1:1? Examples:
    • "Cleaned up the entire auth module in 3 small, reviewable PRs — textbook decomposition"
    • "Added integration tests for every new endpoint, not just happy paths"
    • "Fixed the N+1 query that was causing 2s load times on the dashboard"
  • Opportunity for growth: 1 specific, constructive suggestion. Frame as investment, not criticism. Examples:
    • "Test coverage on the payment module is at 8% — worth investing in before the next feature lands on top of it"
    • "Most commits land in a single burst — spacing work across the day could reduce context-switching fatigue"
    • "All commits land between 1-4am — sustainable pace matters for code quality long-term"

AI collaboration note: If many commits have Co-Authored-By AI trailers (e.g., Claude, Copilot), note the AI-assisted commit percentage as a team metric. Frame it neutrally — "N% of commits were AI-assisted" — without judgment.

Top 3 Team Wins

Identify the 3 highest-impact things shipped in the window across the whole team. For each:

  • What it was
  • Who shipped it
  • Why it matters (product/architecture impact)

3 Things to Improve

Specific, actionable, anchored in actual commits. Mix personal and team-level suggestions. Phrase as "to get even better, the team could..."

3 Habits for Next Week

Small, practical, realistic. Each must be something that takes <5 minutes to adopt. At least one should be team-oriented (e.g., "review each other's PRs same-day").

Weekly Trends

(if applicable, from Step 10)


Compare Mode

When the user runs /retro compare (or /retro compare 14d):

  1. Compute metrics for the current window (default 7d) using the midnight-aligned start date (same logic as the main retro — e.g., if today is 2026-03-18 and window is 7d, use --since="2026-03-11T00:00:00")
  2. Compute metrics for the immediately prior same-length window using both --since and --until with midnight-aligned dates to avoid overlap (e.g., for a 7d window starting 2026-03-11: prior window is --since="2026-03-04T00:00:00" --until="2026-03-11T00:00:00")
  3. Show a side-by-side comparison table with deltas and arrows
  4. Write a brief narrative highlighting the biggest improvements and regressions
  5. Save only the current-window snapshot to .context/retros/ (same as a normal retro run); do not persist the prior-window metrics.

Tone

  • Encouraging but candid, no coddling
  • Specific and concrete — always anchor in actual commits/code
  • Skip generic praise ("great job!") — say exactly what was good and why
  • Frame improvements as leveling up, not criticism
  • Praise should feel like something you'd actually say in a 1:1 — specific, earned, genuine
  • Growth suggestions should feel like investment advice — "this is worth your time because..." not "you failed at..."
  • Never compare teammates against each other negatively. Each person's section stands on its own.
  • Keep total output around 3000-4500 words (slightly longer to accommodate team sections)
  • Use markdown tables and code blocks for data, prose for narrative
  • Output directly to the conversation — do NOT write to filesystem (except the .context/retros/ JSON snapshot)

Important Rules

  • ALL narrative output goes directly to the user in the conversation. The ONLY file written is the .context/retros/ JSON snapshot.
  • Use origin/<default> for all git queries (not local main which may be stale)
  • Display all timestamps in the user's local timezone (do not override TZ)
  • If the window has zero commits, say so and suggest a different window
  • Round LOC/hour to nearest 50
  • Treat merge commits as PR boundaries
  • Do not read CLAUDE.md or other docs — this skill is self-contained
  • On first run (no prior retros), skip comparison sections gracefully