mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-01 19:25:10 +02:00
feat: /retro global — cross-project AI coding retrospective (v0.10.2.0) (#316)
* feat: gstack-global-discover — cross-tool AI session discovery

  Standalone script that scans Claude Code, Codex CLI, and Gemini CLI session directories, resolves each session's working directory to a git repo, deduplicates by normalized remote URL, and outputs structured JSON.

  - Reads only the first 4-8KB of session files (avoids OOM on large transcripts)
  - Only counts JSONL files modified within the time window (accurate counts)
  - Week windows midnight-aligned like day windows for consistency
  - 16 tests covering URL normalization, CLI behavior, and output structure

* feat: /retro global — cross-project retro using discovery engine

  Adds Global Retrospective Mode to the /retro skill. When invoked as `/retro global`, skips the repo-scoped retro and instead uses gstack-global-discover to find all AI coding sessions across all tools, then runs git log on each discovered repo for a unified cross-project retrospective with global shipping streak and context-switching metrics.

* chore: bump version and changelog (v0.9.9.0)

* docs: sync documentation with shipped changes

  Update README /retro description to mention global mode. Add bin/ directory to CLAUDE.md project structure.

* feat: /retro global adds per-project personal contributions breakdown

* chore: regenerate SKILL.md files after main merge

* chore: bump version and changelog (v0.10.2.0)

* feat: test coverage catalog — shared audit across plan/ship/review (v0.10.1.0) (#259)

* feat: /retro global shareable personal card — screenshot-ready stats

* chore: regenerate Codex/agents SKILL.md for retro shareable card

* fix: widen retro global card — never truncate repo names

* fix: retro global card — left border only, drop unreliable right border

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@@ -273,6 +273,8 @@ When the user types `/retro`, run this skill.
- `/retro 30d` — last 30 days
- `/retro compare` — compare current window vs prior same-length window
- `/retro compare 14d` — compare with explicit window
- `/retro global` — cross-project retro across all AI coding tools (7d default)
- `/retro global 14d` — cross-project retro with explicit window

## Instructions

@@ -280,17 +282,21 @@ Parse the argument to determine the time window. Default to 7 days if no argumen

**Midnight-aligned windows:** For day (`d`) and week (`w`) units, compute an absolute start date at local midnight, not a relative string. For example, if today is 2026-03-18 and the window is 7 days: the start date is 2026-03-11. Use `--since="2026-03-11T00:00:00"` for git log queries — the explicit `T00:00:00` suffix ensures git starts from midnight. Without it, git uses the current wall-clock time (e.g., `--since="2026-03-11"` at 11pm means 11pm, not midnight). For week units, multiply by 7 to get days (e.g., `2w` = 14 days back). For hour (`h`) units, use `--since="N hours ago"` since midnight alignment does not apply to sub-day windows.

**Argument validation:** If the argument doesn't match a number followed by `d`, `h`, or `w`, the word `compare` (optionally followed by a window), or the word `global` (optionally followed by a window), show this usage and stop:
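As a concrete sketch of the midnight-aligned computation (assumes GNU `date`; on macOS the equivalent is `date -v-7d`):

```bash
# Compute a midnight-aligned --since value for a 7-day window.
# GNU date assumed; macOS would use: date -v-7d +%Y-%m-%d
start=$(date -d "7 days ago" +%Y-%m-%d)
echo "--since=${start}T00:00:00"
```

The explicit `T00:00:00` suffix is what keeps the window boundary at midnight rather than the current wall-clock time.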
```
Usage: /retro [window | compare | global]
/retro — last 7 days (default)
/retro 24h — last 24 hours
/retro 14d — last 14 days
/retro 30d — last 30 days
/retro compare — compare this period vs prior period
/retro compare 14d — compare with explicit window
/retro global — cross-project retro across all AI tools (7d default)
/retro global 14d — cross-project retro with explicit window
```

**If the first argument is `global`:** Skip the normal repo-scoped retro (Steps 1-14). Instead, follow the **Global Retrospective** flow at the end of this document. The optional second argument is the time window (default 7d). This mode does NOT require being inside a git repo.

### Step 1: Gather Raw Data

First, fetch origin and identify the current user:
@@ -736,6 +742,293 @@ Small, practical, realistic. Each must be something that takes <5 minutes to ado

---

## Global Retrospective Mode

When the user runs `/retro global` (or `/retro global 14d`), follow this flow instead of the repo-scoped Steps 1-14. This mode works from any directory — it does NOT require being inside a git repo.

### Global Step 1: Compute time window

Same midnight-aligned logic as the regular retro. Default 7d. The second argument after `global` is the window (e.g., `14d`, `30d`, `24h`).

### Global Step 2: Run discovery

Locate and run the discovery script using this fallback chain:

```bash
DISCOVER_BIN=""
[ -x ~/.codex/skills/gstack/bin/gstack-global-discover ] && DISCOVER_BIN=~/.codex/skills/gstack/bin/gstack-global-discover
[ -z "$DISCOVER_BIN" ] && [ -x .agents/skills/gstack/bin/gstack-global-discover ] && DISCOVER_BIN=.agents/skills/gstack/bin/gstack-global-discover
[ -z "$DISCOVER_BIN" ] && which gstack-global-discover >/dev/null 2>&1 && DISCOVER_BIN=$(which gstack-global-discover)
[ -z "$DISCOVER_BIN" ] && [ -f bin/gstack-global-discover.ts ] && DISCOVER_BIN="bun run bin/gstack-global-discover.ts"
echo "DISCOVER_BIN: $DISCOVER_BIN"
```

If no binary is found, tell the user: "Discovery script not found. Run `bun run build` in the gstack directory to compile it." and stop.

Run the discovery:

```bash
$DISCOVER_BIN --since "<window>" --format json 2>/tmp/gstack-discover-stderr
```

Read the stderr output from `/tmp/gstack-discover-stderr` for diagnostic info. Parse the JSON output from stdout.

If `total_sessions` is 0, say: "No AI coding sessions found in the last <window>. Try a longer window: `/retro global 30d`" and stop.
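A minimal guard sketch for that empty-result check (the `/tmp/gstack-discover.json` capture file and the grep-based field extraction are illustrative; a real JSON parser is preferable):

```bash
# Illustrative: extract total_sessions from captured discovery output.
# /tmp/gstack-discover.json is a hypothetical capture file, not part of the tool.
total=$(grep -o '"total_sessions":[0-9]*' /tmp/gstack-discover.json 2>/dev/null | head -1 | cut -d: -f2)
if [ "${total:-0}" -eq 0 ]; then
  echo "No AI coding sessions found. Try a longer window: /retro global 30d"
fi
```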
### Global Step 3: Run git log on each discovered repo

For each repo in the discovery JSON's `repos` array, find the first valid path in `paths[]` (a directory that exists and contains `.git/`). If no valid path exists, skip the repo and note it.

**For local-only repos** (where `remote` starts with `local:`): skip `git fetch` and use the local default branch. Use `git log HEAD` instead of `git log origin/$DEFAULT`.

**For repos with remotes:**

```bash
git -C <path> fetch origin --quiet 2>/dev/null
```

Detect the default branch for each repo: first try `git symbolic-ref refs/remotes/origin/HEAD`, then check common branch names (`main`, `master`), then fall back to `git rev-parse --abbrev-ref HEAD`. Use the detected branch as `$DEFAULT` in the commands below.
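One way to sketch that fallback chain as a shell function (the `detect_default` name is illustrative, not part of the skill):

```bash
# Illustrative helper: resolve a repo's default branch with graceful fallbacks.
detect_default() {
  local path="$1" d
  # 1. Remote HEAD, e.g. "origin/main" -> "main"
  d=$(git -C "$path" symbolic-ref --short refs/remotes/origin/HEAD 2>/dev/null | sed 's|^origin/||')
  # 2. Common branch names
  if [ -z "$d" ]; then
    for cand in main master; do
      git -C "$path" show-ref --verify --quiet "refs/heads/$cand" && { d=$cand; break; }
    done
  fi
  # 3. Whatever is currently checked out
  [ -z "$d" ] && d=$(git -C "$path" rev-parse --abbrev-ref HEAD)
  echo "$d"
}
```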
```bash
# Commits with stats
git -C <path> log origin/$DEFAULT --since="<start_date>T00:00:00" --format="%H|%aN|%ai|%s" --shortstat

# Commit timestamps for session detection, streak, and context switching
git -C <path> log origin/$DEFAULT --since="<start_date>T00:00:00" --format="%at|%aN|%ai|%s" | sort -n

# Per-author commit counts
git -C <path> shortlog origin/$DEFAULT --since="<start_date>T00:00:00" -sn --no-merges

# PR numbers from commit messages
git -C <path> log origin/$DEFAULT --since="<start_date>T00:00:00" --format="%s" | grep -oE '#[0-9]+' | sort -n | uniq
```

For repos that fail (deleted paths, network errors): skip and note "N repos could not be reached."

### Global Step 4: Compute global shipping streak

For each repo, get commit dates (capped at 365 days):

```bash
git -C <path> log origin/$DEFAULT --since="365 days ago" --format="%ad" --date=format:"%Y-%m-%d" | sort -u
```

Union all dates across all repos. Count backward from today — how many consecutive days have at least one commit to ANY repo? If the streak hits 365 days, display as "365+ days".
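The backward count can be sketched as a loop over the unioned dates (GNU `date` assumed; `dates.txt` is an illustrative file holding one `YYYY-MM-DD` per line):

```bash
# Illustrative streak count: dates.txt = union of commit dates across all repos.
streak=0
while [ "$streak" -lt 365 ]; do
  day=$(date -d "$streak days ago" +%Y-%m-%d)   # GNU date; macOS: date -v-${streak}d
  grep -qx "$day" dates.txt 2>/dev/null || break
  streak=$((streak + 1))
done
echo "streak: ${streak}d"
```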
### Global Step 5: Compute context switching metric

From the commit timestamps gathered in Step 3, group by date. For each date, count how many distinct repos had commits that day. Report:

- Average repos/day
- Maximum repos/day
- Which days were focused (1 repo) vs. fragmented (3+ repos)
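As a sketch of the distinct-repos-per-day computation (`commits.txt` and its sample rows are illustrative, holding one `YYYY-MM-DD repo` pair per commit):

```bash
# Distinct repos per day -> average and max.
# commits.txt holds one "YYYY-MM-DD repo" pair per commit (sample data is illustrative).
cat > commits.txt <<'EOF'
2026-03-18 gstack
2026-03-18 gstack
2026-03-18 myapp
2026-03-19 gstack
EOF
sort -u commits.txt | awk '
  { repos[$1]++ }
  END {
    for (d in repos) { days++; sum += repos[d]; if (repos[d] > max) max = repos[d] }
    printf "avg %.1f repos/day, max %d\n", sum / days, max
  }'
# → avg 1.5 repos/day, max 2
```

`sort -u` collapses multiple commits to the same repo on the same day before counting, so the metric reflects distinct repos, not commit volume.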
### Global Step 6: Per-tool productivity patterns

From the discovery JSON, analyze tool usage patterns:

- Which AI tool is used for which repos (exclusive vs. shared)
- Session count per tool
- Behavioral patterns (e.g., "Codex used exclusively for myapp, Claude Code for everything else")
### Global Step 7: Aggregate and generate narrative

Structure the output with the **shareable personal card first**, then the full team/project breakdown below. The personal card is designed to be screenshot-friendly — everything someone would want to share on X/Twitter in one clean block.

---

**Tweetable summary** (first line, before everything else):

```
Week of Mar 14: 5 projects, 138 commits, 250k LOC | 48 AI sessions | Streak: 52d 🔥
```

## 🚀 Your Week: [user name] — [date range]

This section is the **shareable personal card**. It contains ONLY the current user's stats — no team data, no project breakdowns. Designed to screenshot and post.

Use the user identity from `git config user.name` to filter all per-repo git data. Aggregate across all repos to compute personal totals.

Render as a single visually clean block. Left border only — no right border (LLMs can't align right borders reliably). Pad repo names to the longest name so columns align cleanly. Never truncate project names.
```
╔═══════════════════════════════════════════════════════════════
║ [USER NAME] — Week of [date]
╠═══════════════════════════════════════════════════════════════
║
║ [N] commits across [M] projects
║ +[X]k LOC added · [Y]k LOC deleted · [Z]k net
║ [N] AI coding sessions (CC: X, Codex: Y, Gemini: Z)
║ [N]-day shipping streak 🔥
║
║ PROJECTS
║ ─────────────────────────────────────────────────────────
║ [repo_name_full] [N] commits +[X]k LOC [solo/team]
║ [repo_name_full] [N] commits +[X]k LOC [solo/team]
║ [repo_name_full] [N] commits +[X]k LOC [solo/team]
║
║ SHIP OF THE WEEK
║ [PR title] — [LOC] lines across [N] files
║
║ TOP WORK
║ • [1-line description of biggest theme]
║ • [1-line description of second theme]
║ • [1-line description of third theme]
║
║ Powered by gstack · github.com/garrytan/gstack
╚═══════════════════════════════════════════════════════════════
```

**Rules for the personal card:**

- Only show repos where the user has commits. Skip repos with 0 commits.
- Sort repos by user's commit count descending.
- **Never truncate repo names.** Use the full repo name (e.g., `analyze_transcripts` not `analyze_trans`). Pad the name column to the longest repo name so all columns align. If names are long, widen the box — the box width adapts to content.
- For LOC, use "k" formatting for thousands (e.g., "+64.0k" not "+64010").
- Role: "solo" if user is the only contributor, "team" if others contributed.
- Ship of the Week: the user's single highest-LOC PR across ALL repos.
- Top Work: 3 bullet points summarizing the user's major themes, inferred from commit messages. Not individual commits — synthesize into themes. E.g., "Built /retro global — cross-project retrospective with AI session discovery" not "feat: gstack-global-discover" + "feat: /retro global template".
- The card must be self-contained. Someone seeing ONLY this block should understand the user's week without any surrounding context.
- Do NOT include team members, project totals, or context switching data here.
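A sketch of the "k" formatting rule (the `fmt_k` helper name is illustrative):

```bash
# Illustrative "k" formatter for LOC counts.
fmt_k() { awk -v n="$1" 'BEGIN { if (n >= 1000) printf "%.1fk\n", n / 1000; else printf "%d\n", n }'; }
fmt_k 64010   # → 64.0k
fmt_k 412     # → 412
```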
**Personal streak:** Use the user's own commits across all repos (filtered by `--author`) to compute a personal streak, separate from the team streak.

---

## Global Engineering Retro: [date range]

Everything below is the full analysis — team data, project breakdowns, patterns. This is the "deep dive" that follows the shareable card.

### All Projects Overview

| Metric | Value |
|--------|-------|
| Projects active | N |
| Total commits (all repos, all contributors) | N |
| Total LOC | +N / -N |
| AI coding sessions | N (CC: X, Codex: Y, Gemini: Z) |
| Active days | N |
| Global shipping streak (any contributor, any repo) | N consecutive days |
| Context switches/day | N avg (max: M) |

### Per-Project Breakdown

For each repo (sorted by commits descending):

- Repo name (with % of total commits)
- Commits, LOC, PRs merged, top contributor
- Key work (inferred from commit messages)
- AI sessions by tool

**Your Contributions** (sub-section within each project):

For each project, add a "Your contributions" block showing the current user's personal stats within that repo. Use the user identity from `git config user.name` to filter. Include:

- Your commits / total commits (with %)
- Your LOC (+insertions / -deletions)
- Your key work (inferred from YOUR commit messages only)
- Your commit type mix (feat/fix/refactor/chore/docs breakdown)
- Your biggest ship in this repo (highest-LOC commit or PR)

If the user is the only contributor, say "Solo project — all commits are yours." If the user has 0 commits in a repo (team project they didn't touch this period), say "No commits this period — [N] AI sessions only." and skip the breakdown.

Format:

```
**Your contributions:** 47/244 commits (19%), +4.2k/-0.3k LOC
Key work: Writer Chat, email blocking, security hardening
Biggest ship: PR #605 — Writer Chat eats the admin bar (2,457 ins, 46 files)
Mix: feat(3) fix(2) chore(1)
```

### Cross-Project Patterns

- Time allocation across projects (% breakdown, use YOUR commits not total)
- Peak productivity hours aggregated across all repos
- Focused vs. fragmented days
- Context switching trends

### Tool Usage Analysis

Per-tool breakdown with behavioral patterns:

- Claude Code: N sessions across M repos — patterns observed
- Codex: N sessions across M repos — patterns observed
- Gemini: N sessions across M repos — patterns observed

### Ship of the Week (Global)

Highest-impact PR across ALL projects. Identify by LOC and commit messages.

### 3 Cross-Project Insights

What the global view reveals that no single-repo retro could show.

### 3 Habits for Next Week

Considering the full cross-project picture.

---

### Global Step 8: Load history & compare

```bash
ls -t ~/.gstack/retros/global-*.json 2>/dev/null | head -5
```

**Only compare against a prior retro with the same `window` value** (e.g., 7d vs 7d). If the most recent prior retro has a different window, skip comparison and note: "Prior global retro used a different window — skipping comparison."

If a matching prior retro exists, load it with the Read tool. Show a **Trends vs Last Global Retro** table with deltas for key metrics: total commits, LOC, sessions, streak, context switches/day.

If no prior global retros exist, append: "First global retro recorded — run again next week to see trends."

### Global Step 9: Save snapshot

```bash
mkdir -p ~/.gstack/retros
```

Determine the next sequence number for today:

```bash
today=$(date +%Y-%m-%d)
existing=$(ls ~/.gstack/retros/global-${today}-*.json 2>/dev/null | wc -l | tr -d ' ')
next=$((existing + 1))
```

Use the Write tool to save JSON to `~/.gstack/retros/global-${today}-${next}.json`:

```json
{
  "type": "global",
  "date": "2026-03-21",
  "window": "7d",
  "projects": [
    {
      "name": "gstack",
      "remote": "https://github.com/garrytan/gstack",
      "commits": 47,
      "insertions": 3200,
      "deletions": 800,
      "sessions": { "claude_code": 15, "codex": 3, "gemini": 0 }
    }
  ],
  "totals": {
    "commits": 182,
    "insertions": 15300,
    "deletions": 4200,
    "projects": 5,
    "active_days": 6,
    "sessions": { "claude_code": 48, "codex": 8, "gemini": 3 },
    "global_streak_days": 52,
    "avg_context_switches_per_day": 2.1
  },
  "tweetable": "Week of Mar 14: 5 projects, 182 commits, 15.3k LOC | CC: 48, Codex: 8, Gemini: 3 | Focus: gstack (58%) | Streak: 52d"
}
```

---

## Compare Mode

When the user runs `/retro compare` (or `/retro compare 14d`):

@@ -769,3 +1062,4 @@ When the user runs `/retro compare` (or `/retro compare 14d`):

- Treat merge commits as PR boundaries
- Do not read CLAUDE.md or other docs — this skill is self-contained
- On first run (no prior retros), skip comparison sections gracefully
- **Global mode:** Does NOT require being inside a git repo. Saves snapshots to `~/.gstack/retros/` (not `.context/retros/`). Gracefully skip AI tools that aren't installed. Only compare against prior global retros with the same window value. If streak hits 365d cap, display as "365+ days".

@@ -1,6 +1,7 @@
.env
node_modules/
browse/dist/
bin/gstack-global-discover
.gstack/
.claude/skills/
.context/

@@ -1,5 +1,19 @@
# Changelog

## [0.11.1.0] - 2026-03-22 — Global Retro: Cross-Project AI Coding Retrospective

### Added

- **`/retro global` — see everything you shipped across every project in one report.** Scans your Claude Code, Codex CLI, and Gemini CLI sessions, traces each back to its git repo, deduplicates by remote, then runs a full retro across all of them. Global shipping streak, context-switching metrics, per-project breakdowns with personal contributions, and cross-tool usage patterns. Run `/retro global 14d` for a two-week view.
- **Per-project personal contributions in global retro.** Each project in the global retro now shows YOUR commits, LOC, key work, commit type mix, and biggest ship — separate from team totals. Solo projects say "Solo project — all commits are yours." Team projects you didn't touch show session count only.
- **`gstack-global-discover` — the engine behind global retro.** Standalone discovery script that finds all AI coding sessions on your machine, resolves working directories to git repos, normalizes SSH/HTTPS remotes for dedup, and outputs structured JSON. Compiled binary ships with gstack — no `bun` runtime needed.

### Fixed

- **Discovery script reads only the first few KB of session files** instead of loading entire multi-MB JSONL transcripts into memory. Prevents OOM on machines with extensive coding history.
- **Claude Code session counts are now accurate.** Previously counted all JSONL files in a project directory; now only counts files modified within the time window.
- **Week windows (`1w`, `2w`) are now midnight-aligned** like day windows, so `/retro global 1w` and `/retro global 7d` produce consistent results.

## [0.11.0.0] - 2026-03-22 — /cso: Zero-Noise Security Audits

### Added

@@ -54,12 +68,6 @@

- **`/autoplan` — one command, fully reviewed plan.** Hand it a rough plan and it runs the full CEO → design → eng review pipeline automatically. Reads the actual review skill files from disk (same depth, same rigor as running each review manually) and makes intermediate decisions using 6 encoded principles: completeness, boil lakes, pragmatic, DRY, explicit over clever, bias toward action. Taste decisions (close approaches, borderline scope, codex disagreements) surface at a final approval gate. You approve, override, interrogate, or revise. Saves a restore point so you can re-run from scratch. Writes review logs compatible with `/ship`'s dashboard.

## [0.9.9.0] - 2026-03-21 — Harder Office Hours

### Changed

- **`/office-hours` now pushes back harder.** The diagnostic questions no longer soften toward confident founders. Five changes: hardened response posture ("direct to the point of discomfort"), anti-sycophancy rules (banned phrases like "that's an interesting approach"), 5 worked pushback patterns showing BAD vs GOOD responses, a post-Q1 framing check that challenges undefined terms and hidden assumptions, and a gated escape hatch that asks 2 more questions before letting founders skip. Inspired by user feedback comparing gstack with dontbesilent's diagnostic skill.

## [0.9.8.0] - 2026-03-21 — Deploy Pipeline + E2E Performance

### Added

@@ -78,7 +78,8 @@ gstack/
├── land-and-deploy/       # /land-and-deploy skill (merge → deploy → canary verify)
├── office-hours/          # /office-hours skill (YC Office Hours — startup diagnostic + builder brainstorm)
├── investigate/           # /investigate skill (systematic root-cause debugging)
├── retro/                 # Retrospective skill (includes /retro global cross-project mode)
├── bin/                   # Standalone scripts (gstack-global-discover for cross-tool session discovery)
├── document-release/      # /document-release skill (post-ship doc updates)
├── cso/                   # /cso skill (OWASP Top 10 + STRIDE security audit)
├── design-consultation/   # /design-consultation skill (design system from scratch)

@@ -141,9 +141,9 @@ Each skill feeds into the next. `/office-hours` writes a design doc that `/plan-
| `/canary` | **SRE** | Post-deploy monitoring loop. Watches for console errors, performance regressions, and page failures. |
| `/benchmark` | **Performance Engineer** | Baseline page load times, Core Web Vitals, and resource sizes. Compare before/after on every PR. |
| `/document-release` | **Technical Writer** | Update all project docs to match what you just shipped. Catches stale READMEs automatically. |
| `/retro` | **Eng Manager** | Team-aware weekly retro. Per-person breakdowns, shipping streaks, test health trends, growth opportunities. `/retro global` runs across all your projects and AI tools (Claude Code, Codex, Gemini). |
| `/browse` | **QA Engineer** | Real Chromium browser, real clicks, real screenshots. ~100ms per command. |
| `/setup-browser-cookies` | **Session Manager** | Import cookies from your real browser (Chrome, Arc, Brave, Edge) into the headless session. Test authenticated pages. |
| `/autoplan` | **Review Pipeline** | One command, fully reviewed plan. Runs CEO → design → eng review automatically with encoded decision principles. Surfaces only taste decisions for your approval. |

### Power tools

@@ -0,0 +1,591 @@
#!/usr/bin/env bun
/**
 * gstack-global-discover — Discover AI coding sessions across Claude Code, Codex CLI, and Gemini CLI.
 * Resolves each session's working directory to a git repo, deduplicates by normalized remote URL,
 * and outputs structured JSON to stdout.
 *
 * Usage:
 *   gstack-global-discover --since 7d [--format json|summary]
 *   gstack-global-discover --help
 */

import { existsSync, readdirSync, statSync, readFileSync, openSync, readSync, closeSync } from "fs";
import { join, basename } from "path";
import { execSync } from "child_process";
import { homedir } from "os";

// ── Types ──────────────────────────────────────────────────────────────────

interface Session {
  tool: "claude_code" | "codex" | "gemini";
  cwd: string;
}

interface Repo {
  name: string;
  remote: string;
  paths: string[];
  sessions: { claude_code: number; codex: number; gemini: number };
}

interface DiscoveryResult {
  window: string;
  start_date: string;
  repos: Repo[];
  tools: {
    claude_code: { total_sessions: number; repos: number };
    codex: { total_sessions: number; repos: number };
    gemini: { total_sessions: number; repos: number };
  };
  total_sessions: number;
  total_repos: number;
}

// ── CLI parsing ────────────────────────────────────────────────────────────

function printUsage(): void {
  console.error(`Usage: gstack-global-discover --since <window> [--format json|summary]

  --since <window>   Time window: e.g. 7d, 14d, 30d, 24h
  --format <fmt>     Output format: json (default) or summary
  --help             Show this help

Examples:
  gstack-global-discover --since 7d
  gstack-global-discover --since 14d --format summary`);
}

function parseArgs(): { since: string; format: "json" | "summary" } {
  const args = process.argv.slice(2);
  let since = "";
  let format: "json" | "summary" = "json";

  for (let i = 0; i < args.length; i++) {
    if (args[i] === "--help" || args[i] === "-h") {
      printUsage();
      process.exit(0);
    } else if (args[i] === "--since" && args[i + 1]) {
      since = args[++i];
    } else if (args[i] === "--format" && args[i + 1]) {
      const f = args[++i];
      if (f !== "json" && f !== "summary") {
        console.error(`Invalid format: ${f}. Use 'json' or 'summary'.`);
        printUsage();
        process.exit(1);
      }
      format = f;
    } else {
      console.error(`Unknown argument: ${args[i]}`);
      printUsage();
      process.exit(1);
    }
  }

  if (!since) {
    console.error("Error: --since is required.");
    printUsage();
    process.exit(1);
  }

  if (!/^\d+(d|h|w)$/.test(since)) {
    console.error(`Invalid window format: ${since}. Use e.g. 7d, 24h, 2w.`);
    process.exit(1);
  }

  return { since, format };
}

function windowToDate(window: string): Date {
  const match = window.match(/^(\d+)(d|h|w)$/);
  if (!match) throw new Error(`Invalid window: ${window}`);
  const [, numStr, unit] = match;
  const num = parseInt(numStr, 10);
  const now = new Date();

  if (unit === "h") {
    return new Date(now.getTime() - num * 60 * 60 * 1000);
  } else if (unit === "w") {
    // weeks — midnight-aligned like days
    const d = new Date(now);
    d.setDate(d.getDate() - num * 7);
    d.setHours(0, 0, 0, 0);
    return d;
  } else {
    // days — midnight-aligned
    const d = new Date(now);
    d.setDate(d.getDate() - num);
    d.setHours(0, 0, 0, 0);
    return d;
  }
}

// ── URL normalization ──────────────────────────────────────────────────────

export function normalizeRemoteUrl(url: string): string {
  let normalized = url.trim();

  // SSH → HTTPS: git@github.com:user/repo → https://github.com/user/repo
  const sshMatch = normalized.match(/^(?:ssh:\/\/)?git@([^:]+):(.+)$/);
  if (sshMatch) {
    normalized = `https://${sshMatch[1]}/${sshMatch[2]}`;
  }

  // Strip .git suffix
  if (normalized.endsWith(".git")) {
    normalized = normalized.slice(0, -4);
  }

  // Lowercase the host portion
  try {
    const parsed = new URL(normalized);
    parsed.hostname = parsed.hostname.toLowerCase();
    normalized = parsed.toString();
    // Remove trailing slash
    if (normalized.endsWith("/")) {
      normalized = normalized.slice(0, -1);
    }
  } catch {
    // Not a valid URL (e.g., local:<path>), return as-is
  }

  return normalized;
}

// ── Git helpers ────────────────────────────────────────────────────────────

function isGitRepo(dir: string): boolean {
  return existsSync(join(dir, ".git"));
}

function getGitRemote(cwd: string): string | null {
  if (!existsSync(cwd) || !isGitRepo(cwd)) return null;
  try {
    const remote = execSync("git remote get-url origin", {
      cwd,
      encoding: "utf-8",
      timeout: 5000,
      stdio: ["pipe", "pipe", "pipe"],
    }).trim();
    return remote || null;
  } catch {
    return null;
  }
}

// ── Scanners ───────────────────────────────────────────────────────────────
|
||||
|
||||
function scanClaudeCode(since: Date): Session[] {
|
||||
const projectsDir = join(homedir(), ".claude", "projects");
|
||||
if (!existsSync(projectsDir)) return [];
|
||||
|
||||
const sessions: Session[] = [];
|
||||
|
||||
let dirs: string[];
|
||||
try {
|
||||
dirs = readdirSync(projectsDir);
|
||||
} catch {
|
||||
return [];
|
||||
}
|
||||
|
||||
for (const dirName of dirs) {
|
||||
const dirPath = join(projectsDir, dirName);
|
||||
try {
|
||||
const stat = statSync(dirPath);
|
||||
if (!stat.isDirectory()) continue;
|
||||
} catch {
|
||||
continue;
|
||||
}
|
||||
|
||||
// Find JSONL files
|
||||
let jsonlFiles: string[];
|
||||
try {
|
||||
jsonlFiles = readdirSync(dirPath).filter((f) => f.endsWith(".jsonl"));
|
||||
} catch {
|
||||
continue;
|
||||
}
|
||||
if (jsonlFiles.length === 0) continue;
|
||||
|
||||
// Coarse mtime pre-filter: check if any JSONL file is recent
|
||||
const hasRecentFile = jsonlFiles.some((f) => {
|
||||
try {
|
||||
return statSync(join(dirPath, f)).mtime >= since;
|
||||
} catch {
|
||||
return false;
|
||||
}
|
||||
});
|
||||
if (!hasRecentFile) continue;
|
||||
|
||||
// Resolve cwd
|
||||
let cwd = resolveClaudeCodeCwd(dirPath, dirName, jsonlFiles);
|
||||
if (!cwd) continue;
|
||||
|
||||
// Count only JSONL files modified within the window as sessions
|
||||
const recentFiles = jsonlFiles.filter((f) => {
|
||||
try {
|
||||
return statSync(join(dirPath, f)).mtime >= since;
|
||||
} catch {
|
||||
return false;
|
||||
}
|
||||
});
|
||||
for (let i = 0; i < recentFiles.length; i++) {
|
||||
sessions.push({ tool: "claude_code", cwd });
|
||||
}
|
||||
}
|
||||
|
||||
return sessions;
|
||||
}
|
||||
|
function resolveClaudeCodeCwd(
  dirPath: string,
  dirName: string,
  jsonlFiles: string[]
): string | null {
  // Fast-path: decode directory name
  // e.g., -Users-garrytan-git-repo → /Users/garrytan/git/repo
  const decoded = dirName.replace(/^-/, "/").replace(/-/g, "/");
  if (existsSync(decoded)) return decoded;

  // Fallback: read cwd from first JSONL file
  // Sort by mtime descending, pick most recent
  const sorted = jsonlFiles
    .map((f) => {
      try {
        return { name: f, mtime: statSync(join(dirPath, f)).mtime.getTime() };
      } catch {
        return null;
      }
    })
    .filter(Boolean)
    .sort((a, b) => b!.mtime - a!.mtime) as { name: string; mtime: number }[];

  for (const file of sorted.slice(0, 3)) {
    const cwd = extractCwdFromJsonl(join(dirPath, file.name));
    if (cwd && existsSync(cwd)) return cwd;
  }

  return null;
}

function extractCwdFromJsonl(filePath: string): string | null {
  try {
    // Read only the first 8KB to avoid loading huge JSONL files into memory
    const fd = openSync(filePath, "r");
    const buf = Buffer.alloc(8192);
    const bytesRead = readSync(fd, buf, 0, 8192, 0);
    closeSync(fd);
    const text = buf.toString("utf-8", 0, bytesRead);
    const lines = text.split("\n").slice(0, 15);
    for (const line of lines) {
      if (!line.trim()) continue;
      try {
        const obj = JSON.parse(line);
        if (obj.cwd) return obj.cwd;
      } catch {
        continue;
      }
    }
  } catch {
    // File read error
  }
  return null;
}

function scanCodex(since: Date): Session[] {
  const sessionsDir = join(homedir(), ".codex", "sessions");
  if (!existsSync(sessionsDir)) return [];

  const sessions: Session[] = [];

  // Walk YYYY/MM/DD directory structure
  try {
    const years = readdirSync(sessionsDir);
    for (const year of years) {
      const yearPath = join(sessionsDir, year);
      if (!statSync(yearPath).isDirectory()) continue;

      const months = readdirSync(yearPath);
      for (const month of months) {
        const monthPath = join(yearPath, month);
        if (!statSync(monthPath).isDirectory()) continue;

        const days = readdirSync(monthPath);
        for (const day of days) {
          const dayPath = join(monthPath, day);
          if (!statSync(dayPath).isDirectory()) continue;

          const files = readdirSync(dayPath).filter(
            (f) => f.startsWith("rollout-") && f.endsWith(".jsonl")
          );

          for (const file of files) {
            const filePath = join(dayPath, file);
            try {
              const stat = statSync(filePath);
              if (stat.mtime < since) continue;
            } catch {
              continue;
            }

            // Read first line for session_meta (only first 4KB)
            try {
              const fd = openSync(filePath, "r");
              const buf = Buffer.alloc(4096);
              const bytesRead = readSync(fd, buf, 0, 4096, 0);
              closeSync(fd);
              const firstLine = buf.toString("utf-8", 0, bytesRead).split("\n")[0];
              if (!firstLine) continue;
              const meta = JSON.parse(firstLine);
              if (meta.type === "session_meta" && meta.payload?.cwd) {
                sessions.push({ tool: "codex", cwd: meta.payload.cwd });
              }
            } catch {
              console.error(`Warning: could not parse Codex session ${filePath}`);
            }
          }
        }
      }
    }
  } catch {
    // Directory read error
  }

  return sessions;
}

function scanGemini(since: Date): Session[] {
  const tmpDir = join(homedir(), ".gemini", "tmp");
  if (!existsSync(tmpDir)) return [];

  // Load projects.json for path mapping
  const projectsPath = join(homedir(), ".gemini", "projects.json");
  let projectsMap: Record<string, string> = {}; // name → path
  if (existsSync(projectsPath)) {
    try {
      const data = JSON.parse(readFileSync(projectsPath, { encoding: "utf-8" }));
      // Format: { projects: { "/path": "name" } } — we want name → path
      const projects = data.projects || {};
      for (const [path, name] of Object.entries(projects)) {
        projectsMap[name as string] = path;
      }
    } catch {
      console.error("Warning: could not parse ~/.gemini/projects.json");
    }
  }

  const sessions: Session[] = [];
  const seenTimestamps = new Map<string, Set<string>>(); // projectName → Set<startTime>

  let projectDirs: string[];
  try {
    projectDirs = readdirSync(tmpDir);
  } catch {
    return [];
  }

  for (const projectName of projectDirs) {
    const chatsDir = join(tmpDir, projectName, "chats");
    if (!existsSync(chatsDir)) continue;

    // Resolve cwd from projects.json
    let cwd = projectsMap[projectName] || null;

    // Fallback: check .project_root
    if (!cwd) {
      const projectRootFile = join(tmpDir, projectName, ".project_root");
      if (existsSync(projectRootFile)) {
        try {
          cwd = readFileSync(projectRootFile, { encoding: "utf-8" }).trim();
        } catch {}
      }
    }

    if (!cwd || !existsSync(cwd)) continue;

    const seen = seenTimestamps.get(projectName) || new Set<string>();
    seenTimestamps.set(projectName, seen);

    let files: string[];
    try {
      files = readdirSync(chatsDir).filter(
        (f) => f.startsWith("session-") && f.endsWith(".json")
      );
    } catch {
      continue;
    }

    for (const file of files) {
      const filePath = join(chatsDir, file);
      try {
        const stat = statSync(filePath);
        if (stat.mtime < since) continue;
      } catch {
        continue;
      }

      try {
        const data = JSON.parse(readFileSync(filePath, { encoding: "utf-8" }));
        const startTime = data.startTime || "";

        // Deduplicate by startTime within project
        if (startTime && seen.has(startTime)) continue;
        if (startTime) seen.add(startTime);

        sessions.push({ tool: "gemini", cwd });
      } catch {
        console.error(`Warning: could not parse Gemini session ${filePath}`);
      }
    }
  }

  return sessions;
}

// ── Deduplication ──────────────────────────────────────────────────────────

async function resolveAndDeduplicate(sessions: Session[]): Promise<Repo[]> {
  // Group sessions by cwd
  const byCwd = new Map<string, Session[]>();
  for (const s of sessions) {
    const existing = byCwd.get(s.cwd) || [];
    existing.push(s);
    byCwd.set(s.cwd, existing);
  }

  // Resolve git remotes for each cwd
  const cwds = Array.from(byCwd.keys());
  const remoteMap = new Map<string, string>(); // cwd → normalized remote

  for (const cwd of cwds) {
    const raw = getGitRemote(cwd);
    if (raw) {
      remoteMap.set(cwd, normalizeRemoteUrl(raw));
    } else if (existsSync(cwd) && isGitRepo(cwd)) {
      remoteMap.set(cwd, `local:${cwd}`);
    }
  }

  // Group by normalized remote
  const byRemote = new Map<string, { paths: string[]; sessions: Session[] }>();
  for (const [cwd, cwdSessions] of byCwd) {
    const remote = remoteMap.get(cwd);
    if (!remote) continue;

    const existing = byRemote.get(remote) || { paths: [], sessions: [] };
    if (!existing.paths.includes(cwd)) existing.paths.push(cwd);
    existing.sessions.push(...cwdSessions);
    byRemote.set(remote, existing);
  }

  // Build Repo objects
  const repos: Repo[] = [];
  for (const [remote, data] of byRemote) {
    // Find first valid path
    const validPath = data.paths.find((p) => existsSync(p) && isGitRepo(p));
    if (!validPath) continue;

    // Derive name from remote URL
    let name: string;
    if (remote.startsWith("local:")) {
      name = basename(remote.replace("local:", ""));
    } else {
      try {
        const url = new URL(remote);
        name = basename(url.pathname);
      } catch {
        name = basename(remote);
      }
    }

    const sessionCounts = { claude_code: 0, codex: 0, gemini: 0 };
    for (const s of data.sessions) {
      sessionCounts[s.tool]++;
    }

    repos.push({
      name,
      remote,
      paths: data.paths,
      sessions: sessionCounts,
    });
  }

  // Sort by total sessions descending
  repos.sort(
    (a, b) =>
      b.sessions.claude_code + b.sessions.codex + b.sessions.gemini -
      (a.sessions.claude_code + a.sessions.codex + a.sessions.gemini)
  );

  return repos;
}

// ── Main ───────────────────────────────────────────────────────────────────

async function main() {
  const { since, format } = parseArgs();
  const sinceDate = windowToDate(since);
  const startDate = sinceDate.toISOString().split("T")[0];

  // Run all scanners
  const ccSessions = scanClaudeCode(sinceDate);
  const codexSessions = scanCodex(sinceDate);
  const geminiSessions = scanGemini(sinceDate);

  const allSessions = [...ccSessions, ...codexSessions, ...geminiSessions];

  // Summary to stderr
  console.error(
    `Discovered: ${ccSessions.length} CC sessions, ${codexSessions.length} Codex sessions, ${geminiSessions.length} Gemini sessions`
  );

  // Deduplicate
  const repos = await resolveAndDeduplicate(allSessions);

  console.error(`→ ${repos.length} unique repos`);

  // Count per-tool repo counts
  const ccRepos = new Set(repos.filter((r) => r.sessions.claude_code > 0).map((r) => r.remote)).size;
  const codexRepos = new Set(repos.filter((r) => r.sessions.codex > 0).map((r) => r.remote)).size;
  const geminiRepos = new Set(repos.filter((r) => r.sessions.gemini > 0).map((r) => r.remote)).size;

  const result: DiscoveryResult = {
    window: since,
    start_date: startDate,
    repos,
    tools: {
      claude_code: { total_sessions: ccSessions.length, repos: ccRepos },
      codex: { total_sessions: codexSessions.length, repos: codexRepos },
      gemini: { total_sessions: geminiSessions.length, repos: geminiRepos },
    },
    total_sessions: allSessions.length,
    total_repos: repos.length,
  };

  if (format === "json") {
    console.log(JSON.stringify(result, null, 2));
  } else {
    // Summary format
    console.log(`Window: ${since} (since ${startDate})`);
    console.log(`Sessions: ${allSessions.length} total (CC: ${ccSessions.length}, Codex: ${codexSessions.length}, Gemini: ${geminiSessions.length})`);
    console.log(`Repos: ${repos.length} unique`);
    console.log("");
    for (const repo of repos) {
      const total = repo.sessions.claude_code + repo.sessions.codex + repo.sessions.gemini;
      const tools = [];
      if (repo.sessions.claude_code > 0) tools.push(`CC:${repo.sessions.claude_code}`);
      if (repo.sessions.codex > 0) tools.push(`Codex:${repo.sessions.codex}`);
      if (repo.sessions.gemini > 0) tools.push(`Gemini:${repo.sessions.gemini}`);
      console.log(`  ${repo.name} (${total} sessions) — ${tools.join(", ")}`);
      console.log(`    Remote: ${repo.remote}`);
      console.log(`    Paths: ${repo.paths.join(", ")}`);
    }
  }
}

// Only run main when executed directly (not when imported for testing)
if (import.meta.main) {
  main().catch((err) => {
    console.error(`Fatal error: ${err.message}`);
    process.exit(1);
  });
}

(+1 −1)
@@ -8,7 +8,7 @@
     "browse": "./browse/dist/browse"
   },
   "scripts": {
-    "build": "bun run gen:skill-docs && bun run gen:skill-docs --host codex && bun build --compile browse/src/cli.ts --outfile browse/dist/browse && bun build --compile browse/src/find-browse.ts --outfile browse/dist/find-browse && bash browse/scripts/build-node-server.sh && git rev-parse HEAD > browse/dist/.version && rm -f .*.bun-build || true",
+    "build": "bun run gen:skill-docs && bun run gen:skill-docs --host codex && bun build --compile browse/src/cli.ts --outfile browse/dist/browse && bun build --compile browse/src/find-browse.ts --outfile browse/dist/find-browse && bun build --compile bin/gstack-global-discover.ts --outfile bin/gstack-global-discover && bash browse/scripts/build-node-server.sh && git rev-parse HEAD > browse/dist/.version && rm -f .*.bun-build || true",
     "gen:skill-docs": "bun run scripts/gen-skill-docs.ts",
     "dev": "bun run browse/src/cli.ts",
     "server": "bun run browse/src/server.ts",

+296
-2
@@ -280,6 +280,8 @@ When the user types `/retro`, run this skill.
|
||||
- `/retro 30d` — last 30 days
|
||||
- `/retro compare` — compare current window vs prior same-length window
|
||||
- `/retro compare 14d` — compare with explicit window
|
||||
- `/retro global` — cross-project retro across all AI coding tools (7d default)
|
||||
- `/retro global 14d` — cross-project retro with explicit window
|
||||
|
||||
## Instructions
|
||||
|
@@ -287,17 +289,21 @@ Parse the argument to determine the time window. Default to 7 days if no argument
 
 **Midnight-aligned windows:** For day (`d`) and week (`w`) units, compute an absolute start date at local midnight, not a relative string. For example, if today is 2026-03-18 and the window is 7 days: the start date is 2026-03-11. Use `--since="2026-03-11T00:00:00"` for git log queries — the explicit `T00:00:00` suffix ensures git starts from midnight. Without it, git uses the current wall-clock time (e.g., `--since="2026-03-11"` at 11pm means 11pm, not midnight). For week units, multiply by 7 to get days (e.g., `2w` = 14 days back). For hour (`h`) units, use `--since="N hours ago"` since midnight alignment does not apply to sub-day windows.
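The midnight-aligned computation described above can be sketched in TypeScript. This is a minimal illustration, not code from the skill or the discovery script; `windowToSince` is a hypothetical helper name.

```typescript
// Sketch: turn a window argument ("7d", "2w", "24h") into a --since value.
// Day/week windows are anchored at local midnight; hour windows stay relative.
function windowToSince(window: string, now: Date = new Date()): string {
  const match = window.match(/^(\d+)([dhw])$/);
  if (!match) throw new Error(`invalid window: ${window}`);
  const n = parseInt(match[1], 10);
  const unit = match[2];

  if (unit === "h") {
    // Sub-day windows are relative, not midnight-aligned
    return `${n} hours ago`;
  }

  const days = unit === "w" ? n * 7 : n;
  // Constructing a Date from Y/M/D components snaps to local midnight
  const start = new Date(now.getFullYear(), now.getMonth(), now.getDate() - days);
  const pad = (x: number) => String(x).padStart(2, "0");
  // Explicit T00:00:00 so git starts from midnight, not wall-clock time
  return `${start.getFullYear()}-${pad(start.getMonth() + 1)}-${pad(start.getDate())}T00:00:00`;
}
```

With today as 2026-03-18, `windowToSince("7d")` yields `2026-03-11T00:00:00`, matching the worked example in the paragraph above.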
-**Argument validation:** If the argument doesn't match a number followed by `d`, `h`, or `w`, the word `compare`, or `compare` followed by a number and `d`/`h`/`w`, show this usage and stop:
+**Argument validation:** If the argument doesn't match a number followed by `d`, `h`, or `w`, the word `compare` (optionally followed by a window), or the word `global` (optionally followed by a window), show this usage and stop:
 ```
-Usage: /retro [window]
+Usage: /retro [window | compare | global]
 /retro — last 7 days (default)
 /retro 24h — last 24 hours
 /retro 14d — last 14 days
 /retro 30d — last 30 days
 /retro compare — compare this period vs prior period
 /retro compare 14d — compare with explicit window
+/retro global — cross-project retro across all AI tools (7d default)
+/retro global 14d — cross-project retro with explicit window
 ```
 
+**If the first argument is `global`:** Skip the normal repo-scoped retro (Steps 1-14). Instead, follow the **Global Retrospective** flow at the end of this document. The optional second argument is the time window (default 7d). This mode does NOT require being inside a git repo.
+
 ### Step 1: Gather Raw Data
 
 First, fetch origin and identify the current user:
@@ -743,6 +749,293 @@ Small, practical, realistic. Each must be something that takes <5 minutes to adopt.

---

## Global Retrospective Mode

When the user runs `/retro global` (or `/retro global 14d`), follow this flow instead of the repo-scoped Steps 1-14. This mode works from any directory — it does NOT require being inside a git repo.

### Global Step 1: Compute time window

Same midnight-aligned logic as the regular retro. Default 7d. The second argument after `global` is the window (e.g., `14d`, `30d`, `24h`).
### Global Step 2: Run discovery

Locate and run the discovery script using this fallback chain:

```bash
DISCOVER_BIN=""
[ -x ~/.claude/skills/gstack/bin/gstack-global-discover ] && DISCOVER_BIN=~/.claude/skills/gstack/bin/gstack-global-discover
[ -z "$DISCOVER_BIN" ] && [ -x .claude/skills/gstack/bin/gstack-global-discover ] && DISCOVER_BIN=.claude/skills/gstack/bin/gstack-global-discover
[ -z "$DISCOVER_BIN" ] && which gstack-global-discover >/dev/null 2>&1 && DISCOVER_BIN=$(which gstack-global-discover)
[ -z "$DISCOVER_BIN" ] && [ -f bin/gstack-global-discover.ts ] && DISCOVER_BIN="bun run bin/gstack-global-discover.ts"
echo "DISCOVER_BIN: $DISCOVER_BIN"
```

If no binary is found, tell the user: "Discovery script not found. Run `bun run build` in the gstack directory to compile it." and stop.

Run the discovery:
```bash
$DISCOVER_BIN --since "<window>" --format json 2>/tmp/gstack-discover-stderr
```

Read the stderr output from `/tmp/gstack-discover-stderr` for diagnostic info. Parse the JSON output from stdout.

If `total_sessions` is 0, say: "No AI coding sessions found in the last <window>. Try a longer window: `/retro global 30d`" and stop.
### Global Step 3: Run git log on each discovered repo

For each repo in the discovery JSON's `repos` array, find the first valid path in `paths[]` (directory exists with `.git/`). If no valid path exists, skip the repo and note it.

**For local-only repos** (where `remote` starts with `local:`): skip `git fetch` and use the local default branch. Use `git log HEAD` instead of `git log origin/$DEFAULT`.

**For repos with remotes:**

```bash
git -C <path> fetch origin --quiet 2>/dev/null
```

Detect the default branch for each repo: first try `git symbolic-ref refs/remotes/origin/HEAD`, then check common branch names (`main`, `master`), then fall back to `git rev-parse --abbrev-ref HEAD`. Use the detected branch as `$DEFAULT` in the commands below.

```bash
# Commits with stats
git -C <path> log origin/$DEFAULT --since="<start_date>T00:00:00" --format="%H|%aN|%ai|%s" --shortstat

# Commit timestamps for session detection, streak, and context switching
git -C <path> log origin/$DEFAULT --since="<start_date>T00:00:00" --format="%at|%aN|%ai|%s" | sort -n

# Per-author commit counts
git -C <path> shortlog origin/$DEFAULT --since="<start_date>T00:00:00" -sn --no-merges

# PR numbers from commit messages
git -C <path> log origin/$DEFAULT --since="<start_date>T00:00:00" --format="%s" | grep -oE '#[0-9]+' | sort -n | uniq
```

For repos that fail (deleted paths, network errors): skip and note "N repos could not be reached."
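The default-branch fallback chain above can be sketched with the git runner injected, so the ordering is explicit and testable without a real repository. `RunGit` and `detectDefaultBranch` are illustrative names, and using `git rev-parse --verify` for the "check common branch names" step is an assumption, not a command the skill mandates.

```typescript
// A runner that executes `git <args>` and returns stdout, or null on failure.
type RunGit = (args: string) => string | null;

function detectDefaultBranch(run: RunGit): string {
  // 1. origin/HEAD symbolic ref, e.g. "refs/remotes/origin/main"
  const head = run("symbolic-ref refs/remotes/origin/HEAD");
  if (head) return head.trim().replace("refs/remotes/origin/", "");

  // 2. Common branch names, in order
  for (const name of ["main", "master"]) {
    if (run(`rev-parse --verify origin/${name}`)) return name;
  }

  // 3. Fall back to whatever branch is currently checked out
  return (run("rev-parse --abbrev-ref HEAD") ?? "HEAD").trim();
}
```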
### Global Step 4: Compute global shipping streak

For each repo, get commit dates (capped at 365 days):

```bash
git -C <path> log origin/$DEFAULT --since="365 days ago" --format="%ad" --date=format:"%Y-%m-%d" | sort -u
```

Union all dates across all repos. Count backward from today — how many consecutive days have at least one commit to ANY repo? If the streak hits 365 days, display as "365+ days".
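The union-and-count-backward logic can be sketched as follows, assuming the per-repo `YYYY-MM-DD` lists from the command above have already been collected. `globalStreak` is an illustrative name, not part of the skill.

```typescript
// Sketch: union commit dates across repos, then count consecutive days
// backward from `today` ("YYYY-MM-DD"), capped at 365.
function globalStreak(perRepoDates: string[][], today: string): number {
  const all = new Set<string>();
  for (const dates of perRepoDates) for (const d of dates) all.add(d);

  const pad = (x: number) => String(x).padStart(2, "0");
  const [y, m, d] = today.split("-").map(Number);
  let cursor = new Date(y, m - 1, d);
  let streak = 0;
  while (streak < 365) {
    const key = `${cursor.getFullYear()}-${pad(cursor.getMonth() + 1)}-${pad(cursor.getDate())}`;
    if (!all.has(key)) break; // first day with no commits anywhere ends the streak
    streak++;
    cursor = new Date(cursor.getFullYear(), cursor.getMonth(), cursor.getDate() - 1);
  }
  return streak; // 365 here means "365+ days" in the report
}
```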
### Global Step 5: Compute context switching metric

From the commit timestamps gathered in Step 3, group by date. For each date, count how many distinct repos had commits that day. Report:
- Average repos/day
- Maximum repos/day
- Which days were focused (1 repo) vs. fragmented (3+ repos)
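The grouping above can be sketched over (date, repo) pairs; `contextSwitching` is an illustrative name.

```typescript
// Sketch: per-day distinct-repo counts from (date, repo) commit pairs.
function contextSwitching(commits: { date: string; repo: string }[]) {
  const byDate = new Map<string, Set<string>>();
  for (const c of commits) {
    const repos = byDate.get(c.date) ?? new Set<string>();
    repos.add(c.repo);
    byDate.set(c.date, repos);
  }
  const counts = [...byDate.values()].map((s) => s.size);
  const avg = counts.reduce((a, b) => a + b, 0) / (counts.length || 1);
  const max = counts.length ? Math.max(...counts) : 0;
  const focusedDays = counts.filter((n) => n === 1).length;    // 1 repo
  const fragmentedDays = counts.filter((n) => n >= 3).length;  // 3+ repos
  return { avg, max, focusedDays, fragmentedDays };
}
```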
### Global Step 6: Per-tool productivity patterns

From the discovery JSON, analyze tool usage patterns:
- Which AI tool is used for which repos (exclusive vs. shared)
- Session count per tool
- Behavioral patterns (e.g., "Codex used exclusively for myapp, Claude Code for everything else")
### Global Step 7: Aggregate and generate narrative

Structure the output with the **shareable personal card first**, then the full team/project breakdown below. The personal card is designed to be screenshot-friendly — everything someone would want to share on X/Twitter in one clean block.

---

**Tweetable summary** (first line, before everything else):
```
Week of Mar 14: 5 projects, 138 commits, 250k LOC | 48 AI sessions | Streak: 52d 🔥
```

## 🚀 Your Week: [user name] — [date range]

This section is the **shareable personal card**. It contains ONLY the current user's stats — no team data, no project breakdowns. Designed to screenshot and post.

Use the user identity from `git config user.name` to filter all per-repo git data. Aggregate across all repos to compute personal totals.

Render as a single visually clean block. Left border only — no right border (LLMs can't align right borders reliably). Pad repo names to the longest name so columns align cleanly. Never truncate project names.
```
╔═══════════════════════════════════════════════════════════════
║  [USER NAME] — Week of [date]
╠═══════════════════════════════════════════════════════════════
║
║  [N] commits across [M] projects
║  +[X]k LOC added · [Y]k LOC deleted · [Z]k net
║  [N] AI coding sessions (CC: X, Codex: Y, Gemini: Z)
║  [N]-day shipping streak 🔥
║
║  PROJECTS
║  ─────────────────────────────────────────────────────────
║  [repo_name_full]    [N] commits   +[X]k LOC   [solo/team]
║  [repo_name_full]    [N] commits   +[X]k LOC   [solo/team]
║  [repo_name_full]    [N] commits   +[X]k LOC   [solo/team]
║
║  SHIP OF THE WEEK
║  [PR title] — [LOC] lines across [N] files
║
║  TOP WORK
║  • [1-line description of biggest theme]
║  • [1-line description of second theme]
║  • [1-line description of third theme]
║
║  Powered by gstack · github.com/garrytan/gstack
╚═══════════════════════════════════════════════════════════════
```

**Rules for the personal card:**
- Only show repos where the user has commits. Skip repos with 0 commits.
- Sort repos by user's commit count descending.
- **Never truncate repo names.** Use the full repo name (e.g., `analyze_transcripts` not `analyze_trans`). Pad the name column to the longest repo name so all columns align. If names are long, widen the box — the box width adapts to content.
- For LOC, use "k" formatting for thousands (e.g., "+64.0k" not "+64010").
- Role: "solo" if user is the only contributor, "team" if others contributed.
- Ship of the Week: the user's single highest-LOC PR across ALL repos.
- Top Work: 3 bullet points summarizing the user's major themes, inferred from commit messages. Not individual commits — synthesize into themes. E.g., "Built /retro global — cross-project retrospective with AI session discovery" not "feat: gstack-global-discover" + "feat: /retro global template".
- The card must be self-contained. Someone seeing ONLY this block should understand the user's week without any surrounding context.
- Do NOT include team members, project totals, or context switching data here.
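The "k" formatting rule can be pinned down with a tiny helper (illustrative only, not part of the skill):

```typescript
// Illustrative helper for the LOC rule: magnitudes of 1000+ get one decimal
// place and a trailing "k"; smaller values pass through unchanged.
function formatLoc(n: number): string {
  if (Math.abs(n) >= 1000) return `${(n / 1000).toFixed(1)}k`;
  return String(n);
}
```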

**Personal streak:** Use the user's own commits across all repos (filtered by `--author`) to compute a personal streak, separate from the team streak.

---

## Global Engineering Retro: [date range]

Everything below is the full analysis — team data, project breakdowns, patterns. This is the "deep dive" that follows the shareable card.

### All Projects Overview
| Metric | Value |
|--------|-------|
| Projects active | N |
| Total commits (all repos, all contributors) | N |
| Total LOC | +N / -N |
| AI coding sessions | N (CC: X, Codex: Y, Gemini: Z) |
| Active days | N |
| Global shipping streak (any contributor, any repo) | N consecutive days |
| Context switches/day | N avg (max: M) |

### Per-Project Breakdown
For each repo (sorted by commits descending):
- Repo name (with % of total commits)
- Commits, LOC, PRs merged, top contributor
- Key work (inferred from commit messages)
- AI sessions by tool

**Your Contributions** (sub-section within each project):
For each project, add a "Your contributions" block showing the current user's personal stats within that repo. Use the user identity from `git config user.name` to filter. Include:
- Your commits / total commits (with %)
- Your LOC (+insertions / -deletions)
- Your key work (inferred from YOUR commit messages only)
- Your commit type mix (feat/fix/refactor/chore/docs breakdown)
- Your biggest ship in this repo (highest-LOC commit or PR)

If the user is the only contributor, say "Solo project — all commits are yours." If the user has 0 commits in a repo (team project they didn't touch this period), say "No commits this period — [N] AI sessions only." and skip the breakdown.

Format:
```
**Your contributions:** 47/244 commits (19%), +4.2k/-0.3k LOC
Key work: Writer Chat, email blocking, security hardening
Biggest ship: PR #605 — Writer Chat eats the admin bar (2,457 ins, 46 files)
Mix: feat(3) fix(2) chore(1)
```

### Cross-Project Patterns
- Time allocation across projects (% breakdown, use YOUR commits not total)
- Peak productivity hours aggregated across all repos
- Focused vs. fragmented days
- Context switching trends

### Tool Usage Analysis
Per-tool breakdown with behavioral patterns:
- Claude Code: N sessions across M repos — patterns observed
- Codex: N sessions across M repos — patterns observed
- Gemini: N sessions across M repos — patterns observed

### Ship of the Week (Global)
Highest-impact PR across ALL projects. Identify by LOC and commit messages.

### 3 Cross-Project Insights
What the global view reveals that no single-repo retro could show.

### 3 Habits for Next Week
Considering the full cross-project picture.

---

### Global Step 8: Load history & compare

```bash
ls -t ~/.gstack/retros/global-*.json 2>/dev/null | head -5
```

**Only compare against a prior retro with the same `window` value** (e.g., 7d vs 7d). If the most recent prior retro has a different window, skip comparison and note: "Prior global retro used a different window — skipping comparison."

If a matching prior retro exists, load it with the Read tool. Show a **Trends vs Last Global Retro** table with deltas for key metrics: total commits, LOC, sessions, streak, context switches/day.

If no prior global retros exist, append: "First global retro recorded — run again next week to see trends."
### Global Step 9: Save snapshot

```bash
mkdir -p ~/.gstack/retros
```

Determine the next sequence number for today:
```bash
today=$(date +%Y-%m-%d)
existing=$(ls ~/.gstack/retros/global-${today}-*.json 2>/dev/null | wc -l | tr -d ' ')
next=$((existing + 1))
```

Use the Write tool to save JSON to `~/.gstack/retros/global-${today}-${next}.json`:

```json
{
  "type": "global",
  "date": "2026-03-21",
  "window": "7d",
  "projects": [
    {
      "name": "gstack",
      "remote": "https://github.com/garrytan/gstack",
      "commits": 47,
      "insertions": 3200,
      "deletions": 800,
      "sessions": { "claude_code": 15, "codex": 3, "gemini": 0 }
    }
  ],
  "totals": {
    "commits": 182,
    "insertions": 15300,
    "deletions": 4200,
    "projects": 5,
    "active_days": 6,
    "sessions": { "claude_code": 48, "codex": 8, "gemini": 3 },
    "global_streak_days": 52,
    "avg_context_switches_per_day": 2.1
  },
  "tweetable": "Week of Mar 14: 5 projects, 182 commits, 15.3k LOC | CC: 48, Codex: 8, Gemini: 3 | Focus: gstack (58%) | Streak: 52d"
}
```

---

## Compare Mode

When the user runs `/retro compare` (or `/retro compare 14d`):

@@ -776,3 +1069,4 @@ When the user runs `/retro compare` (or `/retro compare 14d`):
 - Treat merge commits as PR boundaries
 - Do not read CLAUDE.md or other docs — this skill is self-contained
 - On first run (no prior retros), skip comparison sections gracefully
+- **Global mode:** Does NOT require being inside a git repo. Saves snapshots to `~/.gstack/retros/` (not `.context/retros/`). Gracefully skip AI tools that aren't installed. Only compare against prior global retros with the same window value. If streak hits 365d cap, display as "365+ days".
||||
+296
-2
@@ -41,6 +41,8 @@ When the user types `/retro`, run this skill.
|
||||
- `/retro 30d` — last 30 days
|
||||
- `/retro compare` — compare current window vs prior same-length window
|
||||
- `/retro compare 14d` — compare with explicit window
|
||||
- `/retro global` — cross-project retro across all AI coding tools (7d default)
|
||||
- `/retro global 14d` — cross-project retro with explicit window
|
||||
|
||||
## Instructions
|
||||
|
||||
@@ -48,17 +50,21 @@ Parse the argument to determine the time window. Default to 7 days if no argumen
|
||||
|
||||
**Midnight-aligned windows:** For day (`d`) and week (`w`) units, compute an absolute start date at local midnight, not a relative string. For example, if today is 2026-03-18 and the window is 7 days: the start date is 2026-03-11. Use `--since="2026-03-11T00:00:00"` for git log queries — the explicit `T00:00:00` suffix ensures git starts from midnight. Without it, git uses the current wall-clock time (e.g., `--since="2026-03-11"` at 11pm means 11pm, not midnight). For week units, multiply by 7 to get days (e.g., `2w` = 14 days back). For hour (`h`) units, use `--since="N hours ago"` since midnight alignment does not apply to sub-day windows.

**Argument validation:** If the argument doesn't match a number followed by `d`, `h`, or `w`, the word `compare` (optionally followed by a window), or the word `global` (optionally followed by a window), show this usage and stop:

```
Usage: /retro [window | compare | global]
/retro — last 7 days (default)
/retro 24h — last 24 hours
/retro 14d — last 14 days
/retro 30d — last 30 days
/retro compare — compare this period vs prior period
/retro compare 14d — compare with explicit window
/retro global — cross-project retro across all AI tools (7d default)
/retro global 14d — cross-project retro with explicit window
```

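The validation rule above can be approximated as a shell `case` pattern. This is a rough sketch using globs rather than a strict regex, and the `valid` helper is hypothetical, not part of the skill:

```shell
# Rough glob approximation of the /retro argument grammar (hypothetical helper).
# Globs are looser than a real regex (e.g. "1xd" would slip through), so this
# is illustrative only.
valid() {
  case "$1" in
    "" | [0-9]*[dhw] | compare | "compare "[0-9]*[dhw] | global | "global "[0-9]*[dhw]) return 0 ;;
    *) return 1 ;;
  esac
}

valid "14d"    && a=ok || a=bad
valid "global" && b=ok || b=bad
valid "banana" && c=ok || c=bad
echo "$a $b $c"
```

The empty-string branch covers the bare `/retro` invocation (default 7-day window).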
**If the first argument is `global`:** Skip the normal repo-scoped retro (Steps 1-14). Instead, follow the **Global Retrospective** flow at the end of this document. The optional second argument is the time window (default 7d). This mode does NOT require being inside a git repo.

### Step 1: Gather Raw Data

First, fetch origin and identify the current user:

@@ -504,6 +510,293 @@ Small, practical, realistic. Each must be something that takes <5 minutes to adopt

---

## Global Retrospective Mode

When the user runs `/retro global` (or `/retro global 14d`), follow this flow instead of the repo-scoped Steps 1-14. This mode works from any directory — it does NOT require being inside a git repo.

### Global Step 1: Compute time window

Same midnight-aligned logic as the regular retro. Default 7d. The second argument after `global` is the window (e.g., `14d`, `30d`, `24h`).

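The window-to-date conversion can be sketched as follows. This is a minimal example assuming GNU `date` (with a BSD/macOS `-v` fallback); the variable names are illustrative, not part of the skill:

```shell
# Sketch: turn a window like "7d" into a midnight-aligned --since value.
# Assumes GNU date; the || branch is the BSD/macOS fallback.
window="7d"
days="${window%d}"
start=$(date -d "-${days} days" +%Y-%m-%d 2>/dev/null || date -v-"${days}"d +%Y-%m-%d)
since="--since=${start}T00:00:00"
echo "$since"
```

The explicit `T00:00:00` suffix is what anchors git's `--since` to midnight rather than the current wall-clock time.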
### Global Step 2: Run discovery

Locate and run the discovery script using this fallback chain:

```bash
DISCOVER_BIN=""
[ -x ~/.claude/skills/gstack/bin/gstack-global-discover ] && DISCOVER_BIN=~/.claude/skills/gstack/bin/gstack-global-discover
[ -z "$DISCOVER_BIN" ] && [ -x .claude/skills/gstack/bin/gstack-global-discover ] && DISCOVER_BIN=.claude/skills/gstack/bin/gstack-global-discover
[ -z "$DISCOVER_BIN" ] && which gstack-global-discover >/dev/null 2>&1 && DISCOVER_BIN=$(which gstack-global-discover)
[ -z "$DISCOVER_BIN" ] && [ -f bin/gstack-global-discover.ts ] && DISCOVER_BIN="bun run bin/gstack-global-discover.ts"
echo "DISCOVER_BIN: $DISCOVER_BIN"
```

If no binary is found, tell the user: "Discovery script not found. Run `bun run build` in the gstack directory to compile it." and stop.

Run the discovery:
```bash
$DISCOVER_BIN --since "<window>" --format json 2>/tmp/gstack-discover-stderr
```

Read the stderr output from `/tmp/gstack-discover-stderr` for diagnostic info. Parse the JSON output from stdout.

If `total_sessions` is 0, say: "No AI coding sessions found in the last <window>. Try a longer window: `/retro global 30d`" and stop.

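The zero-session check can be sketched against a sample payload. The JSON below is hypothetical, a real run would capture the script's stdout instead:

```shell
# Hypothetical discovery payload; a real run would capture $DISCOVER_BIN's stdout.
out='{"window":"7d","total_sessions":0,"total_repos":0,"repos":[]}'
# Pull total_sessions out without jq, so the sketch has no dependencies.
sessions=$(printf '%s' "$out" | sed -n 's/.*"total_sessions":\([0-9]*\).*/\1/p')
if [ "$sessions" -eq 0 ]; then
  msg="No AI coding sessions found in the last 7d. Try a longer window: /retro global 30d"
else
  msg="found $sessions sessions"
fi
echo "$msg"
```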
### Global Step 3: Run git log on each discovered repo

For each repo in the discovery JSON's `repos` array, find the first valid path in `paths[]` (the directory exists and contains `.git/`). If no valid path exists, skip the repo and note it.

**For local-only repos** (where `remote` starts with `local:`): skip `git fetch` and use the local default branch. Use `git log HEAD` instead of `git log origin/$DEFAULT`.

**For repos with remotes:**

```bash
git -C <path> fetch origin --quiet 2>/dev/null
```

Detect the default branch for each repo: first try `git symbolic-ref refs/remotes/origin/HEAD`, then check common branch names (`main`, `master`), then fall back to `git rev-parse --abbrev-ref HEAD`. Use the detected branch as `$DEFAULT` in the commands below.

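The fallback chain above can be sketched as follows. To keep the example self-contained it builds a throwaway repo; in the skill the same commands run against each discovered `<path>`:

```shell
# Self-contained sketch of the default-branch fallback chain.
tmp=$(mktemp -d)
git -C "$tmp" init --quiet
git -C "$tmp" -c user.email=you@example.com -c user.name=you \
  commit --allow-empty --quiet -m init

# 1) remote HEAD, 2) common branch names, 3) current branch
DEFAULT=$(git -C "$tmp" symbolic-ref --short refs/remotes/origin/HEAD 2>/dev/null | sed 's|^origin/||')
if [ -z "$DEFAULT" ]; then
  for b in main master; do
    if git -C "$tmp" show-ref --verify --quiet "refs/heads/$b"; then DEFAULT=$b; break; fi
  done
fi
if [ -z "$DEFAULT" ]; then DEFAULT=$(git -C "$tmp" rev-parse --abbrev-ref HEAD); fi
echo "DEFAULT=$DEFAULT"
rm -rf "$tmp"
```

A fresh repo has no `origin`, so the sketch lands on the common-name check (`main` or `master`, depending on git's configured default).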
```bash
# Commits with stats
git -C <path> log origin/$DEFAULT --since="<start_date>T00:00:00" --format="%H|%aN|%ai|%s" --shortstat

# Commit timestamps for session detection, streak, and context switching
git -C <path> log origin/$DEFAULT --since="<start_date>T00:00:00" --format="%at|%aN|%ai|%s" | sort -n

# Per-author commit counts
git -C <path> shortlog origin/$DEFAULT --since="<start_date>T00:00:00" -sn --no-merges

# PR numbers from commit messages
git -C <path> log origin/$DEFAULT --since="<start_date>T00:00:00" --format="%s" | grep -oE '#[0-9]+' | sort -n | uniq
```

For repos that fail (deleted paths, network errors): skip and note "N repos could not be reached."

### Global Step 4: Compute global shipping streak

For each repo, get commit dates (capped at 365 days):

```bash
git -C <path> log origin/$DEFAULT --since="365 days ago" --format="%ad" --date=format:"%Y-%m-%d" | sort -u
```

Union all dates across all repos. Count backward from today — how many consecutive days have at least one commit to ANY repo? If the streak hits 365 days, display as "365+ days".

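The backward walk can be sketched like this. The date list is hypothetical sample data (in the skill it is the union across all repos), and the sketch assumes GNU `date`:

```shell
# Sketch of the streak walk (assumes GNU date; dates are sample data).
today=$(date +%Y-%m-%d)
yesterday=$(date -d "-1 day" +%Y-%m-%d)
dates="$yesterday
$today"

streak=0
d=$today
# Walk backward one day at a time while the date appears in the list.
while printf '%s\n' "$dates" | grep -qx "$d"; do
  streak=$((streak + 1))
  d=$(date -d "$d -1 day" +%Y-%m-%d)
done
echo "streak=${streak}"
```

With the two-day sample above the walk stops on the third day back, giving a streak of 2.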
### Global Step 5: Compute context switching metric

From the commit timestamps gathered in Step 3, group by date. For each date, count how many distinct repos had commits that day. Report:
- Average repos/day
- Maximum repos/day
- Which days were focused (1 repo) vs. fragmented (3+ repos)

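The per-day distinct-repo counts can be sketched with `sort -u` and `awk`. The "date repo" pairs below are hypothetical sample data, one pair per commit:

```shell
# Sketch with hypothetical "date repo" pairs, one per commit.
# sort -u collapses multiple commits to the same repo on the same day,
# then awk counts distinct repos per date and reports avg/max.
result=$(printf '%s\n' \
  "2026-03-16 gstack" "2026-03-16 gstack" "2026-03-16 myapp" "2026-03-17 gstack" |
  sort -u |
  awk '{ n[$1]++ }
       END { for (d in n) { days++; sum += n[d]; if (n[d] > max) max = n[d] }
             printf "avg=%.1f max=%d", sum / days, max }')
echo "$result"
```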
### Global Step 6: Per-tool productivity patterns

From the discovery JSON, analyze tool usage patterns:
- Which AI tool is used for which repos (exclusive vs. shared)
- Session count per tool
- Behavioral patterns (e.g., "Codex used exclusively for myapp, Claude Code for everything else")

### Global Step 7: Aggregate and generate narrative

Structure the output with the **shareable personal card first**, then the full team/project breakdown below. The personal card is designed to be screenshot-friendly — everything someone would want to share on X/Twitter in one clean block.

---

**Tweetable summary** (first line, before everything else):
```
Week of Mar 14: 5 projects, 138 commits, 250k LOC across 5 repos | 48 AI sessions | Streak: 52d 🔥
```

## 🚀 Your Week: [user name] — [date range]

This section is the **shareable personal card**. It contains ONLY the current user's stats — no team data, no project breakdowns. Designed to screenshot and post.

Use the user identity from `git config user.name` to filter all per-repo git data. Aggregate across all repos to compute personal totals.

Render as a single visually clean block. Left border only — no right border (LLMs can't align right borders reliably). Pad repo names to the longest name so columns align cleanly. Never truncate project names.

```
╔═══════════════════════════════════════════════════════════════
║  [USER NAME] — Week of [date]
╠═══════════════════════════════════════════════════════════════
║
║  [N] commits across [M] projects
║  +[X]k LOC added · [Y]k LOC deleted · [Z]k net
║  [N] AI coding sessions (CC: X, Codex: Y, Gemini: Z)
║  [N]-day shipping streak 🔥
║
║  PROJECTS
║  ─────────────────────────────────────────────────────────
║  [repo_name_full]  [N] commits  +[X]k LOC  [solo/team]
║  [repo_name_full]  [N] commits  +[X]k LOC  [solo/team]
║  [repo_name_full]  [N] commits  +[X]k LOC  [solo/team]
║
║  SHIP OF THE WEEK
║  [PR title] — [LOC] lines across [N] files
║
║  TOP WORK
║  • [1-line description of biggest theme]
║  • [1-line description of second theme]
║  • [1-line description of third theme]
║
║  Powered by gstack · github.com/garrytan/gstack
╚═══════════════════════════════════════════════════════════════
```

**Rules for the personal card:**
- Only show repos where the user has commits. Skip repos with 0 commits.
- Sort repos by user's commit count descending.
- **Never truncate repo names.** Use the full repo name (e.g., `analyze_transcripts` not `analyze_trans`). Pad the name column to the longest repo name so all columns align. If names are long, widen the box — the box width adapts to content.
- For LOC, use "k" formatting for thousands (e.g., "+64.0k" not "+64010").
- Role: "solo" if user is the only contributor, "team" if others contributed.
- Ship of the Week: the user's single highest-LOC PR across ALL repos.
- Top Work: 3 bullet points summarizing the user's major themes, inferred from commit messages. Not individual commits — synthesize into themes. E.g., "Built /retro global — cross-project retrospective with AI session discovery" not "feat: gstack-global-discover" + "feat: /retro global template".
- The card must be self-contained. Someone seeing ONLY this block should understand the user's week without any surrounding context.
- Do NOT include team members, project totals, or context switching data here.

**Personal streak:** Use the user's own commits across all repos (filtered by `--author`) to compute a personal streak, separate from the team streak.

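The "k" formatting rule can be sketched as a tiny helper. The `fmt_k` function is hypothetical, not part of the skill:

```shell
# Sketch: "k" formatting for LOC counts (hypothetical helper).
# Values >= 1000 are shown with one decimal and a "k" suffix.
fmt_k() {
  if [ "$1" -ge 1000 ]; then
    awk -v n="$1" 'BEGIN { printf "%.1fk", n / 1000 }'
  else
    printf '%s' "$1"
  fi
}

loc=$(fmt_k 64010)
echo "+${loc}"
```

So `64010` renders as `+64.0k`, matching the rule's example.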
---

## Global Engineering Retro: [date range]

Everything below is the full analysis — team data, project breakdowns, patterns. This is the "deep dive" that follows the shareable card.

### All Projects Overview
| Metric | Value |
|--------|-------|
| Projects active | N |
| Total commits (all repos, all contributors) | N |
| Total LOC | +N / -N |
| AI coding sessions | N (CC: X, Codex: Y, Gemini: Z) |
| Active days | N |
| Global shipping streak (any contributor, any repo) | N consecutive days |
| Context switches/day | N avg (max: M) |

### Per-Project Breakdown
For each repo (sorted by commits descending):
- Repo name (with % of total commits)
- Commits, LOC, PRs merged, top contributor
- Key work (inferred from commit messages)
- AI sessions by tool

**Your Contributions** (sub-section within each project):
For each project, add a "Your contributions" block showing the current user's personal stats within that repo. Use the user identity from `git config user.name` to filter. Include:
- Your commits / total commits (with %)
- Your LOC (+insertions / -deletions)
- Your key work (inferred from YOUR commit messages only)
- Your commit type mix (feat/fix/refactor/chore/docs breakdown)
- Your biggest ship in this repo (highest-LOC commit or PR)

If the user is the only contributor, say "Solo project — all commits are yours." If the user has 0 commits in a repo (team project they didn't touch this period), say "No commits this period — [N] AI sessions only." and skip the breakdown.

Format:
```
**Your contributions:** 47/244 commits (19%), +4.2k/-0.3k LOC
Key work: Writer Chat, email blocking, security hardening
Biggest ship: PR #605 — Writer Chat eats the admin bar (2,457 ins, 46 files)
Mix: feat(3) fix(2) chore(1)
```

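The "Mix:" line can be derived from conventional-commit subject prefixes. A minimal sketch with hypothetical sample subjects:

```shell
# Sketch: derive the "Mix:" line from conventional-commit subjects (sample data).
subjects="feat: add card
feat: widen card
fix: left border
chore: bump version
feat: global retro"

# Extract the type prefix, count each type, list most-frequent first.
mix=$(printf '%s\n' "$subjects" |
  sed -n 's/^\([a-z]*\): .*/\1/p' |
  sort | uniq -c | sort -rn |
  awk '{ printf "%s(%d) ", $2, $1 }')
echo "Mix: ${mix}"
```

In the skill, the subjects would come from `git log --author="$me" --format="%s"` rather than a literal string.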
### Cross-Project Patterns
- Time allocation across projects (% breakdown, use YOUR commits not total)
- Peak productivity hours aggregated across all repos
- Focused vs. fragmented days
- Context switching trends

### Tool Usage Analysis
Per-tool breakdown with behavioral patterns:
- Claude Code: N sessions across M repos — patterns observed
- Codex: N sessions across M repos — patterns observed
- Gemini: N sessions across M repos — patterns observed

### Ship of the Week (Global)
Highest-impact PR across ALL projects. Identify by LOC and commit messages.

### 3 Cross-Project Insights
What the global view reveals that no single-repo retro could show.

### 3 Habits for Next Week
Considering the full cross-project picture.

---

### Global Step 8: Load history & compare

```bash
ls -t ~/.gstack/retros/global-*.json 2>/dev/null | head -5
```

**Only compare against a prior retro with the same `window` value** (e.g., 7d vs 7d). If the most recent prior retro has a different window, skip comparison and note: "Prior global retro used a different window — skipping comparison."

If a matching prior retro exists, load it with the Read tool. Show a **Trends vs Last Global Retro** table with deltas for key metrics: total commits, LOC, sessions, streak, context switches/day.

If no prior global retros exist, append: "First global retro recorded — run again next week to see trends."

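The window-match check can be sketched against a sample snapshot. The file and its contents below are hypothetical, a real run lists `~/.gstack/retros/` instead:

```shell
# Sketch: check the prior snapshot's window before comparing (sample file).
dir=$(mktemp -d)
printf '{"type":"global","window":"14d"}' > "$dir/global-2026-03-14-1.json"

# Most recent prior retro, then its recorded window (no jq dependency).
prior=$(ls -t "$dir"/global-*.json | head -1)
prior_window=$(sed -n 's/.*"window":"\([^"]*\)".*/\1/p' "$prior")

if [ "$prior_window" = "7d" ]; then
  note="comparing against $prior"
else
  note="Prior global retro used a different window — skipping comparison."
fi
echo "$note"
rm -rf "$dir"
```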
### Global Step 9: Save snapshot

```bash
mkdir -p ~/.gstack/retros
```

Determine the next sequence number for today:
```bash
today=$(date +%Y-%m-%d)
existing=$(ls ~/.gstack/retros/global-${today}-*.json 2>/dev/null | wc -l | tr -d ' ')
next=$((existing + 1))
```

Use the Write tool to save JSON to `~/.gstack/retros/global-${today}-${next}.json`:

```json
{
  "type": "global",
  "date": "2026-03-21",
  "window": "7d",
  "projects": [
    {
      "name": "gstack",
      "remote": "https://github.com/garrytan/gstack",
      "commits": 47,
      "insertions": 3200,
      "deletions": 800,
      "sessions": { "claude_code": 15, "codex": 3, "gemini": 0 }
    }
  ],
  "totals": {
    "commits": 182,
    "insertions": 15300,
    "deletions": 4200,
    "projects": 5,
    "active_days": 6,
    "sessions": { "claude_code": 48, "codex": 8, "gemini": 3 },
    "global_streak_days": 52,
    "avg_context_switches_per_day": 2.1
  },
  "tweetable": "Week of Mar 14: 5 projects, 182 commits, 15.3k LOC | CC: 48, Codex: 8, Gemini: 3 | Focus: gstack (58%) | Streak: 52d"
}
```

---

## Compare Mode

When the user runs `/retro compare` (or `/retro compare 14d`):

@@ -537,3 +830,4 @@ When the user runs `/retro compare` (or `/retro compare 14d`):
- Treat merge commits as PR boundaries
- Do not read CLAUDE.md or other docs — this skill is self-contained
- On first run (no prior retros), skip comparison sections gracefully
- **Global mode:** Does NOT require being inside a git repo. Saves snapshots to `~/.gstack/retros/` (not `.context/retros/`). Gracefully skip AI tools that aren't installed. Only compare against prior global retros with the same window value. If streak hits 365d cap, display as "365+ days".

@@ -0,0 +1,187 @@
import { describe, test, expect, beforeEach, afterEach } from "bun:test";
import { mkdtempSync, mkdirSync, writeFileSync, rmSync, existsSync } from "fs";
import { join } from "path";
import { tmpdir } from "os";
import { spawnSync } from "child_process";

// Import normalizeRemoteUrl for unit testing
// We test the script end-to-end via CLI and normalizeRemoteUrl via import
const scriptPath = join(import.meta.dir, "..", "bin", "gstack-global-discover.ts");

describe("gstack-global-discover", () => {
  describe("normalizeRemoteUrl", () => {
    // Dynamically import to test the exported function
    let normalizeRemoteUrl: (url: string) => string;

    beforeEach(async () => {
      const mod = await import("../bin/gstack-global-discover.ts");
      normalizeRemoteUrl = mod.normalizeRemoteUrl;
    });

    test("strips .git suffix", () => {
      expect(normalizeRemoteUrl("https://github.com/user/repo.git")).toBe(
        "https://github.com/user/repo"
      );
    });

    test("converts SSH to HTTPS", () => {
      expect(normalizeRemoteUrl("git@github.com:user/repo.git")).toBe(
        "https://github.com/user/repo"
      );
    });

    test("converts SSH without .git to HTTPS", () => {
      expect(normalizeRemoteUrl("git@github.com:user/repo")).toBe(
        "https://github.com/user/repo"
      );
    });

    test("lowercases host", () => {
      expect(normalizeRemoteUrl("https://GitHub.COM/user/repo")).toBe(
        "https://github.com/user/repo"
      );
    });

    test("SSH and HTTPS for same repo normalize to same URL", () => {
      const ssh = normalizeRemoteUrl("git@github.com:garrytan/gstack.git");
      const https = normalizeRemoteUrl("https://github.com/garrytan/gstack.git");
      const httpsNoDotGit = normalizeRemoteUrl("https://github.com/garrytan/gstack");
      expect(ssh).toBe(https);
      expect(https).toBe(httpsNoDotGit);
    });

    test("handles local: URLs consistently", () => {
      const result = normalizeRemoteUrl("local:/tmp/my-repo");
      // local: gets parsed as a URL scheme — the important thing is consistency
      expect(result).toContain("/tmp/my-repo");
    });

    test("handles GitLab SSH URLs", () => {
      expect(normalizeRemoteUrl("git@gitlab.com:org/project.git")).toBe(
        "https://gitlab.com/org/project"
      );
    });
  });

  describe("CLI", () => {
    test("--help exits 0 and prints usage", () => {
      const result = spawnSync("bun", ["run", scriptPath, "--help"], {
        encoding: "utf-8",
        timeout: 10000,
      });
      expect(result.status).toBe(0);
      expect(result.stderr).toContain("--since");
    });

    test("no args exits 1 with error", () => {
      const result = spawnSync("bun", ["run", scriptPath], {
        encoding: "utf-8",
        timeout: 10000,
      });
      expect(result.status).toBe(1);
      expect(result.stderr).toContain("--since is required");
    });

    test("invalid window format exits 1", () => {
      const result = spawnSync("bun", ["run", scriptPath, "--since", "abc"], {
        encoding: "utf-8",
        timeout: 10000,
      });
      expect(result.status).toBe(1);
      expect(result.stderr).toContain("Invalid window format");
    });

    test("--since 7d produces valid JSON", () => {
      const result = spawnSync(
        "bun",
        ["run", scriptPath, "--since", "7d", "--format", "json"],
        { encoding: "utf-8", timeout: 30000 }
      );
      expect(result.status).toBe(0);
      const json = JSON.parse(result.stdout);
      expect(json).toHaveProperty("window", "7d");
      expect(json).toHaveProperty("repos");
      expect(json).toHaveProperty("total_sessions");
      expect(json).toHaveProperty("total_repos");
      expect(json).toHaveProperty("tools");
      expect(Array.isArray(json.repos)).toBe(true);
    });

    test("--since 7d --format summary produces readable output", () => {
      const result = spawnSync(
        "bun",
        ["run", scriptPath, "--since", "7d", "--format", "summary"],
        { encoding: "utf-8", timeout: 30000 }
      );
      expect(result.status).toBe(0);
      expect(result.stdout).toContain("Window: 7d");
      expect(result.stdout).toContain("Sessions:");
      expect(result.stdout).toContain("Repos:");
    });

    test("--since 1h returns results (may be empty)", () => {
      const result = spawnSync(
        "bun",
        ["run", scriptPath, "--since", "1h", "--format", "json"],
        { encoding: "utf-8", timeout: 30000 }
      );
      expect(result.status).toBe(0);
      const json = JSON.parse(result.stdout);
      expect(json.total_sessions).toBeGreaterThanOrEqual(0);
    });
  });

  describe("discovery output structure", () => {
    test("repos have required fields", () => {
      const result = spawnSync(
        "bun",
        ["run", scriptPath, "--since", "30d", "--format", "json"],
        { encoding: "utf-8", timeout: 30000 }
      );
      expect(result.status).toBe(0);
      const json = JSON.parse(result.stdout);

      for (const repo of json.repos) {
        expect(repo).toHaveProperty("name");
        expect(repo).toHaveProperty("remote");
        expect(repo).toHaveProperty("paths");
        expect(repo).toHaveProperty("sessions");
        expect(Array.isArray(repo.paths)).toBe(true);
        expect(repo.paths.length).toBeGreaterThan(0);
        expect(repo.sessions).toHaveProperty("claude_code");
        expect(repo.sessions).toHaveProperty("codex");
        expect(repo.sessions).toHaveProperty("gemini");
      }
    });

    test("tools summary matches repo data", () => {
      const result = spawnSync(
        "bun",
        ["run", scriptPath, "--since", "30d", "--format", "json"],
        { encoding: "utf-8", timeout: 30000 }
      );
      const json = JSON.parse(result.stdout);

      // Total sessions should equal sum across tools
      const toolTotal =
        json.tools.claude_code.total_sessions +
        json.tools.codex.total_sessions +
        json.tools.gemini.total_sessions;
      expect(json.total_sessions).toBe(toolTotal);
    });

    test("deduplicates Conductor workspaces by remote", () => {
      const result = spawnSync(
        "bun",
        ["run", scriptPath, "--since", "30d", "--format", "json"],
        { encoding: "utf-8", timeout: 30000 }
      );
      const json = JSON.parse(result.stdout);

      // Check that no two repos share the same normalized remote
      const remotes = json.repos.map((r: any) => r.remote);
      const uniqueRemotes = new Set(remotes);
      expect(remotes.length).toBe(uniqueRemotes.size);
    });
  });
});

@@ -80,6 +80,9 @@ export const E2E_TOUCHFILES: Record<string, string[]> = {
  'retro': ['retro/**'],
  'retro-base-branch': ['retro/**'],

  // Global discover
  'global-discover': ['bin/gstack-global-discover.ts', 'test/global-discover.test.ts'],

  // Document-release
  'document-release': ['document-release/**'],