gstack/bin/gstack-analytics
Garry Tan 3b22fc39e6 feat: opt-in usage telemetry + community intelligence platform (v0.8.6) (#210)
* feat: add gstack-telemetry-log and gstack-analytics scripts

Local telemetry infrastructure for gstack usage tracking.
gstack-telemetry-log appends JSONL events with skill name, duration,
outcome, session ID, and platform info. Supports off/anonymous/community
privacy tiers. gstack-analytics renders a personal usage dashboard
from local data.
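
A hypothetical event of the shape gstack-telemetry-log appends — the field names mirror what gstack-analytics parses (`event_type`, `skill`, `ts`, `duration_s`, `outcome`, `session_id`); the values here are made up:

```shell
# Append one made-up skill_run event to a scratch JSONL log.
LOG_FILE="$(mktemp)"
printf '%s\n' '{"event_type":"skill_run","skill":"commit","ts":"2026-03-19T17:00:00Z","duration_s":42,"outcome":"success","session_id":"ab12cd34"}' >> "$LOG_FILE"
```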

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add telemetry preamble injection + opt-in prompt + epilogue

Extends generatePreamble() with telemetry start block (config read,
timer, session ID, .pending marker), opt-in prompt (gated by
.telemetry-prompted), and epilogue instructions for Claude to log
events after skill completion. Adds 5 telemetry tests.
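
An illustrative sketch of what the injected telemetry start block does — variable and config-file names here are assumptions, not the generated code:

```shell
# Sketch: read the privacy tier, start a timer, mint a session ID,
# and drop a .pending crash marker when telemetry is enabled.
STATE_DIR="${GSTACK_STATE_DIR:-$HOME/.gstack}"
mkdir -p "$STATE_DIR/analytics"
TELEMETRY="$(cat "$STATE_DIR/config/telemetry" 2>/dev/null || echo off)"
SKILL_START="$(date +%s)"                                   # timer for duration_s
SESSION_ID="$(od -An -N4 -tx1 /dev/urandom | tr -d ' \n')"  # random session ID
if [ "$TELEMETRY" != "off" ]; then
  touch "$STATE_DIR/analytics/.pending"                     # crash marker
fi
```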

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: regenerate all SKILL.md files with telemetry blocks

Automated regeneration from gen-skill-docs.ts changes. All skills
now include telemetry start block, opt-in prompt, and epilogue.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add Supabase schema, edge functions, and SQL views

Telemetry backend infrastructure: telemetry_events table with RLS
(insert-only), installations table for retention tracking,
update_checks for install pings. Edge functions for update-check
(version + ping), telemetry-ingest (batch insert), and
community-pulse (weekly active count). SQL views for crash
clustering and skill co-occurrence sequences.
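
A hypothetical shape of a batch call to the telemetry-ingest edge function — the URL and key are placeholders, and the real endpoint details live in supabase/ (Supabase serves edge functions under `/functions/v1/`):

```shell
SUPABASE_URL="https://example.supabase.co"   # placeholder project URL
PUBLISHABLE_KEY="pk_placeholder"             # placeholder publishable key
BATCH='[{"event_type":"skill_run","skill":"commit","schema_version":1}]'
curl -sS --max-time 2 -X POST "$SUPABASE_URL/functions/v1/telemetry-ingest" \
  -H "Authorization: Bearer $PUBLISHABLE_KEY" \
  -H "Content-Type: application/json" \
  -d "$BATCH" >/dev/null 2>&1 || true        # fire-and-forget: ignore failures
```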

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add telemetry-sync, community-dashboard, and integration tests

gstack-telemetry-sync: fire-and-forget JSONL → Supabase sync with
privacy tier field stripping, batch limits, and cursor tracking.
gstack-community-dashboard: CLI tool querying Supabase for skill
popularity, crash clusters, and version distribution.
19 integration tests covering all telemetry scripts.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: session-specific .pending markers + crash_clusters view fix

Addresses Codex review findings:
- .pending race condition: use .pending-$SESSION_ID instead of
  shared .pending file to prevent concurrent session interference
- crash_clusters view: add total_occurrences and anonymous_occurrences
  columns since anonymous tier has no installation_id
- Added test: own session pending marker is not finalized
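
A sketch of the fix: each session touches and removes only its own marker, so a crash in one session never finalizes another's run (`$SESSION_ID` is whatever the preamble minted; "deadbeef" and the temp state dir are stand-ins):

```shell
STATE_DIR="${GSTACK_STATE_DIR:-$(mktemp -d)}"
SESSION_ID="${SESSION_ID:-deadbeef}"
mkdir -p "$STATE_DIR/analytics"
touch "$STATE_DIR/analytics/.pending-$SESSION_ID"   # skill run started
# ... skill executes ...
rm -f "$STATE_DIR/analytics/.pending-$SESSION_ID"   # finalize own marker only
```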

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: dual-attempt update check with Supabase install ping

Fires a parallel background curl to Supabase during the slow-path
version fetch. Logs upgrade_prompted event only on fresh fetches
(not cached replays) to avoid overcounting. GitHub remains the
primary version source — Supabase ping is fire-and-forget.
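
A minimal sketch of the control flow, with both curl calls stubbed so the example runs offline (the real script hits Supabase and GitHub):

```shell
PING_DIR="$(mktemp -d)"
supabase_ping() { : > "$PING_DIR/pinged"; }   # stand-in for the Supabase curl
github_fetch()  { echo "v0.8.6"; }            # stand-in for the GitHub curl

supabase_ping &                # fire-and-forget: result is never checked
LATEST="$(github_fetch)"       # GitHub remains the primary version source
wait                           # demo only; the real script does not block on the ping
```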

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: integrate telemetry usage stats into /retro output

Retro now reads ~/.gstack/analytics/skill-usage.jsonl and includes
gstack usage metrics (skill run counts, top skills, success rate)
in the weekly retrospective output.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: move 'Skill usage telemetry' to Completed in TODOS.md

Implemented in this branch: local JSONL logging, opt-in prompt,
privacy tiers, Supabase backend, community dashboard, /retro
integration.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: wire Supabase credentials and expose tables via Data API

Add supabase/config.sh with project URL and publishable key (safe to
commit — RLS restricts to INSERT only). Update telemetry-sync,
community-dashboard, and update-check to source the config and
include proper auth headers for the Supabase REST API.
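
A sketch of the pattern: source the committed config, then attach both headers the Supabase REST API expects (`apikey` plus `Authorization: Bearer`). The variable names are assumptions, and a stand-in config is written here so the example is self-contained:

```shell
CONFIG="$(mktemp)"
printf '%s\n' 'SUPABASE_URL="https://example.supabase.co"' \
              'SUPABASE_KEY="pk_placeholder"' > "$CONFIG"
. "$CONFIG"
# Every REST/edge-function call then carries both headers:
AUTH=(-H "apikey: $SUPABASE_KEY" -H "Authorization: Bearer $SUPABASE_KEY")
curl -sS --max-time 2 "${AUTH[@]}" \
  "$SUPABASE_URL/rest/v1/telemetry_events?select=skill&limit=1" \
  >/dev/null 2>&1 || true   # placeholder URL, so failure is ignored here
```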

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add SELECT RLS policies to migration for community dashboard reads

All telemetry data is anonymous (no PII), so public reads via the
publishable key are safe. Needed for the community dashboard to
query skill popularity, crash clusters, and version distribution.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: bump version and changelog (v0.8.6)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: analytics backward-compatible with old JSONL format

Handle old-format events (no event_type field) alongside new format.
Skip hook_fire events. Fix grep -c whitespace issues and unbound
variable errors.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: map JSONL field names to Postgres columns in telemetry-sync

Local JSONL uses short names (v, ts, sessions) but the Supabase
table expects full names (schema_version, event_timestamp,
concurrent_sessions). Add sed mapping during field stripping.
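
A sketch of that mapping — the three field pairs come from this commit, though the actual sed program in telemetry-sync may differ in detail:

```shell
LINE='{"v":1,"ts":"2026-03-19T17:00:00Z","sessions":2,"skill":"commit"}'
MAPPED="$(printf '%s' "$LINE" | sed \
  -e 's/"v":/"schema_version":/' \
  -e 's/"ts":/"event_timestamp":/' \
  -e 's/"sessions":/"concurrent_sessions":/')"
echo "$MAPPED"
# → {"schema_version":1,"event_timestamp":"2026-03-19T17:00:00Z","concurrent_sessions":2,"skill":"commit"}
```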

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address Codex adversarial findings — cursor, opt-out, queries

- Sync cursor now advances on HTTP 2xx (not grep for "inserted")
- Update-check respects telemetry opt-out before pinging Supabase
- Dashboard queries use correct view column names (total_occurrences)
- Sync strips old-format "repo" field to prevent privacy leak
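
The cursor fix can be sketched as: advance only on an HTTP 2xx status code, never by grepping the response body (the values below are stand-ins for the curl result):

```shell
http_code="201"   # stand-in for: curl -s -o /dev/null -w '%{http_code}' ...
CURSOR=0
BATCH_END=50      # line offset just past the batch that was sent
case "$http_code" in
  2??) CURSOR="$BATCH_END" ;;   # success: advance the sync cursor
  *)   : ;;                     # failure: keep cursor; retry next sync
esac
```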

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: add Privacy & Telemetry section to README

Transparent disclosure of what telemetry collects, what it never sends,
how to opt out, and a link to the schema so users can verify.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 17:21:05 -07:00

192 lines
6.3 KiB
Bash
Executable File

#!/usr/bin/env bash
# gstack-analytics — personal usage dashboard from local JSONL
#
# Usage:
#   gstack-analytics        # default: last 7 days
#   gstack-analytics 7d     # last 7 days
#   gstack-analytics 30d    # last 30 days
#   gstack-analytics all    # all time
#
# Env overrides (for testing):
#   GSTACK_STATE_DIR — override ~/.gstack state directory
set -uo pipefail
STATE_DIR="${GSTACK_STATE_DIR:-$HOME/.gstack}"
JSONL_FILE="$STATE_DIR/analytics/skill-usage.jsonl"
# ─── Parse time window ───────────────────────────────────────
WINDOW="${1:-7d}"
case "$WINDOW" in
  7d)  DAYS=7;  LABEL="last 7 days" ;;
  30d) DAYS=30; LABEL="last 30 days" ;;
  all) DAYS=0;  LABEL="all time" ;;
  *)   DAYS=7;  LABEL="last 7 days" ;;
esac
# ─── Check for data ──────────────────────────────────────────
if [ ! -f "$JSONL_FILE" ]; then
  echo "gstack usage — no data yet"
  echo ""
  echo "Usage data will appear here after you use gstack skills"
  echo "with telemetry enabled (gstack-config set telemetry anonymous)."
  exit 0
fi
TOTAL_LINES="$(wc -l < "$JSONL_FILE" | tr -d ' ')"
if [ "$TOTAL_LINES" = "0" ]; then
  echo "gstack usage — no data yet"
  exit 0
fi
# ─── Filter by time window ───────────────────────────────────
if [ "$DAYS" -gt 0 ] 2>/dev/null; then
  # Calculate cutoff date
  if date -v-1d +%Y-%m-%d >/dev/null 2>&1; then
    # macOS date
    CUTOFF="$(date -v-${DAYS}d -u +%Y-%m-%dT%H:%M:%SZ)"
  else
    # GNU date
    CUTOFF="$(date -u -d "$DAYS days ago" +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || echo "2000-01-01T00:00:00Z")"
  fi
  # Filter: skill_run events (new format) OR basic skill events (old format, no event_type)
  # Old format: {"skill":"X","ts":"Y","repo":"Z"} (no event_type field)
  # New format: {"event_type":"skill_run","skill":"X","ts":"Y",...}
  FILTERED="$(awk -F'"' -v cutoff="$CUTOFF" '
    /"ts":"/ {
      # Skip hook_fire events
      if (/"event":"hook_fire"/) next
      # Skip non-skill_run new-format events
      if (/"event_type":"/ && !/"event_type":"skill_run"/) next
      for (i=1; i<=NF; i++) {
        if ($i == "ts" && $(i+1) ~ /^:/) {
          ts = $(i+2)
          if (ts >= cutoff) { print; break }
        }
      }
    }
  ' "$JSONL_FILE")"
else
  # All time: include skill_run events + old-format basic events, exclude hook_fire
  FILTERED="$(awk '/"ts":"/ && !/"event":"hook_fire"/' "$JSONL_FILE" | grep -v '"event_type":"upgrade_' 2>/dev/null || true)"
fi
if [ -z "$FILTERED" ]; then
  echo "gstack usage ($LABEL) — no skill runs found"
  exit 0
fi
# ─── Aggregate by skill ──────────────────────────────────────
# Extract skill names and count
SKILL_COUNTS="$(echo "$FILTERED" | awk -F'"' '
  /"skill":"/ {
    for (i=1; i<=NF; i++) {
      if ($i == "skill" && $(i+1) ~ /^:/) {
        skill = $(i+2)
        counts[skill]++
        break
      }
    }
  }
  END {
    for (s in counts) print counts[s], s
  }
' | sort -rn)"
# Count outcomes
TOTAL="$(echo "$FILTERED" | wc -l | tr -d ' ')"
SUCCESS="$(echo "$FILTERED" | grep -c '"outcome":"success"' || true)"
SUCCESS="${SUCCESS:-0}"; SUCCESS="$(echo "$SUCCESS" | tr -d ' \n\r\t')"
ERRORS="$(echo "$FILTERED" | grep -c '"outcome":"error"' || true)"
ERRORS="${ERRORS:-0}"; ERRORS="$(echo "$ERRORS" | tr -d ' \n\r\t')"
# Old format events have no outcome field — count them as successful
NO_OUTCOME="$(echo "$FILTERED" | grep -vc '"outcome":' || true)"
NO_OUTCOME="${NO_OUTCOME:-0}"; NO_OUTCOME="$(echo "$NO_OUTCOME" | tr -d ' \n\r\t')"
SUCCESS=$(( SUCCESS + NO_OUTCOME ))
# Calculate success rate
if [ "$TOTAL" -gt 0 ] 2>/dev/null; then
  SUCCESS_RATE=$(( SUCCESS * 100 / TOTAL ))
else
  SUCCESS_RATE=100
fi
# ─── Calculate total duration ────────────────────────────────
TOTAL_DURATION="$(echo "$FILTERED" | awk -F'[:,]' '
  /"duration_s"/ {
    for (i=1; i<=NF; i++) {
      if ($i ~ /"duration_s"/) {
        val = $(i+1)
        gsub(/[^0-9.]/, "", val)
        if (val+0 > 0) total += val
      }
    }
  }
  END { printf "%.0f", total }
')"
# Format duration
TOTAL_DURATION="${TOTAL_DURATION:-0}"
if [ "$TOTAL_DURATION" -ge 3600 ] 2>/dev/null; then
  HOURS=$(( TOTAL_DURATION / 3600 ))
  MINS=$(( (TOTAL_DURATION % 3600) / 60 ))
  DUR_DISPLAY="${HOURS}h ${MINS}m"
elif [ "$TOTAL_DURATION" -ge 60 ] 2>/dev/null; then
  MINS=$(( TOTAL_DURATION / 60 ))
  DUR_DISPLAY="${MINS}m"
else
  DUR_DISPLAY="${TOTAL_DURATION}s"
fi
# ─── Render output ───────────────────────────────────────────
echo "gstack usage ($LABEL)"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
# Find max count for bar scaling
MAX_COUNT="$(echo "$SKILL_COUNTS" | head -1 | awk '{print $1}')"
BAR_WIDTH=20
echo "$SKILL_COUNTS" | while read -r COUNT SKILL; do
  # Scale bar
  if [ "$MAX_COUNT" -gt 0 ] 2>/dev/null; then
    BAR_LEN=$(( COUNT * BAR_WIDTH / MAX_COUNT ))
  else
    BAR_LEN=1
  fi
  [ "$BAR_LEN" -lt 1 ] && BAR_LEN=1
  # Build bar
  BAR=""
  i=0
  while [ "$i" -lt "$BAR_LEN" ]; do
    BAR="${BAR}█"
    i=$(( i + 1 ))
  done
  # Calculate avg duration for this skill
  AVG_DUR="$(echo "$FILTERED" | awk -v skill="$SKILL" '
    index($0, "\"skill\":\"" skill "\"") > 0 {
      # Extract duration_s value using split on "duration_s":
      n = split($0, parts, "\"duration_s\":")
      if (n >= 2) {
        # parts[2] starts with the value, e.g. "142,"
        gsub(/[^0-9.].*/, "", parts[2])
        if (parts[2]+0 > 0) { total += parts[2]; count++ }
      }
    }
    END { if (count > 0) printf "%.0f", total/count; else print "0" }
  ')"
  # Format avg duration
  if [ "$AVG_DUR" -ge 60 ] 2>/dev/null; then
    AVG_DISPLAY="$(( AVG_DUR / 60 ))m"
  else
    AVG_DISPLAY="${AVG_DUR}s"
  fi
  printf " /%-20s %s %d runs (avg %s)\n" "$SKILL" "$BAR" "$COUNT" "$AVG_DISPLAY"
done
echo ""
echo "Success rate: ${SUCCESS_RATE}% | Errors: ${ERRORS} | Total time: ${DUR_DISPLAY}"
echo "Events: ${TOTAL} skill runs"