feat: opt-in usage telemetry + community intelligence platform (v0.8.6) (#210)

* feat: add gstack-telemetry-log and gstack-analytics scripts

Local telemetry infrastructure for gstack usage tracking.
gstack-telemetry-log appends JSONL events with skill name, duration,
outcome, session ID, and platform info. Supports off/anonymous/community
privacy tiers. gstack-analytics renders a personal usage dashboard
from local data.
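
Example event line (values illustrative; field set matches the logger):

  {"v":1,"ts":"2026-03-19T17:21:05Z","event_type":"skill_run","skill":"qa",
   "session_id":"12345-1710756600","gstack_version":"0.8.6","os":"darwin",
   "arch":"arm64","duration_s":142,"outcome":"success","error_class":null,
   "used_browse":false,"sessions":1}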

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add telemetry preamble injection + opt-in prompt + epilogue

Extends generatePreamble() with telemetry start block (config read,
timer, session ID, .pending marker), opt-in prompt (gated by
.telemetry-prompted), and epilogue instructions for Claude to log
events after skill completion. Adds 5 telemetry tests.
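
Rough shape of the injected start block (a sketch only; the exact text is
generated by gen-skill-docs.ts and the variable names here are assumed):

  TIER="$(gstack-config get telemetry 2>/dev/null || echo off)"  # config read
  SKILL_START="$(date +%s)"                                      # timer
  SESSION_ID="$$-$(date +%s)"                                    # session ID
  printf '{"skill":"qa","session_id":"%s"}' "$SESSION_ID" \
    > ~/.gstack/analytics/.pending                               # crash marker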

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: regenerate all SKILL.md files with telemetry blocks

Automated regeneration from gen-skill-docs.ts changes. All skills
now include telemetry start block, opt-in prompt, and epilogue.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add Supabase schema, edge functions, and SQL views

Telemetry backend infrastructure: telemetry_events table with RLS
(insert-only), installations table for retention tracking,
update_checks for install pings. Edge functions for update-check
(version + ping), telemetry-ingest (batch insert), and
community-pulse (weekly active count). SQL views for crash
clustering and skill co-occurrence sequences.
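
The community-pulse function, for example, is a plain GET; the response
shape below is inferred from how the dashboard parses it (values
illustrative):

  curl -sf "$SUPABASE_URL/functions/v1/community-pulse" \
    -H "Authorization: Bearer $ANON_KEY"
  # → {"weekly_active":412,"change_pct":8}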

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add telemetry-sync, community-dashboard, and integration tests

gstack-telemetry-sync: fire-and-forget JSONL → Supabase sync with
privacy tier field stripping, batch limits, and cursor tracking.
gstack-community-dashboard: CLI tool querying Supabase for skill
popularity, crash clusters, and version distribution.
19 integration tests covering all telemetry scripts.
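
Stripping example for the anonymous tier (abridged):

  local line: {...,"installation_id":"3f9c...","_repo_slug":"me-app","_branch":"main"}
  sent line:  {...}   # installation_id, _repo_slug, _branch removed before POST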

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: session-specific .pending markers + crash_clusters view fix

Addresses Codex review findings:
- .pending race condition: use .pending-$SESSION_ID instead of
  shared .pending file to prevent concurrent session interference
- crash_clusters view: add total_occurrences and anonymous_occurrences
  columns since anonymous tier has no installation_id
- Added test: own session pending marker is not finalized
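
Resulting marker layout, one file per live session (suffixes illustrative):

  ~/.gstack/analytics/.pending-12345-1710756600   # session A, in flight
  ~/.gstack/analytics/.pending-67890-1710756700   # session B, in flight

Each logger finalizes only markers whose suffix differs from its own
$SESSION_ID, so a concurrent session's in-flight run is left alone.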

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: dual-attempt update check with Supabase install ping

Fires a parallel background curl to Supabase during the slow-path
version fetch. Logs upgrade_prompted event only on fresh fetches
(not cached replays) to avoid overcounting. GitHub remains the
primary version source — Supabase ping is fire-and-forget.
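
Sketch of the dual attempt (abridged from the diff below):

  curl -sf --max-time 5 -X POST "$ENDPOINT/update_checks" \
    -H "apikey: $ANON_KEY" \
    -d "{\"gstack_version\":\"$LOCAL\",\"os\":\"$OS\"}" >/dev/null 2>&1 &  # ping, bg
  REMOTE="$(curl -sf --max-time 5 "$REMOTE_URL" 2>/dev/null || true)"      # primary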

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: integrate telemetry usage stats into /retro output

Retro now reads ~/.gstack/analytics/skill-usage.jsonl and includes
gstack usage metrics (skill run counts, top skills, success rate)
in the weekly retrospective output.
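
A minimal sketch of the aggregation (the retro skill's actual parsing may
differ):

  JSONL="$HOME/.gstack/analytics/skill-usage.jsonl"
  RUNS="$(grep -c '"event_type":"skill_run"' "$JSONL" || true)"
  OK="$(grep -c '"outcome":"success"' "$JSONL" || true)"
  [ "${RUNS:-0}" -gt 0 ] && echo "gstack: $RUNS runs, $(( OK * 100 / RUNS ))% success"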

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: move 'Skill usage telemetry' to Completed in TODOS.md

Implemented in this branch: local JSONL logging, opt-in prompt,
privacy tiers, Supabase backend, community dashboard, /retro
integration.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: wire Supabase credentials and expose tables via Data API

Add supabase/config.sh with project URL and publishable key (safe to
commit — RLS restricts to INSERT only). Update telemetry-sync,
community-dashboard, and update-check to source the config and
include proper auth headers for the Supabase REST API.
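
config.sh is a small sourceable file along these lines (variable set
inferred from the scripts that source it; values are placeholders):

  GSTACK_SUPABASE_URL="https://<project-ref>.supabase.co"
  GSTACK_TELEMETRY_ENDPOINT="$GSTACK_SUPABASE_URL/rest/v1"
  GSTACK_SUPABASE_ANON_KEY="<publishable key>"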

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add SELECT RLS policies to migration for community dashboard reads

All telemetry data is anonymous (no PII), so public reads via the
publishable key are safe. Needed for the community dashboard to
query skill popularity, crash clusters, and version distribution.
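
With the SELECT policies in place, the dashboard reads views directly over
the REST API, e.g.:

  curl -sf "$SUPABASE_URL/rest/v1/crash_clusters?select=error_class,total_occurrences&limit=5" \
    -H "apikey: $ANON_KEY" -H "Authorization: Bearer $ANON_KEY"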

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: bump version and changelog (v0.8.6)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: analytics backward-compatible with old JSONL format

Handle old-format events (no event_type field) alongside new format.
Skip hook_fire events. Fix grep -c whitespace issues and unbound
variable errors.
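
The two shapes being reconciled (values illustrative):

  {"skill":"qa","ts":"2026-01-05T10:00:00Z","repo":"me-app"}      # old: no event_type
  {"v":1,"ts":"...","event_type":"skill_run","skill":"qa",...}    # new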

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: map JSONL field names to Postgres columns in telemetry-sync

Local JSONL uses short names (v, ts, sessions) but the Supabase
table expects full names (schema_version, event_timestamp,
concurrent_sessions). Add sed mapping during field stripping.
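
The mapping itself, as applied in the sync script:

  sed -e 's/"v":/"schema_version":/g' \
      -e 's/"ts":/"event_timestamp":/g' \
      -e 's/"sessions":/"concurrent_sessions":/g'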

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: address Codex adversarial findings — cursor, opt-out, queries

- Sync cursor now advances on HTTP 2xx (not grep for "inserted")
- Update-check respects telemetry opt-out before pinging Supabase
- Dashboard queries use correct view column names (total_occurrences)
- Sync strips old-format "repo" field to prevent privacy leak
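
The cursor fix in context (abridged from the sync script; auth headers
omitted):

  HTTP_CODE="$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 \
    -X POST "${ENDPOINT}/telemetry_events" -d "$BATCH" 2>/dev/null || echo "000")"
  case "$HTTP_CODE" in
    2*) echo $(( CURSOR + COUNT )) > "$CURSOR_FILE" ;;  # advance only on 2xx
  esac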

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: add Privacy & Telemetry section to README

Transparent disclosure of what telemetry collects, what it never sends,
how to opt out, and a link to the schema so users can verify.
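
Opting out is a single config write:

  gstack-config set telemetry off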

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
commit 3b22fc39e6 (parent bd834aeadb)
Author:    Garry Tan
Committer: GitHub
Date:      2026-03-19 17:21:05 -07:00

34 changed files with 2193 additions and 2 deletions
@@ -0,0 +1,191 @@
#!/usr/bin/env bash
# gstack-analytics — personal usage dashboard from local JSONL
#
# Usage:
#   gstack-analytics        # default: last 7 days
#   gstack-analytics 7d     # last 7 days
#   gstack-analytics 30d    # last 30 days
#   gstack-analytics all    # all time
#
# Env overrides (for testing):
#   GSTACK_STATE_DIR — override ~/.gstack state directory
set -uo pipefail
STATE_DIR="${GSTACK_STATE_DIR:-$HOME/.gstack}"
JSONL_FILE="$STATE_DIR/analytics/skill-usage.jsonl"
# ─── Parse time window ───────────────────────────────────────
WINDOW="${1:-7d}"
case "$WINDOW" in
7d) DAYS=7; LABEL="last 7 days" ;;
30d) DAYS=30; LABEL="last 30 days" ;;
all) DAYS=0; LABEL="all time" ;;
*) DAYS=7; LABEL="last 7 days" ;;
esac
# ─── Check for data ──────────────────────────────────────────
if [ ! -f "$JSONL_FILE" ]; then
  echo "gstack usage — no data yet"
  echo ""
  echo "Usage data will appear here after you use gstack skills"
  echo "with telemetry enabled (gstack-config set telemetry anonymous)."
  exit 0
fi

TOTAL_LINES="$(wc -l < "$JSONL_FILE" | tr -d ' ')"
if [ "$TOTAL_LINES" = "0" ]; then
  echo "gstack usage — no data yet"
  exit 0
fi
# ─── Filter by time window ───────────────────────────────────
if [ "$DAYS" -gt 0 ] 2>/dev/null; then
# Calculate cutoff date
if date -v-1d +%Y-%m-%d >/dev/null 2>&1; then
# macOS date
CUTOFF="$(date -v-${DAYS}d -u +%Y-%m-%dT%H:%M:%SZ)"
else
# GNU date
CUTOFF="$(date -u -d "$DAYS days ago" +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || echo "2000-01-01T00:00:00Z")"
fi
# Filter: skill_run events (new format) OR basic skill events (old format, no event_type)
# Old format: {"skill":"X","ts":"Y","repo":"Z"} (no event_type field)
# New format: {"event_type":"skill_run","skill":"X","ts":"Y",...}
FILTERED="$(awk -F'"' -v cutoff="$CUTOFF" '
/"ts":"/ {
# Skip hook_fire events
if (/"event":"hook_fire"/) next
# Skip non-skill_run new-format events
if (/"event_type":"/ && !/"event_type":"skill_run"/) next
for (i=1; i<=NF; i++) {
if ($i == "ts" && $(i+1) ~ /^:/) {
ts = $(i+2)
if (ts >= cutoff) { print; break }
}
}
}
' "$JSONL_FILE")"
else
# All time: include skill_run events + old-format basic events, exclude hook_fire
FILTERED="$(awk '/"ts":"/ && !/"event":"hook_fire"/' "$JSONL_FILE" | grep -v '"event_type":"upgrade_' 2>/dev/null || true)"
fi
if [ -z "$FILTERED" ]; then
echo "gstack usage ($LABEL) — no skill runs found"
exit 0
fi
# ─── Aggregate by skill ──────────────────────────────────────
# Extract skill names and count
SKILL_COUNTS="$(echo "$FILTERED" | awk -F'"' '
  /"skill":"/ {
    for (i=1; i<=NF; i++) {
      if ($i == "skill" && $(i+1) ~ /^:/) {
        skill = $(i+2)
        counts[skill]++
        break
      }
    }
  }
  END {
    for (s in counts) print counts[s], s
  }
' | sort -rn)"

# Count outcomes
TOTAL="$(echo "$FILTERED" | wc -l | tr -d ' ')"
SUCCESS="$(echo "$FILTERED" | grep -c '"outcome":"success"' || true)"
SUCCESS="${SUCCESS:-0}"; SUCCESS="$(echo "$SUCCESS" | tr -d ' \n\r\t')"
ERRORS="$(echo "$FILTERED" | grep -c '"outcome":"error"' || true)"
ERRORS="${ERRORS:-0}"; ERRORS="$(echo "$ERRORS" | tr -d ' \n\r\t')"
# Old format events have no outcome field — count them as successful
NO_OUTCOME="$(echo "$FILTERED" | grep -vc '"outcome":' || true)"
NO_OUTCOME="${NO_OUTCOME:-0}"; NO_OUTCOME="$(echo "$NO_OUTCOME" | tr -d ' \n\r\t')"
SUCCESS=$(( SUCCESS + NO_OUTCOME ))

# Calculate success rate
if [ "$TOTAL" -gt 0 ] 2>/dev/null; then
  SUCCESS_RATE=$(( SUCCESS * 100 / TOTAL ))
else
  SUCCESS_RATE=100
fi
# ─── Calculate total duration ────────────────────────────────
TOTAL_DURATION="$(echo "$FILTERED" | awk -F'[:,]' '
  /"duration_s"/ {
    for (i=1; i<=NF; i++) {
      if ($i ~ /"duration_s"/) {
        val = $(i+1)
        gsub(/[^0-9.]/, "", val)
        if (val+0 > 0) total += val
      }
    }
  }
  END { printf "%.0f", total }
')"

# Format duration
TOTAL_DURATION="${TOTAL_DURATION:-0}"
if [ "$TOTAL_DURATION" -ge 3600 ] 2>/dev/null; then
  HOURS=$(( TOTAL_DURATION / 3600 ))
  MINS=$(( (TOTAL_DURATION % 3600) / 60 ))
  DUR_DISPLAY="${HOURS}h ${MINS}m"
elif [ "$TOTAL_DURATION" -ge 60 ] 2>/dev/null; then
  MINS=$(( TOTAL_DURATION / 60 ))
  DUR_DISPLAY="${MINS}m"
else
  DUR_DISPLAY="${TOTAL_DURATION}s"
fi
# ─── Render output ───────────────────────────────────────────
echo "gstack usage ($LABEL)"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
# Find max count for bar scaling
MAX_COUNT="$(echo "$SKILL_COUNTS" | head -1 | awk '{print $1}')"
BAR_WIDTH=20
echo "$SKILL_COUNTS" | while read -r COUNT SKILL; do
# Scale bar
if [ "$MAX_COUNT" -gt 0 ] 2>/dev/null; then
BAR_LEN=$(( COUNT * BAR_WIDTH / MAX_COUNT ))
else
BAR_LEN=1
fi
[ "$BAR_LEN" -lt 1 ] && BAR_LEN=1
# Build bar
BAR=""
i=0
while [ "$i" -lt "$BAR_LEN" ]; do
BAR="${BAR}█"
i=$(( i + 1 ))
done
# Calculate avg duration for this skill
AVG_DUR="$(echo "$FILTERED" | awk -v skill="$SKILL" '
index($0, "\"skill\":\"" skill "\"") > 0 {
# Extract duration_s value using split on "duration_s":
n = split($0, parts, "\"duration_s\":")
if (n >= 2) {
# parts[2] starts with the value, e.g. "142,"
gsub(/[^0-9.].*/, "", parts[2])
if (parts[2]+0 > 0) { total += parts[2]; count++ }
}
}
END { if (count > 0) printf "%.0f", total/count; else print "0" }
')"
# Format avg duration
if [ "$AVG_DUR" -ge 60 ] 2>/dev/null; then
AVG_DISPLAY="$(( AVG_DUR / 60 ))m"
else
AVG_DISPLAY="${AVG_DUR}s"
fi
printf " /%-20s %s %d runs (avg %s)\n" "$SKILL" "$BAR" "$COUNT" "$AVG_DISPLAY"
done
echo ""
echo "Success rate: ${SUCCESS_RATE}% | Errors: ${ERRORS} | Total time: ${DUR_DISPLAY}"
echo "Events: ${TOTAL} skill runs"
@@ -0,0 +1,113 @@
#!/usr/bin/env bash
# gstack-community-dashboard — community usage stats from Supabase
#
# Queries the Supabase REST API to show community-wide gstack usage:
# skill popularity, crash clusters, version distribution, retention.
#
# Env overrides (for testing):
#   GSTACK_DIR               — override auto-detected gstack root
#   GSTACK_SUPABASE_URL      — override Supabase project URL
#   GSTACK_SUPABASE_ANON_KEY — override Supabase anon key
set -uo pipefail
GSTACK_DIR="${GSTACK_DIR:-$(cd "$(dirname "$0")/.." && pwd)}"
# Source Supabase config if not overridden by env
if [ -z "${GSTACK_SUPABASE_URL:-}" ] && [ -f "$GSTACK_DIR/supabase/config.sh" ]; then
. "$GSTACK_DIR/supabase/config.sh"
fi
SUPABASE_URL="${GSTACK_SUPABASE_URL:-}"
ANON_KEY="${GSTACK_SUPABASE_ANON_KEY:-}"
if [ -z "$SUPABASE_URL" ] || [ -z "$ANON_KEY" ]; then
echo "gstack community dashboard"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "Supabase not configured yet. The community dashboard will be"
echo "available once the gstack Supabase project is set up."
echo ""
echo "For local analytics, run: gstack-analytics"
exit 0
fi
# ─── Helper: query Supabase REST API ─────────────────────────
query() {
  local table="$1"
  local params="${2:-}"
  curl -sf --max-time 10 \
    "${SUPABASE_URL}/rest/v1/${table}?${params}" \
    -H "apikey: ${ANON_KEY}" \
    -H "Authorization: Bearer ${ANON_KEY}" \
    2>/dev/null || echo "[]"
}
echo "gstack community dashboard"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
# ─── Weekly active installs ──────────────────────────────────
WEEK_AGO="$(date -u -v-7d +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || echo "")"
if [ -n "$WEEK_AGO" ]; then
PULSE="$(curl -sf --max-time 10 \
"${SUPABASE_URL}/functions/v1/community-pulse" \
-H "Authorization: Bearer ${ANON_KEY}" \
2>/dev/null || echo '{"weekly_active":0}')"
WEEKLY="$(echo "$PULSE" | grep -o '"weekly_active":[0-9]*' | grep -o '[0-9]*' || echo "0")"
CHANGE="$(echo "$PULSE" | grep -o '"change_pct":[0-9-]*' | grep -o '[0-9-]*' || echo "0")"
echo "Weekly active installs: ${WEEKLY}"
if [ "$CHANGE" -gt 0 ] 2>/dev/null; then
echo " Change: +${CHANGE}%"
elif [ "$CHANGE" -lt 0 ] 2>/dev/null; then
echo " Change: ${CHANGE}%"
fi
echo ""
fi
# ─── Skill popularity (top 10) ───────────────────────────────
echo "Top skills (last 7 days)"
echo "────────────────────────"
# Query telemetry_events, group by skill
EVENTS="$(query "telemetry_events" "select=skill,gstack_version&event_type=eq.skill_run&event_timestamp=gte.${WEEK_AGO}&limit=1000" 2>/dev/null || echo "[]")"
if [ "$EVENTS" != "[]" ] && [ -n "$EVENTS" ]; then
echo "$EVENTS" | grep -o '"skill":"[^"]*"' | awk -F'"' '{print $4}' | sort | uniq -c | sort -rn | head -10 | while read -r COUNT SKILL; do
printf " /%-20s %d runs\n" "$SKILL" "$COUNT"
done
else
echo " No data yet"
fi
echo ""
# ─── Crash clusters ──────────────────────────────────────────
echo "Top crash clusters"
echo "──────────────────"
CRASHES="$(query "crash_clusters" "select=error_class,gstack_version,total_occurrences,identified_users&limit=5" 2>/dev/null || echo "[]")"
if [ "$CRASHES" != "[]" ] && [ -n "$CRASHES" ]; then
echo "$CRASHES" | grep -o '"error_class":"[^"]*"' | awk -F'"' '{print $4}' | head -5 | while read -r ERR; do
C="$(echo "$CRASHES" | grep -o "\"error_class\":\"$ERR\"[^}]*\"total_occurrences\":[0-9]*" | grep -o '"total_occurrences":[0-9]*' | head -1 | grep -o '[0-9]*')"
printf " %-30s %s occurrences\n" "$ERR" "${C:-?}"
done
else
echo " No crashes reported"
fi
echo ""
# ─── Version distribution ────────────────────────────────────
echo "Version distribution (last 7 days)"
echo "───────────────────────────────────"
if [ "$EVENTS" != "[]" ] && [ -n "$EVENTS" ]; then
echo "$EVENTS" | grep -o '"gstack_version":"[^"]*"' | awk -F'"' '{print $4}' | sort | uniq -c | sort -rn | head -5 | while read -r COUNT VER; do
printf " v%-15s %d events\n" "$VER" "$COUNT"
done
else
echo " No data yet"
fi
echo ""
echo "For local analytics: gstack-analytics"
@@ -0,0 +1,158 @@
#!/usr/bin/env bash
# gstack-telemetry-log — append a telemetry event to local JSONL
#
# Data flow:
#   preamble (start)    ──▶ .pending marker
#   preamble (epilogue) ──▶ gstack-telemetry-log ──▶ skill-usage.jsonl
#                                                └──▶ gstack-telemetry-sync (bg)
#
# Usage:
#   gstack-telemetry-log --skill qa --duration 142 --outcome success \
#     --used-browse true --session-id "12345-1710756600"
#
# Env overrides (for testing):
#   GSTACK_STATE_DIR — override ~/.gstack state directory
#   GSTACK_DIR       — override auto-detected gstack root
#
# NOTE: Uses set -uo pipefail (no -e) — telemetry must never exit non-zero
set -uo pipefail
GSTACK_DIR="${GSTACK_DIR:-$(cd "$(dirname "$0")/.." && pwd)}"
STATE_DIR="${GSTACK_STATE_DIR:-$HOME/.gstack}"
ANALYTICS_DIR="$STATE_DIR/analytics"
JSONL_FILE="$ANALYTICS_DIR/skill-usage.jsonl"
PENDING_DIR="$ANALYTICS_DIR" # .pending-* files live here
CONFIG_CMD="$GSTACK_DIR/bin/gstack-config"
VERSION_FILE="$GSTACK_DIR/VERSION"
# ─── Parse flags ─────────────────────────────────────────────
SKILL=""
DURATION=""
OUTCOME="unknown"
USED_BROWSE="false"
SESSION_ID=""
ERROR_CLASS=""
EVENT_TYPE="skill_run"
while [ $# -gt 0 ]; do
  case "$1" in
    --skill) SKILL="$2"; shift 2 ;;
    --duration) DURATION="$2"; shift 2 ;;
    --outcome) OUTCOME="$2"; shift 2 ;;
    --used-browse) USED_BROWSE="$2"; shift 2 ;;
    --session-id) SESSION_ID="$2"; shift 2 ;;
    --error-class) ERROR_CLASS="$2"; shift 2 ;;
    --event-type) EVENT_TYPE="$2"; shift 2 ;;
    *) shift ;;
  esac
done
# ─── Read telemetry tier ─────────────────────────────────────
TIER="$("$CONFIG_CMD" get telemetry 2>/dev/null || true)"
TIER="${TIER:-off}"
# Validate tier
case "$TIER" in
off|anonymous|community) ;;
*) TIER="off" ;; # invalid value → default to off
esac
if [ "$TIER" = "off" ]; then
# Still clear pending markers for this session even if telemetry is off
[ -n "$SESSION_ID" ] && rm -f "$PENDING_DIR/.pending-$SESSION_ID" 2>/dev/null || true
exit 0
fi
# ─── Finalize stale .pending markers ────────────────────────
# Each session gets its own .pending-$SESSION_ID file to avoid races
# between concurrent sessions. Finalize any that don't match our session.
for PFILE in "$PENDING_DIR"/.pending-*; do
  [ -f "$PFILE" ] || continue
  # Skip our own session's marker (it's still in-flight)
  PFILE_BASE="$(basename "$PFILE")"
  PFILE_SID="${PFILE_BASE#.pending-}"
  [ "$PFILE_SID" = "$SESSION_ID" ] && continue
  PENDING_DATA="$(cat "$PFILE" 2>/dev/null || true)"
  rm -f "$PFILE" 2>/dev/null || true
  if [ -n "$PENDING_DATA" ]; then
    # Extract fields from pending marker using grep -o + awk
    P_SKILL="$(echo "$PENDING_DATA" | grep -o '"skill":"[^"]*"' | head -1 | awk -F'"' '{print $4}')"
    P_TS="$(echo "$PENDING_DATA" | grep -o '"ts":"[^"]*"' | head -1 | awk -F'"' '{print $4}')"
    P_SID="$(echo "$PENDING_DATA" | grep -o '"session_id":"[^"]*"' | head -1 | awk -F'"' '{print $4}')"
    P_VER="$(echo "$PENDING_DATA" | grep -o '"gstack_version":"[^"]*"' | head -1 | awk -F'"' '{print $4}')"
    P_OS="$(uname -s | tr '[:upper:]' '[:lower:]')"
    P_ARCH="$(uname -m)"
    # Write the stale event as outcome: unknown
    mkdir -p "$ANALYTICS_DIR"
    printf '{"v":1,"ts":"%s","event_type":"skill_run","skill":"%s","session_id":"%s","gstack_version":"%s","os":"%s","arch":"%s","duration_s":null,"outcome":"unknown","error_class":null,"used_browse":false,"sessions":1}\n' \
      "$P_TS" "$P_SKILL" "$P_SID" "$P_VER" "$P_OS" "$P_ARCH" >> "$JSONL_FILE" 2>/dev/null || true
  fi
done

# Clear our own session's pending marker (we're about to log the real event)
[ -n "$SESSION_ID" ] && rm -f "$PENDING_DIR/.pending-$SESSION_ID" 2>/dev/null || true
# ─── Collect metadata ────────────────────────────────────────
TS="$(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u +%Y-%m-%dT%H:%M:%S 2>/dev/null || echo "")"
GSTACK_VERSION="$(cat "$VERSION_FILE" 2>/dev/null | tr -d '[:space:]' || echo "unknown")"
OS="$(uname -s | tr '[:upper:]' '[:lower:]')"
ARCH="$(uname -m)"
SESSIONS="1"
if [ -d "$STATE_DIR/sessions" ]; then
  _SC="$(find "$STATE_DIR/sessions" -mmin -120 -type f 2>/dev/null | wc -l | tr -d ' \n\r\t')"
  [ -n "$_SC" ] && [ "$_SC" -gt 0 ] 2>/dev/null && SESSIONS="$_SC"
fi

# Generate installation_id for community tier
INSTALL_ID=""
if [ "$TIER" = "community" ]; then
  HOST="$(hostname 2>/dev/null || echo "unknown")"
  USER="$(whoami 2>/dev/null || echo "unknown")"
  if command -v shasum >/dev/null 2>&1; then
    INSTALL_ID="$(printf '%s-%s' "$HOST" "$USER" | shasum -a 256 | awk '{print $1}')"
  elif command -v sha256sum >/dev/null 2>&1; then
    INSTALL_ID="$(printf '%s-%s' "$HOST" "$USER" | sha256sum | awk '{print $1}')"
  elif command -v openssl >/dev/null 2>&1; then
    INSTALL_ID="$(printf '%s-%s' "$HOST" "$USER" | openssl dgst -sha256 | awk '{print $NF}')"
  fi
  # If no SHA-256 command available, install_id stays empty
fi
# Local-only fields (never sent remotely)
REPO_SLUG=""
BRANCH=""
if command -v git >/dev/null 2>&1; then
  REPO_SLUG="$(git remote get-url origin 2>/dev/null | sed 's|.*[:/]\([^/]*/[^/]*\)\.git$|\1|;s|.*[:/]\([^/]*/[^/]*\)$|\1|' | tr '/' '-' 2>/dev/null || true)"
  BRANCH="$(git rev-parse --abbrev-ref HEAD 2>/dev/null || true)"
fi
# ─── Construct and append JSON ───────────────────────────────
mkdir -p "$ANALYTICS_DIR"
# Escape null fields
ERR_FIELD="null"
[ -n "$ERROR_CLASS" ] && ERR_FIELD="\"$ERROR_CLASS\""
DUR_FIELD="null"
[ -n "$DURATION" ] && DUR_FIELD="$DURATION"
INSTALL_FIELD="null"
[ -n "$INSTALL_ID" ] && INSTALL_FIELD="\"$INSTALL_ID\""
BROWSE_BOOL="false"
[ "$USED_BROWSE" = "true" ] && BROWSE_BOOL="true"
printf '{"v":1,"ts":"%s","event_type":"%s","skill":"%s","session_id":"%s","gstack_version":"%s","os":"%s","arch":"%s","duration_s":%s,"outcome":"%s","error_class":%s,"used_browse":%s,"sessions":%s,"installation_id":%s,"_repo_slug":"%s","_branch":"%s"}\n' \
"$TS" "$EVENT_TYPE" "$SKILL" "$SESSION_ID" "$GSTACK_VERSION" "$OS" "$ARCH" \
"$DUR_FIELD" "$OUTCOME" "$ERR_FIELD" "$BROWSE_BOOL" "${SESSIONS:-1}" \
"$INSTALL_FIELD" "$REPO_SLUG" "$BRANCH" >> "$JSONL_FILE" 2>/dev/null || true
# ─── Trigger sync if tier is not off ─────────────────────────
SYNC_CMD="$GSTACK_DIR/bin/gstack-telemetry-sync"
if [ -x "$SYNC_CMD" ]; then
"$SYNC_CMD" 2>/dev/null &
fi
exit 0
@@ -0,0 +1,127 @@
#!/usr/bin/env bash
# gstack-telemetry-sync — sync local JSONL events to Supabase
#
# Fire-and-forget, backgrounded, rate-limited to once per 5 minutes.
# Strips local-only fields before sending. Respects privacy tiers.
#
# Env overrides (for testing):
#   GSTACK_STATE_DIR          — override ~/.gstack state directory
#   GSTACK_DIR                — override auto-detected gstack root
#   GSTACK_TELEMETRY_ENDPOINT — override Supabase endpoint URL
set -uo pipefail
GSTACK_DIR="${GSTACK_DIR:-$(cd "$(dirname "$0")/.." && pwd)}"
STATE_DIR="${GSTACK_STATE_DIR:-$HOME/.gstack}"
ANALYTICS_DIR="$STATE_DIR/analytics"
JSONL_FILE="$ANALYTICS_DIR/skill-usage.jsonl"
CURSOR_FILE="$ANALYTICS_DIR/.last-sync-line"
RATE_FILE="$ANALYTICS_DIR/.last-sync-time"
CONFIG_CMD="$GSTACK_DIR/bin/gstack-config"
# Source Supabase config if not overridden by env
if [ -z "${GSTACK_TELEMETRY_ENDPOINT:-}" ] && [ -f "$GSTACK_DIR/supabase/config.sh" ]; then
. "$GSTACK_DIR/supabase/config.sh"
fi
ENDPOINT="${GSTACK_TELEMETRY_ENDPOINT:-}"
ANON_KEY="${GSTACK_SUPABASE_ANON_KEY:-}"
# ─── Pre-checks ──────────────────────────────────────────────
# No endpoint configured yet → exit silently
[ -z "$ENDPOINT" ] && exit 0
# No JSONL file → nothing to sync
[ -f "$JSONL_FILE" ] || exit 0
# Rate limit: once per 5 minutes
if [ -f "$RATE_FILE" ]; then
STALE=$(find "$RATE_FILE" -mmin +5 2>/dev/null || true)
[ -z "$STALE" ] && exit 0
fi
# ─── Read tier ───────────────────────────────────────────────
TIER="$("$CONFIG_CMD" get telemetry 2>/dev/null || true)"
TIER="${TIER:-off}"
[ "$TIER" = "off" ] && exit 0
# ─── Read cursor ─────────────────────────────────────────────
CURSOR=0
if [ -f "$CURSOR_FILE" ]; then
CURSOR="$(cat "$CURSOR_FILE" 2>/dev/null | tr -d ' \n\r\t')"
# Validate: must be a non-negative integer
case "$CURSOR" in *[!0-9]*) CURSOR=0 ;; esac
fi
# Safety: if cursor exceeds file length, reset
TOTAL_LINES="$(wc -l < "$JSONL_FILE" | tr -d ' \n\r\t')"
if [ "$CURSOR" -gt "$TOTAL_LINES" ] 2>/dev/null; then
CURSOR=0
fi
# Nothing new to sync
[ "$CURSOR" -ge "$TOTAL_LINES" ] 2>/dev/null && exit 0
# ─── Read unsent lines ───────────────────────────────────────
SKIP=$(( CURSOR + 1 ))
UNSENT="$(tail -n "+$SKIP" "$JSONL_FILE" 2>/dev/null || true)"
[ -z "$UNSENT" ] && exit 0
# ─── Strip local-only fields and build batch ─────────────────
BATCH="["
FIRST=true
COUNT=0
while IFS= read -r LINE; do
  # Skip empty or malformed lines
  [ -z "$LINE" ] && continue
  echo "$LINE" | grep -q '^{' || continue
  # Strip local-only fields + map JSONL field names to Postgres column names
  CLEAN="$(echo "$LINE" | sed \
    -e 's/,"_repo_slug":"[^"]*"//g' \
    -e 's/,"_branch":"[^"]*"//g' \
    -e 's/"v":/"schema_version":/g' \
    -e 's/"ts":/"event_timestamp":/g' \
    -e 's/"sessions":/"concurrent_sessions":/g' \
    -e 's/,"repo":"[^"]*"//g')"
  # If anonymous tier, strip installation_id
  if [ "$TIER" = "anonymous" ]; then
    CLEAN="$(echo "$CLEAN" | sed 's/,"installation_id":"[^"]*"//g; s/,"installation_id":null//g')"
  fi
  if [ "$FIRST" = "true" ]; then
    FIRST=false
  else
    BATCH="$BATCH,"
  fi
  BATCH="$BATCH$CLEAN"
  COUNT=$(( COUNT + 1 ))
  # Batch size limit
  [ "$COUNT" -ge 100 ] && break
done <<< "$UNSENT"
BATCH="$BATCH]"
# Nothing to send after filtering
[ "$COUNT" -eq 0 ] && exit 0
# ─── POST to Supabase ────────────────────────────────────────
HTTP_CODE="$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 \
  -X POST "${ENDPOINT}/telemetry_events" \
  -H "Content-Type: application/json" \
  -H "apikey: ${ANON_KEY}" \
  -H "Authorization: Bearer ${ANON_KEY}" \
  -H "Prefer: return=minimal" \
  -d "$BATCH" 2>/dev/null || echo "000")"

# ─── Update cursor on success (2xx) ─────────────────────────
case "$HTTP_CODE" in
  2*) NEW_CURSOR=$(( CURSOR + COUNT ))
      echo "$NEW_CURSOR" > "$CURSOR_FILE" 2>/dev/null || true ;;
esac
# Update rate limit marker
touch "$RATE_FILE" 2>/dev/null || true
exit 0
@@ -140,6 +140,30 @@ fi
# ─── Step 4: Slow path — fetch remote version ────────────────
mkdir -p "$STATE_DIR"
# Fire Supabase install ping in background (parallel, non-blocking)
# This logs an update check event for community health metrics.
# If the endpoint isn't configured or Supabase is down, this is a no-op.
# Source Supabase config for install ping
if [ -z "${GSTACK_TELEMETRY_ENDPOINT:-}" ] && [ -f "$GSTACK_DIR/supabase/config.sh" ]; then
. "$GSTACK_DIR/supabase/config.sh"
fi
_SUPA_ENDPOINT="${GSTACK_TELEMETRY_ENDPOINT:-}"
_SUPA_KEY="${GSTACK_SUPABASE_ANON_KEY:-}"
# Respect telemetry opt-out — don't ping Supabase if user set telemetry: off
_TEL_TIER="$("$GSTACK_DIR/bin/gstack-config" get telemetry 2>/dev/null || true)"
if [ -n "$_SUPA_ENDPOINT" ] && [ -n "$_SUPA_KEY" ] && [ "${_TEL_TIER:-off}" != "off" ]; then
_OS="$(uname -s | tr '[:upper:]' '[:lower:]')"
curl -sf --max-time 5 \
-X POST "${_SUPA_ENDPOINT}/update_checks" \
-H "Content-Type: application/json" \
-H "apikey: ${_SUPA_KEY}" \
-H "Authorization: Bearer ${_SUPA_KEY}" \
-H "Prefer: return=minimal" \
-d "{\"gstack_version\":\"$LOCAL\",\"os\":\"$_OS\"}" \
>/dev/null 2>&1 &
fi
# GitHub raw fetch (primary, always reliable)
REMOTE=""
REMOTE="$(curl -sf --max-time 5 "$REMOTE_URL" 2>/dev/null || true)"
REMOTE="$(echo "$REMOTE" | tr -d '[:space:]')"
@@ -161,4 +185,12 @@ echo "UPGRADE_AVAILABLE $LOCAL $REMOTE" > "$CACHE_FILE"
if check_snooze "$REMOTE"; then
  exit 0 # snoozed — stay quiet
fi
# Log upgrade_prompted event (only on slow-path fetch, not cached replays)
TEL_CMD="$GSTACK_DIR/bin/gstack-telemetry-log"
if [ -x "$TEL_CMD" ]; then
"$TEL_CMD" --event-type upgrade_prompted --skill "" --duration 0 \
--outcome success --session-id "update-$$-$(date +%s)" 2>/dev/null &
fi
echo "UPGRADE_AVAILABLE $LOCAL $REMOTE"