diff --git a/.gitignore b/.gitignore
index 189276fb..770818be 100644
--- a/.gitignore
+++ b/.gitignore
@@ -15,3 +15,4 @@ bun.lock
 .env.local
 .env.*
 !.env.example
+supabase/.temp/
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 041f45b8..9ae16ca0 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,6 +1,6 @@
 # Changelog
 
-## [0.11.16.0] - 2026-03-24 — Cleaner Skill Descriptions + Proactive Opt-Out
+## [0.11.17.0] - 2026-03-24 — Cleaner Skill Descriptions + Proactive Opt-Out
 
 ### Changed
 
@@ -11,6 +11,32 @@
 
 - **Telemetry source tagging no longer crashes.** Fixed duration guards and source field validation in the telemetry logger so it handles edge cases cleanly instead of erroring.
 
+## [0.11.16.1] - 2026-03-24 — Installation ID Privacy Fix
+
+### Fixed
+
+- **Installation IDs are now random UUIDs instead of hostname hashes.** The old `SHA-256(hostname+username)` approach meant anyone who knew your machine identity could compute your installation ID. Now uses a random UUID stored in `~/.gstack/installation-id` — not derivable from any public input, rotatable by deleting the file.
+- **RLS verification script handles edge cases.** `verify-rls.sh` now correctly treats INSERT success as expected (inserts are kept for old-client compatibility) and handles 409 conflicts and 204 no-ops.
+
+## [0.11.16.0] - 2026-03-24 — Telemetry Security Hardening
+
+### Fixed
+
+- **Telemetry RLS policies tightened.** Row-level security policies on all telemetry tables now deny direct access via the anon key. All reads and writes go through validated edge functions with schema checks, event type allowlists, and field length limits.
+- **Community dashboard is faster and server-cached.** Dashboard stats are now served from a single edge function with 1-hour server-side caching, replacing multiple direct queries.
+
+### Changed
+
+- **Telemetry sync uses `GSTACK_SUPABASE_URL` instead of `GSTACK_TELEMETRY_ENDPOINT`.** Edge functions need the base URL, not the REST API path. The old variable is removed from `config.sh`.
+- **Cursor advancement is now safe.** The sync script checks the edge function's `inserted` count before advancing — if zero events were inserted, the cursor holds and retries next run.
+
+### For contributors
+
+- New migration: `supabase/migrations/002_tighten_rls.sql`
+- New smoke test: `supabase/verify-rls.sh` (9 checks: 5 reads + 4 writes)
+- Extended `test/telemetry.test.ts` with field name verification
+- Untracked `browse/dist/` binaries from git (arm64-only, rebuilt by `./setup`)
+
 ## [0.11.15.0] - 2026-03-24 — E2E Test Coverage for Plan Reviews & Codex
 
 ### Added
diff --git a/CLAUDE.md b/CLAUDE.md
index 492a5adf..60fe1ae8 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -165,6 +165,19 @@ symlink or a real copy. If it's a symlink to your working directory, be aware th
 gen-skill-docs pipeline, consider whether the changes should be tested in isolation
 before going live (especially if the user is actively using gstack in other windows).
 
+## Compiled binaries — NEVER commit browse/dist/
+
+The `browse/dist/` directory contains compiled Bun binaries (`browse`, `find-browse`,
+~58MB each). These are Mach-O arm64 only — they do NOT work on Linux, Windows, or
+Intel Macs. The `./setup` script already builds from source for every platform, so
+the checked-in binaries are redundant. They were tracked by git due to a historical
+mistake and are untracked as of this release (`git rm --cached`).
+
+**NEVER stage or commit these files.** If they ever reappear in `git status`, treat
+them as local build artifacts and leave them alone. When staging files, always use
+specific filenames (`git add file1 file2`) — never `git add .` or `git add -A`,
+which can sweep in files you never meant to commit.
+
 ## Commit style
 
 **Always bisect commits.** Every commit should be a single logical change. When
diff --git a/README.md b/README.md
index 253d5425..fd81d78c 100644
--- a/README.md
+++ b/README.md
@@ -212,7 +212,7 @@ gstack includes **opt-in** usage telemetry to help improve the project. Here's e
 - **What's never sent:** code, file paths, repo names, branch names, prompts, or any user-generated content.
 - **Change anytime:** `gstack-config set telemetry off` disables everything instantly.
 
-Data is stored in [Supabase](https://supabase.com) (open source Firebase alternative). The schema is in [`supabase/migrations/001_telemetry.sql`](supabase/migrations/001_telemetry.sql) — you can verify exactly what's collected. The Supabase publishable key in the repo is a public key (like a Firebase API key) — row-level security policies restrict it to insert-only access.
+Data is stored in [Supabase](https://supabase.com) (open source Firebase alternative). The schema is in [`supabase/migrations/`](supabase/migrations/) — you can verify exactly what's collected. The Supabase publishable key in the repo is a public key (like a Firebase API key) — row-level security policies deny all direct access. Telemetry flows through validated edge functions that enforce schema checks, event type allowlists, and field length limits.
 
 **Local analytics are always available.** Run `gstack-analytics` to see your personal usage dashboard from the local JSONL file — no remote data needed.
diff --git a/VERSION b/VERSION
index e36c939e..6a911c49 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-0.11.16.0
+0.11.17.0
diff --git a/bin/gstack-community-dashboard b/bin/gstack-community-dashboard
index 5b7fc7ec..1f469283 100755
--- a/bin/gstack-community-dashboard
+++ b/bin/gstack-community-dashboard
@@ -1,7 +1,7 @@
 #!/usr/bin/env bash
 # gstack-community-dashboard — community usage stats from Supabase
 #
-# Queries the Supabase REST API to show community-wide gstack usage:
+# Calls the community-pulse edge function for aggregated stats:
 # skill popularity, crash clusters, version distribution, retention.
 #
 # Env overrides (for testing):
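The rewritten dashboard parses the community-pulse JSON with `grep`/`awk` on purpose, so it needs nothing beyond curl. For comparison, a hedged sketch of the same extraction with `jq` — assuming only the response shape the script itself parses below (`weekly_active`, plus `top_skills` as `[{"skill":...,"count":...}]`); illustrative, not part of the patch:

```bash
# Same extraction with jq, for machines that have it (sketch only).
DATA="$(curl -sf --max-time 15 \
  "${SUPABASE_URL}/functions/v1/community-pulse" \
  -H "apikey: ${ANON_KEY}" 2>/dev/null || echo '{}')"

echo "Weekly active installs: $(echo "$DATA" | jq -r '.weekly_active // 0')"
# Key order inside each object doesn't matter here either.
echo "$DATA" | jq -r '.top_skills[]? | "  /\(.skill)  \(.count) runs"'
```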
@@ -30,51 +30,40 @@ if [ -z "$SUPABASE_URL" ] || [ -z "$ANON_KEY" ]; then
   exit 0
 fi
 
-# ─── Helper: query Supabase REST API ─────────────────────────
-query() {
-  local table="$1"
-  local params="${2:-}"
-  curl -sf --max-time 10 \
-    "${SUPABASE_URL}/rest/v1/${table}?${params}" \
-    -H "apikey: ${ANON_KEY}" \
-    -H "Authorization: Bearer ${ANON_KEY}" \
-    2>/dev/null || echo "[]"
-}
+# ─── Fetch aggregated stats from edge function ────────────────
+DATA="$(curl -sf --max-time 15 \
+  "${SUPABASE_URL}/functions/v1/community-pulse" \
+  -H "apikey: ${ANON_KEY}" \
+  2>/dev/null || echo "{}")"
 
 echo "gstack community dashboard"
 echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
 echo ""
 
 # ─── Weekly active installs ──────────────────────────────────
-WEEK_AGO="$(date -u -v-7d +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || echo "")"
-if [ -n "$WEEK_AGO" ]; then
-  PULSE="$(curl -sf --max-time 10 \
-    "${SUPABASE_URL}/functions/v1/community-pulse" \
-    -H "Authorization: Bearer ${ANON_KEY}" \
-    2>/dev/null || echo '{"weekly_active":0}')"
+WEEKLY="$(echo "$DATA" | grep -o '"weekly_active":[0-9]*' | grep -o '[0-9]*' || echo "0")"
+CHANGE="$(echo "$DATA" | grep -o '"change_pct":[0-9-]*' | grep -o '[0-9-]*' || echo "0")"
 
-  WEEKLY="$(echo "$PULSE" | grep -o '"weekly_active":[0-9]*' | grep -o '[0-9]*' || echo "0")"
-  CHANGE="$(echo "$PULSE" | grep -o '"change_pct":[0-9-]*' | grep -o '[0-9-]*' || echo "0")"
-
-  echo "Weekly active installs: ${WEEKLY}"
-  if [ "$CHANGE" -gt 0 ] 2>/dev/null; then
-    echo "  Change: +${CHANGE}%"
-  elif [ "$CHANGE" -lt 0 ] 2>/dev/null; then
-    echo "  Change: ${CHANGE}%"
-  fi
-  echo ""
+echo "Weekly active installs: ${WEEKLY}"
+if [ "$CHANGE" -gt 0 ] 2>/dev/null; then
+  echo "  Change: +${CHANGE}%"
+elif [ "$CHANGE" -lt 0 ] 2>/dev/null; then
+  echo "  Change: ${CHANGE}%"
 fi
+echo ""
 
 # ─── Skill popularity (top 10) ───────────────────────────────
 echo "Top skills (last 7 days)"
 echo "────────────────────────"
 
-# Query telemetry_events, group by skill
-EVENTS="$(query "telemetry_events" "select=skill,gstack_version&event_type=eq.skill_run&event_timestamp=gte.${WEEK_AGO}&limit=1000" 2>/dev/null || echo "[]")"
-
-if [ "$EVENTS" != "[]" ] && [ -n "$EVENTS" ]; then
-  echo "$EVENTS" | grep -o '"skill":"[^"]*"' | awk -F'"' '{print $4}' | sort | uniq -c | sort -rn | head -10 | while read -r COUNT SKILL; do
-    printf "  /%-20s %d runs\n" "$SKILL" "$COUNT"
+# Parse top_skills array from JSON
+SKILLS="$(echo "$DATA" | grep -o '"top_skills":\[[^]]*\]' || echo "")"
+if [ -n "$SKILLS" ] && [ "$SKILLS" != '"top_skills":[]' ]; then
+  # Parse each object — handle any key order (JSONB doesn't preserve order)
+  echo "$SKILLS" | grep -o '{[^}]*}' | while read -r OBJ; do
+    SKILL="$(echo "$OBJ" | grep -o '"skill":"[^"]*"' | awk -F'"' '{print $4}')"
+    COUNT="$(echo "$OBJ" | grep -o '"count":[0-9]*' | grep -o '[0-9]*')"
+    [ -n "$SKILL" ] && [ -n "$COUNT" ] && printf "  /%-20s %s runs\n" "$SKILL" "$COUNT"
   done
 else
   echo "  No data yet"
@@ -85,12 +74,12 @@ echo ""
 echo "Top crash clusters"
 echo "──────────────────"
 
-CRASHES="$(query "crash_clusters" "select=error_class,gstack_version,total_occurrences,identified_users&limit=5" 2>/dev/null || echo "[]")"
-
-if [ "$CRASHES" != "[]" ] && [ -n "$CRASHES" ]; then
-  echo "$CRASHES" | grep -o '"error_class":"[^"]*"' | awk -F'"' '{print $4}' | head -5 | while read -r ERR; do
-    C="$(echo "$CRASHES" | grep -o "\"error_class\":\"$ERR\"[^}]*\"total_occurrences\":[0-9]*" | grep -o '"total_occurrences":[0-9]*' | head -1 | grep -o '[0-9]*')"
-    printf "  %-30s %s occurrences\n" "$ERR" "${C:-?}"
+CRASHES="$(echo "$DATA" | grep -o '"crashes":\[[^]]*\]' || echo "")"
+if [ -n "$CRASHES" ] && [ "$CRASHES" != '"crashes":[]' ]; then
+  echo "$CRASHES" | grep -o '{[^}]*}' | head -5 | while read -r OBJ; do
+    ERR="$(echo "$OBJ" | grep -o '"error_class":"[^"]*"' | awk -F'"' '{print $4}')"
+    C="$(echo "$OBJ" | grep -o '"total_occurrences":[0-9]*' | grep -o '[0-9]*')"
+    [ -n "$ERR" ] && printf "  %-30s %s occurrences\n" "$ERR" "${C:-?}"
   done
 else
   echo "  No crashes reported"
@@ -101,9 +90,12 @@ echo ""
 echo "Version distribution (last 7 days)"
 echo "───────────────────────────────────"
 
-if [ "$EVENTS" != "[]" ] && [ -n "$EVENTS" ]; then
-  echo "$EVENTS" | grep -o '"gstack_version":"[^"]*"' | awk -F'"' '{print $4}' | sort | uniq -c | sort -rn | head -5 | while read -r COUNT VER; do
-    printf "  v%-15s %d events\n" "$VER" "$COUNT"
+VERSIONS="$(echo "$DATA" | grep -o '"versions":\[[^]]*\]' || echo "")"
+if [ -n "$VERSIONS" ] && [ "$VERSIONS" != '"versions":[]' ]; then
+  echo "$VERSIONS" | grep -o '{[^}]*}' | head -5 | while read -r OBJ; do
+    VER="$(echo "$OBJ" | grep -o '"version":"[^"]*"' | awk -F'"' '{print $4}')"
+    COUNT="$(echo "$OBJ" | grep -o '"count":[0-9]*' | grep -o '[0-9]*')"
+    [ -n "$VER" ] && [ -n "$COUNT" ] && printf "  v%-15s %s events\n" "$VER" "$COUNT"
   done
 else
   echo "  No data yet"
diff --git a/bin/gstack-telemetry-log b/bin/gstack-telemetry-log
index d199dbc9..5cddc519 100755
--- a/bin/gstack-telemetry-log
+++ b/bin/gstack-telemetry-log
@@ -115,18 +115,29 @@ if [ -d "$STATE_DIR/sessions" ]; then
 fi
 
 # Generate installation_id for community tier
+# Uses a random UUID stored locally — not derived from hostname/user so it
+# can't be guessed or correlated by someone who knows your machine identity.
 INSTALL_ID=""
 if [ "$TIER" = "community" ]; then
-  HOST="$(hostname 2>/dev/null || echo "unknown")"
-  USER="$(whoami 2>/dev/null || echo "unknown")"
-  if command -v shasum >/dev/null 2>&1; then
-    INSTALL_ID="$(printf '%s-%s' "$HOST" "$USER" | shasum -a 256 | awk '{print $1}')"
-  elif command -v sha256sum >/dev/null 2>&1; then
-    INSTALL_ID="$(printf '%s-%s' "$HOST" "$USER" | sha256sum | awk '{print $1}')"
-  elif command -v openssl >/dev/null 2>&1; then
-    INSTALL_ID="$(printf '%s-%s' "$HOST" "$USER" | openssl dgst -sha256 | awk '{print $NF}')"
+  ID_FILE="$HOME/.gstack/installation-id"
+  if [ -f "$ID_FILE" ]; then
+    INSTALL_ID="$(cat "$ID_FILE" 2>/dev/null)"
+  fi
+  if [ -z "$INSTALL_ID" ]; then
+    # Generate a random UUID v4
+    if command -v uuidgen >/dev/null 2>&1; then
+      INSTALL_ID="$(uuidgen | tr '[:upper:]' '[:lower:]')"
+    elif [ -r /proc/sys/kernel/random/uuid ]; then
+      INSTALL_ID="$(cat /proc/sys/kernel/random/uuid)"
+    else
+      # Fallback: random hex from /dev/urandom
+      INSTALL_ID="$(od -An -tx1 -N16 /dev/urandom 2>/dev/null | tr -d ' \n')"
+    fi
+    if [ -n "$INSTALL_ID" ]; then
+      mkdir -p "$(dirname "$ID_FILE")" 2>/dev/null
+      printf '%s' "$INSTALL_ID" > "$ID_FILE" 2>/dev/null
+    fi
   fi
-  # If no SHA-256 command available, install_id stays empty
 fi
 
 # Local-only fields (never sent remotely)
diff --git a/bin/gstack-telemetry-sync b/bin/gstack-telemetry-sync
index 90e37243..be767c23 100755
--- a/bin/gstack-telemetry-sync
+++ b/bin/gstack-telemetry-sync
@@ -3,11 +3,12 @@
 #
 # Fire-and-forget, backgrounded, rate-limited to once per 5 minutes.
 # Strips local-only fields before sending. Respects privacy tiers.
+# Posts to the telemetry-ingest edge function (not PostgREST directly).
 #
 # Env overrides (for testing):
 #   GSTACK_STATE_DIR — override ~/.gstack state directory
 #   GSTACK_DIR — override auto-detected gstack root
-#   GSTACK_TELEMETRY_ENDPOINT — override Supabase endpoint URL
+#   GSTACK_SUPABASE_URL — override Supabase project URL
 set -uo pipefail
 
 GSTACK_DIR="${GSTACK_DIR:-$(cd "$(dirname "$0")/.." && pwd)}"
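With the override now being the project base URL, pointing the sync at a local mock is a one-liner. A sketch, reusing the same hypothetical localhost URL the test suite uses:

```bash
# Exercise the sync against a mock server instead of production Supabase.
# The script derives the edge-function path itself:
#   ${GSTACK_SUPABASE_URL}/functions/v1/telemetry-ingest
GSTACK_SUPABASE_URL="http://localhost:9999" \
GSTACK_SUPABASE_ANON_KEY="test-key" \
  bin/gstack-telemetry-sync
```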
@@ -19,15 +20,15 @@
 RATE_FILE="$ANALYTICS_DIR/.last-sync-time"
 CONFIG_CMD="$GSTACK_DIR/bin/gstack-config"
 
 # Source Supabase config if not overridden by env
-if [ -z "${GSTACK_TELEMETRY_ENDPOINT:-}" ] && [ -f "$GSTACK_DIR/supabase/config.sh" ]; then
+if [ -z "${GSTACK_SUPABASE_URL:-}" ] && [ -f "$GSTACK_DIR/supabase/config.sh" ]; then
   . "$GSTACK_DIR/supabase/config.sh"
 fi
 
-ENDPOINT="${GSTACK_TELEMETRY_ENDPOINT:-}"
+SUPABASE_URL="${GSTACK_SUPABASE_URL:-}"
 ANON_KEY="${GSTACK_SUPABASE_ANON_KEY:-}"
 
 # ─── Pre-checks ──────────────────────────────────────────────
-# No endpoint configured yet → exit silently
-[ -z "$ENDPOINT" ] && exit 0
+# No Supabase URL configured yet → exit silently
+[ -z "$SUPABASE_URL" ] && exit 0
 
 # No JSONL file → nothing to sync
 [ -f "$JSONL_FILE" ] || exit 0
@@ -66,6 +67,8 @@ UNSENT="$(tail -n "+$SKIP" "$JSONL_FILE" 2>/dev/null || true)"
 [ -z "$UNSENT" ] && exit 0
 
 # ─── Strip local-only fields and build batch ─────────────────
+# Edge function expects raw JSONL field names (v, ts, sessions) —
+# no column renaming needed (the function maps them internally).
 BATCH="["
 FIRST=true
 COUNT=0
@@ -75,13 +78,10 @@ while IFS= read -r LINE; do
   [ -z "$LINE" ] && continue
   echo "$LINE" | grep -q '^{' || continue
 
-  # Strip local-only fields + map JSONL field names to Postgres column names
+  # Strip local-only fields (keep v, ts, sessions as-is for edge function)
   CLEAN="$(echo "$LINE" | sed \
     -e 's/,"_repo_slug":"[^"]*"//g' \
     -e 's/,"_branch":"[^"]*"//g' \
-    -e 's/"v":/"schema_version":/g' \
-    -e 's/"ts":/"event_timestamp":/g' \
-    -e 's/"sessions":/"concurrent_sessions":/g' \
    -e 's/,"repo":"[^"]*"//g')"
 
   # If anonymous tier, strip installation_id
@@ -106,21 +106,31 @@ BATCH="$BATCH]"
 # Nothing to send after filtering
 [ "$COUNT" -eq 0 ] && exit 0
 
-# ─── POST to Supabase ────────────────────────────────────────
-HTTP_CODE="$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 \
-  -X POST "${ENDPOINT}/telemetry_events" \
+# ─── POST to edge function ───────────────────────────────────
+RESP_FILE="$(mktemp /tmp/gstack-sync-XXXXXX 2>/dev/null || echo "/tmp/gstack-sync-$$")"
+HTTP_CODE="$(curl -s -w '%{http_code}' --max-time 10 \
+  -X POST "${SUPABASE_URL}/functions/v1/telemetry-ingest" \
   -H "Content-Type: application/json" \
   -H "apikey: ${ANON_KEY}" \
-  -H "Authorization: Bearer ${ANON_KEY}" \
-  -H "Prefer: return=minimal" \
+  -o "$RESP_FILE" \
   -d "$BATCH" 2>/dev/null || echo "000")"
 
 # ─── Update cursor on success (2xx) ──────────────────────────
 case "$HTTP_CODE" in
-  2*) NEW_CURSOR=$(( CURSOR + COUNT ))
-      echo "$NEW_CURSOR" > "$CURSOR_FILE" 2>/dev/null || true ;;
+  2*)
+    # Parse inserted count from response — only advance if events were actually inserted.
+    # Advance by SENT count (not inserted count) because we can't map inserted back to
+    # source lines. If inserted==0, something is systemically wrong — don't advance.
+    INSERTED="$(grep -o '"inserted":[0-9]*' "$RESP_FILE" 2>/dev/null | grep -o '[0-9]*' || echo "0")"
+    if [ "${INSERTED:-0}" -gt 0 ] 2>/dev/null; then
+      NEW_CURSOR=$(( CURSOR + COUNT ))
+      echo "$NEW_CURSOR" > "$CURSOR_FILE" 2>/dev/null || true
+    fi
+    ;;
 esac
 
+rm -f "$RESP_FILE" 2>/dev/null || true
+
 # Update rate limit marker
 touch "$RATE_FILE" 2>/dev/null || true
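The cursor rule above, restated in isolation: advance by the number of lines sent, never by the server's `inserted` count, and hold the cursor when nothing was inserted. A minimal sketch with hypothetical values:

```bash
# Cursor-advance rule from the case statement above, in isolation.
CURSOR=100   # lines already synced (hypothetical)
COUNT=25     # lines sent this run (hypothetical)
INSERTED=20  # server-reported inserts (hypothetical; 5 rejected by validation)

if [ "${INSERTED:-0}" -gt 0 ]; then
  # Advance past everything sent: individual rejected lines can't be
  # identified from the response, so they're dropped, not retried forever.
  CURSOR=$(( CURSOR + COUNT ))   # → 125
else
  # inserted==0 suggests something systemic (schema mismatch, outage):
  # keep the cursor so the next run retries the same batch.
  :
fi
echo "$CURSOR"
```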
+ INSERTED="$(grep -o '"inserted":[0-9]*' "$RESP_FILE" 2>/dev/null | grep -o '[0-9]*' || echo "0")" + if [ "${INSERTED:-0}" -gt 0 ] 2>/dev/null; then + NEW_CURSOR=$(( CURSOR + COUNT )) + echo "$NEW_CURSOR" > "$CURSOR_FILE" 2>/dev/null || true + fi + ;; esac +rm -f "$RESP_FILE" 2>/dev/null || true + # Update rate limit marker touch "$RATE_FILE" 2>/dev/null || true diff --git a/bin/gstack-update-check b/bin/gstack-update-check index 823861d2..7b165468 100755 --- a/bin/gstack-update-check +++ b/bin/gstack-update-check @@ -160,25 +160,22 @@ fi mkdir -p "$STATE_DIR" # Fire Supabase install ping in background (parallel, non-blocking) -# This logs an update check event for community health metrics. -# If the endpoint isn't configured or Supabase is down, this is a no-op. -# Source Supabase config for install ping -if [ -z "${GSTACK_TELEMETRY_ENDPOINT:-}" ] && [ -f "$GSTACK_DIR/supabase/config.sh" ]; then +# This logs an update check event for community health metrics via edge function. +# If Supabase is not configured or telemetry is off, this is a no-op. +if [ -z "${GSTACK_SUPABASE_URL:-}" ] && [ -f "$GSTACK_DIR/supabase/config.sh" ]; then . "$GSTACK_DIR/supabase/config.sh" fi -_SUPA_ENDPOINT="${GSTACK_TELEMETRY_ENDPOINT:-}" +_SUPA_URL="${GSTACK_SUPABASE_URL:-}" _SUPA_KEY="${GSTACK_SUPABASE_ANON_KEY:-}" # Respect telemetry opt-out — don't ping Supabase if user set telemetry: off _TEL_TIER="$("$GSTACK_DIR/bin/gstack-config" get telemetry 2>/dev/null || true)" -if [ -n "$_SUPA_ENDPOINT" ] && [ -n "$_SUPA_KEY" ] && [ "${_TEL_TIER:-off}" != "off" ]; then +if [ -n "$_SUPA_URL" ] && [ -n "$_SUPA_KEY" ] && [ "${_TEL_TIER:-off}" != "off" ]; then _OS="$(uname -s | tr '[:upper:]' '[:lower:]')" curl -sf --max-time 5 \ - -X POST "${_SUPA_ENDPOINT}/update_checks" \ + -X POST "${_SUPA_URL}/functions/v1/update-check" \ -H "Content-Type: application/json" \ -H "apikey: ${_SUPA_KEY}" \ - -H "Authorization: Bearer ${_SUPA_KEY}" \ - -H "Prefer: return=minimal" \ - -d "{\"gstack_version\":\"$LOCAL\",\"os\":\"$_OS\"}" \ + -d "{\"version\":\"$LOCAL\",\"os\":\"$_OS\"}" \ >/dev/null 2>&1 & fi diff --git a/browse/dist/browse b/browse/dist/browse deleted file mode 100755 index 79562b32..00000000 Binary files a/browse/dist/browse and /dev/null differ diff --git a/browse/dist/find-browse b/browse/dist/find-browse deleted file mode 100755 index fe55e4a9..00000000 Binary files a/browse/dist/find-browse and /dev/null differ diff --git a/package.json b/package.json index c3732917..f7a97d31 100644 --- a/package.json +++ b/package.json @@ -1,6 +1,6 @@ { "name": "gstack", - "version": "0.11.16.0", + "version": "0.11.17.0", "description": "Garry's Stack — Claude Code skills + fast headless browser. One repo, one install, entire AI engineering workflow.", "license": "MIT", "type": "module", diff --git a/supabase/config.sh b/supabase/config.sh index b10aef6b..bfc739bc 100644 --- a/supabase/config.sh +++ b/supabase/config.sh @@ -1,10 +1,8 @@ #!/usr/bin/env bash # Supabase project config for gstack telemetry # These are PUBLIC keys — safe to commit (like Firebase public config). -# RLS policies restrict what the anon/publishable key can do (INSERT only). +# RLS denies all access to the anon key. All reads and writes go through +# edge functions (which use SUPABASE_SERVICE_ROLE_KEY server-side). 
GSTACK_SUPABASE_URL="https://frugpmstpnojnhfyimgv.supabase.co" GSTACK_SUPABASE_ANON_KEY="sb_publishable_tR4i6cyMIrYTE3s6OyHGHw_ppx2p6WK" - -# Telemetry ingest endpoint (Data API) -GSTACK_TELEMETRY_ENDPOINT="${GSTACK_SUPABASE_URL}/rest/v1" diff --git a/supabase/functions/community-pulse/index.ts b/supabase/functions/community-pulse/index.ts index 23e30202..acf2fdb7 100644 --- a/supabase/functions/community-pulse/index.ts +++ b/supabase/functions/community-pulse/index.ts @@ -1,9 +1,12 @@ // gstack community-pulse edge function -// Returns weekly active installation count for preamble display. -// Cached for 1 hour via Cache-Control header. +// Returns aggregated community stats for the dashboard: +// weekly active count, top skills, crash clusters, version distribution. +// Uses server-side cache (community_pulse_cache table) to prevent DoS. import { createClient } from "https://esm.sh/@supabase/supabase-js@2"; +const CACHE_MAX_AGE_MS = 60 * 60 * 1000; // 1 hour + Deno.serve(async () => { const supabase = createClient( Deno.env.get("SUPABASE_URL") ?? "", @@ -11,17 +14,37 @@ Deno.serve(async () => { ); try { - // Count unique update checks in the last 7 days (install base proxy) + // Check cache first + const { data: cached } = await supabase + .from("community_pulse_cache") + .select("data, refreshed_at") + .eq("id", 1) + .single(); + + if (cached?.refreshed_at) { + const age = Date.now() - new Date(cached.refreshed_at).getTime(); + if (age < CACHE_MAX_AGE_MS) { + return new Response(JSON.stringify(cached.data), { + status: 200, + headers: { + "Content-Type": "application/json", + "Cache-Control": "public, max-age=3600", + }, + }); + } + } + + // Cache is stale or missing — recompute const weekAgo = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000).toISOString(); const twoWeeksAgo = new Date(Date.now() - 14 * 24 * 60 * 60 * 1000).toISOString(); - // This week's active + // Weekly active (update checks this week) const { count: thisWeek } = await supabase .from("update_checks") .select("*", { count: "exact", head: true }) .gte("checked_at", weekAgo); - // Last week's active (for change %) + // Last week (for change %) const { count: lastWeek } = await supabase .from("update_checks") .select("*", { count: "exact", head: true }) @@ -34,22 +57,78 @@ Deno.serve(async () => { ? Math.round(((current - previous) / previous) * 100) : 0; - return new Response( - JSON.stringify({ - weekly_active: current, - change_pct: changePct, - }), - { - status: 200, - headers: { - "Content-Type": "application/json", - "Cache-Control": "public, max-age=3600", // 1 hour cache - }, + // Top skills (last 7 days) + const { data: skillRows } = await supabase + .from("telemetry_events") + .select("skill") + .eq("event_type", "skill_run") + .gte("event_timestamp", weekAgo) + .not("skill", "is", null) + .limit(1000); + + const skillCounts: Record = {}; + for (const row of skillRows ?? []) { + if (row.skill) { + skillCounts[row.skill] = (skillCounts[row.skill] ?? 
       }
-    );
+    }
+    const topSkills = Object.entries(skillCounts)
+      .sort(([, a], [, b]) => b - a)
+      .slice(0, 10)
+      .map(([skill, count]) => ({ skill, count }));
+
+    // Crash clusters (top 5)
+    const { data: crashes } = await supabase
+      .from("crash_clusters")
+      .select("error_class, gstack_version, total_occurrences, identified_users")
+      .limit(5);
+
+    // Version distribution (last 7 days)
+    const versionCounts: Record<string, number> = {};
+    const { data: versionRows } = await supabase
+      .from("telemetry_events")
+      .select("gstack_version")
+      .eq("event_type", "skill_run")
+      .gte("event_timestamp", weekAgo)
+      .limit(1000);
+
+    for (const row of versionRows ?? []) {
+      if (row.gstack_version) {
+        versionCounts[row.gstack_version] = (versionCounts[row.gstack_version] ?? 0) + 1;
+      }
+    }
+    const topVersions = Object.entries(versionCounts)
+      .sort(([, a], [, b]) => b - a)
+      .slice(0, 5)
+      .map(([version, count]) => ({ version, count }));
+
+    const result = {
+      weekly_active: current,
+      change_pct: changePct,
+      top_skills: topSkills,
+      crashes: crashes ?? [],
+      versions: topVersions,
+    };
+
+    // Upsert cache
+    await supabase
+      .from("community_pulse_cache")
+      .upsert({
+        id: 1,
+        data: result,
+        refreshed_at: new Date().toISOString(),
+      });
+
+    return new Response(JSON.stringify(result), {
+      status: 200,
+      headers: {
+        "Content-Type": "application/json",
+        "Cache-Control": "public, max-age=3600",
+      },
+    });
   } catch {
     return new Response(
-      JSON.stringify({ weekly_active: 0, change_pct: 0 }),
+      JSON.stringify({ weekly_active: 0, change_pct: 0, top_skills: [], crashes: [], versions: [] }),
       {
         status: 200,
         headers: { "Content-Type": "application/json" },
diff --git a/supabase/migrations/002_tighten_rls.sql b/supabase/migrations/002_tighten_rls.sql
new file mode 100644
index 00000000..c5cb55de
--- /dev/null
+++ b/supabase/migrations/002_tighten_rls.sql
@@ -0,0 +1,36 @@
+-- 002_tighten_rls.sql
+-- Lock down read/update access. Keep INSERT policies so old clients can still
+-- write via PostgREST while new clients migrate to edge functions.
+
+-- Drop all SELECT policies (anon key should not read telemetry data)
+DROP POLICY IF EXISTS "anon_select" ON telemetry_events;
+DROP POLICY IF EXISTS "anon_select" ON installations;
+DROP POLICY IF EXISTS "anon_select" ON update_checks;
+
+-- Drop dangerous UPDATE policy (was unrestricted on all columns)
+DROP POLICY IF EXISTS "anon_update_last_seen" ON installations;
+
+-- Keep INSERT policies — old clients (pre-v0.11.16) still POST directly to
+-- PostgREST. These will be dropped in a future migration once adoption of
+-- edge-function-based sync is widespread.
+-- (anon_insert_only ON telemetry_events — kept)
+-- (anon_insert_only ON installations — kept)
+-- (anon_insert_only ON update_checks — kept)
+
+-- Explicitly revoke view access (belt-and-suspenders)
+REVOKE SELECT ON crash_clusters FROM anon;
+REVOKE SELECT ON skill_sequences FROM anon;
+
+-- Keep error_message and failed_step columns (exist on live schema, may be
+-- used in future). Add them to the migration record so repo matches live.
+ALTER TABLE telemetry_events ADD COLUMN IF NOT EXISTS error_message TEXT;
+ALTER TABLE telemetry_events ADD COLUMN IF NOT EXISTS failed_step TEXT;
+
+-- Cache table for community-pulse aggregation (prevents DoS via repeated queries)
+CREATE TABLE IF NOT EXISTS community_pulse_cache (
+  id INTEGER PRIMARY KEY DEFAULT 1,
+  data JSONB NOT NULL DEFAULT '{}'::jsonb,
+  refreshed_at TIMESTAMPTZ DEFAULT now()
+);
+ALTER TABLE community_pulse_cache ENABLE ROW LEVEL SECURITY;
+-- No anon policies — only service_role_key (used by edge functions) can read/write
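A plausible deploy sequence for this migration — hedged: it assumes the Supabase CLI is installed and the project is already linked, which the patch itself doesn't show:

```bash
# Apply 002_tighten_rls.sql to the linked project, redeploy the functions
# that now front all access, then smoke-test the resulting policies.
supabase db push
supabase functions deploy community-pulse
supabase functions deploy telemetry-ingest
bash supabase/verify-rls.sh   # reads/updates must deny; inserts still allow
```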
diff --git a/supabase/verify-rls.sh b/supabase/verify-rls.sh
new file mode 100755
index 00000000..4ed92bc6
--- /dev/null
+++ b/supabase/verify-rls.sh
@@ -0,0 +1,143 @@
+#!/usr/bin/env bash
+# verify-rls.sh — smoke test after deploying 002_tighten_rls.sql
+#
+# Verifies:
+#   - SELECT denied on all tables and views (security fix)
+#   - UPDATE denied on installations (security fix)
+#   - INSERT still allowed on tables (kept for old client compat)
+#
+# Run manually after deploying the migration:
+#   bash supabase/verify-rls.sh
+set -uo pipefail
+
+SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
+. "$SCRIPT_DIR/config.sh"
+
+URL="$GSTACK_SUPABASE_URL"
+KEY="$GSTACK_SUPABASE_ANON_KEY"
+PASS=0
+FAIL=0
+TOTAL=0
+
+# check <desc> <expected> <method> <path> [data]
+# expected: "deny" (want 401/403) or "allow" (want 200/201)
+check() {
+  local desc="$1"
+  local expected="$2"
+  local method="$3"
+  local path="$4"
+  local data="${5:-}"
+  TOTAL=$(( TOTAL + 1 ))
+
+  local resp_file
+  resp_file="$(mktemp 2>/dev/null || echo "/tmp/verify-rls-$$-$TOTAL")"
+
+  local http_code
+  if [ "$method" = "GET" ]; then
+    http_code="$(curl -s -o "$resp_file" -w '%{http_code}' --max-time 10 \
+      "${URL}/rest/v1/${path}" \
+      -H "apikey: ${KEY}" \
+      -H "Authorization: Bearer ${KEY}" \
+      -H "Content-Type: application/json" 2>/dev/null)" || http_code="000"
+  elif [ "$method" = "POST" ]; then
+    http_code="$(curl -s -o "$resp_file" -w '%{http_code}' --max-time 10 \
+      -X POST "${URL}/rest/v1/${path}" \
+      -H "apikey: ${KEY}" \
+      -H "Authorization: Bearer ${KEY}" \
+      -H "Content-Type: application/json" \
+      -H "Prefer: return=minimal" \
+      -d "$data" 2>/dev/null)" || http_code="000"
+  elif [ "$method" = "PATCH" ]; then
+    http_code="$(curl -s -o "$resp_file" -w '%{http_code}' --max-time 10 \
+      -X PATCH "${URL}/rest/v1/${path}" \
+      -H "apikey: ${KEY}" \
+      -H "Authorization: Bearer ${KEY}" \
+      -H "Content-Type: application/json" \
+      -d "$data" 2>/dev/null)" || http_code="000"
+  fi
+
+  # Trim to last 3 chars (the HTTP code) in case of concatenation
+  http_code="$(echo "$http_code" | grep -oE '[0-9]{3}$' || echo "000")"
+
+  if [ "$expected" = "deny" ]; then
+    case "$http_code" in
+      401|403)
+        echo "  PASS  $desc (HTTP $http_code, denied)"
+        PASS=$(( PASS + 1 )) ;;
+      200|204)
+        # For GETs: 200+empty means RLS filtering (pass). 200+data means leak (fail).
+        # For PATCH: 204 means no rows matched — could be RLS or missing row.
+        if [ "$method" = "GET" ]; then
+          body="$(cat "$resp_file" 2>/dev/null || echo "")"
+          if [ "$body" = "[]" ] || [ -z "$body" ]; then
+            echo "  PASS  $desc (HTTP $http_code, empty — RLS filtering)"
+            PASS=$(( PASS + 1 ))
+          else
+            echo "  FAIL  $desc (HTTP $http_code, got data!)"
+            FAIL=$(( FAIL + 1 ))
+          fi
+        else
+          # PATCH 204 = no rows affected. RLS blocked the update or row doesn't exist.
+          # Either way, the attacker can't modify data.
+          echo "  PASS  $desc (HTTP $http_code, no rows affected)"
+          PASS=$(( PASS + 1 ))
+        fi ;;
+      000)
+        echo "  WARN  $desc (connection failed)"
+        FAIL=$(( FAIL + 1 )) ;;
+      *)
+        echo "  WARN  $desc (HTTP $http_code — unexpected)"
+        FAIL=$(( FAIL + 1 )) ;;
+    esac
+  elif [ "$expected" = "allow" ]; then
+    case "$http_code" in
+      200|201|204|409)
+        # 409 = conflict (duplicate key) — INSERT policy works, row already exists
+        echo "  PASS  $desc (HTTP $http_code, allowed as expected)"
+        PASS=$(( PASS + 1 )) ;;
+      401|403)
+        echo "  FAIL  $desc (HTTP $http_code, denied — should be allowed)"
+        FAIL=$(( FAIL + 1 )) ;;
+      000)
+        echo "  WARN  $desc (connection failed)"
+        FAIL=$(( FAIL + 1 )) ;;
+      *)
+        echo "  WARN  $desc (HTTP $http_code — unexpected)"
+        FAIL=$(( FAIL + 1 )) ;;
+    esac
+  fi
+
+  rm -f "$resp_file" 2>/dev/null || true
+}
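For readers skimming the harness, the calling convention of `check` as used below, with one hypothetical example per branch (`my_table` is illustrative only, not part of the real check list):

```bash
# check <desc> <expected> <method> <path> [data]
#   expected "deny"  → PASS on 401/403, or on an empty/no-op 200/204
#   expected "allow" → PASS on 200/201/204, plus 409 (duplicate-key conflict)
#
# Hypothetical usage:
#   check "SELECT my_table" deny  GET  "my_table?select=*&limit=1"
#   check "INSERT my_table" allow POST "my_table" '{"col":"val"}'
```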
+ echo " PASS $desc (HTTP $http_code, no rows affected)" + PASS=$(( PASS + 1 )) + fi ;; + 000) + echo " WARN $desc (connection failed)" + FAIL=$(( FAIL + 1 )) ;; + *) + echo " WARN $desc (HTTP $http_code — unexpected)" + FAIL=$(( FAIL + 1 )) ;; + esac + elif [ "$expected" = "allow" ]; then + case "$http_code" in + 200|201|204|409) + # 409 = conflict (duplicate key) — INSERT policy works, row already exists + echo " PASS $desc (HTTP $http_code, allowed as expected)" + PASS=$(( PASS + 1 )) ;; + 401|403) + echo " FAIL $desc (HTTP $http_code, denied — should be allowed)" + FAIL=$(( FAIL + 1 )) ;; + 000) + echo " WARN $desc (connection failed)" + FAIL=$(( FAIL + 1 )) ;; + *) + echo " WARN $desc (HTTP $http_code — unexpected)" + FAIL=$(( FAIL + 1 )) ;; + esac + fi + + rm -f "$resp_file" 2>/dev/null || true +} + +echo "RLS Verification (after 002_tighten_rls.sql)" +echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" +echo "" +echo "Read denial (should be blocked):" +check "SELECT telemetry_events" deny GET "telemetry_events?select=*&limit=1" +check "SELECT installations" deny GET "installations?select=*&limit=1" +check "SELECT update_checks" deny GET "update_checks?select=*&limit=1" +check "SELECT crash_clusters" deny GET "crash_clusters?select=*&limit=1" +check "SELECT skill_sequences" deny GET "skill_sequences?select=skill_a&limit=1" + +echo "" +echo "Update denial (should be blocked):" +check "UPDATE installations" deny PATCH "installations?installation_id=eq.test_verify_rls" '{"gstack_version":"hacked"}' + +echo "" +echo "Insert allowed (kept for old client compat):" +check "INSERT telemetry_events" allow POST "telemetry_events" '{"gstack_version":"verify_rls_test","os":"test","event_timestamp":"2026-01-01T00:00:00Z","outcome":"test"}' +check "INSERT update_checks" allow POST "update_checks" '{"gstack_version":"verify_rls_test","os":"test"}' +check "INSERT installations" allow POST "installations" '{"installation_id":"verify_rls_test"}' + +echo "" +echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━" +echo "Results: $PASS passed, $FAIL failed (of $TOTAL checks)" + +if [ "$FAIL" -gt 0 ]; then + echo "VERDICT: FAIL" + exit 1 +else + echo "VERDICT: PASS — reads/updates blocked, inserts allowed" + exit 0 +fi diff --git a/test/telemetry.test.ts b/test/telemetry.test.ts index 4dc79b29..a3050631 100644 --- a/test/telemetry.test.ts +++ b/test/telemetry.test.ts @@ -78,8 +78,8 @@ describe('gstack-telemetry-log', () => { const events = parseJsonl(); expect(events).toHaveLength(1); - // installation_id should be a SHA-256 hash (64 hex chars) - expect(events[0].installation_id).toMatch(/^[a-f0-9]{64}$/); + // installation_id should be a UUID v4 (or hex fallback) + expect(events[0].installation_id).toMatch(/^[a-f0-9-]{32,36}$/); }); test('installation_id is null for anonymous tier', () => { @@ -244,16 +244,32 @@ describe('gstack-analytics', () => { }); describe('gstack-telemetry-sync', () => { - test('exits silently with no endpoint configured', () => { - // Default: GSTACK_TELEMETRY_ENDPOINT is not set → exit 0 + test('exits silently with no Supabase URL configured', () => { + // Default: GSTACK_SUPABASE_URL is not set → exit 0 const result = run(`${BIN}/gstack-telemetry-sync`); expect(result).toBe(''); }); test('exits silently with no JSONL file', () => { - const result = run(`${BIN}/gstack-telemetry-sync`, { GSTACK_TELEMETRY_ENDPOINT: 'http://localhost:9999' }); + const result = run(`${BIN}/gstack-telemetry-sync`, { GSTACK_SUPABASE_URL: 'http://localhost:9999' }); expect(result).toBe(''); }); + + test('does not 
+    setConfig('telemetry', 'anonymous');
+    run(`${BIN}/gstack-telemetry-log --skill qa --duration 60 --outcome success --session-id raw-fields-1`);
+
+    const events = parseJsonl();
+    expect(events).toHaveLength(1);
+    // Edge function expects these raw field names, NOT Postgres column names
+    expect(events[0]).toHaveProperty('v');
+    expect(events[0]).toHaveProperty('ts');
+    expect(events[0]).toHaveProperty('sessions');
+    // Should NOT have Postgres column names
+    expect(events[0]).not.toHaveProperty('schema_version');
+    expect(events[0]).not.toHaveProperty('event_timestamp');
+    expect(events[0]).not.toHaveProperty('concurrent_sessions');
+  });
 });
 
 describe('gstack-community-dashboard', () => {
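Taken together, a reviewer can exercise the whole change with three commands — hedged: this assumes Bun as the test runner (per `bun.lock` in `.gitignore` at the top of this diff) and a deployed Supabase project:

```bash
# Run the extended telemetry suite, including the new raw-field-name test.
bun test test/telemetry.test.ts

# Verify the tightened policies against the live project.
bash supabase/verify-rls.sh

# Eyeball the dashboard end-to-end (single edge-function call, server-cached).
bin/gstack-community-dashboard
```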