mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-01 19:25:10 +02:00
v1.12.0.0 feat: /setup-gbrain — coding-agent onboarding for gbrain (#1183)
* feat(setup-gbrain): add gstack-gbrain-repo-policy bin helper

Per-remote trust-tier store for the forthcoming /setup-gbrain skill. Tiers are the D3 triad (read-write / read-only / deny), keyed by a normalized remote URL so ssh-shorthand and https variants collapse to the same entry. The file carries _schema_version: 2 (D2-eng); legacy `allow` values from pre-D3 experiments auto-migrate to `read-write` on first read, idempotent, with a one-shot log line.

Pure bash + jq to match the existing gstack-brain-* family. Atomic writes via tmpfile + rename. Policy file mode 0600. Corrupt files quarantine to .corrupt-<ts> and start fresh.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): unit tests for gstack-gbrain-repo-policy

24 tests covering normalize (ssh/https/shorthand/uppercase collapse to one key), set/get round-trip, all three D3 tiers accepted, invalid tiers rejected, file mode 0600, _schema_version field written on fresh files, legacy allow migration (including idempotence and preservation of non-allow entries), corrupt-JSON quarantine + fresh-file recovery, list output sorting, and get-without-arg auto-detect against a git repo with no origin.

All tests green against a per-test tmpdir GSTACK_HOME so nothing leaks into the real ~/.gstack.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-detect state reporter

Pure-introspection JSON emitter for the /setup-gbrain skill's start-up branching. Reports: gbrain presence + version on PATH, ~/.gbrain/config.json existence + engine, `gbrain doctor --json` health (wrapped in timeout 5s to match the /health D6 pattern), gstack-brain-sync mode via gstack-config, and ~/.gstack/.git presence for the memory-sync feature.

Never modifies state. Always emits valid JSON even when every check is false. Handles malformed ~/.gbrain/config.json without crashing — gbrain_engine is null in that case, not an error.
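The normalization described above can be sketched in bash. This is a hypothetical reconstruction (the function name and exact rules are illustrative; the shipped logic lives in gstack-gbrain-repo-policy and may differ in detail):

```shell
# Sketch of remote-URL normalization: collapse ssh shorthand, ssh://,
# http(s)://, trailing ".git", and case differences onto one policy key.
# Hypothetical helper, not the shipped implementation.
normalize_remote() {
  local url
  url=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  url="${url%/}"      # drop trailing slash
  url="${url%.git}"   # drop trailing .git
  case "$url" in
    ssh://git@*) url="${url#ssh://git@}" ;;
    git@*)       url="${url#git@}"; url="${url/://}" ;;
    https://*)   url="${url#https://}" ;;
    http://*)    url="${url#http://}" ;;
  esac
  printf '%s\n' "$url"
}
```

With rules like these, git@github.com:Owner/Repo.git and https://github.com/owner/repo resolve to the same key, which is what lets the ssh and https variants share one trust entry.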
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-install with D5 detect-first + D19 PATH-shadow guard

Clones gbrain at a pinned commit (v0.18.2) and registers it via `bun link`.

Before any clone: D5 detect-first — probes ~/git/gbrain, ~/gbrain, and the install target for a valid pre-existing clone (package.json with name "gbrain" and bin.gbrain set). If one is found, `bun link` runs there instead of cloning a second copy. Prevents the day-one duplicate-install footgun on the skill author's own machine.

After install: D19 PATH-shadow guard — reads the install-dir's package.json version, compares to `gbrain --version` on PATH. On mismatch: exits 3, prints every gbrain binary on PATH via `type -a`, and gives a remediation menu. Setup skills refuse broken environments instead of warning and continuing.

Prereq checks (bun, git, https://github.com reachability) fail fast with install hints. --dry-run and --validate-only flags let the skill probe the plan without touching state; tests use them to cover D5 and D19 without exercising real bun link.

Pin is a load-bearing version: setup-gbrain v1 verified against gbrain v0.18.2. Updating requires re-running Pre-Impl Gate 1 to verify gbrain's CLI + config shapes haven't drifted.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): unit tests for gstack-gbrain-detect + install

15 tests covering: detect emits valid JSON when nothing configured, reports gstack_brain_git on GSTACK_HOME/.git presence, reads ~/.gbrain/config.json engine, tolerates malformed config, detects a mocked gbrain binary on PATH with version parsing.

For install: D5 detect-first uses ~/git/gbrain fixtures under a sandboxed HOME, verifies fall-through to fresh clone when no valid clone exists, rejects invalid package.json shapes.
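The D19 comparison described above reduces to a small version check. A minimal sketch, assuming the v-prefix tolerance the tests exercise (`versions_match` is a hypothetical name):

```shell
# D19 sketch: the install dir's package.json version must equal whatever
# `gbrain --version` reports on PATH, tolerating a leading "v" on either
# side. Hypothetical helper, not the shipped implementation.
versions_match() {
  local pkg="${1#v}" path_ver="${2#v}"
  [ -n "$pkg" ] && [ "$pkg" = "$path_ver" ]
}
```

On a mismatch, the real bin exits 3 and lists every gbrain on PATH via `type -a gbrain` rather than silently repairing PATH.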
D19 PATH-shadow validation uses a fake gbrain on a minimal SAFE_PATH to simulate version mismatch, same-version-pass, v-prefix tolerance, missing binary on PATH, and missing version field in package.json. --validate-only mode in the install bin makes the D19 check unit-testable without running real bun link (which touches ~/.bun/bin).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-lib.sh with read_secret_to_env (D3-eng)

Shared secret-read helper for PAT (D11) and pooler URL paste (D16). One implementation of the hardest-to-get-right pattern: stty -echo + SIGINT/TERM/EXIT trap that restores terminal mode, read into a named env var, optional redacted preview.

Validates the target var name against [A-Z_][A-Z0-9_]* to prevent bash name-injection via `read -r "$varname"`. When stdin is not a TTY (CI, piped tests) the stty branches skip cleanly — piped input doesn't echo anyway. Exports the var after read so subprocesses inherit it; callers own the `unset` at handoff time. Sourced, not executed — no +x bit.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-supabase-verify structural URL check

Zero-network validator for Supabase Session Pooler URLs before handing them to `gbrain init`. Canonical shape verified per gbrain init.ts:266:

  postgresql://postgres.<ref>:<password>@aws-0-<region>.pooler.supabase.com:6543/postgres

Rejects direct-connection URLs (db.*.supabase.co:5432) with a distinct exit code 3 and clear IPv6-failure remediation — that's the most common paste mistake users make, so it earns its own UX path rather than a generic "bad URL" error.

Never echoes the URL (contains a password) in error messages; tests verify a distinct seed password never appears in stderr on any reject path. Accepts URL from argv[1] or stdin ("-" or no arg).
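The canonical shape quoted above can be checked structurally with a single anchored pattern. A simplified sketch (the shipped verifier also handles stdin input, case-insensitive hosts, and distinct per-cause exit codes):

```shell
# Structural check for a Supabase Session Pooler URL, no network access.
# Simplified sketch of an anchored match against:
#   postgresql://postgres.<ref>:<password>@aws-0-<region>.pooler.supabase.com:6543/postgres
is_session_pooler_url() {
  printf '%s' "$1" | grep -Eq \
    '^postgresql://postgres\.[a-z0-9]+:[^@]+@aws-0-[a-z0-9-]+\.pooler\.supabase\.com:6543/postgres$'
}
```

Direct-connection URLs fail this check on three counts at once: a plain postgres user with no .<ref> suffix, a db.*.supabase.co host, and port 5432.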
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): unit tests for supabase-verify + lib.sh secret helper

22 tests. verify: accepts canonical pooler URL (argv + stdin modes), rejects direct-connection URL with exit 3, rejects wrong scheme, wrong port, empty password, missing userinfo, plain 'postgres' user (catches direct-URL paste errors), wrong host, empty URL. Case-insensitive host match. Explicit negative: error messages never echo the URL password.

lib.sh read_secret_to_env: reads piped stdin into the named env var, exports to subprocesses, redacted-preview emits masked form on stderr with the seed password absent, rejects invalid var names (lowercase, leading digit, hyphens), rejects missing/unknown flags, secret value never appears on stdout.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-supabase-provision Management API wrapper

Four subcommands: list-orgs, create, wait, pooler-url. Built against the verified Supabase Management API shape (Pre-Impl Gate 1):

- POST /v1/projects with {name, db_pass, organization_slug, region} — not the original plan's /v1/organizations/{ref}/projects
- No `plan` field; subscription tier is org-level per the OpenAPI description ("Subscription Plan is now set on organization level and is ignored in this request")
- GET /v1/projects/{ref}/config/database/pooler for pooler config — not /config/database

Secrets discipline: SUPABASE_ACCESS_TOKEN (PAT) and DB_PASS read from env only, never from argv (D8 grep test enforces this). `set +x` at the top as a defensive default so debug tracing never leaks secrets. Management API hostname hardcoded to SUPABASE_API_BASE env override — no user-controlled URL portion (SSRF guard).

HTTP error paths: 401/403 → exit 3 (auth), 402 → 4 (quota), 409 → 5 (conflict), 429 + 5xx → exponential-backoff retry up to 3 attempts, then exit 8.
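The retry policy at the end of that list (429 and 5xx retried with exponential backoff, up to 3 attempts, then exit 8) can be sketched generically. Hypothetical helper; the shipped api_call also distinguishes the non-retryable 401/402/409 codes:

```shell
# Retry a transient-failure command up to 3 attempts with exponential
# backoff, then give up with status 8. RETRY_BASE_DELAY is a knob added
# here so tests can skip real sleeps; sketch only.
retry_with_backoff() {
  local max=3 delay="${RETRY_BASE_DELAY:-1}" attempt=1
  while true; do
    if "$@"; then return 0; fi
    if [ "$attempt" -ge "$max" ]; then return 8; fi
    sleep "$delay"
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
}
```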
Wait subcommand polls every 5s until ACTIVE_HEALTHY with a configurable timeout; terminal states (INIT_FAILED, REMOVED, etc.) exit 7 immediately with a clear message. Timeout emits the --resume-provision hint so the skill can recover.

Pooler-url constructs the URL locally from db_user/host/port/name + DB_PASS rather than trusting the API response's connection_string field, which is templated with [PASSWORD] rather than the real value. Handles both object and array response shapes, preferring session pool_mode when Supabase returns multiple pooler configs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): unit tests for gstack-gbrain-supabase-provision via mock API

22 tests covering D21 HTTP error suite (401/403/402/409/429/5xx) and happy paths for all four subcommands. Every test spins up a Bun.serve mock server bound to SUPABASE_API_BASE so nothing hits the real API. Uses Bun.spawn (async) rather than spawnSync because spawnSync blocks the Bun event loop, which prevents Bun.serve mocks from responding — calls would hit curl's own timeout instead of round-tripping.

Verifies: POST body contains organization_slug (not organization_id) and no `plan` field, bearer-token auth header, retry-on-429 with eventual success, exit-8 on persistent 5xx after max retries, wait succeeds on ACTIVE_HEALTHY, exits 7 on INIT_FAILED, exits 6 with --resume-provision hint on timeout, pooler-url builds URL locally from db_user/host/port/name + DB_PASS (not response connection_string template), handles array pooler responses.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add SKILL.md.tmpl — user-facing skill prompt

Stitches together every slice built so far (repo-policy, detect, install, lib.sh secret helper, supabase-verify, supabase-provision) into a single interactive flow.
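The pooler-url subcommand's local construction, described in the provision commit above, is plain string assembly over the response's structural fields. A sketch (hypothetical function name; DB_PASS comes from env only, matching the bin's secrets discipline):

```shell
# Build the pooler URL from db_user/host/port/name plus DB_PASS from the
# environment, instead of trusting the API's connection_string, which is
# templated with [PASSWORD] rather than the real value. Sketch only.
build_pooler_url() {
  local db_user="$1" host="$2" port="$3" db_name="$4"
  printf 'postgresql://%s:%s@%s:%s/%s\n' \
    "$db_user" "$DB_PASS" "$host" "$port" "$db_name"
}
```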
Paths: Supabase existing-URL, Supabase auto-provision (D7), Supabase manual, PGLite local, switch (PGLite ↔ Supabase via gbrain migrate wrapped in timeout 180s per D9).

Secrets discipline per D8/D10/D11: PAT + DB_PASS + pooler URL all read via read_secret_to_env from lib.sh and handed to gbrain via GBRAIN_DATABASE_URL env, never argv. PAT carries the full D11 scope disclosure before collection and an explicit revocation reminder after success. D12 SIGINT recovery prints the in-flight ref + resume command.

D18 MCP registration is scoped honestly to Claude Code — skips with a manual-register hint when `claude` is not on PATH. D6 per-remote trust-triad question (read-write/read-only/deny/skip-for-now) gates repo import; the triad values compose with the D2-eng schema-version policy file so future migrations stay deterministic.

Skill runs concurrent-run-locked via mkdir ~/.gstack/.setup-gbrain.lock.d (atomic, same pattern as gstack-brain-sync). Telemetry (D4) payload carries enumerated categorical values only — never URL, PAT, or any postgresql:// substring. --repo, --switch, --resume-provision, --cleanup-orphans shortcut modes documented inline; the skill parses its own invocation args.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(health): integrate gbrain as D6 composite dimension

Adds a GBrain row to the /health dashboard rubric with weight 10%. Three sub-signals rolled into one 0-10 score: doctor status (0.5), sync queue depth (0.3), last-push age (0.2). Redistributes when gbrain_sync_mode is off so the dimension stays fair.

Weights rebalance: typecheck 25→22, lint 20→18, test 30→28, deadcode 15→13, shell 10→9, gbrain +10 — sums to 100.

gbrain doctor --json wrapped in timeout 5s so a hung gbrain never stalls the /health dashboard. Dimension is omitted (not red) when gbrain is not installed — running /health on a non-gbrain machine shouldn't penalize that choice. History-JSONL adds a `gbrain` field.
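The concurrent-run lock mentioned above leans on mkdir being atomic: exactly one caller can create the directory. A sketch of the pattern (the directory argument is generic here; the skill uses ~/.gstack/.setup-gbrain.lock.d):

```shell
# mkdir-based mutual exclusion: creation either succeeds (lock acquired)
# or fails because another run already holds the lock directory.
acquire_lock() { mkdir "$1" 2>/dev/null; }
release_lock() { rmdir "$1" 2>/dev/null || true; }
```

Unlike flock, this works on any filesystem and leaves an inspectable artifact (a stale lock is just a leftover directory to rmdir), which is why the same pattern already serves gstack-brain-sync.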
Pre-D6 entries read as null for trend comparison; new tracking starts from first post-D6 run.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(test): add secret-sink-harness for negative-space leak testing (D21 #5)

Runs a subprocess with a seeded secret, captures every channel the subprocess could leak through, and asserts the seed never appears. Built per the D1-eng tightened contract: per-run tmp $HOME, four seed match rules (exact + URL-decoded + first-12-char prefix + base64), fd-level stdout/stderr capture via Bun.spawn, post-mortem walk of every file written under $HOME, separate buckets for telemetry JSONL.

Reusable: any future skill that handles secrets can import runWithSecretSink and run positive/negative controls against its own bins. The harness itself is ~180 lines of TS with no external deps beyond Bun + node:fs.

Out of scope for v1 (documented as follow-ups): subprocess env dump (portable /proc reading), the user's real shell history (bins don't modify it).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: secret-sink harness positive controls + real-bin negative controls

11 tests. Positive controls deliberately leak a seed in every covered channel (stdout, stderr, a file under $HOME, the telemetry JSONL path, base64-encoded, first-12-char prefix) and assert the harness catches each one. Without these, a harness that silently under-reports would look identical to a harness that works.

Negative controls run real setup-gbrain bins with distinctive seeds:

- supabase-verify rejects a mysql:// URL and a direct-connection URL, password never appears in any captured channel
- lib.sh read_secret_to_env reads piped stdin, emits only the length, seed value stays invisible
- supabase-provision on an auth-failure path fails fast without leaking the PAT to any channel

Covers D21 #5 leak harness + uses it to validate D3-eng, D10, D11 discipline end-to-end on the already-shipped bins.
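The four seed match rules can be approximated in shell for illustration (the shipped harness is TypeScript; every helper name below is hypothetical):

```shell
# Generate the four leak-detection forms of a seeded secret: exact,
# first-12-char prefix, base64-encoded, and percent-encoded. A capture
# channel "leaks" if any form appears in it. Sketch only.
percent_encode() {
  local s="$1" out="" c i
  for ((i = 0; i < ${#s}; i++)); do
    c="${s:i:1}"
    case "$c" in
      [A-Za-z0-9.~_-]) out+="$c" ;;
      *) out+=$(printf '%%%02X' "'$c") ;;
    esac
  done
  printf '%s\n' "$out"
}
seed_forms() {
  printf '%s\n' "$1"          # exact
  printf '%s\n' "${1:0:12}"   # first-12-char prefix
  printf '%s' "$1" | base64   # base64-encoded
  percent_encode "$1"         # percent-encoded (URL form)
}
leaked() {  # usage: leaked <seed> <capture-file>
  local form
  while IFS= read -r form; do
    if grep -qF -- "$form" "$2"; then return 0; fi
  done < <(seed_forms "$1")
  return 1
}
```

The prefix and encoded forms are what make the check "negative-space": a bin that truncates or re-encodes a secret before writing it still trips the harness.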
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add list-orphans + delete-project subcommands (D20)

Powers /setup-gbrain --cleanup-orphans. list-orphans filters the authenticated user's Supabase projects by name prefix (default "gbrain") and excludes the project the local ~/.gbrain/config.json currently points at, so only unclaimed gbrain-shaped projects come back. Active-ref detection parses the pooler URL's user portion (postgres.<ref>:<pw>@...).

delete-project is a thin DELETE /v1/projects/{ref} wrapper with no confirmation of its own — the skill's UI layer owns the per-project confirm AskUserQuestion loop. Keeps responsibilities clean: the bin manages HTTP; the skill manages user intent.

Both subcommands reuse the existing api_call retry+backoff and the same PAT discipline (env only, never argv).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): list-orphans active-ref filtering + delete-project 404

6 new tests bringing the supabase-provision suite to 28:

list-orphans:
- Filters to gbrain-prefixed projects, excludes the active-ref derived from ~/.gbrain/config.json's pooler URL
- Treats all gbrain-prefixed projects as orphans when no config exists (first run on a new machine)
- Respects custom --name-prefix for users who named their brain something else

delete-project:
- Happy path sends DELETE /v1/projects/<ref> and returns {deleted_ref}
- 404 surfaces cleanly (exit 2, "404" in stderr)
- Missing <ref> positional rejected with exit 2

Uses per-test tmpdir HOME with a stubbed ~/.gbrain/config.json so active-ref extraction runs against deterministic fixtures.
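The active-ref extraction, pulling postgres.<ref> out of the URL's user portion, can be sketched with parameter expansion (hypothetical helper; the shipped parsing lives in list-orphans):

```shell
# Extract the project ref from a Session Pooler URL's userinfo:
#   postgresql://postgres.<ref>:<pw>@...  ->  <ref>
# Fails for direct-connection URLs, whose user is plain "postgres".
active_ref_from_url() {
  local u="${1#postgresql://}"
  u="${u%%@*}"   # postgres.<ref>:<pw>
  u="${u%%:*}"   # postgres.<ref>
  case "$u" in
    postgres.?*) printf '%s\n' "${u#postgres.}" ;;
    *) return 1 ;;
  esac
}
```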
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: regenerate setup-gbrain SKILL.md after main merge

* chore: bump version and changelog (v1.12.0.0)

Ships /setup-gbrain and its supporting infrastructure end-to-end: per-remote trust policy, installer with PATH-shadow guard, shared secret-read helper, structural URL verifier, Supabase Management API wrapper, /health GBrain dimension, secret-sink test harness. 100 new tests across 5 suites, all green. Three pre-existing test failures noted as P0 in TODOS.md.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: add USING_GBRAIN_WITH_GSTACK.md + update README for /setup-gbrain

README changes:
- Rewrote the "Cross-machine memory with GBrain sync" section into "GBrain — persistent knowledge for your coding agent." Covers the three /setup-gbrain paths (Supabase existing URL, auto-provision, PGLite local), MCP registration, per-remote trust triad, and the (still-separate) memory sync feature.
- Added /setup-gbrain row to the skills table pointing at the full guide.
- Added /setup-gbrain to both skill-list install snippets.
- Added USING_GBRAIN_WITH_GSTACK.md to the Docs table.
New doc (USING_GBRAIN_WITH_GSTACK.md):
- All three setup paths with trust-surface caveats
- MCP registration details (and honest Claude-Code-v1 scoping)
- Per-remote trust triad semantics + how to change a policy
- Switching engines (PGLite ↔ Supabase) via --switch
- GStack memory sync + its relationship to the gbrain knowledge base
- /setup-gbrain --cleanup-orphans for orphan Supabase projects
- Full command + flag reference, every bin helper, every env var
- Security model: what's enforced in code, what's enforced by the leak harness, and the honest limits of v1
- Troubleshooting: PATH shadowing, direct-connection URL reject, auto-provision timeout, stale lock, policy file hand-edits, migrate hang
- Why-this-design section explaining the non-obvious choices

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(brain-sync): secret scanner now catches Bearer-prefixed auth tokens in JSON

The bearer-token-json regex value charset was [A-Za-z0-9_./+=-]{16,}, which does NOT permit spaces. Real HTTP auth headers embed the scheme name with a literal space — "Bearer <token>" — so the value portion actually starts with "Bearer " and the existing regex couldn't match. Result: any JSON blob containing "authorization":"Bearer ..." would slip past the scanner and sync to the user's private brain repo with the bearer token inline.

Added optional (Bearer |Basic |Token )? prefix in front of the value charset. Now matches the common auth-scheme forms without broadening the matcher to tolerate arbitrary whitespace (which would false-positive on lots of benign JSON).

Verified against 5 positive cases (bearer-in-json, clean bearer, apikey no-prefix, token with Bearer, password no-prefix) + 3 negative cases (too-short tokens, non-secret field names like username, random JSON). This closes the P0 security regression first noticed during v1.12.0.0 /ship. brain-sync.test.ts now passes all 7 secret-scan fixtures.
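The before/after behavior is easy to demonstrate with ERE translations of the two patterns (the shipped scanner uses Python re with re.IGNORECASE; `[[:space:]]` stands in for \s, and `scan` is a helper invented for this sketch):

```shell
# ERE translations of the brain-sync bearer-token-json pattern, before
# and after the fix. Only the optional scheme-name prefix differs.
old_pat='"(authorization|api[_-]?key|apikey|token|secret|password)"[[:space:]]*:[[:space:]]*"[A-Za-z0-9_./+=-]{16,}"'
new_pat='"(authorization|api[_-]?key|apikey|token|secret|password)"[[:space:]]*:[[:space:]]*"(Bearer |Basic |Token )?[A-Za-z0-9_./+=-]{16,}"'
scan() { printf '%s' "$2" | grep -Eiq "$1"; }
```

The old pattern fails on "Bearer <token>" because the space after the scheme name sits outside the value charset, so `{16,}` can never start matching right after the opening quote.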
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: mock-gh integration tests for gstack-brain-init auto-create path

8 tests covering the gh-repo-create happy path that had zero coverage before. Existing brain-sync.test.ts always passes --remote <bare-url> to bypass gh entirely, so the interactive default ("press Enter, we'll run gh repo create for you") was shipping on trust.

Test strategy: write a bash stub for gh that records every call into a file, then run gstack-brain-init with that stub on PATH. Assertions verify: gh auth status is checked, gh repo create fires with the computed gstack-brain-<user> default name + --private + --source flags, fall-through to gh repo view when create reports already-exists, user-provided URL bypasses gh entirely, gh-not-on-path and gh-not-authed branches both prompt for URL, --remote flag short-circuits all gh calls, conflicting-remote re-runs exit 1 with a clear message.

No real GitHub, no live auth. Gate tier — runs on every commit.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(e2e): privacy-gate AskUserQuestion fires from preamble (periodic tier)

Two periodic-tier E2E tests exercising the preamble's privacy gate end-to-end via the Agent SDK + canUseTool. Previously uncovered:

- Positive: stages a fake gbrain on PATH + gbrain_sync_mode_prompted=false in config, runs a real skill, intercepts tool-use. Asserts the preamble fires a 3-option AskUserQuestion matching the canonical prose ("publish session memory" / "artifact" / "decline") and does NOT fire a second time in the same run (idempotency within session).
- Negative: same staging but prompted=true. Asserts the gate stays silent even with gbrain detected on the host.

Registered in test/helpers/touchfiles.ts as `brain-privacy-gate` (periodic) with dependency tracking on generate-brain-sync-block.ts, the three gstack-brain-* bins, gstack-config, and the Agent SDK runner.
Diff-based selection re-runs the E2E when any of those change. Cost: ~$0.30-$0.50 per run. Only fires under EVALS=1 EVALS_TIER=periodic; gate tier stays free.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: update TODOS for bearer-json fix + new brain-sync test coverage

Moves the bearer-json secret-scan regression from the P0 "pre-existing failures" block into the Completed section with full context on the fix, the mock-gh tests, the E2E privacy-gate tests, and the touchfile registration. Remaining P0s are the GSTACK_HOME config-isolation bug and the stale Opus 4.7 overlay pacing assertion, both unrelated.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(test): E2E privacy gate — ambient env + skill-file prompt

Two fixes to get the E2E actually running end-to-end (first attempt failed at the SDK auth step, second at the assertion step):

1. Don't pass an explicit `env:` object to runAgentSdkTest. The SDK's auth pipeline misses ANTHROPIC_API_KEY when env is supplied as an object (verified against the plan-mode-no-op test, which passes no env and auths cleanly). Mutate process.env before the call instead, and restore the originals in finally so other tests don't inherit the ambient mutation.

2. The "Run /learn with no arguments" user prompt was too narrow — the model reduced it to a direct action and skipped the preamble privacy-gate directives entirely, so zero AskUserQuestions fired. Mirror the plan-mode-no-op pattern: point the model at the skill file on disk and ask it to follow every preamble directive. Bumped maxTurns from 6 to 10 to give the preamble room to execute.

Verified both tests pass under `EVALS=1 EVALS_TIER=periodic bun test test/skill-e2e-brain-privacy-gate.test.ts` against a real ANTHROPIC_API_KEY. Cost per run: ~$0.30-$0.50 per test.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(CLAUDE.md): source ANTHROPIC/OPENAI keys from ~/.zshrc for paid evals

Conductor workspaces don't inherit the interactive shell env, so both API keys are absent from the default process env even though they're set in ~/.zshrc. Documents the source-from-zshrc pattern (grep + eval, never echo the value) plus the Agent SDK gotcha: do NOT pass env as an object to runAgentSdkTest — mutate process.env ambiently and restore in finally.

Discovered this during the brain-privacy-gate E2E. First run failed at SDK auth with 401; second failed because explicit env handoff bypassed the SDK's own auth routing. Fix pattern now codified so the next paid-eval session in a Conductor workspace doesn't hit the same two dead ends.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
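The source-from-zshrc pattern reduces to one guarded eval. A sketch (the rc-file path is parameterized here for testing; real usage targets ~/.zshrc, and nothing ever prints the key values):

```shell
# Pull only the two API-key export lines out of the rc file and eval
# them into the current shell. Grep narrows the eval to exactly the
# lines we expect, so nothing else in the rc file runs.
load_eval_keys() {
  local rcfile="${1:-$HOME/.zshrc}"
  eval "$(grep -E '^export (ANTHROPIC|OPENAI)_API_KEY=' "$rcfile" 2>/dev/null || true)"
}
```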
@@ -88,7 +88,12 @@ patterns = [
     ('pem-block', re.compile(r'-----BEGIN [A-Z ]{3,}-----')),
     ('jwt', re.compile(r'\beyJ[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}\b')),
     ('bearer-token-json',
-     re.compile(r'"(authorization|api[_-]?key|apikey|token|secret|password)"\s*:\s*"[A-Za-z0-9_./+=-]{16,}"',
+     # JSON-embedded auth headers. The optional Bearer/Basic/Token prefix
+     # matters: real auth values include a literal space after the scheme
+     # name, but the value charset below does not include spaces, so
+     # without the optional prefix every Bearer token in a JSON blob slips
+     # past the scanner.
+     re.compile(r'"(authorization|api[_-]?key|apikey|token|secret|password)"\s*:\s*"(Bearer |Basic |Token )?[A-Za-z0-9_./+=-]{16,}"',
      re.IGNORECASE)),
 ]
 text = sys.stdin.read()
Executable
+112
@@ -0,0 +1,112 @@
#!/usr/bin/env bash
# gstack-gbrain-detect — emit current gbrain/gstack-brain state as JSON.
#
# Usage:
#   gstack-gbrain-detect
#
# Output (always valid JSON, even when every check is false):
#   {
#     "gbrain_on_path": true|false,
#     "gbrain_version": "0.18.2" | null,
#     "gbrain_config_exists": true|false,
#     "gbrain_engine": "pglite"|"postgres" | null,
#     "gbrain_doctor_ok": true|false,
#     "gstack_brain_sync_mode": "off"|"artifacts-only"|"full",
#     "gstack_brain_git": true|false
#   }
#
# The /setup-gbrain skill reads this once at startup to decide which path
# branches are live and which steps can be skipped. Never modifies state;
# pure introspection. Exits 0 unless `jq` is missing.
#
# Env:
#   GSTACK_HOME — override ~/.gstack for gstack-brain-* state lookups.
set -euo pipefail

STATE_DIR="${GSTACK_HOME:-$HOME/.gstack}"
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
CONFIG_BIN="$SCRIPT_DIR/gstack-config"
GBRAIN_CONFIG="$HOME/.gbrain/config.json"

die() { echo "gstack-gbrain-detect: $*" >&2; exit 2; }

require_jq() {
  command -v jq >/dev/null 2>&1 || die "jq is required. Install with: brew install jq"
}
require_jq

# --- gbrain binary presence + version ---
gbrain_on_path=false
gbrain_version=null
if command -v gbrain >/dev/null 2>&1; then
  gbrain_on_path=true
  # Format versions as JSON strings; gbrain --version may print other chatter.
  v=$(gbrain --version 2>/dev/null | head -1 | tr -d '[:space:]' || true)
  if [ -n "$v" ]; then
    gbrain_version=$(jq -Rn --arg v "$v" '$v')
  fi
fi

# --- gbrain config file ---
gbrain_config_exists=false
gbrain_engine=null
if [ -f "$GBRAIN_CONFIG" ]; then
  gbrain_config_exists=true
  # Engine is defensively parsed; an invalid config returns null, not a crash.
  engine_raw=$(jq -r '.engine // empty' "$GBRAIN_CONFIG" 2>/dev/null || true)
  case "$engine_raw" in
    pglite|postgres) gbrain_engine=$(jq -Rn --arg e "$engine_raw" '$e') ;;
  esac
fi

# --- gbrain doctor health ---
# Doctor is wrapped in `timeout 5s` to match the /health D6 pattern and avoid
# the detect step hanging the skill when gbrain is broken or its DB is
# unreachable. Any nonzero exit or non-"ok"/"warnings" status → false.
gbrain_doctor_ok=false
if [ "$gbrain_on_path" = "true" ]; then
  # Use `timeout` if available; some minimal macs use gtimeout from coreutils.
  timeout_bin=""
  if command -v timeout >/dev/null 2>&1; then timeout_bin="timeout 5s"
  elif command -v gtimeout >/dev/null 2>&1; then timeout_bin="gtimeout 5s"
  fi
  if doctor_json=$(eval "$timeout_bin gbrain doctor --json" 2>/dev/null); then
    status=$(echo "$doctor_json" | jq -r '.status // empty' 2>/dev/null || true)
    case "$status" in
      ok|warnings) gbrain_doctor_ok=true ;;
    esac
  fi
fi

# --- gstack-brain-sync state (memory sync, separate from gbrain itself) ---
gstack_brain_sync_mode="off"
if [ -x "$CONFIG_BIN" ]; then
  mode=$("$CONFIG_BIN" get gbrain_sync_mode 2>/dev/null || true)
  case "$mode" in
    off|artifacts-only|full) gstack_brain_sync_mode="$mode" ;;
  esac
fi

gstack_brain_git=false
if [ -d "$STATE_DIR/.git" ]; then
  gstack_brain_git=true
fi

# Emit single-object JSON.
jq -n \
  --argjson on_path "$gbrain_on_path" \
  --argjson version "$gbrain_version" \
  --argjson config_exists "$gbrain_config_exists" \
  --argjson engine "$gbrain_engine" \
  --argjson doctor_ok "$gbrain_doctor_ok" \
  --arg sync_mode "$gstack_brain_sync_mode" \
  --argjson brain_git "$gstack_brain_git" \
  '{
    gbrain_on_path: $on_path,
    gbrain_version: $version,
    gbrain_config_exists: $config_exists,
    gbrain_engine: $engine,
    gbrain_doctor_ok: $doctor_ok,
    gstack_brain_sync_mode: $sync_mode,
    gstack_brain_git: $brain_git
  }'
Executable
+183
@@ -0,0 +1,183 @@
#!/usr/bin/env bash
# gstack-gbrain-install — install the gbrain CLI on a local Mac.
#
# Usage:
#   gstack-gbrain-install [--install-dir <dir>] [--pinned-commit <sha>] [--dry-run]
#
# D5 detect-first: before cloning anywhere, probe likely pre-existing
# locations (~/git/gbrain and ~/gbrain) and reuse a working clone if one
# exists. Falls back to a fresh clone of the pinned commit at ~/gbrain
# (override with GBRAIN_INSTALL_DIR or --install-dir).
#
# D19 PATH-shadowing: after `bun link`, compare `gbrain --version` output
# to the install-dir's package.json version. On mismatch, abort with an
# actionable error listing every gbrain on PATH. Never "silently fixes"
# PATH; setup skills should refuse broken environments.
#
# Prerequisites (checked before doing anything):
#   - bun (install: curl -fsSL https://bun.sh/install | bash)
#   - git
#   - network reachability to https://github.com
#
# The pinned commit is declared here rather than resolved dynamically so
# upgrades are explicit and reviewable. Update PINNED_COMMIT when gstack
# verifies compatibility with a new gbrain release.
#
# Env:
#   GBRAIN_INSTALL_DIR — override default install path (~/gbrain)
#
# Exit codes:
#   0 — success (or --dry-run printed the plan)
#   2 — prerequisite missing or invalid argument
#   3 — post-install validation failed (PATH shadow, broken binary, etc.)
set -euo pipefail

# --- defaults ---
PINNED_COMMIT="08b3698e90532b7b66c445e6b1d8cdfe71822802" # gbrain v0.18.2
PINNED_TAG="v0.18.2"
GBRAIN_REPO_URL="https://github.com/garrytan/gbrain.git"
DEFAULT_INSTALL_DIR="${GBRAIN_INSTALL_DIR:-$HOME/gbrain}"
INSTALL_DIR="$DEFAULT_INSTALL_DIR"
DRY_RUN=false
VALIDATE_ONLY=false

die() { echo "gstack-gbrain-install: $*" >&2; exit 2; }
fail() { echo "gstack-gbrain-install: $*" >&2; exit 3; }
log() { echo "gstack-gbrain-install: $*"; }

# --- parse args ---
while [ $# -gt 0 ]; do
  case "$1" in
    --install-dir) INSTALL_DIR="$2"; shift 2 ;;
    --pinned-commit) PINNED_COMMIT="$2"; PINNED_TAG=""; shift 2 ;;
    --dry-run) DRY_RUN=true; shift ;;
    --validate-only) VALIDATE_ONLY=true; shift ;;
    --help|-h) sed -n '2,30p' "$0" | sed 's/^# \{0,1\}//'; exit 0 ;;
    *) die "unknown flag: $1" ;;
  esac
done

# --- prerequisites ---
check_prereq() {
  local bin="$1"
  local hint="$2"
  if ! command -v "$bin" >/dev/null 2>&1; then
    fail "required tool '$bin' not found. $hint"
  fi
}

if ! $VALIDATE_ONLY; then
  check_prereq bun "Install: curl -fsSL https://bun.sh/install | bash"
  check_prereq git "Install: xcode-select --install (macOS) or your package manager"

  # GitHub reachability — fail fast if offline rather than hanging `git clone`.
  # --max-time 10, --head (no body), quiet. Status code 200-4xx means we reached
  # the server (even 404 is reachability proof).
  if ! curl -s --head --max-time 10 https://github.com >/dev/null 2>&1; then
    fail "cannot reach https://github.com. Check your network and try again."
  fi
fi

# --- D5 detect-first: probe common locations before cloning fresh ---
# Accept any directory that looks like a gbrain clone: has package.json
# with name "gbrain" and a `bin.gbrain` entry. Don't accept version mismatches
# here — we'll let bun link run and then D19-validate.
is_valid_clone() {
  local dir="$1"
  [ -d "$dir" ] || return 1
  [ -f "$dir/package.json" ] || return 1
  local name
  name=$(jq -r '.name // empty' "$dir/package.json" 2>/dev/null || true)
  [ "$name" = "gbrain" ] || return 1
  local bin
  bin=$(jq -r '.bin.gbrain // empty' "$dir/package.json" 2>/dev/null || true)
  [ -n "$bin" ] || return 1
  return 0
}

DETECTED_CLONE=""
if ! $VALIDATE_ONLY; then
  for candidate in "$HOME/git/gbrain" "$HOME/gbrain" "$INSTALL_DIR"; do
    if is_valid_clone "$candidate"; then
      DETECTED_CLONE="$candidate"
      break
    fi
  done
fi

if $VALIDATE_ONLY; then
  log "validate-only mode: skipping detect + clone + install + link"
elif [ -n "$DETECTED_CLONE" ]; then
  log "detected existing gbrain clone at $DETECTED_CLONE — reusing"
  INSTALL_DIR="$DETECTED_CLONE"
else
  # Fresh clone path.
  if $DRY_RUN; then
    log "DRY RUN: would clone $GBRAIN_REPO_URL @ $PINNED_COMMIT → $INSTALL_DIR"
    exit 0
  fi
  if [ -d "$INSTALL_DIR" ]; then
    fail "install dir $INSTALL_DIR exists but is not a valid gbrain clone. Remove it or pass --install-dir <other>."
  fi
  log "cloning $GBRAIN_REPO_URL → $INSTALL_DIR"
  git clone --quiet "$GBRAIN_REPO_URL" "$INSTALL_DIR"
  ( cd "$INSTALL_DIR" && git checkout --quiet "$PINNED_COMMIT" )
  log "pinned to $PINNED_COMMIT${PINNED_TAG:+ ($PINNED_TAG)}"
fi

if $DRY_RUN; then
  log "DRY RUN: would run bun install + bun link in $INSTALL_DIR"
  exit 0
fi

# --- install + link ---
if ! $VALIDATE_ONLY; then
  log "running bun install in $INSTALL_DIR"
|
||||
( cd "$INSTALL_DIR" && bun install --silent )
|
||||
log "running bun link in $INSTALL_DIR"
|
||||
( cd "$INSTALL_DIR" && bun link --silent )
|
||||
fi
|
||||
|
||||
# --- D19 PATH-shadowing validation ---
|
||||
# Read the version from the install-dir's package.json; compare to
|
||||
# `gbrain --version`. If they disagree, PATH is returning a DIFFERENT
|
||||
# gbrain than the one we just linked. Fail hard with remediation.
|
||||
expected_version=$(jq -r '.version // empty' "$INSTALL_DIR/package.json" 2>/dev/null || true)
|
||||
if [ -z "$expected_version" ]; then
|
||||
fail "cannot read version from $INSTALL_DIR/package.json (install may be broken)"
|
||||
fi
|
||||
|
||||
if ! command -v gbrain >/dev/null 2>&1; then
|
||||
fail "bun link completed but 'gbrain' is not on PATH. Ensure ~/.bun/bin is in your PATH."
|
||||
fi
|
||||
|
||||
actual_version=$(gbrain --version 2>/dev/null | head -1 | tr -d '[:space:]' || true)
|
||||
if [ -z "$actual_version" ]; then
|
||||
fail "gbrain is on PATH but 'gbrain --version' produced no output — the binary may be broken."
|
||||
fi
|
||||
|
||||
# Tolerate a leading "v" (gbrain may print either "0.18.2" or "v0.18.2").
|
||||
expected_norm="${expected_version#v}"
|
||||
actual_norm="${actual_version#v}"
|
||||
|
||||
if [ "$actual_norm" != "$expected_norm" ]; then
|
||||
echo "" >&2
|
||||
echo "gstack-gbrain-install: PATH SHADOWING DETECTED" >&2
|
||||
echo "" >&2
|
||||
echo " We just linked gbrain $expected_version from $INSTALL_DIR," >&2
|
||||
echo " but PATH is returning gbrain $actual_version." >&2
|
||||
echo "" >&2
|
||||
echo " All gbrain binaries on PATH:" >&2
|
||||
type -a gbrain 2>&1 | sed 's/^/ /' >&2 || true
|
||||
echo "" >&2
|
||||
echo " Fix one of the following, then re-run /setup-gbrain:" >&2
|
||||
echo " a) rm the shadowing binary: rm \$(which gbrain)" >&2
|
||||
echo " b) prepend ~/.bun/bin to PATH in your shell rc" >&2
|
||||
echo " c) point GBRAIN_INSTALL_DIR at the shadowing binary's install dir" >&2
|
||||
echo "" >&2
|
||||
exit 3
|
||||
fi
|
||||
|
||||
log "installed gbrain $actual_version from $INSTALL_DIR"
|
||||
echo ""
|
||||
echo "Next: gbrain init --pglite (or run /setup-gbrain for the full setup flow)"
|
||||
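The D19 comparison's leading-"v" tolerance can be exercised in isolation. The version strings below are illustrative stand-ins, not read from a real install:

```shell
# Sketch of the D19 version comparison: strip an optional leading "v"
# from both sides before comparing, so "v0.18.2" matches "0.18.2".
expected_version="v0.18.2"   # stand-in for the package.json value
actual_version="0.18.2"      # stand-in for `gbrain --version` output

expected_norm="${expected_version#v}"
actual_norm="${actual_version#v}"

if [ "$actual_norm" = "$expected_norm" ]; then
  echo "versions match: $actual_norm"
else
  echo "PATH shadowing suspected: $expected_norm vs $actual_norm"
fi
```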
@@ -0,0 +1,101 @@
# gstack-gbrain-lib.sh — shared helpers for setup-gbrain bin scripts.
#
# This file is NOT executable; source it:
#
#   . "$(dirname "$0")/gstack-gbrain-lib.sh"
#
# Provides:
#   read_secret_to_env <VARNAME> <prompt> [--echo-redacted <sed-expr>]
#     — Read a secret from stdin into the named env var without echoing
#       to the terminal. On SIGINT/SIGTERM/EXIT, restores terminal echo so
#       future keystrokes are visible. Optionally emits a redacted preview
#       of what was read so the user can visually confirm they pasted the
#       right thing.
#
#     stdin handling: when stdin is a TTY, stty -echo suppresses echo
#     while the user types. When stdin is piped (automated tests), the
#     stty calls are skipped — piping into `read` is already invisible.
#
#     Var name must match [A-Z_][A-Z0-9_]* to prevent injection via
#     `read -r "$varname"` expansion. Invalid names abort.
#
#     Exported after read so sub-processes inherit the secret. Caller
#     is responsible for `unset <VARNAME>` when done.
#
# Load-bearing for D3-eng (shared secret helper across PAT + URL paste),
# D10 (env-var handoff, never argv), D11 (PAT scope disclosure + SIGINT
# restore), D16 (pooler URL paste hygiene with redacted preview).

# _gstack_gbrain_validate_varname <name> — returns 0 if usable, 2 otherwise.
_gstack_gbrain_validate_varname() {
  local name="$1"
  case "$name" in
    [A-Z_][A-Z0-9_]*) return 0 ;;
    *) return 2 ;;
  esac
}

read_secret_to_env() {
  local varname="" prompt="" redact_expr=""
  # Parse leading positional args (varname, prompt), then optional flags.
  if [ $# -lt 2 ]; then
    echo "read_secret_to_env: usage: read_secret_to_env <VARNAME> <prompt> [--echo-redacted <sed-expr>]" >&2
    return 2
  fi
  varname="$1"; shift
  prompt="$1"; shift
  while [ $# -gt 0 ]; do
    case "$1" in
      --echo-redacted) redact_expr="$2"; shift 2 ;;
      *) echo "read_secret_to_env: unknown flag: $1" >&2; return 2 ;;
    esac
  done

  if ! _gstack_gbrain_validate_varname "$varname"; then
    echo "read_secret_to_env: invalid var name '$varname' (must match [A-Z_][A-Z0-9_]*)" >&2
    return 2
  fi

  # stty manipulation only makes sense when stdin is a terminal. In CI /
  # test / piped contexts we skip it — piped input doesn't echo anyway.
  local is_tty=false
  if [ -t 0 ]; then is_tty=true; fi

  if $is_tty; then
    # Save current stty state; restore on any exit path.
    local saved_stty
    saved_stty=$(stty -g 2>/dev/null || echo "")
    # shellcheck disable=SC2064
    trap "stty '$saved_stty' 2>/dev/null; printf '\n' >&2" INT TERM EXIT
    stty -echo 2>/dev/null || true
  fi

  # Prompt on stderr so the caller can capture stdout cleanly.
  printf '%s' "$prompt" >&2

  # Read one line from stdin. `read -r` returns nonzero on EOF-without-
  # newline but still populates `value` with whatever it saw — we want that
  # content, so don't clear on failure.
  local value=""
  IFS= read -r value || true

  if $is_tty; then
    stty "$saved_stty" 2>/dev/null || true
    trap - INT TERM EXIT
    printf '\n' >&2
  fi

  # Assign + export to the named variable.
  printf -v "$varname" '%s' "$value"
  # shellcheck disable=SC2163
  export "$varname"

  # Optional redacted preview after successful read.
  if [ -n "$redact_expr" ] && [ -n "$value" ]; then
    local preview
    preview=$(printf '%s' "$value" | sed "$redact_expr" 2>/dev/null || true)
    if [ -n "$preview" ]; then
      printf 'Got: %s\n' "$preview" >&2
    fi
  fi
}
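The assign-by-name core of read_secret_to_env can be sketched on its own: validate the variable name, then assign via `printf -v` (a bash builtin, no eval involved). DEMO_SECRET and the value are hypothetical, purely for illustration:

```shell
# Sketch of the validate-then-assign pattern used by read_secret_to_env.
varname="DEMO_SECRET"          # hypothetical variable name
case "$varname" in
  [A-Z_][A-Z0-9_]*) : ;;       # same glob-style shape check as the library
  *) echo "invalid var name" >&2; exit 2 ;;
esac
printf -v "$varname" '%s' "hunter2"   # illustrative value, not a real secret
export "$varname"
echo "$DEMO_SECRET"
```

Note `printf -v` writes directly to the named variable, so the secret never appears in a command line that `ps` or shell history could capture.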
Executable
+227
@@ -0,0 +1,227 @@
#!/usr/bin/env bash
# gstack-gbrain-repo-policy — per-remote trust tier for gbrain repo ingest.
#
# Usage:
#   gstack-gbrain-repo-policy get [<remote-url>]
#     Print the tier for the given remote, or the current repo's origin
#     if no URL is passed. Exits 0 with one of: read-write, read-only,
#     deny, unset.
#
#   gstack-gbrain-repo-policy set <remote-url> <read-write|read-only|deny>
#     Persist a tier for the given remote. Exits 0 on success.
#
#   gstack-gbrain-repo-policy list
#     Print every entry as "<key>\t<tier>", sorted by key.
#
#   gstack-gbrain-repo-policy normalize <url>
#     Print the normalized (canonical) key for a given remote URL.
#     Use this when other skills or tests need the same collapsing logic.
#
#   gstack-gbrain-repo-policy --help
#
# Storage:
#   ~/.gstack/gbrain-repo-policy.json, mode 0600.
#
# File format:
#   {
#     "_schema_version": 2,
#     "github.com/foo/bar": "read-write",
#     "github.com/baz/qux": "deny"
#   }
#
# Tier semantics:
#   read-write — agent may search AND write new pages from this repo.
#   read-only  — agent may search but NEVER write pages from this repo.
#                (Enforced at the caller level; this binary just stores the
#                decision.)
#   deny       — no gbrain interaction at all.
#
# Legacy migration:
#   On any read of a file missing `_schema_version` (or with version < 2),
#   legacy `allow` values are atomically rewritten to `read-write`, and
#   `_schema_version: 2` is added. A log line is emitted on stderr when the
#   migration actually changes anything. Idempotent: running twice is safe.
#
# Env:
#   GSTACK_HOME — override the ~/.gstack state directory (aligns with other
#   gstack-* bins; used heavily in tests).
set -euo pipefail

STATE_DIR="${GSTACK_HOME:-$HOME/.gstack}"
POLICY_FILE="$STATE_DIR/gbrain-repo-policy.json"
SCHEMA_VERSION=2

die() { echo "gstack-gbrain-repo-policy: $*" >&2; exit 2; }

require_jq() {
  if ! command -v jq >/dev/null 2>&1; then
    die "jq is required. Install with: brew install jq"
  fi
}

# normalize <url> — canonical form: lowercase host + path, no protocol,
# no userinfo, no trailing .git or /. SSH shorthand (git@host:path) collapses
# to the same key as https://host/path.
normalize() {
  local url="$1"
  [ -z "$url" ] && { echo ""; return 0; }
  # Strip protocol://
  url="${url#*://}"
  # Strip userinfo (git@, user:password@, etc.) — everything up to and
  # including the first @, iff an @ appears before the first / or :.
  case "$url" in
    *@*)
      local before_at="${url%%@*}"
      case "$before_at" in
        */*|*:*) : ;; # @ is in the path, not userinfo — leave it
        *) url="${url#*@}" ;;
      esac
      ;;
  esac
  # SSH shorthand: github.com:foo/bar → github.com/foo/bar. Only when the
  # hostname part (before the first /) contains a colon. sed is clearer than
  # bash's `${var/:/\/}`, which has tricky escaping.
  local head="${url%%/*}"
  case "$head" in
    *:*) url=$(printf '%s' "$url" | sed 's|:|/|') ;;
  esac
  # Strip trailing .git
  url="${url%.git}"
  # Strip trailing /
  url="${url%/}"
  # Lowercase the whole thing. GitHub and most hosts are case-insensitive on
  # paths anyway; collapsing avoids duplicate entries for "Foo/Bar" vs
  # "foo/bar".
  printf '%s\n' "$url" | tr '[:upper:]' '[:lower:]'
}
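The normalization steps above can be walked through on one SSH-shorthand remote, showing why `git@github.com:Foo/Bar.git` and `https://github.com/foo/bar` collapse to the same key. This sketch strips userinfo unconditionally (the conditional before-@ check is elided for brevity):

```shell
# Walk one remote URL through the normalize() steps by hand.
url="git@github.com:Foo/Bar.git"
url="${url#*://}"                # no "://" present, so a no-op here
url="${url#*@}"                  # strip userinfo → github.com:Foo/Bar.git
head="${url%%/*}"
case "$head" in
  *:*) url=$(printf '%s' "$url" | sed 's|:|/|') ;;  # host:path → host/path
esac
url="${url%.git}"                # strip trailing .git
url="${url%/}"                   # strip trailing /
key=$(printf '%s\n' "$url" | tr '[:upper:]' '[:lower:]')
echo "$key"
```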
# ensure_file — create the policy file if missing, migrate if legacy.
# Emits the migration log line on stderr exactly once per run when a
# migration actually rewrites values.
ensure_file() {
  require_jq
  mkdir -p "$STATE_DIR"

  if [ ! -f "$POLICY_FILE" ]; then
    # Fresh file — just the schema version, no entries.
    local tmp
    tmp=$(mktemp "$POLICY_FILE.tmp.XXXXXX")
    printf '{"_schema_version":%d}\n' "$SCHEMA_VERSION" > "$tmp"
    mv "$tmp" "$POLICY_FILE"
    chmod 0600 "$POLICY_FILE"
    return 0
  fi

  # File exists — validate, migrate if needed.
  local raw
  if ! raw=$(cat "$POLICY_FILE" 2>/dev/null); then
    die "Cannot read $POLICY_FILE"
  fi

  # Corrupt JSON → quarantine and start fresh.
  if ! echo "$raw" | jq empty 2>/dev/null; then
    local ts
    ts=$(date +%Y%m%d-%H%M%S)
    local quarantine="$POLICY_FILE.corrupt-$ts"
    mv "$POLICY_FILE" "$quarantine"
    echo "gstack-gbrain-repo-policy: corrupt policy file quarantined to $quarantine; starting fresh" >&2
    local tmp
    tmp=$(mktemp "$POLICY_FILE.tmp.XXXXXX")
    printf '{"_schema_version":%d}\n' "$SCHEMA_VERSION" > "$tmp"
    mv "$tmp" "$POLICY_FILE"
    chmod 0600 "$POLICY_FILE"
    return 0
  fi

  # Check schema version.
  local version
  version=$(echo "$raw" | jq -r '._schema_version // 0')
  if [ "$version" -ge "$SCHEMA_VERSION" ]; then
    return 0
  fi

  # Migrate: rename `allow` → `read-write`, add _schema_version.
  local allow_count migrated
  allow_count=$(echo "$raw" | jq '[to_entries[] | select(.key != "_schema_version" and .value == "allow")] | length')
  migrated=$(echo "$raw" | jq --argjson v "$SCHEMA_VERSION" '
    (to_entries | map(
      if .key == "_schema_version" then empty
      elif .value == "allow" then .value = "read-write"
      else .
      end
    ) | from_entries) + {_schema_version: $v}
  ')
  local tmp
  tmp=$(mktemp "$POLICY_FILE.tmp.XXXXXX")
  printf '%s\n' "$migrated" > "$tmp"
  mv "$tmp" "$POLICY_FILE"
  chmod 0600 "$POLICY_FILE"
  if [ "$allow_count" -gt 0 ]; then
    echo "[gstack-gbrain-repo-policy] Migrated $allow_count legacy allow entries to read-write" >&2
  fi
}

cmd_get() {
  local url="${1:-}"
  if [ -z "$url" ]; then
    url=$(git remote get-url origin 2>/dev/null || true)
    if [ -z "$url" ]; then
      echo "unset"
      return 0
    fi
  fi
  local key
  key=$(normalize "$url")
  if [ -z "$key" ]; then
    echo "unset"
    return 0
  fi
  ensure_file
  jq -r --arg key "$key" '.[$key] // "unset"' "$POLICY_FILE"
}

cmd_set() {
  local url="${1:-}"
  local tier="${2:-}"
  [ -z "$url" ] && die "usage: set <remote-url> <tier>"
  [ -z "$tier" ] && die "usage: set <remote-url> <tier>"
  case "$tier" in
    read-write|read-only|deny) ;;
    *) die "invalid tier '$tier' (must be one of: read-write, read-only, deny)" ;;
  esac
  local key
  key=$(normalize "$url")
  [ -z "$key" ] && die "cannot normalize remote URL: $url"
  ensure_file
  local tmp
  tmp=$(mktemp "$POLICY_FILE.tmp.XXXXXX")
  jq --arg key "$key" --arg tier "$tier" '.[$key] = $tier' "$POLICY_FILE" > "$tmp"
  mv "$tmp" "$POLICY_FILE"
  chmod 0600 "$POLICY_FILE"
  echo "Set $key → $tier"
}

cmd_list() {
  if [ ! -f "$POLICY_FILE" ]; then
    # Nothing to list; don't create the file just for a read.
    return 0
  fi
  ensure_file
  jq -r 'to_entries[] | select(.key != "_schema_version") | "\(.key)\t\(.value)"' "$POLICY_FILE" | sort
}

cmd_normalize() {
  local url="${1:-}"
  [ -z "$url" ] && die "usage: normalize <url>"
  normalize "$url"
}

case "${1:-}" in
  get) shift; cmd_get "$@" ;;
  set) shift; cmd_set "$@" ;;
  list) shift; cmd_list "$@" ;;
  normalize) shift; cmd_normalize "$@" ;;
  --help|-h|help) sed -n '2,47p' "$0" | sed 's/^# \{0,1\}//' ;;
  "") die "usage: gstack-gbrain-repo-policy {get|set|list|normalize|--help}" ;;
  *) die "unknown subcommand: $1" ;;
esac
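The D3 tier gate in cmd_set is a plain case statement; in isolation it looks like this (the tier value here is just an example input):

```shell
# Sketch of the cmd_set tier gate: only the three D3 tiers pass.
tier="read-only"   # try also read-write, deny, or an invalid value
case "$tier" in
  read-write|read-only|deny) result="accepted" ;;
  *) result="rejected" ;;
esac
echo "$result: $tier"
```

Anything outside the triad falls to the `*` arm, which in the real script is a `die` with exit code 2.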
Executable
+447
@@ -0,0 +1,447 @@
#!/usr/bin/env bash
# gstack-gbrain-supabase-provision — Supabase Management API wrapper for
# /setup-gbrain path 2a (auto-provision).
#
# Subcommands:
#   list-orgs
#     GET /v1/organizations. Output: {"orgs": [{"slug","name"}, ...]}
#
#   create <name> <region> <org-slug>
#     POST /v1/projects with {name, db_pass, organization_slug, region}.
#     db_pass must be in the DB_PASS env var (never argv — the D8 grep test
#     enforces this). Output: {"ref","name","region","organization_slug","status"}.
#
#     NOTE: does NOT send a `plan` field. Per the verified Supabase Management
#     API OpenAPI spec, the `plan` field is deprecated at the project level
#     — subscription tier is an org-level decision (D17 updated).
#
#   wait <ref> [--timeout <seconds>]
#     Poll GET /v1/projects/{ref} every 5s until status=ACTIVE_HEALTHY,
#     or fail on terminal states (INIT_FAILED, REMOVED). Default timeout
#     180s. Output on success: {"ref","status","elapsed_s"}.
#
#   pooler-url <ref>
#     GET /v1/projects/{ref}/config/database/pooler, construct the full
#     Session Pooler URL using DB_PASS from env (the API response's
#     connection_string is typically templated with [PASSWORD] rather than
#     the real value — we build from db_user/db_host/db_port/db_name instead).
#     Output: {"ref","pooler_url"}.
#
#   list-orphans [--name-prefix <str>]
#     GET /v1/projects. Filter to projects whose name starts with --name-prefix
#     (default "gbrain") AND whose ref does NOT match the one in the local
#     active ~/.gbrain/config.json pooler URL. Those are the gbrain-shaped
#     projects that aren't pointed at by a working local config — candidates
#     for /setup-gbrain --cleanup-orphans.
#     Output: {"active_ref","orphans":[{"ref","name","created_at","region"}, ...]}.
#
#   delete-project <ref>
#     DELETE /v1/projects/{ref}. Destructive, one-way — callers must
#     double-confirm before invoking. This bin performs NO confirmation
#     prompt; the skill's UI layer owns that responsibility.
#     Output: {"deleted_ref"}.
#
# Secrets discipline (D8, D10, D11):
#   - SUPABASE_ACCESS_TOKEN is read from env; never accepted as argv.
#   - DB_PASS (for `create` and `pooler-url`) is read from env; never argv.
#   - Forbidden strings (enforced by the skill-validation grep test):
#     --insecure, -k (curl), NODE_TLS_REJECT_UNAUTHORIZED
#   - `set +x` by default — debug tracing requires explicit opt-in around
#     non-secret lines.
#
# Env:
#   SUPABASE_ACCESS_TOKEN — PAT for auth (required on all subcommands)
#   DB_PASS — database password (required for create + pooler-url)
#   SUPABASE_API_BASE — override the API host (tests point this at a
#     local mock server). Default: https://api.supabase.com
#
# Exit codes:
#   0 — success
#   2 — usage / invalid input
#   3 — auth failure (401/403) — retry with fresh PAT
#   4 — quota / billing (402) — user action needed
#   5 — conflict (409) — duplicate name, user action needed
#   6 — timeout (wait subcommand hit its deadline)
#   7 — terminal failure state from Supabase (INIT_FAILED, REMOVED)
#   8 — network / 5xx after retries
set +x # Defensive: never trace secrets in this helper.
set -euo pipefail

SUPABASE_API_BASE="${SUPABASE_API_BASE:-https://api.supabase.com}"
API_VERSION="v1"
DEFAULT_WAIT_TIMEOUT=180
POLL_INTERVAL=5
CURL_TIMEOUT=30

die()          { echo "gstack-gbrain-supabase-provision: $*" >&2; exit 2; }
die_auth()     { echo "gstack-gbrain-supabase-provision: $*" >&2; exit 3; }
die_quota()    { echo "gstack-gbrain-supabase-provision: $*" >&2; exit 4; }
die_conflict() { echo "gstack-gbrain-supabase-provision: $*" >&2; exit 5; }
die_net()      { echo "gstack-gbrain-supabase-provision: $*" >&2; exit 8; }

require_jq() {
  command -v jq >/dev/null 2>&1 || die "jq is required. Install with: brew install jq"
}
require_curl() {
  command -v curl >/dev/null 2>&1 || die "curl is required"
}

require_pat() {
  if [ -z "${SUPABASE_ACCESS_TOKEN:-}" ]; then
    die_auth "SUPABASE_ACCESS_TOKEN is not set. Generate a PAT at https://supabase.com/dashboard/account/tokens"
  fi
}

require_db_pass() {
  if [ -z "${DB_PASS:-}" ]; then
    die "DB_PASS env var is required (never passed as argv — that leaks via ps/history)"
  fi
}

# api_call <method> <path> [<json-body-file>]
# Emits the response body on stdout; callers only ever see a 2xx body.
# Error dispatch happens here: 401/403 → exit 3, 402 → exit 4, 409 → exit 5,
# 429 + 5xx → retry with exponential backoff up to 3 attempts.
#
# Because a bash function can't return multiple values, curl writes the
# response body to one tmpfile (and the HTTP status to another) and we
# read them back; both are cleaned up by the RETURN trap.
api_call() {
  local method="$1"
  local apipath="$2"
  local body_file="${3:-}"

  local url="$SUPABASE_API_BASE/$API_VERSION/$apipath"
  local body_tmp
  body_tmp=$(mktemp)
  local status_tmp
  status_tmp=$(mktemp)
  # shellcheck disable=SC2064
  trap "rm -f '$body_tmp' '$status_tmp'" RETURN

  local attempt=0
  local max_attempts=3
  local backoff=2
  while : ; do
    attempt=$((attempt + 1))
    local curl_args=(
      --silent
      --show-error
      --max-time "$CURL_TIMEOUT"
      -o "$body_tmp"
      -w "%{http_code}"
      -X "$method"
      -H "Authorization: Bearer $SUPABASE_ACCESS_TOKEN"
      -H "Accept: application/json"
      -H "Content-Type: application/json"
      -H "User-Agent: gstack-gbrain-supabase-provision"
    )
    if [ -n "$body_file" ]; then
      curl_args+=(--data-binary "@$body_file")
    fi
    local status
    if ! status=$(curl "${curl_args[@]}" "$url" 2>/dev/null); then
      # curl itself failed (network, timeout, etc.). Retry.
      if [ "$attempt" -ge "$max_attempts" ]; then
        die_net "network failure calling $method $apipath after $attempt attempts"
      fi
      sleep "$backoff"
      backoff=$((backoff * 2))
      continue
    fi

    case "$status" in
      2??)
        cat "$body_tmp"
        printf '%s' "$status" > "$status_tmp"
        return 0
        ;;
      401)
        die_auth "401 Unauthorized — your PAT is invalid or expired. Re-generate at https://supabase.com/dashboard/account/tokens"
        ;;
      403)
        die_auth "403 Forbidden — your PAT lacks permission for $method $apipath. Regenerate with All Access scope."
        ;;
      402)
        die_quota "402 Payment Required — Supabase project/organization quota exceeded. See https://supabase.com/dashboard"
        ;;
      409)
        die_conflict "409 Conflict on $method $apipath — likely a duplicate project name. Pick a different name and re-run."
        ;;
      429|5??)
        if [ "$attempt" -ge "$max_attempts" ]; then
          die_net "$status after $attempt attempts on $method $apipath"
        fi
        sleep "$backoff"
        backoff=$((backoff * 2))
        continue
        ;;
      *)
        # 400, 404, etc. — surface the error body for debugging.
        local err
        err=$(jq -r '.message // .error // empty' "$body_tmp" 2>/dev/null || true)
        if [ -n "$err" ]; then
          die "HTTP $status from $method $apipath: $err"
        else
          die "HTTP $status from $method $apipath (no error message in response)"
        fi
        ;;
    esac
  done
}
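The retry schedule api_call uses for 429/5xx responses is a doubling backoff starting at 2s, capped by max_attempts=3. The arithmetic in isolation (in the real loop, no sleep follows the final failed attempt):

```shell
# Compute the delay that would precede each retry attempt.
attempt=0
max_attempts=3
backoff=2
schedule=""
while [ "$attempt" -lt "$max_attempts" ]; do
  attempt=$((attempt + 1))
  schedule="$schedule ${backoff}s"
  backoff=$((backoff * 2))
done
schedule="${schedule# }"   # trim the leading space
echo "attempt delays: $schedule"
```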
|
||||
cmd_list_orgs() {
|
||||
local json_mode=false
|
||||
while [ $# -gt 0 ]; do
|
||||
case "$1" in
|
||||
--json) json_mode=true; shift ;;
|
||||
*) die "list-orgs: unknown flag: $1" ;;
|
||||
esac
|
||||
done
|
||||
|
||||
require_jq; require_curl; require_pat
|
||||
local resp
|
||||
resp=$(api_call GET organizations)
|
||||
if $json_mode; then
|
||||
printf '%s' "$resp" | jq '{orgs: map({slug: .slug, name: .name})}'
|
||||
else
|
||||
printf '%s' "$resp" | jq -r '.[] | "\(.slug)\t\(.name)"'
|
||||
fi
|
||||
}
|
||||
|
||||
cmd_create() {
|
||||
local name="" region="" org_slug=""
|
||||
local json_mode=false
|
||||
local instance_size=""
|
||||
while [ $# -gt 0 ]; do
|
||||
case "$1" in
|
||||
--json) json_mode=true; shift ;;
|
||||
--instance-size) instance_size="$2"; shift 2 ;;
|
||||
--*) die "create: unknown flag: $1" ;;
|
||||
*)
|
||||
if [ -z "$name" ]; then name="$1"
|
||||
elif [ -z "$region" ]; then region="$1"
|
||||
elif [ -z "$org_slug" ]; then org_slug="$1"
|
||||
else die "create: too many positional arguments"
|
||||
fi
|
||||
shift
|
||||
;;
|
||||
esac
|
||||
done
|
||||
[ -z "$name" ] && die "create: missing <name>"
|
||||
[ -z "$region" ] && die "create: missing <region>"
|
||||
[ -z "$org_slug" ] && die "create: missing <org-slug>"
|
||||
|
||||
require_jq; require_curl; require_pat; require_db_pass
|
||||
|
||||
local body_file
|
||||
body_file=$(mktemp)
|
||||
# shellcheck disable=SC2064
|
||||
trap "rm -f '$body_file'" RETURN
|
||||
if [ -n "$instance_size" ]; then
|
||||
jq -n \
|
||||
--arg name "$name" \
|
||||
--arg db_pass "$DB_PASS" \
|
||||
--arg organization_slug "$org_slug" \
|
||||
--arg region "$region" \
|
||||
--arg desired_instance_size "$instance_size" \
|
||||
'{name: $name, db_pass: $db_pass, organization_slug: $organization_slug, region: $region, desired_instance_size: $desired_instance_size}' \
|
||||
> "$body_file"
|
||||
else
|
||||
jq -n \
|
||||
--arg name "$name" \
|
||||
--arg db_pass "$DB_PASS" \
|
||||
--arg organization_slug "$org_slug" \
|
||||
--arg region "$region" \
|
||||
'{name: $name, db_pass: $db_pass, organization_slug: $organization_slug, region: $region}' \
|
||||
> "$body_file"
|
||||
fi
|
||||
|
||||
local resp
|
||||
resp=$(api_call POST projects "$body_file")
|
||||
if $json_mode; then
|
||||
printf '%s' "$resp" | jq '{ref, name, region, organization_slug, status}'
|
||||
else
|
||||
printf '%s' "$resp" | jq -r '"ref=\(.ref) status=\(.status) region=\(.region)"'
|
||||
fi
|
||||
}
|
||||
|
||||
cmd_wait() {
|
||||
local ref="" timeout="$DEFAULT_WAIT_TIMEOUT"
|
||||
local json_mode=false
|
||||
while [ $# -gt 0 ]; do
|
||||
case "$1" in
|
||||
--timeout) timeout="$2"; shift 2 ;;
|
||||
--json) json_mode=true; shift ;;
|
||||
--*) die "wait: unknown flag: $1" ;;
|
||||
*) ref="$1"; shift ;;
|
||||
esac
|
||||
done
|
||||
[ -z "$ref" ] && die "wait: missing <ref>"
|
||||
|
||||
require_jq; require_curl; require_pat
|
||||
|
||||
local elapsed=0
|
||||
while : ; do
|
||||
local resp
|
||||
resp=$(api_call GET "projects/$ref")
|
||||
local status
|
||||
status=$(printf '%s' "$resp" | jq -r '.status // "UNKNOWN"')
|
||||
case "$status" in
|
||||
ACTIVE_HEALTHY)
|
||||
if $json_mode; then
|
||||
jq -n --arg ref "$ref" --arg status "$status" --argjson elapsed "$elapsed" \
|
||||
'{ref: $ref, status: $status, elapsed_s: $elapsed}'
|
||||
else
|
||||
echo "ready ref=$ref status=$status elapsed_s=$elapsed"
|
||||
fi
|
||||
return 0
|
||||
;;
|
||||
INIT_FAILED|REMOVED|RESTORE_FAILED|PAUSE_FAILED)
|
||||
echo "gstack-gbrain-supabase-provision: project $ref reached terminal failure state '$status'" >&2
|
||||
exit 7
|
||||
;;
|
||||
COMING_UP|INACTIVE|ACTIVE_UNHEALTHY|UNKNOWN|RESTORING|UPGRADING|PAUSING|RESTARTING|RESIZING|GOING_DOWN)
|
||||
# Still provisioning — keep polling.
|
||||
;;
|
||||
*)
|
||||
# Unexpected status from Supabase. Log but keep polling.
|
||||
echo "gstack-gbrain-supabase-provision: unexpected status '$status' — continuing to poll" >&2
|
||||
;;
|
||||
esac
|
||||
|
||||
if [ "$elapsed" -ge "$timeout" ]; then
|
||||
echo "gstack-gbrain-supabase-provision: wait timed out after ${timeout}s (last status: $status)" >&2
|
||||
echo "gstack-gbrain-supabase-provision: re-run with /setup-gbrain --resume-provision $ref" >&2
|
||||
exit 6
|
||||
fi
|
||||
sleep "$POLL_INTERVAL"
|
||||
elapsed=$((elapsed + POLL_INTERVAL))
|
||||
done
|
||||
}
|
||||
|
||||
cmd_pooler_url() {
|
||||
local ref=""
|
||||
local json_mode=false
|
||||
while [ $# -gt 0 ]; do
|
||||
case "$1" in
|
||||
--json) json_mode=true; shift ;;
|
||||
--*) die "pooler-url: unknown flag: $1" ;;
|
||||
*) ref="$1"; shift ;;
|
||||
esac
|
||||
done
|
||||
[ -z "$ref" ] && die "pooler-url: missing <ref>"
|
||||
|
||||
require_jq; require_curl; require_pat; require_db_pass
|
||||
|
||||
local resp
|
||||
resp=$(api_call GET "projects/$ref/config/database/pooler")
|
||||
|
||||
# Prefer the singular Session Pooler config when Supabase returns an
|
||||
# array (response shape can vary by project state). Fall back to the
|
||||
# first PRIMARY entry if no "session" pool_mode is present.
|
||||
local db_user db_host db_port db_name
|
||||
local first_or_session
|
||||
if printf '%s' "$resp" | jq -e 'type == "array"' >/dev/null 2>&1; then
|
||||
first_or_session=$(printf '%s' "$resp" | jq '[.[] | select(.pool_mode == "session")][0] // .[0]')
|
||||
else
|
||||
first_or_session="$resp"
|
||||
fi
|
||||
|
||||
db_user=$(printf '%s' "$first_or_session" | jq -r '.db_user // empty')
|
||||
db_host=$(printf '%s' "$first_or_session" | jq -r '.db_host // empty')
|
||||
  db_port=$(printf '%s' "$first_or_session" | jq -r '.db_port // empty')
  db_name=$(printf '%s' "$first_or_session" | jq -r '.db_name // empty')

  if [ -z "$db_user" ] || [ -z "$db_host" ] || [ -z "$db_port" ] || [ -z "$db_name" ]; then
    die "pooler-url: missing pooler config fields (db_user/db_host/db_port/db_name); re-poll or check project state"
  fi

  local url="postgresql://${db_user}:${DB_PASS}@${db_host}:${db_port}/${db_name}"

  if $json_mode; then
    jq -n --arg ref "$ref" --arg pooler_url "$url" '{ref: $ref, pooler_url: $pooler_url}'
  else
    # Non-JSON mode prints the URL; callers capturing it into a variable
    # keep it in process memory only.
    echo "$url"
  fi
}

cmd_list_orphans() {
  local name_prefix="gbrain"
  local json_mode=false
  while [ $# -gt 0 ]; do
    case "$1" in
      --name-prefix) name_prefix="$2"; shift 2 ;;
      --json) json_mode=true; shift ;;
      --*) die "list-orphans: unknown flag: $1" ;;
      *) die "list-orphans: unexpected arg: $1" ;;
    esac
  done

  require_jq; require_curl; require_pat
  local all
  all=$(api_call GET projects)

  # Extract the active brain's ref from ~/.gbrain/config.json if present.
  # Pooler URL format: postgresql://postgres.<ref>:<pw>@...
  local active_ref="null"
  local gbrain_cfg="$HOME/.gbrain/config.json"
  if [ -f "$gbrain_cfg" ]; then
    local url
    url=$(jq -r '.database_url // empty' "$gbrain_cfg" 2>/dev/null || true)
    if [ -n "$url" ]; then
      # Extract user portion before the colon: postgresql://USER:pw@...
      local user
      user=$(printf '%s' "$url" | sed -E 's|^[a-z]+://([^:]+):.*$|\1|')
      # User format: postgres.<ref> — pull ref suffix
      case "$user" in
        postgres.*)
          local ref="${user#postgres.}"
          active_ref=$(jq -Rn --arg r "$ref" '$r')
          ;;
      esac
    fi
  fi

  local orphans
  orphans=$(printf '%s' "$all" | jq \
    --arg prefix "$name_prefix" \
    --argjson active "$active_ref" \
    '[.[]
      | select(.name | startswith($prefix))
      | select(.ref != $active)
      | {ref: .ref, name: .name, created_at: .created_at, region: .region}]')

  jq -n --argjson active "$active_ref" --argjson orphans "$orphans" \
    '{active_ref: $active, orphans: $orphans}'
}

cmd_delete_project() {
  local ref=""
  local json_mode=false
  while [ $# -gt 0 ]; do
    case "$1" in
      --json) json_mode=true; shift ;;
      --*) die "delete-project: unknown flag: $1" ;;
      *) ref="$1"; shift ;;
    esac
  done
  [ -z "$ref" ] && die "delete-project: missing <ref>"

  require_jq; require_curl; require_pat
  api_call DELETE "projects/$ref" >/dev/null
  jq -n --arg ref "$ref" '{deleted_ref: $ref}'
}

case "${1:-}" in
  list-orgs) shift; cmd_list_orgs "$@" ;;
  create) shift; cmd_create "$@" ;;
  wait) shift; cmd_wait "$@" ;;
  pooler-url) shift; cmd_pooler_url "$@" ;;
  list-orphans) shift; cmd_list_orphans "$@" ;;
  delete-project) shift; cmd_delete_project "$@" ;;
  --help|-h|help) sed -n '2,80p' "$0" | sed 's/^# \{0,1\}//' ;;
  "") die "usage: gstack-gbrain-supabase-provision {list-orgs|create|wait|pooler-url|list-orphans|delete-project|--help}" ;;
  *) die "unknown subcommand: $1" ;;
esac
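For reference, the active-ref extraction used by `cmd_list_orphans` can be exercised on its own. A minimal sketch with a hypothetical pooler URL and a made-up 20-char ref — sed pulls the userinfo name, then parameter expansion strips the `postgres.` prefix:

```shell
# Hypothetical database_url; same sed + parameter-expansion steps as the script.
url='postgresql://postgres.abcdefghij1234567890:pw@aws-0-eu-central-1.pooler.supabase.com:6543/postgres'
user=$(printf '%s' "$url" | sed -E 's|^[a-z]+://([^:]+):.*$|\1|')  # userinfo name before the colon
ref="${user#postgres.}"                                            # strip the postgres. prefix
echo "$ref"
# → abcdefghij1234567890
```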
gstack-gbrain-supabase-verify (new executable file)
@@ -0,0 +1,126 @@
#!/usr/bin/env bash
# gstack-gbrain-supabase-verify — structural check on a Supabase Session
# Pooler URL before handing it to `gbrain init`.
#
# Usage:
#   gstack-gbrain-supabase-verify <url>
#   echo "<url>" | gstack-gbrain-supabase-verify -
#
# Accepts ONLY Session Pooler URLs (port 6543, host *.pooler.supabase.com).
# Rejects direct-connection URLs (db.*.supabase.co:5432) since those are
# IPv6-only and fail in many environments — gbrain's init wizard warns
# about this at init.ts:150-158.
#
# Canonical shape (per gbrain init.ts:266):
#   postgresql://postgres.<ref>:<password>@aws-0-<region>.pooler.supabase.com:6543/postgres
#
# Exit codes:
#   0 — URL passes structural check
#   2 — invalid format (bad scheme, port, host, userinfo, or empty password)
#   3 — direct-connection URL rejected (common mistake, special-cased for UX)
#
# The verifier never makes a network call; it is purely local string
# matching. Whether the URL actually works (database up, password
# correct, host reachable) is gbrain's problem at init time.
#
# Reads URL from:
#   1. argv[1] if provided and not "-"
#   2. stdin if argv[1] is "-" or missing
#
# Never echoes the URL to stderr (it contains a password). Error messages
# refer to "the URL" generically.
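# Example (hypothetical 20-char ref and password):
#   gstack-gbrain-supabase-verify \
#     'postgresql://postgres.abcdefghijklmnopqrst:pw@aws-0-us-east-1.pooler.supabase.com:6543/postgres'
#   → prints "ok", exit 0
#   gstack-gbrain-supabase-verify \
#     'postgresql://postgres:pw@db.abcdefghijklmnopqrst.supabase.co:5432/postgres'
#   → direct-connection rejection, exit 3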
set -euo pipefail

die() { echo "gstack-gbrain-supabase-verify: $*" >&2; exit 2; }
reject_direct() {
  cat >&2 <<EOF
gstack-gbrain-supabase-verify: rejected direct-connection URL

You pasted a Supabase direct-connection URL (db.*.supabase.co on port
5432). Direct connections are IPv6-only and fail in many environments.

Use the Session Pooler instead:
  Supabase Dashboard → Settings → Database → Connection Pooler →
  Transaction/Session → copy URI (port 6543)

Expected shape:
  postgresql://postgres.<ref>:<password>@aws-0-<region>.pooler.supabase.com:6543/postgres
EOF
  exit 3
}

URL=""
case "${1:-}" in
  -) URL=$(cat) ;;
  "") URL=$(cat) ;;
  *) URL="$1" ;;
esac

URL=$(printf '%s' "$URL" | tr -d '[:space:]')
[ -z "$URL" ] && die "empty URL"

# Scheme: must be postgresql:// or postgres://. Explicitly reject other
# schemes rather than guess.
case "$URL" in
  postgresql://*|postgres://*) ;;
  *) die "bad scheme (must start with postgresql:// or postgres://)" ;;
esac

# Strip scheme to expose userinfo + host + port + path.
rest="${URL#*://}"

# Userinfo portion: everything before the first @. Must contain a : (user:pass).
case "$rest" in
  *@*) ;;
  *) die "missing userinfo (expected postgres.<ref>:<password>@host)" ;;
esac
userinfo="${rest%%@*}"
after_at="${rest#*@}"

# Userinfo must be user:password with neither part empty.
case "$userinfo" in
  *:*) ;;
  *) die "userinfo missing password separator (expected user:password@)" ;;
esac
user_part="${userinfo%%:*}"
pass_part="${userinfo#*:}"
[ -z "$user_part" ] && die "empty user portion in userinfo"
[ -z "$pass_part" ] && die "empty password in userinfo"

# Host + port + path.
# Direct-connection detection FIRST (specific error beats generic).
case "$after_at" in
  db.*.supabase.co:5432*|db.*.supabase.co/*|db.*.supabase.co) reject_direct ;;
esac

# Extract host:port (before first / if present).
hostport="${after_at%%/*}"
case "$hostport" in
  *:*) ;;
  *) die "missing port (Session Pooler requires :6543)" ;;
esac
host="${hostport%:*}"
port="${hostport##*:}"

# Host must be *.pooler.supabase.com (case-insensitive).
host_lower=$(printf '%s' "$host" | tr '[:upper:]' '[:lower:]')
case "$host_lower" in
  *.pooler.supabase.com) ;;
  *) die "host '$host' is not a Supabase Session Pooler (expected *.pooler.supabase.com)" ;;
esac

# Port must be 6543 (Session Pooler default).
if [ "$port" != "6543" ]; then
  die "port must be 6543 for Session Pooler (got $port)"
fi

# User portion should look like postgres.<ref> (20-char lowercase ref,
# per the Supabase Management API contract). Not strictly required by
# gbrain, but rejecting a plain "postgres" user catches a common paste
# error where someone grabs the Direct URL userinfo by mistake.
case "$user_part" in
  postgres.*) ;;
  *) die "user portion '$user_part' should be 'postgres.<project-ref>' (20-char ref)" ;;
esac

echo "ok"
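The verifier's parse is pure POSIX parameter expansion, so it can be sketched outside the script. A minimal standalone demo with a hypothetical URL (ref, password, and region are made up):

```shell
# Same split sequence the verifier uses: strip scheme, split userinfo
# at the first @, then split host:port before the first /.
url='postgresql://postgres.myref:pw@aws-0-us-east-1.pooler.supabase.com:6543/postgres'
rest="${url#*://}"          # drop scheme
userinfo="${rest%%@*}"      # postgres.myref:pw
after_at="${rest#*@}"       # host:port/path
hostport="${after_at%%/*}"  # host:port
host="${hostport%:*}"       # everything before the last colon
port="${hostport##*:}"      # everything after the last colon
echo "$host $port"
# → aws-0-us-east-1.pooler.supabase.com 6543
```

Splitting the port with `##*:` (longest match from the front) rather than `#*:` keeps the split anchored at the last colon, so a stray colon earlier in the host can't shift it.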