* feat(setup-gbrain): add gstack-gbrain-repo-policy bin helper

  Per-remote trust-tier store for the forthcoming /setup-gbrain skill. Tiers are the D3 triad (read-write / read-only / deny), keyed by a normalized remote URL so ssh-shorthand and https variants collapse to the same entry. The file carries _schema_version: 2 (D2-eng); legacy `allow` values from pre-D3 experiments auto-migrate to `read-write` on first read, idempotent, with a one-shot log line. Pure bash + jq to match the existing gstack-brain-* family. Atomic writes via tmpfile + rename. Policy file mode 0600. Corrupt files quarantine to .corrupt-<ts> and start fresh.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): unit tests for gstack-gbrain-repo-policy

  24 tests covering normalize (ssh/https/shorthand/uppercase collapse to one key), set/get round-trip, all three D3 tiers accepted, invalid tiers rejected, file mode 0600, _schema_version field written on fresh files, legacy allow migration (including idempotence and preservation of non-allow entries), corrupt-JSON quarantine + fresh-file recovery, list output sorting, and get-without-arg auto-detect against a git repo with no origin. All tests green against a per-test tmpdir GSTACK_HOME so nothing leaks into the real ~/.gstack.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-detect state reporter

  Pure-introspection JSON emitter for the /setup-gbrain skill's start-up branching. Reports: gbrain presence + version on PATH, ~/.gbrain/config.json existence + engine, `gbrain doctor --json` health (wrapped in timeout 5s to match the /health D6 pattern), gstack-brain-sync mode via gstack-config, and ~/.gstack/.git presence for the memory-sync feature. Never modifies state. Always emits valid JSON even when every check is false. Handles malformed ~/.gbrain/config.json without crashing — gbrain_engine is null in that case, not an error.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-install with D5 detect-first + D19 PATH-shadow guard

  Clones gbrain at a pinned commit (v0.18.2) and registers it via `bun link`. Before any clone: D5 detect-first — probes ~/git/gbrain, ~/gbrain, and the install target for a valid pre-existing clone (package.json with name "gbrain" and bin.gbrain set). If one is found, `bun link` runs there instead of cloning a second copy. Prevents the day-one duplicate-install footgun on the skill author's own machine.

  After install: D19 PATH-shadow guard — reads the install-dir's package.json version, compares to `gbrain --version` on PATH. On mismatch: exits 3, prints every gbrain binary on PATH via `type -a`, and gives a remediation menu. Setup skills refuse broken environments instead of warning and continuing.

  Prereq checks (bun, git, https://github.com reachability) fail fast with install hints. --dry-run and --validate-only flags let the skill probe the plan without touching state; tests use them to cover D5 and D19 without exercising real bun link.

  Pin is a load-bearing version: setup-gbrain v1 verified against gbrain v0.18.2. Updating requires re-running Pre-Impl Gate 1 to verify gbrain's CLI + config shapes haven't drifted.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): unit tests for gstack-gbrain-detect + install

  15 tests covering: detect emits valid JSON when nothing configured, reports gstack_brain_git on GSTACK_HOME/.git presence, reads ~/.gbrain/config.json engine, tolerates malformed config, detects a mocked gbrain binary on PATH with version parsing. For install: D5 detect-first uses ~/git/gbrain fixtures under a sandboxed HOME, verifies fall-through to fresh clone when no valid clone exists, rejects invalid package.json shapes. D19 PATH-shadow validation uses a fake gbrain on a minimal SAFE_PATH to simulate version mismatch, same-version-pass, v-prefix tolerance, missing binary on PATH, and missing version field in package.json. --validate-only mode in the install bin makes the D19 check unit-testable without running real bun link (which touches ~/.bun/bin).

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-lib.sh with read_secret_to_env (D3-eng)

  Shared secret-read helper for PAT (D11) and pooler URL paste (D16). One implementation of the hardest-to-get-right pattern: stty -echo + SIGINT/TERM/EXIT trap that restores terminal mode, read into a named env var, optional redacted preview. Validates the target var name against [A-Z_][A-Z0-9_]* to prevent bash name-injection via `read -r "$varname"`. When stdin is not a TTY (CI, piped tests) the stty branches skip cleanly — piped input doesn't echo anyway. Exports the var after read so subprocesses inherit it; callers own the `unset` at handoff time. Sourced, not executed — no +x bit.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-supabase-verify structural URL check

  Zero-network validator for Supabase Session Pooler URLs before handing them to `gbrain init`. Canonical shape verified per gbrain init.ts:266:

    postgresql://postgres.<ref>:<password>@aws-0-<region>.pooler.supabase.com:6543/postgres

  Rejects direct-connection URLs (db.*.supabase.co:5432) with a distinct exit code 3 and clear IPv6-failure remediation — that's the most common paste mistake users make, so it earns its own UX path rather than a generic "bad URL" error. Never echoes the URL (contains a password) in error messages; tests verify a distinct seed password never appears in stderr on any reject path. Accepts URL from argv[1] or stdin ("-" or no arg).

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): unit tests for supabase-verify + lib.sh secret helper

  22 tests. verify: accepts canonical pooler URL (argv + stdin modes), rejects direct-connection URL with exit 3, rejects wrong scheme, wrong port, empty password, missing userinfo, plain 'postgres' user (catches direct-URL paste errors), wrong host, empty URL. Case-insensitive host match. Explicit negative: error messages never echo the URL password. lib.sh read_secret_to_env: reads piped stdin into the named env var, exports to subprocesses, redacted-preview emits masked form on stderr with the seed password absent, rejects invalid var names (lowercase, leading digit, hyphens), rejects missing/unknown flags, secret value never appears on stdout.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-supabase-provision Management API wrapper

  Four subcommands: list-orgs, create, wait, pooler-url.

  Built against the verified Supabase Management API shape (Pre-Impl Gate 1):
  - POST /v1/projects with {name, db_pass, organization_slug, region} — not the original plan's /v1/organizations/{ref}/projects
  - No `plan` field; subscription tier is org-level per the OpenAPI description ("Subscription Plan is now set on organization level and is ignored in this request")
  - GET /v1/projects/{ref}/config/database/pooler for pooler config — not /config/database

  Secrets discipline: SUPABASE_ACCESS_TOKEN (PAT) and DB_PASS read from env only, never from argv (D8 grep test enforces this). `set +x` at the top as a defensive default so debug tracing never leaks secrets. Management API hostname hardcoded to SUPABASE_API_BASE env override — no user-controlled URL portion (SSRF guard).

  HTTP error paths: 401/403 → exit 3 (auth), 402 → 4 (quota), 409 → 5 (conflict), 429 + 5xx → exponential-backoff retry up to 3 attempts, then exit 8.

  Wait subcommand polls every 5s until ACTIVE_HEALTHY with a configurable timeout; terminal states (INIT_FAILED, REMOVED, etc.) exit 7 immediately with a clear message. Timeout emits the --resume-provision hint so the skill can recover.

  Pooler-url constructs the URL locally from db_user/host/port/name + DB_PASS rather than trusting the API response's connection_string field, which is templated with [PASSWORD] rather than the real value. Handles both object and array response shapes, preferring session pool_mode when Supabase returns multiple pooler configs.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): unit tests for gstack-gbrain-supabase-provision via mock API

  22 tests covering D21 HTTP error suite (401/403/402/409/429/5xx) and happy paths for all four subcommands. Every test spins up a Bun.serve mock server bound to SUPABASE_API_BASE so nothing hits the real API. Uses Bun.spawn (async) rather than spawnSync because spawnSync blocks the Bun event loop, which prevents Bun.serve mocks from responding — calls would hit curl's own timeout instead of round-tripping. Verifies: POST body contains organization_slug (not organization_id) and no `plan` field, bearer-token auth header, retry-on-429 with eventual success, exit-8 on persistent 5xx after max retries, wait succeeds on ACTIVE_HEALTHY, exits 7 on INIT_FAILED, exits 6 with --resume-provision hint on timeout, pooler-url builds URL locally from db_user/host/port/name + DB_PASS (not response connection_string template), handles array pooler responses.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add SKILL.md.tmpl — user-facing skill prompt

  Stitches together every slice built so far (repo-policy, detect, install, lib.sh secret helper, supabase-verify, supabase-provision) into a single interactive flow. Paths: Supabase existing-URL, Supabase auto-provision (D7), Supabase manual, PGLite local, switch (PGLite ↔ Supabase via gbrain migrate wrapped in timeout 180s per D9).

  Secrets discipline per D8/D10/D11: PAT + DB_PASS + pooler URL all read via read_secret_to_env from lib.sh and handed to gbrain via GBRAIN_DATABASE_URL env, never argv. PAT carries the full D11 scope disclosure before collection and an explicit revocation reminder after success. D12 SIGINT recovery prints the in-flight ref + resume command. D18 MCP registration is scoped honestly to Claude Code — skips with a manual-register hint when `claude` is not on PATH.

  D6 per-remote trust-triad question (read-write/read-only/deny/skip-for-now) gates repo import; the triad values compose with the D2-eng schema-version policy file so future migrations stay deterministic. Skill runs concurrent-run-locked via mkdir ~/.gstack/.setup-gbrain.lock.d (atomic, same pattern as gstack-brain-sync). Telemetry (D4) payload carries enumerated categorical values only — never URL, PAT, or any postgresql:// substring. --repo, --switch, --resume-provision, --cleanup-orphans shortcut modes documented inline; the skill parses its own invocation args.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(health): integrate gbrain as D6 composite dimension

  Adds a GBrain row to the /health dashboard rubric with weight 10%. Three sub-signals rolled into one 0-10 score: doctor status (0.5), sync queue depth (0.3), last-push age (0.2). Redistributes when gbrain_sync_mode is off so the dimension stays fair. Weights rebalance: typecheck 25→22, lint 20→18, test 30→28, deadcode 15→13, shell 10→9, gbrain +10 — sums to 100. gbrain doctor --json wrapped in timeout 5s so a hung gbrain never stalls the /health dashboard. Dimension is omitted (not red) when gbrain is not installed — running /health on a non-gbrain machine shouldn't penalize that choice. History-JSONL adds a `gbrain` field. Pre-D6 entries read as null for trend comparison; new tracking starts from first post-D6 run.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(test): add secret-sink-harness for negative-space leak testing (D21 #5)

  Runs a subprocess with a seeded secret, captures every channel the subprocess could leak through, and asserts the seed never appears. Built per the D1-eng tightened contract: per-run tmp $HOME, four seed match rules (exact + URL-decoded + first-12-char prefix + base64), fd-level stdout/stderr capture via Bun.spawn, post-mortem walk of every file written under $HOME, separate buckets for telemetry JSONL. Reusable: any future skill that handles secrets can import runWithSecretSink and run positive/negative controls against its own bins. The harness itself is ~180 lines of TS with no external deps beyond Bun + node:fs. Out of scope for v1 (documented as follow-ups): subprocess env dump (portable /proc reading), the user's real shell history (bins don't modify it).

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: secret-sink harness positive controls + real-bin negative controls

  11 tests. Positive controls deliberately leak a seed in every covered channel (stdout, stderr, a file under $HOME, the telemetry JSONL path, base64-encoded, first-12-char prefix) and assert the harness catches each one. Without these, a harness that silently under-reports would look identical to a harness that works.

  Negative controls run real setup-gbrain bins with distinctive seeds:
  - supabase-verify rejects a mysql:// URL and a direct-connection URL, password never appears in any captured channel
  - lib.sh read_secret_to_env reads piped stdin, emits only the length, seed value stays invisible
  - supabase-provision on an auth-failure path fails fast without leaking the PAT to any channel

  Covers D21 #5 leak harness + uses it to validate D3-eng, D10, D11 discipline end-to-end on the already-shipped bins.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add list-orphans + delete-project subcommands (D20)

  Powers /setup-gbrain --cleanup-orphans.

  list-orphans filters the authenticated user's Supabase projects by name prefix (default "gbrain") and excludes the project the local ~/.gbrain/config.json currently points at, so only unclaimed gbrain-shaped projects come back. Active-ref detection parses the pooler URL's user portion (postgres.<ref>:<pw>@...).

  delete-project is a thin DELETE /v1/projects/{ref} wrapper with no confirmation of its own — the skill's UI layer owns the per-project confirm AskUserQuestion loop. Keeps responsibilities clean: the bin manages HTTP; the skill manages user intent.

  Both subcommands reuse the existing api_call retry+backoff and the same PAT discipline (env only, never argv).

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): list-orphans active-ref filtering + delete-project 404

  6 new tests bringing the supabase-provision suite to 28.

  list-orphans:
  - Filters to gbrain-prefixed projects, excludes the active-ref derived from ~/.gbrain/config.json's pooler URL
  - Treats all gbrain-prefixed projects as orphans when no config exists (first run on a new machine)
  - Respects custom --name-prefix for users who named their brain something else

  delete-project:
  - Happy path sends DELETE /v1/projects/<ref> and returns {deleted_ref}
  - 404 surfaces cleanly (exit 2, "404" in stderr)
  - Missing <ref> positional rejected with exit 2

  Uses per-test tmpdir HOME with a stubbed ~/.gbrain/config.json so active-ref extraction runs against deterministic fixtures.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: regenerate setup-gbrain SKILL.md after main merge

* chore: bump version and changelog (v1.12.0.0)

  Ships /setup-gbrain and its supporting infrastructure end-to-end: per-remote trust policy, installer with PATH-shadow guard, shared secret-read helper, structural URL verifier, Supabase Management API wrapper, /health GBrain dimension, secret-sink test harness. 100 new tests across 5 suites, all green. Three pre-existing test failures noted as P0 in TODOS.md.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: add USING_GBRAIN_WITH_GSTACK.md + update README for /setup-gbrain

  README changes:
  - Rewrote the "Cross-machine memory with GBrain sync" section into "GBrain — persistent knowledge for your coding agent." Covers the three /setup-gbrain paths (Supabase existing URL, auto-provision, PGLite local), MCP registration, per-remote trust triad, and the (still-separate) memory sync feature.
  - Added /setup-gbrain row to the skills table pointing at the full guide.
  - Added /setup-gbrain to both skill-list install snippets.
  - Added USING_GBRAIN_WITH_GSTACK.md to the Docs table.

  New doc (USING_GBRAIN_WITH_GSTACK.md):
  - All three setup paths with trust-surface caveats
  - MCP registration details (and honest Claude-Code-v1 scoping)
  - Per-remote trust triad semantics + how to change a policy
  - Switching engines (PGLite ↔ Supabase) via --switch
  - GStack memory sync + its relationship to the gbrain knowledge base
  - /setup-gbrain --cleanup-orphans for orphan Supabase projects
  - Full command + flag reference, every bin helper, every env var
  - Security model: what's enforced in code, what's enforced by the leak harness, and the honest limits of v1
  - Troubleshooting: PATH shadowing, direct-connection URL reject, auto-provision timeout, stale lock, policy file hand-edits, migrate hang
  - Why-this-design section explaining the non-obvious choices

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(brain-sync): secret scanner now catches Bearer-prefixed auth tokens in JSON

  The bearer-token-json regex value charset was [A-Za-z0-9_./+=-]{16,}, which does NOT permit spaces. Real HTTP auth headers embed the scheme name with a literal space — "Bearer <token>" — so the value portion actually starts with "Bearer " and the existing regex couldn't match. Result: any JSON blob containing "authorization":"Bearer ..." would slip past the scanner and sync to the user's private brain repo with the bearer token inline.

  Added optional (Bearer |Basic |Token )? prefix in front of the value charset. Now matches the common auth-scheme forms without broadening the matcher to tolerate arbitrary whitespace (which would false-positive on lots of benign JSON). Verified against 5 positive cases (bearer-in-json, clean bearer, apikey no-prefix, token with Bearer, password no-prefix) + 3 negative cases (too-short tokens, non-secret field names like username, random JSON).

  This closes the P0 security regression first noticed during v1.12.0.0 /ship. brain-sync.test.ts now passes all 7 secret-scan fixtures.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: mock-gh integration tests for gstack-brain-init auto-create path

  8 tests covering the gh-repo-create happy path that had zero coverage before. Existing brain-sync.test.ts always passes --remote <bare-url> to bypass gh entirely, so the interactive default ("press Enter, we'll run gh repo create for you") was shipping on trust.

  Test strategy: write a bash stub for gh that records every call into a file, then run gstack-brain-init with that stub on PATH. Assertions verify: gh auth status is checked, gh repo create fires with the computed gstack-brain-<user> default name + --private + --source flags, fall-through to gh repo view when create reports already-exists, user-provided URL bypasses gh entirely, gh-not-on-path and gh-not-authed branches both prompt for URL, --remote flag short-circuits all gh calls, conflicting-remote re-runs exit 1 with a clear message.

  No real GitHub, no live auth. Gate tier — runs on every commit.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(e2e): privacy-gate AskUserQuestion fires from preamble (periodic tier)

  Two periodic-tier E2E tests exercising the preamble's privacy gate end-to-end via the Agent SDK + canUseTool. Previously uncovered:
  - Positive: stages a fake gbrain on PATH + gbrain_sync_mode_prompted=false in config, runs a real skill, intercepts tool-use. Asserts the preamble fires a 3-option AskUserQuestion matching the canonical prose ("publish session memory" / "artifact" / "decline") and does NOT fire a second time in the same run (idempotency within session).
  - Negative: same staging but prompted=true. Asserts the gate stays silent even with gbrain detected on the host.

  Registered in test/helpers/touchfiles.ts as `brain-privacy-gate` (periodic) with dependency tracking on generate-brain-sync-block.ts, the three gstack-brain-* bins, gstack-config, and the Agent SDK runner. Diff-based selection re-runs the E2E when any of those change.

  Cost: ~$0.30-$0.50 per run. Only fires under EVALS=1 EVALS_TIER=periodic; gate tier stays free.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: update TODOS for bearer-json fix + new brain-sync test coverage

  Moves the bearer-json secret-scan regression from the P0 "pre-existing failures" block into the Completed section with full context on the fix, the mock-gh tests, the E2E privacy-gate tests, and the touchfile registration. Remaining P0s are the GSTACK_HOME config-isolation bug and the stale Opus 4.7 overlay pacing assertion, both unrelated.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(test): E2E privacy gate — ambient env + skill-file prompt

  Two fixes to get the E2E actually running end-to-end (first attempt failed at the SDK auth step, second at the assertion step):

  1. Don't pass an explicit `env:` object to runAgentSdkTest. The SDK's auth pipeline misses ANTHROPIC_API_KEY when env is supplied as an object (verified against the plan-mode-no-op test, which passes no env and auths cleanly). Mutate process.env before the call instead, and restore the originals in finally so other tests don't inherit the ambient mutation.

  2. The "Run /learn with no arguments" user prompt was too narrow — the model reduced it to a direct action and skipped the preamble privacy-gate directives entirely, so zero AskUserQuestions fired. Mirror the plan-mode-no-op pattern: point the model at the skill file on disk and ask it to follow every preamble directive. Bumped maxTurns from 6 to 10 to give the preamble room to execute.

  Verified both tests pass under `EVALS=1 EVALS_TIER=periodic bun test test/skill-e2e-brain-privacy-gate.test.ts` against a real ANTHROPIC_API_KEY. Cost per run: ~$0.30-$0.50 per test.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(CLAUDE.md): source ANTHROPIC/OPENAI keys from ~/.zshrc for paid evals

  Conductor workspaces don't inherit the interactive shell env, so both API keys are absent from the default process env even though they're set in ~/.zshrc. Documents the source-from-zshrc pattern (grep + eval, never echo the value) plus the Agent SDK gotcha: do NOT pass env as an object to runAgentSdkTest — mutate process.env ambiently and restore in finally.

  Discovered this during the brain-privacy-gate E2E. First run failed at SDK auth with 401; second failed because explicit env handoff bypassed the SDK's own auth routing. Fix pattern now codified so the next paid-eval session in a Conductor workspace doesn't hit the same two dead ends.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Using GBrain with GStack
Your coding agent, with a memory it actually keeps.
GBrain is a persistent knowledge base designed for AI agents. It stores what your agent learns, what you've decided, what worked and what didn't, and lets the agent search all of it on demand. GStack gives you a one-command path from zero to "gbrain is running, and my agent can call it" — with paths for try-it-local, share-with-your-team, and everything between.
This is the full monty: every scenario, every flag, every helper bin, every troubleshooting step. For the quick pitch, see the README's GBrain section. For error codes and sync-specific issues, see docs/gbrain-sync.md.
The one-command install
/setup-gbrain
That's it. The skill detects your current state, asks three questions at most, and walks you through install, init, MCP registration for Claude Code, and per-repo trust policy. On a clean Mac with nothing installed it finishes in under five minutes. On a Mac where something's already set up it takes seconds (it detects the existing state and skips done work).
The three paths
You pick one when the skill asks "Where should your brain live?"
Path 1: Supabase, you already have a connection string
Best for: you (or a teammate's cloud agent) already provisioned a Supabase brain and you want this local machine to use the same data.
What happens: Paste the Session Pooler URL (Settings → Database → Connection Pooler → Session → copy URI, port 6543). The skill reads it with echo off, shows you a redacted preview (aws-0-us-east-1.pooler.supabase.com:6543/postgres — host visible, password masked), hands it to gbrain init via the GBRAIN_DATABASE_URL environment variable, and the URL is never written to argv or your shell history.
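For orientation, here is a minimal sketch of that handoff using the helper bins this guide documents. It is illustrative only; the skill's real flow wraps these calls with detection, policy, and MCP steps, and its exact prompts may differ.

```bash
# Sketch of the env-only handoff, assuming the documented helpers.
source ~/.claude/skills/gstack/bin/gstack-gbrain-lib.sh

# Echo-off read into an env var; the URL never touches argv or $HISTFILE.
read_secret_to_env GBRAIN_DATABASE_URL "Paste your Session Pooler URL: "

# Structural check only, no network; "-" means read the URL from stdin.
printf '%s' "$GBRAIN_DATABASE_URL" | \
  ~/.claude/skills/gstack/bin/gstack-gbrain-supabase-verify -

# gbrain reads the connection string from the environment, not argv.
gbrain init --non-interactive

unset GBRAIN_DATABASE_URL
```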
Trust warning: Pasting this URL gives your local Claude Code full read/write access to every page in the shared brain. If that's not the trust level you want, pick PGLite local (Path 3) instead and accept that the two brains stay disjoint.
Path 2a: Supabase, auto-provision a new project
Best for: fresh Supabase account, you want a clean new project with zero clicking.
What happens: You paste a Supabase Personal Access Token (PAT). The skill shows you the scope disclosure first — the token grants full access to every project in your Supabase account, not just the one we're about to create. It lists your organizations, asks which one and which region (default us-east-1), generates a database password, calls POST /v1/projects, polls GET /v1/projects/{ref} every 5 seconds until the project is ACTIVE_HEALTHY (180s timeout), fetches the pooler URL, hands it to gbrain init. End-to-end: ~90 seconds.
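Under the hood the skill drives the provision bin's four subcommands in order. A hedged sketch of that sequence follows; the subcommand names come from this guide, but the invocations shown are illustrative rather than the bin's exact interface.

```bash
# Illustrative sequence only; the skill supplies the project name,
# organization_slug, and region, and db_pass comes from DB_PASS.
export SUPABASE_ACCESS_TOKEN   # PAT, collected with read_secret_to_env (env only, never argv)
export DB_PASS                 # generated database password, env only

provision=~/.claude/skills/gstack/bin/gstack-gbrain-supabase-provision

"$provision" list-orgs          # choose an organization_slug
"$provision" create             # POST /v1/projects; prints the new project ref
"$provision" wait "$ref"        # poll every 5s until ACTIVE_HEALTHY (180s timeout)
"$provision" pooler-url "$ref"  # builds the Session Pooler URL locally from DB_PASS
```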
At the end: explicit reminder to revoke the PAT at https://supabase.com/dashboard/account/tokens. The skill already discarded it from memory.
If you Ctrl-C mid-provision: The SIGINT trap prints your in-flight project ref + a resume command. You can delete the orphan at the Supabase dashboard, or run /setup-gbrain --resume-provision <ref> to pick up where you left off.
Path 2b: Supabase, create manually
Best for: you'd rather click through supabase.com yourself than paste a PAT.
What happens: The skill walks you through the four manual steps (signup → new project → wait ~2 min → copy Session Pooler URL), then takes over from Path 1's paste step. Same security treatment as Path 1.
Path 3: PGLite local
Best for: try-it-first, no account, no cloud, no sharing. Or a dedicated "this Mac's brain" that stays isolated from any cloud agent.
What happens: gbrain init --pglite. Brain lives at ~/.gbrain/brain.pglite. No network calls. Done in 30 seconds.
This is the best first choice if you just want to see what gbrain feels like before committing to cloud. You can always migrate later with /setup-gbrain --switch.
MCP registration for Claude Code
By default the skill asks "Give Claude Code a typed tool surface for gbrain?" If you say yes, it runs:
claude mcp add gbrain -- gbrain serve
That registers gbrain's stdio MCP server with Claude Code. Now gbrain search, gbrain put_page, gbrain get_page, etc. show up as first-class tools in every session, rather than as bash shell-outs.
If claude is not on PATH, the skill skips MCP registration gracefully with a manual-register hint. The CLI resolver still works from any skill that shells out to gbrain — MCP is an upgrade, not a prerequisite.
Other local agents (Cursor, Codex CLI, etc.) need their own MCP registration. The skill is Claude-Code-targeted for v1; other hosts can register gbrain serve manually in their own MCP config.
Per-remote trust policy (the triad)
Every repo on your machine gets a policy decision: read-write, read-only, or deny.
- read-write — your agent can gbrain search from this repo's context AND write new pages back to the brain. Default for your own projects.
- read-only — your agent can search the brain but never writes new pages from this repo's sessions. Ideal for multi-client consultants: search the shared brain, don't contaminate it with Client A's code while you're in Client B's repo.
- deny — no gbrain interaction at all. The repo is invisible to gbrain tooling.
The skill asks once per repo the first time you run a gstack skill there. After that the decision is sticky — every worktree + branch of the same git remote shares the same policy, so you set it once and it follows you.
SSH and HTTPS remote variants collapse to the same key: https://github.com/foo/bar.git and git@github.com:foo/bar.git are the same repo.
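You can check that collapse yourself with the policy bin's normalize subcommand (output shown is illustrative, based on the normalized key format used above):

```bash
~/.claude/skills/gstack/bin/gstack-gbrain-repo-policy normalize git@github.com:foo/bar.git
# github.com/foo/bar
~/.claude/skills/gstack/bin/gstack-gbrain-repo-policy normalize https://github.com/foo/bar.git
# github.com/foo/bar   (same key, so the same policy entry)
```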
To change a policy:
/setup-gbrain --repo # re-prompt for this repo only
# Or directly:
~/.claude/skills/gstack/bin/gstack-gbrain-repo-policy set "github.com/foo/bar" read-only
To see every policy:
~/.claude/skills/gstack/bin/gstack-gbrain-repo-policy list
Storage: ~/.gstack/gbrain-repo-policy.json, mode 0600, schema-versioned so future migrations stay deterministic.
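For orientation, the file looks roughly like this. The exact layout beyond _schema_version and the tier values is an assumption, not a documented contract, so don't script against it; use the repo-policy bin instead.

```bash
jq . ~/.gstack/gbrain-repo-policy.json
# {
#   "_schema_version": 2,
#   "github.com/foo/bar": "read-only",
#   "github.com/you/your-project": "read-write"
# }
```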
Switching engines later
Picked PGLite and now want to join a team brain? One command:
/setup-gbrain --switch
The skill runs gbrain migrate --to supabase --url "$URL" wrapped in timeout 180s. Migration is bidirectional (Supabase → PGLite also works) and lossless — pages, chunks, embeddings, links, tags, and timeline all copy. Your original brain is preserved as a backup.
If migration hangs: another gstack session may be holding a lock on the source brain. The timeout fires at 3 minutes with an actionable message. Close other workspaces and re-run.
GStack memory sync (a separate concern)
This is different from gbrain itself. Your gstack state (~/.gstack/ — learnings, plans, retros, timeline, developer profile) is machine-local by default. "GStack memory sync" optionally pushes a curated, secret-scanned subset to a private git repo so your memory follows you across machines — and, if you're running gbrain, that git repo becomes indexable there too.
Turn it on with:
gstack-brain-init
You'll get a one-time privacy prompt: everything allowlisted / artifacts only (plans, designs, retros, learnings — skip behavioral data like timelines) / off. Every skill run syncs the queue at start and end — no daemon, no background process.
Secret-shaped content (AWS keys, GitHub tokens, PEM blocks, JWTs, bearer tokens) is blocked from sync before it leaves your machine.
On a new machine: Copy ~/.gstack-brain-remote.txt over, run gstack-brain-restore, and yesterday's learnings surface on today's laptop.
Full guide: docs/gbrain-sync.md. Error index: docs/gbrain-sync-errors.md.
/setup-gbrain offers to wire this up for you at the end of initial setup — it's one more AskUserQuestion, and it integrates with the same private-repo infrastructure.
Cleanup orphan projects
If you Ctrl-C'd mid-provision, tried three different names before settling on one, or otherwise accumulated gbrain-shaped Supabase projects you don't use, there's a subcommand for that:
/setup-gbrain --cleanup-orphans
The skill re-collects a PAT (one-time, discarded after), lists every project in your Supabase account whose name starts with gbrain and whose ref doesn't match your active ~/.gbrain/config.json pooler URL. For each orphan it asks per-project: "Delete orphan project <ref> (<name>, created <date>)?" — no batching, no "delete all" shortcut. The active brain is never offered for deletion.
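For scripting, the same operations are available directly on the provision bin. The per-project confirmation loop is the skill's job; the bin only does the HTTP (the ref variable below is a placeholder):

```bash
export SUPABASE_ACCESS_TOKEN   # PAT, env only, never argv
provision=~/.claude/skills/gstack/bin/gstack-gbrain-supabase-provision

"$provision" list-orphans --name-prefix gbrain   # excludes the ref your ~/.gbrain/config.json points at
"$provision" delete-project "$orphan_ref"        # DELETE /v1/projects/<ref>; no confirmation of its own
```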
Command + flag reference
/setup-gbrain entry modes
| Invocation | What it does |
|---|---|
| /setup-gbrain | Full flow: detect state, pick path, install, init, MCP, policy, optional memory-sync |
| /setup-gbrain --repo | Flip the per-remote trust policy for the current repo only |
| /setup-gbrain --switch | Migrate engine (PGLite ↔ Supabase) without re-running the other steps |
| /setup-gbrain --resume-provision <ref> | Resume a Path 2a auto-provision that was interrupted during polling |
| /setup-gbrain --cleanup-orphans | List + per-project delete of orphan Supabase projects |
Bin helpers (for scripting)
| Bin | Purpose |
|---|---|
| gstack-gbrain-detect | Emit current state as JSON: gbrain on PATH, version, config engine, doctor status, sync mode |
| gstack-gbrain-install | Detect-first installer (probes ~/git/gbrain, ~/gbrain, then fresh clone). Has --dry-run and --validate-only flags. PATH-shadow check exits 3 with a remediation menu. |
| gstack-gbrain-lib.sh | Sourced, not executed. Provides read_secret_to_env VARNAME "prompt" [--echo-redacted "<sed-expr>"] |
| gstack-gbrain-supabase-verify | Structural URL check. Rejects direct-connection URLs (db.*.supabase.co:5432) with exit 3 |
| gstack-gbrain-supabase-provision | Management API wrapper. Subcommands: list-orgs, create, wait, pooler-url, list-orphans, delete-project. All require SUPABASE_ACCESS_TOKEN in env; create and pooler-url also require DB_PASS. --json mode available on every subcommand. |
| gstack-gbrain-repo-policy | Per-remote trust triad. Subcommands: get, set, list, normalize |
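A sketch of what gstack-gbrain-detect emits. The facts it reports (gbrain on PATH, version, config engine, doctor status, sync mode, ~/.gstack/.git presence) are documented above; the exact key names and values below are assumptions for illustration.

```bash
~/.claude/skills/gstack/bin/gstack-gbrain-detect
# {
#   "gbrain_on_path": true,
#   "gbrain_version": "0.18.2",
#   "gbrain_engine": "pglite",
#   "doctor_status": "ok",
#   "gbrain_sync_mode": "off",
#   "gstack_brain_git": false
# }
# gbrain_engine is null when ~/.gbrain/config.json is missing or malformed.
```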
gbrain CLI (upstream tool)
GBrain itself ships these commands; gstack wraps them:
| Command | Purpose |
|---|---|
| gbrain init --pglite | Initialize a local PGLite brain |
| gbrain init --non-interactive | Initialize via env (GBRAIN_DATABASE_URL or DATABASE_URL). Never pass a URL as argv — it'll leak to shell history. |
| gbrain doctor --json | Health check. Emits a JSON report with a status field plus per-check results |
| gbrain migrate --to supabase --url ... | Move a PGLite brain to Supabase (lossless, preserves source as backup) |
| gbrain migrate --to pglite | Reverse migration |
| gbrain search "query" | Search the brain |
| gbrain put_page --title "..." --tags "a,b" <<<"content" | Write a page |
| gbrain get_page "<slug>" | Fetch a page |
| gbrain serve | Start the MCP stdio server (used by claude mcp add) |
Config files + state
| Path | What lives there |
|---|---|
| ~/.gbrain/config.json | Engine (pglite/postgres), database URL or path, API keys. Mode 0600. Written by gbrain init. |
| ~/.gstack/gbrain-repo-policy.json | Per-remote trust triad. Schema v2. Mode 0600. |
| ~/.gstack/.setup-gbrain.lock.d | Concurrent-run lock (atomic mkdir). Released on normal exit + SIGINT. |
| ~/.gstack/.brain-queue.jsonl | Pending sync entries for gstack memory sync |
| ~/.gstack/.brain-last-push | Timestamp of last sync push (for /health scoring) |
| ~/.gstack-brain-remote.txt | URL of your gstack memory sync remote (safe to copy between machines) |
| ~/.gstack/.setup-gbrain-inflight.json | Reserved for future --resume-provision persisted state |
Environment variables
| Var | Where it's read | What it does |
|---|---|---|
| SUPABASE_ACCESS_TOKEN | gstack-gbrain-supabase-provision | PAT for Management API calls. Discarded after each setup run. |
| DB_PASS | gstack-gbrain-supabase-provision (create, pooler-url) | Generated DB password. Never in argv. |
| GBRAIN_DATABASE_URL | gbrain init, gbrain doctor, etc. | Postgres connection string (Supabase pooler URL for us). Env takes precedence over ~/.gbrain/config.json. |
| DATABASE_URL | gbrain init (fallback) | Same semantics as GBRAIN_DATABASE_URL; checked second. |
| SUPABASE_API_BASE | gstack-gbrain-supabase-provision | Override the Management API host. Used by tests to point at a mock server. |
| GBRAIN_INSTALL_DIR | gstack-gbrain-install | Override default install path (~/gbrain) |
| GSTACK_HOME | every bin helper | Override ~/.gstack state dir. Heavy test use. |
Security model
One rule for every secret this skill touches: env var only, never argv, never logged, never written to disk by us. The only persistent storage is gbrain's own ~/.gbrain/config.json at mode 0600, which is gbrain's discipline, not ours.
Enforced in code:
- A CI grep test in test/skill-validation.test.ts fails the build if $SUPABASE_ACCESS_TOKEN or $GBRAIN_DATABASE_URL appears in an argv position
- The same CI grep fails if --insecure, -k, or NODE_TLS_REJECT_UNAUTHORIZED=0 appear in bin/gstack-gbrain-supabase-provision
- set +x at the top of the provision helper prevents debug tracing from leaking the PAT
- The telemetry payload contains only enumerated categorical values (scenario, install result, MCP opt-in, trust tier) — never free-form strings that could contain secrets
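The TLS-bypass check is the kind of thing you can reproduce with a one-liner. This is a simplified illustration of the idea, not the actual test in test/skill-validation.test.ts, which covers more patterns:

```bash
# Fail if the provision bin ever disables TLS verification (simplified sketch).
if grep -nE -- '--insecure|(^|[[:space:]])-k([[:space:]]|$)|NODE_TLS_REJECT_UNAUTHORIZED=0' \
     bin/gstack-gbrain-supabase-provision; then
  echo "TLS-verification bypass found in provision bin" >&2
  exit 1
fi
```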
Enforced via tests:
- test/secret-sink-harness.test.ts runs every secret-handling bin with a seeded secret and asserts the seed never appears in any captured channel (stdout, stderr, files under $HOME, telemetry JSONL). Four match rules per seed: exact, URL-decoded, first-12-char prefix, base64.
- Positive controls in the same test file deliberately leak seeds in every covered channel and assert the harness catches each one. Without the positive controls, a harness that silently under-reports would look identical to a working harness.
What you can still leak (the honest limits of v1):
- If you paste a secret into a normal chat message outside read -s, it's in the conversation transcript and any host-side logging
- The leak harness doesn't dump subprocess environment — a bin that ran env >> ~/.log would evade detection (no bin in v1 does this; grep tests prevent it)
- Your shell's own HISTFILE behavior is your shell's, not ours — we never pass secrets to argv so they don't land there via our code, but nothing stops you from pasting one into a raw curl command yourself
Troubleshooting
"PATH SHADOWING DETECTED" during install
Another gbrain binary is earlier in PATH than the one the installer just linked. The installer's version check caught it. Fix one of:
- rm $(which gbrain) if you don't need the other one
- Prepend ~/.bun/bin to PATH in your shell rc so the linked binary wins
- Set GBRAIN_INSTALL_DIR to the shadowing binary's install directory and re-run
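To see which binary is winning before you pick a fix:

```bash
type -a gbrain          # every gbrain on PATH, in resolution order; the first one wins
gbrain --version        # the version your shell actually runs
jq -r .version "${GBRAIN_INSTALL_DIR:-$HOME/gbrain}/package.json"   # the version gstack just linked
```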
Then re-run /setup-gbrain.
"rejected direct-connection URL"
You pasted a db.<ref>.supabase.co:5432 URL. Those are IPv6-only and fail in most environments. Use the Session Pooler URL instead: Supabase dashboard → Settings → Database → Connection Pooler → Session → copy URI (port 6543).
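If you want to pre-check a URL before re-running the skill, the verifier accepts it on stdin so it never lands in argv (the variable below is just a placeholder for however you're holding the URL):

```bash
# Exit 0: canonical Session Pooler shape.
# Exit 3: direct-connection URL (db.<ref>.supabase.co:5432); use the pooler URI on port 6543 instead.
printf '%s' "$PASTED_URL" | ~/.claude/skills/gstack/bin/gstack-gbrain-supabase-verify -
```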
Auto-provision times out at 180s
The Supabase project is still initializing. Your ref was printed in the exit message. Wait a minute, then:
/setup-gbrain --resume-provision <ref>
The skill re-collects a PAT, skips project creation, resumes polling.
"Another /setup-gbrain instance is running"
You have a stale lock directory. If you're sure no other instance is actually running:
rm -rf ~/.gstack/.setup-gbrain.lock.d
Then re-run.
"No cross-model tension" on policy file
You edited ~/.gstack/gbrain-repo-policy.json by hand with legacy allow values? No problem. On the next read, gstack auto-migrates allow → read-write and adds _schema_version: 2. One log line on stderr, idempotent, deterministic.
gbrain doctor says "warnings"
/health treats that as yellow, not red. Check gbrain doctor --json | jq .checks to see which sub-checks are warning. Typical causes: resolver MECE overlap (skill names clashing) or DB connection not yet configured.
Switching PGLite → Supabase hangs
Another gstack session in a sibling Conductor workspace may be holding a lock on your local PGLite file via its preamble's gstack-brain-sync call. Close other workspaces, re-run /setup-gbrain --switch. The timeout is bounded at 180s so you'll never actually wait forever.
Why this design
Why per-remote trust triad and not binary allow/deny? Multi-client consultants need search without write-back. A freelance dev working on Client A in the morning and Client B in the afternoon can't let A's code insights leak into a brain Client B can search. Read-only solves that cleanly.
Why not bundle gbrain into gstack? Gbrain is a separate, actively-developed project with its own release cadence, schema migrations, and MCP surface. Bundling would mean gstack has to gate gbrain updates, which slows gbrain improvements from reaching users. Separate-but-integrated lets each ship on its own cadence.
Why gbrain init --non-interactive via env var and not a flag? Connection strings contain database passwords. Passing them as argv lands the password in ps, shell history, and process listings. Env-var handoff keeps the secret in process memory only. Gbrain supports both GBRAIN_DATABASE_URL and DATABASE_URL; we use the former to avoid collisions with non-gbrain tooling.
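The concrete failure mode, for the skeptical. The --url flag on init shown in the comment is hypothetical and only illustrates what not to do; the env-only pattern below is the one this guide uses.

```bash
# DON'T: a hypothetical argv handoff would be visible to every local process
# for as long as the command runs, and would land in your shell history.
#   gbrain init --url 'postgresql://postgres.ref:SECRET@...'
#   ps -eo args | grep -F 'postgresql://'   # password in plain sight

# DO: env-only handoff, per this guide.
read_secret_to_env GBRAIN_DATABASE_URL "Pooler URL: "   # from gstack-gbrain-lib.sh: echo off, exported
gbrain init --non-interactive
unset GBRAIN_DATABASE_URL
```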
Why fail-hard on PATH shadowing instead of warn-and-continue? A shadowed gbrain means every subsequent command calls a different binary than the one we just installed. That's a silent version-drift bug that surfaces as mysterious feature gaps weeks later. Setup skills have one job — set up a working environment. Refusing to install into a broken one is the setup-skill-correct behavior.
Why not auto-import every repo? Privacy + noise. An auto-import preamble hook that ingests every repo you touch would: (a) leak work code into a shared brain without consent, and (b) clog search with throwaway repos. The per-remote policy makes ingestion an explicit, per-repo decision. /setup-gbrain doesn't install any auto-import hook today — but the policy store is forward-compatible for one later.
Related skills + next steps
- /health — includes a GBrain dimension (doctor status, sync queue depth, last-push age) in its 0-10 composite score. The dimension is omitted when gbrain isn't installed; running /health on a non-gbrain machine doesn't penalize that choice.
- /gstack-upgrade — keeps gstack itself up to date. Does NOT upgrade gbrain independently. To bump gbrain, update PINNED_COMMIT in bin/gstack-gbrain-install and re-run /setup-gbrain.
- /retro — weekly retrospective pulls learnings and plans from your gbrain when memory sync is on, letting the retro reference cross-machine history.
Run /setup-gbrain and see what sticks.