gstack/USING_GBRAIN_WITH_GSTACK.md
Garry Tan 675717e320 v1.17.0.0: setup-gbrain wireup ships the gbrain federation surface (#1234)
* feat: gstack-gbrain-source-wireup helper + 13 unit tests

The new bin/gstack-gbrain-source-wireup is the single helper that registers
the gstack brain repo as a gbrain federated source via `git worktree`, runs
incremental sync, and supports --uninstall + --probe + --strict modes.

Replaces the dead `consumers.json + ingest_url + /ingest-repo` HTTP wireup
introduced in v1.12.0.0 — that endpoint never shipped on the gbrain side.
The federation surface (`gbrain sources` / `gbrain sync`) shipped in gbrain
v0.18.0; this helper adapts to its actual semantics (no `sources update`, so
path drift recovery is `remove + re-add`; no `--install-cron` either, so
freshness rides on the existing skill-end push hook).

Source-id derivation is multi-fallback: ~/.gstack/.git origin URL →
~/.gstack-brain-remote.txt → --source-id flag. This makes `--uninstall`
work even after `~/.gstack/.git` is destroyed by the parent uninstall script.
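
The fallback chain can be sketched in a few lines of shell (function and variable names here are illustrative, not the helper's actual internals):

```bash
# Hypothetical sketch of the three-step source-id derivation.
# GSTACK_HOME / REMOTE_FILE / derive_source_id are illustrative names.
derive_source_id() {
  local flag_id="$1" url=""
  # 1. Prefer the brain repo's origin URL, if its .git dir still exists
  url=$(git -C "$GSTACK_HOME" config --get remote.origin.url 2>/dev/null || true)
  # 2. Fall back to the remote-URL file the sync feature writes
  if [ -z "$url" ] && [ -f "$REMOTE_FILE" ]; then
    url=$(cat "$REMOTE_FILE")
  fi
  # 3. Last resort: an explicit --source-id from the caller
  if [ -z "$url" ]; then
    url="$flag_id"
  fi
  if [ -z "$url" ]; then
    return 1
  fi
  printf '%s\n' "$url"
}
```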

Worktree is `--detach`ed at $GSTACK_HOME's HEAD because main is already
checked out there; advance is a re-checkout of the parent's current HEAD,
not a `git pull`. Divergence recovery removes + re-adds the worktree.
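
The mechanics can be sketched against a throwaway repo (paths are illustrative; the real helper operates on $GSTACK_HOME and its registered worktree):

```bash
# Throwaway demonstration of the detach-and-advance pattern.
GSTACK_HOME=$(mktemp -d)
WORKTREE="$(mktemp -d)/brain-worktree"
git -C "$GSTACK_HOME" init -q
git -C "$GSTACK_HOME" -c user.email=t@t -c user.name=t commit -q --allow-empty -m one

# main is already checked out in $GSTACK_HOME, so the worktree is detached
git -C "$GSTACK_HOME" worktree add --detach "$WORKTREE" \
  "$(git -C "$GSTACK_HOME" rev-parse HEAD)"

# "Advance" after the parent moves: re-checkout the parent's current HEAD
# rather than `git pull` (the brain repo may have no reachable upstream here)
git -C "$GSTACK_HOME" -c user.email=t@t -c user.name=t commit -q --allow-empty -m two
git -C "$WORKTREE" checkout -q --detach "$(git -C "$GSTACK_HOME" rev-parse HEAD)"
```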

Test suite covers 13 cases: fresh-state registration, idempotent re-runs,
drift recovery, --strict failure modes, source-id fallback chain, --probe
non-mutation, sync errors, and --uninstall. Fake gbrain on $PATH, real git
ops at GSTACK_HOME tmp dir.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: wire setup-gbrain + brain-restore + brain-uninstall to use the helper

setup-gbrain Step 7 now invokes gstack-gbrain-source-wireup --strict after
gstack-brain-init has run and gbrain_sync_mode is set. Strict mode means the
user sees
the failure rather than silently ending up with an unwired brain.

bin/gstack-brain-init drops 60 lines of dead code: the HTTP POST to
${GBRAIN_URL}/ingest-repo, the GBRAIN_URL_VAL/GBRAIN_TOKEN_VAL probes, the
consumers.json writer, and the chore commit step. CONSUMERS_FILE variable
declaration removed. The closing message no longer points at the dead
gstack-brain-consumer add path.

bin/gstack-brain-restore drops the 18-line consumers.json token-rehydration
block (was a no-op for the only consumer that ever existed). Adds a
best-effort wireup invocation after the brain-repo clone so 2nd-Mac restore
gets gbrain federation automatically. Failure prints a stderr WARNING but
does not abort the restore — restore's primary job is the git clone.

bin/gstack-brain-uninstall calls the helper's --uninstall mode (which
removes the gbrain source registration, the git worktree, and the
future-launchd-plist stub) before the existing legacy consumers.json
removal. The ordering is deliberate: the helper derives the source-id via the
multi-fallback chain, so it works even after .git is destroyed.

bin/gstack-brain-consumer gets a DEPRECATED header note. Stays in the tree
for one cycle of grace; removal in v1.13.0.0.

setup-gbrain/SKILL.md is regenerated from the .tmpl via gen:skill-docs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: v1.12.3.0 migration — wire existing brain-sync repos into gbrain

Idempotent migration script. For users who already opted into brain-sync
before this release (gbrain_sync_mode != off, ~/.gstack/.git exists), runs
the new gstack-gbrain-source-wireup helper so their existing brain repo
becomes searchable via gbrain immediately on /gstack-upgrade.

Skip conditions (each ends with exit 0):
  - HOME unset or empty (defensive)
  - gbrain_sync_mode = off or empty (user opted out)
  - no ~/.gstack/.git (brain-init never ran)
  - helper missing on disk (broken install)

No --strict on the helper invocation: missing or old gbrain is a benign
skip during a batch upgrade rather than a blocker.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* v1.12.3.0: setup-gbrain wireup ships the gbrain federation surface

Bumps VERSION 1.12.2.0 → 1.12.3.0 with a release-notes-format entry in
CHANGELOG.md. After upgrade, the placeholder consumers.json wireup is gone;
`gbrain sources` + `sync` + the skill-end hook is the new path, and your
gstack memory is actually searchable in gbrain.

The CHANGELOG entry follows the release-summary format from CLAUDE.md:
two-line bold headline, lead paragraph naming what shipped, "verify after
upgrade" command block readers can run on their own brain to see the
delta, then the standard Itemized changes / What this means / For
contributors sections.

Three pre-existing test failures on this branch are flagged in the
contributor section: the GSTACK_HOME isolation test (reads Garry's actual
~/.gstack/config.yaml), the 2MB tracked-binary test (security-bench
fixtures > 2MB), and the Opus 4.7 pacing-directive test (overlay text
drifted). All three were verified to fail on the base branch too — out
of scope for this PR, follow-up needed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: helper locks GBRAIN_DATABASE_URL at startup, defends against config rewrites

The wireup helper previously read ~/.gbrain/config.json on every gbrain
subprocess invocation. On Garry's Mac, multiple concurrent test runs and
agent integrations were rewriting that file mid-sync, redirecting the
wireup to the wrong brain partway through a 4-minute initial import.

This commit adds a `--database-url <url>` flag to the helper and locks
the URL at startup. Precedence:
  1. --database-url flag                       (explicit caller intent)
  2. GBRAIN_DATABASE_URL / DATABASE_URL env    (CI / manual override)
  3. read once from ~/.gbrain/config.json      (default)

Whichever wins gets exported as GBRAIN_DATABASE_URL for every child
`gbrain` invocation. Per gbrain's loadConfig at src/core/config.ts:53,
env-var URLs override the file URL — so a process that flips config.json
between two of our gbrain calls can't redirect us. Defense-in-depth:
once the URL is locked, the wireup completes against the original brain
even under hostile filesystem conditions.
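
A minimal sketch of the precedence resolution, assuming the config key is `database_url` (the function name and JSON shape are assumptions, not gbrain's documented schema):

```bash
# Illustrative lock-at-startup precedence; the real helper's internals may differ.
resolve_database_url() {
  local flag_url="$1" cfg="${GBRAIN_CONFIG:-$HOME/.gbrain/config.json}"
  if [ -n "$flag_url" ]; then                    # 1. explicit --database-url flag
    printf '%s\n' "$flag_url"
  elif [ -n "${GBRAIN_DATABASE_URL:-}" ]; then   # 2. env override (CI / manual)
    printf '%s\n' "$GBRAIN_DATABASE_URL"
  elif [ -n "${DATABASE_URL:-}" ]; then
    printf '%s\n' "$DATABASE_URL"
  elif [ -f "$cfg" ]; then                       # 3. read the config file ONCE
    python3 -c 'import json,sys; print(json.load(open(sys.argv[1])).get("database_url",""))' "$cfg"
  fi
}
# Whichever wins is exported once, so later config.json rewrites are inert:
# export GBRAIN_DATABASE_URL="$(resolve_database_url "$flag")"
```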

setup-gbrain/SKILL.md.tmpl Step 7 now reads the URL out of config.json
once (via python3 inline) and passes it explicitly with --database-url,
so even the very first wireup call is decoupled from config.json mutability.

Three new test cases cover the lock behavior:
  - --database-url flag is exported to child gbrain calls
  - falls back to ~/.gbrain/config.json when no flag and no env
  - flag overrides env GBRAIN_DATABASE_URL and config.json values

The fake gbrain in the test suite now records GBRAIN_DATABASE_URL alongside
each call so tests can assert the helper exported the locked URL.

Total test count: 13 → 16 passing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: bump v1.12.3.0 references to v1.15.1.0 to match merged-with-main release

Internal-only renames after merging origin/main bumped this branch's release
target from v1.12.3.0 → v1.15.1.0:

- gstack-upgrade/migrations/v1.12.3.0.sh → v1.15.1.0.sh (rename + log-prefix
  bump from "[v1.12.3.0]" to "[v1.15.1.0]")
- bin/gstack-brain-consumer header: "DEPRECATED in v1.12.3.0" → "DEPRECATED in
  v1.15.1.0"; removal target bumped from v1.13.0.0 → v1.16.0.0 (next minor
  after v1.15.1.0).
- bin/gstack-brain-uninstall: "no longer written ... since v1.12.3.0" →
  "since v1.15.1.0".

No behavior change. Test suite still 16/16 passing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: 10 new cases close coverage gaps (helper defensive paths + migration)

/ship Step 7 coverage audit reported 48% (22/46 branches). Added 10 cases
covering the highest-impact gaps:

Helper (test/gstack-gbrain-source-wireup.test.ts, +3 cases → 19 total):
- --uninstall when gbrain is missing: best-effort exit 0, worktree still cleaned
- --no-pull skips HEAD advance on existing worktree (was untested)
- Stray non-git directory at worktree path is cleaned up + worktree created

Migration (test/gstack-upgrade-migration-v1_15_1_0.test.ts, NEW, 7 cases):
- HOME unset → defensive exit 0
- gbrain_sync_mode=off → exit 0 silently
- gbrain_sync_mode unset → exit 0 silently
- no ~/.gstack/.git → exit 0 silently
- helper missing on PATH → warning + exit 0
- happy path → invokes helper without --strict
- helper exits non-zero → migration prints retry hint, still exits 0 (non-blocking)

Also syncs package.json version from 1.15.0.0 → 1.15.1.0 to match VERSION
file (DRIFT_STALE_PKG repair from /ship Step 12 idempotency check; was a
manual-edit-bypass artifact from the merge step).

Coverage estimate: 48% → ~75%. Mainline + migration script + key defensive
paths all exercised. 26 tests total covering the new code surface.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: pre-landing review auto-fixes (5 correctness + observability)

/ship Step 9 review surfaced 9 INFORMATIONAL findings on the new helper +
migration. Five auto-fixed with no behavior regression (26/26 tests pass):

bin/gstack-gbrain-source-wireup:
- Version compare: put floor "0.18.0" first in `sort -V` stdin so equal-or-
  greater $v always sorts to position 2. Stable across sort implementations.
- _worktree_add_detached: drop `2>/dev/null` on the `worktree add`, surface
  git's stderr through `prefix` so users see WHY adds fail (disk, perms).
- ensure_worktree: same observability fix on the `git checkout --detach` path
  during HEAD-advance, so users see the actual git error before recovery.
- do_probe: replace `[ -d X ] || [ -f X ] && set=present` (precedence trap —
  the `&&` short-circuits when the dir branch fails) with explicit if-block.
- do_probe: capture `check_source_state`'s return code explicitly via
  `set +e; ...; rc=$?; set -e`. `$?` after an `if`/`elif` chain is fragile
  under set -e and may not reach the elif under some shell versions.
- do_wireup: same explicit return-code capture for `ensure_worktree`. The
  prior `ensure_worktree || { if [ $? = 2 ]; ...` pattern relied on `$?`
  reflecting the function's return after `||`, which is implementation-defined.
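
Generically, the explicit-capture pattern looks like this (`check_source_state` is a stub standing in for the real function; the return-code meanings here are illustrative):

```bash
set -e
check_source_state() { return 2; }   # stub: pretend "source not yet registered"

# Capture the return code explicitly instead of trusting $? to survive
# an if/elif chain or an `||` under set -e.
set +e
check_source_state
rc=$?
set -e

if [ "$rc" -eq 0 ]; then
  state="registered"
elif [ "$rc" -eq 2 ]; then
  state="needs-wireup"
else
  state="drifted"
fi
```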

gstack-upgrade/migrations/v1.15.1.0.sh:
- Trim whitespace from `gstack-config get gbrain_sync_mode` output via
  `tr -d '[:space:]'`. Trailing newlines would mis-classify "off\n" as a
  non-empty non-off mode and incorrectly invoke the helper.

Skipped findings (cosmetic / out of scope):
- `python3 -c` reads `~/.gbrain/config.json` via `expanduser` instead of
  the helper's `$GBRAIN_CONFIG` variable (cosmetic; `expanduser` still honors a
  HOME override).
- Long sync-failure error message could truncate to last N lines (cosmetic
  log readability).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: adversarial review hardening (rm safety, jq probe, secret redaction, multi-Mac)

/ship Step 11 adversarial review surfaced 7 CRITICAL issues. Five fixed
inline (no behavior regression, 26/26 tests still pass):

bin/gstack-gbrain-source-wireup:

1. **rm -rf path validation** (was: F-c-CRITICAL 9/10).
   Added `safe_rm_worktree` helper that refuses any path not strictly under
   $HOME/, plus dangerous-path allowlist for /, /Users, $HOME root. Replaces
   raw `rm -rf "$WORKTREE"` calls (lines 161, 169 originally). If user sets
   GSTACK_BRAIN_WORKTREE="" or "/", the helper now dies cleanly instead of
   nuking the home dir or root.
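
In spirit, the guard looks something like this sketch (the real helper's checks may differ):

```bash
# Illustrative path guard in the spirit of safe_rm_worktree: refuse anything
# that is not strictly under $HOME, plus a dangerous-path denylist.
safe_rm_worktree() {
  local target="$1"
  case "$target" in
    "" | "/" | "/Users" | "$HOME" | "$HOME/")
      echo "refusing to remove dangerous path: '$target'" >&2; return 1 ;;
    "$HOME"/*) ;;   # strictly under $HOME: allowed
    *)
      echo "refusing to remove path outside \$HOME: '$target'" >&2; return 1 ;;
  esac
  rm -rf -- "$target"
}
```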

2. **jq dependency probe** (was: F-c-CRITICAL 9/10).
   `check_source_state` now hard-fails with a clear message if jq is missing,
   instead of silently returning "absent" → re-add → die-on-duplicate. Plus
   trims whitespace from jq output (`tr -d '[:space:]'`) to defend against
   gbrain emitting `\n` for missing fields. Header comment claimed jq was a
   transitive dep; now we enforce it.

3. **Python heredoc warns on JSON parse failure** (was: F-c-CRITICAL 8/10).
   Previously `except Exception: pass` silently swallowed malformed JSON,
   leaving _locked_url empty and defeating the URL-lock defense. Now writes
   the parse error to a temp file and warns the user that the URL was not
   locked. Also passes the config path via env var (GBRAIN_CONFIG_PATH)
   instead of hardcoded `~/.gbrain/config.json`, respecting any HOME override.

4. **Multi-Mac source-id collision fix** (was: F-c-CRITICAL 9/10).
   When `check_source_state` returns 1 (source exists at different path), the
   helper used to remove + re-add. Two Macs sharing one Supabase brain would
   ping-pong the local_path metadata on every sync. Now: if the existing
   path's basename matches the local worktree's basename (likely another
   machine's local copy of the SAME brain repo), skip re-registration and
   sync against the local worktree. gbrain stores pages by content; metadata
   is informational. No more ping-pong.

5. **Redact DB URL from sync-failure error message** (was: F-c-CRITICAL 7/10).
   `gbrain sync` failures used to echo the full stderr (which can contain
   the postgres connection string with password) into the user's terminal
   and any log redirect. Now we sed-replace any `postgres://...` with
   `postgres://***REDACTED***` before the die() call, and only show the
   last 10 lines.

Bonus minor fix: `die()` now uses `$1` instead of `$*` for the warn
message, so the exit-code arg ($2) doesn't get appended to the warning text.

Acknowledged-but-deferred:
- GBRAIN_DATABASE_URL env exposure on Linux via /proc/$PID/environ. This is
  a Linux-only concern; gstack is Mac-targeted today and macOS restricts
  process env reads. Document as a follow-up if Linux support lands.
- gbrain version parser brittleness if gbrain switches to "v0.18.0" prefix.
  Defensive only; current gbrain output matches `gbrain X.Y.Z` exactly.
- bash 3.2 PIPESTATUS reliability. Tests pass on the host bash version (3.2+
  via macOS); modern bash 5.x is widely available.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: sync gbrain-source-wireup helper into USING_GBRAIN + gbrain-sync

USING_GBRAIN_WITH_GSTACK.md: add gstack-gbrain-source-wireup row to the bin
helpers table — describes federation registration via `gbrain sources add` +
worktree, lists flags, calls out it replaces the dead consumers.json/ingest-repo
HTTP wireup.

docs/gbrain-sync.md: replace the `gstack-brain-reader add --ingest-url` step
in gstack-brain-init's flow (which targeted the never-shipped /ingest-repo
endpoint) with the real flow — federate via gbrain sources + worktree, point
to bin/gstack-gbrain-source-wireup.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* v1.16.1.0: rebump after queue-collision (PR #1233 took v1.16.0.0)

CI's "Check VERSION is not stale vs queue" job (job 73105686380) failed
with: "VERSION drift: PR #1234 claims v1.15.1.0 but the queue has moved —
next free slot is v1.16.1.0." PR #1233 (garrytan/browserharness) entered
the queue claiming v1.16.0.0 between when this branch's prior /ship ran
and when CI evaluated, so v1.15.1.0 is stale. Rebumping on top.

Files updated:
- VERSION                                                     1.15.1.0 → 1.16.1.0
- package.json                                                1.15.1.0 → 1.16.1.0
- CHANGELOG.md heading + Before/After columns                 1.15.1.0 → 1.16.1.0
- CHANGELOG removal target (consumers.json + config keys)     1.16.0.0 → 1.17.0.0
- gstack-upgrade/migrations/v1.15.1.0.sh                      → renamed v1.16.1.0.sh + log prefix
- bin/gstack-brain-consumer "DEPRECATED in" + "removal in"    1.15.1.0/1.16.0.0 → 1.16.1.0/1.17.0.0
- bin/gstack-brain-uninstall "since vX.Y.Z.W"                 1.15.1.0 → 1.16.1.0
- test/gstack-upgrade-migration-v1_15_1_0.test.ts             → renamed v1_16_1_0.test.ts

No behavior change. 26/26 wireup + migration tests still pass on the rename.
Full bun test suite: exit 0, 0 failures.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* v1.17.0.0: rebump again — bump-detection now classifies branch as MINOR

CI's version-stale check (job 73106360896) failed: PR #1234 claims v1.16.1.0
but the queue moved to v1.17.0.0. Root cause: bumping 1.15.1.0 → 1.16.1.0
to dodge the prior collision turned the branch's diff classification from
PATCH (1.15.0 → 1.15.1) into MINOR (1.15.0 → 1.16.x). detect-bump.ts now
sees MINOR, gstack-next-version walks the MINOR lane past #1233's
v1.16.0.0 claim, and the next free slot is v1.17.0.0.

Honestly accurate per CLAUDE.md scale-aware bumps: this branch IS a
MINOR ("substantial new capability shipped — skill, harness, command,
big refactor"). The new helper + migration + integration totals ~1200
lines added across 11 files with 26 new tests. PATCH was always the
wrong honest classification; the queue collision forced the right
answer.

Files updated:
- VERSION                                                     1.16.1.0 → 1.17.0.0
- package.json                                                1.16.1.0 → 1.17.0.0
- CHANGELOG.md heading + After column                         1.16.1.0 → 1.17.0.0
- CHANGELOG removal targets                                   1.17.0.0 → 1.18.0.0
- gstack-upgrade/migrations/v1.16.1.0.sh                      → renamed v1.17.0.0.sh + log prefix
- bin/gstack-brain-consumer "DEPRECATED in" + "removal in"    1.16.1.0/1.17.0.0 → 1.17.0.0/1.18.0.0
- bin/gstack-brain-uninstall "since vX.Y.Z.W"                 1.16.1.0 → 1.17.0.0
- test/gstack-upgrade-migration-v1_16_1_0.test.ts             → renamed v1_17_0_0.test.ts

26/26 tests still pass. No behavior change.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 01:17:54 -07:00


# Using GBrain with GStack
Your coding agent, with a memory it actually keeps.
[GBrain](https://github.com/garrytan/gbrain) is a persistent knowledge base designed for AI agents. It stores what your agent learns, what you've decided, what worked and what didn't, and lets the agent search all of it on demand. GStack gives you a one-command path from zero to "gbrain is running, and my agent can call it" — with paths for try-it-local, share-with-your-team, and everything between.
This is the full monty: every scenario, every flag, every helper bin, every troubleshooting step. For the quick pitch, see the [README's GBrain section](README.md#gbrain--persistent-knowledge-for-your-coding-agent). For error codes and sync-specific issues, see [docs/gbrain-sync.md](docs/gbrain-sync.md).
---
## The one-command install
```bash
/setup-gbrain
```
That's it. The skill detects your current state, asks three questions at most, and walks you through install, init, MCP registration for Claude Code, and per-repo trust policy. On a clean Mac with nothing installed it finishes in under five minutes. On a Mac where something's already set up it takes seconds (it detects the existing state and skips done work).
## The three paths
You pick one when the skill asks "Where should your brain live?"
### Path 1: Supabase, you already have a connection string
Best for: you (or a teammate's cloud agent) already provisioned a Supabase brain and you want this local machine to use the same data.
**What happens:** Paste the Session Pooler URL (Settings → Database → Connection Pooler → Session → copy URI, port 6543). The skill reads it with echo off, shows you a redacted preview (`aws-0-us-east-1.pooler.supabase.com:6543/postgres` — host visible, password masked), hands it to `gbrain init` via the `GBRAIN_DATABASE_URL` environment variable, and the URL is never written to argv or your shell history.
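A minimal sketch of that redacted preview (the skill passes its own sed expression via the helper; this pattern is illustrative only):
```bash
# Hypothetical redaction: strip scheme and credentials, keep host:port/db.
redact_preview() {
  sed -E 's#^[a-z]+://[^@]*@##'
}
```
Feeding the pooler URL through it yields the host-visible, password-masked preview shown above.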
**Trust warning:** Pasting this URL gives your local Claude Code full read/write access to every page in the shared brain. If that's not the trust level you want, pick PGLite local (Path 3) instead and accept that the brains are disjoint.
### Path 2a: Supabase, auto-provision a new project
Best for: fresh Supabase account, you want a clean new project with zero clicking.
**What happens:** You paste a Supabase Personal Access Token (PAT). The skill shows you the scope disclosure first — *the token grants full access to every project in your Supabase account, not just the one we're about to create*. It lists your organizations, asks which one and which region (default `us-east-1`), generates a database password, calls `POST /v1/projects`, polls `GET /v1/projects/{ref}` every 5 seconds until the project is `ACTIVE_HEALTHY` (180s timeout), fetches the pooler URL, hands it to `gbrain init`. End-to-end: ~90 seconds.
At the end: explicit reminder to revoke the PAT at https://supabase.com/dashboard/account/tokens. The skill already discarded it from memory.
**If you Ctrl-C mid-provision:** The SIGINT trap prints your in-flight project ref + a resume command. You can delete the orphan at the Supabase dashboard, or run `/setup-gbrain --resume-provision <ref>` to pick up where you left off.
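The polling shape, sketched with a stubbed status check (the real skill calls the Supabase Management API; `project_status` here is a placeholder you'd replace):
```bash
# Stand-in for GET /v1/projects/{ref}; the real call needs the PAT in env.
project_status() { echo "ACTIVE_HEALTHY"; }

poll_until_healthy() {
  local deadline=$(( $(date +%s) + 180 ))   # 180s timeout, as above
  while [ "$(date +%s)" -lt "$deadline" ]; do
    if [ "$(project_status)" = "ACTIVE_HEALTHY" ]; then
      return 0
    fi
    sleep 5                                 # poll every 5 seconds
  done
  return 1                                  # caller prints the resume command
}
```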
### Path 2b: Supabase, create manually
Best for: you'd rather click through supabase.com yourself than paste a PAT.
**What happens:** The skill walks you through the four manual steps (signup → new project → wait ~2 min → copy Session Pooler URL), then takes over from Path 1's paste step. Same security treatment as Path 1.
### Path 3: PGLite local
Best for: try-it-first, no account, no cloud, no sharing. Or a dedicated "this Mac's brain" that stays isolated from any cloud agent.
**What happens:** `gbrain init --pglite`. Brain lives at `~/.gbrain/brain.pglite`. No network calls. Done in 30 seconds.
This is the best first choice if you just want to see what gbrain feels like before committing to cloud. You can always migrate later with `/setup-gbrain --switch`.
## MCP registration for Claude Code
By default the skill asks "Give Claude Code a typed tool surface for gbrain?" If you say yes, it runs:
```bash
claude mcp add gbrain -- gbrain serve
```
That registers gbrain's stdio MCP server with Claude Code. Now `gbrain search`, `gbrain put_page`, `gbrain get_page`, etc. show up as first-class tools in every session, not bash shell-outs.
**If `claude` is not on PATH**, the skill skips MCP registration gracefully with a manual-register hint. The CLI resolver still works from any skill that shells out to `gbrain` — MCP is an upgrade, not a prerequisite.
**Other local agents** (Cursor, Codex CLI, etc.) need their own MCP registration. The skill is Claude-Code-targeted for v1; other hosts can register `gbrain serve` manually in their own MCP config.
## Per-remote trust policy (the triad)
Every repo on your machine gets a policy decision: **read-write**, **read-only**, or **deny**.
- **read-write** — your agent can `gbrain search` from this repo's context AND write new pages back to the brain. Default for your own projects.
- **read-only** — your agent can search the brain but never writes new pages from this repo's sessions. Ideal for multi-client consultants: search the shared brain, don't contaminate it with Client A's code while you're in Client B's repo.
- **deny** — no gbrain interaction at all. The repo is invisible to gbrain tooling.
The skill asks once per repo the first time you run a gstack skill there. After that the decision is sticky — every worktree + branch of the same git remote shares the same policy, so you set it once and it follows you.
SSH and HTTPS remote variants collapse to the same key: `https://github.com/foo/bar.git` and `git@github.com:foo/bar.git` are the same repo.
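One way to do that normalization in shell (a sketch; the real logic lives in `gstack-gbrain-repo-policy normalize` and may differ):
```bash
# Collapse SSH and HTTPS remote URLs to one canonical policy key.
normalize_remote() {
  printf '%s\n' "$1" |
    sed -E -e 's#^(ssh://)?git@([^:/]+)[:/]#\2/#' \
           -e 's#^https?://##' \
           -e 's#\.git$##'
}
```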
**To change a policy:**
```bash
/setup-gbrain --repo # re-prompt for this repo only
# Or directly:
~/.claude/skills/gstack/bin/gstack-gbrain-repo-policy set "github.com/foo/bar" read-only
```
**To see every policy:**
```bash
~/.claude/skills/gstack/bin/gstack-gbrain-repo-policy list
```
Storage: `~/.gstack/gbrain-repo-policy.json`, mode 0600, schema-versioned so future migrations stay deterministic.
## Switching engines later
Picked PGLite and now want to join a team brain? One command:
```bash
/setup-gbrain --switch
```
The skill runs `gbrain migrate --to supabase --url "$URL"` wrapped in `timeout 180s`. Migration is bidirectional (Supabase → PGLite also works) and lossless — pages, chunks, embeddings, links, tags, and timeline all copy. Your original brain is preserved as a backup.
**If migration hangs:** another gstack session may be holding a lock on the source brain. The timeout fires at 3 minutes with an actionable message. Close other workspaces and re-run.
## GStack memory sync (a separate concern)
This is different from gbrain itself. Your gstack state (`~/.gstack/` — learnings, plans, retros, timeline, developer profile) is machine-local by default. "GStack memory sync" optionally pushes a curated, secret-scanned subset to a private git repo so your memory follows you across machines — and, if you're running gbrain, that git repo becomes indexable there too.
Turn it on with:
```bash
gstack-brain-init
```
You'll get a one-time privacy prompt: **everything allowlisted** / **artifacts only** (plans, designs, retros, learnings — skip behavioral data like timelines) / **off**. Every skill run syncs the queue at start and end — no daemon, no background process.
Secret-shaped content (AWS keys, GitHub tokens, PEM blocks, JWTs, bearer tokens) is blocked from sync before it leaves your machine.
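The scan's shape can be sketched with a few representative patterns (the real pattern set is broader; these four are illustrative):
```bash
# One regex per "secret-shaped" class from the list above. Illustrative only.
looks_secret_shaped() {
  grep -E -q \
    -e 'AKIA[0-9A-Z]{16}' \
    -e 'ghp_[A-Za-z0-9]{36}' \
    -e '-----BEGIN [A-Z ]*PRIVATE KEY-----' \
    -e 'eyJ[A-Za-z0-9_-]+\.eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+' \
    "$1"
}
```
A file that matches any pattern is held back from the sync queue.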
**On a new machine:** Copy `~/.gstack-brain-remote.txt` over, run `gstack-brain-restore`, and yesterday's learnings surface on today's laptop.
Full guide: [docs/gbrain-sync.md](docs/gbrain-sync.md). Error index: [docs/gbrain-sync-errors.md](docs/gbrain-sync-errors.md).
`/setup-gbrain` offers to wire this up for you at the end of initial setup — it's one more AskUserQuestion, and it integrates with the same private-repo infrastructure.
## Cleanup orphan projects
If you Ctrl-C'd mid-provision, tried three different names before settling on one, or otherwise accumulated gbrain-shaped Supabase projects you don't use, there's a subcommand for that:
```bash
/setup-gbrain --cleanup-orphans
```
The skill re-collects a PAT (one-time, discarded after), lists every project in your Supabase account whose name starts with `gbrain` and whose ref doesn't match your active `~/.gbrain/config.json` pooler URL. For each orphan it asks per-project: *"Delete orphan project `<ref>` (`<name>`, created `<date>`)?"* — no batching, no "delete all" shortcut. The active brain is never offered for deletion.
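The orphan filter, sketched over a hypothetical project-list JSON (the field names are assumptions about the Management API response, not its documented schema):
```bash
ACTIVE_REF="abc123"   # ref parsed from the pooler URL in ~/.gbrain/config.json
list_orphans() {
  python3 -c '
import json, sys
active = sys.argv[1]
for p in json.load(sys.stdin):
    # gbrain-shaped name, but not the brain currently in use
    if p["name"].startswith("gbrain") and p["id"] != active:
        print(p["id"], p["name"])
' "$ACTIVE_REF"
}
```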
## Command + flag reference
### `/setup-gbrain` entry modes
| Invocation | What it does |
|---|---|
| `/setup-gbrain` | Full flow: detect state, pick path, install, init, MCP, policy, optional memory-sync |
| `/setup-gbrain --repo` | Flip the per-remote trust policy for the current repo only |
| `/setup-gbrain --switch` | Migrate engine (PGLite ↔ Supabase) without re-running the other steps |
| `/setup-gbrain --resume-provision <ref>` | Resume a path-2a auto-provision that was interrupted during polling |
| `/setup-gbrain --cleanup-orphans` | List + per-project delete of orphan Supabase projects |
### Bin helpers (for scripting)
| Bin | Purpose |
|---|---|
| `gstack-gbrain-detect` | Emit current state as JSON: gbrain on PATH, version, config engine, doctor status, sync mode |
| `gstack-gbrain-install` | Detect-first installer (probes `~/git/gbrain`, `~/gbrain`, then fresh clone). Has `--dry-run` and `--validate-only` flags. PATH-shadow check exits 3 with remediation menu. |
| `gstack-gbrain-lib.sh` | Sourced, not executed. Provides `read_secret_to_env VARNAME "prompt" [--echo-redacted "<sed-expr>"]` |
| `gstack-gbrain-supabase-verify` | Structural URL check. Rejects direct-connection URLs (`db.*.supabase.co:5432`) with exit 3 |
| `gstack-gbrain-supabase-provision` | Management API wrapper. Subcommands: `list-orgs`, `create`, `wait`, `pooler-url`, `list-orphans`, `delete-project`. All require `SUPABASE_ACCESS_TOKEN` in env. `create` and `pooler-url` also require `DB_PASS`. `--json` mode available on every subcommand. |
| `gstack-gbrain-repo-policy` | Per-remote trust triad. Subcommands: `get`, `set`, `list`, `normalize` |
| `gstack-gbrain-source-wireup` | Registers your `~/.gstack/` brain repo with gbrain as a federated source via `gbrain sources add` + `git worktree`, then runs an initial `gbrain sync`. Idempotent. Replaces the dead `consumers.json + /ingest-repo` HTTP wireup from v1.12.x. Flags: `--strict`, `--source-id <id>`, `--no-pull`, `--uninstall`, `--probe`. |
### gbrain CLI (upstream tool)
gbrain itself ships these commands, which gstack wraps:
| Command | Purpose |
|---|---|
| `gbrain init --pglite` | Initialize a local PGLite brain |
| `gbrain init --non-interactive` | Initialize via env (`GBRAIN_DATABASE_URL` or `DATABASE_URL`). Never pass a URL as argv — it'll leak to shell history. |
| `gbrain doctor --json` | Health check. Returns `{status: "ok"|"warnings"|"error", health_score: 0-100, checks: [...]}` |
| `gbrain migrate --to supabase --url ...` | Move a PGLite brain to Supabase (lossless, preserves source as backup) |
| `gbrain migrate --to pglite` | Reverse migration |
| `gbrain search "query"` | Search the brain |
| `gbrain put_page --title "..." --tags "a,b" <<<"content"` | Write a page |
| `gbrain get_page "<slug>"` | Fetch a page |
| `gbrain serve` | Start the MCP stdio server (used by `claude mcp add`) |
### Config files + state
| Path | What lives there |
|---|---|
| `~/.gbrain/config.json` | Engine (pglite/postgres), database URL or path, API keys. Mode 0600. Written by `gbrain init`. |
| `~/.gstack/gbrain-repo-policy.json` | Per-remote trust triad. Schema v2. Mode 0600. |
| `~/.gstack/.setup-gbrain.lock.d` | Concurrent-run lock (atomic mkdir). Released on normal exit + SIGINT. |
| `~/.gstack/.brain-queue.jsonl` | Pending sync entries for gstack memory sync |
| `~/.gstack/.brain-last-push` | Timestamp of last sync push (for `/health` scoring) |
| `~/.gstack-brain-remote.txt` | URL of your gstack memory sync remote (safe to copy between machines) |
| `~/.gstack/.setup-gbrain-inflight.json` | Reserved for future `--resume-provision` persisted state |
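The lock in that table relies on `mkdir` being atomic: it either creates the directory (you hold the lock) or fails because another process already did. A generic sketch, with illustrative names:
```bash
LOCK_DIR="${TMPDIR:-/tmp}/demo-lock.d"   # the skill uses ~/.gstack/.setup-gbrain.lock.d

acquire_lock() {
  if mkdir "$LOCK_DIR" 2>/dev/null; then
    trap 'rmdir "$LOCK_DIR"' EXIT INT    # release on normal exit + SIGINT
    return 0
  fi
  echo "Another instance is running (or a stale lock at $LOCK_DIR)" >&2
  return 1
}
```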
### Environment variables
| Var | Where it's read | What it does |
|---|---|---|
| `SUPABASE_ACCESS_TOKEN` | `gstack-gbrain-supabase-provision` | PAT for Management API calls. Discarded after each setup run. |
| `DB_PASS` | `gstack-gbrain-supabase-provision` (create, pooler-url) | Generated DB password. Never in argv. |
| `GBRAIN_DATABASE_URL` | `gbrain init`, `gbrain doctor`, etc. | Postgres connection string (Supabase pooler URL for us). Env takes precedence over `~/.gbrain/config.json`. |
| `DATABASE_URL` | `gbrain init` (fallback) | Same semantics as `GBRAIN_DATABASE_URL`; checked second. |
| `SUPABASE_API_BASE` | `gstack-gbrain-supabase-provision` | Override the Management API host. Used by tests to point at a mock server. |
| `GBRAIN_INSTALL_DIR` | `gstack-gbrain-install` | Override default install path (`~/gbrain`) |
| `GSTACK_HOME` | every bin helper | Override `~/.gstack` state dir. Heavy test use. |
## Security model
One rule for every secret this skill touches: **env var only, never argv, never logged, never written to disk by us.** The only persistent storage is gbrain's own `~/.gbrain/config.json` at mode 0600, which is gbrain's discipline, not ours.
**Enforced in code:**
- CI grep test in `test/skill-validation.test.ts` fails the build if `$SUPABASE_ACCESS_TOKEN` or `$GBRAIN_DATABASE_URL` appears in an argv position
- CI grep test fails if `--insecure`, `-k`, or `NODE_TLS_REJECT_UNAUTHORIZED=0` appear in `bin/gstack-gbrain-supabase-provision`
- `set +x` at the top of the provision helper prevents debug tracing from leaking PAT
- Telemetry payload contains only enumerated categorical values (scenario, install result, MCP opt-in, trust tier) — never free-form strings that could contain secrets
**Enforced via tests:**
- `test/secret-sink-harness.test.ts` runs every secret-handling bin with a seeded secret and asserts the seed never appears in any captured channel (stdout, stderr, files under `$HOME`, telemetry JSONL). Four match rules per seed: exact, URL-decoded, first-12-char prefix, base64.
- Positive controls in the same test file deliberately leak seeds in every covered channel and assert the harness catches each one. Without the positive controls, a harness that silently under-reports would look identical to a working harness.
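A sketch of the per-seed match rules in shell (function names are illustrative; the real harness is TypeScript, and the URL-decoded rule, which decodes the captured channel before searching, is omitted here):

```shell
# Expand a seeded secret into the variants the harness hunts for.
seed_variants() {
  local seed=$1
  printf '%s\n' "$seed"             # exact match
  printf '%s\n' "${seed:0:12}"      # first-12-char prefix
  printf '%s' "$seed" | base64      # base64-encoded form
  echo                              # guarantee a trailing newline
}

# Return success (0) if any variant appears in the captured text.
channel_leaks() {
  local seed=$1 capture=$2 v
  while IFS= read -r v; do
    [ -n "$v" ] || continue
    case "$capture" in *"$v"*) return 0 ;; esac
  done < <(seed_variants "$seed")
  return 1
}
```

The prefix rule is what catches truncated log lines: a channel that trims long values would evade the exact and base64 rules but still carry the first twelve characters.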
**What you can still leak** (the honest limits of v1):
- If you paste a secret into a normal chat message outside `read -s`, it's in the conversation transcript and any host-side logging
- The leak harness doesn't dump subprocess environment — a bin that `env >> ~/.log` would evade detection (no bin in v1 does this; grep tests prevent it)
- Your shell's own `HISTFILE` behavior is your shell's, not ours — we never pass secrets to argv so they don't land there via our code, but nothing stops you from pasting one into a raw `curl` command yourself
## Troubleshooting
### "PATH SHADOWING DETECTED" during install
Another `gbrain` binary is earlier in PATH than the one the installer just linked. The installer's version check caught it. Fix one of:
- `rm $(which gbrain)` if you don't need the other one
- Prepend `~/.bun/bin` to PATH in your shell rc so the linked binary wins
- Set `GBRAIN_INSTALL_DIR` to the shadowing binary's install directory and re-run
Then re-run `/setup-gbrain`.
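The check itself is short enough to sketch; `~/.bun/bin/gbrain` as the expected link target is an assumption carried over from the PATH fix above:

```shell
# Succeeds when the first `gbrain` on PATH is the one we linked (or when
# none is installed yet); fails when a shadowing binary wins.
check_shadowing() {
  local expected=$1 actual
  actual=$(command -v gbrain || true)
  if [ -n "$actual" ] && [ "$actual" != "$expected" ]; then
    echo "PATH SHADOWING DETECTED: $actual wins over $expected" >&2
    return 1
  fi
}
```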
### "rejected direct-connection URL"
You pasted a `db.<ref>.supabase.co:5432` URL. Those are IPv6-only and fail in most environments. Use the Session Pooler URL instead: Supabase dashboard → Settings → Database → Connection Pooler → **Session** → copy URI (port 6543).
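A sketch of the URL screen (the host patterns are assumptions about Supabase's current naming; the real check lives in the provision helper):

```shell
# Reject direct-connection hosts (IPv6-only in practice) and accept
# session-pooler URLs on port 6543.
acceptable_db_url() {
  case "$1" in
    *db.*.supabase.co:5432*) return 1 ;;  # direct connection: rejected
    *:6543/*)                return 0 ;;  # session pooler: accepted
    *)                       return 1 ;;  # anything else: re-prompt
  esac
}
```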
### Auto-provision times out at 180s
The Supabase project is still initializing. Your ref was printed in the exit message. Wait a minute, then:
```bash
/setup-gbrain --resume-provision <ref>
```
The skill re-collects a PAT, skips project creation, and resumes polling.
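The resume path is a bounded poll. A sketch of its shape, with the readiness probe abstracted to a command (the real helper's probe and interval may differ):

```shell
# Poll `check_cmd` until it succeeds or `timeout` seconds elapse.
poll_until_ready() {
  local check_cmd=$1 timeout=${2:-180} interval=${3:-5}
  local deadline=$(( $(date +%s) + timeout ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    if $check_cmd; then return 0; fi
    sleep "$interval"
  done
  echo "project still not ready after ${timeout}s; re-run with --resume-provision" >&2
  return 1
}
```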
### "Another `/setup-gbrain` instance is running"
You have a stale lock directory. If you're sure no other instance is actually running:
```bash
rm -rf ~/.gstack/.setup-gbrain.lock.d
```
Then re-run.
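The lock is a plain `mkdir`, which is atomic on POSIX filesystems, so both sides of it fit in a short sketch (the trap-based release is illustrative):

```shell
# Take the setup lock: mkdir either creates the directory atomically or
# fails because another (or a stale) instance already holds it.
acquire_setup_lock() {
  local lock="${GSTACK_HOME:-$HOME/.gstack}/.setup-gbrain.lock.d"
  if mkdir "$lock" 2>/dev/null; then
    trap "rmdir '$lock' 2>/dev/null" EXIT   # release on normal exit
    return 0
  fi
  echo "Another /setup-gbrain instance is running (or stale lock: $lock)" >&2
  return 1
}
```

A crashed instance never runs its trap, which is exactly how the stale directory in this troubleshooting entry comes to exist.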
### "No cross-model tension" on policy file
You edited `~/.gstack/gbrain-repo-policy.json` by hand with legacy `allow` values? No problem. On the next read, gstack auto-migrates `allow` → `read-write` and adds `_schema_version: 2`. One log line on stderr; the migration is idempotent and deterministic.
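The migration is mechanical enough to express as a jq filter. A sketch, assuming a flat remote-URL-to-tier object (the real policy schema may carry more structure):

```shell
# Rewrite every legacy "allow" value to "read-write" and stamp the
# schema version. `walk` requires jq >= 1.6.
migrate_policy() {
  jq 'walk(if . == "allow" then "read-write" else . end) + {_schema_version: 2}'
}
```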
### `gbrain doctor` says "warnings"
`/health` treats that as yellow, not red. Check `gbrain doctor --json | jq .checks` to see which sub-checks are warning. Typical causes: resolver MECE overlap (skill names clashing) or DB connection not yet configured.
### Switching PGLite → Supabase hangs
Another gstack session in a sibling Conductor workspace may be holding a lock on your local PGLite file via its preamble's `gstack-brain-sync` call. Close other workspaces, re-run `/setup-gbrain --switch`. The timeout is bounded at 180s so you'll never actually wait forever.
## Why this design
**Why per-remote trust triad and not binary allow/deny?** Multi-client consultants need search without write-back. A freelance dev working on Client A in the morning and Client B in the afternoon can't let A's code insights leak into a brain Client B can search. Read-only solves that cleanly.
**Why not bundle gbrain into gstack?** Gbrain is a separate, actively-developed project with its own release cadence, schema migrations, and MCP surface. Bundling would mean gstack has to gate gbrain updates, which slows gbrain improvements from reaching users. Separate-but-integrated lets each ship on its own cadence.
**Why `gbrain init --non-interactive` via env var and not a flag?** Connection strings contain database passwords. Passing them as argv lands the password in `ps`, shell history, and process listings. Env-var handoff keeps the secret in process memory only. Gbrain supports both `GBRAIN_DATABASE_URL` and `DATABASE_URL`; we use the former to avoid collisions with non-gbrain tooling.
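The handoff pattern, with `sh` standing in for `gbrain init` (the URL is an illustrative placeholder, not a real credential):

```shell
# The real invocation is:
#   GBRAIN_DATABASE_URL="$url" gbrain init --non-interactive
# The demo below shows the property that matters: the child process sees
# the secret via its environment while its argv stays clean.
url='postgresql://user:pw@pooler.example:6543/postgres'   # placeholder
GBRAIN_DATABASE_URL="$url" sh -c '
  echo "argv: $*"                              # no secret here
  [ -n "$GBRAIN_DATABASE_URL" ] && echo "env: received"
'
```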
**Why fail-hard on PATH shadowing instead of warn-and-continue?** A shadowed `gbrain` means every subsequent command calls a different binary than the one we just installed. That's a silent version-drift bug that surfaces as mysterious feature gaps weeks later. Setup skills have one job — set up a working environment. Refusing to install into a broken one is the setup-skill-correct behavior.
**Why not auto-import every repo?** Privacy + noise. An auto-import preamble hook that ingests every repo you touch would: (a) leak work code into a shared brain without consent, and (b) clog search with throwaway repos. The per-remote policy makes ingestion an explicit, per-repo decision. `/setup-gbrain` doesn't install any auto-import hook today — but the policy store is forward-compatible for one later.
## Related skills + next steps
- `/health` — includes a GBrain dimension (doctor status, sync queue depth, last-push age) in its 0-10 composite score. The dimension is omitted when gbrain isn't installed; running `/health` on a non-gbrain machine doesn't penalize that choice.
- `/gstack-upgrade` — keeps gstack itself up to date. Does NOT upgrade gbrain independently. To bump gbrain, update `PINNED_COMMIT` in `bin/gstack-gbrain-install` and re-run `/setup-gbrain`.
- `/retro` — weekly retrospective pulls learnings and plans from your gbrain when memory sync is on, letting the retro reference cross-machine history.
Run `/setup-gbrain` and see what sticks.