v1.12.0.0 feat: /setup-gbrain — coding-agent onboarding for gbrain (#1183)

* feat(setup-gbrain): add gstack-gbrain-repo-policy bin helper

Per-remote trust-tier store for the forthcoming /setup-gbrain skill.
Tiers are the D3 triad (read-write / read-only / deny), keyed by a
normalized remote URL so ssh-shorthand and https variants collapse to
the same entry. The file carries _schema_version: 2 (D2-eng); legacy
`allow` values from pre-D3 experiments auto-migrate to `read-write`
on first read; the migration is idempotent and emits a one-shot log line.
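
A sketch of the collapse, with illustrative names (the shipped key format is the bin's own):

```bash
# Illustrative only; normalize_remote is a hypothetical name and the
# shipped key format may differ. Collapses git@host:path, ssh://,
# https://, and case variants to one lowercase host/path key.
normalize_remote() {
  local url=$1
  url=${url%.git}
  case "$url" in
    git@*:*)   url=${url#git@}; url=${url/:/\/} ;;
    ssh://*)   url=${url#ssh://}; url=${url#git@} ;;
    https://*) url=${url#https://} ;;
    http://*)  url=${url#http://} ;;
  esac
  printf '%s\n' "$url" | tr '[:upper:]' '[:lower:]'
}
normalize_remote git@GitHub.com:Acme/Repo.git    # -> github.com/acme/repo
normalize_remote https://github.com/acme/repo    # -> github.com/acme/repo
```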

Pure bash + jq to match the existing gstack-brain-* family. Atomic
writes via tmpfile + rename. Policy file mode 0600. Corrupt files
quarantine to .corrupt-<ts> and start fresh.
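
The write/load pattern, roughly (path and function names are illustrative):

```bash
# Hedged sketch; POLICY_FILE's real location is the bin's own detail.
POLICY_FILE="${GSTACK_HOME:-$HOME/.gstack}/gbrain-repo-policy.json"

save_policy() {                        # $1 = full JSON document
  local tmp
  tmp=$(mktemp "$POLICY_FILE.XXXXXX")
  printf '%s\n' "$1" >"$tmp"
  chmod 0600 "$tmp"
  mv -f "$tmp" "$POLICY_FILE"          # rename is atomic on one filesystem
}

load_policy() {
  if ! jq -e . "$POLICY_FILE" >/dev/null 2>&1; then
    [ -f "$POLICY_FILE" ] && mv "$POLICY_FILE" "$POLICY_FILE.corrupt-$(date +%s)"
    save_policy '{"_schema_version":2}' # quarantine done; start fresh
  fi
  cat "$POLICY_FILE"
}
```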

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): unit tests for gstack-gbrain-repo-policy

24 tests covering normalize (ssh/https/shorthand/uppercase collapse to
one key), set/get round-trip, all three D3 tiers accepted, invalid
tiers rejected, file mode 0600, _schema_version field written on fresh
files, legacy allow migration (including idempotence and preservation
of non-allow entries), corrupt-JSON quarantine + fresh-file recovery,
list output sorting, and get-without-arg auto-detect against a git
repo with no origin.

All tests green against a per-test tmpdir GSTACK_HOME so nothing
leaks into the real ~/.gstack.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-detect state reporter

Pure-introspection JSON emitter for the /setup-gbrain skill's
start-up branching. Reports: gbrain presence + version on PATH,
~/.gbrain/config.json existence + engine, `gbrain doctor --json`
health (wrapped in timeout 5s to match the /health D6 pattern),
gstack-brain-sync mode via gstack-config, and ~/.gstack/.git
presence for the memory-sync feature.

Never modifies state. Always emits valid JSON even when every check
is false. Handles malformed ~/.gbrain/config.json without crashing
— gbrain_engine is null in that case, not an error.
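
Illustrative all-false output (key set matches the skill's Step 1 contract; exact values are the bin's):

```bash
# Hypothetical all-false shape, composed the jq way the bin family favors:
jq -n '{
  gbrain_on_path:         false,
  gbrain_version:         null,
  gbrain_config_exists:   false,
  gbrain_engine:          null,
  gbrain_doctor_ok:       false,
  gstack_brain_sync_mode: "off",
  gstack_brain_git:       false
}'
```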

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-install with D5 detect-first + D19 PATH-shadow guard

Clones gbrain at a pinned commit (v0.18.2) and registers it via
`bun link`. Before any clone:

  D5 detect-first — probes ~/git/gbrain, ~/gbrain, and the install
  target for a valid pre-existing clone (package.json with name
  "gbrain" and bin.gbrain set). If one is found, `bun link` runs
  there instead of cloning a second copy. Prevents the day-one
  duplicate-install footgun on the skill author's own machine.

After install:

  D19 PATH-shadow guard — reads the install-dir's package.json
  version, compares to `gbrain --version` on PATH. On mismatch:
  exits 3, prints every gbrain binary on PATH via `type -a`, and
  gives a remediation menu. Setup skills refuse broken environments
  instead of warning and continuing.
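
Roughly, assuming an INSTALL_DIR variable (remediation-menu text elided):

```bash
# Hedged sketch of the D19 comparison; exit 3 and `type -a` are from the
# description above, the message text is illustrative.
want=$(jq -r .version "$INSTALL_DIR/package.json")
have=$(gbrain --version 2>/dev/null)
if [ "${want#v}" != "${have#v}" ]; then           # tolerate v-prefixes
  echo "PATH shadow: gbrain on PATH is ${have:-missing}, install dir has $want" >&2
  type -a gbrain >&2                              # every gbrain on PATH
  exit 3
fi
```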

Prereq checks (bun, git, https://github.com reachability) fail fast
with install hints. --dry-run and --validate-only flags let the
skill probe the plan without touching state; tests use them to
cover D5 and D19 without exercising real bun link.

Pin is a load-bearing version: setup-gbrain v1 verified against
gbrain v0.18.2. Updating requires re-running Pre-Impl Gate 1 to
verify gbrain's CLI + config shapes haven't drifted.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): unit tests for gstack-gbrain-detect + install

15 tests covering: detect emits valid JSON when nothing configured,
reports gstack_brain_git on GSTACK_HOME/.git presence, reads
~/.gbrain/config.json engine, tolerates malformed config, detects
a mocked gbrain binary on PATH with version parsing.

For install: D5 detect-first uses ~/git/gbrain fixtures under a
sandboxed HOME, verifies fall-through to fresh clone when no valid
clone exists, rejects invalid package.json shapes. D19 PATH-shadow
validation uses a fake gbrain on a minimal SAFE_PATH to simulate
version mismatch, same-version-pass, v-prefix tolerance, missing
binary on PATH, and missing version field in package.json.

--validate-only mode in the install bin makes the D19 check unit-
testable without running real bun link (which touches ~/.bun/bin).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-lib.sh with read_secret_to_env (D3-eng)

Shared secret-read helper for PAT (D11) and pooler URL paste (D16).
One implementation of the hardest-to-get-right pattern: stty -echo +
SIGINT/TERM/EXIT trap that restores terminal mode, read into a named
env var, optional redacted preview.

Validates the target var name against [A-Z_][A-Z0-9_]* to prevent
bash name-injection via `read -r "$varname"`. When stdin is not a TTY
(CI, piped tests) the stty branches skip cleanly — piped input doesn't
echo anyway. Exports the var after read so subprocesses inherit it;
callers own the `unset` at handoff time.

Sourced, not executed — no +x bit.
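
The core pattern, sketched (the shipped helper adds flag parsing and the redacted preview):

```bash
# Hedged sketch; not the shipped function body.
read_secret_to_env() {
  local var=$1 prompt=$2
  # Name guard: reject anything `read -r "$var"` could misinterpret.
  printf '%s' "$var" | grep -qE '^[A-Z_][A-Z0-9_]*$' || return 2
  if [ -t 0 ]; then                        # real TTY: suppress echo
    local saved; saved=$(stty -g)
    trap 'stty "$saved" 2>/dev/null' INT TERM EXIT
    stty -echo
  fi
  printf '%s' "$prompt" >&2
  IFS= read -r "$var"
  if [ -t 0 ]; then
    stty "$saved"; trap - INT TERM EXIT; echo >&2
  fi
  export "$var"                            # caller owns the later unset
}
```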

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-supabase-verify structural URL check

Zero-network validator for Supabase Session Pooler URLs before handing
them to `gbrain init`. Canonical shape verified per gbrain init.ts:266:
  postgresql://postgres.<ref>:<password>@aws-0-<region>.pooler.supabase.com:6543/postgres

Rejects direct-connection URLs (db.*.supabase.co:5432) with a distinct
exit code 3 and clear IPv6-failure remediation — that's the most common
paste mistake users make, so it earns its own UX path rather than a
generic "bad URL" error.

Never echoes the URL (contains a password) in error messages; tests
verify a distinct seed password never appears in stderr on any reject
path. Accepts URL from argv[1] or stdin ("-" or no arg).
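
Approximate shape test (the shipped pattern and messages are stricter; like the bin, the sketch never prints the URL):

```bash
# Hedged approximation of the structural check.
check_pooler_url() {
  local url=$1
  if printf '%s' "$url" | grep -qiE \
      '^postgresql://postgres\.[a-z0-9]+:[^@]+@aws-0-[a-z0-9-]+\.pooler\.supabase\.com:6543/postgres$'
  then
    return 0
  elif printf '%s' "$url" | grep -qiE '@db\.[a-z0-9]+\.supabase\.co:5432/'; then
    # The most common paste mistake: the direct endpoint is IPv6-only.
    echo "direct-connection URL detected; paste the Session Pooler URL (port 6543)" >&2
    return 3
  fi
  echo "URL does not match the Session Pooler shape" >&2
  return 2
}
```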

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): unit tests for supabase-verify + lib.sh secret helper

22 tests. verify: accepts canonical pooler URL (argv + stdin modes),
rejects direct-connection URL with exit 3, rejects wrong scheme, wrong
port, empty password, missing userinfo, plain 'postgres' user (catches
direct-URL paste errors), wrong host, empty URL. Case-insensitive host
match. Explicit negative: error messages never echo the URL password.

lib.sh read_secret_to_env: reads piped stdin into the named env var,
exports to subprocesses, redacted-preview emits masked form on stderr
with the seed password absent, rejects invalid var names (lowercase,
leading digit, hyphens), rejects missing/unknown flags, secret value
never appears on stdout.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-supabase-provision Management API wrapper

Four subcommands: list-orgs, create, wait, pooler-url. Built against
the verified Supabase Management API shape (Pre-Impl Gate 1):

  - POST /v1/projects with {name, db_pass, organization_slug, region}
    — not the original plan's /v1/organizations/{ref}/projects
  - No `plan` field; subscription tier is org-level per the OpenAPI
    description ("Subscription Plan is now set on organization level
    and is ignored in this request")
  - GET /v1/projects/{ref}/config/database/pooler for pooler config
    — not /config/database

Secrets discipline: SUPABASE_ACCESS_TOKEN (PAT) and DB_PASS read from
env only, never from argv (D8 grep test enforces this). `set +x` at
the top as a defensive default so debug tracing never leaks secrets.
Management API hostname is hardcoded; the only override is the
SUPABASE_API_BASE env var (used by the test mock server) — no
user-controlled URL portion (SSRF guard).

HTTP error paths: 401/403 → exit 3 (auth), 402 → 4 (quota), 409 → 5
(conflict), 429 + 5xx → exponential-backoff retry up to 3 attempts,
then exit 8. Wait subcommand polls every 5s until ACTIVE_HEALTHY
with a configurable timeout; terminal states (INIT_FAILED, REMOVED,
etc.) exit 7 immediately with a clear message. Timeout emits the
--resume-provision hint so the skill can recover.
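
As a sketch of the mapping (retry_with_backoff is a stand-in name; message text is illustrative):

```bash
case "$http_code" in
  401|403) echo "auth failed: check SUPABASE_ACCESS_TOKEN" >&2; exit 3 ;;
  402)     echo "payment/quota required" >&2;                   exit 4 ;;
  409)     echo "conflict: project already exists?" >&2;        exit 5 ;;
  429|5??) retry_with_backoff || exit 8 ;;   # up to 3 attempts, then give up
esac
```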

Pooler-url constructs the URL locally from db_user/host/port/name +
DB_PASS rather than trusting the API response's connection_string
field, which is templated with [PASSWORD] rather than the real value.
Handles both object and array response shapes, preferring session
pool_mode when Supabase returns multiple pooler configs.
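
Sketch of the local construction, assuming the db_user/db_host/db_port/db_name field names implied above:

```bash
# Hedged sketch; real field names are the Management API's.
build_pooler_url() {
  local cfg=$1    # JSON body from /config/database/pooler
  local row
  row=$(printf '%s' "$cfg" | jq -c \
    'if type == "array" then (map(select(.pool_mode == "session")) + .)[0] else . end')
  printf 'postgresql://%s:%s@%s:%s/%s\n' \
    "$(printf '%s' "$row" | jq -r .db_user)" "$DB_PASS" \
    "$(printf '%s' "$row" | jq -r .db_host)" \
    "$(printf '%s' "$row" | jq -r .db_port)" \
    "$(printf '%s' "$row" | jq -r .db_name)"
}
```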

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): unit tests for gstack-gbrain-supabase-provision via mock API

22 tests covering D21 HTTP error suite (401/403/402/409/429/5xx) and
happy paths for all four subcommands. Every test spins up a Bun.serve
mock server bound to SUPABASE_API_BASE so nothing hits the real API.

Uses Bun.spawn (async) rather than spawnSync because spawnSync blocks
the Bun event loop, which prevents Bun.serve mocks from responding —
calls would hit curl's own timeout instead of round-tripping.

Verifies: POST body contains organization_slug (not organization_id)
and no `plan` field, bearer-token auth header, retry-on-429 with
eventual success, exit-8 on persistent 5xx after max retries, wait
succeeds on ACTIVE_HEALTHY, exits 7 on INIT_FAILED, exits 6 with
--resume-provision hint on timeout, pooler-url builds URL locally
from db_user/host/port/name + DB_PASS (not response connection_string
template), handles array pooler responses.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add SKILL.md.tmpl — user-facing skill prompt

Stitches together every slice built so far (repo-policy, detect,
install, lib.sh secret helper, supabase-verify, supabase-provision)
into a single interactive flow. Paths: Supabase existing-URL, Supabase
auto-provision (D7), Supabase manual, PGLite local, switch (PGLite ↔
Supabase via gbrain migrate wrapped in timeout 180s per D9).

Secrets discipline per D8/D10/D11: PAT + DB_PASS + pooler URL all
read via read_secret_to_env from lib.sh and handed to gbrain via
GBRAIN_DATABASE_URL env, never argv. PAT carries the full D11 scope
disclosure before collection and an explicit revocation reminder after
success. D12 SIGINT recovery prints the in-flight ref + resume command.

D18 MCP registration is scoped honestly to Claude Code — skips with
a manual-register hint when `claude` is not on PATH. The D3 per-remote
trust-triad question (read-write/read-only/deny/skip-for-now) gates
repo import; the triad values compose with the D2-eng schema-version
policy file so future migrations stay deterministic.

The skill takes a concurrent-run lock via mkdir ~/.gstack/.setup-gbrain.lock.d
(atomic, same pattern as gstack-brain-sync). Telemetry (D4) payload
carries enumerated categorical values only — never URL, PAT, or any
postgresql:// substring.

--repo, --switch, --resume-provision, --cleanup-orphans shortcut modes
documented inline; the skill parses its own invocation args.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(health): integrate gbrain as D6 composite dimension

Adds a GBrain row to the /health dashboard rubric with weight 10%.
Three sub-signals rolled into one 0-10 score: doctor status (0.5),
sync queue depth (0.3), last-push age (0.2). The sub-weights redistribute when
gbrain_sync_mode is off so the dimension stays fair.
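
Worked example of the rollup (sub-scores are hypothetical; the renormalization shown is one plausible reading of "redistributes"):

```bash
# Hypothetical sub-scores: doctor ok (10), queue a bit deep (7), push fresh (9).
awk 'BEGIN { printf "%.1f\n", 0.5*10 + 0.3*7 + 0.2*9 }'          # 8.9
# With gbrain_sync_mode off, the 0.3 queue weight drops out:
awk 'BEGIN { printf "%.1f\n", (0.5*10 + 0.2*9) / (0.5 + 0.2) }'  # 9.7
```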

Weights rebalance: typecheck 25→22, lint 20→18, test 30→28,
deadcode 15→13, shell 10→9, gbrain +10 — sums to 100.

gbrain doctor --json wrapped in timeout 5s so a hung gbrain never
stalls the /health dashboard. Dimension is omitted (not red) when
gbrain is not installed — running /health on a non-gbrain machine
shouldn't penalize that choice.

History-JSONL adds a `gbrain` field. Pre-D6 entries read as null for
trend comparison; new tracking starts from first post-D6 run.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(test): add secret-sink-harness for negative-space leak testing (D21 #5)

Runs a subprocess with a seeded secret, captures every channel the
subprocess could leak through, and asserts the seed never appears.
Built per the D1-eng tightened contract: per-run tmp $HOME, four seed
match rules (exact + URL-decoded + first-12-char prefix + base64),
fd-level stdout/stderr capture via Bun.spawn, post-mortem walk of
every file written under $HOME, separate buckets for telemetry JSONL.

Reusable: any future skill that handles secrets can import
runWithSecretSink and run positive/negative controls against its own
bins. The harness itself is ~180 lines of TS with no external deps
beyond Bun + node:fs.

Out of scope for v1 (documented as follow-ups): subprocess env-dump
capture (needs portable /proc reading) and the user's real shell
history (the bins don't modify it).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: secret-sink harness positive controls + real-bin negative controls

11 tests. Positive controls deliberately leak a seed in every covered
channel (stdout, stderr, a file under $HOME, the telemetry JSONL path,
base64-encoded, first-12-char prefix) and assert the harness catches
each one. Without these, a harness that silently under-reports would
look identical to a harness that works.

Negative controls run real setup-gbrain bins with distinctive seeds:
  - supabase-verify rejects a mysql:// URL and a direct-connection URL,
    password never appears in any captured channel
  - lib.sh read_secret_to_env reads piped stdin, emits only the length,
    seed value stays invisible
  - supabase-provision on an auth-failure path fails fast without
    leaking the PAT to any channel

Covers D21 #5 leak harness + uses it to validate D3-eng, D10, D11
discipline end-to-end on the already-shipped bins.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add list-orphans + delete-project subcommands (D20)

Powers /setup-gbrain --cleanup-orphans. list-orphans filters the
authenticated user's Supabase projects by name prefix (default
"gbrain") and excludes the project the local ~/.gbrain/config.json
currently points at, so only unclaimed gbrain-shaped projects come
back. Active-ref detection parses the pooler URL's user portion
(postgres.<ref>:<pw>@...).
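
The userinfo parse, roughly (the config key holding the URL is an assumption, not gbrain's documented schema):

```bash
# Hedged sketch; ".database_url" is a hypothetical key name.
active_ref=$(jq -r '.database_url // empty' ~/.gbrain/config.json 2>/dev/null |
  sed -nE 's#^postgresql://postgres\.([A-Za-z0-9]+):.*#\1#p')
# Empty active_ref (no config) => every gbrain-prefixed project is an orphan.
```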

delete-project is a thin DELETE /v1/projects/{ref} wrapper with no
confirmation of its own — the skill's UI layer owns the per-project
confirm AskUserQuestion loop. Keeps responsibilities clean: the bin
manages HTTP; the skill manages user intent.

Both subcommands reuse the existing api_call retry+backoff and the
same PAT discipline (env only, never argv).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): list-orphans active-ref filtering + delete-project 404

6 new tests bringing the supabase-provision suite to 28:

list-orphans:
  - Filters to gbrain-prefixed projects, excludes the active-ref derived
    from ~/.gbrain/config.json's pooler URL
  - Treats all gbrain-prefixed projects as orphans when no config exists
    (first run on a new machine)
  - Respects custom --name-prefix for users who named their brain
    something else

delete-project:
  - Happy path sends DELETE /v1/projects/<ref> and returns {deleted_ref}
  - 404 surfaces cleanly (exit 2, "404" in stderr)
  - Missing <ref> positional rejected with exit 2

Uses per-test tmpdir HOME with a stubbed ~/.gbrain/config.json so
active-ref extraction runs against deterministic fixtures.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: regenerate setup-gbrain SKILL.md after main merge

* chore: bump version and changelog (v1.12.0.0)

Ships /setup-gbrain and its supporting infrastructure end-to-end:
per-remote trust policy, installer with PATH-shadow guard, shared
secret-read helper, structural URL verifier, Supabase Management
API wrapper, /health GBrain dimension, secret-sink test harness.

100 new tests across 5 suites, all green. Three pre-existing test
failures noted as P0 in TODOS.md.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: add USING_GBRAIN_WITH_GSTACK.md + update README for /setup-gbrain

README changes:
- Rewrote the "Cross-machine memory with GBrain sync" section into
  "GBrain — persistent knowledge for your coding agent." Covers the
  three /setup-gbrain paths (Supabase existing URL, auto-provision,
  PGLite local), MCP registration, per-remote trust triad, and the
  (still-separate) memory sync feature.
- Added /setup-gbrain row to the skills table pointing at the full guide.
- Added /setup-gbrain to both skill-list install snippets.
- Added USING_GBRAIN_WITH_GSTACK.md to the Docs table.

New doc (USING_GBRAIN_WITH_GSTACK.md):
- All three setup paths with trust-surface caveats
- MCP registration details (and honest Claude-Code-v1 scoping)
- Per-remote trust triad semantics + how to change a policy
- Switching engines (PGLite ↔ Supabase) via --switch
- GStack memory sync + its relationship to the gbrain knowledge base
- /setup-gbrain --cleanup-orphans for orphan Supabase projects
- Full command + flag reference, every bin helper, every env var
- Security model: what's enforced in code, what's enforced by the leak
  harness, and the honest limits of v1
- Troubleshooting: PATH shadowing, direct-connection URL reject,
  auto-provision timeout, stale lock, policy file hand-edits,
  migrate hang
- Why-this-design section explaining the non-obvious choices

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(brain-sync): secret scanner now catches Bearer-prefixed auth tokens in JSON

The bearer-token-json regex value charset was [A-Za-z0-9_./+=-]{16,},
which does NOT permit spaces. Real HTTP auth headers embed the scheme
name with a literal space — "Bearer <token>" — so the value portion
actually starts with "Bearer " and the existing regex couldn't match.
Result: any JSON blob containing "authorization":"Bearer ..." would
slip past the scanner and sync to the user's private brain repo with
the bearer token inline.

Added optional (Bearer |Basic |Token )? prefix in front of the value
charset. Now matches the common auth-scheme forms without broadening
the matcher to tolerate arbitrary whitespace (which would false-positive
on lots of benign JSON).
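
Reproducible with grep -E on just the value portion (the shipped scanner also anchors on the field name):

```bash
blob='{"authorization":"Bearer abcdef0123456789TOKEN"}'
old='"[A-Za-z0-9_./+=-]{16,}"'
new='"(Bearer |Basic |Token )?[A-Za-z0-9_./+=-]{16,}"'
printf '%s' "$blob" | grep -qE "$old" || echo "old regex misses"   # misses
printf '%s' "$blob" | grep -qE "$new" && echo "new regex catches"  # catches
```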

Verified against 5 positive cases (bearer-in-json, clean bearer, apikey
no-prefix, token with Bearer, password no-prefix) + 3 negative cases
(too-short tokens, non-secret field names like username, random JSON).

This closes the P0 security regression first noticed during v1.12.0.0
/ship. brain-sync.test.ts now passes all 7 secret-scan fixtures.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: mock-gh integration tests for gstack-brain-init auto-create path

8 tests covering the gh-repo-create happy path that had zero coverage
before. Existing brain-sync.test.ts always passes --remote <bare-url>
to bypass gh entirely, so the interactive default ("press Enter, we'll
run gh repo create for you") was shipping on trust.

Test strategy: write a bash stub for gh that records every call into
a file, then run gstack-brain-init with that stub on PATH. Assertions
verify: gh auth status is checked, gh repo create fires with the
computed gstack-brain-<user> default name + --private + --source
flags, fall-through to gh repo view when create reports already-exists,
user-provided URL bypasses gh entirely, gh-not-on-path and gh-not-authed
branches both prompt for URL, --remote flag short-circuits all gh
calls, conflicting-remote re-runs exit 1 with a clear message.
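
Stub shape, roughly (paths and log name are illustrative):

```bash
# Hedged sketch of the recording stub.
mkdir -p "$STUB_DIR"
cat >"$STUB_DIR/gh" <<'EOF'
#!/usr/bin/env bash
printf '%s\n' "$*" >>"${GH_CALL_LOG:?}"    # record each invocation verbatim
case "$1 $2" in
  "auth status") exit 0 ;;
  "repo create") echo "https://github.com/user/gstack-brain-user" ;;
  "repo view")   echo "https://github.com/user/gstack-brain-user" ;;
esac
EOF
chmod +x "$STUB_DIR/gh"
GH_CALL_LOG="$TMPDIR/gh.log" PATH="$STUB_DIR:$PATH" \
  ~/.claude/skills/gstack/bin/gstack-brain-init
grep -q '^repo create' "$TMPDIR/gh.log"    # assert the create call fired
```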

No real GitHub, no live auth. Gate tier — runs on every commit.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(e2e): privacy-gate AskUserQuestion fires from preamble (periodic tier)

Two periodic-tier E2E tests exercising the preamble's privacy gate
end-to-end via the Agent SDK + canUseTool. Previously uncovered:

- Positive: stages a fake gbrain on PATH + gbrain_sync_mode_prompted=false
  in config, runs a real skill, intercepts tool-use. Asserts the
  preamble fires a 3-option AskUserQuestion matching the canonical
  prose ("publish session memory" / "artifact" / "decline") and does
  NOT fire a second time in the same run (idempotency within session).

- Negative: same staging but prompted=true. Asserts the gate stays
  silent even with gbrain detected on the host.

Registered in test/helpers/touchfiles.ts as `brain-privacy-gate`
(periodic) with dependency tracking on generate-brain-sync-block.ts,
the three gstack-brain-* bins, gstack-config, and the Agent SDK runner.
Diff-based selection re-runs the E2E when any of those change.

Cost: ~$0.30-$0.50 per run. Only fires under EVALS=1 EVALS_TIER=periodic;
gate tier stays free.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: update TODOS for bearer-json fix + new brain-sync test coverage

Moves the bearer-json secret-scan regression from the P0 "pre-existing
failures" block into the Completed section with full context on the
fix, the mock-gh tests, the E2E privacy-gate tests, and the touchfile
registration. Remaining P0s are the GSTACK_HOME config-isolation bug
and the stale Opus 4.7 overlay pacing assertion, both unrelated.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(test): E2E privacy gate — ambient env + skill-file prompt

Two fixes to get the E2E actually running end-to-end (first attempt
failed at the SDK auth step, second at the assertion step):

1. Don't pass an explicit `env:` object to runAgentSdkTest. The SDK's
   auth pipeline misses ANTHROPIC_API_KEY when env is supplied as an
   object (verified against the plan-mode-no-op test, which passes no
   env and auths cleanly). Mutate process.env before the call instead,
   and restore the originals in finally so other tests don't inherit
   the ambient mutation.

2. The "Run /learn with no arguments" user prompt was too narrow — the
   model reduced it to a direct action and skipped the preamble
   privacy-gate directives entirely, so zero AskUserQuestions fired.
   Mirror the plan-mode-no-op pattern: point the model at the skill
   file on disk and ask it to follow every preamble directive. Bumped
   maxTurns from 6 to 10 to give the preamble room to execute.

Verified both tests pass under `EVALS=1 EVALS_TIER=periodic bun test
test/skill-e2e-brain-privacy-gate.test.ts` against a real ANTHROPIC_API_KEY.
Cost per run: ~$0.30-$0.50 per test.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(CLAUDE.md): source ANTHROPIC/OPENAI keys from ~/.zshrc for paid evals

Conductor workspaces don't inherit the interactive shell env, so
both API keys are absent from the default process env even though
they're set in ~/.zshrc. Documents the source-from-zshrc pattern
(grep + eval, never echo the value) plus the Agent SDK gotcha: do
NOT pass env as an object to runAgentSdkTest — mutate process.env
ambiently and restore in finally.

Discovered this during the brain-privacy-gate E2E. First run failed
at SDK auth with 401; second failed because explicit env handoff
bypassed the SDK's own auth routing. Fix pattern now codified so
the next paid-eval session in a Conductor workspace doesn't hit the
same two dead ends.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
New file: SKILL.md.tmpl (+449 lines)
---
name: setup-gbrain
preamble-tier: 2
version: 1.0.0
description: |
  Set up gbrain for this coding agent: install the CLI, initialize a
  local PGLite or Supabase brain, register MCP, capture per-remote trust
  policy. One command from zero to "gbrain is running, and this agent
  can call it." Use when: "setup gbrain", "connect gbrain", "start
  gbrain", "install gbrain", "configure gbrain for this machine". (gstack)
triggers:
  - setup gbrain
  - install gbrain
  - connect gbrain
  - start gbrain
  - configure gbrain
allowed-tools:
  - Bash
  - Read
  - Write
  - Edit
  - Glob
  - Grep
  - AskUserQuestion
---
{{PREAMBLE}}
# /setup-gbrain — Coding-Agent Onboarding for gbrain
You are setting up gbrain (https://github.com/garrytan/gbrain), a persistent
knowledge base, on the user's local Mac so that this coding agent (typically
Claude Code) can call it as both a CLI and an MCP tool.
**Scope honesty:** This skill's MCP registration step (5a) uses
`claude mcp add` and targets Claude Code specifically. Other local hosts
(Cursor, Codex CLI, etc.) will still get the gbrain CLI on PATH — they can
register `gbrain serve` in their own MCP config manually after setup.
**Audience:** local-Mac users. openclaw/hermes agents typically run in cloud
docker containers with their own gbrain; "sharing" a brain between them and
local Claude Code is only possible through shared Postgres (Supabase).
## User-invocable
When the user types `/setup-gbrain`, run this skill. Four shortcut modes:
- `/setup-gbrain` — full flow (default)
- `/setup-gbrain --repo` — only flip the per-remote policy for the current repo
- `/setup-gbrain --switch` — only migrate the engine (PGLite ↔ Supabase)
- `/setup-gbrain --resume-provision <ref>` — re-enter a previously interrupted
Supabase auto-provision at the polling step
- `/setup-gbrain --cleanup-orphans` — list + delete in-flight Supabase projects
Parse the invocation args yourself — these are prose hints to the skill, not
implemented as a dispatcher binary.
---
## Step 1: Detect current state
```bash
~/.claude/skills/gstack/bin/gstack-gbrain-detect
```
Capture the JSON output. It contains: `gbrain_on_path`, `gbrain_version`,
`gbrain_config_exists`, `gbrain_engine`, `gbrain_doctor_ok`,
`gstack_brain_sync_mode`, `gstack_brain_git`.
Skip downstream steps that are already done. Report the detected state in
one line so the user knows what you found:
> "Detected: gbrain v0.18.2 on PATH, engine=postgres, doctor=ok,
> sync=artifacts-only. Nothing to install; jumping to the policy check."
Branch on the `--repo`, `--switch`, `--resume-provision`, `--cleanup-orphans`
invocation flags here and skip to the matching step.
---
## Step 2: Pick a path (AskUserQuestion)
Only fire this if Step 1 shows no existing working config AND no shortcut
flag was passed. The question title: "Where should your brain live?"
Options (present these based on detected state):
- **1 — Supabase, I already have a connection string.** Cloud-agent users
whose openclaw/hermes provisioned one already. Paste the Session Pooler
URL from the Supabase dashboard (Settings → Database → Connection Pooler
→ Session). *Trust-surface caveat to include in the prompt:* "Pasting this
URL gives your local Claude Code full read/write access to every page your
cloud agent can see. If that's not the trust level you want, pick PGLite
local instead and accept the brains are disjoint."
- **2a — Supabase, auto-provision a new project.** You'll need a Supabase
Personal Access Token; the whole flow takes ~90 seconds. Best choice for a
shared team brain.
- **2b — Supabase, create manually.** Walk through supabase.com signup
yourself; paste the URL back when ready.
- **3 — PGLite local.** Zero accounts, ~30 seconds. Isolated brain on this
Mac only. Best for try-first.
- **Switch** (only if Step 1 detected an existing engine): "You already have
a `<engine>` brain. Migrate it to the other engine?" → runs
`gbrain migrate --to <other>` wrapped in `timeout 180s` (D9).
Do NOT silently pick; fire the AskUserQuestion.
---
## Step 3: Install gbrain CLI (if missing)
Only if `gbrain_on_path=false`:
```bash
~/.claude/skills/gstack/bin/gstack-gbrain-install
```
The installer runs D5 detect-first (probes `~/git/gbrain`, `~/gbrain` first),
then D19 PATH-shadow validation (post-link `gbrain --version` must match
install-dir `package.json`). On D19 failure the installer exits 3 with a
clear remediation menu; surface the full output to the user and STOP. Do not
continue the skill — the environment is broken until the user fixes PATH.
---
## Step 4: Initialize the brain
Path-specific.
### Path 1 (Supabase, existing URL)
Source the secret-read helper, collect the URL with `read_secret_to_env`
(no-echo read + redacted preview):
```bash
. ~/.claude/skills/gstack/bin/gstack-gbrain-lib.sh
read_secret_to_env GBRAIN_POOLER_URL "Paste Session Pooler URL: " \
  --echo-redacted 's#://[^@]*@#://***@#'
```
Then validate structurally:
```bash
printf '%s' "$GBRAIN_POOLER_URL" | ~/.claude/skills/gstack/bin/gstack-gbrain-supabase-verify -
```
If the verify exit code is 3 (direct-connection URL), the verifier's own
message explains the fix; surface it and re-prompt for a Session Pooler URL.
On success, hand off to gbrain via env var (D10, never argv):
```bash
GBRAIN_DATABASE_URL="$GBRAIN_POOLER_URL" gbrain init --non-interactive --json
```
Then `unset GBRAIN_POOLER_URL GBRAIN_DATABASE_URL` immediately. The URL is
now persisted in `~/.gbrain/config.json` at mode 0600 by gbrain itself.
### Path 2a (Supabase, auto-provision — D7)
Show the D11 PAT scope disclosure verbatim BEFORE collecting the token:
> *This Supabase Personal Access Token grants full read/write/delete access
> to every project in your Supabase account, not just the `gbrain` one we're
> about to create. Supabase doesn't currently support scoped tokens. We use
> this PAT only to: create one project, poll it until healthy, read the
> Session Pooler URL — then discard it from process memory. The token
> remains valid on Supabase's side until you manually revoke it at
> https://supabase.com/dashboard/account/tokens — we recommend revoking
> immediately after setup completes.*
Then:
```bash
. ~/.claude/skills/gstack/bin/gstack-gbrain-lib.sh
read_secret_to_env SUPABASE_ACCESS_TOKEN "Paste PAT: "
```
Ask the D17 tier prompt via AskUserQuestion: "Which Supabase tier?" Present
Free (2-project limit, pauses after 7d inactivity) vs Pro ($25/mo, no
pauses, recommended for real use). Explain that tier is **org-level** (per
the Management API contract) — user picks their org based on its current
tier. Pro may require them to upgrade the org first at supabase.com.
List orgs, pick one (AskUserQuestion if multiple):
```bash
orgs=$(~/.claude/skills/gstack/bin/gstack-gbrain-supabase-provision list-orgs --json)
```
If the `.orgs` array is empty, surface: "Your Supabase account has no
organizations. Create one at https://supabase.com/dashboard, then re-run
`/setup-gbrain`." STOP.
Ask the user for a region (default `us-east-1`; valid values are the 18
enum values in the Supabase Management API — list a few common ones, let
them pick "Other" for a full list).
Generate the DB password (never shown to the user):
```bash
export DB_PASS=$(openssl rand -base64 24)
```
Set up a SIGINT trap (D12 basic recovery):
```bash
trap 'echo ""; echo "gstack-gbrain: interrupted. In-flight ref: $INFLIGHT_REF"; \
echo "Resume: /setup-gbrain --resume-provision $INFLIGHT_REF"; \
echo "Delete: https://supabase.com/dashboard/project/$INFLIGHT_REF"; \
unset SUPABASE_ACCESS_TOKEN DB_PASS; exit 130' INT TERM
```
Create + wait + fetch:
```bash
result=$(~/.claude/skills/gstack/bin/gstack-gbrain-supabase-provision \
  create gbrain "$REGION" "$ORG_SLUG" --json)
INFLIGHT_REF=$(echo "$result" | jq -r .ref)
~/.claude/skills/gstack/bin/gstack-gbrain-supabase-provision wait "$INFLIGHT_REF" --json
pooler=$(~/.claude/skills/gstack/bin/gstack-gbrain-supabase-provision \
  pooler-url "$INFLIGHT_REF" --json)
GBRAIN_DATABASE_URL=$(echo "$pooler" | jq -r .pooler_url)
export GBRAIN_DATABASE_URL
gbrain init --non-interactive --json
unset SUPABASE_ACCESS_TOKEN DB_PASS GBRAIN_DATABASE_URL INFLIGHT_REF
trap - INT TERM
```
After success, emit the PAT revocation reminder:
> "Setup complete. Revoke the PAT you pasted at
> https://supabase.com/dashboard/account/tokens — we've already discarded
> it from memory and don't need it again. The gbrain project will continue
> working because it uses its own embedded database password."
### Path 2b (Supabase, manual)
Walk the user through the supabase.com steps:
1. Login at https://supabase.com/dashboard
2. Click "New Project," name it `gbrain`, pick a region, copy the generated
database password (you'll need it for paste-back? no — it's embedded in
the pooler URL we collect next)
3. Wait ~2 min for the project to initialize
4. Settings → Database → Connection Pooler → Session → copy the URL (port
6543)
Then follow the same secret-read + verify + init flow as Path 1.
### Path 3 (PGLite local)
```bash
gbrain init --pglite --json
```
Done. No network, no secrets.
### Switch (from detect's existing-engine state)
```bash
# Going PGLite → Supabase, collect URL first (Path 1 flow), then:
timeout 180s gbrain migrate --to supabase --url "$URL" --json
# Going Supabase → PGLite:
timeout 180s gbrain migrate --to pglite --json
```
If `timeout` returns 124 (exit code for timeout): surface D9 message
("Migration didn't complete in 3 minutes — another gstack session may be
holding a lock on the source brain. Close other workspaces and re-run
`/setup-gbrain --switch`. Your original brain is untouched."). STOP.
---
## Step 5: Verify gbrain doctor
```bash
doctor=$(gbrain doctor --json)
status=$(echo "$doctor" | jq -r .status)
```
If status is `ok` or `warnings`, proceed. Anything else → surface the full
doctor output and STOP.
---
## Step 5a: Register gbrain as Claude Code MCP (D18)
Only if `which claude` resolves. Ask: "Give Claude Code a typed tool surface
for gbrain? (recommended yes)"
If yes:
```bash
claude mcp add gbrain -- gbrain serve
claude mcp list | grep gbrain # verify
```
If `claude` is not on PATH: emit "MCP registration skipped — this skill is
Claude-Code-targeted; register `gbrain serve` in your agent's MCP config
manually." Continue to step 6.
---
## Step 6: Per-remote policy (D3 triad, gated repo-import)
If we're in a git repo with an `origin` remote, check the policy:
```bash
current_tier=$(~/.claude/skills/gstack/bin/gstack-gbrain-repo-policy get)
```
Branches:
- `read-write` → import this repo: `gbrain import "$(pwd)" --no-embed` then
  `gbrain embed --stale &` in the background.
- `read-only` → skip import entirely (this tier is enforced by the future
  auto-import hook + by gbrain resolver injection, not here).
- `deny` → do nothing.
- `unset` → AskUserQuestion: "How should `<normalized-remote>` interact with
  gbrain?"
  - `read-write` — agent can search AND write new pages from this repo
  - `read-only` — agent can search but never write
  - `deny` — no interaction at all
  - `skip-for-now` — don't persist, ask next time
On answer (other than skip-for-now):
```bash
~/.claude/skills/gstack/bin/gstack-gbrain-repo-policy set "$REMOTE" "$TIER"
```
Then import iff `read-write`.
If outside a git repo OR no origin remote: skip this step with a note.
For `/setup-gbrain --repo` invocations, execute ONLY Step 6 and exit.
---
## Step 7: Offer gstack-brain-sync
Separate AskUserQuestion: "Also sync your gstack session memory (learnings,
plans, retros) to a private git repo that gbrain can index across machines?"
Options:
- Yes, full sync (everything allowlisted)
- Yes, artifacts-only (plans, designs, retros — skip behavioral data)
- No thanks
If yes:
```bash
~/.claude/skills/gstack/bin/gstack-brain-init
~/.claude/skills/gstack/bin/gstack-config set gbrain_sync_mode artifacts-only
# or "full" if user picked yes-full
```
---
## Step 8: Persist `## GBrain Configuration` in CLAUDE.md
Find-and-replace (or append) this section in CLAUDE.md:
```markdown
## GBrain Configuration (configured by /setup-gbrain)
- Engine: {pglite|postgres}
- Config file: ~/.gbrain/config.json (mode 0600)
- Setup date: {today}
- MCP registered: {yes/no}
- Memory sync: {off|artifacts-only|full}
- Current repo policy: {read-write|read-only|deny|unset}
```
---
## Step 9: Smoke test
```bash
gbrain put_page --title "setup-gbrain smoke test" --tags "meta" \
<<<"Set up on $(date)"
gbrain search "smoke test" | grep -i "setup-gbrain smoke test"
```
Confirms the round trip. On failure, surface `gbrain doctor --json` output
and STOP with a NEEDS_CONTEXT escalation.
---
## `/setup-gbrain --cleanup-orphans` (D20)
Re-collect a PAT (Step 4 path-2a scope disclosure), then:
```bash
# List user's Supabase projects (user has to pipe this through their own
# shell to review; we don't rely on a stored PAT).
export SUPABASE_ACCESS_TOKEN="<collected from read_secret_to_env>"
projects=$(curl -s -H "Authorization: Bearer $SUPABASE_ACCESS_TOKEN" \
https://api.supabase.com/v1/projects)
```
Parse the response, identify any project named starting with `gbrain` whose
`ref` doesn't match the user's active `~/.gbrain/config.json` pooler URL.
For each orphan, AskUserQuestion per project: "Delete orphan project
`<ref>` (`<name>`, created `<created_at>`)?" — NEVER batch; per-project
confirm is a one-way door.
On confirmed delete:
```bash
curl -s -X DELETE -H "Authorization: Bearer $SUPABASE_ACCESS_TOKEN" \
  https://api.supabase.com/v1/projects/$REF
```
Never delete the active brain without a second explicit confirmation.
At end: `unset SUPABASE_ACCESS_TOKEN`. Revocation reminder.
---
## Telemetry (D4)
The preamble's Telemetry block logs skill success/failure at exit. When
emitting the event, add these enumerated categorical values to the
telemetry payload (SAFE — no free-form secrets, never the URL or PAT):
- `scenario`: `supabase-existing` | `supabase-auto-provision` |
`supabase-manual` | `pglite-local` | `switch-to-supabase` |
`switch-to-pglite` | `repo-flip-only` | `cleanup-orphans` |
`resume-provision`
- `install_performed`: `yes` | `no` (D5 reuse) | `skipped` (pre-existing)
- `mcp_registered`: `yes` | `no` | `claude-missing`
- `trust_tier_set`: `read-write` | `read-only` | `deny` |
`skip-for-now` | `n/a` (outside git repo)
Never pass `SUPABASE_ACCESS_TOKEN`, `DB_PASS`, `GBRAIN_POOLER_URL`,
`GBRAIN_DATABASE_URL`, or any `postgresql://` substring to the telemetry
invocation. The CI grep test in `test/skill-validation.test.ts` enforces
this at build time.
---
## Important Rules
- **One rule for every secret.** PAT, DB_PASS, pooler URL: env-var only,
never argv, never logged, never persisted to disk by us. The only file
that holds the pooler URL long-term is `~/.gbrain/config.json`, written
by gbrain's own `init` at mode 0600 — that's gbrain's discipline, not
ours.
- **STOP points are hard.** Gbrain doctor not healthy, D19 PATH shadow, D9
migrate timeout, smoke test failure — each is a STOP. Do not paper over.
- **Concurrent-run lock.** At skill start, `mkdir ~/.gstack/.setup-gbrain.lock.d`
  (atomic; sketched after this list). If the mkdir fails, abort with: "Another
  `/setup-gbrain` instance is running. Wait for it, or
  `rm -rf ~/.gstack/.setup-gbrain.lock.d` if you're sure it's stale." Release
  on normal exit AND in the SIGINT trap.
- **CLAUDE.md is the audit trail.** Always update it in Step 8 after a
successful setup.
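
The lock pattern, spelled out as a sketch (same mkdir-is-atomic trick as gstack-brain-sync):

```bash
LOCK=~/.gstack/.setup-gbrain.lock.d
if ! mkdir "$LOCK" 2>/dev/null; then
  echo "Another /setup-gbrain instance is running. Wait for it, or" >&2
  echo "rm -rf $LOCK if you're sure it's stale." >&2
  exit 1
fi
trap 'rmdir "$LOCK" 2>/dev/null' EXIT INT TERM   # release on exit and ^C
```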