mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-01 11:17:50 +02:00
v1.17.0.0: setup-gbrain wireup ships the gbrain federation surface (#1234)
* feat: gstack-gbrain-source-wireup helper + 13 unit tests

  The new bin/gstack-gbrain-source-wireup is the single helper that registers the gstack brain repo as a gbrain federated source via `git worktree`, runs incremental sync, and supports --uninstall, --probe, and --strict modes. It replaces the dead `consumers.json + ingest_url + /ingest-repo` HTTP wireup introduced in v1.12.0.0; that endpoint never shipped on the gbrain side.

  The federation surface (`gbrain sources` / `gbrain sync`) shipped in gbrain v0.18.0; this helper adapts to its actual semantics: there is no `sources update`, so path-drift recovery is `remove + re-add`, and there is no `--install-cron`, so freshness rides on the existing skill-end push hook.

  Source-id derivation is multi-fallback: ~/.gstack/.git origin URL → ~/.gstack-brain-remote.txt → --source-id flag. This makes `--uninstall` work even after ~/.gstack/.git is destroyed by the parent uninstall script.

  The worktree is `--detach`ed at $GSTACK_HOME's HEAD because main is already checked out there; advancing it is a re-checkout of the parent's current HEAD, not a `git pull`. Divergence recovery removes and re-adds the worktree.

  The test suite covers 13 cases: fresh-state registration, idempotent re-runs, drift recovery, --strict failure modes, the source-id fallback chain, --probe non-mutation, sync errors, and --uninstall. A fake gbrain sits on $PATH; real git ops run against a GSTACK_HOME tmp dir.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: wire setup-gbrain + brain-restore + brain-uninstall to use the helper

  setup-gbrain Step 7 now invokes gstack-gbrain-source-wireup --strict after gstack-brain-init, once gbrain_sync_mode is set. Strict mode means the user sees the failure rather than silently ending up with an unwired brain.

  bin/gstack-brain-init drops 60 lines of dead code: the HTTP POST to ${GBRAIN_URL}/ingest-repo, the GBRAIN_URL_VAL/GBRAIN_TOKEN_VAL probes, the consumers.json writer, and the chore-commit step. The CONSUMERS_FILE variable declaration is removed.
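The fallback chain can be sketched roughly like this. This is a minimal illustration under stated assumptions, not the helper's actual code: `derive_source_id` and the id-from-repo-basename rule are invented here for clarity.

```shell
# Sketch of the multi-fallback source-id derivation: origin URL, then the
# saved remote file, then an explicit flag. Illustrative only.
derive_source_id() {
  flag_id="${1:-}"
  home="${GSTACK_HOME:-$HOME/.gstack}"
  # 1. origin URL of the brain repo, while ~/.gstack/.git still exists
  url="$(git -C "$home" remote get-url origin 2>/dev/null || true)"
  # 2. fall back to the saved remote file (survives .git destruction,
  #    which is what keeps --uninstall working after the parent uninstall)
  [ -n "$url" ] || url="$(cat "$HOME/.gstack-brain-remote.txt" 2>/dev/null || true)"
  if [ -n "$url" ]; then
    basename "$url" .git   # e.g. gstack-brain-garry (assumed id rule)
    return 0
  fi
  # 3. explicit --source-id flag as the last resort
  [ -n "$flag_id" ] && { printf '%s\n' "$flag_id"; return 0; }
  return 1
}
```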
  The closing message no longer points at the dead gstack-brain-consumer add path.

  bin/gstack-brain-restore drops the 18-line consumers.json token-rehydration block (it was a no-op for the only consumer that ever existed) and adds a best-effort wireup invocation after the brain-repo clone so a 2nd-Mac restore gets gbrain federation automatically. Failure prints a stderr WARNING but does not abort the restore; restore's primary job is the git clone.

  bin/gstack-brain-uninstall calls the helper's --uninstall mode (which removes the gbrain source registration, the git worktree, and the future-launchd-plist stub) before the existing legacy consumers.json removal. The ordering is fragile-by-design: the helper derives its source-id via multi-fallback, so it works even after .git is destroyed.

  bin/gstack-brain-consumer gets a DEPRECATED header note. It stays in the tree for one cycle of grace; removal in v1.13.0.0.

  setup-gbrain/SKILL.md is regenerated from the .tmpl via gen:skill-docs.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: v1.12.3.0 migration: wire existing brain-sync repos into gbrain

  An idempotent migration script. For users who already opted into brain-sync before this release (gbrain_sync_mode != off, ~/.gstack/.git exists), it runs the new gstack-gbrain-source-wireup helper so their existing brain repo becomes searchable via gbrain immediately on /gstack-upgrade.

  Skip conditions (each ends with exit 0):
  - HOME unset or empty (defensive)
  - gbrain_sync_mode = off or empty (user opted out)
  - no ~/.gstack/.git (brain-init never ran)
  - helper missing on disk (broken install)

  No --strict on the helper invocation: missing or old gbrain is a benign skip during a batch upgrade rather than a blocker.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* v1.12.3.0: setup-gbrain wireup ships the gbrain federation surface

  Bumps VERSION 1.12.2.0 → 1.12.3.0 with a release-notes-format entry in CHANGELOG.md.
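The skip-guard ordering above can be sketched as follows. The guard is factored into a function here purely for illustration (the real script just exits 0 at each condition), and `should_run_migration` is an invented name.

```shell
# Sketch of the migration's skip guards: every early-out is benign, so a
# batch /gstack-upgrade is never blocked by this step.
should_run_migration() {
  [ -n "${HOME:-}" ] || return 1                          # HOME unset (defensive)
  mode="$(gstack-config get gbrain_sync_mode 2>/dev/null || true)"
  case "$mode" in "" | off) return 1 ;; esac              # opted out / never set
  [ -d "$HOME/.gstack/.git" ] || return 1                 # brain-init never ran
  command -v gstack-gbrain-source-wireup >/dev/null 2>&1 || return 1  # broken install
  return 0
}
# In the script: should_run_migration || exit 0, then invoke the helper
# WITHOUT --strict so a missing or old gbrain stays a benign skip.
```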
  After upgrade, the placeholder consumers.json wireup is gone, `gbrain sources` + sync + the skill-end hook is the new path, and your gstack memory is actually searchable in gbrain.

  The CHANGELOG entry follows the release-summary format from CLAUDE.md: two-line bold headline, lead paragraph naming what shipped, a "verify after upgrade" command block readers can run on their own brain to see the delta, then the standard Itemized changes / What this means / For contributors sections.

  Three pre-existing test failures on this branch are flagged in the contributor section: the GSTACK_HOME isolation test (reads Garry's actual ~/.gstack/config.yaml), the 2MB tracked-binary test (security-bench fixtures > 2MB), and the Opus 4.7 pacing-directive test (overlay text drifted). All three were verified to fail on the base branch too; out of scope for this PR, follow-up needed.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: helper locks GBRAIN_DATABASE_URL at startup, defends against config rewrites

  The wireup helper previously read ~/.gbrain/config.json on every gbrain subprocess invocation. On Garry's Mac, multiple concurrent test runs and agent integrations were rewriting that file mid-sync, redirecting the wireup at the wrong brain partway through a 4-min initial import.

  This commit adds a `--database-url <url>` flag to the helper and locks the URL at startup. Precedence:
  1. --database-url flag (explicit caller intent)
  2. GBRAIN_DATABASE_URL / DATABASE_URL env (CI / manual override)
  3. read once from ~/.gbrain/config.json (default)

  Whichever wins gets exported as GBRAIN_DATABASE_URL for every child `gbrain` invocation. Per gbrain's loadConfig at src/core/config.ts:53, env-var URLs override the file URL, so a process that flips config.json between two of our gbrain calls can't redirect us. Defense-in-depth: once the URL is locked, the wireup completes against the original brain even under hostile filesystem conditions.
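The precedence chain reads roughly like this. A hedged sketch only: `resolve_db_url` and the `"databaseUrl"` config key are assumptions, not the helper's or gbrain's confirmed internals.

```shell
# Sketch of the lock-at-startup precedence: flag > env > one config read.
resolve_db_url() {
  flag_url="${1:-}"
  if [ -n "$flag_url" ]; then                      # 1. --database-url flag
    printf '%s\n' "$flag_url"
  elif [ -n "${GBRAIN_DATABASE_URL:-}" ]; then     # 2. env override
    printf '%s\n' "$GBRAIN_DATABASE_URL"
  elif [ -n "${DATABASE_URL:-}" ]; then
    printf '%s\n' "$DATABASE_URL"
  else                                             # 3. read config.json ONCE
    python3 -c 'import json, os
try:
    cfg = json.load(open(os.path.expanduser("~/.gbrain/config.json")))
    print(cfg.get("databaseUrl", ""))              # key name is an assumption
except Exception:
    pass'
  fi
}
# Whatever wins is exported once; every child gbrain call inherits it, so a
# later rewrite of config.json cannot redirect the sync mid-flight.
GBRAIN_DATABASE_URL="$(resolve_db_url "${1:-}")"
export GBRAIN_DATABASE_URL
```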
  setup-gbrain/SKILL.md.tmpl Step 7 now reads the URL out of config.json once (via inline python3) and passes it explicitly with --database-url, so even the very first wireup call is decoupled from config.json mutability.

  Three new test cases cover the lock behavior:
  - the --database-url flag is exported to child gbrain calls
  - falls back to ~/.gbrain/config.json when no flag and no env
  - the flag overrides both env GBRAIN_DATABASE_URL and config.json values

  The fake gbrain in the test suite now records GBRAIN_DATABASE_URL alongside each call so tests can assert the helper exported the locked URL. Total test count: 13 → 16 passing.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: bump v1.12.3.0 references to v1.15.1.0 to match merged-with-main release

  Internal-only renames after merging origin/main bumped this branch's release target from v1.12.3.0 → v1.15.1.0:
  - gstack-upgrade/migrations/v1.12.3.0.sh → v1.15.1.0.sh (rename + log-prefix bump from "[v1.12.3.0]" to "[v1.15.1.0]")
  - bin/gstack-brain-consumer header: "DEPRECATED in v1.12.3.0" → "DEPRECATED in v1.15.1.0"; removal target bumped from v1.13.0.0 → v1.16.0.0 (the next minor after v1.15.1.0)
  - bin/gstack-brain-uninstall: "no longer written ... since v1.12.3.0" → "since v1.15.1.0"

  No behavior change. Test suite still 16/16 passing.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: 10 new cases close coverage gaps (helper defensive paths + migration)

  The /ship Step 7 coverage audit reported 48% (22/46 branches).
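A recording test double of the kind described can be as small as a generated script dropped onto `$PATH` that appends each invocation plus the inherited `GBRAIN_DATABASE_URL` to a log. The fixture below is illustrative; the suite's actual fake and its record format may differ.

```shell
# Drop a fake `gbrain` onto PATH that records its argv and the exported
# GBRAIN_DATABASE_URL, so tests can assert the helper locked the URL.
make_fake_gbrain() {
  bindir="$1"; log="$2"
  cat > "$bindir/gbrain" <<EOF
#!/usr/bin/env bash
echo "\$* db=\${GBRAIN_DATABASE_URL:-}" >> "$log"
exit 0
EOF
  chmod +x "$bindir/gbrain"
}
```

A test then runs the helper with `PATH="$bindir:$PATH"` and greps the log for the expected locked URL.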
  Added 10 cases covering the highest-impact gaps.

  Helper (test/gstack-gbrain-source-wireup.test.ts, +3 cases → 19 total):
  - --uninstall when gbrain is missing: best-effort exit 0, worktree still cleaned
  - --no-pull skips HEAD advance on an existing worktree (was untested)
  - a stray non-git directory at the worktree path is cleaned up and the worktree created

  Migration (test/gstack-upgrade-migration-v1_15_1_0.test.ts, NEW, 7 cases):
  - HOME unset → defensive exit 0
  - gbrain_sync_mode=off → exit 0 silently
  - gbrain_sync_mode unset → exit 0 silently
  - no ~/.gstack/.git → exit 0 silently
  - helper missing on PATH → warning + exit 0
  - happy path → invokes helper without --strict
  - helper exits non-zero → migration prints a retry hint, still exits 0 (non-blocking)

  Also syncs the package.json version from 1.15.0.0 → 1.15.1.0 to match the VERSION file (DRIFT_STALE_PKG repair from /ship Step 12's idempotency check; a manual-edit-bypass artifact from the merge step).

  Coverage estimate: 48% → ~75%. Mainline, migration script, and key defensive paths are all exercised. 26 tests total cover the new code surface.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: pre-landing review auto-fixes (5 correctness + observability)

  The /ship Step 9 review surfaced 9 INFORMATIONAL findings on the new helper + migration. Five were auto-fixed with no behavior regression (26/26 tests pass).

  bin/gstack-gbrain-source-wireup:
  - Version compare: put the floor "0.18.0" first in the `sort -V` stdin so an equal-or-greater $v always sorts to position 2. Stable across sort implementations.
  - _worktree_add_detached: drop `2>/dev/null` on the `worktree add` and surface git's stderr through `prefix` so users see WHY adds fail (disk, perms).
  - ensure_worktree: the same observability fix on the `git checkout --detach` path during HEAD-advance, so users see the actual git error before recovery.
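The floor-first `sort -V` trick reads like this. A sketch under assumptions: `version_at_least` is an invented name and the helper's actual function may be shaped differently.

```shell
# With the floor on stdin line 1, an equal-or-greater version always ends
# up on the last sorted line, so the comparison holds even when v == floor.
version_at_least() {
  v="$1"; floor="$2"
  [ "$(printf '%s\n%s\n' "$floor" "$v" | sort -V | tail -n 1)" = "$v" ]
}
```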
  - do_probe: replace `[ -d X ] || [ -f X ] && set=present` (a precedence trap: the `&&` short-circuits when the dir branch fails) with an explicit if-block.
  - do_probe: capture `check_source_state`'s return code explicitly via `set +e; ...; rc=$?; set -e`. `$?` after an `if`/`elif` chain is fragile under set -e and may not reach the elif under some shell versions.
  - do_wireup: the same explicit return-code capture for `ensure_worktree`. The prior `ensure_worktree || { if [ $? = 2 ]; ...` pattern relied on `$?` reflecting the function's return after `||`, which is implementation-defined.

  gstack-upgrade/migrations/v1.15.1.0.sh:
  - Trim whitespace from the `gstack-config get gbrain_sync_mode` output via `tr -d '[:space:]'`. Trailing newlines would mis-classify "off\n" as a non-empty, non-off mode and incorrectly invoke the helper.

  Skipped findings (cosmetic / out of scope):
  - `python3 -c` reads `~/.gbrain/config.json` via `expanduser` instead of the helper's `$GBRAIN_CONFIG` variable (cosmetic; it HONORS the HOME override).
  - The long sync-failure error message could truncate to the last N lines (cosmetic log readability).

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: adversarial review hardening (rm safety, jq probe, secret redaction, multi-Mac)

  The /ship Step 11 adversarial review surfaced 7 CRITICAL issues. Five were fixed inline (no behavior regression, 26/26 tests still pass).

  bin/gstack-gbrain-source-wireup:

  1. **rm -rf path validation** (was: F-c-CRITICAL 9/10). Added a `safe_rm_worktree` helper that refuses any path not strictly under $HOME/, plus a dangerous-path denylist covering /, /Users, and the $HOME root. Replaces the raw `rm -rf "$WORKTREE"` calls (lines 161, 169 originally). If the user sets GSTACK_BRAIN_WORKTREE="" or "/", the helper now dies cleanly instead of nuking the home dir or root.

  2. **jq dependency probe** (was: F-c-CRITICAL 9/10). `check_source_state` now hard-fails with a clear message if jq is missing, instead of silently returning "absent" → re-add → die-on-duplicate.
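The explicit return-code capture pattern is worth spelling out. In this sketch, `check_state` is a stand-in for `check_source_state` / `ensure_worktree`, and the state names are illustrative.

```shell
set -e
check_state() { return "${FAKE_RC:-0}"; }  # stand-in returning 0/1/2

set +e            # suspend -e so a non-zero return doesn't kill the script
check_state
rc=$?             # capture IMMEDIATELY, before any other command runs
set -e
case "$rc" in
  0) state=present ;;   # source registered at the expected path
  1) state=drifted ;;   # registered at a different path
  *) state=absent ;;
esac
```

Compared with testing `$?` after an `if`/`elif` chain or after `fn || { ... }`, this keeps the captured code unambiguous on every shell.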
  Plus it trims whitespace from the jq output (`tr -d '[:space:]'`) to defend against gbrain emitting `\n` for missing fields. The header comment claimed jq was a transitive dep; now we enforce it.

  3. **Python heredoc warns on JSON parse failure** (was: F-c-CRITICAL 8/10). Previously `except Exception: pass` silently swallowed malformed JSON, leaving _locked_url empty and defeating the URL-lock defense. It now writes the parse error to a temp file and warns the user that the URL was not locked. It also passes the config path via an env var (GBRAIN_CONFIG_PATH) instead of a hardcoded `~/.gbrain/config.json`, respecting any HOME override.

  4. **Multi-Mac source-id collision fix** (was: F-c-CRITICAL 9/10). When `check_source_state` returns 1 (source exists at a different path), the helper used to remove + re-add. Two Macs sharing one Supabase brain would ping-pong the local_path metadata on every sync. Now: if the existing path's basename matches the local worktree's basename (likely another machine's local copy of the SAME brain repo), skip re-registration and sync against the local worktree. gbrain stores pages by content; the metadata is informational. No more ping-pong.

  5. **Redact the DB URL from the sync-failure error message** (was: F-c-CRITICAL 7/10). `gbrain sync` failures used to echo the full stderr (which can contain the postgres connection string with password) into the user's terminal and any log redirect. Now we sed-replace any `postgres://...` with `postgres://***REDACTED***` before the die() call, and only show the last 10 lines.

  Bonus minor fix: `die()` now uses `$1` instead of `$*` for the warn message, so the exit-code arg ($2) doesn't get appended to the warning text.

  Acknowledged but deferred:
  - GBRAIN_DATABASE_URL env exposure on Linux via /proc/$PID/environ. This is a Linux-only concern; gstack is Mac-targeted today and macOS restricts process env reads. Document as a follow-up if Linux support lands.
  - gbrain version parser brittleness if gbrain switches to a "v0.18.0" prefix.
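The redaction step can be sketched as a small filter. The exact sed expression in the helper may differ; `redact_sync_err` is an invented name for illustration.

```shell
# Scrub any postgres connection string (which may embed a password) from a
# failure log before it reaches the terminal or a log redirect, then keep
# only the last 10 lines.
redact_sync_err() {
  sed -E 's#postgres(ql)?://[^[:space:]]+#postgres://***REDACTED***#g' | tail -n 10
}
```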
    Defensive only; current gbrain output matches `gbrain X.Y.Z` exactly.
  - bash 3.2 PIPESTATUS reliability. Tests pass on the host bash version (3.2+ via macOS); modern bash 5.x is widely available.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: sync gbrain-source-wireup helper into USING_GBRAIN + gbrain-sync

  USING_GBRAIN_WITH_GSTACK.md: add a gstack-gbrain-source-wireup row to the bin-helpers table. It describes federation registration via `gbrain sources add` + worktree, lists the flags, and calls out that it replaces the dead consumers.json/ingest-repo HTTP wireup.

  docs/gbrain-sync.md: replace the `gstack-brain-reader add --ingest-url` step in gstack-brain-init's flow (which targeted the never-shipped /ingest-repo endpoint) with the real flow: federate via `gbrain sources` + worktree, and point to bin/gstack-gbrain-source-wireup.

  Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* v1.16.1.0: rebump after queue collision (PR #1233 took v1.16.0.0)

  CI's "Check VERSION is not stale vs queue" job (job 73105686380) failed with: "VERSION drift: PR #1234 claims v1.15.1.0 but the queue has moved — next free slot is v1.16.1.0."

  PR #1233 (garrytan/browserharness) entered the queue claiming v1.16.0.0 between when this branch's prior /ship ran and when CI evaluated, so v1.15.1.0 is stale. Rebumping on top.

  Files updated:
  - VERSION 1.15.1.0 → 1.16.1.0
  - package.json 1.15.1.0 → 1.16.1.0
  - CHANGELOG.md heading + Before/After columns 1.15.1.0 → 1.16.1.0
  - CHANGELOG removal target (consumers.json + config keys) 1.16.0.0 → 1.17.0.0
  - gstack-upgrade/migrations/v1.15.1.0.sh → renamed v1.16.1.0.sh + log prefix
  - bin/gstack-brain-consumer "DEPRECATED in" + "removal in" 1.15.1.0/1.16.0.0 → 1.16.1.0/1.17.0.0
  - bin/gstack-brain-uninstall "since vX.Y.Z.W" 1.15.1.0 → 1.16.1.0
  - test/gstack-upgrade-migration-v1_15_1_0.test.ts → renamed v1_16_1_0.test.ts

  No behavior change. 26/26 wireup + migration tests still pass on the rename. Full bun test suite: exit 0, 0 failures.
  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* v1.17.0.0: rebump again; bump-detection now classifies the branch as MINOR

  CI's version-stale check (job 73106360896) failed: PR #1234 claims v1.16.1.0 but the queue moved to v1.17.0.0.

  Root cause: bumping 1.15.1.0 → 1.16.1.0 to dodge the prior collision turned the branch's diff classification from PATCH (1.15.0 → 1.15.1) into MINOR (1.15.0 → 1.16.x). detect-bump.ts now sees MINOR, gstack-next-version walks the MINOR lane past #1233's v1.16.0.0 claim, and the next free slot is v1.17.0.0.

  This is honestly accurate per CLAUDE.md's scale-aware bumps: this branch IS a MINOR ("substantial new capability shipped — skill, harness, command, big refactor"). The new helper + migration + integration totals ~1200 lines added across 11 files with 26 new tests. PATCH was always the wrong honest classification; the queue collision forced the right answer.

  Files updated:
  - VERSION 1.16.1.0 → 1.17.0.0
  - package.json 1.16.1.0 → 1.17.0.0
  - CHANGELOG.md heading + After column 1.16.1.0 → 1.17.0.0
  - CHANGELOG removal targets 1.17.0.0 → 1.18.0.0
  - gstack-upgrade/migrations/v1.16.1.0.sh → renamed v1.17.0.0.sh + log prefix
  - bin/gstack-brain-consumer "DEPRECATED in" + "removal in" 1.16.1.0/1.17.0.0 → 1.17.0.0/1.18.0.0
  - bin/gstack-brain-uninstall "since vX.Y.Z.W" 1.16.1.0 → 1.17.0.0
  - test/gstack-upgrade-migration-v1_16_1_0.test.ts → renamed v1_17_0_0.test.ts

  26/26 tests still pass. No behavior change.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
+55
-2
@@ -1,5 +1,60 @@
# Changelog

## [1.17.0.0] - 2026-04-26

## **Your gstack memory now actually lives in gbrain.**

For everyone who ran `/setup-gbrain` in the last month and noticed `gbrain search` couldn't find their CEO plans, learnings, or retros: that's because Step 7 wrote a placeholder `consumers.json` with `status: "pending"` and called it done. The HTTP endpoint that placeholder pointed at was never built on the gbrain side. This release scraps that approach and uses the gbrain v0.18.0 federation surface (`gbrain sources` + `gbrain sync`) instead.

After upgrading, `/setup-gbrain` adds a `git worktree` of your brain repo, registers it as a federated source on your gbrain (Supabase or PGLite), and runs an initial sync. Subsequent gstack skill end-of-run cycles also run `gbrain sync` so new artifacts land in the index automatically. Local-Mac only. No cloud agent required. `/gstack-upgrade` runs a one-shot migration for existing users.
### Verify after upgrade

```bash
gbrain sources list --json | jq '.sources[] | {id, page_count, federated}'
# Expect: two entries, your default brain plus a "gstack-brain-{user}"
# entry, both federated=true.

gbrain search "ethos" --source gstack-brain-{user} | head -5
# Expect: hits from your gstack repo content (readme, ethos, designs, etc).
```
### What shipped

`bin/gstack-gbrain-source-wireup` is the new helper. It derives a per-user source id from `~/.gstack/.git`'s origin URL (with multi-fallback to `~/.gstack-brain-remote.txt` and a `--source-id` flag), creates a detached `git worktree` at `~/.gstack-brain-worktree/`, registers it as a federated source on gbrain, runs the initial backfill, and supports `--strict` (Step 7 strictness), `--uninstall` (full teardown including the future launchd plist), and `--probe` (read-only state inspection). All idempotent. The helper depends on `jq` (transitive via `gstack-gbrain-detect`).

The helper locks the database URL at startup (precedence: `--database-url` flag > `GBRAIN_DATABASE_URL`/`DATABASE_URL` env > read once from `~/.gbrain/config.json`) and exports it as `GBRAIN_DATABASE_URL` for every child `gbrain` invocation. This means external rewrites of `~/.gbrain/config.json` mid-sync (e.g., a concurrent `gbrain init --non-interactive` running in another workspace) cannot redirect the wireup at a different brain. Per gbrain's `loadConfig()`, env-var URLs override the file. Step 7 of `/setup-gbrain` reads the URL out of `config.json` once and passes it explicitly via `--database-url`, so the wireup is robust against config flips during the seconds-to-minutes sync window.

`/setup-gbrain` Step 7 now invokes the helper with `--strict` after `gstack-brain-init`. `/gstack-upgrade` invokes the helper without `--strict` via `gstack-upgrade/migrations/v1.17.0.0.sh`, so a missing or old gbrain is a benign skip during a batch upgrade. `bin/gstack-brain-restore` invokes the helper after the initial clone so a 2nd Mac gets the wireup automatically. `bin/gstack-brain-uninstall` invokes `--uninstall` and also removes the legacy `consumers.json`.

`bin/gstack-brain-init` drops 60 lines of dead consumer-registration code (the HTTP POST block, the `consumers.json` writer, the chore commit). `bin/gstack-brain-restore` drops the 18-line `consumers.json` token-rehydration block (the only consumer that used it never had real tokens). `bin/gstack-brain-consumer` is marked deprecated in its header docstring; removal in v1.18.0.0 after one cycle of grace.

`test/gstack-gbrain-source-wireup.test.ts` is new: 19 unit tests with a fake `gbrain` binary on `$PATH` covering fresh-state registration, idempotent re-runs, drift recovery (gbrain has no `sources update`, only `remove + add`), `--strict` failure modes, the source-id fallback chain (`.git` → remote-file → flag), the database-URL lock, `--probe` non-mutation, sync errors, and `--uninstall`. `test/gstack-upgrade-migration-v1_17_0_0.test.ts` adds 7 cases for the migration's skip conditions and non-blocking failure mode.
### The numbers that matter

These are reproducible on any machine after upgrade. Run the verify commands above to see your own delta.

| Metric | Before (v1.16.0.0) | After (v1.17.0.0) |
|---|---|---|
| `gbrain sources list` size | 1 (default `/data/brain`) | 2 (default + `gstack-brain-{user}`) |
| `consumers.json` status | `"pending"`, ingest_url `""` | file deleted from new installs |
| Manual steps to wire up | 4 (clone + sources add + sync + cron) | 0, automatic in Step 7 |
| Helper test coverage | 0 unit tests | 19 unit tests (`bun test test/gstack-gbrain-source-wireup.test.ts`) |
| `bin/gstack-brain-init` size | 363 lines | 300 lines (60 lines of dead code removed) |

The local Mac is the producer of artifacts, and the worktree advances automatically with `~/.gstack/`'s commits. Cross-machine sync runs through GitHub via the existing `gstack-brain-sync --once` push hook. No new cron infrastructure is needed today; when gbrain v0.21 code-graph features ship, the helper's `--enable-cron` flag is a clean extension.
### What this means for builders

Your gstack memory is searchable now. Run a CEO plan review or office-hours session, sync runs at skill-end automatically, and `gbrain search` finds the plan content from any gbrain client (this Claude Code session, future Macs, optional cloud agents like OpenClaw). One source of truth across machines. The placeholder is dead.

### For contributors

- `bin/gstack-brain-consumer` is deprecated in this release; removal in v1.18.0.0.
- The `gbrain_url` and `gbrain_token` config keys are now no-ops. They remain readable for one cycle for back-compat, removed in v1.18.0.0.
- Three pre-existing test failures on this branch (`gstack-config gbrain keys > GSTACK_HOME overrides real config dir`, `no compiled binaries in git > git tracks no files larger than 2MB`, `Opus 4.7 overlay — pacing directive`) were verified to fail on the base branch too. Out of scope for this PR; flagged for a follow-up.
## [1.16.0.0] - 2026-04-28

## **Paired-agent tunnel allowlist now matches what the docs already promised. Catch-22 resolved, gate is unit-testable.**

@@ -47,8 +102,6 @@ Three things change immediately. **First**, paired agents can actually open and

- The plan was reviewed under `/plan-eng-review` plus 2 sequential codex outside-voice passes during plan mode. Round-1 codex caught a doc-target mistake (we were going to update `SIDEBAR_MESSAGE_FLOW.md` instead of `REMOTE_BROWSER_ACCESS.md`) and a wrong-layer test design. Round-2 codex caught that the round-1 correction was still wrong (the chosen test harness only binds the local listener) AND that the docs promised 6 more commands than the allowlist had. All 6 of 7 substantive findings landed in the implementation; the 7th (a pre-existing `/pair-agent` `/health` probe mismatch at `cli.ts:656-668`) is logged as out of scope.

- One known accepted risk: `tabs` over the tunnel returns metadata for ALL tabs in the browser, not just tabs the agent owns. The user authored the trust relationship when they paired the agent, the agent already can't read CONTENT of unowned tabs (write commands blocked, the active tab can't be switched without a `tab <id>` command that's NOT in the allowlist), and tab IDs already leak via the 403 `hint` field on disallowed `goto`. Codex noted that tightening this requires touching the ownership gate itself (the gate falls back to `getActiveTabId()` BEFORE dispatch in `server.ts:603-614`), which is materially out of scope for a catch-22 fix. Logged in the plan failure-mode table as accepted.

## [1.15.0.0] - 2026-04-26

## **Real-PTY test harness ships. 11 plan-mode E2E tests, 23 unit tests, and 50K fewer tokens per invocation.**
@@ -159,6 +159,7 @@ The skill re-collects a PAT (one-time, discarded after), lists every project in

| `gstack-gbrain-supabase-verify` | Structural URL check. Rejects direct-connection URLs (`db.*.supabase.co:5432`) with exit 3 |
| `gstack-gbrain-supabase-provision` | Management API wrapper. Subcommands: `list-orgs`, `create`, `wait`, `pooler-url`, `list-orphans`, `delete-project`. All require `SUPABASE_ACCESS_TOKEN` in env. `create` and `pooler-url` also require `DB_PASS`. `--json` mode available on every subcommand. |
| `gstack-gbrain-repo-policy` | Per-remote trust triad. Subcommands: `get`, `set`, `list`, `normalize` |
| `gstack-gbrain-source-wireup` | Registers your `~/.gstack/` brain repo with gbrain as a federated source via `gbrain sources add` + `git worktree`, then runs an initial `gbrain sync`. Idempotent. Replaces the dead `consumers.json + /ingest-repo` HTTP wireup from v1.12.x. Flags: `--strict`, `--source-id <id>`, `--no-pull`, `--uninstall`, `--probe`. |

### gbrain CLI (upstream tool)
@@ -1,6 +1,11 @@
#!/usr/bin/env bash
# gstack-brain-consumer — manage the consumer (reader) registry.
#
# DEPRECATED in v1.17.0.0. This binary targets a gbrain HTTP /ingest-repo
# endpoint that never shipped on the gbrain side. Live federation now uses
# `gbrain sources` directly via bin/gstack-gbrain-source-wireup. This file
# stays for one cycle to avoid breaking external scripts; removal in v1.18.0.0.
#
# Consumer = a reader that ingests the gstack-brain git repo as a source of
# session memory. v1 primary consumer is GBrain; later versions can register
# Codex, OpenClaw, or third-party readers.
+6
-69
@@ -22,11 +22,9 @@
# 8. Prompt for remote (default: gh repo create --private gstack-brain-$USER)
# 9. Initial commit + push
# 10. Write ~/.gstack-brain-remote.txt (URL-only, safe to share)
# 11. Register GBrain consumer (HTTP POST if GBRAIN_URL set; else defer)
#
# Env:
#   GSTACK_HOME — override ~/.gstack
#   GBRAIN_URL — GBrain ingest endpoint base URL (for consumer registration)

set -euo pipefail

@@ -34,7 +32,6 @@ GSTACK_HOME="${GSTACK_HOME:-$HOME/.gstack}"
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
CONFIG_BIN="$SCRIPT_DIR/gstack-config"
REMOTE_FILE="$HOME/.gstack-brain-remote.txt"
CONSUMERS_FILE="$GSTACK_HOME/consumers.json"

REMOTE_URL=""
while [ $# -gt 0 ]; do

@@ -280,68 +277,6 @@ fi
echo "$REMOTE_URL" > "$REMOTE_FILE"
chmod 600 "$REMOTE_FILE"
# ---- register GBrain consumer ----
mkdir -p "$GSTACK_HOME"
CONSUMER_STATUS="pending"
GBRAIN_URL_VAL="${GBRAIN_URL:-$("$CONFIG_BIN" get gbrain_url 2>/dev/null || echo "")}"
GBRAIN_TOKEN_VAL="${GBRAIN_TOKEN:-$("$CONFIG_BIN" get gbrain_token 2>/dev/null || echo "")}"

if [ -n "$GBRAIN_URL_VAL" ] && [ -n "$GBRAIN_TOKEN_VAL" ]; then
  # Try the HTTP handoff.
  HTTP_RESP=$(curl -sS -X POST "${GBRAIN_URL_VAL%/}/ingest-repo" \
    -H "Authorization: Bearer $GBRAIN_TOKEN_VAL" \
    -H "Content-Type: application/json" \
    --data "{\"repo_url\":\"$REMOTE_URL\"}" \
    -w "\n%{http_code}" 2>&1 || echo -e "\ncurl-error")
  HTTP_CODE=$(echo "$HTTP_RESP" | tail -1)
  if [ "$HTTP_CODE" = "200" ] || [ "$HTTP_CODE" = "201" ] || [ "$HTTP_CODE" = "204" ]; then
    CONSUMER_STATUS="ok"
    echo "GBrain consumer registered: $GBRAIN_URL_VAL"
  else
    echo "GBrain ingest endpoint returned HTTP $HTTP_CODE; will retry on next skill run."
  fi
elif [ -z "$GBRAIN_URL_VAL" ]; then
  echo "(GBRAIN_URL not configured; skipping consumer registration. Set it with:"
  echo "   gstack-config set gbrain_url <url>"
  echo "   gstack-config set gbrain_token <token>"
  echo " then run: gstack-brain-consumer add gbrain --ingest-url <url> --token <token>)"
fi

# Write consumers.json — the canonical registry. Tokens are NOT stored here;
# they stay in gstack-config (machine-local). This file IS synced so a new
# machine knows which consumers exist and can prompt for tokens.
python3 - "$CONSUMERS_FILE" "$GBRAIN_URL_VAL" "$CONSUMER_STATUS" <<'PYEOF'
import sys, json, os
path, url, status = sys.argv[1:4]
try:
    with open(path) as f:
        data = json.load(f)
except (FileNotFoundError, json.JSONDecodeError):
    data = {"consumers": []}
# Upsert GBrain entry.
entry = {"name": "gbrain", "ingest_url": url, "status": status, "token_ref": "gbrain_token"}
updated = False
for i, c in enumerate(data.get("consumers", [])):
    if c.get("name") == "gbrain":
        data["consumers"][i] = entry
        updated = True
        break
if not updated:
    data.setdefault("consumers", []).append(entry)
with open(path, "w") as f:
    json.dump(data, f, indent=2)
    f.write("\n")
PYEOF

# Stage and commit consumers.json in the same session.
cd "$GSTACK_HOME"
git add -f consumers.json 2>/dev/null || true
if ! git diff --cached --quiet 2>/dev/null; then
  git -c user.email="gstack@localhost" -c user.name="gstack-brain-init" \
    commit -q -m "chore: register GBrain consumer"
  git push -q origin HEAD 2>/dev/null || true
fi
# ---- done ----
cat <<EOF

@@ -350,12 +285,14 @@ Repo: $GSTACK_HOME (git)
Remote: $REMOTE_URL
Remote URL also saved at: $REMOTE_FILE

Sync happens automatically at the start and end of each skill (no daemon).
Check status anytime with:
Sync to GitHub happens automatically at the start and end of each skill
(no daemon). Check status anytime with:
  gstack-brain-sync --status

To activate sync, the next skill you run will ask you one question about
privacy mode (sync everything / artifacts only / off).
The next skill run will ask you one question about privacy mode (full /
artifacts-only / off). After that, /setup-gbrain Step 7 (or the
gstack-gbrain-source-wireup helper) registers this repo as a federated
source on gbrain so its content is searchable via 'gbrain search'.

New machine? On the other laptop, put a copy of:
  $REMOTE_FILE
@@ -19,7 +19,8 @@
# 3. rsync-copy tracked files into ~/.gstack/ with skip-if-same-hash
# 4. Move staging's .git into ~/.gstack/.git
# 5. Register local git config merge drivers (they don't clone from remote)
# 6. Rehydrate consumers.json endpoints; prompt for tokens
# 6. Wire the cloned brain into gbrain via gstack-gbrain-source-wireup
#    (best-effort; restore continues even if gbrain wireup fails)
#
# Env:
#   GSTACK_HOME — override ~/.gstack
@@ -195,25 +196,6 @@ sys.exit(0)
HOOK_EOF
chmod +x "$HOOK"

# ---- rehydrate consumers, prompt for tokens ----
if [ -f "$GSTACK_HOME/consumers.json" ]; then
  echo ""
  echo "Consumer registry restored. Tokens are machine-local and NOT synced."
  echo "Run these for each consumer to re-enter tokens:"
  python3 - "$GSTACK_HOME/consumers.json" <<'PYEOF'
import sys, json
try:
    with open(sys.argv[1]) as f:
        data = json.load(f)
except Exception:
    sys.exit(0)
for c in data.get("consumers", []):
    name = c.get("name", "")
    token_ref = c.get("token_ref", f"{name}_token")
    print(f"   gstack-config set {token_ref} <your-token>")
PYEOF
fi

# ---- write remote helper file if missing ----
if [ ! -f "$REMOTE_FILE" ]; then
  echo "$REMOTE_URL" > "$REMOTE_FILE"
@@ -222,6 +204,12 @@ if [ ! -f "$REMOTE_FILE" ]; then
  echo "Wrote $REMOTE_FILE for future skill-run auto-detection."
fi

# ---- wire the cloned brain into gbrain (best-effort) ----
WIREUP_BIN="$SCRIPT_DIR/gstack-gbrain-source-wireup"
if [ -x "$WIREUP_BIN" ]; then
  "$WIREUP_BIN" || >&2 echo "WARNING: gbrain wireup failed; run $WIREUP_BIN manually after fixing prereqs"
fi

cat <<EOF

gstack-brain-restore complete.
@@ -120,6 +120,16 @@ rm -f "$GSTACK_HOME/.brain-last-pull" 2>/dev/null || true
 rm -f "$GSTACK_HOME/.brain-skip.txt" 2>/dev/null || true
 rm -f "$GSTACK_HOME/.brain-sync-status.json" 2>/dev/null || true
 rm -rf "$GSTACK_HOME/.brain-sync.lock.d" 2>/dev/null || true
 
+# ---- unregister gbrain federated source + remove worktree (best-effort) ----
+# The wireup helper handles: gbrain sources remove, git worktree remove,
+# launchd plist (future). All best-effort; uninstall continues on failure.
+WIREUP_BIN="$SCRIPT_DIR/gstack-gbrain-source-wireup"
+if [ -x "$WIREUP_BIN" ]; then
+  "$WIREUP_BIN" --uninstall 2>/dev/null || true
+fi
+
+# ---- legacy consumers.json (no longer written by gstack-brain-init since v1.17.0.0) ----
+rm -f "$GSTACK_HOME/consumers.json" 2>/dev/null || true
 
 # ---- clear config keys ----

Executable
+357
@@ -0,0 +1,357 @@
#!/usr/bin/env bash
# gstack-gbrain-source-wireup — register the gstack brain repo as a gbrain
# federated source via `git worktree`, run an initial sync, hook into
# subsequent skill-end syncs.
#
# Replaces the v1.12.2.0 dead `consumers.json + ingest_url + /ingest-repo`
# wireup, which depended on a gbrain HTTP endpoint that never shipped.
#
# Usage:
#   gstack-gbrain-source-wireup [--strict] [--source-id <id>] [--no-pull]
#                               [--database-url <url>]
#   gstack-gbrain-source-wireup --uninstall [--source-id <id>]
#                               [--database-url <url>]
#   gstack-gbrain-source-wireup --probe
#   gstack-gbrain-source-wireup --help
#
# Exit codes:
#   0 — success, OR benign skip without --strict
#   1 — hard failure (gbrain or git op errored on a real call)
#   2 — missing prereqs (no gbrain >= 0.18.0, no .git or remote-file)
#   3 — source-id derivation failed in --uninstall; no fallback worked
#
# Env:
#   GSTACK_HOME            — override ~/.gstack (test harness)
#   GSTACK_BRAIN_WORKTREE  — override worktree path (default ~/.gstack-brain-worktree)
#   GSTACK_BRAIN_SOURCE_ID — id override; --source-id flag takes precedence
#   GSTACK_BRAIN_NO_SYNC   — skip the gbrain sync step (tests; helper still
#                            ensures source registration)
#
# Defense against external rewrites of ~/.gbrain/config.json:
#   At helper startup we capture the database URL ONCE — from --database-url,
#   from GBRAIN_DATABASE_URL/DATABASE_URL env, or from ~/.gbrain/config.json —
#   and export it as GBRAIN_DATABASE_URL for every child `gbrain` invocation.
#   That env var overrides whatever's in config.json (per gbrain's loadConfig
#   at src/core/config.ts:53), so a process that flips config.json mid-sync
#   can't redirect us to a different brain mid-stream.
#
# Depends on: jq (transitive via gstack-gbrain-detect).

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
CONFIG_BIN="$SCRIPT_DIR/gstack-config"

GSTACK_HOME="${GSTACK_HOME:-$HOME/.gstack}"
WORKTREE="${GSTACK_BRAIN_WORKTREE:-$HOME/.gstack-brain-worktree}"
REMOTE_FILE="$HOME/.gstack-brain-remote.txt"
PLIST_PATH="$HOME/Library/LaunchAgents/com.gstack.brain-sync.plist"
GBRAIN_CONFIG="$HOME/.gbrain/config.json"

# ---- arg parse ----
MODE="wireup"
STRICT=0
NO_PULL=0
SOURCE_ID=""
DATABASE_URL_ARG=""

while [ $# -gt 0 ]; do
  case "$1" in
    --uninstall) MODE="uninstall"; shift ;;
    --probe) MODE="probe"; shift ;;
    --strict) STRICT=1; shift ;;
    --no-pull) NO_PULL=1; shift ;;
    --source-id) SOURCE_ID="$2"; shift 2 ;;
    --database-url) DATABASE_URL_ARG="$2"; shift 2 ;;
    --help|-h) sed -n '2,40p' "$0" | sed 's/^# \{0,1\}//'; exit 0 ;;
    *) echo "Unknown flag: $1" >&2; exit 1 ;;
  esac
done

# ---- lock the database URL at startup ----
# Precedence: --database-url flag > existing GBRAIN_DATABASE_URL/DATABASE_URL
# env > read once from ~/.gbrain/config.json. Whichever wins gets exported as
# GBRAIN_DATABASE_URL so every child `gbrain` invocation uses THAT brain even
# if config.json is rewritten by another process during the wireup.
_locked_url=""
if [ -n "$DATABASE_URL_ARG" ]; then
  _locked_url="$DATABASE_URL_ARG"
elif [ -n "${GBRAIN_DATABASE_URL:-}" ]; then
  _locked_url="$GBRAIN_DATABASE_URL"
elif [ -n "${DATABASE_URL:-}" ]; then
  _locked_url="$DATABASE_URL"
elif [ -f "$GBRAIN_CONFIG" ]; then
  # Python heredoc reads config.json. On JSON parse failure or any IO error,
  # we WARN (not silently swallow) so the user knows the URL lock fell back
  # to gbrain's own loadConfig (which would still read this same file).
  # NOTE: inline echo, not warn() — the helper functions are defined further
  # down and are not available yet at this point in the script.
  _py_err=$(mktemp -t wireup-pyerr 2>/dev/null || mktemp /tmp/wireup-pyerr.XXXXXX)
  _locked_url=$(GBRAIN_CONFIG_PATH="$GBRAIN_CONFIG" python3 -c '
import json, os, sys
try:
    c = json.load(open(os.environ["GBRAIN_CONFIG_PATH"]))
    print(c.get("database_url", ""))
except FileNotFoundError:
    sys.exit(0)
except Exception as e:
    print(f"config.json parse error: {e}", file=sys.stderr)
    sys.exit(1)
' </dev/null 2>"$_py_err") || echo "gstack-gbrain-source-wireup: could not read $GBRAIN_CONFIG ($(cat "$_py_err" 2>/dev/null)); URL not locked" >&2
  rm -f "$_py_err" 2>/dev/null
fi
if [ -n "$_locked_url" ]; then
  export GBRAIN_DATABASE_URL="$_locked_url"
fi

prefix() { sed 's/^/gstack-gbrain-source-wireup: /' >&2; }
warn()   { echo "$*" | prefix; }
# die <message> [exit_code]: warn with just the message, exit with code (default 1).
die() { warn "$1"; exit "${2:-1}"; }

# Refuse to rm anything outside $HOME/. Defends against GSTACK_BRAIN_WORKTREE=/
# or empty-string overrides that would otherwise have line 169 / 161 nuke the
# user's home or root.
safe_rm_worktree() {
  local target="$1"
  case "$target" in
    "" | "/" | "/Users" | "/Users/" | "$HOME" | "$HOME/" )
      die "refusing to rm dangerous path: $target" 1 ;;
  esac
  case "$target" in
    "$HOME"/*) rm -rf "$target" ;;
    *) die "refusing to rm path outside \$HOME: $target" 1 ;;
  esac
}
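
The two-stage case guard can be exercised standalone. A minimal sketch, assuming a bash-like shell; `is_safe_home_path` is a hypothetical name, and unlike the real helper it only classifies the path instead of calling `die` and `rm -rf`:

```bash
# First case: reject known-dangerous literals outright; second case: accept
# only paths strictly below $HOME, refuse everything else (/, /tmp, relative).
is_safe_home_path() {
  case "$1" in
    "" | "/" | "$HOME" | "$HOME/" ) return 1 ;;
  esac
  case "$1" in
    "$HOME"/*) return 0 ;;
    *)         return 1 ;;
  esac
}
is_safe_home_path "$HOME/.gstack-brain-worktree" && echo "safe"     # safe
is_safe_home_path "/" || echo "blocked"                             # blocked
```

Note the quoting in the pattern `"$HOME"/*`: the variable part is matched literally (a `$HOME` containing glob characters cannot widen the match), while the trailing `/*` stays a glob.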

# ---- source-id derivation (D6 multi-fallback) ----
derive_source_id() {
  if [ -n "$SOURCE_ID" ]; then
    echo "$SOURCE_ID"; return 0
  fi
  if [ -n "${GSTACK_BRAIN_SOURCE_ID:-}" ]; then
    echo "$GSTACK_BRAIN_SOURCE_ID"; return 0
  fi
  local remote_url=""
  remote_url=$(git -C "$GSTACK_HOME" remote get-url origin 2>/dev/null) || true
  if [ -z "$remote_url" ] && [ -f "$REMOTE_FILE" ]; then
    remote_url=$(head -1 "$REMOTE_FILE" 2>/dev/null | tr -d '[:space:]')
  fi
  [ -z "$remote_url" ] && return 3
  basename "$remote_url" .git \
    | tr '[:upper:]' '[:lower:]' \
    | tr -c 'a-z0-9-' '-' \
    | sed 's/--*/-/g; s/^-//; s/-$//' \
    | cut -c1-32
}
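
The slugging pipeline at the end of the function can be isolated for illustration (`slug` is a hypothetical name; this shows only the id shape, not the fallback chain):

```bash
# basename strips the dir part and the .git suffix; tr lowercases; the
# complemented tr maps every non [a-z0-9-] byte (including the trailing
# newline) to '-'; sed collapses runs and trims edge dashes; cut caps at 32.
slug() {
  basename "$1" .git \
    | tr '[:upper:]' '[:lower:]' \
    | tr -c 'a-z0-9-' '-' \
    | sed 's/--*/-/g; s/^-//; s/-$//' \
    | cut -c1-32
}
slug "git@github.com:user/Gstack-Brain-User.git"   # gstack-brain-user
slug "https://github.com/u/My_Brain.Repo.git"      # my-brain-repo
```

The same URL therefore yields the same source id whether it came from the origin remote or from `~/.gstack-brain-remote.txt`, which is what makes the `--uninstall` fallback chain work.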

# ---- gbrain version gate ----
gbrain_version_ok() {
  if ! command -v gbrain >/dev/null 2>&1; then
    return 1
  fi
  local v
  v=$(gbrain --version 2>/dev/null | awk '{print $2}')
  [ -z "$v" ] && return 1
  # 0.18.0 minimum (gbrain sources shipped there). Feed the floor first so an
  # equal-or-greater $v sorts after it — head -1 == "0.18.0" iff $v >= floor.
  [ "$(printf '0.18.0\n%s\n' "$v" | sort -V | head -1)" = "0.18.0" ]
}
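
The `sort -V` floor trick generalizes; a minimal sketch (`version_ge` is a hypothetical name, assuming a `sort` with version-sort support, i.e. GNU coreutils or a recent BSD sort):

```bash
# version_ge CANDIDATE FLOOR: succeeds iff CANDIDATE >= FLOOR under version
# ordering. Print the floor first; after sort -V the floor is still first
# exactly when the candidate is equal or newer.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -1)" = "$2" ]
}
version_ge 0.18.2 0.18.0 && echo "ok"         # ok
version_ge 0.9.9 0.18.0 || echo "too old"     # too old
```

Version ordering matters here: a plain lexicographic `sort` would put `0.9.9` after `0.18.0` and wrongly pass the gate.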

# ---- worktree management ----
# A worktree is always created `--detach`ed at $GSTACK_HOME's HEAD. Detached
# because a branch (main) can only be checked out in ONE worktree, and the
# parent at $GSTACK_HOME already has it. To advance, we re-checkout the
# parent's current HEAD into the detached worktree.
_worktree_add_detached() {
  local sha
  sha=$(git -C "$GSTACK_HOME" rev-parse HEAD 2>/dev/null) || return 1
  git -C "$GSTACK_HOME" worktree prune 2>/dev/null || true
  # Surface git errors via prefix so users see WHY the add failed (disk, perms, etc).
  git -C "$GSTACK_HOME" worktree add --detach "$WORKTREE" "$sha" 2>&1 | prefix
  return "${PIPESTATUS[0]}"
}
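
Why `return "${PIPESTATUS[0]}"` rather than the pipeline's own status: without `pipefail`, a pipeline reports the LAST command's exit code (the `prefix` sed, which succeeds), hiding git's failure. A minimal, self-contained demonstration of the bash-specific `PIPESTATUS` array (`get_first_status` is a hypothetical stand-in; the real helper additionally runs under `set -o pipefail`, which propagates the failure anyway):

```bash
get_first_status() {
  # sed succeeds even when the first command fails, so the pipeline's own
  # status would be 0 here; PIPESTATUS[0] recovers the real exit code.
  false | sed 's/^/x: /' >/dev/null
  return "${PIPESTATUS[0]}"
}
if get_first_status; then echo "rc=0"; else echo "rc=$?"; fi   # rc=1
```

`PIPESTATUS` is overwritten by every subsequent pipeline, so it must be read immediately — which is exactly what the `return` on the next line does.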

ensure_worktree() {
  if [ ! -d "$GSTACK_HOME/.git" ]; then
    return 2
  fi
  if [ -d "$WORKTREE/.git" ] || [ -f "$WORKTREE/.git" ]; then
    # already exists; advance the detached HEAD to parent's current HEAD
    if [ "$NO_PULL" = "0" ]; then
      local sha
      sha=$(git -C "$GSTACK_HOME" rev-parse HEAD 2>/dev/null) || return 1
      # Surface checkout errors via prefix so users see WHY the advance failed
      # (uncommitted changes in the detached worktree, ref ambiguity, etc).
      ( cd "$WORKTREE" && git checkout --detach "$sha" 2>&1 | prefix; exit "${PIPESTATUS[0]}" ) || {
        warn "worktree at $WORKTREE could not advance to $sha; resetting via remove + re-add"
        git -C "$GSTACK_HOME" worktree remove --force "$WORKTREE" 2>/dev/null || safe_rm_worktree "$WORKTREE"
        _worktree_add_detached || return 1
      }
    fi
    return 0
  fi
  # Stray non-git dir? Remove first.
  [ -e "$WORKTREE" ] && safe_rm_worktree "$WORKTREE"
  _worktree_add_detached || return 1
}

# ---- gbrain sources operations ----
# Returns 0 if a source with this id exists at the expected path; 1 if it
# exists but the path differs; 2 if absent.
# Hard-fails (exits non-zero via die) if jq is missing — without jq we cannot
# distinguish "absent" from "missing tool" and would falsely re-add an existing
# source. jq is documented as a dependency of gstack-gbrain-detect (transitive),
# but adversarial review flagged the silent fall-through path; this probe makes
# the failure mode loud.
check_source_state() {
  local id="$1"
  if ! command -v jq >/dev/null 2>&1; then
    die "jq required for source state detection. Install jq (brew install jq) and re-run." 1
  fi
  local existing_path
  existing_path=$(gbrain sources list --json 2>/dev/null \
    | jq -r --arg id "$id" '.sources[] | select(.id==$id) | .local_path' 2>/dev/null \
    | tr -d '[:space:]') || existing_path=""
  if [ -z "$existing_path" ]; then
    return 2
  fi
  if [ "$existing_path" = "$WORKTREE" ]; then
    return 0
  fi
  return 1
}

# ---- modes ----
do_probe() {
  local id worktree_status="absent" gbrain_status="missing" source_status="absent"
  id=$(derive_source_id 2>/dev/null) || id="(unknown)"
  # Use an explicit if-block: a chained `[ -d ] || [ -f ] && ...` evaluates
  # strictly left to right, so the `&&` would act on the whole `||` result —
  # easy to misread and easy to break when editing.
  if [ -d "$WORKTREE/.git" ] || [ -f "$WORKTREE/.git" ]; then
    worktree_status="present"
  fi
  if gbrain_version_ok; then
    gbrain_status="ok ($(gbrain --version 2>/dev/null | awk '{print $2}'))"
    # Capture check_source_state's return code explicitly. Relying on $? after
    # an `if`-elif chain is fragile under set -e and undefined under some shells.
    set +e
    check_source_state "$id"
    local css_rc=$?
    set -e
    case "$css_rc" in
      0) source_status="registered ($WORKTREE)" ;;
      1) source_status="registered (different path)" ;;
    esac
  fi
  echo "source_id=$id"
  echo "worktree=$WORKTREE"
  echo "worktree_status=$worktree_status"
  echo "gbrain=$gbrain_status"
  echo "source_status=$source_status"
}

do_wireup() {
  local id
  id=$(derive_source_id) || die "cannot derive source id (no .git, no remote-file, no --source-id)" 2

  if ! gbrain_version_ok; then
    if [ "$STRICT" = "1" ]; then
      die "gbrain not installed or < 0.18.0; install/upgrade gbrain and re-run" 2
    fi
    warn "gbrain not installed or < 0.18.0; skipping wireup (benign skip)"
    exit 0
  fi

  # Capture ensure_worktree's return code explicitly. `$?` after `||` reflects
  # the LAST command in the function under set -e, which is unreliable when the
  # function has multiple internal exit paths.
  set +e
  ensure_worktree
  ew_rc=$?
  set -e
  case "$ew_rc" in
    0) : ;;  # success
    2)
      [ "$STRICT" = "1" ] && die "no $GSTACK_HOME/.git; run /setup-gbrain Step 7 (gstack-brain-init) first" 2
      warn "no $GSTACK_HOME/.git; skipping (benign skip)"
      exit 0
      ;;
    *) die "git worktree creation failed at $WORKTREE" 1 ;;
  esac

  # Source registration: probe state, then act.
  set +e
  check_source_state "$id"
  local sstate=$?
  set -e
  case "$sstate" in
    0) : ;;  # already correctly registered
    1)
      # Multi-Mac case: if the existing path also looks like another machine's
      # brain-worktree (same basename, different parent), don't ping-pong the
      # registration. Just sync from our local worktree — gbrain stores pages
      # by content, not by local_path. The metadata is informational only.
      local existing_path
      existing_path=$(gbrain sources list --json 2>/dev/null \
        | jq -r --arg id "$id" '.sources[] | select(.id==$id) | .local_path' 2>/dev/null \
        | tr -d '[:space:]') || existing_path=""
      if [ "$(basename "$existing_path")" = "$(basename "$WORKTREE")" ] \
          && [ "$existing_path" != "$WORKTREE" ]; then
        warn "source $id is registered at $existing_path (likely another machine's local copy of the same brain repo). Skipping re-registration; will sync from local worktree."
      else
        warn "source $id registered with different path; recreating (gbrain has no 'sources update')"
        gbrain sources remove "$id" --yes 2>&1 | prefix || die "gbrain sources remove failed" 1
        gbrain sources add "$id" --path "$WORKTREE" --federated 2>&1 | prefix \
          || die "gbrain sources add failed" 1
      fi
      ;;
    2)
      gbrain sources add "$id" --path "$WORKTREE" --federated 2>&1 | prefix \
        || die "gbrain sources add failed" 1
      ;;
  esac

  if [ "${GSTACK_BRAIN_NO_SYNC:-0}" = "1" ]; then
    echo "source_id=$id"
    echo "worktree=$WORKTREE"
    echo "pages_synced=skipped"
    exit 0
  fi

  local sync_out sync_redacted
  sync_out=$(gbrain sync --repo "$WORKTREE" 2>&1) || {
    # Redact any postgres:// URLs from the error message in case gbrain logged
    # a connection error containing the full DSN with password. The user sees
    # "***REDACTED***" instead of credentials in their stderr or any log.
    sync_redacted=$(echo "$sync_out" | tail -10 | sed -E 's#postgres(ql)?://[^[:space:]]+#postgres://***REDACTED***#g')
    die "gbrain sync failed (last 10 lines, secrets redacted): $sync_redacted" 1
  }
  echo "$sync_out" | tail -3 | prefix

  echo "source_id=$id"
  echo "worktree=$WORKTREE"
  echo "pages_synced=$(echo "$sync_out" | grep -oE '[0-9]+ pages? imported' | head -1 || echo 'incremental')"
}
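
The DSN-redaction sed can be checked in isolation (`redact_dsn` is a hypothetical name; `sed -E` is supported by both GNU and BSD sed):

```bash
# Replace any postgres:// or postgresql:// URL — greedy up to the next
# whitespace, so user:password@host/db all disappear — with a fixed marker.
redact_dsn() {
  sed -E 's#postgres(ql)?://[^[:space:]]+#postgres://***REDACTED***#g'
}
echo "could not connect to postgresql://user:s3cret@db.example:5432/brain" | redact_dsn
# could not connect to postgres://***REDACTED***
```

Using `#` as the sed delimiter avoids escaping the slashes inside the URL pattern.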

do_uninstall() {
  local id
  id=$(derive_source_id) || die "cannot derive source id; pass --source-id <id> explicitly" 3

  if command -v gbrain >/dev/null 2>&1; then
    gbrain sources remove "$id" --yes 2>&1 | prefix || warn "gbrain sources remove failed (continuing)"
  fi

  if [ -d "$WORKTREE/.git" ] || [ -f "$WORKTREE/.git" ]; then
    git -C "$GSTACK_HOME" worktree remove --force "$WORKTREE" 2>/dev/null \
      || safe_rm_worktree "$WORKTREE"
  fi

  # Cron-stub: future launchd plist (not created today; safety net for D9 future).
  rm -f "$PLIST_PATH" 2>/dev/null || true

  echo "uninstalled source=$id worktree=$WORKTREE"
}

case "$MODE" in
  probe)     do_probe ;;
  wireup)    do_wireup ;;
  uninstall) do_uninstall ;;
esac

+7
-3
@@ -43,9 +43,13 @@ The command:
 3. Pushes an initial commit with just the config.
 4. Writes `~/.gstack-brain-remote.txt` (URL-only, no secrets —
    safe to copy to another machine).
-5. Registers GBrain as a reader if `GBRAIN_URL` + `GBRAIN_TOKEN` are
-   configured. Otherwise you can add readers later with
-   `gstack-brain-reader add <name> --ingest-url <url> --token <token>`.
+5. Wires the gstack-brain repo into your local gbrain as a federated
+   source (via `gbrain sources add` + `git worktree`) so `gbrain search`
+   can index your synced learnings, plans, and designs. Implementation
+   lives in `bin/gstack-gbrain-source-wireup`. The old
+   `gstack-brain-reader add --ingest-url ...` HTTP path was removed in
+   v1.15.1.0 — it depended on a `/ingest-repo` endpoint gbrain never
+   shipped.
 
 After init, the **next skill you run** will ask you ONE question about
 privacy mode:

Executable
+56
@@ -0,0 +1,56 @@
#!/usr/bin/env bash
# Migration: v1.17.0.0 — Wire existing brain-sync repos as gbrain federated sources
#
# Pre-1.17.0.0 /setup-gbrain wrote ~/.gstack/consumers.json with a placeholder
# `status: "pending"` and an empty `ingest_url`, expecting a gbrain HTTP
# /ingest-repo endpoint that never shipped. This migration runs the real
# wireup (gbrain sources add + worktree + initial sync) for users who
# already opted into brain-sync but never got the gbrain side connected.
#
# Idempotent: safe to re-run. Skips when:
#   - User never opted into brain-sync (gbrain_sync_mode = off or unset)
#   - No ~/.gstack/.git (brain-init never ran)
#   - The wireup helper is missing on disk (broken install — defensive)
#
# Failure mode: invokes the helper WITHOUT --strict, so a missing/old gbrain
# CLI is a benign skip rather than blocking the rest of /gstack-upgrade.
set -euo pipefail

if [ -z "${HOME:-}" ]; then
  echo "  [v1.17.0.0] HOME is unset or empty — skipping migration." >&2
  exit 0
fi

SKILLS_DIR="${HOME}/.claude/skills"
BIN_DIR="${SKILLS_DIR}/gstack/bin"
CONFIG_BIN="${BIN_DIR}/gstack-config"
WIREUP_BIN="${BIN_DIR}/gstack-gbrain-source-wireup"

# Skip if user never opted into brain-sync.
SYNC_MODE=""
if [ -x "$CONFIG_BIN" ]; then
  # Trim whitespace defensively: gstack-config can emit trailing newlines,
  # which would mis-classify "off\n" as a non-empty non-off mode.
  SYNC_MODE=$("$CONFIG_BIN" get gbrain_sync_mode 2>/dev/null | tr -d '[:space:]' || echo "")
fi
if [ "$SYNC_MODE" = "off" ] || [ -z "$SYNC_MODE" ]; then
  exit 0
fi

# Skip if no brain-sync git repo exists.
if [ ! -d "${HOME}/.gstack/.git" ]; then
  exit 0
fi

# Skip if helper missing (defensive — should always be present post-upgrade).
if [ ! -x "$WIREUP_BIN" ]; then
  echo "  [v1.17.0.0] $WIREUP_BIN missing or non-executable — skipping wireup." >&2
  exit 0
fi

echo "  [v1.17.0.0] Wiring brain-sync repo into gbrain (federated source + initial sync)..."

# No --strict: missing/old gbrain is a benign skip during a batch upgrade.
"$WIREUP_BIN" || {
  echo "  [v1.17.0.0] Wireup exited non-zero — re-run manually with: $WIREUP_BIN" >&2
}
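
The trailing `|| { ... }` guard is what keeps this migration non-blocking under `set -e`: an unguarded failing command would abort the whole upgrade run, while a command on the left of `||` is exempt from errexit. A minimal sketch (`run_step` is a hypothetical stand-in for the `"$WIREUP_BIN"` call):

```bash
set -e
run_step() { return 1; }              # simulate a failing wireup
run_step || echo "step failed — continuing"
echo "migration finished"             # still reached despite the failure
```

The same reasoning applies to the `|| echo ""` and `|| true` suffixes sprinkled through the other gstack scripts.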

+1
-1
@@ -1,6 +1,6 @@
 {
   "name": "gstack",
-  "version": "1.16.0.0",
+  "version": "1.17.0.0",
   "description": "Garry's Stack — Claude Code skills + fast headless browser. One repo, one install, entire AI engineering workflow.",
   "license": "MIT",
   "type": "module",

+32
-1
@@ -986,7 +986,7 @@ For `/setup-gbrain --repo` invocations, execute ONLY Step 6 and exit.

---

-## Step 7: Offer gstack-brain-sync
+## Step 7: Offer gstack-brain-sync + wire it into gbrain

Separate AskUserQuestion: "Also sync your gstack session memory (learnings,
plans, retros) to a private git repo that gbrain can index across machines?"
@@ -1004,6 +1004,37 @@ If yes:
  # or "full" if user picked yes-full
```

Then wire the brain repo into gbrain so its content is searchable from any
gbrain client (this Claude Code session, future Macs, optional cloud agents).
The helper creates a `git worktree` of `~/.gstack/`, registers it as a
federated source on the user's gbrain (Supabase or PGLite), and runs an
initial `gbrain sync`. Local-Mac only. No cloud agent required. Subsequent
skill runs trigger incremental sync via the existing skill-end push hook.

Capture the database URL out of `~/.gbrain/config.json` first and pass it
explicitly so the wireup is robust against any other process rewriting
`~/.gbrain/config.json` mid-sync (e.g., concurrent `gbrain init` runs
elsewhere on the machine):

```bash
GBRAIN_URL=$(python3 -c "
import json, os, sys
try:
    c = json.load(open(os.path.expanduser('~/.gbrain/config.json')))
    print(c.get('database_url', ''))
except Exception:
    pass
")
~/.claude/skills/gstack/bin/gstack-gbrain-source-wireup --strict \
  ${GBRAIN_URL:+--database-url "$GBRAIN_URL"}
```

`--strict` exits non-zero on missing prereqs (gbrain not installed, < 0.18.0,
or no `~/.gstack/.git` yet) so the user sees the failure rather than silently
ending up with an unwired brain. On non-zero exit, surface the helper's
output and STOP per skill rules — search-across-machines won't work until
the prereq is fixed.
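
The `${GBRAIN_URL:+--database-url "$GBRAIN_URL"}` expansion deserves a note: when the variable is empty or unset the whole expansion vanishes (no stray empty argument is passed), and when it is set it expands to exactly two words, with the inner quotes keeping the URL intact. A minimal demonstration (`demo_args` is a hypothetical name):

```bash
demo_args() { echo "argc=$#"; }
URL=""
demo_args ${URL:+--database-url "$URL"}    # argc=0
URL="postgres://example.local/brain"
demo_args ${URL:+--database-url "$URL"}    # argc=2
```

This is why the helper invocation above is safe whether or not the Python probe found a `database_url` in the config.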

---

## Step 8: Persist `## GBrain Configuration` in CLAUDE.md

@@ -347,7 +347,7 @@ For `/setup-gbrain --repo` invocations, execute ONLY Step 6 and exit.

---

-## Step 7: Offer gstack-brain-sync
+## Step 7: Offer gstack-brain-sync + wire it into gbrain

Separate AskUserQuestion: "Also sync your gstack session memory (learnings,
plans, retros) to a private git repo that gbrain can index across machines?"
@@ -365,6 +365,37 @@ If yes:
  # or "full" if user picked yes-full
```

Then wire the brain repo into gbrain so its content is searchable from any
gbrain client (this Claude Code session, future Macs, optional cloud agents).
The helper creates a `git worktree` of `~/.gstack/`, registers it as a
federated source on the user's gbrain (Supabase or PGLite), and runs an
initial `gbrain sync`. Local-Mac only. No cloud agent required. Subsequent
skill runs trigger incremental sync via the existing skill-end push hook.

Capture the database URL out of `~/.gbrain/config.json` first and pass it
explicitly so the wireup is robust against any other process rewriting
`~/.gbrain/config.json` mid-sync (e.g., concurrent `gbrain init` runs
elsewhere on the machine):

```bash
GBRAIN_URL=$(python3 -c "
import json, os, sys
try:
    c = json.load(open(os.path.expanduser('~/.gbrain/config.json')))
    print(c.get('database_url', ''))
except Exception:
    pass
")
~/.claude/skills/gstack/bin/gstack-gbrain-source-wireup --strict \
  ${GBRAIN_URL:+--database-url "$GBRAIN_URL"}
```

`--strict` exits non-zero on missing prereqs (gbrain not installed, < 0.18.0,
or no `~/.gstack/.git` yet) so the user sees the failure rather than silently
ending up with an unwired brain. On non-zero exit, surface the helper's
output and STOP per skill rules — search-across-machines won't work until
the prereq is fixed.

---

## Step 8: Persist `## GBrain Configuration` in CLAUDE.md

@@ -0,0 +1,440 @@
/**
 * gstack-gbrain-source-wireup — unit tests with mocked gbrain CLI.
 *
 * The helper registers the gstack brain repo as a gbrain federated source
 * via `git worktree`, runs an initial sync, and exposes --uninstall + --probe.
 *
 * Strategy: put a fake `gbrain` binary on PATH that records every call into
 * a log file and reads/writes its "registered sources" state from a JSON
 * file in the test's tmp dir. The helper sees a consistent gbrain-CLI surface
 * but no real database, no real gbrain.
 */

import { describe, test, expect, beforeEach, afterEach } from 'bun:test';
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
import { spawnSync } from 'child_process';

const ROOT = path.resolve(import.meta.dir, '..');
const BIN_DIR = path.join(ROOT, 'bin');
const WIREUP_BIN = path.join(BIN_DIR, 'gstack-gbrain-source-wireup');

let tmpHome: string;
let gstackHome: string;
let worktreeDir: string;
let fakeBinDir: string;
let gbrainCallLog: string;
let gbrainStateFile: string;

function makeFakeGbrain(opts: {
  version?: string | null; // null = "binary missing" (don't write the file)
  syncFails?: boolean;
}) {
  // Test null on opts.version BEFORE applying the default: `opts.version ??
  // '0.18.2'` maps null to the default, so a post-default null check would
  // never fire and the "missing binary" case would write a binary anyway.
  if (opts.version === null) return; // simulate missing binary by NOT writing one
  const version = opts.version ?? '0.18.2';
  const syncFails = opts.syncFails ?? false;

  // Stub gbrain reads/writes state from a JSON file. Fields:
  //   sources: [{id, local_path, federated}]
  fs.writeFileSync(gbrainStateFile, JSON.stringify({ sources: [] }, null, 2));

  const script = `#!/bin/bash
LOG="${gbrainCallLog}"
STATE="${gbrainStateFile}"
# Record the call AND any GBRAIN_DATABASE_URL that the parent passed via env.
# Format: "gbrain <args> [GBRAIN_DATABASE_URL=<url>]" so tests can assert
# the wireup helper exported the locked URL into our env.
LINE="gbrain $@"
[ -n "\${GBRAIN_DATABASE_URL:-}" ] && LINE="\$LINE [GBRAIN_DATABASE_URL=\$GBRAIN_DATABASE_URL]"
echo "\$LINE" >> "$LOG"

# --version
if [ "$1" = "--version" ]; then
  echo "gbrain ${version}"
  exit 0
fi

# sources list --json → emits state
if [ "$1" = "sources" ] && [ "$2" = "list" ]; then
  cat "$STATE"
  exit 0
fi

# sources add <id> --path <p> --federated → adds entry
if [ "$1" = "sources" ] && [ "$2" = "add" ]; then
  shift 2
  ID="$1"; shift
  PATH_VAL=""
  FED="false"
  while [ $# -gt 0 ]; do
    case "$1" in
      --path) PATH_VAL="$2"; shift 2 ;;
      --federated) FED="true"; shift ;;
      *) shift ;;
    esac
  done
  python3 -c "
import json, sys
state = json.load(open('$STATE'))
state['sources'].append({'id': '$ID', 'local_path': '$PATH_VAL', 'federated': '$FED' == 'true'})
json.dump(state, open('$STATE','w'), indent=2)
" || exit 1
  exit 0
fi

# sources remove <id> --yes → drops entry
if [ "$1" = "sources" ] && [ "$2" = "remove" ]; then
  shift 2
  ID="$1"
  python3 -c "
import json
state = json.load(open('$STATE'))
state['sources'] = [s for s in state['sources'] if s['id'] != '$ID']
json.dump(state, open('$STATE','w'), indent=2)
"
  exit 0
fi

# sync --repo <p> → records, optionally fails
if [ "$1" = "sync" ]; then
  ${syncFails ? 'echo "sync failed: connection error" >&2; exit 1' : 'echo "1 page imported"; exit 0'}
fi

echo "fake gbrain: unhandled subcommand: $@" >&2
exit 99
`;
  const gbrainPath = path.join(fakeBinDir, 'gbrain');
  fs.writeFileSync(gbrainPath, script, { mode: 0o755 });
}

function run(
  argv: string[],
  opts: { env?: Record<string, string> } = {}
) {
  const env = {
    PATH: `${fakeBinDir}:${process.env.PATH || '/usr/bin:/bin:/opt/homebrew/bin'}`,
    HOME: tmpHome,
    GSTACK_HOME: gstackHome,
    GSTACK_BRAIN_WORKTREE: worktreeDir,
    GSTACK_BRAIN_NO_SYNC: '0',
    ...(opts.env || {}),
  };
  return spawnSync(WIREUP_BIN, argv, {
    env,
    encoding: 'utf-8',
    cwd: ROOT,
  });
}

function readState(): { sources: Array<{ id: string; local_path: string; federated: boolean }> } {
  if (!fs.existsSync(gbrainStateFile)) return { sources: [] };
  return JSON.parse(fs.readFileSync(gbrainStateFile, 'utf-8'));
}

function gbrainCalls(): string[] {
  if (!fs.existsSync(gbrainCallLog)) return [];
  return fs.readFileSync(gbrainCallLog, 'utf-8')
    .split('\n')
    .filter((l) => l.trim());
}

function setupGstackRepo(remoteUrl: string) {
  // Real git repo at gstackHome with at least one commit + an origin remote.
  fs.mkdirSync(gstackHome, { recursive: true });
  spawnSync('git', ['-C', gstackHome, 'init', '-q', '-b', 'main'], { stdio: 'pipe' });
  spawnSync('git', ['-C', gstackHome, 'config', 'user.email', 'test@example.com'], { stdio: 'pipe' });
  spawnSync('git', ['-C', gstackHome, 'config', 'user.name', 'test'], { stdio: 'pipe' });
  fs.writeFileSync(path.join(gstackHome, '.brain-allowlist'), '# allowlist\n');
  spawnSync('git', ['-C', gstackHome, 'add', '.'], { stdio: 'pipe' });
  spawnSync('git', ['-C', gstackHome, 'commit', '-q', '-m', 'init'], { stdio: 'pipe' });
  spawnSync('git', ['-C', gstackHome, 'remote', 'add', 'origin', remoteUrl], { stdio: 'pipe' });
}

beforeEach(() => {
|
||||
tmpHome = fs.mkdtempSync(path.join(os.tmpdir(), 'gstack-wireup-test-'));
|
||||
gstackHome = path.join(tmpHome, '.gstack');
|
||||
worktreeDir = path.join(tmpHome, '.gstack-brain-worktree');
|
||||
fakeBinDir = path.join(tmpHome, 'fake-bin');
|
||||
fs.mkdirSync(fakeBinDir, { recursive: true });
|
||||
gbrainCallLog = path.join(tmpHome, 'gbrain-calls.log');
|
||||
gbrainStateFile = path.join(tmpHome, 'gbrain-state.json');
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
try {
|
||||
fs.rmSync(tmpHome, { recursive: true, force: true });
|
||||
} catch {}
|
||||
});
|
||||
|
describe('gstack-gbrain-source-wireup — wireup mode', () => {
  test('fresh state: registers source + creates worktree + syncs', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    const r = run([], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    expect(r.status).toBe(0);
    expect(fs.existsSync(worktreeDir)).toBe(true);
    const state = readState();
    expect(state.sources).toHaveLength(1);
    expect(state.sources[0].id).toBe('gstack-brain-user');
    expect(state.sources[0].local_path).toBe(worktreeDir);
    expect(state.sources[0].federated).toBe(true);
  });

  test('idempotent re-run after success: no new sources add call', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    run([], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    const callsAfterFirst = gbrainCalls().filter((c) => c.startsWith('gbrain sources add')).length;
    expect(callsAfterFirst).toBe(1);
    run([], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    const callsAfterSecond = gbrainCalls().filter((c) => c.startsWith('gbrain sources add')).length;
    expect(callsAfterSecond).toBe(1); // no new add
  });

  test('drift recovery: existing source with different path triggers remove + add', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    // Pre-seed the fake gbrain state with a source at the wrong path
    fs.writeFileSync(
      gbrainStateFile,
      JSON.stringify({
        sources: [{ id: 'gstack-brain-user', local_path: '/old/stale/path', federated: true }],
      })
    );
    const r = run([], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    expect(r.status).toBe(0);
    const calls = gbrainCalls();
    expect(calls.some((c) => c.startsWith('gbrain sources remove gstack-brain-user'))).toBe(true);
    expect(calls.some((c) => c.includes(`gbrain sources add gstack-brain-user --path ${worktreeDir}`))).toBe(true);
    const state = readState();
    expect(state.sources[0].local_path).toBe(worktreeDir);
  });

  test('--strict + gbrain too old: exits 2', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({ version: '0.17.0' });
    const r = run(['--strict']);
    expect(r.status).toBe(2);
    expect(r.stderr).toContain('< 0.18.0');
  });

  test('non-strict + gbrain too old: warn + exit 0', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({ version: '0.17.0' });
    const r = run([]);
    expect(r.status).toBe(0);
    expect(r.stderr).toContain('benign skip');
  });

  test('--strict + gbrain missing on PATH: exits 2', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    // Don't make a fake gbrain — fakeBinDir is empty. Keep system dirs on PATH
    // so basic commands (git, awk, sed, etc.) work; only `gbrain` is absent.
    const r = run(['--strict'], {
      env: { PATH: `${fakeBinDir}:/usr/bin:/bin:/opt/homebrew/bin` },
    });
    expect(r.status).toBe(2);
  });

  test('source-id derived from origin URL', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-alice.git');
    makeFakeGbrain({});
    const r = run([], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    expect(r.status).toBe(0);
    expect(readState().sources[0].id).toBe('gstack-brain-alice');
  });

  test('source-id fallback to ~/.gstack-brain-remote.txt when .git is gone', () => {
    // No git repo at gstackHome; just the remote file
    fs.mkdirSync(tmpHome, { recursive: true });
    fs.writeFileSync(
      path.join(tmpHome, '.gstack-brain-remote.txt'),
      'git@github.com:user/gstack-brain-bob.git\n'
    );
    makeFakeGbrain({});
    // No --strict: helper should benign-skip because .gstack/.git is missing
    const r = run([]);
    // ensure_worktree returns 2 → benign skip, exit 0
    expect(r.status).toBe(0);
  });

  test('source-id from --source-id flag overrides everything', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-different.git');
    makeFakeGbrain({});
    run(['--source-id', 'custom-id'], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    const state = readState();
    expect(state.sources[0].id).toBe('custom-id');
  });

  test('--probe: read-only, prints state without mutating', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    const r = run(['--probe']);
    expect(r.status).toBe(0);
    expect(r.stdout).toContain('source_id=gstack-brain-user');
    expect(r.stdout).toContain('worktree=');
    expect(r.stdout).toContain('gbrain=ok');
    expect(r.stdout).toContain('source_status=absent');
    // Probe should NOT call sources add / sync
    const calls = gbrainCalls();
    expect(calls.some((c) => c.startsWith('gbrain sources add'))).toBe(false);
    expect(calls.some((c) => c.startsWith('gbrain sync'))).toBe(false);
  });

  test('gbrain sync failure: exits 1 with stderr', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({ syncFails: true });
    const r = run([]);
    expect(r.status).toBe(1);
    expect(r.stderr).toContain('sync failed');
  });
});
describe('gstack-gbrain-source-wireup — --database-url lock (defends against external config rewrites)', () => {
  test('--database-url flag is exported as GBRAIN_DATABASE_URL to child gbrain calls', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    const TARGET = 'postgresql://postgres.abc:pw@aws.pooler.supabase.com:5432/postgres';
    const r = run(['--database-url', TARGET], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    expect(r.status).toBe(0);
    const calls = gbrainCalls();
    // every gbrain invocation should carry the locked URL
    const writingCalls = calls.filter((c) => c.includes('sources') || c.includes('sync'));
    expect(writingCalls.length).toBeGreaterThan(0);
    for (const c of writingCalls) {
      expect(c).toContain(`[GBRAIN_DATABASE_URL=${TARGET}]`);
    }
  });

  test('falls back to ~/.gbrain/config.json database_url when no flag and no env', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    const FILE_URL = 'postgresql://postgres.xyz:pw@aws.pooler.supabase.com:5432/postgres';
    fs.mkdirSync(path.join(tmpHome, '.gbrain'), { recursive: true });
    fs.writeFileSync(
      path.join(tmpHome, '.gbrain', 'config.json'),
      JSON.stringify({ engine: 'postgres', database_url: FILE_URL })
    );
    // Important: don't pass GBRAIN_DATABASE_URL or DATABASE_URL in env; helper
    // should read from $HOME/.gbrain/config.json (HOME is tmpHome here).
    const r = run([], {
      env: {
        GSTACK_BRAIN_NO_SYNC: '1',
        GBRAIN_DATABASE_URL: '',
        DATABASE_URL: '',
      },
    });
    expect(r.status).toBe(0);
    const calls = gbrainCalls();
    const writingCalls = calls.filter((c) => c.includes('sources add'));
    expect(writingCalls.length).toBe(1);
    expect(writingCalls[0]).toContain(`[GBRAIN_DATABASE_URL=${FILE_URL}]`);
  });

  test('--database-url overrides env GBRAIN_DATABASE_URL and config.json', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    const FLAG_URL = 'postgresql://postgres.flag:pw@a.b:5432/postgres';
    const ENV_URL = 'postgresql://postgres.env:pw@x.y:5432/postgres';
    const FILE_URL = 'postgresql://postgres.file:pw@p.q:5432/postgres';
    fs.mkdirSync(path.join(tmpHome, '.gbrain'), { recursive: true });
    fs.writeFileSync(
      path.join(tmpHome, '.gbrain', 'config.json'),
      JSON.stringify({ engine: 'postgres', database_url: FILE_URL })
    );
    const r = run(['--database-url', FLAG_URL], {
      env: {
        GSTACK_BRAIN_NO_SYNC: '1',
        GBRAIN_DATABASE_URL: ENV_URL,
      },
    });
    expect(r.status).toBe(0);
    const calls = gbrainCalls();
    const writingCalls = calls.filter((c) => c.includes('sources add'));
    expect(writingCalls.length).toBe(1);
    expect(writingCalls[0]).toContain(`[GBRAIN_DATABASE_URL=${FLAG_URL}]`);
    expect(writingCalls[0]).not.toContain(ENV_URL);
    expect(writingCalls[0]).not.toContain(FILE_URL);
  });
});
describe('gstack-gbrain-source-wireup — uninstall mode', () => {
  test('after wireup: removes source + worktree', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    run([], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    expect(readState().sources).toHaveLength(1);
    expect(fs.existsSync(worktreeDir)).toBe(true);
    const r = run(['--uninstall']);
    expect(r.status).toBe(0);
    expect(readState().sources).toHaveLength(0);
    expect(fs.existsSync(worktreeDir)).toBe(false);
  });

  test('with no prior state: exits 3 (cannot derive id)', () => {
    // No git repo, no remote file. --uninstall must fail with code 3.
    fs.mkdirSync(tmpHome, { recursive: true });
    makeFakeGbrain({});
    const r = run(['--uninstall']);
    expect(r.status).toBe(3);
  });

  test('--uninstall when gbrain is missing: exits 0 (best-effort), still removes worktree', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    // First wireup with fake gbrain to create the worktree + register source
    makeFakeGbrain({});
    run([], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    expect(fs.existsSync(worktreeDir)).toBe(true);
    // Now remove the fake gbrain so uninstall sees gbrain missing
    fs.rmSync(path.join(fakeBinDir, 'gbrain'), { force: true });
    const r = run(['--uninstall'], {
      env: { PATH: `${fakeBinDir}:/usr/bin:/bin:/opt/homebrew/bin` },
    });
    expect(r.status).toBe(0); // best-effort, never fails on gbrain absence
    expect(fs.existsSync(worktreeDir)).toBe(false); // worktree still cleaned up
  });
});
describe('gstack-gbrain-source-wireup — defensive paths', () => {
  test('--no-pull skips HEAD advance on existing worktree', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    // First run to create worktree
    run([], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    // Make a new commit on parent so worktree HEAD is "behind"
    fs.writeFileSync(path.join(gstackHome, 'newfile.md'), 'new');
    spawnSync('git', ['-C', gstackHome, 'add', '.'], { stdio: 'pipe' });
    spawnSync('git', ['-C', gstackHome, 'commit', '-q', '-m', 'second commit'], { stdio: 'pipe' });
    const parentHeadAfter = spawnSync('git', ['-C', gstackHome, 'rev-parse', 'HEAD'], {
      encoding: 'utf-8',
    }).stdout.trim();
    const worktreeHeadBefore = spawnSync('git', ['-C', worktreeDir, 'rev-parse', 'HEAD'], {
      encoding: 'utf-8',
    }).stdout.trim();
    expect(parentHeadAfter).not.toBe(worktreeHeadBefore); // sanity: parent advanced
    // --no-pull should leave worktree HEAD where it was
    const r = run(['--no-pull'], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    expect(r.status).toBe(0);
    const worktreeHeadAfter = spawnSync('git', ['-C', worktreeDir, 'rev-parse', 'HEAD'], {
      encoding: 'utf-8',
    }).stdout.trim();
    expect(worktreeHeadAfter).toBe(worktreeHeadBefore);
    expect(worktreeHeadAfter).not.toBe(parentHeadAfter);
  });

  test('stray non-git directory at worktree path is cleaned up + worktree created', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    // Plant a stray non-git directory at the worktree path
    fs.mkdirSync(worktreeDir, { recursive: true });
    fs.writeFileSync(path.join(worktreeDir, 'unrelated.txt'), 'not a worktree');
    expect(fs.existsSync(path.join(worktreeDir, 'unrelated.txt'))).toBe(true);
    expect(fs.existsSync(path.join(worktreeDir, '.git'))).toBe(false);
    // Helper should remove the stray dir + create a real worktree
    const r = run([], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    expect(r.status).toBe(0);
    expect(fs.existsSync(path.join(worktreeDir, '.git'))).toBe(true); // real worktree
    expect(fs.existsSync(path.join(worktreeDir, 'unrelated.txt'))).toBe(false); // stray gone
  });
});
/**
 * gstack-upgrade/migrations/v1.17.0.0.sh — migration script unit tests.
 *
 * The migration runs on /gstack-upgrade for users with brain-sync configured but
 * never wired up to gbrain. It has 4 skip conditions and one happy path.
 *
 * Strategy: stub gstack-config and gstack-gbrain-source-wireup binaries on PATH
 * so each skip condition can be triggered independently. The migration script
 * itself is plain bash — we exercise it directly.
 */

import { describe, test, expect, beforeEach, afterEach } from 'bun:test';
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
import { spawnSync } from 'child_process';

const ROOT = path.resolve(import.meta.dir, '..');
const MIGRATION = path.join(ROOT, 'gstack-upgrade', 'migrations', 'v1.17.0.0.sh');

let tmpHome: string;
let fakeBinDir: string;
let stubLog: string;

function makeFakeStubs(opts: {
  configValue?: string; // value gstack-config returns for gbrain_sync_mode
  configMissing?: boolean; // gstack-config binary itself missing (test edge)
  wireupMissing?: boolean; // wireup binary missing
  wireupExitCode?: number;
}) {
  const skillsBin = path.join(tmpHome, '.claude', 'skills', 'gstack', 'bin');
  fs.mkdirSync(skillsBin, { recursive: true });

  if (!opts.configMissing) {
    const cfg = `#!/bin/bash
echo "gstack-config $@" >> "${stubLog}"
[ "$1" = "get" ] && [ "$2" = "gbrain_sync_mode" ] && echo "${opts.configValue ?? ''}"
exit 0
`;
    fs.writeFileSync(path.join(skillsBin, 'gstack-config'), cfg, { mode: 0o755 });
  }

  if (!opts.wireupMissing) {
    const wu = `#!/bin/bash
echo "gstack-gbrain-source-wireup $@" >> "${stubLog}"
exit ${opts.wireupExitCode ?? 0}
`;
    fs.writeFileSync(path.join(skillsBin, 'gstack-gbrain-source-wireup'), wu, { mode: 0o755 });
  }
}
function makeBrainGitRepo() {
  const gstackHome = path.join(tmpHome, '.gstack');
  fs.mkdirSync(path.join(gstackHome, '.git'), { recursive: true });
}

function run(opts: { env?: Record<string, string> } = {}) {
  const env = {
    PATH: '/usr/bin:/bin:/opt/homebrew/bin',
    HOME: tmpHome,
    ...(opts.env || {}),
  };
  return spawnSync('bash', [MIGRATION], {
    env,
    encoding: 'utf-8',
    cwd: tmpHome,
  });
}

function stubCalls(): string[] {
  if (!fs.existsSync(stubLog)) return [];
  return fs.readFileSync(stubLog, 'utf-8').split('\n').filter((l) => l.trim());
}

beforeEach(() => {
  tmpHome = fs.mkdtempSync(path.join(os.tmpdir(), 'gstack-migration-test-'));
  fakeBinDir = path.join(tmpHome, 'fake-bin');
  fs.mkdirSync(fakeBinDir, { recursive: true });
  stubLog = path.join(tmpHome, 'stub-calls.log');
});

afterEach(() => {
  try {
    fs.rmSync(tmpHome, { recursive: true, force: true });
  } catch {}
});
describe('migrations/v1.17.0.0.sh', () => {
  test('HOME unset: prints message + exit 0 (defensive)', () => {
    // Override HOME to empty string. Bash's [ -z "${HOME:-}" ] guard should fire.
    const r = run({ env: { HOME: '' } });
    expect(r.status).toBe(0);
    expect(r.stderr).toContain('HOME is unset or empty');
  });

  test('gbrain_sync_mode = off: exit 0 silently (no helper invoked)', () => {
    makeFakeStubs({ configValue: 'off' });
    const r = run();
    expect(r.status).toBe(0);
    // Helper should not have been invoked
    const calls = stubCalls();
    expect(calls.some((c) => c.startsWith('gstack-gbrain-source-wireup'))).toBe(false);
  });

  test('gbrain_sync_mode unset/empty: exit 0 silently', () => {
    makeFakeStubs({ configValue: '' }); // empty string return
    const r = run();
    expect(r.status).toBe(0);
    const calls = stubCalls();
    expect(calls.some((c) => c.startsWith('gstack-gbrain-source-wireup'))).toBe(false);
  });

  test('no ~/.gstack/.git: exit 0 silently (no brain-sync configured)', () => {
    makeFakeStubs({ configValue: 'full' });
    // Do NOT call makeBrainGitRepo() — no .gstack/.git directory exists
    const r = run();
    expect(r.status).toBe(0);
    const calls = stubCalls();
    expect(calls.some((c) => c.startsWith('gstack-gbrain-source-wireup'))).toBe(false);
  });

  test('helper missing on PATH: prints warning, exit 0 (defensive)', () => {
    makeFakeStubs({ configValue: 'full', wireupMissing: true });
    makeBrainGitRepo();
    const r = run();
    expect(r.status).toBe(0);
    expect(r.stderr).toContain('missing or non-executable');
  });

  test('happy path: invokes the helper', () => {
    makeFakeStubs({ configValue: 'full' });
    makeBrainGitRepo();
    const r = run();
    expect(r.status).toBe(0);
    const calls = stubCalls();
    expect(calls.some((c) => c.startsWith('gstack-gbrain-source-wireup'))).toBe(true);
    // Note: migration invokes WITHOUT --strict (benign-skip semantics for batch upgrade)
    const helperCall = calls.find((c) => c.startsWith('gstack-gbrain-source-wireup'));
    expect(helperCall).not.toContain('--strict');
  });

  test('helper exits non-zero: migration prints retry hint, exit 0 (non-blocking)', () => {
    // The migration uses `|| { echo retry-hint; }` so non-zero helper still
    // exits 0 and prints a retry hint to stderr.
    makeFakeStubs({ configValue: 'full', wireupExitCode: 2 });
    makeBrainGitRepo();
    const r = run();
    expect(r.status).toBe(0); // migration is non-blocking
    expect(r.stderr).toContain('Wireup exited non-zero');
  });
});