mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-02 03:35:09 +02:00
675717e320
* feat: gstack-gbrain-source-wireup helper + 13 unit tests

  The new bin/gstack-gbrain-source-wireup is the single helper that registers the gstack brain repo as a gbrain federated source via `git worktree`, runs an incremental sync, and supports --uninstall, --probe, and --strict modes. It replaces the dead `consumers.json + ingest_url + /ingest-repo` HTTP wireup introduced in v1.12.0.0; that endpoint never shipped on the gbrain side.

  The federation surface (`gbrain sources` / `gbrain sync`) shipped in gbrain v0.18.0, and this helper adapts to its actual semantics: there is no `sources update`, so path-drift recovery is `remove + re-add`, and there is no `--install-cron`, so freshness rides on the existing skill-end push hook.

  Source-id derivation is multi-fallback: ~/.gstack/.git origin URL → ~/.gstack-brain-remote.txt → --source-id flag. This makes `--uninstall` work even after `~/.gstack/.git` is destroyed by the parent uninstall script.

  The worktree is `--detach`ed at $GSTACK_HOME's HEAD because main is already checked out there; advancing it is a re-checkout of the parent's current HEAD, not a `git pull`. Divergence recovery removes and re-adds the worktree.

  The test suite covers 13 cases: fresh-state registration, idempotent re-runs, drift recovery, --strict failure modes, the source-id fallback chain, --probe non-mutation, sync errors, and --uninstall. A fake gbrain sits on $PATH; real git operations run against a GSTACK_HOME tmp dir.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: wire setup-gbrain + brain-restore + brain-uninstall to use the helper

  setup-gbrain Step 7 now invokes gstack-gbrain-source-wireup --strict after gstack-brain-init + gbrain_sync_mode is set. Strict mode means the user sees the failure rather than silently ending up with an unwired brain.

  bin/gstack-brain-init drops 60 lines of dead code: the HTTP POST to ${GBRAIN_URL}/ingest-repo, the GBRAIN_URL_VAL/GBRAIN_TOKEN_VAL probes, the consumers.json writer, and the chore-commit step. The CONSUMERS_FILE variable declaration is removed.
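The multi-fallback source-id derivation described in the helper commit above can be sketched roughly as follows. Function and variable names here are assumptions, not the helper's real internals; the precedence shown (explicit flag first, then origin URL, then the breadcrumb file) follows the flag-override behavior asserted in the test suite.

```shell
#!/usr/bin/env bash
# Sketch only: derive the gbrain source id from, in order, the --source-id
# flag, the brain repo's origin URL, or the ~/.gstack-brain-remote.txt
# breadcrumb that survives destruction of ~/.gstack/.git.
set -euo pipefail

derive_source_id() {        # $1 = value of --source-id, may be empty
  local flag="${1:-}" url=""
  if [ -n "$flag" ]; then
    printf '%s\n' "$flag"; return 0
  fi
  if url=$(git -C "${GSTACK_HOME:-$HOME/.gstack}" remote get-url origin 2>/dev/null); then
    basename "$url" .git; return 0
  fi
  if [ -f "$HOME/.gstack-brain-remote.txt" ]; then
    basename "$(tr -d '[:space:]' < "$HOME/.gstack-brain-remote.txt")" .git
    return 0
  fi
  return 3   # cannot derive; the helper's --uninstall exits 3 in this state
}
```

Because the last fallback reads a plain text file, the chain still resolves after the parent uninstall script has removed `~/.gstack/.git`.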
  The closing message no longer points at the dead gstack-brain-consumer add path.

  bin/gstack-brain-restore drops the 18-line consumers.json token-rehydration block (a no-op for the only consumer that ever existed) and adds a best-effort wireup invocation after the brain-repo clone, so a second-Mac restore gets gbrain federation automatically. Failure prints a stderr WARNING but does not abort the restore; restore's primary job is the git clone.

  bin/gstack-brain-uninstall calls the helper's --uninstall mode (which removes the gbrain source registration, the git worktree, and the future-launchd-plist stub) before the existing legacy consumers.json removal. The ordering is fragile-by-design: the helper derives the source-id via multi-fallback, so it works even after .git is destroyed.

  bin/gstack-brain-consumer gets a DEPRECATED header note. It stays in the tree for one grace cycle; removal in v1.13.0.0. setup-gbrain/SKILL.md is regenerated from the .tmpl via gen:skill-docs.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: v1.12.3.0 migration — wire existing brain-sync repos into gbrain

  An idempotent migration script. For users who already opted into brain-sync before this release (gbrain_sync_mode != off, ~/.gstack/.git exists), it runs the new gstack-gbrain-source-wireup helper so their existing brain repo becomes searchable via gbrain immediately on /gstack-upgrade.

  Skip conditions (each ends with exit 0):
  - HOME unset or empty (defensive)
  - gbrain_sync_mode = off or empty (user opted out)
  - no ~/.gstack/.git (brain-init never ran)
  - helper missing on disk (broken install)

  No --strict on the helper invocation: a missing or old gbrain is a benign skip during a batch upgrade rather than a blocker.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* v1.12.3.0: setup-gbrain wireup ships the gbrain federation surface

  Bumps VERSION 1.12.2.0 → 1.12.3.0 with a release-notes-format entry in CHANGELOG.md.
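The migration's skip ladder can be sketched as below. This is a rough rendering under stated assumptions: the `gstack-config get` call and paths come from the description above, but exact option handling and messages in the real script may differ.

```shell
#!/usr/bin/env bash
# Sketch of the migration's skip ladder: every non-happy path exits 0 so a
# batch /gstack-upgrade is never blocked by brain wireup.
set -uo pipefail

migrate() {
  [ -n "${HOME:-}" ] || return 0                             # defensive: HOME unset/empty
  local mode
  mode=$(gstack-config get gbrain_sync_mode 2>/dev/null | tr -d '[:space:]') || mode=""
  if [ -z "$mode" ] || [ "$mode" = "off" ]; then return 0; fi # user opted out
  [ -d "$HOME/.gstack/.git" ] || return 0                     # brain-init never ran
  if ! command -v gstack-gbrain-source-wireup >/dev/null 2>&1; then
    echo "warning: wireup helper missing; skipping" >&2
    return 0                                                  # broken install, still benign
  fi
  gstack-gbrain-source-wireup || echo "wireup failed; re-run it manually" >&2
  return 0                                                    # never block the upgrade
}
```

Note the deliberate absence of `set -e` around the helper call: a non-zero exit prints a retry hint and the migration still returns 0.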
  After the upgrade, the placeholder consumers.json wireup is gone, gbrain sources + sync + the skill-end hook is the new path, and your gstack memory is actually searchable in gbrain.

  The CHANGELOG entry follows the release-summary format from CLAUDE.md: a two-line bold headline, a lead paragraph naming what shipped, a "verify after upgrade" command block readers can run on their own brain to see the delta, then the standard Itemized changes / What this means / For contributors sections.

  Three pre-existing test failures on this branch are flagged in the contributor section: the GSTACK_HOME isolation test (reads Garry's actual ~/.gstack/config.yaml), the 2MB tracked-binary test (security-bench fixtures > 2MB), and the Opus 4.7 pacing-directive test (overlay text drifted). All three were verified to fail on the base branch too; they are out of scope for this PR and need follow-up.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: helper locks GBRAIN_DATABASE_URL at startup, defends against config rewrites

  The wireup helper previously read ~/.gbrain/config.json on every gbrain subprocess invocation. On Garry's Mac, multiple concurrent test runs and agent integrations were rewriting that file mid-sync, redirecting the wireup at the wrong brain partway through a 4-minute initial import.

  This commit adds a `--database-url <url>` flag to the helper and locks the URL at startup. Precedence:
  1. --database-url flag (explicit caller intent)
  2. GBRAIN_DATABASE_URL / DATABASE_URL env (CI / manual override)
  3. read once from ~/.gbrain/config.json (default)

  Whichever wins is exported as GBRAIN_DATABASE_URL for every child `gbrain` invocation. Per gbrain's loadConfig at src/core/config.ts:53, env-var URLs override the file URL, so a process that flips config.json between two of our gbrain calls can't redirect us. Defense in depth: once the URL is locked, the wireup completes against the original brain even under hostile filesystem conditions.
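The lock described above amounts to resolving the URL once and exporting it for every child process. A minimal sketch, with assumed function and variable names (only the precedence order and the final export are taken from the commit):

```shell
#!/usr/bin/env bash
# Sketch of the URL lock: resolve once at startup, then export so gbrain's
# env-over-file precedence pins every child `gbrain` call to the same brain,
# even if ~/.gbrain/config.json is rewritten mid-run.
set -euo pipefail

lock_database_url() {       # $1 = value of --database-url, may be empty
  # 1. flag → 2. env (GBRAIN_DATABASE_URL, then DATABASE_URL) → 3. config file
  local locked="${1:-${GBRAIN_DATABASE_URL:-${DATABASE_URL:-}}}"
  if [ -z "$locked" ] && [ -f "$HOME/.gbrain/config.json" ]; then
    locked=$(GBRAIN_CONFIG_PATH="$HOME/.gbrain/config.json" python3 -c 'import json, os; print(json.load(open(os.environ["GBRAIN_CONFIG_PATH"])).get("database_url", ""))')
  fi
  export GBRAIN_DATABASE_URL="$locked"
}
```

After `lock_database_url` runs, later rewrites of config.json are irrelevant: children read the exported env var first.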
  setup-gbrain/SKILL.md.tmpl Step 7 now reads the URL out of config.json once (via inline python3) and passes it explicitly with --database-url, so even the very first wireup call is decoupled from config.json mutability.

  Three new test cases cover the lock behavior:
  - the --database-url flag is exported to child gbrain calls
  - falls back to ~/.gbrain/config.json when there is no flag and no env
  - the flag overrides both env GBRAIN_DATABASE_URL and config.json values

  The fake gbrain in the test suite now records GBRAIN_DATABASE_URL alongside each call so tests can assert the helper exported the locked URL. Total test count: 13 → 16 passing.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: bump v1.12.3.0 references to v1.15.1.0 to match merged-with-main release

  Internal-only renames after merging origin/main bumped this branch's release target from v1.12.3.0 → v1.15.1.0:
  - gstack-upgrade/migrations/v1.12.3.0.sh → v1.15.1.0.sh (rename + log-prefix bump from "[v1.12.3.0]" to "[v1.15.1.0]")
  - bin/gstack-brain-consumer header: "DEPRECATED in v1.12.3.0" → "DEPRECATED in v1.15.1.0"; removal target bumped from v1.13.0.0 → v1.16.0.0 (the next minor after v1.15.1.0)
  - bin/gstack-brain-uninstall: "no longer written ... since v1.12.3.0" → "since v1.15.1.0"

  No behavior change. The test suite is still 16/16 passing.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: 10 new cases close coverage gaps (helper defensive paths + migration)

  The /ship Step 7 coverage audit reported 48% (22/46 branches).
  Added 10 cases covering the highest-impact gaps.

  Helper (test/gstack-gbrain-source-wireup.test.ts, +3 cases → 19 total):
  - --uninstall when gbrain is missing: best-effort exit 0, worktree still cleaned
  - --no-pull skips the HEAD advance on an existing worktree (was untested)
  - a stray non-git directory at the worktree path is cleaned up and the worktree created

  Migration (test/gstack-upgrade-migration-v1_15_1_0.test.ts, NEW, 7 cases):
  - HOME unset → defensive exit 0
  - gbrain_sync_mode=off → exit 0 silently
  - gbrain_sync_mode unset → exit 0 silently
  - no ~/.gstack/.git → exit 0 silently
  - helper missing on PATH → warning + exit 0
  - happy path → invokes the helper without --strict
  - helper exits non-zero → migration prints a retry hint, still exits 0 (non-blocking)

  Also syncs the package.json version 1.15.0.0 → 1.15.1.0 to match the VERSION file (a DRIFT_STALE_PKG repair from the /ship Step 12 idempotency check; it was a manual-edit-bypass artifact from the merge step).

  Coverage estimate: 48% → ~75%. The mainline, the migration script, and the key defensive paths are all exercised. 26 tests in total cover the new code surface.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: pre-landing review auto-fixes (5 correctness + observability)

  The /ship Step 9 review surfaced 9 INFORMATIONAL findings on the new helper + migration. Five were auto-fixed with no behavior regression (26/26 tests pass).

  bin/gstack-gbrain-source-wireup:
  - Version compare: put the floor "0.18.0" first in the `sort -V` stdin so an equal-or-greater $v always sorts to position 2. Stable across sort implementations.
  - _worktree_add_detached: drop `2>/dev/null` on the `worktree add` and surface git's stderr through `prefix`, so users see WHY adds fail (disk, perms).
  - ensure_worktree: the same observability fix on the `git checkout --detach` path during HEAD advance, so users see the actual git error before recovery.
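The floor-first `sort -V` comparison can be shown in isolation. This is a sketch with an assumed function name; the trick itself (floor on stdin line 1, candidate on line 2) is the one described in the fix:

```shell
# With the floor first on stdin, an equal-or-greater candidate always ends
# up on line 2 of the sorted output, regardless of tie-handling differences
# between sort implementations.
version_at_least() {   # usage: version_at_least "$have" "$floor"
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | sed -n '2p')" = "$1" ]
}
```

The equal case works because two identical lines sort to themselves, so line 2 still matches the candidate.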
  - do_probe: replace `[ -d X ] || [ -f X ] && set=present` (a precedence trap: the `&&` applies to the whole `||` compound, so the assignment can be skipped when the dir branch fails) with an explicit if-block.
  - do_probe: capture `check_source_state`'s return code explicitly via `set +e; ...; rc=$?; set -e`. `$?` after an `if`/`elif` chain is fragile under set -e and may not reach the elif under some shell versions.
  - do_wireup: the same explicit return-code capture for `ensure_worktree`. The prior `ensure_worktree || { if [ $? = 2 ]; ...` pattern relied on `$?` reflecting the function's return after `||`, which the review flagged as fragile across shells.

  gstack-upgrade/migrations/v1.15.1.0.sh:
  - Trim whitespace from the `gstack-config get gbrain_sync_mode` output via `tr -d '[:space:]'`. Trailing newlines would mis-classify "off\n" as a non-empty non-off mode and incorrectly invoke the helper.

  Skipped findings (cosmetic / out of scope):
  - `python3 -c` reads `~/.gbrain/config.json` via `expanduser` instead of the helper's `$GBRAIN_CONFIG` variable (cosmetic; it honors the HOME override).
  - The long sync-failure error message could be truncated to its last N lines (cosmetic log readability).

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: adversarial review hardening (rm safety, jq probe, secret redaction, multi-Mac)

  The /ship Step 11 adversarial review surfaced 7 CRITICAL issues. Five were fixed inline (no behavior regression; 26/26 tests still pass).

  bin/gstack-gbrain-source-wireup:
  1. **rm -rf path validation** (was F-c-CRITICAL 9/10). Added a `safe_rm_worktree` helper that refuses any path not strictly under $HOME/, plus a dangerous-path denylist for /, /Users, and the $HOME root. It replaces the raw `rm -rf "$WORKTREE"` calls (originally lines 161 and 169). If a user sets GSTACK_BRAIN_WORKTREE="" or "/", the helper now dies cleanly instead of nuking the home dir or root.
  2. **jq dependency probe** (was F-c-CRITICAL 9/10). `check_source_state` now hard-fails with a clear message if jq is missing, instead of silently returning "absent" → re-add → die-on-duplicate.
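A minimal sketch of that jq guard, under stated assumptions: the real `check_source_state` communicates via status codes, while this illustrative version just prints the registered path (or nothing) for a given source id.

```shell
# Sketch: enforce the jq dependency up front, and trim whitespace from jq
# output so a stray newline for a missing field can't corrupt comparisons.
check_source_state() {   # $1 = source id, $2 = `gbrain sources list` JSON
  command -v jq >/dev/null 2>&1 || { echo "jq is required for source-state checks" >&2; return 4; }
  local id="$1" json="$2"
  jq -r --arg id "$id" '.sources[] | select(.id == $id) | .local_path' <<<"$json" | tr -d '[:space:]'
}
```

Failing loudly here is the point: a silent "absent" answer would cascade into a duplicate `sources add` and an opaque failure later.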
  It also trims whitespace from jq output (`tr -d '[:space:]'`) to defend against gbrain emitting `\n` for missing fields. The header comment claimed jq was a transitive dep; now we enforce it.
  3. **Python heredoc warns on JSON parse failure** (was F-c-CRITICAL 8/10). Previously `except Exception: pass` silently swallowed malformed JSON, leaving _locked_url empty and defeating the URL-lock defense. Now it writes the parse error to a temp file and warns the user that the URL was not locked. It also passes the config path via an env var (GBRAIN_CONFIG_PATH) instead of hardcoding `~/.gbrain/config.json`, respecting any HOME override.
  4. **Multi-Mac source-id collision fix** (was F-c-CRITICAL 9/10). When `check_source_state` returns 1 (source exists at a different path), the helper used to remove + re-add. Two Macs sharing one Supabase brain would ping-pong the local_path metadata on every sync. Now, if the existing path's basename matches the local worktree's basename (likely another machine's local copy of the SAME brain repo), the helper skips re-registration and syncs against the local worktree. gbrain stores pages by content; the metadata is informational. No more ping-pong.
  5. **Redact the DB URL from the sync-failure error message** (was F-c-CRITICAL 7/10). `gbrain sync` failures used to echo the full stderr (which can contain the postgres connection string with its password) into the user's terminal and any log redirect. Now we sed-replace any `postgres://...` with `postgres://***REDACTED***` before the die() call, and only show the last 10 lines.

  Bonus minor fix: `die()` now uses `$1` instead of `$*` for the warn message, so the exit-code arg ($2) doesn't get appended to the warning text.

  Acknowledged but deferred:
  - GBRAIN_DATABASE_URL env exposure on Linux via /proc/$PID/environ. This is a Linux-only concern; gstack is Mac-targeted today, and macOS restricts process env reads. Document as a follow-up if Linux support lands.
  - gbrain version-parser brittleness if gbrain switches to a "v0.18.0" prefix.
  Defensive only; current gbrain output matches `gbrain X.Y.Z` exactly.
  - bash 3.2 PIPESTATUS reliability. Tests pass on the host bash version (3.2+ via macOS); modern bash 5.x is widely available.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: sync gbrain-source-wireup helper into USING_GBRAIN + gbrain-sync

  USING_GBRAIN_WITH_GSTACK.md: add a gstack-gbrain-source-wireup row to the bin-helpers table. It describes federation registration via `gbrain sources add` + worktree, lists the flags, and calls out that it replaces the dead consumers.json/ingest-repo HTTP wireup.

  docs/gbrain-sync.md: replace the `gstack-brain-reader add --ingest-url` step in gstack-brain-init's flow (which targeted the never-shipped /ingest-repo endpoint) with the real flow: federate via gbrain sources + worktree, and point to bin/gstack-gbrain-source-wireup.

  Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* v1.16.1.0: rebump after queue collision (PR #1233 took v1.16.0.0)

  CI's "Check VERSION is not stale vs queue" job (job 73105686380) failed with: "VERSION drift: PR #1234 claims v1.15.1.0 but the queue has moved — next free slot is v1.16.1.0."

  PR #1233 (garrytan/browserharness) entered the queue claiming v1.16.0.0 between when this branch's prior /ship ran and when CI evaluated, so v1.15.1.0 is stale. Rebumping on top.

  Files updated:
  - VERSION 1.15.1.0 → 1.16.1.0
  - package.json 1.15.1.0 → 1.16.1.0
  - CHANGELOG.md heading + Before/After columns 1.15.1.0 → 1.16.1.0
  - CHANGELOG removal target (consumers.json + config keys) 1.16.0.0 → 1.17.0.0
  - gstack-upgrade/migrations/v1.15.1.0.sh → renamed v1.16.1.0.sh + log prefix
  - bin/gstack-brain-consumer "DEPRECATED in" + "removal in" 1.15.1.0/1.16.0.0 → 1.16.1.0/1.17.0.0
  - bin/gstack-brain-uninstall "since vX.Y.Z.W" 1.15.1.0 → 1.16.1.0
  - test/gstack-upgrade-migration-v1_15_1_0.test.ts → renamed v1_16_1_0.test.ts

  No behavior change. 26/26 wireup + migration tests still pass after the rename. Full bun test suite: exit 0, 0 failures.
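Two of the shell pitfalls fixed in the review passes above can be shown in isolation. A minimal sketch with hypothetical function names; only the patterns themselves come from the fixes:

```shell
#!/usr/bin/env bash
set -u

# Precedence trap: `a || b && c` groups as `(a || b) && c`. Under `set -e`,
# when neither test passes, the whole compound fails and can abort the
# script. The explicit if-block is unambiguous and set -e safe:
probe_path() {
  local state="absent"
  if [ -d "$1" ] || [ -f "$1" ]; then
    state="present"
  fi
  echo "$state"
}

# Explicit return-code capture: rather than trusting `$?` inside
# `fn || { if [ $? = 2 ]; ... }`, capture it by hand with -e suspended:
maybe_skip() { return 2; }
set +e
maybe_skip
rc=$?
set -e
[ "$rc" -eq 2 ] && echo "benign skip"
```

The `set +e; ...; rc=$?; set -e` dance is verbose but leaves no ambiguity about which command's status `rc` holds.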
  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* v1.17.0.0: rebump again — bump-detection now classifies the branch as MINOR

  CI's version-stale check (job 73106360896) failed: PR #1234 claims v1.16.1.0 but the queue moved to v1.17.0.0.

  Root cause: bumping 1.15.1.0 → 1.16.1.0 to dodge the prior collision turned the branch's diff classification from PATCH (1.15.0 → 1.15.1) into MINOR (1.15.0 → 1.16.x). detect-bump.ts now sees MINOR, gstack-next-version walks the MINOR lane past #1233's v1.16.0.0 claim, and the next free slot is v1.17.0.0.

  This is honestly accurate per CLAUDE.md's scale-aware bumps: the branch IS a MINOR ("substantial new capability shipped — skill, harness, command, big refactor"). The new helper + migration + integration total ~1200 lines added across 11 files with 26 new tests. PATCH was always the wrong classification; the queue collision forced the right answer.

  Files updated:
  - VERSION 1.16.1.0 → 1.17.0.0
  - package.json 1.16.1.0 → 1.17.0.0
  - CHANGELOG.md heading + After column 1.16.1.0 → 1.17.0.0
  - CHANGELOG removal targets 1.17.0.0 → 1.18.0.0
  - gstack-upgrade/migrations/v1.16.1.0.sh → renamed v1.17.0.0.sh + log prefix
  - bin/gstack-brain-consumer "DEPRECATED in" + "removal in" 1.16.1.0/1.17.0.0 → 1.17.0.0/1.18.0.0
  - bin/gstack-brain-uninstall "since vX.Y.Z.W" 1.16.1.0 → 1.17.0.0
  - test/gstack-upgrade-migration-v1_16_1_0.test.ts → renamed v1_17_0_0.test.ts

  26/26 tests still pass. No behavior change.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
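The highest-severity hardening fix, the `rm -rf` guard, can be sketched as below. The function name comes from the commit message; the internals are assumed, keeping only the stated policy (strictly under $HOME, with a denylist for catastrophic roots):

```shell
#!/usr/bin/env bash
# Sketch of safe_rm_worktree: refuse to delete anything that is not strictly
# inside $HOME, and never /, /Users, or $HOME itself, so an empty or
# misconfigured GSTACK_BRAIN_WORKTREE dies cleanly instead of nuking things.
set -euo pipefail

safe_rm_worktree() {
  local target="${1:-}"
  case "$target" in
    ""|"/"|"/Users"|"$HOME")
      echo "refusing to rm '$target'" >&2; return 1 ;;
    "$HOME"/*)
      rm -rf "$target" ;;
    *)
      echo "refusing to rm '$target': outside \$HOME" >&2; return 1 ;;
  esac
}
```

The `case` form keeps the dangerous-path check and the prefix check in one place, so every deletion path in the helper goes through the same gate.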
441 lines
17 KiB
TypeScript
/**
 * gstack-gbrain-source-wireup — unit tests with mocked gbrain CLI.
 *
 * The helper registers the gstack brain repo as a gbrain federated source
 * via `git worktree`, runs an initial sync, and exposes --uninstall + --probe.
 *
 * Strategy: put a fake `gbrain` binary on PATH that records every call into
 * a log file and reads/writes its "registered sources" state from a JSON
 * file in the test's tmp dir. The helper sees a consistent gbrain-CLI surface
 * but no real database, no real gbrain.
 */

import { describe, test, expect, beforeEach, afterEach } from 'bun:test';
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
import { spawnSync } from 'child_process';

const ROOT = path.resolve(import.meta.dir, '..');
const BIN_DIR = path.join(ROOT, 'bin');
const WIREUP_BIN = path.join(BIN_DIR, 'gstack-gbrain-source-wireup');

let tmpHome: string;
let gstackHome: string;
let worktreeDir: string;
let fakeBinDir: string;
let gbrainCallLog: string;
let gbrainStateFile: string;

function makeFakeGbrain(opts: {
  version?: string | null; // null = "binary missing" (don't write the file)
  syncFails?: boolean;
}) {
  // Check opts.version BEFORE applying the default: `opts.version ?? '0.18.2'`
  // replaces null, so a null check after the default would be dead code.
  if (opts.version === null) return; // simulate missing binary by NOT writing one
  const version = opts.version ?? '0.18.2';
  const syncFails = opts.syncFails ?? false;

  // Stub gbrain reads/writes state from a JSON file. Fields:
  // sources: [{id, local_path, federated}]
  fs.writeFileSync(gbrainStateFile, JSON.stringify({ sources: [] }, null, 2));

  const script = `#!/bin/bash
LOG="${gbrainCallLog}"
STATE="${gbrainStateFile}"
# Record the call AND any GBRAIN_DATABASE_URL that the parent passed via env.
# Format: "gbrain <args> [GBRAIN_DATABASE_URL=<url>]" so tests can assert
# the wireup helper exported the locked URL into our env.
LINE="gbrain $*"
[ -n "\${GBRAIN_DATABASE_URL:-}" ] && LINE="\$LINE [GBRAIN_DATABASE_URL=\$GBRAIN_DATABASE_URL]"
echo "\$LINE" >> "$LOG"

# --version
if [ "$1" = "--version" ]; then
  echo "gbrain ${version}"
  exit 0
fi

# sources list --json → emits state
if [ "$1" = "sources" ] && [ "$2" = "list" ]; then
  cat "$STATE"
  exit 0
fi

# sources add <id> --path <p> --federated → adds entry
if [ "$1" = "sources" ] && [ "$2" = "add" ]; then
  shift 2
  ID="$1"; shift
  PATH_VAL=""
  FED="false"
  while [ $# -gt 0 ]; do
    case "$1" in
      --path) PATH_VAL="$2"; shift 2 ;;
      --federated) FED="true"; shift ;;
      *) shift ;;
    esac
  done
  python3 -c "
import json
state = json.load(open('$STATE'))
state['sources'].append({'id': '$ID', 'local_path': '$PATH_VAL', 'federated': '$FED' == 'true'})
json.dump(state, open('$STATE','w'), indent=2)
" || exit 1
  exit 0
fi

# sources remove <id> --yes → drops entry
if [ "$1" = "sources" ] && [ "$2" = "remove" ]; then
  shift 2
  ID="$1"
  python3 -c "
import json
state = json.load(open('$STATE'))
state['sources'] = [s for s in state['sources'] if s['id'] != '$ID']
json.dump(state, open('$STATE','w'), indent=2)
"
  exit 0
fi

# sync --repo <p> → records, optionally fails
if [ "$1" = "sync" ]; then
  ${syncFails ? 'echo "sync failed: connection error" >&2; exit 1' : 'echo "1 page imported"; exit 0'}
fi

echo "fake gbrain: unhandled subcommand: $*" >&2
exit 99
`;
  const gbrainPath = path.join(fakeBinDir, 'gbrain');
  fs.writeFileSync(gbrainPath, script, { mode: 0o755 });
}

function run(
  argv: string[],
  opts: { env?: Record<string, string> } = {}
) {
  const env = {
    PATH: `${fakeBinDir}:${process.env.PATH || '/usr/bin:/bin:/opt/homebrew/bin'}`,
    HOME: tmpHome,
    GSTACK_HOME: gstackHome,
    GSTACK_BRAIN_WORKTREE: worktreeDir,
    GSTACK_BRAIN_NO_SYNC: '0',
    ...(opts.env || {}),
  };
  return spawnSync(WIREUP_BIN, argv, {
    env,
    encoding: 'utf-8',
    cwd: ROOT,
  });
}

function readState(): { sources: Array<{ id: string; local_path: string; federated: boolean }> } {
  if (!fs.existsSync(gbrainStateFile)) return { sources: [] };
  return JSON.parse(fs.readFileSync(gbrainStateFile, 'utf-8'));
}

function gbrainCalls(): string[] {
  if (!fs.existsSync(gbrainCallLog)) return [];
  return fs.readFileSync(gbrainCallLog, 'utf-8')
    .split('\n')
    .filter((l) => l.trim());
}

function setupGstackRepo(remoteUrl: string) {
  // Real git repo at gstackHome with at least one commit + an origin remote.
  fs.mkdirSync(gstackHome, { recursive: true });
  spawnSync('git', ['-C', gstackHome, 'init', '-q', '-b', 'main'], { stdio: 'pipe' });
  spawnSync('git', ['-C', gstackHome, 'config', 'user.email', 'test@example.com'], { stdio: 'pipe' });
  spawnSync('git', ['-C', gstackHome, 'config', 'user.name', 'test'], { stdio: 'pipe' });
  fs.writeFileSync(path.join(gstackHome, '.brain-allowlist'), '# allowlist\n');
  spawnSync('git', ['-C', gstackHome, 'add', '.'], { stdio: 'pipe' });
  spawnSync('git', ['-C', gstackHome, 'commit', '-q', '-m', 'init'], { stdio: 'pipe' });
  spawnSync('git', ['-C', gstackHome, 'remote', 'add', 'origin', remoteUrl], { stdio: 'pipe' });
}

beforeEach(() => {
  tmpHome = fs.mkdtempSync(path.join(os.tmpdir(), 'gstack-wireup-test-'));
  gstackHome = path.join(tmpHome, '.gstack');
  worktreeDir = path.join(tmpHome, '.gstack-brain-worktree');
  fakeBinDir = path.join(tmpHome, 'fake-bin');
  fs.mkdirSync(fakeBinDir, { recursive: true });
  gbrainCallLog = path.join(tmpHome, 'gbrain-calls.log');
  gbrainStateFile = path.join(tmpHome, 'gbrain-state.json');
});

afterEach(() => {
  try {
    fs.rmSync(tmpHome, { recursive: true, force: true });
  } catch {}
});

describe('gstack-gbrain-source-wireup — wireup mode', () => {
  test('fresh state: registers source + creates worktree + syncs', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    const r = run([], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    expect(r.status).toBe(0);
    expect(fs.existsSync(worktreeDir)).toBe(true);
    const state = readState();
    expect(state.sources).toHaveLength(1);
    expect(state.sources[0].id).toBe('gstack-brain-user');
    expect(state.sources[0].local_path).toBe(worktreeDir);
    expect(state.sources[0].federated).toBe(true);
  });

  test('idempotent re-run after success: no new sources add call', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    run([], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    const callsAfterFirst = gbrainCalls().filter((c) => c.startsWith('gbrain sources add')).length;
    expect(callsAfterFirst).toBe(1);
    run([], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    const callsAfterSecond = gbrainCalls().filter((c) => c.startsWith('gbrain sources add')).length;
    expect(callsAfterSecond).toBe(1); // no new add
  });

  test('drift recovery: existing source with different path triggers remove + add', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    // Pre-seed the fake gbrain state with a source at the wrong path
    fs.writeFileSync(
      gbrainStateFile,
      JSON.stringify({
        sources: [{ id: 'gstack-brain-user', local_path: '/old/stale/path', federated: true }],
      })
    );
    const r = run([], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    expect(r.status).toBe(0);
    const calls = gbrainCalls();
    expect(calls.some((c) => c.startsWith('gbrain sources remove gstack-brain-user'))).toBe(true);
    expect(calls.some((c) => c.includes(`gbrain sources add gstack-brain-user --path ${worktreeDir}`))).toBe(true);
    const state = readState();
    expect(state.sources[0].local_path).toBe(worktreeDir);
  });

  test('--strict + gbrain too old: exits 2', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({ version: '0.17.0' });
    const r = run(['--strict']);
    expect(r.status).toBe(2);
    expect(r.stderr).toContain('< 0.18.0');
  });

  test('non-strict + gbrain too old: warn + exit 0', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({ version: '0.17.0' });
    const r = run([]);
    expect(r.status).toBe(0);
    expect(r.stderr).toContain('benign skip');
  });

  test('--strict + gbrain missing on PATH: exits 2', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    // Don't make a fake gbrain — fakeBinDir is empty. Keep system dirs on PATH
    // so basic commands (git, awk, sed, etc.) work; only `gbrain` is absent.
    const r = run(['--strict'], {
      env: { PATH: `${fakeBinDir}:/usr/bin:/bin:/opt/homebrew/bin` },
    });
    expect(r.status).toBe(2);
  });

  test('source-id derived from origin URL', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-alice.git');
    makeFakeGbrain({});
    const r = run([], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    expect(r.status).toBe(0);
    expect(readState().sources[0].id).toBe('gstack-brain-alice');
  });

  test('source-id fallback to ~/.gstack-brain-remote.txt when .git is gone', () => {
    // No git repo at gstackHome; just the remote-file
    fs.mkdirSync(tmpHome, { recursive: true });
    fs.writeFileSync(
      path.join(tmpHome, '.gstack-brain-remote.txt'),
      'git@github.com:user/gstack-brain-bob.git\n'
    );
    makeFakeGbrain({});
    // No --strict: helper should benign-skip because .gstack/.git is missing
    const r = run([]);
    // ensure_worktree returns 2 → benign skip, exit 0
    expect(r.status).toBe(0);
  });

  test('source-id from --source-id flag overrides everything', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-different.git');
    makeFakeGbrain({});
    run(['--source-id', 'custom-id'], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    const state = readState();
    expect(state.sources[0].id).toBe('custom-id');
  });

  test('--probe: read-only, prints state without mutating', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    const r = run(['--probe']);
    expect(r.status).toBe(0);
    expect(r.stdout).toContain('source_id=gstack-brain-user');
    expect(r.stdout).toContain('worktree=');
    expect(r.stdout).toContain('gbrain=ok');
    expect(r.stdout).toContain('source_status=absent');
    // Probe should NOT call sources add / sync
    const calls = gbrainCalls();
    expect(calls.some((c) => c.startsWith('gbrain sources add'))).toBe(false);
    expect(calls.some((c) => c.startsWith('gbrain sync'))).toBe(false);
  });

  test('gbrain sync failure: exits 1 with stderr', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({ syncFails: true });
    const r = run([]);
    expect(r.status).toBe(1);
    expect(r.stderr).toContain('sync failed');
  });
});

describe('gstack-gbrain-source-wireup — --database-url lock (defends against external config rewrites)', () => {
  test('--database-url flag is exported as GBRAIN_DATABASE_URL to child gbrain calls', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    const TARGET = 'postgresql://postgres.abc:pw@aws.pooler.supabase.com:5432/postgres';
    const r = run(['--database-url', TARGET], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    expect(r.status).toBe(0);
    const calls = gbrainCalls();
    // every gbrain invocation should carry the locked URL
    const writingCalls = calls.filter((c) => c.includes('sources') || c.includes('sync'));
    expect(writingCalls.length).toBeGreaterThan(0);
    for (const c of writingCalls) {
      expect(c).toContain(`[GBRAIN_DATABASE_URL=${TARGET}]`);
    }
  });

  test('falls back to ~/.gbrain/config.json database_url when no flag and no env', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    const FILE_URL = 'postgresql://postgres.xyz:pw@aws.pooler.supabase.com:5432/postgres';
    fs.mkdirSync(path.join(tmpHome, '.gbrain'), { recursive: true });
    fs.writeFileSync(
      path.join(tmpHome, '.gbrain', 'config.json'),
      JSON.stringify({ engine: 'postgres', database_url: FILE_URL })
    );
    // Important: don't pass GBRAIN_DATABASE_URL or DATABASE_URL in env; helper
    // should read from $HOME/.gbrain/config.json (HOME is tmpHome here).
    const r = run([], {
      env: {
        GSTACK_BRAIN_NO_SYNC: '1',
        GBRAIN_DATABASE_URL: '',
        DATABASE_URL: '',
      },
    });
    expect(r.status).toBe(0);
    const calls = gbrainCalls();
    const writingCalls = calls.filter((c) => c.includes('sources add'));
    expect(writingCalls.length).toBe(1);
    expect(writingCalls[0]).toContain(`[GBRAIN_DATABASE_URL=${FILE_URL}]`);
  });

  test('--database-url overrides env GBRAIN_DATABASE_URL and config.json', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    const FLAG_URL = 'postgresql://postgres.flag:pw@a.b:5432/postgres';
    const ENV_URL = 'postgresql://postgres.env:pw@x.y:5432/postgres';
    const FILE_URL = 'postgresql://postgres.file:pw@p.q:5432/postgres';
    fs.mkdirSync(path.join(tmpHome, '.gbrain'), { recursive: true });
    fs.writeFileSync(
      path.join(tmpHome, '.gbrain', 'config.json'),
      JSON.stringify({ engine: 'postgres', database_url: FILE_URL })
    );
    const r = run(['--database-url', FLAG_URL], {
      env: {
        GSTACK_BRAIN_NO_SYNC: '1',
        GBRAIN_DATABASE_URL: ENV_URL,
      },
    });
    expect(r.status).toBe(0);
    const calls = gbrainCalls();
    const writingCalls = calls.filter((c) => c.includes('sources add'));
    expect(writingCalls.length).toBe(1);
    expect(writingCalls[0]).toContain(`[GBRAIN_DATABASE_URL=${FLAG_URL}]`);
    expect(writingCalls[0]).not.toContain(ENV_URL);
    expect(writingCalls[0]).not.toContain(FILE_URL);
  });
});

describe('gstack-gbrain-source-wireup — uninstall mode', () => {
  test('after wireup: removes source + worktree', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    run([], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    expect(readState().sources).toHaveLength(1);
    expect(fs.existsSync(worktreeDir)).toBe(true);
    const r = run(['--uninstall']);
    expect(r.status).toBe(0);
    expect(readState().sources).toHaveLength(0);
    expect(fs.existsSync(worktreeDir)).toBe(false);
  });

  test('with no prior state: exits 3 (cannot derive id)', () => {
    // No git repo, no remote file. --uninstall must fail with code 3.
    fs.mkdirSync(tmpHome, { recursive: true });
    makeFakeGbrain({});
    const r = run(['--uninstall']);
    expect(r.status).toBe(3);
  });

  test('--uninstall when gbrain is missing: exits 0 (best-effort), still removes worktree', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    // First wireup with fake gbrain to create the worktree + register source
    makeFakeGbrain({});
    run([], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    expect(fs.existsSync(worktreeDir)).toBe(true);
    // Now remove the fake gbrain so uninstall sees gbrain missing
    fs.rmSync(path.join(fakeBinDir, 'gbrain'), { force: true });
    const r = run(['--uninstall'], {
      env: { PATH: `${fakeBinDir}:/usr/bin:/bin:/opt/homebrew/bin` },
    });
    expect(r.status).toBe(0); // best-effort, never fails on gbrain absence
    expect(fs.existsSync(worktreeDir)).toBe(false); // worktree still cleaned up
  });
});

describe('gstack-gbrain-source-wireup — defensive paths', () => {
  test('--no-pull skips HEAD advance on existing worktree', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    // First run to create worktree
    run([], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    // Make a new commit on parent so worktree HEAD is "behind"
    fs.writeFileSync(path.join(gstackHome, 'newfile.md'), 'new');
    spawnSync('git', ['-C', gstackHome, 'add', '.'], { stdio: 'pipe' });
    spawnSync('git', ['-C', gstackHome, 'commit', '-q', '-m', 'second commit'], { stdio: 'pipe' });
    const parentHeadAfter = spawnSync('git', ['-C', gstackHome, 'rev-parse', 'HEAD'], {
      encoding: 'utf-8',
    }).stdout.trim();
    const worktreeHeadBefore = spawnSync('git', ['-C', worktreeDir, 'rev-parse', 'HEAD'], {
      encoding: 'utf-8',
    }).stdout.trim();
    expect(parentHeadAfter).not.toBe(worktreeHeadBefore); // sanity: parent advanced
    // --no-pull should leave worktree HEAD where it was
    const r = run(['--no-pull'], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    expect(r.status).toBe(0);
    const worktreeHeadAfter = spawnSync('git', ['-C', worktreeDir, 'rev-parse', 'HEAD'], {
      encoding: 'utf-8',
    }).stdout.trim();
    expect(worktreeHeadAfter).toBe(worktreeHeadBefore);
    expect(worktreeHeadAfter).not.toBe(parentHeadAfter);
  });

  test('stray non-git directory at worktree path is cleaned up + worktree created', () => {
    setupGstackRepo('git@github.com:user/gstack-brain-user.git');
    makeFakeGbrain({});
    // Plant a stray non-git directory at the worktree path
    fs.mkdirSync(worktreeDir, { recursive: true });
    fs.writeFileSync(path.join(worktreeDir, 'unrelated.txt'), 'not a worktree');
    expect(fs.existsSync(path.join(worktreeDir, 'unrelated.txt'))).toBe(true);
    expect(fs.existsSync(path.join(worktreeDir, '.git'))).toBe(false);
    // Helper should remove the stray dir + create a real worktree
    const r = run([], { env: { GSTACK_BRAIN_NO_SYNC: '1' } });
    expect(r.status).toBe(0);
    expect(fs.existsSync(path.join(worktreeDir, '.git'))).toBe(true); // real worktree
    expect(fs.existsSync(path.join(worktreeDir, 'unrelated.txt'))).toBe(false); // stray gone
  });
});