mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-01 19:25:10 +02:00
v1.12.0.0 feat: /setup-gbrain — coding-agent onboarding for gbrain (#1183)
* feat(setup-gbrain): add gstack-gbrain-repo-policy bin helper

  Per-remote trust-tier store for the forthcoming /setup-gbrain skill. Tiers are the D3 triad
  (read-write / read-only / deny), keyed by a normalized remote URL so ssh-shorthand and https
  variants collapse to the same entry. The file carries _schema_version: 2 (D2-eng); legacy
  `allow` values from pre-D3 experiments auto-migrate to `read-write` on first read, idempotent,
  with a one-shot log line. Pure bash + jq to match the existing gstack-brain-* family. Atomic
  writes via tmpfile + rename. Policy file mode 0600. Corrupt files quarantine to .corrupt-<ts>
  and start fresh.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): unit tests for gstack-gbrain-repo-policy

  24 tests covering normalize (ssh/https/shorthand/uppercase collapse to one key), set/get
  round-trip, all three D3 tiers accepted, invalid tiers rejected, file mode 0600,
  _schema_version field written on fresh files, legacy allow migration (including idempotence
  and preservation of non-allow entries), corrupt-JSON quarantine + fresh-file recovery, list
  output sorting, and get-without-arg auto-detect against a git repo with no origin. All tests
  green against a per-test tmpdir GSTACK_HOME so nothing leaks into the real ~/.gstack.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-detect state reporter

  Pure-introspection JSON emitter for the /setup-gbrain skill's start-up branching. Reports:
  gbrain presence + version on PATH, ~/.gbrain/config.json existence + engine,
  `gbrain doctor --json` health (wrapped in timeout 5s to match the /health D6 pattern),
  gstack-brain-sync mode via gstack-config, and ~/.gstack/.git presence for the memory-sync
  feature. Never modifies state. Always emits valid JSON even when every check is false.
  Handles malformed ~/.gbrain/config.json without crashing — gbrain_engine is null in that
  case, not an error.
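The remote-URL normalization described for the repo-policy helper above (ssh shorthand, https, and uppercase variants collapsing to one key) can be sketched as follows. This is a hypothetical TypeScript rendition for illustration only — the shipped helper is pure bash + jq, and any rule beyond the ssh/https/uppercase collapse the commit names is an assumption:

```typescript
// Hypothetical sketch: collapse git@host:owner/repo.git, ssh://git@host/owner/repo,
// and https://host/owner/repo.git (any case) into one policy key.
function normalizeRemote(url: string): string {
  let u = url.trim().toLowerCase();
  // ssh shorthand: git@github.com:owner/repo.git -> github.com/owner/repo.git
  const ssh = u.match(/^git@([^:]+):(.+)$/);
  if (ssh) u = `${ssh[1]}/${ssh[2]}`;
  // strip scheme (https://, ssh://) and any userinfo (git@)
  u = u.replace(/^[a-z+]+:\/\//, '').replace(/^[^@/]+@/, '');
  // strip trailing .git and trailing slash
  return u.replace(/\.git$/, '').replace(/\/$/, '');
}
```

With this, `git@github.com:Owner/Repo.git`, `https://github.com/owner/repo.git`, and `ssh://git@github.com/owner/repo` all map to `github.com/owner/repo`, so they share one trust-tier entry.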
  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-install with D5 detect-first + D19 PATH-shadow guard

  Clones gbrain at a pinned commit (v0.18.2) and registers it via `bun link`.

  Before any clone: D5 detect-first — probes ~/git/gbrain, ~/gbrain, and the install target for
  a valid pre-existing clone (package.json with name "gbrain" and bin.gbrain set). If one is
  found, `bun link` runs there instead of cloning a second copy. Prevents the day-one
  duplicate-install footgun on the skill author's own machine.

  After install: D19 PATH-shadow guard — reads the install-dir's package.json version, compares
  to `gbrain --version` on PATH. On mismatch: exits 3, prints every gbrain binary on PATH via
  `type -a`, and gives a remediation menu. Setup skills refuse broken environments instead of
  warning and continuing.

  Prereq checks (bun, git, https://github.com reachability) fail fast with install hints.
  --dry-run and --validate-only flags let the skill probe the plan without touching state;
  tests use them to cover D5 and D19 without exercising real bun link.

  Pin is a load-bearing version: setup-gbrain v1 verified against gbrain v0.18.2. Updating
  requires re-running Pre-Impl Gate 1 to verify gbrain's CLI + config shapes haven't drifted.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): unit tests for gstack-gbrain-detect + install

  15 tests covering: detect emits valid JSON when nothing configured, reports gstack_brain_git
  on GSTACK_HOME/.git presence, reads ~/.gbrain/config.json engine, tolerates malformed config,
  detects a mocked gbrain binary on PATH with version parsing. For install: D5 detect-first
  uses ~/git/gbrain fixtures under a sandboxed HOME, verifies fall-through to fresh clone when
  no valid clone exists, rejects invalid package.json shapes.
  D19 PATH-shadow validation uses a fake gbrain on a minimal SAFE_PATH to simulate version
  mismatch, same-version pass, v-prefix tolerance, missing binary on PATH, and missing version
  field in package.json. --validate-only mode in the install bin makes the D19 check
  unit-testable without running real bun link (which touches ~/.bun/bin).

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-lib.sh with read_secret_to_env (D3-eng)

  Shared secret-read helper for PAT (D11) and pooler URL paste (D16). One implementation of the
  hardest-to-get-right pattern: stty -echo + a SIGINT/TERM/EXIT trap that restores terminal
  mode, read into a named env var, optional redacted preview. Validates the target var name
  against [A-Z_][A-Z0-9_]* to prevent bash name-injection via `read -r "$varname"`. When stdin
  is not a TTY (CI, piped tests) the stty branches skip cleanly — piped input doesn't echo
  anyway. Exports the var after read so subprocesses inherit it; callers own the `unset` at
  handoff time. Sourced, not executed — no +x bit.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-supabase-verify structural URL check

  Zero-network validator for Supabase Session Pooler URLs before handing them to `gbrain init`.
  Canonical shape verified per gbrain init.ts:266:

    postgresql://postgres.<ref>:<password>@aws-0-<region>.pooler.supabase.com:6543/postgres

  Rejects direct-connection URLs (db.*.supabase.co:5432) with a distinct exit code 3 and clear
  IPv6-failure remediation — that's the most common paste mistake users make, so it earns its
  own UX path rather than a generic "bad URL" error. Never echoes the URL (it contains a
  password) in error messages; tests verify a distinct seed password never appears in stderr on
  any reject path. Accepts the URL from argv[1] or stdin ("-" or no arg).
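The structural check on the canonical pooler shape described above can be sketched like this. A minimal TypeScript sketch for illustration — the shipped validator is a bash bin; the pooler regex shape and the distinct exit 3 for direct-connection URLs come from the commit, while exit 2 for any other structural failure is a hypothetical placeholder:

```typescript
// postgresql://postgres.<ref>:<password>@aws-0-<region>.pooler.supabase.com:6543/postgres
// Case-insensitive host match, per the test suite's description.
const POOLER_RE =
  /^postgresql:\/\/postgres\.([a-z0-9]+):(.+)@aws-0-([a-z0-9-]+)\.pooler\.supabase\.com:6543\/postgres$/i;

// 0 = ok, 3 = direct-connection URL (the documented special case with its own
// remediation path), 2 = any other structural failure (assumed code).
function verifyPoolerUrl(url: string): number {
  if (/\bdb\.[a-z0-9]+\.supabase\.co:5432\b/i.test(url)) return 3;
  return POOLER_RE.test(url) ? 0 : 2;
}
```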
  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): unit tests for supabase-verify + lib.sh secret helper

  22 tests. verify: accepts canonical pooler URL (argv + stdin modes), rejects direct-connection
  URL with exit 3, rejects wrong scheme, wrong port, empty password, missing userinfo, plain
  'postgres' user (catches direct-URL paste errors), wrong host, empty URL. Case-insensitive
  host match. Explicit negative: error messages never echo the URL password.

  lib.sh read_secret_to_env: reads piped stdin into the named env var, exports to subprocesses,
  redacted preview emits the masked form on stderr with the seed password absent, rejects
  invalid var names (lowercase, leading digit, hyphens), rejects missing/unknown flags, secret
  value never appears on stdout.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add gstack-gbrain-supabase-provision Management API wrapper

  Four subcommands: list-orgs, create, wait, pooler-url. Built against the verified Supabase
  Management API shape (Pre-Impl Gate 1):

  - POST /v1/projects with {name, db_pass, organization_slug, region} — not the original
    plan's /v1/organizations/{ref}/projects
  - No `plan` field; subscription tier is org-level per the OpenAPI description ("Subscription
    Plan is now set on organization level and is ignored in this request")
  - GET /v1/projects/{ref}/config/database/pooler for pooler config — not /config/database

  Secrets discipline: SUPABASE_ACCESS_TOKEN (PAT) and DB_PASS are read from env only, never
  from argv (a D8 grep test enforces this). `set +x` at the top as a defensive default so debug
  tracing never leaks secrets. The Management API hostname is hardcoded, overridable only via
  the SUPABASE_API_BASE env var — no user-controlled URL portion (SSRF guard).

  HTTP error paths: 401/403 → exit 3 (auth), 402 → 4 (quota), 409 → 5 (conflict), 429 + 5xx →
  exponential-backoff retry up to 3 attempts, then exit 8.
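The status-to-exit-code table above can be restated as a small sketch. The mapping is verbatim from the commit; the function shape and the null-means-retry convention are illustrative (the shipped wrapper is bash + curl):

```typescript
// Returns the process exit code, or null when the caller should retry with
// exponential backoff (429 and 5xx, up to maxAttempts, then exit 8).
function exitCodeFor(status: number, attempt: number, maxAttempts = 3): number | null {
  if (status === 401 || status === 403) return 3; // auth
  if (status === 402) return 4;                   // quota
  if (status === 409) return 5;                   // conflict
  if (status === 429 || status >= 500) {
    return attempt >= maxAttempts ? 8 : null;     // null -> retry after backoff
  }
  return 0;                                       // 2xx: success path
}
```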
  Wait subcommand polls every 5s until ACTIVE_HEALTHY with a configurable timeout; terminal
  states (INIT_FAILED, REMOVED, etc.) exit 7 immediately with a clear message. Timeout emits
  the --resume-provision hint so the skill can recover.

  Pooler-url constructs the URL locally from db_user/host/port/name + DB_PASS rather than
  trusting the API response's connection_string field, which is templated with [PASSWORD]
  rather than the real value. Handles both object and array response shapes, preferring
  session pool_mode when Supabase returns multiple pooler configs.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): unit tests for gstack-gbrain-supabase-provision via mock API

  22 tests covering the D21 HTTP error suite (401/403/402/409/429/5xx) and happy paths for all
  four subcommands. Every test spins up a Bun.serve mock server bound to SUPABASE_API_BASE so
  nothing hits the real API. Uses Bun.spawn (async) rather than spawnSync because spawnSync
  blocks the Bun event loop, which prevents Bun.serve mocks from responding — calls would hit
  curl's own timeout instead of round-tripping.

  Verifies: POST body contains organization_slug (not organization_id) and no `plan` field,
  bearer-token auth header, retry-on-429 with eventual success, exit 8 on persistent 5xx after
  max retries, wait succeeds on ACTIVE_HEALTHY, exits 7 on INIT_FAILED, exits 6 with
  --resume-provision hint on timeout, pooler-url builds the URL locally from
  db_user/host/port/name + DB_PASS (not the response connection_string template), handles
  array pooler responses.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add SKILL.md.tmpl — user-facing skill prompt

  Stitches together every slice built so far (repo-policy, detect, install, lib.sh secret
  helper, supabase-verify, supabase-provision) into a single interactive flow.
  Paths: Supabase existing-URL, Supabase auto-provision (D7), Supabase manual, PGLite local,
  switch (PGLite ↔ Supabase via gbrain migrate wrapped in timeout 180s per D9).

  Secrets discipline per D8/D10/D11: PAT + DB_PASS + pooler URL all read via read_secret_to_env
  from lib.sh and handed to gbrain via GBRAIN_DATABASE_URL env, never argv. PAT carries the
  full D11 scope disclosure before collection and an explicit revocation reminder after
  success. D12 SIGINT recovery prints the in-flight ref + resume command.

  D18 MCP registration is scoped honestly to Claude Code — skips with a manual-register hint
  when `claude` is not on PATH. D6 per-remote trust-triad question
  (read-write/read-only/deny/skip-for-now) gates repo import; the triad values compose with
  the D2-eng schema-version policy file so future migrations stay deterministic.

  Skill runs concurrent-run-locked via mkdir ~/.gstack/.setup-gbrain.lock.d (atomic, same
  pattern as gstack-brain-sync). Telemetry (D4) payload carries enumerated categorical values
  only — never URL, PAT, or any postgresql:// substring. --repo, --switch, --resume-provision,
  --cleanup-orphans shortcut modes documented inline; the skill parses its own invocation args.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(health): integrate gbrain as D6 composite dimension

  Adds a GBrain row to the /health dashboard rubric with weight 10%. Three sub-signals rolled
  into one 0-10 score: doctor status (0.5), sync queue depth (0.3), last-push age (0.2).
  Redistributes when gbrain_sync_mode is off so the dimension stays fair. Weights rebalance:
  typecheck 25→22, lint 20→18, test 30→28, deadcode 15→13, shell 10→9, gbrain +10 — sums to
  100. gbrain doctor --json wrapped in timeout 5s so a hung gbrain never stalls the /health
  dashboard. Dimension is omitted (not red) when gbrain is not installed — running /health on
  a non-gbrain machine shouldn't penalize that choice. History-JSONL adds a `gbrain` field.
  Pre-D6 entries read as null for trend comparison; new tracking starts from the first
  post-D6 run.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(test): add secret-sink-harness for negative-space leak testing (D21 #5)

  Runs a subprocess with a seeded secret, captures every channel the subprocess could leak
  through, and asserts the seed never appears. Built per the D1-eng tightened contract:
  per-run tmp $HOME, four seed match rules (exact + URL-decoded + first-12-char prefix +
  base64), fd-level stdout/stderr capture via Bun.spawn, a post-mortem walk of every file
  written under $HOME, and separate buckets for telemetry JSONL.

  Reusable: any future skill that handles secrets can import runWithSecretSink and run
  positive/negative controls against its own bins. The harness itself is ~180 lines of TS
  with no external deps beyond Bun + node:fs.

  Out of scope for v1 (documented as follow-ups): subprocess env dump (portable /proc
  reading), the user's real shell history (bins don't modify it).

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: secret-sink harness positive controls + real-bin negative controls

  11 tests. Positive controls deliberately leak a seed in every covered channel (stdout,
  stderr, a file under $HOME, the telemetry JSONL path, base64-encoded, first-12-char prefix)
  and assert the harness catches each one. Without these, a harness that silently
  under-reports would look identical to a harness that works.

  Negative controls run real setup-gbrain bins with distinctive seeds:

  - supabase-verify rejects a mysql:// URL and a direct-connection URL; the password never
    appears in any captured channel
  - lib.sh read_secret_to_env reads piped stdin, emits only the length; the seed value stays
    invisible
  - supabase-provision on an auth-failure path fails fast without leaking the PAT to any
    channel

  Covers the D21 #5 leak harness + uses it to validate D3-eng, D10, D11 discipline end-to-end
  on the already-shipped bins.
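The four seed-match rules the harness applies to each captured channel (exact, URL-decoded, first-12-char prefix, base64) can be sketched as a single predicate. This is an interpretation for illustration — the rule names come from the commit, but the exact matching semantics (e.g. decoding the captured channel rather than encoding the seed) are assumptions:

```typescript
// True when any form of the seed appears in the captured channel text.
function seedLeaked(captured: string, seed: string): boolean {
  const forms = [
    seed,                                 // exact
    seed.slice(0, 12),                    // truncated-leak prefix
    Buffer.from(seed).toString('base64'), // base64-encoded leak
  ];
  let decoded = captured;
  try { decoded = decodeURIComponent(captured); } catch { /* not URL-encoded */ }
  return forms.some((f) => captured.includes(f) || decoded.includes(f));
}
```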
  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(setup-gbrain): add list-orphans + delete-project subcommands (D20)

  Powers /setup-gbrain --cleanup-orphans. list-orphans filters the authenticated user's
  Supabase projects by name prefix (default "gbrain") and excludes the project the local
  ~/.gbrain/config.json currently points at, so only unclaimed gbrain-shaped projects come
  back. Active-ref detection parses the pooler URL's user portion (postgres.<ref>:<pw>@...).

  delete-project is a thin DELETE /v1/projects/{ref} wrapper with no confirmation of its own —
  the skill's UI layer owns the per-project confirm AskUserQuestion loop. Keeps
  responsibilities clean: the bin manages HTTP; the skill manages user intent.

  Both subcommands reuse the existing api_call retry+backoff and the same PAT discipline (env
  only, never argv).

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(setup-gbrain): list-orphans active-ref filtering + delete-project 404

  6 new tests bringing the supabase-provision suite to 28.

  list-orphans:
  - Filters to gbrain-prefixed projects, excludes the active ref derived from
    ~/.gbrain/config.json's pooler URL
  - Treats all gbrain-prefixed projects as orphans when no config exists (first run on a new
    machine)
  - Respects custom --name-prefix for users who named their brain something else

  delete-project:
  - Happy path sends DELETE /v1/projects/<ref> and returns {deleted_ref}
  - 404 surfaces cleanly (exit 2, "404" in stderr)
  - Missing <ref> positional rejected with exit 2

  Uses a per-test tmpdir HOME with a stubbed ~/.gbrain/config.json so active-ref extraction
  runs against deterministic fixtures.
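The two filters described above — active-ref extraction from the pooler URL's user portion, and the orphan filter over name-prefixed projects — can be sketched as follows. A hypothetical TypeScript rendition (the shipped subcommands are bash); the `Project` shape is an assumption:

```typescript
type Project = { ref: string; name: string };

// The project ref lives in the pooler URL's user portion: postgres.<ref>:<pw>@...
function activeRef(poolerUrl: string): string | null {
  const m = poolerUrl.match(/^postgresql:\/\/postgres\.([a-z0-9]+):/);
  return m ? m[1] : null;
}

// Orphans: prefix-matched projects minus the one the local config points at.
function listOrphans(projects: Project[], active: string | null, prefix = 'gbrain'): Project[] {
  return projects.filter((p) => p.name.startsWith(prefix) && p.ref !== active);
}
```

When no local config exists, `active` is null and every prefix-matched project counts as an orphan, matching the first-run-on-a-new-machine test case.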
  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: regenerate setup-gbrain SKILL.md after main merge

* chore: bump version and changelog (v1.12.0.0)

  Ships /setup-gbrain and its supporting infrastructure end-to-end: per-remote trust policy,
  installer with PATH-shadow guard, shared secret-read helper, structural URL verifier,
  Supabase Management API wrapper, /health GBrain dimension, secret-sink test harness. 100 new
  tests across 5 suites, all green. Three pre-existing test failures noted as P0 in TODOS.md.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: add USING_GBRAIN_WITH_GSTACK.md + update README for /setup-gbrain

  README changes:
  - Rewrote the "Cross-machine memory with GBrain sync" section into "GBrain — persistent
    knowledge for your coding agent." Covers the three /setup-gbrain paths (Supabase existing
    URL, auto-provision, PGLite local), MCP registration, per-remote trust triad, and the
    (still-separate) memory sync feature.
  - Added a /setup-gbrain row to the skills table pointing at the full guide.
  - Added /setup-gbrain to both skill-list install snippets.
  - Added USING_GBRAIN_WITH_GSTACK.md to the Docs table.
  New doc (USING_GBRAIN_WITH_GSTACK.md):
  - All three setup paths with trust-surface caveats
  - MCP registration details (and honest Claude-Code-v1 scoping)
  - Per-remote trust triad semantics + how to change a policy
  - Switching engines (PGLite ↔ Supabase) via --switch
  - GStack memory sync + its relationship to the gbrain knowledge base
  - /setup-gbrain --cleanup-orphans for orphan Supabase projects
  - Full command + flag reference, every bin helper, every env var
  - Security model: what's enforced in code, what's enforced by the leak harness, and the
    honest limits of v1
  - Troubleshooting: PATH shadowing, direct-connection URL reject, auto-provision timeout,
    stale lock, policy file hand-edits, migrate hang
  - Why-this-design section explaining the non-obvious choices

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(brain-sync): secret scanner now catches Bearer-prefixed auth tokens in JSON

  The bearer-token-json regex value charset was [A-Za-z0-9_./+=-]{16,}, which does NOT permit
  spaces. Real HTTP auth headers embed the scheme name with a literal space — "Bearer <token>"
  — so the value portion actually starts with "Bearer " and the existing regex couldn't match.
  Result: any JSON blob containing "authorization":"Bearer ..." would slip past the scanner
  and sync to the user's private brain repo with the bearer token inline.

  Added optional (Bearer |Basic |Token )? prefix in front of the value charset. Now matches
  the common auth-scheme forms without broadening the matcher to tolerate arbitrary whitespace
  (which would false-positive on lots of benign JSON).

  Verified against 5 positive cases (bearer-in-json, clean bearer, apikey no-prefix, token
  with Bearer, password no-prefix) + 3 negative cases (too-short tokens, non-secret field
  names like username, random JSON). This closes the P0 security regression first noticed
  during v1.12.0.0 /ship. brain-sync.test.ts now passes all 7 secret-scan fixtures.
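The regression and its fix can be demonstrated with a pair of illustrative regexes. The value charset and the added scheme prefix are verbatim from the commit; the key-name alternation is a stand-in, since the scanner's actual field list isn't quoted:

```typescript
// Before: the value charset excludes spaces, so a value beginning with
// "Bearer " can never satisfy the 16-char run up to the closing quote.
const oldRe = /"(authorization|token|apikey|secret)"\s*:\s*"[A-Za-z0-9_./+=-]{16,}"/i;

// After: an optional scheme prefix consumes "Bearer " / "Basic " / "Token "
// before the charset run, without tolerating arbitrary whitespace.
const newRe = /"(authorization|token|apikey|secret)"\s*:\s*"(Bearer |Basic |Token )?[A-Za-z0-9_./+=-]{16,}"/i;
```

The narrow prefix is the point of the design: allowing any whitespace inside the value would match lots of benign JSON sentences, while the three literal scheme prefixes only match real auth-header forms.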
  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: mock-gh integration tests for gstack-brain-init auto-create path

  8 tests covering the gh-repo-create happy path that had zero coverage before. The existing
  brain-sync.test.ts always passes --remote <bare-url> to bypass gh entirely, so the
  interactive default ("press Enter, we'll run gh repo create for you") was shipping on trust.

  Test strategy: write a bash stub for gh that records every call into a file, then run
  gstack-brain-init with that stub on PATH. Assertions verify: gh auth status is checked, gh
  repo create fires with the computed gstack-brain-<user> default name + --private + --source
  flags, fall-through to gh repo view when create reports already-exists, user-provided URL
  bypasses gh entirely, gh-not-on-path and gh-not-authed branches both prompt for URL, the
  --remote flag short-circuits all gh calls, conflicting-remote re-runs exit 1 with a clear
  message.

  No real GitHub, no live auth. Gate tier — runs on every commit.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(e2e): privacy-gate AskUserQuestion fires from preamble (periodic tier)

  Two periodic-tier E2E tests exercising the preamble's privacy gate end-to-end via the Agent
  SDK + canUseTool. Previously uncovered:

  - Positive: stages a fake gbrain on PATH + gbrain_sync_mode_prompted=false in config, runs a
    real skill, intercepts tool-use. Asserts the preamble fires a 3-option AskUserQuestion
    matching the canonical prose ("publish session memory" / "artifact" / "decline") and does
    NOT fire a second time in the same run (idempotency within session).
  - Negative: same staging but prompted=true. Asserts the gate stays silent even with gbrain
    detected on the host.

  Registered in test/helpers/touchfiles.ts as `brain-privacy-gate` (periodic) with dependency
  tracking on generate-brain-sync-block.ts, the three gstack-brain-* bins, gstack-config, and
  the Agent SDK runner.
  Diff-based selection re-runs the E2E when any of those change. Cost: ~$0.30-$0.50 per run.
  Only fires under EVALS=1 EVALS_TIER=periodic; the gate tier stays free.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: update TODOS for bearer-json fix + new brain-sync test coverage

  Moves the bearer-json secret-scan regression from the P0 "pre-existing failures" block into
  the Completed section with full context on the fix, the mock-gh tests, the E2E privacy-gate
  tests, and the touchfile registration. Remaining P0s are the GSTACK_HOME config-isolation
  bug and the stale Opus 4.7 overlay pacing assertion, both unrelated.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(test): E2E privacy gate — ambient env + skill-file prompt

  Two fixes to get the E2E actually running end-to-end (the first attempt failed at the SDK
  auth step, the second at the assertion step):

  1. Don't pass an explicit `env:` object to runAgentSdkTest. The SDK's auth pipeline misses
     ANTHROPIC_API_KEY when env is supplied as an object (verified against the plan-mode-no-op
     test, which passes no env and auths cleanly). Mutate process.env before the call instead,
     and restore the originals in finally so other tests don't inherit the ambient mutation.

  2. The "Run /learn with no arguments" user prompt was too narrow — the model reduced it to a
     direct action and skipped the preamble privacy-gate directives entirely, so zero
     AskUserQuestions fired. Mirror the plan-mode-no-op pattern: point the model at the skill
     file on disk and ask it to follow every preamble directive. Bumped maxTurns from 6 to 10
     to give the preamble room to execute.

  Verified both tests pass under `EVALS=1 EVALS_TIER=periodic bun test
  test/skill-e2e-brain-privacy-gate.test.ts` against a real ANTHROPIC_API_KEY. Cost per run:
  ~$0.30-$0.50 per test.
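The mutate-then-restore discipline from fix 1 can be captured in a small helper. A sketch only — `withAmbientEnv` is a hypothetical name, not something the repo is stated to contain:

```typescript
// Set an env var ambiently on process.env for the duration of fn, then
// restore the original value (or remove the key) in finally, so sibling
// tests never inherit the mutation.
function withAmbientEnv<T>(key: string, value: string, fn: () => T): T {
  const prev = process.env[key];
  process.env[key] = value;
  try {
    return fn();
  } finally {
    if (prev === undefined) delete process.env[key];
    else process.env[key] = prev;
  }
}
```

The same pattern applies to ANTHROPIC_API_KEY before calling runAgentSdkTest: the SDK reads the ambient process.env, so nothing is handed over as an explicit object.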
  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(CLAUDE.md): source ANTHROPIC/OPENAI keys from ~/.zshrc for paid evals

  Conductor workspaces don't inherit the interactive shell env, so both API keys are absent
  from the default process env even though they're set in ~/.zshrc. Documents the
  source-from-zshrc pattern (grep + eval, never echo the value) plus the Agent SDK gotcha: do
  NOT pass env as an object to runAgentSdkTest — mutate process.env ambiently and restore in
  finally.

  Discovered this during the brain-privacy-gate E2E. The first run failed at SDK auth with a
  401; the second failed because explicit env handoff bypassed the SDK's own auth routing. The
  fix pattern is now codified so the next paid-eval session in a Conductor workspace doesn't
  hit the same two dead ends.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@@ -0,0 +1,298 @@
/**
 * gstack-gbrain-detect + gstack-gbrain-install — Slice 2 of /setup-gbrain.
 *
 * Detect: state-reporter JSON with presence, version, config, doctor health,
 * and gstack-brain-sync mode. Pure introspection, no side effects.
 *
 * Install: D5 detect-first (reuse pre-existing clones) + D19 PATH-shadow
 * validation. The install flow itself (git clone + bun install + bun link)
 * is not exercised in CI because it touches the user's real ~/.bun/bin and
 * network. Instead we use --validate-only to exercise the D19 check and
 * --dry-run to exercise the D5 detect-first path end-to-end.
 */

import { describe, test, expect, beforeEach, afterEach } from 'bun:test';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';
import { spawnSync } from 'child_process';

const ROOT = path.resolve(import.meta.dir, '..');
const DETECT = path.join(ROOT, 'bin', 'gstack-gbrain-detect');
const INSTALL = path.join(ROOT, 'bin', 'gstack-gbrain-install');

// Minimal PATH with POSIX tools + homebrew (for jq/git/curl) but no user-bin
// dirs — this keeps `gbrain` out of PATH deterministically across dev machines
// while still finding jq, git, curl, sed, cat, etc. Each test can prepend a
// fake-gbrain dir when it wants to simulate presence.
const SAFE_PATH = '/usr/bin:/bin:/usr/sbin:/sbin:/opt/homebrew/bin:/usr/local/bin';

let tmpHome: string;
let tmpHomeReal: string;

type RunOpts = { env?: Record<string, string>; cwd?: string };
function run(bin: string, args: string[], opts: RunOpts = {}) {
  const env = {
    ...process.env,
    GSTACK_HOME: tmpHome,
    HOME: tmpHomeReal,
    ...(opts.env || {}),
  };
  const res = spawnSync(bin, args, {
    env,
    cwd: opts.cwd,
    encoding: 'utf-8',
  });
  return {
    stdout: (res.stdout || '').trim(),
    stderr: (res.stderr || '').trim(),
    status: res.status ?? -1,
  };
}

beforeEach(() => {
  tmpHome = fs.mkdtempSync(path.join(os.tmpdir(), 'gbrain-detect-gstack-'));
  tmpHomeReal = fs.mkdtempSync(path.join(os.tmpdir(), 'gbrain-detect-home-'));
});

afterEach(() => {
  fs.rmSync(tmpHome, { recursive: true, force: true });
  fs.rmSync(tmpHomeReal, { recursive: true, force: true });
});
describe('gstack-gbrain-detect', () => {
  test('emits valid JSON even when nothing is configured', () => {
    // Override PATH to exclude any real gbrain so the test is deterministic.
    const emptyBin = fs.mkdtempSync(path.join(os.tmpdir(), 'empty-bin-'));
    try {
      const r = run(DETECT, [], { env: { PATH: `${emptyBin}:${SAFE_PATH}` } });
      expect(r.status).toBe(0);
      const j = JSON.parse(r.stdout);
      expect(j.gbrain_on_path).toBe(false);
      expect(j.gbrain_version).toBeNull();
      expect(j.gbrain_config_exists).toBe(false);
      expect(j.gbrain_engine).toBeNull();
      expect(j.gbrain_doctor_ok).toBe(false);
      expect(j.gstack_brain_sync_mode).toBe('off');
      expect(j.gstack_brain_git).toBe(false);
    } finally {
      fs.rmSync(emptyBin, { recursive: true, force: true });
    }
  });

  test('reports gstack_brain_git: true when GSTACK_HOME has a .git dir', () => {
    fs.mkdirSync(path.join(tmpHome, '.git'));
    const emptyBin = fs.mkdtempSync(path.join(os.tmpdir(), 'empty-bin-'));
    try {
      const r = run(DETECT, [], { env: { PATH: `${emptyBin}:${SAFE_PATH}` } });
      const j = JSON.parse(r.stdout);
      expect(j.gstack_brain_git).toBe(true);
    } finally {
      fs.rmSync(emptyBin, { recursive: true, force: true });
    }
  });

  test('reports gbrain_config + engine when ~/.gbrain/config.json exists', () => {
    // HOME is tmpHomeReal; detect reads $HOME/.gbrain/config.json.
    fs.mkdirSync(path.join(tmpHomeReal, '.gbrain'));
    fs.writeFileSync(
      path.join(tmpHomeReal, '.gbrain', 'config.json'),
      JSON.stringify({ engine: 'pglite', database_path: '/tmp/x.pglite' })
    );
    const emptyBin = fs.mkdtempSync(path.join(os.tmpdir(), 'empty-bin-'));
    try {
      const r = run(DETECT, [], { env: { PATH: `${emptyBin}:${SAFE_PATH}` } });
      const j = JSON.parse(r.stdout);
      expect(j.gbrain_config_exists).toBe(true);
      expect(j.gbrain_engine).toBe('pglite');
    } finally {
      fs.rmSync(emptyBin, { recursive: true, force: true });
    }
  });

  test('malformed config returns null engine, does not crash', () => {
    fs.mkdirSync(path.join(tmpHomeReal, '.gbrain'));
    fs.writeFileSync(path.join(tmpHomeReal, '.gbrain', 'config.json'), 'not valid json{');
    const emptyBin = fs.mkdtempSync(path.join(os.tmpdir(), 'empty-bin-'));
    try {
      const r = run(DETECT, [], { env: { PATH: `${emptyBin}:${SAFE_PATH}` } });
      expect(r.status).toBe(0);
      const j = JSON.parse(r.stdout);
      expect(j.gbrain_config_exists).toBe(true);
      expect(j.gbrain_engine).toBeNull();
    } finally {
      fs.rmSync(emptyBin, { recursive: true, force: true });
    }
  });

  test('detects a mocked gbrain binary on PATH and reports its version', () => {
    const fakeBin = fs.mkdtempSync(path.join(os.tmpdir(), 'fake-bin-'));
    fs.writeFileSync(
      path.join(fakeBin, 'gbrain'),
      '#!/bin/bash\necho "0.18.2"\nexit 0\n',
      { mode: 0o755 }
    );
    try {
      const r = run(DETECT, [], { env: { PATH: `${fakeBin}:${SAFE_PATH}` } });
      expect(r.status).toBe(0);
      const j = JSON.parse(r.stdout);
      expect(j.gbrain_on_path).toBe(true);
      expect(j.gbrain_version).toBe('0.18.2');
    } finally {
      fs.rmSync(fakeBin, { recursive: true, force: true });
    }
  });
});
describe('gstack-gbrain-install D5 detect-first', () => {
  test('--dry-run reuses a pre-existing ~/git/gbrain-shaped clone', () => {
    // Stand up a fake ~/git/gbrain that looks valid (name + bin.gbrain).
    const fakeGit = path.join(tmpHomeReal, 'git', 'gbrain');
    fs.mkdirSync(fakeGit, { recursive: true });
    fs.writeFileSync(
      path.join(fakeGit, 'package.json'),
      JSON.stringify({
        name: 'gbrain',
        version: '0.18.2',
        bin: { gbrain: './src/cli.ts' },
      })
    );
    const r = run(INSTALL, ['--dry-run']);
    expect(r.status).toBe(0);
    expect(r.stdout).toContain(`detected existing gbrain clone at ${fakeGit}`);
    expect(r.stdout).toContain('would run bun install + bun link');
  });

  test('--dry-run falls through to fresh clone when no valid clone detected', () => {
    // No ~/git/gbrain, no ~/gbrain.
    const r = run(INSTALL, ['--dry-run']);
    expect(r.status).toBe(0);
    expect(r.stdout).toContain('DRY RUN: would clone');
    expect(r.stdout).toContain('https://github.com/garrytan/gbrain.git');
  });

  test('rejects a pre-existing path that lacks a valid gbrain package.json', () => {
    // Put garbage at ~/git/gbrain, but nothing at ~/gbrain.
    const badGit = path.join(tmpHomeReal, 'git', 'gbrain');
    fs.mkdirSync(badGit, { recursive: true });
    fs.writeFileSync(path.join(badGit, 'package.json'), JSON.stringify({ name: 'not-gbrain' }));
    const r = run(INSTALL, ['--dry-run']);
    expect(r.status).toBe(0);
    // Falls through to fresh clone
    expect(r.stdout).toContain('DRY RUN: would clone');
  });
});
describe('gstack-gbrain-install D19 PATH-shadow validation', () => {
  function seedInstallDir(version: string): string {
    const d = fs.mkdtempSync(path.join(os.tmpdir(), 'gbrain-install-'));
    fs.writeFileSync(
      path.join(d, 'package.json'),
      JSON.stringify({ name: 'gbrain', version, bin: { gbrain: './src/cli.ts' } })
    );
    return d;
  }

  function seedFakeGbrainBinary(version: string): string {
    const binDir = fs.mkdtempSync(path.join(os.tmpdir(), 'fake-bin-'));
    fs.writeFileSync(
      path.join(binDir, 'gbrain'),
      `#!/bin/bash\necho "${version}"\nexit 0\n`,
      { mode: 0o755 }
    );
    return binDir;
  }

  test('passes when install-dir version matches `gbrain --version` on PATH', () => {
    const installDir = seedInstallDir('0.18.2');
    const fakeBin = seedFakeGbrainBinary('0.18.2');
    try {
      const r = run(INSTALL, ['--validate-only', '--install-dir', installDir], {
        env: { PATH: `${fakeBin}:${SAFE_PATH}` },
      });
      expect(r.status).toBe(0);
      expect(r.stdout).toContain('installed gbrain 0.18.2');
    } finally {
      fs.rmSync(installDir, { recursive: true, force: true });
      fs.rmSync(fakeBin, { recursive: true, force: true });
    }
  });

  test('tolerates a leading "v" in `gbrain --version` output', () => {
    const installDir = seedInstallDir('0.18.2');
    const fakeBin = seedFakeGbrainBinary('v0.18.2');
    try {
      const r = run(INSTALL, ['--validate-only', '--install-dir', installDir], {
        env: { PATH: `${fakeBin}:${SAFE_PATH}` },
      });
      expect(r.status).toBe(0);
    } finally {
      fs.rmSync(installDir, { recursive: true, force: true });
      fs.rmSync(fakeBin, { recursive: true, force: true });
    }
  });

  test('fails hard with exit 3 and PATH-shadow message on version mismatch', () => {
    const installDir = seedInstallDir('0.18.2');
    const fakeBin = seedFakeGbrainBinary('0.18.1');
    try {
      const r = run(INSTALL, ['--validate-only', '--install-dir', installDir], {
        env: { PATH: `${fakeBin}:${SAFE_PATH}` },
      });
      expect(r.status).toBe(3);
      expect(r.stderr).toContain('PATH SHADOWING DETECTED');
      expect(r.stderr).toContain('0.18.2');
      expect(r.stderr).toContain('0.18.1');
      // Remediation menu present
      expect(r.stderr).toContain('rm the shadowing binary');
      expect(r.stderr).toContain('prepend ~/.bun/bin to PATH');
    } finally {
      fs.rmSync(installDir, { recursive: true, force: true });
      fs.rmSync(fakeBin, { recursive: true, force: true });
    }
  });

  test('fails hard when no gbrain on PATH after supposed install', () => {
    const installDir = seedInstallDir('0.18.2');
    const emptyBin = fs.mkdtempSync(path.join(os.tmpdir(), 'empty-bin-'));
    try {
      const r = run(INSTALL, ['--validate-only', '--install-dir', installDir], {
|
||||
env: { PATH: `${emptyBin}:${SAFE_PATH}` },
|
||||
});
|
||||
expect(r.status).toBe(3);
|
||||
expect(r.stderr).toContain("'gbrain' is not on PATH");
|
||||
} finally {
|
||||
fs.rmSync(installDir, { recursive: true, force: true });
|
||||
fs.rmSync(emptyBin, { recursive: true, force: true });
|
||||
}
|
||||
});
|
||||
|
||||
test('fails hard when install-dir package.json lacks version', () => {
|
||||
const d = fs.mkdtempSync(path.join(os.tmpdir(), 'gbrain-install-'));
|
||||
fs.writeFileSync(
|
||||
path.join(d, 'package.json'),
|
||||
JSON.stringify({ name: 'gbrain', bin: { gbrain: './src/cli.ts' } })
|
||||
);
|
||||
try {
|
||||
const r = run(INSTALL, ['--validate-only', '--install-dir', d]);
|
||||
expect(r.status).toBe(3);
|
||||
expect(r.stderr).toContain('cannot read version');
|
||||
} finally {
|
||||
fs.rmSync(d, { recursive: true, force: true });
|
||||
}
|
||||
});
|
||||
});

describe('gstack-gbrain-install argument handling', () => {
  test('--help prints usage without exiting non-zero', () => {
    const r = run(INSTALL, ['--help']);
    expect(r.status).toBe(0);
    expect(r.stdout).toContain('gstack-gbrain-install');
  });

  test('unknown flag exits 2 with an error message', () => {
    const r = run(INSTALL, ['--not-a-flag']);
    expect(r.status).toBe(2);
    expect(r.stderr).toContain('unknown flag');
  });
});
@@ -0,0 +1,257 @@
/**
 * gstack-gbrain-supabase-verify + gstack-gbrain-lib.sh — Slice 3 of /setup-gbrain.
 *
 * verify: structural URL check (scheme, userinfo, host, port). No network
 * call; pure regex. Rejects direct-connection URLs with a distinct exit
 * code + UX because that's the most common paste mistake.
 *
 * lib.sh: shared secret-read helper (read_secret_to_env) sourced by the
 * skill template and by gstack-gbrain-supabase-provision. Validates var
 * name, handles stdin=TTY and stdin=pipe (CI) paths, supports optional
 * redacted-preview echo.
 *
 * Not tested here: TTY path with stty manipulation. `bun test` runs under
 * pipe stdin so [ -t 0 ] is false and the stty branches skip. That's the
 * right test matrix for CI; TTY behavior is covered by the manual test
 * matrix on a real terminal.
 */

import { describe, test, expect } from 'bun:test';
import * as path from 'path';
import { spawnSync } from 'child_process';

const ROOT = path.resolve(import.meta.dir, '..');
const VERIFY = path.join(ROOT, 'bin', 'gstack-gbrain-supabase-verify');
const LIB = path.join(ROOT, 'bin', 'gstack-gbrain-lib.sh');

function runVerify(arg: string, stdin?: string) {
  const res = spawnSync(VERIFY, arg === '' ? [] : [arg], {
    input: stdin,
    encoding: 'utf-8',
  });
  return {
    stdout: (res.stdout || '').trim(),
    stderr: (res.stderr || '').trim(),
    status: res.status ?? -1,
  };
}

// Invoke a bash snippet that sources the lib and runs something against it.
// Returns stdout + stderr + exit code. Stdin is piped so [ -t 0 ] = false.
function runLibSnippet(snippet: string, stdin: string = '') {
  const script = `set -euo pipefail\n. ${JSON.stringify(LIB)}\n${snippet}`;
  const res = spawnSync('bash', ['-c', script], {
    input: stdin,
    encoding: 'utf-8',
  });
  return {
    stdout: (res.stdout || '').trim(),
    stderr: (res.stderr || '').trim(),
    status: res.status ?? -1,
  };
}

describe('gstack-gbrain-supabase-verify', () => {
  const VALID =
    'postgresql://postgres.abcdefghijklmnopqrst:secretpass@aws-0-us-east-1.pooler.supabase.com:6543/postgres';

  test('accepts canonical Session Pooler URL', () => {
    const r = runVerify(VALID);
    expect(r.status).toBe(0);
    expect(r.stdout).toBe('ok');
  });

  test('accepts postgres:// scheme (without ql)', () => {
    const r = runVerify(VALID.replace('postgresql://', 'postgres://'));
    expect(r.status).toBe(0);
  });

  test('accepts URL via stdin with "-"', () => {
    const r = runVerify('-', VALID);
    expect(r.status).toBe(0);
    expect(r.stdout).toBe('ok');
  });

  test('accepts URL via stdin with no argv', () => {
    const r = runVerify('', VALID);
    expect(r.status).toBe(0);
  });

  test('rejects direct-connection URL with exit code 3', () => {
    const url = 'postgresql://postgres:secret@db.abcdefghijk.supabase.co:5432/postgres';
    const r = runVerify(url);
    expect(r.status).toBe(3);
    expect(r.stderr).toContain('rejected direct-connection URL');
    expect(r.stderr).toContain('Session Pooler');
    // Error message should not echo the URL back (it contains a password)
    expect(r.stderr).not.toContain('secret');
  });

  test('rejects wrong scheme', () => {
    const r = runVerify('mysql://user:pass@aws-0-us-east-1.pooler.supabase.com:6543/postgres');
    expect(r.status).toBe(2);
    expect(r.stderr).toContain('bad scheme');
  });

  test('rejects non-6543 port', () => {
    const r = runVerify(
      'postgresql://postgres.ref:pass@aws-0-us-east-1.pooler.supabase.com:5432/postgres'
    );
    expect(r.status).toBe(2);
    expect(r.stderr).toContain('6543');
  });

  test('rejects empty password', () => {
    const r = runVerify(
      'postgresql://postgres.ref:@aws-0-us-east-1.pooler.supabase.com:6543/postgres'
    );
    expect(r.status).toBe(2);
    expect(r.stderr).toContain('empty password');
  });

  test('rejects missing userinfo', () => {
    const r = runVerify('postgresql://aws-0-us-east-1.pooler.supabase.com:6543/postgres');
    expect(r.status).toBe(2);
    expect(r.stderr).toContain('missing userinfo');
  });

  test('rejects plain "postgres" user (no .ref) to catch direct-URL paste mistakes', () => {
    const r = runVerify(
      'postgresql://postgres:pass@aws-0-us-east-1.pooler.supabase.com:6543/postgres'
    );
    expect(r.status).toBe(2);
    expect(r.stderr).toContain("user portion 'postgres'");
  });

  test('rejects wrong host (not *.pooler.supabase.com)', () => {
    const r = runVerify('postgresql://postgres.ref:pass@example.com:6543/postgres');
    expect(r.status).toBe(2);
    expect(r.stderr).toContain('pooler.supabase.com');
  });

  test('rejects empty URL', () => {
    const r = runVerify('-', '');
    expect(r.status).toBe(2);
    expect(r.stderr).toContain('empty URL');
  });

  test('case-insensitive host match (POOLER.SUPABASE.COM passes)', () => {
    const r = runVerify(
      'postgresql://postgres.ref:pass@AWS-0-US-EAST-1.POOLER.SUPABASE.COM:6543/postgres'
    );
    expect(r.status).toBe(0);
  });

  test('error messages never echo the URL password', () => {
    // Supply a URL with a distinctive password; verify none of the errors
    // leak the password to stderr.
    const r = runVerify(
      'mysql://user:VERY-DISTINCT-SECRET-dk3984@aws-0-us-east-1.pooler.supabase.com:6543/postgres'
    );
    expect(r.status).toBe(2);
    expect(r.stderr).not.toContain('VERY-DISTINCT-SECRET');
  });
});
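The structural rules this suite pins down can be stated in one place. A hypothetical TypeScript mirror of the same checks (the real validation is bash regex in `bin/gstack-gbrain-supabase-verify`, and its exact error strings differ):

```typescript
// Hypothetical reference sketch of the structural checks asserted above.
// Mirrors the test expectations, not the real bash implementation.
type VerifyResult =
  | 'ok' | 'direct-connection' | 'bad-scheme' | 'bad-host'
  | 'bad-port' | 'missing-userinfo' | 'empty-password' | 'plain-postgres-user';

function verifyPoolerUrl(url: string): VerifyResult {
  const m = url.match(
    /^([a-z][a-z0-9+.-]*):\/\/(?:([^:@\/]+)(?::([^@\/]*))?@)?([^:\/]+)(?::(\d+))?\/.*$/i
  );
  if (!m) return 'bad-scheme';
  const [, scheme, user, pass, host, port] = m;
  if (!/^postgres(ql)?$/i.test(scheme)) return 'bad-scheme';
  // Direct-connection hosts get a distinct result (exit 3 in the CLI).
  if (/^db\..+\.supabase\.co$/i.test(host)) return 'direct-connection';
  if (!user) return 'missing-userinfo';
  if (!pass) return 'empty-password';
  if (!user.includes('.')) return 'plain-postgres-user'; // pooler user is postgres.<ref>
  if (!/\.pooler\.supabase\.com$/i.test(host)) return 'bad-host';
  if (port !== '6543') return 'bad-port';
  return 'ok';
}
```

The ordering matters: the direct-connection check runs before the userinfo checks so a pasted direct URL always gets the dedicated message rather than a generic one.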

describe('gstack-gbrain-lib.sh read_secret_to_env', () => {
  test('reads secret from piped stdin into the named env var', () => {
    const r = runLibSnippet(
      `
read_secret_to_env MY_SECRET "Enter: "
echo "captured=[$MY_SECRET]"
echo "len=\${#MY_SECRET}"
`,
      'hello-world-123'
    );
    expect(r.status).toBe(0);
    expect(r.stdout).toContain('captured=[hello-world-123]');
    expect(r.stdout).toContain('len=15');
  });

  test('exports the var so sub-processes see it', () => {
    const r = runLibSnippet(
      `
read_secret_to_env TEST_VAR "Enter: "
bash -c 'echo "child-sees=[$TEST_VAR]"'
`,
      'child-test-value'
    );
    expect(r.status).toBe(0);
    expect(r.stdout).toContain('child-sees=[child-test-value]');
  });

  test('redacted preview uses the provided sed expression (password masked)', () => {
    const r = runLibSnippet(
      `
read_secret_to_env MY_URL "URL: " --echo-redacted 's#://[^@]*@#://***@#'
echo "ok"
`,
      'postgresql://user:SECRET123@host:5432/db'
    );
    expect(r.status).toBe(0);
    // Redacted preview goes to stderr
    expect(r.stderr).toContain('Got: postgresql://***@host:5432/db');
    // Password must not appear in the preview
    expect(r.stderr).not.toContain('SECRET123');
  });

  test('rejects invalid var names (must match [A-Z_][A-Z0-9_]*)', () => {
    const r = runLibSnippet(
      `
read_secret_to_env "lower-case" "Prompt: " || echo "correctly-rejected"
`,
      'anything'
    );
    expect(r.status).toBe(0); // snippet returns 0 via the || fallback
    expect(r.stdout).toContain('correctly-rejected');
    expect(r.stderr).toContain('invalid var name');
  });

  test('rejects var names that start with a digit', () => {
    const r = runLibSnippet(
      `
read_secret_to_env "1VAR" "Prompt: " || echo "correctly-rejected"
`,
      'x'
    );
    expect(r.stdout).toContain('correctly-rejected');
  });

  test('rejects missing args', () => {
    const r = runLibSnippet(
      `
read_secret_to_env || echo "correctly-rejected"
`
    );
    expect(r.stdout).toContain('correctly-rejected');
    expect(r.stderr).toContain('usage');
  });

  test('rejects unknown flags', () => {
    const r = runLibSnippet(
      `
read_secret_to_env MY_VAR "Prompt: " --unknown-flag xxx || echo "correctly-rejected"
`,
      'x'
    );
    expect(r.stdout).toContain('correctly-rejected');
    expect(r.stderr).toContain('unknown flag');
  });

  test('secret value never appears on stdout', () => {
    // The entire stdout comes from our `echo` statements, not read_secret_to_env.
    // Verify the secret itself never leaks via the prompt or anywhere else.
    const r = runLibSnippet(
      `
read_secret_to_env HIDDEN "Enter: "
echo "len=\${#HIDDEN}"
`,
      'this-must-not-leak-abc'
    );
    expect(r.status).toBe(0);
    expect(r.stdout).not.toContain('this-must-not-leak-abc');
    expect(r.stdout).toBe('len=22');
    // The prompt goes to stderr; the secret must not appear there either.
    expect(r.stderr).not.toContain('this-must-not-leak-abc');
  });
});
@@ -0,0 +1,271 @@
/**
 * gstack-gbrain-repo-policy — per-remote trust-tier policy store.
 *
 * Covers the setup-gbrain D3/D2-eng decisions end-to-end:
 * - D3 triad semantics (read-write / read-only / deny / unset)
 * - Remote-URL normalization (ssh/https/shorthand all collapse to the same key)
 * - D2-eng schema-version field (_schema_version: 2) written on new files
 * - Legacy `allow` → `read-write` migration, one-shot, idempotent
 * - Atomic writes (tmpfile + rename; no partial files visible)
 * - Corrupt-file quarantine (file renamed to .corrupt-<ts>, fresh file created)
 * - 0600 permissions on the policy file
 *
 * Each test uses a temp GSTACK_HOME so nothing leaks into the user's real ~/.gstack.
 */

import { describe, test, expect, beforeEach, afterEach } from 'bun:test';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';
import { spawnSync } from 'child_process';

const ROOT = path.resolve(import.meta.dir, '..');
const BIN = path.join(ROOT, 'bin', 'gstack-gbrain-repo-policy');

let tmpHome: string;

function run(args: string[], opts: { env?: Record<string, string> } = {}) {
  const res = spawnSync(BIN, args, {
    env: { ...process.env, GSTACK_HOME: tmpHome, ...(opts.env || {}) },
    encoding: 'utf-8',
  });
  return {
    stdout: (res.stdout || '').trim(),
    stderr: (res.stderr || '').trim(),
    status: res.status ?? -1,
  };
}

function policyFile(): string {
  return path.join(tmpHome, 'gbrain-repo-policy.json');
}

function readPolicy(): any {
  return JSON.parse(fs.readFileSync(policyFile(), 'utf-8'));
}

beforeEach(() => {
  tmpHome = fs.mkdtempSync(path.join(os.tmpdir(), 'gbrain-policy-'));
});

afterEach(() => {
  fs.rmSync(tmpHome, { recursive: true, force: true });
});

describe('normalize', () => {
  test('strips https:// and .git', () => {
    const r = run(['normalize', 'https://github.com/foo/bar.git']);
    expect(r.status).toBe(0);
    expect(r.stdout).toBe('github.com/foo/bar');
  });

  test('plain https without .git', () => {
    const r = run(['normalize', 'https://github.com/foo/bar']);
    expect(r.stdout).toBe('github.com/foo/bar');
  });

  test('ssh shorthand git@host:path collapses to the same key', () => {
    const r = run(['normalize', 'git@github.com:foo/bar.git']);
    expect(r.stdout).toBe('github.com/foo/bar');
  });

  test('ssh:// URL form collapses to the same key', () => {
    const r = run(['normalize', 'ssh://git@github.com/foo/bar.git']);
    expect(r.stdout).toBe('github.com/foo/bar');
  });

  test('uppercase hostname and path are lowercased', () => {
    const r = run(['normalize', 'HTTPS://GITHUB.COM/FOO/BAR']);
    expect(r.stdout).toBe('github.com/foo/bar');
  });

  test('gitlab subgroups preserved (ssh shorthand)', () => {
    const r = run(['normalize', 'git@gitlab.com:group/subgroup/project.git']);
    expect(r.stdout).toBe('gitlab.com/group/subgroup/project');
  });

  test('custom gitlab host with https', () => {
    const r = run(['normalize', 'https://gitlab.example.com/group/project']);
    expect(r.stdout).toBe('gitlab.example.com/group/project');
  });

  test('all variants collapse to a single key', () => {
    const forms = [
      'https://github.com/Foo/Bar.git',
      'https://github.com/foo/bar',
      'git@github.com:foo/bar.git',
      'ssh://git@github.com/foo/bar.git',
      'HTTPS://GITHUB.COM/FOO/BAR',
    ];
    const keys = forms.map((f) => run(['normalize', f]).stdout);
    expect(new Set(keys).size).toBe(1);
    expect(keys[0]).toBe('github.com/foo/bar');
  });
});
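The collapse rule these tests assert amounts to a short chain of rewrites. A hypothetical TypeScript mirror of the normalization (the real logic is bash in `bin/gstack-gbrain-repo-policy`):

```typescript
// Hypothetical sketch of the key-normalization rule the tests above assert:
// lowercase, strip the scheme and git@ userinfo, turn ssh shorthand
// host:path into host/path, and drop a trailing .git.
function normalizeRemote(url: string): string {
  return url
    .toLowerCase()
    .replace(/^(https?|ssh):\/\//, '') // scheme
    .replace(/^git@/, '')              // ssh userinfo
    .replace(':', '/')                 // ssh shorthand separator (first colon only)
    .replace(/\.git$/, '');            // trailing .git
}
```

Because the scheme is stripped first, the only colon left in the shorthand form is the host/path separator, so a plain first-colon replace is enough for the cases the suite covers.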

describe('set + get', () => {
  test('set persists the tier and get returns it', () => {
    const s = run(['set', 'https://github.com/foo/bar.git', 'read-write']);
    expect(s.status).toBe(0);
    const g = run(['get', 'https://github.com/foo/bar']);
    expect(g.status).toBe(0);
    expect(g.stdout).toBe('read-write');
  });

  test('all three tier values accepted', () => {
    run(['set', 'https://github.com/a/a', 'read-write']);
    run(['set', 'https://github.com/b/b', 'read-only']);
    run(['set', 'https://github.com/c/c', 'deny']);
    expect(run(['get', 'https://github.com/a/a']).stdout).toBe('read-write');
    expect(run(['get', 'https://github.com/b/b']).stdout).toBe('read-only');
    expect(run(['get', 'https://github.com/c/c']).stdout).toBe('deny');
  });

  test('invalid tier rejected with non-zero exit', () => {
    const r = run(['set', 'https://github.com/foo/bar', 'allow']);
    expect(r.status).not.toBe(0);
    expect(r.stderr.toLowerCase()).toContain('invalid tier');
  });

  test('get for unset remote returns literal unset', () => {
    run(['set', 'https://github.com/foo/bar', 'read-write']);
    const r = run(['get', 'https://github.com/baz/qux']);
    expect(r.stdout).toBe('unset');
  });

  test('ssh-set then https-get returns the same tier', () => {
    run(['set', 'git@github.com:foo/bar.git', 'deny']);
    const r = run(['get', 'https://github.com/foo/bar']);
    expect(r.stdout).toBe('deny');
  });
});

describe('file format + schema version', () => {
  test('_schema_version: 2 added on fresh file creation', () => {
    run(['set', 'https://github.com/foo/bar', 'read-write']);
    expect(readPolicy()._schema_version).toBe(2);
  });

  test('policy file mode is 0600', () => {
    run(['set', 'https://github.com/foo/bar', 'read-write']);
    const mode = fs.statSync(policyFile()).mode & 0o777;
    expect(mode).toBe(0o600);
  });

  test('re-running set does not duplicate schema version or entries', () => {
    run(['set', 'https://github.com/foo/bar', 'read-write']);
    run(['set', 'https://github.com/foo/bar', 'deny']);
    const p = readPolicy();
    expect(p._schema_version).toBe(2);
    expect(p['github.com/foo/bar']).toBe('deny');
    // Only the schema version + the one entry
    expect(Object.keys(p).length).toBe(2);
  });
});

describe('legacy migration (D3 allow → read-write)', () => {
  test('legacy allow value is rewritten to read-write on first read', () => {
    fs.writeFileSync(
      policyFile(),
      JSON.stringify({ 'github.com/foo/bar': 'allow' }),
      { mode: 0o600 }
    );
    const r = run(['get', 'https://github.com/foo/bar']);
    expect(r.stdout).toBe('read-write');
    expect(r.stderr).toContain('Migrated 1 legacy allow entries');
    const p = readPolicy();
    expect(p['github.com/foo/bar']).toBe('read-write');
    expect(p._schema_version).toBe(2);
  });

  test('migration preserves deny entries unchanged', () => {
    fs.writeFileSync(
      policyFile(),
      JSON.stringify({ 'github.com/foo/bar': 'allow', 'github.com/baz/qux': 'deny' }),
      { mode: 0o600 }
    );
    run(['get', 'https://github.com/foo/bar']);
    const p = readPolicy();
    expect(p['github.com/foo/bar']).toBe('read-write');
    expect(p['github.com/baz/qux']).toBe('deny');
  });

  test('migration is idempotent — second run is a no-op', () => {
    fs.writeFileSync(
      policyFile(),
      JSON.stringify({ 'github.com/foo/bar': 'allow' }),
      { mode: 0o600 }
    );
    const first = run(['get', 'https://github.com/foo/bar']);
    expect(first.stderr).toContain('Migrated 1');
    const second = run(['get', 'https://github.com/foo/bar']);
    expect(second.stderr).not.toContain('Migrated');
    expect(second.stdout).toBe('read-write');
  });

  test('already-v2 file is not re-migrated', () => {
    fs.writeFileSync(
      policyFile(),
      JSON.stringify({ _schema_version: 2, 'github.com/foo/bar': 'read-write' }),
      { mode: 0o600 }
    );
    const r = run(['get', 'https://github.com/foo/bar']);
    expect(r.stderr).not.toContain('Migrated');
    expect(r.stdout).toBe('read-write');
  });
});
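The migration contract these tests pin down is small enough to state as code. A hypothetical TypeScript sketch of it (the real migration is bash + jq inside the CLI):

```typescript
// Hypothetical sketch of the one-shot allow -> read-write migration:
// a file already stamped _schema_version: 2 is left alone; otherwise
// legacy 'allow' values are rewritten and the schema version is stamped.
function migrateLegacyAllow(policy: Record<string, unknown>): number {
  if (policy._schema_version === 2) return 0; // already migrated: no-op
  let migrated = 0;
  for (const [key, tier] of Object.entries(policy)) {
    if (tier === 'allow') {
      policy[key] = 'read-write';
      migrated += 1;
    }
  }
  policy._schema_version = 2;
  return migrated; // caller emits the one-shot "Migrated N ..." log when N > 0
}
```

Stamping `_schema_version` even when no `allow` entries were found is what makes the second run a guaranteed no-op.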

describe('corrupt-file handling', () => {
  test('unparseable JSON is quarantined and a fresh file is started', () => {
    fs.writeFileSync(policyFile(), 'not valid json{', { mode: 0o600 });
    const r = run(['get', 'https://github.com/foo/bar']);
    expect(r.status).toBe(0);
    expect(r.stdout).toBe('unset');
    expect(r.stderr).toContain('corrupt policy file quarantined');
    // New file exists, is valid, and has schema version
    const p = readPolicy();
    expect(p._schema_version).toBe(2);
    // Quarantine file exists
    const quarantine = fs.readdirSync(tmpHome).find((f) =>
      f.startsWith('gbrain-repo-policy.json.corrupt-')
    );
    expect(quarantine).toBeDefined();
  });
});
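The tmpfile+rename and quarantine behavior asserted here follows a standard pattern. A minimal TypeScript sketch of both halves, assuming the same file layout (the real store is bash + jq):

```typescript
import * as fs from 'fs';

// Hypothetical sketch: atomic write (tmpfile + rename) plus quarantine of
// an unparseable policy file, matching the behavior the tests assert.
function writePolicyAtomic(file: string, policy: Record<string, unknown>): void {
  const tmp = `${file}.tmp.${process.pid}`;
  fs.writeFileSync(tmp, JSON.stringify(policy), { mode: 0o600 });
  fs.renameSync(tmp, file); // rename is atomic on POSIX: readers never see a partial file
}

function readPolicyOrQuarantine(file: string): Record<string, unknown> {
  if (!fs.existsSync(file)) return {};
  try {
    return JSON.parse(fs.readFileSync(file, 'utf-8'));
  } catch {
    // Corrupt: move the file aside with a timestamp suffix and start fresh.
    fs.renameSync(file, `${file}.corrupt-${Date.now()}`);
    return {};
  }
}
```

The quarantine rename preserves the broken file for inspection while guaranteeing the next write produces a valid, schema-stamped file.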

describe('list', () => {
  test('list prints entries sorted, excludes _schema_version', () => {
    run(['set', 'https://github.com/zebra/zz', 'deny']);
    run(['set', 'https://github.com/apple/aa', 'read-write']);
    run(['set', 'https://github.com/middle/mm', 'read-only']);
    const r = run(['list']);
    const lines = r.stdout.split('\n');
    expect(lines.length).toBe(3);
    expect(lines[0]).toBe('github.com/apple/aa\tread-write');
    expect(lines[1]).toBe('github.com/middle/mm\tread-only');
    expect(lines[2]).toBe('github.com/zebra/zz\tdeny');
  });

  test('list on missing file returns empty, no file created', () => {
    const r = run(['list']);
    expect(r.status).toBe(0);
    expect(r.stdout).toBe('');
    expect(fs.existsSync(policyFile())).toBe(false);
  });
});

describe('get without arg (auto-detect from current dir)', () => {
  test('returns unset when not in a git repo', () => {
    const cwdTmp = fs.mkdtempSync(path.join(os.tmpdir(), 'no-git-'));
    try {
      const res = spawnSync(BIN, ['get'], {
        env: { ...process.env, GSTACK_HOME: tmpHome },
        cwd: cwdTmp,
        encoding: 'utf-8',
      });
      expect((res.stdout || '').trim()).toBe('unset');
    } finally {
      fs.rmSync(cwdTmp, { recursive: true, force: true });
    }
  });
});
@@ -0,0 +1,556 @@
/**
 * gstack-gbrain-supabase-provision — Supabase Management API wrapper.
 *
 * All tests run against a per-test local mock HTTP server (Bun.serve)
 * that returns fixture responses. Never hits the real Supabase API, never
 * requires a live PAT.
 *
 * Covers the D21 HTTP error suite (401/403/402/409/429/5xx), the happy
 * path for each subcommand (list-orgs, create, wait, pooler-url), the
 * verified schema corrections (POST /v1/projects with organization_slug,
 * GET /config/database/pooler), PAT + DB_PASS env-var discipline, retry
 * + backoff on transient errors, and pooler URL construction using the
 * generated DB_PASS (not the API response's templated connection_string).
 */

import { describe, test, expect, afterEach } from 'bun:test';
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

const ROOT = path.resolve(import.meta.dir, '..');
const BIN = path.join(ROOT, 'bin', 'gstack-gbrain-supabase-provision');

// Minimal PATH that finds jq/curl but excludes user bins.
const SAFE_PATH = '/usr/bin:/bin:/usr/sbin:/sbin:/opt/homebrew/bin:/usr/local/bin';

type Handler = (req: Request) => Response | Promise<Response>;

interface MockServer {
  url: string;
  close: () => void;
  requests: Array<{ method: string; path: string; body?: string }>;
}

function startMock(routes: Record<string, Handler>): MockServer {
  const requests: MockServer['requests'] = [];
  const server = Bun.serve({
    port: 0,
    async fetch(req) {
      const u = new URL(req.url);
      const key = `${req.method} ${u.pathname}`;
      // Log method+path only. Handlers that need the body read it themselves;
      // request bodies can only be consumed once.
      requests.push({ method: req.method, path: u.pathname });
      const handler = routes[key] || routes[`${req.method} *`];
      if (!handler) {
        return new Response(
          JSON.stringify({ message: `no mock for ${key}` }),
          { status: 404, headers: { 'content-type': 'application/json' } }
        );
      }
      return handler(req);
    },
  });
  const base = `http://localhost:${server.port}`;
  return {
    url: base,
    close: () => server.stop(true),
    requests,
  };
}

async function runBin(
  args: string[],
  env: Record<string, string> = {}
): Promise<{ stdout: string; stderr: string; status: number }> {
  // Use Bun.spawn (async) rather than spawnSync. spawnSync blocks the Bun
  // event loop, which prevents Bun.serve mocks from responding — every
  // HTTP call would hit curl's timeout instead of round-tripping.
  const proc = Bun.spawn([BIN, ...args], {
    env: { PATH: SAFE_PATH, ...env },
    stdout: 'pipe',
    stderr: 'pipe',
  });
  const [stdout, stderr, status] = await Promise.all([
    new Response(proc.stdout).text(),
    new Response(proc.stderr).text(),
    proc.exited,
  ]);
  return { stdout: stdout.trim(), stderr: stderr.trim(), status };
}

function jsonResp(body: any, status = 200): Response {
  return new Response(JSON.stringify(body), {
    status,
    headers: { 'content-type': 'application/json' },
  });
}

let mock: MockServer;

afterEach(() => {
  if (mock) mock.close();
});

describe('list-orgs', () => {
  test('happy path: returns orgs from GET /v1/organizations', async () => {
    mock = startMock({
      'GET /v1/organizations': () =>
        jsonResp([
          { id: 'deprec-1', slug: 'acme', name: 'Acme Inc' },
          { id: 'deprec-2', slug: 'personal', name: 'Personal' },
        ]),
    });
    const r = await runBin(['list-orgs', '--json'], {
      SUPABASE_ACCESS_TOKEN: 'sbp_test_pat',
      SUPABASE_API_BASE: mock.url,
    });
    expect(r.status).toBe(0);
    const j = JSON.parse(r.stdout);
    expect(j.orgs).toEqual([
      { slug: 'acme', name: 'Acme Inc' },
      { slug: 'personal', name: 'Personal' },
    ]);
  });

  test('sends Authorization: Bearer <PAT> header', async () => {
    let authHeader = '';
    mock = startMock({
      'GET /v1/organizations': (req) => {
        authHeader = req.headers.get('authorization') || '';
        return jsonResp([]);
      },
    });
    await runBin(['list-orgs', '--json'], {
      SUPABASE_ACCESS_TOKEN: 'sbp_expected_pat_xxx',
      SUPABASE_API_BASE: mock.url,
    });
    expect(authHeader).toBe('Bearer sbp_expected_pat_xxx');
  });

  test('exits 3 with auth error when SUPABASE_ACCESS_TOKEN is missing', async () => {
    const r = await runBin(['list-orgs']);
    expect(r.status).toBe(3);
    expect(r.stderr).toContain('SUPABASE_ACCESS_TOKEN is not set');
  });

  test('exits 3 on 401 Unauthorized', async () => {
    mock = startMock({
      'GET /v1/organizations': () => jsonResp({ message: 'Invalid JWT' }, 401),
    });
    const r = await runBin(['list-orgs'], {
      SUPABASE_ACCESS_TOKEN: 'sbp_bad',
      SUPABASE_API_BASE: mock.url,
    });
    expect(r.status).toBe(3);
    expect(r.stderr).toContain('401 Unauthorized');
  });

  test('exits 3 on 403 Forbidden', async () => {
    mock = startMock({
      'GET /v1/organizations': () => jsonResp({ message: 'Forbidden' }, 403),
    });
    const r = await runBin(['list-orgs'], {
      SUPABASE_ACCESS_TOKEN: 'sbp_noperm',
      SUPABASE_API_BASE: mock.url,
    });
    expect(r.status).toBe(3);
    expect(r.stderr).toContain('403 Forbidden');
  });
});

describe('create', () => {
  test('happy path: POST /v1/projects with organization_slug, no `plan` field', async () => {
    let sentBody: any = null;
    mock = startMock({
      'POST /v1/projects': async (req) => {
        sentBody = JSON.parse(await req.text());
        return jsonResp({
          id: 'deprec',
          ref: 'abcdefghijklmnopqrst',
          organization_slug: 'acme',
          name: 'gbrain',
          region: 'us-east-1',
          created_at: '2026-04-23T00:00:00Z',
          status: 'COMING_UP',
        }, 201);
      },
    });
    const r = await runBin(['create', 'gbrain', 'us-east-1', 'acme', '--json'], {
      SUPABASE_ACCESS_TOKEN: 'sbp_test',
      DB_PASS: 'generated-secret-pw',
      SUPABASE_API_BASE: mock.url,
    });
    expect(r.status).toBe(0);
    const j = JSON.parse(r.stdout);
    expect(j.ref).toBe('abcdefghijklmnopqrst');
    expect(j.status).toBe('COMING_UP');
    // Verify the request body had the right shape
    expect(sentBody.name).toBe('gbrain');
    expect(sentBody.region).toBe('us-east-1');
    expect(sentBody.organization_slug).toBe('acme');
    expect(sentBody.db_pass).toBe('generated-secret-pw');
    // Critical: no `plan` field, since it's ignored server-side per OpenAPI
    expect(sentBody.plan).toBeUndefined();
  });

  test('passes desired_instance_size when --instance-size flag is used', async () => {
    let sentBody: any = null;
    mock = startMock({
      'POST /v1/projects': async (req) => {
        sentBody = JSON.parse(await req.text());
        return jsonResp({ ref: 'r', status: 'COMING_UP' }, 201);
      },
    });
    await runBin(['create', 'gbrain', 'us-east-1', 'acme', '--instance-size', 'small', '--json'], {
      SUPABASE_ACCESS_TOKEN: 'sbp_test',
      DB_PASS: 'pw',
      SUPABASE_API_BASE: mock.url,
    });
    expect(sentBody.desired_instance_size).toBe('small');
  });

  test('exits 4 on 402 Payment Required (quota)', async () => {
    mock = startMock({
      'POST /v1/projects': () => jsonResp({ message: 'project limit reached' }, 402),
    });
    const r = await runBin(['create', 'gbrain', 'us-east-1', 'acme'], {
      SUPABASE_ACCESS_TOKEN: 'sbp_test',
      DB_PASS: 'pw',
      SUPABASE_API_BASE: mock.url,
    });
    expect(r.status).toBe(4);
    expect(r.stderr).toContain('402 Payment Required');
    expect(r.stderr).toContain('quota exceeded');
  });

  test('exits 5 on 409 Conflict (duplicate name)', async () => {
    mock = startMock({
      'POST /v1/projects': () => jsonResp({ message: 'conflict' }, 409),
    });
    const r = await runBin(['create', 'gbrain', 'us-east-1', 'acme'], {
      SUPABASE_ACCESS_TOKEN: 'sbp_test',
      DB_PASS: 'pw',
      SUPABASE_API_BASE: mock.url,
    });
    expect(r.status).toBe(5);
    expect(r.stderr).toContain('409 Conflict');
    expect(r.stderr).toContain('duplicate project name');
  });

  test('fails when DB_PASS is missing', async () => {
    const r = await runBin(['create', 'gbrain', 'us-east-1', 'acme'], {
      SUPABASE_ACCESS_TOKEN: 'sbp_test',
    });
    expect(r.status).toBe(2);
    expect(r.stderr).toContain('DB_PASS env var is required');
  });

  test('missing positional args rejected with exit 2', async () => {
    const r = await runBin(['create', 'gbrain'], {
      SUPABASE_ACCESS_TOKEN: 'sbp_test',
      DB_PASS: 'pw',
    });
    expect(r.status).toBe(2);
    expect(r.stderr).toContain('missing');
  });

  test('retries on 429 rate limit with backoff and eventually succeeds', async () => {
    let count = 0;
    mock = startMock({
      'POST /v1/projects': () => {
        count += 1;
        if (count < 2) return jsonResp({ message: 'too many requests' }, 429);
        return jsonResp({ ref: 'r', status: 'COMING_UP' }, 201);
      },
    });
    const r = await runBin(['create', 'gbrain', 'us-east-1', 'acme', '--json'], {
      SUPABASE_ACCESS_TOKEN: 'sbp_test',
      DB_PASS: 'pw',
      SUPABASE_API_BASE: mock.url,
    });
    expect(r.status).toBe(0);
expect(count).toBe(2);
|
||||
}, 15000);
|
||||
|
||||
test('exits 8 on persistent 5xx after max retries', async () => {
|
||||
let count = 0;
|
||||
mock = startMock({
|
||||
'POST /v1/projects': () => {
|
||||
count += 1;
|
||||
return jsonResp({ message: 'internal server error' }, 502);
|
||||
},
|
||||
});
|
||||
const r = await runBin(['create', 'gbrain', 'us-east-1', 'acme'], {
|
||||
SUPABASE_ACCESS_TOKEN: 'sbp_test',
|
||||
DB_PASS: 'pw',
|
||||
SUPABASE_API_BASE: mock.url,
|
||||
});
|
||||
expect(r.status).toBe(8);
|
||||
expect(r.stderr).toContain('502');
|
||||
expect(count).toBeGreaterThanOrEqual(3);
|
||||
}, 30000);
|
||||
});
|
||||
|
||||
describe('wait', () => {
|
||||
test('happy path: polls until ACTIVE_HEALTHY', async () => {
|
||||
let count = 0;
|
||||
mock = startMock({
|
||||
'GET /v1/projects/abc': () => {
|
||||
count += 1;
|
||||
if (count < 2) return jsonResp({ ref: 'abc', status: 'COMING_UP' });
|
||||
return jsonResp({ ref: 'abc', status: 'ACTIVE_HEALTHY' });
|
||||
},
|
||||
});
|
||||
const r = await runBin(['wait', 'abc', '--timeout', '30', '--json'], {
|
||||
SUPABASE_ACCESS_TOKEN: 'sbp_test',
|
||||
SUPABASE_API_BASE: mock.url,
|
||||
});
|
||||
expect(r.status).toBe(0);
|
||||
const j = JSON.parse(r.stdout);
|
||||
expect(j.status).toBe('ACTIVE_HEALTHY');
|
||||
expect(j.ref).toBe('abc');
|
||||
}, 30000);
|
||||
|
||||
test('exits 7 on terminal INIT_FAILED state', async () => {
|
||||
mock = startMock({
|
||||
'GET /v1/projects/abc': () => jsonResp({ ref: 'abc', status: 'INIT_FAILED' }),
|
||||
});
|
||||
const r = await runBin(['wait', 'abc', '--timeout', '10'], {
|
||||
SUPABASE_ACCESS_TOKEN: 'sbp_test',
|
||||
SUPABASE_API_BASE: mock.url,
|
||||
});
|
||||
expect(r.status).toBe(7);
|
||||
expect(r.stderr).toContain('INIT_FAILED');
|
||||
});
|
||||
|
||||
test('exits 6 on timeout with resume-provision hint', async () => {
|
||||
// Stay in COMING_UP forever.
|
||||
mock = startMock({
|
||||
'GET /v1/projects/abc': () => jsonResp({ ref: 'abc', status: 'COMING_UP' }),
|
||||
});
|
||||
const r = await runBin(['wait', 'abc', '--timeout', '0'], {
|
||||
SUPABASE_ACCESS_TOKEN: 'sbp_test',
|
||||
SUPABASE_API_BASE: mock.url,
|
||||
});
|
||||
expect(r.status).toBe(6);
|
||||
expect(r.stderr).toContain('wait timed out');
|
||||
expect(r.stderr).toContain('--resume-provision abc');
|
||||
}, 15000);
|
||||
});
|
||||
|
||||
describe('pooler-url', () => {
|
||||
const REF = 'abcdefghijklmnopqrst';
|
||||
const POOLER_OK = {
|
||||
db_user: `postgres.${REF}`,
|
||||
db_host: 'aws-0-us-east-1.pooler.supabase.com',
|
||||
db_port: 6543,
|
||||
db_name: 'postgres',
|
||||
pool_mode: 'session',
|
||||
connection_string:
|
||||
'postgresql://postgres.abcdefghijklmnopqrst:[PASSWORD]@aws-0-us-east-1.pooler.supabase.com:6543/postgres',
|
||||
};
|
||||
|
||||
test('constructs URL from db_user/host/port/name + DB_PASS (not response connection_string)', async () => {
|
||||
mock = startMock({
|
||||
[`GET /v1/projects/${REF}/config/database/pooler`]: () => jsonResp(POOLER_OK),
|
||||
});
|
||||
const r = await runBin(['pooler-url', REF, '--json'], {
|
||||
SUPABASE_ACCESS_TOKEN: 'sbp_test',
|
||||
DB_PASS: 'my-real-password',
|
||||
SUPABASE_API_BASE: mock.url,
|
||||
});
|
||||
expect(r.status).toBe(0);
|
||||
const j = JSON.parse(r.stdout);
|
||||
expect(j.pooler_url).toBe(
|
||||
`postgresql://postgres.${REF}:my-real-password@aws-0-us-east-1.pooler.supabase.com:6543/postgres`
|
||||
);
|
||||
// The API's templated connection_string is NOT what we output.
|
||||
expect(j.pooler_url).not.toContain('[PASSWORD]');
|
||||
});
|
||||
|
||||
test('handles array response by preferring session pool_mode entry', async () => {
|
||||
mock = startMock({
|
||||
[`GET /v1/projects/${REF}/config/database/pooler`]: () =>
|
||||
jsonResp([
|
||||
{ ...POOLER_OK, pool_mode: 'transaction', db_port: 6543 },
|
||||
{ ...POOLER_OK, pool_mode: 'session', db_port: 5432 },
|
||||
]),
|
||||
});
|
||||
const r = await runBin(['pooler-url', REF, '--json'], {
|
||||
SUPABASE_ACCESS_TOKEN: 'sbp_test',
|
||||
DB_PASS: 'pw',
|
||||
SUPABASE_API_BASE: mock.url,
|
||||
});
|
||||
expect(r.status).toBe(0);
|
||||
const j = JSON.parse(r.stdout);
|
||||
// Picked session entry with port 5432 (for this fixture)
|
||||
expect(j.pooler_url).toContain(':5432/postgres');
|
||||
});
|
||||
|
||||
test('fails cleanly when pooler config is missing required fields', async () => {
|
||||
mock = startMock({
|
||||
[`GET /v1/projects/${REF}/config/database/pooler`]: () =>
|
||||
jsonResp({ identifier: 'x', pool_mode: 'session' }),
|
||||
});
|
||||
const r = await runBin(['pooler-url', REF], {
|
||||
SUPABASE_ACCESS_TOKEN: 'sbp_test',
|
||||
DB_PASS: 'pw',
|
||||
SUPABASE_API_BASE: mock.url,
|
||||
});
|
||||
expect(r.status).toBe(2);
|
||||
expect(r.stderr).toContain('missing pooler config fields');
|
||||
});
|
||||
|
||||
test('requires DB_PASS to construct URL', async () => {
|
||||
const r = await runBin(['pooler-url', REF], {
|
||||
SUPABASE_ACCESS_TOKEN: 'sbp_test',
|
||||
});
|
||||
expect(r.status).toBe(2);
|
||||
expect(r.stderr).toContain('DB_PASS env var is required');
|
||||
});
|
||||
});
|
||||
|
||||
describe('list-orphans (D20)', () => {
|
||||
const MOCK_PROJECTS = [
|
||||
{ ref: 'aaaaaaaaaaaaaaaaaaaa', name: 'gbrain', created_at: '2026-04-20', region: 'us-east-1' },
|
||||
{ ref: 'bbbbbbbbbbbbbbbbbbbb', name: 'gbrain-backup', created_at: '2026-04-21', region: 'us-east-1' },
|
||||
{ ref: 'cccccccccccccccccccc', name: 'my-production', created_at: '2026-04-15', region: 'us-west-2' },
|
||||
{ ref: 'dddddddddddddddddddd', name: 'gbrain', created_at: '2026-04-22', region: 'eu-west-1' },
|
||||
];
|
||||
|
||||
test('lists gbrain-prefixed projects that are NOT the active brain', async () => {
|
||||
mock = startMock({
|
||||
'GET /v1/projects': () => jsonResp(MOCK_PROJECTS),
|
||||
});
|
||||
const home = fs.mkdtempSync(path.join(os.tmpdir(), 'gbrain-orphan-'));
|
||||
// use top-level fs
|
||||
fs.mkdirSync(path.join(home, '.gbrain'));
|
||||
fs.writeFileSync(
|
||||
path.join(home, '.gbrain', 'config.json'),
|
||||
JSON.stringify({
|
||||
engine: 'postgres',
|
||||
// Active brain points at aaaaaaaaaaaaaaaaaaaa
|
||||
database_url: 'postgresql://postgres.aaaaaaaaaaaaaaaaaaaa:pw@host:6543/postgres',
|
||||
})
|
||||
);
|
||||
try {
|
||||
const r = await runBin(['list-orphans', '--json'], {
|
||||
SUPABASE_ACCESS_TOKEN: 'sbp_test',
|
||||
SUPABASE_API_BASE: mock.url,
|
||||
HOME: home,
|
||||
});
|
||||
expect(r.status).toBe(0);
|
||||
const j = JSON.parse(r.stdout);
|
||||
expect(j.active_ref).toBe('aaaaaaaaaaaaaaaaaaaa');
|
||||
expect(j.orphans.length).toBe(2);
|
||||
const refs = j.orphans.map((o: any) => o.ref).sort();
|
||||
expect(refs).toEqual(['bbbbbbbbbbbbbbbbbbbb', 'dddddddddddddddddddd']);
|
||||
// my-production is NOT in orphans — filtered out by gbrain prefix
|
||||
expect(refs).not.toContain('cccccccccccccccccccc');
|
||||
} finally {
|
||||
fs.rmSync(home, { recursive: true, force: true });
|
||||
}
|
||||
});
|
||||
|
||||
test('treats all gbrain-prefixed projects as orphans when no active config exists', async () => {
|
||||
mock = startMock({
|
||||
'GET /v1/projects': () => jsonResp(MOCK_PROJECTS),
|
||||
});
|
||||
const home = fs.mkdtempSync(path.join(os.tmpdir(), 'gbrain-no-cfg-'));
|
||||
try {
|
||||
const r = await runBin(['list-orphans', '--json'], {
|
||||
SUPABASE_ACCESS_TOKEN: 'sbp_test',
|
||||
SUPABASE_API_BASE: mock.url,
|
||||
HOME: home,
|
||||
});
|
||||
expect(r.status).toBe(0);
|
||||
const j = JSON.parse(r.stdout);
|
||||
expect(j.active_ref).toBeNull();
|
||||
// All 3 gbrain-prefixed projects are orphans when no active config
|
||||
expect(j.orphans.length).toBe(3);
|
||||
} finally {
|
||||
// use top-level fs
|
||||
fs.rmSync(home, { recursive: true, force: true });
|
||||
}
|
||||
});
|
||||
|
||||
test('respects custom --name-prefix', async () => {
|
||||
mock = startMock({
|
||||
'GET /v1/projects': () =>
|
||||
jsonResp([
|
||||
{ ref: 'aaaaaaaaaaaaaaaaaaaa', name: 'my-prefix-one', created_at: '2026-04-20' },
|
||||
{ ref: 'bbbbbbbbbbbbbbbbbbbb', name: 'gbrain', created_at: '2026-04-20' },
|
||||
]),
|
||||
});
|
||||
const home = fs.mkdtempSync(path.join(os.tmpdir(), 'gbrain-prefix-'));
|
||||
try {
|
||||
const r = await runBin(['list-orphans', '--name-prefix', 'my-prefix', '--json'], {
|
||||
SUPABASE_ACCESS_TOKEN: 'sbp_test',
|
||||
SUPABASE_API_BASE: mock.url,
|
||||
HOME: home,
|
||||
});
|
||||
const j = JSON.parse(r.stdout);
|
||||
expect(j.orphans.length).toBe(1);
|
||||
expect(j.orphans[0].name).toBe('my-prefix-one');
|
||||
} finally {
|
||||
// use top-level fs
|
||||
fs.rmSync(home, { recursive: true, force: true });
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
describe('delete-project (D20)', () => {
|
||||
test('issues DELETE /v1/projects/<ref> and returns the deleted ref', async () => {
|
||||
let deletedPath = '';
|
||||
mock = startMock({
|
||||
'DELETE /v1/projects/abcdefghijklmnopqrst': (req) => {
|
||||
deletedPath = new URL(req.url).pathname;
|
||||
return jsonResp({ id: 1, ref: 'abcdefghijklmnopqrst', name: 'gbrain' });
|
||||
},
|
||||
});
|
||||
const r = await runBin(['delete-project', 'abcdefghijklmnopqrst', '--json'], {
|
||||
SUPABASE_ACCESS_TOKEN: 'sbp_test',
|
||||
SUPABASE_API_BASE: mock.url,
|
||||
});
|
||||
expect(r.status).toBe(0);
|
||||
expect(deletedPath).toBe('/v1/projects/abcdefghijklmnopqrst');
|
||||
const j = JSON.parse(r.stdout);
|
||||
expect(j.deleted_ref).toBe('abcdefghijklmnopqrst');
|
||||
});
|
||||
|
||||
test('surfaces 404 when the project does not exist', async () => {
|
||||
mock = startMock({
|
||||
'DELETE /v1/projects/nonexistent': () => jsonResp({ message: 'Project not found' }, 404),
|
||||
});
|
||||
const r = await runBin(['delete-project', 'nonexistent'], {
|
||||
SUPABASE_ACCESS_TOKEN: 'sbp_test',
|
||||
SUPABASE_API_BASE: mock.url,
|
||||
});
|
||||
expect(r.status).toBe(2);
|
||||
expect(r.stderr).toContain('404');
|
||||
});
|
||||
|
||||
test('requires a ref', async () => {
|
||||
const r = await runBin(['delete-project'], {
|
||||
SUPABASE_ACCESS_TOKEN: 'sbp_test',
|
||||
});
|
||||
expect(r.status).toBe(2);
|
||||
expect(r.stderr).toContain('missing');
|
||||
});
|
||||
});
|
||||
|
||||
describe('general', () => {
|
||||
test('unknown subcommand exits 2', async () => {
|
||||
const r = await runBin(['nope']);
|
||||
expect(r.status).toBe(2);
|
||||
expect(r.stderr).toContain('unknown subcommand');
|
||||
});
|
||||
|
||||
test('no args prints usage and exits 2', async () => {
|
||||
const r = await runBin([]);
|
||||
expect(r.status).toBe(2);
|
||||
expect(r.stderr).toContain('usage');
|
||||
});
|
||||
});
|
||||
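The tests above register mock handlers under keys like `'POST /v1/projects'`. `startMock` itself is not part of this diff, so the following is only a hypothetical sketch of that method-plus-pathname dispatch convention, using the standard `Request`/`Response` types (all names here are illustrative, not the real helper):

```typescript
// Hypothetical sketch of "METHOD /path" route-key dispatch, as assumed by
// the startMock({ 'POST /v1/projects': ... }) calls above. Not the real
// startMock implementation — that helper is outside this diff.
type Handler = (req: Request) => Response | Promise<Response>;

function dispatch(routes: Record<string, Handler>, req: Request): Handler | undefined {
  // Key is the HTTP method plus the URL pathname, e.g. "POST /v1/projects".
  const key = `${req.method} ${new URL(req.url).pathname}`;
  return routes[key];
}

const routes: Record<string, Handler> = {
  'POST /v1/projects': () => new Response(JSON.stringify({ ref: 'r' }), { status: 201 }),
};
const hit = dispatch(routes, new Request('http://mock/v1/projects', { method: 'POST' }));
console.log(hit ? 'matched' : 'no handler'); // matched
```

The query string is deliberately excluded from the key, which matches how the tests key only on the path.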
@@ -0,0 +1,234 @@
/**
 * gstack-brain-init — mocked-gh integration tests.
 *
 * The regular brain-sync tests pass `--remote <bare-git-url>` to skip the
 * gh-repo-creation path entirely. That left the happy path (user just
 * presses Enter, gstack-brain-init calls `gh repo create --private`)
 * with zero coverage — you'd only know it broke when a real user tried
 * it with a real GitHub account.
 *
 * These tests put a fake `gh` binary on PATH that records every call
 * into a file, then run gstack-brain-init in its non-flag interactive
 * mode and assert the fake `gh` was invoked with the expected arguments.
 *
 * No real GitHub account, no live API, deterministic per-run.
 */

import { describe, test, expect, beforeEach, afterEach } from 'bun:test';
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
import { spawnSync } from 'child_process';

const ROOT = path.resolve(import.meta.dir, '..');
const BIN_DIR = path.join(ROOT, 'bin');
const INIT_BIN = path.join(BIN_DIR, 'gstack-brain-init');

let tmpHome: string;
let bareRemote: string;
let fakeBinDir: string;
let ghCallLog: string;

function makeFakeGh(opts: {
  authStatus?: 'ok' | 'fail';
  repoCreate?: 'success' | 'already-exists' | 'fail';
  sshUrl?: string;
}) {
  const authStatus = opts.authStatus ?? 'ok';
  const repoCreate = opts.repoCreate ?? 'success';
  const sshUrl = opts.sshUrl ?? bareRemote;
  const script = `#!/bin/bash
echo "gh $@" >> "${ghCallLog}"
case "$1" in
  auth)
    ${authStatus === 'ok' ? 'exit 0' : 'exit 1'}
    ;;
  repo)
    shift
    case "$1" in
      create)
        ${
          repoCreate === 'success'
            ? 'exit 0'
            : repoCreate === 'already-exists'
              ? 'echo "GraphQL: Name already exists on this account" >&2; exit 1'
              : 'echo "network error" >&2; exit 1'
        }
        ;;
      view)
        # Emulate \`gh repo view <name> --json sshUrl -q .sshUrl\`
        echo "${sshUrl}"
        exit 0
        ;;
    esac
    ;;
esac
exit 0
`;
  const ghPath = path.join(fakeBinDir, 'gh');
  fs.writeFileSync(ghPath, script, { mode: 0o755 });
  return ghPath;
}

function run(
  argv: string[],
  opts: { env?: Record<string, string>; input?: string } = {}
) {
  const env = {
    // Put the fake bin dir FIRST on PATH so our mock gh wins.
    PATH: `${fakeBinDir}:/usr/bin:/bin:/opt/homebrew/bin`,
    GSTACK_HOME: tmpHome,
    USER: 'testuser',
    HOME: tmpHome,
    ...(opts.env || {}),
  };
  const res = spawnSync(INIT_BIN, argv, {
    env,
    encoding: 'utf-8',
    input: opts.input,
    cwd: ROOT,
  });
  return {
    stdout: res.stdout || '',
    stderr: res.stderr || '',
    status: res.status ?? -1,
  };
}

function readGhCalls(): string[] {
  if (!fs.existsSync(ghCallLog)) return [];
  return fs.readFileSync(ghCallLog, 'utf-8').trim().split('\n').filter(Boolean);
}

beforeEach(() => {
  tmpHome = fs.mkdtempSync(path.join(os.tmpdir(), 'brain-init-gh-mock-'));
  bareRemote = fs.mkdtempSync(path.join(os.tmpdir(), 'brain-init-bare-'));
  fakeBinDir = fs.mkdtempSync(path.join(os.tmpdir(), 'brain-init-fake-bin-'));
  ghCallLog = path.join(fakeBinDir, 'gh-calls.log');
  spawnSync('git', ['init', '--bare', '-q', '-b', 'main', bareRemote]);
});

afterEach(() => {
  fs.rmSync(tmpHome, { recursive: true, force: true });
  fs.rmSync(bareRemote, { recursive: true, force: true });
  fs.rmSync(fakeBinDir, { recursive: true, force: true });
  const remoteFile = path.join(os.homedir(), '.gstack-brain-remote.txt');
  if (fs.existsSync(remoteFile)) {
    const contents = fs.readFileSync(remoteFile, 'utf-8');
    if (contents.includes(bareRemote)) fs.unlinkSync(remoteFile);
  }
});

describe('gstack-brain-init uses gh CLI when present + authed', () => {
  test('calls gh repo create --private with the computed default name', () => {
    makeFakeGh({ authStatus: 'ok', repoCreate: 'success' });
    // Interactive mode; pressing Enter accepts the gh default.
    const r = run([], { input: '\n' });
    expect(r.status).toBe(0);
    const calls = readGhCalls();
    // First call: auth status check
    expect(calls.some((c) => c.startsWith('gh auth'))).toBe(true);
    // The create call
    const createCall = calls.find((c) => c.startsWith('gh repo create'));
    expect(createCall).toBeDefined();
    expect(createCall).toContain('gstack-brain-testuser');
    expect(createCall).toContain('--private');
    expect(createCall).toContain('--description');
    expect(createCall).toContain('--source');
    expect(createCall).toContain(tmpHome);
  });

  test('falls back to gh repo view when create reports already-exists', () => {
    makeFakeGh({ authStatus: 'ok', repoCreate: 'already-exists' });
    const r = run([], { input: '\n' });
    expect(r.status).toBe(0);
    const calls = readGhCalls();
    // create was attempted
    expect(calls.some((c) => c.startsWith('gh repo create'))).toBe(true);
    // then view was called to recover the URL
    expect(calls.some((c) => c.startsWith('gh repo view') && c.includes('gstack-brain-testuser'))).toBe(true);
    // The view output (bareRemote URL) should have been wired up as origin.
    const remote = spawnSync('git', ['-C', tmpHome, 'remote', 'get-url', 'origin'], {
      encoding: 'utf-8',
    });
    expect(remote.stdout.trim()).toBe(bareRemote);
  });

  test('user-provided URL bypasses gh create entirely', () => {
    makeFakeGh({ authStatus: 'ok', repoCreate: 'fail' });
    const r = run([], { input: `${bareRemote}\n` });
    expect(r.status).toBe(0);
    const calls = readGhCalls();
    // gh auth was still checked
    expect(calls.some((c) => c.startsWith('gh auth'))).toBe(true);
    // but create was NOT called (user bypassed the default)
    expect(calls.some((c) => c.startsWith('gh repo create'))).toBe(false);
  });
});

describe('gstack-brain-init without gh CLI', () => {
  test('prompts for URL when gh is not on PATH', () => {
    // Don't install fake gh — PATH will not have it.
    // Use a bare-minimum PATH so nothing else shadows.
    const stripped = `${fakeBinDir}:/usr/bin:/bin`;
    const res = spawnSync(INIT_BIN, [], {
      env: {
        PATH: stripped,
        GSTACK_HOME: tmpHome,
        USER: 'testuser',
        HOME: tmpHome,
      },
      encoding: 'utf-8',
      input: `${bareRemote}\n`,
      cwd: ROOT,
    });
    expect(res.status).toBe(0);
    expect(res.stdout).toContain('gh CLI not found');
    // Remote got set from the stdin paste
    const remote = spawnSync('git', ['-C', tmpHome, 'remote', 'get-url', 'origin'], {
      encoding: 'utf-8',
    });
    expect(remote.stdout.trim()).toBe(bareRemote);
  });

  test('prompts for URL when gh is present but not authed', () => {
    makeFakeGh({ authStatus: 'fail' });
    const r = run([], { input: `${bareRemote}\n` });
    expect(r.status).toBe(0);
    expect(r.stdout).toContain('gh CLI not found or not authenticated');
    const calls = readGhCalls();
    // Only `gh auth status` was called; no create attempt.
    expect(calls.some((c) => c.startsWith('gh auth'))).toBe(true);
    expect(calls.some((c) => c.startsWith('gh repo create'))).toBe(false);
  });
});

describe('idempotency via flag', () => {
  test('--remote <url> skips all gh calls', () => {
    makeFakeGh({ authStatus: 'ok', repoCreate: 'success' });
    const r = run(['--remote', bareRemote]);
    expect(r.status).toBe(0);
    const calls = readGhCalls();
    // Zero calls to gh — the --remote flag short-circuits the interactive path.
    expect(calls.length).toBe(0);
  });

  test('re-run with matching --remote is safe (no conflicting-remote error)', () => {
    run(['--remote', bareRemote]);
    const r2 = run(['--remote', bareRemote]);
    expect(r2.status).toBe(0);
  });

  test('re-run with DIFFERENT --remote exits 1 with a conflict message', () => {
    run(['--remote', bareRemote]);
    const otherRemote = fs.mkdtempSync(path.join(os.tmpdir(), 'brain-init-other-'));
    spawnSync('git', ['init', '--bare', '-q', '-b', 'main', otherRemote]);
    try {
      const r2 = run(['--remote', otherRemote]);
      expect(r2.status).not.toBe(0);
      expect(r2.stderr).toContain('already a git repo');
    } finally {
      fs.rmSync(otherRemote, { recursive: true, force: true });
    }
  });
});
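The PATH-shim technique this file builds on — write an executable script named like the real binary, prepend its directory to PATH, and the subprocess resolves the fake — can be demonstrated standalone. This is a minimal sketch, not the project's `makeFakeGh` (which additionally records calls and branches on subcommands); it assumes a POSIX system with bash:

```typescript
// Minimal PATH-shim sketch: a fake `gh` that echoes its arguments.
// Illustrative only — the real makeFakeGh above also logs calls to a file
// and emulates `gh auth` / `gh repo create` / `gh repo view` behavior.
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
import { spawnSync } from 'child_process';

const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'shim-'));
// mode 0o755 makes the script executable.
fs.writeFileSync(path.join(dir, 'gh'), '#!/bin/bash\necho fake-gh "$@"\n', { mode: 0o755 });

// The shim dir goes FIRST on PATH, so `gh` resolves to the fake.
const r = spawnSync('gh', ['auth', 'status'], {
  env: { ...process.env, PATH: `${dir}:${process.env.PATH}` },
  encoding: 'utf-8',
});
console.log(r.stdout.trim()); // fake-gh auth status
fs.rmSync(dir, { recursive: true, force: true });
```

The same ordering rule is why `run()` above builds PATH as `` `${fakeBinDir}:/usr/bin:/bin:...` `` — first match wins.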
@@ -0,0 +1,212 @@
/**
 * Secret-sink test harness (D21 #5, D1-eng contract).
 *
 * Runs a bin with a seeded secret, captures every channel the bin could
 * leak through, and asserts that the seed never appears. Used by Slice 6
 * tests and available for future skills that handle secrets.
 *
 * Channels covered:
 * - stdout (Bun.spawn pipe)
 * - stderr (Bun.spawn pipe)
 * - files written under a per-run $HOME (walked post-mortem)
 * - telemetry JSONL under $HOME/.gstack/analytics/ (same walk, but called
 *   out separately for clearer test failures)
 *
 * Match rules (any hit = leak):
 * - exact substring
 * - URL-decoded substring (catches percent-encoded leaks)
 * - first-12-char prefix (catches "we logged just a portion")
 * - base64 encoding of the seed (catches auth-header leakage)
 *
 * Intentionally NOT covered in v1:
 * - subprocess environment dump (portable /proc reading is non-trivial;
 *   bins rarely leak env without also writing to stdout/stderr)
 * - the user's real shell history (bins don't modify it; the user's
 *   shell does)
 * Those are documented as follow-ups in the D21 eng review commentary.
 *
 * Positive-control discipline: every test suite using this harness should
 * include one test that deliberately leaks a seed and asserts the harness
 * catches it. A harness that silently under-reports is worse than no
 * harness.
 */

import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';

export interface SecretSinkOptions {
  bin: string;
  args: string[];
  /** Seeds whose presence in any captured channel = failure. */
  seeds: string[];
  env?: Record<string, string>;
  stdin?: string;
  /** Override the tmp $HOME. Default: fresh mkdtemp under os.tmpdir(). */
  tmpHome?: string;
  /** Cap on subprocess runtime, ms. Default 10_000. */
  timeoutMs?: number;
}

export interface Leak {
  channel: 'stdout' | 'stderr' | 'file' | 'telemetry';
  matchType: 'exact' | 'url-decoded' | 'prefix-12' | 'base64';
  /** For channel=file|telemetry: the path relative to tmpHome. */
  where?: string;
  /** Short excerpt around the match (for debugging). */
  excerpt: string;
}

export interface SinkResult {
  stdout: string;
  stderr: string;
  status: number;
  /** All files written under tmpHome during the run, keyed by relative path. */
  filesWritten: Record<string, string>;
  /** Subset of filesWritten matching .gstack/analytics/*.jsonl. */
  telemetry: Record<string, string>;
  /** Leaks discovered. Empty = clean. */
  leaks: Leak[];
  /** Where HOME was pointed during the run (for post-mortem inspection). */
  tmpHome: string;
}

export async function runWithSecretSink(opts: SecretSinkOptions): Promise<SinkResult> {
  const tmpHome = opts.tmpHome ?? fs.mkdtempSync(path.join(os.tmpdir(), 'sink-'));
  // Make sure .gstack exists so bins that append to analytics have somewhere to write.
  fs.mkdirSync(path.join(tmpHome, '.gstack', 'analytics'), { recursive: true });

  const env = {
    // Minimal PATH that still finds jq/git/curl/sed so our bins work.
    PATH: '/usr/bin:/bin:/usr/sbin:/sbin:/opt/homebrew/bin:/usr/local/bin',
    HOME: tmpHome,
    GSTACK_HOME: path.join(tmpHome, '.gstack'),
    ...(opts.env || {}),
  };

  const proc = Bun.spawn([opts.bin, ...opts.args], {
    env,
    stdout: 'pipe',
    stderr: 'pipe',
    stdin: opts.stdin ? 'pipe' : 'ignore',
  });
  if (opts.stdin) {
    proc.stdin!.write(opts.stdin);
    proc.stdin!.end();
  }

  const timeoutMs = opts.timeoutMs ?? 10_000;
  const timeoutHandle = setTimeout(() => {
    try { proc.kill(); } catch { /* already done */ }
  }, timeoutMs);

  const [stdout, stderr, status] = await Promise.all([
    new Response(proc.stdout).text(),
    new Response(proc.stderr).text(),
    proc.exited,
  ]);
  clearTimeout(timeoutHandle);

  // Walk tmpHome and read all files (skip binaries / very large files).
  const filesWritten: Record<string, string> = {};
  const telemetry: Record<string, string> = {};
  walk(tmpHome, tmpHome, filesWritten);
  for (const [rel, content] of Object.entries(filesWritten)) {
    if (rel.startsWith('.gstack/analytics/') && rel.endsWith('.jsonl')) {
      telemetry[rel] = content;
    }
  }

  // Scan every channel for every seed with every match rule.
  const leaks: Leak[] = [];
  for (const seed of opts.seeds) {
    if (!seed) continue;
    const rules = buildMatchRules(seed);
    for (const { rule, matchType } of rules) {
      const stdoutHit = findHit(stdout, rule);
      if (stdoutHit !== null) {
        leaks.push({ channel: 'stdout', matchType, excerpt: excerptAt(stdout, stdoutHit) });
      }
      const stderrHit = findHit(stderr, rule);
      if (stderrHit !== null) {
        leaks.push({ channel: 'stderr', matchType, excerpt: excerptAt(stderr, stderrHit) });
      }
      for (const [rel, content] of Object.entries(filesWritten)) {
        const hit = findHit(content, rule);
        if (hit !== null) {
          const channel = rel.startsWith('.gstack/analytics/') ? 'telemetry' : 'file';
          leaks.push({ channel, matchType, where: rel, excerpt: excerptAt(content, hit) });
        }
      }
    }
  }

  return { stdout, stderr, status, filesWritten, telemetry, leaks, tmpHome };
}

function walk(root: string, dir: string, out: Record<string, string>) {
  for (const entry of fs.readdirSync(dir)) {
    const full = path.join(dir, entry);
    let stat;
    try {
      stat = fs.lstatSync(full);
    } catch {
      continue;
    }
    if (stat.isSymbolicLink()) continue;
    if (stat.isDirectory()) {
      walk(root, full, out);
      continue;
    }
    if (!stat.isFile()) continue;
    if (stat.size > 1024 * 1024) continue; // skip huge files, unlikely to be secrets
    const rel = path.relative(root, full);
    try {
      out[rel] = fs.readFileSync(full, 'utf-8');
    } catch {
      // binary or unreadable — skip
    }
  }
}

function buildMatchRules(seed: string): Array<{ rule: string; matchType: Leak['matchType'] }> {
  const rules: Array<{ rule: string; matchType: Leak['matchType'] }> = [];
  rules.push({ rule: seed, matchType: 'exact' });

  // URL-decoded form — catches cases where the seed got percent-encoded
  // (e.g., a password with a '@' embedded in a connection string).
  try {
    const decoded = decodeURIComponent(seed);
    if (decoded !== seed) rules.push({ rule: decoded, matchType: 'url-decoded' });
  } catch {
    // malformed %-encoding in the seed itself; ignore
  }

  // First-12-char prefix — catches partial leaks like "we logged the
  // first 10 chars for debugging." Only applied to seeds >= 16 chars,
  // since shorter seeds would false-positive against normal words.
  if (seed.length >= 16) {
    rules.push({ rule: seed.slice(0, 12), matchType: 'prefix-12' });
  }

  // Base64 encoding — catches leaks through auth headers or config files
  // that encode the seed. Only for seeds >= 12 chars to reduce false
  // positives from short strings that happen to be valid base64.
  if (seed.length >= 12) {
    rules.push({ rule: Buffer.from(seed).toString('base64'), matchType: 'base64' });
  }

  return rules;
}

function findHit(haystack: string, needle: string): number | null {
  if (!needle) return null;
  const idx = haystack.indexOf(needle);
  return idx === -1 ? null : idx;
}

function excerptAt(s: string, idx: number): string {
  const start = Math.max(0, idx - 20);
  const end = Math.min(s.length, idx + 40);
  return s.slice(start, end).replace(/\n/g, '\\n');
}
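A worked example of the match-rule expansion `buildMatchRules` above performs, condensed to show what each rule catches (the seed value is a made-up sample, not from the source):

```typescript
// Condensed re-statement of the four match rules in buildMatchRules above,
// applied to a sample seed. Seed value is illustrative only.
const seed = 'p%40ss-Example-1234567890'; // 25 chars, contains a %-escape
const rules: Array<{ rule: string; matchType: string }> = [{ rule: seed, matchType: 'exact' }];

// url-decoded: 'p%40' decodes to 'p@', so a percent-encoded leak is caught.
const decoded = decodeURIComponent(seed);
if (decoded !== seed) rules.push({ rule: decoded, matchType: 'url-decoded' });

// prefix-12: only for seeds >= 16 chars, to avoid false positives.
if (seed.length >= 16) rules.push({ rule: seed.slice(0, 12), matchType: 'prefix-12' });

// base64: catches e.g. Basic-auth-header style encodings of the seed.
if (seed.length >= 12) rules.push({ rule: Buffer.from(seed).toString('base64'), matchType: 'base64' });

console.log(rules.map((r) => r.matchType).join(','));
// exact,url-decoded,prefix-12,base64
```

Any of these four strings appearing in stdout, stderr, a written file, or a telemetry JSONL is reported as a leak.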
@@ -92,6 +92,7 @@ export const E2E_TOUCHFILES: Record<string, string[]> = {
  'plan-devex-review-plan-mode': ['plan-devex-review/**', 'scripts/resolvers/preamble/generate-plan-mode-handshake.ts', 'scripts/resolvers/preamble.ts', 'scripts/question-registry.ts', 'scripts/one-way-doors.ts', 'test/helpers/agent-sdk-runner.ts'],
  'plan-mode-no-op': ['plan-ceo-review/**', 'scripts/resolvers/preamble/generate-plan-mode-handshake.ts', 'scripts/resolvers/preamble.ts', 'test/helpers/agent-sdk-runner.ts'],
  'e2e-harness-audit': ['plan-ceo-review/**', 'plan-eng-review/**', 'plan-design-review/**', 'plan-devex-review/**', 'scripts/resolvers/preamble/generate-plan-mode-handshake.ts', 'test/helpers/agent-sdk-runner.ts'],
  'brain-privacy-gate': ['scripts/resolvers/preamble/generate-brain-sync-block.ts', 'scripts/resolvers/preamble.ts', 'bin/gstack-brain-sync', 'bin/gstack-brain-init', 'bin/gstack-config', 'test/helpers/agent-sdk-runner.ts'],

  // AskUserQuestion format regression (RECOMMENDATION + Completeness: N/10)
  // Fires when either template OR the two preamble resolvers change.
@@ -336,6 +337,10 @@ export const E2E_TIERS: Record<string, 'gate' | 'periodic'> = {
  'plan-mode-no-op': 'gate',
  'e2e-harness-audit': 'gate',

  // Privacy gate for gstack-brain-sync — periodic (non-deterministic LLM call,
  // costs ~$0.30-$0.50 per run, not needed on every commit)
  'brain-privacy-gate': 'periodic',

  // AskUserQuestion format regression — periodic (Opus 4.7 non-deterministic benchmark)
  'plan-ceo-review-format-mode': 'periodic',
  'plan-ceo-review-format-approach': 'periodic',

@@ -0,0 +1,216 @@
/**
 * Tests for the secret-sink test harness (D21 #5).
 *
 * Positive controls: deliberately leak a seed in every covered channel and
 * assert the harness catches it. A harness that silently under-reports is
 * worse than no harness — these tests are the quality gate.
 *
 * Negative controls: run real setup-gbrain bins with known secrets; no
 * leaks should appear.
 */

import { describe, test, expect } from 'bun:test';
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
import { runWithSecretSink } from './helpers/secret-sink-harness';

const ROOT = path.resolve(import.meta.dir, '..');
const LEAK_BIN_DIR = fs.mkdtempSync(path.join(os.tmpdir(), 'leak-bins-'));

// Build a disposable bash script that leaks in a specific way. Returns
// path to the executable. We don't bother cleaning these up per-test —
// they live under a tmpdir that's fine to linger between tests.
function makeLeakyBin(name: string, body: string): string {
  const p = path.join(LEAK_BIN_DIR, name);
  fs.writeFileSync(p, `#!/bin/bash\nset -euo pipefail\n${body}\n`, { mode: 0o755 });
  return p;
}

describe('secret-sink-harness — positive controls', () => {
  test('catches a seed echoed to stdout', async () => {
    const bin = makeLeakyBin(
      'leak-stdout',
      'echo "config contains: $LEAK_SEED"'
    );
    const seed = 'my-secret-password-12345';
    const r = await runWithSecretSink({
      bin,
      args: [],
      seeds: [seed],
      env: { LEAK_SEED: seed },
    });
    expect(r.leaks.length).toBeGreaterThan(0);
    const stdoutLeaks = r.leaks.filter((l) => l.channel === 'stdout');
    expect(stdoutLeaks.length).toBeGreaterThan(0);
    expect(stdoutLeaks.some((l) => l.matchType === 'exact')).toBe(true);
  });

  test('catches a seed echoed to stderr', async () => {
    const bin = makeLeakyBin(
      'leak-stderr',
      'echo "leaked: $LEAK_SEED" >&2'
    );
    const seed = 'another-secret-value-67890';
    const r = await runWithSecretSink({
      bin,
      args: [],
      seeds: [seed],
      env: { LEAK_SEED: seed },
    });
    expect(r.leaks.some((l) => l.channel === 'stderr')).toBe(true);
  });

  test('catches a seed written to a file under $HOME', async () => {
    const bin = makeLeakyBin(
      'leak-file',
      'mkdir -p "$HOME/.gstack" && echo "seed: $LEAK_SEED" > "$HOME/.gstack/debug.log"'
    );
    const seed = 'file-leaked-secret-value-xyz';
    const r = await runWithSecretSink({
      bin,
      args: [],
      seeds: [seed],
      env: { LEAK_SEED: seed },
    });
    const fileLeaks = r.leaks.filter((l) => l.channel === 'file');
    expect(fileLeaks.length).toBeGreaterThan(0);
    expect(fileLeaks[0].where).toBe('.gstack/debug.log');
  });

  test('catches a seed leaked into the telemetry channel', async () => {
    const bin = makeLeakyBin(
      'leak-telemetry',
      'mkdir -p "$HOME/.gstack/analytics" && ' +
        'echo "{\\"event\\":\\"x\\",\\"leaked_secret\\":\\"$LEAK_SEED\\"}" ' +
        ' >> "$HOME/.gstack/analytics/skill-usage.jsonl"'
    );
    const seed = 'telemetry-leaked-abc123xyz';
    const r = await runWithSecretSink({
      bin,
      args: [],
      seeds: [seed],
      env: { LEAK_SEED: seed },
    });
    const telemetryLeaks = r.leaks.filter((l) => l.channel === 'telemetry');
    expect(telemetryLeaks.length).toBeGreaterThan(0);
    expect(telemetryLeaks[0].where).toContain('analytics/');
  });

  test('catches a seed leaked in base64-encoded form (auth header pattern)', async () => {
    // printf (not echo) so no trailing newline — matches how real auth
    // headers encode: base64(seed) exactly, not base64(seed + "\n").
    const bin = makeLeakyBin(
      'leak-base64',
      'printf "%s" "$LEAK_SEED" | base64'
    );
    const seed = 'base64-leaked-long-enough-secret';
    const r = await runWithSecretSink({
      bin,
      args: [],
      seeds: [seed],
      env: { LEAK_SEED: seed },
    });
    expect(r.leaks.some((l) => l.matchType === 'base64')).toBe(true);
  });

  test('catches a first-12-char prefix leak (the "I only logged a portion" pattern)', async () => {
    const bin = makeLeakyBin(
      'leak-prefix',
      'prefix="${LEAK_SEED:0:12}"; echo "debug prefix: $prefix"'
    );
    const seed = 'prefix-leaked-0123456789abcdef';
    const r = await runWithSecretSink({
      bin,
      args: [],
      seeds: [seed],
      env: { LEAK_SEED: seed },
    });
    expect(r.leaks.some((l) => l.matchType === 'prefix-12')).toBe(true);
  });

  test('clean run with no leak returns an empty leaks array', async () => {
    const bin = makeLeakyBin('clean', 'echo "no secret here"');
    const r = await runWithSecretSink({
      bin,
      args: [],
      seeds: ['never-emitted-seed-xyz-987'],
    });
    expect(r.leaks).toEqual([]);
  });
});

describe('secret-sink-harness — real bins (negative controls)', () => {
  test('supabase-verify does not leak a URL password on reject', async () => {
    const bin = path.join(ROOT, 'bin', 'gstack-gbrain-supabase-verify');
    const seedPassword = 'extremely-distinctive-password-abc-xyz-987';
    // Use a URL that will be REJECTED (wrong scheme) so all error paths run
    const leakyUrl = `mysql://user:${seedPassword}@host:6543/db`;
    const r = await runWithSecretSink({
      bin,
      args: [leakyUrl],
      seeds: [seedPassword],
    });
    // Status 2 — rejected as expected
    expect(r.status).toBe(2);
    // No leaks in any channel
    expect(r.leaks).toEqual([]);
  });

  test('supabase-verify does not leak on direct-connection rejection path', async () => {
    const bin = path.join(ROOT, 'bin', 'gstack-gbrain-supabase-verify');
    const seedPassword = 'another-distinctive-secret-for-direct-conn';
    const leakyUrl = `postgresql://postgres:${seedPassword}@db.abcdef.supabase.co:5432/postgres`;
    const r = await runWithSecretSink({
      bin,
      args: [leakyUrl],
      seeds: [seedPassword],
    });
    expect(r.status).toBe(3);
    expect(r.leaks).toEqual([]);
  });

  test('lib.sh read_secret_to_env does not leak stdin via captured channels', async () => {
    const seed = 'piped-secret-that-should-stay-invisible-zzz';
    // Wrapper script: source lib.sh, read secret, echo only its length.
    const lib = path.join(ROOT, 'bin', 'gstack-gbrain-lib.sh');
    const bin = makeLeakyBin(
      'read-secret-wrapper',
      `. "${lib}"\nread_secret_to_env MY_SECRET "Prompt: "\necho "len=\${#MY_SECRET}"`
    );
    const r = await runWithSecretSink({
      bin,
      args: [],
      seeds: [seed],
      stdin: seed,
    });
    expect(r.status).toBe(0);
    // The length is visible (43) but the value is not
    expect(r.stdout).toContain(`len=${seed.length}`);
    expect(r.leaks).toEqual([]);
  });

  test('supabase-provision does not leak a PAT on auth-failure path', async () => {
    const bin = path.join(ROOT, 'bin', 'gstack-gbrain-supabase-provision');
    const seedPat = 'sbp_very_distinctive_pat_seed_abc_xyz_1234567890';
    // With no SUPABASE_API_BASE override, the bin tries the real API URL.
    // We want to avoid real network calls — point at a bogus URL that
    // immediately fails with curl. The bin should exit with an error
    // WITHOUT leaking the PAT to any channel.
    const r = await runWithSecretSink({
      bin,
      args: ['list-orgs'],
      seeds: [seedPat],
      env: {
        SUPABASE_ACCESS_TOKEN: seedPat,
        // Nonexistent port — curl fails fast.
        SUPABASE_API_BASE: 'http://127.0.0.1:1',
      },
      timeoutMs: 30_000, // curl retries with backoff — give it room to exit
    });
    // Expect a non-zero exit (network failure, exit 8 per the bin's
    // retry-exhausted path)
    expect(r.status).not.toBe(0);
    expect(r.leaks).toEqual([]);
  }, 60_000);
});
@@ -0,0 +1,227 @@
/**
 * Privacy-gate E2E (periodic tier, paid).
 *
 * The gbrain-sync preamble block instructs the model to fire a one-time
 * AskUserQuestion when:
 * - `BRAIN_SYNC: off` in the preamble echo (sync mode not on)
 * - config `gbrain_sync_mode_prompted` is "false"
 * - gbrain is detected on the host (binary on PATH or `gbrain doctor`
 *   --fast --json succeeds)
 *
 * This test stages all three conditions (via env + a fake `gbrain` binary
 * on PATH), runs a cheap gstack skill through the Agent SDK, intercepts
 * every tool use via canUseTool, and asserts: one of the AskUserQuestions
 * fired by the preamble is the privacy gate with its distinctive prose
 * and three options (full / artifacts-only / decline).
 *
 * Cost: ~$0.30-$0.50 per run. Periodic tier (EVALS=1 EVALS_TIER=periodic).
 *
 * See scripts/resolvers/preamble/generate-brain-sync-block.ts for the
 * prose contract this test locks in.
 */

import { describe, test, expect } from 'bun:test';
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
import { runAgentSdkTest, passThroughNonAskUserQuestion, resolveClaudeBinary } from './helpers/agent-sdk-runner';

const shouldRun = !!process.env.EVALS && process.env.EVALS_TIER === 'periodic';
const describeE2E = shouldRun ? describe : describe.skip;

describeE2E('gbrain-sync privacy gate fires once via preamble', () => {
  test('gstack skill preamble fires the 3-option AskUserQuestion when gbrain is detected', async () => {
    // Stage a fresh GSTACK_HOME with gbrain_sync_mode_prompted=false.
    const gstackHome = fs.mkdtempSync(path.join(os.tmpdir(), 'privacy-gate-gstack-'));
    const fakeBinDir = fs.mkdtempSync(path.join(os.tmpdir(), 'privacy-gate-bin-'));

    // Seed the config so the gate's condition passes.
    fs.writeFileSync(
      path.join(gstackHome, 'config.yaml'),
      'gbrain_sync_mode: off\ngbrain_sync_mode_prompted: false\n',
      { mode: 0o600 }
    );

    // Fake `gbrain` binary that makes the host-detection probe succeed.
    // The preamble checks `gbrain doctor --fast --json` OR `which gbrain`.
    // Either branch counts as "gbrain detected."
    fs.writeFileSync(
      path.join(fakeBinDir, 'gbrain'),
      '#!/bin/bash\n' +
        'case "$1" in\n' +
        '  doctor) echo \'{"status":"ok","schema_version":2}\' ; exit 0 ;;\n' +
        '  --version) echo "0.18.2" ; exit 0 ;;\n' +
        '  *) exit 0 ;;\n' +
        'esac\n',
      { mode: 0o755 }
    );

    const askUserQuestions: Array<{ input: Record<string, unknown> }> = [];
    const binary = resolveClaudeBinary();

    // Ambient env mutations — restored in finally so other tests in the file
    // don't inherit them.
    const origGstackHome = process.env.GSTACK_HOME;
    const origPath = process.env.PATH;
    process.env.GSTACK_HOME = gstackHome;
    process.env.PATH = `${fakeBinDir}:${process.env.PATH ?? '/usr/bin:/bin:/opt/homebrew/bin'}`;

    try {
      // Pick a small skill with the preamble and load it via Read to force
      // the model to execute every preamble directive. A narrow "run /learn"
      // prompt often gets reduced to a direct action, skipping the preamble
      // gates. Mirror the plan-mode-no-op test pattern: ask the model to
      // follow the skill's instructions in full.
      const learnSkill = path.resolve(
        import.meta.dir,
        '..',
        'learn',
        'SKILL.md'
      );
      await runAgentSdkTest({
        systemPrompt: { type: 'preset', preset: 'claude_code' },
        userPrompt:
          `Read the skill file at ${learnSkill} and follow its instructions from the top, including every preamble directive. Execute every bash block. If any AskUserQuestion fires, present it.`,
        workingDirectory: gstackHome,
        maxTurns: 10,
        allowedTools: ['Read', 'Grep', 'Glob', 'Bash'],
        // NOTE: do NOT pass `env:` here. When the Agent SDK gets an explicit
        // env object, its auth pipeline doesn't pick up ANTHROPIC_API_KEY the
        // same way as when env is undefined (SDK-internal detail, verified
        // against the plan-mode-no-op test which passes no env and auths
        // cleanly). Instead, mutate process.env before the call so the SDK
        // inherits our overrides ambiently.
        ...(binary ? { pathToClaudeCodeExecutable: binary } : {}),
        canUseTool: async (toolName, input) => {
          if (toolName === 'AskUserQuestion') {
            askUserQuestions.push({ input });
            // Auto-answer "Decline — keep everything local" (option C)
            // so the skill can continue without actually turning on sync.
            const q = (input.questions as Array<{
              question: string;
              options: Array<{ label: string }>;
            }>)[0];
            const decline =
              q.options.find((o) => /decline|keep everything local|no thanks/i.test(o.label)) ??
              q.options[q.options.length - 1]!;
            return {
              behavior: 'allow',
              updatedInput: {
                questions: input.questions,
                answers: { [q.question]: decline.label },
              },
            };
          }
          return passThroughNonAskUserQuestion(toolName, input);
        },
      });

      // Assertion 1: the privacy gate fired.
      const privacyQuestions = askUserQuestions.filter((aq) => {
        const qs = aq.input.questions as Array<{ question: string }>;
        return qs.some(
          (q) =>
            /publish.*session memory|private github repo|gbrain indexes/i.test(q.question)
        );
      });
      expect(privacyQuestions.length).toBeGreaterThanOrEqual(1);

      // Assertion 2: the question has the three expected options.
      const gate = privacyQuestions[0]!.input.questions as Array<{
        question: string;
        options: Array<{ label: string }>;
      }>;
      const labels = gate[0]!.options.map((o) => o.label.toLowerCase()).join(' | ');
      // Full / artifacts-only / decline are the three canonical options.
      expect(labels).toMatch(/everything|allowlisted|full/);
      expect(labels).toMatch(/artifact/);
      expect(labels).toMatch(/decline|local|no thanks/);

      // Assertion 3: the gate should NOT fire twice in one run.
      // (The preamble is supposed to be idempotent within a session.)
      expect(privacyQuestions.length).toBe(1);
    } finally {
      // Restore ambient env before other tests.
      if (origGstackHome === undefined) delete process.env.GSTACK_HOME;
      else process.env.GSTACK_HOME = origGstackHome;
      if (origPath === undefined) delete process.env.PATH;
      else process.env.PATH = origPath;
      fs.rmSync(gstackHome, { recursive: true, force: true });
      fs.rmSync(fakeBinDir, { recursive: true, force: true });
    }
  }, 180_000);

  test('privacy gate does NOT fire when gbrain_sync_mode_prompted is already true', async () => {
    // Same staging, but prompted=true this time. Gate should be silent.
    const gstackHome = fs.mkdtempSync(path.join(os.tmpdir(), 'privacy-gate-off-'));
    const fakeBinDir = fs.mkdtempSync(path.join(os.tmpdir(), 'privacy-gate-off-bin-'));

    fs.writeFileSync(
      path.join(gstackHome, 'config.yaml'),
      'gbrain_sync_mode: off\ngbrain_sync_mode_prompted: true\n',
      { mode: 0o600 }
    );

    fs.writeFileSync(
      path.join(fakeBinDir, 'gbrain'),
      '#!/bin/bash\necho \'{"status":"ok"}\'\nexit 0\n',
      { mode: 0o755 }
    );

    const askUserQuestions: Array<{ input: Record<string, unknown> }> = [];
    const binary = resolveClaudeBinary();

    // Ambient env mutations (see note on the first test).
    const origGstackHome = process.env.GSTACK_HOME;
    const origPath = process.env.PATH;
    process.env.GSTACK_HOME = gstackHome;
    process.env.PATH = `${fakeBinDir}:${process.env.PATH ?? '/usr/bin:/bin:/opt/homebrew/bin'}`;

    try {
      await runAgentSdkTest({
        systemPrompt: { type: 'preset', preset: 'claude_code' },
        userPrompt:
          'Run /learn with no arguments. Just report the learnings count.',
        workingDirectory: gstackHome,
        maxTurns: 4,
        allowedTools: ['Read', 'Grep', 'Glob', 'Bash'],
        ...(binary ? { pathToClaudeCodeExecutable: binary } : {}),
        canUseTool: async (toolName, input) => {
          if (toolName === 'AskUserQuestion') {
            askUserQuestions.push({ input });
            // Pass through whatever the model asks; don't prefer anything.
            const q = (input.questions as Array<{
              question: string;
              options: Array<{ label: string }>;
            }>)[0];
            return {
              behavior: 'allow',
              updatedInput: {
                questions: input.questions,
                answers: { [q.question]: q.options[0]!.label },
              },
            };
          }
          return passThroughNonAskUserQuestion(toolName, input);
        },
      });

      // No AskUserQuestion should have matched the privacy gate's prose.
      const privacyQuestions = askUserQuestions.filter((aq) => {
        const qs = aq.input.questions as Array<{ question: string }>;
        return qs.some(
          (q) =>
            /publish.*session memory|private github repo|gbrain indexes/i.test(q.question)
        );
      });
      expect(privacyQuestions.length).toBe(0);
    } finally {
      if (origGstackHome === undefined) delete process.env.GSTACK_HOME;
      else process.env.GSTACK_HOME = origGstackHome;
      if (origPath === undefined) delete process.env.PATH;
      else process.env.PATH = origPath;
      fs.rmSync(gstackHome, { recursive: true, force: true });
      fs.rmSync(fakeBinDir, { recursive: true, force: true });
    }
  }, 180_000);
});
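The positive controls above pin down three `matchType` values the harness must report: `exact`, `base64` (no-trailing-newline encoding, as the `printf | base64` test exercises), and `prefix-12`. A minimal sketch of what that matching could look like for one captured channel; the function name `scanForSeed`, the hit shape, and the 16-char minimum-length guard on prefix matching are illustrative assumptions, not the real `secret-sink-harness` API:

```typescript
type MatchType = 'exact' | 'base64' | 'prefix-12';

interface LeakHit {
  matchType: MatchType;
  index: number; // offset of the match in the captured text
}

// Hypothetical scanner: check one captured channel (stdout, stderr, a file
// body, a telemetry line) for a seed in its raw, base64, and prefix forms.
function scanForSeed(captured: string, seed: string): LeakHit[] {
  const hits: LeakHit[] = [];
  const exact = captured.indexOf(seed);
  if (exact !== -1) hits.push({ matchType: 'exact', index: exact });
  // Encode the raw seed with no trailing newline, matching how auth
  // headers (and `printf "%s" | base64`) encode secrets.
  const b64 = Buffer.from(seed, 'utf8').toString('base64');
  const b64At = captured.indexOf(b64);
  if (b64At !== -1) hits.push({ matchType: 'base64', index: b64At });
  // Partial-leak pattern: report the 12-char prefix only when the full
  // value is absent, and only for seeds long enough to be distinctive.
  if (exact === -1 && seed.length >= 16) {
    const prefixAt = captured.indexOf(seed.slice(0, 12));
    if (prefixAt !== -1) hits.push({ matchType: 'prefix-12', index: prefixAt });
  }
  return hits;
}
```

The prefix check is deliberately suppressed when an exact hit exists, so one leaked value surfaces as one hit per form rather than a double report.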