mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-05 21:25:27 +02:00
bf65487162
* feat: lib/gstack-memory-helpers shared module for V1 memory ingest pipeline Lane 0 foundation per plan §"Eng review additions". 5 public functions imported by the V1 helpers (Lanes A/B/C): canonicalizeRemote(url) — normalize git remote → host/org/repo secretScanFile(path) — gitleaks wrapper with discriminated return detectEngineTier() — cached 60s in ~/.gstack/.gbrain-engine-cache.json parseSkillManifest(path) — extract gbrain.context_queries: from frontmatter withErrorContext(op,fn,caller) — async-aware error logging 22 unit tests, all passing. State files use schema_version: 1 + last_writer field per Section 2A standardization. Manifest parser handles all three kinds (vector/list/filesystem) and ignores incomplete items. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> * feat: bin/gstack-memory-ingest — V1 unified memory ingest helper Lane A. Walks coding-agent transcripts (Claude Code + Codex; Cursor V1.0.1 follow-up) AND ~/.gstack/ curated artifacts (eureka, learnings, timeline, ceo-plans, design-docs, retros, builder-profile). Calls gbrain put_page with type-tagged frontmatter. Uses gstack-memory-helpers (Lane 0): - Modes: --probe / --incremental (default, mtime fast-path) / --bulk - Default 90-day window; --all-history opts into full archive - --sources subset filter; --include-unattributed opt-in for no-remote sessions - --limit N for smoke testing; --benchmark for throughput reporting - Tolerant JSONL parser handles truncated last lines (D10 partial-flag) - State file at ~/.gstack/.transcript-ingest-state.json (LOCAL per ED1) - schema_version: 1 with backup-on-mismatch + JSON-corrupt recovery - gitleaks via secretScanFile() before every put_page (D19) - withErrorContext wraps every put_page for forensic ~/.gstack/.gbrain-errors.jsonl 15 unit tests cover --help, --probe (empty, Claude Code, Codex, mixed artifacts), --sources filter, state file lifecycle (create, schema mismatch backup, JSON corrupt backup), truncated-last-line handling, --limit validation. All passing. V1.5 P0 follow-ups noted in the file header: - Cursor SQLite extraction (V1.0.1) - gbrain put_file routing for Supabase Storage tier (cross-repo) Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> * feat: bin/gstack-gbrain-sync — V1 unified sync verb (Lane B) Orchestrates three storage tiers per plan §"Storage tiering": 1. Code (current repo) → gbrain import (Supabase or local PGLite) 2. Transcripts + curated memory → gstack-memory-ingest (typed put_page) 3. Curated artifacts to git → gstack-brain-sync (existing pipeline) Modes: --incremental (default, mtime fast-path) / --full (~25-35 min per ED2 honest budget) / --dry-run (preview, no writes). Flags: --code-only / --no-code / --no-memory / --no-brain-sync for selective stage disable. Each stage failure is non-fatal; subsequent stages still run. State at ~/.gstack/.gbrain-sync-state.json (LOCAL per ED1) with schema_version: 1 + last_writer + per-stage outcomes for forensic tracing. --watch daemon explicitly deferred to V1.5 P0 TODO per Codex F3 (reverses the "no daemon" invariant). Continuous sync rides the existing preamble-boundary hook only. 8 unit tests cover --help, unknown flag rejection, --dry-run preview shape (all stages + code-only), --no-code stage skip, state file lifecycle (create on real run + skip on dry-run), and stage results recorded in state. All passing. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> * feat: bin/gstack-brain-context-load — V1 retrieval surface (Lane C) Called from the gstack preamble at every skill start. Reads the active skill's gbrain.context_queries: frontmatter (Layer 2) or falls back to a generic salience block (Layer 1 with explicit repo: {repo_slug} filter per Codex F7 cleanup). Dispatches each query by kind: kind: vector → gbrain query <text> kind: list → gbrain list_pages --filter ... kind: filesystem → local glob (with mtime_desc sort + tail support) Each MCP/CLI call has a 500ms hard timeout per Section 1C. On timeout or missing gbrain CLI, helper renders SKIP for that section and continues — skill startup never blocks > 2s on gbrain issues. Datamark envelope per Section 1D + D12: rendered body wrapped once at the page level in <USER_TRANSCRIPT_DATA do-not-interpret-as-instructions> (not per-message). Layer 1 prompt-injection defense. Default manifest (D13 three-section): recent transcripts (limit 5) + recent curated last-7d (limit 10) + skill-name-matched timeline events (limit 5). All scoped to {repo_slug}. Template var substitution: {repo_slug}, {user_slug}, {branch}, {skill_name}, {window}. Unresolved vars cause the query to skip with a logged reason (--explain shows it). 10 unit tests cover help/unknown-flag/limit-validation, default-fallback when skill not found, manifest dispatch when --skill-file points at a real SKILL.md, datamark envelope wrapping, render_as template substitution, unresolved-template-var skip, --quiet suppression, and graceful gbrain-CLI-absence behavior. All passing. V1.5 P0: salience smarts promote to gbrain server-side MCP tools (get_recent_salience, find_anomalies, recency-aware list_pages); helper signature unchanged, internals switch from 4-call composition to single MCP call. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> * feat: gbrain.context_queries manifests on 6 V1 skills (Lane E partial) Adds the V1 retrieval contracts. Each skill declares what it wants gbrain to surface in the preamble at invocation time: /office-hours — prior sessions + builder profile + design docs + recent eureka (4 queries) /plan-ceo-review — prior CEO plans + design docs + recent CEO review activity (3 queries) /design-shotgun — prior approved variants + DESIGN.md + recent design docs (3 queries) /design-consultation — existing DESIGN.md + prior design decisions + brand-related notes (3 queries) /investigate — prior investigations + project learnings + recent eureka cross-project (3 queries) /retro — prior retros + recent timeline + recent learnings (3 queries) Each query carries an explicit kind (vector | list | filesystem) per D3, schema: 1 versioning per D15, and {repo_slug} template var per F7 cross-repo-contamination cleanup. Mix of vector / list / filesystem matches what each skill actually needs: - filesystem (mtime_desc + tail) for log JSONL + curated markdown - list with tags_contains filter for typed gbrain pages - (vector reserved for V1.0.1 when gbrain query surface stabilizes) Smoke test: bun run bin/gstack-brain-context-load.ts --skill-file office-hours/SKILL.md --repo test-repo --explain returns mode=manifest queries=4 with the filesystem kinds populating real data from ~/.gstack/builder-profile.jsonl + ~/.gstack/analytics/eureka.jsonl on this Mac. End-to-end retrieval flow confirmed. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> * feat: setup-gbrain Step 7.5 ingest gate + Step 10 verdict + memory.md ref doc (Lane E partial) Step 7.5: Transcript & memory ingest gate. After Step 7 wires brain-sync but before Step 8's CLAUDE.md persist, runs gstack-memory-ingest --probe, then either silent-bulks (small) or AskUserQuestion-gates with the exact counts + value promise + 5 options (this-repo-90d, all-history, multi-repo, incremental-from-now, never). Decision persists to gstack-config set transcript_ingest_mode <choice>. Step 10: GREEN/YELLOW/RED verdict block. Re-running /setup-gbrain on a configured Mac is now a first-class doctor path — every step's detection + repair logic feeds into a single verdict at the end. Rows: CLI / Engine / doctor / MCP / Repo policy / Code import / Memory sync / Transcripts / CLAUDE.md / Smoke. Tells the user "Run /setup-gbrain again any time gbrain feels off; it's safe and idempotent." setup-gbrain/memory.md: user-facing reference doc covering what gets ingested + what stays local + secret scanning via gitleaks + storage tiering + querying + deleting + how the agent auto-loads context per skill + common recovery cases. Linked from Step 8's CLAUDE.md persist. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> * test: V1 E2E pipeline + --no-write flag for ingest helper (Lane F) E2E pipeline test exercises the full Lane A → B → C value loop: 1. Set up fake $HOME with all 8 memory source types as fixtures 2. gstack-memory-ingest --probe verifies counts match disk 3. gstack-memory-ingest --incremental writes state with schema_version: 1 4. Idempotency: re-run reports 0 changes 5. --probe distinguishes new vs unchanged after first incremental 6. gstack-gbrain-sync --dry-run previews 3 stages 7. --no-code --no-brain-sync --quiet writes sync state with 1 stage entry 8. office-hours/SKILL.md V1 manifest dispatches 4 queries (mode=manifest) 9. Datamark envelope wraps every loaded section (Section 1D + D12) 10. Layer 1 fallback when no skill specified — default 3-section manifest 11. plan-ceo-review/SKILL.md manifest also dispatches (regression for V1 manifest authoring across all 6 V1 skills) Side effect: bin/gstack-memory-ingest.ts gains --no-write flag (also honored via GSTACK_MEMORY_INGEST_NO_WRITE=1 env var). Skips gbrain put_page calls while still updating the state file. Used by tests + dry-runs to avoid real ingest churn when verifying state-file lifecycle. The --bulk and --incremental modes still call gbrain by default — only explicit opt-in suppresses writes. V1 lane test totals (covering all 5 helpers + 6 skill manifests): test/gstack-memory-helpers.test.ts 22 tests test/gstack-memory-ingest.test.ts 15 tests test/gstack-gbrain-sync.test.ts 8 tests test/gstack-brain-context-load.test.ts 10 tests test/skill-e2e-memory-pipeline.test.ts 10 tests ────────────────────────────────────── ───────── TOTAL 65 passing Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> * chore: bump version and changelog (v1.26.0.0) V1 of memory ingest + retrieval surface. Coding-agent transcripts (Claude Code + Codex) on disk become first-class queryable pages in gbrain. Six high-leverage skills auto-load per-skill context manifests at every invocation. Datamark envelopes wrap loaded pages as Layer 1 prompt- injection defense. Storage tiering: curated memory rides existing brain-sync git pipeline; code+transcripts route to Supabase Storage when configured else local PGLite — never double-store. Net branch size vs main: +4174/-849 across 39 files. 65 V1 tests, all green. Goldilocks scope per CEO D18; V1.5 P0 follow-ups documented in the plan's V1.5 TODOs section. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
412 lines
14 KiB
TypeScript
412 lines
14 KiB
TypeScript
/**
|
|
* gstack-memory-helpers — shared helpers for the V1 memory ingest + retrieval pipeline.
|
|
*
|
|
* Imported by:
|
|
* - bin/gstack-memory-ingest.ts (Lane A)
|
|
* - bin/gstack-gbrain-sync.ts (Lane B)
|
|
* - bin/gstack-brain-context-load.ts (Lane C)
|
|
* - scripts/gen-skill-docs.ts (manifest validation)
|
|
*
|
|
* Design refs in the plan:
|
|
* §"Eng review additions" — DRY refactor (Section 1A)
|
|
* §"V1 final scope clarification" — schema_version: 1 standardization (Section 2A)
|
|
* ED1 — engine-tier cache lives in ~/.gstack/.gbrain-engine-cache.json (60s TTL)
|
|
*
|
|
* NOTE: secretScanFile() currently shells out to `gitleaks` from PATH; the vendored
|
|
* binary install is part of Lane E (setup-gbrain). When gitleaks is missing, the
|
|
* helper warns once and returns an empty findings list — fail-safe defaults.
|
|
*/
|
|
|
|
import { existsSync, readFileSync, writeFileSync, mkdirSync, statSync, appendFileSync } from "fs";
|
|
import { dirname, join } from "path";
|
|
import { execSync, execFileSync } from "child_process";
|
|
import { homedir } from "os";
|
|
|
|
// ── Types ──────────────────────────────────────────────────────────────────
|
|
|
|
export interface SecretFinding {
|
|
rule_id: string;
|
|
description: string;
|
|
line: number;
|
|
redacted_match: string;
|
|
}
|
|
|
|
export interface SecretScanResult {
|
|
scanned: boolean;
|
|
findings: SecretFinding[];
|
|
scanner: "gitleaks" | "missing" | "error";
|
|
}
|
|
|
|
export type EngineTier = "pglite" | "supabase" | "unknown";
|
|
|
|
export interface EngineDetect {
|
|
engine: EngineTier;
|
|
supabase_url?: string;
|
|
detected_at: number;
|
|
schema_version: 1;
|
|
}
|
|
|
|
export interface GbrainManifestQuery {
|
|
id: string;
|
|
kind: "vector" | "list" | "filesystem";
|
|
render_as: string;
|
|
// kind=vector
|
|
query?: string;
|
|
// kind=list
|
|
filter?: Record<string, unknown>;
|
|
sort?: string;
|
|
// kind=filesystem
|
|
glob?: string;
|
|
tail?: number;
|
|
// common
|
|
limit?: number;
|
|
}
|
|
|
|
export interface GbrainManifest {
|
|
schema: number; // gbrain.schema in frontmatter; V1 = 1
|
|
context_queries: GbrainManifestQuery[];
|
|
}
|
|
|
|
export interface ErrorContextEntry {
|
|
ts: string;
|
|
op: string;
|
|
duration_ms: number;
|
|
outcome: "ok" | "error";
|
|
error?: string;
|
|
schema_version: 1;
|
|
last_writer: string;
|
|
}
|
|
|
|
// ── Public: canonicalizeRemote ────────────────────────────────────────────
|
|
|
|
/**
|
|
* Normalize a git remote URL to a canonical form: `host/org/repo` (no scheme,
|
|
* no trailing `.git`). Used as the dedup key for cross-Mac transcript routing
|
|
* (per ED1 — gbrain-side session_id dedup uses repo as a tag).
|
|
*
|
|
* Examples:
|
|
* https://github.com/garrytan/gstack.git → github.com/garrytan/gstack
|
|
* git@github.com:garrytan/gstack.git → github.com/garrytan/gstack
|
|
* ssh://git@gitlab.com/foo/bar → gitlab.com/foo/bar
|
|
* (empty / null) → ""
|
|
*/
|
|
export function canonicalizeRemote(url: string | null | undefined): string {
|
|
if (!url) return "";
|
|
let s = url.trim();
|
|
if (!s) return "";
|
|
// strip surrounding quotes that some configs add
|
|
s = s.replace(/^['"]|['"]$/g, "");
|
|
// git@host:path/repo → host/path/repo
|
|
const scpMatch = s.match(/^[^@\s]+@([^:]+):(.+)$/);
|
|
if (scpMatch) {
|
|
s = `${scpMatch[1]}/${scpMatch[2]}`;
|
|
} else {
|
|
// strip scheme (https://, ssh://, git://, http://)
|
|
s = s.replace(/^[a-z][a-z0-9+.-]*:\/\//i, "");
|
|
// strip user@ prefix on URL-style remotes
|
|
s = s.replace(/^[^@\/]+@/, "");
|
|
}
|
|
// strip trailing .git
|
|
s = s.replace(/\.git$/i, "");
|
|
// strip trailing slash
|
|
s = s.replace(/\/+$/, "");
|
|
// collapse multiple slashes (after path normalization)
|
|
s = s.replace(/\/{2,}/g, "/");
|
|
return s.toLowerCase();
|
|
}
|
|
|
|
// ── Public: secretScanFile (gitleaks wrapper) ─────────────────────────────
|
|
|
|
let _gitleaksAvailability: boolean | null = null;
|
|
|
|
function gitleaksAvailable(): boolean {
|
|
if (_gitleaksAvailability !== null) return _gitleaksAvailability;
|
|
try {
|
|
execSync("command -v gitleaks", { stdio: "ignore" });
|
|
_gitleaksAvailability = true;
|
|
} catch {
|
|
_gitleaksAvailability = false;
|
|
// Only warn once per process — Lane E will vendor the binary.
|
|
process.stderr.write(
|
|
"[gstack-memory-helpers] gitleaks not in PATH; secret scanning disabled. " +
|
|
"Run /setup-gbrain to install (or `brew install gitleaks`).\n"
|
|
);
|
|
}
|
|
return _gitleaksAvailability;
|
|
}
|
|
|
|
/**
|
|
* Scan a file for embedded secrets using gitleaks. Returns findings list
|
|
* (empty if clean). When gitleaks is not in PATH, returns scanned=false with
|
|
* scanner="missing" — caller decides whether to skip the file or proceed.
|
|
*
|
|
* Per D19: gitleaks runs at ingest time before any put_page / put_file write.
|
|
* Replaces the inadequate regex scanner in bin/gstack-brain-sync (which only
|
|
* applies to staged git diffs).
|
|
*/
|
|
export function secretScanFile(path: string): SecretScanResult {
|
|
if (!existsSync(path)) {
|
|
return { scanned: false, findings: [], scanner: "error" };
|
|
}
|
|
if (!gitleaksAvailable()) {
|
|
return { scanned: false, findings: [], scanner: "missing" };
|
|
}
|
|
try {
|
|
// gitleaks detect --no-git --source <path> --report-format json --report-path -
|
|
// Returns 0 on clean, 1 on findings, 126/127 on bad invocation.
|
|
const out = execFileSync(
|
|
"gitleaks",
|
|
["detect", "--no-git", "--source", path, "--report-format", "json", "--report-path", "/dev/stdout", "--exit-code", "0"],
|
|
{ encoding: "utf-8", maxBuffer: 16 * 1024 * 1024 }
|
|
);
|
|
const trimmed = out.trim();
|
|
if (!trimmed) return { scanned: true, findings: [], scanner: "gitleaks" };
|
|
const parsed = JSON.parse(trimmed) as Array<{
|
|
RuleID: string;
|
|
Description: string;
|
|
StartLine: number;
|
|
Match?: string;
|
|
Secret?: string;
|
|
}>;
|
|
const findings: SecretFinding[] = (parsed || []).map((f) => ({
|
|
rule_id: f.RuleID || "unknown",
|
|
description: f.Description || "",
|
|
line: f.StartLine || 0,
|
|
redacted_match: redactMatch(f.Secret || f.Match || ""),
|
|
}));
|
|
return { scanned: true, findings, scanner: "gitleaks" };
|
|
} catch (err) {
|
|
return {
|
|
scanned: false,
|
|
findings: [],
|
|
scanner: "error",
|
|
};
|
|
}
|
|
}
|
|
|
|
function redactMatch(s: string): string {
|
|
if (!s) return "";
|
|
if (s.length <= 8) return "[REDACTED]";
|
|
return `${s.slice(0, 4)}...${s.slice(-4)}`;
|
|
}
|
|
|
|
// ── Public: detectEngineTier (cached) ─────────────────────────────────────
|
|
|
|
const ENGINE_CACHE_TTL_MS = 60 * 1000;
|
|
|
|
function gstackHome(): string {
|
|
return process.env.GSTACK_HOME || join(homedir(), ".gstack");
|
|
}
|
|
|
|
function engineCachePath(): string {
|
|
return join(gstackHome(), ".gbrain-engine-cache.json");
|
|
}
|
|
|
|
function errorLogPath(): string {
|
|
return join(gstackHome(), ".gbrain-errors.jsonl");
|
|
}
|
|
|
|
/**
|
|
* Detect which gbrain engine is active (PGLite vs Supabase) and cache the
|
|
* answer for 60s in ~/.gstack/.gbrain-engine-cache.json. Caching avoids
|
|
* fork+exec'ing `gbrain doctor --json` on every skill start.
|
|
*
|
|
* Per ED1 (state files local-only): this cache is gitignored from the brain
|
|
* repo. Per Section 2A: schema_version: 1 + last_writer field for forensic
|
|
* tracing.
|
|
*/
|
|
export function detectEngineTier(): EngineDetect {
|
|
// Try cache first
|
|
if (existsSync(engineCachePath())) {
|
|
try {
|
|
const stat = statSync(engineCachePath());
|
|
const ageMs = Date.now() - stat.mtimeMs;
|
|
if (ageMs < ENGINE_CACHE_TTL_MS) {
|
|
const cached = JSON.parse(readFileSync(engineCachePath(), "utf-8")) as EngineDetect;
|
|
if (cached.schema_version === 1) return cached;
|
|
}
|
|
} catch {
|
|
// Cache corrupt; fall through to fresh detect.
|
|
}
|
|
}
|
|
|
|
const fresh = freshDetectEngineTier();
|
|
try {
|
|
mkdirSync(dirname(engineCachePath()), { recursive: true });
|
|
writeFileSync(
|
|
engineCachePath(),
|
|
JSON.stringify({ ...fresh, last_writer: "gstack-memory-helpers.detectEngineTier" }, null, 2),
|
|
"utf-8"
|
|
);
|
|
} catch {
|
|
// Cache write failure is non-fatal.
|
|
}
|
|
return fresh;
|
|
}
|
|
|
|
function freshDetectEngineTier(): EngineDetect {
|
|
const now = Date.now();
|
|
try {
|
|
const out = execSync("gbrain doctor --json --fast 2>/dev/null", { encoding: "utf-8", timeout: 5000 });
|
|
const parsed = JSON.parse(out);
|
|
const engine: EngineTier = parsed?.engine === "supabase" ? "supabase" : parsed?.engine === "pglite" ? "pglite" : "unknown";
|
|
return {
|
|
engine,
|
|
supabase_url: parsed?.supabase_url || undefined,
|
|
detected_at: now,
|
|
schema_version: 1,
|
|
};
|
|
} catch {
|
|
return { engine: "unknown", detected_at: now, schema_version: 1 };
|
|
}
|
|
}
|
|
|
|
// ── Public: parseSkillManifest ────────────────────────────────────────────
|
|
|
|
/**
|
|
* Parse the `gbrain:` section out of a SKILL.md.tmpl frontmatter block.
|
|
* Returns null if no manifest is declared OR if the file has no frontmatter.
|
|
*
|
|
* Schema validation (full kind/required-fields check) lives in
|
|
* scripts/gen-skill-docs.ts and runs at generation time. This parser is the
|
|
* runtime read path used by gstack-brain-context-load; it tolerates extra
|
|
* fields and relies on validation having already happened upstream.
|
|
*/
|
|
export function parseSkillManifest(skillFilePath: string): GbrainManifest | null {
|
|
if (!existsSync(skillFilePath)) return null;
|
|
const content = readFileSync(skillFilePath, "utf-8");
|
|
const frontmatter = extractFrontmatter(content);
|
|
if (!frontmatter) return null;
|
|
const gbrain = extractGbrainBlock(frontmatter);
|
|
if (!gbrain) return null;
|
|
return gbrain;
|
|
}
|
|
|
|
function extractFrontmatter(content: string): string | null {
|
|
// Supports both `---\n...\n---` (YAML) and `+++\n...\n+++` (TOML, rare).
|
|
const yamlMatch = content.match(/^---\s*\n([\s\S]*?)\n---\s*\n/);
|
|
if (yamlMatch) return yamlMatch[1];
|
|
return null;
|
|
}
|
|
|
|
function extractGbrainBlock(frontmatter: string): GbrainManifest | null {
|
|
// Naive YAML extraction — finds the `gbrain:` key and parses its sub-tree.
|
|
// Real YAML parsing avoided to keep zero-deps; gen-skill-docs validates the
|
|
// shape strictly at build time.
|
|
const lines = frontmatter.split("\n");
|
|
const start = lines.findIndex((l) => /^gbrain\s*:/.test(l));
|
|
if (start === -1) return null;
|
|
|
|
// Collect indented lines under `gbrain:` until next top-level key or EOF
|
|
const block: string[] = [];
|
|
for (let i = start + 1; i < lines.length; i++) {
|
|
const line = lines[i];
|
|
if (/^[A-Za-z_][A-Za-z0-9_-]*\s*:/.test(line)) break; // next top-level key
|
|
block.push(line);
|
|
}
|
|
|
|
const text = block.join("\n");
|
|
// Extract schema number
|
|
const schemaMatch = text.match(/\n\s*schema\s*:\s*(\d+)/);
|
|
const schema = schemaMatch ? parseInt(schemaMatch[1], 10) : 1;
|
|
|
|
// Extract context_queries items
|
|
const queries: GbrainManifestQuery[] = [];
|
|
const cqMatch = text.match(/\n\s*context_queries\s*:\s*\n([\s\S]+)/);
|
|
if (cqMatch) {
|
|
const cqText = cqMatch[1];
|
|
// Split using a positive lookahead so each chunk begins with the list-item dash.
|
|
// Pattern: line starting with 4-6 spaces + "-" + whitespace.
|
|
const rawItems = cqText.split(/(?=^[ ]{4,6}-\s)/m);
|
|
const items = rawItems.filter((s) => /^[ ]{4,6}-\s/.test(s));
|
|
for (const item of items) {
|
|
const q: Partial<GbrainManifestQuery> = {};
|
|
// Strip the leading list-item marker so id/kind/etc. regexes can use line-start.
|
|
const body = item.replace(/^[ ]{4,6}-\s+/, " ");
|
|
const idM = body.match(/(?:^|\n)\s*id\s*:\s*([^\n]+)/);
|
|
const kindM = body.match(/(?:^|\n)\s*kind\s*:\s*([^\n]+)/);
|
|
const renderM = body.match(/(?:^|\n)\s*render_as\s*:\s*"?([^"\n]+?)"?\s*$/m);
|
|
const queryM = body.match(/(?:^|\n)\s*query\s*:\s*"?([^"\n]+?)"?\s*$/m);
|
|
const limitM = body.match(/(?:^|\n)\s*limit\s*:\s*(\d+)/);
|
|
const globM = body.match(/(?:^|\n)\s*glob\s*:\s*"?([^"\n]+?)"?\s*$/m);
|
|
const sortM = body.match(/(?:^|\n)\s*sort\s*:\s*([^\n]+)/);
|
|
const tailM = body.match(/(?:^|\n)\s*tail\s*:\s*(\d+)/);
|
|
|
|
if (idM) q.id = idM[1].trim();
|
|
if (kindM) {
|
|
const k = kindM[1].trim();
|
|
if (k === "vector" || k === "list" || k === "filesystem") q.kind = k;
|
|
}
|
|
if (renderM) q.render_as = renderM[1].trim();
|
|
if (queryM) q.query = queryM[1].trim();
|
|
if (limitM) q.limit = parseInt(limitM[1], 10);
|
|
if (globM) q.glob = globM[1].trim();
|
|
if (sortM) q.sort = sortM[1].trim();
|
|
if (tailM) q.tail = parseInt(tailM[1], 10);
|
|
|
|
if (q.id && q.kind && q.render_as) {
|
|
queries.push(q as GbrainManifestQuery);
|
|
}
|
|
}
|
|
}
|
|
|
|
return { schema, context_queries: queries };
|
|
}
|
|
|
|
// ── Public: withErrorContext ──────────────────────────────────────────────
|
|
|
|
const ERROR_LOG_PATH = join(gstackHome(), ".gbrain-errors.jsonl");
|
|
|
|
/**
|
|
* Wrap an op with structured error logging. Logs success/failure + duration
|
|
* to ~/.gstack/.gbrain-errors.jsonl for forensic debugging. Replaces ad-hoc
|
|
* try/catch sites across the three Bun helpers (Section 2B).
|
|
*
|
|
* On error: the error is RE-THROWN after logging — caller still owns flow.
|
|
*/
|
|
export async function withErrorContext<T>(
|
|
op: string,
|
|
fn: () => T | Promise<T>,
|
|
caller: string = "unknown"
|
|
): Promise<T> {
|
|
const t0 = Date.now();
|
|
try {
|
|
const result = await fn();
|
|
logErrorContext({
|
|
ts: new Date().toISOString(),
|
|
op,
|
|
duration_ms: Date.now() - t0,
|
|
outcome: "ok",
|
|
schema_version: 1,
|
|
last_writer: caller,
|
|
});
|
|
return result;
|
|
} catch (err) {
|
|
logErrorContext({
|
|
ts: new Date().toISOString(),
|
|
op,
|
|
duration_ms: Date.now() - t0,
|
|
outcome: "error",
|
|
error: err instanceof Error ? err.message : String(err),
|
|
schema_version: 1,
|
|
last_writer: caller,
|
|
});
|
|
throw err;
|
|
}
|
|
}
|
|
|
|
function logErrorContext(entry: ErrorContextEntry): void {
|
|
try {
|
|
const path = errorLogPath();
|
|
mkdirSync(dirname(path), { recursive: true });
|
|
appendFileSync(path, JSON.stringify(entry) + "\n", "utf-8");
|
|
} catch {
|
|
// Logging failure is non-fatal — never block the op.
|
|
}
|
|
}
|
|
|
|
// Test-only export for resetting the gitleaks availability cache between tests.
|
|
export function _resetGitleaksAvailabilityCache(): void {
|
|
_gitleaksAvailability = null;
|
|
}
|