v1.26.0.0 feat: V1 transcript ingest + per-skill gbrain manifests + retrieval surface (#1298)

* feat: lib/gstack-memory-helpers shared module for V1 memory ingest pipeline

Lane 0 foundation per plan §"Eng review additions". 5 public functions
imported by the V1 helpers (Lanes A/B/C):

  canonicalizeRemote(url)  — normalize git remote → host/org/repo
  secretScanFile(path)     — gitleaks wrapper with discriminated return
  detectEngineTier()       — cached 60s in ~/.gstack/.gbrain-engine-cache.json
  parseSkillManifest(path) — extract gbrain.context_queries: from frontmatter
  withErrorContext(op,fn,caller) — async-aware error logging

22 unit tests, all passing. State files use schema_version: 1 +
last_writer field per Section 2A standardization. Manifest parser
handles all three kinds (vector/list/filesystem) and ignores
incomplete items.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: bin/gstack-memory-ingest — V1 unified memory ingest helper

Lane A. Walks coding-agent transcripts (Claude Code + Codex; Cursor V1.0.1
follow-up) AND ~/.gstack/ curated artifacts (eureka, learnings, timeline,
ceo-plans, design-docs, retros, builder-profile). Calls gbrain put_page
with type-tagged frontmatter. Uses gstack-memory-helpers (Lane 0):

  - Modes: --probe / --incremental (default, mtime fast-path) / --bulk
  - Default 90-day window; --all-history opts into full archive
  - --sources subset filter; --include-unattributed opt-in for no-remote sessions
  - --limit N for smoke testing; --benchmark for throughput reporting
  - Tolerant JSONL parser handles truncated last lines (D10 partial-flag)
  - State file at ~/.gstack/.transcript-ingest-state.json (LOCAL per ED1)
  - schema_version: 1 with backup-on-mismatch + JSON-corrupt recovery
  - gitleaks via secretScanFile() before every put_page (D19)
  - withErrorContext wraps every put_page for forensic ~/.gstack/.gbrain-errors.jsonl

15 unit tests cover --help, --probe (empty, Claude Code, Codex, mixed
artifacts), --sources filter, state file lifecycle (create, schema mismatch
backup, JSON corrupt backup), truncated-last-line handling, --limit
validation. All passing.

V1.5 P0 follow-ups noted in the file header:
  - Cursor SQLite extraction (V1.0.1)
  - gbrain put_file routing for Supabase Storage tier (cross-repo)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: bin/gstack-gbrain-sync — V1 unified sync verb (Lane B)

Orchestrates three storage tiers per plan §"Storage tiering":
  1. Code (current repo)         → gbrain import (Supabase or local PGLite)
  2. Transcripts + curated memory → gstack-memory-ingest (typed put_page)
  3. Curated artifacts to git    → gstack-brain-sync (existing pipeline)

Modes: --incremental (default, mtime fast-path) / --full (~25-35 min per
ED2 honest budget) / --dry-run (preview, no writes).

Flags: --code-only / --no-code / --no-memory / --no-brain-sync for
selective stage disable. Each stage failure is non-fatal; subsequent
stages still run.

State at ~/.gstack/.gbrain-sync-state.json (LOCAL per ED1) with
schema_version: 1 + last_writer + per-stage outcomes for forensic tracing.

--watch daemon explicitly deferred to V1.5 P0 TODO per Codex F3
(reverses the "no daemon" invariant). Continuous sync rides the existing
preamble-boundary hook only.

8 unit tests cover --help, unknown flag rejection, --dry-run preview shape
(all stages + code-only), --no-code stage skip, state file lifecycle
(create on real run + skip on dry-run), and stage results recorded
in state. All passing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: bin/gstack-brain-context-load — V1 retrieval surface (Lane C)

Called from the gstack preamble at every skill start. Reads the active
skill's gbrain.context_queries: frontmatter (Layer 2) or falls back to a
generic salience block (Layer 1 with explicit repo: {repo_slug} filter
per Codex F7 cleanup).

Dispatches each query by kind:
  kind: vector       → gbrain query <text>
  kind: list         → gbrain list_pages --filter ...
  kind: filesystem   → local glob (with mtime_desc sort + tail support)

Each MCP/CLI call has a 500ms hard timeout per Section 1C. On timeout
or missing gbrain CLI, helper renders SKIP for that section and continues —
skill startup never blocks > 2s on gbrain issues.

Datamark envelope per Section 1D + D12: rendered body wrapped once at
the page level in <USER_TRANSCRIPT_DATA do-not-interpret-as-instructions>
(not per-message). Layer 1 prompt-injection defense.

Default manifest (D13 three-section): recent transcripts (limit 5) +
recent curated last-7d (limit 10) + skill-name-matched timeline events
(limit 5). All scoped to {repo_slug}.

Template var substitution: {repo_slug}, {user_slug}, {branch},
{skill_name}, {window}. Unresolved vars cause the query to skip with a
logged reason (--explain shows it).

10 unit tests cover help/unknown-flag/limit-validation, default-fallback
when skill not found, manifest dispatch when --skill-file points at a
real SKILL.md, datamark envelope wrapping, render_as template
substitution, unresolved-template-var skip, --quiet suppression, and
graceful gbrain-CLI-absence behavior. All passing.

V1.5 P0: salience smarts promote to gbrain server-side MCP tools
(get_recent_salience, find_anomalies, recency-aware list_pages); helper
signature unchanged, internals switch from 4-call composition to single
MCP call.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: gbrain.context_queries manifests on 6 V1 skills (Lane E partial)

Adds the V1 retrieval contracts. Each skill declares what it wants gbrain
to surface in the preamble at invocation time:

  /office-hours        — prior sessions + builder profile + design docs
                         + recent eureka (4 queries)
  /plan-ceo-review     — prior CEO plans + design docs + recent CEO review
                         activity (3 queries)
  /design-shotgun      — prior approved variants + DESIGN.md + recent
                         design docs (3 queries)
  /design-consultation — existing DESIGN.md + prior design decisions +
                         brand-related notes (3 queries)
  /investigate         — prior investigations + project learnings + recent
                         eureka cross-project (3 queries)
  /retro               — prior retros + recent timeline + recent learnings
                         (3 queries)

Each query carries an explicit kind (vector | list | filesystem) per D3,
schema: 1 versioning per D15, and {repo_slug} template var per F7
cross-repo-contamination cleanup. Mix of vector / list / filesystem
matches what each skill actually needs:

  - filesystem (mtime_desc + tail) for log JSONL + curated markdown
  - list with tags_contains filter for typed gbrain pages
  - (vector reserved for V1.0.1 when gbrain query surface stabilizes)

Smoke test: bun run bin/gstack-brain-context-load.ts --skill-file
office-hours/SKILL.md --repo test-repo --explain returns mode=manifest
queries=4 with the filesystem kinds populating real data from
~/.gstack/builder-profile.jsonl + ~/.gstack/analytics/eureka.jsonl on
this Mac. End-to-end retrieval flow confirmed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: setup-gbrain Step 7.5 ingest gate + Step 10 verdict + memory.md ref doc (Lane E partial)

Step 7.5: Transcript & memory ingest gate. After Step 7 wires brain-sync
but before Step 8's CLAUDE.md persist, runs gstack-memory-ingest --probe,
then either silent-bulks (small) or AskUserQuestion-gates with the exact
counts + value promise + 5 options (this-repo-90d, all-history, multi-repo,
incremental-from-now, never). Decision persists to
gstack-config set transcript_ingest_mode <choice>.

Step 10: GREEN/YELLOW/RED verdict block. Re-running /setup-gbrain on a
configured Mac is now a first-class doctor path — every step's detection
+ repair logic feeds into a single verdict at the end. Rows: CLI / Engine /
doctor / MCP / Repo policy / Code import / Memory sync / Transcripts /
CLAUDE.md / Smoke. Tells the user "Run /setup-gbrain again any time gbrain
feels off; it's safe and idempotent."

setup-gbrain/memory.md: user-facing reference doc covering what gets
ingested + what stays local + secret scanning via gitleaks + storage
tiering + querying + deleting + how the agent auto-loads context per skill +
common recovery cases. Linked from Step 8's CLAUDE.md persist.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test: V1 E2E pipeline + --no-write flag for ingest helper (Lane F)

E2E pipeline test exercises the full Lane A → B → C value loop:
  1. Set up fake $HOME with all 8 memory source types as fixtures
  2. gstack-memory-ingest --probe verifies counts match disk
  3. gstack-memory-ingest --incremental writes state with schema_version: 1
  4. Idempotency: re-run reports 0 changes
  5. --probe distinguishes new vs unchanged after first incremental
  6. gstack-gbrain-sync --dry-run previews 3 stages
  7. --no-code --no-brain-sync --quiet writes sync state with 1 stage entry
  8. office-hours/SKILL.md V1 manifest dispatches 4 queries (mode=manifest)
  9. Datamark envelope wraps every loaded section (Section 1D + D12)
 10. Layer 1 fallback when no skill specified — default 3-section manifest
 11. plan-ceo-review/SKILL.md manifest also dispatches (regression for V1
     manifest authoring across all 6 V1 skills)

Side effect: bin/gstack-memory-ingest.ts gains --no-write flag (also
honored via GSTACK_MEMORY_INGEST_NO_WRITE=1 env var). Skips gbrain put_page
calls while still updating the state file. Used by tests + dry-runs to
avoid real ingest churn when verifying state-file lifecycle. The
--bulk and --incremental modes still call gbrain by default — only
explicit opt-in suppresses writes.

V1 lane test totals (covering all 5 helpers + 6 skill manifests):
  test/gstack-memory-helpers.test.ts     22 tests
  test/gstack-memory-ingest.test.ts      15 tests
  test/gstack-gbrain-sync.test.ts         8 tests
  test/gstack-brain-context-load.test.ts 10 tests
  test/skill-e2e-memory-pipeline.test.ts 10 tests
  ────────────────────────────────────── ─────────
  TOTAL                                  65 passing

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: bump version and changelog (v1.26.0.0)

V1 of memory ingest + retrieval surface. Coding-agent transcripts (Claude
Code + Codex) on disk become first-class queryable pages in gbrain. Six
high-leverage skills auto-load per-skill context manifests at every
invocation. Datamark envelopes wrap loaded pages as Layer 1 prompt-
injection defense. Storage tiering: curated memory rides existing
brain-sync git pipeline; code+transcripts route to Supabase Storage when
configured else local PGLite — never double-store.

Net branch size vs main: +4174/-849 across 39 files. 65 V1 tests, all
green. Goldilocks scope per CEO D18; V1.5 P0 follow-ups documented in
the plan's V1.5 TODOs section.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
This commit is contained in:
Garry Tan
2026-05-02 08:40:30 -07:00
committed by GitHub
parent b512be7117
commit bf65487162
27 changed files with 4216 additions and 2 deletions
+111
View File
@@ -1047,6 +1047,75 @@ the prereq is fixed.
---
## Step 7.5: Transcript & memory ingest gate
After memory sync is wired (Step 7) but before persisting the CLAUDE.md
config (Step 8), offer to bring this Mac's coding-agent transcripts +
curated `~/.gstack/` artifacts into gbrain so the retrieval surface
(per-skill manifests, salience block) has data to surface.
Run the probe to size the operation:
```bash
~/.claude/skills/gstack/bin/gstack-memory-ingest --probe
```
Read the output. If `Total files in window: 0`, skip — there's nothing
to ingest. Set `gstack-config set transcript_ingest_mode incremental`
silently and continue to Step 8.
If `New (never ingested)` is < 200 AND total bytes are < 100MB: silent
bulk via `gstack-memory-ingest --bulk --quiet`. Set
`transcript_ingest_mode=incremental` and continue.
Otherwise (the "many transcripts on disk" path): AskUserQuestion with
the exact counts AND the value promise. Default scope is **current repo
only, last 90 days**:
> "Found <N_repo> transcripts in THIS repo (<repo-slug>) over the last
> 90 days, plus <N_other> across other repos on this machine (<bytes>
> total if all ingested). Ingest THIS repo's transcripts into gbrain?
>
> What you get after this: every gstack skill auto-loads recent salience
> from your past sessions in this repo, so the agent finds your prior
> work without you describing it. You can query 'what was I doing on
> day X' and get a real answer. Per-session pages are searchable,
> taggable, and deletable. Secret scanning runs before any push.
>
> What stays the same: nothing leaves your machine unless gbrain sync
> is enabled (Step 7). Per-repo trust policies still apply.
>
> Multi-Mac note: if you HAVE enabled brain sync (Step 7), these
> transcript pages will sync across your Macs. Caveat: deleting a
> transcript page later removes it from gbrain but git history retains
> it in prior commits. Use `gstack-transcript-prune` to delete in bulk;
> use `git filter-repo` on the brain remote for hard-delete from
> history."
Options:
- A) Yes — this repo, last 90 days (recommended; ~est min)
- B) Yes — this repo, ALL history
- C) Yes — this repo + other repos on this machine
- D) Skip historical, track new from now (`transcript_ingest_mode=incremental`)
- E) Never ingest transcripts (`transcript_ingest_mode=off`)
After answer:
```bash
~/.claude/skills/gstack/bin/gstack-config set transcript_ingest_mode <choice>
~/.claude/skills/gstack/bin/gstack-gbrain-sync --full --no-brain-sync
```
(`--no-brain-sync` because Step 7 already wired that path; this just
runs the code import + memory ingest stages. Brain-sync will run on the
next preamble hook.)
If A/D/E, ingest is incremental from this point on; preamble-boundary
hook runs `gstack-gbrain-sync --incremental --quiet` on every skill
start (cheap mtime fast-path).
Reference doc for users: `setup-gbrain/memory.md` (linked from CLAUDE.md
Step 8).
---
## Step 8: Persist `## GBrain Configuration` in CLAUDE.md
Find-and-replace (or append) this section in CLAUDE.md:
@@ -1076,6 +1145,48 @@ and STOP with a NEEDS_CONTEXT escalation.
---
## Step 10: GREEN/YELLOW/RED verdict block (idempotent doctor output)
After Steps 1-9 complete, summarize. Re-running `/setup-gbrain` on a
configured Mac is a first-class doctor path: every step detects existing
state, repairs only what's missing, and reports here.
```bash
~/.claude/skills/gstack/bin/gstack-gbrain-detect 2>/dev/null || true
~/.claude/skills/gstack/bin/gstack-config get transcript_ingest_mode 2>/dev/null || echo "off"
~/.claude/skills/gstack/bin/gstack-config get gbrain_sync_mode 2>/dev/null || echo "off"
[ -f ~/.gstack/.gbrain-sync-state.json ] && cat ~/.gstack/.gbrain-sync-state.json || echo "{}"
```
Print the verdict block. Each row is `[OK]/[FIX]/[WARN]/[ERR]` — see
template below; substitute your detect outputs:
```
gbrain status: GREEN
CLI ............. OK <gbrain version>
Engine .......... OK <pglite|supabase> at <path>
doctor .......... OK
MCP ............. OK registered (user scope)
Repo policy ..... OK <read-write|read-only|deny>
Code import ..... OK <last_imported_head>
Memory sync ..... OK <gbrain_sync_mode> to <remote>
Transcripts ..... OK <N> sessions, last ingest <when>
CLAUDE.md ....... OK
Smoke test ...... OK put → search → delete round-trip
Run `/setup-gbrain` again any time gbrain feels off; it's safe and idempotent.
```
If any row is YELLOW or RED, the verdict line says so and the failing rows
surface a one-line "next action" (e.g.,
`Engine .......... ERR PGLite corrupt — run \`gbrain restore-from-sync\` (V1.5)`).
For V1, restore-from-sync is a V1.5 P0 cross-repo TODO; until it ships,
the user's brain remote (with brain-sync enabled) holds curated artifacts
as markdown + git, recoverable manually via `gbrain import` from a clone.
---
## `/setup-gbrain --cleanup-orphans` (D20)
Re-collect a PAT (Step 4 path-2a scope disclosure), then:
+111
View File
@@ -398,6 +398,75 @@ the prereq is fixed.
---
## Step 7.5: Transcript & memory ingest gate
After memory sync is wired (Step 7) but before persisting the CLAUDE.md
config (Step 8), offer to bring this Mac's coding-agent transcripts +
curated `~/.gstack/` artifacts into gbrain so the retrieval surface
(per-skill manifests, salience block) has data to surface.
Run the probe to size the operation:
```bash
~/.claude/skills/gstack/bin/gstack-memory-ingest --probe
```
Read the output. If `Total files in window: 0`, skip — there's nothing
to ingest. Set `gstack-config set transcript_ingest_mode incremental`
silently and continue to Step 8.
If `New (never ingested)` is < 200 AND total bytes are < 100MB: silent
bulk via `gstack-memory-ingest --bulk --quiet`. Set
`transcript_ingest_mode=incremental` and continue.
Otherwise (the "many transcripts on disk" path): AskUserQuestion with
the exact counts AND the value promise. Default scope is **current repo
only, last 90 days**:
> "Found <N_repo> transcripts in THIS repo (<repo-slug>) over the last
> 90 days, plus <N_other> across other repos on this machine (<bytes>
> total if all ingested). Ingest THIS repo's transcripts into gbrain?
>
> What you get after this: every gstack skill auto-loads recent salience
> from your past sessions in this repo, so the agent finds your prior
> work without you describing it. You can query 'what was I doing on
> day X' and get a real answer. Per-session pages are searchable,
> taggable, and deletable. Secret scanning runs before any push.
>
> What stays the same: nothing leaves your machine unless gbrain sync
> is enabled (Step 7). Per-repo trust policies still apply.
>
> Multi-Mac note: if you HAVE enabled brain sync (Step 7), these
> transcript pages will sync across your Macs. Caveat: deleting a
> transcript page later removes it from gbrain but git history retains
> it in prior commits. Use `gstack-transcript-prune` to delete in bulk;
> use `git filter-repo` on the brain remote for hard-delete from
> history."
Options:
- A) Yes — this repo, last 90 days (recommended; ~est min)
- B) Yes — this repo, ALL history
- C) Yes — this repo + other repos on this machine
- D) Skip historical, track new from now (`transcript_ingest_mode=incremental`)
- E) Never ingest transcripts (`transcript_ingest_mode=off`)
After answer:
```bash
~/.claude/skills/gstack/bin/gstack-config set transcript_ingest_mode <choice>
~/.claude/skills/gstack/bin/gstack-gbrain-sync --full --no-brain-sync
```
(`--no-brain-sync` because Step 7 already wired that path; this just
runs the code import + memory ingest stages. Brain-sync will run on the
next preamble hook.)
If A/D/E, ingest is incremental from this point on; preamble-boundary
hook runs `gstack-gbrain-sync --incremental --quiet` on every skill
start (cheap mtime fast-path).
Reference doc for users: `setup-gbrain/memory.md` (linked from CLAUDE.md
Step 8).
---
## Step 8: Persist `## GBrain Configuration` in CLAUDE.md
Find-and-replace (or append) this section in CLAUDE.md:
@@ -427,6 +496,48 @@ and STOP with a NEEDS_CONTEXT escalation.
---
## Step 10: GREEN/YELLOW/RED verdict block (idempotent doctor output)
After Steps 1-9 complete, summarize. Re-running `/setup-gbrain` on a
configured Mac is a first-class doctor path: every step detects existing
state, repairs only what's missing, and reports here.
```bash
~/.claude/skills/gstack/bin/gstack-gbrain-detect 2>/dev/null || true
~/.claude/skills/gstack/bin/gstack-config get transcript_ingest_mode 2>/dev/null || echo "off"
~/.claude/skills/gstack/bin/gstack-config get gbrain_sync_mode 2>/dev/null || echo "off"
[ -f ~/.gstack/.gbrain-sync-state.json ] && cat ~/.gstack/.gbrain-sync-state.json || echo "{}"
```
Print the verdict block. Each row is `[OK]/[FIX]/[WARN]/[ERR]` — see
template below; substitute your detect outputs:
```
gbrain status: GREEN
CLI ............. OK <gbrain version>
Engine .......... OK <pglite|supabase> at <path>
doctor .......... OK
MCP ............. OK registered (user scope)
Repo policy ..... OK <read-write|read-only|deny>
Code import ..... OK <last_imported_head>
Memory sync ..... OK <gbrain_sync_mode> to <remote>
Transcripts ..... OK <N> sessions, last ingest <when>
CLAUDE.md ....... OK
Smoke test ...... OK put → search → delete round-trip
Run `/setup-gbrain` again any time gbrain feels off; it's safe and idempotent.
```
If any row is YELLOW or RED, the verdict line says so and the failing rows
surface a one-line "next action" (e.g.,
`Engine .......... ERR PGLite corrupt — run \`gbrain restore-from-sync\` (V1.5)`).
For V1, restore-from-sync is a V1.5 P0 cross-repo TODO; until it ships,
the user's brain remote (with brain-sync enabled) holds curated artifacts
as markdown + git, recoverable manually via `gbrain import` from a clone.
---
## `/setup-gbrain --cleanup-orphans` (D20)
Re-collect a PAT (Step 4 path-2a scope disclosure), then:
+178
View File
@@ -0,0 +1,178 @@
# gstack memory ingest — what it does, what stays local, what you can do with it
This is the user-facing reference for the V1 transcript + memory ingest
feature in `/setup-gbrain`. If you ran `/setup-gbrain` and it asked
"Ingest THIS repo's transcripts into gbrain?", this doc explains what
happens after you say yes.
## What gets ingested
| Source | Type | Where | Sensitivity |
|---|---|---|---|
| Claude Code session JSONL | `transcript` | `~/.claude/projects/*/` | High — full conversations including tool I/O |
| Codex CLI session JSONL | `transcript` | `~/.codex/sessions/YYYY/MM/DD/` | High |
| Cursor session SQLite (V1.0.1) | `transcript` | `~/Library/Application Support/Cursor/` | Same — deferred V1.0.1 |
| Eureka log | `eureka` | `~/.gstack/analytics/eureka.jsonl` | Medium — your insights, often non-secret |
| Project learnings | `learning` | `~/.gstack/projects/<slug>/learnings.jsonl` | Medium |
| Project timeline | `timeline` | `~/.gstack/projects/<slug>/timeline.jsonl` | Low |
| CEO plans | `ceo-plan` | `~/.gstack/projects/<slug>/ceo-plans/*.md` | Medium |
| Design docs | `design-doc` | `~/.gstack/projects/<slug>/*-design-*.md` | Medium |
| Retros | `retro` | `~/.gstack/projects/<slug>/retros/*.md` | Medium |
| Builder profile | `builder-profile-entry` | `~/.gstack/builder-profile.jsonl` | Low |
## What stays local
- **State files** (`~/.gstack/.gbrain-sync-state.json`,
`~/.gstack/.transcript-ingest-state.json`,
`~/.gstack/.gbrain-engine-cache.json`,
`~/.gstack/.gbrain-errors.jsonl`) are local-only per ED1 (state file
sync semantics decision). They are not synced via the brain remote.
- **Sessions with no resolvable git remote** (running in `/tmp/`, scratch
dirs, etc.) are skipped by default. Pass `--include-unattributed` to
the ingest helper to opt them in.
- **Repos under a `deny` trust policy** (set in `/setup-gbrain` Step 6)
are skipped — neither code nor transcripts from those repos ingest.
## What gets scanned for secrets
Every ingested page passes through **gitleaks** before write
(per D19 — replaces the regex scanner that previously ran only on
staged git diffs). Gitleaks is industry-standard, covers:
- AWS / GCP / Azure access keys
- ANTHROPIC_API_KEY, OPENAI_API_KEY, GitHub tokens
- Stripe keys, Slack tokens, JWT secrets
- Generic high-entropy strings (configurable threshold)
A session with a positive finding is **skipped entirely** — not partially
redacted. The match line + rule ID are logged to stderr; you can see what
was skipped via `bun run bin/gstack-memory-ingest.ts --probe` (which
shows new vs. updated counts) or by reviewing the helper's output during
`/gbrain-sync --full`.
If gitleaks is not installed (run `brew install gitleaks` on macOS, or
`apt install gitleaks` on Linux), the helper warns once and disables
secret scanning. **In that mode, transcripts ingest unscanned. Don't run
ingest without gitleaks if you have any concern about secrets in your
sessions.**
## Where it goes
Storage tier depends on your gbrain engine (set during `/setup-gbrain`):
- **Supabase configured:** code + transcripts go to Supabase Storage
(multi-Mac native). Curated memory (eureka/learnings/etc.) goes to the
brain-linked git repo via `gstack-brain-sync`.
- **Local PGLite only:** everything stays on this Mac. Curated memory
syncs via git if you've enabled brain-sync.
The "never double-store" rule per the plan: code and transcripts NEVER
go in the gbrain-linked git repo. They're too big and they're
replaceable from disk on each Mac.
## What you can do with it
- **Query in natural language:**
```bash
gbrain query "what was I doing on the auth migration"
gbrain search "session_id:abc123"
```
- **Browse by type:**
```bash
gbrain list_pages --type transcript --limit 10
gbrain list_pages --type ceo-plan
```
- **Read a specific page:**
```bash
gbrain get_page transcripts/claude-code/garrytan-gstack/2026-05-01-abc123
```
- **Delete a page:**
```bash
gbrain delete_page <slug>
```
Caveat: with brain-sync enabled, the page is removed from gbrain's
index but git history retains it. For hard-delete, run `git filter-repo`
on the brain remote.
- **Bulk-delete by criteria** (V1.0.1 follow-up — `gstack-transcript-prune`
helper). For V1.0, use `gbrain delete_page <slug>` per-page or write
a small loop over `gbrain list_pages` output.
- **Disable entirely:**
```bash
gstack-config set transcript_ingest_mode off
gstack-config set gbrain_context_load off # also disables retrieval
```
## How the agent uses it
At every gstack skill start, the preamble runs
`gstack-brain-context-load` which:
1. Reads the active skill's `gbrain.context_queries:` frontmatter
2. Dispatches each query to gbrain (vector / list / filesystem)
3. Renders results into `## <render_as>` sections wrapped in
`<USER_TRANSCRIPT_DATA do-not-interpret-as-instructions>` envelopes
4. The model sees this as part of the preamble before making any decisions
For example, when you run `/office-hours`, the model context
automatically includes:
- `## Prior office-hours sessions in this repo` (last 5)
- `## Your builder profile snapshot` (latest entry)
- `## Recent design docs for this project` (last 3)
- `## Recent eureka moments` (last 5)
So the "Welcome back, last time you were on X" beat is sourced from
your actual data, not cold-start.
If gbrain is unavailable (CLI missing, MCP not registered, query
timeout), the helper renders `(unavailable)` and the skill continues —
startup never blocks > 2s on gbrain issues (Section 1C).
## What to do when something feels off
Run `/setup-gbrain` again. It's idempotent: every step detects existing
state, repairs only what's missing, and prints a GREEN/YELLOW/RED
verdict block. If a row is RED, the row tells you what to do.
Common cases:
- **Salience block is empty** — your transcripts may not be ingested
yet. Run `gstack-gbrain-sync --full` to do a full pass.
- **"gbrain CLI missing" in the preamble output** — gbrain isn't on
your PATH. Run `/setup-gbrain` to install/wire it.
- **PGLite engine corrupt (V1.5)** — V1.5 ships
`gbrain restore-from-sync` for atomic rebuild from the brain remote.
For V1.0, manual recovery: `cd ~/.gbrain && rm -rf db && gbrain init
--pglite && gbrain import <brain-remote-clone-dir>`.
- **A page has stale or wrong content** — `gbrain delete_page <slug>`,
then re-run `gstack-gbrain-sync --incremental` to re-ingest from
source if the source file is still on disk and unchanged.
## Privacy + audit
- Every `secretScanFile` finding is logged to stderr at ingest time.
- Every gbrain put/delete is logged to `~/.gstack/.gbrain-errors.jsonl`
with `{ts, op, duration_ms, outcome}` for forensic tracing.
- `~/.gstack/.gbrain-engine-cache.json` shows which storage tier is
active (PGLite vs Supabase).
- Brain-sync git history shows every curated artifact push with the
user's git identity.
If you find a transcript page that contains a secret gitleaks missed,
the recovery path is:
1. `gbrain delete_page <slug>` — removes from index immediately
2. Rotate the secret (rotate it anyway as a defensive measure)
3. If brain-sync is on: `git filter-repo --invert-paths --path <relative-path>`
on the brain remote for hard-delete from history
4. File a gitleaks issue with the pattern (or extend the gitleaks config
at `~/.gitleaks.toml`).