mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-08 14:34:49 +02:00
f44de365c5
* feat: gstack-gbrain-mcp-verify helper for remote MCP probe
Probes a remote gbrain MCP endpoint with bearer auth. POSTs initialize,
classifies failures into NETWORK / AUTH / MALFORMED with one-line
remediation hints, and runs a tools/list capability probe to detect
sources_add MCP support (forward-compat for when gbrain ships URL ingest).
Token consumed from GBRAIN_MCP_TOKEN env, never argv. The request must set
both 'application/json' AND 'text/event-stream' in Accept; missing that
gotcha cost 10 minutes of debugging (now regression-tested).
Live-verified against wintermute (gbrain v0.27.1).
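The Accept-header requirement above can be made concrete. A minimal sketch of the probe request, assuming the conventional JSON-RPC `initialize` shape (the endpoint URL, server behavior, and request body here are illustrative, not the actual helper's); the command is assembled and printed rather than executed, since there is no live endpoint in this sketch:

```shell
# Assemble the probe argv and print it (no network call in this sketch).
# MCP_URL is a placeholder; the token comes from env, never from argv input.
MCP_URL="${MCP_URL:-https://brain.example.invalid/mcp}"
GBRAIN_MCP_TOKEN="${GBRAIN_MCP_TOKEN:-dummy-token}"

# Both Accept types are required — this is the gotcha the commit describes.
probe=(curl -sS -X POST "$MCP_URL"
  -H "Authorization: Bearer $GBRAIN_MCP_TOKEN"
  -H "Content-Type: application/json"
  -H "Accept: application/json, text/event-stream"
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}')
printf '%s\n' "${probe[@]}"
```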
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: gstack-artifacts-init + gstack-artifacts-url helpers
artifacts-init replaces brain-init with provider choice (gh / glab /
manual), per-user gstack-artifacts-$USER repo, HTTPS-canonical storage in
~/.gstack-artifacts-remote.txt, and a "send this to your brain admin"
hookup printout. Always prints the command, never auto-executes — gbrain
v0.26.x has no admin-scope MCP probe (codex Finding #3).
artifacts-url centralizes HTTPS↔SSH/host/owner-repo conversion so callers
don't each string-mangle (codex Finding #10). The remote-conflict check in
artifacts-init compares at the canonical level so re-running with HTTPS
input doesn't trip on a stored SSH URL for the same logical repo.
The "URL form not supported" branch prints a two-line clone-then-path
form for gbrain v0.26.x; the supported branch is a one-liner with --url
ready for when gbrain ships URL ingest.
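The kind of conversion artifacts-url centralizes can be sketched as a single canonicalizer (the function name and the two SSH forms handled here are illustrative, not the real helper's interface):

```shell
# Illustrative canonicalizer: SSH / HTTPS remote forms → canonical HTTPS.
# Not the real gstack-artifacts-url; shown only to make the idea concrete.
to_canonical_https() {
  local url="$1" rest
  case "$url" in
    git@*:*)                        # git@host:owner/repo(.git)
      rest="${url#git@}"
      url="https://${rest%%:*}/${rest#*:}" ;;
    ssh://git@*/*)                  # ssh://git@host/owner/repo(.git)
      url="https://${url#ssh://git@}" ;;
  esac
  printf '%s\n' "${url%.git}"       # strip trailing .git for comparison
}
```

Comparing at this canonical level is what lets a re-run with HTTPS input match a stored SSH URL for the same logical repo.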
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: extend gstack-gbrain-detect with mcp_mode + artifacts_remote
Adds two new fields to detect's JSON output:
- gbrain_mcp_mode: local-stdio | remote-http | none
Resolved via 3-tier fallback (codex Finding D3): claude mcp get --json
→ claude mcp list text-grep → ~/.claude.json jq read. If Anthropic moves
the file format, the first two tiers absorb it.
- gstack_artifacts_remote: HTTPS URL from ~/.gstack-artifacts-remote.txt
Falls back to ~/.gstack-brain-remote.txt during the v1.27.0.0 migration
window so detect doesn't return empty between upgrade and migration.
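The 3-tier fallback reduces to roughly this shape (tier order follows the commit message; the `jq` path into ~/.claude.json is a guess at that file's layout, not a documented schema):

```shell
# Illustrative 3-tier resolution of gbrain_mcp_mode.
resolve_mcp_mode() {
  local json
  # Tier 1: structured CLI output.
  if json=$(claude mcp get gbrain --json 2>/dev/null); then
    printf '%s\n' "$json" | jq -r '.transport // "local-stdio"'
    return
  fi
  # Tier 2: text-grep the list output.
  if claude mcp list 2>/dev/null | grep -q '^gbrain\b'; then
    echo "local-stdio"
    return
  fi
  # Tier 3: read the config file directly (layout assumed).
  jq -r '.mcpServers.gbrain.transport // "none"' ~/.claude.json 2>/dev/null \
    || echo "none"
}
```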
Existing detect tests still pass (15/15). 19 new tests cover every fallback
tier independently, plus a schema regression for /sync-gbrain compat.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: setup-gbrain Path 4 (remote MCP) + artifacts rename
Path 4 lets users paste an HTTPS MCP URL + bearer token and registers it
as an HTTP-transport MCP without needing a local gbrain CLI install. The
flow:
- Step 2 gains a fourth option (Remote gbrain MCP)
- Step 4 adds Path 4 sub-flow: collect URL, secret-read bearer, verify
via gstack-gbrain-mcp-verify (NETWORK / AUTH / MALFORMED classifier)
- Step 5 (local doctor), Step 7.5 (transcript ingest), and Step 5a's stdio
  branch are all skipped on Path 4
- Step 5a adds an HTTP+bearer registration form: claude mcp add
--transport http --header "Authorization: Bearer ..."
- Step 7 renamed "session memory sync" → "artifacts sync" and now calls
gstack-artifacts-init (which always prints the brain-admin hookup
command — no auto-execute, codex Finding #3)
- Step 8 CLAUDE.md block branches: remote-http includes URL + server
version (never the token); local-stdio keeps engine + config-file
- Step 9 smoke test on Path 4 prints the curl-equivalent for
post-restart verification (MCP tools aren't visible mid-session)
- Step 10 verdict block has separate templates per mode
Idempotency: re-running with gbrain_mcp_mode=remote-http already in
detect output skips Step 2 entirely and goes to verification.
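The Step 5a registration form reduces to roughly this shape (server name and endpoint are placeholders, and the exact argument order accepted by `claude mcp add` may differ by CLI version). In keeping with the never-write-token rule, the sketch prints the command with the env-var reference intact rather than executing it:

```shell
# Placeholder endpoint; the real flow collects it interactively in Step 4.
MCP_URL="${MCP_URL:-https://brain.example.invalid/mcp}"

# Print the registration command instead of executing it, referencing the
# token via its env var so the secret never appears in output.
reg_cmd=$(cat <<CMD
claude mcp add gbrain $MCP_URL \\
  --transport http \\
  --header "Authorization: Bearer \$GBRAIN_MCP_TOKEN"
CMD
)
printf '%s\n' "$reg_cmd"
```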
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: rename gbrain_sync_mode → artifacts_sync_mode (v1.27.0.0 prep)
Hard rename, no dual-read alias (codex Finding D4). The on-disk migration
script (Phase C, separate commit) renames the config key in users'
~/.gstack/config.yaml and any CLAUDE.md blocks.
Touched call sites:
- bin/gstack-config defaults + validation + list/defaults output
- bin/gstack-gbrain-detect (gstack_brain_sync_mode field still emitted
with the same name for downstream-tool compat; reads new key)
- bin/gstack-brain-sync, bin/gstack-brain-enqueue, bin/gstack-brain-uninstall
- bin/gstack-timeline-log (comment ref)
- scripts/resolvers/preamble/generate-brain-sync-block.ts: renames key,
branches on gbrain_mcp_mode=remote-http to emit "ARTIFACTS_SYNC:
remote-mode (managed by brain server <host>)" instead of the local
mode/queue/last_push line (codex Finding #11)
- bin/gstack-brain-restore + bin/gstack-gbrain-source-wireup: read
~/.gstack-artifacts-remote.txt with ~/.gstack-brain-remote.txt fallback
during the migration window
- bin/gstack-artifacts-init: tolerant of unrecognized URL forms (local
paths, file://, self-hosted gitea) so test infrastructure and unusual
remotes work without canonicalization
- test/brain-sync.test.ts: gstack-brain-init → gstack-artifacts-init
- test/skill-e2e-brain-privacy-gate.test.ts: artifacts_sync_mode keys
- test/gen-skill-docs.test.ts: budget 35K → 36.5K for the new MCP-mode
probe in the preamble resolver
- health/SKILL.md.tmpl, sync-gbrain/SKILL.md.tmpl: comment + verdict line
Hard delete:
- bin/gstack-brain-init (replaced by bin/gstack-artifacts-init in v1.27.0.0)
- test/gstack-brain-init-gh-mock.test.ts (replaced by gstack-artifacts-init.test.ts)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: regenerate SKILL.md files after artifacts-sync rename
Mechanical regen via `bun run gen:skill-docs --host all`. All */SKILL.md
files reflect the renamed config key (gbrain_sync_mode →
artifacts_sync_mode), the renamed remote-helper file
(~/.gstack-artifacts-remote.txt with brain fallback), the renamed init
script (gstack-artifacts-init), and the new ARTIFACTS_SYNC: remote-mode
status line that fires when a remote-http MCP is registered.
Golden fixtures (test/fixtures/golden/*-ship-SKILL.md) refreshed to match
the regenerated default-ship output.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat: v1.27.0.0 migration — gstack-brain → gstack-artifacts rename
Journaled, interruption-safe migration. Six steps, each writes to
~/.gstack/.migrations/v1.27.0.0.journal on success; re-entry resumes
from the next un-done step. On final success, journal is replaced by
~/.gstack/.migrations/v1.27.0.0.done.
Steps:
1. gh_repo_renamed gh/glab repo rename gstack-brain-$USER →
gstack-artifacts-$USER (idempotent: detects
already-renamed and skips)
2. remote_txt_renamed mv ~/.gstack-brain-remote.txt → artifacts file,
rewriting URL path to match the new repo name
3. config_key_renamed sed -i in ~/.gstack/config.yaml flips
gbrain_sync_mode → artifacts_sync_mode
4. claude_md_block sed flips "- Memory sync:" → "- Artifacts sync:"
in cwd CLAUDE.md and ~/.gstack/CLAUDE.md
5. sources_swapped gbrain sources add NEW (verify) → remove OLD
(codex Finding #6: add-before-remove ordering,
no downtime window). On remote-MCP mode, prints
commands for the brain admin instead of executing.
6. done touchfile + delete journal
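The journal guard behind the six steps can be sketched minimally (step names from the list above; function names are illustrative, not the migration script's actual interface):

```shell
# Minimal sketch of the journal guard: each step appends its name on
# success; re-entry skips any step already journaled.
JOURNAL="${JOURNAL:-$HOME/.gstack/.migrations/v1.27.0.0.journal}"

step_done() { [ -f "$JOURNAL" ] && grep -Fxq "$1" "$JOURNAL"; }
mark_done() { mkdir -p "$(dirname "$JOURNAL")" && echo "$1" >> "$JOURNAL"; }

run_step() {                       # run_step <name> <command...>
  local name="$1"; shift
  step_done "$name" && return 0    # resume: skip already-journaled steps
  "$@" && mark_done "$name"
}
```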
User opt-out: any "n" or "skip-for-now" answer at the initial prompt
writes a marker file that prevents re-prompting; user can re-invoke
via /setup-gbrain --rerun-migration.
11 unit tests cover: nothing-to-migrate, GitHub happy path, idempotent
re-run, journal-resume mid-flight, remote-MCP print-only path,
add-before-remove ordering verification, add-fail → old source stays
registered, CLAUDE.md field rewrite.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test: regression suite + E2E for v1.27.0.0 rename
Three new regression tests guard the rename's blast radius (per codex
Findings #1, #8, #9, #12):
- test/no-stale-gstack-brain-refs.test.ts: greps bin/, scripts/, *.tmpl,
test/ for forbidden identifiers (gstack-brain-init, gbrain_sync_mode);
fails CI if any non-allowlisted file references them.
- test/post-rename-doc-regen.test.ts: confirms gen-skill-docs output has
no stale references in any */SKILL.md (the cross-product blind spot).
- test/setup-gbrain-path4-structure.test.ts: structural lint over the
Path 4 prose contract — STOP gates after verify failure, never-write-
token rules, mode-aware CLAUDE.md block, bearer always via env-var.
Two new gate-tier E2E tests (deterministic stub HTTP server, fixed inputs):
- test/skill-e2e-setup-gbrain-remote.test.ts: Path 4 happy path. Stubs
an HTTP MCP server, drives the skill via Agent SDK with a stubbed
bearer, asserts claude.json gets the http MCP entry, CLAUDE.md gets
the remote-http block, the secret token NEVER leaks to CLAUDE.md.
- test/skill-e2e-setup-gbrain-bad-token.test.ts: stub server returns 401;
asserts the AUTH classifier hint surfaces, no MCP registration occurs,
CLAUDE.md is unchanged. Regression guard for the "verify failed → STOP"
rule.
touchfiles.ts: setup-gbrain-remote and setup-gbrain-bad-token added at
gate-tier so CI catches Path 4 regressions on every PR.
Plus a few comment refs flipped: bin/gstack-jsonl-merge, bin/gstack-timeline-log
(legacy gstack-brain-init mentions in headers).
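The stale-reference sweep reduces to a grep over forbidden identifiers (the real guard is a bun test with an allowlist; this reduction just shows the shape):

```shell
# Illustrative version of the no-stale-refs sweep. Returns nonzero if any
# file under the given dirs still mentions a forbidden identifier.
check_stale_refs() {   # usage: check_stale_refs <dir>...
  local forbidden='gstack-brain-init|gbrain_sync_mode'
  ! grep -rEl "$forbidden" "$@" 2>/dev/null
}
```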
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* release: v1.27.0.0 — /setup-gbrain Path 4 + brain → artifacts rename
Bumps VERSION 1.26.4.0 → 1.27.0.0 (MINOR per CLAUDE.md scale-aware bump
guidance: ~1500 line net change including a new path in /setup-gbrain,
two new bin helpers, a journaled migration, 59 new tests, and a config
key rename across the codebase).
CHANGELOG entry covers: Path 4 (Remote MCP) end-to-end, the brain →
artifacts rename, the journaled migration, the verify-helper error
classifier, the artifacts-init multi-host provider choice. Includes
the canonical Garry-voice headline + numbers table + audience close
per the release-summary format.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test: demote setup-gbrain Path 4 E2E to periodic-tier
The Agent SDK E2E tests for Path 4 (skill-e2e-setup-gbrain-remote and
skill-e2e-setup-gbrain-bad-token) are inherently non-deterministic —
the model interprets "follow Path 4 only" prompts flexibly and can
skip Step 8 (CLAUDE.md write) or shortcut past the verify helper, which
makes the gate-tier assertions flaky.
The deterministic gate coverage for Path 4 is in
test/setup-gbrain-path4-structure.test.ts: a fast structural lint that
catches AUQ-pacing regressions and prose contract drift in <200ms with
zero token spend. That test is the right tool for catching the failure
mode the gate-tier was meant to guard against.
The Agent SDK E2E tests stay available on-demand for periodic-tier runs
(EVALS=1 EVALS_TIER=periodic bun test test/skill-e2e-setup-gbrain-*.test.ts).
Also tightened the verify-error assertion to the literal field shape
("error_class": "AUTH") instead of a substring match that false-matches
the parent claude session's "needs-auth" MCP discovery markers.
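Why the literal field shape matters can be shown in two lines (sample strings are illustrative):

```shell
# A substring match on "auth" also hits the parent session's unrelated
# "needs-auth" discovery marker; the exact JSON field shape does not.
verify_out='{"error_class": "AUTH", "hint": "check GBRAIN_MCP_TOKEN"}'
session_log='mcp server gbrain: needs-auth (discovery pending)'

grep -q '"error_class": "AUTH"' <<<"$verify_out"     # real failure: matches
! grep -q '"error_class": "AUTH"' <<<"$session_log"  # marker: no false hit
```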
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: sync package.json version to 1.27.0.0
VERSION was bumped to 1.27.0.0 in f6ec11eb but package.json was not
updated in the same commit. The gen-skill-docs.test.ts assertion
"package.json version matches VERSION file" caught the drift.
This is the DRIFT_STALE_PKG case the /ship Step 12 idempotency check
is designed for; the fix is the documented sync-only repair (no
re-bump, package.json synced to existing VERSION).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
453 lines
15 KiB
Bash
Executable File
#!/usr/bin/env bash
# gstack-brain-sync — drain queue, commit allowlisted paths, push to remote.
#
# Usage:
#   gstack-brain-sync --once              drain queue, commit, push (default)
#   gstack-brain-sync --status            print sync health as JSON
#   gstack-brain-sync --skip-file <p>     add <p> to ~/.gstack/.brain-skip.txt
#   gstack-brain-sync --drop-queue --yes  clear queue without committing
#   gstack-brain-sync --discover-new      scan allowlist dirs, enqueue changed files
#
# Invoked by the preamble at skill START and END boundaries. No persistent
# daemon. Typical run <1s when queue empty; ~200-800ms with network push.
#
# Singleton enforcement: atomic mkdir lock at ~/.gstack/.brain-sync.lock.d;
# flock(1) is absent on stock macOS. Concurrent runs skip rather than wait.
#
# Env:
#   GSTACK_HOME — override ~/.gstack (aligns with writers).

set -uo pipefail

GSTACK_HOME="${GSTACK_HOME:-$HOME/.gstack}"
QUEUE="$GSTACK_HOME/.brain-queue.jsonl"
ALLOWLIST="$GSTACK_HOME/.brain-allowlist"
PRIVACY_MAP="$GSTACK_HOME/.brain-privacy-map.json"
SKIP_FILE="$GSTACK_HOME/.brain-skip.txt"
STATUS_FILE="$GSTACK_HOME/.brain-sync-status.json"
LAST_PUSH_FILE="$GSTACK_HOME/.brain-last-push"
LOCK_FILE="$GSTACK_HOME/.brain-sync.lock"
DISCOVER_CURSOR="$GSTACK_HOME/.brain-discover-cursor"

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
CONFIG_BIN="$SCRIPT_DIR/gstack-config"

# Remote-specific hint for auth errors (branch on origin URL).
remote_auth_hint() {
  local url
  url=$(git -C "$GSTACK_HOME" remote get-url origin 2>/dev/null || echo "")
  case "$url" in
    *github.com*|*@github.*) echo "run: gh auth status (and gh auth refresh if needed)" ;;
    *gitlab*)                echo "run: glab auth status" ;;
    *)                       echo "check 'git remote -v' and your credentials" ;;
  esac
}

write_status() {
  # args: status_code message [extra_json_blob]
  local code="$1"
  local msg="$2"
  local extra="${3:-{\}}"
  local ts
  ts=$(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || echo "")
  python3 - "$STATUS_FILE" "$code" "$msg" "$ts" "$extra" <<'PYEOF' 2>/dev/null || true
import json, sys
path, code, msg, ts, extra = sys.argv[1:6]
try:
    extra_obj = json.loads(extra) if extra else {}
except Exception:
    extra_obj = {}
data = {"status": code, "message": msg, "ts": ts, **extra_obj}
with open(path, "w") as f:
    json.dump(data, f)
    f.write("\n")
PYEOF
}

# Read config; return 0 if sync active, 1 otherwise.
sync_active() {
  if [ ! -d "$GSTACK_HOME/.git" ]; then
    return 1
  fi
  local mode
  mode=$("$CONFIG_BIN" get artifacts_sync_mode 2>/dev/null || echo off)
  [ "$mode" = "off" ] && return 1
  return 0
}

# Secret regex families — stdin scan. Exits 0 clean, 1 if hit.
# Echoes the matching pattern family name on hit. Uses python3 -c (not
# heredoc) so sys.stdin stays available for the diff content.
secret_scan_stdin() {
  python3 -c "
import sys, re
patterns = [
    ('aws-access-key', re.compile(r'AKIA[0-9A-Z]{16}')),
    ('github-token', re.compile(r'\\b(gh[pousr]_[A-Za-z0-9]{20,}|github_pat_[A-Za-z0-9_]{20,})')),
    ('openai-key', re.compile(r'\\bsk-[A-Za-z0-9_-]{20,}')),
    ('pem-block', re.compile(r'-----BEGIN [A-Z ]{3,}-----')),
    ('jwt', re.compile(r'\\beyJ[A-Za-z0-9_-]{10,}\\.[A-Za-z0-9_-]{10,}\\.[A-Za-z0-9_-]{10,}\\b')),
    ('bearer-token-json',
     # JSON-embedded auth headers. The optional Bearer/Basic/Token prefix
     # matters: real auth values include a literal space after the scheme
     # name, but the value charset below does not include spaces, so
     # without the optional prefix every Bearer token in a JSON blob slips
     # past the scanner.
     re.compile(r'\"(authorization|api[_-]?key|apikey|token|secret|password)\"\\s*:\\s*\"(Bearer |Basic |Token )?[A-Za-z0-9_./+=-]{16,}\"',
                re.IGNORECASE)),
]
text = sys.stdin.read()
for name, rx in patterns:
    m = rx.search(text)
    if m:
        snippet = m.group(0)
        if len(snippet) > 30:
            snippet = snippet[:30] + '...'
        print(name + ':' + snippet)
        sys.exit(1)
sys.exit(0)
"
}

# Compute matched allowlisted, privacy-filtered path set from queue.
# Output: newline-delimited relative paths that should be staged.
compute_paths_to_stage() {
  local mode="$1"
  python3 - "$GSTACK_HOME" "$QUEUE" "$ALLOWLIST" "$PRIVACY_MAP" "$SKIP_FILE" "$mode" <<'PYEOF'
import sys, json, os, fnmatch

gstack_home, queue, allowlist_path, privacy_path, skip_path, mode = sys.argv[1:7]

def load_lines(path):
    try:
        with open(path) as f:
            return [l.strip() for l in f if l.strip() and not l.lstrip().startswith("#")]
    except FileNotFoundError:
        return []

def load_privacy_map(path):
    try:
        with open(path) as f:
            data = json.load(f)
        # Expected: [{"pattern": "glob", "class": "artifact" | "behavioral"}]
        return data if isinstance(data, list) else []
    except (FileNotFoundError, json.JSONDecodeError):
        return []

allowlist_globs = load_lines(allowlist_path)
privacy_map = load_privacy_map(privacy_path)
skip_lines = set(load_lines(skip_path))

# Read queue; collect unique file paths.
queue_paths = set()
try:
    with open(queue) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                obj = json.loads(line)
                p = obj.get("file")
                if isinstance(p, str):
                    queue_paths.add(p)
            except json.JSONDecodeError:
                continue
except FileNotFoundError:
    pass

def path_matches_any(path, globs):
    for pattern in globs:
        if fnmatch.fnmatchcase(path, pattern):
            return True
    return False

def privacy_class(path, mapping):
    for entry in mapping:
        pat = entry.get("pattern")
        if pat and fnmatch.fnmatchcase(path, pat):
            return entry.get("class", "artifact")
    # Default class when no pattern matches: artifact (safe default).
    return "artifact"

# mode filter: 'off' → nothing; 'artifacts-only' → only artifact class;
# 'full' → both classes.
def mode_allows(cls, mode):
    if mode == "off":
        return False
    if mode == "artifacts-only":
        return cls == "artifact"
    return True  # full

final = []
for p in sorted(queue_paths):
    if p in skip_lines:
        continue
    # Must be under GSTACK_HOME root. Reject absolute + reject ../ escape.
    if p.startswith("/") or ".." in p.split("/"):
        continue
    # Must match at least one allowlist glob.
    if not path_matches_any(p, allowlist_globs):
        continue
    # Must survive privacy mode filter.
    cls = privacy_class(p, privacy_map)
    if not mode_allows(cls, mode):
        continue
    # Must exist on disk — can't stage what isn't there.
    if not os.path.exists(os.path.join(gstack_home, p)):
        continue
    final.append(p)

for p in final:
    print(p)
PYEOF
}

subcmd_once() {
  if ! sync_active; then
    # Silent no-op when feature not initialized / disabled.
    exit 0
  fi

  # Singleton lock via atomic mkdir. `flock(1)` isn't on macOS by default;
  # `mkdir` is atomic on every POSIX filesystem. If another --once is already
  # running, skip (don't wait) — the next skill boundary will catch up.
  local lock_dir="${LOCK_FILE}.d"
  if ! mkdir "$lock_dir" 2>/dev/null; then
    # Is the lock stale? Check the pidfile inside. If process is dead, clear it.
    if [ -f "$lock_dir/pid" ]; then
      local lock_pid
      lock_pid=$(cat "$lock_dir/pid" 2>/dev/null || echo "")
      if [ -n "$lock_pid" ] && ! kill -0 "$lock_pid" 2>/dev/null; then
        # Stale lock — clear and retry once.
        rm -rf "$lock_dir" 2>/dev/null || true
        if ! mkdir "$lock_dir" 2>/dev/null; then
          exit 0
        fi
      else
        # Lock is held by a live process.
        exit 0
      fi
    else
      # Lock dir without pidfile — treat as held; don't touch.
      exit 0
    fi
  fi
  echo "$$" > "$lock_dir/pid" 2>/dev/null || true

  local mode
  mode=$("$CONFIG_BIN" get artifacts_sync_mode 2>/dev/null || echo off)

  local paths_file
  paths_file=$(mktemp /tmp/brain-sync-paths.XXXXXX) || { rm -rf "$lock_dir" 2>/dev/null; write_status "error" "mktemp failed"; exit 1; }
  # Single trap covers both: lock cleanup AND tempfile cleanup.
  trap 'rm -f "$paths_file" 2>/dev/null; rm -rf "$lock_dir" 2>/dev/null || true' EXIT INT TERM

  compute_paths_to_stage "$mode" > "$paths_file"
  if [ ! -s "$paths_file" ]; then
    # Nothing to stage. Clear any stale queue entries and exit.
    : > "$QUEUE"
    write_status "idle" "no allowlisted changes in queue"
    exit 0
  fi

  # Stage with git add -f (forces past .gitignore=*), explicit paths only.
  while IFS= read -r p; do
    [ -z "$p" ] && continue
    git -C "$GSTACK_HOME" add -f -- "$p" 2>/dev/null || true
  done < "$paths_file"

  # Secret-scan staged diff.
  local scan_out
  scan_out=$(git -C "$GSTACK_HOME" diff --cached 2>/dev/null | secret_scan_stdin || true)
  if [ -n "$scan_out" ]; then
    # Hit — unstage, preserve queue, write loud status.
    git -C "$GSTACK_HOME" reset HEAD -- . >/dev/null 2>&1 || true
    local hint
    hint="secret pattern detected ($scan_out). Remediation: review the staged file, then run: gstack-brain-sync --skip-file <path> OR edit the content."
    write_status "blocked" "$hint"
    echo "BRAIN_SYNC: blocked: $scan_out" >&2
    exit 0
  fi

  # Commit with template message.
  local n ts
  n=$(wc -l < "$paths_file" | tr -d ' ')
  ts=$(date -u +%Y-%m-%dT%H:%M:%SZ)
  local msg="sync: $n file(s) | $ts"
  git -C "$GSTACK_HOME" -c user.email="gstack@localhost" -c user.name="gstack-brain-sync" \
    commit -q -m "$msg" 2>/dev/null || {
      # Nothing to commit (e.g. all files already committed).
      : > "$QUEUE"
      write_status "idle" "queue drained but no new changes to commit"
      exit 0
    }

  # Push. On reject, fetch + merge (merge driver handles JSONL) + retry once.
  local push_err
  push_err=$(git -C "$GSTACK_HOME" push origin HEAD 2>&1 >/dev/null) || {
    # Check if this is an auth error first — no point retrying.
    if echo "$push_err" | grep -qiE "auth|permission|403|401|forbidden"; then
      local hint
      hint=$(remote_auth_hint)
      write_status "push_failed" "push failed: auth error. fix: $hint"
      echo "BRAIN_SYNC: push failed: auth. fix: $hint" >&2
      # Queue cleared because the commit exists locally; next push will send it.
      : > "$QUEUE"
      exit 0
    fi

    # Try a fetch-and-merge + retry.
    if git -C "$GSTACK_HOME" fetch origin 2>/dev/null; then
      local branch
      branch=$(git -C "$GSTACK_HOME" rev-parse --abbrev-ref HEAD 2>/dev/null || echo main)
      if git -C "$GSTACK_HOME" merge --no-edit "origin/$branch" >/dev/null 2>&1; then
        if git -C "$GSTACK_HOME" push origin HEAD 2>/dev/null; then
          : > "$QUEUE"
          date -u +%Y-%m-%dT%H:%M:%SZ > "$LAST_PUSH_FILE"
          write_status "ok" "pushed $n file(s) after merge"
          exit 0
        fi
      fi
    fi
    write_status "push_failed" "push failed: $(printf '%s' "$push_err" | head -1)"
    : > "$QUEUE"
    exit 0
  }

  # Success: clear queue, update last-push.
  : > "$QUEUE"
  date -u +%Y-%m-%dT%H:%M:%SZ > "$LAST_PUSH_FILE"
  write_status "ok" "pushed $n file(s)"
  exit 0
}

subcmd_status() {
  if [ -f "$STATUS_FILE" ]; then
    cat "$STATUS_FILE"
  else
    echo '{"status":"unknown","message":"no status file yet"}'
  fi
  # Supplemental info (not in status file).
  local queue_depth=0
  [ -f "$QUEUE" ] && queue_depth=$(wc -l < "$QUEUE" | tr -d ' ')
  local last_push="never"
  [ -f "$LAST_PUSH_FILE" ] && last_push=$(cat "$LAST_PUSH_FILE" 2>/dev/null || echo never)
  local mode
  mode=$("$CONFIG_BIN" get artifacts_sync_mode 2>/dev/null || echo off)
  printf '{"queue_depth":%s,"last_push":"%s","mode":"%s"}\n' "$queue_depth" "$last_push" "$mode"
}

subcmd_skip_file() {
  local path="${1:-}"
  if [ -z "$path" ]; then
    echo "Usage: gstack-brain-sync --skip-file <path>" >&2
    exit 1
  fi
  mkdir -p "$GSTACK_HOME"
  # Avoid duplicate entries.
  if [ -f "$SKIP_FILE" ] && grep -Fxq "$path" "$SKIP_FILE"; then
    echo "already in skip list: $path"
    exit 0
  fi
  echo "$path" >> "$SKIP_FILE"
  echo "added to skip list: $path"
  echo "(future writers will not enqueue this path; existing queue entries ignored on next --once)"
}

subcmd_drop_queue() {
  local force="${1:-}"
  if [ "$force" != "--yes" ]; then
    echo "Refusing: --drop-queue discards pending syncs. Pass --yes to confirm." >&2
    exit 1
  fi
  if [ ! -f "$QUEUE" ]; then
    echo "queue already empty"
    exit 0
  fi
  local n
  n=$(wc -l < "$QUEUE" | tr -d ' ')
  : > "$QUEUE"
  echo "dropped $n queue entries"
}

subcmd_discover_new() {
  if ! sync_active; then
    exit 0
  fi
  # Walk allowlist globs; enqueue any file where mtime+size differs from cursor.
  python3 - "$GSTACK_HOME" "$ALLOWLIST" "$DISCOVER_CURSOR" "$SCRIPT_DIR/gstack-brain-enqueue" <<'PYEOF' 2>/dev/null || true
import sys, os, json, fnmatch, subprocess

gstack_home, allowlist_path, cursor_path, enqueue_bin = sys.argv[1:5]

def load_lines(path):
    try:
        with open(path) as f:
            return [l.strip() for l in f if l.strip() and not l.lstrip().startswith("#")]
    except FileNotFoundError:
        return []

def load_cursor(path):
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return {}

def save_cursor(path, data):
    try:
        with open(path, "w") as f:
            json.dump(data, f)
    except OSError:
        pass

allowlist = load_lines(allowlist_path)
cursor = load_cursor(cursor_path)
new_cursor = dict(cursor)

# Walk all files under gstack_home, match against allowlist.
for root, dirs, files in os.walk(gstack_home):
    # Skip .git and .brain-* state files.
    if ".git" in root.split(os.sep):
        continue
    for name in files:
        full = os.path.join(root, name)
        rel = os.path.relpath(full, gstack_home)
        if rel.startswith(".brain-"):
            continue
        matched = any(fnmatch.fnmatchcase(rel, pat) for pat in allowlist)
        if not matched:
            continue
        try:
            st = os.stat(full)
            key = f"{int(st.st_mtime)}:{st.st_size}"
        except OSError:
            continue
        prev = cursor.get(rel)
        if prev != key:
            # Enqueue via the shim (respects sync mode + skip list).
            subprocess.run([enqueue_bin, rel], check=False)
            new_cursor[rel] = key

save_cursor(cursor_path, new_cursor)
PYEOF
}

# -------- dispatch --------
case "${1:-}" in
  --once|"")      subcmd_once ;;
  --status)       subcmd_status ;;
  --skip-file)    shift; subcmd_skip_file "${1:-}" ;;
  --drop-queue)   shift; subcmd_drop_queue "${1:-}" ;;
  --discover-new) subcmd_discover_new ;;
  --help|-h)
    sed -n '2,18p' "$0" | sed 's/^# \{0,1\}//'
    ;;
  *)
    echo "Unknown subcommand: $1" >&2
    echo "Run: gstack-brain-sync --help" >&2
    exit 1
    ;;
esac
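The mkdir-based singleton lock used in subcmd_once, reduced to a standalone sketch (the demo lock path and function names here are arbitrary, not part of the script):

```shell
# Standalone reduction of the atomic-mkdir singleton lock. mkdir is atomic
# on POSIX filesystems, so it works where flock(1) is absent (stock macOS).
lock_dir="${TMPDIR:-/tmp}/demo-brain-sync.lock.d"

acquire() {
  if mkdir "$lock_dir" 2>/dev/null; then
    echo "$$" > "$lock_dir/pid"
    return 0
  fi
  # Lock exists: stale if its recorded pid is dead.
  local pid
  pid=$(cat "$lock_dir/pid" 2>/dev/null || echo "")
  if [ -n "$pid" ] && ! kill -0 "$pid" 2>/dev/null; then
    rm -rf "$lock_dir" && mkdir "$lock_dir" 2>/dev/null && echo "$$" > "$lock_dir/pid"
  else
    return 1   # held by a live process (or indeterminate): skip, don't wait
  fi
}
release() { rm -rf "$lock_dir"; }
```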