mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-02 03:35:09 +02:00
54d4cde773
* refactor(security): loosen /connect rate limit from 3/min to 300/min
Setup keys are 24 random bytes (infeasible to brute-force), so a tight rate limit
does not meaningfully prevent key guessing. It exists only to cap
bandwidth, CPU, and log-flood damage from someone who discovered the
ngrok URL. A legitimate pair-agent session hits /connect once; 300/min
is 60x that pattern and never hit accidentally.
3/min caused pairing to fail on any retry flow (network blip, second
paired client) with no upside. Per-IP tracking was considered and
rejected — it would add a bounded Map plus LRU eviction for a defense
that is already adequate at the global layer.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
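The global cap described above can be sketched as a fixed-window counter. This is a hypothetical shape, not the actual server.ts limiter; the names `allowConnect`, `WINDOW_MS`, and `CONNECT_LIMIT` are illustrative:

```typescript
// Minimal sketch of a global (not per-IP) fixed-window rate limit:
// 300 /connect hits per minute, shared by all callers. No Map, no LRU —
// memory stays O(1) under flood.
const WINDOW_MS = 60_000;
const CONNECT_LIMIT = 300; // was 3/min; see rationale above

let windowStart = 0;
let hits = 0;

// Returns true if this /connect request is allowed; false means 429.
function allowConnect(now: number = Date.now()): boolean {
  if (now - windowStart >= WINDOW_MS) {
    windowStart = now; // roll over to a fresh one-minute window
    hits = 0;
  }
  hits += 1;
  return hits <= CONNECT_LIMIT;
}
```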
* feat(security): add tunnel-denial-log module for attack visibility
Append-only log of tunnel-surface auth denials to
~/.gstack/security/attempts.jsonl. Gives operators visibility into who
is probing tunneled daemons so the next security wave can be driven by
real attack data instead of speculation.
Design notes:
- Async via fs.promises.appendFile. Never appendFileSync — blocking the
event loop on every denial during a flood is what an attacker wants
(prior learning: sync-audit-log-io, 10/10 confidence).
- In-process rate cap at 60 writes/minute globally. Excess denials are
counted in memory but not written to disk — prevents disk DoS.
- Writes to the same ~/.gstack/security/attempts.jsonl used by the
prompt-injection attempt log. File rotation is handled by the existing
security pipeline (10MB, 5 generations).
No consumers in this commit; wired up in the dual-listener refactor that
follows.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
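The design notes above (async append, 60-writes/minute cap, in-memory counting of the excess) can be sketched as follows. This is illustrative, not the real tunnel-denial-log module — the real one writes via fs.promises.appendFile to ~/.gstack/security/attempts.jsonl; here the append function is injected so the sketch runs without disk I/O:

```typescript
// Hypothetical sketch of the append-only denial log described above.
const MAX_WRITES_PER_MINUTE = 60;

let windowStart = 0;
let written = 0;
let suppressed = 0; // counted in memory, never written — disk-DoS guard

type AppendFn = (line: string) => Promise<void>;

function logTunnelDenial(
  reason: "not-allowlisted" | "root-token" | "missing-scope",
  path: string,
  append: AppendFn, // injected; real module uses fs.promises.appendFile
  now: number = Date.now(),
): boolean {
  if (now - windowStart >= 60_000) {
    windowStart = now;
    written = 0;
  }
  if (written >= MAX_WRITES_PER_MINUTE) {
    suppressed += 1;
    return false; // excess denials are counted but not flushed to disk
  }
  written += 1;
  // Fire-and-forget async append: a denial flood must never block the
  // event loop the way appendFileSync would.
  void append(JSON.stringify({ ts: now, reason, path }) + "\n");
  return true;
}
```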
* feat(security): dual-listener tunnel architecture
The /health endpoint leaked AUTH_TOKEN to any caller that hit the ngrok
URL (by spoofing a chrome-extension:// origin, or by catching the daemon
running in headed mode).
Surfaced by @garagon in PR #1026; the original fix was header-inference
on the single port. Codex's outside-voice review during /plan-ceo-review
called that approach brittle (ngrok header behavior could change, local
proxies would false-positive), and pushed for the structural fix.
This is that fix. Stop making /health a root-token bootstrap endpoint on
any surface the tunnel can reach. The server now binds two HTTP
listeners when a tunnel is active. The local listener (extension, CLI,
sidebar) stays on 127.0.0.1 and is never exposed to ngrok. ngrok
forwards only to the tunnel listener, which serves only /connect
(unauth, rate-limited) and /command with a locked allowlist of
browser-driving commands. The security property comes from physical
port separation, not header inference — a tunnel caller cannot reach
/health, /cookie-picker, or /inspector, because those routes live on a
different TCP socket.
What this commit adds to browse/src/server.ts:
* Surface type ('local' | 'tunnel') and TUNNEL_PATHS +
TUNNEL_COMMANDS allowlists near the top of the file.
* makeFetchHandler(surface) factory replacing the single fetch arrow;
closure-captures the surface so the filter that runs before route
dispatch knows which socket accepted the request.
* Tunnel filter at dispatch entry: 404s anything not on TUNNEL_PATHS,
403s root-token bearers with a clear pairing hint, 401s non-/connect
requests that lack a scoped token. Every denial is logged via
logTunnelDenial (from tunnel-denial-log).
* GET /connect alive probe (unauth on both surfaces) so /pair and
/tunnel/start can detect dead ngrok tunnels without reaching
/health — /health is no longer tunnel-reachable.
* Lazy tunnel listener lifecycle. /tunnel/start binds a dedicated
Bun.serve on an ephemeral port, points ngrok.forward at THAT port
(not the local port), hard-fails on bind error (no local fallback),
tears down cleanly on ngrok failure. BROWSE_TUNNEL=1 startup uses
the same pattern.
* closeTunnel() helper — single teardown path for both the ngrok
listener and the tunnel Bun.serve listener.
* resolveNgrokAuthtoken() helper — shared authtoken lookup across
/tunnel/start and BROWSE_TUNNEL=1 startup (was duplicated).
* TUNNEL_COMMANDS check in /command dispatch: on the tunnel surface,
commands outside the allowlist return 403 with a list of allowed
commands as a hint.
* Probe paths in /pair and /tunnel/start migrated from /health to
GET /connect — the only unauth path reachable on the tunnel surface
under the new architecture.
Test updates in browse/test/server-auth.test.ts:
* /pair liveness-verify test: assert via closeTunnel() helper instead
of the inline `tunnelActive = false; tunnelUrl = null` lines that
the helper subsumes.
* /tunnel/start cached-tunnel test: same closeTunnel() adaptation.
Credit
Derived from PR #1026 by @garagon — thanks for flagging the critical
bug that drove the architectural rewrite. The per-request
isTunneledRequest approach from #1026 is superseded by physical port
separation here; the underlying report remains the root cause for the
entire v1.6.0.0 wave.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
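The factory-plus-filter shape described above can be sketched like this. Names mirror the commit text (TUNNEL_PATHS, makeFetchHandler), but the bodies are hypothetical — in particular the `"ROOT"` bearer marker stands in for a real root-token comparison, and real handlers return Responses, not status numbers:

```typescript
// Illustrative sketch of the surface-captured fetch handler.
type Surface = "local" | "tunnel";

const TUNNEL_PATHS = new Set(["/connect", "/command", "/sidebar-chat"]);

function makeFetchHandler(surface: Surface) {
  // The closure captures which listener (TCP socket) accepted the
  // request, so no header inference is ever needed.
  return function handle(req: { path: string; bearer?: string }): number {
    if (surface === "tunnel") {
      if (!TUNNEL_PATHS.has(req.path)) return 404; // not on the allowlist
      if (req.bearer === "ROOT") return 403; // root token rejected on tunnel
      if (req.path !== "/connect" && !req.bearer) return 401; // scoped token required
    }
    return 200; // fall through to normal route dispatch
  };
}
```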
* test(security): add source-level guards for dual-listener architecture
23 source-level assertions that keep future contributors from silently
widening the tunnel surface during a routine refactor. Covers:
* Surface type + tunnelServer state variable shape
* TUNNEL_PATHS is a closed set of /connect, /command, /sidebar-chat
(and NOT /health, /welcome, /cookie-picker, /inspector/*, /pair,
/token, /refs, /activity/stream, /tunnel/{start,stop})
* TUNNEL_COMMANDS includes browser-driving ops only (and NOT
launch-browser, tunnel-start, token-mint, cookie-import, etc.)
* makeFetchHandler(surface) factory exists and is wired to both
listeners with the correct surface parameter
* Tunnel filter runs BEFORE any route dispatch, with 404/403/401
responses and logged denials for each reason
* GET /connect returns {alive: true} unauth
* /command dispatch enforces TUNNEL_COMMANDS on tunnel surface
* closeTunnel() helper tears down ngrok + Bun.serve listener
* /tunnel/start binds on ephemeral port, points ngrok at TUNNEL_PORT
(not local port), hard-fails on bind error (no fallback), probes
cached tunnel via GET /connect (not /health), tears down on
ngrok.forward failure
* BROWSE_TUNNEL=1 startup uses the dual-listener pattern
* logTunnelDenial wired for all three denial reasons
* /connect rate limit is 300/min, not 3/min
All 23 tests pass. Behavioral integration tests (spawn subprocess, real
network) live in the E2E suite that lands later in this wave.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
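A source-level guard of the kind described above reads the source file as text and asserts a structural invariant, so a refactor that widens the tunnel surface fails in CI before review. A minimal sketch — the regex and error wording are hypothetical, not the actual dual-listener.test.ts assertions:

```typescript
// Assert TUNNEL_PATHS in a source string is a closed set of exactly
// the three allowed paths; anything extra trips the guard.
function assertClosedAllowlist(source: string): void {
  const m = source.match(/TUNNEL_PATHS\s*=\s*new Set\(\[([^\]]*)\]\)/);
  if (!m) throw new Error("TUNNEL_PATHS allowlist not found");
  const entries = m[1].match(/"[^"]+"/g) ?? [];
  const allowed = new Set(['"/connect"', '"/command"', '"/sidebar-chat"']);
  for (const e of entries) {
    if (!allowed.has(e)) throw new Error(`unexpected tunnel path ${e}`);
  }
}
```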
* security: gate download + scrape through validateNavigationUrl (SSRF)
The `goto` command was correctly wired through validateNavigationUrl,
but `download` and `scrape` called page.request.fetch(url, ...) directly.
A caller with the default write scope could hit the /command endpoint
and ask the daemon to fetch http://169.254.169.254/latest/meta-data/
(AWS IMDSv1) or the GCP/Azure/internal equivalents. The response body
comes back as base64 or lands on disk where GET /file serves it.
Fix: call validateNavigationUrl(url) immediately before each
page.request.fetch() call site in download and in the scrape loop.
Same blocklist that already protects `goto`: file://, javascript:,
data:, chrome://, cloud metadata (IPv4 all encodings, IPv6 ULA,
metadata.*.internal).
Tests: extend browse/test/url-validation.test.ts with a source-level
guard that walks every `await page.request.fetch(` call site and
asserts a validateNavigationUrl call precedes it within the same
branch. Regression trips before code review if a future refactor
drops the gate.
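The gate pattern — validate immediately before each fetch call site — looks like this in outline. The blocklist below is a deliberately reduced subset of the one described above (schemes plus the canonical IPv4 metadata address); the real validateNavigationUrl covers all IPv4 encodings, IPv6 ULA, and metadata.*.internal:

```typescript
// Reduced sketch of the SSRF gate; call before every page.request.fetch.
const BLOCKED_SCHEMES = new Set(["file:", "javascript:", "data:", "chrome:"]);

function validateNavigationUrl(raw: string): void {
  const u = new URL(raw); // throws on malformed URLs
  if (BLOCKED_SCHEMES.has(u.protocol)) {
    throw new Error(`blocked scheme: ${u.protocol}`);
  }
  // Cloud metadata endpoint (canonical encoding only, for illustration).
  if (u.hostname === "169.254.169.254") {
    throw new Error("blocked: cloud metadata address");
  }
}
```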
* security: route splitForScoped through envelope sentinel escape
The scoped-token snapshot path in snapshot.ts built its untrusted
block by pushing the raw accessibility-tree lines between the literal
`═══ BEGIN UNTRUSTED WEB CONTENT ═══` / `═══ END UNTRUSTED WEB CONTENT ═══`
sentinels. The full-page wrap path in content-security.ts already
applied a zero-width-space escape on those exact strings to prevent
sentinel injection, but the scoped path skipped it.
Net effect: a page whose rendered text contains the literal sentinel
can close the envelope early from inside untrusted content and forge
a fake "trusted" block for the LLM. That includes fabricating
interactive `@eN` references the agent will act on.
Fix:
* Extract the zero-width-space escape into a named, exported helper
`escapeEnvelopeSentinels(content)` in content-security.ts.
* Have `wrapUntrustedPageContent` call it (behavior unchanged on
that path — same bytes out).
* Import the helper in snapshot.ts and map it over `untrustedLines`
in the `splitForScoped` branch before pushing the BEGIN sentinel.
Tests: add a describe block in content-security.test.ts that covers
* `escapeEnvelopeSentinels` defuses BEGIN and END markers;
* `escapeEnvelopeSentinels` leaves normal text untouched;
* `wrapUntrustedPageContent` still emits exactly one real envelope
pair when hostile content contains forged sentinels;
* snapshot.ts imports the helper;
* the scoped-snapshot branch calls `escapeEnvelopeSentinels` before
pushing the BEGIN sentinel (source-level regression — if a future
refactor reorders this, the test trips).
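The zero-width-space escape can be sketched as below. This reproduces the idea, not the exact bytes of the real helper in content-security.ts — inserting a zero-width space inside each sentinel means the literal marker can never survive intact inside untrusted text, while the visible rendering is unchanged:

```typescript
// Sketch of escapeEnvelopeSentinels: defuse forged envelope markers.
const ZWSP = "\u200B";
const SENTINELS = [
  "═══ BEGIN UNTRUSTED WEB CONTENT ═══",
  "═══ END UNTRUSTED WEB CONTENT ═══",
];

function escapeEnvelopeSentinels(content: string): string {
  let out = content;
  for (const s of SENTINELS) {
    // Break the sentinel after its first character so the exact string
    // can never appear inside untrusted content.
    out = out.split(s).join(s[0] + ZWSP + s.slice(1));
  }
  return out;
}
```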
* security: extend hidden-element detection to all DOM-reading channels
The Confusion Protocol envelope wrap (`wrapUntrustedPageContent`)
covers every scoped PAGE_CONTENT_COMMAND, but the hidden-element
ARIA-injection detection layer only ran for `text`. Other DOM-reading
channels (html, links, forms, accessibility, attrs, data, media,
ux-audit) returned their output through the envelope with no hidden-
content filter, so a page serving a display:none div that instructs
the agent to disregard prior system messages, or an aria-label that
claims to put the LLM in admin mode, leaked the injection payload on
any non-text channel. The envelope alone does not mitigate this, and
the page itself never rendered the hostile content to the human
operator.
Fix:
* New export `DOM_CONTENT_COMMANDS` in commands.ts — the subset of
PAGE_CONTENT_COMMANDS that derives its output from the live DOM.
Console and dialog stay out; they read separate runtime state.
* server.ts runs `markHiddenElements` + `cleanupHiddenMarkers` for
every scoped command in this set. `text` keeps its existing
`getCleanTextWithStripping` path (hidden elements physically
stripped before the read). All other channels keep their output
format but emit flagged elements as CONTENT WARNINGS on the
envelope, so the LLM sees what it would otherwise have consumed
silently.
* Hidden-element descriptions merge into `combinedWarnings`
alongside content-filter warnings before the wrap call.
Tests: new describe block in content-security.test.ts covering
* `DOM_CONTENT_COMMANDS` export shape and channel membership;
* dispatch gates on `DOM_CONTENT_COMMANDS.has(command)`, not the
literal `text` string;
* hiddenContentWarnings plumbs into `combinedWarnings` and reaches
wrapUntrustedPageContent;
* DOM_CONTENT_COMMANDS is a strict subset of PAGE_CONTENT_COMMANDS.
Existing datamarking, envelope wrap, centralized-wrapping, and chain
security suites stay green (52 pass, 0 fail).
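The set relationship above — a DOM-derived subset that dispatch gates on, rather than a literal `text` check — can be sketched like this. Channel membership follows the commit text; exact contents of the real commands.ts sets may differ:

```typescript
// Illustrative shape of the command-set relationship.
const PAGE_CONTENT_COMMANDS = new Set([
  "text", "html", "links", "forms", "accessibility", "attrs",
  "data", "media", "ux-audit", "console", "dialog",
]);

// DOM-derived subset: console and dialog read separate runtime state,
// not the live DOM, so they stay out.
const DOM_CONTENT_COMMANDS = new Set(
  [...PAGE_CONTENT_COMMANDS].filter((c) => c !== "console" && c !== "dialog"),
);

function needsHiddenElementScan(command: string): boolean {
  // Gate on set membership, not the literal "text" string.
  return DOM_CONTENT_COMMANDS.has(command);
}
```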
* security: validate --from-file payload paths for parity with direct paths
The direct `load-html <file>` path runs every caller-supplied file path
through validateReadPath() so reads stay confined to SAFE_DIRECTORIES
(cwd, TEMP_DIR). The `load-html --from-file <payload.json>` shortcut
and its sibling `pdf --from-file <payload.json>` skipped that check and
went straight to fs.readFileSync(). An MCP caller that picks the
payload path (or any caller whose payload argument is reachable from
attacker-influenced text) could use --from-file as a read-anywhere
escape hatch for the safe-dirs policy.
Fix: call validateReadPath(path.resolve(payloadPath)) before readFileSync
at both sites. Error surface mirrors the direct-path branch so ops and
agent errors stay consistent.
Test coverage in browse/test/from-file-path-validation.test.ts:
- source-level: validateReadPath precedes readFileSync in the load-html
--from-file branch (write-commands.ts) and the pdf --from-file parser
(meta-commands.ts)
- error-message parity: both sites reference SAFE_DIRECTORIES
Related security audit pattern: R3 F002 (validateNavigationUrl gap on
download/scrape) and R3 F008 (markHiddenElements gap on 10 DOM commands)
were the same shape — a defense that existed on the primary code path
but not its shortcut sibling. This PR closes the same class of gap on
the --from-file shortcuts.
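The confinement check has this general shape — resolve first, then test containment against each safe directory. A hypothetical sketch, not the actual validateReadPath; the real SAFE_DIRECTORIES list and error wording live in the browse codebase:

```typescript
import * as path from "node:path";

// Sketch: confine reads to SAFE_DIRECTORIES (cwd, TEMP_DIR).
const SAFE_DIRECTORIES = [process.cwd(), "/tmp"];

function validateReadPath(p: string): string {
  const resolved = path.resolve(p); // normalizes ../ segments first
  const ok = SAFE_DIRECTORIES.some(
    (dir) => resolved === dir || resolved.startsWith(dir + path.sep),
  );
  if (!ok) {
    throw new Error(`path outside SAFE_DIRECTORIES: ${resolved}`);
  }
  return resolved;
}
```

Resolving before the prefix check is the load-bearing step: `/tmp/../etc/passwd` normalizes to `/etc/passwd` and is rejected, so `--from-file` cannot be used as a read-anywhere escape hatch.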
* fix(design): escape url.origin when injecting into served HTML
serve.ts injected url.origin into a single-quoted JS string in
the response body. A local request with a crafted Host header
(e.g. Host: "evil'-alert(1)-'x") would break out of the string
and execute JS in the 127.0.0.1:<port> origin opened by the
design board. Low severity — bound to localhost, requires a
local attacker — but no reason not to escape.
Fix: JSON.stringify(url.origin) produces a properly quoted,
escaped JS string literal in one call.
Also includes Prettier reformatting (single→double quotes,
trailing commas, line wrapping) applied by the repo's
PostToolUse formatter hook. Security change is the one line
in the HTML injection; everything else is whitespace/style.
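Why JSON.stringify is the one-call fix: it emits a double-quoted JS string literal with quotes and backslashes escaped, so a hostile Host-derived origin cannot break out of the injected string. A sketch with a hypothetical `injectOrigin` helper (the real code injects into a larger serve.ts response body):

```typescript
// Emit the origin as a properly quoted, escaped JS string literal.
function injectOrigin(origin: string): string {
  return `<script>const ORIGIN = ${JSON.stringify(origin)};</script>`;
}
```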
* fix(scripts): drop shell:true from slop-diff npx invocations
spawnSync('npx', [...], { shell: true }) invokes /bin/sh -c
with the args concatenated, subjecting them to shell parsing
(word splitting, glob expansion, metacharacter interpretation).
No user input reaches these calls today, so this is not exploitable —
but the posture is wrong: npx should be invoked directly with
array-form args, not through a shell.
Fix: scope shell:true to process.platform === 'win32' where
npx is actually a .cmd requiring the shell. POSIX runs the
npx binary directly with array-form args.
Also includes Prettier reformatting (single→double quotes,
trailing commas, line wrapping) applied by the repo's
PostToolUse formatter hook. Security-relevant change is just
the two shell:true -> shell: process.platform === 'win32'
lines; everything else is whitespace/style.
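The platform-scoped option looks like this in outline (the wrapper name is illustrative — the real calls live in the slop-diff script):

```typescript
import { spawnSync } from "node:child_process";

// shell:true only where npx actually needs it: on Windows, npx is a
// .cmd shim that requires cmd.exe. On POSIX the npx binary runs
// directly with array-form args, so metacharacters in args are inert.
const npxNeedsShell = process.platform === "win32";

function runNpx(args: string[]) {
  return spawnSync("npx", args, { shell: npxNeedsShell, encoding: "utf8" });
}
```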
* security(E3): gate GSTACK_SLUG on /welcome path traversal
The /welcome handler interpolates GSTACK_SLUG directly into the filesystem
path used to locate the project-local welcome page. Without validation, a
slug like "../../etc/passwd" would resolve to
~/.gstack/projects/../../etc/passwd/designs/welcome-page-20260331/finalized.html
— classic path traversal.
Not exploitable today: GSTACK_SLUG is set by the gstack CLI at daemon launch,
and an attacker would already need local env-var access to poison it. But
the gate is one regex (^[a-z0-9_-]+$), and a defense-in-depth pass costs us
nothing when the cost of being wrong is arbitrary file read via /welcome.
Fall back to the safe 'unknown' literal when the slug fails validation —
same fallback the code already uses when GSTACK_SLUG is unset. No behavior
change for legitimate slugs (they all match the regex).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
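The whole gate is one regex plus the existing fallback. A sketch with a hypothetical `safeSlug` helper name:

```typescript
// Reject any slug with path separators, dots, or other surprises;
// fall back to the same 'unknown' literal used when GSTACK_SLUG is unset.
const SLUG_RE = /^[a-z0-9_-]+$/;

function safeSlug(raw: string | undefined): string {
  return raw && SLUG_RE.test(raw) ? raw : "unknown";
}
```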
* security(N1): replace ?token= SSE auth with HttpOnly session cookie
Activity stream and inspector events SSE endpoints accepted the root
AUTH_TOKEN via `?token=` query param (EventSource can't send Authorization
headers). URLs leak to browser history, referer headers, server logs,
crash reports, and refactoring accidents. Codex flagged this during the
/plan-ceo-review outside voice pass.
New auth model: the extension calls POST /sse-session with a Bearer token
and receives a view-only session cookie (HttpOnly, SameSite=Strict, 30-min
TTL). EventSource is opened with `withCredentials: true` so the browser
sends the cookie back on the SSE connection. The ?token= query param is
GONE — no more URL-borne secrets.
Scope isolation (prior learning cookie-picker-auth-isolation, 10/10
confidence): the SSE session cookie grants access to /activity/stream and
/inspector/events ONLY. The token is never valid against /command, /token,
or any mutating endpoint. A leaked cookie can watch activity; it cannot
execute browser commands.
Components
* browse/src/sse-session-cookie.ts — registry: mint/validate/extract/
build-cookie. 256-bit tokens, 30-min TTL, lazy expiry pruning,
no imports from token-registry (scope isolation enforced by module
boundary).
* browse/src/server.ts — POST /sse-session mint endpoint (requires
Bearer). /activity/stream and /inspector/events now accept Bearer
OR the session cookie, and reject ?token= query param.
* extension/sidepanel.js — ensureSseSessionCookie() bootstrap call,
EventSource opened with withCredentials:true on both SSE endpoints.
Tested via the source guards; behavioral test is the E2E pairing
flow that lands later in the wave.
* browse/test/sse-session-cookie.test.ts — 20 unit tests covering
mint entropy, TTL enforcement, cookie flag invariants, cookie
parsing from multi-cookie headers, and scope-isolation contract
guard (module must not import token-registry).
* browse/test/server-auth.test.ts — existing /activity/stream auth
test updated to assert the new cookie-based gate and the absence
of the ?token= query param.
Cookie flag choices:
* HttpOnly: token not readable from page JS (mitigates XSS
exfiltration).
* SameSite=Strict: cookie not sent on cross-site requests (mitigates
CSRF). Fine for SSE because the extension connects to 127.0.0.1
directly.
* Path=/: cookie scoped to the whole origin.
* Max-Age=1800: 30 minutes, matches TTL. Extension re-mints on
reconnect when daemon restarts.
* Secure NOT set: daemon binds to 127.0.0.1 over plain HTTP. Adding
Secure would block the browser from ever sending the cookie back.
Add Secure when gstack ships over HTTPS.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
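The registry described above can be sketched as mint/validate/build-cookie over an in-memory map. This is an illustrative shape, not the real browse/src/sse-session-cookie.ts — note that nothing here imports or touches a root-token registry, which is exactly the module-boundary scope isolation the commit relies on:

```typescript
import { randomBytes } from "node:crypto";

// Sketch of a view-only SSE session registry.
const TTL_MS = 30 * 60 * 1000; // 30 minutes, matches Max-Age=1800
const sessions = new Map<string, number>(); // token -> expiry timestamp

function mintSseSession(now: number = Date.now()): string {
  const token = randomBytes(32).toString("hex"); // 256-bit token
  sessions.set(token, now + TTL_MS);
  return token;
}

function validateSseSession(token: string, now: number = Date.now()): boolean {
  const expiry = sessions.get(token);
  if (expiry === undefined) return false;
  if (now >= expiry) {
    sessions.delete(token); // lazy expiry pruning
    return false;
  }
  return true;
}

function buildSseCookie(token: string): string {
  // No Secure flag: the daemon serves plain HTTP on 127.0.0.1, and
  // Secure would stop the browser from ever sending the cookie back.
  return `gstack_sse=${token}; HttpOnly; SameSite=Strict; Path=/; Max-Age=1800`;
}
```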
* security(N2): document Windows v20 ABE elevation path on CDP port
The existing comment around the cookie-import-browser --remote-debugging-port
launch claimed "threat model: no worse than baseline." That's wrong on
Windows with App-Bound Encryption v20. A same-user local process that
opens the cookie SQLite DB directly CANNOT decrypt v20 values (DPAPI
context is bound to the browser process). The CDP port lets them bypass
that: connect to the debug port, call Network.getAllCookies inside Chrome,
walk away with decrypted v20 cookies.
The correct fix is to switch from TCP --remote-debugging-port to
--remote-debugging-pipe so the CDP transport is a stdio pipe, not a
socket. That requires restructuring the CDP WebSocket client in this
module and Playwright doesn't expose the pipe transport out of the box.
Non-trivial, deferred from the v1.6.0.0 wave.
This commit updates the comment to correctly describe the threat and
points at the tracking issue. No code change to the launch itself.
Follow-up: #1136.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs(E2): document dual-listener tunnel architecture in ARCHITECTURE.md
Adds an explicit per-endpoint disposition table to the Security model
section, covering the v1.6.0.0 dual-listener refactor. Every HTTP
endpoint now has a documented local-vs-tunnel answer. Future audits
(and future contributors wondering "is it safe to add X to the tunnel
surface?") can read this instead of reverse-engineering server.ts.
Also documents:
* Why physical port separation beats per-request header inference
(ngrok behavior drift, local proxies can forge headers, etc.)
* Tunnel surface denial logging → ~/.gstack/security/attempts.jsonl
* SSE session cookie model (gstack_sse, 30-min TTL, stream-scope only,
module-boundary-enforced scope isolation)
* N2 non-goal for Windows v20 ABE via CDP port (tracking #1136)
No code changes.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* test(E1): end-to-end pair-agent flow against a spawned daemon
Spawns the browse daemon as a subprocess with BROWSE_HEADLESS_SKIP=1 so
the HTTP layer runs without a real browser. Exercises:
* GET /health — token delivery for chrome-extension origin, withheld
otherwise (the F1 + PR #1026 invariant)
* GET /connect — alive probe returns {alive:true} unauth
* POST /pair — root Bearer required (403 without), returns setup_key
* POST /connect — setup_key exchange mints a distinct scoped token
* POST /command — 401 without auth
* POST /sse-session — Bearer required, Set-Cookie has HttpOnly +
SameSite=Strict (the N1 invariant)
* GET /activity/stream — 401 without auth
* GET /activity/stream?token= — 401 (the old ?token= query param is
REJECTED, which is the whole point of N1)
* GET /welcome — serves HTML, does not leak /etc/passwd content under
the default 'unknown' slug (E3 regex gate)
12 behavioral tests, ~220ms end-to-end, no network dependencies, no
ngrok, no real browser. This is the receipt for the wave's central
'pair-agent still works + the security boundary holds' claim.
Tunnel-port binding (/tunnel/start) is deliberately NOT exercised here
— it requires an ngrok authtoken and live network. The dual-listener
route allowlist is covered by source-level guards in
dual-listener.test.ts; behavioral tunnel testing belongs in a separate
paid-evals harness.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* release(v1.6.0.0): bump VERSION + CHANGELOG for security wave
Architectural bump, not patch: dual-listener HTTP refactor changes the
daemon's tunnel-exposure model. See CHANGELOG for the full release
summary (~950 words) covering the five root causes this wave closes:
1. /health token leak over ngrok (F1 + E3 + test infra)
2. /cookie-picker + /inspector exposed over the tunnel (F1)
3. ?token=<ROOT> in SSE URLs leaking to logs/referer/history (N1)
4. /welcome GSTACK_SLUG path traversal (E3)
5. Windows v20 ABE elevation via CDP port (N2 — documented non-goal,
tracked as #1136)
Plus the base PRs: SSRF gate (#1029), envelope sentinel escape (#1031),
DOM-channel hidden-element coverage (#1032), --from-file path validation
(#1103), and 2 commits from #1073 (@theqazi).
VERSION + package.json bumped to 1.6.0.0. CHANGELOG entry covers
credits (@garagon, @Hybirdss, @HMAKT99, @theqazi), review lineage (CEO
→ Codex outside voice → Eng), and the non-goal tracking issue.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: pre-landing review findings (4 auto-fixes)
Addresses 4 findings from the Claude adversarial subagent on the
v1.6.0.0 security wave diff. No user-visible behavior change; all
are defense-in-depth hardening of newly-introduced code.
1. GET /connect rate-limited (was POST-only) [HIGH conf 8/10]
Attacker discovering the ngrok URL could probe unlimited GETs for
daemon enumeration. Now shares the global /connect counter.
2. ngrok listener leak on tunnel startup failure [MEDIUM conf 8/10]
If ngrok.forward() resolved but tunnelListener.url() or the
state-file write threw, the Bun listener was torn down but the
ngrok session was leaked. Fixed in BOTH /tunnel/start and
BROWSE_TUNNEL=1 startup paths.
3. GSTACK_SKILL_ROOT path-traversal gate [MEDIUM conf 8/10]
Symmetric with E3's GSTACK_SLUG regex gate — reject values
containing '..' before interpolating into the welcome-page path.
4. SSE session registry pruning [LOW conf 7/10]
pruneExpired() only checked 10 entries per mint call. Now runs
on every validate too, checks 20 entries, with a hard 10k cap as
backstop. Prevents registry growth under sustained extension
reconnect pressure.
Tests remain green (56/56 in sse-session-cookie + dual-listener +
pair-agent-e2e suites).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: update project documentation for v1.6.0.0
Reflect the dual-listener tunnel architecture, SSE session cookies,
SSRF guards, and Windows v20 ABE non-goal across the three docs
users actually read for remote-agent and browser auth context:
- docs/REMOTE_BROWSER_ACCESS.md: rewrote Architecture diagram for
dual listeners, fixed /connect rate limit (3/min → 300/min),
removed stale "/health requires no auth" (now 404 on tunnel),
added SSE cookie auth, expanded Security Model with tunnel
allowlist, SSRF guards, /welcome path traversal defense, and
the Windows v20 ABE tracking note.
- BROWSER.md: added dual-listener paragraph to Authentication and
linked to ARCHITECTURE.md endpoint table. Replaced the stale
?token= SSE auth note with the HttpOnly gstack_sse cookie flow.
- CLAUDE.md: added Transport-layer security section above the
sidebar prompt-injection stack so contributors editing server.ts,
sse-session-cookie.ts, or tunnel-denial-log.ts see the load-bearing
module boundaries before touching them.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(make-pdf): write --from-file payload to /tmp, not os.tmpdir()
make-pdf's browseClient wrote its --from-file payload to os.tmpdir(),
which is /var/folders/... on macOS. v1.6.0.0's PR #1103 cherry-pick
tightened browse load-html --from-file to validate against the
safe-dirs allowlist ([TEMP_DIR, cwd] where TEMP_DIR is '/tmp' on
macOS/Linux, os.tmpdir() on Windows). This closed a CLI/API parity
gap but broke make-pdf on macOS because /var/folders/... is outside
the allowlist.
Fix: mirror browse's TEMP_DIR convention — use '/tmp' on non-Windows,
os.tmpdir() on Windows. The make-pdf-gate CI failure on macOS-latest
(run 72440797490) is caused by exactly this: the payload file was
rejected by validateReadPath.
Verified locally: the combined-gate e2e test now passes after
rebuilding make-pdf/dist/pdf.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
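The mirrored convention is a one-liner; sketched here as an assumed shape rather than the actual browse source:

```typescript
import * as os from "node:os";

// Mirror browse's TEMP_DIR convention: literal /tmp on POSIX (macOS's
// os.tmpdir() is /var/folders/..., outside the safe-dirs allowlist);
// os.tmpdir() on Windows.
const TEMP_DIR = process.platform === "win32" ? os.tmpdir() : "/tmp";
```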
* fix(sidebar): killAgent resets per-tab state; align tests with current agent event format
Two pre-existing bugs surfaced while running the full e2e suite on the
sec-wave branch. Both pre-date v1.6.0.0 (same failures on main at
e23ff280) but blocked the ship verification, so fixing now.
### Bug 1: killAgent leaked stale per-tab state
`killAgent()` reset the legacy globals (agentProcess, agentStatus,
etc.) but never touched the per-tab `tabAgents` Map. Meanwhile
`/sidebar-command` routes on `tabState.status` from that Map, not the
legacy globals. Consequence: after a kill (including the implicit
kill in `/sidebar-session/new`), the next /sidebar-command on the
same tab saw `tabState.status === 'processing'` and fell into the
queue branch, silently NOT spawning an agent. Integration tests that
called resetState between cases all failed with empty queues.
Fix: when targetTabId is supplied, reset that one tab's state; when
called without a tab (session-new, full kill), reset ALL tab states.
Matches the semantic boundary already used for the cancel-file write.
### Bug 2: sidebar-integration tests drifted from current event format
`agent events appear in /sidebar-chat` posted the raw Claude streaming
format (`{type: 'assistant', message: {content: [...]}}`) but
`processAgentEvent` in server.ts only handles the simplified types
that sidebar-agent.ts pre-processes into (text, text_delta, tool_use,
result, agent_error, security_event). The architecture moved
pre-processing into sidebar-agent.ts at some point and this test
never got updated. Fixed by sending the pre-processed `{type:
'text', text: '...'}` format — which is actually what the server sees
in production.
Also removed the `entry.prompt` URL-containment check in the
queue-write test. The URL is carried on entry.pageUrl (metadata) by
design: the system prompt tells Claude to run `browse url` to fetch
the actual page rather than trust any URL in the prompt body. That's
the URL-based prompt-injection defense. The prompt SHOULD NOT
contain the URL, so the test assertion was wrong for the current
security posture.
### Verification
- `bun test browse/test/sidebar-integration.test.ts` → 13/13 pass
(was 6/13 on both main and branch before this commit)
- Full `bun run test` → exit 0, zero fail markers
- No behavior change for production sidebar flows: killAgent was
already supposed to return the agent to idle; it just wasn't fully
doing so. Per-tab reset now matches the documented semantics.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
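The per-tab reset semantics from Bug 1 can be sketched as below. The state shapes and map name are illustrative (modeled on the `tabAgents` Map the commit describes), not the actual sidebar server code:

```typescript
// Sketch: targeted kill resets one tab; full kill / session-new resets all.
type TabState = { status: "idle" | "processing"; queue: string[] };
const tabAgents = new Map<number, TabState>();

function killAgent(targetTabId?: number): void {
  const reset = (s: TabState) => {
    s.status = "idle"; // without this, /sidebar-command routes into the
    s.queue = [];      // queue branch and silently never spawns an agent
  };
  if (targetTabId !== undefined) {
    const s = tabAgents.get(targetTabId);
    if (s) reset(s); // reset only the targeted tab's state
  } else {
    for (const s of tabAgents.values()) reset(s); // session-new, full kill
  }
}
```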
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: gus <gustavoraularagon@gmail.com>
Co-authored-by: Mohammed Qazi <10266060+theqazi@users.noreply.github.com>
631 lines
34 KiB
Markdown
# gstack development

## Commands

```bash
bun install              # install dependencies
bun test                 # run free tests (browse + snapshot + skill validation)
bun run test:evals       # run paid evals: LLM judge + E2E (diff-based, ~$4/run max)
bun run test:evals:all   # run ALL paid evals regardless of diff
bun run test:gate        # run gate-tier tests only (CI default, blocks merge)
bun run test:periodic    # run periodic-tier tests only (weekly cron / manual)
bun run test:e2e         # run E2E tests only (diff-based, ~$3.85/run max)
bun run test:e2e:all     # run ALL E2E tests regardless of diff
bun run eval:select      # show which tests would run based on current diff
bun run dev <cmd>        # run CLI in dev mode, e.g. bun run dev goto https://example.com
bun run build            # gen docs + compile binaries
bun run gen:skill-docs   # regenerate SKILL.md files from templates
bun run skill:check      # health dashboard for all skills
bun run dev:skill        # watch mode: auto-regen + validate on change
bun run eval:list        # list all eval runs from ~/.gstack-dev/evals/
bun run eval:compare     # compare two eval runs (auto-picks most recent)
bun run eval:summary     # aggregate stats across all eval runs
bun run slop             # full slop-scan report (all files)
bun run slop:diff        # slop findings in files changed on this branch only
```

`test:evals` requires `ANTHROPIC_API_KEY`. Codex E2E tests (`test/codex-e2e.test.ts`)
use Codex's own auth from `~/.codex/` config — no `OPENAI_API_KEY` env var needed.
E2E tests stream progress in real-time (tool-by-tool via `--output-format stream-json --verbose`).
Results are persisted to `~/.gstack-dev/evals/` with auto-comparison against the
previous run.

**Diff-based test selection:** `test:evals` and `test:e2e` auto-select tests based
on `git diff` against the base branch. Each test declares its file dependencies in
`test/helpers/touchfiles.ts`. Changes to global touchfiles (session-runner, eval-store,
touchfiles.ts itself) trigger all tests. Use `EVALS_ALL=1` or the `:all` script
variants to force all tests. Run `eval:select` to preview which tests would run.

**Two-tier system:** Tests are classified as `gate` or `periodic` in `E2E_TIERS`
(in `test/helpers/touchfiles.ts`). CI runs only gate tests (`EVALS_TIER=gate`);
periodic tests run weekly via cron or manually. Use `EVALS_TIER=gate` or
`EVALS_TIER=periodic` to filter. When adding new E2E tests, classify them:

1. Safety guardrail or deterministic functional test? -> `gate`
2. Quality benchmark, Opus model test, or non-deterministic? -> `periodic`
3. Requires external service (Codex, Gemini)? -> `periodic`
## Testing

```bash
bun test            # run before every commit — free, <2s
bun run test:evals  # run before shipping — paid, diff-based (~$4/run max)
```

`bun test` runs skill validation, gen-skill-docs quality checks, and browse
integration tests. `bun run test:evals` runs LLM-judge quality evals and E2E
tests via `claude -p`. Both must pass before creating a PR.
## Project structure

```
gstack/
├── browse/                    # Headless browser CLI (Playwright)
│   ├── src/                   # CLI + server + commands
│   │   ├── commands.ts        # Command registry (single source of truth)
│   │   └── snapshot.ts        # SNAPSHOT_FLAGS metadata array
│   ├── test/                  # Integration tests + fixtures
│   └── dist/                  # Compiled binary
├── hosts/                     # Typed host configs (one per AI agent)
│   ├── claude.ts              # Primary host config
│   ├── codex.ts, factory.ts, kiro.ts                  # Existing hosts
│   ├── opencode.ts, slate.ts, cursor.ts, openclaw.ts  # IDE hosts
│   ├── hermes.ts, gbrain.ts   # Agent runtime hosts
│   └── index.ts               # Registry: exports all, derives Host type
├── scripts/                   # Build + DX tooling
│   ├── gen-skill-docs.ts      # Template → SKILL.md generator (config-driven)
│   ├── host-config.ts         # HostConfig interface + validator
│   ├── host-config-export.ts  # Shell bridge for setup script
│   ├── host-adapters/         # Host-specific adapters (OpenClaw tool mapping)
│   ├── resolvers/             # Template resolver modules (preamble, design, review, gbrain, etc.)
│   ├── skill-check.ts         # Health dashboard
│   └── dev-skill.ts           # Watch mode
├── test/                      # Skill validation + eval tests
│   ├── helpers/               # skill-parser.ts, session-runner.ts, llm-judge.ts, eval-store.ts
│   ├── fixtures/              # Ground truth JSON, planted-bug fixtures, eval baselines
│   ├── skill-validation.test.ts  # Tier 1: static validation (free, <1s)
│   ├── gen-skill-docs.test.ts    # Tier 1: generator quality (free, <1s)
│   ├── skill-llm-eval.test.ts    # Tier 3: LLM-as-judge (~$0.15/run)
│   └── skill-e2e-*.test.ts       # Tier 2: E2E via claude -p (~$3.85/run, split by category)
├── qa-only/                   # /qa-only skill (report-only QA, no fixes)
├── plan-design-review/        # /plan-design-review skill (report-only design audit)
├── design-review/             # /design-review skill (design audit + fix loop)
├── ship/                      # Ship workflow skill
├── review/                    # PR review skill
├── plan-ceo-review/           # /plan-ceo-review skill
├── plan-eng-review/           # /plan-eng-review skill
├── autoplan/                  # /autoplan skill (auto-review pipeline: CEO → design → eng)
├── benchmark/                 # /benchmark skill (performance regression detection)
├── canary/                    # /canary skill (post-deploy monitoring loop)
├── codex/                     # /codex skill (multi-AI second opinion via OpenAI Codex CLI)
├── land-and-deploy/           # /land-and-deploy skill (merge → deploy → canary verify)
├── office-hours/              # /office-hours skill (YC Office Hours — startup diagnostic + builder brainstorm)
├── investigate/               # /investigate skill (systematic root-cause debugging)
├── retro/                     # Retrospective skill (includes /retro global cross-project mode)
├── bin/                       # CLI utilities (gstack-repo-mode, gstack-slug, gstack-config, etc.)
├── document-release/          # /document-release skill (post-ship doc updates)
├── cso/                       # /cso skill (OWASP Top 10 + STRIDE security audit)
├── design-consultation/       # /design-consultation skill (design system from scratch)
├── design-shotgun/            # /design-shotgun skill (visual design exploration)
├── open-gstack-browser/       # /open-gstack-browser skill (launch GStack Browser)
├── connect-chrome/            # symlink → open-gstack-browser (backwards compat)
├── design/                    # Design binary CLI (GPT Image API)
│   ├── src/                   # CLI + commands (generate, variants, compare, serve, etc.)
│   ├── test/                  # Integration tests
│   └── dist/                  # Compiled binary
├── extension/                 # Chrome extension (side panel + activity feed + CSS inspector)
├── lib/                       # Shared libraries (worktree.ts)
├── docs/designs/              # Design documents
├── setup-deploy/              # /setup-deploy skill (one-time deploy config)
├── .github/                   # CI workflows + Docker image
│   ├── workflows/             # evals.yml (E2E on Ubicloud), skill-docs.yml, actionlint.yml
│   └── docker/                # Dockerfile.ci (pre-baked toolchain + Playwright/Chromium)
├── contrib/                   # Contributor-only tools (never installed for users)
│   └── add-host/              # /gstack-contrib-add-host skill
├── setup                      # One-time setup: build binary + symlink skills
├── SKILL.md                   # Generated from SKILL.md.tmpl (don't edit directly)
├── SKILL.md.tmpl              # Template: edit this, run gen:skill-docs
├── ETHOS.md                   # Builder philosophy (Boil the Lake, Search Before Building)
└── package.json               # Build scripts for browse
```

## SKILL.md workflow

SKILL.md files are **generated** from `.tmpl` templates. To update docs:

1. Edit the `.tmpl` file (e.g. `SKILL.md.tmpl` or `browse/SKILL.md.tmpl`)
2. Run `bun run gen:skill-docs` (or `bun run build` which does it automatically)
3. Commit both the `.tmpl` and generated `.md` files

To add a new browse command: add it to `browse/src/commands.ts` and rebuild.
To add a snapshot flag: add it to `SNAPSHOT_FLAGS` in `browse/src/snapshot.ts` and rebuild.

**Token ceiling:** Generated SKILL.md files trip a warning above 160KB (~40K tokens).
This is a "watch for feature bloat" guardrail, not a hard gate. Modern flagship
models have 200K-1M context windows, so 40K is 4-20% of window, and prompt caching
makes the marginal cost of larger skills small. The ceiling exists to catch runaway
preamble/resolver growth, not to force compression on carefully-tuned big skills
(`ship`, `plan-ceo-review`, `office-hours` legitimately pack 25-35K tokens of
behavior). If you blow past 40K, the right fix is usually: (1) look at WHAT grew,
(2) if one resolver added 10K+ in a single PR, question whether it belongs inline
or as a reference doc, (3) only compress carefully-tuned prose as a last resort —
cuts to the coverage audit, review army, or voice directive have real quality cost.

**Merge conflicts on SKILL.md files:** NEVER resolve conflicts on generated SKILL.md
files by accepting either side. Instead: (1) resolve conflicts on the `.tmpl` templates
and `scripts/gen-skill-docs.ts` (the sources of truth), (2) run `bun run gen:skill-docs`
to regenerate all SKILL.md files, (3) stage the regenerated files. Accepting one side's
generated output silently drops the other side's template changes.

## Platform-agnostic design

Skills must NEVER hardcode framework-specific commands, file patterns, or directory
structures. Instead:

1. **Read CLAUDE.md** for project-specific config (test commands, eval commands, etc.)
2. **If missing, AskUserQuestion** — let the user tell you or let gstack search the repo
3. **Persist the answer to CLAUDE.md** so we never have to ask again

This applies to test commands, eval commands, deploy commands, and any other
project-specific behavior. The project owns its config; gstack reads it.

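
Step 1 can be sketched in code. This is an illustration only: the `Test command:`
line format and both function names are assumptions made for the sketch, not a
convention gstack actually defines (CLAUDE.md files are free-form prose read by
the agent):

```typescript
import { readFileSync, existsSync } from "node:fs";

// Hypothetical convention for this sketch: CLAUDE.md contains a line like
//   Test command: `bun test`
function parseTestCommand(claudeMd: string): string | null {
  const match = claudeMd.match(/test command:\s*`([^`]+)`/i);
  return match ? match[1] : null;
}

// Step 1: read CLAUDE.md if present. A null result means "fall back to
// AskUserQuestion, then persist the answer back so we never ask again".
function findTestCommand(path: string): string | null {
  if (!existsSync(path)) return null;
  return parseTestCommand(readFileSync(path, "utf-8"));
}
```
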
## Writing SKILL templates

SKILL.md.tmpl files are **prompt templates read by Claude**, not bash scripts.
Each bash code block runs in a separate shell — variables do not persist between blocks.

Rules:
- **Use natural language for logic and state.** Don't use shell variables to pass
  state between code blocks. Instead, tell Claude what to remember and reference
  it in prose (e.g., "the base branch detected in Step 0").
- **Don't hardcode branch names.** Detect `main`/`master`/etc dynamically via
  `gh pr view` or `gh repo view`. Use `{{BASE_BRANCH_DETECT}}` for PR-targeting
  skills. Use "the base branch" in prose, `<base>` in code block placeholders.
- **Keep bash blocks self-contained.** Each code block should work independently.
  If a block needs context from a previous step, restate it in the prose above.
- **Express conditionals as English.** Instead of nested `if/elif/else` in bash,
  write numbered decision steps: "1. If X, do Y. 2. Otherwise, do Z."

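
A hypothetical template fragment that follows all four rules (the step numbers
and commands are illustrative, not from a real gstack skill):

````markdown
## Step 3: Push the branch

Using the base branch detected in Step 0 (do not re-detect it here), push and
open a PR against it:

```bash
git push -u origin HEAD
gh pr create --base <base> --fill
```

1. If the push is rejected, fetch and rebase onto the base branch, then retry this step.
2. Otherwise, continue to Step 4.
````

Note how state (the base branch) travels in prose, the bash block stands alone,
and the retry logic is numbered English rather than shell conditionals.
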
## Writing style (V1)

Default output from every tier-≥2 skill follows the Writing Style section in
`scripts/resolvers/preamble.ts`: jargon glossed on first use (curated list in
`scripts/jargon-list.json`, baked at gen-skill-docs time), questions framed in
outcome terms ("what breaks for your users if...") not implementation terms,
short sentences, decisions close with user impact. Power users who want the
tighter V0 prose set `gstack-config set explain_level terse` (binary switch,
no middle mode). See `docs/designs/PLAN_TUNING_V1.md` for the full design
rationale. The review pacing overhaul that originally tried to ride alongside
writing-style was extracted to V1.1 — see `docs/designs/PACING_UPDATES_V0.md`.

## Browser interaction

When you need to interact with a browser (QA, dogfooding, cookie setup), use the
`/browse` skill or run the browse binary directly via `$B <command>`. NEVER use
`mcp__claude-in-chrome__*` tools — they are slow, unreliable, and not what this
project uses.

**Sidebar architecture:** Before modifying `sidepanel.js`, `background.js`,
`content.js`, `sidebar-agent.ts`, or sidebar-related server endpoints, read
`docs/designs/SIDEBAR_MESSAGE_FLOW.md`. It documents the full initialization
timeline, message flow, auth token chain, tab concurrency model, and known
failure modes. The sidebar spans 5 files across 2 codebases (extension + server)
with non-obvious ordering dependencies. The doc exists to prevent the kind of
silent failures that come from not understanding the cross-component flow.

**Transport-layer security** (v1.6.0.0+). When `pair-agent` starts an ngrok tunnel,
the daemon binds two HTTP listeners: a local listener (127.0.0.1, full command
surface, never forwarded) and a tunnel listener (locked allowlist: `/connect`,
`/command` with a scoped token + 17-command browser-driving allowlist,
`/sidebar-chat`). ngrok forwards only the tunnel port. Root tokens over the tunnel
return 403. SSE endpoints use a 30-minute HttpOnly `gstack_sse` cookie minted via
`POST /sse-session` (never valid against `/command`). Tunnel-surface rejections go
to `~/.gstack/security/attempts.jsonl` via `tunnel-denial-log.ts`. Before editing
`server.ts`, `sse-session-cookie.ts`, or `tunnel-denial-log.ts`, read
[ARCHITECTURE.md](ARCHITECTURE.md#dual-listener-tunnel-architecture-v1600) —
the module boundary (no imports from `token-registry.ts` into `sse-session-cookie.ts`)
is load-bearing for scope isolation.

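
The shape of the denial log can be sketched as follows. This is a simplified
illustration of the documented design (async append only, in-process write cap),
not the actual `tunnel-denial-log.ts`; the function names are ours and the real
module may differ:

```typescript
import { appendFile } from "node:fs/promises";

const MAX_WRITES_PER_MINUTE = 60;

interface CapState { windowStart: number; writes: number; dropped: number }

// Pure gate: decide whether a denial should hit disk. Excess denials are
// counted in memory but never written — this is the disk-DoS guard.
function gateWrite(state: CapState, now: number): { state: CapState; write: boolean } {
  if (now - state.windowStart >= 60_000) {
    return { state: { windowStart: now, writes: 1, dropped: 0 }, write: true };
  }
  if (state.writes >= MAX_WRITES_PER_MINUTE) {
    return { state: { ...state, dropped: state.dropped + 1 }, write: false };
  }
  return { state: { ...state, writes: state.writes + 1 }, write: true };
}

let cap: CapState = { windowStart: Date.now(), writes: 0, dropped: 0 };

// Async append only — never appendFileSync. Blocking the event loop on every
// denial during a flood is exactly what an attacker wants.
async function logDenial(logPath: string, entry: object): Promise<void> {
  const now = Date.now();
  const gated = gateWrite(cap, now);
  cap = gated.state;
  if (!gated.write) return;
  await appendFile(logPath, JSON.stringify({ ts: now, ...entry }) + "\n");
}
```
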
**Sidebar security stack** (layered defense against prompt injection):

| Layer | Module | Lives in |
|-------|--------|----------|
| L1-L3 | `content-security.ts` | both server and agent — datamarking, hidden element strip, ARIA regex, URL blocklist, envelope wrapping |
| L4 | `security-classifier.ts` (TestSavantAI ONNX) | **sidebar-agent only** |
| L4b | `security-classifier.ts` (Claude Haiku transcript) | **sidebar-agent only** |
| L5 | `security.ts` (canary) | both — inject in compiled, check in agent |
| L6 | `security.ts` (combineVerdict ensemble) | both |

**Critical constraint:** `security-classifier.ts` CANNOT be imported from the
compiled browse binary. `@huggingface/transformers` v4 requires `onnxruntime-node`
which fails to `dlopen` from Bun compile's temp extract dir. Only `security.ts`
(pure-string operations — canary, verdict combiner, attack log, status) is safe
for `server.ts`. See `~/.gstack/projects/garrytan-gstack/ceo-plans/2026-04-19-prompt-injection-guard.md`
§"Pre-Impl Gate 1 Outcome" for full architectural decision.

**Thresholds** (in `security.ts`):
- `BLOCK: 0.85` — single-layer score that would cause BLOCK if cross-confirmed
- `WARN: 0.60` — cross-confirm threshold. When L4 AND L4b both >= 0.60 → BLOCK
- `LOG_ONLY: 0.40` — gates transcript classifier (skip Haiku when all layers < 0.40)

**Ensemble rule:** BLOCK only when the ML content classifier AND the transcript
classifier both report >= WARN. Single-layer high confidence degrades to WARN —
this is the Stack Overflow instruction-writing FP mitigation. Canary leak
always BLOCKs (deterministic).

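
The threshold and ensemble rules above can be sketched as a single decision
function. This illustrates the documented default (non-ensemble) behavior only;
it is not the actual `combineVerdict` implementation in `security.ts`:

```typescript
const BLOCK = 0.85;
const WARN = 0.60;
const LOG_ONLY = 0.40;

type Verdict = "BLOCK" | "WARN" | "LOG_ONLY" | "PASS";

// Default rule: BLOCK needs the content classifier (L4) AND the transcript
// classifier (L4b) both at >= WARN. A canary leak is deterministic and
// always BLOCKs.
function combineVerdict(contentScore: number, transcriptScore: number, canaryLeaked: boolean): Verdict {
  if (canaryLeaked) return "BLOCK";
  if (contentScore >= WARN && transcriptScore >= WARN) return "BLOCK";
  const top = Math.max(contentScore, transcriptScore);
  // Single-layer >= BLOCK without cross-confirmation degrades to WARN —
  // the Stack Overflow instruction-writing FP mitigation.
  if (top >= BLOCK) return "WARN";
  if (top >= WARN) return "WARN";
  if (top >= LOG_ONLY) return "LOG_ONLY";
  return "PASS";
}
```
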
**Env knobs:**
- `GSTACK_SECURITY_OFF=1` — emergency kill switch. Classifier stays off even if
  warmed. Canary is still injected; just the ML scan is skipped.
- `GSTACK_SECURITY_ENSEMBLE=deberta` — opt-in DeBERTa-v3 ensemble. Adds
  ProtectAI DeBERTa-v3-base-injection-onnx as L4c classifier for cross-model
  agreement. 721MB first-run download. With ensemble enabled, BLOCK requires
  2-of-3 ML classifiers agreeing at >= WARN (testsavant, deberta, transcript).
  Without ensemble (default), BLOCK requires testsavant + transcript at >= WARN.
- Classifier model cache: `~/.gstack/models/testsavant-small/` (112MB, first run only)
  plus `~/.gstack/models/deberta-v3-injection/` (721MB, only when ensemble enabled)
- Attack log: `~/.gstack/security/attempts.jsonl` (salted sha256 + domain only,
  rotates at 10MB, 5 generations)
- Per-device salt: `~/.gstack/security/device-salt` (0600)
- Session state: `~/.gstack/security/session-state.json` (cross-process, atomic)

## Dev symlink awareness

When developing gstack, `.claude/skills/gstack` may be a symlink back to this
working directory (gitignored). This means skill changes are **live immediately**:
great for rapid iteration, risky during big refactors where half-written skills
could break other Claude Code sessions using gstack concurrently.

**Check once per session:** Run `ls -la .claude/skills/gstack` to see if it's a
symlink or a real copy. If it's a symlink to your working directory, be aware that:
- Template changes + `bun run gen:skill-docs` immediately affect all gstack invocations
- Breaking changes to SKILL.md.tmpl files can break concurrent gstack sessions
- During large refactors, remove the symlink (`rm .claude/skills/gstack`) so the
  global install at `~/.claude/skills/gstack/` is used instead.

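
A quick way to run that check (a sketch; the function name is ours, the path is
the one described above):

```shell
# Prints whether a skill path is a live symlink or a real copy.
check_skill_link() {
  if [ -L "$1" ]; then
    echo "symlink -> $(readlink "$1") (template changes are live)"
  else
    echo "real copy or missing (safe for big refactors)"
  fi
}

check_skill_link .claude/skills/gstack
```
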
**Prefix setting:** Setup creates real directories (not symlinks) at the top level
with a SKILL.md symlink inside (e.g., `qa/SKILL.md -> gstack/qa/SKILL.md`). This
ensures Claude discovers them as top-level skills, not nested under `gstack/`.
Names are either short (`qa`) or namespaced (`gstack-qa`), controlled by
`skill_prefix` in `~/.gstack/config.yaml`. Pass `--no-prefix` or `--prefix` to
skip the interactive prompt.

**Note:** Vendoring gstack into a project's repo is deprecated. Use global install
+ `./setup --team` instead. See README.md for team mode instructions.

**For plan reviews:** When reviewing plans that modify skill templates or the
gen-skill-docs pipeline, consider whether the changes should be tested in isolation
before going live (especially if the user is actively using gstack in other windows).

**Upgrade migrations:** When a change modifies on-disk state (directory structure,
config format, stale files) in ways that could break existing user installs, add a
migration script to `gstack-upgrade/migrations/`. Read CONTRIBUTING.md's "Upgrade
migrations" section for the format and testing requirements. The upgrade skill runs
these automatically after `./setup` during `/gstack-upgrade`.

## Compiled binaries — NEVER commit browse/dist/ or design/dist/

The `browse/dist/` and `design/dist/` directories contain compiled Bun binaries
(`browse`, `find-browse`, `design`, ~58MB each). These are Mach-O arm64 only — they
do NOT work on Linux, Windows, or Intel Macs. The `./setup` script already builds
from source for every platform, so the checked-in binaries are redundant. They are
tracked by git due to a historical mistake and should eventually be removed with
`git rm --cached`.

**NEVER stage or commit these files.** They show up as modified in `git status`
because they're tracked despite `.gitignore` — ignore them. When staging files,
always use specific filenames (`git add file1 file2`) — never `git add .` or
`git add -A`, which will accidentally include the binaries.

## Commit style

**Always bisect commits.** Every commit should be a single logical change. When
you've made multiple changes (e.g., a rename + a rewrite + new tests), split them
into separate commits before pushing. Each commit should be independently
understandable and revertable.

Examples of good bisection:
- Rename/move separate from behavior changes
- Test infrastructure (touchfiles, helpers) separate from test implementations
- Template changes separate from generated file regeneration
- Mechanical refactors separate from new features

When the user says "bisect commit" or "bisect and push," split staged/unstaged
changes into logical commits and push.

## Slop-scan: AI code quality, not AI code hiding

We use [slop-scan](https://github.com/benvinegar/slop-scan) to catch patterns where
AI-generated code is genuinely worse than what a human would write. We are NOT trying
to pass as human code. We are AI-coded and proud of it. The goal is code quality.

```bash
npx slop-scan scan .        # human-readable report
npx slop-scan scan . --json # machine-readable for diffing
```

Config: `slop-scan.config.json` at repo root (currently excludes `**/vendor/**`).

### What to fix (genuine quality improvements)

- **Empty catches around file ops** — use `safeUnlink()` (ignores ENOENT, rethrows
  EPERM/EIO). A swallowed EPERM in cleanup means silent data loss.
- **Empty catches around process kills** — use `safeKill()` (ignores ESRCH, rethrows
  EPERM). A swallowed EPERM means you think you killed something you didn't.
- **Redundant `return await`** — remove when there's no enclosing try block. Saves a
  microtask, signals intent.
- **Typed exception catches** — `catch (err) { if (!(err instanceof TypeError)) throw err }`
  is genuinely better than `catch {}` when the try block does URL parsing or DOM work.
  You know what error you expect, so say so.

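
For example, a URL-parsing helper that swallows only the expected failure (the
helper itself is hypothetical, illustrating the pattern):

```typescript
// Returns the hostname, or null when the input is not a URL at all.
// Only the expected TypeError from `new URL` is swallowed — anything
// else is a programming error and still propagates.
function hostnameOf(raw: string): string | null {
  try {
    return new URL(raw).hostname;
  } catch (err) {
    if (!(err instanceof TypeError)) throw err;
    return null;
  }
}
```
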
### What NOT to fix (linter gaming, not quality)

- **String-matching on error messages** — `err.message.includes('closed')` is brittle.
  Playwright/Chrome can change wording anytime. If a fire-and-forget operation can fail
  for ANY reason and you don't care, `catch {}` is the correct pattern.
- **Adding comments to exempt pass-through wrappers** — "alias for active session" above
  a method just to trip slop-scan's exemption rule is noise, not documentation.
- **Converting extension catch-and-log to selective rethrow** — Chrome extensions crash
  entirely on uncaught errors. If the catch logs and continues, that IS the right pattern
  for extension code. Don't make it throw.
- **Tightening best-effort cleanup paths** — shutdown, emergency cleanup, and disconnect
  code should use `safeUnlinkQuiet()` (swallows ALL errors). A cleanup path that throws
  on EPERM means the rest of cleanup doesn't run. That's worse.

### Utilities in `browse/src/error-handling.ts`

| Function | Use when | Behavior |
|----------|----------|----------|
| `safeUnlink(path)` | Normal file deletion | Ignores ENOENT, rethrows others |
| `safeUnlinkQuiet(path)` | Shutdown/emergency cleanup | Swallows all errors |
| `safeKill(pid, signal)` | Sending signals | Ignores ESRCH, rethrows others |
| `isProcessAlive(pid)` | Boolean process checks | Returns true/false, never throws |

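
The first row's behavior can be sketched like this (an illustration; the real
implementation lives in `browse/src/error-handling.ts` and may differ):

```typescript
import { unlink } from "node:fs/promises";

// Ignores "file already gone" (ENOENT); rethrows everything else, so a
// swallowed EPERM can't turn into silent data loss.
async function safeUnlink(path: string): Promise<void> {
  try {
    await unlink(path);
  } catch (err) {
    if ((err as NodeJS.ErrnoException).code === "ENOENT") return;
    throw err;
  }
}
```
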
### Score tracking

Baseline (2026-04-09, before cleanup): 100 findings, 432.8 score, 2.38 score/file.
After cleanup: 90 findings, 358.1 score, 1.96 score/file.

Don't chase the number. Fix patterns that represent actual code quality problems.
Accept findings where the "sloppy" pattern is the correct engineering choice.

## Community PR guardrails

When reviewing or merging community PRs, **always AskUserQuestion** before accepting
any commit that:

1. **Touches ETHOS.md** — this file is Garry's personal builder philosophy. No edits
   from external contributors or AI agents, period.
2. **Removes or softens promotional material** — YC references, founder perspective,
   and product voice are intentional. PRs that frame these as "unnecessary" or
   "too promotional" must be rejected.
3. **Changes Garry's voice** — the tone, humor, directness, and perspective in skill
   templates, CHANGELOG, and docs are not generic. PRs that rewrite voice to be
   more "neutral" or "professional" must be rejected.

Even if the agent strongly believes a change improves the project, these three
categories require explicit user approval via AskUserQuestion. No exceptions.
No auto-merging. No "I'll just clean this up."

## CHANGELOG + VERSION style

**VERSION and CHANGELOG are branch-scoped.** Every feature branch that ships gets its
own version bump and CHANGELOG entry. The entry describes what THIS branch adds —
not what was already on main.

**When to write the CHANGELOG entry:**
- At `/ship` time (Step 13), not during development or mid-branch.
- The entry covers ALL commits on this branch vs the base branch.
- Never fold new work into an existing CHANGELOG entry from a prior version that
  already landed on main. If main has v0.10.0.0 and your branch adds features,
  bump to v0.10.1.0 with a new entry — don't edit the v0.10.0.0 entry.

**Key questions before writing:**
1. What branch am I on? What did THIS branch change?
2. Is the base branch version already released? (If yes, bump and create new entry.)
3. Does an existing entry on this branch already cover earlier work? (If yes, replace
   it with one unified entry for the final version.)

**Merging main does NOT mean adopting main's version.** When you merge origin/main into
a feature branch, main may bring new CHANGELOG entries and a higher VERSION. Your branch
still needs its OWN version bump on top. If main is at v0.13.8.0 and your branch adds
features, bump to v0.13.9.0 with a new entry. Never jam your changes into an entry that
already landed on main. Your entry goes on top because your branch lands next.

**After merging main, always check:**
- Does CHANGELOG have your branch's own entry separate from main's entries?
- Is VERSION higher than main's VERSION?
- Is your entry the topmost entry in CHANGELOG (above main's latest)?
If any answer is no, fix it before continuing.

**After any CHANGELOG edit that moves, adds, or removes entries,** immediately run
`grep "^## \[" CHANGELOG.md` and verify the full version sequence is contiguous
with no gaps or duplicates before committing. If a version is missing, the edit
broke something. Fix it before moving on.

CHANGELOG.md is **for users**, not contributors. Write it like product release notes:

- Lead with what the user can now **do** that they couldn't before. Sell the feature.
- Use plain language, not implementation details. "You can now..." not "Refactored the..."
- **Never mention TODOS.md, internal tracking, eval infrastructure, or contributor-facing
  details.** These are invisible to users and meaningless to them.
- Put contributor/internal changes in a separate "For contributors" section at the bottom.
- Every entry should make someone think "oh nice, I want to try that."
- No jargon: say "every question now tells you which project and branch you're in" not
  "AskUserQuestion format standardized across skill templates via preamble resolver."

### Release-summary format (every `## [X.Y.Z]` entry)

Every version entry in `CHANGELOG.md` MUST start with a release-summary section in
the GStack/Garry voice, one viewport's worth of prose + tables that lands like a
verdict, not marketing. The itemized changelog (subsections, bullets, files) goes
BELOW that summary, separated by a `### Itemized changes` header.

The release-summary section gets read by humans, by the auto-update agent, and by
anyone deciding whether to upgrade. The itemized list is for agents that need to
know exactly what changed.

Structure for the top of every `## [X.Y.Z]` entry:

1. **Two-line bold headline** (10-14 words total). Should land like a verdict, not
   marketing. Sound like someone who shipped today and cares whether it works.
2. **Lead paragraph** (3-5 sentences). What shipped, what changed for the user.
   Specific, concrete, no AI vocabulary, no em dashes, no hype.
3. **A "The X numbers that matter" section** with:
   - One short setup paragraph naming the source of the numbers (real production
     deployment OR a reproducible benchmark, name the file/command to run).
   - A table of 3-6 key metrics with BEFORE / AFTER / Δ columns.
   - A second optional table for per-category breakdown if relevant.
   - 1-2 sentences interpreting the most striking number in concrete user terms.
4. **A "What this means for [audience]" closing paragraph** (2-4 sentences) tying
   the metrics to a real workflow shift. End with what to do.

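
Put together, a skeleton for the top of an entry looks like this (every name and
number is a placeholder, not a real release):

```markdown
## [X.Y.Z]

**Two-line bold headline that lands like a verdict,
not like marketing.**

Lead paragraph: what shipped and what changed for the user, in 3-5 concrete
sentences.

### The X numbers that matter

Numbers below come from <benchmark file or production deployment>.

| Metric | Before | After | Δ |
|--------|--------|-------|---|
| ...    | ...    | ...   | ... |

One or two sentences interpreting the most striking number in user terms.

What this means for users: 2-4 sentences tying the metrics to a workflow
shift, ending with what to do.

### Itemized changes

#### Added
- ...
```
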
Voice rules for the release summary:
- No em dashes (use commas, periods, "...").
- No AI vocabulary (delve, robust, comprehensive, nuanced, fundamental, etc.) or
  banned phrases ("here's the kicker", "the bottom line", etc.).
- Real numbers, real file names, real commands. Not "fast" but "~30s on 30K pages."
- Short paragraphs, mix one-sentence punches with 2-3 sentence runs.
- Connect to user outcomes: "the agent does ~3x less reading" beats "improved precision."
- Be direct about quality. "Well-designed" or "this is a mess." No dancing.

Source material:
- CHANGELOG previous entry for prior context.
- Benchmark files or `/retro` output for headline numbers.
- Recent commits (`git log <prev-version>..HEAD --oneline`) for what shipped.
- Don't make up numbers. If a metric isn't in a benchmark or production data,
  don't include it. Say "no measurement yet" if asked.

Target length: ~250-350 words for the summary. Should render as one viewport.

### Itemized changes (below the release summary)

Write `### Itemized changes` and continue with the detailed subsections (Added,
Changed, Fixed, For contributors). Same rules as the user-facing voice guidance
above, plus:

- **Always credit community contributions.** When an entry includes work from a
  community PR, name the contributor with `Contributed by @username`. Contributors
  did real work. Thank them publicly every time, no exceptions.

## AI effort compression

When estimating or discussing effort, always show both human-team and CC+gstack time:

| Task type | Human team | CC+gstack | Compression |
|-----------|-----------|-----------|-------------|
| Boilerplate / scaffolding | 2 days | 15 min | ~100x |
| Test writing | 1 day | 15 min | ~50x |
| Feature implementation | 1 week | 30 min | ~30x |
| Bug fix + regression test | 4 hours | 15 min | ~20x |
| Architecture / design | 2 days | 4 hours | ~5x |
| Research / exploration | 1 day | 3 hours | ~3x |

Completeness is cheap. Don't recommend shortcuts when the complete implementation
is a "lake" (achievable) not an "ocean" (multi-quarter migration). See the
Completeness Principle in the skill preamble for the full philosophy.

## Search before building

Before designing any solution that involves concurrency, unfamiliar patterns,
infrastructure, or anything where the runtime/framework might have a built-in:

1. Search for "{runtime} {thing} built-in"
2. Search for "{thing} best practice {current year}"
3. Check official runtime/framework docs

Three layers of knowledge: tried-and-true (Layer 1), new-and-popular (Layer 2),
first-principles (Layer 3). Prize Layer 3 above all. See ETHOS.md for the full
builder philosophy.

## Local plans

Contributors can store long-range vision docs and design documents in `~/.gstack-dev/plans/`.
These are local-only (not checked in). When reviewing TODOS.md, check `plans/` for candidates
that may be ready to promote to TODOs or implement.

## E2E eval failure blame protocol

When an E2E eval fails during `/ship` or any other workflow, **never claim "not
related to our changes" without proving it.** These systems have invisible couplings —
a preamble text change affects agent behavior, a new helper changes timing, a
regenerated SKILL.md shifts prompt context.

**Required before attributing a failure to "pre-existing":**
1. Run the same eval on main (or base branch) and show it fails there too
2. If it passes on main but fails on the branch — it IS your change. Trace the blame.
3. If you can't run on main, say "unverified — may or may not be related" and flag it
   as a risk in the PR body

"Pre-existing" without receipts is a lazy claim. Prove it or don't say it.

## Long-running tasks: don't give up

When running evals, E2E tests, or any long-running background task, **poll until
completion**. Use `sleep 180 && echo "ready"` + `TaskOutput` in a loop every 3
minutes. Never switch to blocking mode and give up when the poll times out. Never
say "I'll be notified when it completes" and stop checking — keep the loop going
until the task finishes or the user tells you to stop.

The full E2E suite can take 30-45 minutes. That's 10-15 polling cycles. Do all of
them. Report progress at each check (which tests passed, which are running, any
failures so far). The user wants to see the run complete, not a promise that
you'll check later.

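
The polling pattern can be sketched as a small shell helper (ours, not a gstack
utility; in the real workflow each wake-up pairs with a `TaskOutput` check as
described above):

```shell
# poll_until_done SENTINEL_FILE INTERVAL — report progress until the task
# drops its sentinel file, then print "ready". Never give up mid-run.
poll_until_done() {
  until [ -e "$1" ]; do
    sleep "$2"
    echo "still running, checking again in ${2}s..."
  done
  echo "ready"
}

# Example: a 30-45 min eval run is 10-15 cycles at 180s.
# poll_until_done /tmp/eval-run.done 180
```
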
## E2E test fixtures: extract, don't copy

**NEVER copy a full SKILL.md file into an E2E test fixture.** SKILL.md files are
1500-2000 lines. When `claude -p` reads a file that large, context bloat causes
timeouts, flaky turn limits, and tests that take 5-10x longer than necessary.

Instead, extract only the section the test actually needs:

```typescript
// BAD — agent reads 1900 lines, burns tokens on irrelevant sections
fs.copyFileSync(path.join(ROOT, 'ship', 'SKILL.md'), path.join(dir, 'ship-SKILL.md'));

// GOOD — agent reads ~60 lines, finishes in 38s instead of timing out
const full = fs.readFileSync(path.join(ROOT, 'ship', 'SKILL.md'), 'utf-8');
const start = full.indexOf('## Review Readiness Dashboard');
const end = full.indexOf('\n---\n', start);
fs.writeFileSync(path.join(dir, 'ship-SKILL.md'), full.slice(start, end > start ? end : undefined));
```

Also when running targeted E2E tests to debug failures:
- Run in **foreground** (`bun test ...`), not background with `&` and `tee`
- Never `pkill` running eval processes and restart — you lose results and waste money
- One clean run beats three killed-and-restarted runs

## Publishing native OpenClaw skills to ClawHub

Native OpenClaw skills live in `openclaw/skills/gstack-openclaw-*/SKILL.md`. These are
hand-crafted methodology skills (not generated by the pipeline) published to ClawHub
so any OpenClaw user can install them.

**Publishing:** The command is `clawhub publish` (NOT `clawhub skill publish`):

```bash
clawhub publish openclaw/skills/gstack-openclaw-office-hours \
  --slug gstack-openclaw-office-hours --name "gstack Office Hours" \
  --version 1.0.0 --changelog "description of changes"
```

Repeat for each skill: `gstack-openclaw-ceo-review`, `gstack-openclaw-investigate`,
`gstack-openclaw-retro`. Bump `--version` on each update.

**Auth:** `clawhub login` (opens browser for GitHub auth). `clawhub whoami` to verify.

**Updating:** Same `clawhub publish` command with a higher `--version` and `--changelog`.

**Verification:** `clawhub search gstack` to confirm they're live.

## Deploying to the active skill

The active skill lives at `~/.claude/skills/gstack/`. After making changes:

1. Push your branch
2. Fetch and reset in the skill directory: `cd ~/.claude/skills/gstack && git fetch origin && git reset --hard origin/main`
3. Rebuild: `cd ~/.claude/skills/gstack && bun run build`

Or copy the binaries directly:
- `cp browse/dist/browse ~/.claude/skills/gstack/browse/dist/browse`
- `cp design/dist/design ~/.claude/skills/gstack/design/dist/design`