Files
gstack/browse/src/meta-commands.ts
T
Garry Tan e8893a18b1 v1.20.0.0 feat: browser-skills runtime + gbrain-support carryover (#1233)
* feat(gbrain-sync): queue primitives + writer shims

Adds bin/gstack-brain-enqueue (atomic append to sync queue) and
bin/gstack-jsonl-merge (git merge driver, ts-sort with SHA-256 fallback).
Wires one backgrounded enqueue call into learnings-log, timeline-log,
review-log, and developer-profile --migrate. question-log and
question-preferences stay local per Codex v2 decision.

gstack-config gains gbrain_sync_mode (off/artifacts-only/full) and
gbrain_sync_mode_prompted keys, plus GSTACK_HOME env alignment so
tests don't leak into real ~/.gstack/config.yaml.

* feat(gbrain-sync): --once drain + secret scan + push

bin/gstack-brain-sync is the core sync binary. Subcommands: --once
(drain queue, allowlist-filter, privacy-class-filter, secret-scan
staged diff, commit with template, push with fetch+merge retry),
--status, --skip-file <path>, --drop-queue --yes, --discover-new
(cursor-based detection of artifact writes that skip the shim).

Secret regex families: AWS keys, GitHub tokens (ghp_/gho_/ghu_/ghs_/
ghr_/github_pat_), OpenAI sk-, PEM blocks, JWTs, bearer-token-in-JSON.
On hit: unstage, preserve queue, print remediation hint (--skip-file
or edit), exit clean. No daemon — invoked by preamble at skill
boundaries.

* feat(gbrain-sync): init, restore, uninstall, consumer registry

bin/gstack-brain-init: idempotent first-run. git init ~/.gstack/,
.gitignore=*, canonical .brain-allowlist + .brain-privacy-map.json,
pre-commit secret-scan hook (defense-in-depth), merge driver registration
via git config, gh repo create --private OR arbitrary --remote <url>,
initial push, ~/.gstack-brain-remote.txt for new-machine discovery,
GBrain consumer registration via HTTP POST.

bin/gstack-brain-restore: safe new-machine bootstrap. Refuses clobber
of existing allowlisted files, clones to staging, rsync-copies tracked
files, re-registers merge drivers (required — not cloned from remote),
rehydrates consumers.json, prompts for per-consumer tokens.

bin/gstack-brain-uninstall: clean off-ramp. Removes .git + .brain-*
files + consumers.json + config keys. Preserves user data (learnings,
plans, retros, profile). Optional --delete-remote for GitHub repos.

bin/gstack-brain-consumer + bin/gstack-brain-reader (symlink alias):
registry management. Internal 'consumer' term; user-facing 'reader'
per DX review decision.

* feat(gbrain-sync): preamble block — privacy gate + boundary sync

scripts/resolvers/preamble/generate-brain-sync-block.ts emits bash that
runs at every skill invocation:
- Detects ~/.gstack-brain-remote.txt on machines without local .git
  and surfaces a restore-available hint (does NOT auto-run restore).
- Runs gstack-brain-sync --once at skill start to drain any pending
  writes (and at skill end via prose instruction).
- Once-per-day auto-pull (cached via .brain-last-pull) for append-only
  JSONL files.
- Emits BRAIN_SYNC: status line every skill run.

Also emits prose for the host LLM to fire the one-time privacy
stop-gate (full / artifacts-only / off) when gbrain is detected and
gbrain_sync_mode_prompted is false. Wired into preamble.ts composition.

* test(gbrain-sync): 27-test consolidated suite

test/brain-sync.test.ts covers:
- Config: validation, defaults, GSTACK_HOME env isolation
- Enqueue: no-op gates, skip list, concurrent atomicity, JSON escape
- JSONL merge driver: 3-way + ts-sort + SHA-256 fallback
- Init + sync: canonical file creation, merge driver registration,
  push-reject + fetch+merge retry path
- Init refuses different remote (idempotency)
- Cross-machine restore round-trip (machine A write → machine B sees)
- Secret scan across all 6 regex families (AWS, GH, OpenAI, PEM, JWT,
  bearer-JSON). --skip-file unblock remediation
- Uninstall removes sync config, preserves user data
- --discover-new idempotence via mtime+size cursor

Behaviors verified via integration smokes during implementation. Known
follow-up: bun-test 5s default timeout needs 30s wrapper for
spawnSync-heavy tests.

* docs(gbrain-sync): user guide + error lookup + README section

docs/gbrain-sync.md: setup walkthrough, privacy modes, cross-machine
workflow, secret protection, two-machine conflict handling, uninstall,
troubleshooting reference.

docs/gbrain-sync-errors.md: problem/cause/fix index for every
user-visible error. Patterned on Rust's error docs + Stripe's API
error reference.

README.md: 'Cross-machine memory with GBrain sync' section near the
top (discovery moment), plus docs-table entry.

* chore: bump version and changelog (v1.7.0.0)

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* chore: regenerate SKILL.md files for gbrain-sync preamble block

Re-runs bun run gen:skill-docs after adding generateBrainSyncBlock
to scripts/resolvers/preamble.ts in a2aa8a07. CI check-freshness
caught the drift. All 36 SKILL.md files regenerated with the new
skill-start bash block + privacy-gate prose + skill-end sync
instructions baked in.

* fix(test): session-awareness reads AskUserQuestion Format from a Tier 2+ SKILL.md

The test was reading ROOT/SKILL.md (browse skill, Tier 1) which never
contained '## AskUserQuestion Format' — that section is only emitted
for Tier 2+ skills by scripts/resolvers/preamble.ts. As a result the
agent was prompted with an empty format guide and only emitted
'RECOMMENDATION' intermittently, making the test flaky.

Pre-existing on main (same ROOT/SKILL.md shape there) — surfaced now
because the agent run didn't hit the RECOMMENDATION/recommend/option a
fallback strings in this particular attempt.

Fix: read from office-hours/SKILL.md (Tier 3, always has the section)
with a fallback that scans for the first top-level skill dir whose
SKILL.md contains the header. Future template moves won't break this
test again.

* feat(browse): domain-skills storage + state machine

New module browse/src/domain-skills.ts implements the per-site notes
the agent writes for itself, persisted as type:"domain" rows alongside
/learn's per-project learnings.

Three scopes layered: per-project default, global by explicit promotion.
Project-active shadows global for the same host.

State machine (T6 — codex outside-voice):
  quarantined --3 uses w/o flag--> active(project) --promote--> global
        ^                                |
        +----- classifier flag during use

- Append-only JSONL with O_APPEND for atomic small writes
- Tolerant parser drops partial trailing line on read
- Tombstone for deletes (compactor cleans up later)
- Version log per (host, scope) enables rollback
- Hostname derived from active tab top-level origin (T3 confused-deputy fix)
- writeSkill rejects classifier_score >= 0.85 with structured error

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(browse): domain-skills storage + state machine

14 tests covering:
- T3 hostname normalization (lowercase, www. strip, port/path/query strip,
  subdomain-exact preserved)
- T4 scope shadowing (per-project active shadows global for same host)
- T5 persistence (version monotonicity, tolerant parser drops partial line)
- T6 state machine (quarantined → active after N=3 uses, classifier-flag
  blocks promotion, save-time score >= 0.85 rejected)
- Rollback by version log (restore prior body, advance version counter)
- Tombstone deletion (read returns null after delete)

All 14 pass in 27ms via bun test.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(browse): $B domain-skill subcommands

Wire the domain-skills storage layer into the browse CLI as a META command:

  $B domain-skill save              save body from stdin or --from-file
                                     (host derived from active tab — T3)
  $B domain-skill list              list all skills visible to current project
  $B domain-skill show <host>       print skill body
  $B domain-skill edit <host>       open in $EDITOR
  $B domain-skill promote-to-global <host>  cross-project promotion (T4)
  $B domain-skill rollback <host> [--global]  restore prior version
  $B domain-skill rm <host> [--global]        tombstone

Save path runs L1-L3 content filters from content-security.ts (importable
in compiled binary, unlike L4 ML classifier — see CLAUDE.md). The L4
classifier scan happens in sidebar-agent at prompt-injection load time.

Output is structured (problem + cause + suggested-action) per DX D7.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(browse): $B cdp escape hatch — deny-default allowlist + two-tier mutex

Codex T2: flip CDP posture to deny-default. Allowed methods enumerated in
cdp-allowlist.ts with (scope: tab|browser, output: trusted|untrusted,
justification) per entry.

Initial allowlist (~25 methods) covers:
- Accessibility tree extraction (read-only)
- DOM/CSS inspection (read-only)
- Performance metrics
- Tracing
- Emulation viewport/UA override
- Page screenshot/PDF capture (output is binary, no marker injection vector)
- Network.enable/disable (no bodies/cookies — those are exfil surfaces)
- Runtime.getProperties (NO evaluate/callFunctionOn — those would be RCE)

Page.navigate is INTENTIONALLY NOT allowed; agents use $B goto which
goes through the URL blocklist.

Codex T7: two-tier mutex. tab-scoped methods take per-tab lock; browser-
scoped take global lock that blocks all tab locks. 5s acquire timeout
yields CDPMutexAcquireTimeout (no silent hangs). All lock acquires use
try/finally so errors don't leak the lock.

Path A from spike: uses Playwright's newCDPSession() per page. No second
WebSocket, no need for --remote-debugging-port. CDPSession is cached
per page in a WeakMap and cleared on page close.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(browse): CDP allowlist + two-tier mutex

13 tests:
- Allowlist linter: every entry has 4 required fields, no duplicates,
  justification length > 20 chars
- Deny-list verification: dangerous methods (Runtime.evaluate, Page.navigate,
  Network.getResponseBody, Browser.close, Target.attachToTarget, etc.) are
  NOT allowed (Codex T2 categories 4-7)
- Per-tab mutex serializes ops on same tab
- Per-tab mutex allows parallel ops across different tabs
- Global lock blocks tab locks; tab locks block global lock
- Acquire timeout yields CDPMutexAcquireTimeout (no silent hang)
- Timeout error names the tab id and the timeout budget

Also extends Network.disable justification to satisfy linter.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(browse): telemetry signals + project-slug helper

Lightweight telemetry per DX D9: piggybacks on ~/.gstack/analytics/ pattern.
Hostname + aggregate counters only, no body content. GSTACK_TELEMETRY_OFF=1
silences. Fire-and-forget — never blocks calling path.

Signals fired so far:
- domain_skill_saved {host, scope, state, bytes}
- domain_skill_save_blocked {host, reason}

(domain_skill_fired and cdp_method_* fired in subsequent commits.)

Also extracts project-slug resolution into project-slug.ts so server.ts
and domain-skill-commands.ts share one cached lookup.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(browse): sidebar prompt-context injection + CDP telemetry

server.ts spawnClaude now:
- Imports per-project domain skill matching the active tab's hostname
  via readDomainSkill()
- Wraps the body in UNTRUSTED EXTERNAL CONTENT envelope (so the L4
  classifier in sidebar-agent sees it at load time per Eng D4)
- Appends as <domain-skill source="..." host="..." version="..."> block
- Fires domain_skill_fired telemetry (host, source, version)
- Calls recordSkillUse fire-and-forget so the auto-promote-after-N=3
  state machine advances on each successful prompt injection

System prompt also gets a one-liner introducing $B domain-skill commands
to agents (DX D4 start-of-task discoverability hint).

cdp-bridge.ts fires:
- cdp_method_denied (drives next allow-list growth)
- cdp_method_lock_acquire_ms (P50/P99 quantile observability)
- cdp_method_called (allowed methods)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(browse): telemetry module

3 tests covering:
- logTelemetry writes JSONL with ts injected
- GSTACK_TELEMETRY_OFF=1 silences all events
- logTelemetry never throws on disk failures

Uses GSTACK_HOME env var to redirect writes to a tmp dir; the telemetry
module reads HOME lazily so test mutations take effect.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: domain-skills reference + error lookup table

docs/domain-skills.md mirrors the layered shape of docs/gbrain-sync.md
(DX D8): how agents use it, state machine, storage layout, security model
(L1-L3 + L4 layered defense), error reference table.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(readme): browser-harness-js plug + domain-skills section

New "Domain skills + raw CDP escape hatch" section under "The sprint"
covering both v1.8.0.0 features. Plugs browser-use/browser-harness-js
as the no-rails alternative for users who want raw CDP without gstack's
security stack.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: bump version and changelog (v1.8.0.0)

Branch-scoped bump on top of merged 1.7.0.0 base. CHANGELOG entry covers
the full v1.8.0.0 scope: $B domain-skill, $B cdp escape hatch, two-tier
mutex, telemetry signals, sidebar prompt-context injection. Includes
Codex outside-voice trail (7 of 20 findings resolved, 12 mooted by T1
scope drop).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* todos: 7 follow-ups from v1.8.0.0 review trail

P1: Self-authoring $B commands with out-of-process worker isolation
    (Codex T1 deferred from v1.8.0.0 — needs real isolation design)
P2: Migrate /learn to SQLite (Codex T5 long-term primitive fix)
P2: Remove plan-mode handshake from /plan-devex-review (skill bug)
P3: GBrain skillpack publishing for domain-skills
P3: Replay/record demonstrated flows to domain-skills
P3: $B commands review batch-mode UX (alternative to inline approval)
P3: Heuristic command-gap watcher (DX D4 alternative C)

Each entry has the standard What/Why/Pros/Cons/Context/Effort/Priority/
Depends-on shape so anyone picking these up later has full context.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(browse): lazy GSTACK_HOME resolution in domain-skills

Module-level constants (GLOBAL_FILE, derived path) were evaluated at
module-load and cached. When E2E and unit tests run in the same Bun
test pass and set GSTACK_HOME differently, the second test sees the
first test's path. Switch to lazy gstackHome() / globalFile() / projectFile()
helpers so process.env mutations take effect.

Mirrors the pattern already used in telemetry.ts.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(browse): E2E gate-tier tests for domain-skills + CDP

domain-skills-e2e.test.ts (4 tests):
- save derives host from active tab top-level origin (T3)
- save lands quarantined; list surfaces it
- readSkill returns null until 3 uses without flag promote to active (T6)
- save without an active page errors with structured guidance

cdp-e2e.test.ts (8 tests):
- Accessibility.getFullAXTree returns wrapped JSON (allowed, untrusted-output)
- Performance.getMetrics returns plain JSON (allowed, trusted-output)
- Runtime.evaluate DENIED with structured guidance (T2 RCE block)
- Page.navigate DENIED (must use $B goto for blocklist routing)
- Network.getResponseBody DENIED (exfil block)
- malformed JSON params surfaces clear error
- non Domain.method format surfaces clear error
- $B cdp help returns help text

Both files boot a real Chromium via BrowserManager.launch() and exercise
the dispatch handlers end-to-end. Total 12 E2E tests in <2s.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: regenerate SKILL.md files with new $B commands

bun run gen:skill-docs picks up the domain-skill and cdp META_COMMANDS
entries added in commands.ts. Both top-level SKILL.md and browse/SKILL.md
now list the new commands in their Meta and Inspection tables.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(fixtures): regenerate ship SKILL.md golden baselines for v1.7.0.0

Pre-existing failures inherited from garrytan/gbrain-support: the GBrain
Sync preamble block (added in v1.7.0.0) appears in regenerated SKILL.md
output but the golden baselines in test/fixtures/golden/ were never
updated. Three failures fixed:

  golden-file regression > Claude ship skill matches golden baseline
  golden-file regression > Codex ship skill matches golden baseline
  golden-file regression > Factory ship skill matches golden baseline

Goldens regenerated by copying the current ship/SKILL.md, codex
.agents/skills/gstack-ship/SKILL.md, and .factory/skills/gstack-ship/SKILL.md
files. Diff is the v1.7.0.0 GBrain Sync preamble block + privacy stop-gate
(no behavioral changes — just preamble text).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(brain-sync): bearer-token regex catches values with leading space

Pre-existing bug from v1.7.0.0: the bearer-token-json secret pattern
required values matching [A-Za-z0-9_./+=-]{16,}, which rejected the
"Bearer <token>" form because the literal space after "Bearer" wasn't
in the character class. Real Authorization headers use "Bearer <token>"
syntax, and the test fixture
  '"authorization":"Bearer abcdef1234567890abcdef1234567890"'
sat unscanned despite being a leak-class secret.

One-character fix: add space to the value character class. Test
'gstack-brain-sync secret scan > blocks bearer-json' now passes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(brain-sync): GSTACK_HOME isolation test compares mtime, not content

Pre-existing flaky test: the GSTACK_HOME-overrides-real-config test asserted
the real ~/.gstack/config.yaml does NOT contain "gbrain_sync_mode: full"
after the test. That fails for any user whose real config legitimately has
that key set from prior usage — the test's invariant is "the command did
not modify the real file," not "the real file lacks any specific value."

Switch to mtime + content snapshot: capture both BEFORE running the command,
then verify both are unchanged after. Also add a positive assertion that
the tmpHome config DID get the new key.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(skill-validation): exempt deliberate large fixtures from 2MB limit

Pre-existing failure: the "git tracks no files larger than 2MB" test
caught browse/test/fixtures/security-bench-haiku-responses.json (28.8MB
of replay data committed in v1.6.4.0 for security benchmark gate tests).

The test exists to catch accidentally-committed binaries (Mach-O dist
binaries, etc), not to forbid all large files. Add an explicit
LARGE_FIXTURE_EXEMPTIONS allowlist so deliberate replay fixtures pass
the gate while accidental binaries still fail.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(skill-token): mint scoped tokens per skill spawn

Wraps token-registry.createToken/revokeToken with skill-specific
clientId encoding (skill:<name>:<spawn-id>) and read+write defaults.
Skill scripts get a per-spawn capability token bound to browser-driving
commands; the daemon root token never leaves the harness.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(browse-client): SDK for browser-skill scripts

Thin wrapper over POST /command with bearer auth. Resolves daemon
port + token from GSTACK_PORT + GSTACK_SKILL_TOKEN env vars first
(set by $B skill run when spawning), falls back to .gstack/browse.json
for standalone debug runs.

Convenience methods cover the read+write surface skills typically need:
goto, click, fill, text, html, snapshot, links, forms, accessibility,
attrs, media, data, scroll, press, type, select, wait, hover, screenshot.
Low-level command(cmd, args) escape hatch for anything else.

This is the canonical SDK source. Each browser-skill ships a sibling
copy at <skill>/_lib/browse-client.ts so each skill is fully portable
and version-pinned.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(browser-skills): 3-tier storage helpers

listBrowserSkills() walks project > global > bundled (first-wins),
parses SKILL.md frontmatter, no INDEX.json. readBrowserSkill() does
the same for a single name. tombstoneBrowserSkill() moves a skill
into .tombstones/<name>-<ts>/ for recoverability.

Frontmatter parser handles the subset browser-skills need: scalars
(host, description, trusted, version, source), string lists
(triggers), and arg-mapping lists ([{name, description}, ...]).
Quoted values handle colons; trusted defaults to false.

Bundled tier path is auto-detected from the binary install location;
project tier comes from git rev-parse; global is ~/.gstack/. All tier
paths are overridable for hermetic tests.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(browser-skills): \$B skill list/show/run/test/rm subcommands

handleSkillCommand dispatches to per-subcommand handlers; spawnSkill is
the load-bearing function that:

  1. Mints a per-spawn scoped token (read+write only) bound to the
     skill name + spawn-id.
  2. Builds the spawn env:
     - trusted: passes process.env minus GSTACK_TOKEN (defense in depth).
     - untrusted: minimal allowlist (LANG, LC_ALL, TERM, TZ) + locked
       PATH; explicitly drops anything matching TOKEN/KEY/SECRET/etc.
       Also drops AWS_/AZURE_/GCP_/GOOGLE_APPLICATION_/ANTHROPIC_/OPENAI_/
       GITHUB_/GH_/SSH_/GPG_/NPM_TOKEN/PYPI_ patterns.
   3. Always injects GSTACK_PORT + GSTACK_SKILL_TOKEN last (cannot be
     overridden by parent env).
  4. Spawns bun run script.ts -- <args> with cwd=skillDir, captures
     stdout (1MB cap), stderr, and timeout-kills past the deadline.
  5. Revokes the token in finally{}, always.

list output prints the resolved tier inline so "why did it run that
one?" never becomes a debugging mystery (Codex finding #4 mitigation).

server.ts threads the listen port to meta-commands via MetaCommandOpts.daemonPort.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(browser-skills): bundled hackernews-frontpage reference skill

Smallest interesting browser-skill: scrapes HN front page, returns
30 stories as JSON. No auth, stable HTML, fully fixture-tested.

Files:
  SKILL.md                          frontmatter + prose
  script.ts                         exports parseStoriesFromHtml(html)
                                    main: goto + html + parse + JSON.stringify
  _lib/browse-client.ts             vendored copy of the SDK
  fixtures/hn-2026-04-26.html       captured front page (5 stories)
  script.test.ts                    13 assertions against the fixture

The parser is a pure function over HTML so script.test.ts runs
without a daemon (just imports parseStoriesFromHtml and asserts).

This exercises every Phase 1 component end-to-end:
  - browse-client SDK (script imports browse from ./_lib/)
  - 3-tier lookup (hackernews-frontpage lives in the bundled tier)
  - scoped tokens (read+write is enough for goto + html)
  - spawn lifecycle (\$B skill run hackernews-frontpage)
  - file-fixture testing (\$B skill test hackernews-frontpage)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(skill-validation): cover bundled browser-skills

Adds 7 assertions per bundled skill at <root>/browser-skills/<name>/:
  - SKILL.md exists
  - frontmatter parses with required fields (name/host/triggers/args)
  - script.ts exists
  - _lib/browse-client.ts exists and matches the canonical SDK byte-for-byte
  - script.test.ts exists
  - script.ts imports browse from ./_lib/browse-client

The byte-identical SDK check enforces the version-pinning contract:
when the canonical SDK at browse/src/browse-client.ts changes, every
bundled skill's _lib/ copy must be re-synced or this test fails.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(designs): add BROWSER_SKILLS_V1 design doc

Captures the 13 locked decisions, two-axis trust model (daemon-side
scoped tokens + process-side env access), 3-tier lookup, file
layout, and full responses to all 8 Codex outside-voice findings.
Includes Phase 2-4 sketches for future branches.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(todos): replace self-authoring-\$B P1 with browser-skills phases

Phase 1 of the browser-skills design shipped on this branch (sidesteps
the in-daemon isolation problem the original P1 was blocked on). The
new entries enumerate the work that remains:

  P1: Phase 2 (/scrape + /automate skill templates)
  P2: Phase 3 (resolver injection at session start)
  P2: Phase 4 (eval infra + fixture staleness + OS sandbox)

Cross-references docs/designs/BROWSER_SKILLS_V1.md for the full
architecture and the 8 Codex review findings + responses.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* release: v1.9.0.0 — browser-skills runtime

VERSION 1.8.0.0 → 1.9.0.0. CHANGELOG entry leads with what humans
can do today (hand-write deterministic browser scripts, run them in
200ms via \$B skill run). Notes explicitly that agent authoring
lands in next release; no fabricated perf numbers.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(browser-skills-e2e): exercise dispatch with bundled hackernews-frontpage

Covers the full \$B skill list/show/test pipeline against the real
bundled reference skill (defaultTierPaths picks up <repo>/browser-skills/).
Verifies frontmatter shape, the three-tier walk surfaces the bundled
entry, and \$B skill test successfully runs the bundled script.test.ts
in a child bun process.

\$B skill run end-to-end against the live network is intentionally NOT
covered here (would be flaky against news.ycombinator.com); the spawn
lifecycle is exercised in browser-skill-commands.test.ts using inline
synthetic skills.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: regen SKILL.md to surface the skill META command

bun run gen:skill-docs picked up the new \`skill\` command from
COMMAND_DESCRIPTIONS in browse/src/commands.ts.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* release: bump v1.9.0.0 → v1.13.0.0

Main shipped through v1.11.1.0 while this branch was in flight; v1.12.x
is presumed claimed by another in-flight branch. Use v1.13.0.0 as the
next available slot.

Updated VERSION, package.json, and the CHANGELOG header. Entry body
unchanged.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* release: bump v1.13.0.0 → v1.16.0.0

Main shipped v1.13.0.0 (claude outside-voice skill), v1.14.0.0
(sidebar REPL), and v1.15.0.0 (slim preamble + plan-mode E2E)
while this branch was in flight. Use v1.16.0.0 as the next
available slot.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(browse-skills): atomic write helper for /skillify (D3)

stageSkill writes a candidate skill into ~/.gstack/.tmp/skillify-<spawnId>/
with restrictive perms. commitSkill does an atomic fs.renameSync into the
final tier path with realpath/lstat discipline (refuses symlinked staging
dirs, refuses to clobber existing skills). discardStaged is the cleanup
path for test failures and approval rejections, idempotent and bounded
to the per-spawn wrapper. validateSkillName enforces lowercase/digits/
dashes only, no path-escape characters.

Implements the D3 contract from the v1.19.0.0 plan review: never a
half-written skill on disk. Test fail or approval reject = rm -rf the
temp dir, no tombstone for never-approved skills.

Closes Codex finding #5 (atomic skill packaging) for Phase 2a.

34 unit assertions covering: stage validation, file-path escape rejection,
permission check, atomic rename, clobber refusal, symlink refusal, project
tier unresolved, idempotent discard, end-to-end happy + simulated test
failure + approval reject paths.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(scrape): /scrape <intent> skill template

One entry point for pulling page data. Three paths under the hood:

1. Match — agent reads $B skill list, semantically matches the user's
   intent against each skill's triggers + description + host. Confident
   match = $B skill run <name> in ~200ms.
2. Prototype — no match, drive the page with $B goto/text/html/links etc.
   Return JSON, append a one-line "say /skillify" nudge.
3. Mutating refusal — verbs like submit/click/fill route to /automate
   (Phase 2b P0); /scrape is read-only by contract.

Match decision lives in the agent, not the daemon. No new code in
browse/src/, no expanded daemon command surface, no new prompt-injection
blast radius.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(skillify): /skillify codifies last /scrape into permanent skill

The productivity multiplier. /scrape discovers the flow; /skillify writes
it as deterministic Playwright-via-browse-client code so the next /scrape
on the same intent runs in ~200ms.

11-step flow with three locked contracts from the v1.19.0.0 plan review:

D1 — Provenance guard. Walk back ≤10 agent turns for a clearly-bounded
/scrape result. Refuse with one specific message if cold. No silent
synthesis from chat fragments.

D2 — Synthesis input slice. Extract ONLY the final-attempt $B calls that
produced the JSON the user accepted, plus the user's intent string. Drop
failed selectors, drop unrelated chat, drop earlier-session content.
Closes Codex finding #6 by picking option (b) from the design doc:
re-prompt from agent's own context, not a structured recorder.

D3 — Atomic write. Stage to ~/.gstack/.tmp/skillify-<spawnId>/, run
$B skill test against the temp dir, only rename into the final tier path
on test pass + user approval. Test fail or approval reject = rm -rf the
temp dir entirely.

Default tier: global (~/.gstack/browser-skills/<name>/). --project flag
overrides to per-project. Generated test must include at least one ★★
assertion (parsed JSON has expected shape + non-empty key fields), not a
smoke ★ assertion.

Bun runtime distribution (Codex finding #7) carries over to Phase 4.
Documented in the skill's Limits section.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(browser-skills): gate-tier E2E for /scrape + /skillify (D4)

Five scenarios cover the productivity loop and the contracts locked
during the v1.19.0.0 plan review:

  scrape-match-path           — intent matching bundled hackernews-frontpage
                                routes via $B skill run, no prototype phase
  scrape-prototype-path       — no matching skill, drives $B against a local
                                file:// fixture, returns JSON, suggests
                                /skillify
  skillify-happy-path         — /scrape then /skillify; skill written to
                                ~/.gstack/browser-skills/<name>/ with the
                                full file tree; SKILL.md prose body must
                                not contain conversation fragments (D2)
  skillify-provenance-refusal — cold /skillify with no prior /scrape refuses
                                with the D1 message; nothing on disk (D1)
  skillify-approval-reject    — /scrape then /skillify but reject in the
                                approval gate; temp dir is removed, nothing
                                at the final tier path (D3)

All five gate-tier (~$0.50-$1.50 each, ~$5 total per CI run). Set EVALS=1
to enable. Uses local file:// fixtures so prototype + skillify scenarios
run deterministically without network.

Touchfiles registers all 5 entries with proper deps on scrape/**,
skillify/**, browse/src/browser-skill-write.ts, and the Phase 1 runtime
modules. The match-path test depends on the bundled hackernews-frontpage
skill so its touchfile includes browser-skills/hackernews-frontpage/**.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(browser-skills): TODOS Phase 2a + design doc D1-D4 decisions

TODOS.md:
- Narrows existing P1 (was "/scrape and /automate") to "/scrape and
  /skillify" — the /scrape + /skillify wedge ships in this branch.
  Codex finding #6 (synthesis) removed from Cons (resolved by D2);
  finding #7 (Bun runtime) stays as the open carry-over.
- Adds new ## P0 above PACING_UPDATES_V0 for the /automate follow-up.
  Same skillify pattern as /scrape, different trust profile (per-step
  confirmation gate when running non-codified). Reuses /skillify and
  the D3 helper as-is. Effort M.

BROWSER_SKILLS_V1.md:
- Phase table re-organized into 1, 2a, 2b, 3, 4. Phase 1 + Phase 2a
  consolidate into v1.19.0.0 ship (the v1.16.0.0 branch-internal
  bump never landed on main).
- New "Phase 2a" sub-section captures the four decisions locked
  during /plan-eng-review:
    D1 — provenance guard (≤10 turn walk-back, refuse if cold)
    D2 — synthesis input slice (final-attempt $B calls only,
         closes Codex finding #6)
    D3 — atomic write discipline (temp-dir-then-rename via new
         browse/src/browser-skill-write.ts helper)
    D4 — full test scope (5 gate E2E + 1 unit + smoke)
- New "Phase 2b" sketch for /automate: same skillify machinery,
  per-mutating-step confirmation gate, deferred to next branch.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* release: v1.16.0.0 -> v1.19.0.0 — browser-skills Phase 1 + 2a

Consolidates the v1.16.0.0 branch-internal bump (Phase 1 runtime, never
landed on main) with Phase 2a (/scrape + /skillify + atomic-write helper)
into one v1.19.0.0 ship per CLAUDE.md "Never orphan branch-internal
versions" rule.

Headline: Browser-skills land end-to-end. /scrape <intent> first call
drives the page; second call runs the codified script in 200ms.

The unified CHANGELOG entry covers:
- Phase 1 runtime: $B skill list/show/run/test/rm, scoped tokens,
  3-tier storage, bundled hackernews-frontpage reference.
- Phase 2a: /scrape + /skillify gstack skills, browser-skill-write.ts
  atomic helper, 5 gate-tier E2E + 34 unit assertions.

Numbers table updated: 5 new modules (+browser-skill-write), 2 new
gstack skills, 6 of 8 Codex outside-voice findings resolved (synthesis
#6 closed by D2; Bun runtime #7 + OS sandbox #1 stay deferred to Phase 4).

/automate (Phase 2b) is split out as P0 in TODOS for the next branch.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(commands): tighten descriptions for LLM-judge baseline pinning

The skill-llm-eval test "baseline score pinning" failed CI on three
retry attempts: judge gave command_reference.actionability=3, baseline
demands ≥4. Judge cited 8 specific gaps in COMMAND_DESCRIPTIONS.

This commit closes 7 of 8 by tightening the descriptions:

- press: documents that key names are case-sensitive Playwright keys,
  shows modifier syntax (Shift+Enter, Control+A), links the full key
  list. Removes the "is this case-sensitive?" guesswork.
- is: documents that <sel> accepts either a CSS selector OR an @ref
  token from a prior snapshot, and that property values are case-
  sensitive.
- scroll: documents that there is no --by/--to amount option, points
  at `js window.scrollTo(0, N)` for pixel-precise scrolling.
- js / eval: clarifies that both run in the same JS sandbox, the
  difference is just inline expr (js) vs file (eval).
- storage: clarifies sessionStorage is read-only via this command,
  points at `js sessionStorage.setItem(...)` for the write path.
- chain: walks through how to invoke (pipe a JSON array of arrays to
  $B chain), confirms it stops at the first error.
- cdp: explains how to discover allowed methods (read cdp-allowlist.ts)
  + shows a concrete example invocation.
- domain-skill: explains that the "classifier flag" is set automatically
  by the L4 prompt-injection scan (agents do not set it manually);
  enumerates the full lifecycle verbs.

The 8th gap (storage set syntax conflict) is also resolved as part of
the storage rewrite.

Two pipe-character bugs caught by the existing
`no command description contains pipe character` guard at
`test/gen-skill-docs.test.ts:595`: the chain example originally used
`echo '[...]' | $B chain` (literal pipe) and the cdp description used
`tab|browser` / `trusted|untrusted` (also literal pipes). Both rewritten
to keep markdown table cells intact.

Verification: 696/0 pass on skill-validation + gen-skill-docs after
regen across all hosts. The CI llm-judge eval will re-run against the
new SKILL.md and should hit actionability ≥4 reliably.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(browser): rewrite BROWSER.md as complete reference

Full rewrite covering the gstack browser surface as of v1.19.0.0. Up from
488 to 1,299 lines, 26 top-level sections.

Adds previously-undocumented subsystems:

- The productivity loop: /scrape + /skillify with D1 (provenance guard),
  D2 (final-attempt-only synthesis), D3 (atomic-write discipline) contracts.
- Browser-skills runtime: anatomy, three-tier storage, scoped tokens, trust
  model (capability + env axes), sibling SDK distribution, atomic-write
  helper, bundled hackernews-frontpage reference.
- Domain-skills: per-site agent notes with quarantined → active → global
  state machine and the L4-classifier auto-promotion gate.
- Pair-agent: dual-listener architecture, 26-command tunnel allowlist,
  canDispatchOverTunnel pure gate, three token types (root, setup key,
  scoped), denial log path + salt model.
- Security stack L1-L6: layer table, thresholds (BLOCK/WARN/LOG_ONLY/
  SOLO_CONTENT_BLOCK), ensemble rule, classifier model paths, env knobs.
- Side Panel deep dive: Terminal pane (Claude PTY) as the primary surface
  with Activity/Refs/Inspector as debug overlays, WS auth via
  Sec-WebSocket-Protocol, gstackInjectToTerminal cross-pane plumbing.
- CDP escape hatch: $B cdp deny-default allowlist, $B inspect CSS inspector,
  $B ux-audit page structure extraction.
- Meta commands previously undocumented: tabs/frames/state/watch/inbox/
  tab-each, with usage and storage paths.
- Authentication: three token types with lifetimes, SSE session cookie,
  PTY session cookie, token registry behavior.
- Full source map: 30+ file inventory of browse/src/ vs the old 11-file
  list.

Preserves from before: architecture diagram, daemon lifecycle, snapshot
ref staleness, screenshot modes, goto file:// vs load-html semantics,
batch endpoint, JS await wrapping, env vars, performance numbers vs MCP,
Playwright acknowledgments, dev guide.

Cross-links to ARCHITECTURE.md, CLAUDE.md, docs/REMOTE_BROWSER_ACCESS.md,
docs/designs/BROWSER_SKILLS_V1.md, scrape/SKILL.md, skillify/SKILL.md,
TODOS.md so anyone landing on BROWSER.md can navigate to the load-bearing
companion docs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(server): tab-ownership gate keys on tabPolicy, not isWrite

Browser-skill spawns hit `403: Tab not owned by your agent` on every
first run because the gate at server.ts:639 fired for any non-root
write, regardless of the token's tabPolicy. The bundled
hackernews-frontpage reference skill failed identically. Every
/skillify-generated skill failed identically. The user's natural
tabs have no claimed owner — by design — so any skill driving
them via `goto` (a write) was 403'd.

The intent in skill-token.ts:79 was always correct: `tabPolicy: 'shared'`
with the comment "skill scripts may switch tabs as needed." The
enforcement just ignored it.

Two surgical changes:

browser-manager.ts:checkTabAccess — gate now keys on options.ownOnly
only. Shared-policy tokens (skill spawns, default scoped clients) get
permissive access — root-equivalent for the tab gate. Own-only tokens
(pair-agent over the ngrok tunnel) still require ownership for every
read and write. isWrite stays in the signature for callers that want
to log or branch elsewhere; it no longer gates the decision.

server.ts:639 — gate predicate narrowed from
  (WRITE_COMMANDS.has(command) || tokenInfo.tabPolicy === 'own-only')
to just
  tokenInfo.tabPolicy === 'own-only'
The 'newtab' exemption stays. Shared tokens skip the gate entirely;
own-only tokens still hit it. Comment block above the gate updated to
document the new predicate intent.

Pair-agent isolation is intact. Tunnel tokens still default to
tabPolicy: 'own-only', still must `newtab` first to get a tab they
can drive, still can't dispatch any of the 23 commands outside the
tunnel allowlist.

The capability gate (scope checks) and rate limits already constrain
what local scoped clients can do; tab ownership was never a security
boundary for them — only for pair-agent. This release makes the
enforcement match the original design intent.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(server): lock the shared-vs-own-only tab gate contract

The pre-fix tests at tab-isolation.test.ts:43,57 encoded the broken
behavior as the contract — they specifically asserted "scoped agent
cannot write to unowned tab," which was the exact failure mode that
broke browser-skills. They passed because they tested the wrong
invariant.

This commit replaces those tests with explicit shared-vs-own-only
coverage that documents what each policy actually means:

- Shared scoped agents (skill spawns, default scoped clients) can
  read AND write any tab — unowned, their own, or another agent's.
  The capability is gated by scope checks + rate limits, not by tab
  ownership.
- Own-only scoped agents (pair-agent over tunnel) cannot read OR
  write any tab they don't own. Pre-fix this case was conflated with
  shared writes; now it's explicit.

9 unit assertions on checkTabAccess, up from 6. Each test names
the policy axis it's covering so a future refactor can't quietly
flip the contract.

Adds source-shape regression test 10a in server-auth.test.ts:
"tab gate predicate is own-only-scoped, not write-scoped." The
gate's `if (...)` line MUST contain `tabPolicy === 'own-only'` and
MUST NOT contain `WRITE_COMMANDS.has(command) ||`. If a future
refactor re-introduces the write-scoped gate, this fails immediately
in free-tier `bun test`.

Updates the marker for the existing newtab-excluded test to match
the new comment block ("Tab ownership check (own-only tokens /
pair-agent isolation)").

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* release: v1.19.0.0 -> v1.20.0.0 — fix tab-ownership footgun

Patch release on top of v1.19.0.0. The shipping headline of v1.19.0.0
(/scrape + /skillify productivity loop) was broken on first run in any
session where the daemon already had a tab. Bundled
hackernews-frontpage failed identically. Every /skillify-generated
skill failed identically.

The fix narrows the tab-ownership gate from "any non-root write" to
"tabPolicy === 'own-only' only." Pair-agent isolation (the v1.6.0.0
threat model) is intact; local skill spawns get their original
behavior back.

VERSION: 1.19.0.0 -> 1.20.0.0
package.json version: synced.

CHANGELOG entry leads with the user-visible impact: the productivity
loop works again, no half-second-stalls of confused 403s. Includes
before/after metrics on the bundled reference skill and the broken-
contract pre-fix tests that hid the regression.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(claude): sharpen CHANGELOG rule — diff between main and ship

Codifies what was already implicit in the existing "Never orphan
branch-internal versions" + "Only document what shipped between main
and this change" sections, but with sharper language and concrete
NEVER examples.

The rule: a CHANGELOG entry is the diff between main and the shipping
branch — what users get when they upgrade. NOT how the branch got
there. Branch-internal version bumps, mid-branch bug fixes, plan
review outcomes, and patch narratives all belong in PR descriptions
and commit messages, not in CHANGELOG.

Adds explicit examples of phrasing to NEVER use:
  - "v1.X had a bug that v1.Y fixes" (mentions a branch-internal version)
  - "The shipping headline of v1.X was broken because..." (apologizes
    for never-released state)
  - "Pre-fix tests encoded the broken behavior" (contributor's victory
    lap, not user benefit)
  - "Two surgical edits, both in the dispatch path" (micro-narrative
    of the patch)

The constructive replacement: describe the released system as a
property, not as a fix. "Browser-skills run end-to-end with the
expected tab-access semantics." If a property is worth calling out,
document it in the trust-model section, not as a "we fixed X" callout.

Pairs with feedback_no_shame_changelog and
feedback_changelog_harden_against_critics memories — entries should
read as a flex even to a hostile screenshotter, never admit prior
breakage.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(changelog): consolidate v1.20.0.0 as the diff vs main

Rewrites the v1.20.0.0 entry to describe what users get when they
upgrade from main (v1.17.0.0) to this release: browser-skills
end-to-end. Drops all branch-internal narrative — Phase 1 / Phase 2a
labels, the v1.8.0.0 P1 history paragraph, the test-counts-by-phase
split, and the patch micro-narrative for the tab-policy semantics.

The previously-separate v1.19.0.0 entry (a branch-internal version
that never landed on main) collapses into v1.20.0.0 per the
"Never orphan branch-internal versions" rule.

Tab-access policies are now documented as a property of the trust
model: `'shared'` (skill spawns) is permissive, `'own-only'`
(pair-agent over the tunnel) is strict. No "fix" framing, no
mention of an intermediate state where it was broken.

Adds the BROWSER.md rewrite and the new tab-isolation +
server-auth source-shape regression tests to the itemized changes.

The reverse-chronological order remains: v1.20.0.0 → v1.17.0.0 →
v1.16.0.0 → v1.15.0.0 → ... Gaps (v1.18, v1.19) are fine — those
were branch-internal version numbers that never landed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-28 20:08:04 -07:00

1151 lines
47 KiB
TypeScript

/**
* Meta commands — tabs, server control, screenshots, chain, diff, snapshot
*/
import type { BrowserManager } from './browser-manager';
import { handleSnapshot } from './snapshot';
import { getCleanText } from './read-commands';
import { READ_COMMANDS, WRITE_COMMANDS, META_COMMANDS, PAGE_CONTENT_COMMANDS, wrapUntrustedContent, canonicalizeCommand } from './commands';
import { handleDomainSkillCommand } from './domain-skill-commands';
import { handleSkillCommand } from './browser-skill-commands';
import { validateNavigationUrl } from './url-validation';
import { checkScope, type TokenInfo } from './token-registry';
import { validateOutputPath, validateReadPath, SAFE_DIRECTORIES, escapeRegExp } from './path-security';
// Re-export for backward compatibility (tests import from meta-commands)
export { validateOutputPath, escapeRegExp } from './path-security';
import * as Diff from 'diff';
import * as fs from 'fs';
import * as path from 'path';
import { TEMP_DIR } from './platform';
import { resolveConfig } from './config';
import type { Frame } from 'playwright';
/** Tokenize a pipe segment respecting double-quoted strings. */
function tokenizePipeSegment(segment: string): string[] {
const tokens: string[] = [];
let current = '';
let inQuote = false;
for (let i = 0; i < segment.length; i++) {
const ch = segment[i];
if (ch === '"') {
inQuote = !inQuote;
} else if (ch === ' ' && !inQuote) {
if (current) { tokens.push(current); current = ''; }
} else {
current += ch;
}
}
if (current) tokens.push(current);
return tokens;
}
// ─── PDF flag parsing (make-pdf contract) ─────────────────────────────
//
// The $B pdf command grew from a 2-line wrapper (format: 'A4') into a real
// PDF engine frontend. make-pdf/dist/pdf shells out to `browse pdf` with
// this flag set, so the contract here has to be stable.
//
// Mutex rules enforced:
// --format vs --width/--height
// --margins vs any --margin-*
// --page-numbers vs --footer-template (page-numbers writes the footer itself)
//
// Units for dimensions: "1in" | "72pt" | "25mm" | "2.54cm". Bare numbers
// are interpreted as pixels (Playwright's default), which is almost never
// what callers want — we warn but don't reject.
//
// Large payloads: header/footer HTML and custom CSS can exceed Windows'
// 8191-char CreateProcess cap via argv. Callers pass `--from-file <path>`
// to a JSON file holding the full options. make-pdf always uses this path.
interface ParsedPdfArgs {
output: string;
format?: string;
width?: string;
height?: string;
marginTop?: string;
marginRight?: string;
marginBottom?: string;
marginLeft?: string;
headerTemplate?: string;
footerTemplate?: string;
pageNumbers?: boolean;
tagged?: boolean;
outline?: boolean;
printBackground?: boolean;
preferCSSPageSize?: boolean;
toc?: boolean;
}
function parsePdfArgs(args: string[]): ParsedPdfArgs {
// --from-file short-circuits argv parsing entirely
for (let i = 0; i < args.length; i++) {
if (args[i] === '--from-file') {
const payloadPath = args[++i];
if (!payloadPath) throw new Error('pdf: --from-file requires a path');
return parsePdfFromFile(payloadPath);
}
}
const result: ParsedPdfArgs = {
output: `${TEMP_DIR}/browse-page.pdf`,
};
let margins: string | undefined;
const positional: string[] = [];
for (let i = 0; i < args.length; i++) {
const a = args[i];
if (a === '--format') { result.format = requireValue(args, ++i, 'format'); }
else if (a === '--page-size') { result.format = requireValue(args, ++i, 'page-size'); }
else if (a === '--width') { result.width = requireValue(args, ++i, 'width'); }
else if (a === '--height') { result.height = requireValue(args, ++i, 'height'); }
else if (a === '--margins') { margins = requireValue(args, ++i, 'margins'); }
else if (a === '--margin-top') { result.marginTop = requireValue(args, ++i, 'margin-top'); }
else if (a === '--margin-right') { result.marginRight = requireValue(args, ++i, 'margin-right'); }
else if (a === '--margin-bottom') { result.marginBottom = requireValue(args, ++i, 'margin-bottom'); }
else if (a === '--margin-left') { result.marginLeft = requireValue(args, ++i, 'margin-left'); }
else if (a === '--header-template') { result.headerTemplate = requireValue(args, ++i, 'header-template'); }
else if (a === '--footer-template') { result.footerTemplate = requireValue(args, ++i, 'footer-template'); }
else if (a === '--page-numbers') { result.pageNumbers = true; }
else if (a === '--tagged') { result.tagged = true; }
else if (a === '--outline') { result.outline = true; }
else if (a === '--print-background') { result.printBackground = true; }
else if (a === '--prefer-css-page-size') { result.preferCSSPageSize = true; }
else if (a === '--toc') { result.toc = true; }
else if (a.startsWith('--')) { throw new Error(`Unknown pdf flag: ${a}`); }
else { positional.push(a); }
}
if (positional.length > 0) result.output = positional[0];
if (margins !== undefined) {
if (result.marginTop || result.marginRight || result.marginBottom || result.marginLeft) {
throw new Error('pdf: --margins is mutex with --margin-top/--margin-right/--margin-bottom/--margin-left');
}
result.marginTop = result.marginRight = result.marginBottom = result.marginLeft = margins;
}
if (result.format && (result.width || result.height)) {
throw new Error('pdf: --format is mutex with --width/--height');
}
if (result.pageNumbers && result.footerTemplate) {
throw new Error('pdf: --page-numbers is mutex with --footer-template (page-numbers writes the footer itself)');
}
return result;
}
function parsePdfFromFile(payloadPath: string): ParsedPdfArgs {
// Parity with load-html --from-file (browse/src/write-commands.ts) and
// the direct load-html <file> path: every caller-supplied file path
// must pass validateReadPath so the safe-dirs policy can't be skirted
// by routing reads through the --from-file shortcut.
try {
validateReadPath(path.resolve(payloadPath));
} catch {
throw new Error(
`pdf: --from-file ${payloadPath} must be under ${SAFE_DIRECTORIES.join(' or ')} (security policy). Copy the payload into the project tree or /tmp first.`
);
}
const raw = fs.readFileSync(payloadPath, 'utf8');
const json = JSON.parse(raw);
const out: ParsedPdfArgs = {
output: json.output || `${TEMP_DIR}/browse-page.pdf`,
format: json.format,
width: json.width,
height: json.height,
marginTop: json.marginTop,
marginRight: json.marginRight,
marginBottom: json.marginBottom,
marginLeft: json.marginLeft,
headerTemplate: json.headerTemplate,
footerTemplate: json.footerTemplate,
pageNumbers: json.pageNumbers === true,
tagged: json.tagged === true,
outline: json.outline === true,
printBackground: json.printBackground === true,
preferCSSPageSize: json.preferCSSPageSize === true,
toc: json.toc === true,
};
return out;
}
function requireValue(args: string[], i: number, flag: string): string {
const v = args[i];
if (v === undefined || v.startsWith('--')) {
throw new Error(`pdf: --${flag} requires a value`);
}
return v;
}
function buildPdfOptions(parsed: ParsedPdfArgs): Record<string, unknown> {
const opts: Record<string, unknown> = {};
// Page size
if (parsed.format) {
opts.format = parsed.format.charAt(0).toUpperCase() + parsed.format.slice(1).toLowerCase();
} else if (parsed.width && parsed.height) {
opts.width = parsed.width;
opts.height = parsed.height;
} else {
opts.format = 'Letter';
}
// Margins
const margin: Record<string, string> = {};
if (parsed.marginTop) margin.top = parsed.marginTop;
if (parsed.marginRight) margin.right = parsed.marginRight;
if (parsed.marginBottom) margin.bottom = parsed.marginBottom;
if (parsed.marginLeft) margin.left = parsed.marginLeft;
if (Object.keys(margin).length > 0) opts.margin = margin;
// Header/footer
const displayHeaderFooter =
!!parsed.headerTemplate || !!parsed.footerTemplate || parsed.pageNumbers === true;
if (displayHeaderFooter) {
opts.displayHeaderFooter = true;
// Provide minimum empty templates when only one is set, otherwise Chromium
// emits its default ugly URL/date in the other slot.
if (parsed.headerTemplate !== undefined) opts.headerTemplate = parsed.headerTemplate;
else if (parsed.pageNumbers || parsed.footerTemplate) opts.headerTemplate = '<div></div>';
if (parsed.pageNumbers) {
opts.footerTemplate = [
'<div style="font-size:9pt; font-family:Helvetica,Arial,sans-serif; color:#666; ',
'width:100%; text-align:center;">',
'<span class="pageNumber"></span> of <span class="totalPages"></span>',
'</div>',
].join('');
} else if (parsed.footerTemplate !== undefined) {
opts.footerTemplate = parsed.footerTemplate;
} else {
opts.footerTemplate = '<div></div>';
}
}
if (parsed.tagged === true) opts.tagged = true;
if (parsed.outline === true) opts.outline = true;
if (parsed.printBackground === true) opts.printBackground = true;
if (parsed.preferCSSPageSize === true) opts.preferCSSPageSize = true;
return opts;
}
/** Options passed from handleCommandInternal for chain routing */
export interface MetaCommandOpts {
chainDepth?: number;
/** Callback to route subcommands through the full security pipeline (handleCommandInternal) */
executeCommand?: (body: { command: string; args?: string[]; tabId?: number }, tokenInfo?: TokenInfo | null) => Promise<{ status: number; result: string; json?: boolean }>;
/** The port the daemon is listening on (needed by `$B skill run` to point spawned scripts at the daemon). */
daemonPort?: number;
}
export async function handleMetaCommand(
command: string,
args: string[],
bm: BrowserManager,
shutdown: () => Promise<void> | void,
tokenInfo?: TokenInfo | null,
opts?: MetaCommandOpts,
): Promise<string> {
// Per-tab operations use the active session; global operations use bm directly
const session = bm.getActiveSession();
switch (command) {
// ─── Tabs ──────────────────────────────────────────
case 'tabs': {
const tabs = await bm.getTabListWithTitles();
return tabs.map(t =>
`${t.active ? '→ ' : ' '}[${t.id}] ${t.title || '(untitled)'}${t.url}`
).join('\n');
}
case 'tab': {
const id = parseInt(args[0], 10);
if (isNaN(id)) throw new Error('Usage: browse tab <id>');
bm.switchTab(id);
return `Switched to tab ${id}`;
}
case 'newtab': {
// --json returns structured output (machine-parseable). Other flag-like
// tokens are treated as the url. make-pdf always passes --json.
let url: string | undefined;
let jsonMode = false;
for (const a of args) {
if (a === '--json') { jsonMode = true; }
else if (!url) { url = a; }
}
const id = await bm.newTab(url);
if (jsonMode) {
return JSON.stringify({ tabId: id, url: url ?? null });
}
return `Opened tab ${id}${url ? `${url}` : ''}`;
}
case 'closetab': {
const id = args[0] ? parseInt(args[0], 10) : undefined;
await bm.closeTab(id);
return `Closed tab${id ? ` ${id}` : ''}`;
}
case 'tab-each': {
// Fan out a single command across every open tab. Returns a JSON
// object: { results: [{tabId, url, title, status, output}], total }.
// Restores the originally active tab when done so the user's view
// doesn't shift under them.
//
// Usage: $B tab-each <command> [args...]
// $B tab-each snapshot -i → snapshot every tab
// $B tab-each text → grab clean text from every tab
// $B tab-each goto https://x.y → load the same URL in every tab
if (args.length === 0) {
throw new Error(
'Usage: browse tab-each <command> [args...]\n' +
'Example: browse tab-each snapshot -i'
);
}
const innerRaw = args[0];
const innerName = canonicalizeCommand(innerRaw);
const innerArgs = args.slice(1);
// Scope check the inner command before fanning out, so a single
// permission failure aborts the whole batch instead of partially
// mutating tabs.
if (tokenInfo && tokenInfo.clientId !== 'root' && !checkScope(tokenInfo, innerName)) {
throw new Error(
`tab-each rejected: subcommand "${innerRaw}" not allowed by your token scope (${tokenInfo.scopes.join(', ')}).`
);
}
const tabs = await bm.getTabListWithTitles();
const originalActive = tabs.find(t => t.active)?.id ?? bm.getActiveTabId();
const executeCmd = opts?.executeCommand;
const results: Array<{
tabId: number;
url: string;
title: string;
status: number;
output: string;
}> = [];
try {
for (const tab of tabs) {
// Skip chrome:// internal pages — they aren't useful targets and
// many commands fail outright on them.
if (tab.url.startsWith('chrome://') || tab.url.startsWith('chrome-extension://')) {
results.push({
tabId: tab.id,
url: tab.url,
title: tab.title || '',
status: 0,
output: 'skipped: internal page',
});
continue;
}
// Switch to the tab. Don't pull focus away — we're a background
// operation; the user shouldn't see the OS window jump.
bm.switchTab(tab.id, { bringToFront: false });
let status = 0;
let output = '';
if (executeCmd) {
const r = await executeCmd(
{ command: innerName, args: innerArgs, tabId: tab.id },
tokenInfo,
);
status = r.status;
output = r.result;
if (status !== 200) {
try { output = JSON.parse(output).error || output; } catch (err: any) { if (!(err instanceof SyntaxError)) throw err; }
}
} else {
// Fallback path (CLI / test harness without a server context).
// We don't recurse through read/write/meta directly here because
// tab-each is only meaningful with the live server; surface a
// clear error.
status = 500;
output = 'tab-each requires the browse server (no executeCommand context)';
}
results.push({
tabId: tab.id,
url: tab.url,
title: tab.title || '',
status,
output,
});
}
} finally {
// Restore the original active tab so the user's view is unchanged.
try { bm.switchTab(originalActive, { bringToFront: false }); } catch {}
}
return JSON.stringify({
command: innerName,
args: innerArgs,
total: results.length,
results,
}, null, 2);
}
// ─── Server Control ────────────────────────────────
case 'status': {
const page = bm.getPage();
const tabs = bm.getTabCount();
const mode = bm.getConnectionMode();
return [
`Status: healthy`,
`Mode: ${mode}`,
`URL: ${page.url()}`,
`Tabs: ${tabs}`,
`PID: ${process.pid}`,
].join('\n');
}
case 'url': {
return bm.getCurrentUrl();
}
case 'stop': {
await shutdown();
return 'Server stopped';
}
case 'restart': {
// Signal that we want a restart — the CLI will detect exit and restart
console.log('[browse] Restart requested. Exiting for CLI to restart.');
await shutdown();
return 'Restarting...';
}
// ─── Visual ────────────────────────────────────────
case 'screenshot': {
// Parse priority: flags (--viewport, --clip, --base64) → selector (@ref, CSS) → output path
const page = bm.getPage();
let outputPath = `${TEMP_DIR}/browse-screenshot.png`;
let clipRect: { x: number; y: number; width: number; height: number } | undefined;
let targetSelector: string | undefined;
let viewportOnly = false;
let base64Mode = false;
const remaining: string[] = [];
let flagSelector: string | undefined;
for (let i = 0; i < args.length; i++) {
if (args[i] === '--viewport') {
viewportOnly = true;
} else if (args[i] === '--base64') {
base64Mode = true;
} else if (args[i] === '--selector') {
flagSelector = args[++i];
if (!flagSelector) throw new Error('Usage: screenshot --selector <css> [path]');
} else if (args[i] === '--clip') {
const coords = args[++i];
if (!coords) throw new Error('Usage: screenshot --clip x,y,w,h [path]');
const parts = coords.split(',').map(Number);
if (parts.length !== 4 || parts.some(isNaN))
throw new Error('Usage: screenshot --clip x,y,width,height — all must be numbers');
clipRect = { x: parts[0], y: parts[1], width: parts[2], height: parts[3] };
} else if (args[i].startsWith('--')) {
throw new Error(`Unknown screenshot flag: ${args[i]}`);
} else {
remaining.push(args[i]);
}
}
// Separate target (selector/@ref) from output path
for (const arg of remaining) {
// File paths containing / and ending with an image/pdf extension are never CSS selectors
const isFilePath = arg.includes('/') && /\.(png|jpe?g|webp|pdf)$/i.test(arg);
if (isFilePath) {
outputPath = arg;
} else if (arg.startsWith('@e') || arg.startsWith('@c') || arg.startsWith('.') || arg.startsWith('#') || arg.includes('[')) {
targetSelector = arg;
} else {
outputPath = arg;
}
}
// --selector flag takes precedence; conflict with positional selector.
if (flagSelector !== undefined) {
if (targetSelector !== undefined) {
throw new Error('--selector conflicts with positional selector — choose one');
}
targetSelector = flagSelector;
}
validateOutputPath(outputPath);
if (clipRect && targetSelector) {
throw new Error('Cannot use --clip with a selector/ref — choose one');
}
if (viewportOnly && clipRect) {
throw new Error('Cannot use --viewport with --clip — choose one');
}
// --base64 mode: capture to buffer instead of disk
if (base64Mode) {
let buffer: Buffer;
if (targetSelector) {
const resolved = await bm.resolveRef(targetSelector);
const locator = 'locator' in resolved ? resolved.locator : page.locator(resolved.selector);
buffer = await locator.screenshot({ timeout: 5000 });
} else if (clipRect) {
buffer = await page.screenshot({ clip: clipRect });
} else {
buffer = await page.screenshot({ fullPage: !viewportOnly });
}
if (buffer.length > 10 * 1024 * 1024) {
throw new Error('Screenshot too large for --base64 (>10MB). Use disk path instead.');
}
return `data:image/png;base64,${buffer.toString('base64')}`;
}
if (targetSelector) {
const resolved = await bm.resolveRef(targetSelector);
const locator = 'locator' in resolved ? resolved.locator : page.locator(resolved.selector);
await locator.screenshot({ path: outputPath, timeout: 5000 });
return `Screenshot saved (element): ${outputPath}`;
}
if (clipRect) {
await page.screenshot({ path: outputPath, clip: clipRect });
return `Screenshot saved (clip ${clipRect.x},${clipRect.y},${clipRect.width},${clipRect.height}): ${outputPath}`;
}
await page.screenshot({ path: outputPath, fullPage: !viewportOnly });
return `Screenshot saved${viewportOnly ? ' (viewport)' : ''}: ${outputPath}`;
}
case 'pdf': {
const page = bm.getPage();
const parsed = parsePdfArgs(args);
validateOutputPath(parsed.output);
// If --toc: wait up to 3s for Paged.js to signal by setting
// window.__pagedjsAfterFired = true. If the polyfill isn't injected
// (make-pdf v1 ships without Paged.js; TOC renders without page
// numbers), we fall through silently — callers that require strict
// TOC pagination should pass --require-paged-js too.
if (parsed.toc) {
const deadline = Date.now() + 3000;
let ready = false;
while (Date.now() < deadline) {
try {
ready = await page.evaluate('!!window.__pagedjsAfterFired');
} catch { /* tab may still be hydrating */ }
if (ready) break;
await new Promise(r => setTimeout(r, 150));
}
// Intentionally non-fatal. Paged.js is optional in v1.
}
const opts = buildPdfOptions(parsed);
opts.path = parsed.output;
await page.pdf(opts);
return `PDF saved: ${parsed.output}`;
}
case 'responsive': {
const page = bm.getPage();
const prefix = args[0] || `${TEMP_DIR}/browse-responsive`;
validateOutputPath(prefix);
const viewports = [
{ name: 'mobile', width: 375, height: 812 },
{ name: 'tablet', width: 768, height: 1024 },
{ name: 'desktop', width: 1280, height: 720 },
];
const originalViewport = page.viewportSize();
const results: string[] = [];
for (const vp of viewports) {
await page.setViewportSize({ width: vp.width, height: vp.height });
const screenshotPath = `${prefix}-${vp.name}.png`;
validateOutputPath(screenshotPath);
await page.screenshot({ path: screenshotPath, fullPage: true });
results.push(`${vp.name} (${vp.width}x${vp.height}): ${screenshotPath}`);
}
// Restore original viewport
if (originalViewport) {
await page.setViewportSize(originalViewport);
}
return results.join('\n');
}
// ─── Chain ─────────────────────────────────────────
case 'chain': {
// Read JSON array from args[0] (if provided) or expect it was passed as body
const jsonStr = args[0];
if (!jsonStr) throw new Error(
'Usage: echo \'[["goto","url"],["text"]]\' | browse chain\n' +
' or: browse chain \'goto url | click @e5 | snapshot -ic\''
);
let rawCommands: string[][];
try {
rawCommands = JSON.parse(jsonStr);
if (!Array.isArray(rawCommands)) throw new Error('not array');
} catch (err: any) {
// Fallback: pipe-delimited format "goto url | click @e5 | snapshot -ic"
if (!(err instanceof SyntaxError) && err?.message !== 'not array') throw err;
rawCommands = jsonStr.split(' | ')
.filter(seg => seg.trim().length > 0)
.map(seg => tokenizePipeSegment(seg.trim()));
}
// Canonicalize aliases across the whole chain. Pair canonical name with the raw
// input so result labels + error messages reflect what the user typed, but every
// dispatch path (scope check, WRITE_COMMANDS.has, watch blocking, handler lookup)
// uses the canonical name. Otherwise `chain '[["setcontent","/tmp/x.html"]]'`
// bypasses prevalidation or runs under the wrong command set.
const commands = rawCommands.map(cmd => {
const [rawName, ...cmdArgs] = cmd;
const name = canonicalizeCommand(rawName);
return { rawName, name, args: cmdArgs };
});
// Pre-validate ALL subcommands against the token's scope before executing any.
// Uses canonical name so aliases don't bypass scope checks.
if (tokenInfo && tokenInfo.clientId !== 'root') {
for (const c of commands) {
if (!checkScope(tokenInfo, c.name)) {
throw new Error(
`Chain rejected: subcommand "${c.rawName}" not allowed by your token scope (${tokenInfo.scopes.join(', ')}). ` +
`All subcommands must be within scope.`
);
}
}
}
// Route each subcommand through handleCommandInternal for full security:
// scope, domain, tab ownership, content wrapping — all enforced per subcommand.
// Chain-specific options: skip rate check (chain = 1 request), skip activity
// events (chain emits 1 event), increment chain depth (recursion guard).
const executeCmd = opts?.executeCommand;
const results: string[] = [];
let lastWasWrite = false;
if (executeCmd) {
// Full security pipeline via handleCommandInternal.
// Pass rawName so the server's own canonicalization is a no-op (already canonical).
for (const c of commands) {
const cr = await executeCmd(
{ command: c.name, args: c.args },
tokenInfo,
);
const label = c.rawName === c.name ? c.name : `${c.rawName}${c.name}`;
if (cr.status === 200) {
results.push(`[${label}] ${cr.result}`);
} else {
// Parse error from JSON result
let errMsg = cr.result;
try { errMsg = JSON.parse(cr.result).error || cr.result; } catch (err: any) { if (!(err instanceof SyntaxError)) throw err; }
results.push(`[${label}] ERROR: ${errMsg}`);
}
lastWasWrite = WRITE_COMMANDS.has(c.name);
}
} else {
// Fallback: direct dispatch (CLI mode, no server context)
const { handleReadCommand } = await import('./read-commands');
const { handleWriteCommand } = await import('./write-commands');
for (const c of commands) {
const name = c.name;
const cmdArgs = c.args;
const label = c.rawName === name ? name : `${c.rawName}${name}`;
try {
let result: string;
if (WRITE_COMMANDS.has(name)) {
if (bm.isWatching()) {
result = 'BLOCKED: write commands disabled in watch mode';
} else {
result = await handleWriteCommand(name, cmdArgs, session, bm);
}
lastWasWrite = true;
} else if (READ_COMMANDS.has(name)) {
result = await handleReadCommand(name, cmdArgs, session);
if (PAGE_CONTENT_COMMANDS.has(name)) {
result = wrapUntrustedContent(result, bm.getCurrentUrl());
}
lastWasWrite = false;
} else if (META_COMMANDS.has(name)) {
result = await handleMetaCommand(name, cmdArgs, bm, shutdown, tokenInfo, opts);
lastWasWrite = false;
} else {
throw new Error(`Unknown command: ${c.rawName}`);
}
results.push(`[${label}] ${result}`);
} catch (err: any) {
results.push(`[${label}] ERROR: ${err.message}`);
}
}
}
// Wait for network to settle after write commands before returning
if (lastWasWrite) {
await bm.getPage().waitForLoadState('networkidle', { timeout: 2000 }).catch(() => {});
}
return results.join('\n\n');
}
// ─── Diff ──────────────────────────────────────────
case 'diff': {
const [url1, url2] = args;
if (!url1 || !url2) throw new Error('Usage: browse diff <url1> <url2>');
const page = bm.getPage();
const normalizedUrl1 = await validateNavigationUrl(url1);
await page.goto(normalizedUrl1, { waitUntil: 'domcontentloaded', timeout: 15000 });
const text1 = await getCleanText(page);
const normalizedUrl2 = await validateNavigationUrl(url2);
await page.goto(normalizedUrl2, { waitUntil: 'domcontentloaded', timeout: 15000 });
const text2 = await getCleanText(page);
const changes = Diff.diffLines(text1, text2);
const output: string[] = [`--- ${url1}`, `+++ ${url2}`, ''];
for (const part of changes) {
const prefix = part.added ? '+' : part.removed ? '-' : ' ';
const lines = part.value.split('\n').filter(l => l.length > 0);
for (const line of lines) {
output.push(`${prefix} ${line}`);
}
}
return wrapUntrustedContent(output.join('\n'), `diff: ${url1} vs ${url2}`);
}
// ─── Snapshot ─────────────────────────────────────
case 'snapshot': {
const isScoped = tokenInfo && tokenInfo.clientId !== 'root';
const snapshotResult = await handleSnapshot(args, session, {
splitForScoped: !!isScoped,
});
// Scoped tokens get split format (refs outside envelope); root gets basic wrapping
if (isScoped) {
return snapshotResult; // already has envelope from split format
}
return wrapUntrustedContent(snapshotResult, bm.getCurrentUrl());
}
// ─── Handoff ────────────────────────────────────
case 'handoff': {
const message = args.join(' ') || 'User takeover requested';
return await bm.handoff(message);
}
case 'resume': {
bm.resume();
// Re-snapshot to capture current page state after human interaction
const isScoped2 = tokenInfo && tokenInfo.clientId !== 'root';
const snapshot = await handleSnapshot(['-i'], session, { splitForScoped: !!isScoped2 });
if (isScoped2) {
return `RESUMED\n${snapshot}`;
}
return `RESUMED\n${wrapUntrustedContent(snapshot, bm.getCurrentUrl())}`;
}
// ─── Headed Mode ──────────────────────────────────────
case 'connect': {
// connect is handled as a pre-server command in cli.ts
// If we get here, server is already running — tell the user
if (bm.getConnectionMode() === 'headed') {
return 'Already in headed mode with extension.';
}
return 'The connect command must be run from the CLI (not sent to a running server). Run: $B connect';
}
case 'disconnect': {
if (bm.getConnectionMode() !== 'headed') {
return 'Not in headed mode — nothing to disconnect.';
}
// Signal that we want a restart in headless mode
console.log('[browse] Disconnecting headed browser. Restarting in headless mode.');
await shutdown();
return 'Disconnected. Server will restart in headless mode on next command.';
}
case 'focus': {
if (bm.getConnectionMode() !== 'headed') {
return 'focus requires headed mode. Run `$B connect` first.';
}
try {
const { execSync } = await import('child_process');
// Try common Chromium-based browser app names to bring to foreground
const appNames = ['Comet', 'Google Chrome', 'Arc', 'Brave Browser', 'Microsoft Edge'];
let activated = false;
for (const appName of appNames) {
try {
execSync(`osascript -e 'tell application "${appName}" to activate'`, { stdio: 'pipe', timeout: 3000 });
activated = true;
break;
} catch (err: any) {
// Try next browser — osascript fails if app not found or AppleScript errors
if (err?.status === undefined && !err?.message?.includes('Command failed')) throw err;
}
}
if (!activated) {
return 'Could not bring browser to foreground. macOS only.';
}
// If a ref was passed, scroll it into view
if (args.length > 0 && args[0].startsWith('@')) {
try {
const resolved = await bm.resolveRef(args[0]);
if ('locator' in resolved) {
await resolved.locator.scrollIntoViewIfNeeded({ timeout: 5000 });
return `Browser activated. Scrolled ${args[0]} into view.`;
}
} catch (err: any) {
// Ref not found or element gone — still activated the browser
if (!err?.message?.includes('not found') && !err?.message?.includes('closed') && !err?.message?.includes('Target') && !err?.message?.includes('timeout')) throw err;
}
}
return 'Browser window activated.';
} catch (err: any) {
return `focus failed: ${err.message}. macOS only.`;
}
}
// ─── Watch ──────────────────────────────────────────
case 'watch': {
if (args[0] === 'stop') {
if (!bm.isWatching()) return 'Not currently watching.';
const result = bm.stopWatch();
const durationSec = Math.round(result.duration / 1000);
const lastSnapshot = result.snapshots.length > 0
? wrapUntrustedContent(result.snapshots[result.snapshots.length - 1], bm.getCurrentUrl())
: '(none)';
return [
`WATCH STOPPED (${durationSec}s, ${result.snapshots.length} snapshots)`,
'',
'Last snapshot:',
lastSnapshot,
].join('\n');
}
if (bm.isWatching()) return 'Already watching. Run `$B watch stop` to stop.';
if (bm.getConnectionMode() !== 'headed') {
return 'watch requires headed mode. Run `$B connect` first.';
}
bm.startWatch();
return 'WATCHING — observing user browsing. Periodic snapshots every 5s.\nRun `$B watch stop` to stop and get summary.';
}
// ─── Inbox ──────────────────────────────────────────
case 'inbox': {
const { execSync } = await import('child_process');
let gitRoot: string;
try {
gitRoot = execSync('git rev-parse --show-toplevel', { encoding: 'utf-8', stdio: ['pipe', 'pipe', 'pipe'] }).trim();
} catch (err: any) {
// execSync throws with exit status on non-git directories
if (err?.status === undefined && !err?.message?.includes('Command failed')) throw err;
return 'Not in a git repository — cannot locate inbox.';
}
const inboxDir = path.join(gitRoot, '.context', 'sidebar-inbox');
if (!fs.existsSync(inboxDir)) return 'Inbox empty.';
const files = fs.readdirSync(inboxDir)
.filter(f => f.endsWith('.json') && !f.startsWith('.'))
.sort()
.reverse(); // newest first
if (files.length === 0) return 'Inbox empty.';
const messages: { timestamp: string; url: string; userMessage: string }[] = [];
for (const file of files) {
try {
const data = JSON.parse(fs.readFileSync(path.join(inboxDir, file), 'utf-8'));
messages.push({
timestamp: data.timestamp || '',
url: data.page?.url || 'unknown',
userMessage: data.userMessage || '',
});
} catch (err: any) {
// Skip malformed JSON or unreadable files
if (!(err instanceof SyntaxError) && err?.code !== 'ENOENT' && err?.code !== 'EACCES') throw err;
}
}
if (messages.length === 0) return 'Inbox empty.';
const lines: string[] = [];
lines.push(`SIDEBAR INBOX (${messages.length} message${messages.length === 1 ? '' : 's'})`);
lines.push('────────────────────────────────');
for (const msg of messages) {
const ts = msg.timestamp ? `[${msg.timestamp}]` : '[unknown]';
lines.push(`${ts} ${wrapUntrustedContent(msg.url, 'inbox-url')}`);
lines.push(` "${wrapUntrustedContent(msg.userMessage, 'inbox-message')}"`);
lines.push('');
}
lines.push('────────────────────────────────');
// Handle --clear flag
if (args.includes('--clear')) {
for (const file of files) {
try { fs.unlinkSync(path.join(inboxDir, file)); } catch (err: any) { if (err?.code !== 'ENOENT') throw err; }
}
lines.push(`Cleared ${files.length} message${files.length === 1 ? '' : 's'}.`);
}
return lines.join('\n');
}
// ─── State ────────────────────────────────────────
case 'state': {
const [action, name] = args;
if (!action || !name) throw new Error('Usage: state save|load <name>');
// Sanitize name: alphanumeric + hyphens + underscores only
if (!/^[a-zA-Z0-9_-]+$/.test(name)) {
throw new Error('State name must be alphanumeric (a-z, 0-9, _, -)');
}
const config = resolveConfig();
const stateDir = path.join(config.stateDir, 'browse-states');
fs.mkdirSync(stateDir, { recursive: true });
const statePath = path.join(stateDir, `${name}.json`);
if (action === 'save') {
const state = await bm.saveState();
// V1: cookies + URLs only (not localStorage — breaks on load-before-navigate)
const saveData = {
version: 1,
savedAt: new Date().toISOString(),
cookies: state.cookies,
pages: state.pages.map(p => ({ url: p.url, isActive: p.isActive })),
};
fs.writeFileSync(statePath, JSON.stringify(saveData, null, 2), { mode: 0o600 });
return `State saved: ${statePath} (${state.cookies.length} cookies, ${state.pages.length} pages)\n⚠️ Cookies stored in plaintext. Delete when no longer needed.`;
}
if (action === 'load') {
if (!fs.existsSync(statePath)) throw new Error(`State not found: ${statePath}`);
const data = JSON.parse(fs.readFileSync(statePath, 'utf-8'));
if (!Array.isArray(data.cookies) || !Array.isArray(data.pages)) {
throw new Error('Invalid state file: expected cookies and pages arrays');
}
// Validate and filter cookies — reject malformed or internal-network cookies
const validatedCookies = data.cookies.filter((c: any) => {
if (typeof c !== 'object' || !c) return false;
if (typeof c.name !== 'string' || typeof c.value !== 'string') return false;
if (typeof c.domain !== 'string' || !c.domain) return false;
const d = c.domain.startsWith('.') ? c.domain.slice(1) : c.domain;
if (d === 'localhost' || d.endsWith('.internal') || d === '169.254.169.254') return false;
return true;
});
if (validatedCookies.length < data.cookies.length) {
console.warn(`[browse] Filtered ${data.cookies.length - validatedCookies.length} invalid cookies from state file`);
}
// Warn on state files older than 7 days
if (data.savedAt) {
const ageMs = Date.now() - new Date(data.savedAt).getTime();
const SEVEN_DAYS = 7 * 24 * 60 * 60 * 1000;
if (ageMs > SEVEN_DAYS) {
console.warn(`[browse] Warning: State file is ${Math.round(ageMs / 86400000)} days old. Consider re-saving.`);
}
}
// Close existing pages, then restore (replace, not merge)
bm.setFrame(null);
await bm.closeAllPages();
// Allowlist disk-loaded page fields — NEVER accept loadedHtml, loadedHtmlWaitUntil,
// or owner from disk. Those are in-memory-only invariants; allowing them would let
// a tampered state file smuggle HTML past load-html's safe-dirs + magic-byte + size
// checks, or forge tab ownership for cross-agent authorization bypass.
await bm.restoreState({
cookies: validatedCookies,
pages: data.pages.map((p: any) => ({
url: typeof p.url === 'string' ? p.url : '',
isActive: Boolean(p.isActive),
storage: null,
})),
});
return `State loaded: ${data.cookies.length} cookies, ${data.pages.length} pages`;
}
throw new Error('Usage: state save|load <name>');
}
// ─── Frame ───────────────────────────────────────
case 'frame': {
const target = args[0];
if (!target) throw new Error('Usage: frame <selector|@ref|--name name|--url pattern|main>');
if (target === 'main') {
bm.setFrame(null);
bm.clearRefs();
return 'Switched to main frame';
}
const page = bm.getPage();
let frame: Frame | null = null;
if (target === '--name') {
if (!args[1]) throw new Error('Usage: frame --name <name>');
frame = page.frame({ name: args[1] });
} else if (target === '--url') {
if (!args[1]) throw new Error('Usage: frame --url <pattern>');
frame = page.frame({ url: new RegExp(escapeRegExp(args[1])) });
} else {
// CSS selector or @ref for the iframe element
const resolved = await bm.resolveRef(target);
const locator = 'locator' in resolved ? resolved.locator : page.locator(resolved.selector);
const elementHandle = await locator.elementHandle({ timeout: 5000 });
frame = await elementHandle?.contentFrame() ?? null;
await elementHandle?.dispose();
}
if (!frame) throw new Error(`Frame not found: ${target}`);
bm.setFrame(frame);
bm.clearRefs();
return `Switched to frame: ${frame.url()}`;
}
// ─── UX Audit ─────────────────────────────────────
case 'ux-audit': {
const page = bm.getPage();
// Extract page structure for UX behavioral analysis
// Agent interprets the data and applies Krug's 6 usability tests
// Uses textContent (not innerText) to avoid layout computation on large DOMs
const data = await page.evaluate(() => {
const HEADING_CAP = 50;
const INTERACTIVE_CAP = 200;
const TEXT_BLOCK_CAP = 50;
// Site ID: logo or brand element
const logoEl = document.querySelector('[class*="logo"], [id*="logo"], header img, [aria-label*="home"], a[href="/"]');
const siteId = logoEl ? {
found: true,
text: (logoEl.textContent || '').trim().slice(0, 100),
tag: logoEl.tagName,
alt: (logoEl as HTMLImageElement).alt || null,
} : { found: false, text: null, tag: null, alt: null };
// Page name: main heading
const h1 = document.querySelector('h1');
const pageName = h1 ? {
found: true,
text: h1.textContent?.trim().slice(0, 200) || '',
} : { found: false, text: null };
// Navigation: primary nav elements
const navEls = document.querySelectorAll('nav, [role="navigation"]');
const navItems: Array<{ text: string; links: number }> = [];
navEls.forEach((nav, i) => {
if (i >= 5) return;
const links = nav.querySelectorAll('a');
navItems.push({
text: (nav.getAttribute('aria-label') || `nav-${i}`).slice(0, 50),
links: links.length,
});
});
// "You are here" indicator: current/active nav items
// Scoped to nav containers to avoid false positives from animation classes
const activeNavItems = document.querySelectorAll('nav [aria-current], nav .active, nav .current, [role="navigation"] [aria-current], [role="navigation"] .active, [role="navigation"] .current');
const youAreHere = Array.from(activeNavItems).slice(0, 5).map(el => ({
text: (el.textContent || '').trim().slice(0, 50),
tag: el.tagName,
}));
// Search: search box presence
const searchEl = document.querySelector('input[type="search"], [role="search"], input[name*="search"], input[placeholder*="search" i], input[aria-label*="search" i]');
const search = { found: !!searchEl };
// Breadcrumbs
const breadcrumbEl = document.querySelector('[aria-label*="breadcrumb" i], .breadcrumb, .breadcrumbs, [class*="breadcrumb"]');
const breadcrumbs = breadcrumbEl ? {
found: true,
items: Array.from(breadcrumbEl.querySelectorAll('a, span, li')).slice(0, 10).map(el => (el.textContent || '').trim().slice(0, 30)),
} : { found: false, items: [] };
// Headings: heading hierarchy
const headings = Array.from(document.querySelectorAll('h1,h2,h3,h4,h5,h6')).slice(0, HEADING_CAP).map(h => ({
tag: h.tagName,
text: (h.textContent || '').trim().slice(0, 80),
size: getComputedStyle(h).fontSize,
}));
// Interactive elements: buttons, links, inputs
const interactiveEls = Array.from(document.querySelectorAll('a, button, input, select, textarea, [role="button"], [tabindex]')).slice(0, INTERACTIVE_CAP);
const interactive = interactiveEls.map(el => {
const rect = el.getBoundingClientRect();
return {
tag: el.tagName,
text: (el.textContent || (el as HTMLInputElement).placeholder || '').trim().slice(0, 50),
type: (el as HTMLInputElement).type || null,
role: el.getAttribute('role'),
w: Math.round(rect.width),
h: Math.round(rect.height),
visible: rect.width > 0 && rect.height > 0,
};
}).filter(el => el.visible);
// Text blocks: paragraphs and large text areas
const textBlocks = Array.from(document.querySelectorAll('p, [class*="description"], [class*="intro"], [class*="welcome"], [class*="hero"] p, main p')).slice(0, TEXT_BLOCK_CAP).map(el => ({
text: (el.textContent || '').trim().slice(0, 200),
wordCount: (el.textContent || '').trim().split(/\s+/).filter(Boolean).length,
}));
// Total visible text word count (textContent avoids layout computation)
const bodyText = (document.body?.textContent || '').trim();
const totalWords = bodyText.split(/\s+/).filter(Boolean).length;
return {
url: window.location.href,
title: document.title,
siteId,
pageName,
navigation: navItems,
youAreHere,
search,
breadcrumbs,
headings,
interactive,
textBlocks,
totalWords,
};
});
return JSON.stringify(data, null, 2);
}
case 'domain-skill': {
return await handleDomainSkillCommand(args, bm);
}
case 'skill': {
const port = opts?.daemonPort;
if (port === undefined) {
throw new Error('skill command requires daemonPort in MetaCommandOpts (server bug)');
}
return await handleSkillCommand(args, { port });
}
case 'cdp': {
// Lazy import — cdp-bridge introduces module deps we don't want loaded
// for projects that never use the CDP escape hatch.
const { handleCdpCommand } = await import('./cdp-commands');
return await handleCdpCommand(args, bm);
}
default:
throw new Error(`Unknown meta command: ${command}`);
}
}