gstack/browse/src/cookie-import-browser.ts
Garry Tan 54d4cde773 security: tunnel dual-listener + SSRF + envelope + path wave (v1.6.0.0) (#1137)
* refactor(security): loosen /connect rate limit from 3/min to 300/min

Setup keys are 24 random bytes (unbruteforceable), so a tight rate limit
does not meaningfully prevent key guessing. It exists only to cap
bandwidth, CPU, and log-flood damage from someone who discovered the
ngrok URL. A legitimate pair-agent session hits /connect once; 300/min
is 60x that pattern and never hit accidentally.

3/min caused pairing to fail on any retry flow (network blip, second
paired client) with no upside. Per-IP tracking was considered and
rejected — it would add a bounded Map plus an LRU for a defense that is
already adequate at the global layer.
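The global fixed-window counter described above can be sketched as follows. This is an illustrative shape only — the names (`checkConnectRateLimit`, `CONNECT_RATE_LIMIT`) are hypothetical, not the identifiers in server.ts:

```typescript
// Hypothetical sketch of a global (not per-IP) fixed-window rate limit.
const CONNECT_RATE_LIMIT = 300; // requests per minute, shared across all callers
const WINDOW_MS = 60_000;

let windowStart = Date.now();
let windowCount = 0;

export function checkConnectRateLimit(now: number = Date.now()): boolean {
  if (now - windowStart >= WINDOW_MS) {
    // New window: reset the shared counter.
    windowStart = now;
    windowCount = 0;
  }
  windowCount += 1;
  return windowCount <= CONNECT_RATE_LIMIT;
}
```

A single Number comparison per request is the whole cost — no Map, no LRU, no per-IP state to bound.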

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(security): add tunnel-denial-log module for attack visibility

Append-only log of tunnel-surface auth denials to
~/.gstack/security/attempts.jsonl. Gives operators visibility into who
is probing tunneled daemons so the next security wave can be driven by
real attack data instead of speculation.

Design notes:
- Async via fs.promises.appendFile. Never appendFileSync — blocking the
  event loop on every denial during a flood is what an attacker wants
  (prior learning: sync-audit-log-io, 10/10 confidence).
- In-process rate cap at 60 writes/minute globally. Excess denials are
  counted in memory but not written to disk — prevents disk DoS.
- Writes to the same ~/.gstack/security/attempts.jsonl used by the
  prompt-injection attempt log. File rotation is handled by the existing
  security pipeline (10MB, 5 generations).

No consumers in this commit; wired up in the dual-listener refactor that
follows.
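The design notes above can be sketched as a minimal module. Field names and the write-cap bookkeeping are illustrative; the real implementation lives in tunnel-denial-log.ts and rotation is handled by the existing security pipeline:

```typescript
import { appendFile } from 'fs/promises';
import * as os from 'os';
import * as path from 'path';

// Illustrative sketch of the denial log's shape, not the real module.
const LOG_PATH = path.join(os.homedir(), '.gstack', 'security', 'attempts.jsonl');
const MAX_WRITES_PER_MINUTE = 60;

let windowStart = Date.now();
let windowWrites = 0;
let suppressed = 0; // excess denials counted in memory, never written — disk-DoS guard

/** Returns true if the denial was written, false if rate-capped. */
export function logTunnelDenial(reason: string, pathname: string): boolean {
  const now = Date.now();
  if (now - windowStart >= 60_000) {
    windowStart = now;
    windowWrites = 0;
  }
  if (++windowWrites > MAX_WRITES_PER_MINUTE) {
    suppressed++;
    return false;
  }
  const line = JSON.stringify({ ts: new Date(now).toISOString(), reason, pathname }) + '\n';
  // Async append only — appendFileSync would block the event loop on every
  // denial, which is exactly what a flooding attacker wants.
  void appendFile(LOG_PATH, line).catch(() => {});
  return true;
}
```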

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(security): dual-listener tunnel architecture

The /health endpoint leaked AUTH_TOKEN to any caller that hit the ngrok
URL (spoofing chrome-extension:// origin, or catching headed mode).
Surfaced by @garagon in PR #1026; the original fix was header-inference
on the single port. Codex's outside-voice review during /plan-ceo-review
called that approach brittle (ngrok header behavior could change, local
proxies would false-positive), and pushed for the structural fix.

This is that fix. Stop making /health a root-token bootstrap endpoint on
any surface the tunnel can reach. The server now binds two HTTP
listeners when a tunnel is active. The local listener (extension, CLI,
sidebar) stays on 127.0.0.1 and is never exposed to ngrok. ngrok
forwards only to the tunnel listener, which serves only /connect
(unauth, rate-limited) and /command with a locked allowlist of
browser-driving commands. Security property comes from physical port
separation, not from header inference — a tunnel caller cannot reach
/health or /cookie-picker or /inspector because they live on a
different TCP socket.
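The surface-aware handler factory can be sketched like this. The filter's return codes and allowlist contents follow the description above, but the function shape is a simplification — the real `makeFetchHandler` dispatches full HTTP requests in browse/src/server.ts:

```typescript
type Surface = 'local' | 'tunnel';

// Closed allowlist as described; the real set lives near the top of server.ts.
const TUNNEL_PATHS = new Set(['/connect', '/command', '/sidebar-chat']);

// Factory closure-captures the surface, so the filter that runs before route
// dispatch knows which TCP socket accepted the request — no header inference.
export function makeFetchHandler(surface: Surface) {
  return function filter(pathname: string, token: 'root' | 'scoped' | null): number {
    if (surface === 'tunnel') {
      if (!TUNNEL_PATHS.has(pathname)) return 404; // path not tunnel-reachable
      if (token === 'root') return 403;            // root token never valid on tunnel
      if (pathname !== '/connect' && token === null) return 401;
    }
    return 200; // fall through to normal route dispatch
  };
}

// Each Bun.serve listener gets its own handler; ngrok forwards only to the
// tunnel listener, e.g. (shape illustrative):
//   Bun.serve({ hostname: '127.0.0.1', fetch: wrap(makeFetchHandler('local')) })
//   Bun.serve({ port: 0,               fetch: wrap(makeFetchHandler('tunnel')) })
```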

What this commit adds to browse/src/server.ts:
  * Surface type ('local' | 'tunnel') and TUNNEL_PATHS +
    TUNNEL_COMMANDS allowlists near the top of the file.
  * makeFetchHandler(surface) factory replacing the single fetch arrow;
    closure-captures the surface so the filter that runs before route
    dispatch knows which socket accepted the request.
  * Tunnel filter at dispatch entry: 404s anything not on TUNNEL_PATHS,
    403s root-token bearers with a clear pairing hint, 401s non-/connect
    requests that lack a scoped token. Every denial is logged via
    logTunnelDenial (from tunnel-denial-log).
  * GET /connect alive probe (unauth on both surfaces) so /pair and
    /tunnel/start can detect dead ngrok tunnels without reaching
    /health — /health is no longer tunnel-reachable.
  * Lazy tunnel listener lifecycle. /tunnel/start binds a dedicated
    Bun.serve on an ephemeral port, points ngrok.forward at THAT port
    (not the local port), hard-fails on bind error (no local fallback),
    tears down cleanly on ngrok failure. BROWSE_TUNNEL=1 startup uses
    the same pattern.
  * closeTunnel() helper — single teardown path for both the ngrok
    listener and the tunnel Bun.serve listener.
  * resolveNgrokAuthtoken() helper — shared authtoken lookup across
    /tunnel/start and BROWSE_TUNNEL=1 startup (was duplicated).
  * TUNNEL_COMMANDS check in /command dispatch: on the tunnel surface,
    commands outside the allowlist return 403 with a list of allowed
    commands as a hint.
  * Probe paths in /pair and /tunnel/start migrated from /health to
    GET /connect — the only unauth path reachable on the tunnel surface
    under the new architecture.

Test updates in browse/test/server-auth.test.ts:
  * /pair liveness-verify test: assert via closeTunnel() helper instead
    of the inline `tunnelActive = false; tunnelUrl = null` lines that
    the helper subsumes.
  * /tunnel/start cached-tunnel test: same closeTunnel() adaptation.

Credit
  Derived from PR #1026 by @garagon — thanks for flagging the critical
  bug that drove the architectural rewrite. The per-request
  isTunneledRequest approach from #1026 is superseded by physical port
  separation here; the underlying report remains the root cause for the
  entire v1.6.0.0 wave.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(security): add source-level guards for dual-listener architecture

23 source-level assertions that keep future contributors from silently
widening the tunnel surface during a routine refactor. Covers:

  * Surface type + tunnelServer state variable shape
  * TUNNEL_PATHS is a closed set of /connect, /command, /sidebar-chat
    (and NOT /health, /welcome, /cookie-picker, /inspector/*, /pair,
    /token, /refs, /activity/stream, /tunnel/{start,stop})
  * TUNNEL_COMMANDS includes browser-driving ops only (and NOT
    launch-browser, tunnel-start, token-mint, cookie-import, etc.)
  * makeFetchHandler(surface) factory exists and is wired to both
    listeners with the correct surface parameter
  * Tunnel filter runs BEFORE any route dispatch, with 404/403/401
    responses and logged denials for each reason
  * GET /connect returns {alive: true} unauth
  * /command dispatch enforces TUNNEL_COMMANDS on tunnel surface
  * closeTunnel() helper tears down ngrok + Bun.serve listener
  * /tunnel/start binds on ephemeral port, points ngrok at TUNNEL_PORT
    (not local port), hard-fails on bind error (no fallback), probes
    cached tunnel via GET /connect (not /health), tears down on
    ngrok.forward failure
  * BROWSE_TUNNEL=1 startup uses the dual-listener pattern
  * logTunnelDenial wired for all three denial reasons
  * /connect rate limit is 300/min, not 3/min

All 23 tests pass. Behavioral integration tests (spawn subprocess, real
network) live in the E2E suite that lands later in this wave.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* security: gate download + scrape through validateNavigationUrl (SSRF)

The `goto` command was correctly wired through validateNavigationUrl,
but `download` and `scrape` called page.request.fetch(url, ...) directly.
A caller with the default write scope could hit the /command endpoint
and ask the daemon to fetch http://169.254.169.254/latest/meta-data/
(AWS IMDSv1) or the GCP/Azure/internal equivalents. The response body
comes back as base64 or lands on disk where GET /file serves it.

Fix: call validateNavigationUrl(url) immediately before each
page.request.fetch() call site in download and in the scrape loop.
Same blocklist that already protects `goto`: file://, javascript:,
data:, chrome://, cloud metadata (IPv4 all encodings, IPv6 ULA,
metadata.*.internal).
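A minimal sketch of the gate pattern — this reimplements only a small subset of the blocklist described above, for illustration; the real `validateNavigationUrl` covers the full set of IPv4 encodings, IPv6 ULA, and metadata hostnames:

```typescript
// Simplified stand-in for the real validator (subset of the blocklist).
const BLOCKED_SCHEMES = new Set(['file:', 'javascript:', 'data:', 'chrome:']);
const METADATA_HOSTS = new Set(['169.254.169.254', 'metadata.google.internal']);

export function validateNavigationUrl(raw: string): void {
  const url = new URL(raw);
  if (BLOCKED_SCHEMES.has(url.protocol)) {
    throw new Error(`blocked scheme: ${url.protocol}`);
  }
  if (METADATA_HOSTS.has(url.hostname)) {
    throw new Error(`blocked host: ${url.hostname}`);
  }
}

// Fix shape at each call site in download and the scrape loop:
//   validateNavigationUrl(url);                    // throws on SSRF targets
//   const resp = await page.request.fetch(url, opts);
```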

Tests: extend browse/test/url-validation.test.ts with a source-level
guard that walks every `await page.request.fetch(` call site and
asserts a validateNavigationUrl call precedes it within the same
branch. Regression trips before code review if a future refactor
drops the gate.

* security: route splitForScoped through envelope sentinel escape

The scoped-token snapshot path in snapshot.ts built its untrusted
block by pushing the raw accessibility-tree lines between the literal
`═══ BEGIN UNTRUSTED WEB CONTENT ═══` / `═══ END UNTRUSTED WEB CONTENT ═══`
sentinels. The full-page wrap path in content-security.ts already
applied a zero-width-space escape on those exact strings to prevent
sentinel injection, but the scoped path skipped it.

Net effect: a page whose rendered text contains the literal sentinel
can close the envelope early from inside untrusted content and forge
a fake "trusted" block for the LLM. That includes fabricating
interactive `@eN` references the agent will act on.

Fix:
  * Extract the zero-width-space escape into a named, exported helper
    `escapeEnvelopeSentinels(content)` in content-security.ts.
  * Have `wrapUntrustedPageContent` call it (behavior unchanged on
    that path — same bytes out).
  * Import the helper in snapshot.ts and map it over `untrustedLines`
    in the `splitForScoped` branch before pushing the BEGIN sentinel.
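The helper's mechanics can be sketched as follows — assuming (as the commit describes) a zero-width-space escape applied to the exact sentinel strings; the real strings and helper live in content-security.ts:

```typescript
// Sentinel strings as quoted above; escaping inserts a zero-width space
// (U+200B) so untrusted content can never emit a byte-exact sentinel.
const SENTINELS = [
  '═══ BEGIN UNTRUSTED WEB CONTENT ═══',
  '═══ END UNTRUSTED WEB CONTENT ═══',
];

export function escapeEnvelopeSentinels(content: string): string {
  let out = content;
  for (const s of SENTINELS) {
    // '═══ BEGIN…' becomes '═\u200B══ BEGIN…' — visually identical,
    // but no longer matches the envelope parser's sentinel.
    out = out.split(s).join(s.slice(0, 1) + '\u200B' + s.slice(1));
  }
  return out;
}
```

`splitForScoped` then maps this over `untrustedLines` before pushing the BEGIN sentinel, so a hostile page cannot close the envelope early.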

Tests: add a describe block in content-security.test.ts that covers
  * `escapeEnvelopeSentinels` defuses BEGIN and END markers;
  * `escapeEnvelopeSentinels` leaves normal text untouched;
  * `wrapUntrustedPageContent` still emits exactly one real envelope
    pair when hostile content contains forged sentinels;
  * snapshot.ts imports the helper;
  * the scoped-snapshot branch calls `escapeEnvelopeSentinels` before
    pushing the BEGIN sentinel (source-level regression — if a future
    refactor reorders this, the test trips).

* security: extend hidden-element detection to all DOM-reading channels

The Confusion Protocol envelope wrap (`wrapUntrustedPageContent`)
covers every scoped PAGE_CONTENT_COMMAND, but the hidden-element
ARIA-injection detection layer only ran for `text`. Other DOM-reading
channels (html, links, forms, accessibility, attrs, data, media,
ux-audit) returned their output through the envelope with no hidden-
content filter, so a page serving a display:none div that instructs
the agent to disregard prior system messages, or an aria-label that
claims to put the LLM in admin mode, leaked the injection payload on
any non-text channel. The envelope alone does not mitigate this, and
the page itself never rendered the hostile content to the human
operator.

Fix:
  * New export `DOM_CONTENT_COMMANDS` in commands.ts — the subset of
    PAGE_CONTENT_COMMANDS that derives its output from the live DOM.
    Console and dialog stay out; they read separate runtime state.
  * server.ts runs `markHiddenElements` + `cleanupHiddenMarkers` for
    every scoped command in this set. `text` keeps its existing
    `getCleanTextWithStripping` path (hidden elements physically
    stripped before the read). All other channels keep their output
    format but emit flagged elements as CONTENT WARNINGS on the
    envelope, so the LLM sees what it would otherwise have consumed
    silently.
  * Hidden-element descriptions merge into `combinedWarnings`
    alongside content-filter warnings before the wrap call.
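The set relationship and dispatch gate can be sketched like this — channel membership follows the list above, but the helper name `needsHiddenElementScan` is illustrative, not the real dispatch code:

```typescript
// Page-content channels as described; the real exports live in commands.ts.
export const PAGE_CONTENT_COMMANDS = new Set([
  'text', 'html', 'links', 'forms', 'accessibility', 'attrs',
  'data', 'media', 'ux-audit', 'console', 'dialog',
]);

// Strict subset whose output derives from the live DOM. Console and dialog
// stay out — they read separate runtime state, not the DOM.
export const DOM_CONTENT_COMMANDS = new Set(
  [...PAGE_CONTENT_COMMANDS].filter(c => c !== 'console' && c !== 'dialog'),
);

export function needsHiddenElementScan(command: string): boolean {
  // Gate on set membership, not the literal 'text' string — the original bug.
  return DOM_CONTENT_COMMANDS.has(command);
}
```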

Tests: new describe block in content-security.test.ts covering
  * `DOM_CONTENT_COMMANDS` export shape and channel membership;
  * dispatch gates on `DOM_CONTENT_COMMANDS.has(command)`, not the
    literal `text` string;
  * hiddenContentWarnings plumbs into `combinedWarnings` and reaches
    wrapUntrustedPageContent;
  * DOM_CONTENT_COMMANDS is a strict subset of PAGE_CONTENT_COMMANDS.

Existing datamarking, envelope wrap, centralized-wrapping, and chain
security suites stay green (52 pass, 0 fail).

* security: validate --from-file payload paths for parity with direct paths

The direct `load-html <file>` path runs every caller-supplied file path
through validateReadPath() so reads stay confined to SAFE_DIRECTORIES
(cwd, TEMP_DIR). The `load-html --from-file <payload.json>` shortcut
and its sibling `pdf --from-file <payload.json>` skipped that check and
went straight to fs.readFileSync(). An MCP caller that picks the
payload path (or any caller whose payload argument is reachable from
attacker-influenced text) could use --from-file as a read-anywhere
escape hatch for the safe-dirs policy.

Fix: call validateReadPath(path.resolve(payloadPath)) before readFileSync
at both sites. Error surface mirrors the direct-path branch so ops and
agent errors stay consistent.
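The safe-dirs check can be sketched as below, using the TEMP_DIR convention the commit describes ('/tmp' on macOS/Linux, os.tmpdir() on Windows). The real validator and its error text live with the direct-path branch:

```typescript
import * as os from 'os';
import * as path from 'path';

// Safe-dirs allowlist as described: [TEMP_DIR, cwd].
const TEMP_DIR = process.platform === 'win32' ? os.tmpdir() : '/tmp';
const SAFE_DIRECTORIES = [TEMP_DIR, process.cwd()];

export function validateReadPath(candidate: string): string {
  const resolved = path.resolve(candidate);
  const ok = SAFE_DIRECTORIES.some(
    dir => resolved.startsWith(path.resolve(dir) + path.sep),
  );
  if (!ok) throw new Error(`path outside SAFE_DIRECTORIES: ${resolved}`);
  return resolved;
}

// Both --from-file sites now gate before the read:
//   const payloadPath = validateReadPath(path.resolve(rawArg));
//   const payload = JSON.parse(fs.readFileSync(payloadPath, 'utf8'));
```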

Test coverage in browse/test/from-file-path-validation.test.ts:
  - source-level: validateReadPath precedes readFileSync in the load-html
    --from-file branch (write-commands.ts) and the pdf --from-file parser
    (meta-commands.ts)
  - error-message parity: both sites reference SAFE_DIRECTORIES

Related security audit pattern: R3 F002 (validateNavigationUrl gap on
download/scrape) and R3 F008 (markHiddenElements gap on 10 DOM commands)
were the same shape — a defense that existed on the primary code path
but not its shortcut sibling. This PR closes the same class of gap on
the --from-file shortcuts.

* fix(design): escape url.origin when injecting into served HTML

serve.ts injected url.origin into a single-quoted JS string in
the response body. A local request with a crafted Host header
(e.g. Host: "evil'-alert(1)-'x") would break out of the string
and execute JS in the 127.0.0.1:<port> origin opened by the
design board. Low severity — bound to localhost, requires a
local attacker — but no reason not to escape.

Fix: JSON.stringify(url.origin) produces a properly quoted,
escaped JS string literal in one call.
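The before/after shape, reduced to a helper for illustration (`originLiteral` is a hypothetical name; serve.ts inlines this):

```typescript
// Before (vulnerable): single-quoted interpolation — a crafted Host header
// like "evil'-alert(1)-'x" breaks out of the string literal.
//   const script = `const ORIGIN = '${url.origin}';`;

// After: JSON.stringify emits a double-quoted, escaped JS string literal.
export function originLiteral(origin: string): string {
  return `const ORIGIN = ${JSON.stringify(origin)};`;
}
```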

Also includes Prettier reformatting (single→double quotes,
trailing commas, line wrapping) applied by the repo's
PostToolUse formatter hook. Security change is the one line
in the HTML injection; everything else is whitespace/style.

* fix(scripts): drop shell:true from slop-diff npx invocations

spawnSync('npx', [...], { shell: true }) invokes /bin/sh -c
with the args concatenated, subjecting them to shell parsing
(word splitting, glob expansion, metacharacter interpretation).
No user input reaches these calls today, so not exploitable —
but the posture is wrong: npx + shell args should be direct.

Fix: scope shell:true to process.platform === 'win32' where
npx is actually a .cmd requiring the shell. POSIX runs the
npx binary directly with array-form args.
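The decision can be isolated into one testable expression — a sketch with a hypothetical helper name; the actual change is two inline `shell:` option values in the slop-diff script:

```typescript
// Shell only where npx is a .cmd shim (Windows); POSIX spawns the binary
// directly with array-form args — no /bin/sh -c, no word splitting or globs.
export function npxSpawnOptions(platform: string = process.platform): { shell: boolean } {
  return { shell: platform === 'win32' };
}

// Usage (illustrative):
//   import { spawnSync } from 'child_process';
//   spawnSync('npx', ['prettier', '--check', file], npxSpawnOptions());
```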

Also includes Prettier reformatting (single→double quotes,
trailing commas, line wrapping) applied by the repo's
PostToolUse formatter hook. Security-relevant change is just
the two shell:true -> shell: process.platform === 'win32'
lines; everything else is whitespace/style.

* security(E3): gate GSTACK_SLUG on /welcome path traversal

The /welcome handler interpolates GSTACK_SLUG directly into the filesystem
path used to locate the project-local welcome page. Without validation, a
slug like "../../etc/passwd" would resolve to
~/.gstack/projects/../../etc/passwd/designs/welcome-page-20260331/finalized.html
— classic path traversal.

Not exploitable today: GSTACK_SLUG is set by the gstack CLI at daemon launch,
and an attacker would already need local env-var access to poison it. But
the gate is one regex (^[a-z0-9_-]+$), and a defense-in-depth pass costs us
nothing when the cost of being wrong is arbitrary file read via /welcome.

Fall back to the safe 'unknown' literal when the slug fails validation —
same fallback the code already uses when GSTACK_SLUG is unset. No behavior
change for legitimate slugs (they all match the regex).
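The one-regex gate plus fallback, sketched (`safeSlug` is an illustrative name for the inline check):

```typescript
// The gate is one regex; 'unknown' is the same fallback the code already
// uses when GSTACK_SLUG is unset.
const SLUG_RE = /^[a-z0-9_-]+$/;

export function safeSlug(raw: string | undefined): string {
  return raw && SLUG_RE.test(raw) ? raw : 'unknown';
}
```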

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* security(N1): replace ?token= SSE auth with HttpOnly session cookie

Activity stream and inspector events SSE endpoints accepted the root
AUTH_TOKEN via `?token=` query param (EventSource can't send Authorization
headers). URLs leak to browser history, referer headers, server logs,
crash reports, and refactoring accidents. Codex flagged this during the
/plan-ceo-review outside voice pass.

New auth model: the extension calls POST /sse-session with a Bearer token
and receives a view-only session cookie (HttpOnly, SameSite=Strict, 30-min
TTL). EventSource is opened with `withCredentials: true` so the browser
sends the cookie back on the SSE connection. The ?token= query param is
GONE — no more URL-borne secrets.

Scope isolation (prior learning cookie-picker-auth-isolation, 10/10
confidence): the SSE session cookie grants access to /activity/stream and
/inspector/events ONLY. The token is never valid against /command, /token,
or any mutating endpoint. A leaked cookie can watch activity; it cannot
execute browser commands.

Components
  * browse/src/sse-session-cookie.ts — registry: mint/validate/extract/
    build-cookie. 256-bit tokens, 30-min TTL, lazy expiry pruning,
    no imports from token-registry (scope isolation enforced by module
    boundary).
  * browse/src/server.ts — POST /sse-session mint endpoint (requires
    Bearer). /activity/stream and /inspector/events now accept Bearer
    OR the session cookie, and reject ?token= query param.
  * extension/sidepanel.js — ensureSseSessionCookie() bootstrap call,
    EventSource opened with withCredentials:true on both SSE endpoints.
    Tested via the source guards; behavioral test is the E2E pairing
    flow that lands later in the wave.
  * browse/test/sse-session-cookie.test.ts — 20 unit tests covering
    mint entropy, TTL enforcement, cookie flag invariants, cookie
    parsing from multi-cookie headers, and scope-isolation contract
    guard (module must not import token-registry).
  * browse/test/server-auth.test.ts — existing /activity/stream auth
    test updated to assert the new cookie-based gate and the absence
    of the ?token= query param.

Cookie flag choices:
  * HttpOnly: token not readable from page JS (mitigates XSS
    exfiltration).
  * SameSite=Strict: cookie not sent on cross-site requests (mitigates
    CSRF). Fine for SSE because the extension connects to 127.0.0.1
    directly.
  * Path=/: cookie scoped to the whole origin.
  * Max-Age=1800: 30 minutes, matches TTL. Extension re-mints on
    reconnect when daemon restarts.
  * Secure NOT set: daemon binds to 127.0.0.1 over plain HTTP. Adding
    Secure would block the browser from ever sending the cookie back.
    Add Secure when gstack ships over HTTPS.
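The mint/build-cookie shape with those flags can be sketched as below; the real registry (with TTL tracking, lazy pruning, and the token-registry import ban) lives in browse/src/sse-session-cookie.ts:

```typescript
import { randomBytes } from 'crypto';

// Illustrative sketch of the cookie mint; Max-Age matches the 30-min TTL.
const TTL_SECONDS = 1800;

export function mintSseSessionToken(): string {
  return randomBytes(32).toString('hex'); // 256-bit token
}

export function buildSseSessionCookie(token: string): string {
  // Deliberately no Secure flag: the daemon is plain HTTP on 127.0.0.1,
  // and Secure would stop the browser from ever sending the cookie back.
  return `gstack_sse=${token}; HttpOnly; SameSite=Strict; Path=/; Max-Age=${TTL_SECONDS}`;
}
```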

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* security(N2): document Windows v20 ABE elevation path on CDP port

The existing comment around the cookie-import-browser --remote-debugging-port
launch claimed "threat model: no worse than baseline." That's wrong on
Windows with App-Bound Encryption v20. A same-user local process that
opens the cookie SQLite DB directly CANNOT decrypt v20 values (DPAPI
context is bound to the browser process). The CDP port lets them bypass
that: connect to the debug port, call Network.getAllCookies inside Chrome,
walk away with decrypted v20 cookies.

The correct fix is to switch from TCP --remote-debugging-port to
--remote-debugging-pipe so the CDP transport is a stdio pipe, not a
socket. That requires restructuring the CDP WebSocket client in this
module and Playwright doesn't expose the pipe transport out of the box.
Non-trivial, deferred from the v1.6.0.0 wave.

This commit updates the comment to correctly describe the threat and
points at the tracking issue. No code change to the launch itself.
Follow-up: #1136.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(E2): document dual-listener tunnel architecture in ARCHITECTURE.md

Adds an explicit per-endpoint disposition table to the Security model
section, covering the v1.6.0.0 dual-listener refactor. Every HTTP
endpoint now has a documented local-vs-tunnel answer. Future audits
(and future contributors wondering "is it safe to add X to the tunnel
surface?") can read this instead of reverse-engineering server.ts.

Also documents:
  * Why physical port separation beats per-request header inference
    (ngrok behavior drift, local proxies can forge headers, etc.)
  * Tunnel surface denial logging → ~/.gstack/security/attempts.jsonl
  * SSE session cookie model (gstack_sse, 30-min TTL, stream-scope only,
    module-boundary-enforced scope isolation)
  * N2 non-goal for Windows v20 ABE via CDP port (tracking #1136)

No code changes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(E1): end-to-end pair-agent flow against a spawned daemon

Spawns the browse daemon as a subprocess with BROWSE_HEADLESS_SKIP=1 so
the HTTP layer runs without a real browser.  Exercises:

  * GET /health — token delivery for chrome-extension origin, withheld
    otherwise (the F1 + PR #1026 invariant)
  * GET /connect — alive probe returns {alive:true} unauth
  * POST /pair — root Bearer required (403 without), returns setup_key
  * POST /connect — setup_key exchange mints a distinct scoped token
  * POST /command — 401 without auth
  * POST /sse-session — Bearer required, Set-Cookie has HttpOnly +
    SameSite=Strict (the N1 invariant)
  * GET /activity/stream — 401 without auth
  * GET /activity/stream?token= — 401 (the old ?token= query param is
    REJECTED, which is the whole point of N1)
  * GET /welcome — serves HTML, does not leak /etc/passwd content under
    the default 'unknown' slug (E3 regex gate)

12 behavioral tests, ~220ms end-to-end, no network dependencies, no
ngrok, no real browser.  This is the receipt for the wave's central
'pair-agent still works + the security boundary holds' claim.

Tunnel-port binding (/tunnel/start) is deliberately NOT exercised here
— it requires an ngrok authtoken and live network.  The dual-listener
route allowlist is covered by source-level guards in
dual-listener.test.ts; behavioral tunnel testing belongs in a separate
paid-evals harness.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* release(v1.6.0.0): bump VERSION + CHANGELOG for security wave

Architectural bump, not patch: dual-listener HTTP refactor changes the
daemon's tunnel-exposure model.  See CHANGELOG for the full release
summary (~950 words) covering the five root causes this wave closes:

  1. /health token leak over ngrok (F1 + E3 + test infra)
  2. /cookie-picker + /inspector exposed over the tunnel (F1)
  3. ?token=<ROOT> in SSE URLs leaking to logs/referer/history (N1)
  4. /welcome GSTACK_SLUG path traversal (E3)
  5. Windows v20 ABE elevation via CDP port (N2 — documented non-goal,
     tracked as #1136)

Plus the base PRs: SSRF gate (#1029), envelope sentinel escape (#1031),
DOM-channel hidden-element coverage (#1032), --from-file path validation
(#1103), and 2 commits from #1073 (@theqazi).

VERSION + package.json bumped to 1.6.0.0.  CHANGELOG entry covers
credits (@garagon, @Hybirdss, @HMAKT99, @theqazi), review lineage (CEO
→ Codex outside voice → Eng), and the non-goal tracking issue.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: pre-landing review findings (4 auto-fixes)

Addresses 4 findings from the Claude adversarial subagent on the
v1.6.0.0 security wave diff.  No user-visible behavior change; all
are defense-in-depth hardening of newly-introduced code.

1. GET /connect rate-limited (was POST-only) [HIGH conf 8/10]
   Attacker discovering the ngrok URL could probe unlimited GETs for
   daemon enumeration.  Now shares the global /connect counter.

2. ngrok listener leak on tunnel startup failure [MEDIUM conf 8/10]
   If ngrok.forward() resolved but tunnelListener.url() or the
   state-file write threw, the Bun listener was torn down but the
   ngrok session was leaked.  Fixed in BOTH /tunnel/start and
   BROWSE_TUNNEL=1 startup paths.

3. GSTACK_SKILL_ROOT path-traversal gate [MEDIUM conf 8/10]
   Symmetric with E3's GSTACK_SLUG regex gate — reject values
   containing '..' before interpolating into the welcome-page path.

4. SSE session registry pruning [LOW conf 7/10]
   pruneExpired() only checked 10 entries per mint call.  Now runs
   on every validate too, checks 20 entries, with a hard 10k cap as
   backstop.  Prevents registry growth under sustained extension
   reconnect pressure.
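The amortized pruning plus hard cap can be sketched like this — names and the eviction choice are illustrative; the real registry tracks full session records:

```typescript
// Hypothetical sketch of finding 4's fix: batch-prune on every call,
// with a hard size cap as backstop.
const HARD_CAP = 10_000;
const PRUNE_BATCH = 20;

const sessions = new Map<string, number>(); // token -> expiry (ms epoch)

export function pruneExpired(now: number = Date.now()): void {
  let checked = 0;
  for (const [token, expiry] of sessions) {
    if (checked++ >= PRUNE_BATCH) break; // amortized: 20 entries per call
    if (expiry <= now) sessions.delete(token);
  }
  // Backstop against sustained extension-reconnect pressure: evict oldest
  // (Map preserves insertion order) until under the cap.
  while (sessions.size > HARD_CAP) {
    const oldest = sessions.keys().next().value as string;
    sessions.delete(oldest);
  }
}

export function addSession(token: string, ttlMs: number, now = Date.now()): void {
  sessions.set(token, now + ttlMs);
  pruneExpired(now); // runs on mint; the fix also runs it on validate
}

export function sessionCount(): number {
  return sessions.size;
}
```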

Tests remain green (56/56 in sse-session-cookie + dual-listener +
pair-agent-e2e suites).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: update project documentation for v1.6.0.0

Reflect the dual-listener tunnel architecture, SSE session cookies,
SSRF guards, and Windows v20 ABE non-goal across the three docs
users actually read for remote-agent and browser auth context:

- docs/REMOTE_BROWSER_ACCESS.md: rewrote Architecture diagram for
  dual listeners, fixed /connect rate limit (3/min → 300/min),
  removed stale "/health requires no auth" (now 404 on tunnel),
  added SSE cookie auth, expanded Security Model with tunnel
  allowlist, SSRF guards, /welcome path traversal defense, and
  the Windows v20 ABE tracking note.
- BROWSER.md: added dual-listener paragraph to Authentication and
  linked to ARCHITECTURE.md endpoint table. Replaced the stale
  ?token= SSE auth note with the HttpOnly gstack_sse cookie flow.
- CLAUDE.md: added Transport-layer security section above the
  sidebar prompt-injection stack so contributors editing server.ts,
  sse-session-cookie.ts, or tunnel-denial-log.ts see the load-bearing
  module boundaries before touching them.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(make-pdf): write --from-file payload to /tmp, not os.tmpdir()

make-pdf's browseClient wrote its --from-file payload to os.tmpdir(),
which is /var/folders/... on macOS. v1.6.0.0's PR #1103 cherry-pick
tightened browse load-html --from-file to validate against the
safe-dirs allowlist ([TEMP_DIR, cwd] where TEMP_DIR is '/tmp' on
macOS/Linux, os.tmpdir() on Windows). This closed a CLI/API parity
gap but broke make-pdf on macOS because /var/folders/... is outside
the allowlist.

Fix: mirror browse's TEMP_DIR convention — use '/tmp' on non-Windows,
os.tmpdir() on Windows. The make-pdf-gate CI failure on macOS-latest
(run 72440797490) is caused by exactly this: the payload file was
rejected by validateReadPath.

Verified locally: the combined-gate e2e test now passes after
rebuilding make-pdf/dist/pdf.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(sidebar): killAgent resets per-tab state; align tests with current agent event format

Two pre-existing bugs surfaced while running the full e2e suite on the
sec-wave branch.  Both pre-date v1.6.0.0 (same failures on main at
e23ff280) but blocked the ship verification, so fixing now.

### Bug 1: killAgent leaked stale per-tab state

`killAgent()` reset the legacy globals (agentProcess, agentStatus,
etc.) but never touched the per-tab `tabAgents` Map.  Meanwhile
`/sidebar-command` routes on `tabState.status` from that Map, not the
legacy globals.  Consequence: after a kill (including the implicit
kill in `/sidebar-session/new`), the next /sidebar-command on the
same tab saw `tabState.status === 'processing'` and fell into the
queue branch, silently NOT spawning an agent.  Integration tests that
called resetState between cases all failed with empty queues.

Fix: when targetTabId is supplied, reset that one tab's state; when
called without a tab (session-new, full kill), reset ALL tab states.
Matches the semantic boundary already used for the cancel-file write.

### Bug 2: sidebar-integration tests drifted from current event format

`agent events appear in /sidebar-chat` posted the raw Claude streaming
format (`{type: 'assistant', message: {content: [...]}}`) but
`processAgentEvent` in server.ts only handles the simplified types
that sidebar-agent.ts pre-processes into (text, text_delta, tool_use,
result, agent_error, security_event).  The architecture moved
pre-processing into sidebar-agent.ts at some point and this test
never got updated.  Fixed by sending the pre-processed `{type:
'text', text: '...'}` format — which is actually what the server sees
in production.

Also removed the `entry.prompt` URL-containment check in the
queue-write test.  The URL is carried on entry.pageUrl (metadata) by
design: the system prompt tells Claude to run `browse url` to fetch
the actual page rather than trust any URL in the prompt body.  That's
the URL-based prompt-injection defense.  The prompt SHOULD NOT
contain the URL, so the test assertion was wrong for the current
security posture.

### Verification

- `bun test browse/test/sidebar-integration.test.ts` → 13/13 pass
  (was 6/13 on both main and branch before this commit)
- Full `bun run test` → exit 0, zero fail markers
- No behavior change for production sidebar flows: killAgent was
  already supposed to return the agent to idle; it just wasn't fully
  doing so.  Per-tab reset now matches the documented semantics.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: gus <gustavoraularagon@gmail.com>
Co-authored-by: Mohammed Qazi <10266060+theqazi@users.noreply.github.com>
2026-04-21 21:58:27 -07:00


/**
 * Chromium browser cookie import — read and decrypt cookies from real browsers
 *
 * Supports macOS, Linux, and Windows Chromium-based browsers.
 * Pure logic module — no Playwright dependency, no HTTP concerns.
 *
 * Decryption pipeline:
 *
 * ┌──────────────────────────────────────────────────────────────────┐
 * │ 1. Resolve the cookie DB from the browser profile dir            │
 * │    - macOS: ~/Library/Application Support/<browser>/<profile>    │
 * │    - Linux: ~/.config/<browser>/<profile>                        │
 * │                                                                  │
 * │ 2. Derive the AES key                                            │
 * │    - macOS v10: Keychain password, PBKDF2(..., iter=1003)        │
 * │    - Linux v10: "peanuts", PBKDF2(..., iter=1)                   │
 * │    - Linux v11: libsecret/secret-tool password, iter=1           │
 * │                                                                  │
 * │ 3. For each cookie with encrypted_value starting with "v10"/     │
 * │    "v11":                                                        │
 * │    - Ciphertext = encrypted_value[3:]                            │
 * │    - IV = 16 bytes of 0x20 (space character)                     │
 * │    - Plaintext = AES-128-CBC-decrypt(key, iv, ciphertext)        │
 * │    - Remove PKCS7 padding                                        │
 * │    - Skip first 32 bytes of Chromium cookie metadata             │
 * │    - Remaining bytes = cookie value (UTF-8)                      │
 * │                                                                  │
 * │ 4. If encrypted_value is empty but `value` field is set,         │
 * │    use value directly (unencrypted cookie)                       │
 * │                                                                  │
 * │ 5. Chromium epoch: microseconds since 1601-01-01                 │
 * │    Unix seconds = (epoch - 11644473600000000) / 1000000          │
 * │                                                                  │
 * │ 6. sameSite: 0→"None", 1→"Lax", 2→"Strict", else→"Lax"           │
 * └──────────────────────────────────────────────────────────────────┘
 */
import { Database } from 'bun:sqlite';
import * as crypto from 'crypto';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';
import { TEMP_DIR } from './platform';
// ─── Types ──────────────────────────────────────────────────────
export interface BrowserInfo {
  name: string;
  dataDir: string; // primary storage dir (retained for compatibility with existing callers/tests)
  keychainService: string;
  aliases: string[];
  linuxDataDir?: string;
  linuxApplication?: string;
  windowsDataDir?: string;
}
export interface ProfileEntry {
  name: string; // e.g. "Default", "Profile 1", "Profile 3"
  displayName: string; // human-friendly name from Preferences, or falls back to dir name
}
export interface DomainEntry {
  domain: string;
  count: number;
}
export interface ImportResult {
  cookies: PlaywrightCookie[];
  count: number;
  failed: number;
  domainCounts: Record<string, number>;
}
export interface PlaywrightCookie {
  name: string;
  value: string;
  domain: string;
  path: string;
  expires: number;
  secure: boolean;
  httpOnly: boolean;
  sameSite: 'Strict' | 'Lax' | 'None';
}
export class CookieImportError extends Error {
  constructor(
    message: string,
    public code: string,
    public action?: 'retry',
  ) {
    super(message);
    this.name = 'CookieImportError';
  }
}
type BrowserPlatform = 'darwin' | 'linux' | 'win32';
interface BrowserMatch {
  browser: BrowserInfo;
  platform: BrowserPlatform;
  dbPath: string;
}
// ─── Browser Registry ───────────────────────────────────────────
// Hardcoded — NEVER interpolate user input into shell commands.
const BROWSER_REGISTRY: BrowserInfo[] = [
  { name: 'Comet', dataDir: 'Comet/', keychainService: 'Comet Safe Storage', aliases: ['comet', 'perplexity'] },
  { name: 'Chrome', dataDir: 'Google/Chrome/', keychainService: 'Chrome Safe Storage', aliases: ['chrome', 'google-chrome', 'google-chrome-stable'], linuxDataDir: 'google-chrome/', linuxApplication: 'chrome', windowsDataDir: 'Google/Chrome/User Data/' },
  { name: 'Chromium', dataDir: 'chromium/', keychainService: 'Chromium Safe Storage', aliases: ['chromium'], linuxDataDir: 'chromium/', linuxApplication: 'chromium', windowsDataDir: 'Chromium/User Data/' },
  { name: 'Arc', dataDir: 'Arc/User Data/', keychainService: 'Arc Safe Storage', aliases: ['arc'] },
  { name: 'Brave', dataDir: 'BraveSoftware/Brave-Browser/', keychainService: 'Brave Safe Storage', aliases: ['brave'], linuxDataDir: 'BraveSoftware/Brave-Browser/', linuxApplication: 'brave', windowsDataDir: 'BraveSoftware/Brave-Browser/User Data/' },
  { name: 'Edge', dataDir: 'Microsoft Edge/', keychainService: 'Microsoft Edge Safe Storage', aliases: ['edge'], linuxDataDir: 'microsoft-edge/', linuxApplication: 'microsoft-edge', windowsDataDir: 'Microsoft/Edge/User Data/' },
];
// ─── Key Cache ──────────────────────────────────────────────────
// Cache derived AES keys per browser. First import per browser does
// Keychain + PBKDF2. Subsequent imports reuse the cached key.
const keyCache = new Map<string, Buffer>();
// ─── Public API ─────────────────────────────────────────────────
/**
* Find which browsers are installed (have a cookie DB on disk in any profile).
*/
export function findInstalledBrowsers(): BrowserInfo[] {
return BROWSER_REGISTRY.filter(browser => {
// Check Default profile on any platform
if (findBrowserMatch(browser, 'Default') !== null) return true;
// Check numbered profiles (Profile 1, Profile 2, etc.)
for (const platform of getSearchPlatforms()) {
const dataDir = getDataDirForPlatform(browser, platform);
if (!dataDir) continue;
const browserDir = path.join(getBaseDir(platform), dataDir);
try {
const entries = fs.readdirSync(browserDir, { withFileTypes: true });
if (entries.some(e => {
if (!e.isDirectory() || !e.name.startsWith('Profile ')) return false;
const profileDir = path.join(browserDir, e.name);
return fs.existsSync(path.join(profileDir, 'Cookies'))
|| (platform === 'win32' && fs.existsSync(path.join(profileDir, 'Network', 'Cookies')));
})) return true;
} catch {}
}
return false;
});
}
export function listSupportedBrowserNames(): string[] {
const hostPlatform = getHostPlatform();
return BROWSER_REGISTRY
.filter(browser => hostPlatform ? getDataDirForPlatform(browser, hostPlatform) !== null : true)
.map(browser => browser.name);
}
/**
* List available profiles for a browser.
*/
export function listProfiles(browserName: string): ProfileEntry[] {
const browser = resolveBrowser(browserName);
const profiles: ProfileEntry[] = [];
// Scan each supported platform for profile directories
for (const platform of getSearchPlatforms()) {
const dataDir = getDataDirForPlatform(browser, platform);
if (!dataDir) continue;
const browserDir = path.join(getBaseDir(platform), dataDir);
if (!fs.existsSync(browserDir)) continue;
let entries: fs.Dirent[];
try {
entries = fs.readdirSync(browserDir, { withFileTypes: true });
} catch {
continue;
}
for (const entry of entries) {
if (!entry.isDirectory()) continue;
if (entry.name !== 'Default' && !entry.name.startsWith('Profile ')) continue;
// Chrome 80+ on Windows stores cookies under Network/Cookies
const cookieCandidates = platform === 'win32'
? [path.join(browserDir, entry.name, 'Network', 'Cookies'), path.join(browserDir, entry.name, 'Cookies')]
: [path.join(browserDir, entry.name, 'Cookies')];
if (!cookieCandidates.some(p => fs.existsSync(p))) continue;
// Avoid duplicates if the same profile appears on multiple platforms
if (profiles.some(p => p.name === entry.name)) continue;
// Try to read display name from Preferences.
// Prefer account email — signed-in Chrome profiles often have generic
// names like "Person 2" while the email is far more readable.
let displayName = entry.name;
try {
const prefsPath = path.join(browserDir, entry.name, 'Preferences');
if (fs.existsSync(prefsPath)) {
const prefs = JSON.parse(fs.readFileSync(prefsPath, 'utf-8'));
const email = prefs?.account_info?.[0]?.email;
if (email && typeof email === 'string') {
displayName = email;
} else {
const profileName = prefs?.profile?.name;
if (profileName && typeof profileName === 'string') {
displayName = profileName;
}
}
}
} catch {
// Ignore — fall back to directory name
}
profiles.push({ name: entry.name, displayName });
}
// Found profiles on this platform — no need to check others
if (profiles.length > 0) break;
}
return profiles;
}
/**
* List unique cookie domains + counts from a browser's DB. No decryption.
*/
export function listDomains(browserName: string, profile = 'Default'): { domains: DomainEntry[]; browser: string } {
const browser = resolveBrowser(browserName);
const match = getBrowserMatch(browser, profile);
const db = openDb(match.dbPath, browser.name);
try {
const now = chromiumNow();
const rows = db.query(
`SELECT host_key AS domain, COUNT(*) AS count
FROM cookies
WHERE has_expires = 0 OR expires_utc > ?
GROUP BY host_key
ORDER BY count DESC`
).all(now) as DomainEntry[];
return { domains: rows, browser: browser.name };
} finally {
db.close();
}
}
/**
* Decrypt and return Playwright-compatible cookies for specific domains.
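 *
 * Usage sketch (illustrative: the browser and domain arguments are examples,
 * and `context` is a Playwright BrowserContext defined elsewhere, not in
 * this module):
 *   const result = await importCookies('chrome', ['.example.com', 'example.com']);
 *   await context.addCookies(result.cookies);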
*/
export async function importCookies(
browserName: string,
domains: string[],
profile = 'Default',
): Promise<ImportResult> {
if (domains.length === 0) return { cookies: [], count: 0, failed: 0, domainCounts: {} };
const browser = resolveBrowser(browserName);
const match = getBrowserMatch(browser, profile);
const derivedKeys = await getDerivedKeys(match);
const db = openDb(match.dbPath, browser.name);
try {
const now = chromiumNow();
// Parameterized query — no SQL injection
const placeholders = domains.map(() => '?').join(',');
const rows = db.query(
`SELECT host_key, name, value, encrypted_value, path, expires_utc,
is_secure, is_httponly, has_expires, samesite
FROM cookies
WHERE host_key IN (${placeholders})
AND (has_expires = 0 OR expires_utc > ?)
ORDER BY host_key, name`
).all(...domains, now) as RawCookie[];
const cookies: PlaywrightCookie[] = [];
let failed = 0;
const domainCounts: Record<string, number> = {};
for (const row of rows) {
try {
const value = decryptCookieValue(row, derivedKeys, match.platform);
const cookie = toPlaywrightCookie(row, value);
cookies.push(cookie);
domainCounts[row.host_key] = (domainCounts[row.host_key] || 0) + 1;
} catch {
failed++;
}
}
return { cookies, count: cookies.length, failed, domainCounts };
} finally {
db.close();
}
}
// ─── Internal: Browser Resolution ───────────────────────────────
function resolveBrowser(nameOrAlias: string): BrowserInfo {
const needle = nameOrAlias.toLowerCase().trim();
const found = BROWSER_REGISTRY.find(b =>
b.aliases.includes(needle) || b.name.toLowerCase() === needle
);
if (!found) {
const supported = BROWSER_REGISTRY.flatMap(b => b.aliases).join(', ');
throw new CookieImportError(
`Unknown browser '${nameOrAlias}'. Supported: ${supported}`,
'unknown_browser',
);
}
return found;
}
function validateProfile(profile: string): void {
if (/[/\\]|\.\./.test(profile) || /[\x00-\x1f]/.test(profile)) {
throw new CookieImportError(
`Invalid profile name: '${profile}'`,
'bad_request',
);
}
}
function getHostPlatform(): BrowserPlatform | null {
const p = process.platform;
if (p === 'darwin' || p === 'linux' || p === 'win32') return p as BrowserPlatform;
return null;
}
function getSearchPlatforms(): BrowserPlatform[] {
const current = getHostPlatform();
const order: BrowserPlatform[] = [];
if (current) order.push(current);
for (const platform of ['darwin', 'linux', 'win32'] as BrowserPlatform[]) {
if (!order.includes(platform)) order.push(platform);
}
return order;
}
function getDataDirForPlatform(browser: BrowserInfo, platform: BrowserPlatform): string | null {
if (platform === 'darwin') return browser.dataDir;
if (platform === 'linux') return browser.linuxDataDir || null;
return browser.windowsDataDir || null;
}
function getBaseDir(platform: BrowserPlatform): string {
if (platform === 'darwin') return path.join(os.homedir(), 'Library', 'Application Support');
if (platform === 'win32') return path.join(os.homedir(), 'AppData', 'Local');
return path.join(os.homedir(), '.config');
}
function findBrowserMatch(browser: BrowserInfo, profile: string): BrowserMatch | null {
validateProfile(profile);
for (const platform of getSearchPlatforms()) {
const dataDir = getDataDirForPlatform(browser, platform);
if (!dataDir) continue;
const baseProfile = path.join(getBaseDir(platform), dataDir, profile);
// Chrome 80+ on Windows stores cookies under Network/Cookies; fall back to Cookies
const candidates = platform === 'win32'
? [path.join(baseProfile, 'Network', 'Cookies'), path.join(baseProfile, 'Cookies')]
: [path.join(baseProfile, 'Cookies')];
for (const dbPath of candidates) {
try {
if (fs.existsSync(dbPath)) {
return { browser, platform, dbPath };
}
} catch {}
}
}
return null;
}
function getBrowserMatch(browser: BrowserInfo, profile: string): BrowserMatch {
const match = findBrowserMatch(browser, profile);
if (match) return match;
const attempted = getSearchPlatforms()
.map(platform => {
const dataDir = getDataDirForPlatform(browser, platform);
return dataDir ? path.join(getBaseDir(platform), dataDir, profile, 'Cookies') : null;
})
.filter((entry): entry is string => entry !== null);
throw new CookieImportError(
`${browser.name} is not installed (no cookie database at ${attempted.join(' or ')})`,
'not_installed',
);
}
// ─── Internal: SQLite Access ────────────────────────────────────
function openDb(dbPath: string, browserName: string): Database {
// On Windows, Chrome holds exclusive WAL locks even when we open readonly.
// The readonly open may "succeed" but return empty results because the WAL
// (where all actual data lives) can't be replayed. Always use the copy
// approach on Windows so we can open read-write and process the WAL.
if (process.platform === 'win32') {
return openDbFromCopy(dbPath, browserName);
}
try {
return new Database(dbPath, { readonly: true });
} catch (err: any) {
if (err.message?.includes('SQLITE_BUSY') || err.message?.includes('database is locked')) {
return openDbFromCopy(dbPath, browserName);
}
if (err.message?.includes('SQLITE_CORRUPT') || err.message?.includes('malformed')) {
throw new CookieImportError(
`Cookie database for ${browserName} is corrupt`,
'db_corrupt',
);
}
throw err;
}
}
function openDbFromCopy(dbPath: string, browserName: string): Database {
// Use os.tmpdir() instead of hardcoded /tmp for cross-platform support (#708)
const tmpPath = path.join(os.tmpdir(), `browse-cookies-${browserName.toLowerCase()}-${crypto.randomUUID()}.db`);
try {
fs.copyFileSync(dbPath, tmpPath);
// Also copy WAL and SHM if they exist (for consistent reads)
const walPath = dbPath + '-wal';
const shmPath = dbPath + '-shm';
if (fs.existsSync(walPath)) fs.copyFileSync(walPath, tmpPath + '-wal');
if (fs.existsSync(shmPath)) fs.copyFileSync(shmPath, tmpPath + '-shm');
const db = new Database(tmpPath); // open read-write so SQLite can replay the copied WAL
// Schedule cleanup after the DB is closed
const origClose = db.close.bind(db);
db.close = () => {
origClose();
try { fs.unlinkSync(tmpPath); } catch {}
try { fs.unlinkSync(tmpPath + '-wal'); } catch {}
try { fs.unlinkSync(tmpPath + '-shm'); } catch {}
};
return db;
} catch {
// Clean up on failure
try { fs.unlinkSync(tmpPath); } catch {}
throw new CookieImportError(
`Cookie database is locked (${browserName} may be running). Try closing ${browserName} first.`,
'db_locked',
'retry',
);
}
}
// ─── Internal: Keychain Access (async, 10s timeout) ─────────────
function deriveKey(password: string, iterations: number): Buffer {
return crypto.pbkdf2Sync(password, 'saltysalt', iterations, 16, 'sha1');
}
function getCachedDerivedKey(cacheKey: string, password: string, iterations: number): Buffer {
const cached = keyCache.get(cacheKey);
if (cached) return cached;
const derived = deriveKey(password, iterations);
keyCache.set(cacheKey, derived);
return derived;
}
async function getDerivedKeys(match: BrowserMatch): Promise<Map<string, Buffer>> {
if (match.platform === 'darwin') {
const password = await getMacKeychainPassword(match.browser.keychainService);
return new Map([
['v10', getCachedDerivedKey(`darwin:${match.browser.keychainService}:v10`, password, 1003)],
]);
}
if (match.platform === 'win32') {
const key = await getWindowsAesKey(match.browser);
return new Map([['v10', key]]);
}
const keys = new Map<string, Buffer>();
keys.set('v10', getCachedDerivedKey('linux:v10', 'peanuts', 1));
const linuxPassword = await getLinuxSecretPassword(match.browser);
if (linuxPassword) {
keys.set(
'v11',
getCachedDerivedKey(`linux:${match.browser.keychainService}:v11`, linuxPassword, 1),
);
}
return keys;
}
async function getWindowsAesKey(browser: BrowserInfo): Promise<Buffer> {
const cacheKey = `win32:${browser.keychainService}`;
const cached = keyCache.get(cacheKey);
if (cached) return cached;
const platform = 'win32' as const;
const dataDir = getDataDirForPlatform(browser, platform);
if (!dataDir) throw new CookieImportError(`No Windows data dir for ${browser.name}`, 'not_installed');
const localStatePath = path.join(getBaseDir(platform), dataDir, 'Local State');
let localState: any;
try {
localState = JSON.parse(fs.readFileSync(localStatePath, 'utf-8'));
} catch (err) {
const reason = err instanceof Error ? `: ${err.message}` : '';
throw new CookieImportError(
`Cannot read Local State for ${browser.name} at ${localStatePath}${reason}`,
'keychain_error',
);
}
const encryptedKeyB64: string | undefined = localState?.os_crypt?.encrypted_key;
if (!encryptedKeyB64) {
throw new CookieImportError(
`No encrypted key in Local State for ${browser.name}`,
'keychain_not_found',
);
}
// The stored value is base64(b"DPAPI" + dpapi_encrypted_bytes) — strip the 5-byte prefix
const encryptedKey = Buffer.from(encryptedKeyB64, 'base64').slice(5);
const key = await dpapiDecrypt(encryptedKey);
keyCache.set(cacheKey, key);
return key;
}
async function dpapiDecrypt(encryptedBytes: Buffer): Promise<Buffer> {
const script = [
'Add-Type -AssemblyName System.Security',
'$stdin = [Console]::In.ReadToEnd().Trim()',
'$bytes = [System.Convert]::FromBase64String($stdin)',
'$dec = [System.Security.Cryptography.ProtectedData]::Unprotect($bytes, $null, [System.Security.Cryptography.DataProtectionScope]::CurrentUser)',
'Write-Output ([System.Convert]::ToBase64String($dec))',
].join('; ');
const proc = Bun.spawn(['powershell', '-NoProfile', '-Command', script], {
stdin: 'pipe',
stdout: 'pipe',
stderr: 'pipe',
});
proc.stdin.write(encryptedBytes.toString('base64'));
proc.stdin.end();
const timeout = new Promise<never>((_, reject) =>
setTimeout(() => {
proc.kill();
reject(new CookieImportError('DPAPI decryption timed out', 'keychain_timeout', 'retry'));
}, 10_000),
);
try {
const exitCode = await Promise.race([proc.exited, timeout]);
const stdout = await new Response(proc.stdout).text();
if (exitCode !== 0) {
const stderr = await new Response(proc.stderr).text();
throw new CookieImportError(`DPAPI decryption failed: ${stderr.trim()}`, 'keychain_error');
}
return Buffer.from(stdout.trim(), 'base64');
} catch (err) {
if (err instanceof CookieImportError) throw err;
throw new CookieImportError(
`DPAPI decryption failed: ${(err as Error).message}`,
'keychain_error',
);
}
}
async function getMacKeychainPassword(service: string): Promise<string> {
// Use async Bun.spawn with timeout to avoid blocking the event loop.
// macOS may show an Allow/Deny dialog that blocks until the user responds.
const proc = Bun.spawn(
['security', 'find-generic-password', '-s', service, '-w'],
{ stdout: 'pipe', stderr: 'pipe' },
);
const timeout = new Promise<never>((_, reject) =>
setTimeout(() => {
proc.kill();
reject(new CookieImportError(
`macOS is waiting for Keychain permission. Look for a dialog asking to allow access to "${service}".`,
'keychain_timeout',
'retry',
));
}, 10_000),
);
try {
const exitCode = await Promise.race([proc.exited, timeout]);
const stdout = await new Response(proc.stdout).text();
const stderr = await new Response(proc.stderr).text();
if (exitCode !== 0) {
// Distinguish denied vs not found vs other
const errText = stderr.trim().toLowerCase();
if (errText.includes('user canceled') || errText.includes('denied') || errText.includes('interaction not allowed')) {
throw new CookieImportError(
`Keychain access denied. Click "Allow" in the macOS dialog for "${service}".`,
'keychain_denied',
'retry',
);
}
if (errText.includes('could not be found') || errText.includes('not found')) {
throw new CookieImportError(
`No Keychain entry for "${service}". Is this a Chromium-based browser?`,
'keychain_not_found',
);
}
throw new CookieImportError(
`Could not read Keychain: ${stderr.trim()}`,
'keychain_error',
'retry',
);
}
return stdout.trim();
} catch (err) {
if (err instanceof CookieImportError) throw err;
throw new CookieImportError(
`Could not read Keychain: ${(err as Error).message}`,
'keychain_error',
'retry',
);
}
}
async function getLinuxSecretPassword(browser: BrowserInfo): Promise<string | null> {
const attempts: string[][] = [
['secret-tool', 'lookup', 'Title', browser.keychainService],
];
if (browser.linuxApplication) {
attempts.push(
['secret-tool', 'lookup', 'xdg:schema', 'chrome_libsecret_os_crypt_password_v2', 'application', browser.linuxApplication],
['secret-tool', 'lookup', 'xdg:schema', 'chrome_libsecret_os_crypt_password', 'application', browser.linuxApplication],
);
}
for (const cmd of attempts) {
const password = await runPasswordLookup(cmd, 3_000);
if (password) return password;
}
return null;
}
async function runPasswordLookup(cmd: string[], timeoutMs: number): Promise<string | null> {
try {
const proc = Bun.spawn(cmd, { stdout: 'pipe', stderr: 'pipe' });
const timeout = new Promise<never>((_, reject) =>
setTimeout(() => {
proc.kill();
reject(new Error('timeout'));
}, timeoutMs),
);
const exitCode = await Promise.race([proc.exited, timeout]);
const stdout = await new Response(proc.stdout).text();
if (exitCode !== 0) return null;
const password = stdout.trim();
return password.length > 0 ? password : null;
} catch {
return null;
}
}
// ─── Internal: Cookie Decryption ────────────────────────────────
interface RawCookie {
host_key: string;
name: string;
value: string;
encrypted_value: Buffer | Uint8Array;
path: string;
expires_utc: number | bigint;
is_secure: number;
is_httponly: number;
has_expires: number;
samesite: number;
}
function decryptCookieValue(row: RawCookie, keys: Map<string, Buffer>, platform: BrowserPlatform): string {
// Prefer unencrypted value if present
if (row.value && row.value.length > 0) return row.value;
const ev = Buffer.from(row.encrypted_value);
if (ev.length === 0) return '';
const prefix = ev.slice(0, 3).toString('utf-8');
// Chrome 127+ on Windows uses App-Bound Encryption (v20) — cannot be decrypted
// outside the Chrome process. Caller should fall back to CDP extraction.
if (prefix === 'v20') throw new CookieImportError(
'Cookie uses App-Bound Encryption (v20). Use CDP extraction instead.',
'v20_encryption',
);
const key = keys.get(prefix);
if (!key) throw new Error(`No decryption key available for ${prefix} cookies`);
if (platform === 'win32' && prefix === 'v10') {
// Windows: AES-256-GCM — structure: v10(3) + nonce(12) + ciphertext + tag(16)
const nonce = ev.slice(3, 15);
const tag = ev.slice(ev.length - 16);
const ciphertext = ev.slice(15, ev.length - 16);
const decipher = crypto.createDecipheriv('aes-256-gcm', key, nonce) as crypto.DecipherGCM;
decipher.setAuthTag(tag);
return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf-8');
}
// macOS / Linux: AES-128-CBC — structure: v10/v11(3) + ciphertext
const ciphertext = ev.slice(3);
const iv = Buffer.alloc(16, 0x20); // 16 space characters
const decipher = crypto.createDecipheriv('aes-128-cbc', key, iv);
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]);
// Newer Chromium builds prepend SHA-256(host_key) as a 32-byte integrity
// prefix; older ones do not. Strip it only when it actually matches so
// cookies written by older profiles keep their full value.
if (plaintext.length >= 32) {
const hostHash = crypto.createHash('sha256').update(row.host_key).digest();
if (plaintext.subarray(0, 32).equals(hostHash)) {
return plaintext.subarray(32).toString('utf-8');
}
}
return plaintext.toString('utf-8');
}
function toPlaywrightCookie(row: RawCookie, value: string): PlaywrightCookie {
return {
name: row.name,
value,
domain: row.host_key,
path: row.path || '/',
expires: chromiumEpochToUnix(row.expires_utc, row.has_expires),
secure: row.is_secure === 1,
httpOnly: row.is_httponly === 1,
sameSite: mapSameSite(row.samesite),
};
}
// ─── Internal: Chromium Epoch Conversion ────────────────────────
const CHROMIUM_EPOCH_OFFSET = 11644473600000000n;
function chromiumNow(): bigint {
// Current time in Chromium epoch (microseconds since 1601-01-01)
return BigInt(Date.now()) * 1000n + CHROMIUM_EPOCH_OFFSET;
}
function chromiumEpochToUnix(epoch: number | bigint, hasExpires: number): number {
if (hasExpires === 0 || epoch === 0 || epoch === 0n) return -1; // session cookie
const epochBig = BigInt(epoch);
const unixMicro = epochBig - CHROMIUM_EPOCH_OFFSET;
return Number(unixMicro / 1000000n);
}
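// Worked example of chromiumEpochToUnix (illustrative values):
//   epoch = 13345678900000000 µs since 1601-01-01
//   (13345678900000000 - 11644473600000000) / 1_000_000 = 1701205300
//   → Unix 1701205300 s (late November 2023); has_expires = 0 returns -1.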
function mapSameSite(value: number): 'Strict' | 'Lax' | 'None' {
switch (value) {
case 0: return 'None';
case 1: return 'Lax';
case 2: return 'Strict';
default: return 'Lax';
}
}
// ─── CDP-based Cookie Extraction (Windows v20 fallback) ────────
// When App-Bound Encryption (v20) is detected, we launch Chrome headless
// with remote debugging and extract cookies via the DevTools Protocol.
// This only works when Chrome is NOT already running (profile lock).
const CHROME_PATHS_WIN = [
path.join(process.env.PROGRAMFILES || 'C:\\Program Files', 'Google', 'Chrome', 'Application', 'chrome.exe'),
path.join(process.env['PROGRAMFILES(X86)'] || 'C:\\Program Files (x86)', 'Google', 'Chrome', 'Application', 'chrome.exe'),
];
const EDGE_PATHS_WIN = [
path.join(process.env['PROGRAMFILES(X86)'] || 'C:\\Program Files (x86)', 'Microsoft', 'Edge', 'Application', 'msedge.exe'),
path.join(process.env.PROGRAMFILES || 'C:\\Program Files', 'Microsoft', 'Edge', 'Application', 'msedge.exe'),
];
function findBrowserExe(browserName: string): string | null {
const candidates = browserName.toLowerCase().includes('edge') ? EDGE_PATHS_WIN : CHROME_PATHS_WIN;
for (const p of candidates) {
if (fs.existsSync(p)) return p;
}
return null;
}
function isBrowserRunning(browserName: string): Promise<boolean> {
const exe = browserName.toLowerCase().includes('edge') ? 'msedge.exe' : 'chrome.exe';
return new Promise((resolve) => {
const proc = Bun.spawn(['tasklist', '/FI', `IMAGENAME eq ${exe}`, '/NH'], {
stdout: 'pipe', stderr: 'pipe',
});
proc.exited.then(async () => {
const out = await new Response(proc.stdout).text();
resolve(out.toLowerCase().includes(exe));
}).catch(() => resolve(false));
});
}
/**
* Extract cookies via Chrome DevTools Protocol. Launches Chrome headless with
* remote debugging on the user's real profile directory. Requires Chrome to be
* closed first (profile lock).
*
* v20 App-Bound Encryption binds decryption keys to the original user-data-dir
* path, so a temp copy of the profile won't work — Chrome silently discards
* cookies it can't decrypt. We must use the real profile.
*/
export async function importCookiesViaCdp(
browserName: string,
domains: string[],
profile = 'Default',
): Promise<ImportResult> {
if (domains.length === 0) return { cookies: [], count: 0, failed: 0, domainCounts: {} };
if (process.platform !== 'win32') {
throw new CookieImportError('CDP extraction is only needed on Windows', 'not_supported');
}
const browser = resolveBrowser(browserName);
const exePath = findBrowserExe(browser.name);
if (!exePath) {
throw new CookieImportError(
`Cannot find ${browser.name} executable. Install it or use /connect-chrome.`,
'not_installed',
);
}
if (await isBrowserRunning(browser.name)) {
throw new CookieImportError(
`${browser.name} is running. Close it first so we can launch headless with your profile, or use /connect-chrome to control your real browser directly.`,
'browser_running',
'retry',
);
}
// Must use the real user data dir — v20 ABE keys are path-bound
const dataDir = getDataDirForPlatform(browser, 'win32');
if (!dataDir) throw new CookieImportError(`No Windows data dir for ${browser.name}`, 'not_installed');
const userDataDir = path.join(getBaseDir('win32'), dataDir);
// Launch Chrome headless with remote debugging on the real profile.
//
// Security posture of the debug port:
// - Chrome binds --remote-debugging-port to 127.0.0.1 by default. The
// port is NOT exposed to the network. Baseline threat: a local
// process running as the same user can connect.
// - Port is randomized in [9222, 9321] to avoid collisions with other
// Chrome-based tools. Not cryptographic — security relies on
// same-user-access baseline, not port secrecy.
// - Chrome is always killed in the finally block below (even on crash).
//
// KNOWN NON-GOAL (tracked as a separate hardening task for the next
// security wave):
// On Windows with App-Bound Encryption (v20, Chrome 127+) enabled, a
// same-user process that opens the cookie DB directly cannot decrypt
// v20 values — the DPAPI context is bound to the browser process.
// The CDP port bypasses that: `Network.getAllCookies` runs inside the
// browser, so any same-user process that connects to the debug port
// before we kill Chrome could exfiltrate decrypted v20 cookies.
// Fix direction: switch to `--remote-debugging-pipe` so the CDP
// transport is a parent/child stdio pipe, not TCP. Requires
// restructuring the extractCookiesViaCdp WebSocket client; deferred
// to a follow-up because the transport swap is non-trivial and the
// baseline threat is still "attacker already has same-user access."
//
// Debugging note: if this path starts failing after a Chrome update,
// check the Chrome version logged below — Chrome's ABE key format (v20)
// or /json/list shape can change between major versions.
const debugPort = 9222 + Math.floor(Math.random() * 100);
const chromeProc = Bun.spawn([
exePath,
`--remote-debugging-port=${debugPort}`,
`--user-data-dir=${userDataDir}`,
`--profile-directory=${profile}`,
'--headless=new',
'--no-first-run',
'--disable-background-networking',
'--disable-default-apps',
'--disable-extensions',
'--disable-sync',
'--no-default-browser-check',
], { stdout: 'pipe', stderr: 'pipe' });
// Wait for Chrome to start, then find a page target's WebSocket URL.
// Network.getAllCookies is only available on page targets, not browser.
let wsUrl: string | null = null;
const startTime = Date.now();
let loggedVersion = false;
while (Date.now() - startTime < 15_000) {
try {
// One-time version log for future diagnostics when Chrome changes v20 format.
if (!loggedVersion) {
try {
const versionResp = await fetch(`http://127.0.0.1:${debugPort}/json/version`);
if (versionResp.ok) {
const v = await versionResp.json() as { Browser?: string };
console.log(`[cookie-import] CDP fallback: ${browser.name} ${v.Browser || 'unknown version'}`);
loggedVersion = true;
}
} catch {}
}
const resp = await fetch(`http://127.0.0.1:${debugPort}/json/list`);
if (resp.ok) {
const targets = await resp.json() as Array<{ type: string; webSocketDebuggerUrl?: string }>;
const page = targets.find(t => t.type === 'page');
if (page?.webSocketDebuggerUrl) {
wsUrl = page.webSocketDebuggerUrl;
break;
}
}
} catch {
// Not ready yet
}
await new Promise(r => setTimeout(r, 300));
}
if (!wsUrl) {
chromeProc.kill();
throw new CookieImportError(
`${browser.name} headless did not start within 15s`,
'cdp_timeout',
'retry',
);
}
try {
// Connect via CDP WebSocket
const cookies = await extractCookiesViaCdp(wsUrl, domains);
const domainCounts: Record<string, number> = {};
for (const c of cookies) {
domainCounts[c.domain] = (domainCounts[c.domain] || 0) + 1;
}
return { cookies, count: cookies.length, failed: 0, domainCounts };
} finally {
chromeProc.kill();
}
}
async function extractCookiesViaCdp(wsUrl: string, domains: string[]): Promise<PlaywrightCookie[]> {
return new Promise((resolve, reject) => {
const ws = new WebSocket(wsUrl);
let msgId = 1;
const timeout = setTimeout(() => {
ws.close();
reject(new CookieImportError('CDP cookie extraction timed out', 'cdp_timeout'));
}, 10_000);
ws.onopen = () => {
// Enable Network domain first, then request all cookies
ws.send(JSON.stringify({ id: msgId++, method: 'Network.enable' }));
};
ws.onmessage = (event) => {
const data = JSON.parse(String(event.data));
// Reply to Network.enable (id 1): fail fast on error, else request all cookies
if (data.id === 1) {
if (data.error) {
clearTimeout(timeout);
ws.close();
reject(new CookieImportError(`CDP error: ${data.error.message}`, 'cdp_error'));
return;
}
ws.send(JSON.stringify({ id: msgId, method: 'Network.getAllCookies' }));
return;
}
if (data.id === msgId && data.result?.cookies) {
clearTimeout(timeout);
ws.close();
// Normalize domain matching: domains like ".example.com" match "example.com" and vice versa
const domainSet = new Set<string>();
for (const d of domains) {
domainSet.add(d);
domainSet.add(d.startsWith('.') ? d.slice(1) : '.' + d);
}
const matched: PlaywrightCookie[] = [];
for (const c of data.result.cookies as CdpCookie[]) {
if (!domainSet.has(c.domain)) continue;
matched.push({
name: c.name,
value: c.value,
domain: c.domain,
path: c.path || '/',
expires: c.expires, // CDP uses -1 for session cookies, same convention as Playwright
secure: c.secure,
httpOnly: c.httpOnly,
sameSite: cdpSameSite(c.sameSite),
});
}
resolve(matched);
} else if (data.id === msgId && data.error) {
clearTimeout(timeout);
ws.close();
reject(new CookieImportError(
`CDP error: ${data.error.message}`,
'cdp_error',
));
}
};
ws.onerror = (err) => {
clearTimeout(timeout);
ws.close();
reject(new CookieImportError(
`CDP WebSocket error: ${(err as any).message || 'unknown'}`,
'cdp_error',
));
};
});
}
interface CdpCookie {
name: string;
value: string;
domain: string;
path: string;
expires: number;
size: number;
httpOnly: boolean;
secure: boolean;
session: boolean;
sameSite: string;
}
function cdpSameSite(value: string): 'Strict' | 'Lax' | 'None' {
switch (value) {
case 'Strict': return 'Strict';
case 'Lax': return 'Lax';
case 'None': return 'None';
default: return 'Lax';
}
}
/**
* Check if a browser's cookie DB contains v20 (App-Bound) encrypted cookies.
* Quick check — reads a small sample, no decryption attempted.
*/
export function hasV20Cookies(browserName: string, profile = 'Default'): boolean {
if (process.platform !== 'win32') return false;
try {
const browser = resolveBrowser(browserName);
const match = getBrowserMatch(browser, profile);
const db = openDb(match.dbPath, browser.name);
try {
const rows = db.query('SELECT encrypted_value FROM cookies LIMIT 10').all() as Array<{ encrypted_value: Buffer | Uint8Array }>;
return rows.some(row => {
const ev = Buffer.from(row.encrypted_value);
return ev.length >= 3 && ev.slice(0, 3).toString('utf-8') === 'v20';
});
} finally {
db.close();
}
} catch {
return false;
}
}