Files with extensions the mime-types package doesn't recognize (such as
.jsonl) fell through to application/octet-stream and rendered as
"Unsupported binary file" in the viewer. Replace istextorbinary with
isbinaryfile for content-based detection, and use mime-types for
name-based classification with a textual application/* allowlist.
The streaming transformer now defers classification when the name is
inconclusive and sniffs the first chunk before emitting "transform",
so route.ts and AnonymizedFile.ts get a content-aware Content-Type.
Whitelist .jsonl and .ndjson to short-circuit dataset files.
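The classification order described above could look roughly like this
(a sketch — the allowlist contents and helper names here are assumptions,
and the real code calls into the mime-types package for lookup):

```typescript
// Hypothetical name-based classifier: dataset extensions short-circuit,
// known textual application/* types count as text, and an unknown name
// defers to content sniffing (isbinaryfile on the first chunk).
const DATASET_EXTENSIONS = new Set([".jsonl", ".ndjson"]);
const TEXTUAL_APPLICATION_MIMES = new Set([
  "application/json",
  "application/xml",
  "application/javascript",
]);

type Classification = "text" | "binary" | "unknown";

function classifyByName(
  name: string,
  mimeLookup: (n: string) => string | null,
): Classification {
  const dot = name.lastIndexOf(".");
  const ext = dot >= 0 ? name.slice(dot).toLowerCase() : "";
  if (DATASET_EXTENSIONS.has(ext)) return "text"; // dataset short-circuit
  const mime = mimeLookup(name);
  if (!mime) return "unknown"; // inconclusive: sniff content instead
  if (mime.startsWith("text/")) return "text";
  if (TEXTUAL_APPLICATION_MIMES.has(mime)) return "text";
  return "binary";
}
```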
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Files tracked by Git LFS used to come out as the pointer text:
version https://git-lfs.github.com/spec/v1
oid sha256:...
size ...
…because GitHub's blob API returns the pointer, not the resolved
content. Detect that prefix on the first ~150 bytes of the blob stream
and switch to a fresh fetch via the web raw URL
(github.com/<owner>/<repo>/raw/<commit>/<path>), which auto-redirects
to media.githubusercontent.com and resolves the LFS object — auth
header carries through. Non-LFS files are forwarded through the
existing pipeline unchanged.
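The detection and URL rebuild can be sketched as follows (helper names
are illustrative, not the actual implementation):

```typescript
// A Git LFS pointer is a tiny ASCII file with a fixed first line, so
// checking the prefix in the first ~150 bytes of the stream is enough.
const LFS_PREFIX = "version https://git-lfs.github.com/spec/v1";

function looksLikeLfsPointer(firstBytes: Buffer): boolean {
  return firstBytes.subarray(0, 150).toString("utf8").startsWith(LFS_PREFIX);
}

// The web raw URL redirects to media.githubusercontent.com, which
// resolves the LFS object; the auth header carries through the redirect.
function rawUrl(owner: string, repo: string, commit: string, path: string): string {
  return `https://github.com/${owner}/${repo}/raw/${commit}/${path}`;
}
```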
Fixes #95.
When the anonymizer doesn't change a slice's text, the streamer used
to push Buffer.from(out, "utf8") — which loses any invalid-UTF-8 bytes
in the input (replaced by U+FFFD via StringDecoder). Files
mistakenly classified as text (binary blobs without a known extension,
text with stray non-UTF-8 bytes, BOMs) came out corrupted even though
nothing in the term list matched.
Track the raw chunk bytes alongside the decoded `pending`. On flush —
where we have every byte buffered — emit the original buffer directly
when the output equals the input, so a pure passthrough is bit-exact.
In the streaming OVERLAP path, do the same when the decode for that
slice round-trips losslessly; fall back to encoded output otherwise
(unchanged from before for that case).
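A minimal sketch of the flush-path check, assuming the transformer keeps
the raw chunk bytes alongside the decoded text (the real code decodes
incrementally with StringDecoder across chunks):

```typescript
// On flush every byte is buffered: if the anonymizer left the decoded
// text unchanged, emit the original buffer so invalid UTF-8 survives
// bit-exact instead of being re-encoded through U+FFFD replacements.
function flushOutput(raw: Buffer, anonymized: string): Buffer {
  const decoded = raw.toString("utf8");
  return anonymized === decoded ? raw : Buffer.from(anonymized, "utf8");
}
```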
Also add the "missing_content" locale entry for the
/api/anonymize-preview route.
istextorbinary returns null for filenames with no extension, and the
isTextFile() guard treated null as "not text" — so terms in LICENSE,
COPYING, AUTHORS, README (extensionless), CHANGELOG, NOTICE, and
similar conventional filenames went through the binary passthrough
in AnonymizeTransformer and were never anonymized.
Add a small whitelist of these names ahead of the istextorbinary call.
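An assumed shape for that whitelist (the exact names checked live in the
real helper; this list mirrors the ones mentioned above):

```typescript
// Conventional extensionless filenames that should always be treated as
// text, consulted before the content-type guess.
const KNOWN_TEXT_NAMES = new Set([
  "LICENSE", "COPYING", "AUTHORS", "README", "CHANGELOG", "NOTICE",
]);

function isKnownTextName(filePath: string): boolean {
  const base = filePath.split("/").pop() ?? "";
  return KNOWN_TEXT_NAMES.has(base.toUpperCase());
}
```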
Fixes #493.
A term entered as "Anonymous=>ABC" now scrubs "Anonymous" to "ABC"
instead of "XXXX-N". Lets users keep anonymized identifiers valid in
source code (no hyphen) and align tokens between paper text and repo.
Indexing for default-mask terms is unchanged: a list of
"Alpha=>AAA", "Beta" still produces XXXX-2 for Beta.
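Illustrative parsing of the "term=>replacement" syntax (a sketch — the
real parser may differ in details, but the positional indexing matches
the example above):

```typescript
// Terms with "=>" carry their own replacement; terms without one get the
// default mask. The XXXX-N index is positional over the full list, so a
// mapped term still consumes its slot.
function parseTerms(terms: string[]): { term: string; replacement: string }[] {
  return terms.map((raw, i) => {
    const sep = raw.indexOf("=>");
    if (sep > 0) {
      return { term: raw.slice(0, sep), replacement: raw.slice(sep + 2) };
    }
    return { term: raw, replacement: `XXXX-${i + 1}` };
  });
}
```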
Fixes #285.
Without the path, two different files in the same repo (same sha, same
anonymization options) shared an ETag. If a browser ever sent the cached
ETag for one file while requesting another, the server would have
returned 304 against the wrong cache entry. Fold the path into the
ETag so each file has its own fingerprint.
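A sketch of the folded fingerprint (field names are assumptions; the
real code hashes whatever inputs identify the anonymized response):

```typescript
import { createHash } from "node:crypto";

// Hash sha + path + options so two files in the same repo with the same
// sha and options still get distinct ETags.
function makeEtag(sha: string, filePath: string, optionsFingerprint: string): string {
  const digest = createHash("sha256")
    .update(`${sha}:${filePath}:${optionsFingerprint}`)
    .digest("hex")
    .slice(0, 16);
  return `"${digest}"`;
}
```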
Follow-up to b3c1030 (#439).
Files were being served with Cache-Control: max-age=18144000 (210 days)
keyed only on the upstream ?v=<sha>. Editing the term list left the
same URL serving stale anonymized bytes — visible to users in regular
tabs but not in incognito. The fix-by-incognito recipe previously
offered in #439 was a symptom of exactly this.
Switch to ETag-based revalidation that fingerprints both the upstream
sha and the saved anonymization options, with Cache-Control:
no-cache, must-revalidate. Browsers now revalidate on every request and
get a 304 when nothing has changed, or fresh content as soon as terms,
image/link/etc. options are updated.
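The revalidation flow can be sketched like this (a simplified model, not
the actual route handler):

```typescript
// Always send the ETag with no-cache, must-revalidate; answer 304 when
// the client's If-None-Match already carries the current fingerprint.
type Res = { status: number; headers: Record<string, string>; body?: Buffer };

function serveWithRevalidation(
  etag: string,
  ifNoneMatch: string | undefined,
  body: () => Buffer,
): Res {
  const headers = {
    ETag: etag,
    "Cache-Control": "no-cache, must-revalidate",
  };
  if (ifNoneMatch === etag) return { status: 304, headers };
  return { status: 200, headers, body: body() };
}
```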
Fixes #439.
When a user added "Davó" to the term list, "Davo" elsewhere in the
content was left untouched (and vice versa). Each term now also runs a
diacritic-insensitive pass: ASCII Latin letters expand to a class
covering common accented siblings, with Unicode-aware lookaround
boundaries so the trailing boundary still fires next to "ó" etc.
Pure helpers moved into src/core/term-matching so the test file can
import them instead of duplicating the logic.
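A sketch of the expansion, assuming a small base-letter table (the real
map in src/core/term-matching likely covers more letters):

```typescript
// Each letter is folded to its base form, then expanded to a class of
// common accented siblings; Unicode-aware lookarounds replace \b so the
// boundary still fires next to "ó" etc.
const DIACRITIC_CLASSES: Record<string, string> = {
  a: "aàáâãäå", e: "eèéêë", i: "iìíîï", o: "oòóôõö", u: "uùúûü", n: "nñ", c: "cç",
};

function diacriticInsensitivePattern(term: string): string {
  const body = [...term.toLowerCase()]
    .map((ch) => {
      // Fold "ó" -> "o" before the table lookup so accented input terms
      // also match their plain-ASCII siblings.
      const base = ch.normalize("NFD").replace(/\p{M}+/gu, "");
      const cls = DIACRITIC_CLASSES[base];
      return cls ? `[${cls}]` : ch.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    })
    .join("");
  return `(?<![\\p{L}\\p{N}])${body}(?![\\p{L}\\p{N}])`;
}
```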
Fixes #280.
The streaming zip pipeline was constructing AnonymizeTransformer first and
then assigning opt.filePath afterwards. AnonymizeTransformer determines
isText in its constructor from opt.filePath, so every entry was classified
as binary and passed through unchanged — the downloaded zip leaked the
original (un-anonymized) terms even though the web view scrubbed them.
Pass filePath via the constructor so isText is computed correctly.
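A before/after sketch of the ordering bug (the class here is a stand-in
with an invented text check; the real AnonymizeTransformer has more
options):

```typescript
// isText is derived in the constructor, so filePath must arrive via the
// constructor options rather than being assigned afterwards.
class AnonymizeTransformerSketch {
  readonly isText: boolean;
  constructor(public opt: { filePath?: string }) {
    this.isText = !!opt.filePath && /\.(md|ts|txt|json)$/i.test(opt.filePath);
  }
}

// Buggy: assigning filePath after construction is too late.
const buggy = new AnonymizeTransformerSketch({});
buggy.opt.filePath = "README.md"; // isText was already computed as false

// Fixed: pass filePath through the constructor.
const fixed = new AnonymizeTransformerSketch({ filePath: "README.md" });
```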
Fixes #342, #349.
Wrapping every user term as `\b${term}\b` silently dropped matches when
the term started or ended with a non-word char (e.g. `@tdurieux`,
`@author .*`), because JS `\b` only fires at a word/non-word transition.
Replace with `withWordBoundaries()`, which only emits `\b` on the side
where the term has a word-char edge.
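A sketch of that helper (terms are treated as regex fragments here, as
the `@author .*` example suggests; the real function may differ in
details):

```typescript
// Emit \b only on a side whose edge is a word character; \b next to "@"
// or a trailing "." would otherwise require a transition that never occurs.
function withWordBoundaries(term: string): string {
  const left = /^\w/.test(term) ? "\\b" : "";
  const right = /\w$/.test(term) ? "\\b" : "";
  return `${left}${term}${right}`;
}
```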
Fixes #175, #249.