mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-01 19:25:10 +02:00
c6e6a21d1a
* refactor: add error-handling utility module with selective catches: safeUnlink (ignores ENOENT), safeKill (ignores ESRCH), isProcessAlive (extracted from cli.ts with Windows support), and a json() Response helper. All catches check err.code and rethrow unexpected errors instead of swallowing them silently. Unit tests cover the happy path and the error code paths.
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor: replace defensive try/catches in server.ts with utilities. Replace ~12 try/catch sites with safeUnlink/safeKill calls in shutdown, emergencyCleanup, killAgent, and log cleanup. Convert empty catches to selective catches with error code checks. Remove needless welcome-page try/catches (fs.existsSync doesn't need wrapping). Reduces slop-scan empty-catch locations from 11 to 8 and error-swallowing from 24 to 18.
* refactor: extract isProcessAlive and replace try/catches in cli.ts. Move isProcessAlive to the shared error-handling module. Replace ~20 try/catch sites with safeUnlink/safeKill in killServer, connect, disconnect, and cleanup flows. Convert empty catches to selective catches. Reduces slop-scan empty-catch from 22 to 2 locations.
* refactor: remove unnecessary return await in content-security and read-commands. Remove 6 redundant return-await patterns where there's no enclosing try block. Eliminates all defensive.async-noise findings from these files.
* chore: add slop-scan config to exclude vendor files.
* refactor: replace empty catches with selective error handling in sidebar-agent. Convert 8 empty catch blocks to selective catches that check err.code (ESRCH for process kills, ENOENT for file ops). Import safeUnlink for cancel-file cleanup. Unexpected errors now propagate instead of being silently swallowed.
* refactor: replace empty catches and mark pass-through wrappers in browser-manager. Convert 12 empty catch blocks to selective catches: filesystem ops check ENOENT/EACCES, browser ops check for closed/Target messages, URL parsing checks TypeError. Add 'alias for active session' comments above 6 pass-through wrapper methods to document their purpose (and exempt them from slop-scan's pass-through-wrappers rule).
* refactor: selective catches in gstack-global-discover. Convert 8 defensive catch blocks to selective error handling. Filesystem ops check ENOENT/EACCES, process ops check exit status. Unexpected errors now propagate instead of returning silent defaults.
* refactor: selective catches in write-commands, cdp-inspector, meta-commands, snapshot. Convert ~27 empty/obscuring catches to selective error handling across 4 browse source files. CDP ops check for closed/Target/detached messages, DOM ops check TypeError/DOMException, filesystem ops check ENOENT/EACCES, JSON parsing checks SyntaxError. Remove dead code in cdp-inspector where try/catch wrapped synchronous no-ops.
* refactor: selective catches in Chrome extension files. Convert empty catches and error-swallowing patterns across inspector.js, content.js, background.js, and sidepanel.js. DOM catches filter TypeError/DOMException, chrome API catches filter "Extension context invalidated", network catches filter "Failed to fetch". Unexpected errors now propagate.
* fix: restore isProcessAlive boolean semantics, add safeUnlinkQuiet, remove unused json(). isProcessAlive now catches ALL errors and returns false (a pure boolean probe). Callers use it in if/while conditions without try/catch, so throwing on EPERM was a behavior change that could crash the CLI. The Windows path gets its safety catch restored. safeUnlinkQuiet is added for best-effort cleanup paths where throwing on non-ENOENT errors (like EPERM during shutdown) would abort cleanup. json() is removed: dead code, never imported anywhere.
* fix: use safeUnlinkQuiet in shutdown and cleanup paths. Shutdown, emergency cleanup, and disconnect paths should never throw on file-deletion failures. Switched from safeUnlink (throws on EPERM) to safeUnlinkQuiet (swallows all errors) in these best-effort paths. Normal operation paths (startup, lock release) keep safeUnlink.
* revert: remove brittle string-matching catches and alias comments in browser-manager. Revert 6 catches that matched error messages via includes('closed'), includes('Target'), etc. back to empty catches. These fire-and-forget operations (page.close, bringToFront, dialog dismiss) genuinely don't care about any error type; string matching on error messages is brittle and will break on Playwright version bumps. Remove the 6 'alias for active session' comments that existed solely to game slop-scan's pass-through-wrapper exemption rule.
* revert: remove brittle string-matching catches in extension files. Revert error-swallowing fixes in background.js and sidepanel.js that matched error messages via includes('Failed to fetch'), includes('Extension context invalidated'), etc. In Chrome extensions, uncaught errors crash the entire extension; the original catch-and-log pattern is the correct choice for extension code where any error is non-fatal. content.js and inspector.js changes are kept: their TypeError/DOMException catches are typed, not string-based.
* docs: add slop-scan usage guidelines to CLAUDE.md. Instructions for using slop-scan to improve genuine code quality, not to game metrics or hide that we're AI-coded. Documents what to fix (empty catches on file/process ops, typed exception narrows, return await) and what NOT to fix (string-matching on error messages, linter-gaming comments, tightening extension/cleanup catches). Includes a utility-function reference and baseline score tracking.
* chore: add slop-scan as a diagnostic in the test suite. Runs slop-scan after bun test as a non-blocking diagnostic. Prints the summary (top files, hotspots) so you see the number without it gating anything. Available standalone via bun run slop.
* feat: slop-diff shows only NEW findings introduced on this branch. Runs slop-scan on HEAD and on the merge-base, then diffs the results with line-number-insensitive fingerprinting so shifted code doesn't create false positives. Uses a git worktree for a clean base comparison. Shows net new vs removed findings. Runs automatically after bun test.
* docs: design doc for slop-scan integration in /review and /ship. Deferred plan for surfacing slop-diff findings automatically during code review and shipping. Documents integration points, auto-fix vs skip heuristics, and implementation notes.
* chore: bump version and changelog (v0.16.3.0)
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
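The error-handling utilities named in the commits above can be sketched roughly as follows. A minimal sketch based only on the behavior the commit messages describe; the exact signatures, module layout, and the `err?.code` narrowing style are assumptions, not the repository's actual code:

```typescript
// Sketch of the selective-catch utilities described in the commits above.
// Names and error codes come from the commit messages; signatures are assumed.
import * as fs from 'fs';

// Delete a file, ignoring "already gone" (ENOENT); rethrow anything else.
export function safeUnlink(filePath: string): void {
  try {
    fs.unlinkSync(filePath);
  } catch (err: any) {
    if (err?.code !== 'ENOENT') throw err;
  }
}

// Best-effort variant for shutdown/cleanup paths: swallow every error,
// since throwing on EPERM mid-shutdown would abort the rest of the cleanup.
export function safeUnlinkQuiet(filePath: string): void {
  try {
    fs.unlinkSync(filePath);
  } catch {}
}

// Signal a process, ignoring "no such process" (ESRCH); rethrow anything else.
export function safeKill(pid: number, signal: NodeJS.Signals | number = 'SIGTERM'): void {
  try {
    process.kill(pid, signal);
  } catch (err: any) {
    if (err?.code !== 'ESRCH') throw err;
  }
}

// Pure boolean probe: catches ALL errors and returns false, because callers
// use it bare in if/while conditions and must never see a throw.
export function isProcessAlive(pid: number): boolean {
  try {
    process.kill(pid, 0); // signal 0 checks existence without sending anything
    return true;
  } catch {
    return false;
  }
}
```

The split between `safeUnlink` and `safeUnlinkQuiet` mirrors the fix commits: normal operation paths want unexpected errors to propagate, while best-effort shutdown paths want silence.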
171 lines
5.7 KiB
TypeScript
#!/usr/bin/env bun
/**
 * slop-diff: show NEW slop-scan findings introduced on this branch.
 *
 * Runs slop-scan on HEAD and on the merge-base, then diffs the results
 * to show only findings that were added. Line-number-insensitive comparison
 * so shifting code doesn't create false positives.
 *
 * Usage:
 *   bun run slop:diff                  # diff against main
 *   bun run slop:diff origin/release   # diff against another base
 */

import { spawnSync } from 'child_process';
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

const base = process.argv[2] || 'main';

// 1. Find changed files
const diffResult = spawnSync('git', ['diff', '--name-only', `${base}...HEAD`], {
  encoding: 'utf-8', timeout: 10000,
});
const changedFiles = new Set(
  (diffResult.stdout || '').trim().split('\n').filter(Boolean)
);
if (changedFiles.size === 0) {
  console.log('No files changed vs', base, '— nothing to check.');
  process.exit(0);
}

// 2. Run slop-scan on HEAD
const scanHead = spawnSync('npx', ['slop-scan', 'scan', '.', '--json'], {
  encoding: 'utf-8', timeout: 120000, shell: true,
});
if (!scanHead.stdout) {
  console.log('slop-scan not available. Install: npm i -g slop-scan');
  process.exit(0);
}
let headReport: any;
try { headReport = JSON.parse(scanHead.stdout); } catch {
  console.log('slop-scan returned invalid JSON.'); process.exit(0);
}

// 3. Get base branch findings
// Scan a detached worktree at the merge-base so the working tree stays untouched
const mergeBase = spawnSync('git', ['merge-base', base, 'HEAD'], {
  encoding: 'utf-8', timeout: 5000,
}).stdout?.trim();

// Fingerprint: strip line numbers so shifting code doesn't create false positives
// "line 142: empty catch, boundary=none" -> "empty catch, boundary=none"
function stripLineNum(evidence: string): string {
  return evidence.replace(/^line \d+: /, '').replace(/ at line \d+ /, ' ');
}

// Count evidence items per (rule, file, stripped-evidence) for the base
const baseCounts = new Map<string, number>();

if (mergeBase) {
  // Create temp worktree for base scan
  const tmpWorktree = path.join(os.tmpdir(), `slop-base-${Date.now()}`);
  const wtResult = spawnSync('git', ['worktree', 'add', '--detach', tmpWorktree, mergeBase], {
    encoding: 'utf-8', timeout: 30000,
  });

  if (wtResult.status === 0) {
    // Copy slop-scan config if it exists
    const configFile = 'slop-scan.config.json';
    if (fs.existsSync(configFile)) {
      try { fs.copyFileSync(configFile, path.join(tmpWorktree, configFile)); } catch {}
    }

    const scanBase = spawnSync('npx', ['slop-scan', 'scan', tmpWorktree, '--json'], {
      encoding: 'utf-8', timeout: 120000, shell: true,
    });

    if (scanBase.stdout) {
      try {
        const baseReport = JSON.parse(scanBase.stdout);
        for (const f of baseReport.findings) {
          // Remap worktree paths back to repo-relative
          const realPath = f.path.replace(tmpWorktree + '/', '');
          if (!changedFiles.has(realPath)) continue;
          for (const ev of f.evidence || []) {
            const key = `${f.ruleId}|${realPath}|${stripLineNum(ev)}`;
            baseCounts.set(key, (baseCounts.get(key) || 0) + 1);
          }
        }
      } catch {}
    }

    // Clean up worktree
    spawnSync('git', ['worktree', 'remove', '--force', tmpWorktree], {
      timeout: 10000,
    });
  }
}

// 4. Find genuinely new findings
// For each evidence item on HEAD, check if the base had the same (rule, file, stripped-evidence).
// Use counts to handle duplicates: if base had 2 and HEAD has 3, that's 1 new.
const headCounts = new Map<string, { count: number; evidence: string[] }>();
const headFindings = headReport.findings.filter((f: any) => changedFiles.has(f.path));

for (const f of headFindings) {
  for (const ev of f.evidence || []) {
    const key = `${f.ruleId}|${f.path}|${stripLineNum(ev)}`;
    const entry = headCounts.get(key) || { count: 0, evidence: [] };
    entry.count++;
    entry.evidence.push(ev);
    headCounts.set(key, entry);
  }
}

// Compute net new
type NewFinding = { ruleId: string; filePath: string; evidence: string };
const newFindings: NewFinding[] = [];
let removedCount = 0;

for (const [key, entry] of headCounts) {
  const baseCount = baseCounts.get(key) || 0;
  const netNew = entry.count - baseCount;
  if (netNew > 0) {
    const [ruleId, filePath] = key.split('|');
    // Take the last N evidence items as the "new" ones
    for (const ev of entry.evidence.slice(-netNew)) {
      newFindings.push({ ruleId, filePath, evidence: ev });
    }
  }
}

for (const [key, baseCount] of baseCounts) {
  const headCount = headCounts.get(key)?.count || 0;
  if (headCount < baseCount) removedCount += baseCount - headCount;
}

// 5. Print results
if (newFindings.length === 0) {
  if (removedCount > 0) {
    console.log(`\n slop-scan: no new findings. Removed ${removedCount} pre-existing findings.\n`);
  } else {
    console.log(`\n slop-scan: no new findings in ${changedFiles.size} changed files.\n`);
  }
  process.exit(0);
}

console.log(`\n── slop-scan: ${newFindings.length} new findings (+${newFindings.length} / -${removedCount}) ──\n`);

// Group by file, then by rule
const grouped = new Map<string, Map<string, string[]>>();
for (const { ruleId, filePath, evidence } of newFindings) {
  if (!grouped.has(filePath)) grouped.set(filePath, new Map());
  const rules = grouped.get(filePath)!;
  if (!rules.has(ruleId)) rules.set(ruleId, []);
  rules.get(ruleId)!.push(evidence);
}

for (const [filePath, rules] of grouped) {
  console.log(` ${filePath}`);
  for (const [ruleId, evidence] of rules) {
    console.log(` ${ruleId}:`);
    for (const ev of evidence) {
      console.log(` ${ev}`);
    }
  }
}

console.log(`\n Net: +${newFindings.length} new, -${removedCount} removed\n`);
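The fingerprint-and-count diffing in steps 3–4 above can be distilled into a small standalone sketch. `countFingerprints` and `netNewCount` are hypothetical helper names introduced here for illustration; the real script inlines this logic over slop-scan's JSON findings:

```typescript
// Distilled sketch of the line-number-insensitive diffing used by slop-diff.
// Helper names are hypothetical; the script itself inlines this logic.

// Strip "line N:" prefixes so shifted code doesn't look like a new finding.
function stripLineNum(evidence: string): string {
  return evidence.replace(/^line \d+: /, '').replace(/ at line \d+ /, ' ');
}

// Count occurrences of each stripped fingerprint.
function countFingerprints(evidence: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const ev of evidence) {
    const key = stripLineNum(ev);
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  return counts;
}

// Net-new = per-fingerprint surplus on HEAD over the base, so a finding that
// merely moved (same text, different line) cancels out, while a duplicate
// fingerprint counts only for its surplus (base 2, HEAD 3 => 1 new).
function netNewCount(baseEvidence: string[], headEvidence: string[]): number {
  const base = countFingerprints(baseEvidence);
  let netNew = 0;
  for (const [key, headCount] of countFingerprints(headEvidence)) {
    netNew += Math.max(0, headCount - (base.get(key) || 0));
  }
  return netNew;
}
```

With base evidence `['line 10: empty catch', 'line 30: empty catch']` and head evidence containing three `empty catch` items plus one `magic number` item, `netNewCount` reports 2: one surplus `empty catch` and one brand-new `magic number`.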