feat: SKILL.md template system, 3-tier testing, DX tools (v0.3.3) (#41)

* refactor: extract command registry to commands.ts, add SNAPSHOT_FLAGS metadata

- NEW: browse/src/commands.ts — command sets + COMMAND_DESCRIPTIONS + load-time validation (zero side effects)
- server.ts imports from commands.ts instead of declaring sets inline
- snapshot.ts: SNAPSHOT_FLAGS array drives parseSnapshotArgs (metadata-driven, no duplication)
- All 186 existing tests pass
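The registry described above can be sketched roughly as follows. The names `COMMAND_DESCRIPTIONS` and `ALL_COMMANDS` and the `description`/`category`/`usage` fields match what the tests in this PR import; the specific entries and the exact validation logic are illustrative assumptions, not the actual commands.ts:

```typescript
// Sketch of the commands.ts shape: per-command metadata plus a derived
// command set, with load-time validation and zero side effects.
type CommandMeta = { description: string; category: string; usage?: string };

const COMMAND_DESCRIPTIONS: Record<string, CommandMeta> = {
  goto: { description: 'Navigate to URL', category: 'Navigation', usage: 'goto <url>' },
  click: { description: 'Click element', category: 'Interaction', usage: 'click <sel>' },
};

const ALL_COMMANDS = new Set(Object.keys(COMMAND_DESCRIPTIONS));

// Load-time validation: every command must carry a non-empty description,
// so a stub entry fails immediately at import time rather than in CI.
for (const [cmd, meta] of Object.entries(COMMAND_DESCRIPTIONS)) {
  if (!meta.description) {
    throw new Error(`Missing description for command: ${cmd}`);
  }
}
```

Because the module only declares data and validates it, importing it from server.ts, snapshot.ts, and the test helpers stays cheap and order-independent.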

* feat: SKILL.md template system with auto-generated command references

- SKILL.md.tmpl + browse/SKILL.md.tmpl with {{COMMAND_REFERENCE}} and {{SNAPSHOT_FLAGS}} placeholders
- scripts/gen-skill-docs.ts generates SKILL.md from templates (supports --dry-run)
- Build pipeline runs gen:skill-docs before binary compilation
- Generated files have AUTO-GENERATED header, committed to git
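A minimal sketch of the substitution step, assuming the generator renders a markdown table from the registry and splices it into the template at the `{{COMMAND_REFERENCE}}` placeholder (the helper names here are hypothetical, not the actual scripts/gen-skill-docs.ts API):

```typescript
// Render a sorted command table and substitute it into a SKILL.md template.
const descriptions: Record<string, { usage?: string; description: string }> = {
  goto: { usage: 'goto <url>', description: 'Navigate to URL' },
  click: { usage: 'click <sel>', description: 'Click element' },
};

function renderCommandReference(): string {
  const rows = Object.entries(descriptions)
    .sort(([a], [b]) => a.localeCompare(b)) // alphabetical, as the tests require
    .map(([cmd, m]) => `| \`${m.usage ?? cmd}\` | ${m.description} |`);
  return ['| Command | Description |', '|---------|-------------|', ...rows].join('\n');
}

function renderSkill(template: string): string {
  const header =
    '<!-- AUTO-GENERATED from SKILL.md.tmpl. Regenerate: bun run gen:skill-docs -->\n';
  return header + template.replace('{{COMMAND_REFERENCE}}', renderCommandReference());
}
```

Committing the generated output (rather than building it on the fly) is what lets the CI freshness check below diff the committed file against a regeneration.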

* test: Tier 1 static validation — 34 tests for SKILL.md command correctness

- test/helpers/skill-parser.ts: extracts $B commands from code blocks, validates against registry
- test/skill-parser.test.ts: 13 parser/validator unit tests
- test/skill-validation.test.ts: 13 tests validating all SKILL.md files + registry consistency
- test/gen-skill-docs.test.ts: 8 generator tests (categories, sorting, freshness)
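The core of the Tier 1 check can be illustrated with a simplified, self-contained version (the real parser in test/helpers/skill-parser.ts also handles quoted args, inline comments, and snapshot flag validation; this sketch only shows the fence tracking and unknown-command detection):

```typescript
// Pull $B invocations out of bash code fences and flag unknown commands.
const KNOWN = new Set(['goto', 'snapshot', 'click', 'text']);

function findUnknownCommands(markdown: string): string[] {
  const unknown: string[] = [];
  let inBash = false;
  for (const line of markdown.split('\n')) {
    if (line.trimStart().startsWith('```')) {
      // Opening a ```bash fence enters bash mode; any other fence
      // (closing, or a non-bash language) leaves/stays out of it.
      inBash = !inBash && line.trimStart().startsWith('```bash');
      continue;
    }
    if (!inBash) continue;
    for (const m of line.matchAll(/\$B\s+(\S+)/g)) {
      if (!KNOWN.has(m[1])) unknown.push(m[1]);
    }
  }
  return unknown;
}
```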

* feat: DX tools (skill:check, dev:skill) + Tier 2 E2E test scaffolding

- scripts/skill-check.ts: health summary for all SKILL.md files (commands, templates, freshness)
- scripts/dev-skill.ts: watch mode for template development
- test/helpers/session-runner.ts: Agent SDK wrapper for E2E skill tests
- test/skill-e2e.test.ts: 2 E2E tests + 3 stubs (auto-skip inside Claude Code sessions)
- E2E tests must run from plain terminal: SKILL_E2E=1 bun test test/skill-e2e.test.ts


* ci: SKILL.md freshness check on push/PR + TODO updates

- .github/workflows/skill-docs.yml: fails if generated SKILL.md files are stale
- TODO.md: add E2E cost tracking and model pinning to future ideas

* fix: restore rich descriptions lost in auto-generation

- Snapshot flags: add back value hints (-d <N>, -s <sel>, -o <path>)
- Snapshot flags: restore parenthetical context (@e refs, @c refs, etc.)
- Commands: is → includes valid states enum
- Commands: console → notes --errors filter behavior
- Commands: press → lists common keys (Enter, Tab, Escape)
- Commands: cookie-import-browser → describes picker UI
- Commands: dialog-accept → specifies alert/confirm/prompt
- Tips: restore → arrow (was downgraded to ->)

* test: quality evals for generated SKILL.md descriptions

Catches the exact regressions we shipped and caught in review:
- Snapshot flags must include value hints (-d <N>, -s <sel>, -o <path>)
- is command must list all valid states (visible/hidden/enabled/...)
- press command must list example keys (Enter, Tab, Escape)
- console command must describe --errors behavior
- Snapshot -i must mention @e refs, -C must mention @c refs
- All descriptions must be >= 8 chars (no empty stubs)
- Tips section must use → not ->

* feat: LLM-as-judge evals for SKILL.md documentation quality

4 eval tests using Anthropic API (claude-haiku, ~$0.01-0.03/run):
- Command reference table: clarity/completeness/actionability >= 4/5
- Snapshot flags section: same thresholds
- browse/SKILL.md overall quality
- Regression: generated version must score >= hand-maintained baseline

Requires ANTHROPIC_API_KEY. Auto-skips without it.
Run: bun run test:eval (or ANTHROPIC_API_KEY=sk-... bun test test/skill-llm-eval.test.ts)

* chore: bump version to 0.3.3, update changelog

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* docs: add ARCHITECTURE.md, update CLAUDE.md and CONTRIBUTING.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: conductor.json lifecycle hooks + .env propagation across worktrees

bin/dev-setup now copies .env from main worktree so API keys carry
over to Conductor workspaces automatically. conductor.json wires up
setup and archive hooks.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* docs: complete CHANGELOG for v0.3.3 (architecture, conductor, .env)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Committed by Garry Tan via GitHub on 2026-03-13 21:08:12 -07:00.
Parent ea0c0dad5e, commit 5205070299. 29 changed files with 2479 additions and 135 deletions.
test/gen-skill-docs.test.ts (new file, 150 lines):
import { describe, test, expect } from 'bun:test';
import { COMMAND_DESCRIPTIONS } from '../browse/src/commands';
import { SNAPSHOT_FLAGS } from '../browse/src/snapshot';
import * as fs from 'fs';
import * as path from 'path';
const ROOT = path.resolve(import.meta.dir, '..');
describe('gen-skill-docs', () => {
test('generated SKILL.md contains all command categories', () => {
const content = fs.readFileSync(path.join(ROOT, 'SKILL.md'), 'utf-8');
const categories = new Set(Object.values(COMMAND_DESCRIPTIONS).map(d => d.category));
for (const cat of categories) {
expect(content).toContain(`### ${cat}`);
}
});
test('generated SKILL.md contains all commands', () => {
const content = fs.readFileSync(path.join(ROOT, 'SKILL.md'), 'utf-8');
for (const [cmd, meta] of Object.entries(COMMAND_DESCRIPTIONS)) {
const display = meta.usage || cmd;
expect(content).toContain(display);
}
});
test('command table is sorted alphabetically within categories', () => {
const content = fs.readFileSync(path.join(ROOT, 'SKILL.md'), 'utf-8');
    // Spot-check ordering using the Navigation section as a representative sample
    const navSection = content.match(/### Navigation\n\|.*\n\|.*\n([\s\S]*?)(?=\n###|\n## )/);
expect(navSection).not.toBeNull();
const rows = navSection![1].trim().split('\n');
const commands = rows.map(r => {
const match = r.match(/\| `(\w+)/);
return match ? match[1] : '';
}).filter(Boolean);
const sorted = [...commands].sort();
expect(commands).toEqual(sorted);
});
test('generated header is present in SKILL.md', () => {
const content = fs.readFileSync(path.join(ROOT, 'SKILL.md'), 'utf-8');
expect(content).toContain('AUTO-GENERATED from SKILL.md.tmpl');
expect(content).toContain('Regenerate: bun run gen:skill-docs');
});
test('generated header is present in browse/SKILL.md', () => {
const content = fs.readFileSync(path.join(ROOT, 'browse', 'SKILL.md'), 'utf-8');
expect(content).toContain('AUTO-GENERATED from SKILL.md.tmpl');
});
test('snapshot flags section contains all flags', () => {
const content = fs.readFileSync(path.join(ROOT, 'SKILL.md'), 'utf-8');
for (const flag of SNAPSHOT_FLAGS) {
expect(content).toContain(flag.short);
expect(content).toContain(flag.description);
}
});
test('template files exist for generated SKILL.md files', () => {
expect(fs.existsSync(path.join(ROOT, 'SKILL.md.tmpl'))).toBe(true);
expect(fs.existsSync(path.join(ROOT, 'browse', 'SKILL.md.tmpl'))).toBe(true);
});
test('templates contain placeholders', () => {
const rootTmpl = fs.readFileSync(path.join(ROOT, 'SKILL.md.tmpl'), 'utf-8');
expect(rootTmpl).toContain('{{COMMAND_REFERENCE}}');
expect(rootTmpl).toContain('{{SNAPSHOT_FLAGS}}');
const browseTmpl = fs.readFileSync(path.join(ROOT, 'browse', 'SKILL.md.tmpl'), 'utf-8');
expect(browseTmpl).toContain('{{COMMAND_REFERENCE}}');
expect(browseTmpl).toContain('{{SNAPSHOT_FLAGS}}');
});
});
/**
* Quality evals — catch description regressions.
*
* These test that generated output is *useful for an AI agent*,
* not just structurally valid. Each test targets a specific
* regression we actually shipped and caught in review.
*/
describe('description quality evals', () => {
// Regression: snapshot flags lost value hints (-d <N>, -s <sel>, -o <path>)
test('snapshot flags with values include value hints in output', () => {
const content = fs.readFileSync(path.join(ROOT, 'SKILL.md'), 'utf-8');
for (const flag of SNAPSHOT_FLAGS) {
if (flag.takesValue) {
expect(flag.valueHint).toBeDefined();
expect(content).toContain(`${flag.short} ${flag.valueHint}`);
}
}
});
// Regression: "is" lost the valid states enum
test('is command lists valid state values', () => {
const desc = COMMAND_DESCRIPTIONS['is'].description;
for (const state of ['visible', 'hidden', 'enabled', 'disabled', 'checked', 'editable', 'focused']) {
expect(desc).toContain(state);
}
});
// Regression: "press" lost common key examples
test('press command lists example keys', () => {
const desc = COMMAND_DESCRIPTIONS['press'].description;
expect(desc).toContain('Enter');
expect(desc).toContain('Tab');
expect(desc).toContain('Escape');
});
// Regression: "console" lost --errors filter note
test('console command describes --errors behavior', () => {
const desc = COMMAND_DESCRIPTIONS['console'].description;
expect(desc).toContain('--errors');
});
// Regression: snapshot -i lost "@e refs" context
test('snapshot -i mentions @e refs', () => {
const flag = SNAPSHOT_FLAGS.find(f => f.short === '-i')!;
expect(flag.description).toContain('@e');
});
// Regression: snapshot -C lost "@c refs" context
test('snapshot -C mentions @c refs', () => {
const flag = SNAPSHOT_FLAGS.find(f => f.short === '-C')!;
expect(flag.description).toContain('@c');
});
// Guard: every description must be at least 8 chars (catches empty or stub descriptions)
test('all command descriptions have meaningful length', () => {
for (const [cmd, meta] of Object.entries(COMMAND_DESCRIPTIONS)) {
expect(meta.description.length).toBeGreaterThanOrEqual(8);
}
});
// Guard: snapshot flag descriptions must be at least 10 chars
test('all snapshot flag descriptions have meaningful length', () => {
for (const flag of SNAPSHOT_FLAGS) {
expect(flag.description.length).toBeGreaterThanOrEqual(10);
}
});
// Guard: generated output uses → not ->
test('generated SKILL.md uses unicode arrows', () => {
const content = fs.readFileSync(path.join(ROOT, 'SKILL.md'), 'utf-8');
// Check the Tips section specifically (where we regressed -> from →)
const tipsSection = content.slice(content.indexOf('## Tips'));
expect(tipsSection).toContain('→');
expect(tipsSection).not.toContain('->');
});
});
test/helpers/session-runner.ts (new file, 160 lines):
/**
* Agent SDK wrapper for skill E2E testing.
*
* Spawns a Claude Code session, runs a prompt, collects messages,
* scans tool_result messages for browse errors.
*/
import { query } from '@anthropic-ai/claude-agent-sdk';
import * as fs from 'fs';
import * as path from 'path';
export interface SkillTestResult {
messages: any[];
toolCalls: Array<{ tool: string; input: any; output: string }>;
browseErrors: string[];
exitReason: string;
duration: number;
}
const BROWSE_ERROR_PATTERNS = [
/Unknown command: \w+/,
/Unknown snapshot flag: .+/,
/Exit code 1/,
/ERROR: browse binary not found/,
/Server failed to start/,
];
export async function runSkillTest(options: {
prompt: string;
workingDirectory: string;
maxTurns?: number;
allowedTools?: string[];
timeout?: number;
}): Promise<SkillTestResult> {
// Fail fast if running inside an Agent SDK session — nested sessions hang
if (process.env.CLAUDECODE || process.env.CLAUDE_CODE_ENTRYPOINT) {
throw new Error(
'Cannot run E2E skill tests inside a Claude Code session. ' +
'Run from a plain terminal: SKILL_E2E=1 bun test test/skill-e2e.test.ts'
);
}
const {
prompt,
workingDirectory,
maxTurns = 15,
allowedTools = ['Bash', 'Read', 'Write'],
timeout = 120_000,
} = options;
const messages: any[] = [];
const toolCalls: SkillTestResult['toolCalls'] = [];
const browseErrors: string[] = [];
let exitReason = 'unknown';
const startTime = Date.now();
  // Start from the current environment and blank out Claude-related vars.
  // Without this, the child claude process thinks it's an SDK child
  // and hangs waiting for parent IPC instead of running independently.
  const env: Record<string, string | undefined> = { ...process.env };
  for (const key of Object.keys(env)) {
    if (key.startsWith('CLAUDE')) {
      env[key] = undefined;
    }
  }
const q = query({
prompt,
options: {
cwd: workingDirectory,
allowedTools,
permissionMode: 'bypassPermissions',
allowDangerouslySkipPermissions: true,
maxTurns,
env,
},
});
const timeoutPromise = new Promise<never>((_, reject) => {
setTimeout(() => reject(new Error(`Skill test timed out after ${timeout}ms`)), timeout);
});
try {
const runner = (async () => {
for await (const msg of q) {
messages.push(msg);
// Extract tool calls from assistant messages
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'tool_use') {
toolCalls.push({
tool: block.name,
input: block.input,
              output: '', // note: not currently back-filled from tool_result blocks
});
}
// Scan tool_result blocks for browse errors
if (block.type === 'tool_result' || (typeof block === 'object' && 'text' in block)) {
const text = typeof block === 'string' ? block : (block as any).text || '';
for (const pattern of BROWSE_ERROR_PATTERNS) {
if (pattern.test(text)) {
browseErrors.push(text.slice(0, 200));
}
}
}
}
}
// Also scan user messages (which contain tool results)
if (msg.type === 'user' && msg.message?.content) {
const content = Array.isArray(msg.message.content) ? msg.message.content : [msg.message.content];
for (const block of content) {
const text = typeof block === 'string' ? block : (block as any)?.text || (block as any)?.content || '';
if (typeof text === 'string') {
for (const pattern of BROWSE_ERROR_PATTERNS) {
if (pattern.test(text)) {
browseErrors.push(text.slice(0, 200));
}
}
}
}
}
// Capture result
if (msg.type === 'result') {
exitReason = msg.subtype || 'success';
}
}
})();
await Promise.race([runner, timeoutPromise]);
} catch (err: any) {
exitReason = err.message?.includes('timed out') ? 'timeout' : `error: ${err.message}`;
}
const duration = Date.now() - startTime;
// Save transcript on failure
if (browseErrors.length > 0 || exitReason !== 'success') {
try {
const transcriptDir = path.join(workingDirectory, '.gstack', 'test-transcripts');
fs.mkdirSync(transcriptDir, { recursive: true });
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
const transcriptPath = path.join(transcriptDir, `e2e-${timestamp}.json`);
fs.writeFileSync(transcriptPath, JSON.stringify({
prompt,
exitReason,
browseErrors,
duration,
messages: messages.map(m => ({ type: m.type, subtype: m.subtype })),
}, null, 2));
} catch {
// Transcript save failures are non-fatal
}
}
return { messages, toolCalls, browseErrors, exitReason, duration };
}
test/helpers/skill-parser.ts (new file, 133 lines):
/**
* SKILL.md parser and validator.
*
* Extracts $B commands from code blocks, validates them against
* the command registry and snapshot flags.
*
* Used by:
* - test/skill-validation.test.ts (Tier 1 static tests)
* - scripts/skill-check.ts (health summary)
* - scripts/dev-skill.ts (watch mode)
*/
import { ALL_COMMANDS } from '../../browse/src/commands';
import { parseSnapshotArgs } from '../../browse/src/snapshot';
import * as fs from 'fs';
export interface BrowseCommand {
command: string;
args: string[];
line: number;
raw: string;
}
export interface ValidationResult {
valid: BrowseCommand[];
invalid: BrowseCommand[];
snapshotFlagErrors: Array<{ command: BrowseCommand; error: string }>;
warnings: string[];
}
/**
* Extract all $B invocations from bash code blocks in a SKILL.md file.
*/
export function extractBrowseCommands(skillPath: string): BrowseCommand[] {
const content = fs.readFileSync(skillPath, 'utf-8');
const lines = content.split('\n');
const commands: BrowseCommand[] = [];
let inBashBlock = false;
for (let i = 0; i < lines.length; i++) {
const line = lines[i];
// Detect code block boundaries
if (line.trimStart().startsWith('```')) {
if (inBashBlock) {
inBashBlock = false;
} else if (line.trimStart().startsWith('```bash')) {
inBashBlock = true;
}
// Non-bash code blocks (```json, ```, ```js, etc.) are skipped
continue;
}
if (!inBashBlock) continue;
// Match lines with $B command invocations
// Handle multiple $B commands on one line (e.g., "$B click @e3 $B fill @e4 "value"")
const matches = line.matchAll(/\$B\s+(\S+)(?:\s+([^\$]*))?/g);
for (const match of matches) {
const command = match[1];
let argsStr = (match[2] || '').trim();
// Strip inline comments (# ...) — but not inside quotes
// Simple approach: remove everything from first unquoted # onward
let inQuote = false;
for (let j = 0; j < argsStr.length; j++) {
if (argsStr[j] === '"') inQuote = !inQuote;
if (argsStr[j] === '#' && !inQuote) {
argsStr = argsStr.slice(0, j).trim();
break;
}
}
// Parse args — handle quoted strings
const args: string[] = [];
if (argsStr) {
const argMatches = argsStr.matchAll(/"([^"]*)"|(\S+)/g);
for (const am of argMatches) {
args.push(am[1] ?? am[2]);
}
}
commands.push({
command,
args,
line: i + 1, // 1-based
raw: match[0].trim(),
});
}
}
return commands;
}
/**
* Extract and validate all $B commands in a SKILL.md file.
*/
export function validateSkill(skillPath: string): ValidationResult {
const commands = extractBrowseCommands(skillPath);
const result: ValidationResult = {
valid: [],
invalid: [],
snapshotFlagErrors: [],
warnings: [],
};
if (commands.length === 0) {
result.warnings.push('no $B commands found');
return result;
}
for (const cmd of commands) {
if (!ALL_COMMANDS.has(cmd.command)) {
result.invalid.push(cmd);
continue;
}
// Validate snapshot flags
if (cmd.command === 'snapshot' && cmd.args.length > 0) {
try {
parseSnapshotArgs(cmd.args);
} catch (err: any) {
result.snapshotFlagErrors.push({ command: cmd, error: err.message });
continue;
}
}
result.valid.push(cmd);
}
return result;
}
test/skill-e2e.test.ts (new file, 79 lines):
import { describe, test, expect, beforeAll, afterAll } from 'bun:test';
import { runSkillTest } from './helpers/session-runner';
import { startTestServer } from '../browse/test/test-server';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';
// Skip if SKILL_E2E not set, or if running inside a Claude Code / Agent SDK session
// (nested Agent SDK sessions hang because the parent intercepts child claude subprocesses)
const isInsideAgentSDK = !!process.env.CLAUDECODE || !!process.env.CLAUDE_CODE_ENTRYPOINT;
const describeE2E = (process.env.SKILL_E2E && !isInsideAgentSDK) ? describe : describe.skip;
let testServer: ReturnType<typeof startTestServer>;
let tmpDir: string;
describeE2E('Skill E2E tests', () => {
beforeAll(() => {
testServer = startTestServer();
tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'skill-e2e-'));
// Symlink browse binary into tmpdir for the skill to find
const browseBin = path.resolve(import.meta.dir, '..', 'browse', 'dist', 'browse');
const binDir = path.join(tmpDir, 'browse', 'dist');
fs.mkdirSync(binDir, { recursive: true });
if (fs.existsSync(browseBin)) {
fs.symlinkSync(browseBin, path.join(binDir, 'browse'));
}
// Also create browse/bin/find-browse so the SKILL.md setup works
const findBrowseDir = path.join(tmpDir, 'browse', 'bin');
fs.mkdirSync(findBrowseDir, { recursive: true });
fs.writeFileSync(path.join(findBrowseDir, 'find-browse'), `#!/bin/bash\necho "${browseBin}"\n`, { mode: 0o755 });
});
afterAll(() => {
testServer?.server?.stop();
// Clean up tmpdir
try { fs.rmSync(tmpDir, { recursive: true, force: true }); } catch {}
});
test('browse basic commands work without errors', async () => {
const result = await runSkillTest({
prompt: `You have a browse binary at ${path.resolve(import.meta.dir, '..', 'browse', 'dist', 'browse')}. Assign it to B variable and run these commands in sequence:
1. $B goto ${testServer.url}
2. $B snapshot -i
3. $B text
4. $B screenshot /tmp/skill-e2e-test.png
Report the results of each command.`,
workingDirectory: tmpDir,
maxTurns: 10,
timeout: 60_000,
});
expect(result.browseErrors).toHaveLength(0);
expect(result.exitReason).toBe('success');
}, 90_000);
test('browse snapshot flags all work', async () => {
const result = await runSkillTest({
prompt: `You have a browse binary at ${path.resolve(import.meta.dir, '..', 'browse', 'dist', 'browse')}. Assign it to B variable and run:
1. $B goto ${testServer.url}
2. $B snapshot -i
3. $B snapshot -c
4. $B snapshot -D
5. $B snapshot -i -a -o /tmp/skill-e2e-annotated.png
Report what each command returned.`,
workingDirectory: tmpDir,
maxTurns: 10,
timeout: 60_000,
});
expect(result.browseErrors).toHaveLength(0);
expect(result.exitReason).toBe('success');
}, 90_000);
test.todo('/qa quick completes without browse errors');
test.todo('/ship completes without browse errors');
test.todo('/review completes without browse errors');
});
test/skill-llm-eval.test.ts (new file, 194 lines):
/**
* LLM-as-a-Judge evals for generated SKILL.md quality.
*
* Uses the Anthropic API directly (not Agent SDK) to evaluate whether
* generated command docs are clear, complete, and actionable for an AI agent.
*
* Requires: ANTHROPIC_API_KEY env var
* Run: ANTHROPIC_API_KEY=sk-... bun test test/skill-llm-eval.test.ts
*
* Cost: ~$0.01-0.03 per run (haiku)
*/
import { describe, test, expect } from 'bun:test';
import Anthropic from '@anthropic-ai/sdk';
import * as fs from 'fs';
import * as path from 'path';
const ROOT = path.resolve(import.meta.dir, '..');
const hasApiKey = !!process.env.ANTHROPIC_API_KEY;
const describeEval = hasApiKey ? describe : describe.skip;
interface JudgeScore {
clarity: number; // 1-5: can an agent understand what each command does?
completeness: number; // 1-5: are all args, flags, valid values documented?
actionability: number; // 1-5: can an agent use this to construct correct commands?
reasoning: string; // why the scores were given
}
async function judge(section: string, prompt: string): Promise<JudgeScore> {
const client = new Anthropic();
const response = await client.messages.create({
model: 'claude-haiku-4-5-20251001',
max_tokens: 1024,
messages: [{
role: 'user',
content: `You are evaluating documentation quality for an AI coding agent's CLI tool reference.
The agent reads this documentation to learn how to use a headless browser CLI. It needs to:
1. Understand what each command does
2. Know what arguments to pass
3. Know valid values for enum-like parameters
4. Construct correct command invocations without guessing
Rate the following ${section} on three dimensions (1-5 scale):
- **clarity** (1-5): Can an agent understand what each command/flag does from the description alone?
- **completeness** (1-5): Are arguments, valid values, and important behaviors documented? Would an agent need to guess anything?
- **actionability** (1-5): Can an agent construct correct command invocations from this reference alone?
Scoring guide:
- 5: Excellent — no ambiguity, all info present
- 4: Good — minor gaps an experienced agent could infer
- 3: Adequate — some guessing required
- 2: Poor — significant info missing
- 1: Unusable — agent would fail without external help
Respond with ONLY valid JSON in this exact format:
{"clarity": N, "completeness": N, "actionability": N, "reasoning": "brief explanation"}
Here is the ${section} to evaluate:
${prompt}`,
}],
});
const text = response.content[0].type === 'text' ? response.content[0].text : '';
// Extract JSON from response (handle markdown code blocks)
const jsonMatch = text.match(/\{[\s\S]*\}/);
if (!jsonMatch) throw new Error(`Judge returned non-JSON: ${text.slice(0, 200)}`);
return JSON.parse(jsonMatch[0]) as JudgeScore;
}
describeEval('LLM-as-judge quality evals', () => {
test('command reference table scores >= 4 on all dimensions', async () => {
const content = fs.readFileSync(path.join(ROOT, 'SKILL.md'), 'utf-8');
// Extract just the command reference section
const start = content.indexOf('## Command Reference');
const end = content.indexOf('## Tips');
const section = content.slice(start, end);
const scores = await judge('command reference table', section);
console.log('Command reference scores:', JSON.stringify(scores, null, 2));
expect(scores.clarity).toBeGreaterThanOrEqual(4);
expect(scores.completeness).toBeGreaterThanOrEqual(4);
expect(scores.actionability).toBeGreaterThanOrEqual(4);
}, 30_000);
test('snapshot flags section scores >= 4 on all dimensions', async () => {
const content = fs.readFileSync(path.join(ROOT, 'SKILL.md'), 'utf-8');
const start = content.indexOf('## Snapshot System');
const end = content.indexOf('## Command Reference');
const section = content.slice(start, end);
const scores = await judge('snapshot flags reference', section);
console.log('Snapshot flags scores:', JSON.stringify(scores, null, 2));
expect(scores.clarity).toBeGreaterThanOrEqual(4);
expect(scores.completeness).toBeGreaterThanOrEqual(4);
expect(scores.actionability).toBeGreaterThanOrEqual(4);
}, 30_000);
test('browse/SKILL.md overall scores >= 4', async () => {
const content = fs.readFileSync(path.join(ROOT, 'browse', 'SKILL.md'), 'utf-8');
// Just the reference sections (skip examples/patterns)
const start = content.indexOf('## Snapshot Flags');
const section = content.slice(start);
const scores = await judge('browse skill reference (flags + commands)', section);
console.log('Browse SKILL.md scores:', JSON.stringify(scores, null, 2));
expect(scores.clarity).toBeGreaterThanOrEqual(4);
expect(scores.completeness).toBeGreaterThanOrEqual(4);
expect(scores.actionability).toBeGreaterThanOrEqual(4);
}, 30_000);
test('regression check: compare branch vs baseline quality', async () => {
// This test compares the generated output against the hand-maintained
// baseline from main. The generated version should score equal or higher.
const generated = fs.readFileSync(path.join(ROOT, 'SKILL.md'), 'utf-8');
const genStart = generated.indexOf('## Command Reference');
const genEnd = generated.indexOf('## Tips');
const genSection = generated.slice(genStart, genEnd);
const baseline = `## Command Reference
### Navigation
| Command | Description |
|---------|-------------|
| \`goto <url>\` | Navigate to URL |
| \`back\` / \`forward\` | History navigation |
| \`reload\` | Reload page |
| \`url\` | Print current URL |
### Interaction
| Command | Description |
|---------|-------------|
| \`click <sel>\` | Click element |
| \`fill <sel> <val>\` | Fill input |
| \`select <sel> <val>\` | Select dropdown |
| \`hover <sel>\` | Hover element |
| \`type <text>\` | Type into focused element |
| \`press <key>\` | Press key (Enter, Tab, Escape) |
| \`scroll [sel]\` | Scroll element into view |
| \`wait <sel>\` | Wait for element (max 10s) |
| \`wait --networkidle\` | Wait for network to be idle |
| \`wait --load\` | Wait for page load event |
### Inspection
| Command | Description |
|---------|-------------|
| \`js <expr>\` | Run JavaScript |
| \`css <sel> <prop>\` | Computed CSS |
| \`attrs <sel>\` | Element attributes |
| \`is <prop> <sel>\` | State check (visible/hidden/enabled/disabled/checked/editable/focused) |
| \`console [--clear\\|--errors]\` | Console messages (--errors filters to error/warning) |`;
const client = new Anthropic();
const response = await client.messages.create({
model: 'claude-haiku-4-5-20251001',
max_tokens: 1024,
messages: [{
role: 'user',
content: `You are comparing two versions of CLI documentation for an AI coding agent.
VERSION A (baseline — hand-maintained):
${baseline}
VERSION B (auto-generated from source):
${genSection}
Which version is better for an AI agent trying to use these commands? Consider:
- Completeness (more commands documented? all args shown?)
- Clarity (descriptions helpful?)
- Coverage (missing commands in either version?)
Respond with ONLY valid JSON:
{"winner": "A" or "B" or "tie", "reasoning": "brief explanation", "a_score": N, "b_score": N}
Scores are 1-5 overall quality.`,
}],
});
const text = response.content[0].type === 'text' ? response.content[0].text : '';
const jsonMatch = text.match(/\{[\s\S]*\}/);
if (!jsonMatch) throw new Error(`Judge returned non-JSON: ${text.slice(0, 200)}`);
const result = JSON.parse(jsonMatch[0]);
console.log('Regression comparison:', JSON.stringify(result, null, 2));
// Generated version should be at least as good as hand-maintained
expect(result.b_score).toBeGreaterThanOrEqual(result.a_score);
}, 30_000);
});
test/skill-parser.test.ts (new file, 179 lines):
import { describe, test, expect } from 'bun:test';
import { extractBrowseCommands, validateSkill } from './helpers/skill-parser';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';
const FIXTURES_DIR = path.join(os.tmpdir(), 'skill-parser-test');
function writeFixture(name: string, content: string): string {
fs.mkdirSync(FIXTURES_DIR, { recursive: true });
const p = path.join(FIXTURES_DIR, name);
fs.writeFileSync(p, content);
return p;
}
describe('extractBrowseCommands', () => {
test('extracts $B commands from bash code blocks', () => {
const p = writeFixture('basic.md', [
'# Test',
'```bash',
'$B goto https://example.com',
'$B snapshot -i',
'```',
].join('\n'));
const cmds = extractBrowseCommands(p);
expect(cmds).toHaveLength(2);
expect(cmds[0].command).toBe('goto');
expect(cmds[0].args).toEqual(['https://example.com']);
expect(cmds[1].command).toBe('snapshot');
expect(cmds[1].args).toEqual(['-i']);
});
test('skips non-bash code blocks', () => {
const p = writeFixture('skip.md', [
'```json',
'{"key": "$B goto bad"}',
'```',
'```bash',
'$B text',
'```',
].join('\n'));
const cmds = extractBrowseCommands(p);
expect(cmds).toHaveLength(1);
expect(cmds[0].command).toBe('text');
});
test('returns empty array for file with no code blocks', () => {
const p = writeFixture('no-blocks.md', '# Just text\nSome content\n');
const cmds = extractBrowseCommands(p);
expect(cmds).toHaveLength(0);
});
test('returns empty array for code blocks with no $B invocations', () => {
const p = writeFixture('no-b.md', [
'```bash',
'echo "hello"',
'ls -la',
'```',
].join('\n'));
const cmds = extractBrowseCommands(p);
expect(cmds).toHaveLength(0);
});
test('handles multiple $B commands on one line', () => {
const p = writeFixture('multi.md', [
'```bash',
'$B click @e3 $B fill @e4 "value" $B hover @e1',
'```',
].join('\n'));
const cmds = extractBrowseCommands(p);
expect(cmds).toHaveLength(3);
expect(cmds[0].command).toBe('click');
expect(cmds[1].command).toBe('fill');
expect(cmds[1].args).toEqual(['@e4', 'value']);
expect(cmds[2].command).toBe('hover');
});
test('handles quoted arguments correctly', () => {
const p = writeFixture('quoted.md', [
'```bash',
'$B fill @e3 "test@example.com"',
'$B js "document.title"',
'```',
].join('\n'));
const cmds = extractBrowseCommands(p);
expect(cmds[0].args).toEqual(['@e3', 'test@example.com']);
expect(cmds[1].args).toEqual(['document.title']);
});
test('tracks correct line numbers', () => {
const p = writeFixture('lines.md', [
'# Header', // line 1
'', // line 2
'```bash', // line 3
'$B goto x', // line 4
'```', // line 5
'', // line 6
'```bash', // line 7
'$B text', // line 8
'```', // line 9
].join('\n'));
const cmds = extractBrowseCommands(p);
expect(cmds[0].line).toBe(4);
expect(cmds[1].line).toBe(8);
});
test('skips unlabeled code blocks', () => {
const p = writeFixture('unlabeled.md', [
'```',
'$B snapshot -i',
'```',
].join('\n'));
const cmds = extractBrowseCommands(p);
expect(cmds).toHaveLength(0);
});
});
describe('validateSkill', () => {
  test('valid commands pass validation', () => {
    const p = writeFixture('valid.md', [
      '```bash',
      '$B goto https://example.com',
      '$B text',
      '$B click @e3',
      '$B snapshot -i -a',
      '```',
    ].join('\n'));
    const result = validateSkill(p);
    expect(result.valid).toHaveLength(4);
    expect(result.invalid).toHaveLength(0);
    expect(result.snapshotFlagErrors).toHaveLength(0);
  });

  test('invalid commands flagged in result', () => {
    const p = writeFixture('invalid.md', [
      '```bash',
      '$B goto https://example.com',
      '$B explode',
      '$B halp',
      '```',
    ].join('\n'));
    const result = validateSkill(p);
    expect(result.valid).toHaveLength(1);
    expect(result.invalid).toHaveLength(2);
    expect(result.invalid[0].command).toBe('explode');
    expect(result.invalid[1].command).toBe('halp');
  });

  test('snapshot flags validated via parseSnapshotArgs', () => {
    const p = writeFixture('bad-snapshot.md', [
      '```bash',
      '$B snapshot --bogus',
      '```',
    ].join('\n'));
    const result = validateSkill(p);
    expect(result.snapshotFlagErrors).toHaveLength(1);
    expect(result.snapshotFlagErrors[0].error).toContain('Unknown snapshot flag');
  });

  test('returns warning when no $B commands found', () => {
    const p = writeFixture('empty.md', '# Nothing here\n');
    const result = validateSkill(p);
    expect(result.warnings).toContain('no $B commands found');
  });

  test('valid snapshot flags pass', () => {
    const p = writeFixture('snap-valid.md', [
      '```bash',
      '$B snapshot -i -a -C -o /tmp/out.png',
      '$B snapshot -D',
      '$B snapshot -d 3',
      '$B snapshot -s "main"',
      '```',
    ].join('\n'));
    const result = validateSkill(p);
    expect(result.valid).toHaveLength(4);
    expect(result.snapshotFlagErrors).toHaveLength(0);
  });
});
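A minimal sketch of the partition logic these assertions imply: known commands land in `valid`, unknown ones in `invalid`, `snapshot` commands get an extra flag check, and an empty input produces a warning. The `KNOWN` set and flag allowlist below are stand-ins for the real command registry and `parseSnapshotArgs`:

```typescript
// Hypothetical subset of the registry, for illustration only.
const KNOWN = new Set(['goto', 'text', 'click', 'snapshot', 'js', 'fill']);
const KNOWN_SNAPSHOT_FLAGS = new Set(['-i', '-a', '-C', '-o', '-D', '-d', '-s']);

interface Cmd { command: string; args: string[]; line: number }
interface ValidationResult {
  valid: Cmd[];
  invalid: Cmd[];
  snapshotFlagErrors: { line: number; error: string }[];
  warnings: string[];
}

function partition(cmds: Cmd[]): ValidationResult {
  const result: ValidationResult = { valid: [], invalid: [], snapshotFlagErrors: [], warnings: [] };
  if (cmds.length === 0) result.warnings.push('no $B commands found');
  for (const c of cmds) {
    (KNOWN.has(c.command) ? result.valid : result.invalid).push(c);
    if (c.command === 'snapshot') {
      // The real helper delegates to parseSnapshotArgs; here we just
      // reject flag-shaped args outside the allowlist.
      for (const a of c.args.filter((a) => a.startsWith('-'))) {
        if (!KNOWN_SNAPSHOT_FLAGS.has(a)) {
          result.snapshotFlagErrors.push({ line: c.line, error: `Unknown snapshot flag: ${a}` });
        }
      }
    }
  }
  return result;
}
```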
@@ -0,0 +1,100 @@
import { describe, test, expect } from 'bun:test';
import { validateSkill } from './helpers/skill-parser';
import { ALL_COMMANDS, COMMAND_DESCRIPTIONS, READ_COMMANDS, WRITE_COMMANDS, META_COMMANDS } from '../browse/src/commands';
import { SNAPSHOT_FLAGS } from '../browse/src/snapshot';
import * as fs from 'fs';
import * as path from 'path';

const ROOT = path.resolve(import.meta.dir, '..');

describe('SKILL.md command validation', () => {
  test('all $B commands in SKILL.md are valid browse commands', () => {
    const result = validateSkill(path.join(ROOT, 'SKILL.md'));
    expect(result.invalid).toHaveLength(0);
    expect(result.valid.length).toBeGreaterThan(0);
  });

  test('all snapshot flags in SKILL.md are valid', () => {
    const result = validateSkill(path.join(ROOT, 'SKILL.md'));
    expect(result.snapshotFlagErrors).toHaveLength(0);
  });

  test('all $B commands in browse/SKILL.md are valid browse commands', () => {
    const result = validateSkill(path.join(ROOT, 'browse', 'SKILL.md'));
    expect(result.invalid).toHaveLength(0);
    expect(result.valid.length).toBeGreaterThan(0);
  });

  test('all snapshot flags in browse/SKILL.md are valid', () => {
    const result = validateSkill(path.join(ROOT, 'browse', 'SKILL.md'));
    expect(result.snapshotFlagErrors).toHaveLength(0);
  });

  test('all $B commands in qa/SKILL.md are valid browse commands', () => {
    const qaSkill = path.join(ROOT, 'qa', 'SKILL.md');
    if (!fs.existsSync(qaSkill)) return; // skip if missing
    const result = validateSkill(qaSkill);
    expect(result.invalid).toHaveLength(0);
  });

  test('all snapshot flags in qa/SKILL.md are valid', () => {
    const qaSkill = path.join(ROOT, 'qa', 'SKILL.md');
    if (!fs.existsSync(qaSkill)) return;
    const result = validateSkill(qaSkill);
    expect(result.snapshotFlagErrors).toHaveLength(0);
  });
});

describe('Command registry consistency', () => {
  test('COMMAND_DESCRIPTIONS covers all commands in sets', () => {
    const allCmds = new Set([...READ_COMMANDS, ...WRITE_COMMANDS, ...META_COMMANDS]);
    const descKeys = new Set(Object.keys(COMMAND_DESCRIPTIONS));
    for (const cmd of allCmds) {
      expect(descKeys.has(cmd)).toBe(true);
    }
  });

  test('COMMAND_DESCRIPTIONS has no extra commands not in sets', () => {
    const allCmds = new Set([...READ_COMMANDS, ...WRITE_COMMANDS, ...META_COMMANDS]);
    for (const key of Object.keys(COMMAND_DESCRIPTIONS)) {
      expect(allCmds.has(key)).toBe(true);
    }
  });

  test('ALL_COMMANDS matches union of all sets', () => {
    const union = new Set([...READ_COMMANDS, ...WRITE_COMMANDS, ...META_COMMANDS]);
    expect(ALL_COMMANDS.size).toBe(union.size);
    for (const cmd of union) {
      expect(ALL_COMMANDS.has(cmd)).toBe(true);
    }
  });

  test('SNAPSHOT_FLAGS option keys are valid SnapshotOptions fields', () => {
    const validKeys = new Set([
      'interactive', 'compact', 'depth', 'selector',
      'diff', 'annotate', 'outputPath', 'cursorInteractive',
    ]);
    for (const flag of SNAPSHOT_FLAGS) {
      expect(validKeys.has(flag.optionKey)).toBe(true);
    }
  });
});
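The `optionKey` assertion above implies each `SNAPSHOT_FLAGS` entry maps a CLI flag to a `SnapshotOptions` field. A hypothetical entry shape, inferred from the tests rather than copied from `browse/src/snapshot.ts`:

```typescript
// Hypothetical shape of a SNAPSHOT_FLAGS entry; field names are assumptions.
interface SnapshotFlagSpec {
  flag: string;        // CLI form, e.g. '-i'
  optionKey: string;   // corresponding SnapshotOptions field, e.g. 'interactive'
  takesValue: boolean; // whether the flag consumes the next argument, e.g. '-d 3'
}

// Example entries mirroring flags exercised elsewhere in the test suite.
const EXAMPLE_FLAGS: SnapshotFlagSpec[] = [
  { flag: '-i', optionKey: 'interactive', takesValue: false },
  { flag: '-d', optionKey: 'depth', takesValue: true },
  { flag: '-o', optionKey: 'outputPath', takesValue: true },
];
```

With this shape, a metadata-driven `parseSnapshotArgs` can loop over the array instead of hard-coding each flag, which is what keeps the docs generator and the parser from drifting apart.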
describe('Generated SKILL.md freshness', () => {
  test('no unresolved {{placeholders}} in generated SKILL.md', () => {
    const content = fs.readFileSync(path.join(ROOT, 'SKILL.md'), 'utf-8');
    const unresolved = content.match(/\{\{\w+\}\}/g);
    expect(unresolved).toBeNull();
  });

  test('no unresolved {{placeholders}} in generated browse/SKILL.md', () => {
    const content = fs.readFileSync(path.join(ROOT, 'browse', 'SKILL.md'), 'utf-8');
    const unresolved = content.match(/\{\{\w+\}\}/g);
    expect(unresolved).toBeNull();
  });

  test('generated SKILL.md has AUTO-GENERATED header', () => {
    const content = fs.readFileSync(path.join(ROOT, 'SKILL.md'), 'utf-8');
    expect(content).toContain('AUTO-GENERATED');
  });
});
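The freshness tests above assume the generator either resolves every `{{placeholder}}` or fails loudly before writing output. A minimal sketch of that substitution step, as a hypothetical stand-in for what `scripts/gen-skill-docs.ts` does with `{{COMMAND_REFERENCE}}` and `{{SNAPSHOT_FLAGS}}`:

```typescript
// Render a SKILL.md template, refusing to emit unresolved {{placeholders}}.
function renderTemplate(tmpl: string, values: Record<string, string>): string {
  const out = tmpl.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in values ? values[key] : match);
  // Anything still matching the placeholder pattern would trip the
  // freshness tests, so fail at generation time instead.
  const unresolved = out.match(/\{\{\w+\}\}/g);
  if (unresolved) {
    throw new Error(`Unresolved placeholders: ${unresolved.join(', ')}`);
  }
  return out;
}
```

Failing at generation time means a stale or misspelled placeholder is caught by the developer running `gen:skill-docs`, not later by CI.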