Mirror of https://github.com/garrytan/gstack.git
Synced 2026-05-02 03:35:09 +02:00
Commit f4bbfaa5bd
* feat: enable within-file E2E test concurrency for 3x faster runs
  Switch all E2E tests from serial test() to testConcurrentIfSelected() so tests within each file run in parallel (a sketch of this helper follows the log). Wall clock drops from ~18min to ~6min, limited by the longest single test rather than the sequential sum. The concurrent helper was already built in e2e-helpers.ts but never wired up. Each test runs in its own describe block with its own beforeAll/tmpdir — no shared state conflicts.

* feat: add CI eval workflow on Ubicloud runners
  Single-job GitHub Actions workflow that runs E2E evals on every PR using Ubicloud runners ($0.006/run — 10x cheaper than GitHub standard). Uses EVALS_CONCURRENCY=40 with the new within-file concurrency for ~6min wall clock. Downloads the previous eval artifact from main for comparison, uploads results, and posts a PR comment with pass/fail + cost. Ubicloud setup required: connect the GitHub repo via the ubicloud.com dashboard and add ANTHROPIC_API_KEY, OPENAI_API_KEY, GEMINI_API_KEY as repo secrets.

* chore: bump version and changelog (v0.11.6.0)

* chore: optimize CI eval PR comment — aggregate all suites, update-not-duplicate

* feat: parallelize CI evals — 12 runners (1 per suite) for ~3min wall clock
  Matrix strategy spins up 12 ubicloud-standard-2 runners simultaneously, one per test file. A separate report job aggregates all artifacts into a single PR comment. Bun dependency cache cuts install from ~30s to ~3s. Runner cost: ~$0.048 (from $0.024) — negligible vs the $3-4 API costs. Wall clock: ~3-4min (from ~8min).

* feat: add Docker CI image with pre-baked toolchain + deps
  Dockerfile.ci pre-installs bun, node, the claude CLI, the gh CLI, and node_modules so eval runners skip all setup. The image rebuilds weekly and on lockfile/Dockerfile changes via ci-image.yml.

* feat: parallelize CI evals — 12 runners (1 per suite) for ~3min wall clock
  Switch the eval workflow to the Docker container image with the pre-baked toolchain. Each of the 12 matrix runners pulls the image, hardlinks cached node_modules, builds browse, and runs one test suite. Setup drops from ~70s to ~19s per runner. Wall clock is dominated by the slowest individual test, not the sequential sum.

* chore: self-bootstrapping CI — build Docker image inline, cache by content hash
  Move the Docker image build into the evals workflow as a dependency job. The image tag is keyed on a hash of Dockerfile + lockfile + package.json — it only rebuilds when those change (sketched below). Eliminates the chicken-and-egg problem where the image must exist before the first PR run.

* fix: bun.lockb → bun.lock + auth before manifest check
  This project uses bun.lock (text format), not bun.lockb (binary). Also move Docker login before the manifest inspect so GHCR auth works.

* fix: bun.lock is gitignored — use package.json only for Docker cache
  bun.lock is in .gitignore, so it doesn't exist after checkout. The Dockerfile and workflows now use package.json only for deps caching.

* fix: symlink node_modules instead of hardlink (cross-device)
  Docker image layers and the workspace are on different filesystems, so cp -al (hardlink) fails. Use ln -s (symlink) instead — zero copy overhead.

* debug: add claude CLI smoke test step to diagnose exit_code_1

* ci: retrigger eval workflow

* ci: add workflow_dispatch trigger for manual runs

* debug: more verbose claude CLI diagnostics

* fix: run eval container as non-root — claude CLI rejects --dangerously-skip-permissions as root
  The Claude Code CLI blocks --dangerously-skip-permissions when running as uid=0 for security. Add a 'runner' user to the Docker image and set --user runner on the container.

* fix: install bun to /usr/local so the non-root runner user can access it

* fix: unset CI/GITHUB_ACTIONS env vars for eval runs
  Claude CLI routing behavior changes when CI=true — it skips skill invocation and uses Bash directly. Unsetting these markers makes Claude behave like a local environment for consistent eval results.

* revert: remove CI env unset — didn't fix routing
  Unsetting CI/GITHUB_ACTIONS didn't improve routing test results (still 1/11 in the container). The issue is model behavior in containerized environments, not env vars. Routing tests will be tracked as a known CI gap.

* fix: copy CLAUDE.md into routing test tmpDirs for skill context
  In containerized CI, Claude lacks the project context (CLAUDE.md) that guides routing decisions locally. Without it, Claude answers directly with Bash/Agent instead of invoking specific skills. Copying CLAUDE.md gives Claude the same context it has locally.

* fix: routing tests use createRoutingWorkDir with full project context
  Routing tests now copy CLAUDE.md, README.md, package.json, ETHOS.md, and all SKILL.md files into each test tmpDir. This gives Claude the same project context it has locally, which it needs for correct skill routing decisions in containerized CI environments.

* fix: install skills at top-level .claude/skills/ for CI discovery
  Claude Code discovers project skills from .claude/skills/<name>/SKILL.md at the top level only. Nesting under .claude/skills/gstack/<name>/ caused Claude to see only one "gstack" skill instead of individual skills like /ship, /qa, /review. This explains 10/11 routing failures in CI — Claude invoked "gstack" or used Bash directly instead of routing to specific skills. Also adds a workflow_dispatch trigger and the --user runner container option.

* chore: bump version and changelog (v0.11.10.0)

* fix: CI report needs checkout + routing needs user-level skill install
  Two fixes:
  1. Report job: add actions/checkout so `gh pr comment` has git context. Also add the pull-requests:write permission for comment posting.
  2. Routing tests: install skills to BOTH project-level (.claude/skills/) AND user-level (~/.claude/skills/), since Claude Code discovers from both locations. In CI containers, $HOME differs from the workdir (sketched below).

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
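The within-file concurrency change in the first commit hinges on one small helper. A minimal sketch of what testConcurrentIfSelected() could look like, modeled on testIfSelected() in the file below; the real helper lives in e2e-helpers.ts, and this exact signature is an assumption:

import { test } from 'bun:test';

// Runs a test concurrently with the rest of its file when selected, else skips it.
// `selected === null` means "run everything" (no diff-based filtering).
function testConcurrentIfSelected(
  selected: string[] | null,
  name: string,
  fn: () => Promise<void>,
  timeout: number,
) {
  const shouldRun = selected === null || selected.includes(name);
  // test.concurrent overlaps tests within a file, so wall clock is bounded by
  // the slowest single test instead of the sequential sum.
  (shouldRun ? test.concurrent : test.skip)(name, fn, timeout);
}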
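The self-bootstrapping image cache works by deriving the tag from file contents. A sketch of the idea in TypeScript (the actual workflow computes this in a shell step; file names follow the commits above, and bun.lock is excluded because it is gitignored):

import { createHash } from 'node:crypto';
import { readFileSync } from 'node:fs';

// Key the CI image tag on the inputs that affect it: the toolchain
// (Dockerfile.ci) and the dependency manifest (package.json).
const hash = createHash('sha256');
for (const file of ['Dockerfile.ci', 'package.json']) {
  hash.update(readFileSync(file));
}
const imageTag = `ci-${hash.digest('hex').slice(0, 12)}`;
// The workflow then checks (via docker manifest inspect, after GHCR login)
// whether an image already exists for this tag, and rebuilds only if not.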
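The routing fixes come down to where Claude Code looks for skills: .claude/skills/<name>/SKILL.md at the top level, in both the project and $HOME. A hypothetical helper illustrating the dual install from the last commit (the function name and copy granularity are assumptions, not the project's actual code):

import { cpSync, mkdirSync } from 'node:fs';
import { homedir } from 'node:os';
import * as path from 'node:path';

// Install one skill at both discovery roots. In CI containers $HOME differs
// from the workdir, so a project-level install alone is not enough.
function installSkill(skillDir: string, workDir: string) {
  const name = path.basename(skillDir);
  const roots = [
    path.join(workDir, '.claude', 'skills'),
    path.join(homedir(), '.claude', 'skills'),
  ];
  for (const root of roots) {
    const dest = path.join(root, name);
    mkdirSync(dest, { recursive: true });
    cpSync(path.join(skillDir, 'SKILL.md'), path.join(dest, 'SKILL.md'));
  }
}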
174 lines
6.4 KiB
TypeScript
/**
 * Gemini CLI E2E tests — verify skills work when invoked by Gemini CLI.
 *
 * Spawns `gemini -p` with stream-json output in the repo root (where
 * .agents/skills/ already exists), parses JSONL events, and validates
 * structured results. Follows the same pattern as codex-e2e.test.ts.
 *
 * Prerequisites:
 * - `gemini` binary installed (npm install -g @google/gemini-cli)
 * - Gemini authenticated via ~/.gemini/ config or GEMINI_API_KEY env var
 * - EVALS=1 env var set (same gate as Claude E2E tests)
 *
 * Skips gracefully when prerequisites are not met.
 */
import { describe, test, expect, afterAll } from 'bun:test';
import { runGeminiSkill } from './helpers/gemini-session-runner';
import type { GeminiResult } from './helpers/gemini-session-runner';
import { EvalCollector } from './helpers/eval-store';
import { selectTests, detectBaseBranch, getChangedFiles, GLOBAL_TOUCHFILES } from './helpers/touchfiles';
import * as path from 'path';

const ROOT = path.resolve(import.meta.dir, '..');
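
// For orientation, an illustrative shape of the imported helper. The real
// definitions live in ./helpers/gemini-session-runner; the fields below are
// inferred from how this file uses them, and the spawn details are a sketch,
// not the actual implementation:
//
//   interface GeminiResult {
//     exitCode: number;     // process exit code of the gemini CLI
//     output: string;       // concatenated assistant text
//     tokens: number;       // total tokens reported by the stream
//     toolCalls: unknown[]; // tool-call events parsed from the JSONL stream
//     durationMs: number;   // wall-clock duration of the run
//   }
//
// runGeminiSkill({ prompt, timeoutMs, cwd }) spawns `gemini -p` with
// stream-json output, JSON.parses each stdout line, and folds the events
// into a GeminiResult.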

// --- Prerequisites check ---

const GEMINI_AVAILABLE = (() => {
  try {
    const result = Bun.spawnSync(['which', 'gemini']);
    return result.exitCode === 0;
  } catch { return false; }
})();

const evalsEnabled = !!process.env.EVALS;

// Skip all tests if gemini is not available or EVALS is not set.
const SKIP = !GEMINI_AVAILABLE || !evalsEnabled;

const describeGemini = SKIP ? describe.skip : describe;

// Log why we're skipping (helpful for debugging CI)
if (!evalsEnabled) {
  // Silent — same as Claude E2E tests, EVALS=1 required
} else if (!GEMINI_AVAILABLE) {
  process.stderr.write('\nGemini E2E: SKIPPED — gemini binary not found (install: npm i -g @google/gemini-cli)\n');
}

// --- Diff-based test selection ---

// Gemini E2E touchfiles — keyed by test name, same pattern as Codex E2E
const GEMINI_E2E_TOUCHFILES: Record<string, string[]> = {
  'gemini-discover-skill': ['.agents/skills/**', 'test/helpers/gemini-session-runner.ts'],
  'gemini-review-findings': ['review/**', '.agents/skills/gstack-review/**', 'test/helpers/gemini-session-runner.ts'],
};

let selectedTests: string[] | null = null; // null = run all

if (evalsEnabled && !process.env.EVALS_ALL) {
  const baseBranch = process.env.EVALS_BASE
    || detectBaseBranch(ROOT)
    || 'main';
  const changedFiles = getChangedFiles(baseBranch, ROOT);

  if (changedFiles.length > 0) {
    const selection = selectTests(changedFiles, GEMINI_E2E_TOUCHFILES, GLOBAL_TOUCHFILES);
    selectedTests = selection.selected;
    process.stderr.write(`\nGemini E2E selection (${selection.reason}): ${selection.selected.length}/${Object.keys(GEMINI_E2E_TOUCHFILES).length} tests\n`);
    if (selection.skipped.length > 0) {
      process.stderr.write(`  Skipped: ${selection.skipped.join(', ')}\n`);
    }
    process.stderr.write('\n');
  }
  // If changedFiles is empty (e.g., on main branch), selectedTests stays null -> run all
}
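
// Roughly, selectTests() matches each changed file against each test's glob
// patterns (GLOBAL_TOUCHFILES select every test when touched) and returns
// { selected, skipped, reason }. A minimal sketch of the matching step,
// assuming Bun.Glob (the real implementation lives in ./helpers/touchfiles):
//
//   const matches = (file: string, patterns: string[]) =>
//     patterns.some((p) => new Bun.Glob(p).match(file));
//
// A test is selected when any of its patterns matches any changed file;
// everything else lands in `skipped`.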

/** Skip an individual test if not selected by diff-based selection. */
function testIfSelected(testName: string, fn: () => Promise<void>, timeout: number) {
  const shouldRun = selectedTests === null || selectedTests.includes(testName);
  (shouldRun ? test.concurrent : test.skip)(testName, fn, timeout);
}

// --- Eval result collector ---

const evalCollector = evalsEnabled && !SKIP ? new EvalCollector('e2e-gemini') : null;

/** DRY helper to record a Gemini E2E test result into the eval collector. */
function recordGeminiE2E(name: string, result: GeminiResult, passed: boolean) {
  evalCollector?.addTest({
    name,
    suite: 'gemini-e2e',
    tier: 'e2e',
    passed,
    duration_ms: result.durationMs,
    cost_usd: 0, // Gemini doesn't report cost in USD; tokens are tracked
    output: result.output?.slice(0, 2000),
    turns_used: result.toolCalls.length, // approximate: tool calls as turns
    exit_reason: result.exitCode === 0 ? 'success' : `exit_code_${result.exitCode}`,
  });
}

/** Print cost summary after a Gemini E2E test. */
function logGeminiCost(label: string, result: GeminiResult) {
  const durationSec = Math.round(result.durationMs / 1000);
  console.log(`${label}: ${result.tokens} tokens, ${result.toolCalls.length} tool calls, ${durationSec}s`);
}

// Finalize eval results on exit
afterAll(async () => {
  if (evalCollector) {
    await evalCollector.finalize();
  }
});

// --- Tests ---

describeGemini('Gemini E2E', () => {

  testIfSelected('gemini-discover-skill', async () => {
    // Run Gemini in the repo root where .agents/skills/ exists
    const result = await runGeminiSkill({
      prompt: 'List any skills or instructions you have available. Just list the names.',
      timeoutMs: 60_000,
      cwd: ROOT,
    });

    logGeminiCost('gemini-discover-skill', result);

    // Gemini should have produced some output
    const passed = result.exitCode === 0 && result.output.length > 0;
    recordGeminiE2E('gemini-discover-skill', result, passed);

    expect(result.exitCode).toBe(0);
    expect(result.output.length).toBeGreaterThan(0);
    // The output should reference skills in some form
    const outputLower = result.output.toLowerCase();
    expect(
      outputLower.includes('review') || outputLower.includes('gstack') || outputLower.includes('skill'),
    ).toBe(true);
  }, 120_000);

  testIfSelected('gemini-review-findings', async () => {
    // Run gstack-review skill via Gemini on this repo
    const result = await runGeminiSkill({
      prompt: 'Run the gstack-review skill on this repository. Review the current branch diff and report your findings.',
      timeoutMs: 540_000,
      cwd: ROOT,
    });

    logGeminiCost('gemini-review-findings', result);

    // Should produce structured review-like output
    const output = result.output;
    const passed = result.exitCode === 0 && output.length > 50;
    recordGeminiE2E('gemini-review-findings', result, passed);

    expect(result.exitCode).toBe(0);
    expect(output.length).toBeGreaterThan(50);

    // Review output should contain some review-like content
    const outputLower = output.toLowerCase();
    const hasReviewContent =
      outputLower.includes('finding') ||
      outputLower.includes('issue') ||
      outputLower.includes('review') ||
      outputLower.includes('change') ||
      outputLower.includes('diff') ||
      outputLower.includes('clean') ||
      outputLower.includes('no issues') ||
      outputLower.includes('p1') ||
      outputLower.includes('p2');
    expect(hasReviewContent).toBe(true);
  }, 600_000);
});