mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-02 03:35:09 +02:00
codex + Apple Silicon hardening wave (v0.18.4.0) (#1056)
* fix: ad-hoc codesign compiled binaries on Apple Silicon after build

  On some Apple Silicon machines, Bun's --compile produces a corrupt or linker-only code signature. macOS kills these binaries with SIGKILL (exit 137, `zsh: killed`) before they execute a single instruction.

  Add a post-build codesign step to setup that runs only on Darwin arm64:

  1. Remove the corrupt/linker-only signature (required — a direct re-sign fails with 'invalid or unsupported format for signature')
  2. Apply a fresh ad-hoc signature

  The step is idempotent, costs <1s, and is what Bun's own docs recommend for distributed standalone executables. All four compiled binaries are covered: browse, find-browse, design, and gstack-global-discover. Failure is a non-fatal warning so Intel/CI builds are unaffected.

  Fixes #997

* fix: prevent codex exec stdin deadlock with </dev/null redirect

  codex CLI 0.120.0+ blocks indefinitely when stdin is a non-TTY pipe (Claude Code Bash tool, background bash, CI). The CLI sees a non-TTY stdin and waits for EOF to append it as a <stdin> block, even when the prompt is passed as a positional argument.

  Fix: add `< /dev/null` to every codex exec and codex review invocation in the source-of-truth files (scripts/resolvers/*.ts and *.md.tmpl). Generated SKILL.md files will be produced by `bun run gen:skill-docs` in a subsequent commit (Tension D: template+resolver only, generator is authoritative, not cherry-picked artifacts).

  Affected source files (16 total invocations):
  - scripts/resolvers/review.ts (4)
  - scripts/resolvers/design.ts (3)
  - codex/SKILL.md.tmpl (5)
  - autoplan/SKILL.md.tmpl (4)

  Fixes #971

  Co-Authored-By: loning <loning@users.noreply.github.com>
  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat: codex/autoplan hardening + Apple Silicon coreutils auto-install

  Hardens /codex and /autoplan against silent failures surfaced by the #972 stdin fix and #1003 Apple Silicon codesign. Six-layer defense:

  1. **Multi-signal auth probe** (new Step 0.5 / Phase 0.5): env-based auth ($CODEX_API_KEY, $OPENAI_API_KEY) OR file-based auth (${CODEX_HOME:-~/.codex}/auth.json). Rejects false negatives that the old file-only check produced for CI / platform-engineer users.
  2. **Timeout wrapper** around every codex exec / codex review invocation: gtimeout → timeout → unwrapped fallback chain. On exit 124, surfaces common causes + actionable next step. Guards against model-API stalls not covered by the #972 stdin fix.
  3. **Stderr capture in Challenge mode** (codex/SKILL.md.tmpl:208): 2>/dev/null → 2>$TMPERR. Post-invocation grep for auth/login/unauthorized surfaces errors that were previously dropped silently.
  4. **Completeness check** in the Python JSON parser: tracks turn.completed events and warns on zero (possible mid-stream disconnect).
  5. **Version warning** for known-bad Codex CLI (0.120.0-0.120.2, the range that introduced the stdin deadlock #972 fixes). Anchored regex `(^|[^0-9.])0\.120\.(0|1|2)([^0-9.]|$)` prevents 0.120.10 / 0.120.20 false positives.
  6. **Failure telemetry + operational learnings**: codex_timeout, codex_auth_failed, codex_cli_missing, codex_version_warning events land in ~/.gstack/analytics/skill-usage.jsonl behind the existing telemetry opt-in. On timeout (exit 124), auto-logs an operational learning via gstack-learnings-log so future /investigate sessions surface prior hang patterns automatically.

  **Shared helper** (bin/gstack-codex-probe): consolidates all four pieces (auth probe, version check, timeout wrapper, telemetry logger) into one bash file that /codex and /autoplan source. Namespace-prefixed (_gstack_codex_*) with a unit test that verifies sourcing does not leak shell options into the caller. pathRewrites in host configs rewrite ~/.claude/skills/gstack → $GSTACK_ROOT for Codex, $GSTACK_BIN for Factory/Cursor/etc.
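Two of the layers above are small enough to sketch. The function bodies below are illustrative reconstructions from this description, not the actual bin/gstack-codex-probe source; only the regex is quoted verbatim from the commit message.

```shell
# Sketch of layers 2 and 5 (illustrative; real implementation lives in
# bin/gstack-codex-probe and may differ).

# Layer 2: gtimeout → timeout → unwrapped fallback chain.
_gstack_codex_timeout_wrapper() {
  local secs="$1"; shift
  if command -v gtimeout >/dev/null 2>&1; then
    gtimeout "$secs" "$@"          # GNU coreutils name on macOS/Homebrew
  elif command -v timeout >/dev/null 2>&1; then
    timeout "$secs" "$@"           # GNU coreutils on Linux
  else
    "$@"                           # last resort: run unwrapped
  fi
  # Callers look for exit 124, timeout(1)'s "deadline hit" status.
}

# Layer 5: anchored version regex — flags 0.120.0-0.120.2, not 0.120.10.
_GSTACK_BAD_CODEX='(^|[^0-9.])0\.120\.(0|1|2)([^0-9.]|$)'
version_is_bad() { printf '%s' "$1" | grep -Eq "$_GSTACK_BAD_CODEX"; }

wrapped=$(_gstack_codex_timeout_wrapper 5 echo hello)
version_is_bad 'codex-cli 0.120.1'  && warn1=yes || warn1=no
version_is_bad 'codex-cli 0.120.10' && warn2=yes || warn2=no
echo "$wrapped $warn1 $warn2"    # hello yes no
```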
  **Apple Silicon coreutils auto-install** (setup:264): macOS lacks GNU timeout by default; Homebrew's coreutils installs it as gtimeout to avoid shadowing BSD utilities. ./setup now auto-installs coreutils on Darwin (arch-agnostic — applies to Intel + Apple Silicon) when neither gtimeout nor timeout is present. Opt-out via GSTACK_SKIP_COREUTILS=1 for CI, managed machines, or offline envs.

  **25 deterministic unit tests** (test/codex-hardening.test.ts):
  - 8 auth probe combinations (env precedence, whitespace, alternate $CODEX_HOME, corrupt file paths)
  - 10 version regex cases including 0.120.10 false-positive guards and v-prefixed / multiline output
  - 4 timeout wrapper + namespace hygiene (bash -n, gtimeout preference, set-option leak check)
  - 3 telemetry payload schema checks (confirms env values + auth tokens never leak into emitted events)

  **1 periodic-tier E2E** (test/skill-e2e-autoplan-dual-voice.test.ts): gates the /autoplan dual-voice path — asserts both Claude subagent and Codex voices produce output in Phase 1, OR that [codex-unavailable] is logged when Codex is absent. ~$1/run, not a CI gate.

  Golden baseline + gen-skill-docs exclusion list updated for the new codex path references and the 16 `< /dev/null` redirects from #972.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: plan-review right-sized diff counterbalance (not minimal-diff default)

  /plan-ceo-review and /plan-eng-review listed "minimal diff" as an engineering preference without counterbalancing language. Reviewers picked up on that and rejected rewrites that should have been approved.

  The preference is now framed as "right-sized diff" with explicit permission to recommend a rewrite when the existing foundation is broken. The implementation-alternatives section in CEO review gets an equal-weight clarification: don't default to minimal viable just because it is smaller. Recommend whichever best serves the user's goal; if the right answer is a rewrite, say so.
  Three-line tone edit per template, no voice / ETHOS / YC / promotional content change.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* release: v0.18.4.0 — codex + Apple Silicon hardening wave

  - Apple Silicon codesign fix (#1003 @voidborne-d)
  - Codex stdin deadlock fix (#972 @loning)
  - Codex timeout wrapper (gtimeout → timeout → unwrapped fallback)
  - Multi-signal auth gate for /codex + /autoplan
  - Codex version warning for known-bad CLI (0.120.0-0.120.2)
  - Challenge mode stderr capture + completeness check
  - Plan-review right-sized diff counterbalance
  - Failure telemetry + auto-log timeout as operational learning
  - 25 deterministic unit tests + dual-voice periodic E2E

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: voidborne-d <voidborne-d@users.noreply.github.com>
Co-authored-by: loning <loning@users.noreply.github.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
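The stdin deadlock fixed in this release is easy to reproduce with a stand-in: any command that slurps a non-TTY stdin blocks until EOF, and `< /dev/null` supplies that EOF immediately. `slurp_then_run` below is a hypothetical stand-in for the affected codex CLI, not the real binary.

```shell
# Hypothetical stand-in for a CLI (like codex 0.120.x) that reads stdin
# to EOF before handling its positional-argument prompt.
slurp_then_run() {
  cat >/dev/null          # blocks until stdin reaches EOF
  echo "prompt handled: $1"
}

# Without the redirect, a non-TTY pipe that never closes would hang here.
# With < /dev/null, stdin is already at EOF, so the call returns at once.
out=$(slurp_then_run "review this diff" < /dev/null)
echo "$out"               # prompt handled: review this diff
```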
@@ -0,0 +1,366 @@
import { describe, test, expect } from 'bun:test';
import { spawnSync } from 'child_process';
import * as path from 'path';
import * as fs from 'fs';
import * as os from 'os';

const ROOT = path.resolve(import.meta.dir, '..');
const PROBE = path.join(ROOT, 'bin/gstack-codex-probe');

// Run a bash snippet that sources the probe and evaluates one of its functions.
// Controlled env + optional tempdir for HOME isolation.
function runProbe(opts: {
  snippet: string;
  env?: Record<string, string | undefined>;
  home?: string;
}): { stdout: string; stderr: string; status: number } {
  const env: Record<string, string> = {
    // Start from a clean env so test-env vars from the parent don't leak in.
    PATH: process.env.PATH ?? '',
    _TEL: 'off',
  };
  if (opts.home) env.HOME = opts.home;
  // Apply overrides; undefined means "remove".
  if (opts.env) {
    for (const [k, v] of Object.entries(opts.env)) {
      if (v === undefined) {
        delete env[k];
      } else {
        env[k] = v;
      }
    }
  }
  const script = `set +e\nsource "${PROBE}"\n${opts.snippet}\n`;
  const result = spawnSync('bash', ['-c', script], {
    env,
    stdio: ['pipe', 'pipe', 'pipe'],
    timeout: 5000,
  });
  return {
    stdout: (result.stdout ?? '').toString(),
    stderr: (result.stderr ?? '').toString(),
    status: result.status ?? -1,
  };
}

function tempHome(): string {
  return fs.mkdtempSync(path.join(os.tmpdir(), 'gstack-codex-probe-home-'));
}

describe('gstack-codex-probe: auth probe', () => {
  test('CODEX_API_KEY set → AUTH_OK', () => {
    const home = tempHome();
    try {
      const r = runProbe({
        snippet: '_gstack_codex_auth_probe',
        env: { CODEX_API_KEY: 'sk-test' },
        home,
      });
      expect(r.stdout.trim()).toBe('AUTH_OK');
      expect(r.status).toBe(0);
    } finally {
      fs.rmSync(home, { recursive: true, force: true });
    }
  });

  test('OPENAI_API_KEY set → AUTH_OK', () => {
    const home = tempHome();
    try {
      const r = runProbe({
        snippet: '_gstack_codex_auth_probe',
        env: { OPENAI_API_KEY: 'sk-openai' },
        home,
      });
      expect(r.stdout.trim()).toBe('AUTH_OK');
      expect(r.status).toBe(0);
    } finally {
      fs.rmSync(home, { recursive: true, force: true });
    }
  });

  test('${CODEX_HOME:-~/.codex}/auth.json exists → AUTH_OK', () => {
    const home = tempHome();
    try {
      fs.mkdirSync(path.join(home, '.codex'), { recursive: true });
      fs.writeFileSync(path.join(home, '.codex', 'auth.json'), '{}');
      const r = runProbe({ snippet: '_gstack_codex_auth_probe', home });
      expect(r.stdout.trim()).toBe('AUTH_OK');
      expect(r.status).toBe(0);
    } finally {
      fs.rmSync(home, { recursive: true, force: true });
    }
  });

  test('no env + no file → AUTH_FAILED with exit 1', () => {
    const home = tempHome();
    try {
      const r = runProbe({ snippet: '_gstack_codex_auth_probe', home });
      expect(r.stdout.trim()).toBe('AUTH_FAILED');
      expect(r.status).toBe(1);
    } finally {
      fs.rmSync(home, { recursive: true, force: true });
    }
  });

  test('both CODEX_API_KEY and OPENAI_API_KEY set → AUTH_OK', () => {
    const home = tempHome();
    try {
      const r = runProbe({
        snippet: '_gstack_codex_auth_probe',
        env: { CODEX_API_KEY: 'k1', OPENAI_API_KEY: 'k2' },
        home,
      });
      expect(r.stdout.trim()).toBe('AUTH_OK');
      expect(r.status).toBe(0);
    } finally {
      fs.rmSync(home, { recursive: true, force: true });
    }
  });

  test('empty-string env vars + no file → AUTH_FAILED', () => {
    const home = tempHome();
    try {
      const r = runProbe({
        snippet: '_gstack_codex_auth_probe',
        env: { CODEX_API_KEY: '', OPENAI_API_KEY: '' },
        home,
      });
      expect(r.stdout.trim()).toBe('AUTH_FAILED');
      expect(r.status).toBe(1);
    } finally {
      fs.rmSync(home, { recursive: true, force: true });
    }
  });

  test('whitespace-only env vars + no file → AUTH_FAILED', () => {
    const home = tempHome();
    try {
      const r = runProbe({
        snippet: '_gstack_codex_auth_probe',
        env: { CODEX_API_KEY: ' ', OPENAI_API_KEY: '\t\n' },
        home,
      });
      expect(r.stdout.trim()).toBe('AUTH_FAILED');
      expect(r.status).toBe(1);
    } finally {
      fs.rmSync(home, { recursive: true, force: true });
    }
  });

  test('alternate $CODEX_HOME → checks the alternate path', () => {
    const home = tempHome();
    const altCodex = fs.mkdtempSync(path.join(os.tmpdir(), 'gstack-alt-codex-'));
    try {
      fs.writeFileSync(path.join(altCodex, 'auth.json'), '{}');
      const r = runProbe({
        snippet: '_gstack_codex_auth_probe',
        env: { CODEX_HOME: altCodex },
        home,
      });
      expect(r.stdout.trim()).toBe('AUTH_OK');
      expect(r.status).toBe(0);
    } finally {
      fs.rmSync(home, { recursive: true, force: true });
      fs.rmSync(altCodex, { recursive: true, force: true });
    }
  });
});

// --- Group 2: Version check -------------------------------------------------
// Stub `codex --version` by putting a fake `codex` executable on PATH.
function tempStubCodex(versionOutput: string, commandFails = false): {
  dir: string;
  pathEntry: string;
} {
  const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'gstack-codex-stub-'));
  const bin = path.join(dir, 'codex');
  const script = commandFails
    ? '#!/bin/bash\nexit 1\n'
    : `#!/bin/bash\nif [ "$1" = "--version" ]; then printf '%s' ${JSON.stringify(versionOutput)}; fi\n`;
  fs.writeFileSync(bin, script);
  fs.chmodSync(bin, 0o755);
  return { dir, pathEntry: dir };
}

function runVersionCheck(versionOutput: string): string {
  const stub = tempStubCodex(versionOutput);
  try {
    const r = runProbe({
      snippet: '_gstack_codex_version_check',
      env: { PATH: `${stub.pathEntry}:${process.env.PATH}` },
    });
    return r.stdout + r.stderr;
  } finally {
    fs.rmSync(stub.dir, { recursive: true, force: true });
  }
}

describe('gstack-codex-probe: version check (anchored regex per Tension I)', () => {
  // Matches (should WARN)
  test('codex-cli 0.120.0 → WARN', () => {
    const out = runVersionCheck('codex-cli 0.120.0\n');
    expect(out).toContain('WARN:');
    expect(out).toContain('0.120.0');
  });

  test('codex-cli 0.120.1 → WARN', () => {
    const out = runVersionCheck('codex-cli 0.120.1\n');
    expect(out).toContain('WARN:');
  });

  test('codex-cli 0.120.2 → WARN', () => {
    const out = runVersionCheck('codex-cli 0.120.2\n');
    expect(out).toContain('WARN:');
  });

  // Does NOT match (should be silent)
  test('codex-cli 0.116.0 → OK (no warn)', () => {
    const out = runVersionCheck('codex-cli 0.116.0\n');
    expect(out).not.toContain('WARN:');
  });

  test('codex-cli 0.121.0 → OK (no warn)', () => {
    const out = runVersionCheck('codex-cli 0.121.0\n');
    expect(out).not.toContain('WARN:');
  });

  test('codex-cli 0.120.10 → OK (anchored regex prevents substring match)', () => {
    const out = runVersionCheck('codex-cli 0.120.10\n');
    expect(out).not.toContain('WARN:');
  });

  test('codex-cli 0.120.20 → OK (anchored regex prevents substring match)', () => {
    const out = runVersionCheck('codex-cli 0.120.20\n');
    expect(out).not.toContain('WARN:');
  });

  test('codex-cli 0.120.2-beta → WARN (still a bad release family)', () => {
    // 0.120.2-beta: regex (^|[^0-9.])0\.120\.(0|1|2)([^0-9.]|$) treats '-' as a
    // non-digit/non-dot boundary → matches.
    const out = runVersionCheck('codex-cli 0.120.2-beta\n');
    expect(out).toContain('WARN:');
  });

  test('empty output → OK (silent, no crash)', () => {
    const out = runVersionCheck('');
    expect(out).not.toContain('WARN:');
  });

  test('v-prefixed and multiline handled', () => {
    const out = runVersionCheck('codex-cli v0.116.0\nsome debug line\n');
    expect(out).not.toContain('WARN:');
  });
});

// --- Group 3: Timeout wrapper + namespace hygiene ---------------------------

describe('gstack-codex-probe: timeout wrapper + namespace hygiene', () => {
  test('bin/gstack-codex-probe is syntactically valid bash (bash -n)', () => {
    const result = spawnSync('bash', ['-n', PROBE], { timeout: 5000 });
    expect(result.status).toBe(0);
  });

  test('timeout wrapper executes command directly when neither binary present', () => {
    // Clear PATH to simulate no timeout/gtimeout. Use only /bin for `echo`.
    const r = runProbe({
      snippet: `_gstack_codex_timeout_wrapper 5 echo hello_world`,
      env: { PATH: '/bin:/usr/bin' }, // these usually lack gtimeout; timeout may exist on linux
    });
    // Regardless of whether timeout is on this PATH, echo hello_world should succeed.
    expect(r.stdout.trim()).toBe('hello_world');
  });

  test('timeout wrapper resolves gtimeout preferentially when on PATH', () => {
    // Create a stub gtimeout that prints a sentinel so we can verify it was chosen.
    const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'gstack-gto-stub-'));
    try {
      const stub = path.join(dir, 'gtimeout');
      fs.writeFileSync(stub, '#!/bin/bash\necho gtimeout_chosen_$1\n');
      fs.chmodSync(stub, 0o755);
      const r = runProbe({
        snippet: `_gstack_codex_timeout_wrapper 5 echo nope`,
        env: { PATH: `${dir}:/bin:/usr/bin` },
      });
      expect(r.stdout.trim()).toBe('gtimeout_chosen_5');
    } finally {
      fs.rmSync(dir, { recursive: true, force: true });
    }
  });

  test('sourcing probe does NOT set errexit/trap/IFS in caller shell (namespace hygiene)', () => {
    // Capture `set -o` output before and after sourcing. Any drift means the
    // probe polluted the caller.
    const r = runProbe({
      snippet: `
        BEFORE=$(set -o | sort)
        source "${PROBE}" # source again to catch accumulation
        AFTER=$(set -o | sort)
        if [ "$BEFORE" = "$AFTER" ]; then
          echo "CLEAN"
        else
          echo "POLLUTED"
          diff <(echo "$BEFORE") <(echo "$AFTER")
        fi
      `,
    });
    expect(r.stdout).toContain('CLEAN');
  });
});

// --- Group 4: Telemetry event emission --------------------------------------

describe('gstack-codex-probe: telemetry event emission', () => {
  test('_gstack_codex_log_event writes jsonl when _TEL != off', () => {
    const home = tempHome();
    try {
      const r = runProbe({
        snippet: `_gstack_codex_log_event "codex_test_event" "42"; cat "$HOME/.gstack/analytics/skill-usage.jsonl"`,
        env: { _TEL: 'community' },
        home,
      });
      expect(r.stdout).toContain('"event":"codex_test_event"');
      expect(r.stdout).toContain('"duration_s":"42"');
    } finally {
      fs.rmSync(home, { recursive: true, force: true });
    }
  });

  test('_gstack_codex_log_event skips write when _TEL = off', () => {
    const home = tempHome();
    try {
      runProbe({
        snippet: `_gstack_codex_log_event "codex_test_event" "99"`,
        env: { _TEL: 'off' },
        home,
      });
      const jsonl = path.join(home, '.gstack/analytics/skill-usage.jsonl');
      expect(fs.existsSync(jsonl)).toBe(false);
    } finally {
      fs.rmSync(home, { recursive: true, force: true });
    }
  });

  test('payload never contains prompt content, env values, or auth tokens (schema check)', () => {
    const home = tempHome();
    try {
      const r = runProbe({
        snippet: `_gstack_codex_log_event "codex_test_event" "1"; cat "$HOME/.gstack/analytics/skill-usage.jsonl"`,
        env: {
          _TEL: 'community',
          CODEX_API_KEY: 'SECRET_TOKEN_SHOULD_NOT_LEAK',
          OPENAI_API_KEY: 'ANOTHER_SECRET',
        },
        home,
      });
      // The emitted JSON payload should ONLY have {skill, event, duration_s, ts}.
      // Specifically, it must not contain any env values or auth material.
      expect(r.stdout).not.toContain('SECRET_TOKEN_SHOULD_NOT_LEAK');
      expect(r.stdout).not.toContain('ANOTHER_SECRET');
      // Schema: exactly these keys, in any order.
      const parsed = JSON.parse(r.stdout.trim().split('\n').pop() ?? '{}');
      expect(Object.keys(parsed).sort()).toEqual(['duration_s', 'event', 'skill', 'ts']);
    } finally {
      fs.rmSync(home, { recursive: true, force: true });
    }
  });
});
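The telemetry tests above exercise a logger whose shape can be inferred from the assertions. The sketch below is an illustrative reconstruction: the real implementation lives in bin/gstack-codex-probe (not part of this diff); the function name and the {skill, event, duration_s, ts} schema come from the tests, and everything else is assumption.

```shell
# Illustrative sketch of the telemetry logger the tests above exercise.
_gstack_codex_log_event() {
  [ "${_TEL:-off}" = "off" ] && return 0    # honor the telemetry opt-in
  local out="$HOME/.gstack/analytics/skill-usage.jsonl"
  mkdir -p "$(dirname "$out")"
  # Fixed fields only — never prompt content, env values, or auth tokens.
  printf '{"skill":"codex","event":"%s","duration_s":"%s","ts":"%s"}\n' \
    "$1" "$2" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" >>"$out"
}

HOME=$(mktemp -d)
_TEL=community _gstack_codex_log_event "codex_test_event" "42"
line=$(cat "$HOME/.gstack/analytics/skill-usage.jsonl")
echo "$line"
```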
@@ -1752,7 +1752,7 @@ If Codex is available, run a lightweight design check on the diff:
 ```bash
 TMPERR_DRL=$(mktemp /tmp/codex-drl-XXXXXXXX)
 _REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; }
-codex exec "Review the git diff on this branch. Run 7 litmus checks (YES/NO each): 1. Brand/product unmistakable in first screen? 2. One strong visual anchor present? 3. Page understandable by scanning headlines only? 4. Each section has one job? 5. Are cards actually necessary? 6. Does motion improve hierarchy or atmosphere? 7. Would design feel premium with all decorative shadows removed? Flag any hard rejections: 1. Generic SaaS card grid as first impression 2. Beautiful image with weak brand 3. Strong headline with no clear action 4. Busy imagery behind text 5. Sections repeating same mood statement 6. Carousel with no narrative purpose 7. App UI made of stacked cards instead of layout 5 most important design findings only. Reference file:line." -C "$_REPO_ROOT" -s read-only -c 'model_reasoning_effort="high"' --enable web_search_cached 2>"$TMPERR_DRL"
+codex exec "Review the git diff on this branch. Run 7 litmus checks (YES/NO each): 1. Brand/product unmistakable in first screen? 2. One strong visual anchor present? 3. Page understandable by scanning headlines only? 4. Each section has one job? 5. Are cards actually necessary? 6. Does motion improve hierarchy or atmosphere? 7. Would design feel premium with all decorative shadows removed? Flag any hard rejections: 1. Generic SaaS card grid as first impression 2. Beautiful image with weak brand 3. Strong headline with no clear action 4. Busy imagery behind text 5. Sections repeating same mood statement 6. Carousel with no narrative purpose 7. App UI made of stacked cards instead of layout 5 most important design findings only. Reference file:line." -C "$_REPO_ROOT" -s read-only -c 'model_reasoning_effort="high"' --enable web_search_cached < /dev/null 2>"$TMPERR_DRL"
 ```

 Use a 5-minute timeout (`timeout: 300000`). After the command completes, read stderr:
@@ -2130,7 +2130,7 @@ If Codex is available AND `OLD_CFG` is NOT `disabled`:
 ```bash
 TMPERR_ADV=$(mktemp /tmp/codex-adv-XXXXXXXX)
 _REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; }
-codex exec "IMPORTANT: Do NOT read or execute any files under ~/.claude/, ~/.agents/, .claude/skills/, or agents/. These are Claude Code skill definitions meant for a different AI system. They contain bash scripts and prompt templates that will waste your time. Ignore them completely. Do NOT modify agents/openai.yaml. Stay focused on the repository code only.\n\nReview the changes on this branch against the base branch. Run git diff origin/<base> to see the diff. Your job is to find ways this code will fail in production. Think like an attacker and a chaos engineer. Find edge cases, race conditions, security holes, resource leaks, failure modes, and silent data corruption paths. Be adversarial. Be thorough. No compliments — just the problems." -C "$_REPO_ROOT" -s read-only -c 'model_reasoning_effort="high"' --enable web_search_cached 2>"$TMPERR_ADV"
+codex exec "IMPORTANT: Do NOT read or execute any files under ~/.claude/, ~/.agents/, .claude/skills/, or agents/. These are Claude Code skill definitions meant for a different AI system. They contain bash scripts and prompt templates that will waste your time. Ignore them completely. Do NOT modify agents/openai.yaml. Stay focused on the repository code only.\n\nReview the changes on this branch against the base branch. Run git diff origin/<base> to see the diff. Your job is to find ways this code will fail in production. Think like an attacker and a chaos engineer. Find edge cases, race conditions, security holes, resource leaks, failure modes, and silent data corruption paths. Be adversarial. Be thorough. No compliments — just the problems." -C "$_REPO_ROOT" -s read-only -c 'model_reasoning_effort="high"' --enable web_search_cached < /dev/null 2>"$TMPERR_ADV"
 ```

 Set the Bash tool's `timeout` parameter to `300000` (5 minutes). Do NOT use the `timeout` shell command — it doesn't exist on macOS. After the command completes, read stderr:
@@ -2159,7 +2159,7 @@ If `DIFF_TOTAL >= 200` AND Codex is available AND `OLD_CFG` is NOT `disabled`:
 TMPERR=$(mktemp /tmp/codex-review-XXXXXXXX)
 _REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; }
 cd "$_REPO_ROOT"
-codex review "IMPORTANT: Do NOT read or execute any files under ~/.claude/, ~/.agents/, .claude/skills/, or agents/. These are Claude Code skill definitions meant for a different AI system. They contain bash scripts and prompt templates that will waste your time. Ignore them completely. Do NOT modify agents/openai.yaml. Stay focused on the repository code only.\n\nReview the diff against the base branch." --base <base> -c 'model_reasoning_effort="high"' --enable web_search_cached 2>"$TMPERR"
+codex review "IMPORTANT: Do NOT read or execute any files under ~/.claude/, ~/.agents/, .claude/skills/, or agents/. These are Claude Code skill definitions meant for a different AI system. They contain bash scripts and prompt templates that will waste your time. Ignore them completely. Do NOT modify agents/openai.yaml. Stay focused on the repository code only.\n\nReview the diff against the base branch." --base <base> -c 'model_reasoning_effort="high"' --enable web_search_cached < /dev/null 2>"$TMPERR"
 ```

 Set the Bash tool's `timeout` parameter to `300000` (5 minutes). Do NOT use the `timeout` shell command — it doesn't exist on macOS. Present output under `CODEX SAYS (code review):` header.
@@ -1743,7 +1743,7 @@ If Codex is available, run a lightweight design check on the diff:
 ```bash
 TMPERR_DRL=$(mktemp /tmp/codex-drl-XXXXXXXX)
 _REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; }
-codex exec "Review the git diff on this branch. Run 7 litmus checks (YES/NO each): 1. Brand/product unmistakable in first screen? 2. One strong visual anchor present? 3. Page understandable by scanning headlines only? 4. Each section has one job? 5. Are cards actually necessary? 6. Does motion improve hierarchy or atmosphere? 7. Would design feel premium with all decorative shadows removed? Flag any hard rejections: 1. Generic SaaS card grid as first impression 2. Beautiful image with weak brand 3. Strong headline with no clear action 4. Busy imagery behind text 5. Sections repeating same mood statement 6. Carousel with no narrative purpose 7. App UI made of stacked cards instead of layout 5 most important design findings only. Reference file:line." -C "$_REPO_ROOT" -s read-only -c 'model_reasoning_effort="high"' --enable web_search_cached 2>"$TMPERR_DRL"
+codex exec "Review the git diff on this branch. Run 7 litmus checks (YES/NO each): 1. Brand/product unmistakable in first screen? 2. One strong visual anchor present? 3. Page understandable by scanning headlines only? 4. Each section has one job? 5. Are cards actually necessary? 6. Does motion improve hierarchy or atmosphere? 7. Would design feel premium with all decorative shadows removed? Flag any hard rejections: 1. Generic SaaS card grid as first impression 2. Beautiful image with weak brand 3. Strong headline with no clear action 4. Busy imagery behind text 5. Sections repeating same mood statement 6. Carousel with no narrative purpose 7. App UI made of stacked cards instead of layout 5 most important design findings only. Reference file:line." -C "$_REPO_ROOT" -s read-only -c 'model_reasoning_effort="high"' --enable web_search_cached < /dev/null 2>"$TMPERR_DRL"
 ```

 Use a 5-minute timeout (`timeout: 300000`). After the command completes, read stderr:
@@ -2121,7 +2121,7 @@ If Codex is available AND `OLD_CFG` is NOT `disabled`:
 ```bash
 TMPERR_ADV=$(mktemp /tmp/codex-adv-XXXXXXXX)
 _REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; }
-codex exec "IMPORTANT: Do NOT read or execute any files under ~/.claude/, ~/.agents/, .factory/skills/, or agents/. These are Claude Code skill definitions meant for a different AI system. They contain bash scripts and prompt templates that will waste your time. Ignore them completely. Do NOT modify agents/openai.yaml. Stay focused on the repository code only.\n\nReview the changes on this branch against the base branch. Run git diff origin/<base> to see the diff. Your job is to find ways this code will fail in production. Think like an attacker and a chaos engineer. Find edge cases, race conditions, security holes, resource leaks, failure modes, and silent data corruption paths. Be adversarial. Be thorough. No compliments — just the problems." -C "$_REPO_ROOT" -s read-only -c 'model_reasoning_effort="high"' --enable web_search_cached 2>"$TMPERR_ADV"
+codex exec "IMPORTANT: Do NOT read or execute any files under ~/.claude/, ~/.agents/, .factory/skills/, or agents/. These are Claude Code skill definitions meant for a different AI system. They contain bash scripts and prompt templates that will waste your time. Ignore them completely. Do NOT modify agents/openai.yaml. Stay focused on the repository code only.\n\nReview the changes on this branch against the base branch. Run git diff origin/<base> to see the diff. Your job is to find ways this code will fail in production. Think like an attacker and a chaos engineer. Find edge cases, race conditions, security holes, resource leaks, failure modes, and silent data corruption paths. Be adversarial. Be thorough. No compliments — just the problems." -C "$_REPO_ROOT" -s read-only -c 'model_reasoning_effort="high"' --enable web_search_cached < /dev/null 2>"$TMPERR_ADV"
 ```

 Set the Bash tool's `timeout` parameter to `300000` (5 minutes). Do NOT use the `timeout` shell command — it doesn't exist on macOS. After the command completes, read stderr:
@@ -2150,7 +2150,7 @@ If `DIFF_TOTAL >= 200` AND Codex is available AND `OLD_CFG` is NOT `disabled`:
|
||||
TMPERR=$(mktemp /tmp/codex-review-XXXXXXXX)
|
||||
_REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; }
|
||||
cd "$_REPO_ROOT"
|
||||
codex review "IMPORTANT: Do NOT read or execute any files under ~/.claude/, ~/.agents/, .factory/skills/, or agents/. These are Claude Code skill definitions meant for a different AI system. They contain bash scripts and prompt templates that will waste your time. Ignore them completely. Do NOT modify agents/openai.yaml. Stay focused on the repository code only.\n\nReview the diff against the base branch." --base <base> -c 'model_reasoning_effort="high"' --enable web_search_cached 2>"$TMPERR"
|
||||
codex review "IMPORTANT: Do NOT read or execute any files under ~/.claude/, ~/.agents/, .factory/skills/, or agents/. These are Claude Code skill definitions meant for a different AI system. They contain bash scripts and prompt templates that will waste your time. Ignore them completely. Do NOT modify agents/openai.yaml. Stay focused on the repository code only.\n\nReview the diff against the base branch." --base <base> -c 'model_reasoning_effort="high"' --enable web_search_cached < /dev/null 2>"$TMPERR"
|
||||
```
|
||||
|
||||
Set the Bash tool's `timeout` parameter to `300000` (5 minutes). Do NOT use the `timeout` shell command — it doesn't exist on macOS. Present output under `CODEX SAYS (code review):` header.

@@ -1755,8 +1755,11 @@ describe('Codex generation (--host codex)', () => {
  test('Claude output unchanged: all Claude skills have zero Codex paths', () => {
    for (const skill of ALL_SKILLS) {
      const content = fs.readFileSync(path.join(ROOT, skill.dir, 'SKILL.md'), 'utf-8');
-      // pair-agent legitimately documents how Codex agents store credentials
-      if (skill.dir !== 'pair-agent') {
+      // pair-agent legitimately documents how Codex agents store credentials.
+      // codex + autoplan document the Codex CLI auth file (~/.codex/auth.json)
+      // and log path (~/.codex/logs/) — those are user-facing Codex CLI paths,
+      // not the gstack Codex host install path.
+      if (skill.dir !== 'pair-agent' && skill.dir !== 'codex' && skill.dir !== 'autoplan') {
        expect(content).not.toContain('~/.codex/');
      }
      // gstack-upgrade legitimately references .agents/skills for cross-platform detection

@@ -170,6 +170,7 @@ export const E2E_TOUCHFILES: Record<string, string[]> = {

  // Autoplan
  'autoplan-core': ['autoplan/**', 'plan-ceo-review/**', 'plan-eng-review/**', 'plan-design-review/**'],
+  'autoplan-dual-voice': ['autoplan/**', 'codex/**', 'bin/gstack-codex-probe', 'scripts/resolvers/review.ts', 'scripts/resolvers/design.ts'],

  // Skill routing — journey-stage tests (depend on ALL skill descriptions)
  'journey-ideation': ['*/SKILL.md.tmpl', 'SKILL.md.tmpl', 'scripts/gen-skill-docs.ts'],
@@ -315,6 +316,7 @@ export const E2E_TIERS: Record<string, 'gate' | 'periodic'> = {

  // Autoplan — periodic (not yet implemented)
  'autoplan-core': 'periodic',
+  'autoplan-dual-voice': 'periodic',

  // Skill routing — periodic (LLM routing is non-deterministic)
  'journey-ideation': 'periodic',

@@ -0,0 +1,77 @@
import { describe, test, expect } from 'bun:test';
import { spawnSync } from 'child_process';
import * as path from 'path';
import * as fs from 'fs';
import * as os from 'os';

const ROOT = path.resolve(import.meta.dir, '..');
const SETUP_SCRIPT = path.join(ROOT, 'setup');

describe('setup: Apple Silicon codesign', () => {
  test('setup script contains codesign block for Darwin arm64', () => {
    const content = fs.readFileSync(SETUP_SCRIPT, 'utf-8');
    // Verify the codesign guard checks both Darwin and arm64
    expect(content).toContain('$(uname -s)" = "Darwin"');
    expect(content).toContain('$(uname -m)" = "arm64"');
    // Verify remove-then-resign two-step pattern
    expect(content).toContain('codesign --remove-signature');
    expect(content).toContain('codesign -s - -f');
  });

  test('codesign block covers all compiled binaries', () => {
    const content = fs.readFileSync(SETUP_SCRIPT, 'utf-8');
    // Extract the binaries from the codesign for-loop
    const forMatch = content.match(/for _bin in ([^;]+);/);
    expect(forMatch).toBeTruthy();
    const binaries = forMatch![1].trim().split(/\s+/);
    // All four compiled binaries from `bun run build` must be covered
    expect(binaries).toContain('browse/dist/browse');
    expect(binaries).toContain('browse/dist/find-browse');
    expect(binaries).toContain('design/dist/design');
    expect(binaries).toContain('bin/gstack-global-discover');
  });

  test('codesign block is inside the NEEDS_BUILD=1 branch', () => {
    const content = fs.readFileSync(SETUP_SCRIPT, 'utf-8');
    // The codesign block should appear after `bun run build` and before the
    // `if [ ! -x "$BROWSE_BIN" ]` guard that checks the build succeeded.
    const buildIdx = content.indexOf('bun run build');
    const codesignIdx = content.indexOf('codesign --remove-signature');
    const browseCheckIdx = content.indexOf('gstack setup failed: browse binary missing');
    expect(buildIdx).toBeGreaterThan(-1);
    expect(codesignIdx).toBeGreaterThan(buildIdx);
    expect(browseCheckIdx).toBeGreaterThan(codesignIdx);
  });

  test('codesign block is idempotent (skips missing binaries)', () => {
    const content = fs.readFileSync(SETUP_SCRIPT, 'utf-8');
    // The loop must guard with a file-existence + executable check before codesigning
    expect(content).toContain('[ -f "$_bin_path" ] && [ -x "$_bin_path" ] || continue');
  });

  test('codesign failure is a warning, not a fatal error', () => {
    const content = fs.readFileSync(SETUP_SCRIPT, 'utf-8');
    // On codesign failure, log a warning but don't exit
    expect(content).toContain('warning: codesign failed for');
    // Should NOT have `set -e` causing exit on codesign failure
    // (the `|| true` after --remove-signature and the if-guard around -s - -f handle this)
    expect(content).toContain('codesign --remove-signature "$_bin_path" 2>/dev/null || true');
  });

  test('codesign shell snippet is syntactically valid', () => {
    // Extract the codesign block and validate it parses as bash
    const content = fs.readFileSync(SETUP_SCRIPT, 'utf-8');
    const match = content.match(
      /# macOS Apple Silicon: ad-hoc codesign[\s\S]*?done\n\s*fi/
    );
    expect(match).toBeTruthy();
    const snippet = match![0];
    // Wrap in a function to make it a complete script, then syntax-check
    const testScript = `#!/usr/bin/env bash\nset -e\n_test_fn() {\n${snippet}\n}\n`;
    const result = spawnSync('bash', ['-n', '-c', testScript], {
      stdio: ['pipe', 'pipe', 'pipe'],
      timeout: 5000,
    });
    expect(result.status).toBe(0);
  });
});
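The block those tests pin down can be sketched as follows. This is a reconstruction assembled from the quoted assertions above, not the literal `setup` source; the `_bin_path` assignment and the `_codesign_status` reporting variable are illustrative assumptions:

```shell
# Reconstruction from the test assertions above (not the literal setup code).
if [ "$(uname -s)" = "Darwin" ] && [ "$(uname -m)" = "arm64" ]; then
  for _bin in browse/dist/browse browse/dist/find-browse design/dist/design bin/gstack-global-discover; do
    _bin_path="$_bin"                  # assumed: paths relative to repo root
    # Idempotent: skip binaries that are absent or not executable.
    [ -f "$_bin_path" ] && [ -x "$_bin_path" ] || continue
    # Two-step: a direct re-sign fails on the corrupt linker-only signature,
    # so strip it first, then apply a fresh ad-hoc signature.
    codesign --remove-signature "$_bin_path" 2>/dev/null || true
    codesign -s - -f "$_bin_path" 2>/dev/null \
      || echo "warning: codesign failed for $_bin_path" >&2
  done
  _codesign_status="done"
else
  _codesign_status="skipped (not Darwin arm64)"
fi
echo "codesign pass: $_codesign_status"
```

Failure paths only warn, so a broken `codesign` never aborts setup on Intel or CI hosts.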
@@ -0,0 +1,101 @@
import { describe, test, expect, beforeAll, afterAll } from 'bun:test';
import { runSkillTest } from './helpers/session-runner';
import {
  ROOT, runId, evalsEnabled,
  describeIfSelected, logCost, recordE2E,
  copyDirSync, createEvalCollector, finalizeEvalCollector,
} from './helpers/e2e-helpers';
import { spawnSync } from 'child_process';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';

// E2E for /autoplan's dual-voice (Claude subagent + Codex). Periodic tier:
// non-deterministic, costs ~$1/run, not a gate. The purpose is to catch
// regressions where one of the two voices fails silently post-hardening.

const evalCollector = createEvalCollector('e2e-autoplan-dual-voice');

describeIfSelected('Autoplan dual-voice E2E', ['autoplan-dual-voice'], () => {
  let workDir: string;
  let planPath: string;

  beforeAll(() => {
    workDir = fs.mkdtempSync(path.join(os.tmpdir(), 'skill-e2e-autoplan-dv-'));

    const run = (cmd: string, args: string[]) =>
      spawnSync(cmd, args, { cwd: workDir, stdio: 'pipe', timeout: 10000 });

    run('git', ['init', '-b', 'main']);
    run('git', ['config', 'user.email', 'test@test.com']);
    run('git', ['config', 'user.name', 'Test']);
    fs.writeFileSync(path.join(workDir, 'README.md'), '# test repo\n');
    run('git', ['add', '.']);
    run('git', ['commit', '-m', 'initial']);

    // Copy /autoplan + its review-skill dependencies (they're loaded from disk).
    copyDirSync(path.join(ROOT, 'autoplan'), path.join(workDir, 'autoplan'));
    copyDirSync(path.join(ROOT, 'plan-ceo-review'), path.join(workDir, 'plan-ceo-review'));
    copyDirSync(path.join(ROOT, 'plan-eng-review'), path.join(workDir, 'plan-eng-review'));
    copyDirSync(path.join(ROOT, 'plan-design-review'), path.join(workDir, 'plan-design-review'));
    copyDirSync(path.join(ROOT, 'plan-devex-review'), path.join(workDir, 'plan-devex-review'));

    // Write a tiny plan file for /autoplan to review.
    planPath = path.join(workDir, 'TEST_PLAN.md');
    fs.writeFileSync(planPath, `# Test Plan: add /greet skill

## Context
Add a new /greet skill that prints a welcome message.

## Scope
- Create greet/SKILL.md with a simple "hello" flow
- Add to gen-skill-docs pipeline
- One unit test
`);
  });

  afterAll(() => {
    finalizeEvalCollector(evalCollector);
    if (workDir && fs.existsSync(workDir)) {
      fs.rmSync(workDir, { recursive: true, force: true });
    }
  });

  // Skip entirely unless evals enabled (periodic tier).
  test.skipIf(!evalsEnabled)(
    'both Claude + Codex voices produce output in Phase 1 (within timeout)',
    async () => {
      // Fire /autoplan with a 5-min hard timeout on the spawn itself.
      // The skill itself has 10-min phase timeouts + auth-gate failfast.
      // If Codex is unavailable on the test machine, the skill should print
      // [codex-unavailable] and still complete the Claude subagent half.
      const result = await runSkillTest({
        name: 'autoplan-dual-voice',
        workdir: workDir,
        prompt: `/autoplan ${planPath}`,
        timeoutMs: 300_000, // 5 min
        evalCollector,
      });

      // Accept EITHER outcome as success:
      // (a) Both voices produced output (ideal case)
      // (b) Codex unavailable + Claude voice produced output (graceful degrade)
      const out = result.stdout + result.stderr;
      const claudeVoiceFired = /Claude\s+(CEO|subagent)|claude-subagent/i.test(out);
      const codexVoiceFired = /codex\s+(exec|review|CEO\s+voice)|\[via:codex\]/i.test(out);
      const codexUnavailable = /\[codex-unavailable\]|AUTH_FAILED|codex_cli_missing/i.test(out);

      expect(claudeVoiceFired).toBe(true);
      expect(codexVoiceFired || codexUnavailable).toBe(true);

      // Hang protection: if the skill reached Phase 1 at all, our hardening worked.
      // If it didn't, this is a regression from the pre-wave stdin-deadlock era.
      const reachedPhase1 = /Phase 1|CEO\s+Review|Strategy\s*&\s*Scope/i.test(out);
      expect(reachedPhase1).toBe(true);

      logCost(result);
      recordE2E('autoplan-dual-voice', result);
    },
    330_000, // per-test timeout slightly > spawn timeout so cleanup can run
  );
});
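The three voice-detection regexes above can be exercised in isolation. A small sketch with an invented transcript fragment; the sample string is hypothetical, while the regexes are copied verbatim from the test:

```typescript
// Voice-detection regexes copied from the E2E test above.
const claudeVoice = /Claude\s+(CEO|subagent)|claude-subagent/i;
const codexVoice = /codex\s+(exec|review|CEO\s+voice)|\[via:codex\]/i;
const codexDown = /\[codex-unavailable\]|AUTH_FAILED|codex_cli_missing/i;

// Hypothetical transcript fragment: Claude voice fired, Codex degraded.
const sample =
  'Phase 1: CEO Review\nClaude subagent says: scope is fine\n[codex-unavailable]';

console.log(claudeVoice.test(sample)); // true: "Claude subagent" matches
console.log(codexVoice.test(sample));  // false: no actual codex invocation line
console.log(codexDown.test(sample));   // true: graceful-degrade marker present
```

Note the test accepts `codexVoice || codexDown`, so this degraded transcript still passes.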