feat: model overlays with explicit --model flag (no auto-detect)

Adds a per-model behavioral patch layer orthogonal to the host axis.
Different LLMs have different tendencies (GPT won't stop, Gemini
over-explains, o-series wants structured output). Overlays nudge each
model toward better defaults for gstack workflows.

Codex review caught three landmines the prior reviews missed:
1. Host != model — Claude Code can run any Claude model, Codex runs
   GPT/o-series, Cursor fronts multiple providers. Auto-detecting from
   host would lie. Dropped auto-detect. --model is explicit (default
   claude). Missing overlay file → empty string (graceful).
2. Import cycle — putting Model in resolvers/types.ts would cycle
   through hosts/index. Created neutral scripts/models.ts instead.
3. "Final say" is dangerous — overlay at the end of preamble could
   override STOP points, AskUserQuestion gates, /ship review gates.
   Placed overlay after spawned-session-check but before voice + tier
   sections. Wrapper heading adds explicit subordination language on
   every overlay: "subordinate to skill workflow, STOP points,
   AskUserQuestion gates, plan-mode safety, and /ship review gates."

Changes:
- scripts/models.ts: new neutral module. ALL_MODEL_NAMES, Model type,
  resolveModel() for family heuristics (gpt-5.4-mini → gpt-5.4, o3 →
  o-series, claude-opus-4-7 → claude), validateModel() helper.
- scripts/resolvers/types.ts: import Model, add ctx.model field.
- scripts/resolvers/model-overlay.ts: new resolver. Reads
  model-overlays/{model}.md. Supports {{INHERIT:base}} directive at
  top of file for concat (gpt-5.4 inherits gpt). Cycle guard.
- scripts/resolvers/index.ts: register MODEL_OVERLAY resolver.
- scripts/resolvers/preamble.ts: wire generateModelOverlay into
  composition before voice. Print MODEL_OVERLAY: {model} in preamble
  bash so users can see which overlay is active. Filter empty sections.
- scripts/gen-skill-docs.ts: parse --model CLI flag. Default claude.
  Unknown model → throw with list of valid options.
- model-overlays/{claude,gpt,gpt-5.4,gemini,o-series}.md: behavioral
  patches per model family. gpt-5.4.md uses {{INHERIT:gpt}} to extend
  gpt.md without duplication.
- test/gen-skill-docs.test.ts: fix qa-only guardrail regex scope. It was
  matching Edit/Glob/Grep anywhere after `allowed-tools:` in the whole
  file; now it is scoped to the frontmatter only, so body prose that
  mentions Edit as a tool (the Claude overlay does) no longer trips it.
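The guardrail-scope fix can be sketched like this (helper name and skill content are illustrative, not the actual test harness):

```typescript
// Sketch of the fix: check allowed-tools only inside the YAML frontmatter,
// so tool names mentioned in body prose cannot trip the qa-only guardrail.
function frontmatterAllowedTools(doc: string): string {
  const fm = doc.match(/^---\n([\s\S]*?)\n---/); // frontmatter block only
  if (!fm) return '';
  const line = fm[1].match(/^allowed-tools:\s*(.*)$/m);
  return line ? line[1] : '';
}

const skill = [
  '---',
  'allowed-tools: Read, Bash',
  '---',
  'Body prose may mention Edit as a tool without tripping the guardrail.',
].join('\n');

// Guardrail: a qa-only skill must not grant Edit/Glob/Grep.
const violates = /\b(Edit|Glob|Grep)\b/.test(frontmatterAllowedTools(skill));
```

With the old whole-file regex, the `Edit` in the body line above would have been a false positive; scoped to frontmatter, `violates` is false.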

Verification:
- bun run gen:skill-docs --host all --dry-run → all fresh
- bun run gen:skill-docs --model gpt-5.4 → concat works, gpt.md +
  gpt-5.4.md content appears in order
- bun run gen:skill-docs --model unknown → errors with valid list
- All generated skills contain MODEL_OVERLAY: claude in preamble
- Golden ship fixtures regenerated

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Author: Garry Tan
Date: 2026-04-17 06:01:27 +08:00
parent 36ef6e9869, commit 6c8cf6774f
47 changed files with 890 additions and 6 deletions
scripts/gen-skill-docs.ts (+16 -1)
@@ -43,6 +43,21 @@ const HOST_ARG_VAL: HostArg = (() => {
// For single-host mode, HOST is the host. For --host all, it's set per iteration below.
let HOST: Host = HOST_ARG_VAL === 'all' ? 'claude' : HOST_ARG_VAL;
// ─── Model Overlay Selection ────────────────────────────────
// --model is explicit. We do NOT auto-detect from host (host ≠ model).
// Default is 'claude'. Missing overlay file → empty string (graceful).
import { ALL_MODEL_NAMES, resolveModel, type Model } from './models';
const MODEL_ARG = process.argv.find(a => a === '--model' || a.startsWith('--model='));
const MODEL_ARG_VAL: Model = (() => {
  if (!MODEL_ARG) return 'claude';
  // Supports `--model=x` and `--model x`; a bare `--model` yields '' and hits the error below.
  const val = MODEL_ARG.includes('=') ? MODEL_ARG.split('=')[1] : process.argv[process.argv.indexOf(MODEL_ARG) + 1] ?? '';
  const resolved = resolveModel(val);
  if (!resolved) {
    throw new Error(`Unknown model: ${val}. Use ${ALL_MODEL_NAMES.join(', ')}, or a family variant (e.g., claude-opus-4-7, gpt-5.4-mini, o3).`);
  }
  return resolved;
})();
// HostPaths, HOST_PATHS, and TemplateContext imported from ./resolvers/types (line 7-8)
// Design constants (AI_SLOP_BLACKLIST, OPENAI_HARD_REJECTIONS, OPENAI_LITMUS_CHECKS)
// live in ./resolvers/constants and are consumed by resolvers directly.
@@ -398,7 +413,7 @@ function processTemplate(tmplPath: string, host: Host = 'claude'): { outputPath:
const tierMatch = tmplContent.match(/^preamble-tier:\s*(\d+)$/m);
const preambleTier = tierMatch ? parseInt(tierMatch[1], 10) : undefined;
-const ctx: TemplateContext = { skillName, tmplPath, benefitsFrom, host, paths: HOST_PATHS[host], preambleTier };
+const ctx: TemplateContext = { skillName, tmplPath, benefitsFrom, host, paths: HOST_PATHS[host], preambleTier, model: MODEL_ARG_VAL };
// Replace placeholders (supports parameterized: {{NAME:arg1:arg2}})
// Config-driven: suppressedResolvers return empty string for this host
scripts/models.ts (+68, new file)
@@ -0,0 +1,68 @@
/**
* Model taxonomy — neutral module with no imports from hosts/ or resolvers/.
*
* Model families supported by model overlays in model-overlays/{family}.md.
* Host configs can reference these as `defaultModel` strings (validated at
* generation time), but the model axis is independent of the host axis.
*
* IMPORTANT: host ≠ model. Claude Code can run any Claude model (Opus, Sonnet,
* Haiku, future). Codex CLI runs GPT/o-series models. Cursor and OpenCode can
* front multiple providers. We do NOT auto-detect the model from the host —
* users pass --model explicitly. Default is 'claude'.
*/
export const ALL_MODEL_NAMES = [
  'claude',
  'gpt',
  'gpt-5.4',
  'gemini',
  'o-series',
] as const;
export type Model = (typeof ALL_MODEL_NAMES)[number];
/**
* Resolve a model argument from CLI input to a known Model family.
*
* Precedence rules:
* 1. Exact match against ALL_MODEL_NAMES → return as-is.
* 2. Family heuristics for common variants:
* - `gpt-5.4-mini`, `gpt-5.4-turbo`, `gpt-5.4-*` → `gpt-5.4`
* - `gpt-*` (anything else GPT) → `gpt`
* - `o3`, `o4`, `o4-mini`, `o1`, `o1-mini`, `o1-pro` → `o-series`
* - `claude-*` (sonnet, opus, haiku, any version) → `claude`
* - `gemini-*` (2.5-pro, flash, etc.) → `gemini`
* 3. Unknown input → returns null (caller decides: error, or fall back).
*
* The resolver file in model-overlays/{model}.md applies further fallback
* (e.g., missing gpt-5.4.md falls back to gpt.md). This function only
* normalizes CLI input to a family name.
*/
export function resolveModel(input: string): Model | null {
  const s = input.trim();
  if (!s) return null;
  // Exact match first
  if ((ALL_MODEL_NAMES as readonly string[]).includes(s)) {
    return s as Model;
  }
  // Family heuristics
  if (/^gpt-5\.4(-|$)/.test(s)) return 'gpt-5.4';
  if (/^gpt(-|$)/.test(s)) return 'gpt';
  if (/^o[0-9]+(-|$)/.test(s)) return 'o-series';
  if (/^claude(-|$)/.test(s)) return 'claude';
  if (/^gemini(-|$)/.test(s)) return 'gemini';
  return null;
}
/**
* Validate a string against ALL_MODEL_NAMES. Used by host-config validators
* when a HostConfig declares `defaultModel`. Returns an error message or null
* if valid.
*/
export function validateModel(input: string): string | null {
  if ((ALL_MODEL_NAMES as readonly string[]).includes(input)) return null;
  return `'${input}' is not a known model. Use ${ALL_MODEL_NAMES.join(', ')}.`;
}
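As a sanity check, the precedence can be restated standalone (same regexes duplicated here so the snippet runs on its own; this is illustration, not the module):

```typescript
// Self-contained restatement of the resolveModel() precedence:
// exact match, then most-specific family heuristic, then null.
const FAMILIES = ['claude', 'gpt', 'gpt-5.4', 'gemini', 'o-series'] as const;

function resolveFamily(input: string): string | null {
  const s = input.trim();
  if (!s) return null;
  if ((FAMILIES as readonly string[]).includes(s)) return s; // exact match wins
  if (/^gpt-5\.4(-|$)/.test(s)) return 'gpt-5.4';            // most specific family first
  if (/^gpt(-|$)/.test(s)) return 'gpt';
  if (/^o[0-9]+(-|$)/.test(s)) return 'o-series';
  if (/^claude(-|$)/.test(s)) return 'claude';
  if (/^gemini(-|$)/.test(s)) return 'gemini';
  return null;                                               // caller decides how to fail
}
```

The `gpt-5.4` check must precede the bare `gpt` check or every 5.4 variant would collapse into the generic family.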
scripts/resolvers/index.ts (+2)
@@ -18,6 +18,7 @@ import { generateConfidenceCalibration } from './confidence';
import { generateInvokeSkill } from './composition';
import { generateReviewArmy } from './review-army';
import { generateDxFramework } from './dx';
import { generateModelOverlay } from './model-overlay';
export const RESOLVERS: Record<string, ResolverFn> = {
  SLUG_EVAL: generateSlugEval,
@@ -62,4 +63,5 @@ export const RESOLVERS: Record<string, ResolverFn> = {
  REVIEW_ARMY: generateReviewArmy,
  CROSS_REVIEW_DEDUP: generateCrossReviewDedup,
  DX_FRAMEWORK: generateDxFramework,
  MODEL_OVERLAY: generateModelOverlay,
};
scripts/resolvers/model-overlay.ts (+60, new file)
@@ -0,0 +1,60 @@
/**
* Model overlay resolver reads model-overlays/{model}.md and returns it
* wrapped in a subordinate behavioral-patch section.
*
* Precedence:
* 1. Exact match: ctx.model === 'gpt-5.4' reads model-overlays/gpt-5.4.md
* 2. INHERIT directive: if the file's first non-whitespace line is
* `{{INHERIT:claude}}`, the resolver reads model-overlays/claude.md first
* and concatenates it ahead of the rest of this file's content.
* This lets `gpt-5.4.md` build on top of `gpt.md` without duplication.
* 3. Missing file: returns empty string (graceful degradation, no error).
* 4. No ctx.model set: returns empty string.
*
* The returned block is subordinate to skill workflow, safety gates, and
* AskUserQuestion instructions. The subordination language is part of the
* wrapper heading so it appears with every overlay regardless of file content.
*/
import * as fs from 'fs';
import * as path from 'path';
import type { TemplateContext } from './types';
const OVERLAY_DIR = path.resolve(import.meta.dir, '../../model-overlays');
const INHERIT_RE = /^\s*\{\{INHERIT:([a-z0-9-]+(?:\.[0-9]+)*)\}\}\s*\n/;
function readOverlay(model: string, seen: Set<string> = new Set()): string {
  if (seen.has(model)) return ''; // cycle guard
  seen.add(model);
  const filePath = path.join(OVERLAY_DIR, `${model}.md`);
  if (!fs.existsSync(filePath)) return '';
  const raw = fs.readFileSync(filePath, 'utf-8');
  const match = raw.match(INHERIT_RE);
  if (!match) return raw.trim();
  const baseModel = match[1];
  const base = readOverlay(baseModel, seen);
  const rest = raw.replace(INHERIT_RE, '').trim();
  if (!base) return rest;
  return `${base}\n\n${rest}`;
}
export function generateModelOverlay(ctx: TemplateContext): string {
  if (!ctx.model) return '';
  const content = readOverlay(ctx.model);
  if (!content) return '';
  return `## Model-Specific Behavioral Patch (${ctx.model})
The following nudges are tuned for the ${ctx.model} model family. They are
**subordinate** to skill workflow, STOP points, AskUserQuestion gates, plan-mode
safety, and /ship review gates. If a nudge below conflicts with skill instructions,
the skill wins. Treat these as preferences, not rules.
${content}`;
}
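The INHERIT concat and cycle guard can be exercised against an in-memory map in place of model-overlays/*.md (overlay contents here are made up for illustration):

```typescript
// Hypothetical overlay contents keyed by model family, standing in for files.
const OVERLAYS: Record<string, string> = {
  gpt: 'Prefer terse diffs.',
  'gpt-5.4': '{{INHERIT:gpt}}\nBatch tool calls.',
  a: '{{INHERIT:b}}\nA body.', // a and b form a deliberate cycle
  b: '{{INHERIT:a}}\nB body.',
};

const INHERIT = /^\s*\{\{INHERIT:([a-z0-9-]+(?:\.[0-9]+)*)\}\}\s*\n/;

// Mirrors readOverlay(): base content is concatenated ahead of the rest,
// revisited models short-circuit to '', missing entries degrade to ''.
function read(model: string, seen = new Set<string>()): string {
  if (seen.has(model)) return ''; // cycle guard
  seen.add(model);
  const raw = OVERLAYS[model];
  if (raw === undefined) return ''; // missing overlay → graceful empty
  const m = raw.match(INHERIT);
  if (!m) return raw.trim();
  const base = read(m[1], seen);
  const rest = raw.replace(INHERIT, '').trim();
  return base ? `${base}\n\n${rest}` : rest;
}
```

So `read('gpt-5.4')` yields gpt.md's content followed by gpt-5.4's own additions, and a cyclic pair terminates instead of recursing forever.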
scripts/resolvers/preamble.ts (+4 -1)
@@ -1,5 +1,6 @@
import type { TemplateContext } from './types';
import { getHostConfig } from '../../hosts/index';
import { generateModelOverlay } from './model-overlay';
/**
* Preamble architecture: why every skill needs this
@@ -97,6 +98,7 @@ if [ -d ".claude/skills/gstack" ] && [ ! -L ".claude/skills/gstack" ]; then
fi
fi
echo "VENDORED_GSTACK: $_VENDORED"
echo "MODEL_OVERLAY: ${ctx.model ?? 'none'}"
# Detect spawned session (OpenClaw or other orchestrator)
[ -n "$OPENCLAW_SESSION" ] && echo "SPAWNED_SESSION: true" || true
\`\`\``;
@@ -747,10 +749,11 @@ export function generatePreamble(ctx: TemplateContext): string {
  generateRoutingInjection(ctx),
  generateVendoringDeprecation(ctx),
  generateSpawnedSessionCheck(),
  generateModelOverlay(ctx),
  generateVoiceDirective(tier),
  ...(tier >= 2 ? [generateContextRecovery(ctx), generateAskUserFormat(ctx), generateCompletenessSection(), generateContextHealth()] : []),
  ...(tier >= 3 ? [generateRepoModeSection(), generateSearchBeforeBuildingSection(ctx)] : []),
  generateCompletionStatus(ctx),
];
-return sections.join('\n\n');
+return sections.filter(s => s && s.trim().length > 0).join('\n\n');
}
scripts/resolvers/types.ts (+4)
@@ -47,6 +47,9 @@ function buildHostPaths(): Record<string, HostPaths> {
export const HOST_PATHS: Record<string, HostPaths> = buildHostPaths();
import type { Model } from '../models';
export type { Model } from '../models';
export interface TemplateContext {
  skillName: string;
  tmplPath: string;
@@ -54,6 +57,7 @@ export interface TemplateContext {
  host: Host;
  paths: HostPaths;
  preambleTier?: number; // 1-4, controls which preamble sections are included
  model?: Model; // model family for behavioral overlay. Omitted/undefined → no overlay.
}
/** Resolver function signature. args is populated for parameterized placeholders like {{INVOKE_SKILL:name}}. */