mirror of https://github.com/garrytan/gstack.git (synced 2026-05-07 14:06:42 +02:00)

Commit bec54c2b40
When a host launches Claude Code with --disallowedTools AskUserQuestion (Conductor does this by default — verified via ps on the live conductor claude process), the native AskUserQuestion tool is removed from the model's tool registry. Skill templates that say "call AskUserQuestion" silently fail in that environment: the model can't ask, the user never sees the question, the skill auto-proceeds without input.

The fix is preamble guidance, not a skill-template change:

- generate-ask-user-format.ts: new "Tool resolution" section at the top of the AskUserQuestion Format block. Tells the model that "AskUserQuestion" can resolve to two tools at runtime — the host MCP variant (e.g. mcp__conductor__AskUserQuestion, registered when the host injects it) and the native tool — and to PREFER any mcp__*__AskUserQuestion variant. Same questions/options shape; same decision-brief format. If neither variant is callable, fall back to writing a "## Decisions to confirm" section into the plan file plus ExitPlanMode (the native plan-mode confirmation surfaces it). Never silently auto-decide.
- generate-completion-status.ts: the plan-mode-info block (preamble position 1) now explicitly notes that AskUserQuestion satisfies plan mode's end-of-turn requirement for "any variant" and points at the Tool resolution section for the fallback path.

This puts the resolution rule in front of every tier-≥2 skill via the preamble, so plan-mode review skills (plan-ceo-review, plan-eng-review, plan-design-review, plan-devex-review, autoplan, office-hours) all gain the fix without per-template surgery.

Includes regenerated SKILL.md files for all 41 skills + the 3 host-ship golden fixtures used by test/host-config.test.ts.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
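The resolution rule lands in the generated preamble as prose, not as executable logic. As a rough TypeScript sketch of what that prose asks the model to do (the function name and signature below are invented for illustration and are not part of the repo):

```typescript
// Illustrative only: the preference order the new "Tool resolution" section states.
// Nothing like this ships in the skill templates; the rule is expressed as prose.
export function pickAskUserQuestionTool(availableTools: string[]): string | null {
  // Prefer a host MCP variant such as mcp__conductor__AskUserQuestion. Hosts that
  // pass --disallowedTools AskUserQuestion strip the native tool from the registry,
  // so the MCP variant is the only one that actually reaches the user there.
  const mcpVariant = availableTools.find(
    (name) => name.startsWith('mcp__') && name.endsWith('__AskUserQuestion'),
  );
  if (mcpVariant) return mcpVariant;

  // Native Claude Code tool, when the host has not disabled it.
  if (availableTools.includes('AskUserQuestion')) return 'AskUserQuestion';

  // Neither variant callable: the caller falls back to a "## Decisions to confirm"
  // plan-file section plus ExitPlanMode, and never silently auto-decides.
  return null;
}
```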
63 lines
3.6 KiB
TypeScript
import type { TemplateContext } from '../types';

export function generateAskUserFormat(_ctx: TemplateContext): string {
  return `## AskUserQuestion Format

### Tool resolution (read first)

"AskUserQuestion" can resolve to two tools at runtime: the **host MCP variant** (e.g. \`mcp__conductor__AskUserQuestion\` — appears in your tool list when the host registers it) or the **native** Claude Code tool.

**Rule:** if any \`mcp__*__AskUserQuestion\` variant is in your tool list, prefer it. Hosts may disable native AUQ via \`--disallowedTools AskUserQuestion\` (Conductor does, by default) and route through their MCP variant; calling native there silently fails. Same questions/options shape; same decision-brief format applies.

**Fallback when neither variant is callable:** in plan mode, write the decision brief into the plan file as a \`## Decisions to confirm\` section + ExitPlanMode (the native "Ready to execute?" surfaces it). Outside plan mode, output the brief as prose and stop. **Never silently auto-decide** — only \`/plan-tune\` AUTO_DECIDE opt-ins authorize auto-picking.

### Format

Every AskUserQuestion is a decision brief and must be sent as tool_use, not prose.

\`\`\`
D<N> — <one-line question title>
Project/branch/task: <1 short grounding sentence using _BRANCH>
ELI10: <plain English a 16-year-old could follow, 2-4 sentences, name the stakes>
Stakes if we pick wrong: <one sentence on what breaks, what user sees, what's lost>
Recommendation: <choice> because <one-line reason>
Completeness: A=X/10, B=Y/10 (or: Note: options differ in kind, not coverage — no completeness score)
Pros / cons:
A) <option label> (recommended)
✅ <pro — concrete, observable, ≥40 chars>
❌ <con — honest, ≥40 chars>
B) <option label>
✅ <pro>
❌ <con>
Net: <one-line synthesis of what you're actually trading off>
\`\`\`

D-numbering: first question in a skill invocation is \`D1\`; increment yourself. This is a model-level instruction, not a runtime counter.

ELI10 is always present, in plain English, not function names. Recommendation is ALWAYS present. Keep the \`(recommended)\` label; AUTO_DECIDE depends on it.

Completeness: use \`Completeness: N/10\` only when options differ in coverage. 10 = complete, 7 = happy path, 3 = shortcut. If options differ in kind, write: \`Note: options differ in kind, not coverage — no completeness score.\`

Pros / cons: use ✅ and ❌. Minimum 2 pros and 1 con per option when the choice is real; Minimum 40 characters per bullet. Hard-stop escape for one-way/destructive confirmations: \`✅ No cons — this is a hard-stop choice\`.

Neutral posture: \`Recommendation: <default> — this is a taste call, no strong preference either way\`; \`(recommended)\` STAYS on the default option for AUTO_DECIDE.

Effort both-scales: when an option involves effort, label both human-team and CC+gstack time, e.g. \`(human: ~2 days / CC: ~15 min)\`. Makes AI compression visible at decision time.

Net line closes the tradeoff. Per-skill instructions may add stricter rules.

### Self-check before emitting

Before calling AskUserQuestion, verify:
- [ ] D<N> header present
- [ ] ELI10 paragraph present (stakes line too)
- [ ] Recommendation line present with concrete reason
- [ ] Completeness scored (coverage) OR kind-note present (kind)
- [ ] Every option has ≥2 ✅ and ≥1 ❌, each ≥40 chars (or hard-stop escape)
- [ ] (recommended) label on one option (even for neutral-posture)
- [ ] Dual-scale effort labels on effort-bearing options (human / CC)
- [ ] Net line closes the decision
- [ ] You are calling the tool, not writing prose
`;
}
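Because the preamble text is what every tier-≥2 skill actually sees, the simplest guard is a golden-style assertion over the generated block. Below is a minimal sketch in the spirit of the fixtures mentioned in the commit message; the import path, the node:test runner, and the file layout are assumptions, not the repo's actual test setup:

```typescript
// Sketch only: pin the two behaviors the commit cares about in the generated block.
import { strict as assert } from 'node:assert';
import { test } from 'node:test';

// Assumed relative path; the real project structure may differ.
import { generateAskUserFormat } from './generate-ask-user-format';

test('AskUserQuestion format block carries the tool-resolution guidance', () => {
  // The generator ignores its context argument (it is named _ctx), so an empty
  // object is enough for this check.
  const block = generateAskUserFormat({} as never);

  // Resolution rule: prefer any mcp__*__AskUserQuestion variant over the native tool.
  assert.match(block, /### Tool resolution \(read first\)/);
  assert.match(block, /mcp__\*__AskUserQuestion/);

  // Fallback path: plan-file decision section, never a silent auto-decision.
  assert.match(block, /## Decisions to confirm/);
  assert.match(block, /Never silently auto-decide/);
});
```

Pinning the exact heading and the never-auto-decide sentence keeps regenerated SKILL.md files from silently dropping the section.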