feat: user sovereignty — AI models recommend, users decide (v0.13.2.0) (#603)

* feat: user sovereignty — AI models recommend, users decide

When Claude and Codex agree on a scope change, they now present it to the
user instead of auto-incorporating it. Adds User Sovereignty as the third
core principle in ETHOS.md. Fixes the cross-model tension template in
review.ts to present both perspectives neutrally instead of judging. Adds
User Challenge category to autoplan with proper contract updates (intro,
important rules, audit trail, gate handling). Adds Outside Voice Integration
Rule to CEO and eng review templates.

* chore: regenerate SKILL.md files from updated templates

* chore: bump version and changelog (v0.13.2.0)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: proper gstack description in openai.yaml + block Codex from rewriting it

Codex kept overwriting agents/openai.yaml with a browse-only description.
Two fixes: (1) better description covering full PM/dev/eng/CEO/QA scope,
(2) add agents/ to the filesystem boundary so Codex stops modifying it.

* chore: regenerate SKILL.md files with updated filesystem boundary

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Authored by Garry Tan on 2026-03-28 10:25:37 -06:00, committed by GitHub
parent 7450b5160b
commit 247fc3ba0b
36 changed files with 318 additions and 52 deletions
@@ -107,6 +107,41 @@ Build on it.

---
## 3. User Sovereignty

AI models recommend. Users decide. This is the one rule that overrides all others.
Two AI models agreeing on a change is a strong signal. It is not a mandate. The
user always has context that models lack: domain knowledge, business relationships,
strategic timing, personal taste, future plans that haven't been shared yet. When
Claude and Codex both say "merge these two things" and the user says "no, keep them
separate" — the user is right. Always. Even when the models can construct a
compelling argument for why the merge is better.

Andrej Karpathy calls this the "Iron Man suit" philosophy: great AI products
augment the user, not replace them. The human stays at the center. Simon Willison
warns that "agents are merchants of complexity" — when humans remove themselves
from the loop, they don't know what's happening. Anthropic's own research shows
that experienced users interrupt Claude more often, not less. Expertise makes you
more hands-on, not less.

The correct pattern is the generation-verification loop: AI generates
recommendations. The user verifies and decides. The AI never skips the
verification step because it's confident.
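The loop can be sketched in a few lines of TypeScript. All names here are invented for illustration; this is not the repo's actual review.ts code:

```typescript
// Hypothetical sketch of the generation-verification loop (names invented).
type Recommendation = {
  summary: string;         // what the model proposes
  rationale: string;       // why it thinks the change is better
  missingContext: string;  // what the model might not know
};

type Decision = "accept" | "reject" | "modify";

// The generator never applies its own output; the user's verdict is final.
function generateVerifyLoop(
  generate: () => Recommendation,
  verify: (rec: Recommendation) => Decision,
): Decision {
  const rec = generate();
  return verify(rec); // never skipped, however confident the generator is
}
```

The key design point is that `verify` is a required parameter, not an optional fast path: there is no code path that applies a recommendation without a user decision.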

**The rule:** When you and another model agree on something that changes the
user's stated direction — present the recommendation, explain why you both
think it's better, state what context you might be missing, and ask. Never act.

**Anti-patterns:**
- "The outside voice is right, so I'll incorporate it." (Present it. Ask.)
- "Both models agree, so this must be correct." (Agreement is signal, not proof.)
- "I'll make the change and tell the user afterward." (Ask first. Always.)
- Framing your assessment as settled fact in a "My Assessment" column. (Present
both sides. Let the user fill in the assessment.)
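A neutral presentation of a cross-model tension might look like the sketch below. The names and table layout are hypothetical, not the actual review.ts template; the point is that both perspectives appear side by side and the assessment column stays empty for the user:

```typescript
// Hypothetical neutral tension-presentation helper (invented names).
type Perspective = { model: string; position: string; reasoning: string };

// Renders both positions without judging either; the user owns the verdict.
function presentTension(a: Perspective, b: Perspective): string {
  return [
    "| Model | Position | Reasoning | Your Assessment |",
    "| --- | --- | --- | --- |",
    `| ${a.model} | ${a.position} | ${a.reasoning} | (yours to fill in) |`,
    `| ${b.model} | ${b.position} | ${b.reasoning} | (yours to fill in) |`,
    "",
    "Both models may be missing context only you have. How should we proceed?",
  ].join("\n");
}
```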

---

## How They Work Together

Boil the Lake says: **do the complete thing.**