New command: $D evolve --screenshot current.png --brief "make it calmer"
Two-step process: first analyzes the screenshot via GPT-4o vision to
produce a detailed description, then generates a new mockup that keeps
the existing layout structure but applies the requested changes. Starts
from reality, not a blank canvas.
Bridges the gap between /design-review critique ("the spacing is off")
and a visual proposal of the fix.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
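The describe-then-generate flow above can be sketched as a prompt-assembly step. This is illustrative only: `buildEvolvePrompt` and its wording are hypothetical, not the actual implementation; the vision call that produces `description` is elided.

```typescript
// Hypothetical sketch: combine the vision-derived description of the
// screenshot with the user's brief into one generation prompt, so the
// new mockup keeps the existing layout but applies the change.
function buildEvolvePrompt(description: string, brief: string): string {
  return [
    "Recreate this UI mockup, preserving its layout structure:",
    description,
    `Apply the following change: ${brief}`,
  ].join("\n\n");
}

const prompt = buildEvolvePrompt(
  "A dashboard with a left sidebar and dense metric cards.",
  "make it calmer"
);
```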
New commands:
- $D diff --before old.png --after new.png: visual diff using GPT-4o
vision. Returns differences by area with severity (high/medium/low)
and a matchScore (0-100).
- $D verify --mockup approved.png --screenshot live.png: compares live
site screenshot against approved design mockup. Pass if matchScore
>= 70 and no high-severity differences.
Used by /design-review to close the design loop: design -> implement ->
verify visually.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
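The verify pass/fail rule is pure logic and can be stated directly. Types here are illustrative; the real CLI's JSON shapes may differ.

```typescript
// A difference reported by the vision diff: an area plus a severity.
type Difference = { area: string; severity: "high" | "medium" | "low" };

// Pass when matchScore >= 70 and no high-severity differences exist.
function verifyPasses(matchScore: number, diffs: Difference[]): boolean {
  return matchScore >= 70 && !diffs.some((d) => d.severity === "high");
}
```

So a 90 matchScore still fails if any single difference is high-severity, while a 70 with only low/medium differences passes.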
New `$D extract` command: sends approved mockup to GPT-4o vision,
extracts color palette, typography, spacing, and layout patterns,
writes/updates DESIGN.md with an "Extracted Design Language" section.
Progressive constraint: if DESIGN.md exists, future mockup briefs
include it as style context. If no DESIGN.md, explorations run wide.
readDesignConstraints() reads existing DESIGN.md for brief construction.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
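The progressive-constraint behavior reduces to one branch in brief construction. A minimal sketch, assuming `readDesignConstraints()` has already returned the DESIGN.md text or null; the function name and prompt wording are hypothetical.

```typescript
// If DESIGN.md content exists, prepend it as style context; otherwise
// pass the brief through unchanged so explorations run wide.
function buildConstrainedBrief(brief: string, designMd: string | null): string {
  if (designMd === null) return brief; // no constraints yet: explore widely
  return `Follow this design language:\n${designMd}\n\nBrief: ${brief}`;
}
```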
variants: generates N style variations via staggered parallel launches
(1.5s apart, exponential backoff on 429). 7 built-in style variations
(bold, calm, warm, corporate, dark, playful + default).
Tested: 3/3 variants in 41.6s.
iterate: multi-turn design iteration using previous_response_id for
conversational threading. Falls back to re-generation with accumulated
feedback if threading doesn't retain visual context.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
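The launch/retry timing described above can be sketched as two pure functions. The 1.5s stagger comes from the text; the backoff base and cap are assumptions, not taken from the actual implementation.

```typescript
const STAGGER_MS = 1500;

// Variant N launches N * 1.5s after the first, to spread request bursts.
function launchDelayMs(variantIndex: number): number {
  return variantIndex * STAGGER_MS; // 0ms, 1500ms, 3000ms, ...
}

// On a 429, wait base * 2^attempt, capped (base/cap are illustrative).
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30000): number {
  return Math.min(baseMs * 2 ** attempt, capMs); // 1s, 2s, 4s, ... up to 30s
}
```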
Stateless CLI (design/dist/design) wrapping OpenAI Responses API for
UI mockup generation. Three working commands:
- generate: brief -> PNG mockup via gpt-4o + image_generation tool
- check: vision-based quality gate via GPT-4o (text readability, layout
completeness, visual coherence)
- compare: generates self-contained HTML comparison board with star
ratings, radio Pick, per-variant feedback, regenerate controls,
and Submit button that writes structured JSON for agent polling
Auth reads from ~/.gstack/openai.json (0600), falls back to
OPENAI_API_KEY env var. Compiled separately from browse binary
(openai added to devDependencies, not runtime deps).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
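The auth precedence is file first, env var second. A minimal sketch: the file read and 0600 permission check are elided, the function takes the parsed JSON (or null) directly, and the `apiKey` field name is an assumption about the file's shape.

```typescript
// Assumed shape of ~/.gstack/openai.json after JSON.parse; null if absent.
type OpenAIAuthFile = { apiKey?: string } | null;

// Key from the auth file wins; OPENAI_API_KEY is the fallback.
function resolveApiKey(
  file: OpenAIAuthFile,
  env: Record<string, string | undefined>
): string | null {
  if (file?.apiKey) return file.apiKey;
  return env.OPENAI_API_KEY ?? null;
}
```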
Prototype script sends 3 design briefs to OpenAI Responses API with
image_generation tool. Results: dashboard (47s, 2.1MB), landing page
(42s, 1.3MB), settings page (37s, 1.3MB) all produce real, implementable
UI mockups with accurate text rendering and clean layouts.
Key finding: Codex OAuth tokens lack image generation scopes. Direct
API key (sk-proj-*) required, stored in ~/.gstack/openai.json.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>