mirror of
https://github.com/garrytan/gstack.git
synced 2026-05-05 05:05:08 +02:00
feat: skill-specific Search Before Building integrations

8 template changes:
- /office-hours: Phase 2.75 Landscape Awareness (WebSearch + three-layer synthesis)
- /plan-eng-review: Step 0 search check with layer provenance annotations
- /investigate: external pattern search + search escalation on hypothesis failure
- /plan-ceo-review: Landscape Check before scope challenge
- /review: search-before-recommending for fix patterns
- /qa-only: WebSearch in allowed-tools
- /design-consultation: three-layer synthesis backport in Phase 2 Step 3
- /retro: eureka moment tracking from ~/.gstack/analytics/eureka.jsonl

All search steps include WebSearch fallback clause.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@@ -343,7 +343,12 @@ If browse is not available, rely on WebSearch results and your built-in design k

**Step 3: Synthesize findings**

The goal of research is NOT to copy. It is to get in the ballpark — to understand the visual language users in this category already expect. This gives you the baseline. The interesting design work starts after you have the baseline: deciding where to follow conventions (so the product feels literate) and where to break from them (so the product is memorable).

**Three-layer synthesis:**

- **Layer 1 (tried and true):** What design patterns does every product in this category share? These are table stakes — users expect them.
- **Layer 2 (new and popular):** What are the search results and current design discourse saying? What's trending? What new patterns are emerging?
- **Layer 3 (first principles):** Given what we know about THIS product's users and positioning — is there a reason the conventional design approach is wrong? Where should we deliberately break from the category norms?

**Eureka check:** If Layer 3 reasoning reveals a genuine design insight — a reason the category's visual language fails THIS product — name it: "EUREKA: Every [category] product does X because they assume [assumption]. But this product's users [evidence] — so we should do Y instead." Log the eureka moment (see preamble).
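A minimal sketch of what "log the eureka moment" could look like as a helper. The file path matches the `~/.gstack/analytics/eureka.jsonl` location /retro reads, but the entry fields (`ts`, `skill`, `branch`, `summary`) and the function name are assumptions inferred from what /retro displays, not taken from the repo:

```typescript
import { appendFileSync, mkdirSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Hypothetical entry shape, inferred from the fields /retro displays.
// The real schema may differ.
export interface EurekaEntry {
  ts: string;      // ISO timestamp; /retro filters the retro window on this
  skill: string;   // e.g. "/office-hours"
  branch: string;
  summary: string; // one-line insight
}

export function logEureka(
  entry: EurekaEntry,
  dir: string = join(homedir(), ".gstack", "analytics"),
): string {
  mkdirSync(dir, { recursive: true });        // create analytics dir if missing
  const line = JSON.stringify(entry) + "\n";  // one JSON object per line (JSONL)
  appendFileSync(join(dir, "eureka.jsonl"), line);
  return line;
}
```

Append-only JSONL keeps the log cheap to write from any skill and trivial for /retro to scan later.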

Summarize conversationally:

> "I looked at what's out there. Here's the landscape: they converge on [patterns]. Most of them feel [observation — e.g., interchangeable, polished but generic, etc.]. The opportunity to stand out is [gap]. Here's where I'd play it safe and where I'd take a risk..."

@@ -309,6 +309,12 @@ Also check:

- `TODOS.md` for related known issues
- `git log` for prior fixes in the same area — **recurring bugs in the same files are an architectural smell**, not a coincidence

**External pattern search:** If the bug doesn't match a known pattern above, WebSearch for:

- "{framework} {error message or symptom}"
- "{library} {component} known issues"

If WebSearch is unavailable, skip this search and proceed with hypothesis testing. If a documented solution or known dependency bug surfaces, present it as a candidate hypothesis in Phase 3.

---

## Phase 3: Hypothesis Testing

@@ -317,7 +323,7 @@ Before writing ANY fix, verify your hypothesis.

1. **Confirm the hypothesis:** Add a temporary log statement, assertion, or debug output at the suspected root cause. Run the reproduction. Does the evidence match?

-2. **If the hypothesis is wrong:** Return to Phase 1. Gather more evidence. Do not guess.
+2. **If the hypothesis is wrong:** Before forming the next hypothesis, WebSearch for the exact error message (quoted) and "{component} {unexpected behavior} {framework version}". This often surfaces version-specific regressions or known issues that save hypothesis cycles. If WebSearch is unavailable, skip and proceed. Then return to Phase 1. Gather more evidence. Do not guess.

3. **3-strike rule:** If 3 hypotheses fail, **STOP**. Use AskUserQuestion:

```
@@ -467,6 +467,37 @@ If no matches found, proceed silently.

---

## Phase 2.75: Landscape Awareness

Read ETHOS.md for the full Search Before Building framework (three layers, eureka moments). The preamble's Search Before Building section has the ETHOS.md path.

After understanding the problem through questioning, search for what the world thinks. This is NOT competitive research (that's /design-consultation's job). This is understanding conventional wisdom so you can evaluate where it's wrong.

If WebSearch is unavailable, skip this phase and note: "Search unavailable — proceeding with in-distribution knowledge only."

**Startup mode:** WebSearch for:

- "[problem space] startup approach {current year}"
- "[problem space] common mistakes"
- "why [incumbent solution] fails" OR "why [incumbent solution] works"

**Builder mode:** WebSearch for:

- "[thing being built] existing solutions"
- "[thing being built] open source alternatives"
- "best [thing category] {current year}"

Read the top 2-3 results. Run the three-layer synthesis:

- **[Layer 1]** What does everyone already know about this space?
- **[Layer 2]** What are the search results and current discourse saying?
- **[Layer 3]** Given what WE learned in Phase 2A/2B — is there a reason the conventional approach is wrong?

**Eureka check:** If Layer 3 reasoning reveals a genuine insight, name it: "EUREKA: Everyone does X because they assume [assumption]. But [evidence from our conversation] suggests that's wrong here. This means [implication]." Log the eureka moment (see preamble).

If no eureka moment exists, say: "The conventional wisdom seems sound here. Let's build on it." Proceed to Phase 3.

**Important:** This search feeds Phase 3 (Premise Challenge). If you found reasons the conventional approach fails, those become premises to challenge. If conventional wisdom is solid, that raises the bar for any premise that contradicts it.

---

## Phase 3: Premise Challenge

Before proposing solutions, challenge the premises:

@@ -397,6 +397,22 @@ Analyze the plan. If it involves ANY of: new UI screens/pages, changes to existi

Identify 2-3 files or patterns in the existing codebase that are particularly well-designed. Note them as style references for the review. Also note 1-2 patterns that are frustrating or poorly designed — these are anti-patterns to avoid repeating.

Report findings before proceeding to Step 0.

### Landscape Check

Read ETHOS.md for the Search Before Building framework (the preamble's Search Before Building section has the path). Before challenging scope, understand the landscape. WebSearch for:

- "[product category] landscape {current year}"
- "[key feature] alternatives"
- "why [incumbent/conventional approach] [succeeds/fails]"

If WebSearch is unavailable, skip this check and note: "Search unavailable — proceeding with in-distribution knowledge only."

Run the three-layer synthesis:

- **[Layer 1]** What's the tried-and-true approach in this space?
- **[Layer 2]** What are the search results saying?
- **[Layer 3]** First-principles reasoning — where might the conventional wisdom be wrong?

Feed into the Premise Challenge (0A) and Dream State Mapping (0C). If you find a eureka moment, surface it during the Expansion opt-in ceremony as a differentiation opportunity. Log it (see preamble).

## Step 0: Nuclear Scope Challenge + Mode Selection

### 0A. Premise Challenge

@@ -313,7 +313,15 @@ Before reviewing anything, answer these questions:

1. **What existing code already partially or fully solves each sub-problem?** Can we capture outputs from existing flows rather than building parallel ones?
2. **What is the minimum set of changes that achieves the stated goal?** Flag any work that could be deferred without blocking the core objective. Be ruthless about scope creep.
3. **Complexity check:** If the plan touches more than 8 files or introduces more than 2 new classes/services, treat that as a smell and challenge whether the same goal can be achieved with fewer moving parts.
-4. **TODOS cross-reference:** Read `TODOS.md` if it exists. Are any deferred items blocking this plan? Can any deferred items be bundled into this PR without expanding scope? Does this plan create new work that should be captured as a TODO?
+4. **Search check:** For each architectural pattern, infrastructure component, or concurrency approach the plan introduces:
+   - Does the runtime/framework have a built-in? Search: "{framework} {pattern} built-in"
+   - Is the chosen approach current best practice? Search: "{pattern} best practice {current year}"
+   - Are there known footguns? Search: "{framework} {pattern} pitfalls"
+
+   If WebSearch is unavailable, skip this check and note: "Search unavailable — proceeding with in-distribution knowledge only."
+
+   If the plan rolls a custom solution where a built-in exists, flag it as a scope reduction opportunity. Annotate recommendations with **[Layer 1]**, **[Layer 2]**, **[Layer 3]**, or **[EUREKA]** (see preamble's Search Before Building section). If you find a eureka moment — a reason the standard approach is wrong for this case — present it as an architectural insight.
+5. **TODOS cross-reference:** Read `TODOS.md` if it exists. Are any deferred items blocking this plan? Can any deferred items be bundled into this PR without expanding scope? Does this plan create new work that should be captured as a TODO?

5. **Completeness check:** Is the plan doing the complete version or a shortcut? With AI-assisted coding, the cost of completeness (100% test coverage, full edge case handling, complete error paths) is 10-100x cheaper than with a human team. If the plan proposes a shortcut that saves human-hours but only saves minutes with CC+gstack, recommend the complete version. Boil the lake.

@@ -389,6 +389,20 @@ If TODOS.md doesn't exist, skip the Backlog Health row.

If the JSONL file doesn't exist or has no entries in the window, skip the Skill Usage row.

**Eureka Moments (if logged):** Read `~/.gstack/analytics/eureka.jsonl` if it exists. Filter entries within the retro time window by `ts` field. For each eureka moment, show the skill that flagged it, the branch, and a one-line summary of the insight. Present as:

```
| Eureka Moments | 2 this period |
```

If moments exist, list them:

```
EUREKA /office-hours (branch: garrytan/auth-rethink): "Session tokens don't need server storage — browser crypto API makes client-side JWT validation viable"
EUREKA /plan-eng-review (branch: garrytan/cache-layer): "Redis isn't needed here — Bun's built-in LRU cache handles this workload"
```

If the JSONL file doesn't exist or has no entries in the window, skip the Eureka Moments row.
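The read-and-filter step above could be sketched as follows. The entry shape and function name are assumptions inferred from the fields the retro table shows, not from the repo:

```typescript
import { existsSync, readFileSync } from "node:fs";

// Hypothetical entry shape; the real schema may differ.
interface EurekaEntry { ts: string; skill: string; branch: string; summary: string }

export function eurekasInWindow(path: string, since: Date, until: Date): EurekaEntry[] {
  if (!existsSync(path)) return [];              // no file => skip the row
  return readFileSync(path, "utf8")
    .split("\n")
    .filter((l) => l.trim().length > 0)          // ignore blank lines
    .map((l) => JSON.parse(l) as EurekaEntry)
    .filter((e) => {
      const t = new Date(e.ts);                  // filter by the `ts` field
      return t >= since && t <= until;
    });
}
```

Returning an empty array for a missing file mirrors the "skip the row" instruction.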

### Step 3: Commit Time Distribution

Show hourly histogram in local time using bar chart:
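One way the hourly bucketing could work, sketched in TypeScript. The bar glyph, bar width, and function name are arbitrary illustrative choices, not the skill's actual output format:

```typescript
// Bucket commit timestamps (epoch seconds, as produced by `git log --format=%ct`)
// into 24 local-time hourly bins and render simple ASCII bars.
export function hourlyHistogram(epochSeconds: number[], width = 20): string[] {
  const buckets = new Array(24).fill(0);
  for (const s of epochSeconds) {
    buckets[new Date(s * 1000).getHours()]++;    // getHours() is local time
  }
  const max = Math.max(1, ...buckets);           // avoid division by zero
  return buckets.map((count, hour) => {
    const bar = "#".repeat(Math.round((count / max) * width));
    return `${String(hour).padStart(2, "0")}:00 ${bar} ${count}`;
  });
}
```
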
@@ -339,6 +339,13 @@ Apply the checklist against the diff in two passes:

**Enum & Value Completeness requires reading code OUTSIDE the diff.** When the diff introduces a new enum value, status, tier, or type constant, use Grep to find all files that reference sibling values, then Read those files to check if the new value is handled. This is the one category where within-diff review is insufficient.

**Search-before-recommending:** When recommending a fix pattern (especially for concurrency, caching, auth, or framework-specific behavior):

- Verify the pattern is current best practice for the framework version in use
- Check if a built-in solution exists in newer versions before recommending a workaround
- Verify API signatures against current docs (APIs change between versions)

Takes seconds, prevents recommending outdated patterns. If WebSearch is unavailable, note it and proceed with in-distribution knowledge.

Follow the output format specified in the checklist. Respect the suppressions — do NOT flag items listed in the "DO NOT flag" section.

---

@@ -353,7 +353,12 @@ If browse is not available, rely on WebSearch results and your built-in design k

**Step 3: Synthesize findings**

The goal of research is NOT to copy. It is to get in the ballpark — to understand the visual language users in this category already expect. This gives you the baseline. The interesting design work starts after you have the baseline: deciding where to follow conventions (so the product feels literate) and where to break from them (so the product is memorable).

**Three-layer synthesis:**

- **Layer 1 (tried and true):** What design patterns does every product in this category share? These are table stakes — users expect them.
- **Layer 2 (new and popular):** What are the search results and current design discourse saying? What's trending? What new patterns are emerging?
- **Layer 3 (first principles):** Given what we know about THIS product's users and positioning — is there a reason the conventional design approach is wrong? Where should we deliberately break from the category norms?

**Eureka check:** If Layer 3 reasoning reveals a genuine design insight — a reason the category's visual language fails THIS product — name it: "EUREKA: Every [category] product does X because they assume [assumption]. But this product's users [evidence] — so we should do Y instead." Log the eureka moment (see preamble).

Summarize conversationally:

> "I looked at what's out there. Here's the landscape: they converge on [patterns]. Most of them feel [observation — e.g., interchangeable, polished but generic, etc.]. The opportunity to stand out is [gap]. Here's where I'd play it safe and where I'd take a risk..."

@@ -112,7 +112,12 @@ If browse is not available, rely on WebSearch results and your built-in design k

**Step 3: Synthesize findings**

The goal of research is NOT to copy. It is to get in the ballpark — to understand the visual language users in this category already expect. This gives you the baseline. The interesting design work starts after you have the baseline: deciding where to follow conventions (so the product feels literate) and where to break from them (so the product is memorable).

**Three-layer synthesis:**

- **Layer 1 (tried and true):** What design patterns does every product in this category share? These are table stakes — users expect them.
- **Layer 2 (new and popular):** What are the search results and current design discourse saying? What's trending? What new patterns are emerging?
- **Layer 3 (first principles):** Given what we know about THIS product's users and positioning — is there a reason the conventional design approach is wrong? Where should we deliberately break from the category norms?

**Eureka check:** If Layer 3 reasoning reveals a genuine design insight — a reason the category's visual language fails THIS product — name it: "EUREKA: Every [category] product does X because they assume [assumption]. But this product's users [evidence] — so we should do Y instead." Log the eureka moment (see preamble).

Summarize conversationally:

> "I looked at what's out there. Here's the landscape: they converge on [patterns]. Most of them feel [observation — e.g., interchangeable, polished but generic, etc.]. The opportunity to stand out is [gap]. Here's where I'd play it safe and where I'd take a risk..."

@@ -16,6 +16,7 @@ allowed-tools:
- Grep
- Glob
- AskUserQuestion
+- WebSearch
hooks:
  PreToolUse:
    - matcher: "Edit"

@@ -328,6 +329,12 @@ Also check:

- `TODOS.md` for related known issues
- `git log` for prior fixes in the same area — **recurring bugs in the same files are an architectural smell**, not a coincidence

**External pattern search:** If the bug doesn't match a known pattern above, WebSearch for:

- "{framework} {error message or symptom}"
- "{library} {component} known issues"

If WebSearch is unavailable, skip this search and proceed with hypothesis testing. If a documented solution or known dependency bug surfaces, present it as a candidate hypothesis in Phase 3.

---

## Phase 3: Hypothesis Testing

@@ -336,7 +343,7 @@ Before writing ANY fix, verify your hypothesis.

1. **Confirm the hypothesis:** Add a temporary log statement, assertion, or debug output at the suspected root cause. Run the reproduction. Does the evidence match?

-2. **If the hypothesis is wrong:** Return to Phase 1. Gather more evidence. Do not guess.
+2. **If the hypothesis is wrong:** Before forming the next hypothesis, WebSearch for the exact error message (quoted) and "{component} {unexpected behavior} {framework version}". This often surfaces version-specific regressions or known issues that save hypothesis cycles. If WebSearch is unavailable, skip and proceed. Then return to Phase 1. Gather more evidence. Do not guess.

3. **3-strike rule:** If 3 hypotheses fail, **STOP**. Use AskUserQuestion:

```
@@ -16,6 +16,7 @@ allowed-tools:
- Grep
- Glob
- AskUserQuestion
+- WebSearch
hooks:
  PreToolUse:
    - matcher: "Edit"

@@ -104,6 +105,12 @@ Also check:

- `TODOS.md` for related known issues
- `git log` for prior fixes in the same area — **recurring bugs in the same files are an architectural smell**, not a coincidence

**External pattern search:** If the bug doesn't match a known pattern above, WebSearch for:

- "{framework} {error message or symptom}"
- "{library} {component} known issues"

If WebSearch is unavailable, skip this search and proceed with hypothesis testing. If a documented solution or known dependency bug surfaces, present it as a candidate hypothesis in Phase 3.

---

## Phase 3: Hypothesis Testing

@@ -112,7 +119,7 @@ Before writing ANY fix, verify your hypothesis.

1. **Confirm the hypothesis:** Add a temporary log statement, assertion, or debug output at the suspected root cause. Run the reproduction. Does the evidence match?

-2. **If the hypothesis is wrong:** Return to Phase 1. Gather more evidence. Do not guess.
+2. **If the hypothesis is wrong:** Before forming the next hypothesis, WebSearch for the exact error message (quoted) and "{component} {unexpected behavior} {framework version}". This often surfaces version-specific regressions or known issues that save hypothesis cycles. If WebSearch is unavailable, skip and proceed. Then return to Phase 1. Gather more evidence. Do not guess.

3. **3-strike rule:** If 3 hypotheses fail, **STOP**. Use AskUserQuestion:

```
@@ -19,6 +19,7 @@ allowed-tools:
- Write
- Edit
- AskUserQuestion
+- WebSearch
---
<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
<!-- Regenerate: bun run gen:skill-docs -->

@@ -476,6 +477,37 @@ If no matches found, proceed silently.

---

## Phase 2.75: Landscape Awareness

Read ETHOS.md for the full Search Before Building framework (three layers, eureka moments). The preamble's Search Before Building section has the ETHOS.md path.

After understanding the problem through questioning, search for what the world thinks. This is NOT competitive research (that's /design-consultation's job). This is understanding conventional wisdom so you can evaluate where it's wrong.

If WebSearch is unavailable, skip this phase and note: "Search unavailable — proceeding with in-distribution knowledge only."

**Startup mode:** WebSearch for:

- "[problem space] startup approach {current year}"
- "[problem space] common mistakes"
- "why [incumbent solution] fails" OR "why [incumbent solution] works"

**Builder mode:** WebSearch for:

- "[thing being built] existing solutions"
- "[thing being built] open source alternatives"
- "best [thing category] {current year}"

Read the top 2-3 results. Run the three-layer synthesis:

- **[Layer 1]** What does everyone already know about this space?
- **[Layer 2]** What are the search results and current discourse saying?
- **[Layer 3]** Given what WE learned in Phase 2A/2B — is there a reason the conventional approach is wrong?

**Eureka check:** If Layer 3 reasoning reveals a genuine insight, name it: "EUREKA: Everyone does X because they assume [assumption]. But [evidence from our conversation] suggests that's wrong here. This means [implication]." Log the eureka moment (see preamble).

If no eureka moment exists, say: "The conventional wisdom seems sound here. Let's build on it." Proceed to Phase 3.

**Important:** This search feeds Phase 3 (Premise Challenge). If you found reasons the conventional approach fails, those become premises to challenge. If conventional wisdom is solid, that raises the bar for any premise that contradicts it.

---

## Phase 3: Premise Challenge

Before proposing solutions, challenge the premises:

@@ -19,6 +19,7 @@ allowed-tools:
- Write
- Edit
- AskUserQuestion
+- WebSearch
---

{{PREAMBLE}}

@@ -235,6 +236,37 @@ If no matches found, proceed silently.

---

## Phase 2.75: Landscape Awareness

Read ETHOS.md for the full Search Before Building framework (three layers, eureka moments). The preamble's Search Before Building section has the ETHOS.md path.

After understanding the problem through questioning, search for what the world thinks. This is NOT competitive research (that's /design-consultation's job). This is understanding conventional wisdom so you can evaluate where it's wrong.

If WebSearch is unavailable, skip this phase and note: "Search unavailable — proceeding with in-distribution knowledge only."

**Startup mode:** WebSearch for:

- "[problem space] startup approach {current year}"
- "[problem space] common mistakes"
- "why [incumbent solution] fails" OR "why [incumbent solution] works"

**Builder mode:** WebSearch for:

- "[thing being built] existing solutions"
- "[thing being built] open source alternatives"
- "best [thing category] {current year}"

Read the top 2-3 results. Run the three-layer synthesis:

- **[Layer 1]** What does everyone already know about this space?
- **[Layer 2]** What are the search results and current discourse saying?
- **[Layer 3]** Given what WE learned in Phase 2A/2B — is there a reason the conventional approach is wrong?

**Eureka check:** If Layer 3 reasoning reveals a genuine insight, name it: "EUREKA: Everyone does X because they assume [assumption]. But [evidence from our conversation] suggests that's wrong here. This means [implication]." Log the eureka moment (see preamble).

If no eureka moment exists, say: "The conventional wisdom seems sound here. Let's build on it." Proceed to Phase 3.

**Important:** This search feeds Phase 3 (Premise Challenge). If you found reasons the conventional approach fails, those become premises to challenge. If conventional wisdom is solid, that raises the bar for any premise that contradicts it.

---

## Phase 3: Premise Challenge

Before proposing solutions, challenge the premises:

@@ -17,6 +17,7 @@ allowed-tools:
- Glob
- Bash
- AskUserQuestion
+- WebSearch
---
<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
<!-- Regenerate: bun run gen:skill-docs -->

@@ -405,6 +406,22 @@ Analyze the plan. If it involves ANY of: new UI screens/pages, changes to existi

Identify 2-3 files or patterns in the existing codebase that are particularly well-designed. Note them as style references for the review. Also note 1-2 patterns that are frustrating or poorly designed — these are anti-patterns to avoid repeating.

Report findings before proceeding to Step 0.

### Landscape Check

Read ETHOS.md for the Search Before Building framework (the preamble's Search Before Building section has the path). Before challenging scope, understand the landscape. WebSearch for:

- "[product category] landscape {current year}"
- "[key feature] alternatives"
- "why [incumbent/conventional approach] [succeeds/fails]"

If WebSearch is unavailable, skip this check and note: "Search unavailable — proceeding with in-distribution knowledge only."

Run the three-layer synthesis:

- **[Layer 1]** What's the tried-and-true approach in this space?
- **[Layer 2]** What are the search results saying?
- **[Layer 3]** First-principles reasoning — where might the conventional wisdom be wrong?

Feed into the Premise Challenge (0A) and Dream State Mapping (0C). If you find a eureka moment, surface it during the Expansion opt-in ceremony as a differentiation opportunity. Log it (see preamble).

## Step 0: Nuclear Scope Challenge + Mode Selection

### 0A. Premise Challenge

@@ -17,6 +17,7 @@ allowed-tools:
- Glob
- Bash
- AskUserQuestion
+- WebSearch
---

{{PREAMBLE}}

@@ -147,6 +148,22 @@ Analyze the plan. If it involves ANY of: new UI screens/pages, changes to existi

Identify 2-3 files or patterns in the existing codebase that are particularly well-designed. Note them as style references for the review. Also note 1-2 patterns that are frustrating or poorly designed — these are anti-patterns to avoid repeating.

Report findings before proceeding to Step 0.

### Landscape Check

Read ETHOS.md for the Search Before Building framework (the preamble's Search Before Building section has the path). Before challenging scope, understand the landscape. WebSearch for:

- "[product category] landscape {current year}"
- "[key feature] alternatives"
- "why [incumbent/conventional approach] [succeeds/fails]"

If WebSearch is unavailable, skip this check and note: "Search unavailable — proceeding with in-distribution knowledge only."

Run the three-layer synthesis:

- **[Layer 1]** What's the tried-and-true approach in this space?
- **[Layer 2]** What are the search results saying?
- **[Layer 3]** First-principles reasoning — where might the conventional wisdom be wrong?

Feed into the Premise Challenge (0A) and Dream State Mapping (0C). If you find a eureka moment, surface it during the Expansion opt-in ceremony as a differentiation opportunity. Log it (see preamble).

## Step 0: Nuclear Scope Challenge + Mode Selection

### 0A. Premise Challenge

@@ -16,6 +16,7 @@ allowed-tools:
- Glob
- AskUserQuestion
- Bash
+- WebSearch
---
<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
<!-- Regenerate: bun run gen:skill-docs -->

@@ -322,7 +323,15 @@ Before reviewing anything, answer these questions:

1. **What existing code already partially or fully solves each sub-problem?** Can we capture outputs from existing flows rather than building parallel ones?
2. **What is the minimum set of changes that achieves the stated goal?** Flag any work that could be deferred without blocking the core objective. Be ruthless about scope creep.
3. **Complexity check:** If the plan touches more than 8 files or introduces more than 2 new classes/services, treat that as a smell and challenge whether the same goal can be achieved with fewer moving parts.
-4. **TODOS cross-reference:** Read `TODOS.md` if it exists. Are any deferred items blocking this plan? Can any deferred items be bundled into this PR without expanding scope? Does this plan create new work that should be captured as a TODO?
+4. **Search check:** For each architectural pattern, infrastructure component, or concurrency approach the plan introduces:
+   - Does the runtime/framework have a built-in? Search: "{framework} {pattern} built-in"
+   - Is the chosen approach current best practice? Search: "{pattern} best practice {current year}"
+   - Are there known footguns? Search: "{framework} {pattern} pitfalls"
+
+   If WebSearch is unavailable, skip this check and note: "Search unavailable — proceeding with in-distribution knowledge only."
+
+   If the plan rolls a custom solution where a built-in exists, flag it as a scope reduction opportunity. Annotate recommendations with **[Layer 1]**, **[Layer 2]**, **[Layer 3]**, or **[EUREKA]** (see preamble's Search Before Building section). If you find a eureka moment — a reason the standard approach is wrong for this case — present it as an architectural insight.
+5. **TODOS cross-reference:** Read `TODOS.md` if it exists. Are any deferred items blocking this plan? Can any deferred items be bundled into this PR without expanding scope? Does this plan create new work that should be captured as a TODO?

5. **Completeness check:** Is the plan doing the complete version or a shortcut? With AI-assisted coding, the cost of completeness (100% test coverage, full edge case handling, complete error paths) is 10-100x cheaper than with a human team. If the plan proposes a shortcut that saves human-hours but only saves minutes with CC+gstack, recommend the complete version. Boil the lake.

@@ -16,6 +16,7 @@ allowed-tools:
- Glob
- AskUserQuestion
- Bash
+- WebSearch
---

{{PREAMBLE}}

@@ -81,7 +82,15 @@ Before reviewing anything, answer these questions:
|
||||
1. **What existing code already partially or fully solves each sub-problem?** Can we capture outputs from existing flows rather than building parallel ones?
|
||||
2. **What is the minimum set of changes that achieves the stated goal?** Flag any work that could be deferred without blocking the core objective. Be ruthless about scope creep.
|
||||
3. **Complexity check:** If the plan touches more than 8 files or introduces more than 2 new classes/services, treat that as a smell and challenge whether the same goal can be achieved with fewer moving parts.
|
||||
4. **TODOS cross-reference:** Read `TODOS.md` if it exists. Are any deferred items blocking this plan? Can any deferred items be bundled into this PR without expanding scope? Does this plan create new work that should be captured as a TODO?
|
||||
4. **Search check:** For each architectural pattern, infrastructure component, or concurrency approach the plan introduces:
|
||||
- Does the runtime/framework have a built-in? Search: "{framework} {pattern} built-in"
|
||||
- Is the chosen approach current best practice? Search: "{pattern} best practice {current year}"
|
||||
- Are there known footguns? Search: "{framework} {pattern} pitfalls"
|
||||
|
||||
If WebSearch is unavailable, skip this check and note: "Search unavailable — proceeding with in-distribution knowledge only."
|
||||
|
||||
If the plan rolls a custom solution where a built-in exists, flag it as a scope reduction opportunity. Annotate recommendations with **[Layer 1]**, **[Layer 2]**, **[Layer 3]**, or **[EUREKA]** (see preamble's Search Before Building section). If you find a eureka moment — a reason the standard approach is wrong for this case — present it as an architectural insight.
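For illustration, the three query templates above can be expanded with a tiny helper (a hedged sketch only — in the skill itself these strings are fed to WebSearch, and the function name and example values here are hypothetical):

```python
def search_queries(framework: str, pattern: str, year: int) -> list[str]:
    """Expand the three search-check templates for one pattern under review."""
    return [
        f"{framework} {pattern} built-in",   # does the runtime/framework have a built-in?
        f"{pattern} best practice {year}",   # is the chosen approach still current?
        f"{framework} {pattern} pitfalls",   # known footguns?
    ]

# Hypothetical example: a plan that hand-rolls an LRU cache on Bun
print(search_queries("bun", "lru cache", 2025))
```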
5. **TODOS cross-reference:** Read `TODOS.md` if it exists. Are any deferred items blocking this plan? Can any deferred items be bundled into this PR without expanding scope? Does this plan create new work that should be captured as a TODO?

5. **Completeness check:** Is the plan doing the complete version or a shortcut? With AI-assisted coding, the cost of completeness (100% test coverage, full edge case handling, complete error paths) is 10-100x cheaper than with a human team. If the plan proposes a shortcut that saves human-hours but only saves minutes with CC+gstack, recommend the complete version. Boil the lake.


@@ -12,6 +12,7 @@ allowed-tools:
- Read
- Write
- AskUserQuestion
- WebSearch
---
<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
<!-- Regenerate: bun run gen:skill-docs -->

@@ -12,6 +12,7 @@ allowed-tools:
- Read
- Write
- AskUserQuestion
- WebSearch
---

{{PREAMBLE}}

@@ -396,6 +396,20 @@ If TODOS.md doesn't exist, skip the Backlog Health row.

If the JSONL file doesn't exist or has no entries in the window, skip the Skill Usage row.

**Eureka Moments (if logged):** Read `~/.gstack/analytics/eureka.jsonl` if it exists. Filter entries within the retro time window by `ts` field. For each eureka moment, show the skill that flagged it, the branch, and a one-line summary of the insight. Present as:

```
| Eureka Moments | 2 this period |
```

If moments exist, list them:
```
EUREKA /office-hours (branch: garrytan/auth-rethink): "Session tokens don't need server storage — browser crypto API makes client-side JWT validation viable"
EUREKA /plan-eng-review (branch: garrytan/cache-layer): "Redis isn't needed here — Bun's built-in LRU cache handles this workload"
```

If the JSONL file doesn't exist or has no entries in the window, skip the Eureka Moments row.
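The window-filtering step above can be sketched as follows (a minimal illustration, assuming ISO-8601 `ts` values; any field names beyond `ts` are assumptions about the eureka.jsonl schema):

```python
import json
from datetime import datetime

def eureka_in_window(jsonl_text: str, start: datetime, end: datetime) -> list[dict]:
    """Parse eureka.jsonl and keep entries whose `ts` falls inside the retro window."""
    moments = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue  # tolerate blank lines in the log
        entry = json.loads(line)
        ts = datetime.fromisoformat(entry["ts"])
        if start <= ts <= end:
            moments.append(entry)
    return moments
```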

### Step 3: Commit Time Distribution

Show hourly histogram in local time using bar chart:

@@ -172,6 +172,20 @@ If TODOS.md doesn't exist, skip the Backlog Health row.

If the JSONL file doesn't exist or has no entries in the window, skip the Skill Usage row.

**Eureka Moments (if logged):** Read `~/.gstack/analytics/eureka.jsonl` if it exists. Filter entries within the retro time window by `ts` field. For each eureka moment, show the skill that flagged it, the branch, and a one-line summary of the insight. Present as:

```
| Eureka Moments | 2 this period |
```

If moments exist, list them:
```
EUREKA /office-hours (branch: garrytan/auth-rethink): "Session tokens don't need server storage — browser crypto API makes client-side JWT validation viable"
EUREKA /plan-eng-review (branch: garrytan/cache-layer): "Redis isn't needed here — Bun's built-in LRU cache handles this workload"
```

If the JSONL file doesn't exist or has no entries in the window, skip the Eureka Moments row.

### Step 3: Commit Time Distribution

Show hourly histogram in local time using bar chart:

@@ -14,6 +14,7 @@ allowed-tools:
- Grep
- Glob
- AskUserQuestion
- WebSearch
---
<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
<!-- Regenerate: bun run gen:skill-docs -->
@@ -348,6 +349,13 @@ Apply the checklist against the diff in two passes:

**Enum & Value Completeness requires reading code OUTSIDE the diff.** When the diff introduces a new enum value, status, tier, or type constant, use Grep to find all files that reference sibling values, then Read those files to check if the new value is handled. This is the one category where within-diff review is insufficient.

**Search-before-recommending:** When recommending a fix pattern (especially for concurrency, caching, auth, or framework-specific behavior):
- Verify the pattern is current best practice for the framework version in use
- Check if a built-in solution exists in newer versions before recommending a workaround
- Verify API signatures against current docs (APIs change between versions)

Takes seconds, prevents recommending outdated patterns. If WebSearch is unavailable, note it and proceed with in-distribution knowledge.

Follow the output format specified in the checklist. Respect the suppressions — do NOT flag items listed in the "DO NOT flag" section.

---

@@ -14,6 +14,7 @@ allowed-tools:
- Grep
- Glob
- AskUserQuestion
- WebSearch
---

{{PREAMBLE}}
@@ -107,6 +108,13 @@ Apply the checklist against the diff in two passes:

**Enum & Value Completeness requires reading code OUTSIDE the diff.** When the diff introduces a new enum value, status, tier, or type constant, use Grep to find all files that reference sibling values, then Read those files to check if the new value is handled. This is the one category where within-diff review is insufficient.

**Search-before-recommending:** When recommending a fix pattern (especially for concurrency, caching, auth, or framework-specific behavior):
- Verify the pattern is current best practice for the framework version in use
- Check if a built-in solution exists in newer versions before recommending a workaround
- Verify API signatures against current docs (APIs change between versions)

Takes seconds, prevents recommending outdated patterns. If WebSearch is unavailable, note it and proceed with in-distribution knowledge.

Follow the output format specified in the checklist. Respect the suppressions — do NOT flag items listed in the "DO NOT flag" section.

---
