* feat: add 4 native OpenClaw skills for ClaHub publishing

  Hand-crafted methodology skills for the OpenClaw wintermute workspace:
  - gstack-openclaw-office-hours (375 lines): 6 forcing questions, startup + builder modes
  - gstack-openclaw-ceo-review (193 lines): 4 scope modes, 18 cognitive patterns
  - gstack-openclaw-investigate (136 lines): Iron Law, 4-phase debugging
  - gstack-openclaw-retro (301 lines): git analytics, per-person praise/growth

  Pure methodology, no gstack infrastructure. All frontmatter uses single-line inline JSON for OpenClaw parser compatibility.

* feat: add AGENTS.md dispatch section with behavioral rules

  Ready-to-paste section for OpenClaw AGENTS.md with 3 iron-clad rules:
  1. Always spawn sessions, never redirect user to Claude Code
  2. Resolve repo path or ask, don't punt
  3. Autoplan runs end-to-end, reports back in chat

  Includes full dispatch routing (Simple/Medium/Heavy/Full/Plan tiers).

* chore: clear OpenClaw includeSkills — native skills replace generated

  Native ClaHub skills replace the gen-skill-docs pipeline output for these 4 skills. Updated test to validate empty includeSkills array.

* docs: ClaHub install instructions + dispatch routing rules

  - README: add Native OpenClaw Skills section with clawhub install command
  - OPENCLAW.md: update dispatch routing with behavioral rules, update native skills section to reference ClaHub

* chore: bump version and changelog (v0.15.10.0)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add gstack-upgrade to OpenClaw dispatch routing

  Ensures "upgrade gstack" routes to a Claude Code session with /gstack-upgrade instead of Wintermute trying to handle it conversationally.

* fix: stop tracking 58MB compiled binary bin/gstack-global-discover

  Already in .gitignore but was tracked due to a historical mistake. Same issue as browse/dist/ and design/dist/. The .ts source is right next to it and ./setup builds from source for every platform.

* test: detect compiled binaries and large files tracked by git

  Two new tests in skill-validation:
  - No Mach-O or ELF binaries tracked (catches accidental git add of compiled output)
  - No files over 2MB tracked (catches bloated binaries sneaking in)

  Both print the exact git rm --cached command to fix the issue.

* fix: ClaHub → ClawHub (correct spelling)

* docs: add ClawHub publishing instructions to CLAUDE.md

  Documents the clawhub publish command (not clawhub skill publish), auth flow, version bumping, and verification.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---
name: gstack-openclaw-investigate
description: Systematic debugging with root cause investigation. Four phases: investigate, analyze, hypothesize, implement. Iron Law: no fixes without root cause. Use when asked to debug, fix a bug, investigate an error, or root cause analysis. Proactively use when user reports errors, stack traces, unexpected behavior, or says something stopped working.
version: 1.0.0
metadata: { "openclaw": { "emoji": "🔍" } }
---
# Systematic Debugging

## Iron Law

NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST.
Fixing symptoms creates whack-a-mole debugging. Every fix that doesn't address root cause makes the next bug harder to find. Find the root cause, then fix it.
## Phase 1: Root Cause Investigation
Gather context before forming any hypothesis.
- Collect symptoms: Read the error messages, stack traces, and reproduction steps. If the user hasn't provided enough context, ask ONE question at a time. Don't ask five questions at once.
- Read the code: Trace the code path from the symptom back to potential causes. Search for all references, read the logic around the failure point.
- Check recent changes: `git log --oneline -20 -- <affected-files>`. Was this working before? What changed? A regression means the root cause is in the diff.
- Reproduce: Can you trigger the bug deterministically? If not, gather more evidence before proceeding.
- Check memory for prior debugging sessions on the same area. Recurring bugs in the same files are an architectural smell.
Output: "Root cause hypothesis: ...", a specific, testable claim about what is wrong and why.
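A deterministic reproduction is worth wrapping in a tiny harness so it is cheap to re-run after every hypothesis. A minimal sketch; the failing invocation here is a placeholder, not a real command:

```shell
#!/bin/sh
# Minimal reproduction harness (sketch). A bug counts as reproduced when
# this fails the same way on every run.
repro() {
  # Replace with the actual failing invocation, e.g.:
  #   ./app --input fixtures/bad_input.json
  return 1   # placeholder: simulate a deterministic failure
}

for i in 1 2 3; do
  if repro; then
    echo "run $i: no failure"
  else
    echo "run $i: failed (exit $?)"
  fi
done
# prints "run N: failed (exit 1)" for all three runs
```

If the three runs do not fail identically, the bug is not yet reproduced and Phase 1 is not done.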
## Phase 2: Pattern Analysis
Check if this bug matches a known pattern:
- Race condition: Intermittent, timing-dependent. Look at concurrent access to shared state.
- Nil/null propagation: NoMethodError, TypeError. Missing guards on optional values.
- State corruption: Inconsistent data, partial updates. Check transactions, callbacks, hooks.
- Integration failure: Timeout, unexpected response. External API calls, service boundaries.
- Configuration drift: Works locally, fails in staging/prod. Env vars, feature flags, DB state.
- Stale cache: Shows old data, fixes on cache clear. Redis, CDN, browser cache.
Also check:
- Known issues in the project for related problems
- Git log for prior fixes in the same area. Recurring bugs in the same files are an architectural smell, not a coincidence.
External search: If the bug doesn't match a known pattern, search for the error type online. Sanitize first: strip hostnames, IPs, file paths, SQL, customer data. Search the error category, not the raw message.
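The sanitize-before-search step can be scripted; a sketch using `sed`, with illustrative patterns for paths, IPs, and hostnames (extend them for your stack; the sample error line is made up):

```shell
#!/bin/sh
# Sanitize an error line before searching it online (sketch).
# Strips file paths, IPv4 addresses, and common hostnames; add rules
# for SQL, customer data, etc. as needed.
sanitize() {
  printf '%s\n' "$1" \
    | sed -E 's#(/[A-Za-z0-9._-]+)+#<path>#g' \
    | sed -E 's#[0-9]{1,3}(\.[0-9]{1,3}){3}#<ip>#g' \
    | sed -E 's#[a-z0-9.-]+\.(com|net|org|io)#<host>#g'
}

sanitize "NoMethodError at /srv/app/models/user.rb:42 calling api.example.com from 10.0.0.5"
# -> NoMethodError at <path>:42 calling <host> from <ip>
```

What survives is the error category, which is what you search for anyway.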
## Phase 3: Hypothesis Testing
Before writing ANY fix, verify your hypothesis.
- Confirm the hypothesis: Add a temporary log statement, assertion, or debug output at the suspected root cause. Run the reproduction. Does the evidence match?
- If the hypothesis is wrong: Search for the error (sanitize sensitive data first). Return to Phase 1. Gather more evidence. Do not guess.
- 3-strike rule: If 3 hypotheses fail, STOP. Tell the user: "3 hypotheses tested, none match. This may be an architectural issue rather than a simple bug."
Options:
- Continue investigating with a new hypothesis (describe it)
- Escalate for human review (needs someone who knows the system)
- Add logging and wait (instrument the area and catch it next time)
Red flags (if you see any of these, slow down):
- "Quick fix for now": there is no "for now." Fix it right or escalate.
- Proposing a fix before tracing data flow: you're guessing.
- Each fix reveals a new problem elsewhere: wrong layer, not wrong code.
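A hypothesis probe can be very small: print evidence at the suspected root cause, run the reproduction, then delete the probe. A sketch, assuming the hypothesis is "the app fails because an environment variable is unset in the failing environment" (`APP_CONFIG` is a hypothetical name):

```shell
#!/bin/sh
# Temporary probe (sketch): emit evidence for or against one specific
# hypothesis. Remove the probe once the hypothesis is settled.
probe_config() {
  echo "PROBE: APP_CONFIG='${APP_CONFIG:-}'"
  if [ -z "${APP_CONFIG:-}" ]; then
    echo "PROBE: hypothesis confirmed -- APP_CONFIG is unset"
  else
    echo "PROBE: hypothesis rejected -- APP_CONFIG is set"
  fi
}

probe_config
```

Either outcome is progress: "confirmed" licenses a fix, "rejected" sends you back to Phase 1 with one hypothesis eliminated.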
## Phase 4: Implementation
Once root cause is confirmed:
- Fix the root cause, not the symptom. The smallest change that eliminates the actual problem.
- Minimal diff: Fewest files touched, fewest lines changed. Resist the urge to refactor adjacent code.
- Write a regression test that:
  - Fails without the fix (proves the test is meaningful)
  - Passes with the fix (proves the fix works)
- Run the full test suite. No regressions allowed.
- If the fix touches >5 files: Flag the blast radius to the user before proceeding. That's large for a bug fix.
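The >5-file check is mechanical enough to script. A sketch; in a real repo the count would come from something like `git diff --name-only HEAD | wc -l`, here it is passed in as an argument:

```shell
#!/bin/sh
# Flag a large blast radius before shipping a bug fix (sketch).
# Usage in a repo: check_blast_radius "$(git diff --name-only HEAD | wc -l)"
check_blast_radius() {
  count=$1
  if [ "$count" -gt 5 ]; then
    echo "BLAST RADIUS: $count files changed; confirm with the user first"
  else
    echo "OK: $count files changed"
  fi
}

check_blast_radius 7   # -> BLAST RADIUS: 7 files changed; confirm with the user first
check_blast_radius 3   # -> OK: 3 files changed
```

The threshold of 5 is the one this skill prescribes; it is a heuristic, not a hard limit.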
## Phase 5: Verification & Report
Fresh verification: Reproduce the original bug scenario and confirm it's fixed. This is not optional.
Run the test suite.
Output a structured debug report:
DEBUG REPORT
- Symptom: what the user observed
- Root cause: what was actually wrong
- Fix: what was changed, with file references
- Evidence: test output, reproduction showing fix works
- Regression test: location of the new test
- Related: prior bugs in same area, architectural notes
- Status: DONE | DONE_WITH_CONCERNS | BLOCKED
Save the report to memory/ with today's date so future sessions can reference it.
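Saving the report can look like the sketch below; the `memory/` layout and the date-prefixed file name are assumptions, adapt them to the workspace convention:

```shell
#!/bin/sh
# Save the debug report under memory/ keyed by today's date (sketch).
report="memory/$(date +%Y-%m-%d)-debug-report.md"
mkdir -p memory
{
  echo "# DEBUG REPORT"
  echo "- Symptom: ..."
  echo "- Root cause: ..."
  echo "- Fix: ..."
  echo "- Status: DONE"
} > "$report"
echo "saved: $report"
```

A future session can then grep `memory/` for the affected files before starting its own Phase 1.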
## Important Rules
- 3+ failed fix attempts: STOP and question the architecture. At that point the problem is likely the architecture itself, not just another failed hypothesis.
- Never apply a fix you cannot verify. If you can't reproduce and confirm, don't ship it.
- Never say "this should fix it." Verify and prove it. Run the tests.
- If fix touches >5 files: Flag to user before proceeding.
- Completion status:
  - DONE: root cause found, fix applied, regression test written, all tests pass
  - DONE_WITH_CONCERNS: fixed but cannot fully verify (e.g., intermittent bug, requires staging)
  - BLOCKED: root cause unclear after investigation, escalated