mirror of https://github.com/garrytan/gstack.git
synced 2026-05-01 19:25:10 +02:00
8115951284
* refactor: remove dead contributor mode, replace with operational self-improvement slot

  Contributor mode never fired in 18 days of heavy use (required manual opt-in via gstack-config, gated behind _CONTRIB=true, wrote disconnected markdown). Removes: generateContributorMode(), _CONTRIB bash var, 2 E2E tests, touchfile entry, doc references. Cleans up skip-lists in plan-ceo-review, autoplan, review resolver, and document-release templates. The operational self-improvement system (next commit) replaces this slot with automatic learning capture that requires no opt-in.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: operational self-improvement — every skill learns from failures

  Adds universal operational learning capture to the preamble completion protocol. At the end of every skill session, the agent reflects on CLI failures, wrong approaches, and project quirks, logging them as type "operational" to the learnings JSONL. Future sessions surface these automatically.

  - generateCompletionStatus(ctx) now includes operational capture section
  - Preamble bash shows top 3 learnings inline when count > 5
  - New "operational" type in generateLearningsLog alongside pattern/pitfall/etc
  - Updated unit tests + operational seed entry in learnings E2E

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: wire learnings into all insight-producing skills

  Adds LEARNINGS_SEARCH and/or LEARNINGS_LOG to 10 skill templates that produce reusable insights but were previously disconnected from the learning system:

  - office-hours, plan-ceo-review, plan-eng-review: add LOG (had SEARCH)
  - plan-design-review: add both SEARCH + LOG (had neither)
  - design-review, design-consultation, cso, qa, qa-only: add both
  - retro: add SEARCH (had LOG)

  13 skills now fully participate in the learning loop (read + write). Every review, QA, investigation, and design session both consults prior learnings and contributes new ones.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: add operational-learning E2E test (gate-tier)

  Validates the write path: agent encounters a CLI failure, logs an operational learning to JSONL via gstack-learnings-log. Replaces the removed contributor-mode E2E test.

  Setup: temp git repo, copy bin scripts, set GSTACK_HOME. Prompt: simulated npm test failure needing --experimental-vm-modules. Assert: learnings.jsonl exists with type=operational entry.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: learnings-show E2E slug mismatch — seed at computed slug, not hardcoded

  The test seeded learnings at projects/test-project/ but gstack-slug computes the slug from basename(workDir) when no git remote exists. The agent's search looked at the wrong path and found nothing. Fix: compute slug the same way gstack-slug does (basename + sanitize) and seed the learnings there.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: bump version and changelog (v0.13.8.0)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
98 lines
3.9 KiB
TypeScript
/**
 * Learnings resolver — cross-skill institutional memory
 *
 * Learnings are stored per-project at ~/.gstack/projects/{slug}/learnings.jsonl.
 * Each entry is a JSONL line with: ts, skill, type, key, insight, confidence,
 * source, branch, commit, files[].
 *
 * Storage is append-only. Duplicates (same key+type) are resolved at read time
 * by gstack-learnings-search ("latest winner" per key+type).
 *
 * Cross-project discovery is opt-in. The resolver asks the user once via
 * AskUserQuestion and persists the preference via gstack-config.
 */
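The "latest winner" dedup described in the comment above can be sketched in a few lines. This is an illustrative sketch only, not the actual gstack-learnings-search implementation; the `LearningEntry` interface and `latestWinner` name are assumptions that mirror the JSONL schema documented above.

```typescript
// Illustrative sketch: gstack-learnings-search performs this dedup at read time;
// the real implementation may differ. Field names follow the documented schema.
interface LearningEntry {
  ts: string;
  skill: string;
  type: string;
  key: string;
  insight: string;
  confidence: number;
  source: string;
  branch?: string;
  commit?: string;
  files: string[];
}

// Later appends win: the log is append-only and read in order, so the last
// entry seen for a given key+type overwrites earlier ones in the map.
function latestWinner(entries: LearningEntry[]): LearningEntry[] {
  const byKeyType = new Map<string, LearningEntry>();
  for (const e of entries) byKeyType.set(`${e.key}\u0000${e.type}`, e);
  return [...byKeyType.values()];
}
```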
import type { TemplateContext } from './types';

export function generateLearningsSearch(ctx: TemplateContext): string {
  if (ctx.host === 'codex') {
    // Codex: simpler version, no cross-project, uses $GSTACK_BIN
    return `## Prior Learnings

Search for relevant learnings from previous sessions on this project:

\`\`\`bash
$GSTACK_BIN/gstack-learnings-search --limit 10 2>/dev/null || true
\`\`\`

If learnings are found, incorporate them into your analysis. When a review finding
matches a past learning, note it: "Prior learning applied: [key] (confidence N, from [date])"`;
  }

  return `## Prior Learnings

Search for relevant learnings from previous sessions:

\`\`\`bash
_CROSS_PROJ=$(${ctx.paths.binDir}/gstack-config get cross_project_learnings 2>/dev/null || echo "unset")
echo "CROSS_PROJECT: $_CROSS_PROJ"
if [ "$_CROSS_PROJ" = "true" ]; then
  ${ctx.paths.binDir}/gstack-learnings-search --limit 10 --cross-project 2>/dev/null || true
else
  ${ctx.paths.binDir}/gstack-learnings-search --limit 10 2>/dev/null || true
fi
\`\`\`

If \`CROSS_PROJECT\` is \`unset\` (first time): Use AskUserQuestion:

> gstack can search learnings from your other projects on this machine to find
> patterns that might apply here. This stays local (no data leaves your machine).
> Recommended for solo developers. Skip if you work on multiple client codebases
> where cross-contamination would be a concern.

Options:
- A) Enable cross-project learnings (recommended)
- B) Keep learnings project-scoped only

If A: run \`${ctx.paths.binDir}/gstack-config set cross_project_learnings true\`
If B: run \`${ctx.paths.binDir}/gstack-config set cross_project_learnings false\`

Then re-run the search with the appropriate flag.

If learnings are found, incorporate them into your analysis. When a review finding
matches a past learning, display:

**"Prior learning applied: [key] (confidence N/10, from [date])"**

This makes the compounding visible. The user should see that gstack is getting
smarter on their codebase over time.`;
}
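The cross-project branch in the generated bash reduces to a small decision over the persisted preference. A minimal sketch of that flag selection, assuming a hypothetical helper (`learningsSearchArgs` is illustrative; gstack does this in shell, not in a TypeScript function):

```typescript
// Hypothetical helper mirroring the bash branch in the generated preamble.
// The three states match gstack-config: "true", "false", or unset.
type CrossProjectPref = 'true' | 'false' | 'unset';

function learningsSearchArgs(pref: CrossProjectPref): string[] {
  const args = ['--limit', '10'];
  if (pref === 'true') args.push('--cross-project');
  // 'unset' means first run: ask via AskUserQuestion, persist the answer with
  // gstack-config, then re-run the search with the chosen flag.
  return args;
}
```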

export function generateLearningsLog(ctx: TemplateContext): string {
  const binDir = ctx.host === 'codex' ? '$GSTACK_BIN' : ctx.paths.binDir;

  return `## Capture Learnings

If you discovered a non-obvious pattern, pitfall, or architectural insight during
this session, log it for future sessions:

\`\`\`bash
${binDir}/gstack-learnings-log '{"skill":"${ctx.skillName}","type":"TYPE","key":"SHORT_KEY","insight":"DESCRIPTION","confidence":N,"source":"SOURCE","files":["path/to/relevant/file"]}'
\`\`\`

**Types:** \`pattern\` (reusable approach), \`pitfall\` (what NOT to do), \`preference\`
(user stated), \`architecture\` (structural decision), \`tool\` (library/framework insight),
\`operational\` (project environment/CLI/workflow knowledge).

**Sources:** \`observed\` (you found this in the code), \`user-stated\` (user told you),
\`inferred\` (AI deduction), \`cross-model\` (both Claude and Codex agree).

**Confidence:** 1-10. Be honest. An observed pattern you verified in the code is 8-9.
An inference you're not sure about is 4-5. A user preference they explicitly stated is 10.

**files:** Include the specific file paths this learning references. This enables
staleness detection: if those files are later deleted, the learning can be flagged.

**Only log genuine discoveries.** Don't log obvious things. Don't log things the user
already knows. A good test: would this insight save time in a future session? If yes, log it.`;
}
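The payload contract spelled out in the Capture Learnings template (allowed types, allowed sources, confidence 1-10, files array) can be checked mechanically. A hedged sketch under those assumptions; `isValidLearning` is illustrative, and whether gstack-learnings-log itself performs any validation is not shown here:

```typescript
// Illustrative validator for the JSONL payload shape described in the template.
// The allowed values come from the Types/Sources/Confidence text above; the
// function name and any real validation in gstack-learnings-log are assumptions.
const LEARNING_TYPES = ['pattern', 'pitfall', 'preference', 'architecture', 'tool', 'operational'];
const LEARNING_SOURCES = ['observed', 'user-stated', 'inferred', 'cross-model'];

function isValidLearning(raw: string): boolean {
  let e: any;
  try {
    e = JSON.parse(raw);
  } catch {
    return false; // not JSON at all
  }
  return (
    typeof e.skill === 'string' &&
    LEARNING_TYPES.includes(e.type) &&
    typeof e.key === 'string' &&
    typeof e.insight === 'string' &&
    Number.isInteger(e.confidence) && e.confidence >= 1 && e.confidence <= 10 &&
    LEARNING_SOURCES.includes(e.source) &&
    Array.isArray(e.files)
  );
}
```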