dev-refactor
// Batch refactor code quality after testing with parallel analysis, dynamic stack-aware patterns, and early-exit for clean features. Use with /dev-refactor to improve code structure, naming, and patterns.
| name | dev-refactor |
| description | Batch refactor code quality after testing with parallel analysis, dynamic stack-aware patterns, and early-exit for clean features. Use with /dev-refactor to improve code structure, naming, and patterns. |
| reads | ["feature.build","feature.tests","backlog.status"] |
| writes | ["feature.refactor","backlog.status","learnings"] |
| metadata | {"author":"mileszeilstra","version":"2.2.0","category":"dev"} |
Optional quality step on completed features. Not a status-gate — features are DONE after /dev-verify. This skill improves code structure, naming, and patterns on already-finished features.
Batch-first architecture: analyzes ALL features in parallel via Explore agents, triages clean vs dirty, generates stack-aware refactor patterns via Context7, creates one combined plan with one approval, and applies changes with per-feature rollback.
Trigger: /dev-refactor or /dev-refactor {feature-name}
This skill ONLY refactors files that belong to the feature.
feature.json → files[] — these are the pipeline files. New helper files may be created within the feature scope (e.g. a utils/ file). Existing external files may NEVER be modified. This rule exists because refactoring external files risks breaking other features and creates unpredictable side effects.
Runs after /dev-verify completes (features in DONE status). Requires that .project/features/{name}/feature.json exists with a tests section. Reads .project/features/{feature-name}/feature.json — the unified feature file with requirements, architecture, files, build, and tests sections.
.project/features/{feature-name}/
└── feature.json # Enriched: refactor section, status updated
.claude/research/
├── stack-baseline.md ← EXISTING: library conventions/patterns/pitfalls
│ (React hooks, Tailwind v4, GSAP cleanup, etc.)
│ Read in FASE 2 for research decision
│
└── refactor-patterns.md ← NEW: stack-specific code smells & anti-patterns
Generated via Context7 on first refactor
Reused on subsequent refactors
stack-baseline.md = "how to use these libraries correctly" (conventions) refactor-patterns.md = "what mistakes to look for in code using these libraries" (anti-patterns)
Phase tracking — the skill's first action: call TaskCreate with these 6 items (status pending), then use TaskUpdate to set each phase to in_progress at its start and completed at its end. During context compaction the task list stays visible, so there is no risk of forgotten phases.
Todo: call TaskCreate with the 6 phase items (see above). Mark FASE 0 → in_progress via TaskUpdate.
Read backlog for pipeline status:
Read .project/backlog.html (if it exists), parse the JSON from the <script id="backlog-data"> block (see shared/BACKLOG.md):
- data.features.filter(f => f.status === "DONE" && !f.shipped)
- Check .project/features/{name}/feature.json for an existing refactor section: classify unrefactored (no refactor section) vs refactored (has refactor section)
- Small items without a pipeline (CHANGE/BUG/PAGE/COMPONENT/etc): data.features.filter(f => f.status === "DONE" && !f.shipped && !fs.existsSync('.project/features/' + f.name + '/feature.json'))
Determine feature queue:
a) Feature name provided (/dev-refactor auth):
Queue = .project/features/[auth] (regardless of refactor status)
b) No feature name (/dev-refactor):
b0) UI-queue detection (check first):
queued = data.features.filter(f => f.transition === "refactoring" && f.status === "DONE" && !f.shipped)
If queued.length > 0: feature_queue = queued, mode = "feature", jump to step 3 (worktree switch).
If queued.length == 0 → continue directly to b1 below.
b1) Scope selection (when there is no UI-queue or the user chose "other scope"):
Scope options: feature / small-items.
c) "recent": find the most recently modified feature.json with a tests section, queue = [that feature], mode = feature
Small-items mode (--small-items or via choice):
- data.features with status === "DONE" && !shipped && no feature.json
- git log --oneline --grep="{item.name}" -- {src/}
- git diff {first_hash}^..{last_hash} --name-only
Small-items FASE routing (skip FASE 0 steps 3-5, jump directly to FASE 1):
- shared/RULES.md + shared/PATTERNS.md + stack-baseline
- shipped = true, shippedAt, append to project.json.recentChanges[]
Codebase mode ("Entire codebase"):
- Scope: src/ or its equivalent from project-context.json context.structure, or CLAUDE.md
- Exclude: node_modules/, .project/, test files, config files
- Session file: .project/session/codebase-refactor.json
- Commit message: refactor(codebase): {summary}
Worktree switch (single-mode only):
If feature_queue.length == 1 and not in codebase mode: run the procedure in shared/WORKTREE.md with the feature name. It automatically switches to worktree-{feature-name} if that exists. On FAIL: stop with the message from WORKTREE.md.
Batch mode (queue > 1) or codebase mode: skip this step — stay on main and refactor already-merged code.
Load ALL feature docs for every feature in queue:
For each feature, read feature.json — it contains the requirements, architecture, files, build, and tests sections.
Validate that a tests section exists in feature.json for each feature. If missing → remove from queue and warn.
Build pipeline files list per feature:
For each feature, extract all code file paths from feature.json:
- files[] array (each entry has path, type, action)
- Store the result as pipeline_files[feature_name]
Load project conventions + learnings (optional):
Read .project/project-context.json (if it exists). Extract context.patterns.
Learnings load via shared/LEARNINGS-LOAD.md:
scopes: [component]
pitfall-prefix: true
current-feature: <feature-name in feature mode, otherwise "none">
If available: add them to the Explore agent prompt in FASE 1 under the
PROJECT CONVENTIONS: section (patterns) and the KNOWN PITFALLS: section (pitfall-prefix + component-scoped). This helps agents distinguish between
an "intentional project pattern" and a "code smell", and prevents reintroduction of known bugs. One of the patterns may be a
Code maturity: ... string (see shared/DASHBOARD.md examples) that
steers refactor aggressiveness — it is passed along automatically because it is
part of patterns.
For each feature with a known build start time: build a diff string that agents receive as a focus hint.
# Determine the start of feature work
first_hash=$(git log --since="{feature.build.startedAt}" --pretty=format:"%H" -- {pipeline_files} | tail -1)
# Diff from that commit to now, scoped to pipeline files
[ -n "$first_hash" ] && git diff ${first_hash}^..HEAD -- {pipeline_files} > /tmp/diff-{feature}.patch
Store as pipeline_diff[feature_name]. If the diff is empty or startedAt is missing: skip — the agent then only sees the full files.
Load or generate refactor-patterns.md:
IF .claude/research/refactor-patterns.md exists:
→ Load cached patterns, skip Context7
→ Log: "Refactor patterns loaded (cached)"
IF NOT exists:
→ Detect stack from CLAUDE.md ### Stack section
→ For each library/framework in stack:
Context7 resolve-library-id → query-docs:
"Common code smells, anti-patterns, and refactoring opportunities
in {library} projects. Focus on: performance pitfalls, security
anti-patterns, common mistakes, and code organization issues."
→ Compile results into .claude/research/refactor-patterns.md
→ Log: "Refactor patterns generated via Context7 ({N} libraries)"
Format for refactor-patterns.md:
# Refactor Patterns
<!-- Generated via Context7 for: {stack list} -->
<!-- Regenerate: delete this file and run /dev-refactor -->
## {Library Name}
### Performance Anti-patterns
- {pattern}: {description} — {what to look for in code}
### Security Anti-patterns
- {pattern}: {description} — {what to look for in code}
### Code Organization Anti-patterns
- {pattern}: {description} — {what to look for in code}
Output:
BATCH CONTEXT LOADED
| Metric | Value |
|--------|-------|
| Features in queue | {N} |
| Total pipeline files | {sum across all features} |
| Refactor patterns | {cached / generated via Context7} |
Features:
{for each feature:}
- {name}: {M} pipeline files
→ Starting parallel analysis...
mkdir -p .project/session
git status --porcelain | sort > .project/session/pre-skill-status.txt
echo '{"feature":"{feature-name}","skill":"refactor","startedAt":"{ISO timestamp}"}' > .project/session/active-{feature-name}.json
Todo: mark FASE 0 → completed, FASE 1 → in_progress.
Goal: per feature, run three focused Explore agents in parallel (reuse / quality / efficiency), then merge + triage into CLEAN vs HAS_FINDINGS.
Why three lenses: one monolithic prompt with 6 categories dilutes focus and produces noise. Three separate lenses yield sharper findings per domain. Learned from /simplify-runs — see the plan in .claude/plans/ (2026-04).
Lens definitions (see also shared/PATTERNS.md if present):
- shared/TOKENS.md — frontend files only: .tsx/.jsx/.vue/.svelte
Security stays in the Quality lens (a separate security agent is overkill; for a deep security review there is dev-owasp).
Determine the lens strategy per feature:
- pipeline_files[feature].length < 4 → single-lens mode: one combined agent with all three lenses in the prompt (splitting yields too little signal for too much token overhead)
- length >= 4 → three-lens mode: three agents in parallel per feature
Concurrency budget: max 10 concurrent agents in total. If sum(lens_count_per_feature) > 10: batch features into groups. E.g. 5 features × 3 lenses = 15 → batch 3 features first (9 agents), then the rest.
Model default: all lens agents run on Sonnet. A Haiku switch for the Reuse lens is a future optimization — do not enable it without an A/B measurement of finding quality.
Launch agents IN PARALLEL according to the lens strategy (see shared/SKILL-PATTERNS.md#parallel-dispatch for dispatch criteria and integration steps).
Universal prompt header (every lens, every mode gets this):
Feature: {feature-name}
Pipeline files:
{list of pipeline_files paths}
{if pipeline_diff[feature] exists:}
FOCUS HINT — these lines are new/changed in this feature; scan
them with priority (but report issues on other lines too):
```diff
{pipeline_diff[feature]}
{/if}
PROJECT CONVENTIONS: {context.patterns or "not available — use CLAUDE.md as fallback"} If a pattern is consistent with project conventions → do NOT report it. Note: a pattern prefixed "Code maturity:" indicates how aggressively you may refactor — respect the stated stance (e.g. no over-abstractions for student/prototype projects).
DISCIPLINE:
[IMPACT|CATEGORY] file:line — probleem — concrete fix in 1 zin
**Lens-specific body** — choose one of three (or all three combined in single-lens mode):
**(A) REUSE lens body:**
LENS: Reuse
Scan for:
EXAMPLES:
✓ Report: 3 tools with identical JSON.stringify({text, sources}) wrapping → extract a formatResult() helper
✓ Report: hand-rolled lstrip/rstrip + regex where path.basename() exists
✗ Skip: two functions with 3 similar lines (too small for abstraction, especially with Code maturity: student)
✗ Skip: an abstraction used only 2× that does not make the call sites clearer
**(B) QUALITY lens body:**
LENS: Quality
Scan for:
SECURITY:
CLARITY & QUALITY:
- Naming: timeoutMs not t, rawHtml vs safeHtml, userIdOwned not id. Primitives without a unit in the name = smell.
- Silent fallbacks (?? "" that hides missing data, unwrap without trace)
COLD-READER (can a new reader understand this without opening 3 files?):
EXAMPLES:
✓ Report: msg.constructor.name === "HumanMessage" instead of an isHumanMessage(msg) type guard
✓ Report: dead exported function without callers (empty body with a TODO comment)
✓ Report: 4-level nested if/else where early returns flatten it
✓ Report: function charge(ctx) only reads ctx.userId + ctx.amount → charge(userId, amount)
✓ Report: const t = 5000 → const timeoutMs = 5000
✓ Report: 7 mutable locals in one loop body, the reader loses track → extract a state record or split the loop
✗ Skip: a comment explaining a non-obvious invariant (WHY is valuable)
✗ Skip: an explicit intermediate variable instead of an inline expression (clarity > compact, naming as documentation)
✗ Skip: a thin adapter at a framework seam (middleware, Express handler) — shallowness IS the job
✗ Skip: a context param where the framework contract requires it (middleware signature)
**(C) EFFICIENCY lens body:**
LENS: Efficiency
Scan for:
EXAMPLES:
✓ Report: for (const c of chars) await loadBackstory(c) → Promise.all(chars.map(loadBackstory))
✓ Report: a userStore Map that grows per user without TTL/LRU
✓ Report: similaritySearch(q, 8).filter(...).slice(0, 3) → use the store's filter-callback argument
✗ Skip: an O(n) loop over a 5-item array (micro-optimization without impact)
✗ Skip: JSON.stringify in a non-hot-path debug log
**Single-lens mode** (feature with <4 files): merge all three bodies under one agent, keep the DISCIPLINE rules. One output block with all findings.
Output format (elke lens-agent retourneert):
ANALYSIS_START
FEATURE: {name}
LENS: reuse | quality | efficiency | combined
STATUS: CLEAN | HAS_FINDINGS
ARCHITECTURE: libs={list} | patterns={list} | uncovered={list or "-"}
FINDINGS:
- [HIGH|SEC] path/to/file.js:42 — problem description — concrete fix
- [MED|DRY] a.js:10 ↔ b.js:55 — problem — fix
- [LOW|CLARITY] c.js:120 — problem — fix
SKIPPED (balance):
- path:line — short rationale for why this was deliberately not reported
POSITIVES:
- observation
ANALYSIS_END
Impact tags: HIGH (security, breaking bug, memory leak), MED (DRY, efficiency, clarity on a hot path), LOW (cosmetic, micro-clarity).
Category-tags: SEC, DRY, EFF, CLARITY, OVERENG, STACK.
Merge lens outputs per feature:
For three-lens features: combine the three FINDINGS lists into one. Dedup on file:line + fix (the same issue spotted by multiple lenses → 1 entry, merge the category tags).
STATUS per feature = CLEAN if all three lenses are CLEAN, otherwise HAS_FINDINGS.
Parsing agent results:
Per agent:
Extract the ANALYSIS_START..ANALYSIS_END block from TaskOutput.
Triage:
CLEAN features → early-exit, skip FASE 2-4.
If ALL features are CLEAN → jump directly to FASE 5 (no approval).
Output:
PARALLEL ANALYSIS COMPLETE
| Feature | Pipeline Files | Status | Findings |
|---------|---------------|--------|----------|
| {name1} | {N} | CLEAN | 0 |
| {name2} | {M} | HAS_FINDINGS | {X} |
| ... | ... | ... | ... |
Summary: {clean_count} clean, {findings_count} with findings
{if all clean:}
→ All features clean! Skipping to completion...
{if has findings:}
→ Proceeding with {findings_count} feature(s) to research decision...
Todo: mark FASE 1 → completed, FASE 2 → in_progress.
Goal: One research decision for all affected features combined (not per-feature).
Steps:
Aggregate architecture info from all HAS_FINDINGS features:
uncovered = used_libraries - baseline_libraries - refactor_pattern_libraries
Read stack baseline:
.claude/research/stack-baseline.md (if it exists)
Decide: is Context7 research needed?
| Signal | Research needed? |
|---|---|
| Stack baseline + refactor-patterns cover all libraries | NO |
| Findings are concrete, directly actionable | NO |
| Uncovered libraries found in analysis | YES — research those specific libraries |
| Complex security concerns (auth, crypto, injection) | YES — research security best practices |
| No stack baseline exists at all | YES — research core stack patterns |
If research NOT needed → proceed directly to FASE 3.
If research needed → spawn one Explore agent (subagent_type: Explore, thoroughness: "very thorough") to do all research in an isolated context. This keeps Context7 results out of the main session.
Determine which research domains to include based on findings:
| Domain | Include when |
|---|---|
| Security | Security patterns found OR auth/crypto/input flows |
| Performance | N+1 patterns, heavy loops, or caching opportunities |
| Quality | Complex abstractions or unclear patterns |
| Error handling | Missing error handling in critical paths |
Agent prompt — include only domains identified as needed:
Research best practices for a refactoring task.
Tech stack: {from CLAUDE.md}
Stack baseline: {from stack-baseline.md, or "none"}
Aggregated analysis:
{ANALYSIS_START..ANALYSIS_END blocks from all HAS_FINDINGS features}
{If security domain needed:}
SECURITY:
- resolve-library-id + query-docs for: {relevant frameworks}
- Focus: input validation, auth patterns, injection prevention, OWASP
{If performance domain needed:}
PERFORMANCE:
- resolve-library-id + query-docs for: {relevant frameworks}
- Focus: N+1 queries, caching, eager loading, indexing, resource usage
{If quality domain needed:}
QUALITY:
- resolve-library-id + query-docs for: {relevant frameworks}
- Focus: design patterns, SOLID, DRY, complexity reduction
{If error-handling domain needed:}
ERROR HANDLING:
- resolve-library-id + query-docs for: {relevant frameworks}
- Focus: exception patterns, retry logic, graceful degradation
RETURN FORMAT:
RESEARCH_START
Security: {3-5 bullet points: vulnerabilities found, best practices, framework features}
Performance: {3-5 bullet points: optimization patterns, caching strategies, query fixes}
Quality: {3-5 bullet points: design patterns, refactoring approaches, conventions}
Error handling: {3-5 bullet points: exception patterns, resilience, logging}
RESEARCH_END
Only include sections for domains you were asked to research.
If uncovered libraries found → also update refactor-patterns.md:
Output:
Parse the agent's RESEARCH_START...END block. Display:
RESEARCH DECISION
| Source | Libraries Covered |
|--------|------------------|
| stack-baseline.md | {list} |
| refactor-patterns.md | {list} |
| Uncovered | {list or "none"} |
{if no research:}
Research: Skipped (existing knowledge sufficient)
{if research:}
Research: Explore agent ({domains researched})
Refactor patterns updated: {yes/no}
→ Ready for combined plan.
Todo: mark FASE 2 → completed, FASE 3 → in_progress.
Goal: One plan combining ALL findings from ALL affected features, one user approval (unless on the --quick path).
Steps:
Create ranked improvements list:
Combine all findings from all HAS_FINDINGS features:
Rank by the [IMPACT|CATEGORY] tags from the FASE 1 findings.
Aggregate SKIPPED (balance) entries from all lens agents per feature.
Dedup on file:line + rationale. This list shows the user what the skill deliberately chooses not to fix, so they can override ("fix that one anyway").
Evaluate the --quick auto-apply path:
Trigger:
- /dev-refactor --quick {feature} in the user input
- Exception: a Code maturity: library pattern in context.patterns — library projects always get approval
Behavior on the quick path:
- Commit message refactor(quick) instead of refactor(batch)/refactor({feature})
- Revert: /rewind <hash> with saved_hash from FASE 4
For any explicit --quick that does not meet the auto conditions: fall back to the normal approval flow and warn in the output why (e.g. "--quick ignored: 2 HIGH findings found").
Present improvements with before/after code:
REFACTOR PLAN ({N} features, {M} improvements)
🔴 HIGH: [X] improvements (security, breaking risk)
🟡 MED: [Y] improvements (performance, DRY, efficiency)
🟢 LOW: [Z] improvements (clarity, simplification)
── {feature-1} ──
1. 🔴 {file}:{line} — {issue} → {fix}
Before: {code snippet}
After: {proposed change}
2. 🟡 {file}:{line} — {issue} → {fix}
Before: {code snippet}
After: {proposed change}
── {feature-2} ──
3. 🟡 {file}:{line} — {issue} → {fix}
...
── Deliberately not fixed ──
- {file:line} {pattern} — {short rationale}
- {file:line} {pattern} — {short rationale}
(skip this section if there are 0 SKIPPED entries)
──────────────────
Files to be modified: [count]
- {file1} ([N] changes) — {feature}
- {file2} ([M] changes) — {feature}
Per-feature rollback: YES (feature A succeeds, B fails → only B rolled back)
{if quick-pad active:}
⚡ QUICK MODE — approval is skipped, changes are applied directly.
Revert afterwards via /rewind.
Ask for scope (skip this step in quick mode):
Use AskUserQuestion tool:
If "Choose per feature":
Features with findings:
1. {feature-1}: {N} findings ({HIGH}/{MED}/{LOW})
2. {feature-2}: {N} findings ({HIGH}/{MED}/{LOW})
...
Question: "Which features do you want to refactor? Give numbers (e.g. 1, 3 or all)."
Parse → approved set. Empty input or "none" → all features get CLEAN status.
If "Also include Deliberately-not-fixed" → show the SKIPPED list in a second AskUserQuestion (multiSelect) so the user can pick specifically which ones should still be included, and promote those to improvements.
Only approved features proceed to FASE 4. Non-selected features get CLEAN status.
The user can also "Cancel" via the built-in "Other" option → EXIT with "Refactor cancelled by user".
Todo: mark FASE 3 → completed, FASE 4 → in_progress.
Goal: Apply approved improvements and test, with per-feature rollback isolation.
Priority order for each feature (execute in this sequence):
Steps:
Initialize change tracking:
git rev-parse HEAD # Store as saved_hash for global rollback
For each feature with approved improvements:
a. Track files for targeted rollback (no git stash needed — file-level tracking is sufficient):
Initialize empty lists: modified_files[feature_name] = [], created_files[feature_name] = []
b. Apply improvements using Edit tool:
- modified_files[feature_name] = [list of existing files changed]
- created_files[feature_name] = [list of new files created]
c. Run the test suite after this feature's changes:
Detect test command from CLAUDE.md ### Testing section
All pass → mark feature as APPLIED, continue to next feature
Any fail → analyze before rollback:
| Test failure type | Action |
|---|---|
| Test expects old behavior that was intentionally improved | Update test, re-run |
| Genuine regression (broke unrelated functionality) | Rollback THIS feature only |
| Flaky or environment-dependent | Re-run once, then decide |
If test update needed:
Per-feature rollback (only this feature, not others):
git checkout -- {modified_files[feature_name]}
rm -f {created_files[feature_name]}
Mark feature as ROLLED_BACK with reason. Continue to next feature.
d. Report per feature:
✓ {feature-name}: {N} improvements applied
or:
✗ {feature-name}: rolled back ({reason})
Non-breaking rule:
Output:
IMPROVEMENTS APPLIED
| Feature | Status | Improvements | Files Modified |
|---------|--------|-------------|----------------|
| {name1} | APPLIED | {N} | {M} |
| {name2} | APPLIED | {N} | {M} |
| {name3} | ROLLED_BACK | 0 | 0 ({reason}) |
→ Documenting results...
Todo: mark FASE 4 → completed, FASE 5 → in_progress.
Goal: Proportional documentation, single backlog update, single commit.
Write feature.json per feature (read-modify-write):
If N > 1 features: read all .project/features/{name}/feature.json files in parallel, mutate each in memory, and write them all back in parallel.
Add a refactor section per feature:
Always present in refactor: status, improvements (object with categories), decisions[], positiveObservations[], failureAnalysis, pendingImprovements[].
Per status variant:
- CLEAN: refactor.status = "CLEAN", empty improvements, only positiveObservations
- REFACTORED: refactor.status = "REFACTORED", populated improvements per category, decisions with rationale
- ROLLED_BACK: refactor.status = "ROLLED_BACK", failureAnalysis (markdown string), pendingImprovements[]
Update top-level feature status:
- CLEAN: status: "DONE" (unchanged)
- REFACTORED: status: "DONE" (unchanged)
- ROLLED_BACK: status: "DONE" (unchanged — refactor.status documents the rollback)
Do NOT overwrite existing sections.
1b. Learning extraction — for features with status REFACTORED or CLEAN:
Read the just-written feature.json.refactor per feature:
- decisions[] → type pattern, source extracted
- positiveObservations[] → type observation, source inferred
- failureAnalysis is narrative prose, not atomic → do not extract from it
Filter: only items that are relevant beyond this feature. Skip local refactor logistics ("moved helper to utils.js"). Guideline: if a decision or observation would also be useful for another project or feature → extract; otherwise skip.
Schema:
{
"date": "YYYY-MM-DD",
"feature": "{feature-name}",
"type": "pattern|observation",
"source": "extracted|inferred",
"summary": "Max 200 chars"
}
No pitfall type — refactoring does not discover bugs.
Dedup via Jaccard(0.55), the same logic as dev-verify Step 3b. If no learnings are found → skip silently.
Append to project-context.json → learnings[] (written during the parallel sync in step 2).
Parallel sync (backlog + dashboard + conditional context sync) — follow the shared/SYNC.md 3-File Sync Pattern; skill-specific mutations below:
Read in parallel (skip any file that does not exist):
- .project/backlog.html
- .project/project.json
- .project/project-context.json
Mutate in memory:
Backlog (see shared/BACKLOG.md): status stays "DONE" for all features (CLEAN, REFACTORED, and ROLLED_BACK). Per feature, set the refactor field and, on success, the shipped field:
- Success: f.refactor = "REFACTORED", f.shipped = true, f.shippedAt = <ISO date>, f.shippedSha = <git sha> (see below), remove transition (if present)
- Rollback: f.refactor = "ROLLED_BACK", remove transition (if present); shipped stays false — the item remains in the "Waiting for refactor" zone
Git sha for shippedSha:
git rev-parse HEAD
Use the HEAD sha after the auto-commit of FASE 5.3.
Set data.updated to the current date.
Dashboard (see shared/DASHBOARD.md):
- stack.packages
- endpoints
- data.entities
- features array: status stays "DONE"; set the refactor field analogous to the backlog; also set shipped, shippedAt, shippedSha for CLEAN/REFACTORED features
- recentChanges[] array (create if missing): { name, type, description, shipped: true, shippedAt }
Context sync (conditional, writes to project-context.json) — only when REFACTORED features contain structural changes:
Trigger if ANY of: files renamed/moved, new files created via extraction, patterns fundamentally changed. Skip if: only internal code quality, or performance without structural impact.
When triggered (in project-context.json):
- context.structure → overwrite the full tree with the changed file paths
- context.patterns → merge the changed patterns
- context.updated → current date
- architecture.components → update existing components (status, src, test, connects_to[] as typed edges { to, type } — see shared/DASHBOARD.md Edge values); add new ones if components were renamed/split. Follow the component-first model from shared/DASHBOARD.md.
- Log either context: {N} updates ({keys touched}) or context: no updates needed
Write back in parallel:
- backlog.html (keep <script> tags intact)
- project.json (stack, features, endpoints, data)
- project-context.json (context, architecture — create if missing)
Scoped auto-commit (only this skill's changes):
Compare current git status with baseline from FASE 0:
git status --porcelain | sort > /tmp/current-status.txt
Categorize files by comparing with .project/session/pre-skill-status.txt:
- Lines not present in the baseline are this skill's changes → git add them automatically
If the baseline file doesn't exist, fall back to git add -A.
git commit -m "$(cat <<'EOF'
refactor(batch): {summary}
{N} features analyzed, {clean} clean, {refactored} refactored, {rolled_back} rolled back
{for each REFACTORED feature:}
- {feature}: {improvement count} improvements ({categories})
{for each CLEAN feature:}
- {feature}: clean (no changes needed)
{for each ROLLED_BACK feature:}
- {feature}: rolled back ({reason})
EOF
)"
For single-feature commits, use the existing format:
refactor({feature}): {summary}
Clean up: rm -f .project/session/pre-skill-status.txt .project/session/active-{feature-name}.json /tmp/current-status.txt
3b. Feature archiving (only features with a feature.json, not small items without a pipeline):
For each CLEAN or REFACTORED feature whose .project/features/{name}/feature.json exists:
mkdir -p .project/features/archive
mv .project/features/{name}/ .project/features/archive/{shippedAt-date}-{name}/
{shippedAt-date} = the date from the just-written shippedAt field (YYYY-MM-DD format). (ROLLED_BACK features stay in .project/features/.)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
REFACTOR COMPLETE
{N} feature(s) processed:
{for each CLEAN feature:}
✓ {name} — clean (no changes needed)
{for each REFACTORED feature:}
✓ {name} — {improvement-count} improvements applied
{for each ROLLED_BACK feature:}
✗ {name} — rolled back ({reason})
Refactoring complete. Features remain in DONE status.
Next steps:
1. /dev-define {next-feature} → next feature from the backlog
2. /project-plan → revisit the backlog if scope has changed
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Worktree integration hint — add one extra line to the completion block if both conditions are true:
- The current branch matches the worktree-* pattern (git branch --show-current)
Append:
💡 Run /core-merge {feature-name} to integrate into main/develop
Todo: mark FASE 5 → completed.
No features found → exit: "Run /dev-define and /dev-build first"
No test results for any feature → exit: "Run /dev-verify first"
Some features missing test results → remove from queue, warn, continue with the rest
No files in feature → skip the feature, warn: "No code files found in feature.json for {feature}"
Context7 unavailable → skip refactor-patterns generation, proceed with universal patterns only
Partial Context7 results → generate refactor-patterns.md with the available data, note the gaps
CLAUDE.md has no ### Stack section → skip stack-specific patterns, use universal patterns only
Explore agent fails for a feature → skip that feature, warn, continue with the rest
All Explore agents fail → exit: "Analysis failed — try again or run on a single feature"
Agent output truncated → use Grep/Read to find the ANALYSIS_START..ANALYSIS_END block
Tests fail after refactoring a feature → per-feature rollback, continue with the next feature
Test framework not detected → ask the user which command to run
Tests hang → kill the process, rollback the current feature
git checkout fails for feature files → report manual recovery steps:
- saved_hash from FASE 4 step 1
- modified_files[feature_name] and created_files[feature_name]
- /rewind in Claude Code to go back to an earlier point
This skill must NEVER:
This skill must ALWAYS:
- Use .project/project.json context for project-specific conventions during analysis