code-simplifier
// [Code Quality] Use when you need to simplify and refine code for clarity, consistency, and maintainability while preserving all functionality.
| name | code-simplifier |
| description | [Code Quality] Use when you need to simplify and refine code for clarity, consistency, and maintainability while preserving all functionality. |
Codex compatibility note:
- Invoke repository skills with `$skill-name` in Codex; this mirrored copy rewrites legacy Claude `/skill-name` references.
- Prefer the `plan-hard` skill for planning guidance in this Codex mirror.
- Task tracker mandate: BEFORE executing any workflow or skill step, create/update task tracking for all steps and keep it synchronized as progress changes.
- User-question prompts mean to ask the user directly in Codex.
- Ignore Claude-specific mode-switch instructions when they appear.
- Strict execution contract: when a user explicitly invokes a skill, execute that skill protocol as written.
- Subagent authorization: when a skill is user-invoked or AI-detected and its protocol requires subagents, that skill activation authorizes use of the required `spawn_agent` subagent(s) for that task.
- Do not skip, reorder, or merge protocol steps unless the user explicitly approves the deviation first.
- For workflow skills, execute each listed child-skill step explicitly and report step-by-step evidence.
- If a required step/tool cannot run in this environment, stop and ask the user before adapting.
Codex does not receive Claude hook-based doc injection. When coding, planning, debugging, testing, or reviewing, open project docs explicitly using this routing.
Always read:
- docs/project-config.json (project-specific paths, commands, modules, and workflow/test settings)
- docs/project-reference/docs-index-reference.md (routes to the full docs/project-reference/* catalog)
- docs/project-reference/lessons.md (always-on guardrails and anti-patterns)

Situation-based docs:
- backend-patterns-reference.md, domain-entities-reference.md, project-structure-reference.md
- frontend-patterns-reference.md, scss-styling-guide.md, design-system/README.md
- feature-docs-reference.md
- integration-test-reference.md
- e2e-test-reference.md
- code-review-rules.md plus domain docs above, based on changed files

Do not read all docs blindly. Start from docs-index-reference.md, then open only the files relevant to the task.
[BLOCKING] Execute skill steps in declared order. NEVER skip, reorder, or merge steps without explicit user approval.
[BLOCKING] Before each step or sub-skill call, update task tracking: set `in_progress` when a step starts, set `completed` when it ends.
[BLOCKING] Every completed/skipped step MUST include brief evidence or an explicit skip reason.
[BLOCKING] If Task tools are unavailable, create and maintain an equivalent step-by-step plan tracker with the same status transitions.
Goal: Simplify and refine code for clarity, consistency, and maintainability while preserving all functionality.
MANDATORY IMPORTANT MUST ATTENTION: Plan a task to READ:
- docs/project-reference/code-review-rules.md: anti-patterns, review checklists (READ FIRST)
- project-structure-reference.md: project patterns/structure

If not found, search for: project documentation, coding standards, architecture docs.
Workflow:
Key Rules:
When task involves frontend/UI changes:
MUST ATTENTION: classify before simplifying; detection drives focus and sub-agent routing:
| Artifact Type | Detection | Key Focus |
|---|---|---|
| Backend (C#/.NET) | .cs files | Entity expressions, fluent API, DRY via OOP, SOLID |
| Frontend (TS/HTML) | .ts, .html, .scss | BEM, store base, subscription cleanup, component base |
| Tests | *Test.cs, *.spec.ts | Assertions, WaitUntilAsync, data isolation |
| Config/Generated | Migrations, *.generated.* | SKIP: NEVER simplify generated/migration code |
Sub-agent routing by artifact:
| Artifact | Sub-agent |
|---|---|
| Source code/diffs | code-reviewer |
| Security-sensitive | security-auditor |
| Performance-critical | performance-optimizer |
| Plans/docs/specs | general-purpose |
Skeptical-first: Verify before simplifying. Every change needs proof it preserves behavior.
- file:line evidence of what was verified

Dimension-based reasoning replaces fixed checklists. Each dimension has a "Think:" prompt forcing first-principles reasoning.
Think: Would a new engineer understand this in 30 seconds? What forces multiple file traces?
Think: Pattern appearing in ≥3 places? What base class/generic eliminates duplication?
- Same-suffix classes (*Entity, *Dto, *Service) → shared base

Think: Logic in the lowest layer that can own it? Could moving it down enable reuse?
- Entity/Model → Domain Service → Application Service → Controller (logic belongs lowest)

Think: What is the cognitive load? Can nesting/conditionals flatten?
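The nesting dimension above can be sketched as follows. This is a minimal illustration with a hypothetical `processOrder` function (the names and fields are not from any real codebase): deep conditionals flatten into guard clauses without changing behavior.

```javascript
// Before: nested conditionals force the reader to track three levels of state.
function processOrderBefore(order) {
  if (order) {
    if (order.items.length > 0) {
      if (!order.paid) {
        return "charge";
      } else {
        return "ship";
      }
    } else {
      return "empty";
    }
  } else {
    return "invalid";
  }
}

// After: guard clauses exit early; each line handles exactly one case.
function processOrder(order) {
  if (!order) return "invalid";
  if (order.items.length === 0) return "empty";
  if (!order.paid) return "charge";
  return "ship";
}
```

Both versions return identical results for every input, which is the proof-of-preservation this skill requires before accepting the simplification.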
MANDATORY IMPORTANT MUST ATTENTION
- Paging: ALL list queries MUST use pagination. NEVER unbounded `GetAll()`, `ToList()`, or `Find()` without `Skip/Take` or cursor-based paging.
- Indexes: ALL filter fields, foreign keys, and sort columns MUST have database indexes. Entity expressions must match index field order. Collections need index management methods.
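A minimal sketch of the paging rule, assuming a hypothetical repository whose `find` accepts `skip`/`take` (the repo API and names are illustrative, not a real library):

```javascript
// Anti-pattern (unbounded, pulls every row into memory):
//   const users = await repo.getAll();

// Bounded read: every list query goes through explicit skip/take.
async function getUserPage(repo, page, pageSize = 50) {
  const skip = page * pageSize; // page is zero-based
  const items = await repo.find({ skip, take: pageSize });
  return { items, page, pageSize };
}
```

The call site stays one line, and no query can escape the bound because the only read path takes a page.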
(See docs/project-reference/backend-patterns-reference.md.)

Before simplifying, trace what depends on the target:
python .claude/scripts/code_graph trace <file> --direction downstream --json
Verify that the simplified code preserves the same interface for all traced consumers. Cross-service MESSAGE_BUS consumers are especially fragile; they may depend on the exact message shape.
Additional queries:
- python .claude/scripts/code_graph query callers_of <function> --json
- python .claude/scripts/code_graph query importers_of <module> --json
- python .claude/scripts/code_graph batch-query file1 file2 --json

spawn_agent(agent_type="code-simplifier", prompt="Review and simplify [target files]")
Example:
// Before
function getData() {
const result = fetchData();
if (result !== null && result !== undefined) {
return result;
} else {
return null;
}
}
// After
function getData() {
return fetchData() ?? null;
}
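A second, complementary sketch with a hypothetical `user` shape (names illustrative): optional chaining plus nullish coalescing can replace nested guards, but note the refactor is only behavior-preserving if the field can never hold a falsy non-null value. Otherwise the original truthiness check and the new nullish check diverge, and the change must be flagged rather than applied silently.

```javascript
// Before: truthiness checks — "" and 0 are treated as missing.
function getCityBefore(user) {
  if (user && user.address && user.address.city) {
    return user.address.city;
  }
  return "unknown";
}

// After: ?. and ?? treat ONLY null/undefined as missing.
// Behavior differs for city === "" — verify that case cannot occur
// (or is intentional) before accepting this simplification.
function getCity(user) {
  return user?.address?.city ?? "unknown";
}
```

This is exactly the skeptical-first rule above in miniature: the shorter form is not automatically equivalent.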
After simplifications are applied, verification requires a fresh sub-agent review to eliminate confirmation bias. See the SYNC blocks below.
When used standalone (outside a review workflow), run $workflow-review-changes to trigger the full review cycle with fresh sub-agent re-review.
MANDATORY IMPORTANT MUST ATTENTION (NO EXCEPTIONS): If NOT already in a workflow, use a direct user question to ask the user. Do NOT decide this is "simple enough to skip"; the user decides:
- Activate the `quality-audit` workflow (Recommended): code-simplifier → review-changes → code-review
- Execute `$code-simplifier` directly: run standalone
MANDATORY IMPORTANT MUST ATTENTION (NO EXCEPTIONS): after completing, use a direct user question:
Completion ≠ Correctness. Before reporting work done, prove it:
- Grep every removed name. Extraction/rename/delete → grep confirms 0 dangling refs across ALL file types.
- Ask WHY before changing. Existing values are intentional until proven otherwise. No "fix" without traced rationale.
- Verify ALL outputs. One build passing ≠ all builds passing. Check every affected stack.
- Evaluate pattern fit. Copying nearby code? Verify preconditions match: same scope, lifetime, base class, constraints.
- New artifact = wired artifact. Created? Prove registered, imported, reachable by all consumers.
[IMPORTANT] Use task tracking to break ALL work into small tasks BEFORE starting, including tasks for each file read. For simple tasks, MUST ATTENTION: ask the user whether to skip.
Prerequisites: MUST ATTENTION READ before executing:
- docs/project-reference/domain-entities-reference.md: domain entity catalog, relationships, cross-service sync (read when the task involves business entities/models; read directly when relevant, do not rely on hook-injected conversation text)

External Memory: For complex/lengthy work, write findings to plans/reports/. Prevents context loss and serves as a deliverable.
Evidence Gate (MANDATORY IMPORTANT MUST ATTENTION): every claim, finding, and recommendation requires `file:line` proof or traced evidence (confidence >80% to act, <80% verify first).
OOP & DRY Enforcement (MANDATORY IMPORTANT MUST ATTENTION): flag duplicated patterns for base class extraction. Same-suffix classes (`*Entity`, `*Dto`, `*Service`) MUST inherit a common base. Verify the stack has linting/analyzers configured.
UI System Context: For ANY task touching `.ts`, `.html`, `.scss`, or `.css` files, MUST ATTENTION READ before implementing:
- docs/project-reference/frontend-patterns-reference.md: component base classes, stores, forms
- docs/project-reference/scss-styling-guide.md: BEM methodology, SCSS variables, mixins, responsive
- docs/project-reference/design-system/README.md: design tokens, component inventory, icons

Reference docs/project-config.json for project-specific paths.
Shared Protocol Duplication Policy: Inline protocol content in skills (wrapped in `<!-- SYNC:tag -->`) is INTENTIONAL duplication. Do NOT extract, deduplicate, or replace with file references. AI compliance drops significantly when protocols are behind file-read indirection. To update: edit `.claude/skills/shared/sync-inline-versions.md` first, then grep `SYNC:protocol-name` and update all occurrences.
Fix-Triggered Re-Review Loop: Re-review is triggered by a FIX CYCLE, not by a round number. Review purpose: review → if issues → fix → re-review, until a round finds no issues. A clean review ENDS the loop; no further rounds required.

Round 1: Main-session review. Read target files, build understanding, note issues. Output findings + verdict (PASS / FAIL).
Decision after Round 1:
- No issues found (PASS, zero findings) → review ENDS. Do NOT spawn a fresh sub-agent for confirmation.
- Issues found (FAIL, or any non-zero findings) → fix the issues, then spawn a fresh sub-agent for Round 2 re-review.
Fresh sub-agent re-review (after every fix cycle): Spawn a NEW `spawn_agent` tool call; never reuse a prior agent. The sub-agent re-reads ALL files from scratch with ZERO memory of prior rounds. See SYNC:fresh-context-review for the spawn mechanism and SYNC:review-protocol-injection for the canonical Agent prompt template. Each fresh round must catch:
- Cross-cutting concerns missed in the prior round
- Interaction bugs between changed files
- Convention drift (new code vs existing patterns)
- Missing pieces that should exist but don't
- Subtle edge cases the prior round rationalized away
- Regressions introduced by the fixes themselves
Loop termination: After each fresh round, repeat the same decision: clean → END; issues → fix → next fresh round. Continue until a round finds zero issues, or 3 fresh-subagent rounds max, then escalate to the user via a direct user question.
Rules:
- A clean Round 1 ENDS the review; no mandatory Round 2
- NEVER skip the fresh sub-agent re-review after a fix cycle (every fix invalidates the prior verdict)
- NEVER reuse a sub-agent across rounds; every iteration spawns a NEW Agent call
- Main agent READS sub-agent reports but MUST NOT filter, reinterpret, or override findings
- Max 3 fresh-subagent rounds per review; if still FAIL, escalate via a direct user question (do NOT silently loop)
- Track round count in conversation context (session-scoped)
- Final verdict must incorporate ALL rounds executed
The report must include `## Round N Findings (Fresh Sub-Agent)` for every round N≥2 that was executed.
Fresh Sub-Agent Review: Eliminate orchestrator confirmation bias via isolated sub-agents.
Why: The main agent knows what it (or `$cook`) just fixed and rationalizes findings accordingly. A fresh sub-agent has ZERO memory, re-reads from scratch, and catches what the main agent dismissed. Sub-agent bias is mitigated by (1) fresh context, (2) verbatim protocol injection, (3) the main agent not filtering the report.

When: ONLY after a fix cycle. A review round that finds zero issues ENDS the loop; do NOT spawn a confirmation sub-agent. A review round that finds issues triggers: fix → fresh sub-agent re-review.
How:
- Spawn a NEW `spawn_agent` tool call; use the `code-reviewer` subagent_type for code reviews, `general-purpose` for plan/doc/artifact reviews
- Inject ALL required review protocols VERBATIM into the prompt; see SYNC:review-protocol-injection for the full list and template. Never reference protocols by file path; AI compliance drops behind file-read indirection (see SYNC:shared-protocol-duplication-policy)
- Sub-agent re-reads ALL target files from scratch via its own tool calls; never pass file contents inline in the prompt
- Sub-agent writes a structured report to plans/reports/{review-type}-round{N}-{date}.md
- Main agent reads the report and integrates findings into its own report; it DOES NOT override or filter
Rules:
- SKIP the fresh sub-agent when the prior round found zero issues (no fixes = nothing new to verify)
- NEVER skip the fresh sub-agent after a fix cycle; every fix invalidates the prior verdict
- NEVER reuse a sub-agent across rounds; every fresh round spawns a NEW `spawn_agent` call
- Max 3 fresh-subagent rounds per review; escalate via a direct user question if still failing; do NOT silently loop or fall back to any prior protocol
- Track iteration count in conversation context (session-scoped, no persistent files)
Review Protocol Injection: Every fresh sub-agent review prompt MUST embed 10 protocol blocks VERBATIM. The template below has ALL 10 bodies already expanded inline. Copy the template wholesale into the Agent call's `prompt` field at runtime, replacing only the {placeholders} in the Task / Round / Reference Docs / Target Files / Output sections with context-specific values. Do NOT touch the embedded protocol sections.

Why inline expansion: Placeholder markers would force file-read indirection at runtime, and AI compliance drops significantly behind indirection (see SYNC:shared-protocol-duplication-policy). Therefore the template carries all 10 protocol bodies pre-embedded.
- code-reviewer: for code reviews (reviewing source files, git diffs, implementation)
- general-purpose: for plan / doc / artifact reviews (reviewing markdown plans, docs, specs)

spawn_agent({
description: "Fresh Round {N} review",
agent_type: "code-reviewer",
prompt: `
## Task
{review-specific task, e.g., "Review all uncommitted changes for code quality" | "Review plan files under {plan-dir}" | "Review integration tests in {path}"}
## Round
Round {N}. You have ZERO memory of prior rounds. Re-read all target files from scratch via your own tool calls. Do NOT trust anything from the main agent beyond this prompt.
## Protocols (follow VERBATIM; these are non-negotiable)
### Evidence-Based Reasoning
Speculation is FORBIDDEN. Every claim needs proof.
1. Cite file:line, grep results, or framework docs for EVERY claim
2. Declare confidence: >80% act freely, 60-80% verify first, <60% DO NOT recommend
3. Cross-service validation required for architectural changes
4. "I don't have enough evidence" is valid and expected output
BLOCKED until: Evidence file path (file:line) provided; Grep search performed; 3+ similar patterns found; Confidence level stated.
Forbidden without proof: "obviously", "I think", "should be", "probably", "this is because".
If incomplete, output: "Insufficient evidence. Verified: [...]. Not verified: [...]."
### Bug Detection
MUST check categories 1-4 for EVERY review. Never skip.
1. Null Safety: Can params/returns be null? Are they guarded? Optional chaining gaps? .find() returns checked?
2. Boundary Conditions: Off-by-one (< vs <=)? Empty collections handled? Zero/negative values? Max limits?
3. Error Handling: Try-catch scope correct? Silent swallowed exceptions? Error types specific? Cleanup in finally?
4. Resource Management: Connections/streams closed? Subscriptions unsubscribed on destroy? Timers cleared? Memory bounded?
5. Concurrency (if async): Missing await? Race conditions on shared state? Stale closures? Retry storms?
6. Stack-Specific: JS: === vs ==, typeof null. C#: async void, missing using, LINQ deferred execution.
Classify: CRITICAL (crash/corrupt) → FAIL | HIGH (incorrect behavior) → FAIL | MEDIUM (edge case) → WARN | LOW (defensive) → INFO.
### Design Patterns Quality
Priority checks for every code change:
1. DRY via OOP: Same-suffix classes (*Entity, *Dto, *Service) MUST share a base class. 3+ similar patterns → extract to a shared abstraction.
2. Right Responsibility: Logic in LOWEST layer (Entity > Domain Service > Application Service > Controller). Never business logic in controllers.
3. SOLID: Single responsibility (one reason to change). Open-closed (extend, don't modify). Liskov (subtypes substitutable). Interface segregation (small interfaces). Dependency inversion (depend on abstractions).
4. After extraction/move/rename: Grep ENTIRE scope for dangling references. Zero tolerance.
5. YAGNI gate: NEVER recommend patterns unless 3+ occurrences exist. Don't extract for hypothetical future use.
Anti-patterns to flag: God Object, Copy-Paste inheritance, Circular Dependency, Leaky Abstraction.
### Complexity Prevention (Ousterhout)
MANDATORY. Measure code by cost of change: one business change = one code change. Flag ALL 13:
1. Change amplification: >3 edit sites for a plausible future change = structural flaw. Reject.
2. Cognitive load: deep inheritance, long param lists, boolean traps, implicit ordering = reader overload.
3. Cross-cutting duplication at entry points: logging/error/validation/auth/tx reimplemented per handler → lift to middleware/interceptor/filter/decorator/aspect.
4. Leaked implementation technology: repos returning IQueryable/QuerySet/raw cursors/ORM entities → return finished results + intent-revealing methods.
5. Type-switch scattering: switch/if-chains on enum/discriminator in >1 place → polymorphism/strategy. New variant = 1 new file, not N edits.
6. Anemic models: getters/setters only, logic in services → move invariants/behavior onto the object (`order.Checkout()`, not `order.Status = ...`).
7. Primitive obsession: raw string/int/decimal for account/email/money/percent/date-range → value objects / records / structs validating once at construction.
8. Inline cross-cutting concerns: authz/tenant/audit/sanitization at the top of every handler → declarative markers (`@RequirePermission`), enforce centrally.
9. Shallow modules: tiny class, big interface wrapping little logic → inline or deepen.
10. Missing base class for repeated component/handler lifecycle: 3+ forms/CRUD handlers/list views reimplementing loading/dirty/submit/pagination → base class/hook/composable/mixin/trait.
11. Premature vs delayed abstraction: rule-of-three. First write it; second notice; third extract. No generic frameworks before real variation; no copy-paste for the 4th time.
12. Embedded utility logic not extracted: inline paging/retry/datetime/string parsing/URL building → extract to util/helper/extensions. Inline duplicates = duplicated bug surface.
13. Logic in the wrong (higher) layer: caller computing what the callee owns → downshift. Lowest responsible layer wins (Entity > Domain Service > App Service > Controller · Model/VM > Store > Component).
Pre-commit edit-site test (reject if the answer is "many"): Add new variant → 1 new file. Change HTTP error format → 1 middleware. Add timestamp to every entity → 1 base/interceptor. Add authorization to an endpoint → 1 declarative marker. Swap DB/ORM → data layer only. Change business calculation → 1 method on entity. Add loading pattern to forms → 1 base/hook. Add validation to primitive → 1 value-object ctor. Change paging/retry/datetime algorithm → 1 helper. Change entity derivation → 1 method on entity.
Heuristics: write the call site first · count edit sites for a plausible future change · pre-reuse scan (grep similar algorithms before writing) · layer placement test ("would a sibling caller re-derive this?") · open-case-for-future-reuse (don't rationalize silent duplication with pure YAGNI: extract now if cheap or track a TODO) · prefer removing code · surface assumptions at boundaries, hide details inside.
The measure of good code is the cost of change.
### Logic & Intention Review
Verify WHAT code does matches WHY it was changed.
1. Change Intention Check: Every changed file MUST serve the stated purpose. Flag unrelated changes as scope creep.
2. Happy Path Trace: Walk through one complete success scenario through changed code.
3. Error Path Trace: Walk through one failure/edge case scenario through changed code.
4. Acceptance Mapping: If plan context available, map every acceptance criterion to a code change.
NEVER mark review PASS without completing both traces (happy + error path).
### Test Spec Verification
Map changed code to test specifications.
1. From changed files → find TC-{FEAT}-{NNN} in docs/business-features/{Service}/detailed-features/{Feature}.md Section 15.
2. Every changed code path MUST map to a corresponding TC (or flag as "needs TC").
3. New functions/endpoints/handlers → flag for test spec creation.
4. Verify TC evidence fields point to actual code (file:line, not stale references).
5. Auth changes → TC-{FEAT}-02x exist? Data changes → TC-{FEAT}-01x exist?
6. If no specs exist → log the gap and recommend $tdd-spec.
NEVER skip test mapping. Untested code paths are the #1 source of production bugs.
### Fix-Layer Accountability
NEVER fix at the crash site. Trace the full flow, fix at the owning layer. The crash site is a SYMPTOM, not the cause.
MANDATORY before ANY fix:
1. Trace full data flow: Map the complete path from data origin to crash site across ALL layers (storage → backend → API → frontend → UI). Identify where bad state ENTERS, not where it CRASHES.
2. Identify the invariant owner: Which layer's contract guarantees this value is valid? Fix at the LOWEST layer that owns the invariant, not the highest layer that consumes it.
3. One fix, maximum protection: If the fix requires touching 3+ files with defensive checks, you are at the wrong layer; go lower.
4. Verify no bypass paths: Confirm all data flows through the fix point. Check for direct construction skipping factories, clone/spread without re-validation, raw data not wrapped in domain models, mutations outside the model layer.
BLOCKED until: Full data flow traced (origin → crash); Invariant owner identified with file:line evidence; All access sites audited (grep count); Fix layer justified (lowest layer that protects most consumers).
Anti-patterns (REJECT): "Fix it where it crashes" (crash site ≠ cause site; trace upstream); "Add defensive checks at every consumer" (scattered defense = wrong layer); "Both fix is safer" (pick ONE authoritative layer).
### Rationalization Prevention
AI skips steps via these evasions. Recognize and reject:
- "Too simple for a plan" → Simple + wrong assumptions = wasted time. Plan anyway.
- "I'll test after" → RED before GREEN. Write/verify the test first.
- "Already searched" → Show grep evidence with file:line. No proof = no search.
- "Just do it" → Still need task tracking. Skip depth, never skip tracking.
- "Just a small fix" → A small fix in the wrong location cascades. Verify file:line first.
- "Code is self-explanatory" → Future readers need an evidence trail. Document anyway.
- "Combine steps to save time" → Combined steps dilute focus. Each step has a distinct purpose.
### Graph-Assisted Investigation
MANDATORY when .code-graph/graph.db exists.
HARD-GATE: MUST run at least ONE graph command on key files before concluding any investigation.
Pattern: Grep finds files → trace --direction both reveals full system flow → Grep verifies details.
- Investigation/Scout: trace --direction both on 2-3 entry files
- Fix/Debug: callers_of on buggy function + tests_for
- Feature/Enhancement: connections on files to be modified
- Code Review: tests_for on changed functions
- Blast Radius: trace --direction downstream
CLI: python .claude/scripts/code_graph {command} --json. Use --node-mode file first (10-30x less noise), then --node-mode function for detail.
### Understand Code First
HARD-GATE: Do NOT write, plan, or fix until you READ existing code.
1. Search 3+ similar patterns (grep/glob); cite file:line evidence.
2. Read existing files in the target area; understand structure, base classes, conventions.
3. Run python .claude/scripts/code_graph trace <file> --direction both --json when .code-graph/graph.db exists.
4. Map dependencies via connections or callers_of; know what depends on your target.
5. Write the investigation to .ai/workspace/analysis/ for non-trivial tasks (3+ files).
6. Re-read the analysis file before implementing; never work from memory alone.
7. NEVER invent new patterns when existing ones work; match exactly or document the deviation.
BLOCKED until: Read target files; Grep 3+ patterns; Graph trace (if graph.db exists); Assumptions verified with evidence.
## Reference Docs (READ before reviewing)
- docs/project-reference/code-review-rules.md
- {skill-specific reference docs, e.g., integration-test-reference.md for integration-test-review; backend-patterns-reference.md for backend reviews; frontend-patterns-reference.md for frontend reviews}
## Target Files
{explicit file list OR "run git diff to see uncommitted changes" OR "read all files under {plan-dir}"}
## Output
Write a structured report to plans/reports/{review-type}-round{N}-{date}.md with sections:
- Status: PASS | FAIL
- Issue Count: {number}
- Critical Issues (with file:line evidence)
- High Priority Issues (with file:line evidence)
- Medium / Low Issues
- Cross-cutting findings
Return the report path and status to the main agent.
Every finding MUST have file:line evidence. Speculation is forbidden.
`
})
Replace only the {placeholders} in the Task / Round / Reference Docs / Target Files / Output sections with context-specific content. Use the code-reviewer subagent_type for code reviews and general-purpose for plan / doc / artifact reviews.

AI Mistake Prevention: Failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace the full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips, not just the happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether the bug is in the caller (wrong data) or the callee (wrong handling). Fix at the responsible layer; never patch the symptom site.
- Assume existing values are intentional; ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging: resist the nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes: apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding: don't pick silently. If a request has multiple interpretations, present each with an effort estimate and ask. Never assume the all-records, file-based, or more complex path.
Critical Thinking Mindset: Apply critical thinking and sequential thinking. Every claim needs traced proof; confidence >80% to act. Anti-hallucination: Never present a guess as fact. Cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence; certainty without evidence is the root of all hallucination.
Understand Code First: HARD-GATE: Do NOT write, plan, or fix until you READ existing code.
- Search 3+ similar patterns (`grep`/`glob`); cite `file:line` evidence
- Read existing files in the target area; understand structure, base classes, conventions
- Run `python .claude/scripts/code_graph trace <file> --direction both --json` when `.code-graph/graph.db` exists
- Map dependencies via `connections` or `callers_of`; know what depends on your target
- Write the investigation to `.ai/workspace/analysis/` for non-trivial tasks (3+ files)
- Re-read the analysis file before implementing; never work from memory alone
- NEVER invent new patterns when existing ones work; match exactly or document the deviation

BLOCKED until:
- [ ] Read target files
- [ ] Grep 3+ patterns
- [ ] Graph trace (if graph.db exists)
- [ ] Assumptions verified with evidence
Design Patterns Quality: Priority checks for every code change:
- DRY via OOP: Identify classes/modules with the same purpose, naming pattern, or lifecycle. Apply your knowledge of the project's language/framework to determine the idiomatic abstraction (base class, mixin, trait, protocol, decorator). 3+ similar patterns → extract to a shared abstraction.
- Right Responsibility: Logic in LOWEST layer (Entity > Domain Service > Application Service > Controller). Never business logic in controllers.
- SOLID: Single responsibility (one reason to change). Open-closed (extend, don't modify). Liskov (subtypes substitutable). Interface segregation (small interfaces). Dependency inversion (depend on abstractions).
- After extraction/move/rename: Grep ENTIRE scope for dangling references. Zero tolerance.
- YAGNI gate: NEVER recommend patterns unless 3+ occurrences exist. Don't extract for hypothetical future use.
Anti-patterns to flag: God Object, Copy-Paste inheritance, Circular Dependency, Leaky Abstraction.
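The DRY-via-OOP check above can be sketched as follows, using hypothetical `*Service` classes (all names illustrative, not from any codebase): once the same start-up lifecycle repeats across 3+ services, the shared part moves to a base and each subclass supplies only what differs.

```javascript
// Shared lifecycle extracted once; subclasses override only the hook.
class BaseService {
  constructor(name) {
    this.name = name;
    this.started = false;
  }
  start() {
    // Common guard + bookkeeping live in ONE place.
    if (this.started) throw new Error(`${this.name} already started`);
    this.started = true;
    this.onStart(); // subclass hook
  }
  onStart() {} // override point
}

class OrderService extends BaseService {
  constructor() { super("OrderService"); }
  onStart() { this.queue = []; } // only the order-specific part
}

class UserService extends BaseService {
  constructor() { super("UserService"); }
  onStart() { this.cache = new Map(); }
}
```

A bug in the double-start guard is now fixed in one place instead of N; that is the payoff the YAGNI gate weighs against premature extraction.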
Serial Attention for Design Quality: DO NOT scan all quality concerns simultaneously. Split attention misses violations that focused passes catch.
- Identify applicable dimensions: Based on the code's language, domain, and patterns, determine which quality dimensions apply: DRY, SOLID principles (SRP/OCP/LSP/ISP/DIP), OOP idioms, cohesion/coupling, GRASP, Law of Demeter, CQRS invariants, etc. Your list is NOT fixed; derive it from what the code actually does.
- One focused pass per dimension: Dedicate single-focus attention to EACH dimension in sequence. Do NOT mix concerns across passes.
- Threshold: 3+ similar patterns = MANDATORY extraction. Not an optional suggestion; flag as a mandatory structural fix requiring action.
- 2+ violations of the same kind = structural finding. Report as a "pattern problem" needing architectural resolution, not a list of individual instances.
Complexity Prevention (Ousterhout): MANDATORY. Measure code by cost of change: one business change should map to one code change. Flag ALL of the following in review:
- Change amplification โ small business change forces edits in >3 places โ structural flaw. Count edit sites for a plausible future change (add variant, add field, add authorization). >3 = reject.
- Cognitive load โ reader must hold too much context to safely modify. Flag deep inheritance, long parameter lists, boolean traps, implicit ordering dependencies.
- Cross-cutting duplication at entry points โ logging, error handling, validation, auth, transactions reimplemented per controller/handler/route. Lift to middleware / interceptor / filter / decorator / aspect.
- Leaked implementation technology: repos returning `IQueryable`/`QuerySet`/`Criteria`/raw cursors/ORM entities to callers. Return finished results + intent-revealing methods (`GetActiveVipUsers()`, not `Query()`).
- Type-switch scattering: `switch`/`if`-chains on an enum/discriminator in >1 place. New variant = new file, not N edits. One factory/registry switch at the boundary is OK; scattered switches = reject.
- Anemic models: domain objects with only getters/setters while logic floats in services. Move invariants/behavior onto the object (`order.Checkout()`, not `order.Status = ...`).
- Primitive obsession: raw `string`/`int`/`decimal` for account numbers, emails, money, percentages, and date ranges, with re-validation at every entry. Wrap in value objects / records / structs that validate once at construction.
- Inline cross-cutting concerns: authorization/tenant isolation/audit/sanitization hand-written at the top of every handler. Flag intent with declarative markers (`@RequirePermission("Order.Delete")`), enforce once centrally.
- Shallow modules: tiny class, big interface (many public methods, many flags, many ctor params) wrapping little logic. A module is deep when a small interface hides a lot of implementation. If interface ≈ implementation in cost to learn → inline it.
- Missing base class for repeated component/handler lifecycle: 3+ forms/CRUD handlers/list views reimplementing loading/dirty/submit/pagination → extract to a base class / hook / composable / mixin / trait.
- Premature vs delayed abstraction: rule of three. First occurrence: write it. Second: notice the duplication. Third: extract. Don't build generic frameworks before real variation; don't copy-paste for the 4th time.
- Embedded utility logic not extracted to helpers: inline paging loops (`while (hasMore) { skip += take; ... }`), ad-hoc datetime math, string parsing/formatting, collection partitioning, retry/backoff loops, URL/query-string building. If the algorithm is non-trivial AND stack-generic (not business-specific), extract it to `util`/`helper`/`extensions` and let consumers call one line. Inline duplicates = duplicated bug surface.
- Logic in the wrong (higher) layer → downshift to the callee: business/derivation logic written in the caller when the callee owns the data. Defaults: controller code that should be an app service; app service code that should be a domain service or entity; component code that should be a ViewModel/store/service. A caller reaching into the callee's data shape to compute something → move the computation behind an intent-revealing method on the callee. Lowest responsible layer wins (Entity > Domain Service > App Service > Controller · Model/VM > Store > Component). Higher-layer placement = duplicated logic as soon as a sibling caller needs the same thing.
- Owner owns the rule → extract on first write: if a caller inlines logic that derives, normalizes, validates, or computes from another type's data, MOVE it to the owning type. A single use is sufficient; the trigger is wrong responsibility, not duplication. Sibling callers always arrive; inline copies drift silently with no compile error and no name to grep. Common offenders: on the backend, inlined rules in application-layer handlers/commands/queries/services/controllers that belong on the domain entity / value object / domain service; on the frontend, inlined derivations/formatting/validation in components that belong on the model / store / view-model / API service. Fix: name the rule once as a method (static or instance) on the owning type; callers invoke it by name. A future variant gets a SECOND named method on the owner, never an inline near-duplicate. Right responsibility first; reuse is the consequence.
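The type-switch rule above can be sketched as a registry: one dispatch point at the boundary, and a new variant means one new registration rather than N edits. A minimal Python sketch; the payment-method names and fee values are invented for illustration, not taken from this project:

```python
# Registry replacing scattered type-switches: ONE dispatch point,
# new variant = one new registered function, never another if/elif chain.
from typing import Callable

_FEE_RULES: dict[str, Callable[[float], float]] = {}

def fee_rule(method: str):
    """Register a fee calculation for one payment method (illustrative names)."""
    def register(fn: Callable[[float], float]):
        _FEE_RULES[method] = fn
        return fn
    return register

@fee_rule("card")
def card_fee(amount: float) -> float:
    return amount * 0.029  # invented rate, for illustration only

@fee_rule("bank_transfer")
def bank_transfer_fee(amount: float) -> float:
    return 0.5  # invented flat fee

def calculate_fee(method: str, amount: float) -> float:
    # The only place that "switches" on the discriminator.
    try:
        return _FEE_RULES[method](amount)
    except KeyError:
        raise ValueError(f"unknown payment method: {method}")
```

Adding a "crypto" method touches one new registered function; no call site changes.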
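The inline-cross-cutting rule can be sketched the same way: declare intent with a marker, enforce it once centrally. A hedged Python sketch using a decorator; the permission string and the `current_user` dict shape are hypothetical, not this project's auth API:

```python
# Declarative permission marker enforced in ONE decorator instead of
# hand-written checks at the top of every handler.
import functools

class PermissionDenied(Exception):
    pass

def require_permission(permission: str):
    def decorate(handler):
        @functools.wraps(handler)
        def wrapper(current_user, *args, **kwargs):
            # Central enforcement point: the check lives here, nowhere else.
            if permission not in current_user.get("permissions", ()):
                raise PermissionDenied(permission)
            return handler(current_user, *args, **kwargs)
        wrapper.required_permission = permission  # discoverable by tooling
        return wrapper
    return decorate

@require_permission("Order.Delete")
def delete_order(current_user, order_id: int) -> str:
    # Handler body contains only business logic, no auth boilerplate.
    return f"order {order_id} deleted"
```

Adding authorization to a new endpoint = one declarative marker, matching the edit-site test below.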
Extraction target: where the named rule lives:
| Shape of the rule | Goes to |
|---|---|
| Pure function over an entity's own data | static method on the entity |
| Behavior that mutates / guards entity state | instance method on the entity |
| Always-true invariant on a primitive value | value object constructor |
| Needs DI (repo / settings / clock) | helper class registered in DI |
| Domain-agnostic algorithm reused across types | util / extension method |
| Pure shape / projection conversion | DTO mapping |

Pre-commit edit-site test (reject if the answer is "many"):

| Change scenario | Should touch |
|---|---|
| Add a new variant (customer type, payment method) | 1 new file |
| Change HTTP error response format | 1 middleware/filter |
| Add a timestamp field to every persisted entity | 1 base entity/interceptor |
| Add authorization to a new endpoint | 1 declarative marker |
| Swap database/ORM | data layer only |
| Change a business calculation rule | 1 method on the owning entity |
| Add a loading indicator pattern to forms | 1 base component/hook |
| Add a validation rule to a domain primitive | 1 value-object ctor |
| Change a paging/retry/datetime algorithm | 1 helper/util function |
| Change a derivation of entity data | 1 method on the entity |

Operating heuristics:
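The "Change a business calculation rule → 1 method on the owning entity" row can be illustrated with a deliberately tiny rich-entity sketch; `Order`'s fields and its checkout rule are invented for illustration:

```python
# Rich entity: the invariant and the derivation live on the object,
# so changing either touches exactly one method on the owner.
from dataclasses import dataclass, field

@dataclass
class Order:
    lines: list[float] = field(default_factory=list)
    status: str = "draft"

    @property
    def total(self) -> float:
        # Derivation owned by the entity: sibling callers invoke by name
        # instead of re-summing lines inline.
        return sum(self.lines)

    def checkout(self) -> None:
        # order.checkout(), not order.status = "submitted" scattered in services.
        if not self.lines:
            raise ValueError("cannot check out an empty order")
        if self.status != "draft":
            raise ValueError(f"cannot check out from status {self.status!r}")
        self.status = "submitted"
```

An anemic version would leave `status` writable from any service, with the guard clauses copy-pasted (and drifting) at every call site.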
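The "Add a validation rule to a domain primitive → 1 value-object ctor" row can be sketched as a value object that validates once at construction; the email rule here is deliberately simplified, not production-grade validation:

```python
# Value object: the invariant lives in ONE constructor, replacing raw
# strings revalidated at every entry point.
from dataclasses import dataclass

@dataclass(frozen=True)
class Email:
    value: str

    def __post_init__(self):
        # The one place the rule lives; adding a rule touches this ctor only.
        # Intentionally simplified check, for illustration.
        if "@" not in self.value or self.value.startswith("@"):
            raise ValueError(f"invalid email: {self.value!r}")

    @property
    def domain(self) -> str:
        return self.value.rsplit("@", 1)[1]
```

Any code holding an `Email` can trust it; the `str` it wraps never circulates unvalidated.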
- Write the call site first.
- Count edit sites for plausible future change.
- Prefer removing code over adding it.
- Surface assumptions at boundaries, hide details inside.
- Pre-reuse scan: before writing a non-trivial block, grep for similar algorithms (`while.*skip`, `DateTime.*Add`, `split`/`join` chains, paging loops, retry loops). An existing helper matches → call it. None exists but the pattern is stack-generic → extract it to a util before the second caller appears.
- Layer placement test: ask "if a sibling caller needed this tomorrow, would they re-derive it?" If yes, the logic is in the wrong layer. Move it down.
- Open case for future reuse: if the reviewer spots a block that is likely to appear in another feature (domain-agnostic algorithm, shared lifecycle, recurring derivation), do NOT rationalize with pure YAGNI. Either extract now (if cheap) or create a tracked TODO with the exact extraction target so the second caller does not duplicate silently. Silent duplication is the default failure mode.
- When in doubt, ask: "What would need to change if the requirement shifts?"
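The pre-reuse scan above targets patterns like paging loops; once extracted, the skip/take loop lives in one helper and consumers call one line. A minimal Python sketch; `fetch_page` stands in for any page-fetching callable in your stack:

```python
# One shared paging helper instead of inline `while (hasMore) { skip += take }`
# loops duplicated (and independently buggy) across callers.
from typing import Callable, Iterator, TypeVar

T = TypeVar("T")

def iter_all_pages(
    fetch_page: Callable[[int, int], list[T]], take: int = 100
) -> Iterator[T]:
    """Yield every item from a skip/take-paged source, one page at a time."""
    skip = 0
    while True:
        page = fetch_page(skip, take)
        yield from page
        if len(page) < take:  # short page = last page
            return
        skip += take
```

A caller is now one line, e.g. `users = list(iter_all_pages(repo_page_fetcher))`, and a paging bug gets fixed in exactly one place.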
The measure of good code is the cost of change. Not shortest. Not cleverest. Not most abstracted. Cheapest to safely modify after reading only a small local portion.
MUST ATTENTION apply critical thinking → every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present a guess as fact.
MUST ATTENTION apply complexity prevention → one business change = one code change. Flag change amplification (>3 edit sites for a future change), scattered type-switches, anemic models, primitive obsession, leaked technology through abstractions, shallow modules, un-extracted utility logic (paging/datetime/string/retry → helpers), and logic in the wrong higher layer (downshift to callee/entity/VM). Don't rationalize silent duplication with pure YAGNI.
MUST ATTENTION apply AI mistake prevention → holistic-first debugging, fix at the responsible layer, surface ambiguity before coding, re-read files after compaction.
IMPORTANT MUST ATTENTION follow declared step order for this skill; NEVER skip, reorder, or merge steps without explicit user approval
IMPORTANT MUST ATTENTION for every step/sub-skill call: set in_progress before execution, set completed after execution
IMPORTANT MUST ATTENTION every skipped step MUST include explicit reason; every completed step MUST include concise evidence
IMPORTANT MUST ATTENTION if Task tools unavailable, maintain an equivalent step-by-step plan tracker with synchronized statuses
`docs/project-reference/code-review-rules.md` FIRST

Anti-Rationalization:
| Evasion | Rebuttal |
|---|---|
| "Too simple for graph trace" | Wrong assumptions waste more time. Run trace anyway. |
| "Already searched" | Show file:line evidence. No proof = no search. |
| "Just a small simplification" | Small change at wrong layer cascades. Verify consumers first. |
| "Code is self-explanatory" | Future readers need evidence trail. Document non-obvious intent. |
| "Simplification is safe" | NEVER assume safe without grepping all usages first. |
| "Skip Round 2 even after fixing" | Every fix triggers a fresh sub-agent round. Clean Round 1 (zero issues) does end the review, but ANY fix invalidates the prior verdict. |
[TASK-PLANNING] Before acting, analyze task scope and systematically break into small todo tasks using task tracking.
Source: .claude/hooks/lib/prompt-injections.cjs + .claude/.ck.json
`$workflow-start <workflowId>` for standard workflows; sequence custom steps manually.

[CRITICAL] Hard-won project debugging/architecture rules. MUST ATTENTION apply BEFORE forming a hypothesis or writing code.
Goal: prevent recurrence of known failure patterns across debugging, architecture, naming, AI orchestration, and environment.
Top Rules (apply always):
- Parallel async + repo/UoW → `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask`. `ExecuteUowTask` creates a new UoW but reuses the outer DI scope (same DbContext): parallel iterations sharing a non-thread-safe DbContext silently corrupt data. `ExecuteInjectScopedAsync` creates a new UoW + a new DI scope (fresh repo per iteration).
- Event ownership → the owning service defines the bus message (`AccountUserEntityEventBusMessage` = Accounts owns). Core services (Accounts, Communication) are leaders. Feature services (Growth, Talents) sending to core MUST use `{CoreServiceName}...RequestBusMessage`; never define your own event for core to consume.
- Naming → policy names like `HrManagerOrHrOrPayroll` set members, not what the policy guards. Add a role → rename = broken abstraction. Rule: names express DOES/GUARDS, not CONTAINS. Test: does adding/removing a member force a rename? YES = content-driven = bad → rename to purpose (e.g., `HrOperationsAccessPolicy`). Nuance: "Or" is fine in behavioral idioms (`FirstOrDefault`, `SuccessOrThrow`); it expresses what HAPPENS, not membership.
- Environment → NEVER assume `python`/`python3` resolves; verify the alias first. Python may not be in the bash PATH under those names. Check: `where python` / `where py`. Prefer `py` (Windows Python Launcher) for one-liners, `node` if a JS alternative exists.

Test-specific lessons → `docs/project-reference/integration-test-reference.md`, Lessons Learned section. Production-code anti-patterns → `docs/project-reference/backend-patterns-reference.md`, Anti-Patterns section. Generic debugging/refactoring reminders → System Lessons in `.claude/hooks/lib/prompt-injections.cjs`.

Quick recap:
- `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask` (shared DbContext = silent data corruption)
- Feature-to-core messaging uses `{CoreServiceName}...RequestBusMessage`
- Never assume `python`/`python3` resolves → run `where python`/`where py` first; use the `py` launcher or `node`

Break work into small tasks (task tracking) before starting. Add a final task: "Analyze AI mistakes & lessons learned".
Extract lessons: ROOT CAUSE ONLY, not symptom fixes:
- Record lessons via `$learn`.
- Ask: "Would `$code-review`/`$code-simplifier`/`$security`/`$lint` catch this?" → Yes → improve the review skill instead of recording via `$learn`.
[TASK-PLANNING] [MANDATORY] BEFORE executing any workflow or skill step, create/update task tracking for all planned steps, then keep it synchronized as each step starts/completes.