# story-review
// [Code Quality] Use when you need to review user stories for completeness, coverage, dependencies, and quality before implementation.
| name | story-review |
| --- | --- |
| description | [Code Quality] Use when you need to review user stories for completeness, coverage, dependencies, and quality before implementation. |
Codex compatibility note:
- Invoke repository skills with `$skill-name` in Codex; this mirrored copy rewrites legacy Claude `/skill-name` references.
- Prefer the `plan-hard` skill for planning guidance in this Codex mirror.
- Task tracker mandate: BEFORE executing any workflow or skill step, create/update task tracking for all steps and keep it synchronized as progress changes.
- User-question prompts mean to ask the user directly in Codex.
- Ignore Claude-specific mode-switch instructions when they appear.
- Strict execution contract: when a user explicitly invokes a skill, execute that skill protocol as written.
- Subagent authorization: when a skill is user-invoked or AI-detected and its protocol requires subagents, that skill activation authorizes use of the required `spawn_agent` subagent(s) for that task.
- Do not skip, reorder, or merge protocol steps unless the user explicitly approves the deviation first.
- For workflow skills, execute each listed child-skill step explicitly and report step-by-step evidence.
- If a required step/tool cannot run in this environment, stop and ask the user before adapting.
Codex does not receive Claude hook-based doc injection. When coding, planning, debugging, testing, or reviewing, open project docs explicitly using this routing.
Always read:
- `docs/project-config.json` (project-specific paths, commands, modules, and workflow/test settings)
- `docs/project-reference/docs-index-reference.md` (routes to the full `docs/project-reference/*` catalog)
- `docs/project-reference/lessons.md` (always-on guardrails and anti-patterns)

Situation-based docs:
- Backend: `backend-patterns-reference.md`, `domain-entities-reference.md`, `project-structure-reference.md`
- Frontend: `frontend-patterns-reference.md`, `scss-styling-guide.md`, `design-system/README.md`
- Features: `feature-docs-reference.md`
- Integration tests: `integration-test-reference.md`
- E2E tests: `e2e-test-reference.md`
- Review: `code-review-rules.md` plus the domain docs above based on changed files

Do not read all docs blindly. Start from `docs-index-reference.md`, then open only the files relevant to the task.
[BLOCKING] Execute skill steps in declared order. NEVER skip, reorder, or merge steps without explicit user approval.
[BLOCKING] Before each step or sub-skill call, update task tracking: set `in_progress` when the step starts, set `completed` when the step ends.
[BLOCKING] Every completed/skipped step MUST include brief evidence or an explicit skip reason.
[BLOCKING] If Task tools are unavailable, create and maintain an equivalent step-by-step plan tracker with the same status transitions.
Goal: Auto-review user stories for completeness, acceptance criteria coverage, dependency ordering, and quality before implementation proceeds.
Key distinction: AI self-review (automatic), NOT user interview.
Be skeptical. Apply critical thinking and sequential thinking. Every claim needs traced proof and a confidence percentage (confidence should exceed 80% before acting).
When this task involves frontend or UI changes, read:
- `docs/project-reference/frontend-patterns-reference.md` (read directly when relevant; do not rely on hook-injected conversation text)
- `docs/project-reference/scss-styling-guide.md`
- `docs/project-reference/design-system/README.md`

Default stance: SKEPTIC challenging story quality, not confirming it.
Self-review trap: You wrote these stories. You will find them coherent because you made them coherent. This section forces deliberate challenge before rubber-stamping.
1. Strawman AC Check: for each acceptance criterion, ask "Is this AC so obvious it was only included to pad coverage?" (e.g., "User can see the page" is trivially true and tests nothing meaningful). Flag trivial ACs that would pass even if the feature were completely broken.
2. Vertical Slice Challenge: for each story, ask "Can a stakeholder demo THIS STORY ALONE to a real user and get useful feedback?" If the story only delivers a backend endpoint, a DB migration, or a UI component in isolation, it is a horizontal layer, not a vertical slice. Flag it.
3. Dependency Challenge: if story B is blocked by story A, ask "What happens to the sprint if story A is descoped or delayed?" A story set with rigid sequential dependencies is fragile. Are dependencies truly required, or can stories be resequenced?
4. INVEST Violation Hunt: deliberately look for the WEAKEST INVEST criterion for each story. Ask "Which of I/N/V/E/S/T does this story fail most obviously?" If a story is not Estimable, why not? If not Independent, can it be split?
5. Pre-Mortem: assume all stories in this set are implemented exactly as written. The feature ships and fails. Write the most plausible failure scenario. Which story was the gap?
6. Contrarian Pass: before writing any verdict, generate at least 2 sentences arguing the OPPOSITE conclusion. Then decide which argument is stronger.
If any box is unchecked, the adversarial review is incomplete. Go back.
Source: `team-artifacts/stories/` or plan context

### Required Checks

| # | Check | Presence | Quality Depth |
|---|---|---|---|
| 1 | AC coverage: every acceptance criterion from the PBI has at least one corresponding story | Does every PBI AC have a story? | Does every PBI AC have a story? Or are some ACs split across multiple stories in ways that create coverage gaps? |
| 2 | GIVEN/WHEN/THEN: each story has a minimum of 3 BDD scenarios (happy, edge, error) | Are all 3 BDD parts present per scenario? | Are all 3 BDD parts present per scenario? Are scenarios testing REAL user behavior or just "the system does X"? |
| 3 | INVEST criteria: stories are Independent, Negotiable, Valuable, Estimable, Small, Testable | Are all 6 INVEST criteria named or implied? | Are stories genuinely Independent (no hidden chains), Valuable (real user impact), Testable (automatable)? |
| 4 | Story points: all stories have SP <=8 (>8 must be split) | Are SP assigned and all <=8? | Do SP reflect actual complexity? Is <=8 justified, or is the story undersized to pass the gate? |
| 5 | Dependency table: the story set includes a dependency ordering table (must-after, can-parallel, independent) | Does a dependency ordering table exist? | Does the ordering reflect ACTUAL dependencies, not just arbitrary sequencing? |
| 6 | No overlapping scope: stories don't duplicate functionality | Do any 2 stories reference the same AC? | Do any 2 stories claim the same AC? Would implementing both create duplication? |
| 7 | Vertical slices: each story delivers end-to-end value (not horizontal layers) | Does each story touch more than one layer (UI + API or API + DB)? | Can a stakeholder demo EACH story to a real user independently? Or do some deliver only infrastructure? |
| 8 | Authorization scenarios: every story includes at least 1 authorization scenario (unauthorized access → rejection) per the PBI roles table | Is an authorization scenario present per story? | Is the unauthorized-access scenario testing a realistic attack vector, not just "wrong role → 403"? |
| 9 | UI Wireframe section: if the story involves UI, it has a ## UI Wireframe section per the UI wireframe protocol (wireframe + component tree + interaction flow + states + responsive). If backend-only: explicit "N/A" | Does the section exist (or an explicit N/A for backend-only)? | If UI: does the wireframe show interaction flow + states + responsive breakpoints? If backend-only: is "N/A" explicit? |
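The Presence column above is mechanical enough to script. Below is a minimal sketch of such a pass; the `Story` shape and its field names are illustrative assumptions, not a real artifact schema:

```typescript
// Hypothetical parsed-story shape -- field names are assumptions for illustration.
interface Scenario { given: string; when: string; then: string; kind: "happy" | "edge" | "error" }
interface Story {
  id: string;
  coveredAcs: string[];        // PBI acceptance-criterion IDs this story claims
  scenarios: Scenario[];
  storyPoints: number;
  hasAuthScenario: boolean;
}

// Presence-level findings for checks 1, 2, 4, 6, and 8 of the table above.
function requiredCheckFindings(pbiAcs: string[], stories: Story[]): string[] {
  const findings: string[] = [];
  const covered = new Set(stories.flatMap(s => s.coveredAcs));
  for (const ac of pbiAcs)
    if (!covered.has(ac)) findings.push(`AC ${ac} has no covering story`);        // check 1
  const claimedBy = new Map<string, string>();
  for (const s of stories) {
    const kinds = new Set(s.scenarios.map(sc => sc.kind));
    if (!(kinds.has("happy") && kinds.has("edge") && kinds.has("error")))
      findings.push(`${s.id}: needs happy + edge + error BDD scenarios`);         // check 2
    if (s.storyPoints > 8)
      findings.push(`${s.id}: SP ${s.storyPoints} > 8, must be split`);           // check 4
    if (!s.hasAuthScenario)
      findings.push(`${s.id}: missing authorization scenario`);                   // check 8
    for (const ac of s.coveredAcs) {
      const other = claimedBy.get(ac);
      if (other && other !== s.id)
        findings.push(`AC ${ac} claimed by both ${other} and ${s.id}`);           // check 6
      claimedBy.set(ac, s.id);
    }
  }
  return findings;
}
```

The Quality Depth column still requires judgment; the sketch only automates presence.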
### Recommended Checks

| # | Check | Presence | Quality Depth |
|---|---|---|---|
| 1 | Edge cases: boundary values, empty states, max limits addressed | Are edge case scenarios listed? | Are boundary values story-specific (not generic "empty state")? Do they include concurrency or partial-data scenarios? |
| 2 | Error scenarios: failure paths explicitly covered in stories | Are error path scenarios present? | Do error stories specify the exact error message/code returned, or just "shows error"? |
| 3 | API contract: if API changes are needed, the story specifies the contract | Is a request/response contract defined? | Does the contract specify the request/response schema fully? Are breaking vs non-breaking changes distinguished? |
| 4 | UI/UX visualization: frontend stories have a component decomposition tree with EXISTING/NEW classification, design token mapping, and responsive breakpoint behavior per the UI wireframe protocol | Is a component decomposition tree present? | Are components classified EXISTING vs NEW? Are design token names (not just colors) specified? |
| 5 | Seed data stories: if the PBI has seed data requirements, a Sprint 0 seed data story exists | Does a seed data story exist (or N/A if not required)? | If present, does the seed data story specify the exact data shape needed? |
| 6 | Data migration stories: if the PBI has schema changes, a data migration story exists | Does a data migration story exist (or N/A if no schema changes)? | If present, does it specify rollback behavior? |
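Dependency-table sanity (check #5 above and the Dependency Issues section of the report below) can also be verified mechanically. A sketch using Kahn's algorithm; the story-to-prerequisites input shape is an assumption:

```typescript
// mustAfter maps each story to the stories it must come after (its prerequisites).
// Returns null when the ordering is acyclic, otherwise the stories stuck in a cycle.
function dependencyCycle(mustAfter: Record<string, string[]>): string[] | null {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();
  for (const [story, prereqs] of Object.entries(mustAfter)) {
    indegree.set(story, (indegree.get(story) ?? 0) + prereqs.length);
    for (const p of prereqs) {
      if (!indegree.has(p)) indegree.set(p, 0);
      dependents.set(p, [...(dependents.get(p) ?? []), story]);
    }
  }
  const queue = [...indegree].filter(([, d]) => d === 0).map(([s]) => s);
  let processed = 0;
  while (queue.length > 0) {
    const s = queue.shift()!;
    processed++;
    for (const dep of dependents.get(s) ?? []) {
      const d = indegree.get(dep)! - 1;
      indegree.set(dep, d);
      if (d === 0) queue.push(dep);
    }
  }
  if (processed === indegree.size) return null;                  // valid ordering
  return [...indegree].filter(([, d]) => d > 0).map(([s]) => s); // cycle members
}

// dependencyCycle({ S2: ["S1"], S3: ["S1"] }) -> null (S1 first, S2/S3 can run in parallel)
// dependencyCycle({ S1: ["S2"], S2: ["S1"] }) -> ["S1", "S2"] (circular dependency)
```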
## Story Review Result
**Status:** PASS | WARN | FAIL
**Stories reviewed:** {count}
**Source PBI:** {pbi-path}
### AC Coverage Matrix
| Acceptance Criterion | Covered By Story | Status |
| -------------------- | ---------------- | ------ |
### Required ({X}/{Y})
- ✅/❌ Check description
### Recommended ({X}/{Y})
- ✅/⚠️ Check description
### Missing Stories
- {Any PBI AC not covered}
### Dependency Issues
- {Circular deps, missing ordering}
### Verdict
{PROCEED | REVISE_FIRST}
Protocol: `SYNC:double-round-trip-review` + `SYNC:fresh-context-review` + `SYNC:review-protocol-injection` (all inlined above in this file).
After completing the Round 1 checklist evaluation, spawn a fresh general-purpose sub-agent for Round 2 using the canonical Agent template from `SYNC:review-protocol-injection` above. Story artifact reviews are NOT code reviews, so use `agent_type: "general-purpose"`. When constructing the Agent call prompt:
- Use the `SYNC:review-protocol-injection` template verbatim
- Set `agent_type: "general-purpose"`
- Include `SYNC:evidence-based-reasoning`, `SYNC:rationalization-prevention`, `SYNC:understand-code-first` (omit code-specific protocols like `SYNC:bug-detection`, `SYNC:design-patterns-quality`, and `SYNC:fix-layer-accountability`, which do not apply to story artifacts)
- Task: "Review the user story artifacts for completeness and quality. Focus on: implicit assumptions not validated, missing acceptance criteria coverage, edge cases not addressed in BDD scenarios, cross-references not verified, vague language, authorization gaps, INVEST violations."
- Output: `plans/reports/story-review-round{N}-{date}.md`

After the sub-agent returns:
- Include `## Round {N} Findings (Fresh Sub-Agent)` in the main report → DO NOT filter or override

MANDATORY IMPORTANT MUST ATTENTION, NO EXCEPTIONS: after completing this skill, you MUST use a direct user question to present these options. Do NOT skip because the task seems "simple" or "obvious"; the user decides:
[IMPORTANT] Use task tracking to break ALL work into small tasks BEFORE starting, including tasks for each file read. This prevents context loss from long files. For simple tasks, the AI MUST ask the user whether to skip.
Evidence Gate: MANDATORY IMPORTANT MUST ATTENTION: every claim, finding, and recommendation requires `file:line` proof or traced evidence with a confidence percentage (>80% to act, <80% must verify first).
OOP & DRY Enforcement: MANDATORY IMPORTANT MUST ATTENTION: flag duplicated patterns that should be extracted to a base class, generic, or helper. Classes in the same group or with the same suffix (e.g., *Entity, *Dto, *Service) MUST inherit a common base (even if empty now; this enables future shared logic and child overrides). Verify the project has code linting/analyzers configured for the stack.
External Memory: for complex or lengthy work (research, analysis, scan, review), write intermediate findings and final results to a report file in `plans/reports/`; this prevents context loss and serves as the deliverable.
AI Mistake Prevention: failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace the full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips, not just the happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether the bug is in the caller (wrong data) or the callee (wrong handling). Fix at the responsible layer; never patch the symptom site.
- Assume existing values are intentional; ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging: resist the nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes: apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding; don't pick silently. If a request has multiple interpretations, present each with an effort estimate and ask. Never assume the all-records, file-based, or more complex path.
Estimation Framework: bottom-up first; SP is DERIVED; output a min-max range when likely ≥3d. Stack-agnostic. Baseline: 3-5yr dev, 6 productive hrs/day. AI estimate assumes Claude Code + project context.
Method:
- Blast Radius pass (below) → drives code AND test cost
- Decompose phases → hours/phase → `bottom_up_hours = Σ phase_hours`
- `likely_days = ceil(bottom_up_hours / 6) × productivity_factor`
- Sum Risk Margin (base + add-ons) → `max_days = likely_days × (1 + margin)`, `min_days = likely_days × 0.9`
- Output as a range when `likely_days ≥ 3`; a single point is allowed `< 3` (still record the margin)
- `man_days_ai` = same range × AI speedup
- `story_points` DERIVED from `likely_days` via SP↔Days → NEVER the driver. Disagreement >50% → trust bottom-up

Productivity factor: 0.8 strong scaffolding+codegen+AI hooks · 1.0 mature default · 1.2 weak patterns · 1.5 greenfield
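As a worked instance of the method above (a minimal sketch; the helper name and signature are illustrative, the formulas come straight from this section):

```typescript
// bottom_up_hours = sum of phase hours; likely_days = ceil(hours / 6) * productivity_factor;
// max_days = likely * (1 + margin); min_days = likely * 0.9; a range is mandatory when likely >= 3d.
function estimate(phaseHours: number[], productivityFactor: number, marginPct: number) {
  const bottomUpHours = phaseHours.reduce((a, b) => a + b, 0);
  const likelyDays = Math.ceil(bottomUpHours / 6) * productivityFactor; // 6 productive hrs/day
  const minDays = likelyDays * 0.9;
  const maxDays = likelyDays * (1 + marginPct / 100);
  return likelyDays >= 3
    ? { likelyDays, range: `${minDays.toFixed(1)}-${maxDays.toFixed(1)}d` } // range required
    : { likelyDays, range: `${likelyDays.toFixed(1)}d` };                   // single point OK
}

// Example: phases of 4 + 6 + 8 hours, mature codebase (1.0), real-feature margin (+35%)
// -> bottom_up 18h -> likely ceil(18/6) * 1.0 = 3d -> range "2.7-4.1d"
estimate([4, 6, 8], 1.0, 35);
```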
Cost Driver Heuristic (apply BEFORE work-type row):
- UI dominates in CRUD/business apps → 1.5-3x backend (states, validation, responsive, a11y, polish)
- Backend dominates ONLY: multi-aggregate invariants, cross-service contracts, schema migrations, heavy query/perf, new event flows
Reuse-vs-Create axis (PRIMARY lever, per layer):
| UI tier | Cost |
|---|---|
| Reuse component on existing screen | 0.1-0.3d |
| Add control/column to existing screen | 0.3-0.8d |
| Compose components into NEW screen | 1-2d |
| NEW screen, custom layout/states/validation | 2-4d |
| NEW shared/common component (themed, tested) | 3-6d+ |
| Backend tier | Cost |
|---|---|
| Reuse query/handler from new place | 0.1-0.3d |
| Small update to existing handler/entity | 0.3-0.8d |
| NEW query on existing repo/model | 0.5-1d |
| NEW command/handler on existing aggregate (additive) | 1-2d |
| NEW aggregate/entity (repo, validation, events) | 2-4d |
| NEW cross-service contract OR schema migration | 2-4d each |
| Multi-aggregate invariant / heavy domain rule | 3-5d |

Rule: Sum tiers across UI + backend + tests, apply the productivity factor. Reuse short-circuits tiers; call it out.
Test-Scope drivers (compute test_count EXPLICITLY; the "+tests" hand-wave is the #1 failure):

| Driver | Count |
|---|---|
| Happy-path journeys | 1 per story / AC main flow |
| State-machine transitions | reachable transitions × allowed actors |
| Multi-entity state combos | state(A) × state(B), REACHABLE only, not Cartesian |
| Authorization matrix | (owner, non-owner, elevated, unauth) × each mutation |
| Validation rules | 1 per required field / boundary / format / cross-field |
| UI states (per new screen/dialog) | happy, loading, empty, error, partial; present only |
| Negative paths / invariants | 1 per violatable business rule |
| Test tier (Trad, incl. setup+assert+flake) | Cost |
|---|---|
| 1-5 cases, fixtures reused | 0.3-0.5d |
| 6-12 cases, 1 new fixture | 0.5-1d |
| 13-25 cases, multi-entity setup | 1-2d |
| 26-50 cases OR new state-machine coverage | 2-3d |
| >50 cases OR full E2E journey | 3-5d |

Test multipliers: new fixture/seed harness +0.5d · cross-service/bus assertion +0.3d each · UI E2E ×1.5 · each new role +1-2 cases
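To make "compute test_count EXPLICITLY" concrete, here is a sketch that sums the driver table above; the field names are assumptions for illustration:

```typescript
interface TestDrivers {
  happyPathJourneys: number;    // 1 per story / AC main flow
  reachableTransitions: number; // state-machine transitions
  allowedActors: number;
  reachableStateCombos: number; // state(A) x state(B), REACHABLE only, not Cartesian
  mutations: number;            // each gets the 4-role authorization matrix
  validationRules: number;      // 1 per required field / boundary / format / cross-field
  uiStatesPresent: number;      // happy, loading, empty, error, partial -- present only
  violatableRules: number;      // negative paths / invariants
}

function testCount(d: TestDrivers): number {
  return (
    d.happyPathJourneys +
    d.reachableTransitions * d.allowedActors +
    d.reachableStateCombos +
    d.mutations * 4 +           // owner, non-owner, elevated, unauth
    d.validationRules +
    d.uiStatesPresent +
    d.violatableRules
  );
}
```

Map the resulting count onto the test-tier table above to pick the test cost row, then apply the multipliers.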
Blast Radius (mandatory pre-pass → affects code AND test):
- Files/components directly modified → count
- Of those, "complex" (>500 LOC, multi-handler, central, frequently-modified) → count
- Downstream consumers (callers, event subscribers, cross-service) → list
- Shared/common code touched (multi-app blast) → yes/no
- Regression scope → areas needing re-test

Rule: Complex touch → add `risk_factors`. Each downstream consumer → +1-3 regression cases. Blast >5 areas OR >2 complex → re-evaluate SPLIT before estimating.

Risk Margin (drives max bound):

| likely_days | Base margin |
|---|---|
| <1d trivial | +10% |
| 1-2d small additive | +20% |
| 3-4d real feature | +35% |
| 5-7d large | +50% |
| 8-10d very large | +75% |
| >10d | +100% AND flag SHOULD SPLIT |

Risk-factor add-ons (additive → enumerate in `risk_factors`):

| Factor | +margin |
|---|---|
| touches-complex-existing-feature (>500 LOC, multi-handler, central) | +20% |
| cross-service-contract change | +25% |
| schema-migration-on-populated-data | +25% |
| new-tech-or-unfamiliar-pattern | +30% |
| regression-fan-out (≥3 downstream areas re-test) | +20% |
| performance-or-latency-critical | +20% |
| concurrency-race-event-ordering | +25% |
| shared-common-code (multi-consumer/multi-app) | +25% |
| unclear-requirements-or-design | +30% |

Collapse rule: total margin >100% → STOP, split (padding past 2x is dishonesty). Margin <15% on `likely_days ≥5` → under-estimated, widen.

Work-Type Caps (hard ceilings on `likely_days`):

| Work type | Max SP | Max likely |
|---|---|---|
| Single field / config flag / style fix | 1 | 0.5d |
| Add property to existing model + bind to existing UI | 2 | 1d |
| Additive endpoint + minor UI control (button/menu/column), reuses fixtures | 3 | 2-3d |
| Additive endpoint + NEW UI surface OR additive multi-layer + new domain rule + 2+ test files | 5 | 3-5d |
| NEW model/aggregate OR migration OR cross-module contract OR heavy test (>1.5d) OR NEW UI + non-trivial backend | 8 | 5-7d |
| NEW UI surface + (NEW aggregate OR migration OR cross-service contract) | 13 | SHOULD split |
| Cross-service contract + migration combined | 13 | SHOULD split |
| Beyond | 21 | MUST split |

SP↔Days (validation only): 1=0.5d/0.25d · 2=1d/0.35d · 3=2d/0.65d · 5=4d/1.0d · 8=6d/1.5d · 13=10d/2.0d (Trad/AI likely). AI speedup: SP 1 → 2x · 2-3 → 3x · 5-8 → 4x · 13+ → 5x.
AI cost = `(code_gen × 1.3) + (test_gen × 1.3)` (30% review overhead).

MANDATORY frontmatter:

```yaml
story_points: <n>
complexity: low | medium | high | critical
man_days_traditional: '<min>-<max>d'  # range when likely >=3d; '<N>d' when <3d
man_days_ai: '<min>-<max>d'
risk_margin_pct: <n>                  # base + add-ons
risk_factors: [touches-complex-existing-feature, regression-fan-out]  # closed list from add-ons; [] if none
blast_radius:
  touched_areas: <n>
  complex_touched: <n>
  downstream_consumers: [list or count]
  shared_common_code: yes | no
estimate_scope_included: [code, integration-tests, frontend, i18n, docs]
estimate_scope_excluded: [unit-tests, e2e, perf, deployment, code-review-rounds]
estimate_reasoning: |
  5-7 lines covering: (a) UI tier row applied (b) Backend tier row applied
  (c) Test scope: case breakdown by driver, file count, fixtures, tier row
  (d) Cost driver: dominant tier + why (e) Blast radius: touched, complex, regression scope
  (f) Risk factors: list driving margin; why not larger/smaller
```

Example: "UI: compose Form/Table/Dialog → NEW screen (~1.5d). Backend: NEW command on existing aggregate, reuses validation+repo (~1d). Tests: 4 transitions × 2 actors + 3 validation + 2 UI states = 13 cases, 1 new fixture → tier 13-25 ~1.5d. Driver: UI composition + new states. Blast: 4 areas, 1 complex. Risk: base 35% + touches-complex +20% = 55% → max 3.9d → range 2.5-4d."

Sanity self-check:
- `likely_days ≥3d` and single-point? → reject, must be a range
- Margin <15% on `likely_days ≥5d`? → under-estimated, widen
- Margin >100%? → STOP, split instead of buffering
- Complex existing feature touched, no regression budget in (c)? → reject
- Blast >5 areas OR >2 complex, no split discussion? → reject
- Purely additive on existing model AND existing UI? → cap SP 3 unless tests >1.5d
- NEW UI surface (page/complex form/dashboard)? → SP 5+ even if the backend is one endpoint
- Backend cross-service / migration / multi-aggregate? → SP 8+ regardless of UI
- `bottom_up_hours / 6` vs SP-Days disagreement >50%? → trust bottom-up, downgrade SP
- Without tests, SP drops ≥1 bucket? → tests dominate; state this explicitly
- Reasoning called out UI vs backend vs blast vs risk factors? → if missing, add
UI System Context: for ANY task touching `.ts`, `.html`, `.scss`, or `.css` files, MUST READ before implementing:
- `docs/project-reference/frontend-patterns-reference.md` → component base classes, stores, forms
- `docs/project-reference/scss-styling-guide.md` → BEM methodology, SCSS variables, mixins, responsive
- `docs/project-reference/design-system/README.md` → design tokens, component inventory, icons

Reference `docs/project-config.json` for project-specific paths.
Nested Task Expansion Contract: for workflow-step invocation, the `[Workflow] ...` row is only a parent container; the child skill still creates visible phase tasks.
- Call the current task list first. If a matching active parent workflow row exists, set `nested=true` and record `parentTaskId`; otherwise run standalone.
- Create one task per declared phase before phase work. When nested, prefix subjects `[N.M] $skill-name → phase`.
- When nested, link the parent with `TaskUpdate(parentTaskId, addBlockedBy: [childIds])`.
- Orchestrators must pre-expand a child skill's phase list and link the workflow row before invoking that child skill or sub-agent.
- Mark exactly one child `in_progress` before work and `completed` immediately after evidence is written.
- Complete the parent only after all child tasks are completed or explicitly cancelled with a reason.

Blocked until: the current task list is checked, child phases created, parent linked when nested, first child marked `in_progress`.
Project Reference Docs Gate: run after task-tracking bootstrap and before target/source file reads, grep, edits, or analysis. Project docs override generic framework assumptions.
- Identify scope: file types, domain area, and operation.
- Required docs by trigger: always `docs/project-reference/lessons.md`; doc lookup → `docs-index-reference.md`; review → `code-review-rules.md`; backend/CQRS/API → `backend-patterns-reference.md`; domain/entity → `domain-entities-reference.md`; frontend/UI → `frontend-patterns-reference.md`; styles/design → `scss-styling-guide.md` + `design-system/README.md`; integration tests → `integration-test-reference.md`; E2E → `e2e-test-reference.md`; feature docs/specs → `feature-docs-reference.md`; architecture/new area → `project-structure-reference.md`.
- Read every required doc that exists; skip absent docs as not applicable. Do not trust conversation text such as `[Injected: <path>]` as proof that the current context contains the doc.
- Before target work, state: `Reference docs read: ... | Missing/not applicable: ...`.

Blocked until: scope evaluated, required docs checked/read, `lessons.md` confirmed, citation emitted.
Task Tracking & External Report Persistence: bootstrap this before execution; then run project-reference doc prefetch before target/source work.
- Create a small task breakdown before target file reads, grep, edits, or analysis. On context loss, inspect the current task list first.
- Mark one task `in_progress` before work and `completed` immediately after evidence; never batch transitions.
- For plan/review work, create `plans/reports/{skill}-{YYMMDD}-{HHmm}-{slug}.md` before the first finding.
- Append findings after each file/section/decision and synthesize from the report file at the end.
- Final output cites `Full report: plans/reports/{filename}`.

Blocked until: task breakdown exists, report path declared for plan/review work, first finding persisted before the next finding.
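The report-path convention, as a tiny helper (a sketch; the function name is illustrative):

```typescript
// Builds plans/reports/{skill}-{YYMMDD}-{HHmm}-{slug}.md
function reportPath(skill: string, slug: string, now: Date = new Date()): string {
  const p = (n: number) => String(n).padStart(2, "0");
  const ymd = `${String(now.getFullYear()).slice(2)}${p(now.getMonth() + 1)}${p(now.getDate())}`;
  const hm = `${p(now.getHours())}${p(now.getMinutes())}`;
  return `plans/reports/${skill}-${ymd}-${hm}-${slug}.md`;
}

// e.g. reportPath("story-review", "sprint-12") -> "plans/reports/story-review-<YYMMDD>-<HHmm>-sprint-12.md"
```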
Critical Thinking Mindset: apply critical thinking and sequential thinking. Every claim needs traced proof; confidence must exceed 80% to act. Anti-hallucination: never present a guess as fact. Cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence; certainty without evidence is the root of all hallucination.
Sequential Thinking Protocol: structured multi-step reasoning for complex/ambiguous work. Use when planning, reviewing, debugging, or refining ideas where one-shot reasoning is unsafe.
Trigger when: complex problem decomposition · adaptive plans needing revision · analysis with course correction · unclear/emerging scope · multi-step solutions · hypothesis-driven debugging · cross-cutting trade-off evaluation.
Format (explicit mode → visible thought trail):
- `Thought N/M: [aspect]` → one aspect per thought; state assumptions/uncertainty
- `Thought N/M [REVISION of Thought K]: ...` → when prior reasoning is invalidated; state Original / Why revised / Impact
- `Thought N/M [BRANCH A from Thought K]: ...` → explore an alternative; converge with decision rationale
- `Thought N/M [HYPOTHESIS]: ...` then `[VERIFICATION]: ...` → test before acting
- `Thought N/N [FINAL]` → only when verified, all critical aspects addressed, confidence >80%

Mandatory closers: Confidence % stated · Assumptions listed · Open questions surfaced · Next action concrete.
Stop conditions: confidence <80% on any critical decision → escalate by asking the user directly · ≥3 revisions on the same thought → re-frame the problem · branch count >3 → split into a sub-task.
Implicit mode: apply the methodology internally without visible markers when adding markers would clutter the response (routine work where reasoning aids accuracy).
Deep-dive: see the `$sequential-thinking` skill (`.claude/skills/sequential-thinking/SKILL.md`) for worked examples (api-design, debug, architecture), advanced techniques (spiral refinement, hypothesis testing, convergence), and meta-strategies (uncertainty handling, revision cascades).
Fix-Triggered Re-Review Loop: re-review is triggered by a FIX CYCLE, not by a round number. Review loop: `review → if issues → fix → re-review` until a round finds no issues. A clean review ENDS the loop; no further rounds required.

Round 1: Main-session review. Read target files, build understanding, note issues. Output findings + verdict (PASS / FAIL).
Decision after Round 1:
- No issues found (PASS, zero findings) → the review ENDS. Do NOT spawn a fresh sub-agent for confirmation.
- Issues found (FAIL, or any non-zero findings) → fix the issues, then spawn a fresh sub-agent for Round 2 re-review.
Fresh sub-agent re-review (after every fix cycle): spawn a NEW `spawn_agent` tool call; never reuse a prior agent. The sub-agent re-reads ALL files from scratch with ZERO memory of prior rounds. See `SYNC:fresh-context-review` for the spawn mechanism and `SYNC:review-protocol-injection` for the canonical Agent prompt template. Each fresh round must catch:
- Cross-cutting concerns missed in the prior round
- Interaction bugs between changed files
- Convention drift (new code vs existing patterns)
- Missing pieces that should exist but don't
- Subtle edge cases the prior round rationalized away
- Regressions introduced by the fixes themselves
Loop termination: after each fresh round, repeat the same decision: clean → END; issues → fix → next fresh round. Continue until a round finds zero issues, or 3 fresh-subagent rounds max, then escalate to the user via a direct user question.
Rules:
- A clean Round 1 ENDS the review; no mandatory Round 2
- NEVER skip the fresh sub-agent re-review after a fix cycle (every fix invalidates the prior verdict)
- NEVER reuse a sub-agent across rounds; every iteration spawns a NEW Agent call
- Main agent READS sub-agent reports but MUST NOT filter, reinterpret, or override findings
- Max 3 fresh-subagent rounds per review; if still FAIL, escalate via a direct user question (do NOT silently loop)
- Track round count in conversation context (session-scoped)
- Final verdict must incorporate ALL rounds executed
Report must include `## Round N Findings (Fresh Sub-Agent)` for every round N ≥ 2 that was executed.
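The loop's control flow, condensed into a sketch (the two function parameters stand in for the main-session review and the fix work; they are not real tool APIs):

```typescript
async function reviewLoop(
  reviewRound: (round: number) => Promise<{ issues: number }>, // Round 1 = main session; 2+ = fresh sub-agent
  fixIssues: () => Promise<void>,
): Promise<"CLEAN" | "ESCALATE_TO_USER"> {
  let result = await reviewRound(1);
  if (result.issues === 0) return "CLEAN";   // clean Round 1 ENDS the loop

  for (let round = 2; round <= 4; round++) { // at most 3 fresh-subagent rounds
    await fixIssues();                       // every fix invalidates the prior verdict
    result = await reviewRound(round);       // NEW fresh sub-agent each time
    if (result.issues === 0) return "CLEAN";
  }
  return "ESCALATE_TO_USER";                 // still failing -> ask the user directly
}
```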
Fresh Sub-Agent Review: eliminate orchestrator confirmation bias via isolated sub-agents.

Why: the main agent knows what it (or `$cook`) just fixed and rationalizes findings accordingly. A fresh sub-agent has ZERO memory, re-reads from scratch, and catches what the main agent dismissed. Sub-agent bias is mitigated by (1) fresh context, (2) verbatim protocol injection, (3) the main agent not filtering the report.

When: ONLY after a fix cycle. A review round that finds zero issues ENDS the loop; do NOT spawn a confirmation sub-agent. A review round that finds issues triggers: fix → fresh sub-agent re-review.
How:
- Spawn a NEW `spawn_agent` tool call; use the `code-reviewer` subagent_type for code reviews, `general-purpose` for plan/doc/artifact reviews
- Inject ALL required review protocols VERBATIM into the prompt; see `SYNC:review-protocol-injection` for the full list and template. Never reference protocols by file path; AI compliance drops behind file-read indirection (see `SYNC:shared-protocol-duplication-policy`)
- The sub-agent re-reads ALL target files from scratch via its own tool calls; never pass file contents inline in the prompt
- The sub-agent writes a structured report to `plans/reports/{review-type}-round{N}-{date}.md`
- The main agent reads the report and integrates the findings into its own report; it DOES NOT override or filter
Rules:
- SKIP the fresh sub-agent when the prior round found zero issues (no fixes = nothing new to verify)
- NEVER skip the fresh sub-agent after a fix cycle; every fix invalidates the prior verdict
- NEVER reuse a sub-agent across rounds; every fresh round spawns a NEW `spawn_agent` call
- Max 3 fresh-subagent rounds per review; escalate via a direct user question if still failing; do NOT silently loop or fall back to any prior protocol
- Track iteration count in conversation context (session-scoped, no persistent files)
Review Protocol Injection: every fresh sub-agent review prompt MUST embed 10 protocol blocks VERBATIM. The template below has ALL 10 bodies already expanded inline. Copy the template wholesale into the Agent call's `prompt` field at runtime, replacing only the `{placeholders}` in the Task / Round / Reference Docs / Target Files / Output sections with context-specific values. Do NOT touch the embedded protocol sections.

Why inline expansion: placeholder markers would force file-read indirection at runtime, and AI compliance drops significantly behind indirection (see `SYNC:shared-protocol-duplication-policy`). Therefore the template carries all 10 protocol bodies pre-embedded.
Subagent type selection:
- code-reviewer → for code reviews (reviewing source files, git diffs, implementation)
- general-purpose → for plan / doc / artifact reviews (reviewing markdown plans, docs, specs)

```js
spawn_agent({
description: "Fresh Round {N} review",
agent_type: "code-reviewer",
prompt: `
## Task
{review-specific task, e.g., "Review all uncommitted changes for code quality" | "Review plan files under {plan-dir}" | "Review integration tests in {path}"}
## Round
Round {N}. You have ZERO memory of prior rounds. Re-read all target files from scratch via your own tool calls. Do NOT trust anything from the main agent beyond this prompt.
## Protocols (follow VERBATIM; these are non-negotiable)
### Evidence-Based Reasoning
Speculation is FORBIDDEN. Every claim needs proof.
1. Cite file:line, grep results, or framework docs for EVERY claim
2. Declare confidence: >80% act freely, 60-80% verify first, <60% DO NOT recommend
3. Cross-service validation required for architectural changes
4. "I don't have enough evidence" is valid and expected output
BLOCKED until: Evidence file path (file:line) provided; Grep search performed; 3+ similar patterns found; Confidence level stated.
Forbidden without proof: "obviously", "I think", "should be", "probably", "this is because".
If incomplete → output: "Insufficient evidence. Verified: [...]. Not verified: [...]."
### Bug Detection
MUST check categories 1-4 for EVERY review. Never skip.
1. Null Safety: Can params/returns be null? Are they guarded? Optional chaining gaps? .find() returns checked?
2. Boundary Conditions: Off-by-one (< vs <=)? Empty collections handled? Zero/negative values? Max limits?
3. Error Handling: Try-catch scope correct? Silent swallowed exceptions? Error types specific? Cleanup in finally?
4. Resource Management: Connections/streams closed? Subscriptions unsubscribed on destroy? Timers cleared? Memory bounded?
5. Concurrency (if async): Missing await? Race conditions on shared state? Stale closures? Retry storms?
6. Stack-Specific: JS: === vs ==, typeof null. C#: async void, missing using, LINQ deferred execution.
Classify: CRITICAL (crash/corrupt) → FAIL | HIGH (incorrect behavior) → FAIL | MEDIUM (edge case) → WARN | LOW (defensive) → INFO.
### Design Patterns Quality
Priority checks for every code change:
1. DRY via OOP: Same-suffix classes (*Entity, *Dto, *Service) MUST share a base class. 3+ similar patterns → extract to a shared abstraction.
2. Right Responsibility: Logic in LOWEST layer (Entity > Domain Service > Application Service > Controller). Never business logic in controllers.
3. SOLID: Single responsibility (one reason to change). Open-closed (extend, don't modify). Liskov (subtypes substitutable). Interface segregation (small interfaces). Dependency inversion (depend on abstractions).
4. After extraction/move/rename: Grep ENTIRE scope for dangling references. Zero tolerance.
5. YAGNI gate: NEVER recommend patterns unless 3+ occurrences exist. Don't extract for hypothetical future use.
Anti-patterns to flag: God Object, Copy-Paste inheritance, Circular Dependency, Leaky Abstraction.
### Logic & Intention Review
Verify WHAT code does matches WHY it was changed.
1. Change Intention Check: Every changed file MUST serve the stated purpose. Flag unrelated changes as scope creep.
2. Happy Path Trace: Walk through one complete success scenario through changed code.
3. Error Path Trace: Walk through one failure/edge case scenario through changed code.
4. Acceptance Mapping: If plan context available, map every acceptance criterion to a code change.
NEVER mark review PASS without completing both traces (happy + error path).
### Test Spec Verification
Map changed code to test specifications.
1. From changed files → find TC-{FEAT}-{NNN} in docs/business-features/{Service}/detailed-features/{Feature}.md Section 15.
2. Every changed code path MUST map to a corresponding TC (or be flagged as "needs TC").
3. New functions/endpoints/handlers → flag for test spec creation.
4. Verify TC evidence fields point to actual code (file:line, not stale references).
5. Auth changes → do TC-{FEAT}-02x exist? Data changes → do TC-{FEAT}-01x exist?
6. If no specs exist → log the gap and recommend $tdd-spec.
NEVER skip test mapping. Untested code paths are the #1 source of production bugs.
### Fix-Layer Accountability
NEVER fix at the crash site. Trace the full flow, fix at the owning layer. The crash site is a SYMPTOM, not the cause.
MANDATORY before ANY fix:
1. Trace the full data flow → map the complete path from data origin to crash site across ALL layers (storage → backend → API → frontend → UI). Identify where bad state ENTERS, not where it CRASHES.
2. Identify the invariant owner → which layer's contract guarantees this value is valid? Fix at the LOWEST layer that owns the invariant, not the highest layer that consumes it.
3. One fix, maximum protection → if the fix requires touching 3+ files with defensive checks, you are at the wrong layer → go lower.
4. Verify no bypass paths → confirm all data flows through the fix point. Check for direct construction skipping factories, clone/spread without re-validation, raw data not wrapped in domain models, mutations outside the model layer.
BLOCKED until: Full data flow traced (origin → crash); Invariant owner identified with file:line evidence; All access sites audited (grep count); Fix layer justified (lowest layer that protects most consumers).
Anti-patterns (REJECT): "Fix it where it crashes" (crash site ≠ cause site, trace upstream); "Add defensive checks at every consumer" (scattered defense = wrong layer); "Both fix is safer" (pick ONE authoritative layer).
### Rationalization Prevention
AI skips steps via these evasions. Recognize and reject:
- "Too simple for a plan" โ Simple + wrong assumptions = wasted time. Plan anyway.
- "I'll test after" โ RED before GREEN. Write/verify test first.
- "Already searched" โ Show grep evidence with file:line. No proof = no search.
- "Just do it" โ Still need task tracking. Skip depth, never skip tracking.
- "Just a small fix" โ Small fix in wrong location cascades. Verify file:line first.
- "Code is self-explanatory" โ Future readers need evidence trail. Document anyway.
- "Combine steps to save time" โ Combined steps dilute focus. Each step has distinct purpose.
### Graph-Assisted Investigation
MANDATORY when .code-graph/graph.db exists.
HARD-GATE: MUST run at least ONE graph command on key files before concluding any investigation.
Pattern: Grep finds files → trace --direction both reveals full system flow → Grep verifies details.
- Investigation/Scout: trace --direction both on 2-3 entry files
- Fix/Debug: callers_of on buggy function + tests_for
- Feature/Enhancement: connections on files to be modified
- Code Review: tests_for on changed functions
- Blast Radius: trace --direction downstream
CLI: python .claude/scripts/code_graph {command} --json. Use --node-mode file first (10-30x less noise), then --node-mode function for detail.
### Understand Code First
HARD-GATE: Do NOT write, plan, or fix until you READ existing code.
1. Search 3+ similar patterns (grep/glob) → cite file:line evidence.
2. Read existing files in the target area → understand structure, base classes, conventions.
3. Run python .claude/scripts/code_graph trace <file> --direction both --json when .code-graph/graph.db exists.
4. Map dependencies via connections or callers_of → know what depends on your target.
5. Write investigation to .ai/workspace/analysis/ for non-trivial tasks (3+ files).
6. Re-read the analysis file before implementing → never work from memory alone.
7. NEVER invent new patterns when existing ones work → match exactly or document deviation.
BLOCKED until: Read target files; Grep 3+ patterns; Graph trace (if graph.db exists); Assumptions verified with evidence.
## Reference Docs (READ before reviewing)
- docs/project-reference/code-review-rules.md
- {skill-specific reference docs, e.g., integration-test-reference.md for integration-test-review; backend-patterns-reference.md for backend reviews; frontend-patterns-reference.md for frontend reviews}
## Target Files
{explicit file list OR "run git diff to see uncommitted changes" OR "read all files under {plan-dir}"}
## Output
Write a structured report to plans/reports/{review-type}-round{N}-{date}.md with sections:
- Status: PASS | FAIL
- Issue Count: {number}
- Critical Issues (with file:line evidence)
- High Priority Issues (with file:line evidence)
- Medium / Low Issues
- Cross-cutting findings
Return the report path and status to the main agent.
Every finding MUST have file:line evidence. Speculation is forbidden.
`
})
```
When instantiating, replace only the `{placeholders}` in the Task / Round / Reference Docs / Target Files / Output sections with context-specific content, and set the `code-reviewer` subagent_type for code reviews or `general-purpose` for plan / doc / artifact reviews.

Graph Impact Analysis: when `.code-graph/graph.db` exists, run `blast-radius --json` to detect ALL files affected by changes (7 edge types: CALLS, MESSAGE_BUS, API_ENDPOINT, TRIGGERS_EVENT, PRODUCES_EVENT, TRIGGERS_COMMAND_EVENT, INHERITS). Compute the gap: impacted_files - changed_files = potentially stale files. Risk: <5 Low, 5-20 Medium, >20 High. Use `trace --direction downstream` for deep chains on high-impact files.
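The gap computation in words above, as a sketch (the exact `--json` output schema is an assumption; adapt the field extraction to the real one):

```typescript
// impactedFiles: from the blast-radius output; changedFiles: from git diff --name-only
function staleFileRisk(impactedFiles: string[], changedFiles: string[]) {
  const changed = new Set(changedFiles);
  const gap = impactedFiles.filter(f => !changed.has(f)); // potentially stale files
  const risk = gap.length < 5 ? "Low" : gap.length <= 20 ? "Medium" : "High";
  return { gap, risk };
}
```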
**IMPORTANT MUST ATTENTION** run graph blast-radius on changed files to find potentially stale consumers/handlers (when graph.db exists).
**IMPORTANT MUST ATTENTION** read frontend-patterns-reference, scss-styling-guide, design-system/README before any UI change.
man_days_traditional (Σh/6 × productivity_factor); SP DERIVED. UI cost usually dominates; bump SP one bucket if NEW UI surface (page/complex form/dashboard). Frontmatter MUST include story_points, complexity, man_days_traditional, man_days_ai, estimate_scope_included, estimate_scope_excluded, estimate_reasoning (UI vs backend cost driver). Cap SP 3 for additive-on-existing-model + existing-UI unless test scope >1.5d. SP 13 SHOULD split, SP 21 MUST split.
MUST ATTENTION: apply critical thinking. Every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present a guess as fact.
MUST ATTENTION: apply sequential thinking. Multi-step Thought N/M, REVISION/BRANCH/HYPOTHESIS markers, confidence % closer; see the $sequential-thinking skill.
MUST ATTENTION: apply AI mistake prevention. Holistic-first debugging, fix at the responsible layer, surface ambiguity before coding, re-read files after compaction.
- Write findings to `plans/reports/` incrementally and synthesize from disk.
- Emit `Reference docs read: ...` and read `lessons.md`; project conventions override generic defaults.
- Use `[N.M] $skill-name → phase` prefixes and one-`in_progress` discipline.

IMPORTANT MUST ATTENTION follow declared step order for this skill; NEVER skip, reorder, or merge steps without explicit user approval
IMPORTANT MUST ATTENTION for every step/sub-skill call: set in_progress before execution, set completed after execution
IMPORTANT MUST ATTENTION every skipped step MUST include explicit reason; every completed step MUST include concise evidence
IMPORTANT MUST ATTENTION if Task tools unavailable, maintain an equivalent step-by-step plan tracker with synchronized statuses
MANDATORY IMPORTANT MUST ATTENTION break work into small todo tasks using task tracking BEFORE starting. MANDATORY IMPORTANT MUST ATTENTION validate decisions with the user via a direct user question; never auto-decide. MANDATORY IMPORTANT MUST ATTENTION add a final review todo task to verify work quality. MANDATORY IMPORTANT MUST ATTENTION READ the following files before starting:
[TASK-PLANNING] Before acting, analyze task scope and systematically break it into small todo tasks and sub-tasks using task tracking.
[IMPORTANT] Analyze how big the task is and break it into many small todo tasks systematically before starting โ this is very important.
Source: .claude/hooks/lib/prompt-injections.cjs + .claude/.ck.json
$workflow-start <workflowId> for standard workflows; sequence custom steps manually.

[CRITICAL] Hard-won project debugging/architecture rules. MUST apply BEFORE forming a hypothesis or writing code.
Goal: Prevent recurrence of known failure patterns โ debugging, architecture, naming, AI orchestration, environment.
Top Rules (apply always):
- Parallel async + repo/UoW → use `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask`
- Verify the Python alias first (`where python` / `where py`) → NEVER assume `python`/`python3` resolves

Details:
- Use `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask`. `ExecuteUowTask` creates a new UoW but reuses the outer DI scope (same DbContext); parallel iterations sharing a non-thread-safe DbContext silently corrupt data. `ExecuteInjectScopedAsync` creates a new UoW + a new DI scope (fresh repo per iteration).
- Message ownership: the prefix names the owning service (e.g., `AccountUserEntityEventBusMessage` = Accounts owns it). Core services (Accounts, Communication) are leaders. Feature services (Growth, Talents) sending to core MUST use `{CoreServiceName}...RequestBusMessage`; never define your own event for core to consume.
- Naming: a policy like `HrManagerOrHrOrPayroll` names set members, not what it guards. Add a role → rename = broken abstraction. Rule: names express what something DOES/GUARDS, not what it CONTAINS. Test: does adding/removing a member force a rename? YES = content-driven = bad → rename to purpose (e.g., `HrOperationsAccessPolicy`). Nuance: "Or" is fine in behavioral idioms (`FirstOrDefault`, `SuccessOrThrow`); it expresses what HAPPENS, not membership.
- Never assume `python`/`python3` resolves; verify the alias first. Python may not be in the bash PATH under those names. Check: `where python` / `where py`. Prefer `py` (Windows Python Launcher) for one-liners, or `node` if a JS alternative exists.

Pointers:
- Test-specific lessons → `docs/project-reference/integration-test-reference.md`, Lessons Learned section.
- Production-code anti-patterns → `docs/project-reference/backend-patterns-reference.md`, Anti-Patterns section.
- Generic debugging/refactoring reminders → System Lessons in `.claude/hooks/lib/prompt-injections.cjs`.

Recap:
- `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask` (shared DbContext = silent data corruption)
- Feature-to-core messages use `{CoreServiceName}...RequestBusMessage`
- Never assume `python`/`python3` resolves; run `where python`/`where py` first, use the `py` launcher or `node`

Break work into small tasks (task tracking) before starting. Add a final task: "Analyze AI mistakes & lessons learned".
Extract lessons → ROOT CAUSE ONLY, not symptom fixes:
- Record each lesson via `$learn`.
- Ask "Would `$code-review`/`$code-simplifier`/`$security`/`$lint` catch this?" → Yes → improve the review skill instead of adding a `$learn` lesson.
[TASK-PLANNING] [MANDATORY] BEFORE executing any workflow or skill step, create/update task tracking for all planned steps, then keep it synchronized as each step starts/completes.