integration-test-review
// [Code Quality] Use when you need to review integration tests for assertion quality, bug protection, repeatability, and test-spec traceability.
| name | integration-test-review |
| version | 1.1.0 |
| description | [Code Quality] Use when you need to review integration tests for assertion quality, bug protection, repeatability, and test-spec traceability. |
[BLOCKING] Execute skill steps in declared order. NEVER skip, reorder, or merge steps without explicit user approval. [BLOCKING] Before each step or sub-skill call, update task tracking: set in_progress when the step starts, set completed when it ends. [BLOCKING] Every completed/skipped step MUST include brief evidence or an explicit skip reason. [BLOCKING] If Task tools are unavailable, create and maintain an equivalent step-by-step plan tracker with the same status transitions.
Goal: Review integration tests for real bug-protection value, correct data assertions, infinite repeatability, spec alignment.
Scope: All test files in uncommitted changes (default), or user-specified scope.
Workflow: Phase 0 Detect → Collect → 6-Gate Review → Spec Cross-Check → Report → Fix issues → Fresh sub-agent re-review → Build & verify → If fail: investigate + fix plan
Non-negotiable rules:
MUST read handler/service source BEFORE judging any test assertions
MUST flag smoke-only tests (no-exception-only checks) as FAIL
MUST flag DI-resolution-only tests (resolve + not-null) as FAIL; these are NOT integration tests
MUST verify tests use unique IDs per run (infinitely repeatable)
MUST use async polling/retry for ALL DB assertions: async delays are the norm
MUST flag repository-created or repository-mutated test data that bypasses real use cases and can leave invalid state
MUST require 3 consecutive successful suite/project runs before declaring integration tests verified/idempotent
NEVER accept assertions that always pass regardless of handler correctness
NO smoke/fake/useless tests: every test MUST execute actual operations and verify data state
docs/project-reference/integration-test-reference.md: integration test patterns, fixture setup, seeder conventions, lessons learned (MUST READ before reviewing; read directly, do not rely on hook-injected conversation text)
Classify BEFORE any gate review. Routing wrong wastes all the effort that follows.
| Signal | Classification | Action |
|---|---|---|
| No user-specified files | Uncommitted changes | Run git diff --name-only to collect scope |
| User specifies files | Explicit scope | Use provided list directly |
| 10+ test files | Large scope | Parallel sub-agents grouped by module |
| 1-9 test files | Normal scope | Single review pass |
| 0 test files in changes | No tests | Report gap → ask user for explicit scope via AskUserQuestion |
Search for test reference docs; NEVER hardcode paths. Grep for integration-test-reference, test-patterns, integration-test-guide near changed test files to discover project-specific conventions before starting gate review.
Gate 1 (Assertion Value). Think: If I deleted the core logic from this handler, which assertions would fail? If NONE → FAIL.
The #1 AI failure mode: hallucinated assertions that look real but verify nothing.
PASS: At least one assertion per test FAILS if core logic breaks.
FAIL: x >= 0 where x is always >= 0; count >= 0; string not-empty checks on required fields.
Verify: Read handler source → list fields it changes → check the test asserts those fields.
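To make Gate 1 concrete, here is a minimal xUnit-style sketch contrasting a dead assertion with a live one. It assumes a fixture exposing a mediator and an EF Core context; ApproveOrderCommand, OrderStatus, SeedOrderAsync, and the field names are hypothetical, not project APIs.

```csharp
[Fact]
public async Task ApproveOrder_SetsStatusAndApprover()
{
    // Seed with a unique ID per run (see Gate 3) via a hypothetical helper.
    var orderId = await SeedOrderAsync();

    await _mediator.Send(new ApproveOrderCommand(orderId, _approverId));

    var order = await _db.Orders.SingleAsync(o => o.Id == orderId);

    // DEAD: always true, still passes if the handler is gutted.
    // Assert.True(order.Version >= 0);

    // LIVE: fails the moment the handler's core logic is removed.
    Assert.Equal(OrderStatus.Approved, order.Status);
    Assert.Equal(_approverId, order.ApprovedBy);
}
```

If deleting the handler's body would not flip at least one assertion to red, the test is decoration.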
Gate 2 (Data State). Think: Does this test prove the database changed, or just that no exception occurred?
PASS: After command, test queries DB and asserts specific entity field values.
FAIL: the test only asserts that no exception was thrown, or checks existence (not-null) without asserting field values.
Exception: smoke-only is acceptable ONLY when the side effect is truly unobservable, and MUST include an explicit justification comment.
ALWAYS use async polling/retry for data assertions. Event handlers, bus consumers, and background jobs run async; data may not be immediately available.
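A polling sketch for such assertions, assuming EF Core's FirstOrDefaultAsync and xUnit; the helper name, entities, and timing values are illustrative only.

```csharp
// Poll until the query returns a row or the timeout elapses.
private static async Task<T> PollUntilAsync<T>(
    Func<Task<T?>> query, TimeSpan timeout) where T : class
{
    var deadline = DateTime.UtcNow + timeout;
    while (DateTime.UtcNow < deadline)
    {
        var result = await query();
        if (result is not null) return result;
        await Task.Delay(TimeSpan.FromMilliseconds(250));
    }
    throw new TimeoutException("Expected data state never materialized.");
}

// Usage: the invoice is written by an async event handler, so never
// assert immediately after the command returns.
var invoice = await PollUntilAsync(
    () => _db.Invoices.FirstOrDefaultAsync(i => i.OrderId == orderId),
    TimeSpan.FromSeconds(10));
Assert.Equal(InvoiceStatus.Issued, invoice.Status);
```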
Gate 3 (Repeatability). Think: If this test runs N times in a shared database, does it get noisier each run? Would run #2 fail?
FAIL: Hardcoded IDs, hardcoded business keys without unique suffix, teardown/cleanup, ordering dependency, seeders without existence check, or direct repository setup that creates state users could not create through real use cases.
Verify: Repeatability is only proven when the relevant suite/project passes 3 consecutive runs without resetting data. One green run is not enough.
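A two-line sketch of the unique-data rule; NewUniqueId() stands in for whatever ID generator the project's integration-test reference prescribes.

```csharp
// Unique business key every run: run #100 behaves exactly like run #1.
var code = $"ORD-{NewUniqueId()}";
// var code = "ORD-TEST-001";   // hardcoded: collides on run #2 in a shared DB
var orderId = await _mediator.Send(new CreateOrderCommand(code));
```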
Gate 4 (Domain Logic). Think: Did I read the handler source? Do I know which exact fields it writes? Do assertions check those fields, and ONLY those fields?
PASS: Assertions match what handler ACTUALLY does (verified by reading source). Covers primary business rule. Validation paths tested.
FAIL: Assertions on untouched fields (copy-paste), missing primary side-effect assertion, event handler tests that never trigger the event.
Verify: Grep handler class → read it → list what it does → compare with assertions.
Also check:
Gate 5 (Traceability). Think: Can I trace TC-XXX-NNN from test annotation → spec docs → feature docs in one unbroken chain?
PASS: Test has spec annotation linking to TC ID. TC ID exists in spec docs. Method name matches TC.
FAIL (WARN, not BLOCK): Missing annotation, orphaned TC ID, or spec says "Planned" but test exists.
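A minimal annotation sketch using the [Trait("TestSpec", ...)] convention cited later in this skill; the TC ID and method name are hypothetical.

```csharp
[Fact]
[Trait("TestSpec", "TC-ORD-012")]   // TC-ORD-012 must exist in spec docs
public async Task RejectOrder_UnauthorizedApprover_ReturnsValidationError()
{
    await Task.CompletedTask;       // arrange / act / assert per the TC scenario
}
```

Gate 5 passes when the TC ID resolves through the spec docs to a feature-doc Section 15 entry and the method name matches the TC.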
Gate 6 (Three-Way Sync). Think: Have I read ALL 3 sources? Where exactly do they disagree? Does evidence support a verdict, or must I escalate?
The hardest gate. Identify the discrepancy, classify it using the source-of-truth hierarchy, and NEVER silently pick a winner.
| Priority | Source | Why |
|---|---|---|
| 1 (Highest) | Feature docs (docs/business-features/…/Section 15 TCs) | Business intent: defines WHAT must happen |
| 2 | Test-spec docs (docs/specs/) | TC scenarios derived from feature docs: defines HOW to verify |
| 3 | Implementation code (handler/entity/service) | What WAS built: may reflect intentional evolution not yet in docs |
| 4 (Lowest) | Integration test code | What IS being tested: most likely to be wrong or stale |
Rule: Docs win over code. Code wins over tests. Feature docs win over test-spec docs.
| Pattern | Feature Doc | Impl Code | Test Code | Verdict | Action |
|---|---|---|---|---|---|
| All agree | ✓ | ✓ | ✓ | PASS | None |
| Stale docs | ✗ | ✓ | ✓ | Docs lag code | Flag docs for /docs-update; test is correct |
| Wrong test | ✓ | ✓ | ✗ | Test wrong | Fix test assertions to match code + docs |
| Code bug | ✓ | ✗ | ✓ | Code has bug | Report as BUG; do NOT fix test to match code |
| Test + code diverge from docs | ✓ | ✗ | ✗ | Code bug + wrong test | Fix test to match docs; report code bug |
| Three-way conflict | ✗ | ✗ | ✗ | ESCALATE | Cannot self-resolve → AskUserQuestion |
CRITICAL rules:
- Escalate via AskUserQuestion; NEVER self-resolve
- Compare each pair with file:line evidence for each source
PASS: All three agree. WARN: Minor wording differences, same semantics. FAIL: Semantic disagreement on a field/rule/outcome. ESCALATE: All three differ and evidence cannot resolve it.
Use TaskCreate for EACH phase before starting.
Phase 1 (Collect): Categorize changed files: new (full review), modified (changed methods only), new projects (infra + samples).
Phase 2 (Gate Review): Per file, apply all 6 gates. Record a per-file verdict table:
| Gate | Verdict | Evidence |
|---|---|---|
| 1. Assertion Value | PASS/FAIL | {file:line} |
| 2. Data State | PASS/FAIL | {file:line} |
| 3. Repeatability | PASS/FAIL | {file:line} |
| 4. Domain Logic | PASS/FAIL | {file:line} |
| 5. Traceability | PASS/WARN | {file:line} |
| 6. Three-Way Sync | PASS/WARN/FAIL/ESCALATE | {file:line} |
Phase 3 (Spec Cross-Check + Three-Way Diff): For each TC ID in code, verify it exists in docs/business-features/ (Section 15) and docs/specs/.
Phase 4 (Initial Report): Write to plans/reports/integration-test-review-{date}-{slug}.md
Phase 5 (Fix All Issues, MANDATORY): Fix every CRITICAL and HIGH issue. MEDIUM: fix if straightforward; otherwise document as tech debt.
Record each fix with file:line under ## Fixes Applied.
Phase 6 (Fresh Sub-Agent Re-Review, MANDATORY):
After Phase 5 fixes, spawn fresh code-reviewer sub-agents (parallel by module for 10+ files; single agent otherwise) using the canonical Agent template from SYNC:review-protocol-injection. Each sub-agent re-reads ALL target test files from scratch with ZERO memory of Phase 2/5. When constructing the Agent call prompt:
- Use the SYNC:review-protocol-injection template verbatim
- Set subagent_type: "code-reviewer"
- Embed the protocols: SYNC:evidence-based-reasoning, SYNC:bug-detection, SYNC:design-patterns-quality, SYNC:logic-and-intention-review, SYNC:test-spec-verification, SYNC:fix-layer-accountability, SYNC:rationalization-prevention, SYNC:graph-assisted-investigation, SYNC:understand-code-first
- Task text: "Review integration tests in {file-list} against 6 quality gates: assertion value, data state, infinite repeatability, domain logic, test-spec traceability, three-way sync. Read handler source AND feature docs before judging assertions. Flag smoke-only, existence-only, dead assertions, and repository-created invalid test data as FAIL. Source-of-truth hierarchy: feature docs > test-spec docs > implementation code > test code. Classify every disagreement as: wrong test, code bug, stale docs, or escalate (three-way conflict)."
- Reference doc: docs/project-reference/integration-test-reference.md
- Report path: plans/reports/integration-test-review-round{N}-{date}.md
After sub-agents return:
- Integrate findings under ## Round {N} Findings (Fresh Sub-Agent); DO NOT filter or override
- Escalate via AskUserQuestion if still failing after 3 rounds
Phase 7 (Build & Run Tests, MANDATORY): Build and run ALL changed/reviewed test files.
- Record outcomes under ## Test Execution Results
Phase 8 (Failure Investigation, if Phase 7 fails): Never just retry; investigate systematically.
- For each failure: location (file:line, TC-ID), error summary, root cause + confidence %, proposed fix
- If a test requires a running system, mark it BLOCKED; do NOT mark it as a test failure
- Record findings under ## Failure Investigation
Scaling: 10+ files → parallel sub-agents grouped by module. Each sub-agent gets its file list + the 6 gates + handler paths + feature doc paths. Consolidate into a single report.
| Anti-Pattern | Why It's Bad |
|---|---|
| Smoke-only (no-exception alone) | Proves no crash, not correctness |
| Existence-only (not-null) | Proves data exists, not handler set it correctly |
| Dead assertion (count >= 0, always true) | Tests nothing |
| Framework testing (assert auto-set fields) | Tests framework, not handler |
| Copy-paste assertions (wrong entity fields) | Assertions don't match handler |
| Hardcoded ID (Id = "test-001") | Fails on second run |
| Cleanup dependency (finally { Delete(); }) | Fragile, hides pollution |
| Order dependency (test B needs A first) | Parallel execution breaks |
| Repository data hacks (direct create/update bypassing use cases) | Leaves impossible state and hides real workflow bugs |
| Missing await (unchecked async exception) | Exception swallowed silently |
| Event not triggered (query, never fire) | Tests seeder, not handler |
| Test fixed to match broken code | Hides the bug; docs still say it's wrong |
| Self-resolved three-way conflict | AI picked a winner without evidence: a silent lie |
| Stale docs assumed without two-source proof | Docs may be right; code may be the bug |
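To make the "Missing await" row concrete, a two-line sketch; VerifyOrderStateAsync is a hypothetical helper whose body contains the real assertions.

```csharp
// BAD: the Task is discarded, so a failed assertion inside it is swallowed
// silently and the test stays green.
_ = VerifyOrderStateAsync(orderId);

// GOOD: awaited, so a failed assertion fails the test.
await VerifyOrderStateAsync(orderId);
```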
MANDATORY IMPORTANT MUST ATTENTION (NO EXCEPTIONS): If NOT already in a workflow, MUST use AskUserQuestion to ask the user:
- Activate write-integration-test workflow (Recommended): scout → investigate → tdd-spec → tdd-spec-review → integration-test → integration-test-review → integration-test-verify → tdd-spec [direction=sync] → docs-update → watzup → workflow-end
- Execute /integration-test-review directly: run standalone
MANDATORY IMPORTANT MUST ATTENTION (NO EXCEPTIONS): after completing, MUST use AskUserQuestion:
| Skill | Relationship | When to Call |
|---|---|---|
| /integration-test | Producer: generates the tests this skill reviews | Always preceded by /integration-test |
| /integration-test-verify | Successor: runs tests after review clears | Call after review passes all 6 gates |
| /tdd-spec | TC source: Gate 5 checks TCs exist in feature doc Section 15 | If Gate 5 fails (orphaned test) → run /tdd-spec UPDATE |
| /spec-discovery | Spec authority: Gate 6 compares test code vs spec bundle | If Gate 6 finds a conflict: spec is authority |
| /feature-docs | Business doc: Gate 6 compares tests vs feature doc business rules | If Gate 6 finds a conflict: check feature-docs vs spec-discovery alignment first |
| /docs-update | Orchestrator: includes tdd-spec sync | Call when Gate 6 reveals doc staleness |
When called outside a workflow, follow this chain after running integration-test-review.
integration-test-review (you are here)
│
├─ PREREQUISITE: integration tests must already exist
│    [REQUIRED] Verify: IntegrationTests/ directory has test files with [Trait("TestSpec", ...)] annotations
│
├─ Gate 1-5 findings → fix tests (re-run integration-test if test code needs regeneration)
│
├─ Gate 6 (Three-Way Sync) conflict resolution:
│    │
│    ├─ Test code ≠ spec (feature doc says behavior A, test asserts behavior B):
│    │    → Determine: spec authoritative or test authoritative?
│    │    → If SPEC is correct: fix test → re-run /integration-test
│    │    → If TEST reflects correct new behavior (spec stale): /spec-discovery [update] → /feature-docs [update] → /tdd-spec [UPDATE] → update test
│    │
│    ├─ Test code ≠ implementation (test asserts X, code does Y):
│    │    → If CODE is correct: fix test → /tdd-spec UPDATE (update TC to match code's correct behavior)
│    │    → If TEST is correct (code bug): do NOT update test → fix code → /prove-fix → re-run tests
│    │
│    └─ Feature doc ≠ spec bundle (business doc says A, engineering spec says B):
│         → Feature doc has higher authority for business rules
│         → Run /spec-discovery [update] to reconcile engineering spec with business doc
│         → Do NOT self-resolve; escalate to user if ambiguous
│
├─ [REQUIRED] → /integration-test-verify
│    After all fixes, run actual tests to confirm all gates pass.
│
├─ [REQUIRED] → /tdd-spec [direction=sync]
│    If TCs were updated (Gate 5/6 fix), sync the QA dashboard.
│
└─ [RECOMMENDED] → /docs-update
     If Gate 6 revealed doc staleness, /docs-update runs the full chain to update all layers.
[IMPORTANT] Use TaskCreate to break ALL work into small tasks BEFORE starting. A test that cannot fail is not a test; it is decoration. Every test MUST earn its existence by proving it would FAIL if the bug it guards against were reintroduced. Every finding requires file:line proof with confidence >80%.
Critical Thinking Mindset: Apply critical thinking and sequential thinking. Every claim needs traced proof and confidence >80% to act. Anti-hallucination: never present a guess as fact; cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence. Certainty without evidence is the root of all hallucination.
Evidence-Based Reasoning: Speculation is FORBIDDEN. Every claim needs proof.
- Cite file:line, grep results, or framework docs for EVERY claim
- Declare confidence: >80% act freely, 60-80% verify first, <60% DO NOT recommend
- Cross-service validation required for architectural changes
- "I don't have enough evidence" is a valid and expected output
BLOCKED until: evidence file path (file:line) provided; grep search performed; 3+ similar patterns found; confidence level stated.
Forbidden without proof: "obviously", "I think", "should be", "probably", "this is because".
If incomplete, output: "Insufficient evidence. Verified: [...]. Not verified: [...]."
Fix-Triggered Re-Review Loop: Re-review is triggered by a FIX CYCLE, not by a round number. Review loop: review → if issues → fix → re-review, until a round finds no issues. A clean review ENDS the loop; no further rounds required.
Round 1: Main-session review. Read target files, build understanding, note issues. Output findings + verdict (PASS / FAIL).
Decision after Round 1:
- No issues found (PASS, zero findings) → review ENDS. Do NOT spawn a fresh sub-agent for confirmation.
- Issues found (FAIL, or any non-zero findings) → fix the issues, then spawn a fresh sub-agent for Round 2 re-review.
Fresh sub-agent re-review (after every fix cycle): Spawn a NEW Agent tool call; never reuse a prior agent. The sub-agent re-reads ALL files from scratch with ZERO memory of prior rounds. See SYNC:fresh-context-review for the spawn mechanism and SYNC:review-protocol-injection for the canonical Agent prompt template. Each fresh round must catch:
- Cross-cutting concerns missed in the prior round
- Interaction bugs between changed files
- Convention drift (new code vs existing patterns)
- Missing pieces that should exist but don't
- Subtle edge cases the prior round rationalized away
- Regressions introduced by the fixes themselves
Loop termination: After each fresh round, repeat the same decision: clean → END; issues → fix → next fresh round. Continue until a round finds zero issues, or 3 fresh-subagent rounds max, then escalate to the user via AskUserQuestion.
Rules:
- A clean Round 1 ENDS the review; no mandatory Round 2
- NEVER skip the fresh sub-agent re-review after a fix cycle (every fix invalidates the prior verdict)
- NEVER reuse a sub-agent across rounds; every iteration spawns a NEW Agent call
- Main agent READS sub-agent reports but MUST NOT filter, reinterpret, or override findings
- Max 3 fresh-subagent rounds per review; if still FAIL, escalate via AskUserQuestion (do NOT silently loop)
- Track round count in conversation context (session-scoped)
- Final verdict must incorporate ALL rounds executed
The report must include ## Round N Findings (Fresh Sub-Agent) for every round N≥2 that was executed.
Fresh Sub-Agent Review: Eliminate orchestrator confirmation bias via isolated sub-agents.
Why: The main agent knows what it (or /cook) just fixed and rationalizes findings accordingly. A fresh sub-agent has ZERO memory, re-reads from scratch, and catches what the main agent dismissed. Sub-agent bias is mitigated by (1) fresh context, (2) verbatim protocol injection, (3) the main agent not filtering the report.
When: ONLY after a fix cycle. A review round that finds zero issues ENDS the loop; do NOT spawn a confirmation sub-agent. A review round that finds issues triggers: fix → fresh sub-agent re-review.
How:
- Spawn a NEW Agent tool call; use the code-reviewer subagent_type for code reviews, general-purpose for plan/doc/artifact reviews
- Inject ALL required review protocols VERBATIM into the prompt; see SYNC:review-protocol-injection for the full list and template. Never reference protocols by file path; AI compliance drops behind file-read indirection (see SYNC:shared-protocol-duplication-policy)
- The sub-agent re-reads ALL target files from scratch via its own tool calls; never pass file contents inline in the prompt
- The sub-agent writes a structured report to plans/reports/{review-type}-round{N}-{date}.md
- The main agent reads the report and integrates findings into its own report; it DOES NOT override or filter
Rules:
- SKIP the fresh sub-agent when the prior round found zero issues (no fixes = nothing new to verify)
- NEVER skip the fresh sub-agent after a fix cycle; every fix invalidates the prior verdict
- NEVER reuse a sub-agent across rounds; every fresh round spawns a NEW Agent call
- Max 3 fresh-subagent rounds per review; escalate via AskUserQuestion if still failing; do NOT silently loop or fall back to any prior protocol
- Track iteration count in conversation context (session-scoped, no persistent files)
Review Protocol Injection: Every fresh sub-agent review prompt MUST embed the 10 protocol blocks VERBATIM. The template below has ALL 10 bodies already expanded inline. Copy the template wholesale into the Agent call's prompt field at runtime, replacing only the {placeholders} in the Task / Round / Reference Docs / Target Files / Output sections with context-specific values. Do NOT touch the embedded protocol sections.
Why inline expansion: Placeholder markers would force file-read indirection at runtime, and AI compliance drops significantly behind indirection (see SYNC:shared-protocol-duplication-policy). Therefore the template carries all 10 protocol bodies pre-embedded.
Subagent types:
- code-reviewer: for code reviews (source files, git diffs, implementation)
- general-purpose: for plan / doc / artifact reviews (markdown plans, docs, specs)
Agent({
description: "Fresh Round {N} review",
subagent_type: "code-reviewer",
prompt: `
## Task
{review-specific task, e.g. "Review all uncommitted changes for code quality" | "Review plan files under {plan-dir}" | "Review integration tests in {path}"}
## Round
Round {N}. You have ZERO memory of prior rounds. Re-read all target files from scratch via your own tool calls. Do NOT trust anything from the main agent beyond this prompt.
## Protocols (follow VERBATIM ā these are non-negotiable)
### Evidence-Based Reasoning
Speculation is FORBIDDEN. Every claim needs proof.
1. Cite file:line, grep results, or framework docs for EVERY claim
2. Declare confidence: >80% act freely, 60-80% verify first, <60% DO NOT recommend
3. Cross-service validation required for architectural changes
4. "I don't have enough evidence" is valid and expected output
BLOCKED until: Evidence file path (file:line) provided; Grep search performed; 3+ similar patterns found; Confidence level stated.
Forbidden without proof: "obviously", "I think", "should be", "probably", "this is because".
If incomplete ā output: "Insufficient evidence. Verified: [...]. Not verified: [...]."
### Bug Detection
MUST check categories 1-4 for EVERY review. Never skip.
1. Null Safety: Can params/returns be null? Are they guarded? Optional chaining gaps? .find() returns checked?
2. Boundary Conditions: Off-by-one (< vs <=)? Empty collections handled? Zero/negative values? Max limits?
3. Error Handling: Try-catch scope correct? Silent swallowed exceptions? Error types specific? Cleanup in finally?
4. Resource Management: Connections/streams closed? Subscriptions unsubscribed on destroy? Timers cleared? Memory bounded?
5. Concurrency (if async): Missing await? Race conditions on shared state? Stale closures? Retry storms?
6. Stack-Specific: JS: === vs ==, typeof null. C#: async void, missing using, LINQ deferred execution.
Classify: CRITICAL (crash/corrupt) → FAIL | HIGH (incorrect behavior) → FAIL | MEDIUM (edge case) → WARN | LOW (defensive) → INFO.
### Design Patterns Quality
Priority checks for every code change:
1. DRY via OOP: Same-suffix classes (*Entity, *Dto, *Service) MUST share base class. 3+ similar patterns → extract to shared abstraction.
2. Right Responsibility: Logic in LOWEST layer (Entity > Domain Service > Application Service > Controller). Never business logic in controllers.
3. SOLID: Single responsibility (one reason to change). Open-closed (extend, don't modify). Liskov (subtypes substitutable). Interface segregation (small interfaces). Dependency inversion (depend on abstractions).
4. After extraction/move/rename: Grep ENTIRE scope for dangling references. Zero tolerance.
5. YAGNI gate: NEVER recommend patterns unless 3+ occurrences exist. Don't extract for hypothetical future use.
Anti-patterns to flag: God Object, Copy-Paste inheritance, Circular Dependency, Leaky Abstraction.
### Logic & Intention Review
Verify WHAT code does matches WHY it was changed.
1. Change Intention Check: Every changed file MUST serve the stated purpose. Flag unrelated changes as scope creep.
2. Happy Path Trace: Walk through one complete success scenario through changed code.
3. Error Path Trace: Walk through one failure/edge case scenario through changed code.
4. Acceptance Mapping: If plan context available, map every acceptance criterion to a code change.
NEVER mark review PASS without completing both traces (happy + error path).
### Test Spec Verification
Map changed code to test specifications.
1. From changed files → find TC-{FEAT}-{NNN} in docs/business-features/{Service}/detailed-features/{Feature}.md Section 15.
2. Every changed code path MUST map to a corresponding TC (or flag as "needs TC").
3. New functions/endpoints/handlers → flag for test spec creation.
4. Verify TC evidence fields point to actual code (file:line, not stale references).
5. Auth changes → TC-{FEAT}-02x exist? Data changes → TC-{FEAT}-01x exist?
6. If no specs exist → log gap and recommend /tdd-spec.
NEVER skip test mapping. Untested code paths are the #1 source of production bugs.
### Fix-Layer Accountability
NEVER fix at the crash site. Trace the full flow, fix at the owning layer. The crash site is a SYMPTOM, not the cause.
MANDATORY before ANY fix:
1. Trace full data flow: Map the complete path from data origin to crash site across ALL layers (storage → backend → API → frontend → UI). Identify where bad state ENTERS, not where it CRASHES.
2. Identify the invariant owner: Which layer's contract guarantees this value is valid? Fix at the LOWEST layer that owns the invariant, not the highest layer that consumes it.
3. One fix, maximum protection: If the fix requires touching 3+ files with defensive checks, you are at the wrong layer; go lower.
4. Verify no bypass paths: Confirm all data flows through the fix point. Check for direct construction skipping factories, clone/spread without re-validation, raw data not wrapped in domain models, mutations outside the model layer.
BLOCKED until: Full data flow traced (origin → crash); Invariant owner identified with file:line evidence; All access sites audited (grep count); Fix layer justified (lowest layer that protects most consumers).
Anti-patterns (REJECT): "Fix it where it crashes" (crash site ≠ cause site; trace upstream); "Add defensive checks at every consumer" (scattered defense = wrong layer); "Both fix is safer" (pick ONE authoritative layer).
### Rationalization Prevention
AI skips steps via these evasions. Recognize and reject:
- "Too simple for a plan" ā Simple + wrong assumptions = wasted time. Plan anyway.
- "I'll test after" ā RED before GREEN. Write/verify test first.
- "Already searched" ā Show grep evidence with file:line. No proof = no search.
- "Just do it" ā Still need TaskCreate. Skip depth, never skip tracking.
- "Just a small fix" ā Small fix in wrong location cascades. Verify file:line first.
- "Code is self-explanatory" ā Future readers need evidence trail. Document anyway.
- "Combine steps to save time" ā Combined steps dilute focus. Each step has distinct purpose.
### Graph-Assisted Investigation
MANDATORY when .code-graph/graph.db exists.
HARD-GATE: MUST run at least ONE graph command on key files before concluding any investigation.
Pattern: Grep finds files → trace --direction both reveals full system flow → Grep verifies details.
- Investigation/Scout: trace --direction both on 2-3 entry files
- Fix/Debug: callers_of on buggy function + tests_for
- Feature/Enhancement: connections on files to be modified
- Code Review: tests_for on changed functions
- Blast Radius: trace --direction downstream
CLI: python .claude/scripts/code_graph {command} --json. Use --node-mode file first (10-30x less noise), then --node-mode function for detail.
### Understand Code First
HARD-GATE: Do NOT write, plan, or fix until you READ existing code.
1. Search 3+ similar patterns (grep/glob) → cite file:line evidence.
2. Read existing files in target area → understand structure, base classes, conventions.
3. Run python .claude/scripts/code_graph trace <file> --direction both --json when .code-graph/graph.db exists.
4. Map dependencies via connections or callers_of → know what depends on your target.
5. Write investigation to .ai/workspace/analysis/ for non-trivial tasks (3+ files).
6. Re-read analysis file before implementing → never work from memory alone.
7. NEVER invent new patterns when existing ones work → match exactly or document deviation.
BLOCKED until: Read target files; Grep 3+ patterns; Graph trace (if graph.db exists); Assumptions verified with evidence.
## Reference Docs (READ before reviewing)
- docs/project-reference/code-review-rules.md
- {skill-specific reference docs, e.g. integration-test-reference.md for integration-test-review; backend-patterns-reference.md for backend reviews; frontend-patterns-reference.md for frontend reviews}
## Target Files
{explicit file list OR "run git diff to see uncommitted changes" OR "read all files under {plan-dir}"}
## Output
Write a structured report to plans/reports/{review-type}-round{N}-{date}.md with sections:
- Status: PASS | FAIL
- Issue Count: {number}
- Critical Issues (with file:line evidence)
- High Priority Issues (with file:line evidence)
- Medium / Low Issues
- Cross-cutting findings
Return the report path and status to the main agent.
Every finding MUST have file:line evidence. Speculation is forbidden.
`
})
Checklist when using the template:
- Replace only the {placeholders} in the Task / Round / Reference Docs / Target Files / Output sections with context-specific content
- Use the code-reviewer subagent_type for code reviews and general-purpose for plan / doc / artifact reviews
Infinitely Repeatable Tests: Tests MUST run N times without failure. Like manual QC: run the suite 100 times and each run just adds more data. Verification is only PASS after the relevant suite/project passes 3 consecutive runs without a DB reset.
- Unique data per run: Use the project's unique ID generator for ALL entity IDs created in tests. NEVER hardcode IDs.
- Additive only: Tests create data, never delete/reset. Prior test runs MUST NOT interfere with current run.
- No schema rollback dependency: Tests work with current schema only. Never rely on schema rollback or migration reversals.
- Idempotent seeders: Fixture-level seeders use create-if-missing pattern (check existence before insert). Test-level data uses unique IDs per execution.
- No cleanup required: No teardown, no database reset between runs. Each test is isolated by unique seed data, not by cleanup.
- Unique names/codes: When entities require unique names/codes, append a unique suffix using the project's ID generator.
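A create-if-missing seeder sketch under the same assumptions as the earlier examples (EF Core-style context; the Currency entity is hypothetical).

```csharp
// Fixture-level seeder: safe to call on every run against the same database.
public async Task SeedCurrencyAsync(string isoCode)
{
    if (await _db.Currencies.AnyAsync(c => c.IsoCode == isoCode))
        return;                                   // already seeded: no duplicate

    _db.Currencies.Add(new Currency { IsoCode = isoCode });
    await _db.SaveChangesAsync();
}
```

Test-level data, by contrast, never reuses keys: it generates unique IDs per execution, so no existence check is needed.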
AI Mistake Prevention: Failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace the full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips, not just the happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether the bug is in the caller (wrong data) or the callee (wrong handling). Fix at the responsible layer; never patch the symptom site.
- Assume existing values are intentional; ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging; resist the nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes; apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding; don't pick silently. If a request has multiple interpretations, present each with an effort estimate and ask. Never assume the all-records, file-based, or more complex path.
Nested Task Expansion Contract: For workflow-step invocation, the [Workflow] ... row is only a parent container; the child skill still creates visible phase tasks.
- Call TaskList first. If a matching active parent workflow row exists, set nested=true and record parentTaskId; otherwise run standalone.
- Create one task per declared phase before phase work. When nested, prefix subjects [N.M] $skill-name – phase.
- When nested, link the parent with TaskUpdate(parentTaskId, addBlockedBy: [childIds]).
- Orchestrators must pre-expand a child skill's phase list and link the workflow row before invoking that child skill or sub-agent.
- Mark exactly one child in_progress before work and completed immediately after evidence is written.
- Complete the parent only after all child tasks are completed or explicitly cancelled with a reason.
Blocked until: TaskList done, child phases created, parent linked when nested, first child marked in_progress.
Project Reference Docs Gate: Run after task-tracking bootstrap and before target/source file reads, grep, edits, or analysis. Project docs override generic framework assumptions.
- Identify scope: file types, domain area, and operation.
- Required docs by trigger: always docs/project-reference/lessons.md; doc lookup → docs-index-reference.md; review → code-review-rules.md; backend/CQRS/API → backend-patterns-reference.md; domain/entity → domain-entities-reference.md; frontend/UI → frontend-patterns-reference.md; styles/design → scss-styling-guide.md + design-system/README.md; integration tests → integration-test-reference.md; E2E → e2e-test-reference.md; feature docs/specs → feature-docs-reference.md; architecture/new area → project-structure-reference.md.
- Read every required doc that exists; skip absent docs as not applicable. Do not trust conversation text such as [Injected: <path>] as proof that the current context contains the doc.
- Before target work, state: Reference docs read: ... | Missing/not applicable: ....
Blocked until: scope evaluated, required docs checked/read, lessons.md confirmed, citation emitted.
Task Tracking & External Report Persistence: Bootstrap this before execution; then run project-reference doc prefetch before target/source work.
- Create a small task breakdown before target file reads, grep, edits, or analysis. On context loss, inspect the current task list first.
- Mark one task in_progress before work and completed immediately after evidence; never batch transitions.
- For plan/review work, create plans/reports/{skill}-{YYMMDD}-{HHmm}-{slug}.md before the first finding.
- Append findings after each file/section/decision and synthesize from the report file at the end.
- Final output cites Full report: plans/reports/{filename}.
Blocked until: task breakdown exists, report path declared for plan/review work, first finding persisted before the next finding.
MUST ATTENTION apply critical thinking: every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present a guess as fact.
MUST ATTENTION apply AI mistake prevention: holistic-first debugging, fix at the responsible layer, surface ambiguity before coding, re-read files after compaction.
- Persist findings to plans/reports/ incrementally and synthesize from disk.
- Emit the Reference docs read: ... citation before target work.
- Read lessons.md; project conventions override generic defaults.
- Use [N.M] $skill-name – phase prefixes and one-in_progress discipline.
IMPORTANT MUST ATTENTION follow declared step order for this skill; NEVER skip, reorder, or merge steps without explicit user approval
IMPORTANT MUST ATTENTION for every step/sub-skill call: set in_progress before execution, set completed after execution
IMPORTANT MUST ATTENTION every skipped step MUST include explicit reason; every completed step MUST include concise evidence
IMPORTANT MUST ATTENTION if Task tools unavailable, maintain an equivalent step-by-step plan tracker with synchronized statuses
- Use TaskCreate for ALL phases BEFORE starting
- Escalate via AskUserQuestion; never self-resolve
Anti-Rationalization:
| Evasion | Rebuttal |
|---|---|
| "Smoke test is fine for now" | No smoke test earns its place. Fix or delete. |
| "Handler source too long to read" | Cannot judge assertion quality without reading. REQUIRED. |
| "Fresh sub-agent is overkill" | Round 1 alone NEVER declares PASS. Non-negotiable. |
| "Tests were passing before" | Passing ā correct. Dead assertions always pass. |
| "Conflict is obvious, I can self-resolve" | Three-way conflict requires escalation. NEVER self-resolve. |
| "Phase 6/7/8 optional for small fixes" | No exceptions. Every fix requires re-review + build verification. |
| "0 test files, nothing to review" | Report gap and ask user ā do NOT silently exit. |