prove-fix
// [Code Quality] Use when you need to prove fix correctness with code proof traces, confidence scoring, and stack-trace-style evidence chains.
| name | prove-fix |
| description | [Code Quality] Use when you need to prove fix correctness with code proof traces, confidence scoring, and stack-trace-style evidence chains. |
Codex compatibility note:
- Invoke repository skills with `$skill-name` in Codex; this mirrored copy rewrites legacy Claude `/skill-name` references.
- Prefer the `plan-hard` skill for planning guidance in this Codex mirror.
- Task tracker mandate: BEFORE executing any workflow or skill step, create/update task tracking for all steps and keep it synchronized as progress changes.
- User-question prompts mean to ask the user directly in Codex.
- Ignore Claude-specific mode-switch instructions when they appear.
- Strict execution contract: when a user explicitly invokes a skill, execute that skill protocol as written.
- Subagent authorization: when a skill is user-invoked or AI-detected and its protocol requires subagents, that skill activation authorizes use of the required `spawn_agent` subagent(s) for that task.
- Do not skip, reorder, or merge protocol steps unless the user explicitly approves the deviation first.
- For workflow skills, execute each listed child-skill step explicitly and report step-by-step evidence.
- If a required step/tool cannot run in this environment, stop and ask the user before adapting.
Codex does not receive Claude hook-based doc injection. When coding, planning, debugging, testing, or reviewing, open project docs explicitly using this routing.
Always read:
- docs/project-config.json (project-specific paths, commands, modules, and workflow/test settings)
- docs/project-reference/docs-index-reference.md (routes to the full docs/project-reference/* catalog)
- docs/project-reference/lessons.md (always-on guardrails and anti-patterns)

Situation-based docs:
- Backend work: backend-patterns-reference.md, domain-entities-reference.md, project-structure-reference.md
- Frontend work: frontend-patterns-reference.md, scss-styling-guide.md, design-system/README.md
- Feature work: feature-docs-reference.md
- Integration tests: integration-test-reference.md
- E2E tests: e2e-test-reference.md
- Code review: code-review-rules.md plus domain docs above based on changed files

Do not read all docs blindly. Start from docs-index-reference.md, then open only relevant files for the task.
[BLOCKING] Execute skill steps in declared order. NEVER skip, reorder, or merge steps without explicit user approval.
[BLOCKING] Before each step or sub-skill call, update task tracking: set `in_progress` when the step starts, set `completed` when the step ends.
[BLOCKING] Every completed/skipped step MUST include brief evidence or an explicit skip reason.
[BLOCKING] If Task tools are unavailable, create and maintain an equivalent step-by-step plan tracker with the same status transitions.
Goal: Prove (or disprove) that each fix change is correct by building a code proof trace — like a debugger stack trace — with confidence percentages per change.
Workflow:
Key Rules:
- Every claim requires file:line evidence — no exceptions
- Run after $fix — never skip it
- When this task involves frontend or UI changes, read:
  - docs/project-reference/frontend-patterns-reference.md
  - docs/project-reference/scss-styling-guide.md
  - docs/project-reference/design-system/README.md
- Be skeptical. Apply critical thinking and sequential thinking. Every claim needs traced proof and a confidence percentage (above 80% to act).
Post-fix verification skill that builds evidence-based proof chains for every code change. Think of it as a code debugger's stack trace, but for proving WHY a fix is correct.
Runs after $fix in bugfix, hotfix, or any fix workflow. Not for:
- pre-fix investigation (use $debug-investigate instead)
- running tests (use $test instead)
- general review (use $code-review instead)

List ALL changes made by the fix. For each change, document:
CHANGE #N: [short description]
File: [path/to/file.ext]
Lines: [start-end]
Before: [code snippet — the broken version]
After: [code snippet — the fixed version]
Type: [root-cause-fix | secondary-fix | defensive-fix | cleanup]
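As a sketch, the change record above can be modeled as a typed structure. The field names are hypothetical, taken directly from the template rather than any real schema:

```typescript
// Hypothetical shape for one documented change; mirrors the template fields.
type ChangeType = "root-cause-fix" | "secondary-fix" | "defensive-fix" | "cleanup";

interface FixChange {
  id: number;              // CHANGE #N
  description: string;     // short description
  file: string;            // path/to/file.ext
  lines: [number, number]; // start-end
  before: string;          // code snippet: the broken version
  after: string;           // code snippet: the fixed version
  type: ChangeType;
}

// Example record, using the catchError fix shown later in this document.
const example: FixChange = {
  id: 1,
  description: "Move catchError inside switchMap",
  file: "candidate-card.effect.ts",
  lines: [43, 69],
  before: "pipe(switchMap(...), catchError(...))",
  after: "pipe(switchMap(req => inner.pipe(catchError(...))))",
  type: "root-cause-fix",
};
```

A structured record like this makes the later per-change confidence scoring mechanical rather than ad hoc.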
Change types:
For EACH change, build a stack-trace-style proof chain. This is the core of the skill.
PROOF TRACE — Change #N: [description]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
SYMPTOM (what the user sees):
→ [Observable behavior, e.g., "UI doesn't refresh after assigning PIC"]
TRIGGER PATH (how the symptom occurs):
1. [file:line] User action → [method/event]
2. [file:line] → calls [method]
3. [file:line] → dispatches [action/event]
4. [file:line] → handler/effect [name]
5. [file:line] ← BUG HERE: [exact broken behavior]
ROOT CAUSE (proven):
→ [One sentence: what exactly is wrong and why]
→ Evidence: [file:line] shows [specific code proving the bug]
FIX MECHANISM (how the change fixes it):
→ [One sentence: what the fix does differently]
→ Before: [broken code path with file:line]
→ After: [fixed code path with file:line]
WHY THIS FIX IS CORRECT:
→ [Reasoning backed by code evidence]
→ Pattern precedent: [file:line] shows same pattern working elsewhere
→ Framework behavior: [file:line or doc reference] confirms expected behavior
EDGE CASES CHECKED:
→ [edge case 1]: [verified/not-verified] — [evidence]
→ [edge case 2]: [verified/not-verified] — [evidence]
SIDE EFFECTS:
→ [None / List of potential side effects with evidence]
CONFIDENCE: [X%]
Verified: [list of verified items]
Not verified: [list of unverified items, if any]
Every link in the chain requires a file:line reference — no exceptions.

Each change gets an individual confidence score:
| Score | Meaning | Action Required |
|---|---|---|
| 95-100% | Full proof trace complete, all edge cases verified, pattern precedent found | Ship it |
| 80-94% | Main proof trace complete, some edge cases unverified | Ship with caveats noted |
| 60-79% | Proof trace partial, some links unverified | Flag to user — recommend additional investigation |
| <60% | Insufficient evidence | BLOCK — do not proceed until evidence gathered |
Award points for each verified item:
| Criterion | Points | Evidence Required |
|---|---|---|
| Root cause identified with file:line | +25 | Code reference |
| Fix mechanism explained with before/after | +20 | Code diff |
| Pattern precedent found in codebase | +15 | Working example at file:line |
| Framework behavior confirmed | +10 | Framework source or docs |
| Edge cases checked (per case) | +5 each | Verification result |
| Side effects assessed | +10 | Impact analysis |
| No regressions identified | +5 | Test results or code analysis |
Total possible: 100+ (normalize to percentage)
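The rubric and the threshold table can be sketched as a small scoring function. This is an illustrative reading of the tables, assuming "normalize" simply means capping the raw total at 100:

```typescript
// One flag or count per rubric row; point values match the table above.
interface ProofEvidence {
  rootCauseWithFileLine: boolean;   // +25
  fixMechanismBeforeAfter: boolean; // +20
  patternPrecedent: boolean;        // +15
  frameworkConfirmed: boolean;      // +10
  edgeCasesChecked: number;         // +5 each
  sideEffectsAssessed: boolean;     // +10
  noRegressions: boolean;           // +5
}

function confidence(e: ProofEvidence): number {
  let score = 0;
  if (e.rootCauseWithFileLine) score += 25;
  if (e.fixMechanismBeforeAfter) score += 20;
  if (e.patternPrecedent) score += 15;
  if (e.frameworkConfirmed) score += 10;
  score += 5 * e.edgeCasesChecked;
  if (e.sideEffectsAssessed) score += 10;
  if (e.noRegressions) score += 5;
  return Math.min(score, 100); // assumed normalization: cap at 100%
}

// Threshold bands from the score table.
function action(pct: number): string {
  if (pct >= 95) return "Ship it";
  if (pct >= 80) return "Ship with caveats noted";
  if (pct >= 60) return "Flag to user";
  return "BLOCK";
}
```

With every criterion verified and three edge cases checked, the raw total is exactly 100 and the verdict is "Ship it"; dropping the pattern-precedent row lands at 85, "Ship with caveats noted".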
After individual proof traces, perform cross-change verification:
[IMPORTANT] Database Performance Protocol (MANDATORY):
- Paging Required — ALL list/collection queries MUST ATTENTION use pagination. NEVER load all records into memory. Verify: no unbounded `GetAll()`, `ToList()`, or `Find()` without `Skip`/`Take` or cursor-based paging.
- Index Required — ALL query filter fields, foreign keys, and sort columns MUST ATTENTION have database indexes configured. Verify: entity expressions match index field order, database collections have index management methods, migrations include indexes for WHERE/JOIN/ORDER BY columns.
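A minimal sketch of the paging rule, using an in-memory array in place of a real repository. The helper name and the page-size cap are assumptions for illustration, not project API:

```typescript
interface Page<T> {
  items: T[];
  total: number;
}

// Callers must pass skip/take, so an unbounded "load everything" call
// cannot be expressed through this helper.
function paginate<T>(records: T[], skip: number, take: number): Page<T> {
  // Clamp take so a single request can never pull the whole table.
  const MAX_PAGE_SIZE = 100;
  const size = Math.min(Math.max(take, 1), MAX_PAGE_SIZE);
  return { items: records.slice(skip, skip + size), total: records.length };
}

const all = Array.from({ length: 250 }, (_, i) => i);
const page = paginate(all, 100, 1000); // oversized take is clamped to 100 items
```

The same shape applies to `Skip`/`Take` in a real query provider: the verification step is checking that no call site bypasses the bounded path.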
Produce a summary verdict:
FIX VERIFICATION VERDICT
━━━━━━━━━━━━━━━━━━━━━━━
Overall Confidence: [X%]
Changes Summary:
#1: [description] — [X%] ✅/⚠️/❌
#2: [description] — [X%] ✅/⚠️/❌
#N: [description] — [X%] ✅/⚠️/❌
Symbols: ✅ ≥80% (ship) | ⚠️ 60-79% (flag) | ❌ <60% (block)
Remaining Risks:
- [risk 1]: [likelihood] × [impact] — [mitigation]
- [risk 2]: [likelihood] × [impact] — [mitigation]
Verification Method:
- [Manual testing required? Which scenarios?]
- [Automated tests cover this? Which tests?]
- [Additional monitoring needed post-deploy?]
Recommendation: [SHIP / SHIP WITH CAVEATS / INVESTIGATE FURTHER / BLOCK]
PROOF TRACE — Change #1: Move catchError inside switchMap
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
SYMPTOM:
→ UI doesn't refresh after assigning PIC, Job Opening, or changing stage
TRIGGER PATH:
1. candidate-quick-card-v2.component.ts:445 — User clicks "Assign PIC"
2. candidate-card.container.component.ts:892 — onPersonInChargeChange($event)
3. candidate-card.effect.ts:275 — SavePersonInCharge effect
4. candidate-card.effect.ts:284 — dispatches LoadCandidateDetailsAction
5. candidate-card.effect.ts:48 ← EFFECT IS DEAD — never processes the action
ROOT CAUSE:
→ catchError at outer pipe level (effect.ts:64) causes effect completion on ANY error
→ Evidence: effect.ts:43-69 shows catchError OUTSIDE switchMap
→ Evidence: ngrx-effects.js:156-165 confirms defaultEffectsErrorHandler
only catches errors, not completions
FIX MECHANISM:
→ Move catchError INSIDE switchMap so errors are caught per-request
→ Before: effect.ts:64 — catchError at outer pipe → effect COMPLETES → DEAD
→ After: effect.ts:52 — catchError inside switchMap → inner obs completes → outer SURVIVES
WHY THIS FIX IS CORRECT:
→ RxJS: catchError inside switchMap catches per-emission, outer stream continues
→ Pattern precedent: effect.ts:120 (moveApplicationToNextState) uses same inner pattern
→ Framework: NgRx effects auto-resubscribe on ERROR but NOT on COMPLETION
EDGE CASES:
→ 403 Forbidden: verified — returns SetCandidateDetails with isAllowDisplayed=false
→ Network timeout: verified — returns EMPTY, effect survives
→ Multiple rapid requests: verified — switchMap cancels previous (unchanged)
SIDE EFFECTS:
→ None — same error handling logic, only scope changed
CONFIDENCE: 95%
Verified: root cause, fix mechanism, pattern precedent, framework source, all edge cases
Not verified: behavior under specific proxy/auth middleware errors (very unlikely)
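The mechanism in this trace can be sketched without RxJS. The loops below stand in for the outer effect stream: they show why the scope of the error handler decides whether the "effect" survives. This is an analogy, not the actual NgRx code:

```typescript
// Catch OUTSIDE the loop: the first error terminates the whole loop,
// which models the outer-pipe catchError completing (killing) the effect.
function runOuterCatch(actions: string[]): string[] {
  const handled: string[] = [];
  try {
    for (const a of actions) {
      if (a === "fail") throw new Error("request failed");
      handled.push(a);
    }
  } catch {
    // the loop (effect) has already terminated; it never resumes
  }
  return handled;
}

// Catch INSIDE the loop body: each "request" recovers individually,
// modeling catchError inside switchMap. The outer loop survives.
function runInnerCatch(actions: string[]): string[] {
  const handled: string[] = [];
  for (const a of actions) {
    try {
      if (a === "fail") throw new Error("request failed");
      handled.push(a);
    } catch {
      handled.push("error-action"); // per-request recovery
    }
  }
  return handled;
}
```

With actions `["a", "fail", "b"]`, the outer-catch version never processes `"b"`, while the inner-catch version handles all three. That is exactly the dead-effect symptom versus the fixed behavior.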
Run `python .claude/scripts/code_graph trace <file> --direction downstream --json` to prove the fix doesn't break downstream.
If .code-graph/graph.db exists, enhance analysis with structural queries:
- `python .claude/scripts/code_graph query tests_for <function> --json`
- `python .claude/scripts/code_graph query callers_of <function> --json`
- `python .claude/scripts/code_graph batch-query file1 file2 --json`

When the graph DB is available, use trace to PROVE the fix doesn't break downstream consumers:
- `python .claude/scripts/code_graph trace <fixed-file> --direction downstream --json` — verify all downstream consumers, event handlers, and bus message listeners are unaffected
- `python .claude/scripts/code_graph trace <fixed-file> --direction both --json` — full context: what triggered the bug (upstream) + what the fix affects (downstream)

This skill is the mandatory verification gate between $fix and $code-simplifier in fix workflows.
Workflow position:
... → $fix → $prove-fix → $code-simplifier → $review-changes → ...
If proof trace reveals issues:
- Return to the $debug-investigate or $fix step

$ARGUMENTS
MANDATORY IMPORTANT MUST ATTENTION — NO EXCEPTIONS: If you are NOT already in a workflow, you MUST ATTENTION use a direct user question to ask the user. Do NOT judge task complexity or decide this is "simple enough to skip" — the user decides whether to use a workflow, not you:
- Activate the `bugfix` workflow (Recommended) — scout → investigate → debug → plan → fix → prove-fix → review → test
- Execute `$prove-fix` directly — run this skill standalone
MANDATORY IMPORTANT MUST ATTENTION — NO EXCEPTIONS after completing this skill, you MUST ATTENTION use a direct user question to present these options. Do NOT skip because the task seems "simple" or "obvious" — the user decides:
[IMPORTANT] Use task tracking to break ALL work into small tasks BEFORE starting — including tasks for each file read. This prevents context loss from long files. For simple tasks, AI MUST ATTENTION ask user whether to skip.
Prerequisites: MUST ATTENTION READ before executing:
- docs/project-reference/domain-entities-reference.md — Domain entity catalog, relationships, cross-service sync (read when task involves business entities/models; read directly when relevant — do not rely on hook-injected conversation text)

External Memory: For complex or lengthy work (research, analysis, scan, review), write intermediate findings and final results to a report file in `plans/reports/` — prevents context loss and serves as a deliverable.
Evidence Gate: MANDATORY IMPORTANT MUST ATTENTION — every claim, finding, and recommendation requires `file:line` proof or traced evidence with a confidence percentage (>80% to act; <80% must verify first).
UI System Context — For ANY task touching `.ts`, `.html`, `.scss`, or `.css` files, MUST ATTENTION READ before implementing:
- docs/project-reference/frontend-patterns-reference.md — component base classes, stores, forms
- docs/project-reference/scss-styling-guide.md — BEM methodology, SCSS variables, mixins, responsive
- docs/project-reference/design-system/README.md — design tokens, component inventory, icons

Reference docs/project-config.json for project-specific paths.
Graph-Assisted Investigation — MANDATORY when `.code-graph/graph.db` exists. HARD-GATE: MUST ATTENTION run at least ONE graph command on key files before concluding any investigation.
Pattern: Grep finds files → `trace --direction both` reveals full system flow → Grep verifies details
| Task | Minimum Graph Action |
|---|---|
| Investigation/Scout | `trace --direction both` on 2-3 entry files |
| Fix/Debug | `callers_of` on buggy function + `tests_for` |
| Feature/Enhancement | `connections` on files to be modified |
| Code Review | `tests_for` on changed functions |
| Blast Radius | `trace --direction downstream` |

CLI: `python .claude/scripts/code_graph {command} --json`. Use `--node-mode file` first (10-30x less noise), then `--node-mode function` for detail.
Critical Thinking Mindset — Apply critical thinking and sequential thinking. Every claim needs traced proof; confidence must exceed 80% to act. Anti-hallucination: never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence. Certainty without evidence is the root of all hallucination.
Understand Code First — HARD-GATE: Do NOT write, plan, or fix until you READ existing code.
- Search 3+ similar patterns (grep/glob) — cite `file:line` evidence
- Read existing files in target area — understand structure, base classes, conventions
- Run `python .claude/scripts/code_graph trace <file> --direction both --json` when `.code-graph/graph.db` exists
- Map dependencies via `connections` or `callers_of` — know what depends on your target
- Write investigation to `.ai/workspace/analysis/` for non-trivial tasks (3+ files)
- Re-read the analysis file before implementing — never work from memory alone
- NEVER invent new patterns when existing ones work — match exactly or document deviation
BLOCKED until:
- [ ] Read target files
- [ ] Grep 3+ patterns
- [ ] Graph trace (if graph.db exists)
- [ ] Assumptions verified with evidence
Fix-Layer Accountability — NEVER fix at the crash site. Trace the full flow, fix at the owning layer.
AI default behavior: see error at Place A → fix Place A. This is WRONG. The crash site is a SYMPTOM, not the cause.
MANDATORY before ANY fix:
- Trace full data flow — Map the complete path from data origin to crash site across ALL layers (storage → backend → API → frontend → UI). Identify where the bad state ENTERS, not where it CRASHES.
- Identify the invariant owner — Which layer's contract guarantees this value is valid? That layer is responsible. Fix at the LOWEST layer that owns the invariant — not the highest layer that consumes it.
- One fix, maximum protection — Ask: "If I fix here, does it protect ALL downstream consumers with ONE change?" If fix requires touching 3+ files with defensive checks, you are at the wrong layer — go lower.
- Verify no bypass paths — Confirm all data flows through the fix point. Check for: direct construction skipping factories, clone/spread without re-validation, raw data not wrapped in domain models, mutations outside the model layer.
BLOCKED until:
- [ ] Full data flow traced (origin → crash)
- [ ] Invariant owner identified with `file:line` evidence
- [ ] All access sites audited (grep count)
- [ ] Fix layer justified (lowest layer that protects most consumers)

Anti-patterns (REJECT these):
- "Fix it where it crashes" — Crash site ≠ cause site. Trace upstream.
- "Add defensive checks at every consumer" — Scattered defense = wrong layer. One authoritative fix > many scattered guards.
- "Both fix is safer" — Pick ONE authoritative layer. Redundant checks across layers send mixed signals about who owns the invariant.
AI Mistake Prevention — Failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace the full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just the happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether the bug is in the caller (wrong data) or the callee (wrong handling). Fix at the responsible layer — never patch the symptom site.
- Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging — resist the nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug; don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding — don't pick silently. If a request has multiple interpretations, present each with an effort estimate and ask. Never assume the all-records, file-based, or more complex path.
IMPORTANT MUST ATTENTION search 3+ existing patterns and read code BEFORE any modification. Run graph trace when graph.db exists.
IMPORTANT MUST ATTENTION read frontend-patterns-reference, scss-styling-guide, design-system/README before any UI change.
IMPORTANT MUST ATTENTION run at least ONE graph command on key files when graph.db exists. Pattern: grep → graph trace → grep verify.
IMPORTANT MUST ATTENTION trace full data flow and fix at the owning layer, not the crash site. Audit all access sites before adding `?.`.
MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact.
MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction.
IMPORTANT MUST ATTENTION follow declared step order for this skill; NEVER skip, reorder, or merge steps without explicit user approval
IMPORTANT MUST ATTENTION for every step/sub-skill call: set in_progress before execution, set completed after execution
IMPORTANT MUST ATTENTION every skipped step MUST include explicit reason; every completed step MUST include concise evidence
IMPORTANT MUST ATTENTION if Task tools unavailable, maintain an equivalent step-by-step plan tracker with synchronized statuses
MANDATORY IMPORTANT MUST ATTENTION break work into small todo tasks using task tracking BEFORE starting.
MANDATORY IMPORTANT MUST ATTENTION validate decisions with the user via a direct user question — never auto-decide.
MANDATORY IMPORTANT MUST ATTENTION add a final review todo task to verify work quality.
MANDATORY IMPORTANT MUST ATTENTION READ the following files before starting:
[TASK-PLANNING] Before acting, analyze task scope and systematically break it into small todo tasks and sub-tasks using task tracking.
[IMPORTANT] Analyze how big the task is and break it into many small todo tasks systematically before starting — this is very important.
Source: .claude/hooks/lib/prompt-injections.cjs + .claude/.ck.json
Use `$workflow-start <workflowId>` for standard workflows; sequence custom steps manually otherwise.

[CRITICAL] Hard-won project debugging/architecture rules. MUST ATTENTION apply BEFORE forming a hypothesis or writing code.
Goal: Prevent recurrence of known failure patterns — debugging, architecture, naming, AI orchestration, environment.
Top Rules (apply always):
- `ExecuteInjectScopedAsync` for parallel async + repo/UoW — NEVER `ExecuteUowTask`
- Verify the python alias (`where python`/`where py`) — NEVER assume `python`/`python3` resolves

Details:
- Parallel async + repo/UoW: use `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask`. `ExecuteUowTask` creates a new UoW but reuses the outer DI scope (same DbContext) — parallel iterations sharing a non-thread-safe DbContext silently corrupt data. `ExecuteInjectScopedAsync` creates a new UoW + new DI scope (fresh repo per iteration).
- Event bus ownership: the owning service defines its events (e.g., `AccountUserEntityEventBusMessage` = Accounts owns). Core services (Accounts, Communication) are leaders. Feature services (Growth, Talents) sending to core MUST use `{CoreServiceName}...RequestBusMessage` — never define their own event for core to consume.
- Naming: a name like `HrManagerOrHrOrPayroll` names set members, not what it guards. Add a role → rename = broken abstraction. Rule: names express DOES/GUARDS, not CONTAINS. Test: does adding/removing a member force a rename? YES = content-driven = bad → rename to purpose (e.g., `HrOperationsAccessPolicy`). Nuance: "Or" is fine in behavioral idioms (`FirstOrDefault`, `SuccessOrThrow`) — it expresses what HAPPENS, not membership.
- Never assume `python`/`python3` resolves — verify the alias first. Python may not be in bash PATH under those names. Check: `where python` / `where py`. Prefer `py` (Windows Python Launcher) for one-liners, `node` if a JS alternative exists.

Test-specific lessons → docs/project-reference/integration-test-reference.md, Lessons Learned section. Production-code anti-patterns → docs/project-reference/backend-patterns-reference.md, Anti-Patterns section. Generic debugging/refactoring reminders → System Lessons in .claude/hooks/lib/prompt-injections.cjs.
- `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask` (shared DbContext = silent data corruption)
- Feature → core messaging uses `{CoreServiceName}...RequestBusMessage`
- Never assume `python`/`python3` resolves — run `where python`/`where py` first; use the `py` launcher or `node`

Break work into small tasks (task tracking) before starting. Add final task: "Analyze AI mistakes & lessons learned".
Extract lessons — ROOT CAUSE ONLY, not symptom fixes:
- Ask: "Would $code-review/$code-simplifier/$security/$lint catch this?" — Yes → improve that review skill instead.
- Otherwise record the root-cause lesson via $learn.
[TASK-PLANNING] [MANDATORY] BEFORE executing any workflow or skill step, create/update task tracking for all planned steps, then keep it synchronized as each step starts/completes.