---
name: autofixing-and-escalating
description: Classifies issues from external sources by clarity of correctness, auto-fixes obvious ones, and escalates ambiguous ones with rationale and recommendations. Activates automatically whenever 2+ actionable items from an external source appear — linter output, review comments, security scan results, test failures, audit findings, or any batch of issues that were not generated by the current conversation. The user should never have to manually sort through obvious vs ambiguous items from tools or reviewers.
---
# Autofixing and Escalating

## Overview
This skill processes issues from external sources — tools, reviewers, scanners — not suggestions Claude generates itself. When Claude's own analysis produces recommendations, present them directly without the classification ceremony.
## Primary Classification: OBVIOUS vs AMBIGUOUS
The primary axis is clarity of correctness, not severity.
### OBVIOUS (Auto-fix)
Items where the source identified a specific issue AND the fix is objectively correct with no room for reasonable disagreement.
All four criteria must be met:
- The source explicitly identified a specific issue (not a general suggestion)
- The issue is objectively verifiable (not opinion-based)
- There is exactly one correct way to fix it (no design choices involved)
- No reasonable developer would disagree with the fix
Examples:
- Typos in variable names, comments, or docs
- Missing null/nil checks that will provably crash
- Textbook security flaws (SQL injection, XSS) with a clear fix
- Off-by-one errors that are provably wrong
- Unused imports/variables explicitly identified
- Wrong API usage per official documentation
- Syntax errors or broken references
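The four criteria above form a strict conjunction: failing any one of them makes the item AMBIGUOUS. A minimal sketch of that check (the `Item` shape and field names are illustrative assumptions, not part of any real tool):

```python
from dataclasses import dataclass

@dataclass
class Item:
    """One issue from an external source (hypothetical shape)."""
    specific: bool         # source named a concrete issue, not a general suggestion
    verifiable: bool       # objectively checkable, not opinion-based
    single_fix: bool       # exactly one correct fix, no design choices
    uncontroversial: bool  # no reasonable developer would disagree

def is_obvious(item: Item) -> bool:
    # All four criteria must hold; any False means escalate as AMBIGUOUS.
    return all([item.specific, item.verifiable,
                item.single_fix, item.uncontroversial])

typo = Item(specific=True, verifiable=True, single_fix=True, uncontroversial=True)
refactor = Item(specific=True, verifiable=False, single_fix=False, uncontroversial=False)
```

A typo passes all four gates and is auto-fixed; a suggested refactor fails the verifiability gate alone and is escalated.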
### AMBIGUOUS (Escalate to user)
Items where there is room for interpretation, trade-offs, or legitimate disagreement.
Examples:
- "Consider using X pattern instead of Y" — architectural preference
- "This could be more performant with..." — trade-off involved
- "Maybe extract this into a separate function" — design choice
- Style preferences not enforced by tooling
- Suggestions requiring significant refactoring
- Performance optimizations with readability trade-offs
- Alternative approaches where both are valid
- Changes affecting public API surface
- Suggestions involving new dependencies
### ALWAYS AMBIGUOUS (Never auto-fix)
Classify as AMBIGUOUS regardless of apparent clarity:
- Source used hedging language: "might", "could", "consider", "maybe", "what about"
- Fix requires changing more than ~10 lines
- Fix has multiple valid approaches
- Involves architectural or design decisions
- Affects public API or external contracts
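The hedging-language rule is the most mechanical of these and can be approximated with a simple word scan. A sketch, assuming plain-text review comments (the word list is illustrative, not exhaustive):

```python
import re

# Hedging markers from the rule above; extend as needed.
HEDGES = {"might", "could", "consider", "maybe", "what about", "perhaps"}

def is_hedged(comment: str) -> bool:
    """Return True if the comment uses hedging language (forces AMBIGUOUS)."""
    text = comment.lower()
    for hedge in HEDGES:
        if " " in hedge:
            # Multi-word phrases: plain substring check.
            if hedge in text:
                return True
        elif re.search(rf"\b{re.escape(hedge)}\b", text):
            # Single words: word-boundary match avoids false hits
            # inside longer words.
            return True
    return False
```

A comment like "Consider using a map here" is forced to AMBIGUOUS even though the fix might look mechanical; "Unused import `os` on line 3" carries no hedge and proceeds to the four-criteria check.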
### Safety Rule
When in doubt, classify as AMBIGUOUS. It is always better to discuss than to silently apply a wrong fix.
### ALWAYS SKIP (Never process)
- Items with resolution markers: checkmarks, "resolved", "fixed", "applied"
- Items already addressed in a prior pass
- Duplicate items (process once only)
- Purely informational items with no actionable change
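The skip rules amount to a pre-filter run before classification: drop resolved items, then de-duplicate. A rough sketch (the marker list is an assumption; real resolution markers depend on the source):

```python
RESOLUTION_MARKERS = ("✓", "resolved", "fixed", "applied")

def filter_actionable(items: list[str]) -> list[str]:
    """Drop resolved and duplicate items before classification."""
    seen: set[str] = set()
    actionable = []
    for item in items:
        lowered = item.lower()
        if any(marker in lowered for marker in RESOLUTION_MARKERS):
            continue  # carries a resolution marker: already addressed
        if lowered in seen:
            continue  # duplicate: process once only
        seen.add(lowered)
        actionable.append(item)
    return actionable
```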
See reference/classification.md for detailed examples, edge cases, and a decision tree for borderline items.
## Secondary Classification: Severity (AMBIGUOUS items only)
Within AMBIGUOUS items, assign severity for grouping and sort order:
### CRITICAL
- Security: auth bypass, sensitive data exposure, injection vulnerabilities
- Data Loss: destructive operations, corruption risks
- Breaking Bugs: nil pointer dereferences, crashes from type errors, unhandled exceptions
### MAJOR
- Performance: N+1 queries, memory leaks, missing indexes
- Significant Bugs: wrong calculations, race conditions
- Resource Issues: file handle leaks, connection pool exhaustion
### MINOR
- Code Quality: naming, method extraction, DRY violations
- Style: formatting, code organization
- Documentation: missing comments, unclear naming
- Speculative: optional improvements
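Since severity exists only for grouping and sort order, an ordered enum is enough; a sketch with hypothetical findings:

```python
from enum import IntEnum

class Severity(IntEnum):
    # Lower value sorts first when presenting AMBIGUOUS items.
    CRITICAL = 0
    MAJOR = 1
    MINOR = 2

findings = [
    ("extract helper method", Severity.MINOR),
    ("SQL injection in search endpoint", Severity.CRITICAL),
    ("N+1 query in dashboard", Severity.MAJOR),
]
# Present CRITICAL first, then MAJOR, then MINOR.
ordered = sorted(findings, key=lambda f: f[1])
```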
## Resolution Workflow
Five phases — classify, report, discuss, execute, summarize. All decisions complete before any code changes. Read reference/resolution.md for the full workflow, formats, and examples.
| Phase | What happens |
|---|---|
| 1. Classify | Classify every item as OBVIOUS / AMBIGUOUS / SKIP. Register each actionable item as a task — do not execute yet. |
| 2. Report | Present the full classification to the user: OBVIOUS items queued, AMBIGUOUS items with analysis. |
| 3. Discuss | Resolve AMBIGUOUS items (already presented in Phase 2). Offer: apply all / review individually / skip all. Update task status per user decision. |
| 4. Execute | Batch-execute all approved tasks (OBVIOUS + user-approved AMBIGUOUS) in parallel via subagents grouped by file. |
| 5. Summary | Report results — applied, failed, skipped. |
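Phase 4's batch execution, grouped by file with continue-on-failure, might look like this in outline (the task shape and the `apply` callback are hypothetical):

```python
from collections import defaultdict

def execute_batch(tasks, apply):
    """Run approved tasks grouped by file; one failure never halts the rest."""
    by_file = defaultdict(list)
    for task in tasks:
        by_file[task["file"]].append(task)

    applied, failed = [], []
    for path, group in by_file.items():
        for task in group:
            try:
                apply(task)           # hypothetical fix-application callback
                applied.append(task)
            except Exception as exc:  # record for the summary, keep going
                failed.append((task, str(exc)))
    return applied, failed
```

Grouping by file lets independent groups run in parallel subagents while keeping edits to any single file serialized.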
## Language Detection
Detect the user's preferred language and use it for all communication.
Detection priority:
1. User's current messages — what language is the user writing in?
2. Project context — check CLAUDE.md and README.md for language patterns
3. Git history — the language of recent commit messages
4. Default to English
Apply detected language to: conversational messages, reports, summaries, error messages.
Always keep in English: code examples, commands, file paths, technical API calls.
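The detection priority is a first-match fallback chain; a minimal sketch, where the detector callables are placeholders for whatever signal each source actually provides:

```python
from typing import Callable, Optional

def detect_language(sources: list[Callable[[], Optional[str]]]) -> str:
    """Return the first language any source can determine, else English."""
    for source in sources:
        lang = source()
        if lang is not None:
            return lang
    return "en"

# Priority order: user messages, project docs, git history, then the default.
lang = detect_language([
    lambda: None,  # user messages: no clear signal in this example
    lambda: "ja",  # CLAUDE.md / README.md written in Japanese
    lambda: None,  # git history is never consulted once a match is found
])
```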
## Common Mistakes
### Classifying Ambiguous Items as Obvious
Problem: Auto-applying a fix that had trade-offs the user should have weighed.
Fix: Apply the four OBVIOUS criteria strictly. When in doubt, classify as AMBIGUOUS.
### Not Explaining Why Something Is Ambiguous
Problem: User sees an ambiguous item but doesn't understand what makes it debatable.
Fix: Always include "Why ambiguous" with each item — what's the trade-off or uncertainty?
### Executing Before All Decisions Are Made
Problem: Applying fixes during classification instead of after — user loses oversight of the full picture.
Fix: Register every actionable item as a task first. Execute only after all classifications and user decisions are finalized.
### Stopping on a Single Failure
Problem: Workflow halts when one task fails during batch execution.
Fix: Subagents continue with remaining tasks in the same file group. Failed tasks are reported in the summary with what went wrong.
### Treating All Sources Equally
Problem: Missing context about who or what raised the issue.
Fix: Include source attribution with each item — the origin often matters for deciding how to respond.