---
name: challenge
description: "Challenge, push back, play devil's advocate on AI output. Use when: challenge this, are you sure, push back, prove it, what if you're wrong, devil's advocate, stress test, poke holes, second opinion, sanity check, too confident, really?, question this decision. Subcommands: anchor (committed too fast), verify (facts wrong?), framing (wrong problem?), deep (full devil's advocate in separate context)."
allowed-tools: ["Read", "Glob", "Grep", "Agent", "AskUserQuestion"]
model: opus
context: main
argument-hint: "[anchor|verify|framing|deep] <target>"
user-invocable: true
cynefin-domain: complicated
cynefin-verb: analyze
---
# Challenge
Apply structured provocation patterns to force reconsideration of current work.
Target: $ARGUMENTS
## ⚠️ AskUserQuestion Guard
CRITICAL: After EVERY AskUserQuestion call, check whether the answers are empty/blank. Known Claude Code bug: outside Plan Mode, AskUserQuestion silently returns empty answers without showing the UI.
If answers are empty: DO NOT proceed with assumptions. Instead:
- Output: "⚠️ Questions didn't display (known Claude Code bug outside Plan Mode)."
- Present the options as a numbered text list and ask user to reply with their choice number.
- WAIT for user reply before continuing.
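The guard above can be sketched as a pure function. `fallback_prompt` and its return shape are illustrative only, not part of the Claude Code tool API:

```python
def fallback_prompt(answers: list[str], options: list[str]) -> tuple[bool, str]:
    """If the tool returned a usable answer, proceed with it; otherwise
    return a numbered text list to present to the user instead."""
    if answers and any(str(a).strip() for a in answers):
        return True, answers[0]
    # Bug path: answers came back empty, so fall back to a plain text menu.
    lines = ["⚠️ Questions didn't display (known Claude Code bug outside Plan Mode)."]
    lines += [f"{i}. {opt}" for i, opt in enumerate(options, 1)]
    lines.append("Reply with the number of your choice.")
    return False, "\n".join(lines)
```

The key property is that the empty-answer path never returns a guessed choice; it always hands control back to the user.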
## Dispatch
Parse first word of $ARGUMENTS as subcommand:
| Subcommand | Error Type | Protocol |
|---|---|---|
| anchor | Premature commitment / anchoring bias | Read protocols/anchor.md → execute |
| verify | Factual errors / hallucination | Read protocols/verify.md → execute |
| framing | Wrong problem / framing errors | Read protocols/framing.md → execute |
| deep | High stakes → all 9 patterns in fresh context | Spawn devil-advocate sub-agent via Agent |
## No-Subcommand Fallback
If no subcommand is detected:
AskUserQuestion: "What are you worried about with the current AI response?"
- A) Anchoring bias → AI committed too early to one approach
- B) Factual accuracy → claims may be wrong or hallucinated
- C) Wrong framing → solving the wrong problem
- D) High stakes → want all 9 patterns in fresh context (Devil's Advocate)
→ Dispatch to the matching subcommand based on the answer.
## Deep Subcommand
Spawn via Agent tool:
- subagent_type: dstoic:devil-advocate:devil-advocate
- prompt: target description + relevant file paths to read
- The agent runs ALL 9 patterns (anchor: Gatekeeper, Reset, Alt Approaches, Pre-mortem · verify: Proof Demand, CoVe, Fact Check List · framing: Socratic, Steelman) comprehensively in fresh context
- DO NOT pass parent conversation reasoning; fresh context is the point
## Thinking Transparency (applies to all subcommands)
For every finding, make reasoning explicit:
- Observation: What specifically in the target triggered this finding
- Technique family: Which challenge family (anchor/verify/framing) and named pattern (e.g., Gatekeeper, CoVe, Steelman); cite the mechanism from the reference.md pattern catalog
- Reasoning: Why this observation matters and what cognitive bias or error it reveals
- Confidence: How certain is this finding (High/Medium/Low) and what evidence supports that rating
## Output
All subcommands produce a Challenge Report (structured, not prose).
See reference.md for report format and pattern catalog.