| Field | Value |
|---|---|
| name | critique |
| description | Plan Critique & Auto-Revision |
| version | 1.0.0 |
| scope | enterprise |
| owner | captain |
| status | stable |
Invocation: As your first action, call `crane_skill_invoked(skill_name: "critique")`. This is non-blocking - if the call fails, log the warning and continue. Usage data drives /skill-audit.
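The non-blocking rule above can be sketched as a small wrapper. This is a minimal illustration, not the real tool call: `invoke_skill_telemetry` is a hypothetical helper, and the callable it receives stands in for the `crane_skill_invoked` tool.

```python
import logging

def invoke_skill_telemetry(invoke) -> None:
    """Fire the usage ping; a telemetry failure must never block the skill."""
    try:
        invoke(skill_name="critique")  # stand-in for the crane_skill_invoked tool call
    except Exception as exc:
        logging.warning("crane_skill_invoked failed: %s", exc)  # log and continue
```

The point of the pattern is that the skill proceeds identically whether the ping succeeds or raises.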
This command spawns critic subagents to challenge the current plan or approach in conversation, then auto-revises based on the critique.
No files required - works against whatever plan, proposal, or approach is in the current conversation context.
/critique [agents]
agents - number of critic agents to spawn in parallel (default: 1). More agents = more perspectives, but slower and more expensive.
Parse the argument: if $ARGUMENTS is empty or not a number, default to 1. If it's a number, use that value.
Store as AGENT_COUNT.
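The parsing rule above is simple enough to sketch directly. `parse_agent_count` is a hypothetical name for illustration; clamping zero up to 1 is an assumption, since the spec only defines the empty and non-numeric cases.

```python
def parse_agent_count(arguments: str, default: int = 1) -> int:
    """Return AGENT_COUNT from the raw $ARGUMENTS string."""
    stripped = arguments.strip()
    if not stripped.isdigit():
        return default  # empty or non-numeric falls back to the default
    return max(1, int(stripped))  # assumption: treat 0 as "at least one critic"
```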
Scan the current conversation for the most recent plan, approach, or proposal. This could be a step-by-step plan, a design or architecture proposal, or an approach described in prose.
If no plan is identifiable, stop and tell the user:
"I don't see a plan or proposal in our current conversation to critique. Describe your approach first, then run /critique."
If a plan IS found, capture it as PLAN_TEXT and display a brief confirmation:
Critiquing: {one-line summary of what's being critiqued}
Agents: {AGENT_COUNT}
Do NOT ask for confirmation - proceed immediately.
Select perspectives based on AGENT_COUNT. Always assign from the top of this list:
| # | Perspective | Focus |
|---|---|---|
| 1 | Devil's Advocate | Flaws, risks, edge cases, false assumptions, failure modes. "What could go wrong?" |
| 2 | Simplifier | Over-engineering, unnecessary complexity, simpler alternatives. "Is there a simpler way?" |
| 3 | Pragmatist | Feasibility, hidden costs, timeline realism, operational burden. "Will this actually work in practice?" |
| 4 | Product Strategist | Scope, ROI, opportunity cost, venture priority fit. "Is this the right thing to build right now, vs. what's next?" |
| 5 | Contrarian | Fundamentally different approaches, paradigm challenges. "What if the entire framing is wrong?" |
| 6 | User/Stakeholder Advocate | End-user impact, UX consequences, stakeholder concerns. "How does this affect the people who use it?" |
| 7 | Security & Reliability | Failure modes, data integrity, security surface, recovery paths. "How does this break?" |
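The "assign from the top" rule reduces to a list slice. A minimal sketch, with `select_perspectives` as a hypothetical helper name:

```python
PERSPECTIVES = [
    "Devil's Advocate",
    "Simplifier",
    "Pragmatist",
    "Product Strategist",
    "Contrarian",
    "User/Stakeholder Advocate",
    "Security & Reliability",
]

def select_perspectives(agent_count: int) -> list[str]:
    """Assign perspectives from the top of the table, capped at its length."""
    return PERSPECTIVES[:agent_count]
```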
Launch AGENT_COUNT agents in a single message using the Task tool (subagent_type: general-purpose).
CRITICAL: All Task tool calls MUST be in a single message to run in true parallel.
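The Task tool handles the actual parallelism when all calls share one message; the sketch below is only a rough analogue of that fan-out pattern in plain Python. `run_critic` is a hypothetical stand-in for a single Task call.

```python
from concurrent.futures import ThreadPoolExecutor

def run_critic(perspective: str, plan_text: str) -> str:
    # Stand-in for one Task tool call; returns the critic's markdown critique.
    return f"## {perspective} Critique\n(critique body for: {plan_text})"

def launch_critics(perspectives: list[str], plan_text: str) -> list[str]:
    """Dispatch every critic at once and collect results in assignment order."""
    with ThreadPoolExecutor(max_workers=len(perspectives)) as pool:
        futures = [pool.submit(run_critic, p, plan_text) for p in perspectives]
        return [f.result() for f in futures]
```

Submitting all futures before collecting any result mirrors the "single message" rule: no critic waits on another.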
Each agent receives PLAN_TEXT and its assigned perspective. Agent prompt template:
You are a {PERSPECTIVE_NAME} critic reviewing a plan. Your job is to find weaknesses and suggest improvements from your specific angle.
## The Plan Being Critiqued
{PLAN_TEXT}
## Context
{BRIEF_SUMMARY_OF_WHAT_PROBLEM_IS_BEING_SOLVED_AND_KEY_CONSTRAINTS}
## Your Perspective: {PERSPECTIVE_NAME}
{PERSPECTIVE_DESCRIPTION}
## Instructions
1. Start with `## {PERSPECTIVE_NAME} Critique`
2. **Strengths** (1-3 bullets): What's good about this plan? Acknowledge what works before attacking.
3. **Issues Found** (numbered list): Each issue must include:
- **The problem**: What's wrong or risky
- **Why it matters**: Impact if ignored
- **Suggested fix**: A concrete alternative or mitigation - not just "think about this more"
4. **Alternative Approach** (optional): If you see a fundamentally better path, describe it briefly. Only include this if the alternative is genuinely superior, not just different.
5. **Verdict**: One line - "Proceed as-is", "Proceed with fixes", or "Reconsider approach"
CONSTRAINTS:
- Be specific and concrete. "This might have issues" is useless. "The database query in step 3 will table-scan because there's no index on user_id" is useful.
- Every issue MUST have a suggested fix. Critique without solutions is just complaining.
- Don't pad. If the plan is solid from your perspective, say so in 2-3 lines and move on. A short critique of a good plan is more valuable than a long critique full of nitpicks.
- Prioritize. If you find 10 issues, lead with the 3 that matter most.
- Do NOT write files. Return your critique as your final response message.
Wait for all agents to complete.
If multiple critics ran, synthesize their output before revising:
## Critique Summary ({AGENT_COUNT} perspectives)
### Consensus Issues (flagged by 2+ critics)
- ...
### Unique Issues
- ...
### Contradictions
- ...
### Verdicts
- Devil's Advocate: Proceed with fixes
- Simplifier: Reconsider approach
- ...
If AGENT_COUNT == 1, skip synthesis - use the single critic's output directly.
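The consensus/unique split above can be sketched as a grouping pass. This assumes issues have already been normalized to matching labels; real synthesis is a semantic judgment, not string equality, and `synthesize` is a hypothetical helper name.

```python
from collections import defaultdict

def synthesize(issues_by_critic: dict[str, list[str]]) -> dict[str, list[str]]:
    """Bucket issues into consensus (flagged by 2+ critics) and unique."""
    flagged_by = defaultdict(list)
    for critic, issues in issues_by_critic.items():
        for issue in issues:
            flagged_by[issue].append(critic)
    return {
        "consensus": [i for i, critics in flagged_by.items() if len(critics) >= 2],
        "unique": [i for i, critics in flagged_by.items() if len(critics) == 1],
    }
```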
Using the critique (or the synthesized summary), revise the plan: adopt the fixes that clearly improve it, and record any critiques you considered but did not adopt.
Present the revised plan clearly:
## Revised Plan
{THE_REVISED_PLAN}
### Changes Made
1. {What changed and why, referencing which critic triggered it}
2. ...
### Critiques Acknowledged but Not Adopted
1. {What was raised, why it wasn't adopted}
After presenting the revised plan, ask:
"Revised plan above. Want to proceed, run another round of critique, or adjust something?"
Do NOT automatically start implementing. Wait for the user.
Critic agents are spawned with `subagent_type: general-purpose` via the Task tool. Unlike /prd-review and /design-brief, critique is single-pass by design. If the user wants another round, they run /critique again - the revised plan is now the conversation context.