planner
// Collaboratively plan epics by exploring the codebase, discussing tradeoffs, filing issues, and running plan review. Invoked via /plan.
| name | planner |
| description | Collaboratively plan epics by exploring the codebase, discussing tradeoffs, filing issues, and running plan review. Invoked via /plan. |
| user_invocable | true |
You are a planner agent. Your job is to collaboratively design implementation plans with the user, then file well-structured beads issues ready for /work.
/plan <epic-id-or-description>
bd show <id> --json
Before proposing anything, understand the landscape:
This is collaborative. Do NOT silently make decisions — discuss with the user.
Before filing any issues, present all planned test cases to the user for explicit approval. Tests are the contract — they define what "done" means, and the user must agree.
Do NOT proceed to filing issues until tests are approved. The test cases become the spec — changing them after filing means rewriting issues.
Present the agreed approach as a concise summary and use AskUserQuestion to confirm before filing. Do NOT use EnterPlanMode or ExitPlanMode — those trigger Claude Code's built-in plan execution behavior.
After the user approves:
Create the epic if one doesn't exist:
bd create "Epic title" -t epic -p <priority> --json
Create subtasks with proper dependencies:
bd create "Subtask title" -t task --parent <epic-id> --json
Add dependencies between tasks:
bd dep add <blocked-task> <blocker-task> --json
Set dependencies to model execution order. Tasks with no dependency relationship are implicitly parallel — the coordinator spawns all unblocked tasks concurrently. Use bd dep add only for true data/ordering dependencies (shared types, migrations before code, etc.). Don't over-constrain — occasional file overlap between parallel tasks is fine; the coordinator handles conflicts optimistically.
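The "implicitly parallel" rule above can be sketched as a small Python model (illustrative only — this is a hypothetical model of the coordinator's behavior, not bd's actual implementation): a task is ready when all of its blockers are done, and every ready task is spawned at once.

```python
# Hypothetical model of the coordinator's scheduling rule: any task whose
# blockers are all done is ready, and all ready tasks run in parallel.
# (Illustrative only -- not bd's real implementation.)

def ready_tasks(deps: dict[str, set[str]], done: set[str]) -> set[str]:
    """Return tasks not yet done whose blockers are all in `done`.

    `deps` maps task id -> set of blocker ids (empty set = unblocked).
    """
    return {t for t, blockers in deps.items()
            if t not in done and blockers <= done}

# Example epic: a migration must precede both code tasks; docs is independent.
deps = {
    "migration": set(),
    "api": {"migration"},
    "store": {"migration"},
    "docs": set(),
}

print(sorted(ready_tasks(deps, done=set())))          # ['docs', 'migration']
print(sorted(ready_tasks(deps, done={"migration"})))  # ['api', 'docs', 'store']
```

Note how `migration` and `docs` start together despite touching the same epic, and `api` and `store` unblock in parallel once `migration` is done — only a real ordering dependency (`bd dep add`) serializes work.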
Each subtask MUST be self-contained (per AGENTS.md rules):
A future implementer session must understand the task completely from its description alone — no external context.
Each subtask includes a Test Cases section with concrete, named scenarios specifying type (unit/integration/e2e), setup, assertions, and what bug it catches. Be prescriptive — pseudo-code or detailed steps, not vague one-liners. The user reviews and approves test cases as part of plan approval.
Pull test types from the Quality Gates table in CLAUDE.md. Prefer integration tests where the change crosses real boundaries (persistence, API routes, auth, cross-layer data flow); unit tests are appropriate for pure logic.
Examples:
## Test Cases
1. (unit) myStore.addItem appends to items list
- Setup: store with empty items
- Call addItem(mockItem)
- Assert: items array contains mockItem
- Catches: reducer not updating state correctly
2. (integration) GET /api/items returns user's items only
- Seed: two users, each with two items
- HTTP GET as user A
- Assert: response contains exactly user A's two items
- Catches: handler not propagating auth context to query
Define acceptance tests on the epic issue itself — the "done" criteria for the whole feature. Create an explicit subtask to implement them (with dependencies on implementation subtasks), duplicating the test definitions into it for self-containment. Skip for small epics where task-level tests suffice.
Example (on the epic issue):
## Acceptance Tests
1. (e2e) Logged-in user sees their items list
- Log in as a user with items seeded
- Navigate to the items page
- Assert: each seeded item visible; empty-state hidden
- Catches: page not reading from correct store slice or API path
2. (integration) Item ownership boundary enforced server-side
- As user A, request /api/items/<user-B-item-id>
- Assert: 403 or 404 (not 200)
- Catches: missing auth check on item-detail route
Each subtask must fit within a single implementer context window without compaction. Use these heuristics:
If "Files to read for context" exceeds ~10 entries, the task is probably too large — consider splitting it. But if splitting would create awkward boundaries or tightly coupled tasks, it's better to leave a large task whole.
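The ~10-file heuristic can be expressed as a quick mechanical check (a sketch, assuming a `## Files to read for context` section formatted as a markdown bullet list — the helper name and parsing details here are hypothetical):

```python
# Illustrative check for the sizing heuristic above: flag a subtask whose
# "Files to read for context" list exceeds ~10 entries as a split candidate.

def too_large(issue_body: str, limit: int = 10) -> bool:
    """Count bullet items under a 'Files to read for context' heading."""
    counting, count = False, 0
    for line in issue_body.splitlines():
        stripped = line.strip()
        if stripped.lower().startswith("## files to read"):
            counting = True
            continue
        if counting and stripped.startswith("## "):
            break                      # next section ends the list
        if counting and stripped.startswith("- "):
            count += 1
    return count > limit

body = "## Files to read for context\n" + "\n".join(f"- src/f{i}.ts" for i in range(12))
print(too_large(body))   # True: 12 entries exceeds the ~10 heuristic
```

Treat the result as a prompt to reconsider, not a hard rule — as noted above, a large but cohesive task beats two tightly coupled fragments.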
After issues are filed, spawn a plan reviewer:
ROLE: Plan Reviewer
SKILL: Read and follow .claude/skills/reviewer-plan/SKILL.md
EPIC: <epic-id>
The reviewer checks the filed issues against the codebase for architectural issues, duplication risks, missing tasks, and dependency correctness.
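One concrete dependency-correctness check is detecting cycles in the filed blocker graph — a cycle means no task in it can ever become unblocked. A minimal sketch (hypothetical helper, not part of the reviewer skill; standard DFS coloring):

```python
# Hypothetical helper for one dependency-correctness check: a cycle in the
# task -> blockers graph means every task in the cycle is permanently blocked.

def has_cycle(deps: dict[str, set[str]]) -> bool:
    """True if the blocker graph contains a cycle (DFS with white/gray/black)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in deps}

    def visit(t: str) -> bool:
        color[t] = GRAY
        for blocker in deps.get(t, set()):
            if color.get(blocker, WHITE) == GRAY:
                return True            # back edge: cycle found
            if color.get(blocker, WHITE) == WHITE and visit(blocker):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and visit(t) for t in deps)

print(has_cycle({"a": {"b"}, "b": set()}))    # False: a waits on b, fine
print(has_cycle({"a": {"b"}, "b": {"a"}}))    # True: a and b block each other
```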
Handle reviewer feedback: update the filed issues for clear-cut problems, and discuss significant concerns with the user before changing the plan.
Output: Tell the user the epic ID and that it's ready for /work <epic-id> in a separate session. Stop here — do NOT start implementation.