---
name: plan-creator
description: Create detailed implementation plans through interactive research and iteration. Use this skill when needing to plan a feature, refactor, or bug fix before implementation. The skill orchestrates research agents (codebase-locator, codebase-analyzer, codebase-pattern-finder) to understand the codebase, then works collaboratively with the user to produce comprehensive technical specifications with phased implementation steps and success criteria.
---
This skill creates detailed implementation plans through an interactive, iterative process. It is designed to be skeptical, thorough, and collaborative, producing high-quality technical specifications by researching the codebase first, then working with the user to refine the approach.
Customise these paths and commands for your project:

- `plan_output_dir`: Where to save plans (default: `docs/plans/`)
- `plan_filename_format`: Template for plan filenames (default: `YYYY-MM-DD-description.md`)
- `sync_command`: Optional command to sync plans (e.g., `git add docs/plans`)
- `verification_commands`: Default verification commands (e.g., `make test`, `npm run lint`)
- `input_docs_dir`: Optional directory for ticket/requirement files (e.g., `docs/tickets/`)
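For instance, a project might configure the skill like this (the values are the illustrative defaults and examples listed above; adjust each for your project):

```
plan_output_dir: docs/plans/
plan_filename_format: YYYY-MM-DD-description.md
sync_command: git add docs/plans
verification_commands: make test, npm run lint
input_docs_dir: docs/tickets/
```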
When this skill is invoked, check whether parameters were provided. If no parameters were provided, respond with:
I'll help you create a detailed implementation plan. Let me start by understanding what we're building.
Please provide:
1. The task/ticket description (or reference to a ticket file)
2. Any relevant context, constraints, or specific requirements
3. Links to related research or previous implementations
I'll analyse this information and work with you to create a comprehensive plan.
Then wait for the user's input.
Follow these 5 steps to create the plan:
**Step 1: Context Gathering & Initial Analysis**

Read all mentioned files immediately and FULLY: ticket files (e.g., `docs/tickets/feature-123.md`), related research, and any linked documentation.

Spawn initial research tasks to gather context. Before asking the user any questions, use specialised agents to research in parallel:
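As a sketch, the initial dispatch might look like this (the agent names are the ones this skill orchestrates; the bracketed placeholders are filled in from the ticket):

```
Task 1 (codebase-locator): "Find all files related to [feature area]"
Task 2 (codebase-analyzer): "Explain how [existing component] works today, with file:line references"
Task 3 (codebase-pattern-finder): "Find existing implementations similar to [requested feature]"
```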
These agents will return relevant file locations, existing patterns to follow, and specific file:line references.
Read all files identified by the research tasks, analyse them to verify your understanding, then present that informed understanding along with focused questions:
Based on the ticket and my research of the codebase, I understand we need to [accurate summary].
I've found that:
- [Current implementation detail with file:line reference]
- [Relevant pattern or constraint discovered]
- [Potential complexity or edge case identified]
Questions that my research couldn't answer:
- [Specific technical question that requires human judgement]
- [Business logic clarification]
- [Design preference that affects implementation]
Only ask questions that you genuinely cannot answer through code investigation.
**Step 2: Research & Discovery**

After getting initial clarifications:
If the user corrects any misunderstanding, do not simply accept the correction: spawn new research tasks to verify the corrected information before proceeding.
Create a research todo list using TodoWrite to track exploration tasks, for example:
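An illustrative list for a hypothetical authentication task (the agent assignments are suggestions, not fixed rules):

```
- [ ] Locate all files related to [feature area] (codebase-locator)
- [ ] Analyse how the current implementation works (codebase-analyzer)
- [ ] Find similar past implementations to model (codebase-pattern-finder)
- [ ] Read related tickets and research documents
```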
Spawn parallel sub-tasks for comprehensive research: Create multiple Task agents to research different aspects concurrently. Use the right agent for each type of research:
For deeper investigation, use codebase-analyzer to understand how specific code works. For historical context, check prior plans and research documents (e.g., in `plan_output_dir` or `input_docs_dir`). Each agent knows how to scope its search and return specific file:line references.
Wait for ALL sub-tasks to complete before proceeding
Present findings and design options:
Based on my research, here's what I found:
**Current State:**
- [Key discovery about existing code]
- [Pattern or convention to follow]
**Design Options:**
1. [Option A] - [pros/cons]
2. [Option B] - [pros/cons]
**Open Questions:**
- [Technical uncertainty]
- [Design decision needed]
Which approach aligns best with your vision?
**Step 3: Plan Structure Development**

Once aligned on the approach:
Create initial plan outline:
Here's my proposed plan structure:
## Overview
[1-2 sentence summary]
## Implementation Phases:
1. [Phase name] - [what it accomplishes]
2. [Phase name] - [what it accomplishes]
3. [Phase name] - [what it accomplishes]
Does this phasing make sense? Should I adjust the order or granularity?
Get feedback on the structure before writing the details.
**Step 4: Detailed Plan Writing**

After structure approval:
Write the plan to the configured `plan_output_dir`, naming the file with the `plan_filename_format` template (`YYYY-MM-DD-description.md`), where the date is today's date and the description is a short kebab-case summary of the task.
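A minimal sketch of deriving such a filename, assuming a kebab-case slug taken from the task description:

```bash
# Illustrative only: "add-authentication" is a hypothetical slug for the task
slug="add-authentication"
plan_file="docs/plans/$(date +%Y-%m-%d)-${slug}.md"
echo "$plan_file"  # e.g. docs/plans/2025-10-27-add-authentication.md
```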
Examples: `2025-10-27-add-authentication.md`, `2025-10-27-refactor-api-layer.md`.

Use this template structure:
# [Feature/Task Name] Implementation Plan
## Overview
[Brief description of what we're implementing and why]
## Current State Analysis
[What exists now, what's missing, key constraints discovered]
## Desired End State
[A specification of the desired end state after this plan is complete, and how to verify it]
### Key Discoveries:
- [Important finding with file:line reference]
- [Pattern to follow]
- [Constraint to work within]
## What We're NOT Doing
[Explicitly list out-of-scope items to prevent scope creep]
## Implementation Approach
[High-level strategy and reasoning]
## Phase 1: [Descriptive Name]
### Overview
[What this phase accomplishes]
### Changes Required:
#### 1. [Component/File Group]
**File**: `path/to/file.ext`
**Changes**: [Summary of changes]
```[language]
// Specific code to add/modify
```
### Success Criteria:
#### Automated Verification:
- [ ] Migration applies cleanly: `make migrate`
- [ ] Unit tests pass: `make test-component`
- [ ] Type checking passes: `npm run typecheck`
- [ ] Linting passes: `make lint`
- [ ] Integration tests pass: `make test-integration`
#### Manual Verification:
- [ ] Feature works as expected when tested via UI
- [ ] Performance is acceptable under load
- [ ] Edge case handling verified manually
- [ ] No regressions in related features
**Implementation Note**: After completing this phase and confirming all automated verification passes, pause and wait for the user to confirm that manual testing succeeded before proceeding to the next phase.
---
## Phase 2: [Descriptive Name]
[Similar structure with both automated and manual success criteria...]
---
## Testing Strategy
### Unit Tests:
- [What to test]
- [Key edge cases]
### Integration Tests:
- [End-to-end scenarios]
### Manual Testing Steps:
1. [Specific step to verify feature]
2. [Another verification step]
3. [Edge case to test manually]
## Performance Considerations
[Any performance implications or optimisations needed]
## Migration Notes
[If applicable, how to handle existing data/systems]
## References
- Original ticket: `docs/tickets/feature-123.md`
- Related research: `docs/research/relevant-topic.md`
- Similar implementation: `[file:line]`
**Step 5: Sync and Review**

Sync the plan if a `sync_command` is configured, for example:
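A sketch assuming the example sync command from the configuration section; the commit step and message are hypothetical additions:

```bash
git add docs/plans
git commit -m "docs: add implementation plan"  # hypothetical follow-up commit
```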
Present the draft plan location:
I've created the initial implementation plan at:
`docs/plans/2025-10-27-feature-name.md`
Please review it and let me know:
- Are the phases properly scoped?
- Are the success criteria specific enough?
- Any technical details that need adjustment?
- Missing edge cases or considerations?
Iterate based on feedback: be ready to adjust phase scoping, technical details, success criteria, and edge-case coverage.
Continue refining until the user is satisfied
- **Be Skeptical**: Question vague requirements and verify assumptions against the actual code.
- **Be Interactive**: Get user buy-in at each step rather than writing the full plan in one pass.
- **Be Thorough**: Read referenced files completely and back key claims with file:line references.
- **Be Practical**: Favour incremental phases that can each be verified before moving on.
- **Track Progress**: Keep the TodoWrite list updated throughout research and writing.
- **No Open Questions in Final Plan**: Resolve every open question with the user before writing the final document; the plan must be complete and actionable.
Always separate success criteria into two categories:
1. **Automated Verification** (can be run by execution agents): e.g., `make test`, `npm run lint`.
2. **Manual Verification** (requires human testing): e.g., UI behaviour, performance, and error handling, as in the format example below.
Format example:
### Success Criteria:
#### Automated Verification:
- [ ] Database migration runs successfully: `make migrate`
- [ ] All unit tests pass: `go test ./...`
- [ ] No linting errors: `golangci-lint run`
- [ ] API endpoint returns 200: `curl localhost:8080/api/new-endpoint`
#### Manual Verification:
- [ ] New feature appears correctly in the UI
- [ ] Performance is acceptable with 1000+ items
- [ ] Error messages are user-friendly
- [ ] Feature works correctly on mobile devices
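As a sketch, an execution agent could chain the automated half of those checks so the phase fails fast; this assumes the Make and Go targets shown above actually exist in the project:

```bash
set -e  # abort on the first failing check
make migrate
go test ./...
golangci-lint run
# Assert the new endpoint answers 200
test "$(curl -s -o /dev/null -w '%{http_code}' localhost:8080/api/new-endpoint)" = "200"
```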
When spawning research sub-tasks, give each agent a specific, narrowly scoped question, run the tasks concurrently, and wait for all results. Example of spawning multiple tasks:
# Spawn these tasks concurrently:
Task 1: Research database schema
Task 2: Find API patterns
Task 3: Investigate UI components
Task 4: Check test patterns
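An individual sub-task prompt might read like this (illustrative wording; the bracketed parts come from the actual ticket):

```
Task (codebase-pattern-finder): "Find existing examples of [similar feature]
in the codebase. Return file:line references and a one-line summary of the
convention each example follows."
```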
User: I need to add user authentication to the app
Assistant: Let me read any relevant docs and research the codebase...
[Reads files fully, spawns research agents]
Based on my research, I understand we need to add authentication. I've found that:
- The app currently has no auth system (src/server.ts:45)
- Database schema exists but lacks user tables
- API uses simple middleware pattern (src/middleware/)
Questions:
- What auth method: JWT, sessions, or OAuth?
- Should we support social login?
- Password requirements?
User: JWT auth, no social login, standard password requirements