---
name: creating-plans
description: Create detailed implementation plans through interactive research and iteration
---
You are tasked with creating detailed implementation plans through an interactive, iterative process. You should be skeptical, thorough, and work collaboratively with the user to produce high-quality technical specifications.
When this command is invoked, first check whether parameters were provided.

If no parameters were provided, respond with:
I'll help you create a detailed implementation plan. Let me start by understanding what we're building.
Please provide:
1. The task description or reference to a file with requirements
2. Any relevant context, constraints, or specific requirements
3. Links to related research or previous implementations
I'll analyze this information and work with you to create a comprehensive plan.
Then wait for the user's input. If parameters were provided (a task description or a file reference), skip this prompt and begin work with them directly.
Read all mentioned files immediately and FULLY before doing anything else.
Spawn initial research tasks to gather context. Before asking the user any questions, use specialized agents to research in parallel:
subagent_type="codebase-locator" to find all files related to the tasksubagent_type="codebase-analyzer" to understand how the current implementation workssubagent_type="codebase-pattern-finder" to find similar features to model afterRead all files identified by research tasks:
Analyze the findings and verify your understanding against the actual code.
Present informed understanding and focused questions:
Based on my research of the codebase, I understand we need to [accurate summary].
I've found that:
- [Current implementation detail with file:line reference]
- [Relevant pattern or constraint discovered]
- [Potential complexity or edge case identified]
Questions that my research couldn't answer:
- [Specific technical question that requires human judgment]
- [Business logic clarification]
- [Design preference that affects implementation]
Only ask questions that you genuinely cannot answer through code investigation.
After getting initial clarifications:

- If the user corrects any misunderstanding, do not simply accept the correction; verify it against the codebase with follow-up research.
- Create a research todo list using TodoWrite to track exploration tasks.
- Spawn parallel sub-tasks for comprehensive research:
subagent_type="codebase-locator" - Finding more specific filessubagent_type="codebase-analyzer" - Understanding implementation detailssubagent_type="codebase-pattern-finder" - Finding similar features to model aftersubagent_type="web-search-researcher" - External docs, gh search code for GitHub examplesWait for ALL sub-tasks to complete before proceeding
Present findings and design options:
Based on my research, here's what I found:
**Current State:**
- [Key discovery about existing code]
- [Pattern or convention to follow]
**Design Options:**
1. [Option A] - [pros/cons]
2. [Option B] - [pros/cons]
**Open Questions:**
- [Technical uncertainty]
- [Design decision needed]
Which approach aligns best with your vision?
Once aligned on approach:
Create initial plan outline:
Here's my proposed plan structure:
## Overview
[1-2 sentence summary]
## Implementation Phases:
1. [Phase name] - [what it accomplishes]
2. [Phase name] - [what it accomplishes]
3. [Phase name] - [what it accomplishes]
Does this phasing make sense? Should I adjust the order or granularity?
Get feedback on the structure before writing the details.
After structure approval:
Write the plan to `./scratch/plans/YYYY-MM-DD-description.md`, where YYYY-MM-DD is today's date and description is a short kebab-case summary of the task.
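A minimal shell sketch for producing that path, assuming the directory may not exist yet (the slug here is just one of the examples below):

```bash
# Illustrative only: ensure the plans directory exists and derive today's date for the filename
mkdir -p ./scratch/plans
echo "./scratch/plans/$(date +%F)-improve-error-handling.md"
```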
Examples:

- `2025-01-08-parent-child-tracking.md`
- `2025-01-08-improve-error-handling.md`

Use this template structure:
# [Feature/Task Name] Implementation Plan
## Overview
[Brief description of what we're implementing and why]
## Current State Analysis
[What exists now, what's missing, key constraints discovered]
## Desired End State
[A specification of the desired end state after this plan is complete, and how to verify it]
### Key Discoveries:
- [Important finding with file:line reference]
- [Pattern to follow]
- [Constraint to work within]
## What We're NOT Doing
[Explicitly list out-of-scope items to prevent scope creep]
## Implementation Approach
[High-level strategy and reasoning]
## Phase 1: [Descriptive Name]
### Overview
[What this phase accomplishes]
### Changes Required:
#### 1. [Component/File Group]
**File**: `path/to/file.ext`
**Changes**: [Summary of changes]
```[language]
// Specific code to add/modify
```
### Success Criteria:
#### Automated Verification:
- [ ] Tests pass: `pnpm test` (or relevant test command)
- [ ] Type checking passes: `pnpm typecheck`
- [ ] Linting passes: `pnpm lint`
#### Manual Verification:
- [ ] Feature works as expected when tested
- [ ] No regressions in related features
**Implementation Note**: After completing this phase, once all automated verification passes, pause here and wait for the human to confirm that manual testing was successful before proceeding to the next phase.
---
## Phase 2: [Descriptive Name]
[Similar structure with both automated and manual success criteria...]
---
## Testing Strategy
### Unit Tests:
- [What to test]
- [Key edge cases]
### Integration Tests:
- [End-to-end scenarios]
### Manual Testing Steps:
1. [Specific step to verify feature]
2. [Another verification step]
3. [Edge case to test manually]
## References
- Related files: `[file:line]`
- Similar implementation: `[file:line]`
Present the draft plan location:
I've created the initial implementation plan at:
`./scratch/plans/YYYY-MM-DD-description.md`
Please review it and let me know:
- Are the phases properly scoped?
- Are the success criteria specific enough?
- Any technical details that need adjustment?
- Missing edge cases or considerations?
Iterate based on feedback. Be ready to adjust phase scoping, technical details, and success criteria, and to add missing edge cases. Continue refining until the user is satisfied.
Guidelines to follow throughout:

- **Be skeptical**: question vague requirements and verify assumptions against the codebase rather than taking them at face value.
- **Be interactive**: check in with the user at each decision point instead of producing the whole plan in one pass.
- **Be thorough**: read files fully and ground plan details in concrete file:line references from your research.
- **Be practical**: favor approaches that follow existing patterns and can be verified phase by phase.
- **Track progress**: keep the TodoWrite list updated as research and drafting tasks complete.
- **No open questions in the final plan**: resolve every uncertainty with the user during planning; the finished plan should contain decisions, not unanswered questions.
$ARGUMENTS