---
name: implementation-planner
description: Generate comprehensive implementation plans for features. Use when user requests "help me implement X", "create a plan for X", "break down feature X", "how should I build X", or asks for detailed implementation guidance. Activates for planning requests, not exploratory design discussions.
allowed-tools: Read, Bash, Glob, Grep, Write, TodoWrite
---
Version: 3.0.0
Purpose: Generate conductor-compatible YAML implementation plans with built-in validation.
Activate for: Planning requests such as "help me implement X", "create a plan for X", "break down feature X".
Do NOT activate for: Questions, debugging, code reviews, exploratory discussions.
fd '\.md$' ~/.claude/agents --type f | xargs -n1 basename -s .md   # Available agent names
Extract names (remove path/extension) as above. Default: general-purpose.
ls -la                                      # Structure
cat go.mod 2>/dev/null || cat package.json  # Stack
Document: Framework, test framework, architecture pattern, existing patterns for similar features.
CRITICAL: Verify file organization before specifying paths:
ls internal/learning/migrations/ 2>/dev/null # Does dir exist?
grep -r "CREATE TABLE" internal/learning/ # Where does SQL live?
Derive depends_on relationships from data flow.
Problem: Feature-chain thinking (ordering dependencies by feature sequence rather than by which task produces the data) produces wrong dependencies.
Process:
1. Build a producer registry: {function: task_that_creates_it}
2. Ensure every consumer's depends_on includes ALL of its producers.
3. Add the registry to the plan header:
# DATA FLOW REGISTRY
# PRODUCERS: Task 4 → ExtractMetrics, Task 5 → LoadSession
# CONSUMERS: Task 16 → [4, 5, 15]
# VALIDATION: All consumers depend_on their producers ✓
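The registry convention above can be checked mechanically. A minimal Python sketch; the task numbers and the producers/consumers/depends_on shapes are illustrative, not conductor's actual schema:

```python
# Sketch: verify that every consumer's depends_on covers all of its producers.
# The dicts mirror the DATA FLOW REGISTRY comments; shapes are illustrative.
producers = {4: "ExtractMetrics", 5: "LoadSession", 15: "BuildReport"}
consumers = {16: [4, 5, 15]}   # task -> producer tasks whose output it reads
depends_on = {16: [4, 5]}      # task -> dependencies it actually declares

def missing_deps(consumers, depends_on):
    """Return {task: [producers missing from its depends_on]}."""
    problems = {}
    for task, needed in consumers.items():
        declared = set(depends_on.get(task, []))
        missing = [p for p in needed if p not in declared]
        if missing:
            problems[task] = missing
    return problems

print(missing_deps(consumers, depends_on))  # {16: [15]} -> Task 16 misses Task 15
```

An empty result means the VALIDATION line in the registry can honestly carry its check mark.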
For each task, write implementation: FIRST, then derive criteria.
```yaml
implementation:
  approach: |
    Strategy and architectural decisions.
  key_points:
    - point: "Descriptive name"
      details: "What this accomplishes and why"
      reference: "path/to/file.go"
    - point: "Another key point"
      details: "Details here"
      reference: "path/to/other.go"
  integration:   # Only for tasks with depends_on
    imports: ["package/from/dep"]
    config_values: ["setting.name"]
```
Each key_point must be specific, independently verifiable, and anchored to a real file via its reference.
RULE: Success criteria MUST be derived directly from key_points using SAME terminology.
For each key_point:
→ Write criterion that verifies THIS specific point
→ Use EXACT same terms as the key_point
→ Criterion = testable assertion of key_point
```yaml
# WRITE key_points FIRST:
key_points:
  - point: "EnforcePackageIsolation with git diff"
    details: "Run git diff --name-only, compare against task.Files, fail if outside scope"
    reference: "internal/executor/package_guard.go"

# THEN derive success_criteria using same words:
success_criteria:
  - "EnforcePackageIsolation runs git diff --name-only before test commands, compares against task.Files, fails with remediation message if files modified outside declared scope."
```
```yaml
# BAD - criteria use different terms than key_points:
key_points:
  - point: "Runtime package locks"
    details: "Mutex prevents concurrent modifications"
success_criteria:
  - "EnforcePackageIsolation validates file scope with git diff"  # WRONG - not in key_points!

# GOOD - criteria match key_points:
key_points:
  - point: "Runtime package locks"
    details: "Mutex prevents concurrent modifications"
  - point: "EnforcePackageIsolation with git diff"
    details: "Validate file scope before tests"
success_criteria:
  - "Runtime package locks via mutex prevent concurrent modifications to same Go package."
  - "EnforcePackageIsolation runs git diff --name-only, validates against task.Files."
```
Add to ALL tasks:
```yaml
success_criteria:
  # Task-specific (derived from key_points)
  - "..."
  # Auto-appended:
  - "No TODO comments in production code paths."
  - "No placeholder empty structs (e.g., Type{})."
  - "No unused variables (_ = x pattern)."
  - "All imports from dependency tasks resolve."
```
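The auto-appended criteria are simple textual properties, so they can be pre-checked before QC runs. A rough Python sketch; the regex patterns are illustrative approximations, not conductor's actual checks:

```python
import re

# Sketch: grep-style checks behind the auto-appended criteria.
# Patterns are illustrative approximations of the real QC rules.
CHECKS = {
    "TODO comment": re.compile(r"//\s*TODO"),
    "discarded variable": re.compile(r"_\s*=\s*\w+"),
}

def violations(source: str):
    """Return (line_number, check_name) pairs for every match."""
    found = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for name, pat in CHECKS.items():
            if pat.search(line):
                found.append((lineno, name))
    return found

src = 'func main() {\n\t// TODO: wire cache\n\t_ = cfg\n}\n'
print(violations(src))  # [(2, 'TODO comment'), (3, 'discarded variable')]
```

Running a check like this locally catches the cheap failures before a task ever reaches quality control.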
| Type | Definition | Test Method |
|---|---|---|
| CAPABILITY | What component CAN do | Unit test with task's files only |
| INTEGRATION | How components WORK TOGETHER | E2E across components |
Component tasks (type: component, or no type): ONLY capability criteria.
Integration tasks (type: integration): BOTH success_criteria AND integration_criteria.
Move a criterion to an integration task if it requires another component to verify (for example, a CLI flag or a cross-component call):
```yaml
# BAD - CLI criterion in cache component task:
success_criteria:
  - "Cache can be bypassed with --no-cache flag"  # Requires CLI!

# GOOD - split:
# Cache task:
success_criteria:
  - "CacheManager accepts enabled: boolean option"
  - "When enabled=false, get() returns null"
# CLI task or integration task:
integration_criteria:
  - "CLI --no-cache flag passes enabled=false to CacheManager"
```
For EACH task, verify:
□ Every key_point has a corresponding success criterion
□ Every success criterion traces to a key_point
□ Same terminology used in both
□ No orphan criteria (criteria without key_point source)
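A quick structural lint for this checklist. The sketch below approximates "same terminology" as the key_point's name appearing verbatim in some criterion; real alignment is semantic, so treat misses as review prompts, not hard failures:

```python
# Sketch: flag key_points with no criterion that mentions them by name.
# Matching is naive substring containment, case-insensitive.
def unmatched_points(key_points, criteria):
    return [kp for kp in key_points
            if not any(kp.lower() in c.lower() for c in criteria)]

key_points = ["Runtime package locks", "EnforcePackageIsolation with git diff"]
criteria = ["Runtime package locks via mutex prevent concurrent modifications."]
print(unmatched_points(key_points, criteria))
# ['EnforcePackageIsolation with git diff'] -> needs a criterion
```

The reverse direction (orphan criteria with no key_point source) can be linted the same way by swapping the arguments' roles.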
Before writing key_points that claim existing behavior:
# Verify defaults
grep -n "??" <file> | grep <option>
# Verify option existence
grep -n "option\|flag\|--" <file>
# Verify function behavior
grep -A5 "func <name>" <file>
□ All numeric deps exist (same file)
□ All cross-file references point to real files/tasks
□ No circular dependencies
□ Data flow producers included in depends_on
□ Every task has implementation section with approach + key_points
□ Every task has success_criteria (derived from key_points)
□ Every task has test_commands
□ Every task has code_quality pipeline
□ Integration tasks have BOTH success_criteria AND integration_criteria
□ Files are flat lists (not nested)
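The "no circular dependencies" item can be verified with a depth-first search over the numeric depends_on graph. A sketch; cross-file references are omitted for brevity:

```python
# Sketch: detect a circular depends_on chain among same-file numeric tasks.
# deps maps task number -> list of task numbers it depends on.
def find_cycle(deps):
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on current path / done
    color = {t: WHITE for t in deps}
    def visit(t, path):
        color[t] = GRAY
        for d in deps.get(t, []):
            if color.get(d) == GRAY:           # back edge: cycle found
                return path + [d]
            if color.get(d, BLACK) == WHITE:   # unknown tasks (cross-file) skipped
                cycle = visit(d, path + [d])
                if cycle:
                    return cycle
        color[t] = BLACK
        return None
    for t in deps:
        if color[t] == WHITE:
            cycle = visit(t, [t])
            if cycle:
                return cycle
    return None

print(find_cycle({1: [2], 2: [3], 3: [1]}))  # [1, 2, 3, 1]
print(find_cycle({1: [], 2: [1]}))           # None
```

The other checklist items (dangling numeric deps, missing sections) are plain presence checks over the parsed YAML.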
Conductor enforces quality gates at runtime:
| Field | Type | Behavior |
|---|---|---|
| test_commands | Hard gate | Must pass or task fails |
| key_points | Soft signal | Verified, results sent to QC |
| documentation_targets | Soft signal | Checked, results sent to QC |
```yaml
# Hard gate - blocks task if fails:
test_commands:
  - "go test ./internal/executor -run TestFoo"
  - "go build ./..."

# Soft signal - verified before QC:
implementation:
  key_points:
    - point: "Function name"
      details: "What it does"
      reference: "path/to/file.go"  # Verified to exist

# Soft signal - for doc tasks:
documentation_targets:
  - file: "docs/README.md"
    section: "## Installation"
    action: "update"
```
# ═══════════════════════════════════════════════════════════════
# DATA FLOW REGISTRY
# ═══════════════════════════════════════════════════════════════
# PRODUCERS: Task N → Function/Type
# CONSUMERS: Task M → [deps]
# VALIDATION: All consumers depend_on producers ✓
# ═══════════════════════════════════════════════════════════════
# SUCCESS CRITERIA VALIDATION
# ═══════════════════════════════════════════════════════════════
# All criteria derived from key_points ✓
# Same terminology in key_points and criteria ✓
# Component tasks have CAPABILITY-only criteria ✓
# Integration tasks have dual criteria ✓
# ═══════════════════════════════════════════════════════════════
```yaml
conductor:
  default_agent: general-purpose
  # quality_control: Omit to inherit from .conductor/config.yaml
  worktree_groups:
    - group_id: "group-name"
      description: "Purpose"
      tasks: [1, 2, 3]
      rationale: "Why grouped"
```
````yaml
plan:
  metadata:
    feature_name: "Feature Name"
    created: "YYYY-MM-DD"
    target: "What this achieves"
  context:
    framework: "Framework"
    architecture: "Pattern"
    test_framework: "Test framework"
  tasks:
    - task_number: "1"
      name: "Task name"
      agent: "agent-name"
      files:
        - "path/to/file.go"
      depends_on: []
      estimated_time: "30m"
      success_criteria:
        - "Criterion derived from key_point 1"
        - "Criterion derived from key_point 2"
        - "No TODO comments in production code paths."
        - "No placeholder empty structs."
        - "No unused variables."
        - "All imports from dependency tasks resolve."
      test_commands:
        - "go test ./path -run TestName"
      description: |
        ## PHASE 0: DEPENDENCY VERIFICATION (EXECUTE FIRST)
        ```bash
        # Verify dependencies exist
        ```
        ## TASK DESCRIPTION
        What to implement.
      implementation:
        approach: |
          Strategy here.
        key_points:
          - point: "Key point 1"
            details: "Details"
            reference: "file.go"
          - point: "Key point 2"
            details: "Details"
            reference: "file.go"
        integration: {}
      verification:
        automated_tests:
          command: "go test ./..."
          expected_output: "Tests pass"
      code_quality:
        go:
          full_quality_pipeline:
            command: |
              gofmt -w . && golangci-lint run ./... && go test ./...
            exit_on_failure: true
      commit:
        type: "feat"
        message: "description"
        files:
          - "path/**"
      # Cross-file dependency syntax (alternative to the numeric form above):
      depends_on:
        - 4                                # Same file
        - file: "plan-01-foundation.yaml"  # Different file
          task: 2
````
```yaml
- task_number: "N"
  name: "Wire X to Y"
  type: integration
  files:
    - "component1/file.go"
    - "component2/file.go"
  depends_on: [component1_task, component2_task]
  success_criteria:        # Component-level
    - "Function signatures correct"
  integration_criteria:    # Cross-component
    - "X calls Y in correct sequence"
    - "Error propagates end-to-end"
```
Split plans larger than ~2000 lines, breaking at worktree group boundaries:
docs/plans/feature-name/
├── plan-01-foundation.yaml
├── plan-02-execution.yaml
└── plan-03-integration.yaml
Run before outputting:
conductor validate docs/plans/<plan>.yaml
Output confirmation:
YAML plan: docs/plans/<slug>.yaml
- Total tasks: N
- Validation: PASSED
- Key points ↔ Criteria: ALIGNED ✓
Run: conductor run docs/plans/<slug>.yaml
| Failure | Cause | Prevention |
|---|---|---|
| Agent implements wrong thing | key_points incomplete | Write ALL requirements in key_points |
| QC fails despite working code | Criteria not in key_points | Derive criteria FROM key_points |
| Missing dependency | Data flow not traced | Build producer registry |
| Scope leak | Integration criterion in component | Classify criteria by type |
| Assumed behavior wrong | Didn't verify codebase | grep before claiming defaults |