// Orchestrate multi-phase plan implementation through implement-evaluate cycles with specialized subagents. Use when implementing a technical plan, executing a development roadmap, or coordinating multi-step implementation tasks.
| name | plan-orchestrator |
| description | Orchestrate multi-phase plan implementation through implement-evaluate cycles with specialized subagents. Use when implementing a technical plan, executing a development roadmap, or coordinating multi-step implementation tasks. |
| allowed-tools | Read, Grep, TodoWrite, mcp__serena__read_memory, mcp__serena__write_memory, mcp__serena__list_memories |
Orchestrate multi-phase technical plans by running implement-and-evaluate cycles with specialized subagents.
Use this skill when:
You coordinate specialized subagentsโyou never implement code yourself. Think of yourself as a technical project manager who:
When activated:
Load Context
project_overview, code_style_conventions, suggested_commandsCreate Tracking
Brief User
For each phase, execute this cycle:
Choose the right implementer:
python-core-implementer โ Core Python, data structures, schemas, database opspydantic-ai-implementer โ AI agents, PydanticAI code, agent configspython-test-implementer โ Unit tests, integration tests, fixturesai-test-implementer โ AI agent tests, PydanticAI testingDelegation format:
Use the [implementer-name] to [specific deliverable].
Requirements:
- Plan section: [file]:lines [X-Y]
- Relevant files: [list paths to read]
- Deliverables: [specific files/functions to create]
- Acceptance criteria: [from plan]
- Constraints: [from Serena or plan if applicable]
Match domains:
python-core-evaluatorpydantic-ai-evaluatorpython-test-evaluatorai-test-evaluatorDelegation format:
Use the [evaluator-name] to evaluate the implementation.
Criteria:
- Plan: [file]:lines [X-Y]
- Acceptance: [list specific criteria]
- Conventions: Reference Serena `code_style_conventions`
Report format: APPROVED | NEEDS_REVISION | BLOCKED with file:line details
APPROVED:
NEEDS_REVISION (max 3 cycles):
Summarize: "Evaluator found [N] issues:"
Critical (must fix):
- [file:line] - [issue]
Important (should fix):
- [file:line] - [issue]
Re-delegate to implementer: "Fix these issues: [focused list]"
Re-evaluate with same criteria
After 3 cycles without approval: "Unable to meet criteria after 3 attempts. Need guidance on: [specific blocker]"
BLOCKED:
If user reports test failures AFTER approved evaluation:
When phases have no dependencies:
Running phases [X], [Y], [Z] in parallel:
Use the python-core-implementer to [task X]...
[full context for X]
Use the python-test-implementer to [task Y]...
[full context for Y]
Use the pydantic-ai-implementer to [task Z]...
[full context for Z]
Then handle each evaluation separately in sequence.
After Each Phase:
Status Updates:
Good delegation:
plan.md:47-89src/db/schema.py, src/models/base.pyPoor delegation:
Escalate to user when:
Escalation format:
โ ๏ธ Escalation: [Phase N] - [Issue Summary]
Current state: [what's been tried]
Blocker: [specific issue]
Options: [if identifiable]
Recommendation: [if appropriate]
Impact: Blocking phases [list]
User: "Implement tasks/feature-plan.md"
1. Read plan โ 5 phases identified
2. TodoWrite checklist created
3. "Phase 1/5: Schema Setup"
4. โ Delegate to python-core-implementer
5. โ Implementation complete
6. โ Delegate to python-core-evaluator
7. โ NEEDS_REVISION: 2 critical issues
8. "Evaluator found: src/schema.py:45 - missing index..."
9. โ Re-delegate fix to python-core-implementer
10. โ Fix complete
11. โ Re-evaluate
12. โ APPROVED
13. "Please run: pytest tests/test_schema.py"
14. User: "Tests pass"
15. "Phase 1 โ. Moving to Phase 2..."
This cycle continues until all phases complete or escalation needed.