subagent-driven
// Use when executing an implementation plan in this session, sequentially, dispatching one Claude subagent per task with two-stage review.
| name | subagent-driven |
| description | Use when executing an implementation plan in this session, sequentially, dispatching one Claude subagent per task with two-stage review. |
Execute plan by dispatching one new subagent invocation per task, with two-stage review after each: spec compliance review first, then code quality review. Each subagent gets its own planning directory for structured knowledge capture.
Core principle: One new subagent invocation per task + per-agent planning dir + two-stage review (spec then quality) = high quality, fast iteration
Announce at start: "I'm using the subagent-driven skill to execute this plan."
Dispatch two reviews in order:

- `./spec-reviewer-prompt.md` subagent
- `./quality-reviewer-prompt.md` subagent (only after spec review passes)

A task is NOT complete until BOTH reviews return APPROVED. No exceptions — not for "simple" tasks, config changes, or thorough self-reviews.
The Task Status Dashboard in .planning/progress.md has Spec Review, Quality Review, and Plan Align columns. All three MUST show PASS before status can be complete.
Each review loop (spec compliance and code quality) is capped at 3 fix-review rounds per task.
Round counting: The initial review does not count as a round. A "round" is one fix-then-re-review cycle: initial review → fix → re-review (round 1) → fix → re-review (round 2) → fix → re-review (round 3) → STOP.
After 3 rounds without approval, STOP the loop and escalate to the user:
Track round count in the Task Status Dashboard. Use notation like FAIL (round 2/3) in the Spec Review or Quality Review column.
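The cap above can be sketched as a small shell loop. This is a hypothetical illustration, not part of the skill's scripts: `review_loop` simulates dispatching a reviewer, with `needed` standing in for how many fix rounds the reviewer requires before approving.

```shell
MAX_ROUNDS=3

review_loop() {
  needed=$1
  # The initial review does not count as a round.
  if [ "$needed" -le 0 ]; then
    echo "APPROVED after initial review"
    return 0
  fi
  round=0
  while [ "$round" -lt "$MAX_ROUNDS" ]; do
    round=$((round + 1))   # one fix-then-re-review cycle
    if [ "$round" -ge "$needed" ]; then
      echo "APPROVED (round $round/$MAX_ROUNDS)"
      return 0
    fi
    echo "FAIL (round $round/$MAX_ROUNDS)"
  done
  echo "ESCALATE: no approval after $MAX_ROUNDS rounds"
  return 1
}

review_loop 2   # FAIL (round 1/3), then APPROVED (round 2/3)
```

The point of the sketch: the loop counter, not the reviewer, decides when to stop, so a stubborn failure escalates instead of looping forever.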
digraph when_to_use {
"Have implementation plan?" [shape=diamond];
"Tasks mostly independent?" [shape=diamond];
"Stay in this session?" [shape=diamond];
"subagent-driven" [shape=box];
"executing-plans" [shape=box];
"Manual execution or brainstorm first" [shape=box];
"Have implementation plan?" -> "Tasks mostly independent?" [label="yes"];
"Have implementation plan?" -> "Manual execution or brainstorm first" [label="no"];
"Tasks mostly independent?" -> "Stay in this session?" [label="yes"];
"Tasks mostly independent?" -> "Manual execution or brainstorm first" [label="no - tightly coupled"];
"Stay in this session?" -> "subagent-driven" [label="yes"];
"Stay in this session?" -> "executing-plans" [label="no - parallel session"];
}
vs. Executing Plans (parallel session):
When extracting tasks from plan.md to dispatch to subagents:
- Copy the full task text verbatim from plan.md; do not paraphrase or summarize
- Note which section of plan.md contains this task (e.g., `### Task 3: Recovery modes`)
- If plan.md or design.md has global constraints (shared interfaces, naming conventions, performance requirements), include them in the context section
- Include the plan.md and design.md paths so subagents can cross-reference the originals

Why: The orchestrator's extraction is the #1 source of plan drift. Verbatim copying + plan references let subagents and reviewers independently verify against the source of truth.
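Verbatim extraction can be assisted mechanically. A hypothetical helper along these lines prints one task's section unmodified, assuming `### Task N:` headings as in the example above:

```shell
# Hypothetical helper: print one task's section verbatim from plan.md.
# Assumes task headings look like "### Task 3: Recovery modes".
extract_task() {
  plan="$1"
  n="$2"
  awk -v pat="^### Task ${n}:" '
    $0 ~ pat                   { f = 1 }   # start at the requested heading
    /^### Task / && $0 !~ pat  { f = 0 }   # stop at the next task heading
    f { print }
  ' "$plan"
}
```

Usage would be `extract_task .planning/plan.md 3`; the output goes into the subagent prompt unedited.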
digraph process {
rankdir=TB;
subgraph cluster_per_task {
label="Per Task";
"Create agent planning dir (if not exists)" [shape=box style=filled fillcolor=lightyellow];
"Dispatch implementer subagent (./implementer-prompt.md)" [shape=box];
"Implementer subagent asks questions?" [shape=diamond];
"Answer questions, provide context" [shape=box];
"Implementer subagent implements, tests, commits, self-reviews" [shape=box];
"Dispatch spec reviewer subagent (./spec-reviewer-prompt.md)" [shape=box];
"Spec reviewer subagent confirms code matches spec?" [shape=diamond];
"Implementer subagent fixes spec gaps" [shape=box];
"Dispatch code quality reviewer subagent (./quality-reviewer-prompt.md)" [shape=box];
"Code quality reviewer subagent approves?" [shape=diamond];
"Implementer subagent fixes quality issues" [shape=box];
"Aggregate agent findings into top-level .planning/" [shape=box style=filled fillcolor=lightyellow];
"Mark task complete via TaskUpdate" [shape=box];
}
"Read plan, extract all tasks with full text, note context, create tasks via TaskCreate" [shape=box];
"More tasks remain?" [shape=diamond];
"Dispatch final code reviewer subagent for entire implementation" [shape=box];
"Use superpower-planning:finishing-branch" [shape=box style=filled fillcolor=lightgreen];
"Read plan, extract all tasks with full text, note context, create tasks via TaskCreate" -> "Create agent planning dir (if not exists)";
"Create agent planning dir (if not exists)" -> "Dispatch implementer subagent (./implementer-prompt.md)";
"Dispatch implementer subagent (./implementer-prompt.md)" -> "Implementer subagent asks questions?";
"Implementer subagent asks questions?" -> "Answer questions, provide context" [label="yes"];
"Answer questions, provide context" -> "Dispatch implementer subagent (./implementer-prompt.md)";
"Implementer subagent asks questions?" -> "Implementer subagent implements, tests, commits, self-reviews" [label="no"];
"Implementer subagent implements, tests, commits, self-reviews" -> "Dispatch spec reviewer subagent (./spec-reviewer-prompt.md)";
"Dispatch spec reviewer subagent (./spec-reviewer-prompt.md)" -> "Spec reviewer subagent confirms code matches spec?";
"Spec reviewer subagent confirms code matches spec?" -> "Implementer subagent fixes spec gaps" [label="no"];
"Implementer subagent fixes spec gaps" -> "Dispatch spec reviewer subagent (./spec-reviewer-prompt.md)" [label="re-review\n(max 3 rounds)"];
"Spec reviewer subagent confirms code matches spec?" -> "Dispatch code quality reviewer subagent (./quality-reviewer-prompt.md)" [label="yes"];
"Dispatch code quality reviewer subagent (./quality-reviewer-prompt.md)" -> "Code quality reviewer subagent approves?";
"Code quality reviewer subagent approves?" -> "Implementer subagent fixes quality issues" [label="no"];
"Implementer subagent fixes quality issues" -> "Dispatch code quality reviewer subagent (./quality-reviewer-prompt.md)" [label="re-review\n(max 3 rounds)"];
"Code quality reviewer subagent approves?" -> "Aggregate agent findings into top-level .planning/" [label="yes"];
"Aggregate agent findings into top-level .planning/" -> "Mark task complete via TaskUpdate";
"Mark task complete via TaskUpdate" -> "More tasks remain?";
"More tasks remain?" -> "Create agent planning dir (if not exists)" [label="yes - next task"];
"More tasks remain?" -> "Plan Alignment Gate: re-read plan.md, verify all tasks match original plan" [label="no"];
"Plan Alignment Gate: re-read plan.md, verify all tasks match original plan" -> "Dispatch final code reviewer subagent for entire implementation";
"Dispatch final code reviewer subagent for entire implementation" -> "Use superpower-planning:finishing-branch";
}
Each agent role gets ONE directory, reused across all tasks:
mkdir -p .planning/agents/{role}/
Example:
mkdir -p .planning/agents/implementer/
mkdir -p .planning/agents/spec-reviewer/
mkdir -p .planning/agents/quality-reviewer/
Each agent planning dir contains:
- `findings.md` - Discoveries, decisions, critical items (appended across tasks)
- `progress.md` - Step-by-step progress log (appended across tasks)

Do NOT create per-task directories like `implementer-task-1/`, `implementer-task-2/`. One directory per role, updated continuously.
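An illustrative `findings.md` entry might look like this. The "Critical for Orchestrator" heading matches what the aggregation step extracts; the other headings are assumptions:

```markdown
## Task 2: Recovery modes

### Decisions
- Used exponential backoff for repair retries

### Critical for Orchestrator
- Database migration requires careful ordering
```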
Include the planning dir path in the agent's prompt using ./implementer-prompt.md template.
After each task completes (both reviews passed), aggregate the agent's findings:
${CLAUDE_PLUGIN_ROOT}/scripts/aggregate-agent-findings.sh "<role>" "Task N: <name>"
This extracts "Critical for Orchestrator" items and appends them to top-level `.planning/findings.md` and `.planning/progress.md`. Then manually update the Task Status Dashboard and the session log in `.planning/progress.md`.
Example aggregation:
<!-- Append to .planning/findings.md -->
## Task 2: Recovery modes
- [From implementer] Database migration requires careful ordering
- [From spec-reviewer] All requirements met after fix pass
- [From quality-reviewer] Approved with no issues
<!-- Update Task Status Dashboard table in .planning/progress.md -->
| Task | Status | Spec Review | Quality Review | Plan Align | Agent Dir | Notes |
|------|--------|-------------|----------------|------------|-----------|-------|
| Task 1: Hook installation | ✅ complete | PASS | PASS | PASS | agents/implementer/ | 5 tests passing |
| Task 2: Recovery modes | ✅ complete | PASS (2nd pass) | PASS | PASS | agents/implementer/ | 8 tests passing |
| Task 3: Config parser | ⏳ pending | - | - | - | - | - |
<!-- Append to session log in .planning/progress.md -->
- [x] Task 2: Recovery modes - COMPLETED
- Implementer: 8 tests passing, committed
- Spec review: Passed (2nd pass after fix)
- Quality review: Approved
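For illustration, the extraction the script performs might look roughly like the sketch below. This is hypothetical: the shipped `aggregate-agent-findings.sh` is authoritative, and the exact "Critical for Orchestrator" heading format is an assumption.

```shell
# Hypothetical sketch of the aggregation step: copy the agent's
# "Critical for Orchestrator" items into the top-level findings file,
# tagged with the role they came from.
aggregate_agent_findings() {
  role="$1"          # e.g. "implementer"
  task_label="$2"    # e.g. "Task 2: Recovery modes"
  src=".planning/agents/${role}/findings.md"
  dest=".planning/findings.md"
  {
    echo ""
    echo "## ${task_label}"
    # Print lines between "## Critical for Orchestrator" and the next "## ".
    awk '/^## Critical for Orchestrator/{f=1; next} /^## /{f=0} f' "$src" |
      sed "s/^- /- [From ${role}] /"
  } >> "$dest"
}
```

The `[From implementer]` tagging mirrors the aggregation example above, so the orchestrator can trace each finding back to its source agent.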
- `./implementer-prompt.md` - Dispatch implementer subagent (includes planning dir injection)
- `./spec-reviewer-prompt.md` - Dispatch spec compliance reviewer subagent
- `./quality-reviewer-prompt.md` - Dispatch code quality reviewer subagent

You: I'm using Subagent-Driven Development to execute this plan.
[Read plan file once: .planning/plan.md]
[Extract all 5 tasks with full text and context]
[Create all tasks via TaskCreate]
Task 1: Hook installation script
[Create .planning/agents/implementer/ (if not exists)]
[Dispatch implementation subagent with full task text + context + planning dir]
Implementer: "Before I begin - should the hook be installed at user or system level?"
You: "User level (~/.config/superpowers/hooks/)"
Implementer: "Got it. Implementing now..."
[Later] Implementer:
- Implemented install-hook command
- Added tests, 5/5 passing
- Self-review: Found I missed --force flag, added it
- Committed
- Logged findings to .planning/agents/implementer/findings.md
[Dispatch spec compliance reviewer with its own planning dir]
Spec reviewer: Spec compliant - all requirements met, nothing extra
[Get git SHAs, dispatch code quality reviewer]
Code reviewer: Strengths: Good test coverage, clean. Issues: None. Approved.
[Aggregate: read agent findings, append to .planning/findings.md and progress.md]
[Mark Task 1 complete]
Task 2: Recovery modes
[Reuse .planning/agents/implementer/ (already exists from Task 1)]
[Dispatch implementation subagent with full task text + context + planning dir]
Implementer: [No questions, proceeds]
Implementer:
- Added verify/repair modes
- 8/8 tests passing
- Self-review: All good
- Committed
[Dispatch spec compliance reviewer]
Spec reviewer: Issues:
- Missing: Progress reporting (spec says "report every 100 items")
- Extra: Added --json flag (not requested)
[Implementer fixes issues]
Implementer: Removed --json flag, added progress reporting
[Spec reviewer reviews again]
Spec reviewer: Spec compliant now
[Dispatch code quality reviewer]
Code reviewer: Strengths: Solid. Issues (Important): Magic number (100)
[Implementer fixes]
Implementer: Extracted PROGRESS_INTERVAL constant
[Code reviewer reviews again]
Code reviewer: Approved
[Aggregate agent findings into .planning/]
[Mark Task 2 complete]
...
[After all tasks]
[Dispatch final code-reviewer]
Final reviewer: All requirements met, ready to merge
Done!
vs. Manual execution:
vs. Executing Plans:
Efficiency gains:
Quality gates:
Cost:
After ALL tasks complete and BEFORE the final code review, perform a plan alignment check:
1. Re-read `.planning/plan.md` completely — refresh the original requirements in context
2. Re-read `.planning/design.md` if it exists — refresh architectural constraints
3. For each completed task in `.planning/progress.md`, verify it still matches the original plan:
   - Record the result in the Plan Align column in the Task Status Dashboard
   - If drift is found, document it in `.planning/findings.md` with specific details

This gate catches cumulative drift that per-task reviews miss. Individual tasks may each pass spec review while the whole implementation drifts from the plan's intent.
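The gate is a judgment call, but one part can get a crude mechanical assist: checking that every plan task appears in the dashboard at all. A hypothetical sketch, assuming `### Task` headings in plan.md and dashboard rows that quote the task name:

```shell
# Hypothetical drift check: flag plan tasks that never made it into the
# Task Status Dashboard. Not a substitute for actually re-reading the plan.
check_dashboard_coverage() {
  plan="$1"      # e.g. .planning/plan.md
  progress="$2"  # e.g. .planning/progress.md
  grep '^### Task ' "$plan" | sed 's/^### //' | while IFS= read -r name; do
    grep -Fq "$name" "$progress" || echo "MISSING from dashboard: $name"
  done
}
```

Anything it prints is a guaranteed alignment failure; silence still requires the manual re-read, since a task can be present in the dashboard yet drift from the plan's intent.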
Never:
If subagent asks questions:
If reviewer finds issues:
If subagent fails task:
Required workflow skills:
Subagents should use:
Alternative workflow: