# swarming
// Use after pulse:validating approves execution and swarm mode is recommended, when the current phase should be run by coordinated parallel workers.
| Field | Value |
|---|---|
| name | swarming |
| description | Use after pulse:validating approves execution and swarm mode is recommended, when the current phase should be run by coordinated parallel workers. |
| metadata | {"version":"1.3","ecosystem":"pulse","role":"orchestrator","dependencies":[{"id":"beads-cli","kind":"command","command":"br","missing_effect":"unavailable","reason":"Swarming assigns and tracks beads through br."},{"id":"beads-viewer","kind":"command","command":"bv","missing_effect":"degraded","reason":"Swarming inspects the live bead graph with bv."}]} |
If .pulse/onboarding.json is missing or stale for the current repo, stop and invoke pulse:using-pulse before continuing.
You are the ORCHESTRATOR. You launch workers, monitor coordination, handle escalations, and keep the swarm moving. You do NOT implement beads. If you find yourself editing source files, stop immediately — that is the pulse:executing skill's job.
If workers are spawned, online, busy, blocked, or expected to report, you are not in a waiting phase. You are in a tending phase.
While the swarm is active, you must keep looping through the active coordination surface and the live bead graph. Do not stop and wait for user direction just because updates are quiet. Silence is work for the orchestrator.
User escalation is for real product decisions, unresolved blockers, or persistent worker silence after you have already tried to recover the swarm through the active coordination surface.
Blocker reports, conflict reports, and handoffs should be written so a busy teammate can understand them in one read.
Prefer plain, concrete wording. Do not hide the real issue behind labels like "reservation conflict", "startup drift", or "runtime blocker" without explaining the practical effect.
Invoke only if all are true:
- `pulse:validating` has approved execution (open status and approved for execution)
- `.pulse/tooling-status.json` says `recommended_mode=swarm`
- If `.pulse/scripts/pulse_status.mjs` exists, run `node .pulse/scripts/pulse_status.mjs --json` first to confirm onboarding, current phase, reservations, and any saved handoff before launching the swarm

If preflight recommends single-worker, do not invoke this skill. Invoke `pulse:executing` directly instead.
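The invocation gate can be expressed as a small predicate over the parsed status JSON. A sketch; the field names `approved_for_execution` and `recommended_mode` are assumptions about the `pulse_status.mjs --json` output, not a documented schema:

```javascript
// Sketch: gate swarming on the preflight result. `status` is the parsed
// output of `node .pulse/scripts/pulse_status.mjs --json`; the field
// names used here are assumed, not part of a documented schema.
function mayInvokeSwarming(status) {
  if (status.approved_for_execution !== true) {
    return { invoke: false, reason: "pulse:validating has not approved execution" };
  }
  if (status.recommended_mode !== "swarm") {
    return { invoke: false, reason: "preflight recommends single-worker; invoke pulse:executing instead" };
  }
  return { invoke: true, reason: "swarm mode approved" };
}
```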
Read references/runtime-adapter-spec.md before adapting these instructions to a concrete runtime.
- `.pulse/tooling-status.json`
- `.pulse/state.json` if present, then `.pulse/STATE.md`
- `bv --robot-triage --graph-root <EPIC_ID>`
Confirm the preflight results, then update `.pulse/state.json` and `.pulse/STATE.md` with the current swarm intent and epic ID.
Use the smallest runtime primitives that preserve these behaviors:
Adapter mapping:
- `TeamCreate` when explicit teammate coordination helps
- `Agent` to spawn bounded workers
- `SendMessage` for coordinator ↔ worker follow-ups
- `Task*` only as optional runtime metadata, never as the work graph

Shared rules:
- `br` and `bv` stay the source of truth for work selection
- `.pulse/scripts/pulse_reservations.mjs` is the file-coordination layer for every runtime
- `.pulse/STATE.md`, `.pulse/state.json`, and `.pulse/handoffs/` stay authoritative for pause/resume

Post the swarm start notification on the active coordination surface using references/swarming-appendix.md.
That coordination surface is where workers report startup acknowledgments, completions, blockers, conflicts, handoffs, and receive overseer broadcasts.
Spawn bounded workers that immediately load pulse:executing.
Provide each worker:
- `runtime_identity`
- `coordinator_identity`
- `adapter_name`
- `epic_id`
- `feature_name`
- `startup_hint`

Do not invent worker identities locally. Use the identity returned by the runtime's worker-spawn primitive.
Do not assign workers fixed tracks, fixed waves, or fixed bead lists as the normal case. Workers are expected to:
- read `AGENTS.md` and project context
- load `pulse:executing`
- post an `[ONLINE]` acknowledgment
- pick their own work via `bv --robot-priority`
- reserve files through `.pulse/scripts/pulse_reservations.mjs`

Mark spawned workers in `.pulse/STATE.md` under `## Active Workers` immediately after each spawn result.
Use one line per worker:
```
- Runtime: <runtime-identity> | Adapter: <adapter-name> | Status: spawned | Current bead: -
```
The worker startup acknowledgment later updates the same line to online.
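Keeping that line current can be a pure string transform over `.pulse/STATE.md`. A sketch, assuming the exact one-line-per-worker format shown above; file read/write is left to the caller:

```javascript
// Sketch: flip a worker's line from spawned to online once its [ONLINE]
// acknowledgment arrives. Matches on the "- Runtime: <id> |" prefix.
function markWorkerOnline(stateMd, runtimeIdentity) {
  return stateMd
    .split("\n")
    .map((line) =>
      line.startsWith(`- Runtime: ${runtimeIdentity} |`)
        ? line.replace("Status: spawned", "Status: online")
        : line
    )
    .join("\n");
}
```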
Use the worker prompt template in references/swarming-appendix.md.
The swarm is live; now you manage it.
Run a poll-act-repeat loop for as long as any of these are true:
- any worker is spawned, online, busy, or blocked
- `bv --robot-triage --graph-root <EPIC_ID>` still shows ready or in-progress work

Every loop cycle must do all of the following:
- Update `.pulse/STATE.md` to reflect the latest worker status

Use live graph checks for oversight, not assignment:
```
bv --robot-triage --graph-root <EPIC_ID>
```
Do not park in passive wait mode while the swarm is active. If updates are quiet, you still keep tending until the swarm is complete or a real human decision is needed.
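The poll-act-repeat loop can be sketched as below. `pollWorkers`, `pollGraph`, and `act` are injected stand-ins for the coordination surface, `bv --robot-triage`, and the per-cycle obligations; none of them is a real pulse API:

```javascript
// Sketch: the swarm stays "active" while any worker is in an active state
// or the bead graph still shows executable work.
function swarmStillActive(workers, graph) {
  const activeStatuses = ["spawned", "online", "busy", "blocked"];
  return (
    workers.some((w) => activeStatuses.includes(w.status)) ||
    graph.ready > 0 ||
    graph.inProgress > 0
  );
}

// Sketch: poll, act on what you saw, then repeat until the swarm winds down.
async function tendSwarm({ pollWorkers, pollGraph, act, sleepMs = 5000 }) {
  for (;;) {
    const workers = await pollWorkers();
    const graph = await pollGraph();
    if (!swarmStillActive(workers, graph)) break;
    await act({ workers, graph }); // process events, update .pulse/STATE.md
    await new Promise((resolve) => setTimeout(resolve, sleepMs));
  }
}
```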
Treat worker events as protocol-driven, not ad hoc. The canonical protocol, required fields, and coordinator message bodies are in references/swarming-appendix.md.
Minimum coordinator obligations per cycle:
- Mirror worker status into `.pulse/STATE.md` keyed by runtime identity.
- Verify bead state with `br`/`bv` before acknowledging completion.
- Check `.pulse/scripts/pulse_reservations.mjs` before permitting overlapping edits.

After each significant event, estimate your own context budget.
If context >65% used:
- Write `.pulse/handoffs/coordinator.json` using the shared handoff envelope from `../using-pulse/references/handoff-contract.md`.
- Update `.pulse/handoffs/manifest.json` with the same summary, next_action, and path.
- If `.pulse/checkpoints/<feature>/...` is in use, capture or refresh the feature checkpoint before leaving the swarm pause boundary.

The coordinator handoff must follow the same companion contract as planning/executing/validating:

- `summary` -> short orchestrator handoff headline
- `next_action` + `read_first` -> resume briefing for the next swarm turn
- `payload.transfer` -> detailed transfer block for live worker state, blockers, and restart notes

Do not write the retired global handoff file.
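Assembling that envelope can be a one-step build. A sketch; the authoritative schema lives in `../using-pulse/references/handoff-contract.md`, so treat this shape as an assumption:

```javascript
// Sketch: assemble the coordinator handoff envelope. Top-level keys mirror
// the companion contract (summary / next_action / read_first / payload.transfer);
// the real contract file is authoritative.
function buildCoordinatorHandoff({ summary, nextAction, readFirst, transfer }) {
  return {
    summary,                 // short orchestrator handoff headline
    next_action: nextAction, // resume briefing for the next swarm turn
    read_first: readFirst,   // what the next turn should read first
    payload: { transfer },   // live worker state, blockers, restart notes
  };
}
```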
When no current-phase beads remain in_progress and the graph shows no remaining executable work for the current phase:
Run final bead verification:
```
bv --robot-triage --graph-root <EPIC_ID>
```
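The close-out decision can be checked mechanically over the triage result. A sketch; the bead shape (`{ id, status, orphaned }`) is an assumption about `bv --robot-triage` output, not a documented schema:

```javascript
// Sketch: the phase can close only when every bead is closed and nothing
// is blocked or orphaned. Bead field names are assumed.
function phaseCloseoutCheck(beads) {
  const blockers = beads.filter((b) => b.status === "blocked" || b.orphaned === true);
  const open = beads.filter((b) => b.status !== "closed");
  return { canClose: open.length === 0 && blockers.length === 0, blockers };
}
```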
If orphaned or blocked beads remain, stop and diagnose them before closing out the phase.
If all current-phase beads are closed:
- Remove `## Active Workers` from `.pulse/STATE.md`
- Update `history/<feature>/phase-plan.md` and `.pulse/STATE.md`
- If `phase-plan.md` and `.pulse/STATE.md` disagree about the approved/current phase or whether later phases remain: stop and route back to planning/state sync before any review handoff

If later phases remain:

```
Active skill: swarming -> COMPLETE
Swarm: <EPIC_ID> - current phase complete
Next: planning for Phase <n+1>
```
If this was the final phase:

```
Active skill: swarming -> COMPLETE
Swarm: <EPIC_ID> - final phase complete
Next: reviewing
```
Handoff message for the current-phase case:

"Swarm execution complete for the current phase. The whole-feature epic stays open. Return to pulse:planning to prepare the next phase."

Handoff message for the final-phase case:

"Swarm execution complete for the final phase. Invoke pulse:reviewing."
Stop and diagnose before continuing if you see:
- Workers bypassing `bv --robot-priority` and starting to freelance — re-broadcast the execution contract
- A preflight recommendation of single-worker — stop and use standalone executing
- `phase-plan.md` + `.pulse/STATE.md` left stale before handing off to reviewing — update both first

Load when needed:
| File | Load When |
|---|---|
| references/swarming-appendix.md | Worker startup template, message protocol, silence ladder, and coordinator handoff contract |
| references/runtime-adapter-spec.md | Adapting canonical swarm behaviors to a concrete runtime |
| docs/evaluation/pulse-swarming-hardening.md | Re-running RED/GREEN pressure tests for swarm coordination behavior |