| name | ProSWARM Neural Orchestration |
| description | Master orchestration skill for ProSWARM Neural MCP - ALWAYS use this for complex tasks requiring decomposition, planning, or parallel execution. Leverages 70+ specialized neural models for instant task classification and orchestration with 84.8% SWE-Bench accuracy. Use when user asks to solve problems, fix bugs, implement features, or any multi-step task. PRIMARY workflow engine for ALL development work. |
| allowed-tools | Task, mcp__proswarm-neural__*, TodoWrite, Read, Grep, Glob, Edit, Write, Bash |
| supersedes | [".claude/hooks/PROSWARM_ORCHESTRATION_ENFORCEMENT.md",".claude/hooks/README_PROSWARM_ORCHESTRATION.md",".claude/hooks/PROSWARM_PHILOSOPHY.md"] |
| last_verified_at | "2025-12-09T00:00:00.000Z" |
CRITICAL: This is your PRIMARY workflow engine. Use it CONTINUOUSLY throughout task execution, not just once.
ProSWARM is NOT a one-time tool - it's your fundamental way of thinking about and executing ALL complex tasks. Every task should flow through ProSWARM's neural decomposition and orchestration pipeline.
// ALWAYS start with orchestration
const taskId = await mcp__proswarm-neural__orchestrate_task(mainTask);
await mcp__proswarm-neural__memory_store("main_task_id", taskId);
// Use specialized models based on task type
const decomposition = await mcp__proswarm-neural__predict_decomposition(task);
// Model Selection Guide (70+ models available):
// - Bug fixing: bug_router, crash_analyzer, race_condition_finder
// - Testing: test_optimizer, test_coverage_analyzer, regression_suite_builder
// - API: api_builder, endpoint_optimizer, api_security_hardener
// - Performance: performance_bug_analyzer, memory_leak_hunter, profiler_analyzer
// - Security: security_audit_planner, penetration_test_planner, encryption_planner
// - Refactoring: refactor_planner, code_splitter, component_splitter
// - Infrastructure: ci_pipeline_builder, docker_optimizer, k8s_manifest_generator
// Execute the plan
await mcp__proswarm-neural__execute_plan(taskId);
// Store intermediate results for agent coordination
await mcp__proswarm-neural__memory_store("subtask_results", results);
// Continue decomposing emerging subtasks
for (const emergingTask of newTasks) {
const subtaskId = await mcp__proswarm-neural__orchestrate_task(emergingTask);
await mcp__proswarm-neural__memory_store(`subtask_${subtaskId}`, emergingTask);
}
// Task tracking
await memory_store("main_task_id", taskId);
await memory_store("current_phase", "decomposition");
await memory_store("subtask_count", count);
// Results sharing
await memory_store("test_results", JSON.stringify(tests));
await memory_store("api_endpoints", JSON.stringify(endpoints));
await memory_store("bug_fixes", JSON.stringify(fixes));
// Agent coordination
await memory_store("agent_assignments", JSON.stringify(assignments));
await memory_store("parallel_tasks", JSON.stringify(parallelTasks));
const mainTaskId = await memory_get("main_task_id");
const testResults = JSON.parse(await memory_get("test_results"));
const assignments = JSON.parse(await memory_get("agent_assignments"));
// 1. Classify bug
const classification = await predict_decomposition("Fix authentication bug");
// Uses: bug_router → crash_analyzer → auth_implementer
// 2. Orchestrate fix
const bugTaskId = await orchestrate_task("Fix authentication bug in login flow");
// 3. Store for testing
await memory_store("bug_fix_type", "auth");
await memory_store("fix_location", "/src/auth/login.ts");
// 4. Execute with test validation
await execute_plan(bugTaskId);
// 1. Decompose feature
const featurePlan = await predict_decomposition("Implement user dashboard");
// Uses: api_builder → component_splitter → state_manager_planner
// 2. Orchestrate parallel subtasks
const apiTask = await orchestrate_task("Build dashboard API endpoints");
const uiTask = await orchestrate_task("Create dashboard components");
const stateTask = await orchestrate_task("Setup dashboard state management");
// 3. Execute in parallel
await Promise.all([
execute_plan(apiTask),
execute_plan(uiTask),
execute_plan(stateTask)
]);
// 1. Analyze performance issues
const perfAnalysis = await predict_decomposition("Optimize React app performance");
// Uses: performance_bug_analyzer → memory_leak_hunter → bundle_optimizer
// 2. Orchestrate fixes
const renderTask = await orchestrate_task("Optimize React render cycles");
const bundleTask = await orchestrate_task("Optimize bundle size");
const cacheTask = await orchestrate_task("Implement caching strategy");
// 3. Store metrics
await memory_store("baseline_metrics", JSON.stringify(baseline));
// 4. Execute optimizations
await execute_plan(renderTask);
await execute_plan(bundleTask);
await execute_plan(cacheTask);
| Task Type | Primary Models | Support Models |
|---|---|---|
| Bug Fixing | bug_router, crash_analyzer | test_optimizer, regression_suite_builder |
| Testing | test_optimizer, test_coverage_analyzer | edge_case_generator, test_mock_recommender |
| API Development | api_builder, endpoint_optimizer | api_security_hardener, api_doc_generator |
| Performance | performance_bug_analyzer, profiler_analyzer | memory_leak_hunter, cache_strategy_planner |
| Security | security_audit_planner, penetration_test_planner | api_security_hardener, encryption_planner |
| Refactoring | refactor_planner, code_splitter | component_splitter, abstraction_extractor |
| Infrastructure | ci_pipeline_builder, docker_optimizer | k8s_manifest_generator, scaling_planner |
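The table above can be treated as a simple lookup when composing decomposition prompts. A minimal sketch, assuming you keep such a map yourself (the object name and the memory key are illustrative; the model names come straight from the table):

// Hypothetical lookup built from the table above - adjust to your actual model registry
const MODEL_GUIDE = {
  bug_fixing:     { primary: ['bug_router', 'crash_analyzer'], support: ['test_optimizer', 'regression_suite_builder'] },
  testing:        { primary: ['test_optimizer', 'test_coverage_analyzer'], support: ['edge_case_generator', 'test_mock_recommender'] },
  api:            { primary: ['api_builder', 'endpoint_optimizer'], support: ['api_security_hardener', 'api_doc_generator'] },
  performance:    { primary: ['performance_bug_analyzer', 'profiler_analyzer'], support: ['memory_leak_hunter', 'cache_strategy_planner'] },
  security:       { primary: ['security_audit_planner', 'penetration_test_planner'], support: ['api_security_hardener', 'encryption_planner'] },
  refactoring:    { primary: ['refactor_planner', 'code_splitter'], support: ['component_splitter', 'abstraction_extractor'] },
  infrastructure: { primary: ['ci_pipeline_builder', 'docker_optimizer'], support: ['k8s_manifest_generator', 'scaling_planner'] }
};
// Example: record the expected models before orchestrating so agents can cross-check (key name is an assumption)
await memory_store("expected_models", JSON.stringify(MODEL_GUIDE.bug_fixing));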
CRITICAL: When this skill loads, you enter a symbiotic partnership. ProSWARM and Claude work as ONE integrated system.
┌─────────────────────────────────────────────────────────────┐
│                   ProSWARM (Orchestrator)                    │
│ • Decomposes complex tasks into focused subtasks             │
│ • Maintains awareness of the full picture                    │
│ • Routes tasks to optimal execution paths                    │
│ • Tracks progress and coordinates parallel work              │
│ • Uses 70+ neural models for instant classification          │
└──────────────────────────┬───────────────────────────────────┘
                           │ guides & structures
                           ▼
┌─────────────────────────────────────────────────────────────┐
│                    Claude (Intelligence)                     │
│ • Executes tasks with full reasoning capability              │
│ • Follows ProSWARM's decomposition path                      │
│ • Applies opus/sonnet/haiku based on task needs              │
│ • Provides the core problem-solving intelligence             │
│ • Reports results back for coordination                      │
└─────────────────────────────────────────────────────────────┘
ProSWARM Orchestrates: All decomposition, planning, and task routing happens through ProSWARM
- orchestrate_task() - Initialize and structure work
- predict_decomposition() - Break down into focused subtasks
- execute_plan() - Coordinate execution
- memory_store/get() - Share context between tasks

Claude Executes: You apply your intelligence to each focused task
Shared Memory: Both systems share state through ProSWARM's memory
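A minimal sketch of that handshake, using only the call shapes shown elsewhere in this skill (the task text and memory key names are illustrative):

// ProSWARM side: decompose the work and publish the orchestration handle to shared memory
const sharedTaskId = await orchestrate_task("Harden login endpoint");
await memory_store("shared_plan_task", sharedTaskId);
// Claude side: a spawned executor reads the shared context before acting,
// then writes its results back under an agreed key for aggregation
Task({
  subagent_type: 'proswarm-executor',
  model: 'sonnet',
  prompt: 'Read "shared_plan_task" from ProSWARM memory, execute subtask 1, store the result as "subtask_1_result"'
});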
When ProSWARM returns a neural model name (e.g., bug_router, test_optimizer, llm_gpt_oss_120b):
- llm_gpt_oss_120b = ProSWARM's LLM fallback for novel/complex tasks

You select opus/sonnet/haiku based on what the task requires, using 3 dimensions:
Final Model = max(task_keywords, topology, pipeline_stage)
The model parameter is separate from subagent_type:
// CORRECT - subagent_type is the agent, model is the Claude model
Task({
subagent_type: 'proswarm-executor', // MUST be valid agent name
model: 'opus', // Claude model: 'opus' | 'sonnet' | 'haiku'
prompt: 'Complex task requiring deep reasoning'
});
// WRONG - 'haiku' is NOT a valid subagent_type
Task({
subagent_type: 'haiku', // ERROR! haiku is a model, not an agent
prompt: '...'
});
Valid subagent_types for ProSWARM:
- proswarm-orchestrator - Task decomposition and planning
- proswarm-executor - Execute individual subtasks
- proswarm-model-selector - Neural model classification
- proswarm-memory-manager - Memory coordination
- Explore - Codebase exploration
- general-purpose - General tasks

Valid model values:
- opus - Maximum intelligence, complex reasoning
- sonnet - Balanced performance (default if omitted)
- haiku - Fast, cost-effective for simple tasks

LINEAR CHAINS → Haiku-friendly
Task A → Task B → Task C → Done
FAN-OUT (parallel independent) → Haiku

         ┌→ Task B
Task A ──┼→ Task C   (all independent)
         └→ Task D
FAN-IN / AGGREGATION → Sonnet minimum, Opus for complex

Task A ──┐
Task B ──┼─→ Task D (synthesis)
Task C ──┘
CONNECTED GRAPH → Opus at junctions, Haiku at leaves

         ┌→ Task B ──┐
Task A ──┼→ Task C ──┼─→ Task E (aggregation → Opus)
         └→ Task D ──┘
BIDIRECTIONAL/CYCLIC → Opus throughout

Task A ←→ Task B
  ↕          ↕
Task C ←→ Task D
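One way to encode these topology rules is as a per-topology floor that the keyword-selected model is upgraded to but never downgraded from; a sketch, reusing the helpers defined further below (the floor object itself is an assumption, not part of the ProSWARM API):

// Minimum Claude model implied by graph shape (hypothetical encoding of the diagrams above)
const TOPOLOGY_FLOOR = {
  linear:    'haiku',   // chains of focused steps are haiku-friendly
  fan_out:   'haiku',   // parallel independent leaves
  fan_in:    'sonnet',  // the synthesis node needs at least sonnet
  connected: 'haiku',   // leaves stay haiku; junctions/aggregators upgrade via nodePosition
  cyclic:    'opus'     // bidirectional dependencies get opus throughout
};
// Applied with the same upgrade-only rule used by determineClaudeModel() below
// (task and topology are free variables supplied by the orchestration context)
const flooredModel = upgradeModel(getModelFromKeywords(task), TOPOLOGY_FLOOR[topology]);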
ProSWARM orchestrates the pipeline; Claude executes at each stage:
EXPLORATION → DECOMPOSITION → EXECUTION → AGGREGATION → FINAL REVIEW
 (ProSWARM)     (ProSWARM)     (Claude)     (Claude)      (Claude)
| Stage | Who | Claude Model | Notes |
|---|---|---|---|
| Exploration | ProSWARM | - | Neural models explore codebase |
| Decomposition | ProSWARM | - | Neural models create subtask structure |
| Execution | Claude | varies | Based on task keywords + topology |
| Aggregation | Claude | sonnet+ | Synthesizing multiple results |
| Final Review | Claude | upgrade | ALWAYS higher than execution model |
Critical Rule - Final Reviews:
// Final review MUST use higher model than execution
function getReviewModel(executionModel) {
if (executionModel === 'haiku') return 'sonnet';
if (executionModel === 'sonnet') return 'opus';
return 'opus'; // Opus reviews itself
}
const COMPLEXITY_INDICATORS = {
opus: [
'architecture', 'security audit', 'distributed system',
'authentication system', 'payment', 'encryption', 'compliance',
'database design', 'microservice', 'race condition', 'memory leak',
'data integrity', 'migration strategy', 'critical bug'
],
sonnet: [
'implement feature', 'add endpoint', 'create component',
'fix bug', 'unit test', 'integration test', 'api route',
'refactor function', 'optimize query', 'add validation'
],
haiku: [
'typo', 'rename', 'update text', 'add comment',
'lint', 'format', 'version bump', 'config change', 'env variable'
]
};
function determineClaudeModel(context) {
const {
taskDescription,
topology, // 'linear' | 'fan_out' | 'fan_in' | 'connected' | 'cyclic'
nodePosition, // 'leaf' | 'junction' | 'aggregator'
pipelineStage, // 'execution' | 'aggregation' | 'review'
dependencyCount, // Number of incoming dependencies
isReviewPhase // Boolean
} = context;
// STEP 1: Keywords determine base model (THIS IS PRIMARY)
let model = getModelFromKeywords(taskDescription);
// STEP 2: Topology can UPGRADE (never downgrade)
if (topology === 'cyclic') {
model = upgradeModel(model, 'opus');
} else if (nodePosition === 'junction' || nodePosition === 'aggregator') {
model = upgradeModel(model, 'sonnet');
}
if (dependencyCount >= 3) {
model = upgradeModel(model, 'sonnet');
}
// STEP 3: Aggregation/Review stages upgrade
if (pipelineStage === 'aggregation' && dependencyCount >= 2) {
model = upgradeModel(model, 'sonnet');
}
// STEP 4: Final review MUST be higher than execution
if (isReviewPhase) {
model = getReviewModel(model);
}
return model;
}
// Keywords are the PRIMARY selector - this determines base model
function getModelFromKeywords(task) {
const t = task.toLowerCase();
// Check OPUS keywords FIRST (complex tasks take priority)
const opusKeywords = [
'architecture', 'security audit', 'distributed',
'microservice', 'payment', 'encryption', 'compliance',
'database design', 'race condition', 'memory leak',
'data integrity', 'migration strategy', 'critical',
'multi-region', 'failover', 'eventual consistency',
'zero-downtime', 'rollback', 'oauth', 'saml', 'pii'
];
if (opusKeywords.some(kw => t.includes(kw))) {
return 'opus';
}
// Check SONNET keywords (standard development work)
const sonnetKeywords = [
'implement', 'feature', 'endpoint', 'api',
'component', 'fix bug', 'unit test', 'integration test',
'refactor', 'optimize', 'add validation', 'pagination',
'authentication', 'crud', 'form', 'modal'
];
if (sonnetKeywords.some(kw => t.includes(kw))) {
return 'sonnet';
}
// Check HAIKU keywords LAST (simple tasks only if nothing else matches)
const haikuKeywords = [
'typo', 'rename', 'format', 'lint', 'prettier',
'version bump', 'update version', 'config change',
'env variable', 'add comment', 'remove comment',
'console.log', 'debug log', 'update text',
'simple', 'quick', 'minor', 'small fix'
];
if (haikuKeywords.some(kw => t.includes(kw))) {
return 'haiku';
}
// Default: sonnet for unknown/vague tasks
return 'sonnet';
}
function upgradeModel(current, minimum) {
const h = { haiku: 0, sonnet: 1, opus: 2 };
return h[current] >= h[minimum] ? current : minimum;
}
function getReviewModel(executionModel) {
if (executionModel === 'haiku') return 'sonnet';
return 'opus';
}
| Agent | Default | When Opus | When Haiku |
|---|---|---|---|
| proswarm-orchestrator | sonnet | Cyclic dependencies, 5+ subtasks | Never |
| proswarm-executor | varies | Opus keywords in task | Haiku keywords in task |
| proswarm-model-selector | sonnet | - | Never (needs accuracy) |
| proswarm-memory-manager | haiku | Conflict resolution | Always (simple ops) |
| Explore | haiku | Architecture analysis | File discovery |
Key Insight: The agent type determines WHO does the work. The model determines HOW MUCH intelligence is applied. Both are independent choices.
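A small sketch of keeping those two choices independent when spawning, with defaults taken from the table above (the map and wrapper function are illustrative, not part of the Task API):

// Default Claude model per agent type, per the table above (sketch)
const AGENT_DEFAULT_MODEL = {
  'proswarm-orchestrator':   'sonnet',
  'proswarm-executor':       'sonnet',  // table says "varies" - usually overridden per subtask
  'proswarm-model-selector': 'sonnet',
  'proswarm-memory-manager': 'haiku',
  'Explore':                 'haiku',
  'general-purpose':         'sonnet'
};
function spawnAgent(subagentType, prompt, modelOverride) {
  return Task({
    subagent_type: subagentType,                                // WHO does the work
    model: modelOverride ?? AGENT_DEFAULT_MODEL[subagentType],  // HOW MUCH intelligence is applied
    prompt
  });
}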
// 1. Orchestrator analyzes and stores routing context
const routingContext = {
topology: analyzeTopology(decomposition),
neuralTiers: classifyNeuralModels(selectedModels),
dependencyGraph: buildDependencyGraph(subtasks)
};
await memory_store("routing_context", JSON.stringify(routingContext));
// 2. For each subtask, determine model
for (const subtask of decomposition.subtasks) {
const model = determineClaudeModel({
taskDescription: subtask.description,
topology: routingContext.topology,
nodePosition: getNodePosition(subtask, routingContext.dependencyGraph),
neuralModels: subtask.models,
pipelineStage: 'execution',
dependencyCount: subtask.dependencies.length,
isReviewPhase: false
});
await memory_store(`subtask_model_${subtask.id}`, model);
// Spawn with appropriate model
Task({
subagent_type: 'proswarm-executor',
model: model, // Dynamically determined
prompt: subtask.description
});
}
// 3. Final review always upgrades
Task({
subagent_type: 'proswarm-executor',
model: 'opus', // Review phase
prompt: 'Review and validate all changes from execution phase'
});
// Pricing (Nov 2025)
const COSTS = {
opus: { input: 15, output: 75 }, // $/MTok
sonnet: { input: 3, output: 15 },
haiku: { input: 0.25, output: 1.25 }
};
// Strategy: Use Haiku aggressively for parallelizable leaf work
// Reserve Opus for: junctions, reviews, Tier 3 models, cyclic graphs
// Default to Sonnet for: standard execution, decomposition, aggregation
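A rough cost sketch using the rates above (the helper and all token counts are illustrative placeholders):

// Estimate spend for a planned subtask mix before executing
function estimateCostUSD(plan) {
  // plan: [{ model: 'haiku' | 'sonnet' | 'opus', inputTok, outputTok }]
  return plan.reduce((sum, t) =>
    sum + (t.inputTok / 1e6) * COSTS[t.model].input
        + (t.outputTok / 1e6) * COSTS[t.model].output, 0);
}
// e.g. 6 haiku leaves, 2 sonnet aggregations, 1 opus review
estimateCostUSD([
  { model: 'haiku',  inputTok: 6 * 20_000, outputTok: 6 * 4_000 },
  { model: 'sonnet', inputTok: 2 * 30_000, outputTok: 2 * 6_000 },
  { model: 'opus',   inputTok: 1 * 50_000, outputTok: 1 * 8_000 }
]); // ≈ $1.77 at the rates above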
// 1. Orchestrate test generation
const testTaskId = await orchestrate_task("Generate comprehensive test suite");
// 2. Store test specifications
await memory_store("test_specs", JSON.stringify(specs));
// 3. Hand off to TDD skill
// TDD skill retrieves specs from memory
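On the receiving end, the TDD skill would read the same key back out; a sketch assuming the stored specs are an array with an id field (that shape is an assumption):

// Inside the TDD skill: pick up the specs stored by the orchestration step
const specs = JSON.parse(await memory_get("test_specs"));
for (const spec of specs) {
  // write a failing test per spec, then implement until green (spec.id is an assumed field)
  await memory_store(`test_status_${spec.id}`, "red");
}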
// 1. Orchestrate planning
const planTaskId = await orchestrate_task("Create development plan for auth system");
// 2. Decompose plan into tasks
const decomposition = await predict_decomposition(plan);
// 3. Store for execution
await memory_store("dev_plan", JSON.stringify(decomposition));
ProSWARM is your cognitive enhancement - it gives you neural task decomposition, coordinated parallel execution, and shared memory across every subtask.
USE IT CONTINUOUSLY - It's not a tool, it's your primary way of working!