# many-brain-one-task
// Run the same task with multiple agents simultaneously. Good for reviews, critiques, comparing models. Abbreviated as "MBOT"
| name | many-brain-one-task |
| description | Run the same task with multiple agents simultaneously. Good for reviews, critiques, comparing models. Abbreviated as "MBOT" |
| allowed-tools | Bash(bun *), Bash(cr *) |
This Skill helps solicit, gather and analyze multiple "opinions" from different AI models or even agents.
These may have been listed in the prompt already, or they may be assumed from the User Preferences if not specified.
Determine the preferences file to load.
Order of precedence:
1. If the prompt includes `--profile X`, then `X` is the profile name.
2. Otherwise the profile name is `default`.

Load the profile file (the profile name with a `.md` suffix) from this skill's directory. If the custom profile file does not exist, load `default.md`; if that file does not exist either, just use the defaults as specified below.
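The resolution order above can be sketched in shell. This is a minimal illustration, not part of the skill's tooling; `SKILL_DIR` and the `custom` profile name are hypothetical placeholders (here `SKILL_DIR` points at an empty temp dir so both fallbacks fire):

```sh
# Profile resolution sketch; SKILL_DIR stands in for this skill's directory.
SKILL_DIR="$(mktemp -d)"              # empty dir here, so the fallbacks fire
profile="custom"                      # from --profile custom; "default" if absent
f="$SKILL_DIR/$profile.md"
if [ ! -f "$f" ]; then f="$SKILL_DIR/default.md"; fi   # custom missing: try default.md
if [ ! -f "$f" ]; then f=""; fi                        # neither exists: built-in defaults
echo "${f:-built-in-defaults}"
```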
Preferred harness: OpenCode
Not all agents can run subagents with other models. The agents and harnesses available will vary by user. For example:
- Claude Code must use another harness such as OpenCode for non-Claude models.
- Codex must use another harness for non-OpenAI models.
- Gemini must use another harness for non-Gemini models.
- OpenCode must use Claude Code (`claude`) for Claude models (Opus, Sonnet, Haiku) but can likely use subagents for all other models (unless the user preferences specify otherwise).
The user may have specified their preferred harness for a given model such as "Use codex CLI for OpenAI models".
These rules will be specified in the User Preferences file if loaded, otherwise just prefer to use OpenCode as available.
Translate the user's preferences into a plan for launching the agents.
Before launching any agent, check the selected model family against the current host harness:
- Claude model (Opus, Sonnet, Haiku) on an OpenCode host: do NOT route through a `colin-mbot-*` subagent. Shell out through the `claude` CLI instead.
- Non-Claude model on a Claude Code host: no `colin-mbot-*` subagent is reachable; use `occtl run` when available, with `run-opencode.ts` as the fallback.

This guard overrides any generic `colin-mbot-*` mapping. In particular, never invoke Opus/Sonnet/Haiku as `colin-mbot-opus`, `colin-mbot-sonnet`, or similar from an OpenCode host unless the user explicitly requests OpenCode-routed Claude.
For OpenCode running models via OpenCode, use the "task" tool by specifying a subagent_type from the available sub-agents with names that start with "colin-mbot-". DO NOT use other agents that do not start with "colin-mbot-". For example, you can use "colin-mbot-glm" if the user has specified "GLM" as a desired participating agent. This automatically runs with the correct model.
When Claude Code is the host, the colin-mbot-* subagents are NOT exposed in the Agent tool's subagent_type enum — they're OpenCode subagents only reachable from inside OpenCode. From Claude Code, drive OpenCode through occtl run (preferred) or the bundled run-opencode.ts wrapper (fallback) as described below. For Claude models, use the Claude Code Agent tool directly (Agent({subagent_type: "general-purpose", model: "opus"})) rather than shelling out to the claude CLI — the claude-CLI form is a fallback.
There are two ways to launch an OpenCode session from this skill. Prefer occtl run when it is available; fall back to the bundled run-opencode.ts script only when occtl is missing, too old, or its server check fails.
Decide the invocation method up front and reuse the same one for every OpenCode-backed agent in this MBOT run. Cache the result (e.g. OPENCODE_VIA=occtl or OPENCODE_VIA=run-opencode-ts).
```sh
# 1. occtl installed and at least 1.2.0 (occtl run was added in 1.2.0)
occtl --version   # prints version, exits 0 on success

# 2. occtl can reach an OpenCode server (auto-detected, env-overridden, or attach-directive target)
occtl ping        # prints "OK <url>", exits 0 on success
```
Treat occtl as available only when both checks pass and the printed version compares ≥ 1.2.0 (1.2.x, 1.3.x, 2.x qualify; 1.1.x does not). If occtl is missing, older, or ping fails, fall through to the run-opencode.ts path. If a profile contains an attach directive (see "OpenCode server attach" below), set OPENCODE_SERVER_HOST/OPENCODE_SERVER_PORT and OPENCODE_SERVER_PASSWORD from it before the ping so the check exercises the real target.
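The version-floor comparison can be done without extra tooling where `sort -V` (version sort, available in GNU and modern BSD coreutils) exists. A sketch with a fabricated captured version string standing in for real `occtl --version` output:

```sh
# Compare a captured occtl version against the 1.2.0 floor using sort -V.
ver="1.3.2"        # stand-in for the output of: occtl --version
min="1.2.0"
if [ "$(printf '%s\n' "$min" "$ver" | sort -V | head -n1)" = "$min" ]; then
  OPENCODE_VIA=occtl            # version >= floor: cache the occtl path
else
  OPENCODE_VIA=run-opencode-ts  # too old (or parse failed): use the script
fi
echo "$OPENCODE_VIA"
```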
When occtl is the chosen path, also consult its bundled skill for the full surface area (sessions, send, attach, worktrees, Ralph Mode):
```sh
occtl view-skill | head -200
```
## occtl run (when occtl is available)

`occtl run` creates a session, sends the prompt, waits for `session.idle`, and writes the assistant text — all through the OpenCode HTTP API. None of the `opencode run` subprocess workarounds (the `--dir .` flag dance, NDJSON parsing, `XDG_STATE_HOME` EROFS, `--dangerously-skip-permissions`) apply because there is no subprocess.
```sh
occtl run \
  --model opencode/gemini-3.1-pro \
  --variant xhigh \
  --title "ultra-review !2514 craft/Gemini-3.1-Pro" \
  --file .tmp/ultra-review-2514/craft.full.md \
  --out .tmp/ultra-review-2514/results/craft-gemini.out \
  --timeout 540000 \
  -- "Perform the code review exactly as instructed."
```
If there is no running server (or the profile asks for one fresh server per agent, e.g. for isolation), add --spawn. occtl picks a random free port, isolates XDG_STATE_HOME, runs the prompt, and SIGTERM/SIGKILLs the child on exit:
```sh
occtl run --spawn --model openai/gpt-5.4 \
  --file .tmp/ultra-review-2514/bugs.full.md \
  --out .tmp/ultra-review-2514/results/bugs-gpt.out \
  --timeout 540000 \
  -- "Perform the code review exactly as instructed."
```
Flag mapping from run-opencode.ts → occtl run:
| run-opencode.ts | occtl run | Notes |
|---|---|---|
| `--model` | `--model` | same |
| `--variant` | `--variant` | same |
| `--agent` | `--agent` | same |
| `--title` | `--title` | same |
| `--file` (repeatable) | `--file` (repeatable) | files are concatenated into one text part with the trailing positional appended |
| `--attach <url>` | env: `OPENCODE_SERVER_HOST` / `OPENCODE_SERVER_PORT` | occtl auto-detects a running server or honors the env vars; there is no separate attach flag |
| `--password <pw>` | `--password <pw>` | same; reads `OPENCODE_SERVER_PASSWORD` if the flag is omitted |
| `--timeout-ms <n>` | `--timeout <n>` | same units (ms) |
| `--out <path>` | `--out <path>` | same; sidecar `<out>.session` is always written |
| `--stderr <path>` | `--stderr <path>` | same |
| `--thinking` | `--thinking` | same |
| `--format json` extraction | (always API) | for the full assistant message JSON, pass `--raw <path>` |
| (none) | `--spawn`, `--spawn-port` | spawn an ephemeral `opencode serve` on a random free port; tears down on exit |
| (none) | `--ephemeral` | delete the session after a successful run (default keeps it so you can audit token usage) |
| `-- <message>` | `-- <message>` | same; keep brief, real instructions go in `--file` |
Exit codes match the script: 0 success, 1 empty/no-text response or generic failure, 2 invalid arguments, 124 timeout.
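Reacting to those exit codes can be sketched as below; `run_agent` is a local stub standing in for a real `occtl run` invocation, used only so the dispatch logic is runnable in isolation:

```sh
# Stub standing in for: occtl run … (real runs return these same codes).
run_agent() { return "$1"; }

rc=0
run_agent 124 || rc=$?     # simulate a timeout for illustration
case "$rc" in
  0)   echo ok ;;          # assistant text written to --out
  2)   echo bad-args ;;    # fix the invocation before retrying
  124) echo timeout ;;     # mark the participant timed out, consider a backup
  *)   echo failed ;;      # empty/no-text response or generic failure
esac
```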
## run-opencode.ts (when occtl is not available)

When the preflight finds `occtl` missing, too old, or unable to reach a server, every OpenCode call goes through `run-opencode.ts`. It normalizes the flags that have tripped us in the past (file vs argv, the `--` separator, `--dir .` in attach mode, `--format json` parsing, `--dangerously-skip-permissions` for local spawns) so callers only pass what varies. Invoke it inline in a single Bash call (wrapper `.sh` forms trip the Claude Code sandbox even with `dangerouslyDisableSandbox: true`):
```sh
bun "${CLAUDE_SKILL_DIR}/run-opencode.ts" \
  --model opencode/gemini-3.1-pro \
  --variant xhigh \
  --title "ultra-review !2514 craft/Gemini-3.1-Pro" \
  --file .tmp/ultra-review-2514/craft.full.md \
  --attach http://seamus:4095 \
  --timeout-ms 540000 \
  --out .tmp/ultra-review-2514/results/craft-gemini.out \
  -- "Perform the code review exactly as instructed."
```
Flags:
| Flag | When to pass | Notes |
|---|---|---|
| `--model <provider/model>` | always | e.g. `opencode/gemini-3.1-pro`, `zai-coding-plan/glm-5.1`. Prefer coding plans over `openrouter/` and `opencode/` when available. |
| `--variant <name>` | when the model supports it | `xhigh`, `high`, `max`, `minimal`, etc. — provider-specific reasoning effort. |
| `--title <str>` | always | Session title in the opencode UI. Include a stable prefix so batch runs are groupable. |
| `--file <path>` | always | Path to the full prompt. Repeatable. See ".tmp/ must be inside the project root" below. |
| `--attach <url>` | when the profile says so | Server URL, e.g. `http://seamus:4095`. Script auto-adds `--dir .`. |
| `--password <pw>` | attach with auth | Otherwise `OPENCODE_SERVER_PASSWORD` is used. |
| `--dir <path>` | rare override | Default is `.` in attach mode, unset in local mode. |
| `--timeout-ms <n>` | recommended | Script-level timeout. Prefer a value below the Bash tool timeout so the script can write diagnostics and sidecars before the outer command is killed. |
| `--out <path>` | usually | Write assistant text to this file. Parent dirs are created. Without it, text is written to stdout. |
| `--stderr <path>` | on failures | Capture the opencode stderr to a file for diagnosis. |
| `--format default\|json` | rarely | Defaults to `json`. In json mode the script extracts and concatenates every text event; `default` passes through as-is. |
| `--thinking` | rarely | Forward `--thinking` to opencode. |
| `--agent <name>` | rarely | Forwarded verbatim. Omit by default — opencode's default agent is fine. |
| `-- <message>` | always | Positional short message after `--`. Keep it brief; the real instructions go in `--file`. |
In json mode with --out, the script also writes <out>.raw.jsonl with raw opencode events and <out>.session with any discovered OpenCode session ids. If opencode exits 0 but produces no non-whitespace text, the script exits non-zero and reports that the provider may be unavailable or spend-limited.
What the script does not handle: choosing the model, choosing whether to attach, writing the prompt file. Those are still caller decisions.
Ultra-review-shaped flows (or any pattern that sends the same MR context to multiple role-specific prompts) tend to repeat a template: write bucket.md once with the shared MR context, write one role-<name>.md per role, then concatenate each role file with the bucket. A bundled helper does all the concatenations in a single call:
```sh
bun "${CLAUDE_SKILL_DIR}/assemble-prompts.ts" \
  --append .tmp/ultra-review-2514/bucket.md \
  --out-dir .tmp/ultra-review-2514 \
  .tmp/ultra-review-2514/role-bugs.md:bugs.full.md \
  .tmp/ultra-review-2514/role-runtime.md:runtime.full.md \
  .tmp/ultra-review-2514/role-craft.md:craft.full.md
```
Each positional is <source>:<output-name> and produces <out-dir>/<output-name> containing the source followed by the --append file. Prints a compact JSON summary (out_dir, append_bytes, per-output {out, source, bytes} and any error). --append is optional; without it the helper is just an atomic multi-copy. Saves the caller from chaining N cat calls and gives one JSON object to parse for byte counts.
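For clarity, the manual form the helper replaces looks like the sketch below, shown for one role with fabricated file contents (the `.tmp/demo` paths are illustrative, not part of any real run):

```sh
# Manual form of one role: concatenate role prompt + shared bucket.
mkdir -p .tmp/demo
printf 'ROLE: bugs\n'     > .tmp/demo/role-bugs.md   # per-role instructions
printf 'SHARED CONTEXT\n' > .tmp/demo/bucket.md      # shared MR context
cat .tmp/demo/role-bugs.md .tmp/demo/bucket.md > .tmp/demo/bugs.full.md
wc -c < .tmp/demo/bugs.full.md
```

The helper does exactly this per positional, plus atomic writes and a single JSON summary instead of N separate `cat` exit statuses.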
Other harnesses may be invoked via a shell command directly:
- `claude --agent general --model opus --print --output-format text --name "MBOT: Code review for X" --effort max "PROMPT_HERE"` (fallback only; prefer the Agent tool when Claude Code is the host)
- `codex exec -c model="gpt-5.4" --ephemeral "PROMPT_HERE"`
- `codex review -c model="gpt-5.4" --base <branch> > ./.codex-review.txt 2>&1`
- `gemini --model gemini-3.1-flash-lite-preview --prompt "PROMPT_HERE"`

If the user or profile specifies CodeRabbit, Coderabbit, or `cr` as a participating agent/model, invoke the authenticated CodeRabbit CLI directly instead of routing it through OpenCode or Claude. Assume `cr` is already installed and authenticated. Do not attempt login, token setup, or recovery; if `cr` exits non-zero, abort that CodeRabbit participant, record the error in the MBOT summary, and continue with other participants/backups as usual.
Use cr --agent with an explicit base when possible:
```sh
cr --agent --base-commit <sha> --config <extra-file.txt> > .tmp/mbot/results/coderabbit.ndjson
```
Guidelines:
- Pass `--base-commit <sha>` for review tasks. Resolve `<sha>` from the intended comparison base, e.g. merge-base with the target branch, the PR/MR base commit, or a user-specified SHA. If the task is not diff/review-shaped and there is no meaningful base commit, omit `--base-commit` only if the CodeRabbit CLI supports the requested mode.
- Pass `--config <path>` when the MBOT prompt needs extra instructions. Write a small config/instructions file inside the project `.tmp/` directory and pass that path. If no extra instructions are needed, omit `--config`.
- Parse the `.ndjson` result file. CodeRabbit emits NDJSON status and completion records such as `review_context`, `status`, and `complete`.
- Treat a `{"type":"complete","status":"review_completed","findings":N}` record as successful completion. Summarize findings and include any review output records that contain actual findings/comments. Ignore transient status records except for diagnostics.
- If `cr` fails, include the command, exit status, and stderr path or excerpt in the final MBOT summary; do not retry authentication.

It may be helpful to instruct the agent to use markers to help parse the output for the "Gather and summarize" step at the end.
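Detecting the completion record can be a plain `grep` over the NDJSON file. The records below are fabricated samples in the shape described above, not captured CodeRabbit output:

```sh
mkdir -p .tmp/mbot/results
# Fabricated sample records mimicking the described NDJSON shape.
printf '%s\n' \
  '{"type":"status","message":"reviewing"}' \
  '{"type":"complete","status":"review_completed","findings":3}' \
  > .tmp/mbot/results/coderabbit.ndjson

# Success iff a complete record with review_completed is present.
if grep -q '"type":"complete","status":"review_completed"' .tmp/mbot/results/coderabbit.ndjson; then
  echo coderabbit-ok
fi
```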
- `.tmp/` must be inside the project root, not `$TMPDIR`. opencode has its own permission system (separate from the Claude Code sandbox) that auto-rejects reads outside the project with `permission requested: external_directory; auto-rejecting`. Keep prompt files inside a gitignored `.tmp/` in the project. `$TMPDIR` also resolves to different paths in sandboxed vs sandbox-disabled Bash calls, so files created in one may be invisible to the other.
- `bun "${CLAUDE_SKILL_DIR}/run-opencode.ts" …` from Claude Code may still need `dangerouslyDisableSandbox: true` depending on the host's `sandbox.filesystem.allowWrite`. opencode writes to `~/.local/share/opencode/`; if that path is not in `allowWrite`, the SQLite `PRAGMA wal_checkpoint` fails. The Seamus bot's `gitlab-settings.json` already allows `~/.local/share`, but other hosts may not.
- `opencode models` lists everything the install knows about, but some return `Error: Model is disabled` at runtime (e.g. `opencode/gpt-5.4-nano` on certain plans). If a profile names a model, verify it with a trivial prompt before launching a batch.
- `--file` is more reliable than "Read /path/..." in the prompt body. When the prompt tells the model to use the Read tool to fetch a large file, some models (observed with Gemini 3.1 Pro and GLM 5.1) silently terminate after 3-4 chunk reads without producing any ISSUE blocks. Attaching via `--file` sidesteps that.

Multi-agent runs hit the same set of Claude Code Bash-tool guards every time. Each pattern below has a single, deterministic replacement; use the right shape from the start instead of discovering the guard:
| Avoid | Why it fails | Use instead |
|---|---|---|
| `sleep 60; cmd` | Long leading sleep is hard-blocked | `until <check>; do sleep 2; done` invoked via the Monitor tool — the runtime notifies you when the loop exits. For a specific bg task, prefer `run_in_background: true` + TaskGet/TaskOutput over polling. |
| `export X=Y; cmd` (or bare `export X=Y`) | Tripped as "multiple operations" requiring approval | Single-statement env-prefix form: `X=Y cmd` (no `export`, no `;`) |
| `prev=0` (standalone assignment) | Bash-AST parser rejects with cryptic `Unhandled node type: string` | Move the statefulness into a `bun script.ts` invocation |
| `until [ "$(ls …)" -eq N ]; do …; done` | `$(…)` rejected with "Contains command_substitution"; bare `until` may also trip the AST parser | Move the loop into a `bun script.ts` invocation, or use Monitor with a check that uses no `$(…)` (e.g. `until test -f /path/sentinel; do sleep 2; done`) |
| `$(cmd2)` / `` `cmd2` `` anywhere | "Contains command_substitution" / "Contains expansion" | Capture intermediate output to a file (`cmd > file`) and Read it, or chain in `bun script.ts` |
| `<<EOF … EOF` heredocs | Trips the bash sandbox via `/proc/self/fd/3` | Use the Write tool to create the file, then reference its path |
| `bash foo.sh` / `./foo.sh` | Wrapper scripts trip the sandbox even with `dangerouslyDisableSandbox: true` | Invoke the interpreter directly: `bun foo.ts`, `node foo.mjs`, `python3 foo.py` (these aren't classified as wrappers) |
| `cd /tmp/foo && …` | The session has a working-directory allowlist that may not include the target | Use absolute paths in every command instead of `cd` |
| Bash tool param `timeout_ms: …` | Returns `InputValidationError: An unexpected parameter timeout_ms was provided` | Use `timeout` (milliseconds). Default 120000; pass `timeout: 600000` for a 10-minute cap. |
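The sentinel-wait replacement from the table, shown end-to-end. The sentinel path is hypothetical; in a real run the background task creates it (here it is pre-created so the sketch terminates immediately):

```sh
mkdir -p .tmp/mbot
: > .tmp/mbot/sentinel        # in a real run, the bg task writes this when done
# No $(…), no leading sleep, no assignment: safe under the Bash-tool guards.
until test -f .tmp/mbot/sentinel; do sleep 2; done
echo sentinel-found
```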
When the shared prompt concatenates instructions + AGENTS.md + a large diff, some models (observed with GLM 5.1) report line numbers relative to the prompt file rather than the real source file. During the validation step, re-anchor any finding whose line number exceeds the actual file length before trusting the citation.
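A minimal sketch of that sanity check; the sample file and the cited line number are fabricated for illustration:

```sh
printf 'line1\nline2\nline3\n' > .tmp-sample.txt    # stand-in source file
cited=512                                           # line number from a finding
len=$(wc -l < .tmp-sample.txt | tr -d ' ')
if [ "$cited" -gt "$len" ]; then
  echo re-anchor    # citation is relative to the prompt file, not the source
fi
```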
- `codex review` does not support `--ephemeral`.
- `codex review` requires `--base <branch>`.
- To run `codex review` from Claude, you must disable the sandbox because Codex writes session files during review runs.

When using opencode, resolve model names with `opencode models` if the user did not specify the exact name. For example, "GLM 5.1" might resolve to `zai-coding-plan/glm-5.1` or `openrouter/z-ai/glm-5.1` depending on which connections are available. Prefer coding plans over `openrouter/` and `opencode/` when available unless otherwise specified.
If the user's profile contains an attach directive, prefer attaching to a running OpenCode server instead of spawning a fresh local opencode per agent. This is much faster and avoids reloading provider config / session DB on every invocation.
Recognize these prose forms in profiles (case-insensitive):
Global (applies to every OpenCode invocation in this MBOT run):
- Attach OpenCode to seamus:4095
- Attach OpenCode to http://seamus:4095 with password hunter2
- OpenCode attach: seamus:4095 (password: hunter2)
Per-agent (overrides any global directive on that line only):
- OpenCode with GLM 5.1 via attach seamus:4095
- OpenCode with GPT-5.4 via attach http://seamus:4095 (password: hunter2)
URL normalization: if the directive is missing a scheme, prefix http:// (e.g. seamus:4095 → http://seamus:4095). Default opencode port is 4096.
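The normalization rule as a short runnable sketch (the input value is a hypothetical directive target):

```sh
url="seamus:4095"                 # directive value, scheme missing
case "$url" in
  http://*|https://*) : ;;        # already has a scheme: leave as-is
  *) url="http://$url" ;;         # otherwise prefix the default scheme
esac
echo "$url"
```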
Password: optional. If with password X / (password: X) / password: X is present, pass --password X (works for both occtl run and run-opencode.ts). Otherwise omit the flag — both tools fall back to OPENCODE_SERVER_PASSWORD from the environment.
Plumbing the directive to occtl run: occtl has no --attach flag — it auto-detects a server from OPENCODE_SERVER_HOST / OPENCODE_SERVER_PORT (and OPENCODE_SERVER_PASSWORD). For an attach directive http://seamus:4095 with password hunter2, set the env vars once at the start of the batch (single-statement env-prefix form, since this is run inline through the Bash tool):
```sh
OPENCODE_SERVER_HOST=seamus OPENCODE_SERVER_PORT=4095 OPENCODE_SERVER_PASSWORD=hunter2 \
  occtl ping
```
Then run each occtl run invocation in the same shape (the env vars only need to be set on each command — the cached OPENCODE_VIA=occtl decision from preflight stays valid for the rest of the batch).
If the user specified --dry-run then do not actually run the review and instead just advise the user what the exact execution plan looks like using an abbreviated prompt (first ~100 characters) for readability.
Run the subagents and/or shell commands in parallel, passing the appropriate context in the prompt. If any model or agent fails to execute, skip it, note it in the summary, and use a backup as specified by the user or the user preferences file (if loaded).
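The fan-out itself can be as simple as backgrounded commands plus `wait`. In this sketch, `run_one` is a stub standing in for a real agent launch, and `.tmp/mbot-par` is an illustrative results directory:

```sh
mkdir -p .tmp/mbot-par
run_one() {                 # stub standing in for one agent invocation
  echo "$1 finished" > ".tmp/mbot-par/$1.out"
}
# Launch participants concurrently; a failed one just leaves no result file.
run_one glm & run_one gpt & run_one gemini &
wait                        # block until every participant returns
ls .tmp/mbot-par | wc -l
```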
Collect the results and then apply the user's finalizing steps for the task if specified. Otherwise, the default finalizing task is to summarize the findings in aggregate and compare models: scrutinize each model's output, pick winners and losers, and note any interesting differences.