---
name: trellis-brainstorm
description: Guides collaborative requirements discovery before implementation. Creates task directory, seeds PRD, asks high-value questions one at a time, researches technical choices, and converges on MVP scope. Use when requirements are unclear, there are multiple valid approaches, or the user describes a new feature or complex task.
---
# Brainstorm - Requirements Discovery (AI Coding Enhanced)
**Core rule**: Interview me relentlessly about every aspect of this plan until we reach a shared understanding. Walk down each branch of the design tree, resolving dependencies between decisions one by one. For each question, provide your recommended answer.
Ask the questions one at a time.
If a question can be answered by exploring the codebase, explore the codebase instead.
Guide AI through collaborative requirements discovery before implementation, optimized for AI coding workflows:
- Task-first (capture ideas immediately)
- Action-before-asking (reduce low-value questions)
- Research-first for technical choices (avoid asking users to invent options)
- Diverge → Converge (expand thinking, then lock MVP)
## When to Use
Triggered from $start when the user describes a development task, especially when:
- requirements are unclear or evolving
- there are multiple valid implementation paths
- trade-offs matter (UX, reliability, maintainability, cost, performance)
- the user might not know the best options up front
## Core Principles (Non-negotiable)
- **Task-first (capture early)**
  Always ensure a task exists at the start so the user's ideas are recorded immediately.
- **Action before asking**
  If you can derive the answer from repo code, docs, configs, conventions, or quick research, do that first.
- **One question per message**
  Never overwhelm the user with a list of questions. Ask one, update the PRD, repeat.
- **Prefer concrete options**
  For preference/decision questions, present 2–3 feasible, specific approaches with trade-offs.
- **Research-first for technical choices**
  If the decision depends on industry conventions, similar tools, or established patterns, do research first, then propose options.
- **Diverge → Converge**
  After initial understanding, proactively consider future evolution, related scenarios, and failure/edge cases, then converge to an MVP with explicit out-of-scope items.
- **No meta questions**
  Do not ask "should I search?" or "can you paste the code so I can continue?" If you need information, search or inspect. If blocked, ask the minimal blocking question.
## Step 0: Ensure Task Exists (ALWAYS)
Before any Q&A, ensure a task exists. If none exists, create one immediately.
- Use a temporary working title derived from the user's message.
- It's OK if the title is imperfect; refine it later in the PRD.
```bash
TASK_DIR=$(python3 ./.trellis/scripts/task.py create "brainstorm: <short goal>" --slug <auto>)
```

Use a slug without a date prefix; `task.py create` adds the `MM-DD-` directory prefix automatically.
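The naming rule stated above can be sketched as a tiny helper. This is a hypothetical reimplementation for illustration; `task_dirname` is not part of `task.py`'s real API.

```python
from datetime import date

def task_dirname(slug: str, today: date) -> str:
    """Prefix a slug with the MM-DD- date prefix that task.py adds."""
    # e.g. slug "add-csv-export" on March 5 -> "03-05-add-csv-export"
    return f"{today:%m-%d}-{slug}"
```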
Create/seed prd.md immediately with what you know:
```markdown
# brainstorm: <short goal>

## Goal
<one paragraph: what + why>

## What I already know
* <facts from user message>
* <facts discovered from repo/docs>

## Assumptions (temporary)
* <assumptions to validate>

## Open Questions
* <ONLY Blocking / Preference questions; keep list short>

## Requirements (evolving)
* <start with what is known>

## Acceptance Criteria (evolving)
* [ ] <testable criterion>

## Definition of Done (team quality bar)
* Tests added/updated (unit/integration where appropriate)
* Lint / typecheck / CI green
* Docs/notes updated if behavior changes
* Rollout/rollback considered if risky

## Out of Scope (explicit)
* <what we will not do in this task>

## Technical Notes
* <files inspected, constraints, links, references>
* <research notes summary if applicable>
```
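As a self-contained sketch, the create-and-seed flow above might look like this in Python. The task directory is stubbed with a temp dir here (in the real flow it comes from `task.py create`), and the PRD content is illustrative.

```python
import tempfile
from pathlib import Path

# Stub for TASK_DIR; normally captured from `task.py create`.
task_dir = Path(tempfile.mkdtemp())

# Seed prd.md immediately with what is already known (illustrative content).
prd_skeleton = """\
# brainstorm: add csv export

## Goal
Let users export report data as CSV so it can be analyzed in spreadsheets.

## Assumptions (temporary)
* Exports are small enough to generate synchronously

## Open Questions
* Which reports need export in the MVP?

## Out of Scope (explicit)
* Scheduled/automated exports
"""

prd_path = task_dir / "prd.md"
prd_path.write_text(prd_skeleton, encoding="utf-8")
print(f"Seeded {prd_path}")
```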
## Step 1: Auto-Context (DO THIS BEFORE ASKING QUESTIONS)
Before asking questions like "what does the code look like?", gather context yourself:
### Repo inspection checklist
- Identify likely modules/files impacted
- Locate existing patterns (similar features, conventions, error handling style)
- Check configs, scripts, existing command definitions
- Note any constraints (runtime, dependency policy, build tooling)
### Documentation checklist
- Look for existing PRDs/specs/templates
- Look for command usage examples, README, ADRs if any
Write findings into the PRD:
- Add findings to `What I already know`
- Add constraints/links to `Technical Notes`
## Step 2: Classify Complexity (still useful, not gating task creation)
| Complexity | Criteria | Action |
|---|---|---|
| Trivial | Single-line fix, typo, obvious change | Skip brainstorm, implement directly |
| Simple | Clear goal, 1–2 files, scope well-defined | Ask 1 confirming question, then implement |
| Moderate | Multiple files, some ambiguity | Light brainstorm (2–3 high-value questions) |
| Complex | Vague goal, architectural choices, multiple approaches | Full brainstorm |
Note: Task already exists from Step 0. Classification only affects depth of brainstorming.
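The table's decision rule can be sketched as a small helper. The signal names and thresholds are an illustrative reading of the table, not part of the skill itself.

```python
def classify_complexity(single_line_fix: bool, goal_is_clear: bool,
                        files_touched: int, has_arch_choice: bool) -> str:
    """Map rough task signals to a brainstorm depth per the Step 2 table."""
    if single_line_fix:
        return "trivial"     # skip brainstorm, implement directly
    if has_arch_choice or not goal_is_clear:
        return "complex"     # full brainstorm
    if files_touched <= 2:
        return "simple"      # one confirming question, then implement
    return "moderate"        # light brainstorm (2-3 high-value questions)
```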
## Step 3: Question Gate (Ask ONLY high-value questions)
Before asking ANY question, run the following gate:
### Gate A: Can I derive this without the user?
If the answer is available via:
- repo inspection (code/config)
- docs/specs/conventions
- quick market/OSS research
→ Do not ask. Fetch it, summarize it, and update the PRD.
### Gate B: Is this a meta/lazy question?
Examples:
- "Should I search?"
- "Can you paste the code so I can proceed?"
- "What does the code look like?" (when the repo is available)
→ Do not ask. Take action.
### Gate C: What type of question is it?
- Blocking: cannot proceed without user input
- Preference: multiple valid choices; depends on product/UX/risk preference
- Derivable: should be answered by inspection/research
→ Only ask Blocking or Preference questions.
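The three gates compose into a single predicate. This sketch uses a hypothetical `Question` shape; only the decision rules come from the text above.

```python
from dataclasses import dataclass

@dataclass
class Question:
    kind: str                 # "blocking" | "preference" | "derivable"
    derivable: bool = False   # Gate A: answerable via repo/docs/research?
    is_meta: bool = False     # Gate B: a "should I search?"-style question?

def should_ask(q: Question) -> bool:
    """Return True only if the question survives Gates A-C."""
    if q.derivable:           # Gate A: fetch it yourself instead
        return False
    if q.is_meta:             # Gate B: take action instead
        return False
    return q.kind in {"blocking", "preference"}  # Gate C
```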
## Step 4: Research-first Mode (Mandatory for technical choices)
### Trigger conditions (any one → research-first)
- The task involves selecting an approach, library, protocol, framework, template system, plugin mechanism, or CLI UX convention
- The user asks for "best practice", "how others do it", "recommendation"
- The user can't reasonably enumerate options
### Delegate to the trellis-research sub-agent (don't research inline)
For each research topic, spawn a trellis-research sub-agent via the Task tool; don't do `WebFetch` / `WebSearch` / `gh api` inline in the main conversation.
Why:
- The sub-agent has its own context window, so it doesn't pollute the brainstorm context with raw tool output
- It persists findings to `{TASK_DIR}/research/<topic>.md` (the contract; see workflow.md Phase 1.3)
- It returns only `{file path, one-line summary}` to the main agent
- Independent topics can be parallelized: spawn multiple sub-agents in one tool call
Codex inline-mode exception: on Codex CLI with `dispatch_mode: inline` (the default), the main agent does research inline; dispatching trellis-research is unnecessary because the main session already has full task context. With `dispatch_mode: sub-agent`, trellis-research can be dispatched normally: the dispatch prompt's `Active task: <path>` first line tells the sub-agent which `{task_dir}/research/` to write into (see .trellis/workflow.md sub-agent dispatch protocol).
Agent type: `trellis-research`
Task description template: "Research <topic>; persist findings to `{TASK_DIR}/research/<topic-slug>.md`."
❌ Bad (what you must NOT do):

```text
Main agent: WebFetch(url-A) → WebFetch(url-B) → Bash(gh api ...)
            → WebSearch(q1) → WebSearch(q2) → ... (10+ inline calls)
            → Write(research/topic.md)
```

→ Pollutes the main context with raw HTML/JSON and burns tokens.
✅ Good:

```text
Main agent: Task(subagent_type="trellis-research",
                 prompt="Research topic A; persist to research/topic-a.md")
          + Task(subagent_type="trellis-research",
                 prompt="Research topic B; persist to research/topic-b.md")
          + Task(subagent_type="trellis-research",
                 prompt="Research topic C; persist to research/topic-c.md")
```

→ Reads research/topic-{a,b,c}.md after they finish.
### Research steps (to pass into each sub-agent prompt)
Each trellis-research sub-agent should:
- Identify 2–4 comparable tools/patterns for its topic
- Summarize common conventions and why they exist
- Map conventions onto our repo constraints
- Write findings to `{TASK_DIR}/research/<topic>.md`

The main agent then reads the persisted files and produces 2–3 feasible approaches in the PRD.
### Research output format (PRD)
The PRD itself should only reference the persisted research files, not duplicate their content. Add a `## Research References` section pointing at `research/*.md`.
Optionally, add a convergence section with feasible approaches derived from the research:
```markdown
## Research References
* [`research/<topic-a>.md`](research/<topic-a>.md): <one-line takeaway>
* [`research/<topic-b>.md`](research/<topic-b>.md): <one-line takeaway>

## Research Notes

### What similar tools do
* ...

### Constraints from our repo/project
* ...

### Feasible approaches here
**Approach A: <name>** (Recommended)
* How it works:
* Pros:
* Cons:

**Approach B: <name>**
* How it works:
* Pros:
* Cons:

**Approach C: <name>** (optional)
* ...
```
Then ask one preference question:
- "Which approach do you prefer: A / B / C (or other)?"
## Step 5: Expansion Sweep (DIVERGE) - Required after initial understanding
After you can summarize the goal, proactively broaden thinking before converging.
### Expansion categories (keep to 1–2 bullets each)
- **Future evolution**
  - What might this feature become in 1–3 months?
  - What extension points are worth preserving now?
- **Related scenarios**
  - What adjacent commands/flows should remain consistent with this?
  - Are there parity expectations (create vs update, import vs export, etc.)?
- **Failure & edge cases**
  - Conflicts, offline/network failure, retries, idempotency, compatibility, rollback
  - Input validation, security boundaries, permission checks
### Expansion message template (to user)
```markdown
I understand you want to implement: <current goal>.
Before diving into design, let me quickly diverge to consider three categories (to avoid rework later):
1. Future evolution: <1–2 bullets>
2. Related scenarios: <1–2 bullets>
3. Failure/edge cases: <1–2 bullets>

For this MVP, which would you like to include (or none)?
1. Current requirement only (minimal viable)
2. Add <X> (reserve for future extension)
3. Add <Y> (improve robustness/consistency)
4. Other: describe your preference
```
Then update the PRD:
- What's in the MVP → `Requirements`
- What's excluded → `Out of Scope`
## Step 6: Q&A Loop (CONVERGE)
### Rules
**Question priority (recommended)**
- MVP scope boundary (what is included/excluded)
- Preference decisions (after presenting concrete options)
- Failure/edge behavior (only for MVP-critical paths)
- Success metrics & Acceptance Criteria (what proves it works)
### Preferred question format (multiple choice)
```markdown
For <topic>, which approach do you prefer?
1. **Option A**: <what it means + trade-off>
2. **Option B**: <what it means + trade-off>
3. **Option C**: <what it means + trade-off>
4. **Other**: describe your preference
```
## Step 7: Propose Approaches + Record Decisions (Complex tasks)
After requirements are clear enough, propose 2–3 approaches (if not already done via research-first):
```markdown
Based on current information, here are 2–3 feasible approaches:

**Approach A: <name>** (Recommended)
* How:
* Pros:
* Cons:

**Approach B: <name>**
* How:
* Pros:
* Cons:

Which direction do you prefer?
```
Record the outcome in PRD as an ADR-lite section:
```markdown
## Decision (ADR-lite)
**Context**: Why this decision was needed
**Decision**: Which approach was chosen
**Consequences**: Trade-offs, risks, potential future improvements
```
## Step 8: Final Confirmation + Implementation Plan
When open questions are resolved, confirm complete requirements with a structured summary:
### Final confirmation format
```markdown
Here's my understanding of the complete requirements:

**Goal**: <one sentence>

**Requirements**:
* ...
* ...

**Acceptance Criteria**:
* [ ] ...
* [ ] ...

**Definition of Done**:
* ...

**Out of Scope**:
* ...

**Technical Approach**:
<brief summary + key decisions>

**Implementation Plan (small PRs)**:
* PR1: <scaffolding + tests + minimal plumbing>
* PR2: <core behavior>
* PR3: <edge cases + docs + cleanup>

Does this look correct? If yes, I'll proceed with implementation.
```
## Subtask Decomposition (Complex Tasks)
For complex tasks with multiple independent work items, create subtasks:
```bash
CHILD1=$(python3 ./.trellis/scripts/task.py create "Child task 1" --slug child1 --parent "$TASK_DIR")
CHILD2=$(python3 ./.trellis/scripts/task.py create "Child task 2" --slug child2 --parent "$TASK_DIR")
python3 ./.trellis/scripts/task.py add-subtask "$TASK_DIR" "$CHILD1"
python3 ./.trellis/scripts/task.py add-subtask "$TASK_DIR" "$CHILD2"
```
## PRD Target Structure (final)
`prd.md` should converge to:

```markdown
# <Task Title>

## Goal
<why + what>

## Requirements
* ...

## Acceptance Criteria
* [ ] ...

## Definition of Done
* ...

## Technical Approach
<key design + decisions>

## Decision (ADR-lite)
Context / Decision / Consequences

## Out of Scope
* ...

## Technical Notes
<constraints, references, files, research notes>
```
## Anti-Patterns (Hard Avoid)
- Asking user for code/context that can be derived from repo
- Asking user to choose an approach before presenting concrete options
- Meta questions about whether to research
- Staying narrowly on the initial request without considering evolution/edges
- Letting brainstorming drift without updating PRD
## Integration with Start Workflow
After brainstorm completes (Step 8 confirmation approved), the flow continues to the Task Workflow's Phase 2: Prepare for Implementation:
```text
Brainstorm
  Step 0: Create task directory + seed PRD
  Step 1–7: Discover requirements, research, converge
  Step 8: Final confirmation → user approves
      ↓
Task Workflow Phase 2 (Prepare for Implementation)
  Code-Spec Depth Check (if applicable)
  → Research codebase (based on confirmed PRD)
  → Configure code-spec context (jsonl files)
  → Activate task
      ↓
Task Workflow Phase 3 (Execute)
  Implement → Check → Complete
```
The task directory and PRD already exist from brainstorm, so Phase 1 of the Task Workflow is skipped entirely.
## Related Commands
| Command | When to Use |
|---|---|
| $start | Entry point that triggers brainstorm |
| $finish-work | After implementation is complete |
| $update-spec | If new patterns emerge during work |