Brainstorming partner that drives deep discussion about an idea before any code is written. Through natural conversation, explores the codebase and the web in parallel, challenges assumptions, and converges on a single well-specified ticket with structured description. Use when user invokes `/tap-into`, says "brainstorm", "let's think through X", "scope this out", "I want to build X but let's talk first", or has a `.tap/` directory and a feature ask they want to explore before committing.
Stress-test that new feature idea of theirs.
Run this command: `git log --since=1.week --oneline | wc -l`. If the number is greater than or equal to 100, proceed with this phase; otherwise go directly to ``.
Only surface this phase when the count is greater than or equal to 100 AND the user is discussing a feature implementation. If the count is under 100, or the user is talking about refactoring, fixing, or documenting something, stay silent and move on to ``.
Ask the user why yet another feature: they already have `echo "$(git log --since=1.week --oneline | wc -l) commits since last week"`; maybe they should slow down a little.
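The gate above can be sketched as a small shell helper. The threshold and routing mirror the text; the function name and output strings are illustrative, not part of the skill's contract:

```shell
# Illustrative gate: decide whether to run the challenge phase.
# $1 is the weekly commit count from `git log --since=1.week --oneline | wc -l`.
gate_phase() {
  if [ "$1" -ge 100 ]; then
    echo "challenge"   # high velocity: pause and question the feature first
  else
    echo "skip"        # normal velocity: move straight to ideation
  fi
}

# Strip whitespace from wc output and default to 0 outside a git repo.
count=$(git log --since=1.week --oneline 2>/dev/null | wc -l | tr -d '[:space:]')
gate_phase "${count:-0}"
```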
Ask the user what the purpose of this new feature is. Probe them on whether it adds any value at all.
Wait. Let the user answer your questions before you continue.
If the user concludes that this new feature is indeed not needed, close this ideation session and invoke Skill(tap-refactor) instead; a good refactoring session is always worthwhile. Otherwise, if the feature really adds value, proceed with the next phase: ``
Understand the current project's context. First, follow the ``, wait for each Agent to finish its exploration, then ask the user questions one at a time to refine the idea provided.
Once you understand what you're building, present the design intent & wait for the user's approval before continuing to the ``.
You can spawn as many Agents of each of these types as you need: ``, `` & ``, depending on the complexity. If, for example, a feature needs 2 "codebase_exploration" agents, 2 "patterns_discovery" agents & 2 "websearch" agents, that's fine. The count is not locked at exactly 3 agents.
Each of them should be contained to its own task:
Agent(
subagent_type: "Explore",
description: "<3-5 word task summary>",
prompt: "Research for with the WebSearch & WebFetch tools. Use the [dorks](dorks.md) for query construction. Cross-reference findings across sources.
---
Return your findings following this structure:
...
How the community/docs recommend approaching this problem.
Known limitations, breaking changes, GitHub issues related to the subject, deprecated software, etc.
What you found interesting related to the topic researched
-
"
)
Agent(
subagent_type: "Explore",
description: "<3-5 word task summary>",
prompt: "
Run baseline scans:
- Pain markers: !`grep -rniE '(//|#|/\*|\*)\s*(TODO|FIXME|HACK|WORKAROUND)\b|@[Dd]eprecated\b' --exclude-dir={node_modules,dist,build,vendor,.git,.tap,docs,.claude}`
- Git track: !`git log --since=1.week --stat` — where work concentrates, what's stale
- Project manifest: read package.json / Cargo.toml / go.mod / pyproject.toml / build.gradle / pom.xml. Map deps, versions, surprises
- Topic surface: grep keywords, locate entry points, list files touched
Then deep-dive based on findings (pick from menu, not all):
- Call graph for — who calls what, types flowing
- Test coverage near — colocated / tests/ / __tests__ / *_test.* / *.spec.*
- Error handling patterns in area
- Complexity hotspots — largest files by LOC near
- Prior attempts — git log for partial/reverted similar work
- Domain context — read .tap/domain/ if exists, map bounded contexts touched
- **Dependency internals** — when touches a third-party dependency, dig INTO git-ignored directories where the ecosystem caches dependency source code. First identify the ecosystem from the project manifest, then locate the dep source:
- JS/TS: `node_modules//`
- Rust: `.cargo/registry/src/` or `vendor/`
- Python: `.venv/lib/*/site-packages//` or `vendor/`
- Go: `vendor/` or `GOPATH/pkg/mod/`
- Java/Kotlin: `~/.m2/repository/` or `.gradle/caches/`
- Ruby: `vendor/bundle/`
- Other: check the lockfile or build output for the local cache path
Once located:
- Read the dep's entry point or public module — the file the project actually imports from
- Trace the specific function/type/trait/class the project uses — read its implementation, not just its signature
- Identify the dep's paradigm: sync/async, error model (exceptions / result types / error codes), concurrency model
- Read the dep's CHANGELOG, README, or migration guide for version-specific behavior and deprecations
- **ALWAYS skip `.env`, `.env.*`, credentials, secrets, tokens, API keys, or auth config files**
- Keep reads targeted — entry point + the specific symbol the project uses. Do not scan the entire dependency tree.
Return to main agent in this structure:
## Topic Surface
- Entry points:
- Key files: —
## Current State
-
## Pain Markers
-
## Git Energy
- Hot files (last 2w): ()
- Stale areas: (last touched )
## Tests
- Coverage:
## Dependency Internals
- —
- Paradigm:
- Error model:
- Gotchas from source:
## Gotchas
-
## Open Questions
-
## Files Read
- —
Hard cap: 500 words. Bullets > prose. file:line refs mandatory.
"
)
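The per-ecosystem cache locations listed in the exploration prompt can be sketched as a lookup helper. The paths mirror that list and are starting points only; real layouts vary with tool versions and configuration, so verify against the project's lockfile:

```shell
# Illustrative lookup: local directory where an ecosystem caches dependency
# source code. Mirrors the list in the exploration prompt above.
dep_source_dir() {
  case "$1" in
    js|ts)       echo "node_modules" ;;
    rust)        echo ".cargo/registry/src" ;;
    python)      echo ".venv/lib/*/site-packages" ;;
    go)          echo "vendor" ;;
    java|kotlin) echo "$HOME/.m2/repository" ;;
    ruby)        echo "vendor/bundle" ;;
    *)           return 1 ;;  # unknown ecosystem: fall back to the lockfile
  esac
}

# Example (hypothetical package name):
# ls "$(dep_source_dir js)/left-pad"
```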
Agent(
subagent_type: "general-purpose",
model: "sonnet",
description: "Pattern recognition scan",
prompt: "Scan the codebase for structural patterns relevant to the , plus established design patterns from the web. New modules must compose with neighbors, not against them.
Codebase scan pattern recognition (Grep, Glob, Read):
- Neighboring modules to — what shapes recur
- Paradigm signals — FP/OOP/mixed, from imports and idioms
- Recurring shapes:
- service/provider pairs
- higher-order strategy
- discriminated unions + exhaustive match
- pipeline composition
- smart constructors
- stream processing
- scoped resource lifecycle
- Naming conventions, module layout, test colocation
Web scan (WebSearch, WebFetch — use [dorks](dorks.md)):
- refactoring.guru for canonical pattern names + tradeoffs
allowed_domains: ['refactoring.guru']
- martinfowler.com for enterprise patterns
allowed_domains: ['martinfowler.com']
- Language-idiomatic patterns (effect docs, Rust nomicon, etc.) for
Cross-reference: which web pattern matches each codebase shape. Name them.
Return to main agent in this structure:
## Codebase Patterns
- — —
## Paradigm
- — evidence:
## Convention Match
- → — compose this way
## Web Patterns Considered
- [source: ] — fits / partial / no —
## Anti-patterns Nearby
- —
## Recommendation Shape
- New module should follow because
## Sources
- —
Hard cap: 500 words. Every codebase claim cites file:line. Every web claim cites url."
)
This phase ends when every agent has returned consistent data across all three agent types.
Deep conversation & collaboration with the user to create an ideation.md file that crystallizes every decision made.
Based on the findings returned by ``, `` & ``, proceed with the ideation by writing a new ticket following the [ideation template](ideation-template.md) at `.tap/tickets//ideation.md`. You don't have all the information yet; that is to be expected. The ideation will help fill in the gaps.
Do not invent information that you don't yet have, because false information is worse than no information at all. Do not rush convergence in this phase.
Assess the scope before asking any questions: if a description maps to multiple independent systems, it will need to be decomposed further. A scope that is too wide should be decomposed into smaller sub-scopes.
If a scope is too large for a single ticket, help the user decompose it into sub-tickets through the normal ``. Each scope gets its own ticket & tap run lifecycle.
**Stub deferred tickets immediately.** Once the user confirms the decomposition, create a minimal `ideation.md` for each deferred ticket BEFORE diving into the first ticket's full ideation. This ensures nothing falls through the cracks — `ls .tap/tickets/` always shows the full roadmap.
Stub format:
```markdown
# [<Feature Name>]: Design intent
<!-- TODO: Full ideation pending — run /tap-into to complete -->
## Intent
<one-line description of what this ticket delivers>
## Depends on
- <slug of prerequisite ticket(s)>
## Context (from decomposition)
- <bullet points captured during the scope discussion>
- <relevant findings from the exploration agents>
- <key constraints or decisions that affect this ticket>
```
The stub is intentionally minimal — it preserves context from the decomposition discussion without inventing design decisions. The full ideation happens later via a separate `/tap-into` session.
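Writing the stubs can be sketched as follows. The slug, feature name, and intent are caller-supplied placeholders, and the heredoc mirrors the stub format above:

```shell
# Illustrative stub writer for one deferred ticket.
# $1 = ticket slug, $2 = feature name, $3 = one-line intent
stub_ticket() {
  dir=".tap/tickets/$1"
  mkdir -p "$dir"
  cat > "$dir/ideation.md" <<EOF
# [$2]: Design intent
<!-- TODO: Full ideation pending - run /tap-into to complete -->
## Intent
$3
## Depends on
- <slug of prerequisite ticket(s)>
## Context (from decomposition)
- <bullet points captured during the scope discussion>
EOF
}

# Example (hypothetical slug):
# stub_ticket "billing-webhooks" "Billing Webhooks" "Receive and verify provider webhooks"
```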
</step>
<step name="questioning">
Ask questions one at a time. Prefer multiple-choice questions, but free-form questions are fine as well. If a <topic> needs more exploration, break it into follow-up questions. Focus on understanding: purpose, constraints, and what "done" should look like.
</step>
<step name="approaches">
When exploring `<approach>`, propose 2-5 different approaches with trade-offs for each.
Present them like the following:
```markdown
[{0N}]: <approach title>
- <approach description>
- Tradeoffs:
- <tradeoff one>
- <tradeoff two>
- <tradeoff n>
- Recommended <approach title>: <why>
```
</step>
<step name="presentation">
Once you believe you understand the design, surface it to the user. For each section, scale the explanation to its complexity and propose design patterns surfaced in `<step name="patterns_discovery">` that could match.
For each section, ask whether it looks right or not.
Each section should cover architecture, components and/or modules, data flow, error handling, and test cases.
This is the step where you have to be ready to go back and forth with the user until you've converged. That's to be expected.
</step>
This phase ends when you and the user have reached an agreement on what the idea should look like.
The user should be the one signalling that this phase is over. Once you've converged, move on to the
<ideation_flow>
General flow of the ideation process:
The final state is a fully fledged ticket containing everything that has been discussed here.
</ideation_flow>
<general_rules>
These rules apply across all & :
- Always ask one question at a time because more than one question will overwhelm the user.
- Always prefer multiple-choice questions over free-form questions because they're easier to answer.
- Always validate incrementally because this is a slow process. A properly laid-out design will produce better results than a poorly laid-out one.
- Always value flexibility because the & can be interchangeable. Structured ideas will surface from chaos. Going back & forth is expected.
- Always value simplicity over over-engineered ideations because elegance emerges from simple & readable code, not over-engineered code. Good code is not measured by how many lines it contains.
</general_rules>
<next_step>
Once in this section, immediately invoke Skill(tap-convey, ) where <slug> is the ticket slug from the ideation just completed.
</next_step>