---
name: research-synthesis
description: Guide when to use built-in tools (WebFetch, WebSearch) and MCP servers (Parallel Search, Perplexity, Context7) for research. Synthesize findings into narrative for braindump. Use when gathering data, examples, or citations for blog posts.
---
Use when:
- Gathering data, examples, or citations for blog posts
- Verifying a strong claim before it goes into a draft
Skip when:
Only use REAL research from MCP tools. Never invent:
- Statistics or failure rates
- Study citations or sources
- Quotes or attributions
If no data found:

❌ BAD: "Research shows 70% of OKR implementations fail..."
✅ GOOD: "I don't have data on OKR failure rates. Should I research using Perplexity?"
Before adding to braindump:
- Confirm every claim traces back to an actual tool result (never invented)
- Synthesize findings into a narrative, not a raw data dump
- Ask before updating, unless context makes the addition obvious (see the sketch below)
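As a hedged sketch, that checklist as a guard in Python; the `Finding` shape and field names are assumptions for illustration, not part of any real tool API:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    claim: str        # the synthesized statement, e.g. "60-70% of OKR rollouts fail"
    source: str       # human-readable citation, e.g. 'HBR: "Why OKRs Don\'t Work"'
    tool: str         # which tool actually returned this ("WebSearch", "Perplexity", ...)
    raw_excerpt: str  # verbatim text from the tool result that supports the claim

def ready_for_braindump(finding: Finding) -> bool:
    """Reject anything that cannot be traced back to a real tool result."""
    return all([finding.claim, finding.source, finding.tool, finding.raw_excerpt])
```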
| Tool | Use For | Examples |
|---|---|---|
| WebFetch | Specific URLs, extracting article content, user-mentioned sources | User: "Check this article: https://..." |
| WebSearch | Recent trends/news, statistical data, multiple perspectives, general knowledge gaps | "Recent research on OKR failures", "Companies that abandoned agile" |
| Parallel Search | Advanced web search with agentic mode, fact-checking, competitive intelligence, multi-source synthesis, deep URL extraction | Complex queries needing synthesis, validation across sources, extracting full content from URLs |
| Perplexity | Broad surveys when WebSearch/Parallel insufficient | Industry consensus, statistical data, multiple perspectives |
| Context7 | Library/framework docs, API references, technical specifications | "How does React useEffect work?", "Check latest API docs" |
Decision tree:

Need research?
├─ Specific URL? → WebFetch → Parallel Search
├─ Technical docs/APIs? → Context7
├─ General search? → WebSearch → Parallel Search → Perplexity
└─ Complex synthesis? → Parallel Search
Rationale: Built-in tools (WebFetch, WebSearch) are faster and always available. Parallel Search provides an advanced agentic mode for synthesis and deep content extraction. Perplexity offers broad surveys when those fall short. Context7 is reserved for official documentation.
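As a minimal sketch (not part of the skill's actual mechanics), the same routing logic in Python; the keyword triggers and tool labels are illustrative assumptions mirroring the tree above:

```python
from urllib.parse import urlparse

def choose_research_tools(query: str) -> list[str]:
    """Return an ordered escalation list of tools, following the decision tree above."""
    text = query.lower()

    def contains_url(q: str) -> bool:
        return any(urlparse(tok).scheme in ("http", "https") for tok in q.split())

    if contains_url(query):
        return ["WebFetch", "Parallel Search"]        # specific URL; deep extraction fallback
    if any(kw in text for kw in ("api", "docs", "library", "framework")):
        return ["Context7"]                           # official technical documentation
    if any(kw in text for kw in ("synthesize", "compare", "fact-check")):
        return ["Parallel Search"]                    # complex multi-source synthesis
    return ["WebSearch", "Parallel Search", "Perplexity"]  # general search, escalating
```

The escalation order encodes the rationale: built-ins first because they are fast and always available, MCP servers only when more depth is needed.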
❌ Bad (data dump):
Research shows:
- Stat 1
- Stat 2
- Stat 3
✅ Good (synthesized narrative):
Found pattern: 3 recent studies show 60-70% OKR failure rates.
- HBR: 70% failure, metric gaming primary cause
- McKinsey: >100 OKRs correlate with diminishing returns
- Google: Shifted from strict OKRs to "goals and signals"
Key insight: Failure correlates with treating OKRs as compliance exercise.
Braindump entry format:

## Research
### OKR Implementation Failures
60-70% failure rate (HBR, McKinsey). Primary causes: metric gaming, checkbox compliance.
**Sources:**
- HBR: "Why OKRs Don't Work" - 70% fail to improve performance
- McKinsey: Survey of 500 companies
- Google blog: Evolution of goals system
**Key Quote:**
> "When OKRs become performance evaluation, they stop being planning."
> - John Doerr, Measure What Matters
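A hedged sketch of rendering a finding into that braindump shape; the parameter names and section labels are assumptions mirroring the example above, not a fixed schema:

```python
def render_braindump_entry(topic: str, summary: str, sources: list[str],
                           key_quote: str | None = None,
                           attribution: str | None = None) -> str:
    """Format a synthesized finding as a '## Research' braindump section."""
    lines = ["## Research", f"### {topic}", "", summary, "", "**Sources:**"]
    lines += [f"- {s}" for s in sources]
    if key_quote:
        lines += ["", "**Key Quote:**", f'> "{key_quote}"']
        if attribution:
            lines.append(f"> - {attribution}")
    return "\n".join(lines)

# Example, using the OKR entry above:
# print(render_braindump_entry(
#     "OKR Implementation Failures",
#     "60-70% failure rate (HBR, McKinsey). Primary causes: metric gaming, checkbox compliance.",
#     ['HBR: "Why OKRs Don\'t Work" - 70% fail to improve performance'],
#     key_quote="When OKRs become performance evaluation, they stop being planning.",
#     attribution="John Doerr, Measure What Matters",
# ))
```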
Research flows naturally into conversation:
Proactive: "That's a strong claim - let me check data... [uses tool] Good intuition! Found 3 confirming studies. Adding to braindump."
Requested: "Find X... [uses tool] Found several cases. Should I add all to braindump or focus on one approach?"
During drafting: "Need citation... [uses tool] Found supporting research. Adding to draft with attribution."
Always ask before updating (unless context is clear): "Found X, Y, Z. Add to braindump under Research?" Once confirmed, update the relevant braindump sections.
For detailed examples, see reference/examples.md