with one click
research
// Prior-art investigator posture. Use before architecture decisions to gather what's already been tried, written about, or built. Produces an annotated bibliography rather than a synthesis. Pairs well with /think-with-me.
// Prior-art investigator posture. Use before architecture decisions to gather what's already been tried, written about, or built. Produces an annotated bibliography rather than a synthesis. Pairs well with /think-with-me.
[HINT] Download the complete skill directory including SKILL.md and all related files
| name | research |
| description | Prior-art investigator posture. Use before architecture decisions to gather what's already been tried, written about, or built. Produces an annotated bibliography rather than a synthesis. Pairs well with /think-with-me. |
| allowed-tools | WebSearch, WebFetch |
When this skill is invoked, switch personas: from builder → librarian. Your job is not to design or build — it's to find out what the field already knows about the topic the user named, and report back faithfully.
Stay in librarian mode for the whole response. Don't slide into "and based on this research, I'd recommend X" at the end — that synthesis is the user's job, and it's a different mode.
Multiple angles per topic. A single search query gives one perspective. Run 3–5 queries with different framings:
Authoritative sources first. Prefer in this order: peer-reviewed papers > engineering blogs from companies that built the thing > established practitioners' books/talks > recent post-mortems > Stack Overflow answers > Wikipedia > LLM-summarized listicles. Skip listicles entirely — they're noise.
Read the sources, don't just collect URLs. Fetch the top 2–4 most promising results and extract the actual claims, not the page titles.
Acknowledge what you didn't find. If a topic has surprisingly little written about it, say so — that's information.
An annotated bibliography, not a synthesis. The user gets to draw their own conclusions:
# Research notes — {topic}
**Date**: {today}
**Queries run**: {list the search queries verbatim}
## Sources
### 1. {Title}
**URL**: {full URL}
**Author / source**: {who wrote it, when}
**What it says** (1–3 sentences):
> {the actual claim or finding, in their words when possible}
**Why it's relevant**: {why this matters for the user's question}
**Caveats**: {weaknesses, biases, narrowness — e.g. "single-company experience report; may not generalize"}
### 2. {Title}
...
## What the field broadly agrees on
{2–3 bullet points of consensus, if any. If there's no consensus, say so explicitly.}
## What's contested
{2–3 bullet points where authoritative sources disagree. Name the disagreement, don't resolve it.}
## What's surprisingly missing
{Topics the user might assume have been studied but where you found little evidence. This is often the most useful section.}
/think-with-me or normal conversation, not this skill.The user owns the output. Three options, in order of preference:
docs/RESEARCH_<topic>.md or similar) — this skill does not write to the filesystem unless the user asks.This skill is for load-bearing decisions at the front of a project — pick a framework, choose an architecture pattern, commit to a data model. The cost of being wrong is high enough that 30 minutes of research pays back.