| Field | Value |
| --- | --- |
| name | research-methodology |
| description | Systematic approach for gathering authoritative, version-accurate documentation. Claude invokes this skill when research is needed before implementation. Ensures truth over speed while achieving both. |
| auto_invoke | true |
| tags | ["research","documentation","verification","truth"] |
This skill provides a systematic methodology for conducting rapid, accurate documentation research to ground implementations in truth rather than potentially stale LLM memory.
Claude will automatically invoke this skill when:
Objectives:
Actions:
- Parse request - Extract library/API names mentioned
- Version detection - Check project dependency files (a minimal detection sketch follows the output template below):
  - `package.json` → Node.js
  - `requirements.txt`, `pyproject.toml`, `Pipfile` → Python
  - `go.mod` → Go
  - `Cargo.toml` → Rust
  - `build.gradle`, `pom.xml` → Java
  - `*.csproj` → C#/.NET
  - `pubspec.yaml` → Dart/Flutter
  - `composer.json` → PHP
- Context gathering - Note runtime, platform, existing dependencies
Output:
Target: [library-name]
Current Version: [X.Y.Z] (detected from [file])
Platform: [Node.js 20.x / Python 3.11 / etc.]
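A minimal sketch of how version detection could look, assuming a Node.js-flavoured project layout; the helper name and the two file formats covered here are illustrative only:

```typescript
// Hypothetical sketch: detect a library's version from common dependency files.
import { readFileSync, existsSync } from "node:fs";

function detectVersion(library: string, projectRoot = "."): string | null {
  // Node.js: package.json lists versions under dependencies/devDependencies.
  const pkgPath = `${projectRoot}/package.json`;
  if (existsSync(pkgPath)) {
    const pkg = JSON.parse(readFileSync(pkgPath, "utf8"));
    const version = pkg.dependencies?.[library] ?? pkg.devDependencies?.[library];
    if (version) return version.replace(/^[\^~]/, ""); // strip ^ / ~ range prefix
  }

  // Python: requirements.txt pins look like "redis==4.6.0".
  const reqPath = `${projectRoot}/requirements.txt`;
  if (existsSync(reqPath)) {
    const line = readFileSync(reqPath, "utf8")
      .split("\n")
      .find((l) => l.trim().toLowerCase().startsWith(library.toLowerCase()));
    const match = line?.match(/==\s*([\d.]+)/);
    if (match) return match[1];
  }

  return null; // unknown → ask one clarifying question rather than assume
}
```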
If unclear: Ask ONE specific clarifying question rather than proceeding with assumptions
Source hierarchy (in order of preference):
Official documentation (PRIMARY)
Official migration/upgrade guides (if version change)
Official release notes/changelog (for version-specific info)
Official GitHub repository (if docs sparse)
Avoid (unless no alternatives):
Retrieval strategy:
1. Try context7 system (if available)
└─ Fastest, curated, version-aware docs
2. Use WebFetch on known official doc URLs
└─ Direct fetch from source
3. If URL unknown, use WebSearch
Query format: "[library name] [version] official documentation"
└─ Find the official site first, then fetch
4. Extract only relevant sections
└─ Don't download entire docs, target specific info needed
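To illustrate steps 2 and 4, here is a hedged sketch of fetching one official page and keeping only the relevant section; the anchor-based slicing is an assumption for illustration, not a prescribed parser:

```typescript
// Sketch only: fetch a known official docs URL and keep just the needed section,
// rather than pulling the entire documentation site.
async function fetchRelevantSection(url: string, headingId: string): Promise<string> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Docs fetch failed: ${res.status} for ${url}`);
  const html = await res.text();

  // Naive extraction: slice from the target heading anchor to the next <h2>.
  const start = html.indexOf(`id="${headingId}"`);
  if (start === -1) return ""; // heading not found → fall back to WebSearch (step 3)
  const end = html.indexOf("<h2", start + 1);
  return html.slice(start, end === -1 ? undefined : end);
}
```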
What to extract:
Anti-staleness:
For every piece of information extracted:
API: someFunction(param1: Type): ReturnType
Source: official-docs.com/api-reference/someFunction [version X.Y.Z]
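One possible shape for recording each extracted fact with its provenance, so every claim in the final output traces back to a versioned source (field names are illustrative):

```typescript
// Illustrative record type: one entry per extracted API or fact.
interface ExtractedFact {
  api: string;         // e.g. "someFunction(param1: Type): ReturnType"
  sourceUrl: string;   // the exact page, not just the docs root
  version: string;     // the version the page documents, e.g. "X.Y.Z"
  confidence: "HIGH" | "MEDIUM" | "LOW";
}
```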
Verification checklist:
Confidence levels:
Report confidence: Always indicate confidence level in research output
Deliver in ResearchPack format:
# 📋 ResearchPack: [Library Name]
## Quick Reference
- Library: [name] v[X.Y.Z]
- Official Docs: [URL]
- Confidence: [HIGH/MEDIUM/LOW]
## Key APIs
[Concise list with signatures and sources]
## Setup
[Essential steps only, not every option]
## Gotchas
[Version-specific issues to avoid]
## Example
[Minimal working code with source link]
## Implementation Checklist
[Files to modify, steps in order]
## Sources
[Numbered list with URLs and versions]
## Open Questions
[Any decisions/clarifications needed]
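If the pack were assembled programmatically before rendering to markdown, its sections could map onto a structure like the following sketch (names are illustrative, not a required schema):

```typescript
// Illustrative mirror of the ResearchPack sections above.
interface ResearchPack {
  library: { name: string; version: string };
  officialDocs: string;                                  // URL
  confidence: "HIGH" | "MEDIUM" | "LOW";
  keyApis: { signature: string; sourceUrl: string }[];   // signatures with sources
  setup: string[];                                       // essential steps only
  gotchas: string[];                                     // version-specific issues
  example: { code: string; sourceUrl: string };          // minimal working code
  implementationChecklist: string[];                     // files to modify, in order
  sources: { url: string; version: string }[];
  openQuestions: string[];
}
```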
Keep it concise:
Before delivering research:
Completeness check:
Accuracy check:
Usefulness check:
❌ Don't:
✅ Do:
Scenario: User says "Add Redis caching to the product service"
Claude (using this skill):
[Internal: This requires research on Redis client library]
🔍 Activating research-methodology skill
📦 Detecting Redis version from package.json... Found redis@4.6.0
🌐 Fetching official docs from redis.io...
[Follows methodology: assess → prioritize → retrieve → verify → output]
# 📋 ResearchPack: Redis Client (Node.js)
## Quick Reference
- Library: redis v4.6.0
- Official Docs: https://redis.io/docs/clients/nodejs/
- Confidence: HIGH
## Key APIs
1. `createClient(options)` - Initialize connection
Source: redis.io/docs/clients/nodejs/v4.6#createClient
2. `client.get(key: string): Promise<string | null>` - Retrieve value
Source: redis.io/docs/clients/nodejs/v4.6#get
[...rest of ResearchPack]
✅ Research complete - Ready for planning phase
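For context, a minimal sketch of what the resulting Redis usage could look like under node-redis v4 conventions; this is hedged illustration, not output of the skill, and API details should be verified against the fetched docs:

```typescript
// Hypothetical sketch: reading a cached value with node-redis v4-style APIs
// (createClient / connect / get), as the Key APIs above suggest.
import { createClient } from "redis";

async function getCachedProduct(id: string): Promise<string | null> {
  const client = createClient({ url: "redis://localhost:6379" });
  await client.connect();
  try {
    return await client.get(`product:${id}`); // null when the key is absent
  } finally {
    await client.disconnect();
  }
}
```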
This skill ensures all implementations are grounded in current, authoritative documentation rather than potentially outdated LLM knowledge.