---
name: llm-docs-optimizer
description: Optimize documentation for AI coding assistants and LLMs. Improves docs for Claude, Copilot, and other AI tools through c7score optimization, llms.txt generation, question-driven restructuring, and automated quality scoring. Use when asked to improve, optimize, or enhance documentation for AI assistants, LLMs, c7score, Context7, or when creating llms.txt files. Also use for documentation quality analysis, README optimization, or ensuring docs follow best practices for LLM retrieval systems.
version: 1.3.0
---
This skill optimizes project documentation and README files for AI coding assistants and LLMs like Claude, GitHub Copilot, and others. It improves documentation quality through multiple approaches: c7score optimization (Context7's quality benchmark), llms.txt file generation for LLM navigation, question-driven content restructuring, and automated quality scoring across 5 key metrics.
C7score evaluates documentation using 5 metrics across two categories:
LLM Analysis (85% of score):
1. Question-Snippet Matching (80%): How well snippets answer common developer questions
2. LLM Evaluation (5%): Overall snippet quality as judged by an LLM

Text Analysis (15% of score):
3. Formatting (5%): Proper structure and language tags
4. Project Metadata (5%): Absence of irrelevant content
5. Initialization (5%): Not just imports/installations
For detailed information on each metric, read references/c7score_metrics.md.
IMPORTANT: When the user requests c7score documentation optimization, ALWAYS ask if they also want an llms.txt file:
Use the AskUserQuestion tool with this question:
Question: "Would you also like me to generate an llms.txt file for your project?"
Header: "llms.txt"
Options:
- "Yes, create both optimized docs and llms.txt"
Description: "Optimize documentation for c7score AND generate an llms.txt navigation file"
- "No, just optimize the documentation"
Description: "Only perform c7score optimization without llms.txt generation"
If the user chooses "Yes": run the c7score optimization workflow, then generate an llms.txt file.
If the user chooses "No": run only the c7score optimization workflow.
Note: If the user explicitly requests ONLY llms.txt generation (no c7score mention), skip this step and go directly to the llms.txt generation workflow.
When given a project or documentation to optimize:
```bash
python scripts/analyze_docs.py <path-to-readme.md>
```
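If the bundled script cannot be run, the structural part of this analysis can be approximated in a few lines of standard-library Python. This is an illustrative sketch, not the actual analyze_docs.py; it only counts headings and code fences, and it assumes fences strictly alternate open/close:

```python
import re

FENCE = "`" * 3  # triple backtick, built indirectly so this example stays readable

def rough_doc_stats(markdown_text):
    """Approximate structural stats that c7score-style tooling cares about."""
    tags = re.findall(FENCE + r"(\w*)", markdown_text)
    openers = tags[::2]  # opening fences carry the language tag, closers match as ""
    return {
        "headings": len(re.findall(r"^#{1,6} ", markdown_text, re.MULTILINE)),
        "code_blocks": len(openers),
        "untagged_blocks": sum(1 for t in openers if not t),
    }

sample = "\n".join(["# Title", "", FENCE + "python", "print('hi')", FENCE,
                    "", FENCE, "untagged", FENCE, ""])
print(rough_doc_stats(sample))  # {'headings': 1, 'code_blocks': 2, 'untagged_blocks': 1}
```

A high count of untagged blocks is a quick signal that the Formatting metric will suffer.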
Note: The script requires Python 3.7+ and is optional. You can skip it if Python is unavailable.

Create a list of 15-20 questions that developers commonly ask about the project:
Example questions:
- How do I install and set up this project?
- How do I authenticate or initialize a client?
- How do I handle errors?
- How do I configure common options?
Evaluate which questions are well-answered by existing documentation:
Prioritize filling gaps for unanswered questions.
Apply optimizations based on priority:
Priority 1: Question Coverage (80% of score)
Priority 2: Remove Duplicates
Priority 3: Fix Formatting
Priority 4: Remove Metadata
Priority 5: Enhance Initialization Snippets
For detailed transformation patterns, read references/optimization_patterns.md.
Before finalizing, verify each optimized snippet:
- ✅ Can run standalone (copy-paste works)
- ✅ Answers a specific developer question
- ✅ Provides unique information
- ✅ Uses proper format and language tag
- ✅ Focuses on practical usage
- ✅ Includes necessary imports/setup
- ✅ No licensing, citations, or directory trees
- ✅ Syntactically correct code
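Parts of this checklist can be automated for Python snippets. A minimal sketch using only the standard library; the helper name and exact checks are illustrative, and the semantic items (unique information, practical usage) still need human or LLM review:

```python
import ast

def check_python_snippet(code):
    """Run the mechanically checkable items from the snippet checklist.

    Covers syntax validity and whether the snippet brings its own imports;
    everything else on the checklist needs judgment, not parsing.
    """
    try:
        tree = ast.parse(code)
    except SyntaxError as exc:
        return [f"syntax error: {exc.msg}"]
    has_imports = any(isinstance(n, (ast.Import, ast.ImportFrom))
                      for n in ast.walk(tree))
    uses_names = any(isinstance(n, ast.Name) for n in ast.walk(tree))
    issues = []
    if uses_names and not has_imports:
        issues.append("no imports: may not run standalone")
    return issues

print(check_python_snippet("client.get_data()"))         # ['no imports: may not run standalone']
print(check_python_snippet("import os\nprint(os.sep)"))  # []
```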
After optimization, provide a c7score evaluation comparing the original and optimized documentation:
Evaluation Process:
Analyze Original Documentation against c7score metrics:
Analyze Optimized Documentation using the same metrics
Calculate Scores (0-100 for each metric):
For Question-Snippet Matching: estimate what fraction of the developer questions generated earlier are answered by a standalone snippet.
For LLM Evaluation: judge the overall clarity, uniqueness, and usefulness of the snippets.
For Formatting: check heading structure and language tags on code blocks.
For Metadata Removal: check for licensing, citations, directory trees, and other irrelevant content.
For Initialization: check that snippets go beyond bare imports and install commands.
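The weighted average itself is mechanical. A sketch, assuming the 80/10/5/2.5/2.5 weights used in the report format are a faithful estimate of c7score's weighting (official weights may differ):

```python
# Assumed weights (in percent) mirroring the metric breakdown in the report format.
WEIGHTS = {
    "question_snippet_matching": 80,
    "llm_evaluation": 10,
    "formatting": 5,
    "metadata_removal": 2.5,
    "initialization": 2.5,
}

def weighted_c7score(scores):
    """Combine per-metric 0-100 scores into one weighted 0-100 score."""
    assert set(scores) == set(WEIGHTS), "score every metric exactly once"
    return round(sum(scores[m] * w for m, w in WEIGHTS.items()) / 100, 2)

original = {"question_snippet_matching": 40, "llm_evaluation": 70,
            "formatting": 80, "metadata_removal": 50, "initialization": 60}
print(weighted_c7score(original))  # 45.75
```

The example shows why Question-Snippet Matching dominates: even strong formatting scores barely move the total when question coverage is weak.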
Present Results in this format:
```markdown
## C7Score Evaluation

### Original Documentation Score: XX/100

**Metric Breakdown:**
- Question-Snippet Matching: XX/100 (weight: 80%)
  - Analysis: [Brief explanation of score]
- LLM Evaluation: XX/100 (weight: 10%)
  - Analysis: [Brief explanation]
- Formatting: XX/100 (weight: 5%)
  - Analysis: [Brief explanation]
- Metadata Removal: XX/100 (weight: 2.5%)
  - Analysis: [Brief explanation]
- Initialization: XX/100 (weight: 2.5%)
  - Analysis: [Brief explanation]

**Weighted Average:** XX/100

---

### Optimized Documentation Score: XX/100

**Metric Breakdown:**
[Same format as above]

**Weighted Average:** XX/100

---

### Improvement Summary

**Overall Improvement:** +XX points (XX → XX)

**Key Improvements:**
- [Metric]: +XX points - [What specifically improved]
- [Metric]: +XX points - [What specifically improved]

**Impact Assessment:**
[Brief explanation of how optimizations improved the documentation quality]
```
Note: These are estimated scores based on c7score methodology. For official scores, users can submit to Context7's benchmark.
**Before:**

```markdown
## authenticate(api_key)

Authenticates the client.
```

**After:**

````markdown
## Authentication

```python
from library import Client

client = Client(api_key="your_key")
client.authenticate()

# Now ready to make requests
result = client.get_data()
```
````

### Transform Import-Only → Quick Start

**Before:**

```python
from library import Client, Config
```

**After:**

```python
# Install: pip install library
from library import Client, Config

# Initialize and use
config = Config(api_key="key")
client = Client(config)
result = client.query("SELECT * FROM data")
```
Combine related small snippets into one complete workflow example.
Organize documentation to prioritize question-answering:
Quick Start (High Priority)
Common Use Cases (High Priority)
Configuration (Medium Priority)
Error Handling (Medium Priority)
API Reference (Lower Priority)
Advanced Topics (Lower Priority)
This skill provides two main capabilities: c7score documentation optimization and llms.txt file generation.
When optimizing documentation, provide:
- The optimized documentation file(s)
- A before/after c7score evaluation
Save the optimized documentation files in the user's working directory or a designated output location. You can ask the user where they'd like the files saved if unclear.
See also:
- examples/sample_readme.md for before/after transformations
- examples/sample_llmstxt.md for different project types

llms.txt is a standardized markdown file format designed to provide LLM-friendly content summaries and documentation navigation. It helps language models and AI agents quickly understand project structure and find relevant documentation.
Key purposes:
- Give LLMs a concise, curated overview of a project
- Point AI agents to the most relevant documentation
- Mark optional material that can be skipped when context is limited
Official specification: https://llmstxt.org/
For complete format details, read references/llmstxt_format.md.
When asked to create an llms.txt file:
Explore the project directory to understand structure:
Identify project type:
Assess documentation organization:
Choose the appropriate template based on project type:
Python Library / Package:
CLI Tool:
Web Framework:
Claude Skill:
General Project:
See examples/sample_llmstxt.md for complete examples of each type.
Build the llms.txt file following this structure:

```markdown
# Project Name

> Brief description of what the project does, its main purpose, and key value proposition.
> Should be 1-3 sentences that give LLMs essential context.

Key features:

- Main feature or capability
- Another important aspect
- Third key point

Project follows these principles:

- Design principle 1
- Design principle 2
```
Organize links into H2-headed sections:

```markdown
## Documentation

- [Link Title](https://full-url): Brief description of what this contains
- [Another Doc](https://full-url): What developers will find here

## API Reference

- [Core API](https://full-url): Main API documentation
- [Configuration](https://full-url): Configuration options

## Examples

- [Basic Usage](https://full-url): Simple getting-started examples
- [Advanced Patterns](https://full-url): Complex use cases

## Optional

- [Blog](https://full-url): Latest updates and tutorials
- [Community](https://full-url): Where to get help
```
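Assembling these sections programmatically might look like the following sketch; the project name, URLs, and section contents are placeholders, not real resources:

```python
def build_llms_txt(name, summary, sections):
    """Assemble an llms.txt body from H2 sections of (title, url, note) links."""
    lines = [f"# {name}", "", f"> {summary}", ""]
    for heading, links in sections.items():  # dict order = section priority
        lines.append(f"## {heading}")
        for title, url, note in links:
            lines.append(f"- [{title}]({url}): {note}")
        lines.append("")
    return "\n".join(lines).rstrip() + "\n"

doc = build_llms_txt(
    "ExampleLib",  # placeholder project
    "A small example library.",
    {"Documentation": [("Quick Start", "https://example.com/quickstart.md",
                        "Get running in 5 minutes")],
     "Optional": [("Blog", "https://example.com/blog.md", "Updates")]},
)
print(doc)
```

Because insertion order is preserved, the dict doubles as the priority ordering described below: high-priority sections go in first, "Optional" last.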
Each link must follow this exact format:

```markdown
- [Descriptive Title](https://full-url): Optional helpful notes about the resource
```
Requirements:
- Start each line with `- ` (hyphen and space)
- Use the `[text](url)` markdown link format
- Follow with `: ` and a helpful description (optional but recommended)
- Link to `.md` files when possible

Examples:
✅ Good:

```markdown
- [Quick Start](https://github.com/user/repo/blob/main/docs/quickstart.md): Get running in 5 minutes
- [API Reference](https://github.com/user/repo/blob/main/docs/api.md): Complete function documentation
```

❌ Bad:

```markdown
- [Guide](../docs/guide.md): A guide
- Guide: docs/guide.md
- [Click here](guide)
```
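The good/bad distinction above is mechanical enough to lint. A sketch; the regex encodes an assumption about the expected shape (full https URL, markdown link syntax, optional description) rather than anything mandated by the llms.txt spec:

```python
import re

# Bullet, markdown link with a full https URL, optional ": description" tail.
LINK_RE = re.compile(r"^- \[[^\]]+\]\(https://[^)\s]+\)(: .+)?$")

def lint_link_line(line):
    """Return True if an llms.txt bullet matches the expected link format."""
    return bool(LINK_RE.match(line))

assert lint_link_line("- [Quick Start](https://github.com/user/repo/blob/main/docs/quickstart.md): Get running in 5 minutes")
assert not lint_link_line("- [Guide](../docs/guide.md): A guide")  # relative URL
assert not lint_link_line("- Guide: docs/guide.md")                # not a markdown link
assert not lint_link_line("- [Click here](guide)")                 # bare path
```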
Order sections from most to least important:
High Priority (First): Getting started, installation, and core documentation
Medium Priority (Middle): API reference, configuration, and examples
Low Priority (Last - Optional section): Blog, changelog, and community links
The "Optional" section has special meaning: LLMs can skip this when shorter context is needed.
For GitHub repos, construct URLs like:
https://github.com/username/repo/blob/main/path/to/file.md
If no remote repository exists yet, use placeholder URLs:
https://github.com/username/repo/blob/main/README.md
And note in your response that URLs need to be updated when the repo is published.
If project has a docs website, prefer linking to markdown versions:
- [Guide](https://docs.example.com/guide.md): Getting started guide
Or link to HTML with .md suffix if markdown versions exist:
- [Guide](https://docs.example.com/guide.html.md): Getting started guide
Before finalizing, check:
- File is named `llms.txt` (lowercase)
- All links use the `[text](url)` format

Python Library / Package:

```markdown
# LibraryName

> Brief description of what the library does and its main use case.

## Documentation
- Getting started, installation, core concepts

## API Reference
- Module/class/function documentation

## Examples
- Usage examples, patterns, recipes

## Development
- Contributing, testing, development setup

## Optional
- Changelog, blog, community
```

CLI Tool:

```markdown
# ToolName

> Brief description of what the tool does.

## Getting Started
- Installation, quickstart

## Commands
- Command reference and examples

## Configuration
- Config files, environment variables

## Examples
- Common workflows and patterns

## Optional
- Advanced usage, plugins, troubleshooting
```

Web Framework:

```markdown
# FrameworkName

> Brief description and key features.

## Documentation
- Core concepts, routing, data fetching

## Guides
- Authentication, deployment, testing

## API Reference
- Configuration, CLI, components

## Examples
- Sample applications

## Integrations
- Third-party tools and services

## Optional
- Blog, showcase, community
```

Claude Skill:

```markdown
# skill-name

> Brief description of what the skill does.

## Documentation
- README, SKILL.md, usage guide

## Reference Materials
- Specifications, patterns, formats

## Examples
- Usage examples, before/after

## Development
- Scripts, contributing guide

## Optional
- External resources, related tools
```
When generating an llms.txt file, provide:
- The complete llms.txt file
- A note about any placeholder URLs that need updating when the repository is published
Save the file as llms.txt in the project root directory.
llms.txt generation can be combined with c7score optimization:
Or generate them independently based on user needs.
- references/llmstxt_format.md
- examples/sample_llmstxt.md