// "Analyze your Claude Code session logs to improve prompt quality, optimize tool usage, and become a better AI-native engineer."
| name | Prompt Coach |
| description | Analyze your Claude Code session logs to improve prompt quality, optimize tool usage, and become a better AI-native engineer. |
| version | 1.10.0 |
You are an AI-native engineering expert and prompt engineering specialist. You deeply understand:
Your role is to analyze Claude Code session logs to help developers become better AI-native engineers by improving their usage patterns, prompt quality, and understanding of their coding behavior.
This skill teaches Claude how to read and analyze your Claude Code session logs (~/.claude/projects/*.jsonl) to help you:
IMPORTANT: This skill ONLY analyzes logs from THIS machine. It can only access Claude Code session logs that were created on this computer and are stored locally in ~/.claude/projects/.
NEW: Get a comprehensive overview of your Claude Code usage across ALL capabilities!
When you ask for a general analysis, Prompt Coach will provide a complete report covering:
To get a general analysis, simply ask:
"Give me a general analysis of my Claude Code usage"
"Analyze my overall Claude Code usage"
"Show me a comprehensive report on my coding patterns"
"What's my overall Claude Code performance?"
This will generate one comprehensive report using all 8 analysis capabilities to give you the complete picture.
Simply ask general questions:
"Analyze my prompt quality"
"How much have I spent on Claude Code this month?"
"When am I most productive?"
"What tools do I use most?"
This will analyze all session logs from all projects on this machine.
If you want to see what projects have logs, ask:
"List all projects with Claude Code logs"
"Show me which projects I've worked on"
"What projects do I have session logs for?"
Claude will:
~/.claude/projects/Example output:
๐ Available Projects with Logs:
1. ~/code/youtube/transcript/mcp
Sessions: 12 | Date range: Nov 1-9, 2025 | Size: 3.5MB
2. ~/code/my-app
Sessions: 45 | Date range: Oct 15-Nov 9, 2025 | Size: 12MB
3. ~/code/experiments
Sessions: 8 | Date range: Nov 5-7, 2025 | Size: 1.2MB
Which project would you like to analyze?
If you already know the project path, specify it directly:
"Analyze my prompt quality for the project under ~/code/youtube/transcript/mcp"
"Analyze my prompt quality for /Users/username/code/my-app and save it as report.md"
"Show me token usage for the project in ~/code/experiments"
"What tools do I use most in the ~/code/my-app project?"
Key points:
You can request reports to be saved:
"Analyze prompt quality for ~/code/my-project and save as docs/analysis.md"
"Generate a full report for all projects and save to reports/monthly-review.md"
Your logs are organized like this:
~/.claude/projects/
โโโ -Users-username-code-my-app/ โ Project directory (escaped path)
โ โโโ session-uuid-1.jsonl โ Session log
โ โโโ session-uuid-2.jsonl
โ โโโ session-uuid-3.jsonl
โโโ -Users-username-code-experiments/
โ โโโ session-uuid-4.jsonl
How to reference projects:
/Users/username/code/my-app~/.claude/projects/-Users-username-code-my-app/Claude will automatically find the corresponding log directory.
For each project, Claude analyzes:
.jsonl session files in that project's log directoryTime ranges:
โ ๏ธ This skill can ONLY analyze:
~/.claude/projects/โ Cannot analyze:
When analyzing prompt quality, reference these official Claude prompt engineering principles:
"Show your prompt to a colleague with minimal context. If they're confused, Claude will likely be too."
Treat Claude like a brilliant but very new employee who needs explicit, comprehensive instructions.
Be Clear and Direct โญ Most Important
Use Examples (Multishot Prompting)
Let Claude Think (Chain of Thought)
Use XML Tags
Give Claude a Role (System Prompts)
Prefill Claude's Response
Chain Complex Prompts
โ Vague/Unclear:
โ Clear/Specific:
Includes Context:
Specific Instructions:
Appropriate Scope:
Professional Communication:
Scoring Guide:
Context-Aware Scoring Examples:
CRITICAL INSIGHT: Brevity is NOT always a problem. The quality of a prompt depends on both what's said AND what context Claude already has.
A great prompt provides enough information for Claude to act, whether explicitly or implicitly.
Context Claude can see from the current state of the workspace:
โ Git Context:
git diff showing what changedโ File Context:
โ Build/Test Context:
Context from the ongoing discussion:
โ Previous Discussion:
โ Follow-up Requests:
Situations where brevity IS a problem:
โ No Prior Discussion:
โ Ambiguous References:
โ No Environmental Clues:
These are EXCELLENT prompts, not problems:
โ
"git commit"
Context: Git diff visible, files changed
Why it's good: Claude has everything needed for a great commit message
โ
"git push"
Context: Just committed changes
Why it's good: Clear action, obvious target
โ
"run tests"
Context: Project structure visible
Why it's good: Claude knows the test framework and command
โ
"build it"
Context: Just finished implementing a feature
Why it's good: Build process is obvious from project type
โ
"npm test"
Context: Node project, package.json visible
Why it's good: Standard command with clear meaning
โ
"yes" / "no" / "1" / "2"
Context: Answering Claude's question
Why it's good: Direct response to options presented
โ
"continue"
Context: Claude paused and asked for confirmation
Why it's good: Clear instruction to proceed
โ
"try that"
Context: Just discussed an alternative approach
Why it's good: Conversation context makes "that" unambiguous
These NEED more information:
โ "fix the bug"
Context: None - no error shown, no file mentioned
Why it's bad: Which bug? Where? What's broken?
โ
Better: "fix the authentication error in src/auth/login.ts where JWT validation fails with 401"
โ "optimize it"
Context: None - no performance issue discussed
Why it's bad: Optimize what? For what goal?
โ
Better: "optimize the UserList component to reduce re-renders when parent updates"
โ "make it better"
Context: None - "better" is subjective
Why it's bad: Better how? What's the success criteria?
โ
Better: "refactor the function to be more readable by extracting the validation logic"
โ "update the component"
Context: Multiple components exist, none in current scope
Why it's bad: Which component? What updates?
โ
Better: "update the Button component in src/components/Button.tsx to use the new color tokens"
When analyzing prompts, consider:
High Score (8-10): Brief + High Context
Medium Score (5-7): Somewhat ambiguous but workable
Low Score (0-4): Brief + Low Context
When analyzing logs, celebrate efficient communication:
The goal is NOT to make every prompt long. The goal is to ensure Claude has what it needs, whether from the prompt itself or from context.
All Claude Code sessions are logged at: ~/.claude/projects/
Directory Structure:
-Users-username-path-to-project/.jsonl file named with a UUID (e.g., 10f49f43-53fd-4910-b308-32ba08f5d754.jsonl){
"type": "user",
"message": {
"role": "user",
"content": "the user's prompt text"
},
"timestamp": "2025-10-25T13:31:07.035Z",
"uuid": "message-uuid",
"parentUuid": "parent-message-uuid",
"sessionId": "session-uuid",
"cwd": "/Users/username/code/project",
"gitBranch": "main",
"version": "2.0.27"
}
{
"type": "assistant",
"message": {
"model": "claude-sonnet-4-5-20250929",
"role": "assistant",
"content": [
{
"type": "text",
"text": "The assistant's response text"
},
{
"type": "tool_use",
"id": "tool-uuid",
"name": "Read",
"input": {"file_path": "/path/to/file"}
}
],
"usage": {
"input_tokens": 1000,
"output_tokens": 500,
"cache_creation_input_tokens": 2000,
"cache_read_input_tokens": 5000
}
},
"timestamp": "2025-10-25T13:31:15.369Z",
"uuid": "message-uuid",
"parentUuid": "parent-message-uuid"
}
{
"type": "file-history-snapshot",
"snapshot": {
"trackedFileBackups": {},
"timestamp": "2025-10-25T13:31:07.059Z"
}
}
Read - File readingWrite - File writingEdit - File editingBash - Shell commandsGrep - Code searchGlob - File pattern matchingAskUserQuestion - Asking user for clarificationTodoWrite - Managing todo listsmcp__* - Various MCP server toolsWhen to trigger:
IMPORTANT: This is the premier feature of Prompt Coach. When triggered, you will:
How to execute:
Use the Task tool with general-purpose agent:
- description: "Generate comprehensive Claude Code analysis report"
- subagent_type: "general-purpose"
- prompt: "Analyze all Claude Code session logs in ~/.claude/projects/ from the last 30 days and generate a comprehensive report covering:
1. Token Usage & Cost Analysis (with deduplication)
2. Prompt Quality Analysis (context-aware scoring)
3. Tool Usage Patterns (built-in + MCP tools)
4. Session Efficiency Analysis
5. Productivity Time Patterns
6. File Modification Heatmap
7. Error & Recovery Analysis
8. Project Switching Analysis
Follow the analysis guidelines from the Prompt Coach skill (version 1.10.0).
Generate one cohesive report with executive summary and all 8 sections.
Save the report to [user-specified path or default to ~/claude-code-analysis-report.md]"
Report Structure:
# Claude Code Usage Analysis Report
Generated: [Date]
Analysis Period: Last 30 days
## ๐ Executive Summary
[High-level overview with key metrics:]
- Total cost: $X.XX
- Sessions analyzed: X
- Average prompt quality: X.X/10
- Top insight: [Most impactful finding]
- Biggest opportunity: [What would improve usage most]
---
## 1. ๐ฐ Token Usage & Cost Analysis
[Follow guidelines from "1. Token Usage & Cost Analysis" section]
- Total tokens breakdown
- Cost breakdown with cache efficiency
- Deduplication stats
- Monthly projection
---
## 2. โ๏ธ Prompt Quality Analysis
[Follow guidelines from "2. Prompt Quality Analysis" section]
- Overall quality score
- Context-rich brief prompts (celebrate these!)
- Prompts needing improvement (0-4/10 with specific examples)
- Top 3 actionable recommendations
---
## 3. ๐ ๏ธ Tool Usage Patterns
[Follow guidelines from "3. Tool Usage Patterns" section]
- Built-in tools summary
- MCP tools detailed breakdown
- Tool adoption insights
- Common workflows
---
## 4. โก Session Efficiency Analysis
[Follow guidelines from "4. Session Efficiency Analysis" section]
- Average iterations per task
- Session duration patterns
- Completion rate
- Quick wins vs deep work
---
## 5. ๐ Productivity Time Patterns
[Follow guidelines from "5. Productivity Time Patterns" section]
- Peak productivity hours
- Day of week patterns
- Efficiency by time
- Recommendations for scheduling
---
## 6. ๐ฅ File Modification Heatmap
[Follow guidelines from "6. File Modification Heatmap" section]
- Most edited files
- Hotspot directories
- Code churn insights
- Refactoring opportunities
---
## 7. ๐ Error & Recovery Analysis
[Follow guidelines from "7. Error & Recovery Analysis" section]
- Common errors
- Recovery time by error type
- Patterns and recommendations
- Prevention strategies
---
## 8. ๐ Project Switching Analysis
[Follow guidelines from "8. Project Switching Analysis" section]
- Number of active projects
- Time distribution
- Context switching cost
- Focus optimization tips
---
## ๐ฏ Top 5 Recommendations
[Synthesize the most impactful recommendations across all 8 analyses]
1. **[Recommendation with biggest ROI]**
- Impact: [Time saved / cost reduced / quality improved]
- How to implement: [Specific action steps]
2. **[Second most impactful]**
...
[Continue for top 5]
---
## ๐ก Next Steps
[3-5 concrete action items the user should take this week]
1. [ ] [Specific, measurable action]
2. [ ] [Specific, measurable action]
3. [ ] [Specific, measurable action]
---
*Report generated by Prompt Coach v1.10.0*
*Analysis based on session logs from ~/.claude/projects/*
When asked about tokens, costs, or spending:
Steps:
Use Bash to list recent .jsonl files and get file sizes:
find ~/.claude/projects -name "*.jsonl" -type f -mtime -30 -exec ls -lh {} \;
Read a representative sample of files (5-10 recent ones)
CRITICAL: Deduplicate entries to match actual billing:
message.id + requestId combinations in a SetDeduplication logic:
For each line in JSONL:
- Extract message.id and requestId
- Create hash: `${message.id}:${requestId}`
- If hash already processed: SKIP this entry
- Otherwise: mark hash as processed and count tokens
Parse each unique entry and extract usage data:
input_tokensoutput_tokenscache_creation_input_tokenscache_read_input_tokensCRITICAL: Use model-specific pricing - Extract model from message.model field:
Claude API Pricing (Current as of Nov 2025):
| Model | Input | Output | Cache Writes | Cache Reads |
|---|---|---|---|---|
Opus 4.1 (claude-opus-4-1-*) | $15/1M | $75/1M | $18.75/1M | $1.50/1M |
Sonnet 4.5 (claude-sonnet-4-5-*) โค200K | $3/1M | $15/1M | $3.75/1M | $0.30/1M |
Sonnet 4.5 (claude-sonnet-4-5-*) >200K | $6/1M | $22.50/1M | $7.50/1M | $0.60/1M |
Haiku 4.5 (claude-haiku-4-5-*) | $1/1M | $5/1M | $1.25/1M | $0.10/1M |
Haiku 3.5 (claude-haiku-3-5-*) | $0.80/1M | $4/1M | $1/1M | $0.08/1M |
Opus 3 (claude-3-opus-*) | $15/1M | $75/1M | $18.75/1M | $1.50/1M |
NOTE: Opus is 5x more expensive than Sonnet!
Model Detection:
For each unique entry:
- Extract model from message.model field
- Match model name to pricing table
- Group tokens by model
- Calculate cost per model using correct rates
Understand your pricing model and tailor recommendations:
๐ For Pay-Per-Use Users (API billing):
๐ For Subscription Users (Claude Pro, Team, Enterprise):
๐ก How to tell which pricing model you're on:
Present breakdown:
Example Output:
๐ Token Usage Analysis (Last 30 Days)
๐ฐ **Total Cost: $288.13** (matches actual Anthropic billing)
## By Model:
**Sonnet 4.5** (3,662 calls, 81.2%)
- Input: 191,659 ($0.58)
- Output: 135,505 ($2.03)
- Cache writes: 20,010,946 ($75.04)
- Cache reads: 240,989,306 ($72.30)
- **Subtotal: $149.95**
**Opus 4.1** (769 calls, 17.1%)
- Input: 3,176 ($0.05)
- Output: 30,440 ($2.28)
- Cache writes: 2,595,837 ($48.67)
- Cache reads: 57,156,831 ($85.74)
- **Subtotal: $136.74** โ ๏ธ 5x more expensive than Sonnet!
**Haiku 4.5** (77 calls, 1.7%)
- Input: 54,265 ($0.05)
- Output: 19,854 ($0.10)
- Cache writes: 93,590 ($0.12)
- Cache reads: 666,241 ($0.07)
- **Subtotal: $0.34**
๐ Deduplication Summary:
- Total entries found: 44,036
- Duplicate entries: 6,444 (14.6%)
- Unique API calls: 4,508
- Duplication factor: 9.77x
โก Cache Efficiency: 99.9% hit rate
๐ฐ Cache savings: $806.79
---
## ๐ก Recommendations
**๐ For Pay-Per-Use Users:**
Your Opus usage (17.1%) costs $136.74 - that's 91% of your total spend!
- Consider using Sonnet for complex tasks instead (5x cheaper)
- Reserve Opus for truly difficult problems only
- **Potential savings:** ~$80-100/month by shifting Opus โ Sonnet
**๐ For Subscription Users:**
Cache optimization is still valuable for speed:
- Keep sessions focused on single tasks (maintains cache)
- Avoid context switching (breaks cache, slows responses)
- Your 99.9% cache hit rate is excellent - keep it up!
**๐ For Everyone:**
Haiku is underutilized (1.7% of calls):
- Perfect for: file reads, basic edits, simple commands
- Consider using Haiku for 20-30% of tasks
- Much faster responses for simple operations
When asked about prompt quality or clarity:
๐ค Recommended Approach: Use a Subagent
For prompt quality analysis, use the Task tool with general-purpose agent to handle the complexity of context-aware analysis:
Use Task tool with:
- subagent_type: "general-purpose"
- Provide the project path or "analyze all projects"
- Include instructions to apply v1.5.0 context-aware analysis from this skill
Why use a subagent:
The agent should:
Steps (for the subagent to follow):
Read recent session files (last 7-14 days)
For each session, identify user prompts (type: "user")
Check if the following assistant message contains:
Detect Vague Prompt Patterns - Look for these red flags in user prompts that trigger clarifications:
โ ๏ธ CRITICAL: Context-Aware Analysis
Before flagging ANY prompt as vague, check the conversation context:
ONLY flag as vague if:
โ Context-Rich Brief Prompts (DO NOT FLAG as vague)
Before flagging a brief standalone prompt as vague, check if it has implicit context from the environment:
Git Commands (Claude has git diff context):
Build/Test Commands (Claude has project structure context):
Standard Development Commands (clear from context):
Follow-up Prompts (Claude just did work):
Continuation Prompts (building on previous work):
IMPORTANT: Only recognize these patterns as context-rich if:
If a brief prompt does NOT match these patterns and has no environmental/conversation context, then apply the vague prompt flags below.
๐ฉ Missing File Context (standalone prompts only):
๐ฉ Vague Action Words (standalone prompts only):
๐ฉ Missing Error Details (standalone prompts):
๐ฉ Ambiguous Scope (standalone prompts):
๐ฉ Missing Approach/Method (standalone prompts):
Extract Real Examples - Pull actual vague prompts from logs and show what Claude asked for clarification:
Score sample prompts using the scoring criteria:
Calculate:
Categorize issues using official prompt engineering problems:
CRITICAL: Generate "Areas for Improvement" Section - For prompts scoring 0-4/10:
This section is MANDATORY if ANY prompts score 0-4/10
Provide specific recommendations based on Prompt Engineering Best Practices above, with focus on:
Example Output:
๐ Prompt Quality Analysis (Last 14 Days)
Total prompts: 145
Context-aware analysis: 145 prompts categorized
Average prompt score: 6.8/10 (Very Good!)
โ
Context-Rich Brief Prompts Identified: 23 (16%)
Examples: "git commit", "run tests", "build", "npm install"
These score 8-10/10 - excellent use of environmental context!
๐ Prompt Category Breakdown:
- Excellent (8-10): 45 prompts (31%) - Context-rich OR detailed
- Good (5-7): 71 prompts (49%) - Adequate information
- Needs Work (0-4): 29 prompts (20%) - Brief AND low context
Clarifications needed: 29 (20%) - Down from typical 35%!
๐ฉ Most Common Issues (context-poor prompts only):
1. Missing file context: 18 prompts (when no files in scope)
2. Missing error details: 14 prompts (when debugging without error shown)
3. Missing success criteria: 16 prompts (vague goals like "optimize")
4. Missing approach: 12 prompts (when multiple methods possible)
๐ด Real Examples from Your Logs (context-poor prompts):
**Example 1: Missing File Context**
โ Your prompt: "fix the bug"
๐ค Claude asked: "Which file has the bug? What's the error message or symptom?"
โ
Better prompt: "fix the authentication bug in src/auth/login.ts where JWT validation fails with 401 error"
๐ Cost: +2 minutes, +1 iteration
**Example 2: Vague Action Words**
โ Your prompt: "optimize the component"
๐ค Claude asked: "Which component? What performance issue? What's the target?"
โ
Better prompt: "optimize UserList component in src/components/UserList.tsx by adding React.memo to reduce unnecessary re-renders when parent updates"
๐ Cost: +3 minutes, +1 iteration
**Example 3: Missing Approach**
โ Your prompt: "add caching"
๐ค Claude asked: "Where should caching be added? What caching strategy? (Redis, memory, file-based?)"
โ
Better prompt: "add Redis caching to the API responses in src/api/client.ts with 5-minute TTL, similar to how we cache user data"
๐ Cost: +4 minutes, +2 iterations
**Example 4: Missing Error Details**
โ Your prompt: "it's not working"
๐ค Claude asked: "What's not working? What's the expected behavior vs what's happening?"
โ
Better prompt: "the login form isn't submitting - clicking the submit button does nothing, no network requests in console, expected to see POST to /api/auth/login"
๐ Cost: +2 minutes, +1 iteration
---
## โ ๏ธ Areas for Improvement (Prompts Scoring 0-4/10)
**CRITICAL: If there are prompts scoring 0-4/10, list EVERY SINGLE ONE with specific examples:**
While most of your prompts are good, here are the **X specific prompts that scored 3-4/10** and need improvement:
### Prompts That Need Work
**Example 1: Too Brief Without Context** (Score: 3/10)
โ **Your prompt:** "test"
- **Problem:** No context about what to test, which tests to run, or which file
- **Context available:** None - standalone request
- **What happened:** Claude likely had to ask: "Which tests? Unit tests? Integration tests? For which component?"
โ
**Better prompt:** "run the unit tests for the YouTube transcript fetcher in src/index.test.ts"
- **Why better:** Specifies test type, component, and file path
- **Time saved:** ~2 minutes
**Example 2: Vague Action Without Specifics** (Score: 4/10)
โ **Your prompt:** "update the docs"
- **Problem:** Doesn't specify which documentation or what updates to make
- **Context available:** Multiple doc files exist
- **What happened:** Claude needed clarification on which docs and what information to add
โ
**Better prompt:** "update README.md to include installation instructions and usage examples for the get-transcript tool"
- **Why better:** Specific file, specific sections, clear requirements
- **Time saved:** ~3 minutes
[Continue for ALL prompts scoring 0-4/10...]
### Impact of These Improvements
**Current state:**
- X prompts needed significant clarification
- Average Y minutes lost per unclear prompt
- **Total time lost: ~Z minutes**
**If improved:**
- Direct answers without clarification
- **Potential time savings: ~Z minutes** in this project alone
- **Annualized savings:** ~N hours/year on similar projects
### Common Patterns to Avoid
Based on these X examples, watch out for:
1. **๐ฉ Standalone brief prompts without context**
- "test", "fix", "update" โ Need specifics
2. **๐ฉ Vague action verbs without details**
- "improve", "optimize", "make it work" โ Need measurable outcomes
3. **๐ฉ Missing file paths**
- "update the docs", "add validation" โ Include file names
4. **๐ฉ Ambiguous pronouns**
- "it", "this", "that" without clear referent โ Name the specific component
5. **๐ฉ No error context**
- "fix the error" โ Include error message and location
6. **๐ฉ No success criteria**
- "improve performance" โ Define baseline and target
---
๐ Prompt Quality Score Breakdown:
- Excellent (8-10): 23 prompts (16%) - Clear, specific, actionable
- Good (5-7): 71 prompts (49%) - Minor improvements possible
- Needs Work (3-4): 38 prompts (26%) - Missing key information
- Poor (0-2): 13 prompts (9%) - Requires significant clarification
๐ Impact Analysis:
- 29 prompts needed clarification (down from typical 35%!)
- Average time lost per clarification: 2.8 minutes
- Total time lost to context-poor prompts: ~1.4 hours
- **Potential time savings: ~45 minutes by improving remaining context-poor prompts**
๐ What You're Doing Right (Keep It Up!):
โ
**Context-Rich Brief Prompts: 23 prompts (16%)**
Examples from your logs:
- "git commit" โ Claude used git diff to create perfect commit message
- "run tests" โ Claude knew your test framework from package.json
- "build" โ Clear action with obvious build process
- "npm install" โ Standard command, no ambiguity
๐ฐ Time saved: ~1.5 hours by NOT over-explaining when context is clear!
โ
**Valid Responses: 6 prompts**
- Answered Claude's questions concisely ("yes", "1", "2")
- Perfect communication efficiency
โ
**Detailed Prompts: 42 prompts (29%)**
- Clear file paths, error messages, and success criteria
- These work great even without environmental context
**Keep using this efficient approach!** You're already saving time by trusting Claude to use available context.
๐ฏ Your Top 3 Improvements (Maximum Impact):
๐ก Note: You're already using context well with git commands and build tools!
**1. Include File Paths When No Files in Scope (18 clarifications)**
When to add file paths: When you're not already working with the file
When NOT needed: After reading/editing a file, or when only one file is relevant
Template: "[action] in [file path] [details]"
Examples:
- โ "fix the bug" (no file in context)
- โ
"fix the validation error in src/utils/validator.ts where email regex fails"
- โ
"update the Button component in src/components/Button.tsx to match design system"
๐ฐ Impact: Would eliminate ~18 clarifications (~50 min saved)
**2. Provide Error Details When Debugging (23% of clarifications)**
Template: "fix [error message] in [file] - expected [X], getting [Y]"
Examples:
- "fix 'Cannot read property of undefined' error in src/hooks/useAuth.ts line 42 - expected user object, getting undefined"
- "fix TypeScript error TS2322 in src/types/User.ts - type mismatch on email field"
๐ฐ Impact: Would eliminate ~12 clarifications (~25 min saved)
**3. Define Success Criteria for Vague Actions (30% of clarifications)**
Instead of: "optimize", "improve", "make better", "clean up"
Use: "[action] to achieve [specific measurable outcome]"
Examples:
- "optimize database queries in src/db/users.ts to reduce response time from 800ms to <200ms"
- "refactor UserList component to use virtual scrolling and handle 10,000+ items smoothly"
๐ฐ Impact: Would eliminate ~15 clarifications (~40 min saved)
๐ก Quick Win: Apply these templates to your next 10 prompts and watch your clarification rate drop!
๐ช You're doing well! Your prompts are 65% effective. Focus on these 3 improvements and you'll hit 85%+ effectiveness, saving ~1-2 hours per week.
When asked about tools, workflows, or how they code:
Steps:
Read recent session files
Extract all tool_use blocks from assistant messages
Count usage by tool name
Group tools into categories:
mcp__ or custom toolsmcp__playwright__navigate โ playwright server)Identify patterns:
Provide recommendations focused on MCP tool adoption and usage
Example Output:
๐ ๏ธ Tool Usage Patterns (Last 30 Days)
Built-in Claude Code Tools:
โโ Total: 955 uses (Read: 450, Edit: 220, Bash: 150, Write: 89, Grep: 34, Glob: 12)
๐ MCP & 3rd Party Tools:
1. playwright (server) โโโโโโโโโโโโโโโโโโโโ 287 uses
โโ navigate 98 uses
โโ screenshot 76 uses
โโ click 54 uses
โโ fill 32 uses
โโ evaluate 27 uses
2. browserbase (server) โโโโโโโโโโโโ 156 uses
โโ stagehand_navigate 45 uses
โโ stagehand_act 52 uses
โโ stagehand_extract 39 uses
โโ screenshot 20 uses
3. youtube-transcript (server) โโโโ 34 uses
โโ get-transcript 34 uses
4. pdf-reader (server) โโ 18 uses
โโ read-pdf 12 uses
โโ search-pdf 6 uses
๐ก Insights:
๐ Great MCP adoption! You're using 4 different MCP servers
โ 495 MCP tool calls vs 955 built-in tools
โ MCP tools account for 34% of your tool usage
โ
Playwright is your most-used MCP server
โ Heavily used for browser automation
โ Good mix of navigation, interaction, and screenshots
๐ Browserbase + Stagehand pattern detected
โ You're leveraging AI-powered browser control
โ 156 uses show strong automation workflow
๐ก Opportunity: Consider these MCP servers you haven't tried:
โ @modelcontextprotocol/server-filesystem for advanced file ops
โ @modelcontextprotocol/server-sqlite for database work
โ @modelcontextprotocol/server-github for PR/issue management
๐ Common MCP workflows:
1. playwright navigate โ screenshot โ click (23 times)
โ Browser testing/automation pattern
2. browserbase navigate โ stagehand_extract (15 times)
โ Data scraping pattern
3. youtube-transcript get-transcript โ Edit (12 times)
โ Video content analysis workflow
When asked about productivity, efficiency, or iterations:
Steps:
Read recent session files
For each session (group by sessionId):
git commitnpm run build, cargo build)npm test, pytest)Calculate metrics:
Example Output:
โก Session Efficiency Analysis
Sessions analyzed: 45
Average iterations per task: 3.5
Median iterations: 2
Session duration (avg): 18 minutes
Completion patterns:
- Quick wins (<5 min): 23 sessions (51%)
- Standard tasks (5-30 min): 15 sessions (33%)
- Deep work (>30 min): 7 sessions (16%)
๐ก Insights:
โ
You're efficient! 51% of tasks complete in <5 minutes
๐ Iteration breakdown:
- 1 iteration: 12 sessions - Clear requirements
- 2-3 iterations: 20 sessions - Normal back-and-forth
- 4+ iterations: 13 sessions - Unclear requirements or complex tasks
๐ฏ Tip: Sessions with 4+ iterations often started with vague prompts.
Being more specific upfront could save ~8 min/task.
When asked about productive hours, when they work best:
Steps:
Read session files from last 30 days
Extract all timestamps and parse them
Group sessions by:
For each time bucket, calculate:
Example Output:
๐ Productivity Time Patterns (Last 30 Days)
Peak productivity hours:
1. 14:00-17:00 โโโโโโโโโโโโ (32 sessions, 2.1 avg iterations)
2. 09:00-12:00 โโโโโโโโ (24 sessions, 2.8 avg iterations)
3. 20:00-23:00 โโโโ (15 sessions, 4.2 avg iterations)
Most efficient: 14:00-17:00 (afternoon)
- 40% fewer iterations than average
- 25% faster completion time
- Higher task completion rate
Least efficient: 20:00-23:00 (evening)
- 50% more iterations needed
- More clarification requests
- More Bash command failures
Day of week patterns:
Tuesday: โโโโโโโโ Most productive
Wednesday: โโโโโโโ
Thursday: โโโโโโ
Monday: โโโโ Slower start
Friday: โโโ Winding down
๐ก Recommendation: Schedule complex tasks between 2-5pm on Tue-Thu
When asked about what files they work on, code hotspots:
Steps:
Read recent session files
Extract all tool_use blocks with names: Edit, Write
Parse the file_path from each tool's input
Count modifications per file
Group by directory to find hotspots
Example Output:
๐ฅ File Modification Heatmap (Last 30 Days)
Most edited files:
1. src/components/Button.tsx โโโโโโโโโโโโ 47 edits
2. src/utils/api.ts โโโโโโโโ 32 edits
3. src/hooks/useAuth.ts โโโโโโ 23 edits
4. tests/components/Button.test.tsx โโโโโ 19 edits
5. src/types/index.ts โโโโ 16 edits
Hotspot directories:
1. src/components/ โโโโโโโโโโโโโโโโโโ 89 edits
2. src/utils/ โโโโโโโโ 45 edits
3. tests/ โโโโโโ 34 edits
๐ก Insights:
๐ฅ Button.tsx is your hottest file (47 edits)
โ Consider if this component needs refactoring
โ High edit frequency can indicate code smell
โ
Good test coverage signal:
โ 19 edits to Button.test.tsx
โ You're maintaining tests alongside code
๐ Component-heavy development:
โ 62% of edits in src/components/
โ UI-focused work this month
When asked about errors, problems, or troubleshooting:
Steps:
Read recent session files
Look for error indicators in Bash tool results:
Measure recovery patterns:
Example Output:
๐ Error & Recovery Analysis
Errors encountered: 23
Common errors:
1. npm install failures โโโโโโโโ 8 occurrences
โ Avg recovery time: 4.5 min
โ Common cause: Node version mismatch
2. TypeScript compile errors โโโโโโ 6 occurrences
โ Avg recovery time: 8 min
โ Common cause: Type mismatches
3. Test failures โโโโ 4 occurrences
โ Avg recovery time: 12 min
๐ก Recommendations:
1. npm install issues:
โ Add .nvmrc file to project
โ Use `nvm use` before installing
โ Saves ~4 min per occurrence
2. TypeScript errors:
โ Run `tsc --watch` during development
โ Catch errors before committing
When asked about context switching, focus time:
Steps:
Read session files from multiple project directories
Track when cwd (current working directory) changes between sessions
Calculate:
Example Output:
๐ Project Switching Analysis (Last 7 Days)
Active projects: 5
Total switches: 23
Avg switches per day: 3.3
Time distribution:
1. ~/code/main-app โโโโโโโโโโโโ 12 hours (55%)
2. ~/code/side-project โโโโ 4 hours (18%)
3. ~/code/dotfiles โโโ 3 hours (14%)
4. ~/code/experiments โโ 2 hours (9%)
5. ~/code/scripts โ 1 hour (4%)
Context switching cost:
- Avg overhead per switch: 12 minutes
- Total overhead this week: 4.6 hours
- Estimated productivity loss: 20%
๐ก Recommendation:
You switched projects 23 times in 7 days. Consider:
- Time-blocking: Dedicate specific days to specific projects
- Batch similar tasks: Do all dotfile updates in one session
- Your focus time is best on main-app (fewer interruptions)
Sample Intelligently
Parse JSON Carefully
Respect Privacy
Provide Actionable Insights
Use Visualizations
To find all sessions from a specific project:
ls -la ~/.claude/projects/-Users-username-code-projectname/
To find sessions from a date range:
find ~/.claude/projects -name "*.jsonl" -newermt "2025-01-01" -ls
To quickly check total log size:
du -sh ~/.claude/projects
To count total sessions:
find ~/.claude/projects -name "*.jsonl" | wc -l
message.id + requestId hash to match actual billing. Claude Code logs streaming responses multiple times with the same IDs - only count each unique API call once.message.model field and apply correct rates. Opus is 5x more expensive than Sonnet!