---
name: skill-dependency-mapper
description: Analyzes skill ecosystem to visualize dependencies, identify workflow bottlenecks, and recommend optimal skill stacks. Use when asked about skill combinations, workflow optimization, bottleneck identification, or which skills work together. Triggers include phrases like "which skills work together", "skill dependencies", "workflow bottlenecks", "optimal skill stack", or "recommend skills for".
---
Analyzes the skill ecosystem to understand relationships, identify inefficiencies, and optimize workflows.
Use this skill when users ask about:

- skill combinations and which skills work well together
- workflow optimization
- bottleneck identification
- recommended skill stacks for a given task
Run the analyzer script to extract metadata from all available skills:
```bash
cd /home/claude/skill-dependency-mapper
python scripts/analyze_skills.py > /tmp/skill_analysis.txt
```
The script extracts, for each skill:

- description
- tools used
- file formats handled
- topical domains
- a complexity score
- size
- type (`user` or `public`)
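For orientation, a single analyzed record might look like the following. This is an illustrative sketch only: the field names are inferred from the examples later in this document, and the actual schema produced by `analyze_skills.py` may differ.

```python
# Illustrative shape of one analyzed skill record (field names taken
# from the examples in this document; the real schema may differ).
skill_record = {
    "description": "Builds financial reports from spreadsheets",
    "tools": {"web_search", "repl"},   # tools the skill invokes
    "formats": {"xlsx", "pdf"},        # file formats it reads/writes
    "domains": {"financial"},          # topical domains it covers
    "complexity_score": 3,             # rough complexity heuristic
    "size": 1800,                      # on-disk size of the skill
    "type": "user",                    # 'user' or 'public'
}

print(skill_record["type"])
```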
For bottleneck analysis, first capture skill data as JSON:
```python
import json

from scripts.analyze_skills import SkillAnalyzer

analyzer = SkillAnalyzer()
analyzer.scan_skills()

# Save for bottleneck detection
with open('/tmp/skill_data.json', 'w') as f:
    json.dump(analyzer.skills, f, default=list)
```
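One detail worth noting: `default=list` is what lets `json.dump` handle set-valued fields (assuming the analyzer stores `tools`, `formats`, and `domains` as sets, as the `&` intersection operator in the later examples implies). A minimal demonstration:

```python
import json

# Sets are not JSON-serializable; default=list is called on any
# non-serializable object (here, the set) and converts it via list().
record = {"tools": {"bash", "web_search"}}

encoded = json.dumps(record, default=list)
decoded = json.loads(encoded)

# The set round-trips as a list; element order is not guaranteed.
print(sorted(decoded["tools"]))
```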
Then run bottleneck detection:
```bash
python scripts/detect_bottlenecks.py /tmp/skill_data.json > /tmp/bottlenecks.txt
```
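To make the kind of check concrete, here is a sketch of a high-tool-usage detector. This is illustrative only, not the actual logic of `detect_bottlenecks.py`; it assumes skill records with a `tools` field, as in the examples elsewhere in this document.

```python
def detect_high_tool_usage(skills, threshold=4):
    """Flag skills that declare more tools than the threshold.

    Heavy tool usage is a common workflow bottleneck: each tool adds
    setup and context cost whenever the skill is loaded.
    """
    return {
        name: len(data["tools"])
        for name, data in skills.items()
        if len(data["tools"]) > threshold
    }

# Toy data for illustration
skills = {
    "report-builder": {"tools": {"bash", "web_search", "repl", "files", "pdf"}},
    "note-taker": {"tools": {"files"}},
}

# Only report-builder exceeds the 4-tool threshold
print(detect_high_tool_usage(skills))
```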
Create a text-based dependency visualization:
```python
from scripts.analyze_skills import SkillAnalyzer

analyzer = SkillAnalyzer()
analyzer.scan_skills()

# Generate dependency mapping
dependencies = analyzer.find_dependencies()

# Format as markdown
output = ["# Skill Dependency Map\n"]
for skill, related in sorted(dependencies.items()):
    if related:
        output.append(f"## {skill}\n")
        output.append("Works well with:\n")
        for related_skill in sorted(related)[:8]:
            # Show why they're related
            skill_a = analyzer.skills[skill]
            skill_b = analyzer.skills[related_skill]
            shared = []
            if skill_a['tools'] & skill_b['tools']:
                shared.append(f"tools: {', '.join(skill_a['tools'] & skill_b['tools'])}")
            if skill_a['formats'] & skill_b['formats']:
                shared.append(f"formats: {', '.join(skill_a['formats'] & skill_b['formats'])}")
            if skill_a['domains'] & skill_b['domains']:
                shared.append(f"domains: {', '.join(skill_a['domains'] & skill_b['domains'])}")
            reason = " | ".join(shared) if shared else "complementary"
            output.append(f"- **{related_skill}** ({reason})\n")
        output.append("\n")

print('\n'.join(output))
```
For task-specific recommendations:
```python
from scripts.analyze_skills import SkillAnalyzer

analyzer = SkillAnalyzer()
analyzer.scan_skills()

# Get recommended stacks
stacks = analyzer.recommend_stacks()

# Format output
output = ["# Recommended Skill Stacks\n"]
for stack in stacks:
    output.append(f"## {stack['name']}\n")
    output.append(f"**Use case**: {stack['use_case']}\n\n")
    output.append("**Skills**:\n")
    for skill in sorted(stack['skills']):
        skill_data = analyzer.skills[skill]
        output.append(f"- **{skill}** - {skill_data['description'][:80]}...\n")
    output.append("\n")

print('\n'.join(output))
```
For specific queries, filter and analyze programmatically:
```python
from scripts.analyze_skills import SkillAnalyzer

analyzer = SkillAnalyzer()
analyzer.scan_skills()

# Example: find all skills that use web_search
web_skills = [
    name for name, data in analyzer.skills.items()
    if 'web_search' in data['tools']
]

# Example: find skills by domain
financial_skills = [
    name for name, data in analyzer.skills.items()
    if 'financial' in data['domains']
]

# Example: find lightweight skills
lightweight = [
    name for name, data in analyzer.skills.items()
    if data['complexity_score'] < 5 and data['size'] < 2000
]
```
Generate concise markdown reports and keep the output token-efficient.
Optimal stacks minimize redundant tool usage and combined token cost.
For established patterns and anti-patterns, reference:
```bash
view /home/claude/skill-dependency-mapper/references/known_patterns.md
```
Use this reference when evaluating whether a proposed combination matches a known pattern or anti-pattern.
Modify detection thresholds in `detect_bottlenecks.py`:

```python
detector.detect_high_tool_usage(threshold=4)                   # adjust tool count threshold
detector.detect_large_references(size_threshold=8000)          # adjust reference size threshold
detector.detect_token_budget_risks(combined_threshold=12000)   # adjust combined size threshold
```
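As a sketch of what the combined-size check could do (a hypothetical implementation, not the actual `detect_token_budget_risks` logic; only the `size` field is assumed from the earlier examples), flag skill pairs whose total size exceeds the budget:

```python
from itertools import combinations

def token_budget_risks(skills, combined_threshold=12000):
    """Return skill pairs whose combined size exceeds the threshold.

    Loading two large skills together can exhaust the token budget
    even when each one is fine on its own.
    """
    risky = []
    for (a, da), (b, db) in combinations(sorted(skills.items()), 2):
        if da["size"] + db["size"] > combined_threshold:
            risky.append((a, b))
    return risky

# Toy data: only the two large skills together exceed 12000
skills = {
    "big-analyzer": {"size": 9000},
    "huge-reporter": {"size": 8000},
    "tiny-helper": {"size": 500},
}
print(token_budget_risks(skills))
```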
Analyze only specific skill types:
```python
analyzer = SkillAnalyzer()
analyzer.scan_skills()

# User skills only
user_skills = {
    name: data for name, data in analyzer.skills.items()
    if data['type'] == 'user'
}

# Public skills only
public_skills = {
    name: data for name, data in analyzer.skills.items()
    if data['type'] == 'public'
}
```