---
name: mcp-skill-creator
description: Meta-skill for creating workflow-optimized skills from MCP servers. Use when users want to create a custom skill that integrates one or more MCP servers into a specialized workflow. The user provides MCP server configurations and describes their work scenario (workflow, preferences, SOPs), and this skill generates a new skill with optimized scripts following Anthropic's MCP + code execution best practices.
---
This meta-skill creates workflow-optimized skills from MCP servers using code execution patterns inspired by Anthropic's MCP engineering practices.
Transform MCP servers into specialized, personalized workflow skills tailored to the user's workflow, preferences, and standard operating procedures.
Use this skill when a user wants to turn one or more MCP servers into a custom skill built around their specific work scenario.
ALWAYS check and install dependencies FIRST before doing anything else:
python3 -c "import mcp; print('✓ MCP SDK is installed')" 2>/dev/null || pip3 install mcp --break-system-packages
Automatic Installation Process:
pip3 install mcp --break-system-packages

DO NOT ask the user to manually install dependencies - you should handle this automatically as part of the skill creation process.
Why this matters: The introspector and generated scripts require the mcp package. Installing it upfront ensures a smooth workflow.
Follow these steps to create an MCP-powered skill. This process combines programmatic MCP infrastructure generation with LLM-driven skill design, following skill-creator principles.
You should automatically check and install the MCP SDK if needed:
python3 -c "import mcp; print('✓ MCP SDK is installed')" 2>/dev/null || pip3 install mcp --break-system-packages
Process:
pip3 install mcp --break-system-packages

Why needed: The introspector and generated scripts use the mcp package to connect to MCP servers.
DO NOT ask the user to manually install - handle this automatically as part of the workflow.
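As an illustration of how generated scripts can enforce this, here is a minimal sketch of an import-time guard; the actual check emitted by the generator may be worded differently:

```python
import subprocess
import sys

def ensure_mcp_sdk() -> None:
    """Check for the mcp package and attempt an automatic install if it is missing."""
    try:
        import mcp  # noqa: F401
    except ImportError:
        print("MCP SDK not found - attempting automatic install...")
        result = subprocess.run(
            [sys.executable, "-m", "pip", "install", "mcp", "--break-system-packages"]
        )
        if result.returncode != 0:
            sys.exit("Could not install the mcp package; install it manually and retry.")

ensure_mcp_sdk()
```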
Before starting, collect the following from the user:
MCP Server Configurations (required):
```json
{
  "mcp_servers": [
    {
      "name": "server-name",
      "command": ["npx", "-y", "@modelcontextprotocol/server-..."]
    }
  ]
}
```
Workflow Description (required):
User Preferences (optional):
Standard Operating Procedures (optional):
If user provides a single configuration file, parse it to extract these components.
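If the user does hand over one combined file, a small parser can split it into those components. A minimal sketch, assuming a single JSON file with hypothetical mcp_servers, workflow, preferences, and sops keys (adapt to whatever structure the user actually provides):

```python
import json
from pathlib import Path

def parse_user_config(path: str) -> dict:
    """Split a combined config file into the inputs this skill needs.

    The key names below are assumptions; real user files may differ.
    """
    data = json.loads(Path(path).read_text())
    return {
        "mcp_servers": data.get("mcp_servers", []),  # required
        "workflow": data.get("workflow", ""),        # required
        "preferences": data.get("preferences", []),  # optional
        "sops": data.get("sops", []),                # optional
    }

if __name__ == "__main__":
    parsed = parse_user_config("user_config.json")
    missing = [k for k in ("mcp_servers", "workflow") if not parsed[k]]
    if missing:
        print(f"Ask the user for: {', '.join(missing)}")
```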
This step generates the MCP client infrastructure and tool discovery utilities.
Use scripts/mcp_introspector.py to discover available tools:
```bash
# Create MCP config file in skill directory
echo '{
  "servers": [
    {
      "name": "filesystem",
      "command": ["npx", "-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
    }
  ]
}' > <skill-dir>/mcp_config.json

# Run introspection
python scripts/mcp_introspector.py <skill-dir>/mcp_config.json introspection.json
```
This produces a JSON file with all available tools, their parameters, and descriptions.
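To sanity-check the result without reading the whole file, a short script can summarize what was discovered. This is only a sketch: the nesting shown in the comment is an assumed shape, not the introspector's documented schema, so adjust it to the file mcp_introspector.py actually writes.

```python
import json
from pathlib import Path

# Assumed shape (verify against the real introspection output):
# {"filesystem": {"tools": [{"name": "...", "description": "...", "inputSchema": {...}}]}}
data = json.loads(Path("introspection.json").read_text())

for server, info in data.items():
    tools = info.get("tools", [])
    print(f"{server}: {len(tools)} tools")
    for tool in tools[:5]:
        print(f"  - {tool['name']}: {tool.get('description', '')[:60]}")
```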
Use scripts/generate_mcp_wrappers.py to create the infrastructure:
python scripts/generate_mcp_wrappers.py introspection.json <skill-dir>
This creates:
- scripts/mcp_client.py - Working MCP client with proper connection management
- scripts/list_mcp_tools.py - Dynamic tool discovery (Progressive Disclosure)
- scripts/tools/<server>/ - (Optional) Type-safe wrappers for each tool

Generated structure:
```
<skill-dir>/
├── mcp_config.json          # Server configuration
└── scripts/
    ├── mcp_client.py        # Working implementation
    ├── list_mcp_tools.py    # View tool docs on-demand
    └── workflows/           # (You'll create these)
        └── your_workflow.py
```
Progressive Disclosure means tools are discovered on-demand, not pre-loaded. Three ways to view docs:
1. Dynamic Query (Recommended):
cd <skill-dir>/scripts
python list_mcp_tools.py
Shows all available tools with parameters and descriptions.
2. Generate Static Reference:
python list_mcp_tools.py > references/mcp_tools_reference.txt
Save for offline reference.
3. In SKILL.md: List only the most commonly used tools; full docs remain available via methods 1 and 2.
Key Insight: You don't need wrapper files for each tool. Just use call_mcp_tool() directly:
```python
from mcp_client import call_mcp_tool

result = await call_mcp_tool('filesystem', 'search_files', {
    'path': '/path',
    'pattern': 'myfile'
})
```
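Because call_mcp_tool() is async, a standalone script needs an event loop around it. A minimal sketch using asyncio.run, assuming the generated mcp_client module exposes call_mcp_tool as an async function:

```python
import asyncio

from mcp_client import call_mcp_tool  # generated by generate_mcp_wrappers.py

async def main():
    # Same call as above, wrapped so it can run as a script
    result = await call_mcp_tool('filesystem', 'search_files', {
        'path': '/path',
        'pattern': 'myfile'
    })
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
```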
Now analyze the user's workflow description to understand what this skill needs to accomplish. Similar to skill-creator's Step 1.
Ask clarifying questions if needed:
Identify workflow characteristics:
Example Analysis:
User says: "I research products by checking the official site, ProductHunt, Twitter, and Reddit, then create a report"
Analysis:
Based on the workflow analysis, determine what to include in the skill. Follow skill-creator principles.
Create scripts when:
Use text guidance when:
For each workflow script to create, determine:
1. Script purpose: What part of the workflow does it handle?
2. MCP tools needed: Which tools from which servers?
3. Optimization patterns: which apply (e.g., asyncio.gather() for independent fetches)
4. Parameters: What inputs does the script need?
5. Output: What does it return? (Prefer summaries over full data)
Determine what goes in SKILL.md:
Essential:
Optional references/:
Now create the actual skill files. This is where you write code and documentation.
For each planned script, create scripts/workflows/<script_name>.py:
Follow these patterns from Anthropic's MCP best practices:
Pattern 1: Parallel Fetch + Aggregate
```python
async def research_pipeline(product_url: str, product_name: str) -> dict:
    """Complete research workflow with parallel data gathering"""
    # Parallel fetch from multiple sources
    official_task = google_devtools.fetch_page(product_url)
    twitter_task = x_com.search_tweets(f'"{product_name}"')
    reddit_task = reddit.search_discussions(product_name)

    # Execute concurrently (3x faster than sequential)
    official, twitter, reddit = await asyncio.gather(
        official_task, twitter_task, reddit_task
    )

    # Filter and aggregate in execution environment
    # (keeps raw data out of context)
    key_features = extract_features(official, top_n=10)
    sentiment = analyze_sentiment([twitter, reddit])
    highlights = extract_highlights(twitter + reddit, top_n=5)

    # Return summary (not full data)
    return {
        'key_features': key_features,
        'sentiment': sentiment,
        'highlights': highlights,
        'source_count': len(twitter) + len(reddit)
    }
```
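The helpers in this pattern (extract_features, analyze_sentiment, extract_highlights) are where the context savings come from: they reduce large raw responses to small summaries inside the execution environment. A hypothetical sketch of one such helper, assuming the fetched page arrives as plain text:

```python
def extract_features(page_text: str, top_n: int = 10) -> list[str]:
    """Illustrative helper: pull likely feature bullets out of raw page text.

    Heuristic stand-in - real extraction depends on what the fetch tool returns.
    """
    candidates = []
    for line in page_text.splitlines():
        line = line.strip("-* \t")
        # Keep short, non-empty lines that read like feature bullets
        if 3 <= len(line.split()) <= 12:
            candidates.append(line)
    return candidates[:top_n]
```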
Pattern 2: Polling/Monitoring
```python
async def wait_for_deployment(channel: str, keyword: str, timeout: int = 300):
    """Poll Slack channel for deployment completion"""
    start = time.time()
    while time.time() - start < timeout:
        messages = await slack.get_channel_history(channel, limit=10)
        if any(keyword in m['text'].lower() for m in messages):
            return {'status': 'complete', 'message': messages[0]}
        await asyncio.sleep(10)
    return {'status': 'timeout'}
```
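A usage sketch for the polling pattern; the channel name and keyword are placeholders, and the Slack tool call must match whatever the introspection step discovered:

```python
import asyncio

# Placeholder channel and keyword - substitute real values
status = asyncio.run(wait_for_deployment("#deployments", "deploy complete", timeout=600))

if status["status"] == "timeout":
    print("Deployment did not complete within 10 minutes")
else:
    print("Deployment finished:", status["message"])
```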
Pattern 3: Bulk Processing
```python
async def sync_contacts(sheet_id: str, crm_object: str):
    """Sync contacts from sheet to CRM (privacy-preserving)"""
    # Load data once
    contacts = await google_sheets.get_sheet(sheet_id)

    # Filter in execution environment (not in context)
    valid = [c for c in contacts if validate_email(c['email'])]

    # Batch update (PII never enters model context)
    results = []
    for batch in chunked(valid, batch_size=50):
        batch_results = await asyncio.gather(*[
            crm.update_record(crm_object, contact)
            for contact in batch
        ])
        results.extend(batch_results)

    # Return summary only
    return {
        'processed': len(valid),
        'successful': sum(1 for r in results if r['success'])
    }
```
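Pattern 3 uses a chunked() helper that is not shown above; a minimal stand-in (an assumption, not part of the generated infrastructure) could be:

```python
from typing import Iterator

def chunked(items: list, batch_size: int = 50) -> Iterator[list]:
    """Yield successive fixed-size batches from a list (stand-in for Pattern 3's chunked())."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]
```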
Key Principles for Scripts:
- Use async/await for IO-bound MCP calls

Create the SKILL.md following skill-creator structure, with MCP-specific additions.
YAML Frontmatter:
---
name: <skill-name>
description: <Brief description of workflow + when to use + MCP servers involved>
---
Body Structure:
# [Skill Name]
[Overview of what this skill does]
## Prerequisites
This skill requires the MCP SDK. **The scripts will automatically check and install it if needed.**
If you want to manually verify or install:
```bash
python3 -c "import mcp; print('✓ MCP SDK ready!')" 2>/dev/null || pip3 install mcp --break-system-packages
```
Why needed: This skill uses MCP tools to [brief explanation of what MCP servers do]. The workflow scripts require the mcp package to connect to MCP servers.
Note: When you run any workflow script, it will automatically check for MCP SDK and display a helpful error message if not installed.
[User's workflow steps in their own language]
[USER PREFERENCES - Embedded as guidance] When using this skill:
[SOPs - Embedded as procedural instructions] Standard procedure:
Before running workflows, ensure MCP SDK is installed (see Prerequisites above).
[Simple example of using the main workflow script]
Use when: [Scenario]
Location: scripts/workflows/<script>.py
Usage:
from scripts.workflows import workflow_name
result = await workflow_name(params)
Optimizations:
[Repeat for other workflow scripts]
[Brief overview of integrated MCP servers]
Tools: [Count] available
Location: scripts/tools/[server]/
Key tools: [List 3-5 most relevant]
Discovery: Use ls scripts/tools/[server]/ to see all tools
[Repeat for each server]
[Guidance on combining scripts, customization, etc.]
[Context optimization benefits, speedups from parallelization]
**Critical**: Embed user preferences and SOPs directly into the workflow guidance, not as separate sections. They should inform HOW to use the skill.
**Example of embedded preferences**:
```markdown
## Workflow Overview
This skill automates product research with the following steps:
1. Official website analysis
2. Community feedback gathering
3. Report generation
**Research approach**: Always prioritize quantitative metrics (user counts, ratings) over qualitative descriptions. Recent information (last 6 months) is valued over older reviews. Cross-reference official claims against community feedback to identify contradictions.
```
Only create references/ files if SKILL.md would exceed 500 lines or if there's detailed reference material that doesn't belong in the main workflow.
Possible reference files:
- references/mcp_tools.md - Detailed catalog of all MCP tools
- references/schemas.md - Data schemas for APIs
- references/examples.md - Additional usage examples

Once the skill is complete:
1. Review the skill structure
2. Package the skill:
   python /mnt/skills/public/skill-creator/scripts/package_skill.py <skill-dir>
3. Provide to user
This meta-skill extends skill-creator with MCP-specific capabilities:
Standard skill-creator: You manually write scripts or provide instructions
MCP skill-creator: Programmatically generates type-safe tool wrappers from MCP servers, enabling progressive disclosure
Standard skill-creator: General workflow guidance
MCP skill-creator: Specific optimization patterns (parallel execution, data filtering, control flow, privacy)
Standard skill-creator: Domain knowledge in references/
MCP skill-creator: User preferences and SOPs embedded directly into workflow guidance
Create scripts for:
Use text guidance for:
❌ Don't: Create a separate "User Preferences" section
✅ Do: Weave preferences into workflow guidance
Example:
```markdown
## Workflow Overview

This skill follows your research methodology:

1. Start with official sources (per your SOP)
2. Gather community feedback in parallel
3. Cross-reference claims (highlighting contradictions as you prefer)
4. Generate report with quantitative metrics emphasized
```
Always consider these opportunities when analyzing workflows:
- Parallel Execution: Any independent fetch operations
- Data Filtering: Processing that reduces data size
- Control Flow: Loops, conditionals, error handling
- State Persistence: Long-running or resumable workflows (see the sketch after this list)
- Privacy: Sensitive data that shouldn't enter context
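Of these, state persistence is the only opportunity not illustrated by the patterns above. A minimal sketch of checkpointing intermediate results so a long-running workflow can resume (file name and structure are illustrative):

```python
import json
from pathlib import Path

CHECKPOINT = Path("workflow_checkpoint.json")  # illustrative location

def load_checkpoint() -> dict:
    """Return previously saved progress, or an empty starting state."""
    if CHECKPOINT.exists():
        return json.loads(CHECKPOINT.read_text())
    return {"completed_sources": [], "partial_results": {}}

def save_checkpoint(state: dict) -> None:
    """Persist progress after each completed step so the workflow can resume later."""
    CHECKPOINT.write_text(json.dumps(state, indent=2))

# Inside a workflow: skip anything already in state["completed_sources"],
# add new output to state["partial_results"], and call save_checkpoint(state)
# after each source finishes.
```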
User Input:
MCP Servers: puppeteer, twitter, reddit
Workflow: Research products by visiting official site, checking ProductHunt,
searching Twitter/Reddit, then creating markdown report
Preferences: Quantitative metrics > qualitative, recent info > old
SOPs: Start with official sources, cross-reference claims, cite sources
Generated Skill:
SKILL.md:
---
name: product-research-workflow
description: Automated product research integrating official sources and community platforms
---
# Product Research Workflow
Research internet products efficiently by gathering data from official sources
and community platforms, with emphasis on quantitative metrics and recent information.
## Workflow Overview
This skill implements your standard research process:
1. **Official Source Analysis**: Visit product website and extract key features,
pricing, and positioning (per your SOP: always start with official sources)
2. **Community Intelligence**: Gather feedback from ProductHunt, Twitter, and Reddit
in parallel (optimized for speed)
3. **Cross-Reference**: Identify contradictions between official claims and community
feedback (your preference for critical analysis)
4. **Report Generation**: Create comprehensive markdown report with quantitative
metrics emphasized (ratings, user counts, pricing comparisons)
## Quick Start
```python
from scripts.workflows import product_research_pipeline

report = await product_research_pipeline(
    product_url='https://example.com',
    product_name='ExampleApp'
)
```
Use when: Researching any new internet product or SaaS tool
Optimizations:
[... rest of SKILL.md with embedded preferences and SOPs ...]
`scripts/workflows/product_research_pipeline.py`:
```python
async def product_research_pipeline(product_url: str, product_name: str):
    # Official source
    official = await puppeteer.fetch_page(product_url)

    # Parallel community research (3x faster)
    twitter, reddit, ph = await asyncio.gather(
        twitter_mcp.search_tweets(f'"{product_name}"', recent_days=180),
        reddit_mcp.search(product_name, time_filter='6months'),
        producthunt_mcp.get_product(product_name)
    )

    # Filter in execution env (user preference: quantitative focus)
    metrics = extract_quantitative_metrics(official)
    sentiment = calculate_sentiment_score([twitter, reddit, ph])
    recent_feedback = filter_recent(twitter + reddit, days=180)
    contradictions = find_contradictions(official, recent_feedback)

    # Return summary (not raw data)
    return {
        'official_metrics': metrics,
        'sentiment_score': sentiment,
        'recent_mention_count': len(recent_feedback),
        'contradictions': contradictions[:5],
        'top_praise': extract_top_feedback(recent_feedback, 'positive', 3),
        'top_complaints': extract_top_feedback(recent_feedback, 'negative', 3)
    }
```
For detailed MCP optimization patterns and examples:
- references/mcp-best-practices.md - Comprehensive guide to MCP + code execution
- references/quick-start.md - Step-by-step tutorial
- references/example-config.json - Complete configuration example