# writing-skills
| field | value |
|---|---|
| name | writing-skills |
| description | Use when creating new skills, editing existing skills, or reviewing a SKILL.md |
Complements skill-creator. While skill-creator runs the workflow (capture intent → draft → eval → iterate → package), this skill is about the principles for writing the SKILL.md text well: what makes a description trigger reliably, how to structure the body, what patterns to avoid.
A skill is a reusable reference for a technique, pattern, or tool that future Claude needs to find and apply. It is not a narrative about how a problem was once solved.
Create when:
Don't create for:
| Type | Purpose |
|---|---|
| Technique | Concrete how-to with steps |
| Pattern | Way of thinking about a problem |
| Reference | Lookup material — API, syntax, conventions |
The type informs the structure: techniques need steps, patterns need a mental model, references need scannability.
This is where most skill quality lives — Claude reads the description to decide whether to load the body. Get this wrong and the skill never fires, no matter how good the body is.
Two rules that override everything else:
```yaml
# ❌ BAD — workflow summary
description: Use when fixing tests — write failing test first, watch fail, then minimal code

# ❌ BAD — body content leaking into trigger surface
description: Use when authoring SKILL.md — covers description writing (CSO), structure, anti-patterns

# ❌ BAD — first person
description: I can help when async tests are flaky

# ❌ BAD — too vague
description: For async testing

# ✅ GOOD — trigger conditions only
description: Use when tests have race conditions, timing dependencies, or pass/fail inconsistently

# ✅ GOOD — narrow scope, action verbs
description: Use when creating new skills, editing existing skills, or reviewing a SKILL.md
```
Keyword coverage: Include words a future Claude would search for — error messages, symptoms, tool names. Trigger on the problem ("race conditions"), not the language-specific symptom ("setTimeout").
Naming: verb-first, gerund (-ing) for processes. creating-skills not skill-creation. condition-based-waiting not async-helpers.
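As a sketch, the character-level rules for a name (letters, numbers, hyphens only; lowercase in every example here) can be checked mechanically, though the verb-first gerund convention still needs human judgment. The function name below is hypothetical:

```python
import re

# Character-level rules: lowercase letters, digits, and hyphens,
# with no leading/trailing or doubled hyphens.
NAME_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")

def is_valid_skill_name(name: str) -> bool:
    """True if the name passes the character rules for skill names."""
    return NAME_RE.fullmatch(name) is not None

print(is_valid_skill_name("creating-skills"))  # True
print(is_valid_skill_name("skill_creation"))   # False: underscore
print(is_valid_skill_name("my skill (v2)"))    # False: spaces, parentheses
```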
YAML frontmatter (required):
- `name`: letters, numbers, hyphens only — no parentheses or special chars
- `description`: as above

See agentskills.io/specification for additional supported fields.
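Putting the two required fields together, a minimal frontmatter block might look like this (the name is borrowed from the naming examples above):

```yaml
---
name: condition-based-waiting
description: Use when tests have race conditions, timing dependencies, or pass/fail inconsistently
---
```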
Body template — scale sections to skill size; small skills can collapse to overview + core pattern:
```markdown
# Skill Name

## Overview
What is this? Core principle in 1-2 sentences.

## When to Use
Symptoms and use cases. When NOT to use.

## Core Pattern (for techniques/patterns)
The actual approach — steps, mental model, before/after.

## Quick Reference (for references)
Table or bullets for scanning.

## Common Mistakes
What goes wrong, how to recognize it, how to fix.
```
Three patterns, in order of frequency:
Self-contained — most skills:

```
skill-name/
  SKILL.md
```

With reusable example or tool:

```
skill-name/
  SKILL.md
  example.py    # Working code to adapt
```

With heavy reference:

```
skill-name/
  SKILL.md          # Overview + workflows
  api-reference.md  # 600 lines of API docs
```
The SKILL.md links to reference files; Claude follows the link only when needed. Heavy reference inline burns context on every load.
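As an illustration of that linking pattern, the SKILL.md side might contain something like this (the file name matches the heavy-reference layout above; the wording is hypothetical):

```markdown
## API Reference

Full endpoint documentation lives in [api-reference.md](api-reference.md).
Load it only when you need exact signatures; the workflows in this file
cover the common cases.
```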
Frequently-loaded skills compete with everything else for context. Be ruthless: cut anything Claude can get elsewhere, such as a tool's `--help` output.

In Claude.ai, just mention the skill by name in prose: "For the eval workflow, use the skill-creator skill." Claude follows the reference when relevant. No special syntax needed.
(@-style force-loading is Claude Code-specific and burns context on every load — don't port that pattern.)
If a skill has a non-trivial decision flow, sketch it as a mermaid block or ask Claude to render via show_widget during authoring. Most skills are clearer as a numbered checklist — don't add visual machinery for its own sake.
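For example, a small decision flow could be sketched like this (the labels are hypothetical, loosely based on the create/don't-create decision earlier in this document):

```mermaid
flowchart TD
    A{Non-obvious technique worth reusing?} -->|no| B[No skill needed]
    A -->|yes| C{Will future Claude search for it?}
    C -->|no| B
    C -->|yes| D[Write SKILL.md]
```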
Common anti-patterns:

- `example-js.js`, `example-py.py`, `example-go.go`: mediocre everywhere, maintenance burden. Pick one.
- `helper1`, `step3`, `pattern4`: labels should carry semantic meaning.

For evaluating a skill — test cases, grading, description optimization, iteration loop — use the skill-creator skill. It has the full workflow, including Claude.ai-specific adaptations for environments without subagents.
- anthropic-best-practices.md: Anthropic's official skill-authoring guide (companion reference, lives next to this SKILL.md when installed)
- skill-creator skill: for the create → test → iterate → package workflow