---
name: improving-skills
argument-hint: "[skill-path or skill-name]"
description: 'Audit and improve existing agent skills against the agentskills.io specification and current best practices. Also audits CLAUDE.md, GEMINI.md, and AGENTS.md for bloat and configuration issues. Guides migration from client-specific folders to .agents/skills/ structure. Use when reviewing skill quality, checking specification compliance, optimizing descriptions for better triggering, identifying anti-patterns, upgrading skills to follow latest standards, or auditing agent instruction files. Activates on: "audit skills", "review my skill", "improve skill", "check skill quality", "skill audit", "fix skill description", "skill compliance", "check CLAUDE.md", "audit AGENTS.md", even if the user does not explicitly mention "audit" or "best practices".'
---
# Improving Skills — Audit & Optimization Guide

Structured workflow for auditing and improving agent skills, instruction files (AGENTS.md, CLAUDE.md, GEMINI.md), and cross-platform compatibility.
## Quick audit (5 minutes)

- Read the skill's SKILL.md and list all files in the directory
- Check frontmatter against the specification (Step 2 below)
- Automated: run `skills-ref validate <skill-path>` if installed (reference implementation for demonstration — agentskills/skills-ref)
- Evaluate description quality (trigger coverage, third person, specificity)
- Scan content for anti-patterns (see anti-patterns.md)
- Generate a prioritized improvement report (Step 5 template)
## Full audit workflow

### Step 1: Inventory the skill

Read the complete skill directory structure and all files. Record:
- Total files and directories
- SKILL.md line count
- Number of reference files and scripts
- Any unusual files or structures
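The inventory step can be scripted. A minimal sketch; the `references/` and `scripts/` subdirectory names are common conventions assumed here, not required by the spec:

```python
from pathlib import Path

def inventory(skill_dir):
    """Collect the Step 1 counts for one skill directory."""
    root = Path(skill_dir)
    files = [p for p in root.rglob("*") if p.is_file()]
    dirs = [p for p in root.rglob("*") if p.is_dir()]
    skill_md = root / "SKILL.md"
    # Line count of SKILL.md (0 if the file is missing entirely)
    lines = len(skill_md.read_text(encoding="utf-8").splitlines()) if skill_md.exists() else 0
    return {
        "files": len(files),
        "dirs": len(dirs),
        "skill_md_lines": lines,
        # Assumed layout: reference files and scripts live in these subfolders
        "references": len(list(root.glob("references/*"))),
        "scripts": len(list(root.glob("scripts/*"))),
    }
```

Anything the script can't classify (unusual files or structures) still needs a manual look.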
### Step 2: Specification compliance

If skills-ref is installed, run `skills-ref validate <skill-path>` first to catch mechanical violations automatically. Focus manual review on description quality and content.

#### Frontmatter validation

#### Structure validation
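A minimal sketch of a frontmatter check, covering the name and description rules used in this guide. The line-by-line parse is a simplification; a real validator should use a proper YAML parser:

```python
import re
from pathlib import Path

REQUIRED = ("name", "description")

def check_frontmatter(skill_dir):
    """Return a list of issue strings for a skill's SKILL.md frontmatter."""
    text = (Path(skill_dir) / "SKILL.md").read_text(encoding="utf-8")
    m = re.match(r"---\n(.*?)\n---", text, re.DOTALL)
    if not m:
        return ["missing frontmatter block"]
    fields = {}
    for line in m.group(1).splitlines():
        if ":" in line:
            k, _, v = line.partition(":")
            fields[k.strip()] = v.strip()
    issues = [f"missing field: {k}" for k in REQUIRED if k not in fields]
    name = fields.get("name", "")
    # name must match the directory name exactly
    if name and name != Path(skill_dir).name:
        issues.append(f"name {name!r} != directory {Path(skill_dir).name!r}")
    # uppercase letters, underscores, or spaces break discovery
    if not re.fullmatch(r"[a-z0-9-]+", name or ""):
        issues.append("name must be lowercase letters, digits, hyphens")
    # description has a hard 1024-character cap
    if len(fields.get("description", "")) > 1024:
        issues.append("description exceeds 1024 characters")
    return issues
```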
### Step 3: Description quality assessment

The description is the routing key — it determines whether the skill triggers.

#### Trigger coverage

#### Writing quality

#### Common description problems
| Problem | Example | Fix |
|---------|---------|-----|
| Too vague | "Helps with documents" | "Extracts text from PDFs, fills forms. Use when..." |
| First person | "I can help you..." | "Processes files and generates..." |
| No trigger context | "PDF text extraction" | Add "Use when... even if they don't mention..." |
| Too broad | "Handles all data tasks" | Narrow to specific capabilities |
| Missing keywords | Only mentions "CSV" | Add "tabular data", "spreadsheet", "Excel", "TSV" |
#### Scoring (1–5)
- 5: Specific, pushy, great keyword coverage, clear trigger contexts
- 4: Good but missing one or two trigger contexts
- 3: Adequate but could be more specific or pushy
- 2: Vague or missing trigger contexts
- 1: Too generic, wrong person, or misleading
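Some of the red flags above can be caught mechanically before a human assigns the 1–5 score. A heuristic sketch; the patterns and length thresholds are illustrative assumptions, not part of any spec:

```python
import re

def description_flags(desc):
    """Return heuristic red flags for a skill description. A human still scores it."""
    flags = []
    # Wrong person: descriptions should be third person, not "I can help you..."
    if re.search(r"\b(I|me|my|we|our)\b", desc):
        flags.append("first person")
    # Missing trigger context: no "Use when..." phrasing
    if "use when" not in desc.lower():
        flags.append("no 'Use when' trigger context")
    # Assumed threshold: very short descriptions are usually too vague
    if len(desc) < 60:
        flags.append("likely too vague (very short)")
    # Hard cap from the spec
    if len(desc) > 1024:
        flags.append("exceeds 1024-character cap")
    return flags
```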
### Step 4: Content quality review

#### Conciseness

#### Progressive disclosure

#### Specificity calibration

#### Anti-patterns (quick scan)

See anti-patterns.md for the full list.
### Step 5: Generate improvement report

```markdown
# Skill Audit Report: [skill-name]

## Summary
- Specification compliance: [PASS/FAIL with count]
- Description quality: [score/5]
- Content quality: [HIGH/MEDIUM/LOW]
- Cross-platform: [status]
- Overall: [number] issues found

## Critical issues (fix immediately)
1. [Issue]: [What's wrong] → [How to fix]

## Recommended improvements
1. [Issue]: [What's wrong] → [How to fix]

## Minor suggestions
1. [Suggestion]

## Description recommendation
Current:
> [current description]

Suggested:
> [improved description]
```
## Batch audit

Default target: `~/.agents/skills/` (user-scope global skills). For repo scope, use `.agents/skills/` relative to the project root.

- List all skill directories in the target path
- Run the quick audit (Steps 1–5) for each skill
- Compile a summary table:
| Skill | Spec | Description | Content | Issues |
|-------|------|-------------|---------|--------|
| skill-a | PASS | 4/5 | HIGH | 1 |
| skill-b | FAIL | 2/5 | LOW | 5 |
- Prioritize fixes across all skills by impact
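The batch loop can be sketched as follows; the `audit` callable is a placeholder for whatever per-skill checks (Steps 1–4) you run:

```python
from pathlib import Path

def batch_summary(skills_root, audit):
    """Build the summary table. audit(skill_dir) -> (spec, desc_score, content, issues)."""
    rows = ["| Skill | Spec | Description | Content | Issues |",
            "|-------|------|-------------|---------|--------|"]
    for d in sorted(Path(skills_root).iterdir()):
        # Only directories containing a SKILL.md count as skills
        if (d / "SKILL.md").is_file():
            spec, desc, content, issues = audit(d)
            rows.append(f"| {d.name} | {spec} | {desc}/5 | {content} | {issues} |")
    return "\n".join(rows)
```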
## Instruction file audit

Audit agent instruction files (AGENTS.md, CLAUDE.md, GEMINI.md, CODEX.md, etc.) for quality and consistency. These rules apply universally — every instruction file follows the same principles regardless of platform.

### Universal checklist

#### Should include
- Bash commands the agent can't guess
- Code style rules that differ from defaults
- Testing instructions and preferred test runners
- Repo etiquette (branch naming, PR conventions)
- Architecture decisions specific to the project
- Developer environment quirks (required env vars)
- Common gotchas and non-obvious behaviors
#### Should NOT include
- Anything the agent can figure out by reading code
- Standard language conventions the agent already knows
- Detailed API documentation (link to docs instead)
- Information that changes frequently
- Long explanations or tutorials
- File-by-file descriptions of the codebase
- Self-evident practices like "write clean code"
#### The one-line test

For every line, ask: "Would removing this cause the agent to make mistakes?" If not, cut it.

- If the agent ignores a rule, the file is probably too long, not the rule too weak.
- If the agent asks questions answered in the file, the phrasing is ambiguous.
### Platform-specific adapters
### Skills vs. instruction files

Instruction files load every session; skills load on demand.

- Persistent broad rule (style, testing, deploy) → instruction file
- On-demand expertise, workflow, or checklist → skill
- Content only relevant sometimes → skill, not instruction file
- Instruction file grows past ~100 lines → migrate workflows to skills
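The ~100-line threshold can be checked mechanically. A sketch; the file names and the limit come from this guide, not from any spec:

```python
from pathlib import Path

INSTRUCTION_FILES = ("AGENTS.md", "CLAUDE.md", "GEMINI.md", "CODEX.md")

def oversized_instruction_files(repo_root, limit=100):
    """Return (name, line_count) for instruction files that exceed the limit."""
    hits = []
    for name in INSTRUCTION_FILES:
        p = Path(repo_root) / name
        if p.is_file():
            n = len(p.read_text(encoding="utf-8").splitlines())
            if n > limit:
                hits.append((name, n))
    return hits
```

Any hit is a candidate for migrating workflow content into skills.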
## Cross-agent compatibility review

### Platform discovery paths

The agentskills.io spec is client-agnostic — discovery paths are not part of the spec. The locations below are each client's own convention.
| Platform | User scope | Repo scope |
|----------|------------|------------|
| Claude Code | `~/.claude/skills/` | `.claude/skills/` |
| OpenAI Codex | `~/.agents/skills/` | `.agents/skills/` (scanned from cwd up to repo root) |
| Gemini CLI | `~/.gemini/skills/` or `~/.agents/skills/` (alias wins) | `.gemini/skills/` or `.agents/skills/` (alias wins) |
Share one skill set across all three by keeping files in `~/.agents/skills/` — Codex and Gemini pick it up natively — and creating a junction (Windows) or symlink (macOS/Linux) from `~/.claude/skills/` to `~/.agents/skills/` so Claude Code sees the same files under its own path.

A working pattern: a small Python or PowerShell script that calls `mklink /J` (Windows) or `os.symlink` (POSIX) per skill directory. Run it once per machine setup so new skills under `~/.agents/skills/` are picked up by Claude Code through the junction.
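A minimal sketch of that linking script, assuming the conventional paths above (`mklink /J` needs `cmd.exe`; POSIX symlinks need no elevation):

```python
import os
import subprocess
import sys
from pathlib import Path

def link_skills(source=Path.home() / ".agents" / "skills",
                target=Path.home() / ".claude" / "skills"):
    """Link each skill dir from source into target: junction on Windows, symlink elsewhere."""
    target.mkdir(parents=True, exist_ok=True)
    for skill in source.iterdir():
        if not skill.is_dir():
            continue
        link = target / skill.name
        if link.exists() or link.is_symlink():
            continue  # already linked (or a real directory -- leave it alone)
        if sys.platform == "win32":
            # mklink is a cmd.exe builtin, so it must go through cmd /c
            subprocess.run(["cmd", "/c", "mklink", "/J", str(link), str(skill)],
                           check=True)
        else:
            os.symlink(skill, link, target_is_directory=True)
```

Re-run after adding new skills to `~/.agents/skills/`; existing links are skipped.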
### Compatibility checklist
## Description rewriting

The description is the routing key — agents load only name + description at startup, so fix this first when a skill under-triggers. Write in the third person, use the imperative ("Use when…"), be pushy about trigger contexts, and stay under 1024 characters.

For full trigger evaluation (build a query set, grade with a validation split, iterate), use the skill-creator skill — that's where the benchmark tooling lives. This skill stops at identifying that a rewrite is needed.
## Gotchas

- `description` has a hard cap of 1024 characters — count characters before saving when writing long descriptions.
- `name` must match the directory name exactly — uppercase letters, underscores, or spaces break discovery.
- In batch audits, `~/.agents/skills/` is the user-scope default and `.agents/skills/` is repo scope — don't mix them up.
- Symlinks from `.claude/skills/` → `.agents/skills/` can cause duplicate discovery reports.
- Instruction file audit (AGENTS.md/CLAUDE.md) is a separate workflow from skill audit — don't combine them into the same report.
- `allowed-tools` is marked Experimental in the spec — don't add it routinely; support varies across platforms.
- `version` is not a root-level frontmatter field — to version a skill, place it under `metadata: { version: "1.0" }`. Free-form root-level keys may be rejected by spec validators.
- `argument-hint` and `model` are Claude Code-specific extensions, not in the agentskills.io spec — other clients silently ignore them. Safe to use, but don't rely on cross-client behavior.
- Root `skills/` breaks discovery: moving project skills from `.agents/skills/` to a root `skills/` directory breaks Claude Code `/skills` discovery and Codex auto-discovery — both scan `.agents/skills/` (repo scope) directly. A root `skills/` directory only works as AGENTS.md `@include` context, not as a discoverable/invokable skill. Keep skills in `.agents/skills/<name>/`. To auto-load a skill every session, add `@.agents/skills/<name>/SKILL.md` to AGENTS.md.
- Verify behavioral claims against official docs/source before editing — truncation behavior, deprecation status, experimental flags, and token budgets must come from specs, READMEs, or source code, not from inference or plausibility. When docs are silent on a behavior, preserve the original wording rather than invent it. A plausible-sounding claim that rots later is worse than no claim.
The goal is reliable triggering, specification compliance, and clear value without wasting context tokens.