# prompt-enhance
| Field | Value |
|---|---|
| name | prompt-enhance |
| version | 3.2.0 |
| description | [Skill Management] Use when enhancing prompts, docs, or skills with compression and attention anchoring. |
**Goal:** Two-phase optimization: (1) Caveman Compression — strip stop words + grammatical scaffolding, preserve semantic meaning; (2) Prompt Enhancement — apply AI attention anchoring so AI reads and follows all instructions.

**Workflow:**
1. **Classify target** (Phase 0) — skill / protocol / general doc / raw text
2. **Caveman Compression** (Phase 1) — strip stop words, keep semantic content
3. **Prompt Enhancement** (Phase 2) — apply attention-anchoring transforms
4. **Verify** — rule density, content integrity, formatting

**Key Rules:**
- Cite file:line evidence

Compress and enhance this file: $ARGUMENTS
No file? Ask via AskUserQuestion. Text passed (not file path)? Apply caveman compression directly and output result.
Phase 0: Before any other step, classify target:
| Target type | Detection | Action |
|---|---|---|
| Skill file | Path matches .claude/skills/**/*.md | Apply Universal Skill-Building Principles after Phase 1 |
| Protocol file | Path matches .claude/protocols/**/*.md | Standard 2-phase optimization only |
| General doc/prompt | Any other .md file | Standard 2-phase optimization only |
| Raw text | No file path provided | Apply caveman compression only, output result |
Target .claude/skills/**/*.md (any SKILL.md)? Apply Universal Skill-Building Principles AFTER caveman compression, BEFORE writing enhanced output.
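The routing above reduces to simple path matching. A minimal sketch, assuming glob-style patterns and Python 3.10+ (`classify_target` and its return labels are illustrative, not part of the skill):

```python
from fnmatch import fnmatch

def classify_target(path: str | None) -> str:
    """Phase 0 routing per the table above."""
    if path is None:
        return "raw-text"   # caveman compression only, output result
    # fnmatch's '*' is not path-aware, so it also matches across '/',
    # which approximates the '**' in the table's patterns.
    if fnmatch(path, ".claude/skills/*.md"):
        return "skill"      # Universal Skill-Building Principles after Phase 1
    if fnmatch(path, ".claude/protocols/*.md"):
        return "protocol"   # standard 2-phase optimization only
    if path.endswith(".md"):
        return "doc"        # standard 2-phase optimization only
    return "raw-text"
```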
After caveman compression, evaluate skill against each principle, add missing structure:
| Principle | Check | Action if missing |
|---|---|---|
| Detect Before Act | Phase 0 / classification step present? | Add artifact-type detection before Phase 1 |
| Derive, Don't Enumerate | Thinking framework vs. fixed checklist? | Replace checklist with "understand → derive → execute" |
| Evidence Gates | Every claim requires file:line? | Add evidence requirement to all review steps |
| Fresh Eyes Protocol | Multi-round sub-agent review defined? | Add Round 2 fresh sub-agent protocol |
| Specialize by Type | Sub-agent routing table present? | Add security-auditor/performance-optimizer options |
| Embed Protocols Verbatim | Protocols inline in sub-agent prompts? | Move protocol bodies inline, remove file references |
| Search-Based Discovery | Any hardcoded paths/formats/IDs? | Replace with search instructions |
| Dimensions > Checklists | Named dimensions with Think: prompts? | Convert checklist to dimension framework |
| Recursive Quality Loop | Fix → re-review → max 3 rounds defined? | Add recursive review loop |
| Anti-Rationalization Anchors | Closing reminders include evasion table? | Add evasion → rebuttal table |
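A hedged sketch of how this audit could be pre-screened, assuming each principle maps to a crude text marker (the markers, dict, and function are illustrative; real evaluation still requires reading the skill):

```python
# Crude textual proxies for the principle checks above — presence of a marker
# suggests the structure exists; absence flags it for manual addition.
PRINCIPLE_MARKERS = {
    "Detect Before Act": "Phase 0",
    "Evidence Gates": "file:line",
    "Fresh Eyes Protocol": "fresh sub-agent",
    "Recursive Quality Loop": "3 rounds",
    "Anti-Rationalization Anchors": "Evasion",
}

def missing_principles(skill_text: str) -> list[str]:
    """Principles whose marker never appears anywhere in the skill body."""
    lowered = skill_text.lower()
    return [name for name, marker in PRINCIPLE_MARKERS.items()
            if marker.lower() not in lowered]
```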
Phase 1 — Caveman Compression. Aggressively remove stop words + grammatical scaffolding while preserving meaning. Keep only content words carrying semantic weight.

Remove:

| Category | Examples |
|---|---|
| Articles | a, an, the |
| Auxiliary verbs | is, are, was, were, am, be, been, being, have, has, had, do, does, did |
| Redundant prepositions | of, for, to, in, on, at (when meaning stays clear without them) |
| Pronouns (when context clear) | it, this, that, these, those |
| Pure intensifiers | very, quite, rather, somewhat, really, extremely |
Keep:

| Category | Reason |
|---|---|
| All nouns | Core semantic units |
| All main verbs (not auxiliaries) | Actions carry meaning |
| All meaningful adjectives | Add semantic signal |
| Numbers and quantifiers | at least, approximately, more than, 15, many |
| Uncertainty qualifiers | appears to be, seems, might, what sounded like |
| Critical prepositions | from, with, without, stuck to — change meaning |
| Time/frequency words | every Tuesday, weekly, always, never |
| Names and titles | Dr., Mr., Senator |
| Technical/domain terms | Never simplify domain language |
| Negations | not, no, never, without |
Preposition judgment calls:
- made from wood (keep from), stuck to wall (keep to)
- system for processing data → system processing data
- in/on/at for location/position: file in /src (keep) vs. written in prose (remove)

Examples:

| Original | Compressed | Removed |
|---|---|---|
| "The system was designed to process data efficiently" | "System designed process data efficiently." | The, was, to |
| "It removes predictable grammar while preserving the unpredictable content" | "Removes predictable grammar preserving unpredictable content." | It, the, while |
| "There were at least 20 people" | "At least 20 people." | There, were |
| "Made from wood and metal" | "Made from wood and metal." | nothing — from kept |
| "This is a method for compressing LLM contexts" | "Method compressing LLM contexts." | This, is, a, for |
Apply to: all prose — instructions, descriptions, bullet and paragraph text.
Do NOT compress:
- Code blocks and YAML frontmatter
- Structured tables
- file:line references and paths
- <!-- SYNC --> tags and their content

Phase 2 — Prompt Enhancement. Prompt quality FIRST. Verbose prompts degrade quality — AI attention dilutes across unnecessary tokens. Optimize clarity-per-token: maximum signal, minimum noise.
What to cut:
- Anything that restates what the text or code already says (e.g. a redundant // comment in code)

What to KEEP:
- file:line evidence and concrete paths

Evaluation metrics per doc:
- Skill file (.claude/skills/**/*.md) → apply Universal Skill-Building Principles after Phase 1
- Protocol references → classify as .claude/ (needs inline summary) or docs/ (skip)

For each .claude/ protocol reference: read the protocol file, write a 2-3 line inline summary, and place it before the read instruction (Transform 1).

Verification — after both phases, every check must pass:
| Check | Pass Condition |
|---|---|
| No YAML corruption | Frontmatter intact |
| No content loss | All rules, code, paths present |
| Rule density | Post ≥ pre (count MUST ATTENTION/NEVER/ALWAYS) |
| Line count | Reduced (compression worked) |
| Formatting | Blank lines between sections, headers correct |
| READ classification | .claude/ → inline summary, docs/ → skipped |
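The two quantitative checks are trivially scriptable. A sketch, assuming plain regex counting (helper names are hypothetical):

```python
import re

RULE_MARKER = re.compile(r"MUST ATTENTION|NEVER|ALWAYS")

def rule_density_ok(before: str, after: str) -> bool:
    """Density check from the table: post-optimization rule count >= pre."""
    return len(RULE_MARKER.findall(after)) >= len(RULE_MARKER.findall(before))

def line_count_reduced(before: str, after: str) -> bool:
    """Compression check from the table: the enhanced file must be shorter."""
    return after.count("\n") < before.count("\n")
```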
[IMPORTANT] Use `TaskCreate` to break ALL work into small tasks BEFORE starting.
Output Quality — Token efficiency without sacrificing quality.
- No inventories/counts — AI can `grep | wc -l`. Counts go stale instantly
- No directory trees — AI can `glob`/`ls`. Use 1-line path conventions
- No TOCs — AI reads linearly. TOC wastes tokens
- No examples that repeat what rules say — one example only if non-obvious
- Lead with answer, not reasoning. Skip filler words and preamble
- Sacrifice grammar for concision in reports
- Unresolved questions at end, if any
Universal Skill-Building Principles — 10 principles for building AI skills that work across any project type. Source: extracted from review-changes, plan-review, code-review skill rewrites.
Meta-principle: Teach AI to reason, not to recite. Skill's job: structure WHEN and HOW AI applies its existing knowledge — not enumerate every possible concern.
Detect Before Act — Every skill starts with a classification phase. Detect artifact type (plan type, code category, change nature) before applying any logic. Detection drives: sub-agent selection, which dimensions to emphasize, mandatory vs. optional checks. Anti-pattern: same checklist applied regardless of input type.
Derive, Don't Enumerate — Teach AI HOW to reason about a domain, not WHAT items to tick. Replace "check X, Y, Z" with "understand role → read conventions → derive concerns from first principles → execute with evidence." Fixed checklist = ceiling. Thinking framework = floor. Test: Can this skill run on a Python/Go project without modification? If not → it's enumerating, not teaching.
Evidence Gates — Every claim, finding, recommendation requires `file:line` proof or traced call chain. Confidence thresholds: >80% act freely, 60-80% verify first, <60% DO NOT recommend. "Insufficient evidence" is valid output. Speculation is forbidden output.

Fresh Eyes Protocol — Round 1 in main session. Round 2+ with fresh sub-agent (zero memory of Round 1). Main agent reads report but NEVER filters or overrides findings. Max 3 rounds, then escalate to user. Never declare PASS after Round 1 alone. Why: main agent rationalizes its own mistakes. Zero-memory sub-agent catches what main agent dismissed.
Specialize by Type — Route to specialized sub-agents based on detected artifact type:
| Artifact type | Sub-agent |
|---|---|
| Source code / diffs | code-reviewer |
| Security-sensitive changes | security-auditor |
| Performance-critical changes | performance-optimizer |
| Plans / docs / specs | general-purpose |

Embed Protocols Verbatim, Never Reference — Shared protocols MUST be copied inline into every sub-agent prompt — never referenced by file path or tag name. AI compliance drops significantly behind file-read indirection. Maintain canonical source; embed body at every call site.
Search-Based Discovery — Never hardcode project-specific paths, formats, or identifiers. Teach skill to discover them:
- "Search for
coding-standards,style-guide,contributing" not "readdocs/X/code-review-rules.md"- "Find the project's test format near changed files" not "look for
TC-{FEAT}-{NNN}indocs/business-features/" This is what makes a skill work across any project without modification.Dimensions > Checklists — Structure review/analysis as named thinking dimensions, each with a
Think:prompt that forces first-principles reasoning: (1) state dimension's role, (2) derive what could go wrong if weak, (3) apply to artifact with evidence. Produces targeted, evidence-backed findings — not generic "add more detail" suggestions. Serial attention: When applying a dimension-based framework, NEVER scan all dimensions simultaneously. One focused pass per dimension. AI misses violations when attention is split across concurrent concerns. Pattern: identify applicable dimensions → sequential focused passes → aggregate. Threshold invariant: 3+ similar patterns in any dimension pass = MANDATORY extraction. 2+ violations of same kind = structural/architectural finding, not individual instance.Recursive Quality Loop — Fix → Re-review → Fix → Re-review. Each round uses a NEW fresh sub-agent. Continue until PASS or 3 rounds max, then escalate. Never declare success after Round 1 alone. Never reuse a sub-agent across rounds.
Anti-Rationalization Anchors — Explicitly name and embed the evasion patterns AI uses to skip steps in the skill's closing reminders:
| Evasion | Rebuttal |
|---|---|
| "Too simple for this" | Wrong assumptions waste more time. Apply anyway. |
| "Already searched" | Show `file:line` evidence. No proof = no search. |
| "Just do it" | Still need task tracking. Skip depth, never skip tracking. |
Context Engineering Principles — Research-backed principles for prompt quality. Source: Anthropic prompt engineering guide, Stanford "lost-in-the-middle" research, 2025-2026 LLM context optimization studies.
- Primacy-Recency Effect — LLM performance drops 15-47% for middle-context information (Stanford). AI attention peaks at first/last 10% of text. Action: Place the 3 most critical rules in both the first 5 lines AND the last 5 lines of every prompt. Queries at end improve quality by up to 30% (Anthropic). A code sketch of this anchoring follows this list.
- High-Signal Density — Anthropic: "Identify the smallest collection of high-signal tokens that maximize the probability of the desired outcome." Action: Every line should change AI behavior. If removing a line doesn't change output → cut it. Target ≥8 rules (MUST ATTENTION/NEVER/ALWAYS) per 100 lines.
- Context Rot — LLM performance degrades as context length grows — even when all content is relevant. Compression (5-20x) maintains or improves accuracy while saving 70-94% tokens. Action: Compress aggressively. Shorter, denser prompts outperform longer, diluted ones.
- Structured > Prose — Tables, bullets, XML/markdown parse faster than paragraphs. Constrained formats reduce error rates vs free-text. Action: Convert narrative to tables/bullets. Use markdown headers for semantic sections.
- RCCF Framework — Modern LLMs (2025+) already know how to reason. What they need: Role (personality), Context (grounding), Constraints (guardrails), Format (structure). Constraints and format matter more than verbose instructions.
- Checkbox Avoidance — `[ ]` syntax triggers mechanical compliance — AI ticks boxes without reasoning. Bullet rules force reading and evaluation. Action: Replace `- [ ] Check X` with `- MUST ATTENTION verify X`.
- Example Economy — 3-5 examples optimal for few-shot; diminishing returns after. Action: 1 best example per pattern. Use BAD→GOOD pairs (2-3 lines each) for anti-patterns.
- Deferred Tool Loading — Claude Code delays loading tool definitions when they exceed 10% of context window. Action: Keep injected docs well under 10% of context budget. Docs exceeding ~3,000 lines are too large for injection — split or compress.
- Rule Density Verification — Post-optimization rule count (MUST ATTENTION/NEVER/ALWAYS) must be ≥ pre-optimization count. Compression should preserve or increase density, never decrease it. Action: Count before and after every optimization pass.
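As referenced in the Primacy-Recency bullet above, here is a minimal sketch of the anchoring, assuming the caller picks the 3 critical rules (the function and formatting are illustrative, mirroring Transforms 2-3 below):

```python
def anchor_prompt(critical_rules: list[str], body: str) -> str:
    """Echo top rules at both ends of the prompt, where LLM attention peaks."""
    echo = "\n".join(f"**IMPORTANT MUST ATTENTION** {rule}"
                     for rule in critical_rules[:3])
    return f"{echo}\n\n{body}\n\n---\n\n## Closing Reminders\n\n{echo}"
```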
Prompt Enhancement Transforms (Base) — Transforms 1-3 are identical across `prompt-enhance`/`prompt-expand`. Transform 4 is per-skill (conciseness pass for enhance; structural clarity pass for expand) and stays local to each skill.

Transform 1: Inline Summaries for READ References
Problem: AI sees `MUST ATTENTION READ file.md` and skips it. Solution: Add a 2-3 line summary of key rules BEFORE the read instruction.

Before:
```
MUST ATTENTION READ .claude/protocols/evidence.md
```

After:

```
> **Evidence-Based Reasoning** — Speculation is FORBIDDEN. Every claim requires `file:line` proof.
> Confidence: >95% recommend freely, 80-94% with caveats, <80% DO NOT recommend.

MUST ATTENTION READ .claude/protocols/evidence.md for full details.
```
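One way the bare READ references could be located before writing summaries — a sketch assuming lines follow the `MUST ATTENTION READ <path>` shape shown above (the regex is an assumption, not the skill's spec):

```python
import re

READ_REF = re.compile(r"^\s*MUST ATTENTION READ (\S+)", re.MULTILINE)

def refs_needing_summary(doc: str) -> list[str]:
    """Per the scope rules below, only .claude/ references get inline summaries."""
    return [path for path in READ_REF.findall(doc) if path.startswith(".claude/")]
```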
Scope rules:
- `.claude/` protocol files → always add an inline summary (stable, belongs to framework)
- `docs/project-reference/` files → NO inline summary (project-specific). Add: (read directly when relevant; do not rely on hook-injected conversation text)

Transform 2: Top Summary Section
Required structure (first 20 lines after frontmatter):
```
> **[IMPORTANT]** TaskCreate instruction...
> **Protocol Name** — [inline summary]. MUST ATTENTION READ `path` for details.

## Quick Summary

**Goal:** [One sentence — what this skill achieves]

**Workflow:**
1. **[Step]** — [description]

**Key Rules:**
- [Most critical constraint]
```

Transform 3: Bottom Closing Reminders
Add at the very end of the file:
```
---

## Closing Reminders

**IMPORTANT MUST ATTENTION** [echo rule #1 from the top section]
**IMPORTANT MUST ATTENTION** [echo rule #2]
**IMPORTANT MUST ATTENTION** [echo rule #3]
**IMPORTANT MUST ATTENTION** add a final review task to verify work quality
```

Pick 3-5 rules AI most commonly violates. Bottom section re-anchors attention after the long middle.
Shared Protocol Duplication Policy — Inline protocol content in skills (wrapped in `<!-- SYNC:tag -->`) is INTENTIONAL duplication. Do NOT extract, deduplicate, or replace with file references. AI compliance drops significantly when protocols are behind file-read indirection. To update: edit `.claude/skills/shared/sync-inline-versions.md` first, then grep `SYNC:protocol-name` and update all occurrences.
AI Mistake Prevention — Failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just the happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether the bug is in the caller (wrong data) or the callee (wrong handling). Fix at the responsible layer — never patch the symptom site.
- Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging — resist the nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding — don't pick silently. If a request has multiple interpretations, present each with an effort estimate and ask. Never assume the all-records, file-based, or more complex path.
Critical Thinking Mindset — Apply critical thinking, sequential thinking. Every claim needs traced proof, confidence >80% to act. Anti-hallucination: Never present guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, stay skeptical of own confidence — certainty without evidence root of all hallucination.
MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact.
MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction.
IMPORTANT MUST ATTENTION apply caveman compression FIRST (Phase 1) before any structural enhancement — never skip
IMPORTANT MUST ATTENTION NEVER compress code blocks, YAML frontmatter, structured tables, or SYNC tags
IMPORTANT MUST ATTENTION read target file completely before any changes
IMPORTANT MUST ATTENTION read each referenced protocol file to write accurate inline summaries — NEVER guess content
IMPORTANT MUST ATTENTION apply primacy-recency anchoring — 3 critical rules in first 5 AND last 5 lines of every enhanced file
IMPORTANT MUST ATTENTION verify rule density: count MUST ATTENTION/NEVER/ALWAYS before and after — post ≥ pre
IMPORTANT MUST ATTENTION add inline summaries only for .claude/ protocol files, not project-specific docs/ files
IMPORTANT MUST ATTENTION keep all meaningful content — only restructure/compress, NEVER delete rules or code examples
IMPORTANT MUST ATTENTION verify no YAML frontmatter corruption after changes
IMPORTANT MUST ATTENTION cite file:line evidence for every claim (confidence >80% to act). NEVER speculate without proof.
IMPORTANT MUST ATTENTION READ CLAUDE.md before starting
Anti-Rationalization:
| Evasion | Rebuttal |
|---|---|
| "File is short, skip compression" | Apply both phases anyway — density matters at any length |
| "Already read the file" | Show recorded line count + rule density as proof |
| "Closing reminders already exist" | Verify they echo top-section rules AND include anti-rationalization table |
| "Skill file, skip Universal Principles" | NEVER skip — Phase 0 detection is BLOCKING |
[TASK-PLANNING] Before acting, analyze task scope and systematically break it into small todo tasks and sub-tasks using TaskCreate.