brand-voice
// Direct technical voice for docs, README, user-facing text. Concise/strict modes. Triggers: documentation, README, content, output-mode, voice, prose style.
| Field | Value |
|---|---|
| name | brand-voice |
| description | Direct technical voice for docs, README, user-facing text. Concise/strict modes. Triggers: documentation, README, content, output-mode, voice, prose style. |
| effort | medium |
| user-invocable | true |
| allowed-tools | Read |
Auto-loaded when writing documentation, content, or user-facing text. Enforces consistent, direct voice and eliminates LLM rhetoric.
Also loaded when a project sets output-mode: concise or output-mode: strict in CLAUDE.md, to govern conversational response length and structure (see Output Modes).
| Banned | Replacement |
|---|---|
| "cutting-edge" | Describe what it does |
| "game-changer" | State the specific impact |
| "revolutionary" | State the concrete improvement |
| "robust" | Describe what makes it reliable |
| "seamless" | Describe the integration mechanism |
| "state-of-the-art" | Cite specific capabilities |
| "leveraging" | "using" |
| "harnessing" | "using" |
| "utilizing" | "using" |
| "delve" / "delve into" | "examine" / "look at" |
| "holistic" | "complete" / "full" |
| "synergy" | Describe the actual interaction |
| "paradigm shift" | Describe the change |
| "best-in-class" | Cite the benchmark or drop it |
| "empower" | Say what it enables |
| "streamline" | Say what step it removes |
| "elevate" | Say what improves and by how much |
| "unlock" | Say what becomes possible |
| Principle | Rule |
|---|---|
| Direct over diplomatic | Say what you mean. "This function is slow" not "This function could potentially benefit from optimization." |
| Specific over general | Numbers, names, versions. "Reduces cold start by 40ms" not "Improves performance significantly." |
| Evidence over assertion | Show, don't tell. Include benchmarks, examples, or code. |
| Short over long | One sentence beats three. Cut filler words on every pass. |
| Active over passive | "The function returns X" not "X is returned by the function." |
| Technical over casual | Match the audience's expertise. Never dumb down for developers. |
| Honest over promotional | State limitations alongside strengths. |
Bad (filler, marketing, generic):
In today's ever-evolving landscape of AI, our cutting-edge toolkit empowers
developers to seamlessly leverage state-of-the-art skills. Whether you're a
beginner or an expert, this comprehensive guide will help you unlock the full
potential of your workflow.
Good (direct, specific, active):
ai-toolkit installs 99 skills and 44 agents via `npm install -g @softspark/ai-toolkit`.
After install, run `ai-toolkit doctor` to verify symlinks and hooks. Typical
install takes under 30 seconds on a local disk.
Three modes govern conversational response length. Default applies always; concise and strict activate per-project or per-session.
| Mode | Token target vs default | Used when |
|---|---|---|
| default | 100% (no extra constraints) | No output-mode configured |
| concise | ≤60% | Daily work, short Q&A, code edits, short reviews |
| strict | ≤40% | Long sessions, expensive models, batch operations, code-only tasks |
Mode rules live in this skill's `modes/` directory. Read the file matching the active mode:

- `modes/concise.md` — bullet-first, no preamble, max 3-sentence prose blocks
- `modes/strict.md` — telegraphic, no prose blocks, only lists/tables/code

To enable a mode per project, add `output-mode: concise` to the project CLAUDE.md frontmatter or to `.claude/settings.json` under `aiToolkit.outputMode`. Run `/brand-voice concise` or `/brand-voice strict` to switch for the current session; run `/brand-voice default` or remove the project setting to revert.

Modes change prose, not data. If you cut a fact to fit a length budget, you have failed the mode, not honored it.
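The token targets in the table above amount to a simple ratio check. The sketch below uses whitespace word counts as a rough token proxy; the skill's actual measurement script may tokenize differently, and `MODE_BUDGETS` and `within_budget` are illustrative names, not part of the skill's API.

```python
# Token targets per mode, as fractions of the default-mode length.
MODE_BUDGETS = {"default": 1.0, "concise": 0.60, "strict": 0.40}

def within_budget(baseline_text, mode_text, mode):
    """Check a mode response against its length budget.

    Splits on whitespace as a crude token proxy and returns
    (passes, ratio), where ratio is mode length over baseline length.
    """
    baseline = len(baseline_text.split())
    used = len(mode_text.split())
    ratio = used / baseline if baseline else 0.0
    return ratio <= MODE_BUDGETS[mode], ratio
```

A concise-mode answer at 40% of the baseline length passes; the same answer would also pass strict mode, while a 70% answer would fail both.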
Run `python3 app/skills/brand-voice/scripts/measure.py --fixtures tests/fixtures/output-modes/` to compare baseline vs mode tokens on the fixture set. The report shows per-fixture deltas and an aggregate ratio.