linter-setup
// [Quality] Use when you need to research and configure code quality tooling for any tech stack — linters, formatters, static analysis, pre-commit hooks, and CI gates.
| Field | Value |
| --- | --- |
| name | linter-setup |
| description | [Quality] Use when you need to research and configure code quality tooling for any tech stack — linters, formatters, static analysis, pre-commit hooks, and CI gates. |
Codex compatibility note:
- Invoke repository skills with `$skill-name` in Codex; this mirrored copy rewrites legacy Claude `/skill-name` references.
- Prefer the `plan-hard` skill for planning guidance in this Codex mirror.
- Task tracker mandate: BEFORE executing any workflow or skill step, create/update task tracking for all steps and keep it synchronized as progress changes.
- User-question prompts mean to ask the user directly in Codex.
- Ignore Claude-specific mode-switch instructions when they appear.
- Strict execution contract: when a user explicitly invokes a skill, execute that skill protocol as written.
- Subagent authorization: when a skill is user-invoked or AI-detected and its protocol requires subagents, that skill activation authorizes use of the required `spawn_agent` subagent(s) for that task.
- Do not skip, reorder, or merge protocol steps unless the user explicitly approves the deviation first.
- For workflow skills, execute each listed child-skill step explicitly and report step-by-step evidence.
- If a required step/tool cannot run in this environment, stop and ask the user before adapting.
Codex does not receive Claude hook-based doc injection. When coding, planning, debugging, testing, or reviewing, open project docs explicitly using this routing.
Always read:
- `docs/project-config.json` (project-specific paths, commands, modules, and workflow/test settings)
- `docs/project-reference/docs-index-reference.md` (routes to the full `docs/project-reference/*` catalog)
- `docs/project-reference/lessons.md` (always-on guardrails and anti-patterns)

Situation-based docs:
- `backend-patterns-reference.md`, `domain-entities-reference.md`, `project-structure-reference.md`
- `frontend-patterns-reference.md`, `scss-styling-guide.md`, `design-system/README.md`
- `feature-docs-reference.md`
- `integration-test-reference.md`
- `e2e-test-reference.md`
- `code-review-rules.md` plus domain docs above based on changed files

Do not read all docs blindly. Start from `docs-index-reference.md`, then open only relevant files for the task.
[BLOCKING] Execute skill steps in declared order. NEVER skip, reorder, or merge steps without explicit user approval.
[BLOCKING] Before each step or sub-skill call, update task tracking: set `in_progress` when the step starts, set `completed` when the step ends.
[BLOCKING] Every completed/skipped step MUST include brief evidence or an explicit skip reason.
[BLOCKING] If Task tools are unavailable, create and maintain an equivalent step-by-step plan tracker with the same status transitions.
Goal: Install the full computational feedback sensor layer for any tech stack — linters, formatters, type checkers, static analyzers, pre-commit hooks, and CI quality gates.
Output: Config files at project root + pre-commit hook config + CI quality gate step + .editorconfig.
When invoked: After $scaffold in the greenfield workflow, before $harness-setup.
Design principles:
Read from (in priority order):
- `plan.md` YAML frontmatter — look for `tech_stack`, `language`, `framework` fields

Extract: primary language(s), framework(s), CI platform, test framework, package manager.
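For illustration only, a `plan.md` frontmatter this step can read might look like the sketch below — the `tech_stack`, `language`, and `framework` keys come from this step; the remaining keys are hypothetical and simply show where the other extracted fields could live:

```yaml
---
# Hypothetical plan.md frontmatter (illustrative values, not recommendations)
tech_stack: node
language: typescript
framework: nestjs          # any framework works; this is only an example
package_manager: pnpm      # assumed key name — not required by this skill
ci_platform: github-actions
test_framework: jest
---
```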
Write detected profile to .ai/workspace/linter-setup/stack-profile.md:
# Stack Profile
Language: {language}
Framework: {framework}
Package Manager: {npm/pip/dotnet/go/cargo/etc}
CI Platform: {github-actions/gitlab-ci/azure-pipelines/etc}
Test Framework: {framework}
If any critical field is undetectable → a direct user question to confirm before research.
MANDATORY IMPORTANT MUST ATTENTION — This section uses QUERY TEMPLATES, not tool names. DO NOT hardcode specific tool recommendations. Research current ecosystem for the detected stack and present options.
For each tech stack layer detected, research these TOOL CATEGORIES using the query templates below:
| Category | Purpose (WHY) | Research Query Template |
|---|---|---|
| Linter | Catch bugs, enforce style, prevent common errors at author time | "{language} best linter {year} community standard" |
| Formatter | Eliminate style debates, enforce consistent code shape | "{language} opinionated code formatter {year}" |
| Type Checker | Catch type errors without runtime — strongest computational sensor | "{language} static type checker {year}" |
| Static Analyzer | Deep bug patterns, complexity, dead code, security CWEs | "{language} static analysis SAST tool {year}" |
| Dependency Scanner | Known CVEs in dependencies — supply chain security | "{language} dependency vulnerability scanner {year}" |
| Architecture Fitness | Enforce module boundaries, dependency direction | "{language} architecture linting module boundaries {year}" |
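As a purely hypothetical instantiation, assuming the detected language is Python and `{year}` is filled with the current year at research time, the templates expand to queries such as:

```text
"Python best linter {year} community standard"
"Python opinionated code formatter {year}"
"Python static type checker {year}"
"Python static analysis SAST tool {year}"
"Python dependency vulnerability scanner {year}"
"Python architecture linting module boundaries {year}"
```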
Research process per category:
IMPORTANT: If confidence in current ecosystem is <80% (e.g., fast-moving ecosystem, unfamiliar stack) → use WebSearch to verify before presenting options.
After user selects tools for each category:
- Tool config in its native format: `.{tool}rc`, `{tool}.config.{ext}`, `pyproject.toml` section, etc.
- `.gitignore` entries for tool cache directories
- `.editorconfig` (ALWAYS generate — stack-agnostic):
root = true
[*]
indent_style = space
indent_size = 2
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
Adjust indent_size and end_of_line for the detected stack's conventions.
Note on framework names: Pre-commit hook frameworks are ecosystem infrastructure standards, not research choices. Naming them here is correct — they are the glue layer, not the quality tools invoked through them. The quality tools (linter, formatter) invoked inside hooks are the research-driven selections from the Tool Research Protocol above.
Detect pre-commit framework for the stack:
- Ecosystem-standard framework for the stack (e.g., Husky, Lefthook, or the `pre-commit` package)
- Fallback: a plain `.git/hooks/pre-commit` shell script

Configure hooks to run in this order (fastest first, to fail fast):
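For illustration of the fastest-first ordering — assuming the `pre-commit` framework was chosen and using placeholder local commands rather than real tool selections — a minimal `.pre-commit-config.yaml` might look like:

```yaml
# Minimal sketch, not a recommendation: replace the placeholder entries with
# the formatter/linter the user selected in the Tool Research step.
repos:
  - repo: local
    hooks:
      - id: format                              # formatter first — fastest, auto-fixes style
        name: format
        entry: <selected formatter command>     # placeholder
        language: system
        types: [text]
      - id: lint                                # linter second — slower, catches bugs
        name: lint
        entry: <selected linter command>        # placeholder
        language: system
        types: [text]
```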
Performance constraint: Hooks MUST run in <30 seconds total for good DX. If slower:
Generate:
- Hook config for the chosen framework (`.husky/pre-commit`, `.lefthook.yml`, `.pre-commit-config.yaml`, etc.)
- `README.md` section: "## Code Quality — Pre-commit Hooks" with setup instructions for new team members

Detect CI platform from project files:
- `.github/workflows/` → GitHub Actions
- `.gitlab-ci.yml` → GitLab CI
- `azure-pipelines.yml` → Azure Pipelines
- `Jenkinsfile` → Jenkins
- `bitbucket-pipelines.yml` → Bitbucket Pipelines

If not detected → a direct user question: "Which CI platform does this project use?"
Generate CI job/step that:
- Runs the same checks as the pre-commit hooks (`--check` mode, no auto-fix)

MANDATORY: CI gate must match pre-commit hooks. If a check runs locally, it runs in CI. No divergence.
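A minimal sketch of such a gate, assuming GitHub Actions was detected and reusing the same placeholder commands as the hook sketch above (check-only, no auto-fix):

```yaml
# .github/workflows/quality.yml — hypothetical file name and placeholder commands
name: quality-gate
on: [push, pull_request]
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Format check (--check mode, no auto-fix)
        run: <selected formatter command> --check   # placeholder
      - name: Lint
        run: <selected linter command>              # placeholder
```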
After all config files are generated, MUST ATTENTION verify each item:
- `.editorconfig` created at project root
- Pre-commit hook fires on `git commit` — test with an intentional violation (e.g., add a lint error, attempt commit, verify hook blocks)
- `README.md` — new devs know to run `{hook install command}` after clone
- `.gitignore` updated with tool cache directories

If any item fails → a direct user question:
[IMPORTANT] Use task tracking to break ALL work into small tasks BEFORE starting — including tasks for each file read. This prevents context loss from long files. For simple tasks, AI MUST ATTENTION ask user whether to skip.
Critical Thinking Mindset — Apply critical thinking, sequential thinking. Every claim needs traced proof, confidence >80% to act. Anti-hallucination: Never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, stay skeptical of your own confidence — certainty without evidence is the root of all hallucination.
AI Mistake Prevention — Failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace the full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just the happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether the bug is in the caller (wrong data) or the callee (wrong handling). Fix at the responsible layer — never patch the symptom site.
- Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging — resist the nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding — don't pick silently. If a request has multiple interpretations, present each with an effort estimate and ask. Never assume the all-records, file-based, or more complex path.
IMPORTANT MUST ATTENTION follow declared step order for this skill; NEVER skip, reorder, or merge steps without explicit user approval
IMPORTANT MUST ATTENTION for every step/sub-skill call: set in_progress before execution, set completed after execution
IMPORTANT MUST ATTENTION every skipped step MUST include explicit reason; every completed step MUST include concise evidence
IMPORTANT MUST ATTENTION if Task tools unavailable, maintain an equivalent step-by-step plan tracker with synchronized statuses
MUST ATTENTION use QUERY TEMPLATES in Tool Research — never hardcode tool names in the research phase
MUST ATTENTION present top 2-3 options per category via a direct user question — never auto-select
MUST ATTENTION verify pre-commit hook fires with an intentional violation before marking complete
MUST ATTENTION CI gate must match pre-commit hooks — no divergence between local and CI checks
MUST ATTENTION loosen strict defaults ONLY with explicit user approval
[TASK-PLANNING] Before acting, analyze task scope and break it into small todo tasks using task tracking.
Source: .claude/hooks/lib/prompt-injections.cjs + .claude/.ck.json
`$workflow-start <workflowId>` for standard workflows; sequence custom steps manually.

[CRITICAL] Hard-won project debugging/architecture rules. MUST ATTENTION apply BEFORE forming a hypothesis or writing code.
Goal: Prevent recurrence of known failure patterns — debugging, architecture, naming, AI orchestration, environment.
Top Rules (apply always):
- `ExecuteInjectScopedAsync` for parallel async + repo/UoW — NEVER `ExecuteUowTask`
- Verify the Python alias first (`where python`/`where py`) — NEVER assume `python`/`python3` resolves

Detailed lessons:
- Parallel async + repo/UoW: use `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask`. `ExecuteUowTask` creates a new UoW but reuses the outer DI scope (same DbContext) — parallel iterations sharing a non-thread-safe DbContext silently corrupt data. `ExecuteInjectScopedAsync` creates a new UoW + new DI scope (fresh repo per iteration).
- Event ownership follows naming (`AccountUserEntityEventBusMessage` = Accounts owns). Core services (Accounts, Communication) are leaders. Feature services (Growth, Talents) sending to core MUST use `{CoreServiceName}...RequestBusMessage` — never define their own event for core to consume.
- Naming: `HrManagerOrHrOrPayroll` → `HrOperations`. The old policy name lists set members, not what it guards. Add a role → rename = broken abstraction. Rule: names express DOES/GUARDS, not CONTAINS. Test: does adding/removing a member force a rename? YES = content-driven = bad → rename to purpose (e.g., `HrOperationsAccessPolicy`). Nuance: "Or" is fine in behavioral idioms (`FirstOrDefault`, `SuccessOrThrow`) — it expresses HAPPENS, not membership.
- Never assume `python`/`python3` resolves — verify the alias first. Python may not be in bash PATH under those names. Check: `where python` / `where py`. Prefer `py` (Windows Python Launcher) for one-liners, `node` if a JS alternative exists.

Test-specific lessons → `docs/project-reference/integration-test-reference.md` Lessons Learned section. Production-code anti-patterns → `docs/project-reference/backend-patterns-reference.md` Anti-Patterns section. Generic debugging/refactoring reminders → System Lessons in `.claude/hooks/lib/prompt-injections.cjs`.
- `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask` (shared DbContext = silent data corruption)
- Feature → core messages use `{CoreServiceName}...RequestBusMessage`
- Never assume `python`/`python3` resolves — run `where python`/`where py` first, use the `py` launcher or `node`

Break work into small tasks (task tracking) before starting. Add final task: "Analyze AI mistakes & lessons learned".
Extract lessons — ROOT CAUSE ONLY, not symptom fixes:
- Record root-cause lessons via `$learn`.
- Ask: "Would `$code-review`/`$code-simplifier`/`$security`/`$lint` catch this?" — Yes → improve the review skill instead of `$learn`.
[TASK-PLANNING] [MANDATORY] BEFORE executing any workflow or skill step, create/update task tracking for all planned steps, then keep it synchronized as each step starts/completes.