| name | lint-and-validate |
| description | Automatic quality control, linting, and static analysis procedures. Use after every code modification to ensure syntax correctness and project standards. Triggers on keywords: lint, format, check, validate, types, static analysis. |
# Lint and Validate Skill

**MANDATORY:** Run the appropriate validation tools after EVERY code change. Do not finish a task until the code is error-free.
## Procedures by Ecosystem

### Node.js / TypeScript

- Lint/Fix: `npm run lint` or `npx eslint "path" --fix`
- Types: `npx tsc --noEmit`
- Security: `npm audit --audit-level=high`
### Python

- Linter (Ruff, fast and modern): `ruff check "path" --fix`
- Security (Bandit): `bandit -r "path" -ll`
- Types (MyPy): `mypy "path"`
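The per-ecosystem commands above can be chained from a small wrapper that stops at the first failure. A minimal sketch — the check lists are illustrative examples, not this project's actual configuration:

```python
import subprocess

def run_checks(commands):
    """Run each check command in order; return the first failure, or None."""
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            # Return enough context to diagnose and fix the failure.
            return {"command": cmd, "output": result.stdout + result.stderr}
    return None  # all checks passed

# Example check sets per ecosystem (adjust paths/scripts to your project).
NODE_CHECKS = [["npx", "eslint", ".", "--fix"], ["npx", "tsc", "--noEmit"]]
PYTHON_CHECKS = [["ruff", "check", ".", "--fix"], ["mypy", "."]]
```

Returning early on the first failure keeps the feedback loop tight: fix one class of error, then rerun.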
## The Quality Loop

1. Write/edit code.
2. Run the audit: `npm run lint && npx tsc --noEmit`
3. Analyze the report: check the "FINAL AUDIT REPORT" section.
4. Fix and repeat: submitting code with "FINAL AUDIT" failures is NOT allowed.
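The audit step of the loop can be sketched as a function that runs every check, collects failures, and prints a summary. The report header and check commands here are illustrative, matching the section name used above:

```python
import subprocess

def audit(commands):
    """Run every check, collect failures, and print a summary report."""
    failures = []
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            failures.append((" ".join(cmd), result.stdout + result.stderr))
    print("=== FINAL AUDIT REPORT ===")
    if not failures:
        print("All checks passed.")
    for name, output in failures:
        print(f"FAILED: {name}\n{output}")
    return not failures  # True means the code may be submitted
```

Unlike the early-exit wrapper, this runs all checks before reporting, so one pass surfaces every category of failure at once.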
## Error Handling

- If lint fails: fix the style or syntax issues immediately.
- If `tsc` fails: correct the type mismatches before proceeding.
- If no tool is configured: check the project root for `.eslintrc`, `tsconfig.json`, or `pyproject.toml` and suggest creating one.

**Strict rule:** No code should be committed or reported as "done" without passing these checks.
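The "no tool is configured" check above can be automated by probing the project root for the marker files listed. A minimal sketch — the marker lists are assumptions, extend them to match your projects:

```python
from pathlib import Path

# Config files that indicate which toolchain a project uses (illustrative).
CONFIG_MARKERS = {
    "node": ["tsconfig.json", ".eslintrc", ".eslintrc.json", ".eslintrc.js"],
    "python": ["pyproject.toml", "setup.cfg"],
}

def detect_toolchains(root):
    """Return the ecosystems whose config files exist under `root`."""
    root = Path(root)
    return sorted(
        eco for eco, markers in CONFIG_MARKERS.items()
        if any((root / marker).exists() for marker in markers)
    )
```

An empty result is the signal to suggest creating a config file rather than skipping validation.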
## Scripts

| Script | Purpose | Command |
|---|---|---|
| `scripts/lint_runner.py` | Unified lint check | `python scripts/lint_runner.py <project_path>` |
| `scripts/type_coverage.py` | Type coverage analysis | `python scripts/type_coverage.py <project_path>` |
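The bundled `type_coverage.py` is project-specific; one plausible core for such a script, assuming it parses Python sources with `ast` and reports the share of fully annotated functions, is:

```python
import ast

def type_coverage(source):
    """Fraction of function defs whose args and return are all annotated."""
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree)
             if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
    if not funcs:
        return 1.0  # nothing to annotate counts as fully covered

    def fully_annotated(fn):
        args = fn.args.args + fn.args.kwonlyargs
        return fn.returns is not None and all(a.annotation for a in args)

    return sum(fully_annotated(f) for f in funcs) / len(funcs)
```

This is a sketch of the idea, not the script's actual implementation.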
## AGI Framework Integration

### Qdrant Memory Integration

Before executing complex tasks with this skill:

    python3 execution/memory_manager.py auto --query "<task summary>"
**Decision tree:**

- Cache hit? Use the cached response directly; no need to re-process.
- Memory match? Inject `context_chunks` into your reasoning.
- No match? Proceed normally, then store the results:

        python3 execution/memory_manager.py store \
            --content "Description of what was decided/solved" \
            --type decision \
            --tags lint-and-validate <relevant-tags>

Note: storing automatically updates both the vector (Qdrant) and keyword (BM25) indices.
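To call the store command from code rather than a shell, the argv can be assembled programmatically. A sketch that only builds the command, using exactly the flags documented above (the tag values are illustrative):

```python
def store_command(content, tags, mtype="decision"):
    """Build argv for `memory_manager.py store` with the documented flags."""
    return (
        ["python3", "execution/memory_manager.py", "store",
         "--content", content,
         "--type", mtype,
         "--tags", "lint-and-validate"]
        + list(tags)  # extra relevant tags appended after the skill tag
    )
```

Passing the argv as a list (e.g. to `subprocess.run`) avoids shell-quoting issues in the `--content` description.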
### Agent Team Collaboration

- Strategy: this skill communicates via the shared memory system.
- Orchestration: invoked by the orchestrator via intelligent routing.
- Context sharing: always read previous agent outputs from memory before starting.
### Local LLM Support

When available, use local Ollama models for embedding and lightweight inference:

- Embeddings: `nomic-embed-text` via the Qdrant memory system.
- Lightweight analysis: local models reduce API costs for repetitive patterns.
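A local embedding call can be made against Ollama's default HTTP endpoint. This sketch assumes a stock local install (`http://localhost:11434`) and the `/api/embeddings` request shape; verify both against your Ollama version:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # assumed default endpoint

def build_payload(text, model="nomic-embed-text"):
    """Build the JSON body for an Ollama embedding request."""
    return {"model": model, "prompt": text}

def embed(text, model="nomic-embed-text"):
    """Request an embedding vector from a local Ollama instance."""
    payload = json.dumps(build_payload(text, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]
```

Keeping the payload builder separate from the network call makes the request shape easy to test without a running Ollama instance.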