scan-docs-index
// [Documentation] Use when scanning documentation structure, counts, relationships, categories, and lookup tables.
| name | scan-docs-index |
| description | [Documentation] Use when scanning documentation structure, counts, relationships, categories, and lookup tables. |
Codex compatibility note:
- Invoke repository skills with `$skill-name` in Codex; this mirrored copy rewrites legacy Claude `/skill-name` references.
- Prefer the `plan-hard` skill for planning guidance in this Codex mirror.
- Task tracker mandate: BEFORE executing any workflow or skill step, create/update task tracking for all steps and keep it synchronized as progress changes.
- User-question prompts mean to ask the user directly in Codex.
- Ignore Claude-specific mode-switch instructions when they appear.
- Strict execution contract: when a user explicitly invokes a skill, execute that skill protocol as written.
- Subagent authorization: when a skill is user-invoked or AI-detected and its protocol requires subagents, that skill activation authorizes use of the required `spawn_agent` subagent(s) for that task.
- Do not skip, reorder, or merge protocol steps unless the user explicitly approves the deviation first.
- For workflow skills, execute each listed child-skill step explicitly and report step-by-step evidence.
- If a required step/tool cannot run in this environment, stop and ask the user before adapting.
Codex does not receive Claude hook-based doc injection. When coding, planning, debugging, testing, or reviewing, open project docs explicitly using this routing.
Always read:
- `docs/project-config.json` (project-specific paths, commands, modules, and workflow/test settings)
- `docs/project-reference/docs-index-reference.md` (routes to the full `docs/project-reference/*` catalog)
- `docs/project-reference/lessons.md` (always-on guardrails and anti-patterns)

Situation-based docs:
- Backend: backend-patterns-reference.md, domain-entities-reference.md, project-structure-reference.md
- Frontend: frontend-patterns-reference.md, scss-styling-guide.md, design-system/README.md
- Features: feature-docs-reference.md
- Integration tests: integration-test-reference.md
- E2E tests: e2e-test-reference.md
- Code review: code-review-rules.md plus the domain docs above, based on changed files

Do not read all docs blindly. Start from docs-index-reference.md, then open only relevant files for the task.
Goal: Scan the project's docs/ directory → populate docs/project-reference/docs-index-reference.md with accurate documentation tree, file counts by category, doc relationships, and keyword-to-doc lookup table.
Workflow:
Key Rules:
Phase 0 (run in parallel, before any other step):
- Read docs/project-reference/docs-index-reference.md
- Detect documentation organization type:
| Signal | Type | Scan Approach |
|---|---|---|
| Structured docs/{category}/ directories | Structured hierarchy | Scan per-category with phase table below |
| Single flat docs/ with all files | Flat structure | Single glob, categorize by filename prefix |
| wiki/ or external doc system | Wiki-based | Scan wiki directory, note external docs |
| Mix of docs + inline README.md files | Hybrid | Scan both docs/ and source-embedded READMEs |
- Read docs/project-config.json if available

Evidence gate: confidence <60% on organization type → ask the user, DO NOT guess the structure.
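To make the signal table concrete, a minimal detection sketch might look like the following. The `src/` probe, the `unknown` fallback, and the directory heuristics are illustrative assumptions, not part of the skill spec; the evidence gate still applies, so ask the user when signals are ambiguous.

```python
# Hedged sketch of Phase 0 organization-type detection. The directory probes
# are illustrative assumptions; when signals conflict, ask the user instead.
from pathlib import Path

def detect_org_type(root: Path) -> str:
    docs = root / "docs"
    if (root / "wiki").is_dir():
        return "wiki-based"           # wiki/ or external doc system
    if not docs.is_dir():
        return "unknown"              # confidence <60%: ask the user, do not guess
    has_category_dirs = any(p.is_dir() for p in docs.iterdir())
    src = root / "src"
    has_inline_readmes = src.is_dir() and any(src.rglob("README.md"))
    if has_category_dirs and has_inline_readmes:
        return "hybrid"               # scan docs/ and source-embedded READMEs
    if has_category_dirs:
        return "structured-hierarchy"
    return "flat"                     # single glob, categorize by filename prefix

print(detect_org_type(Path(".")))
```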
Create task tracking entries for each scan dimension. Do not start Phase 2 without tasks created.
Write findings incrementally after each category — NEVER batch at end.
Think (Coverage dimension): Which directories exist under docs/? Which ones have content vs are empty/stub?
Think (Accuracy dimension): For each count in the existing doc, does the actual glob match? What's the delta?
Think (Completeness dimension): Are there markdown files outside documented directories (e.g., in src/, .claude/, project root)? Are those included in any category?
Think (Discovery dimension): Which files don't fit any existing category? Where do they go?
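For the Accuracy dimension above, the count check can be mechanical. A minimal sketch, assuming the existing index embeds counts in a "(N files)" form (the real format may differ; adjust the regex to match it):

```python
# Hedged sketch for the Accuracy dimension: compare counts claimed in the
# existing index against fresh glob counts. The "(N files)" pattern is an
# assumption about the index format; adjust the regex to the real document.
import re
from pathlib import Path

index = Path("docs/project-reference/docs-index-reference.md").read_text(encoding="utf-8")

# A line like "project-reference/ (12 files)" yields ("project-reference/", 12).
claimed = {m.group(1): int(m.group(2))
           for m in re.finditer(r"(\S+/)\s*\((\d+) files\)", index)}

for category, documented in claimed.items():
    actual = len(list(Path("docs", category).rglob("*.md")))
    if actual != documented:
        print(f"STALE: {category} documented={documented} actual={actual}")
```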
Root-level scan:
- *.md in project root (README.md, CLAUDE.md, CHANGELOG.md, etc.)

Scan each subdirectory with verified glob counts:
| Category | Glob Pattern | What to Extract |
|---|---|---|
| project-reference/ | docs/project-reference/**/*.md | File count (verified), list with purposes |
| business-features/ | docs/business-features/**/*.md | Count per app, feature count |
| operations | docs/getting-started.md, docs/deployment.md, etc. | File count, list |
| design-system/ | docs/design-system/**/*.md or docs/project-reference/design-system/**/*.md | File count, app mapping |
| specs/ | docs/specs/**/*.md | File count, module coverage |
| architecture-decisions/ | docs/architecture-decisions/**/*.md | ADR count |
| templates/ | docs/templates/**/*.md | Template count and types |
| release-notes/ | docs/release-notes/**/*.md | File count |
Uncategorized files discovery rule: After scanning all categories above, run a broad glob for docs/**/*.md and diff against the union of all category globs. Files in the diff are uncategorized — create a separate "Uncategorized / Other" section for them. NEVER silently omit files.
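A minimal sketch of this discovery rule in Python: the category patterns mirror the table above, and the exact set is project-specific, so treat the dict as an illustrative assumption rather than the canonical category list.

```python
# Hedged sketch of the uncategorized-files discovery diff. The category globs
# mirror the table above; the exact set is an assumption to adjust per project.
from pathlib import Path

CATEGORY_GLOBS = {
    "project-reference": "project-reference/**/*.md",
    "business-features": "business-features/**/*.md",
    "specs": "specs/**/*.md",
    "architecture-decisions": "architecture-decisions/**/*.md",
    "templates": "templates/**/*.md",
    "release-notes": "release-notes/**/*.md",
}

docs = Path("docs")
categorized: set[Path] = set()
for name, pattern in CATEGORY_GLOBS.items():
    files = set(docs.glob(pattern))
    categorized |= files
    print(f"{name}: {len(files)} files (verified by glob)")

# Broad glob minus the union of category globs = uncategorized files.
for path in sorted(set(docs.glob("**/*.md")) - categorized):
    print(f"UNCATEGORIZED: {path}")
```

Anything printed as UNCATEGORIZED goes into the "Uncategorized / Other" section rather than being silently dropped.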
Also scan:
- .claude/docs/**/*.md — count and categorize
- .claude/skills/**/*.md — count skills

Think: Which docs serve as entry points (README → guide chains)? Which docs are referenced from multiple places? Which are isolated?
Trace key doc relationships by grepping for markdown links between docs:
For each docs/business-features/{App}/ directory:
For each docs/project-reference/*.md:
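The link-grepping step above can be approximated by extracting inline markdown links from every doc and counting inbound references. A sketch, with the simplifying assumption that reference-style links are ignored:

```python
# Hedged sketch of relationship tracing: extract [text](target.md) links from
# every doc and count inbound references. Reference-style links are ignored
# (a simplifying assumption).
import re
from collections import Counter
from pathlib import Path

LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#\s]+\.md)")

inbound = Counter()
for doc in Path("docs").rglob("*.md"):
    text = doc.read_text(encoding="utf-8", errors="ignore")
    for target in LINK_RE.findall(text):
        resolved = (doc.parent / target).resolve()
        if resolved.exists():
            inbound[resolved] += 1

# Docs referenced from many places are hubs; docs never referenced are isolated.
for path, count in inbound.most_common(10):
    print(f"{count:3d} <- {path}")
```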
Phase 5: spawn a fresh sub-agent (zero memory) to independently verify:
- docs/**/*.md that appear in no category? (Run the diff)

Do NOT proceed to Phase 6 until fresh-eyes verification passes.
Phase 6: write to docs/project-reference/docs-index-reference.md with sections:
<!-- Last scanned: {YYYY-MM-DD} -->
# Documentation Index Reference
> Auto-generated by `$scan-docs-index`. Do not edit manually.
## Documentation System
{total} markdown files across {N} categories. Last scanned: {date}.
## Documentation Graph
{ASCII tree with counts — counts from verified globs only}
## Key Doc Relationships
{ASCII relationship diagram — entry points and cross-references}
## Doc Lookup Guide
{keyword → path table}
## Uncategorized Files
{Files found by broad glob not in any category — with paths}
[IMPORTANT] Use task tracking to break ALL work into small tasks BEFORE starting.
Prerequisites: MUST ATTENTION READ before executing:
Critical Thinking Mindset — Every claim needs traced proof, confidence >80% to act. Anti-hallucination: Never present guess as fact — cite sources, admit uncertainty, self-check output, cross-reference independently. Certainty without evidence = root of all hallucination.
Scan & Update Reference Doc — Surgical updates only, NEVER full rewrite.
- Read existing doc first — understand structure and manual annotations
- Detect mode: Placeholder (headings only) → Init. Has content → Sync.
- Scan codebase (grep/glob) for current state
- Diff findings vs doc — identify stale sections only
- Update ONLY diverged sections. Preserve manual annotations.
- Update metadata (date, version) in frontmatter/header
- NEVER rewrite entire doc. NEVER remove sections without evidence obsolete.
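A minimal sketch of such a surgical update, assuming `##` headings delimit sections (the heading level, path, and `update_section` helper are assumptions to adjust):

```python
# Hedged sketch of a surgical section update: splice a new body into one
# "## Heading" section and leave every other byte of the doc untouched.
import re
from pathlib import Path

def update_section(doc_path: Path, heading: str, new_body: str) -> None:
    text = doc_path.read_text(encoding="utf-8")
    # Match the heading line plus its body, up to the next "## " heading or EOF.
    pattern = re.compile(rf"(^## {re.escape(heading)}\n)(.*?)(?=^## |\Z)",
                         re.DOTALL | re.MULTILINE)
    if not pattern.search(text):
        raise ValueError(f"Section '{heading}' not found; refusing a full rewrite")
    new_text = pattern.sub(lambda m: m.group(1) + new_body + "\n", text, count=1)
    doc_path.write_text(new_text, encoding="utf-8")

update_section(Path("docs/project-reference/docs-index-reference.md"),
               "Documentation Graph", "{tree regenerated from verified globs}")
```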
Output Quality — Token efficiency without sacrificing quality.
- No inventories/counts — stale instantly
- No directory trees — use 1-line path conventions
- No TOCs — AI reads linearly
- One example per pattern — only if non-obvious
- Lead with answer, not reasoning
- Sacrifice grammar for concision in reports
- Unresolved questions at end
AI Mistake Prevention — Failure modes to avoid:
- Verify AI-generated content against actual code: AI hallucinates file paths and counts; glob to confirm existence before documenting.
- Trace the full dependency chain after edits. Always trace the full chain.
- Surface ambiguity before coding; NEVER pick silently.
- Update docs that embed canonical data when the source changes: docs inlining counts go stale silently.
IMPORTANT MUST ATTENTION read existing doc first, scan codebase, diff, surgical update only. Never rewrite entire doc.
IMPORTANT MUST ATTENTION output quality: no counts/trees/TOCs in the skill output itself, 1 example per pattern, lead with answer.
MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact.
MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction.
- IMPORTANT MUST ATTENTION break work into small task tracking tasks BEFORE starting
- IMPORTANT MUST ATTENTION detect doc organization type in Phase 0 — scan approach depends on it
- IMPORTANT MUST ATTENTION evidence gate for EVERY count — glob to verify, NEVER estimate or copy from existing content
- IMPORTANT MUST ATTENTION write findings incrementally after each category — NEVER batch at end
- IMPORTANT MUST ATTENTION run uncategorized file discovery — NEVER silently omit files that don't fit categories
- IMPORTANT MUST ATTENTION Phase 5 fresh-eyes verification is mandatory before writing final doc
Anti-Rationalization:
| Evasion | Rebuttal |
|---|---|
| "Count looks right from existing doc, skip glob" | EVERY count requires fresh glob verification — no exceptions |
| "Only need to check 3 paths" | Phase 5 has 6 specific checks — sample across all categories |
| "All files fit into existing categories" | Run the uncategorized discovery diff — NEVER assume full coverage |
| "Skip Round 2 even when Round 1 found issues" | Clean Round 1 ends the scan. When issues exist, fresh-eyes mandatory after fixing — main agent's counts carry confirmation bias. |
| "Lookup table doesn't need all keywords" | Map keywords for EVERY documented category, not just top-level |
[TASK-PLANNING] Before acting, analyze task scope and break into small todo tasks and sub-tasks using task tracking.
Source: .claude/hooks/lib/prompt-injections.cjs + .claude/.ck.json
Use `$workflow-start <workflowId>` for standard workflows; sequence custom steps manually.

[CRITICAL] Hard-won project debugging/architecture rules. MUST ATTENTION apply BEFORE forming hypothesis or writing code.
Goal: Prevent recurrence of known failure patterns — debugging, architecture, naming, AI orchestration, environment.
Top Rules (apply always):
- `ExecuteInjectScopedAsync` for parallel async + repo/UoW — NEVER `ExecuteUowTask`
- Check the Python alias (`where python`/`where py`) — NEVER assume `python`/`python3` resolves

Details:
- Parallel async with repo/UoW: use `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask`. `ExecuteUowTask` creates a new UoW but reuses the outer DI scope (same DbContext) — parallel iterations sharing a non-thread-safe DbContext silently corrupt data. `ExecuteInjectScopedAsync` creates a new UoW + a new DI scope (fresh repo per iteration).
- Bus message ownership: event messages belong to the publishing service (e.g., `AccountUserEntityEventBusMessage` = Accounts owns). Core services (Accounts, Communication) are leaders. Feature services (Growth, Talents) sending to core MUST use `{CoreServiceName}...RequestBusMessage` — never define their own event for core to consume.
- Naming: a policy name like `HrManagerOrHrOrPayrollPolicy` names set members, not what it guards. Add a role → rename = broken abstraction. Rule: names express DOES/GUARDS, not CONTAINS. Test: does adding/removing a member force a rename? YES = content-driven = bad → rename to purpose (e.g., `HrOperationsAccessPolicy`). Nuance: "Or" is fine in behavioral idioms (FirstOrDefault, SuccessOrThrow) — it expresses what HAPPENS, not membership.
- NEVER assume `python`/`python3` resolves — verify the alias first. Python may not be in bash PATH under those names. Check: `where python` / `where py`. Prefer `py` (Windows Python Launcher) for one-liners, `node` if a JS alternative exists.

Pointers: test-specific lessons → `docs/project-reference/integration-test-reference.md` "Lessons Learned" section. Production-code anti-patterns → `docs/project-reference/backend-patterns-reference.md` "Anti-Patterns" section. Generic debugging/refactoring reminders → System Lessons in `.claude/hooks/lib/prompt-injections.cjs`.
Quick recap:
- `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask` (shared DbContext = silent data corruption)
- Feature → core messages: `{CoreServiceName}...RequestBusMessage`
- NEVER assume `python`/`python3` resolves — run `where python`/`where py` first; use the `py` launcher or `node`

Break work into small tasks (task tracking) before starting. Add a final task: "Analyze AI mistakes & lessons learned".
Extract lessons — ROOT CAUSE ONLY, not symptom fixes:
- Ask: "Would `$code-review`/`$code-simplifier`/`$security`/`$lint` catch this?" — Yes → improve that review skill instead.
- Otherwise, record the lesson via `$learn`.
[TASK-PLANNING] [MANDATORY] BEFORE executing any workflow or skill step, create/update task tracking for all planned steps, then keep it synchronized as each step starts/completes.