scan-ui-system
// [Documentation] Use when you need to orchestrate all UI system scans in parallel: design system + SCSS styling + frontend patterns.
| name | scan-ui-system |
| description | [Documentation] Use when you need to orchestrate all UI system scans in parallel: design system + SCSS styling + frontend patterns. |
Codex compatibility note:
- Invoke repository skills with
`$skill-name` in Codex; this mirrored copy rewrites legacy Claude `/skill-name` references.
- Prefer the `plan-hard` skill for planning guidance in this Codex mirror.
- Task tracker mandate: BEFORE executing any workflow or skill step, create/update task tracking for all steps and keep it synchronized as progress changes.
- User-question prompts mean to ask the user directly in Codex.
- Ignore Claude-specific mode-switch instructions when they appear.
- Strict execution contract: when a user explicitly invokes a skill, execute that skill protocol as written.
- Subagent authorization: when a skill is user-invoked or AI-detected and its protocol requires subagents, that skill activation authorizes use of the required `spawn_agent` subagent(s) for that task.
- Do not skip, reorder, or merge protocol steps unless the user explicitly approves the deviation first.
- For workflow skills, execute each listed child-skill step explicitly and report step-by-step evidence.
- If a required step/tool cannot run in this environment, stop and ask the user before adapting.
Codex does not receive Claude hook-based doc injection. When coding, planning, debugging, testing, or reviewing, open project docs explicitly using this routing.
Always read:
- docs/project-config.json (project-specific paths, commands, modules, and workflow/test settings)
- docs/project-reference/docs-index-reference.md (routes to the full docs/project-reference/* catalog)
- docs/project-reference/lessons.md (always-on guardrails and anti-patterns)

Situation-based docs:
- backend-patterns-reference.md, domain-entities-reference.md, project-structure-reference.md
- frontend-patterns-reference.md, scss-styling-guide.md, design-system/README.md
- feature-docs-reference.md
- integration-test-reference.md
- e2e-test-reference.md
- code-review-rules.md plus domain docs above based on changed files

Do not read all docs blindly. Start from docs-index-reference.md, then open only relevant files for the task.
Goal: Run all 3 UI scan skills in parallel → produce a consolidated summary of what was found and what's still missing. Single command for full UI system documentation refresh.
Workflow:
Key Rules:
- Output location: docs/project-reference/
- MUST ATTENTION verify each sub-skill output doc after completion — never trust "it ran" without checking.

[BLOCKING] Before launching sub-skills, determine:
| Signal | Action |
|---|---|
| angular.json, package.json with frontend framework, src/Web* directories | Proceed with all 3 scans |
| No frontend code detected | STOP — report "Backend-only project; scan-ui-system skipped" |
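The pre-flight signals above can be checked mechanically. The sketch below is a minimal, hedged illustration — the dependency names in `FRONTEND_DEPS` and the `src/Web*` convention are assumptions taken from the signal table, not a definitive detector:

```python
import json
from pathlib import Path

# Assumed framework markers — adjust to the project's actual stack.
FRONTEND_DEPS = {"@angular/core", "react", "vue", "svelte"}

def has_frontend(root="."):
    """Pre-flight check: does this repo contain frontend code?"""
    root = Path(root)
    if (root / "angular.json").exists():
        return True
    if any(root.glob("src/Web*")):  # assumed project convention
        return True
    pkg = root / "package.json"
    if pkg.exists():
        data = json.loads(pkg.read_text(encoding="utf-8"))
        names = set(data.get("dependencies", {})) | set(data.get("devDependencies", {}))
        if names & FRONTEND_DEPS:
            return True
    return False
```

If this returns False, report "Backend-only project; scan-ui-system skipped" instead of launching sub-skills.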
| Reference Doc | Glob to Check | Stale If |
|---|---|---|
| docs/project-reference/design-system/README.md | Check last-scanned date in file | >30 days old OR is placeholder |
| docs/project-reference/scss-styling-guide.md | Check last-scanned date in file | >30 days old OR is placeholder |
| docs/project-reference/frontend-patterns-reference.md | Check last-scanned date in file | >30 days old OR is placeholder |
| Condition | Decision |
|---|---|
| All 3 docs fresh (≤30 days, has real content) | Ask user: "All UI docs are recent. Force refresh?" |
| 1-2 docs stale/missing | Run only the stale/missing scans |
| All 3 stale/missing | Run all 3 in parallel |
| User explicitly ran $scan-ui-system | Run all 3 regardless of freshness |
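The staleness rule above (>30 days old OR placeholder) can be sketched as a small check. This assumes the `<!-- Last scanned: YYYY-MM-DD -->` header format referenced later in this skill; treat it as an illustration, not the canonical implementation:

```python
import re
from datetime import date, timedelta
from pathlib import Path

STALE_AFTER = timedelta(days=30)

def is_stale(doc_path, today=None):
    """A reference doc is stale if missing, placeholder-only, or scanned >30 days ago."""
    today = today or date.today()
    path = Path(doc_path)
    if not path.exists():
        return True
    text = path.read_text(encoding="utf-8")
    # Assumed header format: <!-- Last scanned: YYYY-MM-DD -->
    m = re.search(r"<!--\s*Last scanned:\s*(\d{4}-\d{2}-\d{2})\s*-->", text)
    if not m:
        return True  # no scan date recorded → treat as placeholder
    scanned = date.fromisoformat(m.group(1))
    return (today - scanned) > STALE_AFTER
```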
Check docs/project-config.json for a designSystem section if available — pass config-driven paths to sub-skills.

Evidence gate: Confidence <60% on frontend code existence → ask user before proceeding.
Create task tracking entries for each sub-skill that will run + one verification task per sub-skill + one summary task. Do not start Phase 2 without tasks created.
Run the applicable sub-skills simultaneously. Each sub-skill is FULLY self-contained — do NOT pass context between them.
Activate $scan-design-system → populates docs/project-reference/design-system/README.md
Passes the detected project-config.json designSystem config to the sub-skill if available.
Activate $scan-scss-styling → populates docs/project-reference/scss-styling-guide.md
Activate $scan-frontend-patterns → populates docs/project-reference/frontend-patterns-reference.md
Do NOT proceed to Phase 4 until all 3 are verified.
For each output doc:
- Verify the doc contains real content, not a placeholder; if it is a placeholder, re-run the sub-skill.
- Verify the <!-- Last scanned: --> header was updated to today's date.
- If a re-run also produces a placeholder: escalate to user — "scan-{name} produced no output. Please run it manually and check for errors."
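A minimal verification sketch for the "it ran ≠ it produced output" rule. The placeholder markers and the line-count threshold below are assumptions for illustration — tune them to whatever the sub-skills actually emit:

```python
from pathlib import Path

# Assumed placeholder markers and content threshold — adjust per project.
PLACEHOLDER_MARKERS = ("TODO", "PLACEHOLDER", "To be populated")
MIN_REAL_LINES = 10

def verify_scan_output(doc_path):
    """Return (ok, reason). Never trust 'it ran' — inspect the doc itself."""
    path = Path(doc_path)
    if not path.exists():
        return False, "output doc missing"
    text = path.read_text(encoding="utf-8")
    lines = [ln for ln in text.splitlines() if ln.strip()]
    if len(lines) < MIN_REAL_LINES:
        return False, f"only {len(lines)} non-empty lines — likely placeholder"
    if any(marker in text for marker in PLACEHOLDER_MARKERS):
        return False, "placeholder markers still present"
    return True, "populated"
```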
After all 3 verified, produce a concise summary:
UI System Scan Complete ({date}):
Design System → docs/project-reference/design-system/README.md
Tokens: {approach: token-first | figma-driven | ad-hoc}
Components: {library | none detected}
Gaps: {list or "none identified"}
SCSS Styling → docs/project-reference/scss-styling-guide.md
Approach: {SCSS | Tailwind | CSS-in-JS | CSS Modules | hybrid}
BEM: {active | partial | none}
Gaps: {list or "none identified"}
Frontend Patterns → docs/project-reference/frontend-patterns-reference.md
Framework: {Angular | React | Vue | Svelte | multi-framework}
State: {store type detected}
Gaps: {list or "none identified"}
Replace {placeholders} with actual findings from verified output docs — NEVER fabricate.
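The anti-fabrication rule can be enforced structurally: render the summary only from a findings mapping parsed out of the verified docs, so a missing field fails loudly instead of being invented. A minimal sketch — the field names here are hypothetical, mirroring the template above:

```python
from datetime import date

SUMMARY_TEMPLATE = """\
UI System Scan Complete ({date}):
Design System -> docs/project-reference/design-system/README.md
  Tokens: {tokens}
  Components: {components}
  Gaps: {ds_gaps}
SCSS Styling -> docs/project-reference/scss-styling-guide.md
  Approach: {approach}
  BEM: {bem}
  Gaps: {scss_gaps}
Frontend Patterns -> docs/project-reference/frontend-patterns-reference.md
  Framework: {framework}
  State: {state}
  Gaps: {fe_gaps}
"""

def render_summary(findings):
    """Fill the template strictly from parsed findings; a missing key raises KeyError."""
    return SUMMARY_TEMPLATE.format(date=date.today().isoformat(), **findings)
```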
Invoked by:
- $scaffold in greenfield-init workflow (design system just created)
- $scan-ui-system (explicit user invocation)
- project-config skill Phase 5 scan task creation

This skill replaces 3 separate scan entries in the project-config scan table:
| Reference Docs | Scan Skill |
|---|---|
| design-system/README.md + scss-styling-guide.md + frontend-patterns-reference.md | $scan-ui-system |
[IMPORTANT] Use task tracking to break ALL work into small tasks BEFORE starting.
AI Mistake Prevention — Failure modes to avoid:
- Verify sub-skill results after completion. Sub-skills may complete with partial output. Grep-verify each output doc has real content before declaring success.
- Do NOT skip a sub-skill because the others found nothing. Each scan is independent — one empty result does not imply others will be empty.
- Surface ambiguity before coding. NEVER pick silently.
- Check downstream references before deleting. Map referencing files before removal.
Critical Thinking Mindset — Every claim needs traced proof, confidence >80% to act. Anti-hallucination: Never present guess as fact — cite sources, admit uncertainty, self-check output, cross-reference independently. Certainty without evidence = root of all hallucination.
Output Quality — Token efficiency without sacrificing quality.
- No inventories/counts — stale instantly
- No directory trees — use 1-line path conventions
- No TOCs — AI reads linearly
- One example per pattern — only if non-obvious
- Lead with answer, not reasoning
- Sacrifice grammar for concision in reports
- Unresolved questions at end
MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact.
MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction.
- IMPORTANT MUST ATTENTION break work into small task tracking tasks BEFORE starting — one per sub-skill, one per verification, one for summary
- IMPORTANT MUST ATTENTION run pre-flight check in Phase 0 — never launch scans on backend-only projects
- IMPORTANT MUST ATTENTION verify each sub-skill output doc has real content — "it ran" ≠ "it produced output"
- IMPORTANT MUST ATTENTION summary must come from actual verified doc content — NEVER fabricate token counts or component names
Anti-Rationalization:
| Evasion | Rebuttal |
|---|---|
| "Frontend code obvious, skip pre-flight check" | Phase 0 is BLOCKING — backend-only project wastes 3 sub-skill invocations |
| "All docs are probably still fresh" | Check last-scanned date with actual file read — never assume freshness |
| "Sub-skills ran, so output must be there" | Verify output doc content after each sub-skill — placeholder ≠ populated |
| "Summary from memory is fine" | Summary must come from verified output docs — never fabricate findings |
| "Only re-run needed sub-skills" | If user ran $scan-ui-system explicitly, run all 3 — override freshness check |
[TASK-PLANNING] Before acting, analyze task scope and break into small todo tasks and sub-tasks using task tracking.
Source: .claude/hooks/lib/prompt-injections.cjs + .claude/.ck.json
$workflow-start <workflowId> for standard workflows; sequence custom steps manually.

[CRITICAL] Hard-won project debugging/architecture rules. MUST ATTENTION apply BEFORE forming hypothesis or writing code.
Goal: Prevent recurrence of known failure patterns — debugging, architecture, naming, AI orchestration, environment.
Top Rules (apply always):
- ExecuteInjectScopedAsync for parallel async + repo/UoW — NEVER ExecuteUowTask. ExecuteUowTask creates a new UoW but reuses the outer DI scope (same DbContext) — parallel iterations sharing a non-thread-safe DbContext silently corrupt data. ExecuteInjectScopedAsync creates a new UoW + a new DI scope (fresh repo per iteration).
- Event bus ownership: the message name declares the owner (AccountUserEntityEventBusMessage = Accounts owns). Core services (Accounts, Communication) are leaders. Feature services (Growth, Talents) sending to core MUST use {CoreServiceName}...RequestBusMessage — never define their own event for core to consume.
- Naming: HrManagerOrHrOrPayroll-style names list set members, not what the policy guards. Add a role → rename = broken abstraction. Rule: names express DOES/GUARDS, not CONTAINS. Test: does adding/removing a member force a rename? YES = content-driven = bad → rename to purpose (e.g., HrOperationsAccessPolicy). Nuance: "Or" is fine in behavioral idioms (FirstOrDefault, SuccessOrThrow) — it expresses HAPPENS, not membership.
- NEVER assume python/python3 resolves — verify the alias first (where python / where py). Python may not be in bash PATH under those names. Prefer py (Windows Python Launcher) for one-liners, node if a JS alternative exists.

Test-specific lessons → docs/project-reference/integration-test-reference.md, Lessons Learned section. Production-code anti-patterns → docs/project-reference/backend-patterns-reference.md, Anti-Patterns section. Generic debugging/refactoring reminders → System Lessons in .claude/hooks/lib/prompt-injections.cjs.
- ExecuteInjectScopedAsync, NEVER ExecuteUowTask (shared DbContext = silent data corruption)
- Feature → core messages use {CoreServiceName}...RequestBusMessage
- Never assume python/python3 resolves — run where python/where py first, use py launcher or node

Break work into small tasks (task tracking) before starting. Add final task: "Analyze AI mistakes & lessons learned".
Extract lessons — ROOT CAUSE ONLY, not symptom fixes:
Record them via $learn. Before recording, ask: "Could $code-review/$code-simplifier/$security/$lint catch this?" — Yes → improve the review skill instead of $learn.
[TASK-PLANNING] [MANDATORY] BEFORE executing any workflow or skill step, create/update task tracking for all planned steps, then keep it synchronized as each step starts/completes.