scan-seed-test-data
// [Documentation] Use when you need to scan seeder patterns and populate/sync docs/project-reference/seed-test-data-reference markdown from real code evidence.
| Field | Value |
| --- | --- |
| name | scan-seed-test-data |
| description | [Documentation] Use when you need to scan seeder patterns and populate/sync docs/project-reference/seed-test-data-reference markdown from real code evidence. |
Codex compatibility note:
- Invoke repository skills with `$skill-name` in Codex; this mirrored copy rewrites legacy Claude `/skill-name` references.
- Prefer the `plan-hard` skill for planning guidance in this Codex mirror.
- Task tracker mandate: BEFORE executing any workflow or skill step, create/update task tracking for all steps and keep it synchronized as progress changes.
- User-question prompts mean to ask the user directly in Codex.
- Ignore Claude-specific mode-switch instructions when they appear.
- Strict execution contract: when a user explicitly invokes a skill, execute that skill protocol as written.
- Subagent authorization: when a skill is user-invoked or AI-detected and its protocol requires subagents, that skill activation authorizes use of the required `spawn_agent` subagent(s) for that task.
- Do not skip, reorder, or merge protocol steps unless the user explicitly approves the deviation first.
- For workflow skills, execute each listed child-skill step explicitly and report step-by-step evidence.
- If a required step/tool cannot run in this environment, stop and ask the user before adapting.
Codex does not receive Claude hook-based doc injection. When coding, planning, debugging, testing, or reviewing, open project docs explicitly using this routing.
Always read:
- `docs/project-config.json` (project-specific paths, commands, modules, and workflow/test settings)
- `docs/project-reference/docs-index-reference.md` (routes to the full `docs/project-reference/*` catalog)
- `docs/project-reference/lessons.md` (always-on guardrails and anti-patterns)

Situation-based docs:
- Backend work: backend-patterns-reference.md, domain-entities-reference.md, project-structure-reference.md
- Frontend work: frontend-patterns-reference.md, scss-styling-guide.md, design-system/README.md
- Feature context: feature-docs-reference.md
- Integration tests: integration-test-reference.md
- E2E tests: e2e-test-reference.md
- Code review: code-review-rules.md plus the domain docs above, based on changed files

Do not read all docs blindly. Start from docs-index-reference.md, then open only the files relevant to the task.
[IMPORTANT] Use task tracking to break work into small tasks before scanning.
Goal: Populate or sync docs/project-reference/seed-test-data-reference.md with project-specific seeder patterns using file:line evidence.
Workflow:
Read:
- `docs/project-reference/seed-test-data-reference.md`
- `docs/project-config.json` (Data Seeders context group)

Mode rules:
Run evidence-first scans (adapt to the stack; the examples below are for .NET projects):
rg -n "DataSeeder|SeedData|CanSeedTestingData|SeedingMinimumDummyItemsCount|ExecuteInjectScopedAsync|ExecuteUowTask" src
rg -n "IPlatformApplicationDataSeeder|AddTransient<IPlatformApplicationDataSeeder" src
rg -n "WaitUntilAsync|SeedAdminUserData|CountAsync\\(" src
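As a minimal, self-contained illustration of the evidence-first idea (the sample file and patterns below are fabricated for demonstration; real scans use `rg` against `src` as above), `grep -n` yields the same `file:line` citation format the reference doc requires:

```shell
# Create a tiny fake seeder file so the scan is reproducible anywhere.
mkdir -p /tmp/seed-scan-demo
cat > /tmp/seed-scan-demo/DemoDataSeeder.cs <<'EOF'
public class DemoDataSeeder
{
    // Fresh DI scope per parallel iteration:
    public Task SeedAsync() => ExecuteInjectScopedAsync(SeedUsersAsync);
}
EOF

# -n prefixes each hit with its line number, giving file:line:match citations.
grep -n 'DataSeeder\|ExecuteInjectScopedAsync' /tmp/seed-scan-demo/DemoDataSeeder.cs
```

Every hit already carries the evidence format needed for the reference doc, so no post-processing is required.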
Graph check (when .code-graph/graph.db exists):
python .claude/scripts/code_graph trace <seeder-file> --direction both --json
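A hedged wrapper for this conditional check might look like the sketch below; the seeder path is a placeholder, and the wrapper simply skips (rather than fails) when the graph DB is absent:

```shell
# Run the trace only when the graph DB is present; otherwise report the skip.
# "$1" stands in for the doc's <seeder-file> placeholder.
trace_seeder() {
  if [ -f .code-graph/graph.db ]; then
    python .claude/scripts/code_graph trace "$1" --direction both --json
  else
    echo "skip: no .code-graph/graph.db"
  fi
}

trace_seeder "src/SomeDataSeeder.cs"   # hypothetical path, for illustration only
```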
Minimum evidence to capture:
- DI-scope safety guidance (`ExecuteInjectScopedAsync` vs anti-patterns)

Target file:
docs/project-reference/seed-test-data-reference.md

Rules:
- Every claim needs `file:line` proof

Verification checklist:
- `docs/project-config.json` seeder rules

Write report:
plans/reports/scan-seed-test-data-{YYMMDD}-{HHMM}-report.md

Report sections:
- Evidence citations (`file:line`)

[IMPORTANT] Cite file:line evidence for every claim.
[IMPORTANT] Use surgical updates in sync mode (do not rewrite the entire doc).
[IMPORTANT] Verify DI-scope safety guidance (`ExecuteInjectScopedAsync`) against real source usage.
[IMPORTANT] Run one graph trace when the graph DB is available.
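A surgical sync can be sketched with marker-delimited sections: only the body between markers is replaced, and the rest of the doc is untouched. The `BEGIN`/`END` comment markers and the seeder citation below are assumptions for illustration, not a project convention:

```shell
# Build a sample reference doc with one marked section plus surrounding content.
doc=/tmp/seed-test-data-reference.md
cat > "$doc" <<'EOF'
# Seed Test Data Reference
<!-- BEGIN seeder-registrations -->
stale evidence line
<!-- END seeder-registrations -->
Untouched section below the markers.
EOF

# Replace only the body between the markers, leaving everything else intact.
new_body="- DemoDataSeeder: src/Demo/DemoDataSeeder.cs:42 (hypothetical citation)"
awk -v body="$new_body" '
  /<!-- BEGIN seeder-registrations -->/ { print; print body; skip=1; next }
  /<!-- END seeder-registrations -->/   { skip=0 }
  skip == 0 { print }
' "$doc" > "$doc.tmp" && mv "$doc.tmp" "$doc"
```

The design point is that a full-file rewrite risks clobbering hand-maintained sections, while a marker-scoped replace keeps the diff reviewable.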
Source: .claude/hooks/lib/prompt-injections.cjs + .claude/.ck.json
Use `$workflow-start <workflowId>` for standard workflows; sequence custom steps manually.

[CRITICAL] Hard-won project debugging/architecture rules. Apply them BEFORE forming a hypothesis or writing code.
Goal: Prevent recurrence of known failure patterns — debugging, architecture, naming, AI orchestration, environment.
Top Rules (apply always):
- `ExecuteInjectScopedAsync` for parallel async + repo/UoW — NEVER `ExecuteUowTask`
- Verify the Python alias first (`where python`/`where py`) — NEVER assume `python`/`python3` resolves

Details:
- Parallel async + repository/UoW: use `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask`. `ExecuteUowTask` creates a new UoW but reuses the outer DI scope (same DbContext) — parallel iterations sharing the non-thread-safe DbContext silently corrupt data. `ExecuteInjectScopedAsync` creates a new UoW + new DI scope (fresh repo per iteration).
- Message ownership (`AccountUserEntityEventBusMessage` = Accounts owns). Core services (Accounts, Communication) are leaders. Feature services (Growth, Talents) sending to core MUST use `{CoreServiceName}...RequestBusMessage` — never define their own event for core to consume.
- Naming: `HrManagerOrHrOrPayrollHrOperationsPolicy` names set members, not what it guards. Add a role → rename = broken abstraction. Rule: names express DOES/GUARDS, not CONTAINS. Test: does adding/removing a member force a rename? YES = content-driven = bad → rename to purpose (e.g., `HrOperationsAccessPolicy`). Nuance: "Or" is fine in behavioral idioms (`FirstOrDefault`, `SuccessOrThrow`) — it expresses what HAPPENS, not membership.
- Python tooling: NEVER assume `python`/`python3` resolves — verify the alias first. Python may not be in the bash PATH under those names. Check: `where python` / `where py`. Prefer `py` (Windows Python Launcher) for one-liners, `node` if a JS alternative exists.

Where lessons live:
- Test-specific lessons → `docs/project-reference/integration-test-reference.md`, Lessons Learned section.
- Production-code anti-patterns → `docs/project-reference/backend-patterns-reference.md`, Anti-Patterns section.
- Generic debugging/refactoring reminders → System Lessons in `.claude/hooks/lib/prompt-injections.cjs`.
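The alias-verification rule can be sketched as a small probe; `find_python` and the stub launcher below are illustrative assumptions (on Windows, the doc's `where python`/`where py` is the equivalent check):

```shell
# Probe candidate interpreter names instead of assuming one resolves.
find_python() {
  for cand in py python python3; do
    command -v "$cand" >/dev/null 2>&1 && { echo "$cand"; return 0; }
  done
  return 1
}

# Deterministic demo: put a stub 'py' launcher first on PATH.
mkdir -p /tmp/fakebin
printf '#!/bin/sh\n' > /tmp/fakebin/py
chmod +x /tmp/fakebin/py
PATH="/tmp/fakebin:$PATH"
find_python   # prints 'py', the first candidate that resolves
```

The same pattern generalizes to any tool whose alias varies by platform: probe once, cache the result, and fail loudly if nothing resolves.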
Quick recap:
- `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask` (shared DbContext = silent data corruption)
- Feature→core bus messages: `{CoreServiceName}...RequestBusMessage`
- Never assume `python`/`python3` resolves — run `where python`/`where py` first; use the `py` launcher or `node`

Break work into small tasks (task tracking) before starting. Add a final task: "Analyze AI mistakes & lessons learned".
Extract lessons — ROOT CAUSE ONLY, not symptom fixes:
- Record the lesson with `$learn`.
- Ask: "Would `$code-review`/`$code-simplifier`/`$security`/`$lint` catch this?" — if yes, improve that review skill instead of adding it via `$learn`.
[TASK-PLANNING] [MANDATORY] BEFORE executing any workflow or skill step, create/update task tracking for all planned steps, then keep it synchronized as each step starts/completes.