seed-test-data
// [Dev Data] Use when you need to implement or enhance test data seeders that simulate QC happy-path scenarios via application-layer commands.
| name | seed-test-data |
| description | [Dev Data] Use when you need to implement or enhance test data seeders that simulate QC happy-path scenarios via application-layer commands. |
Codex compatibility note:
- Invoke repository skills with `$skill-name` in Codex; this mirrored copy rewrites legacy Claude `/skill-name` references.
- Prefer the `plan-hard` skill for planning guidance in this Codex mirror.
- Task tracker mandate: BEFORE executing any workflow or skill step, create/update task tracking for all steps and keep it synchronized as progress changes.
- User-question prompts mean to ask the user directly in Codex.
- Ignore Claude-specific mode-switch instructions when they appear.
- Strict execution contract: when a user explicitly invokes a skill, execute that skill protocol as written.
- Subagent authorization: when a skill is user-invoked or AI-detected and its protocol requires subagents, that skill activation authorizes use of the required `spawn_agent` subagent(s) for that task.
- Do not skip, reorder, or merge protocol steps unless the user explicitly approves the deviation first.
- For workflow skills, execute each listed child-skill step explicitly and report step-by-step evidence.
- If a required step/tool cannot run in this environment, stop and ask the user before adapting.
Codex does not receive Claude hook-based doc injection. When coding, planning, debugging, testing, or reviewing, open project docs explicitly using this routing.
Always read:
- docs/project-config.json (project-specific paths, commands, modules, and workflow/test settings)
- docs/project-reference/docs-index-reference.md (routes to the full docs/project-reference/* catalog)
- docs/project-reference/lessons.md (always-on guardrails and anti-patterns)

Situation-based docs:
- Backend work: backend-patterns-reference.md, domain-entities-reference.md, project-structure-reference.md
- Frontend work: frontend-patterns-reference.md, scss-styling-guide.md, design-system/README.md
- Feature work: feature-docs-reference.md
- Integration tests: integration-test-reference.md
- E2E tests: e2e-test-reference.md
- Code review: code-review-rules.md plus the domain docs above based on changed files

Do not read all docs blindly. Start from docs-index-reference.md, then open only relevant files for the task.
[BLOCKING] Execute skill steps in declared order. NEVER skip, reorder, or merge steps without explicit user approval.
[BLOCKING] Before each step or sub-skill call, update task tracking: set `in_progress` when the step starts, set `completed` when the step ends.
[BLOCKING] Every completed/skipped step MUST include brief evidence or an explicit skip reason.
[BLOCKING] If Task tools are unavailable, create and maintain an equivalent step-by-step plan tracker with the same status transitions.
Goal: Implement or enhance test data seeders — simulate QC happy-path scenarios via application-layer commands; NEVER direct DB writes.
Workflow:
Key Rules:
- Read docs/project-reference/seed-test-data-reference.md and docs/project-config.json (Data Seeders context group) before writing any seeder changes
- Loop from existing_count to target_count for restart-safety

Before any other step, classify the request:
| Task Type | Detection | Action |
|---|---|---|
| New seeder | No existing seeder for feature area | Create following discovered base class pattern |
| Enhance existing | Seeder exists, needs new scenarios | Read existing seeder, add without breaking |
| Fix broken | Seeder fails env gate / idempotency / DI scope | Diagnose via Universal Rules, fix at root |
| Unknown | Request ambiguous | Ask user — NEVER assume |
grep -r "{Feature}Seeder\|{Feature}SeedData\|{Feature}TestData" src/ -l
Search for project seeder conventions:
# .NET
grep -r "IDataSeeder\|ISeedDataHandler\|ApplicationDataSeeder\|CanSeedTestingData\|SeedingMinimumDummyItemsCount" src/ --include="*.cs" -l
# TypeScript
grep -r "seeder\|SeedData\|DataSeed" src/ --include="*.ts" -l
Record with file:line evidence:
Confirm the dev config has both the env gate key and the count key. If absent, add them following the project's dev config convention.
Identify before writing any code:
grep -r "{Feature}*Command" src/ --include="*.cs" -lgrep -r "{Feature}TestSeeder\|{Feature}SeedingHelper\|{Feature}TestDataSeeder" src/ -l
Algorithm (language-agnostic):
seeder():
if not is_development_environment(): return
if not seed_enabled_in_config(): return
target = config.get("SeedCount")
if target <= 0: return
existing = count_by_seeder_marker()
if existing >= target: return
for i from existing to target:
call_application_command(build_scenario_input(i))
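
A minimal C# sketch of this algorithm, for orientation only — `WidgetTestDataSeeder`, `CreateWidgetCommand`, the `TestSeeding:*` config keys, and the MediatR-style `ISender` dispatcher are all hypothetical stand-ins; substitute the base class, command, and keys discovered in Steps 1-2:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using MediatR;                                   // assumed dispatcher; swap for the project's own
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Hypothetical application-layer command; the real one is discovered in Step 2.
public sealed record CreateWidgetCommand(string Name) : IRequest<Guid>;

public sealed class WidgetTestDataSeeder
{
    private readonly IHostEnvironment _env;
    private readonly IConfiguration _config;
    private readonly IServiceScopeFactory _scopeFactory;

    public WidgetTestDataSeeder(IHostEnvironment env, IConfiguration config, IServiceScopeFactory scopeFactory)
        => (_env, _config, _scopeFactory) = (env, config, scopeFactory);

    public async Task SeedAsync(CancellationToken ct)
    {
        if (!_env.IsDevelopment()) return;                              // environment gate FIRST
        if (!_config.GetValue<bool>("TestSeeding:Enabled")) return;     // hypothetical gate key
        var target = _config.GetValue<int>("TestSeeding:WidgetCount");  // hypothetical count key
        if (target <= 0) return;

        var existing = await CountSeededWidgetsAsync(ct);               // idempotency check
        if (existing >= target) return;

        for (var i = existing; i < target; i++)                         // resume, never restart at 0
        {
            using var scope = _scopeFactory.CreateScope();              // fresh DI scope per iteration
            var sender = scope.ServiceProvider.GetRequiredService<ISender>();
            await sender.Send(new CreateWidgetCommand($"[SEED] Widget {i:D4}"), ct);  // command, never direct DB write
        }
    }

    // Placeholder: the real implementation counts rows matching the seeder marker (see the example below).
    private Task<int> CountSeededWidgetsAsync(CancellationToken ct) => Task.FromResult(0);
}
```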
Seeder marker — stable predicate identifying seeded vs user data:
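The marker itself is project-specific; as one hedged illustration only, a stable name prefix plus an EF Core count query (entity, field, and prefix below are assumptions, not the project's actual convention):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Hypothetical entity standing in for whatever the real seeder targets.
public sealed class Widget
{
    public string Name { get; set; } = string.Empty;
}

// Illustrative marker: seeded rows carry a stable "[SEED]" prefix on a display field.
// Any stable predicate works (dedicated flag column, reserved e-mail domain, ...) as long as it
// cleanly separates seeded rows from user data. A read-only count is fine here —
// only WRITES must go through application-layer commands.
public static class WidgetSeedMarker
{
    public const string Prefix = "[SEED]";

    public static Task<int> CountSeededAsync(DbContext db, CancellationToken ct) =>
        db.Set<Widget>().CountAsync(w => w.Name.StartsWith(Prefix), ct);
}
```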
MUST ATTENTION verify all of the following before completing:
- Environment gate is the first check — file:line evidence required
- Idempotency count check present — file:line evidence
- Loop starts at existing_count, not 0 — file:line evidence
- All writes go through application-layer commands — file:line evidence

| Task | Sub-Agent | When |
|---|---|---|
| Discover seeders + commands across large codebase | general-purpose | Steps 1-2 |
| Review seeder compliance | code-reviewer | Round 1 post-implementation |
| Seeder handles credentials/PII | security-auditor | Security-sensitive patterns |
| Seeder runs 1000+ records | performance-optimizer | Performance-intensive |
All sub-agent prompts MUST include:
Graph DB active. After grep finds key files, run:
python .claude/scripts/code_graph trace <file> --direction both --json
Pattern: grep → trace → grep verify.
| Anti-Pattern | Correct |
|---|---|
| Direct repo insert for domain entities | Call application command |
| Seeder validates business rules | Command owns validation; seeder provides valid inputs |
| No idempotency check | Check count first; seed only remaining |
| Hardcoded count (`for i in 0..10`) | Read count from config key (discovered in Step 1) |
| No environment gate | Check project env gate key first |
| Shared DI scope across loop iterations | Use project's scoped DI per iteration (prevents DbContext corruption) |
| Batch-all-then-write sub-agent findings | Persist findings per file; NEVER batch at end |
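
Of these, the shared-DI-scope row is the easiest to violate accidentally. A hedged sketch of the contrast, reusing the hypothetical `CreateWidgetCommand` and `ISender` names from the seeder sketch above:

```csharp
using System.Threading.Tasks;
using MediatR;
using Microsoft.Extensions.DependencyInjection;

public static class ScopePerIterationSketch
{
    // ANTI-PATTERN: one scope — and therefore one DbContext/UoW — shared by every iteration.
    // Change-tracker state accumulates and a failure mid-loop can poison later commands.
    public static async Task SharedScopeAsync(IServiceScopeFactory scopes, int existing, int target)
    {
        using var scope = scopes.CreateScope();
        var sender = scope.ServiceProvider.GetRequiredService<ISender>();
        for (var i = existing; i < target; i++)
            await sender.Send(new CreateWidgetCommand($"[SEED] Widget {i:D4}"));
    }

    // CORRECT: a fresh scope per iteration gives each command its own DbContext/UoW.
    public static async Task ScopePerIterationAsync(IServiceScopeFactory scopes, int existing, int target)
    {
        for (var i = existing; i < target; i++)
        {
            using var scope = scopes.CreateScope();
            var sender = scope.ServiceProvider.GetRequiredService<ISender>();
            await sender.Send(new CreateWidgetCommand($"[SEED] Widget {i:D4}"));
        }
    }
}
```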
Round 1: After implementation, spawn fresh code-reviewer sub-agent with zero memory of implementation:
Review seeder at [file:path]. Verify with file:line evidence for each:
1. Environment gate is FIRST check
2. Idempotency: count-before-seed pattern present
3. Loop starts at existing_count not 0
4. Zero application-layer command bypasses (direct repo/DB = FAIL)
5. No hardcoded count — config key read
6. Scoped DI per iteration
Report: PASS or FAIL with file:line for each finding.
Round 2: If FAIL → fix → new fresh sub-agent. Max 3 rounds → escalate to user. NEVER reuse sub-agent across rounds. A clean round ENDS the review; a round with issues triggers fix → fresh sub-agent re-review.
MUST ATTENTION — NOT IN WORKFLOW YET: Use a direct user question:
- Activate `workflow-seed-test-data` (Recommended) — scout → investigate → seed-test-data → review-changes → code-simplifier → docs-update
- Execute `$seed-test-data` directly — run this skill standalone
MUST ATTENTION after completing: use a direct user question — do NOT skip:
[IMPORTANT] Create task tracking for ALL tasks BEFORE starting. For simple tasks, ask the user whether to skip it.
Critical Thinking Mindset — Apply critical thinking and sequential thinking. Every claim needs traced proof; confidence >80% to act. Anti-hallucination: never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence — certainty without evidence is the root of all hallucination.
Understand Code First — HARD-GATE: Do NOT write, plan, or fix until you READ existing code.
- Search 3+ similar patterns (`grep`/`glob`) — cite `file:line` evidence
- Read existing files in target area — understand structure, base classes, conventions
- Run `python .claude/scripts/code_graph trace <file> --direction both --json` when `.code-graph/graph.db` exists
- Map dependencies via `connections` or `callers_of` — know what depends on your target
- Write investigation to `.ai/workspace/analysis/` for non-trivial tasks (3+ files)
- Re-read analysis file before implementing — never work from memory alone
- NEVER invent new patterns when existing ones work — match exactly or document deviation
BLOCKED until:
- [ ] Read target files
- [ ] Grep 3+ patterns
- [ ] Graph trace (if graph.db exists)
- [ ] Assumptions verified with evidence
Evidence-Based Reasoning — Speculation is FORBIDDEN. Every claim needs proof.
- Cite `file:line`, grep results, or framework docs for EVERY claim
- Declare confidence: >80% act freely, 60-80% verify first, <60% DO NOT recommend
- Cross-service validation required for architectural changes
- "I don't have enough evidence" is valid and expected output
BLOCKED until:
- [ ] Evidence file path (file:line)
- [ ] Grep search performed
- [ ] 3+ similar patterns found
- [ ] Confidence level stated

Forbidden without proof: "obviously", "I think", "should be", "probably", "this is because". If incomplete → output:
"Insufficient evidence. Verified: [...]. Not verified: [...]."
AI Mistake Prevention — Failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether bug is in caller (wrong data) or callee (wrong handling). Fix at responsible layer — never patch symptom site.
- Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging — resist nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding — don't pick silently. If request has multiple interpretations, present each with effort estimate and ask. Never assume all-records, file-based, or more complex path.
IMPORTANT MUST ATTENTION search 3+ existing patterns and read code BEFORE writing any seeder.
MUST ATTENTION cite file:line for every claim; declare confidence; "I don't have enough evidence" is valid output.
MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact.
MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction.
IMPORTANT MUST ATTENTION follow declared step order for this skill; NEVER skip, reorder, or merge steps without explicit user approval
IMPORTANT MUST ATTENTION for every step/sub-skill call: set in_progress before execution, set completed after execution
IMPORTANT MUST ATTENTION every skipped step MUST include explicit reason; every completed step MUST include concise evidence
IMPORTANT MUST ATTENTION if Task tools unavailable, maintain an equivalent step-by-step plan tracker with synchronized statuses
IMPORTANT MUST ATTENTION task tracking — break all work into tasks BEFORE starting
IMPORTANT MUST ATTENTION NEVER call repo/DB directly — use application-layer commands
IMPORTANT MUST ATTENTION ALWAYS gate by environment FIRST; ALWAYS check count before seeding
IMPORTANT MUST ATTENTION loop from existing_count to target_count — NEVER from 0
IMPORTANT MUST ATTENTION scoped DI per iteration — shared DI scope = silent DbContext corruption
IMPORTANT MUST ATTENTION fresh sub-agent re-review required ONLY after a fix cycle. Clean Round 1 ENDS the review.
Anti-Rationalization:
| Evasion | Rebuttal |
|---|---|
| "Simple seeder, skip review loop" | Idempotency bugs are silent. Run Round 1 always. |
| "Already know the base class" | Show file:line. No proof = no knowledge. |
| "Environment gate is obvious" | Verify it's FIRST check with file:line evidence. |
| "Just hardcode count for now" | NEVER — config key required. Find it in Step 1. |
| "No graph.db, skip trace" | Use grep-only trace. Still run 3+ pattern search. |
| "Existing scenarios look fine, skip enhance" | Read all scenarios; enhancement may conflict — verify first. |
[TASK-PLANNING] Before acting, break task into small todo tasks using task tracking.
Source: .claude/hooks/lib/prompt-injections.cjs + .claude/.ck.json
`$workflow-start <workflowId>` for standard workflows; sequence custom steps manually.

[CRITICAL] Hard-won project debugging/architecture rules. MUST ATTENTION apply BEFORE forming a hypothesis or writing code.
Goal: Prevent recurrence of known failure patterns — debugging, architecture, naming, AI orchestration, environment.
Top Rules (apply always):
- `ExecuteInjectScopedAsync` for parallel async + repo/UoW — NEVER `ExecuteUowTask`
- Check the Python alias (`where python`/`where py`) — NEVER assume `python`/`python3` resolves

- Parallel async + repo/UoW: use `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask`. `ExecuteUowTask` creates a new UoW but reuses the outer DI scope (same DbContext) — parallel iterations sharing a non-thread-safe DbContext silently corrupt data. `ExecuteInjectScopedAsync` creates a new UoW + new DI scope (fresh repo per iteration). See the sketch below.
- Bus message ownership (e.g., `AccountUserEntityEventBusMessage` = Accounts owns). Core services (Accounts, Communication) are leaders. Feature services (Growth, Talents) sending to core MUST use `{CoreServiceName}...RequestBusMessage` — never define their own event for core to consume.
- Policy naming: `HrManagerOrHrOrPayroll` → `HrOperations`. Such a name lists set members, not what it guards. Add role → rename = broken abstraction. Rule: names express DOES/GUARDS, not CONTAINS. Test: adding/removing a member forces a rename? YES = content-driven = bad → rename to purpose (e.g., `HrOperationsAccessPolicy`). Nuance: "Or" is fine in behavioral idioms (`FirstOrDefault`, `SuccessOrThrow`) — it expresses HAPPENS, not membership.
- NEVER assume `python`/`python3` resolves — verify the alias first. Python may not be in bash PATH under those names. Check: `where python` / `where py`. Prefer `py` (Windows Python Launcher) for one-liners, `node` if a JS alternative exists.

- Test-specific lessons → `docs/project-reference/integration-test-reference.md` Lessons Learned section.
- Production-code anti-patterns → `docs/project-reference/backend-patterns-reference.md` Anti-Patterns section.
- Generic debugging/refactoring reminders → System Lessons in `.claude/hooks/lib/prompt-injections.cjs`.
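
To illustrate the first rule without reproducing project internals: the sketch below shows the underlying principle with the standard `IServiceScopeFactory` only — each parallel branch gets its own scope, hence its own DbContext. The real `ExecuteInjectScopedAsync`/`ExecuteUowTask` signatures live in the codebase and are not reproduced here; `CreateWidgetCommand` and `ISender` are the same hypothetical names used earlier.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using MediatR;
using Microsoft.Extensions.DependencyInjection;

// Principle only: every parallel branch resolves its own DI scope, so no two branches
// ever share a DbContext/UoW — the failure mode that ExecuteUowTask introduces.
public static class ParallelScopeSketch
{
    public static Task RunAsync(IServiceScopeFactory scopes, IReadOnlyList<CreateWidgetCommand> commands) =>
        Task.WhenAll(commands.Select(async command =>
        {
            using var scope = scopes.CreateScope();   // fresh scope → fresh DbContext per branch
            var sender = scope.ServiceProvider.GetRequiredService<ISender>();
            await sender.Send(command);
        }));
}
```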
- `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask` (shared DbContext = silent data corruption)
- Feature service → core service: `{CoreServiceName}...RequestBusMessage`
- NEVER assume `python`/`python3` resolves — run `where python`/`where py` first, use the `py` launcher or `node`

Break work into small tasks (task tracking) before starting. Add a final task: "Analyze AI mistakes & lessons learned".
Extract lessons — ROOT CAUSE ONLY, not symptom fixes:
- Record project-wide lessons via `$learn`.
- Ask "Would `$code-review`/`$code-simplifier`/`$security`/`$lint` catch this?" — Yes → improve the review skill instead; otherwise record it via `$learn`.
[TASK-PLANNING] [MANDATORY] BEFORE executing any workflow or skill step, create/update task tracking for all planned steps, then keep it synchronized as each step starts/completes.