code-no-test
// [Implementation] Use when you need to start coding an existing plan (no testing).
| Field | Value |
| --- | --- |
| name | code-no-test |
| description | [Implementation] Use when you need to start coding an existing plan (no testing). |
| disable-model-invocation | false |
Codex compatibility note:
- Invoke repository skills with `$skill-name` in Codex; this mirrored copy rewrites legacy Claude `/skill-name` references.
- Prefer the `plan-hard` skill for planning guidance in this Codex mirror.
- Task tracker mandate: BEFORE executing any workflow or skill step, create/update task tracking for all steps and keep it synchronized as progress changes.
- User-question prompts mean to ask the user directly in Codex.
- Ignore Claude-specific mode-switch instructions when they appear.
- Strict execution contract: when a user explicitly invokes a skill, execute that skill protocol as written.
- Subagent authorization: when a skill is user-invoked or AI-detected and its protocol requires subagents, that skill activation authorizes use of the required `spawn_agent` subagent(s) for that task.
- Do not skip, reorder, or merge protocol steps unless the user explicitly approves the deviation first.
- For workflow skills, execute each listed child-skill step explicitly and report step-by-step evidence.
- If a required step/tool cannot run in this environment, stop and ask the user before adapting.
Codex does not receive Claude hook-based doc injection. When coding, planning, debugging, testing, or reviewing, open project docs explicitly using this routing.
Always read:
- `docs/project-config.json` (project-specific paths, commands, modules, and workflow/test settings)
- `docs/project-reference/docs-index-reference.md` (routes to the full docs/project-reference/* catalog)
- `docs/project-reference/lessons.md` (always-on guardrails and anti-patterns)

Situation-based docs:
- Backend: `backend-patterns-reference.md`, `domain-entities-reference.md`, `project-structure-reference.md`
- Frontend: `frontend-patterns-reference.md`, `scss-styling-guide.md`, `design-system/README.md`
- Feature docs: `feature-docs-reference.md`
- Integration tests: `integration-test-reference.md`
- E2E tests: `e2e-test-reference.md`
- Code review: `code-review-rules.md` plus the domain docs above based on changed files

Do not read all docs blindly. Start from `docs-index-reference.md`, then open only relevant files for the task.
Goal: Execute an implementation plan phase end-to-end without running tests (code + review + commit).
Workflow:
Key Rules:
MUST ATTENTION READ CLAUDE.md, then THINK HARDER to start working on the following plan; follow the Orchestration Protocol, Core Responsibilities, Subagents Team, and Development Rules:
Be skeptical. Apply critical thinking and sequential thinking. Every claim needs traced proof and a confidence percentage (act only above 80%).
$ARGUMENTS <$plan-hard>
IMPORTANT: Restate these rules in all subagent communication:
If $ARGUMENTS is empty: locate the most recently modified plan.md in ./plans:
`find ./plans -name "plan.md" -type f -exec stat -f "%m %N" {} \; 2>/dev/null | sort -rn | head -1 | cut -d' ' -f2-`
If $ARGUMENTS provided: use that plan and detect which phase to work on (auto-detect or use an argument like "phase-2").
Output: ✓ Step 0: [Plan Name] - [Phase Name]
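For illustration, a minimal Python sketch of the same lookup (the `stat -f` flags above are BSD-style; this version assumes only that `plan.md` files live somewhere under `./plans`):

```python
from pathlib import Path
import sys

def latest_plan(plans_dir: str = "./plans") -> Path | None:
    """Return the most recently modified plan.md under plans_dir, or None if absent."""
    plans = sorted(Path(plans_dir).rglob("plan.md"),
                   key=lambda p: p.stat().st_mtime, reverse=True)
    return plans[0] if plans else None

if __name__ == "__main__":
    # $ARGUMENTS empty -> auto-detect; otherwise the caller passes an explicit plan path.
    plan = Path(sys.argv[1]) if len(sys.argv) > 1 else latest_plan()
    print(plan if plan else "No plan.md found in ./plans")
```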
Subagent Pattern (use throughout):
Task(agent_type="[type]", prompt="[task description]", description="[brief]")
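A hypothetical instantiation of that pattern, borrowing the Step 3 reviewer wording below (the `Task` dataclass here is only a stand-in for whatever call the runtime actually exposes):

```python
from dataclasses import dataclass

@dataclass
class Task:
    """Stand-in for the runtime's Task() subagent call; the real signature may differ."""
    agent_type: str
    prompt: str
    description: str

review = Task(
    agent_type="code-reviewer",
    prompt="Review changes for plan phase phase-2. Check security, performance, architecture, YAGNI/KISS/DRY",
    description="Review phase-2 changes",
)
print(review)
```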
Rules: Follow steps 1-6 in order. Each step requires output marker starting with "✓ Step N:". Mark each complete in task tracking before proceeding. Do not skip steps.
Read plan file completely. Map dependencies between tasks. List ambiguities or blockers. Identify required skills/tools and activate from catalog. Parse phase file and extract actionable tasks. If the plan references analysis files in .ai/workspace/analysis/, re-read them before implementation.
Task Tracking Initialization & Task Extraction:
Create tracking entries for Step 0: [Plan Name] - [Phase Name] and for all command steps (Step 1 through Step 6).
Output: ✓ Step 1: Found [N] tasks across [M] phases - Ambiguities: [list or "none"]
Mark Step 1 complete in task tracking, mark Step 2 in_progress.
Implement selected plan phase step-by-step following extracted tasks (Step 2.1, Step 2.2, etc.). Mark tasks complete as done. For UI work, call ui-ux-designer subagent: "Implement [feature] UI per ./docs/design-guidelines.md". Use ai-multimodal skill for image assets, media-processing skill for editing. Run type checking and compile to verify no syntax errors.
Output: ✓ Step 2: Implemented [N] files - [X/Y] tasks complete, compilation passed
Mark Step 2 complete in task tracking, mark Step 3 in_progress.
Call code-reviewer subagent: "Review changes for plan phase [phase-name]. Check security, performance, architecture, YAGNI/KISS/DRY". If critical issues found: STOP, fix all, re-run tester to verify, re-run code-reviewer. Repeat until no critical issues.
Critical issues: Security vulnerabilities (XSS, SQL injection, OWASP), performance bottlenecks, architectural violations, principle violations.
Output: ✓ Step 3: Code reviewed - [0] critical issues
Validation: If critical issues > 0, Step 3 INCOMPLETE - do not proceed.
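Schematically, the Step 3 gate behaves like the loop below (the `run_code_review` and `fix_all` helpers are placeholders for the subagent calls; only the loop-until-zero-critical-issues rule comes from this step):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    severity: str
    summary: str

def run_code_review(phase: str) -> list[Finding]:
    """Placeholder for the code-reviewer subagent; returns findings for the phase."""
    return []

def fix_all(critical: list[Finding]) -> None:
    """Placeholder: stop and fix every critical finding, then the loop re-reviews."""

def step_3_gate(phase: str) -> None:
    while True:
        critical = [f for f in run_code_review(phase) if f.severity == "critical"]
        if not critical:
            break  # the gate passes only at zero critical issues
        fix_all(critical)
    print("✓ Step 3: Code reviewed - 0 critical issues")

step_3_gate("phase-2")
```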
Mark Step 3 complete in task tracking, mark Step 4 in_progress.
Present a summary (3-5 bullets): what was implemented and the code review outcome.
Ask user explicitly: "Phase implementation complete. Code reviewed. Approve changes?"
Stop and wait - do not output Step 5 content until user responds.
Output (while waiting): ⏸ Step 4: WAITING for user approval
Output (after approval): ✓ Step 4: User approved - Ready to complete
Mark Step 4 complete in task tracking, mark Step 5 in_progress.
Prerequisites: User approved in Step 4 (verified above).
- project-manager sub-agent: "Update plan status in [plan-path]. Mark plan phase [phase-name] as DONE with timestamp. Update roadmap."
- docs-manager sub-agent: "Update docs for plan phase [phase-name]. Changed files: [list]."
- ONBOARDING CHECK: detect onboarding requirements (API keys, env vars, config) and generate a summary report with next steps.
AUTO-COMMIT (runs after the project-manager and docs-manager steps above complete):
Validation: the project-manager and docs-manager steps must complete successfully; the auto-commit runs only if these conditions are met.
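A minimal sketch of the auto-commit, assuming it stages everything and embeds the phase name in the message (the message wording and staging scope are assumptions, not part of this skill):

```python
import subprocess

def auto_commit(plan_name: str, phase: str) -> None:
    """Stage and commit the phase's changes after the finalize sub-steps succeed."""
    subprocess.run(["git", "add", "-A"], check=True)
    # Commit message wording is an assumed convention, not mandated above.
    subprocess.run(
        ["git", "commit", "-m", f"{plan_name}: complete {phase} (code + review, no tests)"],
        check=True,
    )
```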
Mark Step 5 complete in task tracking.
Phase workflow finished. Ready for next plan phase.
Step outputs must follow unified format: ✓ Step [N]: [Brief status] - [Key metrics]
Examples:
✓ Step 0: [Plan Name] - [Phase Name]
✓ Step 1: Found [N] tasks across [M] phases - Ambiguities: [list]
✓ Step 2: Implemented [N] files - [X/Y] tasks complete
✓ Step 3: Code reviewed - [0] critical issues
✓ Step 4: User approved - Ready to complete
✓ Step 5: Finalize - Status updated - Git committed

If any "✓ Step N:" output is missing, that step is INCOMPLETE.
Task tracking required: initialize at Step 0, mark each step complete before the next.
Mandatory subagent calls:
- code-reviewer
- project-manager AND docs-manager (when user approves)

Blocking gates:
- project-manager and docs-manager must complete successfully

REMEMBER:
- Use the ai-multimodal skill on the fly for visual assets.
- Use the ai-multimodal skill to verify they meet requirements.
- Use the media-processing skill or similar tools as needed.

MANDATORY IMPORTANT MUST ATTENTION — NO EXCEPTIONS: If this skill was called outside a workflow, you MUST use a direct user question to present these options. Do NOT skip because the task seems "simple" or "obvious" — the user decides:
If already inside a workflow, skip — the workflow handles sequencing.
[IMPORTANT] Use task tracking to break ALL work into small tasks BEFORE starting — including tasks for each file read. This prevents context loss from long files. For simple tasks, the AI MUST ask the user whether to skip.
docs/project-reference/domain-entities-reference.md — Domain entity catalog, relationships, cross-service sync (read when the task involves business entities/models) (read directly when relevant; do not rely on hook-injected conversation text)

Nested Task Expansion Contract — For workflow-step invocation, the `[Workflow] ...` row is only a parent container; the child skill still creates visible phase tasks.
- Call the current task list first. If a matching active parent workflow row exists, set `nested=true` and record `parentTaskId`; otherwise run standalone.
- Create one task per declared phase before phase work. When nested, prefix subjects `[N.M] $skill-name — phase`.
- When nested, link the parent with `TaskUpdate(parentTaskId, addBlockedBy: [childIds])`.
- Orchestrators must pre-expand a child skill's phase list and link the workflow row before invoking that child skill or sub-agent.
- Mark exactly one child `in_progress` before work and `completed` immediately after evidence is written.
- Complete the parent only after all child tasks are completed or explicitly cancelled with reason.

Blocked until: the current task list call is done, child phases are created, the parent is linked when nested, and the first child is marked `in_progress`.
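A hypothetical sketch of that contract in code. The in-memory `TaskRow` store and helper names are invented for illustration; only the `[N.M] $skill-name — phase` prefix, the blocked-by link, and the one-in_progress rule come from the contract above:

```python
from dataclasses import dataclass, field
from itertools import count

_ids = count(1)
TASKS: dict[int, "TaskRow"] = {}

@dataclass
class TaskRow:
    id: int
    subject: str
    status: str = "pending"
    blocked_by: list[int] = field(default_factory=list)

def create_task(subject: str) -> int:
    tid = next(_ids)
    TASKS[tid] = TaskRow(tid, subject)
    return tid

def expand_phases(skill: str, phases: list[str], parent_id: int | None, n: int = 1) -> list[int]:
    nested = parent_id is not None
    children = []
    for m, phase in enumerate(phases, start=1):
        prefix = f"[{n}.{m}] " if nested else ""       # [N.M] prefix only when nested
        children.append(create_task(f"{prefix}{skill} — {phase}"))
    if nested:
        TASKS[parent_id].blocked_by += children        # parent is blocked by its child phases
    TASKS[children[0]].status = "in_progress"          # exactly one child in progress
    return children
```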
Project Reference Docs Gate — Run after task-tracking bootstrap and before target/source file reads, grep, edits, or analysis. Project docs override generic framework assumptions.
- Identify scope: file types, domain area, and operation.
- Required docs by trigger: always `docs/project-reference/lessons.md`; doc lookup → `docs-index-reference.md`; review → `code-review-rules.md`; backend/CQRS/API → `backend-patterns-reference.md`; domain/entity → `domain-entities-reference.md`; frontend/UI → `frontend-patterns-reference.md`; styles/design → `scss-styling-guide.md` + `design-system/README.md`; integration tests → `integration-test-reference.md`; E2E → `e2e-test-reference.md`; feature docs/specs → `feature-docs-reference.md`; architecture/new area → `project-structure-reference.md`.
- Read every required doc that exists; skip absent docs as not applicable. Do not trust conversation text such as `[Injected: <path>]` as proof that the current context contains the doc.
- Before target work, state: `Reference docs read: ... | Missing/not applicable: ...`.

Blocked until: scope evaluated, required docs checked/read, `lessons.md` confirmed, citation emitted.
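A hypothetical sketch of that routing as data, assuming the gate is evaluated per task scope and that all listed files live under `docs/project-reference/` (trigger keywords are paraphrased from the list above; only docs that exist on disk count as required reads):

```python
from pathlib import Path

DOC_ROOT = Path("docs/project-reference")

# Trigger -> docs mapping, paraphrased from the gate above; lessons.md is always required.
DOC_ROUTES = {
    "always": ["lessons.md"],
    "doc-lookup": ["docs-index-reference.md"],
    "review": ["code-review-rules.md"],
    "backend": ["backend-patterns-reference.md"],
    "domain": ["domain-entities-reference.md"],
    "frontend": ["frontend-patterns-reference.md"],
    "styles": ["scss-styling-guide.md", "design-system/README.md"],
    "integration-tests": ["integration-test-reference.md"],
    "e2e": ["e2e-test-reference.md"],
    "features": ["feature-docs-reference.md"],
    "architecture": ["project-structure-reference.md"],
}

def required_docs(triggers: set[str]) -> tuple[list[str], list[str]]:
    """Split routed docs into (read these, missing/not applicable) by existence on disk."""
    wanted = {DOC_ROOT / d for t in triggers | {"always"} for d in DOC_ROUTES.get(t, [])}
    present = sorted(str(p) for p in wanted if p.exists())
    absent = sorted(str(p) for p in wanted if not p.exists())
    return present, absent

read, missing = required_docs({"backend", "domain"})
print(f"Reference docs read: {read} | Missing/not applicable: {missing}")
```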
Critical Thinking Mindset — Apply critical thinking and sequential thinking. Every claim needs traced proof; confidence must exceed 80% to act. Anti-hallucination: never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence — certainty without evidence is the root of all hallucination.
Understand Code First — HARD-GATE: Do NOT write, plan, or fix until you READ existing code.
- Search 3+ similar patterns (`grep`/`glob`) — cite `file:line` evidence
- Read existing files in target area — understand structure, base classes, conventions
- Run `python .claude/scripts/code_graph trace <file> --direction both --json` when `.code-graph/graph.db` exists
- Map dependencies via `connections` or `callers_of` — know what depends on your target
- Write investigation to `.ai/workspace/analysis/` for non-trivial tasks (3+ files)
- Re-read the analysis file before implementing — never work from memory alone
- NEVER invent new patterns when existing ones work — match exactly or document deviation

BLOCKED until:
- [ ] Read target files
- [ ] Grep 3+ patterns
- [ ] Graph trace (if graph.db exists)
- [ ] Assumptions verified with evidence
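As an illustrative sketch only (the `code_graph trace` invocation is quoted from the checklist above; the grep call, file naming, and everything else are assumptions):

```python
import subprocess
from pathlib import Path

def investigate(target_file: str, pattern: str, slug: str) -> Path:
    """Collect grep evidence plus an optional graph trace, then persist the notes."""
    grep = subprocess.run(["grep", "-rn", pattern, "."], capture_output=True, text=True)
    notes = [f"# Investigation: {target_file}", "", "## Similar patterns (file:line)", grep.stdout]

    if Path(".code-graph/graph.db").exists():
        trace = subprocess.run(
            ["python", ".claude/scripts/code_graph", "trace", target_file,
             "--direction", "both", "--json"],
            capture_output=True, text=True,
        )
        notes += ["## Graph trace (callers and connections)", trace.stdout]

    out = Path(".ai/workspace/analysis") / f"{slug}.md"   # file name is an assumed convention
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text("\n".join(notes), encoding="utf-8")
    return out
```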
AI Mistake Prevention — Failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace the full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just the happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether the bug is in the caller (wrong data) or the callee (wrong handling). Fix at the responsible layer — never patch the symptom site.
- Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging — resist the nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug; don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding — don't pick silently. If a request has multiple interpretations, present each with an effort estimate and ask. Never assume the all-records, file-based, or more complex path.
IMPORTANT MUST ATTENTION search 3+ existing patterns and read code BEFORE any modification. Run graph trace when graph.db exists.
MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact.
MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction.
State `Reference docs read: ...`. Apply `lessons.md`; project conventions override generic defaults.
Keep `[N.M] $skill-name — phase` prefixes and one-`in_progress` discipline.
IMPORTANT MUST ATTENTION break work into small todo tasks using task tracking BEFORE starting
IMPORTANT MUST ATTENTION search codebase for 3+ similar patterns before creating new code
IMPORTANT MUST ATTENTION cite file:line evidence for every claim (confidence >80% to act)
IMPORTANT MUST ATTENTION add a final review todo task to verify work quality
IMPORTANT MUST ATTENTION validate decisions with user via a direct user question — never auto-decide
MANDATORY IMPORTANT MUST ATTENTION READ the following files before starting:
IMPORTANT MUST ATTENTION READ CLAUDE.md before starting
[TASK-PLANNING] Before acting, analyze task scope and systematically break it into small todo tasks and sub-tasks using task tracking.
Source: .claude/hooks/lib/prompt-injections.cjs + .claude/.ck.json
`$workflow-start <workflowId>` for standard workflows; sequence custom steps manually.

[CRITICAL] Hard-won project debugging/architecture rules. MUST apply BEFORE forming a hypothesis or writing code.
Goal: Prevent recurrence of known failure patterns — debugging, architecture, naming, AI orchestration, environment.
Top Rules (apply always):
- `ExecuteInjectScopedAsync` for parallel async + repo/UoW — NEVER `ExecuteUowTask`
- Verify the Python alias (`where python`/`where py`) — NEVER assume `python`/`python3` resolves
- Use `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask`: `ExecuteUowTask` creates a new UoW but reuses the outer DI scope (same DbContext) — parallel iterations sharing a non-thread-safe DbContext silently corrupt data. `ExecuteInjectScopedAsync` creates a new UoW + new DI scope (fresh repo per iteration).
- Bus message ownership: the prefix names the owning service (e.g., `AccountUserEntityEventBusMessage` = Accounts owns). Core services (Accounts, Communication) are leaders. Feature services (Growth, Talents) sending to core MUST use `{CoreServiceName}...RequestBusMessage` — never define their own event for core to consume.
- `HrManagerOrHrOrPayroll` vs `HrOperations`: a policy name must say what it guards, not list its set members. Add a role → rename = broken abstraction. Rule: names express DOES/GUARDS, not CONTAINS. Test: does adding/removing a member force a rename? YES = content-driven = bad → rename to purpose (e.g., `HrOperationsAccessPolicy`). Nuance: "Or" is fine in behavioral idioms (`FirstOrDefault`, `SuccessOrThrow`) — it expresses HAPPENS, not membership.
- Never assume `python`/`python3` resolves — verify the alias first. Python may not be in bash PATH under those names. Check: `where python` / `where py`. Prefer `py` (Windows Python Launcher) for one-liners, `node` if a JS alternative exists.

Test-specific lessons → `docs/project-reference/integration-test-reference.md`, Lessons Learned section. Production-code anti-patterns → `docs/project-reference/backend-patterns-reference.md`, Anti-Patterns section. Generic debugging/refactoring reminders → System Lessons in `.claude/hooks/lib/prompt-injections.cjs`.
- `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask` (shared DbContext = silent data corruption)
- Feature-to-core messaging uses `{CoreServiceName}...RequestBusMessage`
- Never assume `python`/`python3` resolves — run `where python`/`where py` first; use the `py` launcher or `node`
- Break work into small tasks (task tracking) before starting. Add a final task: "Analyze AI mistakes & lessons learned".
Extract lessons — ROOT CAUSE ONLY, not symptom fixes:
- Record root-cause lessons via `$learn`.
- Ask: "Would `$code-review`/`$code-simplifier`/`$security`/`$lint` catch this?" — Yes → improve the review skill instead of `$learn`.
[TASK-PLANNING] [MANDATORY] BEFORE executing any workflow or skill step, create/update task tracking for all planned steps, then keep it synchronized as each step starts/completes.