workflow-start
// [Skill Management] Use when starting a detected workflow, initializing workflow state, or activating a workflow sequence.
| Field | Value |
|---|---|
| name | workflow-start |
| description | [Skill Management] Use when starting a detected workflow, initializing workflow state, or activating a workflow sequence. |
Codex compatibility note:
- Invoke repository skills with `$skill-name` in Codex; this mirrored copy rewrites legacy Claude `/skill-name` references.
- Prefer the `plan-hard` skill for planning guidance in this Codex mirror.
- Task tracker mandate: BEFORE executing any workflow or skill step, create/update task tracking for all steps and keep it synchronized as progress changes.
- User-question prompts mean to ask the user directly in Codex.
- Ignore Claude-specific mode-switch instructions when they appear.
- Strict execution contract: when a user explicitly invokes a skill, execute that skill protocol as written.
- Subagent authorization: when a skill is user-invoked or AI-detected and its protocol requires subagents, that skill activation authorizes use of the required `spawn_agent` subagent(s) for that task.
- Do not skip, reorder, or merge protocol steps unless the user explicitly approves the deviation first.
- For workflow skills, execute each listed child-skill step explicitly and report step-by-step evidence.
- If a required step/tool cannot run in this environment, stop and ask the user before adapting.
Codex does not receive Claude hook-based doc injection. When coding, planning, debugging, testing, or reviewing, open project docs explicitly using this routing.
Always read:
- docs/project-config.json (project-specific paths, commands, modules, and workflow/test settings)
- docs/project-reference/docs-index-reference.md (routes to the full docs/project-reference/* catalog)
- docs/project-reference/lessons.md (always-on guardrails and anti-patterns)

Situation-based docs:
- Backend work: backend-patterns-reference.md, domain-entities-reference.md, project-structure-reference.md
- Frontend work: frontend-patterns-reference.md, scss-styling-guide.md, design-system/README.md
- Feature work: feature-docs-reference.md
- Integration tests: integration-test-reference.md
- E2E tests: e2e-test-reference.md
- Code review: code-review-rules.md plus the domain docs above based on changed files

Do not read all docs blindly. Start from docs-index-reference.md, then open only relevant files for the task.
Goal: Detect user intent → present catalog/custom options → activate with a full task tracking plan.
Workflow: detect-workflow → analyze-best-match → ask-user-workflow-choice → activate-workflow → create-task-tracking (mark the first task in_progress) → execute-sequence

Key Rules:
- Present three options: A) Activate [Workflow] (Recommended) | B) Custom Pipeline: [step → ...] | C) Execute directly
- The workflows.json workflows field is an OBJECT → use workflows[workflowId], NEVER .find() or [index]
- Create ALL tasks before marking the first in_progress → batch creation, then execute
- Never mark a task completed without invoking its skill invocation → skip = in_progress + comment, not delete
- Check ## Workflow Catalog first (Tier 1) → NEVER read workflows.json directly when the catalog is in context
- NOT for: manual step execution (follow task tracking items), workflow design (use planning), catalog management
Related: $workflow-start <workflowId> | Hook: workflow-step-tracker.cjs | Hook: workflow-router.cjs
When the prompt doesn't cleanly match a single catalog workflow, or when combining steps from multiple workflows serves the request better, the AI MAY propose a Custom Pipeline alongside the catalog option.
| Condition | Example |
|---|---|
| No catalog workflow matches well | "Review hook changes and update skill docs" → spans review + docs |
| Best-match has significant unnecessary steps | Quick investigate + fix, but bugfix includes full TDD + integration cycle |
| Prompt combines 2+ workflow domains | "Audit performance and write integration tests for the slow query" |
| User explicitly requests a step sequence | "Just run scout, plan, and cook, nothing else" |
Do NOT propose when a catalog workflow is a strong match (>80% of its steps are relevant). Catalog workflows encode validated best-practice sequences; prefer them.
Custom pipeline steps must be valid commandMapping keys in workflows.json. No invented step names. Show full step sequences for ALL options so the user compares scope:
Option A: Activate "Bug Fix" workflow (Recommended)
Steps: $scout → $investigate → $debug-investigate → $plan-hard → $fix → $prove-fix → $test → ...
Option B: Custom Pipeline "Quick Fix + Docs"
Steps: $scout → $investigate → $fix → $docs-update
Rationale: Prompt targets a known location; a full TDD cycle is over-engineered here.
Option C: Execute directly without workflow
Rules:
Same 1:1 protocol: one task tracking item per step. Use the [Custom] prefix to distinguish from catalog tasks:
Task tracking: subject="[Custom] /{step-name} → {brief description}", description="Custom pipeline step N/{total}.", activeForm="Executing /{step-name}"
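As a sketch of this 1:1 mapping (the `buildCustomTasks` helper and its input shape are hypothetical; only the subject/description/activeForm templates come from the rule above):

```javascript
// Hypothetical helper: expands a custom pipeline into one task tracking
// entry per step, using the [Custom] templates from the rule above.
function buildCustomTasks(steps) {
  const total = steps.length;
  return steps.map((step, i) => ({
    subject: `[Custom] /${step.name} → ${step.brief}`,
    description: `Custom pipeline step ${i + 1}/${total}.`,
    activeForm: `Executing /${step.name}`,
  }));
}

const tasks = buildCustomTasks([
  { name: "scout", brief: "Locate affected files" },
  { name: "fix", brief: "Apply the targeted change" },
]);
```

The step names and briefs here are illustrative; real values come from the user's requested sequence.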
ALWAYS try tiers in order; stop at first success.
The workflow catalog is already injected as ## Workflow Catalog in your context (injected by workflow-router.cjs on every UserPromptSubmit).
Under the ## Workflow Catalog heading, each entry reads: **{workflowId}**: {name} | Use: ... | Steps: /step1 → /step2 → ...
The Steps: value → these ARE the slash commands for task tracking.
Example: Steps: $scout → $plan-hard → $cook → $test → $workflow-end
→ Create 5 task tracking items for $scout, $plan-hard, $cook, $test, $workflow-end, in order.
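A minimal sketch of Tier 1 parsing, assuming the catalog's Steps: line uses the arrow-separated format shown in the example (the `parseStepsLine` name is hypothetical, not a real hook API):

```javascript
// Hypothetical Tier 1 helper: turns a catalog "Steps:" line into the
// ordered list of slash commands, one task tracking item each.
function parseStepsLine(line) {
  return line
    .replace(/^Steps:\s*/, "")
    .split("→")
    .map((s) => s.trim())
    .filter((s) => s.length > 0 && s !== "...");
}

const cmds = parseStepsLine("Steps: $scout → $plan-hard → $cook → $test → $workflow-end");
// cmds → ["$scout", "$plan-hard", "$cook", "$test", "$workflow-end"]
```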
✅ Use Tier 1 for: all standard task tracking creation (no file reads needed)
⚠️ Use Tier 2 if: catalog not in context, OR you need preActions.injectContext
Use when catalog is absent from context OR preActions.injectContext is needed:
Grep: pattern='"<workflowId>":' path='.claude/workflows.json' context=35
Returns only that workflow's entry (~35 lines vs full file).
Parse: sequence array → step IDs → resolve via commandMapping.
Only if Tier 1 and Tier 2 both fail:
Read '.claude/workflows.json' lines 1-15 → commandMapping only
Then Grep '"<workflowId>":' context=30
NEVER read the full file; it is large and wastes tokens.
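The try-tiers-in-order rule can be sketched as a fallback chain. The tier functions below are placeholders for the three lookup strategies, not real hook APIs:

```javascript
// Placeholder tier implementations; each returns a sequence array or null.
function tier1ContextParse(id, ctx) { return (ctx.catalog && ctx.catalog[id]) || null; }
function tier2TargetedGrep(id, ctx) { return (ctx.grepResult && ctx.grepResult[id]) || null; }
function tier3MinimalRead(id, ctx) { return (ctx.fullFile && ctx.fullFile[id]) || null; }

// Try tiers in order; stop at the first one that yields a sequence.
function resolveSequence(workflowId, ctx) {
  for (const tier of [tier1ContextParse, tier2TargetedGrep, tier3MinimalRead]) {
    const seq = tier(workflowId, ctx);
    if (seq) return seq;
  }
  throw new Error(`Workflow not found: ${workflowId}`);
}
```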
FIRST action after activation: create EXACTLY one task tracking for EACH entry in the workflow's sequence array.
workflows.json CRITICAL SCHEMA: workflows.json is a JSON OBJECT, not an array. This is the most common AI mistake.
{
"commandMapping": { <stepId>: { "claude": "/cmd", "copilot": "/cmd" } },
"settings": { ... },
  "workflows": { <workflowId>: WorkflowEntry }   // OBJECT, keyed by ID
}
Lookup algorithm:
workflow = workflows[workflowId] // key lookup; NOT .find(), NOT [index]
steps = workflow.sequence // array of step ID strings
// resolve slash command:
slashCmd = commandMapping[stepId].claude // commandMapping["scout"].claude → "$scout"
WorkflowEntry fields:
| Field | Type | Notes |
|---|---|---|
| name | string | Display name |
| confirmFirst | boolean | Prompt user before starting |
| sequence | string[] | Ordered step IDs (SOLE source of truth) |
| whenToUse / whenNotToUse | string | Natural language intent matching |
| preActions | object | Optional injectContext / readFiles |
FORBIDDEN (common mistakes):
// ❌ WRONG
workflows.find(w => w.id === workflowId)
workflows[0]
// ✅ CORRECT
workflows[workflowId]
Object.keys(workflows) // list all IDs
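A runnable sketch of the correct lookup, using a made-up workflows.json fragment (the "bugfix" entry and its steps are illustrative; the field names follow the schema above):

```javascript
// Illustrative workflows.json fragment; real IDs and steps will differ.
const config = {
  commandMapping: {
    scout: { claude: "$scout", copilot: "/scout" },
    fix: { claude: "$fix", copilot: "/fix" },
  },
  workflows: {
    bugfix: { name: "Bug Fix", confirmFirst: true, sequence: ["scout", "fix"] },
  },
};

// Correct: key lookup on the workflows OBJECT, never .find() or [index].
const workflow = config.workflows["bugfix"];
const slashCmds = workflow.sequence.map((id) => config.commandMapping[id].claude);
// One task tracking item per sequence entry; counts must match exactly.
```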
- Tier 1: ## Workflow Catalog → find **{workflowId}** → parse the Steps: line → slash commands are ready to use
- Tier 2: Grep .claude/workflows.json --pattern '"<workflowId>":' --context 35 → parse sequence + commandMapping
See "Workflow Lookup: Token-Efficient (3-Tier Strategy)" above for full lookup rules and the fallback chain.
Task format:
Task tracking: subject="[Workflow] /{step-name} → {brief description}", description="Workflow step N/{total}. {conditional note}", activeForm="Executing /{step-name}"
Rules (NON-NEGOTIABLE):
- Conditional steps get an explicit note, e.g. [Workflow] $workflow-review-changes → Recursive re-review (conditional)
- Verify task count == len(sequence). Fix any mismatch before proceeding.
- Create ALL tasks first → then TaskUpdate the first task to in_progress.
Per step: TaskUpdate in_progress → invoke the step's skill → complete the skill → TaskUpdate completed.
- Approval steps ($plan-validate, $plan-review, $why-review) MUST use a direct user question → never auto-approve.
- To skip a step: TaskUpdate in_progress → comment "Skipped: {reason}" → TaskUpdate completed. Never delete.

Some workflow steps ARE themselves full workflows. Running them inline causes the parent session to absorb the entire nested workflow's tool calls, file reads, and sub-agent reports: guaranteed context overflow on long sequences.
Steps requiring sub-agent delegation (hard gate):
| Step | Workflow activated | Steps | Agent type |
|---|---|---|---|
| $workflow-review-changes | review-changes | 16 | code-reviewer |
| $workflow-review | review | 14 | code-reviewer |
Protocol when these steps appear in the active workflow sequence:
- Spawn via the spawn_agent tool with agent_type: "code-reviewer"
- Sub-agent writes its full report incrementally under plans/reports/
- Main agent reads the plans/reports/ file only when resolving specific blockers

The workflow-step-tracker.cjs PostToolUse hook injects the ⚠️ [WORKFLOW-IN-WORKFLOW GATE] warning automatically when the next step is one of the above.
IMPORTANT MANDATORY Steps: detect-workflow -> analyze-best-match -> ask-user-workflow-choice -> activate-workflow -> create-task-tracking -> execute-sequence
[MANDATORY] task tracking FIRST → break every workflow into tasks before any action. NEVER skip.
[MANDATORY] a direct user question ALWAYS → present 3 options, NEVER auto-activate.
[MANDATORY] skill invocation REQUIRED per step → NEVER mark a task completed without invoking it.
AI Mistake Prevention ā Failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace the full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips, not just the happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether the bug is in the caller (wrong data) or callee (wrong handling). Fix at the responsible layer; never patch the symptom site.
- Assume existing values are intentional; ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging; resist the nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes; apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding; don't pick silently. If a request has multiple interpretations, present each with an effort estimate and ask. Never assume the all-records, file-based, or more complex path.
Critical Thinking Mindset: apply critical thinking and sequential thinking. Every claim needs traced proof; require confidence >80% to act. Anti-hallucination: never present a guess as fact; cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence. Certainty without evidence is the root of all hallucination.
Incremental Result Persistence: MANDATORY for all sub-agents or heavy inline steps processing >3 files.
- Before starting: Create report file plans/reports/{skill}-{date}-{slug}.md
- After each file/section reviewed: Append findings to the report immediately; never hold them in memory
- Return to main agent: Summary only (per SYNC:subagent-return-contract) with a "Full report:" path
- Main agent: Reads the report file only when resolving specific blockers
Why: Context cutoff mid-execution loses ALL in-memory findings. Each disk write survives compaction. Partial results are better than no results.
Report naming:
plans/reports/{skill-name}-{YYMMDD}-{HHmm}-{slug}.md
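As an illustrative sketch (the `reportPath` helper is hypothetical; only the {skill-name}-{YYMMDD}-{HHmm}-{slug} shape comes from the naming rule above):

```javascript
// Hypothetical helper producing a report path in the mandated
// plans/reports/{skill-name}-{YYMMDD}-{HHmm}-{slug}.md shape.
function reportPath(skillName, slug, now = new Date()) {
  const p = (n) => String(n).padStart(2, "0");
  const ymd = `${p(now.getFullYear() % 100)}${p(now.getMonth() + 1)}${p(now.getDate())}`;
  const hm = `${p(now.getHours())}${p(now.getMinutes())}`;
  return `plans/reports/${skillName}-${ymd}-${hm}-${slug}.md`;
}
```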
Sub-Agent Return Contract: When this skill spawns a sub-agent, the sub-agent MUST return ONLY this structure. The main agent reads only this summary and NEVER requests full sub-agent output inline.
## Sub-Agent Result: [skill-name]
Status: ✅ PASS | ⚠️ PARTIAL | ❌ FAIL
Confidence: [0-100]%
### Findings (Critical/High only; max 10 bullets)
- [severity] [file:line] [finding]
### Actions Taken
- [file changed] [what changed]
### Blockers (if any)
- [blocker description]
Full report: plans/reports/[skill-name]-[date]-[slug].md

The main agent reads the Full report file ONLY when: (a) resolving a specific blocker, or (b) building a fix plan. The sub-agent writes the full report incrementally (per SYNC:incremental-persistence), not held in memory.
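For illustration only (the `formatSubAgentResult` function and its input shape are assumptions; the emitted lines follow the contract above, including the max-10 findings cap):

```javascript
// Hypothetical formatter for the sub-agent return contract; caps findings at 10.
function formatSubAgentResult({ skill, status, confidence, findings, reportPath }) {
  const top = findings.slice(0, 10); // Critical/High only, max 10 bullets
  return [
    `## Sub-Agent Result: ${skill}`,
    `Status: ${status}`,
    `Confidence: ${confidence}%`,
    "### Findings (Critical/High only; max 10 bullets)",
    ...top.map((f) => `- [${f.severity}] [${f.location}] ${f.text}`),
    `Full report: ${reportPath}`,
  ].join("\n");
}

const summary = formatSubAgentResult({
  skill: "code-reviewer",
  status: "⚠️ PARTIAL",
  confidence: 85,
  findings: [{ severity: "High", location: "src/app.ts:42", text: "unchecked null" }],
  reportPath: "plans/reports/code-reviewer-250105-0907-review.md",
});
```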
MUST ATTENTION apply critical thinking: every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present a guess as fact.
MUST ATTENTION apply AI mistake prevention: holistic-first debugging, fix at the responsible layer, surface ambiguity before coding, re-read files after compaction.
MUST ATTENTION ask a direct user question BEFORE activating: present all THREE options (catalog | custom pipeline | execute directly). Never auto-activate.
MUST ATTENTION workflows is an OBJECT: workflows[workflowId], NEVER .find() / [index] / .forEach()
MUST ATTENTION create ALL task tracking items for the full sequence BEFORE marking the first task in_progress
MUST ATTENTION never mark a task completed without invoking its skill invocation: skip means comment + completed, not delete
MUST ATTENTION custom pipeline steps must be valid commandMapping keys: never invent step names
MUST ATTENTION use Tier 1 context parse FIRST: check ## Workflow Catalog in context before any file read
[TASK-PLANNING] Before acting, analyze task scope and systematically break it into small todo tasks and sub-tasks using task tracking.
[IMPORTANT] Analyze how big the task is and break it into many small todo tasks systematically before starting. This is very important.
Source: .claude/hooks/lib/prompt-injections.cjs + .claude/.ck.json
Use $workflow-start <workflowId> for standard workflows; sequence custom steps manually.

[CRITICAL] Hard-won project debugging/architecture rules. MUST ATTENTION: apply BEFORE forming a hypothesis or writing code.
Goal: Prevent recurrence of known failure patterns in debugging, architecture, naming, AI orchestration, and environment.
Top Rules (apply always):
- Parallel async with repo/UoW: use ExecuteInjectScopedAsync, NEVER ExecuteUowTask. ExecuteUowTask creates a new UoW but reuses the outer DI scope (same DbContext); parallel iterations sharing a non-thread-safe DbContext silently corrupt data. ExecuteInjectScopedAsync creates a new UoW + new DI scope (fresh repo per iteration).
- Bus message ownership: the name prefix marks the owner (AccountUserEntityEventBusMessage = Accounts owns). Core services (Accounts, Communication) are leaders. Feature services (Growth, Talents) sending to core MUST use {CoreServiceName}...RequestBusMessage; never define their own event for core to consume.
- Policy naming: names like HrManagerOrHrOrPayroll name set members, not what the policy guards. Add a role → rename = broken abstraction. Rule: names express DOES/GUARDS, not CONTAINS. Test: does adding/removing a member force a rename? YES = content-driven = bad → rename to purpose (e.g., HrOperationsAccessPolicy). Nuance: "Or" is fine in behavioral idioms (FirstOrDefault, SuccessOrThrow); it expresses what HAPPENS, not membership.
- NEVER assume python/python3 resolves; verify the alias first (where python / where py). Python may not be in bash PATH under those names. Prefer py (Windows Python Launcher) for one-liners, or node if a JS alternative exists.

Test-specific lessons → docs/project-reference/integration-test-reference.md, Lessons Learned section. Production-code anti-patterns → docs/project-reference/backend-patterns-reference.md, Anti-Patterns section. Generic debugging/refactoring reminders → System Lessons in .claude/hooks/lib/prompt-injections.cjs.
- ExecuteInjectScopedAsync, NEVER ExecuteUowTask (shared DbContext = silent data corruption)
- Feature-to-core messages use {CoreServiceName}...RequestBusMessage
- Never assume python/python3 resolves; run where python/where py first, use the py launcher or node
- Break work into small tasks (task tracking) before starting. Add a final task: "Analyze AI mistakes & lessons learned".
Extract lessons: ROOT CAUSE ONLY, not symptom fixes.
- Record each lesson with $learn.
- First ask: "Would $code-review/$code-simplifier/$security/$lint catch this?" → Yes → improve that review skill instead of adding a lesson via $learn.
[TASK-PLANNING] [MANDATORY] BEFORE executing any workflow or skill step, create/update task tracking for all planned steps, then keep it synchronized as each step starts/completes.