business-analyst
// [Project Management] Use when creating user stories, writing acceptance criteria, analyzing requirements, or mapping business processes.
| Field | Value |
|---|---|
| name | business-analyst |
| description | [Project Management] Use when creating user stories, writing acceptance criteria, analyzing requirements, or mapping business processes. |
Codex compatibility note:
- Invoke repository skills with `$skill-name` in Codex; this mirrored copy rewrites legacy Claude `/skill-name` references.
- Prefer the `plan-hard` skill for planning guidance in this Codex mirror.
- Task tracker mandate: BEFORE executing any workflow or skill step, create/update task tracking for all steps and keep it synchronized as progress changes.
- User-question prompts mean to ask the user directly in Codex.
- Ignore Claude-specific mode-switch instructions when they appear.
- Strict execution contract: when a user explicitly invokes a skill, execute that skill protocol as written.
- Subagent authorization: when a skill is user-invoked or AI-detected and its protocol requires subagents, that skill activation authorizes use of the required `spawn_agent` subagent(s) for that task.
- Do not skip, reorder, or merge protocol steps unless the user explicitly approves the deviation first.
- For workflow skills, execute each listed child-skill step explicitly and report step-by-step evidence.
- If a required step/tool cannot run in this environment, stop and ask the user before adapting.
Codex does not receive Claude hook-based doc injection. When coding, planning, debugging, testing, or reviewing, open project docs explicitly using this routing.
Always read:
- docs/project-config.json (project-specific paths, commands, modules, and workflow/test settings)
- docs/project-reference/docs-index-reference.md (routes to the full docs/project-reference/* catalog)
- docs/project-reference/lessons.md (always-on guardrails and anti-patterns)

Situation-based docs:
- Backend work: backend-patterns-reference.md, domain-entities-reference.md, project-structure-reference.md
- Frontend work: frontend-patterns-reference.md, scss-styling-guide.md, design-system/README.md
- Feature work: feature-docs-reference.md
- Integration tests: integration-test-reference.md
- E2E tests: e2e-test-reference.md
- Code review: code-review-rules.md plus the domain docs above, based on changed files

Do not read all docs blindly. Start from docs-index-reference.md, then open only the files relevant to the task.
Goal: Refine requirements into actionable user stories with BDD acceptance criteria and business rule traceability.
Workflow:
Key Rules:
- Check existing feature docs under docs/business-features/ before creating new ones.
- Include story_points and complexity in all PBI/story outputs.
- Be skeptical. Apply critical thinking and sequential thinking. Every claim needs traced proof and a confidence percentage (it should be above 80% to act).
- docs/project-reference/domain-entities-reference.md → Domain entity catalog, relationships, cross-service sync (read when the task involves business entities/models; read it directly when relevant, do not rely on hook-injected conversation text)

Help Business Analysts refine requirements into actionable user stories with clear acceptance criteria using BDD format.
When refining domain-related PBIs, automatically extract and reference existing business rules.
Dynamic Discovery:
Glob("docs/business-features/{module}/detailed-features/*.md") for feature docsGlob("docs/business-features/{module}/detailed-features/**/*.md") for nested featuresFrom PBI frontmatter or module detection:
module fieldrelated_features listFrom feature doc "Business Rules" section:
BR-{MOD}-XXX: DescriptionBR-GRO-001: Goals must have measurable success criteriaInclude section:
## Related Business Rules
**From Feature Docs:**
- BR-GRO-001: Goals must have measurable success criteria
- BR-GRO-005: Only goal owner and manager can edit progress
**New Business Rules (if applicable):**
- BR-GRO-042: {New rule description}
**Conflicts/Clarifications:**
- {Note any conflicts with existing rules}
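The discovery and extraction steps above can be approximated with plain Python. This is a minimal sketch, assuming the docs/business-features layout described here; the module name and the exact rule-line format are assumptions, not guaranteed by the skill:

```python
import glob
import re

# Hypothetical module detected from PBI frontmatter or module detection (assumption).
module = "growth"

# Feature docs, including nested ones, per the Glob patterns above.
paths = glob.glob(
    f"docs/business-features/{module}/detailed-features/**/*.md", recursive=True
)

# Business rules are assumed to follow the BR-{MOD}-XXX: Description convention.
rule_pattern = re.compile(r"\b(BR-[A-Z]+-\d{3}):\s*(.+)")

rules = {}
for path in paths:
    with open(path, encoding="utf-8") as f:
        for line in f:
            match = rule_pattern.search(line)
            if match:
                rule_id, description = match.groups()
                rules.setdefault(rule_id, (description.strip(), path))

# Emit the "Related Business Rules" bullets for the user story.
for rule_id, (description, source) in sorted(rules.items()):
    print(f"- {rule_id}: {description}  <!-- source: {source} -->")
```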
Target 8-12K tokens total (validated decision: prefer completeness):
When refining domain-related PBIs, investigate related entities using feature docs.
Glob("docs/business-features/{module}/detailed-features/*.md")
Select file matching feature from PBI context.
From ## Domain Model section (Section 5):
- Entity : BaseClass
- Property: Type
- NavigationProperty: List<Related>
- Property: Type (computed: logic)

From the ## File Locations section, pull the source file paths; from the ## Key Expressions section, pull the reusable query expressions.
Include entity context:
## Entity Context
**Primary:** {Entity} - {description}
**Related:** {Entity1}, {Entity2}
**Key Queries:** {ExpressionName}
**Source:** {path}
This ensures implementation uses correct entities and patterns.
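Pulling those sections out of a feature doc can be done with a small helper. A minimal sketch, assuming Python and standard "## "-level markdown headings; the example file path is hypothetical:

```python
import re

def extract_section(markdown: str, heading: str) -> str:
    """Return the body of a '## {heading}' section, up to the next '## ' heading."""
    pattern = re.compile(
        rf"^##\s+{re.escape(heading)}\s*$\n(.*?)(?=^##\s|\Z)",
        re.MULTILINE | re.DOTALL,
    )
    match = pattern.search(markdown)
    return match.group(1).strip() if match else ""

# Hypothetical feature doc selected from the PBI context (assumption).
doc_path = "docs/business-features/growth/detailed-features/goal-management.md"
with open(doc_path, encoding="utf-8") as f:
    doc = f.read()

domain_model = extract_section(doc, "Domain Model")        # entities, properties, navigation
file_locations = extract_section(doc, "File Locations")    # where the entities live in code
key_expressions = extract_section(doc, "Key Expressions")  # reusable query expressions

print(domain_model or "Domain Model section not found")
```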
As a {user role/persona}
I want {goal/desire}
So that {benefit/value}
Scenario: {Descriptive title}
Given {precondition/context}
And {additional context}
When {action/trigger}
And {additional action}
Then {expected outcome}
And {additional verification}
For Project Domain:
- Cite evidence in file:line format.

Example test case:
TC-GRO-GOAL-001: Create goal with valid data
GIVEN employee has permission to create goals
WHEN employee submits goal form with all required fields
THEN goal is created and appears in goal list
Evidence: goal.service.ts:87, goal.component.ts:142
BR-{MOD}-{NNN}: {Rule name}
IF {condition}
THEN {action/result}
ELSE {alternative}
Evidence: {file}:{line}
Before finalizing user story:
Add to user story:
## Reference Documentation
- Feature Doc: `docs/business-features/{module}/detailed-features/{feature}.md`
- Related Entities: `docs/business-features/{module}/detailed-features/*.md`
- Existing Test Cases: See feature doc Section 15 (Test Specifications)
If conflicts found, note in "Unresolved Questions" section.
When the user runs $refine {idea-file}: write the output to team-artifacts/pbis/.
When the user runs $story {pbi-file}: write the output to team-artifacts/pbis/stories/ using this template:

---
id: US-{YYMMDD}-{NNN}
parent_pbi: '{PBI-ID}'
persona: '{Persona name}'
priority: P1 | P2 | P3
story_points: 1 | 2 | 3 | 5 | 8 | 13 | 21
complexity: Low | Medium | High | Very High
status: draft | ready | in_progress | done
module: '' # Project module (if applicable)
---
# User Story
**As a** {user role}
**I want** {goal}
**So that** {benefit}
## Acceptance Criteria
### Scenario 1: {Happy path title}
```gherkin
Given {context}
When {action}
Then {outcome}
```
### Scenario 2: {Edge case title}
```gherkin
Given {context}
When {action}
Then {outcome}
```
### Scenario 3: {Error case title}
```gherkin
Given {context}
When {invalid action}
Then {error handling}
```
## Related Business Rules
<!-- Auto-extracted from feature docs -->
- BR-{MOD}-XXX: {Description}
## Out of Scope
- {Explicitly excluded item}
## Notes
- {Implementation guidance}
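Where tooling is available, the frontmatter above can be sanity-checked before handing the story off. A minimal sketch, assuming Python; the field values shown are hypothetical, and the allowed sets simply mirror the template:

```python
from dataclasses import dataclass
from typing import List

ALLOWED_POINTS = {1, 2, 3, 5, 8, 13, 21}
ALLOWED_PRIORITY = {"P1", "P2", "P3"}
ALLOWED_COMPLEXITY = {"Low", "Medium", "High", "Very High"}
ALLOWED_STATUS = {"draft", "ready", "in_progress", "done"}

@dataclass
class StoryFrontmatter:
    id: str
    parent_pbi: str
    persona: str
    priority: str
    story_points: int
    complexity: str
    status: str
    module: str = ""

    def validate(self) -> List[str]:
        """Return a list of problems; an empty list means the frontmatter matches the template."""
        problems = []
        if self.story_points not in ALLOWED_POINTS:
            problems.append(f"story_points {self.story_points} is not one of {sorted(ALLOWED_POINTS)}")
        if self.priority not in ALLOWED_PRIORITY:
            problems.append(f"priority {self.priority!r} must be one of {sorted(ALLOWED_PRIORITY)}")
        if self.complexity not in ALLOWED_COMPLEXITY:
            problems.append(f"complexity {self.complexity!r} is not a recognized level")
        if self.status not in ALLOWED_STATUS:
            problems.append(f"status {self.status!r} is not a recognized state")
        return problems

# Hypothetical story, matching the US-{YYMMDD}-{NNN} id convention.
story = StoryFrontmatter(id="US-250101-001", parent_pbi="PBI-042", persona="HR Manager",
                         priority="P1", story_points=5, complexity="Medium", status="draft")
print(story.validate())  # [] when everything is consistent with the template
```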
File naming:
- {YYMMDD}-ba-story-{slug}.md
- {YYMMDD}-ba-requirements-{slug}.md
ID conventions (a small validation sketch follows):
- FR-{MOD}-{NNN} (e.g., FR-GROW-001)
- NFR-{MOD}-{NNN}
- BR-{MOD}-{NNN}
- AC-{NNN} per story/PBI
- TC-{FEATURE}-{NNN} (e.g., TC-GM-001)
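These formats and filenames are regular enough to check mechanically. A minimal sketch, assuming Python; the slug helper and the example titles are assumptions for illustration:

```python
import re
from datetime import date

# ID conventions from the list above.
ID_PATTERNS = {
    "FR": re.compile(r"^FR-[A-Z]+-\d{3}$"),
    "NFR": re.compile(r"^NFR-[A-Z]+-\d{3}$"),
    "BR": re.compile(r"^BR-[A-Z]+-\d{3}$"),
    "AC": re.compile(r"^AC-\d{3}$"),
    "TC": re.compile(r"^TC-[A-Z-]+-\d{3}$"),
}

def is_valid_id(artifact_id: str) -> bool:
    """Check an artifact ID against the convention for its prefix."""
    prefix = artifact_id.split("-", 1)[0]
    pattern = ID_PATTERNS.get(prefix)
    return bool(pattern and pattern.match(artifact_id))

def artifact_filename(kind: str, title: str) -> str:
    """Build a {YYMMDD}-ba-{kind}-{slug}.md filename for BA outputs."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{date.today():%y%m%d}-ba-{kind}-{slug}.md"

print(is_valid_id("FR-GROW-001"))                            # True
print(is_valid_id("TC-GRO-GOAL-001"))                        # True
print(artifact_filename("story", "Goal progress editing"))   # e.g. 250101-ba-story-goal-progress-editing.md
```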
Before completing BA artifacts, every refinement must end with a validation interview.
After completing a user story or PBI refinement, conduct a validation interview:
Use a direct user question tool with 3-5 questions:
| Category | Example Questions |
|---|---|
| Assumptions | "We assume X. Is this correct?" |
| Scope | "Should Y be explicitly excluded?" |
| Dependencies | "Does this depend on Z being ready?" |
| Edge Cases | "What happens when data is empty/null?" |
| Business Impact | "Will this affect existing reports?" |
Add to user story/PBI:
## Validation Summary
**Validated:** {date}
### Confirmed Decisions
- {decision}: {user choice}
### Concerns Raised
- {concern}: {resolution}
### Action Items
- [ ] {follow-up if any}
This step is NOT optional - always validate before marking refinement complete.
[IMPORTANT] Use task tracking to break ALL work into small tasks BEFORE starting, including tasks for each file read. This prevents context loss from long files. For simple tasks, AI MUST ATTENTION ask the user whether to skip this.
AI Mistake Prevention (failure modes to avoid on every task):
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace the full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips, not just the happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether the bug is in the caller (wrong data) or the callee (wrong handling). Fix at the responsible layer; never patch the symptom site.
- Assume existing values are intentional; ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging: resist the nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes: apply the diff test. Bug fix: every changed line must trace directly to the bug; don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding; don't pick silently. If a request has multiple interpretations, present each with an effort estimate and ask. Never assume the all-records, file-based, or more complex path.
Critical Thinking Mindset: apply critical thinking and sequential thinking. Every claim needs traced proof and confidence >80% to act. Anti-hallucination: never present a guess as fact; cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence. Certainty without evidence is the root of all hallucination.
Sequential Thinking Protocol: structured multi-step reasoning for complex/ambiguous work. Use when planning, reviewing, debugging, or refining ideas where one-shot reasoning is unsafe.
Trigger when: complex problem decomposition · adaptive plans needing revision · analysis with course correction · unclear/emerging scope · multi-step solutions · hypothesis-driven debugging · cross-cutting trade-off evaluation.
Format (explicit mode, visible thought trail):
- `Thought N/M: [aspect]` → one aspect per thought; state assumptions/uncertainty.
- `Thought N/M [REVISION of Thought K]: ...` → when prior reasoning is invalidated; state Original / Why revised / Impact.
- `Thought N/M [BRANCH A from Thought K]: ...` → explore an alternative; converge with a decision rationale.
- `Thought N/M [HYPOTHESIS]: ...` then `[VERIFICATION]: ...` → test before acting.
- `Thought N/N [FINAL]` → only when verified, all critical aspects addressed, confidence >80%.

Mandatory closers: Confidence % stated · Assumptions listed · Open questions surfaced · Next action concrete.
Stop conditions: confidence <80% on any critical decision → ask the user directly · ≥3 revisions on the same thought → re-frame the problem · branch count >3 → split into a sub-task.
Implicit mode: apply methodology internally without visible markers when adding markers would clutter the response (routine work where reasoning aids accuracy).
Deep-dive: see the `$sequential-thinking` skill (.claude/skills/sequential-thinking/SKILL.md) for worked examples (api-design, debug, architecture), advanced techniques (spiral refinement, hypothesis testing, convergence), and meta-strategies (uncertainty handling, revision cascades).
MUST ATTENTION apply critical thinking: every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present a guess as fact.
MUST ATTENTION apply sequential-thinking: multi-step Thought N/M, REVISION/BRANCH/HYPOTHESIS markers, confidence % closer; see the $sequential-thinking skill.
MUST ATTENTION apply AI mistake prevention: holistic-first debugging, fix at the responsible layer, surface ambiguity before coding, re-read files after compaction.
- file:line evidence for every claim (confidence >80% to act)

[TASK-PLANNING] Before acting, analyze task scope and systematically break it into small todo tasks and sub-tasks using task tracking.
Source: .claude/hooks/lib/prompt-injections.cjs + .claude/.ck.json
`$workflow-start <workflowId>` for standard workflows; sequence custom steps manually.

[CRITICAL] Hard-won project debugging/architecture rules. MUST ATTENTION apply them BEFORE forming a hypothesis or writing code.
Goal: prevent recurrence of known failure patterns across debugging, architecture, naming, AI orchestration, and environment.
Top Rules (apply always):
- `ExecuteInjectScopedAsync` for parallel async + repo/UoW; NEVER `ExecuteUowTask`
- Check `where python`/`where py`; NEVER assume `python`/`python3` resolves

Details:
- Parallel async + repository/UoW: use `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask`. `ExecuteUowTask` creates a new UoW but reuses the outer DI scope (same DbContext); parallel iterations sharing a non-thread-safe DbContext silently corrupt data. `ExecuteInjectScopedAsync` creates a new UoW plus a new DI scope (fresh repo per iteration).
- Event bus ownership: the service named in the message owns it (e.g., `AccountUserEntityEventBusMessage` = Accounts owns). Core services (Accounts, Communication) are leaders. Feature services (Growth, Talents) sending to core MUST use `{CoreServiceName}...RequestBusMessage`; never define your own event for core to consume.
- Policy naming: a name like `HrManagerOrHrOrPayroll` names set members, not what it guards; adding a role forces a rename = broken abstraction. Rule: names express DOES/GUARDS, not CONTAINS. Test: does adding/removing a member force a rename? YES = content-driven = bad; rename to purpose (e.g., `HrOperationsAccessPolicy`). Nuance: "Or" is fine in behavioral idioms (FirstOrDefault, SuccessOrThrow); it expresses what HAPPENS, not membership.
- Python alias: never assume `python`/`python3` resolves; verify the alias first (Python may not be in bash PATH under those names). Check: `where python` / `where py`. Prefer `py` (Windows Python Launcher) for one-liners, `node` if a JS alternative exists.

Where other lessons live:
- Test-specific lessons → docs/project-reference/integration-test-reference.md, Lessons Learned section.
- Production-code anti-patterns → docs/project-reference/backend-patterns-reference.md, Anti-Patterns section.
- Generic debugging/refactoring reminders → System Lessons in .claude/hooks/lib/prompt-injections.cjs.

Remember:
- `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask` (shared DbContext = silent data corruption)
- Feature-to-core messages use `{CoreServiceName}...RequestBusMessage`
- Never assume `python`/`python3` resolves; run `where python`/`where py` first, use the `py` launcher or `node`

Break work into small tasks (task tracking) before starting. Add a final task: "Analyze AI mistakes & lessons learned".
Extract lessons (ROOT CAUSE ONLY, not symptom fixes):
- Record genuine root-cause lessons with `$learn`.
- Ask: "Would `$code-review`/`$code-simplifier`/`$security`/`$lint` catch this?" → Yes → improve that review skill instead of adding it to `$learn`.
[TASK-PLANNING] [MANDATORY] BEFORE executing any workflow or skill step, create/update task tracking for all planned steps, then keep it synchronized as each step starts/completes.