# story
// [Project Management] Use when creating user stories from PBIs, slicing features, or breaking down requirements.
| Field | Value |
|---|---|
| name | story |
| version | 1.2.0 |
| description | [Project Management] Use when creating user stories from PBIs, slicing features, or breaking down requirements. |
[BLOCKING] Execute skill steps in declared order. NEVER skip, reorder, or merge steps without explicit user approval.
[BLOCKING] Before each step or sub-skill call, update task tracking: set `in_progress` when the step starts, set `completed` when the step ends.
[BLOCKING] Every completed/skipped step MUST include brief evidence or an explicit skip reason.
[BLOCKING] If Task tools are unavailable, create and maintain an equivalent step-by-step plan tracker with the same status transitions.
Goal: Break Product Backlog Items into implementable user stories using vertical slicing, SPIDR splitting, and INVEST criteria.
**MANDATORY:** plan a todo task to READ the following project-specific reference docs:
- `project-structure-reference.md` — project patterns and structure
Estimation Framework — Bottom-up first; SP is DERIVED; output a min-max range when likely ≥3d. Stack-agnostic. Baseline: 3-5yr dev, 6 productive hrs/day. AI estimates assume Claude Code + project context.
Method:
- Blast Radius pass (below) → drives code AND test cost
- Decompose phases → hours/phase → `bottom_up_hours = Σ phase_hours`; `likely_days = ceil(bottom_up_hours / 6) × productivity_factor`
- Sum Risk Margin (base + add-ons) → `max_days = likely_days × (1 + margin)`; `min_days = likely_days × 0.9`
- Output as a range when `likely_days` ≥3; a single point is allowed <3 (still record margin)
- `man_days_ai` = same range × AI speedup
- `story_points` DERIVED from `likely_days` via SP-Days — NEVER the driver. Disagreement >50% → trust bottom-up
- Productivity factor: 0.8 strong scaffolding+codegen+AI hooks · 1.0 mature default · 1.2 weak patterns · 1.5 greenfield
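As a sketch of the arithmetic above (a minimal illustration in Python; the function name and default margin are assumptions, not part of the framework):

```python
import math

def estimate(phase_hours, productivity_factor=1.0, margin=0.35):
    """Bottom-up estimate: phase hours -> likely/min/max days.

    margin is the total risk margin (base + add-ons) as a fraction."""
    bottom_up_hours = sum(phase_hours)
    likely_days = math.ceil(bottom_up_hours / 6) * productivity_factor
    return {
        "likely": likely_days,
        "min": likely_days * 0.9,
        "max": likely_days * (1 + margin),
        # Output a range when likely_days >= 3; a single point is allowed below
        "as_range": likely_days >= 3,
    }

# Three phases of 8, 6, and 10 hours at the mature-default factor:
print(estimate([8, 6, 10]))  # likely 4, min 3.6, max 5.4, as_range True
```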
Cost Driver Heuristic (apply BEFORE work-type row):
- UI dominates in CRUD/business apps → 1.5-3x backend (states, validation, responsive, a11y, polish)
- Backend dominates ONLY: multi-aggregate invariants, cross-service contracts, schema migrations, heavy query/perf, new event flows
Reuse-vs-Create axis (PRIMARY lever, per layer):
| UI tier | Cost |
|---|---|
| Reuse component on existing screen | 0.1-0.3d |
| Add control/column to existing screen | 0.3-0.8d |
| Compose components into NEW screen | 1-2d |
| NEW screen, custom layout/states/validation | 2-4d |
| NEW shared/common component (themed, tested) | 3-6d+ |
| Backend tier | Cost |
|---|---|
| Reuse query/handler from new place | 0.1-0.3d |
| Small update to existing handler/entity | 0.3-0.8d |
| NEW query on existing repo/model | 0.5-1d |
| NEW command/handler on existing aggregate (additive) | 1-2d |
| NEW aggregate/entity (repo, validation, events) | 2-4d |
| NEW cross-service contract OR schema migration | 2-4d each |
| Multi-aggregate invariant / heavy domain rule | 3-5d |

Rule: Sum tiers across UI + backend + tests, apply the productivity factor. Reuse short-circuits tiers — call it out.
Test-Scope drivers (compute `test_count` EXPLICITLY — a "+tests" hand-wave is the #1 failure):
| Driver | Count |
|---|---|
| Happy-path journeys | 1 per story / AC main flow |
| State-machine transitions | reachable transitions × allowed actors |
| Multi-entity state combos | state(A) × state(B) — REACHABLE only, not Cartesian |
| Authorization matrix | (owner, non-owner, elevated, unauth) × each mutation |
| Validation rules | 1 per required field / boundary / format / cross-field |
| UI states (per new screen/dialog) | happy, loading, empty, error, partial — present only |
| Negative paths / invariants | 1 per violatable business rule |
| Test tier (Trad, incl. setup + assert + flake) | Cost |
|---|---|
| 1-5 cases, fixtures reused | 0.3-0.5d |
| 6-12 cases, 1 new fixture | 0.5-1d |
| 13-25 cases, multi-entity setup | 1-2d |
| 26-50 cases OR new state-machine coverage | 2-3d |
| >50 cases OR full E2E journey | 3-5d |

Test multipliers: new fixture/seed harness +0.5d · cross-service/bus assertion +0.3d each · UI E2E ×1.5 · each new role +1-2 cases
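The driver table can be made concrete with an explicit count, as in this sketch (the parameter names are hypothetical; the point is that `test_count` is computed, never hand-waved):

```python
def test_count(happy=0, transitions=0, actors=1, auth_mutations=0, auth_roles=4,
               validation_rules=0, ui_states=0, negative_rules=0):
    """Explicit case count per the test-scope driver table (reachable combos only)."""
    return (happy
            + transitions * actors          # state-machine: reachable transitions x allowed actors
            + auth_mutations * auth_roles   # authorization matrix: roles x each mutation
            + validation_rules              # 1 per required field / boundary / format / cross-field
            + ui_states                     # only states actually present
            + negative_rules)               # 1 per violatable business rule

# 4 transitions x 2 actors + 3 validation + 2 UI states:
print(test_count(transitions=4, actors=2, validation_rules=3, ui_states=2))  # 13
```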
Blast Radius (mandatory pre-pass — affects code AND test):
- Files/components directly modified → count
- Of those, "complex" (>500 LOC, multi-handler, central, frequently modified) → count
- Downstream consumers (callers, event subscribers, cross-service) → list
- Shared/common code touched (multi-app blast) → yes/no
- Regression scope → areas needing re-test
Rule: Complex touch → add `risk_factors`. Each downstream consumer → +1-3 regression cases. Blast >5 areas OR >2 complex → re-evaluate SPLIT before estimating.

Risk Margin (drives max bound):
| likely_days | Base margin |
|---|---|
| <1d trivial | +10% |
| 1-2d small additive | +20% |
| 3-4d real feature | +35% |
| 5-7d large | +50% |
| 8-10d very large | +75% |
| >10d | +100% AND flag SHOULD SPLIT |

Risk-factor add-ons (additive — enumerate in `risk_factors`):

| Factor | +margin |
|---|---|
| `touches-complex-existing-feature` (>500 LOC, multi-handler, central) | +20% |
| `cross-service-contract` change | +25% |
| `schema-migration-on-populated-data` | +25% |
| `new-tech-or-unfamiliar-pattern` | +30% |
| `regression-fan-out` (≥3 downstream areas re-test) | +20% |
| `performance-or-latency-critical` | +20% |
| `concurrency-race-event-ordering` | +25% |
| `shared-common-code` (multi-consumer/multi-app) | +25% |
| `unclear-requirements-or-design` | +30% |

Collapse rule: total margin >100% → STOP, split (padding past 2x is dishonesty). Margin <15% on `likely_days` ≥5 → under-estimated, widen.

Work-Type Caps (hard ceilings on `likely_days`):
| Work type | Max SP | Max likely |
|---|---|---|
| Single field / config flag / style fix | 1 | 0.5d |
| Add property to existing model + bind to existing UI | 2 | 1d |
| Additive endpoint + minor UI control (button/menu/column), reuses fixtures | 3 | 2-3d |
| Additive endpoint + NEW UI surface OR additive multi-layer + new domain rule + 2+ test files | 5 | 3-5d |
| NEW model/aggregate OR migration OR cross-module contract OR heavy test (>1.5d) OR NEW UI + non-trivial backend | 8 | 5-7d |
| NEW UI surface + (NEW aggregate OR migration OR cross-service contract) | 13 | SHOULD split |
| Cross-service contract + migration combined | 13 | SHOULD split |
| Beyond | 21 | MUST split |

SP→Days (validation only): 1=0.5d/0.25d · 2=1d/0.35d · 3=2d/0.65d · 5=4d/1.0d · 8=6d/1.5d · 13=10d/2.0d (Trad/AI likely). AI speedup: SP 1→2x · 2-3→3x · 5-8→4x · 13+→5x. AI cost = `(code_gen × 1.3) + (test_gen × 1.3)` (30% review overhead).

MANDATORY frontmatter:
story_points: <n>
complexity: low | medium | high | critical
man_days_traditional: '<min>-<max>d'   # range when likely ≥3d; '<N>d' when <3d
man_days_ai: '<min>-<max>d'
risk_margin_pct: <n>                   # base + add-ons
risk_factors: [touches-complex-existing-feature, regression-fan-out]   # closed list from add-ons; [] if none
blast_radius:
  touched_areas: <n>
  complex_touched: <n>
  downstream_consumers: [list or count]
  shared_common_code: yes | no
estimate_scope_included: [code, integration-tests, frontend, i18n, docs]
estimate_scope_excluded: [unit-tests, e2e, perf, deployment, code-review-rounds]
estimate_reasoning: |
  5-7 lines covering: (a) UI tier — row applied (b) Backend tier — row applied (c) Test scope — case breakdown by driver, file count, fixtures, tier row (d) Cost driver — dominant tier + why (e) Blast radius — touched, complex, regression scope (f) Risk factors — list driving margin; why not larger/smaller

Example: "UI: compose Form/Table/Dialog → NEW screen (~1.5d). Backend: NEW command on existing aggregate, reuses validation+repo (~1d). Tests: 4 transitions × 2 actors + 3 validation + 2 UI states = 13 cases, 1 new fixture → tier 13-25 ~1.5d. Driver: UI composition + new states. Blast: 4 areas, 1 complex. Risk: base 35% + touches-complex +20% = 55% → max 3.9d → range 2.5-4d."

Sanity self-check:
- `likely_days` ≥3d and single-point? → reject, must be a range
- Margin <15% on `likely_days` ≥5d? → under-estimated, widen
- Margin >100%? → STOP, split instead of buffering
- Complex existing feature touched, no regression budget in (c)? → reject
- Blast >5 areas OR >2 complex, no split discussion? → reject
- Purely additive on existing model AND existing UI? → cap SP 3 unless tests >1.5d
- NEW UI surface (page/complex form/dashboard)? → SP 5+ even if the backend is one endpoint
- Backend cross-service / migration / multi-aggregate? → SP 8+ regardless of UI
- `bottom_up_hours / 6` vs SP-Days disagreement >50%? → trust bottom-up, downgrade SP
- Without tests, SP drops ≥1 bucket? → tests dominate; state explicitly
- Reasoning calls out UI vs backend vs blast vs risk factors? → if missing, add
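Several of these checks are mechanical and can be scripted; this sketch covers only the first three rules (the function name and return shape are assumptions):

```python
def sanity_check(likely_days, is_range, margin_pct):
    """Return the self-check violations for an estimate (empty list = pass)."""
    issues = []
    if likely_days >= 3 and not is_range:
        issues.append("likely_days >= 3d must be a range, not a single point")
    if likely_days >= 5 and margin_pct < 15:
        issues.append("margin < 15% on likely_days >= 5d: under-estimated, widen")
    if margin_pct > 100:
        issues.append("margin > 100%: STOP, split instead of buffering")
    return issues

print(sanity_check(likely_days=5, is_range=False, margin_pct=10))  # two violations
```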
- `docs/project-reference/domain-entities-reference.md` — domain entity catalog, relationships, cross-service sync (read when the task involves business entities/models; read directly when relevant, do not rely on hook-injected conversation text)
- `docs/specs/` — test specifications by module (read existing TCs for related features; include test story/acceptance criteria for new stories)

If a file is not found, search for: project documentation, coding standards, architecture docs.
Workflow:
Key Rules:
When this task involves frontend or UI changes, read:
- Component patterns: `docs/project-reference/frontend-patterns-reference.md`
- Styling/BEM guide: `docs/project-reference/scss-styling-guide.md`
- Design system tokens: `docs/project-reference/design-system/README.md`
Stories with SP >8 MUST be split; SP >5 SHOULD be split (see estimation-framework.md)
All stories MUST include `story_points`, `complexity`, `man_days_traditional`, `man_days_ai` fields
Auto-detected: If no existing codebase is found (no code directories like `src/`, `app/`, `lib/`, `server/`, `packages/`, etc., no manifest files like `package.json`/`*.sln`/`go.mod`, no populated `project-config.json`), this skill switches to greenfield mode automatically. Planning artifacts (docs/, plans/, .claude/) don't count — the project must have actual code directories with content.
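A minimal sketch of that detection heuristic (the directory and manifest lists mirror the examples above; `is_greenfield` is a hypothetical helper, not an existing API):

```python
from pathlib import Path

CODE_DIRS = ("src", "app", "lib", "server", "packages")
MANIFEST_GLOBS = ("package.json", "*.sln", "go.mod")

def is_greenfield(root: Path) -> bool:
    """True when the project has no populated code directories and no manifests.

    Planning artifacts (docs/, plans/, .claude/) deliberately do not count."""
    has_code = any(
        (root / d).is_dir() and any((root / d).iterdir()) for d in CODE_DIRS
    )
    has_manifest = any(any(root.glob(g)) for g in MANIFEST_GLOBS)
    return not (has_code or has_manifest)
```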
When greenfield is detected:
Generate foundation PBIs instead of feature stories: infrastructure setup, project scaffold, CI/CD pipeline, first feature vertical slice
Add dependency ordering: infrastructure stories BEFORE feature stories
Skip "MUST READ `project-structure-reference.md`" (it won't exist yet)
Include setup stories: dev environment, build tooling, deployment pipeline, monitoring
Priority order: infra → scaffold → first feature → remaining features
[CRITICAL] Architecture Scaffolding Story: FIRST story = "Architecture Scaffolding" — all OOP/SOLID base abstract classes, generic interfaces, infrastructure abstractions per chosen tech stack. AI self-investigates what base classes the project needs. All feature stories depend on this.
Scaffolding acceptance criteria: all base classes compile/type-check, DI/IoC registrations resolve, smoke test passes
UI System Foundation Story: If the project has a frontend, generate a "UI System Foundation" story (Sprint 0) with these sub-stories:
| Sub-Story | SP | Priority | Depends On |
|---|---|---|---|
| "Set up design token system" | 2-3 | Must Have | Architecture Scaffolding |
| "Create base layout and responsive grid" | 2-3 | Must Have | Design tokens |
| "Create core UI components (loading, error, empty, toast, button, input)" | 3-5 | Must Have | Design tokens + layout |
Dependency rule: All UI feature stories MUST depend on "UI System Foundation" stories.
Be skeptical. Apply critical thinking and sequential thinking. Every claim needs traced proof and a confidence percentage (should be above 80%).
Break Product Backlog Items into implementable user stories using vertical slicing and SPIDR patterns.
If running within a workflow (big-feature, greenfield-init, etc.):
- Find `plans/*/plan.md` sorted by modification time, or check TaskList for plan context
- Read `plan.md` → understand project scope, architecture decisions, domain model, implementation plan
- Read `{plan-dir}/research/*.md` and `{plan-dir}/phase-*.md` for domain model, tech stack, architecture
- Read `docs/project-reference/domain-entities-reference.md` (if it exists) → understand existing domain entities for accurate story scoping
- Write stories to `team-artifacts/pbis/stories/` for downstream `/tdd-spec` or `/design-spec`
- Filename: `team-artifacts/pbis/stories/{YYMMDD}-us-{pbi-slug}.md`

When slicing domain-related PBIs, automatically load business context.
From PBI frontmatter:
- `module` field → matches `docs/business-features/` directory names → `Glob("docs/business-features/{module}/detailed-features/*.md")`
- `related_features` list

Read `docs/project-config.json` `modules[]` and `docs/business-features/` to detect domain vocabulary per module. Use entity names from feature docs — avoid ambiguous synonyms.
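The context-loading step can be sketched as follows (paths follow the project-convention layout named above; the helper name is hypothetical):

```python
from pathlib import Path

def load_domain_context(module: str, root: str = "docs/business-features"):
    """Collect detailed-feature docs for a PBI's module, sorted for stable output."""
    return sorted(Path(root).glob(f"{module}/detailed-features/*.md"))
```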
## Domain Context
**Module:** {detected module}
**Feature:** {related feature}
**Entities:** {Entity1}, {Entity2}
**Business Rules:** BR-{MOD}-XXX (from feature docs)
| Criterion | Definition | Validation Question |
|---|---|---|
| Independent | No dependencies on other stories | Can this be developed in any order? |
| Negotiable | Details can change | Is the "how" open for discussion? |
| Valuable | Delivers user value | Does user get observable benefit? |
| Estimable | Can estimate story points | Can team size this? (Fibonacci 1-21) |
| Small | Completable in sprint | SP ≤8? (prefer ≤5) |
| Testable | Clear acceptance criteria | Can we write pass/fail tests? |
When to apply: Story SP >8 MUST split. SP >5 SHOULD split. SP 13 = SHOULD split into 2-3 stories. SP 21 = MUST split (epic-level).
| Pattern | Question | Split Strategy |
|---|---|---|
| Spike | Unknown complexity? | Create research spike first, then stories |
| Paths | Multiple workflow branches? | One story per path/choice |
| Interfaces | Multiple UIs or APIs? | One story per interface |
| Data | Multiple data formats/types? | One story per data variation |
| Rules | Multiple business rules? | One story per rule variation |
Paths: "User can pay by card OR PayPal" → Story A: Card payment, Story B: PayPal payment
Data: "Import CSV, Excel, JSON" → Story A: CSV import, Story B: Excel import, Story C: JSON import
Rules: "Different approval flows by amount" → Story A: <$1000 auto-approve, Story B: >$1000 manager approval
- SP 1-5: ✅ Good size
- SP 6-8: ⚠️ Consider splitting (apply SPIDR)
- SP 13: ❌ SHOULD split into 2-3 stories
- SP 21: ❌ MUST split — epic-level, not sprint-ready
Scenario: User successfully {completes action}
Given {user has required permissions/state}
And {required data exists}
When user {performs valid action}
Then {primary expected outcome}
And {secondary verification if needed}
Scenario: System handles {boundary condition}
Given {edge state: empty list, max items, zero value}
When user {attempts action at boundary}
Then {appropriate handling: pagination, warning, default}
Scenario: System prevents {invalid action}
Given {precondition}
When user {provides invalid input OR unauthorized action}
Then error message "{specific error message}"
And {system remains in valid state}
And {no partial changes saved}
Scenario: Unauthorized user cannot {perform action}
Given user has role {unauthorized role}
When user attempts to {action}
Then system rejects with "Forbidden" or "Unauthorized"
And no data is modified
- Performance: response time under load
- Concurrency: simultaneous user actions
- Integration: external service unavailable
---
id: US-{YYMMDD}-{NNN}
parent_pbi: '{PBI-ID}'
title: '{Brief story title}'
persona: '{User persona}'
priority: P1 | P2 | P3
story_points: 1 | 2 | 3 | 5 | 8 | 13
complexity: Low | Medium | High | Very High
man_days_traditional: '{ Xd (Yd code + Zd test) — from SP table }'
man_days_ai: '{ Xd (Yd code + Zd test) — from SP table with AI }'
sprint: 0 | 1 | 2 | ...
status: draft | ready | in_progress | done
module: '{ServiceA | ServiceB | ServiceC | ServiceD}'
---
# User Stories for {PBI Title}
## Story 1: {Title}
**As a** {user role}
**I want** {goal}
**So that** {benefit}
### Acceptance Criteria
#### Scenario 1: {Happy path title}
```gherkin
Given {context}
When {action}
Then {outcome}
```
#### Scenario 2: {Edge case title}
Given {edge state}
When {action}
Then {handling}
#### Scenario 3: {Error case title}
Given {context}
When {invalid action}
Then error "{message}"
{Repeat structure for remaining stories...}
| Story | Depends On | Type | Reason |
|---|---|---|---|
| US-{NNN} | - | independent | First slice, no dependencies |
| US-{NNN} | US-{NNN} | must-after | Needs entity/API from prior story |
| US-{NNN} | US-{NNN} | can-parallel | Independent feature slice |
| US-{NNN} | US-{NNN} | blocked-by | Requires external service/infra |
- Module: {module}
- Related Feature: {feature doc path}
- Entities: {Entity1}, {Entity2}
- Business Rules: {BR-XXX references}
{ASCII wireframe showing this story's UI slice — see UI wireframe protocol}
Classify per Component Hierarchy in the UI wireframe protocol — search existing libs before proposing new components.
| State | Behavior |
|---|---|
| Default | {what user sees initially} |
| Loading | {spinner/skeleton} |
| Empty | {empty state message} |
| Error | {error handling} |
If backend-only:
## UI Wireframe
N/A — Backend-only change. No UI affected.
Validated: {date}
---
## Sprint 0 / Foundation Stories (Production Readiness)
When the PBI includes a "Production Readiness Concerns" table with "Required" items, automatically generate Sprint 0 / foundation stories for each concern:
| PBI Concern | Story Title | Story Points | Priority |
|-------------|-------------|-------------|----------|
| Code linting/analyzers = Required | "Set up code linting and formatting" | 1-2 SP | Must Have |
| Error handling setup = Required | "Set up error handling foundation" | 2-3 SP | Must Have |
| Loading indicators = Required | "Set up loading indicator infrastructure" | 1-2 SP | Must Have |
| Docker integration = Required | "Set up Docker development environment" | 2-3 SP | Must Have |
| CI/CD quality gates = Required | "Set up CI/CD quality gates" | 2-3 SP | Must Have |
| Seed data = Required | "Set up seed data / data seeder" | 2-3 SP | Must Have |
| Data migration = Required | "Create data migration for schema changes" | 1-3 SP | Must Have |
### Rules
- Foundation stories MUST be completed before feature stories begin
- Mark as `sprint: 0` or `sprint: foundation` in story metadata
- Each foundation story references the specific protocol section for implementation guidance
- If PBI concern = "Existing", skip story generation (already set up)
- If PBI concern = "No", skip story generation (explicitly opted out)
---
## Anti-Patterns to Avoid
| Anti-Pattern | Problem | Correct Approach |
| ------------------ | ------------------------------------------------- | --------------------------------------------- |
| Horizontal slicing | "Backend story" + "Frontend story" = delays value | Vertical slice: thin end-to-end functionality |
| Single scenario | Missing edge/error cases | Minimum 3 scenarios: happy, edge, error |
| Vague criteria | "Fast", "user-friendly" untestable | Quantify: "< 200ms", "≤ 3 clicks" |
| Solution-speak | "Use Redis cache" constrains team | Outcome: "Results return within 200ms" |
| Effort >8 | Won't fit sprint, hard to estimate | Apply SPIDR, split until ≤8 |
| No error scenario | Missing negative test coverage | Always include invalid input handling |
| Generic persona | "As a user" too vague | Specific: "As a hiring manager" |
---
## Key Rules
- **Every story set MUST include a Story Dependencies table** — with types: `must-after`, `can-parallel`, `blocked-by`, `independent`. This enables `/prioritize` and `/plan` to respect implementation ordering.
- **SPIDR splits MUST include dependency chains** — when splitting a story, declare which split stories depend on others.
- **No orphan stories** โ Every story must appear in the dependency table, even if independent.
## Quality Checklist
Before completing user stories:
- [ ] Each story follows "As a... I want... So that..." format
- [ ] SPIDR splitting applied (effort ≤8, prefer ≤5)
- [ ] At least 3 scenarios per story: happy, edge, error
- [ ] All scenarios use GIVEN/WHEN/THEN format
- [ ] Effort estimated in Fibonacci (1, 2, 3, 5, 8)
- [ ] Stories independent (can develop in any order)
- [ ] Out of scope explicitly listed
- [ ] Story Dependencies table included with all stories listed
- [ ] Dependency types correct (must-after, can-parallel, blocked-by, independent)
- [ ] Parent PBI linked in frontmatter
- [ ] Domain vocabulary used correctly (if the project defines domain docs)
- [ ] Authorization scenario included per story (unauthorized access rejection)
- [ ] Seed data story included if PBI has seed data requirements
- [ ] Data migration story included if PBI has schema changes
- [ ] Validation interview completed
---
## Validation Step (MANDATORY)
After creating user stories, validate with user.
### Question Categories
| Category | Example Question |
| ---------------- | --------------------------------------------------- |
| **Slicing** | "Are the story slices independent enough?" |
| **Size** | "Any story >8 effort that needs further splitting?" |
| **Scenarios** | "Any acceptance criteria missing for edge cases?" |
| **Dependencies** | "Are there hidden dependencies between stories?" |
| **Scope** | "Should anything be explicitly excluded?" |
### Process
1. Generate 2-4 questions focused on slicing quality, scenarios, and dependencies
2. Use `AskUserQuestion` tool to interview
3. Document in story artifact under `## Validation Summary`
4. Update stories based on answers (split if needed)
**This step is NOT optional.**
---
## Related
| Type | Reference |
| -------------- | ------------------------------------------- |
| **Role Skill** | `business-analyst` |
| **Command** | `/story` |
| **Input** | `/refine` output (PBI) |
| **Next Steps** | `/tdd-spec`, `/design-spec`, `/prioritize` |
---
## MANDATORY: Systematic Task Breakdown for Stories
**MANDATORY:** break down ALL stories into small, systematic todo tasks using `TaskCreate` BEFORE starting implementation. Each story MUST have its own set of tasks that cover:
1. **Read & understand story** — Load story artifact, acceptance criteria, domain context
2. **Identify vertical slice layers** — Backend entity/command/query, frontend component/store/API, integration points
3. **Create implementation subtasks per layer** — One task per file or logical unit (entity, command handler, DTO, component, service, test)
4. **Include spec tasks** — Each story MUST have corresponding test specifications (unit, integration, or E2E as appropriate)
5. **Include validation task** — Verify story against acceptance criteria GIVEN/WHEN/THEN after implementation
6. **Include review task** — Final quality check per story
### Task Naming Convention
```
[Story US-{ID}] {Layer}: {Description}
```
Example for a "Create Goal" story:
```
[Story US-001] Entity: Create Goal entity with validation rules
[Story US-001] Command: CreateGoalCommand + Handler
[Story US-001] DTO: GoalDto with mapping
[Story US-001] API: POST /api/goals endpoint
[Story US-001] Component: GoalCreateFormComponent
[Story US-001] Store: GoalVmStore with create action
[Story US-001] Test: Integration test for CreateGoalCommand
[Story US-001] Test: E2E test for goal creation flow
[Story US-001] Review: Verify against AC scenarios
```
**Why:** Without systematic task breakdown, stories become monolithic — leading to missed edge cases, incomplete specs, and context loss during implementation.
---
## Next Steps
**MANDATORY — NO EXCEPTIONS:** after completing this skill, you MUST use `AskUserQuestion` to present these options. Do NOT skip because the task seems "simple" or "obvious" — the user decides:
- **"/tdd-spec (Recommended)"** — Generate test specifications from stories
- **"/pbi-mockup"** — Generate HTML mockup report from PBI and stories
- **"/plan-validate"** — If stories need validation against the plan
- **"Skip, continue manually"** — user decides
> **[IMPORTANT]** Use `TaskCreate` to break ALL work into small tasks BEFORE starting — including tasks for each file read. This prevents context loss from long files. For simple tasks, the AI MUST ask the user whether to skip.
> **External Memory:** For complex or lengthy work (research, analysis, scan, review), write intermediate findings and final results to a report file in `plans/reports/` — prevents context loss and serves as the deliverable.
> **Evidence Gate:** MANDATORY — every claim, finding, and recommendation requires `file:line` proof or traced evidence with a confidence percentage (>80% to act, <80% must verify first).
<!-- SYNC:ui-system-context -->
> **UI System Context** โ For ANY task touching `.ts`, `.html`, `.scss`, or `.css` files:
>
> **MUST READ before implementing:**
>
> 1. `docs/project-reference/frontend-patterns-reference.md` — component base classes, stores, forms
> 2. `docs/project-reference/scss-styling-guide.md` — BEM methodology, SCSS variables, mixins, responsive
> 3. `docs/project-reference/design-system/README.md` — design tokens, component inventory, icons
>
> Reference `docs/project-config.json` for project-specific paths.
<!-- /SYNC:ui-system-context -->
<!-- SYNC:ui-wireframe -->
> **UI Wireframe** — For UI artifacts: include ASCII wireframe (box-drawing chars), component tree with EXISTING/NEW classification and tier (common | domain-shared | page/app), interaction flow (user action → system response → UI update), states table (default/loading/empty/error), and responsive breakpoint behavior. Process Figma URLs or screenshots BEFORE wireframing. Search existing component libs before proposing new components. Backend-only changes: `N/A — Backend-only change. No UI affected.`
<!-- /SYNC:ui-wireframe -->
<!-- SYNC:critical-thinking-mindset -->
> **Critical Thinking Mindset** — Apply critical thinking and sequential thinking. Every claim needs traced proof; confidence >80% to act.
> **Anti-hallucination:** Never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence — certainty without evidence is the root of all hallucination.
<!-- /SYNC:critical-thinking-mindset -->
<!-- SYNC:sequential-thinking-protocol -->
> **Sequential Thinking Protocol** — Structured multi-step reasoning for complex/ambiguous work. Use when planning, reviewing, debugging, or refining ideas where one-shot reasoning is unsafe.
>
> **Trigger when:** complex problem decomposition · adaptive plans needing revision · analysis with course correction · unclear/emerging scope · multi-step solutions · hypothesis-driven debugging · cross-cutting trade-off evaluation.
>
> **Format (explicit mode — visible thought trail):**
>
> 1. `Thought N/M: [aspect]` — one aspect per thought; state assumptions/uncertainty
> 2. `Thought N/M [REVISION of Thought K]: ...` — when prior reasoning is invalidated; state Original / Why revised / Impact
> 3. `Thought N/M [BRANCH A from Thought K]: ...` — explore an alternative; converge with decision rationale
> 4. `Thought N/M [HYPOTHESIS]: ...` then `[VERIFICATION]: ...` — test before acting
> 5. `Thought N/N [FINAL]` — only when verified, all critical aspects addressed, confidence >80%
>
> **Mandatory closers:** Confidence % stated · Assumptions listed · Open questions surfaced · Next action concrete.
>
> **Stop conditions:** confidence <80% on any critical decision → escalate via AskUserQuestion · ≥3 revisions on the same thought → re-frame the problem · branch count >3 → split into a sub-task.
>
> **Implicit mode:** apply methodology internally without visible markers when adding markers would clutter the response (routine work where reasoning aids accuracy).
>
> **Deep-dive:** see `/sequential-thinking` skill (`.claude/skills/sequential-thinking/SKILL.md`) for worked examples (api-design, debug, architecture), advanced techniques (spiral refinement, hypothesis testing, convergence), and meta-strategies (uncertainty handling, revision cascades).
<!-- /SYNC:sequential-thinking-protocol -->
<!-- SYNC:ai-mistake-prevention -->
**AI Mistake Prevention** — Failure modes to avoid on every task:
**Check downstream references before deleting.** Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
**Verify AI-generated content against actual code.** AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
**Trace full dependency chain after edits.** Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
**Trace ALL code paths when verifying correctness.** Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips โ not just happy path.
**When debugging, ask "whose responsibility?" before fixing.** Trace whether bug is in caller (wrong data) or callee (wrong handling). Fix at responsible layer โ never patch symptom site.
**Assume existing values are intentional โ ask WHY before changing.** Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
**Verify ALL affected outputs, not just the first.** Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
**Holistic-first debugging โ resist nearest-attention trap.** When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
**Surgical changes โ apply the diff test.** Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
**Surface ambiguity before coding โ don't pick silently.** If request has multiple interpretations, present each with effort estimate and ask. Never assume all-records, file-based, or more complex path.
<!-- /SYNC:ai-mistake-prevention -->
<!-- SYNC:estimation-framework:reminder -->
- **MANDATORY** estimation: bottom-up phase hours drive `man_days_traditional` (`Σh/6 × productivity_factor`); SP is DERIVED. UI cost usually dominates — bump SP one bucket for a NEW UI surface (page/complex form/dashboard). Frontmatter MUST include `story_points`, `complexity`, `man_days_traditional`, `man_days_ai`, `estimate_scope_included`, `estimate_scope_excluded`, `estimate_reasoning` (UI vs backend cost driver). Cap SP 3 for additive-on-existing-model + existing-UI unless test scope >1.5d. SP 13 SHOULD split, SP 21 MUST split.
<!-- /SYNC:estimation-framework:reminder -->
<!-- SYNC:ui-system-context:reminder -->
**IMPORTANT:** read frontend-patterns-reference, scss-styling-guide, and design-system/README before any UI change.
<!-- /SYNC:ui-system-context:reminder -->
<!-- SYNC:critical-thinking-mindset:reminder -->
**MUST** apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present a guess as fact.
<!-- /SYNC:critical-thinking-mindset:reminder -->
<!-- SYNC:sequential-thinking-protocol:reminder -->
**MUST** apply sequential thinking — multi-step Thought N/M, REVISION/BRANCH/HYPOTHESIS markers, confidence % closer; see the `/sequential-thinking` skill.
<!-- /SYNC:sequential-thinking-protocol:reminder -->
<!-- SYNC:ai-mistake-prevention:reminder -->
**MUST** apply AI mistake prevention — holistic-first debugging, fix at the responsible layer, surface ambiguity before coding, re-read files after compaction.
<!-- /SYNC:ai-mistake-prevention:reminder -->
<!-- PROMPT-ENHANCE:STEP-TASK-CLOSING:START -->
## Prompt-Enhance Closing Anchors
**IMPORTANT:** follow the declared step order for this skill; NEVER skip, reorder, or merge steps without explicit user approval.
**IMPORTANT:** for every step/sub-skill call, set `in_progress` before execution and `completed` after execution.
**IMPORTANT:** every skipped step MUST include an explicit reason; every completed step MUST include concise evidence.
**IMPORTANT:** if Task tools are unavailable, maintain an equivalent step-by-step plan tracker with synchronized statuses.
<!-- PROMPT-ENHANCE:STEP-TASK-CLOSING:END -->
## Closing Reminders
**MANDATORY:** break work into small todo tasks using `TaskCreate` BEFORE starting.
**MANDATORY:** validate decisions with the user via `AskUserQuestion` — never auto-decide.
**MANDATORY:** add a final review todo task to verify work quality.
**MANDATORY:** READ the following files before starting:
[TASK-PLANNING] Before acting, analyze the task's scope and systematically break it into small todo tasks and sub-tasks using `TaskCreate`.