brainstorm
// [Content] Use when you need to brainstorm as a PO/BA — structured ideation for problem-solving, new product creation, or feature enhancement.
| Field | Value |
|---|---|
| name | brainstorm |
| description | [Content] Use when you need to brainstorm as a PO/BA — structured ideation for problem-solving, new product creation, or feature enhancement. |
| disable-model-invocation | false |
Codex compatibility note:
- Invoke repository skills with `$skill-name` in Codex; this mirrored copy rewrites legacy Claude `/skill-name` references.
- Prefer the `plan-hard` skill for planning guidance in this Codex mirror.
- Task tracker mandate: BEFORE executing any workflow or skill step, create/update task tracking for all steps and keep it synchronized as progress changes.
- User-question prompts mean to ask the user directly in Codex.
- Ignore Claude-specific mode-switch instructions when they appear.
- Strict execution contract: when a user explicitly invokes a skill, execute that skill protocol as written.
- Subagent authorization: when a skill is user-invoked or AI-detected and its protocol requires subagents, that skill activation authorizes use of the required `spawn_agent` subagent(s) for that task.
- Do not skip, reorder, or merge protocol steps unless the user explicitly approves the deviation first.
- For workflow skills, execute each listed child-skill step explicitly and report step-by-step evidence.
- If a required step/tool cannot run in this environment, stop and ask the user before adapting.
Codex does not receive Claude hook-based doc injection. When coding, planning, debugging, testing, or reviewing, open project docs explicitly using this routing.
Always read:
- docs/project-config.json (project-specific paths, commands, modules, and workflow/test settings)
- docs/project-reference/docs-index-reference.md (routes to the full docs/project-reference/* catalog)
- docs/project-reference/lessons.md (always-on guardrails and anti-patterns)

Situation-based docs:
- backend-patterns-reference.md, domain-entities-reference.md, project-structure-reference.md
- frontend-patterns-reference.md, scss-styling-guide.md, design-system/README.md
- feature-docs-reference.md
- integration-test-reference.md
- e2e-test-reference.md
- code-review-rules.md, plus the domain docs above based on changed files

Do not read all docs blindly. Start from docs-index-reference.md, then open only the files relevant to the task.
[BLOCKING] Execute skill steps in declared order. NEVER skip, reorder, or merge steps without explicit user approval.
[BLOCKING] Before each step or sub-skill call, update task tracking: set `in_progress` when the step starts, set `completed` when it ends.
[BLOCKING] Every completed/skipped step MUST include brief evidence or an explicit skip reason.
[BLOCKING] If Task tools are unavailable, create and maintain an equivalent step-by-step plan tracker with the same status transitions.
Goal: Facilitate a structured PO/BA brainstorming session using the Double Diamond process — diverge to discover problems and opportunities, then converge to validate and prioritize ideas before committing to implementation.
Three Scenarios:
| Scenario | Entry Trigger | Primary Methods |
|---|---|---|
| Problem-Solving | "Something is broken / users complain / metric is bad" | 5 Whys → Fishbone → HMW → SCAMPER → Hypothesis RAT |
| New Product | "Greenfield idea / new market / no codebase yet" | JTBD → Lean Canvas → Crazy 8s → Opportunity Scoring → Lean Hypothesis |
| Feature Enhancement | "Existing product / add capability / improve flow" | Opportunity Solution Tree → SCAMPER → Impact Mapping → RICE → Value Hypothesis |
Double Diamond (master meta-framework):
DIAMOND 1: Right Problem DIAMOND 2: Right Solution
──────────────────────────── ──────────────────────────
Discover ──► Define Develop ──► Deliver
(diverge) (converge) (diverge) (converge)
Golden Rule: NEVER evaluate ideas while generating them. Diverge and converge are separate modes. Mixing them kills creative output.
Be skeptical. Apply critical thinking. Every idea needs a testable hypothesis. Confidence >80% required before recommending.
$ARGUMENTS
Use a direct user question to detect scenario, role, and constraints before any technique.
Ask:
"What scenario are we in?"
"What is the primary role in this session?"
"How much is already known?"
- docs/business-features/ to understand the domain
- docs/project-reference/domain-entities-reference.md if entity context is needed
- WebSearch for market/competitor context when scenario = New Product or Enhancement

Goal: Fully understand the problem space before jumping to solutions. The #1 failure in brainstorming: solving the wrong problem.
Time-box: 20–45 minutes of session time.
Formulate a crisp problem statement BEFORE any ideation:
[User/Persona] needs [need/job-to-be-done]
because [insight/root cause/context],
but [current barrier/friction/failure].
Example:
HR Managers need to quickly identify top performers for promotion
because quarterly reviews create promotion backlogs,
but the current system shows raw scores with no ranking or comparison.
Use a direct user question to validate:
Apply one of:
5 Whys:
Problem: [stated problem]
Why 1: [first cause]
Why 2: [cause of cause 1]
Why 3: [cause of cause 2]
Why 4: [cause of cause 3]
Why 5: [root cause] ← Fix HERE, not at Why 1
Fishbone (Ishikawa) — for systemic problems: Spine = problem statement. Bones = 6 cause categories:
Replace user stories with job stories to expose real motivation:
User Story (what): As a manager, I want to see employee scores, so that I can make decisions.
Job Story (why + context): When I'm preparing for quarterly reviews with limited time, I want to instantly see who deserves promotion without reading every profile, so I can make fair, defensible decisions before the deadline.
Job Story Formula:
When [triggering situation + context],
I want to [motivation / job to be done],
so I can [outcome / expected result].
Generate 3–5 job stories covering main user segments. Each story = one opportunity.
Transform problem statements into ideation-ready questions:
Formula: "How might we [verb] [object] so that [desired outcome]?"
From the POV statement:
Rules:
Output of Phase 1:
Goal: Narrow problem space to the highest-opportunity focus areas before ideating solutions.
Teresa Torres' framework. Maps desired outcome → opportunities → solutions → experiments.
Desired Outcome (business metric)
├── Opportunity 1 (unmet user need / pain / want)
│ ├── Solution A
│ └── Solution B
├── Opportunity 2
│ ├── Solution C
│ └── Solution D
└── Opportunity 3 (deprioritized)
Step 1: State ONE desired outcome (lagging metric the team owns — e.g., "Increase manager satisfaction with review process from 3.2 to 4.0 CSAT").
Step 2: Map ALL known opportunities (pains, needs, wants) from research/interviews.
Step 3: For each top opportunity, generate solution directions (not detailed solutions yet).
Step 4: Pick 1–2 opportunities to develop further in Phase 3.
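To make the tree concrete during a session, here is a minimal Python sketch that holds an OST as data; the outcome, opportunities, and solutions are hypothetical examples, not prescribed structure:

```python
from dataclasses import dataclass, field

@dataclass
class Opportunity:
    need: str                       # unmet user need / pain / want
    solutions: list[str] = field(default_factory=list)
    prioritized: bool = True

@dataclass
class OpportunitySolutionTree:
    desired_outcome: str            # ONE lagging metric the team owns
    opportunities: list[Opportunity] = field(default_factory=list)

    def focus(self) -> list[Opportunity]:
        """Opportunities carried forward into Phase 3 ideation."""
        return [o for o in self.opportunities if o.prioritized]

tree = OpportunitySolutionTree(
    desired_outcome="Manager CSAT with review process: 3.2 -> 4.0",
    opportunities=[
        Opportunity("Can't compare performers quickly",
                    ["AI-assisted ranking", "Side-by-side view"]),
        Opportunity("Review prep takes >2h", ["Auto-aggregated profile"]),
        Opportunity("No audit trail for decisions", prioritized=False),
    ],
)
print([o.need for o in tree.focus()])  # the 1-2 opportunities to develop
```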
One-page business model for greenfield ideas (Ash Maurya):
| Block | Question |
|---|---|
| Problem | Top 3 problems being solved |
| Customer Segments | Who has this problem? Early adopters? |
| Unique Value Prop | Single compelling message |
| Solution | Top 3 features (not full spec) |
| Channels | How to reach customers |
| Revenue Streams | How to make money |
| Cost Structure | Fixed + variable costs |
| Key Metrics | One number that measures success |
| Unfair Advantage | What can't easily be copied? |
Fill one canvas per major target segment. Keep it to 20 min — speed is the point.
Eliminate-Reduce-Raise-Create grid (Chan Kim & Mauborgne):
| Eliminate | Reduce |
|---|---|
| Features users never use | Features that are over-engineered |

| Raise | Create |
|---|---|
| Features users want more of | Features no competitor offers |
Rule: Every innovation should have at least ONE item in Create AND one in Eliminate. A product with only Raise entries is incremental — not differentiated.
Connects customer profile to product value:
Customer Profile: Jobs-to-be-done, Pains, Gains.
Value Map: Products & Services, Pain Relievers, Gain Creators.
Fit = where Pain Relievers match Pains + Gain Creators match Gains.
Output of Phase 2:
Goal: Generate maximum quantity of solution ideas without judgment. Quality comes in Phase 4.
Critical rule: NO evaluation in this phase. Every idea is valid. "Yes, and..." not "Yes, but..."
Apply each lens to the problem/existing product to generate solution directions:
| Letter | Prompt | Example for HR review feature |
|---|---|---|
| Substitute | What can be replaced? | Replace manual scoring with AI-assisted ranking |
| Combine | What can be merged? | Combine performance + feedback + OKR in one view |
| Adapt | What can be borrowed? | Adapt Netflix recommendation to suggest top performers |
| Modify | What can be scaled/shrunk? | Minimize review to a weekly pulse check |
| Put to other use | Different context? | Use review data for learning path recommendations |
| Eliminate | What can be removed? | Eliminate annual review — replace with continuous signals |
| Reverse | Flip the process? | Let employees score managers instead |
Generate at least 2 ideas per SCAMPER letter = minimum 14 ideas.
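As one way to run the lenses systematically (for example, when the AI facilitates), a minimal Python sketch; the subject string and per-lens wording are illustrative assumptions:

```python
SCAMPER_LENSES = {
    "Substitute":       "What can be replaced?",
    "Combine":          "What can be merged?",
    "Adapt":            "What can be borrowed from elsewhere?",
    "Modify":           "What can be scaled up or shrunk down?",
    "Put to other use": "Where else could this work?",
    "Eliminate":        "What can be removed entirely?",
    "Reverse":          "What happens if we flip the process?",
}

def scamper_prompts(subject: str, ideas_per_lens: int = 2) -> list[str]:
    """One prompt per lens; 7 lenses x 2 ideas = the 14-idea minimum."""
    return [
        f"{lens}: {question} Generate {ideas_per_lens} ideas for '{subject}'."
        for lens, question in SCAMPER_LENSES.items()
    ]

for prompt in scamper_prompts("quarterly HR performance review"):
    print(prompt)
```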
Time-box: 8 minutes. 8 ideas. No refinement.
Process:
For AI-facilitated sessions:
For multi-stakeholder sessions (async-friendly):
For AI-facilitated sessions:
Gojko Adzic's technique. Maps Goal → Actors → Impacts → Deliverables:
GOAL: [business outcome with measurable target]
├── ACTOR: Who can help/hinder?
│ ├── IMPACT: How should behavior change?
│ │ └── DELIVERABLE: What feature produces this impact?
│ └── IMPACT: What negative behavior to prevent?
│ └── DELIVERABLE: What reduces this risk?
└── ACTOR: ...
Key insight: Work backward from GOAL. If a deliverable doesn't trace to an actor behavior change, don't build it.
"How does [industry X] solve [similar problem Y]?"
| Analogy Source | Application to HR |
|---|---|
| Spotify Discover Weekly | Personalized learning recommendations |
| Uber surge pricing | Dynamic bonus pool allocation |
| GitHub PR reviews | Peer skill endorsement with evidence |
| Amazon recommendation engine | Next goal suggestion |
| Netflix "because you watched" | "Colleagues like you also achieved..." |
Output of Phase 3:
Goal: Reduce 25–40 raw ideas to a ranked shortlist of 3–5 candidates for hypothesis testing.
Before scoring, run a quick gut-check elimination (dot voting). Then rank the remaining candidates:
RICE Score = (Reach × Impact × Confidence) / Effort
Reach: Users affected per quarter (100 / 500 / 1000 / 5000+)
Impact: 0.25 minimal | 0.5 low | 1 medium | 2 high | 3 massive
Confidence: 0.5 low (gut feel) | 0.8 medium (some data) | 1.0 high (validated)
Effort: Story Points — 1 trivial | 3 small | 5 medium | 8 large | 13 very large
Score all 10–15 candidates. Sort descending. Top 5 = shortlist.
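A worked sketch of the formula above in Python; the candidate ideas and their inputs are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    reach: int        # users affected per quarter
    impact: float     # 0.25 minimal .. 3 massive
    confidence: float # 0.5 gut feel | 0.8 some data | 1.0 validated
    effort: int       # story points: 1, 3, 5, 8, 13

    @property
    def rice(self) -> float:
        return self.reach * self.impact * self.confidence / self.effort

# Hypothetical candidates from a Phase 3 ideation session
candidates = [
    Idea("AI-assisted ranking",     reach=500,  impact=2.0, confidence=0.8, effort=5),
    Idea("Weekly pulse check",      reach=1000, impact=1.0, confidence=0.5, effort=3),
    Idea("Unified score dashboard", reach=500,  impact=3.0, confidence=1.0, effort=8),
]

# Sort descending; the top entries form the shortlist
for idea in sorted(candidates, key=lambda i: i.rice, reverse=True):
    print(f"{idea.name:<25} RICE = {idea.rice:.0f}")
```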
For each shortlisted idea, classify:
| Category | Description | If absent | If present | Example |
|---|---|---|---|---|
| Must-Be | Baseline expectation | Users angry | Users neutral | Login works |
| Performance | More = better | Users dissatisfied | Users satisfied | Faster load |
| Delighter | Unexpected value | Users neutral | Users delighted | Smart suggestion |
| Indifferent | Doesn't matter | Users neutral | Users neutral | Icon colors |
| Reverse | Some want, some don't | Segment upset | Segment happy | Auto-fill |
Strategy: Must-Be → Performance → Delighter. Never skip Must-Be items for Delighters.
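A minimal sketch of that ordering rule, assuming category labels have already been assigned (the shortlist entries are hypothetical):

```python
# Precedence per the strategy: Must-Be -> Performance -> Delighter.
# Indifferent and Reverse items are parked rather than scheduled.
KANO_ORDER = {"Must-Be": 0, "Performance": 1, "Delighter": 2}

shortlist = [
    ("Smart promotion suggestion", "Delighter"),
    ("Login works with SSO",       "Must-Be"),
    ("Faster report load",         "Performance"),
    ("Auto-fill review form",      "Reverse"),
]

scheduled = sorted(
    (item for item in shortlist if item[1] in KANO_ORDER),
    key=lambda item: KANO_ORDER[item[1]],
)
print(scheduled)  # Must-Be first, then Performance, then Delighter
```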
Quick visual triage:
HIGH IMPACT
│ Quick Wins ★ │ Major Projects ⚙️
│ (do first) │ (schedule carefully)
────┼──────────────────┼────────────────────
│ Fill-Ins 📋 │ Money Pits ⚠️
│ (if time) │ (avoid or cut)
LOW IMPACT
LOW EFFORT HIGH EFFORT
Plot each shortlisted idea. Quick Wins = default first picks unless Major Project has strategic necessity.
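A minimal sketch of the triage logic, assuming impact is rated on a 1–10 scale, effort in story points, and a hypothetical cutoff of 5 on each axis:

```python
def quadrant(impact: float, effort: float,
             impact_cutoff: float = 5, effort_cutoff: float = 5) -> str:
    """Map an idea onto the 2x2 effort/impact matrix."""
    if impact >= impact_cutoff:
        if effort < effort_cutoff:
            return "Quick Win (do first)"
        return "Major Project (schedule carefully)"
    if effort < effort_cutoff:
        return "Fill-In (if time)"
    return "Money Pit (avoid or cut)"

print(quadrant(impact=8, effort=3))   # Quick Win (do first)
print(quadrant(impact=3, effort=13))  # Money Pit (avoid or cut)
```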
For each idea in the shortlist, assign release priority:
| Priority | Meaning | Threshold |
|---|---|---|
| Must Have | MVP is broken without it | Include if >80% of value depends on it |
| Should Have | Important but MVP works without it | Include if RICE > median |
| Could Have | Nice to have, low risk to cut | Include if effort ≤ 3 SP |
| Won't Have | Explicitly out of scope this cycle | Document for future |
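A sketch of the threshold table as code; the value-share test is a judgment call in practice, and the RICE median used here (150) is illustrative:

```python
def moscow(rice: float, effort_sp: int, value_share: float,
           rice_median: float, in_scope: bool = True) -> str:
    """Assign release priority per the threshold table above."""
    if not in_scope:
        return "Won't Have"        # explicitly out of scope this cycle
    if value_share > 0.80:         # MVP is broken without it
        return "Must Have"
    if rice > rice_median:         # important, but MVP works without it
        return "Should Have"
    if effort_sp <= 3:             # nice to have, low risk to cut
        return "Could Have"
    return "Won't Have"

# Hypothetical shortlist entries
print(moscow(rice=320, effort_sp=5, value_share=0.85, rice_median=150))  # Must Have
print(moscow(rice=180, effort_sp=8, value_share=0.30, rice_median=150))  # Should Have
print(moscow(rice=90,  effort_sp=8, value_share=0.10, rice_median=150))  # Won't Have
```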
Output of Phase 4:
Goal: Before committing to build, test your riskiest assumptions. 42% of startups fail because there is no market need — validate before building.
Problem Hypothesis template:
**We believe** [target users/persona]
**Experience** [specific problem]
**Because** [root cause]
**We'll know this is true when** [validation metric/observable evidence]
Example:
We believe HR Managers
Experience frustration identifying top performers during review cycles
Because scoring data is fragmented across 3 systems with no unified ranking
We'll know this is true when 3+ managers confirm they spend >2hrs per cycle on manual data aggregation
Value Hypothesis template:
**We believe** [feature/solution]
**Will deliver** [specific value/outcome]
**To** [target users]
**We'll know we're right when** [measurable success metric]
Identify the ONE assumption whose failure kills the idea:
RAT score = Probability of being wrong (0–1) × Impact if wrong (0–1); the highest score marks the assumption to test first (a scoring sketch follows the loop below).

For each top idea, define the loop:
BUILD: Minimum experiment to test the assumption (not a full product)
MEASURE: One metric that proves/disproves the hypothesis
LEARN: What decision do we make if metric is met / not met?
PIVOT: If hypothesis invalidated — which alternative from Phase 3 do we try next?
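For the RAT scoring above, a minimal sketch that picks the assumption to test first; the assumptions and probabilities are hypothetical:

```python
# RAT score = probability of being wrong (0-1) x impact if wrong (0-1).
# The highest score marks the assumption to test first.
assumptions = {
    "Managers will trust an AI-generated ranking": (0.6, 0.9),
    "Score data from all 3 systems can be unified": (0.3, 0.8),
    "HR will pay for a premium tier":               (0.5, 0.4),
}

riskiest = max(assumptions.items(), key=lambda kv: kv[1][0] * kv[1][1])
name, (p_wrong, impact) = riskiest
print(f"Test first: {name} (RAT score = {p_wrong * impact:.2f})")
```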
Output of Phase 5:
Goal: Present a clear, opinionated recommendation with trade-offs. Not "here are all the options" — "here's what we recommend and why."
Present final shortlist as a decision table:
| Option | RICE | Kano | Effort | Risk | RAT Test | Recommendation |
|---|---|---|---|---|---|---|
| Option A | 320 | Delighter | 5 SP | Medium | 3-day interview | ⭐ Recommended |
| Option B | 180 | Performance | 8 SP | Low | Prototype | Viable |
| Option C | 90 | Must-Be | 13 SP | High | Pre-sell | Defer |
RECOMMENDED: [Option Name]
Why: [1–2 sentences on RICE + Kano + strategic fit]
Risk: [Primary risk + mitigation]
First step: [Cheapest test to validate before full commitment]
Time to validation: [Days/weeks]
Use naming pattern from ## Naming section in injected context.
Create markdown summary report:
# Brainstorm Session Report: [Topic]
## Session Context
- Scenario: [Problem-Solving / New Product / Enhancement]
- Role: [PO / BA / Mixed]
- Date: [YYYY-MM-DD]
- Input: [Original question/problem]
## Problem Statement
[POV format]
## Root Cause Analysis
[5 Whys or Fishbone — if Problem-Solving]
## Job Stories
1. [Job Story 1]
2. [Job Story 2]
3. [Job Story 3]
## HMW Questions
1. How might we...
2. How might we...
## Opportunity Map
[OST or Lean Canvas — per scenario]
## Raw Ideas Generated
[Total count: XX ideas across SCAMPER / Crazy 8s / Impact Mapping]
## Scored Shortlist (RICE)
| Rank | Idea | RICE | Kano | Effort | Priority |
| ---- | ---- | ---- | ---- | ------ | ----------- |
| 1 | ... | ... | ... | ... | Must Have |
| 2 | ... | ... | ... | ... | Should Have |
## Hypothesis Cards
### Top Recommendation: [Option Name]
- Problem Hypothesis: ...
- Value Hypothesis: ...
- Riskiest Assumption: ...
- Cheapest Test: ...
- Success Metric: ...
## Decision
[Recommendation + rationale]
## Next Steps
- [ ] [First concrete action]
- [ ] [Validation test]
- [ ] [Stakeholder alignment needed]
| Technique | Phase | When to Use | Time-box |
|---|---|---|---|
| POV Statement | P1 | Always | 10 min |
| 5 Whys | P1 | Problem-solving scenario | 15 min |
| Fishbone | P1 | Systemic/complex problems | 20 min |
| JTBD / Job Stories | P1 | New product or enhancement | 20 min |
| HMW Questions | P1 | Always — bridge problem → ideation | 15 min |
| Opportunity Solution Tree | P2 | Enhancement scenario | 30 min |
| Lean Canvas | P2 | New product scenario | 20 min |
| Blue Ocean ERRC | P2 | Differentiation needed | 20 min |
| Value Proposition Canvas | P2 | Product-market fit unclear | 25 min |
| SCAMPER | P3 | Always — structured ideation | 30 min |
| Crazy 8s | P3 | Need quantity fast | 8 min |
| Brainwriting 6-3-5 | P3 | Multi-stakeholder, async | 30 min |
| Impact Mapping | P3 | Outcome-first thinking | 30 min |
| Analogical Thinking | P3 | Novel/creative directions needed | 15 min |
| Dot Voting | P4 | First-pass elimination | 10 min |
| RICE Scoring | P4 | Always for prioritization | 20 min |
| Kano Model | P4 | Feature classification | 15 min |
| 2×2 Effort/Impact | P4 | Visual triage | 10 min |
| MoSCoW | P4 | Release scoping | 15 min |
| Problem Hypothesis | P5 | Always before committing | 15 min |
| Value Hypothesis | P5 | Always before committing | 15 min |
| Riskiest Assumption Test | P5 | Before full build | 20 min |
| Build-Measure-Learn | P5 | Lean validation | 20 min |
Supporting agents and skills:
- planner agent — research industry best practices for the specific domain
- docs-manager agent — understand existing feature constraints and domain context
- WebSearch — market/competitor context for new product scenarios
- docs-seeker skill — latest documentation for external plugins/APIs
- ai-multimodal skill — analyze visual mockups, screenshots, competitor UIs
- sequential-thinking skill — complex problem decomposition requiring structured causal chains
- web-research skill — deep market research for greenfield or competitive analysis

Problem-Solving flow:
1. POV Statement → 2. 5 Whys / Fishbone → 3. HMW Questions
→ 4. SCAMPER on current solution → 5. RICE scoring
→ 6. Problem Hypothesis + RAT → 7. Recommend + cheapest test
New Product flow:
1. Job Stories (JTBD) → 2. Lean Canvas → 3. Blue Ocean ERRC
→ 4. HMW Questions → 5. Crazy 8s / Brainwriting
→ 6. Kano Classification → 7. Value Hypothesis + RAT → 8. MVP scope
Enhancement flow:
1. Job Stories (JTBD) → 2. Opportunity Solution Tree
→ 3. HMW Questions → 4. SCAMPER on existing feature
→ 5. Impact Mapping → 6. RICE scoring → 7. 2×2 matrix
→ 8. Value Hypothesis + RAT → 9. Recommend + next experiment
| Anti-Pattern | Why It Fails | Better Approach |
|---|---|---|
| Jumping to solutions before defining problem | Builds the wrong thing | Always complete Phase 1 first |
| Evaluating ideas while generating them | Kills creative output, premature closure | Strict diverge/converge separation |
| One stakeholder perspective only | Misses jobs, pains, context | Brainwriting from 6 different roles |
| No hypothesis before building | 42% of features fail — no market need | Always write hypothesis + RAT |
| RICE without confidence score | Overestimates low-evidence ideas | Always include Confidence as a multiplier |
| Kano ignored — building only Delighters | Users can't use a delighter with broken Must-Bes | Prioritize Must-Be → Performance → Delighter |
| "Best idea wins" without validation test | HiPPO bias (Highest Paid Person's Opinion) | Every top idea needs a RAT test design |
| Scope creep in ideation | Ideas balloon beyond what team can validate | Timebox each phase strictly |
| Treating RICE score as final truth | RICE is directional, not precise | Use RICE + Kano + strategic context together |
After brainstorm session concludes, use a direct user question to present next steps:
| Next Step | When | Skill/Workflow |
|---|---|---|
| $idea | Capture top idea as backlog artifact | idea skill |
| $refine | Turn top idea into actionable PBI with AC | refine skill |
| $web-research | Need deeper market/competitor research first | web-research skill |
| $plan-hard | Problem is clear, solution is validated, ready to implement | plan skill |
| $design-spec | UI-heavy idea, need wireframes before spec | design-spec skill |
| $domain-analysis | Idea touches domain entities, need model first | domain-analysis skill |
| Continue brainstorming | More scenarios to explore | Stay in this session |
[IMPORTANT] Use task tracking to break ALL work into small tasks BEFORE starting. This prevents context loss from long sessions.
AI Mistake Prevention — Failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades; map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures; always grep to confirm existence before documenting or referencing.
- Trace the full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it; always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes; always trace early exits, error branches, and conditional skips, not just the happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether the bug is in the caller (wrong data) or the callee (wrong handling); fix at the responsible layer, never patch the symptom site.
- Assume existing values are intentional; ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output; one green check is not all green checks.
- Holistic-first debugging: resist the nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes: apply the diff test. For a bug fix, every changed line must trace directly to the bug; don't restyle or improve adjacent code. For an enhancement task, implement improvements AND announce them explicitly.
- Surface ambiguity before coding; don't pick silently. If a request has multiple interpretations, present each with an effort estimate and ask. Never assume the all-records, file-based, or more complex path.
Critical Thinking Mindset — apply critical thinking and sequential thinking. Every claim needs traced proof; confidence >80% to act. Anti-hallucination: never present a guess as fact. Cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence; certainty without evidence is the root of all hallucination.
Sequential Thinking Protocol — Structured multi-step reasoning for complex/ambiguous work. Use when planning, reviewing, debugging, or refining ideas where one-shot reasoning is unsafe.
Trigger when: complex problem decomposition · adaptive plans needing revision · analysis with course correction · unclear/emerging scope · multi-step solutions · hypothesis-driven debugging · cross-cutting trade-off evaluation.
Format (explicit mode — visible thought trail):
- `Thought N/M: [aspect]` — one aspect per thought; state assumptions/uncertainty
- `Thought N/M [REVISION of Thought K]: ...` — when prior reasoning is invalidated; state Original / Why revised / Impact
- `Thought N/M [BRANCH A from Thought K]: ...` — explore an alternative; converge with decision rationale
- `Thought N/M [HYPOTHESIS]: ...` then `[VERIFICATION]: ...` — test before acting
- `Thought N/N [FINAL]` — only when verified, all critical aspects addressed, confidence >80%

Mandatory closers: Confidence % stated · Assumptions listed · Open questions surfaced · Next action concrete.
Stop conditions: confidence <80% on any critical decision → escalate by asking the user directly · ≥3 revisions on the same thought → re-frame the problem · branch count >3 → split into a sub-task.
Implicit mode: apply methodology internally without visible markers when adding markers would clutter the response (routine work where reasoning aids accuracy).
Deep-dive: see the `$sequential-thinking` skill (.claude/skills/sequential-thinking/SKILL.md) for worked examples (api-design, debug, architecture), advanced techniques (spiral refinement, hypothesis testing, convergence), and meta-strategies (uncertainty handling, revision cascades).
MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact.
MUST ATTENTION apply sequential-thinking — multi-step Thought N/M, REVISION/BRANCH/HYPOTHESIS markers, confidence % closer; see $sequential-thinking skill.
MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction.
IMPORTANT MUST ATTENTION follow declared step order for this skill; NEVER skip, reorder, or merge steps without explicit user approval
IMPORTANT MUST ATTENTION for every step/sub-skill call: set in_progress before execution, set completed after execution
IMPORTANT MUST ATTENTION every skipped step MUST include explicit reason; every completed step MUST include concise evidence
IMPORTANT MUST ATTENTION if Task tools unavailable, maintain an equivalent step-by-step plan tracker with synchronized statuses
[TASK-PLANNING] Before acting, analyze task scope and systematically break it into small todo tasks and sub-tasks using task tracking.
[IMPORTANT] Analyze how big the task is and break it into many small todo tasks systematically before starting — this is very important.
Source: .claude/hooks/lib/prompt-injections.cjs + .claude/.ck.json
`$workflow-start <workflowId>` for standard workflows; sequence custom steps manually.

[CRITICAL] Hard-won project debugging/architecture rules. MUST ATTENTION apply BEFORE forming a hypothesis or writing code.
Goal: Prevent recurrence of known failure patterns — debugging, architecture, naming, AI orchestration, environment.
Top Rules (apply always):
- Parallel async + repo/UoW: use `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask`. `ExecuteUowTask` creates a new UoW but reuses the outer DI scope (same DbContext), so parallel iterations sharing a non-thread-safe DbContext silently corrupt data. `ExecuteInjectScopedAsync` creates a new UoW + new DI scope (fresh repo per iteration).
- Bus message ownership: the name encodes the owner (e.g., `AccountUserEntityEventBusMessage` = Accounts owns). Core services (Accounts, Communication) are leaders. Feature services (Growth, Talents) sending to core MUST use `{CoreServiceName}...RequestBusMessage` — never define their own event for core to consume.
- Naming: `HrManagerOrHrOrPayroll` → `HrOperations`. An "Or"-chained policy name lists set members, not what it guards; add a role → rename = broken abstraction. Rule: names express DOES/GUARDS, not CONTAINS. Test: does adding/removing a member force a rename? YES = content-driven = bad → rename to purpose (e.g., `HrOperationsAccessPolicy`). Nuance: "Or" is fine in behavioral idioms (`FirstOrDefault`, `SuccessOrThrow`) — it expresses what HAPPENS, not membership.
- Windows Python: NEVER assume `python`/`python3` resolves; verify the alias first (`where python` / `where py`). Python may not be in bash PATH under those names. Prefer `py` (Windows Python Launcher) for one-liners, `node` if a JS alternative exists.
docs/project-reference/integration-test-reference.mdLessons Learned section. Production-code anti-patterns →docs/project-reference/backend-patterns-reference.mdAnti-Patterns section. Generic debugging/refactoring reminders → System Lessons in.claude/hooks/lib/prompt-injections.cjs.
- `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask` (shared DbContext = silent data corruption)
- Feature → core messaging: `{CoreServiceName}...RequestBusMessage`
- Never assume `python`/`python3` resolves; run `where python`/`where py` first, use the `py` launcher or `node`

Break work into small tasks (task tracking) before starting. Add final task: "Analyze AI mistakes & lessons learned".
Extract lessons — ROOT CAUSE ONLY, not symptom fixes:
- Record each root-cause lesson via `$learn`.
- Gate each candidate lesson: "Could `$code-review`/`$code-simplifier`/`$security`/`$lint` catch this?" Yes → improve that review skill instead; No → record it via `$learn`.
[TASK-PLANNING] [MANDATORY] BEFORE executing any workflow or skill step, create/update task tracking for all planned steps, then keep it synchronized as each step starts/completes.