---
name: prp-workflow
description: Comprehensive PRP (Product Requirements Prompt) workflow for feature implementation. Use when the user wants to implement complex features, needs a structured implementation plan, or mentions PRPs, feature implementation, or systematic development. Handles PRP generation from requirements, clarification of ambiguities, and execution for implementation. Three modes - Generation (create PRP), Clarification (resolve ambiguities), Execution (implement feature).
---
This skill implements a comprehensive Product Requirements Prompt (PRP) workflow for systematic feature implementation with proper context engineering.
Activate this skill when the user:
A PRP (Product Requirements Prompt) is similar to a PRD (Product Requirements Document) but specifically engineered for AI coding assistants. It includes:
This skill operates in three modes:

1. **Generation**: Create a comprehensive PRP from a feature request or requirements document.
2. **Clarification**: Identify and resolve underspecified areas in an existing PRP through structured questioning (recommended before execution).
3. **Execution**: Implement a feature using an existing PRP file.

## Mode 1: PRP Generation

The user should provide:
Before generating the PRP, conduct thorough research:
- Run the `tree` command to understand the project structure
- Use the AskUserQuestion tool to clarify ambiguities in the requirements
Create a comprehensive PRP using this structure:
name: "[Feature Name] PRP"
description: |
## Purpose
[What this PRP enables the AI to build]
## Core Principles
1. **Context is King**: Include ALL necessary documentation, examples, and caveats
2. **Validation Loops**: Provide executable tests/lints the AI can run and fix
3. **Information Dense**: Use keywords and patterns from the codebase
4. **Progressive Success**: Start simple, validate, then enhance
5. **Global rules**: Follow all rules in CLAUDE.md (if exists)
---
## Goal
[What needs to be built - be specific about the end state]
## Why
- [Business value and user impact]
- [Integration with existing features]
- [Problems this solves and for whom]
## What
[User-visible behavior and technical requirements]
### Success Criteria
- [ ] [Specific measurable outcome 1]
- [ ] [Specific measurable outcome 2]
- [ ] [Specific measurable outcome 3]
## All Needed Context
### Documentation & References
```yaml
# MUST READ - Include these in your context window
- url: [Official API docs URL]
  why: [Specific sections/methods you'll need]

- file: [path/to/example.py]
  why: [Pattern to follow, gotchas to avoid]

- doc: [Library documentation URL]
  section: [Specific section]
  critical: [Key insight that prevents common errors]

- docfile: [specs/ai_docs/file.md]
  why: [User-provided documentation]
```

### Current Codebase Tree

    [Output from tree command - shows current structure]

### Desired Codebase Tree

    [New files to be added with their responsibilities]
### Known Gotchas & Library Quirks

    # CRITICAL: [Library name] requires [specific setup]
    # CRITICAL: [Framework] version [X] has [specific behavior]
    # CRITICAL: [Common pitfall to avoid]

## Implementation Blueprint

### Data Models and Structure

[Define core data structures, Pydantic models, ORM models, etc.]
### Task List (in order)

    Task 1: [Task Name]
    MODIFY/CREATE [file path]:
      - [Specific action 1]
      - [Specific action 2]
      PATTERN: [Reference to similar code]

    Task 2: [Task Name]
    ...

    Task N: [Final Task]
    ...
### Per-Task Pseudocode

    # Task 1: [Task Name]
    # High-level pseudocode showing CRITICAL details
    def feature_function(param: Type) -> Result:
        # PATTERN: Always validate input (see file:line)
        # GOTCHA: This library requires X
        # CRITICAL: API has rate limit of Y
        pass
### Integration Points

    DATABASE:
      - migration: [SQL or migration description]
      - index: [Index requirements]

    CONFIG:
      - add to: [config file path]
      - pattern: [How to add config]

    ROUTES/ENDPOINTS:
      - add to: [router file]
      - pattern: [How to register]
### Validation Loop

    # Language-specific linting commands
    # Examples:
    # Python: ruff check . --fix && mypy .
    # TypeScript: eslint . --fix && tsc --noEmit
    # Go: go fmt ./... && go vet ./...

    # Define required test cases:
    # 1. Happy path test
    # 2. Edge case test
    # 3. Error handling test

    # Run command
    # Example: pytest tests/ -v

    # Manual test commands
    # Example: curl commands, CLI commands, etc.
### PRP Output
**CRITICAL: Always save PRPs in the `specs` directory with auto-incrementing naming**
Follow this process:
1. Ensure the `specs` directory exists (create if needed)
2. Find the highest numbered PRP in `specs` (e.g., if `003-auth.md` exists, next is 004)
3. Generate a sensible kebab-case filename from the feature name (e.g., "User Authentication" → "user-authentication")
4. Save as `specs/{XXX}-{feature-name}.md` where XXX is zero-padded 3-digit number
Examples:
- First PRP: `specs/001-user-authentication.md`
- Second PRP: `specs/002-payment-integration.md`
- Tenth PRP: `specs/010-api-rate-limiting.md`
- 100th PRP: `specs/100-advanced-analytics.md`
**Filename generation rules:**
- Convert to lowercase
- Replace spaces with hyphens
- Remove special characters (keep only alphanumeric and hyphens)
- Keep it concise (max 50 characters for the name part)
- Use the core feature name, not full description
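
The numbering and filename rules above can be sketched in Python; the function names are illustrative, not part of the skill:

```python
import re
from pathlib import Path

def slugify(feature_name: str, max_len: int = 50) -> str:
    """Apply the filename rules: lowercase, hyphens, alphanumeric only."""
    slug = feature_name.lower().replace(" ", "-")
    slug = re.sub(r"[^a-z0-9-]", "", slug)          # drop special characters
    slug = re.sub(r"-+", "-", slug).strip("-")      # collapse stray hyphens
    return slug[:max_len]

def next_prp_path(specs_dir: Path, feature_name: str) -> Path:
    """Find the highest NNN- prefix in specs and return the next path."""
    specs_dir.mkdir(exist_ok=True)
    numbers = [
        int(m.group(1))
        for p in specs_dir.glob("*.md")
        if (m := re.match(r"(\d{3})-", p.name))
    ]
    next_num = max(numbers, default=0) + 1
    return specs_dir / f"{next_num:03d}-{slugify(feature_name)}.md"

print(slugify("User Authentication"))  # user-authentication
```

So with `specs/003-auth.md` present, `next_prp_path(Path("specs"), "Payment Integration")` yields `specs/004-payment-integration.md`.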
### PRP Quality Checklist
Before finalizing, verify:
- [ ] All necessary context included
- [ ] Validation gates are executable by AI
- [ ] References existing patterns with file paths
- [ ] Clear implementation path with specific tasks
- [ ] Error handling documented
- [ ] Success criteria are measurable
**Score the PRP confidence level (1-10)** for one-pass implementation success.
## Mode 2: PRP Clarification
### Purpose
Identify and resolve underspecified areas in a PRP through structured questioning BEFORE implementation. This reduces downstream rework risk and ensures requirements are clear enough for one-pass implementation success.
### When to Use
- After PRP generation, before execution
- When ambiguities or missing decision points are suspected
- To validate that requirements are implementation-ready
- User can skip if explicitly doing exploratory work (but warn about rework risk)
### Clarification Process
#### Step 1: Locate PRP
Ask user for PRP file path, or search for it in `specs/` directory with patterns like:
- `specs/###-*.md` (numbered PRPs)
- `prp.md`, `requirements.md` in current directory
If no PRP exists, instruct user to generate one first (Mode 1).
#### Step 2: Coverage Analysis
Load the PRP and perform structured ambiguity scan using this taxonomy. Internally mark each category as: **Clear** / **Partial** / **Missing**.
**Coverage Taxonomy:**
1. **Functional Scope & Behavior**
- Core user goals & success criteria
- Explicit out-of-scope declarations
- User roles / personas differentiation
- Primary workflows and interactions
2. **Domain & Data Model**
- Entities, attributes, relationships
- Identity & uniqueness rules
- Lifecycle/state transitions
- Data volume / scale assumptions
3. **Interaction & UX Flow**
- Critical user journeys / sequences
- Error/empty/loading states
- Accessibility or localization requirements
- Input validation rules
4. **Non-Functional Quality Attributes**
- Performance (latency, throughput targets)
- Scalability (horizontal/vertical, limits)
- Reliability & availability (uptime, recovery)
- Observability (logging, metrics, tracing)
- Security & privacy (auth, data protection, threats)
- Compliance / regulatory constraints
5. **Integration & External Dependencies**
- External services/APIs and failure modes
- Data import/export formats
- Protocol/versioning assumptions
- Third-party library constraints
6. **Edge Cases & Failure Handling**
- Negative scenarios (invalid input, missing data)
- Rate limiting / throttling
- Conflict resolution (concurrent edits)
- Degraded mode operations
7. **Constraints & Tradeoffs**
- Technical constraints (language, storage, hosting)
- Explicit tradeoffs or rejected alternatives
- Budget/timeline constraints affecting scope
8. **Terminology & Consistency**
- Canonical glossary terms
- Avoided synonyms / deprecated terms
- Domain-specific language definitions
9. **Completion Signals**
- Acceptance criteria testability
- Measurable Definition of Done indicators
- Success metrics
10. **Misc / Placeholders**
- TODO markers / unresolved decisions
- Ambiguous adjectives ("robust", "intuitive", "fast")
- Vague requirements without concrete criteria
**Consider for clarification** if Partial/Missing AND:
- Would materially impact implementation or validation strategy
- Not better deferred to execution phase
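
The coverage scan and the clarification filter above can be modeled as a small data structure; this is a hedged sketch (the class and field names are illustrative assumptions):

```python
from dataclasses import dataclass

TAXONOMY = [
    "Functional Scope & Behavior",
    "Domain & Data Model",
    "Interaction & UX Flow",
    "Non-Functional Quality Attributes",
    "Integration & External Dependencies",
    "Edge Cases & Failure Handling",
    "Constraints & Tradeoffs",
    "Terminology & Consistency",
    "Completion Signals",
    "Misc / Placeholders",
]

@dataclass
class CoverageMark:
    category: str
    status: str            # "Clear" | "Partial" | "Missing"
    material_impact: bool  # would it change implementation or validation strategy?
    deferrable: bool       # better resolved during the execution phase?

def needs_clarification(mark: CoverageMark) -> bool:
    """A category qualifies only if it is not Clear AND both filters pass."""
    return (
        mark.status in ("Partial", "Missing")
        and mark.material_impact
        and not mark.deferrable
    )

mark = CoverageMark("Domain & Data Model", "Partial", material_impact=True, deferrable=False)
print(needs_clarification(mark))  # True
```

A "Partial" category with no material impact, or one better settled while coding, is deliberately excluded from the question queue.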
#### Step 3: Generate Question Queue
Internally create prioritized question queue (max 5 questions). **Do NOT output all at once.**
**Question Constraints:**
- Maximum 5 questions per session (absolute limit)
- Each answerable with EITHER:
- Multiple-choice (2-5 mutually exclusive options), OR
- Short phrase (≤5 words)
- Only include if materially impacts:
- Architecture, data modeling, task decomposition
- Test design, UX behavior, operational readiness
- Compliance validation
**Prioritization:**
- Use (Impact × Uncertainty) heuristic
- Cover highest-impact categories first
- Balance coverage across categories
- Exclude already-answered questions
- Exclude trivial preferences
- Exclude execution details (unless blocking)
- Favor questions that reduce rework risk
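
The (Impact × Uncertainty) heuristic with category balancing can be sketched as follows; the 1-5 scoring scale and dict shape are illustrative assumptions:

```python
def prioritize(questions: list, limit: int = 5) -> list:
    """Order candidate questions by impact x uncertainty, one per category first.

    Each question dict carries: text, category, impact (1-5), uncertainty (1-5).
    Sorting by the product favors high-impact unknowns; the per-category pass
    balances coverage before a second pass fills the remaining slots.
    """
    ranked = sorted(questions, key=lambda q: q["impact"] * q["uncertainty"], reverse=True)
    queue, seen = [], set()
    for q in ranked:                      # first pass: one question per category
        if q["category"] not in seen:
            queue.append(q)
            seen.add(q["category"])
    for q in ranked:                      # second pass: fill up to the limit
        if q not in queue:
            queue.append(q)
    return queue[:limit]

candidates = [
    {"text": "Auth method?", "category": "security", "impact": 5, "uncertainty": 4},
    {"text": "Button color?", "category": "ux", "impact": 1, "uncertainty": 2},
    {"text": "Latency target?", "category": "nfr", "impact": 4, "uncertainty": 5},
]
print([q["text"] for q in prioritize(candidates)])
```

Trivial preferences like the button color score low and only surface if the question budget is not already spent on higher-risk unknowns.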
#### Step 4: Interactive Questioning with Native UI
Use the `AskUserQuestion` tool to present clarification questions using Claude's native questioning UI.
**Question Batch Preparation:**
1. Prepare 1-4 questions from the prioritized queue (max 4 per batch due to tool limits)
2. For each question, determine:
- Clear question text ending with "?"
- Short header (≤12 chars) for the chip/tag display
- 2-4 option choices (tool automatically adds "Other" for custom input)
- Whether multiSelect should be enabled (default: false for mutually exclusive choices)
**Question Format Guidelines:**
- **header**: Ultra-concise label (e.g., "Auth method", "Data store", "API style")
- **question**: Full question text with context
- **options**: Array of 2-4 choices, each with:
- **label**: Short choice name (1-5 words)
- **description**: Explanation of what this choice means and its implications
- **multiSelect**: Set to `true` only if choices are not mutually exclusive
**Answer Processing:**
- Tool returns user selections in `answers` object keyed by question text
- If user selects "Other", they provide custom text input
- All answers are automatically validated by the tool
- Record each answer and proceed to integration step
**Batching Strategy:**
- Present up to 4 questions at once if they're independent
- If answers to earlier questions affect later questions, batch accordingly
- Respect 5-question total limit across all batches
- After each batch, integrate answers before potentially asking more
**Stop Conditions:**
- All critical ambiguities resolved (remaining questions unnecessary)
- 5 questions total asked across all batches
- No more questions in queue
- User explicitly indicates completion via custom responses
**Example Tool Usage:**
```typescript
AskUserQuestion({
  questions: [
    {
      question: "Which authentication method should the system use?",
      header: "Auth method",
      multiSelect: false,
      options: [
        {
          label: "OAuth 2.0 with JWT",
          description: "Standard, secure, scalable for web/mobile apps. Stateless authentication."
        },
        {
          label: "Session cookies",
          description: "Traditional approach, requires session storage, good for server-rendered apps."
        },
        {
          label: "API keys",
          description: "Simple but less secure, suitable for service-to-service auth only."
        }
      ]
    },
    {
      question: "What is the target API response latency at p95?",
      header: "Latency",
      multiSelect: false,
      options: [
        {
          label: "<100ms",
          description: "High performance, requires caching and optimization"
        },
        {
          label: "<200ms",
          description: "Standard for responsive web apps, balanced approach"
        },
        {
          label: "<500ms",
          description: "Acceptable for non-critical operations"
        }
      ]
    }
  ]
})
```
After EACH batch of answers (from the AskUserQuestion tool), update the PRP immediately:

1. **First batch only**: Ensure structure exists
   - Add a `## Clarifications` section (after overview/context)
   - Add a `### Session YYYY-MM-DD` subheading
2. **Record Q&A** for each answer in the batch:
   - `- Q: <question> → A: <answer>` for each question/answer pair
3. **Apply clarification** to the appropriate sections:
| Type | Target Section | Action |
|---|---|---|
| Functional ambiguity | Success Criteria / Requirements | Add/update specific capability |
| User interaction | User Stories / Workflows | Add clarified role/constraint/scenario |
| Data model | Data Models and Structure | Add fields, types, relationships |
| Non-functional | Success Criteria | Add measurable criteria (vague→metric) |
| Edge case | Known Gotchas / Edge Cases | Add bullet or subsection |
| Terminology | Throughout | Normalize term, add "(formerly X)" once |
| Security/privacy | Implementation Blueprint / Gotchas | Add auth/data protection rules |
| Performance | Success Criteria / Gotchas | Add latency/throughput/capacity targets |
| Integration | Integration Points / Context | Add service details, failure modes |
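
The incremental Q&A recording described above can be sketched as a string transformation; this is a simplified illustration (a real implementation would insert the section after the overview rather than append, and the function name is an assumption):

```python
import datetime

def record_clarifications(prp_text: str, qa_pairs: list) -> str:
    """Append a dated Q&A session under a ## Clarifications section.

    Creates the section and session subheading on the first batch;
    later batches in the same session reuse them.
    """
    today = datetime.date.today().isoformat()
    if "## Clarifications" not in prp_text:
        prp_text += "\n## Clarifications\n"
    session = f"\n### Session {today}\n"
    if session not in prp_text:
        prp_text += session
    for q, a in qa_pairs:
        prp_text += f"- Q: {q} → A: {a}\n"
    return prp_text

doc = record_clarifications("# Feature PRP\n", [("Auth method?", "OAuth 2.0 with JWT")])
print("## Clarifications" in doc)  # True
```

Because the structure check is idempotent, re-running it for each batch never duplicates the section or session headings.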
Handle contradictions:
Write to disk:
After each write, verify:
- The `## Clarifications` section and `### Session YYYY-MM-DD` subheading exist

After questioning ends, provide:
Session Summary:
Coverage Summary Table:
| Category | Status | Notes |
|---|---|---|
| Functional Scope | Resolved/Clear/Deferred/Outstanding | Brief note if not Clear |
| Domain & Data Model | ... | ... |
| Interaction & UX Flow | ... | ... |
| Non-Functional Attributes | ... | ... |
| Integration & Dependencies | ... | ... |
| Edge Cases & Failures | ... | ... |
| Constraints & Tradeoffs | ... | ... |
| Terminology | ... | ... |
| Completion Signals | ... | ... |
Status Definitions:
Recommendations:
Early Exit:
Constraints:
Scenario 1: Authentication & Performance Batch
    AskUserQuestion({
      questions: [
        {
          question: "Which authentication method should the system use?",
          header: "Auth method",
          multiSelect: false,
          options: [
            { label: "OAuth 2.0 with JWT", description: "Standard, secure, scalable for web/mobile. Stateless auth." },
            { label: "Session cookies", description: "Traditional, requires session storage, good for SSR apps." },
            { label: "API keys", description: "Simple but less secure, for service-to-service only." },
            { label: "SSO (SAML/OIDC)", description: "Enterprise integration, complex setup." }
          ]
        },
        {
          question: "What is the target API response latency at p95?",
          header: "Latency",
          multiSelect: false,
          options: [
            { label: "<100ms", description: "High performance, requires caching and optimization" },
            { label: "<200ms", description: "Standard for responsive web apps, balanced approach" },
            { label: "<500ms", description: "Acceptable for non-critical operations" }
          ]
        }
      ]
    })
User selects: "OAuth 2.0 with JWT" and "<200ms"
Integration:
After receiving answers, record them in the PRP:

- Q: Authentication method? → A: OAuth 2.0 with JWT
- Q: Target API latency? → A: <200ms p95

and add the resulting constraint to the Known Gotchas section:

    # CRITICAL: Use OAuth 2.0 with JWT for stateless auth

Scenario 2: Multi-Select for Features
    AskUserQuestion({
      questions: [
        {
          question: "Which optional features should be included in the initial release?",
          header: "Features",
          multiSelect: true,
          options: [
            { label: "Email notifications", description: "Send alerts via email, requires SMTP setup" },
            { label: "Export to CSV", description: "Allow users to download data as CSV files" },
            { label: "Dark mode", description: "Theme toggle for UI, affects all components" },
            { label: "Audit logging", description: "Track all user actions, increases storage needs" }
          ]
        }
      ]
    })
User selects: "Email notifications", "Export to CSV", "Dark mode"
Integration:
- Q: Optional features? → A: Email notifications, Export to CSV, Dark mode

## Mode 3: PRP Execution

When executing a PRP to implement a feature:
- Read the specified PRP file completely (typically in the `specs` directory)

**CRITICAL: Think deeply before executing**
CRITICAL: If validation fails, never skip or ignore:
Throughout execution, you can always:
If the project contains a CLAUDE.md file:
A successful PRP workflow achieves:
User: "I have a feature request in INITIAL.md, can you implement it using the PRP workflow?"
AI: [Activates this skill]
1. Reads INITIAL.md
2. Researches codebase and external resources
3. Generates comprehensive PRP in specs/XXX-feature-name.md (auto-incremented)
4. Asks if user wants to execute immediately or review first
5. If approved, executes the PRP with full workflow
User: "Build an async web scraper using BeautifulSoup that extracts product data, handles rate limiting, and stores in PostgreSQL"
AI: [Activates this skill]
1. Clarifies any ambiguities
2. Researches relevant patterns in codebase
3. Searches for BeautifulSoup + async + PostgreSQL patterns
4. Generates PRP with comprehensive context
5. Asks for approval to execute
6. Implements with validation loops
User: "Clarify the PRP in specs/002-api-integration.md"
OR
User: "Review the requirements in specs/002-api-integration.md for ambiguities"
AI: [Activates this skill in clarification mode]
1. Reads specs/002-api-integration.md completely
2. Analyzes coverage across all taxonomy categories
3. Generates prioritized question queue (max 5 total)
4. Uses AskUserQuestion tool to present questions in batches (up to 4 per batch)
5. Updates PRP incrementally after each batch of answers
6. Provides coverage summary and next step recommendation
User: "Execute the PRP in specs/003-web-scraper.md"
AI: [Activates this skill in execution mode]
1. Reads specs/003-web-scraper.md completely
2. Creates implementation plan with TodoWrite
3. Executes systematically with validation
4. Reports completion
This skill should activate when user says things like:
For PRP Generation (Mode 1):
For PRP Clarification (Mode 2):
For PRP Execution (Mode 3):
Users can skip Mode 2 for exploratory work, but should be warned about increased rework risk.