| name | sprint-plan |
| version | 1.1.0 |
| description | Create structured sprint plans with automated Git workflow. Generates TOML sprints, creates feature branches, and enforces proper development workflow. |
Automates sprint planning with Git workflow enforcement:
REQUIRED task format: [tasks.TASK-ID] (flat structure)
OBSOLETE format: [[epic.*.tasks]] (will break SprintLoader)
5 Required fields per task:
- id - Task identifier
- name - Task name (NOT "title")
- status - Current status (pending, in_progress, completed)
- phase - Phase name
- agent - Agent assignment
All other fields are OPTIONAL - Use minimal format for quick tasks, rich format for complex features.
Backward compatible: Old sprints with minimal fields continue to work. New sprints can use rich metadata.
REQUIRED naming format: {SPRINT_ID}_{TASK_ID}_{TYPE}.md
Why Sprint ID? Prevents collisions when multiple sprints have the same task ID (e.g., Sprint 3 BUG-001 vs Sprint 17.1 BUG-001).
Examples:
- 17.1-BUGS_BUG-011_ENHANCED_PROMPT.md (Sprint 17.1, enhanced prompt)
- 17.1-BUGS_BUG-011_QUESTIONS.md (Sprint 17.1, questions document)
- 3_PROTECT-001_TEST_PLAN.md (Sprint 3, test plan)
- 4_KEY-002_DESIGN.md (Sprint 4, design document)
Sprint ID Extraction:
- ACTIVE_SPRINT_17.1_BUGS.toml → Sprint ID = 17.1-BUGS
- 17.1_BUGS → 17.1-BUGS
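A minimal sketch of how the extraction and naming could be automated (the helper names here are illustrative, not part of the skill):

const path = require('path');

// Derive the sprint ID from a sprint filename, e.g.
// "ACTIVE_SPRINT_17.1_BUGS.toml" -> "17.1-BUGS"
function extractSprintId(sprintFile) {
  const base = path.basename(sprintFile, '.toml');          // ACTIVE_SPRINT_17.1_BUGS
  const descriptor = base.replace(/^ACTIVE_SPRINT_?/, '');  // 17.1_BUGS
  return descriptor.replace(/_/g, '-');                     // 17.1-BUGS
}

// Build a task document name following {SPRINT_ID}_{TASK_ID}_{TYPE}.md
function taskDocName(sprintFile, taskId, docType) {
  return `${extractSprintId(sprintFile)}_${taskId}_${docType}.md`;
}

// taskDocName('ACTIVE_SPRINT_17.1_BUGS.toml', 'BUG-011', 'QUESTIONS')
// -> "17.1-BUGS_BUG-011_QUESTIONS.md"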
Document Types:
- ENHANCED_PROMPT - MVP-003 enhanced task prompt
- QUESTIONS - Q&A document for task clarification
- TEST_PLAN - Test plan and validation criteria
- DESIGN - Design document and architecture
- RETROSPECTIVE - Post-task retrospective
Enforcement:
docs/patterns/Pattern-DOCS-002-TaskDocumentLinking.md
Migration:
Use this skill when the user:
When this skill is invoked, FIRST determine the mode:
CREATE Mode - Creating a new sprint from scratch
ENHANCE Mode - Adding tasks to existing sprint
- Requires an ACTIVE_SPRINT_*.toml file to modify
How to decide:
- internal/sprints/ACTIVE_SPRINT*.toml exists → Ask user which mode
Use this workflow when creating a new sprint from scratch.
Ask the user:
1. Sprint duration? (1 week, 2 weeks, etc.)
2. Main features/goals?
3. Team size and roles?
4. Priority order?
The skill will:
CRITICAL: After generating feature tasks, automatically inject normalized template tasks.
This step ensures every sprint includes quality assurance, documentation, and retrospective tasks.
Step 1: Load Template
// Load SPRINT_TEMPLATE.toml
const fs = require('fs');
const toml = require('@iarna/toml');
const templatePath = 'internal/sprints/SPRINT_TEMPLATE.toml';
const templateContent = fs.readFileSync(templatePath, 'utf-8');
const template = toml.parse(templateContent);
// Template structure:
// - template.metadata (version, task counts)
// - template.required.* (14 tasks - always include)
// - template.suggested.* (4 tasks - include with skip option)
// - template.conditional.* (8 tasks - condition-based)
// - template.retrospective.* (2 tasks - always include)
Step 2: Detect Conditions
Analyze sprint requirements to determine which conditional tasks to include:
// Condition detection logic
const sprintName = metadata.sprint_name || "";
const sprintDescription = userRequirements || "";
const combinedText = (sprintName + " " + sprintDescription).toLowerCase();
// Publishing condition
const isPublishing = combinedText.includes("publish") ||
combinedText.includes("release") ||
combinedText.includes("version bump") ||
combinedText.includes("v1.0") ||
combinedText.includes("deploy");
// User-facing changes condition
const isUserFacing = combinedText.includes("ui") ||
combinedText.includes("ux") ||
combinedText.includes("user experience") ||
combinedText.includes("interface") ||
combinedText.includes("component") ||
combinedText.includes("breaking change");
Condition Keywords:
Publishing Keywords (triggers PUB-001 to PUB-005):
User-Facing Keywords (triggers UX-001 to UX-003):
Step 3: Extract Template Tasks
// Extract task categories from template
const requiredTasks = Object.entries(template.required).map(([key, task]) => ({
...task,
category: "REQUIRED",
canSkip: false
}));
const suggestedTasks = Object.entries(template.suggested).map(([key, task]) => ({
...task,
category: "SUGGESTED",
canSkip: true,
skipJustification: "" // Prompt user for justification if skipping
}));
const conditionalTasks = [];
if (isPublishing) {
// Include publishing tasks (PUB-001 to PUB-005)
Object.entries(template.conditional)
.filter(([key, task]) => task.id.startsWith("PUB-"))
.forEach(([key, task]) => {
conditionalTasks.push({
...task,
category: "CONDITIONAL-PUBLISHING",
condition: "Sprint includes publishing/release"
});
});
}
if (isUserFacing) {
// Include UX tasks (UX-001 to UX-003)
Object.entries(template.conditional)
.filter(([key, task]) => task.id.startsWith("UX-"))
.forEach(([key, task]) => {
conditionalTasks.push({
...task,
category: "CONDITIONAL-UX",
condition: "Sprint has user-facing changes"
});
});
}
const retrospectiveTasks = Object.entries(template.retrospective).map(([key, task]) => ({
...task,
category: "RETROSPECTIVE",
canSkip: false
}));
// Total template tasks to inject
const allTemplateTasks = [
...requiredTasks, // 14 tasks
...suggestedTasks, // 4 tasks
...conditionalTasks, // 0-8 tasks (condition-dependent)
...retrospectiveTasks // 2 tasks
];
Step 4: Task ID Collision Avoidance
Template tasks use category prefixes that won't collide with feature tasks:
Template Task ID Prefixes (reserved, never use for features):
- DOC-* - Documentation tasks (CHANGELOG, README, patterns)
- QA-* - Quality assurance tasks (tests, coverage, audits)
- AGENT-* - Agent synchronization tasks (context updates)
- INFRA-* - Infrastructure tasks (git hooks, validation)
- CONFIG-* - Configuration tasks (settings schema)
- PERF-* - Performance tasks (regression testing)
- SEC-* - Security tasks (vulnerability scans)
- COMPAT-* - Compatibility tasks (cross-platform, backwards compat)
- PUB-* - Publishing tasks (pre-publish, artifacts, verification)
- UX-* - User experience tasks (upgrade docs, screenshots)
- RETRO-* - Retrospective tasks (sprint retrospective, pattern extraction)
Feature Task ID Prefixes (use for generated tasks):
- FEAT-* - New features
- AUTH-* - Authentication features
- API-* - API endpoints
- UI-* - UI components (not to be confused with UX-* template tasks)
- BUG-* - Bug fixes
- REFACTOR-* - Refactoring tasks
- [CUSTOM]-* - Domain-specific prefixes (e.g., VOICE-*, DATA-*, SYNC-*)
Validation: Before adding feature tasks, ensure none use reserved template prefixes (a check is sketched below).
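A guard like the following could enforce that rule before merging; the function name is illustrative and RESERVED_PREFIXES mirrors the list above:

const RESERVED_PREFIXES = [
  'DOC-', 'QA-', 'AGENT-', 'INFRA-', 'CONFIG-', 'PERF-',
  'SEC-', 'COMPAT-', 'PUB-', 'UX-', 'RETRO-'
];

// Throw if any generated feature task ID uses a reserved template prefix.
function assertNoReservedPrefixes(featureTasks) {
  const offenders = Object.keys(featureTasks)
    .filter(id => RESERVED_PREFIXES.some(prefix => id.startsWith(prefix)));
  if (offenders.length > 0) {
    throw new Error(`Feature tasks use reserved template prefixes: ${offenders.join(', ')}`);
  }
}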
Step 5: Merge Tasks
// Merge feature tasks with template tasks
const allTasks = {
// Feature tasks (generated from requirements)
...featureTasks,
// Template tasks (injected from SPRINT_TEMPLATE.toml)
...allTemplateTasks.reduce((acc, task) => {
acc[task.id] = task;
return acc;
}, {})
};
// Example merged task list:
// - FEAT-001: Implement user authentication (feature)
// - FEAT-002: Add password reset flow (feature)
// - DOC-001: Update CHANGELOG.md (template-required)
// - DOC-002: Update README.md (template-required)
// - QA-001: Run ripple analysis (template-required)
// - PUB-001: Pre-publish validation (template-conditional, if publishing)
// - RETRO-001: Sprint retrospective (template-retrospective)
Step 6: Generate Sprint TOML with Merged Tasks
The final sprint TOML includes both feature and template tasks:
[metadata]
version = "1.0.0"
sprint_number = 1
status = "active"
# ... rest of metadata
# Feature tasks (from requirements)
[tasks.FEAT-001]
id = "FEAT-001"
name = "Implement user authentication"
status = "pending"
phase = "feature-development"
agent = "infrastructure-agent"
# ... rich format fields
[tasks.FEAT-002]
id = "FEAT-002"
name = "Add password reset flow"
status = "pending"
phase = "feature-development"
agent = "infrastructure-agent"
# ... rich format fields
# Template tasks (injected automatically)
[tasks.DOC-001]
id = "DOC-001"
name = "Update CHANGELOG.md with sprint changes"
status = "pending"
phase = "documentation"
agent = "documentation-agent"
# ... full template task definition
[tasks.QA-001]
id = "QA-001"
name = "Run ripple analysis for breaking changes"
status = "pending"
phase = "quality-assurance"
agent = "testing-agent"
# ... full template task definition
# ... all other template tasks (DOC-002 through DOC-005, QA-002 through QA-004, etc.)
# NOTE: DOC-005 is a REFACTORING task (context optimization, -500 lines estimated)
[tasks.RETRO-001]
id = "RETRO-001"
name = "Sprint retrospective"
status = "pending"
phase = "retrospective"
agent = "documentation-agent"
# ... full template task definition
Before Template Injection:
After Template Injection:
Benefits:
Historical Bug Prevention:
Total Time Saved: 15+ hours of historical debugging prevented per sprint.
User: "Plan a sprint to build API endpoints for user management"
Generated Feature Tasks:
Injected Template Tasks (19 tasks):
Total: 5 feature + 19 template = 24 tasks
Condition Detection:
User: "Plan a sprint to release v2.0 with UI redesign"
Generated Feature Tasks:
Injected Template Tasks (28 tasks):
Total: 3 feature + 28 template = 31 tasks
Condition Detection:
SUGGESTED tasks are included by default but can be skipped with written justification:
User Prompt (during sprint creation):
Template includes 4 SUGGESTED tasks:
- PERF-001: Performance regression testing
- SEC-001: Security vulnerability scan
- COMPAT-001: Cross-platform testing (Windows/Mac/Linux)
- COMPAT-002: Backwards compatibility check
These tasks are recommended but optional. Would you like to:
1. Include all suggested tasks (recommended)
2. Skip some tasks with justification
If skipping, please provide justification for each (e.g., "No performance-critical changes this sprint").
If user skips with justification:
# Add skip justification to sprint notes
[notes]
template_task_justifications = """
SUGGESTED Tasks Skipped:
- PERF-001: No performance-critical changes (API-only sprint)
- COMPAT-001: No platform-specific code changes
"""
Justification examples:
# Create feature branch from master
git checkout master
git pull origin master
git checkout -b feature/[epic-name]
# Create sub-branches for phases
git checkout -b feature/[epic-name]/phase-1
- feature/[epic-name] - Main feature branch
- feature/[epic-name]/phase-[n] - Phase branches
- fix/[issue-name] - Bug fix branches
- refactor/[component-name] - Refactoring branches
CRITICAL: Sprint File Naming Convention
Sprint files MUST follow this pattern to appear in the Sprint dropdown:
Format: ACTIVE_SPRINT_[DESCRIPTOR].toml
Examples:
- ACTIVE_SPRINT_v0.15.4_UI_REFACTOR.toml ✅ (version-based)
- ACTIVE_SPRINT_2025-10-31.toml ✅ (date-based)
- ACTIVE_SPRINT_PHASE_1.toml ✅ (phase-based)
- ACTIVE_SPRINT_AUTH_FEATURE.toml ✅ (feature-based)
- ACTIVE_SPRINT.toml ✅ (legacy format, still works)
- sprint.toml ❌ (won't appear in dropdown)
Why: The ACTIVE_SPRINT prefix is the identifier that makes it discoverable.
The descriptor after the underscore helps users identify which sprint is which.
Location: Place in internal/sprints/ (ÆtherLight dev) or sprints/ (user projects)
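As a rough illustration of why the prefix matters, a dropdown-style loader could discover sprint files with a simple prefix filter (this is a sketch of the idea, not the actual SprintLoader implementation):

const fs = require('fs');
const path = require('path');

// List sprint files that an ACTIVE_SPRINT prefix filter would surface.
function discoverSprints(sprintDir) {
  return fs.readdirSync(sprintDir)
    .filter(f => f.startsWith('ACTIVE_SPRINT') && f.endsWith('.toml'))
    .map(f => path.join(sprintDir, f));
}

// discoverSprints('internal/sprints')
// -> ['internal/sprints/ACTIVE_SPRINT_v0.16.0.toml', ...]
// A file named sprint.toml would be filtered out, matching the rule above.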
Create sprints/ACTIVE_SPRINT_[DESCRIPTOR].toml:
CRITICAL: Use [tasks.ID] format (NOT [[epic.*.tasks]])
The SprintLoader expects flat task structure: data.tasks[taskId]
[meta]
sprint_name = "Sprint [Number]"
version = "[version]"
created = "YYYY-MM-DD"
status = "active"
# Tasks use flat structure: [tasks.TASK-ID]
[tasks.AUTH-001]
# REQUIRED FIELDS (5 fields - enforced by SprintSchemaValidator)
id = "AUTH-001" # Task ID (must match section key)
name = "Implement JWT tokens" # Task name (NOT "title")
status = "pending" # Status: pending, in_progress, completed
phase = "authentication" # Phase name
agent = "infrastructure-agent" # Agent assignment (from AgentRegistry)
# COMMON OPTIONAL FIELDS (recommended for most tasks)
skill = "publish" # Optional: Automated workflow to use (see Step 3.6)
# Only for tasks with automated workflows (publish, code-analyze, protect, etc.)
# Omit for manual work (most tasks)
description = "Add JWT token generation and validation"
estimated_time = "4-6 hours"
estimated_lines = 300
dependencies = [] # Array of task IDs this depends on
assigned_engineer = "engineer_1"
required_expertise = ["typescript", "security"]
deliverables = [
"JWT token generation function",
"Token validation middleware",
"Unit tests for token operations"
]
# ERROR HANDLING (optional but strongly recommended - see template below)
error_handling = """
- Wrap JWT parsing in try-catch
- Handle expired token errors gracefully
- Validate token format before parsing
- Return clear error messages to user
"""
validation_criteria = [
"All token operations have error boundaries",
"Invalid tokens don't crash the app",
"Clear error messages for auth failures"
]
For quick sprints or simple tasks, you only need the 5 required fields:
[meta]
sprint_name = "Quick Bug Fix Sprint"
version = "1.0"
status = "active"
created = "2025-11-05"
[tasks.BUG-001]
id = "BUG-001"
name = "Fix login redirect"
status = "pending"
phase = "bugfix"
agent = "general-purpose"
This minimal format is fully compatible with the sprint system.
For complex features, sprint tasks can include rich metadata for better context and tracking:
[tasks.FEATURE-001]
# REQUIRED FIELDS (always needed)
id = "FEATURE-001"
name = "Implement Enhanced Prompt Display"
status = "pending"
phase = "feature-development"
agent = "ui-agent"
# NARRATIVE FIELDS (helps understand WHY and context)
why = """
User expectation: Enhancement button should enhance text in-place.
Current state: Button disabled, function is empty stub.
Users frustrated by missing core feature.
"""
context = """
Feature was removed during v0.16.x refactoring to fix layout bugs.
PromptEnhancer.ts exists but not integrated with UI.
Pattern-UX-001 emphasizes real-time feedback for user actions.
"""
reasoning_chain = [
"1. User types minimal prompt in text area",
"2. Clicks โจ enhancement button",
"3. PromptEnhancer analyzes and enriches prompt with context",
"4. Enhanced prompt replaces text in textarea (in-place)",
"5. User reviews enhanced prompt before sending"
]
pattern_context = "Pattern-UX-001: Real-time feedback improves user experience by 50%"
success_impact = """
After FEATURE-001 complete:
✅ Enhancement button fully functional
✅ Users write better prompts with less effort
✅ Pattern-driven development workflow enabled
✅ User satisfaction increased (feature restoration)
"""
# TESTING FIELDS (TDD integration)
test_requirements = """
TDD Requirements (UI Task - 70% coverage):
RED Phase - Write tests FIRST:
1. enhancePrompt() called when ✨ button clicked
2. Original text replaced with enhanced text (in-place)
3. Enhancement completes in < 3 seconds
4. Error handling: API failures show user-friendly message
5. Keyboard shortcut: Ctrl+Shift+E triggers enhancement
GREEN Phase - Implement to pass tests
REFACTOR Phase - Optimize performance
"""
test_files = [
"vscode-lumina/test/commands/enhancePrompt.test.ts",
"vscode-lumina/test/services/PromptEnhancer.test.ts"
]
test_coverage_requirement = 0.7 # 70% for UI tasks
validation_criteria = [
"โจ button enabled and clickable",
"Click triggers PromptEnhancer.enhanceWithPatterns()",
"Enhanced text replaces original in textarea",
"Success notification shown",
"Keyboard shortcut Ctrl+Shift+E works"
]
# PERFORMANCE & PATTERNS
performance_target = "Enhancement completes < 3 seconds"
patterns = [
"Pattern-UX-001", # Real-time feedback
"Pattern-CONTEXT-002" # Context enrichment
]
# FILE & DELIVERABLE TRACKING
estimated_lines = 150
estimated_time = "2-3 hours"
files_to_modify = [
"vscode-lumina/src/commands/voicePanel.ts (line 1088-1112 - enhance button handler)"
]
files_to_create = [
"vscode-lumina/test/commands/enhancePrompt.test.ts"
]
deliverables = [
"Enable โจ enhancement button (remove disabled attribute)",
"Implement enhanceWithPatterns() function",
"Show success notification after enhancement",
"Add keyboard shortcut Ctrl+Shift+E",
"Unit tests with 70% coverage"
]
# ERROR HANDLING (strongly recommended)
error_handling = """
- Wrap PromptEnhancer.enhance() in try-catch
- Handle API timeout errors (show "Enhancement timed out" message)
- Handle network errors gracefully
- Preserve original text on error (don't lose user input)
- Show actionable error messages with retry option
"""
# DEPENDENCIES (task ordering)
dependencies = [] # No dependencies, can start immediately
Required fields:
- id - Task identifier
- name - Task name
- status - Current status (pending, in_progress, completed)
- phase - Phase name
- agent - Agent assignment (query AgentRegistry for available agents)

Narrative fields:
- why - User pain point or business justification
- context - Background information, related systems, history
- reasoning_chain - Step-by-step logic flow (array)
- pattern_context - Link to architectural patterns
- success_impact - Measurable benefits after completion

Testing fields:
- test_requirements - TDD requirements with RED/GREEN/REFACTOR phases
- test_files - Test file paths (array)
- test_coverage_requirement - Coverage percentage (0.7-0.9 by task type)
- validation_criteria - Acceptance criteria (array)

Performance and pattern fields:
- performance_target - Target completion time or throughput
- patterns - Referenced patterns (array)

File and deliverable tracking:
- estimated_lines - Lines of code estimate
- estimated_time - Time estimate (e.g., "2-3 hours", "1-2 days")
- files_to_modify - Files to change (with line numbers for precision)
- files_to_create - New files (array)
- deliverables - Specific deliverables (array)

Error handling:
- error_handling - Error boundaries and recovery strategies
- validation_criteria - Error validation checklist

Dependencies:
- dependencies - Array of task IDs that must complete first

The [metadata] section can include rich sprint-level information:
[metadata]
# Basic metadata (always include)
version = "0.16.0"
sprint_number = 2
start_date = "2025-11-03"
target_completion = "2025-11-04"
status = "active"
# Focus areas (helps prioritize work)
[metadata.focus]
primary = "Fix critical UI bugs from v0.16.0 release"
secondary = "Stabilize voice panel and control panel"
tertiary = "Address console errors"
# Priority order (phase-by-phase priorities)
[metadata.priority_order]
phase_1 = "Bug Fixes - CRITICAL (blocks all features)"
phase_2 = "UI Architecture - Restructure layout hierarchy"
phase_3 = "Documentation - Update patterns and guides"
# Sprint progression (links to related sprints)
[metadata.progression]
previous_sprint = "archive/SPRINT_v0.15.4_UI_REFACTOR.toml"
next_sprint = "DOGFOODING_SPRINT.toml"
triggers_hotfix_release = true
git_branch = "feature/v0.16.1-ui-bugfix"
# Audit context (if sprint follows an audit)
[metadata.audit_context]
audit_file = "v0.16.0_AUDIT_RESULTS.md"
audit_score = "15/80 (19%)"
critical_bugs_found = 4
# Team information
[metadata.team]
team_size = 1
default_engineer = "engineer_1"
[[metadata.team.engineers]]
id = "engineer_1"
name = "BB_Aelor"
expertise = ["typescript", "vscode-extensions", "ui-ux"]
available_agents = ["infrastructure-agent", "ui-agent", "documentation-agent"]
max_parallel_tasks = 2
daily_capacity_hours = 8
Add a [progress] section for sprint health monitoring:
[progress]
total_tasks = 22
completed_tasks = 5
in_progress_tasks = 1
pending_tasks = 16
completion_percentage = 23
Note: Progress fields can be auto-calculated or manually updated as sprint progresses.
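For the auto-calculated option, a small helper could derive the [progress] values from the parsed tasks table (a sketch, assuming tasks use the standard status values):

// Compute [progress] fields from the parsed sprint's tasks table.
function computeProgress(tasks) {
  const statuses = Object.values(tasks).map(t => t.status);
  const total = statuses.length;
  const completed = statuses.filter(s => s === 'completed').length;
  const inProgress = statuses.filter(s => s === 'in_progress').length;
  const pending = statuses.filter(s => s === 'pending').length;
  return {
    total_tasks: total,
    completed_tasks: completed,
    in_progress_tasks: inProgress,
    pending_tasks: pending,
    completion_percentage: total === 0 ? 0 : Math.round((completed / total) * 100)
  };
}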
Add a [notes] section for strategic documentation:
[notes]
dogfooding_strategy = """
Release early, fix fast via patches.
- v0.16.0: Initial release with SYNC-001
- v0.16.1: UI bug fixes (this sprint)
- v0.16.2: Enhancement restoration
"""
priority_order = """
1. UI-001, UI-002 (button functionality - blocking workflow)
2. UI-003 (resize - UX issue)
3. UI-004, UI-005 (visual cleanup)
"""
retrospective = """
Lessons learned:
- Pattern-based refactoring works but needs careful testing
- UI changes require comprehensive visual validation
- Dogfooding catches bugs early
"""
Different task categories have different test coverage requirements:
| Task Category | Coverage Required | Test Type |
|---|---|---|
| Infrastructure | 90% | Unit tests, integration tests |
| API | 85% | Endpoint tests, error case tests |
| UI | 70% | Component tests, user interaction tests |
| Documentation | 0% | Manual review only |
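When setting test_coverage_requirement automatically, a lookup keyed on task category can mirror the table above (the lowercase category keys are an assumption about how tasks are labelled):

const COVERAGE_BY_CATEGORY = {
  infrastructure: 0.9,  // Unit tests, integration tests
  api: 0.85,            // Endpoint tests, error case tests
  ui: 0.7,              // Component tests, user interaction tests
  documentation: 0.0    // Manual review only
};

function coverageFor(category) {
  // Default to the strictest requirement if the category is unknown.
  return COVERAGE_BY_CATEGORY[(category || '').toLowerCase()] ?? 0.9;
}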
When generating sprints:
- Set test_coverage_requirement based on task category
Choose appropriate agents based on task type:
| Task Type | Recommended Agent | Expertise |
|---|---|---|
| Service architecture, middleware | infrastructure-agent | TypeScript services, orchestration |
| UI components, layouts, styling | ui-agent | React, HTML/CSS, user experience |
| REST APIs, GraphQL, endpoints | api-agent | API design, error handling |
| Patterns, guides, README | documentation-agent | Technical writing, examples |
| Native desktop features | tauri-desktop-agent | Rust, Tauri, OS integration |
| Bug fixes (any category) | general-purpose | Cross-functional debugging |
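The same table can be encoded as a simple category-to-agent lookup (the category keys are assumptions; the agent names come from the table above):

const AGENT_BY_CATEGORY = {
  infrastructure: 'infrastructure-agent', // Services, middleware, orchestration
  ui: 'ui-agent',                         // Components, layouts, styling
  api: 'api-agent',                       // REST, GraphQL, endpoints
  documentation: 'documentation-agent',   // Patterns, guides, README
  desktop: 'tauri-desktop-agent',         // Native desktop features
  bugfix: 'general-purpose'               // Cross-functional debugging
};

function recommendedAgent(category) {
  return AGENT_BY_CATEGORY[(category || '').toLowerCase()] ?? 'general-purpose';
}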
How to query available agents:
- internal/agents/[agent]-context.md
Use Minimal Format (5 required fields) when:
Use Rich Format (all metadata) when:
Rich format benefits:
Here's a complete example showing both minimal and rich tasks:
[metadata]
version = "1.0.0"
sprint_number = 1
start_date = "2025-11-05"
target_completion = "2025-11-12"
status = "active"
[metadata.focus]
primary = "Launch core authentication feature"
secondary = "Stabilize user management"
tertiary = "Documentation updates"
[metadata.team]
team_size = 2
default_engineer = "engineer_1"
[[metadata.team.engineers]]
id = "engineer_1"
expertise = ["typescript", "react", "node"]
available_agents = ["infrastructure-agent", "ui-agent"]
max_parallel_tasks = 2
# MINIMAL TASK (quick bug fix)
[tasks.BUG-001]
id = "BUG-001"
name = "Fix login redirect loop"
status = "pending"
phase = "bugfix"
agent = "general-purpose"
# RICH TASK (complex feature)
[tasks.AUTH-001]
id = "AUTH-001"
name = "Implement JWT Authentication"
status = "pending"
phase = "authentication"
agent = "infrastructure-agent"
why = """
Users need secure authentication to access protected resources.
Current state: No authentication system, all endpoints public.
Security risk: Data exposure without access control.
"""
context = """
Industry standard: JWT tokens for stateless authentication.
Pattern-SECURITY-001 emphasizes defense in depth.
Integration required with existing user database.
"""
reasoning_chain = [
"1. User submits credentials (email + password)",
"2. Server validates against database (bcrypt hash)",
"3. Generate JWT token with user ID and roles",
"4. Return token to client (httpOnly cookie)",
"5. Client includes token in subsequent requests",
"6. Middleware validates token on protected routes"
]
pattern_context = "Pattern-SECURITY-001: Defense in depth, Pattern-AUTH-002: JWT best practices"
success_impact = """
After AUTH-001 complete:
✅ Secure authentication system protecting user data
✅ Token-based access control for all endpoints
✅ Password hashing with bcrypt (industry standard)
✅ XSS protection via httpOnly cookies
"""
test_requirements = """
TDD Requirements (Infrastructure Task - 90% coverage):
RED Phase - Write tests FIRST:
1. generateToken(userId) returns valid JWT
2. validateToken(token) returns user object
3. validateToken(expiredToken) throws error
4. validateToken(invalidToken) throws error
5. Password hashing with bcrypt (verify salt rounds >= 10)
6. Login endpoint returns 401 for invalid credentials
7. Protected endpoint returns 403 without token
GREEN Phase - Implement to pass tests
REFACTOR Phase - Optimize token generation performance
"""
test_files = [
"backend/test/auth/jwt.test.ts",
"backend/test/middleware/auth.test.ts"
]
test_coverage_requirement = 0.9
validation_criteria = [
"All auth operations have error boundaries",
"Tokens expire after 24 hours",
"Password never stored in plain text",
"Login endpoint rate-limited (5 attempts/min)",
"Protected endpoints require valid token"
]
performance_target = "Token validation < 50ms"
patterns = ["Pattern-SECURITY-001", "Pattern-AUTH-002"]
estimated_lines = 500
estimated_time = "8-10 hours"
files_to_create = [
"backend/src/auth/jwt.ts",
"backend/src/middleware/auth.ts",
"backend/test/auth/jwt.test.ts"
]
files_to_modify = [
"backend/src/routes/user.ts (lines 15-30 - add auth middleware)"
]
deliverables = [
"JWT token generation function",
"Token validation middleware",
"Password hashing utility (bcrypt)",
"Login endpoint (/api/auth/login)",
"Protected route example",
"Unit tests (90% coverage)"
]
error_handling = """
- Wrap JWT operations in try-catch
- Handle malformed tokens gracefully
- Handle expired tokens with 401 status
- Rate limit login attempts (prevent brute force)
- Log authentication failures for security audit
"""
dependencies = []
[progress]
total_tasks = 2
completed_tasks = 0
in_progress_tasks = 0
pending_tasks = 2
completion_percentage = 0
The skill combines:
# 1. Initialize sprint structure
mkdir -p sprints
cd sprints
# 2. Create sprint file
cat > ACTIVE_SPRINT_$(date +%Y%m%d).toml << 'EOF'
[Generated TOML content]
EOF
# 3. Create Git branches
git checkout -b feature/[epic-name]
git push -u origin feature/[epic-name]
# 4. Create phase branches
for phase in 1 2 3; do
git checkout -b feature/[epic-name]/phase-$phase
git push -u origin feature/[epic-name]/phase-$phase
done
# 5. Switch back to first phase
git checkout feature/[epic-name]/phase-1
- /publish skill
- refactor/[name] branch
User: "Plan a 2-week sprint for adding voice commands to our app"
Claude:
- ACTIVE_SPRINT_20251030.toml with voice feature epic
- feature/voice-commands
- feature/voice-commands/phase-1 (setup)
- feature/voice-commands/phase-2 (implementation)
- feature/voice-commands/phase-3 (testing)
Sprint planning succeeds when:
STRONGLY RECOMMENDED (optional field, but critical for production code):
[tasks.TASK-ID]
id = "TASK-ID"
name = "Task Name"
# ... other required fields ...
# ERROR HANDLING (optional but strongly recommended)
error_handling = """
Best practices for ALL production tasks:
1. Identify failure points:
- [ ] List all parsing operations
- [ ] List all file I/O operations
- [ ] List all API/network calls
- [ ] List all user input validations
2. Implement error boundaries:
- [ ] Wrap in try-catch blocks
- [ ] Log errors appropriately
- [ ] Show user-friendly messages
- [ ] Provide recovery options
3. Prevent cascade failures:
- [ ] Errors isolated to component
- [ ] Application continues functioning
- [ ] No corruption of state
"""
validation_criteria = [
"No unhandled exceptions possible",
"All errors logged with context",
"User sees actionable error messages",
"Application remains stable after errors"
]
NEVER AGAIN: A simple duplicate ID crashed VS Code entirely for 9+ hours.
Requirements added after BUG-012:
Use this workflow when adding tasks to an existing sprint.
If multiple sprint files exist:
ls internal/sprints/ACTIVE_SPRINT*.toml
Ask user which sprint to enhance (if multiple exist).
Target: internal/sprints/ACTIVE_SPRINT_[DESCRIPTOR].toml
Read the target sprint file to understand:
cat internal/sprints/ACTIVE_SPRINT_v0.16.0.toml
Critical: Understand the ID pattern:
Based on user requirements, generate new tasks following the sprint's existing patterns:
Required fields (MANDATORY - 5 fields):
- id - Continue the ID pattern (e.g., if RELEASE-007 exists, next is RELEASE-008); see the numbering sketch after the example task below
- name - Task name
- status - Usually "pending" for new tasks
- phase - Match existing phases or create new one
- agent - Appropriate agent from AgentRegistry

Optional fields (use rich format for complex tasks):
- description, why, context, reasoning_chain
- test_requirements, test_files, test_coverage_requirement
- estimated_time, estimated_lines
- deliverables, files_to_create, files_to_modify
- error_handling, validation_criteria
- dependencies - Reference existing task IDs if dependent

Example new task:
[tasks.RELEASE-001]
id = "RELEASE-001"
name = "Verify sprint-plan skill enhancements"
status = "pending"
phase = "release"
agent = "infrastructure-agent"
estimated_time = "1-2 hours"
dependencies = []
deliverables = [
"Sprint-plan skill has ENHANCE mode",
"ENHANCE mode tested with current sprint",
"Documentation updated"
]
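Continuing an existing ID sequence can be automated with a small helper like this (a sketch; the function name is illustrative):

// Given existing task IDs and a prefix, return the next ID in the sequence,
// e.g. nextTaskId(['RELEASE-007', 'RELEASE-003'], 'RELEASE') -> 'RELEASE-008'
function nextTaskId(existingIds, prefix) {
  const numbers = existingIds
    .filter(id => id.startsWith(`${prefix}-`))
    .map(id => parseInt(id.slice(prefix.length + 1), 10))
    .filter(n => !Number.isNaN(n));
  const next = (numbers.length ? Math.max(...numbers) : 0) + 1;
  return `${prefix}-${String(next).padStart(3, '0')}`;
}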
STOP. Before proceeding to Step 4, you MUST validate agent assignments.
Why this is critical:
Validation Process (Answer OUT LOUD):
For EACH new task, complete this checklist:
Task: [TASK-ID] - [Task Name]
1. ✅ What is the task category?
- [ ] UI/UX (components, layouts, styling, state management, user interactions)
- [ ] Infrastructure (middleware, services, orchestration, CI/CD)
- [ ] API (REST endpoints, GraphQL, error handling)
- [ ] Documentation (patterns, guides, README)
- [ ] Database (schemas, migrations, queries)
- [ ] Testing (test infrastructure, test patterns)
2. ✅ What are the key task indicators?
Extract keywords from task name + description:
- UI keywords: "component", "layout", "CSS", "state", "resize", "focus", "click", "paste", "persistence", "text area", "button", "modal", "webview"
- Infrastructure keywords: "service", "middleware", "orchestration", "pipeline", "publishing", "deployment"
- API keywords: "endpoint", "REST", "GraphQL", "HTTP", "authentication"
3. ✅ Query the Agent Selection Guide:
| Task Type | Recommended Agent |
|-----------|------------------|
| Service architecture, middleware | infrastructure-agent |
| UI components, layouts, styling | ui-agent |
| REST APIs, GraphQL, endpoints | api-agent |
| Patterns, guides, README | documentation-agent |
| Native desktop features | tauri-desktop-agent |
| Bug fixes (any category) | general-purpose |
4. ✅ Verify the agent context file (internal/agents/{agent}-context.md):
Read the agent's responsibilities section to confirm capability match.
Example:
- ui-agent-context.md line 14-19:
* Create React components (TypeScript)
* Implement responsive layouts (CSS, Tailwind)
* Handle user interactions (events, state)
* Ensure accessibility (WCAG 2.1 AA)
Does the agent have the required capabilities? YES/NO
5. ✅ Final verification:
- Task category matches agent specialization? YES/NO
- Agent context confirms capability? YES/NO
- If NO to either: CORRECT THE AGENT NOW
Common Agent Assignment Mistakes:
❌ DON'T assign to infrastructure-agent:
❌ DON'T assign to ui-agent:
Example Validation (UX Task):
Task: UX-014 - Make Ctrl+Enter work globally (not just in text area)
1. Task category: UI/UX (user interaction, keyboard shortcut)
2. Keywords: "Ctrl+Enter", "keyboard", "focus", "text area", "send to terminal"
3. Agent Selection Guide: UI components/layouts/styling → ui-agent
4. Agent context (ui-agent-context.md): "Handle user interactions (events, state)" ✅
5. Final verification:
- UI category matches ui-agent? YES ✅
- Agent has keyboard event handling capability? YES ✅
- Assigned agent: ui-agent ✅
If you skip this step, you WILL assign wrong agents (proven by this conversation).
STOP. Before proceeding to Step 4, you MUST check if tasks need skill assignments.
Why this is critical:
Validation Process (Answer OUT LOUD):
For EACH new task, complete this checklist:
Task: [TASK-ID] - [Task Name]
1. ✅ Does this task match an automated workflow?
Check task name and description for keywords:
- Publishing/releasing keywords: "publish", "release", "npm", "GitHub release", "version bump"
- Code analysis keywords: "analyze", "audit", "scan", "complexity", "technical debt"
- Protection keywords: "protect", "annotate", "@protected", "@immutable"
- Sprint planning keywords: "sprint plan", "create sprint", "enhance sprint"
- Validation keywords: "validate protection", "pre-commit", "hook"
2. ✅ Query Available Skills (from .claude/skills/):
| Skill Name | Purpose | When to Use |
|-----------|---------|-------------|
| `publish` | Publishing releases (npm + GitHub) | Task involves version bumping, npm publish, or GitHub releases |
| `code-analyze` | Codebase analysis and pattern detection | Task involves analyzing code, detecting patterns, or complexity metrics |
| `protect` | Code protection annotation | Task involves annotating code with @protected/@immutable |
| `protection-audit` | Protection system audit | Task involves auditing protection system, generating reports |
| `sprint-plan` | Sprint creation and enhancement | Task involves creating new sprints or adding tasks to existing sprints |
| `validate-protection` | Protection enforcement validation | Task involves validating protection enforcement via pre-commit hooks |
3. ✅ Decision Tree (see the keyword-matcher sketch after this checklist):
- Does task name contain "Publish" or "Release"? → skill = "publish"
- Does task involve code analysis or audit? → skill = "code-analyze"
- Does task involve protecting code? → skill = "protect"
- Does task involve sprint planning? → skill = "sprint-plan"
- Does task involve protection validation? → skill = "validate-protection" or "protection-audit"
- No match? → Omit skill field (most tasks are manual work)
4. ✅ Verify skill exists:
Skill file location: .claude/skills/{skill-name}/SKILL.md
- Does the skill file exist? YES/NO
- Does the skill description match the task? YES/NO
5. ✅ Final verification:
- Task matches automated workflow pattern? YES/NO
- Skill exists and matches task? YES/NO
- If YES to both: Add `skill = "{skill-name}"` to task
- If NO: Omit skill field (manual task, agent implements from scratch)
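The decision tree above can be expressed as a small keyword matcher (a sketch; the keyword lists come from the checklist and are not exhaustive, so the result should still be verified against the skill files):

// Return a skill name for automated workflows, or null for manual work.
function suggestSkill(taskName, description = '') {
  const text = `${taskName} ${description}`.toLowerCase();
  if (/\b(publish|release)\b/.test(text)) return 'publish';
  if (/\b(analyze|audit|scan|complexity|technical debt)\b/.test(text)) return 'code-analyze';
  if (/@protected|@immutable|\bprotect\b|\bannotate\b/.test(text)) return 'protect';
  if (/sprint plan|create sprint|enhance sprint/.test(text)) return 'sprint-plan';
  if (/validate protection|pre-commit|hook/.test(text)) return 'validate-protection';
  return null; // No match: omit the skill field (manual work)
}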
Common Skill Assignment Patterns:
✅ DO assign skill for automated workflows:
❌ DON'T assign skill for manual work:
Example Validation (Publishing Task):
Task: PATCH-001 - Publish v0.16.7
1. Automated workflow match: YES ✅
Keywords: "Publish", "v0.16.7" (version number)
2. Available Skills: Check "publish" skill
- Purpose: Publishing releases (npm + GitHub)
- When to Use: Task involves version bumping, npm publish, or GitHub releases
- Match? YES ✅
3. Decision Tree: Task name contains "Publish" → skill = "publish" ✅
4. Verify skill exists:
- .claude/skills/publish/SKILL.md exists? YES ✅
- Skill description matches task? YES ✅ (automates npm publish + GitHub release)
5. Final verification:
- Task matches automated workflow? YES ✅ (publishing is automated)
- Skill exists and matches? YES ✅
- Assigned skill: skill = "publish" ✅
Example Validation (Manual Task):
Task: PROTECT-000C - Implement 'Start This Task' with prompt generation
1. Automated workflow match: NO ❌
Keywords: "Implement", "prompt generation" (custom coding)
2. Available Skills: No match
- Not publishing (no skill = "publish")
- Not analyzing (no skill = "code-analyze")
- Not protecting (no skill = "protect")
- Custom feature development
3. Decision Tree: No automated workflow match → Omit skill field ✅
4. Verify: Manual coding task, agent implements from scratch
5. Final verification:
- Task matches automated workflow? NO ❌
- Omit skill field: YES ✅ (manual work, no automation)
- agent = "infrastructure-agent" (has expertise)
- skill = (omitted) (manual implementation)
Key Distinction:
# Example 1: Automated workflow (has skill)
[tasks.PATCH-001]
id = "PATCH-001"
name = "Publish v0.16.7"
agent = "infrastructure-agent" # WHO: Has publishing expertise
skill = "publish" # WHAT: Use automated publish workflow
# Example 2: Manual work (no skill)
[tasks.PROTECT-000C]
id = "PROTECT-000C"
name = "Implement 'Start This Task' with prompt generation"
agent = "infrastructure-agent" # WHO: Has infrastructure expertise
# skill = (omitted) # WHAT: Manual implementation (no automation)
If you skip this step, tasks won't know which automated workflow to use (incomplete autonomous execution).
Recalculate progress based on ALL tasks (existing + new):
[progress]
total_tasks = 30 # Old total (23) + new tasks (7)
completed_tasks = 22 # Keep existing completed count
in_progress_tasks = 1 # Update if changed
pending_tasks = 7 # Old pending (0) + new tasks (7)
completion_percentage = 73 # (22 / 30) * 100
Formula: completion_percentage = (completed_tasks / total_tasks) * 100
CRITICAL: The [progress] section must reflect actual task counts, not stale values.
Update version if sprint version changed:
[metadata]
version = "0.16.7" # Update if user specified new version
# ... rest of metadata stays the same
Note: Only update metadata if user explicitly requests version change. Otherwise, preserve existing metadata.
If new tasks depend on existing tasks, update the dependencies array:
[tasks.PATCH-001]
id = "PATCH-001"
name = "Publish v0.16.7"
status = "pending"
phase = "release"
agent = "infrastructure-agent"
dependencies = ["RELEASE-001", "RELEASE-002", "RELEASE-003", "RELEASE-004", "RELEASE-005", "RELEASE-006", "RELEASE-007"]
Validation: Ensure no circular dependencies (A depends on B, B depends on A).
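A lightweight cycle check over the dependencies arrays might look like this (a sketch, assuming tasks is the parsed [tasks] table):

// Detect circular dependencies with a depth-first search over the dependency graph.
function findCircularDependency(tasks) {
  const visiting = new Set();
  const visited = new Set();

  function visit(id, path) {
    if (visiting.has(id)) return [...path, id];      // Cycle found
    if (visited.has(id) || !tasks[id]) return null;  // Already checked or unknown ID
    visiting.add(id);
    for (const dep of tasks[id].dependencies || []) {
      const cycle = visit(dep, [...path, id]);
      if (cycle) return cycle;
    }
    visiting.delete(id);
    visited.add(id);
    return null;
  }

  for (const id of Object.keys(tasks)) {
    const cycle = visit(id, []);
    if (cycle) return cycle; // e.g. ['PATCH-001', 'RELEASE-001', 'PATCH-001']
  }
  return null;
}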
Before writing, validate the enhanced TOML:
Required checks:
- Uses [tasks.TASK-ID] format (NOT [[epic.*.tasks]])
- No ${} in strings
- Task keys match id fields ([tasks.FOO-001] must have id = "FOO-001")
Validation command:
node scripts/validate-sprint-schema.js
If the validation script doesn't exist, use a manual TOML parse:
node -e "const toml = require('@iarna/toml'); toml.parse(require('fs').readFileSync('internal/sprints/ACTIVE_SPRINT.toml', 'utf-8')); console.log('✅ TOML valid');"
Write the enhanced sprint TOML to the same file:
CRITICAL: Follow Pre-Flight Checklist from CLAUDE.md before editing ACTIVE_SPRINT.toml:
# Backup original (safety)
cp internal/sprints/ACTIVE_SPRINT_v0.16.0.toml internal/sprints/ACTIVE_SPRINT_v0.16.0.toml.bak
# Use Edit tool to add new tasks to existing sprint file
# Preserve ALL existing tasks and metadata
# Add new tasks at appropriate location (usually at end, or in logical phase grouping)
Important: Use Edit tool, NOT Write tool, to preserve existing content.
If on master branch:
If on feature branch:
git add internal/sprints/ACTIVE_SPRINT_v0.16.0.toml
git commit -m "enhance(sprint): add 7 release validation tasks
- RELEASE-001: Verify sprint-plan skill enhancements
- RELEASE-002: Document Sprint Planner UI state
- RELEASE-003: Verify sub-package publication
- RELEASE-004: Update repository README
- RELEASE-005: Update npm package docs
- RELEASE-006: Update CHANGELOG
- RELEASE-007: Pre-publish audit
Updated PATCH-001 dependencies to include all RELEASE tasks.
Updated progress: 22/30 complete (73%).
"
Show user what was added:
✅ Sprint Enhanced Successfully
Target: internal/sprints/ACTIVE_SPRINT_v0.16.0.toml
New Tasks Added:
- RELEASE-001: Verify sprint-plan skill enhancements (pending)
- RELEASE-002: Document Sprint Planner UI state (pending)
- RELEASE-003: Verify sub-package publication (pending)
- RELEASE-004: Update repository README (pending)
- RELEASE-005: Update npm package documentation (pending)
- RELEASE-006: Update CHANGELOG and release notes (pending)
- RELEASE-007: Pre-publish audit (pending)
Updated Existing Tasks:
- PATCH-001: Added dependencies on RELEASE-001 through RELEASE-007
Progress:
- Before: 22/23 tasks (95.7%)
- After: 22/30 tasks (73.3%)
- Status: 7 new pending tasks added
Next Steps:
1. Review the enhanced sprint file
2. Start working on RELEASE-001
3. Use TodoWrite to track release tasks
Sprint enhancement succeeds when:
| Aspect | CREATE Mode | ENHANCE Mode |
|---|---|---|
| When to use | New sprint from scratch | Add tasks to existing sprint |
| User says | "plan sprint", "create sprint" | "add tasks", "enhance sprint" |
| Input | User requirements, goals | Existing sprint + new tasks |
| Output | New ACTIVE_SPRINT_*.toml file | Updated existing sprint file |
| Git workflow | Create feature branch + commit | Commit to current branch |
| Validation | Standard TOML validation | + Check for duplicate IDs |
| Progress | Calculate from new tasks | Recalculate with existing + new |
| Complexity | Higher (full sprint structure) | Lower (add tasks only) |