| name | spec-generator |
| description | Generates comprehensive specifications (spec.md, plan.md, tasks.md with embedded tests) for SpecWeave increments using proven templates and flexible structure. Activates when users create new increments, plan features, or need structured documentation. Keywords: specification, spec, plan, tasks, tests, increment planning, feature planning, requirements. |
Purpose: Automatically generate comprehensive specification documentation (spec.md, plan.md, tasks.md with embedded tests) for SpecWeave increments using proven templates and flexible, context-aware structure.
When to Use: when users create new increments, plan features, or run `/specweave:inc`
Based On: Flexible Spec Generator (V2) - context-aware, non-rigid templates
Adapts to Context:
YAML Frontmatter (v0.31.0+ MANDATORY):
---
increment: 0001-feature-name
title: "Feature Title"
type: feature
priority: P1
status: planned
created: 2025-12-04
project: my-project # REQUIRED - target project for living docs sync
board: my-board # REQUIRED for 2-level structures (ADO area paths, JIRA boards)
---
Detect Structure Level First (see src/utils/structure-level-detector.ts):
- 1-level structure: `project:` field REQUIRED
- 2-level structure: `project:` AND `board:` fields REQUIRED

Core Sections (Always Present):
# Product Specification: [Increment Name]
**Increment**: [ID]
**Title**: [Title]
**Status**: Planning
**Priority**: [P0-P3]
**Created**: [Date]
## Executive Summary
[1-2 paragraph overview]
## Problem Statement
### Current State
### User Pain Points
### Target Audience
## User Stories & Acceptance Criteria
<!--
⚠️ PER-US PROJECT TARGETING (v0.33.0+):
Each user story MUST have **Project**: (and **Board**: for 2-level) fields.
The LLM MUST resolve these from context - see RULE 0 in increment-planner.
-->
### US-001: [Title]
**Project**: [resolved-from-context] <!-- REQUIRED - actual project ID from config/folders -->
**Board**: [resolved-from-context] <!-- REQUIRED for 2-level structures -->
**As a** [user type]
**I want** [goal]
**So that** [benefit]
**Acceptance Criteria**:
- [ ] **AC-US1-01**: [Criterion 1]
- [ ] **AC-US1-02**: [Criterion 2]
---
### Per-US Project Resolution (v0.33.0+ MANDATORY)
**USE ALL AVAILABLE CONTEXT TO RESOLVE PROJECT/BOARD:**
Before generating spec.md, analyze:
1. **Living docs folders**: `ls .specweave/docs/internal/specs/` → actual project IDs
2. **Recent increment patterns**: `grep "**Project**:" .specweave/increments/*/spec.md`
3. **Config projectMappings**: Exact project IDs from config
4. **Feature keywords**: Map to actual projects (not generic terms)
**Resolution Example:**
Feature: "Add OAuth login to React frontend"
Detected keywords: "React", "frontend", "login"

Step 1: Check living docs → folders: frontend-app/, backend-api/, shared/
Step 2: "frontend" keyword → matches "frontend-app" folder
Step 3: Assign Project: frontend-app (NOT "frontend"!)

If cross-cutting ("OAuth" = both frontend + backend):
- US-001 (Login UI) → Project: frontend-app
- US-002 (Auth API) → Project: backend-api
**NEVER:**
- ❌ Use generic keywords as project names ("frontend", "backend")
- ❌ Ask user when context provides the answer
- ❌ Leave `{{PROJECT_ID}}` placeholders
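The resolution order above can be sketched as a small helper. Note that `resolveProject`, its parameters, and the fallback behavior are illustrative assumptions for this document, not part of the SpecWeave API:

```typescript
// Hypothetical sketch: resolve a concrete project ID from feature keywords
// and the actual living-docs folders, never emitting a generic keyword.
function resolveProject(keywords: string[], projectFolders: string[]): string | null {
  for (const kw of keywords.map((k) => k.toLowerCase())) {
    // Prefer an exact folder match, then a folder containing the keyword
    // (e.g. "frontend" -> "frontend-app"), so the result is always a real ID.
    const match =
      projectFolders.find((f) => f === kw) ??
      projectFolders.find((f) => f.includes(kw));
    if (match) return match;
  }
  return null; // no context match: caller falls back to config projectMappings
}

// "frontend" resolves to the real folder "frontend-app", never the generic "frontend"
resolveProject(["react", "frontend", "login"], ["frontend-app", "backend-api", "shared"]);
```

Returning `null` rather than a guessed name keeps the "never invent project IDs" rule enforceable: the caller can then consult `projectMappings` or recent increments instead.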
## Success Metrics
[How we'll measure success]
## Non-Goals (Out of Scope)
[What we're NOT doing in this increment]
Flexible Sections (Context-Dependent):
Adapts to Complexity:
Core Sections:
# Technical Plan: [Increment Name]
## Architecture Overview
[System design, components, interactions]
## Component Architecture
### Component 1
[Purpose, responsibilities, interfaces]
## Data Models
[Entities, relationships, schemas]
## Implementation Strategy
### Phase 1: [Name]
### Phase 2: [Name]
## Testing Strategy
[Unit, integration, E2E approach]
## Deployment Plan
[How we'll roll this out]
## Risks & Mitigations
Smart Task Creation:
# Implementation Tasks: [Increment Name]
## Task Overview
**Total Tasks**: [N]
**Estimated Duration**: [X weeks]
**Priority**: [P0]
---
## Phase 1: Foundation (Week 1) - X tasks
### T-001: [Task Title]
**Priority**: P0
**Estimate**: [X hours]
**Status**: pending
**Description**:
[What needs to be done]
**Files to Create/Modify**:
- `path/to/file.ts`
**Implementation**:
```[language]
[Code example or approach]
```

**Acceptance Criteria**:
- [ ] [Criterion]

[Repeat for all tasks]

[Dependency graph if complex]
### 4. Test Strategy Generation (tests.md)
**Comprehensive Test Coverage**:
```markdown
# Test Strategy: [Increment Name]
## Test Overview
**Total Test Cases**: [N]
**Test Levels**: [Unit, Integration, E2E, Performance]
**Coverage Target**: 80%+ overall, 90%+ critical
---
## Unit Tests (X test cases)
### TC-001: [Test Name]
```[language]
describe('[Component]', () => {
it('[should do something]', async () => {
// Arrange
// Act
// Assert
});
});
```
```

---
## Spec Generator Templates
### Template Selection Logic
**Input Analysis**:
1. Analyze increment description (keywords, complexity)
2. Detect domain (frontend, backend, infra, ML, etc.)
3. Determine scope (feature, product, bug fix, refactor)
4. Assess technical complexity (simple, moderate, complex)
**Template Selection**:
```
IF new_product THEN
  spec_template = "Full PRD"
  plan_template = "System Architecture"
ELSE IF feature_addition THEN
  spec_template = "User Stories Focused"
  plan_template = "Component Design"
ELSE IF bug_fix THEN
  spec_template = "Problem-Solution"
  plan_template = "Implementation Steps"
ELSE IF refactoring THEN
  spec_template = "Current-Proposed"
  plan_template = "Migration Strategy"
END IF
```
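A hedged TypeScript rendering of the same branch logic as a lookup table; the `Scope` names and `selectTemplates` helper are assumptions for illustration:

```typescript
// Hypothetical sketch of the template-selection branch above as a lookup table.
type Scope = "new_product" | "feature_addition" | "bug_fix" | "refactoring";

const TEMPLATES: Record<Scope, { spec: string; plan: string }> = {
  new_product: { spec: "Full PRD", plan: "System Architecture" },
  feature_addition: { spec: "User Stories Focused", plan: "Component Design" },
  bug_fix: { spec: "Problem-Solution", plan: "Implementation Steps" },
  refactoring: { spec: "Current-Proposed", plan: "Migration Strategy" },
};

function selectTemplates(scope: Scope): { spec: string; plan: string } {
  return TEMPLATES[scope];
}
```

A table keeps spec/plan pairings in one place, so adding a new increment type is a one-line change rather than another IF branch.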
### Context-Aware Sections
**Auto-Include Based On**:
- **"authentication"** → Security Considerations, JWT/OAuth design
- **"API"** → API Design, OpenAPI spec, rate limiting
- **"database"** → ER diagrams, migration scripts, indexes
- **"frontend"** → Component hierarchy, state management, UI/UX
- **"deployment"** → CI/CD, infrastructure, monitoring
- **"ML"** → Model architecture, training pipeline, evaluation metrics
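The keyword-to-section mapping above could be implemented as a simple lookup; `SECTION_TRIGGERS` and `autoSections` are illustrative names, not SpecWeave internals:

```typescript
// Hypothetical sketch: auto-include sections based on keywords found in
// the increment description (mirrors the trigger list above).
const SECTION_TRIGGERS: Record<string, string[]> = {
  authentication: ["Security Considerations", "JWT/OAuth design"],
  api: ["API Design", "OpenAPI spec", "rate limiting"],
  database: ["ER diagrams", "migration scripts", "indexes"],
  frontend: ["Component hierarchy", "state management", "UI/UX"],
  deployment: ["CI/CD", "infrastructure", "monitoring"],
  ml: ["Model architecture", "training pipeline", "evaluation metrics"],
};

function autoSections(description: string): string[] {
  // Naive substring match for illustration; a real implementation
  // would tokenize to avoid false hits (e.g. "ml" inside "html").
  const text = description.toLowerCase();
  return Object.entries(SECTION_TRIGGERS)
    .filter(([keyword]) => text.includes(keyword))
    .flatMap(([, sections]) => sections);
}
```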
---
## Usage Examples
### Example 1: Simple Feature
User: /specweave:inc "Add dark mode toggle to settings"
Spec Generator:
→ Detected: Frontend feature (UI component)
→ Complexity: Simple
→ Template: User Stories + Component Design

Generating specifications...
✓ spec.md: 3 user stories, UI mockups section
✓ plan.md: Component design, state management, CSS-in-JS
✓ tasks.md: 8 tasks (component, styles, state, tests)
✓ tests.md: Unit tests for component, E2E for toggle
### Example 2: Complex System
User: /specweave:inc "Multi-tenant SaaS platform with billing"
Spec Generator:
→ Detected: New product (SaaS, multi-tenant, payments)
→ Complexity: High
→ Template: Full PRD + System Architecture

Generating specifications...
✓ spec.md: Market analysis, personas, competitive landscape, 15 user stories
✓ plan.md: Multi-tenant architecture, database per tenant, Stripe integration, C4 diagrams
✓ tasks.md: 40 tasks across 4 weeks (auth, tenancy, billing, UI)
✓ tests.md: 100+ test cases (unit, integration, E2E, load testing)
### Example 3: Bug Fix
User: /specweave:inc "Fix race condition in user session handling"
Spec Generator:
→ Detected: Bug fix (concurrency issue)
→ Complexity: Moderate
→ Template: Problem-Solution

Generating specifications...
✓ spec.md: Problem statement, root cause analysis, impact, solution
✓ plan.md: Current implementation, proposed fix, Redis locking
✓ tasks.md: 5 tasks (analysis, fix, tests, rollout, monitoring)
✓ tests.md: Concurrency tests, stress tests
---
## Integration with /specweave:inc
The Spec Generator is automatically invoked by `/specweave:inc`:
1. **User Intent Analysis**:
- Analyze increment description
- Detect keywords, domain, complexity
2. **Template Selection**:
- Choose appropriate templates
- Auto-include relevant sections
3. **Specification Generation**:
- Generate spec.md with PM context
- Generate plan.md with Architect context
- Generate tasks.md with breakdown
- Generate tests.md with coverage strategy
4. **User Review**:
- Show generated structure
- Allow refinement
- Confirm before creating files
---
## Advantages Over Rigid Templates
**Flexible (V2) Approach**:
- ✅ Adapts to increment type (product, feature, bug fix, refactor)
- ✅ Includes only relevant sections
- ✅ Scales complexity up/down
- ✅ Domain-aware (frontend, backend, ML, infra)
- ✅ Faster for simple increments
- ✅ Comprehensive for complex products
**Rigid (V1) Approach**:
- ❌ Same template for everything
- ❌ Many irrelevant sections
- ❌ Wastes time on simple features
- ❌ Insufficient for complex products
- ❌ One-size-fits-none
---
## Configuration
Users can customize spec generation in `.specweave/config.yaml`:
```yaml
spec_generator:
  # Default complexity level
  default_complexity: moderate  # simple | moderate | complex

  # Always include sections
  always_include:
    - executive_summary
    - user_stories
    - success_metrics

  # Never include sections
  never_include:
    - competitive_analysis  # We're not doing market research

  # Domain defaults
  domain_defaults:
    frontend:
      include: [ui_mockups, component_hierarchy, state_management]
    backend:
      include: [api_design, database_schema, authentication]
```
CRITICAL: When umbrella/multi-project mode is detected, user stories MUST be generated per-project!
Automated Detection: Use detectMultiProjectMode(projectRoot) from src/utils/multi-project-detector.ts. This utility checks ALL config formats automatically.
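For illustration only (the real checks live in `detectMultiProjectMode`), the decision might look like this sketch; the config field names follow this document, but `isMultiProject` and the `SpecweaveConfig` shape are assumptions:

```typescript
// Hypothetical shape of the multi-project checks; NOT the real detector
// in src/utils/multi-project-detector.ts.
interface SpecweaveConfig {
  umbrella?: { enabled?: boolean; childRepos?: string[] };
  multiProject?: { enabled?: boolean; projects?: Record<string, unknown> };
  sync?: { profiles?: { config?: { boardMapping?: unknown } }[] };
}

function isMultiProject(config: SpecweaveConfig, specFolders: string[]): boolean {
  return Boolean(
    (config.umbrella?.enabled && config.umbrella.childRepos?.length) ||
    (config.multiProject?.enabled && config.multiProject.projects) ||
    config.sync?.profiles?.some((p) => p.config?.boardMapping) ||
    specFolders.length > 1 // multiple project folders under specs/
  );
}
```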
Manual check (for agents): Read .specweave/config.json and check:
- `umbrella.enabled` + `childRepos[]`
- `multiProject.enabled` + `projects{}`
- `sync.profiles[].config.boardMapping`
- Multiple project folders under `.specweave/docs/internal/specs/`

If ANY of these conditions are TRUE → Multi-project mode ACTIVE:
- `umbrella.enabled: true` in config.json
- `umbrella.childRepos` has entries
- Multiple project folders under `specs/` (e.g., sw-app-fe/, sw-app-be/, sw-app-shared/)

v0.33.0+ introduces per-US project targeting - each user story specifies its target project inline:
## User Stories
### US-001: Thumbnail Upload & Comparison (P1)
**Project**: frontend-app
**Board**: ui-team <!-- 2-level structures only -->
**As a** content creator
**I want** to upload multiple thumbnail variants
**So that** I can visually evaluate my options
**Acceptance Criteria**:
- [ ] **AC-US1-01**: User can drag-and-drop up to 5 images
---
### US-002: CTR Prediction API (P1)
**Project**: backend-api
**Board**: ml-team <!-- 2-level structures only -->
**As a** frontend application
**I want** to call POST /predict-ctr endpoint
**So that** I can get AI-powered predictions
**Acceptance Criteria**:
- [ ] **AC-US2-01**: POST /predict-ctr accepts thumbnail image
Benefits of per-US targeting:
⚠️ LEGACY (Project prefixes - still works but per-US targeting preferred):
## User Stories
### US-001: Thumbnail Upload
As a content creator, I want to upload thumbnails...
### US-002: CTR Prediction API
As a system, I want to predict click-through rates...
⚠️ LEGACY (Multi-Project Format with prefixes - use per-US targeting instead):
## User Stories by Project
### Frontend (sw-thumbnail-ab-fe)
#### US-FE-001: Thumbnail Upload & Comparison (P1)
**Related Repo**: sw-thumbnail-ab-fe
**As a** content creator
**I want** to upload multiple thumbnail variants and compare them side-by-side
**So that** I can visually evaluate my options before testing
**Acceptance Criteria**:
- [ ] **AC-FE-US1-01**: User can drag-and-drop up to 5 thumbnail images (JPG, PNG, WebP)
- [ ] **AC-FE-US1-02**: Images are validated for YouTube specs (1280x720 min, <2MB)
- [ ] **AC-FE-US1-03**: Side-by-side comparison view displays all variants
---
### Backend (sw-thumbnail-ab-be)
#### US-BE-001: Thumbnail Analysis API (P1)
**Related Repo**: sw-thumbnail-ab-be
**As a** frontend application
**I want** to call POST /predict-ctr endpoint
**So that** I can get AI-powered click-through rate predictions
**Acceptance Criteria**:
- [ ] **AC-BE-US1-01**: POST /predict-ctr endpoint accepts thumbnail image
- [ ] **AC-BE-US1-02**: ML model analyzes: face detection, text readability, color psychology
---
### Shared Library (sw-thumbnail-ab-shared)
#### US-SHARED-001: Common Types & Validators (P1)
**Related Repo**: sw-thumbnail-ab-shared
**As a** developer in FE or BE repos
**I want** shared TypeScript types and validators
**So that** API contracts are consistent across projects
**Acceptance Criteria**:
- [ ] **AC-SHARED-US1-01**: ThumbnailMetadata type exported
- [ ] **AC-SHARED-US1-02**: Validation schemas for image specs
When analyzing user descriptions, classify each user story by keywords:
| Keywords | Project | Prefix |
|---|---|---|
| UI, component, page, form, view, drag-drop, theme, builder, menu display | Frontend | FE |
| API, endpoint, CRUD, webhook, analytics, database, service, ML model | Backend | BE |
| types, schemas, validators, utilities, localization, common | Shared | SHARED |
| iOS, Android, mobile app, push notification | Mobile | MOBILE |
| Terraform, K8s, Docker, CI/CD, deployment | Infrastructure | INFRA |
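The keyword table above can be read as a first-match classifier; `classifyStory` and the exact keyword lists are illustrative assumptions:

```typescript
// Hypothetical classifier for the keyword table above: returns the project
// prefix (FE/BE/SHARED/MOBILE/INFRA) for a user-story description.
const PREFIX_KEYWORDS: [string, string[]][] = [
  ["FE", ["ui", "component", "page", "form", "view", "drag-drop", "theme", "menu"]],
  ["BE", ["api", "endpoint", "crud", "webhook", "analytics", "database", "service", "ml model"]],
  ["SHARED", ["types", "schemas", "validators", "utilities", "localization", "common"]],
  ["MOBILE", ["ios", "android", "mobile app", "push notification"]],
  ["INFRA", ["terraform", "k8s", "docker", "ci/cd", "deployment"]],
];

function classifyStory(description: string): string | null {
  const text = description.toLowerCase();
  for (const [prefix, keywords] of PREFIX_KEYWORDS) {
    if (keywords.some((kw) => text.includes(kw))) return prefix;
  }
  return null; // ambiguous: fall back to broader context resolution
}
```

Returning `null` for unmatched descriptions keeps ambiguous stories visible instead of silently mis-filing them.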
**AC ID Format**: `AC-{PROJECT}-US{story}-{number}`
Examples:
- AC-FE-US1-01 (Frontend, User Story 1, AC #1)
- AC-BE-US1-01 (Backend, User Story 1, AC #1)
- AC-SHARED-US1-01 (Shared, User Story 1, AC #1)
- AC-MOBILE-US1-01 (Mobile, User Story 1, AC #1)
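The ID convention above is regular enough to validate mechanically; `parseAcId` and its return shape are illustrative, not a SpecWeave utility:

```typescript
// Hypothetical parser for the multi-project AC-{PROJECT}-US{story}-{number}
// convention (single-project IDs like AC-US1-01 intentionally do not match).
const AC_ID = /^AC-([A-Z]+)-US(\d+)-(\d{2})$/;

function parseAcId(id: string): { project: string; story: number; criterion: number } | null {
  const m = AC_ID.exec(id);
  if (!m) return null;
  return { project: m[1], story: Number(m[2]), criterion: Number(m[3]) };
}
```

A validator like this is where "tasks MUST reference project-scoped IDs" becomes checkable rather than a convention on trust.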
### T-001: Create Thumbnail Upload Component
**User Story**: US-FE-001 ← MUST reference project-scoped ID!
**Satisfies ACs**: AC-FE-US1-01, AC-FE-US1-02
**Status**: [ ] Not Started
### T-004: Database Schema & Migrations
**User Story**: US-BE-001, US-BE-002 ← Backend stories only!
**Satisfies ACs**: AC-BE-US1-01, AC-BE-US2-01
**Status**: [ ] Not Started
1. DETECT multi-project mode (check config.json, folder structure)
   ↓
2. If multi-project → Group user stories by project (FE/BE/SHARED/MOBILE/INFRA)
   ↓
3. Generate prefixed user stories: US-FE-001, US-BE-001, US-SHARED-001
   ↓
4. Generate prefixed ACs: AC-FE-US1-01, AC-BE-US1-01
   ↓
5. Generate tasks referencing correct project user stories
   ↓
6. Each project folder gets its own filtered spec
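Step 6 above reduces to filtering stories by their resolved project; the `UserStory` shape and `filterSpecByProject` are illustrative names:

```typescript
// Hypothetical sketch of step 6: each project folder receives a spec
// filtered to its own user stories, keyed by the **Project** field.
interface UserStory {
  id: string;      // e.g. "US-001"
  project: string; // resolved project ID, e.g. "frontend-app"
  title: string;
}

function filterSpecByProject(stories: UserStory[], project: string): UserStory[] {
  return stories.filter((s) => s.project === project);
}
```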
Without project-scoped stories: sync cannot tell which project a story belongs to, so living docs and tasks land in the wrong folders.
With project-scoped stories: each project folder receives its own filtered spec containing only its stories, ACs, and tasks.