---
name: workflow-management
description: Streamline PayK12 development workflows with intelligent coordination, cost optimization, and continuous feedback loops. Use when orchestrating multi-step tasks, monitoring workflow health, or optimizing development processes across repositories.
---
Streamline development workflows across the PayK12 multi-repository system with intelligent task coordination, cost tracking, and continuous improvement feedback. This skill provides patterns for workflow optimization, monitoring, and automation.
This skill monitors and coordinates:

- /bug-fix command execution and success rates
- cloud-architect or deployment-engineer agents
- nextjs-pro, dotnet-pro agents
- security-auditor agent

PayK12 Workflow Stack:

```
├── /bug-fix command (2120+ lines)
│   ├── Phase 1: Analysis
│   ├── Phase 2: Reproduction (Playwright)
│   ├── Phase 3: Implementation
│   ├── Phase 4: Testing
│   └── Phase 5: PR Creation
├── Session logging & cost tracking
├── Agent dispatch & coordination
└── Continuous improvement feedback
```
The /bug-fix Command Flow:
```
User: /bug-fix PL-479

1. ANALYSIS PHASE
   ├── Parse JIRA ticket PL-479
   ├── Extract requirements
   ├── Identify repository scope
   ├── Assess complexity
   └── Create execution plan

2. REPRODUCTION PHASE
   ├── Generate test case for bug
   ├── Run Playwright tests (should fail)
   ├── Capture failure evidence
   ├── Document reproduction steps
   └── Create test baseline

3. IMPLEMENTATION PHASE
   ├── Dispatch to appropriate agent
   │   ├── dotnet-pro for API changes
   │   ├── nextjs-pro for frontend changes
   │   ├── legacy-modernizer for legacy changes
   │   └── multi-repo-fixer for cross-repo
   ├── Implement fix
   ├── Run local tests
   └── Update documentation

4. TESTING PHASE
   ├── Run Playwright tests (should pass)
   ├── Run unit tests
   ├── Run integration tests
   ├── Check code coverage
   └── Verify no regressions

5. PR CREATION PHASE
   ├── Create merge request with:
   │   ├── Clear description
   │   ├── Testing evidence
   │   ├── Screenshots/traces if applicable
   │   └── Auto-link to JIRA ticket
   ├── Post CI/CD results
   ├── Wait for reviews
   └── Merge when approved

FEEDBACK & ITERATION (up to 3 times)
   ├── Monitor test failures
   ├── Self-heal common issues
   ├── Provide diagnostic information
   └── Attempt auto-fix or escalate
```
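The flow above reduces to a small orchestration loop. A minimal sketch, assuming a hypothetical `run_phase` callback and simplified phase names; the real /bug-fix command is a 2,120+ line workflow:

```python
# Illustrative sketch of the /bug-fix phase loop, NOT the actual command.
# run_phase(ticket, phase) is a hypothetical callback returning True on success.
PHASES = ["analysis", "reproduction", "implementation", "testing", "pr_creation"]
MAX_ITERATIONS = 3  # "FEEDBACK & ITERATION (up to 3 times)"

def run_bug_fix(ticket, run_phase):
    """Run phases in order; a failed phase triggers another full iteration,
    up to MAX_ITERATIONS, after which the bug is escalated to a human."""
    for attempt in range(1, MAX_ITERATIONS + 1):
        if all(run_phase(ticket, phase) for phase in PHASES):
            return {"ticket": ticket, "attempts": attempt, "status": "merged"}
    return {"ticket": ticket, "attempts": MAX_ITERATIONS, "status": "escalated"}
```

The `all(...)` generator short-circuits, so later phases never run after an earlier one fails, matching the sequential flow above.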
Success Indicators:
Token Usage Breakdown (average cost per bug fix):

```
Context Loading: 25,000 tokens (35%)
├── Architecture context
├── Repository structure
├── Existing patterns
└── Test infrastructure

Analysis Phase: 12,000 tokens (17%)
├── JIRA ticket parsing
├── Code review
└── Planning

Reproduction Phase: 8,000 tokens (11%)
├── Test generation
├── Test execution analysis
└── Evidence capture

Implementation Phase: 18,000 tokens (25%)
├── Code writing
├── Local testing
└── Refinement

Testing Phase: 5,000 tokens (7%)
├── Test monitoring
├── Result analysis
└── Coverage check

Total Average: 70,000 tokens (~$2.10/bug fix)
```
Optimization Opportunities:

```
├── Cache context (save 35% ≈ 25,000 tokens)
├── Reuse test patterns (save 20% of reproduction)
├── Parallel execution (reduce wall-clock time 30%)
└── Early termination on simple bugs
```
Optimization Strategies:

```
Before: Load context fresh each time
        Cost: 25,000 tokens per bug
After:  Cache and reuse context
        Cost: 16,250 tokens (35% savings)

Action: Implement context-manager agent
Timeline: 6 weeks
ROI: Break-even after 5 bugs
```
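The caching strategy amounts to memoizing an expensive context load per repository. A minimal sketch; `ContextCache` and the `loader` callback are illustrative, not the actual context-manager agent:

```python
# Sketch of context caching: pay the full loading cost (~25,000 tokens per
# the breakdown above) once per repository, then serve repeats from cache.
class ContextCache:
    def __init__(self, loader):
        self.loader = loader  # expensive full context load
        self.store = {}

    def get(self, repo):
        if repo not in self.store:            # cache miss: pay full cost
            self.store[repo] = self.loader(repo)
        return self.store[repo]               # cache hit: no reload

loads = []  # track which repos actually triggered a load
cache = ContextCache(lambda repo: loads.append(repo) or f"context:{repo}")
cache.get("repos/api")
cache.get("repos/api")        # second request served from cache
cache.get("repos/frontend")
```

After the three calls, only two loads occurred: the repeated `repos/api` request never hit the loader.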
```
Before: Sequential phases (1 → 2 → 3 → 4 → 5)
        Time: ~45 minutes per bug
After:  Parallel where possible
        - Phases 2 & 3 overlap (testing while implementing)
        - Phases 1 & 2 analysis done in parallel
        Time: ~30 minutes per bug

Implementation: Update /bug-fix workflow
Timeline: 1 week
Impact: 15 more bugs/day throughput
```
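The overlap idea can be demonstrated with Python's standard thread pool; the phase durations are illustrative only:

```python
# Sketch of the parallel-execution strategy: independent work runs
# concurrently instead of strictly one phase after another.
from concurrent.futures import ThreadPoolExecutor
import time

def run_sequential(tasks):
    """Run each task one after another; total time is the sum."""
    start = time.perf_counter()
    for seconds in tasks.values():
        time.sleep(seconds)
    return time.perf_counter() - start

def run_parallel(tasks):
    """Run all tasks concurrently; total time approaches the longest task."""
    start = time.perf_counter()
    with ThreadPoolExecutor() as pool:
        list(pool.map(time.sleep, tasks.values()))
    return time.perf_counter() - start

# Hypothetical overlapping phases (reproduction vs implementation)
tasks = {"reproduction": 0.05, "implementation": 0.05}
```

Here the parallel run finishes in roughly the duration of the longest task rather than the sum, which is the source of the ~45 → ~30 minute estimate above.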
```
First IDOR vulnerability:  70,000 tokens
Second IDOR vulnerability: 35,000 tokens (50% savings)
└── Reuse test patterns and fixes

Action: Build pattern library for common bug types
Timeline: 2 weeks (after 10-15 bugs)
Savings: ~30% average cost reduction
```
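A pattern library can be as simple as an index of prior fixes keyed by bug type. A sketch with hypothetical entry fields, using the 50%-savings IDOR figure above as the reuse assumption:

```python
# Sketch of a bug-pattern library: record each solved bug type so the next
# occurrence reuses its test pattern and fix template. Entry fields are
# assumptions, not an actual PayK12 schema.
library = {}

def record_fix(bug_type, test_pattern, fix_template):
    library[bug_type] = {"test": test_pattern, "fix": fix_template}

def estimate_cost(bug_type, fresh_cost):
    """Assume a known pattern costs roughly half of a fresh fix, matching
    the 70,000 -> 35,000 token IDOR example above."""
    return fresh_cost // 2 if bug_type in library else fresh_cost

record_fix("IDOR", "ownership-check e2e test",
           "enforce resource ownership in the handler")
```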
Health Score Calculation:

```
Overall Workflow Health = (S × 0.3) + (I × 0.25) + (C × 0.2) + (A × 0.25)

Where:
  S = Success Rate (target: 95%+)
  I = Iteration Efficiency (1-2 iterations ideal)
  C = Cost Efficiency (tokens per bug)
  A = Agent Accuracy (code quality)
```

Health Score Interpretation:

```
90-100 = Excellent ✅ (no action needed)
80-90  = Good      ⚠️ (monitor, optimize when needed)
70-80  = Fair      ⚠️ (identify bottlenecks)
< 70   = Poor      ❌ (investigation required)
```
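The formula transcribes directly to code. A sketch assuming each component has already been normalized to a 0-100 scale (the doc gives targets, not normalization rules):

```python
# Weighted health score, transcribed from the formula above.
# All four inputs are assumed to be pre-normalized to 0-100.
def health_score(success, iteration, cost, accuracy):
    return success * 0.3 + iteration * 0.25 + cost * 0.2 + accuracy * 0.25

def interpret(score):
    """Map a score onto the interpretation bands above."""
    if score >= 90:
        return "Excellent"
    if score >= 80:
        return "Good"
    if score >= 70:
        return "Fair"
    return "Poor"
```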
Metrics Dashboard:

```
Last 30 Days Summary:
├── Bugs Fixed: 47
├── Success Rate: 91.5% (43/47)
├── Avg Iterations: 1.4
├── Avg Cost: $2.15 per bug
├── Total Cost: $101.05
├── Avg Time: 38 minutes
├── Agent Accuracy: 94%
└── Context Cache Hit Rate: 62%

Trend Analysis:
├── Cost trending down (-12% vs prev month)
├── Success rate improving (+5%)
├── Speed improving (-7 min avg time)
└── Cache efficiency improving (+8%)

Recommendations:
├── Deploy context-manager (projected 35% cost savings)
├── Implement parallel execution (30% speed improvement)
├── Build IDOR pattern library (50% cost savings for security bugs)
└── Add code review agent (improve accuracy to 98%)

Estimated Impact (if all implemented):
├── Cost: $101/month → $52/month (48% savings)
├── Speed: 38 min → 26 min (31% faster)
├── Success: 91% → 97% (+6%)
└── Throughput: 47 bugs → 72 bugs (+53%)
```
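Summary figures like these could be derived from per-bug session records. A sketch with hypothetical record fields, since log-session.sh's actual output format is not shown here:

```python
# Sketch of dashboard aggregation over per-bug session records.
# The record fields ("success", "cost") are hypothetical.
def summarize(sessions):
    fixed = [s for s in sessions if s["success"]]
    total_cost = sum(s["cost"] for s in sessions)
    return {
        "bugs": len(sessions),
        "success_rate": round(100 * len(fixed) / len(sessions), 1),
        "avg_cost": round(total_cost / len(sessions), 2),
        "total_cost": round(total_cost, 2),
    }
```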
Scenario: Bug requires changes in multiple repositories
Bug: Contact creation fails because validation differs between frontend and API
```
Step 1: Analysis
├── Identify affected repositories:
│   ├── repos/frontend (React validation)
│   ├── repos/api (C# validation)
│   └── repos/legacy-api (legacy validation)
├── Find root cause (one has different rules)
└── Plan synchronization strategy

Step 2: Design Solution
├── Decide on source of truth:
│   ├── Option A: Shared validation schema
│   ├── Option B: One repo leads, others follow
│   └── Option C: Message-based synchronization
└── Determine update order

Step 3: Implementation Order
├── First: Backend (API) - source of truth
├── Second: Frontend (React) - sync with API
├── Third: Legacy API - gradual migration

Step 4: Testing
├── Test API validation changes
├── Test Frontend integration with new API
├── Test Legacy API still works (compatibility mode)
└── End-to-end workflow test

Step 5: Deployment
├── Deploy API changes first
├── Monitor for issues
├── Deploy frontend changes
├── Monitor E2E tests
└── Plan legacy-API deprecation
```
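Option A from Step 2 (shared validation schema) can be sketched as a single rule set that every repo evaluates, so frontend and API can never disagree. The contact rules below are invented for illustration:

```python
# Sketch of a shared validation schema (Option A): one rule set is the
# source of truth for all repos. Field rules here are illustrative only.
CONTACT_SCHEMA = {
    "email": lambda v: isinstance(v, str) and "@" in v,
    "name": lambda v: isinstance(v, str) and 0 < len(v) <= 100,
}

def validate(contact, schema=CONTACT_SCHEMA):
    """Return the fields that fail validation. Because every repo calls the
    same rules, a contact rejected server-side is also rejected client-side."""
    return [field for field, rule in schema.items()
            if not rule(contact.get(field))]
```

In practice the schema would live in a shared artifact (JSON Schema, OpenAPI, or a shared package) consumed by the React frontend and the C# APIs; the Python above only illustrates the single-source-of-truth shape.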
Pattern 1: Sequential Deployment

```
repos/api → repos/frontend → (later) repos/legacy-api
```

Used when: Backward compatibility needed
Risk: Low (version gating)
Speed: Slower (staggered deploys)

Pattern 2: Parallel Deployment

```
repos/api ────────┐
                  ├── repos/frontend
repos/legacy-api ─┘
```

Used when: Breaking changes or major refactor
Risk: Medium (coordination required)
Speed: Faster (parallel work)

Pattern 3: Feature Flag Driven

```
Deploy all changes with flags OFF
Enable flags gradually per region/user
Rollback by disabling flags
```

Used when: Zero-downtime deployment needed
Risk: Low (easy rollback)
Speed: Medium (flag toggling)
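Pattern 3 can be sketched as a per-region flag registry: code ships dark, regions opt in gradually, and rollback is a flag flip rather than a redeploy. Flag names and regions below are illustrative:

```python
# Sketch of feature-flag-driven rollout (Pattern 3). The flag name and
# region identifiers are hypothetical.
flags = {"new-contact-validation": {"us-east"}}  # regions where flag is ON

def is_enabled(flag, region):
    """Check whether a deployed-but-dark change is live for this region."""
    return region in flags.get(flag, set())

def rollback(flag):
    """Instant rollback: disable everywhere without redeploying."""
    flags[flag] = set()
```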
Tier 1: Deterministic Fixes (High confidence)

```
Issue: Formatting violations
Fix: Auto-apply prettier/eslint
Confidence: 100%
Action: Auto-commit, notify user

Issue: Missing nullable type annotations
Fix: Add ? to type signature
Confidence: 98%
Action: Suggest, wait for approval
```

Tier 2: Heuristic Fixes (Medium confidence)

```
Issue: Test failing on assertion
Fix: Suggest mock adjustment
Confidence: 75%
Action: Create PR with suggestion, wait for review

Issue: API endpoint not found
Fix: Check version mismatch, suggest compatibility mode
Confidence: 70%
Action: Log issue, escalate to human
```

Tier 3: Manual Escalation (Low confidence)

```
Issue: Unexpected algorithm behavior
Fix: Escalate to human with diagnostics
Confidence: < 50%
Action: Provide full context, request human decision

Issue: Design decision conflict
Fix: Escalate with alternatives
Confidence: < 40%
Action: Request human judgment
```
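The three tiers amount to routing a proposed fix by its confidence score. A sketch; the exact cutoffs between tiers are assumptions inferred from the examples above:

```python
# Sketch of tiered self-healing dispatch. Cutoffs (0.95, 0.70) are assumed
# from the example confidences above, not a documented policy.
def dispatch(confidence):
    if confidence >= 0.95:
        return "auto-apply"          # Tier 1: deterministic fix
    if confidence >= 0.70:
        return "suggest-and-review"  # Tier 2: heuristic fix
    return "escalate-to-human"       # Tier 3: manual escalation
```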
Key resources:

- /bug-fix command (2120+ lines)
- log-session.sh script
- context-manager-integration-plan.md
- agent-organizer agent

Troubleshooting:

| Issue | Indicator | Solution |
|---|---|---|
| High costs | > $3/bug average | Analyze token usage, implement caching |
| Low success rate | < 85% pass rate | Review agent accuracy, add patterns |
| Slow execution | > 60 min avg time | Profile phases, parallelize where possible |
| Cache misses | < 50% hit rate | Expand cache policies, reuse patterns |
| Manual escalations | > 10% of bugs | Improve auto-healing heuristics |
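The troubleshooting table can double as an automated check. A sketch mapping each row to a predicate over a hypothetical metrics dict:

```python
# Sketch: evaluate the troubleshooting thresholds above against current
# metrics. The metrics dict keys are assumptions about a monitoring payload.
THRESHOLDS = [
    ("High costs", lambda m: m["avg_cost"] > 3.0),           # > $3/bug
    ("Low success rate", lambda m: m["success_rate"] < 85),  # < 85% pass
    ("Slow execution", lambda m: m["avg_minutes"] > 60),     # > 60 min avg
    ("Cache misses", lambda m: m["cache_hit_rate"] < 50),    # < 50% hits
    ("Manual escalations", lambda m: m["escalation_rate"] > 10),  # > 10%
]

def check(metrics):
    """Return the names of every tripped threshold."""
    return [issue for issue, trips in THRESHOLDS if trips(metrics)]
```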
For workflow optimization, consult:

- product-manager agent for strategy
- performance-engineer for bottleneck analysis
- agent-organizer for coordination issues
- /docs/workflow-engine-guide.md for advanced topics