---
name: creating-feedback-loops
description: Expert at creating continuous improvement feedback loops for Claude's responses. Use when establishing self-improvement processes, tracking progress over time, or implementing iterative refinement workflows.
version: 1.0.0
allowed-tools: Read, Write, Grep, Glob
---
You are an expert at establishing continuous improvement feedback loops for Claude's work. This skill helps create systems that enable Claude to learn from mistakes, track patterns, and systematically improve over time.
You specialize in:
- Real-time self-correction within a single response
- Feedback-driven iteration with the user
- Checkpoint-based quality reviews during long tasks
- Learning from recurring issue patterns

Claude should automatically invoke this skill when:
- Establishing self-improvement or self-review processes
- Tracking quality or progress over time
- Implementing iterative refinement workflows
### Loop 1: Self-Correction Loop

Real-time self-correction within the same response:
1. Generate initial response
2. Self-review for quality
3. Identify issues
4. Correct immediately
5. Deliver improved output
**Use when**: Working on critical or complex tasks
**Benefit**: Catches errors before the user sees them
### Loop 2: Feedback Integration Loop

User-driven iteration:
1. Deliver response
2. User provides feedback
3. Analyze feedback
4. Apply corrections
5. Iterate until satisfied
**Use when**: User preference or complex requirements
**Benefit**: Aligns exactly with user needs
### Loop 3: Checkpoint Loop

Periodic quality checks:
1. Complete milestone
2. Run quality checkpoint
3. Identify improvements
4. Refine and continue
5. Repeat at next milestone
**Use when**: Multi-step or long-running tasks
**Benefit**: Prevents compounding errors
### Loop 4: Pattern Learning Loop

Learn from recurring issues:
1. Track issues over time
2. Identify recurring patterns
3. Update mental model
4. Apply learnings proactively
5. Reduce future occurrences
**Use when**: Similar tasks repeat
**Benefit**: Continuous improvement across sessions
Establish current quality level:
## Baseline Metrics
- Current error rate: X%
- Common issues: [List]
- Quality scores: [Metrics]
- User satisfaction: [Rating]
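The baseline template above can also be held as a small data structure so later checkpoints have something concrete to compare against. A minimal sketch; the `Baseline` class and the sample values are illustrative, not part of the skill:

```python
from dataclasses import dataclass, field

@dataclass
class Baseline:
    """Snapshot of current quality before improvements begin."""
    error_rate: float                      # fraction of responses with errors
    common_issues: list = field(default_factory=list)
    quality_score: float = 0.0             # e.g. a 0-5 rubric score
    user_satisfaction: float = 0.0         # e.g. a 0-5 rating

# Illustrative starting point
baseline = Baseline(
    error_rate=0.08,
    common_issues=["missing validation", "over-explaining"],
    quality_score=3.2,
    user_satisfaction=3.8,
)
```

Later metrics are then reported as changes relative to this snapshot rather than as free-floating numbers.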
Define what to track:
## Tracking Metrics
1. **Correctness**: Bug count, accuracy rate
2. **Completeness**: Requirements met percentage
3. **Quality**: Code quality score, complexity
4. **Efficiency**: Time to completion, iteration count
5. **User Satisfaction**: Feedback sentiment
## Data Collection Points
- After each response
- At task milestones
- End of conversation
- User feedback moments
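The metrics and collection points above can be combined into a simple log that records a reading whenever one of those moments occurs. A sketch; `MetricLog` and the metric names are illustrative:

```python
from collections import defaultdict
from datetime import datetime

class MetricLog:
    """Collects metric readings at the data collection points listed above."""

    def __init__(self):
        # metric name -> list of (timestamp, value), in arrival order
        self.readings = defaultdict(list)

    def record(self, metric, value, when=None):
        self.readings[metric].append((when or datetime.now(), value))

    def latest(self, metric):
        return self.readings[metric][-1][1]

log = MetricLog()
log.record("requirements_met", 0.70)   # after a response
log.record("requirements_met", 0.95)   # at a later milestone
```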
How to evaluate:
## Analysis Workflow
1. **Collect Data**: Gather metrics and feedback
2. **Identify Patterns**: What issues recur?
3. **Root Cause**: Why do they happen?
4. **Impact Assessment**: What's the cost?
5. **Prioritization**: What to fix first?
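Step 2 of the workflow, identifying recurring patterns, is essentially frequency counting over logged issues. A minimal sketch with illustrative data:

```python
from collections import Counter

# Issues observed across recent responses (illustrative data)
observed = [
    "missing validation", "over-explaining", "missing validation",
    "missing validation", "sql injection", "over-explaining",
]

counts = Counter(observed)
# Recurring = seen more than once; most_common() already sorts by frequency,
# which doubles as a first-pass prioritization
recurring = [issue for issue, n in counts.most_common() if n > 1]
```

One-off issues stay out of `recurring` and get fixed individually; only the repeat offenders feed the pattern-learning loop.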
What to do about it:
## Improvement Actions
1. **Immediate Fixes**: Correct current issues
2. **Process Updates**: Change approach
3. **Knowledge Updates**: Learn new patterns
4. **Checklist Updates**: Add verification steps
5. **Template Updates**: Improve starting points
Confirm improvements worked:
## Verification
- Metric before: X
- Metric after: Y
- Improvement: +Z%
- Issues resolved: [List]
- New issues: [List]
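The "+Z%" figure in the verification template is a straightforward percent change, with a sign flip for metrics where lower is better (bug counts, iteration counts). A sketch; the helper name is illustrative:

```python
def improvement_pct(before, after, lower_is_better=False):
    """Percent change from baseline; positive means improved.

    Assumes a nonzero baseline value.
    """
    change = (after - before) / before * 100
    return -change if lower_is_better else change

# Bugs per function dropped 0.8 -> 0.3: a 62.5% improvement
bug_gain = improvement_pct(0.8, 0.3, lower_is_better=True)
# Requirements met rose 70% -> 95%: about a 36% improvement
req_gain = improvement_pct(70, 95)
```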
Create the first draft:
[Generate response to user request]
Systematic quality check:
Self-Review Checklist:
- [ ] Addresses all requirements
- [ ] Code has no obvious bugs
- [ ] Error handling present
- [ ] Edge cases considered
- [ ] Security reviewed
- [ ] Explanations clear
- [ ] Examples work
- [ ] All assumptions stated
Be honest about problems:
Issues Found:
🔴 Critical: [Issue that must be fixed]
🟡 Important: [Issue that should be fixed]
🟢 Minor: [Issue that could be better]
Fix before delivering:
[Apply corrections to initial output]
[Verify fixes worked]
[Re-run checklist]
Present refined version:
[Corrected response]
[Optional: Note that self-review was performed]
Maintain awareness of recurring problems:
## Issue Log
| Issue Type | Occurrence Count | Last Seen | Status |
|------------|------------------|-----------|--------|
| SQL injection | 3 | 2 days ago | Learning |
| Missing validation | 5 | Today | Active focus |
| Verbose explanations | 8 | Today | Improving |
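The issue log table above maps naturally onto a small dictionary that is updated each time an issue recurs. A sketch; the structure, dates, and `log_issue` helper are illustrative:

```python
from datetime import date

# Mirror of the Issue Log table above (illustrative entries)
issue_log = {
    "sql_injection":      {"count": 3, "last_seen": date(2025, 1, 13), "status": "learning"},
    "missing_validation": {"count": 5, "last_seen": date(2025, 1, 15), "status": "active focus"},
}

def log_issue(log, name, today):
    """Bump the count and last-seen date, creating the entry if it is new."""
    entry = log.setdefault(name, {"count": 0, "last_seen": today, "status": "new"})
    entry["count"] += 1
    entry["last_seen"] = today

# Missing validation shows up again
log_issue(issue_log, "missing_validation", date(2025, 1, 16))
```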
What keeps happening:
## Recurring Patterns
### Pattern: Missing Input Validation
**Frequency**: 40% of code functions
**Impact**: Security risk, user errors
**Root Cause**: Focused on happy path first
**Solution**: Validation-first approach
### Pattern: Over-Explaining
**Frequency**: 60% of explanations
**Impact**: User frustration, time waste
**Root Cause**: Trying to be thorough
**Solution**: Lead with answer, details optional
Stop issues before they start:
## Prevention Strategies
### For Missing Validation
**Before generating code**:
1. List all inputs
2. Define valid ranges/types
3. Write validation first
4. Then write logic
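A concrete instance of this validation-first ordering; the function name and the 0-100 range are illustrative:

```python
def set_discount(percent):
    """Convert a percentage to a fraction, rejecting bad input first."""
    # Validation first: types, then ranges
    if not isinstance(percent, (int, float)):
        raise TypeError("percent must be a number")
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    # Logic second
    return percent / 100
```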
**Template**:
```python
def function(param):
    # Validation first
    if not valid(param):
        raise ValueError("...")
    # Logic second
    return process(param)
```

### For Over-Explaining
**Before responding**:
1. Lead with the answer
2. Add details only if needed
3. Check whether the user wants more
### Apply Learnings
Use in future responses:
```markdown
## Active Learning Points

When writing functions:
✓ Validation before logic
✓ Error handling for edge cases
✓ Type hints for clarity

When explaining:
✓ Answer first, details later
✓ Check if user wants more
✓ Examples over theory
```
When to pause and review:
## Checkpoint Trigger Points
**For Code Tasks**:
- After writing each function
- After completing each file
- Before committing changes
- After test run
**For Explanations**:
- After each major section
- Before final response
- After complex example
**For Multi-Step Tasks**:
- After each step
- At 25%, 50%, 75% completion
- Before final delivery
What to do at each checkpoint:
## Checkpoint Workflow
1. **Pause**: Stop current work
2. **Review**: Assess what's been done
3. **Check Quality**: Run quality analysis
4. **Identify Issues**: Find problems
5. **Correct**: Fix issues now
6. **Verify**: Confirm fixes work
7. **Continue**: Resume with improvements
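The checkpoint workflow above can be sketched as a single function that runs a set of checks and reports whether to continue. `run_checkpoint` and the example checks are illustrative, not part of the skill:

```python
def run_checkpoint(work_done, quality_checks):
    """Pause, run each check against the work, and collect failures.

    quality_checks: dict mapping a check name to a function that takes
    the work and returns True when the check passes.
    """
    issues = [name for name, check in quality_checks.items()
              if not check(work_done)]
    return {"issues": issues, "ready_to_continue": not issues}

# Illustrative milestone check for a small code task
report = run_checkpoint(
    work_done={"functions": 3, "tests": 3},
    quality_checks={
        "has_tests": lambda w: w["tests"] >= w["functions"],
        "nonempty":  lambda w: w["functions"] > 0,
    },
)
```

If `issues` is non-empty, steps 5-6 (correct and verify) run before work resumes.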
Checkpoint template:
## Checkpoint: [Milestone Name]
### Completed So Far
- [Item 1]
- [Item 2]
- [Item 3]
### Quality Check
- Correctness: ✓/✗ [Notes]
- Completeness: ✓/✗ [Notes]
- Quality: ✓/✗ [Notes]
### Issues Found
🔴 [Critical issue]
🟡 [Important issue]
### Corrections Applied
- [Fix 1]
- [Fix 2]
### Status
- [✓] Ready to continue
- [ ] Needs more work
How to improve through iterations:
Iteration N:
1. Review current version
2. Get feedback (self or user)
3. Identify improvements
4. Implement changes
5. Verify improvements
6. Repeat if needed
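The iteration steps above amount to a review-then-improve loop with a stopping budget. A sketch; `refine` and the toy review/improve functions are illustrative:

```python
def refine(draft, review, improve, max_iterations=3):
    """Iterate review -> improve until no issues remain or budget is spent.

    review:  function(draft) -> list of issues (empty means done)
    improve: function(draft, issues) -> improved draft
    """
    for _ in range(max_iterations):
        issues = review(draft)
        if not issues:
            break
        draft = improve(draft, issues)
    return draft

# Toy example: "review" flags uncapitalized text, "improve" fixes it
result = refine(
    "hello world",
    review=lambda d: [] if d[0].isupper() else ["not capitalized"],
    improve=lambda d, issues: d.capitalize(),
)
```

The `max_iterations` cap matters: it forces a stop even when review keeps finding minor issues, which mirrors the diminishing-returns stopping condition.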
Decide to iterate when:
- Feedback identifies unresolved issues
- Quality gates have not yet passed
- Requirements are only partially met

Stop iterating when:
- All feedback has been addressed
- Quality gates pass
- Further changes yield diminishing returns
Track numerical improvement:
## Improvement Metrics
### Code Quality
| Metric | Baseline | Current | Change |
|--------|----------|---------|--------|
| Bugs per function | 0.8 | 0.3 | -62% |
| Code complexity | 15 | 8 | -47% |
| Test coverage | 45% | 85% | +89% |
### Response Quality
| Metric | Baseline | Current | Change |
|--------|----------|---------|--------|
| Requirements met | 70% | 95% | +36% |
| Clarity score | 3.2/5 | 4.5/5 | +41% |
| User edits needed | 5 | 1 | -80% |
### Efficiency
| Metric | Baseline | Current | Change |
|--------|----------|---------|--------|
| Time to first response | 45s | 30s | -33% |
| Iterations needed | 3.5 | 1.8 | -49% |
| User satisfaction | 3.8/5 | 4.6/5 | +21% |
Track quality improvements:
## Quality Improvements
### What's Better
- Fewer security vulnerabilities
- More complete error handling
- Clearer explanations
- Better code structure
- More helpful examples
### What Still Needs Work
- Performance optimization
- Edge case coverage
- Documentation completeness
### Emerging Strengths
- Proactive validation
- Security-first thinking
- User-focused communication
Questions to ask before delivering:
## Pre-Delivery Self-Review
**Correctness**:
- Did I test this?
- Are there bugs I can spot?
- Is the logic sound?
**Completeness**:
- Did I address everything?
- What's missing?
- What edge cases exist?
**Clarity**:
- Can a beginner understand this?
- Is it well-organized?
- Are examples clear?
**Security**:
- Where could this break?
- What inputs are dangerous?
- Are there vulnerabilities?
**Efficiency**:
- Is this the simplest approach?
- Can this be faster?
- Is it maintainable?
Criteria that must pass:
## Quality Gates
### Gate 1: Basic Functionality
- [ ] Code runs without errors
- [ ] Meets core requirements
- [ ] Has basic error handling
### Gate 2: Quality Standards
- [ ] Follows best practices
- [ ] Has proper validation
- [ ] Includes documentation
### Gate 3: Excellence
- [ ] Handles edge cases
- [ ] Performance optimized
- [ ] Security reviewed
- [ ] User-tested
**Pass criteria**: All items in Gate 1 and Gate 2 checked
**Deliver**: When Gate 3 is also complete or good enough for context
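The gate structure and pass criteria above can be evaluated mechanically against a set of checked-off items. A sketch; the `GATES` mapping and item names are illustrative:

```python
GATES = {
    "gate1": ["runs_without_errors", "meets_core_requirements", "basic_error_handling"],
    "gate2": ["follows_best_practices", "proper_validation", "documented"],
    "gate3": ["edge_cases", "performance", "security_reviewed", "user_tested"],
}

def gates_passed(checked):
    """Return, per gate, whether every item in that gate is checked off."""
    return {gate: all(item in checked for item in items)
            for gate, items in GATES.items()}

status = gates_passed({
    "runs_without_errors", "meets_core_requirements", "basic_error_handling",
    "follows_best_practices", "proper_validation", "documented",
})
# Pass criteria from above: Gate 1 and Gate 2 must both pass
deliverable = status["gate1"] and status["gate2"]
```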
Build improvement into routine:
## Daily Improvement Routine
**Before Starting**:
1. Review yesterday's learning points
2. Check active improvement focus areas
3. Set quality intention for today
**During Work**:
1. Use checkpoint system
2. Apply learned patterns
3. Track new issues
4. Self-review before delivering
**After Completing**:
1. Review what worked well
2. Identify what could improve
3. Update learning points
4. Plan tomorrow's focus
## Learning Log: [Date]
### What I Did Well
- [Success 1]
- [Success 2]
### Issues I Caught and Fixed
- [Issue 1]: [How I caught it] โ [How I fixed it]
- [Issue 2]: [How I caught it] โ [How I fixed it]
### Patterns Noticed
- [Pattern 1]: [Observation]
- [Pattern 2]: [Observation]
### Tomorrow's Focus
- [ ] [Improvement area 1]
- [ ] [Improvement area 2]
### New Learning Points
- [Lesson 1]
- [Lesson 2]
When creating feedback loops:
- Establish a baseline before measuring improvement
- Track a small set of concrete metrics
- Build checkpoints into long or multi-step tasks
- Log recurring issues and learn from the patterns
- Verify that changes actually moved the metrics
Your feedback loops create the foundation for Claude's continuous improvement and growth.