| name | dogfooding-system |
| description | Self-improvement system enabling Claude Code to detect code quality violations, retrieve past fix patterns from memory, and orchestrate automated improvement cycles. Uses Connascence Analyzer for 7+ violation types, Memory-MCP for pattern storage with WHO/WHEN/PROJECT/WHY tagging, and sandbox testing with automated rollback. Perfect for continuous quality improvement and self-healing codebases. |
| version | 1.0.0 |
| category | quality |
| tags | ["quality","testing","validation"] |
| author | ruv |
Use this skill when:
Do NOT use this skill for:
This skill succeeds when:
Handle these edge cases carefully:
CRITICAL RULES - ALWAYS FOLLOW:
Use multiple validation perspectives:
Validation Threshold: Findings require 2+ confirming signals before flagging as violations.
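As a minimal sketch of that gate (assuming each finding carries a numeric confidence score and a list of confirming signals; these field names are illustrative, not the analyzer's documented schema):

// Keep only findings that clear the confidence threshold AND carry at least
// two independent confirming signals (field names are assumptions).
function filterFindings(findings, minConfidence = 40, minSignals = 2) {
  return findings.filter(
    (f) => f.confidence >= minConfidence && (f.signals || []).length >= minSignals
  );
}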
This skill integrates with:
A comprehensive 3-phase self-improvement system that enables Claude Code to automatically improve itself and connected MCP servers through quality detection, pattern retrieval, and safe automated fixes.
The Dogfooding System orchestrates three integrated phases: Quality Detection, Pattern Retrieval, and Continuous Improvement.
Key Components: Connascence Analyzer (violation detection), Memory-MCP (pattern storage with WHO/WHEN/PROJECT/WHY tagging), and sandbox testing with automated rollback.
Activate this skill when:
This skill is particularly valuable when:
Phase 1 (Quality Detection): Run Connascence Analysis to detect violations and store the findings in Memory-MCP.
Workflow:
Violations Detected:
Execution:
# Run quality detection for single project
.\resources\scripts\run-quality-detection.bat memory-mcp
# Run for all projects
.\resources\scripts\run-quality-detection.bat all
Agents: code-analyzer, reviewer
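For illustration, a single finding written to the metrics JSON might look like the object below; violation_type, severity, and project mirror the tagging example later in this document, while the remaining fields are assumptions, not a documented schema.

// Assumed shape of one entry in metrics/dogfooding/<project>_<timestamp>.json
const exampleFinding = {
  violation_type: 'god-object',         // one of the detected violation types
  severity: 'high',
  project: 'memory-mcp',
  file: 'src/example_store.py',         // hypothetical location
  evidence: 'Class exposes 26 public methods',
  confidence: 72,                       // 0-100; findings below the threshold are dropped
  signals: ['method-count', 'coupling'] // 2+ confirming signals required
};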
Outputs:
metrics/dogfooding/<project>_<timestamp>.json
metrics/dogfooding/summary_<timestamp>.txt

Phase 2 (Pattern Retrieval): Query Memory-MCP for similar past fixes using vector search, rank them by relevance, and optionally apply the best pattern.
Workflow:
Embedding model: all-MiniLM-L6-v2

Vector Search: all-MiniLM-L6-v2 (384-dimensional embeddings)

Transformation Strategies:
Execution:
# Query only (no application)
.\resources\scripts\run-pattern-retrieval.bat "God Object with 26 methods"
# Query + apply best pattern
.\resources\scripts\run-pattern-retrieval.bat "Parameter Bomb 10 params" --apply
Agents: code-analyzer, coder, reviewer
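As a rough sketch of the ranking idea only (the real retrieval runs inside Memory-MCP): assuming each stored fix pattern already carries a 384-dimensional all-MiniLM-L6-v2 embedding and the query text has been embedded the same way, relevance ranking reduces to cosine similarity.

// Rank stored fix patterns by cosine similarity to the query embedding.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function rankPatterns(queryEmbedding, patterns) {
  return patterns
    .map((p) => ({ ...p, score: cosine(queryEmbedding, p.embedding) }))
    .sort((x, y) => y.score - x.score);
}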
Outputs:
retrievals/query-<timestamp>.json (vector search results)
retrievals/best-pattern-<timestamp>.json

Phase 3 (Continuous Improvement): Full-cycle orchestration combining Quality Detection + Pattern Retrieval + Safe Application.
Workflow:
Safety Checks (MANDATORY):
Execution:
# Single cycle with safety checks
.\resources\scripts\run-continuous-improvement.bat memory-mcp
# Dry-run (no fixes applied)
.\resources\scripts\run-continuous-improvement.bat memory-mcp --dry-run
# Full cycle all projects (round-robin)
.\resources\scripts\run-continuous-improvement.bat all
Agents: hierarchical-coordinator, code-analyzer, coder, reviewer
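A simplified sketch of one cycle with the safety rules applied; detectViolations, retrieveBestPattern, testFixInSandbox, and applyToProduction are hypothetical helpers standing in for the batch scripts and MCP calls, not real APIs.

// One improvement cycle: detect -> retrieve -> sandbox-test -> apply or reject.
async function runCycle(project, { dryRun = false } = {}) {
  const violations = await detectViolations(project);        // Phase 1
  for (const violation of violations) {
    const pattern = await retrieveBestPattern(violation);    // Phase 2
    if (!pattern || dryRun) continue;                        // nothing to apply / --dry-run
    const passed = await testFixInSandbox(project, pattern); // mandatory safety check
    if (passed) {
      await applyToProduction(project, pattern);             // one fix at a time
    }                                                        // failed fixes are rejected
  }
}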
Outputs:
cycle-summaries/cycle-<id>.txt
archive/<cycle_id>/ (all artifacts)

ALL Memory-MCP writes use automatic metadata tagging:
WHO: Agent name, category, capabilities
WHEN: ISO timestamp, Unix timestamp, readable format
PROJECT: connascence-analyzer, memory-mcp-triple-system, claude-flow, etc.
WHY: Intent (implementation, bugfix, refactor, testing, documentation, analysis, planning, research)
Implementation:
const { taggedMemoryStore } = require('./hooks/12fa/memory-mcp-tagging-protocol.js');
// Auto-tagged memory write
const tagged = taggedMemoryStore('code-analyzer', 'Detected God Object with 26 methods', {
violation_type: 'god-object',
severity: 'high',
project: 'memory-mcp'
});
// Automatically includes: agent metadata, timestamps, project, intent
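For orientation, the tagged record is expected to carry the WHO/WHEN/PROJECT/WHY fields described above; the shape below is illustrative only, since the exact field names are defined by memory-mcp-tagging-protocol.js.

// Roughly what `tagged` might contain (illustrative, not the exact schema)
const exampleTagged = {
  content: 'Detected God Object with 26 methods',
  who: { agent: 'code-analyzer', category: 'analyzer', capabilities: ['code-analysis'] },
  when: { iso: '2025-01-15T12:00:00.000Z', unix: 1736942400, readable: '2025-01-15 12:00 UTC' },
  project: 'memory-mcp',
  why: 'analysis',
  violation_type: 'god-object',
  severity: 'high'
};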
Documentation: C:\Users\17175\docs\DOGFOODING-SAFETY-RULES.md
mkdir C:\Users\17175\tmp\dogfood-sandbox
xcopy /E /I /Q <project> C:\Users\17175\tmp\dogfood-sandbox
cd C:\Users\17175\tmp\dogfood-sandbox && npm test
# If pass → apply to production
# If fail → reject fix
git stash push -u -m "backup-<timestamp>"
<apply-fix>
npm test || git stash pop # Rollback on failure
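The same gate can also be scripted; below is a minimal Node sketch of the sandbox check using only child_process.execSync (paths follow the commands above; the /Y flag is added here to suppress xcopy's overwrite prompt).

const { execSync } = require('child_process');

// Copy the project into the sandbox, run its tests there, and report pass/fail.
// Production is only touched when this returns true.
function testInSandbox(projectDir, sandboxDir = 'C:\\Users\\17175\\tmp\\dogfood-sandbox') {
  execSync(`xcopy /E /I /Q /Y "${projectDir}" "${sandboxDir}"`);
  try {
    execSync('npm test', { cwd: sandboxDir, stdio: 'inherit' });
    return true;   // tests passed -> safe to apply the fix to production
  } catch (err) {
    return false;  // tests failed -> reject the fix
  }
}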
.github/workflows/dogfooding-safety.yml

.\resources\scripts\run-quality-detection.bat memory-mcp
.\resources\scripts\run-pattern-retrieval.bat "God Object with 26 methods"
.\resources\scripts\run-continuous-improvement.bat memory-mcp
# Windows Task Scheduler (daily at 12:00 UTC)
schtasks /create /tn "Dogfooding-Cycle" ^
/tr "C:\Users\17175\claude-code-plugins\ruv-sparc-three-loop-system\skills\dogfooding-system\resources\scripts\run-continuous-improvement.bat all" ^
/sc daily /st 12:00
run-quality-detection.bat - Execute Phase 1 (Quality Detection)
run-pattern-retrieval.bat - Execute Phase 2 (Pattern Retrieval)
run-continuous-improvement.bat - Execute Phase 3 (Full Cycle)
generate-cycle-summary.js - Generate cycle summary reports
violation-report.md - Violation report template with metadata
fix-pattern.json - Fix pattern schema for Memory-MCP storage
cycle-summary.md - Cycle summary template with metrics
test-quality-detection.js - Verify Phase 1 workflow
test-pattern-retrieval.js - Verify Phase 2 vector search
test-continuous-improvement.js - Verify Phase 3 full cycle

Fix: Already patched in Memory-MCP
def __init__(self, ...):
    self.client = chromadb.PersistentClient(path=persist_directory)
    self.create_collection()  # <-- Added this line
Solution: Run Phase 1 (Quality Detection) first to populate Memory-MCP with violations and fixes
Solution: Enhance sandbox testing to better replicate the production environment
Upstream Skills (that feed into dogfooding):
functionality-audit - Sandbox testing + debugging
theater-detection-audit - Byzantine consensus verification
production-readiness - Complete deployment checklist

Downstream Skills (that use dogfooding):
cicd-intelligent-recovery - Automated failure recovery
performance-analysis - Performance optimization
code-review-assistant - Multi-agent swarm review

dogfooding-system/
├── skill.md (this file)
├── INDEX.md (comprehensive documentation)
├── resources/
│ ├── scripts/
│ │ ├── run-quality-detection.bat
│ │ ├── run-pattern-retrieval.bat
│ │ ├── run-continuous-improvement.bat
│ │ └── generate-cycle-summary.js
│ └── templates/
│ ├── violation-report.md
│ ├── fix-pattern.json
│ └── cycle-summary.md
├── tests/
│ ├── test-quality-detection.js
│ ├── test-pattern-retrieval.js
│ └── test-continuous-improvement.js
└── graphviz/
└── dogfooding-system-process.dot
Since implementation:
Status: ✅ PRODUCTION READY
Version: 1.0 (Gold Tier)
Skills: 3 SOPs (Quality Detection, Pattern Retrieval, Continuous Improvement)
Agents: hierarchical-coordinator, code-analyzer, coder, reviewer
MCP Tools: connascence-analyzer, memory-mcp, claude-flow
Safety: Sandbox testing + automated rollback + verification
Quality violations must be detected through measurable metrics (connascence analysis), not subjective judgment. Every finding requires concrete evidence.
In practice:
Fix quality issues by learning from past solutions, not reinventing approaches. Build institutional knowledge through vector search.
In practice:
Apply fixes in isolated sandboxes with automated rollback before touching production code.
In practice:
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Batch Fixes Without Testing | Applying multiple fixes simultaneously causes cascading failures with unknown root cause | Fix ONE violation at a time, test after each |
| Skipping Sandbox Validation | Direct production fixes risk breaking functionality with no recovery path | ALWAYS test in sandbox first, rollback on failure |
| Ignoring Pattern History | Reinventing solutions wastes time and introduces untested approaches | Query Memory-MCP for similar violations before fixing |
| Low Confidence Threshold | Flagging ambiguous patterns creates false positives and noise | Use >=40 confidence score threshold, require 2+ signals |
| Missing Metadata Tagging | Fixes stored without WHO/WHEN/PROJECT/WHY context become unusable for pattern retrieval | Use taggedMemoryStore() for automatic metadata injection |
The Dogfooding System represents a paradigm shift from reactive debugging to proactive quality improvement through institutional learning. By combining Connascence Analysis for detection, Memory-MCP for pattern storage, and sandbox testing for safe application, organizations build self-improving codebases that learn from past fixes.
The three-phase architecture (Quality Detection -> Pattern Retrieval -> Continuous Improvement) creates a feedback loop where every fix strengthens future detection. Teams implementing dogfooding should prioritize safety rules (sandbox testing, progressive fixes, test coverage requirements) and proper metadata tagging for pattern retrieval. The system's vector search capabilities enable finding relevant fixes even when violation details differ, as semantic similarity identifies underlying patterns.
Most critically, dogfooding succeeds when organizations resist the temptation to batch fixes or skip validation. The discipline of one-fix-at-a-time testing, combined with automated rollback, creates sustainable quality improvement without production risk. Teams measuring success by improvement velocity (violations fixed per day) rather than just violation counts build codebases that continuously evolve toward higher quality.