| name | zammad-mcp-quality |
| description | Quality assurance and CodeRabbit learning system for Zammad MCP development. Use when creating PRs, implementing MCP tools, adding pagination, handling errors, or addressing CodeRabbit feedback. Provides checklists, pattern guides, and accumulated project learnings to prevent recurring issues. |
Ensure consistent, high-quality contributions to the Zammad MCP project by applying the checklists, pattern guides, and accumulated CodeRabbit learnings in this skill.
Trigger this skill when creating PRs, implementing MCP tools, adding pagination, handling errors, or addressing CodeRabbit feedback.
When starting a new feature, review relevant references:
For pagination features: → Read pagination-patterns.md
For error handling: → Read error-handling-guide.md
For type-heavy code: → Read type-annotation-standards.md
Keep the documented patterns in mind while coding.
ALWAYS run through the checklist before creating a PR:
→ Open pre-pr-checklist.md
This takes 5-10 minutes but reduces review iterations by 60-70%.
Quick quality check:
# From project root
./scripts/quality-check.sh && uv run pytest --cov=mcp_zammad
When CodeRabbit comments on your PR, check the relevant reference guide for the flagged pattern, fix the issue, search the codebase for other instances of the same pattern, and record the learning in coderabbit-learnings.md.
Process for keeping the skill current: review recent CodeRabbit feedback monthly and fold new patterns into the references.
Estimated time: 30 minutes/month
pre-pr-checklist.md - Quick self-review checklist. Use before every PR.
coderabbit-learnings.md - Consolidated feedback from PR reviews. Living knowledge base.
pagination-patterns.md - Complete guide to pagination in MCP servers.
error-handling-guide.md - Actionable error message patterns.
type-annotation-standards.md - Python 3.10+ type annotation guide, including cast() usage.
extract_feedback.py - Stub for automating CodeRabbit feedback extraction:
# Future usage
python scripts/extract_feedback.py --prs 10 --output references/coderabbit-learnings.md
Status: Planned for Phase 3 automation. See scripts/extract_feedback.py for the stub.
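Until that stub lands, a rough sketch of the idea might look like the following; the gh invocation and the bot-login filter are assumptions about how such a script could work, not its actual design:

```python
import json
import subprocess
import sys


def fetch_review_comments(repo: str, pr_number: int) -> list[dict]:
    """Pull review comments for one PR via the GitHub CLI (requires `gh auth login`)."""
    result = subprocess.run(
        ["gh", "api", f"repos/{repo}/pulls/{pr_number}/comments"],
        check=True,
        capture_output=True,
        text=True,
    )
    return json.loads(result.stdout)


def coderabbit_bodies(comments: list[dict]) -> list[str]:
    """Keep only comments whose author login looks like the CodeRabbit bot."""
    return [c["body"] for c in comments if "coderabbit" in c["user"]["login"].lower()]


if __name__ == "__main__":
    repo, pr = sys.argv[1], int(sys.argv[2])
    for body in coderabbit_bodies(fetch_review_comments(repo, pr)):
        print(body.splitlines()[0])  # print just the first line of each comment
```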
User: "I'm ready to create a PR for the new search filter feature"
Claude: Let me help you with pre-PR review using the Zammad MCP Quality skill.
*Reviews pre-pr-checklist.md*
I'll check your code against the quality checklist:
1. Type Annotations ✓
- All functions have return types
- Using modern list[str] syntax
- No parameter shadowing
2. Pagination ⚠️
- Found: response["total"] = len(items)
- Should be: total from API or None
- See pagination-patterns.md line 45
3. Error Handling ✓
- Messages are actionable
- Using proper exception chaining
Let me fix the pagination issue before you create the PR...
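The "actionable messages" and "exception chaining" items from the checklist above map to a pattern roughly like this sketch; the exception class and function here are hypothetical, not the project's actual API:

```python
class ZammadClientError(Exception):
    """Hypothetical error type, used only to illustrate the pattern."""


def get_ticket(client, ticket_id: int) -> dict:
    try:
        return client.get(f"tickets/{ticket_id}")
    except ConnectionError as exc:
        # Chain the original exception and tell the caller what to check next.
        raise ZammadClientError(
            f"Could not reach the Zammad API while fetching ticket {ticket_id}. "
            "Verify the configured Zammad URL and your network connection, then retry."
        ) from exc
```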
User: "Add a new tool for managing ticket watchers"
Claude: I'll implement this following the Zammad MCP patterns from the quality skill.
*Checks coderabbit-learnings.md for relevant patterns*
Key patterns to follow:
1. Use Pydantic model for input (TicketSearchParams pattern)
2. Support ResponseFormat.JSON and .MARKDOWN
3. Clear docstring explaining ID vs number (from Issue #99 learnings)
4. Proper error handling with actionable messages
*Reviews error-handling-guide.md for message templates*
Implementing tool with these quality standards...
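A minimal sketch of what such a tool's input model and output formatting could look like, assuming a Pydantic parameter model and a ResponseFormat enum like the ones referenced above; the watcher-specific names are hypothetical:

```python
import json
from enum import Enum

from pydantic import BaseModel, Field


class ResponseFormat(str, Enum):
    JSON = "json"
    MARKDOWN = "markdown"


class TicketWatcherParams(BaseModel):
    """Hypothetical input model following the TicketSearchParams pattern."""

    ticket_id: int = Field(description="Internal ticket ID, not the human-facing ticket number")
    response_format: ResponseFormat = ResponseFormat.MARKDOWN


def render_watchers(params: TicketWatcherParams, watchers: list[str]) -> str:
    """Render already-fetched watcher data in the requested format."""
    if params.response_format is ResponseFormat.JSON:
        return json.dumps({"ticket_id": params.ticket_id, "watchers": watchers})
    lines = [f"## Watchers for ticket {params.ticket_id}", *(f"- {w}" for w in watchers)]
    return "\n".join(lines)
```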
CodeRabbit: "The 'total' field is set to page count, not true total across all pages."
Claude: Let me check the pagination-patterns.md guide...
*Reviews pagination-patterns.md section on "Common Mistakes"*
I see - this is a known pattern. The guide shows:
❌ Wrong: response["total"] = len(tickets)
✅ Correct: response["total"] = total_from_api # or None if unknown
Let me fix this and also check for similar instances in the codebase...
*Searches for pattern in other pagination functions*
Found 3 other locations with the same issue. Fixing all...
*Updates coderabbit-learnings.md with occurrence count*
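A minimal sketch of the corrected pattern, assuming the upstream API may or may not report a true total; the helper and field names are illustrative:

```python
from typing import Any


def build_page_response(
    items: list[dict[str, Any]],
    page: int,
    per_page: int,
    total_from_api: int | None,
) -> dict[str, Any]:
    """Report the API's overall count when it exists, otherwise None; never len(items)."""
    return {
        "items": items,
        "page": page,
        "per_page": per_page,
        # len(items) is only the size of this page, not the total across all pages.
        "total": total_from_api,
    }
```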
Track review iterations per PR and recurring CodeRabbit comment patterns over time to measure skill effectiveness.
Update this skill when new CodeRabbit feedback surfaces patterns the references do not yet cover.
When updating references, keep coderabbit-learnings.md as the index.
This skill complements existing docs:
CLAUDE.md - Project context for Claude Code → Focuses on architecture, development rules → Quality skill adds CodeRabbit-specific patterns
.github/copilot-instructions.md - GitHub Copilot guidance → Focuses on code generation patterns → Quality skill adds review/validation patterns
.coderabbit.yaml - CodeRabbit configuration → Focuses on tool settings and path instructions → Quality skill captures learnings from reviews
mcp-builder skill - Generic MCP development → Focuses on MCP protocol and general patterns → Quality skill adds Zammad-specific learnings
Expected outcomes after skill adoption:
Week 1:
Month 1:
Month 3+:
Q: Checklist is too long, takes too much time. A: Focus on sections relevant to your changes. Not every item applies to every PR.
Q: Found a pattern not documented here? A: Great! Document it in coderabbit-learnings.md or open an issue to add it.
Q: CodeRabbit still commenting despite following the checklist? A: The skill evolves with the project. Update references based on new feedback.
Q: How do I know which reference to read? A: Start with pre-pr-checklist.md - it links to detailed guides as needed.
Planned automation:
extract_feedback.py - Parse CodeRabbit comments from PRs. Status: Designed in Phase 2, implementation pending.
Skill working well? Great! Keep using the checklist.
Found issues or improvements? Update the skill! It's designed to evolve.
Have questions? Check existing references or ask in PR discussions.
Monthly reviews ensure skill stays current and useful.
This skill is part of a systematic quality improvement system. By using it consistently, you're helping the project learn and improve over time.