// "Enhanced project documentation with AI-powered features. Enhanced with Context7 MCP for up-to-date documentation."
| name | moai-project-documentation |
| version | 4.0.0 |
| created | "2025-11-12T00:00:00.000Z" |
| updated | "2025-11-12T00:00:00.000Z" |
| status | stable |
| tier | specialization |
| description | Enhanced project documentation with AI-powered features. Enhanced with Context7 MCP for up-to-date documentation. |
| allowed-tools | Read, Glob, Grep, WebSearch, WebFetch, mcp__context7__resolve-library-id, mcp__context7__get-library-docs |
| primary-agent | alfred |
| secondary-agents | [] |
| keywords | ["project","documentation","git","frontend","kubernetes"] |
| tags | [] |
| orchestration | null |
| can_resume | true |
| typical_chain_position | middle |
| depends_on | [] |
# Project Documentation
Primary Agent: alfred
Secondary Agents: none
Version: 4.0.0
Keywords: project, documentation, git, frontend, kubernetes
## Metadata
## Purpose
Guides the interactive creation of the three core project documentation files (product.md, structure.md, tech.md) based on project type and user input. Provides templates, examples, checklists, and best practices for each supported project type (Web App, Mobile App, CLI Tool, Library, Data Science).
## Core Content
Ask the user to identify their project type (a minimal selection sketch follows this list):
Web Application
Mobile Application
CLI Tool / Utility
Shared Library / SDK
Data Science / ML Project
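The selection step can be as simple as a numbered prompt. The sketch below is a hypothetical illustration (the `PROJECT_TYPES` list and `ask_project_type` helper are not part of MoAI-ADK); it only shows how the answer can drive which templates are generated.
```python
# Minimal sketch of the project-type prompt (hypothetical helper, not MoAI-ADK API).
PROJECT_TYPES = [
    "Web Application",
    "Mobile Application",
    "CLI Tool / Utility",
    "Shared Library / SDK",
    "Data Science / ML Project",
]

def ask_project_type() -> str:
    """Prompt the user to pick one of the supported project types."""
    for i, name in enumerate(PROJECT_TYPES, start=1):
        print(f"{i}. {name}")
    choice = int(input("Select your project type [1-5]: "))
    if not 1 <= choice <= len(PROJECT_TYPES):
        raise ValueError(f"Choice must be between 1 and {len(PROJECT_TYPES)}")
    return PROJECT_TYPES[choice - 1]

if __name__ == "__main__":
    selected = ask_project_type()
    print(f"Generating product.md, structure.md, tech.md for: {selected}")
```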
# Mission & Strategy
- What problem do we solve?
- Who are the users?
- What's our value proposition?
# Success Metrics
- How do we measure impact?
- What are KPIs?
- How often do we measure?
# Next Features (SPEC Backlog)
- What features are coming?
- How are they prioritized?
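Taken together, these questions define the skeleton of product.md. Below is a minimal scaffolding sketch, assuming a plain `pathlib` write into a `.moai/project/` directory; both the path and the `scaffold_product_md` helper are illustrative assumptions, not a fixed MoAI-ADK convention.
```python
# Illustrative scaffold for product.md (path and helper name are assumptions).
from pathlib import Path

PRODUCT_TEMPLATE = """\
# Mission & Strategy
- What problem do we solve?
- Who are the users?
- What's our value proposition?

# Success Metrics
- How do we measure impact?
- What are KPIs?
- How often do we measure?

# Next Features (SPEC Backlog)
- What features are coming?
- How are they prioritized?
"""

def scaffold_product_md(project_root: str = ".") -> Path:
    """Write an empty product.md skeleton the user can fill in interactively."""
    target = Path(project_root) / ".moai" / "project" / "product.md"
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(PRODUCT_TEMPLATE, encoding="utf-8")
    return target
```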
Web Application Product.md Focus:
Mobile Application Product.md Focus:
CLI Tool Product.md Focus:
Library Product.md Focus:
Data Science Product.md Focus:
# System Architecture
- What's the overall design pattern?
- What layers/tiers exist?
- How do components interact?
# Core Modules
- What are the main building blocks?
- What's each module responsible for?
- How do they communicate?
# External Integrations
- What external systems do we depend on?
- How do we authenticate?
- What's our fallback strategy?
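The fallback question usually comes down to "what do we do when the integration is down?". Below is a small, generic retry-then-fallback sketch; the function names are illustrative and not tied to any specific external service.
```python
# Generic retry-then-fallback helper (illustrative; not tied to a specific integration).
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def call_with_fallback(
    primary: Callable[[], T],
    fallback: Callable[[], T],
    retries: int = 3,
    delay_seconds: float = 0.5,
) -> T:
    """Try the primary integration a few times, then fall back (e.g., to a cache)."""
    for attempt in range(1, retries + 1):
        try:
            return primary()
        except Exception as exc:  # in real code, catch the integration's specific errors
            if attempt == retries:
                print(f"Primary failed after {retries} attempts ({exc}); using fallback")
                return fallback()
            time.sleep(delay_seconds * attempt)  # simple linear backoff
    raise RuntimeError("unreachable")
```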
# Traceability
- How do SPECs map to code?
- How do we trace changes?
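One lightweight way to answer the traceability questions is a tag scan. The sketch below assumes a simple comment-tag convention such as `@SPEC:AUTH-001` placed next to the code that implements a SPEC; the exact tag format your project uses may differ.
```python
# Scan source files for SPEC tags (assumes a "@SPEC:<ID>" comment convention).
import re
from collections import defaultdict
from pathlib import Path

TAG_PATTERN = re.compile(r"@SPEC:([A-Z0-9-]+)")

def build_trace_index(src_dir: str = "src") -> dict[str, list[str]]:
    """Map each SPEC ID to the files that reference it."""
    index: dict[str, list[str]] = defaultdict(list)
    for path in Path(src_dir).rglob("*.py"):
        for spec_id in TAG_PATTERN.findall(path.read_text(encoding="utf-8")):
            index[spec_id].append(str(path))
    return dict(index)

if __name__ == "__main__":
    for spec_id, files in sorted(build_trace_index().items()):
        print(f"{spec_id}: {', '.join(files)}")
```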
Web Application Architecture:
Frontend (React/Vue) → API Layer (FastAPI/Node) → Database (PostgreSQL)
↓
WebSocket Server (Real-time features)
↓
Message Queue (Async jobs)
↓
Background Workers
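As a deliberately tiny illustration of the API layer in this diagram, the sketch below uses FastAPI with an in-memory dict standing in for PostgreSQL; a real project would swap in database access and add the WebSocket and queue pieces separately.
```python
# Minimal API-layer sketch (FastAPI); the in-memory dict stands in for PostgreSQL.
from fastapi import FastAPI, HTTPException

app = FastAPI()

FAKE_DB: dict[int, dict] = {1: {"id": 1, "name": "Example item"}}

@app.get("/health")
def health() -> dict:
    """Liveness endpoint used by Kubernetes probes."""
    return {"status": "ok"}

@app.get("/items/{item_id}")
def get_item(item_id: int) -> dict:
    """Fetch a single item; 404 if the 'database' has no such row."""
    item = FAKE_DB.get(item_id)
    if item is None:
        raise HTTPException(status_code=404, detail="Item not found")
    return item
```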
Mobile Application Architecture:
UI Layer (Screens, Widgets)
↓
State Management (Bloc, Redux, Riverpod)
↓
Data Layer (Local DB: SQLite/Realm, Remote: REST/GraphQL)
↓
Authentication (OAuth, JWT)
↓
Native Modules (Camera, GPS, Contacts)
↓
Offline Sync Engine
CLI Tool Architecture:
Input Parsing → Command Router → Core Logic → Output Formatter
↓
Validation Layer
↓
Caching Layer
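The same flow maps directly onto the standard library. The sketch below is a minimal argparse-based example of input parsing, command routing, core logic, and output formatting; the `greet` subcommand is purely illustrative.
```python
# Input parsing -> command routing -> core logic -> output formatting (argparse sketch).
import argparse
import json

def run_greet(name: str) -> dict:
    """Core logic: build a result independent of how it will be printed."""
    return {"message": f"Hello, {name}!"}

def format_output(result: dict, as_json: bool) -> str:
    """Output formatter: render the result for the terminal."""
    return json.dumps(result) if as_json else result["message"]

def main() -> None:
    parser = argparse.ArgumentParser(prog="mycli")
    parser.add_argument("--json", action="store_true", help="emit JSON output")
    subparsers = parser.add_subparsers(dest="command", required=True)
    greet = subparsers.add_parser("greet", help="print a greeting")
    greet.add_argument("name")

    args = parser.parse_args()          # input parsing
    if args.command == "greet":         # command routing
        result = run_greet(args.name)   # core logic
        print(format_output(result, args.json))  # output formatting

if __name__ == "__main__":
    main()
```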
Library Architecture:
Public API Surface
↓
Type Guards / Validation
↓
Core Logic
↓
Platform Adapters (Node.js, Browser, Deno)
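In Python terms, the first three layers can be sketched as a thin public function that validates its input before delegating to private core logic; the names below are illustrative only, and platform adapters are omitted for brevity.
```python
# Public API surface -> validation -> core logic (illustrative layering sketch).

def _core_slugify(text: str) -> str:
    """Core logic: kept private so the public surface stays small and stable."""
    return "-".join(text.lower().split())

def slugify(text: str) -> str:
    """Public API: validate input, then delegate to the core implementation."""
    if not isinstance(text, str):
        raise TypeError(f"expected str, got {type(text).__name__}")
    if not text.strip():
        raise ValueError("text must not be empty")
    return _core_slugify(text)
```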
Data Science Architecture:
Data Ingestion → Feature Engineering → Model Training → Inference
↓
Feature Store
↓
Model Registry
↓
Monitoring & Alerting
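A compressed end-to-end version of this flow (ingestion, feature engineering, training, inference) can be sketched with scikit-learn; the synthetic dataset below stands in for real ingestion.
```python
# Ingestion -> feature engineering -> training -> inference, compressed into one pipeline.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# "Ingestion": synthetic data stands in for the real data source.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Feature engineering + model training in a single, reproducible pipeline.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)

# Inference + a quick accuracy check (monitoring would track this over time).
print(f"Test accuracy: {pipeline.score(X_test, y_test):.2f}")
```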
# Technology Stack
- What language(s)?
- What version ranges?
- Why these choices?
# Quality Gates
- What's required to merge?
- How do we measure quality?
- What tools enforce standards?
# Security Policy
- How do we manage secrets?
- How do we handle vulnerabilities?
- What's our incident response?
# Deployment Strategy
- Where do we deploy?
- How do we release?
- How do we rollback?
Web Application:
Frontend: TypeScript, React 18, Vitest, TailwindCSS
Backend: Python 3.13, FastAPI, pytest (85% coverage)
Database: PostgreSQL 15, Alembic migrations
DevOps: Docker, Kubernetes, GitHub Actions
Quality: TypeScript strict mode, mypy, ruff, GitHub code scanning
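One way to make these quality gates executable locally is a small wrapper that runs the same tools CI would. The sketch below assumes ruff, mypy, and pytest with pytest-cov are installed and a src/ layout; the 85% threshold mirrors the coverage target above.
```python
# Run the merge gates locally (assumes ruff, mypy, pytest + pytest-cov are installed).
import subprocess
import sys

GATES = [
    ["ruff", "check", "."],
    ["mypy", "src"],
    ["pytest", "--cov=src", "--cov-fail-under=85"],
]

def main() -> int:
    for command in GATES:
        print(f"$ {' '.join(command)}")
        result = subprocess.run(command)
        if result.returncode != 0:
            print("Quality gate failed; do not merge.")
            return result.returncode
    print("All quality gates passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```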
Mobile Application:
Framework: Flutter 3.13 or React Native 0.72
Language: Dart or TypeScript
Testing: flutter test or Jest, 80%+ coverage
State Management: Riverpod, Bloc, or Redux
Local Database: SQLite, Hive, or Realm
HTTP Client: Dio or Axios wrapper
UI: Material Design or Cupertino
DevOps: Fastlane, GitHub Actions for app store deployment
Quality: flutter analyze, dart format, 80%+ test coverage
Performance: App size <50MB (iOS), startup <2s
CLI Tool:
Language: Go 1.21 or Python 3.13
Testing: Go's built-in testing or pytest
Packaging: Single binary (Go) or PyPI (Python)
Quality: golangci-lint or ruff, <100MB binary
Performance: <100ms startup time
Library:
Language: TypeScript 5.2 or Python 3.13
Testing: Vitest or pytest, 90%+ coverage (libraries = higher bar)
Package Manager: npm/pnpm or uv
Documentation: TSDoc/JSDoc or Google-style docstrings
Type Safety: TypeScript strict or mypy strict
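For the documentation gate, a Google-style docstring looks like the short example below (the function itself is only a placeholder).
```python
# Google-style docstring example (the function is a placeholder).
def clamp(value: float, low: float, high: float) -> float:
    """Clamp a value to the inclusive range [low, high].

    Args:
        value: The number to clamp.
        low: Lower bound of the allowed range.
        high: Upper bound of the allowed range.

    Returns:
        The value itself if it lies inside the range, otherwise the nearest bound.

    Raises:
        ValueError: If low is greater than high.
    """
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))
```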
Data Science:
Language: Python 3.13, Jupyter notebooks
ML Framework: scikit-learn, PyTorch, or TensorFlow
Data: pandas, Polars, DuckDB
Testing: pytest, nbval (notebook validation)
Experiment Tracking: MLflow, Weights & Biases
Quality: 80% code coverage, data validation tests
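A "data validation test" in this stack can be as small as a pytest check over a DataFrame; the column names below are hypothetical and stand in for your real schema.
```python
# Example pytest data validation test (column names are hypothetical).
import pandas as pd

def load_sample_frame() -> pd.DataFrame:
    """Stand-in for real ingestion; replace with your data loader."""
    return pd.DataFrame({"user_id": [1, 2, 3], "amount": [9.99, 0.0, 42.5]})

def test_no_missing_values() -> None:
    df = load_sample_frame()
    assert not df.isna().any().any(), "dataset contains missing values"

def test_amount_is_non_negative() -> None:
    df = load_sample_frame()
    assert (df["amount"] >= 0).all(), "amount column must be non-negative"
```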
❌ Too Vague
✅ Specific
❌ Over-Specified in product.md
✅ Architecture-Level
❌ Inconsistent Across Documents
✅ Aligned
❌ Outdated
✅ Fresh
## Examples by Project Type
Web Application Product.md excerpt:
---
Quality Gates
- Test coverage: 85% minimum
- Type errors: Zero in strict mode
- Bundle size: <200KB gzipped
Mobile Application Product.md excerpt:
---
Deployment
- App Store & Google Play via Fastlane
- TestFlight for beta testing
- Version every 2 weeks
CLI Tool Product.md excerpt:
---
Build
- Binary size: <100MB
- Startup time: <100ms
- Distribution: GitHub Releases + Homebrew
Library Product.md excerpt:
---
Primary Language: TypeScript 5.2+
- Test coverage: 90% (libraries have higher bar)
- Type checking: Zero errors in strict mode
- Bundle: <50KB gzipped, tree-shakeable
Data Science Product.md excerpt:
---
## Versioning & Updates
**When to update this Skill:**
- New programming languages added to MoAI
- New project type examples needed
- Quality gate standards change
- Package management tools evolve
**Current version:** 4.0.0 (2025-11-12)
---
### Level 3: Advanced Patterns (Expert Reference)
> **Note**: Advanced patterns for complex scenarios.
**Coming soon**: Deep dive into expert-level usage.
---
## 🎯 Best Practices Checklist
**Must-Have:**
- ✅ [Critical practice 1]
- ✅ [Critical practice 2]
**Recommended:**
- ✅ [Recommended practice 1]
- ✅ [Recommended practice 2]
**Security:**
- 🔒 [Security practice 1]
---
## 📚 Context7 MCP Integration
**When to Use Context7 for This Skill:**
This skill benefits from Context7 when:
- Working with [project]
- Need latest documentation
- Verifying technical details
**Example Usage:**
```python
# Fetch the latest documentation for a library (placeholder Context7 ID shown)
import asyncio
from moai_adk.integrations import Context7Helper

async def fetch_docs():
    helper = Context7Helper()
    return await helper.get_docs(
        library_id="/org/library",
        topic="project",
        tokens=5000,
    )

docs = asyncio.run(fetch_docs())
```
Relevant Libraries:
| Library | Context7 ID | Use Case |
|---|---|---|
| [Library 1] | /org/lib1 | [When to use] |
When to use moai-project-documentation:
Start
├─ Need project documentation?
│   ├─ YES → Use this skill
│   └─ NO → Consider alternatives
└─ Complex scenario?
    ├─ YES → See Level 3
    └─ NO → Start with Level 1
Prerequisite Skills:
Complementary Skills:
Next Steps:
Primary Documentation:
Best Practices:
**4.0.0** (2025-11-12)
Generated with: MoAI-ADK Skill Factory
Last Updated: 2025-11-12
Maintained by: Primary Agent (alfred)