---
name: model-release-briefing
description: Use when a new Anthropic model is released and the user wants a personalized briefing and hands-on tutorial. Use when the user says "new model released", "walk me through the release", "briefing on Claude X", or invokes /model-release-briefing.
user-invocable: true
argument-hint: <model-name> <path-to-source-docs>
---
# Model Release Briefing
Generate a personalized briefing and hands-on tutorial for a new Anthropic model release. Six phases, executed sequentially. Each phase must complete before the next begins.
## Phase 1: Ingest Source Materials
Read all files at the provided path ($ARGUMENTS[1]); if no path was given, ask the user for one. Accept blog posts, transcripts, summaries, prompt templates, changelogs, and release notes. Build a comprehensive picture of every change in the release.
## Phase 2: Gather User Context
Check conversation history and memory files for user context. If insufficient, ask these 4 intake questions and wait for answers before proceeding:
- What is your role? (developer, PM, researcher, etc.)
- How do you primarily use Claude? (coding, writing, analysis, agents, etc.)
- Which Claude products do you use? (API, Claude.ai, Claude Code, etc.)
- What is your most common AI workflow?
## Phase 3: Generate Personalized Briefing
Using briefing-reference-template.md, walk through every category of changes. For each item, cover what it is, what it does in practice, and why it matters to this specific user. Structure:
- Open with "What Matters Most for You": the top 3-4 changes, ranked by impact for their role and workflow
- Walk through all categories: skip nothing, but weight detail by relevance
- Close with "Practical Takeaways": 3-7 actionable items they can do today
## Phase 4: Identify Tutorial Features
From the briefing, select the 3-5 features with the highest practical impact on this user's workflows. Present the selection to the user and get confirmation before proceeding.
## Phase 5: Create Self-Contained Tutorial
Using tutorial-structure.md, generate a hands-on tutorial. Requirements:
- Global prerequisites section (tools, subscriptions, CLI versions)
- Per-feature prerequisites (SDK versions, env vars, feature flags)
- Step-by-step exercises with working code (a sketch of one such exercise follows this list)
- Verification steps for each exercise
- Zero reliance on external docs: everything needed is inline
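For calibration, here is a minimal sketch of what one exercise block might look like, assuming the Python `anthropic` SDK. The model ID and the thinking settings are placeholders, not verified values; Phase 6 replaces them with whatever the official docs actually specify.

```python
# Illustrative tutorial exercise (placeholders only; verify in Phase 6).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-new-model-placeholder",  # replace with the released model ID
    max_tokens=8192,
    # Parameter name and shape are assumptions; confirm against docs.anthropic.com.
    thinking={"type": "enabled", "budget_tokens": 4096},
    messages=[{"role": "user", "content": "Summarize the release notes in three bullets."}],
)

# Verification step: the request completed and produced a text block.
assert response.stop_reason == "end_turn"
for block in response.content:
    if block.type == "text":
        print(block.text)
```

In the generated tutorial, each exercise like this should state its per-feature prerequisites (minimum SDK version, required env vars) immediately above the code and its expected output immediately below.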
## Phase 6: Verification Pass (Non-Negotiable)
For every feature flag, config value, API parameter, and setup step in the tutorial:
- Fetch the official documentation (docs.anthropic.com, relevant SDK docs)
- Verify each claim against the source
- Fix any discrepancies before presenting to the user
Use verification-checklist.md as the verification guide. Do not skip this phase.
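Part of this pass can be mechanized as a crude spot check: for each claimed parameter or flag, fetch the cited documentation page and confirm the name appears there at all. The sketch below assumes Python with `requests`; the URL and the claim shown are illustrative examples rather than confirmed docs paths, and a string match only flags items for manual review, it does not prove them correct.

```python
# Crude spot-check helper for the verification pass (illustrative only).
# The docs URL and parameter name are example values, not confirmed paths.
import requests

CLAIMS = {
    # claimed parameter or flag -> documentation page expected to mention it
    "budget_tokens": "https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking",
}

def spot_check(claims: dict[str, str]) -> list[str]:
    """Return claims whose name does not appear on the cited page."""
    failures = []
    for name, url in claims.items():
        page = requests.get(url, timeout=30)
        page.raise_for_status()
        if name not in page.text:
            failures.append(f"{name!r} not found at {url}")
    return failures

if __name__ == "__main__":
    for failure in spot_check(CLAIMS):
        print("NEEDS MANUAL REVIEW:", failure)
```

A clean run here does not replace reading the docs; it only narrows the list of claims that need a careful manual check.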
## Lessons Learned
These are real errors caught in previous runs. Keep them in mind throughout:
- Features may be disabled by default. Example: agent teams required an env var (`CLAUDE_CODE_ENABLE_TEAMS=1`) not mentioned in blog posts. Always check for opt-in requirements (see the preflight sketch at the end of this section).
- API parameters change between model versions. Example: `budget_tokens` for extended thinking was deprecated in favor of `token_count` with `type: "adaptive"`. Always verify parameter names and types.
- SDK versions matter. New parameters often require the latest SDK version. Pin and document the minimum version.
- "Max" effort levels may be model-exclusive. Some settings (like
thinking: {"type": "enabled", "budget_tokens": "max"}) only work on specific models.
- Subagent vs. agent team confusion is real. Always include a clear comparison table when both concepts exist.
- Blog posts and summaries contain errors. Never rely solely on marketing materials. Always verify against official API docs and changelogs.
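These lessons suggest every tutorial exercise should open with a preflight check. A minimal sketch, assuming Python: the minimum SDK version and the env var below are placeholders echoing the examples above, not verified requirements for any particular release.

```python
# Preflight check for a tutorial exercise (placeholder values only).
# The minimum SDK version and env var echo the example lessons above;
# pin the real requirements after verifying them against the release docs.
import os
from importlib.metadata import version

MIN_ANTHROPIC_SDK = "0.40.0"               # placeholder minimum version
REQUIRED_ENV = "CLAUDE_CODE_ENABLE_TEAMS"  # example opt-in flag from a past release

def as_tuple(v: str) -> tuple[int, ...]:
    # Naive numeric comparison; good enough for a sketch.
    return tuple(int(part) for part in v.split(".")[:3])

installed = version("anthropic")
if as_tuple(installed) < as_tuple(MIN_ANTHROPIC_SDK):
    raise SystemExit(f"anthropic {installed} installed; need >= {MIN_ANTHROPIC_SDK}")

if os.environ.get(REQUIRED_ENV) != "1":
    raise SystemExit(f"Set {REQUIRED_ENV}=1 before running this exercise (opt-in feature).")

print("Preflight OK: SDK version and opt-in flags look right.")
```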