| name | cookbook-audit |
| description | Audit an Anthropic Cookbook notebook based on a rubric. Use whenever a notebook review or audit is requested. |
Review the requested Cookbook notebook using the guidelines and rubrics in style_guide.md. Provide a score based on the scoring guidelines, along with recommendations for improving the cookbook.
IMPORTANT: Always read style_guide.md first before conducting an audit. The style guide provides the canonical templates, detailed examples, and good/bad references to use throughout the audit.
Follow these steps for a comprehensive audit:
1. Read style_guide.md to understand current best practices.
2. Run python3 validate_notebook.py <path> to catch technical issues and generate markdown.
3. Check the secret-scanning configuration in scripts/detect-secrets/plugins.py and scripts/detect-secrets/.secrets.baseline.
4. Review the generated markdown in the tmp/ folder for easier review (saves context vs the raw .ipynb).
Present your audit using this structure:
[Brief justification with specific examples]
[Brief justification with specific examples]
[Brief justification with specific examples]
[Brief justification with specific examples]
[Prioritized, actionable list of improvements with references to specific sections]
[Show specific excerpts from the notebook with concrete suggestions for improvement]
Use this checklist to ensure comprehensive coverage:
- Introduction (See style_guide.md Section 1)
- Prerequisites & Setup (See style_guide.md Section 2)
- Structure & Organization
- Conclusion (See style_guide.md Section 4)
- Code Quality
- Output Management
- Content Quality
- Technical Requirements
Cookbooks are primarily action-oriented but strategically incorporate understanding, and are informed by the Diataxis framework.
Core Principles:
A good cookbook doesn't just help users solve today's problem; it also helps them understand the underlying principles behind the solutions, encouraging them to recognize when and how to adapt approaches. Users will be able to make more informed decisions about AI system design, develop judgement about model outputs, and build skills that transfer to future AI systems.
- Cookbooks are not pure tutorials: We assume users have basic technical skills and API familiarity. We clearly state prerequisites in our cookbooks and direct users to the Academy to learn more about a topic.
- They are not comprehensive explanations: We don't teach transformer architecture or probability theory. Our users follow cookbooks to solve problems they are facing today; they are busy, in the midst of learning or building, and want to apply what they learn to their immediate needs.
- Cookbooks are not reference docs: We don't exhaustively document every parameter; we link to the appropriate documentation as needed.
- Cookbooks are not simple tips and tricks: We don't teach "hacks" that only work for the current model generation, and we don't over-promise and under-deliver.
- Cookbooks are not production-ready code: They showcase use cases and capabilities, not production patterns. Excessive error handling is not required.
- Use dotenv.load_dotenv() instead of os.environ
- Remove extraneous output with %%capture
- Show relevant output (see the sketch below)
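For instance, a minimal sketch of trimming output to what matters (the field names follow the Anthropic Python SDK; the model alias and prompt are illustrative):

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative alias; real notebooks should reference their MODEL constant
    max_tokens=300,
    messages=[{"role": "user", "content": "Classify this support ticket: 'Refund not received after 10 days'"}],
)

# Surface only what the reader needs, not the full raw response object
print(response.content[0].text)
print(f"Tokens: {response.usage.input_tokens} in / {response.usage.output_tokens} out")
```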
See style_guide.md for detailed templates and examples
Introduction (see style_guide.md Section 1). Must include the elements from the style guide's introduction template.
❌ Avoid: Leading with machinery ("We will build a research agent...")
✅ Do: Lead with problem/value ("Your team spends hours triaging CI failures...")
Prerequisites & Setup (see style_guide.md Section 2). Must include:
- %%capture for pip installs
- dotenv.load_dotenv(), not os.environ
- A MODEL constant at the top
Structure & Organization: organized by logical steps or phases, each with context before the code and insights after it.
Conclusion (see style_guide.md Section 4). Must include the elements from the style guide's conclusion template.
❌ Avoid: Generic summaries ("We've demonstrated how the SDK enables...")
✅ Do: Actionable guidance ("Consider applying this to X... Next, try Y...")
Refer to style_guide.md for detailed good/bad examples. Watch for these issues:
❌ Leading with machinery: "We will build a research agent using the Claude SDK..."
❌ Feature dumps: Listing SDK methods or tool capabilities
❌ Vague learning objectives: "Learn about agents" or "Understand the API"
✅ Problem-first framing with specific, actionable learning objectives
❌ Noisy pip install output without %%capture
❌ Multiple separate pip install commands
❌ Using os.environ["API_KEY"] = "your_key" instead of dotenv
❌ Hardcoding model names throughout instead of using a MODEL constant
✅ Clean setup with grouped installs, dotenv, and constants
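A minimal sketch of that clean setup, split across two notebook cells (the package list and model alias are illustrative, not prescribed):

```python
%%capture
# One grouped install cell; %%capture hides the noisy pip output
%pip install anthropic python-dotenv
```

```python
# Environment and constants in a single setup cell
from dotenv import load_dotenv
import anthropic

load_dotenv()  # reads ANTHROPIC_API_KEY from .env instead of hardcoding via os.environ
MODEL = "claude-sonnet-4-5"  # illustrative alias; reference MODEL everywhere instead of string literals

client = anthropic.Anthropic()
```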
❌ Code blocks without explanatory text before them
❌ No explanation of what we learned after running code
❌ Comments that explain "what" the code does (code should be self-documenting)
❌ Over-explaining obvious code
✅ Context before code, insights after code, comments explain "why"
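A hypothetical before/after for the comment guidance (assumes the client and MODEL constant from a setup cell like the one sketched above; the prompt is made up):

```python
# ❌ "What" comment: restates the code
# Set temperature to 0
# ✅ "Why" comment: explains the intent
# temperature=0 keeps the extraction deterministic, so reruns match the surrounding narrative
response = client.messages.create(
    model=MODEL,
    max_tokens=200,
    temperature=0,
    messages=[{"role": "user", "content": "Extract the invoice total from the email above."}],
)
```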
❌ Generic summaries: "We've demonstrated how the SDK enables..."
❌ Simply restating what the notebook did without guidance
❌ Not mapping back to the stated learning objectives
✅ Actionable guidance on applying lessons to the user's specific context