How to Contribute to Anthropic’s Claude Cookbook: A Complete Developer’s Guide
This guide walks you through setting up the development environment, using Claude Code slash commands for validation, following notebook best practices, and submitting pull requests to the Anthropic Claude Cookbook repository.
The Anthropic Claude Cookbook is the official repository of Jupyter notebooks that demonstrate how to build with Claude. Whether you’re adding a new skill, fixing a bug, or improving documentation, contributing to this cookbook helps the entire Claude community. This guide covers everything you need to start contributing—from environment setup to pull request best practices.
Why Contribute?
- Share your expertise: Show others how to use Claude for classification, RAG, tool use, and more.
- Improve quality: Help maintain high standards for code, documentation, and model usage.
- Get recognized: Your contributions are reviewed by both automated tools and Claude AI itself.
Development Setup
Before you write a single line of code, set up your local environment correctly.
Prerequisites
- Python 3.11 or higher
- uv (recommended) or pip
- Git
- A Claude API key (get one at console.anthropic.com)
Step 1: Install uv
uv is the recommended package manager because it’s fast and handles virtual environments cleanly.
# macOS / Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# Or with Homebrew
brew install uv
Step 2: Clone the Repository
git clone https://github.com/anthropics/anthropic-cookbook.git
cd anthropic-cookbook
Step 3: Create a Virtual Environment and Install Dependencies
uv sync --all-extras
If you prefer pip:
pip install -e ".[dev]"
Step 4: Install Pre-commit Hooks
Pre-commit hooks automatically format and validate your code before each commit.
uv run pre-commit install
Step 5: Set Up Your API Key
cp .env.example .env
# Edit .env and add your ANTHROPIC_API_KEY
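Before running any notebook, it helps to fail fast if the key isn't actually set. A minimal sketch (the helper name `require_api_key` is illustrative, not part of the repository):

```python
import os

def require_api_key(name: str = "ANTHROPIC_API_KEY") -> str:
    """Return the API key from the environment, failing fast with a clear message."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set. Copy .env.example to .env and add your key."
        )
    return key
```

Calling this at the top of a notebook turns a cryptic authentication error three cells later into an immediate, actionable message.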
Understanding the Quality Stack
The cookbook uses three layers of validation to keep notebooks consistent and reliable.
1. nbconvert
Jupyter nbconvert executes notebooks end-to-end to verify they run without errors. This is the same tool used in CI.
2. ruff
ruff is a lightning-fast Python linter and formatter with native Jupyter support. It checks both .py files and .ipynb notebooks.
3. Claude AI Review
Every pull request is automatically reviewed by Claude. It checks for model version accuracy, code quality, and adherence to best practices.
Using Claude Code Slash Commands
This repository includes custom slash commands that work both in Claude Code (local development) and GitHub Actions CI. When you’re working in the repository with Claude Code, these commands are automatically available.
Available Commands
| Command | Purpose |
|---|---|
| /link-review | Validate all links in markdown and notebooks |
| /model-check | Verify that Claude model references are current |
| /notebook-review | Comprehensive quality check on a notebook |
Example Usage
# Validate a specific notebook
/notebook-review skills/classification/guide.ipynb
# Check all model references
/model-check
# Check links in a README
/link-review README.md
These commands run the exact same logic as the CI pipeline, so you can catch issues before pushing.
Notebook Best Practices
Follow these guidelines to ensure your notebook is accepted quickly.
Use Environment Variables for API Keys
Never hardcode API keys. Use os.environ:
import os
from anthropic import Anthropic
client = Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY"))
Use Current Claude Models
Always use the latest model aliases. Check the official model overview for updates.
# Good: uses alias
response = client.messages.create(
model="claude-haiku-4-5",
max_tokens=1024,
messages=[{"role": "user", "content": "Hello"}]
)
# Avoid: hardcoded version that may become outdated
model="claude-3-haiku-20240307"
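The alias rule can be checked mechanically: dated model IDs end in an eight-digit date, while aliases do not. A minimal sketch of such a check (this regex and helper are an illustration, not the cookbook's actual /model-check logic):

```python
import re

# Dated model IDs look like "claude-3-haiku-20240307"; aliases omit the date suffix.
DATED_MODEL = re.compile(r"claude-[\w.-]*?-\d{8}")

def find_dated_models(text: str) -> list[str]:
    """Return any hardcoded, dated model IDs found in the given source text."""
    return sorted(set(DATED_MODEL.findall(text)))
```

Running this over a notebook's source before committing gives you a quick local approximation of the model-version CI check.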
Keep Notebooks Focused
- One concept per notebook (e.g., classification, RAG, tool use)
- Clear markdown cells explaining each step
- Expected outputs as markdown or captured cell outputs
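One rough way to gauge whether a notebook explains enough is the share of markdown cells. Since .ipynb files are plain JSON, the standard library suffices; this is a heuristic sketch, not an official cookbook check:

```python
import json

def markdown_ratio(nb_json: str) -> float:
    """Fraction of cells that are markdown — a rough proxy for explanation coverage."""
    cells = json.loads(nb_json)["cells"]
    if not cells:
        return 0.0
    md = sum(1 for c in cells if c.get("cell_type") == "markdown")
    return md / len(cells)
```

A very low ratio is a hint that code cells are missing the surrounding explanation readers need.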
Test Before Committing
Run the validation scripts locally:
# Lint and format
uv run ruff check skills/ --fix
uv run ruff format skills/
# Validate notebook structure
uv run python scripts/validate_notebooks.py
For a full execution test (requires API key):
uv run jupyter nbconvert --to notebook \
--execute skills/classification/guide.ipynb \
--ExecutePreprocessor.kernel_name=python3 \
--output test_output.ipynb
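If you run this execution test often, wrapping the nbconvert invocation in a small script keeps the flags consistent. A sketch using subprocess (the helper names are illustrative; the command itself mirrors the one above):

```python
import subprocess

def execute_notebook_cmd(path: str, output: str = "test_output.ipynb") -> list[str]:
    """Build the nbconvert command that executes a notebook end-to-end."""
    return [
        "jupyter", "nbconvert", "--to", "notebook",
        "--execute", path,
        "--ExecutePreprocessor.kernel_name=python3",
        "--output", output,
    ]

def run_execution_test(path: str) -> int:
    """Execute the notebook fresh; a non-zero return code means a cell failed."""
    return subprocess.run(execute_notebook_cmd(path)).returncode
```

Because nbconvert returns a non-zero exit code on any cell error, the return code alone tells you whether the notebook runs top-to-bottom.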
Git Workflow
Branch Naming
Use a descriptive branch name with your username:
git checkout -b alice/add-rag-example
Conventional Commits
Use the Conventional Commits format:
<type>(<scope>): <description>
| Type | When to Use |
|---|---|
| feat | New notebook or feature |
| fix | Bug fix |
| docs | Documentation changes |
| style | Formatting, linting |
| refactor | Code restructuring |
| test | Adding or fixing tests |
| chore | Maintenance tasks |
| ci | CI/CD changes |
git commit -m "feat(skills): add text-to-sql notebook"
git commit -m "fix(api): use environment variable for API key"
git commit -m "docs(readme): update installation instructions"
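The format is regular enough to validate with a one-line regex, which can double as a commit-msg hook. A minimal sketch built from the type table above (the scope is optional; this is an illustration, not the repository's actual hook):

```python
import re

TYPES = ("feat", "fix", "docs", "style", "refactor", "test", "chore", "ci")
# Matches <type>(<scope>): <description>, with the (<scope>) part optional.
COMMIT_RE = re.compile(rf"^({'|'.join(TYPES)})(\([\w-]+\))?: .+")

def is_conventional(message: str) -> bool:
    """Check whether a commit message follows the Conventional Commits format."""
    return COMMIT_RE.match(message) is not None
```

Rejecting non-conforming messages at commit time is cheaper than fixing them during review.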
Keep Commits Atomic
Each commit should represent one logical change. This makes review easier and allows reverting specific changes if needed.
Pull Request Guidelines
PR Title
Use the same conventional commit format as your commits.
PR Description
Include:
- What you changed
- Why you changed it
- How to test it
- Screenshots or output examples (if applicable)
CI Checks
Before your PR is merged, the following checks must pass:
- Link validation (via /link-review)
- Model version check (via /model-check)
- Notebook quality review (via /notebook-review)
- Claude AI review (automated)
Common Pitfalls to Avoid
- Hardcoding API keys – Always use environment variables.
- Using outdated model names – Check the official model list.
- Notebooks that don’t run top-to-bottom – Always execute your notebook fresh before committing.
- Too many concepts in one notebook – Keep it focused.
- Missing markdown explanations – Assume the reader is learning.
Key Takeaways
- Set up with uv for fast, reliable dependency management and pre-commit hooks.
- Use Claude Code slash commands (/notebook-review, /model-check, /link-review) to validate your work locally before pushing.
- Follow notebook best practices: environment variables for API keys, current model aliases, and one concept per notebook.
- Use conventional commits with descriptive messages and atomic changes.
- Run the full validation stack (ruff, nbconvert, link checks) before submitting your pull request to ensure CI passes on the first try.