How to Contribute to Anthropic’s Claude Cookbook: A Developer’s Guide
This guide walks you through contributing to the Anthropic Claude Cookbook: from cloning the repo and installing dependencies with uv, to running validation checks, using Claude Code slash commands, and submitting a PR that meets Anthropic’s quality standards.
Anthropic’s Claude Cookbook is the go-to resource for practical, hands-on examples of using Claude AI. Whether you’re building a RAG pipeline, a text-to-SQL agent, or a multi-step reasoning workflow, the cookbook provides ready-to-run Jupyter notebooks that showcase best practices.
But the cookbook isn’t just for consuming—it’s for contributing. Anthropic actively encourages the community to submit high-quality notebooks that demonstrate new skills, patterns, or integrations. This guide will walk you through everything you need to know to make your first contribution, from setting up your development environment to passing the automated quality checks.
Why Contribute?
Before we dive into the technical steps, it’s worth understanding what makes a good contribution. The Claude Cookbook is not a dumping ground for random experiments. Each notebook should:
- Teach one clear concept (e.g., “How to use Claude for structured data extraction”)
- Be fully executable from top to bottom
- Use current Claude models and best practices
- Include clear explanations and expected outputs
Prerequisites
To follow this guide, you’ll need:
- Python 3.11 or higher installed on your machine
- A GitHub account and basic familiarity with Git
- An Anthropic API key (for testing notebook execution)
- A terminal or command-line interface
Step 1: Set Up Your Development Environment
Anthropic recommends using uv, a fast Python package manager, for development. If you don’t have it yet, install it with one of these commands:
```bash
# macOS / Linux (curl)
curl -LsSf https://astral.sh/uv/install.sh | sh

# macOS (Homebrew)
brew install uv
```
Once uv is installed, clone the repository and set up your environment:
```bash
git clone https://github.com/anthropics/anthropic-cookbook.git
cd anthropic-cookbook

# Create a virtual environment and install all dependencies
uv sync --all-extras
```
If you prefer pip, you can use:
```bash
pip install -e ".[dev]"
```
Next, install the pre-commit hooks. These will automatically check your code for issues before every commit:
```bash
uv run pre-commit install
```
Finally, set up your API key:
```bash
cp .env.example .env
# Edit .env and add your ANTHROPIC_API_KEY
```
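In notebook code, it helps to fail loudly when the key is missing rather than let the first API call error out with a confusing message. A minimal stdlib-only sketch (the helper name is illustrative, not part of the cookbook):

```python
import os

def get_api_key(name: str = "ANTHROPIC_API_KEY") -> str:
    """Read the API key from the environment, failing loudly if it is unset."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set. Copy .env.example to .env and add your key, "
            "or export it in your shell."
        )
    return key
```

Dropping a call to this helper into the first cell of a notebook gives readers an actionable error before any model call is attempted.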
Step 2: Understand the Quality Standards
Anthropic uses an automated Notebook Validation Stack to ensure every notebook meets a high bar. The stack includes:
- nbconvert – Executes notebooks from top to bottom to verify they run without errors.
- ruff – A lightning-fast Python linter and formatter with native Jupyter notebook support.
- Claude AI Review – Anthropic’s own AI reviews your code for correctness, model usage, and best practices.
Step 3: Use Claude Code Slash Commands
One of the most powerful features of this repository is its integration with Claude Code (Anthropic’s CLI tool for AI-assisted development). The repository includes custom slash commands that run the same validations as the CI pipeline.
Once you have Claude Code installed and are working inside the repository, you can run:
```
# Validate all links in a notebook or markdown file
/link-review skills/my-notebook.ipynb

# Check that Claude model references are current
/model-check

# Run a comprehensive notebook quality check
/notebook-review skills/my-notebook.ipynb
```
These commands are defined in .claude/commands/ and work both locally and in GitHub Actions CI. Running them before you push can save you from failing CI checks later.
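As a rough illustration of how such a command is defined, assuming the standard Claude Code convention of one markdown prompt file per command (with `$ARGUMENTS` standing in for whatever you type after the command name), a file like `.claude/commands/notebook-review.md` might look like this. The prompt text here is hypothetical; the repository's actual command files will differ:

```
Review the notebook at $ARGUMENTS for quality issues:
- Verify every code cell can run top to bottom without errors
- Check that Claude model references use current aliases
- Flag any hardcoded API keys or other secrets
- Confirm each cell's explanation matches what the code does
```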
Step 4: Write Your Notebook
Now comes the creative part. Here are the best practices you should follow:
Use Environment Variables for API Keys
Never hardcode your API key. Use os.environ:
```python
import os
from anthropic import Anthropic

client = Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY"))
```
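It can also help readers to keep the request settings in one place so the model name and token budget are easy to spot and change. A sketch, where the helper name and defaults are illustrative rather than part of the cookbook:

```python
# Current Haiku alias, per the model list in this guide; adjust if it changes.
DEFAULT_MODEL = "claude-haiku-4-5"

def build_message_params(prompt: str, model: str = DEFAULT_MODEL,
                         max_tokens: int = 1024) -> dict:
    """Collect the keyword arguments for client.messages.create()."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

# Usage (requires the anthropic package and a valid key):
# response = client.messages.create(**build_message_params("Hello, Claude"))
```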
Use Current Claude Models
Always reference the latest model aliases. As of this writing:
- Haiku 4.5: `claude-haiku-4-5`
- Sonnet 4: `claude-sonnet-4`
- Opus 4: `claude-opus-4`
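As a toy version of what a model check can do, the sketch below scans text for model-ID-shaped strings and flags any that are not on a current-alias list. Both the regex and the list are illustrative, not Anthropic's actual tooling:

```python
import re

# Illustrative current-alias list, mirroring the bullets above.
CURRENT = {"claude-haiku-4-5", "claude-sonnet-4", "claude-opus-4"}

def stale_model_refs(text: str) -> list[str]:
    """Return model-ID-shaped strings in `text` that are not current aliases."""
    found = set(re.findall(r"claude-[a-z0-9.-]+", text))
    return sorted(found - CURRENT)

print(stale_model_refs('model = "claude-3-opus-20240229"'))
# → ['claude-3-opus-20240229']
```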
Keep Notebooks Focused
Each notebook should teach one concept. If you’re building a RAG pipeline, don’t also try to cover fine-tuning and streaming in the same notebook. Keep it clean, well-commented, and linear.
Include Expected Outputs
Use Markdown cells to describe what the reader should see after running each cell. For example:
Expected output: A JSON object containing the extracted fields: `{"name": "Alice", "role": "engineer"}`
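One way to make the expected output enforceable rather than merely documented is to assert on it in the cell itself, so a drift between docs and behavior fails the notebook run (and nbconvert). In this sketch, `raw` is a stand-in for text returned by the model:

```python
import json

# `raw` stands in for the model's response text.
raw = '{"name": "Alice", "role": "engineer"}'

extracted = json.loads(raw)
# Fail the run if the shape drifts from the documented expected output.
assert set(extracted) == {"name", "role"}, f"unexpected fields: {set(extracted)}"
print(extracted)  # → {'name': 'Alice', 'role': 'engineer'}
```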
Test Your Notebook
Before committing, run your notebook end-to-end:
```bash
uv run jupyter nbconvert --to notebook \
  --execute skills/classification/guide.ipynb \
  --ExecutePreprocessor.kernel_name=python3 \
  --output test_output.ipynb
```
This will execute every cell in order and produce a new notebook with the outputs. Review the outputs to make sure they make sense.
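Since a `.ipynb` file is just JSON, you can also sanity-check the executed copy with the standard library. This sketch (the function name is illustrative) verifies that every code cell actually ran, in order:

```python
import json
from pathlib import Path

def executed_in_order(nb: dict) -> bool:
    """True if every code cell in a notebook (parsed .ipynb JSON) has an
    execution count and the counts increase monotonically, i.e. the
    notebook was executed cleanly from top to bottom."""
    counts = [cell.get("execution_count")
              for cell in nb["cells"] if cell["cell_type"] == "code"]
    if not counts or any(c is None for c in counts):
        return False
    return all(b > a for a, b in zip(counts, counts[1:]))

# Usage against the executed copy produced by nbconvert above:
# nb = json.loads(Path("test_output.ipynb").read_text(encoding="utf-8"))
# print(executed_in_order(nb))
```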
Step 5: Run Quality Checks Locally
Before you commit, run the automated checks:
```bash
# Lint and format your code
uv run ruff check skills/ --fix
uv run ruff format skills/

# Validate notebook structure and execution
uv run python scripts/validate_notebooks.py
```
If any check fails, fix the issues and run again. The pre-commit hooks will also catch common issues when you run git commit.
Step 6: Follow the Git Workflow
Anthropic uses a specific Git workflow to keep the repository organized.
Create a Feature Branch
```bash
git checkout -b <your-name>/<feature-description>
```
Example: `git checkout -b alice/add-rag-example`
Use Conventional Commits
Your commit messages should follow the Conventional Commits format:
```
<type>(<scope>): <subject>
```
Common types:
| Type | When to use |
|---|---|
| `feat` | New notebook or feature |
| `fix` | Bug fix or correction |
| `docs` | Documentation changes |
| `style` | Formatting, no logic change |
| `refactor` | Code restructuring |
| `test` | Adding or fixing tests |
| `chore` | Maintenance tasks |
| `ci` | CI/CD configuration changes |
```bash
git commit -m "feat(skills): add text-to-sql notebook"
git commit -m "fix(api): use environment variable for API key"
git commit -m "docs(readme): update installation instructions"
```
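If you want to self-check a message before committing, a rough pattern for the format above can be expressed as a regex. The type list mirrors the table in this guide; the repository's real hooks, if any, may be stricter:

```python
import re

TYPES = "feat|fix|docs|style|refactor|test|chore|ci"
# <type>(<scope>): <subject>, with the scope optional.
PATTERN = re.compile(rf"^({TYPES})(\([a-z0-9-]+\))?: \S.*$")

def is_conventional(subject: str) -> bool:
    """Rough check that a commit subject follows Conventional Commits."""
    return bool(PATTERN.match(subject))

print(is_conventional("feat(skills): add text-to-sql notebook"))  # → True
print(is_conventional("added a new notebook"))                    # → False
```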
Keep Commits Atomic
Each commit should represent one logical change. If you’re adding a notebook and fixing a typo in the README, do them in separate commits.
Step 7: Submit a Pull Request
Once your branch is ready, push it and create a Pull Request:
```bash
git push -u origin your-branch-name
gh pr create  # Or use the GitHub web interface
```
Your PR title should also follow the conventional commit format. In the description, include:
- What you changed
- Why you made the change
- Any special instructions for reviewers (e.g., “Requires API key to test”)
Common Pitfalls to Avoid
- Hardcoding secrets – Always use environment variables.
- Using deprecated models – Check the model overview page before writing your notebook.
- Forgetting to run validation – Always run `ruff` and `validate_notebooks.py` before pushing.
- Overly long notebooks – If your notebook takes more than 5 minutes to read or run, consider splitting it into multiple notebooks.
- Missing outputs – If you clear all cell outputs before committing, reviewers won’t know what to expect.
Key Takeaways
- Set up with `uv` for a fast, reproducible development environment that matches the CI pipeline.
- Use Claude Code slash commands (`/notebook-review`, `/model-check`, `/link-review`) to catch issues before pushing.
- Follow the notebook best practices: one concept per notebook, current models, environment variables for keys, and clear expected outputs.
- Run the full validation stack (`ruff`, `validate_notebooks.py`, and optionally `nbconvert --execute`) before committing.
- Use conventional commits and feature branches to keep the repository history clean and reviewable.