How to Contribute to the Anthropic Cookbook: A Developer's Guide
This guide walks you through setting up the Anthropic Cookbook development environment, using Claude Code slash commands for automated quality checks, and submitting pull requests that meet Anthropic's standards for notebook quality, model usage, and code formatting.
The Anthropic Cookbook is the official repository of Jupyter notebooks demonstrating best practices for building with Claude. Whether you're adding a new skill, fixing a bug, or improving documentation, this guide will help you contribute effectively.
By the end of this article, you'll know how to set up your local environment, run the same quality checks used in CI, and submit a pull request that meets Anthropic's standards.
Prerequisites
Before you start, make sure you have:
- Python 3.11 or higher installed
- A Claude API key (you can get one from the Anthropic Console)
- Basic familiarity with Git and Jupyter notebooks
Setting Up Your Development Environment
Anthropic recommends using uv, a fast Python package manager, for dependency management. Here's how to get started.
Step 1: Install uv
# On macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
# Or with Homebrew
brew install uv
Step 2: Clone the Repository
git clone https://github.com/anthropics/anthropic-cookbook.git
cd anthropic-cookbook
Step 3: Create a Virtual Environment and Install Dependencies
uv sync --all-extras
If you prefer pip:
pip install -e ".[dev]"
Step 4: Install Pre-commit Hooks
Pre-commit hooks automatically check your code before each commit, catching issues early.
uv run pre-commit install
Step 5: Configure Your API Key
cp .env.example .env
# Edit .env and add your ANTHROPIC_API_KEY
Understanding the Quality Standards
The Cookbook repository uses a multi-layered validation stack to ensure every notebook is correct, readable, and maintainable.
The Notebook Validation Stack
- nbconvert: Executes notebooks end-to-end to verify they run without errors
- ruff: A lightning-fast Python linter and formatter with native Jupyter support
- Claude AI Review: Automated code review using Claude itself
Note: Notebook outputs are intentionally kept in the repository. They serve as expected results for users who want to verify their own runs.
Using Claude Code Slash Commands
One of the most powerful features of this repository is its integration with Claude Code. The repository includes slash commands that work both locally (in Claude Code) and in GitHub Actions CI.
Available Commands
| Command | Purpose |
|---|---|
| `/link-review` | Validate all links in markdown and notebooks |
| `/model-check` | Verify that Claude model references are current |
| `/notebook-review` | Comprehensive notebook quality check |
Running Commands Locally
# Check a specific notebook
/notebook-review skills/my-notebook.ipynb
# Verify model usage across the repo
/model-check
# Validate links in a README
/link-review README.md
These commands use the exact same validation logic as the CI pipeline, so you can catch issues before pushing. The command definitions live in .claude/commands/.
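To illustrate what a link check involves, here is a rough, stdlib-only sketch of extracting markdown-style links, the first step a validator like `/link-review` needs. This is a simplified illustration only; the actual command logic lives in `.claude/commands/` and is more thorough.

```python
import re

# Matches markdown links of the form [text](url). A rough approximation of
# the extraction step a link validator performs before checking each URL.
LINK_PATTERN = re.compile(r"\[([^\]]+)\]\((https?://[^)\s]+)\)")

def extract_links(markdown_text: str) -> list[str]:
    """Return all http(s) URLs found in markdown link syntax."""
    return [url for _text, url in LINK_PATTERN.findall(markdown_text)]

sample = "See the [Console](https://console.anthropic.com) and [docs](https://docs.anthropic.com)."
print(extract_links(sample))
```

A real validator would then issue an HTTP request per URL and report broken ones; the regex above also deliberately ignores relative links, which need repository-path checks instead.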
Before You Commit: Running Quality Checks
Always run these checks before committing your changes.
1. Lint and Format Your Code
uv run ruff check skills/ --fix
uv run ruff format skills/
2. Validate Notebook Structure
uv run python scripts/validate_notebooks.py
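The exact checks are defined in `scripts/validate_notebooks.py`; as a hedged sketch, structural validation of a `.ipynb` file (which is just JSON) tends to look something like the following. The specific rules shown here are illustrative assumptions, not the script's actual rule set.

```python
def validate_notebook(nb: dict) -> list[str]:
    """Return a list of structural problems found in a parsed .ipynb dict.

    Illustrative only: the repository's validate_notebooks.py may apply
    different or stricter rules.
    """
    problems = []
    if nb.get("nbformat", 0) < 4:
        problems.append("nbformat should be 4 or higher")
    cells = nb.get("cells", [])
    if not cells:
        problems.append("notebook has no cells")
    elif cells[0].get("cell_type") != "markdown":
        problems.append("first cell should be a markdown introduction")
    return problems

# A tiny well-formed notebook for demonstration
nb = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "cells": [
        {"cell_type": "markdown", "source": ["# My Skill"], "metadata": {}},
        {"cell_type": "code", "source": ["print('hi')"], "metadata": {},
         "outputs": [], "execution_count": None},
    ],
}
print(validate_notebook(nb))  # -> []
```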
3. (Optional) Execute the Notebook
If you have your API key set up, you can run the notebook end-to-end:
uv run jupyter nbconvert --to notebook \
    --execute skills/classification/guide.ipynb \
    --ExecutePreprocessor.kernel_name=python3 \
    --output test_output.ipynb
Notebook Best Practices
To ensure your contribution is accepted quickly, follow these guidelines.
Use Environment Variables for API Keys
Never hardcode API keys. Use environment variables instead:
import os
import anthropic
client = anthropic.Anthropic(
    api_key=os.environ.get("ANTHROPIC_API_KEY")
)
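A small refinement on the pattern above: `os.environ.get` silently returns `None` when the key is missing, which surfaces later as a confusing authentication error. A minimal sketch of a fail-fast helper (a hypothetical function, not part of the anthropic SDK):

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, raising a clear error
    if it is unset or empty. Hypothetical helper for notebook examples."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set. Copy .env.example to .env and add your key."
        )
    return value

# Usage (assumed, mirroring the client setup above):
# client = anthropic.Anthropic(api_key=require_env("ANTHROPIC_API_KEY"))
```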
Use Current Claude Models
Always reference the latest model aliases. As of this writing:
- Haiku: `claude-haiku-4-5` (Haiku 4.5)
- Sonnet: check the models overview page for the latest

Using model aliases rather than pinned snapshot IDs (e.g., `claude-sonnet-4-20250514`) improves maintainability. Claude will automatically validate model usage during PR review.
Keep Notebooks Focused
- One concept per notebook — don't cram multiple techniques into a single file
- Clear explanations — use markdown cells to explain what each code cell does
- Include expected outputs — show users what they should see after running each cell
Test Thoroughly
- Ensure the notebook runs from top to bottom without errors
- Use minimal tokens for example API calls to keep costs low
- Include error handling (e.g., try/except blocks for API calls)
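For the error-handling point, a generic retry-with-backoff wrapper is one common pattern for API calls in notebooks. This is a sketch using only the standard library; the function name and defaults are illustrative, and the anthropic SDK may also provide its own retry configuration, so check its documentation before rolling your own.

```python
import time

def call_with_retries(fn, max_attempts: int = 3, base_delay: float = 1.0):
    """Call fn(), retrying on any exception with exponential backoff.

    Illustrative pattern for wrapping API calls; tune max_attempts and
    base_delay to keep notebook runs fast and cheap.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception as exc:
            if attempt == max_attempts:
                raise
            delay = base_delay * 2 ** (attempt - 1)
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)

# Usage (assumed client from the earlier setup):
# response = call_with_retries(lambda: client.messages.create(...))
```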
Git Workflow and Commit Conventions
Branch Naming
Create a feature branch with a descriptive name:
git checkout -b <your-name>/<feature-description>
Example: git checkout -b alice/add-rag-example
Conventional Commits
Use the Conventional Commits format:
<type>(<scope>): <subject>
Common types:
| Type | When to Use |
|---|---|
| `feat` | New notebook or feature |
| `fix` | Bug fix |
| `docs` | Documentation changes |
| `style` | Code formatting |
| `refactor` | Code restructuring |
| `test` | Adding or fixing tests |
| `chore` | Maintenance tasks |
| `ci` | CI/CD changes |
git commit -m "feat(skills): add text-to-sql notebook"
git commit -m "fix(api): use environment variable for API key"
git commit -m "docs(readme): update installation instructions"
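If you want to sanity-check your messages before committing, the `<type>(<scope>): <subject>` shape is easy to approximate with a regex. A rough sketch (the type list mirrors the table above; the Conventional Commits spec allows more, such as `!` for breaking changes, which this simplified pattern ignores):

```python
import re

# Approximate Conventional Commits check: <type>(<scope>): <subject>,
# with the scope optional. Simplified; see the full spec for edge cases.
COMMIT_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|test|chore|ci)(\([a-z0-9-]+\))?: .+"
)

def is_conventional(message: str) -> bool:
    return COMMIT_RE.match(message) is not None

print(is_conventional("feat(skills): add text-to-sql notebook"))  # True
print(is_conventional("added some stuff"))                        # False
```

A check like this could also run as a local commit-msg hook, complementing the pre-commit hooks installed earlier.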
Keep Commits Atomic
Each commit should represent a single logical change. Write clear, descriptive messages and reference issues when applicable.
Creating a Pull Request
Step 1: Push Your Branch
git push -u origin your-branch-name
Step 2: Open a PR
Use the GitHub web interface or the gh CLI:
gh pr create
Step 3: Write a Good PR Description
Include:
- What changes you made
- Why you made them
- How to test the changes
- Screenshots or expected outputs (if applicable)
Step 4: Respond to CI Feedback
The CI pipeline will run the same slash commands you used locally. If any checks fail, fix the issues and push new commits to your branch.
Key Takeaways
- Use `uv` for dependency management — it's faster and matches the repository's recommended setup
- Run Claude Code slash commands locally (`/notebook-review`, `/model-check`, `/link-review`) to catch CI failures before pushing
- Follow Conventional Commits and keep commits atomic for cleaner PR reviews
- Always use environment variables for API keys and reference current Claude model aliases
- Test your notebook end-to-end with `nbconvert --execute` to ensure it runs without errors