Claude Guide · 2026-04-20

How to Contribute to the Claude Cookbook: A Complete Developer Guide

Learn how to set up your development environment, follow quality standards, and submit successful contributions to the official Claude Cookbook repository with this step-by-step guide.

Quick Answer

This guide walks you through contributing to the Claude Cookbook, covering environment setup with uv, notebook best practices, quality checks with Claude slash commands, and the Git workflow for submitting successful pull requests.

claude-cookbook · open-source · development · contributing · notebooks


The Claude Cookbook is an invaluable resource for developers learning to work with Claude's API, featuring practical examples, tutorials, and implementation patterns. As an open-source project, it thrives on community contributions. This comprehensive guide walks you through the entire contribution process—from setting up your development environment to submitting polished pull requests that meet the repository's high standards.

Setting Up Your Development Environment

Before you start contributing, you'll need to properly configure your local development environment. The Claude Cookbook uses modern Python tooling to ensure consistency and quality across all contributions.

Prerequisites and Installation

First, ensure you have Python 3.11 or higher installed. The repository strongly recommends using uv, a fast Python package manager and resolver, though traditional pip is also supported.
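You can confirm your interpreter meets the requirement before installing anything. This is a small illustrative check (the `meets_requirement` helper is my own name for this sketch, not a cookbook utility):

```python
import sys

def meets_requirement(version_info, minimum=(3, 11)):
    """Return True if the given interpreter version satisfies the minimum."""
    return tuple(version_info[:2]) >= minimum

# Report whether the running interpreter is new enough for the cookbook
status = "OK" if meets_requirement(sys.version_info) else "too old, need 3.11+"
print(f"Running Python {sys.version_info.major}.{sys.version_info.minor}: {status}")
```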

Installing uv (Recommended):
# On macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Or with Homebrew on macOS
brew install uv

Cloning and Setting Up the Repository:
# Clone the repository
git clone https://github.com/anthropics/anthropic-cookbook.git
cd anthropic-cookbook

# Set up the development environment with uv

uv sync --all-extras

# Alternative with pip

pip install -e ".[dev]"

The --all-extras flag ensures all development dependencies are installed, including tools for testing, formatting, and validation.

Configuring API Access and Pre-commit Hooks

To test notebooks that make API calls, you'll need to configure your Anthropic API key:

# Copy the environment template
cp .env.example .env

# Edit .env and add your API key

ANTHROPIC_API_KEY=your-key-here
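Notebooks then read the key from the environment at runtime. A minimal helper along these lines fails fast with an actionable message when the key is missing (`get_api_key` is a hypothetical name for illustration, not a cookbook function):

```python
import os

def get_api_key():
    """Read the Anthropic API key from the environment; fail fast if missing."""
    key = os.environ.get("ANTHROPIC_API_KEY")
    if not key:
        raise RuntimeError(
            "ANTHROPIC_API_KEY is not set - copy .env.example to .env "
            "and add your key, or export it in your shell."
        )
    return key
```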

Next, install pre-commit hooks that automatically run quality checks before each commit:

uv run pre-commit install

# Or simply: pre-commit install

These hooks will catch common issues early, saving you time during the review process.

Understanding the Quality Standards

The Claude Cookbook maintains high quality through an automated validation stack. Understanding these standards is crucial for creating contributions that will be accepted.

The Notebook Validation Stack

The repository uses three key tools to ensure notebook quality:

  • nbconvert: Executes notebooks to verify they run without errors
  • ruff: A fast Python linter and formatter with native Jupyter support
  • Claude AI Review: Intelligent code review using Claude itself
Important Note: Unlike many repositories, the Claude Cookbook intentionally keeps notebook outputs in the repository. These outputs demonstrate expected results to users, so don't clear them when contributing.

Claude Code Slash Commands for Local Validation

One of the most powerful features for contributors is the built-in Claude slash commands. These commands work both locally in Claude Code and in GitHub Actions CI, using identical validation logic.

Available Commands:
  • /link-review [file] - Validate links in markdown and notebooks
  • /model-check - Verify Claude model usage is current
  • /notebook-review [file] - Comprehensive notebook quality check
Using Slash Commands in Development:
# Run comprehensive notebook review
/notebook-review skills/my-new-notebook.ipynb

# Check if you're using current Claude models

/model-check

# Validate links in your documentation

/link-review README.md

These commands use the exact same validation logic as the CI pipeline, helping you catch issues before pushing. The command definitions are stored in .claude/commands/ for consistency between local and CI environments.

Running Manual Quality Checks

Before committing your changes, run these manual checks:

# Format and lint Python code in notebooks
uv run ruff check skills/ --fix
uv run ruff format skills/

# Validate notebook structure

uv run python scripts/validate_notebooks.py

# Optional: Test notebook execution (requires API key)

uv run jupyter nbconvert --to notebook \
  --execute skills/your-notebook.ipynb \
  --ExecutePreprocessor.kernel_name=python3 \
  --output test_output.ipynb

The pre-commit hooks will run these checks automatically, but running them manually first helps you fix issues proactively.

Notebook Best Practices for Contributors

Creating effective cookbook notebooks requires following specific patterns and conventions. Here's what you need to know.

API Key Management and Model Usage

Always use environment variables for API keys—never hardcode them:

import os

# Correct: Using environment variables

api_key = os.environ.get("ANTHROPIC_API_KEY")

# Incorrect: Hardcoded keys

api_key = "sk-ant-..."  # Never do this!

For model selection, use current Claude models and consider aliases for better maintainability:

# Using the latest Haiku model (as of this writing)
model = "claude-haiku-4-5"  # Haiku 4.5

# Check current models at:
# https://docs.claude.com/en/docs/about-claude/models/overview

Claude will automatically validate model usage during PR reviews, but using /model-check locally helps you catch issues early.

Notebook Structure and Focus

Effective cookbook notebooks follow these principles:

  • One concept per notebook: Each notebook should demonstrate a single technique or pattern
  • Clear explanations: Use markdown cells to explain the "why" behind the code
  • Expected outputs: Include sample outputs as markdown cells to show users what to expect
  • Minimal tokens: Use small examples for API calls to conserve tokens
  • Error handling: Include basic error handling in your examples
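The error-handling point can be as simple as a small retry wrapper around an API call. This is an illustrative sketch (`call_with_retry` is a hypothetical helper, not part of the cookbook):

```python
import time

def call_with_retry(fn, retries=3, backoff=1.0):
    """Call fn(), retrying with exponential backoff on failure.

    In a notebook you might wrap an API call, e.g.:
        call_with_retry(lambda: client.messages.create(...))
    """
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # surface the final failure to the reader
            time.sleep(backoff * 2 ** attempt)
```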
Example notebook structure:
# Cell 1: Introduction and imports
import anthropic
import os

# Cell 2: Setup and configuration

client = anthropic.Anthropic(api_key=os.environ.get("ANTHROPIC_API_KEY"))

# Cell 3: Core example with explanation
# This demonstrates how to use Claude for text classification
# ...

# Cell 4: Expected output as markdown
# Expected Output:
# The model should return: "This is positive sentiment"
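A structure like the one above can also be spot-checked programmatically. This is a hypothetical sketch of such a check, operating on a notebook's list of cell dicts; it is not the logic of the repository's actual validate_notebooks.py:

```python
def check_structure(cells):
    """Return a list of structural problems found in a notebook's cells.

    cells: the "cells" list from a loaded .ipynb file (list of dicts).
    """
    problems = []
    if not cells:
        problems.append("notebook has no cells")
    elif cells[0].get("cell_type") != "markdown":
        problems.append("first cell should be a markdown introduction")
    return problems
```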

Git Workflow and Contribution Process

Following the proper Git workflow ensures your contributions integrate smoothly with the repository.

Branching and Commit Conventions

Start by creating a descriptive feature branch:

git checkout -b <your-name>/<feature-description>

# Example: git checkout -b alice/add-rag-example

Use conventional commits with the format: <type>(<scope>): <subject>

Commit types and examples:
# New feature
feat(skills): add text-to-sql notebook

# Bug fix
fix(api): use environment variable for API key

# Documentation
docs(readme): update installation instructions

# Formatting
style(notebook): fix formatting in classification example

# Code restructuring
refactor(utils): extract common functions to helpers

# Tests
test(skills): add tests for embedding notebook

# Maintenance
chore(deps): update anthropic SDK to latest version

# CI/CD changes
ci(workflows): add model-check validation

Keep commits atomic—one logical change per commit with clear, descriptive messages.
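The convention above is easy to check mechanically. Here is an illustrative validator built from the types listed in this guide (the `is_conventional` helper and its regex are my own sketch, not repository tooling):

```python
import re

# Allowed <type>(<scope>): <subject> shapes, using the commit types from this guide
COMMIT_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|test|chore|ci)"  # commit type
    r"(\([a-z0-9-]+\))?"                              # optional (scope)
    r": \S.*$"                                        # ": " then a non-empty subject
)

def is_conventional(subject):
    """Return True if a commit subject line follows the convention above."""
    return bool(COMMIT_RE.match(subject))
```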

Creating and Submitting Pull Requests

After committing your changes:

# Push your branch
git push -u origin your-branch-name

# Create a pull request (if you have GitHub CLI)

gh pr create

Pull Request Requirements:
  • Title: Use conventional commit format
  • Description: Include:
    - What changes you made
    - Why you made them (the problem they solve)
    - Any relevant context or considerations
  • Linked Issues: Reference any related issues using GitHub's linking syntax (e.g., Fixes #123)
Before submitting, ensure:
  • All quality checks pass
  • Notebooks execute without errors
  • You've used current Claude models
  • API keys are properly managed via environment variables

Testing Your Contributions

Thorough testing is essential for successful contributions. Here's how to ensure your notebooks work correctly.

Notebook Execution Testing

Test that your notebook runs from top to bottom without errors:

# Simple test script to verify notebook execution
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor
import os

# Load the notebook
with open("skills/your-notebook.ipynb") as f:
    nb = nbformat.read(f, as_version=4)

# Execute it
ep = ExecutePreprocessor(timeout=600, kernel_name="python3")
ep.preprocess(nb, {"metadata": {"path": "skills/"}})

# Save the executed notebook
with open("executed-notebook.ipynb", "w", encoding="utf-8") as f:
    nbformat.write(nb, f)

Minimal Example Testing

When testing API calls, use minimal examples to conserve tokens:

# Instead of long, complex examples
response = client.messages.create(
    model="claude-haiku-4-5",
    max_tokens=1000,
    messages=[{"role": "user", "content": "A very long prompt..."}]
)

# Use focused, minimal examples
response = client.messages.create(
    model="claude-haiku-4-5",
    max_tokens=100,
    messages=[{"role": "user", "content": "Classify: 'I love this product!'"}],
)

Troubleshooting Common Issues

Even experienced contributors encounter issues. Here are solutions to common problems.

Pre-commit Hook Failures

If pre-commit hooks fail:

# Run the specific check that failed
uv run ruff check skills/your-notebook.ipynb --fix

# Or run all hooks manually

uv run pre-commit run --all-files

Notebook Validation Errors

For notebook validation issues:

# Use the built-in slash command for comprehensive review
/notebook-review skills/problem-notebook.ipynb

# Or run the validation script directly

uv run python scripts/validate_notebooks.py skills/problem-notebook.ipynb

Model Usage Warnings

If you get warnings about outdated models:

  • Check the current models at https://docs.claude.com/en/docs/about-claude/models/overview
  • Update your notebook to use the latest model versions
  • Run /model-check to verify your changes
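While /model-check is the supported way to verify model usage, you can also scan a notebook yourself for hard-coded model IDs before updating them. This is an illustrative sketch (`find_model_ids` is a hypothetical helper, not cookbook tooling):

```python
import json
import re

def find_model_ids(notebook_path):
    """Return sorted Claude model IDs found in a notebook's code cells."""
    with open(notebook_path) as f:
        nb = json.load(f)
    pattern = re.compile(r"claude-[a-z0-9.-]+")
    found = set()
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            found.update(pattern.findall("".join(cell.get("source", []))))
    return sorted(found)
```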

Key Takeaways

  • Use the recommended toolchain: Set up your environment with uv and pre-commit hooks to catch issues early and ensure consistency with the repository standards.
  • Leverage Claude's validation tools: The built-in slash commands (/notebook-review, /model-check, /link-review) provide the same validation as the CI pipeline, helping you submit error-free contributions.
  • Follow notebook best practices: Keep notebooks focused on single concepts, use environment variables for API keys, employ current Claude models, and include clear explanations with expected outputs.
  • Adhere to Git conventions: Use conventional commits, create descriptive branches, and maintain atomic commits with clear messages to make the review process smoother.
  • Test thoroughly before submitting: Ensure notebooks execute completely, use minimal tokens for API examples, and verify all quality checks pass locally before creating your pull request.
By following this guide, you'll be well-equipped to contribute valuable examples and tutorials to the Claude Cookbook, helping the entire community learn and build with Claude's AI capabilities.