Getting Started with Claude API: A Practical Guide for Developers
This guide walks you through setting up the Claude API, making your first API call, understanding the Messages API structure, choosing the right model, and exploring key capabilities like extended thinking, tool use, and structured outputs.
Claude by Anthropic is a powerful family of large language models designed for text generation, code creation, vision processing, and complex reasoning tasks. Whether you're building a custom chatbot, an AI-powered coding assistant, or an enterprise workflow automation, the Claude API gives you direct access to these frontier models.
This guide will take you from zero to a working Claude integration. You'll learn how to set up your environment, make your first API call, understand the core Messages API, choose the right model for your use case, and explore Claude's standout features.
Prerequisites
Before you start, you'll need:
- An Anthropic Console account
- An API key (generated in the Console under API Keys)
- Python 3.8+ or Node.js 18+ installed on your machine
- Basic familiarity with REST APIs and JSON
Step 1: Make Your First API Call
Let's start by installing the official Anthropic SDK and sending your first message to Claude.
Install the SDK
Python:
pip install anthropic
TypeScript/JavaScript:
npm install @anthropic-ai/sdk
Set Your API Key
Store your API key as an environment variable for security:
export ANTHROPIC_API_KEY="your-api-key-here"
Send Your First Message
Here's a minimal example that asks Claude to explain itself:
Python:
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude! What can you do?"}
    ]
)
print(message.content[0].text)
TypeScript:
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic();

async function main() {
  const message = await client.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    messages: [
      { role: 'user', content: 'Hello, Claude! What can you do?' }
    ]
  });
  console.log(message.content[0].text);
}

main();
If everything is set up correctly, you'll see Claude's friendly response printed to your console.
Step 2: Understand the Messages API
The Messages API is the primary way to interact with Claude programmatically. It supports multi-turn conversations, system prompts, and various content types.
Core Request Structure
Every API call requires:
- model: The model identifier (e.g., claude-sonnet-4-20250514)
- max_tokens: Maximum number of tokens in the response
- messages: An array of message objects, each with a role ("user" or "assistant") and content

Optional parameters include:
- system: A system prompt to set Claude's behavior and context
- temperature: Controls randomness (0.0 to 1.0)
- stop_sequences: Custom strings that stop generation
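To keep call sites tidy, the required and optional parameters can be assembled by a small helper. The build_request function below is a hypothetical convenience wrapper, not part of the Anthropic SDK:

```python
def build_request(prompt, system=None, temperature=1.0,
                  stop_sequences=None,
                  model="claude-sonnet-4-20250514", max_tokens=512):
    """Assemble keyword arguments for client.messages.create()."""
    kwargs = {
        "model": model,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }
    # Only include the optional parameters when they are actually set.
    if system is not None:
        kwargs["system"] = system
    if stop_sequences:
        kwargs["stop_sequences"] = stop_sequences
    return kwargs

# Usage: client.messages.create(**build_request("Hi", system="Be terse.", temperature=0.2))
```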
Multi-Turn Conversations
To continue a conversation, include the full message history:
messages = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "What is its population?"}
]

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=256,
    messages=messages
)
System Prompts
System prompts are a powerful way to define Claude's persona and constraints:
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    system="You are a helpful coding assistant. Always provide code examples in Python. Be concise.",
    messages=[
        {"role": "user", "content": "How do I read a CSV file?"}
    ]
)
Handling Stop Reasons
Every response includes a stop_reason field. Common values:
"end_turn": Claude finished naturally"max_tokens": Response was cut off because you hit the token limit"stop_sequence": A custom stop sequence was encountered"tool_use": Claude wants to call a tool (more on this later)
stop_reason to handle incomplete responses gracefully.
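For example, a response truncated at the token limit can be detected and continued in a follow-up turn. needs_continuation is a hypothetical helper sketched here for illustration:

```python
def needs_continuation(stop_reason):
    """Return True when the model hit max_tokens and was cut off,
    so a follow-up request is needed to finish the answer."""
    return stop_reason == "max_tokens"

# Usage sketch against a real response object:
# if needs_continuation(response.stop_reason):
#     messages.append({"role": "assistant", "content": response.content[0].text})
#     messages.append({"role": "user", "content": "Please continue."})
```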
Step 3: Choose the Right Model
Anthropic offers several Claude models optimized for different use cases:
| Model | Best For | Speed | Cost |
|---|---|---|---|
| Claude Opus 4.7 | Complex reasoning, agentic coding, research | Moderate | Highest |
| Claude Sonnet 4.6 | General coding, agents, enterprise workflows | Fast | Balanced |
| Claude Haiku 4.5 | High-throughput, real-time applications | Fastest | Lowest |
- Start with Claude Sonnet 4.6 for most applications—it offers the best balance of intelligence and speed.
- Use Claude Opus 4.7 when you need deep reasoning, complex math, or multi-step agentic tasks.
- Choose Claude Haiku 4.5 for high-volume, low-latency use cases like content moderation or simple Q&A.
Step 4: Explore Key Features
Claude's API supports several advanced capabilities that can transform your application.
Extended Thinking
For complex problems, enable extended thinking to let Claude reason step-by-step before responding:
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    thinking={"type": "enabled", "budget_tokens": 2048},
    messages=[
        {"role": "user", "content": "Solve this: If a train leaves Station A at 60 mph and another leaves Station B at 80 mph, 200 miles apart, when do they meet?"}
    ]
)
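With extended thinking enabled, response.content can contain both "thinking" and "text" blocks. The final_text helper below is a hypothetical sketch that keeps only the visible answer; blocks are shown as plain dicts for illustration, while real SDK responses expose objects with .type and .text attributes:

```python
def final_text(content_blocks):
    """Concatenate only the 'text' blocks, skipping 'thinking' blocks."""
    return "".join(b["text"] for b in content_blocks if b["type"] == "text")
```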
Tool Use (Function Calling)
Claude can call external tools and APIs. Define tools in your request:
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"}
            },
            "required": ["location"]
        }
    }
]

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": "What's the weather in Tokyo?"}
    ]
)
When Claude decides to use a tool, the response will contain a tool_use content block with the tool name and arguments. You execute the tool, then return the result in a subsequent message.
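The round trip can be sketched as follows. build_tool_result_message is a hypothetical helper, and get_local_weather stands in for a real weather API:

```python
def build_tool_result_message(tool_use_id, result_text):
    """Format a tool result as the Messages API expects it: a user-role
    message containing a tool_result content block that echoes the id
    from Claude's tool_use block."""
    return {
        "role": "user",
        "content": [{
            "type": "tool_result",
            "tool_use_id": tool_use_id,
            "content": result_text,
        }],
    }

# Usage sketch against a real response:
# for block in response.content:
#     if block.type == "tool_use" and block.name == "get_weather":
#         result = get_local_weather(block.input["location"])  # your own code
#         messages.append({"role": "assistant", "content": response.content})
#         messages.append(build_tool_result_message(block.id, result))
```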
Structured Outputs
For reliable parsing, request structured JSON responses:
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    system="Always respond in valid JSON with keys: name, age, occupation",
    messages=[
        {"role": "user", "content": "Extract info: John is 32 and works as an engineer."}
    ]
)
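Because a system prompt requests JSON but does not guarantee it, validate the reply before using it. parse_person is a hypothetical validator matching the keys asked for above:

```python
import json

def parse_person(raw_text):
    """Parse the model's JSON reply and verify the expected keys exist."""
    data = json.loads(raw_text)  # raises json.JSONDecodeError on invalid JSON
    missing = {"name", "age", "occupation"} - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data

# Usage: person = parse_person(response.content[0].text)
```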
Vision (Image Processing)
Claude can analyze images. Pass image data as base64 or via URL:
import base64

with open("chart.png", "rb") as f:
    image_data = base64.b64encode(f.read()).decode("utf-8")

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What does this chart show?"},
                {"type": "image", "source": {"type": "base64", "media_type": "image/png", "data": image_data}}
            ]
        }
    ]
)
Streaming Responses
For real-time applications, stream tokens as they're generated:
stream = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a short poem about AI."}],
    stream=True
)

for event in stream:
    if event.type == "content_block_delta":
        print(event.delta.text, end="", flush=True)
Step 5: Best Practices
Prompt Engineering
- Be specific and clear in your instructions
- Use system prompts to set behavior, not user messages
- Provide examples (few-shot prompting) for complex tasks
- Break multi-step tasks into smaller, sequential calls
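Few-shot prompting from the list above can be expressed directly in the messages array: worked examples as alternating user/assistant turns, then the real task last. The sentiment-labeling task here is illustrative:

```python
# Two worked examples teach the output format before the real request.
few_shot_messages = [
    {"role": "user", "content": "Review: 'Great product!' Sentiment:"},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "Review: 'Broke after a day.' Sentiment:"},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "Review: 'Exceeded my expectations.' Sentiment:"},
]

# Usage: client.messages.create(model="claude-sonnet-4-20250514",
#                               max_tokens=5, messages=few_shot_messages)
```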
Error Handling
Always handle API errors gracefully:
try:
    response = client.messages.create(...)
except anthropic.RateLimitError as e:
    print(f"Rate limited. Retry after {e.response.headers.get('retry-after')}")
except anthropic.APIConnectionError as e:
    print(f"Connection error: {e}")
except anthropic.APIError as e:
    print(f"API error: {e}")

Note that the specific exceptions are caught before anthropic.APIError, their base class; catching the base class first would shadow the others.
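Rate-limit errors are usually transient, so a common pattern is to retry with exponential backoff. with_retries below is a hypothetical generic helper, shown with the exception class passed in so it works with any SDK:

```python
import random
import time

def with_retries(call, retry_on, max_attempts=4, base_delay=1.0):
    """Call a zero-argument function, retrying with exponential backoff
    plus jitter when one of the retry_on exceptions is raised."""
    for attempt in range(max_attempts):
        try:
            return call()
        except retry_on:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            # Delay doubles each attempt; jitter avoids synchronized retries.
            time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay)

# Usage sketch:
# with_retries(lambda: client.messages.create(...),
#              retry_on=(anthropic.RateLimitError,))
```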
Cost Management
- Use max_tokens to cap response length
- Cache frequent system prompts with prompt caching (available in the API)
- Choose Haiku for high-volume, simple tasks
- Monitor usage in the Anthropic Console
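Each response reports token usage, which you can turn into a rough cost figure. The per-million-token prices below are placeholders, not current rates; check Anthropic's pricing page before relying on them:

```python
def estimate_cost(input_tokens, output_tokens,
                  input_price_per_m=3.0, output_price_per_m=15.0):
    """Estimated USD cost of one call from its token usage counts.
    The default prices are placeholders for illustration only."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Usage against a real response (the SDK reports usage on each response):
# estimate_cost(response.usage.input_tokens, response.usage.output_tokens)
```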
Next Steps
Now that you have a solid foundation, here's what to explore next:
- Claude Cookbook: Interactive Jupyter notebooks covering PDFs, embeddings, and more
- API Reference: Full documentation for all endpoints and SDK methods
- Anthropic Console: Prototype prompts with the Workbench and prompt generator
- Use Cases: Explore release notes and community examples for inspiration
Key Takeaways
- Start simple: Make your first API call with the SDK, then gradually add features like system prompts and multi-turn conversations.
- Choose the right model: Sonnet for balance, Opus for deep reasoning, Haiku for speed and cost efficiency.
- Leverage advanced features: Extended thinking, tool use, structured outputs, and vision can dramatically expand what your application can do.
- Handle errors and costs: Implement proper error handling, set token limits, and monitor usage to build robust, cost-effective applications.
- Explore the ecosystem: The Anthropic Console, Cookbook, and community resources are invaluable for accelerating development.