Getting Started with the Claude API: A Practical Guide for Developers
This guide walks you through setting up the Claude API, making your first API call with the Messages API, choosing the right model for your use case, and exploring key features like vision, tools, and structured outputs.
Claude by Anthropic offers developers two powerful paths for building AI-powered applications: the Messages API for direct model access and fine-grained control, and Claude Managed Agents for long-running, asynchronous tasks. This guide focuses on the Messages API—the recommended starting point for most developers.
Whether you're building a chatbot, a code assistant, a document analyzer, or an agentic workflow, this guide will take you from zero to a working Claude integration.
Prerequisites
Before you begin, make sure you have:
- An Anthropic account and API key (get one from the Anthropic Console)
- Python 3.8+ or Node.js 16+ installed
- Basic familiarity with REST APIs and JSON
Step 1: Make Your First API Call
Let's start by setting up your environment and sending your first message to Claude.
Install the SDK
Python:

```bash
pip install anthropic
```

TypeScript/JavaScript:

```bash
npm install @anthropic-ai/sdk
```
Send Your First Message
Python example:

```python
import anthropic

client = anthropic.Anthropic(api_key="YOUR_API_KEY")

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude! What can you do?"}
    ]
)
print(message.content[0].text)
```
TypeScript example:

```typescript
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({ apiKey: 'YOUR_API_KEY' });

async function main() {
  const message = await client.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    messages: [
      { role: 'user', content: 'Hello, Claude! What can you do?' }
    ]
  });
  console.log(message.content[0].text);
}

main();
```
Note: Replace `YOUR_API_KEY` with your actual API key. Never hardcode keys in production; load them from environment variables instead.
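A minimal sketch of the environment-variable approach is shown below; `ANTHROPIC_API_KEY` is the variable name the official SDKs read by default.

```python
import os

def load_api_key(var: str = "ANTHROPIC_API_KEY") -> str:
    """Return the API key from the environment, failing fast if it is unset."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set the {var} environment variable before running")
    return key
```

In practice you can skip even this helper: constructing `anthropic.Anthropic()` with no arguments reads `ANTHROPIC_API_KEY` from the environment automatically.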
Step 2: Understand the Messages API
The Messages API is the core interface for interacting with Claude. Here's what you need to know:
Request Structure
A basic request includes:
- `model`: The Claude model identifier (e.g., `claude-sonnet-4-20250514`)
- `messages`: An array of message objects, each with a `role` (`user` or `assistant`) and `content`
- `max_tokens`: The maximum number of tokens to generate in the response
- `system` (optional): A system prompt to set Claude's behavior
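For example, a request using the optional system prompt might look like the sketch below (the request is shown as a plain dict for clarity; `client` is the `anthropic.Anthropic` instance from Step 1, and the prompt text is illustrative):

```python
# Hypothetical request demonstrating the optional `system` parameter, which
# shapes Claude's behavior for the whole conversation.
request = {
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 512,
    "system": "You are a concise technical assistant. Answer in two sentences or fewer.",
    "messages": [
        {"role": "user", "content": "What is a REST API?"}
    ],
}
# response = client.messages.create(**request)
```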
Multi-Turn Conversations
To maintain a conversation, include the full message history:
```python
messages = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "What is its population?"}
]

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=256,
    messages=messages
)
```
Stop Reasons
When Claude finishes generating, the response includes a `stop_reason` field. Common values:

- `"end_turn"`: Claude finished naturally
- `"max_tokens"`: The response was cut off because it reached `max_tokens`
- `"stop_sequence"`: Claude encountered a custom stop sequence you provided
- `"tool_use"`: Claude wants to call a tool (see Step 4)
Step 3: Choose the Right Model
Claude offers several models optimized for different use cases. Here's a quick comparison:
| Model | Best For | Notes |
|---|---|---|
| Claude Opus 4.7 | Complex reasoning, agentic coding | Most capable, step-change over Opus 4.6 |
| Claude Sonnet 4.6 | Coding, agents, enterprise workflows | Frontier intelligence at scale |
| Claude Haiku 4.5 | Fast, near-frontier intelligence | Fastest model, great for real-time apps |
Step 4: Explore Key Features
Claude's API supports a rich set of features. Here are the most impactful ones:
Extended Thinking
Enable Claude to "think through" complex problems before responding:
```python
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=2048,
    thinking={"type": "enabled", "budget_tokens": 1024},
    messages=[
        {"role": "user", "content": "Solve this step by step: 15 * 24 + 7"}
    ]
)
```
Vision (Image Processing)
Claude can analyze images and generate text from visual input:
```python
import base64

with open("chart.png", "rb") as f:
    image_data = base64.b64encode(f.read()).decode("utf-8")

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What does this chart show?"},
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/png",
                        "data": image_data
                    }
                }
            ]
        }
    ]
)
```
Structured Outputs
Get Claude to return structured JSON instead of free text:
```python
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Extract the name, age, and city from: John is 28 and lives in Seattle."}
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "person_info",
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"},
                    "city": {"type": "string"}
                },
                "required": ["name", "age", "city"]
            }
        }
    }
)
```
Tool Use (Function Calling)
Give Claude the ability to call external functions or APIs:
```python
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "units": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["city"]
        }
    }
]

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools
)
```
When Claude decides to use a tool, the response will have a `tool_use` stop reason, and `response.content` will include one or more `tool_use` blocks carrying the tool name and input. You then execute the function and return the result in a `tool_result` block.
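One way to close the loop is sketched below, with content blocks written as plain dicts for clarity (the Python SDK returns typed objects with the same field names); `run_tool` stands in for your own dispatcher that actually calls the weather service.

```python
def build_tool_result_message(content_blocks, run_tool):
    """Execute each tool_use block and package the outputs as a user message.

    `content_blocks` mirrors the assistant response's content list;
    `run_tool(name, tool_input)` is your own function dispatcher.
    """
    results = []
    for block in content_blocks:
        if block["type"] == "tool_use":
            output = run_tool(block["name"], block["input"])
            results.append({
                "type": "tool_result",
                "tool_use_id": block["id"],   # ties the result to the request
                "content": str(output),
            })
    return {"role": "user", "content": results}
```

Append the returned message to the conversation history and call `client.messages.create` again with the same `tools` list; Claude then produces a final answer grounded in the tool output.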
Streaming Responses
For real-time applications, stream responses token by token:
```python
with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Tell me a short story."}]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```
Step 5: Best Practices
- Use system prompts to set Claude's personality and constraints
- Implement retry logic with exponential backoff for API errors
- Cache frequent prompts using Prompt Caching to reduce costs and latency
- Monitor token usage to avoid unexpected bills
- Handle stop reasons gracefully, especially `max_tokens` and `tool_use`
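The retry advice above can be sketched as a small wrapper. Exception handling is simplified here; in production you would catch the SDK's specific transient errors (such as rate-limit and overload responses) rather than every exception.

```python
import random
import time

def with_backoff(call, max_retries: int = 5, base_delay: float = 1.0):
    """Retry `call` on failure with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            # Double the delay each attempt; jitter avoids synchronized retries.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```

Usage: `with_backoff(lambda: client.messages.create(...))`.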
Next Steps
Now that you have a working Claude integration, explore these resources:
- Messages API Reference
- Claude Cookbook — interactive Jupyter notebooks
- Anthropic Console — prototype prompts with the Workbench
Key Takeaways
- Start with the Messages API for direct, fine-grained control over Claude's behavior
- Choose your model wisely: Opus for reasoning, Sonnet for balance, Haiku for speed
- Leverage structured outputs and tool use to build reliable, production-ready applications
- Stream responses for better user experience in real-time apps
- Always handle stop reasons to build robust conversational flows