Getting Started with Claude API: A Practical Guide for Developers
This guide walks you through setting up the Claude API, making your first call with the Messages API, choosing the right model, and exploring key features like extended thinking, tool use, and structured outputs.
Claude from Anthropic is one of the most capable AI models available today. Whether you're building a chatbot, an agentic coding assistant, or an enterprise workflow, the Claude API gives you direct access to frontier intelligence. This guide covers everything you need to go from zero to a working integration.
What You'll Learn
- How to set up your environment and make your first API call
- The structure of the Messages API for single and multi-turn conversations
- How to choose the right Claude model for your use case
- Key features: extended thinking, tool use, structured outputs, and more
1. Make Your First API Call
Before you start, you'll need an Anthropic API key. Sign up at console.anthropic.com and generate a key.
Install the SDK
Anthropic provides official SDKs for Python and TypeScript. Install the one for your language:
Python:
pip install anthropic
TypeScript/JavaScript:
npm install @anthropic-ai/sdk
Send Your First Message
Here's a minimal example in Python:
import anthropic

client = anthropic.Anthropic(api_key="your-api-key")  # or omit api_key and set ANTHROPIC_API_KEY in your environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude!"}
    ]
)

print(response.content[0].text)
TypeScript equivalent:
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({ apiKey: 'your-api-key' });

async function main() {
  const response = await client.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    messages: [{ role: 'user', content: 'Hello, Claude!' }]
  });
  console.log(response.content[0].text);
}

main();
2. Understand the Messages API
The Messages API is the core interface for interacting with Claude. Every request sends an array of messages, and Claude responds with one or more content blocks.
Request Structure
A basic request includes:
- model: The model identifier (e.g., claude-sonnet-4-20250514)
- max_tokens: The maximum number of tokens to generate in the response
- messages: An array of message objects, each with a role and content
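Under the hood, these fields travel as a JSON body to the REST endpoint. Here's a sketch using only the standard library, so you can see exactly what the SDK sends for you (the endpoint URL and anthropic-version header follow Anthropic's public REST documentation; the request is built but not sent):

```python
import json
import urllib.request

# The JSON body mirrors the three required fields described above.
body = {
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello, Claude!"}],
}

def build_request(api_key: str) -> urllib.request.Request:
    """Build (but do not send) the raw HTTP request the SDK makes for you."""
    return urllib.request.Request(
        "https://api.anthropic.com/v1/messages",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        method="POST",
    )

req = build_request("your-api-key")
```

In practice you should prefer the SDK, which adds retries, typed responses, and streaming on top of this raw call.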
Multi-Turn Conversations
To continue a conversation, include the full message history:
messages = [
    {"role": "user", "content": "What's the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "What is its population?"}
]

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=messages
)
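This history-passing pattern is easy to wrap in a helper for an interactive loop. A sketch (`chat_turn` and `send` are names of my own, not SDK functions; in real use `send` would wrap client.messages.create):

```python
def chat_turn(history, user_text, send):
    """Append a user message, get a reply via `send`, and record it.

    `send` is any callable taking the message list and returning the
    assistant's text -- keeping it injectable makes the loop testable.
    """
    history.append({"role": "user", "content": user_text})
    reply = send(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# In real use:
# def send(history):
#     r = client.messages.create(model="claude-sonnet-4-20250514",
#                                max_tokens=1024, messages=history)
#     return r.content[0].text
```

Because the full history is resent on every turn, long conversations grow in input-token cost; trimming or summarizing old turns is a common mitigation.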
System Prompts
Use system prompts to set Claude's behavior:
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    system="You are a helpful assistant that speaks like a pirate.",
    messages=[{"role": "user", "content": "Tell me about the weather."}]
)
Stop Reasons
Every response includes a stop_reason field. Common values:
- "end_turn": Claude finished naturally
- "max_tokens": The response was cut off; increase max_tokens or continue the conversation
- "tool_use": Claude wants to call a tool (see the Tool Use section)
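A minimal dispatcher over these values might look like the following (a sketch; the returned action strings are my own, not API values):

```python
def handle_stop_reason(response):
    """Map stop_reason to an application-level action string."""
    if response.stop_reason == "end_turn":
        return "done"
    if response.stop_reason == "max_tokens":
        # The reply was truncated: raise max_tokens or ask Claude to continue.
        return "truncated"
    if response.stop_reason == "tool_use":
        return "run_tool"
    # New stop reasons may be added over time; fail soft.
    return "unknown"
```

Branching on stop_reason up front keeps truncated replies and pending tool calls from being mistaken for finished answers.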
3. Choose the Right Model
Anthropic offers several Claude models optimized for different needs:
| Model | Best For | Speed | Cost |
|---|---|---|---|
| Claude Opus 4.7 | Complex reasoning, agentic coding | Moderate | Highest |
| Claude Sonnet 4.6 | Coding, agents, enterprise workflows | Fast | Medium |
| Claude Haiku 4.5 | High-throughput, simple tasks | Fastest | Lowest |
4. Explore Key Features
Extended Thinking
Claude can show its reasoning process before giving a final answer. Enable it with the thinking parameter:
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=2048,
    thinking={"type": "enabled", "budget_tokens": 1024},
    messages=[{"role": "user", "content": "Solve this math problem step by step: 15 * 23 + 7"}]
)
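When thinking is enabled, response.content contains one or more "thinking" blocks ahead of the final "text" block, so reading content[0].text no longer works. A small helper separates the two (a sketch; the block shapes follow the documented response format, stubbed here for testability):

```python
def split_thinking(content_blocks):
    """Separate reasoning ("thinking" blocks) from the final answer ("text" blocks)."""
    thinking, answer = [], []
    for block in content_blocks:
        if block.type == "thinking":
            thinking.append(block.thinking)
        elif block.type == "text":
            answer.append(block.text)
    return "\n".join(thinking), "".join(answer)

# reasoning, final = split_thinking(response.content)
```

Showing the reasoning to end users is optional; many applications log it for debugging and display only the final text.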
Tool Use (Function Calling)
Claude can call external tools. Define tools and let Claude decide when to use them:
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "units": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["city"]
        }
    }
]

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Paris?"}]
)
# Check if Claude wants to use a tool
if response.stop_reason == "tool_use":
    # Don't assume a fixed position; find the tool_use block explicitly
    tool_call = next(b for b in response.content if b.type == "tool_use")
    print(f"Tool: {tool_call.name}")
    print(f"Input: {tool_call.input}")
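Detecting the tool call is only half the loop: your code then executes the tool and sends the result back as a tool_result block so Claude can compose the final answer. A sketch of building that follow-up request (`tool_result_messages` is a helper name of my own; the weather string is invented):

```python
def tool_result_messages(prior_messages, response, tool_output):
    """Build the follow-up message list after executing a tool call.

    Echo Claude's assistant turn (including its tool_use block), then
    answer it with a tool_result keyed by the tool_use block's id.
    """
    tool_use = next(b for b in response.content if b.type == "tool_use")
    return prior_messages + [
        {"role": "assistant", "content": response.content},
        {
            "role": "user",
            "content": [{
                "type": "tool_result",
                "tool_use_id": tool_use.id,
                "content": tool_output,
            }],
        },
    ]

# Then call client.messages.create again with the extended message list
# (and the same tools) so Claude can turn the output into a final answer.
```

Agentic workflows repeat this create/execute/append cycle until stop_reason comes back as "end_turn".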
Structured Outputs
The Messages API does not accept an OpenAI-style response_format parameter. A dependable way to get schema-valid JSON is to define a tool whose input_schema describes the output shape you want, then force Claude to call it with tool_choice:
tools = [
    {
        "name": "record_languages",
        "description": "Record programming languages and their primary use cases",
        "input_schema": {
            "type": "object",
            "properties": {
                "languages": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "name": {"type": "string"},
                            "use_case": {"type": "string"}
                        },
                        "required": ["name", "use_case"]
                    }
                }
            },
            "required": ["languages"]
        }
    }
]

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    tools=tools,
    tool_choice={"type": "tool", "name": "record_languages"},
    messages=[{"role": "user", "content": "List three programming languages and their primary use cases."}]
)

# The structured data arrives as the tool call's input, already parsed
structured = next(b for b in response.content if b.type == "tool_use").input
print(structured["languages"])
Vision (Image Processing)
Claude can analyze images. Send image data as base64:
import base64

with open("chart.png", "rb") as f:
    image_data = base64.b64encode(f.read()).decode("utf-8")

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this chart."},
                {"type": "image", "source": {"type": "base64", "media_type": "image/png", "data": image_data}}
            ]
        }
    ]
)
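Base64 suits local files; for images already hosted online, the API also accepts a URL-type image source, which skips the download-and-encode step (the URL below is a placeholder):

```python
# URL-based image block: Anthropic's servers fetch the image directly
image_block = {
    "type": "image",
    "source": {"type": "url", "url": "https://example.com/chart.png"},
}

content = [
    {"type": "text", "text": "Describe this chart."},
    image_block,
]
# Pass `content` as the user message content, exactly as in the base64 example.
```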
Streaming
For real-time applications, stream responses token by token:
stream = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a short poem about AI."}],
    stream=True
)

for event in stream:
    if event.type == "content_block_delta":
        print(event.delta.text, end="", flush=True)
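The delta-handling logic is easier to unit-test if you factor it out of the network loop. A sketch, assuming the documented content_block_delta / text_delta event shapes (checking the delta type also keeps the helper safe when thinking deltas are mixed in):

```python
def accumulate_text(events):
    """Collect text deltas from a stream of events into the full reply."""
    parts = []
    for event in events:
        if event.type == "content_block_delta" and event.delta.type == "text_delta":
            parts.append(event.delta.text)
    return "".join(parts)

# full_reply = accumulate_text(stream)
```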
5. Best Practices
- Use system prompts to set context and behavior
- Handle stop reasons to manage conversation flow
- Implement retries with exponential backoff for transient errors
- Monitor token usage to control costs
- Use prompt caching for repeated system prompts to reduce latency and cost
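The retry advice above can be sketched as a small wrapper (the helper name and defaults are my own; note the Python SDK also retries transient errors on its own, configurable via anthropic.Anthropic(max_retries=...)):

```python
import random
import time

def with_backoff(fn, max_attempts=5, base_delay=1.0,
                 retryable=(Exception,), sleep=time.sleep):
    """Call fn(), retrying listed exceptions with exponential backoff + jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            # Delay doubles each attempt; jitter spreads out retry stampedes.
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))

# In real use, retry only transient failures, e.g.:
# with_backoff(lambda: client.messages.create(...),
#              retryable=(anthropic.RateLimitError, anthropic.APIConnectionError))
```

Keep `retryable` narrow: retrying authentication or validation errors only wastes quota and hides bugs.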
Next Steps
Now that you have the fundamentals, explore more advanced topics:
- Claude Managed Agents for long-running, asynchronous tasks
- Prompt Caching to optimize repeated requests
- Batch Processing for high-volume workloads
- Claude Cookbook for interactive Jupyter notebooks
Key Takeaways
- The Messages API is the core interface for all Claude interactions, supporting multi-turn conversations, system prompts, and streaming
- Choose your model based on task complexity: Opus for reasoning, Sonnet for general use, Haiku for speed and cost efficiency
- Claude supports powerful features like extended thinking, tool use, structured outputs, and vision out of the box
- Always handle stop reasons (end_turn, max_tokens, tool_use) to build robust applications
- Start with the official SDKs (Python/TypeScript) and use the Developer Console for prompt prototyping