Want to integrate Claude AI into your own app, automation, or product? The Claude API makes it easier than ever. This tutorial walks beginners through getting an API key, writing your first code (Python or Node.js), and shipping a real AI-powered app — in under 30 minutes.
What is the Claude API?
The Claude API lets your code talk to Claude's AI models (Opus 4.7, Sonnet 4.6, Haiku 4.5). Send text → get smart responses. Use it for:
- Chatbots and customer support
- Content generation tools
- Document analysis / RAG systems
- Code assistants
- Data extraction and summarization
- Translation, classification, sentiment analysis
Step-by-Step Tutorial
1 Get Your API Key
Sign up at console.anthropic.com → API Keys → Create Key. Copy it somewhere safe — it's shown only once. Never share the key or commit it to git!
2 Install the SDK
Python:
pip install anthropic
Node.js:
npm install @anthropic-ai/sdk
3 Set Your API Key
Add to your environment (don't hardcode):
# In your shell or .env file
export ANTHROPIC_API_KEY="sk-ant-..."
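The SDK picks up ANTHROPIC_API_KEY from the environment automatically, so a missing key only surfaces as a 401 at request time. A tiny fail-fast check (the helper name is my own, not part of the SDK) gives a clearer error up front:

```python
import os

def require_api_key() -> str:
    # Fail fast with a clear message instead of a 401 at request time.
    key = os.environ.get("ANTHROPIC_API_KEY", "")
    if not key.startswith("sk-ant-"):
        raise RuntimeError("ANTHROPIC_API_KEY is missing or malformed")
    return key
```

Call it once at startup, before constructing the client.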
4 Your First API Call (Python)
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-opus-4-7",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Write a haiku about coding"}
    ]
)
print(message.content[0].text)
5 Same Call in Node.js
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

const message = await client.messages.create({
  model: "claude-opus-4-7",
  max_tokens: 1024,
  messages: [
    { role: "user", content: "Write a haiku about coding" }
  ]
});
console.log(message.content[0].text);
6 Add Streaming (Better UX)
Stream tokens to your UI for a snappier experience:
# Python streaming
with client.messages.stream(
    model="claude-opus-4-7",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Tell me a story"}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
7 Enable Prompt Caching (Save 90% on Costs!)
If you reuse the same context (like a system prompt or large doc), cache it:
message = client.messages.create(
    model="claude-opus-4-7",
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "You are an expert assistant for [your domain].",
            "cache_control": {"type": "ephemeral"}
        }
    ],
    messages=[{"role": "user", "content": "What's the rule for X?"}]
)
Saves up to 90% on repeated input tokens.
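To see where that 90% figure comes from, you can work the arithmetic: the first call writes the cache at a premium, and every later call reads it at a fraction of the base input rate. The 1.25× write and 0.10× read multipliers below are assumptions — check current pricing before relying on them:

```python
def cached_input_cost(base_per_mtok: float, tokens: int, calls: int) -> float:
    # First call writes the cache (assumed 1.25x the base input rate);
    # the remaining calls read it (assumed 0.10x the base input rate).
    per_tok = base_per_mtok / 1_000_000
    write = tokens * per_tok * 1.25
    reads = tokens * per_tok * 0.10 * (calls - 1)
    return write + reads
```

With a 10,000-token system prompt reused across 100 calls at $15/M input, that's about $1.67 cached versus $15.00 uncached — roughly the advertised 90% saving.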
Real Project: Build a Q&A Chatbot
Here's a complete working chatbot in 30 lines:
import anthropic

client = anthropic.Anthropic()
history = []

print("Chat with Claude. Type 'quit' to exit.\n")

while True:
    user_input = input("You: ")
    if user_input.lower() == 'quit':
        break
    history.append({"role": "user", "content": user_input})
    response = client.messages.create(
        model="claude-opus-4-7",
        max_tokens=1024,
        messages=history
    )
    reply = response.content[0].text
    history.append({"role": "assistant", "content": reply})
    print(f"Claude: {reply}\n")
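One gotcha with this loop: history grows forever, so a long conversation will eventually blow past the context window (and your token budget). A simple sketch of a fix — my own helper, not an SDK feature — keeps only the most recent turns while making sure the list still starts on a user message:

```python
def trim_history(history: list, max_messages: int = 20) -> list:
    # Keep the most recent messages; advance past any leading
    # assistant message so roles still alternate user/assistant.
    trimmed = history[-max_messages:]
    while trimmed and trimmed[0]["role"] != "user":
        trimmed = trimmed[1:]
    return trimmed
```

Call `history = trim_history(history)` just before each `messages.create` call.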
Model Pricing Cheat Sheet (2026)
| Model | Input (per 1M) | Output (per 1M) | Best For |
|---|---|---|---|
| Claude Opus 4.7 | $15 | $75 | Complex reasoning, coding |
| Claude Sonnet 4.6 | $3 | $15 | Balanced, most use cases |
| Claude Haiku 4.5 | $0.80 | $4 | Cheap, fast, high volume |
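To budget before you ship, the table can be turned into a quick estimator. The prices are hard-coded from the cheat sheet above and the model IDs are illustrative guesses — verify both against current documentation:

```python
# USD per 1M tokens, taken from the cheat sheet above.
PRICES = {
    "claude-opus-4-7":   {"input": 15.00, "output": 75.00},
    "claude-sonnet-4-6": {"input": 3.00,  "output": 15.00},
    "claude-haiku-4-5":  {"input": 0.80,  "output": 4.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    # Linear cost: tokens times per-token rate for each direction.
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
```

For example, 1M input tokens plus 100k output tokens on Sonnet comes to $3.00 + $1.50 = $4.50.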
💡 Pro tip: For Claude Code on a budget, try GLM Coding Plan at $18/mo (compatible with Claude SDKs).
Cost-Saving Best Practices
- Pick the right model — Haiku for high-volume, Sonnet default, Opus for hard tasks
- Enable prompt caching — 90% savings on repeated context
- Use batch API — 50% discount on non-time-sensitive jobs
- Limit max_tokens — only generate what you need
- Cache responses in your DB for repeated queries
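The last tip — caching responses for repeated queries — can start as simply as an in-memory dict keyed by a hash of the prompt. This is a minimal sketch: `call_claude` is a stand-in for your real API call, and in production you'd swap the dict for Redis or your database:

```python
import hashlib

_cache: dict = {}

def cached_ask(prompt: str, call_claude) -> str:
    # Hash the prompt so cache keys stay short and uniform.
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_claude(prompt)
    return _cache[key]
```

Identical prompts now hit the API exactly once; everything after is a free dict lookup.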
Advanced Features
Tool Use (Function Calling)
tools = [{
    "name": "get_weather",
    "description": "Get current weather for a city",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string"}
        },
        "required": ["city"]
    }
}]

response = client.messages.create(
    model="claude-opus-4-7",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}]
)
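Claude doesn't run the tool itself — it returns a tool_use block, and your code executes the function and sends back a tool_result. Here's a sketch of that dispatch step, using plain dicts for clarity (the real SDK blocks are objects with attribute access, and `get_weather` is a stub):

```python
def get_weather(city: str) -> str:
    # Stub — replace with a real weather API call.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def run_tool_calls(content_blocks: list) -> list:
    # Turn each tool_use block into a tool_result block to send back.
    results = []
    for block in content_blocks:
        if block["type"] == "tool_use":
            output = TOOLS[block["name"]](**block["input"])
            results.append({
                "type": "tool_result",
                "tool_use_id": block["id"],
                "content": output,
            })
    return results
```

You then append these results as a user message and call `messages.create` again so Claude can compose its final answer.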
Vision (Send Images)
response = client.messages.create(
    model="claude-opus-4-7",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image", "source": {"type": "base64", "media_type": "image/png", "data": "..."}},
            {"type": "text", "text": "What's in this image?"}
        ]
    }]
)
Extended Thinking (Best Reasoning)
response = client.messages.create(
    model="claude-opus-4-7",
    max_tokens=8000,
    thinking={"type": "enabled", "budget_tokens": 5000},
    messages=[{"role": "user", "content": "Solve this complex problem..."}]
)
Common Errors & Fixes
- 401 Unauthorized: Invalid or missing API key. Check ANTHROPIC_API_KEY is set, or regenerate the key.
- 429 Rate Limit: Slow down requests or upgrade tier.
- 400 Bad Request: Check JSON structure, message format.
- Context too long: Reduce input, summarize first, or use Sonnet with 1M context.
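For 429s in particular, retrying with exponential backoff usually resolves things on its own (the SDK also supports automatic retries via a client option). A minimal sketch of the idea, written against a generic callable:

```python
import time

def with_backoff(fn, max_attempts: int = 5, base_delay: float = 1.0):
    # Retry fn(), doubling the wait after each failure.
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Wrap your `messages.create` call in a small lambda and pass it to `with_backoff`; in real code you'd catch the SDK's specific rate-limit exception rather than bare `Exception`.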
Deploy Your AI App
Once your code works locally, deploy to production:
- Push to GitHub
- Create DigitalOcean App ($200 free credit)
- Connect GitHub repo
- Add ANTHROPIC_API_KEY as encrypted env var
- Click Deploy. Done!
See our full DigitalOcean deployment tutorial.
FAQ
Do I need to know coding to use Claude API?
Basic Python or JavaScript helps, but Claude itself can write the code for you. Just describe what you want.
How much does Claude API cost monthly?
Depends on usage. Light apps: $5-20/mo. Heavy apps: $100-500/mo. Use prompt caching to slash costs.
Is the Claude API better than OpenAI's?
For coding, accuracy, and writing quality — yes. OpenAI wins on image generation and multimodal.
Can I use Claude API for free?
New accounts get free credit. After that, pay-as-you-go. For Claude Code at a flat price, try GLM Coding Plan ($18/mo).
Conclusion
You now have everything to build with the Claude API: setup, code, advanced features, deployment, and cost tips. The hard part isn't the tech — it's deciding what to build. Get your API key and ship something this week.