Anthropic Provider¶
Claude models from Anthropic. Strong at long-context tasks, with constitutional AI alignment built in.
Installation¶
The provider ships with dynabots_core; you also need the official anthropic Python SDK installed.
Setup¶
API Key¶
Set the ANTHROPIC_API_KEY environment variable. The AsyncAnthropic client picks it up automatically when no key is passed explicitly.
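A minimal sketch of the environment-based setup (the key value below is a placeholder, not a real key):

```python
import os

# The AsyncAnthropic client reads ANTHROPIC_API_KEY from the environment
# when no api_key argument is given; setdefault avoids clobbering a real key.
os.environ.setdefault("ANTHROPIC_API_KEY", "sk-ant-your-key-here")

print(os.environ["ANTHROPIC_API_KEY"].startswith("sk-ant-"))  # True
```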
Usage¶
Basic¶
```python
from anthropic import AsyncAnthropic
from dynabots_core.providers import AnthropicProvider
from dynabots_core import LLMMessage

client = AsyncAnthropic()  # Uses the ANTHROPIC_API_KEY env var
llm = AnthropicProvider(client, model="claude-3-5-sonnet-20241022")

response = await llm.complete([
    LLMMessage(role="user", content="What is 2+2?"),
])
print(response.content)  # "4"
```
Custom API Key¶
```python
client = AsyncAnthropic(api_key="sk-ant-...")
llm = AnthropicProvider(client, model="claude-3-5-sonnet-20241022")
```
Features¶
Temperature¶
Control randomness with the temperature parameter:

```python
response = await llm.complete(
    messages,
    temperature=0.1,  # Near-deterministic, focused output
)

response = await llm.complete(
    messages,
    temperature=0.9,  # More varied, creative output
)
```
Max Tokens¶
Limit response length with the max_tokens parameter; see Token Limits under Cost Optimization below for an example.
System Prompt¶
Include system instructions:
```python
response = await llm.complete([
    LLMMessage(role="system", content="You are an expert analyst."),
    LLMMessage(role="user", content="Analyze this data..."),
])
```
Tool Calling¶
Function calling support:
```python
from dynabots_core.protocols.llm import ToolDefinition

tools = [
    ToolDefinition(
        name="calculate",
        description="Perform mathematical calculations",
        parameters={
            "type": "object",
            "properties": {
                "expression": {"type": "string"}
            },
            "required": ["expression"]
        }
    )
]

response = await llm.complete(
    messages=[
        LLMMessage(role="user", content="What is 123 + 456?")
    ],
    tools=tools
)

if response.tool_calls:
    for call in response.tool_calls:
        print(f"Tool: {call['function']['name']}")
        print(f"Args: {call['function']['arguments']}")
```
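The example above stops at inspecting tool_calls. A minimal local dispatch sketch might look like the following; the calculate handler and the use of eval are illustrative only, not part of dynabots_core:

```python
import json

# Map tool names reported by the model to local handler functions.
def calculate(expression: str) -> str:
    # Evaluate simple arithmetic with builtins disabled (demo only).
    return str(eval(expression, {"__builtins__": {}}))

handlers = {"calculate": calculate}

# Shape mirrors the tool_calls entries printed above.
call = {"function": {"name": "calculate",
                     "arguments": json.dumps({"expression": "123 + 456"})}}
args = json.loads(call["function"]["arguments"])
result = handlers[call["function"]["name"]](**args)
print(result)  # "579"
```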
Model Selection¶
Recommended Models¶
- claude-3-5-sonnet-20241022 - Best overall, recommended
- claude-3-haiku-20240307 - Cheaper, lighter
- claude-3-opus-20240229 - Most powerful, expensive
```python
# Recommended
llm = AnthropicProvider(
    client,
    model="claude-3-5-sonnet-20241022"
)

# Budget
llm = AnthropicProvider(
    client,
    model="claude-3-haiku-20240307"
)

# Maximum power
llm = AnthropicProvider(
    client,
    model="claude-3-opus-20240229"
)
```
Model Comparison¶
| Model | Cost | Speed | Context | Best For |
|---|---|---|---|---|
| claude-3-5-sonnet | $$ | Medium | 200K | Production, balanced |
| claude-3-haiku | $ | Fast | 200K | Cost-sensitive |
| claude-3-opus | $$$ | Slow | 200K | Complex reasoning |
Long Context¶
Claude models support a 200K-token context window:

```python
# You can fit a lot of context
very_long_text = """
[200,000 tokens of text, documents, conversations, etc.]
"""

response = await llm.complete([
    LLMMessage(
        role="user",
        content=f"Analyze this: {very_long_text}"
    )
])
```
Great for:
- Analyzing large documents
- Full conversation history
- Knowledge-base retrieval augmentation
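Before filling the window, a rough client-side size check can help. The 4-characters-per-token ratio below is a crude English-text heuristic, not Anthropic's tokenizer:

```python
# Rough pre-flight check before sending a huge prompt.
CONTEXT_WINDOW = 200_000

def rough_token_estimate(text: str) -> int:
    # ~4 characters per token is a common ballpark for English text.
    return max(1, len(text) // 4)

doc = "example sentence " * 50_000  # 850,000 characters
estimate = rough_token_estimate(doc)
print(estimate, estimate < CONTEXT_WINDOW)  # 212500 False
```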
Protocol Definition¶
dynabots_core.providers.anthropic.AnthropicProvider¶
LLMProvider implementation for Anthropic Claude models.
Example:

```python
from anthropic import AsyncAnthropic

client = AsyncAnthropic()
llm = AnthropicProvider(client, model="claude-3-5-sonnet-20241022")
```
Source code in packages/core/dynabots_core/providers/anthropic.py
model property¶
Get the current model name.
__init__(client, model='claude-3-5-sonnet-20241022', max_tokens=4096)¶
Initialize the Anthropic provider.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| client | Any | An AsyncAnthropic client instance. | required |
| model | str | Claude model ID. | 'claude-3-5-sonnet-20241022' |
| max_tokens | int | Default max tokens (Anthropic requires this). | 4096 |
complete(messages, temperature=0.1, max_tokens=2000, json_mode=False, tools=None) async¶
Send messages to Anthropic and get a response.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| messages | List[LLMMessage] | Conversation messages. | required |
| temperature | float | Sampling temperature. | 0.1 |
| max_tokens | int | Maximum response tokens. | 2000 |
| json_mode | bool | If True, append JSON instruction to system prompt. | False |
| tools | Optional[List[ToolDefinition]] | Optional list of tools for function calling. | None |

Returns:
| Type | Description |
|---|---|
| LLMResponse | LLMResponse with the model's output. |
Constitutional AI¶
Claude is trained with constitutional AI—it follows principles of helpfulness, harmlessness, and honesty:
```python
response = await llm.complete([
    LLMMessage(role="user", content="How do I make an illegal substance?"),
])
# Claude will decline and offer helpful alternatives
```
This is built-in, no configuration needed.
Cost Optimization¶
Model Selection¶
```python
# Most expensive
llm = AnthropicProvider(client, model="claude-3-opus-20240229")
# $15 per 1M input tokens, $75 per 1M output

# Recommended (best value)
llm = AnthropicProvider(client, model="claude-3-5-sonnet-20241022")
# $3 per 1M input tokens, $15 per 1M output

# Cheapest
llm = AnthropicProvider(client, model="claude-3-haiku-20240307")
# $0.25 per 1M input tokens, $1.25 per 1M output
```
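The per-million-token prices above can be folded into a small cost estimator. The PRICES table and helper below are a sketch, not part of the provider:

```python
# Per-million-token prices: (input $/1M, output $/1M).
PRICES = {
    "claude-3-opus-20240229": (15.00, 75.00),
    "claude-3-5-sonnet-20241022": (3.00, 15.00),
    "claude-3-haiku-20240307": (0.25, 1.25),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    inp, out = PRICES[model]
    return (input_tokens * inp + output_tokens * out) / 1_000_000

print(estimate_cost("claude-3-5-sonnet-20241022", 100_000, 10_000))  # 0.45
```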
Token Limits¶
```python
# Keep responses short to save money
response = await llm.complete(
    messages,
    max_tokens=500  # Limited response
)
```
Error Handling¶
```python
try:
    response = await llm.complete(messages)
except Exception as e:
    # The anthropic SDK raises typed exceptions such as
    # BadRequestError and AuthenticationError.
    name = type(e).__name__
    if name == "BadRequestError":
        print("Check message format")
    elif name == "AuthenticationError":
        print("Check API key")
    else:
        print(f"Error: {e}")
```
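For transient failures such as rate limits, a generic retry-with-backoff wrapper can be layered on top. This is a sketch, not part of dynabots_core; the flaky coroutine stands in for a call that fails twice before succeeding:

```python
import asyncio

async def with_retries(fn, attempts=3, base_delay=0.1):
    # Retry an async callable with exponential backoff.
    for attempt in range(attempts):
        try:
            return await fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            await asyncio.sleep(base_delay * (2 ** attempt))

# Demo with a coroutine that fails twice, then succeeds.
calls = {"n": 0}
async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(asyncio.run(with_retries(flaky)))  # "ok"
```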
Usage Statistics¶
Anthropic provides token usage:
```python
response = await llm.complete(messages)
print(response.usage)
# {
#     "prompt_tokens": 100,
#     "completion_tokens": 50,
#     "total_tokens": 150
# }

# Estimate cost (Claude 3.5 Sonnet: $3 per 1M input, $15 per 1M output tokens)
cost = (response.usage["prompt_tokens"] * 3 +
        response.usage["completion_tokens"] * 15) / 1_000_000
print(f"Cost: ${cost:.6f}")
```
Common Issues¶
Invalid API Key¶
Solution: Verify that ANTHROPIC_API_KEY is set and contains a valid key. Get a new key from the Anthropic console.
Quota Exceeded¶
Solution: Check usage at Anthropic console.
Message Too Long¶
Solution: Claude enforces maximum input and output token limits. Reduce the context size or lower the max_tokens limit.
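One way to stay under the input limit is a client-side truncation guard. This sketch uses a rough 4-characters-per-token heuristic and is not part of the provider:

```python
# Trim input to a token budget before calling the model (crude sketch).
def truncate_to_budget(text: str, budget_tokens: int) -> str:
    max_chars = budget_tokens * 4  # ~4 characters per token heuristic
    return text if len(text) <= max_chars else text[:max_chars]

doc = "x" * 1_000_000
trimmed = truncate_to_budget(doc, 150_000)
print(len(trimmed))  # 600000
```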
Tips¶
- System prompts: Use them to set behavior and context.
- Long context: Leverage the 200K context for rich documents.
- JSON mode: There is no native JSON mode; the provider's json_mode flag appends a JSON instruction to the system prompt.
- Tool calling: Works well with Claude—models are good at planning.