
Anthropic Provider

Claude models from Anthropic: strong long-context support (200K-token window) with constitutional-AI alignment built in.


Installation

pip install "dynabots-core[anthropic]"

Setup

API Key

# Get your API key from https://console.anthropic.com
export ANTHROPIC_API_KEY="sk-ant-..."

Usage

Basic

from anthropic import AsyncAnthropic
from dynabots_core.providers import AnthropicProvider
from dynabots_core import LLMMessage

client = AsyncAnthropic()  # Uses ANTHROPIC_API_KEY env var
llm = AnthropicProvider(client, model="claude-3-5-sonnet-20241022")

response = await llm.complete([
    LLMMessage(role="user", content="What is 2+2?"),
])

print(response.content)  # "4"

Custom API Key

client = AsyncAnthropic(api_key="sk-ant-...")
llm = AnthropicProvider(client, model="claude-3-5-sonnet-20241022")

Features

Temperature

Control randomness:

response = await llm.complete(
    messages,
    temperature=0.1  # Low randomness, near-deterministic
)

response = await llm.complete(
    messages,
    temperature=0.9  # Creative
)

Max Tokens

Limit response length:

response = await llm.complete(
    messages,
    max_tokens=1000
)

System Prompt

Include system instructions:

response = await llm.complete([
    LLMMessage(role="system", content="You are an expert analyst."),
    LLMMessage(role="user", content="Analyze this data..."),
])

Tool Calling

Function calling support:

from dynabots_core.protocols.llm import ToolDefinition

tools = [
    ToolDefinition(
        name="calculate",
        description="Perform mathematical calculations",
        parameters={
            "type": "object",
            "properties": {
                "expression": {"type": "string"}
            },
            "required": ["expression"]
        }
    )
]

response = await llm.complete(
    messages=[
        LLMMessage(role="user", content="What is 123 + 456?")
    ],
    tools=tools
)

if response.tool_calls:
    for call in response.tool_calls:
        print(f"Tool: {call['function']['name']}")
        print(f"Args: {call['function']['arguments']}")
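A minimal dispatch sketch for actually executing the returned call locally (`TOOL_HANDLERS` and `run_tool_call` are illustrative names, not part of dynabots_core; note that `arguments` arrives as a dict because this provider passes the model's `input` through unchanged):

```python
# Hypothetical dispatch table mapping tool names to local handlers.
TOOL_HANDLERS = {
    "calculate": lambda expression: str(eval(expression, {"__builtins__": {}})),
}

def run_tool_call(call: dict) -> str:
    """Execute one entry from response.tool_calls and return its result."""
    name = call["function"]["name"]
    args = call["function"]["arguments"]  # a dict, per this provider's output
    return TOOL_HANDLERS[name](**args)

# The shape this provider returns for a tool_use block:
call = {
    "id": "toolu_123",
    "type": "function",
    "function": {"name": "calculate", "arguments": {"expression": "123 + 456"}},
}
print(run_tool_call(call))  # prints 579
```

Feeding the result back to the model for a final answer requires a follow-up message; how tool results are represented in LLMMessage is outside this page's scope.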

Model Selection

  • claude-3-5-sonnet-20241022 - Best overall, recommended
  • claude-3-haiku-20240307 - Cheaper, lighter
  • claude-3-opus-20240229 - Most powerful, expensive
# Recommended
llm = AnthropicProvider(
    client,
    model="claude-3-5-sonnet-20241022"
)

# Budget
llm = AnthropicProvider(
    client,
    model="claude-3-haiku-20240307"
)

# Maximum power
llm = AnthropicProvider(
    client,
    model="claude-3-opus-20240229"
)

Model Comparison

Model              Cost  Speed   Context  Best For
claude-3-5-sonnet  $$    Medium  200K     Production, balanced
claude-3-haiku     $     Fast    200K     Cost-sensitive
claude-3-opus      $$$   Slow    200K     Complex reasoning

Long Context

Claude models support 200K token context window:

# You can fit a lot of context
very_long_text = """
[200,000 tokens of text, documents, conversations, etc.]
"""

response = await llm.complete([
    LLMMessage(
        role="user",
        content=f"Analyze this: {very_long_text}"
    )
])

Great for:

  • Analyzing large documents
  • Full conversation history
  • Knowledge base retrieval augmentation
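When input might overflow the window, a crude character-based budget can keep requests inside it. This is a sketch: the 4-characters-per-token ratio is a rough heuristic for English text, not Claude's actual tokenizer, and the helper names are illustrative.

```python
MAX_INPUT_TOKENS = 190_000  # leave headroom below the 200K window
CHARS_PER_TOKEN = 4         # rough heuristic, not an exact token count

def truncate_to_budget(text: str, max_tokens: int = MAX_INPUT_TOKENS) -> str:
    """Clip text to an approximate token budget before sending it."""
    budget = max_tokens * CHARS_PER_TOKEN
    return text if len(text) <= budget else text[:budget]

doc = "x" * 1_000_000
print(len(truncate_to_budget(doc)))  # 760000
```

For precise limits, count tokens with Anthropic's own tooling rather than this approximation.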


Protocol Definition

dynabots_core.providers.anthropic.AnthropicProvider

LLMProvider implementation for Anthropic Claude models.

Example

from anthropic import AsyncAnthropic
client = AsyncAnthropic()
llm = AnthropicProvider(client, model="claude-3-5-sonnet-20241022")

Source code in packages/core/dynabots_core/providers/anthropic.py
class AnthropicProvider:
    """
    LLMProvider implementation for Anthropic Claude models.

    Example:
        from anthropic import AsyncAnthropic
        client = AsyncAnthropic()
        llm = AnthropicProvider(client, model="claude-3-5-sonnet-20241022")
    """

    def __init__(
        self,
        client: Any,
        model: str = "claude-3-5-sonnet-20241022",
        max_tokens: int = 4096,
    ) -> None:
        """
        Initialize the Anthropic provider.

        Args:
            client: An AsyncAnthropic client instance.
            model: Claude model ID.
            max_tokens: Default max tokens (Anthropic requires this).
        """
        self._client = client
        self._model = model
        self._default_max_tokens = max_tokens

    async def complete(
        self,
        messages: List[LLMMessage],
        temperature: float = 0.1,
        max_tokens: int = 2000,
        json_mode: bool = False,
        tools: Optional[List[ToolDefinition]] = None,
    ) -> LLMResponse:
        """
        Send messages to Anthropic and get a response.

        Args:
            messages: Conversation messages.
            temperature: Sampling temperature.
            max_tokens: Maximum response tokens.
            json_mode: If True, append JSON instruction to system prompt.
            tools: Optional list of tools for function calling.

        Returns:
            LLMResponse with the model's output.
        """
        # Separate system message from conversation
        system_content = ""
        conversation_messages = []

        for msg in messages:
            if msg.role == "system":
                system_content += msg.content + "\n"
            else:
                conversation_messages.append({
                    "role": msg.role,
                    "content": msg.content,
                })

        # JSON mode: append instruction to system prompt
        if json_mode:
            system_content += "\n\nRespond with valid JSON only."

        kwargs: Dict[str, Any] = {
            "model": self._model,
            "messages": conversation_messages,
            "max_tokens": max_tokens or self._default_max_tokens,
            "temperature": temperature,
        }

        if system_content.strip():
            kwargs["system"] = system_content.strip()

        if tools:
            kwargs["tools"] = [
                {
                    "name": t.name,
                    "description": t.description,
                    "input_schema": t.parameters,
                }
                for t in tools
            ]

        response = await self._client.messages.create(**kwargs)

        # Extract content
        content = ""
        tool_calls = []

        for block in response.content:
            if block.type == "text":
                content += block.text
            elif block.type == "tool_use":
                tool_calls.append({
                    "id": block.id,
                    "type": "function",
                    "function": {
                        "name": block.name,
                        "arguments": block.input,
                    },
                })

        return LLMResponse(
            content=content,
            usage={
                "prompt_tokens": response.usage.input_tokens,
                "completion_tokens": response.usage.output_tokens,
                "total_tokens": (
                    response.usage.input_tokens + response.usage.output_tokens
                ),
            },
            model=self._model,
            tool_calls=tool_calls if tool_calls else None,
            finish_reason=response.stop_reason,
        )

    @property
    def model(self) -> str:
        """Get the current model name."""
        return self._model

model property

Get the current model name.

__init__(client, model='claude-3-5-sonnet-20241022', max_tokens=4096)

Initialize the Anthropic provider.

Parameters:

Name        Type  Description                                    Default
client      Any   An AsyncAnthropic client instance.             required
model       str   Claude model ID.                               'claude-3-5-sonnet-20241022'
max_tokens  int   Default max tokens (Anthropic requires this).  4096
Source code in packages/core/dynabots_core/providers/anthropic.py
def __init__(
    self,
    client: Any,
    model: str = "claude-3-5-sonnet-20241022",
    max_tokens: int = 4096,
) -> None:
    """
    Initialize the Anthropic provider.

    Args:
        client: An AsyncAnthropic client instance.
        model: Claude model ID.
        max_tokens: Default max tokens (Anthropic requires this).
    """
    self._client = client
    self._model = model
    self._default_max_tokens = max_tokens

complete(messages, temperature=0.1, max_tokens=2000, json_mode=False, tools=None) async

Send messages to Anthropic and get a response.

Parameters:

Name         Type                            Description                                         Default
messages     List[LLMMessage]                Conversation messages.                              required
temperature  float                           Sampling temperature.                               0.1
max_tokens   int                             Maximum response tokens.                            2000
json_mode    bool                            If True, append JSON instruction to system prompt.  False
tools        Optional[List[ToolDefinition]]  Optional list of tools for function calling.        None

Returns:

Type         Description
LLMResponse  LLMResponse with the model's output.

Source code in packages/core/dynabots_core/providers/anthropic.py
async def complete(
    self,
    messages: List[LLMMessage],
    temperature: float = 0.1,
    max_tokens: int = 2000,
    json_mode: bool = False,
    tools: Optional[List[ToolDefinition]] = None,
) -> LLMResponse:
    """
    Send messages to Anthropic and get a response.

    Args:
        messages: Conversation messages.
        temperature: Sampling temperature.
        max_tokens: Maximum response tokens.
        json_mode: If True, append JSON instruction to system prompt.
        tools: Optional list of tools for function calling.

    Returns:
        LLMResponse with the model's output.
    """
    # Separate system message from conversation
    system_content = ""
    conversation_messages = []

    for msg in messages:
        if msg.role == "system":
            system_content += msg.content + "\n"
        else:
            conversation_messages.append({
                "role": msg.role,
                "content": msg.content,
            })

    # JSON mode: append instruction to system prompt
    if json_mode:
        system_content += "\n\nRespond with valid JSON only."

    kwargs: Dict[str, Any] = {
        "model": self._model,
        "messages": conversation_messages,
        "max_tokens": max_tokens or self._default_max_tokens,
        "temperature": temperature,
    }

    if system_content.strip():
        kwargs["system"] = system_content.strip()

    if tools:
        kwargs["tools"] = [
            {
                "name": t.name,
                "description": t.description,
                "input_schema": t.parameters,
            }
            for t in tools
        ]

    response = await self._client.messages.create(**kwargs)

    # Extract content
    content = ""
    tool_calls = []

    for block in response.content:
        if block.type == "text":
            content += block.text
        elif block.type == "tool_use":
            tool_calls.append({
                "id": block.id,
                "type": "function",
                "function": {
                    "name": block.name,
                    "arguments": block.input,
                },
            })

    return LLMResponse(
        content=content,
        usage={
            "prompt_tokens": response.usage.input_tokens,
            "completion_tokens": response.usage.output_tokens,
            "total_tokens": (
                response.usage.input_tokens + response.usage.output_tokens
            ),
        },
        model=self._model,
        tool_calls=tool_calls if tool_calls else None,
        finish_reason=response.stop_reason,
    )

Constitutional AI

Claude is trained with constitutional AI, so it follows principles of helpfulness, harmlessness, and honesty:

response = await llm.complete([
    LLMMessage(role="user", content="How do I make an illegal substance?"),
])

# Claude will decline and offer helpful alternatives

This is built-in, no configuration needed.


Cost Optimization

Model Selection

# Most expensive
llm = AnthropicProvider(client, model="claude-3-opus-20240229")
# $15.00 per 1M input tokens, $75.00 per 1M output

# Recommended (best value)
llm = AnthropicProvider(client, model="claude-3-5-sonnet-20241022")
# $3.00 per 1M input tokens, $15.00 per 1M output

# Cheapest
llm = AnthropicProvider(client, model="claude-3-haiku-20240307")
# $0.25 per 1M input tokens, $1.25 per 1M output
# (Rates change; verify against Anthropic's pricing page.)
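These per-model rates can be folded into a small estimator. A sketch: `PRICES` and `estimate_cost` are illustrative names, and the hard-coded per-1M-token figures may drift from Anthropic's current pricing page, so verify them before relying on the output.

```python
# (input, output) price in dollars per 1M tokens; verify before relying on these.
PRICES = {
    "claude-3-opus-20240229": (15.00, 75.00),
    "claude-3-5-sonnet-20241022": (3.00, 15.00),
    "claude-3-haiku-20240307": (0.25, 1.25),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of one request at the listed per-1M-token rates."""
    input_rate, output_rate = PRICES[model]
    return (prompt_tokens * input_rate + completion_tokens * output_rate) / 1_000_000

print(f"{estimate_cost('claude-3-5-sonnet-20241022', 100, 50):.6f}")  # 0.001050
```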

Token Limits

# Keep responses short to save money
response = await llm.complete(
    messages,
    max_tokens=500  # Limited response
)

Error Handling

import anthropic

try:
    response = await llm.complete(messages)
except anthropic.AuthenticationError:
    print("Check API key")
except anthropic.BadRequestError:
    print("Check message format")
except anthropic.APIError as e:
    print(f"Error: {e}")
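Transient failures (rate limits, connection drops) are usually worth retrying. A minimal backoff wrapper might look like this; `complete_with_retry` is an illustrative helper, not part of dynabots_core, and in practice you would catch the SDK's transient types (e.g. anthropic.RateLimitError, anthropic.APIConnectionError) rather than bare Exception:

```python
import asyncio

async def complete_with_retry(llm, messages, retries=3, base_delay=1.0):
    """Retry llm.complete with exponential backoff on transient errors."""
    for attempt in range(retries):
        try:
            return await llm.complete(messages)
        except Exception:  # narrow this to the SDK's transient error types
            if attempt == retries - 1:
                raise  # out of attempts; surface the last error
            await asyncio.sleep(base_delay * 2 ** attempt)
```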

Usage Statistics

Anthropic provides token usage:

response = await llm.complete(messages)

print(response.usage)
# {
#     "prompt_tokens": 100,
#     "completion_tokens": 50,
#     "total_tokens": 150
# }

# Estimate cost (Claude 3.5 Sonnet: $3 per 1M input, $15 per 1M output)
cost = (response.usage["prompt_tokens"] * 3.00 +
        response.usage["completion_tokens"] * 15.00) / 1_000_000
print(f"Cost: ${cost:.6f}")

Common Issues

Invalid API Key

Error: Invalid API key

Solution: Check your API key:

echo $ANTHROPIC_API_KEY

Get a new key from Anthropic console.

Quota Exceeded

Error: You exceeded your quota

Solution: Check usage at Anthropic console.

Message Too Long

Error: Message is too long

Solution: Claude's 200K-token context window covers both input and output. Trim the input context or cap the response length:

response = await llm.complete(
    messages,
    max_tokens=1000  # Be explicit
)

Tips

  1. System prompts: Use them to set behavior and context.
  2. Long context: Leverage the 200K context for rich documents.
  3. JSON mode: There is no native JSON mode; passing json_mode=True appends a JSON-only instruction to the system prompt.
  4. Tool calling: Works well with Claude; the models plan tool use reliably.
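On JSON output: responses requested via json_mode=True still arrive as plain text, and models sometimes wrap the JSON in a code fence. A defensive parser handles both shapes (a sketch; the fence-stripping regex is an assumption about common model output, not an API guarantee):

```python
import json
import re

FENCE = "`" * 3  # avoids writing a literal code fence inside this example

def parse_json_response(content: str) -> dict:
    """Extract and parse JSON from a response that may use a code fence."""
    pattern = FENCE + r"(?:json)?\s*(.*?)" + FENCE
    match = re.search(pattern, content, re.DOTALL)
    payload = match.group(1) if match else content
    return json.loads(payload)

raw = FENCE + 'json\n{"answer": 4}\n' + FENCE
print(parse_json_response(raw))  # {'answer': 4}
```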

See Also