LLM Provider Protocol¶
The unified interface for any language model service.
Protocol Definition¶
dynabots_core.protocols.llm.LLMProvider¶
Bases: Protocol
Protocol for LLM providers.
Implementations wrap a specific LLM service behind a uniform interface. This enables LLM-agnostic orchestration - swap providers without changing your agent code.
Required method:

- complete: Send messages and get a response

Optional features (check implementation):

- Tool calling: Pass the tools parameter to enable function calling
- JSON mode: Set json_mode=True for structured output
- Streaming: Some implementations may offer streaming variants
Example implementation
class OllamaProvider:
    def __init__(self, model: str = "llama3.1:70b"):
        self.model = model
        self.client = ollama.AsyncClient()

    async def complete(
        self,
        messages: list[LLMMessage],
        temperature: float = 0.1,
        max_tokens: int = 2000,
        json_mode: bool = False,
        tools: list[ToolDefinition] | None = None,
    ) -> LLMResponse:
        response = await self.client.chat(
            model=self.model,
            messages=[{"role": m.role, "content": m.content} for m in messages],
            options={"temperature": temperature, "num_predict": max_tokens},
            format="json" if json_mode else None,
        )
        return LLMResponse(
            content=response["message"]["content"],
            model=self.model,
        )
Source code in packages/core/dynabots_core/protocols/llm.py
complete(messages, temperature=0.1, max_tokens=2000, json_mode=False, tools=None) async¶
Send messages to the LLM and get a response.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| messages | List[LLMMessage] | Conversation messages (system, user, assistant, tool). | required |
| temperature | float | Sampling temperature (0.0 = deterministic, 1.0 = creative). | 0.1 |
| max_tokens | int | Maximum tokens in the response. | 2000 |
| json_mode | bool | If True, request JSON-formatted output. | False |
| tools | Optional[List[ToolDefinition]] | Optional list of tools the LLM can call. | None |
Returns:

| Type | Description |
|---|---|
| LLMResponse | LLMResponse with the model's response text and optional metadata. |
Raises:

| Type | Description |
|---|---|
| Exception | If the LLM call fails (implementation-specific). |
dynabots_core.protocols.llm.LLMMessage dataclass¶
A single message in an LLM conversation.
Attributes:

| Name | Type | Description |
|---|---|---|
| role | str | Message role - "system", "user", "assistant", or "tool" |
| content | str | Message content (text) |
| name | Optional[str] | Optional name for the message sender |
| tool_calls | Optional[List[Dict[str, Any]]] | Optional list of tool calls (for assistant messages) |
| tool_call_id | Optional[str] | Optional ID linking to a tool call (for tool messages) |
Example
messages = [
    LLMMessage(role="system", content="You are a helpful assistant."),
    LLMMessage(role="user", content="What's 2+2?"),
    LLMMessage(role="assistant", content="2+2 equals 4."),
]
dynabots_core.protocols.llm.LLMResponse dataclass¶
Response from an LLM provider.
Attributes:

| Name | Type | Description |
|---|---|---|
| content | str | The model's response text |
| usage | Optional[Dict[str, int]] | Optional token usage statistics |
| model | Optional[str] | Optional model identifier |
| tool_calls | Optional[List[Dict[str, Any]]] | Optional list of tool calls requested by the model |
| finish_reason | Optional[str] | Why the model stopped generating (stop, length, tool_calls) |
Example
response = await provider.complete(messages)
print(response.content)
print(f"Tokens used: {response.usage.get('total_tokens', 'unknown')}")
dynabots_core.protocols.llm.ToolDefinition dataclass¶
Definition of a tool that can be called by the LLM.
Attributes:

| Name | Type | Description |
|---|---|---|
| name | str | Tool name (function name) |
| description | str | What the tool does |
| parameters | Dict[str, Any] | JSON Schema for the parameters |
Example
search_tool = ToolDefinition(
    name="search_database",
    description="Search the database for records",
    parameters={
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search query"},
            "limit": {"type": "integer", "default": 10},
        },
        "required": ["query"],
    },
)
Custom Implementation¶
Create your own provider for any LLM service:
from typing import Any, Dict, List, Optional
from dynabots_core.protocols.llm import (
LLMMessage,
LLMProvider,
LLMResponse,
ToolDefinition,
)
class MyCustomProvider:
"""Custom LLM provider for your own service."""
def __init__(self, api_key: str, model: str = "my-model"):
self.api_key = api_key
self.model = model
self.client = MyLLMClient(api_key)
async def complete(
self,
messages: List[LLMMessage],
temperature: float = 0.1,
max_tokens: int = 2000,
json_mode: bool = False,
tools: Optional[List[ToolDefinition]] = None,
) -> LLMResponse:
"""Call your LLM service."""
# Convert messages to your API format
api_messages = [
{"role": m.role, "content": m.content}
for m in messages
]
# Build request
request = {
"messages": api_messages,
"temperature": temperature,
"max_tokens": max_tokens,
}
if json_mode:
request["response_format"] = "json"
if tools:
request["tools"] = [
{
"name": t.name,
"description": t.description,
"parameters": t.parameters,
}
for t in tools
]
# Call your LLM service
response = await self.client.generate(**request)
# Parse response
return LLMResponse(
content=response.text,
model=self.model,
usage={
"prompt_tokens": response.prompt_tokens,
"completion_tokens": response.completion_tokens,
"total_tokens": response.total_tokens,
},
)
Message Format¶
LLMMessage¶
Represents a single message in a conversation:
from dynabots_core import LLMMessage
messages = [
LLMMessage(
role="system",
content="You are a helpful assistant."
),
LLMMessage(
role="user",
content="What is 2+2?"
),
LLMMessage(
role="assistant",
content="2+2 equals 4."
),
LLMMessage(
role="user",
content="And 3+3?"
),
]
Roles:
- "system": System prompt (LLM behavior)
- "user": User input
- "assistant": LLM response
- "tool": Tool output (for tool calling)
LLMResponse¶
The response from a provider:
response = await provider.complete(messages)
print(response.content) # The LLM's response text
print(response.model) # Model identifier
print(response.usage) # {"prompt_tokens": N, "completion_tokens": N, "total_tokens": N}
print(response.finish_reason) # "stop", "length", "tool_calls"
print(response.tool_calls) # Tool calls if any
Features¶
Temperature¶
Control randomness:
# Deterministic (for analysis, code generation)
response = await llm.complete(messages, temperature=0.0)
# Balanced (default)
response = await llm.complete(messages, temperature=0.1)
# Creative (for brainstorming)
response = await llm.complete(messages, temperature=0.9)
JSON Mode¶
Request structured JSON output:
response = await llm.complete(
messages=[
LLMMessage(role="user", content="Extract: name, age, role from the text...")
],
json_mode=True, # Request JSON output
)
import json
data = json.loads(response.content)
print(data) # {"name": "Alice", "age": 30, "role": "Engineer"}
Not all providers support JSON mode. Check provider documentation.
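Even with json_mode enabled, some models wrap their output in Markdown code fences. A small defensive parser (a hypothetical helper, not part of dynabots_core) that tolerates this:

```python
import json
import re


def parse_json_response(content: str) -> dict:
    """Parse LLM output as JSON, tolerating Markdown code fences."""
    # Strip a surrounding ```json ... ``` fence if the model added one
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", content, re.DOTALL)
    if match:
        content = match.group(1)
    return json.loads(content)


# Both forms parse to the same dict:
parse_json_response('{"name": "Alice"}')
parse_json_response('```json\n{"name": "Alice"}\n```')
```

If parsing still fails, a common fallback is to retry the completion with an explicit "respond with valid JSON only" instruction.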
Max Tokens¶
Limit response length:
# Short responses
response = await llm.complete(messages, max_tokens=100)
# Long responses
response = await llm.complete(messages, max_tokens=4000)
Tool Calling¶
Enable function calling:
from dynabots_core.protocols.llm import ToolDefinition
tools = [
ToolDefinition(
name="search",
description="Search the knowledge base",
parameters={
"type": "object",
"properties": {
"query": {"type": "string", "description": "Search query"},
"limit": {"type": "integer", "default": 10}
},
"required": ["query"]
}
),
ToolDefinition(
name="calculate",
description="Perform calculations",
parameters={
"type": "object",
"properties": {
"expression": {"type": "string", "description": "Math expression"}
},
"required": ["expression"]
}
),
]
response = await llm.complete(
messages=[
LLMMessage(role="user", content="What is 2+2 and search for Python?")
],
tools=tools,
)
if response.tool_calls:
for call in response.tool_calls:
print(f"Tool: {call['function']['name']}")
print(f"Args: {call['function']['arguments']}")
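After the model requests tool calls, you execute them yourself and send the results back as role="tool" messages in a follow-up complete call. A minimal sketch of the dispatch step, assuming the OpenAI-style tool_calls dicts shown above (id, function.name, function.arguments); the executor functions here are illustrative stand-ins:

```python
import json


# Hypothetical local tool implementations
def run_search(query: str, limit: int = 10) -> str:
    return f"Top {limit} results for {query!r}"


def run_calculate(expression: str) -> str:
    # Demo only: never eval untrusted model output in production
    return str(eval(expression))


TOOL_EXECUTORS = {"search": run_search, "calculate": run_calculate}


def execute_tool_calls(tool_calls: list[dict]) -> list[dict]:
    """Run each requested tool and build role='tool' result messages."""
    results = []
    for call in tool_calls:
        fn = TOOL_EXECUTORS[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": fn(**args),
        })
    return results


calls = [{"id": "call_1",
          "function": {"name": "calculate",
                       "arguments": '{"expression": "2+2"}'}}]
print(execute_tool_calls(calls)[0]["content"])  # → 4
```

Convert each result dict into an LLMMessage(role="tool", tool_call_id=..., content=...), append it to the conversation after the assistant's tool-call message, and call complete again so the model can compose its final answer.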
Built-in Providers¶
DynaBots provides three implementations.
Ollama (Local)¶
from dynabots_core.providers import OllamaProvider
llm = OllamaProvider(model="qwen2.5:72b")
response = await llm.complete(messages)
Best for:

- Local development
- Privacy-sensitive workloads
- Self-hosted deployments
Requires: Ollama running locally
OpenAI (Cloud)¶
from openai import AsyncOpenAI
from dynabots_core.providers import OpenAIProvider
client = AsyncOpenAI(api_key="sk-...")
llm = OpenAIProvider(client, model="gpt-4o")
response = await llm.complete(messages)
Best for:

- Production workloads
- Advanced capabilities
- High-quality outputs
Also supports Azure OpenAI endpoints.
Anthropic (Cloud)¶
from anthropic import AsyncAnthropic
from dynabots_core.providers import AnthropicProvider
client = AsyncAnthropic(api_key="sk-ant-...")
llm = AnthropicProvider(client, model="claude-3-5-sonnet-20241022")
response = await llm.complete(messages)
Best for:

- Constitutional AI
- Extended thinking (with Claude models)
- Multimodal understanding
Comparison¶
| Provider | Cost | Speed | Customization | Latency |
|---|---|---|---|---|
| Ollama | Free | Medium | Full | Low (local) |
| OpenAI | $$ | Fast | Limited | Medium |
| Anthropic | $$ | Fast | Limited | Medium |
Swapping Providers¶
The power of protocols: change the LLM without changing agent code.
# Start with Ollama (free, local)
llm = OllamaProvider(model="qwen2.5:7b")
# Agent code
async def my_agent_method(self, task):
response = await self.llm.complete(messages)
return response.content
# Later, switch to OpenAI
from openai import AsyncOpenAI
from dynabots_core.providers import OpenAIProvider
llm = OpenAIProvider(AsyncOpenAI(), model="gpt-4o")
# Same agent code works!
self.llm = llm # Just swap the provider
No agent code changes needed.
Error Handling¶
Providers raise exceptions on failure:
try:
response = await llm.complete(messages)
except ConnectionError:
print("LLM service unreachable")
except ValueError:
print("Invalid parameters")
except Exception as e:
print(f"LLM error: {e}")
Best Practices¶
- Async/await: Always use async. Providers are async.
- Temperature tuning: Lower (0.1) for deterministic tasks, higher for creative.
- Token limits: Set reasonable max_tokens to control costs.
- Error handling: Wrap provider calls in try/except.
- Fallbacks: Have a fallback provider if a service is down.