
OpenAI Provider

Production-ready access to OpenAI and Azure OpenAI models.


Installation

pip install dynabots-core[openai]

Setup

OpenAI API Key

# Get your API key from https://platform.openai.com/api-keys
export OPENAI_API_KEY="sk-..."

Azure OpenAI

# For Azure OpenAI, also set:
export AZURE_OPENAI_KEY="..."
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com"

Usage

Basic

from openai import AsyncOpenAI
from dynabots_core.providers import OpenAIProvider
from dynabots_core import LLMMessage

client = AsyncOpenAI()  # Uses OPENAI_API_KEY env var
llm = OpenAIProvider(client, model="gpt-4o")

response = await llm.complete([
    LLMMessage(role="user", content="What is 2+2?"),
])

print(response.content)  # "4"

Custom API Key

client = AsyncOpenAI(api_key="sk-...")
llm = OpenAIProvider(client, model="gpt-4o")

Features

Temperature

Control randomness:

response = await llm.complete(
    messages,
    temperature=0.1  # Near-deterministic
)

response = await llm.complete(
    messages,
    temperature=0.9  # More varied, creative
)

Max Tokens

Limit response length:

response = await llm.complete(
    messages,
    max_tokens=100
)

JSON Mode

Structured output:

response = await llm.complete(
    messages=[
        LLMMessage(role="user", content="Extract name and age: ...")
    ],
    json_mode=True
)

import json
data = json.loads(response.content)
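With json_mode=True the API returns syntactically valid JSON, but a defensive parse is still cheap insurance, and it helps when the same prompt is reused without JSON mode, where models often wrap output in markdown fences. A small sketch — parse_json_response is a hypothetical helper, not part of dynabots_core:

```python
import json

def parse_json_response(content: str) -> dict:
    """Parse model output as JSON, tolerating markdown code fences."""
    text = content.strip()
    if text.startswith("```"):
        # Drop an opening fence like ```json and the trailing ```
        if "\n" in text:
            text = text.split("\n", 1)[1]
        text = text.rsplit("```", 1)[0]
    return json.loads(text)
```

If parsing still fails, re-prompting the model with the parse error is a common fallback.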

Tool Calling

Function calling support:

from dynabots_core.protocols.llm import ToolDefinition

tools = [
    ToolDefinition(
        name="get_weather",
        description="Get current weather",
        parameters={
            "type": "object",
            "properties": {
                "location": {"type": "string"}
            },
            "required": ["location"]
        }
    )
]

response = await llm.complete(
    messages=[
        LLMMessage(role="user", content="What's the weather in San Francisco?")
    ],
    tools=tools
)

if response.tool_calls:
    for call in response.tool_calls:
        tool_name = call["function"]["name"]
        tool_args = call["function"]["arguments"]
        # Execute tool...
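To complete the loop, the returned calls have to be dispatched to local implementations; note that arguments arrives as a JSON string, not a dict. A sketch, where get_weather and TOOL_REGISTRY are hypothetical stand-ins for your own tools:

```python
import json

# Hypothetical local implementation of the get_weather tool
def get_weather(location: str) -> str:
    return f"Sunny in {location}"

TOOL_REGISTRY = {"get_weather": get_weather}

def dispatch_tool_calls(tool_calls):
    """Run each requested tool and collect the results."""
    results = []
    for call in tool_calls:
        fn = TOOL_REGISTRY[call["function"]["name"]]
        # Arguments are delivered as a JSON-encoded string
        args = json.loads(call["function"]["arguments"])
        results.append(fn(**args))
    return results
```

The results are typically sent back to the model as tool-role messages so it can produce a final answer.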

Model Selection

  • gpt-4o - Most capable, recommended for production
  • gpt-4o-mini - Cheaper, still very capable
  • gpt-3.5-turbo - Older, not recommended for new projects

# Production
llm = OpenAIProvider(client, model="gpt-4o")

# Cost-conscious
llm = OpenAIProvider(client, model="gpt-4o-mini")

Model Comparison

Model          Cost   Speed    Reasoning   Best For
gpt-4o         $$$    Medium   Excellent   Production, complex tasks
gpt-4o-mini    $      Fast     Good        Cost-sensitive, simple tasks
gpt-3.5-turbo  $      Fastest  Basic       Legacy only

Protocol Definition

dynabots_core.providers.openai.OpenAIProvider

LLMProvider implementation for OpenAI and Azure OpenAI.

Example

from openai import AsyncOpenAI

client = AsyncOpenAI(api_key="sk-...")
llm = OpenAIProvider(client, model="gpt-4o")

response = await llm.complete([
    LLMMessage(role="user", content="Hello!")
])

Source code in packages/core/dynabots_core/providers/openai.py
class OpenAIProvider:
    """
    LLMProvider implementation for OpenAI and Azure OpenAI.

    Example:
        from openai import AsyncOpenAI
        client = AsyncOpenAI(api_key="sk-...")
        llm = OpenAIProvider(client, model="gpt-4o")

        response = await llm.complete([
            LLMMessage(role="user", content="Hello!")
        ])
    """

    def __init__(self, client: Any, model: str = "gpt-4o") -> None:
        """
        Initialize the OpenAI provider.

        Args:
            client: An AsyncOpenAI or AsyncAzureOpenAI client instance.
            model: Model name (OpenAI) or deployment name (Azure).
        """
        self._client = client
        self._model = model

    async def complete(
        self,
        messages: List[LLMMessage],
        temperature: float = 0.1,
        max_tokens: int = 2000,
        json_mode: bool = False,
        tools: Optional[List[ToolDefinition]] = None,
    ) -> LLMResponse:
        """
        Send messages to OpenAI and get a response.

        Args:
            messages: Conversation messages.
            temperature: Sampling temperature.
            max_tokens: Maximum response tokens.
            json_mode: If True, request JSON-formatted output.
            tools: Optional list of tools for function calling.

        Returns:
            LLMResponse with the model's output.
        """
        kwargs: Dict[str, Any] = {
            "model": self._model,
            "messages": [
                {"role": m.role, "content": m.content}
                for m in messages
            ],
            "temperature": temperature,
            "max_tokens": max_tokens,
        }

        if json_mode:
            kwargs["response_format"] = {"type": "json_object"}

        if tools:
            kwargs["tools"] = [
                {
                    "type": "function",
                    "function": {
                        "name": t.name,
                        "description": t.description,
                        "parameters": t.parameters,
                    },
                }
                for t in tools
            ]

        response = await self._client.chat.completions.create(**kwargs)

        # Extract usage
        usage = None
        if response.usage:
            usage = {
                "prompt_tokens": response.usage.prompt_tokens,
                "completion_tokens": response.usage.completion_tokens,
                "total_tokens": response.usage.total_tokens,
            }

        # Extract tool calls
        tool_calls = None
        if response.choices[0].message.tool_calls:
            tool_calls = [
                {
                    "id": tc.id,
                    "type": tc.type,
                    "function": {
                        "name": tc.function.name,
                        "arguments": tc.function.arguments,
                    },
                }
                for tc in response.choices[0].message.tool_calls
            ]

        return LLMResponse(
            content=response.choices[0].message.content or "",
            usage=usage,
            model=self._model,
            tool_calls=tool_calls,
            finish_reason=response.choices[0].finish_reason,
        )

    @property
    def model(self) -> str:
        """Get the current model name."""
        return self._model

model property

Get the current model name.

__init__(client, model='gpt-4o')

Initialize the OpenAI provider.

Parameters:

Name    Type  Description                                          Default
client  Any   An AsyncOpenAI or AsyncAzureOpenAI client instance.  required
model   str   Model name (OpenAI) or deployment name (Azure).      'gpt-4o'
Source code in packages/core/dynabots_core/providers/openai.py
def __init__(self, client: Any, model: str = "gpt-4o") -> None:
    """
    Initialize the OpenAI provider.

    Args:
        client: An AsyncOpenAI or AsyncAzureOpenAI client instance.
        model: Model name (OpenAI) or deployment name (Azure).
    """
    self._client = client
    self._model = model

complete(messages, temperature=0.1, max_tokens=2000, json_mode=False, tools=None) async

Send messages to OpenAI and get a response.

Parameters:

Name         Type                            Description                                   Default
messages     List[LLMMessage]                Conversation messages.                        required
temperature  float                           Sampling temperature.                         0.1
max_tokens   int                             Maximum response tokens.                      2000
json_mode    bool                            If True, request JSON-formatted output.       False
tools        Optional[List[ToolDefinition]]  Optional list of tools for function calling.  None

Returns:

Type         Description
LLMResponse  LLMResponse with the model's output.

Source code in packages/core/dynabots_core/providers/openai.py
async def complete(
    self,
    messages: List[LLMMessage],
    temperature: float = 0.1,
    max_tokens: int = 2000,
    json_mode: bool = False,
    tools: Optional[List[ToolDefinition]] = None,
) -> LLMResponse:
    """
    Send messages to OpenAI and get a response.

    Args:
        messages: Conversation messages.
        temperature: Sampling temperature.
        max_tokens: Maximum response tokens.
        json_mode: If True, request JSON-formatted output.
        tools: Optional list of tools for function calling.

    Returns:
        LLMResponse with the model's output.
    """
    kwargs: Dict[str, Any] = {
        "model": self._model,
        "messages": [
            {"role": m.role, "content": m.content}
            for m in messages
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

    if json_mode:
        kwargs["response_format"] = {"type": "json_object"}

    if tools:
        kwargs["tools"] = [
            {
                "type": "function",
                "function": {
                    "name": t.name,
                    "description": t.description,
                    "parameters": t.parameters,
                },
            }
            for t in tools
        ]

    response = await self._client.chat.completions.create(**kwargs)

    # Extract usage
    usage = None
    if response.usage:
        usage = {
            "prompt_tokens": response.usage.prompt_tokens,
            "completion_tokens": response.usage.completion_tokens,
            "total_tokens": response.usage.total_tokens,
        }

    # Extract tool calls
    tool_calls = None
    if response.choices[0].message.tool_calls:
        tool_calls = [
            {
                "id": tc.id,
                "type": tc.type,
                "function": {
                    "name": tc.function.name,
                    "arguments": tc.function.arguments,
                },
            }
            for tc in response.choices[0].message.tool_calls
        ]

    return LLMResponse(
        content=response.choices[0].message.content or "",
        usage=usage,
        model=self._model,
        tool_calls=tool_calls,
        finish_reason=response.choices[0].finish_reason,
    )

Azure OpenAI

Use with Azure-hosted OpenAI:

from openai import AsyncAzureOpenAI
from dynabots_core.providers import OpenAIProvider

client = AsyncAzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",
    api_key="your-api-key",
    api_version="2024-02-01",  # Pin an API version; check Azure docs for the current one
)

# Use same provider class
llm = OpenAIProvider(client, model="my-deployment-name")

response = await llm.complete(messages)

Cost Optimization

Model Selection

# Most expensive
llm = OpenAIProvider(client, model="gpt-4o")  # ~$2.50 input / ~$10.00 output per 1M tokens

# Cheaper
llm = OpenAIProvider(client, model="gpt-4o-mini")  # ~$0.15 input / ~$0.60 output per 1M tokens

Rates change; check OpenAI's pricing page for current numbers.

Token Limits

# Shorter responses = lower cost
response = await llm.complete(
    messages,
    max_tokens=100  # Limited response
)
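A tight max_tokens cap can cut the answer off mid-sentence. The provider passes through OpenAI's finish_reason, which is "length" when the cap was hit, so truncation is easy to detect (is_truncated is a hypothetical helper):

```python
def is_truncated(finish_reason) -> bool:
    # OpenAI reports finish_reason == "length" when max_tokens cut the response off
    return finish_reason == "length"
```

If it returns True for response.finish_reason, retry with a larger cap or ask for a shorter answer.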

Batch Processing

For non-latency-sensitive work, OpenAI offers a batch API:

# See OpenAI batch API documentation
# Cheaper rates (~50% discount)
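Batch requests are submitted as a JSONL file, one chat-completion request per line with a caller-chosen custom_id. A sketch of building those lines (build_batch_lines is a hypothetical helper; the request shape follows OpenAI's batch documentation):

```python
import json

def build_batch_lines(prompts, model="gpt-4o-mini"):
    """Build JSONL request lines for OpenAI's batch API."""
    lines = []
    for i, prompt in enumerate(prompts):
        lines.append(json.dumps({
            "custom_id": f"request-{i}",  # Used to match results back to requests
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
            },
        }))
    return lines
```

The lines are written to a .jsonl file, uploaded with client.files.create(purpose="batch"), and submitted via client.batches.create(endpoint="/v1/chat/completions", completion_window="24h").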

Error Handling

The provider does not wrap SDK exceptions, so the openai error classes propagate:

import openai

try:
    response = await llm.complete(messages)
except openai.RateLimitError:
    print("Rate limited, retry later")
except openai.AuthenticationError:
    print("Check OPENAI_API_KEY")
except openai.APIError as e:
    print(f"Error: {e}")

Usage Statistics

OpenAI provides token usage in responses:

response = await llm.complete(messages)

print(response.usage)
# {
#     "prompt_tokens": 50,
#     "completion_tokens": 25,
#     "total_tokens": 75
# }

# Estimate cost (example gpt-4o rates in $ per 1M tokens; check current pricing)
input_rate, output_rate = 2.50, 10.00
cost = (response.usage["prompt_tokens"] * input_rate +
        response.usage["completion_tokens"] * output_rate) / 1_000_000
print(f"Cost: ${cost:.6f}")
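For budgeting across many calls, the per-response usage dicts can simply be summed. A minimal sketch (UsageTracker is a hypothetical helper, not part of dynabots_core):

```python
from collections import Counter

class UsageTracker:
    """Accumulate token usage across multiple completions."""

    def __init__(self):
        self.totals = Counter()

    def record(self, usage):
        # usage may be None when the API omits usage data
        if usage:
            self.totals.update(usage)
```

Call tracker.record(response.usage) after each completion and read tracker.totals periodically.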

Common Issues

Invalid API Key

Error: Invalid API key provided

Solution: Check your API key:

echo $OPENAI_API_KEY

Get a new key from the OpenAI dashboard.

Rate Limiting

Error: 429 Rate limit exceeded

Solution: Implement exponential backoff:

import asyncio
import openai

async def call_with_backoff(llm, messages, max_retries=3):
    for attempt in range(max_retries):
        try:
            return await llm.complete(messages)
        except openai.RateLimitError:
            if attempt == max_retries - 1:
                raise  # Out of retries; surface the error
            wait = 2 ** attempt
            print(f"Rate limited, waiting {wait}s...")
            await asyncio.sleep(wait)

Quota Exceeded

Error: You exceeded your current quota

Solution: Check your billing and usage in the OpenAI dashboard.


See Also