Providers

@synkro/agents ships with three built-in providers and a simple interface for custom integrations. All providers use fetch internally — no SDK packages needed.

Built-in providers

import { OpenAIProvider } from "@synkro/agents";

const provider = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!,
  // baseUrl: "https://api.openai.com/v1" (default)
});

OpenAIProvider supports GPT-4, GPT-4o, GPT-4o-mini, and any other model exposed through the OpenAI Chat Completions API.

Provider options

All built-in providers accept the same constructor options:

Field    Type    Description
apiKey   string  API key for authentication.
baseUrl  string  Override the default API endpoint. Useful for proxies or self-hosted deployments.
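
For example, baseUrl can point a built-in provider at a proxy that speaks the same API. A minimal sketch (the URL below is a placeholder, not a real endpoint):

import { OpenAIProvider } from "@synkro/agents";

// Hypothetical local proxy exposing the OpenAI Chat Completions API
const provider = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!,
  baseUrl: "http://localhost:8787/v1",
});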

Using a provider

Pass the provider instance and a model config to createAgent:

import { createAgent, AnthropicProvider } from "@synkro/agents";

const agent = createAgent({
  name: "claude-agent",
  systemPrompt: "You are a helpful assistant.",
  provider: new AnthropicProvider({ apiKey: process.env.ANTHROPIC_API_KEY! }),
  model: {
    model: "claude-sonnet-4-20250514",
    temperature: 0.5,
    maxTokens: 4096,
  },
});

ModelOptions

Field        Type    Default           Description
model        string  required          Model identifier (e.g. "gpt-4o", "claude-sonnet-4-20250514", "gemini-1.5-pro").
temperature  number  Provider default  Sampling temperature.
maxTokens    number  Provider default  Maximum tokens in the response.
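
A minimal config sets only the required field and leaves the rest to the provider:

import type { ModelOptions } from "@synkro/agents";

// temperature and maxTokens are omitted, so the provider defaults apply
const modelConfig: ModelOptions = { model: "gpt-4o-mini" };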

Custom providers

Implement the ModelProvider interface to use any LLM backend:

import type { ModelProvider, Message, ModelOptions, ModelResponse } from "@synkro/agents";

class OllamaProvider implements ModelProvider {
  private readonly baseUrl: string;

  constructor(baseUrl = "http://localhost:11434") {
    this.baseUrl = baseUrl;
  }

  async chat(messages: Message[], options: ModelOptions): Promise<ModelResponse> {
    const response = await fetch(`${this.baseUrl}/api/chat`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: options.model,
        messages: messages.map((m) => ({
          role: m.role,
          content: m.content,
        })),
        stream: false, // request the full response in one payload
      }),
    });
    if (!response.ok) {
      throw new Error(`Ollama request failed: ${response.status} ${response.statusText}`);
    }
    const data = await response.json();
    return {
      content: data.message.content,
      toolCalls: undefined,
      usage: {
        promptTokens: data.prompt_eval_count ?? 0,
        completionTokens: data.eval_count ?? 0,
        totalTokens: (data.prompt_eval_count ?? 0) + (data.eval_count ?? 0),
      },
      finishReason: "stop",
    };
  }
}
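
The custom provider then plugs into createAgent exactly like the built-ins. A sketch, assuming a local Ollama instance with the model already pulled:

import { createAgent } from "@synkro/agents";

const agent = createAgent({
  name: "local-agent",
  systemPrompt: "You are a helpful assistant.",
  provider: new OllamaProvider(),
  model: { model: "llama3" }, // any model pulled into the local Ollama instance
});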

ModelProvider interface

interface ModelProvider {
  chat(messages: Message[], options: ModelOptions): Promise<ModelResponse>;
  chatStream?(messages: Message[], options: ModelOptions): AsyncIterable<ModelStreamChunk>;
}

The chatStream method is optional. If provided, it enables streaming responses.
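
A streaming sketch for the OllamaProvider above: Ollama's /api/chat emits newline-delimited JSON objects when stream is true. The shape of ModelStreamChunk is an assumption here (a single content delta field); add ModelStreamChunk to the type imports before using this:

async *chatStream(messages: Message[], options: ModelOptions): AsyncIterable<ModelStreamChunk> {
  const response = await fetch(`${this.baseUrl}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: options.model,
      messages: messages.map((m) => ({ role: m.role, content: m.content })),
      stream: true,
    }),
  });
  // Each line of the response body is a standalone JSON object carrying a partial message.
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let buffered = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });
    const lines = buffered.split("\n");
    buffered = lines.pop()!; // keep any incomplete trailing line for the next read
    for (const line of lines) {
      if (!line.trim()) continue;
      const data = JSON.parse(line);
      yield { content: data.message?.content ?? "" }; // field name on ModelStreamChunk is an assumption
    }
  }
}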

ModelResponse

type ModelResponse = {
  content: string;
  toolCalls?: ToolCall[];
  usage: TokenUsage;
  finishReason: "stop" | "tool_calls" | "length";
};

Field         Description
content       Text content of the assistant response.
toolCalls     Tool calls requested by the model, if any.
usage         Token counts: promptTokens, completionTokens, totalTokens.
finishReason  "stop" (normal), "tool_calls" (wants to call tools), "length" (hit token limit).