# Providers
`@synkro/agents` ships with three built-in providers and a simple interface for custom integrations. All providers use `fetch` internally, so no SDK packages are needed.
## Built-in providers

### OpenAI
```ts
import { OpenAIProvider } from "@synkro/agents";

const provider = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!,
  // baseUrl: "https://api.openai.com/v1" (default)
});
```

Supports GPT-4, GPT-4o, GPT-4o-mini, and any model available via the OpenAI Chat Completions API.
### Anthropic

```ts
import { AnthropicProvider } from "@synkro/agents";

const provider = new AnthropicProvider({
  apiKey: process.env.ANTHROPIC_API_KEY!,
  // baseUrl: "https://api.anthropic.com/v1" (default)
});
```

Supports Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku, and any model available via the Anthropic Messages API.
### Gemini

```ts
import { GeminiProvider } from "@synkro/agents";

const provider = new GeminiProvider({
  apiKey: process.env.GEMINI_API_KEY!,
  // baseUrl: "https://generativelanguage.googleapis.com/v1beta" (default)
});
```

Supports Gemini 1.5 Pro, Gemini 1.5 Flash, and any model available via the Gemini API.
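Since all three constructors share one options shape, a provider can be chosen at runtime. A minimal sketch; the `providerFromEnv` helper and the `LLM_PROVIDER` variable are illustrative, not part of the library:

```ts
import { OpenAIProvider, AnthropicProvider, GeminiProvider } from "@synkro/agents";
import type { ModelProvider } from "@synkro/agents";

// Hypothetical helper: pick a built-in provider from an environment variable.
function providerFromEnv(): ModelProvider {
  switch (process.env.LLM_PROVIDER) {
    case "openai":
      return new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! });
    case "anthropic":
      return new AnthropicProvider({ apiKey: process.env.ANTHROPIC_API_KEY! });
    case "gemini":
      return new GeminiProvider({ apiKey: process.env.GEMINI_API_KEY! });
    default:
      throw new Error(`Unknown LLM_PROVIDER: ${process.env.LLM_PROVIDER}`);
  }
}
```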
## Provider options
All built-in providers accept the same options shape:
| Field | Type | Description |
|---|---|---|
| `apiKey` | `string` | API key for authentication. |
| `baseUrl` | `string` | Optional. Override the default API endpoint. Useful for proxies or self-hosted deployments. |
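For example, `baseUrl` lets you point a provider at an OpenAI-compatible gateway. A sketch with a placeholder proxy URL:

```ts
import { OpenAIProvider } from "@synkro/agents";

// The URL below is a placeholder; substitute your own gateway or
// self-hosted endpoint that speaks the same API.
const provider = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!,
  baseUrl: "https://llm-proxy.internal.example.com/v1",
});
```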
## Using a provider
Pass the provider instance and a model config to `createAgent`:
```ts
import { createAgent, AnthropicProvider } from "@synkro/agents";

const agent = createAgent({
  name: "claude-agent",
  systemPrompt: "You are a helpful assistant.",
  provider: new AnthropicProvider({ apiKey: process.env.ANTHROPIC_API_KEY! }),
  model: {
    model: "claude-sonnet-4-20250514",
    temperature: 0.5,
    maxTokens: 4096,
  },
});
```

### ModelOptions
| Field | Type | Default | Description |
|---|---|---|---|
| `model` | `string` | required | Model identifier (e.g. `"gpt-4o"`, `"claude-sonnet-4-20250514"`, `"gemini-1.5-pro"`). |
| `temperature` | `number` | Provider default | Sampling temperature. |
| `maxTokens` | `number` | Provider default | Maximum tokens in the response. |
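Only `model` is required. A sketch of the earlier agent pointed at OpenAI instead, leaving the sampling options at their provider defaults:

```ts
import { createAgent, OpenAIProvider } from "@synkro/agents";

const agent = createAgent({
  name: "gpt-agent",
  systemPrompt: "You are a helpful assistant.",
  provider: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! }),
  model: {
    // temperature and maxTokens are omitted, so the provider defaults apply
    model: "gpt-4o-mini",
  },
});
```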
## Custom providers
Implement the `ModelProvider` interface to use any LLM backend:
```ts
import type {
  ModelProvider,
  Message,
  ModelOptions,
  ModelResponse,
} from "@synkro/agents";

class OllamaProvider implements ModelProvider {
  private readonly baseUrl: string;

  constructor(baseUrl = "http://localhost:11434") {
    this.baseUrl = baseUrl;
  }

  async chat(messages: Message[], options: ModelOptions): Promise<ModelResponse> {
    const response = await fetch(`${this.baseUrl}/api/chat`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: options.model,
        messages: messages.map((m) => ({
          role: m.role,
          content: m.content,
        })),
        stream: false,
      }),
    });

    const data = await response.json();

    return {
      content: data.message.content,
      toolCalls: undefined,
      usage: {
        promptTokens: data.prompt_eval_count ?? 0,
        completionTokens: data.eval_count ?? 0,
        totalTokens: (data.prompt_eval_count ?? 0) + (data.eval_count ?? 0),
      },
      finishReason: "stop",
    };
  }
}
```
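A custom provider plugs into `createAgent` the same way as a built-in one. For example, assuming a local Ollama server with a `llama3.1` model pulled:

```ts
import { createAgent } from "@synkro/agents";

const agent = createAgent({
  name: "local-agent",
  systemPrompt: "You are a helpful assistant.",
  provider: new OllamaProvider(), // defaults to http://localhost:11434
  model: { model: "llama3.1" },
});
```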
### ModelProvider interface

```ts
interface ModelProvider {
  chat(messages: Message[], options: ModelOptions): Promise<ModelResponse>;
  chatStream?(messages: Message[], options: ModelOptions): AsyncIterable<ModelStreamChunk>;
}
```

The `chatStream` method is optional. If provided, it enables streaming responses.
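As an illustration, here is a streaming variant for the `OllamaProvider` above, reading Ollama's newline-delimited JSON stream. The yielded shape (`{ content }`) is an assumption about `ModelStreamChunk`; check the actual type exported by `@synkro/agents` before relying on it.

```ts
// Sketch: add to the OllamaProvider class above. Assumes ModelStreamChunk
// carries a `content` delta field (verify against the real type).
async *chatStream(
  messages: Message[],
  options: ModelOptions,
): AsyncIterable<ModelStreamChunk> {
  const response = await fetch(`${this.baseUrl}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: options.model,
      messages: messages.map((m) => ({ role: m.role, content: m.content })),
      stream: true, // Ollama emits one JSON object per line
    }),
  });

  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // Complete lines are full JSON chunks; keep any trailing partial line.
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";
    for (const line of lines) {
      if (!line.trim()) continue;
      const chunk = JSON.parse(line);
      if (chunk.message?.content) {
        yield { content: chunk.message.content };
      }
    }
  }
}
```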
### ModelResponse
```ts
type ModelResponse = {
  content: string;
  toolCalls?: ToolCall[];
  usage: TokenUsage;
  finishReason: "stop" | "tool_calls" | "length";
};
```

| Field | Description |
|---|---|
| `content` | Text content of the assistant response. |
| `toolCalls` | Tool calls requested by the model, if any. |
| `usage` | Token counts: `promptTokens`, `completionTokens`, `totalTokens`. |
| `finishReason` | `"stop"` (normal), `"tool_calls"` (wants to call tools), `"length"` (hit token limit). |