Overview
@synkro/agents adds AI agent orchestration to Synkro. Build LLM-powered agents that reason, call tools, remember conversations, and plug directly into Synkro’s event engine — with zero additional dependencies.
When to use it
- You need an LLM agent that can call tools and reason in a loop (ReAct pattern).
- You want to run agents as Synkro event handlers or workflow steps.
- You need conversation memory backed by Redis, reusing your existing Synkro transport.
- You want provider-agnostic code that works with OpenAI, Anthropic, or Google Gemini.
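The ReAct pattern mentioned above can be sketched in a few lines: the model reasons over the transcript, either calls a tool or answers, and observations feed back into the next iteration. This is a simplified, library-free illustration — the names `think`, `tools`, and `Action` are stand-ins for this sketch, not the @synkro/agents API:

```typescript
// Minimal ReAct-style loop: on each iteration the model either calls a tool
// or produces a final answer. All names here are illustrative stubs.
type Action =
  | { type: "tool"; name: string; input: string }
  | { type: "answer"; text: string };

// Stand-in for an LLM call: a real agent would send the transcript to a provider.
function think(transcript: string[]): Action {
  if (!transcript.some((m) => m.startsWith("observation:"))) {
    return { type: "tool", name: "search", input: "capital of France" };
  }
  return { type: "answer", text: "Paris" };
}

const tools: Record<string, (input: string) => string> = {
  search: (q) => `top result for "${q}": Paris`,
};

function runAgent(question: string, maxIterations = 5): string {
  const transcript = [`question: ${question}`];
  for (let i = 0; i < maxIterations; i++) {
    const action = think(transcript);                     // reason
    if (action.type === "answer") return action.text;
    const observation = tools[action.name](action.input); // act
    transcript.push(`observation: ${observation}`);       // observe
  }
  return "stopped: maxIterations reached";                // guardrail
}

console.log(runAgent("What is the capital of France?")); // "Paris"
```

The `maxIterations` cap is the same guardrail idea the library exposes: a hard bound on reasoning rounds so a confused agent cannot loop forever.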
Features
| Feature | Description |
|---|---|
| ReAct loop | Agents reason and act in iterations — call tools, observe results, decide next steps. |
| Tool execution | Define typed tools with JSON Schema parameters; the agent calls them automatically. |
| Provider agnostic | Built-in support for OpenAI, Anthropic, and Gemini. Bring your own via the ModelProvider interface. |
| Conversation memory | Redis-backed memory reuses Synkro’s TransportManager — no extra infrastructure. |
| Synkro integration | agent.asHandler() turns any agent into an event handler or workflow step. |
| Token tracking | Cumulative token usage per run with optional budget limits and callbacks. |
| Safety guardrails | maxIterations and tokenBudget prevent runaway agents. |
| Observability | Agents emit lifecycle events (started, completed, tool executed) to Synkro’s event system. |
| Dynamic Router | LLM-based N-path branching for classifying and routing inputs to different handlers. |
| Supervisor / Worker | Delegate tasks to specialized worker agents with configurable delegation rounds. |
| Debate | Multi-agent collaboration where participants discuss and debate a topic over multiple rounds. |
| Zero dependencies | Uses fetch for HTTP — no SDK packages to install. |
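To make the "typed tools with JSON Schema parameters" row concrete, here is the general shape such a tool takes: a name, a description the model reads, a JSON Schema describing the arguments, and an execute function. The `ToolDef` interface and `getWeather` tool below are hypothetical illustrations of that shape, not the actual @synkro/agents tool API:

```typescript
// Illustrative shape of a JSON-Schema-typed tool. Names here (ToolDef,
// getWeather) are assumptions for this sketch, not library exports.
interface ToolDef {
  name: string;
  description: string;
  parameters: object; // JSON Schema describing the arguments
  execute: (args: Record<string, unknown>) => Promise<string>;
}

const getWeather: ToolDef = {
  name: "get_weather",
  description: "Look up the current weather for a city.",
  parameters: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
  // A real tool would call an external API; this one returns a canned answer.
  execute: async (args) => `Sunny in ${args.city}`,
};

getWeather.execute({ city: "Paris" }).then((r) => console.log(r)); // "Sunny in Paris"
```

The schema serves double duty: the provider uses it to constrain the model's tool-call arguments, and your code can validate against it before executing.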
Installation
```sh
npm install @synkro/agents @synkro/core
```

Quick start

1. Install the packages:

   ```sh
   npm install @synkro/agents @synkro/core
   ```

2. Create an agent and run it:

   ```ts
   import { createAgent, OpenAIProvider } from "@synkro/agents";

   const agent = createAgent({
     name: "assistant",
     systemPrompt: "You are a helpful assistant.",
     provider: new OpenAIProvider({
       apiKey: process.env.OPENAI_API_KEY!,
     }),
     model: { model: "gpt-4o" },
   });

   const result = await agent.run("What is the capital of France?");

   console.log(result.output);
   // "The capital of France is Paris."

   console.log(result.tokenUsage);
   // { promptTokens: 25, completionTokens: 12, totalTokens: 37 }
   ```

3. Check the result status:

   ```ts
   if (result.status === "completed") {
     console.log("Agent finished successfully");
   }
   ```
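The `tokenUsage` numbers shown in the quick start are what guardrails like `tokenBudget` accumulate: each model call's usage is added to a running total and checked against a limit. A minimal, library-free sketch of that bookkeeping — the `TokenBudget` class is a hypothetical illustration, not the @synkro/agents implementation:

```typescript
// TokenUsage mirrors the shape returned in result.tokenUsage; the budget
// logic below is an illustrative sketch of how a token guardrail works.
interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
}

class TokenBudget {
  private used = 0;
  constructor(private limit: number) {}

  // Record one model call's usage; return whether the budget still holds.
  record(usage: TokenUsage): boolean {
    this.used += usage.totalTokens;
    return this.used <= this.limit;
  }

  get totalUsed(): number {
    return this.used;
  }
}

const budget = new TokenBudget(50);
console.log(budget.record({ promptTokens: 25, completionTokens: 12, totalTokens: 37 })); // true
console.log(budget.record({ promptTokens: 20, completionTokens: 10, totalTokens: 30 })); // false: 67 > 50
console.log(budget.totalUsed); // 67
```

When the check fails, an agent framework would stop the run and surface a budget-exceeded status rather than issue another model call.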
Next steps
- Creating Agents — configure agents with tools, memory, and guardrails.
- Tools — define tools the agent can call during its reasoning loop.
- Providers — use OpenAI, Anthropic, Gemini, or bring your own.
- Memory — persist conversations across runs.
- Synkro Integration — run agents as event handlers and workflow steps.
- Observability — monitor agent lifecycle events.
- Dynamic Router — LLM-based dynamic routing for N-path branching.
- Supervisor / Worker — delegate tasks to specialized worker agents.
- Debate — multi-agent collaboration and debate over multiple rounds.
- API Reference — complete type and function reference.