Overview

@synkro/agents adds AI agent orchestration to Synkro. Build LLM-powered agents that reason, call tools, remember conversations, and plug directly into Synkro’s event engine — with zero additional dependencies.

When to use it

  • You need an LLM agent that can call tools and reason in a loop (ReAct pattern).
  • You want to run agents as Synkro event handlers or workflow steps.
  • You need conversation memory backed by Redis, reusing your existing Synkro transport.
  • You want provider-agnostic code that works with OpenAI, Anthropic, or Google Gemini.

Features

  • ReAct loop — Agents reason and act in iterations: call tools, observe results, decide next steps.
  • Tool execution — Define typed tools with JSON Schema parameters; the agent calls them automatically.
  • Provider agnostic — Built-in support for OpenAI, Anthropic, and Gemini. Bring your own via the ModelProvider interface.
  • Conversation memory — Redis-backed memory reuses Synkro’s TransportManager, so no extra infrastructure is needed.
  • Synkro integration — agent.asHandler() turns any agent into an event handler or workflow step.
  • Token tracking — Cumulative token usage per run with optional budget limits and callbacks.
  • Safety guardrails — maxIterations and tokenBudget prevent runaway agents.
  • Observability — Agents emit lifecycle events (started, completed, tool executed) to Synkro’s event system.
  • Dynamic Router — LLM-based N-path branching for classifying and routing inputs to different handlers.
  • Supervisor / Worker — Delegate tasks to specialized worker agents with configurable delegation rounds.
  • Debate — Multi-agent collaboration where participants discuss and debate a topic over multiple rounds.
  • Zero dependencies — Uses fetch for HTTP, so there are no SDK packages to install.
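
To make the tool-execution feature concrete, here is a minimal sketch of what a typed tool with JSON Schema parameters could look like. The `Tool` interface, field names, and `execute` signature below are illustrative assumptions, not the exact types exported by @synkro/agents — see the API Reference for the real definitions.

```typescript
// Hypothetical Tool shape -- the real @synkro/agents types may differ.
interface Tool {
  name: string;
  description: string;
  parameters: object; // JSON Schema describing the tool's arguments
  execute: (args: Record<string, unknown>) => Promise<unknown>;
}

// Example tool the agent could call during its ReAct loop.
const getWeather: Tool = {
  name: "get_weather",
  description: "Return the current temperature for a city.",
  parameters: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
  execute: async (args) => {
    // A real tool would call a weather API here; this stub returns a fixed value.
    return { city: args.city, tempC: 21 };
  },
};
```

The agent sends `parameters` to the model so it can produce well-formed arguments, then invokes `execute` with the parsed result.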

Installation

npm install @synkro/agents @synkro/core

Quick start

  1. Install the packages

    npm install @synkro/agents @synkro/core
  2. Create an agent and run it

    import { createAgent, OpenAIProvider } from "@synkro/agents";

    const agent = createAgent({
      name: "assistant",
      systemPrompt: "You are a helpful assistant.",
      provider: new OpenAIProvider({
        apiKey: process.env.OPENAI_API_KEY!,
      }),
      model: { model: "gpt-4o" },
    });

    const result = await agent.run("What is the capital of France?");

    console.log(result.output);
    // "The capital of France is Paris."

    console.log(result.tokenUsage);
    // { promptTokens: 25, completionTokens: 12, totalTokens: 37 }
  3. Check the result status

    if (result.status === "completed") {
      console.log("Agent finished successfully");
    }
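
To show what the ReAct loop and the maxIterations guardrail amount to, here is a self-contained sketch in plain TypeScript. It uses a stubbed provider and invented names (`Provider`, `StubProvider`, the `"max_iterations"` status); it is an illustration of the pattern, not the @synkro/agents implementation.

```typescript
// Illustrative ReAct-style loop with a maxIterations guardrail.
// All names here are hypothetical, for explanation only.
type Step =
  | { type: "tool"; name: string; args: unknown }
  | { type: "final"; output: string };

interface Provider {
  next(history: string[]): Step;
}

// Stub provider: requests one tool call, then produces a final answer.
class StubProvider implements Provider {
  next(history: string[]): Step {
    if (history.length === 0) {
      return { type: "tool", name: "lookup", args: { q: "capital of France" } };
    }
    return { type: "final", output: "The capital of France is Paris." };
  }
}

function runAgent(
  provider: Provider,
  maxIterations = 5
): { status: string; output?: string } {
  const history: string[] = [];
  for (let i = 0; i < maxIterations; i++) {
    const step = provider.next(history);
    if (step.type === "final") {
      return { status: "completed", output: step.output };
    }
    // Execute the tool and feed the observation back into the loop.
    history.push(`observation from ${step.name}: Paris`);
  }
  // Guardrail: stop runaway agents once maxIterations is exhausted.
  return { status: "max_iterations" };
}
```

The real library's result statuses and loop internals may differ; consult the API Reference for the authoritative shapes.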

Next steps

  • Creating Agents — configure agents with tools, memory, and guardrails.
  • Tools — define tools the agent can call during its reasoning loop.
  • Providers — use OpenAI, Anthropic, Gemini, or bring your own.
  • Memory — persist conversations across runs.
  • Synkro Integration — run agents as event handlers and workflow steps.
  • Observability — monitor agent lifecycle events.
  • Dynamic Router — LLM-based dynamic routing for N-path branching.
  • Supervisor / Worker — delegate tasks to specialized worker agents.
  • Debate — multi-agent collaboration and debate over multiple rounds.
  • API Reference — complete type and function reference.