# createAgent

## Problem
You need to send messages to an LLM provider and get back a response — but every time you do this manually, you also have to wire up timing, token counting, and result formatting yourself. It becomes repetitive and inconsistent across files.
## Solution

`createAgent` wraps your LLM provider in a single, reusable agent object. Call `.run()` with your messages and it handles execution, stats collection, and response formatting automatically.
## Features & Use Cases

Use `createAgent` when:
- You need a single-turn LLM call (one question → one answer)
- You want execution stats (token usage, duration, step count) returned alongside the result
- You are using LangChain with a custom provider and want a clean wrapper
Do not use `createAgent` when you need the agent to call tools across multiple steps; use `reActLoop` for that.
## Import

```ts
import { createAgent } from "llm-layer-engine";
```

## Function Signature

```ts
function createAgent(config: AgentConfig): {
  run(messages: Message[]): Promise<{
    result: string;
    stats: {
      durationMs: number;
      tokenUsage: number;
      numberOfSteps: number;
      toolCalls: number;
      errors: number;
    };
  }>;
}
```

## Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `config.provider` | `LLMProvider` | ✅ | Your LangChain-compatible provider object |
| `config.model` | `string` | ✅ | Model name (e.g. `"claude-3-5-sonnet-20241022"`) |
| `config.temperature` | `number` | ❌ | Sampling temperature; defaults to `0.7` if not provided |
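To make the contract above concrete, here is a toy, provider-agnostic sketch of the `createAgent` shape (illustrative only; the real internals of `llm-layer-engine` may differ, and `createAgentSketch` is our own name, not a library export). It shows the documented `temperature` default of `0.7` and the stats object `.run()` resolves with:

```ts
// Toy sketch of the createAgent contract; not the library's actual implementation.
type Message = { role: "system" | "user" | "assistant"; content: string };

interface LLMProvider {
  name: string;
  run(args: {
    messages: Message[];
    config: { model: string; temperature: number };
  }): Promise<{ content: string }>;
}

function createAgentSketch(config: {
  provider: LLMProvider;
  model: string;
  temperature?: number;
}) {
  // The documented default when temperature is omitted.
  const temperature = config.temperature ?? 0.7;
  return {
    async run(messages: Message[]) {
      const start = Date.now();
      let errors = 0;
      let content = "";
      try {
        const res = await config.provider.run({
          messages,
          config: { model: config.model, temperature },
        });
        content = res.content;
      } catch {
        errors = 1;
      }
      return {
        result: content,
        stats: {
          durationMs: Date.now() - start,
          tokenUsage: 0, // the real implementation reads this from the provider's raw response
          numberOfSteps: 1, // single-turn: always one step
          toolCalls: 0,
          errors,
        },
      };
    },
  };
}
```

Note how the sketch never rejects: provider failures are caught and surfaced through `stats.errors` instead.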
## Example

```ts
import { createAgent } from "llm-layer-engine";
import type { LLMProvider } from "llm-layer-engine";
import { ChatAnthropic } from "@langchain/anthropic";

// Step 1 — Build your provider
const model = new ChatAnthropic({ model: "claude-3-5-sonnet-20241022" });

const provider: LLMProvider = {
  name: "anthropic",
  async run({ messages, config }) {
    const res = await model.invoke(
      messages.map((m) => ({ role: m.role, content: m.content }))
    );
    return { content: res.content as string };
  },
};

// Step 2 — Create the agent
const agent = createAgent({
  provider,
  model: "claude-3-5-sonnet-20241022",
  temperature: 0.4,
});

// Step 3 — Run it
const { result, stats } = await agent.run([
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Summarize the ReAct pattern in 2 sentences." },
]);

console.log(result);
// → "ReAct combines reasoning and acting in an iterative loop..."

console.log(stats);
// → { durationMs: 812, tokenUsage: 147, numberOfSteps: 1, toolCalls: 0, errors: 0 }
```

## Stats Reference

| Field | Type | Description |
|---|---|---|
| `durationMs` | `number` | Total execution time in milliseconds |
| `tokenUsage` | `number` | Total tokens used (from the provider's raw response) |
| `numberOfSteps` | `number` | Always `1` for `createAgent` |
| `toolCalls` | `number` | Number of tool calls made (usually `0` here) |
| `errors` | `number` | Count of errors during execution |
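The stats object lends itself to lightweight monitoring. Below is a hypothetical helper (not part of `llm-layer-engine`; the name and the latency budget are our own) that flags failed or slow runs:

```ts
// Hypothetical monitoring helper (not a library export):
// flags a run that errored or exceeded a latency budget.
type RunStats = {
  durationMs: number;
  tokenUsage: number;
  numberOfSteps: number;
  toolCalls: number;
  errors: number;
};

function shouldAlert(stats: RunStats, maxDurationMs = 5000): boolean {
  return stats.errors > 0 || stats.durationMs > maxDurationMs;
}
```

You might call `shouldAlert(stats)` right after `agent.run()` and route flagged runs to your logger or metrics pipeline.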
## Conclusion

`createAgent` is your go-to for single-turn LLM calls. Initialize it once and call `.run()` anywhere in your Node.js service; it keeps your provider logic clean and gives you stats without any extra setup.

When your use case needs tool calls or multi-step reasoning, graduate to `reActLoop`.
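"Initialize it once" can be enforced with a small module-level singleton. Here is a generic sketch (the `once` helper is our own, not a library export):

```ts
// Generic lazy-singleton helper (our own sketch, not part of llm-layer-engine):
// the factory runs at most once, so every caller shares the same instance.
function once<T>(factory: () => T): () => T {
  let cached: T | undefined;
  let built = false;
  return () => {
    if (!built) {
      cached = factory();
      built = true;
    }
    return cached as T;
  };
}

// Usage sketch (provider and model as in the example above):
// export const getAgent = once(() =>
//   createAgent({ provider, model: "claude-3-5-sonnet-20241022" })
// );
```

Exporting `getAgent` from a shared module means the provider is wired up exactly once per process, however many call sites use it.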