Quick Start
Get a working agent running in under 5 minutes.
Step 1 — Define Your LLM Provider
LLM Layer Engine does not ship a built-in provider. You bring your own by implementing the LLMProvider interface. Here is a LangChain example using Claude:
```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import type { LLMProvider, LLMResponse } from "llm-layer-engine";

const model = new ChatAnthropic({ model: "claude-3-5-sonnet-20241022" });

const myProvider: LLMProvider = {
  name: "anthropic",
  async run({ messages, config }): Promise<LLMResponse> {
    const response = await model.invoke(
      messages.map((m) => ({ role: m.role, content: m.content }))
    );
    return {
      content: response.content as string,
    };
  },
};
```

Step 2 — Run a Simple Agent
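Any object matching the interface works, so for local testing you can swap in a stub that never touches the network. A minimal sketch, assuming only the `name`/`run` shape shown above (the inline `Message` and `LLMResponse` types here are illustrative stand-ins for the library's exported types):

```typescript
// A stub provider for offline testing. The types below are illustrative
// stand-ins for the library's exported Message/LLMResponse types.
type Message = { role: string; content: string };
type LLMResponse = { content: string };

const stubProvider = {
  name: "stub",
  // Echo the last message instead of calling a real model.
  async run({ messages }: { messages: Message[] }): Promise<LLMResponse> {
    const last = messages[messages.length - 1];
    return { content: `echo: ${last.content}` };
  },
};
```

Passing a stub like this wherever `myProvider` is expected lets you exercise your agent wiring without API keys or network access.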
```typescript
import { createAgent } from "llm-layer-engine";

const agent = createAgent({
  provider: myProvider,
  model: "claude-3-5-sonnet-20241022",
  temperature: 0.5,
});

const { result, stats } = await agent.run([
  { role: "user", content: "What is the capital of France?" },
]);

console.log(result); // Paris
console.log(stats); // { durationMs, tokenUsage, ... }
```

Step 3 — Add a Tool
```typescript
import { registerTool, reActLoop } from "llm-layer-engine";

registerTool({
  name: "get_weather",
  execute: async ({ city }) => {
    return { weather: "Sunny", city };
  },
});

const loop = reActLoop({
  config: { provider: myProvider, model: "claude-3-5-sonnet-20241022" },
  provider: myProvider,
  messages: [{ role: "user", content: "What is the weather in Karachi?" }],
  tools: [
    { name: "get_weather", execute: async (i) => ({ weather: "Sunny", city: i.city }) },
  ],
  maxSteps: 5,
});

const { result, stats } = await loop.run();
console.log(result);
```

What Happens Under the Hood (User Perspective)
- `createAgent` sends your messages to the provider and returns the result + stats
- `reActLoop` runs multiple steps, calling tools when needed, until it has a final answer
- `registerTool` adds your tool to the global registry so the agent can find it
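To make the loop's behavior concrete, here is a conceptual sketch of a ReAct-style step loop. This is not the library's actual implementation, just the general pattern: call the model, run any tool it requests, feed the observation back, and repeat until there is a final answer or `maxSteps` is exhausted.

```typescript
// Conceptual sketch of a ReAct-style loop (not llm-layer-engine's real code).
type ToolCall = { tool: string; input: Record<string, string> };
type StepResult = { final?: string; toolCall?: ToolCall };

async function tinyReActLoop(
  // `step` stands in for one model call: given the history, it either
  // requests a tool or produces a final answer.
  step: (history: string[]) => Promise<StepResult>,
  tools: Record<string, (input: Record<string, string>) => Promise<unknown>>,
  maxSteps = 5
): Promise<string> {
  const history: string[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const result = await step(history);
    if (result.final !== undefined) return result.final; // done
    if (result.toolCall) {
      const { tool, input } = result.toolCall;
      const observation = await tools[tool](input); // execute the tool
      history.push(`observation(${tool}): ${JSON.stringify(observation)}`);
    }
  }
  throw new Error("maxSteps exceeded without a final answer");
}
```

The key design point is the cap on iterations: without `maxSteps`, a model that keeps requesting tools would loop forever.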
That is everything you need to get started.