

Chrise · March 09, 2026 at 6 PM WAT

Build a Lightweight AI Agent That Can Use APIs

An AI agent is just a model that can decide when to call tools. With a small Node.js loop and a couple of APIs, you can build one that fetches data, triggers workflows, and actually does things.

Most AI apps still behave like polite parrots. You ask something, then they generate text. Useful, sure, but a bit passive. The moment the model can call real APIs, it stops being a chatbot and starts acting like a small automation engine. Now it can fetch data, update systems, or trigger workflows.

You don't need a giant framework to build one. A basic agent can be a Node.js script, an LLM API, and a short list of tools. The model decides what tool to use, then your code runs it.

The Minimal Stack

A useful, lightweight stack looks like this:

  • Node.js for the runtime
  • an LLM API like OpenAI or Anthropic
  • a couple of APIs you want the agent to use
  • a small loop that passes instructions between them

You can add frameworks like LangChain and LlamaIndex later if you want, but honestly a lot of devs skip them for small agents because debugging is easier when everything is just plain JavaScript.

Setting Up the Project

Start with a simple Node project. We're using Node 18 or newer, which includes fetch natively.

bash
mkdir ai-agent
cd ai-agent
npm init -y
npm install openai dotenv

Create a `.env` file with your API keys.

This will not work without real API keys. Go get them first.
bash
OPENAI_API_KEY=your_key_here
WEATHER_API_KEY=your_weather_api_key
SLACK_TOKEN=your_slack_token

Then create a file called `agent.js`.

Defining Tools the Agent Can Use

Tools are just functions your agent is allowed to call. Each one usually wraps an external API. Here we use OpenAI, but the process is the same for most modern LLM APIs.

javascript
async function getWeather(city) {
  try {
    const res = await fetch(`https://api.weatherapi.com/v1/current.json?key=${process.env.WEATHER_API_KEY}&q=${encodeURIComponent(city)}`)
    if (!res.ok) throw new Error(`Weather API returned ${res.status}`)
    const data = await res.json()
    if (!data.current) throw new Error('Invalid response format')
    return `${city} temperature is ${data.current.temp_c}C`
  } catch (error) {
    return `Failed to get weather: ${error.message}`
  }
}

async function sendSlackMessage(message) {
  try {
    const res = await fetch("https://slack.com/api/chat.postMessage", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${process.env.SLACK_TOKEN}`
      },
      body: JSON.stringify({
        channel: "#alerts",
        text: message
      })
    })
    
    const data = await res.json()
    if (!data.ok) throw new Error(`Slack error: ${data.error}`)
    return "Message sent to Slack"
  } catch (error) {
    return `Failed to send Slack message: ${error.message}`
  }
}

So far nothing too fancy, just two functions.
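If you'd rather not grow an if/else chain later, you can map tool names to functions in one place. Here's a minimal dispatcher sketch — `makeToolRunner` is a hypothetical helper we're writing ourselves, not part of any SDK:

```javascript
// Build a dispatcher from a registry of tool functions.
// Unknown names and thrown errors come back as plain strings,
// matching the style of the tools above.
function makeToolRunner(registry) {
  return async function runTool(name, args) {
    const fn = registry[name];
    if (!fn) return `Unknown tool: ${name}`;
    try {
      return await fn(args);
    } catch (error) {
      return `Tool ${name} failed: ${error.message}`;
    }
  };
}

// Wiring up the two tools from this post:
// const runTool = makeToolRunner({
//   getWeather: (args) => getWeather(args.city),
//   sendSlackMessage: (args) => sendSlackMessage(args.message),
// });
```

An agent loop can then call `runTool(toolCall.function.name, args)` and never needs to change when you add tools.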

Letting the Model Choose the Tool

Modern LLM APIs support structured tool calls. You send a list of tools and the model decides when to use one.

javascript
const tools = [
  {
    type: "function",
    function: {
      name: "getWeather",
      description: "Get the current weather for a city",
      parameters: {
        type: "object",
        properties: {
          city: { type: "string" }
        },
        required: ["city"]
      }
    }
  },
  {
    type: "function",
    function: {
      name: "sendSlackMessage",
      description: "Send a message to a Slack channel",
      parameters: {
        type: "object",
        properties: {
          message: { type: "string" }
        },
        required: ["message"]
      }
    }
  }
];

You send this tool definition to the model along with the user prompt.
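When the model picks a tool, its reply comes back as an assistant message with a `tool_calls` array, and the arguments arrive as a JSON string you parse yourself. The payload below is illustrative (the `id` is made up), but the shape matches OpenAI's chat completions format:

```javascript
// Illustrative assistant message requesting a tool call.
const exampleMessage = {
  role: "assistant",
  content: null,
  tool_calls: [
    {
      id: "call_abc123", // made-up id for illustration
      type: "function",
      function: {
        name: "getWeather",
        arguments: '{"city":"Abuja"}' // a JSON string, not an object
      }
    }
  ]
};

// Parse the arguments before calling the actual function.
const call = exampleMessage.tool_calls[0];
const args = JSON.parse(call.function.arguments);
console.log(call.function.name, args.city); // getWeather Abuja
```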

The Slack integration assumes:

  • You have a Slack bot token with chat:write scope.
  • A channel named #alerts exists.
  • The bot is invited to that channel.

If that's not your setup, just enjoy the post. We're glad you're here.

The Agent Loop

An agent is basically a loop. Ask the model what to do. If it wants to call a tool, run the tool. Send the result back. Repeat until the model stops asking for tools. Thank said model, so you'll be spared in the AI apocalypse.

javascript
// CommonJS version (recommended for beginners)
const OpenAI = require("openai");
require('dotenv').config();

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function runAgent(userPrompt) {
  let messages = [{ role: "user", content: userPrompt }];
  const maxTurns = 10; // prevent infinite loops
  
  for (let turn = 0; turn < maxTurns; turn++) {
    try {
      const response = await openai.chat.completions.create({
        model: "gpt-4o-mini",
        messages,
        tools,
      });

      const message = response.choices[0].message;
      messages.push(message);

      if (!message.tool_calls) {
        return message.content;
      }

      for (const toolCall of message.tool_calls) {
        // Parse arguments safely
        let args;
        try {
          args = JSON.parse(toolCall.function.arguments);
        } catch (e) {
          console.error('Invalid JSON from model:', toolCall.function.arguments);
          continue;
        }

        let result;
        if (toolCall.function.name === "getWeather") {
          result = await getWeather(args.city);
        } else if (toolCall.function.name === "sendSlackMessage") {
          result = await sendSlackMessage(args.message);
        } else {
          result = `Unknown tool: ${toolCall.function.name}`;
        }

        messages.push({
          role: "tool",
          tool_call_id: toolCall.id,
          name: toolCall.function.name, // optional, but handy when reading logs
          content: result,
        });
      }
    } catch (error) {
      return `Agent error: ${error.message}`;
    }
  }
  
  return "Agent reached maximum turns without completing";
}

// ES Modules version (if you prefer)
/*
import OpenAI from "openai";
import 'dotenv/config';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
// ... rest of the code is the same
*/

That tiny loop is enough to create a functional agent, so you can just call it with a normal prompt. The model decides whether a tool is needed, and your code executes it. Try something simple first so you can see the magic happen.

javascript
// Test with a simple query
runAgent("What's the weather in Abuja today?")
  .then(console.log)
  .catch(console.error);

// Test with multiple tool calls
runAgent("What's the weather in Abuja today? Then post it to #alerts on Slack.")
  .then(console.log)
  .catch(console.error);

The agent runs the tool and replies with something like *The current temperature in Abuja is 31°C. A warm afternoon overall.* Neat, huh?

Where This Becomes Useful

Once the agent can call APIs, you can connect it to almost anything: Slack, GitHub, Google Calendar, internal databases, analytics dashboards. The model gathers info and triggers real actions.

A few common examples devs build first: a support assistant that opens tickets in Jira, a research agent that pulls data from multiple APIs, or a deployment helper that triggers CI pipelines.

Things Will Break

Agents are still messy. Models sometimes pick the wrong tool, send weird parameters, or try to solve everything with the same API call. Occasionally one will try to ruin a dev's life by publishing a hit piece about them.

That's where good logging helps. Print every tool call and every response so you can see what the model was thinking, evil thoughts or not.
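A cheap way to get that visibility is to wrap each tool in a logging decorator. `withLogging` below is a hypothetical helper you'd write yourself, not a library function:

```javascript
// Wrap a tool function so every call and its result get printed.
function withLogging(name, fn) {
  return async function (...args) {
    console.log(`[tool] ${name} called with`, JSON.stringify(args));
    const result = await fn(...args);
    console.log(`[tool] ${name} returned`, JSON.stringify(result));
    return result;
  };
}

// Usage: wrap once at startup, then dispatch to the wrapped versions.
// const loggedGetWeather = withLogging("getWeather", getWeather);
// const loggedSlack = withLogging("sendSlackMessage", sendSlackMessage);
```

Because the wrapper passes arguments and results straight through, the agent loop doesn't change at all.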

Tools Worth Exploring

If you want to go deeper later, a few libraries are worth looking at. LangChain if you want structured agent frameworks. LlamaIndex if your agent needs to query documents. AutoGen if you want multiple agents talking to each other.

But honestly, the simple loop you just saw gets surprisingly far.

That's all. Go crazy.

Tags

#ai-agents #api-automation #llm #nodejs #openai #upskill
