
Build Your First AI Assistant

Chris · January 26, 2026 at 2 PM WAT


A practical beginner tutorial that walks through building a simple AI assistant using Python and a free local language model. Includes setup, code, and explanation without hype.

Okay, fellow human. We’re going to wire together a language model and give it instructions. We’re not creating intelligence, but we’re building something that talks back and listens. It’s powerful, yes, but also understandable. Stick with me.

By the end, you’ll have a tiny AI assistant running on your machine. No cloud. No API keys. Just Python, a local model, and your curiosity. Ready? Let’s do this together.

What You’ll Need

  • A computer with at least 8GB of RAM (16GB is smoother, but 8 will do)
  • Python 3.9 or newer installed
  • Comfort running commands in a terminal
  • About 4-8GB of free storage for the model

If you can install a Python package and run a script, you’re ready. That’s the bar.

Step 1: Install Ollama

We’re using Ollama to run our model locally. Think of it as your personal AI engine. Go ahead and download it from the official site. Done? Great. Open a terminal and check it’s working:

code
ollama --version

Step 2: Grab a Model

We’ll use the 8B parameter version of Llama 3. Big name, small setup. Run:

code
ollama pull llama3

It’s a few gigabytes. Watch the progress. This is your AI sitting on your own machine.

Step 3: Say Hello

Before coding, let’s see it respond. Type:

code
ollama run llama3

Try typing a question, like “What’s 2+2?” See the response? That’s your assistant already talking. Exit with Ctrl+D or `/bye`. Notice how it feels responsive, not magical.

Step 4: Python Time

Create a new folder for your project. Inside, make a virtual environment:

code
python -m venv ai-assistant
# Activate it:
ai-assistant\Scripts\activate  # Windows
source ai-assistant/bin/activate  # macOS/Linux

Install the Python client for Ollama:

code
pip install ollama

Done? Good. Next, open your editor.

Step 5: Build the Assistant

We’ll make a single file: `assistant.py`. Type along. Don’t copy blindly. Ask yourself what each line does.

code
import ollama

SYSTEM_PROMPT = """
You are a helpful assistant.
You are concise and clear.
You do not make up facts.
"""

def ask_assistant(user_input):
    response = ollama.chat(
        model="llama3",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_input}
        ]
    )
    return response['message']['content']

print("Ready. Type 'exit' or 'quit' to stop.")
while True:
    user_text = input("\nYou: ")
    if user_text.lower() in ["exit", "quit"]:
        print("Goodbye!")
        break
    reply = ask_assistant(user_text)
    print("\nAssistant:", reply)

Run it with `python assistant.py` and start talking. Notice how changing your questions or tone changes the responses. That’s all wiring, not wizardry.

Step 6: Explore and Experiment

  • Modify the system prompt and see how it changes the personality
  • Keep conversation history to maintain context across messages
  • Load text from files to give your assistant memory
  • Wrap it in a simple web interface with Flask or FastAPI
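
The conversation-history idea from the list above can be sketched in a few lines. This is a minimal, illustrative version built on the `ollama` client from Step 5; `build_messages`, `chat_with_memory`, and the swappable `send` parameter are names invented here for clarity, not part of the library:

```python
SYSTEM_PROMPT = "You are concise and clear. You do not make up facts."

def build_messages(history, user_input):
    # Full conversation: system prompt, every prior turn, then the new message.
    return ([{"role": "system", "content": SYSTEM_PROMPT}]
            + history
            + [{"role": "user", "content": user_input}])

def send_to_llama3(messages):
    # Talks to the local model; requires the Ollama server to be running.
    import ollama
    return ollama.chat(model="llama3", messages=messages)["message"]["content"]

def chat_with_memory(history, user_input, send=send_to_llama3):
    reply = send(build_messages(history, user_input))
    # Remember both sides of the exchange so the next call has context.
    history.append({"role": "user", "content": user_input})
    history.append({"role": "assistant", "content": reply})
    return reply
```

Because every call now sees the whole conversation, follow-ups like "and what about the second one?" finally make sense to the model. The `send` parameter also lets you plug in a fake model while experimenting, so you can test the wiring without waiting on generation.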

The magic isn’t in the model itself. It’s in your instructions, your setup, and your curiosity. Try things, break things, watch it respond - you’re learning by doing.
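
And for the load-text-from-files experiment, here's one rough sketch: read a notes file and fold its contents into the system prompt before calling the model. `prompt_with_notes` is a made-up helper name for illustration, not an Ollama feature:

```python
from pathlib import Path

BASE_PROMPT = "You are concise and clear. You do not make up facts."

def prompt_with_notes(notes_path):
    # Build a system prompt that includes a notes file's contents, if it exists.
    path = Path(notes_path)
    if not path.exists():
        return BASE_PROMPT
    notes = path.read_text(encoding="utf-8").strip()
    return f"{BASE_PROMPT}\n\nUse these notes when answering:\n{notes}"
```

Pass the result in as the `content` of the system message in `assistant.py`, and the model will answer with your notes in view. Crude, but it's the seed of every "chat with your documents" tool you've seen.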

Tags

#ai #beginner #local-ai #python #step-by-step #tutorial #ollama
