
How to Build Your First AI Assistant Step by Step
A practical beginner tutorial that walks through building a simple AI assistant using Python and a free local language model. Includes setup, code, and explanation without hype.
Okay, fellow human. We’re going to wire together a language model and give it instructions. We’re not creating intelligence, but we’re building something that talks back and listens. It’s powerful, yes, but also understandable. Stick with me.
By the end, you’ll have a tiny AI assistant running on your machine. No cloud. No API keys. Just Python, a local model, and your curiosity. Ready? Let’s do this together.
What You’ll Need
- A computer with at least 8GB of RAM (16GB is smoother, but 8 will do)
- Python 3.9 or newer installed
- Comfort running commands in a terminal
- About 4-8GB of free storage for the model
If you can install a Python package and run a script, you’re ready. That’s the bar.
Step 1: Install Ollama
We’re using Ollama to run our model locally. Think of it as your personal AI engine. Go ahead and download it from the official site. Done? Great. Open a terminal and check it’s working:
ollama --version
Step 2: Grab a Model
We’ll use the 8B parameter version of Llama 3. Big name, small setup. Run:
ollama pull llama3
It’s a few gigabytes, so watch the progress. That’s your AI, downloading to your own machine.
Step 3: Say Hello
Before coding, let’s see it respond. Type:
ollama run llama3
Try typing a question, like “What’s 2+2?” See the response? That’s your assistant already talking. Exit with Ctrl+D or `/bye`. Notice how it feels responsive, not magical.
Step 4: Python Time
Create a new folder for your project. Inside, make a virtual environment:
python -m venv ai-assistant
# Activate it:
ai-assistant\Scripts\activate # Windows
source ai-assistant/bin/activate # macOS/Linux
Install the Python client for Ollama:
pip install ollama
Done? Good. Next, open your editor.
Step 5: Build the Assistant
We’ll make a single file: `assistant.py`. Type along. Don’t copy blindly. Ask yourself what each line does.
import ollama
SYSTEM_PROMPT = """
Hello, assistant.
You are concise and clear.
You do not make up facts.
"""
def ask_assistant(user_input):
    response = ollama.chat(
        model="llama3",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_input}
        ]
    )
    return response['message']['content']

print("Ready. Type 'exit' or 'quit' to stop.")
while True:
    user_text = input("\nYou: ")
    if user_text.lower() in ["exit", "quit"]:
        print("Goodbye!")
        break
    reply = ask_assistant(user_text)
    print("\nAssistant:", reply)
Run it with `python assistant.py` and start talking. Notice how changing your questions or tone changes the responses. That’s all wiring, not wizardry.
Step 6: Explore and Experiment
- Modify the system prompt, see how it changes personality
- Keep conversation history to maintain context across messages
- Load text from files to give your assistant memory
- Wrap it in a simple web interface with Flask or FastAPI
The magic isn’t in the model itself. It’s in your instructions, your setup, and your curiosity. Try things, break things, watch it respond - you’re learning by doing.
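That second experiment, keeping conversation history, is mostly list bookkeeping. Here’s a minimal sketch under the same setup as `assistant.py`; `build_messages` and `remember` are hypothetical helpers, and the ten-turn limit is an arbitrary choice to stop the prompt growing forever:

```python
SYSTEM_PROMPT = "You are concise and clear. You do not make up facts."

def build_messages(history, user_input):
    """Combine the system prompt, prior turns, and the new question."""
    return ([{"role": "system", "content": SYSTEM_PROMPT}]
            + list(history)
            + [{"role": "user", "content": user_input}])

def remember(history, user_input, reply, max_turns=10):
    """Record the latest exchange, keeping only the most recent turns."""
    history = history + [
        {"role": "user", "content": user_input},
        {"role": "assistant", "content": reply},
    ]
    return history[-2 * max_turns:]  # each turn is a user + assistant pair
```

In the loop, you’d call `ollama.chat(model="llama3", messages=build_messages(history, user_text))`, then `history = remember(history, user_text, reply)` so the next question carries the context along.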