How to Build a Local AI Agent Using LangChain, Ollama, and Mistral
Have you ever wanted to run your own AI assistant locally, without depending on cloud services or subscriptions?
In this guide, we'll walk step by step through how to set up a lightweight AI agent using:
- LangChain: a powerful Python library for building agent workflows
- Ollama: a local language-model runner (we'll use Mistral)
- Python: the glue that brings everything together
Whether you’re a beginner or just looking to experiment offline, this guide is for you.
What You'll Need
- Python 3.10–3.12 recommended (avoid 3.13 for now due to compatibility issues)
- Ollama installed and running (download it from ollama.com)
- A code editor (VS Code, Notepad++, etc.)
- Basic command-line comfort (we'll walk you through every step)
Step 1: Create Your Project Folder
Create a folder, for example:
D:\MyAIProjects\Ollama_test
Inside it, add a text file named start_up.txt with:

```
cd D:\MyAIProjects\Ollama_test
env\Scripts\activate
python ollama_test.py
```
Step 2: Set Up a Python Virtual Environment

```
cd D:\MyAIProjects\Ollama_test
python -m venv env
env\Scripts\activate   # Windows
pip install langchain langchain-community langchain-ollama
```
Step 3: Start Ollama and Download a Model
In a new terminal:
```
ollama run mistral
```
This downloads and starts the Mistral model locally.
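Before wiring anything into LangChain, it can help to confirm that Ollama's local API is actually reachable. This optional sanity check is a sketch (not part of the original tutorial); it assumes Ollama's default address of http://localhost:11434:

```python
import urllib.request

def ollama_is_up(url: str = "http://localhost:11434") -> bool:
    """Return True if the Ollama server answers at the given address."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

print("Ollama running:", ollama_is_up())
```

If this prints `False`, make sure `ollama run mistral` (or `ollama serve`) is still active in its terminal before moving on.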
Step 4: Create the Agent Script (simple_agent.py)
```python
from langchain_ollama import OllamaLLM
from langchain.agents import Tool, initialize_agent
from langchain.agents.agent_types import AgentType
import os

llm = OllamaLLM(model="mistral")

def read_file(filename: str) -> str:
    """Read the contents of a .txt or .md file."""
    filename = filename.strip("'\"")
    if not os.path.exists(filename):
        return f"Error: File '{filename}' does not exist."
    if not filename.endswith(('.txt', '.md')):
        return "Error: Only .txt and .md files are supported."
    with open(filename, 'r', encoding='utf-8') as f:
        return f.read()

def calculate(expression: str) -> str:
    """Evaluate a math expression. Note: eval() runs arbitrary Python,
    so only use this with input you trust."""
    try:
        return str(eval(expression))
    except Exception as e:
        return f"Error: {e}"

tools = [
    Tool.from_function(read_file, name="ReadFile",
                       description="Read .txt or .md files (no quotes in filename)."),
    Tool.from_function(calculate, name="Calculator",
                       description="Evaluate math like '3 * 5 + 10'.")
]

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True
)

prompt = """
You are a helpful AI assistant. When given a file, respond like:
Thought: Explain reasoning.
Action: ReadFile
Action Input: start_up.txt
"""

print("\nAgent Response:")
print(agent.run(prompt))
```
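One caveat on the script above: `calculate()` uses `eval()`, which will execute whatever Python the model hands it. A safer drop-in sketch (the name `safe_calculate` is ours, not part of the tutorial) restricts evaluation to plain arithmetic using the `ast` module:

```python
import ast
import operator

# Only these arithmetic operations are permitted.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_calculate(expression: str) -> str:
    """Evaluate arithmetic only; anything else (names, calls) is rejected."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    try:
        return str(ev(ast.parse(expression, mode="eval")))
    except Exception as e:
        return f"Error: {e}"

print(safe_calculate("3 * 5 + 10"))  # 25
```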
Step 5: Run the Agent

```
python simple_agent.py
```
You should see output resembling:
```
> Entering new AgentExecutor chain...
Thought: I should read the file to understand what it does.
Action: ReadFile
Action Input: start_up.txt
Observation: (file contents)
Final Answer: The purpose of "start_up.txt" is to ...
```
Bonus: Things You Can Add Next
- A CLI loop to ask multiple questions without restarting the script
- A `ListFiles` tool to let the agent see available files
- Memory with `ConversationBufferMemory` for multi-turn context
- A simple Flask web interface to chat with your agent
- Search tools for your `.txt`, `.md`, or `.csv` documents
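As a taste of those bonus ideas, here is a minimal sketch of a `ListFiles`-style tool (the name and behavior are our assumptions, not part of the script above); it could be registered with `Tool.from_function` exactly like the others:

```python
import os

def list_files(directory: str = ".") -> str:
    """List the .txt/.md files the agent is allowed to read."""
    names = sorted(n for n in os.listdir(directory)
                   if n.endswith((".txt", ".md")))
    return "\n".join(names) if names else "No .txt or .md files found."
```

Pairing this with ReadFile lets the agent first discover what exists instead of guessing filenames.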
Final Thoughts
Running a local AI agent with LangChain and Ollama is surprisingly accessible. With just Python, an open-source model, and a few lines of code,
you've got a private, fast, and customizable assistant that works offline and respects your data.
Whether you’re a student, developer, or entrepreneur, this is a great entry point into agentic AI.
A Note on Model Limitations and Reliability
While the setup we've described works in principle, the reality is that most local open-source models, including Mistral, do not consistently follow the structured output format expected by LangChain agents.
Specifically, LangChain's agent framework (such as ZERO_SHOT_REACT_DESCRIPTION) expects the model to follow a rigid format:
```
Thought: I should use the ReadFile tool.
Action: ReadFile
Action Input: start_up.txt
```
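To see why strict formatting matters, here is a simplified sketch of the kind of regex parsing LangChain applies to agent output (the real parser inside langchain is more elaborate, but the failure mode is the same: no `Action:`/`Action Input:` pair means no tool call):

```python
import re

# Simplified stand-in for LangChain's ReAct output parsing.
ACTION_RE = re.compile(r"Action:\s*(.+?)\s*\nAction Input:\s*(.+)")

def parse_react(text: str):
    """Extract (tool_name, tool_input) from ReAct-style model output."""
    match = ACTION_RE.search(text)
    if match is None:
        # LangChain raises an OutputParserException in this situation.
        raise ValueError(f"Could not parse agent output: {text!r}")
    return match.group(1).strip(), match.group(2).strip()

good = "Thought: I should read the file.\nAction: ReadFile\nAction Input: start_up.txt"
print(parse_react(good))  # ('ReadFile', 'start_up.txt')

bad = "The file probably just contains startup commands."
try:
    parse_react(bad)
except ValueError as e:
    print("Parsing failed:", e)
```

A model that answers in free prose, as in `bad` above, never triggers the tool at all.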
But Mistral and many other open models were not trained to reliably produce this format. As a result:
- The model may skip tool usage entirely
- It may "hallucinate" summaries of files it didn't actually read
- It may generate improperly formatted output that causes LangChain to fail silently
In our testing, we've seen this exact behavior: even when a file like start_up.txt exists and a working ReadFile tool is available, the model might pretend the file doesn't exist or simply make up an answer instead of calling the tool.
Why is this happening? The root cause is that most open-source models are not instruction-tuned for tool use. They're not aware of, or compliant with, LangChain's internal expectations unless you heavily guide them with carefully engineered prompts (and even that doesn't always work).
What You Can Try Instead
- Option 1: Switch to a more tool-aware model like `openchat` or `llama3`; these tend to follow instructions more reliably than Mistral.
- Option 2: Skip LangChain agents for now and directly invoke tools in Python, then feed the results into the LLM for summarization.
- Option 3: Use LangChain only for simple chains or chat interfaces, and wait for more advanced open models that support tool use natively.
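Option 2 can be sketched in a few lines: call the tool yourself, then hand the result to the model as plain context, so no ReAct formatting is required at all. (The final model call is commented out so this sketch runs without Ollama; the sample file stands in for start_up.txt.)

```python
import os
import tempfile

def read_file(filename: str) -> str:
    """Same tool as before, but called directly from Python."""
    with open(filename, "r", encoding="utf-8") as f:
        return f.read()

# Create a sample file standing in for start_up.txt.
path = os.path.join(tempfile.mkdtemp(), "start_up.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("cd D:\\MyAIProjects\\Ollama_test\nenv\\Scripts\\activate\n")

contents = read_file(path)  # deterministic: the model can't skip this step
prompt = f"Summarize what this script does:\n\n{contents}"
print(prompt)

# With Ollama running, you would then ask the model to summarize:
# from langchain_ollama import OllamaLLM
# print(OllamaLLM(model="mistral").invoke(prompt))
```

Because the tool runs before the model ever sees the prompt, hallucinated "file reads" are impossible by construction.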
We'll explore Option 1 next with a version of this agent powered by openchat, a model that performs better with LangChain's tool-using agents.