🧠 Building a Local AI Agent with LangChain, Ollama, and OpenChat
In a previous post, we built a local AI agent using LangChain and Mistral via Ollama. That setup worked — but not perfectly. The model often ignored instructions, skipped tool calls, or “hallucinated” answers instead of reading the actual file.
This follow-up post solves those issues by switching to the OpenChat model. OpenChat behaves more reliably with LangChain’s agent framework — especially when tools like ReadFile and Calculator are involved.
If you followed the previous post, this will feel familiar. You’ll reuse the same tools and setup — but get much better results.
🔧 What You’ll Need
- Python 3.10–3.12 (avoid 3.13)
- Ollama installed from ollama.com
- Virtual environment (venv)
- LangChain packages installed:
pip install langchain langchain-community langchain-ollama
🗂 Step 1: Create Your Project Folder
Use the same structure as before:
D:\MyData\Portfolio\SocialEnterprise\Ollama_test
Add a file named start_up.txt with the following content:
cd D:\MyData\Portfolio\SocialEnterprise\Ollama_test
env\Scripts\activate
python ollama_test.py
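If you prefer to script this step, a minimal Python sketch that writes the same three lines to start_up.txt (the path is the example folder from Step 1; adjust it to your own layout):

```python
# Write start_up.txt containing the three startup commands from this post.
commands = (
    "cd D:\\MyData\\Portfolio\\SocialEnterprise\\Ollama_test\n"
    "env\\Scripts\\activate\n"
    "python ollama_test.py\n"
)

with open("start_up.txt", "w", encoding="utf-8") as f:
    f.write(commands)

print(open("start_up.txt", encoding="utf-8").read())
```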
🌱 Step 2: Start the OpenChat Model in Ollama
Run the following command to download and start OpenChat locally:
ollama run openchat
Once downloaded, Ollama will load OpenChat and expose it to LangChain on localhost:11434.
📄 Step 3: Create the New Agent Script
Save the following code as simple_agent_openchat.py:
from langchain_ollama import OllamaLLM
from langchain.agents import Tool, initialize_agent
from langchain.agents.agent_types import AgentType
import os

llm = OllamaLLM(model="openchat")

def read_file(filename: str) -> str:
    """Read the contents of a .txt or .md file."""
    filename = filename.strip("'\"")  # strip stray quotes the model may add
    print(f"📥 Tool called with: {filename}")
    print("📁 Current dir:", os.getcwd())
    print("📄 Files:", os.listdir())
    try:
        if not os.path.exists(filename):
            return f"Error: File '{filename}' does not exist."
        if not filename.endswith(('.txt', '.md')):
            return "Error: Only .txt and .md files are supported."
        with open(filename, 'r', encoding='utf-8') as f:
            return f.read()
    except Exception as e:
        return f"Error reading file: {e}"

def calculate(expression: str) -> str:
    """Evaluate a math expression. Note: eval runs arbitrary Python, so only use this locally."""
    try:
        return str(eval(expression))
    except Exception as e:
        return f"Error: {e}"

tools = [
    Tool.from_function(read_file, name="ReadFile",
                       description="Read .txt or .md files (no quotes)."),
    Tool.from_function(calculate, name="Calculator",
                       description="Do math like '4 * 5 + 7'")
]

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True
)

prompt = """
You are an AI assistant. When asked to read a file, always use this structure:

Thought: I should read the file to understand it.
Action: ReadFile
Action Input: start_up.txt

Do not summarize unless you’ve received an Observation. Now please read and summarize the file start_up.txt.
"""

response = agent.run(prompt)
print("\n🧠 Agent Response:")
print(response)
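One caveat before running it: the Calculator tool relies on eval, which will happily execute arbitrary Python. That is acceptable for a local demo, but if you want a stricter version, here is a hypothetical drop-in replacement (safe_calculate is my name, not part of the script above) that only permits basic arithmetic via the ast module:

```python
import ast
import operator

# Map AST operator nodes to plain arithmetic functions.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> str:
    """Evaluate +, -, *, / on numbers only; anything else is rejected."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("Unsupported expression")
    try:
        return str(walk(ast.parse(expression, mode="eval")))
    except Exception as e:
        return f"Error: {e}"

print(safe_calculate("4 * 5 + 7"))       # 27
print(safe_calculate("__import__('os')"))  # rejected with an Error message
```

You could register it in place of calculate with the same Tool.from_function call.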
🚀 Step 4: Run the Script
Back in your terminal with the virtual environment activated:
python simple_agent_openchat.py
You should see output like:
> Entering new AgentExecutor chain...
Thought: I should read the file to understand what it does.
Action: ReadFile
Action Input: start_up.txt
Observation: cd D:\MyData\Portfolio\...
Final Answer: Change directory, activate the environment, and run ollama_test.py
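Under the hood, the agent extracts the Action and Action Input from text like this trace. A simplified, hypothetical sketch of that parsing step (LangChain's real output parser is more involved):

```python
import re

# A snippet of ReAct-style model output, as produced during the run above.
output = """Thought: I should read the file to understand what it does.
Action: ReadFile
Action Input: start_up.txt"""

# '.' does not match newlines, so each group captures the rest of its line.
match = re.search(r"Action:\s*(.+)\nAction Input:\s*(.+)", output)
action = match.group(1).strip()
action_input = match.group(2).strip()
print(action)        # ReadFile
print(action_input)  # start_up.txt
```

When the model deviates from this format, the parse fails, which is exactly what handle_parsing_errors=True papers over.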
✅ Why OpenChat Works Better
- ✔️ OpenChat is more compliant with LangChain’s agent format
- ✔️ Tools are actually called and not skipped
- ✔️ Output is structured and logical
This small change in model gives a big upgrade in functionality. It’s now much easier to trust the agent’s behavior, especially when you need it to interact with your own files or do calculations.
💡 Bonus Ideas for Expansion
- 🔁 Add a CLI loop to keep the agent running interactively
- 📂 Add a ListFiles tool so the agent can browse directories
- 📊 Create a CSV-reading tool
- 🌐 Build a simple Flask UI to use it in a browser
- 🧠 Use memory to retain conversation context
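The ListFiles idea, for example, can be sketched as another plain function you would register with Tool.from_function (the name and error handling here are my assumptions, not part of the original script):

```python
import os

def list_files(directory: str = ".") -> str:
    """Return the names of entries in a directory, one per line."""
    directory = directory.strip("'\"") or "."  # tolerate stray quotes, default to cwd
    try:
        return "\n".join(sorted(os.listdir(directory)))
    except OSError as e:
        return f"Error listing '{directory}': {e}"

print(list_files("."))
```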
🔍 Troubleshooting Tip
If you see an error like model 'openchat' not found, it means you need to download the model using:
ollama run openchat
✅ Final Thoughts
Switching to OpenChat made our local agent work as expected — consistently calling tools and returning answers based on real data. This approach gives you a faster, private, and offline-capable AI assistant that’s great for prototyping or personal projects.
In future posts, we’ll explore looping input, memory, and adding a browser interface.
📋 Bonus Tip: Listing Your Downloaded Models
You can run the following command at the terminal (command prompt):
ollama list
It shows:
- The name of each model you’ve downloaded (like mistral, openchat, etc.)
- Their sizes
- When each model was last modified
To see which models are currently loaded and running, use ollama ps instead.