Build a Local AI Agent


This entry is part 1 of 2 in the series Local AI Agents

🧠 How to Build a Local AI Agent Using LangChain, Ollama, and Mistral

Have you ever wanted to run your own AI assistant locally, without depending on cloud services or subscriptions?
In this guide, we’ll walk step‑by‑step through how to set up a lightweight AI agent using:

  • 🧱 LangChain — a powerful Python library for building agent workflows
  • 🦙 Ollama — a local language‑model runner (we’ll use Mistral)
  • 💻 Python — the glue that brings everything together

Whether you’re a beginner or just looking to experiment offline, this guide is for you.

🔧 What You’ll Need

  • Python 3.10–3.12 recommended (avoid 3.13 for now due to compatibility issues)
  • Ollama installed and running – download it from ollama.com
  • A code editor (VS Code, Notepad++, etc.)
  • Basic command‑line comfort (we’ll walk you through every step)

🗂 Step 1: Create Your Project Folder

Create a folder, for example:
D:\MyAIProjects\Ollama_test

Inside it, add a text file named start_up.txt with:

cd D:\MyAIProjects\Ollama_test
env\Scripts\activate
python simple_agent.py

🌱 Step 2: Set Up a Python Virtual Environment

cd D:\MyAIProjects\Ollama_test
python -m venv env
env\Scripts\activate  # Windows

pip install langchain langchain-community langchain-ollama

🤖 Step 3: Start Ollama and Download a Model

In a new terminal:

ollama run mistral

This downloads and starts the Mistral model locally.
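
If you’d like to confirm the model server is reachable before wiring up LangChain, you can query Ollama’s local HTTP API (by default it listens on http://localhost:11434; the /api/tags endpoint lists installed models). A minimal stdlib-only sanity check:

```python
# Quick sanity check for the local Ollama server (default port assumed).
import json
import urllib.request

def ollama_models(url="http://localhost:11434/api/tags"):
    """Return the names of locally installed models, or None if Ollama isn't running."""
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (OSError, ValueError):
        return None

print(ollama_models())  # e.g. ['mistral:latest'] once the pull finishes
```

If this prints None, Ollama isn’t running yet; start it (or re-run `ollama run mistral`) before continuing.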

📄 Step 4: Create the Agent Script (simple_agent.py)

from langchain_ollama import OllamaLLM
from langchain.agents import Tool, initialize_agent
from langchain.agents.agent_types import AgentType

llm = OllamaLLM(model="mistral")

def read_file(filename: str) -> str:
    import os
    filename = filename.strip("'\"")
    if not os.path.exists(filename):
        return f"Error: File '{filename}' does not exist."
    if not filename.endswith(('.txt', '.md')):
        return "Error: Only .txt and .md files are supported."
    with open(filename, 'r', encoding='utf-8') as f:
        return f.read()

def calculate(expression: str) -> str:
    try:
        # Note: eval is unsafe for untrusted input; fine for local experimentation only.
        return str(eval(expression))
    except Exception as e:
        return f"Error: {e}"

tools = [
    Tool.from_function(read_file,  name="ReadFile",
        description="Read .txt or .md files (no quotes in filename)."),
    Tool.from_function(calculate, name="Calculator",
        description="Evaluate math like '3 * 5 + 10'.")
]

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True
)

prompt = """
You are a helpful AI assistant. When given a file, respond like:

Thought: Explain reasoning.
Action: ReadFile
Action Input: start_up.txt
"""

print("\n🧠 Agent Response:")
print(agent.run(prompt))

🚀 Step 5: Run the Agent

python simple_agent.py

You should see output resembling:

> Entering new AgentExecutor chain...
Thought: I should read the file to understand what it does.
Action: ReadFile
Action Input: start_up.txt
Observation: (file contents)
Final Answer: The purpose of "start_up.txt" is to ...

💡 Bonus: Things You Can Add Next

  • šŸ” A CLI loop to ask multiple questions without restarting the script
  • šŸ“ A ListFiles tool to let the agent see available files
  • 🧠 Memory with ConversationBufferMemory for multi‑turn context
  • 🌐 A simple Flask web interface to chat with your agent
  • šŸ” Search tools for your .txt, .md, or .csv documents

✅ Final Thoughts

Running a local AI agent with LangChain and Ollama is surprisingly accessible. With just Python, an open‑source model, and a few lines of code,
you’ve got a private, fast, and customizable assistant that works offline and respects your data.
Whether you’re a student, developer, or entrepreneur, this is a great entry point into agentic AI.

āš ļø A Note on Model Limitations and Reliability

While the setup we’ve described works in principle, the reality is that most local open-source models — including Mistral — do not consistently follow the structured output format expected by LangChain agents.

Specifically, LangChain’s agent framework (like ZERO_SHOT_REACT_DESCRIPTION) expects the model to follow a rigid format:

Thought: I should use the ReadFile tool.
Action: ReadFile
Action Input: start_up.txt
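
To see why the exact format matters, here is an illustrative parser (not LangChain’s actual implementation): a small regex in the spirit of how ReAct-style output is extracted. Any deviation from the Action / Action Input lines means no tool call is recognized:

```python
# Illustrative only: a regex in the spirit of ReAct output parsing.
import re

REACT = re.compile(r"Action:\s*(?P<tool>\w+)\s*\nAction Input:\s*(?P<input>.+)")

good = "Thought: read it.\nAction: ReadFile\nAction Input: start_up.txt"
bad = "I'll just open start_up.txt and summarize it for you."

print(REACT.search(good).groupdict())  # {'tool': 'ReadFile', 'input': 'start_up.txt'}
print(REACT.search(bad))               # None — no tool ever gets called
```

When the model answers in free prose (the `bad` case), the parser finds nothing, and the agent either errors out or falls back to the model’s unverified text.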

But Mistral and many other open models were not trained to reliably produce this format. As a result:

  • āŒ The model may skip tool usage entirely
  • āŒ It may ā€œhallucinateā€ summaries of files it didn’t actually read
  • āŒ It may generate improperly formatted output that causes LangChain to fail silently

In our testing, we’ve seen this exact behavior: even when a file like start_up.txt exists and a working ReadFile tool is available, the model might pretend the file doesn’t exist or simply make up an answer instead of calling the tool.

Why does this happen? The root cause is that most open-source models are not instruction-tuned for tool use. They are not aware of or compliant with LangChain’s internal expectations unless you heavily guide them with carefully engineered prompts (and even that doesn’t always work).

✅ What You Can Try Instead
  • Option 1: Switch to a more tool-aware model like openchat or llama3 — these tend to follow instructions more reliably than Mistral.
  • Option 2: Skip LangChain agents for now and directly invoke tools in Python, then feed the results into the LLM for summarization.
  • Option 3: Use LangChain only for simple chains or chat interfaces, and wait for more advanced open models that support tool use natively.
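
As a sketch of Option 2 (function and prompt wording are illustrative): read the file directly in Python, then hand the contents to the model for summarization. The OllamaLLM call is commented out so the sketch also runs without a model server:

```python
# Option 2 sketch: do the "tool" step in plain Python, then let the LLM summarize.
from pathlib import Path

def summarize_file(path: str) -> str:
    # The tool step: deterministic, no model involved, nothing to hallucinate.
    text = Path(path).read_text(encoding="utf-8")
    prompt = f"Summarize the purpose of this file:\n\n{text}"
    # With Ollama running, uncomment to get a real summary:
    # from langchain_ollama import OllamaLLM
    # return OllamaLLM(model="mistral").invoke(prompt)
    return prompt  # placeholder so the sketch runs offline
```

Because the file is read before the model is ever consulted, the model can no longer pretend the file doesn’t exist — it only sees the contents you hand it.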

We’ll explore Option 1 next with a version of this agent powered by openchat — a model that performs better with LangChain’s tool-using agents.
