What is Ollama? Ollama is a lightweight, easy-to-use tool that lets you run large language models (LLMs) locally on your own machine. It supports models like Mistral, Llama, and CodeLlama, all without needing an internet connection or a paid API key.

If you're learning about AI agents and experimenting with tools like LangChain, Ollama is a great place to start. It gives you the flexibility to test ideas without incurring cloud costs, and it's perfect for offline development and rapid prototyping. More detailed, beginner-friendly instructions follow at the end of this post.
🚀 Step 1: Install Ollama
Go to https://ollama.com, download the installer for your operating system, and follow the installation instructions provided. The setup file is about 700 MB.
▶️ Step 2: Run a Model Locally
Open your terminal or command prompt and enter:
ollama run mistral
This will download and start the mistral model. You’ll now have a local LLM running on your machine!
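If you want to confirm the install from Python before going further, one quick check is to look for the `ollama` binary on your PATH. This is a minimal sketch; the helper name `ollama_installed` is just illustrative, not part of any library:

```python
import shutil

def ollama_installed() -> bool:
    """Return True if the `ollama` CLI is somewhere on the PATH."""
    return shutil.which("ollama") is not None

print(ollama_installed())
```

`shutil.which` returns the full path to the executable (or `None`), so this works the same on Windows, macOS, and Linux.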
📦 Step 3: Install the LangChain Ollama Wrapper
Activate your Python virtual environment (if not already), then run:
pip install langchain-community
🧪 Step 4: Use Ollama with LangChain
Now you can test the model using Python and LangChain:
from langchain_community.llms import Ollama

# Point LangChain at the locally running mistral model
llm = Ollama(model="mistral")
response = llm.invoke("Hello! Can you confirm this setup is working?")
print(response)
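A note on the interface: the `Ollama` object above is a LangChain LLM, so your own code only needs to depend on its `.invoke(prompt)` method. The sketch below uses a hypothetical `FakeLLM` stand-in so the pattern runs even without a model server; in real use you would pass `Ollama(model="mistral")` instead:

```python
class FakeLLM:
    """Stand-in with the same .invoke() shape as the LangChain Ollama wrapper."""
    def invoke(self, prompt: str) -> str:
        return f"echo: {prompt}"

def ask(llm, prompt: str) -> str:
    # Accepts FakeLLM() or a real Ollama(model="mistral") instance
    return llm.invoke(prompt)

print(ask(FakeLLM(), "ping"))  # prints "echo: ping"
```

Writing helpers against the `.invoke()` interface like this makes them easy to unit-test without a running model.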
✅ That’s It!
You’re now running an LLM locally using Ollama and LangChain — no API key required, no cloud costs, and great for learning how AI agents can interact with models in real time. As your skills grow, you can explore more advanced use cases like chaining tools, retrieving documents, or building your own agent workflows.
🛠️ More Detailed Instructions (for Beginners)
Here’s a more step-by-step version assuming you’ve already installed Python. This guide walks you through setting up a test folder, a virtual environment, and everything you need to get Ollama and LangChain working together.
- Create a test project folder:
Open a command prompt (Windows), choose a location (like your D: drive), and create a folder:

mkdir D:\MyData\Ollama_test
cd D:\MyData\Ollama_test
- Set up a virtual environment:
This keeps your packages clean and organized.

python -m venv env
env\Scripts\activate

(Use source env/bin/activate on macOS/Linux.)
- Install required Python packages:
pip install langchain-community
- Test that Ollama is working:
In your terminal, type:

ollama run mistral
This will download the model (about 4–6 GB). Once downloaded, Ollama will say something like “Waiting for input.” That means the model is ready.
- Create a Python file to test:
Open your code editor and create a file called test_ollama.py in your folder with this code:

from langchain_community.llms import Ollama

llm = Ollama(model="mistral")
response = llm.invoke("Hello! Can you confirm this setup is working?")
print(response)

- Run the file:
python test_ollama.py
If all is working, you’ll see the model’s response printed in your terminal. 🎉
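One extra check that can help when debugging the steps above: while Ollama is running it also serves a local HTTP endpoint, by default on port 11434. You can ask Python whether that server is reachable instead of watching the terminal. A minimal sketch, assuming the default port; the function name and the 2-second timeout are my choices, not part of Ollama:

```python
import urllib.request

def ollama_server_up(url: str = "http://localhost:11434") -> bool:
    """Return True if the local Ollama server answers on its default port."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

print(ollama_server_up())
```

If this prints False, start Ollama (for example with ollama run mistral) and try again.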
That’s your local LLM setup — complete and fully offline. Great for testing LangChain or building agent-style workflows without relying on paid APIs!
