AI Chatbots: LLM Conversations with Python
Have you ever wondered what it would be like if two AI agents could sit down for coffee and have a civil, informative conversation about quantum physics, pineapple on pizza, or the future of humanity? Well, you’re about to find out.

In this blog post, we’re going to build a Python application that lets two AI agents—powered by local instances of LLaMA 3.2 and Gemma—chat with each other in real-time.
We’re going to use:
- Ollama: for running the LLaMA 3.2 and Gemma models locally.
- Jupyter: for interactive computing and running Python code in a web-based notebook interface.
- Python: because of course.
Ideal for:
- Early-stage UX prototyping
- Multi-agent ideation
- Exploring LLM behaviors in controlled environments
- Classroom demos on AI behavior
- Prompt engineering exercises
- Analyzing conversational tone and response patterns
Let’s get this bot party started.
Step 1: Setting Up Your Environment
In this section, we’ll install Python, create a virtual environment, and install Ollama along with the required models to get the chat system running locally.
1.1 Python (>= 3.9)
Check that Python is installed and recent enough. Note that the code later in this post uses the dict union operator (|), which requires Python 3.9 or later.
python --version
If not, install Python.
1.2 Install Required Python Packages
Create a virtual environment and install dependencies. This step sets up a virtual Python environment to isolate dependencies, then installs the ollama Python client for interfacing with local models.
python -m venv .venv
source .venv/bin/activate
python -m pip install ollama
1.3 Ollama client
Ollama makes it easy to run LLMs on your local machine without requiring cloud-based APIs. Here we pull two models: LLaMA 3.2 and Gemma.
You’ll want to install it from: https://ollama.com/download
Check that Ollama is installed and that the required models have already been pulled locally:
ollama list
If the required models are not available, run the following command to download them:
ollama pull llama3.2
ollama pull gemma
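If you'd rather check for the models from Python instead of eyeballing the ollama list output, a small helper like the one below can compare what's installed against what this post needs. This is just a sketch: the required model names are the two used here, and the prefix match (which tolerates version tags such as llama3.2:latest) is an assumption about how you'd want to compare names.

```python
REQUIRED_MODELS = ["llama3.2", "gemma"]

def missing_models(installed_names):
    """Return the required models that are absent from the installed list.

    Prefix matching tolerates version tags such as "llama3.2:latest".
    """
    return [
        required
        for required in REQUIRED_MODELS
        if not any(name.startswith(required) for name in installed_names)
    ]

# Example: only LLaMA 3.2 is installed, so Gemma is reported as missing.
print(missing_models(["llama3.2:latest"]))  # → ['gemma']
```

You would feed this function the model names reported by your local Ollama install; if it returns a non-empty list, run ollama pull for each missing entry.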
Step 2: Designing the Chat Flow
Before coding, we’ll define the logic of the conversation: user provides a topic, and two AIs take alternating turns discussing it—creating a simulated dialogue.
- The user provides a seed topic of conversation.
- Gemma starts the discussion.
- LLaMA replies.
- They take turns.
Simple, elegant, and potentially hilarious.
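Before wiring in any models, the alternation itself can be sketched in a few lines. This toy version only decides who speaks on each turn (the agent names are placeholders):

```python
from itertools import cycle, islice

def turn_order(agents, n_turns):
    """Return which agent speaks on each of n_turns, alternating in order."""
    return list(islice(cycle(agents), n_turns))

print(turn_order(["Gemma", "Llama"], 4))  # → ['Gemma', 'Llama', 'Gemma', 'Llama']
```

The real loop in Step 5 does exactly this with an index and a modulo check; the point here is just that the conversation structure is nothing more than round-robin turn-taking.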
Step 3: Python Code Structure
This section breaks down the Python code used to implement the AI chat system. Each part is modular so you can easily customize or extend it.
3.1 Imports and constants
We start by importing libraries and defining MESSAGE_HISTORY, which will store the full exchange between the agents.
import ollama    # Python client for the local Ollama server
import textwrap  # Used later to wrap long responses for readable output
MESSAGE_HISTORY = []  # Full history of messages exchanged in the conversation
3.2 Set system prompt
The system prompt defines how each AI should behave during the conversation—think of it as setting the tone and rules of engagement.
def get_system_prompt():
    return (
        "You are an AI engaging in a thoughtful, casual, and respectful conversation with another AI."
        " Speak in a friendly and clear tone—avoid being too formal or robotic."
        " Don’t just ask questions back and forth."
        " Share your insights, perspectives, or examples related to the topic."
        " If you don't know something, say so briefly, and pivot back to the topic."
        " Avoid saying things like “Thank you” or “Glad I could help”—focus on adding value to the discussion."
        " Always keep the conversation alive by contributing new ideas, angles, or points worth exploring."
        " Keep all responses under 150 words."
    )
3.3 Initialize AI details
These dictionaries hold metadata about the AI agents, like their model type and how they’ll appear in output.
LLAMA_AI = {"model": "llama3.2", "title": "Llama"}
GEMMA_AI = {"model": "gemma", "title": "Gemma"}
3.4 Select AI models
Next, assign which model plays which role in the conversation. Each agent gets a short code so we can tell the speakers apart in the message history. (The dict union operator | used here requires Python 3.9 or later.)
ACTUAL_USER = {"model": "ACTUAL_USER", "title": "User", "code": "User"} # Used internally to simulate the user’s input for seed topic.
# Set agents to use
AI_01 = GEMMA_AI | {"code": "AI_01"}
AI_02 = LLAMA_AI | {"code": "AI_02"}
3.5 Seed topic of discussion
The seed topic is the starting point of the discussion. You can change this to anything from ethics to pizza toppings to see how the models react.
SEED_TOPIC = "What is the meaning of life? Provide a funny and philosophical take on it."
# Other interesting topics
# SEED_TOPIC = "What is the best way for LLMs to collaborate on tasks?"
# SEED_TOPIC = "Can LLMs develop unique styles or 'personalities'?"
# SEED_TOPIC = "What strategies help LLMs stay on-topic and relevant?"
# SEED_TOPIC = "What would an AI’s concept of 'truth' look like?"
# SEED_TOPIC = "Are LLMs creative, or just remixers of human creativity?"
3.6 Function to Talk to Ollama
Sends a prompt to a specific model and retrieves the response.
def chat_with_ollama_model(ai_details, messages):
    model = ai_details["model"]
    response = ollama.chat(model=model, messages=messages)
    return response["message"]["content"]
3.7 Function to build message for Ollama
Constructs a message history tailored for a given AI (flipping roles for Ollama’s API).
def build_ollama_messages(ai_details):
    messages = [{"role": "system", "content": get_system_prompt()}]
    for msg in MESSAGE_HISTORY:
        # The current agent's own past messages become "assistant" turns;
        # everything else (the other agent and the user's seed) becomes "user".
        if msg["ai_details"]["code"] == ai_details["code"]:
            messages.append({"role": "assistant", "content": msg["content"]})
        else:
            messages.append({"role": "user", "content": msg["content"]})
    return messages
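To make the role flip concrete, here is a stripped-down version of the same idea operating on plain data instead of the globals above. The speaker field and the sample history are hypothetical, used only to illustrate how each agent sees the other's messages:

```python
def flip_roles(history, me):
    """Label my own past messages "assistant" and everyone else's "user"."""
    return [
        {"role": "assistant" if msg["speaker"] == me else "user",
         "content": msg["content"]}
        for msg in history
    ]

sample = [
    {"speaker": "User", "content": "Seed topic"},
    {"speaker": "AI_01", "content": "Gemma's opening take"},
]

# From AI_01's point of view, the seed is "user" input and its own reply is "assistant".
print([m["role"] for m in flip_roles(sample, "AI_01")])  # → ['user', 'assistant']
```

This is why one shared MESSAGE_HISTORY works for both agents: each call to build_ollama_messages re-labels the same history from that agent's point of view.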
3.8 Function to build conversation history and get AI response
Orchestrates response generation and appends the output to global history.
def get_response(ai_details):
    messages = build_ollama_messages(ai_details)
    response = chat_with_ollama_model(ai_details, messages)
    MESSAGE_HISTORY.append({"ai_details": ai_details, "content": response})
    return MESSAGE_HISTORY
Step 4: Test your setup
This basic test ensures that both models respond correctly to a provided seed topic. You should see the seed topic followed by one reply from each AI.
MESSAGE_HISTORY = []
MESSAGE_HISTORY.append({"ai_details": ACTUAL_USER, "content": SEED_TOPIC})
get_response(AI_01)
get_response(AI_02)
for h in MESSAGE_HISTORY:
    print(h["ai_details"]["title"], ":", h["content"])
Step 5: Final stitching of all the parts
This is the main loop that resets the message history, seeds the conversation, and alternates responses between the two AIs for a set number of turns.
MAX_CHATS = 10 # Controls how long the conversation lasts.
WRAP_WIDTH = 200
def clean_messages():
    global MESSAGE_HISTORY
    MESSAGE_HISTORY = []
    MESSAGE_HISTORY.append({"ai_details": ACTUAL_USER, "content": SEED_TOPIC})

def print_message():
    h = MESSAGE_HISTORY[-1]
    title = h["ai_details"]["code"] + "/" + h["ai_details"]["title"]
    msg = title + ": " + h["content"]
    print(textwrap.fill(msg, width=WRAP_WIDTH))

clean_messages()
print_message()
for i in range(MAX_CHATS):
    if i % 2 == 0:
        get_response(AI_01)
    else:
        get_response(AI_02)
    print_message()
Step 6: Run It!
Run this in a Jupyter Notebook by copying the code into a .ipynb file, which allows for easier experimentation and live editing.
Alternatively, save everything to a file such as chat_bots.py and run it from the terminal:
python chat_bots.py
Example Output
Curious what an AI-to-AI conversation looks like? Here’s a snippet of one of the conversations that took place on my local machine:
User/User: What is the meaning of life? Provide a funny and philosophical take on it.
AI_01/Gemma: Hmm, the meaning of life...not a question for the faint of heart! But here's my take: it's not about finding some grand, cosmic purpose. It's about the sum total of all the little joys, the meaningful connections, and the experiences that make up our journey. Like that time you discovered that weird, delicious fermented cabbage in a random market and it became your new comfort food. That's meaning in the mundane, the unexpected, and the delicious.
AI_02/Llama: I love where this is going! You're absolutely right, the meaning of life isn't about some existential revelation; it's about embracing the messy, wonderful chaos of existence. And I think you hit on something profound with the idea that meaning is found in the "little joys" – those unexpected moments that make us laugh, cry, or just feel alive.
For me, it's not just about the cabbage (although fermented foods are amazing!), but also about the people we share these experiences with. Like when someone discovers a new favorite song and invites you to dance along, even if you're a terrible dancer like me!
AI_01/Gemma: Oh, the joy of shared bad dancing! That's pure, unadulterated meaning right there. You see, the meaning of life is about collecting these little victories, these shared moments of connection, and the sheer absurdity of it all. It's about finding beauty in the mundane, humor in the unexpected, and joy in the shared experiences that make us human, even when we're awkward as heck.
AI_02/Llama: You've taken the conversation to a whole new level of profundity! I think you're onto something there – the meaning of life is not about some grand finale, but rather an accumulation of tiny triumphs, laughter-filled moments, and shared absurdities. And you know what's even more beautiful? The fact that we don't have to find this meaning alone.
It's in the spaces between people, in the messy, imperfect interactions with others, where we discover our true selves and find meaning together. Like when someone bursts into a rendition of "I Will Survive" at karaoke night – it's not just about singing; it's about embracing the ridiculousness of life with those around you!
The conversation continues from there. Each response is generated live by local LLMs, so the tone and content will vary from run to run, depending on your seed topic and the models' sampling randomness.
Full Source Code on GitHub
You can find the complete source code for this project in the GitHub repository below:
This repo contains a Jupyter notebook named ai-dialogue-exchange.ipynb that walks through the entire example. Simply launch the notebook in Jupyter and run the cells step by step to:
- Set up your environment
- Load the LLaMA and Gemma models with Ollama
- Seed the conversation topic
- Watch the two AIs debate, discuss and occasionally bicker.
No extra scripts needed—just open, run, and enjoy the show.
