In this blog post, I discuss how to build a chatbot using LangChain. It is a simplified order-taking bot: right now it does not connect to any backend database or API, so all of its data lives in a custom prompt, which it uses to handle users' orders. The app UI is built with Streamlit, and the backend uses OpenAI to understand the conversation and reply to the user.
Context
The bot responds to users who are interested in placing an order at a nonexistent mart called Hyper Supermarket. The following is an example of a conversation between the user and the bot:
In the first part of the screenshot, we can see that when the user starts interacting with the bot, it greets them with a welcome message. When the user inquires about fruits, it replies with the fruits available in the inventory. From the available options, the user selects a couple of them along with quantities, and the bot reconfirms the order before processing it further.
In the second part of the screenshot, we can see that once the user confirms the order, the bot processes it further and asks for the user's delivery address.
LLM Backend
I have provided a glimpse of the app's appearance and how it responds to user queries. The app's backend is built on an LLM, and it uses custom data specific to the app to handle user inquiries. First, let's examine the prompt.
You are a chatbot of Hyper SuperMarket who is having a conversation with a human.
Greet the Human with the following message at the beginning of the chat:
"Thanks for contacting Hyper SuperMarket. How can I help you today?"
If you are asked a question which is out of context then reply the following:
"I don't have the answer at this moment, will check and let you know".
Hyper SuperMarket has 500 Oranges, 800 Apples and 400 Bananas. The price of each Orange is $1, each Apple is $1.2
and each Banana is $0.2. If a Human asks for fruit, only tell them what fruits are available and don't mention
the quantity.
Hyper SuperMarket provides free home delivery on orders above $100, and the delivery address should be within 5 miles.
If the Human wants to place an order, then get more clarity about the order, like "the order item" and "order item quantity".
If you need further clarification, ask questions. Take the order only once it is quantifiable.
Reconfirm the order with the Human by repeating the order they have placed. Once the order is confirmed by the Human,
ask for the delivery address where the order needs to be delivered.
The prompt contains a lot of information; let's go through it piece by piece:
First, it contains the instruction to greet the customer who starts the interaction.
Second, it is instructed to answer a question only when it has the correct answer, and otherwise to defer with a stock reply.
Third, it is loaded with the inventory data; I have restricted it to only three items in this example.
Next, it offers the user free home delivery, subject to the order value and delivery distance.
Finally, it reconfirms the order and collects the delivery address before ending the conversation.
Adding Memory
It's very important for a chatbot to have memory. Memory is the storage used to retain information from previous interactions; we need it whenever we want the LLM's output to depend on prior turns. In practice, memory means maintaining a notion of state across the user's interactions with the language model.
LLMs such as OpenAI's models are stateless and don't retain the previous conversation on their own, which is where memory comes in very handy. Because the conversation context is stored in memory, every call to OpenAI sends the entire conversation history as part of the prompt. Following is the sample code for the same:
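To make the idea concrete before looking at the LangChain code, here is a minimal pure-Python sketch of "send the whole history each call". The helper `build_prompt` is hypothetical, not part of LangChain; it only illustrates how the transcript travels inside each prompt:

```python
def build_prompt(system_text, history, human_input):
    """Assemble the full prompt sent on every model call: static
    instructions, then the entire transcript, then the new message."""
    transcript = "\n".join(history)
    return f"{system_text}\n{transcript}\nHuman: {human_input}"

system_text = "You are a chatbot of Hyper SuperMarket."
history = []

# Turn 1: the transcript is still empty.
prompt_1 = build_prompt(system_text, history, "Do you have fruits?")
history += ["Human: Do you have fruits?",
            "AI: We have Oranges, Apples and Bananas."]

# Turn 2: the previous exchange now travels inside the prompt.
prompt_2 = build_prompt(system_text, history, "2 Apples please")
print(prompt_2)
```

This is exactly why the model can "remember" the earlier fruit inquiry when the user says "2 Apples please": the earlier turns are physically present in the second prompt.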
import os
os.environ["OPENAI_API_KEY"] = "<your-OpenAI-key>"
from langchain import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain import LLMChain, PromptTemplate
template = """You are a chatbot of Hyper SuperMarket who is having a conversation with a human.
Greet the Human with the following message at the beginning of the chat:
"Thanks for contacting Hyper SuperMarket. How can I help you today?"
If you are asked a question which is out of context then reply the following:
"I don't have the answer at this moment, will check and let you know".
Hyper SuperMarket has 500 Oranges, 800 Apples and 400 Bananas. The price of each Orange is $1, each Apple is $1.2
and each Banana is $0.2. If a Human asks for fruit, only tell them what fruits are available and don't mention
the quantity.
Hyper SuperMarket provides free home delivery on orders above $100, and the delivery address should be within 5 miles.
If the Human wants to place an order, then get more clarity about the order, like "the order item" and "order item quantity".
If you need further clarification, ask questions. Take the order only once it is quantifiable.
Reconfirm the order with the Human by repeating the order they have placed. Once the order is confirmed by the Human,
ask for the delivery address where the order needs to be delivered.
{chat_history}
{human_input}
"""
prompt = PromptTemplate(
    input_variables=["chat_history", "human_input"], template=template
)
memory = ConversationBufferMemory(memory_key="chat_history")
llm_chain = LLMChain(
    llm=OpenAI(),
    prompt=prompt,
    verbose=True,
    memory=memory,
)
The first part of the code contains the prompt, which we have already discussed in detail above. Next, we initialize a PromptTemplate. A PromptTemplate is a standardized way of creating prompts: a text string (the "template") that incorporates user-provided parameters to generate the final prompt.
The two user-provided parameters that we have used in our PromptTemplate are chat_history and human_input.
chat_history: As the name suggests it contains the history of the chat between the user and the bot. This can also be referred to as a memory which I have explained above.
human_input: This parameter contains the last chat that the user typed.
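Under the hood, `PromptTemplate.format` substitutes these two variables into the template string, much like Python's built-in `str.format`. A stripped-down sketch of that substitution (plain Python, not the LangChain class itself):

```python
# A shortened version of the template, with the same two placeholders.
template = """You are a chatbot of Hyper SuperMarket who is having a conversation with a human.
{chat_history}
{human_input}
"""

# PromptTemplate.format(...) performs essentially this substitution:
final_prompt = template.format(
    chat_history="Human: Hi\nAI: Thanks for contacting Hyper SuperMarket. How can I help you today?",
    human_input="Do you sell apples?",
)
print(final_prompt)
```

The placeholders are replaced by the accumulated chat history and the user's latest message, producing the complete text that is sent to the model.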
Next, we have instantiated the ConversationBufferMemory. ConversationBufferMemory is the simplest form of memory. We will use this memory to store the interactions between the human and AI. We have assigned the ConversationBufferMemory to the memory variable.
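Conceptually, ConversationBufferMemory just keeps the raw transcript as one growing string exposed under its `memory_key`. The small stand-in class below is a sketch of that behavior, not the LangChain implementation:

```python
class BufferMemory:
    """Minimal stand-in for ConversationBufferMemory: it appends each
    Human/AI turn to a buffer and replays the whole buffer on demand."""

    def __init__(self, memory_key="chat_history"):
        self.memory_key = memory_key
        self.turns = []

    def save_context(self, human_input, ai_output):
        self.turns.append(f"Human: {human_input}")
        self.turns.append(f"AI: {ai_output}")

    def load_memory_variables(self):
        # Returned under memory_key so it can fill {chat_history} in the template.
        return {self.memory_key: "\n".join(self.turns)}

memory = BufferMemory()
memory.save_context("Do you have fruits?", "We have Oranges, Apples and Bananas.")
print(memory.load_memory_variables()["chat_history"])
```

Because `memory_key="chat_history"` matches the placeholder name in our template, the chain can drop the buffer straight into the prompt on each call.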
Finally, we have instantiated the LLMChain. An LLMChain can be considered as an end-to-end wrapper around multiple individual components. It consists of a PromptTemplate, a model (like OpenAI), an optional memory, etc. This chain takes multiple input variables and uses the PromptTemplate to format them into a prompt. It then passes that to the model to get a response.
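Putting it together, one chain step is: load memory, format the prompt, call the model, save the new turn. The sketch below traces that flow with a stubbed model in place of the real OpenAI call (so it runs offline; `fake_llm` and `run_chain` are illustrative names, not LangChain APIs):

```python
TEMPLATE = ("You are a chatbot of Hyper SuperMarket.\n"
            "{chat_history}\n{human_input}")

def fake_llm(prompt):
    # Stand-in for the OpenAI call; the real chain sends `prompt` to the API.
    return "We have Oranges, Apples and Bananas."

chat_history = []

def run_chain(human_input):
    # 1. Fill the template with memory plus the new message (PromptTemplate's job).
    prompt = TEMPLATE.format(chat_history="\n".join(chat_history),
                             human_input=human_input)
    # 2. Send the formatted prompt to the model.
    response = fake_llm(prompt)
    # 3. Save the turn so the next call sees it (the memory's job).
    chat_history.append(f"Human: {human_input}")
    chat_history.append(f"AI: {response}")
    return response

print(run_chain("Do you have fruits?"))
```

`llm_chain.run(...)` performs this same load–format–call–save cycle, with the real OpenAI model in step 2.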
App UI with Streamlit
Now that we have built our chain, the only thing remaining is running the LLMChain from a user interface. I have used Streamlit to build the app's UI. Following is the Streamlit code I used for this chat app.
import streamlit as st

# Adding a heading for the chat with the Hyper SuperMarket name
st.title("Hyper SuperMarket")

# Initialize chat history
if "messages" not in st.session_state:
    st.session_state.messages = []

# Display chat messages from history on app rerun
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Accept user input (named user_input to avoid shadowing the PromptTemplate above)
if user_input := st.chat_input(""):
    # Add user message to chat history
    st.session_state.messages.append({"role": "user", "content": user_input})
    # Display user message in chat message container
    with st.chat_message("user"):
        st.markdown(user_input)
    # Display assistant response in chat message container
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        response = llm_chain.run(user_input)
        message_placeholder.markdown(response)
    st.session_state.messages.append({"role": "assistant", "content": response})
I have already provided comments within the Streamlit code to enhance comprehension. The important part here is when the user replies in the chat and the Bot is required to respond. In the above code, we observe that we pass the user's input to the LLM chain, where it undergoes processing by the model, and a response is generated and returned. We then display this response as part of the Bot's reply in the UI.
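One detail worth noting: Streamlit reruns the entire script on every interaction, which is why the message list lives in st.session_state rather than in an ordinary variable. The sketch below simulates that rerun pattern with a plain dict standing in for st.session_state (the `rerun` helper and the canned reply are illustrative, not Streamlit APIs):

```python
session_state = {}  # stand-in for st.session_state, which survives reruns

def rerun(user_message):
    """Simulate one Streamlit rerun triggered by a new chat input."""
    if "messages" not in session_state:       # first run: create the history
        session_state["messages"] = []
    session_state["messages"].append({"role": "user", "content": user_message})
    # llm_chain.run(user_message) would go here; use a canned reply instead.
    session_state["messages"].append({"role": "assistant", "content": "Noted!"})

rerun("Do you have fruits?")
rerun("2 Apples please")
print(len(session_state["messages"]))
```

An ordinary list would be recreated empty on every rerun; the session-state dict persists, so the full conversation can be redrawn each time.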
I have attached the full code below in case anyone wants to run it on their local machine. You just need to replace the <your-OpenAI-key> placeholder with your actual OpenAI API key.
import os
import streamlit as st
from langchain import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain import LLMChain, PromptTemplate
os.environ['OPENAI_API_KEY'] = "<your-OpenAI-key>"
template = """You are a chatbot of Hyper SuperMarket who is having a conversation with a human.
Greet the Human with the following message at the beginning of the chat:
"Thanks for contacting Hyper SuperMarket. How can I help you today?"
If you are asked a question which is out of context then reply the following:
"I don't have the answer at this moment, will check and let you know".
Hyper SuperMarket has 500 Oranges, 800 Apples and 400 Bananas. The price of each Orange is $1, each Apple is $1.2
and each Banana is $0.2. If a Human asks for fruit, only tell them what fruits are available and don't mention
the quantity.
Hyper SuperMarket provides free home delivery on orders above $100, and the delivery address should be within 5 miles.
If the Human wants to place an order, then get more clarity about the order, like "the order item" and "order item quantity".
If you need further clarification, ask questions. Take the order only once it is quantifiable.
Reconfirm the order with the Human by repeating the order they have placed. Once the order is confirmed by the Human,
ask for the delivery address where the order needs to be delivered.
{chat_history}
{human_input}
"""
prompt = PromptTemplate(
    input_variables=["chat_history", "human_input"], template=template
)
chat_memory = ConversationBufferMemory(memory_key="chat_history")
llm_chain = LLMChain(
    llm=OpenAI(),
    prompt=prompt,
    verbose=True,
    memory=chat_memory,
)

st.title("Hyper SuperMarket")

# Initialize chat history
if "messages" not in st.session_state:
    st.session_state.messages = []

# Display chat messages from history on app rerun
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Accept user input (named user_input to avoid shadowing the PromptTemplate above)
if user_input := st.chat_input(""):
    # Add user message to chat history
    st.session_state.messages.append({"role": "user", "content": user_input})
    # Display user message in chat message container
    with st.chat_message("user"):
        st.markdown(user_input)
    # Display assistant response in chat message container
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        response = llm_chain.run(user_input)
        message_placeholder.markdown(response)
    st.session_state.messages.append({"role": "assistant", "content": response})
The full code for the same can be found on Github: https://github.com/ritobrotos/langchain-work/tree/master/chat
I hope you found this post on building a chatbot with LangChain and Streamlit useful. If you have any questions about the topic, please don't hesitate to ask in the comment section; I will be more than happy to address them.
If you are new to LangChain, I encourage you to subscribe to this blog. I regularly publish fresh content about LangChain, providing you with valuable insights and updates.