One of the most powerful applications of LangChain is the simplicity with which we can augment LLMs with tools. A tool is typically a piece of software with an API that we can call programmatically and get a response from. An agent is a wrapper around an LLM that uses the model's reasoning ability to choose the best tools to solve a problem. We are going to cover:

  • What is an Agent

  • Using an Agent example

  • Dissecting the agent process

  • Building custom tools


Below is the code used in the video!

What is an Agent?

An agent uses an LLM to choose a sequence of actions to take to solve a problem. In chains, a sequence of actions is hardcoded. In agents, an LLM uses its reasoning to choose which actions to take and in which order.
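To make the contrast concrete, here is a minimal sketch of a chain, where the single step (format a prompt, call the LLM) is hardcoded ahead of time; the prompt text is just an example. The agent we build below decides its steps at runtime instead:

from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

llm = ChatOpenAI()

# The sequence of actions is fixed in advance: format the prompt, call the LLM.
prompt = PromptTemplate.from_template("Summarize the following text: {text}")
chain = LLMChain(llm=llm, prompt=prompt)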

Agent Example

Let’s see an example of how to use an agent. First, let’s install the wikipedia package:

%pip install wikipedia

Let’s initialize an agent:

from langchain.agents import initialize_agent, AgentType, load_tools
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI()
tools = load_tools(['wikipedia', 'llm-math'], llm=llm)

agent = initialize_agent(
    tools=tools, 
    llm=llm, 
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, 
    return_intermediate_steps=True,
    verbose=True,
)

And ask a question:

from langchain.callbacks import StdOutCallbackHandler

question = """
What is the current American president's age? 
Return the age multiplied by 82
"""

handler = StdOutCallbackHandler()
response = agent(
    {"input": question}, 
    callbacks=[handler]
)

response

The current American president's age multiplied by 82 is 6396.
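Because we passed return_intermediate_steps=True, the response is a dictionary containing the final answer along with the (action, observation) pairs the agent went through. Here is a quick sketch of how to inspect them (the exact content depends on the run):

answer = response["output"]
steps = response["intermediate_steps"]

# Each step is an (AgentAction, observation) pair.
for action, observation in steps:
    print(action.tool, "->", action.tool_input)
    print(observation)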

Let’s dissect the process

The agent runs through multiple iterations. In the first iteration, the LLM is prompted to answer with a thought, an action, and an action input. The action and action input are then parsed from the generated text and used to run the corresponding tool.
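The prompt follows the ReAct format used by the ZERO_SHOT_REACT_DESCRIPTION agent type. It looks roughly like this (the exact wording and tool descriptions come from LangChain and the loaded tools):

Answer the following questions as best you can. You have access to the following tools:

Wikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, facts, or historical events.
Calculator: Useful for when you need to answer questions about math.

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Wikipedia, Calculator]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: What is the current American president's age? Return the age multiplied by 82
Thought: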

In the second iteration, the prompt includes the context of the first iteration together with the response from the tool (the observation).

In the third iteration, the LLM is provided with the context of the first two iterations and continues its investigation.

In the fourth iteration, the LLM is convinced it knows the answer, so the agent exits the iterative process.
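Put together, the scratchpad the LLM sees by the last iteration looks roughly like this (the thoughts and observations below are only a sketch; the real ones depend on what the tools return):

Question: What is the current American president's age? Return the age multiplied by 82
Thought: I need to find out who the current American president is.
Action: Wikipedia
Action Input: current president of the United States
Observation: Joe Biden is the 46th and current president of the United States ...
Thought: I need to find Joe Biden's age.
Action: Wikipedia
Action Input: Joe Biden
Observation: Joe Biden ... assumed office at the age of 78 ...
Thought: I have the age. I need to multiply it by 82.
Action: Calculator
Action Input: 78 * 82
Observation: Answer: 6396
Thought: I now know the final answer.
Final Answer: The current American president's age multiplied by 82 is 6396.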

You can see a list of all the tools here!

Custom tools

We are going to provide a retrieval chain as a tool to the agent. Let’s create a database with some data:

from langchain.document_loaders import PyPDFLoader
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings

file_path = '...'

loader = PyPDFLoader(file_path=file_path)
data = loader.load_and_split()

embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(
    data, 
    embeddings, 
    collection_name="statistical_learning"
)

And let’s create a retrieval chain:

from langchain.chains import RetrievalQA

llm = ChatOpenAI()

chain = RetrievalQA.from_chain_type(
    llm=llm, 
    chain_type="stuff", 
    retriever=docsearch.as_retriever()
)
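Before handing the chain to an agent, we can sanity-check it directly (the question here is just a hypothetical example):

chain.run("What is the bias-variance tradeoff?")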

We can turn this chain into a tool using the Tool class:

from langchain.agents import Tool

description = """
useful for when you need to answer questions 
about Machine Learning and Statistical Learning. 
Input should be a fully formed question.
"""

retrieval_tool = Tool(
    name="Statistical Learning Book",
    func=chain.run,
    description=description,
)

We can append that tool to other tools when initializing the agent:

tools = load_tools(['wikipedia', "llm-math"], llm=llm)

tools.append(retrieval_tool)

agent = initialize_agent(
    tools=tools, 
    llm=llm, 
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, 
    verbose=True,
    return_intermediate_steps=True,
)

We can now run that agent with that new tool:

question = """
Can we use machine learning to predict the age of the president?
"""

response = agent(
    {"input": question}, 
    callbacks=[handler]
)

response

Yes, machine learning can be used to predict the age of the president, but the accuracy of the predictions would vary depending on the data used.
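As before, return_intermediate_steps=True lets us check which tools the agent actually used for this question (a quick sketch):

for action, observation in response["intermediate_steps"]:
    print(action.tool)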
