Add LangChain recipes (#485)
jeffxtang committed May 6, 2024
2 parents 0e54f56 + 17e68a0 commit c113695
Showing 6 changed files with 2,416 additions and 0 deletions.
61 changes: 61 additions & 0 deletions recipes/use_cases/agents/langchain/README.md
@@ -0,0 +1,61 @@
# LangChain <> Llama3 Cookbooks

LLM agents use [planning, memory, and tools](https://lilianweng.github.io/posts/2023-06-23-agent/) to accomplish tasks.

LangChain offers two ways to implement agents:

(1) Use [AgentExecutor](https://python.langchain.com/docs/modules/agents/quick_start/) with [tool-calling](https://python.langchain.com/docs/integrations/chat/) versions of Llama 3.

(2) Use [LangGraph](https://python.langchain.com/docs/langgraph), a library from LangChain that can be used to build reliable agents with Llama 3.

---

### AgentExecutor Agent

AgentExecutor is the runtime for an agent. AgentExecutor calls the agent, executes the actions it chooses, passes the action outputs back to the agent, and repeats.
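The loop described above can be sketched in plain Python. This is a minimal, library-free illustration of the idea, not LangChain's actual implementation; `run_agent_loop`, `toy_agent`, and the step dictionaries are hypothetical names invented for this sketch.

```python
# Hypothetical sketch of the AgentExecutor loop: the agent proposes an
# action, the executor runs the chosen tool, and the observation is fed
# back to the agent until it returns a final answer.

def run_agent_loop(agent, tools, user_input, max_steps=5):
    """Minimal agent-executor loop: plan, act, observe, repeat."""
    scratchpad = []  # (action, observation) pairs passed back to the agent
    for _ in range(max_steps):
        step = agent(user_input, scratchpad)
        if step["type"] == "final":
            return step["output"]
        observation = tools[step["tool"]](step["tool_input"])
        scratchpad.append((step, observation))
    raise RuntimeError("agent did not finish within max_steps")

# Toy agent: call the tool once, then answer with its observation.
def toy_agent(user_input, scratchpad):
    if not scratchpad:
        return {"type": "action", "tool": "add_two", "tool_input": 3}
    return {"type": "final", "output": scratchpad[-1][1]}

print(run_agent_loop(toy_agent, {"add_two": lambda x: x + 2}, "magic_function(3)?"))  # 5
```

The real AgentExecutor adds error handling, streaming, and prompt formatting on top of this loop, but the plan-act-observe cycle is the core.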

Our first notebook, `tool-calling-agent`, shows how to build a [tool calling agent](https://python.langchain.com/docs/modules/agents/agent_types/tool_calling/) with AgentExecutor and Llama 3.

The agent uses web search and retrieval tools.

---

### LangGraph Agent

[LangGraph](https://python.langchain.com/docs/langgraph) is a library from LangChain that can be used to build reliable agents.

LangGraph can be used to build agents with a few pieces:
- **Planning:** Define a control flow of steps that you want the agent to take (a graph)
- **Memory:** Persist information (graph state) across these steps
- **Tool use:** Modify state at any step
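The three pieces above can be sketched without the library. This is a simplified, library-free sketch of the graph idea (the real LangGraph API differs); `make_graph` and the node/edge names here are invented for illustration.

```python
# A minimal sketch of the graph idea: nodes are functions that read and
# update a shared state dict (memory), and edges define the control flow
# between steps (planning). Tools would modify state inside a node.

def make_graph(nodes, edges, entry):
    def run(state):
        current = entry
        while current is not None:
            state = nodes[current](state)                    # each step updates state
            nxt = edges[current]
            current = nxt(state) if callable(nxt) else nxt   # static or conditional edge
        return state
    return run

# Planning: plan -> act -> end; Memory: the state dict; Tool use: `act`.
nodes = {
    "plan": lambda s: {**s, "steps": s["steps"] + ["plan"]},
    "act":  lambda s: {**s, "steps": s["steps"] + ["act"], "result": s["x"] + 2},
}
edges = {"plan": "act", "act": None}
app = make_graph(nodes, edges, "plan")
print(app({"x": 3, "steps": []}))  # {'x': 3, 'steps': ['plan', 'act'], 'result': 5}
```

In LangGraph itself, the graph is declared with nodes and (conditional) edges and then compiled into a runnable, but the control-flow-over-shared-state picture is the same.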

Our second notebook, `langgraph-agent`, shows how to build a Llama 3 powered agent that uses web search and retrieval tools in LangGraph.

It discusses some of the trade-offs between AgentExecutor and LangGraph.

---

### LangGraph RAG Agent

Our third notebook, `langgraph-rag-agent`, shows how to apply LangGraph to build advanced Llama 3 powered RAG agents that use ideas from three papers:

* Corrective-RAG (CRAG) [paper](https://arxiv.org/pdf/2401.15884.pdf) uses self-grading on retrieved documents and web-search fallback if documents are not relevant.
* Self-RAG [paper](https://arxiv.org/abs/2310.11511) adds self-grading on generations for hallucinations and for ability to answer the question.
* Adaptive RAG [paper](https://arxiv.org/abs/2403.14403) routes queries between different RAG approaches based on their complexity.

We implement each approach as a control flow in LangGraph:
- **Planning:** The sequence of RAG steps (e.g., retrieval, grading, and generation) that we want the agent to take
- **Memory:** All the RAG-related information (input question, retrieved documents, etc.) that we want to pass between steps
- **Tool use:** All the tools needed for RAG (e.g., decide web search or vectorstore retrieval based on the question)
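The CRAG-style control flow described above can be sketched as a plain function. This is a toy sketch under stated assumptions: the retriever, grader, web search, and generator here are stand-in stubs invented for illustration, not the real tools used in the notebook.

```python
# Library-free sketch of the CRAG control flow: retrieve, self-grade the
# documents, fall back to web search if none are relevant, then generate.

def crag_flow(question, retrieve, grade, web_search, generate):
    state = {"question": question, "documents": retrieve(question)}
    # Self-grading: keep only documents the grader judges relevant.
    state["documents"] = [d for d in state["documents"] if grade(question, d)]
    if not state["documents"]:                  # nothing relevant: fall back
        state["documents"] = web_search(question)
    return generate(state["question"], state["documents"])

answer = crag_flow(
    "what is task decomposition?",
    retrieve=lambda q: ["irrelevant blurb"],
    grade=lambda q, d: "task decomposition" in d,   # toy relevance check
    web_search=lambda q: ["task decomposition splits a task into steps"],
    generate=lambda q, docs: docs[0],
)
print(answer)  # "task decomposition splits a task into steps"
```

Self-RAG and Adaptive RAG extend this flow with extra grading and routing steps, which is why expressing it as an explicit graph in LangGraph pays off.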

We will build from CRAG (blue, below) to Self-RAG (green) and finally to Adaptive RAG (red):

![Screenshot 2024-05-03 at 10 50 02 AM](https://github.com/rlancemartin/llama-recipes/assets/122662504/ec4aa1cd-3c7e-4cd1-a1e7-7deddc4033a8)

---

### Local LangGraph RAG Agent

Our fourth notebook, `langgraph-rag-agent-local`, shows how to apply LangGraph to build advanced Llama 3 RAG agents that run locally and reliably.

See this [video overview](https://www.youtube.com/watch?v=sgnrL7yo1TE) for more detail.
698 changes: 698 additions & 0 deletions recipes/use_cases/agents/langchain/langgraph-agent.ipynb


713 changes: 713 additions & 0 deletions recipes/use_cases/agents/langchain/langgraph-rag-agent-local.ipynb


643 changes: 643 additions & 0 deletions recipes/use_cases/agents/langchain/langgraph-rag-agent.ipynb


297 changes: 297 additions & 0 deletions recipes/use_cases/agents/langchain/tool-calling-agent.ipynb
@@ -0,0 +1,297 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "9856250d-51bb-49d3-962d-483178cfafab",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/meta-llama/llama-recipes/blob/main/recipes/use_cases/agents/langchain/tool-calling-agent.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "158a3f92-139a-4e92-9273-e018b027fc54",
"metadata": {},
"outputs": [],
"source": [
"! pip install -U langchain_groq langchain langchain_community sentence-transformers tavily-python tiktoken langchainhub chromadb"
]
},
{
"cell_type": "markdown",
"id": "745f7d9f-15c4-41c8-94f8-09e1426581cc",
"metadata": {},
"source": [
"# Tool calling agent with Llama 3\n",
"\n",
"[Tool calling](https://python.langchain.com/docs/modules/agents/agent_types/tool_calling/) allows an LLM to detect when one or more tools should be called.\n",
"\n",
"It will then respond with the inputs that should be passed to those tools. \n",
"\n",
"LangChain has a general agent that works with tool-calling LLMs.\n",
"\n",
"### Tools \n",
"\n",
"Let's define a few tools.\n",
"\n",
"`Retriever`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "16569e69-890c-4e0c-947b-35ed2e0cf0c7",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ['TAVILY_API_KEY'] = 'YOUR_TAVILY_API_KEY'\n",
"os.environ['GROQ_API_KEY'] = 'YOUR_GROQ_API_KEY'"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6debf81d-84d1-4645-aa25-07d24cdcbc2c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_community.embeddings import HuggingFaceEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"loader = WebBaseLoader(\"https://docs.smith.langchain.com/overview\")\n",
"docs = loader.load()\n",
"documents = RecursiveCharacterTextSplitter(\n",
" chunk_size=1000, chunk_overlap=200\n",
").split_documents(docs)\n",
"vector = Chroma.from_documents(documents, HuggingFaceEmbeddings())\n",
"retriever = vector.as_retriever()\n",
"\n",
"from langchain.tools.retriever import create_retriever_tool\n",
"retriever_tool = create_retriever_tool(\n",
" retriever,\n",
" \"langsmith_search\",\n",
" \"Search for information about LangSmith. For any questions about LangSmith, you must use this tool!\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "8454286f-f6d3-4ac0-a583-16315189d151",
"metadata": {},
"source": [
"`Web search`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8a4d9feb-80b7-4355-9cba-34e816400aa5",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.tools.tavily_search import TavilySearchResults\n",
"search = TavilySearchResults()"
]
},
{
"cell_type": "markdown",
"id": "42215e7b-3170-4311-8438-5a7b385ebb64",
"metadata": {},
"source": [
"`Custom`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c130481d-dc6f-48e0-b795-7e3a4438fb6a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import tool\n",
"\n",
"@tool\n",
"def magic_function(input: int) -> int:\n",
" \"\"\"Applies a magic function to an input.\"\"\"\n",
" return input + 2"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bfc0cfbe-d5ce-4c26-859f-5d29be121bef",
"metadata": {},
"outputs": [],
"source": [
"tools = [magic_function, search, retriever_tool] "
]
},
{
"cell_type": "markdown",
"id": "e88c2e1d-1503-4659-be4d-98900a69253f",
"metadata": {},
"source": [
"### LLM\n",
"\n",
"Here, we need a Llama 3 model that supports tool use.\n",
"\n",
"This can be accomplished via prompt engineering (e.g., see [here](https://replicate.com/hamelsmu/llama-3-70b-instruct-awq-with-tools)) or fine-tuning (e.g., see [here](https://huggingface.co/mzbac/llama-3-8B-Instruct-function-calling)).\n",
"\n",
"We can review LLMs that support tool calling [here](https://python.langchain.com/docs/integrations/chat/); Groq is among them.\n",
"\n",
"[Here](https://github.com/groq/groq-api-cookbook/blob/main/llama3-stock-market-function-calling/llama3-stock-market-function-calling.ipynb) is a notebook by Groq on function calling with Llama 3 and LangChain."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "99c919d2-198d-4c3b-85ba-0772bf7db383",
"metadata": {},
"outputs": [],
"source": [
"from langchain_groq import ChatGroq\n",
"llm = ChatGroq(temperature=0, model=\"llama3-70b-8192\")"
]
},
{
"cell_type": "markdown",
"id": "695ffb74-c278-4420-b10b-b18210d824eb",
"metadata": {},
"source": [
"### Agent\n",
"\n",
"We use LangChain [tool calling agent](https://python.langchain.com/docs/modules/agents/agent_types/tool_calling/). "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fae083a8-864c-4394-93e5-36d22aaa5fe3",
"metadata": {},
"outputs": [],
"source": [
"# Prompt \n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are a helpful assistant\"),\n",
" (\"human\", \"{input}\"),\n",
" MessagesPlaceholder(\"agent_scratchpad\"),\n",
" ]\n",
")\n",
"prompt.pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "421f9565-bc1a-4141-aae7-c6bcae2c63fc",
"metadata": {},
"outputs": [],
"source": [
"### Run\n",
"from langchain.agents import AgentExecutor, create_tool_calling_agent, tool\n",
"agent = create_tool_calling_agent(llm, tools, prompt)\n",
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "229372f4-abb3-4418-9444-cadc548a8155",
"metadata": {},
"outputs": [],
"source": [
"agent_executor.invoke({\"input\": \"what is the value of magic_function(3)?\"})"
]
},
{
"cell_type": "markdown",
"id": "016b447f-d374-4fc7-a1fe-0ce56856a763",
"metadata": {},
"source": [
"Trace: \n",
"\n",
"https://smith.langchain.com/public/adf06494-94d6-4e93-98f3-60e65d2f2c19/r"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6914b16c-be7a-4838-b080-b6af6b6e1417",
"metadata": {},
"outputs": [],
"source": [
"agent_executor.invoke({\"input\": \"whats the weather in sf?\"})"
]
},
{
"cell_type": "markdown",
"id": "9e363535-29b8-45d8-85b6-10d2e21f93bc",
"metadata": {},
"source": [
"Trace: https://smith.langchain.com/public/64a62781-7e3c-4acf-ae72-ce49ccb82960/r"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8ce1b38c-a22a-4035-a9a1-2ea0da419ade",
"metadata": {},
"outputs": [],
"source": [
"agent_executor.invoke({\"input\": \"how can langsmith help with testing?\"})"
]
},
{
"cell_type": "markdown",
"id": "e28cc79b-f6de-45fb-b7e5-84c119ba57da",
"metadata": {},
"source": [
"This last question failed to run. \n",
"\n",
"Trace: https://smith.langchain.com/public/960a40e9-24f1-42a0-859d-2e0a30018d1c/r\n",
"\n",
"We can see that the agent correctly decides to query the vectorstore for a question about LangSmith. But it then inexplicably attempts web search. And it appears to get stuck in a loop of calling various tools before crashing.\n",
"\n",
"Of course, this uses a non-fine-tuned (prompting-only) version of Llama 3 for tool use. But it illustrates the reliability challenge of AgentExecutor: it is sensitive to the LLM's capacity for tool use!\n",
"\n",
"In the next notebook, we will show an alternative way to implement this agent using LangGraph."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "55e7518c-e7d8-4ce7-9a4a-7909fb3a8b88",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
4 changes: 4 additions & 0 deletions scripts/spellcheck_conf/wordlist.txt
Expand Up @@ -1310,3 +1310,7 @@ leaderboards
txn
ollama
tavily
AgentExecutor
LangGraph
langgraph
vectorstore
