Add LangChain recipes #485

Merged 6 commits on May 6, 2024
Changes from 2 commits
51 changes: 51 additions & 0 deletions recipes/use_cases/langchain/README.md
@@ -0,0 +1,51 @@
# LangChain <> Llama3 Cookbooks

LLM agents use [planning, memory, and tools](https://lilianweng.github.io/posts/2023-06-23-agent/) to accomplish tasks.

LangChain offers several different ways to implement agents.

(1) Use [agent executor](https://python.langchain.com/docs/modules/agents/quick_start/) with [tool-calling](https://python.langchain.com/docs/integrations/chat/) versions of llama3.

(2) Use [LangGraph](https://python.langchain.com/docs/langgraph), a library from LangChain that can be used to build reliable agents.

---

### Agent Executor

Our first notebook, `tool-calling-agent`, shows how to build a [tool calling agent](https://python.langchain.com/docs/modules/agents/agent_types/tool_calling/) with agent executor.

The agent uses web search and retrieval tools.
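
The core pattern, condensed into a minimal sketch (it assumes a `GROQ_API_KEY` in the environment and uses the notebook's toy `magic_function` tool; the notebook also registers web-search and retriever tools):

```python
from langchain_groq import ChatGroq
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool

# A toy tool; the notebook also adds a Tavily web-search tool and a retriever tool
@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2

tools = [magic_function]

# Any chat model with tool-calling support works here
llm = ChatGroq(temperature=0, model="llama3-70b-8192")

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),  # holds intermediate tool calls and results
    ]
)

agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "what is the value of magic_function(3)?"})
```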

---

### LangGraph

[LangGraph](https://python.langchain.com/docs/langgraph) is a library from LangChain that can be used to build reliable agents.

LangGraph can be used to build agents with a few pieces (sketched in the code below):
- **Planning:** Define a control flow of steps that you want the agent to take (a graph)
- **Memory:** Persist information (graph state) across these steps
- **Tool use:** Tools can be used at any step to modify state
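
As a rough sketch of how these pieces fit together (the state schema and node names below are illustrative, not the exact ones used in the notebooks):

```python
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, END

# Memory: state that persists across the steps of the graph
class State(TypedDict):
    question: str
    answer: str

# Tool use: any node can call a tool and write the result back into state
def call_tool(state: State):
    return {"answer": f"result for {state['question']}"}  # e.g., a web-search or retriever call

# Planning: the control flow of steps, expressed as a graph
graph = StateGraph(State)
graph.add_node("call_tool", call_tool)
graph.set_entry_point("call_tool")
graph.add_edge("call_tool", END)

app = graph.compile()
app.invoke({"question": "What is LangSmith?"})
```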

Our second notebook, `langgraph-agent`, shows how to build an agent that uses web search and retrieval tools in LangGraph.

It discusses some of the trade-offs between agent executor and LangGraph.

Our third notebook, `langgraph-rag-agent`, shows how to apply LangGraph to build advanced RAG agents that use ideas from 3 papers:

* Corrective-RAG (CRAG) [paper](https://arxiv.org/pdf/2401.15884.pdf) uses self-grading on retrieved documents and web-search fallback if documents are not relevant.
* Self-RAG [paper](https://arxiv.org/abs/2310.11511) adds self-grading on generations for hallucinations and for ability to answer the question.
* Adaptive RAG [paper](https://arxiv.org/abs/2403.14403) routes queries between different RAG approaches based on their complexity.

We implement each approach as a control flow in LangGraph (see the sketch after this list):
- **Planning:** The sequence of RAG steps (e.g., retrieval, grading, generation) that we want the agent to take
- **Memory:** All the RAG-related information (input question, retrieved documents, etc) that we want to pass between steps
- **Tool use:** All the tools needed for RAG (e.g., decide web search or vectorstore retrieval based on the question)
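
A hedged sketch of the CRAG-style control flow (the node names, state fields, and stub implementations here are placeholders; the notebooks build full versions with LLM graders, a vectorstore retriever, and Tavily web search):

```python
from typing import List
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, END

class RAGState(TypedDict):
    question: str
    documents: List[str]
    relevant: bool
    generation: str

def retrieve(state):
    return {"documents": ["<retrieved chunk>"]}  # e.g., retriever.invoke(state["question"])

def grade_documents(state):
    return {"relevant": len(state["documents"]) > 0}  # e.g., an LLM relevance grader

def web_search(state):
    return {"documents": state["documents"] + ["<web result>"]}  # e.g., TavilySearchResults

def generate(state):
    return {"generation": f"answer grounded in {len(state['documents'])} documents"}  # e.g., an LLM call

def route(state):
    # Fall back to web search when the retrieved documents are not relevant
    return "generate" if state["relevant"] else "web_search"

rag = StateGraph(RAGState)
rag.add_node("retrieve", retrieve)
rag.add_node("grade_documents", grade_documents)
rag.add_node("web_search", web_search)
rag.add_node("generate", generate)

rag.set_entry_point("retrieve")
rag.add_edge("retrieve", "grade_documents")
rag.add_conditional_edges("grade_documents", route, {"generate": "generate", "web_search": "web_search"})
rag.add_edge("web_search", "generate")
rag.add_edge("generate", END)

app = rag.compile()
app.invoke({"question": "What is agent memory?"})
```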

We will build from CRAG (blue, below) to Self-RAG (green) and finally to Adaptive RAG (red):

![Screenshot 2024-05-03 at 10 50 02 AM](https://github.com/rlancemartin/llama-recipes/assets/122662504/ec4aa1cd-3c7e-4cd1-a1e7-7deddc4033a8)

Our fourth notebook, `langgraph-rag-agent-local`, shows how to apply LangGraph to build advanced RAG agents that run locally and reliably.

See this [video overview](https://www.youtube.com/watch?v=sgnrL7yo1TE) for more detail.
762 changes: 762 additions & 0 deletions recipes/use_cases/langchain/langgraph-agent.ipynb

Large diffs are not rendered by default.

803 changes: 803 additions & 0 deletions recipes/use_cases/langchain/langgraph-rag-agent-local.ipynb

Large diffs are not rendered by default.

724 changes: 724 additions & 0 deletions recipes/use_cases/langchain/langgraph-rag-agent.ipynb

Large diffs are not rendered by default.

343 changes: 343 additions & 0 deletions recipes/use_cases/langchain/tool-calling-agent.ipynb
@@ -0,0 +1,343 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "2f35e5e7-e3b2-4321-bb43-886575533d3d",
"metadata": {},
"outputs": [],
"source": [
"! pip install -U langchain_groq langchain langchain_community langchain_openai tavily-python tiktoken langchainhub chromadb"
]
},
{
"cell_type": "markdown",
"id": "745f7d9f-15c4-41c8-94f8-09e1426581cc",
"metadata": {},
"source": [
"# Tool calling agent with LLaMA3\n",
"\n",
"[Tool calling](https://python.langchain.com/docs/modules/agents/agent_types/tool_calling/) allows an LLM to detect when one or more tools should be called.\n",
"\n",
"It will then respond with the inputs that should be passed to those tools. \n",
"\n",
"LangChain has a general agent that works with tool-calling LLMs.\n",
"\n",
"### Tools \n",
"\n",
"Let's define a few tools.\n",
"\n",
"`Retriever`"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "6debf81d-84d1-4645-aa25-07d24cdcbc2c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"loader = WebBaseLoader(\"https://docs.smith.langchain.com/overview\")\n",
"docs = loader.load()\n",
"documents = RecursiveCharacterTextSplitter(\n",
" chunk_size=1000, chunk_overlap=200\n",
").split_documents(docs)\n",
"vector = Chroma.from_documents(documents, OpenAIEmbeddings())\n",
"retriever = vector.as_retriever()\n",
"\n",
"from langchain.tools.retriever import create_retriever_tool\n",
"retriever_tool = create_retriever_tool(\n",
" retriever,\n",
" \"langsmith_search\",\n",
" \"Search for information about LangSmith. For any questions about LangSmith, you must use this tool!\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "8454286f-f6d3-4ac0-a583-16315189d151",
"metadata": {},
"source": [
"`Web search`"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8a4d9feb-80b7-4355-9cba-34e816400aa5",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.tools.tavily_search import TavilySearchResults\n",
"search = TavilySearchResults()"
]
},
{
"cell_type": "markdown",
"id": "42215e7b-3170-4311-8438-5a7b385ebb64",
"metadata": {},
"source": [
"`Custom`"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "c130481d-dc6f-48e0-b795-7e3a4438fb6a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import tool\n",
"\n",
"@tool\n",
"def magic_function(input: int) -> int:\n",
" \"\"\"Applies a magic function to an input.\"\"\"\n",
" return input + 2"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "bfc0cfbe-d5ce-4c26-859f-5d29be121bef",
"metadata": {},
"outputs": [],
"source": [
"tools = [magic_function, search, retriever_tool] "
]
},
{
"cell_type": "markdown",
"id": "e88c2e1d-1503-4659-be4d-98900a69253f",
"metadata": {},
"source": [
"### LLM\n",
"\n",
"Here, we need a llama model that support tool use.\n",
"\n",
"This can be accomplished via prompt engineering (e.g., see [here](https://replicate.com/hamelsmu/llama-3-70b-instruct-awq-with-tools)) or fine-tuning (e.g., see [here](https://huggingface.co/mzbac/llama-3-8B-Instruct-function-calling) and [here](https://huggingface.co/mzbac/llama-3-8B-Instruct-function-calling)).\n",
"\n",
"We can review LLMs that support tool calling [here](https://python.langchain.com/docs/integrations/chat/) and Groq is included.\n",
"\n",
"[Here](https://github.com/groq/groq-api-cookbook/blob/main/llama3-stock-market-function-calling/llama3-stock-market-function-calling.ipynb) is a reference for Groq + tool use."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "99c919d2-198d-4c3b-85ba-0772bf7db383",
"metadata": {},
"outputs": [],
"source": [
"from langchain_groq import ChatGroq\n",
"llm = ChatGroq(temperature=0, model=\"llama3-70b-8192\")"
]
},
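{
"cell_type": "markdown",
"id": "groq-toolcall-check-md",
"metadata": {},
"source": [
"As a quick sanity check (a minimal sketch, separate from the agent below), we can bind the tools to the model and inspect the structured tool call it returns."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "groq-toolcall-check-code",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: bind the tools defined above to the model and inspect its tool call\n",
"llm_with_tools = llm.bind_tools(tools)\n",
"msg = llm_with_tools.invoke(\"what is the value of magic_function(3)?\")\n",
"msg.tool_calls  # e.g., [{'name': 'magic_function', 'args': {'input': 3}, ...}]"
]
},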
{
"cell_type": "markdown",
"id": "695ffb74-c278-4420-b10b-b18210d824eb",
"metadata": {},
"source": [
"### Agent\n",
"\n",
"We use LangChain [tool calling agent](https://python.langchain.com/docs/modules/agents/agent_types/tool_calling/). "
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "fae083a8-864c-4394-93e5-36d22aaa5fe3",
"metadata": {},
"outputs": [],
"source": [
"# Prompt \n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are a helpful assistant\"),\n",
" (\"human\", \"{input}\"),\n",
" MessagesPlaceholder(\"agent_scratchpad\"),\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "421f9565-bc1a-4141-aae7-c6bcae2c63fc",
"metadata": {},
"outputs": [],
"source": [
"### Run\n",
"from langchain.agents import AgentExecutor, create_tool_calling_agent, tool\n",
"agent = create_tool_calling_agent(llm, tools, prompt)\n",
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "229372f4-abb3-4418-9444-cadc548a8155",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `magic_function` with `{'input': 3}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m5\u001b[0m\u001b[32;1m\u001b[1;3mThe result of `magic_function(3)` is indeed 5.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': 'what is the value of magic_function(3)?',\n",
" 'output': 'The result of `magic_function(3)` is indeed 5.'}"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.invoke({\"input\": \"what is the value of magic_function(3)?\"})"
]
},
{
"cell_type": "markdown",
"id": "016b447f-d374-4fc7-a1fe-0ce56856a763",
"metadata": {},
"source": [
"Trace: \n",
"\n",
"https://smith.langchain.com/public/adf06494-94d6-4e93-98f3-60e65d2f2c19/r"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "6914b16c-be7a-4838-b080-b6af6b6e1417",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `tavily_search_results_json` with `{'query': 'current weather in san francisco'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[33;1m\u001b[1;3m[{'url': 'https://www.weatherapi.com/', 'content': \"{'location': {'name': 'San Francisco', 'region': 'California', 'country': 'United States of America', 'lat': 37.78, 'lon': -122.42, 'tz_id': 'America/Los_Angeles', 'localtime_epoch': 1714766520, 'localtime': '2024-05-03 13:02'}, 'current': {'last_updated_epoch': 1714766400, 'last_updated': '2024-05-03 13:00', 'temp_c': 17.8, 'temp_f': 64.0, 'is_day': 1, 'condition': {'text': 'Partly cloudy', 'icon': '//cdn.weatherapi.com/weather/64x64/day/116.png', 'code': 1003}, 'wind_mph': 6.9, 'wind_kph': 11.2, 'wind_degree': 250, 'wind_dir': 'WSW', 'pressure_mb': 1014.0, 'pressure_in': 29.95, 'precip_mm': 0.0, 'precip_in': 0.0, 'humidity': 54, 'cloud': 25, 'feelslike_c': 17.8, 'feelslike_f': 64.0, 'vis_km': 16.0, 'vis_miles': 9.0, 'uv': 5.0, 'gust_mph': 17.0, 'gust_kph': 27.4}}\"}, {'url': 'https://www.wunderground.com/hourly/us/ca/san-francisco/94134/date/2024-05-03', 'content': 'San Francisco Weather Forecasts. Weather Underground provides local & long-range weather forecasts, weatherreports, maps & tropical weather conditions for the San Francisco area. ... Friday 05/03 ...'}, {'url': 'https://www.accuweather.com/en/us/san-francisco/94103/weather-forecast/347629', 'content': 'San Francisco, CA Weather Forecast, with current conditions, wind, air quality, and what to expect for the next 3 days.'}, {'url': 'https://forecast.weather.gov/zipcity.php?inputstring=San francisco,CA', 'content': 'San Francisco CA 37.77°N 122.41°W (Elev. 131 ft) Last Update: 1:25 am PDT May 2, 2024. Forecast Valid: 4am PDT May 2, 2024-6pm PDT May 8, 2024 . Forecast Discussion . Additional Resources. Radar & Satellite Image. Hourly Weather Forecast. ... Severe Weather ; Current Outlook Maps ; Drought ; Fire Weather ; Fronts/Precipitation Maps ; Current ...'}, {'url': 'https://www.timeanddate.com/weather/usa/san-francisco/hourly', 'content': 'Hour-by-Hour Forecast for San Francisco, California, USA. Weather Today Weather Hourly 14 Day Forecast Yesterday/Past Weather Climate (Averages) Currently: 51 °F. Clear. (Weather station: San Francisco International Airport, USA). See more current weather.'}]\u001b[0m\u001b[32;1m\u001b[1;3mThe current weather in San Francisco is partly cloudy with a temperature of 64°F (17.8°C) and humidity of 54%. The wind is blowing at 6.9 mph (11.2 km/h) from the west-southwest direction.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': 'whats the weather in sf?',\n",
" 'output': 'The current weather in San Francisco is partly cloudy with a temperature of 64°F (17.8°C) and humidity of 54%. The wind is blowing at 6.9 mph (11.2 km/h) from the west-southwest direction.'}"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.invoke({\"input\": \"whats the weather in sf?\"})"
]
},
{
"cell_type": "markdown",
"id": "9e363535-29b8-45d8-85b6-10d2e21f93bc",
"metadata": {},
"source": [
"Trace: \n",
"\n",
"https://smith.langchain.com/public/64a62781-7e3c-4acf-ae72-ce49ccb82960/r"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8ce1b38c-a22a-4035-a9a1-2ea0da419ade",
"metadata": {},
"outputs": [],
"source": [
"agent_executor.invoke({\"input\": \"how can langsmith help with testing?\"})"
]
},
{
"cell_type": "markdown",
"id": "e28cc79b-f6de-45fb-b7e5-84c119ba57da",
"metadata": {},
"source": [
"This last question failed to run. \n",
"\n",
"Trace:\n",
"\n",
"https://smith.langchain.com/public/960a40e9-24f1-42a0-859d-2e0a30018d1c/r\n",
"\n",
"We can see that the agent correctly decides to query the vectorstore for a question about LangSmith.\n",
"\n",
"But it then inexplicably attempts web search. \n",
"\n",
"And it appears to get stuck in a loop of calling various tools before crashing.\n",
"\n",
"Of course, this is using a non-fine-tuned (only prompting) version of llama3 for tool-use.\n",
"\n",
"But, it illustates the reliability challenge with using Agent Executor. \n",
"\n",
"It is sensitive to the LLMs capacity for tool-use! \n",
"\n",
"In the next notebook, we will show an alternative way to implement this agent using LangGraph."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "55e7518c-e7d8-4ce7-9a4a-7909fb3a8b88",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}