
How to add thread-level persistence to your graph

Prerequisites

This guide assumes familiarity with the following:

Many AI applications need memory to share context across multiple interactions. In LangGraph, this kind of memory can be added to any StateGraph using thread-level persistence.

When creating any LangGraph graph, you can set it up to persist its state by adding a checkpointer when compiling the graph:

from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
graph.compile(checkpointer=checkpointer)

API Reference: MemorySaver

This guide shows how to add thread-level persistence to your graph.

Note

If you need memory that is shared across multiple conversations or users (cross-thread persistence), check out this how-to guide.

Setup

First we need to install the packages required

%%capture --no-stderr
%pip install --quiet -U langgraph langchain_anthropic

Next, we need to set the API key for Anthropic (the LLM we will use).

import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("ANTHROPIC_API_KEY")
ANTHROPIC_API_KEY:  ········

Set up LangSmith for LangGraph development

Sign up for LangSmith to quickly spot problems and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph; read more about how to get started here.
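For example, tracing can be switched on with a couple of environment variables. This is an optional sketch; the API key value below is a placeholder for your own LangSmith key:

```python
import os

# Enable LangSmith tracing for subsequent LangGraph runs
os.environ["LANGCHAIN_TRACING_V2"] = "true"
# Placeholder: replace with your real LangSmith API key
os.environ.setdefault("LANGCHAIN_API_KEY", "<your-langsmith-api-key>")
```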

Define graph

We will be using a single-node graph that calls a chat model.

Let's first define the model we'll be using:

from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-3-5-sonnet-20240620")

API Reference: ChatAnthropic

Now we can define our StateGraph and add our model-calling node to it:

from typing import Annotated
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, MessagesState, START


def call_model(state: MessagesState):
    response = model.invoke(state["messages"])
    return {"messages": response}


builder = StateGraph(MessagesState)
builder.add_node("call_model", call_model)
builder.add_edge(START, "call_model")
graph = builder.compile()

API Reference: StateGraph | START

If we try to use this graph, the context of the conversation will not be persisted across interactions:

input_message = {"role": "user", "content": "hi! I'm bob"}
for chunk in graph.stream({"messages": [input_message]}, stream_mode="values"):
    chunk["messages"][-1].pretty_print()

input_message = {"role": "user", "content": "what's my name?"}
for chunk in graph.stream({"messages": [input_message]}, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
================================ Human Message =================================

hi! I'm bob
================================== Ai Message ==================================

Hello Bob! It's nice to meet you. How are you doing today? Is there anything I can help you with or would you like to chat about something in particular?
================================ Human Message =================================

what's my name?
================================== Ai Message ==================================

I apologize, but I don't have access to your personal information, including your name. I'm an AI language model designed to provide general information and answer questions to the best of my ability based on my training data. I don't have any information about individual users or their personal details. If you'd like to share your name, you're welcome to do so, but I won't be able to recall it in future conversations.

Add persistence

To add persistence, we need to pass in a Checkpointer when compiling the graph.

from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
graph = builder.compile(checkpointer=memory)

API Reference: MemorySaver

Note

If you're using LangGraph Cloud or LangGraph Studio, you don't need to pass the checkpointer when compiling the graph, since it's done automatically.

We can now interact with the agent and see that it remembers previous messages!

config = {"configurable": {"thread_id": "1"}}
input_message = {"role": "user", "content": "hi! I'm bob"}
for chunk in graph.stream({"messages": [input_message]}, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
================================ Human Message =================================

hi! I'm bob
================================== Ai Message ==================================

Hello Bob! It's nice to meet you. How are you doing today? Is there anything in particular you'd like to chat about or any questions you have that I can help you with?
You can always resume previous threads:

input_message = {"role": "user", "content": "what's my name?"}
for chunk in graph.stream({"messages": [input_message]}, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
================================ Human Message =================================

what's my name?
================================== Ai Message ==================================

Your name is Bob, as you introduced yourself at the beginning of our conversation.
If we want to start a new conversation, we can pass in a different thread_id. Poof! All the memories are gone!

input_message = {"role": "user", "content": "what's my name?"}
for chunk in graph.stream(
    {"messages": [input_message]},
    {"configurable": {"thread_id": "2"}},
    stream_mode="values",
):
    chunk["messages"][-1].pretty_print()
================================ Human Message =================================

what's my name?
================================== Ai Message ==================================

I apologize, but I don't have access to your personal information, including your name. As an AI language model, I don't have any information about individual users unless it's provided within the conversation. If you'd like to share your name, you're welcome to do so, but otherwise, I won't be able to know or guess it.