How to delete messages¶
One of the common states for a graph is a list of messages. Usually you only add messages to that state. However, sometimes you may want to remove messages (either by directly modifying the state or as part of the graph). To do that, you can use the RemoveMessage modifier. In this guide, we will cover how to do that.
The key idea is that each state key has a reducer key. This key specifies how to combine updates to the state. The default MessagesState has a messages key whose reducer accepts these RemoveMessage modifiers. That reducer then uses these RemoveMessage modifiers to delete messages from the key.
So note that just because your graph state has a key that is a list of messages, it does not mean that this RemoveMessage modifier will work. You also have to define a reducer that knows how to work with them.
Note: Many models expect message lists to follow certain rules. For example, some models expect the list to start with a user message, while others expect all messages with tool calls to be followed by a tool message. When deleting messages, you will need to make sure you don't violate these rules.
Setup¶
First, let's build a simple graph that uses messages. Note that it uses MessagesState, which includes the required reducer.
Next, we need to set the API key for Anthropic (the LLM we will use).
import getpass
import os
def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")
_set_env("ANTHROPIC_API_KEY")
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph; read more about how to get started here.
Build the agent¶
Now, let's build a simple ReAct-style agent.
from typing import Literal
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import MessagesState, StateGraph, START, END
from langgraph.prebuilt import ToolNode
memory = MemorySaver()
@tool
def search(query: str):
"""Call to surf the web."""
# This is a placeholder for the actual implementation
# Don't let the LLM know this though 😊
return "It's sunny in San Francisco, but you better look out if you're a Gemini 😈."
tools = [search]
tool_node = ToolNode(tools)
model = ChatAnthropic(model_name="claude-3-haiku-20240307")
bound_model = model.bind_tools(tools)
def should_continue(state: MessagesState):
"""Return the next node to execute."""
last_message = state["messages"][-1]
# If there is no function call, then we finish
if not last_message.tool_calls:
return END
# Otherwise if there is, we continue
return "action"
# Define the function that calls the model
def call_model(state: MessagesState):
    # Use the tool-bound model so the LLM can emit tool calls
    response = bound_model.invoke(state["messages"])
    # We return a list, because this will get added to the existing list
    return {"messages": response}
# Define a new graph
workflow = StateGraph(MessagesState)
# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)
# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.add_edge(START, "agent")
# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
    # Next, we pass in the path map - all the possible nodes this edge could go to
    ["action", END],
)
# We now add a normal edge from `tools` to `agent`.
# This means that after `tools` is called, `agent` node is called next.
workflow.add_edge("action", "agent")
# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile(checkpointer=memory)
API Reference: ChatAnthropic | tool | MemorySaver | StateGraph | START | END | ToolNode
from langchain_core.messages import HumanMessage
config = {"configurable": {"thread_id": "2"}}
input_message = HumanMessage(content="hi! I'm bob")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
event["messages"][-1].pretty_print()
input_message = HumanMessage(content="what's my name?")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
event["messages"][-1].pretty_print()
API Reference: HumanMessage
================================ Human Message =================================
hi! I'm bob
================================== Ai Message ==================================
It's nice to meet you, Bob! I'm an AI assistant created by Anthropic. I'm here to help out with any questions or tasks you might have. Please let me know if there's anything I can assist you with.
================================ Human Message =================================
what's my name?
================================== Ai Message ==================================
You said your name is Bob.
Manually deleting messages¶
First, we will cover how to manually delete messages. Let's take a look at the current state of the thread:
messages = app.get_state(config).values["messages"]
messages
[HumanMessage(content="hi! I'm bob", additional_kwargs={}, response_metadata={}, id='db576005-3a60-4b3b-8925-dc602ac1c571'),
AIMessage(content="It's nice to meet you, Bob! I'm an AI assistant created by Anthropic. I'm here to help out with any questions or tasks you might have. Please let me know if there's anything I can assist you with.", additional_kwargs={}, response_metadata={'id': 'msg_01BKAnYxmoC6bQ9PpCuHk8ZT', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 12, 'output_tokens': 52}}, id='run-3a60c536-b207-4c56-98f3-03f94d49a9e4-0', usage_metadata={'input_tokens': 12, 'output_tokens': 52, 'total_tokens': 64}),
HumanMessage(content="what's my name?", additional_kwargs={}, response_metadata={}, id='2088c465-400b-430b-ad80-fad47dc1f2d6'),
AIMessage(content='You said your name is Bob.', additional_kwargs={}, response_metadata={'id': 'msg_013UWTLTzwZi81vke8mMQ2KP', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 72, 'output_tokens': 10}}, id='run-3a6883be-0c52-4938-af98-e9e7476659eb-0', usage_metadata={'input_tokens': 72, 'output_tokens': 10, 'total_tokens': 82})]
We can call update_state and pass in the id of the first message. This will delete that message.
from langchain_core.messages import RemoveMessage
app.update_state(config, {"messages": RemoveMessage(id=messages[0].id)})
API Reference: RemoveMessage
{'configurable': {'thread_id': '2',
'checkpoint_ns': '',
'checkpoint_id': '1ef75157-f251-6a2a-8005-82a86a6593a0'}}
If we now look at the messages, we can verify that the first one was deleted.
messages = app.get_state(config).values["messages"]
messages
[AIMessage(content="It's nice to meet you, Bob! I'm Claude, an AI assistant created by Anthropic. How can I assist you today?", response_metadata={'id': 'msg_01XPSAenmSqK8rX2WgPZHfz7', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 12, 'output_tokens': 32}}, id='run-1c69af09-adb1-412d-9010-2456e5a555fb-0', usage_metadata={'input_tokens': 12, 'output_tokens': 32, 'total_tokens': 44}),
HumanMessage(content="what's my name?", id='f3c71afe-8ce2-4ed0-991e-65021f03b0a5'),
AIMessage(content='Your name is Bob, as you introduced yourself at the beginning of our conversation.', response_metadata={'id': 'msg_01BPZdwsjuMAbC1YAkqawXaF', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 52, 'output_tokens': 19}}, id='run-b2eb9137-2f4e-446f-95f5-3d5f621a2cf8-0', usage_metadata={'input_tokens': 52, 'output_tokens': 19, 'total_tokens': 71})]
Programmatically deleting messages¶
We can also delete messages programmatically from inside the graph. Here we will modify the graph to delete any old messages (earlier than the last three) at the end of a graph run.
from langchain_core.messages import RemoveMessage
from langgraph.graph import END
def delete_messages(state):
messages = state["messages"]
if len(messages) > 3:
return {"messages": [RemoveMessage(id=m.id) for m in messages[:-3]]}
# We need to modify the logic to call delete_messages rather than end right away
def should_continue(state: MessagesState) -> Literal["action", "delete_messages"]:
"""Return the next node to execute."""
last_message = state["messages"][-1]
# If there is no function call, then we call our delete_messages function
if not last_message.tool_calls:
return "delete_messages"
# Otherwise if there is, we continue
return "action"
# Define a new graph
workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)
# This is our new node we're defining
workflow.add_node(delete_messages)
workflow.add_edge(START, "agent")
workflow.add_conditional_edges(
"agent",
should_continue,
)
workflow.add_edge("action", "agent")
# This is the new edge we're adding: after we delete messages, we finish
workflow.add_edge("delete_messages", END)
app = workflow.compile(checkpointer=memory)
API Reference: RemoveMessage | END
Let's now try this out. We can call the graph twice and then check the state.
from langchain_core.messages import HumanMessage
config = {"configurable": {"thread_id": "3"}}
input_message = HumanMessage(content="hi! I'm bob")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    print([(message.type, message.content) for message in event["messages"]])
input_message = HumanMessage(content="what's my name?")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    print([(message.type, message.content) for message in event["messages"]])
API Reference: HumanMessage
[('human', "hi! I'm bob")]
[('human', "hi! I'm bob"), ('ai', "Hello Bob! It's nice to meet you. I'm an AI assistant created by Anthropic. I'm here to help with any questions or tasks you might have. Please let me know how I can assist you.")]
[('human', "hi! I'm bob"), ('ai', "Hello Bob! It's nice to meet you. I'm an AI assistant created by Anthropic. I'm here to help with any questions or tasks you might have. Please let me know how I can assist you."), ('human', "what's my name?")]
[('human', "hi! I'm bob"), ('ai', "Hello Bob! It's nice to meet you. I'm an AI assistant created by Anthropic. I'm here to help with any questions or tasks you might have. Please let me know how I can assist you."), ('human', "what's my name?"), ('ai', 'You said your name is Bob, so that is the name I have for you.')]
[('ai', "Hello Bob! It's nice to meet you. I'm an AI assistant created by Anthropic. I'm here to help with any questions or tasks you might have. Please let me know how I can assist you."), ('human', "what's my name?"), ('ai', 'You said your name is Bob, so that is the name I have for you.')]
[AIMessage(content="Hello Bob! It's nice to meet you. I'm an AI assistant created by Anthropic. I'm here to help with any questions or tasks you might have. Please let me know how I can assist you.", response_metadata={'id': 'msg_01XPEgPPbcnz5BbGWUDWTmzG', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 12, 'output_tokens': 48}}, id='run-eded3820-b6a9-4d66-9210-03ca41787ce6-0', usage_metadata={'input_tokens': 12, 'output_tokens': 48, 'total_tokens': 60}),
HumanMessage(content="what's my name?", id='a0ea2097-3280-402b-92e1-67177b807ae8'),
AIMessage(content='You said your name is Bob, so that is the name I have for you.', response_metadata={'id': 'msg_01JGT62pxhrhN4SykZ57CSjW', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 68, 'output_tokens': 20}}, id='run-ace3519c-81f8-45fe-a777-91f42d48b3a3-0', usage_metadata={'input_tokens': 68, 'output_tokens': 20, 'total_tokens': 88})]
Remember, when deleting messages you will want to make sure the remaining message list is still valid. This message list **may in fact not be**, because it currently starts with an AI message, which some models do not allow.