Customize state¶
In this tutorial, you will add additional fields to the state to define complex behavior without relying on the message list. The chatbot will use its search tool to find specific information and forward it to a human for review.
Note
This tutorial builds on Add human-in-the-loop controls.
1. Add keys to the state¶
Update the chatbot to research the birthday of an entity by adding name and birthday keys to the state:
API Reference: add_messages
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages
class State(TypedDict):
    messages: Annotated[list, add_messages]
    name: str
    birthday: str
Adding this information to the state makes it easily accessible to other graph nodes (e.g., downstream nodes that store or process the information), as well as to the graph's persistence layer.
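For example, a downstream node could read these keys directly off the shared state. The node below is a hypothetical sketch for illustration only; it is not part of the tutorial graph:

def inspect_birthday(state: State):
    # Hypothetical downstream node: `name` and `birthday` are ordinary state
    # keys, so any node (or the persistence layer) can read them directly.
    print(f"{state['name']}'s birthday is {state['birthday']}")
    return {}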
2. Update the state inside the tool¶
Now, populate the state keys inside of the human_assistance tool. This allows a human to review the information before it is stored in the state. Use Command to issue a state update from inside the tool.
from langchain_core.messages import ToolMessage
from langchain_core.tools import InjectedToolCallId, tool
from langgraph.types import Command, interrupt
@tool
# Note that because we are generating a ToolMessage for a state update, we
# generally require the ID of the corresponding tool call. We can use
# LangChain's InjectedToolCallId to signal that this argument should not
# be revealed to the model in the tool's schema.
def human_assistance(
    name: str, birthday: str, tool_call_id: Annotated[str, InjectedToolCallId]
) -> str:
    """Request assistance from a human."""
    human_response = interrupt(
        {
            "question": "Is this correct?",
            "name": name,
            "birthday": birthday,
        },
    )
    # If the information is correct, update the state as-is.
    if human_response.get("correct", "").lower().startswith("y"):
        verified_name = name
        verified_birthday = birthday
        response = "Correct"
    # Otherwise, receive information from the human reviewer.
    else:
        verified_name = human_response.get("name", name)
        verified_birthday = human_response.get("birthday", birthday)
        response = f"Made a correction: {human_response}"

    # This time we explicitly update the state with a ToolMessage inside the tool.
    state_update = {
        "name": verified_name,
        "birthday": verified_birthday,
        "messages": [ToolMessage(response, tool_call_id=tool_call_id)],
    }
    # We return a Command object in the tool to update the state.
    return Command(update=state_update)
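Because the tool returns a Command instead of a plain string, the prebuilt ToolNode that executes it applies the update directly to the graph state rather than treating it as an ordinary tool result.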
The rest of the graph stays the same.
3. Prompt the chatbot¶
Prompt the chatbot to look up the "birthday" of the LangGraph library (i.e., when it was released), and direct the chatbot to use the human_assistance tool once it has the required information. By setting name and birthday in the tool's arguments, you force the chatbot to generate proposals for these fields.
user_input = (
    "Can you look up when LangGraph was released? "
    "When you have the answer, use the human_assistance tool for review."
)
config = {"configurable": {"thread_id": "1"}}

events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================ Human Message =================================
Can you look up when LangGraph was released? When you have the answer, use the human_assistance tool for review.
================================== Ai Message ==================================
[{'text': "Certainly! I'll start by searching for information about LangGraph's release date using the Tavily search function. Then, I'll use the human_assistance tool for review.", 'type': 'text'}, {'id': 'toolu_01JoXQPgTVJXiuma8xMVwqAi', 'input': {'query': 'LangGraph release date'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Tool Calls:
tavily_search_results_json (toolu_01JoXQPgTVJXiuma8xMVwqAi)
Call ID: toolu_01JoXQPgTVJXiuma8xMVwqAi
Args:
query: LangGraph release date
================================= Tool Message =================================
Name: tavily_search_results_json
[{"url": "https://blog.langchain.dev/langgraph-cloud/", "content": "We also have a new stable release of LangGraph. By LangChain 6 min read Jun 27, 2024 (Oct '24) Edit: Since the launch of LangGraph Platform, we now have multiple deployment options alongside LangGraph Studio - which now fall under LangGraph Platform. LangGraph Platform is synonymous with our Cloud SaaS deployment option."}, {"url": "https://changelog.langchain.com/announcements/langgraph-cloud-deploy-at-scale-monitor-carefully-iterate-boldly", "content": "LangChain - Changelog | ☁ 🚀 LangGraph Platform: Deploy at scale, monitor LangChain LangSmith LangGraph LangChain LangSmith LangGraph LangChain LangSmith LangGraph LangChain Changelog Sign up for our newsletter to stay up to date DATE: The LangChain Team LangGraph LangGraph Platform ☁ 🚀 LangGraph Platform: Deploy at scale, monitor carefully, iterate boldly DATE: June 27, 2024 AUTHOR: The LangChain Team LangGraph Platform is now in closed beta, offering scalable, fault-tolerant deployment for LangGraph agents. LangGraph Platform also includes a new playground-like studio for debugging agent failure modes and quick iteration: Join the waitlist today for LangGraph Platform. And to learn more, read our blog post announcement or check out our docs. Subscribe By clicking subscribe, you accept our privacy policy and terms and conditions."}]
================================== Ai Message ==================================
[{'text': "Based on the search results, it appears that LangGraph was already in existence before June 27, 2024, when LangGraph Platform was announced. However, the search results don't provide a specific release date for the original LangGraph. \n\nGiven this information, I'll use the human_assistance tool to review and potentially provide more accurate information about LangGraph's initial release date.", 'type': 'text'}, {'id': 'toolu_01JDQAV7nPqMkHHhNs3j3XoN', 'input': {'name': 'Assistant', 'birthday': '2023-01-01'}, 'name': 'human_assistance', 'type': 'tool_use'}]
Tool Calls:
human_assistance (toolu_01JDQAV7nPqMkHHhNs3j3XoN)
Call ID: toolu_01JDQAV7nPqMkHHhNs3j3XoN
Args:
name: Assistant
birthday: 2023-01-01
We've hit the interrupt in the human_assistance tool again.
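As in the previous tutorial, you can confirm that execution is paused by checking the state snapshot. This quick check is an optional aside, not part of the original walkthrough:

snapshot = graph.get_state(config)
snapshot.next  # expected to show ('tools',), since we are paused inside the tools node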
4. Add human assistance¶
The chatbot failed to identify the correct date, so supply it with the information:
human_command = Command(
    resume={
        "name": "LangGraph",
        "birthday": "Jan 17, 2024",
    },
)

events = graph.stream(human_command, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================== Ai Message ==================================
[{'text': "根据搜索结果,看来 LangGraph 在 2024 年 6 月 27 日 LangGraph Platform 宣布之前就已经存在了。然而,搜索结果并未提供原始 LangGraph 的具体发布日期。\n\n基于这些信息,我将使用 human_assistance 工具来审查并可能提供有关 LangGraph 初始发布日期的更准确信息。", 'type': 'text'}, {'id': 'toolu_01JDQAV7nPqMkHHhNs3j3XoN', 'input': {'name': 'Assistant', 'birthday': '2023-01-01'}, 'name': 'human_assistance', 'type': 'tool_use'}]
Tool Calls:
human_assistance (toolu_01JDQAV7nPqMkHHhNs3j3XoN)
Call ID: toolu_01JDQAV7nPqMkHHhNs3j3XoN
Args:
name: Assistant
birthday: 2023-01-01
================================= Tool Message =================================
Name: human_assistance
Made a correction: {'name': 'LangGraph', 'birthday': 'Jan 17, 2024'}
================================== Ai Message ==================================
Thank you for the human assistance. I can now provide you with the correct information about LangGraph's release date.
LangGraph was initially released on January 17, 2024. This information comes from the human assistance correction, which is more accurate than the search results I initially found.
To summarize:
1. LangGraph's original release date: January 17, 2024
2. LangGraph Platform announcement: June 27, 2024
It's worth noting that LangGraph had been in development and use for some time before the LangGraph Platform announcement, but the official initial release of LangGraph itself was January 17, 2024.
Note that these fields are now reflected in the state:
snapshot = graph.get_state(config)
{k: v for k, v in snapshot.values.items() if k in ("name", "birthday")}
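Given the correction supplied above, this should return something like {'name': 'LangGraph', 'birthday': 'Jan 17, 2024'}.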
This makes the information easily accessible to downstream nodes (e.g., a node that further processes or stores the information).
5. Manually update the state¶
LangGraph offers a high degree of control over the application state. For instance, at any point (including when interrupted), you can manually override a key using graph.update_state:
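The call might look like the following; the replacement value here is purely illustrative. update_state returns the config of the checkpoint it creates, shown below:

graph.update_state(config, {"name": "LangGraph (library)"})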
{'configurable': {'thread_id': '1',
'checkpoint_ns': '',
'checkpoint_id': '1efd4ec5-cf69-6352-8006-9278f1730162'}}
6. View the new value¶
If you call graph.get_state, you can see the new value is reflected:
snapshot = graph.get_state(config)
{k: v for k, v in snapshot.values.items() if k in ("name", "birthday")}
Manual state updates will generate a trace in LangSmith. If desired, they can also be used to control human-in-the-loop workflows, but use of the interrupt function is generally recommended, since it allows data to be transmitted in a human-in-the-loop interaction independently of state updates.
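To make the distinction concrete, here is a rough comparison of the two approaches, assuming config points at a thread that is currently paused at an interrupt:

# 1. Resume the interrupt: the human's answer is handed to the tool, which
#    decides how to fold it into the state (as in step 4 above).
graph.invoke(Command(resume={"correct": "yes"}), config)

# 2. Overwrite the state directly: this bypasses the tool's review logic entirely.
graph.update_state(config, {"name": "LangGraph"})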
Congratulations! You've added custom keys to the state to facilitate a more complex workflow, and learned how to generate state updates from inside tools.
Check out the code snippet below to review the graph from this tutorial:
import os
from langchain.chat_models import init_chat_model
os.environ["AZURE_OPENAI_API_KEY"] = "..."
os.environ["AZURE_OPENAI_ENDPOINT"] = "..."
os.environ["OPENAI_API_VERSION"] = "2025-03-01-preview"
llm = init_chat_model(
    "azure_openai:gpt-4.1",
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
)
API Reference: TavilySearch | ToolMessage | InjectedToolCallId | tool | MemorySaver | StateGraph | START | END | add_messages | ToolNode | tools_condition | Command | interrupt
from typing import Annotated
from langchain_tavily import TavilySearch
from langchain_core.messages import ToolMessage
from langchain_core.tools import InjectedToolCallId, tool
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.types import Command, interrupt
class State(TypedDict):
    messages: Annotated[list, add_messages]
    name: str
    birthday: str
@tool
def human_assistance(
    name: str, birthday: str, tool_call_id: Annotated[str, InjectedToolCallId]
) -> str:
    """Request assistance from a human."""
    human_response = interrupt(
        {
            "question": "Is this correct?",
            "name": name,
            "birthday": birthday,
        },
    )
    if human_response.get("correct", "").lower().startswith("y"):
        verified_name = name
        verified_birthday = birthday
        response = "Correct"
    else:
        verified_name = human_response.get("name", name)
        verified_birthday = human_response.get("birthday", birthday)
        response = f"Made a correction: {human_response}"

    state_update = {
        "name": verified_name,
        "birthday": verified_birthday,
        "messages": [ToolMessage(response, tool_call_id=tool_call_id)],
    }
    return Command(update=state_update)
tool = TavilySearch(max_results=2)
tools = [tool, human_assistance]
llm_with_tools = llm.bind_tools(tools)
def chatbot(state: State):
    message = llm_with_tools.invoke(state["messages"])
    # Since we interrupt during tool execution, allow at most one tool call
    # per turn so that tool invocations are not repeated when the graph resumes.
    assert len(message.tool_calls) <= 1
    return {"messages": [message]}
graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=tools)
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)
Next steps¶
There's one more concept to review before finishing the LangGraph basics tutorials: connecting checkpointing and state updates to time travel.