How to edit graph state¶
Prerequisites
Human-in-the-loop (HIL) interactions are crucial for agentic systems. Manually updating the graph state is a common HIL interaction pattern, allowing the human to edit actions (e.g., which tool is being called, or how it is being called).
We can implement this in LangGraph using a breakpoint: breakpoints allow us to interrupt graph execution before a specific step. At this breakpoint we can manually update the graph state, and then resume execution from that spot.
Setup¶
First, let's install the required packages.
Next, we need to set the API key for Anthropic (the LLM we will use).
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("ANTHROPIC_API_KEY")
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph. Read more about how to get started here.
Simple Usage¶
Let's look at very basic usage of this.
Below, we do three things:
1) We specify the breakpoint using interrupt_before before a specified step (node).
2) We set up a checkpointer to save the state of the graph up until this node.
3) We use .update_state to update the state of the graph.
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from IPython.display import Image, display


class State(TypedDict):
    input: str


def step_1(state):
    print("---Step 1---")
    pass


def step_2(state):
    print("---Step 2---")
    pass


def step_3(state):
    print("---Step 3---")
    pass
builder = StateGraph(State)
builder.add_node("step_1", step_1)
builder.add_node("step_2", step_2)
builder.add_node("step_3", step_3)
builder.add_edge(START, "step_1")
builder.add_edge("step_1", "step_2")
builder.add_edge("step_2", "step_3")
builder.add_edge("step_3", END)
# Set up memory
memory = MemorySaver()
# Add
graph = builder.compile(checkpointer=memory, interrupt_before=["step_2"])
# View
display(Image(graph.get_graph().draw_mermaid_png()))
# Input
initial_input = {"input": "hello world"}
# Thread
thread = {"configurable": {"thread_id": "1"}}
# Run the graph until the first interruption
for event in graph.stream(initial_input, thread, stream_mode="values"):
    print(event)
print("Current state!")
print(graph.get_state(thread).values)
graph.update_state(thread, {"input": "hello universe!"})
print("---\n---\nUpdated state!")
print(graph.get_state(thread).values)
# Continue the graph execution
for event in graph.stream(None, thread, stream_mode="values"):
    print(event)
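The pause, edit, and resume pattern above can be sketched without LangGraph at all: a plain-Python pipeline that checkpoints state, stops before a chosen step, lets you patch the state, then continues. This is a minimal illustration only; every name here (Pipeline, run, update_state) is invented for the sketch and is not how LangGraph is implemented internally.

```python
# A minimal, framework-free sketch of the interrupt / update / resume pattern.
# Everything here is invented for illustration, not LangGraph internals.

def step_1(state):
    return {**state, "log": state["log"] + ["step_1"]}

def step_2(state):
    return {**state, "log": state["log"] + ["step_2"]}

class Pipeline:
    def __init__(self, steps, interrupt_before):
        self.steps = steps
        self.interrupt_before = interrupt_before
        self.checkpoint = None    # plays the role of the checkpointer
        self.position = 0         # index of the next step to run
        self.interrupted = False

    def run(self, new_input=None):
        """Run from fresh input, or resume from the checkpoint when given None."""
        if new_input is not None:
            self.checkpoint, self.position, self.interrupted = new_input, 0, False
        while self.position < len(self.steps):
            step = self.steps[self.position]
            if step.__name__ == self.interrupt_before and not self.interrupted:
                self.interrupted = True
                return self.checkpoint  # paused: the state can now be edited
            self.checkpoint = step(self.checkpoint)
            self.position += 1
        return self.checkpoint

    def update_state(self, patch):
        self.checkpoint = {**self.checkpoint, **patch}

p = Pipeline([step_1, step_2], interrupt_before="step_2")
paused = p.run({"input": "hello world", "log": []})   # stops before step_2
p.update_state({"input": "hello universe!"})          # manual edit at the pause
final = p.run()                                       # resume with no new input
```

As in the LangGraph run above, the edit lands between step_1 and step_2, so step_2 sees the updated input.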
Agent¶
In the case of agents, updating state is useful for things like editing tool calls.
To show this, we will build a relatively simple ReAct-style agent that does tool calling.
We will use Anthropic's models and a fake tool (just for demonstration purposes).
# Set up the tool
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from langgraph.graph import MessagesState, START, END, StateGraph
from langgraph.prebuilt import ToolNode
from langgraph.checkpoint.memory import MemorySaver
@tool
def search(query: str):
    """Call to surf the web."""
    # This is a placeholder for the actual implementation
    # Don't let the LLM know this though 😊
    return [
        "It's sunny in San Francisco, but you better look out if you're a Gemini 😈."
    ]
tools = [search]
tool_node = ToolNode(tools)
# Set up the model
model = ChatAnthropic(model="claude-3-5-sonnet-20240620")
model = model.bind_tools(tools)
# Define nodes and conditional edges
# Define the function that determines whether to continue or not
def should_continue(state):
    messages = state["messages"]
    last_message = messages[-1]
    # If there is no function call, then we finish
    if not last_message.tool_calls:
        return "end"
    # Otherwise if there is, we continue
    else:
        return "continue"
# Define the function that calls the model
def call_model(state):
    messages = state["messages"]
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}
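The comment in call_model hinges on how the messages channel merges updates: it has a reducer that appends the returned list to the existing one instead of overwriting it. A rough, framework-free sketch of that append behavior follows; append_reducer is a simplification invented here, not MessagesState's actual code.

```python
# Sketch of why returning {"messages": [response]} appends rather than
# overwrites: the messages channel combines updates with a reducer.
# append_reducer is a simplification invented for illustration.

def append_reducer(existing, update):
    """Combine a node's update with the current value by appending."""
    return existing + update

state = {"messages": [{"role": "human", "content": "hi"}]}

# A node returns a one-element list, exactly like call_model above...
node_update = {"messages": [{"role": "ai", "content": "hello!"}]}

# ...and the reducer folds it into the running conversation.
state["messages"] = append_reducer(state["messages"], node_update["messages"])
```

If the channel had no reducer, the node's return value would simply replace the whole list and the conversation history would be lost.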
# Define a new graph
workflow = StateGraph(MessagesState)
# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)
# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.add_edge(START, "agent")
# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
    # Finally we pass in a mapping.
    # The keys are strings, and the values are other nodes.
    # END is a special node marking that the graph should finish.
    # What will happen is we will call `should_continue`, and then the output of that
    # will be matched against the keys in this mapping.
    # Based on which one it matches, that node will then be called.
    {
        # If `tools`, then we call the tool node.
        "continue": "action",
        # Otherwise we finish.
        "end": END,
    },
)
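The routing mechanics described in the comments can be sketched in plain Python: call the router on the current state, then look its return value up in the path map to pick the next node. This is a simplified illustration with invented names, not LangGraph's dispatch code.

```python
# Plain-Python sketch of conditional-edge routing: the router's return value
# is looked up in a path map to choose the next node. Names are invented.
END = "__end__"

def should_continue(state):
    last_message = state["messages"][-1]
    return "continue" if last_message.get("tool_calls") else "end"

path_map = {"continue": "action", "end": END}

def next_node(state):
    return path_map[should_continue(state)]

# A pending tool call routes to the tool node; none means the graph finishes.
with_tool_call = {"messages": [{"tool_calls": [{"name": "search"}]}]}
without_tool_call = {"messages": [{"tool_calls": []}]}
```

The real graph does the same lookup after every `agent` step, which is what produces the agent-tool loop until the model stops emitting tool calls.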
# We now add a normal edge from `tools` to `agent`.
# This means that after `tools` is called, `agent` node is called next.
workflow.add_edge("action", "agent")
# Set up memory
memory = MemorySaver()
# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
# We add in `interrupt_before=["action"]`
# This will add a breakpoint before the `action` node is called
app = workflow.compile(checkpointer=memory, interrupt_before=["action"])
API Reference: tool
Interacting with the Agent¶
We can now interact with the agent, and we see that it stops before calling a tool.
from langchain_core.messages import HumanMessage
thread = {"configurable": {"thread_id": "3"}}
inputs = [HumanMessage(content="search for the weather in sf now")]
for event in app.stream({"messages": inputs}, thread, stream_mode="values"):
    event["messages"][-1].pretty_print()
API Reference: HumanMessage
================================ Human Message =================================

search for the weather in sf now
================================== Ai Message ==================================

[{'text': "Certainly! I'll search for the current weather in San Francisco for you. Let me use the search function to find this information.", 'type': 'text'}, {'id': 'toolu_01DxRhkj4fAvaGWoBhVuvfeL', 'input': {'query': 'current weather in San Francisco'}, 'name': 'search', 'type': 'tool_use'}]
Tool Calls:
  search (toolu_01DxRhkj4fAvaGWoBhVuvfeL)
 Call ID: toolu_01DxRhkj4fAvaGWoBhVuvfeL
  Args:
    query: current weather in San Francisco
We can now update the state accordingly. Let's modify the tool call such that the query is "current weather in SF".
# First, lets get the current state
current_state = app.get_state(thread)
# Let's now get the last message in the state
# This is the one with the tool calls that we want to update
last_message = current_state.values["messages"][-1]
# Let's now update the args for that tool call
last_message.tool_calls[0]["args"] = {"query": "current weather in SF"}
# Let's now call `update_state` to pass in this message in the `messages` key
# This will get treated as any other update to the state
# It will get passed to the reducer function for the `messages` key
# That reducer function will use the ID of the message to update it
# It's important that it has the right ID! Otherwise it would get appended
# as a new message
app.update_state(thread, {"messages": last_message})
{'configurable': {'thread_id': '3',
  'checkpoint_ns': '',
  'checkpoint_id': '1ef7830a-c688-6fc6-8002-824126081ba0'}}
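The replace-by-ID behavior the comments describe can be sketched independently of LangGraph: a reducer that appends messages with new IDs but substitutes in place any message whose ID already exists. merge_by_id below is a simplified stand-in invented for this sketch; it mimics the behavior but is not the library's actual add_messages implementation.

```python
# Simplified stand-in for an ID-aware message reducer: an update whose ID
# matches an existing message replaces it in place; a new ID is appended.
# Invented for illustration; not LangGraph's add_messages code.

def merge_by_id(existing, updates):
    merged = list(existing)
    index_by_id = {m["id"]: i for i, m in enumerate(merged)}
    for msg in updates:
        if msg["id"] in index_by_id:
            merged[index_by_id[msg["id"]]] = msg   # same ID: edit in place
        else:
            merged.append(msg)                     # new ID: append
    return merged

history = [
    {"id": "1", "content": "search for the weather in sf now"},
    {"id": "2", "content": "tool call: current weather in San Francisco"},
]

# Re-sending a message with an existing ID edits it rather than appending...
edited = merge_by_id(history, [{"id": "2", "content": "tool call: current weather in SF"}])

# ...while a message with a fresh ID grows the list.
appended = merge_by_id(edited, [{"id": "3", "content": "tool result"}])
```

This is why passing update_state a message with the wrong ID would append a duplicate instead of editing the tool call.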
Let's now view the current state of the app to confirm it was updated accordingly.
current_state = app.get_state(thread)
current_state.values["messages"][-1].tool_calls
[{'name': 'search',
  'args': {'query': 'current weather in SF'},
  'id': 'toolu_01FSkinAVXR1C4D5kecrzAnj'}]
Summary
Now we can call the agent again with no inputs to continue, i.e. run the tool as requested. We can see from the logs that it passed the updated args to the tool.
for event in app.stream(None, thread, stream_mode="values"):
    event["messages"][-1].pretty_print()
================================= Tool Message =================================
Name: search

["It's sunny in San Francisco, but you better look out if you're a Gemini 😈."]
================================== Ai Message ==================================
Based on the search results, I can provide you with the current weather information for San Francisco:
The weather in San Francisco is currently sunny.
It's important to note that the search result also included a playful astrological reference, which isn't directly related to the weather. If you need more specific weather details like temperature, humidity, or forecast, please let me know, and I can perform another search to find that information for you.
Is there anything else you'd like to know about the weather in San Francisco or any other location?