How to review tool calls (Functional API)¶
This guide demonstrates how to implement human-in-the-loop workflows in a ReAct agent using the LangGraph Functional API.
We will build on the agent created in the How to create a ReAct agent (Functional API) guide.
Specifically, we will demonstrate how to review tool calls generated by a chat model prior to their execution. This can be accomplished by using the interrupt function at key points in our application.
Preview:
We will implement a simple function that reviews tool calls generated by our chat model, and call it from inside our application's entrypoint:
def review_tool_call(tool_call: ToolCall) -> Union[ToolCall, ToolMessage]:
    """Review a tool call, returning a validated version."""
    human_review = interrupt(
        {
            "question": "Is this correct?",
            "tool_call": tool_call,
        }
    )
    review_action = human_review["action"]
    review_data = human_review.get("data")
    if review_action == "continue":
        return tool_call
    elif review_action == "update":
        updated_tool_call = {**tool_call, **{"args": review_data}}
        return updated_tool_call
    elif review_action == "feedback":
        return ToolMessage(
            content=review_data, name=tool_call["name"], tool_call_id=tool_call["id"]
        )
Setup¶
First, let's install the required packages and set our API keys:
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("OPENAI_API_KEY")
Set up LangSmith for better debugging
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph. Read more about how to get started in the docs.
Define model and tools¶
Let's first define the tools and model we will use for our example. As in the ReAct agent guide, we will use a single placeholder tool that gets a description of the weather for a location.
We will use an OpenAI chat model for this example, but any model supporting tool calling will work.
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

model = ChatOpenAI(model="gpt-4o-mini")


@tool
def get_weather(location: str):
    """Call to get the weather from a specific location."""
    # This is a placeholder for the actual implementation
    if any([city in location.lower() for city in ["sf", "san francisco"]]):
        return "It's sunny!"
    elif "boston" in location.lower():
        return "It's rainy!"
    else:
        return f"I am not sure what the weather is in {location}"


tools = [get_weather]
API Reference: tool
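The branching inside get_weather can be sanity-checked on its own. Below is a minimal sketch of the same routing logic as a plain function (no @tool decorator, no model in the loop); weather_lookup is a hypothetical helper added for illustration, not part of the guide's agent:

```python
def weather_lookup(location: str) -> str:
    # Mirrors the placeholder branching in get_weather above.
    if any(city in location.lower() for city in ["sf", "san francisco"]):
        return "It's sunny!"
    elif "boston" in location.lower():
        return "It's rainy!"
    return f"I am not sure what the weather is in {location}"


print(weather_lookup("San Francisco"))  # It's sunny!
print(weather_lookup("Boston, MA"))  # It's rainy!
```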
Define tasks¶
Our tasks are unchanged from the How to create a ReAct agent from scratch guide:
- Call model: We want to query our chat model with a list of messages.
- Call tool: If our model generates tool calls, we want to execute them.
from langchain_core.messages import ToolCall, ToolMessage
from langgraph.func import entrypoint, task

tools_by_name = {tool.name: tool for tool in tools}


@task
def call_model(messages):
    """Call model with a sequence of messages."""
    response = model.bind_tools(tools).invoke(messages)
    return response


@task
def call_tool(tool_call):
    tool = tools_by_name[tool_call["name"]]
    observation = tool.invoke(tool_call["args"])
    return ToolMessage(content=observation, tool_call_id=tool_call["id"])
API Reference: ToolCall | ToolMessage
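For reference, a tool call reaches call_tool as a plain dict with "name", "args", and "id" keys (the shape visible in the Interrupt payloads later in this guide). A minimal sketch of the dispatch pattern, with an ordinary function standing in for the @tool object:

```python
# Hypothetical stand-in for a @tool object; real tools are invoked via
# tool.invoke(tool_call["args"]).
def get_weather_fn(args: dict) -> str:
    return f"Weather lookup for {args['location']}"


tools_by_name = {"get_weather": get_weather_fn}

tool_call = {
    "name": "get_weather",
    "args": {"location": "sf"},
    "id": "call_123",  # hypothetical id
    "type": "tool_call",
}

# Dispatch by name, then call with the parsed args.
observation = tools_by_name[tool_call["name"]](tool_call["args"])
print(observation)  # Weather lookup for sf
```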
Define entrypoint¶
To review tool calls before they are executed, we add a review_tool_call function that calls interrupt. When this function is called, execution will pause until we issue a command to resume it.
Given a tool call, our function will interrupt for human review. At that point we can either:
- Accept the tool call;
- Revise the tool call and continue;
- Generate a custom tool message (e.g., instructing the model to re-format its tool call).
We will demonstrate all three cases in the usage examples below.
from typing import Union


def review_tool_call(tool_call: ToolCall) -> Union[ToolCall, ToolMessage]:
    """Review a tool call, returning a validated version."""
    human_review = interrupt(
        {
            "question": "Is this correct?",
            "tool_call": tool_call,
        }
    )
    review_action = human_review["action"]
    review_data = human_review.get("data")
    if review_action == "continue":
        return tool_call
    elif review_action == "update":
        updated_tool_call = {**tool_call, **{"args": review_data}}
        return updated_tool_call
    elif review_action == "feedback":
        return ToolMessage(
            content=review_data, name=tool_call["name"], tool_call_id=tool_call["id"]
        )
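To see what each branch returns, here is a minimal simulation of the same branching with interrupt() stubbed out by a canned human response; the simulate_review helper and the plain dict standing in for a ToolMessage are illustrative assumptions, so this runs without LangGraph installed:

```python
def simulate_review(tool_call: dict, human_review: dict):
    # Same branching as review_tool_call, with interrupt() replaced by
    # a pre-supplied human_review payload.
    action = human_review["action"]
    data = human_review.get("data")
    if action == "continue":
        return tool_call
    elif action == "update":
        return {**tool_call, "args": data}
    elif action == "feedback":
        # Plain-dict stand-in for a ToolMessage.
        return {"content": data, "tool_call_id": tool_call["id"]}


tool_call = {
    "name": "get_weather",
    "args": {"location": "san francisco"},
    "id": "call_123",  # hypothetical id
    "type": "tool_call",
}

accepted = simulate_review(tool_call, {"action": "continue"})
revised = simulate_review(tool_call, {"action": "update", "data": {"location": "SF, CA"}})
rejected = simulate_review(
    tool_call, {"action": "feedback", "data": "Please format as <City>, <State>."}
)

print(revised["args"])  # {'location': 'SF, CA'}
print(rejected["content"])  # Please format as <City>, <State>.
```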
We can now update our entrypoint to review the generated tool calls. If a tool call is accepted or revised, we execute it the same way as before. Otherwise, we just append the ToolMessage supplied by the human.
Tip
The results of prior tasks (in this case the initial model call) are persisted, so that they are not run again following the interrupt.
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph.message import add_messages
from langgraph.types import Command, interrupt

checkpointer = MemorySaver()


@entrypoint(checkpointer=checkpointer)
def agent(messages, previous):
    if previous is not None:
        messages = add_messages(previous, messages)

    llm_response = call_model(messages).result()
    while True:
        if not llm_response.tool_calls:
            break

        # Review tool calls
        tool_results = []
        tool_calls = []
        for i, tool_call in enumerate(llm_response.tool_calls):
            review = review_tool_call(tool_call)
            if isinstance(review, ToolMessage):
                tool_results.append(review)
            else:  # is a validated tool call
                tool_calls.append(review)
                if review != tool_call:
                    llm_response.tool_calls[i] = review  # update message

        # Execute remaining tool calls
        tool_result_futures = [call_tool(tool_call) for tool_call in tool_calls]
        remaining_tool_results = [fut.result() for fut in tool_result_futures]

        # Append to message list
        messages = add_messages(
            messages,
            [llm_response, *tool_results, *remaining_tool_results],
        )

        # Call model again
        llm_response = call_model(messages).result()

    # Generate final response
    messages = add_messages(messages, llm_response)
    return entrypoint.final(value=llm_response, save=messages)
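The control flow above can be traced in isolation: keep looping while the model response carries tool calls, and stop once it does not. A stripped-down sketch, with the model and tool execution faked by plain functions (fake_model and the message dicts are illustrative, not part of the real agent):

```python
def fake_model(messages):
    # First call "requests" a tool; once a tool result is present,
    # respond with a final answer and no tool calls.
    if not any(m.get("role") == "tool" for m in messages):
        return {"tool_calls": [{"name": "get_weather", "args": {"location": "sf"}}]}
    return {"tool_calls": [], "content": "It's sunny!"}


messages = [{"role": "user", "content": "What's the weather in sf?"}]
response = fake_model(messages)
while response["tool_calls"]:
    # Execute each requested tool call and append the results.
    tool_results = [{"role": "tool", "content": "It's sunny!"} for _ in response["tool_calls"]]
    messages = messages + [response] + tool_results
    response = fake_model(messages)

print(response["content"])  # It's sunny!
```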
Usage¶
Let's demonstrate some scenarios.
def _print_step(step: dict) -> None:
    for task_name, result in step.items():
        if task_name == "agent":
            continue  # just stream from tasks
        print(f"\n{task_name}:")
        if task_name in ("__interrupt__", "review_tool_call"):
            print(result)
        else:
            result.pretty_print()
Accept a tool call¶
To accept a tool call, we just indicate in the data we provide in the Command that the tool call should pass through unchanged.
config = {"configurable": {"thread_id": "1"}}

user_message = {"role": "user", "content": "What's the weather in san francisco?"}
print(user_message)

for step in agent.stream([user_message], config):
    _print_step(step)
{'role': 'user', 'content': "What's the weather in san francisco?"}

call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_Bh5cSwMqCpCxTjx7AjdrQTPd)
 Call ID: call_Bh5cSwMqCpCxTjx7AjdrQTPd
  Args:
    location: San Francisco

__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'call_Bh5cSwMqCpCxTjx7AjdrQTPd', 'type': 'tool_call'}}, resumable=True, ns=['agent:22fcc9cd-3573-b39b-eea7-272a025903e2'], when='during'),)
human_input = Command(resume={"action": "continue"})

for step in agent.stream(human_input, config):
    _print_step(step)
call_tool:
================================= Tool Message =================================
It's sunny!

call_model:
================================== Ai Message ==================================
The weather in San Francisco is sunny!
Revise a tool call¶
To revise a tool call, we can supply updated arguments.
config = {"configurable": {"thread_id": "2"}}

user_message = {"role": "user", "content": "What's the weather in san francisco?"}
print(user_message)

for step in agent.stream([user_message], config):
    _print_step(step)
{'role': 'user', 'content': "What's the weather in san francisco?"}

call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_b9h8e18FqH0IQm3NMoeYKz6N)
 Call ID: call_b9h8e18FqH0IQm3NMoeYKz6N
  Args:
    location: san francisco

__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'san francisco'}, 'id': 'call_b9h8e18FqH0IQm3NMoeYKz6N', 'type': 'tool_call'}}, resumable=True, ns=['agent:9559a81d-5720-dc19-a457-457bac7bdd83'], when='during'),)
human_input = Command(resume={"action": "update", "data": {"location": "SF, CA"}})

for step in agent.stream(human_input, config):
    _print_step(step)
call_tool:
================================= Tool Message =================================
It's sunny!

call_model:
================================== Ai Message ==================================
The weather in San Francisco is sunny!
Generate a custom ToolMessage¶
To generate a custom ToolMessage, we supply the content of the message. In this case, we will ask the model to re-format its tool call.
config = {"configurable": {"thread_id": "3"}}

user_message = {"role": "user", "content": "What's the weather in san francisco?"}
print(user_message)

for step in agent.stream([user_message], config):
    _print_step(step)
{'role': 'user', 'content': "What's the weather in san francisco?"}

call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_VqGjKE7uu8HdWs9XuY1kMV18)
 Call ID: call_VqGjKE7uu8HdWs9XuY1kMV18
  Args:
    location: San Francisco

__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'call_VqGjKE7uu8HdWs9XuY1kMV18', 'type': 'tool_call'}}, resumable=True, ns=['agent:4b3b372b-9da3-70be-5c68-3d9317346070'], when='during'),)
human_input = Command(
    resume={
        "action": "feedback",
        "data": "Please format as <City>, <State>.",
    },
)

for step in agent.stream(human_input, config):
    _print_step(step)
call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_xoXkK8Cz0zIpvWs78qnXpvYp)
 Call ID: call_xoXkK8Cz0zIpvWs78qnXpvYp
  Args:
    location: San Francisco, CA

__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'San Francisco, CA'}, 'id': 'call_xoXkK8Cz0zIpvWs78qnXpvYp', 'type': 'tool_call'}}, resumable=True, ns=['agent:4b3b372b-9da3-70be-5c68-3d9317346070'], when='during'),)
human_input = Command(resume={"action": "continue"})

for step in agent.stream(human_input, config):
    _print_step(step)
call_tool:
================================= Tool Message =================================
It's sunny!

call_model:
================================== Ai Message ==================================
The weather in San Francisco, CA is sunny!