How to review tool calls (Functional API)¶
This guide demonstrates how to implement human-in-the-loop workflows in a ReAct agent using LangGraph's Functional API.

We will build on the agent created in the How to create a ReAct agent using the Functional API guide.

Specifically, we will demonstrate how to review tool calls generated by a chat model prior to their execution. This can be accomplished through use of the interrupt function at key points in our application.
Preview:

We will implement a simple function that reviews tool calls generated by our chat model, and call it from the entrypoint of our application:
```python
def review_tool_call(tool_call: ToolCall) -> Union[ToolCall, ToolMessage]:
    """Review a tool call, returning a validated version."""
    human_review = interrupt(
        {
            "question": "Is this correct?",
            "tool_call": tool_call,
        }
    )
    review_action = human_review["action"]
    review_data = human_review.get("data")
    if review_action == "continue":
        return tool_call
    elif review_action == "update":
        updated_tool_call = {**tool_call, **{"args": review_data}}
        return updated_tool_call
    elif review_action == "feedback":
        return ToolMessage(
            content=review_data, name=tool_call["name"], tool_call_id=tool_call["id"]
        )
```
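Before wiring this into a graph, the branching logic can be traced on its own. The sketch below is a stdlib-only stand-in: `interrupt` is replaced by a canned `human_review` dict, and plain dicts stand in for `ToolCall` and `ToolMessage`. The `simulate_review` helper and its payloads are illustrative, not part of the LangGraph API:

```python
def simulate_review(tool_call: dict, human_review: dict) -> dict:
    """Mirror review_tool_call's branching with a pre-supplied review."""
    action = human_review["action"]
    data = human_review.get("data")
    if action == "continue":  # accept the call unchanged
        return tool_call
    elif action == "update":  # swap in revised args, keep name and id
        return {**tool_call, "args": data}
    elif action == "feedback":  # short-circuit with a tool-message-like payload
        return {"content": data, "name": tool_call["name"], "tool_call_id": tool_call["id"]}
    raise ValueError(f"Unknown action: {action}")


call = {"name": "get_weather", "args": {"location": "sf"}, "id": "call_1"}
accepted = simulate_review(call, {"action": "continue"})
revised = simulate_review(call, {"action": "update", "data": {"location": "SF, CA"}})
rejected = simulate_review(call, {"action": "feedback", "data": "Use <City>, <State>."})
```

Note that only the "feedback" branch changes the shape of the return value; the other two branches always hand back something the agent can execute as a tool call.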
Setup¶

First, let's install the required packages and set our API keys:
```python
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("OPENAI_API_KEY")
```
Set up LangSmith for better debugging

Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph. See the docs for more information on how to get started.
Define model and tools¶

Let's first define the tools and model we will use for our example. As in the ReAct agent guide, we will use a single placeholder tool that gets a description of the weather for a location.

We will use an OpenAI chat model for this example, but any model supporting tool-calling will suffice.
API Reference: ChatOpenAI | tool
```python
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

model = ChatOpenAI(model="gpt-4o-mini")


@tool
def get_weather(location: str):
    """Call to get the weather from a specific location."""
    # This is a placeholder for the actual implementation
    if any([city in location.lower() for city in ["sf", "san francisco"]]):
        return "It's sunny!"
    elif "boston" in location.lower():
        return "It's rainy!"
    else:
        return f"I am not sure what the weather is in {location}"


tools = [get_weather]
```
Define tasks¶

Our tasks are unchanged from the ReAct agent guide:

- Call model: We want to query our chat model with a list of messages.
- Call tool: If our model generates tool calls, we want to execute them.
API Reference: ToolCall | ToolMessage | entrypoint | task
```python
from langchain_core.messages import ToolCall, ToolMessage
from langgraph.func import entrypoint, task

tools_by_name = {tool.name: tool for tool in tools}


@task
def call_model(messages):
    """Call model with a sequence of messages."""
    response = model.bind_tools(tools).invoke(messages)
    return response


@task
def call_tool(tool_call):
    tool = tools_by_name[tool_call["name"]]
    observation = tool.invoke(tool_call["args"])
    return ToolMessage(content=observation, tool_call_id=tool_call["id"])
```
Define entrypoint¶

To review tool calls before execution, we add a review_tool_call function that calls interrupt. When this function is called, execution will be paused until we issue a command to resume it.

Given a tool call, our function will interrupt for human review. At that point we can either:

- Accept the tool call;
- Revise the tool call and continue;
- Generate a custom tool message (e.g., instructing the model to re-format its tool call).

We will demonstrate these three cases in the usage examples below.
```python
from typing import Union


def review_tool_call(tool_call: ToolCall) -> Union[ToolCall, ToolMessage]:
    """Review a tool call, returning a validated version."""
    human_review = interrupt(
        {
            "question": "Is this correct?",
            "tool_call": tool_call,
        }
    )
    review_action = human_review["action"]
    review_data = human_review.get("data")
    if review_action == "continue":
        return tool_call
    elif review_action == "update":
        updated_tool_call = {**tool_call, **{"args": review_data}}
        return updated_tool_call
    elif review_action == "feedback":
        return ToolMessage(
            content=review_data, name=tool_call["name"], tool_call_id=tool_call["id"]
        )
```
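For reference, the `Command(resume=...)` payloads this function consumes take the following shapes. The concrete `data` values here are illustrative; only the `action` strings are fixed by the branching above:

```python
# Shapes of the resume payloads review_tool_call expects; the "data" values
# are illustrative placeholders.
accept = {"action": "continue"}
revise = {"action": "update", "data": {"location": "San Francisco, CA"}}
feedback = {"action": "feedback", "data": "Please format as <City>, <State>."}
```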
We can now update our entrypoint to review the generated tool calls. If a tool call is accepted or revised, we execute it in the same way as before. Otherwise, we just append the ToolMessage supplied by the human.

Tip

The results of prior tasks (in this case the initial model call) are persisted, so that they are not run again following the interrupt.
API Reference: MemorySaver | add_messages | Command | interrupt
```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph.message import add_messages
from langgraph.types import Command, interrupt

checkpointer = MemorySaver()


@entrypoint(checkpointer=checkpointer)
def agent(messages, previous):
    if previous is not None:
        messages = add_messages(previous, messages)

    llm_response = call_model(messages).result()
    while True:
        if not llm_response.tool_calls:
            break

        # Review tool calls
        tool_results = []
        tool_calls = []
        for i, tool_call in enumerate(llm_response.tool_calls):
            review = review_tool_call(tool_call)
            if isinstance(review, ToolMessage):
                tool_results.append(review)
            else:  # is a validated tool call
                tool_calls.append(review)
                if review != tool_call:
                    llm_response.tool_calls[i] = review  # update message

        # Execute remaining tool calls
        tool_result_futures = [call_tool(tool_call) for tool_call in tool_calls]
        remaining_tool_results = [fut.result() for fut in tool_result_futures]

        # Append to message list
        messages = add_messages(
            messages,
            [llm_response, *tool_results, *remaining_tool_results],
        )

        # Call model again
        llm_response = call_model(messages).result()

    # Generate final response
    messages = add_messages(messages, llm_response)
    return entrypoint.final(value=llm_response, save=messages)
```
Usage¶

Let's demonstrate some scenarios.
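The streaming calls below reference a `config`. Because the entrypoint is backed by a checkpointer, each conversation needs a thread ID; the value used here is an arbitrary assumption (any string works, and a fresh thread ID starts a fresh conversation):

```python
# Runs against a checkpointer are scoped by thread_id; "1" is arbitrary.
config = {"configurable": {"thread_id": "1"}}
```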
```python
def _print_step(step: dict) -> None:
    for task_name, result in step.items():
        if task_name == "agent":
            continue  # just stream from tasks
        print(f"\n{task_name}:")
        if task_name in ("__interrupt__", "review_tool_call"):
            print(result)
        else:
            result.pretty_print()
```
Accept a tool call¶

To accept a tool call, we just indicate in the data we provide in the Command that the tool call should pass through.
```python
user_message = {"role": "user", "content": "What's the weather in san francisco?"}
print(user_message)

for step in agent.stream([user_message], config):
    _print_step(step)
```

```
{'role': 'user', 'content': "What's the weather in san francisco?"}

call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_Bh5cSwMqCpCxTjx7AjdrQTPd)
 Call ID: call_Bh5cSwMqCpCxTjx7AjdrQTPd
  Args:
    location: San Francisco

__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'call_Bh5cSwMqCpCxTjx7AjdrQTPd', 'type': 'tool_call'}}, resumable=True, ns=['agent:22fcc9cd-3573-b39b-eea7-272a025903e2'], when='during'),)
```
```python
human_input = Command(resume={"action": "continue"})

for step in agent.stream(human_input, config):
    _print_step(step)
```

```
call_tool:
================================= Tool Message =================================

It's sunny!

call_model:
================================== Ai Message ==================================

The weather in San Francisco is sunny!
```
Revise a tool call¶

To revise a tool call, we can supply updated arguments.
```python
user_message = {"role": "user", "content": "What's the weather in san francisco?"}
print(user_message)

for step in agent.stream([user_message], config):
    _print_step(step)
```

```
{'role': 'user', 'content': "What's the weather in san francisco?"}

call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_b9h8e18FqH0IQm3NMoeYKz6N)
 Call ID: call_b9h8e18FqH0IQm3NMoeYKz6N
  Args:
    location: san francisco

__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'san francisco'}, 'id': 'call_b9h8e18FqH0IQm3NMoeYKz6N', 'type': 'tool_call'}}, resumable=True, ns=['agent:9559a81d-5720-dc19-a457-457bac7bdd83'], when='during'),)
```
```python
human_input = Command(resume={"action": "update", "data": {"location": "SF, CA"}})

for step in agent.stream(human_input, config):
    _print_step(step)
```

```
call_tool:
================================= Tool Message =================================

It's sunny!

call_model:
================================== Ai Message ==================================

The weather in San Francisco is sunny!
```
The LangSmith trace for this run is particularly informative.
Generate a custom ToolMessage¶

To generate a custom ToolMessage, we supply the content of the message. In this case, we will ask the model to re-format its tool call.
```python
user_message = {"role": "user", "content": "What's the weather in san francisco?"}
print(user_message)

for step in agent.stream([user_message], config):
    _print_step(step)
```

```
{'role': 'user', 'content': "What's the weather in san francisco?"}

call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_VqGjKE7uu8HdWs9XuY1kMV18)
 Call ID: call_VqGjKE7uu8HdWs9XuY1kMV18
  Args:
    location: San Francisco

__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'call_VqGjKE7uu8HdWs9XuY1kMV18', 'type': 'tool_call'}}, resumable=True, ns=['agent:4b3b372b-9da3-70be-5c68-3d9317346070'], when='during'),)
```
```python
human_input = Command(
    resume={
        "action": "feedback",
        "data": "Please format as <City>, <State>.",
    },
)

for step in agent.stream(human_input, config):
    _print_step(step)
```

```
call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_xoXkK8Cz0zIpvWs78qnXpvYp)
 Call ID: call_xoXkK8Cz0zIpvWs78qnXpvYp
  Args:
    location: San Francisco, CA

__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'San Francisco, CA'}, 'id': 'call_xoXkK8Cz0zIpvWs78qnXpvYp', 'type': 'tool_call'}}, resumable=True, ns=['agent:4b3b372b-9da3-70be-5c68-3d9317346070'], when='during'),)
```
Once it is re-formatted, we can accept it: