How to review tool calls (Functional API)¶
This guide demonstrates how to implement human-in-the-loop workflows in a ReAct agent using the LangGraph Functional API.
We will build on the agent created in the How to create a ReAct agent from scratch (Functional API) guide.
Specifically, we will demonstrate how to review tool calls generated by a chat model before they are executed. This can be accomplished by using the interrupt function at key points in the application.
Preview:
We will implement a simple function that reviews tool calls generated by our chat model, and call it from inside our application's entrypoint:
def review_tool_call(tool_call: ToolCall) -> Union[ToolCall, ToolMessage]:
    """Review a tool call, returning a validated version."""
    human_review = interrupt(
        {
            "question": "Is this correct?",
            "tool_call": tool_call,
        }
    )
    review_action = human_review["action"]
    review_data = human_review.get("data")
    if review_action == "continue":
        return tool_call
    elif review_action == "update":
        updated_tool_call = {**tool_call, **{"args": review_data}}
        return updated_tool_call
    elif review_action == "feedback":
        return ToolMessage(
            content=review_data, name=tool_call["name"], tool_call_id=tool_call["id"]
        )
Setup¶
First, let's install the required packages and set our API keys:
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("OPENAI_API_KEY")
Set up LangSmith for better debugging
Sign up for LangSmith to quickly spot problems and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph; read more about how to get started in the docs.
Define model and tools¶
First, let's define the tools and model we will use for our example. As in the ReAct agent guide, we will use a single placeholder tool that gets a description of the weather for a location.
We will use an OpenAI chat model for this example, but any model supporting tool-calling will suffice.
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

model = ChatOpenAI(model="gpt-4o-mini")


@tool
def get_weather(location: str):
    """Call to get the weather from a specific location."""
    # This is a placeholder for the actual implementation
    if any(city in location.lower() for city in ["sf", "san francisco"]):
        return "It's sunny!"
    elif "boston" in location.lower():
        return "It's rainy!"
    else:
        return f"I am not sure what the weather is in {location}"


tools = [get_weather]
API Reference: ChatOpenAI | tool
Define tasks¶
- Call model: We want to query our chat model with a list of messages.
- Call tool: If our model generates tool calls, we want to execute them.
from langchain_core.messages import ToolCall, ToolMessage
from langgraph.func import entrypoint, task

tools_by_name = {tool.name: tool for tool in tools}


@task
def call_model(messages):
    """Call model with a sequence of messages."""
    response = model.bind_tools(tools).invoke(messages)
    return response


@task
def call_tool(tool_call):
    tool = tools_by_name[tool_call["name"]]
    observation = tool.invoke(tool_call["args"])
    return ToolMessage(content=observation, tool_call_id=tool_call["id"])
API Reference: ToolCall | ToolMessage | entrypoint | task
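The dispatch in call_tool follows a common pattern: look up an implementation by the name carried in the tool call, then invoke it with the call's arguments. A minimal plain-Python sketch of that pattern (independent of LangChain; the function and dict shapes here are illustrative, not LangChain's wrapped @tool objects):

```python
# Plain-Python sketch of name-based tool dispatch (illustrative only;
# LangChain's @tool objects add schemas and validation on top of this).
def get_weather(location: str) -> str:
    if "sf" in location.lower() or "san francisco" in location.lower():
        return "It's sunny!"
    return f"I am not sure what the weather is in {location}"

tools_by_name = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    # tool_call mirrors the shape the model produces: name, args, id
    fn = tools_by_name[tool_call["name"]]
    return fn(**tool_call["args"])

result = dispatch({"name": "get_weather", "args": {"location": "SF, CA"}, "id": "call_1"})
print(result)  # It's sunny!
```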
Define entrypoint¶
To review tool calls before execution, we add a review_tool_call function that calls interrupt. When this function is called, execution will be paused until we issue a command to resume it.
Given a tool call, our function will interrupt for human review. At that point we can either:
- Accept the tool call;
- Revise the tool call and continue;
- Generate a custom tool message (e.g., instructing the model to reformat its tool call).
We will demonstrate these three cases in the usage examples below.
from typing import Union


def review_tool_call(tool_call: ToolCall) -> Union[ToolCall, ToolMessage]:
    """Review a tool call, returning a validated version."""
    human_review = interrupt(
        {
            "question": "Is this correct?",
            "tool_call": tool_call,
        }
    )
    review_action = human_review["action"]
    review_data = human_review.get("data")
    if review_action == "continue":
        return tool_call
    elif review_action == "update":
        updated_tool_call = {**tool_call, **{"args": review_data}}
        return updated_tool_call
    elif review_action == "feedback":
        return ToolMessage(
            content=review_data, name=tool_call["name"], tool_call_id=tool_call["id"]
        )
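The three branches can be exercised without a running graph by stubbing out the human review. A self-contained sketch, where plain dicts stand in for ToolCall and ToolMessage, and the canned review stands in for the value interrupt would return (all names here are illustrative, not LangGraph's API):

```python
# Plain-Python replica of review_tool_call's branching, with the interrupt
# replaced by a pre-canned human review (illustrative only).
def make_reviewer(canned_review: dict):
    def review_tool_call(tool_call: dict):
        human_review = canned_review  # stands in for interrupt(...)
        action = human_review["action"]
        data = human_review.get("data")
        if action == "continue":
            return tool_call
        elif action == "update":
            # dict merge: all original keys kept, "args" replaced wholesale
            return {**tool_call, **{"args": data}}
        elif action == "feedback":
            return {"type": "tool_message", "content": data, "tool_call_id": tool_call["id"]}
    return review_tool_call

call = {"name": "get_weather", "args": {"location": "san francisco"}, "id": "call_1"}

accepted = make_reviewer({"action": "continue"})(call)
updated = make_reviewer({"action": "update", "data": {"location": "SF, CA"}})(call)
feedback = make_reviewer({"action": "feedback", "data": "Use <City>, <State>."})(call)

print(accepted["args"])     # {'location': 'san francisco'}
print(updated["args"])      # {'location': 'SF, CA'}
print(updated["id"])        # call_1 -- id and name survive the merge
print(feedback["content"])  # Use <City>, <State>.
```

Note that the "update" branch replaces the args dict wholesale while preserving the call's name and id, which is what allows the executed call to stay linked to the original message.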
We can now update our entrypoint to review the generated tool calls. If a tool call is accepted or revised, we execute it the same way as before. Otherwise, we just append the ToolMessage supplied by the human.
Tip
The results of prior tasks (in this case the initial model call) are persisted, so they are not run again following the interrupt.
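The persistence behavior can be pictured as replay with a cache: when the workflow resumes, each completed task returns its saved result instead of running again. A toy sketch of that idea (not LangGraph's actual checkpointer; all names here are illustrative):

```python
# Toy replay cache illustrating why completed tasks are not re-run on resume
# (illustrative only; LangGraph's checkpointer handles this internally).
calls = {"count": 0}
cache: dict = {}

def run_task(key: str, fn):
    """Return the cached result if this task already ran; otherwise run and save."""
    if key not in cache:
        cache[key] = fn()
    return cache[key]

def expensive_model_call():
    calls["count"] += 1
    return "llm_response"

# First pass: the task runs, then execution is interrupted for review.
first = run_task("call_model:0", expensive_model_call)

# On resume, the workflow replays from the top, but the saved result is reused.
second = run_task("call_model:0", expensive_model_call)

print(first == second)  # True
print(calls["count"])   # 1 -- the model was only called once
```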
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph.message import add_messages
from langgraph.types import Command, interrupt

checkpointer = MemorySaver()


@entrypoint(checkpointer=checkpointer)
def agent(messages, previous):
    if previous is not None:
        messages = add_messages(previous, messages)

    llm_response = call_model(messages).result()
    while True:
        if not llm_response.tool_calls:
            break

        # Review tool calls
        tool_results = []
        tool_calls = []
        for i, tool_call in enumerate(llm_response.tool_calls):
            review = review_tool_call(tool_call)
            if isinstance(review, ToolMessage):
                tool_results.append(review)
            else:  # is a validated tool call
                tool_calls.append(review)
                if review != tool_call:
                    llm_response.tool_calls[i] = review  # update message

        # Execute remaining tool calls
        tool_result_futures = [call_tool(tool_call) for tool_call in tool_calls]
        remaining_tool_results = [fut.result() for fut in tool_result_futures]

        # Append to message list
        messages = add_messages(
            messages,
            [llm_response, *tool_results, *remaining_tool_results],
        )

        # Call model again
        llm_response = call_model(messages).result()

    # Generate final response
    messages = add_messages(messages, llm_response)
    return entrypoint.final(value=llm_response, save=messages)
API Reference: MemorySaver | add_messages | Command | interrupt
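The review loop partitions the reviewed items: ToolMessages go straight into the results, while everything else is treated as a validated tool call that still needs execution. That partitioning step alone can be sketched in plain Python (the ToolMessage type here is a stand-in dataclass, not LangChain's):

```python
from dataclasses import dataclass

# Stand-in for LangChain's ToolMessage (illustrative only).
@dataclass
class ToolMessage:
    content: str
    tool_call_id: str

def partition_reviews(reviews: list):
    """Split review results into ready-made tool messages and calls to execute."""
    tool_results, tool_calls = [], []
    for review in reviews:
        if isinstance(review, ToolMessage):
            tool_results.append(review)  # human supplied the response directly
        else:
            tool_calls.append(review)    # validated call, still needs execution
    return tool_results, tool_calls

reviews = [
    {"name": "get_weather", "args": {"location": "SF, CA"}, "id": "call_1"},
    ToolMessage(content="Please format as <City>, <State>.", tool_call_id="call_2"),
]
results, pending = partition_reviews(reviews)
print(len(results), len(pending))  # 1 1
```

Both lists end up appended to the message history after the validated calls are executed, so the model sees one tool response per tool call regardless of which path produced it.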
Usage¶
Let's demonstrate some scenarios.
def _print_step(step: dict) -> None:
    for task_name, result in step.items():
        if task_name == "agent":
            continue  # just stream from tasks
        print(f"\n{task_name}:")
        if task_name in ("__interrupt__", "review_tool_call"):
            print(result)
        else:
            result.pretty_print()
Accept a tool call¶
To accept a tool call, we just indicate in the data we provide in the Command that the tool call should pass through unchanged.
config = {"configurable": {"thread_id": "1"}}

user_message = {"role": "user", "content": "What's the weather in san francisco?"}
print(user_message)

for step in agent.stream([user_message], config):
    _print_step(step)
{'role': 'user', 'content': "What's the weather in san francisco?"}

call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_Bh5cSwMqCpCxTjx7AjdrQTPd)
 Call ID: call_Bh5cSwMqCpCxTjx7AjdrQTPd
  Args:
    location: San Francisco

__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'call_Bh5cSwMqCpCxTjx7AjdrQTPd', 'type': 'tool_call'}}, resumable=True, ns=['agent:22fcc9cd-3573-b39b-eea7-272a025903e2'], when='during'),)
human_input = Command(resume={"action": "continue"})

for step in agent.stream(human_input, config):
    _print_step(step)
call_tool:
================================= Tool Message =================================

It's sunny!

call_model:
================================== Ai Message ==================================

The weather in San Francisco is sunny!
Revise a tool call¶
To revise a tool call, we can supply updated arguments.
config = {"configurable": {"thread_id": "2"}}  # start a fresh thread for this example

user_message = {"role": "user", "content": "What's the weather in san francisco?"}
print(user_message)

for step in agent.stream([user_message], config):
    _print_step(step)
{'role': 'user', 'content': "What's the weather in san francisco?"}

call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_b9h8e18FqH0IQm3NMoeYKz6N)
 Call ID: call_b9h8e18FqH0IQm3NMoeYKz6N
  Args:
    location: san francisco

__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'san francisco'}, 'id': 'call_b9h8e18FqH0IQm3NMoeYKz6N', 'type': 'tool_call'}}, resumable=True, ns=['agent:9559a81d-5720-dc19-a457-457bac7bdd83'], when='during'),)
human_input = Command(resume={"action": "update", "data": {"location": "SF, CA"}})

for step in agent.stream(human_input, config):
    _print_step(step)
call_tool:
================================= Tool Message =================================

It's sunny!

call_model:
================================== Ai Message ==================================

The weather in San Francisco is sunny!
- In the trace before the interrupt, we generate a tool call for location "san francisco".
- In the trace after resuming, we see that the tool call in the message has been updated to "SF, CA".
Generate a custom ToolMessage¶
To generate a custom ToolMessage, we supply the content of the message. In this case we will ask the model to reformat its tool call.
config = {"configurable": {"thread_id": "3"}}  # start a fresh thread for this example

user_message = {"role": "user", "content": "What's the weather in san francisco?"}
print(user_message)

for step in agent.stream([user_message], config):
    _print_step(step)
{'role': 'user', 'content': "What's the weather in san francisco?"}

call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_VqGjKE7uu8HdWs9XuY1kMV18)
 Call ID: call_VqGjKE7uu8HdWs9XuY1kMV18
  Args:
    location: San Francisco

__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'call_VqGjKE7uu8HdWs9XuY1kMV18', 'type': 'tool_call'}}, resumable=True, ns=['agent:4b3b372b-9da3-70be-5c68-3d9317346070'], when='during'),)
human_input = Command(
    resume={
        "action": "feedback",
        "data": "Please format as <City>, <State>.",
    },
)

for step in agent.stream(human_input, config):
    _print_step(step)
call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_xoXkK8Cz0zIpvWs78qnXpvYp)
 Call ID: call_xoXkK8Cz0zIpvWs78qnXpvYp
  Args:
    location: San Francisco, CA

__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'San Francisco, CA'}, 'id': 'call_xoXkK8Cz0zIpvWs78qnXpvYp', 'type': 'tool_call'}}, resumable=True, ns=['agent:4b3b372b-9da3-70be-5c68-3d9317346070'], when='during'),)
human_input = Command(resume={"action": "continue"})

for step in agent.stream(human_input, config):
    _print_step(step)
call_tool:
================================= Tool Message =================================

It's sunny!

call_model:
================================== Ai Message ==================================

The weather in San Francisco, CA is sunny!