How to handle tool calling errors¶
LLMs aren't perfect at calling tools. The model may try to call a tool that doesn't exist, or fail to return arguments that match the requested schema. Strategies like keeping schemas simple, reducing the number of tools you pass at once, and using good names and descriptions can help mitigate this risk, but aren't foolproof.

This guide covers some ways to build error handling into your graphs to mitigate these failure modes.
Setup¶
First, let's install the required packages and set our API keys.
```python
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("ANTHROPIC_API_KEY")
```
Set up LangSmith for LangGraph development

Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph; read more about how to get started here.
Using the prebuilt ToolNode¶
To start, define a mock weather tool with some hidden restrictions on its input queries. The intent here is to simulate a real-world case where a model fails to call a tool correctly:
```python
from langchain_core.tools import tool


@tool
def get_weather(location: str):
    """Call to get the current weather."""
    if location == "san francisco":
        raise ValueError("Input queries must be proper nouns")
    elif location == "San Francisco":
        return "It's 60 degrees and foggy."
    else:
        raise ValueError("Invalid input.")
```
API Reference: tool
Next, set up a graph implementation of the ReAct agent. This agent takes some query as input, then repeatedly calls tools until it has enough information to resolve the query. We'll use the prebuilt `ToolNode` to execute called tools, and a small, fast model powered by Anthropic:
```python
from typing import Literal

from langchain_anthropic import ChatAnthropic
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode

tool_node = ToolNode([get_weather])

model_with_tools = ChatAnthropic(
    model="claude-3-haiku-20240307", temperature=0
).bind_tools([get_weather])


def should_continue(state: MessagesState):
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END


def call_model(state: MessagesState):
    messages = state["messages"]
    response = model_with_tools.invoke(messages)
    return {"messages": [response]}


workflow = StateGraph(MessagesState)

# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)

workflow.add_edge(START, "agent")
workflow.add_conditional_edges("agent", should_continue, ["tools", END])
workflow.add_edge("tools", "agent")

app = workflow.compile()
```
```python
from IPython.display import Image, display

try:
    display(Image(app.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass
```
When you try to call the tool, you can see that the model calls it with a bad input, causing the tool to throw an error. The prebuilt `ToolNode` that executes the tool has some built-in error handling that captures the error and passes it back to the model so that it can try again:
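This catch-and-report pattern can be sketched in plain Python. The snippet below is a simplified, hypothetical mock of the kind of handling `ToolNode` performs, not its actual source; the `safe_tool_call` helper and its return shape are illustrative assumptions:

```python
# Hypothetical sketch of ToolNode-style error handling: catch the exception
# and hand it back as message content instead of crashing the graph.
def safe_tool_call(tool_fn, args: dict) -> dict:
    try:
        return {"status": "success", "content": str(tool_fn(**args))}
    except Exception as e:
        # Surface the error text so the model can correct itself and retry
        return {"status": "error", "content": f"Error: {e!r}\n Please fix your mistakes."}


def get_weather(location: str) -> str:
    """Toy tool with a hidden casing constraint, as in the example above."""
    if location == "San Francisco":
        return "It's 60 degrees and foggy."
    raise ValueError("Input queries must be proper nouns")


bad = safe_tool_call(get_weather, {"location": "san francisco"})
good = safe_tool_call(get_weather, {"location": "San Francisco"})
```

Because the error becomes ordinary message content rather than a raised exception, the agent loop keeps running and the model gets a chance to fix its input.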
```python
response = app.invoke(
    {"messages": [("human", "what is the weather in san francisco?")]},
)

for message in response["messages"]:
    string_representation = f"{message.type.upper()}: {message.content}\n"
    print(string_representation)
```
HUMAN: what is the weather in san francisco?
AI: [{'id': 'toolu_01K5tXKVRbETcs7Q8U9PHy96', 'input': {'location': 'san francisco'}, 'name': 'get_weather', 'type': 'tool_use'}]
TOOL: Error: ValueError('Input queries must be proper nouns')
Please fix your mistakes.
AI: [{'text': 'Apologies, it looks like there was an issue with the weather lookup. Let me try that again with the proper format:', 'type': 'text'}, {'id': 'toolu_01KSCsme3Du2NBazSJQ1af4b', 'input': {'location': 'San Francisco'}, 'name': 'get_weather', 'type': 'tool_use'}]
TOOL: It's 60 degrees and foggy.
AI: The current weather in San Francisco is 60 degrees and foggy.
Custom strategies¶
This is a fine default in many cases, but there are cases where custom fallbacks may be better.

For example, the below tool requires as input a list of elements of a specific length, which is tough for a small model! We'll also intentionally avoid pluralizing `topic` to trick the model into thinking it should pass a string:
```python
from langchain_core.output_parsers import StrOutputParser
from pydantic import BaseModel, Field


class HaikuRequest(BaseModel):
    topic: list[str] = Field(
        max_length=3,
        min_length=3,
    )


@tool
def master_haiku_generator(request: HaikuRequest):
    """Generates a haiku based on the provided topics."""
    model = ChatAnthropic(model="claude-3-haiku-20240307", temperature=0)
    chain = model | StrOutputParser()
    topics = ", ".join(request.topic)
    haiku = chain.invoke(f"Write a haiku about {topics}")
    return haiku


tool_node = ToolNode([master_haiku_generator])

model = ChatAnthropic(model="claude-3-haiku-20240307", temperature=0)
model_with_tools = model.bind_tools([master_haiku_generator])


def should_continue(state: MessagesState):
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END


def call_model(state: MessagesState):
    messages = state["messages"]
    response = model_with_tools.invoke(messages)
    return {"messages": [response]}


workflow = StateGraph(MessagesState)

# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)

workflow.add_edge(START, "agent")
workflow.add_conditional_edges("agent", should_continue, ["tools", END])
workflow.add_edge("tools", "agent")

app = workflow.compile()

response = app.invoke(
    {"messages": [("human", "Write me an incredible haiku about water.")]},
    {"recursion_limit": 10},
)

for message in response["messages"]:
    string_representation = f"{message.type.upper()}: {message.content}\n"
    print(string_representation)
```
API Reference: StrOutputParser
HUMAN: Write me an incredible haiku about water.
AI: [{'text': 'Here is a haiku about water:', 'type': 'text'}, {'id': 'toolu_01L13Z3Gtaym5KKgPXVyZhYn', 'input': {'topic': ['water']}, 'name': 'master_haiku_generator', 'type': 'tool_use'}]
TOOL: Error: 1 validation error for master_haiku_generator
request
Field required [type=missing, input_value={'topic': ['water']}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.7/v/missing
Please fix your mistakes.
AI: [{'text': 'Oops, my apologies. Let me try that again with the correct format:', 'type': 'text'}, {'id': 'toolu_01HCQ5uXr5kXQHBQ3FyQ1Ysk', 'input': {'topic': ['water']}, 'name': 'master_haiku_generator', 'type': 'tool_use'}]
TOOL: Error: 1 validation error for master_haiku_generator
request
Field required [type=missing, input_value={'topic': ['water']}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.7/v/missing
Please fix your mistakes.
AI: [{'text': 'Hmm, it seems there was an issue with the input format. Let me try a different approach:', 'type': 'text'}, {'id': 'toolu_01RF96nruwr4nMqhLBRsbfE5', 'input': {'request': {'topic': ['water']}}, 'name': 'master_haiku_generator', 'type': 'tool_use'}]
TOOL: Error: 1 validation error for master_haiku_generator
request.topic
List should have at least 3 items after validation, not 1 [type=too_short, input_value=['water'], input_type=list]
For further information visit https://errors.pydantic.dev/2.7/v/too_short
Please fix your mistakes.
AI: [{'text': 'Ah I see, the haiku generator requires at least 3 topics. Let me provide 3 topics related to water:', 'type': 'text'}, {'id': 'toolu_011jcgHuG2Kyr87By459huqQ', 'input': {'request': {'topic': ['ocean', 'rain', 'river']}}, 'name': 'master_haiku_generator', 'type': 'tool_use'}]
TOOL: Here is a haiku about ocean, rain, and river:
Vast ocean's embrace,
Raindrops caress the river,
Nature's symphony.
AI: I hope this haiku about water captures the essence you were looking for! Let me know if you would like me to generate another one.
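The repeated failures above stem from the schema constraints: `topic` must be a list of exactly three strings. The snippet below is a stdlib-only stand-in for the check Pydantic performs (illustrative only; `validate_topic` is a hypothetical helper, and the guide itself uses a real Pydantic model):

```python
# Stdlib-only stand-in for the Pydantic constraints on HaikuRequest.topic.
def validate_topic(topic) -> list:
    if isinstance(topic, str):
        # The singular field name tempts the model to pass a bare string
        raise TypeError("topic must be a list of strings, not a string")
    if len(topic) != 3:
        raise ValueError(f"List should have exactly 3 items, got {len(topic)}")
    return topic


# Walk through the same kinds of attempts the model made above
errors = []
for attempt in ("water", ["water"], ["ocean", "rain", "river"]):
    try:
        validate_topic(attempt)
        errors.append(None)
    except (TypeError, ValueError) as e:
        errors.append(str(e))
```

Only the third attempt, a list of exactly three topics, passes validation, which is why the small model needed several retries.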
A better strategy might be to trim the failed attempt to reduce distraction, then fall back to a more advanced model. Here's an example. We also use a custom-built node to call our tools, instead of the prebuilt `ToolNode`:
```python
import json

from langchain_core.messages import AIMessage, ToolMessage
from langchain_core.messages.modifier import RemoveMessage


@tool
def master_haiku_generator(request: HaikuRequest):
    """Generates a haiku based on the provided topics."""
    model = ChatAnthropic(model="claude-3-haiku-20240307", temperature=0)
    chain = model | StrOutputParser()
    topics = ", ".join(request.topic)
    haiku = chain.invoke(f"Write a haiku about {topics}")
    return haiku


def call_tool(state: MessagesState):
    tools_by_name = {master_haiku_generator.name: master_haiku_generator}
    messages = state["messages"]
    last_message = messages[-1]
    output_messages = []
    for tool_call in last_message.tool_calls:
        try:
            tool_result = tools_by_name[tool_call["name"]].invoke(tool_call["args"])
            output_messages.append(
                ToolMessage(
                    content=json.dumps(tool_result),
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                )
            )
        except Exception as e:
            # Return the error if the tool call fails
            output_messages.append(
                ToolMessage(
                    content="",
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                    additional_kwargs={"error": e},
                )
            )
    return {"messages": output_messages}


model = ChatAnthropic(model="claude-3-haiku-20240307", temperature=0)
model_with_tools = model.bind_tools([master_haiku_generator])

better_model = ChatAnthropic(model="claude-3-5-sonnet-20240620", temperature=0)
better_model_with_tools = better_model.bind_tools([master_haiku_generator])


def should_continue(state: MessagesState):
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END


def should_fallback(
    state: MessagesState,
) -> Literal["agent", "remove_failed_tool_call_attempt"]:
    messages = state["messages"]
    failed_tool_messages = [
        msg
        for msg in messages
        if isinstance(msg, ToolMessage)
        and msg.additional_kwargs.get("error") is not None
    ]
    if failed_tool_messages:
        return "remove_failed_tool_call_attempt"
    return "agent"


def call_model(state: MessagesState):
    messages = state["messages"]
    response = model_with_tools.invoke(messages)
    return {"messages": [response]}


def remove_failed_tool_call_attempt(state: MessagesState):
    messages = state["messages"]
    # Remove all messages from the most recent
    # instance of AIMessage onwards.
    last_ai_message_index = next(
        i
        for i, msg in reversed(list(enumerate(messages)))
        if isinstance(msg, AIMessage)
    )
    messages_to_remove = messages[last_ai_message_index:]
    return {"messages": [RemoveMessage(id=m.id) for m in messages_to_remove]}


# Fallback to a better model if a tool call fails
def call_fallback_model(state: MessagesState):
    messages = state["messages"]
    response = better_model_with_tools.invoke(messages)
    return {"messages": [response]}


workflow = StateGraph(MessagesState)

workflow.add_node("agent", call_model)
workflow.add_node("tools", call_tool)
workflow.add_node("remove_failed_tool_call_attempt", remove_failed_tool_call_attempt)
workflow.add_node("fallback_agent", call_fallback_model)

workflow.add_edge(START, "agent")
workflow.add_conditional_edges("agent", should_continue, ["tools", END])
workflow.add_conditional_edges("tools", should_fallback)
workflow.add_edge("remove_failed_tool_call_attempt", "fallback_agent")
workflow.add_edge("fallback_agent", "tools")

app = workflow.compile()
```
API Reference: AIMessage | ToolMessage | RemoveMessage
The `tools` node will now return `ToolMessage`s with an `error` field in `additional_kwargs` if a tool call fails. If that happens, it will route to another node that removes the failed tool messages and has a more capable model retry the tool call generation.
The diagram below shows this visually:
```python
try:
    display(Image(app.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass
```
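The pruning step in `remove_failed_tool_call_attempt` scans backwards to the most recent AI message and drops everything from there on. The same logic can be shown in plain Python, with `(role, content)` tuples standing in for LangChain message objects (a simplified sketch, not graph code):

```python
# Simplified sketch of the pruning logic in remove_failed_tool_call_attempt,
# using (role, content) tuples in place of LangChain message objects.
messages = [
    ("human", "Write me an incredible haiku about water."),
    ("ai", "tool call with invalid arguments"),
    ("tool", "Error: validation failed"),
]

# Find the index of the most recent AI message by scanning in reverse
last_ai_index = next(
    i for i, (role, _) in reversed(list(enumerate(messages))) if role == "ai"
)

# Everything from that AI message onward is dropped before the fallback
# model sees the history; only the earlier messages survive.
pruned = messages[:last_ai_index]
```

Pruning the failed attempt keeps the fallback model's context clean, so it isn't anchored on the smaller model's mistakes.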
Let's try it out. To emphasize the removal steps, let's stream the responses from the model so that we can see each executed node:
```python
stream = app.stream(
    {"messages": [("human", "Write me an incredible haiku about water.")]},
    {"recursion_limit": 10},
)

for chunk in stream:
    print(chunk)
```
{'agent': {'messages': [AIMessage(content=[{'text': 'Here is a haiku about water:', 'type': 'text'}, {'id': 'toolu_019mY8NX4t7YkJBWeHG6jE4T', 'input': {'topic': ['water']}, 'name': 'master_haiku_generator', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_01RmoaLh38DnRX2fv7E8vCFh', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 384, 'output_tokens': 67}}, id='run-a1511215-1a62-49b5-b5b3-b2c8f8c7920e-0', tool_calls=[{'name': 'master_haiku_generator', 'args': {'topic': ['water']}, 'id': 'toolu_019mY8NX4t7YkJBWeHG6jE4T', 'type': 'tool_call'}], usage_metadata={'input_tokens': 384, 'output_tokens': 67, 'total_tokens': 451})]}}
{'tools': {'messages': [ToolMessage(content='', name='master_haiku_generator', id='69f85339-dbc2-4341-8c4d-26300dfe31a5', tool_call_id='toolu_019mY8NX4t7YkJBWeHG6jE4T')]}}
{'remove_failed_tool_call_attempt': {'messages': [RemoveMessage(content='', additional_kwargs={}, response_metadata={}, id='run-a1511215-1a62-49b5-b5b3-b2c8f8c7920e-0'), RemoveMessage(content='', additional_kwargs={}, response_metadata={}, id='69f85339-dbc2-4341-8c4d-26300dfe31a5')]}}
{'fallback_agent': {'messages': [AIMessage(content=[{'text': 'Certainly! I\'d be happy to help you create an incredible haiku about water. To do this, I\'ll use the master_haiku_generator function, which requires three topics. Since you\'ve specified water as the main theme, I\'ll add two related concepts to create a more vivid and interesting haiku. Let\'s use "water," "flow," and "reflection" as our three topics.', 'type': 'text'}, {'id': 'toolu_01FxSxy8LeQ5PjdNYq8vLFTd', 'input': {'request': {'topic': ['water', 'flow', 'reflection']}}, 'name': 'master_haiku_generator', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_01U5HV3pt1NVm6syGbxx29no', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 414, 'output_tokens': 158}}, id='run-3eb746c7-b607-4ad3-881a-1c11a7638af7-0', tool_calls=[{'name': 'master_haiku_generator', 'args': {'request': {'topic': ['water', 'flow', 'reflection']}}, 'id': 'toolu_01FxSxy8LeQ5PjdNYq8vLFTd', 'type': 'tool_call'}], usage_metadata={'input_tokens': 414, 'output_tokens': 158, 'total_tokens': 572})]}}
{'tools': {'messages': [ToolMessage(content='"Here is a haiku about water, flow, and reflection:\\n\\nRippling waters flow,\\nMirroring the sky above,\\nTranquil reflection."', name='master_haiku_generator', id='fdfc497d-939a-42c0-8748-31371b98a3a7', tool_call_id='toolu_01FxSxy8LeQ5PjdNYq8vLFTd')]}}
{'agent': {'messages': [AIMessage(content='I hope you enjoy this haiku about the beauty and serenity of water. Please let me know if you would like me to generate another one.', additional_kwargs={}, response_metadata={'id': 'msg_012rXWHapc8tPfBPEonpAT6W', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 587, 'output_tokens': 35}}, id='run-ab6d412d-9374-4a4b-950d-6dcc43d87cf5-0', usage_metadata={'input_tokens': 587, 'output_tokens': 35, 'total_tokens': 622})]}}
You can also see this LangSmith trace, which shows the failed initial call to the smaller model.
Next steps¶
You've now seen how to implement some strategies to handle tool calling errors.

Next, check out some of the other LangGraph how-to guides here.