
How to implement handoffs between agents

Prerequisites

This guide assumes familiarity with the following:

  • Multi-agent systems
  • Command

In a multi-agent architecture, agents can be represented as graph nodes. Each agent node executes its step(s) and decides whether to finish execution or route to another agent, including potentially routing to itself (e.g., running in a loop). A natural pattern in multi-agent interactions is the handoff, where one agent hands control over to another. Handoffs allow you to specify:

  • destination: the target agent to navigate to - a node name in LangGraph
  • payload: information to pass to that agent - a state update in LangGraph

To implement handoffs in LangGraph, an agent node can return a Command object, which lets you combine control flow and state updates:

def agent(state) -> Command[Literal["agent", "another_agent"]]:
    # the condition for routing / halting can be anything, e.g. an LLM tool call, structured output, etc.
    goto = get_next_agent(...)  # 'agent' / 'another_agent'
    return Command(
        # specify which agent to call next
        goto=goto,
        # update the graph state
        update={"my_state_key": "my_state_value"}
    )

One of the most common agent types is a tool-calling agent. For those types of agents, one pattern is to wrap a handoff in a tool call, e.g.:

@tool
def transfer_to_bob(state):
    """转移到 bob。"""
    return Command(
        goto="bob",
        update={"my_state_key": "my_state_value"},
        # each tool-calling agent is implemented as a subgraph.
        # as a result, to navigate to another agent (a sibling subgraph),
        # we need to specify that the navigation is relative to the parent graph.
        graph=Command.PARENT,
    )

This guide shows how you can:

  • Implement handoffs using Command: an agent node decides who to hand control to (typically based on the LLM) and explicitly returns the handoff via Command. This approach is useful when you need fine-grained control over how an agent routes to another agent, and it can be a good fit for implementing a supervisor agent in a supervisor architecture.
  • Implement handoffs using tools: a tool-calling agent has access to tools that can return handoffs via Command. The tool-executing node in the agent recognizes the Command objects returned by the tools and routes accordingly. Handoff tools are a general-purpose primitive that is useful in any multi-agent system that contains tool-calling agents.

Setup

%%capture --no-stderr
%pip install -U langgraph langchain-anthropic
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("ANTHROPIC_API_KEY")

Set up LangSmith for LangGraph development

Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph — read more about how to get started here.

Implement handoffs using Command

Let's implement a simple system with two agents:

  • an addition expert (can only add numbers)
  • a multiplication expert (can only multiply numbers).

In this example, the agents will rely on the LLM for doing the math. In a more realistic follow-up example, we'll give the agents tools for doing math.

When the addition expert needs help with multiplication, it hands off to the multiplication expert, and vice versa. This is an example of a simple multi-agent network.

Each agent will have a corresponding node function that can conditionally return a Command object (i.e., our handoff). The node function will use an LLM with a system prompt and a tool that lets it signal when it needs to hand off to another agent. If the LLM responds with tool calls, we will return a Command(goto=<other_agent>).

Note: while we're using a tool for the LLM to signal the handoff, the condition for the handoff can be anything: specific response text from the LLM, structured output from the LLM, any other custom logic, etc.
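For instance, here is a minimal sketch of routing on structured output instead of tool calls. It is not part of this guide's running example; the Router schema and its next_agent field are hypothetical names used only for illustration.

from typing_extensions import Literal

from pydantic import BaseModel
from langchain_anthropic import ChatAnthropic
from langgraph.graph import MessagesState
from langgraph.types import Command

model = ChatAnthropic(model="claude-3-5-sonnet-latest")


class Router(BaseModel):
    """Hypothetical schema the LLM fills in to pick the next agent."""

    next_agent: Literal["addition_expert", "multiplication_expert", "__end__"]


def routing_agent(
    state: MessagesState,
) -> Command[Literal["addition_expert", "multiplication_expert", "__end__"]]:
    # ask the LLM for a structured routing decision instead of a tool call
    messages = [
        {"role": "system", "content": "Decide which expert should act next."}
    ] + state["messages"]
    decision = model.with_structured_output(Router).invoke(messages)
    return Command(goto=decision.next_agent)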

from typing_extensions import Literal
from langchain_core.messages import ToolMessage
from langchain_core.tools import tool
from langchain_anthropic import ChatAnthropic
from langgraph.graph import MessagesState, StateGraph, START
from langgraph.types import Command

model = ChatAnthropic(model="claude-3-5-sonnet-latest")


@tool
def transfer_to_multiplication_expert():
    """Ask multiplication agent for help."""
    # This tool is not returning anything: we're just using it
    # as a way for LLM to signal that it needs to hand off to another agent
    # (See the paragraph above)
    return


@tool
def transfer_to_addition_expert():
    """Ask addition agent for help."""
    return


def addition_expert(
    state: MessagesState,
) -> Command[Literal["multiplication_expert", "__end__"]]:
    system_prompt = (
        "You are an addition expert, you can ask the multiplication expert for help with multiplication. "
        "Always do your portion of calculation before the handoff."
    )
    messages = [{"role": "system", "content": system_prompt}] + state["messages"]
    ai_msg = model.bind_tools([transfer_to_multiplication_expert]).invoke(messages)
    # If there are tool calls, the LLM needs to hand off to another agent
    if len(ai_msg.tool_calls) > 0:
        tool_call_id = ai_msg.tool_calls[-1]["id"]
        # NOTE: it's important to insert a tool message here because LLM providers are expecting
        # all AI messages to be followed by a corresponding tool result message
        tool_msg = {
            "role": "tool",
            "content": "Successfully transferred",
            "tool_call_id": tool_call_id,
        }
        return Command(
            goto="multiplication_expert", update={"messages": [ai_msg, tool_msg]}
        )

    # If the expert has an answer, return it directly to the user
    return {"messages": [ai_msg]}


def multiplication_expert(
    state: MessagesState,
) -> Command[Literal["addition_expert", "__end__"]]:
    system_prompt = (
        "You are a multiplication expert, you can ask an addition expert for help with addition. "
        "Always do your portion of calculation before the handoff."
    )
    messages = [{"role": "system", "content": system_prompt}] + state["messages"]
    ai_msg = model.bind_tools([transfer_to_addition_expert]).invoke(messages)
    if len(ai_msg.tool_calls) > 0:
        tool_call_id = ai_msg.tool_calls[-1]["id"]
        tool_msg = {
            "role": "tool",
            "content": "Successfully transferred",
            "tool_call_id": tool_call_id,
        }
        return Command(goto="addition_expert", update={"messages": [ai_msg, tool_msg]})

    return {"messages": [ai_msg]}

API Reference: ToolMessage | tool

Now, let's combine both of these nodes into a single graph. Note that there are no edges between the agents! If an expert has an answer, it will return it directly to the user; otherwise it will route to the other expert for help.

builder = StateGraph(MessagesState)
builder.add_node("addition_expert", addition_expert)
builder.add_node("multiplication_expert", multiplication_expert)
# we'll always start with the addition expert
builder.add_edge(START, "addition_expert")
graph = builder.compile()
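
Optionally, you can render the compiled graph to double-check the wiring. This small snippet assumes you're running in a Jupyter-style environment where IPython display is available:

from IPython.display import Image, display

# render the graph structure as a Mermaid diagram
display(Image(graph.get_graph().draw_mermaid_png()))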

Finally, let's define a helper function to render the streamed outputs nicely:

from langchain_core.messages import convert_to_messages


def pretty_print_messages(update):
    if isinstance(update, tuple):
        ns, update = update
        # skip parent graph updates in the printouts
        if len(ns) == 0:
            return

        graph_id = ns[-1].split(":")[0]
        print(f"Update from subgraph {graph_id}:")
        print("\n")

    for node_name, node_update in update.items():
        print(f"Update from node {node_name}:")
        print("\n")

        for m in convert_to_messages(node_update["messages"]):
            m.pretty_print()
        print("\n")

API Reference: convert_to_messages

Let's run the graph with an expression that requires both addition and multiplication:

for chunk in graph.stream(
    {"messages": [("user", "what's (3 + 5) * 12")]},
):
    pretty_print_messages(chunk)
Update from node addition_expert:


================================== Ai Message ==================================

[{'text': "Let me help break this down:\n\nFirst, I'll handle the addition part since I'm the addition expert:\n3 + 5 = 8\n\nNow, for the multiplication of 8 * 12, I'll need to ask the multiplication expert for help.", 'type': 'text'}, {'id': 'toolu_015LCrsomHbeoQPtCzuff78Y', 'input': {}, 'name': 'transfer_to_multiplication_expert', 'type': 'tool_use'}]
Tool Calls:
  transfer_to_multiplication_expert (toolu_015LCrsomHbeoQPtCzuff78Y)
 Call ID: toolu_015LCrsomHbeoQPtCzuff78Y
  Args:
================================= Tool Message =================================

Successfully transferred


Update from node multiplication_expert:


================================== Ai Message ==================================

[{'text': 'I see there was an error in my approach. I am actually the multiplication expert, and I need to ask the addition expert for help with (3 + 5) first.', 'type': 'text'}, {'id': 'toolu_01HFcB8WesPfDyrdgxoXApZk', 'input': {}, 'name': 'transfer_to_addition_expert', 'type': 'tool_use'}]
Tool Calls:
  transfer_to_addition_expert (toolu_01HFcB8WesPfDyrdgxoXApZk)
 Call ID: toolu_01HFcB8WesPfDyrdgxoXApZk
  Args:
================================= Tool Message =================================

Successfully transferred


Update from node addition_expert:


================================== Ai Message ==================================

Now that I have the result of 3 + 5 = 8 from the addition expert, I can multiply 8 * 12:

8 * 12 = 96

So, (3 + 5) * 12 = 96

You can see that the addition expert first handled the expression in parentheses, and then handed off to the multiplication expert to finish the calculation.

Now let's see how we can implement the same system using special handoff tools and also give our agents actual math tools.

Implement handoffs using tools

Implement a handoff tool

In the previous example we explicitly defined custom handoffs in each of the agent nodes. Another pattern is to create special **handoff tools** that directly return Command objects. When an agent calls such a tool, it hands control over to a different agent. Specifically, the tool-executing node in the agent recognizes the Command objects returned by the tools and routes control flow accordingly. Note: unlike in the previous example, a tool-calling agent is not a single node but another graph that can be added to the multi-agent graph as a subgraph node.

There are a few important considerations when implementing handoff tools:

  • since each agent is a **subgraph** node in another graph, and the tools will be called inside one of the agent subgraph nodes (e.g. the tool executor), we need to specify graph=Command.PARENT in the Command, so that LangGraph knows to navigate outside of the agent subgraph.
  • we can optionally specify a state update that will be applied to the parent graph state before the next agent is called.
    • these state updates can be used to control how much of the chat message history the target agent sees. For example, you might choose to share only the last AI message from the current agent, or its full internal chat history, etc. In the example below we'll be sharing the full internal chat history (a sketch of a last-message-only variant follows the handoff tool below).
  • we can optionally provide the following to the tool (in the tool function signature):
    • the graph state (via InjectedState)
    • the current tool call ID (via InjectedToolCallId)

These are not required, but they are useful for creating the state update that is passed to the next agent.

from typing import Annotated

from langchain_core.tools import tool
from langchain_core.tools.base import InjectedToolCallId
from langgraph.prebuilt import InjectedState


def make_handoff_tool(*, agent_name: str):
    """Create a tool that can return handoff via a Command"""
    tool_name = f"transfer_to_{agent_name}"

    @tool(tool_name)
    def handoff_to_agent(
        # optionally pass current graph state to the tool (will be ignored by the LLM)
        state: Annotated[dict, InjectedState],
        # optionally pass the current tool call ID (will be ignored by the LLM)
        tool_call_id: Annotated[str, InjectedToolCallId],
    ):
        """Ask another agent for help."""
        tool_message = {
            "role": "tool",
            "content": f"Successfully transferred to {agent_name}",
            "name": tool_name,
            "tool_call_id": tool_call_id,
        }
        return Command(
            # navigate to another agent node in the PARENT graph
            goto=agent_name,
            graph=Command.PARENT,
            # This is the state update that the agent `agent_name` will see when it is invoked.
            # We're passing agent's FULL internal message history AND adding a tool message to make sure
            # the resulting chat history is valid. See the paragraph above for more information.
            update={"messages": state["messages"] + [tool_message]},
        )

    return handoff_to_agent

API Reference: tool | InjectedToolCallId
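
If you'd rather not expose the agent's full internal history, one possible variant (a sketch reusing the imports from the cell above; not part of the original example) shares only the AI message that issued the handoff:

def make_handoff_tool_last_message(*, agent_name: str):
    """Variant of make_handoff_tool that shares only the last AI message."""
    tool_name = f"transfer_to_{agent_name}"

    @tool(tool_name)
    def handoff_to_agent(
        state: Annotated[dict, InjectedState],
        tool_call_id: Annotated[str, InjectedToolCallId],
    ):
        """Ask another agent for help."""
        tool_message = {
            "role": "tool",
            "content": f"Successfully transferred to {agent_name}",
            "name": tool_name,
            "tool_call_id": tool_call_id,
        }
        return Command(
            goto=agent_name,
            graph=Command.PARENT,
            # share only the AI message that made the handoff tool call,
            # followed by the tool result, so the resulting chat history stays valid
            update={"messages": [state["messages"][-1], tool_message]},
        )

    return handoff_to_agent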

Using with a custom agent

To demonstrate how to use handoff tools, let's first implement a simplified version of the prebuilt create_react_agent. This is useful if you want to implement your own custom tool-calling agent and want to leverage handoff tools.

from typing_extensions import Literal
from langchain_core.messages import ToolMessage
from langchain_core.tools import tool
from langgraph.graph import MessagesState, StateGraph, START
from langgraph.types import Command


def make_agent(model, tools, system_prompt=None):
    model_with_tools = model.bind_tools(tools)
    tools_by_name = {tool.name: tool for tool in tools}

    def call_model(state: MessagesState) -> Command[Literal["call_tools", "__end__"]]:
        messages = state["messages"]
        if system_prompt:
            messages = [{"role": "system", "content": system_prompt}] + messages

        response = model_with_tools.invoke(messages)
        if len(response.tool_calls) > 0:
            return Command(goto="call_tools", update={"messages": [response]})

        return {"messages": [response]}

    # NOTE: this is a simplified version of the prebuilt ToolNode
    # If you want to have a tool node that has full feature parity, please refer to the source code
    def call_tools(state: MessagesState) -> Command[Literal["call_model"]]:
        tool_calls = state["messages"][-1].tool_calls
        results = []
        for tool_call in tool_calls:
            tool_ = tools_by_name[tool_call["name"]]
            tool_input_fields = tool_.get_input_schema().model_json_schema()[
                "properties"
            ]

            # this is simplified for demonstration purposes and
            # is different from the ToolNode implementation
            if "state" in tool_input_fields:
                # inject state
                tool_call = {**tool_call, "args": {**tool_call["args"], "state": state}}

            tool_response = tool_.invoke(tool_call)
            if isinstance(tool_response, ToolMessage):
                results.append(Command(update={"messages": [tool_response]}))

            # handle tools that return Command directly
            elif isinstance(tool_response, Command):
                results.append(tool_response)

        # NOTE: nodes in LangGraph allow you to return list of updates, including Command objects
        return results

    graph = StateGraph(MessagesState)
    graph.add_node(call_model)
    graph.add_node(call_tools)
    graph.add_edge(START, "call_model")
    graph.add_edge("call_tools", "call_model")

    return graph.compile()

API Reference: ToolMessage | tool

Let's also define the math tools that we'll give to our agents:

@tool
def add(a: int, b: int) -> int:
    """Adds two numbers."""
    return a + b


@tool
def multiply(a: int, b: int) -> int:
    """Multiplies two numbers."""
    return a * b

Let's test the agent implementation to make sure it works as expected:

agent = make_agent(model, [add, multiply])

for chunk in agent.stream({"messages": [("user", "what's (3 + 5) * 12")]}):
    pretty_print_messages(chunk)
Update from node call_model:


================================== Ai Message ==================================

[{'text': "I'll help break this down into two steps:\n1. First calculate 3 + 5\n2. Then multiply that result by 12\n\nLet me make these calculations:\n\n1. Adding 3 and 5:", 'type': 'text'}, {'id': 'toolu_01DUAzgWFqq6XZtj1hzHTka9', 'input': {'a': 3, 'b': 5}, 'name': 'add', 'type': 'tool_use'}]
Tool Calls:
  add (toolu_01DUAzgWFqq6XZtj1hzHTka9)
 Call ID: toolu_01DUAzgWFqq6XZtj1hzHTka9
  Args:
    a: 3
    b: 5


Update from node call_tools:


================================= Tool Message =================================
Name: add

8


Update from node call_model:


================================== Ai Message ==================================

[{'text': '2. Multiplying the result (8) by 12:', 'type': 'text'}, {'id': 'toolu_01QXi1prSN4etgJ1QCuFJsgN', 'input': {'a': 8, 'b': 12}, 'name': 'multiply', 'type': 'tool_use'}]
Tool Calls:
  multiply (toolu_01QXi1prSN4etgJ1QCuFJsgN)
 Call ID: toolu_01QXi1prSN4etgJ1QCuFJsgN
  Args:
    a: 8
    b: 12


Update from node call_tools:


================================= Tool Message =================================
Name: multiply

96


Update from node call_model:


================================== Ai Message ==================================

The result of (3 + 5) * 12 = 96

Now we can implement our multi-agent system with the addition and multiplication expert agents. This time we'll give them the tools for doing math, as well as our special handoff tools:

addition_expert = make_agent(
    model,
    [add, make_handoff_tool(agent_name="multiplication_expert")],
    system_prompt="You are an addition expert, you can ask the multiplication expert for help with multiplication.",
)
multiplication_expert = make_agent(
    model,
    [multiply, make_handoff_tool(agent_name="addition_expert")],
    system_prompt="You are a multiplication expert, you can ask an addition expert for help with addition.",
)

builder = StateGraph(MessagesState)
builder.add_node("addition_expert", addition_expert)
builder.add_node("multiplication_expert", multiplication_expert)
builder.add_edge(START, "addition_expert")
graph = builder.compile()

Let's run the graph with the same multi-step calculation input as before:

for chunk in graph.stream(
    {"messages": [("user", "what's (3 + 5) * 12")]}, subgraphs=True
):
    pretty_print_messages(chunk)
Update from subgraph addition_expert:


Update from node call_model:


================================== Ai Message ==================================

[{'text': "I can help with the addition part (3 + 5), but I'll need to ask the multiplication expert for help with multiplying the result by 12. Let me break this down:\n\n1. First, let me calculate 3 + 5:", 'type': 'text'}, {'id': 'toolu_01McaW4XWczLGKaetg88fxQ5', 'input': {'a': 3, 'b': 5}, 'name': 'add', 'type': 'tool_use'}]
Tool Calls:
  add (toolu_01McaW4XWczLGKaetg88fxQ5)
 Call ID: toolu_01McaW4XWczLGKaetg88fxQ5
  Args:
    a: 3
    b: 5


Update from subgraph addition_expert:


Update from node call_tools:


================================= Tool Message =================================
Name: add

8


Update from subgraph addition_expert:


Update from node call_model:


================================== Ai Message ==================================

[{'text': "Now that we have 8, we need to multiply it by 12. I'll ask the multiplication expert for help with this:", 'type': 'text'}, {'id': 'toolu_01KpdUhHuyrmha62z5SduKRc', 'input': {}, 'name': 'transfer_to_multiplication_expert', 'type': 'tool_use'}]
Tool Calls:
  transfer_to_multiplication_expert (toolu_01KpdUhHuyrmha62z5SduKRc)
 Call ID: toolu_01KpdUhHuyrmha62z5SduKRc
  Args:


Update from subgraph multiplication_expert:


Update from node call_model:


================================== Ai Message ==================================

[{'text': 'Now that we have 8 as the result of the addition, I can help with the multiplication by 12:', 'type': 'text'}, {'id': 'toolu_01Vnp4k3TE87siad3BNJgRKb', 'input': {'a': 8, 'b': 12}, 'name': 'multiply', 'type': 'tool_use'}]
Tool Calls:
  multiply (toolu_01Vnp4k3TE87siad3BNJgRKb)
 Call ID: toolu_01Vnp4k3TE87siad3BNJgRKb
  Args:
    a: 8
    b: 12


Update from subgraph multiplication_expert:


Update from node call_tools:


================================= Tool Message =================================
Name: multiply

96


Update from subgraph multiplication_expert:


Update from node call_model:


================================== Ai Message ==================================

The final result is 96.

To break down the steps:
1. 3 + 5 = 8
2. 8 * 12 = 96

We can see that after the addition expert completed the first part of the calculation (after calling the add tool), it decided to hand off to the multiplication expert to compute the final result.

Using with the prebuilt ReAct agent

If you don't need extra customization, you can use the prebuilt create_react_agent, which includes built-in support for handoff tools through ToolNode.

from langgraph.prebuilt import create_react_agent

addition_expert = create_react_agent(
    model,
    [add, make_handoff_tool(agent_name="multiplication_expert")],
    prompt="You are an addition expert, you can ask the multiplication expert for help with multiplication.",
)

multiplication_expert = create_react_agent(
    model,
    [multiply, make_handoff_tool(agent_name="addition_expert")],
    prompt="You are a multiplication expert, you can ask an addition expert for help with addition.",
)

builder = StateGraph(MessagesState)
builder.add_node("addition_expert", addition_expert)
builder.add_node("multiplication_expert", multiplication_expert)
builder.add_edge(START, "addition_expert")
graph = builder.compile()

Now we can verify that the prebuilt ReAct agents work exactly the same as the custom agent above:

for chunk in graph.stream(
    {"messages": [("user", "what's (3 + 5) * 12")]}, subgraphs=True
):
    pretty_print_messages(chunk)
Update from subgraph addition_expert:


Update from node agent:


================================== Ai Message ==================================

[{'text': "I can help with the addition part of this calculation (3 + 5), and then I'll need to ask the multiplication expert for help with multiplying the result by 12.\n\nLet me first calculate 3 + 5:", 'type': 'text'}, {'id': 'toolu_01GUasumGGJVXDV7TJEqEfmY', 'input': {'a': 3, 'b': 5}, 'name': 'add', 'type': 'tool_use'}]
Tool Calls:
  add (toolu_01GUasumGGJVXDV7TJEqEfmY)
 Call ID: toolu_01GUasumGGJVXDV7TJEqEfmY
  Args:
    a: 3
    b: 5


Update from subgraph addition_expert:


Update from node tools:


================================= Tool Message =================================
Name: add

8


Update from subgraph addition_expert:


Update from node agent:


================================== Ai Message ==================================

[{'text': "Now that we have 8, we need to multiply it by 12. Since I'm an addition expert, I'll transfer this to the multiplication expert to complete the calculation:", 'type': 'text'}, {'id': 'toolu_014HEbwiH2jVno8r1Pc6t9Qh', 'input': {}, 'name': 'transfer_to_multiplication_expert', 'type': 'tool_use'}]
Tool Calls:
  transfer_to_multiplication_expert (toolu_014HEbwiH2jVno8r1Pc6t9Qh)
 Call ID: toolu_014HEbwiH2jVno8r1Pc6t9Qh
  Args:


Update from subgraph multiplication_expert:


Update from node agent:


================================== Ai Message ==================================

[{'text': 'I notice I made a mistake - I actually don\'t have access to the "add" function or "transfer_to_multiplication_expert". Instead, I am the multiplication expert and I should ask the addition expert for help with the first part. Let me correct this:', 'type': 'text'}, {'id': 'toolu_01VAGpmr4ysHjvvuZp3q5Dzj', 'input': {}, 'name': 'transfer_to_addition_expert', 'type': 'tool_use'}]
Tool Calls:
  transfer_to_addition_expert (toolu_01VAGpmr4ysHjvvuZp3q5Dzj)
 Call ID: toolu_01VAGpmr4ysHjvvuZp3q5Dzj
  Args:


Update from subgraph addition_expert:


Update from node agent:


================================== Ai Message ==================================

[{'text': "I'll help you with the addition part of (3 + 5) * 12. First, let me calculate 3 + 5:", 'type': 'text'}, {'id': 'toolu_01RE16cRGVo4CC4wwHFB6gaE', 'input': {'a': 3, 'b': 5}, 'name': 'add', 'type': 'tool_use'}]
Tool Calls:
  add (toolu_01RE16cRGVo4CC4wwHFB6gaE)
 Call ID: toolu_01RE16cRGVo4CC4wwHFB6gaE
  Args:
    a: 3
    b: 5


Update from subgraph addition_expert:


Update from node tools:


================================= Tool Message =================================
Name: add

8


Update from subgraph addition_expert:


Update from node agent:


================================== Ai Message ==================================

[{'text': "Now that we have 8, we need to multiply it by 12. Since I'm an addition expert, I'll need to transfer this to the multiplication expert to complete the calculation:", 'type': 'text'}, {'id': 'toolu_01HBDRh64SzGcCp7EX1u3MFa', 'input': {}, 'name': 'transfer_to_multiplication_expert', 'type': 'tool_use'}]
Tool Calls:
  transfer_to_multiplication_expert (toolu_01HBDRh64SzGcCp7EX1u3MFa)
 Call ID: toolu_01HBDRh64SzGcCp7EX1u3MFa
  Args:


Update from subgraph multiplication_expert:


Update from node agent:


================================== Ai Message ==================================

[{'text': 'Now that I have the result of 3 + 5 = 8, I can help with multiplying by 12:', 'type': 'text'}, {'id': 'toolu_014Ay95rsKvvbWWJV4CcZSPY', 'input': {'a': 8, 'b': 12}, 'name': 'multiply', 'type': 'tool_use'}]
Tool Calls:
  multiply (toolu_014Ay95rsKvvbWWJV4CcZSPY)
 Call ID: toolu_014Ay95rsKvvbWWJV4CcZSPY
  Args:
    a: 8
    b: 12


Update from subgraph multiplication_expert:


Update from node tools:


================================= Tool Message =================================
Name: multiply

96


Update from subgraph multiplication_expert:


Update from node agent:


================================== Ai Message ==================================

The final result is 96. Here's the complete calculation:
(3 + 5) * 12 = 8 * 12 = 96
