
How to create agents with configuration

One of the benefits of the LangGraph API is that it lets you create agents with different configurations. This is useful when you want to:

  • Define a cognitive architecture once as a LangGraph
  • Make that LangGraph configurable across some attributes (for example, the system message or the LLM to use)
  • Let users create agents with arbitrary configurations, save them, and use them in the future

In this guide we will show how to do this for the default agents we have built in.

If you look at the agents we defined, you can see that inside the call_model node we create the model based on some configuration. That node looks like this:

Python:

def call_model(state, config):
    messages = state["messages"]
    model_name = config.get("configurable", {}).get("model_name", "anthropic")
    model = _get_model(model_name)
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}
Javascript:

async function callModel(state: State, config: RunnableConfig) {
  const messages = state.messages;
  const modelName = config.configurable?.model_name ?? "anthropic";
  const model = _getModel(modelName);
  const response = await model.invoke(messages);
  // We return a list, because this will get added to the existing list
  return { messages: [response] };
}

We look up the model_name parameter in the config (defaulting to anthropic if it is not found). This means that by default we use Anthropic as the model provider. In this example, we will see an example of how to create an example agent that is configured to use OpenAI.
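The two-level `get` means that a missing config, a config without a `configurable` key, or a `configurable` dict without `model_name` all fall back to the default. A standalone sketch of that lookup:

```python
def resolve_model_name(config: dict) -> str:
    # Mirrors the lookup inside call_model: any missing level
    # falls back to the "anthropic" default.
    return config.get("configurable", {}).get("model_name", "anthropic")

print(resolve_model_name({}))  # anthropic
print(resolve_model_name({"configurable": {"model_name": "openai"}}))  # openai
```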

First, let's set up our client and thread:

Python:

from langgraph_sdk import get_client

client = get_client(url=<DEPLOYMENT_URL>)
# Select an assistant that is not configured
assistants = await client.assistants.search()
assistant = [a for a in assistants if not a["config"]][0]
Javascript:

import { Client } from "@langchain/langgraph-sdk";

const client = new Client({ apiUrl: <DEPLOYMENT_URL> });
// Select an assistant that is not configured
const assistants = await client.assistants.search();
const assistant = assistants.find(a => !a.config);
CURL:

curl --request POST \
    --url <DEPLOYMENT_URL>/assistants/search \
    --header 'Content-Type: application/json' \
    --data '{
        "limit": 10,
        "offset": 0
    }' | jq -c 'map(select(.config == null or .config == {})) | .[0]'

Now, we can call .get_schemas to get schemas associated with this graph:

Python:

schemas = await client.assistants.get_schemas(
    assistant_id=assistant["assistant_id"]
)
# There are multiple types of schemas
# We can get the `config_schema` to look at the configurable parameters
print(schemas["config_schema"])
Javascript:

const schemas = await client.assistants.getSchemas(
  assistant["assistant_id"]
);
// There are multiple types of schemas
// We can get the `config_schema` to look at the configurable parameters
console.log(schemas.config_schema);
CURL:

curl --request GET \
    --url <DEPLOYMENT_URL>/assistants/<ASSISTANT_ID>/schemas | jq -r '.config_schema'

Output:

{
    'model_name': 
        {
            'title': 'Model Name',
            'enum': ['anthropic', 'openai'],
            'type': 'string'
        }
}
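The server will reject values outside the schema, but you can also check a candidate config client-side before creating an assistant. A hypothetical helper (not part of the SDK), assuming the `config_schema` shape printed above:

```python
def validate_configurable(configurable: dict, config_schema: dict) -> list[str]:
    # Returns a list of human-readable problems; an empty list means
    # the configurable values satisfy the schema's enum constraints.
    problems = []
    for key, value in configurable.items():
        spec = config_schema.get(key)
        if spec is None:
            problems.append(f"unknown key: {key}")
        elif "enum" in spec and value not in spec["enum"]:
            problems.append(f"{key}={value!r} not in {spec['enum']}")
    return problems

schema = {
    "model_name": {"title": "Model Name", "enum": ["anthropic", "openai"], "type": "string"}
}
print(validate_configurable({"model_name": "openai"}, schema))   # []
print(validate_configurable({"model_name": "open_ai"}, schema))  # one problem reported
```

Catching a typo such as "open_ai" locally is cheaper than a round trip to the deployment.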

Now, we can initialize an assistant with config:

Python:

openai_assistant = await client.assistants.create(
    # "agent" is the name of the graph we deployed
    "agent", config={"configurable": {"model_name": "openai"}}
)

print(openai_assistant)
Javascript:

let openAIAssistant = await client.assistants.create({
  // "agent" is the name of the graph we deployed
  graphId: "agent",
  config: { configurable: { model_name: "openai" } },
});

console.log(openAIAssistant);
CURL:

curl --request POST \
    --url <DEPLOYMENT_URL>/assistants \
    --header 'Content-Type: application/json' \
    --data '{"graph_id":"agent","config":{"configurable":{"model_name":"openai"}}}'

Output:

{
    "assistant_id": "62e209ca-9154-432a-b9e9-2d75c7a9219b",
    "graph_id": "agent",
    "created_at": "2024-08-31T03:09:10.230718+00:00",
    "updated_at": "2024-08-31T03:09:10.230718+00:00",
    "config": {
        "configurable": {
            "model_name": "openai"
        }
    },
    "metadata": {}
}

We can verify the config is indeed taking effect:

Python:

thread = await client.threads.create()
input = {"messages": [{"role": "user", "content": "who made you?"}]}
async for event in client.runs.stream(
    thread["thread_id"],
    openai_assistant["assistant_id"],
    input=input,
    stream_mode="updates",
):
    print(f"Receiving event of type: {event.event}")
    print(event.data)
    print("\n\n")
Javascript:

const thread = await client.threads.create();
let input = { "messages": [{ "role": "user", "content": "who made you?" }] };

const streamResponse = client.runs.stream(
  thread["thread_id"],
  openAIAssistant["assistant_id"],
  {
    input,
    streamMode: "updates"
  }
);

for await (const event of streamResponse) {
  console.log(`Receiving event of type: ${event.event}`);
  console.log(event.data);
  console.log("\n\n");
}
CURL:

thread_id=$(curl --request POST \
    --url <DEPLOYMENT_URL>/threads \
    --header 'Content-Type: application/json' \
    --data '{}' | jq -r '.thread_id') && \
curl --request POST \
    --url "<DEPLOYMENT_URL>/threads/${thread_id}/runs/stream" \
    --header 'Content-Type: application/json' \
    --data '{
        "assistant_id": <OPENAI_ASSISTANT_ID>,
        "input": {
            "messages": [
                {
                    "role": "user",
                    "content": "who made you?"
                }
            ]
        },
        "stream_mode": [
            "updates"
        ]
    }' | \
    sed 's/\r$//' | \
    awk '
    /^event:/ {
        if (data_content != "") {
            print data_content "\n"
        }
        sub(/^event: /, "Receiving event of type: ", $0)
        printf "%s...\n", $0
        data_content = ""
    }
    /^data:/ {
        sub(/^data: /, "", $0)
        data_content = $0
    }
    END {
        if (data_content != "") {
            print data_content "\n\n"
        }
    }
'

Output:

Receiving event of type: metadata
{'run_id': '1ef6746e-5893-67b1-978a-0f1cd4060e16'}



Receiving event of type: updates
{'agent': {'messages': [{'content': 'I was created by OpenAI, a research organization focused on developing and advancing artificial intelligence technology.', 'additional_kwargs': {}, 'response_metadata': {'finish_reason': 'stop', 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_157b3831f5'}, 'type': 'ai', 'name': None, 'id': 'run-e1a6b25c-8416-41f2-9981-f9cfe043f414', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None}]}}
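Each "updates" event maps node names to their state updates. To pull the final assistant reply out of a collected stream, something like the following works on the event shapes shown above (a client-side sketch, not an SDK helper):

```python
def last_ai_content(events):
    # events: (event_type, data) pairs collected from a stream_mode="updates" run.
    # Walk backwards to find the most recent AI message produced by any node.
    for event_type, data in reversed(events):
        if event_type != "updates":
            continue
        for node_update in data.values():
            for message in reversed(node_update.get("messages", [])):
                if message.get("type") == "ai":
                    return message.get("content")
    return None

events = [
    ("metadata", {"run_id": "1ef6746e-5893-67b1-978a-0f1cd4060e16"}),
    ("updates", {"agent": {"messages": [{"type": "ai", "content": "I was created by OpenAI..."}]}}),
]
print(last_ai_content(events))  # I was created by OpenAI...
```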
