
Prompt Generation from User Requirements

In this example we will create a chatbot that helps a user generate a prompt. The bot will first collect the user's requirements, and then generate a prompt (and refine it based on further user input). These are two separate states, and the LLM decides when to transition between them.

Below is a graphical representation of the system.

(Figure: prompt-generator.png, a diagram of the two-state graph described above)

Setup

First, let's install the required packages and set our OpenAI API key (the LLM we will use).

%%capture --no-stderr
% pip install -U langgraph langchain_openai
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("OPENAI_API_KEY")

Set up LangSmith for LangGraph development

Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph; read more about how to get started here.

Gather information

First, let's define the part of the graph that will gather user requirements. This is an LLM call with a specific system message. When it decides it is ready to generate the prompt, it can call a tool.

Using Pydantic with LangChain

This notebook uses Pydantic v2 BaseModel, which requires langchain-core >= 0.3. Using langchain-core < 0.3 will result in errors due to mixing of Pydantic v1 and v2 BaseModels.
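
As an optional sanity check, you can print the installed langchain-core version before proceeding (a minimal sketch; it only assumes the package is installed under the distribution name langchain-core):

# Optional: confirm langchain-core >= 0.3 before running the rest of the
# notebook; older versions mix Pydantic v1 and v2 BaseModels and raise errors.
from importlib.metadata import version

print(version("langchain-core"))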

from typing import List

from langchain_core.messages import SystemMessage
from langchain_openai import ChatOpenAI

from pydantic import BaseModel

API Reference: SystemMessage

template = """Your job is to get information from a user about what type of prompt template they want to create.

You should get the following information from them:

- What the objective of the prompt is
- What variables will be passed into the prompt template
- Any constraints for what the output should NOT do
- Any requirements that the output MUST adhere to

If you are not able to discern this info, ask them to clarify! Do not attempt to wildly guess.

After you are able to discern all the information, call the relevant tool."""


def get_messages_info(messages):
    return [SystemMessage(content=template)] + messages


class PromptInstructions(BaseModel):
    """Instructions on how to prompt the LLM."""

    objective: str
    variables: List[str]
    constraints: List[str]
    requirements: List[str]


llm = ChatOpenAI(temperature=0)
llm_with_tool = llm.bind_tools([PromptInstructions])


def info_chain(state):
    messages = get_messages_info(state["messages"])
    response = llm_with_tool.invoke(messages)
    return {"messages": [response]}
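
To sanity-check this node in isolation, you can invoke it on a hand-built state (a sketch; it makes a real OpenAI call, so OPENAI_API_KEY must be set, and the sample request text is made up):

# Smoke test: run the info-gathering node on a single user message.
# Note: this calls the OpenAI API.
from langchain_core.messages import HumanMessage

test_state = {"messages": [HumanMessage(content="I want a summarization prompt")]}
print(info_chain(test_state)["messages"][-1])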

Generate Prompt

We now set up the state that will generate the prompt. This requires a separate system message, as well as a function to filter out all messages before the tool call (since the tool call is the point at which the previous state decided it was time to generate the prompt).

from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

# New system prompt
prompt_system = """Based on the following requirements, write a good prompt template:

{reqs}"""


# Function to get the messages for the prompt
# Will only get messages AFTER the tool call
def get_prompt_messages(messages: list):
    tool_call = None
    other_msgs = []
    for m in messages:
        if isinstance(m, AIMessage) and m.tool_calls:
            tool_call = m.tool_calls[0]["args"]
        elif isinstance(m, ToolMessage):
            continue
        elif tool_call is not None:
            other_msgs.append(m)
    return [SystemMessage(content=prompt_system.format(reqs=tool_call))] + other_msgs


def prompt_gen_chain(state):
    messages = get_prompt_messages(state["messages"])
    response = llm.invoke(messages)
    return {"messages": [response]}

API Reference: AIMessage | HumanMessage | ToolMessage
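
To see how the filter behaves without calling the model, here is an illustrative run over a hand-built history (the message contents and tool-call id are made up for the example):

# Illustrative only: trace get_prompt_messages over a synthetic history.
history = [
    HumanMessage(content="rag prompt"),  # dropped: precedes the tool call
    AIMessage(
        content="",
        tool_calls=[{"name": "PromptInstructions", "args": {"objective": "rag"}, "id": "call_1"}],
    ),
    ToolMessage(content="Prompt generated!", tool_call_id="call_1"),  # skipped
    HumanMessage(content="make it shorter"),  # kept: follows the tool call
]
for m in get_prompt_messages(history):
    # Expect the SystemMessage with the captured args, then "make it shorter".
    print(type(m).__name__, "->", m.content)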

Define the state logic

This is the logic for determining what state the chatbot is in. If the last message is a tool call, we are in the state where the "prompt creator" (prompt) should respond. Otherwise, if the last message is not a human message, we know a human should respond next, so we are in the END state. If the last message is a human message, then we are in the prompt state if there was a tool call earlier; otherwise, we are in the "information gathering" (info) state.

from typing import Literal

from langgraph.graph import END


def get_state(state):
    messages = state["messages"]
    if isinstance(messages[-1], AIMessage) and messages[-1].tool_calls:
        return "add_tool_message"
    elif not isinstance(messages[-1], HumanMessage):
        return END
    return "info"

Create the graph

We can now create the graph. We will use a MemorySaver checkpointer to persist the conversation history.

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from typing import Annotated
from typing_extensions import TypedDict


class State(TypedDict):
    messages: Annotated[list, add_messages]


memory = MemorySaver()
workflow = StateGraph(State)
workflow.add_node("info", info_chain)
workflow.add_node("prompt", prompt_gen_chain)


@workflow.add_node
def add_tool_message(state: State):
    return {
        "messages": [
            ToolMessage(
                content="Prompt generated!",
                tool_call_id=state["messages"][-1].tool_calls[0]["id"],
            )
        ]
    }


workflow.add_conditional_edges("info", get_state, ["add_tool_message", "info", END])
workflow.add_edge("add_tool_message", "prompt")
workflow.add_edge("prompt", END)
workflow.add_edge(START, "info")
graph = workflow.compile(checkpointer=memory)
from IPython.display import Image, display

display(Image(graph.get_graph().draw_mermaid_png()))
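
If PNG rendering fails in your environment (it depends on a Mermaid renderer), printing the Mermaid source is a dependency-light fallback:

# Fallback: print the Mermaid source instead of rendering a PNG.
print(graph.get_graph().draw_mermaid())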

Use the graph

We can now use the chatbot we created.

import uuid

cached_human_responses = ["hi!", "rag prompt", "1 rag, 2 none, 3 no, 4 no", "red", "q"]
cached_response_index = 0
config = {"configurable": {"thread_id": str(uuid.uuid4())}}
while True:
    try:
        user = input("User (q/Q to quit): ")
    except:
        # input() raises when stdin is unavailable (e.g. in a non-interactive
        # run); fall back to the scripted responses defined above.
        user = cached_human_responses[cached_response_index]
        cached_response_index += 1
    print(f"User (q/Q to quit): {user}")
    if user in {"q", "Q"}:
        print("AI: Byebye")
        break
    output = None
    for output in graph.stream(
        {"messages": [HumanMessage(content=user)]}, config=config, stream_mode="updates"
    ):
        last_message = next(iter(output.values()))["messages"][-1]
        last_message.pretty_print()

    if output and "prompt" in output:
        print("Done!")
User (q/Q to quit): hi!
================================== Ai Message ==================================

Hello! How can I assist you today?
User (q/Q to quit): rag prompt
================================== Ai Message ==================================

Sure! I can help you create a prompt template. To get started, could you please provide me with the following information:

1. What is the objective of the prompt?
2. What variables will be passed into the prompt template?
3. Any constraints for what the output should NOT do?
4. Any requirements that the output MUST adhere to?

Once I have this information, I can assist you in creating the prompt template.
User (q/Q to quit): 1 rag, 2 none, 3 no, 4 no
================================== Ai Message ==================================
Tool Calls:
  PromptInstructions (call_tcz0foifsaGKPdZmsZxNnepl)
 Call ID: call_tcz0foifsaGKPdZmsZxNnepl
  Args:
    objective: rag
    variables: ['none']
    constraints: ['no']
    requirements: ['no']
================================= Tool Message =================================

Prompt generated!
================================== Ai Message ==================================

Please write a response using the RAG (Red, Amber, Green) rating system.
Done!
User (q/Q to quit): red
================================== Ai Message ==================================

Response: The status is RED.
User (q/Q to quit): q
AI: Byebye
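
Because the checkpointer stores state per thread_id, you can inspect (or later resume) the conversation by reusing the same config; a minimal sketch:

# Inspect the persisted conversation for this thread.
snapshot = graph.get_state(config)
print(f"{len(snapshot.values['messages'])} messages checkpointed")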
