Prompt Generation from User Requirements¶
In this example, we will create a chatbot that helps a user generate prompts. It will first collect requirements from the user, then generate a prompt (and refine it based on user input). These are split into two separate states, and the LLM decides when to transition between them.
A graphical representation of the system is shown below.
Setup¶
First, let's install the required packages and set our OpenAI API key (OpenAI is the LLM we will use).
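The packages used in this guide are langgraph and langchain_openai (langchain_core and pydantic come in as dependencies). In a notebook environment, an install along these lines should suffice:

%pip install -U langgraph langchain_openai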
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("OPENAI_API_KEY")
Set up LangSmith for LangGraph development

Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph. Read more about how to get started here.
Gather information¶
First, let's define the part of the graph that will gather user requirements. This will be an LLM call with a specific system message. It will have access to a tool that it can call when it is ready to generate the prompt.
Using Pydantic with LangChain

This notebook uses Pydantic v2 BaseModel, which requires langchain-core >= 0.3. Using langchain-core < 0.3 will result in errors due to mixing of Pydantic v1 and v2 BaseModels.
API Reference: SystemMessage | ChatOpenAI
from typing import List
from langchain_core.messages import SystemMessage
from langchain_openai import ChatOpenAI
from pydantic import BaseModel
template = """Your job is to get information from a user about what type of prompt template they want to create.
You should get the following information from them:
- What the objective of the prompt is
- What variables will be passed into the prompt template
- Any constraints for what the output should NOT do
- Any requirements that the output MUST adhere to
If you are not able to discern this info, ask them to clarify! Do not attempt to wildly guess.
After you are able to discern all the information, call the relevant tool."""
def get_messages_info(messages):
    return [SystemMessage(content=template)] + messages


class PromptInstructions(BaseModel):
    """Instructions on how to prompt the LLM."""

    objective: str
    variables: List[str]
    constraints: List[str]
    requirements: List[str]


llm = ChatOpenAI(temperature=0)
llm_with_tool = llm.bind_tools([PromptInstructions])


def info_chain(state):
    messages = get_messages_info(state["messages"])
    response = llm_with_tool.invoke(messages)
    return {"messages": [response]}
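As a quick sanity check, this node can be invoked directly on a toy state. Note that this makes a real API call and assumes OPENAI_API_KEY is set; HumanMessage is imported here because the guide only imports it in the next section:

from langchain_core.messages import HumanMessage

state = {"messages": [HumanMessage(content="hi!")]}
print(info_chain(state)["messages"][-1].content)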
Generate Prompt¶
We now set up the state that will generate the prompt. This requires a separate system message, as well as a function to filter out all messages received prior to the tool call (since that is the point at which the previous state decided it was time to generate the prompt).
API Reference: AIMessage | HumanMessage | ToolMessage
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
# New system prompt
prompt_system = """Based on the following requirements, write a good prompt template:
{reqs}"""
# Function to get the messages for the prompt
# Will only get messages AFTER the tool call
def get_prompt_messages(messages: list):
    tool_call = None
    other_msgs = []
    for m in messages:
        if isinstance(m, AIMessage) and m.tool_calls:
            tool_call = m.tool_calls[0]["args"]
        elif isinstance(m, ToolMessage):
            continue
        elif tool_call is not None:
            other_msgs.append(m)
    return [SystemMessage(content=prompt_system.format(reqs=tool_call))] + other_msgs


def prompt_gen_chain(state):
    messages = get_prompt_messages(state["messages"])
    response = llm.invoke(messages)
    return {"messages": [response]}
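To see the filtering in action, here is a small self-contained check; the messages and the tool-call id are made up for illustration:

demo = [
    HumanMessage(content="make me a rag prompt"),  # before the tool call: dropped
    AIMessage(
        content="",
        tool_calls=[
            {"name": "PromptInstructions", "args": {"objective": "rag"}, "id": "call_1"}
        ],
    ),
    ToolMessage(content="Prompt generated!", tool_call_id="call_1"),  # skipped
    HumanMessage(content="make it shorter"),  # after the tool call: kept
]
msgs = get_prompt_messages(demo)
print([type(m).__name__ for m in msgs])  # ['SystemMessage', 'HumanMessage']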
Define the state logic¶
This is the logic for determining what state the chatbot is in:

- If the last message is a tool call, then we are in the state where the "prompt creator" (prompt) should respond.
- Otherwise, if the last message is not a HumanMessage, then we know the human should respond next, so we are in the END state.
- If the last message is a HumanMessage, then we are in the prompt state if there was a tool call previously; otherwise, we are in the "info gathering" (info) state.
API Reference: END
from typing import Literal
from langgraph.graph import END
def get_state(state) -> Literal["add_tool_message", "info", "__end__"]:
    messages = state["messages"]
    if isinstance(messages[-1], AIMessage) and messages[-1].tool_calls:
        return "add_tool_message"
    elif not isinstance(messages[-1], HumanMessage):
        return END
    return "info"
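A quick illustration of the routing on hypothetical one-message states (an AIMessage carrying tool_calls would instead route to "add_tool_message"):

print(get_state({"messages": [HumanMessage(content="hi")]}))  # 'info'
print(get_state({"messages": [AIMessage(content="Hello!")]}))  # END, i.e. '__end__'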
Create the graph¶
We can now create the graph. We will use a MemorySaver checkpointer to persist the conversation history.
API Reference: MemorySaver | StateGraph | START | add_messages
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from typing import Annotated
from typing_extensions import TypedDict
class State(TypedDict):
    messages: Annotated[list, add_messages]


memory = MemorySaver()

workflow = StateGraph(State)
workflow.add_node("info", info_chain)
workflow.add_node("prompt", prompt_gen_chain)


@workflow.add_node
def add_tool_message(state: State):
    return {
        "messages": [
            ToolMessage(
                content="Prompt generated!",
                tool_call_id=state["messages"][-1].tool_calls[0]["id"],
            )
        ]
    }
workflow.add_conditional_edges("info", get_state, ["add_tool_message", "info", END])
workflow.add_edge("add_tool_message", "prompt")
workflow.add_edge("prompt", END)
workflow.add_edge(START, "info")
graph = workflow.compile(checkpointer=memory)
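If you are running in a notebook, you can render the compiled graph to check the wiring (draw_mermaid_png requires optional rendering dependencies):

from IPython.display import Image, display

display(Image(graph.get_graph().draw_mermaid_png()))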
Use the graph¶
We can now use the chatbot we created. The loop below reads user input interactively and, when input is unavailable (as in automated runs), falls back to a list of cached responses.
import uuid
cached_human_responses = ["hi!", "rag prompt", "1 rag, 2 none, 3 no, 4 no", "red", "q"]
cached_response_index = 0
config = {"configurable": {"thread_id": str(uuid.uuid4())}}
while True:
    try:
        user = input("User (q/Q to quit): ")
    except:
        # No interactive input available (e.g., automated runs): use the cached responses.
        user = cached_human_responses[cached_response_index]
        cached_response_index += 1
    print(f"User (q/Q to quit): {user}")
    if user in {"q", "Q"}:
        print("AI: Byebye")
        break
    output = None
    for output in graph.stream(
        {"messages": [HumanMessage(content=user)]}, config=config, stream_mode="updates"
    ):
        last_message = next(iter(output.values()))["messages"][-1]
        last_message.pretty_print()

    if output and "prompt" in output:
        print("Done!")
User (q/Q to quit): hi!
================================== Ai Message ==================================
Hello! How can I assist you today?
User (q/Q to quit): rag prompt
================================== Ai Message ==================================
Sure! I can help you create a prompt template. To get started, could you please provide me with the following information:
1. What is the objective of the prompt?
2. What variables will be passed into the prompt template?
3. Any constraints for what the output should NOT do?
4. Any requirements that the output MUST adhere to?
Once I have this information, I can assist you in creating the prompt template.
User (q/Q to quit): 1 rag, 2 none, 3 no, 4 no
================================== Ai Message ==================================
Tool Calls:
PromptInstructions (call_tcz0foifsaGKPdZmsZxNnepl)
Call ID: call_tcz0foifsaGKPdZmsZxNnepl
Args:
objective: rag
variables: ['none']
constraints: ['no']
requirements: ['no']
================================= Tool Message =================================
Prompt generated!
================================== Ai Message ==================================
Please write a response using the RAG (Red, Amber, Green) rating system.
Done!
User (q/Q to quit): red
================================== Ai Message ==================================
Response: The status is RED.
User (q/Q to quit): q
AI: Byebye
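Because the graph was compiled with a checkpointer, every turn above was persisted under the thread_id in config. As a minimal sketch, the accumulated state can be inspected after the loop finishes:

snapshot = graph.get_state(config)
print(len(snapshot.values["messages"]))  # all messages accumulated across the conversation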