I've taken the default rag-conversation example (see source code here) and modified the retriever slightly to use Azure AI search. The vector store contains a synthetic dataset filled with data about disturbances in a production factory.
```python
import os
from operator import itemgetter
from typing import List, Tuple

from langchain_community.vectorstores.azuresearch import AzureSearch
from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    format_document,
)
from langchain_core.prompts.prompt import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import (
    RunnableBranch,
    RunnableLambda,
    RunnableParallel,
    RunnablePassthrough,
)

from src.utils import AzureAISearchConfig, AzureOpenAIConfig

# Load configurations
azure_ais_conf = AzureAISearchConfig.from_yaml("/some/config/location")
azure_oai_conf = AzureOpenAIConfig.from_yaml("/some/config/location")

embeddings = AzureOpenAIEmbeddings(
    azure_deployment=azure_oai_conf.embedding_model,
    openai_api_version=azure_oai_conf.api_version,
    azure_endpoint=azure_oai_conf.endpoint,
)

llm = AzureChatOpenAI(
    azure_deployment=azure_oai_conf.chat_model,
    openai_api_version=azure_oai_conf.api_version,
    azure_endpoint=azure_oai_conf.endpoint,
)

vectorstore = AzureSearch(
    azure_search_endpoint=azure_ais_conf.endpoint,
    azure_search_key=os.environ["AZURE_SEARCH_KEY"],
    index_name="langchain-vector-dummy",
    embedding_function=embeddings.embed_query,
)
retriever = vectorstore.as_retriever()

# Condense a chat history and follow-up question into a standalone question
_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""  # noqa: E501
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)

# RAG answer synthesis prompt
template = """Answer the question based only on the following context:
<context>
{context}
</context>"""
ANSWER_PROMPT = ChatPromptTemplate.from_messages(
    [
        ("system", template),
        MessagesPlaceholder(variable_name="chat_history"),
        ("user", "{question}"),
    ]
)

# Conversational Retrieval Chain
DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template="{page_content}")


def _combine_documents(
    docs, document_prompt=DEFAULT_DOCUMENT_PROMPT, document_separator="\n\n"
):
    doc_strings = [format_document(doc, document_prompt) for doc in docs]
    return document_separator.join(doc_strings)


def _format_chat_history(chat_history: List[Tuple[str, str]]) -> List:
    buffer = []
    for human, ai in chat_history:
        buffer.append(HumanMessage(content=human))
        buffer.append(AIMessage(content=ai))
    return buffer


# User input
class ChatHistory(BaseModel):
    chat_history: List[Tuple[str, str]] = Field(..., extra={"widget": {"type": "chat"}})
    question: str


_search_query = RunnableBranch(
    # If input includes chat_history, we condense it with the follow-up question
    (
        RunnableLambda(lambda x: bool(x.get("chat_history"))).with_config(
            run_name="HasChatHistoryCheck"
        ),
        # Condense follow-up question and chat into a standalone_question
        RunnablePassthrough.assign(
            chat_history=lambda x: _format_chat_history(x["chat_history"])
        )
        | CONDENSE_QUESTION_PROMPT
        | llm
        | StrOutputParser(),
    ),
    # Else, we have no chat history, so just pass through the question
    RunnableLambda(itemgetter("question")),
)

_inputs = RunnableParallel(
    {
        "question": lambda x: x["question"],
        "chat_history": lambda x: _format_chat_history(x["chat_history"]),
        "context": _search_query | retriever | _combine_documents,
    }
).with_types(input_type=ChatHistory)

chain = _inputs | ANSWER_PROMPT | llm | StrOutputParser()
```
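For what it's worth, the routing that `_search_query` performs can be sketched in plain Python (the names `route_search_query` and `toy_condense` below are hypothetical stand-ins, not part of the chain): when the input carries a non-empty `chat_history`, the condense step runs; otherwise the question passes through unchanged.

```python
# Plain-Python sketch of the RunnableBranch routing in `_search_query`.
# `route_search_query` and `toy_condense` are hypothetical illustration names.
def route_search_query(inputs: dict, condense) -> str:
    # Branch condition (mirrors HasChatHistoryCheck): non-empty chat_history?
    if bool(inputs.get("chat_history")):
        # Condense follow-up + history into a standalone question
        return condense(inputs)
    # No history: pass the raw question through
    return inputs["question"]

# Toy condenser standing in for CONDENSE_QUESTION_PROMPT | llm | StrOutputParser()
def toy_condense(x):
    return f"standalone({x['question']})"

print(route_search_query({"question": "why?", "chat_history": [("q", "a")]}, toy_condense))
# -> standalone(why?)
print(route_search_query({"question": "why?", "chat_history": []}, toy_condense))
# -> why?
```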
When I run the code with `playground_type="default"` and define the chat history as follows, I get the following output:
When I run the code with `playground_type="chat"`, I get no output after the first question.
I tried to do a further analysis and inspected the network calls. They look slightly different:
The network calls for the default playground look like this:
The network calls for the chat playground look like this:
What could be going on here?