
In a LangGraph app that uses Azure OpenAI LLMs in multiple nodes, the app streams all LLMs' tokens even when streaming=False is set on some of them. #278

haolxx opened this issue Apr 4, 2024 · 1 comment


haolxx commented Apr 4, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.

Example Code

The issue happens with both streaming approaches:


from langchain_core.messages import HumanMessage

inputs = [HumanMessage(content="what is the weather in sf")]
async for event in app.astream_events(inputs, version="v1"):
    kind = event["event"]
    if kind == "on_chat_model_stream":
        content = event["data"]["chunk"].content
        if content:
            # Empty content in the context of OpenAI means
            # that the model is asking for a tool to be invoked.
            # So we only print non-empty content
            print(content, end="|")
    elif kind == "on_tool_start":
        print("--")
        print(
            f"Starting tool: {event['name']} with inputs: {event['data'].get('input')}"
        )
    elif kind == "on_tool_end":
        print(f"Done tool: {event['name']}")
        print(f"Tool output was: {event['data'].get('output')}")
        print("--")

inputs = {"messages": [HumanMessage(content="what is the weather in sf?")]}

async for output in app.astream_log(inputs, include_types=["llm"]):
    # astream_log() yields the requested logs (here LLMs) in JSONPatch format
    for op in output.ops:
        if op["path"] == "/streamed_output/-":
            # this is the output from .stream()
            ...
        elif op["path"].startswith("/logs/") and op["path"].endswith(
            "/streamed_output/-"
        ):
            # because we chose to only include LLMs, these are LLM tokens
            print(op["value"])
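
For reference, each op yielded here is a JSONPatch-style dictionary; a single streamed chat-model token arrives roughly in the shape below (the run name "AzureChatOpenAI" in the path is an illustrative assumption, not taken from this report):

from langchain_core.messages import AIMessageChunk

# Rough shape of one patch op carrying one token; the run name
# in the path is made up for illustration.
op = {
    "op": "add",
    "path": "/logs/AzureChatOpenAI/streamed_output/-",
    "value": AIMessageChunk(content="The"),
}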

Error Message and Stack Trace (if applicable)

No response

Description

In a LangGraph app that uses Azure OpenAI LLMs in multiple nodes, the app streams all LLMs' tokens even when streaming=False is set on some of them.
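
For context, a minimal sketch of the kind of setup being described; the deployment name, node, and model wiring are assumptions for illustration, not taken from this report (Azure credentials are assumed to come from environment variables):

from langchain_openai import AzureChatOpenAI

# Two LLMs, only one of which is meant to stream.
streaming_llm = AzureChatOpenAI(azure_deployment="gpt-4", streaming=True)
non_streaming_llm = AzureChatOpenAI(azure_deployment="gpt-4", streaming=False)

def summarize_node(state):
    # This node uses the LLM configured with streaming=False, yet its
    # tokens still surface chunk-by-chunk through astream_events().
    response = non_streaming_llm.invoke(state["messages"])
    return {"messages": [response]}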

System Info

python 3.11

langgraph = "^0.0.30"
langchain = "^0.1.4"
langchain-openai = "^0.1.1"


nfcampos commented May 1, 2024

Hi, I'm not sure what the question is here. If you call your graph with astream_events, we try to stream output from everything called inside it. Were you expecting to call astream_events but not get streaming output?
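
Given that behavior, a common workaround (not stated in this thread, but consistent with the astream_events API) is to tag the models whose tokens you don't want surfaced and filter on the event's tags; the tag name "no_stream" here is an assumption for the sketch, and non_streaming_llm / app / inputs refer to the earlier snippets:

# Tag the model whose tokens should not be surfaced.
quiet_llm = non_streaming_llm.with_config(tags=["no_stream"])

async for event in app.astream_events(inputs, version="v1"):
    if event["event"] == "on_chat_model_stream":
        # Skip chunks emitted by models tagged "no_stream".
        if "no_stream" in event.get("tags", []):
            continue
        content = event["data"]["chunk"].content
        if content:
            print(content, end="|")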
