Replies: 2 comments
-
Hey @mraguth, fancy seeing you here again! Diving into another data adventure, I see. Let's crack this one together. Based on the context you've provided, the timeout issue you're experiencing with AzureOpenAI when using the Chain_of_tables feature from the LlamaIndex repository might be due to the size of your dataset: larger tables produce larger prompts and longer-running requests. Here are a couple of approaches you could consider to mitigate this. The first is to increase the request timeout by passing a custom httpx client to AzureOpenAIEmbedding:
from httpx import Client, Timeout
from llama_index.legacy.embeddings.azure_openai import AzureOpenAIEmbedding
# Increase the timeout duration
timeout = Timeout(30.0) # Adjust the timeout as needed
http_client = Client(timeout=timeout)
# Initialize AzureOpenAIEmbedding with the custom http_client
azure_openai_embedding = AzureOpenAIEmbedding(http_client=http_client, ...)
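Another knob worth trying is the embedding batch size. The sketch below assumes your LlamaIndex version accepts the embed_batch_size keyword on AzureOpenAIEmbedding (it is inherited from the base embedding class); smaller batches mean each request to Azure finishes sooner, at the cost of more requests overall.
# Sketch only: reduce how many texts are embedded per request.
# embed_batch_size is assumed to be accepted by your AzureOpenAIEmbedding version.
azure_openai_embedding = AzureOpenAIEmbedding(
    http_client=http_client,
    embed_batch_size=10,
    # plus your usual Azure settings (deployment name, endpoint, api key, api version)
)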
Remember to test these adjustments in a controlled environment to find the optimal configuration that works for your specific dataset and use case. You can find more information about these parameters in the LlamaIndex codebase.
-
Can someone help me with how to add 'embed_batch_size' in either the llm or the embeddings configuration, please? I am using AzureOpenAI.
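For reference, a minimal sketch of how this is usually wired in: embed_batch_size lives on the embedding object (not on the LLM), and that object is then passed into the service context. This assumes the legacy ServiceContext API, to match the llama_index.legacy import used earlier in this thread:
from llama_index.legacy import ServiceContext
from llama_index.legacy.embeddings.azure_openai import AzureOpenAIEmbedding
# Sketch: create the embedding with a smaller batch size, then make the
# pipeline use it via the (legacy) service context.
embed_model = AzureOpenAIEmbedding(
    embed_batch_size=10,  # assumed keyword, inherited from the base embedding class
    # plus your usual Azure settings (deployment name, endpoint, api key, api version)
)
service_context = ServiceContext.from_defaults(embed_model=embed_model)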
-
I am using Chain of Tables on a 200-row dataset (with 5 columns). However, it times out every time when I use AzureOpenAI.
If I use only 20 or 30 rows, it works fine. I also have a fairly large prompt. Is that an issue? Is there any mitigation for this? Please help.
https://github.com/run-llama/llama-hub/blob/main/llama_hub/llama_packs/tables/chain_of_table/chain_of_table.ipynb
I am using this particular function: