Add a variant of this OpenAI cookbook example to pro-actively throttle API requests. The current Langroid `chat` and `achat` methods already do the usual retry with exponential backoff, but this approach is "blind" in the sense that there is no pro-active throttling attempt. As a result, large batch jobs (e.g. using `run_batch_tasks`) can waste too much time hitting rate limits and retrying. A pro-active throttling approach like the one in the script below should work better:

https://github.com/openai/openai-cookbook/blob/main/examples/api_request_parallel_processor.py