Increase connection Timeout #105
Comments
Seconded, since the initial load of a big model into Ollama sometimes times out and you have to re-submit your prompt. Once it's "warmed up" it's fine. Maybe there's an alternative way for the program to check whether Ollama is still running and just taking a long time to respond.
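One way to tell "server is down" apart from "server is busy loading a model" is to probe a lightweight endpoint instead of waiting on the generation request. This is a minimal sketch, assuming Ollama's default local address and its `/api/tags` endpoint (which lists installed models and answers quickly); the function name is hypothetical:

```python
import urllib.request
import urllib.error

def ollama_alive(base_url="http://localhost:11434", timeout=5):
    """Return True if the Ollama server answers a lightweight request.

    Hits /api/tags, which responds quickly even while a large model
    is still being loaded, so a slow generation call can be retried
    instead of treated as a dead server.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

A client could call this when a generation request exceeds its timeout: if the probe succeeds, keep waiting or retry rather than surfacing an error.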
Totally agree on this. The timeout needs to be increased.
This is important for folks with low-end hardware. I agree, it should be in the settings.
It's important even for high-end hardware if you're using a giant model. Sometimes the initial model load times out and you have to resubmit, after which it works.
100%. I'm running a 16-core EPYC as my LLM machine, and it really chugs trying to load Mixtral 8x22B even when loading off NVMe into RAM.
Maybe add a variable in settings to change the default timeout.
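A settings-driven timeout could be as simple as reading an environment variable with a safe fallback. This is a sketch only; the variable name `OLLAMA_REQUEST_TIMEOUT` and the 300-second default are assumptions, not part of any existing configuration:

```python
import os

def request_timeout(default=300.0):
    """Return the request timeout in seconds.

    Reads the hypothetical OLLAMA_REQUEST_TIMEOUT environment variable;
    falls back to `default` when it is unset, non-numeric, or not positive.
    """
    raw = os.environ.get("OLLAMA_REQUEST_TIMEOUT", "")
    try:
        value = float(raw)
    except ValueError:
        return default
    return value if value > 0 else default
```

Users on slow hardware or with very large models could then raise the limit without touching code.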