Running in Docker, cannot connect to local Ollama #4461
Hello @zblabs! I'm Dosu, a bot here to help you solve bugs, answer questions, and guide you on becoming a contributor while you wait for a human maintainer. I'm taking a look at your issue now and will get back to you with a full answer shortly.
Dify looks awesome; however, corporate AI governance requires that all AI tools run 100% in-house. I am in the research phase, so I cannot POC anything that cannot run in-house. We are allowed to, and do, contribute to open-source projects.
Try switching the base URL to http://host.docker.internal:11434, which lets the Dify container reach the Ollama service running on your host machine. Ensure that the Ollama service is actively listening on the correct port (Ollama's default is 11434). If after applying this change you still face issues, double-check the port numbers for consistency and ensure that your Docker networking configuration is correctly set up to support this type of communication.
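As a concrete sketch of those checks (assuming a default Ollama install, which listens on port 11434, with Ollama running directly on the Windows host):

```bash
# Confirm Ollama is reachable on the host first (11434 is Ollama's
# default port; /api/tags lists the installed models):
curl http://localhost:11434/api/tags

# Ollama binds to 127.0.0.1 by default, so containers cannot reach it.
# To make it listen on all interfaces, set OLLAMA_HOST before starting
# the server (PowerShell syntax on Windows):
#   $env:OLLAMA_HOST = "0.0.0.0"
#   ollama serve
```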
I have tried http://host.docker.internal:11343 and it still failed. Looking at the logs in Docker, I can see Dify is still trying to use localhost and getting a 400 error. I am running Win11, and Ollama works fine with non-Docker apps. Is there a configuration setting I should be making so that Dify doesn't use localhost internally? I tried the solution from #3912 and it did not resolve the issue.
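One way to isolate whether this is a networking problem or a Dify configuration problem is to run the same request from inside the Dify API container; the container name below is an assumption, so check `docker ps` for the real one:

```bash
# Test host.docker.internal from inside the Dify API container.
# "docker-api-1" is a guess at the container name (verify with `docker ps`),
# and this assumes curl is available in the image:
docker exec -it docker-api-1 curl http://host.docker.internal:11434/api/tags
```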
SOLVED: After updating per #3912 and rebuilding the Docker container, I ran http://localhost/install (on a whim), and it now recognizes my Ollama instance. Thank you for your support!
Dify version
0.6.6
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
Run Dify in Docker
Browse to http://localhost/explore/apps
Select Ollama
Enter:
model name: llama3:latest
base URL: http://localhost:11343 (also tried the PC's IP address, with the firewall opened; see the note after these steps)
Completion mode: chat
Model Context size: 4096
Upper bound for max tokens: 4096
Vision Support: no
Press: Save
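For anyone reproducing this on Linux rather than Docker Desktop: host.docker.internal is not defined there by default, so a sketch like the following (assuming Docker 20.10+) is needed to map it to the host gateway:

```bash
# On Linux, map host.docker.internal to the host gateway manually
# (Docker Desktop on Windows/macOS provides this alias automatically):
docker run --add-host=host.docker.internal:host-gateway ...

# The Docker Compose equivalent is an extra_hosts entry on the services
# that need host access (e.g. Dify's api and worker services):
#   extra_hosts:
#     - "host.docker.internal:host-gateway"
```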
✔️ Expected Behavior
Configuration would be saved
❌ Actual Behavior
Received an error: the configuration could not be saved (per the Docker logs quoted in the comments above, Dify attempted to reach Ollama via localhost inside the container and got a 400 error).