Issues: ollama/ollama

Issues list

Labels: bug (Something isn't working), feature request (New feature or request), docker (Issues relating to using ollama in containers), model request (Model requests), nvidia (Issues relating to Nvidia GPUs and CUDA), good first issue (Good for newcomers), windows

#4121 Add flag version [feature request] (opened May 3, 2024 by qiulaidongfeng)
#4102 Ollama running in docker with concurrent requests doesn't work [bug] [docker] (opened May 2, 2024 by BBjie)
#4100 Error: do encode request: Post "http://127.0.0.1:39207/tokenize": EOF [bug] (opened May 2, 2024 by j2l)
#4098 Ollama model stuck when executing commands [bug] [docker] (opened May 2, 2024 by rk-spirinova)
#4097 Generation request failing when Ollama server running inside a Docker container [bug] [docker] (opened May 2, 2024 by Deepansharora27)
#4094 Is there a problem with the document? [bug] [windows] (opened May 2, 2024 by ggjk616)
#4093 Model must be reloaded every time when using a local knowledge base [bug] (opened May 2, 2024 by androidsr)
#4092 microsoft/Phi-3-mini-128k-instruct [model request] (opened May 2, 2024 by andsty)
#4088 Colons in hostname cause an error on Windows [bug] [windows] (opened May 1, 2024 by jmorganca)
#4081 PC crash after installing Ollama [bug] [nvidia] [windows] (opened May 1, 2024 by tyseng92)
#4080 Crash loading llama-3-chinese-8b-instruct model [bug] [model request] (opened May 1, 2024 by jiangweiatgithub)
#4079 OLLAMA_PARALLEL splits the max context length [bug] (opened May 1, 2024 by DirtyKnightForVi)
#4075 Invalid file magic while importing llama3 70b into ollama [bug] (opened May 1, 2024 by David20080125)
#4074 Grammar-guided response from model [feature request] (opened May 1, 2024 by NeevJewalkar)
#4072 Ollama should prevent sleep when working [feature request] [good first issue] [windows] (opened May 1, 2024 by owenzhao)
#4069 Support to build llama.cpp with Intel oneMKL [feature request] (opened May 1, 2024 by MarkWard0110)
#4066 Support IPEX-LLM [feature request] (opened Apr 30, 2024 by shawnshi)
#4064 Support DirectML [feature request] (opened Apr 30, 2024 by shawnshi)
#4063 moondream returns no response [bug] (opened Apr 30, 2024 by DuckyBlender)