serving llama3 does not work #3876
Comments
Can you try adding a tag to the model parameter (by default it's `latest`)? For example:

```sh
curl $LLAMA_URL -d '{
  "model": "llama3:latest",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
```
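As a quick way to confirm the exact tag, `ollama list` (also used later in this thread) prints the installed models; a minimal check, with nothing assumed beyond the command itself:

```sh
# Print installed models; the NAME column is the exact "model:tag" string
# the "model" field in the request above expects (e.g. llama3:latest).
ollama list
```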
That's weird, that was the first thing I tried and it didn't work. Must have been fixed by an update.
I'm still running into the same issue. I've tried […]. When I run […], I still get the error. I've tried reinstalling […]. This is running on Ubuntu 20.04 with ollama 0.1.32.

EDIT: I solved it by running […]. Not sure what I got wrong here, as I thought doing […] would be equivalent.
@sridvijay this basically solves it, thanks! I've updated the FAQ accordingly in #3936
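For readers hitting the same thing: the exact commands in the comment above were lost, but a plausible reading, given that the Linux package runs the server as a systemd service under its own user, is that a manually started `ollama serve` and the service were reading different model directories. A minimal sketch, assuming the stock Linux install (the `/usr/share/ollama/...` path is the packaged default, not something stated in this thread):

```sh
# Models pulled as your own user land under ~/.ollama/models, while the
# packaged systemd service (user "ollama") looks under
# /usr/share/ollama/.ollama/models by default. Pointing a manually started
# server at an explicit directory via the OLLAMA_MODELS environment
# variable makes both see the same pulls.
OLLAMA_MODELS=/usr/share/ollama/.ollama/models ollama serve
```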
Same error, not solved.
What is the issue?

I am able to run llama 3 (`ollama run llama3`), but when I try to run the server I get […]. This is in spite of `ollama list` detecting the model. Specifically, I ran […]
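The reporter's exact request was lost in the formatting above; a hedged reconstruction based on the first comment in the thread, assuming the default server address and the standard `/api/chat` endpoint, with the untagged model name being the suspected trigger:

```sh
# Same request shape as the comment at the top of the thread, but with the
# untagged model name, which (per that comment) can fail even though
# `ollama list` shows llama3:latest as installed.
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
```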
OS: Linux
GPU: No response
CPU: No response
Ollama version: 0.1.32