
`lmql serve-model llama.cpp:<PATH TO WEIGHTS>.gguf` only works with an absolute path #344

Open
filippobistaffa opened this issue Mar 29, 2024 · 0 comments


It's probably not a "bug", but `<PATH TO WEIGHTS>` in the docs should really be `<ABSOLUTE PATH TO WEIGHTS>`. I tried to run `lmql serve-model llama.cpp:vicuna-13b-v1.5-16k.Q4_K_M.gguf` from the directory containing `vicuna-13b-v1.5-16k.Q4_K_M.gguf`, but got `ValueError: Model path does not exist: vicuna-13b-v1.5-16k.Q4_K_M.gguf`. Providing the full absolute path solves the problem.
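For anyone who hits the same `ValueError` before the docs are updated, a workaround (assuming a shell where `realpath` is available, e.g. from GNU coreutils) is to expand the relative filename right on the command line:

```bash
# Expand the relative .gguf filename to an absolute path before handing it to lmql.
# On systems without realpath, "$(pwd)/vicuna-13b-v1.5-16k.Q4_K_M.gguf" works as well.
lmql serve-model "llama.cpp:$(realpath vicuna-13b-v1.5-16k.Q4_K_M.gguf)"
```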

The reason is that, in my case, the current working directory gets changed to `<root directory of the virtual environment>/lib/python3.11/site-packages` when the `lmql` command runs, so a relative path is resolved against that directory rather than the one the command was launched from. Maybe that's the expected behavior, but explicitly stating in the docs that the path should be absolute would save newcomers some head-scratching :)
