Issues: ggerganov/llama.cpp
#7381: seed values ignored by llama.cpp HTTP server [bug-unconfirmed] (opened May 19, 2024 by mirekphd)
#7377: Why does the server-cuda container consume CPU time? [bug-unconfirmed] (opened May 19, 2024 by wencan)
#7366: llama_model_load: error loading model: unable to allocate backend buffer [bug-unconfirmed] (opened May 18, 2024 by phaelon74)
#7364: Can I handle multiple images in the same context? [bug-unconfirmed] (opened May 18, 2024 by Eriter555)
#7361: [SYCL] include shared libs in sycl release [enhancement] (opened May 18, 2024 by gfody)
#7357: Need help on building shared libraries on Windows machine for Android x86_64 (emulator) (opened May 18, 2024 by cmpktheo)
#7351: [Android/Termux] Significantly higher RAM usage with Vulkan compared to CPU only [bug-unconfirmed] (opened May 17, 2024 by egeoz)
#7344: AMD ROCm: 8x22B Model Causes 100% GPU Utilization Stall [bug-unconfirmed] (opened May 17, 2024 by Trat8547)
#7339: convert.py still fails on llama3 8B-Instruct downloaded directly from Meta (Huggingface works) [bug-unconfirmed] (opened May 17, 2024 by aleloi)
#7333: llava surgery script for new llava-arch model from Intel [bug-unconfirmed] (opened May 16, 2024 by KohakuBlueleaf)
#7309: Add support for multilingual Viking models, please. [enhancement] (opened May 15, 2024 by JohnClaw)
#7306: Possible performance boost with 2-pass online softmax [bug-unconfirmed] (opened May 15, 2024 by zixuanweeei)
#7294: Improve and expand Wikipedia article about llama.cpp [enhancement] (opened May 15, 2024 by fffelix-jan)
#7289: Llama-3 Instruct tokenizer_config.json changes in relation to the currently fetched llama-bpe configs. [enhancement] (opened May 14, 2024 by Spacellary)
#7283: Infinite update_slots issue on latest build (1265c67) [bug-unconfirmed] (opened May 14, 2024 by Leowolf93)
#7277: /embeddings endpoint sometimes does not return embedding [bug-unconfirmed] (opened May 14, 2024 by marcingomulkiewicz)