Issues: intel-analytics/ipex-llm
Quantized model loading method expects the model to be locally available (#11268, user issue, opened Jun 7, 2024 by unrahul)
IPEX-LLM with Langchain-chatchat runs into httpcore.RemoteProtocolError on MTL with iGPU (#11259, user issue, opened Jun 7, 2024 by zcwang)
Ubuntu 22.04 MTL 165H benchmark fails with "Aborted (core dumped)" (#11256, user issue, opened Jun 7, 2024 by taotao1-1)
"can NOT allocate memory block with size larger than 4GB" on Arc A770 GPU during inference (#11248, user issue, opened Jun 6, 2024 by Eternal-YMZ)
Error: Failed to load the llama dynamic library; segmentation fault (#11245, user issue, opened Jun 6, 2024 by eugeooi)
vllm-cpu bug: 'Qwen2Attention' object has no attribute 'kv_scale' (#11228, user issue, opened Jun 5, 2024 by bratao)
Support for max_loaded_maps and num_parallel variables/parameters (#11225, user issue, opened Jun 5, 2024 by jars101)
[Feature Request] Provide IPEX-LLM as an executable installer for Windows (#11183, user issue, opened May 31, 2024 by bibekyess)
phi3 medium: garbage output in WebUI or generated by Ollama (#11177, user issue, opened May 30, 2024 by js333031)
transformers 4.38.1 gives bad llama3 performance on MTL iGPU (#11172, user issue, opened May 29, 2024 by Cbaoj)
Evaluation of whether MiniCPM-2B-sft-bf16 needs model-based optimization in ipex-llm (#11163, user issue, opened May 29, 2024 by wluo1007)
all-in-one benchmark llama-3-8b-instruct issue with version 2.1.0b1 (#11147, user issue, opened May 27, 2024 by Fred-cell)