Describe the bug
I am attempting to connect Open Web-UI to Jan's server to use the TensorRT-LLM model (Mistral 7B Instruct v0.1 INT4), but requests to that model do not work as expected. Switching to a standard GGUF model (Meta-Llama-3-8B-Instruct.Q8_0) works correctly through the same connection. When chatting directly inside Jan, both the TensorRT-LLM and GGUF models work as expected.
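To make the failure easier to reproduce, here is a minimal sketch of the kind of request Open Web-UI sends to Jan's OpenAI-compatible endpoint. It assumes Jan's API server is listening on its default address (`http://localhost:1337`, an assumption) and uses a hypothetical model id; substitute the id shown in Jan for the TensorRT-LLM build.

```python
# Minimal reproduction sketch for the Jan server issue.
# Assumptions (not confirmed by this report): Jan's API server listens on
# http://localhost:1337 and exposes an OpenAI-compatible
# /v1/chat/completions endpoint; the model id below is hypothetical.
import json
import urllib.request


def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for Jan's local server."""
    payload = {
        "model": model,  # model id as listed in Jan
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        "http://localhost:1337/v1/chat/completions",  # assumed default port
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    # Hypothetical model id; replace with the TensorRT-LLM model id from Jan.
    req = build_chat_request("mistral-7b-instruct-int4", "Hello")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            body = json.loads(resp.read())
            print(body["choices"][0]["message"]["content"])
    except Exception as exc:
        # With the TensorRT-LLM model selected this call fails;
        # with the GGUF model it succeeds.
        print(f"Request failed: {exc}")
```

Running this with the GGUF model id succeeds, while the TensorRT-LLM model id does not, matching the behavior seen from Open Web-UI.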
Expected behavior
Open Web-UI should be able to send chat requests to Jan's server and receive responses from the TensorRT-LLM model, just as it does with the GGUF model and just as Jan's own chat UI does.
Screenshots
If applicable, add screenshots to help explain your issue.
Environment details
Operating System: Windows 11
Jan Version: 0.4.12
Processor: Intel Core i7
RAM: 64GB
Any additional relevant hardware specifics: RTX 4060
Logs
Additional context