Problem
Sometimes, people like me run an OpenAI-compatible server instead of running the model through Jan's Nitro, because a separate server can be better tailored to the available hardware. OpenAI inference partially works when set up like this:
However, inference parameters such as top_k, top_p, etc. are not available, and some parameters, such as temperature, are not preserved. The model name is also incorrect, as shown in the screenshot:
Success Criteria
It would be better to have support for a custom OpenAI server. The list of available models can be queried through the /v1/models endpoint (if available).