Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.
Bot detected that the issue body's language is not English; it has been translated automatically.
🥰 Description of requirements
Large local models run by Ollama, such as llava, support vision. However, models loaded through Ollama in LobeChat currently do not support vision by default, and there is no option for users to enable it themselves.
🧐 Solution
When using an Ollama local model, allow users to choose whether to enable vision capabilities (the same applies to other capabilities, such as plugins).
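As a minimal sketch of what such a toggle could look like: each locally served model gets a capability record that defaults to no vision, and the user can override it for models (like llava) known to support images. All names here (`ModelCard`, `defaultOllamaCard`, `withVision`) are illustrative assumptions, not LobeChat's actual API.

```typescript
// Hypothetical sketch of per-model capability flags for Ollama-served models.
// These types and helpers are illustrative only; LobeChat's real model
// configuration may differ.
interface ModelCard {
  id: string;
  displayName: string;
  vision: boolean;       // can the model accept image inputs?
  functionCall: boolean; // can the model use plugins/tools?
}

// Conservative default: assume a local model supports neither capability.
const defaultOllamaCard = (id: string): ModelCard => ({
  id,
  displayName: id,
  vision: false,
  functionCall: false,
});

// User-driven override: enable vision for a model known to support it.
const withVision = (card: ModelCard): ModelCard => ({ ...card, vision: true });

const llava = withVision(defaultOllamaCard('llava'));
console.log(llava.vision); // true
```

The same pattern extends naturally to other capabilities: a `withFunctionCall` override would let users opt a local model into plugin support as well.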
📝 Additional information
No response