[Request] Let users choose whether a custom Ollama local model supports vision #2493

Open
MarsSovereign opened this issue May 14, 2024 · 2 comments
Labels
🌠 Feature Request New feature or request | 特性与建议

Comments

@MarsSovereign

🥰 Problem description

Local models run through Ollama, such as llava, do support vision. However, LobeChat currently loads every Ollama-served model as not supporting vision by default, and there is no option to enable it manually.
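For context, Ollama's chat API accepts base64-encoded images on a message, which is how llava receives visual input. A minimal sketch (assuming a default local Ollama install with `llava` pulled; the image path is a placeholder):

```ts
// Minimal sketch: send an image to llava through Ollama's /api/chat endpoint.
// Assumes Ollama runs on the default port 11434 and `llava` has been pulled;
// "cat.jpg" is a hypothetical placeholder.
import { readFileSync } from "node:fs";

const image = readFileSync("cat.jpg").toString("base64");

const res = await fetch("http://localhost:11434/api/chat", {
  method: "POST",
  body: JSON.stringify({
    model: "llava",
    stream: false,
    messages: [
      {
        role: "user",
        content: "What is in this picture?",
        images: [image], // base64-encoded images: Ollama's vision input
      },
    ],
  }),
});

const data = await res.json();
console.log(data.message.content);
```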

🧐 Proposed solution

When using an Ollama local model, let the user choose whether to enable the vision capability (the same applies to other capabilities, such as plugins).
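Purely as an illustration of the requested behavior, a per-model capability override might look like the sketch below. The types and field names are invented for this example and are not LobeChat's actual code:

```ts
// Hypothetical per-model capability override for Ollama models.
// None of these names come from the LobeChat codebase; this only
// illustrates the requested toggle behavior.
interface ModelCapabilities {
  vision: boolean;       // accept image inputs
  functionCall: boolean; // allow plugin / tool use
}

interface OllamaModelConfig {
  id: string;                                // e.g. "llava:13b"
  capabilities?: Partial<ModelCapabilities>; // user-set overrides
}

// Defaults stay conservative; user overrides win when present.
const DEFAULTS: ModelCapabilities = { vision: false, functionCall: false };

function resolveCapabilities(model: OllamaModelConfig): ModelCapabilities {
  return { ...DEFAULTS, ...model.capabilities };
}

// Example: the user marks llava as vision-capable.
const llava: OllamaModelConfig = { id: "llava", capabilities: { vision: true } };
console.log(resolveCapabilities(llava)); // { vision: true, functionCall: false }
```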

📝 Additional information

No response

@MarsSovereign MarsSovereign added the 🌠 Feature Request New feature or request | 特性与建议 label May 14, 2024
@lobehubbot
Member

👀 @MarsSovereign

Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.

