
[Feature] Is there a plan to support the xtuner-llava series? #999

Closed
KooSung opened this issue Jan 19, 2024 · 3 comments

KooSung commented Jan 19, 2024

Motivation

Is there a plan to support xtuner-llava series VLMs, e.g. LLaVA-InternLM2-7B (XTuner)?

Related resources

No response

Additional context

No response

lvhan028 (Collaborator) commented:

There is no concrete timeline yet. lmdeploy's turbomind already has interfaces for multimodal inference.
A relatively quick way to get this working might be to integrate lmdeploy on the xtuner side; we are still discussing it internally.

pppppM (Collaborator) commented Feb 27, 2024

This is under development in xtuner, but since the feature is fairly complex, it will still take some time @KooSung
InternLM/xtuner#317

lvhan028 (Collaborator) commented:

Hi @KooSung,

lmdeploy v0.2.6 added inference pipeline and serving support for VLMs such as llava, qwen-vl, and yi-vl.
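For reference, a minimal sketch of the VLM inference pipeline (the model path and image URL below are illustrative placeholders, not endorsements of a specific checkpoint):

```python
# Minimal sketch of VLM inference with the lmdeploy pipeline API.
# The model path and image URL are illustrative placeholders.
from lmdeploy import pipeline
from lmdeploy.vl import load_image

# Build a pipeline from a supported VLM checkpoint (e.g. a llava model).
pipe = pipeline('liuhaotian/llava-v1.5-7b')

# load_image accepts a local path or a URL and returns a PIL image.
image = load_image('https://example.com/demo.jpg')

# Multimodal requests are passed as a (prompt, image) tuple.
response = pipe(('describe this image', image))
print(response)
```

For the serving side, the `lmdeploy serve api_server` CLI wraps the same pipeline; consult the docs of your installed version for the exact flags.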

As for the xtuner-llava series, we think it is more appropriate to add support on the xtuner side. Please follow InternLM/xtuner#317 for details.

Closing this issue for now.
