Motivation
Are there plans to support the xtuner-llava series of VLMs, for example LLaVA-InternLM2-7B (XTuner)?
Related resources
No response
Additional context
No response
There is no concrete timeline yet. LMDeploy's TurboMind already has an interface for multimodal inference. A relatively quick way to get there might be to integrate LMDeploy on the xtuner side; we are still discussing this internally.
This is being developed in xtuner, but the feature is fairly complex and will take some more time. @KooSung InternLM/xtuner#317
Hi @KooSung,
lmdeploy v0.2.6 added an inference pipeline and serving for VLMs such as llava, qwen-vl, and yi-vl (see the sketch after this comment).
As for the xtuner-llava series, we think it is more appropriate to add that support on the xtuner side. Please follow InternLM/xtuner#317 for details.
Closing this issue for now.
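A minimal sketch of the VLM pipeline usage mentioned above, assuming lmdeploy >= 0.2.6 is installed; the model ID and image path are illustrative placeholders, not values prescribed by this thread:

```python
# Sketch: VLM inference with the lmdeploy pipeline API (assumes lmdeploy >= 0.2.6).
from lmdeploy import pipeline
from lmdeploy.vl import load_image

# Any VLM supported by lmdeploy should work here; llava-v1.5-7b is just an example.
pipe = pipeline('liuhaotian/llava-v1.5-7b')

# load_image accepts a local path or a URL; './demo.jpg' is a placeholder.
image = load_image('./demo.jpg')

# A (prompt, image) tuple runs a single multimodal query.
response = pipe(('describe this image', image))
print(response)
```

For serving, lmdeploy also provides `lmdeploy serve api_server <model_path>`, which exposes an OpenAI-compatible HTTP endpoint for the same models.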