MoE-LLaVA: Mixture of Experts for Large Vision-Language Models

Github: https://github.com/PKU-YuanGroup/MoE-LLaVA
Paper: https://arxiv.org/abs/2401.15947
Demo: https://huggingface.co/spaces/LanguageBind/MoE-LLaVA

With only 3B sparsely activated parameters, MoE-LLaVA performs on par with LLaVA-1.5-7B across a range of visual understanding datasets, and even surpasses LLaVA-1.5-13B on the object hallucination benchmark. With MoE-LLaVA, the authors aim to establish a baseline for sparse LVLMs and to offer valuable insights for future research toward more efficient and effective multimodal learning systems. The MoE-LLaVA team has also released all of the data, code, and models.
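The "3B sparsely activated parameters" above refers to the standard Mixture-of-Experts idea: a router picks only the top-k experts per token, so most expert parameters stay idle on any given forward pass. The sketch below is a minimal, generic illustration of top-k MoE routing in NumPy, not the actual MoE-LLaVA implementation; all names and shapes here are hypothetical.

```python
import numpy as np

def topk_moe_forward(x, gate_w, experts, k=2):
    """Route one token to its top-k experts and mix their outputs.

    x:       (d,) token hidden state
    gate_w:  (num_experts, d) router weight matrix (hypothetical)
    experts: list of callables, each mapping (d,) -> (d,)
    """
    logits = gate_w @ x                      # one router score per expert
    top = np.argsort(logits)[-k:]            # indices of the k best experts
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()                             # softmax over selected experts only
    # Only k experts execute; the rest contribute nothing ("sparse activation").
    return sum(wi * experts[i](x) for wi, i in zip(w, top))

# Toy usage: 4 linear experts, hidden size 8, top-2 routing.
rng = np.random.default_rng(0)
d, num_experts = 8, 4
gate_w = rng.standard_normal((num_experts, d))
experts = [lambda x, W=rng.standard_normal((d, d)): W @ x
           for _ in range(num_experts)]
y = topk_moe_forward(rng.standard_normal(d), gate_w, experts, k=2)
print(y.shape)  # (8,)
```

This is why total parameter count (all experts) can be much larger than the activated parameter count (k experts plus the shared layers), which is the sense in which the paper compares a 3B-activated model against dense 7B and 13B baselines.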
20240126_205845.mp4