When will you release the version supporting llava-llama3-70b?
Meanwhile, will you consider supporting unofficial variants, such as using llama3-120b as the LLM?
Hugging Face link: mlabonne/Meta-Llama-3-120B-Instruct
Module optimization
I think it's very important to enhance the visual encoder, so how can I swap out the visual encoder rather than being limited to CLIP ViT? Other CLIP variants or a Mamba-based visual encoder might work better.
Or can I just add a personal adapter (a simple MLP may not be a good idea) on top of the visual encoder and fine-tune it on my own datasets? Looking forward to guidelines and hook methods.
I really want to try out my own ideas!
Looking forward to your reply, thanks!
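For reference, the MLP-style adapter mentioned above could be prototyped along these lines. This is only a sketch in plain PyTorch; the class name and dimensions are illustrative assumptions, not the project's actual config or API:

```python
# Minimal sketch of a two-layer MLP projector (the style LLaVA-1.5 uses)
# that maps visual-encoder patch features into the LLM embedding space.
# VisualProjector and the dims (1024 -> 4096) are illustrative assumptions.
import torch
import torch.nn as nn


class VisualProjector(nn.Module):
    def __init__(self, vision_dim: int, llm_dim: int):
        super().__init__()
        # Two linears with a GELU in between; a custom adapter idea
        # (gating, cross-attention, etc.) would replace this Sequential.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vision_dim)
        return self.proj(patch_features)


projector = VisualProjector(vision_dim=1024, llm_dim=4096)
tokens = projector(torch.randn(2, 576, 1024))
print(tokens.shape)  # torch.Size([2, 576, 4096])
```

To experiment, you would typically freeze the vision encoder and train only a module like this on your own data.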
llava-llama3-70b: we will support this in the near future, but for 120b we don't have that much computing power.
Regarding the plan to support swapping model components, we are already working on it. We hope to complete the refactoring by the end of this month.
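In the meantime, swapping the vision tower can be prototyped by wrapping the backbone behind a small interface, roughly like this. Everything here (`VisionTower`, `DummyBackbone`) is a hypothetical sketch, not the project's real API; the dummy backbone stands in for any real encoder such as a different CLIP or a Mamba-based model:

```python
# Sketch: making the vision tower pluggable so CLIP-ViT can be swapped
# for another backbone. All names here are illustrative assumptions.
import torch
import torch.nn as nn


class VisionTower(nn.Module):
    """Wraps any backbone that maps images -> (batch, num_patches, dim)."""

    def __init__(self, backbone: nn.Module, out_dim: int):
        super().__init__()
        self.backbone = backbone
        self.out_dim = out_dim

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(images)
        assert feats.shape[-1] == self.out_dim, "backbone feature dim mismatch"
        return feats


class DummyBackbone(nn.Module):
    """Stand-in patch embedder; replace with a real CLIP or Mamba encoder."""

    def __init__(self, dim: int = 1024, patch: int = 14):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        x = self.embed(images)               # (B, dim, H/patch, W/patch)
        return x.flatten(2).transpose(1, 2)  # (B, num_patches, dim)


tower = VisionTower(DummyBackbone(dim=1024), out_dim=1024)
feats = tower(torch.randn(1, 3, 336, 336))
print(feats.shape)  # torch.Size([1, 576, 1024])
```

As long as a replacement encoder produces features with the same `(batch, num_patches, dim)` contract, the downstream projector and LLM do not need to change.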
Hope you can provide detailed guides on customizing these components later. Or will the VLM part of the InternLM camp offer an advanced lesson for LLM engineers?
It would be great if someone posted it on Zhihu, Bilibili, or elsewhere; it would have an excellent influence on the MM-LLM community, since everyone wants to get better at building their own models.
I really can't wait!
Thanks!