[Feature] Implement COG-VLM2 #1622
Comments
@isidentical hi, thanks for your information. We will include cogvlm2 after PR #1502 is merged.
any update?
hi, it's in progress. Any updates will be synced to this issue.
@isidentical @Jayantverma2 hi, guys. CogVLM2 models are supported in PR #1502. If you have time, give it a try. Feel free to leave any comments in the PR. Thanks.
@RunningLeon Is this the correct way to initialize CogVLM2? engine = pipeline(model_path, "cogvlm2", log_level="DEBUG") But when I am running this with this prompt
@Tushar-ml hi, please follow the examples in the doc: https://lmdeploy.readthedocs.io/en/latest/inference/vl_pipeline.html#vlm-offline-inference-pipeline. Prompts should follow the format shown there.
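For readers landing here, a sketch of the prompt shapes that the linked VL-pipeline doc describes: a plain string for text-only, or a (text, image) tuple when an image is attached. The actual lmdeploy calls are left as comments because they need the CogVLM2 weights and a GPU, and the model/image paths are placeholders:

```python
# Sketch of the prompt shapes the lmdeploy VL pipeline accepts,
# per the vl_pipeline doc linked above. The real pipeline calls are
# commented out: they require downloaded weights and a GPU.

# from lmdeploy import pipeline
# from lmdeploy.vl import load_image
#
# pipe = pipeline('/path/to/cogvlm2-llama3-chat-19B')  # placeholder path
# image = load_image('/path/to/example.jpg')           # placeholder path
# response = pipe(('describe this image', image))      # (text, image) tuple
# print(response.text)

def make_vl_prompt(text, image=None):
    """Build a prompt in the shape pipe() expects: a bare string for
    text-only input, or a (text, image) tuple when an image is attached."""
    return text if image is None else (text, image)

print(make_vl_prompt('hello'))
print(make_vl_prompt('describe this image', 'IMG')[0])
```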
@RunningLeon are there any docs on how to run this CogVLM2? As mentioned in the PR, the tokenizer needs to be applied manually
awesome, looking forward to it. I really like lmdeploy because it's much more stable than sglang for these vision models.
@Tushar-ml hi, there is no need to do so for CogVLM2, but you should for CogVLM(1).
@pseudotensor hi, glad to hear that. If possible, please recommend lmdeploy to other people who are interested in deploying LLMs and VLMs. Thanks.
Yes, will gladly do that.
@RunningLeon I am getting OOM on an A40 GPU with 48 GB of VRAM. What is the recommended setup for cogvlm2, as the model is no more than 40 GB in size?
@Tushar-ml hi, could you provide your sample code? Normally, you can reduce the settings shown in Lines 202 to 230 in 5a2aaf1
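The inline snippet referenced above did not survive the page capture, so whether it is exactly what those lines change is not shown here; as a hedged sketch, the usual lmdeploy knob for OOM is `cache_max_entry_count` (a real engine-config field, default 0.8), which caps the fraction of free GPU memory reserved for the KV cache. The pipeline call is commented out since it needs the weights and a GPU, and the model path is a placeholder:

```python
# Hedged sketch: lowering the KV-cache memory fraction, a common
# lmdeploy remedy for OOM. cache_max_entry_count is a real engine
# config field (default 0.8, i.e. 80% of free GPU memory after the
# weights are loaded). The pipeline call is commented out because it
# needs the model weights and a GPU.

kv_cache_fraction = 0.4  # halve the default 0.8 reservation

# from lmdeploy import pipeline, PytorchEngineConfig
# pipe = pipeline('/path/to/cogvlm2-llama3-chat-19B',  # placeholder path
#                 backend_config=PytorchEngineConfig(
#                     cache_max_entry_count=kv_cache_fraction))

print(kv_cache_fraction)
```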
Thanks @RunningLeon, I will try this
@RunningLeon Hi!
root@gpu9:~/data/CogVLM2# python cogvlm_demo.py
my code:
model_path = '/root/data/cogvlm2-llama3-chinese-chat-19B/'
pipe = pipeline(model_path)
image = load_image('/root/data/dataset/misumi_data/images/Misumi000006.jpg')
I look forward to your reply. Thank you
@GuoXu-booo hi, cogvlm is supported in the pytorch engine; can you simply clone the code from the PR and run it?
Motivation
CogVLM2 is currently the state-of-the-art open-source VLM for captioning tasks.
Related resources
No response
Additional context
No response