We run the video-inference script with the command:

```shell
python run_net.py \
    --cfg configs/exp01_vidcomposer_full.yaml \
    --input_video "demo_video/blackswan.mp4" \
    --input_text_desc "A black swan swam in the water" \
    --seed 9999
```

We get the following error:

```
File "/root/paddlejob/workspace/lxz/miniconda3/envs/VideoComposer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
File "/root/paddlejob/workspace/lxz/videocomposer/tools/videocomposer/unet_sd.py", line 238, in forward
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op)
File "/root/paddlejob/workspace/lxz/miniconda3/envs/VideoComposer/lib/python3.8/site-packages/xformers/ops.py", line 574, in memory_efficient_attention
    return op.forward_no_grad(
File "/root/paddlejob/workspace/lxz/miniconda3/envs/VideoComposer/lib/python3.8/site-packages/xformers/ops.py", line 189, in forward_no_grad
    return cls.FORWARD_OPERATOR(
File "/root/paddlejob/workspace/lxz/miniconda3/envs/VideoComposer/lib/python3.8/site-packages/torch/_ops.py", line 143, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: CUDA error: no kernel image is available for execution on the device
```
Our torch version is the same as yours: torch==1.12.0+cu113 and torchvision==0.13.0+cu113, built against CUDA 11.3. We use a V100, and `nvidia-smi` reports CUDA version 11.4. We believe our machine's versions are compatible, so we do not know where the problem is.
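As a quick sanity check (a sketch, assuming torch is installed with CUDA support), you can compare the GPU's compute capability against the architectures the installed torch wheel actually ships kernels for; a V100 is compute capability 7.0, i.e. `sm_70`. Note this only inspects torch's own kernels — xformers ships separately compiled kernels, so torch can pass this check while the xformers wheel still lacks `sm_70` binaries:

```python
# Sketch: check whether this torch build ships CUDA kernels for the local GPU.
# A V100 is compute capability 7.0, i.e. "sm_70".
import torch

def check_kernels() -> str:
    if not torch.cuda.is_available():
        return "CUDA not available"
    major, minor = torch.cuda.get_device_capability(0)
    device_arch = f"sm_{major}{minor}"
    compiled = torch.cuda.get_arch_list()  # archs baked into this torch build
    if device_arch in compiled:
        return f"{device_arch} is supported by this torch build"
    return f"{device_arch} NOT in {compiled} -- expect 'no kernel image' errors"

print(check_kernels())
```

If this reports the device architecture as missing, the torch wheel itself is the problem; if it passes, the mismatch is more likely in the xformers wheel.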
I guess there may be two possible reasons: 1) Your GPU memory may not be sufficient — the current model's inference requires 28 GB of GPU memory, so please check your machine. 2) If the torch version is correct, please try recompiling xformers from source. Other researchers have solved this problem that way before.
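For reference, rebuilding xformers from source for a V100 might look like the sketch below. `TORCH_CUDA_ARCH_LIST` is the standard environment variable torch's build extension reads, and `7.0` is the V100's compute capability; the exact build requirements depend on your xformers version, and you need a local CUDA toolkit matching your torch build (cu113 here):

```shell
# Sketch: rebuild xformers from source so its CUDA kernels target the V100
# (compute capability 7.0). Requires a CUDA toolkit matching the torch build.
pip uninstall -y xformers

# Restrict compilation to sm_70 so the build is faster and the V100
# kernels are guaranteed to be present.
export TORCH_CUDA_ARCH_LIST="7.0"

pip install -v git+https://github.com/facebookresearch/xformers.git
```

Installing from the git URL always builds from source, which is what avoids a prebuilt wheel that was compiled without `sm_70` kernel images.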