CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected #820
Comments
Hi @kunmonster,
Sorry, the command line is at the top of the second picture. Actually, I ran the Docker container in interactive mode, then ran the call_variants line inside the container.
Warning messages from TensorFlow can sometimes be misleading. Could you please try running call_variants while monitoring the GPU load, to confirm whether the GPU is actually being used?
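One way to monitor the GPU load as suggested above is to poll `nvidia-smi` in query mode from Python. This is a minimal sketch, not part of DeepVariant; the helper name `gpu_utilization` is hypothetical, and it assumes `nvidia-smi` is on the PATH (it returns an empty list otherwise):

```python
import shutil
import subprocess


def gpu_utilization():
    """Return per-GPU utilization percentages, or [] if nvidia-smi is unavailable."""
    if shutil.which("nvidia-smi") is None:
        return []
    # --query-gpu / --format are standard nvidia-smi query flags.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [int(line) for line in out.split() if line]


if __name__ == "__main__":
    # Run this in a loop (or `watch nvidia-smi`) while call_variants is running;
    # utilization staying near 0 suggests the GPU is not being used.
    print(gpu_utilization())
```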
Happy to see the reply. Actually, I ran this on an HPC cluster: GPU usage was very low while CPU and memory usage were extremely high. I then ran the same command on my laptop with a GTX 1050 Ti and compared the time to predict one batch. The HPC took longer than my laptop, even though the HPC GPU is faster than a GTX 1050 Ti. So the GPU isn't doing the work. I will post what you asked for later. Thanks!
Did you run it? Could you try the suggestion from this thread?
Yes, I did. I posted the result in the first comment, which shows that in a Python shell the GPU can be identified by TensorFlow.
Actually, I have tried setting that. I think the reason the error occurs may be the value set for it in your code.
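One setting worth ruling out in this situation is `CUDA_VISIBLE_DEVICES`: if it is set to an empty string or a wrong index inside the container, CUDA reports `CUDA_ERROR_NO_DEVICE` even when the hardware is fine. The variable name is standard CUDA; whether DeepVariant overrides it internally is an assumption to verify, and the helper below is only an illustrative sketch:

```python
import os


def cuda_visible_devices():
    """Return the CUDA_VISIBLE_DEVICES value, or None if unset.

    Unset usually means all GPUs are visible; an empty string ("")
    hides every GPU from CUDA, which yields CUDA_ERROR_NO_DEVICE.
    """
    return os.environ.get("CUDA_VISIBLE_DEVICES")


if __name__ == "__main__":
    value = cuda_visible_devices()
    print("CUDA_VISIBLE_DEVICES =", repr(value))
```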
Hi, when I run call_variants, it raises this warning, which means it can't use the GPU. But I can confirm that TensorFlow is able to use the GPU. Here are the screenshots of the warning and of the GPU being detected.