This repository has been archived by the owner on Aug 29, 2023. It is now read-only.

CUDA issue #49

Open
nlienard opened this issue Jun 23, 2023 · 16 comments

nlienard commented Jun 23, 2023

I have CUDA 12.1 installed on Windows 11.

I'm also running roop in a venv.
I'm also running A1111 in a venv.

For refacer, I ran "pip install -r requirements-GPU.txt" successfully, also in a venv.

But when launching, I get the CUDA-related error below: it can't find CUDA. I'm puzzled because I didn't have to do anything special for roop and A1111.

(venv-refacer) C:\Users\xxx\IA\faceswap\refacer>python app.py
Trying FFMPEG h264_nvenc encoder
FFMPEG h264_nvenc encoder works
Video codec for FFMPEG: h264_nvenc
CUDA mode with providers ['CUDAExecutionProvider', 'CPUExecutionProvider']
EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:636 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
 when using ['CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
Traceback (most recent call last):
  File "C:\Users\xxx\IA\faceswap\venv-refacer\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 383, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\xxx\IA\faceswap\venv-refacer\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 435, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:636 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\xxx\IA\faceswap\refacer\app.py", line 17, in <module>
    refacer = Refacer(force_cpu=args.force_cpu,colab_performance=args.colab_performance)
  File "C:\Users\xxx\IA\faceswap\refacer\refacer.py", line 35, in __init__
    self.__init_apps()
  File "C:\Users\xxx\IA\faceswap\refacer\refacer.py", line 84, in __init_apps
    sess_face = rt.InferenceSession(model_path, self.sess_options, providers=self.providers)
  File "C:\Users\xxx\IA\faceswap\venv-refacer\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 394, in __init__
    raise fallback_error from e
  File "C:\Users\xxx\IA\faceswap\venv-refacer\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 389, in __init__
    self._create_inference_session(self._fallback_providers, None)
  File "C:\Users\xxx\IA\faceswap\venv-refacer\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 435, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:636 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.

Yet onnxruntime reports that the CUDA provider is available:

>>> import onnxruntime
>>> print(onnxruntime.get_available_providers())
['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
>>>

Any clue is welcome.

Thanks.


azoksky commented Jun 23, 2023

For this to work smoothly, trust me, you will need CUDA 11.4 and cuDNN 8.2.4.15. Install them separately and add them to the environment PATH. That's the requirement for onnxruntime-gpu. Refacer depends solely on the system CUDA and cuDNN; it's unlike Automatic1111 and roop.

During installation, both Automatic1111 and roop download torch with CUDA and cuDNN bundled, so they don't depend on the CUDA and cuDNN you have on your PC. If you believe they work because of the CUDA 12.1 you installed on the PC, you are wrong: even if you uninstalled it, roop and Automatic1111 would continue to work, because all their dependencies are installed via pip. So please install the versions I mentioned. It doesn't matter what you already have on your system; different CUDA and cuDNN versions can coexist on one PC. Make sure all of them are in the PATH.
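
If it helps, here is a quick way to check whether the DLLs onnxruntime-gpu needs can actually be found through the PATH (a minimal sketch; the DLL names below are the CUDA 11.x / cuDNN 8.x ones and may differ for other versions):

import os

# DLLs the CUDA execution provider typically loads (CUDA 11.x / cuDNN 8.x naming)
needed = ["cudart64_110.dll", "cublas64_11.dll", "cudnn64_8.dll"]
dirs = [d for d in os.environ.get("PATH", "").split(os.pathsep) if d]

for dll in needed:
    hits = [d for d in dirs if os.path.isfile(os.path.join(d, dll))]
    print(dll, "->", hits[0] if hits else "NOT found on PATH")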


nlienard commented Jun 23, 2023

OK, I'll try, but I also have WSL/Ubuntu on my Windows 11 machine, and I run refacer successfully there.
It also uses CUDA 12.1, but the one from the Ubuntu packages.

(.venv-refacer) xxx@NITRO:~/dev/refacer$ python app.py
Trying FFMPEG h264_nvenc encoder
FFMPEG h264_nvenc encoder works
Video codec for FFMPEG: h264_nvenc
CUDA mode with providers ['CUDAExecutionProvider', 'CPUExecutionProvider']
inswapper-shape: [1, 3, 128, 128]
Running on local URL:  http://127.0.0.1:7860/

So it is weird: it works fine with CUDA 12.1 on Ubuntu but not on native Windows 11.


azoksky commented Jun 24, 2023

It all depends on onnxruntime-gpu. On Windows it's not as up to date as the one available for Linux. Roop downloads CUDA 11.8 with cuDNN, so I guess anything greater than 11.8 may not work. The most reliable combination is CUDA 11.4 and cuDNN 8.2.4.15.
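
For what it's worth, you can also confirm which onnxruntime build the venv is actually using (a small sketch; at the time of this thread the Windows onnxruntime-gpu wheels were built against CUDA 11.x and cuDNN 8.x, per the requirements page linked in the error):

import onnxruntime as ort

print(ort.__version__)                # wheel version, which determines the CUDA/cuDNN it was built against
print(ort.get_device())               # 'GPU' for the onnxruntime-gpu build
print(ort.get_available_providers())  # CUDAExecutionProvider should appear here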

nlienard commented Jun 24, 2023

Thanks, I'm downloading CUDA 11.8 to test.

As for cuDNN, I don't currently have any version of it; I wasn't aware it is required.


nlienard commented Jun 24, 2023

Same issue with CUDA 11.8:

(venv-refacer) C:\Users\xxx\IA\faceswap\refacer>echo %CUDA_PATH%
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8

CUDA 11.4 is only available for Windows 10, and I'm running Windows 11.


azoksky commented Jun 24, 2023

My card is a GTX 1650 Ti, and CUDA 11.4 is available for it. So install CUDA 11.4 if available, plus cuDNN 8.2.4.15, and add them to the PATH like below. Take a look at the first 3 entries; you need them.

[screenshot: environment PATH entries]


nlienard commented Jun 24, 2023

OK, I followed your recommendation, but it's still the same issue with CUDA 11.4 and cuDNN 8.2.4:

(venv-refacer) C:\Users\xxx\IA\faceswap\refacer>echo %CUDA_PATH%
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4

(venv-refacer) C:\Users\xxx\IA\faceswap\refacer>python app.py
Trying FFMPEG h264_nvenc encoder
FFMPEG h264_nvenc encoder works
Video codec for FFMPEG: h264_nvenc
CUDA mode with providers ['CUDAExecutionProvider', 'CPUExecutionProvider']
EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:636 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
 when using ['CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
Traceback (most recent call last):
  File "C:\Users\nico\IA\faceswap\venv-refacer\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 383, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\xxx\IA\faceswap\venv-refacer\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 435, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:636 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\xxx\IA\faceswap\refacer\app.py", line 17, in <module>
    refacer = Refacer(force_cpu=args.force_cpu,colab_performance=args.colab_performance)
  File "C:\Users\xxx\IA\faceswap\refacer\refacer.py", line 35, in __init__
    self.__init_apps()
  File "C:\Users\xxx\IA\faceswap\refacer\refacer.py", line 84, in __init_apps
    sess_face = rt.InferenceSession(model_path, self.sess_options, providers=self.providers)
  File "C:\Users\xxx\IA\faceswap\venv-refacer\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 394, in __init__
    raise fallback_error from e
  File "C:\Users\xxx\IA\faceswap\venv-refacer\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 389, in __init__
    self._create_inference_session(self._fallback_providers, None)
  File "C:\Users\xxx\IA\faceswap\venv-refacer\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 435, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:636 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.

[screenshot]

What am I missing?


azoksky commented Jun 24, 2023

You are missing the CUDA 11.4 bin directory and cuDNN in the PATH. Make sure you have all 3 items I marked in the picture below, preferably in the system environment PATH.
[screenshot: the PATH entries in the system environment variables]
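
To double-check what the process launching app.py will actually see, something like this prints the CUDA/cuDNN-related PATH entries (a minimal sketch):

import os

print("CUDA_PATH =", os.environ.get("CUDA_PATH"))
for d in os.environ.get("PATH", "").split(os.pathsep):
    if "CUDA" in d.upper() or "CUDNN" in d.upper():
        print(" ", d, "(exists)" if os.path.isdir(d) else "(MISSING)")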


nlienard commented Jun 24, 2023

Same result:

[screenshot]

For cuDNN, I simply followed the install instructions: I copied cuDNN\bin into CUDA v11.4\bin, cuDNN\include into CUDA v11.4\include, and cuDNN\lib into CUDA v11.4\lib.
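
A quick sanity check on that copy step (a sketch; adjust the path if CUDA is installed elsewhere, and note cudnn64_8.dll is the cuDNN 8.x DLL name):

import os

cuda_bin = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\bin"  # assumed default install location
print(os.path.isfile(os.path.join(cuda_bin, "cudnn64_8.dll")))  # True if the cuDNN DLL was copied next to the CUDA binaries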


nlienard commented Jun 24, 2023

OK, I made a mistake; I'm redoing it properly:

[screenshot]

Still the same issue.


azoksky commented Jun 24, 2023

That's strange. I faced similar issues, but installing the proper CUDA and cuDNN resolved the problem for me. Have you installed cuDNN 8.2.4.15? No other version would work.


nlienard commented Jun 24, 2023

I used the following archive: cudnn-11.4-windows-x64-v8.2.4.15.zip,
so yes, 8.2.4.15.
I copied the folders into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDNN\v8.x

[screenshot]

[screenshot]


nlienard commented Jun 24, 2023

OK, cuDNN is in the PATH, but still the same issue:

(venv-refacer) C:\Users\xxx\IA\faceswap\refacer>python app.py
Trying FFMPEG h264_nvenc encoder
FFMPEG h264_nvenc encoder works
Video codec for FFMPEG: h264_nvenc
CUDA mode with providers ['CUDAExecutionProvider', 'CPUExecutionProvider']
EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:636 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
 when using ['CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
Traceback (most recent call last):
  File "C:\Users\xxx\IA\faceswap\venv-refacer\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 383, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\xxx\IA\faceswap\venv-refacer\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 435, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:636 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.

PATH:

(venv-refacer) C:\Users\xxx\IA\faceswap\refacer>echo %PATH%
C:\Users\xxx\IA\faceswap\venv-refacer\Scripts;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDNN\v8.2.4.15\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\libnvvp;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\libnvvp;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\libnvvp;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;C:\Program Files\NVIDIA Corporation\Nsight Compute 2021.2.0\;C:\Users\xxx\AppData\Local\Microsoft\WindowsApps;C:\Users\xxx\AppData\Local\Programs\Git\cmd;C:\Users\xxx\AppData\Local\Microsoft\WinGet\Links;

[screenshot]


azoksky commented Jun 24, 2023

This is the exact setup I have and it works flawlessly.


nlienard commented Jun 26, 2023

OK, in the end I managed to make it work with the help of someone on Discord (Doofenshmirtz, thanks to him).

He told me that I should install torch and import it before onnxruntime.

So I installed torch and modified refacer.py to add "import torch" before "import onnxruntime".

It means my system CUDA installations are not used (just like for roop).
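
For reference, the change amounts to something like this at the top of refacer.py (a sketch; the idea is that a CUDA-enabled torch wheel bundles the CUDA/cuDNN libraries, and importing torch first makes them loadable for onnxruntime):

# assumption: torch was installed with CUDA support, e.g. pip install torch --index-url https://download.pytorch.org/whl/cu118
import torch          # loads the CUDA/cuDNN libraries bundled with the torch wheel
import onnxruntime    # the CUDAExecutionProvider can now resolve those libraries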

(.venv) C:\Users\xxx\IA\faceswap\refacer>python app.py
Trying FFMPEG h264_nvenc encoder
FFMPEG h264_nvenc encoder works
Video codec for FFMPEG: h264_nvenc
CUDA mode with providers ['CUDAExecutionProvider', 'CPUExecutionProvider']
inswapper-shape: [1, 3, 128, 128]
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.


azoksky commented Jun 26, 2023

Okay, good for you. Mine works right out of the box, without having to install torch.
