
Add a CUDA_VISIBLE_DEVICES option or an option to restrict the number of GPUs #549

Closed
mpperez3 opened this issue May 14, 2024 · 0 comments · Fixed by #550
Labels
enhancement New feature or request

Comments

@mpperez3

Describe the need for your request

Currently, the project does not provide a way to limit the number of GPUs or control the GPU resources allocated to the application. This can lead to inefficient resource utilization, especially in environments with multiple GPUs. It can also be problematic in shared environments where multiple programs or processes need to share GPU resources without interference.

Proposed solution

Introduce a configuration option to set the CUDA_VISIBLE_DEVICES environment variable within the application. This will allow users to specify which GPUs should be visible and utilized by the application, thereby controlling the maximum GPU usage. The option can be set through a configuration variable.
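
A minimal sketch of how this could work, assuming a hypothetical configuration variable (here called `GPU_DEVICES`) that the application maps onto `CUDA_VISIBLE_DEVICES` before any CUDA context is created:

```python
import os
from typing import Optional

def apply_gpu_config(gpu_devices: Optional[str]) -> None:
    """Restrict the GPUs visible to this process.

    `gpu_devices` is a hypothetical config value, e.g. "0,1" to expose
    only the first two GPUs, or "" to hide all GPUs. It must be applied
    before any CUDA-using library initializes its CUDA context,
    otherwise the restriction has no effect.
    """
    if gpu_devices is not None:
        os.environ["CUDA_VISIBLE_DEVICES"] = gpu_devices

# Example: read the (hypothetical) setting from the environment or a config file
apply_gpu_config(os.environ.get("GPU_DEVICES", "0"))

import torch  # imported after the restriction so it only sees the allowed GPUs
print(torch.cuda.device_count())  # reports only the devices listed above
```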

Additional context

Implementing this feature will enhance resource management, particularly in multi-GPU setups. It will give users greater control over their GPU resources, improving the efficiency and flexibility of the application. This is especially useful for scenarios with varying task requirements.
