CUDA
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
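As a minimal illustration of the CUDA programming model, the sketch below launches a vector-add kernel in which each GPU thread computes one output element. It assumes an NVIDIA GPU and the CUDA Toolkit (compile with `nvcc`); error checking is omitted for brevity.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread handles one element; the grid of threads covers the array.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Allocate and fill host buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device buffers and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check one element (1.0 + 2.0).
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %.1f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The `<<<blocks, threads>>>` launch configuration is the core of the model: the same kernel body runs across many threads in parallel, with each thread using its block and thread indices to pick its piece of the data.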
There are 4,891 public repositories matching this topic. A selection:
- A high-throughput and memory-efficient inference and serving engine for LLMs (Python, updated May 19, 2024)
- Build and run Docker containers leveraging NVIDIA GPUs (updated Dec 6, 2023)
- Instant neural graphics primitives: lightning-fast NeRF and more (Cuda, updated Apr 18, 2024)
- kaldi-asr/kaldi, the official location of the Kaldi project (Shell, updated Apr 30, 2024)
- Open3D: A Modern Library for 3D Data Processing (C++, updated May 18, 2024)
- A fast, scalable, high-performance Gradient Boosting on Decision Trees library for ranking, classification, regression, and other machine learning tasks, with interfaces for Python, R, Java, and C++; supports computation on CPU and GPU (Python, updated May 19, 2024)
- Modular ZK (zero-knowledge) backend accelerated by GPU (C++, updated May 18, 2024)
- Containers for machine learning (Python, updated May 17, 2024)
- Go package for computer vision using OpenCV 4 and beyond, with support for DNN, CUDA, and OpenCV Contrib (Go, updated May 14, 2024)
- A flexible framework of neural networks for deep learning (Python, updated Aug 28, 2023)
- OneFlow, a deep learning framework designed to be user-friendly, scalable, and efficient (C++, updated May 15, 2024)
- Samples for CUDA developers demonstrating features in the CUDA Toolkit (C, updated May 6, 2024)
- [ARCHIVED] The C++ parallel algorithms library; see https://github.com/NVIDIA/cccl (C++, updated Feb 8, 2024)
- CUDA Templates for Linear Algebra Subroutines (C++, updated May 16, 2024)
- Created by: Nvidia
- Released: June 23, 2007
- Followers: 199
- Website: developer.nvidia.com/cuda-zone
- Wikipedia