cuVS - a library for vector search and clustering on the GPU
Updated May 30, 2024 - CUDA
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general-purpose computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
Related repositories tagged CUDA:
- A high-throughput and memory-efficient inference and serving engine for LLMs
- Domain-specific library for electronic structure calculations
- A retargetable MLIR-based machine learning compiler and runtime toolkit
- GPU-accelerated MatchMS
- Easy-to-use C++ computer vision library
- Unified Collective Communication Library
- Convolutional neural network inference library running on CUDA
- Radar simulator built with Python and C++
- Sparse matrix-vector product kernel in C, parallelized using OpenMP and CUDA for computing y←Ax
- Deep learning Docker image
- oneAPI Math Kernel Library (oneMKL) Interfaces
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton
- OpenTerrace: a fast, flexible and extensible Python framework for packed-bed thermal energy storage simulations
- ✨ Zero-code distributed tracing and profiling, observability via eBPF 🚀
CUDA was created by NVIDIA and first released on June 23, 2007.