ncnn is a high-performance neural network inference framework optimized for the mobile platform
FeatherCNN is a high-performance inference engine for convolutional neural networks.
Heterogeneous Run Time version of Caffe. Adds heterogeneous capabilities to Caffe, using a heterogeneous computing infrastructure framework to speed up deep learning on Arm-based heterogeneous embedded platforms. It retains all the features of the original Caffe architecture, so users can deploy their applications seamlessly.
Up to 200x Faster Inner Products and Vector Similarity — for Python, JavaScript, Rust, C, and Swift, supporting f64, f32, f16 real & complex, i8, and binary vectors using SIMD for both x86 AVX2 & AVX-512 and Arm NEON & SVE 📐
Heterogeneous Run Time version of MXNet. Adds heterogeneous capabilities to MXNet, using a heterogeneous computing infrastructure framework to speed up deep learning on Arm-based heterogeneous embedded platforms. It retains all the features of the original MXNet architecture, so users can deploy their applications seamlessly.
A modern C++17 glTF 2.0 library focused on speed, correctness, and usability
Benchmark for embedded-AI deep learning inference engines, such as NCNN / TNN / MNN / TensorFlow Lite, etc.
Heterogeneous Run Time version of TensorFlow. Adds heterogeneous capabilities to TensorFlow, using a heterogeneous computing infrastructure framework to speed up deep learning on Arm-based heterogeneous embedded platforms. It retains all the features of the original TensorFlow architecture, so users can deploy their applications seamlessly.
RV: A Unified Region Vectorizer for LLVM
Single-header, quite fast QOI (Quite OK Image Format) implementation written in C++20
Hardkernel Odroid HC4 Ubuntu 20.04 LTS install tutorial & tool build
MACE is a deep learning inference framework optimized for mobile heterogeneous computing platforms.
Colorful Mandelbrot set renderer in C# + OpenGL + ARM NEON
Pipelined low-level implementation of COLM for ARM-based systems
Simple neural network microkernels in C accelerated with ARMv8.2-a Neon vector intrinsics.