☄️ Parallel and distributed training with spaCy and Ray
Updated Jul 31, 2023 · Python
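As described in the spaCy v3 documentation, installing the plugin (`pip install spacy-ray`) adds a `ray` subcommand to the spaCy CLI, so a multi-worker training run is roughly `python -m spacy ray train config.cfg --n-workers 2`; the config name and worker count here are illustrative, not defaults.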
Easily implement parallel and distributed training with the Note machine learning library. The Note.neuralnetwork.tf package includes Llama2, Llama3, Gemma, CLIP, ViT, ConvNeXt, BEiT, Swin Transformer, Segformer, and more; models built with Note are compatible with TensorFlow and can be trained with TensorFlow.
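Because Note's models are TensorFlow-compatible, data-parallel training should follow the standard `tf.distribute` pattern. Below is a minimal sketch assuming a Keras-compatible model; the placeholder model and synthetic data stand in for an actual Note model and a real input pipeline:

```python
import tensorflow as tf

# Data-parallel training across all visible GPUs (one replica on CPU-only machines).
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Placeholder model; a Note-built model would be constructed here instead.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# Synthetic data; replace with a real dataset.
x = tf.random.normal((1024, 32))
y = tf.random.uniform((1024,), maxval=10, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(64)

model.fit(dataset, epochs=2)
```

Keras splits each batch across replicas and averages the gradients automatically, so the training loop itself is unchanged from single-device code.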
Cross-lingual Language Model (XLM) pretraining and Model-Agnostic Meta-Learning (MAML) for fast adaptation of deep networks
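MAML's core idea is a two-level loop: an inner gradient step adapts the parameters to a single task, and an outer step updates the original parameters using the loss measured after adaptation. The sketch below implements the first-order variant (FOMAML) in plain TensorFlow; the task sampler and hyperparameters are illustrative, not this repository's actual setup:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.build((None, 4))
meta_opt = tf.keras.optimizers.SGD(1e-3)
inner_lr = 0.01
loss_fn = tf.keras.losses.MeanSquaredError()

def sample_task():
    # Hypothetical task sampler: support and query sets for one synthetic regression task.
    w = tf.random.normal((4, 1))
    xs, xq = tf.random.normal((8, 4)), tf.random.normal((8, 4))
    return (xs, xs @ w), (xq, xq @ w)

for step in range(100):
    (xs, ys), (xq, yq) = sample_task()
    old_weights = [tf.identity(v) for v in model.trainable_variables]

    # Inner loop: one gradient step on the task's support set.
    with tf.GradientTape() as tape:
        inner_loss = loss_fn(ys, model(xs))
    grads = tape.gradient(inner_loss, model.trainable_variables)
    for v, g in zip(model.trainable_variables, grads):
        v.assign_sub(inner_lr * g)

    # Outer loop: evaluate the adapted parameters on the query set.
    with tf.GradientTape() as tape:
        outer_loss = loss_fn(yq, model(xq))
    meta_grads = tape.gradient(outer_loss, model.trainable_variables)

    # First-order MAML: restore the original weights, then apply the query gradients.
    for v, old in zip(model.trainable_variables, old_weights):
        v.assign(old)
    meta_opt.apply_gradients(zip(meta_grads, model.trainable_variables))
```

Full MAML would also backpropagate through the inner update (a second-order term); the first-order version shown here drops that term, which is a common and much cheaper approximation.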
This repository is a tutorial on training deep neural network models more efficiently. It focuses on two main frameworks: Keras and TensorFlow.
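A common first efficiency lever in both frameworks is an asynchronous `tf.data` input pipeline, so the accelerator is never left waiting on preprocessing. A minimal sketch, assuming the usual Keras workflow (the MNIST dataset and normalization step are illustrative):

```python
import tensorflow as tf

def preprocess(x, y):
    # Illustrative preprocessing: scale pixel values into [0, 1].
    return tf.cast(x, tf.float32) / 255.0, y

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
dataset = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(10_000)
    .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)  # parallel CPU preprocessing
    .batch(128)
    .prefetch(tf.data.AUTOTUNE)  # overlap data preparation with training steps
)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(dataset, epochs=1)
```

`num_parallel_calls` parallelizes the map across CPU cores, and `prefetch` decouples the producer (data pipeline) from the consumer (training step), which together often remove the input bottleneck entirely.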