A framework for large scale recommendation algorithms.
Repository for Project Insight: NLP as a Service
[NeurIPS 2023] Michelangelo: Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation
Federated Learning Utilities and Tools for Experimentation
CLIP (Contrastive Language–Image Pre-training) for Italian
I will implement Fastai in each project present in this repository.
Retrieval-based Voice Conversion (RVC) implemented with Hugging Face Transformers.
PyTorch implementation of image captioning using a transformer-based model.
[TMI 2023] XBound-Former: Toward Cross-scale Boundary Modeling in Transformers
This repository contains my 75-Day Hard Generative AI and LLM Learning Challenge.
An ASR (Automatic Speech Recognition) adversarial attack repository.
Image Captioning with Vision Transformers (ViTs): transformer models that generate descriptive captions for images by combining the power of Transformers and computer vision, leveraging state-of-the-art pre-trained ViT models and related techniques.
Symbolic music generation taking inspiration from NLP and the human composition process
Neural Persian Poet: A sequence-to-sequence model for composing Persian poetry
Public repo for the paper: "Modeling Intensification for Sign Language Generation: A Computational Approach" by Mert Inan*, Yang Zhong*, Sabit Hassan*, Lorna Quandt, Malihe Alikhani
This repository contains code and resources for abstractive text summarization (TS) using a novel framework that leverages knowledge-based word sense disambiguation (WSD) and semantic content generalization to enhance the performance of sequence-to-sequence (seq2seq) neural TS models.
CHARacter-awaRE Diffusion: Multilingual Character-Aware Encoders for Font-Aware Diffusers That Can Actually Spell
This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed using classical ML algorithms, a trained LLM model, and a fine-tuned LLM model.
An AutoML framework for explainable text classification.
✨ Solve the multi-dimensional multiple knapsack problem using state-of-the-art Reinforcement Learning algorithms and transformers