LLM-Lora-PEFT_accumulate

Welcome to the LLM-Lora-PEFT_accumulate repository!

This repository contains implementations and experiments related to Large Language Models (LLMs) using PEFT (Parameter-Efficient Fine-Tuning), LoRA (Low-Rank Adaptation of Large Language Models), and QLoRA (Quantized LLMs with Low-Rank Adapters).

Loading a model in 8-bit precision can save up to 4x memory compared to a full-precision model.

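As a rough illustration of what 8-bit loading looks like in practice, here is a minimal sketch (not taken from this repository) that assumes the Hugging Face transformers, accelerate, and bitsandbytes packages are installed; the checkpoint name is only an example.

```python
# Minimal 8-bit loading sketch; assumes transformers + accelerate + bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "facebook/opt-1.3b"  # example checkpoint, not specific to this repo

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # store weights in int8
    device_map="auto",            # let accelerate place layers on available devices
    torch_dtype=torch.float16,    # keep compute/activations in fp16
)

print(model.get_memory_footprint())  # rough check of the memory savings
```

Comparing `get_memory_footprint()` for an 8-bit load against a full-precision load of the same checkpoint is a quick way to see the savings for yourself.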

What does PEFT do?

You can easily add adapters on top of a frozen 8-bit model. Because only a small fraction of the parameters is trained, the memory required for the optimizer states drops sharply.

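A minimal sketch of this workflow, assuming the peft library and the 8-bit model loaded above; the rank, alpha, and target module names are illustrative and depend on the base model's architecture. QLoRA follows the same pattern, just with the base model loaded in 4-bit instead of 8-bit.

```python
# Attach LoRA adapters to the frozen 8-bit model loaded above (sketch, peft library).
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = prepare_model_for_kbit_training(model)  # freeze base weights, cast norms/outputs for stability

lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling factor applied to the LoRA update
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],   # example attention projections; varies by model
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters are trainable
```

The small trainable-parameter count reported by `print_trainable_parameters()` is exactly where the optimizer-state memory savings come from.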

Resources

🌐 Websites

📺 YouTube Videos

📄 Papers

🐙 GitHub Repositories

🐍 Python Notebooks

SWOT of LLMs

Go to LLM Analysis with SWOT for more detail.

About

LLM-Lora-PEFT_accumulate explores optimizations for Large Language Models (LLMs) using PEFT, LORA, and QLORA. Contribute experiments and implementations to enhance LLM efficiency. Join discussions and push the boundaries of LLM optimization. Let's make LLMs more efficient together!
