FSDP + QDoRA Support #159

Open
iseesaw opened this issue Apr 24, 2024 · 6 comments


@iseesaw

iseesaw commented Apr 24, 2024

Hi team, great work!

QDoRA seems to be better than QLoRA; see Efficient finetuning of Llama 3 with FSDP QDoRA.

I wonder whether there will be a demo / example of finetuning with FSDP + QDoRA?

Thanks!

@MustafaAlahmid

I have done some FSDP training of Mistral 7B with full parameters.

Maybe it's useful for you: here

@iseesaw
Author

iseesaw commented Apr 24, 2024

> I have done some FSDP training of Mistral 7B with full parameters.
>
> Maybe it's useful for you: here

Thanks, good job!

I want to finetune Llama-3-70B with 8x A6000 48GB GPUs, which are not enough for full-parameter training.

FSDP + QDoRA is the method I have found to be feasible and probably the most effective.

@MustafaAlahmid

> Thanks, good job!
>
> I want to finetune Llama-3-70B with 8x A6000 48GB GPUs, which are not enough for full-parameter training.
>
> FSDP + QDoRA is the method I have found to be feasible and probably the most effective.

Yes, it should work.
Try changing the FSDP config file to wrap the Llama decoder layer.
It should be something like this:

ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/fsdp.yaml scripts/run_sft.py recipes/{modelname}/sft/config_qlora.yaml
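In case it helps, here is a minimal sketch of what that fsdp.yaml could look like, with the Llama decoder layer set as the transformer layer class to wrap. The keys follow accelerate's FSDP config options, but the exact names can vary with the accelerate version, and the values below are illustrative rather than taken from this repo:

```yaml
# Sketch of an accelerate FSDP config; values are illustrative, not tuned.
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
mixed_precision: bf16
num_machines: 1
num_processes: 8                      # one process per GPU, e.g. 8x A6000
main_training_function: main
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer   # the "Llama decoder layer" part
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_state_dict_type: SHARDED_STATE_DICT
  fsdp_cpu_ram_efficient_loading: true
  fsdp_sync_module_states: true
  fsdp_offload_params: false
  fsdp_use_orig_params: false
```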

@iseesaw
Author

iseesaw commented Apr 24, 2024

> Yes, it should work. Try changing the FSDP config file to wrap the Llama decoder layer. It should be something like this:
>
> ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/fsdp.yaml scripts/run_sft.py recipes/{modelname}/sft/config_qlora.yaml

I've tried this command and encountered the issue described in huggingface/peft#1674.

Currently, I am following the official example provided in PEFT for further troubleshooting: https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_qlora_fsdp.sh
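For reference, a minimal sketch of what switching that QLoRA setup to QDoRA could look like, i.e. enabling DoRA in the PEFT LoraConfig on top of a 4-bit bitsandbytes model. The model name, rank, and target modules below are illustrative, and whether this runs cleanly under FSDP depends on the peft / transformers versions (see the issue linked above):

```python
# Sketch: QDoRA = 4-bit quantized base model + DoRA adapters via PEFT.
# Illustrative values only; not guaranteed to work under FSDP yet.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.bfloat16,  # keeps quantized weights shardable by FSDP
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B",          # illustrative model
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)

peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
    use_dora=True,                          # DoRA instead of plain LoRA
)
model = get_peft_model(model, peft_config)
```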

@freegheist

FSDP + QDoRA for Zephyr 141B would be really good.

@deep-diver
Contributor

deep-diver commented May 17, 2024

AFAIK, FSDP + QDoRA is not a supported feature in official Hugging Face releases such as transformers, peft, ...
