
# 🍮 🦙 Flan-Alpaca: Instruction Tuning from Humans and Machines

📣 We developed Flacuna by fine-tuning Vicuna-13B on the Flan collection. Flacuna is better than Vicuna at problem-solving. Access the model at https://huggingface.co/declare-lab/flacuna-13b-v1.0.

📣 FLAN-T5 is also useful in text-to-audio generation. Find our work at https://github.com/declare-lab/tango if you are interested.

This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as Flan-T5. We have a live interactive demo thanks to Joao Gante! We are also benchmarking many instruction-tuned models at declare-lab/flan-eval. Our pretrained models are fully available on HuggingFace 🤗:

| Model | Parameters | Instruction Data | Training GPUs |
|---|---|---|---|
| Flan-Alpaca-Base | 220M | Flan, Alpaca | 1x A6000 |
| Flan-Alpaca-Large | 770M | Flan, Alpaca | 1x A6000 |
| Flan-Alpaca-XL | 3B | Flan, Alpaca | 1x A6000 |
| Flan-Alpaca-XXL | 11B | Flan, Alpaca | 4x A6000 (FSDP) |
| Flan-GPT4All-XL | 3B | Flan, GPT4All | 1x A6000 |
| Flan-ShareGPT-XL | 3B | Flan, ShareGPT/Vicuna | 1x A6000 |
| Flan-Alpaca-GPT4-XL\* | 3B | Flan, GPT4-Alpaca | 1x A6000 |

\*recommended for better performance

## Why?

Alpaca represents an exciting new direction to approximate the performance of large language models (LLMs) like ChatGPT cheaply and easily. Concretely, its authors leverage an LLM such as GPT-3 to generate instructions as synthetic training data. The synthetic data, which covers more than 50k tasks, can then be used to finetune a smaller model. However, the original implementation is less accessible due to licensing constraints of the underlying LLaMA model. Furthermore, users have noted potential noise in the synthetic dataset. Hence, it may be better to explore a fully accessible model that is already trained on high-quality (but less diverse) instructions, such as Flan-T5.
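For context, each synthetic example is a JSON record with instruction, input, and output fields (the field names follow the released Alpaca data; the values below are invented for illustration):

```python
import json

# A made-up record in the Alpaca data format; the released dataset contains
# more than 50k such records, with "input" often left empty.
alpaca_record = {
    "instruction": "Give three tips for staying healthy.",
    "input": "",
    "output": "1. Eat a balanced diet.\n2. Exercise regularly.\n3. Get enough sleep.",
}

# The dataset is distributed as JSON, so records round-trip losslessly.
serialized = json.dumps(alpaca_record)
assert json.loads(serialized) == alpaca_record
```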

## Usage

```python
from transformers import pipeline

prompt = "Write an email about an alpaca that likes flan"
model = pipeline(model="declare-lab/flan-alpaca-gpt4-xl")
model(prompt, max_length=128, do_sample=True)

# Dear AlpacaFriend,
# My name is Alpaca and I'm 10 years old.
# I'm excited to announce that I'm a big fan of flan!
# We like to eat it as a snack and I believe that it can help with our overall growth.
# I'd love to hear your feedback on this idea.
# Have a great day!
# Best, AL Paca
```

## Setup

Install dependencies and download the data. We used the cleaned data from Alpaca-LoRA for training.

```shell
conda create -n paca python=3.8 -y
conda activate paca
pip install -r requirements.txt
mkdir -p data
wget https://github.com/declare-lab/flan-alpaca/releases/download/v0.1.0/alpaca_data.json -O data/alpaca.json
wget https://github.com/declare-lab/flan-alpaca/releases/download/v0.1.0/alpaca_data_cleaned.json -O data/alpaca_clean.json
wget https://github.com/declare-lab/flan-alpaca/releases/download/v0.1.0/alpaca_gpt4_data.json -O data/alpaca_gpt4.json
```

Preprocess an Alpaca-format training dataset. The example below uses the GPT-4 Alpaca data; substitute `data/alpaca_clean.json` to use the cleaned original Alpaca data:

```shell
python data_loading.py preprocess_alpaca \
--path_in data/alpaca_gpt4.json \
--path_out data/train.json
```
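As a rough sketch of the idea (the repository's `data_loading.py` is the source of truth, and its exact prompt template may differ), preprocessing flattens each Alpaca record into a source/target pair for seq2seq training:

```python
# Hypothetical flattening of Alpaca records into seq2seq pairs; the real
# preprocess_alpaca command in data_loading.py may use a different template.
def to_pairs(records: list[dict]) -> list[dict]:
    pairs = []
    for r in records:
        source = r["instruction"]
        if r.get("input"):  # the input field is optional and often empty
            source += "\n" + r["input"]
        pairs.append({"source": source, "target": r["output"]})
    return pairs

demo = [{"instruction": "Name a dessert.", "input": "", "output": "Flan."}]
print(to_pairs(demo))  # [{'source': 'Name a dessert.', 'target': 'Flan.'}]
```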

If you want to use the GPT4All data, you can use this command:

```shell
python data_loading.py preprocess_gpt4all --path_out data/train.json
```

If you want to use the ShareGPT data, you can use this command:

```shell
wget https://github.com/declare-lab/flan-alpaca/releases/download/v0.1.0/ShareGPT_unfiltered_cleaned_split.json -O data/sharegpt.json
python data_loading.py preprocess_sharegpt --path_out data/train.json
```

## Training

The following command finetunes Flan-T5-XL (roughly 8 hours on a single A6000 GPU):

```shell
python training.py --output_dir outputs/model/xl \
--use_compile \
--train_epochs 3 \
--max_source_length 64 \
--max_target_length 512 \
--data_path data/train.json \
--model_name_or_path "google/flan-t5-xl" \
--train_batch_size 1 \
--gradient_accumulation_steps 64
```
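With `--train_batch_size 1` and `--gradient_accumulation_steps 64`, gradients from 64 micro-batches are accumulated before each optimizer step, so the effective batch size is 64 while peak memory stays that of a single example:

```python
# Effective batch size under gradient accumulation (simple arithmetic,
# independent of this repository's training code).
train_batch_size = 1              # examples per forward/backward pass
gradient_accumulation_steps = 64  # passes before one optimizer step
effective_batch_size = train_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 64
```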

If the model does not fit in memory and you have multiple GPUs, you can try fully sharded data parallel (FSDP) by replacing `--use_compile` with `--use_fsdp`.

## Inference

```shell
python inference.py test_model \
--path "outputs/model/xl/epoch=2-step=2439.ckpt" \
--prompt "Write an email about an alpaca that likes flan"
```

## Exporting to HuggingFace Hub

Replace `declare-lab/flan-alpaca-xl` with your desired HuggingFace repo.

```shell
huggingface-cli login

python inference.py export_to_hub \
--path "outputs/model/xl/epoch=2-step=2439.ckpt" \
--repo declare-lab/flan-alpaca-xl
```
