
ImportError: cannot import name 'prepare_model_for_int8_training' from 'peft' (/usr/local/lib/python3.10/dist-packages/peft/__init__.py) #508

Closed
Tizzzzy opened this issue May 13, 2024 · 2 comments
Tizzzzy commented May 13, 2024

System Info

  1. python: 3.10.12
  2. nvcc:
     nvcc --version
     nvcc: NVIDIA (R) Cuda compiler driver
     Copyright (c) 2005-2024 NVIDIA Corporation
     Built on Thu_Mar_28_02:18:24_PDT_2024
     Cuda compilation tools, release 12.4, V12.4.131
     Build cuda_12.4.r12.4/compiler.34097967_0
  3. peft: 0.10.0

All other packages are the same as in requirements.txt.

Information

  • The official example scripts
  • My own modified scripts

🐛 Describe the bug

I am new to llama-recipes. I am trying to fine-tune Llama 3 on the Hugging Face dataset "openbookqa". I used this command: python -m llama_recipes.finetuning --dataset "openbookqa" --custom_dataset.file "datasets/openbookqa_dataset.py" --batching_strategy "packing". However, I got this error:

(llama3) root@Dong:/mnt/c/Users/super/OneDrive/Desktop/research/llama-recipes# python -m llama_recipes.finetuning --dataset "openbookqa" --custom_dataset.file "datasets/openbookqa_dataset.py" --batching_strategy "packing"
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.10/dist-packages/llama_recipes/finetuning.py", line 11, in <module>
    from peft import get_peft_model, prepare_model_for_int8_training
ImportError: cannot import name 'prepare_model_for_int8_training' from 'peft' (/usr/local/lib/python3.10/dist-packages/peft/__init__.py)

I followed the README instructions: I git cloned the repo, ran pip install llama-recipes, and also ran pip install -r requirements.txt.

I did some research on this error, and some people said that prepare_model_for_int8_training has been deprecated for quite some time and that, as of PEFT v0.10.0, prepare_model_for_kbit_training should be used instead.

However, if this is the case, I don't know which file I need to change.
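
For reference, this is roughly what I think the replacement call looks like (a minimal sketch based on the PEFT docs, not on the actual llama-recipes code; the model id and LoRA settings below are just placeholders):

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load an 8-bit quantized base model (placeholder model id and settings).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# New name: prepare_model_for_kbit_training replaces prepare_model_for_int8_training.
model = prepare_model_for_kbit_training(model)

# Wrap with a PEFT (LoRA) adapter as before.
peft_config = LoraConfig(r=8, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)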

Error logs

(The full traceback is the same as shown above.)

Expected behavior

I expect to be able to fine-tune Llama 3.

mreso (Contributor) commented May 13, 2024

Hi, it seems like you're using an old llama-recipes version (PyPI releases are sadly lagging behind quite a bit), as we switched to prepare_model_for_kbit_training some time ago:

from peft import get_peft_model, prepare_model_for_kbit_training

Please update llama-recipes from source by running:

git checkout main && git pull && pip install -U .

in the repo main directory.
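
If you want to double-check the environment afterwards, something like this should confirm that the new symbol resolves (just a quick sanity check, not part of llama-recipes):

import peft
# Both names are what the current finetuning.py on main imports.
from peft import get_peft_model, prepare_model_for_kbit_training
print("peft", peft.__version__, "- imports OK")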

mreso (Contributor) commented May 17, 2024

Closing this issue; feel free to reopen if there are more questions. By the way, we just updated the PyPI package.

mreso closed this as completed May 17, 2024