Quantization (q4_k_m gguf) failed for Phi-3 #413
Comments
Working on a fix! Sorry about the issue again! |
I also await this fix! |
Did you fix it? |
Still waiting... |
I have manually saved and quantized the GGUF model following the steps below (in a Windows 11 environment):
I followed the format of the datasets referenced in the notebook to generate nearly 300 training data entries from the user manual of an internal application. After that, I performed fine-tuning. However, neither the fine-tuned LoRA model nor the quantized model seems able to correctly answer the same questions from the dataset. I am still unsure which step might be causing the issue. |
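The manual steps themselves were not preserved in the thread. As an illustrative sketch only (the directory names and script paths are assumptions based on the standard llama.cpp workflow, not the poster's exact commands; older llama.cpp builds name the quantizer `quantize`/`quantize.exe`, newer ones `llama-quantize`), a manual GGUF export on Windows typically builds two commands: one to convert the merged HF model to an f16 GGUF, and one to quantize that file to q4_k_m:

```python
import platform
from pathlib import Path

def llama_cpp_commands(model_dir: str, out_dir: str, llama_cpp_dir: str):
    """Build the two llama.cpp commands for a manual GGUF export.

    Paths are hypothetical; pass them to subprocess.run() to execute.
    """
    # Windows builds of llama.cpp produce quantize.exe rather than quantize.
    exe = "quantize.exe" if platform.system() == "Windows" else "quantize"
    f16_gguf = str(Path(out_dir) / "model-f16.gguf")
    q4_gguf = str(Path(out_dir) / "model-q4_k_m.gguf")
    # Step 1: convert the merged Hugging Face checkpoint to an f16 GGUF.
    convert = [
        "python", str(Path(llama_cpp_dir) / "convert-hf-to-gguf.py"),
        model_dir, "--outfile", f16_gguf, "--outtype", "f16",
    ]
    # Step 2: quantize the f16 GGUF down to q4_k_m.
    quantize = [str(Path(llama_cpp_dir) / exe), f16_gguf, q4_gguf, "q4_k_m"]
    return convert, quantize
```

This only constructs the command lists, so the same sketch works whether the binaries live in a CMake build directory or on `PATH`.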
Apologies everyone! @Li-Yanzhi @win4r @DrewThomasson Whoops I forgot to inform you all that it should be fixed!!! (I actually pushed a fix a few days ago whoops!) Please update Unsloth for local installations:
For Colab and Kaggle, no need, just restart the kernel. Apologies on the delay - hope it works now! |
@danielhanchen I think something strange is happening. Did anyone else experience this? These are some of the logs that are generated: I performed my test on the latest version of Unsloth on a fresh Google Colab instance. |
Probably related to #476 |
Thanks @danielhanchen, I can run the Phi-3 notebook successfully in Colab now. BTW: when I run the same code on my Windows 11 machine, there are some filename issues (e.g. quantize vs quantize.exe) in save.py; after manually editing these, I can run the code on my Windows laptop too. The only question left is that my own training dataset does not seem to be well trained into the LoRA model, and the model cannot correctly answer the questions in the dataset. I will try to figure this out... |
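The quantize vs quantize.exe mismatch can be handled generically instead of hand-editing. A minimal sketch (illustrative only, not Unsloth's actual save.py code) that resolves the binary name per platform, also trying the newer `llama-quantize` name:

```python
import os
import platform

def resolve_quantize_binary(build_dir: str) -> str:
    """Return the path of a llama.cpp quantize binary in build_dir.

    Tries platform-specific names first on Windows (.exe suffix),
    then the bare names. Illustrative helper, not Unsloth code.
    """
    candidates = ["quantize", "llama-quantize"]
    if platform.system() == "Windows":
        candidates = [c + ".exe" for c in candidates] + candidates
    for name in candidates:
        path = os.path.join(build_dir, name)
        if os.path.isfile(path):
            return path
    raise FileNotFoundError(f"no quantize binary found in {build_dir!r}")
```

Centralizing the lookup this way keeps the rest of the save path identical on Windows and Linux.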
I can confirm that GGUF quantisation works now! thanks! 🙏 |
When running Alpaca + Phi-3 3.8b full example.ipynb from https://colab.research.google.com/drive/1NvkBmkHfucGO3Ve9s1NKZvMNlw5p83ym?usp=sharing, in the last step, which saves the quantized model:
the following error occurs:
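For context, the save step in the Unsloth notebooks is typically a call along the lines of `model.save_pretrained_gguf("model", tokenizer, quantization_method="q4_k_m")`, where the method string must name one of llama.cpp's quantization types. A small validator (an illustrative helper, not part of Unsloth; the supported set below is a common subset, an assumption) can catch typos before the slow export starts:

```python
# Common llama.cpp quantization types accepted as quantization_method
# (a subset for illustration; not an exhaustive list).
SUPPORTED_METHODS = {"f16", "q8_0", "q5_k_m", "q4_k_m"}

def check_quant_method(method: str) -> str:
    """Normalize and validate a GGUF quantization method name."""
    normalized = method.lower()
    if normalized not in SUPPORTED_METHODS:
        raise ValueError(f"unsupported quantization method: {method!r}")
    return normalized
```

Failing fast here is cheap compared to discovering a bad method name only after the full f16 conversion has run.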