Resolve garbled output characters #1422
It can understand my question and produce a corresponding output, but some characters in the output are garbled.
I wonder if it is perhaps related to the tokenizer? It could also be a limitation of the terminal outputting certain characters. Unfortunately, I am not super familiar with working with those characters. One thing you could try is perhaps adding a
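To illustrate the tokenizer hypothesis above, here is a minimal, self-contained sketch (this is not litgpt's actual decoding code, and the chunk boundaries are invented for illustration) of how decoding a token stream chunk by chunk can garble multi-byte UTF-8 characters, and how buffering with an incremental decoder avoids it:

```python
import codecs

# Two Chinese characters; each is 3 bytes in UTF-8, 6 bytes total.
text = "你好"
data = text.encode("utf-8")

# Pretend the tokenizer/streaming loop emits bytes in chunks that happen
# to split a character in the middle (hypothetical boundary):
chunks = [data[:4], data[4:]]

# Naive per-chunk decoding: the incomplete trailing bytes of chunk 1 and
# the orphaned continuation bytes of chunk 2 become U+FFFD replacement
# characters, i.e. garbled output.
naive = "".join(c.decode("utf-8", errors="replace") for c in chunks)
print(naive)  # contains '\ufffd' replacement characters

# An incremental decoder buffers incomplete byte sequences across chunks,
# so the full characters are reassembled correctly.
dec = codecs.getincrementaldecoder("utf-8")()
fixed = "".join(dec.decode(c) for c in chunks)
print(fixed)  # 你好
```

If the streaming path in a generation loop decodes each token's bytes independently, this is exactly the kind of garbling one would see for Chinese text.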
Thank you very much for your reply.
Because I've been too busy lately, I've only just started trying this method. I can see those garbled characters in the terminal output, and I'm not sure whether this is related to the tokenizer. To give the model Chinese-language ability, the developers of the Chinese LLaMA repository expanded the tokenizer.
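Since the garbling shows up in the terminal, it is worth ruling out the terminal's own encoding before blaming the tokenizer. A small, litgpt-independent check (the sample string is arbitrary):

```python
import sys
import locale

# If stdout is not UTF-8, Chinese characters will print as mojibake even
# when the model's output bytes are perfectly valid UTF-8.
print("stdout encoding:", sys.stdout.encoding)
print("locale encoding:", locale.getpreferredencoding(False))

# On Python 3.7+, stdout can be switched to UTF-8 explicitly; running the
# script with PYTHONIOENCODING=utf-8 achieves the same thing.
if hasattr(sys.stdout, "reconfigure"):
    sys.stdout.reconfigure(encoding="utf-8")

print("你好，世界")  # should render correctly on a UTF-8 terminal
```

If this sample string prints correctly but the model's output is still garbled, the terminal is likely fine and the problem sits in tokenization or decoding.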
Hello. I want my model to have Chinese-language ability, but full-parameter training for Chinese requires huge data and compute resources, so I used a Chinese LLaMA model trained by another open-source project. Since I find the litgpt project very convenient, I converted that project's model into a lit model. The converted model does produce Chinese output, but some characters in the output text are garbled. How can I fix this garbling? I look forward to your reply. Thank you.
(Appendix: Chinese open-source model GitHub address: https://github.com/LlamaFamily/Llama-Chinese?tab=readme-ov-file
Chinese open-source model Hugging Face address: https://huggingface.co/FlagAlpha/Llama3-Chinese-8B-Instruct/tree/main)