
Llama-3 Instruct tokenizer_config.json changes in relation to the currently fetched llama-bpe configs. #7289

Labels: enhancement
Spacellary commented May 14, 2024

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new bug or useful enhancement to share.

Question/Conjecture:

I am performing model conversions per the guidelines in this PR, using the fetched llama-bpe configs:

#6920 (comment)

...

The recent convert-hf-to-gguf-update.py script fetches the llama-bpe configs, but these reflect the Base model's configuration, not the Instruct model's.

Within the last week, these settings were changed in the meta-llama/Meta-Llama-3-8B-Instruct repo.

Is this change to the Instruct model's EOS token pertinent to the current conversion process?
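For context, here is a minimal sketch of one way to compare the eos_token setting between the two repos. It assumes huggingface_hub is installed and that you have access to the gated meta-llama repos; the repo and file names come from this issue, everything else is illustrative:

```python
import json

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Both meta-llama repos are gated; a valid HF token with access is assumed
# (e.g. via `huggingface-cli login`).
REPOS = [
    "meta-llama/Meta-Llama-3-8B",           # Base
    "meta-llama/Meta-Llama-3-8B-Instruct",  # Instruct
]

for repo in REPOS:
    path = hf_hub_download(repo_id=repo, filename="tokenizer_config.json")
    with open(path) as f:
        cfg = json.load(f)
    # eos_token is the field that reportedly changed in the Instruct repo.
    print(f"{repo}: eos_token = {cfg.get('eos_token')}")
```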

To add:
I haven't noticed any issues so far using either the Base or the Instruct model configs.
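One way to check which EOS token actually ended up in a converted model is to read the GGUF metadata directly. A minimal sketch, assuming the gguf Python package from llama.cpp's gguf-py and a hypothetical model path; the field-access pattern is an assumption about the reader's layout and may differ between package versions:

```python
from gguf import GGUFReader  # pip install gguf (llama.cpp's gguf-py)

# Hypothetical path to a converted model.
reader = GGUFReader("Meta-Llama-3-8B-Instruct-f16.gguf")

field = reader.fields["tokenizer.ggml.eos_token_id"]
# Scalar metadata values live in field.parts at the indices listed in
# field.data; this access pattern reflects the gguf-py reader as I
# understand it at the time of writing.
eos_id = int(field.parts[field.data[0]][0])

# In the Llama-3 vocab, 128001 is <|end_of_text|> and 128009 is <|eot_id|>.
print("tokenizer.ggml.eos_token_id =", eos_id)
```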
