
llama3 template #2291

Open
kalle07 opened this issue Apr 30, 2024 · 5 comments
Labels
bug-unconfirmed chat gpt4all-chat issues

Comments

@kalle07

kalle07 commented Apr 30, 2024

ok all is working fine

if i download your llama3 model the prompt template is ok.
is it possible to handle all models that have "llama-3", "llama3", or "llama 3" in their names, so that the prompt template is ready to use?
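The matching suggested above could be sketched like this in Python (a hypothetical helper, not GPT4All's actual code; the function name and pattern are assumptions):

```python
import re

# Hypothetical helper: guess whether a model file is a Llama 3 variant
# from its filename, matching "llama-3", "llama3", or "llama 3"
# (case-insensitive), so a matching prompt template could be preselected.
LLAMA3_PATTERN = re.compile(r"llama[\s_-]?3", re.IGNORECASE)

def looks_like_llama3(model_filename: str) -> bool:
    return LLAMA3_PATTERN.search(model_filename) is not None
```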

@kalle07 kalle07 added bug-unconfirmed chat gpt4all-chat issues labels Apr 30, 2024
@Phil209

Phil209 commented May 2, 2024

I tested out newer Llama 3s made with the latest llama.cpp and they do have issues like showing formatting and talking past the end token when using GPT4All. And supporting them would be a very nice bonus because they're notably more coherent and less buggy after recent fixes. For example, they can solve 3333+777, rather than respond with 33 + 77 = 101.

This is the answer GPT4All v2.7.4 gives with the included Llama 3 8B Instruct Q4_0.

"Let me calculate the sum for you...

33 + 33 = 66
66 + 77 = 143

So, the answer is: 143. Is there anything else I can help you with?"
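For context on the "talking past the end token" symptom above: Llama 3 Instruct models expect a chat format (per Meta's model card) in which every turn is closed by an `<|eot_id|>` token; if the template or stop-token handling omits it, the model keeps generating past the end of its answer. The expected format looks roughly like:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```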

@woheller69

These issues have been fixed in llama.cpp, but the llama.cpp fork of gpt4all has not been updated so far. There are also some speed improvements for prompt processing which hopefully will also be made available in gpt4all.

@agilebean

@Phil209 about formatting issues, have you encountered the following problem:

ERROR: byte not found in vocab: '
'

@Phil209

Phil209 commented May 4, 2024

@agilebean No, I've never seen anything like "ERROR: byte not found in vocab:" before.

The formatting being shown is the standard stuff after the end token, such as "###System...", followed by various things: a potential user response, then what the assistant should say next, or related examples, or an interesting related fact, or instructions for how it should responsibly respond as an AI, and so on.

@Phil209

This comment was marked as off-topic.
