Model folder missing #166
Comments
I don't see the solar model within the folder containing the other models (the last image), which is very strange. Also, I'm not sure why the 12b model from stabilityai is showing; that was removed from my release (but still left commented out, I believe). Can you try closing the program, restarting, and sending me a screenshot of what the Models tab looks like? It should not show solar as downloaded unless there's a specific folder for it in the "Models" folder; it should say "no" and allow you to download it. Also, can you try some of the other chat models to see if they're working? Lastly, I noticed you're getting a pynvml error... what graphics card are you using?
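The "downloaded" check described above (a model counts as downloaded only if it has its own subfolder inside "Models") can be sketched roughly like this. The function name and folder layout are assumptions for illustration, not the project's actual code:

```python
import os
import tempfile
from pathlib import Path

def model_is_downloaded(models_dir: str, model_name: str) -> bool:
    """Treat a model as downloaded only if its own subfolder exists.

    Hypothetical helper -- the real project's folder layout may differ.
    """
    return (Path(models_dir) / model_name).is_dir()

# With an empty "Models" folder, solar should NOT report as downloaded.
with tempfile.TemporaryDirectory() as models:
    print(model_is_downloaded(models, "solar-instruct"))  # False
    os.mkdir(os.path.join(models, "solar-instruct"))
    print(model_is_downloaded(models, "solar-instruct"))  # True
```

If the UI says "downloaded" while no such subfolder exists, the status shown to the user and the check against the filesystem have gotten out of sync, which matches the behavior reported below.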
Yep, unchecking "chunks only" wouldn't change the VRAM usage; the model would still be loaded. But it's SUPPOSED to automatically remove the "local" model when you choose the "use LM Studio" radio button. With that said, check out the release page; it shows that dolphin uses 9.2 GB. Also, my program doesn't yet have the ability to use multiple GPUs. I'm seriously considering switching to llama-cpp, which allows one to offload part of the model to the GPU and part to system RAM, but I wanted to get this release out ASAP. Let me know if reloading and restarting allows you to download the solar model, please.
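The GPU/RAM split mentioned above boils down to deciding how many transformer layers fit in free VRAM and leaving the rest in system RAM. A minimal sketch of that arithmetic, with illustrative numbers that are assumptions rather than figures from the release page:

```python
def layers_to_offload(free_vram_gb: float, layer_size_gb: float,
                      total_layers: int, reserve_gb: float = 1.0) -> int:
    """Estimate how many layers fit on the GPU; the rest stay in system RAM.

    Hypothetical helper: real loaders (e.g. llama-cpp-python's n_gpu_layers
    parameter) take a count like this directly, but size the layers from the
    model file itself rather than from a rough per-layer estimate.
    """
    budget = max(free_vram_gb - reserve_gb, 0.0)  # leave headroom for KV cache etc.
    return min(total_layers, int(budget // layer_size_gb))

# With ~9.2 GB free and ~0.5 GB per layer, roughly 16 of 40 layers fit:
print(layers_to_offload(9.2, 0.5, 40))   # 16
# A 24 GB card fits the whole (hypothetical) 40-layer model:
print(layers_to_offload(24.0, 0.5, 40))  # 40
```

This is why llama-cpp is attractive for cards that can't hold the full model: anything the budget doesn't cover simply runs on the CPU from system RAM instead of failing to load.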
No, I can't download the solar model after restarting.
GPU 0 (RTX) is used by default.
Does your VRAM usage roughly match what my release page says it should be for the various models? Are you still unable to download the SOLAR model? Any more details?
Feel free to reopen if this issue persists.
When I query with "chunks only" checked, it works, but when I uncheck it, a warning pops up (see the first image). The Solar Instruct folder is missing, but the Models tab says it's already downloaded.
After the error warning, I can't click the "Submit Question" button.