
Delete models installed from Ollama on my Mac to free up space #4122

Closed
ISK-VAGR opened this issue May 3, 2024 · 6 comments


ISK-VAGR commented May 3, 2024

Hi,

I installed two Llama models using "ollama run" in the terminal. They occupy significant disk space, and I need to free space to install a different model.

I tried the ollama rm command, but it only deletes the file in the manifests folder, which is KBs in size. I also tried deleting those files manually, but again they are only KBs, not the GBs of the real models.

I need a solution to delete the big files from my system.

Any clues?

Any help will be appreciated

ISK-VAGR added the model request label May 3, 2024

ISK-VAGR commented May 3, 2024

Hi,

Already solved. The model weights are saved in blob files; on macOS, they turn out to be accessible when you run this in the terminal:

open ~/.ollama/models/blobs

There are the bloody files occupying space on your computer. ;-)
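
If you want to see how much space those blobs actually take before touching anything, here is a minimal Python sketch (not from the thread; it assumes the default location above, so adjust the path if you have set OLLAMA_MODELS to a custom directory):

from pathlib import Path

# Minimal sketch: count the blob files and report their total size.
blobs = Path.home() / ".ollama" / "models" / "blobs"
files = [f for f in blobs.iterdir() if f.is_file()]
total_gb = sum(f.stat().st_size for f in files) / 1e9
print(f"{len(files)} blobs, {total_gb:.1f} GB total")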

sealad886 commented

Several utilities exist to help manage your cache. Manually deleting files from the cache is not generally recommended, as you could irreversibly corrupt it.

From the command line, you can remove a model one at a time using:

ollama rm <model name>

For example:

> ollama list
NAME                           	ID          	SIZE  	MODIFIED     
codegemma:7b-code-fp16         	211627025485	17 GB 	2 days ago  	
codegemma:7b-instruct-fp16     	27f776c137a0	17 GB 	2 days ago  	
codellama:70b-code-q2_K        	a971fcfd33e2	25 GB 	2 days ago  	
codellama:latest               	8fdf8f752f6e	3.8 GB	10 days ago 	
command-r:latest               	b8cdfff0263c	20 GB 	4 weeks ago 

> ollama rm codellama
deleted 'codellama'
> ollama rm codellama:70b-code-q2_K 
deleted 'codellama:70b-code-q2_K'
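
If you have many models to clear out, you can also script this with the ollama-python client mentioned below. A hedged sketch, assuming a recent client where ollama.list() returns an object with a .models attribute (older versions returned a plain dict instead):

import ollama

# Destructive: removes every locally installed model.
for m in ollama.list().models:
    print(f"removing {m.model}")
    ollama.delete(m.model)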

In Python, if you have installed ollama-python:

import ollama
status = ollama.delete('codellama')  # status is either 'success' or 'error'
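
One caveat (my assumption about the client, not stated above): if no such model is installed, the client raises ollama.ResponseError rather than returning 'error', so defensive code looks more like:

import ollama

try:
    ollama.delete('codellama')
    print("deleted 'codellama'")
except ollama.ResponseError as err:
    # e.g. a 404 when no such model is installed
    print(f"delete failed: {err.error}")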

I've written a utility to help manage the cache, called ollamautil. Feel free to use it if you find it helpful!


ISK-VAGR commented May 3, 2024

@sealad886

Thanks a lot for the feedback. I really had no option but to delete the files from the cache. The problem, fundamentally, was that the ollama rm command didn't work for me. I will test your solution. Thanks a lot.

sealad886 commented

The cache tries to intelligently reduce disk space by storing a single blob file that is then shared among two or more models. If the blob file wasn't deleted with ollama rm <model>, then it's probable that it was being used by one or more other models.

The way Ollama has implemented this pseudo-symlinking is essentially OS-agnostic (i.e. I'm assuming their method allows it to work on Windows): each <quant> file in /models/manifests/registry.ollama.ai/library/<model>/<quant> is a text file that stores the blob's sha256 hash, which is also the name of the blob file itself.
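
You can see this for yourself with a small sketch that pulls the sha256 digests out of one manifest without assuming its exact internal format (the codellama/latest path is just an example from the listing above; substitute your own <model>/<quant>):

import re
from pathlib import Path

# Example manifest path; substitute your own <model>/<quant>.
manifest = (Path.home() / ".ollama" / "models" / "manifests"
            / "registry.ollama.ai" / "library" / "codellama" / "latest")

# Every digest found here names a file in ~/.ollama/models/blobs,
# stored on disk as sha256-<hash>.
for digest in sorted(set(re.findall(r"sha256[:-]([0-9a-f]{64})", manifest.read_text()))):
    print(f"sha256-{digest}")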


ISK-VAGR commented May 3, 2024

> The cache tries to intelligently reduce disk space by storing a single blob file that is then shared among two or more models. If the blob file wasn't deleted with ollama rm <model>, then it's probable that it was being used by one or more other models.
>
> The way Ollama has implemented this pseudo-symlinking is essentially OS-agnostic (i.e. I'm assuming their method allows it to work on Windows): each <quant> file in /models/manifests/registry.ollama.ai/library/<model>/<quant> is a text file that stores the blob's sha256 hash, which is also the name of the blob file itself.

I am not a programmer, so I have no idea about this. However, in your repo, ollamautil.py at lines 578 and 580 asks for the paths to the external and internal directories. How do I know where to find those?


mxyng commented May 13, 2024

> I tried the ollama rm command, but it only deletes the file in the manifests folder, which is KBs in size. I also tried deleting those files manually, but again they are only KBs, not the GBs of the real models.

Different models can share files. These files are not removed by ollama rm if other models still use them. For example, if model A uses blobs X and Y, and model B uses blobs X and Z, removing model A will only remove blob Y, since blob X is still referenced by model B. This is likely the main source of the behaviour you're seeing.
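
A quick way to check this on your own machine is to count, per blob, how many manifests reference it. A sketch under the same default-layout assumption as above (not an official tool): blobs with two or more references are shared and will survive removing a single model, and blobs with zero references are orphans.

import re
from collections import Counter
from pathlib import Path

models = Path.home() / ".ollama" / "models"

# Count how many manifests reference each blob digest.
refs = Counter()
for manifest in (models / "manifests").rglob("*"):
    if manifest.is_file():
        refs.update(re.findall(r"sha256[:-]([0-9a-f]{64})", manifest.read_text()))

# Report each blob's size and how many manifests still point at it.
for blob in sorted((models / "blobs").iterdir()):
    digest = blob.name.removeprefix("sha256-")
    size_gb = blob.stat().st_size / 1e9
    print(f"{blob.name[:19]}...  {size_gb:6.2f} GB  {refs[digest]} reference(s)")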

mxyng closed this as completed May 13, 2024