
LLMEval not loading Qwen1.5-0.5B model into memory #53

Open
mobile-appz opened this issue Apr 23, 2024 · 14 comments
Labels: enhancement (New feature or request)

Comments

@mobile-appz

When trying to load Qwen1.5, the model downloads fully but doesn't appear to load into memory on macOS or iOS. After typing a prompt, the error output is: Failed: unhandledKeys(base: "Embedding", keys: ["biases", "scales"])

Using MLX 0.11.0

The other models linked in the repo work as expected, but this one is the smallest of them and looks like the best fit for older devices with less RAM, so it would be great to get it working.

@awni
Member

awni commented Apr 23, 2024

Right, we changed quantization in MLX core so now the embedding layer is quantized. We'll need to update Swift to do the same.
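
For illustration, here is a minimal Python sketch (assuming a recent mlx with nn.quantize and nn.QuantizedEmbedding; the toy model is hypothetical) of why converted checkpoints now carry "scales" and "biases" entries for the embedding, which is exactly what the unhandledKeys error on the Swift side is reporting:

```python
import mlx.nn as nn
from mlx.utils import tree_flatten

# Toy model with an embedding and a linear layer.
class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(128, 64)
        self.proj = nn.Linear(64, 64)

model = Toy()

# With recent MLX, nn.quantize converts Embedding -> QuantizedEmbedding as well,
# so the saved weights gain "embed.scales" and "embed.biases" entries.
nn.quantize(model, group_size=64, bits=4)
print([k for k, _ in tree_flatten(model.parameters())])
# e.g. ['embed.weight', 'embed.scales', 'embed.biases', 'proj.weight', ...]
```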

awni added the enhancement label on Apr 23, 2024
@mobile-appz
Author

> Right, we changed quantization in MLX core so now the embedding layer is quantized. We'll need to update Swift to do the same.

Thanks for the info. I was unsure of the cause of this error message. As an update, I tried to load this model with the LLM tool in mlx-swift-examples and it failed with the same error. I then ran the Python code in mlx-examples and the model did load and process a prompt, although the output wasn't particularly useful, probably because the model is so small.

@davidkoski
Collaborator

I think these are the commits in question:

@awni
Member

awni commented Apr 25, 2024

Those are the commits. Sorry, that broke more than I was expecting. Basically, embeddings are quantized by default now, so when we quantize a model for MLX in Python it isn't usable in Swift, because Swift doesn't yet support quantized embeddings.

The medium-term solution is to update Swift to quantize embeddings (this is a Swift-only change; nothing is needed from core). But as a temporary patch, we could also upload models without the embedding layers quantized.
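
As a rough sketch of that temporary patch (assuming mlx.nn.quantize's class_predicate argument; the actual conversion entry point may differ), one could quantize everything except the embedding when converting, so the resulting checkpoint still loads in Swift:

```python
import mlx.nn as nn

def quantize_without_embeddings(model, group_size=64, bits=4):
    def skip_embeddings(path, module):
        # Quantize every quantizable module except embedding layers, so the
        # converted checkpoint has no embedding "scales"/"biases" for Swift to reject.
        return hasattr(module, "to_quantized") and not isinstance(module, nn.Embedding)

    nn.quantize(model, group_size=group_size, bits=bits, class_predicate=skip_embeddings)
    return model
```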

@mobile-appz
Author

At least one small model without quantized embedding layers, that can run on an older iOS 17-compatible iPhone, would be really useful for experimentation purposes. Thanks.

@davidkoski
Collaborator

> Those are the commits. Sorry, that broke more than I was expecting. Basically, embeddings are quantized by default now, so when we quantize a model for MLX in Python it isn't usable in Swift, because Swift doesn't yet support quantized embeddings.
>
> The medium-term solution is to update Swift to quantize embeddings (this is a Swift-only change; nothing is needed from core). But as a temporary patch, we could also upload models without the embedding layers quantized.

If we make this change, will it break other models that don't have quantized embeddings (i.e. all the models we have been using to date)? I wonder if we need some way to detect and switch between the two modes.

@awni
Member

awni commented Apr 25, 2024

Right, so this is what solves that problem in MLX: https://github.com/ml-explore/mlx-examples/blob/main/llms/mlx_lm/utils.py#L336-L346

It's actually really useful because it handles heterogeneously quantized models very cleanly, which is a problem we've had in the past (e.g. older models with unquantized gate matrices, or unquantized LM heads from before we supported more sizes).
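
Paraphrasing the linked code (not a verbatim copy), the idea is roughly: a module only becomes quantized if its "scales" tensor is actually present in the checkpoint, so older checkpoints with unquantized gates or LM heads still load correctly:

```python
import mlx.nn as nn

def quantize_from_checkpoint(model, weights, group_size, bits):
    """Quantize only the modules whose quantized scales exist in `weights`
    (a flat dict of tensors loaded from the safetensors shards)."""

    def class_predicate(path, module):
        # Skip anything that cannot be quantized at all.
        if not hasattr(module, "to_quantized"):
            return False
        # Quantize this module only if the checkpoint actually stores its scales.
        return f"{path}.scales" in weights

    nn.quantize(model, group_size=group_size, bits=bits, class_predicate=class_predicate)
    return model
```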

@davidkoski
Collaborator

Aha, I didn't implement that -- we have just been using the load safetensors function and the update parameters method.

  • implement load_model with quantization support (here)
  • implement embedding quantization (here)
  • adopt in mlx-swift-examples (here)

@awni
Member

awni commented Apr 25, 2024

> we have just been using the load safetensors function and the update parameters method.

But how do you know whether it's a quantized model or not? Presumably there are some lines of code somewhere that quantize the model based on the config (prior to loading the safetensors)?

@davidkoski
Collaborator

> we have just been using the load safetensors function and the update parameters method.
>
> But how do you know whether it's a quantized model or not? Presumably there are some lines of code somewhere that quantize the model based on the config (prior to loading the safetensors)?

The config file indicates it. I am pretty sure this is how the mlx_lm code (or maybe its predecessor) worked and I just copied that, but perhaps it has moved on since then.
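
For context, a minimal sketch of that config check on the Python side (the model path here is hypothetical, and the field names follow what mlx-lm conversions typically write):

```python
import json
from pathlib import Path

# Hypothetical local path to a converted model directory.
config = json.loads(Path("mlx_model/config.json").read_text())

# Quantized conversions record a "quantization" section, e.g. {"group_size": 64, "bits": 4};
# float models simply omit it, so its presence is the signal to build quantized layers.
quantization = config.get("quantization")
is_quantized = quantization is not None
```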

@awni
Member

awni commented Apr 25, 2024

This is what I'm referring to:

https://github.com/ml-explore/mlx-swift-examples/blob/main/Libraries/LLM/Load.swift#L58-L60

MLX LM has always had something like that. It builds the quantized model based on the config. The premise didn't change much; only two things, really:

  1. Quantize all Linear and Embedding layers by default.
  2. Of those, only quantize the modules that actually have a "scales" parameter in their weights (a rough sketch of this is below).
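
A rough Python-side sketch of that build-from-config premise with both rules folded in (build_model is a placeholder for constructing the architecture from the config, and the real mlx-lm load_model differs in detail):

```python
import glob
import json
from pathlib import Path

import mlx.core as mx
import mlx.nn as nn

def load_model(model_path: Path):
    config = json.loads((model_path / "config.json").read_text())

    # Gather all tensors from the safetensors shards into one flat dict.
    weights = {}
    for shard in glob.glob(str(model_path / "*.safetensors")):
        weights.update(mx.load(shard))

    model = build_model(config)  # placeholder: construct the architecture from the config

    if (q := config.get("quantization")) is not None:
        # Rule 1: Linear and Embedding layers are candidates for quantization by default.
        # Rule 2: ...but only quantize those whose scales actually exist in the checkpoint.
        def class_predicate(path, module):
            return hasattr(module, "to_quantized") and f"{path}.scales" in weights

        nn.quantize(
            model,
            group_size=q["group_size"],
            bits=q["bits"],
            class_predicate=class_predicate,
        )

    model.load_weights(list(weights.items()))
    return model
```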

@awni
Member

awni commented Apr 25, 2024

It looks like you added some edge case handling already in there (e.g. https://github.com/ml-explore/mlx-swift-examples/blob/main/Libraries/LLM/Load.swift#L97-L108). The update to MLX LM simplified that kind of stuff a bit.

@davidkoski
Collaborator

> It looks like you added some edge case handling already in there (e.g. https://github.com/ml-explore/mlx-swift-examples/blob/main/Libraries/LLM/Load.swift#L97-L108). The update to MLX LM simplified that kind of stuff a bit.

Yeah, that is actually a port of the Python code, so I must have picked it up somewhere in the middle of those changes.

The load_model method probably should have been implemented from the start but I never used it and it just got lost.

Now I think we have a good idea of what needs to be done here.

@solume

solume commented May 18, 2024

Is there a temporary workaround for this? I'm running into the same issue with OpenELM, but that doesn't seem to be supported below 0.11.0.
