LLMEval not loading Qwen1.5-0.5B model into memory #53
Comments
Right, we changed quantization in MLX core so now the embedding layer is quantized. We'll need to update Swift to do the same.
Thanks for the info. I was totally unsure about the cause of this error message. To follow up: I tried to load this with the LLM tool in mlx-swift-examples and it failed with the same error. I then ran the Python code in mlx-examples, and there the model did load and process a prompt. The output wasn't really useful, though, probably because the model is so small.
I think these are the commits in question:
Those are the commits. Sorry that broke more stuff than I was expecting. Basically, embeddings are quantized by default now, so when we quantize a model for MLX in Python it is no longer usable in Swift, because Swift doesn't support quantized embeddings. The medium-term solution is to update Swift to quantize embeddings (this is a Swift-only change; it doesn't need anything from core). As a temporary patch, we could also upload models with the embedding layers left unquantized.
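The temporary patch described above can be sketched as a conversion-time predicate that skips embeddings when quantizing. This is a pure-Python illustration; the function names and layer paths are made up for the sketch and are not the actual mlx_lm API:

```python
# Hypothetical sketch: when quantizing a converted checkpoint, skip any
# module whose path looks like an embedding, so older Swift loaders can
# still read the result. Names here are illustrative only.

def should_quantize(path: str) -> bool:
    """Quantize everything except embedding layers."""
    return "embed" not in path.split(".")[-1]

def quantize_checkpoint(weights: dict) -> dict:
    """Record which tensors would gain quantization metadata."""
    out = {}
    for path, tensor in weights.items():
        out[path] = tensor
        if path.endswith(".weight"):
            base = path.rsplit(".", 1)[0]
            if should_quantize(base):
                # A real converter would store packed ints plus per-group
                # scales and biases; here we only record the extra keys.
                out[f"{base}.scales"] = "..."
                out[f"{base}.biases"] = "..."
    return out

weights = {
    "model.embed_tokens.weight": "fp16",
    "model.layers.0.mlp.up_proj.weight": "fp16",
}
converted = quantize_checkpoint(weights)
# The embedding keeps only its float weight; the linear layer gains
# scales/biases entries.
```

A checkpoint produced this way would load in the current Swift code, since the Embedding module never sees unexpected `scales`/`biases` keys.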
At least one small model with unquantized embedding layers, one that can run on an older iOS 17-compatible iPhone, would be really useful for experimentation purposes. Thanks.
If we make this change will it break other models that don't have the quantized embeddings (all the models we have been using to date)? I wonder if we need some way to detect and switch between these modes? |
Right, so this is what solves that problem in MLX: https://github.com/ml-explore/mlx-examples/blob/main/llms/mlx_lm/utils.py#L336-L346

It's actually really useful because it handles heterogeneously quantized models very cleanly, which is a problem we've had in the past (e.g. old models with unquantized gate matrices, or unquantized LM heads from before we supported more sizes).
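The idea behind the linked code can be sketched as a per-layer predicate: a module is built quantized only if the checkpoint actually shipped scales for it. This is a simplified stand-alone illustration, not the exact mlx_lm implementation:

```python
# Sketch of heterogeneous-quantization handling (illustrative names):
# decide per module whether to build it quantized by checking whether
# the weight file contains a 'scales' entry for that module. This covers
# models where only some layers are quantized, e.g. an unquantized
# lm_head alongside quantized attention projections.

def is_quantized_module(module_path: str, weight_keys: set) -> bool:
    """A module is treated as quantized iff its checkpoint has scales."""
    return f"{module_path}.scales" in weight_keys

weight_keys = {
    "model.layers.0.self_attn.q_proj.weight",
    "model.layers.0.self_attn.q_proj.scales",
    "model.layers.0.self_attn.q_proj.biases",
    "lm_head.weight",  # left unquantized in this checkpoint
}

quantized = is_quantized_module("model.layers.0.self_attn.q_proj", weight_keys)
unquantized = not is_quantized_module("lm_head", weight_keys)
```

Because the decision is driven by the checkpoint contents rather than a single global flag, old mixed checkpoints and the new quantized-embedding checkpoints fall out of the same code path.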
But how do you know whether it's a quantized model or not? Presumably there is some code somewhere that quantizes the model based on the config (prior to loading the safetensors)?
The config file indicates it -- I am pretty sure this is how the mlx_lm code (or maybe the predecessor) worked and I just copied that, but perhaps that has moved forward. |
This is what I'm referring to: https://github.com/ml-explore/mlx-swift-examples/blob/main/Libraries/LLM/Load.swift#L58-L60

MLX LM has always had something like that: it builds the quantized model based on the config. The premise didn't change much; only two things really:
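The config check being discussed might look roughly like this in Python. Only the `"quantization"` / `"group_size"` / `"bits"` field names follow the MLX convention; the surrounding code is an assumed sketch, not the actual loader:

```python
import json

# Minimal sketch of config-driven loading: if config.json carries a
# "quantization" section, the loader should build quantized layers with
# those parameters before reading the safetensors.

config = json.loads("""
{
  "model_type": "qwen2",
  "quantization": {"group_size": 64, "bits": 4}
}
""")

quant = config.get("quantization")
if quant is not None:
    group_size = quant.get("group_size", 64)
    bits = quant.get("bits", 4)
    # Here a real loader would replace Linear (and, after this change,
    # Embedding) modules with quantized counterparts before loading weights.
```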
It looks like you added some edge case handling already in there (e.g. https://github.com/ml-explore/mlx-swift-examples/blob/main/Libraries/LLM/Load.swift#L97-L108). The update to MLX LM simplified that kind of stuff a bit. |
Yeah, that is actually a port of the Python code, so I must have picked it up somewhere in the middle. Now I think we have a good idea of what needs to be done here.
Is there a temporary solution to this? I'm running into the same issue with OpenELM, but that doesn't seem to be supported on < 0.11.0.
When trying to load Qwen1.5, the model downloads fully but doesn't appear to load into memory on macOS or iOS. After typing a prompt, the error output is: Failed: unhandledKeys(base: "Embedding", keys: ["biases", "scales"])
Using MLX 0.11.0
Other linked models work as per the repo code, but this is the smallest one, which looks like the best fit for older devices with less RAM, so it would be great to get it working.
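Until quantized embeddings are supported in Swift, a pre-flight check along these lines could surface a clearer error than `unhandledKeys`. This is a hypothetical sketch; `find_quantized_embeddings` is not part of any real API:

```python
# Hedged sketch of a workaround check: before loading, scan the weight
# keys for scales/biases attached to an embedding. If any are found, the
# checkpoint was converted with embedding quantization that the Swift
# loader (as of this issue) cannot handle.

def find_quantized_embeddings(weight_keys):
    hits = set()
    for key in weight_keys:
        base, _, leaf = key.rpartition(".")
        if leaf in ("scales", "biases") and "embed" in base.split(".")[-1]:
            hits.add(base)
    return sorted(hits)

keys = [
    "model.embed_tokens.weight",
    "model.embed_tokens.scales",
    "model.embed_tokens.biases",
    "model.layers.0.mlp.down_proj.weight",
]
flagged = find_quantized_embeddings(keys)  # -> ['model.embed_tokens']
```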