Support for mlx.float64 #799

Open
kyrollosyanny opened this issue Mar 6, 2024 · 4 comments
Labels
enhancement New feature or request

Comments

@kyrollosyanny

Describe the bug
Would it be possible to support float64 types? For some numerical simulations, having float64 is important for the accuracy of the results. The goal is to use MLX for automatic differentiation in these kinds of scenarios.
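(Not part of the original comment, but for context: a minimal NumPy sketch of why float64 matters for simulations. float32 has a 24-bit significand, so once values exceed 2**24, small increments are silently lost.)

```python
import numpy as np

# float32 stores a 24-bit significand, so integers above 2**24 can no
# longer be represented exactly and a +1 increment rounds away entirely.
x32 = np.float32(2.0**24)
print(x32 + np.float32(1.0) == x32)        # True: the +1 vanishes in float32

# float64 has a 53-bit significand and keeps the increment exactly.
x64 = np.float64(2.0**24)
print(x64 + 1.0 == 2.0**24 + 1)            # True: float64 preserves it
```

In a long-running simulation these per-operation rounding errors accumulate, which is why double precision is often a hard requirement.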

@awni
Member

awni commented Mar 6, 2024

Double isn’t possible in Metal. In theory we could do it on the CPU only, but that is likely a lot less interesting to you?
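(An aside, not something MLX provides: on hardware without native float64, a standard workaround is compensated arithmetic built from error-free transforms. Knuth's TwoSum recovers the rounding error of a float32 addition exactly, using only float32 operations; it is the building block of "double-single" emulation and Kahan summation. The function below is a hypothetical illustration, not an MLX API.)

```python
import numpy as np

def two_sum(a, b):
    """Knuth's TwoSum: return (s, err) such that s = fl(a + b) and
    a + b == s + err exactly, using only float32 arithmetic."""
    a = np.float32(a)
    b = np.float32(b)
    s = a + b
    bb = s - a                      # the part of b that made it into s
    err = (a - (s - bb)) + (b - bb) # rounding error lost from s
    return s, err

# Adding 1.0 to 2**24 in float32 rounds the 1.0 away entirely,
# but TwoSum captures it in the error term.
s, err = two_sum(np.float32(2.0**24), np.float32(1.0))
print(s, err)   # 16777216.0 1.0
```

Techniques like this trade several float32 operations per add for extra precision, so they are far slower than native doubles, but they show CPU-only float64 is not the only fallback on float32-only GPUs.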

@kyrollosyanny
Author

For some simulations and optimizations, CPU is more than enough. If that support could be added in a future version, that would be great. Thanks a lot.

@awni awni added the enhancement New feature or request label Mar 7, 2024
@awni
Member

awni commented Mar 7, 2024

Sounds good, I'll leave this open for now as a possible enhancement. I don't know if we will do it, but people can comment here with use cases etc to help us prioritize.

@Andyuch

Andyuch commented Jun 5, 2024

It would be very helpful if float64 were added; I run into a similar limitation in my scientific simulations when using the 'mps' backend in PyTorch.
