Describe the bug
Would it be possible to support float64 types? For some numerical simulations, float64 is important for the accuracy of the results. The goal is to use MLX for automatic differentiation in these kinds of scenarios.
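For illustration, here is a small sketch of the kind of precision loss that motivates the request. It uses NumPy (since MLX's float64 support is the subject of this issue) to show a cancellation that float32 cannot resolve but float64 handles exactly:

```python
import numpy as np

# float32 has a 24-bit significand, so at magnitude 1e8 the spacing
# between representable values is 8.0 -- adding 1.0 is simply lost.
a32 = np.float32(1e8)
print((a32 + np.float32(1.0)) - a32)  # 0.0: the +1 vanished

# float64 has a 53-bit significand and represents both values exactly.
a64 = np.float64(1e8)
print((a64 + 1.0) - a64)  # 1.0: the correct answer
```

Errors of this kind accumulate over many simulation steps, which is why single precision is often insufficient for long-running numerical simulations and their gradients.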
For some simulations and optimizations, CPU is more than enough. If it is possible to add that support in future versions, that would be great. Thanks a lot.
Sounds good, I'll leave this open for now as a possible enhancement. I don't know if we will do it, but people can comment here with use cases etc to help us prioritize.
It would be great and very helpful if float64 were added; I ran into a similar issue in my scientific simulation when using the 'mps' backend in PyTorch.