Segment Anything Model #552
Conversation
@gathierry should we review and merge this? I'm not sure what the current status is, but let me know what you think.
Hi @awni, yes, please review it at your convenience.
@gathierry this is really nicely done! Thanks for the example. I started looking at it today and sent some changes to your branch. There are two high-level things I think we should aim to improve:
Thanks for the comments and improvements, @awni.
I tried to use MLX first but found it much slower than torch. I thought it was because of some boolean masking and nonzero operators that MLX didn't support yet, so I had to convert to numpy and back.
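For illustration, a minimal sketch of the kind of numpy round trip described above might look like the following; the function name and threshold are hypothetical, not code from this PR.

```python
import numpy as np
import mlx.core as mx

# Hypothetical sketch of the hybrid pattern: ops with data-dependent output
# shapes (boolean masking, nonzero) run in numpy, then the result is copied
# back into an MLX array.
def keep_above_threshold(scores: mx.array, threshold: float) -> mx.array:
    scores_np = np.array(scores)             # MLX -> numpy copy
    kept = scores_np[scores_np > threshold]  # boolean masking in numpy
    return mx.array(kept)                    # numpy -> MLX copy
```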
Do you have a branch with that by any chance? We can profile and improve the ops if it's a bottleneck in MLX.
PS sorry for the long delay on this. It kind of fell through the cracks. But I am quite keen to get an object segmentation example working!
I don't have an existing branch for that, but I can try to write one and compare them. I remember the gap was pretty big, but maybe we can try to improve it.
Hi @awni, I have a few options in mind. What do you think? Maybe the third one is the easiest one to start?
Thanks for adding that! Which notebook did you test, the amg one?
Each of those ops is a bit tricky because they have output shapes which depend on input data. But I would like to take a look and see where the slowdown is coming from. It might be from the conversion to numpy but it could be something else, so it would be good to verify.
Yes, the amg one.
On an M2 Ultra the fully MLX version seems to be overall faster (looking at total time).

Hybrid:

All MLX:
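For reference, a minimal sketch of how such a comparison might be timed; the sizes, threshold, and ops below are hypothetical and not the PR's actual benchmark. Because MLX evaluates lazily, mx.eval has to run inside the timed region, otherwise only graph construction is measured.

```python
import time
import numpy as np
import mlx.core as mx

x = mx.random.uniform(shape=(1_000_000,))
mx.eval(x)  # materialize the input before timing

def hybrid_mask(a):
    a_np = np.array(a)                 # MLX -> numpy round trip
    return mx.array(a_np[a_np > 0.5])  # boolean masking in numpy

def all_mlx_mask(a):
    # Shape-static alternative that stays in MLX: zero out unwanted entries
    # instead of dropping them, so the output shape is data-independent.
    return mx.where(a > 0.5, a, mx.zeros_like(a))

for name, fn in [("hybrid", hybrid_mask), ("all MLX", all_mlx_mask)]:
    tic = time.perf_counter()
    out = fn(x)
    mx.eval(out)  # force evaluation before stopping the clock
    print(f"{name}: {(time.perf_counter() - tic) * 1e3:.2f} ms")
```

Note the two functions do not return the same shape; the sketch only illustrates the timing pattern, not an apples-to-apples comparison of the kernels.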
@gathierry would you mind updating this PR to use the MLX version? Then we can just focus on optimizing it. I don't think we will merge the torch version anyway. You can keep it in a side branch for reference (or it will also be in the git history). |
amg in mlx
Updated to the pure MLX version.
Thanks for sending the pure MLX version. I'm noticing the main script isn't working 😓. I tried:
It's possible I broke something in a previous refactor, but lmk if you have any ideas.
Thanks for this amazing addition!!
Add Segment Anything Model (SAM) in MLX.