[InferenceSlicer] - allow batch size inference #781
Comments
Hi, @inakierregueab 👋🏻 That is something we were considering but didn't implement due to time restrictions. Let me add some details to this issue. Maybe someone will pick it up.
Hi @SkalskiP, can I work on this issue if it is beginner-friendly?
Hi, @Bhavay-2001 👋🏻 Do you already have experience with running model inference at different batch sizes?
Hi @SkalskiP, yes, I think I can manage that. Can you please let me know how to proceed with this? Thanks
Great! Do you have any specific questions?
Hi @SkalskiP, how do I add the batch_size feature to the InferenceSlicer class? How can I test it in Google Colab? Any starting point that can help me get on track would be helpful.
I outlined the vital steps that need to be taken to add batch_size support in the issue description.
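For anyone looking for a starting point, the current single-slice usage looks roughly like this (a minimal sketch, not from this thread; the model choice and file name are illustrative):

```python
import cv2
import numpy as np
import supervision as sv
from ultralytics import YOLO

# Illustrative model; any detector wrapped in a slice-level callback works.
model = YOLO("yolov8n.pt")

def callback(image_slice: np.ndarray) -> sv.Detections:
    # Run inference on a single slice and convert the result.
    result = model(image_slice)[0]
    return sv.Detections.from_ultralytics(result)

slicer = sv.InferenceSlicer(callback=callback)
image = cv2.imread("image.jpg")
detections = slicer(image)
```

Because the callback receives one slice at a time, the model never sees more than one image per call; that is the limitation this issue targets.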
Hi @SkalskiP, can you please refer me to a code sample that has already been implemented and provides the batch_size functionality?
@Bhavay-2001, I'm afraid we do not have a code sample. Implementing batch inference is exactly what this task is meant to deliver. :/
@SkalskiP, what I am thinking of doing is implementing a for loop over batches of images. Each batch is then passed to the model, the detections are collected, and at the end the detections for the batch are returned.
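A minimal sketch of that loop, assuming a batch-capable callback (the run_batched helper and its names are hypothetical, not existing supervision API):

```python
from typing import Callable, List

import numpy as np
import supervision as sv

def run_batched(
    slices: List[np.ndarray],
    callback: Callable[[List[np.ndarray]], List[sv.Detections]],
    batch_size: int,
) -> List[sv.Detections]:
    # Group the image slices into fixed-size batches, run the callback
    # once per batch, and collect the per-slice detections in order.
    detections: List[sv.Detections] = []
    for i in range(0, len(slices), batch_size):
        detections.extend(callback(slices[i : i + batch_size]))
    return detections
```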
Hi @SkalskiP, can you please review this PR?
Hi @SkalskiP, can you please review it and let me know? Thanks
SkalskiP and I had a conversation about this - I'll take over for now.
Intermediate results:
Testing more broadly, however, provides mixed results.
Still checking; a Colab is coming soon.
https://colab.research.google.com/drive/1j85QErM74VCSLADoGliM296q4GFUdnGM?usp=sharing As you can see, in these tests it only helped the Ultralytics case. Known insufficiencies:
PR: #1108
Description
Currently, `sv.InferenceSlicer` processes each slice in a separate callback call, hindering inference with a batch size larger than 1. We can change this by:
- Adding `batch_size` as a new parameter for the `InferenceSlicer` class.
- Changing `callback: Callable[[np.ndarray], Detections]` to `callback: Callable[[List[np.ndarray]], List[Detections]]`.
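For illustration, a batch-aware callback under the proposed signature could look like this (a sketch only: Ultralytics models accept a list of images, and the `batch_size` argument shown is the proposed parameter, not part of the released API):

```python
from typing import List

import numpy as np
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

def batch_callback(image_slices: List[np.ndarray]) -> List[sv.Detections]:
    # Ultralytics accepts a list of images and returns one result per image,
    # so a whole batch of slices runs through the model in a single call.
    results = model(image_slices)
    return [sv.Detections.from_ultralytics(r) for r in results]

# Proposed usage once batch_size lands (hypothetical):
# slicer = sv.InferenceSlicer(callback=batch_callback, batch_size=8)
```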