What Should Be the Output Size of Predictions for Object Detection? #13009
Hello! 👋 It sounds like you're deep into exploring YOLOv5's output formats; I'll do my best to clarify. The output discrepancy you see mainly stems from different configurations for exporting the model. The output from the default `export.py` is the fully decoded prediction tensor, with all three detection scales flattened and concatenated into a single `(1, 25200, 133)` array. Contrastingly, when you modify the `forward` method in `yolo.py` to return early during export, you get the raw per-scale convolution outputs before any reshaping or decoding.
Each detection layer's output corresponds to a different scale at which the model detects objects. The smaller grids detect larger objects, and the larger grids detect smaller objects, because of the receptive field of the convolutional layers. If you're converting to another platform like RKNN and encountering shape issues, make sure the output dimensions expected by your RKNN implementation match the output shapes the YOLOv5 model actually produces; you may need to adjust either the export script or your RKNN input processing accordingly. I hope this clarification helps! If there are more questions or a need for further help, feel free to ask. Happy coding! 😊
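As a concrete check, the flattened `(1, 25200, 133)` export shape can be reproduced directly from the grid sizes. A minimal sketch, assuming a square 640x640 input, strides 8/16/32, and 3 anchors per grid cell (the YOLOv5 defaults):

```python
# Sketch: reproduce the flattened export shape from the grid sizes.
# Assumes a 640x640 input, strides 8/16/32, and 3 anchors per cell.
strides = [8, 16, 32]
na = 3                     # anchors per grid cell
img_h = img_w = 640

total = 0
for s in strides:
    ny, nx = img_h // s, img_w // s   # grid size at this scale
    total += na * ny * nx             # predictions contributed by this scale

print(total)  # 25200 -> matches the (1, 25200, 133) export shape
```

Each of the 25200 rows carries 133 values, which in your model corresponds to one box prediction per anchor per grid cell.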
Thank you very much for your reply. Your answer has helped me resolve part of my confusion, but there is still something I don't understand. After encountering this issue with the output shape, I also printed the shapes in detect.py. Why are the prediction shapes torch.Size([1, 15120, 133]), torch.Size([1, 3, 48, 80, 133]), torch.Size([1, 3, 24, 40, 133]), and torch.Size([1, 3, 12, 20, 133])? Was there any additional processing done to these outputs? And why is the grid (48, 80) instead of (80, 80)?
@lllittleX hello again! Glad to hear the previous response was somewhat helpful! The shapes you're seeing in `detect.py` are the standard inference outputs of the `Detect` head: the first tensor, `torch.Size([1, 15120, 133])`, holds the decoded predictions from all three scales concatenated together (this is what goes into NMS), while the remaining three tensors of shape `(batch, anchors, grid_h, grid_w, outputs)` are the per-scale grid outputs. Note that 15120 is exactly 3 × (48×80 + 24×40 + 12×20), so nothing extra is happening beyond reshaping and concatenation.
The reason the grids are not square (e.g., 80x80) but rectangular (48x80) is that `detect.py` letterboxes the input while preserving the image's aspect ratio: a landscape image resized to 640 pixels wide may end up only 384 pixels tall, and downsampling by strides 8, 16, and 32 then gives grids of 48x80, 24x40, and 12x20. If you export with a fixed square input size instead, you get the square 80x80, 40x40, and 20x20 grids. Keep diving into the details; that's how you get the best out of these models! 😉 Happy coding!
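To make that arithmetic concrete, here is a minimal sketch. It assumes the input was letterboxed to 384x640 (height x width) with strides 8/16/32 and 3 anchors per cell; those values are inferred from the shapes printed above, not read from your actual run:

```python
# Sketch: rectangular grids from a letterboxed (non-square) input.
# Assumes a 384x640 (h x w) input, strides 8/16/32, 3 anchors per cell.
img_h, img_w = 384, 640
strides = [8, 16, 32]
na = 3

total = 0
for s in strides:
    ny, nx = img_h // s, img_w // s
    print((ny, nx))        # (48, 80), then (24, 40), then (12, 20)
    total += na * ny * nx

print(total)  # 15120 -> matches torch.Size([1, 15120, 133])
```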
Search before asking
Question
I want to convert ONNX to RKNN, but during the testing process, I encountered an error indicating that the input shape is incorrect. I used two methods to convert to ONNX, and the results were very different. The first method used the default export.py, and the result was (1, 25200, 133). The second method involved modifying the forward code in yolo.py, and the results were (1, 399, 80, 80), (1, 256, 40, 40), and (1, 512, 20, 20).
```python
def forward(self, x):
    z = []  # inference output
    for i in range(self.nl):
        x[i] = self.m[i](x[i])  # conv
        bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)
        # --------------------------------------------------------------
        if self.export:
            return x
        # --------------------------------------------------------------
        x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
```
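For reference, the `view`/`permute` in that snippet splits the 399-channel conv output into 3 anchors × 133 values per cell and moves the value dimension last. The same transform can be mirrored with NumPy in a standalone sketch (assuming 3 anchors and 133 outputs per anchor, as in your model; `raw` is a hypothetical stand-in for one head's conv output):

```python
import numpy as np

bs, na, no, ny, nx = 1, 3, 133, 80, 80
raw = np.zeros((bs, na * no, ny, nx), dtype=np.float32)  # conv output: (1, 399, 80, 80)

# Split channels into (anchors, outputs), then move outputs to the last axis,
# mirroring x[i].view(bs, na, no, ny, nx).permute(0, 1, 3, 4, 2) in PyTorch.
x = raw.reshape(bs, na, no, ny, nx).transpose(0, 1, 3, 4, 2)

print(x.shape)  # (1, 3, 80, 80, 133)
```

This is why 399-channel conv layers turn into `(1, 3, ny, nx, 133)` tensors during training and inference.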
So, I am very curious about what the output should look like during the training and prediction processes. Therefore, I used torch.load to check the final output, and the result was like this:
```
(24): Detect(
  (m): ModuleList(
    (0): Conv2d(128, 399, kernel_size=(1, 1), stride=(1, 1))
    (1): Conv2d(256, 399, kernel_size=(1, 1), stride=(1, 1))
    (2): Conv2d(512, 399, kernel_size=(1, 1), stride=(1, 1))
  )
)
```
When I ran detect.py, the output result was like this:
```python
# Inference
with dt[1]:
    visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False
    if model.xml and im.shape[0] > 1:
        pred = None
        for image in ims:
            if pred is None:
                pred = model(image, augment=augment, visualize=visualize).unsqueeze(0)
            else:
                pred = torch.cat((pred, model(image, augment=augment, visualize=visualize).unsqueeze(0)), dim=0)
        pred = [pred, None]
    else:
        pred = model(im, augment=augment, visualize=visualize)
```

```
torch.Size([1, 15120, 133]) torch.Size([1, 3, 48, 80, 133]) torch.Size([1, 3, 24, 40, 133]) torch.Size([1, 3, 12, 20, 133])
```
I am now very confused about what is going on. Can someone explain this?
Additional
No response