
What Should Be the Output Size of Predictions for Object Detection? #13009

Closed
lllittleX opened this issue May 14, 2024 · 3 comments
Labels
question Further information is requested

Comments

@lllittleX

Search before asking

Question

I want to convert ONNX to RKNN, but during the testing process, I encountered an error indicating that the input shape is incorrect. I used two methods to convert to ONNX, and the results were very different. The first method used the default export.py, and the result was (1, 25200, 133). The second method involved modifying the forward code in yolo.py, and the results were (1, 399, 80, 80), (1, 256, 40, 40), and (1, 512, 20, 20).
```python
def forward(self, x):
    z = []  # inference output
    for i in range(self.nl):
        x[i] = self.m[i](x[i])  # conv
        bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)
        # --------------------------------------------------------------
        if self.export:
            return x
        # ---------------------------------------------------------------------------------
        x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
```

So, I am very curious about what the output should look like during the training and prediction processes. I therefore used torch.load to inspect the model's final layer, and it looks like this:
```
(24): Detect(
  (m): ModuleList(
    (0): Conv2d(128, 399, kernel_size=(1, 1), stride=(1, 1))
    (1): Conv2d(256, 399, kernel_size=(1, 1), stride=(1, 1))
    (2): Conv2d(512, 399, kernel_size=(1, 1), stride=(1, 1))
```
When I ran detect.py, the output result was like this:
```python
# Inference
with dt[1]:
    visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False
    if model.xml and im.shape[0] > 1:
        pred = None
        for image in ims:
            if pred is None:
                pred = model(image, augment=augment, visualize=visualize).unsqueeze(0)
            else:
                pred = torch.cat((pred, model(image, augment=augment, visualize=visualize).unsqueeze(0)), dim=0)
        pred = [pred, None]
    else:
        pred = model(im, augment=augment, visualize=visualize)

    print(pred[0].shape)
    print(pred[1][0].shape)
    print(pred[1][1].shape)
    print(pred[1][2].shape)
```

```
torch.Size([1, 15120, 133])
torch.Size([1, 3, 48, 80, 133])
torch.Size([1, 3, 24, 40, 133])
torch.Size([1, 3, 12, 20, 133])
```
I am now very confused about what is going on. Can someone explain this?

Additional

No response

@lllittleX lllittleX added the question Further information is requested label May 14, 2024
@glenn-jocher
Member

Hello! 👋 It sounds like you're deep into exploring YOLOv5's output formats; I'll do my best to clarify.

The output discrepancy mainly stems from the different export configurations. The (1, 25200, 133) output from the default export.py is the decoded (pre-NMS) inference output: the predictions from every grid cell and anchor box across all three scales, concatenated along a single dimension, with 133 values per prediction (number of classes + 5 box/objectness attributes). This consolidated format is convenient for deployment, where a single list of predictions is more practical.
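As a quick sanity check (assuming a 640×640 input, strides of 8/16/32, and 3 anchors per scale, which are the YOLOv5 defaults), the 25200 figure is simply the total number of anchor boxes across all grids:

```python
# Where 25200 comes from, assuming a 640x640 input, strides 8/16/32
# and 3 anchors per scale (the YOLOv5 defaults).
img_size = 640
strides = (8, 16, 32)
num_anchors = 3

total = sum(num_anchors * (img_size // s) ** 2 for s in strides)
print(total)  # 25200 -> matches the (1, 25200, 133) export shape
```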

In contrast, when you modify the forward function to return early, you get the raw output of each detection layer (i.e., (1, 399, 80, 80), (1, 256, 40, 40), and (1, 512, 20, 20) in your case). Each shape reflects:

  • the batch size
  • the channel count: number of anchors × (number of classes + 5 box attributes: x, y, width, height, objectness)
  • the grid height and width

Each detection layer's output corresponds to different scales at which the model detects objects. The smaller grid sizes detect larger objects, and the larger grid sizes detect smaller objects due to the receptive field of the convolutional layers.
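To make the relationship between the two formats concrete, here is a minimal sketch (not the actual export code) of how one raw head output of shape (bs, na*no, ny, nx) maps onto the (bs, na, ny, nx, no) layout that detect.py prints, assuming na = 3 anchors and no = 133 outputs per anchor:

```python
import torch

na, no = 3, 133                         # assumption: 3 anchors, 128 classes + 5 box/objectness values
raw = torch.randn(1, na * no, 80, 80)   # stand-in for one raw head output, e.g. (1, 399, 80, 80)

bs, _, ny, nx = raw.shape
# Same view/permute the Detect head applies internally (see the forward() snippet above)
x = raw.view(bs, na, no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
print(x.shape)  # torch.Size([1, 3, 80, 80, 133])
```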

If you're converting to another platform like RKNN and encountering shape issues, it's essential to ensure that the output dimensions expected by your RKNN implementation match the output shapes provided by the YOLOv5 model. You may need to adjust either the export script or your RKNN input processing accordingly to match these dimensions.

I hope this clarification helps! If there are more questions or a need for further help, feel free to ask. Happy coding! 😊

@lllittleX
Author

Thank you very much for your reply. Your answer has helped me resolve part of my confusion, but there is still something I don't understand. After running into this output-shape issue, I also printed the shapes in detect.py. Why are the prediction shapes torch.Size([1, 15120, 133]), torch.Size([1, 3, 48, 80, 133]), torch.Size([1, 3, 24, 40, 133]), and torch.Size([1, 3, 12, 20, 133])? Was any additional processing applied to these outputs? And why is the grid (48, 80) instead of (80, 80)?

@glenn-jocher
Member

@lllittleX hello again! Glad to hear the previous response was somewhat helpful!

The shapes you're seeing in detect.py are due to the processing and design of YOLOv5's architecture, which uses different sizes of feature maps to detect objects at various scales.

  • torch.Size([1, 15120, 133]) represents the flattened form of predictions combining all scales.
  • torch.Size([1, 3, 48, 80, 133]), torch.Size([1, 3, 24, 40, 133]), and torch.Size([1, 3, 12, 20, 133]) are predictions from three different scales. The dimensions 48x80, 24x40, and 12x20 reflect different grid sizes each layer uses to capture features of various extents.

The grids are not square (e.g., 80x80) but rectangular (48x80) because the letterboxed input image itself is rectangular: detect.py resizes and pads your image to a multiple of the maximum stride while preserving its aspect ratio, and each grid is simply the input height and width divided by that layer's stride (8, 16, and 32). A 48x80 grid at stride 8 therefore implies an input of about 384x640 rather than the square 640x640 assumed by the default export.
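As a rough illustration (your exact image size may differ), a letterboxed input of 384×640 with the default strides and 3 anchors per scale reproduces both the rectangular grids and the 15120 total you printed:

```python
# Grid sizes and flattened prediction count for a rectangular (letterboxed)
# input, assuming 384x640 and the default strides / 3 anchors per scale.
h, w = 384, 640
strides = (8, 16, 32)
num_anchors = 3

grids = [(h // s, w // s) for s in strides]
total = sum(num_anchors * gh * gw for gh, gw in grids)
print(grids)  # [(48, 80), (24, 40), (12, 20)]
print(total)  # 15120 -> matches torch.Size([1, 15120, 133])
```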

Keep diving into the details; that's how you get the best out of these models! 😉 Happy coding!
