model.engine is slower than model.pt #12653
Comments
Hello! It sounds like you're experiencing slower performance with the TensorRT `.engine` model compared to the original `.pt` model. To potentially improve the performance, you might want to look into your export settings, such as the workspace size and inference precision.

Here's an example command to adjust the workspace size during export:

`yolo export model=path/to/model.pt format=engine workspace=8`

If adjustments to these areas do not improve the performance, it might be helpful to profile both executions to understand where the bottleneck occurs.
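To profile the two executions, a simple warmup-then-average timing helper is usually enough to see which model is actually slower per call. Below is a minimal sketch; the `YOLO` usage in the comments is an assumed example of how you might apply it, not code from this thread:

```python
import time

def benchmark(fn, warmup=3, runs=20):
    """Average the wall-clock latency of a callable.

    Runs `warmup` untimed calls first (important for GPU inference,
    where the first calls include CUDA context and engine setup),
    then averages over `runs` timed calls.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Assumed usage for comparing the two models (load each model once,
# then time only the inference call):
#   from ultralytics import YOLO
#   pt_model  = YOLO("model.pt")
#   trt_model = YOLO("model.engine")
#   print("pt :", benchmark(lambda: pt_model("image.jpg")))
#   print("trt:", benchmark(lambda: trt_model("image.jpg")))
```

Warming up matters here: without it, the first `.engine` call absorbs one-time initialization cost and can make TensorRT look slower than it is in steady state.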
I have an RTX 3060 Ti. Which Python and TensorRT versions are compatible with it, and what workspace size and batch settings should I use?
Hello! For your setup with an RTX 3060 Ti, here’s a quick guide to get you started:
Feel free to tweak these settings based on your performance and accuracy needs! 🚀
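As one illustration of the kind of settings discussed above, an export tuned for an 8 GB Ampere card like the 3060 Ti might look like the following. The specific values (`workspace=4`, batch size of 1) are assumptions to sketch the idea, not verified recommendations:

```shell
# Export with FP16 precision, which Ampere GPUs accelerate well,
# and a moderate workspace so the builder stays within 8 GB of VRAM.
yolo export model=path/to/model.pt format=engine half=True workspace=4

# Run inference with the exported engine to compare against the .pt model.
yolo predict model=path/to/model.engine source=path/to/images
```

Note that a TensorRT engine is built for fixed settings, so if you change precision or batch size you need to re-export rather than reuse the old `.engine` file.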
Search before asking
Question
When converting the model from model.pt to model.engine, the model file size increases and the prediction step takes more time.
Additional
No response