yolov5/tutorials/running_on_jetson_nano/ #8386
10 comments · 45 replies
-
Can I use the Jetson Nano dev kit for a .engine model?
-
Heyy.
-
I'm following this tutorial with a Jetson Nano 2GB, but I get an error as soon as I try to install requirements.txt... I'm pretty new to Jetson/Linux, but I've followed every step exactly as it's listed, on both JetPack 4.6.1 and 4.6.
-
Hello @lakshanthad.
-
@glenn-jocher About your answer in the previous thread,
-
Hi Glenn,
Somehow I can't see your reply on GitHub, so I'm writing this here.
Thank you for the details. :)
…On Thu, Mar 28, 2024 at 6:09 PM Glenn Jocher ***@***.***> wrote:
Hey there! Thanks for bringing this up! 🚀 Yes, you're correct. The --calib=<file> option in trtexec is for specifying an existing INT8 calibration cache file, not for providing images to generate one. The process of creating a calibration table does indeed vary based on the target device, and Jetson devices have their own specific workflow for this.
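As a side note, once a calibration cache does exist, a typical trtexec build command looks something like the line below; the file names here are just placeholders:
```bash
# Build an INT8 engine from an ONNX model, reusing an existing calibration cache
trtexec --onnx=your_model.onnx --int8 --calib=calib.cache --saveEngine=your_model_int8.engine
```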
For Jetson, when working with TensorRT for optimizing models for inference, the calibration process typically involves running a set of calibration data (images) through the network and collecting statistics that are then used to create the calibration table. This step is crucial for achieving optimal performance when moving to INT8 precision.
The documentation you pointed out generally covers deploying YOLO models on Jetson devices, including using TensorRT and the necessary setup, but it seems we might need to clarify the calibration part a bit more for Jetson specifics. The provided guide focuses on deployment steps after you already have your model ready or converted, including TensorRT optimization and inference steps.
For anyone looking to generate an INT8 calibration cache specifically for Jetson devices using TensorRT, I'd recommend checking NVIDIA's official TensorRT documentation and guides on INT8 calibration, as this process can be quite nuanced and device-specific. NVIDIA provides tools and examples that show how to perform calibration using a dataset representative of your application's use case.
If you're diving into TensorRT optimization on Jetson, here's a simple snippet on how you might proceed after setting up your environment and having your model:
```python
# Pseudo-code outline of the INT8 calibration process
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

# Parse your model
with open("your_model.onnx", "rb") as model:
    parser.parse(model.read())

# Enable INT8 via the builder config (builder.int8_mode was removed in TensorRT 8)
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)
config.int8_calibrator = YourCalibratorClass(...)  # Implement this based on TensorRT docs

# Proceed with the rest of your optimization steps...
```
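For the calibrator itself, here's a minimal sketch of what YourCalibratorClass might look like, assuming your calibration images are already preprocessed into float32 NCHW batches; the class name and cache file name are placeholders:
```python
# Hypothetical calibrator sketch based on TensorRT's IInt8EntropyCalibrator2 interface
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

class YourCalibratorClass(trt.IInt8EntropyCalibrator2):
    def __init__(self, batches, cache_file="calib.cache"):
        super().__init__()
        self.batches = iter(batches)                  # batches: list of np.float32 arrays, shape (N, 3, H, W)
        self.cache_file = cache_file
        self.device_input = cuda.mem_alloc(batches[0].nbytes)
        self.batch_size = batches[0].shape[0]

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        try:
            batch = next(self.batches)
        except StopIteration:
            return None                               # no more data: calibration is finished
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        return [int(self.device_input)]               # list of device pointers, one per input

    def read_calibration_cache(self):
        try:
            with open(self.cache_file, "rb") as f:
                return f.read()                       # reuse an existing cache if present
        except FileNotFoundError:
            return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)                            # persist the cache for future builds
```
An instance of this class is what would get assigned to config.int8_calibrator in the snippet above.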
And yes, you should adjust to your specific use case, whether it's for general deployment or targeting devices like Jetson. Hope this helps clarify a bit! If you have further questions, feel free to ask. 📚
-
Hello, I am using an NVIDIA Xavier NX to deploy a custom-trained YOLO model using DeepStream.
python3 gen_wts_yoloV5.py -w yolov5s.pt
Not able to find gen_wts_yoloV5.py in https://github.com/marcoslucianops/DeepStream-Yolo. Found this file instead: export_yoloV5.py. Getting errors:
Starting: hatav2-80epc-best.pt
YOLOv5 🚀 v6.1-306-gfbe67e46 Python-3.10.12 torch-2.1.0 CPU
Fusing layers...
Please help me with this issue.
-
Hi there, I'm using JetPack 4.5.1 with Python 3.6.9 on a Jetson Nano. I'm wondering whether I can implement YOLOv5 with this config. Thanks.
-
yolov5/tutorials/running_on_jetson_nano/
Detailed guide on deploying trained models on NVIDIA Jetson using TensorRT and DeepStream SDK. Optimize the inference performance on Jetson with Ultralytics.
https://docs.ultralytics.com/yolov5/tutorials/running_on_jetson_nano/