Yolov5 + TensorRT Acceleration + DeepStream

1. Precautions before use
2. Instructions
    2.1 Model transformation
    2.2 Deployment model
    2.3 Modify the DeepStream configuration file (this step can be omitted for YAHBOOM version images)
3. Compile and run
If you are using the YAHBOOM version of the image, there is no need to set up the DeepStream environment. If you built the image yourself, you need to set up the DeepStream environment first; you can refer to the DeepStream setup tutorial we provide, or search for a setup guide online yourself.
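Before continuing, it can help to confirm that DeepStream is actually installed and on the PATH. A minimal check, assuming a standard DeepStream installation:

```bash
# Print the DeepStream version together with the versions of its main
# dependencies (CUDA, TensorRT, cuDNN). If the command is not found,
# the DeepStream environment has not been set up yet.
deepstream-app --version-all
```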
```bash
# Clone the DeepStream-Yolo repository and copy its YOLOv5 conversion
# script into the yolov5 repository (assumed to be a sibling directory)
git clone https://github.com/marcoslucianops/DeepStream-Yolo.git
cd DeepStream-Yolo/utils
cp gen_wts_yoloV5.py ../../yolov5
cd ../../yolov5
# Generate the .cfg and .wts files from the PyTorch weights (yolov5s.pt)
python3 gen_wts_yoloV5.py -w ./yolov5s.pt
```
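The script writes yolov5s.cfg and yolov5s.wts next to the weights file. A minimal sketch of the usual next step, assuming the directory layout from the commands above, is to copy both files back into the DeepStream-Yolo directory so the TensorRT engine can be built from them on first run:

```bash
# Copy the generated model description and weights into DeepStream-Yolo
# (file names assume the default yolov5s.pt input used above)
cp yolov5s.cfg yolov5s.wts ../DeepStream-Yolo/
cd ../DeepStream-Yolo
```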
```
[property]
# omit ...
model-engine-file=model_b2_gpu0_fp16.engine   # engine file name for FP16 (fp32 -> fp16)
batch-size=2                                  # change batch-size to 2 for faster inference
# omit ...
network-mode=2                                # 2: force FP16 inference
# omit ...
```
Note: The FPS depends on parameters such as the input image size, batch size, and inference interval, so it should be tuned for the actual application. Here we simply change the input batch size to 2, which significantly improves the model's inference speed.
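The batch size and interval mentioned above also appear in deepstream_app_config.txt. The excerpt below is only a sketch of where those keys typically live in the standard DeepStream application config format; the exact values and the inference config file name in your copy may differ:

```
[streammux]
# omit ...
batch-size=2      # should match the batch-size used by the inference engine
# omit ...

[primary-gie]
# omit ...
interval=0        # number of frames to skip between inference calls
config-file=config_infer_primary_yoloV5.txt   # the nvinfer config edited above (name may differ)
# omit ...
```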
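The build step below needs CUDA_VER set to the CUDA version installed on the device. If you are unsure which version that is, a quick check (assuming a standard JetPack/CUDA installation under /usr/local/cuda) is:

```bash
# Print the installed CUDA toolkit version; use its major.minor number
# (for example 11.4) as the CUDA_VER value in the make command below
nvcc --version
# If nvcc is not on the PATH, the versioned install directories also show it
ls -d /usr/local/cuda-*
```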
```bash
cd nvdsinfer_custom_impl_Yolo/
# change 11.4 to match your installed CUDA version
CUDA_VER=11.4 make -j4
cd ..
deepstream-app -c deepstream_app_config.txt
```
After waiting for a moment, you will see the CSI camera feed open.
Then, to use a USB camera, run:
```bash
cd ~/DeepStream-Yolo
deepstream-app -c deepstream_app_config_usb.txt
```
After a moment, detection will start and the results will be displayed.
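For reference, deepstream_app_config_usb.txt differs from the CSI configuration mainly in its [source0] group. A sketch of what a V4L2 (USB camera) source group typically looks like in the DeepStream application config format follows; the resolution and device node index are assumptions and should match your camera:

```
[source0]
enable=1
type=1                   # 1 = CameraV4L2 (USB camera)
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0   # N in /dev/videoN for the USB camera
```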
The parameter descriptions for the DeepStream configuration files and related reference links are available here: https://blog.csdn.net/weixin_38369492/article/details/104859567