1. Use Yolov5 to train traffic signs

In the previous course, we introduced how to use yolov5 to train a model. In this course, we will train our own model through hands-on operation, and we choose traffic signs as the training target. In order to save training time we only train two kinds of signs; to train more kinds, you just need to modify the corresponding values in the training scripts.

Notice:

Due to the limited performance of the Jetson Nano, the training cannot be completed on it. This training process uses a Jetson Orin NX 16G motherboard; you can also use a computer with a discrete graphics card. The training process is for reference only, and you need to provide the relevant hardware yourself.

If you are using the Orin SUPER image, you first need to enter the yolov5 virtual environment.

image-20250325144129333

1.1. Run Get_garbageData.py to generate training set images

Let's first look at the main part of this function.

There are two key pieces of information here: one is the number of random reads, and the other is the location the sign pictures are read from.
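The following is a minimal sketch of what these two values might look like; the variable names (class_num, image_dir) are assumptions, so refer to the actual Get_garbageData.py on your device:

```python
import os
import random

# assumption: the original script reads from 16 sign classes at random
class_num = 16                      # change this to 2 for two kinds of signs
class_id = random.randint(0, class_num - 1)

# assumption: the sign pictures are read from this directory
image_dir = os.path.expanduser('~/yolov5-5.0/data/garbage/image')
```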

Since we only have two types of traffic signs, we change 16 to 2; this value is set according to how many types there are. Before training, we need to put the images of these two types into the ~/yolov5-5.0/data/garbage/image directory, as shown in the figure below.

image-20220830145814164

Continuing below, let's look at the main content of the function.
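Below is a simplified, self-contained sketch of what this part of the script does (the output folder names, the background file name and the processing steps are assumptions; the real Get_garbageData.py may differ):

```python
import os
import random
import cv2

img_total = 100                                     # number of training images to generate
image_dir = os.path.expanduser('~/yolov5-5.0/data/garbage/image')
save_img_dir = 'data/garbage/images/train'          # assumed output folder for images
save_label_dir = 'data/garbage/labels/train'        # assumed output folder for labels
os.makedirs(save_img_dir, exist_ok=True)
os.makedirs(save_label_dir, exist_ok=True)

signs = sorted(os.listdir(image_dir))               # one picture per sign class, in order

for i in range(img_total):
    bg = cv2.imread('background.jpg')               # assumed background picture
    bh, bw = bg.shape[:2]
    class_id = random.randint(0, len(signs) - 1)    # pick one sign class at random
    sign = cv2.imread(os.path.join(image_dir, signs[class_id]))
    # simple processing: random scale, then paste at a random position on the background
    s = random.uniform(0.3, 0.6)
    sign = cv2.resize(sign, None, fx=s, fy=s)
    sh, sw = sign.shape[:2]
    x = random.randint(0, bw - sw)
    y = random.randint(0, bh - sh)
    bg[y:y + sh, x:x + sw] = sign
    cv2.imwrite(os.path.join(save_img_dir, f'{i}.jpg'), bg)
    # YOLO label format: class x_center y_center width height, all normalized to [0, 1]
    with open(os.path.join(save_label_dir, f'{i}.txt'), 'w') as f:
        f.write(f'{class_id} {(x + sw / 2) / bw:.6f} {(y + sh / 2) / bh:.6f} '
                f'{sw / bw:.6f} {sh / bh:.6f}\n')
```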

This is where the generated training pictures and labels are stored: img_total training images and their corresponding labels will be generated in this directory, and here we set img_total to 100.

image-20220830154200976

We can take a look at one of the training pictures. Each sign picture is given some processing and then placed on a background picture; the various permutations and combinations form the training images.

image-20220830154307060

1.2. Modify the yaml file

After we get the training images, we could in principle start training. However, because there are quite a lot of training images, they need to be loaded through a file. In the train.py file in the ~/software/yolov5-5.0 directory, we can see the following:
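In stock yolov5-5.0, the relevant argparse lines look roughly like the sketch below; the copy shipped with this course may already have the --data default changed to point at the garbage dataset yaml:

```python
import argparse

parser = argparse.ArgumentParser()
# defaults as in stock yolov5-5.0 train.py; change --data to your own yaml
parser.add_argument('--weights', type=str, default='yolov5s.pt', help='initial weights path')
parser.add_argument('--data', type=str, default='data/coco128.yaml', help='data.yaml path')
opt = parser.parse_args()
print(opt.data)   # the yaml file that will be loaded during training
```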

During training, this yaml file is loaded; its content is the path of the training images and the related label information.

We modify the content of this yaml file as follows.
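As a sketch (the paths and class names here are placeholders; use the two sign names that match your own image folder):

```yaml
# training and validation image paths, relative to the yolov5-5.0 directory
train: data/garbage/images/train
val: data/garbage/images/train

# number of classes
nc: 2

# class names, in the same order as the pictures in data/garbage/image
names: ['sign_1', 'sign_2']
```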

train and val indicate the location of the training images, nc indicates how many classes there are, and names lists the class names; the order of the names needs to be consistent with the order of the images in ~/yolov5-5.0/data/garbage/image.

1.3. Run train.py to train the model

A screenshot of successful operation is shown below:

image-20220831091901351

Here the model is trained for 50 epochs, that is, the epoch value is 50. If a virtual machine is used for training, this value needs to be adjusted accordingly.
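For reference, the epoch count comes from train.py's argparse default (it can also be passed on the command line with --epochs); a minimal sketch of setting it to 50:

```python
import argparse

parser = argparse.ArgumentParser()
# stock yolov5-5.0 defaults to 300 epochs; this course uses 50
parser.add_argument('--epochs', type=int, default=50)
print(parser.parse_args([]).epochs)   # -> 50
```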

After training, the path where the model is saved will be printed in the terminal, as shown in the figure below.

image-20220831093240591

Go to ~/software/yolov5-5.0/train/runs/train/exp17 to view its contents. The files test_batch0_pred.jpg, test_batch1_pred.jpg and test_batch2_pred.jpg are the pictures with our model's predictions.

image-20220831093817412

It can be seen that the recognition accuracy is quite high. Next, we copy ~/software/yolov5-5.0/train/runs/train/exp17/weights/best.pt into the ~/software/yolov5-5.0 directory, and then modify the content of detection_video.py accordingly.
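detection_video.py is part of the course code, so the exact line to change may differ on your device; the key point is simply to point its weights path at the best.pt we just copied. The sketch below is not the actual detection_video.py but a minimal, self-contained example of the same idea, using yolov5-5.0's own helpers (run it from the ~/software/yolov5-5.0 directory so that models and utils can be imported):

```python
import cv2
import numpy as np
import torch

from models.experimental import attempt_load
from utils.datasets import letterbox
from utils.general import non_max_suppression, scale_coords

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = attempt_load('best.pt', map_location=device)        # the newly trained weights
names = model.module.names if hasattr(model, 'module') else model.names

cap = cv2.VideoCapture(0)                                   # default camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # letterbox to the network input size, BGR->RGB, HWC->CHW, normalize
    img = letterbox(frame, 640)[0]
    img = np.ascontiguousarray(img[:, :, ::-1].transpose(2, 0, 1))
    img = torch.from_numpy(img).to(device).float() / 255.0
    img = img.unsqueeze(0)

    with torch.no_grad():
        pred = non_max_suppression(model(img)[0], conf_thres=0.25, iou_thres=0.45)[0]

    if pred is not None and len(pred):
        # map boxes back to the original frame size and draw them
        pred[:, :4] = scale_coords(img.shape[2:], pred[:, :4], frame.shape).round()
        for *xyxy, conf, cls in pred:
            x1, y1, x2, y2 = map(int, xyxy)
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.putText(frame, f'{names[int(cls)]} {conf:.2f}', (x1, y1 - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

    cv2.imshow('detect', frame)
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```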

By running detection_video.py, you can use the model we just trained to recognize the two trained signs in real time, as shown below.

image-20220831094956198