6 yolov4-tiny

yolov4-tiny official website: https://github.com/AlexeyAB/darknet

Source code: https://github.com/bubbliiiing/yolov4-tiny-tf2

6.1 Introduction


The performance of YOLOv4-Tiny on COCO: 40.2% AP50 at 371 FPS (GTX 1080 Ti). In both AP and FPS, it is a huge improvement over YOLOv3-Tiny, Pelee, and CSP, as shown in the figure below:

img

Comparison of YOLOv4 and YOLOv4-Tiny detection results (image source: Internet)

YOLOv4 detection results

img

YOLOv4-Tiny detection results

img

We can see that the detection accuracy of YOLOv4-Tiny has decreased, but YOLOv4-Tiny has an obvious advantage in speed: YOLOv4-Tiny takes only 2.6 milliseconds per detection, while YOLOv4 takes 27 milliseconds, which is more than 10 times faster!

6.2 Use

The Raspberry Pi version is supported.

Real-time monitoring from a web page is supported, for example:

View node information


Print detection information

The printed output is as follows:

6.3 Folder structure

The concept of the anchor box was introduced in YOLOv2 and greatly improved target detection performance. The anchor is essentially the reverse of the idea behind SPP (spatial pyramid pooling): SPP resizes inputs of different sizes into outputs of the same size, so the inverse of SPP works backward from same-sized outputs to recover boxes of different sizes.
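This inverse relationship can be sketched as the standard YOLO box decoding: the network emits the same-shaped raw output for every grid cell, and the anchor sizes turn those identical outputs back into boxes of different sizes. The function and variable names below are illustrative, not taken from the yolov4-tiny-tf2 source.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_box(tx, ty, tw, th, cell_x, cell_y, anchor_w, anchor_h,
               grid_size, input_size):
    """Map raw predictions (tx, ty, tw, th) for one grid cell and one
    anchor to a box (cx, cy, w, h) in input-image pixels."""
    stride = input_size / grid_size          # pixels per grid cell
    cx = (cell_x + sigmoid(tx)) * stride     # center offset inside the cell
    cy = (cell_y + sigmoid(ty)) * stride
    w = anchor_w * math.exp(tw)              # anchor width scaled by exp(tw)
    h = anchor_h * math.exp(th)
    return cx, cy, w, h

# The same raw output (0, 0, 0, 0) decoded with two different anchors
# yields two boxes of different sizes, same center:
small = decode_box(0, 0, 0, 0, cell_x=6, cell_y=6, anchor_w=23, anchor_h=27,
                   grid_size=13, input_size=416)
large = decode_box(0, 0, 0, 0, cell_x=6, cell_y=6, anchor_w=135, anchor_h=169,
                   grid_size=13, input_size=416)
print(small)  # (208.0, 208.0, 23.0, 27.0)
print(large)  # (208.0, 208.0, 135.0, 169.0)
```

Because the exponential scaling is applied per anchor, a single same-size network output can describe both small and large objects, which is the "push backward from same-size output to different-size input" idea described above.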

6.4 Environmental requirements

The factory image is already configured; no installation is needed.

Installation example

6.5 Custom training data set

6.5.1 Making a dataset

Method 1: Take some photos, use an annotation tool to mark the target in each photo, create a [train.txt] file under the [garbage_data] folder, and write the target information into it in a specific format.

Method 2: Put background images (as many as possible) in the [garbage_data/texture] folder, modify the [GetData.py] code as required, and run [GetData.py] to generate a dataset (as many samples as possible).

The name of the image and the label file should correspond. The label format in the [train.txt] file is as follows:

Take method 2 as an example.

Open the [GetData.py] file

Modify the total number of generated samples and fill it in as required (more is better); too few samples will lead to suboptimal training results.

Run the [GetData.py] file to get the dataset

6.5.2 Add weight file

There are ready-made weight files (pre-trained models) [yolov4_tiny_weights_coco.h5] and [yolov4_tiny_weights_voc.h5] under the [model_data] folder. Choose one of the two; the COCO weight file is recommended.

If you need the latest weight file, you can find it via a Baidu search.

6.5.3 Make label file

Be careful: do not use Chinese characters in the labels, and do not use spaces in folder names!

For example: garbage.txt
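A classes file of this kind typically lists one label per line. The file contents and loader below are an illustrative sketch (the label names are hypothetical), showing how such a file maps to the class list used during training:

```python
# Hypothetical contents of model_data/garbage.txt:
#
#   banana_peel
#   bottle
#   battery
#
# (English labels, no spaces, one per line.)

def load_class_names(text):
    """Parse the contents of a classes file into a list of label names,
    skipping blank lines."""
    return [line.strip() for line in text.splitlines() if line.strip()]

names = load_class_names("banana_peel\nbottle\nbattery\n")
print(names)       # ['banana_peel', 'bottle', 'battery']
print(len(names))  # 3 -> this count must match num_classes in training
```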

6.5.4 Modify the train.py file

Modify it as needed, referring to the comments in the file.

After completing the above steps, you can run the [train.py] file directly to start training.
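The settings you are asked to check usually tie together the files prepared in the previous steps. The variable names below are illustrative assumptions (the real names live in [train.py] and its comments), sketching the kind of values to verify before training:

```python
# Illustrative training settings; the actual variable names in train.py
# may differ -- follow the comments in that file.
config = {
    "classes_path": "model_data/garbage.txt",            # label file from 6.5.3
    "weights_path": "model_data/yolov4_tiny_weights_coco.h5",  # from 6.5.2
    "annotation_path": "garbage_data/train.txt",         # dataset from 6.5.1
    "input_shape": (416, 416),   # network input resolution
    "batch_size": 8,             # lower this if the GPU runs out of memory
    "epochs": 100,               # more epochs for a larger dataset
}

for key, value in config.items():
    print(key, "=", value)
```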

6.5.5 Model checking

During model checking, you need to manually enter the path of the image to be detected, as shown below:

img
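The manual-input loop can be sketched as follows. The `detect` argument is a stand-in for the trained model's detection call (an assumption here, not the repository's actual function); the sketch only shows the path-checking loop around it:

```python
import os

def run_check(image_paths, detect):
    """Feed each manually entered path to detect(), reporting an error for
    paths that do not exist, mimicking the interactive checking loop."""
    results = []
    for path in image_paths:
        if not os.path.isfile(path):
            results.append((path, "open error, try again"))
            continue
        results.append((path, detect(path)))
    return results

# Stand-in detector; a real run would call the trained model instead:
fake_detect = lambda p: "boxes drawn on " + p
print(run_check(["missing.jpg"], fake_detect))
# [('missing.jpg', 'open error, try again')]
```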