Robotic Arm Target Detection: YOLOv8 Model Training

1. Collection of training data

Connect to the vehicle's controller via VNC and enter the following command in a terminal:

image

Open another terminal and enter the following command:

image

Place the object you want to capture in front of the camera, and then press the s key to save the image.

The images will be saved in the ~/YBAMR-COBOT-EDU-00001/src/yahboom_mycobo_320_apriltag/scripts/tem folder.

image

Repeat this procedure until you have collected more than 100 images to use as the dataset.
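The capture step described above can be sketched with OpenCV. Note this is a hypothetical stand-in for the script Yahboom provides, not its actual code; the camera index and the file-name pattern are assumptions:

```python
# Sketch of an image-capture loop: show the camera feed and save a frame
# each time 's' is pressed, matching the workflow described in the tutorial.
import os

# Save location from the tutorial; the img_NNN.jpg naming is an assumption.
SAVE_DIR = os.path.expanduser(
    "~/YBAMR-COBOT-EDU-00001/src/yahboom_mycobo_320_apriltag/scripts/tem")

def save_path(index):
    """Build the file name for the index-th captured image."""
    return os.path.join(SAVE_DIR, f"img_{index:03d}.jpg")

def main():
    import cv2  # imported lazily so the sketch can be read without OpenCV
    os.makedirs(SAVE_DIR, exist_ok=True)
    cap = cv2.VideoCapture(0)  # camera index 0 is an assumption
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("capture", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord("s"):            # press 's' to save, as in the tutorial
            cv2.imwrite(save_path(count), frame)
            count += 1
        elif key == ord("q"):          # press 'q' to quit
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```
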

2. Labeling of Dataset

Upload the images from the ~/YBAMR-COBOT-EDU-00001/src/yahboom_mycobo_320_apriltag/scripts/tem folder to the annotation platform at https://roboflow.com/.

image

image

If you have not registered yet, do so now; you can sign up quickly with a GitHub account.

When logging in for the first time, you need to create a new project.

image

image

image

image

image

After the upload is complete, the following interface appears.

image

Add the categories first, then label the images.

image

image

image

image

image

image

image

image

image

image

image

As shown in the figure below, one picture has now been labeled.

image

Label all the pictures in the same way.

3. Dataset creation

image

image

image

image

image

image

image

image

image

image

4. Model training

Download the official YOLOv8 code from https://github.com/ultralytics/ultralytics.

Enter the command in the terminal:

I won't go into detail about setting up the training environment here; you can search for it online.

Training does require a capable graphics card; an RTX 3060 is usually sufficient.

Now let's look at how to train. Place the downloaded dataset archive in the root directory of ultralytics and unzip it into the yolov8_obb_class folder.

image

image

image

image

image

image
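Assuming the Ultralytics package is installed, the training run shown in the screenshots above can be reproduced roughly as follows. The pretrained checkpoint name, the data.yaml path, and the hyperparameters are assumptions; adjust them to match your yolov8_obb_class dataset:

```python
# Sketch of a YOLOv8 OBB training run with the Ultralytics Python API.
def train_config():
    """Hyperparameters used below, collected here so they are easy to tweak."""
    return dict(
        data="yolov8_obb_class/data.yaml",  # hypothetical dataset config path
        epochs=100,                         # assumed epoch count
        imgsz=640,                          # default YOLOv8 input size
        device=0,                           # first CUDA GPU
    )

def train():
    from ultralytics import YOLO            # pip install ultralytics
    model = YOLO("yolov8n-obb.pt")          # pretrained oriented-box checkpoint
    model.train(**train_config())           # results land under runs/obb/train/

if __name__ == "__main__":
    train()
```
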

5. Model conversion to ONNX

Since the model needs to run on the vehicle's onboard computer, the .pt model must first be converted to ONNX, then converted to an engine on the device, after which inference is accelerated with TensorRT.

Convert the model to ONNX, as shown in the following figures:

image

image

image
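The ONNX export shown in the screenshots can also be done with the Ultralytics Python API. The weights path below is an assumption (the default location of the best checkpoint from the previous step):

```python
# Sketch of exporting a trained YOLOv8 .pt checkpoint to ONNX.
def onnx_name(weights_path):
    """The exporter writes the .onnx next to the .pt with the same stem."""
    if weights_path.endswith(".pt"):
        return weights_path[:-3] + ".onnx"
    return weights_path + ".onnx"

def export_onnx(weights="runs/obb/train/weights/best.pt"):
    from ultralytics import YOLO   # heavy import kept inside the function
    model = YOLO(weights)
    model.export(format="onnx")    # writes e.g. runs/obb/train/weights/best.onnx
    return onnx_name(weights)

if __name__ == "__main__":
    print(export_onnx())
```
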

6. Convert to engine model

On the Orin NX 16GB board, copy the generated ONNX model to the home directory, then open a terminal and enter the command:

image

image

image

Since our YOLOv8 inference code loads its weights from /home/yahboom/YBAMR-COBOT-EDU-00001/soft/yolov8/weights/, rename the generated best-obb02.engine file to best-obb.engine and place it in that directory.

image
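One common way to do the ONNX-to-engine conversion on a Jetson-class board is with trtexec, which ships with TensorRT. The sketch below wraps it from Python and moves the renamed engine into place; the trtexec path, the ONNX file name, and the use of FP16 are assumptions:

```python
# Sketch: build a TensorRT engine from an ONNX model via trtexec,
# then install it where the inference code expects it.
import shutil
import subprocess

TRTEXEC = "/usr/src/tensorrt/bin/trtexec"  # default location on Jetson images
WEIGHTS_DIR = "/home/yahboom/YBAMR-COBOT-EDU-00001/soft/yolov8/weights"

def trtexec_cmd(onnx_path, engine_path, fp16=True):
    """Assemble the trtexec invocation as an argument list."""
    cmd = [TRTEXEC, f"--onnx={onnx_path}", f"--saveEngine={engine_path}"]
    if fp16:
        cmd.append("--fp16")  # half precision: faster on Orin, small accuracy cost
    return cmd

def build_and_install(onnx_path="best-obb02.onnx"):
    engine = "best-obb.engine"  # the name the inference code expects
    subprocess.run(trtexec_cmd(onnx_path, engine), check=True)
    shutil.move(engine, f"{WEIGHTS_DIR}/best-obb.engine")

if __name__ == "__main__":
    build_and_install()
```
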