5. Control Muto color tracking

Before running this program, you need to bind the port numbers of the voice board and the ROS expansion board on the host machine (see the previous chapter for the binding steps). When entering the docker container, you also need to mount the voice board device so that it can be recognized inside the container.

1. Program function description

After the program starts, say "Hello, yahboom" to the module. The module replies "Yes", which wakes up the voice board; you can then tell it to start tracking any of red, green, blue, or yellow. After receiving the command, the program recognizes the color and loads the processed image; pressing the R2 button on the handle then starts the tracking program. Muto will track the recognized color, and if the color moves slowly, Muto will follow its movement.

2. Program code reference path

After entering the docker container, the source code of this function is located at

3. Configuration before use

Note: The Muto series robots can be equipped with several different lidar devices, and the factory system ships with routines for each of them. Because the lidar model cannot be detected automatically, it must be set manually.

After entering the container, make the following modifications according to your lidar type:

After the modification is complete, save and exit vim, then execute:

The output shows the currently configured lidar type.

 

4. Program startup

4.1. Start command

After entering the docker container, enter the following in the terminal:


Take tracking red as an example. After waking up the module, tell it "start tracking red" and press the R2 button on the remote control. Once the command is received, the program processes the image, calculates the center coordinates of the red object, and publishes them. Combining these coordinates with the depth information provided by the depth camera, it then calculates the speed and publishes it to drive Muto.
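The step from "center coordinates plus depth" to a drive speed can be sketched as a simple proportional rule: drive forward until the object is at the target distance, and turn to keep it horizontally centered. The function below is a minimal illustration; the gain values, image width, and function name are assumptions for the sketch, not the robot's real configuration.

```python
def compute_speeds(cx, depth, image_width=640,
                   min_distance=0.4, linear_kp=0.6, angular_kp=2.0):
    """Toy proportional controller for color tracking.

    cx    -- x pixel coordinate of the tracked object's center
    depth -- distance to the object in meters (from the depth camera)
    Gains and image_width are illustrative values only.
    """
    # Linear speed: approach until the object is min_distance away.
    linear = linear_kp * (depth - min_distance)
    # Angular speed: normalized horizontal offset in [-0.5, 0.5];
    # a positive offset (object right of center) turns the robot right.
    offset = (cx - image_width / 2) / image_width
    angular = -angular_kp * offset
    return linear, angular
```

With the object centered at the target distance, both speeds are zero; as it drifts right or moves away, the commands grow proportionally.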

4.2. Node topic communication diagram

Enter the following in the docker terminal:


You can also modify parameters with the dynamic parameter adjuster. Enter the following in the docker terminal:


After modifying a parameter, click on a blank area of the GUI to write the parameter value into the program.

The meaning of each parameter is as follows:

colorHSV

| Parameter name | Parameter meaning |
| --- | --- |
| Hmin | H minimum value |
| Smin | S minimum value |
| Vmin | V minimum value |
| Hmax | H maximum value |
| Smax | S maximum value |
| Vmax | V maximum value |
| refresh | Refresh data into the program |
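The six HSV parameters define a box in color space: a pixel is kept only if each of its H, S, and V values lies between the corresponding min and max. The snippet below sketches that logic in plain numpy, mirroring what OpenCV's `cv2.inRange` does with these slider values; the function name is an illustration, not part of the program's actual code.

```python
import numpy as np

def hsv_mask(hsv_img, hmin, smin, vmin, hmax, smax, vmax):
    """Boolean mask of pixels inside the [min, max] HSV box.

    hsv_img -- array of shape (H, W, 3) already converted to HSV.
    Equivalent in spirit to cv2.inRange(hsv_img, lower, upper).
    """
    lower = np.array([hmin, smin, vmin])
    upper = np.array([hmax, smax, vmax])
    # A pixel passes only if all three channels are inside the box.
    return ((hsv_img >= lower) & (hsv_img <= upper)).all(axis=-1)
```

For red, a typical range is a low hue band with high saturation and value; widening Hmax admits more orange tones, while raising Smin rejects washed-out pixels.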

colorTracker

| Parameter name | Parameter meaning |
| --- | --- |
| linear_Kp | Linear speed P value |
| linear_Ki | Linear speed I value |
| linear_Kd | Linear speed D value |
| angular_Kp | Angular speed P value |
| angular_Ki | Angular speed I value |
| angular_Kd | Angular speed D value |
| scale | Proportional coefficient |
| minDistance | Tracking distance |
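The Kp/Ki/Kd parameters above are standard PID gains: the output is a weighted sum of the current error, its accumulated integral, and its rate of change. The class below is a minimal sketch of that computation (not the tracker's actual implementation); the class name, `dt` default, and method names are assumptions.

```python
class PID:
    """Minimal PID controller matching the linear/angular gain parameters."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt=0.05):
        """Advance one control cycle of length dt and return the output."""
        self.integral += error * dt
        # No derivative term on the very first sample.
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

In practice one controller would regulate linear speed on the distance error (relative to minDistance) and another would regulate angular speed on the horizontal pixel offset.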

5. Core code

5.1. colorHSV

This part mainly parses the voice command, processes the image, and finally publishes the center coordinates.
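Once the HSV mask has isolated the target color, the center coordinates can be taken as the centroid of the masked pixels. The helper below sketches that step in plain numpy; OpenCV code would typically find the largest contour and use its moments instead, so treat this function and its name as an illustration only.

```python
import numpy as np

def mask_center(mask):
    """Centroid (cx, cy) of True pixels in a boolean mask, or None if empty.

    Approximates taking the largest contour's moments in OpenCV
    (cv2.moments: cx = m10/m00, cy = m01/m00).
    """
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # color not found in this frame
    return float(xs.mean()), float(ys.mean())
```

The resulting (cx, cy) pair is what would be published as the center-coordinate topic for the tracker node to consume.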

5.2. colorTracker

This part subscribes to the center-coordinate topic and the depth data, then calculates the speed and publishes it to the chassis.

Reading the source code alongside the node topic communication diagram in section 4.2 will make it clearer.