After this function is turned on, the camera captures images and recognizes the gestures listed below to control the movement of the car.
| Gesture | Car action |
| --- | --- |
| Number "5" | Car stops |
| "Yes" | Car moves in a square |
| "OK" | Car moves in a circle |
| "Rock" (index and pinky fingers extended, the others bent) | Car moves in an S shape |
| "Contempt" (fist clenched, thumb extended and pointing down) | Car moves forward |
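In code, this table amounts to a lookup from the recognized gesture label to a motion routine. Below is a minimal sketch of that idea; the gesture strings and action names are assumptions for illustration, not the identifiers used in the actual FingerCtrl source.

```python
# Hypothetical gesture-to-action lookup mirroring the table above; the real
# labels and handlers live in the FingerCtrl source.
GESTURE_ACTIONS = {
    "five": "stop",         # open palm -> car stops
    "yes": "square",        # -> car drives a square
    "ok": "circle",         # -> car drives a circle
    "rock": "s_shape",      # index + pinky extended -> S-shaped path
    "contempt": "forward",  # thumb down -> car moves forward
}

def action_for(gesture: str) -> str:
    """Return the motion routine for a recognized gesture, or 'idle' if unknown."""
    return GESTURE_ACTIONS.get(gesture, "idle")
```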
After entering the Docker container, the source code for this function is located at:

```
/root/yahboomcar_ws/src/yahboomcar_mediapipe/yahboomcar_mediapipe/
```
Open a terminal and run the following command to enter the Docker container:

```bash
./docker_ros2.sh
```
When the following interface appears, you have successfully entered the Docker container.

Start the chassis:

```bash
ros2 launch yahboomcar_bringup bringup.launch.py
```
Open a new terminal and enter the same Docker container. Replace da8c4f47020a below with the container ID shown in your terminal:

```bash
docker ps
```

```bash
docker exec -it da8c4f47020a /bin/bash
```
After entering the Docker container, run the following in the terminal:

```bash
ros2 run yahboomcar_mediapipe FingerCtrl
```
After starting this function, place your hand in front of the camera. The screen draws the detected outline of your hand, and once the program recognizes a gesture, it sends the corresponding velocity to the chassis, thereby controlling the movement of the car.
```python
frame, lmList, bbox = self.hand_detector.findHands(frame)  # Detect the palm and get landmark coordinates
fingers = self.hand_detector.fingersUp(lmList)             # Determine which fingers are raised
gesture = self.hand_detector.get_gesture(lmList)           # Classify the gesture
```
For the specific implementation of these three functions, refer to media_library.py.

The implementation here is straightforward: the main function opens the camera, reads frames, and passes each one to the process function, which performs "detect palm" -> "get finger states" -> "get gesture" in sequence, and then decides which action to perform based on the recognized gesture, as the sketch below illustrates.
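As a rough illustration of that flow, here is a minimal sketch. It assumes a HandDetector class in media_library.py exposing the three methods above, OpenCV for capture, and a /cmd_vel topic for chassis velocity; the gesture labels, velocity values, and class name are placeholders, not the actual FingerCtrl source.

```python
import cv2
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
from media_library import HandDetector  # assumed class; see media_library.py

class FingerCtrlSketch(Node):
    """Illustrative sketch of the FingerCtrl flow, not the actual source."""

    def __init__(self):
        super().__init__('finger_ctrl_sketch')
        # '/cmd_vel' is an assumed topic name for the chassis velocity input.
        self.pub_cmd = self.create_publisher(Twist, '/cmd_vel', 10)
        self.hand_detector = HandDetector()  # assumed constructor
        self.capture = cv2.VideoCapture(0)   # assumed camera index

    def process(self, frame):
        # The three-stage pipeline described above.
        frame, lmList, bbox = self.hand_detector.findHands(frame)  # detect palm
        if lmList:
            fingers = self.hand_detector.fingersUp(lmList)    # finger states
            gesture = self.hand_detector.get_gesture(lmList)  # classify gesture
            self.act_on(gesture)
        return frame

    def act_on(self, gesture):
        # Hypothetical dispatch: publish a velocity matching the table above.
        twist = Twist()
        if gesture == "contempt":  # thumb down -> move forward
            twist.linear.x = 0.2
        elif gesture == "five":    # open palm -> stop (zero velocity)
            pass
        # The square, circle, and S-shape motions would sequence several
        # velocity commands over time rather than a single publish.
        self.pub_cmd.publish(twist)

def main():
    rclpy.init()
    node = FingerCtrlSketch()
    while rclpy.ok():
        ret, frame = node.capture.read()
        if not ret:
            break
        frame = node.process(frame)
        cv2.imshow("FingerCtrl", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to exit
            break
    node.capture.release()
    cv2.destroyAllWindows()
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```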