6 Pure visual 2D mapping and navigation

depthimage_to_laserscan: http://wiki.ros.org/depthimage_to_laserscan

depthimage_to_laserscan source code: https://github.com/ros-perception/depthimage_to_laserscan

6.1 Introduction

depthimage_to_laserscan takes a depth image (float-encoded meters, or preferably uint16-encoded millimeters for OpenNI devices) and generates a 2D laser scan based on the provided parameters. depthimage_to_laserscan uses lazy subscription and will not subscribe to the image or camera info topics until there is a subscriber for the output scan.

The depthimage_to_laserscan package converts depth images into laser-scan data, so its mapping and navigation functions work the same way as with a real lidar. Note: the scanning range of a depth camera is not 360°.
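The node can be started directly with rosrun; the depth topic below is the common OpenNI-style default and may differ for your camera driver:

```bash
# Remap the node's "image" input to your camera's depth image topic
# (/camera/depth/image_raw is typical for OpenNI-style drivers; adjust as needed).
rosrun depthimage_to_laserscan depthimage_to_laserscan image:=/camera/depth/image_raw
```

The node then publishes the converted laser scan on the scan topic.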

6.2 Usage

Note: The pure depth-camera mapping and navigation described in this section does not work well and is not recommended for practical use.

Note: When building a map, the slower the robot moves, the better the result; rotate especially slowly. Moving too fast will degrade the map.

Depending on the model you purchased, you only need to set it in [.bashrc]: X1 (ordinary four-wheel drive), X3 (Mecanum wheel), X3plus (Mecanum wheel with robotic arm), R2 (Ackermann steering), and so on. This section takes X3 as an example.

Open the [.bashrc] file

Find the [ROBOT_TYPE] parameter and set it to the corresponding model, for example:
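A minimal sketch of the relevant line in [.bashrc], using the ROBOT_TYPE variable described above:

```bash
# In ~/.bashrc: set the robot model (X1, X3, X3plus, R2, ...)
export ROBOT_TYPE=X3
```

After editing, run `source ~/.bashrc` (or open a new terminal) so the change takes effect.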

6.2.1 Mapping

Start command (robot side)

Mapping command (robot side)

Open the visualization interface (virtual machine side)
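The launch file names below are assumptions based on a typical yahboomcar_nav layout and may differ in your package version; substitute the actual files shipped with your robot:

```bash
# Robot side: bring up the chassis and depth camera, including
# depthimage_to_laserscan (hypothetical launch file name).
roslaunch yahboomcar_nav laser_bringup.launch

# Robot side: start the mapping (SLAM) node (hypothetical launch file name).
roslaunch yahboomcar_nav yahboomcar_map.launch

# Virtual machine side: open RViz to watch the map being built.
rosrun rviz rviz
```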


6.2.2 Control the robot

There may be some scattered points during the mapping process. If the mapping environment is well enclosed and regular, and the robot moves slowly, the scattering is much less pronounced.
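The robot can be driven during mapping with any keyboard teleoperation node, for example the standard teleop_twist_keyboard package (assuming it is installed and the robot listens on /cmd_vel):

```bash
# Publishes geometry_msgs/Twist on /cmd_vel from keyboard input.
rosrun teleop_twist_keyboard teleop_twist_keyboard.py
```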

6.2.3 Map save

The map will be saved to the ~/yahboomcar_ws/src/yahboomcar_nav/maps/ folder as two files: a .pgm image and a .yaml metadata file.
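The map can be saved with the standard map_server tool; the file name "map" below is an assumption, any name works:

```bash
# Writes map.pgm and map.yaml to the maps folder.
rosrun map_server map_saver -f ~/yahboomcar_ws/src/yahboomcar_nav/maps/map
```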

map.yaml

Parameter parsing:
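The listing below is a representative example of a map_server map.yaml, not the exact file generated on the robot; the resolution and origin values depend on your mapping run. The comments explain each parameter:

```yaml
image: map.pgm               # occupancy grid image saved next to this file
resolution: 0.05             # map resolution, meters per pixel
origin: [-10.0, -10.0, 0.0]  # (x, y, yaw) pose of the lower-left pixel in the map frame
negate: 0                    # 0: white = free, black = occupied; 1 inverts the encoding
occupied_thresh: 0.65        # cells above this occupancy probability are occupied
free_thresh: 0.196           # cells below this occupancy probability are free
```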

6.2.4 Navigation

Start command (robot side)

Navigation command (robot side)

Open the visualization interface (virtual machine side)
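As with mapping, the launch file names below are assumptions; use the files actually provided in your yahboomcar_nav package:

```bash
# Robot side: bring up the chassis and depth camera (hypothetical launch file name).
roslaunch yahboomcar_nav laser_bringup.launch

# Robot side: start the navigation stack with the saved map
# (hypothetical launch file name).
roslaunch yahboomcar_nav yahboomcar_navigation.launch

# Virtual machine side: open RViz to set the initial pose and send goals.
rosrun rviz rviz
```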

  1. Single-point navigation
  2. Multi-point navigation
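Goals are normally set with the 2D Nav Goal tool in RViz; they can also be published from the command line to move_base's standard goal topic. The coordinates below are illustrative:

```bash
# Send a single navigation goal in the map frame (x=1.0 m, y=0.5 m, facing forward).
rostopic pub -1 /move_base_simple/goal geometry_msgs/PoseStamped \
  '{header: {frame_id: "map"}, pose: {position: {x: 1.0, y: 0.5, z: 0.0}, orientation: {w: 1.0}}}'
```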

6.3 Topics and Services

Subscribed topics:

| Topic | Type | Description |
| :-- | :-- | :-- |
| image | sensor_msgs/Image | The input image, in floating-point or raw uint16 format. For OpenNI devices, uint16 is the native representation and is more efficient to process. This is usually /camera/depth/image_raw. If your image is distorted, you should remap this topic to image_rect; OpenNI cameras generally have little distortion, so rectification can usually be skipped for this application. |
| camera_info | sensor_msgs/CameraInfo | Camera information for the associated image. |

Published topics:

| Topic | Type | Description |
| :-- | :-- | :-- |
| scan | sensor_msgs/LaserScan | The output laser scan. The ranges array may contain NaN and ±Inf values. |

Node view
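The node graph can be inspected with the standard rqt tool:

```bash
rosrun rqt_graph rqt_graph
```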


6.4 Configuration parameters

| Parameter | Type | Default | Description |
| :-- | :-- | :-- | :-- |
| scan_height | int | 1 pixel | The number of pixel rows used to generate the laser scan. For each column, the scan returns the minimum value of those pixels centered vertically in the image. |
| scan_time | double | 1/30.0 s (0.033 s) | Time between scans (seconds). Typically 1.0/frame_rate. This value is not easily calculated from consecutive messages, so it is left to the user to set correctly. |
| range_min | double | 0.45 m | The minimum range to return (meters). Ranges less than this are output as -Inf. |
| range_max | double | 10.0 m | The maximum range to return (meters). Ranges greater than this are output as +Inf. |
| output_frame_id | str | camera_depth_frame | The frame id of the laser scan. For point clouds coming from an "optical" frame with Z forward, this value should be set to the corresponding frame with X forward and Z up. |
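These parameters are private to the node and can be set on the command line (or equivalently in a launch file); the topic name and values below are illustrative:

```bash
# Run the node with a taller scan band and explicit range limits.
rosrun depthimage_to_laserscan depthimage_to_laserscan \
  image:=/camera/depth/image_raw \
  _scan_height:=10 _scan_time:=0.033 \
  _range_min:=0.45 _range_max:=10.0 \
  _output_frame_id:=camera_depth_frame
```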

6.5 TF transformation

| Required TF transform | Description |
| :-- | :-- |
| laser --> base_link | The transform between the lidar (here, the converted scan) frame and the robot base frame, usually published by robot_state_publisher or a static_transform_publisher. |
| base_link --> odom | The transform between the robot base frame and the odometry frame, usually published by the odometry source (e.g. the chassis driver). |

| Published TF transform | Description |
| :-- | :-- |
| map --> odom | The transform between the map frame and the odometry frame, which estimates the robot's pose in the map. |

View tf tree
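The TF tree can be viewed with the standard rqt plugin:

```bash
rosrun rqt_tf_tree rqt_tf_tree
```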
