5. ORB_SLAM2 Basics

 

5. ORB_SLAM2 Basics
5.1 Introduction
5.2 Official Cases
5.2.1 Data set location
5.2.2 Entering the docker container
5.2.3 Monocular test
5.2.4 Binocular test
5.2.5 RGBD test
5.3 ORB_SLAM2 ROS2 camera test
5.3.1 Internal Reference Modification
5.3.2 Monocular test
5.3.3 Binocular test
5.3.4 RGBD testing

Official website: http://webdiis.unizar.es/~raulmur/orbslam/

TUM Dataset: http://vision.in.tum.de/data/datasets/rgbd-dataset/download

KITTI Dataset: http://www.cvlibs.net/datasets/kitti/eval_odometry.php

EuRoC Dataset: http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets

orb_slam2_ros: http://wiki.ros.org/orb_slam2_ros

ORB-SLAM: https://github.com/raulmur/ORB_SLAM

ORB-SLAM2: https://github.com/raulmur/ORB_SLAM2

ORB-SLAM3: https://github.com/UZ-SLAMLab/ORB_SLAM3

 

The operating environment and the reference hardware/software configuration are as follows:

 

5.1, Introduction

ORB-SLAM is mainly used for monocular SLAM;

ORB-SLAM version 2 supports monocular, binocular and RGBD interfaces;

ORB-SLAM version 3 adds tightly-coupled IMU fusion and supports fisheye cameras.

All stages of ORB-SLAM use the same ORB features of the image. ORB is a very fast feature extraction method that is rotation invariant, and scale invariance can be obtained by building an image pyramid. Using the same ORB features throughout gives the SLAM algorithm consistency across feature extraction and tracking, keyframe selection, 3D reconstruction, and loop-closure detection. The system is also robust to vigorous motion and supports wide-baseline loop-closure detection and relocalization, including fully automatic initialization. Since ORB-SLAM is a feature-point-based SLAM system, it can compute the camera trajectory in real time and produce a sparse 3D reconstruction of the scene.

Compared with ORB-SLAM, ORB-SLAM2 makes the following contributions:

1) The first open-source SLAM system for monocular, binocular and RGBD cameras, including loop closing, relocalization and map reuse.

2) The RGBD results show that using bundle adjustment (BA) achieves higher accuracy than methods based on ICP or on minimizing photometric and depth errors.

3) The binocular results are more accurate than those of direct binocular SLAM algorithms, by making use of both near and far points in the binocular pair as well as monocular observations.

4) The lightweight localization mode allows efficient reuse of the map.

ORB-SLAM2 contains modules common to all SLAM systems: Tracking, Mapping, Relocalization, and Loop closing. The following figure shows the flow of ORB-SLAM2.

12.5.1

 

5.2. Official Cases

Note: The commands or locations mentioned below are all in the docker container unless otherwise stated.

5.2.1 Data set location

The MH01 dataset from EuRoC and the rgbd_dataset_freiburg1_xyz dataset from TUM have already been downloaded inside the container.

image-20230417111610512

If you need another dataset, you can download it from the dataset links listed at the beginning of this chapter.

 

5.2.2. Entering the docker container

To enter the docker container, please refer to [Docker course chapter 5, Entering the docker container of the robot].

 

5.2.3 Monocular test

Here we take the EuRoC dataset as an example. First enter the docker container, and then enter the ORB_SLAM2 directory:

Run the following command:
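The exact command depends on where the ORB_SLAM2 build and the dataset live inside the container; assuming the standard ORB_SLAM2 examples layout and a hypothetical dataset path of ~/dataset/EuRoC/MH01, the monocular EuRoC run looks roughly like this:

```bash
cd ~/ORB_SLAM2    # assumed location of the ORB_SLAM2 build inside the container
./Examples/Monocular/mono_euroc \
    Vocabulary/ORBvoc.txt \
    Examples/Monocular/EuRoC.yaml \
    ~/dataset/EuRoC/MH01/mav0/cam0/data \
    Examples/Monocular/EuRoC_TimeStamps/MH01.txt
```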

If the following problem is encountered:

image-20230417110342053

 

it is caused by a docker display problem; running the command a few more times will succeed. The interface of a successful run is shown below; it will be displayed on the VNC desktop or on the cart screen.

image-20230417105947250

The blue boxes are the keyframes, the green box is the current camera pose, the black dots are the saved map points, and the red dots are the map points currently observed by the camera.

 

At the end of the test, the key frames are saved to the KeyFrameTrajectory.txt file in the current directory:

 

5.2.4, Binocular test

Here we take the EuRoC dataset as an example, enter the docker container, then go to the ORB_SLAM2 directory and run the following command:
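As with the monocular case, the paths below are assumptions; the standard ORB_SLAM2 stereo EuRoC example is invoked roughly like this:

```bash
cd ~/ORB_SLAM2    # assumed location of the ORB_SLAM2 build
./Examples/Stereo/stereo_euroc \
    Vocabulary/ORBvoc.txt \
    Examples/Stereo/EuRoC.yaml \
    ~/dataset/EuRoC/MH01/mav0/cam0/data \
    ~/dataset/EuRoC/MH01/mav0/cam1/data \
    Examples/Stereo/EuRoC_TimeStamps/MH01.txt
```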

The screen of a successful run is shown below:

image-20230417114807211

The blue boxes are the keyframes, the green box is the current camera pose, the black dots are the saved map points, and the red dots are the map points currently observed by the camera.

At the end of the test, the camera trajectory is saved to the CameraTrajectory.txt file in the current directory.

5.2.5, RGBD test

Here we use the TUM dataset, this time with depth information added. We need to associate the RGB images with the depth images, merging the color data and depth data into RGBD data.

The official associate.py script is provided at: https://svncvpr.in.tum.de/cvpr-ros-pkg/trunk/rgbd_benchmark/rgbd_benchmark_tools/src/rgbd_benchmark_tools/associate.py

Download the associate.py file.

Run associate.py with python; the associations.txt file is then generated in the specified path [this step has already been configured before the product is shipped].
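For reference, the usual way to run the script against the TUM sequence is shown below; the dataset path is an assumption, adjust it to where rgbd_dataset_freiburg1_xyz is stored:

```bash
# assumed dataset location; associate.py has been copied into this directory
cd ~/dataset/rgbd_dataset_freiburg1_xyz
python associate.py rgb.txt depth.txt > associations.txt
```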

 

Then test: enter the docker container, go to the ORB_SLAM2 directory, and enter the following command:
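Assuming the same dataset path as above, the standard ORB_SLAM2 RGB-D example for a freiburg1 sequence is invoked roughly as follows:

```bash
cd ~/ORB_SLAM2    # assumed location of the ORB_SLAM2 build
./Examples/RGB-D/rgbd_tum \
    Vocabulary/ORBvoc.txt \
    Examples/RGB-D/TUM1.yaml \
    ~/dataset/rgbd_dataset_freiburg1_xyz \
    ~/dataset/rgbd_dataset_freiburg1_xyz/associations.txt
```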

The screen of a successful run is shown below:

image-20230417122258974

CameraTrajectory.txt and KeyFrameTrajectory.txt are also saved at the end of the run.

image-20230417122536623

 

 

5.3, ORB_SLAM2 ROS2 camera test

 

The camera's intrinsic parameters (internal references) were already configured before the product left the factory, so you can start testing directly from 5.3.2. To learn how to do this yourself, see subsection [5.3.1, Internal Reference Modification].

5.3.1, Internal Reference Modification

Before running ORB-SLAM with the camera, its intrinsic parameters (internal references) are needed, so the camera must be calibrated first. For the specific method, see the [Astra camera calibration] lesson.

After calibration, move the [calibrationdata.tar.gz] file to the [home] directory.

After unzipping, open [ost.yaml] inside that folder and find the camera intrinsic matrix (internal reference matrix), for example:

The intrinsic matrix of the camera:
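The matrix in ost.yaml produced by the ROS calibration tool has the following shape; the numbers here are placeholders, not your calibration result. The data array is stored row by row as [fx, 0, cx, 0, fy, cy, 0, 0, 1]:

```yaml
camera_matrix:
  rows: 3
  cols: 3
  data: [517.3, 0.0, 318.6, 0.0, 516.5, 255.3, 0.0, 0.0, 1.0]   # placeholder values
```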

Copy the values in data into the corresponding entries of [mono.yaml] and [rgbd.yaml] in the [params] folder under the [yahboomcar_slam] package, as sketched below.
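Assuming mono.yaml and rgbd.yaml follow the standard ORB_SLAM2 settings format, the entries to update would look roughly like this (same placeholder values as above):

```yaml
# fx = data[0], fy = data[4], cx = data[2], cy = data[5]
Camera.fx: 517.3
Camera.fy: 516.5
Camera.cx: 318.6
Camera.cy: 255.3
```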

Path to the params folder:

 

5.3.2, Monocular test

Enter the docker container:

The robot can be held in your hand and used as a moving carrier for the test. If you are holding it by hand, there is no need to execute the next instruction; otherwise, execute it.

Start the camera ORB_SLAM2 test:
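The exact command depends on how the ros2_orbslam package is built in the container. Assuming it follows the upstream ros2-ORB_SLAM2 layout (an orbslam package with a mono executable), and assuming hypothetical paths for the vocabulary and settings file, the monocular node is started roughly like this after the camera node is running:

```bash
# hypothetical paths: vocabulary from the ORB_SLAM2 build, settings from yahboomcar_slam
ros2 run orbslam mono \
    ~/ORB_SLAM2/Vocabulary/ORBvoc.txt \
    ~/yahboomcar_ws/src/yahboomcar_slam/params/mono.yaml
```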

image-20230417161125907

When the command is executed, only a green box appears in the [ORB_SLAM2: Map Viewer] window, while the [ORB_SLAM2: Current Frame] window shows that initialization is being attempted. At this point, move the camera slowly up, down, left and right so that feature points can be found in the frame and SLAM can initialize.

After the test, the key frames are saved in the KeyFrameTrajectory.txt file in the following directory:

 

image-20230417160920005

As shown in the figure above, the system is now in [SLAM MODE]. When running monocular, every frame of the image must be received continuously for the camera to localize itself; if you select the [Localization Mode] (pure localization) option in the upper left, the camera will not be able to find its own position, and you will have to start over and acquire the keyframes again.

 

5.3.3, Binocular test

Since there is no binocular camera on the cart, we will not demonstrate it here. Those who have a binocular camera can follow the steps below to test:

Enter the docker container:

  1. Start the binocular camera node and check the name of the topic published by the camera
  2. Modify the subscription topic of binocular camera in orbslam to the topic published by your own binocular camera:

image-20230417163154688

  3. Recompile the ros2_orbslam function package (see the build sketch below):
  4. Restart the binocular camera node, then run the orbslam node
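A typical rebuild for step 3, assuming a colcon workspace at ~/ros2_ws and that the package is named ros2_orbslam (both are assumptions, adjust to your container):

```bash
cd ~/ros2_ws                                 # assumed workspace location
colcon build --packages-select ros2_orbslam  # package name is an assumption
source install/setup.bash
```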

The key frames are saved in the KeyFrameTrajectory.txt file in the following directory at the end of the test:

5.3.4, RGBD testing

Enter the docker container:
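Once inside the container and with the camera node running, the RGBD node can be started in the same way as the monocular test in 5.3.2. The sketch below assumes the upstream ros2-ORB_SLAM2 executable names and hypothetical paths:

```bash
# hypothetical paths: vocabulary from the ORB_SLAM2 build, settings from yahboomcar_slam
ros2 run orbslam rgbd \
    ~/ORB_SLAM2/Vocabulary/ORBvoc.txt \
    ~/yahboomcar_ws/src/yahboomcar_slam/params/rgbd.yaml
```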

 

image-20230417161654629

Unlike the monocular case, RGBD does not have to acquire every frame continuously; if you select the [Localization Mode] (pure localization) option in the upper left of the figure, the keyframes alone are enough for localization.

 

After the test, the key frames are saved in the KeyFrameTrajectory.txt file in the following directory: