Camera angle calibration in ROS2

The MR-B3RB platform is equipped with a camera connected to the NavQPlus, enabling a variety of functions such as robot visualization, Visual SLAM, and other computer vision and machine learning applications. The CogniPilot package for ROS2 includes an 'ov5645.ini' configuration file with the camera's intrinsic calibration data. This data is crucial for correcting lens distortion and understanding the camera's geometric properties in computer vision and robotics applications. However, this file is only indicative: users should perform their own camera calibration after assembling the MR-B3RB robot to ensure proper operation.

Introduction and background

Real camera lenses introduce distortion to images. The two major kinds of distortion are radial distortion and tangential distortion.

Radial distortion causes straight lines to appear curved, and it becomes larger the farther a point is from the center of the image. Tangential distortion occurs because the lens is not aligned perfectly parallel to the imaging plane, so some areas of the image may look nearer than expected. In short, we need to find five parameters, known as the distortion coefficients.
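The five coefficients above enter the widely used Brown–Conrady model (the one OpenCV's 'plumb_bob' distortion follows). The sketch below shows how they warp a normalized image point; the coefficient values are made up purely for illustration.

```python
# Sketch of the Brown-Conrady distortion model with five coefficients
# (k1, k2, k3 radial; p1, p2 tangential). Values are illustrative only.

def distort(x, y, k1, k2, p1, p2, k3):
    """Map an undistorted normalized image point (x, y) to its
    distorted location."""
    r2 = x * x + y * y                                  # squared distance from center
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3  # radial term
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# A point far from the image center is displaced much more than one near it:
print(distort(0.1, 0.1, -0.3, 0.1, 0.001, 0.001, 0.0))
print(distort(0.5, 0.5, -0.3, 0.1, 0.001, 0.001, 0.0))
```

Calibration is the process of recovering these coefficients so the mapping can be inverted and the image undistorted.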

Extrinsic parameters correspond to the rotation and translation vectors that transform the coordinates of a 3D point in the world frame into the camera's coordinate system.
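Concretely, the extrinsics apply p_cam = R · p_world + t. A minimal sketch with a hypothetical rotation and translation (the numbers are illustrative, not a real camera pose):

```python
import math

# Hypothetical extrinsics: a 90-degree rotation about the z axis and a
# small translation. These values are illustrative only.
theta = math.pi / 2
R = [[math.cos(theta), -math.sin(theta), 0.0],
     [math.sin(theta),  math.cos(theta), 0.0],
     [0.0,              0.0,             1.0]]
t = [0.1, 0.0, 0.5]

def world_to_camera(p, R, t):
    """Apply p_cam = R @ p_world + t, row by row."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

# A point one metre along the world x axis ends up on the camera's y axis,
# shifted by the translation:
print(world_to_camera([1.0, 0.0, 0.0], R, t))
```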

To find these parameters, we must provide some sample images of a well-defined pattern (e.g. a chessboard). We find some specific points whose relative positions we already know (e.g. the square corners of the chessboard). Because we know these points' coordinates both in real-world space and in the image, we can solve for the distortion coefficients. For good results, we need at least 10 test patterns.
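The "known real-world coordinates" are simply a flat grid: for a 9x6 chessboard the 54 inner corners all lie on the z = 0 plane of the board's own frame, spaced one square apart. A sketch of how such object points can be generated (matching the --size 9x6 --square 0.02 arguments used later in this tutorial):

```python
# Known 3D coordinates of the chessboard's inner corners, in the
# board's own frame. 9x6 inner corners, 20 mm squares, all at z = 0.
cols, rows, square = 9, 6, 0.02  # square size in metres

object_points = [(c * square, r * square, 0.0)
                 for r in range(rows) for c in range(cols)]

print(len(object_points))   # one 3D point per inner corner
print(object_points[-1])    # the corner farthest from the origin
```

The calibrator pairs each of these 3D points with the corresponding corner it detects in every image and solves for the camera parameters from those correspondences.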

In addition to this, we need some other information: the intrinsic and extrinsic parameters of the camera. Intrinsic parameters are specific to a camera and include information such as the focal length and optical center. The focal length and optical center can be used to build a camera matrix, which is used to remove the distortion caused by that specific camera's lens. The camera matrix is unique to a specific camera, so once calculated it can be reused on other images taken by the same camera.
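The camera matrix encodes exactly those quantities: the standard pinhole projection maps a 3D point (X, Y, Z) in the camera frame to pixel coordinates u = fx·X/Z + cx, v = fy·Y/Z + cy. A small sketch with made-up focal lengths and optical center (not the OV5645's real values):

```python
# Sketch of pinhole projection with the intrinsic parameters that make
# up the camera matrix K. All numbers are illustrative placeholders.
fx, fy = 500.0, 500.0   # focal lengths, in pixels
cx, cy = 320.0, 240.0   # optical center (principal point), in pixels

def project(p):
    """u = fx * X/Z + cx, v = fy * Y/Z + cy (distortion ignored)."""
    x, y, z = p
    return fx * x / z + cx, fy * y / z + cy

print(project((0.0, 0.0, 1.0)))   # a point on the optical axis lands on the optical center
print(project((0.1, 0.05, 1.0)))  # off-axis points land away from it
```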

Camera calibration example procedure

Note that the camera matrix is independent of the camera's position on the vehicle; the mounting position only affects the perspective transformation used to flatten the image after rectification.

With the theoretical background covered, we can now calibrate the camera on the MR-B3RB robot.

Before starting, make sure that you have followed the CogniPilot developer guide for MR-B3RB and have installed the most recent stable version of CogniPilot. (NXP-CUP: if you are a participant in the NXP CUP 2024, please also follow the tutorial NXP-CUP 2024 with MR-B3RB.)

Connect through SSH to the NavQPlus as explained in CogniPilot: CheatSheet for operation.

Install necessary packages

sudo apt update
sudo apt install ros-humble-camera-calibration-parsers
sudo apt install ros-humble-camera-info-manager
sudo apt install ros-humble-launch-testing-ament-cmake

Install b3rb_camera_calibration package

cd ~/cognipilot/cranium/src
git clone https://github.com/NXPHoverGames/b3rb_camera_calibration
cd ~/cognipilot/cranium/
colcon build --symlink-install --packages-select b3rb_camera_calibration
source ~/cognipilot/cranium/install/setup.sh

Start calibration procedure

Remember to print the calibration pattern: Calibration pattern

In one terminal run the following command:

ros2 launch b3rb_camera_calibration b3rb_camera_calibration_launch.py 

Then open a new terminal, connect through SSH to the NavQPlus, and run:

ros2 run b3rb_camera_calibration cameracalibrator --size 9x6 --square 0.02 --ros-args -r image:=/camera/image_raw -p camera:='chessboard'

If you want to see the output, you can run the following on your host computer:

ros2 launch electrode electrode.launch.py

This will open Foxglove, where you can select the topic /calibration_camera_feed in the image panel.

You should be able to see something like this:

Calibration procedure

For the calibration process, it's advised to keep the robot stationary and move the calibration pattern to different positions instead. Place it at various angles, because the calibration system requires multiple samples and poses to calibrate accurately.

You will see the 'Continue Sampling' message in the terminal until the calibration process has collected enough samples. After that, the message will switch to 'READY TO CALIBRATE.' Additionally, the 'calibrate' text in the Foxglove image will change to blue, as shown in the image below:

Finish calibration

After the calibration node has collected enough samples, you'll need to open a new terminal window. Then, connect to the NavQPlus via SSH to begin calculating the camera matrix. In this new terminal, execute the following command to start the calibration:

ros2 topic pub /calibration_control std_msgs/msg/String "data: 'start_calibration'" --once

For saving the results please run the following command:

ros2 topic pub /calibration_control std_msgs/msg/String "data: 'save_results'" --once

You should then see a message similar to the following:

The calibration has now finished successfully. To stop the calibration, run:

ros2 topic pub /calibration_control std_msgs/msg/String "data: 'stop'" --once

View the results

To see the calibration results, first navigate to the cognipilot folder:

cd ~/cognipilot

Then create a new folder:

mkdir calibration_data

Then extract the archive into this new folder:

tar -xzvf calibrationdata.tar.gz -C calibration_data/

The files ost.txt and ost.yaml contain the camera matrix and distortion parameters.
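The ost.yaml file follows the standard ROS camera_info YAML layout. It will look roughly like the sketch below; every numeric value here is an illustrative placeholder, not a real OV5645 result:

```yaml
# Illustrative sketch of an ost.yaml; all values are placeholders.
image_width: 640
image_height: 480
camera_name: narrow_stereo
camera_matrix:
  rows: 3
  cols: 3
  data: [500.0, 0.0, 320.0, 0.0, 500.0, 240.0, 0.0, 0.0, 1.0]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-0.3, 0.1, 0.001, 0.001, 0.0]
```

The camera_matrix data holds fx, 0, cx, 0, fy, cy, 0, 0, 1 in row-major order, and the five distortion coefficients are k1, k2, p1, p2, k3.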

Calibration pattern

You have to print a calibration pattern. This tutorial uses a 9x6 chessboard pattern, but you can use a different one. This website lets you generate the pattern you prefer: https://calib.io/pages/camera-calibration-pattern-generator

This is the one used in this tutorial: https://github.com/opencv/opencv/blob/4.x/doc/pattern.png

Please print it on A4 paper, maintaining the original size.
