This post is an application of the EOL calibration method described in the EOL calibration article.

Detect Image Corners

Detecting chessboard corners directly on the original fisheye image is challenging because of the severe distortion, so we instead detect the corners on the BEV image and then project them back onto the original image for further refinement.

  1. Set initial extrinsics (either the installation parameters, i.e. mounting angles and positions, or parameters from a joint calibration with LiDAR), construct a 20m×20m grid with a resolution of 0.01m on the ground plane (z=0) of the ego coordinate system, and generate a BEV projected image;

  2. Manually select the rectangular area that covers the chessboard corners of interest in the BEV image, use OpenCV’s findChessboardCornersSB to find the corners, and compute the corners’ coordinates in the ego coordinate system;

  3. Use the corners’ ego-frame coordinates and the initial extrinsics to project the BEV corners back onto the original image, and then use OpenCV’s cornerSubPix to refine them to subpixel accuracy in the original image. Sketches of these three steps are given below.
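As a rough illustration of step 1, the following Python sketch renders a BEV image by projecting an ego-frame ground grid into the fisheye image and sampling it with cv2.remap. It assumes the OpenCV fisheye model; the intrinsics K and D, the extrinsics R_cam_ego and t_cam_ego (ego frame to camera frame), and the function name are placeholders, not code from the actual pipeline.

```python
import cv2
import numpy as np

def build_bev_image(fisheye_img, K, D, R_cam_ego, t_cam_ego,
                    bev_range=20.0, resolution=0.01):
    """Render a BEV image by sampling the fisheye image on the ego ground plane."""
    n = int(bev_range / resolution)              # 20 m x 20 m at 0.01 m -> 2000 x 2000 pixels
    # Ego-frame ground grid (z = 0), centered on the ego origin.
    xs = (np.arange(n) - n / 2) * resolution
    gx, gy = np.meshgrid(xs, xs)
    pts_ego = np.stack([gx, gy, np.zeros_like(gx)], axis=-1).reshape(-1, 1, 3)
    # Project every ground point into the fisheye image with the initial extrinsics.
    rvec, _ = cv2.Rodrigues(R_cam_ego)
    img_pts, _ = cv2.fisheye.projectPoints(pts_ego, rvec, t_cam_ego.reshape(3, 1), K, D)
    maps = img_pts.reshape(n, n, 2).astype(np.float32)
    # NOTE: grid points behind the camera project to spurious pixels and should be masked.
    map_x = np.ascontiguousarray(maps[..., 0])
    map_y = np.ascontiguousarray(maps[..., 1])
    return cv2.remap(fisheye_img, map_x, map_y, cv2.INTER_LINEAR)
```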
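Step 2 could then look roughly like this: crop a hand-picked rectangle from the BEV, run findChessboardCornersSB inside it, and invert the BEV grid mapping to recover ego-frame coordinates. The ROI format, pattern size, and helper name are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_corners_on_bev(bev_img, roi, pattern_size,
                          bev_range=20.0, resolution=0.01):
    """Find chessboard corners inside a hand-picked ROI of the BEV image and
    convert them to ego ground-plane coordinates (z = 0)."""
    x, y, w, h = roi                                   # rectangle selected by hand on the BEV
    gray = cv2.cvtColor(bev_img[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCornersSB(gray, pattern_size)
    if not found:
        return None, None
    corners_bev = corners.reshape(-1, 2) + np.float32([x, y])   # full BEV pixel coordinates
    # Invert the grid used to render the BEV: pixel (u, v) -> ego (x, y) in meters.
    n = int(bev_range / resolution)
    ego_xy = (corners_bev - n / 2) * resolution
    ego_xyz = np.hstack([ego_xy, np.zeros((len(ego_xy), 1), np.float32)])
    return corners_bev, ego_xyz
```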
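And a sketch of step 3, reusing the assumed intrinsics and extrinsics from above to project the ego-frame corners back into the original fisheye image and refine them with cornerSubPix:

```python
import cv2
import numpy as np

def refine_corners_on_original(fisheye_img, corners_ego, K, D, R_cam_ego, t_cam_ego,
                               win_size=(5, 5)):
    """Project ego-frame corners back into the original fisheye image and refine
    them to subpixel accuracy with cornerSubPix."""
    rvec, _ = cv2.Rodrigues(R_cam_ego)
    pts, _ = cv2.fisheye.projectPoints(
        corners_ego.reshape(-1, 1, 3).astype(np.float64),
        rvec, t_cam_ego.reshape(3, 1), K, D)
    pts = pts.reshape(-1, 1, 2).astype(np.float32)
    gray = cv2.cvtColor(fisheye_img, cv2.COLOR_BGR2GRAY)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    refined = cv2.cornerSubPix(gray, pts, win_size, (-1, -1), criteria)
    return refined.reshape(-1, 2)     # subpixel corner coordinates in the original image
```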

The following figure shows the above process for the front fisheye image.


The left image shows the original front fisheye image; the middle image shows the BEV of the front fisheye image with the detected corners; the right image shows the BEV-detected corners back-projected onto the original image along with the refined corners.

Associate Image Corners with Measured Ground Points

Associate the detected image points with the measured ground points: if the initial extrinsics are already good enough, project the refined corners onto the BEV (ego ground plane) and take the nearest measured ground point of each corner as its corresponding world point; otherwise, the world points have to be assigned manually. A sketch of the nearest-neighbor association follows the figure below.

The measured ground points and their tags.
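Assuming the refined corners have already been projected onto the ego ground plane with the initial extrinsics, the association can be a simple nearest-neighbor query. The sketch below uses scipy's cKDTree; the max_dist rejection threshold and the tag list are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def associate_with_measured_points(corners_ego_xy, measured_xy, measured_tags,
                                   max_dist=0.10):
    """Match each detected corner (projected onto the ego ground plane) to the
    nearest measured ground point; matches farther than max_dist meters are
    rejected so a poor initial extrinsic does not produce silent mismatches."""
    tree = cKDTree(measured_xy)
    dists, idx = tree.query(corners_ego_xy)
    matches = []
    for corner_idx, (d, j) in enumerate(zip(dists, idx)):
        if d <= max_dist:
            matches.append((corner_idx, measured_tags[j], measured_xy[j]))
    return matches    # list of (corner index, tag, world point)
```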

Once the image points are associated with the measured ground points, we can follow the EOL calibration article to do single camera calibration and joint calibration.