====== HW 2 - Camera Calibration ======

===== Goals =====

  - Download the {{:courses:b3b33vir:tutorials:hw2:hw2.zip|source code package}} and implement the missing parts of the module ''intrinsics.py'' (see below)
  - Get familiar with the pinhole camera model, the nonlinear distortion model, and the respective [[https://docs.opencv.org/3.4.3/d9/d0c/group__calib3d.html|calibration routines from OpenCV]]
  - Calibrate camera intrinsic parameters
    * Implement function
      * ''intrinsics.calibrate''
    * Using functions
      * ''cv2.findChessboardCorners''
      * ''cv2.cornerSubPix'' (limit the search window size ''winSize'' to avoid excessive corrections)
      * ''cv2.calibrateCamera'' (use the rational model without tangential distortion from the lecture: ''flags=cv2.CALIB_RATIONAL_MODEL + cv2.CALIB_ZERO_TANGENT_DIST'')
  - Convert camera intrinsic parameters (camera matrix and field of view)
    * Implement functions
      * ''intrinsics.create_camera_matrix''
      * ''intrinsics.camera_field_of_view''
  - Undistort images using the found intrinsic parameters and new pinhole camera parameters
    * Implement function
      * ''intrinsics.remap''
    * Using
      * ''cv2.initUndistortRectifyMap''
      * ''cv2.remap'' (use bilinear interpolation)

Illustrative sketches of these functions are given below, after the usage examples.

===== Module Usage =====

  python multimodal_dataset.py --help
  python intrinsics.py --help

Print out the command-line parameters of the modules.

  python multimodal_dataset.py download
  python multimodal_dataset.py calibrate

Download the multimodal dataset [1] and calibrate the intrinsic parameters of its cameras (calls ''intrinsics.calibrate''). The calibration is saved into the following JSON files:

  * ''data/multimodal_ir_intrinsics.json''
  * ''data/multimodal_left_intrinsics.json''
  * ''data/multimodal_right_intrinsics.json''

  python intrinsics.py calibrate intrinsics.json --pattern COLS ROWS --unit UNIT -- IMAGE [IMAGE ...]
  python intrinsics.py calibrate intrinsics.json --pattern COLS ROWS --unit X_UNIT Y_UNIT Z_UNIT -- IMAGE [IMAGE ...]
  python intrinsics.py calibrate data/multimodal_left_intrinsics.json --pattern 8 9 --unit 0.061 0.047 0.0 -- data/calibration_sequence_I/*Left.ppm
  python intrinsics.py calibrate data/multimodal_right_intrinsics.json --pattern 8 9 --unit 0.061 0.047 0.0 -- data/calibration_sequence_I/*Right.ppm

Calibrate a camera using a list of image files and the parameters of the calibration pattern (calls ''intrinsics.calibrate''). The calibration is saved to the specified file. The last two commands partially reproduce the calibration done by ''python multimodal_dataset.py calibrate'' (excluding the infra-red camera).

  python intrinsics.py remap intrinsics.json --fov FOV --size ROWS COLS IMAGE [IMAGE ...]
  python intrinsics.py remap data/multimodal_ir_intrinsics.json --fov 40 --size 426 534 data/calibration_sequence_I/*IR1_crop.bmp

Undistort (i.e., remove radial distortion from) images using the calibration (calls ''intrinsics.remap'' and ''intrinsics.create_camera_matrix''). The images are re-projected into a new pinhole camera with the given parameters (field of view and image size).

  python intrinsics.py remap intrinsics.json --alpha ALPHA IMAGE [IMAGE ...]
  python intrinsics.py remap data/multimodal_left_intrinsics.json --alpha 0 data/calibration_sequence_I/*Left.ppm
  python intrinsics.py remap data/multimodal_left_intrinsics.json --alpha 1 data/calibration_sequence_I/*Left.ppm

Undistort images using the calibration (calls ''intrinsics.remap'' and ''intrinsics.camera_field_of_view''). Instead of specifying the parameters manually, optimal camera parameters are estimated to ensure that all pixels are valid (''%%--alpha 0%%'') or that no information is lost (''%%--alpha 1%%''). Any value in between can also be used.
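The following is a minimal sketch of ''intrinsics.calibrate'' under stated assumptions: the function receives a list of image paths, the pattern size, and the physical corner spacing, and returns the re-projection error, camera matrix, and distortion coefficients. The actual interface is defined in ''intrinsics.py'' and may differ; the pattern and unit values mirror the commands above.

<code python>
import cv2
import numpy as np

def calibrate(image_paths, pattern, unit):
    """Sketch: estimate intrinsics from chessboard images.

    pattern -- (cols, rows) of inner chessboard corners
    unit -- physical spacing of the corners, e.g. (0.061, 0.047, 0.0)
    """
    cols, rows = pattern
    # 3-D corner positions of one board pose, scaled by the cell size.
    grid = np.zeros((cols * rows, 3), np.float32)
    grid[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2)
    grid *= np.float32(unit)

    obj_points, img_points = [], []
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(img, (cols, rows))
        if not found:
            continue
        # Small search window to avoid excessive corrections (see Goals).
        corners = cv2.cornerSubPix(img, corners, (3, 3), (-1, -1), criteria)
        obj_points.append(grid)
        img_points.append(corners)

    # Rational model without tangential distortion, as required above.
    flags = cv2.CALIB_RATIONAL_MODEL + cv2.CALIB_ZERO_TANGENT_DIST
    size = (img.shape[1], img.shape[0])  # (width, height)
    rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        obj_points, img_points, size, None, None, flags=flags)
    return rms, camera_matrix, dist_coeffs
</code>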
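The conversion functions follow from the pinhole relation ''tan(fov/2) = (size/2) / f''. The sketch below assumes the FOV is horizontal and given in degrees, pixels are square, and the principal point lies in the image center; the convention actually expected by ''intrinsics.py'' may differ.

<code python>
import numpy as np

def create_camera_matrix(fov, size):
    """Sketch: camera matrix from a horizontal field of view in degrees.

    size -- (rows, cols) of the image
    """
    rows, cols = size
    f = (cols / 2.0) / np.tan(np.radians(fov) / 2.0)  # square pixels: fx == fy
    return np.array([[f,   0.0, (cols - 1) / 2.0],
                     [0.0, f,   (rows - 1) / 2.0],
                     [0.0, 0.0, 1.0]])

def camera_field_of_view(camera_matrix, size):
    """Sketch: inverse conversion, horizontal and vertical FOV in degrees."""
    rows, cols = size
    fov_x = 2.0 * np.degrees(np.arctan2(cols / 2.0, camera_matrix[0, 0]))
    fov_y = 2.0 * np.degrees(np.arctan2(rows / 2.0, camera_matrix[1, 1]))
    return fov_x, fov_y
</code>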
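Finally, ''intrinsics.remap'' reduces to two OpenCV calls. For the ''%%--alpha%%'' variant, ''cv2.getOptimalNewCameraMatrix'' is one standard way to estimate the new camera matrix; treating it as the method used here is our assumption.

<code python>
import cv2

def remap(img, camera_matrix, dist_coeffs, new_camera_matrix, size):
    """Sketch: re-project img into a new pinhole camera; size is (rows, cols)."""
    rows, cols = size
    map1, map2 = cv2.initUndistortRectifyMap(
        camera_matrix, dist_coeffs, None, new_camera_matrix,
        (cols, rows), cv2.CV_32FC1)
    # Bilinear interpolation, as required above.
    return cv2.remap(img, map1, map2, cv2.INTER_LINEAR)

def remap_alpha(img, camera_matrix, dist_coeffs, alpha):
    """Sketch of the --alpha path: keep the image size and let OpenCV trade off
    all-valid pixels (alpha=0) against no lost pixels (alpha=1)."""
    h, w = img.shape[:2]
    new_camera_matrix, _roi = cv2.getOptimalNewCameraMatrix(
        camera_matrix, dist_coeffs, (w, h), alpha, (w, h))
    return remap(img, camera_matrix, dist_coeffs, new_camera_matrix, (h, w))
</code>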
===== Expected Results =====

Expected re-projection errors (the first output of ''cv2.calibrateCamera'') for the multimodal dataset are approximately:

  * Left camera: 0.37 px (0.67 px without sub-pixel refinement via ''cv2.cornerSubPix'')
  * Right camera: 0.33 px (0.68 px)
  * IR camera: 0.78 px (1.27 px)

Original and undistorted images from the left camera should look similar to these:

| {{ :courses:b3b33vir:tutorials:hw2:2010_10_19_14_05_46_bb_left.png?direct&400 |}} | {{ :courses:b3b33vir:tutorials:hw2:2010_10_19_14_05_46_bb_left_45_480x640.png?direct&400 |}} | {{ :courses:b3b33vir:tutorials:hw2:2010_10_19_14_05_46_bb_left_46_480x640.png?direct&400 |}} |
| Original | Undistorted, ''%%--alpha 0%%'' | Undistorted, ''%%--alpha 1%%'' |

===== References =====

[1] Barrera F., Lumbreras F., Sappa A. Multimodal Stereo Vision System: 3D Data Extraction and Algorithm Evaluation. //IEEE Journal of Selected Topics in Signal Processing//, Vol. 6, No. 5, September 2012, pp. 437--446.