===== Goals =====

  - Download {{:courses:b3b33vir:tutorials:hw2:hw2.zip|source code package}} and implement missing parts in module ''intrinsics.py'' (see below)
  - Get familiar with the pinhole camera model, the nonlinear distortion model and the respective [[https://docs.opencv.org/3.4.3/d9/d0c/group__calib3d.html|calibration routines from OpenCV]]
  - Calibrate camera intrinsic parameters (see the sketch below this list)
    * Implement function
      * ''intrinsics.calibrate''
    * Using functions
      * ''cv2.findChessboardCorners''
      * ''cv2.cornerSubPix'' (limit the search window size ''winSize'' to avoid excessive corrections)
      * ''cv2.calibrateCamera'' (use the rational model without tangential distortion from the lecture: ''flags=cv2.CALIB_RATIONAL_MODEL + cv2.CALIB_ZERO_TANGENT_DIST'')
  - Convert camera intrinsic parameters (camera matrix and field of view)
    * Implement functions
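
The following is a minimal sketch of what ''intrinsics.calibrate'' may do, not the reference solution: the chessboard pattern size, the square size and the image paths are assumptions and have to be adapted to the provided dataset and to the actual function signature in the source code package.

<code python>
# Hedged sketch of the calibration step; pattern size, square size and
# image paths below are assumptions, not values from the assignment.
import glob
import cv2
import numpy as np

pattern_size = (8, 6)  # inner corners per chessboard row and column (assumed)
square_size = 0.03     # edge of one chessboard square in metres (assumed)

# 3-D corner coordinates in the board frame (z = 0), shared by all images.
board = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
board[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

object_points, image_points, image_size = [], [], None
for path in glob.glob('data/multimodal/left/*.png'):  # assumed path
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]  # (width, height)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        continue
    # Sub-pixel refinement; a small winSize keeps the corners from drifting.
    corners = cv2.cornerSubPix(
        gray, corners, winSize=(5, 5), zeroZone=(-1, -1),
        criteria=(cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    object_points.append(board)
    image_points.append(corners)

# Rational model without tangential distortion, as required above.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, image_size, None, None,
    flags=cv2.CALIB_RATIONAL_MODEL + cv2.CALIB_ZERO_TANGENT_DIST)
print('Re-projection error: %.2f px' % rms)
</code>

The returned ''rms'' is the re-projection error reported in the Expected Results section below.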
<code>
python multimodal_dataset.py calibrate
</code>
Download the multimodal dataset [1] and calibrate the intrinsic parameters of its cameras (calls ''intrinsics.calibrate'').
The calibration is saved into the following JSON files:
  * ''data/multimodal_ir_intrinsics.json''
Undistort images using the calibration (calls ''intrinsics.remap'' and ''intrinsics.camera_field_of_view'').
Instead of specifying the parameters manually, optimal camera parameters are estimated to ensure that all pixels are valid (''%%--alpha 0%%'') or that no information is lost (''%%--alpha 1%%''). Any value in between can also be used. \\
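
Below is a minimal sketch of the undistortion step using generic OpenCV routines. ''K'' and ''dist'' are the camera matrix and distortion coefficients from the calibration sketch above; the image path and the field-of-view formula are assumptions, and the actual ''intrinsics.remap'' and ''intrinsics.camera_field_of_view'' interfaces are defined by the source code package.

<code python>
# Hedged sketch; K and dist are taken from the calibration sketch above,
# the image path is an assumption.
import cv2
import numpy as np

img = cv2.imread('data/multimodal/left/0001.png')  # assumed path
h, w = img.shape[:2]
alpha = 0.0  # 0: all output pixels valid, 1: no input pixels lost

# New camera matrix that trades cropping to valid pixels (alpha=0)
# against keeping the complete, padded field of view (alpha=1).
new_K, valid_roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), alpha)

# Precompute the undistortion maps once, then remap every image of the camera.
map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, new_K, (w, h), cv2.CV_32FC1)
undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)

# Field of view (degrees) of the pinhole camera described by new_K.
fov_x = np.degrees(2.0 * np.arctan2(w / 2.0, new_K[0, 0]))
fov_y = np.degrees(2.0 * np.arctan2(h / 2.0, new_K[1, 1]))
</code>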

===== Expected Results =====

Expected re-projection errors (the first output of ''cv2.calibrateCamera'') for the multimodal dataset are approximately:
  * Left camera:  0.37 px (0.67 px without sub-pixel refinement via ''cv2.cornerSubPix'')
  * Right camera: 0.33 px (0.68 px)
  * IR camera: 0.78 px (1.27 px)

Original and undistorted images from the left camera should look similar to these:

| {{ :courses:b3b33vir:tutorials:hw2:2010_10_19_14_05_46_bb_left.png?direct&400 |}} | {{ :courses:b3b33vir:tutorials:hw2:2010_10_19_14_05_46_bb_left_45_480x640.png?direct&400 |}} | {{ :courses:b3b33vir:tutorials:hw2:2010_10_19_14_05_46_bb_left_46_480x640.png?direct&400 |}} |
| Original | Undistorted, ''%%--alpha 0%%'' | Undistorted, ''%%--alpha 1%%'' |

===== References =====

[1] Barrera F., Lumbreras F., Sappa A. Multimodal Stereo Vision System: 3D Data Extraction and Algorithm Evaluation. In //IEEE Journal of Selected Topics in Signal Processing//, Vol. 6, No. 5, September 2012, pp. 437--446.