
Homework 07 - Autocalibration from vanishing points

The task

1. Extract vanishing points

  1. Download two images of 'pokemons' from the upload system (InputData).
  2. Download the coordinates of the corners of the poster and of the black square: u1 in the first image and u2 in the second. The corners are in clockwise order and correspond between the images (the correspondence is not used for calibration, but later).
  3. Construct the vanishing points from the poster and from the black square; store them as vp1 and vp2 (four points each) for the first and the second image, respectively (a construction sketch is given below).
  4. Show both images, draw the vanishing points into both images and connect (by a line) each vanishing point with all corners of the corresponding quadrilateral. Also connect the two most distant vanishing points by a line in each of the images. Export as 07_vp1.pdf and 07_vp2.pdf. Then show each figure zoomed in so that the image is clearly visible and export as 07_vp1_zoom.pdf and 07_vp2_zoom.pdf.
Example of vanishing points in the first image
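
A vanishing point can be obtained as the intersection of the lines through two opposite sides of the poster or of the square, using cross products of homogeneous coordinates. A minimal sketch in Python, assuming u is a 2x4 array with the corners of one quadrilateral in clockwise order (the helper name is only illustrative):

  import numpy as np

  def quad_vanishing_points( u ):
      # u: 2x4 array of corner coordinates of one quadrilateral, clockwise order
      x = np.vstack( ( u, np.ones( ( 1, 4 ) ) ) )   # homogeneous coordinates
      l1 = np.cross( x[:,0], x[:,1] )               # line through side 0-1
      l2 = np.cross( x[:,3], x[:,2] )               # line through the opposite side 3-2
      m1 = np.cross( x[:,1], x[:,2] )               # line through side 1-2
      m2 = np.cross( x[:,0], x[:,3] )               # line through the opposite side 0-3
      v1 = np.cross( l1, l2 )                       # intersection = vanishing point
      v2 = np.cross( m1, m2 )
      return v1 / v1[2], v2 / v2[2]                 # normalized so the third coordinate is 1

Applying this to the poster corners and to the square corners of u1 gives the four vanishing points vp1; vp2 is obtained from u2 in the same way.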

2. Calibration

  1. Compute the camera calibration matrix K from the vanishing points. From the four available pairs of vanishing points of perpendicular directions (two pairs in each image), select three pairs (one possible approach is sketched after this list).
  2. Compute the angle (it should be acute) in the scene between the square and the rectangle. Use the mean value of the four angles computed from the four pairs of vanishing points.
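
One possible way to compute K (a sketch, not necessarily the intended solution): assume zero skew and unit aspect ratio, so the image of the absolute conic $\omega = K^{-\top} K^{-1}$ has only four unknown entries, and each pair $(v, w)$ of vanishing points of perpendicular scene directions gives one linear constraint $v^\top \omega\, w = 0$. With three pairs, the null space of a 3x4 system determines $\omega$, and K follows from its Cholesky factorization (the helper name is illustrative):

  import numpy as np

  def K_from_vp_pairs( pairs ):
      # pairs: three (v, w) tuples of homogeneous vanishing points of
      # perpendicular scene directions; zero skew and unit aspect ratio assumed,
      # i.e. omega = [[o1, 0, o2], [0, o1, o3], [o2, o3, o4]]
      A = [ [ v[0]*w[0] + v[1]*w[1],
              v[0]*w[2] + v[2]*w[0],
              v[1]*w[2] + v[2]*w[1],
              v[2]*w[2] ] for v, w in pairs ]
      o = np.linalg.svd( np.array( A ) )[2][-1]      # null-space vector of the 3x4 system
      omega = np.array( [ [ o[0],  0.0, o[1] ],
                          [  0.0, o[0], o[2] ],
                          [ o[1], o[2], o[3] ] ] )
      if omega[0,0] < 0:                             # fix the overall sign; omega must be positive definite
          omega = -omega
      U = np.linalg.cholesky( omega ).T              # omega = U.T @ U, U upper triangular
      K = np.linalg.inv( U )                         # since omega ~ inv(K).T @ inv(K)
      return K / K[2,2]                              # normalize K(3,3) = 1

The angle between two scene directions with vanishing points $v$ and $w$ can then be obtained from $\cos\theta = v^\top \omega\, w / \sqrt{(v^\top \omega\, v)(w^\top \omega\, w)}$.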

3. Virtual object

  1. Use K to compute the pose of the calibrated camera w.r.t. the black square using P3P. Compute the camera centers C1, C2 and rotations R1, R2 (for both images). Choose one corner of the square as the origin and consider the square sides to have unit length.
  2. Create a virtual object: 'place' a cube into the two images. The black square is the bottom face of the cube, which sits on the poster. Show the wire-frame cube in each of the images and export as 07_box_wire1.pdf and 07_box_wire2.pdf (a projection sketch is given below the examples).
  3. Generate a sequence of 20 virtual views of the cube, interpolating the camera from the first image to the second. Use the first image transformed by a homography. Store the middle image of the sequence as 07_box_wire3.pdf and the whole sequence as 07_seq_wire.avi.
  4. [Up to two extra points] Texture the faces of the cube with a chosen texture using homographies. Do not plot the wire-frame cube; instead, create the images directly in the bitmap. Then store the middle image as 07_box_tx.png (bitmap, not a figure) and the sequence as 07_seq_tx.avi.
Examples of a virtual object
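
Once a pose is selected from the P3P solutions, the cube can be drawn by projecting its eight vertices with the camera matrix $P = K R [I \mid -C]$. A minimal sketch, assuming the black square spans the unit square in the plane $z = 0$ of the chosen world frame (whether the top face is at $z = 1$ or $z = -1$ depends on the orientation of that frame):

  import numpy as np

  def project_cube( K, R, C ):
      # vertices of a unit cube; rows are x, y, z, columns are the 8 vertices
      X = np.array( [ [ 0., 1., 1., 0., 0., 1., 1., 0. ],
                      [ 0., 0., 1., 1., 0., 0., 1., 1. ],
                      [ 0., 0., 0., 0., 1., 1., 1., 1. ] ] )
      P = K @ R @ np.hstack( ( np.eye( 3 ), -C.reshape( 3, 1 ) ) )   # 3x4 camera matrix
      x = P @ np.vstack( ( X, np.ones( ( 1, 8 ) ) ) )                # homogeneous projection
      return x[:2] / x[2]                                            # euclidean image points

  # edges of the wire-frame cube as vertex index pairs
  edges = [ (0,1), (1,2), (2,3), (3,0),       # bottom face (the black square)
            (4,5), (5,6), (6,7), (7,4),       # top face
            (0,4), (1,5), (2,6), (3,7) ]      # vertical sides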

4. Save all data

  1. Save all of u1, u2, vp1, vp2, K, angle, C1, C2, R1, R2 into 07_data.mat. All points are Euclidean column vectors, the (acute) angle is in radians, and K(3,3) = 1.
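
In Python, the file can be written e.g. with scipy.io.savemat; a sketch, assuming all the listed variables have already been computed above with points stored column-wise:

  import scipy.io

  scipy.io.savemat( '07_data.mat', {
      'u1': u1, 'u2': u2,         # corners of the poster and of the square (euclidean)
      'vp1': vp1, 'vp2': vp2,     # four vanishing points per image (euclidean)
      'K': K,                     # 3x3 calibration matrix with K[2,2] = 1
      'angle': angle,             # acute angle between the square and the rectangle [rad]
      'C1': C1, 'C2': C2,         # camera centres (3x1 column vectors)
      'R1': R1, 'R2': R2,         # camera rotations (3x3)
  } )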

Notes

Interpolation of the camera poses

Let $R_1$, $C_1$ be the first camera pose and $R_2$, $C_2$ the second camera pose. Let $\lambda$ be the interpolation parameter taking values from 0 (the first camera) to 1 (the second camera). Then use the following:

$C = (1-\lambda) C_1 + \lambda C_2$

$R = (R_2 R_1^\top)^\lambda R_1$

Note that a matrix power is used here. Due to numerical accuracy, it is necessary to take only the real part of the result.

Matlab:

  C = C2 * lambda + C1 * (1 - lambda);
  R = real( (R2 * R1')^lambda * R1 );

Python (lambda is a reserved word in Python, so the parameter is called lam here):

  C = C2 * lam + C1 * (1 - lam)
  R = scipy.linalg.fractional_matrix_power( R2 @ R1.T, lam ).real @ R1
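
For the virtual views, the first image can be warped by the homography induced by the poster plane. With the world frame chosen so that the poster lies in the plane $z = 0$, a camera $P = K R [I \mid -C]$ restricted to that plane is the homography $H = K R [e_1\ e_2\ -C]$, and the first image maps to the interpolated view by $H H_1^{-1}$. A sketch under these assumptions:

  import numpy as np
  import scipy.linalg

  def plane_homography( K, R, C ):
      # homography from the world plane z = 0 to the image of the camera K R [I|-C]
      return K @ R @ np.column_stack( ( np.eye( 3 )[:, :2], -C.flatten() ) )

  H1 = plane_homography( K, R1, C1 )                  # the first (real) camera
  for lam in np.linspace( 0.0, 1.0, 20 ):             # 20 interpolated poses
      C = ( 1 - lam ) * C1 + lam * C2
      R = scipy.linalg.fractional_matrix_power( R2 @ R1.T, lam ).real @ R1
      H = plane_homography( K, R, C ) @ np.linalg.inv( H1 )   # maps image 1 to the virtual view
      # warp the first image by H, draw the cube, and write the frame here ...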

Texturing the cube faces

A homography from the cube face, as projected in the image, to the texture image must be established. The question whether a pixel lies inside or outside the face is easily answered in the coordinate system of the texture.

Visibility must also be solved here. The easiest way is to consider each face's normal vector, computed from the face sides using the vector product (take care with the sign). Also, the direction vector from the camera to some point of the face must be computed. The face is then visible only if the normal vector forms an acute angle with the direction vector, i.e. its cosine (dot product) is positive.
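
A minimal sketch of such a test, here using the opposite but equivalent convention of an outward face normal and the direction from the face towards the camera centre (the corner ordering determines the sign of the cross product):

  import numpy as np

  def face_visible( X_face, C ):
      # X_face: 3x4 world coordinates of the face corners, ordered so that the
      # cross product of two adjacent sides points out of the cube
      n = np.cross( X_face[:,1] - X_face[:,0], X_face[:,3] - X_face[:,0] )  # outward normal
      d = C.flatten() - X_face[:,0]          # from a point of the face towards the camera
      return float( n @ d ) > 0              # visible if the angle is acute (positive cosine)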

Creating figure snapshots and video

You can use any approach, e.g. the SeqWriter and getframe helpers from the GVG Tools repository (but feel free to use your own solution).

Matlab:

  writer = SeqWriter( '07_seq_wire.avi' );
  
  for i = frames_range
      % prepare figure ...
      f = getframe( gca );
      im = im2double( f.cdata );
      writer.write( im );
  end
  writer.Close()

Python:

  import tools
  import SeqWriter
  import matplotlib.pyplot as plt
  
  writer = SeqWriter( '07_seq_wire.avi' )
  for i in frames_range:
     # prepare figure ...
     im = tools.getframe( plt.gcf() )
     writer.Write( im )
     
  writer.Close()

Upload

Upload an archive consisting of:

  1. 07_vp1.pdf, 07_vp2.pdf, 07_vp1_zoom.pdf, 07_vp2_zoom.pdf
  2. 07_data.mat
  3. 07_box_wire1.pdf, 07_box_wire2.pdf, 07_box_wire3.pdf
  4. 07_seq_wire.avi
  5. optionally 07_box_tx.png, 07_seq_tx.avi
  6. hw07.m or hw07.py – your implementation entry point

Also include any other files required by your implementation (including data and files from the repository).
