Geometry Flashcards

1
Q

Why should we remove pincushion and barrel distortion before processing?

A

The geometry used in this class assumes a pinhole camera model, which has no distortion.

2
Q

Briefly describe Zhang's method for camera calibration.

A
  1. Take at least 3 images of a known planar pattern (e.g., a checkerboard).
  2. Use these images to compute homographies between the pattern plane and the image.
  3. Estimate K from these homographies.
  4. Use iterative, non-linear methods to refine the full set of intrinsic parameters (a sketch with OpenCV follows below).
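A minimal sketch of this pipeline using OpenCV, whose calibrateCamera follows the same homography-plus-refinement scheme internally. The checkerboard size and the file naming pattern below are assumptions for illustration only.

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)  # assumed inner-corner count of the checkerboard
    # 3D coordinates of the corners in the board frame (Z = 0 plane), unit square size.
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_pts, img_pts = [], []
    for path in glob.glob("calib_*.png"):  # assumed file naming
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)

    # Solves the linear homography-based problem, then refines K, distortion and
    # the per-view poses non-linearly.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    print("K =\n", K)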
3
Q

If we estimate the camera pose from the K matrix and corresponding world/image points mathematically, the rotation part will typically not be a true rotation matrix. How can we fix this?

A

We can use the SVD to project the estimated matrix onto the closest true rotation matrix in the Frobenius norm.
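A small numpy sketch of this projection: given an approximately orthonormal estimate, the SVD yields the Frobenius-nearest true rotation.

    import numpy as np

    def closest_rotation(M):
        """Project an approximate 3x3 rotation M onto SO(3) (Frobenius-nearest)."""
        U, _, Vt = np.linalg.svd(M)
        # Replace the singular values with 1, flipping one axis if needed so det(R) = +1.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        return U @ D @ Vt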

4
Q

What is the difference between PnP methods and iterative methods for camera pose estimation from 3D data?

A

PnP methods are non-iterative (closed form), faster, and need fewer point correspondences.

5
Q

What do we need to know to extract 3D information from single view cameras?

A

We need objects or regions with known 3D structure, such as planar regions, parallel lines, horizontal surfaces, or vertical structures.

6
Q

What is the vanishing point of a line?

A

An infinitely long line in the real world appears with finite length in the image and terminates at a point; this point is the vanishing point.

7
Q

How are the vanishing points of parallel lines related?

A

Parallel lines share the same vanishing point.

8
Q

What is the vanishing line, and how is it related to vanishing points?

A

The vanishing line is where a plane disappears in the image (like the horizon). All vanishing points of lines lying in that plane lie on the plane's vanishing line.

9
Q

If the vanishing line of the horizontal plane is straight and runs through the center of the image, how is the camera rotated?

A

The camera is straight and level.

10
Q

Which parameters determine the epipolar plane in two-view geometry?

A

The positions of the two camera centers and the observed 3D point.

11
Q

What are the epipoles?

A

The epipoles are where the baseline intersects the two image planes.

12
Q

What is the Q matrix used for in stereo geometry?

A

The Q matrix reprojects image points into the world: from pixel coordinates and disparity it gives the corresponding 3D world coordinates.
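In OpenCV, for example, Q is normally produced by cv2.stereoRectify and consumed by cv2.reprojectImageTo3D. A minimal sketch; the focal length, principal point, baseline and the constant disparity map below are made-up example values, not real calibration data.

    import cv2
    import numpy as np

    # Assumed example values: focal length f, principal point (cx, cy), baseline b (metres).
    f, cx, cy, b = 700.0, 320.0, 240.0, 0.12
    # Reprojection matrix for a rectified pair with identical cameras.
    Q = np.float32([[1, 0, 0, -cx],
                    [0, 1, 0, -cy],
                    [0, 0, 0,   f],
                    [0, 0, 1 / b, 0]])

    disparity = np.full((480, 640), 20.0, np.float32)   # dummy disparity map
    points_3d = cv2.reprojectImageTo3D(disparity, Q)    # (H, W, 3) world coordinates
    print(points_3d[240, 320])                          # depth ~ f * b / d at the centre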

13
Q

What is the DSI (Disparity Space Image) used in stereo processing?

A

The DSI is a mapping R^3 -> R that takes pixel coordinates (u, v) and a disparity d as input and outputs a score indicating how well the pixel matches at that disparity.
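A toy sketch of building a DSI for a rectified pair, using the absolute intensity difference as a simple matching cost (lower is better); the input images and the disparity range are assumptions.

    import numpy as np

    def build_dsi(left, right, max_disparity=64):
        """DSI[v, u, d] = matching cost of pixel (u, v) at disparity d (lower = better)."""
        h, w = left.shape
        dsi = np.full((h, w, max_disparity), np.inf, np.float32)
        for d in range(max_disparity):
            # Compare the left pixel (u, v) with the right pixel (u - d, v).
            dsi[:, d:, d] = np.abs(left[:, d:].astype(np.float32)
                                   - right[:, :w - d].astype(np.float32))
        return dsi

Taking dsi.argmin(axis=2) then gives a crude winner-take-all disparity map.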

14
Q

What is E, the essential matrix, in two-view geometry?

A

The essential matrix relates a point in the first normalized image plane to an epipolar line in the second normalized image plane.

15
Q

How many point correspondences do we need to estimate E, the essential matrix?

A

At least 5

16
Q

What is F, the fundamental matrix, in two-view geometry?

A

The fundamental matrix relates a point in one image to an (epipolar) line in the second image plane

17
Q

How many point correspondences do we need to estimate F, the fundamental matrix?

A

At least 7, but 8 is often used.

18
Q

Describe the 8-point algorithm for computing F, the fundamental matrix.

A
  1. Normalize the points using similarity transforms.
  2. Build the matrix A from the point correspondences.
  3. Compute the SVD of A and extract F' from the right singular vector corresponding to the smallest singular value.
  4. Compute the SVD of F'.
  5. Set the smallest singular value to 0 to enforce rank 2, i.e., a true fundamental matrix.
  6. Denormalize F (a numpy sketch follows below).
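A compact numpy sketch of these steps. The inputs x1, x2 as (N, 2) arrays of corresponding points are assumptions, and there is no outlier handling (no RANSAC).

    import numpy as np

    def normalize(pts):
        """Similarity transform: centroid at the origin, mean distance sqrt(2)."""
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]],
                      [0, s, -s * c[1]],
                      [0, 0, 1]])
        pts_h = np.column_stack([pts, np.ones(len(pts))])
        return (T @ pts_h.T).T, T

    def eight_point(x1, x2):
        """x1, x2: (N, 2) corresponding points, N >= 8. Returns F with x2^T F x1 = 0."""
        p1, T1 = normalize(x1)
        p2, T2 = normalize(x2)
        # Each correspondence gives one row of A (x2^T F x1 = 0 written out).
        A = np.column_stack([p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
                             p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
                             p1[:, 0], p1[:, 1], np.ones(len(p1))])
        _, _, Vt = np.linalg.svd(A)
        F = Vt[-1].reshape(3, 3)       # right singular vector of the smallest singular value
        U, S, Vt = np.linalg.svd(F)
        S[2] = 0                       # enforce rank 2
        F = U @ np.diag(S) @ Vt
        F = T2.T @ F @ T1              # denormalize
        return F / F[2, 2]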
19
Q

What is the fundamental difference between the 8 and 7 point algorithm for computing F?

A

The 7-point algorithm gives a two-dimensional null space of A. F can then be written as a linear combination of the two basis vectors, subject to the constraint det(F) = 0. This constraint gives rise to a cubic polynomial, which yields 1 or 3 real solutions for F.

20
Q

What is the theory behind two-view triangulation, and why can't we use this directly in practice?

A

Theoretically, two image points can be back-projected into the world, and we can determine the intersection of the two resulting rays. In practice, noise usually results in these two rays not intersecting.

21
Q

What is the problem with minimizing geometric error in triangulation, and what linear alternatives do we have?

A

Minimizing the geometric error will usually not minimize the reprojection error, and the method does not naturally extend to more than two cameras.

We could instead minimize the algebraic error using a linear least-squares approach.
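A sketch of the linear alternative: stack two equations per view from the projection relation and take the least-squares solution with the SVD. The projection matrices and pixel observations are assumed inputs; note that this formulation extends directly to any number of cameras.

    import numpy as np

    def triangulate_linear(points, projections):
        """points: list of (u, v) observations; projections: list of 3x4 camera matrices P.
        Minimizes the algebraic error ||A X|| subject to ||X|| = 1."""
        A = []
        for (u, v), P in zip(points, projections):
            A.append(u * P[2] - P[0])
            A.append(v * P[2] - P[1])
        _, _, Vt = np.linalg.svd(np.asarray(A))
        X = Vt[-1]
        return X[:3] / X[3]            # back to inhomogeneous 3D coordinates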

22
Q

How do we reduce the non-linear reprojection minimization problem of triangulation from 3 parameters of X, the point in the world, to 1 parameter?

A

We use the fact that the pencil of epipolar lines can be described with a single parameter, and minimize the distances from u and u' to their corresponding epipolar lines.

23
Q

Can we determine the camera matrices P and P’ from the fundamental matrix F?

A

Yes, but only up to a projective ambiguity. By adding knowledge of the scene or restricting ourselves to calibrated cameras, this can be reduced to an affine or metric ambiguity.

24
Q

Can we recover the pose of the cameras from the essential matrix E?

A

Yes, but only up to scale for t, the translation.

25
Q

What is the cheirality constraint?

A

When recovering the camera pose from E, we get 4 solutions (up to the scale of t), but only one of these solutions places the reconstructed points in front of both cameras. This requirement is the cheirality constraint.

26
Q

Describe the algorithm for visual odometry

A
  1. Capture image img_k+1.
  2. Use feature matching to estimate E between img_k+1 and img_k.
  3. Decompose E into R and t.
  4. Calculate ||k_t_k+1|| from ||k-1_t_k|| and rescale t.
  5. Calculate the pose of camera k+1 relative to camera 0 (a sketch of one step follows below).
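A minimal sketch of steps 2-5 for one frame pair with OpenCV. The feature-matching front end and the scale value are assumed to be available; recoverPose returns t with unit norm, so the scale must be supplied externally (e.g., propagated from the previous step). pose_0k is assumed to be the 4x4 transform taking camera-k coordinates into frame 0.

    import cv2
    import numpy as np

    def vo_step(pts_k, pts_k1, K, pose_0k, scale):
        """pts_k, pts_k1: matched pixel coordinates (N, 2) in img_k and img_k+1.
        Returns the 4x4 pose of camera k+1 expressed in frame 0."""
        # Argument order (pts_k1, pts_k) makes [R|t] map camera-(k+1) coords into camera-k coords.
        E, inliers = cv2.findEssentialMat(pts_k1, pts_k, K, method=cv2.RANSAC)
        _, R, t, _ = cv2.recoverPose(E, pts_k1, pts_k, K, mask=inliers)
        T_rel = np.eye(4)
        T_rel[:3, :3] = R
        T_rel[:3, 3] = scale * t.ravel()   # rescale the unit-norm translation
        return pose_0k @ T_rel             # chain: pose of k+1 relative to 0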
27
Q

How can we treat the scale (ambiguity) problem in visual odometry?

A

We can set ||1_t_0|| = 1 and calculate the scale of each subsequent t_k+1 relative to t_k.

28
Q

What is the difference between visual odometry in a 3D scene and in a planar scene?

A

The 3D case estimates the epipolar geometry and uses E to calculate the pose. The planar-scene case estimates a homography, H, and uses this to estimate the pose.

29
Q

What is the trifocal tensor?

A

The trifocal tensor is the algebraic representation of three view geometry

30
Q

Describe the Sequential SfM (Sequential Structure from Motion)

A
  1. Initialize motion from two images (as described in two view geometry)
  2. Initialize 3D structure with triangulation
  3. For each new view, calculate the projection matrix, then refine and extend the 3D structure.
31
Q

What is bundle adjustment in relation to SfM (Structure from Motion)?

A

A non-linear method that refines structure and motion by minimizing the squared reprojection error

32
Q

What can we do to deal with the potentially extreme number of parameters in bundle adjustment for SfM?

A
  • Compute bundle adjustment only on a subset and add missing views/points based on the result
  • Divide view/points into several subsets, perform bundle adjustment for each and merge them.
33
Q

What are the advantages of multiple view depth calculations over stereo view?

A
  1. Multiple views can be used to verify correspondences
  2. Can make reconstruction more robust to occlusion
  3. Can be used to infer free space and volumes.
34
Q

Describe the plane sweep algorithm

A
  1. Map each target image to the reference image for each candidate plane depth.
  2. Compute the similarity for each pixel at each depth (using zero-mean normalized cross-correlation on a small patch around the pixel).
  3. Choose the best-fitting depth for each pixel.
35
Q

What can we do to avoid distortions and get better matching during plane sweep?

A

Choose another sweep-plane normal (e.g., the ground normal) that better matches the dominant scene surfaces.

36
Q

What is a voxel?

A

A 3D “pixel”: a volume element in a regular 3D grid.

37
Q

Describe space carving

A
  1. Create an initial volume (cube) of voxels.
  2. Project each voxel into the images.
  3. Remove the voxel if it is not photo-consistent.
  4. Continue until convergence.
38
Q

What is PnP and what does it do?

A

The n-point pose problem (Perspective-n-Point).

Estimates the camera pose (the position and orientation of the camera in the world frame) from n 3D-2D point correspondences.
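As an example, OpenCV's solvePnP solves this problem; the world points, pixel points, and intrinsics below are made-up placeholders.

    import cv2
    import numpy as np

    # Assumed example data: four known world points and their measured pixel coordinates.
    object_points = np.float32([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
    image_points = np.float32([[320, 240], [420, 238], [418, 340], [322, 344]])
    K = np.float32([[700, 0, 320], [0, 700, 240], [0, 0, 1]])

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
    R, _ = cv2.Rodrigues(rvec)          # rotation vector -> rotation matrix
    cam_pos_world = -R.T @ tvec         # camera position expressed in the world frame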

39
Q

What is the epipolar plane?

A

The plane defined by the two camera centers and a point X in 3D.

40
Q

What is the baseline?

A

The line defined by the two camera centers.

41
Q

What is an epipolar line?

A

The line defined by the epipolar plane intersecting the normalised image plane.

42
Q

What is an epipole?

A

Where the baseline intersects the two image planes.

43
Q

How can we create a sparse set of stereo matches?

A

Detect feature points in both images and match them; keeping only the best matches gives a sparse set of correspondences.

44
Q

How can we create a dense set of stereo matches?

A

For every pixel, search along (a subset of) the corresponding epipolar line with a matching window and keep the best match.
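For rectified images, where the epipolar lines are horizontal rows, this window-based search is what OpenCV's block matcher does. A short sketch; the file names and parameter values are assumptions.

    import cv2

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed rectified pair
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Search up to 64 disparities along each row with a 15x15 matching window.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype("float32") / 16.0   # fixed-point output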

45
Q

What error are we often trying to minimize with non-linear triangulation?

A

The reprojection error

46
Q

What is the formula for calculating the essential matrix E?

A

E = [t]_x R, where t is the translation vector from camera 2 to camera 1, [t]_x is the skew-symmetric cross-product matrix of t, and R is the rotation matrix from camera 1 to camera 2.
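A small numpy sketch of this formula; the R and t values are assumed examples.

    import numpy as np

    def skew(t):
        """Cross-product matrix [t]_x, so that skew(t) @ x == np.cross(t, x)."""
        return np.array([[0, -t[2], t[1]],
                         [t[2], 0, -t[0]],
                         [-t[1], t[0], 0]])

    R = np.eye(3)                         # assumed relative rotation
    t = np.array([1.0, 0.0, 0.0])         # assumed relative translation (baseline along x)
    E = skew(t) @ R                       # essential matrix E = [t]_x R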

47
Q

How are F and E related?

A

F = K^(-T) E K^(-1), assuming both cameras share the intrinsic matrix K (with different intrinsics, F = K'^(-T) E K^(-1)).

48
Q

In two view geometry, how can we calculate the epipolar line in image 2 for a given point x in Image 1?

A

l' = Fx, where x is the homogeneous point in image 1 and l' is the epipolar line in image 2.

49
Q

Explain briefly how we can estimate the relative pose between two perspective cameras from a pair of overlapping images.

A
  1. Estimate E from point correspondences.
  2. Decompose E into R and t using the SVD.
  3. This gives 4 solutions, but only 1 solution places the scene points in front of both cameras (the cheirality constraint). The correct solution can be found by triangulating at least one point, but preferably several (see the sketch below).
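A sketch of the disambiguation step with OpenCV: decompose E into the four pose hypotheses and keep the one that passes the cheirality check. E, K, and the matched pixel points pts1, pts2 (float arrays) are assumed to be available.

    import cv2
    import numpy as np

    R1, R2, t = cv2.decomposeEssentialMat(E)
    candidates = [(R1, t), (R1, -t), (R2, t), (R2, -t)]   # the four pose hypotheses

    def in_front(R, t, x1, x2, K):
        """Triangulate one correspondence and check the cheirality constraint."""
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        X = cv2.triangulatePoints(P1, P2, x1.reshape(2, 1).astype(float),
                                  x2.reshape(2, 1).astype(float))
        X = X[:3] / X[3]                                  # inhomogeneous point, camera-1 frame
        return X[2, 0] > 0 and (R @ X + t)[2, 0] > 0      # in front of both cameras

    # In practice, test several (or all) correspondences, not just the first one.
    R_best, t_best = next((R, t_) for R, t_ in candidates
                          if in_front(R, t_, pts1[0], pts2[0], K))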
50
Q

What do we mean by structure from motion?

A

Structure from motion is an algorithm for reconstructing the 3D structure and the camera projection matrices from multiple views of a static scene.

51
Q

Explain the main principles of how we can estimate dense depth maps from multiple images using plane sweep.

A
  1. Map each image to a reference image for several different depths.
  2. Compute the similarity between each mapped image and the reference image using ZNCC (zero-mean normalized cross-correlation).
  3. Combine the results from the views by summing the scores.
  4. Choose the depth that fits best for each pixel.
    (The depth-plane normal does not have to coincide with the reference camera's z-axis; it can, for example, be the ground normal. A sketch follows below.)
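A simplified sketch of this pipeline under several assumptions: grayscale float32 images, known intrinsics K, known relative poses (R, t) from the reference camera to each target camera, and a sweep-plane normal n given in the reference frame.

    import cv2
    import numpy as np

    def zncc(a, b, win=7):
        """Per-pixel zero-mean normalized cross-correlation over a win x win patch."""
        k = (win, win)
        mu_a, mu_b = cv2.blur(a, k), cv2.blur(b, k)
        var_a = np.maximum(cv2.blur(a * a, k) - mu_a ** 2, 1e-6)
        var_b = np.maximum(cv2.blur(b * b, k) - mu_b ** 2, 1e-6)
        cov = cv2.blur(a * b, k) - mu_a * mu_b
        return cov / np.sqrt(var_a * var_b)

    def plane_sweep(ref, targets, poses, K, depths, n=np.array([0.0, 0.0, 1.0])):
        """ref, targets: float32 grayscale images; poses: (R, t) reference -> target."""
        h, w = ref.shape
        depths = np.asarray(depths, np.float32)
        score = np.zeros((len(depths), h, w), np.float32)
        K_inv = np.linalg.inv(K)
        for i, d in enumerate(depths):
            for img, (R, t) in zip(targets, poses):
                # Homography induced by the plane n.X = d, mapping reference pixels to target pixels.
                H = K @ (R + np.outer(t, n) / d) @ K_inv
                warped = cv2.warpPerspective(img, H, (w, h),
                                             flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
                score[i] += zncc(ref, warped)          # sum the ZNCC scores over all views
        return depths[np.argmax(score, axis=0)]         # best-scoring depth per pixel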
52
Q

If an infinite line doesn’t have a vanishing point, what do we know about the orientation of the line?

A

The line is parallel to the image plane (or, equivalently, perpendicular to the optical axis).

53
Q

Assume that you have a straight and level camera. In the image, a vertical structure reaches exactly up to the vanishing line of the horizontal plane. What is the real height of the vertical structure?

A

The same as the height of the camera.

54
Q

Assume we have a straight and level camera. In the image plane, there are two vertical infinite lines without a vanishing point. We then tilt the camera upwards. What happens to the lines regarding vanishing points, and what happens to the horizontal vanishing line?

A

The lines will now have a vanishing point, located above in the image (toward the zenith). The horizontal vanishing line will shift downwards in the image.

55
Q

Name at least one bundle adjustment library

A

GTSAM, RealityCapture, SBA, Ceres, Bundler…

56
Q

Describe how to estimate camera pose when we have a single view of a planar world scene, and known H and K.

A
  1. u = K[r1, r2, r3, t]X (general projection of a world point X).
  2. For a planar scene (Z = 0), H = K[r1, r2, t].
  3. [r1, r2, t] = (up to scale) K^-1 * H = M.
  4. [r1, r2, t] = +/- lambda * M, with lambda chosen so that r1 and r2 have unit norm.
  5. r3 = +/-(r1 x r2).
  6. The sign of r3 is chosen so that det(R) = 1.
  7. Choose the solution where the camera is above the scene (a sketch follows below).
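A numpy sketch of these steps; H and K are assumed known, and the solution selection here simply picks the candidate whose camera centre lies above the plane (positive height), which is one way to encode "camera over the scene".

    import numpy as np

    def pose_from_homography(H, K):
        """Recover R = [r1 r2 r3] and t from a plane-to-image homography H and intrinsics K."""
        M = np.linalg.inv(K) @ H                 # = [r1, r2, t] up to scale
        lam = 1.0 / np.linalg.norm(M[:, 0])      # scale so that r1 (and r2) has unit norm
        poses = []
        for s in (+lam, -lam):                   # the +/- lambda ambiguity
            r1, r2, t = s * M[:, 0], s * M[:, 1], s * M[:, 2]
            r3 = np.cross(r1, r2)                # completes the rotation with det(R) = +1
            poses.append((np.column_stack([r1, r2, r3]), t))
        # Camera centre in plane coordinates is -R^T t; pick the candidate with positive height.
        return max(poses, key=lambda Rt: (-Rt[0].T @ Rt[1])[2])

In practice the resulting R is only approximately orthonormal, so one would also project it onto the nearest true rotation with the SVD, as in the earlier card.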