w8 gemini Flashcards

1
Q

List the constraints typically applied to solving the video correspondence problem.

A

Spatial coherence (neighboring points have similar optical flow) and Small motion (optical flow vectors have small magnitude).

2
Q

Note circumstances in which the spatial coherence constraint fails.

A

It fails at discontinuities between surfaces at different depths, or surfaces with different motion.

3
Q

Note circumstances in which the small motion constraint fails.

A

It fails if relative motion is fast or frame rate is slow.

4
Q

Define what is meant by the “aperture problem”.

A

The aperture problem refers to the fact that when a moving edge is viewed through a small aperture (a small image patch), only the motion component perpendicular to the edge can be measured; the component parallel to the edge is invisible, so the true direction of motion is ambiguous.

5
Q

Suggest how the aperture problem can be overcome.

A
1. Integrate information from many local motion detectors / image patches, or
2. Give preference to image locations where image structure provides unambiguous information about optic flow (e.g. corners).
6
Q

Consider a traditional barber’s pole. When the pole rotates on its axis, in which direction is the motion field?

A

Horizontal.

7
Q

Consider a traditional barber’s pole. When the pole rotates on its axis, in which direction is the optic flow?

A

Vertical.

8
Q

Explain why the optic flow of a rotating barber’s pole appears vertical, despite the actual motion being horizontal, in the context of the aperture problem.

A

The stripes of the pole provide ambiguous information. Only where the stripes meet the top and bottom and the sides of the pole are corners present, and hence, there is (seemingly) unambiguous information.

9
Q

What is video in the context of computer vision?

A

A series of N images, or frames, acquired at discrete time instants t_k = t_0 + kΔt, where Δt is a fixed time interval and k = 0, 1, …, N−1.

10
Q

How is video similar to stereo vision?

A

Both deal with more than one image.

11
Q

What are some advantages of using video compared to static images or stereo vision?

A

Inference of 3D structure, segmentation of objects from background without recovery of depth, and inference of self and object motion.

12
Q

What are the two main types of motion that cause changes in the projection of a scene point in a video?

A

Object motion and camera motion (or ‘ego motion’).

13
Q

What is optic flow?

A

The apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer (an eye or a camera) and the scene.

14
Q

Distinguish between an optic flow vector and an optic flow field.

A

An optic flow vector represents the image motion of a single scene point. An optic flow field is the collection of all optic flow vectors in an image.

15
Q

What are the two main categories of optic flow fields?

A

Sparse (vectors defined only for specified features) and dense (vectors defined everywhere).

16
Q

What is the analogy between optic flow vectors and disparity vectors?

A

Optic flow vectors are analogous to disparity vectors in stereo vision: both record the image displacement of the same scene point between two images (successive frames in video, left/right views in stereo).

17
Q

What is required to measure optic flow?

A

Finding correspondences between images.

18
Q

What is the motion field?

A

The true image motion of a scene point, which is the actual projection of the relative motion between the camera and the 3D scene.

19
Q

Explain why optic flow is often an approximation of the motion field.

A

Optic flow is what we can measure in the image, but it might not perfectly represent the actual 3D motion due to factors like illumination changes or non-Lambertian surfaces.

20
Q

Give an example where the motion field is non-zero, but the optic flow is zero.

A

A smooth, Lambertian, uniform sphere rotating around a diameter.

21
Q

Give an example where the motion field is zero, but the optic flow is non-zero.

A

A stationary, specular sphere and a moving light source.

22
Q

Give an example where the motion field and optic flow differ.

A

A barber’s pole: motion field is horizontal, optic flow is vertical.

23
Q

Why must we estimate the motion field by observing the optic flow?

A

Because the motion field cannot be directly observed.

24
Q

What is the video correspondence problem?

A

The problem of finding corresponding points in different frames of a video to measure optic flow.

25
Q

What are the two main types of methods used to solve the video correspondence problem?

A

Feature-based methods and Direct methods.

26
Q

Describe feature-based methods for solving the video correspondence problem.

A

Extract descriptors from around interest points and find similar features in the next frame.

27
Q

Describe direct methods for solving the video correspondence problem.

A

Directly recover image motion at each pixel from temporal variations of the image brightness.

28
Q

What are the basic requirements for using feature-based methods to solve the correspondence problem?

A
1. Most scene points are visible in both images.
2. Corresponding image regions appear “similar”.
29
Q

What is the spatial coherence constraint in video correspondence?

A

Similar neighboring flow vectors are preferred over dissimilar ones.

30
Q

When does the spatial coherence constraint fail?

A

At discontinuities between surfaces at different depths or with different motions.

31
Q

What is the small motion constraint in video correspondence?

A

Small optic flow vectors are preferred over large ones.

32
Q

When does the small motion constraint fail?

A

If relative motion is fast or the frame rate is slow.

33
Q

Explain the aperture problem using the example of a moving rectangle.

A

When observing a small patch on one edge of a moving rectangle, only the motion component perpendicular to that edge can be measured; the component parallel to the edge is invisible, so each edge alone gives ambiguous motion.

34
Q

How does the brain deal with the aperture problem?

A

The brain combines local motion measurements across space or relies on unambiguous features like corners.

35
Q

What are two potential reasons why we might perceive motion perpendicular to an edge in the aperture problem?

A

It might be the average of all possibilities or it predicts the slowest movement.

36
Q

Describe two solutions to the aperture problem.

A
1. Combine local motion measurements across space.
2. Utilize locations where the direction of motion is unambiguous (e.g. corners).
37
Q

List some applications of optic flow.

A

Estimating the layout of the environment, estimating ego-motion, estimating object motions, and obtaining predictive information for action control.

38
Q

Describe the simple case 1 for depth from optic flow with known ego-motion.

A

Direction of motion is perpendicular to the optical axis, and the velocity of the camera (Vx) is known.

39
Q

Describe the simple case 2 for depth from optic flow with known ego-motion.

A

Direction of motion is along the camera optical axis, and the velocity of the camera (Vz) is known.

40
Q

In the context of time-to-collision from optic flow (simple case 2), what is assumed about the camera’s velocity?

A

The camera velocity is unknown.

41
Q

What is the formula for time-to-collision derived from optic flow?

A

Time-to-collision τ = x / v_x, where x is the image position of the point relative to the focus of expansion and v_x is its optic flow (the rate of change of x).

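As a minimal sketch of the formula above (plain Python; the variable names are illustrative), time-to-collision needs only the point's image coordinate relative to the focus of expansion and its measured flow:

```python
def time_to_collision(x, v_x):
    """Time-to-collision from image measurements alone: x is the image
    position relative to the focus of expansion, v_x its optic flow.
    No camera velocity or scene depth is required; the result is in
    whatever time unit v_x is expressed in (e.g. frames)."""
    return x / v_x

# A point 40 pixels from the FOE, expanding at 8 pixels/frame:
print(time_to_collision(40.0, 8.0))  # -> 5.0 (frames until collision)
```
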
42
Q

What is a significant advantage of calculating time-to-collision from optic flow in this way?

A

It can be calculated purely from the image without knowing the camera’s velocity or the object’s depth.

43
Q

How can optic flow be used for robot navigation?

A

By moving so that the destination is the Focus of Expansion (FOE).

44
Q

How can optic flow be used to judge relative depths?

A

By relative magnitudes of optic flow vectors; points closer to the camera move more quickly across the image plane.

45
Q

How can optic flow be used to measure absolute depths?

A

With knowledge of camera velocity.

46
Q

How can optic flow be used to judge camera speed?

A

By the rates of expansion/contraction.

47
Q

How can optic flow be used for segmentation?

A

Discontinuities in the optic flow field indicate different depths and, hence, different objects.

48
Q

What are camera translations that induce characteristic patterns of optic flow for a static scene?

A

Turn/translate left, turn/translate right, move forward, move backward.

49
Q

What are the characteristics of a Parallel Optic Flow Field (Vz = 0)?

A

All optic flow vectors are parallel; the direction of camera movement is opposite to the direction of the optic flow field.

50
Q

What are the characteristics of a Radial Optic Flow Field (Vz ≠ 0)?

A

All optic flow vectors point towards/away from a single image point p₀ (the focus of expansion/contraction).

51
Q

How is relative depth determined in a Parallel Optic Flow Field?

A

Depth is inversely proportional to the magnitude of the optic flow vector.

52
Q

How is relative depth determined in a Radial Optic Flow Field?

A

Depth of a point p is inversely proportional to the magnitude of its optic flow vector and proportional to the distance from p to p₀.

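The depth rules for parallel and radial flow fields can be sketched as follows (a minimal NumPy sketch; the unknown proportionality scale is the camera speed, so only relative depths come out):

```python
import numpy as np

def relative_depth_parallel(flow):
    """Parallel flow field (Vz = 0): depth is inversely proportional
    to the flow magnitude, up to an unknown scale (camera speed)."""
    return 1.0 / np.linalg.norm(flow)

def relative_depth_radial(p, p0, flow):
    """Radial flow field (Vz != 0): depth of p is proportional to the
    distance from p to the focus of expansion p0 and inversely
    proportional to the flow magnitude at p (again up to scale)."""
    p, p0 = np.asarray(p, float), np.asarray(p0, float)
    return np.linalg.norm(p - p0) / np.linalg.norm(flow)

# Radial case: a point twice as far from the FOE but with half the
# flow magnitude is (relatively) four times as deep.
d1 = relative_depth_radial([110, 100], [100, 100], [4.0, 0.0])
d2 = relative_depth_radial([120, 100], [100, 100], [2.0, 0.0])
print(d2 / d1)  # -> 4.0
```
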
53
Q

Explain how optic flow discontinuities can be used for segmentation.

A

Large differences in optic flow between adjacent regions often indicate boundaries between objects moving independently.

54
Q

What is the key idea behind segmentation from motion?

A

Moving objects can be distinguished from a static background or other moving objects based on differences in their motion patterns.

55
Q

What are tracking algorithms?

A

Algorithms used when high-level features, such as objects, are matched across several frames.

56
Q

How do tracking algorithms typically work?

A

They use previous frames to PREDICT the location of an object in the next frame.

57
Q

What are the intents behind using tracking algorithms?

A

To do less work looking for the object and to get improved estimates since measurement noise is averaged out.

58
Q

Name two common methods used in tracking algorithms.

A

Kalman filtering and Particle filtering.

59
Q

How do humans predict movement?

A

By extrapolating object trajectory from previous locations and from high-level knowledge of typical movement.

60
Q

Describe the concept of ‘segmentation from motion’.

A

Using motion cues to separate different objects or regions in a video sequence.

61
Q

Give an example of segmentation from motion.

A

Camouflaged objects are more easily seen when they move.

62
Q

Explain the technique of ‘image differencing’ for segmentation.

A

For a static camera, successive images are subtracted pixel by pixel.

63
Q

What is a problem with the image differencing technique for segmentation?

A

Grey levels change both where the background is covered by the object and where it is uncovered behind it, so the difference image marks the object's leading and trailing edges rather than the object itself.

64
Q

Explain the ‘background subtraction’ technique for segmentation.

A

A reference image of the static background is learned and subtracted from the current frame.

65
Q

What is an advantage of background subtraction over image differencing?

A

Objects that are temporarily stationary are still seen as foreground.

66
Q

What are some common methods for learning the static background in background subtraction?

A

Adjacent Frame Difference, Off-line average, and Moving average.

67
Q

Describe the ‘Adjacent Frame Difference’ method for background subtraction.

A

Each image is subtracted from the previous image in the sequence.

68
Q

Describe the ‘Off-line average’ method for background subtraction.

A

Pixel-wise mean values are computed during a separate training phase.

69
Q

Describe the ‘Moving average’ method for background subtraction.

A

The background model is a linear weighted sum of previous frames.

70
Q

List the steps in a typical algorithm for motion segmentation using background subtraction.

A
1. Update the background model.
2. Compute the frame difference.
3. Threshold the frame difference.
4. Remove noise.
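One iteration of these four steps can be sketched in NumPy (a hedged sketch: the moving-average update, threshold value, and the crude 3×3 majority filter used for noise removal are illustrative choices, not the only options):

```python
import numpy as np

def segment_motion(frame, background, beta=0.05, threshold=25):
    """One iteration of motion segmentation by background subtraction:
    1. update the (moving-average) background model,
    2. compute the absolute frame difference,
    3. threshold it to a binary foreground mask,
    4. remove isolated noise pixels with a 3x3 majority filter."""
    frame = frame.astype(np.float64)
    background = (1 - beta) * background + beta * frame   # step 1
    diff = np.abs(frame - background)                     # step 2
    mask = diff > threshold                               # step 3
    # step 4: keep a pixel only if >=5 of its 3x3 neighbourhood
    # (including itself) are foreground
    padded = np.pad(mask, 1)
    counts = sum(padded[i:i + mask.shape[0], j:j + mask.shape[1]]
                 for i in range(3) for j in range(3))
    mask = counts >= 5
    return mask, background

# Usage: a bright 4x4 object appears against a learned zero background.
bg = np.zeros((10, 10))
fr = np.zeros((10, 10))
fr[3:7, 3:7] = 200.0
mask, bg = segment_motion(fr, bg)
```
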
71
Q

What is the goal of the correspondence step in optic flow analysis?

A

Determining which points in different frames are projections of the same point in the scene.

72
Q

What constraints are typically used in the correspondence step of optic flow?

A

Small motion and spatial coherence.

73
Q

Describe the ‘Moving average’ method for background subtraction.

A

The background model is a linear weighted sum of previous frames, which can overcome slow illumination changes.

75
Q

What is the goal of the reconstruction step in optic flow analysis?

A

Given the correspondence between points, calculate 3D structure and motion of the observed scene.

76
Q

What can be calculated in the reconstruction step with knowledge of ego-motion?

A

Absolute depth.

77
Q

What can be calculated in the reconstruction step without knowledge of ego-motion?

A

Relative depths, time-to-collision, and direction of ego-motion.

78
Q

Summarize the process of segmenting moving objects from a stationary background.

A

Calculate the absolute difference between the current image and a background model and threshold it: |I(x,y,t) − B(x,y)| > T.

79
Q

What are two common approaches to background subtraction?

A

Image differencing and explicit background subtraction.

80
Q

Describe the formula for the background model in image differencing.

A

B(x,y) = I(x,y,t-1).

81
Q

Describe the formula for the background model in off-line average background subtraction.

A

B(x,y) = (1/N) * Σ(from t=1 to N) I(x,y,t).

82
Q

Describe the formula for the background model in moving average background subtraction.

A

B(x,y) ← (1-β)B(x,y) + βI(x,y,t).
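The three background-model formulas above can be sketched in NumPy (a minimal sketch; function names and array shapes are illustrative assumptions):

```python
import numpy as np

# Adjacent frame difference: the background is simply the previous frame.
def background_frame_difference(prev_frame):
    return prev_frame.copy()

# Off-line average: pixel-wise mean over N training frames.
def background_offline_average(training_frames):
    return np.mean(training_frames, axis=0)

# Moving average: B <- (1 - beta)*B + beta*I(t). A small beta adapts
# slowly, tolerating brief foreground objects and slow illumination changes.
def background_moving_average(B, frame, beta=0.05):
    return (1.0 - beta) * B + beta * frame

frames = np.stack([np.full((4, 4), v, dtype=float) for v in (10, 20, 30)])
print(background_offline_average(frames)[0, 0])  # -> 20.0
```
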