1.3 Lane Detection Flashcards

1
Q

Why is lane detection crucial for driver assistance systems and autonomous vehicles?

A

Lane detection is crucial because it allows the vehicle's position relative to the lane lines to be computed. This information can be used to warn the driver of a potential involuntary lane departure, to control the steering so as to maintain a desired position within the lane, and to compute the radius of curvature of the lane.

2
Q

What are the main phases of the image processing pipeline for lane detection?

A

The main phases of the pipeline are: camera calibration, distortion correction, perspective transformation, lane detection, and lane curvature computation.

3
Q

What is camera calibration?

A

Camera calibration consists of calculating the internal and external parameters specific to a given camera, including distortion parameters. It’s a process that allows mapping between the 3D space of the real world and the 2D image captured by the camera.

4
Q

What is the basic model for camera calibration?

A

The basic model is the pinhole model, which considers the central projection of points in space onto a plane, where the projection center is the origin of a Euclidean coordinate system.

5
Q

How is the pinhole camera model mathematically described?

A

In the pinhole model, a point in space X = (X, Y, Z)ᵀ is mapped to the point on the image plane where a line joining point X to the center of projection meets the image plane. The point (X,Y,Z)ᵀ is mapped to the point (fX/Z,fY/Z,f)ᵀ on the image plane, where f is the focal length.

6
Q

How is central projection represented using homogeneous coordinates?

A

Using homogeneous coordinates, central projection can be expressed as:

| X |        | fX |   | f 0 0 0 |   | X |
| Y |  -->   | fY | = | 0 f 0 0 | * | Y |
| Z |        | Z  |   | 0 0 1 0 |   | Z |
| 1 |                                | 1 |

Which can be written in compact form as x = PX, where P is the camera projection matrix.
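As a quick sanity check, the mapping (X, Y, Z)ᵀ → (fX/Z, fY/Z)ᵀ can be sketched in a few lines of Python (a minimal illustration; the `project` helper and sample values are ours, not from the source):

```python
def project(point, f):
    """Pinhole projection: (X, Y, Z) -> (fX/Z, fY/Z), i.e. x = PX with
    P = [[f,0,0,0],[0,f,0,0],[0,0,1,0]] followed by dehomogenization."""
    X, Y, Z = point
    fx, fy, w = f * X, f * Y, Z  # P applied to (X, Y, Z, 1)
    return (fx / w, fy / w)

# A point at depth Z = 2 with f = 1 lands at (X/2, Y/2):
print(project((4.0, 2.0, 2.0), 1.0))  # -> (2.0, 1.0)
```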

7
Q

What is the camera calibration matrix K?

A

The K matrix is the camera calibration matrix, defined as:

| f 0 px |
| 0 f py |
| 0 0 1  |

Where f is the focal length and (px, py) are the coordinates of the principal point.
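The effect of K is to scale by f/Z and shift by the principal point. A minimal sketch (the `pixel_coords` helper and the sample numbers are ours):

```python
def pixel_coords(point, f, px, py):
    """Map a camera-frame 3D point to pixel coordinates using
    K = [[f, 0, px], [0, f, py], [0, 0, 1]]: multiply, then dehomogenize."""
    X, Y, Z = point
    u = (f * X + px * Z) / Z  # = f*X/Z + px
    v = (f * Y + py * Z) / Z  # = f*Y/Z + py
    return (u, v)

# With f = 100 and principal point (320, 240), a point at depth Z = 2:
print(pixel_coords((1.0, 0.5, 2.0), 100.0, 320.0, 240.0))  # -> (370.0, 265.0)
```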

8
Q

What are the internal and external camera parameters?

A

The internal parameters (internal orientation) are contained in the K matrix (the elements f, px, py). The external parameters (external orientation) are the parameters of R and C̄ that relate the orientation and position of the camera to a world coordinate system.

9
Q

How is the pinhole model adapted for CCD cameras?

A

For CCD cameras, the model is extended to allow non-square pixels and a skew term. The general form of the calibration matrix becomes:

| αx s  x0 |
| 0  αy y0 |
| 0  0  1  |

where αx = f·mx and αy = f·my express the focal length in pixel units in the x and y directions (mx and my being the number of pixels per unit length on the sensor), and s is the skew parameter.
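Assembling the general CCD matrix is mechanical once αx and αy are computed (the helper name and the sample sensor values are ours):

```python
def ccd_calibration_matrix(f, mx, my, s, x0, y0):
    """General CCD calibration matrix with alpha_x = f*mx, alpha_y = f*my
    (mx, my = pixels per unit length on the sensor) and skew s."""
    ax, ay = f * mx, f * my
    return [[ax, s, x0], [0, ay, y0], [0, 0, 1]]

# A 4 mm lens on a sensor with 250 px/mm square pixels, no skew:
print(ccd_calibration_matrix(4.0, 250.0, 250.0, 0.0, 320.0, 240.0))
# -> [[1000.0, 0.0, 320.0], [0, 1000.0, 240.0], [0, 0, 1]]
```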

10
Q

What is radial distortion and how can it be corrected?

A

Radial distortion occurs because light enters the camera through a lens instead of a pinhole. A radial distortion model can have this form:

x̂ = x(1 + κ₁(x² + y²) + κ₂(x² + y²)²)
ŷ = y(1 + κ₁(x² + y²) + κ₂(x² + y²)²)

where κ₁ and κ₂ are the radial distortion parameters.
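The model above translates directly into code; points farther from the optical axis (larger r² = x² + y²) are displaced more. A sketch with illustrative coefficient values of our choosing:

```python
def distort(x, y, k1, k2):
    """Two-parameter radial distortion model applied to normalized
    image coordinates: x_hat = x(1 + k1*r^2 + k2*r^4), r^2 = x^2 + y^2."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return (x * factor, y * factor)

# The shift grows with distance from the center:
print(distort(0.1, 0.0, -0.2, 0.05))  # mild displacement near the axis
print(distort(0.8, 0.6, -0.2, 0.05))  # stronger displacement at the edge
```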

11
Q

What types of radial distortion can affect images?

A

The two main types of radial distortion are:

  • Barrel distortion: straight lines bend outward, making the image appear ‘bulged’ at the center
  • Pincushion distortion: straight lines bend inward, making the image appear ‘pulled inward’ at the corners
12
Q

What are the steps for the camera calibration procedure?

A

The camera calibration procedure consists of:

  1. Printing a pattern and attaching it to a planar surface (normally a checkerboard pattern)
  2. Taking a few images of the plane with different orientations
  3. Detecting the feature points in the images
  4. Estimating the intrinsic and extrinsic parameters using the closed-form solution
  5. Estimating the coefficients of radial distortion
  6. Refining all parameters through a minimization procedure
13
Q

How are camera parameters estimated using a checkerboard pattern?

A

Considering a feature point [X, Y, Z, 1]ᵀ in the world reference frame and its projection [u, v, 1]ᵀ in the image, we have:
s[u,v,1]ᵀ = K[R|t][X,Y,Z,1]ᵀ
Choosing the world frame so that the pattern plane is Z = 0, and writing R = [r₁ r₂ r₃], the column r₃ drops out:
s[u,v,1]ᵀ = K[r₁ r₂ t][X,Y,1]ᵀ

14
Q

How is the homography matrix H calculated in camera calibration?

A

We define H = [h₁ h₂ h₃] = λK[r₁ r₂ t]. Since r₁ and r₂ are orthonormal, K must satisfy the two constraints:
h₁ᵀ(K⁻ᵀ)(K⁻¹)h₂ = 0
h₁ᵀ(K⁻ᵀ)(K⁻¹)h₁ = h₂ᵀ(K⁻ᵀ)(K⁻¹)h₂
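These two constraints can be verified numerically. The sketch below assumes a simplified K = diag(f, f, 1) (principal point at the origin) so that K⁻¹ is trivial; all names and values are ours:

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def check_constraints(f, r1, r2):
    """With K = diag(f, f, 1), h1 = K r1 and h2 = K r2 must satisfy
    h1^T K^-T K^-1 h2 = 0 and h1^T K^-T K^-1 h1 = h2^T K^-T K^-1 h2
    whenever r1 and r2 are orthonormal."""
    K = [[f, 0, 0], [0, f, 0], [0, 0, 1]]
    Kinv = [[1 / f, 0, 0], [0, 1 / f, 0], [0, 0, 1]]  # inverse of this diagonal K
    h1, h2 = matvec(K, r1), matvec(K, r2)

    def bform(a, b):  # a^T K^-T K^-1 b = (K^-1 a) . (K^-1 b)
        ka, kb = matvec(Kinv, a), matvec(Kinv, b)
        return sum(x * y for x, y in zip(ka, kb))

    return bform(h1, h2), bform(h1, h1) - bform(h2, h2)

# r1, r2 = first two columns of a rotation about the z-axis (orthonormal):
c, s = math.cos(0.3), math.sin(0.3)
print(check_constraints(2.0, [c, s, 0.0], [-s, c, 0.0]))  # both values ≈ 0
```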

15
Q

How are the radial distortion coefficients k₁, k₂ estimated?

A

The parameters k₁, k₂ can be estimated by solving 2nm equations, where each ideal pixel position (u, v) is computed from the actual pixel position (x, y), considering m points in n images. The linear least-squares solution is given by:
k = (DᵀD)⁻¹Dᵀd
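For two unknowns the normal equations (DᵀD)k = Dᵀd form a 2×2 system that can be solved in closed form. A sketch on synthetic data (all names and values are ours):

```python
def solve_radial_ls(D, d):
    """Linear least-squares solution k = (D^T D)^-1 D^T d for the two
    radial distortion coefficients; D is n x 2, d has length n."""
    a = sum(row[0] * row[0] for row in D)
    b = sum(row[0] * row[1] for row in D)
    c = sum(row[1] * row[1] for row in D)
    p = sum(row[0] * di for row, di in zip(D, d))
    q = sum(row[1] * di for row, di in zip(D, d))
    det = a * c - b * b  # invert the 2x2 normal matrix in closed form
    return ((c * p - b * q) / det, (a * q - b * p) / det)

# Noise-free data generated with k1 = -0.2, k2 = 0.05 is recovered:
rows = [(r2, r2 * r2) for r2 in (0.1, 0.4, 0.9, 1.6)]
d = [-0.2 * r2 + 0.05 * r4 for r2, r4 in rows]
print(solve_radial_ls(rows, d))  # k1 ≈ -0.2, k2 ≈ 0.05
```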

16
Q

How is distortion correction applied to an image?

A

After camera calibration, distortions can be removed by applying the radial distortion parameters. In MATLAB, this can be done using the undistortImage() function.

17
Q

Why is perspective transformation necessary for lane detection?

A

The image obtained from the car camera contains non-road areas (sky, trees) that would increase computational complexity and interfere with lane information. Only the region of interest (ROI) containing the road is selected.

18
Q

What is inverse perspective transformation in the context of lane detection?

A

Inverse perspective transformation eliminates the perspective effect and produces a top-view image (Bird’s Eye View) in which lane lines appear vertical and parallel to each other, facilitating identification in subsequent algorithms.

19
Q

How is inverse perspective transformation mathematically expressed?

A

If (u, v) denotes the ROI coordinate system and (x, y) denotes the transformed top-view system, the mapping relationship can be expressed as:
P = QM
Where P = [xw yw w], x = xw/w, y = yw/w, Q = [u v 1], and M is the transformation matrix from the rectangle to the inverted trapezoid.
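Following the row-vector convention P = QM used above, a single point can be warped like this (the helper name and the illustrative matrix are ours; a real M maps the trapezoidal road region to a rectangle):

```python
def warp_point(u, v, M):
    """Map ROI coordinates (u, v) to top-view coordinates (x, y) via the
    row-vector form [xw, yw, w] = [u, v, 1] M, then x = xw/w, y = yw/w."""
    Q = (u, v, 1.0)
    xw, yw, w = [sum(Q[k] * M[k][j] for k in range(3)) for j in range(3)]
    return (xw / w, yw / w)

# Illustrative only: the identity matrix leaves points unchanged.
M_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(warp_point(10.0, 20.0, M_identity))  # -> (10.0, 20.0)
```

In practice M is estimated from four point correspondences between the ROI and the desired top view (OpenCV's `getPerspectiveTransform` computes such a matrix, albeit in the column-vector convention).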

20
Q

What are the main steps for lane detection?

A

Lane detection consists of:

  1. Grayscale conversion and noise reduction
  2. Image binarization
  3. Extraction of lane characteristics
21
Q

How does the Gaussian blur filter work in noise reduction?

A

The Gaussian blur filter is defined by a convolution matrix (kernel). The filter is applied by convolving the kernel with the image:
g(x, y) = w ⊗ f(x, y) = ΣΣ w(s, t) * f(x-s, y-t)
For Gaussian blur, the kernel w is typically 3x3 or 5x5.
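The double sum above can be transcribed directly. A sketch with a common 3×3 Gaussian approximation (borders are left untouched for brevity; helper name and test image are ours):

```python
def convolve(image, kernel):
    """Apply g(x, y) = sum_s sum_t w(s, t) * f(x - s, y - t) with a 3x3
    kernel; border pixels are simply copied from the input."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(
                kernel[s + 1][t + 1] * image[y - s][x - t]
                for s in (-1, 0, 1) for t in (-1, 0, 1)
            )
    return out

# A common 3x3 Gaussian approximation (weights sum to 1):
gauss = [[1/16, 2/16, 1/16], [2/16, 4/16, 2/16], [1/16, 2/16, 1/16]]
img = [[0, 0, 0], [0, 16, 0], [0, 0, 0]]
print(convolve(img, gauss)[1][1])  # the center spike is attenuated to 4.0
```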

22
Q

How is image binarization performed for lane detection?

A

The image is binarized based on grayscale according to the rule:
b(x, y) = {
255 if g(x, y) ≥ Th
0 if g(x, y) < Th
}
where Th is a threshold determined iteratively to adapt to illumination conditions.
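The rule transcribes directly; for the iterative threshold, the sketch below uses the intermeans (ISODATA) scheme, which is an assumption on our part since the card does not name the method:

```python
def binarize(gray, th):
    """Threshold a grayscale image: 255 where g(x, y) >= Th, else 0."""
    return [[255 if px >= th else 0 for px in row] for row in gray]

def iterative_threshold(gray, eps=0.5):
    """Intermeans/ISODATA rule (assumed, not stated in the source): start
    from the global mean, then repeatedly set Th to the average of the
    means of the two classes it induces, until Th stabilizes."""
    pixels = [p for row in gray for p in row]
    th = sum(pixels) / len(pixels)
    while True:
        lo = [p for p in pixels if p < th]
        hi = [p for p in pixels if p >= th]
        new = (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2 if lo and hi else th
        if abs(new - th) < eps:
            return new
        th = new

row = [30, 90, 200, 140, 60]
print(iterative_threshold([row]))   # converges to a value between the classes
print(binarize([row], 128))         # -> [[0, 0, 255, 255, 0]]
```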

23
Q

How are lane characteristics extracted from the binarized image?

A

In the binary image, pixels corresponding to the line are 255, while others are 0. Pixels in each column of the image are summed and a histogram is built. The position of the two peaks in the histogram represents the position of the lane delimiters.
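The column-sum histogram and its peaks can be sketched as follows; splitting the histogram at its midpoint to separate the left and right delimiters is a common refinement not stated explicitly in the card (helper name and test image are ours):

```python
def lane_base_positions(binary):
    """Sum each column of a binary (0/255) image and return the peak
    column in each half as the two lane-delimiter base positions."""
    w = len(binary[0])
    hist = [sum(row[x] for row in binary) for x in range(w)]
    mid = w // 2
    left = max(range(mid), key=lambda x: hist[x])
    right = max(range(mid, w), key=lambda x: hist[x])
    return left, right

# Two bright vertical lines at columns 2 and 7 of a 10-column image:
img = [[255 if x in (2, 7) else 0 for x in range(10)] for _ in range(5)]
print(lane_base_positions(img))  # -> (2, 7)
```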