Exam Flashcards

1
Q

Two main reasons for using lenses?

A
  1. gather more light onto each image point from many rays leaving the object (under ideal pinhole conditions only a single ray from each object point reaches its image point on the image plane)
  2. focus the image on the image plane (a real, finite-size pinhole lets a whole cone of light rays through, creating a blurred image); see the thin-lens relation sketched below
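
Focusing is usually summarized by the standard thin-lens equation (common textbook notation, assumed here rather than taken from these cards):

    \frac{1}{z_o} + \frac{1}{z_i} = \frac{1}{f}

where z_o is the object distance, z_i the image distance, and f the focal length; only object points whose distance satisfies this relation for the given sensor distance are imaged in sharp focus.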
2
Q

Which type of lens model is more realistic?

A

Thick lenses

3
Q

What are barrel and pincushion distortion?

A

Barrel distortion occurs when the camera has a wide field of view (short focal length): the image appears bowed outward, so straight lines look curved near the edges. Pincushion distortion occurs with a long focal length and has the opposite effect: straight lines appear to bow inward toward the center.

4
Q

When do aberrations happen?

A

When light rays from a single object point do not intersect in a single image point, which is the case when using thick lenses.

5
Q

What is Vignetting?

A

A gradual darkening of the image toward its corners, which can be caused by combining multiple lens elements.

6
Q

What is depth of field related to?

A

The aperture size: making the aperture smaller increases the depth of field.

7
Q

What is Focal Length?

A

Focal length is the distance from the center of the lens to the imaging point (focal plane) where the light for the image is collected. When a lens is described as a “50mm lens,” it is referring to its focal length.

8
Q

What is the diameter of a lens?

A

The aperture (the lens diameter).

9
Q

What is f-number?

A

The ratio of the focal length to the aperture diameter.
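
In symbols (standard definition, with f the focal length and D the aperture diameter):

    N = \frac{f}{D}

For example, a 50 mm lens with a 25 mm aperture diameter has N = 2, written f/2.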

10
Q

The larger the f-number, the larger …. , but the …. the sensor

A

the depth of focus / less light reaches

11
Q

How does the sensor perceive different colors?

A

By placing a different color filter over different pixels (a color filter array such as the Bayer pattern), so each pixel measures only one color channel.

12
Q

Is the response of the camera sensor to intensity linear?

A

Yes, the raw sensor response is linear, but it is mapped to a non-linear form to produce a pleasing image, since the human visual system is not linear.

13
Q

Why is adapting the camera's intensity response to produce pleasing images a problem for computer vision?

A

Computer vision usually needs the linear response, because that is what reflects the true scene radiance. So you have to know the non-linear mapping and invert it to recover the linear response, which relies on camera calibration, because every camera's response curve is different.
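
A minimal sketch of that inversion, assuming the camera applies a simple gamma curve (real cameras need a calibrated response curve; the value 2.2 is only illustrative):

    import numpy as np

    GAMMA = 2.2  # assumed encoding gamma, not a measured camera curve

    def apply_response(linear):
        # Forward, non-linear encoding as a camera might apply it to 8-bit output.
        return np.clip(255.0 * linear ** (1.0 / GAMMA), 0, 255).astype(np.uint8)

    def linearize(encoded):
        # Inverse mapping: recover an (approximately) linear intensity in [0, 1].
        return (encoded.astype(np.float64) / 255.0) ** GAMMA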

14
Q

What is demosaicing?

A

The process of converting a raw CCD or CMOS image (in which each pixel contains only a blue, green, or red sample) into an image with full RGB values at every pixel.
This is usually done by interpolation.
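
A minimal bilinear-style demosaicing sketch for an assumed RGGB Bayer layout (real pipelines use edge-aware interpolation; this only illustrates the idea):

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(raw):
        # raw: 2-D array where each pixel holds a single R, G, or B sample (RGGB).
        h, w = raw.shape
        masks = np.zeros((h, w, 3))
        masks[0::2, 0::2, 0] = 1          # R at even rows, even columns
        masks[0::2, 1::2, 1] = 1          # G
        masks[1::2, 0::2, 1] = 1          # G
        masks[1::2, 1::2, 2] = 1          # B at odd rows, odd columns
        kernel = np.ones((3, 3))
        rgb = np.empty((h, w, 3))
        for c in range(3):
            # Average the known samples of this channel over each 3x3 neighbourhood.
            num = convolve(raw * masks[..., c], kernel, mode='mirror')
            den = convolve(masks[..., c], kernel, mode='mirror')
            rgb[..., c] = num / np.maximum(den, 1e-9)
        return rgb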

15
Q

What are the different types of noise?

A
  1. Read noise
  2. Dark (-current) noise
  3. Shot or photon noise
  4. Salt-and-pepper noise
16
Q

What is read noise?

A

It is the noise that is present in the image due to the sensor and readout electronics.
It is a random noise, so it will affect each pixel in the image differently.
It has a Gaussian distribution

17
Q

What is dark (-current) noise?

A

Dark current noise, also known as thermal noise or dark noise, is a type of noise that is present in image sensors even when no light is entering the camera. It is caused by the random movement of electrons within the sensor, which generates a small electrical current. This current is independent of the light entering the camera and is present even in complete darkness.
Dark current noise is particularly problematic in long exposures or high temperature conditions, as it can accumulate over time and result in a brighter, noisy image.
It has a Poisson distribution

18
Q

What is shot or photon noise?

A

Shot noise, also known as photon noise, is a type of noise that is present in an image due to the random nature of the photons (or light particles) hitting the image sensor. It is caused by the finite number of photons that hit the sensor during a given exposure, leading to a random fluctuation in the number of electrons that are generated by the sensor. As a result, shot noise leads to variations in the brightness and color of the pixels in the image.
Shot noise is particularly noticeable in low-light situations, such as night photography or astrophotography, where the number of photons hitting the sensor is low. It can be reduced by increasing the exposure time or by using a larger aperture to increase the amount of light entering the camera. Shot noise also depends on the camera's ISO setting: the higher the ISO, the more visible the shot noise.

It is important to note that shot noise is a fundamental limitation of imaging and it cannot be completely eliminated, but it can be reduced or managed to a certain extent by using techniques such as image averaging, image stacking, or by using specialized sensors.
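
A minimal simulation sketch of the three sensor noise sources from the cards above (read, dark-current, and shot noise); all parameter values and function names are purely illustrative:

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_sensor(photon_flux, exposure_s, dark_rate=5.0, read_sigma=2.0):
        # photon_flux: expected photons per second for each pixel (2-D array).
        shot = rng.poisson(photon_flux * exposure_s)                    # shot (photon) noise, Poisson
        dark = rng.poisson(dark_rate * exposure_s, photon_flux.shape)   # dark-current noise, Poisson
        read = rng.normal(0.0, read_sigma, photon_flux.shape)           # read noise, Gaussian
        return shot + dark + read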

19
Q

What is salt-and-pepper noise?

A

Salt-and-pepper noise, also known as impulse noise, is a type of noise that appears in an image as small, randomly-spaced white or black pixels, giving the appearance of grains of salt and pepper sprinkled over the image. It is caused by errors in the image acquisition process, such as faulty or malfunctioning pixels on the image sensor, errors in the data transfer process, or defects in the storage media.

Salt-and-pepper noise is particularly noticeable in images with low contrast and can be visually distracting and make it difficult to discern fine details in the image. It can be reduced by using image processing techniques such as median filtering, which replaces the affected pixels with the median value of the surrounding pixels, or by using techniques such as morphological processing and filtering.

In short, salt-and-pepper noise stems from errors in image acquisition, transmission, compression, or storage (for example hardware failure), rather than from the light reaching the sensor.
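
A minimal sketch of the median-filter remedy mentioned above, with an illustrative corruption step (function names are ours, not from any particular source):

    import numpy as np
    from scipy.ndimage import median_filter

    rng = np.random.default_rng(0)

    def add_salt_and_pepper(img, fraction=0.05):
        # Corrupt a fraction of pixels with black or white impulses.
        noisy = img.copy()
        mask = rng.random(img.shape) < fraction
        noisy[mask] = rng.choice([0, 255], size=int(mask.sum()))
        return noisy

    def despeckle(img):
        # Replace each pixel by the median of its 3x3 neighbourhood.
        return median_filter(img, size=3)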

20
Q

What is the difference between intrinsic and extrinsic camera parameters?

A

In the context of computer vision and photogrammetry, intrinsic and extrinsic parameters refer to different aspects of the camera’s properties.

Intrinsic parameters are the properties of the camera that are inherent to the camera itself, such as the focal length, principal point, and lens distortion. These parameters describe how the camera lens projects 3D points onto the 2D image plane. The intrinsic parameters are independent of the camera’s position and orientation in the world, and can be determined by calibration using a known pattern or object.

On the other hand, extrinsic parameters are the properties of the camera that describe its position and orientation in the world. These parameters include the camera’s position and orientation in 3D space, as well as the rotation and translation vectors that describe how the camera moves in relation to the world. The extrinsic parameters can be determined by using a technique called structure from motion, where the camera’s position and orientation are estimated based on the correspondences between multiple images of the same scene.

In summary, intrinsic parameters describe the internal properties of the camera and how it projects 3D points onto the 2D image plane, while extrinsic parameters describe the camera’s position and orientation in the world.
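
This split is commonly summarized by the pinhole projection equation (standard notation, not taken from these cards):

    \mathbf{x} \sim K\,[R \mid \mathbf{t}]\,\mathbf{X}, \qquad
    K = \begin{pmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}

where K holds the intrinsic parameters (focal lengths, skew, principal point) and the rotation R and translation t are the extrinsic parameters.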

21
Q

What is camera calibration?

A

Finding intrinsic and extrinsic camera parameters

22
Q

What is the normalized image plane?

A

It is an idealized image plane that is perfectly aligned with the pinhole and lies at a distance of exactly 1 from it (so its center corresponds to the pinhole). This plane cannot physically exist; it is a mathematical construct.

23
Q

What do we actually do when calculating the intrinsic camera parameters?

A

In perspective projection we assume the captured image is projected onto a normalized image plane that is perfectly aligned with the sensor or pinhole, which is not the case in the real world: the image we actually get results from the misalignment between the physical sensor and the lens. So we need a transformation that maps the real image onto the normalized image plane before reasoning about the real world. The parameters of this transformation are the intrinsic camera parameters.
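
Written out, the transformation from normalized image coordinates (x_n, y_n) to pixel coordinates (u, v) usually takes the form (standard notation, assumed here rather than quoted from the cards):

    u = f_x\,x_n + s\,y_n + c_x, \qquad v = f_y\,y_n + c_y

where f_x, f_y, the skew s, and the principal point (c_x, c_y) are exactly the intrinsic parameters.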

24
Q

What is the principal point?

A

The origin of the camera coordinate system (the pinhole or lens center) projects onto a point C0 on the image plane, which is called the principal point of the camera.

25
Q

Types of radial distortion?

A
  1. Barrel
  2. Pincushion
  3. Mustache
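
A common polynomial model producing these distortions (standard notation, assumed here) maps a normalized point (x, y) with r^2 = x^2 + y^2 to:

    x_d = x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6), \qquad
    y_d = y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)

The signs and relative magnitudes of k_1, k_2, k_3 determine whether the result looks like barrel, pincushion, or mustache distortion (the exact sign convention depends on whether the model maps ideal to distorted coordinates or the reverse).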
26
Q

Why is brute force not a good method for finding corresponding points in images?

A

First, its complexity is O(n^2), so it is not efficient; second, it may produce many false positives because of visually similar points.
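
A minimal sketch of such a brute-force matcher over feature descriptors, just to show where the O(n^2) (more precisely O(n*m)) cost comes from:

    import numpy as np

    def brute_force_match(desc_a, desc_b):
        # desc_a: (n, d) and desc_b: (m, d) arrays of feature descriptors.
        # Every descriptor in A is compared against every descriptor in B.
        dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
        return dists.argmin(axis=1)  # nearest descriptor in B for each row of A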

27
Q

What is the optical center?

A

The point inside the camera at which all light rays intersect in 3D.

28
Q

What does up-to-scale mean?

A

It means the solution is determined only up to an unknown scale factor: any scalar multiple of it is equally consistent with the data, so the absolute scale cannot be recovered (for example, monocular reconstruction recovers the shape of the scene but not its true size).

29
Q

What is the difference between feature-based and gray-scale matching?

A

Feature-based matching uses extracted features to establish correspondences, while gray-scale matching uses the raw intensity information directly.
Gray-scale matching is sensitive to lighting conditions, but it is easier and cheaper to compute.

30
Q

What is the difference between active and passive stereo systems?

A

A passive stereo system is a stereo vision system that uses natural light to form an image of the scene. It uses two or more cameras with overlapping fields of view to capture the same scene from different viewpoints, and then uses the differences in the captured images to reconstruct a depth map of the scene. Passive stereo systems are simple and inexpensive, but they have limited range, accuracy, and robustness in challenging lighting conditions.

An active stereo system, on the other hand, uses an active illumination source, such as a laser or a structured light pattern, to project light onto the scene. It then uses the pattern of light on the scene to estimate the depth information. Active stereo systems have higher range, accuracy, and robustness compared to passive stereo systems, but they are also more complex and expensive.
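
For a rectified passive stereo pair, depth is then typically recovered from the matched disparities with the standard relation (common notation, not taken from these cards):

    Z = \frac{f\,B}{d}

where f is the focal length, B the baseline between the two cameras, and d the disparity of a corresponding point pair.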

31
Q

What is registration of depth maps?

A

The process in which different depth maps are aligned and merged to generate a more complete depth map is called registration. It is especially useful when there are occlusions and we want to combine depth maps from different views.