Image Acquisition Flashcards
What vector graphics and bitmaps are and how they are made for different use cases
Vector graphics and bitmaps are two different ways of storing digital images.
Vector graphics are images defined by mathematical descriptions of shapes (points, lines, and curves), so they can be scaled up or down without losing quality. Examples of vector graphics include logos, illustrations, and charts. Since they are resolution-independent, they are often used where the image needs to be resized, such as in printing or in website design.
Bitmaps (raster images), on the other hand, are images made up of pixels, which are small squares of color. Because they store a fixed grid of pixels, they lose quality when scaled up. They are well suited to photographs and other images with continuous tones.
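A minimal sketch of the difference (no real image libraries; toy data): upscaling a bitmap can only duplicate existing pixels, which looks blocky, while a vector shape is re-evaluated from its equation at the new size, so it stays sharp.

```python
def scale_bitmap(pixels, factor):
    """Nearest-neighbor upscaling: each stored pixel is simply repeated."""
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in pixels for _ in range(factor)]

def render_circle(size):
    """'Vector' circle: re-rasterized from its equation at any requested size."""
    r = size / 2
    return [[1 if (x - r) ** 2 + (y - r) ** 2 <= r * r else 0
             for x in range(size)] for y in range(size)]

bitmap = [[0, 1], [1, 0]]
big = scale_bitmap(bitmap, 3)   # 6x6 image of blocky 3x3 squares
circle = render_circle(12)      # smooth outline at any size you ask for
```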
What light is, and how the human eye perceives it
Light is a type of energy that allows us to see things. Our eyes have special cells called rods and cones that detect light and turn it into signals that our brain interprets as images. Rods help us to see in low light and detect movement, while cones allow us to see different colors. When light enters the eye, it passes through the cornea, pupil, and lens before reaching the retina, where the rods and cones are located; these then send signals to the brain via the optic nerve.
How a camera sensor is built
A camera sensor is made up of millions of tiny light-sensitive cells called photosites that are arranged in a grid on a semiconductor material. When light hits these photosites, they convert the light into electrical signals. These signals are then processed by the camera’s image processor to create a digital image. Additionally, a color filter array is placed on top of the sensor, which allows the sensor to differentiate between different colors of light and create a full-color image.
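As a sketch of the color filter array idea, a common layout (assumed here) is the Bayer pattern (RGGB): each photosite measures only one color, and the missing two channels are later interpolated ("demosaiced") by the image processor from neighboring photosites.

```python
def bayer_color(x, y):
    """Color of the filter over photosite (x, y) in an RGGB Bayer mosaic."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

# Top-left 4x4 corner of the mosaic. Note that green appears twice as
# often as red or blue, roughly matching the eye's higher sensitivity
# to green light.
mosaic = [[bayer_color(x, y) for x in range(4)] for y in range(4)]
```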
What the difference between CMOS and CCD sensors is
CMOS and CCD sensors are two different types of image sensors used in digital cameras. CMOS sensors are more cost-effective and consume less power, but they may produce more noise in low-light conditions. CCD sensors are more expensive and consume more power, but produce less noise and better image quality in low light.
What quantum efficiency of a camera sensor is and what it means for the images acquired
Quantum efficiency of a camera sensor is a measure of the sensor's ability to convert incoming light (photons) into an electrical signal (electrons). A higher quantum efficiency means that more of the incoming light is converted into signal and less is wasted, resulting in better image quality and less noise in low-light conditions.
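The definition can be written as a simple ratio; the numbers below are made up for illustration.

```python
def quantum_efficiency(electrons_generated, photons_incident):
    """QE = fraction of incident photons converted into photoelectrons."""
    return electrons_generated / photons_incident

# A hypothetical sensor that converts 60 of 100 incident photons:
qe = quantum_efficiency(60, 100)   # 0.6, i.e. 60% QE
# At the same light level, a higher QE gives a stronger signal relative
# to sensor noise, which is why high-QE sensors do better in low light.
```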
What the rolling shutter effect is
The rolling shutter effect is a distortion that occurs because a camera sensor reads the image row by row, from top to bottom, rather than all at once. If the subject moves quickly, or the camera moves while the frame is being captured, different rows record the scene at slightly different times, producing a skewed or wobbly image.
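A toy model (assumed parameters) of why this skews the image: rows are read one after another, so a vertical edge moving sideways during readout is recorded at a different horizontal position in each row and comes out slanted.

```python
def rolling_shutter_positions(rows, speed_px_per_row, start_x=0):
    """x-position of a moving vertical edge as each row is read out."""
    return [start_x + row * speed_px_per_row for row in range(rows)]

# An edge moving 2 px per row-readout, captured over 5 rows:
positions = rolling_shutter_positions(5, 2)
# A global shutter would record the same x in every row; here each row
# sees the edge further along, producing the characteristic skew.
```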
What a light field camera is, how it works, what it can do that other cameras can’t, and what its limitations are
A light field camera (plenoptic camera) captures not only the intensity of the light hitting the sensor but also the direction of the incoming light rays. This allows the focus, aperture, and perspective of the image to be changed after it has been taken, and also enables 3D reconstruction of the scene. Its limitations include high cost, large size, and lower resolution compared to traditional cameras.
How does the aperture of a camera change the incidence of light to a sensor? What effect does the diameter of the aperture have on the image?
Aperture is a setting in a camera that controls the amount of light that enters the lens and reaches the sensor. A larger aperture allows more light to enter the camera and a smaller aperture allows less light to enter. The diameter of the aperture affects the depth of field, which is the area of the image that appears in focus. A larger aperture (smaller f-number) will result in a shallower depth of field, which means that only a small part of the image will be in focus and the background will be blurred. A smaller aperture (larger f-number) will result in a deeper depth of field, which means that more of the image will be in focus.
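The relationships above can be sketched numerically: the f-number is the focal length divided by the aperture diameter, and the amount of light gathered scales with the aperture area, i.e. with the diameter squared. The lens values below are example numbers, not from the text.

```python
def f_number(focal_length_mm, aperture_diameter_mm):
    """N = f / D: focal length divided by aperture diameter."""
    return focal_length_mm / aperture_diameter_mm

def relative_light(f_number_a, f_number_b):
    """How much more light f/a gathers than f/b (aperture area ratio)."""
    return (f_number_b / f_number_a) ** 2

n = f_number(50, 25)              # a 50 mm lens with a 25 mm aperture is f/2
ratio = relative_light(2.0, 2.8)  # ~2x: stopping down one full stop
                                  # roughly halves the light reaching the sensor
```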
Please sketch the structure of a single cell of a camera sensor and name the following parts: IR filter, microlens, color filter, photosite
A camera sensor cell, also known as a pixel, typically consists of several layers. The basic structure of a single cell is as follows:
IR filter: This is the topmost layer of the pixel. It blocks infrared light and allows only visible light to pass through to the other layers.
Microlens: This is a small lens that sits above the color filter array. Its purpose is to focus light onto the photosite.
Color filter: This is a layer of colored filters that sits above the photosite. It is used to separate light into different color channels (red, green, and blue).
Photosite: Also known as the photosensitive area, this is the bottom layer of the pixel. It is the tiny area where light is converted into an electrical charge, which is then read out to produce the final image.