Lecture 1 Flashcards

1
Q

Digital Image

A
  • N-dimensional matrix of intensity values
  • Each cell of the matrix is a pixel or voxel
    o Voxel is 3D equivalent of a pixel
  • Conversion from the real world into a digital image by:
    o Sampling: digitising the coordinate values
    ▪ How we divide the image into pixels
    ▪ Result of limited spatial resolution
    o Quantisation: digitising the amplitude values
    ▪ How we measure colour intensity (for example, the blackness/whiteness of an image)
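A minimal sketch of both steps in NumPy (not from the lecture): `scene` is a hypothetical continuous function standing in for the real-world signal, and the grid size, spacing, and number of grey levels are illustrative choices.

```python
import numpy as np

def digitise(scene, M=256, N=256, dx=1.0, dy=1.0, levels=256):
    """Sample a continuous scene on an M x N grid and quantise the amplitudes."""
    # Sampling: evaluate the scene at the regular grid coordinates (j*dx, k*dy).
    x = np.arange(M) * dx
    y = np.arange(N) * dy
    samples = scene(x[:, None], y[None, :])
    # Quantisation: map the continuous amplitudes to `levels` integer grey values.
    lo, hi = samples.min(), samples.max()
    return np.round((samples - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)

# Example with a synthetic scene:
img = digitise(lambda x, y: np.sin(x / 20) + np.cos(y / 30))
```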
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
2
Q

Sampling

A
  • Ideal sampling s(x,y) in a regular grid of MxN locations can be represented using a collection
    of Dirac functions
    o s(x,y) = Σ_{j=1..M} Σ_{k=1..N} δ(x − jΔx, y − kΔy)
    ▪ Δx := horizontal spacing
    ▪ Δy := vertical spacing
    o For each point in the grid, we can compute the Dirac function
    ▪ Translates to an amplitude value
    ▪ The grid defines the spacing between pixels
    ▪ Pixel coverage defines what area of the image has the same Dirac function value
      • Larger pixel coverage -> lower resolution
    ▪ Parameters of the grid are stored in the image header (e.g. the DICOM format)
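A small sketch of the sampling grid itself; the values of M, N, Δx and Δy below are assumptions, and in practice they would be read from the image header (e.g. the PixelSpacing field of a DICOM file).

```python
import numpy as np

M, N = 4, 3          # number of grid locations per direction
dx, dy = 0.5, 0.5    # horizontal / vertical spacing (e.g. in mm), assumed values

# Sampling locations (j*dx, k*dy) for j = 1..M, k = 1..N: the points where the
# Dirac functions of s(x, y) are non-zero.
j = np.arange(1, M + 1)
k = np.arange(1, N + 1)
X, Y = np.meshgrid(j * dx, k * dy, indexing="ij")
grid = np.stack([X, Y], axis=-1)   # shape (M, N, 2): one (x, y) pair per pixel
```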
3
Q

Quantisation

A
  • Limited intensity values -> require quantisation
    o Often map intensity values to grayscale values between 0 and 255
  • In CT, intensity is measured in Hounsfield Units (HU)
    o Range of [-1000, 1000]
    ▪ Increases with density (e.g. bone is around 1000)
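A sketch of the mapping from HU to 8-bit grey values, assuming `hu` is a CT image already expressed in Hounsfield Units; the [-1000, 1000] window matches the range mentioned above.

```python
import numpy as np

def hu_to_grayscale(hu, hu_min=-1000.0, hu_max=1000.0):
    """Quantise Hounsfield Units into 8-bit grey values (0-255)."""
    # Clip to the chosen HU window, then rescale linearly to [0, 255].
    hu = np.clip(hu.astype(float), hu_min, hu_max)
    return ((hu - hu_min) / (hu_max - hu_min) * 255).astype(np.uint8)
```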
4
Q

Intensity Transformations

A
  • Transformation T() of the original pixel value p from scale [p0,pk] into brightness q on a new
    scale [q0,qk], given by:
    o q = T(p)
    o This depends only on the pixel value, not on the position of the pixel
  • Log transformation boosts low grey-level values
  • Negative (inverse) transformation inverts the grey scale: dark becomes bright and vice versa
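A sketch of two such transformations in NumPy, assuming an 8-bit input image with grey levels in [0, 255].

```python
import numpy as np

def log_transform(img, L=256):
    """Boost low grey-level values; c rescales the output back to [0, L-1]."""
    c = (L - 1) / np.log(1 + img.max())
    return (c * np.log1p(img.astype(float))).astype(np.uint8)

def negative(img, L=256):
    """Invert the grey scale: dark becomes bright and vice versa."""
    return (L - 1) - img
```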
5
Q

Histogram matching

A
  • A histogram gives the frequency of each pixel value p in an image
    o Normalised: divide the frequencies by the number of pixels/voxels in the image
  • Histogram matching: transform the intensities of an input image so that its histogram matches
    the histogram of a target image
    o Compute the cumulative distribution function (CDF) of the histogram of both images, then
    map each input intensity to the target intensity with the same CDF value (see the sketch
    below)
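A sketch of CDF-based histogram matching; the bin count and intensity range are assumptions, and the images are expected as integer arrays in [0, 255].

```python
import numpy as np

def match_histograms(source, target, bins=256):
    """Map source intensities so that the source histogram resembles the target's."""
    # Histograms over the full 8-bit range, one bin per grey level.
    s_hist, edges = np.histogram(source.ravel(), bins=bins, range=(0, bins))
    t_hist, _     = np.histogram(target.ravel(), bins=bins, range=(0, bins))

    # Normalised cumulative distribution functions of both images.
    s_cdf = np.cumsum(s_hist) / source.size
    t_cdf = np.cumsum(t_hist) / target.size

    # For each source grey level, pick the target grey level with the same CDF value.
    grey_levels = np.arange(bins)
    mapping = np.interp(s_cdf, t_cdf, grey_levels)

    # Apply the lookup table to every pixel of the source image.
    return mapping[np.clip(source.astype(int), 0, bins - 1)]
```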
6
Q

Linear Contrast Stretching

A
  • Add more contrast to image by stretching
    intensity frequency distribution
    o Rescale range [p0,pk] to [0,1]
    o Then rescale the result to the desired range [q0,qk]
    o q = q0 + (qk − q0) · (p − p0) / (pk − p0)
  • Used to reveal structures that are different but have very
    similar intensity values
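A direct implementation of the formula above (a sketch: p0 and pk are taken from the image itself, and the default output range of [0, 255] is an assumption).

```python
import numpy as np

def contrast_stretch(img, q0=0.0, qk=255.0):
    """Linearly stretch the intensity range [p0, pk] of img onto [q0, qk]."""
    p0, pk = float(img.min()), float(img.max())
    # q = q0 + (qk - q0) * (p - p0) / (pk - p0)
    return q0 + (qk - q0) * (img.astype(float) - p0) / (pk - p0)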
7
Q

Spatial Filtering

A
  • Operations that work with the values of the image pixels in a
    neighbourhood and corresponding values of a sub-image that has the same
    dimensions as the neighbourhood
    o Called: filter, mask, kernel, template, window
    o Values in a kernel are referred to as coefficients, no longer as pixels
  • Filters should have an odd width and height so that there is a centre pixel
  • For example, calculating the correlation at each position
    o Pass input filter over input
    o Multiply input value by filter value, sum all multiplications
    o Move filter
  • Convolutions are the same as first rotating the filter by 180 degrees and then using
    correlation
  • Problems may occur at the border of an image: parts of the filter may extend beyond the image
    o Valid convolution: only allow positions where the entire filter lies inside the image
    ▪ Reduces the size of the output
    o Same convolution: add padding around the image (typically zeros) and then filter
    ▪ Keeps the output size the same (for a 3x3 filter, one padded row/column on each side)
    o Cyclic padding: copy valid rows/columns and place them around the original image
    ▪ Helps to learn features without bias towards the centre of the image
  • Padding does not add new information or artifacts to the image (the three border options are
    sketched below)
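A sketch of correlation vs. convolution and the three border-handling options, using SciPy; the 5x5 test image and 3x3 averaging kernel are only for illustration.

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

img = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0              # 3x3 averaging filter

# Convolution = correlation with the kernel rotated by 180 degrees
# (identical here because the averaging kernel is symmetric).
corr = correlate2d(img, kernel, mode="same", boundary="fill", fillvalue=0)

# Valid convolution: the kernel must lie entirely inside the image -> 3x3 output.
valid = convolve2d(img, kernel, mode="valid")

# Same convolution with zero padding: output keeps the 5x5 input size.
same = convolve2d(img, kernel, mode="same", boundary="fill", fillvalue=0)

# Cyclic padding: rows/columns wrap around the image borders.
cyclic = convolve2d(img, kernel, mode="same", boundary="wrap")
```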
8
Q

Types of filters

A
  • Image smoothing -> blurring and noise reduction
    o Based on averaging brightness values in a neighbourhood -> i.e. a filter whose coefficients
    are all 1
    o The kernel should be normalised (coefficients sum to 1)
  • Edge detection
    o Kernel should have a sign change -> measures the difference between adjacent pixels
    o If the kernel looks like an edge, it responds to edges
  • Image derivative
    o Measures the change of intensity in the x or y direction
    o Used to find structures along a certain direction
    o Prewitt gradient kernel: derivative in one direction, smoothing in the perpendicular
    direction
  • Blob detection
    o A blob is a region of a digital image in which all points are similar to each other and
    different from the surroundings (e.g. nodules)
    o Detected using the “Mexican hat” (Laplacian of Gaussian) filter
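A sketch of the three kernel families using SciPy; the random test image and kernel sizes are placeholders.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_laplace

img = np.random.rand(64, 64)

# Smoothing: all coefficients equal to 1, normalised so they sum to 1.
box = np.ones((3, 3)) / 9.0
smoothed = convolve(img, box)

# Prewitt kernel: derivative along x (sign change), smoothing along y.
prewitt_x = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)
edges_x = convolve(img, prewitt_x)

# Blob detection: Laplacian of Gaussian ("Mexican hat") at scale sigma.
blobs = gaussian_laplace(img, sigma=2.0)
```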
9
Q

Gaussian Filters

A
  • Gaussian operator in 1D:
    o Mu: mean value, controls position of the bell
    o Sigma: standard deviation, controls width of the bell
  • Gaussian operator in 2D:
    o Circular filter
    o Now no longer has a Mu, only has sigma as a parameter
    o Typically a normal distribution in two dimensions
    o Convolving this function with an image blurs the image
    ▪ Changes in sigma influence how blurry the image becomes
  • When defining the kernel size for the Gaussian function:
    o Rule of thumb: the Gaussian is approximately zero beyond about 3 sigma from the centre, so
    a kernel of roughly 6 sigma wide is sufficient
  • Derivative of Gaussian filters:
    o Convolution is linear and associative
    o Computing the derivative of an image can thus be done by convolving with the derivative of
    a Gaussian -> introduces a scale (sigma) dependency in the computation of the derivative
  • Also used for edge detection via first order derivative (gradient direction)
    o Edge at maximum of the first-order derivative (local maxima of gradient magnitude)
    o Edge at zero-crossing of the second-order derivative
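A sketch of Gaussian smoothing and Gaussian derivatives with scipy.ndimage; the sigma values and random test image are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

img = np.random.rand(128, 128)

# 2D Gaussian smoothing: larger sigma -> blurrier image.
blur_small = gaussian_filter(img, sigma=1.0)
blur_large = gaussian_filter(img, sigma=4.0)

# Derivative of Gaussian: since convolution is associative, d/dx (G * I) = (dG/dx) * I.
# order=1 along an axis convolves with the first derivative of the Gaussian.
dy = gaussian_filter(img, sigma=2.0, order=[1, 0])   # derivative along rows (y)
dx = gaussian_filter(img, sigma=2.0, order=[0, 1])   # derivative along columns (x)

# Edges lie at local maxima of the gradient magnitude.
gradient_magnitude = np.hypot(dx, dy)
```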
10
Q

Binary Morphology

A
  • Structuring element (SE): point set that is used as image probe
    o Own local origin
  • Morphological transformation: records locations where certain relations
    between image and SE are satisfied
  • Dilation: combines two sets by vector addition
    o The origin of the SE may or may not be a member of the SE itself
    o Example: with a two-cell horizontal SE, every object pixel also turns the pixel to its
    right into an object pixel (if it is not one already)
    o Used to fill in dotted lines
  • Erosion: combines two sets by vector subtraction
    o Removes noise while keeping structures intact
  • Opening: erosion followed by dilation
    o Smooth contours
    o Cut narrow bridges
    o Remove small islands and sharp corners
  • Closing: dilation followed by erosion
    o Fill narrow channels
    o Fill small holes
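A sketch of the four operations on a toy binary image with scipy.ndimage; the image and the 3x3 structuring element are placeholders.

```python
import numpy as np
from scipy.ndimage import (binary_dilation, binary_erosion,
                           binary_opening, binary_closing)

img = np.zeros((7, 7), dtype=bool)
img[2:5, 2:5] = True                 # a small square object
img[0, 0] = True                     # an isolated noise pixel

se = np.ones((3, 3), dtype=bool)     # structuring element, origin at its centre

dilated = binary_dilation(img, structure=se)   # grows objects, fills gaps/dotted lines
eroded  = binary_erosion(img, structure=se)    # shrinks objects, removes the noise pixel
opened  = binary_opening(img, structure=se)    # erosion then dilation: removes small islands
closed  = binary_closing(img, structure=se)    # dilation then erosion: fills small holes
```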
11
Q

Segmentation

A
  • Dividing image into segments
  • Thresholding: grey value remapping operation that results in a binary
    image
    o Divides image into two segments
    o Usually to identify a single object (segment 1) and background (segment 0)
    o Threshold is tuneable parameter
    o Can use multiple thresholds to identify multiple objects that can be distinguished by
    their grey values
  • Automatic thresholding assumes that all objects have approximately the same grey level, which
    differs from the grey level of the background
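A minimal sketch of single- and multi-threshold segmentation; the threshold values are placeholders to be tuned per image.

```python
import numpy as np

def threshold(img, t):
    """Binary segmentation: 1 for object (grey value above t), 0 for background."""
    return (img > t).astype(np.uint8)

def multi_threshold(img, t1, t2):
    """Three segments (labels 0, 1, 2) separated by two thresholds t1 < t2."""
    return np.digitize(img, bins=[t1, t2]).astype(np.uint8)
```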
12
Q

Otsu algorithm

A
  • Given a threshold, we can define two classes
    o Pixels belonging to the background
    o Pixels belonging to the foreground
  • For each class, we can compute:
    o Mean value μ
    o Variance σ²
    o Weight ω = (number of pixels in the class) / (total number of pixels)
  • Finds the optimal threshold such that:
    o Intra-class variance is minimised
    o Inter-class variance is maximised (the two criteria are equivalent)
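A sketch of the algorithm that sweeps over candidate thresholds and maximises the inter-class variance; the 256 histogram bins are an assumption.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Return the threshold that maximises the inter-class (between-class) variance."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist / hist.sum()                        # probability of each grey level
    centers = (edges[:-1] + edges[1:]) / 2

    best_t, best_var = centers[0], -np.inf
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()        # class weights (background / foreground)
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:i] * centers[:i]).sum() / w0   # class means
        mu1 = (p[i:] * centers[i:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2     # inter-class variance
        if between > best_var:
            best_var, best_t = between, centers[i]
    return best_t
```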