Finals | Statistical Analysis and Information Entropy Flashcards

1
Q

The actual amount of information contained in an image, which can be computed

A

INFORMATION ENTROPY

2
Q

Describes how much randomness (or uncertainty) there is in a signal or an image; in other words, how much information is provided by the signal or image.

A

INFORMATION ENTROPY

3
Q

ENTROPY is aka

A

Shannon’s Entropy (1948) or Information Entropy

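The entropy defined in the cards above can be computed directly from an image's gray-level histogram. A minimal sketch (the function name and the 2×2 array are illustrative, not from the deck):

```python
import numpy as np

def shannon_entropy(image):
    """Shannon entropy H = -sum(p * log2(p)) over the pixel-value histogram."""
    values, counts = np.unique(image, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Hypothetical 2-bit "image": all four values equally likely -> H = 2 bits/pixel
img = np.array([[0, 1], [2, 3]])
print(shannon_entropy(img))  # 2.0
```

A constant image has entropy 0 (no uncertainty), which matches the idea that entropy measures the randomness in the signal.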
4
Q

Use of the same number of bits to represent all pixels.

A

Fixed-length coding

5
Q

Fixed-length coding

In this way, when a program reads an image file, it knows that the first 12 bits represent the (1) of the first pixel, the next 12 bits represent the second pixel, and so on, with no need for any (2) to represent the (3) of each pixel’s (4).

A
  1. value
  2. special symbol
  3. end
  4. data
6
Q

Uses a variable number of bits to represent pixel values.

A

Variable-length coding

7
Q

Variable-length coding

Provides short code-words for (1) characters and long code words for (2) characters.

A
  1. frequent
  2. infrequent
8
Q

Variable-length coding

Variability is primarily dependent on (1) of data.

A
  1. frequency of occurrence
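The savings from variable-length coding can be illustrated with a hypothetical pixel stream (the symbols and the code table below are made up for the example, with short code words for frequent symbols and long ones for infrequent symbols):

```python
import math

# Hypothetical pixel stream over 4 values; 'a' is frequent, 'c' and 'd' are not
pixels = "a" * 8 + "b" * 4 + "c" * 2 + "d" * 2

# Fixed-length coding: the same number of bits for every symbol
fixed_bits = math.ceil(math.log2(len(set(pixels))))   # 2 bits for 4 symbols
fixed_total = fixed_bits * len(pixels)                # 32 bits in total

# Variable-length coding: short codes for frequent symbols (hypothetical code)
code = {"a": "0", "b": "10", "c": "110", "d": "111"}
variable_total = sum(len(code[s]) for s in pixels)    # 28 bits in total

print(fixed_total, variable_total)
```

The variability here depends only on the frequency of occurrence of the data, as the cards state.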
9
Q

The use of more bits than are needed to convey a given amount of information

A

Coding redundancy

10
Q

Redundant code has a number of consequences including

A

bloated source code
reduced reliability
reduced maintainability

MaRe Bloated

11
Q

Two major methods of variable-length encoding to reduce coding redundancy

A

A. Huffman coding (Huffman, 1952)
B. Arithmetic coding (Abramson, 1963)

12
Q

ENCODED vs DECODED

A

ENCODED: Compressed image
DECODED: Compressed image that has been RECONSTRUCTED

13
Q

An entropy encoding algorithm used for lossless data compression.

A

HUFFMAN CODING

14
Q

Guarantees the uniqueness of the decoding process so that a set of codes can only represent one set of image values.

A

HUFFMAN CODING

15
Q

This replaces each pixel value of an image with a special code on a one-to-one basis.

A

HUFFMAN CODING

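A minimal, illustrative Huffman coder built over symbol frequencies (the helper name and the heap-based merge are one common way to construct the code, not taken from the deck). It shows the one-to-one, uniquely decodable codes the cards describe:

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman code: one code word per symbol, with frequent symbols
    getting shorter code words. Returns {symbol: bit-string}."""
    freq = Counter(data)
    if len(freq) == 1:                      # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, unique tiebreak, {symbol: partial code})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)     # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

code = huffman_code("aaaabbc")
encoded = "".join(code[s] for s in "aaaabbc")
```

Because no code word is a prefix of another, the decoding is unique: a given bit stream can only represent one set of image values.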
16
Q

This replaces an entire sequence of pixel values with one code.

A

ARITHMETIC CODING

17
Q

The main idea behind this coding is to assign each symbol an interval.

A

ARITHMETIC CODING

18
Q

It consists of only a few arithmetic operations, so its complexity is low. In terms of complexity, it is asymptotically better than Huffman coding.

A

ARITHMETIC CODING
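The interval idea from the cards above can be sketched as follows (the probability model and message are hypothetical; a real coder would also handle precision and termination):

```python
def arithmetic_encode(message, probs):
    """Illustrative arithmetic coding: each symbol gets a sub-interval of
    [0, 1); encoding narrows the interval once per symbol, so the whole
    message ends up represented by a single final interval."""
    intervals, low = {}, 0.0
    for sym, p in probs.items():            # assign each symbol an interval
        intervals[sym] = (low, low + p)
        low += p
    low, high = 0.0, 1.0
    for sym in message:
        s_low, s_high = intervals[sym]
        width = high - low
        high = low + width * s_high
        low = low + width * s_low
    return low, high    # any number in [low, high) encodes the message

probs = {"a": 0.5, "b": 0.3, "c": 0.2}
low, high = arithmetic_encode("ab", probs)
```

Note there is no one-to-one mapping from symbols to code words here: the whole message is squeezed into one interval, matching the arithmetic-vs-Huffman contrast in the following cards.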

19
Q

ARITHMETIC vs HUFFMAN (statistical method?)

A

ARITHMETIC: Not a statistical method
HUFFMAN: Is a statistical method

20
Q

ARITHMETIC vs HUFFMAN (Result?)

A

ARITHMETIC: Yields an optimum result
HUFFMAN: Does not yield an optimum result

21
Q

ARITHMETIC vs HUFFMAN (1 to 1 correspondence)

A

ARITHMETIC: No one-to-one correspondence between source symbol and code word
HUFFMAN: There is a one-to-one correspondence between source symbol and code word

22
Q

ARITHMETIC vs HUFFMAN (example?)

A

ARITHMETIC: If a, b, c are messages, then only one unique code is assigned to the entire message
HUFFMAN: If a, b, c are messages, then a separate code word is assigned to each

23
Q

Fourier transform pair

A

the two functions f(x, y) and F(u, v)
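As a sketch, the discrete 2-D transform pair for an M × N image is usually written as:

```latex
F(u,v) = \sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\, e^{-j2\pi\left(\frac{ux}{M} + \frac{vy}{N}\right)}

f(x,y) = \frac{1}{MN}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1} F(u,v)\, e^{\,j2\pi\left(\frac{ux}{M} + \frac{vy}{N}\right)}
```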

24
Q

Enable the transformation of a two-dimensional image from the spatial domain to the frequency domain, and vice versa.

A

The Fourier and the inverse transform

25
Q

Basically, the FT provides a —

A

frequency spectrum of a signal
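Using NumPy's FFT as one concrete realization of the transform pair (the 4×4 array is arbitrary): the forward transform takes the image to the frequency domain, the inverse brings it back.

```python
import numpy as np

# Hypothetical 2-D "image"
img = np.arange(16, dtype=float).reshape(4, 4)

spectrum = np.fft.fft2(img)          # spatial -> frequency domain
restored = np.fft.ifft2(spectrum)    # frequency -> spatial domain

# The magnitude spectrum is what is usually displayed as "the spectrum"
magnitude = np.abs(spectrum)
assert np.allclose(restored.real, img)
```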

26
Q

For medical images utilizing multi-modalities: In the transformation model, three categories can be identified according to the associated degrees of freedom:

A

rigid, affine, and nonlinear.

27
Q

The transformation involves only reflections, translations, and rotations. This suffices to register images of rigid objects (like bones).

A

RIGID REGISTRATION:

28
Q

Preserves the parallelism of lines, but neither their lengths nor their mutual angles.

A

AFFINE TRANSFORMATION:

29
Q

It extends the degrees of freedom of the rigid transformation with scaling and shearing in each image dimension.

A

AFFINE TRANSFORMATION:

30
Q

It is an appropriate transformation model when the image has been skewed during acquisition.

A

AFFINE TRANSFORMATION:
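A small sketch (the matrix values are hypothetical) showing that an affine transformation, built from rotation, scaling, shear, and translation, preserves the parallelism of lines while changing lengths and angles:

```python
import numpy as np

theta = np.deg2rad(30)
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
scale_shear = np.array([[1.5, 0.4],    # anisotropic scaling + shear
                        [0.0, 0.8]])
A = rotation @ scale_shear
t = np.array([5.0, -2.0])              # translation

def affine(p):
    """Apply the 2-D affine transform x -> A x + t."""
    return A @ p + t

# Two parallel segments stay parallel after the transform
d1 = affine(np.array([1.0, 1.0])) - affine(np.array([0.0, 0.0]))
d2 = affine(np.array([3.0, 4.0])) - affine(np.array([2.0, 3.0]))
cross = d1[0] * d2[1] - d1[1] * d2[0]  # zero cross product => parallel
assert abs(cross) < 1e-9
```

A rigid registration would use only the rotation (plus translation); the extra scale and shear factors are exactly the additional degrees of freedom the cards attribute to the affine model.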

31
Q

Allow the mapping of straight lines to curves.

A

NONLINEAR TRANSFORMATION

32
Q

The similarity measure can be calculated globally, on the entire image, or locally, on a sub-image.

A

NONLINEAR TRANSFORMATION

33
Q

Smoothness of the deformation can be achieved in different ways and the deformation can be either (1) (any deformation is allowed) or guided by an underlying physical model of material properties, such as (2) or (3)

A
  1. free-form
  2. tissue elasticity
  3. fluid flow