Exam Preparation Deck Flashcards

1
Q

State the lower bound on the probability of error when estimating X from Y, given the conditional entropy H(X|Y).

What is the name of that inequality?

A

Fano's inequality. If \hat{X} is an estimate of X computed from Y, and P_e = Pr(\hat{X} \neq X), then H(P_e) + P_e \log_2(|X| - 1) \geq H(X|Y), where H(P_e) is the binary entropy of P_e. This gives the lower bound P_e \geq (H(X|Y) - 1) / \log_2|X|.
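A quick numerical sanity check of the weakened bound (a minimal sketch; fano_lower_bound is just an illustrative name):

  import numpy as np

  def fano_lower_bound(h_x_given_y, alphabet_size):
      # Weakened Fano bound: P_e >= (H(X|Y) - 1) / log2(|X|), entropies in bits
      return max(0.0, (h_x_given_y - 1.0) / np.log2(alphabet_size))

  # Example: 8-symbol alphabet, 2 bits of uncertainty about X remain after observing Y
  print(fano_lower_bound(2.0, 8))   # 0.333..., so any estimator errs at least ~33% of the time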
2
Q

What is the probability density function of a Gaussian random variable?

A

p(x) = (1/\sqrt{2\pi\sigma^2}) e^{-(x - \mu)^2 / 2\sigma^2}, where \mu is the mean and \sigma^2 is the variance.

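A minimal numerical check of the formula (the values of \mu and \sigma are arbitrary):

  import numpy as np

  mu, sigma = 0.0, 2.0
  dx = 0.001
  x = np.arange(-40, 40, dx)
  p = np.exp(-(x - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

  print(np.sum(p) * dx)           # ~1.0: the density integrates to 1
  print(np.sum(p * x**2) * dx)    # ~4.0: the variance equals sigma^2 (= 4 here) when mu = 0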
3
Q

For a source with a fixed power level \sigma^2, what is its maximum (differential) entropy?

A

The Gaussian distribution maximizes the entropy for a given power \sigma^2, and the maximum differential entropy is h = (1/2) \log_2(2\pi e \sigma^2) bits.

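A small numerical sketch comparing the Gaussian against a uniform density of the same variance (the grid limits and step are arbitrary choices):

  import numpy as np

  def diff_entropy_bits(p, dx):
      # Numerical differential entropy -sum p*log2(p)*dx, skipping p = 0 points
      p = p[p > 0]
      return -np.sum(p * np.log2(p)) * dx

  dx = 0.0005
  x = np.arange(-20, 20, dx)

  sigma = 1.0
  gauss = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

  width = np.sqrt(12) * sigma             # uniform density with the same variance sigma^2
  uniform = np.where(np.abs(x) < width / 2, 1.0 / width, 0.0)

  print(diff_entropy_bits(gauss, dx))     # ~2.05 bits = 0.5 * log2(2*pi*e)
  print(diff_entropy_bits(uniform, dx))   # ~1.79 bits, strictly smaller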
4
Q

State Shannon’s Channel Coding theorem.

A

For a channel of capacity C and a source of entropy rate H < C, there exists a coding scheme such that the source can be transmitted through the channel with an error probability smaller than any arbitrary \epsilon > 0.

5
Q

State the noisy channel coding theorem for the band-limited Gaussian channel (the capacity formula). Which is more effective in increasing capacity: SNR or W?

A

C = W \log_2(1 + P/(N_0 W)), where P is the average transmitted power, N_0 is the noise power spectral density, and W is the bandwidth. One can show that C \to (P/N_0) \log_2 e as W tends to infinity, so capacity saturates with bandwidth. However, C grows without bound (though only logarithmically) when the SNR = P/(N_0 W) is increased, so raising the SNR is ultimately the more effective route.

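A minimal sketch with illustrative values of P and N_0 (not from the card), showing that C saturates in W but keeps growing with SNR:

  import numpy as np

  P, N0 = 1.0, 1.0e-3                        # illustrative power and noise spectral density

  def capacity(W):
      # Band-limited Gaussian channel: C = W * log2(1 + P / (N0 * W))
      return W * np.log2(1 + P / (N0 * W))

  for W in [1e2, 1e3, 1e4, 1e5, 1e6]:
      print(W, capacity(W))                  # approaches (P / N0) * log2(e) ~ 1443 bits/s

  W = 1e3                                    # fixed bandwidth: capacity keeps growing with SNR
  for snr in [1, 10, 100, 1000, 10000]:
      print(snr, W * np.log2(1 + snr))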
6
Q

What is a necessary condition on the codeword lengths such that the resulting code is uniquely decodable?

Please name the inequality.

A

The Kraft (Kraft-McMillan) inequality: the codeword lengths l_i must satisfy \sum_i D^{-l_i} \leq 1, where D is the size of the code alphabet (D = 2 for binary codes). It is necessary for unique decodability, and any set of lengths satisfying it can be realized by a prefix code.
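A quick check of the inequality for two candidate sets of binary codeword lengths (a minimal sketch):

  def kraft_sum(lengths, D=2):
      # Sum of D^(-l) over the codeword lengths; <= 1 is required for unique decodability
      return sum(D ** (-l) for l in lengths)

  print(kraft_sum([1, 2, 3, 3]))    # 1.0  -> realizable, e.g. {0, 10, 110, 111}
  print(kraft_sum([1, 1, 2]))       # 1.25 -> violates the inequality; no uniquely decodable code exists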
7
Q

What is the inescapable lower bound on the time-frequency uncertainty product, given by the uncertainty principle?

How does it show that information is fundamentally quantised?

A

The bound is \Delta t \Delta f \geq 1/(4\pi). Each independent datum therefore occupies an area of at least 1/(4\pi) in the time-frequency plane, so any closed region of area A can contain at most 4\pi A independent quanta of data.

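A numerical check (a sketch with an arbitrary Gaussian width) that a Gaussian pulse attains the 1/(4\pi) bound:

  import numpy as np

  dt = 0.001
  t = np.arange(-50, 50, dt)
  s = 1.0
  f = np.exp(-t**2 / (2 * s**2))              # Gaussian pulse, the minimal-uncertainty case

  def spread(axis, weight):
      # Root-mean-square width of the distribution given by |amplitude|^2 weights
      w = weight / np.sum(weight)
      mean = np.sum(axis * w)
      return np.sqrt(np.sum((axis - mean)**2 * w))

  dt_spread = spread(t, np.abs(f)**2)

  F = np.fft.fft(f)
  freqs = np.fft.fftfreq(len(t), d=dt)        # frequencies in cycles per unit time
  df_spread = spread(freqs, np.abs(F)**2)

  print(dt_spread * df_spread, 1 / (4 * np.pi))   # both ~0.0796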
8
Q

State the functional and Fourier form of the Gabor wavelet.

A

Functional form: a complex exponential modulated by a Gaussian,

f(x) = e^{-i\omega_0 (x - x_0)} e^{-(x - x_0)^2 / \alpha^2}.

Fourier form: the same shape with the roles of position and frequency interchanged,

F(\omega) = e^{-i x_0 (\omega - \omega_0)} e^{-(\omega - \omega_0)^2 \alpha^2 / 4}

(up to a normalization constant and the sign convention of the Fourier transform). The wavelet is localized around x_0 in space/time and around \omega_0 in frequency, and it attains the lower bound of the uncertainty relation.

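A minimal sketch generating a Gabor wavelet and checking that its spectrum is a bump at the carrier frequency (the parameter values and the sign convention are arbitrary choices):

  import numpy as np

  dt = 0.001
  t = np.arange(-10, 10, dt)
  alpha, w0 = 1.0, 2 * np.pi * 5.0                 # Gaussian width and a 5 Hz carrier

  gabor = np.exp(1j * w0 * t) * np.exp(-t**2 / alpha**2)

  spectrum = np.abs(np.fft.fft(gabor))
  freqs = np.fft.fftfreq(len(t), d=dt)
  print(freqs[np.argmax(spectrum)])                # ~5.0: a Gaussian bump centred on the carrier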
9
Q

Define the KL distance.

A

The Kullback-Leibler (KL) distance between distributions p and q is D(p||q) = \sum_x p(x) \log_2 (p(x)/q(x)). It measures the inefficiency, in extra bits per symbol, of coding optimally for q(x) while the real distribution is p(x). It is non-negative and zero only when p = q, but it is not symmetric.

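A minimal sketch of the definition for discrete distributions (the example distributions are arbitrary):

  import numpy as np

  def kl_bits(p, q):
      # D(p||q) = sum p * log2(p/q), in bits; terms with p = 0 contribute nothing
      p, q = np.asarray(p, float), np.asarray(q, float)
      mask = p > 0
      return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

  p = [0.5, 0.25, 0.25]      # true source distribution
  q = [1/3, 1/3, 1/3]        # distribution the code was designed for

  print(kl_bits(p, q))       # extra bits per symbol paid for using the wrong model
  print(kl_bits(q, p))       # different value: the KL distance is not symmetric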
10
Q

Explain how the Fourier/Gabor/DCT transforms manage to compress data.

A

Neighbouring pixels in an image are often highly correlated. These transforms use decorrelating expansion basis functions, so the resulting coefficients are highly non-uniform, with the energy concentrated in very few of them. This lowers the entropy, and by Shannon's source coding theorem shorter code lengths then become possible. Even lossless image coding can achieve substantial compression this way. If the coefficients are quantized coarsely in the encoding (as in JPEG), compression factors of around 30:1 can be obtained without much loss in image quality.

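A minimal sketch of the energy-compaction idea on a 1-D correlated signal (a random walk stands in for an image scanline, and the DCT matrix is built directly so that only numpy is needed):

  import numpy as np

  def dct_matrix(N):
      # Orthonormal DCT-II matrix: row k, column n = c_k * sqrt(2/N) * cos(pi*(n+0.5)*k/N)
      n = np.arange(N)
      k = n.reshape(-1, 1)
      C = np.sqrt(2.0 / N) * np.cos(np.pi * (n + 0.5) * k / N)
      C[0, :] /= np.sqrt(2)
      return C

  N = 64
  x = np.cumsum(np.random.default_rng(0).normal(0, 1, N))   # random walk: neighbours highly correlated

  coeffs = dct_matrix(N) @ x
  energy = np.sort(coeffs**2)[::-1]
  print(energy[:4].sum() / energy.sum())    # typically > 0.9: a handful of coefficients hold nearly all the energy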
11
Q

Define the auto-correlation function. How is it able to extract a periodic component from noise?

A

The auto-correlation of a signal f(t) is R(\tau) = \int f(t) f^*(t + \tau) dt (or the corresponding time average for random signals). The original periodic signal is coherent with itself across shifts, while the noise is not: in the correlation integral the noise terms average themselves out, so the periodic component survives in R(\tau) and its period can be read off from the spacing of the peaks.

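A minimal sketch of pulling a sine wave out of heavy noise via the auto-correlation (all parameter values are arbitrary):

  import numpy as np

  rng = np.random.default_rng(0)
  dt = 0.001
  t = np.arange(0, 20, dt)
  period = 0.5
  x = np.sin(2 * np.pi * t / period) + 2.0 * rng.normal(size=t.size)   # sine buried in noise
  x = x - x.mean()

  R = np.correlate(x, x, mode="full")[x.size - 1:]    # autocorrelation R(tau) for tau >= 0

  # The first strong peak after tau = 0 sits at the period of the hidden sine
  lags = np.arange(200, 750)                          # search lags between 0.2 s and 0.75 s
  print(lags[np.argmax(R[200:750])] * dt)             # ~0.5 s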
12
Q

What is the ideal interpolation function used for retrieving the input signal from its sampled points?

Given that the sampling frequency is f_s.

A

The sinc function. The band-limited signal is reconstructed exactly by placing a sinc kernel on each sample:

f(t) = \sum_n f(n/f_s) sinc(f_s t - n), with sinc(x) = \sin(\pi x)/(\pi x),

which is equivalent to ideal low-pass filtering of the sample train at the cutoff f_s/2.
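A minimal sketch of the reconstruction (the sine frequency and sampling rate are arbitrary; the infinite sum is truncated, which leaves a small error):

  import numpy as np

  fs = 10.0                                   # sampling frequency; the signal is band-limited below fs/2
  n = np.arange(-200, 200)
  samples = np.sin(2 * np.pi * 3.0 * n / fs)  # samples of a 3 Hz sine

  def reconstruct(t):
      # Whittaker-Shannon interpolation: a sinc kernel on every sample
      return np.sum(samples * np.sinc(fs * t - n))

  for t in [0.013, 0.21, 0.37]:
      print(reconstruct(t), np.sin(2 * np.pi * 3.0 * t))   # the two values agree closely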
13
Q

Describe the compression strategies of JPEG.

A
  1. Splits the image into a uniform grid of non-overlapping tiles, usually 8x8 pixels in size.
  2. Performs a DCT (Discrete Cosine Transform) on each tile, generating 64 DCT coefficients.
  3. Quantizes the resulting DCT coefficients, more coarsely for the coefficients associated with higher frequencies.
  4. Uses a quantization table to control the relative severity of the truncation.
  5. Many high-frequency components end up being 0 after quantization; the long runs of 0s are captured by run-length encoding (see the sketch below).
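A simplified sketch of the run-length idea (the zigzag scanning and the entropy coding of the pairs are omitted; the coefficient values are purely illustrative):

  def run_length_encode(coeffs):
      # Encode the stream as (preceding run of zeros, nonzero value) pairs, JPEG-style in spirit
      pairs, zero_run = [], 0
      for c in coeffs:
          if c == 0:
              zero_run += 1
          else:
              pairs.append((zero_run, c))
              zero_run = 0
      pairs.append("EOB")                    # end-of-block marker once only zeros remain
      return pairs

  # A typical quantized tile read out as a stream: a few nonzero low-frequency
  # coefficients followed by long runs of zeros
  quantized = [-26, -3, 0, -3, -2, -6, 2, -4, 1, -3, 0, 0, 0, 1] + [0] * 50
  print(run_length_encode(quantized))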
14
Q

Describe the compression strategies deployed by JPEG-2000.

How does it improve on JPEG?

Does it have any disadvantage?

A
  • Uses the Discrete Wavelet Transform (DWT), with smoothly attenuating Daubechies wavelets applied to the whole image rather than to tiles (see the sketch below).
  • The absence of sharply truncated tile boundaries avoids the coarse, blocky quantization artifacts of JPEG.
  • Performs much better than JPEG under severe compression, such as 20:1 or 50:1, whereas JPEG is only usable up to around 10:1.
  • Disadvantage: the implementation of the DWT is more complicated, even though Daubechies wavelets are orthogonal, and execution times are slower for both compression and decompression.
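A minimal sketch of the wavelet idea, using the simplest (Haar) wavelet as a stand-in for the Daubechies wavelets actually used by JPEG-2000:

  import numpy as np

  def haar_step(x):
      # One level of the Haar DWT: pairwise averages (approximation) and differences (detail)
      x = x.reshape(-1, 2)
      approx = (x[:, 0] + x[:, 1]) / np.sqrt(2)
      detail = (x[:, 0] - x[:, 1]) / np.sqrt(2)
      return approx, detail

  row = np.cos(np.linspace(0, np.pi, 64))   # a smooth "image row"
  approx, detail = haar_step(row)

  print(np.abs(detail).max())   # detail coefficients are tiny for smooth data -> cheap to code
  print(np.abs(approx).max())   # the energy lives in the coarse approximation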
15
Q

How does IrisCode work?

A
  • Segmentation (detecting the iris boundaries) and size normalization are performed to extract the iris pattern.
  • Regions of iris texture are projected onto a 2D Gabor wavelet basis.
  • The MSBs (sign bits) of the real and imaginary parts of the projection coefficients are encoded.
  • These MSBs amount to encoding the phase quadrant of each coefficient.
  • The binary pair forms a Gray code, so that a misjudgement between neighbouring quadrants flips only one bit.
  • About 250 bits of entropy are extracted from an iris, which is hugely discriminating.
  • The compact encoding of the phase structure allows fast searching and matching using bit-parallel XORs (see the sketch below).
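A minimal sketch of the phase-bit encoding and XOR matching (the coefficients are random stand-ins, not a real Gabor projection of an iris):

  import numpy as np

  rng = np.random.default_rng(0)

  def phase_bits(coeffs):
      # Two bits per complex coefficient: sign of the real part, sign of the imaginary part
      return np.stack([coeffs.real >= 0, coeffs.imag >= 0], axis=-1).astype(np.uint8).ravel()

  coeffs = rng.normal(size=1024) + 1j * rng.normal(size=1024)
  noise  = 0.2 * (rng.normal(size=1024) + 1j * rng.normal(size=1024))

  probe     = phase_bits(coeffs)              # code from one presentation of the eye
  same_eye  = phase_bits(coeffs + noise)      # the same eye, re-imaged with a little noise
  other_eye = phase_bits(rng.normal(size=1024) + 1j * rng.normal(size=1024))

  # Matching is a normalized Hamming distance computed with bitwise XOR
  print(np.mean(probe ^ same_eye))    # small fraction of disagreeing bits
  print(np.mean(probe ^ other_eye))   # ~0.5 for unrelated codes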
16
Q

List two results that prove bandlimited signals have finite degrees of freedom.

A
  1. Nyquist’s sampling theorem: If a signal is strictly bandlimited to some frequency W, sampling it at a rate 2W specifies the signal everywhere.
  2. Logan’s theorem: If a function is bandlimited to one octave, i.e. f_max < 2*f_min, then its set of zero-crossings fully specifies the function (up to a scale factor in amplitude).
17
Q

What does it mean for a sequence to be algorithmically random?

A

A sequence of length n is algorithmically random if its Kolmogorov complexity is at least n.

That is, the shortest program that generates the sequence is the listing of the sequence itself.