5. Spectral Analysis Flashcards

1
Q

How does a practical analog-to-digital converter affect a signal?

A

A practical analog-to-digital converter distorts the signal in three different ways:

  - (some small amount of) aliasing is inevitably inserted during sampling
  - quantization distortion is added by the quantizer
  - spectral distortion is introduced to the signal because of the finite order of the anti-aliasing filter

(p. 100)

2
Q

What is the difference between sample-based and frame-based (also called block-based) data transfer?

A

In a real-time application the signal is typically received on a sample or a frame basis. This means that the analog-to-digital converter communicates with the signal processing core at regular time instances only, each time passing isolated packets of one (sample-based data transfer) or multiple (frame-based, also called block-based data transfer) samples to (and from) the processing core.

(p. 101)

3
Q

What is a (non-)stationary signal? Give a practical example.

A

A signal is said to be stationary if the characteristics of the underlying process(es) do not change over time and if, as a result, the properties of the signal (such as amplitude, frequency content, …) that are observed over a reasonable amount of time remain invariant.

If, for instance, a hammer repeatedly hits a piano string at the same place and with the same speed, and if the properties of the string (e.g. its tension) do not change over time, a sound with the same frequency and timbre is always produced.

A non-stationary signal is, for instance, fluent speech, where sounds with different spectral content and pitch rapidly follow each other as the tongue, lips and jaw change position and the rate at which the vocal cords vibrate is altered.

(p. 102)

4
Q

What is a deterministic signal?

What is a stochastic signal? Give a practical example.

A

A signal is said to be deterministic if the underlying generative processes depend on known (environmental) factors such that their cumulative result (the signal) can be well described
and predicted. For example, a piano sound or vowels produced by a human voice.

If too many or unknown processes contribute to the cumulative result, the signal is called non-deterministic or stochastic. Stochastic signals tend to manifest themselves as noise. For example, EEG signals.

(p. 102)

5
Q

What will we use to compute the frequency spectrum of a discrete-time signal of finite length at a finite number of frequencies equally spread over the fundamental interval? And why?

A

To compute the frequency spectrum of a discrete-time signal of finite length at a finite number of frequencies equally spread over the fundamental interval the (DFT or) FFT can be used. This will typically result in a smaller number of operations than when the spectrum is directly obtained from equation 5.3, i.e. with the DTFT.

(p. 105)

6
Q

How can one improve the resolution of the frequency spectrum?

A

Zero padding, i.e. setting L > N, is a simple trick to refine the frequency grid of the spectral plot (see figure 5.2). Recall though that the frequency resolution can only be effectively improved by increasing N. Making L larger merely enhances the spectral plot, i.e. makes it smoother.

(p. 104)
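The difference between refining the grid and improving resolution can be checked numerically. A minimal NumPy sketch (the sampling rate, tone frequencies and L = 4N are made-up illustrative values):

```python
import numpy as np

fs = 100.0
N = 32
k = np.arange(N)
# two tones 3 Hz apart, i.e. closer than the DFT bin spacing fs/N ~ 3.1 Hz
x = np.sin(2*np.pi*20*k/fs) + np.sin(2*np.pi*23*k/fs)

X_N  = np.abs(np.fft.rfft(x))          # L = N: coarse frequency grid
X_4N = np.abs(np.fft.rfft(x, n=4*N))   # L = 4N: zero-padded, denser grid

# every 4th sample of the padded spectrum coincides with the unpadded one:
# zero padding only interpolates the same underlying DTFT, it adds no new
# spectral information, so the two tones still cannot be separated
```

The test that `X_4N[::4]` equals `X_N` exactly makes the point: the padded spectrum passes through the unpadded samples and merely fills in between them.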

7
Q

How can we practically compute the spectrum of a long stationary signal?

A

The analysis of a frame of samples rather than the entire signal is a practical way to compute the spectrum of an (infinitely-)long (quasi-)stationary signal, even though it gives rise to an approximate result only (see figure 5.3).

(p. 106)

8
Q

Make sure that you understand and can reproduce figure 5.4. You are not supposed to be able to replicate the values on the vertical axes.

A

Spectral analysis of a frame of N consecutive samples coming from a (co)sine wave signal of frequency f̄ = M·fs/N with M ∈ {0, 1, 2, …} and N ∈ {1, 2, 3, …}. On the left the signals are represented in the continuous-time domain, on the right the corresponding continuous Fourier transforms are shown. Notice that M = 2 and N = 20 in this example.

(p. 108)

9
Q

Make sure that you understand and can reproduce figure 5.6. You are not supposed to be able to replicate the values on the vertical axes.

A

Spectral analysis of a frame of N consecutive samples coming from a (co)sine wave of frequency f̄ = M·fs/N with N ∈ {1, 2, 3, …}, but M ∉ ℤ. On the left the signals are represented in the continuous-time domain, on the right the corresponding continuous Fourier transforms are shown. Notice that M = 2.44 and N = 20 in this example.

(p. 112)

10
Q

What does the DFT spectrum of a frame of N consecutive samples of a (co)sine wave typically look like?
From which artifacts does it suffer?
Why do they occur?

A

The resulting DFT spectrum XN[n] contains multiple frequency components that are spread all over the fundamental interval. The most prominent contributions nevertheless occur at the frequency bins n that are closest to M and N − M (with M ∉ ℤ).

The spectrum suffers from smearing and leakage, which occur because the frame contains a non-integer number of periods of the (co)sine wave.

(p. 111)

11
Q

What is smearing?

What is leakage?

A

The fact that a (co)sine wave is represented by multiple adjacent peaks around n = M and n = N − M is called smearing.

The fact that also frequency bins n far from M and N − M are non-zero is called leakage.

Figure 5.8 illustrates both.

(p. 113 - 114)
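Both artifacts are easy to reproduce numerically. A sketch with the M and N values from figures 5.4 and 5.6 (computed with NumPy rather than taken from the book):

```python
import numpy as np

N = 20
k = np.arange(N)
# M = 2: an integer number of periods -> two clean spectral lines
x_int  = np.cos(2*np.pi*2*k/N)
# M = 2.44: a non-integer number of periods -> smearing and leakage
x_frac = np.cos(2*np.pi*2.44*k/N)

S_int  = np.abs(np.fft.fft(x_int))
S_frac = np.abs(np.fft.fft(x_frac))
# S_int is (numerically) zero everywhere except at bins 2 and N-2 = 18,
# each carrying N/2 = 10; S_frac instead spreads energy over every bin
# of the fundamental interval
```

Checking a bin far from the peaks (e.g. bin 5) shows leakage directly: it is essentially zero in the integer-period case but clearly non-zero in the non-integer case.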

12
Q

To which errors may smearing and leakage give rise?

A
  1. Smearing and leakage may give misleading information about the spectral content of the analog signal the samples are derived from. They can insert spurious frequency components in the
    spectral plot or make some spectral information disappear (see figure 5.3). Bear in mind that the resulting frequency spectrum reveals the spectral content of the zero-padded/periodically
    repeated frame of signal samples rather than the spectral content of the analog signal the samples are derived from.
  2. Smearing and leakage cause errors in frequency, amplitude and phase measurements.
  3. Smearing and leakage owing to a strong frequency component can hide weaker components at nearby frequencies. This may lead to a detection problem.
  4. Smearing compromises the frequency resolution capacity of the DFT.
    (p. 114 - 115)
13
Q

Explain in words why the application of a windowing function other than the rectangular window can reduce the amount of leakage.

What is the price to pay?
Which trade-off exists?

A

Non-rectangular windowing functions weight each sample in the frame by a certain factor so that the signal amplitude is smoothly brought to zero at the beginning and at the end of the data frame, a process called apodization. In this way, discontinuities between periodic repetitions of the data frame are avoided; the corresponding W(f) has much smaller sidelobes, which lowers the amount of leakage.

A non-rectangular window (such as the example in the book) has a wider mainlobe than the rectangular window in figure 5.6, second plot on the right. This increases the amount of smearing and therefore compromises the resolution capacity of the DFT. Apparently, a small amount of leakage implies a large amount of smearing, and vice versa. The emphasis can be put on either frequency resolution (low smearing levels) or frequency detection (low leakage levels). However, whatever windowing function is used, the resulting spectral plot will always suffer to some extent from both smearing and leakage.

(p. 117)
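The leakage/smearing trade-off can be illustrated with a short sketch (a Hann window via NumPy; the bin used to probe leakage is an arbitrary choice far from the peak):

```python
import numpy as np

N = 20
k = np.arange(N)
x = np.cos(2*np.pi*2.44*k/N)      # non-integer number of periods -> leakage

rect = np.abs(np.fft.fft(x))                   # rectangular window
hann = np.abs(np.fft.fft(x * np.hanning(N)))   # apodized frame

# far away from the peak (e.g. bin 8) the Hann spectrum leaks much less,
# relative to its own mainlobe peak, than the rectangular one does;
# the price is a mainlobe that is roughly twice as wide (more smearing)
leak_rect = rect[8] / rect.max()
leak_hann = hann[8] / hann.max()
```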

14
Q

Give the spectral analysis of a non-rectangular windowed frame of N consecutive samples coming from a (co)sine wave.

A

Make sure that you understand and can reproduce figure 5.9. You are not supposed to be able to replicate the values on the vertical axes.

(p. 116)

15
Q

How does the window length relate to the performance?

A

The application of a window of length N comes down to N extra multiplications, so the computational cost grows linearly with the window length N.

(p. 118)

16
Q

What is scalloping loss?

A

Scalloping loss is the maximum estimation error that can occur with a specific window when using equation 5.24 to find the amplitude of a (co)sine wave or the strength of a signal component.

Also have a look at figure 5.12.

(p. 119 - 120)

17
Q

What is the highest sidelobe level?

A

The highest sidelobe level (HSL) is the level of the highest sidelobe peak |Wd(φHSL)| relative to the level of the mainlobe peak. It is usually expressed in dB.

Explain using figure 5.12.

(p. 119 - 120)

18
Q

What is sidelobe roll-off?

A

Sidelobe roll-off (SRO) is the rate at which the sidelobe peaks decrease with φ, as is illustrated in figure 5.12. Together with the highest sidelobe level, it predicts the amount of leakage that is inserted in the spectrum.

Explain using figure 5.12.

(p. 119/121)

19
Q

What is mainlobe width?

A

The mainlobe width (MLW) is defined as the size of the frequency interval between the first minimum of |Wd(φ)| on the left of the mainlobe peak and the first (symmetric) minimum on the right. The mainlobe width reflects the amount of smearing that is introduced by the window.

Explain using figure 5.12.

(p. 119/121)

20
Q

Give the name of a number of popular windowing functions.

Interpret table 5.2. Also have a look at figures 5.14, 5.15 and 5.16.

A

Rectangular window, Hann window, Hamming window, Bartlett window, Blackman window, Blackman-Harris window, flat-top window, Gaussian window, Kaiser-Bessel window.

(p. 121 - 128)

21
Q

When is the rectangular window used?

A

The rectangular window (w[k] = 1) is most appropriate if a small amount of smearing is required and leakage is not a point of concern, if the analysis frame contains an integer number of periods of the signal or if all the signal samples can be input into the D(T)FT at the same time such that no window needs to be applied.

(p. 122)

22
Q

What do flat-top windows offer?

A

Flat-top windows offer a small amount of scalloping loss and hence yield almost exact amplitude estimates for isolated frequency components.

(p. 126)

23
Q

Why are other windowing functions proposed (other than rectangular)?

A

Many other windowing functions (e.g. Hann, Hamming, Blackman, Kaiser-Bessel, …) have been proposed to limit the amount of leakage that is inserted in the spectral plot. Samples at the beginning and at the end of the analysis frame are weighted by a small factor, whereas in the middle of the frame the weights are close to 1. This smoothly brings the amplitudes at the beginning and at the end of the frame to zero prior to the frequency-domain transformation. In this way, severe discontinuities at the frame boundaries are avoided, which reduces the amount of leakage that is introduced, at the expense of more smearing.

(p. 117)

24
Q

What is a multisine signal?

A

A multisine signal is a sum of sine waves of different frequency, amplitude and phase.

(p. 126)

25
Q

Have a look at the application examples of section 5.4.4. Make sure you can explain what happens in figures 5.17, 5.18 and 5.19.

A

(p. 128 - 132)

26
Q

Read through the summary presented in section 5.4.5 and make sure you understand.

A

(p. 130 - 132)

27
Q

What is the mean value of a noise signal n[k]?

A

The mean value of the noise is defined as

       n̄[k] = E{n[k]}

where E{·} represents the expectation operator. Most often it is assumed that the noise is stationary. Under this assumption, the mean value of the noise n̄ = n̄[k] becomes independent of time. Because of the expectation operator that is involved, the mean value can be calculated only if a statistical model is available for the stochastic process that generates n. Luckily, if some conditions are fulfilled (which is mostly the case), the mean value can be estimated from the observed data samples n[k] by simply averaging over time: formula 5.49.

(p. 133)

28
Q

What is the variance of a noise signal n[k]?

A

The variance of the noise is given by

        σn^2[k] = E{|n[k] − n̄[k]|^2}

If the noise is stationary, equation 5.50 becomes independent of time:

       σn^2 = E{|n[k] − n̄|^2}

The variance can be estimated by averaging over time:

       formula 5.52

(p. 133)
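Under the stationarity assumption, both estimators amount to simple time averages. A minimal sketch (the mean 3.0 and standard deviation 0.5 are made-up test values, and the plain averages below stand in for formulas 5.49 and 5.52, which are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3.0 + 0.5 * rng.standard_normal(200_000)   # stationary Gaussian noise

mean_hat = np.mean(n)                          # time average ~ E{n[k]}
var_hat  = np.mean(np.abs(n - mean_hat)**2)    # time average ~ sigma_n^2
std_hat  = np.sqrt(var_hat)                    # standard deviation: a
                                               # measure for the noise 'amplitude'
```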

29
Q

What is standard deviation?

What is the physical meaning?

A

The square root of the variance is called the standard deviation. The standard deviation can be considered to be a measure for the ’amplitude’ of the noise.

(p. 133)

30
Q

What is the stochastic autocorrelation of a noise signal n[k]?
What is the physical meaning?

A

Autocorrelation of n[k], defined as

rnn[k, m] = E{n[k] · n[k + m]}

which is the expected value of the product of n[k] and a time-shifted version of n[k]. The autocorrelation quantifies the similarity between noise samples that are m time steps apart. If the correlation is small, the noise samples are expected to be largely uncorrelated with each other.

In practical applications rnn[k, m] needs to be estimated from the observed data by averaging over time.

(p. 134)
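A sketch of such a time-average estimate (white Gaussian test noise; the lag value 5 is an arbitrary illustrative choice):

```python
import numpy as np

def autocorr_est(n, m):
    """Estimate r_nn[m] = E{n[k] n[k+m]} by averaging the product of the
    signal and its m-step shifted version over time (assumes stationarity)."""
    if m == 0:
        return float(np.mean(n * n))
    return float(np.mean(n[:-m] * n[m:]))

rng = np.random.default_rng(2)
white = rng.standard_normal(100_000)

r0 = autocorr_est(white, 0)   # lag 0: the mean square value, ~1 here
r5 = autocorr_est(white, 5)   # samples 5 steps apart: ~0 for white noise
```

That the lag-5 estimate comes out near zero is exactly the "uncorrelated" behavior discussed for white noise further on.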

31
Q

Stochastic vs. deterministic autocorrelation…

A

Keep in mind that stochastic autocorrelation (equation 5.54) and deterministic autocorrelation (equation 4.37) are different concepts, which are somehow related, though.

(p. 134)

32
Q

What is the mean square value of a noise signal n[k]?

What is the physical meaning?

A

The mean square value of the noise is defined as E{|n[k]|^2} = rnn[k, 0]. The mean square value is the expected instantaneous power P1Ω that is delivered by the signal to a resistor R of 1 Ω. In fact, if n[k] is considered to be a voltage,

      formula 5.59

(p. 135)

33
Q

What is the root mean square value of a noise signal n[k]?

A

The root mean square (RMS) value is (E{|n[k]|^2})^(1/2) = (rnn[k, 0])^(1/2).

(p. 135)

34
Q

What is the power spectral density of a noise signal n[k]?

What is the physical meaning?

A

The power spectral density (PSD) of the noise is defined as

        formula 5.61

i.e. the PSD is the discrete-time Fourier transform of the stochastic autocorrelation. PSDn(k, φ) indicates how the power of n[k] is spread over different frequencies.

(p. 135 - 136)

35
Q

What is the continuous-time equivalent of power spectral density used for in electronics?

A

In electronics a continuous-time variant of the PSD is used, which is the continuous Fourier transform of the continuous(-time) stochastic autocorrelation,

     formula 5.67

where the continuous stochastic autocorrelation is defined as

     formula 5.68

(p. 137)

36
Q

What is uniformly distributed noise?

A

Uniformly distributed noise has a flat probability density function, which means that all possible noise amplitudes are equally probable.

(p. 137)

37
Q

What is normally distributed noise?

A

Normally distributed noise, also called Gaussian noise, on the other hand, has a probability density function pn(x) that is a Gaussian.

(p. 137)

38
Q

What is uncorrelated noise?

What does the power spectral density of uncorrelated noise look like?

A

A noise signal is said to be uncorrelated if the stochastic autocorrelation function takes on the form

         rnn[m] = δ[m] · P0

where δ[m] is a discrete Dirac impulse. This means that two noise samples recorded at different time instances are expected to be completely uncorrelated: they do not have anything to do with each other.

The power spectral density of an uncorrelated noise sequence is a constant

        PSDn(φ) = P0

i.e. the PSD is independent of frequency. For this reason, uncorrelated noise is also called white noise, as just like white light it contains all frequencies (in equal amounts).

(p. 138)

39
Q

What is pink or 1/f noise? What is it used for?

A

Pink or 1/f noise has a power spectral density that is proportional to 1/|f|, i.e. α = 1. In other words, the PSD goes down by 10 dB per decade, or 3 dB per octave. It is easily shown that on average (stationary) pink noise contains equal power per octave or decade. This makes pink noise a useful test signal, for instance to measure loudspeaker characteristics.

(p. 139 - 140)
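One crude way to synthesize pink noise (a spectral-shaping sketch, not a method taken from the book) is to scale the spectrum of white noise by 1/√f, so that the PSD, which goes with the magnitude squared, falls off as 1/f; equal power per octave then follows:

```python
import numpy as np

rng = np.random.default_rng(3)
L = 2**16
W = np.fft.rfft(rng.standard_normal(L))   # white noise spectrum
f = np.fft.rfftfreq(L, d=1.0)             # normalized frequencies
f[0] = f[1]                               # avoid dividing by zero at DC
pink = np.fft.irfft(W / np.sqrt(f), n=L)  # PSD ~ |W/sqrt(f)|^2 ~ 1/f

# power in the octave [f1, 2*f1] should match that in [2*f1, 4*f1]
X = np.abs(np.fft.rfft(pink))**2
b = 512                                   # arbitrary starting bin
P_oct1 = X[b:2*b].sum()
P_oct2 = X[2*b:4*b].sum()
```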

40
Q

What is a periodogram?

A

The most common approach consists in first selecting a frame of N samples of x[k]. If desired, the resulting sequence is zero-padded by appending L − N zero samples such that a signal of length L is obtained, with L ≥ N. Next, the discrete Fourier transform XL[n] of the zero-padded sequence is computed and the spectral coefficients are magnitude squared. Finally, the magnitude spectrum is normalized by a scaling factor α:

1/α · |XL[n]|^2

It can be shown that, if the scaling factor is properly chosen, equation 5.73 becomes a reliable estimate for the power spectral density of the signal.

A periodogram is an estimate for the power spectral density of the signal.

(p. 140 - 141)

41
Q

With the 6 algorithmic steps that are shown on page 143 a Welch periodogram is calculated, and hence an estimate is made for the power spectral density of a signal. Make sure you understand each of the 6 steps.

How many calculations are required for each step?
What are the memory requirements?

A
  1. Divide signal x[k] into R (overlapping) frames of N samples that are shifted by P samples with respect to each other such that the r-th frame contains samples
    x[k + rP], k = 0, 1, …, N − 1.
    If P < N there is overlap between the frames. Recall that it is advised to keep the overlap below or equal to 50%.
  2. Window each frame by applying an appropriate windowing function w[k], examples of which can be found in section 5.4.3. Often, Hann(ing) or Hamming windows are used.
  3. Once the signal has been divided into windowed frames of length N, the windowed frames are zero-padded with L − N zeros such that sequences of length L ≥ N are obtained.
  4. Next, the zero-padded windowed frames are transformed into the frequency domain using an L-point DFT or FFT, leading to XL[n, r].
  5. Then, for each frame r the squared magnitude of the frequency spectrum |XL[n, r]|^2 is computed.
  6. Finally, one averages over all the magnitude squared spectra and divides by a normalization factor α:

           (1/(R·α)) · Σ_{r=0}^{R−1} |XL[n, r]|^2

where

          α = fs · Σ_{k=0}^{N−1} w^2[k]

(p. 143)
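The six steps map directly onto a few lines of NumPy. A minimal sketch (frame length, hop and FFT size are illustrative choices, and the Hann window stands in for "an appropriate windowing function"):

```python
import numpy as np

def welch_psd(x, fs, N=256, P=128, L=512):
    """Welch periodogram: frame (step 1), window (step 2), zero-pad and
    FFT (steps 3-4), magnitude-square (step 5), average and normalize
    (step 6)."""
    w = np.hanning(N)
    alpha = fs * np.sum(w**2)            # normalization factor
    R = (len(x) - N) // P + 1            # number of (50%-overlapping) frames
    acc = np.zeros(L)
    for r in range(R):
        frame = x[r*P : r*P + N] * w     # r-th windowed frame
        X = np.fft.fft(frame, n=L)       # zero-padded L-point FFT
        acc += np.abs(X)**2
    return acc / (R * alpha)

# sanity check: unit-variance white noise has a flat two-sided PSD of 1/fs
fs = 1000.0
x = np.random.default_rng(0).standard_normal(100_000)
psd = welch_psd(x, fs)
```

With this normalization the estimate comes out in units of power per hertz, which is why the white-noise check targets 1/fs.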

42
Q

Give a practical application where the Welch periodogram is used.

A

To see the effect of periodogram averaging, consider figures 5.25–5.27, which show single-sided power spectral density estimates. Observe that as more frames are included in the average, the resulting power spectral density estimate becomes smoother, i.e. less noisy. Notice that in this way the true spectral characteristics, i.e. the PSD of the highway noise signal, emerge.

Periodogram averaging can also be applied to deterministic signals with recording noise on the background. Thanks to the averaging performed by the periodogram, the variance of the background noise that is seen in the spectral plot can be reduced and hence, the spectral characteristics of the deterministic signal-of-interest more clearly come to the fore.

(p. 143 - 144)

43
Q

What is a chirp or sweep signal?

A

The chirp, also called sweep signal,

     formula 5.80

is an example of a non-stationary test signal (e.g. used to measure the frequency response of (real-life) systems). The chirp defined by equation 5.80 is a sine wave signal with a frequency that linearly increases with time.

(p. 146)
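A linear chirp is easy to generate by integrating the desired instantaneous frequency (formula 5.80 itself is not reproduced here; f0, f1 and the one-second duration are made-up test values):

```python
import numpy as np

fs = 8000.0
t = np.arange(0, 1.0, 1/fs)
f0, f1 = 100.0, 2000.0                 # start and end frequency in Hz
# phase = 2*pi * integral of the instantaneous frequency f0 + (f1 - f0)*t
phase = 2*np.pi*(f0*t + 0.5*(f1 - f0)*t**2)
x = np.sin(phase)

# the oscillation rate grows with time: count sign changes (zero
# crossings) in the first and the last 0.1 s of the sweep
start = int(np.sum(x[:800][:-1] * x[:800][1:] < 0))
end   = int(np.sum(x[-800:][:-1] * x[-800:][1:] < 0))
```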

44
Q

With the 7 algorithmic steps that are shown on page 147 a time-domain signal x[k] can be displayed in the joint time-frequency domain. Make sure you understand each of the 7 steps.

How many calculations are required?
What are the memory requirements?

A
  1. Divide signal x[k] into R (overlapping) frames of N samples that are shifted by P samples with respect to each other such that the r-th frame contains samples
    x[k + rP], k = 0, 1, …, N − 1
    If P < N there is overlap between the frames. The overlap is typically expressed as a percentage, which is to be calculated according to equation 5.79.
  2. Window each frame by applying an appropriate windowing function w[k], examples of which can be found in section 5.4.3. Often a Hann(ing) window is used. If such a Hann window is
    applied with 50% overlap, all the samples equally contribute to the resulting time-frequency plot, as illustrated in figure 5.29.
  3. Once the signal has been divided into windowed frames of length N, the windowed frames are zero-padded with L − N zeros such that sequences of length L ≥ N are obtained.
  4. Next, the zero-padded windowed frames are transformed into the frequency domain using an L-point DFT or FFT, leading to XL[n, r].
  5. Then, for each frame r the squared magnitude of the frequency spectrum |XL[n, r]|^2 is computed.
  6. As a sixth step, the first half of the magnitude squared spectra is stored in a matrix. In other words, |XL[n, r]|^2 is saved for values of n going from 0 (DC) to L/2 (Nyquist frequency), and this
    for all frames r. Recall that the DFT spectrum of a real-valued signal has the symmetry of equation 4.66. As a result, basically only half of the spectral data is unique and needs to be stored.
  7. Finally, |XL[n, r]|^2 is displayed in a time-frequency plot.
    (p. 147 - 148)
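The seven steps above can be sketched in a few lines of NumPy (frame length, hop and FFT size are illustrative choices; the 1 kHz test tone is made up):

```python
import numpy as np

def stft_power(x, N=256, P=128, L=512):
    """|X_L[n, r]|^2 for n = 0..L/2, following the seven steps: frame,
    window (Hann), zero-pad, FFT, magnitude-square, keep only the unique
    half of each spectrum, return the matrix that is then plotted."""
    w = np.hanning(N)
    R = (len(x) - N) // P + 1
    S = np.empty((L // 2 + 1, R))              # rows: DC .. Nyquist
    for r in range(R):
        frame = x[r*P : r*P + N] * w           # r-th windowed frame
        S[:, r] = np.abs(np.fft.rfft(frame, n=L))**2
    return S

fs = 8000.0
t = np.arange(0, 1.0, 1/fs)
x = np.sin(2*np.pi*1000*t)                     # stationary 1 kHz test tone
S = stft_power(x)
# the strongest bin in every frame should sit at n = 1000*L/fs = 64
peak_bin = int(np.argmax(S[:, S.shape[1] // 2]))
```

Using `rfft` implements step 6 for free: for a real-valued signal only bins 0 through L/2 are unique, so only those are computed and stored.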
45
Q

What is the short-time Fourier transform of x[k]? Make sure you understand formula 5.85. The formula can be found in the formulary. Interpret.
What is the physical meaning of n and r?
How do you select parameters L, N and P?

A

Steps 1 to 4 of the computation of a set of frequency spectra of a non-stationary signal come down to calculating the mixed time-frequency transform

        formula 5.85

which is called the short-time Fourier transform (STFT) of x[k].

The STFT depends on two variables: a frequency index n, which is similar to the parameter n used in the DFT, and a frame index r, which acts as a time variable. More precisely, |XL[n, r]| indicates how much of frequency nfs/L there is in the signal around time instance rP/fs, where fs is the sampling frequency.

To obtain meaningful results parameters N, L and P should meet the following inequality:

        L ≥ N ≥ P.

(p. 148)

46
Q

Give a practical application where the short-time Fourier transform is used.

A

Think of a recording of a musical instrument playing different notes, i.e. different frequencies. If the frequency spectrum of the entire signal is computed, a plot results that shows contributions of each note that has been played. Hence, the spectral plot shows peaks at the fundamental frequency and at the harmonics of each note. One is probably not interested in this kind of (messy) representation. However, if a frame is selected that contains contributions of one note only, and if this frame is transformed into the frequency domain, the local (rather than the global) frequency characteristics of the signal can be observed, which is usually of more interest to the user.

(p. 146 - 147)

47
Q

What is a spectrogram?
With which signal transform do you calculate a spectrogram?

Give a practical application.

A

It is common to visualize the magnitude of the STFT in a two-dimensional plot. Such a time-frequency plot of 10 · log10 |XL[n, r]|^2 for all r and n = 0, 1, . . . , L/2, is called a spectrogram. Spectrogram plots are frequently used to analyze non-stationary signals such as human voice recordings. Based on this plot useful information can be retrieved from the signal, for example, the type of sound (vowel, consonant, …) that is spoken, the fundamental frequency of the voice, possible voice pathologies, …

(p. 149 - 150)

48
Q

What is a wideband spectrogram?
What is a narrowband spectrogram?
What can they be used for?

A

When a small frame size N is used, the spectrogram plot gives detailed time information, but offers poor frequency resolution. For this reason, the plot is called a wideband spectrogram. When a larger frame size N is used, the frequency resolution has improved. Unfortunately, the time resolution is poorer. This is called a narrowband spectrogram.

With a narrowband spectrogram, the fundamental frequency and its harmonics become visible (in a speech signal).

(p. 150/152)

49
Q

The frequency resolution can be effectively improved by increasing frame size N (and not by making L larger). The price to pay is a poorer time resolution. Explain.

A

By comparing the time and frequency resolution capabilities of the wideband and the narrowband spectrogram it becomes clear that there is a fundamental trade-off between time and frequency resolution when using the STFT.

Actually, if N is decreased, a smaller number of time-domain samples x[k], …, x[k+N−1] is input into the FFT. This improves time resolution, but makes frequency resolution worse as the effective frequency resolution of a DFT spectrum is inversely proportional to the frame length N (see page 151). More exactly, the time resolution (window length) of the rectangular window (see figure 5.6, second row) is given by

              ∆t = N/fs

whereas the frequency resolution (mainlobe width) equals

              ∆f = 2fs/N = 2/∆t

Apparently, for the rectangular window, the product of time and frequency resolution

             ∆t · ∆f = 2 

is a constant, independent of N. It follows that a decrease (i.e. an improvement) of ∆t (or ∆f) makes the other parameter ∆f (or ∆t) larger (i.e. worse). Also by changing the window type, time and/or frequency resolution may be improved.

(p. 152)

50
Q

What is the uncertainty principle?

A

Keep in mind that time and frequency resolution are two competitive trade-off parameters. By changing the type of windowing function used by the STFT, time and/or frequency resolution may be improved. However, there exists a fundamental lower bound for the product of time resolution ∆t and frequency resolution ∆f : it is impossible to achieve infinite time resolution and infinite frequency resolution at the same time. This is called the uncertainty principle.

(p. 153)

51
Q

Make sure you can explain figure 5.34.
What are the advantages of the figure on the right?
What is the relation with human perception?

A

It shows the time-frequency plane overlaid by the checkerboard sample grid used by the STFT. The horizontal lines represent the frequencies fn = nfs/L computed by the STFT. The vertical lines show the time instances tr = rP/fs where the STFT is calculated. Observe that the grid provides uniform sampling both in the time and in the frequency domain, i.e. the spacings ∆t and ∆f are the same for all time instances and frequencies. Hence, it follows that time and frequency resolution are constant over the entire (t, f)-plane.

With the grid structure on the right, time and frequency resolution are optimally adapted to the type of signal that is expected at a specific location in the time-frequency plane. In this way, good frequency resolution is obtained at low frequencies and at the same time good time resolution can be offered at high frequencies. As a result, the product of time and frequency resolution can be kept close to the lower bound defined by equation 5.90 over the entire (t, f)-plane.

It is well known (law of Weber and Fechner) that humans react to an applied stimulus more or less in a logarithmic way. Hence, from the point of view of the human observer a higher frequency resolution is required at low frequencies than at high frequencies.

(p. 154 - 155)

52
Q

What can the wavelet transform be used for?

A

To analyze signals using a logarithmically spaced sample grid as in figure 5.34 on the right the wavelet transform can be employed.

(p. 154)