Unit 10 - Time Series & Signals Flashcards
Quantisation
To force something to take values from a discrete set (e.g. integers in the range -128 to 127). The result of quantising is that we capture the essence of the continuous time-varying function x(t) as a 1D ndarray of numbers (or a 2D ndarray if x(t) was vector-valued)
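A minimal NumPy sketch of this (the scaling factor and clipping range are illustrative assumptions, not a fixed recipe):

```python
import numpy as np

# Hypothetical continuous-valued samples in [-1, 1]
x = np.sin(np.linspace(0, 2 * np.pi, 8))
# Scale to the int8 range, round to the nearest integer level, clip, store as int8
xq = np.clip(np.round(x * 127), -128, 127).astype(np.int8)
```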
Time Quantisation
Sampling is done by making measurements with precisely fixed time intervals between them. Each measurement records the value of x(t) at that instant.
Amplitude Quantisation
Each measurement of x(t) is itself quantised to a fixed set of values so it can be represented in memory (e.g. as an int8 or a uint16)
Functional Representation of Real-World Signals
Real-world signals are continuous in time/space and in value. For example, x = x(t) for functions of time t; or images, where brightness is a function of space; and so on.
Sampled Sequences: throwing away time
Think about the wheat pricing data: we already know we have measurements every 5 years starting in 1570, so we don’t need a 2D ndarray storing all the years. Instead, we keep a simple 1D array of values along with the start date and the fixed 5-year interval.
Sampling Rate fs
How often the original data was sampled. In the wheat example, this is every 5 years. Measured in Hertz, i.e. measurements per second.
For example, fs = 100 Hz is the same as ΔT = 0.01 seconds between each measurement (since ΔT = 1/fs). And sampling every 5 years is the same as ΔT = 157,788,000 seconds between each measurement.
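A quick sketch, assuming NumPy, of converting between fs and ΔT and of reconstructing the implicit time axis from just the start date and interval (the price values are hypothetical):

```python
import numpy as np

fs = 100.0            # sampling rate in Hz
dt = 1.0 / fs         # -> 0.01 seconds between measurements

# Wheat example: a 1D array plus (start, interval) replaces storing every year
prices = np.array([41.0, 45.0, 42.0, 48.0])         # hypothetical values
start, interval = 1570, 5                           # start year, years per sample
years = start + np.arange(len(prices)) * interval   # [1570, 1575, 1580, 1585]
```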
Why sample signals?
It’s a compact and efficient way to represent an approximation to a continuously varying function as an array of numbers. It allows very efficient algorithms to be applied to signals, and computation becomes easier (see the NumPy sketch after this list):
- removing offset from signal = subtract value from array
- mixing two signals = sum of arrays
- correlation between signals = elementwise multiplication (then summing)
- selecting regions of signals = slicing
- smoothing and regression can be applied to arrays
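A minimal NumPy sketch of these operations (the arrays are hypothetical stand-ins for sampled signals):

```python
import numpy as np

x = np.array([2.0, 3.0, 5.0, 4.0])
y = np.array([1.0, -1.0, 1.0, -1.0])

detrended = x - x.mean()   # removing an offset = subtract a value from the array
mixed = x + y              # mixing two signals = sum of the arrays
corr = np.sum(x * y)       # correlation (at zero lag) = elementwise multiply, sum
region = x[1:3]            # selecting a region of the signal = slicing
```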
Noise
All measurements of the world introduce noise, so every time series has some level of noise.
E.g. for wheat prices, the signal x(t) has two components: y(t), the true measurement we want (the real price of wheat), and e(t), a random fluctuation signal (e.g. the price adjustment made by the market trader). So x(t) = y(t) + e(t).
SNR (Signal to Noise Ratio)
How much of the signal we expect to be true signal and how much to be noise
This is the ratio of the amplitude of the signal component y(t) (S) to the noise component e(t) (N).
This is typically represented logarithmically using decibels (just a specific scaling of the logarithm)
SNRdB = 10 log10(S / N).
An increase of 10 dB in SNR means the signal is 10× louder relative to the noise. We ignore the difference between power and amplitude here; if you see 20 log10, the SNR is being expressed in terms of amplitude rather than power.
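A hedged sketch, assuming the additive model x(t) = y(t) + e(t) from the noise card above, that builds a noisy signal and computes its SNR in dB:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 1, 1000)
y = np.sin(2 * np.pi * 5 * t)             # "true" signal component y(t)
e = 0.1 * rng.standard_normal(len(t))     # random noise component e(t)
x = y + e                                 # measured signal x(t)

# SNR as a power ratio, in decibels (10 log10 for power; 20 log10 for amplitude)
snr_db = 10 * np.log10(np.mean(y ** 2) / np.mean(e ** 2))
print(f"SNR = {snr_db:.1f} dB")           # roughly 17 dB for these settings
```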
Removing Noise by Filtering
e(t) is random and cannot be easily removed. But by making assumptions about how y(t) should look, we can try to separate out parts of the signal that could never be y(t).
E.g. suppose the true wheat price doesn’t change rapidly and is similar to what it was last year. Then if the measured signal seems to change very quickly, we can discount those rapid changes as implausible.
Filtering is removing elements of a signal based on a model that encodes our assumptions about how the signal should really behave. If our assumptions are wrong, we destroy parts of the true signal we want to measure.
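One simple sketch of such a filter, assuming a smoothness model like the wheat example: a moving average, which suppresses rapid fluctuations (the window length is an illustrative choice):

```python
import numpy as np

def moving_average(x, window=5):
    """Smooth x by averaging each sample with its neighbours.

    Encodes the assumption that the true signal changes slowly;
    rapid changes are treated as noise and averaged away.
    """
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")
```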
Sampling: Amplitude Quantisation
Amplitude quantisation makes x(t) discrete by reducing it to a fixed number of distinct values, typically evenly spaced.
The number of levels is often quoted in bits:
- 6 bits = 64 levels
- 8 bits = 256 levels
- 10 bits = 1024 levels
- 16 bits = 65536 levels, etc.
Amplitude quantisation introduces noise: the difference between the value of the signal and the nearest quantisation level is effectively random, so quantising increases the noise present. The residual (the difference between the high amplitude-resolution signal and the low-resolution signal) looks random and unstructured. Plotting the residual shows how much error the quantisation has introduced.
Quantisation adds measurement noise.
But coarser quantisation means less storage space, less precise circuitry, lower memory bandwidth, less computation time, etc. Hardware which transforms analogue to digital always has limited quantisation capability; cheap hardware might quantise to 8 bits and expensive hardware to 24 bits, for example.
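A minimal sketch of quantising to a given number of bits and inspecting the residual; the assumption that the signal lies in [-1, 1] is illustrative:

```python
import numpy as np

def quantise(x, bits):
    """Quantise x (assumed to lie in [-1, 1]) to 2**bits evenly spaced levels."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)        # spacing between adjacent levels
    return np.round(x / step) * step

t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 3 * t)
x8 = quantise(x, 8)                  # 256 levels
residual = x - x8                    # looks random: the quantisation noise
```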
Irregular Data and Timing
Some data doesn’t represent a regularly sampled signal, yet many operations are meaningless unless it does.
e.g. the cherry tree data isn’t regularly sampled: it has whatever measurements happened to be collected, with arbitrary gaps and multiple readings at a single point.
Gridding: Re-interpolation onto regular grids
Interpolation
Means estimating a value between known measurements. An interpolation function produces an estimate of the value of the function represented by the observed data points, in between where those data points lie.
Many choices for interpolation algorithm, which imply assumptions about how we think the signal might change:
- constant or nearest-neighbour interpolation assumes that data is unchanging between data points
- linear interpolation assumes we have a straight line between data points
- polynomial interpolation fits a low-order polynomial (quadratic, cubic) through data points to find smoother approximations.
Interpolation algorithms are typically applied piecewise, so that the function is built up of “chunks” or “pieces”, which are often just the span between two data points.
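A sketch comparing these interpolation choices using scipy.interpolate.interp1d (the data points are made up):

```python
import numpy as np
from scipy.interpolate import interp1d

x = np.array([0.0, 1.0, 2.5, 4.0])        # irregular sample positions
y = np.array([1.0, 3.0, 2.0, 4.0])        # observed values

nearest = interp1d(x, y, kind="nearest")  # constant between data points
linear = interp1d(x, y, kind="linear")    # straight line between data points
cubic = interp1d(x, y, kind="cubic")      # smooth piecewise polynomial

xs = np.linspace(0, 4, 9)
print(linear(xs))                         # estimates between the observed points
```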
Resampling
We sample the interpolation function again at regular intervals, giving us a regularly sampled signal that we can use to do any standard signal processing on.
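A minimal gridding/resampling sketch using linear interpolation via np.interp; the irregular measurement times are made up for illustration:

```python
import numpy as np

# Irregularly timed measurements (e.g. like the cherry tree data)
t_irregular = np.array([0.0, 0.7, 1.1, 2.9, 4.2, 5.0])
values = np.array([2.0, 2.3, 2.1, 3.0, 3.4, 3.2])

fs = 2.0                                          # chosen target sampling rate (Hz)
t_regular = np.arange(0.0, 5.0 + 1 / fs, 1 / fs)  # regular grid of sample times
resampled = np.interp(t_regular, t_irregular, values)
# resampled is now a regularly sampled signal, ready for standard processing
```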