Unit 10 - Time Series & Signals Flashcards

1
Q

Quantisation

A

To force something to a discrete set of values (e.g. integers in the range -128 to 127). The result of quantising is that we capture the essence of the continuous time-varying function x(t) as a 1D ndarray of numbers (or 2D if x(t) was vector-valued).

2
Q

Time Quantisation

A

Sampling is done by making measurements with precisely fixed time intervals between them. Each measurement records the value of x(t) at that instant.

3
Q

Amplitude Quantisation

A

Each measurement of x(t) is itself quantised to a fixed set of values so it can be represented in memory (e.g. as an int8 or a uint16)

4
Q

Functional Representation of Real-World Signals

A

Real-world signals are continuous in time/space and value: for example, x = x(t) for functions of time t, or images where brightness is a function of space, and so on.

5
Q

Sampled Sequences: throwing away time

A

Consider the wheat price data: we already know there is a measurement every 5 years starting in 1570, so we don’t need a 2D ndarray storing all the years. Instead, we store a simple 1D array of prices along with the starting date and the 5-year interval.

6
Q

Sampling Rate fs

A

How often we sample the original data. In the wheat example, this is once every 5 years. It is measured in Hertz (Hz), i.e. measurements per second.

For example, fs = 100 Hz corresponds to Delta T = 0.01 seconds between measurements, and one measurement every 5 years corresponds to Delta T = 157,788,000 seconds between measurements.
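The conversions above can be checked in a few lines. A minimal sketch; the wheat figure follows from a 365.25-day year (31,557,600 seconds):

```python
# Sketch: converting between sampling rate fs (Hz) and the
# time step delta_t (seconds) between measurements.
fs = 100.0                                  # 100 Hz
delta_t = 1.0 / fs                          # 0.01 s between samples

# "Every 5 years" expressed as a (very low) sampling rate.
seconds_per_year = 365.25 * 24 * 60 * 60    # 31,557,600 s
delta_t_wheat = 5 * seconds_per_year        # 157,788,000 s
fs_wheat = 1.0 / delta_t_wheat              # a tiny fraction of a Hertz
```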

7
Q

Why sample signals?

A

Sampling is a compact and efficient way to represent an approximation to a continuously varying function as an array of numbers, and it allows very efficient algorithms to be applied to signals. Computation becomes easier:

  • removing offset from signal = subtract value from array
  • mixing two signals = sum of arrays
  • correlation between signals = elementwise multiplication (and summing the products)
  • selecting regions of signals = slicing
  • smoothing and regression can be applied to arrays
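The listed operations really are just array operations. A minimal NumPy sketch (the example signals are made up):

```python
import numpy as np

# Two short example signals.
x = np.array([2.0, 3.0, 4.0, 3.0, 2.0])
y = np.array([1.0, -1.0, 1.0, -1.0, 1.0])

offset_removed = x - x.mean()   # removing an offset = subtraction
mixed = x + y                   # mixing two signals = sum of arrays
products = x * y                # correlation uses elementwise products...
correlation = np.sum(products)  # ...summed over the window
region = x[1:4]                 # selecting a region = slicing
```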
8
Q

Noise

A

All measurements of the world introduce noise; every time series has some level of noise.

e.g. for wheat prices, the signal x(t) has two components: y(t), the true measurement we want (the real price of wheat), and e(t), a random fluctuation signal (e.g. the price adjustment made by the market trader).

9
Q

SNR (Signal to Noise Ratio)

A

How much of the signal we expect to be true signal and how much to be noise.

This is the ratio of the amplitude of the signal component y(t) (S) to the noise component e(t) (N).

It is typically represented logarithmically using decibels (just a specific scaling of the logarithm):

SNR_dB = 10 * log10(S / N)

An increase of 10 dB in SNR means the signal is 10x louder relative to the noise. We ignore the difference between power and amplitude; if you see 20 * log10, it is expressing the SNR in terms of amplitude rather than power.
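A minimal sketch of the dB calculation (the `snr_db` helper is illustrative, not a standard API):

```python
import numpy as np

# SNR in decibels from a signal/noise power ratio:
# SNR_dB = 10 * log10(S / N).
def snr_db(signal_power, noise_power):
    return 10.0 * np.log10(signal_power / noise_power)

# A 10x power ratio is +10 dB; a 100x ratio is +20 dB.
```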

10
Q

Removing Noise by Filtering

A

e(t) is random and cannot be easily removed. But by making assumptions about how y(t) should look, we can try to separate out parts of the signal that could never be y(t).

E.g. suppose the true wheat price doesn’t change rapidly and is similar to what it was the previous year. Then if the measured signal seems to change very quickly, we can discount those rapid changes as implausible.

Filtering means removing elements of a signal based on a model that encodes our assumptions about how the signal should really behave. If the assumptions are wrong, we destroy parts of the true signal we want to measure.

11
Q

Sampling: Amplitude Quantization

A

Amplitude Quantisation makes f(t) discrete by reducing it to a number of distinct values, typically evenly spaced.

The number of levels is often quoted in bits:

  • 6 bits = 64 levels
  • 8 bits = 256 levels
  • 10 bits = 1024 levels
  • 16 bits = 65536 levels, etc.

Amplitude quantisation introduces noise: the difference between the signal value and the nearest quantisation level is effectively random, increasing the noise present. The residual (difference) between a high amplitude-resolution signal and the low-resolution version looks random and unstructured; plotting the residual shows how much error the quantisation has introduced.

Quantisation adds measurement noise.

But coarser quantisation means less storage space, less precise circuitry, lower memory bandwidth, less computation time, etc. Hardware that converts analogue to digital always has limited quantisation capability: cheap hardware might quantise to 8 bits and expensive hardware to 24 bits, for example.
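A minimal NumPy sketch of 8-bit quantisation over an assumed signal range of [-1, 1], showing that the residual (quantisation noise) is bounded by half a quantisation step:

```python
import numpy as np

# A "high resolution" test signal in [-1, 1].
t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 5 * t)

levels = 256                      # 8 bits = 256 levels
step = 2.0 / (levels - 1)         # level spacing over [-1, 1]
x_q = np.round(x / step) * step   # snap each sample to the nearest level

residual = x - x_q                # the quantisation noise
# The error is never more than half a quantisation step.
assert np.max(np.abs(residual)) <= step / 2 + 1e-12
```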

12
Q

Irregular Data and Timing

A

Many signal-processing operations are meaningless unless the data represents a regularly sampled signal.

e.g. the cherry tree data isn’t regularly sampled: it has whatever measurements were collected, with arbitrary gaps and multiple readings at a single point.

13
Q

Gridding: Re-interpolation onto regular grids

A

Gridding takes irregularly sampled data and places it onto a regular grid: fit an interpolation function to the irregular measurements, then resample that function at regularly spaced points. The result is a regularly sampled signal that standard signal processing tools can work with.

14
Q

Interpolation

A

Interpolation means estimating a value between some known measurements. An interpolation function produces an estimate of the underlying function represented by the observed data points, in between where those data points lie.

There are many choices of interpolation algorithm, each of which implies assumptions about how we think the signal might change:

  • constant or nearest-neighbour interpolation assumes the data is unchanging between data points
  • linear interpolation assumes a straight line between data points
  • polynomial interpolation fits a low-order polynomial (quadratic, cubic) through data points to find smoother approximations.

Interpolation functions are typically applied piecewise, so the function is built up of “chunks” or “pieces”, often just the span between two data points.
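A minimal sketch of piecewise-linear interpolation using `np.interp` (the sample times and values are made up):

```python
import numpy as np

# Irregularly spaced known measurements.
t_known = np.array([0.0, 1.0, 2.0, 4.0])     # sample times
x_known = np.array([0.0, 10.0, 20.0, 40.0])  # measured values

# Estimate values between the data points (piecewise linear).
t_query = np.array([0.5, 3.0])
x_est = np.interp(t_query, t_known, x_known)
# -> [5.0, 30.0]
```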

15
Q

Resampling

A

After interpolating, we sample again at regular intervals, giving us a regularly sampled signal on which we can apply any standard signal processing.

16
Q

Resampling to Align Multiple Signals

A

What if we want to do an operation on multiple signals that are measured at different rates? These signals cannot be directly compared. We can resample all of them to a common rate, and then manipulate the signals in this common timebase.

This alignment process is essential when combining sensor data from multiple streams.
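A minimal sketch of alignment by resampling, using linear interpolation onto an assumed common 50 Hz timebase (the two sensor rates here are made up):

```python
import numpy as np

# Two signals measured at different rates over the same second.
t_a = np.linspace(0, 1, 11)          # ~10 Hz sensor
x_a = t_a ** 2
t_b = np.linspace(0, 1, 26)          # ~25 Hz sensor
x_b = np.sin(t_b)

# Re-interpolate both onto a shared regular timebase.
t_common = np.linspace(0, 1, 51)     # ~50 Hz common timebase
x_a_c = np.interp(t_common, t_a, x_a)
x_b_c = np.interp(t_common, t_b, x_b)

# The signals now share a timebase and can be compared directly.
difference = x_a_c - x_b_c
```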

17
Q

Filtering and Smoothing

A

Filtering takes advantage of the fact that we have temporal structure: real signals cannot have arbitrary changes, so some portion is usually predictable, and we can encode that predictive model as a filter to clean up signals. A simple assumption could be that the true signal and the noise are independent, i.e. the random fluctuation doesn’t depend on the previous time step.

In this case, we can average out the contribution of the noise by averaging over multiple time steps. This will also average out the true signal, but if we know that the true signal is not changing quickly (i.e. it does have a strong dependence on the state at the previous timestep) then we won’t damage that signal too much.

18
Q

A Simple Filter: Moving Averages

A

“Tomorrow will be just like today”: we expect true signals to change slowly and the noise we are not interested in to change quickly. We exploit this using a moving average: take a sliding window of samples and compute their mean, then slide the window along by one sample and take the mean of the next window, until we reach the end of the signal.

The longer the moving average (i.e. the longer the sliding window length K), the smoother the waveform and the more high frequencies are suppressed.
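A minimal sketch of a moving average via convolution with a box of K equal weights (the `moving_average` helper is illustrative):

```python
import numpy as np

# Convolving with k equal weights of 1/k averages each length-k window.
def moving_average(x, k):
    return np.convolve(x, np.ones(k) / k, mode="valid")

# Alternating "noise" averages out over a window of 2.
noisy = np.array([1.0, 5.0, 1.0, 5.0, 1.0, 5.0])
smooth = moving_average(noisy, 2)    # -> [3., 3., 3., 3., 3.]
```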

19
Q

Sliding Window

A

A sliding window takes a sampled signal of unbounded length and reduces it to a collection of fixed-length vectors. We break the signal up into exactly equally spaced windows of a common length K. We can then process these as an N x K matrix: N windows of K samples each. If we have vector-valued measurements, like stereo audio, we would have an N x K x D tensor: N windows of length K with D channels.

This makes signals tractable with the vector tools we already understand.
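A minimal sketch of non-overlapping windowing via reshape (the `window` helper is illustrative; overlapping windows with a hop size smaller than K are also common):

```python
import numpy as np

# Break a 1D signal into N non-overlapping windows of length k,
# giving an N x k matrix of fixed-length vectors.
def window(x, k):
    n = len(x) // k              # drop any leftover samples at the end
    return x[: n * k].reshape(n, k)

x = np.arange(10.0)              # a toy "signal" of 10 samples
w = window(x, 5)                 # shape (2, 5): N=2 windows of K=5
```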