Block 2 Part 1 Flashcards

1
Q

Two different types of error-control coding

A
  • Error detection coding
  • Error correction coding

2
Q

Error detection coding

A
  • only allows you to know when received data contains errors
3
Q

Error correction coding

A
  • allows you to know when received data contains errors and allows you to correct them
  • also known as forward error correction (FEC)
4
Q

Automatic repeat request (ARQ)

A
  • if there is a return channel, it's possible to request retransmission
  • alternatively some systems require error free data to be acknowledged
  • if not they retransmit data
5
Q

Parity check

A
  • for a given block of bits, add one more bit (the parity bit)
  • chosen to be 1 or 0 to ensure the total number of 1s in the block, including the parity bit, is an even number (see the sketch below)

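A minimal Python sketch of the idea; the function and variable names are my own, not from the module text. The parity bit makes the count of 1s even, so any single bit error makes the count odd and is detected (but not located).

```python
def parity_bit(bits):
    """Return the bit that makes the total number of 1s in the block even."""
    return sum(bits) % 2

def passes_parity_check(block_with_parity):
    """A received block passes if its count of 1s is even."""
    return sum(block_with_parity) % 2 == 0

data = [1, 0, 1, 1, 0, 1, 0]          # four 1s
block = data + [parity_bit(data)]     # parity bit = 0, count of 1s stays even
assert passes_parity_check(block)

block[2] ^= 1                         # a single channel error flips one bit
assert not passes_parity_check(block) # the error is detected but not located
```
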
6
Q

Cyclic redundancy checks (CRCs)

A
  • works directly on the binary data (1s and 0s)
  • an agreed binary number (the divisor) is used to divide the data, leaving a remainder
  • remainder sent to the decoder with the message
  • decoder calculates the remainder itself and checks for a match
  • divisor decided beforehand so it doesn’t need to be sent
  • check digits sent directly after the message
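
A rough sketch of the division idea, assuming modulo-2 (XOR) long division; the function name and the example divisor are illustrative, not taken from the module.

```python
def mod2_divide(bits, divisor_bits):
    """Modulo-2 long division; return the remainder (len(divisor) - 1 bits)."""
    work = list(bits)
    for i in range(len(bits) - len(divisor_bits) + 1):
        if work[i] == 1:                          # divisor "goes into" the data here
            for j, d in enumerate(divisor_bits):
                work[i + j] ^= d                  # modulo-2 subtraction is XOR
    return work[-(len(divisor_bits) - 1):]

message = [1, 0, 1, 1, 0, 1]
divisor = [1, 0, 1, 1]                            # agreed beforehand, never transmitted
check = mod2_divide(message + [0, 0, 0], divisor) # remainder = check digits
codeword = message + check                        # check digits follow the message

# The decoder repeats the division; a zero remainder means no error detected.
assert mod2_divide(codeword, divisor) == [0, 0, 0]
```
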
7
Q

Rectangular code (product code)

A
  • can not only detect presence of error but can also identify bit in error and correct it
  • example of block code
  • not good at correcting error bursts
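
A toy sketch of how a rectangular code locates a single error: even parity on every row and column of a small data matrix, with the failing row check and failing column check intersecting at the bit in error. Dimensions and names are my own, for illustration only.

```python
def add_parity(rows):
    """Append a parity bit to each row, then append a row of column parities."""
    with_row_parity = [r + [sum(r) % 2] for r in rows]
    col_parity = [sum(col) % 2 for col in zip(*with_row_parity)]
    return with_row_parity + [col_parity]

def locate_single_error(block):
    """Return (row, col) of a single error, where row and column checks fail."""
    bad_rows = [i for i, r in enumerate(block) if sum(r) % 2 == 1]
    bad_cols = [j for j, c in enumerate(zip(*block)) if sum(c) % 2 == 1]
    if bad_rows and bad_cols:
        return bad_rows[0], bad_cols[0]
    return None                                  # no single error found

block = add_parity([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
block[1][2] ^= 1                                 # flip one bit in transit
pos = locate_single_error(block)                 # (1, 2): row and column checks intersect
block[pos[0]][pos[1]] ^= 1                       # correct the identified bit
```
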
8
Q

Block code

A
  • encoder takes input data in successive blocks of k bits and encodes them as n bits
  • where n > k, so the encoded data has some redundancy
  • code described as an (n, k) block code
  • block codes are memoryless
9
Q

code rate

A
  • measure of the efficiency of an (n, k) block code
  • equal to the ratio k/n

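Worked example: a (7, 4) block code has code rate k/n = 4/7 ≈ 0.57, so four of every seven transmitted bits carry message data.
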
10
Q

Hamming code

A
  • uses eight parity digits to correct any single error in block of exactly 255 digits
11
Q

Syndrome

A
  • result of the decoder’s parity checks; points directly at the position of the error if there is one (an all-zero syndrome means no error detected)
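
A sketch of syndrome decoding for a (7, 4) Hamming code, assuming the common parity-check matrix whose columns are the numbers 1 to 7 in binary, so the syndrome, read as a binary number, is the position of a single error. The example code word and variable names are mine.

```python
H = [                                   # one row per parity check
    [0, 0, 0, 1, 1, 1, 1],              # checks positions 4, 5, 6, 7
    [0, 1, 1, 0, 0, 1, 1],              # checks positions 2, 3, 6, 7
    [1, 0, 1, 0, 1, 0, 1],              # checks positions 1, 3, 5, 7
]

def syndrome(received):
    """Each parity check gives one syndrome bit (modulo-2 sum of checked bits)."""
    return [sum(h * r for h, r in zip(row, received)) % 2 for row in H]

codeword = [0, 1, 1, 0, 0, 1, 1]        # a valid (7, 4) code word
assert syndrome(codeword) == [0, 0, 0]  # all-zero syndrome: no error detected

received = codeword.copy()
received[4] ^= 1                        # error in position 5 (1-indexed)
s = syndrome(received)                  # [1, 0, 1] = binary 5
error_position = s[0] * 4 + s[1] * 2 + s[2]
received[error_position - 1] ^= 1       # correct the bit the syndrome points at
assert received == codeword
```
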
12
Q

Hamming distance

A
  • measure of closeness between code words
  • count how many digit positions differ between two code words to get their distance
  • the minimum distance between any two code words of a code is the code’s Hamming distance

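Illustrative helpers (not from the module text) for the distance between two words and the minimum distance of a small code given its code words.

```python
def hamming_distance(a, b):
    """Number of digit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def minimum_distance(code_words):
    """Smallest Hamming distance over all pairs of distinct code words."""
    return min(hamming_distance(a, b)
               for i, a in enumerate(code_words)
               for b in code_words[i + 1:])

# A repetition code: each bit sent three times, minimum distance 3,
# so a single error can be corrected (see card 13).
code = [[0, 0, 0], [1, 1, 1]]
print(hamming_distance([0, 0, 0], [1, 1, 1]))   # 3
print(minimum_distance(code))                   # 3
```
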
13
Q

Different Hamming code distances

A
  • Hamming distance of 2 allows single errors to be detected but not corrected
  • Hamming codes have distance of 3 and one error can be corrected
14
Q

Perfect codes

A
  • a code, such as a Hamming code, that makes use of all the information available in the syndrome
15
Q

Golay code

A
  • (23, 12) code
  • minimum Hamming distance of 7

16
Q

Turbo codes

A
  • codes that come very close to the theoretical capacity limit of a communications channel
17
Q

Iterative decoding (turbo decoding)

A
  • When data received it is decoded
  • this information fed back into decoder with copy of original data
  • this gives better chance of removing errors
  • process can be repeated number of times
  • the more times you repeat the more delay you add
18
Q

Hard decision decoding

A
  • job of demodulator to decide if data it receives represents 1 or 0
  • decoding like this works entirely with digital data
  • only 1 or 0 passed to decoder
19
Q

Soft decision decoding

A
  • more information reaches decoder
  • in practice sometimes done by working with both hard decision (1 or 0) and measure of confidence in that decision
  • useful with iterative decoding
20
Q

Interleaving

A
  • technique employed by turbo coding systems
  • a simple method of interleaving is block interleaving

21
Q

Block interleaving

A
  • example: with a (7, 4) code, four consecutive code words are interleaved by writing the words into a 4 x 7 matrix of memory locations row by row, then reading out column by column (see the sketch below)
  • error bursts are spread out across the code words
  • sequence is coding, interleaving, channel, de-interleaving, decoding
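
A minimal sketch of the 4 x 7 block interleaver described above; function and variable names are mine. A burst of three consecutive channel errors ends up as at most one error per code word after de-interleaving.

```python
def interleave(code_words):
    """Write code words into rows, then read the matrix out column by column."""
    rows, cols = len(code_words), len(code_words[0])
    return [code_words[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(stream, rows, cols):
    """Write the received stream column by column, then read out the rows."""
    matrix = [[None] * cols for _ in range(rows)]
    for i, bit in enumerate(stream):
        matrix[i % rows][i // rows] = bit
    return matrix

words = [[1,0,1,1,0,0,1], [0,1,1,0,1,0,0], [1,1,0,0,0,1,1], [0,0,1,1,1,1,0]]
sent = interleave(words)              # coding, then interleaving, then channel

for i in (8, 9, 10):                  # a burst of three consecutive channel errors...
    sent[i] ^= 1

received_words = deinterleave(sent, rows=4, cols=7)
# ...lands as one error in each of three different code words.
```
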
22
Q

Latency

A
  • the delay between data being sent and being delivered; an important measure of the quality of a communications channel
23
Q

Concatenation

A
  • another feature of turbo codes
  • one way of getting benefits of large codes without excessive complexity is by concatenation of shorter codes
  • feed output of one code to input of another
  • use interleaver in between the codes
24
Q

Turbo codes finally

A
  • uses iterative decoding in the receiver
  • uses concatenation with interleaving and soft decision detection

25
Q

Low density parity check (LDPC)

A
  • block codes that are based on parity checks
  • have a parity check matrix that does not contain many 1s
  • large block sizes, and many parity checks, but each check only involves a small number of bits; this is the low density
  • also use soft decision and iterative decoding

26
Q

Parity check matrices

A
  • each row is a parity check
  • positions of the bits involved in that check are indicated by 1s
  • number of parity bits equal to the number of parity checks
  • number of rows also gives the number of parity checks

27
Q

Reed-Solomon codes

A
  • codes designed for error correction that work with digits to bases other than 2 (binary)
  • widely used for binary data structured in 8-bit bytes

28
Q

Reed-Solomon continued

A
  • (n, k) notation used for non-binary codes as well
  • n is the encoded block size, but the block consists of digits to base q
  • the other n - k digits are parity digits
  • two sets of syndromes used
  • one set identifies where the errors are
  • the other set identifies how to correct them

29
Q

Two steps to correcting RS codes

A
  • identify which of the digits (bytes) have errors
  • correct those digits

30
Q

Erasures

A
  • symbol already known to be in error
  • just needs correcting

31
Q

Symbol errors

A
  • symbols where the error location is not known in advance

32
Q

Convolutional codes

A
  • have memory
  • constructed with high redundancy to give good error correction
  • used where high rates of error correction are required from a minimum of circuitry
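
A minimal sketch of a rate-1/2 convolutional encoder with constraint length 3 (a standard textbook example, not necessarily the one used in the module): each pair of output bits depends on the current input bit and the two previous bits held in a shift register, which is the code's memory.

```python
def convolutional_encode(bits):
    """Rate-1/2 encoder with generator taps (1,1,1) and (1,0,1)."""
    s1, s2 = 0, 0                       # shift-register contents (the memory)
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)         # first output bit: taps 1,1,1
        out.append(b ^ s2)              # second output bit: taps 1,0,1
        s1, s2 = b, s1                  # shift the register
    return out

print(convolutional_encode([1, 0, 1, 1]))   # [1,1, 1,0, 0,0, 0,1]: 8 bits out for 4 in
```
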
33
Q

Viterbi decoding

A
  • common method of decoding convolutional codes
  • at any point, the decoder takes account of what has come before and after in deciding how to decode the current data
  • compares Hamming distances

34
Q

Puncturing

A
  • some of the encoder output bits are deleted before transmission (raising the code rate)
  • decoder knows the puncturing cycle and knows when bits are missing
  • this can still provide good error correction
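
An illustrative sketch of puncturing with a made-up pattern: bits are deleted according to an agreed cycle, and the receiver, knowing that cycle, re-inserts placeholders before decoding. The pattern and names are mine, not from the module.

```python
PATTERN = [1, 1, 1, 0]           # keep, keep, keep, delete -- repeated cyclically

def puncture(bits):
    """Transmit only the bits whose pattern position is 1."""
    return [b for i, b in enumerate(bits) if PATTERN[i % len(PATTERN)]]

def depuncture(bits):
    """Re-insert placeholders where bits were deleted; the decoder treats them
    as unknown (no confidence either way)."""
    out, it, i = [], iter(bits), 0
    while True:
        if PATTERN[i % len(PATTERN)]:
            try:
                out.append(next(it))
            except StopIteration:
                break
        else:
            out.append(None)     # placeholder for a deleted bit
        i += 1
    return out

encoded = [1, 1, 1, 0, 0, 0, 0, 1]
sent = puncture(encoded)         # [1, 1, 1, 0, 0, 0] -- 6 bits sent instead of 8
restored = depuncture(sent)      # [1, 1, 1, None, 0, 0, 0, None]
```
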
35
Q

Trellis coded modulation (TCM)

A
  • output from trellis coding is passed to the modulator
  • this then modulates the signal onto the transmission medium
  • instead of Hamming distance, decoding uses a soft distance based on comparing modulated waveforms

36
Q

Two different ways of reducing bit error rate

A
  • Error detection with ARQ
  • Forward error correction (FEC)

37
Q

Error detection with ARQ merits

A
  • large code rate (high efficiency, low overhead)
  • simple codes possible (simple parity checks or CRCs)
  • low latency when retransmission not needed
  • flexible (can have multiple retransmissions if error rate is high)

38
Q

Error detection with ARQ limitations

A
  • needs a reverse channel (to request the retransmission)
  • introduces a delay (latency) if retransmission is needed

39
Q

Forward error correction merits

A
  • no reverse channel needed
  • fixed latency

40
Q

Forward error correction limitations

A
  • generally more complex codes than for error detection
  • inherent latency due to complex coding and decoding
  • relatively inflexible

41
Q

Error detection and forward error correction used for different things

A
  • error detection (with ARQ) suited to downloading web pages, as delay is not important
  • FEC more useful for long-distance telephony

42
Q

Hybrid automatic repeat request (HARQ)

A
  • encodes data with FEC and corrects errors if it can
  • if errors exceed the correction capability of the FEC scheme, requests retransmission of the data
  • there are different types of HARQ

43
Q

Type 1 HARQ

A
  • simplest type of HARQ
  • if errors can't be corrected, the data is resent coded in exactly the same way
  • if there are still errors to be corrected, the data is sent again

44
Q

Chase combining

A
  • more complicated type of Type 1 HARQ
  • received retransmitted data is combined with the originally received data, so decoding the second time uses information from both transmissions
  • increases the S/N ratio each time the data is retransmitted
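
A rough sketch of the combining idea, assuming soft received values (sign gives the likely bit, magnitude gives confidence); the numbers below are invented for illustration only.

```python
def combine(transmissions):
    """Sum the soft values of identical retransmissions, position by position.
    The signal parts add up consistently while the noise partly cancels, which
    is why the effective S/N ratio improves with each retransmission."""
    return [sum(values) for values in zip(*transmissions)]

def hard_decision(soft):
    return [1 if v > 0 else 0 for v in soft]

first_rx  = [+0.9, -0.2, +0.1, -1.1]     # the third bit looks unreliable
second_rx = [+1.1, -0.7, +0.6, -0.8]     # retransmission of the same code word

print(hard_decision(combine([first_rx, second_rx])))   # [1, 0, 1, 0]
```
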
45
Q

Type 2 HARQ

A
  • retransmitted signal is different
  • most common version is incremental redundancy

46
Q

Incremental redundancy

A
  • successive retransmissions send additional redundant information to aid with error correction
  • combined retransmissions build up to form more powerful error correction coding
  • used with soft-decision decoding

47
Q

Signal-to-noise ratio

A
  • high S/N ratio means the probability of errors is low
  • low S/N ratio means the probability of errors is high

48
Q

Considerations when choosing a code

A
  • What is the distribution of errors? Do they come in bursts?
  • What is the structure of the data: is it already in bytes, packets or frames? Is it streamed? Multiplexed or interleaved?
  • How serious is it if an error escapes detection?
  • Is there a return channel (so ARQ can be used)?
  • Is latency an issue?
  • Is complexity in the encoder or decoder an issue?
  • Is the code patented?