Block 2 Part 1 Flashcards
Two different types of error-control coding
- Error detection coding
- Error correction coding
Error detection coding
- only allows you to know when received data contains errors
Error correction coding
- allows you to know when received data contains errors and allows you to correct them
- also known as forward error correction (FEC)
Automatic repeat request (ARQ)
- if there is a return channel, it's possible to request retransmission
- alternatively some systems require error free data to be acknowledged
- if not they retransmit data
Parity check
- for given block of bits add one more (parity bit)
- chosen to be 1 or 0 to ensure the total number of 1s in the block, including the parity bit, is an even number
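The even-parity rule above can be sketched in Python (illustrative only; function names are my own):

```python
def add_even_parity(bits):
    """Append a parity bit chosen so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_even_parity(block):
    """A received block passes the check if its count of 1s is still even."""
    return sum(block) % 2 == 0
```

A single bit error makes the count odd and is detected; two errors cancel out and escape detection.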
Cyclic redundancy checks (CRCs)
- work on binary data (1s and 0s)
- a number is used to divide the data, leaving a remainder
- remainder sent to decoder with message
- decoder calculates remainder itself and checks for match
- divisor decided beforehand so doesn’t need to be sent
- check digits sent directly after message
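The divide-and-keep-the-remainder idea can be sketched as modulo-2 (XOR) long division; the 4-bit message and divisor below are made-up examples, not from the course:

```python
def crc_remainder(message, divisor):
    """Modulo-2 long division: append len(divisor)-1 zero bits to the
    message, XOR the divisor in wherever the leading bit is 1, and
    return what is left over as the check digits."""
    r = len(divisor) - 1
    work = message + [0] * r
    for i in range(len(message)):
        if work[i]:
            for j, d in enumerate(divisor):
                work[i + j] ^= d
    return work[-r:]
```

The encoder sends the message followed by the remainder; the decoder recomputes the remainder from the received message and checks for a match.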
Rectangular code (product code)
- can not only detect presence of error but can also identify bit in error and correct it
- example of block code
- not good at correcting error bursts
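A sketch of how row and column parity checks locate a single error (even parity assumed, as above; the 2 x 3 data block is a made-up example):

```python
def encode_rect(data):
    """Add an even-parity bit to each row, then a row of column parities."""
    rows = [row + [sum(row) % 2] for row in data]
    rows.append([sum(col) % 2 for col in zip(*rows)])
    return rows

def locate_single_error(received):
    """The failing row check and failing column check intersect at the error."""
    bad_row = [i for i, row in enumerate(received) if sum(row) % 2]
    bad_col = [j for j, col in enumerate(zip(*received)) if sum(col) % 2]
    return (bad_row[0], bad_col[0]) if bad_row and bad_col else None
```

Once located, the bit is corrected by inverting it; a burst of errors defeats this because several checks fail at once.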
Block code
- encoder takes input data in successive blocks of k bits and encodes each block as n bits
- where n > k so encoded data has some redundancy
- code described as (n, k) block code
- block codes are memoryless: each block is encoded independently of the others
Code rate
- measure of efficiency of an (n, k) block code
- is ratio k/n
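The ratio as a one-line helper (illustrative):

```python
def code_rate(n, k):
    """k message bits out of every n transmitted: the code's efficiency."""
    return k / n
```

For example, the (23, 12) Golay code has rate 12/23, about 0.52.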
Hamming code
- for example, eight parity digits can correct any single error in a block of 255 digits (a (255, 247) code)
Syndrome
- result of the decoder's parity checks: points directly at the error if there is one
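A sketch of syndrome decoding using a (7, 4) Hamming code. This particular construction, with parity bits at positions 1, 2 and 4, is a common textbook arrangement assumed here, not taken from the cards:

```python
def hamming74_encode(d):
    """d: four data bits. Codeword positions 1..7, even parity at 1, 2, 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_syndrome(c):
    """Syndrome read as a number 0..7: 0 means no error detected,
    otherwise it is the position of the single error."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # checks positions 4, 5, 6, 7
    return s1 + 2 * s2 + 4 * s3
```

Correcting is then just inverting the bit at the position the syndrome names.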
Hamming distance
- measure of closeness between code words
- the Hamming distance between two code words is the number of digit positions in which they differ
- the minimum distance of a code is the smallest Hamming distance between any pair of its code words
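The distance count sketched in Python (illustrative):

```python
from itertools import combinations

def hamming_distance(a, b):
    """Number of digit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def minimum_distance(codewords):
    """Smallest Hamming distance over every pair of code words."""
    return min(hamming_distance(a, b) for a, b in combinations(codewords, 2))
```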
Different Hamming code distances
- Hamming distance of 2 allows single errors to be detected but not corrected
- Hamming codes have a minimum distance of 3, so one error can be corrected
Perfect codes
- a code (such as a Hamming code) that makes use of all the information available in the syndrome
Golay code
- (23,12) code
- Minimum Hamming distance of 7
Turbo codes
- codes that come very close to the theoretical capacity limit of a communications channel
Iterative decoding (turbo decoding)
- when data is received it is decoded
- the decoder's output is fed back into the decoder along with a copy of the original received data
- this gives better chance of removing errors
- process can be repeated number of times
- the more times you repeat the more delay you add
Hard decision decoding
- job of demodulator to decide if data it receives represents 1 or 0
- decoding like this works entirely with digital data
- only 1 or 0 passed to decoder
Soft decision decoding
- more information reaches decoder
- in practice sometimes done by working with both hard decision (1 or 0) and measure of confidence in that decision
- useful with iterative decoding
Interleaving
- technique employed by turbo coding systems
- simple method of interleaving is block interleaving
Block interleaving
- example code (7,4)
- four consecutive code words interleaved by writing words into 4 x 7 matrix of memory locations, row by row, then reading out column by column
- errors spread out
- sequence is coding, interleaving, channel, de-interleaving, decoding
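The 4 x 7 write-rows/read-columns scheme sketched in Python; after de-interleaving, a burst of consecutive channel errors is spread across different code words:

```python
def block_interleave(bits, rows=4, cols=7):
    """Write into a rows x cols matrix row by row, read out column by column."""
    matrix = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def block_deinterleave(bits, rows=4, cols=7):
    """Inverse operation: swap the roles of rows and columns."""
    return block_interleave(bits, rows=cols, cols=rows)
```

With 28 bits (four 7-bit code words), the first four transmitted bits come one from each code word, so a 4-bit burst hits each code word only once.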
Latency
- important measure of quality of communications channel
Concatenation
- another feature of turbo codes
- one way of getting benefits of large codes without excessive complexity is by concatenation of shorter codes
- feed output of one code to input of another
- use interleaver in between the codes
Turbo codes finally
- uses iterative decoding in receiver
- uses concatenation with interleaving and soft decision detection
Low density parity check (LDPC)
- block codes that are based on parity checks
- have parity check matrix that does not contain many 1s
- large block sizes, and many parity checks, but each check only involves small number of bits, this is low density
- also uses soft decision and iterative decoding
Parity check matrices
- each row is parity check
- positions of the bits involved in that check are indicated by 1s
- number of parity bits equal to number of parity checks
- number of rows also gives number of parity checks
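A sketch of how a parity-check matrix is applied: each row of H is one check, and the syndrome is the list of check results. The example H shown is the parity-check matrix of a (7, 4) Hamming code, used purely for illustration:

```python
def syndrome(H, received):
    """One result per row of H: 0 means that check passes.
    The 1s in a row mark which received bits the check involves."""
    return [sum(h & r for h, r in zip(row, received)) % 2 for row in H]

H = [[1, 0, 1, 0, 1, 0, 1],   # checks bits 1, 3, 5, 7
     [0, 1, 1, 0, 0, 1, 1],   # checks bits 2, 3, 6, 7
     [0, 0, 0, 1, 1, 1, 1]]   # checks bits 4, 5, 6, 7
```

An all-zero syndrome means every check passes; in an LDPC matrix each row would contain only a few 1s despite a large block size.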
Reed-Solomon codes
- codes designed for error correction that work with digits to bases other than 2 (binary)
- widely used for binary data structured in 8-bit bytes
Reed-Solomon continued
- (n, k) used for non-binary codes as well
- n is the encoded block size, but the block consists of digits to base q
- k of the digits carry the message; the other n - k digits are parity digits
- two sets of syndromes used
- one set identifies where errors are
- other set identifies how to correct them
Two steps to correcting RS codes
- identify which of digits (bytes) have errors
- correct those digits
Erasures
- symbol already known to be in error
- just needs correcting
Symbol errors
- symbols where error location is not known in advance
Convolutional codes
- have memory
- constructed with high redundancy to give good error correction
- used where high rates of error correction required from minimum of circuitry
Viterbi decoding
- common method of decoding convolutional codes
- at any point, decoder takes account of what has come before and after in deciding how to decode current data
- compares Hamming distances
Puncturing
- some of the output bits are deleted before transmission, increasing the code rate
- decoder knows puncturing cycle and knows when bits missing
- this can still provide good error correction
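A sketch of puncturing with a made-up repeating pattern [1, 1, 0] (keep, keep, delete); the decoder, knowing the cycle, reinserts neutral placeholders at the deleted positions:

```python
def puncture(bits, pattern):
    """Transmit only the bits where the repeating pattern has a 1."""
    return [b for i, b in enumerate(bits) if pattern[i % len(pattern)]]

def depuncture(bits, pattern, total_len):
    """Reinsert a placeholder (None) wherever a bit was deleted, so the
    decoder can treat those positions as carrying no information."""
    out, it = [], iter(bits)
    for i in range(total_len):
        out.append(next(it) if pattern[i % len(pattern)] else None)
    return out
```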
Trellis coded modulation (TCM)
- output from trellis coding would be passed to modulator
- this would then modulate signal on transmission medium
- instead of Hamming distance, decoding uses a soft distance based on comparing modulated waveforms
Two different ways of reducing bit error rate
- Error detection with ARQ
- Forward error correction (FEC)
Error detection with ARQ merits
- large code rate (high efficiency, low overhead)
- simple codes possible (simple parity checks or CRCs)
- low latency when retransmission not needed
- flexible (can have multiple retransmissions if error rate is high)
Error detection with ARQ limitations
- Needs a reverse channel (to request the retransmission)
- Introduces a delay (latency) if retransmission needed
Forward error correction merits
- No reverse channel needed
- Fixed latency
Forward error correction limitations
- Generally more complex codes than for error detection
- Inherent latency due to complex coding and decoding
- Relatively inflexible
Error detection and forward error correction used for different things
- error detection with ARQ used for downloading web pages, as delay is not important
- FEC more useful for long distance telephony
Hybrid automatic repeat request (HARQ)
- encodes data with FEC and corrects errors if it can
- if errors exceed correction capability of FEC scheme, request retransmission of data
- different types of HARQ
Type 1 HARQ
- simplest type of HARQ
- if errors can't be corrected, the data is resent coded exactly the same way
- if there are still errors, the data is sent again
Chase combining
- more complicated type of Type 1 HARQ
- received retransmitted data combined with originally received data so decoding second time uses info from both transmissions
- increases the S/N ratio each time the data is retransmitted
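A sketch of why combining copies raises the effective S/N ratio, assuming a BPSK-style mapping (+1 for a 1, -1 for a 0) and made-up soft values:

```python
def chase_combine(received_copies):
    """Average the soft values of identical retransmissions;
    the noise averages out, so the effective S/N ratio rises."""
    n = len(received_copies)
    return [sum(samples) / n for samples in zip(*received_copies)]

def hard_decision(soft):
    """Decide 1 if the combined value is positive, else 0."""
    return [1 if s > 0 else 0 for s in soft]
```

In the test below, noise flips the first bit of the first copy on its own, but combining it with the retransmission recovers the correct decision.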
Type 2 HARQ
- retransmitted signal is different
- most common version is incremental redundancy
Incremental redundancy
- successive retransmissions send additional redundant info to aid with error correction
- combined retransmissions build up to form more powerful error correction coding
- used with soft-decision decoding
Signal-to-noise ratio
- High S/N ratio means probability for errors is low
- Low S/N ratio means probability for errors is high
Considerations when choosing code
- what is the distribution of errors? do they come in bursts?
- What is structure of data, already in bytes, packets, frames? is it streamed? multiplexed or interleaved?
- How serious if error escapes detection?
- Is there return channel (so ARQ can be used)?
- is latency an issue?
- is complexity in encoder or decoder issue?
- is code patented?