Channel Models Flashcards

1
Q

Memoryless Channel

A

A channel is said to be memoryless if the probability distribution of the output depends only on the input at that time and is conditionally independent of previous channel inputs or outputs.

2
Q

Symmetric Channel

A

A channel whose transition matrix has rows that are all permutations of each other and columns that are all permutations of each other.

3
Q

Name the common channel models

A

Binary Symmetric Channel

Binary Erasure Channel

Z-Channel

AWGN Channel

4
Q

Explain the uses of the binary symmetric channel

A

Used for modelling hard-decision channels
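
A minimal simulation sketch of the BSC (the function name and crossover-probability parameter `p` are my own choice):

```python
import random

def bsc(bits, p, seed=0):
    """Binary symmetric channel: flip each bit independently with
    crossover probability p (a hard-decision channel model)."""
    rng = random.Random(seed)
    return [b ^ (rng.random() < p) for b in bits]
```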

5
Q

Explain the uses of the binary erasure channel

A

Used for modelling channels with erasures (e.g. networks with packet losses) or channels with high SNR and three-level quantization.
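
A minimal simulation sketch of the BEC (function name, erasure symbol `'e'`, and parameter `eps` are my own choice):

```python
import random

def bec(bits, eps, seed=0):
    """Binary erasure channel: each bit is erased (replaced by 'e')
    with probability eps, otherwise received correctly."""
    rng = random.Random(seed)
    return ['e' if rng.random() < eps else b for b in bits]
```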

6
Q

Explain the uses of the Z-Channel

A

A very simple model for free-space optical communications.
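
A minimal simulation sketch of the Z-channel (function name and the direction of the asymmetry are my own choice; in this convention 0s pass through unchanged and only 1s can be flipped):

```python
import random

def z_channel(bits, p, seed=0):
    """Z-channel: 0s are received correctly; each 1 is flipped to 0
    with probability p. Errors occur in only one direction."""
    rng = random.Random(seed)
    return [0 if (b == 1 and rng.random() < p) else b for b in bits]
```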

7
Q

Explain the uses of the AWGN channel

A

The most common model for communication systems with analog physical transmission.
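
A minimal simulation sketch of the binary-input AWGN channel (function name is my own; it assumes the BPSK mapping 0 → +1, 1 → −1, unit symbol energy Es = 1, and noise variance N0/2):

```python
import math
import random

def bi_awgn(bits, esn0_db, seed=0):
    """BI-AWGN channel: BPSK-map bits (0 -> +1, 1 -> -1) and add
    Gaussian noise with variance N0/2 = 1 / (2 * Es/N0), Es = 1."""
    rng = random.Random(seed)
    sigma = math.sqrt(1.0 / (2 * 10 ** (esn0_db / 10)))
    return [(1 - 2 * b) + rng.gauss(0, sigma) for b in bits]
```

At high Es/N0 the noisy samples stay close to the transmitted ±1 symbols, so a sign decision recovers the bits.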

8
Q

Explain the relationship between Es/N0 and Eb/N0

A

Es: the energy per transmitted code bit
Eb: the energy per information bit

An (n, k) code maps k information bits to n code bits, so the same energy is spread over more transmitted bits: Es = (k/n) · Eb, i.e. Es/N0 = (k/n) · Eb/N0, or in dB, Es/N0 = Eb/N0 + 10·log10(k/n).
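
Since Es = (k/n) · Eb, the dB conversion can be sketched as follows (function name is my own):

```python
import math

def esn0_db_from_ebn0_db(ebn0_db, k, n):
    """Convert Eb/N0 (dB) to Es/N0 (dB) for a rate R = k/n code:
    Es = R * Eb, so Es/N0 [dB] = Eb/N0 [dB] + 10*log10(k/n)."""
    return ebn0_db + 10 * math.log10(k / n)
```

For an uncoded system (k = n) the two quantities coincide; for a rate-1/2 code, Es/N0 is about 3 dB below Eb/N0.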

9
Q

Explain the concept of hard decision in the BI-AWGN channel

A

If ỹ is positive, decide bit 0;
if ỹ is negative, decide bit 1
(assuming the BPSK mapping 0 → +1, 1 → −1).
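
The sign-based decision rule above can be sketched as (function name is my own):

```python
def hard_decision(y):
    """Hard decision on BI-AWGN outputs (BPSK mapping 0 -> +1, 1 -> -1):
    positive samples decode to bit 0, negative samples to bit 1."""
    return [0 if yt > 0 else 1 for yt in y]
```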

10
Q

Define the coding gain and net coding gain

A

Coding gain: the gain in Es/N0 at a defined target BER compared with uncoded transmission.

Net coding gain: the gain in Eb/N0 at a defined target BER compared with uncoded transmission.

11
Q

Why is the net coding gain smaller?

A

The coding gain compares the raw Es/N0 values.

The net coding gain additionally accounts for the extra energy/bandwidth needed to transmit the parity (redundant) bits, so it is smaller for any code rate below 1.
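
Assuming the usual relation that the rate penalty is 10·log10(n/k) dB (the extra energy spent on parity bits), the conversion can be sketched as (function name is my own):

```python
import math

def net_coding_gain_db(coding_gain_db, k, n):
    """Net coding gain (in Eb/N0) from the coding gain (in Es/N0) for a
    rate k/n code: subtract the rate penalty 10*log10(n/k) dB, so the
    net gain is always smaller whenever k < n."""
    return coding_gain_db - 10 * math.log10(n / k)
```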

12
Q

What does the Bhattacharyya parameter tell us?

A

It quantifies the “closeness” of the channel's two conditional output distributions (for inputs 0 and 1): the smaller the parameter, the easier the inputs are to distinguish and the more reliable the channel.
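
For the BSC, the parameter has a simple closed form, Z = Σ_y √(P(y|0)·P(y|1)) = 2·√(p(1−p)), which can be sketched as (function name is my own):

```python
import math

def bhattacharyya_bsc(p):
    """Bhattacharyya parameter of a BSC with crossover probability p:
    Z = sum over y of sqrt(P(y|0) * P(y|1)) = 2 * sqrt(p * (1 - p)).
    Z = 0 for a perfect channel (p = 0), Z = 1 for a useless one (p = 1/2)."""
    return 2 * math.sqrt(p * (1 - p))
```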
