Discrete communication channels Flashcards

1
Q

Define a discrete channel

A

Input alphabet, X
Output alphabet, Y
Probability transition matrix, P(y|x), giving the probability of observing the output symbol y given that the symbol x was sent
Memoryless channel: the probability distribution of the output depends only on the input at that time and is conditionally independent of previous channel inputs and outputs,
P(y_n | x_1, …, x_{n-1}, x_n, y_1, …, y_{n-1}) = P_{Y|X}(y_n | x_n).

A discrete channel, denoted by (X, p(y|x), Y), consists of two finite sets X and Y and a collection of probability mass functions p(y|x), one for each x ∈ X, such that for every x and y, p(y|x) ≥ 0, and for every x, \sum_y p(y|x) = 1, with the interpretation that X is the input and Y is the output of the channel.
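The definition above can be sketched numerically. Below is a minimal Python snippet (the channel values and the helper name `output_distribution` are illustrative, not from a specific example) representing a discrete channel as a stochastic matrix and checking the two conditions p(y|x) ≥ 0 and \sum_y p(y|x) = 1:

```python
import numpy as np

# A discrete channel (X, p(y|x), Y) as a |X| x |Y| stochastic matrix:
# row x holds the pmf p(.|x) over output symbols.
# Illustrative values for a 2-input, 3-output channel.
P = np.array([
    [0.7, 0.2, 0.1],   # p(y|x=0)
    [0.0, 0.4, 0.6],   # p(y|x=1)
])

# Validity checks from the definition: nonnegative entries, rows sum to 1.
assert (P >= 0).all()
assert np.allclose(P.sum(axis=1), 1.0)

def output_distribution(p_x, P):
    """p(y) = sum_x p(x) p(y|x) for an input distribution p_x."""
    return p_x @ P

print(output_distribution(np.array([0.5, 0.5]), P))  # [0.35 0.3  0.35]
```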

2
Q

Channel capacity

A

The capacity of a channel is the supremum of all achievable rates.
The information channel capacity is defined as

C = max_{p(x)} I(X;Y).

The information channel capacity is equal to the operational channel capacity, which is defined as the highest rate in bits per channel use at which information can be sent with arbitrarily low probability of error.
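Because I(X;Y) is concave in p(x), a grid search over binary input distributions is enough to approximate C for a binary-input channel. A hedged sketch (the helper names and the grid-search approach are illustrative choices, not a prescribed method):

```python
import numpy as np

def mutual_information(p_x, P):
    """I(X;Y) = H(Y) - H(Y|X) in bits, for input distribution p_x and channel matrix P."""
    p_y = p_x @ P
    def H(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()
    H_y_given_x = sum(p_x[i] * H(P[i]) for i in range(len(p_x)))
    return H(p_y) - H_y_given_x

def capacity_binary_input(P, grid=10001):
    """Approximate C = max_{p(x)} I(X;Y) by a grid search over p = P(X=0).
    Sufficient here because I(X;Y) is concave in p(x)."""
    ps = np.linspace(0, 1, grid)
    return max(mutual_information(np.array([p, 1 - p]), P) for p in ps)

# Binary symmetric channel with crossover 0.1: capacity is 1 - H(0.1) ≈ 0.531 bits.
bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
print(round(capacity_binary_input(bsc), 3))  # 0.531
```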

3
Q

Noiseless binary channel

A

C = 1 bit per channel use; the output equals the input, so there are no errors.

4
Q

Noisy channel with nonoverlapping outputs

A

This channel has two possible outputs corresponding to each of the two inputs.
C = 1 bit; although the channel appears noisy, every transmitted bit can be recovered from the output, so there are effectively no errors.

5
Q

Binary symmetric channel

A

This is a binary channel in which the input symbols are complemented with probability p. The bits received do not reveal where the errors have occurred. A nonzero rate with arbitrarily small probability of error is still possible.
C = 1 - H(p) bits.
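The formula C = 1 - H(p) can be written out directly; a minimal sketch (function names are illustrative):

```python
import numpy as np

def binary_entropy(p):
    """H(p) = -p log2 p - (1-p) log2 (1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def bsc_capacity(p):
    """Capacity of the binary symmetric channel with crossover probability p."""
    return 1 - binary_entropy(p)

print(bsc_capacity(0.0))  # 1.0  (noiseless: the BSC degenerates to the noiseless channel)
print(bsc_capacity(0.5))  # 0.0  (output independent of input, nothing gets through)
```

Note the symmetry C(p) = C(1 - p): a channel that flips almost every bit is as useful as one that flips almost none, since the receiver can complement its output.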

6
Q

Binary erasure channel

A

In this channel, a fraction α of the bits are erased. The receiver knows which bits have been erased.
C = 1 - α bits.
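The result C = 1 - α can be checked by evaluating I(X;Y) at the uniform input, which achieves capacity for the BEC. A sketch (the erasure symbol is placed as the middle output column; helper names are illustrative):

```python
import numpy as np

def bec_matrix(alpha):
    """Binary erasure channel: inputs {0, 1}, outputs {0, e, 1}."""
    return np.array([
        [1 - alpha, alpha, 0.0],   # x=0 -> y in {0, e}
        [0.0, alpha, 1 - alpha],   # x=1 -> y in {e, 1}
    ])

def mutual_information(p_x, P):
    """I(X;Y) = H(Y) - H(Y|X) in bits."""
    p_y = p_x @ P
    def H(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()
    return H(p_y) - sum(p_x[i] * H(P[i]) for i in range(len(p_x)))

alpha = 0.3
# Uniform input achieves capacity C = 1 - alpha for the BEC.
print(round(mutual_information(np.array([0.5, 0.5]), bec_matrix(alpha)), 6))  # 0.7
```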

7
Q

Symmetric channel

A

In this channel, all the rows of the probability transition matrix p(·|x) are permutations of each other, and so are the columns.

8
Q

Weakly symmetric channel

A

A channel is said to be weakly symmetric if every row of the transition matrix p(·|x) is a permutation of every other row and all the column sums \sum_x p(y|x) are equal.
It has channel capacity
C = log|Y| - H(row of transition matrix),
which is achieved by a uniform distribution on the input alphabet.
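The formula C = log|Y| - H(row) is easy to evaluate once the weak-symmetry condition is checked. A sketch (the example matrix values are illustrative; function names are assumptions):

```python
import numpy as np

def entropy_bits(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def weakly_symmetric_capacity(P):
    """C = log|Y| - H(row of P) for a weakly symmetric channel,
    achieved by the uniform input distribution."""
    # Sanity check of the weak-symmetry condition: all column sums equal.
    col_sums = P.sum(axis=0)
    assert np.allclose(col_sums, col_sums[0])
    return np.log2(P.shape[1]) - entropy_bits(P[0])

# Rows are permutations of each other and all column sums equal 2/3.
P = np.array([
    [1/3, 1/6, 1/2],
    [1/3, 1/2, 1/6],
])
print(round(weakly_symmetric_capacity(P), 4))  # log2(3) - H(1/3, 1/6, 1/2) ≈ 0.1258
```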

9
Q

Properties of channel capacity

A

C ≥ 0, since I(X;Y) ≥ 0.
C ≤ log|X|, since C = max I(X;Y) ≤ max H(X) = log|X|.
C ≤ log|Y| for the same reason.
I(X;Y) is a continuous function of p(x).
I(X;Y) is a concave function of p(x).

In general, there is no closed-form solution for the capacity. But for many simple channels it is possible to calculate the capacity using properties such as symmetry.
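The concavity property can be spot-checked numerically: for a mixture of two input distributions, I(X;Y) at the mixture is at least the mixture of the mutual informations. A sketch with an arbitrary illustrative channel (all values are assumptions for demonstration):

```python
import numpy as np

def mutual_information(p_x, P):
    """I(X;Y) = H(Y) - H(Y|X) in bits."""
    p_y = p_x @ P
    def H(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()
    return H(p_y) - sum(p_x[i] * H(P[i]) for i in range(len(p_x)))

# Spot-check concavity of I(X;Y) in p(x) on an arbitrary binary channel.
P = np.array([[0.8, 0.2], [0.3, 0.7]])
p1 = np.array([0.9, 0.1])
p2 = np.array([0.2, 0.8])
lam = 0.4
mix = lam * p1 + (1 - lam) * p2
lhs = mutual_information(mix, P)
rhs = lam * mutual_information(p1, P) + (1 - lam) * mutual_information(p2, P)
print(lhs >= rhs)  # True: I(X;Y) is concave in p(x)
```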

10
Q

Channel without feedback

A

If the channel is used without feedback, i.e., if the input symbols do not depend on the past output symbols, namely, p(x_k|x^{k−1}, y^{k−1}) = p(x_k|x^{k−1}),
then the channel transition function for the nth extension of the discrete memoryless channel reduces to
p(y^n|x^n) = ∏_{i=1}^n p(y_i|x_i).
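The product formula translates directly into code: the likelihood of a received sequence is the product of per-symbol transition probabilities. A minimal sketch (the BSC values and the helper name are illustrative):

```python
import numpy as np

def sequence_likelihood(P, x_seq, y_seq):
    """p(y^n | x^n) = prod_i p(y_i | x_i) for a DMC used without feedback."""
    return np.prod([P[x, y] for x, y in zip(x_seq, y_seq)])

# Per-use channel: BSC with crossover probability 0.1.
P = np.array([[0.9, 0.1], [0.1, 0.9]])
x = [0, 1, 1, 0]
y = [0, 1, 0, 0]                     # third bit flipped by the channel
print(sequence_likelihood(P, x, y))  # 0.9 * 0.9 * 0.1 * 0.9 ≈ 0.0729
```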

11
Q

(M,n) code

A

An (M, n) code for the channel (X, p(y|x), Y) consists of the following:
1. An index set {1, 2, . . . , M}.
2. An encoding function X^n : {1, 2, … , M} → X^n, yielding codewords x^n(1), x^n(2), … , x^n(M). The set of codewords is called the codebook.
3. A decoding function
g : Y^n → {1, 2, . . . , M},
which is a deterministic rule that assigns a guess to each possible received vector.

12
Q

Probability of error

A

Let
λ_i = Pr(g(Y^n) ≠ i | X^n = x^n(i)) = \sum_{y^n} p(y^n|x^n(i)) I(g(y^n) ≠ i)
be the conditional probability of error given that index i was sent, where I(·) is the indicator function.
Then the maximal probability of error λ^{(n)} is given as
λ^{(n)} = max_{i ∈ {1, 2, … , M}} λ_i.
The average probability of error, P_e^{(n)}, is given as
P_e^{(n)} = (1/M) \sum_{i=1}^M λ_i.
Note that P_e^{(n)} is the probability of error if the index W is chosen according to a uniform distribution and X^n = x^n(W).
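These definitions can be computed exactly for a toy code by summing over all received vectors. A sketch for an (M, n) = (2, 3) repetition code over a BSC with majority-vote decoding (the code, channel, and decoder are illustrative choices, not from the source):

```python
import itertools
import numpy as np

# Toy (M, n) = (2, 3) repetition code over a BSC with crossover p.
p = 0.1
P = np.array([[1 - p, p], [p, 1 - p]])
codebook = {1: (0, 0, 0), 2: (1, 1, 1)}

def g(y):
    """Majority-vote decoder: a deterministic rule Y^n -> {1, 2}."""
    return 2 if sum(y) >= 2 else 1

def lambda_i(i):
    """Conditional error probability given that index i was sent:
    sum of p(y^n | x^n(i)) over all y^n with g(y^n) != i."""
    x = codebook[i]
    return sum(
        np.prod([P[xi, yi] for xi, yi in zip(x, y)])
        for y in itertools.product((0, 1), repeat=3)
        if g(y) != i
    )

lambdas = {i: lambda_i(i) for i in codebook}
max_err = max(lambdas.values())                   # maximal probability of error
avg_err = sum(lambdas.values()) / len(codebook)   # average probability of error
print(max_err, avg_err)  # both 3 p^2 (1-p) + p^3 = 0.028, by symmetry of the code
```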

13
Q

Rate of a code

A

The rate R of an (M, n) code is
R = log(M) / n bits per transmission.
The capacity of a channel is the supremum of all achievable rates.

14
Q

Achievable rate

A

The rate is achievable if there exists a sequence of (⌈2^{nR}⌉, n) codes such that the maximal probability of error λ^{(n)} tends to 0 as n → ∞.

15
Q

Channel coding theorem

A

All rates below capacity C are achievable, and all rates above capacity are not; that is, for all rates R < C, there exists a sequence of (⌈2^{nR}⌉, n) codes with maximal probability of error λ^{(n)} → 0. Conversely, for rates R > C, λ^{(n)} is bounded away from 0.

16
Q

What is the channel?

A

The channel is the part of the system one is unwilling or unable to change.