Gaussian communication channels Flashcards
What is the channel?
The channel is the part of the system one is unwilling or unable to change.
Define a Gaussian channel
A channel with additive Gaussian noise Z_i, assumed independent of the input X_i:
Y_i = X_i + Z_i, Z_i ~ N(0, N)
Infinite capacity without power constraint:
* If the noise variance is zero, the receiver receives the transmitted symbol perfectly. Since X can take on any real value, the channel can transmit an arbitrary real number with no error.
* If the noise variance is nonzero and there is no constraint on the input, we can choose an infinite subset of inputs arbitrarily far apart, so that they are distinguishable at the output with arbitrarily small probability of error.
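Below is a minimal simulation sketch of the channel model Y_i = X_i + Z_i; the particular values of P and N, the choice of a Gaussian input, and the use of NumPy are illustrative assumptions, not part of the definition above.

```python
import numpy as np

# Minimal simulation of the additive Gaussian noise channel Y_i = X_i + Z_i.
# P (average input power) and N (noise variance) are illustrative values.
rng = np.random.default_rng(0)
P, N, n = 1.0, 0.25, 10_000

x = rng.normal(0.0, np.sqrt(P), size=n)   # input X ~ N(0, P)
z = rng.normal(0.0, np.sqrt(N), size=n)   # noise Z ~ N(0, N), independent of X
y = x + z                                  # channel output

print("empirical input power :", np.mean(x**2))   # close to P
print("empirical output power:", np.mean(y**2))   # close to P + N
```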
A commonly used constraint on the average power of a codeword (x_1, x_2, … , x_n) is
1/n \sum_{i=1}^n x_i^2 ≤ P
Information capacity for Gaussian channel
C = max_{E(X^2) ≤ P} I(X; Y) = 1/2 log(1 + P/N) bits per transmission
and the maximum is attained when X ∼ N(0, P).
This capacity is also the supremum of the rates achievable for the channel
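A quick sketch of evaluating this formula numerically (the values of P and N are illustrative):

```python
import numpy as np

def gaussian_capacity(P, N):
    """Capacity in bits per transmission of the Gaussian channel: 1/2 log2(1 + P/N)."""
    return 0.5 * np.log2(1.0 + P / N)

# Example: P = 1, N = 0.25 gives 0.5 * log2(5) ≈ 1.16 bits per transmission.
print(gaussian_capacity(1.0, 0.25))
```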
(M,n) code for Gaussian channel
An (M, n) code for the Gaussian channel with power constraint consists of the following:
1. An index set {1, 2, . . . , M}.
2. An encoding function x^n : {1, 2, … , M} → X^n, yielding codewords x^n(1), x^n(2), … , x^n(M). The set of codewords is called the codebook. The codewords satisfy the power constraint
\sum_{i=1}^n x_i^2(w) ≤ nP, w = 1, 2, … , M
3. A decoding function
g : Y^n → {1, 2, . . . , M},
which is a deterministic rule that assigns a guess to each possible received vector. The rate and probability of error of the code are defined as for the discrete case. The arithmetic average of the probability of error is defined by
P_e^{(n)} = \frac{1}{M} \sum_{w=1}^{M} λ_w,
where λ_w is the conditional probability of error given that index w was sent.
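A sketch of checking the per-codeword power constraint on a randomly drawn codebook; drawing codewords i.i.d. N(0, P) mirrors the random-coding argument, and the rescaling step is just one illustrative way to enforce the constraint.

```python
import numpy as np

# Sketch of an (M, n) codebook and a check of the per-codeword power
# constraint sum_i x_i^2(w) <= n*P. The values of M, n, P are illustrative.
rng = np.random.default_rng(1)
M, n, P = 16, 100, 1.0

# Draw codewords i.i.d. N(0, P), then rescale any that violate the constraint.
codebook = rng.normal(0.0, np.sqrt(P), size=(M, n))
energies = np.sum(codebook**2, axis=1)
over = energies > n * P
codebook[over] *= np.sqrt(n * P / energies[over][:, None])

assert np.all(np.sum(codebook**2, axis=1) <= n * P + 1e-9)
print("max codeword energy:", np.sum(codebook**2, axis=1).max(), "<= nP =", n * P)
```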
Bandlimited channels
The output of a bandlimited channel can be described as the convolution
Y (t) = (X(t) + Z(t)) ∗ h(t),
where X(t) is the signal waveform, Z(t) is the waveform of the white Gaussian noise, and h(t) is the impulse response of an ideal bandpass filter, which cuts out all frequencies greater than W .
The Nyquist–Shannon sampling theorem shows that a bandlimited function has only 2W degrees of freedom per second, so the bandlimited channel can be represented by 2W independent samples per second. Furthermore, it shows that sampling a bandlimited signal at intervals of 1/(2W) seconds is sufficient to reconstruct the signal from the samples.
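A sketch of this reconstruction via sinc interpolation; the test signal, bandwidth, and finite sample window below are illustrative assumptions.

```python
import numpy as np

# Nyquist reconstruction sketch: a signal bandlimited to W Hz is rebuilt from
# samples taken every 1/(2W) seconds using sinc interpolation.
W = 4.0                        # bandwidth in Hz (illustrative)
Ts = 1.0 / (2.0 * W)           # sampling interval 1/(2W)

def signal(t):
    # Sum of tones at 1 Hz and 3 Hz, both below W = 4 Hz.
    return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

t_samples = np.arange(-200, 201) * Ts      # finite window of sample instants
samples = signal(t_samples)

t = np.linspace(-1.0, 1.0, 500)                            # reconstruction grid
kernel = np.sinc((t[:, None] - t_samples[None, :]) / Ts)   # sinc((t - nTs)/Ts)
reconstruction = kernel @ samples

print("max reconstruction error:", np.max(np.abs(reconstruction - signal(t))))
```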
Capacity of a bandlimited Gaussian channel with noise spectral density N_0/2 watts/Hz and power P watts
The noise has power N_0W, and each of the 2WT noise samples in time T has variance N_0WT/(2WT) = N_0/2. Likewise, the transmitted energy PT in time T is spread over 2WT samples, so each signal sample has power PT/(2WT) = P/(2W). Using the per-sample capacity 1/2 log(1 + (P/(2W))/(N_0/2)) at 2W samples per second, the capacity is
C = W log(1 + P/(N_0 W)) bits per second.
This equation is one of the most famous formulas of information theory.
If we let W → ∞ in the above equation we obtain
C = P/N_0 log_2(e) bits per second
as the capacity of a channel with an infinite bandwidth, power P , and noise spectral density N_0/2. Thus, for infinite bandwidth channels, the capacity grows linearly with the power.
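A sketch of evaluating the bandlimited capacity and its infinite-bandwidth limit; the power and noise-density values are illustrative.

```python
import numpy as np

def bandlimited_capacity(P, N0, W):
    """C = W * log2(1 + P / (N0 * W)) in bits per second."""
    return W * np.log2(1.0 + P / (N0 * W))

P, N0 = 1.0, 1e-3                        # illustrative power and noise density
for W in [1e3, 1e4, 1e5, 1e6]:
    print(W, bandlimited_capacity(P, N0, W))

# Infinite-bandwidth limit: C -> (P / N0) * log2(e)
print("limit:", (P / N0) * np.log2(np.e))
```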
How are Gaussian channels converted to discrete channels?
The main advantage of a discrete channel is ease of processing of the output signal for error correction, but some information is lost in the quantization.
Assume that we want to send 1 bit over the channel in one use of the channel. Given the power constraint, the best that we can do is to send one of two levels, +√P or −√P .
The receiver looks at the corresponding received Y and tries to decide which of the two levels was sent. Assuming that both levels are equally likely (this would be the case if we wish to send exactly 1 bit of information), the optimum decoding rule is to decide that +√P was sent if Y > 0 and that −√P was sent if Y < 0.
The probability of error with such a decoding scheme is
P_e = 1 − Φ( \sqrt{P/N} ), where
Φ(x) = \int_{−∞}^x \frac{1}{\sqrt{2π}} e^{−t^2/2} dt.
Using such a scheme, we have converted the Gaussian channel into a discrete binary symmetric channel with crossover probability P_e.
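A sketch of computing this crossover probability and comparing the resulting BSC capacity 1 − H(P_e) with the Gaussian capacity 1/2 log(1 + P/N), which makes the quantization loss concrete; the values of P and N are illustrative.

```python
import math

def crossover_probability(P, N):
    """P_e = 1 - Phi(sqrt(P/N)) for antipodal signalling +/- sqrt(P) in N(0, N) noise."""
    return 1.0 - 0.5 * (1.0 + math.erf(math.sqrt(P / N) / math.sqrt(2.0)))

def bsc_capacity(p):
    """Capacity 1 - H(p) of the binary symmetric channel with crossover probability p."""
    if p in (0.0, 1.0):
        return 1.0
    return 1.0 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

P, N = 1.0, 1.0                           # illustrative signal power and noise variance
p = crossover_probability(P, N)
print("crossover probability:", p)                    # about 0.159 for P/N = 1
print("BSC capacity         :", bsc_capacity(p))      # less than the Gaussian capacity
print("Gaussian capacity    :", 0.5 * math.log2(1 + P / N))
```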