Communication Chapter 2 Flashcards
Noiseless Communication scheme
source -> source encoder (compressor) -> noiseless channel -> source decoder (extractor) -> destination
random source model
Let p = (p₁,…,pₘ) be a discrete probability distribution. A random source S = {s₁,…,sₘ} has probability distribution p if
(1) The random source produces letters one at a time
(2) The letter sᵢ in S is produced with probability pᵢ
(3) Each letter is produced independently of the letters produced before it.
H(P)=
H(P)= - the sum from i=1 to m of pᵢ log₂ pᵢ = H(S)
H(S)=
H(S) = the sum from i=1 to m of pᵢ log₂(1/pᵢ) = H(P)
How to find the binary entropy function
For a two-letter source with probabilities p and 1-p, substitute these values into the entropy formula:
H(P) = -p log₂ p - (1-p) log₂(1-p), where P = (p, 1-p)
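A minimal sketch of the two formulas above in code, assuming Python and its standard math module; the names entropy and binary_entropy are mine, not from the notes.

    from math import log2

    def entropy(p):
        # H(P) = -sum_i p_i log2(p_i), with the usual convention 0*log2(0) = 0
        return -sum(q * log2(q) for q in p if q > 0)

    def binary_entropy(p):
        # Binary entropy: a two-letter source with probabilities p and 1-p
        return entropy([p, 1 - p])

    print(binary_entropy(0.5))   # 1.0          (a fair coin is maximally unpredictable)
    print(binary_entropy(0.9))   # about 0.469  (a biased coin is more predictable)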
High entropy =>
unpredictable
Low entropy =>
predictable
What range can the entropy be in
0 ≤ H(P) ≤ log₂ m
When does H(P)=0
if some pⱼ = 1 and pᵢ = 0 for all i ≠ j (i.e. one letter is certain)
When does H(P) = logm
if pᵢ = 1/m for all i (i.e. every letter is equally likely)
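A quick arithmetic check of both extremes, assuming Python; m = 4 is an arbitrary choice.

    from math import log2

    # Degenerate distribution (1, 0, ..., 0): the only nonzero term is 1*log2(1) = 0, so H(P) = 0.
    # Uniform distribution on m = 4 letters: H(P) = 4 * (1/4) * log2(4) = 2 = log2(m).
    print(-1 * log2(1))               # 0.0
    print(4 * 0.25 * log2(1 / 0.25))  # 2.0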
Gibbs Lemma
Let r₁,…,rₘ be positive real numbers with r₁ + … + rₘ ≤ 1, and let p = (p₁,…,pₘ) be a probability distribution. Then
H(P) ≤ the sum from i=1 to m of pᵢ log₂(1/rᵢ) (*)
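A numerical illustration of the lemma, assuming Python; the particular p and r below are made up, chosen only so the rᵢ are positive with sum at most 1.

    from math import log2

    p = [0.5, 0.3, 0.2]   # a probability distribution (made up)
    r = [0.4, 0.4, 0.1]   # positive reals with sum <= 1 (made up)

    H = sum(q * log2(1 / q) for q in p)
    bound = sum(q * log2(1 / ri) for q, ri in zip(p, r))
    print(H, bound)   # about 1.49 <= 1.72, as (*) predicts
    # Equality in (*) holds exactly when rᵢ = pᵢ for every i.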
ENCODING
Assigns each letter of the source a binary codeword, i.e. a function f : S -> {0,1}*
DECODING
Takes one of the codewords and maps it to the associated letter
𝔼(ℓ(f))
𝔼(ℓ(f)) = the sum from i=1 to m of pᵢ |f(sᵢ)| (the expected codeword length of the encoding f)
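A small worked example of this formula, assuming Python; the three-letter source and the prefix-free code f below are made up for illustration.

    # Toy source probabilities p and a prefix-free encoding f (both made up).
    p = {"a": 0.5, "b": 0.25, "c": 0.25}
    f = {"a": "0", "b": "10", "c": "11"}

    expected_length = sum(p[s] * len(f[s]) for s in p)
    print(expected_length)   # 0.5*1 + 0.25*2 + 0.25*2 = 1.5 bits per letter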
Shannon's Noiseless Encoding Theorem
Let S be a random source and f:S->{0,1}* be an optimal prefix-free encoding of S. Then, H(S)≤𝔼(ℓ(f))≤H(S)+1.
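The theorem does not say how to construct an optimal prefix-free encoding; one standard construction is a Huffman code. The sketch below, assuming Python, builds one for a made-up four-letter source and checks both bounds numerically; huffman_code and the example probabilities are mine, not from the notes.

    import heapq
    from math import log2

    def huffman_code(probs):
        # Build a prefix-free binary code by repeatedly merging the two least likely
        # groups (Huffman's construction, which is optimal among prefix-free codes).
        heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
        heapq.heapify(heap)
        tiebreak = len(heap)
        while len(heap) > 1:
            p1, _, left = heapq.heappop(heap)
            p2, _, right = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in left.items()}
            merged.update({s: "1" + c for s, c in right.items()})
            heapq.heappush(heap, (p1 + p2, tiebreak, merged))
            tiebreak += 1
        return heap[0][2]

    probs = {"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10}   # made-up source
    f = huffman_code(probs)
    H = -sum(p * log2(p) for p in probs.values())
    length = sum(p * len(f[s]) for s, p in probs.items())
    print(f)            # {'a': '0', 'b': '10', 'd': '110', 'c': '111'}
    print(H, length)    # about 1.74 and 1.75, so H(S) <= 𝔼(ℓ(f)) <= H(S) + 1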