8 Coset decoding Flashcards
error vector
When x ∈ Fⁿ_q is transmitted down a noisy channel, the word y ∈ Fⁿ_q is received.
Define the error vector
e = y − x
Then the number of transmission errors is w(e).
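A minimal sketch (plain Python, binary case, words written as 0/1 lists; none of the names below come from the notes) of computing e = y − x and counting the transmission errors w(e):

```python
x = [1, 0, 1, 1]          # transmitted codeword
y = [1, 0, 1, 0]          # received word
e = [(yi - xi) % 2 for xi, yi in zip(x, y)]   # error vector e = y - x over F_2
weight = sum(1 for ei in e if ei != 0)        # w(e) = number of transmission errors
print(e, weight)          # [0, 0, 0, 1] 1
```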
Def 8.1 Coset
Suppose C is an [n, k, d]-code over F_q (for some d) and a ∈ Fⁿ_q.
Then set
a + C = {a + x | x ∈ C}
is called a coset of C in Fⁿ_q
Note that a = a + 0 ∈ a + C
coset links for error vector
The importance for decoding is that the error vector e and the received word y belong to the same coset: indeed y − e = x ∈ C, so y and e differ by a codeword.
size of cosets
Note that each coset has size
|a + C| = |C| = qᵏ
x, y ∈ Fⁿ_q belong to the same coset of C if...
x, y ∈ Fⁿ_q belong to the same coset of C if and only if x − y ∈ C
Theorem 8.2.(Lagrange)
The cosets of a linear code C ⊂ Fⁿ_q
partition Fⁿ_q
Proof: (Idea) Think of C as a subspace (such as a plane in R³ through the origin). Adding a shifts this subspace parallel to itself, to a new plane not containing the origin; two such shifts are either identical or disjoint, and together they cover everything. We don't have R³ here, but the same idea works. □
8.3. Example. Let C be the 2-ary code generated by
G =
[1011]
[0101]
That is C = {0000, 1011, 0101, 1110}. Cosets:
0000 + C = C
1000 + C = {1000, 0011, 1101, 0110} = 0011 + C (etc)
and so on.
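A small sketch (plain Python, binary case, words as tuples; the helper `add` is my own) that rebuilds C from the rows of G and lists the cosets, confirming that they partition all 16 words:

```python
from itertools import product

G = [(1, 0, 1, 1),
     (0, 1, 0, 1)]

def add(u, v):                      # componentwise addition mod 2
    return tuple((a + b) % 2 for a, b in zip(u, v))

# codewords x = mG for every message m in F_2^2
C = set()
for m in product((0, 1), repeat=len(G)):
    x = (0, 0, 0, 0)
    for mi, row in zip(m, G):
        if mi:
            x = add(x, row)
    C.add(x)

# the cosets a + C, stored as sorted tuples so equal cosets coincide
cosets = {tuple(sorted(add(a, c) for c in C))
          for a in product((0, 1), repeat=4)}

print(sorted(C))                           # 0000, 0101, 1011, 1110
print(len(cosets), sum(len(cs) for cs in cosets))   # 4 cosets covering all 16 words
```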
standard array
an arrangement of all of Fⁿ_q into an array with |Fⁿ_q|/|C| rows and |C| columns
the rows of the array are the cosets of C, which partition Fⁿ_q
each row begins with a chosen coset leader, a word of minimal weight in that coset
Row 1: the codewords of C, beginning with 0
Row 2: a minimal-weight word that doesn't already appear, followed by that leader added to each codeword
continue until all words of Fⁿ_q appear
if there is a choice of coset leaders (several words of equal minimal weight in a coset), any choice gives equally good decoding
standard array for
8.3. Example. Let C be the 2-ary code generated by
G =
[1011]
[0101]
That is C = {0000, 1011, 0101, 1110}.
0000 1011 0101 1110
1000 0011 1101 0110
0100 1111 0001 1010
0010 1001 0111 1100
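A rough sketch (plain Python, binary case only) of how such an array can be built row by row; the tie-breaking rule among equal-weight leaders is an arbitrary choice on my part, picked so that the output matches the array above:

```python
from itertools import product

C = [(0,0,0,0), (1,0,1,1), (0,1,0,1), (1,1,1,0)]   # row 1: the codewords

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

remaining = set(product((0, 1), repeat=4)) - set(C)
array = [list(C)]
while remaining:
    # next coset leader: a word of minimal weight not yet used
    # (ties broken towards errors in earlier digits, to match the array above)
    leader = min(remaining, key=lambda u: (sum(u), u[::-1]))
    row = [add(leader, c) for c in C]               # the coset leader + C
    array.append(row)
    remaining -= set(row)

for row in array:
    print(' '.join(''.join(map(str, w)) for w in row))
# reproduces the four rows printed above
```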
coset decoding
assume a standard array A has been drawn up for C
for each y ∈ Fⁿ_q, locate y in the array: the first word of its row is the coset leader e_A(y), and the codeword at the top of its column is c_A(y), so that y = e_A(y) + c_A(y)
decode y as c_A(y), the codeword at the top of its column
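A toy decoder along these lines (plain Python; the name `coset_decode` and the hard-coded array are purely illustrative, taken from Example 8.3 above):

```python
ARRAY = [
    ["0000", "1011", "0101", "1110"],   # row 1: the codewords of C
    ["1000", "0011", "1101", "0110"],
    ["0100", "1111", "0001", "1010"],
    ["0010", "1001", "0111", "1100"],
]

def coset_decode(y):
    for row in ARRAY:
        if y in row:
            return ARRAY[0][row.index(y)]   # codeword c_A(y) at the top of y's column
    raise ValueError("not a word of F_2^4")

print(coset_decode("0110"))   # -> "1110"
```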
is coset decoding a good strategy?
- we are assuming that the actual error e = y − x (where x is the transmitted codeword) is the coset leader e_A(y) of the coset y + C
* if the actual error really is e_A(y), then y − e_A(y) = x and the decoding is correct
* otherwise (the error is some other word of the same coset) the decoding is wrong
* by choosing coset leaders of minimal weight we always decode y as the (Hamming) nearest codeword to y (or at least one of the joint nearest). E.g. y = 0110 decodes as x = 1110 in our example.
That is, we assume the fewest errors possible
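As a quick check that this really is nearest-codeword decoding, one can compare Hamming distances for y = 0110 (plain Python; `hamming` is just a helper name of my own):

```python
C = ["0000", "1011", "0101", "1110"]

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

y = "0110"
print({c: hamming(y, c) for c in C})
# {'0000': 2, '1011': 3, '0101': 2, '1110': 1}  -- 1110 is nearest, as decoded above
```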
issues with decoding in our example:
8.3. Example. Let C be the 2-ary code generated by
G =
[1011]
[0101]
That is C = {0000, 1011, 0101, 1110}.
d(C) = w(C) = 2
so C is not single-error-correcting; even single errors might not be corrected properly
(a single error in the 1st, 2nd or 3rd digit would be corrected, but not one in the 4th)
message  codeword  received (noisy)  decoded  truncated to message
01       0101      0111 (say)        0101     01 (correct)
10       1011      1010              1110     11 (incorrect)
(G is in standard form, so the message is recovered by truncating the decoded codeword to its first 2 digits)
This glitch is precisely due to the fact that we had a choice of coset leaders in 0100 + C: both 0100 and 0001 have weight 1. We could have chosen 0001 instead, in which case errors in the 4th digit would be recovered and errors in the 2nd digit would not (see the sketch below).
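A small sketch of this trade-off (plain Python; `sub` is an illustrative helper, and subtraction equals addition over F_2): decoding y = 1010, which is 1011 with a 4th-digit error, under each choice of leader for the coset 0100 + C:

```python
C = ["0000", "1011", "0101", "1110"]

def sub(u, v):                       # subtraction = addition over F_2
    return "".join(str(int(a) ^ int(b)) for a, b in zip(u, v))

y = "1010"                           # codeword 1011 sent, error in the 4th digit
for leader in ("0100", "0001"):      # the two weight-1 words in the coset 0100 + C
    decoded = sub(y, leader)         # decode y as y - leader
    assert decoded in C
    print(leader, "->", decoded)
# 0100 -> 1110   (wrong: 1011 was sent)
# 0001 -> 1011   (correct: the 4th-digit error is recovered)
```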