
Section 17.2 Polynomial Codes

With knowledge of polynomial rings and finite fields, it is now possible to derive more sophisticated codes than those of Chapter 16. First let us recall that an \((n, k)\)-block code consists of a one-to-one encoding function \(E:{\mathbb Z}^{k}_{2} \rightarrow {\mathbb Z}^{n}_{2}\) and a decoding function \(D:{\mathbb Z}^{n}_{2} \rightarrow {\mathbb Z}^{k}_{2}\text{.}\) The code is error-correcting if \(D\) is onto. A code is a linear code if it is the null space of a matrix \(H \in {\mathbb M}_{(n-k) \times n}({\mathbb Z}_2)\text{.}\)

We are interested in a class of codes known as cyclic codes. Let \(\phi : {\mathbb Z}_2^k \rightarrow {\mathbb Z}_2^n\) be a binary \((n,k)\)-block code. Then \(\phi\) is a cyclic code if for every codeword \((a_1, a_2, \ldots, a_n )\text{,}\) the cyclically shifted \(n\)-tuple \((a_n, a_1, a_2, \ldots, a_{n - 1} )\) is also a codeword. Cyclic codes are particularly easy to implement on a computer using shift registers [2, 3].

Example 17.14.

Consider the \((6,3)\)-linear codes generated by the two matrices

\begin{equation*} G_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad \text{and} \quad G_2 = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}\text{.} \end{equation*}

Messages in the first code are encoded as follows:

\begin{equation*} \begin{array}{rclccrcl} (000) & \mapsto & (000000) & & & (100) & \mapsto & (100100) \\ (001) & \mapsto & (001001) & & & (101) & \mapsto & (101101) \\ (010) & \mapsto & (010010) & & & (110) & \mapsto & (110110) \\ (011) & \mapsto & (011011) & & & (111) & \mapsto & (111111). \end{array} \end{equation*}

It is easy to see that the codewords form a cyclic code. In the second code, 3-tuples are encoded in the following manner:

\begin{equation*} \begin{array}{rclccrcl} (000) & \mapsto & (000000) & & & (100) & \mapsto & (111100) \\ (001) & \mapsto & (001111) & & & (101) & \mapsto & (110011) \\ (010) & \mapsto & (011110) & & & (110) & \mapsto & (100010) \\ (011) & \mapsto & (010001) & & & (111) & \mapsto & (101101). \end{array} \end{equation*}

This code cannot be cyclic, since \((101101)\) is a codeword but its cyclic shift \((110110)\) is not a codeword.
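For readers who want to experiment, here is a short computational sketch (not part of the formal development; the function name is ours) that tests closure under the cyclic shift for the two codes above.

    # Decide whether a set of binary codewords is closed under the cyclic
    # shift (a_1, ..., a_n) -> (a_n, a_1, ..., a_{n-1}).
    def is_cyclic(codewords):
        words = {tuple(map(int, w)) for w in codewords}
        return all((w[-1],) + w[:-1] in words for w in words)

    code1 = ["000000", "100100", "001001", "101101",
             "010010", "110110", "011011", "111111"]
    code2 = ["000000", "111100", "001111", "110011",
             "011110", "100010", "010001", "101101"]

    print(is_cyclic(code1))   # True
    print(is_cyclic(code2))   # False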

Subsection 17.2.1 Polynomial Codes

We would like to find an easy method of obtaining cyclic linear codes. To accomplish this, we can use our knowledge of finite fields and polynomial rings over \({\mathbb Z}_2\text{.}\) Any binary \(n\)-tuple can be interpreted as a polynomial in \({\mathbb Z}_2[x]\text{.}\) Stated another way, the \(n\)-tuple \((a_0, a_1, \ldots, a_{n - 1} )\) corresponds to the polynomial

\begin{equation*} f(x) = a_0 + a_1 x + \cdots + a_{n-1} x^{n - 1}\text{,} \end{equation*}

where the degree of \(f(x)\) is at most \(n - 1\text{.}\) For example, the polynomial corresponding to the \(5\)-tuple \((10011)\) is

\begin{equation*} 1 + 0 x + 0 x^2 + 1 x^3 + 1 x^4 = 1 + x^3 + x^4\text{.} \end{equation*}

Conversely, with any polynomial \(f(x) \in {\mathbb Z}_2[x]\) with \(\deg f(x) \lt n\) we can associate a binary \(n\)-tuple. The polynomial \(x + x^2 + x^4\) corresponds to the \(5\)-tuple \((01101)\text{.}\)
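In a program it is convenient to store such a polynomial as the list of its coefficients, with index \(i\) holding the coefficient of \(x^i\text{.}\) The brief sketch below (an illustration only; the helper names are ours) prints the polynomials corresponding to the two \(5\)-tuples just mentioned.

    # A binary n-tuple (a_0, a_1, ..., a_{n-1}) stored as a coefficient list;
    # index i holds the coefficient of x^i.
    def term(i):
        return "1" if i == 0 else ("x" if i == 1 else f"x^{i}")

    def tuple_to_poly(bits):
        terms = [term(i) for i, a in enumerate(bits) if a == 1]
        return " + ".join(terms) if terms else "0"

    print(tuple_to_poly((1, 0, 0, 1, 1)))   # 1 + x^3 + x^4
    print(tuple_to_poly((0, 1, 1, 0, 1)))   # x + x^2 + x^4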

Let us fix a nonconstant polynomial \(g(x)\) in \({\mathbb Z}_2[x]\) of degree \(n - k\text{.}\) We can define an \((n,k)\)-code \(C\) in the following manner. If \((a_0, \ldots, a_{k - 1})\) is a \(k\)-tuple to be encoded, then \(f(x) = a_0 + a_1 x + \cdots + a_{k - 1} x^{k - 1}\) is the corresponding polynomial in \({\mathbb Z}_2[x]\text{.}\) To encode \(f(x)\text{,}\) we multiply by \(g(x)\text{.}\) The codewords in \(C\) are all those polynomials in \({\mathbb Z}_2[x]\) of degree less than \(n\) that are divisible by \(g(x)\text{.}\) Codes obtained in this manner are called polynomial codes.

Example 17.15.

If we let \(g(x)= 1 + x^3\text{,}\) we can define a \((6,3)\)-code \(C\) as follows. To encode a \(3\)-tuple \(( a_0, a_1, a_2 )\text{,}\) we multiply the corresponding polynomial \(f(x) = a_0 + a_1 x + a_2 x^2\) by \(1 + x^3\text{.}\) We are defining a map \(\phi : {\mathbb Z}_2^3 \rightarrow {\mathbb Z}_2^6\) by \(\phi : f(x) \mapsto g(x) f(x)\text{.}\) It is easy to check that this map is a group homomorphism. In fact, if we regard \({\mathbb Z}_2^n\) as a vector space over \({\mathbb Z}_2\text{,}\) \(\phi\) is a linear transformation of vector spaces. Let us compute the kernel of \(\phi\text{.}\) Observe that \(\phi ( a_0, a_1, a_2 ) = (000000)\) exactly when

\begin{align*} 0 + 0x + 0x^2 + 0x^3 + 0x^4 + 0 x^5 & = (1 + x^3) ( a_0 + a_1 x + a_2 x^2 )\\ & = a_0 + a_1 x + a_2 x^2 + a_0 x^3 + a_1 x^4 + a_2 x^5\text{.} \end{align*}

Since the polynomials over a field form an integral domain, \(a_0 + a_1 x + a_2 x^2\) must be the zero polynomial. Therefore, \(\ker \phi = \{ (000) \}\) and \(\phi\) is one-to-one.

To calculate a generator matrix for \(C\text{,}\) we merely need to examine the way the polynomials \(1\text{,}\) \(x\text{,}\) and \(x^2\) are encoded:

\begin{align*} (1 + x^3) \cdot 1 & = 1 + x^3\\ (1 + x^3)x & = x + x^4\\ (1 + x^3)x^2 & = x^2 + x^5\text{.} \end{align*}

We obtain the code corresponding to the generator matrix \(G_1\) in Example 17.14. The parity-check matrix for this code is

\begin{equation*} H = \begin{pmatrix} 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{pmatrix}\text{.} \end{equation*}

Since the smallest weight of any nonzero codeword is \(2\text{,}\) this code has the ability to detect all single errors.
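The encoding in this example is easy to reproduce by machine. The sketch below (an illustration only; the helper name is ours) multiplies each message polynomial by \(g(x) = 1 + x^3\) over \({\mathbb Z}_2\) and recovers the codeword table given for \(G_1\) in Example 17.14.

    # Multiply two polynomials over Z_2, both given as coefficient lists
    # (index = degree), reducing coefficients modulo 2 as we go.
    def poly_mult_mod2(f, g):
        product = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                product[i + j] ^= a & b
        return product

    g = [1, 0, 0, 1]                                # g(x) = 1 + x^3
    for m in range(8):
        message = [(m >> i) & 1 for i in range(3)]  # (a_0, a_1, a_2)
        codeword = poly_mult_mod2(message, g)       # a 6-tuple, since deg < 6
        print(message, "->", codeword)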

Rings of polynomials have a great deal of structure; therefore, our immediate goal is to establish a link between polynomial codes and ring theory. Recall that \(x^n - 1 = (x - 1)( x^{n - 1} + \cdots + x + 1)\text{.}\) The factor ring

\begin{equation*} R_n = {\mathbb Z}_2[x]/ \langle x^n - 1 \rangle \end{equation*}

can be considered to be the ring of polynomials of the form

\begin{equation*} f(t) = a_0 + a_1 t + \cdots + a_{n - 1} t^{n - 1} \end{equation*}

that satisfy the condition \(t^n = 1\text{.}\) It is an easy exercise to show that \({\mathbb Z}_2^n\) and \(R_n\) are isomorphic as vector spaces. We will often identify elements in \({\mathbb Z}_2^n\) with elements in \({\mathbb Z}_2[x] / \langle x^n - 1 \rangle\text{.}\) In this manner we can interpret a linear code as a subset of \({\mathbb Z}_2[x] / \langle x^n - 1 \rangle\text{.}\)

The additional ring structure on polynomial codes is very powerful in describing cyclic codes. A cyclic shift of an \(n\)-tuple can be described by polynomial multiplication. If \(f(t) = a_0 + a_1 t + \cdots + a_{n - 1} t^{n - 1}\) is a code polynomial in \(R_n\text{,}\) then

\begin{equation*} tf(t) = a_{n - 1} + a_0 t + \cdots + a_{n - 2} t^{n - 1} \end{equation*}

is the cyclically shifted word obtained from multiplying \(f(t)\) by \(t\text{.}\) The following theorem gives a beautiful classification of cyclic codes in terms of the ideals of \(R_n\text{.}\)

Theorem 17.16.

A linear code \(C\) in \({\mathbb Z}_2^n\) is cyclic if and only if it is an ideal in \(R_n = {\mathbb Z}_2[x]/ \langle x^n - 1 \rangle\text{.}\)

Proof.

Let \(C\) be a linear cyclic code and suppose that \(f(t)\) is in \(C\text{.}\) Then \(t f(t)\) must also be in \(C\text{.}\) Consequently, \(t^k f(t)\) is in \(C\) for all \(k \in {\mathbb N}\text{.}\) Since \(C\) is a linear code, any linear combination of the codewords \(f(t), tf(t), t^2f(t), \ldots, t^{n-1}f(t)\) is also a codeword; therefore, for every polynomial \(p(t)\text{,}\) \(p(t)f(t)\) is in \(C\text{.}\) Hence, \(C\) is an ideal.

Conversely, let \(C\) be an ideal in \({\mathbb Z}_2[x]/\langle x^n - 1\rangle\text{.}\) Suppose that \(f(t) = a_0 + a_1 t + \cdots + a_{n - 1} t^{n - 1}\) is a codeword in \(C\text{.}\) Then \(t f(t)\) is a codeword in \(C\text{;}\) that is, \((a_{n-1}, a_0, \ldots, a_{n-2})\) is in \(C\text{.}\) Hence, \(C\) is cyclic.

Theorem 17.16 tells us that knowing the ideals of \(R_n\) is equivalent to knowing the linear cyclic codes in \({\mathbb Z}_2^n\text{.}\) Fortunately, the ideals in \(R_n\) are easy to describe. The natural ring homomorphism \(\phi : {\mathbb Z}_2[x] \rightarrow R_n\) defined by \(\phi[f(x)] = f(t)\) is a surjective homomorphism. The kernel of \(\phi\) is the ideal generated by \(x^n - 1\text{.}\) By Theorem 5.34, every ideal \(C\) in \(R_n\) is of the form \(\phi(I)\text{,}\) where \(I\) is an ideal in \({\mathbb Z}_2[x]\) that contains \(\langle x^n - 1 \rangle\text{.}\) By Theorem 6.27, we know that every ideal \(I\) in \({\mathbb Z}_2[x]\) is a principal ideal, since \({\mathbb Z}_2\) is a field. Therefore, \(I = \langle g(x) \rangle\) for some unique monic polynomial \(g(x)\) in \({\mathbb Z}_2[x]\text{.}\) Since \(\langle x^n - 1 \rangle\) is contained in \(I\text{,}\) it must be the case that \(g(x)\) divides \(x^n - 1\text{.}\) Consequently, every ideal \(C\) in \(R_n\) is of the form

\begin{equation*} C = \langle g(t) \rangle = \{ f(t)g(t) : f(t) \in R_n \text{ and } g(x) \mid (x^n - 1) \text{ in } {\mathbb Z}_2[x] \}\text{.} \end{equation*}

The unique monic polynomial of the smallest degree that generates \(C\) is called the minimal generator polynomial of \(C\text{.}\)

Example 17.17.

If we factor \(x^7 - 1\) into irreducible components, we have

\begin{equation*} x^7 - 1 = (1 + x)(1 + x + x^3)(1+ x^2 + x^3)\text{.} \end{equation*}

We see that \(g(t) = (1 + t + t^3)\) generates an ideal \(C\) in \(R_7\text{.}\) This code is a \((7, 4)\)-block code. As in Example 17.15, it is easy to calculate a generator matrix by examining what \(g(t)\) does to the polynomials 1, \(t\text{,}\) \(t^2\text{,}\) and \(t^3\text{.}\) A generator matrix for \(C\) is

\begin{equation*} G = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\text{.} \end{equation*}

In general, we can determine a generator matrix for an \((n, k)\)-code \(C\) by the manner in which the elements \(1, t, \ldots, t^{k-1}\) are encoded. Let \(x^n - 1 = g(x) h(x)\) in \({\mathbb Z}_2[x]\text{.}\) If \(g(x) = g_0 + g_1 x + \cdots + g_{n-k} x^{n-k}\) and \(h(x) = h_0 + h_1 x + \cdots + h_k x^k\text{,}\) then the \(n \times k\) matrix

\begin{equation*} G = \begin{pmatrix} g_0 & 0 & \cdots & 0 \\ g_1 & g_0 & \cdots & 0 \\ \vdots & \vdots &\ddots & \vdots \\ g_{n-k} & g_{n-k-1} & \cdots & g_0 \\ 0 & g_{n-k} & \cdots & g_{1} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & g_{n-k} \end{pmatrix} \end{equation*}

is a generator matrix for the code \(C\) with generator polynomial \(g(t)\text{.}\) The parity-check matrix for \(C\) is the \((n - k) \times n\) matrix

\begin{equation*} H = \begin{pmatrix} 0 & \cdots & 0 & 0 & h_k & \cdots & h_0 \\ 0 & \cdots & 0 & h_k & \cdots & h_0 & 0 \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ h_k & \cdots & h_0 & 0 & 0 & \cdots & 0 \end{pmatrix}\text{.} \end{equation*}
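These two matrices can be assembled mechanically from the coefficients of \(g(x)\) and \(h(x)\text{.}\) The sketch below (an illustration only; the function names are ours) does so for the \((7,4)\)-code of Example 17.17, reproducing the generator matrix given there and the parity-check matrix that appears in Example 17.19 below.

    # Build the n x k generator matrix and the (n-k) x n parity-check matrix
    # from the coefficients of g(x) and h(x), where x^n - 1 = g(x) h(x).
    def generator_matrix(g, n):
        k = n - (len(g) - 1)                 # deg g = n - k
        G = [[0] * k for _ in range(n)]
        for col in range(k):                 # column col is the encoding of t^col
            for i, gi in enumerate(g):
                G[col + i][col] = gi
        return G

    def parity_check_matrix(h, n):
        k = len(h) - 1                       # deg h = k
        H = [[0] * n for _ in range(n - k)]
        reversed_h = h[::-1]                 # (h_k, ..., h_0)
        for row in range(n - k):
            start = n - k - 1 - row          # each row shifts one place to the left
            for j, hj in enumerate(reversed_h):
                H[row][start + j] = hj
        return H

    g = [1, 1, 0, 1]                         # g(x) = 1 + x + x^3
    h = [1, 1, 1, 0, 1]                      # h(x) = 1 + x + x^2 + x^4
    for row in generator_matrix(g, 7):
        print(row)
    for row in parity_check_matrix(h, 7):
        print(row)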

We will leave the details of the proof of the following proposition as an exercise.

Proposition 17.18.

Let \(C = \langle g(t) \rangle\) be a cyclic code in \(R_n\) and suppose that \(x^n - 1 = g(x) h(x)\text{.}\) Then \(G\) and \(H\) as given above are generator and parity-check matrices for \(C\text{,}\) respectively. Furthermore, \(HG = 0\text{.}\)

Example 17.19.

In Example 17.17,

\begin{equation*} x^7 - 1 = g(x) h(x) = (1 + x + x^3)(1 + x + x^2 + x^4)\text{.} \end{equation*}

Therefore, a parity-check matrix for this code is

\begin{equation*} H = \begin{pmatrix} 0 & 0 & 1 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 & 1 & 1 & 0 \\ 1 & 0 & 1 & 1 & 1 & 0 & 0 \end{pmatrix}\text{.} \end{equation*}
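A quick, self-contained check (an illustration only) confirms both the factorization \(x^7 - 1 = g(x) h(x)\) over \({\mathbb Z}_2\) and that this matrix \(H\) annihilates a codeword such as the one corresponding to \(g(t)\) itself.

    # Verify x^7 - 1 = (1 + x + x^3)(1 + x + x^2 + x^4) over Z_2 and check that
    # H annihilates the 7-tuple corresponding to the codeword g(t).
    def poly_mult_mod2(f, g):
        product = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                product[i + j] ^= a & b
        return product

    g = [1, 1, 0, 1]                  # 1 + x + x^3
    h = [1, 1, 1, 0, 1]               # 1 + x + x^2 + x^4
    print(poly_mult_mod2(g, h))       # [1, 0, 0, 0, 0, 0, 0, 1], i.e. 1 + x^7

    H = [[0, 0, 1, 0, 1, 1, 1],
         [0, 1, 0, 1, 1, 1, 0],
         [1, 0, 1, 1, 1, 0, 0]]
    codeword = g + [0, 0, 0]          # g(t) written as a 7-tuple
    print([sum(r[i] * codeword[i] for i in range(7)) % 2 for r in H])   # [0, 0, 0]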

To determine the error-detecting and error-correcting capabilities of a cyclic code, we need to know something about determinants. If \(\alpha_1, \ldots, \alpha_n\) are elements in a field \(F\text{,}\) then the \(n \times n\) matrix

\begin{equation*} \begin{pmatrix} 1 & 1 & \cdots & 1 \\ \alpha_1 & \alpha_2 & \cdots & \alpha_n \\ \alpha_1^2 & \alpha_2^2 & \cdots & \alpha_n^2 \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_1^{n-1} & \alpha_2^{n-1} & \cdots & \alpha_n^{n-1} \end{pmatrix} \end{equation*}

is called the Vandermonde matrix. The determinant of this matrix is called the Vandermonde determinant. We will need the following lemma in our investigation of cyclic codes.

Lemma 17.20.

Let \(\alpha_1, \alpha_2, \ldots, \alpha_n\) be elements in a field \(F\) with \(n \geq 2\text{.}\) Then

\begin{equation*} \det \begin{pmatrix} 1 & 1 & \cdots & 1 \\ \alpha_1 & \alpha_2 & \cdots & \alpha_n \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_1^{n-1} & \alpha_2^{n-1} & \cdots & \alpha_n^{n-1} \end{pmatrix} = \prod_{1 \leq j \lt i \leq n} (\alpha_i - \alpha_j)\text{.} \end{equation*}

In particular, if the \(\alpha_i\)'s are distinct, then the determinant is nonzero.

Proof.

We will induct on \(n\text{.}\) If \(n = 2\text{,}\) then the determinant is \(\alpha_2 - \alpha_1\text{.}\) Let us assume the result for \(n - 1\) and consider the polynomial \(p(x)\) defined by

\begin{equation*} p(x) = \det \begin{pmatrix} 1 & 1 & \cdots & 1 & 1 \\ \alpha_1 & \alpha_2 & \cdots & \alpha_{n-1} & x \\ \alpha_1^2 & \alpha_2^2 & \cdots & \alpha_{n-1}^2 & x^2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \alpha_1^{n-1} & \alpha_2^{n-1} & \cdots & \alpha_{n-1}^{n-1} & x^{n-1} \end{pmatrix}\text{.} \end{equation*}

Expanding this determinant by cofactors on the last column, we see that \(p(x)\) is a polynomial of at most degree \(n-1\text{.}\) Moreover, the roots of \(p(x)\) are \(\alpha_1, \ldots, \alpha_{n-1}\text{,}\) since the substitution of any one of these elements in the last column will produce a column identical to the last column in the matrix. Remember that the determinant of a matrix is zero if it has two identical columns. Therefore,

\begin{equation*} p(x) = (x - \alpha_1)(x - \alpha_2) \cdots (x - \alpha_{n-1}) \beta\text{,} \end{equation*}

where

\begin{equation*} \beta = (-1)^{n + n} \det \begin{pmatrix} 1 & 1 & \cdots & 1 \\ \alpha_1 & \alpha_2 & \cdots & \alpha_{n-1} \\ \alpha_1^2 & \alpha_2^2 & \cdots & \alpha_{n-1}^2 \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_1^{n-2} & \alpha_2^{n-2} & \cdots & \alpha_{n-1}^{n-2} \end{pmatrix}\text{.} \end{equation*}

By our induction hypothesis,

\begin{equation*} \beta = (-1)^{n+n} \prod_{1 \leq j \lt i \leq n - 1} (\alpha_i - \alpha_j)\text{.} \end{equation*}

If we let \(x = \alpha_n\text{,}\) the result now follows immediately.

The following theorem gives us an estimate on the error detection and correction capabilities for a particular generator polynomial.

Theorem 17.21.

Let \(C = \langle g(t) \rangle\) be a cyclic code in \(R_n\) and suppose that \(\omega\) is a primitive \(n\)th root of unity over \({\mathbb Z}_2\text{.}\) If \(s\) consecutive powers of \(\omega\) are roots of \(g(x)\text{,}\) then the minimum distance of \(C\) is at least \(s + 1\text{.}\)

Proof.

Suppose that

\begin{equation*} g( \omega^r) = g(\omega^{r + 1}) = \cdots = g( \omega^{r + s - 1}) = 0\text{.} \end{equation*}

Let \(f(x)\) be a polynomial in \(C\) with \(s\) or fewer nonzero coefficients. We can write

\begin{equation*} f(x) = a_{i_0} x^{i_0} + a_{i_1} x^{i_1} + \cdots + a_{i_{s - 1}} x^{i_{s - 1}}\text{.} \end{equation*}

It will suffice to show that all of the \(a_i\)'s must be 0. Since

\begin{equation*} g( \omega^r) = g(\omega^{r + 1}) = \cdots = g( \omega^{r + s - 1}) = 0 \end{equation*}

and \(g(x)\) divides \(f(x)\text{,}\)

\begin{equation*} f( \omega^r) = f(\omega^{r + 1}) = \cdots = f( \omega^{r + s - 1}) = 0\text{.} \end{equation*}

Equivalently, we have the following system of equations:

\begin{align*} a_{i_0} (\omega^r)^{i_0} + a_{i_1} (\omega^r)^{i_1} + \cdots + a_{i_{s - 1}} (\omega^r)^{i_{s - 1}} & = 0\\ a_{i_0} (\omega^{r + 1})^{i_0} + a_{i_1} (\omega^{r + 1})^{i_1} + \cdots + a_{i_{s-1}} (\omega^{r+1})^{i_{s-1}} & = 0\\ & \vdots\\ a_{i_0} (\omega^{r + s - 1})^{i_0} + a_{i_1} (\omega^{r + s - 1})^{i_1} + \cdots + a_{i_{s - 1}} (\omega^{r + s - 1})^{i_{s - 1}} & = 0\text{.} \end{align*}

Therefore, \((a_{i_0}, a_{i_1}, \ldots, a_{i_{s - 1}})\) is a solution to the homogeneous system of linear equations

\begin{align*} (\omega^{i_0})^r x_0 + (\omega^{i_1})^r x_1 + \cdots + (\omega^{i_{s - 1}})^r x_{s - 1} & = 0\\ (\omega^{i_0})^{r + 1} x_0 + (\omega^{i_1})^{r + 1} x_1 + \cdots + (\omega^{i_{s - 1}})^{r + 1} x_{s - 1} & = 0\\ & \vdots\\ (\omega^{i_0})^{r + s - 1} x_0 + (\omega^{i_1})^{r + s - 1} x_1 + \cdots + (\omega^{i_{s - 1}})^{r + s - 1} x_{s - 1} & = 0\text{.} \end{align*}

However, this system has a unique solution, since the determinant of the matrix

\begin{equation*} \begin{pmatrix} (\omega^{i_0})^r & (\omega^{i_1})^r & \cdots & (\omega^{i_{s-1}})^r \\ (\omega^{i_0})^{r+1} & (\omega^{i_1})^{r+1} & \cdots & (\omega^{i_{s-1}})^{r+1} \\ \vdots & \vdots & \ddots & \vdots \\ (\omega^{i_0})^{r+s-1} & (\omega^{i_1})^{r+s-1} & \cdots & (\omega^{i_{s-1}})^{r+s-1} \end{pmatrix} \end{equation*}

can be shown to be nonzero using Lemma 17.20 and the basic properties of determinants (Exercise). Therefore, this solution must be \(a_{i_0} = a_{i_1} = \cdots = a_{i_{s - 1}} = 0\text{.}\)

Subsection 17.2.2 BCH Codes

Some of the most important codes, discovered independently by A. Hocquenghem in 1959 and by R. C. Bose and D. V. Ray-Chaudhuri in 1960, are BCH codes. The European and transatlantic communication systems both use BCH codes. Information words to be encoded are of length \(231\text{,}\) and a polynomial of degree \(24\) is used to generate the code. Since \(231 + 24 = 255 = 2^8-1\text{,}\) we are dealing with a \((255, 231)\)-block code. This BCH code will detect six errors and has a failure rate of \(1\) in \(16\) million. One advantage of BCH codes is that efficient error correction algorithms exist for them.

The idea behind BCH codes is to choose a generator polynomial of smallest degree that has the largest error detection and error correction capabilities. Let \(d = 2r + 1\) for some \(r \geq 0\text{.}\) Suppose that \(\omega\) is a primitive \(n\)th root of unity over \({\mathbb Z}_2\text{,}\) and let \(m_i(x)\) be the minimal polynomial over \({\mathbb Z}_2\) of \(\omega^i\text{.}\) If

\begin{equation*} g(x) = \lcm[ m_1(x), m_{2}(x), \ldots, m_{2r}(x)]\text{,} \end{equation*}

then the cyclic code \(\langle g(t) \rangle\) in \(R_n\) is called the BCH code of length \(n\) and distance \(d\text{.}\) By Theorem 17.21, the minimum distance of \(C\) is at least \(d\text{.}\)
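Over \({\mathbb Z}_2\) the conjugates of \(\omega^i\) are \(\omega^{2i}, \omega^{4i}, \ldots\text{,}\) so many of the \(m_i(x)\)'s coincide. The exponents that share a minimal polynomial can be grouped by the sketch below (an illustration only; the function name is ours), shown here for \(n = 15\text{.}\)

    # Group the exponents 0, 1, ..., n-1 into cyclotomic cosets modulo n:
    # i and 2i mod n always index the same minimal polynomial over Z_2.
    def cyclotomic_cosets(n):
        remaining = set(range(n))
        cosets = []
        while remaining:
            i = min(remaining)
            coset = set()
            j = i
            while j not in coset:
                coset.add(j)
                j = (2 * j) % n
            cosets.append(sorted(coset))
            remaining -= coset
        return cosets

    print(cyclotomic_cosets(15))
    # [[0], [1, 2, 4, 8], [3, 6, 9, 12], [5, 10], [7, 11, 13, 14]]
    # The coset sizes 1, 4, 4, 2, 4 match the degrees of the factors of x^15 - 1.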

Theorem 17.22.

Let \(C = \langle g(t) \rangle\) be a cyclic code in \(R_n\text{,}\) where \(\omega\) is a primitive \(n\)th root of unity over \({\mathbb Z}_2\text{.}\) The following statements are equivalent.

1. The code \(C\) is a BCH code whose minimum distance is at least \(d\text{.}\)

2. A code polynomial \(f(t)\) is in \(C\) if and only if \(f( \omega^i ) = 0\) for \(1 \leq i \lt d\text{.}\)

3. The matrix

\begin{equation*} H = \begin{pmatrix} 1 & \omega & \omega^2 & \cdots & \omega^{n - 1} \\ 1 & \omega^2 & \omega^4 & \cdots & \omega^{(n - 1)(2)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega^{2r} & \omega^{4r} & \cdots & \omega^{(n - 1)(2r)} \end{pmatrix} \end{equation*}

is a parity-check matrix for \(C\text{.}\)

Proof.

(1) \(\Rightarrow\) (2). If \(f(t)\) is in \(C\text{,}\) then \(g(x) \mid f(x)\) in \({\mathbb Z}_2[x]\text{.}\) Hence, for \(i = 1, \ldots, 2r\text{,}\) \(f( \omega^i) = 0\) since \(g( \omega^i ) = 0\text{.}\) Conversely, suppose that \(f( \omega^i) = 0\) for \(1 \leq i \lt d\text{.}\) Then \(f(x)\) is divisible by each \(m_i(x)\text{,}\) since \(m_i(x)\) is the minimal polynomial of \(\omega^i\text{.}\) Therefore, \(g(x) \mid f(x)\) by the definition of \(g(x)\text{.}\) Consequently, \(f(x)\) is a codeword.

(2) \(\Rightarrow\) (3). Let \(f(t) = a_0 + a_1 t + \cdots + a_{n - 1} t^{n - 1}\) be in \(R_n\text{.}\) The corresponding \(n\)-tuple in \({\mathbb Z}_2^n\) is \({\mathbf x} = (a_0 a_1 \cdots a_{n - 1})^\transpose\text{.}\) By (2),

\begin{equation*} H {\mathbf x} = \begin{pmatrix} a_0 + a_1 \omega + \cdots + a_{n-1} \omega^{n-1} \\ a_0 + a_1 \omega^2 + \cdots + a_{n-1} (\omega^2)^{n-1} \\ \vdots \\ a_0 + a_1 \omega^{2r} + \cdots + a_{n-1} (\omega^{2r})^{n-1} \end{pmatrix} = \begin{pmatrix} f(\omega) \\ f(\omega^2) \\ \vdots \\ f(\omega^{2r}) \end{pmatrix} = 0 \end{equation*}

exactly when \(f(t)\) is in \(C\text{.}\) Thus, \(H\) is a parity-check matrix for \(C\text{.}\)

(3) \(\Rightarrow\) (1). By (3), a code polynomial \(f(t) = a_0 + a_1 t + \cdots + a_{n - 1} t^{n - 1}\) is in \(C\) exactly when \(f(\omega^i) = 0\) for \(i = 1, \ldots, 2r\text{.}\) The smallest such polynomial is \(g(t) = \lcm[ m_1(t),\ldots, m_{2r}(t)]\text{.}\) Therefore, \(C = \langle g(t) \rangle\text{.}\)

Example 17.23.

It is easy to verify that \(x^{15} - 1 \in {\mathbb Z}_2[x]\) has a factorization

\begin{equation*} x^{15} - 1 = (x + 1)(x^2 + x + 1)(x^4 + x + 1)(x^4 + x^3 + 1)(x^4 + x^3 + x^2 + x + 1)\text{,} \end{equation*}

where each of the factors is an irreducible polynomial. Let \(\omega\) be a root of \(1 + x + x^4\text{.}\) The Galois field \(\gf(2^4)\) is

\begin{equation*} \{ a_0 + a_1 \omega + a_2 \omega^2 + a_3 \omega^3 : a_i \in {\mathbb Z}_2 \text{ and } 1 + \omega + \omega^4 = 0 \}\text{.} \end{equation*}

By Example 17.8, \(\omega\) is a primitive \(15\)th root of unity. The minimal polynomial of \(\omega\) is \(m_1(x) = 1 + x + x^4\text{.}\) It is easy to see that \(\omega^2\) and \(\omega^4\) are also roots of \(m_1(x)\text{.}\) The minimal polynomial of \(\omega^3\) is \(m_2(x) = 1 + x + x^2 + x^3 + x^4\text{.}\) Therefore,

\begin{equation*} g(x) = m_1(x) m_2(x) = 1 + x^4 + x^6 + x^7 + x^8 \end{equation*}

has roots \(\omega\text{,}\) \(\omega^2\text{,}\) \(\omega^3\text{,}\) \(\omega^4\text{.}\) Since both \(m_1(x)\) and \(m_2(x)\) divide \(x^{15} - 1\text{,}\) the BCH code is a \((15, 7)\)-code. If \(x^{15} -1 = g(x)h(x)\text{,}\) then \(h(x) = 1 + x^4 + x^6 + x^7\text{;}\) therefore, a parity-check matrix for this code is

\begin{equation*} \left( \begin{array}{ccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right)\text{.} \end{equation*}
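The computations in this example can be reproduced with a few lines of code. The following sketch (an illustration only; the representation and helper names are ours) models \(\gf(2^4)\) by \(4\)-bit integers, reducing by the relation \(1 + \omega + \omega^4 = 0\text{,}\) and checks that \(\omega\text{,}\) \(\omega^2\text{,}\) \(\omega^4\) are roots of \(m_1(x)\) while all of \(\omega, \ldots, \omega^4\) are roots of \(g(x)\text{.}\)

    # Arithmetic in GF(2^4) = Z_2[x]/(1 + x + x^4): elements are 4-bit integers,
    # where bit i is the coefficient of omega^i.
    MODULUS = 0b10011                       # 1 + x + x^4

    def gf_mult(a, b):
        """Multiply two elements of GF(16)."""
        result = 0
        while b:
            if b & 1:
                result ^= a
            a <<= 1
            if a & 0b10000:                 # reduce once the degree reaches 4
                a ^= MODULUS
            b >>= 1
        return result

    def poly_eval(coeffs, alpha):
        """Evaluate a polynomial with Z_2 coefficients at an element of GF(16)."""
        value, power = 0, 1
        for c in coeffs:
            if c:
                value ^= power
            power = gf_mult(power, alpha)
        return value

    omega = 0b0010                          # the class of x, a primitive 15th root of unity

    m1 = [1, 1, 0, 0, 1]                    # m_1(x) = 1 + x + x^4
    g  = [1, 0, 0, 0, 1, 0, 1, 1, 1]        # g(x) = 1 + x^4 + x^6 + x^7 + x^8

    powers = [1]
    for _ in range(4):
        powers.append(gf_mult(powers[-1], omega))   # omega^0, ..., omega^4

    print([poly_eval(m1, w) for w in powers[1:]])   # [0, 0, 6, 0]: omega^3 is not a root of m_1
    print([poly_eval(g, w) for w in powers[1:]])    # [0, 0, 0, 0]: omega, ..., omega^4 all are roots of g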