Section 15.1 Matrix Groups

Subsection 15.1.1 Some Facts from Linear Algebra

Before we study matrix groups, we must recall some basic facts from linear algebra. One of the most fundamental ideas of linear algebra is that of a linear transformation. A linear transformation or linear map \(T : {\mathbb R}^n \rightarrow {\mathbb R}^m\) is a map that preserves vector addition and scalar multiplication; that is, for vectors \({\mathbf x}\) and \({\mathbf y}\) in \({\mathbb R}^n\) and a scalar \(\alpha \in {\mathbb R}\text{,}\)

\begin{align*} T({\mathbf x}+{\mathbf y}) & = T({\mathbf x}) + T({\mathbf y})\\ T(\alpha {\mathbf y}) & = \alpha T({\mathbf y})\text{.} \end{align*}

An \(m \times n\) matrix with entries in \({\mathbb R}\) represents a linear transformation from \({\mathbb R}^n\) to \({\mathbb R}^m\text{.}\) If we write vectors \({\mathbf x} = (x_1, \ldots, x_n)^\transpose\) and \({\mathbf y} = (y_1, \ldots, y_n)^\transpose\) in \({\mathbb R}^n\) as column matrices, then an \(m \times n\) matrix

\begin{equation*} A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \end{equation*}

maps the vectors to \({\mathbb R}^m\) linearly by matrix multiplication. Observe that if \(\alpha\) is a real number,

\begin{equation*} A({\mathbf x} + {\mathbf y} ) = A {\mathbf x }+ A {\mathbf y} \qquad \text{and} \qquad \alpha A {\mathbf x} = A ( \alpha {\mathbf x})\text{,} \end{equation*}

where

\begin{equation*} {\mathbf x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}\text{.} \end{equation*}

We will often abbreviate the matrix \(A\) by writing \((a_{ij})\text{.}\)
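These identities are easy to confirm numerically. The following is a minimal sketch (assuming NumPy is available; the particular matrix and vectors are our own choices) that checks both properties for a \(3 \times 2\) matrix:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # a 3 x 2 matrix: a linear map from R^2 to R^3
x = rng.standard_normal(2)
y = rng.standard_normal(2)
alpha = 2.5

# A(x + y) = Ax + Ay and A(alpha x) = alpha (Ax), up to rounding error
print(np.allclose(A @ (x + y), A @ x + A @ y))        # True
print(np.allclose(A @ (alpha * x), alpha * (A @ x)))  # True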

Conversely, if \(T : {\mathbb R}^n \rightarrow {\mathbb R}^m\) is a linear map, we can associate a matrix \(A\) with \(T\) by considering what \(T\) does to the vectors

\begin{align*} {\mathbf e}_1 & = (1, 0, \ldots, 0)^\transpose\\ {\mathbf e}_2 & = (0, 1, \ldots, 0)^\transpose\\ & \vdots &\\ {\mathbf e}_n & = (0, 0, \ldots, 1)^\transpose\text{.} \end{align*}

We can write any vector \({\mathbf x} = (x_1, \ldots, x_n)^\transpose\) as

\begin{equation*} x_1 {\mathbf e}_1 + x_2 {\mathbf e}_2 + \cdots + x_n {\mathbf e}_n\text{.} \end{equation*}

Consequently, if

\begin{align*} T({\mathbf e}_1) & = (a_{11}, a_{21}, \ldots, a_{m1})^\transpose,\\ T({\mathbf e}_2) & = (a_{12}, a_{22}, \ldots, a_{m2})^\transpose,\\ & \vdots &\\ T({\mathbf e}_n) & = (a_{1n}, a_{2n}, \ldots, a_{mn})^\transpose\text{,} \end{align*}

then

\begin{align*} T({\mathbf x} ) & = T(x_1 {\mathbf e}_1 + x_2 {\mathbf e}_2 + \cdots + x_n {\mathbf e}_n)\\ & = x_1 T({\mathbf e}_1) + x_2 T({\mathbf e}_2) + \cdots + x_n T({\mathbf e}_n)\\ & = \left( \sum_{k=1}^{n} a_{1k} x_k, \ldots, \sum_{k=1}^{n} a_{mk} x_k \right)^\transpose\\ & = A {\mathbf x}\text{.} \end{align*}
Example 15.1.

If we let \(T : {\mathbb R}^2 \rightarrow {\mathbb R}^2\) be the map given by

\begin{equation*} T(x_1, x_2) = (2 x_1 + 5 x_2, - 4 x_1 + 3 x_2)\text{,} \end{equation*}

the axioms that \(T\) must satisfy to be a linear transformation are easily verified. The column vectors \(T {\mathbf e}_1 = (2, -4)^\transpose\) and \(T {\mathbf e}_2 = (5,3)^\transpose\) tell us that \(T\) is given by the matrix

\begin{equation*} A = \begin{pmatrix} 2 & 5 \\ -4 & 3 \end{pmatrix}\text{.} \end{equation*}
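As an illustrative check (a sketch assuming NumPy; the helper name T is ours), we can recover this matrix by applying \(T\) to the standard basis vectors and stacking the images as columns:

import numpy as np

def T(x):
    # the map of Example 15.1
    return np.array([2 * x[0] + 5 * x[1], -4 * x[0] + 3 * x[1]])

e1, e2 = np.array([1, 0]), np.array([0, 1])
A = np.column_stack([T(e1), T(e2)])   # columns are T(e1) and T(e2)
print(A)                              # [[ 2  5]
                                      #  [-4  3]]
x = np.array([7, -2])
print(np.allclose(A @ x, T(x)))       # True: A represents T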

Since we are interested in groups of matrices, we need to know which matrices have multiplicative inverses. Recall that an \(n \times n\) matrix \(A\) is invertible exactly when there exists another matrix \(A^{-1}\) such that \(A A^{-1} = A^{-1} A = I\text{,}\) where

\begin{equation*} I = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} \end{equation*}

is the \(n \times n\) identity matrix. From linear algebra we know that \(A\) is invertible if and only if the determinant of \(A\) is nonzero. Sometimes an invertible matrix is said to be nonsingular.

Example 15.2.

If \(A\) is the matrix

\begin{equation*} \begin{pmatrix} 2 & 1 \\ 5 & 3 \end{pmatrix}\text{,} \end{equation*}

then the inverse of \(A\) is

\begin{equation*} A^{-1} = \begin{pmatrix} 3 & -1 \\ -5 & 2 \end{pmatrix}\text{.} \end{equation*}

We are guaranteed that \(A^{-1}\) exists, since \(\det(A) = 2 \cdot 3 - 5 \cdot 1 = 1\) is nonzero.
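This computation is easy to reproduce (a minimal sketch, assuming NumPy):

import numpy as np

A = np.array([[2, 1], [5, 3]])
print(np.linalg.det(A))   # 1.0 (up to rounding), so A is invertible
print(np.linalg.inv(A))   # [[ 3. -1.]
                          #  [-5.  2.]]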

Some other facts about determinants will also prove useful in the course of this chapter. Let \(A\) and \(B\) be \(n \times n\) matrices. From linear algebra we have the following properties of determinants.

  • The determinant is multiplicative; that is, \(\det( A B) = (\det A )(\det B)\text{.}\) In particular, restricted to invertible matrices, the determinant is a homomorphism into \({\mathbb R}^\ast\text{,}\) the multiplicative group of nonzero real numbers.

  • If \(A\) is an invertible matrix, then \(\det(A^{-1}) = 1 / \det A\text{.}\)

  • If we define the transpose of a matrix \(A = (a_{ij})\) to be \(A^\transpose = (a_{ji})\text{,}\) then \(\det(A^\transpose) = \det A\text{.}\)

  • Let \(T\) be the linear transformation associated with an \(n \times n\) matrix \(A\text{.}\) Then \(T\) multiplies volumes by a factor of \(|\det A|\text{.}\) In the case of \({\mathbb R}^2\text{,}\) this means that \(T\) multiplies areas by \(|\det A|\text{.}\)

Linear maps, matrices, and determinants are covered in any elementary linear algebra text; however, if you have not had a course in linear algebra, it is a straightforward process to verify these properties directly for \(2 \times 2\) matrices, the case with which we are most concerned.
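For instance, the following sketch (ours, assuming NumPy) spot-checks the first three properties on random \(2 \times 2\) matrices:

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))
det = np.linalg.det

print(np.allclose(det(A @ B), det(A) * det(B)))        # multiplicativity
print(np.allclose(det(np.linalg.inv(A)), 1 / det(A)))  # inverse rule
print(np.allclose(det(A.T), det(A)))                   # transpose rule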

Subsection 15.1.2 The General and Special Linear Groups

The set of all \(n \times n\) invertible matrices forms a group called the general linear group. We will denote this group by \(GL_n({\mathbb R})\text{.}\) The general linear group has several important subgroups. The multiplicative properties of the determinant imply that the set of matrices with determinant one is a subgroup of the general linear group. Stated another way, suppose that \(\det(A) =1\) and \(\det(B) = 1\text{.}\) Then \(\det(AB) = \det(A) \det (B) = 1\) and \(\det(A^{-1}) = 1 / \det A = 1\text{.}\) This subgroup is called the special linear group and is denoted by \(SL_n({\mathbb R})\text{.}\)

Example 15.3.

Given a \(2 \times 2\) matrix

\begin{equation*} A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\text{,} \end{equation*}

the determinant of \(A\) is \(ad-bc\text{.}\) The group \(GL_2({\mathbb R})\) consists of those matrices in which \(ad-bc \neq 0\text{.}\) The inverse of \(A\) is

\begin{equation*} A^{-1} = \frac{1}{ad-bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}\text{.} \end{equation*}

If \(A\) is in \(SL_2({\mathbb R})\text{,}\) then

\begin{equation*} A^{-1} = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}\text{.} \end{equation*}
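The closed-form inverse is straightforward to implement. Here is a minimal sketch (assuming NumPy; the helper name inv2 is ours) that codes the \(2 \times 2\) formula and compares it with a library inverse:

import numpy as np

def inv2(M):
    # closed-form inverse of a 2 x 2 matrix [[a, b], [c, d]]
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[2.0, 1.0], [5.0, 3.0]])
print(np.allclose(inv2(A), np.linalg.inv(A)))   # True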

Geometrically, \(SL_2({\mathbb R})\) is the group that preserves the areas of parallelograms. Let

\begin{equation*} A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \end{equation*}

be in \(SL_2({\mathbb R})\text{.}\) In Figure 15.4, the unit square corresponding to the vectors \({\mathbf x} = (1,0)^\transpose\) and \({\mathbf y} = (0,1)^\transpose\) is taken by \(A\) to the parallelogram with sides \((1,0)^\transpose\) and \((1, 1)^\transpose\text{;}\) that is, \(A {\mathbf x} = (1,0)^\transpose\) and \(A {\mathbf y} = (1, 1)^\transpose\text{.}\) Notice that these two parallelograms have the same area.

Two side-by-side figures. The figure on the left shows the unit square spanned by the vectors \((1,0)^\transpose\) and \((0,1)^\transpose\text{;}\) the figure on the right shows its image under \(A\text{,}\) the parallelogram spanned by \((1,0)^\transpose\) and \((1,1)^\transpose\text{.}\)
Figure 15.4. \(SL_2(\mathbb R)\) acting on the unit square
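Numerically, the image parallelogram has area \(|\det A|\) times the area of the unit square, so a matrix in \(SL_2({\mathbb R})\) preserves area. A minimal check for the shear above (assuming NumPy):

import numpy as np

A = np.array([[1, 1], [0, 1]])                 # the shear of Example 15.3
x, y = np.array([1, 0]), np.array([0, 1])
Ax, Ay = A @ x, A @ y                          # (1, 0) and (1, 1)
# the parallelogram spanned by Ax and Ay has area |det [Ax Ay]|
area = abs(np.linalg.det(np.column_stack([Ax, Ay])))
print(area)   # 1.0: area is preserved, since det A = 1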

Subsection 15.1.3 The Orthogonal Group \(O(n)\)

Another subgroup of \(GL_n({\mathbb R})\) is the orthogonal group. A matrix \(A\) is orthogonal if \(A^{-1} = A^\transpose\text{.}\) The orthogonal group consists of the set of all orthogonal matrices. We write \(O(n)\) for the \(n \times n\) orthogonal group. We leave as an exercise the proof that \(O(n)\) is a subgroup of \(GL_n( {\mathbb R})\text{.}\)

Example 15.5.

The following matrices are orthogonal:

\begin{equation*} \begin{pmatrix} 3/5 & -4/5 \\ 4/5 & 3/5 \end{pmatrix}, \quad \begin{pmatrix} 1/2 & -\sqrt{3}/2 \\ \sqrt{3}/2 & 1/2 \end{pmatrix}, \quad \begin{pmatrix} -1/\sqrt{2} & 0 & 1/ \sqrt{2} \\ 1/\sqrt{6} & -2/\sqrt{6} & 1/\sqrt{6} \\ 1/ \sqrt{3} & 1/ \sqrt{3} & 1/ \sqrt{3} \end{pmatrix}\text{.} \end{equation*}
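Since \(A\) is orthogonal exactly when \(A^\transpose A = I\text{,}\) orthogonality is easy to test computationally. A minimal check of the three matrices above (assuming NumPy):

import numpy as np

mats = [
    np.array([[3/5, -4/5],
              [4/5,  3/5]]),
    np.array([[1/2, -np.sqrt(3)/2],
              [np.sqrt(3)/2, 1/2]]),
    np.array([[-1/np.sqrt(2), 0, 1/np.sqrt(2)],
              [1/np.sqrt(6), -2/np.sqrt(6), 1/np.sqrt(6)],
              [1/np.sqrt(3), 1/np.sqrt(3), 1/np.sqrt(3)]]),
]
for A in mats:
    print(np.allclose(A.T @ A, np.eye(A.shape[0])))   # True, True, True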

There is a more geometric way of viewing the group \(O(n)\text{.}\) The orthogonal matrices are exactly those matrices that preserve the length of vectors. We can define the length of a vector using the Euclidean inner product, or dot product, of two vectors. The Euclidean inner product of two vectors \({\mathbf x}=(x_1, \ldots, x_n)^\transpose\) and \({\mathbf y}=(y_1, \ldots, y_n)^\transpose\) is

\begin{equation*} \langle {\mathbf x}, {\mathbf y} \rangle = {\mathbf x}^\transpose {\mathbf y} = (x_1, x_2, \ldots, x_n) \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} = x_1 y_1 + \cdots + x_n y_n\text{.} \end{equation*}

We define the length of a vector \({\mathbf x}=(x_1, \ldots, x_n)^\transpose\) to be

\begin{equation*} \| {\mathbf x} \| = \sqrt{\langle {\mathbf x}, {\mathbf x} \rangle} = \sqrt{x_1^2 + \cdots + x_n^2}\text{.} \end{equation*}

Associated with the notion of the length of a vector is the idea of the distance between two vectors. We define the distance between two vectors \({\mathbf x}\) and \({\mathbf y}\) to be \(\| {\mathbf x}-{\mathbf y} \|\text{.}\) We leave as an exercise the proof of the following proposition about the properties of Euclidean inner products.

Proposition 15.6.

Let \({\mathbf x}\text{,}\) \({\mathbf y}\text{,}\) and \({\mathbf w}\) be vectors in \({\mathbb R}^n\) and \(\alpha \in {\mathbb R}\text{.}\) Then

1. \(\langle {\mathbf x}, {\mathbf y} \rangle = \langle {\mathbf y}, {\mathbf x} \rangle\text{.}\)

2. \(\langle {\mathbf x}, {\mathbf y} + {\mathbf w} \rangle = \langle {\mathbf x}, {\mathbf y} \rangle + \langle {\mathbf x}, {\mathbf w} \rangle\text{.}\)

3. \(\langle \alpha {\mathbf x}, {\mathbf y} \rangle = \langle {\mathbf x}, \alpha {\mathbf y} \rangle = \alpha \langle {\mathbf x}, {\mathbf y} \rangle\text{.}\)

4. \(\langle {\mathbf x}, {\mathbf x} \rangle \geq 0\) with equality exactly when \({\mathbf x} = {\mathbf 0}\text{.}\)

5. If \(\langle {\mathbf x}, {\mathbf y} \rangle = 0\) for all \({\mathbf x}\) in \({\mathbb R}^n\text{,}\) then \({\mathbf y} = {\mathbf 0}\text{.}\)

Example 15.7.

The vector \({\mathbf x} =(3,4)^\transpose\) has length \(\sqrt{3^2 + 4^2} = 5\text{.}\) We can also see that the orthogonal matrix

\begin{equation*} A= \begin{pmatrix} 3/5 & -4/5 \\ 4/5 & 3/5 \end{pmatrix} \end{equation*}

preserves the length of this vector. The vector \(A{\mathbf x} = (-7/5,24/5)^\transpose\) also has length 5.
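The computation can be confirmed directly (a sketch assuming NumPy):

import numpy as np

A = np.array([[3/5, -4/5], [4/5, 3/5]])
x = np.array([3.0, 4.0])
print(np.linalg.norm(x))       # 5.0
print(A @ x)                   # [-1.4  4.8], that is, (-7/5, 24/5)
print(np.linalg.norm(A @ x))   # 5.0: the length is preserved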

Since \(\det(A A^\transpose) = \det(I) = 1\) and \(\det(A) = \det( A^\transpose )\text{,}\) the determinant of any orthogonal matrix is either \(1\) or \(-1\text{.}\) Consider the column vectors

\begin{equation*} {\mathbf a}_j = \begin{pmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{nj} \end{pmatrix} \end{equation*}

of the orthogonal matrix \(A= (a_{ij})\text{.}\) Since \(AA^\transpose = I\text{,}\) \(\langle {\mathbf a}_r, {\mathbf a}_s \rangle = \delta_{rs}\text{,}\) where

\begin{equation*} \delta_{rs} = \begin{cases} 1 & r = s \\ 0 & r \neq s \end{cases} \end{equation*}

is the Kronecker delta. Accordingly, column vectors of an orthogonal matrix all have length 1; and the Euclidean inner product of distinct column vectors is zero. Any set of vectors satisfying these properties is called an orthonormal set. Conversely, given an \(n \times n\) matrix \(A\) whose columns form an orthonormal set, it follows that \(A^{-1} = A^\transpose\text{.}\)
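A short sketch (ours, assuming NumPy) checking the Kronecker delta pattern for the columns of an orthogonal matrix:

import numpy as np

A = np.array([[3/5, -4/5], [4/5, 3/5]])   # an orthogonal matrix
n = A.shape[1]
for r in range(n):
    for s in range(n):
        # <a_r, a_s> should equal the Kronecker delta
        print(r, s, round(np.dot(A[:, r], A[:, s]), 10))
# prints 1.0 when r = s and 0.0 otherwise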

We say that a matrix \(A\) is distance-preserving, length-preserving, or inner product-preserving when \(\| A{\mathbf x}- A{\mathbf y} \| =\| {\mathbf x}- {\mathbf y} \|\text{,}\) \(\| A{\mathbf x} \| =\| {\mathbf x} \|\text{,}\) or \(\langle A{\mathbf x}, A{\mathbf y} \rangle = \langle {\mathbf x},{\mathbf y} \rangle\text{,}\) respectively. The following theorem, which characterizes the orthogonal group, says that these notions are the same.

Theorem 15.8.

Let \(A\) be an \(n \times n\) matrix. The following statements are equivalent.

1. The columns of the matrix \(A\) form an orthonormal set.

2. \(A^{-1} = A^\transpose\text{.}\)

3. For vectors \({\mathbf x}\) and \({\mathbf y}\text{,}\) \(\langle A{\mathbf x}, A{\mathbf y} \rangle = \langle {\mathbf x}, {\mathbf y} \rangle\text{.}\)

4. For vectors \({\mathbf x}\) and \({\mathbf y}\text{,}\) \(\| A{\mathbf x}- A{\mathbf y} \| = \| {\mathbf x}- {\mathbf y} \|\text{.}\)

5. For any vector \({\mathbf x}\text{,}\) \(\| A{\mathbf x} \| = \| {\mathbf x} \|\text{.}\)

Proof.

We have already shown (1) and (2) to be equivalent.

\((2) \Rightarrow (3)\text{.}\)

\begin{align*} \langle A{\mathbf x}, A{\mathbf y} \rangle & = (A {\mathbf x})^\transpose A {\mathbf y}\\ & = {\mathbf x}^\transpose A^\transpose A {\mathbf y}\\ & = {\mathbf x}^\transpose {\mathbf y}\\ & = \langle {\mathbf x}, {\mathbf y} \rangle\text{.} \end{align*}

\((3) \Rightarrow (2)\text{.}\) Since

\begin{align*} \langle {\mathbf x}, {\mathbf x} \rangle & = \langle A{\mathbf x}, A{\mathbf x} \rangle\\ & = {\mathbf x}^\transpose A^\transpose A {\mathbf x}\\ & = \langle {\mathbf x}, A^\transpose A{\mathbf x} \rangle\text{,} \end{align*}

we know that \(\langle {\mathbf x}, (A^\transpose A - I){\mathbf x} \rangle = 0\) for all \({\mathbf x}\text{.}\) Since \(A^\transpose A - I\) is symmetric, this forces \(A^\transpose A - I = 0\text{;}\) that is, \(A^{-1} = A^\transpose\text{.}\)

\((3) \Rightarrow (4)\text{.}\) If \(A\) is inner product-preserving, then \(A\) is distance-preserving, since

\begin{align*} \| A{\mathbf x} - A{\mathbf y} \|^2 & = \| A({\mathbf x} - {\mathbf y}) \|^2\\ & = \langle A({\mathbf x} - {\mathbf y}), A({\mathbf x} - {\mathbf y}) \rangle\\ & = \langle {\mathbf x} - {\mathbf y}, {\mathbf x} - {\mathbf y} \rangle\\ & = \| {\mathbf x} - {\mathbf y} \|^2\text{.} \end{align*}

\((4) \Rightarrow (5)\text{.}\) If \(A\) is distance-preserving, then \(A\) is length-preserving. Letting \({\mathbf y} = 0\text{,}\) we have

\begin{equation*} \| A{\mathbf x}\| = \| A{\mathbf x}- A{\mathbf y} \| = \| {\mathbf x}- {\mathbf y} \| = \| {\mathbf x} \|\text{.} \end{equation*}

\((5) \Rightarrow (3)\text{.}\) We use the following identity to show that length-preserving implies inner product-preserving:

\begin{equation*} \langle {\mathbf x}, {\mathbf y} \rangle = \frac{1}{2} \left[ \|{\mathbf x} +{\mathbf y}\|^2 - \|{\mathbf x}\|^2 - \|{\mathbf y}\|^2 \right]\text{.} \end{equation*}

Observe that

\begin{align*} \langle A {\mathbf x}, A {\mathbf y} \rangle & = \frac{1}{2} \left[ \|A {\mathbf x} + A {\mathbf y} \|^2 - \|A {\mathbf x} \|^2 - \|A {\mathbf y} \|^2 \right]\\ & = \frac{1}{2} \left[ \|A ( {\mathbf x} + {\mathbf y} ) \|^2 - \|A {\mathbf x} \|^2 - \|A {\mathbf y} \|^2 \right]\\ & = \frac{1}{2} \left[ \|{\mathbf x} + {\mathbf y}\|^2 - \|{\mathbf x}\|^2 - \|{\mathbf y}\|^2 \right]\\ & = \langle {\mathbf x}, {\mathbf y} \rangle\text{.} \end{align*}
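The polarization identity at the heart of this last step can also be sanity-checked numerically (a sketch assuming NumPy, with random vectors of our own choosing):

import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(4)
y = rng.standard_normal(4)
norm = np.linalg.norm

lhs = np.dot(x, y)
rhs = 0.5 * (norm(x + y)**2 - norm(x)**2 - norm(y)**2)
print(np.allclose(lhs, rhs))   # True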
Two side-by-side figures. The figure on the left shows a vector from the origin to \((a, b)\) together with its reflection from the origin to \((a, -b)\text{.}\) The figure on the right shows a vector from the origin to \((\cos \theta, \sin \theta)\) and a perpendicular vector from the origin to \((\sin \theta, -\cos \theta)\text{.}\)
Figure 15.9. \(O(2)\) acting on \(\mathbb R^2\)
Example 15.10.

Let us examine the orthogonal group on \({\mathbb R}^2\) a bit more closely. An element \(A \in O(2)\) is determined by its action on \({\mathbf e}_1 = (1, 0)^\transpose\) and \({\mathbf e}_2 = (0, 1)^\transpose\text{.}\) If \(A{\mathbf e}_1 = (a,b)^\transpose\text{,}\) then \(a^2 + b^2 = 1\text{,}\) since the length of a vector must be preserved when it is multiplied by \(A\text{.}\) Since multiplication by an element of \(O(2)\) preserves length and orthogonality, \(A{\mathbf e}_2 = \pm(-b, a)^\transpose\text{.}\) If we choose \(A{\mathbf e}_2 = (-b, a)^\transpose\text{,}\) then

\begin{equation*} A = \begin{pmatrix} a & -b \\ b & a \end{pmatrix} = \begin{pmatrix} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}\text{,} \end{equation*}

where \(0 \leq \theta \lt 2 \pi\text{.}\) The matrix \(A\) rotates a vector in \(\mathbb R^2\) counterclockwise about the origin by an angle of \(\theta\) (Figure 15.9).
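A minimal sketch (assuming NumPy; the helper name rotation is ours) confirming that such a matrix is orthogonal with determinant one, that is, an element of \(SO(2)\text{:}\)

import numpy as np

def rotation(theta):
    # counterclockwise rotation of R^2 about the origin by theta
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

A = rotation(np.pi / 3)
print(np.allclose(A.T @ A, np.eye(2)))      # True: A is orthogonal
print(np.allclose(np.linalg.det(A), 1.0))   # True: A is in SO(2)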

If we choose \(A{\mathbf e}_2 = (b, -a)^\transpose\text{,}\) then we obtain the matrix

\begin{equation*} B = \begin{pmatrix} a & b \\ b & -a \end{pmatrix} = \begin{pmatrix} \cos \theta & \sin \theta \\ \sin \theta & -\cos \theta \end{pmatrix}\text{.} \end{equation*}

Here, \(\det B = -1\) and

\begin{equation*} B^2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\text{.} \end{equation*}

A reflection about the horizontal axis is given by the matrix

\begin{equation*} C = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\text{,} \end{equation*}

and \(B = AC\) (see Figure 15.9). Thus, a reflection about a line \(\ell\) is simply a reflection about the horizontal axis followed by a rotation.
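These facts about \(B\) can be verified directly (a sketch assuming NumPy, with an arbitrary angle of our own choosing):

import numpy as np

theta = np.pi / 3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation by theta
C = np.array([[1.0, 0.0], [0.0, -1.0]])           # reflection about the x-axis
B = A @ C                                         # a reflection about a line

print(np.allclose(np.linalg.det(B), -1.0))   # True
print(np.allclose(B @ B, np.eye(2)))         # True: B is its own inverse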

Two of the other matrix or matrix-related groups that we will consider are the special orthogonal group and the group of Euclidean motions. The special orthogonal group, \(SO(n)\text{,}\) is just the intersection of \(O(n)\) and \(SL_n({\mathbb R})\text{;}\) that is, those elements in \(O(n)\) with determinant one. The Euclidean group, \(E(n)\text{,}\) can be written as ordered pairs \((A, {\mathbf x})\text{,}\) where \(A\) is in \(O(n)\) and \({\mathbf x}\) is in \({\mathbb R}^n\text{.}\) We define multiplication by

\begin{equation*} (A, {\mathbf x}) (B, {\mathbf y}) = (AB, A {\mathbf y} +{\mathbf x})\text{.} \end{equation*}

The identity of the group is \((I,{\mathbf 0})\text{;}\) the inverse of \((A, {\mathbf x})\) is \((A^{-1}, -A^{-1} {\mathbf x})\text{.}\) In Exercise 15.4.6, you are asked to check that \(E(n)\) is indeed a group under this operation.
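The group operation translates directly into code. A minimal sketch (assuming NumPy; the helper names e_mul and e_inv are ours) that represents an element of \(E(2)\) as a pair and checks the inverse formula:

import numpy as np

def e_mul(g, h):
    # (A, x)(B, y) = (AB, Ay + x)
    (A, x), (B, y) = g, h
    return (A @ B, A @ y + x)

def e_inv(g):
    # the inverse of (A, x) is (A^{-1}, -A^{-1} x); here A^{-1} = A^T
    A, x = g
    return (A.T, -(A.T @ x))

theta = np.pi / 4
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # an element of O(2)
g = (A, np.array([1.0, 2.0]))
M, v = e_mul(g, e_inv(g))
print(np.allclose(M, np.eye(2)), np.allclose(v, 0))   # True True: the identity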

Two side-by-side figures. The figure on the left shows a vector from the origin to a point \({\mathbf x}\text{.}\) The figure on the right shows an arrow of the same length and direction, translated away from the origin so that it ends at \({\mathbf x} + {\mathbf y}\text{.}\)
Figure 15.11. Translations in \(\mathbb R^2\)