Exercise Set 8 Answers

  1. Write down the \(2\times 2\) matrix \(R_\pi\) that rotates a vector anticlockwise by \(\pi\). Apply this to a vector \(\mathbf{v}=\left(\begin{smallmatrix}v_1\\v_2\end{smallmatrix}\right)\).

    Answers: The matrix \(R_\theta\) for anticlockwise rotation by angle \(\theta\) is \[ R_\theta = \begin{pmatrix}\cos \theta & - \sin \theta \\ \sin\theta & \cos\theta\end{pmatrix}\] So the matrix that rotates a vector anticlockwise by \(\pi\) is \[R_\pi = \begin{pmatrix}-1 & 0 \\ 0 & -1\end{pmatrix}\] Applying this to a vector: \[\begin{pmatrix}-1 & 0 \\ 0 & -1\end{pmatrix}\begin{pmatrix}v_1\\v_2\end{pmatrix} = \begin{pmatrix}-v_1\\-v_2\end{pmatrix}\]
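
    As a quick numerical check (using NumPy; the helper name `rotation` is ours), the matrix and its action on a vector can be confirmed directly:

    ```python
    import numpy as np

    def rotation(theta):
        """Matrix for anticlockwise rotation by theta (our own helper)."""
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    R_pi = rotation(np.pi)
    v = np.array([3.0, 4.0])   # an arbitrary test vector
    print(np.round(R_pi))      # [[-1. -0.] [ 0. -1.]] up to rounding
    print(R_pi @ v)            # approximately (-3, -4) = -v
    ```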

  2. Write down the \(2\times 2\) matrix \(M_x\) that reflects a vector in the \(x\)-axis. Similarly write down the \(2\times 2\) matrix \(M_y\) that reflects a vector in the \(y\)-axis. Multiply these two matrices to find the transformation that first reflects in the \(x\)-axis and then reflects in the \(y\)-axis. Compare this to the matrix \(R_\pi\).

    Answers:

    \[M_x = \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}\] \[M_y = \begin{pmatrix} -1 & 0 \\0 & 1\end{pmatrix}\] \[M_xM_y = \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}\begin{pmatrix} -1 & 0 \\0 & 1\end{pmatrix} = \begin{pmatrix}-1 & 0 \\ 0 & -1\end{pmatrix} = R_\pi\] Note that reflecting first in the \(x\)-axis and then in the \(y\)-axis corresponds to the product \(M_yM_x\); since both matrices are diagonal they commute, so \(M_yM_x = M_xM_y\) and the order of composition does not matter.
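
    A minimal NumPy check that the two reflections compose, in either order, to \(R_\pi\):

    ```python
    import numpy as np

    M_x = np.array([[1, 0], [0, -1]])    # reflection in the x-axis
    M_y = np.array([[-1, 0], [0, 1]])    # reflection in the y-axis
    R_pi = np.array([[-1, 0], [0, -1]])  # rotation by pi

    print(np.array_equal(M_y @ M_x, R_pi))  # True
    print(np.array_equal(M_x @ M_y, R_pi))  # True (the order does not matter)
    ```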

  3. Note that \(R^2_\theta=R_{2\theta}\) (why?). Use this to find the “double angle identity” for \(\cos\) and \(\sin\). Can you find other trigonometric identities using \(R_\theta\)?

    Answers:

    \(R^2_\theta=R_\theta R_\theta\), that is, \(R_\theta\) applied twice; rotating by \(\theta\) twice is the same as rotating by \(2\theta\).

    \[R_{2\theta} = R_\theta^2 = R_\theta R_\theta\\ \begin{pmatrix}\cos (2\theta) & -\sin(2\theta) \\ \sin(2\theta) & \cos(2\theta)\end{pmatrix} = \begin{pmatrix}\cos \theta & - \sin \theta \\ \sin\theta & \cos\theta\end{pmatrix}\begin{pmatrix}\cos \theta & - \sin \theta \\ \sin\theta & \cos\theta\end{pmatrix}\\ \begin{pmatrix}\cos (2\theta) & -\sin(2\theta) \\ \sin(2\theta) & \cos(2\theta)\end{pmatrix} = \begin{pmatrix} \cos^2(\theta) - \sin^2(\theta) & -2\cos(\theta)\sin(\theta) \\ 2\cos(\theta)\sin(\theta) & \cos^2(\theta) - \sin^2(\theta)\end{pmatrix}\] Comparing entries gives the double angle formulas \(\cos(2\theta) = \cos^2(\theta) - \sin^2(\theta)\) and \(\sin(2\theta) = 2\sin(\theta)\cos(\theta)\).

    The general sum and difference identities for \(\sin\) and \(\cos\) can be derived by noting that \(R_{a+b} = R_a R_b\) and \(R_{a-b} = R_a R_{-b}\).

    Similarly, any identity for positive integer multiples \(n\theta\) can be derived from \(R_{n\theta}=R^n_{\theta}\).
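
    These identities are easy to verify symbolically; a minimal SymPy sketch (assuming SymPy is available) checking \(R_\theta^2 = R_{2\theta}\):

    ```python
    import sympy as sp

    theta = sp.symbols('theta', real=True)
    R = sp.Matrix([[sp.cos(theta), -sp.sin(theta)],
                   [sp.sin(theta),  sp.cos(theta)]])

    # R**2 has entries cos^2 - sin^2 and 2*sin*cos; simplifying the
    # difference against R evaluated at 2*theta gives the zero matrix.
    print(sp.simplify(R**2 - R.subs(theta, 2*theta)))  # Matrix([[0, 0], [0, 0]])
    ```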

  4. For each of the following pairs of matrices \(A\) and \(B\), find (when possible), \(A+B\), \(A-B\), \(A^2\), \(B^2\), \(AB\), \(BA\).

    1. \(A = \begin{pmatrix} 2 & 3 \\ 4 & 1 \end{pmatrix}\qquad B = \begin{pmatrix} -1 & 2 \\ -2 & 0 \end{pmatrix}\)
    2. \(A = \begin{pmatrix} 4 & -5 \\ 6 & 1 \\ 0 & 1 \end{pmatrix}\qquad B = \begin{pmatrix} 5 & 2 & -3 \\ 1 & 3 & -1 \\ 2 & 2 & -1 \end{pmatrix}\)
    3. \(A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 1\\ 1 & 0 & 1 \end{pmatrix} \qquad B = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 0 \end{pmatrix}\)
    4. \(A = \begin{pmatrix} -1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1\\ 1 & 0 & 1 & 0\\ 1 & 1 & 1 & -1 \end{pmatrix} \qquad B = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & -1 & 1 \end{pmatrix}\).

    Answers:

    1. We have \[\begin{align*} &A + B = \begin{pmatrix} 1 & 5 \\ 2 & 1 \end{pmatrix}, \qquad A - B = \begin{pmatrix} 3 & 1 \\ 6 & 1 \end{pmatrix}, \\ &A^2 = \begin{pmatrix} 16 & 9 \\ 12 & 13 \end{pmatrix}, \qquad B^2 = \begin{pmatrix} -3 & -2 \\ 2 & -4 \end{pmatrix},\\ &AB = \begin{pmatrix} -8 & 4 \\ -6 & 8\end{pmatrix}, \qquad BA = \begin{pmatrix} 6 & -1 \\ -4 & -6 \end{pmatrix}. \end{align*}\]
    2. Because \(A\) is a \(3\times 2\) and \(B\) is a \(3\times 3\) matrix, \(A + B\), \(A - B\), \(A^2\) and \(AB\) are not defined. We have \[\begin{align*} B^2 = \begin{pmatrix} 21 & 10 & -14 \\ 6 & 9 & -5 \\ 10 & 8 & -7 \end{pmatrix}, \qquad BA = \begin{pmatrix} 32 & -26 \\ 22 & -3 \\ 20 & -9 \end{pmatrix}. \end{align*}\]
    3. We have \[\begin{align*} &A + B = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & 1 \\ 2 & 1 & 1 \end{pmatrix}, \qquad A - B = \begin{pmatrix} 1 & 1 & -1 \\ -1 & 0 & 1 \\ 0 & -1 & 1 \end{pmatrix}, \\ &A^2 = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & 1 \\ 2 & 1 & 1 \end{pmatrix}, \qquad B^2 = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 1 \end{pmatrix}, \\ &AB = \begin{pmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}, \qquad BA = \begin{pmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}. \end{align*}\]
    4. Because \(A\) is a \(4\times 4\) and \(B\) is a \(4\times 3\) matrix, \(A + B\), \(A - B\), \(B^2\), and \(BA\) are not defined. We have \[\begin{align*} A^2 = \begin{pmatrix} 3 & 0 & 2 & -1\\ 1 & 2 & 2 & 0\\ 0 & 1 & 1 & 1\\ 0 & 0 & 1 & 3 \end{pmatrix}, \qquad AB = \begin{pmatrix} 2 & -1 & 0\\ 2 & 0 & 2\\ 1 & 1 & 1\\ 1 & 2 & 0 \end{pmatrix}. \end{align*}\]
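
    All of the defined sums and products above can be confirmed with NumPy; a sketch for part a. (the other parts follow the same pattern, and NumPy raises an error whenever a sum or product is not defined):

    ```python
    import numpy as np

    A = np.array([[2, 3], [4, 1]])
    B = np.array([[-1, 2], [-2, 0]])

    print(A + B, A - B)   # [[1 5] [2 1]] and [[3 1] [6 1]]
    print(A @ A, B @ B)   # [[16 9] [12 13]] and [[-3 -2] [2 -4]]
    print(A @ B, B @ A)   # [[-8 4] [-6 8]] and [[6 -1] [-4 -6]]
    ```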
  5. Find two \(3 \times 3\) matrices \(A\) and \(B\) such that \(AB=BA\). Now find two \(3 \times 3\) matrices such that \(AB\neq BA\).

    Answers:

    Two diagonal matrices will always commute: \[ \begin{pmatrix}a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c\end{pmatrix}\begin{pmatrix}d & 0 & 0 \\ 0 & e & 0 \\ 0 & 0 & f\end{pmatrix} = \begin{pmatrix}d & 0 & 0 \\ 0 & e & 0 \\ 0 & 0 & f\end{pmatrix} \begin{pmatrix}a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c\end{pmatrix} = \begin{pmatrix}ad & 0 & 0 \\ 0 & be & 0 \\ 0 & 0 &cf\end{pmatrix}\] but there are also non-diagonal matrices that commute.

    Two matrices that do not commute (taking \(a, b \neq 0\)) are \[A = \begin{pmatrix}0 & 0 & a \\ 0 & 0 & 0 \\ 0 & 0 & 0\end{pmatrix}, \quad B = \begin{pmatrix}0 & 0 & 0 \\0 & 0 & 0\\b & 0 & 0\end{pmatrix}\] \[AB = \begin{pmatrix}ab & 0 & 0\\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \neq BA = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\0 & 0 & ab\end{pmatrix}\]
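
    A quick numerical spot-check of the non-commuting pair, taking \(a = b = 1\) for concreteness:

    ```python
    import numpy as np

    A = np.zeros((3, 3)); A[0, 2] = 1   # a = 1 in the top-right corner
    B = np.zeros((3, 3)); B[2, 0] = 1   # b = 1 in the bottom-left corner

    print(A @ B)   # ab = 1 in the (1, 1) entry, zeros elsewhere
    print(B @ A)   # ab = 1 in the (3, 3) entry, zeros elsewhere
    print(np.array_equal(A @ B, B @ A))  # False
    ```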

  6. For any two numbers \(a\) and \(b\), if \(ab=0\) then at least one of \(a\) or \(b\) must be \(0\). Does an analogous result hold for matrices? That is, if \(AB=0_{n\times m}\) must at least one of the matrices \(A\) or \(B\) be the zero matrix?

    Answers:

    An analogous result does not hold for matrices: neither \(A\) nor \(B\) need be zero when \(AB=0_{n\times m}\). This occurs when the columns of \(B\) are in the null space of \(A\), that is to say, they are mapped to zero by \(A\). For instance: \[\begin{pmatrix} a & b & 0 \\ c & d & 0 \\ e & f & 0\end{pmatrix}\begin{pmatrix}0 & 0 \\ 0 & 0 \\g & h\end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}\]
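
    A concrete numerical instance of the example above, with specific values filled in for the free entries:

    ```python
    import numpy as np

    # Every column of B is a multiple of (0, 0, 1)^T, which A maps to zero
    # because the third column of A is zero.
    A = np.array([[1, 2, 0],
                  [3, 4, 0],
                  [5, 6, 0]])
    B = np.array([[0, 0],
                  [0, 0],
                  [7, 8]])

    print(A @ B)  # the 3x2 zero matrix, although A != 0 and B != 0
    ```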

  7. Determine whether the following matrices are invertible or singular by computing their determinants. If they are invertible, find the inverse.

    1. \(\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}\)
    2. \(\begin{pmatrix} 6 & 3 \\ -4 & -2 \end{pmatrix}\)
    3. \(\begin{pmatrix} 4 & -28 & 48 \\ -27 & 162 & -216 \\ 32 & -160 & 192 \end{pmatrix}\)
    4. \(\begin{pmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta)\end{pmatrix}\)
    5. \(\begin{pmatrix} 1 & 3 & -5 \\ -2 & 1 & 4 \\ 1 & 2 & -4 \end{pmatrix}\)
    6. \(\begin{pmatrix} 1 & -1 & 4 \\ 2 & 3 & 3 \\ 3 & 1 & 8 \end{pmatrix}\)

    Answers:

    1. We have \(\det(A) = 2\times 1 - 1\times 1 = 1\), so using the formula for the inverse of a \(2\times 2\) matrix: \[ A^{-1} = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}. \] Alternatively, using the Gaussian elimination method \[ \left(\begin{array}{cc|cc} 2 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \end{array}\right), \] we perform the EROs \(R_1 \to R_1 - R_2\) and \(R_2 \to R_2 - R_1\) to obtain \[ \left(\begin{array}{cc|cc} 1 & 0 & 1 & -1 \\ 0 & 1 & -1 & 2 \end{array}\right), \] as required.

    2. We have \(\det(A) = 0\), so the matrix is singular. Alternatively, using the Gaussian elimination method \[ \left(\begin{array}{cc|cc} 6 & 3 & 1 & 0 \\ -4 & -2 & 0 & 1 \end{array}\right), \] we perform the EROs \(R_1 \to \frac{1}{6}R_1\) and then \(R_2 \to R_2 + 4R_1\) to obtain \[ \left(\begin{array}{cc|cc} 1 & \frac{1}{2} & \frac{1}{6} & 0 \\ 0 & 0 & \frac{2}{3} & 1 \end{array}\right), \] which has a leading entry in the solution vectors; this shows that the system of equations is inconsistent, and therefore that \(A\) is singular.

    3. We have the augmented matrix \[ \left(\begin{array}{ccc|ccc} 4 & -28 & 48 & 1 & 0 & 0\\ -27 & 162 & -216 & 0 & 1 & 0\\ 32 & -160 & 192 & 0 & 0 & 1 \end{array}\right). \] Performing the EROs \(R_1 \to 27\times 8\times R_1\), \(R_2 \to -32 R_2\) and \(R_3 \to 27 R_3\) gives \[ \left(\begin{array}{ccc|ccc} 864 & -6048 & 10368 & 216 & 0 & 0\\ 864 & -5184 & 6912 & 0 & -32 & 0\\ 864 & -4320 & 5184 & 0 & 0 & 27 \end{array}\right). \] Then, performing \(R_2 \to R_2 - R_1\) and \(R_3 \to R_3 - R_1\) gives \[ \left(\begin{array}{ccc|ccc} 864 & -6048 & 10368 & 216 & 0 & 0\\ 0 & 864 & -3456 & -216 & -32 & 0\\ 0 & 1728 & -5184 & -216 & 0 & 27 \end{array}\right). \] Then, performing \(R_3 \to R_3 - 2R_2\) and \(R_1 \to R_1 + 7R_2\) gives \[ \left(\begin{array}{ccc|ccc} 864 & 0 & -13824 & -1296 & -224 & 0\\ 0 & 864 & -3456 & -216 & -32 & 0\\ 0 & 0 & 1728 & 216 & 64 & 27 \end{array}\right). \] Then, performing \(R_1 \to R_1 + 8R_3\) and \(R_2 \to R_2 + 2R_3\) gives \[ \left(\begin{array}{ccc|ccc} 864 & 0 & 0 & 432 & 288 & 216\\ 0 & 864 & 0 & 216 & 96 & 54\\ 0 & 0 & 1728 & 216 & 64 & 27 \end{array}\right). \] Finally, performing \(R_1 \to \frac{1}{864} R_1\), \(R_2 \to \frac{1}{864} R_2\) and \(R_3 \to \frac{1}{1728} R_3\) gives \[ \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & \frac{1}{2} & \frac{1}{3} & \frac{1}{4}\\ 0 & 1 & 0 & \frac{1}{4} & \frac{1}{9} & \frac{1}{16}\\ 0 & 0 & 1 & \frac{1}{8} & \frac{1}{27} & \frac{1}{64} \end{array}\right) \] and we read off \[ A^{-1} = \begin{pmatrix} \frac{1}{2} & \frac{1}{3} & \frac{1}{4}\\ \frac{1}{4} & \frac{1}{9} & \frac{1}{16}\\ \frac{1}{8} & \frac{1}{27} & \frac{1}{64} \end{pmatrix}. \]

    4. We have \(\det(A) = \cos^2 \theta + \sin^2 \theta = 1\), so \[ A^{-1} = \begin{pmatrix} \cos(\theta) & \sin(\theta) \\ -\sin(\theta) & \cos(\theta) \end{pmatrix}. \] Alternatively, using the Gaussian elimination method \[ \left(\begin{array}{cc|cc} \cos(\theta) & -\sin(\theta) & 1 & 0 \\ \sin(\theta) & \cos(\theta) & 0 & 1 \end{array}\right), \] we perform the EROs \(R_1 \to \cos(\theta)R_1\), \(R_2 \to \sin(\theta)R_2\) (assuming \(\sin(\theta)\cos(\theta) \neq 0\); the excluded cases, where \(\theta\) is a multiple of \(\frac{\pi}{2}\), can be checked directly) to obtain \[ \left(\begin{array}{cc|cc} \cos^2(\theta) & -\cos(\theta)\sin(\theta) & \cos(\theta) & 0 \\ \sin^2(\theta) & \cos(\theta)\sin(\theta) & 0 & \sin(\theta) \end{array}\right), \] and then \(R_1 \to R_1 + R_2\) to obtain \[ \left(\begin{array}{cc|cc} 1 & 0 & \cos(\theta) & \sin(\theta) \\ \sin^2(\theta) & \cos(\theta)\sin(\theta) & 0 & \sin(\theta) \end{array}\right), \] and then \(R_2 \to R_2 - \sin^2(\theta)R_1\) and \(R_2 \to \frac{1}{\sin(\theta)\cos(\theta)}R_2\) to arrive at \[ \left(\begin{array}{cc|cc} 1 & 0 & \cos(\theta) & \sin(\theta)\\ 0 & 1 & -\sin(\theta) & \frac{\sin(\theta)(1 - \sin^2(\theta))}{\sin(\theta)\cos(\theta)} \end{array}\right), \] which, by substituting \(1 - \sin^2(\theta) = \cos^2(\theta)\) into the second entry of the fourth column, is equal to \[ \left(\begin{array}{cc|cc} 1 & 0 & \cos(\theta) & \sin(\theta)\\ 0 & 1 & -\sin(\theta) & \cos(\theta) \end{array}\right), \] as required.

      Note that \(A\) is the rotation anticlockwise by an angle \(\theta\) and \(A^{-1}\) is simply rotation clockwise by the angle \(\theta\), i.e. the matrix \(R_{-\theta}\) where one uses the facts that \(\cos(-\theta)=\cos(\theta)\) and \(\sin(-\theta)=-\sin(\theta)\).

    5. We have the augmented matrix \[ \left(\begin{array}{ccc|ccc} 1 & 3 & -5 & 1 & 0 & 0 \\ -2 & 1 & 4 & 0 & 1 & 0 \\ 1 & 2 & -4 & 0 & 0 & 1 \end{array}\right). \] We perform EROs \(R_2 \to R_2 + 2R_1\) and \(R_3 \to R_3 - R_1\) to obtain \[ \left(\begin{array}{ccc|ccc} 1 & 3 & -5 & 1 & 0 & 0 \\ 0 & 7 & -6 & 2 & 1 & 0 \\ 0 & -1 & 1 & -1 & 0 & 1 \end{array}\right) \] Swap the second and third rows, i.e. perform ERO \(R_2 \leftrightarrow R_3\), and then perform \(R_2 \to -R_2\) to have \[ \left(\begin{array}{ccc|ccc} 1 & 3 & -5 & 1 & 0 & 0 \\ 0 & 1 & -1 & 1 & 0 & -1 \\ 0 & 7 & -6 & 2 & 1 & 0 \end{array}\right). \] Then, perform \(R_3 \to R_3 - 7R_2\) and \(R_1 \to R_1 - 3R_2\) to obtain \[ \left(\begin{array}{ccc|ccc} 1 & 0 & -2 & -2 & 0 & 3 \\ 0 & 1 & -1 & 1 & 0 & -1 \\ 0 & 0 & 1 & -5 & 1 & 7 \end{array}\right). \] Then, perform \(R_1 \to R_1 + 2R_3\) and \(R_2 \to R_2 + R_3\) to obtain \[ \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & -12 & 2 & 17 \\ 0 & 1 & 0 & -4 & 1 & 6 \\ 0 & 0 & 1 & -5 & 1 & 7 \end{array}\right). \] So \[ A^{-1} = \begin{pmatrix} -12 & 2 & 17 \\ -4 & 1 & 6\\ -5 & 1 & 7 \end{pmatrix}. \]

    6. We have the augmented matrix \[ \left(\begin{array}{ccc|ccc} 1 & -1 & 4 & 1 & 0 & 0 \\ 2 & 3 & 3 & 0 & 1 & 0 \\ 3 & 1 & 8 & 0 & 0 & 1 \end{array}\right). \] We perform EROs \(R_2 \to R_2 - 2R_1\) and \(R_3 \to R_3 - 3R_1\) to obtain \[ \left(\begin{array}{ccc|ccc} 1 & -1 & 4 & 1 & 0 & 0 \\ 0 & 5 & -5 & -2 & 1 & 0 \\ 0 & 4 & -4 & -3 & 0 & 1 \end{array}\right), \] Then, we perform \(R_3 \to R_3 - \frac{4}{5}R_2\) to obtain \[ \left(\begin{array}{ccc|ccc} 1 & -1 & 4 & 1 & 0 & 0 \\ 0 & 5 & -5 & -2 & 1 & 0 \\ 0 & 0 & 0 & -\frac{7}{5} & -\frac{4}{5} & 1 \end{array}\right), \] which is in echelon form with a leading entry in the solution vectors; this shows that the system of equations is inconsistent, and therefore that the matrix is singular.
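
    The determinants and inverses above are easily confirmed with NumPy; a sketch for parts a., c. and f. (the variable names are ours):

    ```python
    import numpy as np

    A1 = np.array([[2, 1], [1, 1]])
    A3 = np.array([[4, -28, 48], [-27, 162, -216], [32, -160, 192]])
    A6 = np.array([[1, -1, 4], [2, 3, 3], [3, 1, 8]])

    print(np.linalg.det(A1))   # 1.0 (up to rounding), so invertible
    print(np.linalg.inv(A1))   # [[ 1. -1.] [-1.  2.]]
    print(np.linalg.inv(A3))   # rows (1/2, 1/3, 1/4), (1/4, 1/9, 1/16), ...
    print(np.linalg.det(A6))   # 0.0 (up to rounding), so singular
    ```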

  8. Find the eigenvalues and eigenvectors of each of the following matrices \(A\). Determine whether the matrix is diagonalisable and, if so, find the matrices \(D\) and \(P\) in the diagonalisation \(D=P^{-1}AP\).

    1. \(\begin{pmatrix} 1 & 0 \\ 2 & 2 \end{pmatrix}\)
    2. \(\begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}\)
    3. \(\begin{pmatrix} 1 & 2 \\ 2 & -2 \end{pmatrix}\)
    4. \(\begin{pmatrix} 1 & -2 & -1 \\ 2 & 6 & 2 \\ -1 & -2 & 1 \end{pmatrix}\)
    5. \(\begin{pmatrix} -2 & 1 & 1 \\ -11 & 4 & 5 \\ -1 & 1 & 0 \end{pmatrix}\)
    6. \(\begin{pmatrix} 2 & \sqrt 2 & 0 \\ \sqrt 2 & 2 & \sqrt 2 \\ 0 & \sqrt 2 & 2 \end{pmatrix}\)
    7. \(\begin{pmatrix} 1 & -1 & -1 \\ 1 & -1 & 0 \\ 1 & 0 & -1 \end{pmatrix}\)
    8. \(\begin{pmatrix} 5 & 5 & 1 \\ -2 & -1 & 0 \\ 1 & 1 & 1 \end{pmatrix}\).

    Answers:

    In the following, we always let \(B_{\lambda} = \lambda I - A\) and write \(p_{A}(\lambda) = \det(\lambda I - A)\) for the characteristic polynomial.

    1. The characteristic polynomial \(p_{A}(\lambda)\) of \(A\) is defined by \[ p_{A}(\lambda) = \begin{vmatrix} \lambda - 1 & 0 \\ -2 & \lambda - 2 \end{vmatrix} = (\lambda-1)(\lambda-2). \] Thus, the eigenvalues are \(\lambda_1 = 1\) and \(\lambda_2 = 2\). For \(\lambda_1 = 1\), substitute \(\lambda = \lambda_1 = 1\) into \(B_{\lambda}\) and solve \(B_{1}\mathbf{x} = \mathbf{0}\), that is \[ \begin{pmatrix} 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ -2 & -1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ -2x -y \end{pmatrix}. \] Thus, \(y = -2x\) (with \(x\) arbitrary) and the set of eigenvectors corresponding to the eigenvalue \(\lambda_1 = 1\) is given by \[ \begin{pmatrix} \alpha \\ -2\alpha \end{pmatrix} \colon \alpha \neq 0. \] For \(\lambda_2 = 2\), substitute \(\lambda = \lambda_2 = 2\) into \(B_{\lambda}\) and solve \(B_{2}\mathbf{x} = \mathbf{0}\), that is \[ \begin{pmatrix} 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -2 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ -2x \end{pmatrix}, \] so the solution is \(x = 0\), with \(y\) arbitrary. The set of eigenvectors corresponding to the eigenvalue \(\lambda_2 = 2\) is \[ \begin{pmatrix} 0 \\ \beta \end{pmatrix} \colon \beta \neq 0 \] Taking \(\alpha = \beta = 1\), we have the eigenvectors \[ \begin{pmatrix} 1 \\ -2 \end{pmatrix} \qquad\text{and} \qquad \begin{pmatrix} 0 \\ 1 \end{pmatrix} \] corresponding to the eigenvalues \(\lambda_1 = 1\) and \(\lambda_2 = 2\), respectively. Let \[ P = \begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix}, \] the matrix whose columns are the eigenvectors listed above. Then we have \[ D = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}. \]

    2. We have \[ p_{A}(\lambda) = \begin{vmatrix} \lambda - 1 & -2 \\ 0 & \lambda - 1 \end{vmatrix} = (\lambda - 1)^2. \] so the only eigenvalue is \(\lambda = \lambda_{1,2} = 1\). To find the eigenvectors, we solve \[ \begin{pmatrix} 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 & -2 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -2y \\ 0 \end{pmatrix}. \] Thus, we have \(y = 0\) and the eigenvectors are given by the set \[ \begin{pmatrix} \alpha \\ 0 \end{pmatrix} \colon \alpha \neq 0. \] Since there is a repeated eigenvalue \(\lambda_{1,2} = 1\) and there does not exist a set of two linearly independent eigenvectors for \(\lambda_{1,2}\), \(A\) is not diagonalisable.

    3. We have \[\begin{align*} p_{A}(\lambda) =\det(\lambda I - A) &= \begin{vmatrix} \lambda-1 & -2 \\ -2 & \lambda+2 \end{vmatrix} \\ &= (\lambda-1)(\lambda+2)-4 \\ &= \lambda^2+\lambda-6 \\ &= (\lambda+3)(\lambda-2), \end{align*}\] hence, the eigenvalues are \(\lambda_1 = -3\) and \(\lambda_2 = 2\). For \(\lambda_1 = -3\), substitute \(\lambda = \lambda_1 = -3\) into \(B_{\lambda}\) and solve \(B_{-3}\mathbf{x} = \mathbf{0}\), that is \[ \begin{pmatrix} 0 \\ 0 \end{pmatrix} = \begin{pmatrix} -4 & -2 \\ -2 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}. \] We note that the top row is twice the bottom row, so the solutions satisfy \(y = -2x\). We take \(x = \alpha\) to be arbitrary, whence the set of eigenvectors corresponding to the eigenvalue \(\lambda_1 = -3\) is \[ \begin{pmatrix} \alpha \\ -2\alpha \end{pmatrix} \colon \alpha \neq 0 \] For \(\lambda_2 = 2\), substitute \(\lambda = \lambda_2 = 2\) into \(B_{\lambda}\) and solve \(B_{2}\mathbf{x} = \mathbf{0}\), that is \[ \begin{pmatrix} 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 & -2 \\ -2 & 4 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}. \] Again, we note that the bottom row is \(-2\) times the top row, thus the solutions satisfy \(x = 2y\). We take \(y = \beta\) to be arbitrary, whence the set of eigenvectors corresponding to the eigenvalue \(\lambda_2 = 2\) is \[ \begin{pmatrix} 2\beta\\ \beta \end{pmatrix} \colon \beta \neq 0. \] Taking \(\alpha = \beta = 1\), we have the eigenvectors \[ \begin{pmatrix} 1 \\ -2 \end{pmatrix} \qquad\text{and}\qquad \begin{pmatrix} 2 \\ 1 \end{pmatrix} \] corresponding to the eigenvalues \(\lambda_1 = -3\) and \(\lambda_2 = 2\), respectively. Let \[ P = \begin{pmatrix} 1 & 2 \\ -2 & 1 \end{pmatrix}, \] then we have \[ D= \begin{pmatrix} -3 & 0 \\ 0 & 2 \end{pmatrix}. \]

    4. We have \[\begin{align*} p_{A}(\lambda) = \det(\lambda I - A) &= \begin{vmatrix} \lambda-1 & 2 & 1 \\ -2 & \lambda - 6 & -2 \\ 1 & 2 & \lambda -1 \end{vmatrix} \\ &= (\lambda-2)^2(\lambda-4) \end{align*}\] hence, the eigenvalues are \(\lambda_{1,2} = 2\) and \(\lambda_3 = 4\) (note that there are only two distinct eigenvalues). For \(\lambda_{1,2} = 2\), substitute \(\lambda = \lambda_{1,2} = 2\) into \(B_{\lambda}\) and solve \(B_{2}\mathbf{x} = \mathbf{0}\), that is \[ \begin{pmatrix} 0 \\ 0\\ 0 \end{pmatrix} = \begin{pmatrix} 1 & 2 & 1 \\ -2 & -4 & -2 \\ 1 & 2 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}. \] Here the second and third rows are multiples of the first row, so the general solution to the above system of equations is \(x + 2y + z = 0\). We can take \(y = \alpha\) and \(z = \beta\) to be arbitrary, and get the set of corresponding eigenvectors \[ \begin{pmatrix} -2\alpha-\beta \\ \alpha \\ \beta \end{pmatrix} \colon \text{$\alpha, \beta$ not both equal to zero.} \] In particular, note that \[ \mathbf{v}_1 = \begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix} \qquad\text{and}\qquad \mathbf{v}_2 = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} \] are two linearly independent eigenvectors corresponding to the eigenvalue \(\lambda_{1,2} = 2\), where we take \((\alpha,\beta) = (1,0)\) to get \(\mathbf{v}_1\) and \((\alpha,\beta) = (0,1)\) to get \(\mathbf{v}_2\). For \(\lambda_3 = 4\), substitute \(\lambda = \lambda_3 = 4\) into \(B_{\lambda}\) and solve \(B_{4}\mathbf{x} = \mathbf{0}\), that is \[ \begin{pmatrix} 0 \\ 0 \\ 0\end{pmatrix} = \begin{pmatrix} 3 & 2 & 1 \\ -2 & -2 & -2 \\ 1 & 2 & 3\end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}. \] Applying EROs \(R_1 \to R_1 - 3R_3\) and \(R_2 \to R_2 + 2R_3\) gives \[ \begin{pmatrix} 0 & -4 & -8 \\ 0 & 2 & 4 \\ 1 & 2 & 3 \end{pmatrix}. \] Then, applying \(R_1 \leftrightarrow R_3\), \(R_2 \to \frac{1}{2}R_2\) and \(R_3 \to -\frac{1}{4}R_3\) gives \[ \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 1 & 2 \end{pmatrix}, \] and performing \(R_3 \to R_3 - R_2\) and \(R_1 \to R_1 - 2R_2\), we arrive at the reduced echelon form \[ \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix}. \] There is no leading entry in the third column of the matrix of coefficients, so we may take \(z = \gamma\) to be arbitrary and hence read off the set of eigenvectors corresponding to the eigenvalue \(\lambda_3 = 4\) as \[ \begin{pmatrix} \gamma \\ -2 \gamma \\ \gamma \end{pmatrix} \colon \gamma \neq 0. \] In particular, taking \(\gamma = 1\), we get the eigenvector \[ \mathbf{v}_3 = \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}. \] We define the matrix \(P\) by \[ P = (\mathbf{v}_1 \ \mathbf{v}_2\ \mathbf{v}_3) = \begin{pmatrix} -2 & -1 & 1\\ 1 & 0 & -2 \\ 0 & 1 & 1 \end{pmatrix}, \] then \[ D = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 4 \end{pmatrix}. \] Note: In this example, we are able to diagonalise the \(3 \times 3\) matrix \(A\) even though it only had two distinct eigenvalues. This is because we can find two linearly independent eigenvectors corresponding to one of the eigenvalues.

    5. We have \[\begin{align*} p_{A}(\lambda) = \det(\lambda I - A) &= \begin{vmatrix} \lambda+2 & -1 & -1 \\ 11 & \lambda -4 & -5 \\ 1 & -1 & \lambda \end{vmatrix}\\ &= (\lambda+1)(\lambda-1)(\lambda-2), \end{align*}\] hence the eigenvalues of \(A\) are \(\lambda_1 = -1\), \(\lambda_2 = 1\), and \(\lambda_3 = 2\). For \(\lambda_1 = -1\), we substitute \(\lambda = \lambda_1 = -1\) into the matrix \(B_{\lambda}\) and solve \(B_{-1}\mathbf{x} = \mathbf{0}\). The augmented matrix is \[ \begin{pmatrix} 1 & -1 & -1 \\ 11 & -5 & -5 \\ 1 & -1 & -1 \end{pmatrix}. \] We perform EROs as follows: \[\begin{align*} \text{$R_2 \to R_2 - 11R_1$ and $R_3 \to R_3 - R_1$} \colon\quad & \begin{pmatrix} 1 & -1 & -1\\ 0 & 6 & 6 \\ 0 & 0 & 0 \end{pmatrix}; \\ \text{$R_2 \to \frac{1}{6}R_2$} \colon\quad & \begin{pmatrix} 1 & -1 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}; \\ \text{$R_1 \to R_1 + R_2$} \colon\quad & \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}. \end{align*}\] There is no leading entry in the last column so we take \(z = \alpha\) to be arbitrary. The set of eigenvectors corresponding to the eigenvalue \(\lambda_1 = -1\) is thus given by \[ \alpha \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} \colon \alpha \neq 0. \] For \(\lambda_2 = 1\), we substitute \(\lambda = \lambda_2 = 1\) into \(B_{\lambda}\) and solve \(B_{1}\mathbf{x} = \mathbf{0}\). We have the augmented matrix \[ \begin{pmatrix} 3 & -1 & -1 \\ 11 & -3 & -5 \\ 1 & -1 & 1 \end{pmatrix}, \] which we bring into reduced echelon form as follows: \[\begin{align*} \text{$R_1 \leftrightarrow R_3$} \colon\quad & \begin{pmatrix} 1 & -1 & 1 \\ 11 & -3 & -5 \\ 3 & -1 & -1 \end{pmatrix}; \\ \text{$R_2 \to R_2 - 11R_1$ and $R_3 \to R_3 - 3R_1$} \colon\quad & \begin{pmatrix} 1 & -1 & 1 \\ 0 & 8 & -16 \\ 0 & 2 & -4 \end{pmatrix}; \\ \text{$R_2 \to \frac{1}{8}R_2$ and $R_3 \to R_3 - 2R_2$} \colon\quad & \begin{pmatrix} 1 & -1 & 1 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{pmatrix}; \\ \text{$R_1 \to R_1 + R_2$} \colon\quad & \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{pmatrix}. \end{align*}\] There is no leading entry in the last column so we take \(z = \beta\) to be arbitrary. The set of eigenvectors corresponding to the eigenvalue \(\lambda_2 = 1\) is thus given by \[ \beta \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} \colon \beta \neq 0. \] For \(\lambda_3 = 2\), we substitute \(\lambda = \lambda_3 = 2\) into \(B_{\lambda}\) and solve \(B_{2}\mathbf{x} = \mathbf{0}\). We have the augmented matrix \[ \begin{pmatrix} 4 & -1 & -1 \\ 11 & -2 & -5 \\ 1 & -1 & 2 \end{pmatrix} \] which we transform into reduced echelon form via the following process: \[\begin{align*} \text{$R_1 \leftrightarrow R_3$} \colon\quad & \begin{pmatrix} 1 & -1 & 2 \\ 11 & -2 & -5 \\ 4 & -1 & -1 \end{pmatrix}; \\ \text{$R_2 \to R_2 - 11R_1$ and $R_3 \to R_3 - 4R_1$} \colon\quad & \begin{pmatrix} 1 & -1 & 2 \\ 0 & 9 & -27 \\ 0 & 3 & -9 \end{pmatrix}; \\ \text{$R_2 \to \frac{1}{9}R_2$ and $R_3 \to R_3 - 3R_2$} \colon\quad & \begin{pmatrix} 1 & -1 & 2 \\ 0 & 1 & -3 \\ 0 & 0 & 0 \end{pmatrix}; \\ \text{$R_1 \to R_1 + R_2$} \colon\quad & \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & -3 \\ 0 & 0 & 0 \end{pmatrix}. \end{align*}\] Again, there is no leading entry in the last column, thus, taking \(z = \gamma\), we read off that the set of eigenvectors corresponding to \(\lambda_3 = 2\) is given by \[ \gamma \begin{pmatrix} 1 \\ 3 \\ 1 \end{pmatrix} \colon \gamma\neq 0. 
\] Let \[ P = (\mathbf{v}_1 \ \ \mathbf{v}_2 \ \ \mathbf{v}_3) = \begin{pmatrix} 0 & 1 & 1 \\ -1 & 2 & 3 \\ 1 & 1 & 1 \end{pmatrix}, \] where, for \(\alpha = \beta = \gamma = 1\), \(\mathbf{v}_1, \mathbf{v}_2\), and \(\mathbf{v}_3\) are eigenvectors corresponding to the eigenvalues \(\lambda_1\), \(\lambda_2\) and \(\lambda_3\), respectively. Then \[ P^{-1}AP = D, \] where \[ D = \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}. \]

    6. We have \[\begin{align*} p_{A}(\lambda) = \det(\lambda I - A) &= \begin{vmatrix} \lambda-2 & -\sqrt{2} & 0 \\ - \sqrt{2} & \lambda -2 & - \sqrt{2} \\ 0 & - \sqrt{2} & \lambda - 2 \end{vmatrix} \\ &= (\lambda-2)\begin{vmatrix} \lambda -2 & -\sqrt{2} \\ -\sqrt{2} & \lambda -2 \end{vmatrix} + \sqrt{2} \begin{vmatrix} -\sqrt{2} & - \sqrt{2} \\ 0 & \lambda-2 \end{vmatrix} \\ &= (\lambda-2)\left[(\lambda-2)^2-2\right] + \sqrt{2} \left[-\sqrt{2} (\lambda-2) \right] \\ &= (\lambda-2)\left[ \lambda^2 - 4 \lambda\right] \\ &= \lambda(\lambda-2)(\lambda-4), \end{align*}\] so the eigenvalues are \(\lambda_1=0\), \(\lambda_2=2\), and \(\lambda_3=4\). For \(\lambda_1 = 0\), we substitute \(\lambda = \lambda_1 = 0\) into \(B_{\lambda}\) and solve \(B_{0}\mathbf{x} = \mathbf{0}\). We have the augmented matrix \[ \begin{pmatrix} -2 & - \sqrt{2} & 0 \\ -\sqrt{2} & -2 & -\sqrt{2} \\ 0 & -\sqrt{2} & -2 \end{pmatrix} \] which we bring into reduced echelon form as follows: \[\begin{align*} \text{$R_1 \leftrightarrow R_2$} \colon\quad & \begin{pmatrix} - \sqrt{2} & - 2 & -\sqrt{2} \\ -2 & -\sqrt{2} & 0 \\ 0 & -\sqrt{2} & -2 \end{pmatrix};\\ \text{$R_1 \to -\frac{1}{\sqrt{2}}R_1$ and $R_2 \to R_2 + 2R_1$} \colon\quad & \begin{pmatrix} 1 & \sqrt{2} & 1 \\ 0 & \sqrt{2} & 2 \\ 0 & -\sqrt{2} & -2 \end{pmatrix}; \\ \text{$R_1 \to R_1 - R_2$ and $R_3 \to R_3 + R_2$} \colon\quad & \begin{pmatrix} 1 & 0 & -1 \\ 0 & \sqrt{2} & 2 \\ 0 & 0 & 0 \end{pmatrix}; \\ \text{$R_2 \to \frac{1}{\sqrt{2}}R_2$} \colon\quad & \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & \sqrt{2} \\ 0 & 0 & 0 \end{pmatrix}. \end{align*}\] Thus, the eigenvectors corresponding to the eigenvalue \(\lambda_1=0\) are given as the set \[ \alpha \begin{pmatrix} 1 \\ -\sqrt{2} \\ 1 \end{pmatrix} \colon \alpha \neq 0. \] Note that this example shows that it is possible for an eigenvalue to be zero, even though an eigenvector is not allowed to be the zero vector. Note also that for any eigenvector \(\mathbf{x}\) corresponding to a zero eigenvalue, we have \(A\mathbf{x} = \mathbf{0}\). For \(\lambda_2 = 2\), we substitute \(\lambda = \lambda_2 =2\) into \(B_{\lambda}\) and solve \(B_2\mathbf{x} = \mathbf{0}\). We have the augmented matrix \[ \begin{pmatrix} 0 & -\sqrt{2} & 0 \\ -\sqrt{2} & 0 & -\sqrt{2} \\ 0 & - \sqrt{2} & 0 \end{pmatrix}, \] which, by applying EROs \(R_1 \leftrightarrow R_2\) and \(R_3 \to R_3 - R_2\), we transform to \[ \begin{pmatrix} -\sqrt{2} & 0 & -\sqrt{2} \\ 0 & - \sqrt{2} & 0 \\ 0 & 0 & 0 \end{pmatrix}, \] and, performing \(R_i \to -\frac{1}{\sqrt{2}}R_i\), for \(i=1,2\), to \[ \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}. \] Thus, the set of eigenvectors corresponding to the eigenvalue \(\lambda_2 = 2\), is given as \[ \beta \begin{pmatrix} -1 \\0 \\ 1 \end{pmatrix} \colon \beta \neq 0. \] For \(\lambda_3=4\), we substitute \(\lambda = \lambda_3 = 4\) into the matrix \(B_{\lambda}\) and solve \(B_4\mathbf{x} = \mathbf{0}\). 
We get the augmented matrix \[ \begin{pmatrix} 2 & - \sqrt{2} & 0\\ - \sqrt{2} & 2 & -\sqrt{2} \\ 0 & -\sqrt{2} & 2 \end{pmatrix} \] which, applying EROs, can be transformed into REF as follows: \[\begin{align*} \text{$R_1 \leftrightarrow R_2$} \colon\quad & \begin{pmatrix} -\sqrt{2} & 2 & -\sqrt{2} \\ 2 & -\sqrt{2} & 0 \\ 0 & -\sqrt{2} & 2 \end{pmatrix} \\ \text{$R_2 \to R_2 + \sqrt{2}R_1$} \colon\quad & \begin{pmatrix} -\sqrt{2} & 2 & -\sqrt{2} \\ 0 & \sqrt{2} & - 2 \\ 0 & -\sqrt{2} & 2 \end{pmatrix} \\ \text{$R_3 \to R_3 + R_2$ and $R_1 \to -\frac{1}{\sqrt{2}} R_1$} \colon\quad & \begin{pmatrix} 1 & -\sqrt{2} & 1 \\ 0 & \sqrt{2} & - 2 \\ 0 & 0 & 0 \end{pmatrix} \\ \text{$R_1 \to R_1 + R_2$ and $R_2 \to \frac{1}{\sqrt{2}} R_2$} \colon\quad & \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & -\sqrt{2} \\ 0 & 0 & 0 \end{pmatrix}. \end{align*}\] Thus, the eigenvectors corresponding to the eigenvalue \(\lambda_3=4\) are given as the set \[ \gamma \begin{pmatrix} 1 \\ \sqrt{2} \\ 1 \end{pmatrix} \colon \gamma \neq 0. \] Using \(\alpha = \beta = \gamma = 1\) in the above sets to obtain particular eigenvectors \(\mathbf{v}_1, \mathbf{v}_2\) and \(\mathbf{v}_3\), respectively, we define the matrix \(P\) by \[ P = (\mathbf{v}_1 \ \ \mathbf{v}_2 \ \ \mathbf{v}_3) = \begin{pmatrix} 1 & -1 & 1 \\ -\sqrt{2} & 0 & \sqrt{2} \\ 1 & 1 & 1 \end{pmatrix}. \] Then, \[ P^{-1}AP = D, \] where \[ D = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 4 \end{pmatrix}. \]

    7. We have \[\begin{align*} p_{A}(\lambda) = \det(\lambda I - A) &= \begin{vmatrix} \lambda-1 & 1 & 1 \\ -1 & \lambda+1 & 0 \\ -1 & 0 & \lambda +1 \end{vmatrix}\\ &= (\lambda-1) \begin{vmatrix} \lambda +1 & 0 \\ 0 & \lambda+1 \end{vmatrix} - \begin{vmatrix} -1 & 0 \\ -1 & \lambda+1 \end{vmatrix} + \begin{vmatrix} -1 & \lambda+1 \\ -1 & 0 \end{vmatrix} \\ &= (\lambda-1)(\lambda+1)^2 + (\lambda+1) + (\lambda+1) \\ &= (\lambda+1)(\lambda^2+1), \end{align*}\] thus the eigenvalues are \(\lambda_1 = -1\), \(\lambda_2 = i\) and \(\lambda_3 = -i\) (where \(i^2=-1\)). For \(\lambda_1 = -1\), we substitute \(\lambda = \lambda_1 = -1\) into \(B_{\lambda}\) and solve \(B_{-1}\mathbf{x} = \mathbf{0}\). We have the augmented matrix \[ \begin{pmatrix} -2 & 1 & 1 \\ -1 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}, \] which we transform into reduced echelon form via \(R_3 \to R_3 - R_2\), \(R_1 \to R_1 - 2R_2\) and \(R_1 \leftrightarrow R_2\) to obtain \[ \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}, \] and thus find the set of eigenvectors \[ \alpha \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} \colon \alpha \neq 0 \] corresponding to \(\lambda_1 = -1\). For \(\lambda_2 = i\), we substitute \(\lambda = \lambda_2 = i\) into \(B_{\lambda}\) and solve \(B_{i}\mathbf{x} = \mathbf{0}\). We have the augmented matrix \[ \begin{pmatrix} i-1 & 1 & 1 \\ -1 & i+1 & 0 \\ -1 & 0 & i+1 \end{pmatrix}. \] First, performing \(R_1 \leftrightarrow R_2\), \(R_2 \to R_2 + (i-1)R_1\) and \(R_3 \to R_3 - R_1\) gives \[ \begin{pmatrix} -1 & i+1 & 0 \\ 0 & 1 + (i+1)(i-1) & 1 \\ 0 & -i-1 & i+1 \end{pmatrix} = \begin{pmatrix} -1 & i+1 & 0 \\ 0 & -1 & 1 \\ 0 & -(i+1) & i+1 \end{pmatrix}, \] and applying \(R_3 \to R_3 - (i+1)R_2\) and \(R_1 \to R_1 + (i+1)R_2\) gives \[ \begin{pmatrix} -1 & 0 & (i+1) \\ 0 & -1 & 1 \\ 0 & 0 & 0 \end{pmatrix}, \] which is in REF and thus we find the set of eigenvectors \[ \beta \begin{pmatrix} (i+1) \\ 1 \\ 1 \end{pmatrix} \colon \beta \neq 0 \] corresponding to \(\lambda_2 = i\). For \(\lambda_3 = -i\), we substitute \(\lambda = \lambda_3 = -i\) into \(B_{\lambda}\) and solve \(B_{-i}\mathbf{x} = \mathbf{0}\). We have the augmented matrix \[ \begin{pmatrix} -i-1 & 1 & 1 \\ -1 & -i+1 & 0 \\ -1 & 0 & -i+1 \end{pmatrix}. \] First, performing \(R_1 \leftrightarrow R_2\), \(R_2 \to R_2 - (i+1)R_1\) and \(R_3 \to R_3 - R_1\) gives \[ \begin{pmatrix} -1 & -(i-1) & 0 \\ 0 & 1 + (i+1)(i-1) & 1 \\ 0 & i-1 & -i+1 \end{pmatrix} = \begin{pmatrix} -1 & -(i-1) & 0 \\ 0 & -1 & 1 \\ 0 & i-1 & -(i-1) \end{pmatrix}, \] and applying \(R_3 \to R_3 + (i-1)R_2\) and \(R_1 \to R_1 - (i-1)R_2\) gives \[ \begin{pmatrix} -1 & 0 & -(i-1) \\ 0 & -1 & 1 \\ 0 & 0 & 0 \end{pmatrix}, \] which is in REF and thus we find the set of eigenvectors \[ \gamma \begin{pmatrix} -(i-1) \\ 1 \\ 1 \end{pmatrix} \colon \gamma \neq 0 \] corresponding to \(\lambda_3 = -i\). Setting \(\alpha = \beta = \gamma = 1\), we find that the matrix \[ P = \begin{pmatrix} 0 & i+1 & -i+1 \\ -1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix} \] satisfies \[ P^{-1}AP = D, \] where \[ D = \begin{pmatrix} -1 & 0 & 0 \\ 0 & i & 0 \\ 0 & 0 & -i \end{pmatrix}. \]

    8. We have \[\begin{align*} p_{A}(\lambda) = \det(\lambda I - A) &= \begin{vmatrix} \lambda-5 & -5 & -1 \\ 2 & \lambda+1 & 0 \\ -1 & -1 & \lambda-1 \end{vmatrix}\\ &= (\lambda-5)\begin{vmatrix} \lambda+1 & 0 \\ -1 & \lambda-1 \end{vmatrix} - (-5) \begin{vmatrix} 2 & 0 \\ -1 & \lambda-1 \end{vmatrix} \\ &\phantom{={}} + (-1) \begin{vmatrix} 2 & \lambda+1 \\ -1 & -1 \end{vmatrix} \\ &= (\lambda-5)(\lambda+1)(\lambda-1) + 5(2(\lambda-1)) - (-2 + \lambda + 1) \\ &= (\lambda-1)\big((\lambda-5)(\lambda+1) + 10 - 1\big) \\ &= (\lambda-1)\big(\lambda^2 - 4\lambda + 4\big) \\ &= (\lambda-1)(\lambda - 2)^2, \end{align*}\] thus the eigenvalues are \(\lambda_1 = 1\), \(\lambda_{2,3} = 2\). For \(\lambda_1 = 1\), we substitute \(\lambda = \lambda_1 = 1\) into \(B_{\lambda}\) and solve \(B_{1}\mathbf{x} = \mathbf{0}\). We have the augmented matrix \[ \begin{pmatrix} -4 & -5 & -1 \\ 2 & 2 & 0 \\ -1 & -1 & 0 \end{pmatrix}. \] Applying EROs \(R_1 \to R_1 + 2R_2\) and \(R_3 \to R_3 + \frac{1}{2}R_2\) gives \[ \begin{pmatrix} 0 & -1 & -1 \\ 2 & 2 & 0 \\ 0 & 0 & 0 \end{pmatrix}. \] Performing \(R_1 \leftrightarrow R_2\), \(R_1 \to R_1 + 2R_2\) and \(R_1 \to \frac{1}{2}R_1\) and \(R_2 \to -R_2\) gives \[ \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}, \] which is in REF and thus we find the set of eigenvectors \[ \alpha\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} \colon \alpha \neq 0 \] corresponding to \(\lambda_1 = 1\). For \(\lambda_{2,3} = 2\), we substitute \(\lambda = \lambda_{2,3} = 2\) into \(B_{\lambda}\) and solve \(B_{2}\mathbf{x} = \mathbf{0}\). We have the augmented matrix \[ \begin{pmatrix} -3 & -5 & -1 \\ 2 & 3 & 0 \\ -1 & -1 & 1 \end{pmatrix}. \] Performing \(R_1\leftrightarrow R_3\), \(R_2 \to R_2 + 2R_1\) and \(R_3 \to R_3 - 3R_1\), we have \[ \begin{pmatrix} -1 & -1 & 1 \\ 0 & 1 & 2 \\ 0 & -2 & -4 \end{pmatrix}, \] and further, performing \(R_3 \to R_3 + 2R_2\), \(R_1 \to -R_1\) and \(R_1 \to R_1 - R_2\) gives \[ \begin{pmatrix} 1 & 0 & -3 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix}, \] which is in REF and thus we find the set of eigenvectors \[ \beta\begin{pmatrix} 3 \\ -2 \\ 1 \end{pmatrix} \colon \beta \neq 0 \] corresponding to \(\lambda_{2,3} = 2\). This shows that there are only two linearly independent eigenvectors, hence \(A\) is not diagonalisable.
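
    As the hint in the next question suggests, all of these eigen-computations can be checked with a computer; a minimal NumPy sketch for part a. (note that `np.linalg.eig` normalises its eigenvectors, so they may differ from ours by a scalar multiple):

    ```python
    import numpy as np

    A = np.array([[1, 0], [2, 2]])
    evals, evecs = np.linalg.eig(A)
    print(evals)   # [1. 2.] (the order may vary)
    print(evecs)   # columns are unit-length eigenvectors

    # Reassemble A from the diagonalisation A = P D P^{-1}.
    P, D = evecs, np.diag(evals)
    print(P @ D @ np.linalg.inv(P))  # recovers A, up to rounding
    ```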

  9. For the matrices in a. and d. in the previous question, find a formula for \(A^n\).

    Answers:

    First, note that \(P^{-1}AP = D\) and thus \(A = PDP^{-1}\). Then, \[\begin{align*} A^2 &= \left(PDP^{-1}\right)^2 \\ &= \left(PDP^{-1}\right)\left(PDP^{-1}\right) \\ &= PDP^{-1}PDP^{-1} \\ &= PD I_n DP^{-1} \\ &= P D^{2} P^{-1}. \end{align*}\] and more generally \(A^n = P D^{n} P^{-1}\).

    Now for a. \[\begin{align*} A^n &= \begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 2^n \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 2 & 1\end{pmatrix} \\ &= \begin{pmatrix} 1 & 0 \\ -2 & 2^n \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix} \\ &= \begin{pmatrix} 1 & 0 \\ 2^{n+1}-2 & 2^n \end{pmatrix}. \end{align*}\]

    For d., first compute \(P^{-1}\) using Gaussian elimination. Here it is convenient to replace \(\mathbf{v}_1\) by \(-\mathbf{v}_1\), so that the first column of \(P\) is \((2, -1, 0)^T\): any nonzero multiple of an eigenvector is again an eigenvector, and the product \(PD^nP^{-1}\) is unchanged. We start with the augmented matrix \[\left(\begin{array}{rrr|rrr} 2 & -1 & 1 & 1 & 0 & 0 \\ -1 & 0 & -2 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 \end{array}\right).\] Applying \(R_1 \leftrightarrow R_2\) gives \[\left(\begin{array}{rrr|rrr} -1 & 0 & -2 & 0 & 1 & 0 \\ 2 & -1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 \end{array}\right),\] performing \(R_2 \to R_2 + 2R_1\) gives \[\left(\begin{array}{rrr|rrr} -1 & 0 & -2 & 0 & 1 & 0 \\ 0 & -1 & -3 & 1 & 2 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 \end{array}\right),\] performing \(R_3 \to R_3 + R_2\) gives \[\left(\begin{array}{rrr|rrr} -1 & 0 & -2 & 0 & 1 & 0 \\ 0 & -1 & -3 & 1 & 2 & 0 \\ 0 & 0 & -2 & 1 & 2 & 1 \end{array}\right),\] performing \(R_1 \to -R_1\), \(R_2 \to -R_2\) and \(R_3 \to -\frac{1}{2}R_3\) gives \[\left(\begin{array}{rrr|rrr} 1 & 0 & 2 & 0 & -1 & 0 \\ 0 & 1 & 3 & -1 & -2 & 0 \\ 0 & 0 & 1 & -\frac{1}{2} & -1 & -\frac{1}{2} \end{array}\right),\] and with \(R_2 \to R_2 - 3R_3\) and \(R_1 \to R_1 - 2R_3\) we arrive at \[\left(\begin{array}{rrr|rrr} 1 & 0 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & \frac{1}{2} & 1 & \frac{3}{2} \\ 0 & 0 & 1 & -\frac{1}{2} & -1 & -\frac{1}{2} \end{array}\right).\] We deduce that \[ P^{-1} = \frac{1}{2} \begin{pmatrix} 2 & 2 & 2 \\ 1 & 2 & 3 \\ -1 & -2 & -1 \end{pmatrix}. \] We have \[ D^n = \begin{pmatrix} 2^n & 0 & 0\\ 0 & 2^n & 0 \\ 0 & 0 & 4^n \end{pmatrix}, \] and so \[\begin{align*} A^n &= P D^n P^{-1} \\ &=\frac{1}{2} \begin{pmatrix} 2^{n+2}-2^n-4^n &2^{n+2}-2^{n+1}-2^{2 n+1} & 2^{n+2}-3\times 2^n-4^n \\ -2^{n+1}+2^{2 n+1} &-2^{n+1}+2^{2 n+2} &-2^{n+1}+2^{2 n+1} \\ 2^n-4^n & 2^{n+1}-2^{2 n+1} & 3\times 2^n-4^n \end{pmatrix}. \end{align*}\]
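
    A quick check of the closed form for part a., comparing \(PD^nP^{-1}\) against direct repeated multiplication:

    ```python
    import numpy as np

    A = np.array([[1, 0], [2, 2]])
    P = np.array([[1, 0], [-2, 1]])
    P_inv = np.array([[1, 0], [2, 1]])

    n = 5
    print(P @ np.diag([1, 2**n]) @ P_inv)  # [[ 1  0] [62 32]]
    print(np.linalg.matrix_power(A, n))    # the same: 2^(n+1)-2 = 62, 2^n = 32
    ```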

  10. Linear Difference Equations. A population of wildebeest can be classified into two life stages: juvenile and adult. Each year \(60\%\) of the juveniles survive to become adults, adults give birth on average to \(0.5\) juveniles and \(70\%\) of adults survive the year. If there are \(200\) juveniles and \(200\) adults in one year, what is the long term population of juveniles and adults? What is the long term ratio of juveniles to adults? Hint: write this as a matrix equation and use diagonalisation (save some time and use a computer to find the eigenvalues and eigenvectors for this question).

    How about in the case when the adult survival rate increases to \(80\%\)? In this case also give the long term growth rate of the juvenile and adult populations. What happens in the case that the adult survival rate drops to \(60\%\)?

    Answers:

    Let \(x_1(n)\) denote the population of juveniles and \(x_2(n)\) the population of adults in year \(n\). Then we can formulate this as the coupled system of linear difference equations: \[ \begin{pmatrix} x_1(n)\\ x_2(n) \end{pmatrix} = \begin{pmatrix} 0&0.5\\ 0.6&0.7 \end{pmatrix} \begin{pmatrix} x_1(n-1)\\ x_2(n-1) \end{pmatrix}. \] Let \[ \mathbf{x}(n)= \begin{pmatrix} x_1(n)\\ x_2(n) \end{pmatrix} \quad \text{and} \quad A= \begin{pmatrix} 0&0.5\\ 0.6&0.7 \end{pmatrix} \] then the solution is \[ \mathbf{x}(n)=A^n\mathbf{x}(0). \] If the matrix \(A\) is diagonalisable, then we can find \(A^n\) by diagonalisation. We find that \(A\) has eigenvalues \(\lambda_1=1\) and \(\lambda_2=-\frac{3}{10}\) with corresponding eigenvectors \[\mathbf{v}_1=\begin{pmatrix}1\\2\end{pmatrix},\quad \mathbf{v}_2=\begin{pmatrix}1\\-3/5\end{pmatrix}.\] So \[ A^n=\frac{1}{13} \begin{pmatrix} 1&1\\ 2&-\frac{3}{5} \end{pmatrix} \begin{pmatrix} 1&0\\ 0&-\frac{3}{10} \end{pmatrix}^n \begin{pmatrix} 3&5\\ 10&-5 \end{pmatrix} \] and \[\begin{align} \mathbf{x}(n)&=\frac{1}{13}\times 1^n \times (3x_1(0)+5x_2(0)) \begin{pmatrix} 1\\ 2 \end{pmatrix} \\ &+\frac{1}{13}\times\left(-\frac{3}{10}\right)^n\times(10x_1(0)-5x_2(0)) \begin{pmatrix} 1\\ -\frac{3}{5} \end{pmatrix}\tag{22.1} \end{align}\] or writing each population separately \[ \begin{aligned} x_1(n)&=\frac{1}{13}(3x_1(0)+5x_2(0))+\frac{1}{13}\left(-\frac{3}{10}\right)^n(10x_1(0)-5x_2(0))\\ x_2(n)&=\frac{2}{13}(3x_1(0)+5x_2(0))+\frac{1}{13}\left(-\frac{3}{10}\right)^n\left( -\frac{3}{5}\right)(10x_1(0)-5x_2(0)). \end{aligned} \]

    From (22.1) we can see that in the long term the populations settle down to: \[\begin{equation} \lim\limits_{n\to\infty}\mathbf{x}(n)=\frac{1}{13}(3x_1(0)+5x_2(0)) \begin{pmatrix} 1\\ 2 \end{pmatrix}\tag{22.2} \end{equation}\] that is, \[\begin{align*} \lim\limits_{n\to\infty}x_1(n)&=\frac{1}{13}(3x_1(0)+5x_2(0))\approx 123,\\ \lim\limits_{n\to\infty}x_2(n)&=\frac{2}{13}(3x_1(0)+5x_2(0))\approx 246. \end{align*}\] From (22.2) the long term ratio of juveniles to adults is \[ \lim\limits_{n\to\infty}\frac{x_1(n)}{x_2(n)}=\frac{1}{2} \] i.e. \(1:2\).

    Now, when the adult survival rate increases to 80%, the matrix \(A\) becomes \[ A= \begin{pmatrix} 0&0.5\\ 0.6&0.8 \end{pmatrix} \] which is still diagonalisable and has eigenvalues and corresponding eigenvectors \[\begin{align*} \lambda_1=\frac{1}{10} (4 + \sqrt{46})\approx 1.08,\quad & \quad\lambda_2=\frac{1}{10} (4 - \sqrt{46})\approx-0.28,\\ \mathbf{v}_1= \begin{pmatrix} \frac{1}{6}(-4 + \sqrt{46})\\ 1 \end{pmatrix},\quad & \quad \mathbf{v}_2=\begin{pmatrix} \frac{1}{6}(-4 - \sqrt{46})\\ 1 \end{pmatrix}. \end{align*}\] If we let \(\mathcal{P}=(\mathbf{v}_1,\mathbf{v}_2)\) and write \[ \mathbf{x}(0)_\mathcal{P}=\begin{pmatrix} \mu_1\\ \mu_2 \end{pmatrix} \] then \[ \mathbf{x}(n)=\mu_1\lambda_1^n\mathbf{v}_1+\mu_2\lambda_2^n\mathbf{v}_2. \] Since \(|\lambda_1|>1\) and \(|\lambda_2|<1\), for large \(n\) \[ \mathbf{x}(n)\approx\mu_1\lambda_1^n\mathbf{v}_1 \] and hence both \(x_1(n)\to\infty\) and \(x_2(n)\to \infty\), with the long term ratio \(x_1:x_2\) being \(\frac{1}{6}(-4 + \sqrt{46}):1\), or approximately \(1:2.2\).

    In the case that the adult survival rate drops to 60%, we have \[ A= \begin{pmatrix} 0&0.5\\ 0.6&0.6 \end{pmatrix} \] which is diagonalisable and has eigenvalues and corresponding eigenvectors \[\begin{align*} \lambda_1=\frac{1}{10} (3 + \sqrt{39})\approx 0.92,\quad & \quad\lambda_2=\frac{1}{10} (3 - \sqrt{39})\approx-0.32,\\ \mathbf{v}_1= \begin{pmatrix} \frac{1}{6}(-3 + \sqrt{39})\\ 1 \end{pmatrix},\quad & \quad \mathbf{v}_2=\begin{pmatrix} \frac{1}{6}(-3 - \sqrt{39})\\ 1 \end{pmatrix}. \end{align*}\] If we let \(\mathcal{P}=(\mathbf{v}_1,\mathbf{v}_2)\) and write \[ \mathbf{x}(0)_\mathcal{P}=\begin{pmatrix} \mu_1\\ \mu_2 \end{pmatrix} \] then \[ \mathbf{x}(n)=\mu_1\lambda_1^n\mathbf{v}_1+\mu_2\lambda_2^n\mathbf{v}_2, \] but now since both \(|\lambda_1|<1\) and \(|\lambda_2|<1\), \[ \lim\limits_{n\to\infty}\mathbf{x}(n)=\lim\limits_{n\to\infty}\begin{pmatrix} x_1(n)\\ x_2(n) \end{pmatrix} = \begin{pmatrix} 0\\ 0 \end{pmatrix} \] and the population dies out.
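
    The long term behaviour in all three scenarios can also be seen by simply iterating the difference equation; a minimal simulation sketch (the helper name `simulate` is ours):

    ```python
    import numpy as np

    def simulate(adult_survival, years=200, x0=(200.0, 200.0)):
        """Iterate x(n) = A x(n-1) for the given adult survival rate."""
        A = np.array([[0.0, 0.5], [0.6, adult_survival]])
        x = np.array(x0)
        for _ in range(years):
            x = A @ x
        return x

    print(simulate(0.7))  # approaches (123, 246): the 1:2 ratio
    print(simulate(0.8))  # very large and still growing, ratio about 1:2.2
    print(simulate(0.6))  # nearly (0, 0): the population dies out
    ```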