$$ \DeclareMathOperator*{\Arg}{Arg} \DeclareMathOperator*{\Ln}{Ln} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} $$

Topic 1 - Linear Algebra

Week 1 Material

Question 1.1 \(\;\) Use Gaussian elimination to find all solutions to the following systems of equations. If there are infinitely many solutions, write them in parametric form.

\(\;\;\;(a)\;\;\; \begin{matrix} 2x+\;\,y+3z \!\!\!\!&=&\!\!\!\! 1 \\ 4x+3y+5z \!\!\!\!&=&\!\!\!\! 1 \\ 6x+5y+5z \!\!\!\!&=&\!\!\!\! -3 \end{matrix} \hspace{2.5cm}\) \((b)\;\;\; \begin{matrix} 3x+2y+\;\,z \!\!\!\!&=&\!\!\!\! 3 \\ 2x+\;\,y+\;\,z \!\!\!\!&=&\!\!\!\! 0 \\ 6x+2y+4z \!\!\!\!&=&\!\!\!\! 6 \end{matrix}\)


\(\;\;\;(c)^*\;\;\; \begin{matrix} \;\,x+\;\,y-\;\,z \!\!\!\!&=&\!\!\!\! 7 \\ 4x-\;\,y+5z \!\!\!\!&=&\!\!\!\! 4 \\ 6x+\;\,y+3z \!\!\!\!&=&\!\!\!\! 18 \end{matrix} \hspace{2.2cm}\) \((d)\;\;\;\begin{matrix} \;\,x+2y+2z \!\!\!\!&=&\!\!\!\! 4 \\ 3x+\;\,y+2z \!\!\!\!&=&\!\!\!\! 0 \\ \;\,x+\;\,y+4z \!\!\!\!&=&\!\!\!\! 3 \end{matrix}\)


\(\;\;\;(e)\;\;\;\begin{matrix} \qquad\;\;\; x+\;\,y+\;\,z \!\!\!\!&=&\!\!\!\! 1 \\ 3w\qquad\,+3y-4z \!\!\!\!&=&\!\!\!\! 7 \\ \;\,w+\;\,x+\;\,y+2z \!\!\!\!&=&\!\!\!\! 6 \\ 2w+3x+\;\,y+3z \!\!\!\!&=&\!\!\!\! 6 \end{matrix} \hspace{1.5cm}\) \((f)\;\;\;\begin{matrix} \;\,w-2x+\;\,y+\;\,z \!\!\!\!&=&\!\!\!\! 2 \\ 3w\qquad+2y-2z \!\!\!\!&=&\!\!\!\! -8 \\ \qquad 4x-\;\,y-\;\,z \!\!\!\!&=&\!\!\!\! 1 \\ 5w\qquad+3y-\;\,z \!\!\!\!&=&\!\!\!\! 0 \end{matrix}\)


Quick Check

\((a)\;\) \(x=-3,\, y=1,\, z=2.\) \(\hspace{1cm}\) \((b)\;\) There are no solutions.

\((c)\;\) \(x = \frac{11}{5}-\frac45\lambda, \, y = \frac{24}{5}+\frac95\lambda,\, z=\lambda\) where \(\lambda\) is a parameter.

\((d)\;\) \(x =-1, \, y=2,\, z=\frac12.\)

\((e)\;\) \(w=3.15,\, x = -2.5, \,y = 1.65, \,z = 1.85.\) \(\hspace{1cm}\) \((f)\;\) There are no solutions.


Solution

\((a)\;\) Write in augmented matrix form and apply the algorithm. Here we start by scaling the first row to make the first coefficient 1 and then use this row to eliminate the first column.

\[ \left(\begin{matrix} 2 & 1 & 3 \\ 4 & 3 & 5 \\ 6 & 5 & 5 \end{matrix} \left|\,\begin{matrix} 1 \\ 1 \\ -3 \end{matrix}\right.\right) \xrightarrow{\frac12 R_1} \left(\begin{matrix} 1 & \frac12 & \frac32 \\ 4 & 3 & 5 \\ 6 & 5 & 5 \end{matrix} \left|\,\begin{matrix} \frac12 \\ 1 \\ -3 \end{matrix}\right.\right) \] \[\hspace{2cm} \xrightarrow[R_3 - 6R_1]{R_2-4R_1} \left(\begin{matrix} 1 & \frac12 & \frac32 \\ 0 & 1 & -1 \\ 0 & 2 & -4 \end{matrix} \left|\,\begin{matrix} \frac12 \\ -1 \\ -6 \end{matrix}\right.\right) \]

The diagonal coefficient in row 2 is already \(1\), so we use it to eliminate the third row in the second column:

\[ \left(\begin{matrix} 1 & \frac12 & \frac32 \\ 0 & 1 & -1 \\ 0 & 2 & -4 \end{matrix} \left|\,\begin{matrix} \frac12 \\ -1 \\ -6 \end{matrix}\right.\right) \xrightarrow{R_3 -2 R_2} \left(\begin{matrix} 1 & \frac12 & \frac32 \\ 0 & 1 & -1 \\ 0 & 0 & -2 \end{matrix} \left|\,\begin{matrix} \frac12 \\ -1 \\ -4 \end{matrix}\right.\right) \]

Finally we scale the last diagonal coefficient:

\[ \left(\begin{matrix} 1 & \frac12 & \frac32 \\ 0 & 1 & -1 \\ 0 & 0 & -2 \end{matrix} \left|\,\begin{matrix} \frac12 \\ -1 \\ -4 \end{matrix}\right.\right) \xrightarrow{-\frac12R_3} \left(\begin{matrix} 1 & \frac12 & \frac32 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{matrix} \left|\,\begin{matrix} \frac12 \\ -1 \\ 2 \end{matrix}\right.\right) \] Now back substitution gives \[\begin{cases} x +\frac12y+\frac32z =\frac12 \\ \qquad\;\, y-\;\,z = -1 \\ \qquad\;\,\qquad\;z = 2 \end{cases} \quad\implies\quad \begin{cases} z = 2,\\ y = -1 + z = 1,\\ x = \frac12 - \frac12y - \frac{3}{2}z = -3. \end{cases}\]


Alternatively, we could delay the division in the first step, and just try to obtain zeros below the diagonal. This might avoid too many calculations with fractions. \[ \left(\begin{matrix} 2 & 1 & 3 \\ 4 & 3 & 5 \\ 6 & 5 & 5 \end{matrix} \left|\,\begin{matrix} 1 \\ 1 \\ -3 \end{matrix}\right.\right) \xrightarrow[R_3-3R_1]{R_2-2R_1} \left(\begin{matrix} 2 & 1 & 3 \\ 0 & 1 & -1 \\ 0 & 2 & -4 \end{matrix} \left|\,\begin{matrix} 1 \\ -1 \\ -6 \end{matrix}\right.\right) \] \[\hspace{2cm} \xrightarrow{R_3-2R_2} \left(\begin{matrix} 2 & 1 & 3 \\ 0 & 1 & -1 \\ 0 & 0 & -2 \end{matrix} \left|\,\begin{matrix} 1 \\ -1 \\ -4 \end{matrix}\right.\right) \] We now include the divisions in the backward substitution stage: \[\begin{cases} 2x +y+3z = 1 \\ \qquad\; y-\;\,z = -1 \\ \qquad\quad\;-2z = -4 \end{cases} \quad\implies\quad \begin{cases} z = 2,\\ y = -1 + z = 1,\\ x = \dfrac{1-y-3z}{2}=-3 \end{cases}\]
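If you want to check answers like these, the whole elimination-plus-back-substitution procedure is short to code up. Below is a sketch in Python (not part of the course; the helper name `gauss_solve` is our own choice) using exact `Fraction` arithmetic so that no rounding creeps in. It assumes the system has a unique solution:

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with exact rational
    arithmetic, assuming a unique solution exists."""
    n = len(A)
    # Build the augmented matrix with Fraction entries.
    M = [[Fraction(x) for x in row] + [Fraction(c)] for row, c in zip(A, b)]
    for i in range(n):
        # Swap in a row with a nonzero pivot if necessary.
        p = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[p] = M[p], M[i]
        # Eliminate the entries below the pivot.
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    # Back substitution.
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Part (a): expect x = -3, y = 1, z = 2.
print(gauss_solve([[2, 1, 3], [4, 3, 5], [6, 5, 5]], [1, 1, -3]))
```

Like the hand calculation above, this divides as late as possible within each column; the exact fractions mean either ordering gives the same answer.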

\((b)\;\) Following a similar idea, try to obtain a 1 in the top left. We could divide the top row by 3, but a sneaky way which avoids fractions is to subtract row 2: \[ \left(\begin{matrix} 3 & 2 & 1 \\ 2 & 1 & 1 \\ 6 & 2 & 4 \end{matrix} \left|\,\begin{matrix} 3 \\ 0 \\ 6 \end{matrix}\right.\right) \xrightarrow{R_1-R_2} \left(\begin{matrix} 1 & 1 & 0 \\ 2 & 1 & 1 \\ 6 & 2 & 4 \end{matrix} \left|\,\begin{matrix} 3 \\ 0 \\ 6 \end{matrix}\right.\right) \] \[ \xrightarrow[R_3-6R_1]{R_2-2R_1} \left(\begin{matrix} 1 & 1 & 0 \\ 0 & -1 & 1 \\ 0 & -4 & 4 \end{matrix} \left|\,\begin{matrix} 3 \\ -6 \\ -12 \end{matrix}\right.\right) \xrightarrow{R_3-4R_2} \left(\begin{matrix} 1 & 1 & 0 \\ 0 & -1 & 1 \\ 0 & 0 & 0 \end{matrix} \left|\,\begin{matrix} 3 \\ -6 \\ 12 \end{matrix}\right.\right) \] The last equation here says \(0x+0y+0z=12\), hence there are no solutions.

\((c)\;\) Tutorial question

\[ \left(\begin{matrix} 1 & 1 & -1 \\ 4 & -1 & 5 \\ 6 & 1 & 3 \end{matrix} \left|\,\begin{matrix} 7 \\ 4 \\ 18 \end{matrix}\right.\right) \xrightarrow[R_3-6R_1]{R_2-4R_1} \left(\begin{matrix} 1 & 1 & -1 \\ 0 & -5 & 9 \\ 0 & -5 & 9 \end{matrix} \left|\,\begin{matrix} 7 \\ -24 \\ -24 \end{matrix}\right.\right) \] \[\hspace{2cm} \xrightarrow{R_3-R_2} \left(\begin{matrix} 1 & 1 & -1 \\ 0 & -5 & 9 \\ 0 & 0 & 0 \end{matrix} \left|\,\begin{matrix} 7 \\ -24 \\ 0 \end{matrix}\right.\right) \] Notice that the third equation is consistent as it just says \(0=0\). But the number of independent equations remaining is only two. The final step of Gaussian elimination is to normalize the second row: \[ \xrightarrow{-\frac15R_2} \left(\begin{matrix} 1 & 1 & -1 \\ 0 & 1 & -\frac95 \\ 0 & 0 & 0 \end{matrix} \left|\,\begin{matrix} 7 \\ \frac{24}{5} \\ 0 \end{matrix}\right.\right) \] Setting \(z=\lambda\) (a parameter), we can now use back substitution to get \(x\) and \(y\) in terms of the parameter: \[\begin{cases} x +y-\;\,z = 7 \\ \quad\;\;\, y-\frac95z = \frac{24}{5} \end{cases} \quad\implies\quad \begin{cases} y = \frac{24}{5} + \frac95z = \frac{24}{5}+\frac95\lambda,\\ x = 7 - y + z = \frac{11}{5}-\frac45\lambda. \end{cases}\]
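A parametric answer is easy to verify: substitute the expressions back into the original equations and check that they hold for every value of \(\lambda\). A quick Python check (our own, using exact fractions and a few sample values of the parameter):

```python
from fractions import Fraction

# For several values of the parameter lambda, confirm that
# (x, y, z) = (11/5 - 4l/5, 24/5 + 9l/5, l) satisfies all three
# original equations of part (c).
for lam in [Fraction(0), Fraction(1), Fraction(-7, 3)]:
    x = Fraction(11, 5) - Fraction(4, 5) * lam
    y = Fraction(24, 5) + Fraction(9, 5) * lam
    z = lam
    assert x + y - z == 7
    assert 4 * x - y + 5 * z == 4
    assert 6 * x + y + 3 * z == 18
print("parametric solution checks out")
```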

\((d)\;\) Again applying row operations, \[ \left(\begin{matrix} 1 & 2 & 2 \\ 3 & 1 & 2 \\ 1 & 1 & 4 \end{matrix} \left|\,\begin{matrix} 4 \\ 0 \\ 3 \end{matrix}\right.\right) \xrightarrow[R_3-R_1]{R_2-3R_1} \left(\begin{matrix} 1 & 2 & 2 \\ 0 & -5 & -4 \\ 0 & -1 & 2 \end{matrix} \left|\,\begin{matrix} 4 \\ -12 \\ -1 \end{matrix}\right.\right) \] \[ \xrightarrow{R_2\leftrightarrow R_3} \left(\begin{matrix} 1 & 2 & 2 \\ 0 & -1 & 2 \\ 0 & -5 & -4 \end{matrix} \left|\,\begin{matrix} 4 \\ -1 \\ -12 \end{matrix}\right.\right) \xrightarrow{-R_2} \left(\begin{matrix} 1 & 2 & 2 \\ 0 & 1 & -2 \\ 0 & -5 & -4 \end{matrix} \left|\,\begin{matrix} 4 \\ 1 \\ -12 \end{matrix}\right.\right) \] \[ \xrightarrow{R_3+5R_2} \left(\begin{matrix} 1 & 2 & 2 \\ 0 & 1 & -2 \\ 0 & 0 & -14 \end{matrix} \left|\,\begin{matrix} 4 \\ 1 \\ -7 \end{matrix}\right.\right) \xrightarrow{-\frac{1}{14}R_3} \left(\begin{matrix} 1 & 2 & 2 \\ 0 & 1 & -2 \\ 0 & 0 & 1 \end{matrix} \left|\,\begin{matrix} 4 \\ 1 \\ \frac12 \end{matrix}\right.\right) \] Back substitution then gives \[\begin{cases} x +2y+2z = 4 \\ \qquad\; y-2z = 1 \\ \qquad\;\qquad\;z = \frac12 \end{cases} \quad\implies\quad \begin{cases} z = \frac12 \\ y = 1 + 2z = 2\\ x = 4 - 2y - 2z = -1 \end{cases}\]

\((e)\;\) Here the coefficient of \(w\) in \(R_1\) is zero, so we should start by swapping rows. One possible choice is the following (you should get the same answer in the end!): \[ \left(\begin{matrix} 0 & 1 & 1 & 1 \\ 3 & 0 & 3 & -4 \\ 1 & 1 & 1 & 2 \\ 2 & 3 & 1 & 3 \end{matrix} \left|\,\begin{matrix} 1 \\ 7 \\ 6 \\ 6 \end{matrix}\right.\right) \xrightarrow{R_1\leftrightarrow R_3} \left(\begin{matrix} 1 & 1 & 1 & 2 \\ 3 & 0 & 3 & -4 \\ 0 & 1 & 1 & 1 \\ 2 & 3 & 1 & 3 \end{matrix} \left|\,\begin{matrix} 6 \\ 7 \\ 1 \\ 6 \end{matrix}\right.\right) \] \[ \xrightarrow[R_4-2R_1]{R_2-3R_1} \left(\begin{matrix} 1 & 1 & 1 & 2 \\ 0 & -3 & 0 & -10 \\ 0 & 1 & 1 & 1 \\ 0 & 1 & -1 & -1 \end{matrix} \left|\,\begin{matrix} 6 \\ -11 \\ 1 \\ -6 \end{matrix}\right.\right) \xrightarrow{R_2\leftrightarrow R_3} \left(\begin{matrix} 1 & 1 & 1 & 2 \\ 0 & 1 & 1 & 1 \\ 0 & -3 & 0 & -10 \\ 0 & 1 & -1 & -1 \end{matrix} \left|\,\begin{matrix} 6 \\ 1 \\ -11 \\ -6 \end{matrix}\right.\right) \] \[ \xrightarrow[R_4-R_2]{R_3+3R_2} \left(\begin{matrix} 1 & 1 & 1 & 2 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 3 & -7 \\ 0 & 0 & -2 & -2 \end{matrix} \left|\,\begin{matrix} 6 \\ 1 \\ -8 \\ -7 \end{matrix}\right.\right) \xrightarrow{R_3+R_4} \left(\begin{matrix} 1 & 1 & 1 & 2 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & -9 \\ 0 & 0 & -2 & -2 \end{matrix} \left|\,\begin{matrix} 6 \\ 1 \\ -15 \\ -7 \end{matrix}\right.\right) \] \[ \xrightarrow{R_4+2R_3} \left(\begin{matrix} 1 & 1 & 1 & 2 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & -9 \\ 0 & 0 & 0 & -20 \end{matrix} \left|\,\begin{matrix} 6 \\ 1 \\ -15 \\ -37 \end{matrix}\right.\right) \xrightarrow{-\frac{1}{20}R_4} \left(\begin{matrix} 1 & 1 & 1 & 2 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & -9 \\ 0 & 0 & 0 & 1 \end{matrix} \left|\,\begin{matrix} 6 \\ 1 \\ -15 \\ \frac{37}{20} \end{matrix}\right.\right) \] Backward substitution finally gives: \[\begin{cases} w+x +y+2z = 6 \\ \qquad x+y+\;\,z = 1 \\ \qquad\quad\;\;\, y-9z = -15 \\ \qquad\qquad\qquad z = \frac{37}{20} \\ \end{cases} \;\;\implies\;\; 
\begin{cases} z = \frac{37}{20}=1.85 \\ y = -15 + 9z = \frac{33}{20}=1.65 \\ x = 1-y-z = -\frac{50}{20}=-2.5 \\ w = 6-x-y-2z=\frac{63}{20}=3.15 \end{cases}\]

\((f)\;\) Here, we try \[ \left(\begin{matrix} 1 & -2 & 1 & 1 \\ 3 & 0 & 2 & -2 \\ 0 & 4 & -1 & -1 \\ 5 & 0 & 3 & -1 \end{matrix} \left|\,\begin{matrix} 2 \\ -8 \\ 1 \\ 0 \end{matrix}\right.\right) \xrightarrow[R_4-5R_1]{R_2-3R_1} \left(\begin{matrix} 1 & -2 & 1 & 1 \\ 0 & 6 & -1 & -5 \\ 0 & 4 & -1 & -1 \\ 0 & 10 & -2 & -6 \end{matrix} \left|\,\begin{matrix} 2 \\ -14 \\ 1 \\ -10 \end{matrix}\right.\right) \] We could continue as usual, but notice that if we subtract rows 2 and 3 from row 4 we get: \[ \xrightarrow{R_4-R_3} \left(\begin{matrix} 1 & -2 & 1 & 1 \\ 0 & 6 & -1 & -5 \\ 0 & 4 & -1 & -1 \\ 0 & 6 & -1 & -5 \end{matrix} \left|\,\begin{matrix} 2 \\ -14 \\ 1 \\ -11 \end{matrix}\right.\right) \xrightarrow{R_4-R_2} \left(\begin{matrix} 1 & -2 & 1 & 1 \\ 0 & 6 & -1 & -5 \\ 0 & 4 & -1 & -1 \\ 0 & 0 & 0 & 0 \end{matrix} \left|\,\begin{matrix} 2 \\ -14 \\ 1 \\ 3 \end{matrix}\right.\right) \] The last equation says \(0=3\). This is clearly impossible: the equations are inconsistent, so there are no solutions.
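Inconsistency always shows up mechanically like this: forward elimination exposes a row \((0\;\cdots\;0 \mid c)\) with \(c\neq0\). A small Python sketch (the `forward_eliminate` helper is our own) makes the same diagnosis for part (f):

```python
from fractions import Fraction

def forward_eliminate(M):
    """Row-reduce an augmented matrix to echelon form, exactly."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0]) - 1
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

# Part (f): the last row of the echelon form reads (0 0 0 0 | nonzero),
# i.e. the impossible equation 0 = c, so the system is inconsistent.
M = forward_eliminate([[1, -2, 1, 1, 2],
                       [3, 0, 2, -2, -8],
                       [0, 4, -1, -1, 1],
                       [5, 0, 3, -1, 0]])
last = M[-1]
print(last[:4], last[4])
assert all(v == 0 for v in last[:4]) and last[4] != 0
```

The intermediate rows differ from the hand calculation (the code divides where we avoided fractions), but the zero row with nonzero right-hand side appears either way.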


Question 1.2 \(\;\;\) Evaluate the following determinants. (You may find that using the properties of determinants speeds some of these up considerably.)

\(\;\;\;(a)\;\;\;\begin{vmatrix} 2 & 2 & 3 \\ -2 & 2 & 1 \\ 1 & 3 & 3 \end{vmatrix} \hspace{1cm}\) \((b)\;\;\begin{vmatrix} 2 & 2 & 3 \\ 1 & 2 & 1 \\ -2 & 3 & 3 \end{vmatrix} \hspace{2.2cm}\) \((c)^*\;\;\begin{vmatrix} 3 & 3 & -1 \\ -4 & -3 & 2 \\ -2 & -2 & 1 \end{vmatrix}\)

\(\;\;\;(d)^*\;\;\;\begin{vmatrix} a & b & c \\ 0 & d & e \\ 0 & 0 & f \end{vmatrix} \hspace{1.3cm}\) \((e)\;\begin{vmatrix} 1 & 2 & 3 & 3 \\ 2 & 3 & 1 & 3 \\ 1 & 1 & 2 & 3 \\ -1 & 2 & 3 & -2 \end{vmatrix} \hspace{1cm}\) \((f)\;\begin{vmatrix} 3 & 2 & 1 & 3 \\ 2 & 1 & 1 & 3 \\ 1 & 1 & 2 & 3 \\ -1 & 2 & 3 & -2 \end{vmatrix}\)

\(\;\;\;(g)\;\;\begin{vmatrix} 1 & 2 & 1 & 1 & 3 \\ -2 & 1 & -1 & -3 & 1 \\ 2 & -2 & 1 & -1 & -2 \\ -1 & 1 & 2 & -3 & 1 \\ -3 & 2 & 1 & 2 & 1 \end{vmatrix} \hspace{1.5cm}\) \((h)^*\;\begin{vmatrix} 1 & 1 & 0 & 0 \\ -1 & 1 & 1 & 0 \\ 0 & -1 & 1 & 1 \\ 0 & 0 & -1 & 1 \end{vmatrix}\)

\(\;\;\;(i)\;\;\) The \(n\times n\) analogue of part \((h)\), which has \(1\)’s on the diagonal, \(1\)’s directly above the diagonal and \(-1\)’s directly below it. (This might take some thought but it does have a neat answer!) \[\begin{vmatrix} 1 & 1 & 0 & \cdots & 0 & 0 \\ -1 & 1 & 1 & \cdots & 0 & 0 \\ 0 & -1 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 1 \\ 0 & 0 & 0 & \cdots & -1 & 1 \end{vmatrix} \]


Quick Check

\((a)\;\) \(-4\) \(\hspace{1cm}\) \((b)\;\) \(17\) \(\hspace{1cm}\) \((c)\;\) \(1\) \(\hspace{1cm}\) \((d)\;\) \(adf\)

\((e)\;\) \(2\) \(\hspace{1cm}\) \((f)\;\) \(4\) \(\hspace{1cm}\) \((g)\;\) \(-9\) \(\hspace{1cm}\) \((h)\;\) \(5\) \(\hspace{1cm}\) \((i)\;\) Counting rabbits…


Solution

One could calculate these directly, using the usual (cumbersome) formula. Instead, we’ll apply row/column operations to obtain a row/column with only one non-zero entry, thus reducing the size of the determinant. There are of course many ways to do this.

\((a)\;\) Here we could use row operations to get two zeros, then transpose to put these in the first row: \[\begin{align*} \begin{vmatrix} 2 & 2 & 3 \\ -2 & 2 & 1 \\ 1 & 3 & 3 \end{vmatrix} & \;\;\overset{\begin{subarray}{l} R_1-2R_3 \\[3pt] R_2+2R_3 \\[5pt]\end{subarray}}{=}\;\; \begin{vmatrix} 0 & -4 & -3 \\ 0 & 8 & 7 \\ 1 & 3 & 3 \end{vmatrix} \overset{\scriptstyle transp.}{=} \begin{vmatrix} 0 & 0 & 1 \\ -4 & 8 & 3 \\ -3 & 7 & 3 \end{vmatrix} \\ &=\begin{vmatrix} -4 & 8 \\ -3 & 7 \end{vmatrix}=-28+24=-4 \end{align*}\]

\((b)\;\) Here I use the middle row to create two zeros in the first column, then transpose: \[\begin{align*} \begin{vmatrix} 2 & 2 & 3 \\ 1 & 2 & 1 \\ -2 & 3 & 3 \end{vmatrix} & \;\;\overset{\begin{subarray}{l} R_1-2R_2 \\[3pt] R_3+2R_2 \\[5pt]\end{subarray}}{=}\;\; \begin{vmatrix} 0 & -2 & 1 \\ 1 & 2 & 1 \\ 0 & 7 & 5 \end{vmatrix} \overset{\scriptstyle transp.}{=} \begin{vmatrix} 0 & 1 & 0 \\ -2 & 2 & 7 \\ 1 & 1 & 5 \end{vmatrix} \\ &= -\begin{vmatrix} -2 & 7 \\ 1 & 5 \end{vmatrix}=-(-10-7)=17 \end{align*}\]

\((c)\;\) Tutorial question

See if you can identify the row operations here: \[\begin{align*} \begin{vmatrix} 3 & 3 & -1 \\ -4 & -3 & 2 \\ -2 & -2 & 1 \end{vmatrix} = \begin{vmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ -2 & -2 & 1 \end{vmatrix} &= \begin{vmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -2 & -2 & 1 \end{vmatrix} \\ &= \begin{vmatrix} 1 & 0 \\ -2 & 1 \end{vmatrix}=1 \end{align*}\] It’s not always immediate to see what operations someone uses, and this is one reason why it’s a good idea to say!

\((d)\;\) Tutorial question

Here we just need to transpose and then repeatedly expand along the top row: \[\begin{vmatrix} a & b & c \\ 0 & d & e \\ 0 & 0 & f \end{vmatrix} =\begin{vmatrix} a & 0 & 0 \\ b & d & 0 \\ c & e & f \end{vmatrix}= a\begin{vmatrix} d & 0 \\ e & f \end{vmatrix}=adf\] More generally, we can see the determinant of any triangular matrix is just the product of the diagonal elements.
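The cofactor expansion used here is also easy to code directly. A short recursive Python version (the `det` helper is our own; it is fine for the small matrices in this question, though its \(O(n!)\) cost makes it hopeless for large \(n\)):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Delete row 0 and column j to form the minor.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

# The triangular case: the determinant is the product a*d*f.
a, b, c, d, e, f = 2, 3, 5, 7, 11, 13
assert det([[a, b, c], [0, d, e], [0, 0, f]]) == a * d * f

# A couple of the numerical answers above:
print(det([[2, 2, 3], [-2, 2, 1], [1, 3, 3]]))       # part (a): -4
print(det([[3, 3, -1], [-4, -3, 2], [-2, -2, 1]]))   # part (c): 1
```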

\((e)\;\) To save some writing, we can operate on the columns as though they were rows (because of the transpose property). Thus \[\begin{align*} \begin{vmatrix} 1 & 2 & 3 & 3 \\ 2 & 3 & 1 & 3 \\ 1 & 1 & 2 & 3 \\ -1 & 2 & 3 & -2 \end{vmatrix} & \;\;\begin{subarray}{c} C_2-2C_1 \\[6pt] C_3-3C_1 \\[6pt] {\displaystyle =} \\[6pt] C_4-3C_1 \end{subarray}\;\; \begin{vmatrix} 1 & 0 & 0 & 0 \\ 2 & -1 & -5 & -3 \\ 1 & -1 & -1 & 0 \\ -1 & 4 & 6 & 1 \end{vmatrix} \\ &\hspace{0.6cm}= \begin{vmatrix} -1 & -5 & -3 \\ -1 & -1 & 0 \\ 4 & 6 & 1 \end{vmatrix} \;\;\begin{subarray}{c} C_2-5C_1 \\[6pt] {\displaystyle =} \\[6pt] C_3-3C_1 \end{subarray}\;\; \begin{vmatrix} -1 & 0 & 0 \\ -1 & 4 & 3 \\ 4 & -14 & -11 \end{vmatrix} \\ &\hspace{0.6cm}= -\begin{vmatrix} 4 & 3 \\ -14 & -11 \end{vmatrix}\;\;=-(-44+42)=2 \end{align*}\]

\((f)\;\) Here I do a mixture of row and column operations: \[\begin{align*} \begin{vmatrix} 3 & 2 & 1 & 3 \\ 2 & 1 & 1 & 3 \\ 1 & 1 & 2 & 3 \\ -1 & 2 & 3 & -2 \end{vmatrix} &\;\;\begin{subarray}{c} R_1-R_2 \\[6pt] {\displaystyle =} \end{subarray}\;\; \begin{vmatrix} 1 & 1 & 0 & 0 \\ 2 & 1 & 1 & 3 \\ 1 & 1 & 2 & 3 \\ -1 & 2 & 3 & -2 \end{vmatrix} \\ &\;\;\begin{subarray}{c} C_2-C_1 \\[6pt] {\displaystyle =} \end{subarray}\;\; \begin{vmatrix} 1 & 0 & 0 & 0 \\ 2 & -1 & 1 & 3 \\ 1 & 0 & 2 & 3 \\ -1 & 3 & 3 & -2 \end{vmatrix} \\[5pt] &\hspace{0.6cm}= \begin{vmatrix} -1 & 1 & 3 \\ 0 & 2 & 3 \\ 3 & 3 & -2 \end{vmatrix} \;\;\begin{subarray}{c} R_3+3R_1 \\[6pt] {\displaystyle =} \end{subarray}\;\; \begin{vmatrix} -1 & 1 & 3 \\ 0 & 2 & 3 \\ 0 & 6 & 7 \end{vmatrix} \\ &\hspace{0.6cm}= -\begin{vmatrix} 2 & 3 \\ 6 & 7 \end{vmatrix}\;\;=-(14-18)=4 \end{align*}\]

\((g)\;\) Here we have to do a lot of work, but the same ideas will help: create lots of zeros in a row (or column) and reduce to a smaller determinant \[ \begin{vmatrix} 1 & 2 & 1 & 1 & 3 \\ -2 & 1 & -1 & -3 & 1 \\ 2 & -2 & 1 & -1 & -2 \\ -1 & 1 & 2 & -3 & 1 \\ -3 & 2 & 1 & 2 & 1 \end{vmatrix} \;\;\begin{subarray}{c} C_2-2C_1 \\[6pt] C_3-C_1 \\[6pt] {\displaystyle =} \\[6pt] C_4-C_1 \\[6pt] C_5-3C_1 \end{subarray}\;\; \begin{vmatrix} 1 & 0 & 0 & 0 & 0 \\ -2 & 5 & 1 & -1 & 7 \\ 2 & -6 & -1 & -3 & -8 \\ -1 & 3 & 3 & -2 & 4 \\ -3 & 8 & 4 & 5 & 10 \end{vmatrix} \] \[ =\begin{vmatrix} 5 & 1 & -1 & 7 \\ -6 & -1 & -3 & -8 \\ 3 & 3 & -2 & 4 \\ 8 & 4 & 5 & 10 \end{vmatrix} \\[5pt] \;\;\begin{subarray}{c} C_1-5C_2 \\[6pt] C_3+C_2 \\[6pt] {\displaystyle =} \\[6pt] C_4-7C_2 \end{subarray}\;\; \begin{vmatrix} 0 & 1 & 0 & 0 \\ -1 & -1 & -4 & -1 \\ -12& 3 & 1 &-17 \\ -12& 4 & 9 &-18 \end{vmatrix} \] \[ \begin{subarray}{c} C_1\leftrightarrow C_2 \\[6pt] {\displaystyle =} \end{subarray}\;\; -\begin{vmatrix} 1 & 0 & 0 & 0 \\ -1 & -1 & -4 & -1 \\ 3 &-12 & 1 &-17 \\ 4 &-12 & 9 &-18 \end{vmatrix} = -\begin{vmatrix} -1 & -4 & -1 \\ -12& 1 &-17 \\ -12& 9 &-18 \end{vmatrix} \\[6pt]\] \[\begin{subarray}{c} C_2-4C_1 \\[6pt] {\displaystyle =} \\[6pt] C_3-C_1 \end{subarray}\;\; -\begin{vmatrix} -1 & 0 & 0 \\ -12 & 49 & -5 \\ -12 & 57 & -6 \end{vmatrix}\;\;=\;\; \begin{vmatrix} 49 & -5 \\ 57 & -6 \end{vmatrix}\;\;=-294+285=-9 \]

\((h)\;\) Tutorial question

Here it’s not too bad just to evaluate directly by expanding along the top \[\begin{align*}\begin{vmatrix} 1 & 1 & 0 & 0 \\ -1 & 1 & 1 & 0 \\ 0 & -1 & 1 & 1 \\ 0 & 0 & -1 & 1 \end{vmatrix} &= \begin{vmatrix} 1 & 1 & 0 \\ -1 & 1 & 1 \\ 0 & -1 & 1 \end{vmatrix}- \begin{vmatrix} -1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & -1 & 1 \end{vmatrix} \\ &= \begin{vmatrix} 1 & 1 & 0 \\ -1 & 1 & 1 \\ 0 & -1 & 1 \end{vmatrix}+ \begin{vmatrix} 1 & 1 \\ -1 & 1 \end{vmatrix} \\[3pt] &=\begin{vmatrix} 1 & 1 \\ -1 & 1 \end{vmatrix} -\begin{vmatrix} -1 & 1 \\ 0 & 1 \end{vmatrix} +\begin{vmatrix} 1 & 1 \\ -1 & 1 \end{vmatrix} \\[5pt] &=2-(-1)+2=5 \end{align*}\]

\((i)\;\) Let \(F_n\) be the \(n\times n\) determinant. Expanding along the top row, exactly as in the first two steps in part (h), gives \(F_n=F_{n-1}+F_{n-2}\). This is the recurrence relation defining the Fibonacci sequence (look it up…!)

In particular, starting from \(F_1=1\) and \(F_2=2\), we get \(F_3=3\), \(F_4=5\), \(F_5=8\),…
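You can watch the Fibonacci numbers appear by computing these determinants directly. A Python sketch (using a small recursive determinant of our own, so the block stands alone):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def tri(n):
    """The n x n matrix with 1 on and just above the diagonal,
    -1 just below it, and 0 elsewhere."""
    return [[1 if j in (i, i + 1) else -1 if j == i - 1 else 0
             for j in range(n)] for i in range(n)]

dets = [det(tri(n)) for n in range(1, 9)]
print(dets)  # [1, 2, 3, 5, 8, 13, 21, 34] -- Fibonacci numbers
assert all(dets[n] == dets[n - 1] + dets[n - 2] for n in range(2, 8))
```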


Week 2 Material

Question 1.3 \(\;\) Given that \[\begin{align*} A=\begin{pmatrix} 1&3 \\ 2&-1 \end{pmatrix},\quad B=\begin{pmatrix} 2&1 \\ 3&1 \end{pmatrix},\quad C=\begin{pmatrix} 5&-1 \\ 2&-3 \\ 4&2 \end{pmatrix}, \end{align*}\] \[\begin{align*} D=\begin{pmatrix} -2&4&-3 \\ 2&1&2 \end{pmatrix},\quad E=\begin{pmatrix} 3&-1&1 \\ 4&-2&1 \\ 2&-5&2 \end{pmatrix},\quad F=\begin{pmatrix} 4&-2&1 \\ 2&1&5 \\ 1&-1&2 \end{pmatrix}, \end{align*}\] find the following matrices (or if they don’t exist, say why):

\((a)\hspace{0.2cm} A+B \hspace{1.0cm}\) \((b)\hspace{0.2cm} A+C \hspace{1.0cm}\) \((c)\hspace{0.2cm} E+2F \hspace{1.0cm}\) \((d)\hspace{0.2cm} AB \hspace{1.0cm}\) \((e)\hspace{0.2cm} BA \hspace{1.0cm}\)

\((f)\hspace{0.2cm} AC \hspace{1.0cm}\) \((g)\hspace{0.2cm} CA \hspace{1.0cm}\) \((h)\hspace{0.2cm} EF \hspace{1.0cm}\) \((i)\hspace{0.2cm} AB+DC \hspace{1.0cm}\) \((j)\hspace{0.2cm} DF \hspace{1.0cm}\)


Quick Check

\((a)\hspace{0.2cm}\displaystyle \begin{pmatrix} 3 & 4 \\ 5 & 0 \end{pmatrix}\) \(\hspace{4.2cm}\) \((b)\hspace{0.2cm}\) Doesn’t exist.

\((c)\hspace{0.2cm}\displaystyle \begin{pmatrix} 11 & -5 & 3 \\ 8 & 0 & 11 \\ 4 & -7 & 6 \end{pmatrix}\) \(\hspace{2.5cm}\) \((d)\hspace{0.2cm}\displaystyle \begin{pmatrix} 11 & 4 \\ 1 & 1 \end{pmatrix}\)

\((e)\hspace{0.2cm}\displaystyle \begin{pmatrix} 4 & 5 \\ 5 & 8 \end{pmatrix}\) \(\hspace{4.2cm}\) \((f)\hspace{0.2cm}\) Doesn’t exist.

\((g)\hspace{0.2cm}\displaystyle \begin{pmatrix} 3 & 16 \\ -4 & 9 \\ 8 & 10 \end{pmatrix}\) \(\hspace{3.4cm}\) \((h)\hspace{0.2cm}\displaystyle \begin{pmatrix} 11 & -8 & 0 \\ 13 & -11 & -4 \\ 0 & -11 & -19 \end{pmatrix}\)

\((i)\hspace{0.2cm}\displaystyle \begin{pmatrix} -3 & -12 \\ 21 & 0 \end{pmatrix}\) \(\hspace{3.3cm}\) \((j)\hspace{0.2cm}\displaystyle \begin{pmatrix} -3 & 11 & 12 \\ 12 & -5 & 11 \end{pmatrix}\)


Solution

\((a)\hspace{0.2cm}\displaystyle A+B=\begin{pmatrix} 3 & 4 \\ 5 & 0 \end{pmatrix}\) \(\hspace{3.5cm}\) \((b)\hspace{0.2cm} A+C\) doesn’t exist.

\((c)\hspace{0.2cm}\displaystyle E+2F=\begin{pmatrix} 11 & -5 & 3 \\ 8 & 0 & 11 \\ 4 & -7 & 6 \end{pmatrix}\) \(\hspace{1.5cm}\) \((d)\hspace{0.2cm}\displaystyle AB=\begin{pmatrix} 11 & 4 \\ 1 & 1 \end{pmatrix}\)

\((e)\hspace{0.2cm}\displaystyle BA=\begin{pmatrix} 4 & 5 \\ 5 & 8 \end{pmatrix}\) \(\hspace{4.2cm}\) \((f)\hspace{0.2cm} AC\) doesn’t exist.

\((g)\hspace{0.2cm}\displaystyle CA=\begin{pmatrix} 3 & 16 \\ -4 & 9 \\ 8 & 10 \end{pmatrix}\) \(\hspace{3.4cm}\) \((h)\hspace{0.2cm}\displaystyle EF=\begin{pmatrix} 11 & -8 & 0 \\ 13 & -11 & -4 \\ 0 & -11 & -19 \end{pmatrix}\)

\((i)\hspace{0.2cm}\displaystyle AB+DC=\begin{pmatrix} -3 & -12 \\ 21 & 0 \end{pmatrix}\) \(\hspace{1.9cm}\) \((j)\hspace{0.2cm}\displaystyle DF=\begin{pmatrix} -3 & 11 & 12 \\ 12 & -5 & 11 \end{pmatrix}\)
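All of these sums and products take a few lines of Python to check. In the sketch below (the `matmul` helper is our own) the dimension assertion also explains why \(AC\) fails: the inner dimensions disagree.

```python
def matmul(A, B):
    """Product of an m x n and an n x p matrix, stored as lists of rows."""
    assert len(A[0]) == len(B), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 3], [2, -1]]
B = [[2, 1], [3, 1]]
C = [[5, -1], [2, -3], [4, 2]]
D = [[-2, 4, -3], [2, 1, 2]]

print(matmul(A, B))   # part (d)
print(matmul(C, A))   # part (g)
# matmul(A, C) would fail the assertion: A is 2x2 but C is 3x2,
# which is exactly why AC doesn't exist.
AB, DC = matmul(A, B), matmul(D, C)
print([[x + y for x, y in zip(r, s)] for r, s in zip(AB, DC)])  # part (i)
```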


Question 1.4 \(\;\) Use Gaussian elimination to find the inverses of the following matrices, or show that no inverse exists:

\((a)^*\hspace{0.2cm} \begin{pmatrix} 1 & 1& -1 \\ 4 & 3 & -6 \\ -2 & -2 & 3 \end{pmatrix}\hspace{1.0cm}\) \((b)\hspace{0.2cm} \begin{pmatrix} 3 & 1 & -1 \\ -4 & 1 & 2 \\ -2 & 0 & 1 \end{pmatrix}\hspace{1.0cm}\) \((c)\hspace{0.2cm} \begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 4 & 5 & 3 \end{pmatrix}\hspace{1.0cm}\)

\((d)\hspace{0.2cm} \begin{pmatrix} 1 & 2 & -3 \\ 1 & -2 & 1 \\ 5 & -2 & -3 \end{pmatrix}\hspace{1.0cm}\) \((e)\hspace{0.2cm} \begin{pmatrix} a & b \\ c & d \end{pmatrix}\;\;\) assuming \(ad-bc\neq 0\).


Quick Check

\((a)\;\) \(A^{-1}=\begin{pmatrix} 3 & 1 & 3 \\ 0 & -1 & -2 \\ 2 & 0 & 1 \end{pmatrix}\). \(\hspace{1cm}\) \((b)\;\) \(A^{-1}= \begin{pmatrix} 1 & -1 & 3 \\ 0 & 1 & -2 \\ 2 & -2 & 7 \end{pmatrix}\).

\((c)\;\) \(A^{-1}=\begin{pmatrix} \frac73 & -3 & -\frac13 \\ -\frac83 & 3 & \frac23 \\ \frac43 & -1 & -\frac13 \end{pmatrix}\). \(\hspace{1cm}\) \((d)\;\) \(A^{-1}\) doesn’t exist.

\((e)\;\) Hint: be careful not to divide by zero!


Solution

In each case, we write a big augmented matrix with the identity matrix \(I_3\) on the right, and apply row operations to get \(I_3\) on the left. The inverse then appears on the right.

\((a)\;\) Tutorial question

\[ \left(\begin{matrix} 1 & 1 & -1 \\ 4 & 3 & -6 \\ -2 & -2 & 3 \end{matrix} \left|\,\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right.\right) \xrightarrow[R_3+2R_1]{R_2-4R_1} \left(\begin{matrix} 1 & 1 & -1 \\ 0 & -1 & -2 \\ 0 & 0 & 1 \end{matrix} \left|\,\begin{matrix} 1 & 0 & 0 \\ -4 & 1 & 0 \\ 2 & 0 & 1 \end{matrix}\right.\right) \] \[ \xrightarrow{-R_2} \left(\begin{matrix} 1 & 1 & -1 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{matrix} \left|\,\begin{matrix} 1 & 0 & 0 \\ 4 & -1 & 0 \\ 2 & 0 & 1 \end{matrix}\right.\right) \xrightarrow[R_2-2R_3]{R_1+R_3} \left(\begin{matrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix} \left|\,\begin{matrix} 3 & 0 & 1 \\ 0 & -1 & -2 \\ 2 & 0 & 1 \end{matrix}\right.\right)\] \[ \xrightarrow{R_1-R_2} \left(\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix} \left|\,\begin{matrix} 3 & 1 & 3 \\ 0 & -1 & -2 \\ 2 & 0 & 1 \end{matrix}\right.\right)\] So the inverse is \(A^{-1}=\begin{pmatrix} 3 & 1 & 3 \\ 0 & -1 & -2 \\ 2 & 0 & 1 \end{pmatrix}\).

\((b)\;\) You could begin with \(\frac13R_1\) to get a 1 in the top left entry. However, an alternative is to do \(R_1+R_3\), which avoids fractions: \[ \left(\begin{matrix} 3 & 1 & -1 \\ -4 & 1 & 2 \\ -2 & 0 & 1 \end{matrix} \left|\,\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right.\right) \xrightarrow{R_1+R_3} \left(\begin{matrix} 1 & 1 & 0 \\ -4 & 1 & 2 \\ -2 & 0 & 1 \end{matrix} \left|\,\begin{matrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right.\right) \] \[ \xrightarrow[R_3+2R_1]{R_2+4R_1} \left(\begin{matrix} 1 & 1 & 0 \\ 0 & 5 & 2 \\ 0 & 2 & 1 \end{matrix} \left|\,\begin{matrix} 1 & 0 & 1 \\ 4 & 1 & 4 \\ 2 & 0 & 3 \end{matrix}\right.\right) \xrightarrow{R_2-2R_3} \left(\begin{matrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 2 & 1 \end{matrix} \left|\,\begin{matrix} 1 & 0 & 1 \\ 0 & 1 & -2 \\ 2 & 0 & 3 \end{matrix}\right.\right) \] \[ \xrightarrow{R_3-2R_2} \left(\begin{matrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix} \left|\,\begin{matrix} 1 & 0 & 1 \\ 0 & 1 & -2 \\ 2 & -2 & 7 \end{matrix}\right.\right) \xrightarrow{R_1-R_2} \left(\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix} \left|\,\begin{matrix} 1 & -1 & 3 \\ 0 & 1 & -2 \\ 2 & -2 & 7 \end{matrix}\right.\right) \] Whichever EROs you use, you should find the inverse is \(A^{-1}= \begin{pmatrix} 1 & -1 & 3 \\ 0 & 1 & -2 \\ 2 & -2 & 7 \end{pmatrix}\).

\((c)\;\) \[ \left(\begin{matrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 4 & 5 & 3 \end{matrix} \left|\,\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right.\right) \xrightarrow{R_3-4R_1} \left(\begin{matrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & -3 & -9 \end{matrix} \left|\,\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -4 & 0 & 1 \end{matrix}\right.\right) \] \[ \xrightarrow{R_3+3R_2} \left(\begin{matrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & -3 \end{matrix} \left|\,\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -4 & 3 & 1 \end{matrix}\right.\right) \xrightarrow{-\frac13R_3} \left(\begin{matrix} 1 & 2 & 3 \\[3pt] 0 & 1 & 2 \\[3pt] 0 & 0 & 1 \\[3pt] \end{matrix} \left|\,\begin{matrix} 1 & 0 & 0 \\[3pt] 0 & 1 & 0 \\ \frac43 & -1 & -\frac13 \end{matrix}\right.\right) \] \[ \xrightarrow[R_2-2R_3]{R_1-3R_3} \left(\begin{matrix} 1 & 2 & 0 \\[3pt] 0 & 1 & 0 \\[3pt] 0 & 0 & 1 \\[3pt] \end{matrix} \left|\,\begin{matrix} -3 & 3 & 1 \\ -\frac83 & 3 & \frac23 \\ \frac43 & -1 & -\frac13 \end{matrix}\right.\right) \xrightarrow{R_1-2R_2} \left(\begin{matrix} 1 & 0 & 0 \\[3pt] 0 & 1 & 0 \\[3pt] 0 & 0 & 1 \\[3pt] \end{matrix} \left|\,\begin{matrix} \frac73 & -3 & -\frac13 \\ -\frac83 & 3 & \frac23 \\ \frac43 & -1 & -\frac13 \end{matrix}\right.\right) \] So the inverse is \(A^{-1}=\begin{pmatrix} \frac73 & -3 & -\frac13 \\ -\frac83 & 3 & \frac23 \\ \frac43 & -1 & -\frac13 \end{pmatrix}\).

\((d)\;\) \[ \left(\begin{matrix} 1 & 2 & -3 \\ 1 & -2 & 1 \\ 5 & -2 & -3 \end{matrix} \left|\,\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right.\right) \xrightarrow[R_3-5R_1]{R_2-R_1} \left(\begin{matrix} 1 & 2 & -3 \\ 0 & -4 & 4 \\ 0 & -12 & 12 \end{matrix} \left|\,\begin{matrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ -5 & 0 & 1 \end{matrix}\right.\right) \] \[ \xrightarrow{R_3-3R_2} \left(\begin{matrix} 1 & 2 & -3 \\ 0 & -4 & 4 \\ 0 & 0 & 0 \end{matrix} \left|\,\begin{matrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ -2 & -3 & 1 \end{matrix}\right.\right) \] The appearance of a row of zeros indicates that the matrix is not invertible. We have essentially computed that the matrix has determinant zero and \(A^{-1}\) doesn't exist.
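The augmented-matrix procedure translates directly into code. Here is a Python sketch (the `inverse` helper is our own) performing Gauss–Jordan elimination on \((A\,|\,I)\) with exact fractions; a column with no usable pivot signals the singular case seen in part (d):

```python
from fractions import Fraction

def inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination on (A | I),
    using exact fractions. Raises ValueError on a singular matrix."""
    n = len(A)
    M = [[Fraction(x) for x in row] +
         [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for i in range(n):
        p = next((r for r in range(i, n) if M[r][i] != 0), None)
        if p is None:
            raise ValueError("matrix is not invertible")
        M[i], M[p] = M[p], M[i]
        M[i] = [x / M[i][i] for x in M[i]]       # scale the pivot to 1
        for r in range(n):
            if r != i and M[r][i] != 0:          # clear the rest of the column
                M[r] = [a - M[r][i] * b for a, b in zip(M[r], M[i])]
    return [row[n:] for row in M]

print(inverse([[1, 1, -1], [4, 3, -6], [-2, -2, 3]]))   # part (a)
# inverse([[1, 2, -3], [1, -2, 1], [5, -2, -3]]) raises ValueError,
# matching part (d).
```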

\(\,\)

\((e)\;\) When a matrix contains unknown entries, it is very easy to accidentally divide by zero. We cannot simply divide the first row by \(a\), since \(a\) might equal zero. Instead, first assume \(a\neq 0\). Then proceed as usual: \[ \left(\begin{matrix} a & b \\ c & d \end{matrix} \left|\,\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right.\right) \xrightarrow{(1/a)R_1} \left(\begin{matrix} 1 & b/a \\ c & d \end{matrix} \left|\,\begin{matrix} 1/a & 0 \\ 0 & 1 \end{matrix}\right.\right) \] \[ \xrightarrow{R_2-cR_1} \left(\begin{matrix} 1 & b/a \\ 0 & d-bc/a \end{matrix} \left|\,\begin{matrix} 1/a & 0 \\ -c/a & 1 \end{matrix}\right.\right) \] \[\xrightarrow{(d-bc/a)^{-1}R_2} \left(\begin{matrix} 1 & b/a \\ 0 & 1 \end{matrix} \left|\,\begin{matrix} 1/a & 0 \\ -c/(ad-bc) & a/(ad-bc) \end{matrix}\right.\right) \]

Note this step is allowed since \(d-bc/a=\dfrac{ad-bc}{a}\neq 0\). Continuing, \[\xrightarrow{R_1-(b/a)R_2} \left(\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix} \left|\,\begin{matrix} 1/a+(bc/a)/(ad-bc) & -b/(ad-bc) \\ -c/(ad-bc) & a/(ad-bc) \end{matrix}\right.\right) \] and simplifying the right hand side gives \(\displaystyle A^{-1}= \frac{1}{ad-bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}\).

But what happens when \(a=0\)? In that case, \(ad-bc=-bc\neq 0\) and so \(b\neq 0\) and \(c\neq 0\). We can thus first swap the rows and then proceed similarly to the above. More explicitly, \[ \left(\begin{matrix} 0 & b \\ c & d \end{matrix} \left|\,\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right.\right) \xrightarrow{R_1\leftrightarrow R_2} \left(\begin{matrix} c & d \\ 0 & b \end{matrix} \left|\,\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right.\right) \xrightarrow{(1/b)R_2} \left(\begin{matrix} c & d \\ 0 & 1 \end{matrix} \left|\,\begin{matrix} 0 & 1 \\ 1/b & 0 \end{matrix}\right.\right) \] \[ \xrightarrow{R_1-dR_2} \left(\begin{matrix} c & 0 \\ 0 & 1 \end{matrix} \left|\,\begin{matrix} -d/b & 1 \\ 1/b & 0 \end{matrix}\right.\right) \xrightarrow{(1/c)R_1} \left(\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix} \left|\,\begin{matrix} -d/bc & 1/c \\ 1/b & 0 \end{matrix}\right.\right) \] and the inverse is \(\displaystyle A^{-1}= \frac{1}{-bc} \begin{pmatrix} d & -b \\ -c & 0 \end{pmatrix}\), which is actually the same formula with \(a=0\).

If you answered this question without ever breaking maths by dividing by zero then well done!
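As a final sanity check on the \(2\times2\) formula, we can multiply the claimed inverse back against the original matrix for some sample entries, including an \(a=0\) case. A quick Python sketch (the `inv2` helper is our own):

```python
from fractions import Fraction

def inv2(a, b, c, d):
    """The 2x2 inverse formula, valid whenever ad - bc != 0."""
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

# Multiply back against the original matrix: the result should be I.
for a, b, c, d in [(3, 1, 5, 2), (0, 2, 3, 4)]:
    (p, q), (r, s) = inv2(a, b, c, d)
    assert [a * p + b * r, a * q + b * s] == [1, 0]
    assert [c * p + d * r, c * q + d * s] == [0, 1]
print("2x2 inverse formula verified")
```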


Question 1.5 \(\;\) Find the elementary matrices \(E\), \(F\), \(G\) that reduce the matrix \[A=\begin{pmatrix} 2 & 3 & 1 \\ 4 & 11 & 1 \\ -2 & 12 & -5 \end{pmatrix}\] to upper triangular form. Hence find the \(LU\) decomposition of \(A\) and use it to solve \(\displaystyle A{\bf x}=\begin{pmatrix} 7 \\ 25 \\ 27\end{pmatrix}\).


Quick Check

\(x=1,\, y=2, \, z=-1.\)


Solution

We perform Gaussian elimination without normalization, keeping track of the elementary matrices: \[ \underbrace{\begin{pmatrix} 2 & 3 & 1 \\ 4 & 11 & 1 \\ -2 & 12 & -5 \end{pmatrix}}_A \xrightarrow{R_2-2R_1} \underbrace{ \begin{pmatrix} 2 & 3 & 1 \\ 0 & 5 & -1 \\ -2 & 12 & -5 \end{pmatrix}}_{EA}, \hspace{0.6cm} E=\begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}\] \[\qquad\qquad\qquad\qquad \xrightarrow{R_3+R_1} \underbrace{\begin{pmatrix} 2 & 3 & 1 \\ 0 & 5 & -1 \\ 0 & 15 & -4 \end{pmatrix}}_{FEA}, \hspace{1.0cm} F=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}\] \[\qquad\qquad\qquad\; \xrightarrow{R_3-3R_2} \underbrace{\begin{pmatrix} 2 & 3 & 1 \\ 0 & 5 & -1 \\ 0 & 0 & -1 \end{pmatrix}}_{U=GFEA}, \hspace{0.4cm} G=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -3 & 1 \end{pmatrix}\]

Then \[\begin{align*} L=E^{-1}F^{-1}G^{-1} &= \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 3 & 1 \end{pmatrix} \\ &= \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 3 & 1 \end{pmatrix} \end{align*}\]

Check: \[A=\begin{pmatrix} 2 & 3 & 1 \\ 4 & 11 & 1 \\ -2 & 12 & -5 \end{pmatrix} =\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 3 & 1 \end{pmatrix} \begin{pmatrix} 2 & 3 & 1 \\ 0 & 5 & -1 \\ 0 & 0 & -1 \end{pmatrix}=LU\]

We now have to solve \(A{\bf x}={\bf b}=\begin{pmatrix} 7 \\ 25 \\ 27\end{pmatrix}\), that is, \(LU{\bf x}={\bf b}\). First solve \(L{\bf u}={\bf b}\): \[\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 3 & 1 \end{pmatrix} \begin{pmatrix} u \\ v \\ w \end{pmatrix}=\begin{pmatrix} 7 \\ 25 \\ 27 \end{pmatrix}\] by forward substitution: \[\begin{cases} \;\;\, u \!\!\!&= 7 \\ \;2u+\;\,v \!\!\!&= 25 \\ -u+3v+w \!\!\!&= 27 \end{cases} \qquad\implies\qquad \begin{cases} u = 7 \\ v=25-2u=11 \\ w = 27+u-3v= 1\end{cases}\]

Finally, solve \(U{\bf x}={\bf u}\), that is, \[\begin{pmatrix} 2 & 3 & 1 \\ 0 & 5 & -1 \\ 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}=\begin{pmatrix} 7 \\ 11 \\ 1 \end{pmatrix}\] by backward substitution: \[\begin{cases} 2x+3y+z \!\!\!&= 7 \\ \qquad\; 5y-z \!\!\!&= 11 \\ \qquad\quad\;\;\, -z \!\!\!&= 1 \end{cases} \qquad\implies\qquad \begin{cases} x =\frac{7-3y-z}{2}=1 \\ y=\frac{11+z}{5}=2 \\ z= -1 \end{cases}\]
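The two substitution stages can also be checked in code. Here is a short sketch (an addition, not part of the required solution) using the \(L\), \(U\) and \({\bf b}\) computed above:

```python
# Solve L u = b by forward substitution, then U x = u by backward
# substitution, with L, U and b as in the worked solution.
L = [[1, 0, 0], [2, 1, 0], [-1, 3, 1]]
U = [[2, 3, 1], [0, 5, -1], [0, 0, -1]]
b = [7, 25, 27]
n = 3

# Forward substitution: L is unit lower triangular, so no division needed.
u = [0.0] * n
for i in range(n):
    u[i] = b[i] - sum(L[i][j] * u[j] for j in range(i))

# Backward substitution: divide by the diagonal entry of U.
x = [0.0] * n
for i in reversed(range(n)):
    x[i] = (u[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]

print(u)  # [7, 11, 1]
print(x)  # [1.0, 2.0, -1.0]
```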


Question 1.6 \(\;\) Find the \(LU\) decomposition of the following matrices:

\((a)^*\hspace{0.2cm} A=\begin{pmatrix} 2 & 0 & 1 \\ 4 & 1 & 1 \\ -2 & 3 & -1 \end{pmatrix}\hspace{0.5cm}\) \((b)\hspace{0.2cm} A=\begin{pmatrix} -2 & 3 & -1 \\ 4 & 1 & 1 \\ 2 & 0 & 1 \end{pmatrix}\hspace{0.5cm}\) \((c)\hspace{0.2cm} A=\begin{pmatrix} -1 & 2 & -2 \\ 1 & 1 & 1 \\ 2 & -1 & 1 \end{pmatrix}\hspace{0.5cm}\)

\((d)\hspace{0.2cm} A=\begin{pmatrix} 2 & -1 & 3 & -2 \\ 4 & -4 & 7 & -1 \\ -6 & -1 & -6 & 14 \\ 2 & 3 & 4 & 1 \end{pmatrix}\hspace{0.5cm}\) \((e)\hspace{0.2cm} A=\begin{pmatrix} 3 & -1 & 2 & 1 \\ -3 & 3 & -4 & 21 \\ 9 & -7 & 11 & 8 \\ 6 & 0 & 0 & -7 \end{pmatrix}\hspace{0.5cm}\)

\((f)\hspace{0.2cm} A=\begin{pmatrix} 2 & 1 & 3 & -1 \\ 4 & 3 & 4 & -1 \\ -2 & 0 & -2 & 4 \\ 4 & 0 & 19 & 1 \end{pmatrix}\)


Quick Check

\((a)\;\) \(L=\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 3 & 1 \end{pmatrix}, \quad U=\begin{pmatrix} 2 & 0 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 3 \end{pmatrix}\)

\((b)\;\) \(L=\begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -1 & \frac37 & 1 \end{pmatrix}, \quad U=\begin{pmatrix} -2 & 3 & -1 \\ 0 & 7 & -1 \\ 0 & 0 & \frac37 \end{pmatrix}\)

\((c)\;\) \(L=\begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ -2 & 1 & 1 \end{pmatrix}, \quad U=\begin{pmatrix} -1 & 2 & -2 \\ 0 & 3 & -1 \\ 0 & 0 & -2 \end{pmatrix}\)

\((d)\;\) \(L=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 \\ -3 & 2 & 1 & 0 \\ 1 & -2 & 3 & 1 \end{pmatrix}, \quad U=\begin{pmatrix} 2 & -1 & 3 & -2 \\ 0 & -2 & 1 & 3 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 3 \end{pmatrix}\)

\((e)\;\) \(L=\begin{pmatrix} 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 3 & -2 & 1 & 0 \\ 2 & 1 & -2 & 1 \end{pmatrix}, \quad U=\begin{pmatrix} 3 & -1 & 2 & 1 \\ 0 & 2 & -2 & 22 \\ 0 & 0 & 1 & 49 \\ 0 & 0 & 0 & 67 \end{pmatrix}\)

\((f)\;\) \(L=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 \\ -1 & 1 & 1 & 0 \\ 2 & -2 & 3 & 1 \end{pmatrix}, \quad U=\begin{pmatrix} 2 & 1 & 3 & -1 \\ 0 & 1 & -2 & 1 \\ 0 & 0 & 3 & 2 \\ 0 & 0 & 0 & -1 \end{pmatrix}\)


Solution

We do Gaussian elimination but without normalizing the rows. (Remember that row swaps are not allowed.)

\((a)\;\) Tutorial question

Make \(A\) upper triangular: \[A=\begin{pmatrix} 2 & 0 & 1 \\ 4 & 1 & 1 \\ -2 & 3 & -1 \end{pmatrix} \xrightarrow[R_3+R_1]{R_2-2R_1} \begin{pmatrix} 2 & 0 & 1 \\ 0 & 1 & -1 \\ 0 & 3 & 0 \end{pmatrix} \xrightarrow{R_3-3R_2} \begin{pmatrix} 2 & 0 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 3 \end{pmatrix}\] so \[L=\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 3 & 1 \end{pmatrix} \qquad\text{and}\qquad U=\begin{pmatrix} 2 & 0 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 3 \end{pmatrix}\] where the entries of \(L\) are the row multipliers with signs reversed.

\((b)\;\) Make \(A\) upper triangular: \[A=\begin{pmatrix} -2 & 3 & -1 \\ 4 & 1 & 1 \\ 2 & 0 & 1 \end{pmatrix} \xrightarrow[R_3+R_1]{R_2+2R_1} \begin{pmatrix} -2 & 3 & -1 \\ 0 & 7 & -1 \\ 0 & 3 & 0 \end{pmatrix} \xrightarrow{R_3-\frac37R_2} \begin{pmatrix} -2 & 3 & -1 \\ 0 & 7 & -1 \\ 0 & 0 & \frac37 \end{pmatrix}\] and we obtain \[L=\begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -1 & \frac37 & 1 \end{pmatrix} \qquad\text{and}\qquad U=\begin{pmatrix} -2 & 3 & -1 \\ 0 & 7 & -1 \\ 0 & 0 & \frac37 \end{pmatrix}\]

\((c)\;\) Make \(A\) upper triangular: \[A=\begin{pmatrix} -1 & 2 & -2 \\ 1 & 1 & 1 \\ 2 & -1 & 1 \end{pmatrix} \xrightarrow[R_3+2R_1]{R_2+R_1} \begin{pmatrix} -1 & 2 & -2 \\ 0 & 3 & -1 \\ 0 & 3 & -3 \end{pmatrix} \xrightarrow{R_3-R_2} \begin{pmatrix} -1 & 2 & -2 \\ 0 & 3 & -1 \\ 0 & 0 & -2 \end{pmatrix}\] and we obtain \[L=\begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ -2 & 1 & 1 \end{pmatrix} \qquad\text{and}\qquad U=\begin{pmatrix} -1 & 2 & -2 \\ 0 & 3 & -1 \\ 0 & 0 & -2 \end{pmatrix}\]

\((d)\;\) Make \(A\) upper triangular: \[A=\begin{pmatrix} 2 & -1 & 3 & -2 \\ 4 & -4 & 7 & -1 \\ -6 & -1 & -6 & 14 \\ 2 & 3 & 4 & 1 \end{pmatrix} \xrightarrow[R_4-R_1] {\begin{subarray}{c} R_2-2R_1 \\[5pt] R_3+3R_1 \end{subarray}} \begin{pmatrix} 2 & -1 & 3 & -2 \\ 0 & -2 & 1 & 3 \\ 0 & -4 & 3 & 8 \\ 0 & 4 & 1 & 3 \end{pmatrix} \] \[ \xrightarrow[R_4+2R_2]{R_3-2R_2} \begin{pmatrix} 2 & -1 & 3 & -2 \\ 0 & -2 & 1 & 3 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 3 & 9 \end{pmatrix} \xrightarrow{R_4-3R_3} \begin{pmatrix} 2 & -1 & 3 & -2 \\ 0 & -2 & 1 & 3 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 3 \end{pmatrix}\] so \[L=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 \\ -3 & 2 & 1 & 0 \\ 1 & -2 & 3 & 1 \end{pmatrix} \qquad\text{and}\qquad U=\begin{pmatrix} 2 & -1 & 3 & -2 \\ 0 & -2 & 1 & 3 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 3 \end{pmatrix}\]

\((e)\;\) Make \(A\) upper triangular: \[A=\begin{pmatrix} 3 & -1 & 2 & 1 \\ -3 & 3 & -4 & 21 \\ 9 & -7 & 11 & 8 \\ 6 & 0 & 0 & -7 \end{pmatrix} \xrightarrow[R_4-2R_1] {\begin{subarray}{c} R_2+R_1 \\[5pt] R_3-3R_1 \end{subarray}} \begin{pmatrix} 3 & -1 & 2 & 1 \\ 0 & 2 & -2 & 22 \\ 0 & -4 & 5 & 5 \\ 0 & 2 & -4 & -9 \end{pmatrix} \] \[ \xrightarrow[R_4-R_2]{R_3+2R_2} \begin{pmatrix} 3 & -1 & 2 & 1 \\ 0 & 2 & -2 & 22 \\ 0 & 0 & 1 & 49 \\ 0 & 0 & -2 & -31 \end{pmatrix} \xrightarrow{R_4+2R_3} \begin{pmatrix} 3 & -1 & 2 & 1 \\ 0 & 2 & -2 & 22 \\0 & 0 & 1 & 49 \\ 0 & 0 & 0 & 67 \end{pmatrix}\] so \[L=\begin{pmatrix} 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 3 & -2 & 1 & 0 \\ 2 & 1 & -2 & 1 \end{pmatrix} \qquad\text{and}\qquad U=\begin{pmatrix} 3 & -1 & 2 & 1 \\ 0 & 2 & -2 & 22 \\ 0 & 0 & 1 & 49 \\ 0 & 0 & 0 & 67 \end{pmatrix}\]

\((f)\;\) Make \(A\) upper triangular: \[A=\begin{pmatrix} 2 & 1 & 3 & -1 \\ 4 & 3 & 4 & -1 \\ -2 & 0 & -2 & 4 \\ 4 & 0 & 19 & 1 \end{pmatrix} \xrightarrow[R_4-2R_1] {\begin{subarray}{c} R_2-2R_1 \\[5pt] R_3+R_1 \end{subarray}} \begin{pmatrix} 2 & 1 & 3 & -1 \\ 0 & 1 & -2 & 1 \\ 0 & 1 & 1 & 3 \\ 0 & -2 & 13 & 3 \end{pmatrix} \] \[\xrightarrow[R_4+2R_2]{R_3-R_2} \begin{pmatrix} 2 & 1 & 3 & -1 \\ 0 & 1 & -2 & 1 \\ 0 & 0 & 3 & 2 \\ 0 & 0 & 9 & 5 \end{pmatrix} \xrightarrow{R_4-3R_3} \begin{pmatrix} 2 & 1 & 3 & -1 \\ 0 & 1 & -2 & 1 \\ 0 & 0 & 3 & 2 \\ 0 & 0 & 0 & -1 \end{pmatrix}\] so \[L=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 \\ -1 & 1 & 1 & 0 \\ 2 & -2 & 3 & 1 \end{pmatrix} \qquad\text{and}\qquad U=\begin{pmatrix} 2 & 1 & 3 & -1 \\ 0 & 1 & -2 & 1 \\ 0 & 0 & 3 & 2 \\ 0 & 0 & 0 & -1 \end{pmatrix}\]
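The elimination pattern used in every part above can be automated. Below is a minimal Doolittle-style sketch (an illustration, not the course's required hand method, with no pivoting): the multiplier used to clear each entry is stored directly in \(L\), sign-reversed relative to the row operation, and part \((a)\) is reproduced.

```python
# LU decomposition without pivoting: the multiplier m for operation
# R_i - m R_k is recorded as L[i][k].
def lu_no_pivot(A):
    n = len(A)
    U = [row[:] for row in A]                                # becomes U
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]                            # row multiplier
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

# Part (a)
L, U = lu_no_pivot([[2.0, 0.0, 1.0], [4.0, 1.0, 1.0], [-2.0, 3.0, -1.0]])
print(L)  # [[1,0,0],[2,1,0],[-1,3,1]] (as floats)
print(U)  # [[2,0,1],[0,1,-1],[0,0,3]] (as floats)
```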


Question 1.7 \(\;\) Find the \(LU\) decomposition of \(A=\begin{pmatrix} 2 & 1 & 3 & -1 \\ 4 & 3 & 4 & -1 \\ -2 & 0 & -2 & 4 \\ 4 & 0 & 19 & 1 \end{pmatrix},\) and hence solve \(A{\bf x}={\bf b}\) with

\((a)\hspace{0.2cm} {\bf b}=\begin{pmatrix} 3 \\ 8 \\ -8 \\ -17 \end{pmatrix}\hspace{0.5cm}\) \((b)\hspace{0.2cm} {\bf b}=\begin{pmatrix} -1 \\ -2 \\ -8 \\ -26 \end{pmatrix}\hspace{0.5cm}\) \((c)\hspace{0.2cm} {\bf b}=\begin{pmatrix} 3 \\ 4 \\ -4 \\ 14 \end{pmatrix}\hspace{0.5cm}\) \((d)\hspace{0.2cm} {\bf b}=\begin{pmatrix} 7 \\ 12 \\ -8 \\ 22 \end{pmatrix}\)


Quick Check

\((a)\hspace{0.5cm}\) \(\displaystyle{\bf x}=\begin{pmatrix} 1 \\ 2 \\ -1 \\ -2 \end{pmatrix}\). \(\hspace{1cm}\) \((b)\hspace{0.5cm}\) \(\displaystyle{\bf x}=\begin{pmatrix} -1 \\ 1 \\ -1 \\ -3 \end{pmatrix}\). \(\hspace{1cm}\) \((c)\hspace{0.5cm}\) \(\displaystyle{\bf x}=\begin{pmatrix} -1 \\ 1 \\ 1 \\ -1 \end{pmatrix}\). \(\hspace{1cm}\) \((d)\hspace{0.5cm}\) \(\displaystyle{\bf x}=\begin{pmatrix} 1 \\ 1 \\ 1 \\ -1 \end{pmatrix}\).


Solution

First make \(A\) upper triangular, using higher rows to fix lower ones: \[A=\begin{pmatrix} 2 & 1 & 3 & -1 \\ 4 & 3 & 4 & -1 \\ -2 & 0 & -2 & 4 \\ 4 & 0 & 19 & 1 \end{pmatrix} \xrightarrow[R_4-2R_1] {\begin{subarray}{c} R_2-2R_1 \\[5pt] R_3+R_1 \end{subarray}} \begin{pmatrix} 2 & 1 & 3 & -1 \\ 0 & 1 & -2 & 1 \\ 0 & 1 & 1 & 3 \\ 0 & -2 & 13 & 3 \end{pmatrix} \] \[ \xrightarrow[R_4+2R_2]{R_3-R_2} \begin{pmatrix} 2 & 1 & 3 & -1 \\ 0 & 1 & -2 & 1 \\ 0 & 0 & 3 & 2 \\ 0 & 0 & 9 & 5 \end{pmatrix} \xrightarrow{R_4-3R_3} \begin{pmatrix} 2 & 1 & 3 & -1 \\ 0 & 1 & -2 & 1 \\ 0 & 0 & 3 & 2 \\ 0 & 0 & 0 & -1 \end{pmatrix}\] so \(A=LU\) where \[L=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 \\ -1 & 1 & 1 & 0 \\ 2 & -2 & 3 & 1 \end{pmatrix} \qquad\text{and}\qquad U=\begin{pmatrix} 2 & 1 & 3 & -1 \\ 0 & 1 & -2 & 1 \\ 0 & 0 & 3 & 2 \\ 0 & 0 & 0 & -1 \end{pmatrix}\]

\((a)\;\) Now solve \(L{\bf u}={\bf b}\), i.e. \[\begin{pmatrix} 1 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 \\ -1 & 1 & 1 & 0 \\ 2 & -2 & 3 & 1 \end{pmatrix} \begin{pmatrix} s \\ t \\ u \\ v \end{pmatrix}= \begin{pmatrix} 3 \\ 8 \\ -8 \\ -17 \end{pmatrix}\] by forward substitution: \[\begin{cases} \;\,\,s \!\!\!&= 3 \\ \,\,2s+\;\,t \!\!\!&=8 \\ -s+\;\,t+\;\,u \!\!\!&= -8 \\ \,\,2s-2t+3u+v \!\!\!&=-17 \end{cases} \quad\implies\quad \begin{cases} s=3 \\ t=8-2s=2 \\ u= -8+s-t= -7 \\ v= -17-2s+2t-3u = 2\end{cases} \]

Then solve \(U{\bf x}={\bf u}\), i.e. \[\begin{pmatrix} 2 & 1 & 3 & -1 \\ 0 & 1 & -2 & 1 \\ 0 & 0 & 3 & 2 \\ 0 & 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} w \\ x \\ y \\ z \end{pmatrix}=\begin{pmatrix} 3 \\ 2 \\ -7 \\ 2 \end{pmatrix}\] by backward substitution: \[\begin{cases} 2w+x+3y-z &= 3 \\ x-2y+z&=2 \\ 3y+2z &= -7 \\ -z &=2 \end{cases} \qquad\implies\qquad \begin{cases} w=\frac{3-x-3y+z}{2}=1 \\ x= 2+2y-z=2 \\ y=\frac{-7-2z}{3}=-1 \\ z= -2 \end{cases} \] and so \(\displaystyle{\bf x}=\begin{pmatrix} 1 \\ 2 \\ -1 \\ -2 \end{pmatrix}\).

Similarly for the next parts we have:

\((b)\hspace{0.5cm}\) \(\displaystyle{\bf u}=\begin{pmatrix} -1 \\ 0 \\ -9 \\ 3 \end{pmatrix}\) and \(\displaystyle{\bf x}=\begin{pmatrix} -1 \\ 1 \\ -1 \\ -3 \end{pmatrix}\),

\((c)\hspace{0.5cm}\) \(\displaystyle{\bf u}=\begin{pmatrix} 3 \\ -2 \\ 1 \\ 1 \end{pmatrix}\) and \(\displaystyle{\bf x}=\begin{pmatrix} -1 \\ 1 \\ 1 \\ -1 \end{pmatrix}\),

\((d)\hspace{0.5cm}\) \(\displaystyle{\bf u}=\begin{pmatrix} 7 \\ -2 \\ 1 \\ 1 \end{pmatrix}\) and \(\displaystyle{\bf x}=\begin{pmatrix} 1 \\ 1 \\ 1 \\ -1 \end{pmatrix}\).
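The point of this question is that the factorisation is done once and each new right-hand side then costs only two substitutions. A sketch (an addition to the worked solution) covering all four parts:

```python
# Reuse one LU factorisation for several right-hand sides.
L = [[1, 0, 0, 0], [2, 1, 0, 0], [-1, 1, 1, 0], [2, -2, 3, 1]]
U = [[2, 1, 3, -1], [0, 1, -2, 1], [0, 0, 3, 2], [0, 0, 0, -1]]

def solve_lu(L, U, b):
    n = len(b)
    u = [0.0] * n
    for i in range(n):                        # forward substitution
        u[i] = b[i] - sum(L[i][j] * u[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):              # backward substitution
        x[i] = (u[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

bs = [[3, 8, -8, -17], [-1, -2, -8, -26], [3, 4, -4, 14], [7, 12, -8, 22]]
for b in bs:
    print(solve_lu(L, U, b))
# [1, 2, -1, -2], [-1, 1, -1, -3], [-1, 1, 1, -1], [1, 1, 1, -1] as floats
```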


Question 1.8 \(\;\) Determine whether or not the following matrices have an \(LU\) decomposition without doing the factorisation.

\((a)^*\hspace{0.2cm} \begin{pmatrix} 2 & 1 & -1 \\ 1 & 2 & 0 \\ -1 & 0 & 3 \end{pmatrix} \hspace{0.5cm}\) \((b)\hspace{0.2cm} \begin{pmatrix} 2 & 1 & -1 \\ 1 & 2 & -1 \\ -1 & -1 & 3 \end{pmatrix} \hspace{0.5cm}\) \((c)\hspace{0.2cm} \begin{pmatrix} 2 & 1 & -1 \\ 4 & 2 & -1 \\ -1 & -2 & 1 \end{pmatrix} \hspace{0.5cm}\)

\((d)\hspace{0.2cm} \begin{pmatrix} 1 & 1 & -1 \\ 1 & 2 & 2 \\ -1 & 2 & 3 \end{pmatrix} \hspace{0.5cm}\) \((e)\hspace{0.2cm} \begin{pmatrix} 3 & 1 & -1 \\ 1 & 2 & 2 \\ -1 & 2 & 3 \end{pmatrix} \hspace{0.5cm}\) \((f)\hspace{0.2cm} \begin{pmatrix} 5 & 1 & -1 \\ 1 & 2 & 2 \\ -1 & 2 & 3 \end{pmatrix} \hspace{0.5cm}\)

\((g)\hspace{0.2cm} \begin{pmatrix} 5 & 1 & -1 & 0 \\ 1 & 2 & 2 & 0 \\ -1 & 2 & 3 & 1 \\ 0 & 0 & 1 & 2 \end{pmatrix} \hspace{0.5cm}\) \((h)\hspace{0.2cm} \begin{pmatrix} 5 & 1 & -1 & 0 \\ 1 & 2 & 2 & 0 \\ -1 & 2 & 3 & 1 \\ 0 & 0 & 1 & 9 \end{pmatrix}\)


Quick Check

\((a)\;\) Yes. \(\hspace{1cm}\) \((b)\;\) Yes. \(\hspace{1cm}\) \((c)\;\) No. \(\hspace{1.2cm}\) \((d)\;\) Yes.

\((e)\;\) Yes. \(\hspace{1cm}\) \((f)\;\) Yes. \(\hspace{1cm}\) \((g)\;\) Yes. \(\hspace{1cm}\) \((h)\;\) No.


Solution

In each case, we need to check whether the leading principal minors are all non-zero.

\((a)\;\) Tutorial question

This matrix has an \(LU\) decomposition because \[\begin{align*} 2\neq 0, \quad \begin{vmatrix} 2 & 1 \\ 1 & 2 \end{vmatrix}=3 \neq 0 \quad\text{and}\quad \begin{vmatrix} 2 & 1 & -1 \\ 1 & 2 & 0 \\ -1 & 0 & 3 \end{vmatrix} &=2\begin{vmatrix} 2 & 0 \\ 0 & 3 \end{vmatrix}- \begin{vmatrix} 1 & 0 \\ -1 & 3 \end{vmatrix}- \begin{vmatrix} 1 & 2 \\ -1 & 0 \end{vmatrix} \\ &=12-3-2=7\neq 0. \end{align*}\]

\((b)\;\) This matrix has an \(LU\) decomposition because \[\begin{align*} 2\neq 0, \quad \begin{vmatrix} 2 & 1 \\ 1 & 2 \end{vmatrix}=3\neq 0 \quad\text{and}\quad \begin{vmatrix} 2 & 1 & -1 \\ 1 & 2 & -1 \\ -1 & -1 & 3 \end{vmatrix} &=2\begin{vmatrix} 2 & -1 \\ -1 & 3 \end{vmatrix}- \begin{vmatrix} 1 & -1 \\ -1 & 3 \end{vmatrix}- \begin{vmatrix} 1 & 2 \\ -1 & -1 \end{vmatrix} \\ &=10-2-1=7\neq 0. \end{align*}\]

\((c)\;\) This matrix does not have an \(LU\) decomposition because \(\displaystyle\begin{vmatrix} 2 & 1 \\ 4 & 2 \end{vmatrix}=0\).

\((d)\;\) This matrix does have an \(LU\) decomposition because \[\begin{align*} 1\neq 0, \quad \begin{vmatrix} 1 & 1 \\ 1 & 2 \end{vmatrix}=1 \neq 0 \quad\text{and}\quad \begin{vmatrix} 1 & 1 & -1 \\ 1 & 2 & 2 \\ -1 & 2 & 3 \end{vmatrix} =-7\neq 0. \end{align*}\]

\((e)\;\) This matrix also has an \(LU\) decomposition because \[\begin{align*} 3\neq 0, \qquad \begin{vmatrix} 3 & 1 \\ 1 & 2 \end{vmatrix}=5 \neq 0 \quad\text{and}\quad \begin{vmatrix} 3 & 1 & -1 \\ 1 & 2 & 2 \\ -1 & 2 & 3 \end{vmatrix} =-3\neq 0. \end{align*}\]

\((f)\;\) Once again, this matrix also has an \(LU\) decomposition because \[\begin{align*} 5 \neq 0, \quad \begin{vmatrix} 5 & 1 \\ 1 & 2 \end{vmatrix}=9 \neq 0 \quad\text{and}\quad \begin{vmatrix} 5 & 1 & -1 \\ 1 & 2 & 2 \\ -1 & 2 & 3 \end{vmatrix} =1\neq 0. \end{align*}\]

\((g)\;\) Using the previous part \((f)\), we just need to check the full \(4\times 4\) determinant. There are many ways to do this. For instance, by expanding along the bottom row we can use the previous part again to find \[\begin{vmatrix} 5 & 1 & -1 & 0 \\ 1 & 2 & 2 & 0 \\ -1 & 2 & 3 & 1 \\ 0 & 0 & 1 & 2 \end{vmatrix} =-\begin{vmatrix} 5 & 1 & 0 \\ 1 & 2 & 0 \\ -1 & 2 & 1 \end{vmatrix} +2\begin{vmatrix} 5 & 1 & -1 \\ 1 & 2 & 2 \\ -1 & 2 & 3 \end{vmatrix} =-9+2=-7\neq 0\] so this matrix also has an \(LU\) decomposition.

\((h)\;\) This time \[\begin{vmatrix} 5 & 1 & -1 & 0 \\ 1 & 2 & 2 & 0 \\ -1 & 2 & 3 & 1 \\ 0 & 0 & 1 & 9 \end{vmatrix} =-\begin{vmatrix} 5 & 1 & 0 \\ 1 & 2 & 0 \\ -1 & 2 & 1 \end{vmatrix} +9\begin{vmatrix} 5 & 1 & -1 \\ 1 & 2 & 2 \\ -1 & 2 & 3 \end{vmatrix} =-9+9=0\] so this matrix does not have an \(LU\) decomposition.
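The criterion used throughout this question is easy to automate: compute the leading principal minors and check that none is zero. A sketch (an addition to the solution; the helper names `det` and `has_lu` are illustrative), applied to parts \((c)\) and \((h)\):

```python
# Determinant by cofactor expansion along the first row (fine for small n).
def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

# A has an LU decomposition (without row swaps) iff every leading
# principal minor is nonzero.
def has_lu(A):
    return all(det([row[:k] for row in A[:k]]) != 0 for k in range(1, len(A) + 1))

A_c = [[2, 1, -1], [4, 2, -1], [-1, -2, 1]]                        # part (c)
A_h = [[5, 1, -1, 0], [1, 2, 2, 0], [-1, 2, 3, 1], [0, 0, 1, 9]]   # part (h)
print(has_lu(A_c), has_lu(A_h))  # False False
```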


Question 1.9 \(\;\) Show that the following can be factorised as \(A=LU\) without doing the factorisation.

\((a)\hspace{0.2cm} A=\begin{pmatrix} 7 & 1 & 1 & 1 & 1 \\ 1 & 8 & -2 & 1 & 1 \\ -2 & 1 & -9 & 3 & 1 \\ 1 & 1 & -1 & -5 & 1 \\ 2 & -3 & 1 & -1 & 8 \end{pmatrix} \hspace{1cm}\) \((b)\hspace{0.2cm} A=\begin{pmatrix} -17 & 1 & -5 & 1 & 1 \\ 1 & 8 & 2 & -3 & 1 \\ 12 & 1 & 29 & -13 & 1 \\ 1 & -10 & -1 & -50 & 22 \\ 2 & 13 & 1 & -1 & 21 \end{pmatrix}\)


Quick Check

\((a)\;\) Hint: check if the matrix is strictly diagonally dominant.

\((b)\;\) Hint: check if the matrix is strictly diagonally dominant.


Solution

We just check that each matrix is strictly diagonally dominant: each diagonal entry is strictly greater in absolute value than the sum of the absolute values of the other entries in its row:

\((a)\;\) \[\begin{align*} 7 &> 1+1+1+1 \\ 8 &> 1+2+1+1 \\ 9 &> 2+1+3+1 \\ 5 &> 1+1+1+1 \\ 8 &> 2+3+1+1 \end{align*}\]

\((b)\;\) \[\begin{align*} 17 &> 1+5+1+1 \\ 8 &> 1+2+3+1 \\ 29 &> 12+1+13+1 \\ 50 &> 1+10+1+22 \\ 21 &> 2+13+1+1 \end{align*}\]
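The row-by-row check is also easy to do in code; a short sketch (an addition to the solution) applied to matrix \((a)\):

```python
# Strict diagonal dominance: |a_ii| must exceed the sum of |a_ij|, j != i,
# in every row.
def is_sdd(A):
    return all(abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
               for i, row in enumerate(A))

A = [[7, 1, 1, 1, 1],
     [1, 8, -2, 1, 1],
     [-2, 1, -9, 3, 1],
     [1, 1, -1, -5, 1],
     [2, -3, 1, -1, 8]]
print(is_sdd(A))  # True
```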


Question 1.10 \(\;\) For each of the following, use the indicated row swaps to find \(P\), \(L\) and \(U\) such that we have a decomposition \(A=P^TLU\).

\((a)\hspace{0.2cm}\) using \(R_1\leftrightarrow R_2\) on \(A=\begin{pmatrix} 0 & 2 & 1 \\ -2 & 3 & -1 \\ -6 & 9 & 0 \end{pmatrix} \hspace{1cm}\)

\((b)\hspace{0.2cm}\) using \(R_1\leftrightarrow R_3\) on \(A=\begin{pmatrix} 0 & 2 & 1 \\ -2 & 3 & -1 \\ -6 & 9 & 0 \end{pmatrix} \hspace{1cm}\)

\((c)\hspace{0.2cm}\) using \(R_1\leftrightarrow R_2\) on \(A=\begin{pmatrix} 0 & 2 & 3 \\ 1 & -2 & 1 \\ 2 & 0 & 5 \end{pmatrix} \hspace{1cm}\)

\((d)\hspace{0.2cm}\) using \(R_2\leftrightarrow R_3\) on \(A=\begin{pmatrix} -1 & 2 & 3 \\ -2 & 2 & 5 \\ 3 & -4 & -5 \end{pmatrix} \hspace{1cm}\)


Quick Check

\((a)\;\) \(P=\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad\quad L=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 3 & 0 & 1 \end{pmatrix}, \quad\quad U=\begin{pmatrix} -2 & 3 & -1 \\ 0 & 2 & 1 \\ 0 & 0 & 3 \end{pmatrix}.\)

\((b)\;\) \(P=\begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \quad\quad L=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ \frac13 & 0 & 1 \end{pmatrix}, \quad\quad U=\begin{pmatrix} -6 & 9 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & -1 \end{pmatrix}.\)

\((c)\;\) \(P=\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad\quad L=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 2 & 1 \end{pmatrix}, \quad\quad U=\begin{pmatrix} 1 & -2 & 1 \\ 0 & 2 & 3 \\ 0 & 0 & -3 \end{pmatrix}.\)

\((d)\;\) \(P=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \quad\quad L=\begin{pmatrix} 1 & 0 & 0 \\ -3 & 1 & 0 \\ 2 & -1 & 1 \end{pmatrix}, \quad\quad U=\begin{pmatrix} -1 & 2 & 3 \\ 0 & 2 & 4 \\ 0 & 0 & 3 \end{pmatrix}.\)


Solution

\((a)\;\) Swapping \(R_1\leftrightarrow R_2\) corresponds to multiplying by the permutation matrix \[P=\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}\] and we have \[PA=\begin{pmatrix} -2 & 3 & -1 \\ 0 & 2 & 1 \\ -6 & 9 & 0 \end{pmatrix} \xrightarrow{R_3-3R_1} \begin{pmatrix} -2 & 3 & -1 \\ 0 & 2 & 1 \\ 0 & 0 & 3 \end{pmatrix}\] So we find \(A=P^TLU\), where \[P=\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad\quad L=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 3 & 0 & 1 \end{pmatrix}, \quad\quad U=\begin{pmatrix} -2 & 3 & -1 \\ 0 & 2 & 1 \\ 0 & 0 & 3 \end{pmatrix}.\]

\((b)\;\) Swapping \(R_1\leftrightarrow R_3\) corresponds to multiplying by the permutation matrix \[P_1=\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}\] and we have \[P_1A=\begin{pmatrix} -6 & 9 & 0 \\ -2 & 3 & -1 \\ 0 & 2 & 1 \end{pmatrix} \xrightarrow{R_2-\frac13R_1} \begin{pmatrix} -6 & 9 & 0 \\ 0 & 0 & -1 \\ 0 & 2 & 1 \end{pmatrix}\] To get this into upper triangular form, we need another row swap \(R_2\leftrightarrow R_3\) which corresponds to \[P_2=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}\] Hence we swap \(R_1\leftrightarrow R_3\) then \(R_2\leftrightarrow R_3\) corresponding to \[P=P_2P_1=\begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}\] Then \[PA=\begin{pmatrix} -6 & 9 & 0 \\ 0 & 2 & 1 \\ -2 & 3 & -1 \end{pmatrix} \xrightarrow{R_3-\frac13R_1} \begin{pmatrix} -6 & 9 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & -1 \end{pmatrix}\] So we find \(A=P^TLU\), where \[P=\begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \quad\quad L=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ \frac13 & 0 & 1 \end{pmatrix}, \quad\quad U=\begin{pmatrix} -6 & 9 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & -1 \end{pmatrix}.\]

Remark: notice \((a)\) and \((b)\) give slightly different decompositions of the same matrix.

\((c)\;\) Swapping \(R_1\leftrightarrow R_2\) corresponds to multiplying by the permutation matrix \[P=\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}\] and we have \[PA=\begin{pmatrix} 1 & -2 & 1 \\ 0 & 2 & 3 \\ 2 & 0 & 5 \end{pmatrix} \xrightarrow{R_3-2R_1} \begin{pmatrix} 1 & -2 & 1 \\ 0 & 2 & 3 \\ 0 & 4 & 3 \end{pmatrix} \xrightarrow{R_3-2R_2} \begin{pmatrix} 1 & -2 & 1 \\ 0 & 2 & 3 \\ 0 & 0 & -3 \end{pmatrix}\] So we find \(A=P^TLU\), where \[P=\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad\quad L=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 2 & 1 \end{pmatrix}, \quad\quad U=\begin{pmatrix} 1 & -2 & 1 \\ 0 & 2 & 3 \\ 0 & 0 & -3 \end{pmatrix}.\]

\((d)\;\) Swapping \(R_2\leftrightarrow R_3\) corresponds to multiplying by the permutation matrix \[P=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}\] and we have \[PA=\begin{pmatrix} -1 & 2 & 3 \\ 3 & -4 & -5 \\ -2 & 2 & 5 \end{pmatrix} \xrightarrow[R_3-2R_1]{R_2+3R_1} \begin{pmatrix} -1 & 2 & 3 \\ 0 & 2 & 4 \\ 0 & -2 & -1 \end{pmatrix} \xrightarrow{R_3+R_2} \begin{pmatrix} -1 & 2 & 3 \\ 0 & 2 & 4 \\ 0 & 0 & 3 \end{pmatrix}\] So we find \(A=P^TLU\), where \[P=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \quad\quad L=\begin{pmatrix} 1 & 0 & 0 \\ -3 & 1 & 0 \\ 2 & -1 & 1 \end{pmatrix}, \quad\quad U=\begin{pmatrix} -1 & 2 & 3 \\ 0 & 2 & 4 \\ 0 & 0 & 3 \end{pmatrix}.\]
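A quick numerical check (an addition, not part of the original solution): multiplying \(P^TLU\) back together for part \((a)\) should recover \(A\).

```python
# Confirm A = P^T L U for part (a).
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[0, 2, 1], [-2, 3, -1], [-6, 9, 0]]
P = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
L = [[1, 0, 0], [0, 1, 0], [3, 0, 1]]
U = [[-2, 3, -1], [0, 2, 1], [0, 0, 3]]

Pt = [list(col) for col in zip(*P)]     # transpose of P
print(matmul(Pt, matmul(L, U)))         # equals A
```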


Week 3 Material

Question 1.11 \(\;\) Check that the following equations have the solution \(x=1,y=-2\). \[\begin{align*} \frac{529}{229}x+y&=\frac{71}{229}\\ \frac{298}{129}x+y&=\frac{40}{129} \end{align*}\] What happens if we change the right-hand side as follows? \[\begin{align*} \frac{529}{229}x+y&=\frac{70}{229}\\ \frac{298}{129}x+y&=\frac{41}{129} \end{align*}\] Find the solutions of the new equations and the determinant of the coefficient matrix. What do you notice about this determinant?


Quick Check

The new solution is \(x=359,\, y=-829\). The matrix of coefficients has a very small determinant, \(-1/29541\).


Solution

It’s easy to see that \(x=1,y=-2\) solves the first set of equations. For the second set, subtracting the second equation from the first gives \[\left(\frac{529}{229}-\frac{298}{129}\right)x=\frac{70}{229}-\frac{41}{129} \qquad\text{i.e.}\qquad -\frac{1}{29541}x=-\frac{359}{29541}\] leading to \(x=359\) and \[y=\frac{41-298\times 359}{129}=-829\] The very small change to the right-hand side has led to a very large change in the solutions. The matrix of coefficients has a very small determinant: \[\begin{vmatrix} \frac{529}{229} & 1 \\[3pt] \frac{298}{129} & 1 \end{vmatrix} =\frac{529}{229}-\frac{298}{129}=-\frac{1}{29541} \] Notice that in calculating \(x\) and \(y\) we had to divide by this very small number, and this is what produces the large change.
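The sensitivity can be reproduced exactly with rational arithmetic. The sketch below (an addition to the solution) solves both systems by Cramer's rule using Python's `fractions` module:

```python
# Solve both 2x2 systems exactly and compare the solutions.
from fractions import Fraction as F

def solve2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] (x, y)^T = (e, f)^T by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

a, b = F(529, 229), F(1)
c, d = F(298, 129), F(1)

x1, y1 = solve2(a, b, c, d, F(71, 229), F(40, 129))   # original RHS
x2, y2 = solve2(a, b, c, d, F(70, 229), F(41, 129))   # perturbed RHS
print(x1, y1)          # 1 -2
print(x2, y2)          # 359 -829
print(a * d - b * c)   # -1/29541
```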


Question 1.12 \(\;\) Write out the iteration scheme for the Jacobi method for solving \[\begin{pmatrix} -4 & 1 & 1 & 0 \\ 1 & -4 & 0 & 1 \\ 1 & 0 & -4 & 1 \\ 0 & 1 & 1 & -4 \end{pmatrix} \begin{pmatrix} w \\ x \\ y \\ z \end{pmatrix} =\begin{pmatrix} -\sqrt{3}/4 \\ \sqrt{3}/4 \\ 0 \\ 0 \end{pmatrix}.\] Will this scheme converge for an arbitrary initial value of the iteration vector? Give a reason for your answer.


Quick Check

The iteration is as follows: \[\begin{align*} w^{(k+1)} &=\frac{-\sqrt{3}/4-x^{(k)}-y^{(k)}}{-4} \\ x^{(k+1)} &=\frac{\sqrt{3}/4-w^{(k)}-z^{(k)}}{-4} \\ y^{(k+1)} &=\frac{-w^{(k)}-z^{(k)}}{-4} \\ z^{(k+1)} &=\frac{-x^{(k)}-y^{(k)}}{-4} \end{align*}\] This will converge for any initial values. Why?


Solution

The equations are \[\begin{align*} -4w+x+y \qquad\;\; &= -\sqrt{3}/4 \\ w-4x\qquad\;\; +z &= \sqrt{3}/4 \\ w\qquad\; -4y\;+z &= 0 \\ x+y-4z &= 0 \end{align*}\] which we rewrite as \[\begin{align*} w &=\frac{-\sqrt{3}/4-x-y}{-4} \\ x &=\frac{\sqrt{3}/4-w-z}{-4} \\ y &=\frac{-w-z}{-4} \\ z &=\frac{-x-y}{-4} \end{align*}\] Applying the superscripts then gives the Jacobi iteration: \[\begin{align*} w^{(k+1)} &=\frac{-\sqrt{3}/4-x^{(k)}-y^{(k)}}{-4} \\ x^{(k+1)} &=\frac{\sqrt{3}/4-w^{(k)}-z^{(k)}}{-4} \\ y^{(k+1)} &=\frac{-w^{(k)}-z^{(k)}}{-4} \\ z^{(k+1)} &=\frac{-x^{(k)}-y^{(k)}}{-4} \end{align*}\] The iteration will converge for any initial values since the matrix is strictly diagonally dominant.
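The Jacobi sweep above can be written generically for any system with nonzero diagonal; the sketch below (an illustration, not required by the question) runs 50 sweeps and checks the residual is negligible, as the strict diagonal dominance guarantees.

```python
# Jacobi iteration: every new component uses only the previous iterate.
import math

A = [[-4, 1, 1, 0], [1, -4, 0, 1], [1, 0, -4, 1], [0, 1, 1, -4]]
b = [-math.sqrt(3) / 4, math.sqrt(3) / 4, 0, 0]

def jacobi_step(A, b, x):
    return [(b[i] - sum(A[i][j] * x[j] for j in range(len(x)) if j != i)) / A[i][i]
            for i in range(len(x))]

x = [0.0, 0.0, 0.0, 0.0]
for _ in range(50):
    x = jacobi_step(A, b, x)

# Residual ||Ax - b|| in the max norm: tiny because A is s.d.d.
residual = max(abs(sum(A[i][j] * x[j] for j in range(4)) - b[i]) for i in range(4))
print(residual)
```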


Question 1.13 \(\;\) Rearrange the following system of equations so that the Gauss-Seidel iteration method will converge for any choice of initial values. Explain why this is so. \[\begin{align*} x+3y&=1\\ 2x-\;\,y&=3 \end{align*}\] For the new system, write out the iteration scheme for the Gauss-Seidel method and with initial values \(x^{(0)}=1\), \(y^{(0)}=-1\), find the iterates up to \(x^{(3)},y^{(3)}\).


Quick Check

The next iterates are \(x^{(1)} =1, \; y^{(1)} =0\) and \(x^{(2)} =\frac{3}{2}, \; y^{(2)} =-\frac{1}{6}\) and \(x^{(3)} =\frac{17}{12}, \; y^{(3)} =-\frac{5}{36}.\)


Solution

By swapping the order of the equations, \[\begin{align*} 2x-y&=3 \\ x+3y&=1 \end{align*}\] the coefficient matrix is strictly diagonally dominant so the Gauss-Seidel method will converge. The iteration scheme is \[\begin{align*} x^{(k+1)} &=\frac{3+y^{(k)}}{2} \\ y^{(k+1)} &= \frac{1-x^{(k+1)}}{3} \end{align*}\]

Starting with \(x^{(0)}=1\), \(y^{(0)}=-1\), \[\begin{align*} x^{(1)} &=\frac{3+y^{(0)}}{2} =\frac{3-1}{2}=1 \\ y^{(1)} &= \frac{1-x^{(1)}}{3} = \frac{1-1}{3}=0 \end{align*}\] and \[\begin{align*} x^{(2)} &=\frac{3+y^{(1)}}{2} =\frac{3+0}{2}=\frac{3}{2} \\ y^{(2)} &= \frac{1-x^{(2)}}{3} = \frac{1-\frac{3}{2}}{3}=-\frac{1}{6} \end{align*}\] and \[\begin{align*} x^{(3)} &=\frac{3+y^{(2)}}{2} =\frac{3-\frac{1}{6}}{2}=\frac{17}{12} \\ y^{(3)} &= \frac{1-x^{(3)}}{3} = \frac{1-\frac{17}{12}}{3}=-\frac{5}{36} \end{align*}\]
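The three hand iterations can be confirmed with exact fractions; a short sketch (an addition to the solution):

```python
# Gauss-Seidel for the reordered system, kept exact with fractions.
from fractions import Fraction as F

x, y = F(1), F(-1)
iterates = []
for _ in range(3):
    x = (3 + y) / 2          # uses the previous y
    y = (1 - x) / 3          # uses the *new* x
    iterates.append((x, y))

for xk, yk in iterates:
    print(xk, yk)
# 1 0
# 3/2 -1/6
# 17/12 -5/36
```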


Question 1.14 \(\;\) Rearrange the following system into a form you know will converge under the Gauss-Seidel iteration method for any choice of \({\bf x}^{(0)}\), stating why. Write out the resulting Gauss-Seidel iteration and calculate \({\bf x}^{(1)}\) and \({\bf x}^{(2)}\) given that \({\bf x}^{(0)}=(1,1,1,1)^T\), working to 4 significant figures. \[\begin{align*} -x_1+8x_2+\;\;\;\,x_3+4x_4&=6\\ 2x_1+5x_2+10x_3-\;\,x_4&=2\\ -x_1+\;\,x_2-\;\;\;\,x_3+4x_4&=8\\ 6x_1-\;\,x_2-\;\;\;\,x_3-\;\,x_4&=9 \end{align*}\]


Quick Check

First iteration: \(x_1^{(1)} =2, \; x_2^{(1)} =0.375, \; x_3^{(1)} =-0.2875, \; x_4^{(1)} =2.334.\)

Second iteration: \(x_1^{(2)} =1.903, \; x_2^{(2)} =-0.1432, \; x_3^{(2)} =0.1244, \; x_4^{(2)} =2.542.\)


Solution

Moving the last equation first gives a strictly diagonally dominant coefficient matrix \[\begin{pmatrix} 6 & - 1 & -1 & -1 \\ -1 & 8 & 1 & 4 \\ 2 & 5 & 10 & -1 \\ -1 & 1 & -1 & 4 \end{pmatrix}\] and in this order, the iteration will converge.

The Gauss-Seidel scheme is then \[\begin{align*} x_1^{(k+1)} &=\frac{9+x_2^{(k)}+x_3^{(k)}+x_4^{(k)}}{6} \\ x_2^{(k+1)} &=\frac{6+x_1^{(k+1)}-x_3^{(k)}-4x_4^{(k)}}{8} \\ x_3^{(k+1)} &=\frac{2-2x_1^{(k+1)}-5x_2^{(k+1)}+x_4^{(k)}}{10} \\ x_4^{(k+1)} &=\frac{8+x_1^{(k+1)}-x_2^{(k+1)}+x_3^{(k+1)}}{4} \end{align*}\] Starting with \({\bf x}^{(0)}=[1,1,1,1]^T\), we get \[\begin{align*} x_1^{(1)} &=\frac{9+x_2^{(0)}+x_3^{(0)}+x_4^{(0)}}{6} =\frac{9+1+1+1}{6}=2 \\ x_2^{(1)} &=\frac{6+x_1^{(1)}-x_3^{(0)}-4x_4^{(0)}}{8} =\frac{6+2-1-4}{8}=0.375\\ x_3^{(1)} &=\frac{2-2x_1^{(1)}-5x_2^{(1)}+x_4^{(0)}}{10} =\frac{2-2\times 2-5\times 0.375+1}{10}=-0.2875 \\ x_4^{(1)} &=\frac{8+x_1^{(1)}-x_2^{(1)}+x_3^{(1)}}{4} =\frac{8+2-0.375-0.2875}{4}=2.334 \end{align*}\] and \[\begin{align*} x_1^{(2)} &=\frac{9+x_2^{(1)}+x_3^{(1)}+x_4^{(1)}}{6} =\frac{9+0.375-0.2875+2.334}{6}=1.903 \\ x_2^{(2)} &=\frac{6+x_1^{(2)}-x_3^{(1)}-4x_4^{(1)}}{8} =\frac{6+1.903+0.2875-4\times 2.334}{8}=-0.1432 \\ x_3^{(2)} &=\frac{2-2x_1^{(2)}-5x_2^{(2)}+x_4^{(1)}}{10} =\frac{2-2\times 1.903 +5\times 0.1432+2.334}{10}=0.1244 \\ x_4^{(2)} &=\frac{8+x_1^{(2)}-x_2^{(2)}+x_3^{(2)}}{4} =\frac{8+1.903+0.1432+0.1244 }{4}=2.542 \end{align*}\]


Question 1.15 \(\;\) Rearrange the following \[\begin{pmatrix} 6 & 1 & -1 \\ -1 & 1 & 7 \\ 1 & 5 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} =\begin{pmatrix} 3 \\ -17 \\ 0 \end{pmatrix}\] to a form where you know an iterative scheme converges, stating why you then know it converges. Write out the iteration equations both for the Jacobi and the Gauss-Seidel Methods. Working to 4 significant figures, determine \(x^{(3)},y^{(3)},z^{(3)}\) starting with \(x^{(0)}=y^{(0)}=z^{(0)}=1\), both for the Jacobi and the Gauss-Seidel Methods.


Quick Check

With Jacobi: \(x^{(3)} = 0.05233, \; y^{(3)} =0.4276 , \; z^{(3)} =-2.461.\)

With Gauss-Seidel: \(x^{(3)} = 0.01717, \; y^{(3)} =0.4900 , \; z^{(3)} =-2.496.\)


Solution

Swap the second and third rows to give \[\begin{align*} 6x+y-z &= 3 \\ x+5y+z &=0 \\ -x+y+7z &=-17 \end{align*}\] The coefficient matrix is now s.d.d. so both methods will converge. The Jacobi iteration scheme is \[\begin{align*} x^{(k+1)} &=\frac{3-y^{(k)}+z^{(k)}}{6} \\ y^{(k+1)} &=\frac{-x^{(k)}-z^{(k)}}{5} \\ z^{(k+1)} &=\frac{-17+x^{(k)}-y^{(k)}}{7} \end{align*}\] and the Gauss-Seidel iteration scheme is: \[\begin{align*} x^{(k+1)} &=\frac{3-y^{(k)}+z^{(k)}}{6} \\ y^{(k+1)} &=\frac{-x^{(k+1)}-z^{(k)}}{5} \\ z^{(k+1)} &=\frac{-17+x^{(k+1)}-y^{(k+1)}}{7} \end{align*}\] Here’s a table comparing the computations with Jacobi on the left and Gauss-Seidel on the right:

\(k\) \(\quad\) Jacobi: \(x^{(k)}\) \(y^{(k)}\) \(z^{(k)}\) \(\quad\) Gauss-Seidel: \(x^{(k)}\) \(y^{(k)}\) \(z^{(k)}\)
0 1.000 1.000 1.000 1.000 1.000 1.000
1 0.500 -0.400 -2.429 0.500 -0.300 -2.314
2 0.1618 0.3858 -2.300 0.1643 0.4300 -2.467
3 0.05233 0.4276 -2.461 0.01717 0.4900 -2.496
4 0.0185 0.4818 -2.483 0.002333 0.4988 -2.500
5 0.005833 0.4928 -2.494 0.0001667 0.5000 -2.500
6 0.002167 0.4976 -2.497 0.000 0.5000 -2.500
7 0.0008333 0.4990 -2.500 0.000 0.5000 -2.500
8 0.0001667 0.4998 -2.500
9 0.000 0.5000 -2.500
10 0.000 0.5000 -2.500
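The table can be regenerated in a few lines; this sketch (an addition to the solution) runs both schemes from \((1,1,1)\):

```python
# Jacobi and Gauss-Seidel for the reordered system of Question 1.15.
def jacobi(x, y, z):
    # All three updates use the previous iterate.
    return ((3 - y + z) / 6, (-x - z) / 5, (-17 + x - y) / 7)

def gauss_seidel(x, y, z):
    x = (3 - y + z) / 6
    y = (-x - z) / 5          # uses the new x
    z = (-17 + x - y) / 7     # uses the new x and y
    return (x, y, z)

xj = xg = (1.0, 1.0, 1.0)
for k in range(10):
    xj, xg = jacobi(*xj), gauss_seidel(*xg)

print(xj)  # both approach (0, 1/2, -5/2)
print(xg)
```

As the table shows, Gauss-Seidel settles to four significant figures several iterations sooner than Jacobi.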


Question 1.16 \(\!{}^*\) \(\;\) \((a)\;\) Determine whether or not the matrix \(\displaystyle\begin{pmatrix}6 & 1 & 2 \\ 1 & 5 & 3 \\ 2 & 3 & 4\end{pmatrix}\) is positive definite.

\((b)\;\) Rearrange the equations \[\begin{align*} 6x+\;\,y+2z &=-2 \\ 2x+3y+4z &=5 \\ x+5y+3z &=7 \end{align*}\] so that the Gauss-Seidel iterations will converge for every choice of \({\bf x}^{(0)}\), explaining why this is so. Write down the resulting iteration and hence find \({\bf x}^{(1)}\), \({\bf x}^{(2)}\) given that \({\bf x}^{(0)}=\left(1,2,1\right)^T\).


Quick Check

\((a)\;\) The matrix is positive definite.

\((b)\;\) \(x^{(1)} = -1, \, y^{(1)} =1, \, z^{(1)} =1\) and \(x^{(2)}=-\dfrac56, \; y^{(2)}=\dfrac{29}{30}, \; z^{(2)}=\dfrac{113}{120}\).


Solutions

Tutorial question

\((a)\;\) It is positive definite since it’s symmetric and \[6>0,\qquad \begin{vmatrix} 6 & 1 \\ 1 & 5 \end{vmatrix}=30-1=29>0\] \[\begin{vmatrix}6 & 1 & 2 \\ 1 & 5 & 3 \\ 2 & 3 & 4\end{vmatrix} =6\begin{vmatrix} 5 & 3 \\ 3 & 4 \end{vmatrix} -\begin{vmatrix} 1 & 3 \\ 2 & 4 \end{vmatrix} +2\begin{vmatrix} 1 & 5 \\ 2 & 3 \end{vmatrix} = 66+2-14=54>0\]

\((b)\;\) If we swap the second and third equations \[\begin{align*} 6x+y+2z&=-2\\ x+5y+3z&=7 \\ 2x+3y+4z&=5 \end{align*}\] the coefficient matrix is positive definite from \((a)\), so the Gauss-Seidel method will converge for any starting values. The iteration scheme is \[\begin{align*} x^{(k+1)} &=\frac{-2-y^{(k)}-2z^{(k)}}{6} \\ y^{(k+1)} &=\frac{7-x^{(k+1)}-3z^{(k)}}{5} \\ z^{(k+1)} &=\frac{5-2x^{(k+1)}-3y^{(k+1)}}{4} \end{align*}\]

Starting with \(x^{(0)}=1\), \(y^{(0)}=2\), \(z^{(0)}=1\), we get \[\begin{align*} x^{(1)} &=\frac{-2-y^{(0)}-2z^{(0)}}{6} = \frac{-2-2-2\times 1}{6} = -1 \\ y^{(1)} &=\frac{7-x^{(1)}-3z^{(0)}}{5} = \frac{7+1-3\times 1}{5} =1\\ z^{(1)} &=\frac{5-2x^{(1)}-3y^{(1)}}{4} = \frac{5+2\times 1-3\times 1}{4} =1 \end{align*}\] and similarly \(x^{(2)}=-\dfrac56\), \(y^{(2)}=\dfrac{29}{30}\), \(z^{(2)}=\dfrac{113}{120}\).
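The "similarly" step can be checked exactly with fractions (an addition to the worked solution):

```python
# Two Gauss-Seidel sweeps from (1, 2, 1), kept exact with fractions.
from fractions import Fraction as F

x, y, z = F(1), F(2), F(1)
for _ in range(2):
    x = (-2 - y - 2 * z) / 6      # uses the previous y, z
    y = (7 - x - 3 * z) / 5       # uses the new x
    z = (5 - 2 * x - 3 * y) / 4   # uses the new x, y

print(x, y, z)  # -5/6 29/30 113/120
```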


Question 1.17 \(\!{}^*\) \(\;\) Consider the matrices

\(\hspace{0.5cm}(a)\;\) \(\;A=\begin{pmatrix} 2&1&0 \\ 0&3&0 \\ 1&0&4 \end{pmatrix},\hspace{1cm}\) \((b)\;\) \(\;A=\begin{pmatrix} 2&1&0 \\ 0&3&2 \\ 1&2&4 \end{pmatrix},\hspace{1cm}\) \((c)\;\) \(\;A=\begin{pmatrix} 2&-1&0\\ -1&4&2\\ 0&2&2 \end{pmatrix}\).

\((i)\;\) Determine which are strictly diagonally dominant.

\((ii)\;\) Also determine which are positive definite.

\((iii)\;\) What can you then say about solving \(A{\bf x}={\bf b}\) using \(LU\) decomposition or the Jacobi or Gauss-Seidel iteration techniques?


Quick Check

\((a)\;\) s.d.d but not positive definite.

\((b)\;\) s.d.d but not positive definite.

\((c)\;\) Not s.d.d but it is positive definite.


Solutions

Tutorial question

\((a)\;\) This is s.d.d. since \(|2|>|1|+|0|\), \(|3|>|0|+|0|\), \(|4|>|1|+|0|\).

It is not positive definite since it isn’t symmetric.

Since \(A\) is s.d.d., it does have an \(LU\) decomposition and the Jacobi and Gauss-Seidel methods will both converge for any \({\bf x}^{(0)}\).

\((b)\;\) This is s.d.d. since \(|2|>|1|+|0|\), \(|3|>|0|+|2|\), \(|4|>|1|+|2|\).

It is not positive definite since it isn’t symmetric.

Since \(A\) is s.d.d., it does have an \(LU\) decomposition and the Jacobi and Gauss-Seidel methods will both converge for any \({\bf x}^{(0)}\).

\((c)\;\) This is not s.d.d. since in the third row, \(|2|\leq |0|+|2|\).

It is symmetric and \[2>0,\qquad \begin{vmatrix} 2 & -1 \\ -1 & 4 \end{vmatrix}=8-1=7>0\] \[\begin{vmatrix} 2 & -1 & 0 \\ -1 & 4 & 2 \\ 0 & 2 & 2 \end{vmatrix} =2\begin{vmatrix} 4 & 2 \\ 2 & 2 \end{vmatrix} +\begin{vmatrix} -1 & 2 \\ 0 & 2 \end{vmatrix} = 8-2=6>0\] Since \(A\) is positive definite, it does have an \(LU\) decomposition and the Gauss-Seidel method will converge for any \({\bf x}^{(0)}\). (This evidence alone tells us nothing about the Jacobi method.)
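The checks used above can be automated. The sketch below (the helper names `is_sdd`, `is_pos_def`, `det` are our own) tests strict diagonal dominance row by row and, via Sylvester's criterion, positive definiteness for each of the three matrices:

```python
def is_sdd(M):
    """Strict diagonal dominance: |a_ii| > sum of |a_ij| (j != i) in every row."""
    return all(abs(M[i][i]) > sum(abs(v) for j, v in enumerate(M[i]) if j != i)
               for i in range(len(M)))

def is_symmetric(M):
    return all(M[i][j] == M[j][i] for i in range(len(M)) for j in range(len(M)))

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

def is_pos_def(M):
    """Sylvester's criterion: symmetric with all leading principal minors positive."""
    return is_symmetric(M) and all(det([r[:k] for r in M[:k]]) > 0
                                   for k in range(1, len(M) + 1))

Aa = [[2, 1, 0], [0, 3, 0], [1, 0, 4]]
Ab = [[2, 1, 0], [0, 3, 2], [1, 2, 4]]
Ac = [[2, -1, 0], [-1, 4, 2], [0, 2, 2]]
for A in (Aa, Ab, Ac):
    print(is_sdd(A), is_pos_def(A))
# (a): True False   (b): True False   (c): False True
```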


Question 1.18 \(\;\) Consider \(A{\bf x}={\bf b}\) where \(\displaystyle A=\begin{pmatrix} 2&-1&0&0\\-1&2&-1&0\\0&-1&2&-1\\0&0&-1&2\end{pmatrix}\) and \(\displaystyle {\bf b}=\begin{pmatrix}-4\\2\\3\\-1\end{pmatrix}\).

Is \(A\) strictly diagonally dominant? Is \(A\) positive definite? Based on this evidence, what can you deduce about the Jacobi or Gauss-Seidel iteration techniques? Give reasons for your answer. Write down the iteration scheme, and working to 4 significant figures, calculate 3 iterates of the Gauss-Seidel scheme taking \({\bf x}^{(0)}=\left(1,1,1,1\right)^T\). Use your results to guess the exact solution and check that it is correct.


Quick Check

The matrix \(A\) is not s.d.d. but it is positive definite.


Solution

The matrix \(A\) is not s.d.d.: in the second row, for example, \(|2|\leq |-1|+|-1|\). It is symmetric and \[2>0,\qquad \begin{vmatrix} 2 & -1 \\ -1 & 2 \end{vmatrix}=4-1=3>0,\qquad \begin{vmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{vmatrix}=2(4-1)+(-2-0)=4>0\] \[\begin{vmatrix}2&-1&0&0\\-1&2&-1&0\\0&-1&2&-1\\0&0&-1&2\end{vmatrix}= 2\begin{vmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{vmatrix}+ \begin{vmatrix} -1 & -1 & 0 \\ 0 & 2 & -1 \\ 0 & -1 & 2 \end{vmatrix} =8-(4-1)=5>0\] Hence \(A\) is positive definite, so the Gauss-Seidel method will converge for any starting vector.

The iteration scheme and a table of iterates are: \[\begin{align*} x_1^{(k+1)} &=\frac{-4+x_2^{(k)}}{2} \\ x_2^{(k+1)} &=\frac{2+x_1^{(k+1)}+x_3^{(k)}}{2} \\ x_3^{(k+1)} &=\frac{3+x_2^{(k+1)}+x_4^{(k)}}{2} \\ x_4^{(k+1)} &=\frac{-1+x_3^{(k+1)}}{2} \end{align*}\]

| \(k\) | \(x_1^{(k)}\) | \(x_2^{(k)}\) | \(x_3^{(k)}\) | \(x_4^{(k)}\) |
|---|---|---|---|---|
| 0 | 1.000 | 1.000 | 1.000 | 1.000 |
| 1 | -1.500 | 0.7500 | 2.375 | 0.6875 |
| 2 | -1.625 | 1.375 | 2.531 | 0.7655 |
| 3 | -1.312 | 1.610 | 2.688 | 0.8440 |
| 4 | -1.195 | 1.746 | 2.795 | 0.8975 |
| 5 | -1.127 | 1.834 | 2.866 | 0.933 |
| 6 | -1.083 | 1.892 | 2.912 | 0.956 |
| 7 | -1.054 | 1.929 | 2.942 | 0.971 |
| 8 | -1.036 | 1.953 | 2.962 | 0.981 |
| 9 | -1.024 | 1.969 | 2.975 | 0.988 |
| 10 | -1.016 | 1.980 | 2.984 | 0.992 |
| 11 | -1.010 | 1.987 | 2.990 | 0.995 |
| 12 | -1.006 | 1.992 | 2.994 | 0.997 |
| 13 | -1.004 | 1.995 | 2.996 | 0.998 |
| 14 | -1.002 | 1.997 | 2.998 | 0.999 |
| 15 | -1.002 | 1.998 | 2.998 | 0.999 |
| 16 | -1.001 | 1.998 | 2.998 | 0.999 |
| 17 | -1.001 | 1.998 | 2.998 | 0.999 |


After 3 iterations it might be hard to guess, but after 6 one could reasonably guess that the iteration is converging to \({\bf x}=\left(-1,2,3,1\right)^T\), and a simple check shows that this is indeed the exact solution.
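A few lines of Python confirm both the convergence and the guessed solution; this sketch is illustrative, not part of the required hand working:

```python
# Gauss-Seidel for Question 1.18; since A is positive definite,
# the iterates converge for any starting vector.
x = [1.0, 1.0, 1.0, 1.0]
for k in range(50):
    x[0] = (-4 + x[1]) / 2
    x[1] = (2 + x[0] + x[2]) / 2
    x[2] = (3 + x[1] + x[3]) / 2
    x[3] = (-1 + x[2]) / 2
print([round(v, 6) for v in x])  # close to [-1, 2, 3, 1]

# Check the guessed exact solution x = (-1, 2, 3, 1) satisfies A x = b:
A = [[2, -1, 0, 0], [-1, 2, -1, 0], [0, -1, 2, -1], [0, 0, -1, 2]]
exact = [-1, 2, 3, 1]
print([sum(a * v for a, v in zip(row, exact)) for row in A])  # [-4, 2, 3, -1]
```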


Question 1.19 \(\;\) Use the SOR method with \(\omega=1.021\) to solve the equations \[\begin{align*} 5x+\;\,y\qquad\,&=4\\ x+5y-\;\,z&=-5\\ -y+5z&=6, \end{align*}\] working to 4 significant figures, starting with \({\bf x}^{(0)}=\left(0,0,0\right)^T\) and finding the first 4 iterates. Use your results to guess the exact solution.


Quick Check

| \(k\) | \(x^{(k)}\) | \(y^{(k)}\) | \(z^{(k)}\) |
|---|---|---|---|
| 0 | 0.000 | 0.000 | 0.0000 |
| 1 | 0.8168 | -1.188 | 0.9826 |
| 2 | 1.042 | -1.008 | 0.9984 |
| 3 | 1.001 | -1.001 | 1.000 |
| 4 | 1.000 | -1.000 | 1.000 |
| 5 | 1.000 | -1.000 | 1.000 |


Solution

The SOR method \[{\bf x}^{(k+1)} = (1-\omega){\bf x}^{(k)}+\omega D^{-1}\left( {\bf b}+L{\bf x}^{(k+1)}+U{\bf x}^{(k)}\right)\] becomes \[\begin{align*} x^{(k+1)} &=-0.021x^{(k)}+1.021\left(\frac{ 4-y^{(k)}}{5}\right) \\ y^{(k+1)} &=-0.021y^{(k)}+1.021\left(\frac{-5-x^{(k+1)}+z^{(k)}}{5}\right) \\ z^{(k+1)} &=-0.021z^{(k)}+1.021\left(\frac{ 6+y^{(k+1)}}{5}\right) \end{align*}\] In particular, starting with \(x^{(0)}=0\), \(y^{(0)}=0\), \(z^{(0)}=0\), we get \[\begin{align*} x^{(1)} &=-0.021x^{(0)}+1.021\left(\frac{ 4-y^{(0)}}{5}\right) = 0+1.021\left(\frac{4}{5}\right) = 0.8168 \\ y^{(1)} &=-0.021y^{(0)}+1.021\left(\frac{-5-x^{(1)}+z^{(0)}}{5}\right) = 0+1.021\left(\frac{-5-0.8168}{5}\right)=-1.188 \\ z^{(1)} &=-0.021z^{(0)}+1.021\left(\frac{ 6+y^{(1)}}{5}\right) = 0+1.021\left(\frac{6-1.188}{5}\right) = 0.9826 \\ \end{align*}\] Continuing, the iteration converges quickly to the exact solution \(x=1\), \(y=-1\), \(z=1\):


| \(k\) | \(x^{(k)}\) | \(y^{(k)}\) | \(z^{(k)}\) |
|---|---|---|---|
| 0 | 0.000 | 0.000 | 0.0000 |
| 1 | 0.8168 | -1.188 | 0.9826 |
| 2 | 1.042 | -1.008 | 0.9984 |
| 3 | 1.001 | -1.001 | 1.000 |
| 4 | 1.000 | -1.000 | 1.000 |
| 5 | 1.000 | -1.000 | 1.000 |
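The SOR sweep is equally short in code. The sketch below (variable names our own) applies the scheme above with \(\omega = 1.021\), updating each unknown with the newest values available, and reaches the exact solution \(x=1\), \(y=-1\), \(z=1\) within a few iterations:

```python
# SOR for Question 1.19 with relaxation factor omega = 1.021.
w = 1.021
x = y = z = 0.0  # starting vector x^(0) = (0, 0, 0)
for k in range(4):
    x = (1 - w) * x + w * (4 - y) / 5
    y = (1 - w) * y + w * (-5 - x + z) / 5
    z = (1 - w) * z + w * (6 + y) / 5
print(round(x, 4), round(y, 4), round(z, 4))  # close to 1, -1, 1
```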