**Matrices**

Solving a system of linear equations is not technically difficult: just eliminate the variables in a systematic fashion. When there are only two or three variables, this is easy to manage. But for a bigger system, things can quickly get confusing. We need to develop a systematic method.

The first thing to notice is that the names of the variables don’t matter. Consider, for example, the two systems

and

It’s clear that if we ignore the names of the variables, these two systems are the same. The reason we can tell that they’re the same is that the *coefficients* of the variables are the same and the numbers on the right hand side are the same. These are really the only things about a system of linear equations that matter, and so what we can do is strip the system down to its bare bones and rewrite it like this:

This is an *augmented coefficient matrix* (in general, a rectangular array of numbers, like the above, is called a *matrix*; a matrix with an additional vertical line, which plays the same role as the equals signs in the original equations, is *augmented*). This matrix has two horizontal rows, one row for each equation in the system. It also has three vertical columns (one column for each of the two variables and one for the numbers on the right hand side of each linear equation). A matrix with two rows and three columns is called a 2×3 matrix.
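In code, an augmented coefficient matrix is naturally stored as a list of rows. A minimal sketch (the system used here is made up for illustration, not the one in the notes):

```python
# A hypothetical 2-equation system (coefficients invented for illustration):
#     2x + 3y = 5
#      x -  y = 1
# Each row of the augmented coefficient matrix holds the coefficients of
# one equation followed by the number on its right hand side.
augmented = [
    [2, 3, 5],   # row for 2x + 3y = 5
    [1, -1, 1],  # row for  x -  y = 1
]

rows = len(augmented)       # one row per equation
cols = len(augmented[0])    # one column per variable, plus one for the RHS
print(f"This is a {rows}x{cols} (augmented) matrix.")
```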

For example, the augmented coefficient matrix of the system

is the matrix

The augmented coefficient matrix

corresponds to the system of linear equations

We now know how to rewrite a system of linear equations as an augmented coefficient matrix. But how does that help us find the solution of that system? As we’ve already discussed, to find the solution of a simple system of linear equations like

we can rewrite the first equation to express one variable in terms of the other, and substitute this back into the second equation to get an equation from which that variable has been eliminated (and which is easy to solve). Here’s another way of achieving the same thing: instead of substituting to eliminate a variable, let’s combine the equations in a way that produces a simpler set of equations with the same solution.

We will first do this with the equations themselves, then come up with a set of rules for manipulating the equations, and then see how these rules correspond to operations on the augmented coefficient matrices. Eventually we will come up with a way to solve the equations without playing with the equations at all, but simply by using a series of moves to get the matrix into exactly the form we want.

Consider the system

1) Our first step to make the system simpler is to eliminate the first variable from the second equation. To do this, notice that if we multiply the whole of the first equation by a suitable constant and then add it to the second equation, then

which after simplification becomes an equation which does not contain the first variable.

2) Replace the original second equation with this new equation to get a modified system of linear equations:

3) Now we combine the two equations in this modified system to eliminate the second variable from the first one. All we have to do is subtract a suitable multiple of the second equation from the first:

which becomes, after simplification, an equation containing only the first variable.

4) Replace the first equation in the modified system of linear equations with this new equation, which does not contain the second variable, and we have the (further modified) system

which is also, of course, the solution of the system.
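The four steps above can be sketched as a short Python computation. Since the worked system itself is not reproduced here, the sketch uses a made-up system of the same shape:

```python
# A hypothetical system (invented for illustration):
#     x + 2y = 5      (equation 1)
#    2x +  y = 4      (equation 2)
# Each equation is stored as [a, b, c], meaning a*x + b*y = c.
eq1 = [1, 2, 5]
eq2 = [2, 1, 4]

def add_multiple(eq_a, eq_b, k):
    """Return eq_a + k * eq_b, combining two equations term by term."""
    return [a + k * b for a, b in zip(eq_a, eq_b)]

# Steps 1-2: eliminate x from equation 2 by adding -2 times equation 1,
# then simplify so the equation reads y = 2.
eq2 = add_multiple(eq2, eq1, -2)   # now [0, -3, -6], i.e. -3y = -6
eq2 = [v / eq2[1] for v in eq2]    # now [0, 1, 2],   i.e.   y = 2

# Steps 3-4: eliminate y from equation 1 by adding -2 times the new eq 2.
eq1 = add_multiple(eq1, eq2, -2)   # now [1, 0, 1],   i.e.   x = 1

print("x =", eq1[2], " y =", eq2[2])
```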

It’s interesting to see what all of this corresponds to geometrically. Of course, taking a single equation and multiplying it by a constant doesn’t change the line that it corresponds to (a line, in this case, since we’re working in two dimensions). However, adding equations together does give us new lines. This might sound a bit strange, but as we saw, the equations we get in the end, which are really combinations of the original ones, correspond to two new lines (one vertical and one horizontal) that tell us the solution of the original system.

What makes the method from this example preferable to the other one (substitution) is that we can employ essentially the same procedure with an augmented coefficient matrix. When we do, the most important thing to remember is that every row of an augmented coefficient matrix is essentially an equation. Anything which we can ‘legally’ do to a system of linear equations, we can do to its augmented coefficient matrix.

So what can we ‘legally’ do to a system of linear equations? We have to be clearer about that word ‘legal’: what we want to make sure of, when we do something to a system of linear equations, is that the resulting system has exactly the same solution as the original system. Two systems of linear equations with this property are called *equivalent*. It can be shown that if we do any of the following to a system of linear equations, then the resulting system is equivalent to the original:

1) Swap two equations.

2) Multiply an equation by a non-zero constant.

3) Replace an equation with the sum of that equation and a constant multiple of some other equation.

We can now translate these three operations on equations into an equivalent set of operations on the corresponding matrices. These operations are called the *Elementary Row Operations*:

If any of the following is done to an augmented coefficient matrix, then the resulting matrix has the same solution as the original:

E1) Swap two rows.

E2) Multiply a row by a non-zero constant.

E3) Replace any row with itself plus a constant multiple of any other row.

For clarity, we should mention that when we talk about the ‘solution’ of a matrix, we mean the solution of the associated system of linear equations.
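The three elementary row operations translate directly into code. A minimal sketch, with a matrix stored as a list of rows (the example matrix at the bottom is made up for illustration):

```python
def swap_rows(m, i, j):
    """E1: swap rows i and j."""
    m[i], m[j] = m[j], m[i]

def scale_row(m, i, c):
    """E2: multiply row i by a non-zero constant c."""
    assert c != 0, "scaling by zero is not an elementary row operation"
    m[i] = [c * x for x in m[i]]

def add_row_multiple(m, i, j, c):
    """E3: replace row i with itself plus c times row j (j != i)."""
    m[i] = [x + c * y for x, y in zip(m[i], m[j])]

# Example on a made-up 2x3 augmented matrix:
m = [[2, 4, 6],
     [1, 1, 1]]
scale_row(m, 0, 0.5)            # row 1 becomes [1, 2, 3]
add_row_multiple(m, 1, 0, -1)   # row 2 becomes [0, -1, -2]
print(m)
```

Because each function applies only a legal operation, any sequence of calls leaves the solution of the associated system unchanged.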

The augmented coefficient matrix of the system

is

We shall now use elementary row operations to reduce this system to a simpler system with the same solution. When we worked with equations, we produced a simpler system by eliminating variables. When working with an augmented coefficient matrix, this is equivalent to trying to reduce the matrix to one with lots of 0s. For convenience, we shall write R1 and R2 for the first and second rows of the augmented coefficient matrix, respectively.

1) Our first step in the previous example was to eliminate the first variable from the second equation. Our first step here will be to reduce the augmented coefficient matrix to one that has a 0 in the first element of the second row (the position in the matrix corresponding to the first variable in the second equation). To make sure that the reduced matrix has the same solution as the original, we have to use an elementary row operation, and so we replace R2 with R2 - 2R1 (an elementary row operation of the third kind). This gives us the matrix

(the notation R2 - 2R1 indicates that the second row of this matrix is equal to the second row of the previous matrix minus two times the first row of the previous matrix).

It’ll make things easier if the first non-zero entry in row 2 is 1 (rather than 3), so let’s fix that using an elementary row operation of the second kind:

(notice that R2 now refers to the second row of the previous matrix, and not to the second row of the original matrix).

In the previous example, our next step was to eliminate the second variable from the first equation, i.e., replacing R1 with a row that has a 0 in the second column. To do this with an elementary row operation, we replace R1 with R1 plus a suitable multiple of R2.

We’ve now got a very simple augmented coefficient matrix and we can just read off the solution:
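The whole reduction can be sketched in Python. Since the actual matrix is not reproduced above, this uses a made-up augmented matrix consistent with the operations described (R2 - 2R1 leaves a leading 3 in row 2, which is then scaled to 1):

```python
# A hypothetical augmented matrix (invented for illustration):
#     [ 1  2 |  5 ]       x + 2y = 5
#     [ 2  7 | 16 ]      2x + 7y = 16
m = [[1, 2, 5],
     [2, 7, 16]]

# Step 1: R2 -> R2 - 2*R1   (row operation of the third kind)
m[1] = [b - 2 * a for a, b in zip(m[0], m[1])]   # row 2: [0, 3, 6]

# Step 2: R2 -> (1/3)*R2    (second kind), so the leading entry becomes 1
m[1] = [x / 3 for x in m[1]]                     # row 2: [0, 1, 2]

# Step 3: R1 -> R1 - 2*R2   (third kind), clearing the second column
m[0] = [a - 2 * b for a, b in zip(m[0], m[1])]   # row 1: [1, 0, 1]

# Read off the solution from the reduced matrix:
print("x =", m[0][2], " y =", m[1][2])
```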

Let’s try the same technique — using elementary row operations to reduce an augmented coefficient matrix to a simpler one with the same solution — with a slightly bigger system:

This corresponds to a system of three planes in three dimensions, which look like:

The augmented coefficient matrix is

We begin by ‘clearing’ the first column (i.e., eliminating the first variable from every equation except the first):

Now we make things neater by making the first nonzero entry in each row equal to 1:

The next step is to eliminate the second variable from the first and third equations by clearing the second column:

Usually, we would now multiply row 3 by a constant so that its first nonzero entry becomes 1. However, that will just make the process of clearing the third column more laborious. We therefore proceed to clear the third column without first tidying up:

Finally, we tidy up row 3:

This is what we wanted — a simple augmented coefficient matrix from which we can read off the solution:

Try to go through each of the matrices and work out what they correspond to in terms of both equations and planes in 3d space. You will find that however much the equations change, they always correspond to three planes intersecting at the same point.
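The whole sequence of moves we just performed by hand can be sketched as a short Python routine built from the three elementary row operations. The 3-variable system below is made up for illustration (it is not the one in the notes), and exact fractions are used to avoid rounding:

```python
from fractions import Fraction

def gauss_reduce(m):
    """Simplify an augmented matrix (list of rows) as far as possible
    using only the three elementary row operations."""
    m = [[Fraction(x) for x in row] for row in m]
    pivot_row = 0
    for col in range(len(m[0]) - 1):   # never pivot in the RHS column
        # E1: find a row at or below pivot_row with a non-zero entry here
        for r in range(pivot_row, len(m)):
            if m[r][col] != 0:
                m[pivot_row], m[r] = m[r], m[pivot_row]
                break
        else:
            continue
        # E2: scale so the leading entry becomes 1
        p = m[pivot_row][col]
        m[pivot_row] = [x / p for x in m[pivot_row]]
        # E3: clear the rest of the column
        for r in range(len(m)):
            if r != pivot_row:
                k = m[r][col]
                m[r] = [x - k * y for x, y in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == len(m):
            break
    return m

# A made-up 3-variable system with solution x = 1, y = 2, z = 3:
#     x +  y +  z = 6
#    2x +  y + 3z = 13
#     x + 3y + 2z = 13
a = gauss_reduce([[1, 1, 1, 6],
                  [2, 1, 3, 13],
                  [1, 3, 2, 13]])
print([row[-1] for row in a])   # the solution column
```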

**Reduced row echelon form**

In the previous section, we used elementary row operations to reduce a system of linear equations to a simpler system in which it was easy to see the solution. This idea can be turned into a rigorous, systematic method — Gauss reduction — for finding the solution of a system of linear equations. But to start, we must decide what goal we have in mind when we use elementary row operations. We would like to produce a matrix that is simple enough that we can just read off the solution. With that in mind, we give some definitions.

**Definition**

The first non-zero entry in each row of a matrix is called a *pivot*.

In the matrix below

the numbers 1, 5, and 3 are the pivots in the 1st, 2nd, and 3rd rows, respectively. The fourth row does not have a pivot.

**Definition**

A matrix is said to be in *reduced row echelon form* if it satisfies all four of the following conditions:

1) All the non-zero rows (i.e., rows with at least one non-zero element) are above any rows that contain only zeroes.

2) The pivot in every non-zero row is 1.

3) Every pivot is (strictly) to the right of every pivot in the rows above it.

4) Every pivot is the only non-zero entry in its column.

Below is an example of a matrix that is in reduced row echelon form and several examples of matrices that are not.

This matrix is in reduced row echelon form.

Not in reduced row echelon form: there is a row of zeroes above a non-zero row.

Not in reduced row echelon form: some of the pivots are not equal to 1.

Not in reduced row echelon form: the pivot in row 3 is to the left of the pivot in row 2.

Not in reduced row echelon form: the pivot in row 2 is not the only non-zero element in its column. Similarly for the pivot in row 3.
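The four conditions can be checked mechanically. A minimal sketch (the function names and the matrices tested at the bottom are made up for illustration):

```python
def pivot_col(row):
    """Column index of the first non-zero entry in a row, or None."""
    for j, x in enumerate(row):
        if x != 0:
            return j
    return None

def is_rref(m):
    """Check the four reduced-row-echelon-form conditions."""
    pivots = [pivot_col(row) for row in m]
    # 1) all non-zero rows are above any rows of zeroes
    seen_zero_row = False
    for p in pivots:
        if p is None:
            seen_zero_row = True
        elif seen_zero_row:
            return False
    nonzero = [p for p in pivots if p is not None]
    # 2) the pivot in every non-zero row is 1
    if any(m[i][p] != 1 for i, p in enumerate(nonzero)):
        return False
    # 3) each pivot is strictly to the right of the pivots above it
    if any(b <= a for a, b in zip(nonzero, nonzero[1:])):
        return False
    # 4) every pivot is the only non-zero entry in its column
    for i, p in enumerate(nonzero):
        if any(m[r][p] != 0 for r in range(len(m)) if r != i):
            return False
    return True

print(is_rref([[1, 0, 2], [0, 1, 3]]))   # True
print(is_rref([[1, 2, 0], [0, 5, 1]]))   # False: pivot 5 is not 1
```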

If an augmented coefficient matrix is in reduced row echelon form, then it is easy to read off the solution. We’ve already seen two examples of this: The last augmented coefficient matrix in the first example was in reduced row echelon form, and similarly for the second example. In each of these two examples, the system had a unique solution. In general, when we find the solution of a system of linear equations, there are three possibilities:

1) The system has no solution.

2) The system has exactly one solution.

3) The system has infinitely many solutions.

We shall prove this in a few lectures’ time.
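The three possibilities can already be seen in the reduced row echelon form of an augmented matrix. A sketch (the function name and example matrices are invented for illustration): a row reading 0 = non-zero means no solution; otherwise, a pivot for every variable means a unique solution, and fewer pivots than variables leave free variables and infinitely many solutions.

```python
def classify(rref_matrix, num_vars):
    """Return 'none', 'unique', or 'infinite' for an augmented matrix
    already in reduced row echelon form (last entry of each row = RHS)."""
    pivots = 0
    for row in rref_matrix:
        coeffs, rhs = row[:-1], row[-1]
        if all(c == 0 for c in coeffs):
            if rhs != 0:
                return "none"       # the row says 0 = non-zero
        else:
            pivots += 1
    # fewer pivots than variables leaves at least one free variable
    return "unique" if pivots == num_vars else "infinite"

print(classify([[1, 0, 1], [0, 1, 2]], 2))   # unique
print(classify([[1, 2, 3], [0, 0, 0]], 2))   # infinite
print(classify([[1, 2, 3], [0, 0, 4]], 2))   # none
```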
