**Gauss reduction**

So far we have seen that we can translate a system of linear equations into a matrix. We can manipulate the matrix in ways which correspond to operations on the equations that keep the important information in the system the same (i.e. the solutions of the equations before and after the operations are the same). We have seen a couple of examples where, having performed the operations, we can read off the solution from the matrix. So far the order in which we perform the operations feels a bit arbitrary, although we know that we would like to get the matrix into reduced row echelon form. There is, however, a very systematic way of going about this, and the process is called *Gauss Reduction*.

Here is a detailed description of the Gauss Reduction procedure:

**Gauss Reduction:**

To solve a system of linear equations:

1) First find the augmented coefficient matrix of the system of equations.

2) Then use the three elementary row operations:

- E1) Swap two rows.

- E2) Multiply a row by a non-zero constant.

- E3) Replace any row with itself plus a constant multiple of any other row.

3) to reduce the augmented coefficient matrix to reduced row echelon form:

- RREF1) All the non-zero rows (i.e., rows with at least one non-zero element) are above any rows that contain only zeroes.

- RREF2) The pivot in every non-zero row is 1.

- RREF3) Every pivot is (strictly) to the right of every pivot in the rows above it.

- RREF4) Every pivot is the only non-zero entry in its column.

When we do this, we usually:

- G1) Work from left to right, one column at a time.

- G2) If the column contains only 0s, skip it.

- G3) Otherwise, choose the first non-zero entry in that column that is a pivot (i.e., that is the first non-zero entry in its row). Use the elementary row operations to make that entry 1 and everything else in that column 0.

- G4) Move any rows of 0s to the bottom.

4) Read off the solution
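The procedure above can be sketched in code. The following is a minimal Python sketch (not from the notes) that applies the elementary row operations E1, E2 and E3, following steps G1 to G3. It uses exact rational arithmetic via `fractions.Fraction` so that no rounding errors creep in.

```python
from fractions import Fraction

def rref(matrix):
    """Reduce an augmented coefficient matrix (given as a list of rows)
    to reduced row echelon form using the elementary row operations."""
    m = [[Fraction(x) for x in row] for row in matrix]
    n_rows, n_cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(n_cols):           # G1) work left to right, one column at a time
        pr = next((r for r in range(pivot_row, n_rows) if m[r][col] != 0), None)
        if pr is None:
            continue                    # G2) column has only 0s below the pivots: skip it
        m[pivot_row], m[pr] = m[pr], m[pivot_row]       # E1) swap two rows
        piv = m[pivot_row][col]
        m[pivot_row] = [x / piv for x in m[pivot_row]]  # E2) scale the row so the pivot is 1
        for r in range(n_rows):         # G3) clear everything else out of the column
            if r != pivot_row and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot_row])]  # E3)
        pivot_row += 1
        if pivot_row == n_rows:
            break
    # G4) happens automatically here: rows that end up all zero are never swapped upwards.
    return m
```

For instance, the augmented matrix of the system 2x + y = 5, x − y = 1 reduces to [[1, 0, 2], [0, 1, 1]], i.e. x = 2, y = 1.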

We now do some examples.

**Example 1)** Let’s use Gauss Reduction to find the solution to the system of linear equations

We can see from this that we are working in three dimensional space, and we have 3 equations (each corresponding to a two dimensional plane). Generically these will intersect at a point. However, in some special cases this will not be so. The simplest case to think of is when the planes are parallel. If they are parallel and separated, then there won’t be anywhere that all three planes intersect. If they are actually the same plane, then there will be an infinite number of solutions (i.e. intersection points). There is another option: even when no two of the planes are parallel, the three normal vectors may be coplanar, that is, they may all lie in a two dimensional plane themselves. In this case the three planes may intersect in a line (or, again, have no common intersection at all). This happens precisely when the three equations are not linearly independent. Linear independence here means that we can’t form any one of the equations as a linear combination of the other two; if we can, then there will not be a single solution. Here’s an example of three planes intersecting in a line, even though they are not parallel planes.
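The distinction between these cases can be checked numerically by comparing matrix ranks: the system has a solution exactly when the coefficient matrix and the augmented matrix have the same rank, and the solution is unique only when that rank equals the number of unknowns. Here is a small sketch with made-up coefficients (not the planes pictured here), where the third plane's equation is the sum of the first two:

```python
import numpy as np

# Three hypothetical planes in 3D; the coefficients are invented for illustration.
# Row 3 = row 1 + row 2, so the three normal vectors are coplanar.
A = np.array([[1.0,  1.0, 1.0],
              [1.0, -1.0, 2.0],
              [2.0,  0.0, 3.0]])
b = np.array([3.0, 1.0, 4.0])   # right-hand sides, with b[2] = b[0] + b[1]

rank_A  = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))

# rank_A == 2 < 3: the equations are not linearly independent.
# rank_A == rank_Ab: the system is consistent, so the planes meet in a line
# (infinitely many solutions). If rank_Ab were larger than rank_A,
# there would be no common intersection at all.
```

Changing `b[2]` to anything other than 4 would make `rank_Ab` equal to 3 while `rank_A` stays at 2, which is the "no common intersection" case.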

And here’s an example where none of the planes are parallel but there are no solutions:

Note that the figures above are not from this example; they simply show that three planes in three dimensions don’t always give what you might expect. The above equations actually correspond to these three intersecting planes:

Right, let’s now convert this into matrix form and solve it using Gauss reduction:

First, we find the augmented coefficient matrix:

Now, we use the elementary row operations to reduce the matrix to reduced row echelon form. We start at the left with column 1. The first number in column 1 — the 1 in row 1, column 1 — is the pivot for row 1. We then use elementary row operations to clear the rest of column 1:

This matrix now corresponds to this set of planes, two of which are different, but they still intersect in the same place…

We’re finished with column 1, so we turn our attention to column 2. The first entry in column 2 that is a pivot is -5. However, if we bear in mind what we want the matrix to eventually look like, it will make things easier if we first use an elementary row operation of the first kind to swap rows 2 and 3. We can then use the 4 in column 2 as a pivot for row 2 instead:

Multiplying or dividing a row by a non-zero constant doesn’t change the planes.

Now the number 1 is the first pivot in column 2. We clear the rest of column 2:

Now this looks like:

We are now finished with column 2. The number 6 is the (first and only) pivot in column 3, and we proceed as before.

which now looks like

This matrix is in reduced row echelon form. The solution is

so indeed we have a single point in three dimensions which corresponds to the intersection point of the three planes.
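A quick numerical cross-check of this kind of result is easy with NumPy. Here is a sketch with a made-up 3×3 system (not the one above) whose three planes meet at a single point:

```python
import numpy as np

# Hypothetical system, invented for illustration:
#   x +  y +  z =  6
#       2y + 5z = -4
#  2x + 5y -  z = 27
A = np.array([[1.0, 1.0,  1.0],
              [0.0, 2.0,  5.0],
              [2.0, 5.0, -1.0]])
b = np.array([6.0, -4.0, 27.0])

x = np.linalg.solve(A, b)   # the single intersection point of the three planes
```

Substituting the resulting point back into each equation confirms it satisfies all three, which is exactly what reading off the solution from the reduced row echelon form gives.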

**Example 2)** Let’s use Gauss Reduction to find the solution to the system of linear equations

We see here that this is rather different from the previous example. We are clearly in five dimensions (because these are equations in five variables), but there are only four equations. A single equation corresponds to a four dimensional surface in five dimensions. Two equations will intersect in a three dimensional surface (as long as they are linearly independent), three equations in a two dimensional surface, and four equations in a one dimensional intersection, i.e. a line in five dimensions. We previously came up with a formalism to describe lines using vectors, and we didn’t specify how many dimensions we were in, so we should be able to use this same formalism to come up with a vector equation for the line of intersection of these four surfaces in five dimensions. Pretty amazing, huh!?

In fact in this example we will find that because not all of the equations are linearly independent, we will be able to reduce this system to three equations. This will correspond to a two dimensional surface in five dimensions. To define this, we will use a parametric solution, but it will have two parameters which will move us about the surface…we will look at this in more detail when we come across it…

First, we find the augmented coefficient matrix of the above set of equations:

We begin, as usual, with column 1. Since we want the entry in row 1, column 1 to be 1, let’s swap rows 1 and 2:

Now we clear the rest of column 1.

We are finished with column 1. Before we go any further, notice that rows 2 and 3 are equal! This means we can do the following:

The fact that we now have only three non-zero rows means that one of the equations could be written as the sum of the others. This means that we can describe the whole system by three equations only. Again, three independent equations in five dimensions give us a two dimensional surface of solutions.

Now we follow this up by moving the row of zeroes to the bottom. At the same time, to get ready for the next step, we divide row 2 by -1.

We now turn our attention to column 2. The pivot we need is the number 1 in row 2, column 2. We then clear the rest of column 2:

We are done with column 2. The first pivot in column 3 is the number $-7$. We start by dividing row 3 by $-7$:

And now we clear everything else out of column 3.

This matrix is in reduced row echelon form. The solution is

But this is rather interesting. We have constraints on three of the variables, but it looks like the other two can take on any values. Let’s rewrite this in a slightly different way which makes that clear.

The fact that there are two free parameters means that we have a two dimensional surface of solutions. In fact we can make the solution a little neater using vectors:
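The structure of such a parametric solution can be verified numerically. The sketch below uses a made-up reduced row echelon form (not the matrix from this example) with pivots in the first three columns and two free variables; it checks that the particular solution plus any combination of the two direction vectors satisfies the system:

```python
import numpy as np

# A hypothetical RREF, invented for illustration, with free variables x4 and x5:
# [ 1 0 0  2 -1 | 3 ]
# [ 0 1 0 -1  4 | 1 ]
# [ 0 0 1  0  2 | 5 ]
# Reading off: x1 = 3 - 2*x4 + x5,  x2 = 1 + x4 - 4*x5,  x3 = 5 - 2*x5.
# Setting x4 = s and x5 = t gives the vector form x = p + s*v1 + t*v2:
p  = np.array([ 3.0,  1.0,  5.0, 0.0, 0.0])  # particular solution (s = t = 0)
v1 = np.array([-2.0,  1.0,  0.0, 1.0, 0.0])  # direction for the free parameter s
v2 = np.array([ 1.0, -4.0, -2.0, 0.0, 1.0])  # direction for the free parameter t

A = np.array([[1, 0, 0,  2, -1],
              [0, 1, 0, -1,  4],
              [0, 0, 1,  0,  2]], dtype=float)
b = np.array([3.0, 1.0, 5.0])

# Every choice of the parameters s and t gives a valid solution:
for s, t in [(0, 0), (1, -2), (3.5, 7)]:
    x = p + s * v1 + t * v2
    assert np.allclose(A @ x, b)
```

Note that `A @ v1` and `A @ v2` are both zero vectors: the direction vectors solve the homogeneous system, which is why adding any multiple of them to the particular solution still solves the full system.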

**Example 3)** In this example, we shall not explain what we are doing step by step. To indicate the fact that each augmented coefficient matrix is equivalent to the next (i.e., the associated systems of linear equations have the same solution), we place an equivalence symbol between them.

Let’s use Gauss Reduction to solve

As usual, we reduce the augmented coefficient matrix to reduced row echelon form.

This is in reduced row echelon form. The solution is
