So we’ve now looked at a couple of different functions and found polynomials which approximate them to different levels of accuracy. Let’s try to come up with a general method for doing this. Let’s say that we have some function f(x) and we want to approximate it close to x=a. We will then assume that we can write the polynomial approximation as:


\sum_{i=0}^n c_i (x-a)^i


Note that previously we wrote a_i, but it’s good to get used to notation that varies slightly from place to place. The context is what should tell you the meaning.

We want to have that:


f(x)\approx \sum_{i=0}^n c_i (x-a)^i


We will first ask that the value of the polynomial is equal to the value of the function at x=a. We do this by setting x=a on both sides of the above. Note that we are being slightly ambiguous in what we mean by the approximation here, because in a moment we will go from a \approx sign to an = sign. This is because while the polynomial is only an approximation, we want certain properties of the two to hold exactly at x=a. OK, setting x=a and asking that this is exact, we have:


f(a)=c_0


since every term with i\ge 1 contains a factor of (x-a) and so vanishes at x=a. Thus c_0=f(a).

Now take a derivative of the above approximation and we have:


f'(x)\approx \sum_{i=1}^n c_i i (x-a)^{i-1}


The i=0 term was a constant and vanished. Now again we ask that the approximation is exact at x=a:


f'(a)=c_1


since every term with i\ge 2 still contains a factor of (x-a). Thus c_1=f'(a).

Now take another derivative:


f''(x)\approx \sum_{i=2}^n c_i i(i-1) (x-a)^{i-2}


Again, there was a constant term, the one with the c_1 coefficient, which vanished upon differentiation. Now set x=a and we get:


f''(a)=2 c_2


Thus c_2=\frac{f''(a)}{2}
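
As a quick sanity check of this coefficient (an example not in the derivation above): take f(x)=\cos x about a=0. Then f''(x)=-\cos x, so


c_2=\frac{f''(0)}{2}=-\frac{1}{2}


which matches the familiar approximation \cos x\approx 1-\frac{x^2}{2}.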

Take another derivative:


f'''(x)\approx \sum_{i=3}^n c_i i(i-1)(i-2) (x-a)^{i-3}


Set x=a:


f'''(a)=3\times 2 c_3


Thus c_3=\frac{f'''(a)}{3\times 2}. Again:


f^{(4)}(x)\approx \sum_{i=4}^n c_i i(i-1)(i-2)(i-3) (x-a)^{i-4}


where we’ve used the notation f^{(4)}(x) to mean the fourth derivative of f(x). Set x=a:


f^{(4)}(a)=4\times 3\times 2 c_4.


Thus c_4=\frac{f^{(4)}(a)}{4\times 3\times 2}.


The next iteration will give:


f^{(5)}(a)=5\times 4\times 3\times 2 c_5, which gives c_5=\frac{f^{(5)}(a)}{5\times 4\times 3\times 2}.


In fact we can see a general pattern emerging, and we can write:

c_i=\frac{f^{(i)}(a)}{i!}. If this is the case, then we can plug these constants back into our original polynomial and we have:


f(x)\approx \sum_{i=0}^n \frac{f^{(i)}(a)}{i!} (x-a)^i


This is the Taylor approximation for f(x) about x=a.

So, what does this mean? It means that we can write down a polynomial approximation for a function about x=a if we can calculate its derivatives at the point x=a. This is a very powerful statement, because polynomials are very easy to deal with, whereas our original expression may be very hard to deal with. The more terms we include, the more accurate our approximation will be. If we only include one term, then our approximation is only valid very, very close to x=a. The more terms we include, the more closely the polynomial will approximate the function further away from x=a. We have to be careful, because sometimes this doesn’t work if our original function is badly behaved in particular ways, but for the functions we will look at, this will be fine.

OK, so what do we need to do for a given function? Well, we simply need to calculate the values of the derivatives of the function at x=a and then plug these values into the expression above.
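
This recipe is easy to turn into a short computation. Here is a minimal sketch in Python (the function name taylor_eval and its interface are my own, not part of the text above): given the list of derivative values f(a), f'(a), f''(a), and so on, it evaluates the Taylor polynomial at a point x.

```python
from math import factorial

def taylor_eval(derivs, a, x):
    """Evaluate sum_{i=0}^{n} f^(i)(a)/i! * (x - a)^i, where derivs is
    the list of derivative values [f(a), f'(a), ..., f^(n)(a)]."""
    return sum(d / factorial(i) * (x - a) ** i for i, d in enumerate(derivs))

# Example: for f(x) = e^x about a = 0 every derivative at 0 equals 1,
# so six terms already give a good approximation of e^0.5.
approx = taylor_eval([1, 1, 1, 1, 1, 1], a=0, x=0.5)
```

With derivs = [1, 1, 1, 1, 1, 1] this is exactly the degree-5 Maclaurin polynomial for e^x that we work out below.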



The Maclaurin expansion for f(x)=e^x

Let’s look at the function f(x)=e^x and write it as a Taylor polynomial about the point x=0 (i.e. a Maclaurin polynomial). In order to do this we have to calculate the values of the derivatives of the function at x=0. We set up a table:


\begin{array}{ccc}   i & f^{(i)}(x) & f^{(i)}(0) \\   0 & e^x & 1 \\   1 & e^x & 1 \\   2 & e^x & 1 \\   3 & e^x & 1 \\   4 & e^x & 1 \\   5 & e^x & 1 \\  \end{array}


In this case it is incredibly simple to work out the derivatives of the function and their values at x=0 because they are all the same. Now we can plug this into the expression for the Taylor polynomial and find:


e^x\approx \sum_{i=0}^n \frac{x^i}{i!}

where this is only an approximation of e^x if we take a finite number of terms. In this case, if we let n\rightarrow \infty we get an exact expression for e^x:


e^x=\sum_{i=0}^\infty \frac{x^i}{i!}

In the following figure you can see how the first six terms in the Maclaurin polynomial add up to get a function which is a better and better approximation of the exponential function.


It is also very important to note that the further away from 0 (which is where we expanded the function) we are, the worse our approximation is, for any fixed n. For x\approx 0 even n=1 will give a reasonable approximation to the value of e^x. The further from x=0 we go, the higher the n we need to get a good approximation to the function value. We can see from the equation itself that there is a balance at play. For x close to zero, the higher order terms in the polynomial die off quickly, because for small x the powers x^i get smaller and smaller as i increases. However, for larger x (say x=2), the value of x^i increases as i gets larger, and so we have to wait until the i! in the denominator gets large enough to make the terms \frac{x^i}{i!} small. Of course it’s not quite as simple as that, as we are not just looking at individual terms but adding them together. The subject of how quickly these terms in the polynomials die off and can be ignored is a subject in and of itself.
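
We can see this balance numerically. The sketch below (my own illustration, using only Python's standard library) counts how many terms of the partial sum \sum x^i/i! are needed before it agrees with e^x to within 10^{-6}:

```python
from math import exp, factorial

def terms_needed(x, tol=1e-6):
    """Count how many terms of sum x^i / i! are needed before the
    partial sum is within tol of e^x."""
    partial = 0.0
    for i in range(200):
        partial += x ** i / factorial(i)
        if abs(partial - exp(x)) < tol:
            return i + 1
    return None

# Close to x = 0 only a few terms are needed; further away we need more.
```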

What can we do with this? Well, the simplest thing that we can do is to get a value of e in an algebraic way. By putting in x=1 we find that:


e=\sum_{i=0}^\infty \frac{1}{i!}=1+1+\frac{1}{2}+\frac{1}{6}+\frac{1}{24}+\dots

You can thus calculate e by hand to arbitrary precision.

Isn’t it pretty amazing that a function which has a particular property related to growth (its derivative is equal to the function itself) and takes an irrational value can be written like this as a sum of terms, all of which are rational? The point is that to get the exact value you have to add up an infinite number of terms, but here we are happy with the approximate value.
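
We can make the rationality of the partial sums explicit with Python's exact Fraction type (a sketch of my own; the choice of ten terms is arbitrary):

```python
from fractions import Fraction
from math import factorial

# Add the first ten terms of e = sum 1/i! as exact rational numbers.
partial = Fraction(0)
for i in range(10):
    partial += Fraction(1, factorial(i))

# partial is the exact rational 98641/36288, already within about
# 3e-7 of the true value of e.
```

Every partial sum along the way is an exact fraction; only the infinite sum itself is irrational.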

How clear is this post?