We’ve seen some intriguing things in this course so far, and we’ve developed some clever tricks, from how to find the gradient of just about any function we can throw at you, to proving statements to be true for an infinite number of cases.
To some extent, this is what we have looked at so far (at least in terms of calculus, and building up to calculus):
However, we’re about to see some magic. We’re about to see the most important thing yet on this course, and indeed one of the most important moments in all of mathematical history.
We are going to see…actually, we are going to prove, that there is a relationship between rates of change and the area under a graph. This doesn’t sound that amazing, but its consequences have essentially allowed for the development of much of modern mathematics over the last 350 years.
The link that we are going to prove will allow us to find the area under graphs of functions for which taking the Riemann sum would be really hard. The techniques that we develop will have major consequences for differential equations later on, and underpin the bulk of quantitative science over the last three centuries.
This will be the new picture:
It doesn’t sound like much, but believe me, it’s a big deal!
Ok, so let’s set up the problem.
At the moment, if I give you some function, let's say $f(x)$, and ask you to find the area under the graph between $x=a$ and $x=b$, you would have to:
- Split the area into rectangles, choosing either to use left-points, right-points or mid-points
- Write down an expression for the sum of the areas of the rectangles
- Take the limit as the rectangles become narrower and narrower, and their number tends to $\infty$
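The three steps above can be sketched in code. As an illustration (the function $f(x) = x^2$ on $[0, 1]$ is my own chosen example, not one from the text), here is a midpoint Riemann sum whose value approaches the true area as the number of rectangles grows:

```python
def riemann_midpoint(f, a, b, n):
    """Approximate the area under f between a and b using n midpoint rectangles."""
    width = (b - a) / n
    total = 0.0
    for i in range(n):
        mid = a + (i + 0.5) * width   # midpoint of the i-th subinterval
        total += f(mid) * width       # area of the i-th rectangle
    return total

f = lambda x: x ** 2
# As n grows, the sum approaches the exact area, 1/3.
for n in (10, 100, 1000):
    print(n, riemann_midpoint(f, 0, 1, n))
```

Taking the limit symbolically, as in step 3, is exactly the part that can get tricky by hand; the computer can only ever take $n$ large, not infinite.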
It may be that taking that limit is pretty tricky. What we are going to prove is that in fact there’s another way of finding this area, and that is using the opposite of the derivative – ie the antiderivative.
If we can find a function $F$ whose derivative is $f$, then it turns out that we are done! (more or less). This is going to come in the second part of the FTC.
Keep this in mind at all times: the antiderivative of a function $f$ is just a family of functions whose derivatives are all $f$. We say 'a family of functions' because of course you can add a constant onto any function without changing its derivative. Re-read this paragraph and make sure that you truly understand what it means.
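You can see this 'family' numerically. In the sketch below (my own example, $F(x) = x^3 + C$, not one from the text), a central-difference estimate of the derivative gives the same answer whatever constant $C$ we add:

```python
def numerical_derivative(F, x, h=1e-6):
    """Central-difference estimate of F'(x)."""
    return (F(x + h) - F(x - h)) / (2 * h)

# F(x) = x**3 + C: every member of the family has derivative 3*x**2.
for C in (0.0, 5.0, -17.0):
    F = lambda x, C=C: x ** 3 + C
    print(C, numerical_derivative(F, 2.0))  # all close to 3 * 2**2 = 12
```

The constant cancels in the difference $F(x+h) - F(x-h)$, which is precisely why it leaves the derivative untouched.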
The magic is going to be showing the relationship between the definite integral and the antiderivative – two objects which you wouldn't have thought were linked.
We will state here the fundamental theorem of calculus (part 1) and then prove it and explain it:
The Fundamental Theorem of Calculus, part 1
If $f$ is continuous on $[a, b]$, then the function $g$ defined by:

$$g(x) = \int_a^x f(t)\,dt, \qquad a \le x \le b$$

is continuous on $[a, b]$, differentiable on $(a, b)$, and $g'(x) = f(x)$.
ok, let’s state that in words.
- We have some function $f$ which is continuous on some closed interval $[a, b]$
- We define a new function $g$, which is just the area under the curve from the lower limit of the closed interval, $a$, up to $x$ (the argument of the function). It is a function which measures the area under the curve from $a$ to some point $x$ of your choosing.
- The rate of change of the area is just the original function that we are integrating: $g'(x) = f(x)$.
This needs to be digested properly, but first I'll give you a little animation.
ok, what are we looking at here?
We have a graph in blue of some function $f$. We are looking at the area under this curve, which is the region shaded in blue. We are varying the upper limit $x$ up to which we are looking for the area (you see the right-hand side of the region moving further to the right). At the same time we are plotting, in red, the total area from $0$ up to $x$. Of course, because the curve is always above the $x$-axis, this area is continuously increasing (as we increase $x$ we are just adding more area to it).
The fundamental theorem of calculus then says: The red curve is the function whose gradient is the blue curve. Ie. the red curve is an antiderivative of the blue curve. You can see for yourself. Think about the gradient of the red curve. It starts off small, then it peaks around 1, then it gets less steep again by 1.5.
The key take-home point however is the following:
If you take the derivative of a definite integral from $a$ to $x$ with respect to $x$, it gives back the function that you are integrating. This means that the integral really is undone by the derivative…ie. they really are inverse procedures.
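This claim can be checked numerically (an illustration, not a proof). In the sketch below (using $f(x) = \cos x$ and lower limit $0$ as my own assumed example), we build $g(x)$ as a midpoint Riemann sum of $f$, estimate $g'(x)$ by a central difference, and compare it with $f(x)$:

```python
import math

def g(x, n=10000):
    """Approximate the integral of cos from 0 to x with n midpoint rectangles."""
    width = x / n
    return sum(math.cos((i + 0.5) * width) for i in range(n)) * width

def g_prime(x, h=1e-4):
    """Central-difference estimate of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

x = 1.2
print(g_prime(x), math.cos(x))  # the two values agree closely
```

Differentiating the accumulated area really does hand back the integrand, just as the theorem says.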
Now, all of a sudden we have a reason to call the indefinite integral the antiderivative.
What we are going to do next is prove that this is indeed the case.
I’m going to let you digest this for now. It might seem odd and abstract at the moment, but it will become more and more concrete as we go on.