I'm a lecturer at the University of Cape Town in the department of Mathematics and Applied Mathematics. I teach mathematics both at undergraduate and at honours levels and my research interests lie in the intersection of applied mathematics and many other areas of science, from biology and neuroscience to fundamental particle physics and psychology.

## Curves for the Mathematically Curious – an anthology of the unpredictable, historical, beautiful and romantic, by Julian Havil – a review

NB I was sent this book as a review copy.

What a beautiful idea. What a beautiful book! In studying mathematics, one comes across various curves while studying calculus, or number theory, or geometry, but they appear as asides to the particular subject. The idea of flipping the script – looking at the curves themselves and from them gaining insight into statistics, combinatorics, number theory, analysis, cryptography, fractals, Fourier series, axiomatic set theory and so much more – is just wonderful.

This book looks at ten carefully chosen curves and from them shows how much insight one can get into vast swathes of mathematics and mathematical history. The curves chosen are:

1. The Euler Spiral – an elegant spiral which leads to many other interesting parametrically defined curves
2. The Weierstrass Curve – an everywhere continuous but nowhere differentiable function
3. Bézier Curves – which show up in computer graphics and beyond
4. The Rectangular Hyperbola – which leads to the investigation of logarithms and exponentials
5. The Quadratrix of Hippias – which is tightly linked to the impossible problems of antiquity
6. Peano’s Function and Hilbert’s Curve – space-filling curves which lead to a completely flipped understanding of the possibilities of infinitely thin lines
7. Curves of Constant Width – curves which can perfectly fit down a hallway as they rotate.

## Tales of Impossibility – The 2000 year quest to solve the mathematical problems of antiquity, by David S. Richeson – a review

NB I was sent this book as a review copy.

Four impossible puzzles, all described in detail during the height of classical Greek mathematics. All simple to state, and yet so tempting that it took not only the brain power of many, many thousands of mathematicians (amateur and professional alike), but also two millennia, to show that however hard you may try, these puzzles simply cannot be solved. The puzzles are:

• Squaring the circle: With only a compass and a straight edge, draw a square with the same area as that of a given circle.
• Doubling the cube: With only a compass and a straight edge, draw the edge of a cube with volume twice that of a cube whose edge is given.
• Constructing regular polygons: Given a compass and a straight edge, construct a regular n-gon in a given circle for $n\ge 3$.
• Trisecting an angle: Given a compass and a straight edge, and a given angle, construct an angle that is one third of the original.

## What’s the shortest known Normal Number?

Well, the answer is that any normal number has to be infinitely long, so the real question is: what is the most compact form a normal number can possibly take?

I was motivated to look into this from a lovely Numberphile video about all the real numbers.

Normal numbers in base 10 are, loosely speaking, those in whose base 10 decimal expansion you can find every natural number – the precise definition asks that every finite string of digits appears with the frequency you would expect from random digits.

Champernowne’s number is a very simple example of this, written by concatenating the natural numbers:

0.12345678910111213…etc.

I thought that it might be interesting to see whether one could write a more compact normal number using a procedure similar to Champernowne’s. I haven’t seen this done anywhere else. For example, in the expression above you don’t need to include the 12 explicitly, as it’s already there at the beginning. You could write

0.12345678910113

So you skip the 12, and 11 followed by 13 can share a digit, compressing to 113. We will do all of this just with the list of digits, rather than the number in base 10.…
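The procedure just described can be sketched in a few lines of Python – a greedy version of the construction above (the construction only; this says nothing about proving the result normal):

```python
def compact_digits(n_max):
    """Greedily build a digit string containing every natural number up to
    n_max as a substring: skip numbers that already appear somewhere, and
    when appending, overlap as many shared digits as possible."""
    digits = ""
    for n in range(1, n_max + 1):
        s = str(n)
        if s in digits:
            continue  # e.g. 12 already sits at the start of 123456789...
        # Reuse the longest suffix of `digits` that is a prefix of s.
        for k in range(min(len(s), len(digits)), 0, -1):
            if digits.endswith(s[:k]):
                digits += s[k:]
                break
        else:
            digits += s
    return digits

print(compact_digits(13))  # the digit string for 0.12345678910113
```

Running it up to 13 reproduces the digit string above: the explicit 12 is skipped, and 11 and 13 share a digit.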

## On the invariant measure in special relativity

I’m writing this for my string theory class. We are basing our lectures on Zwiebach – A First Course in String Theory, and starting off with special relativity. Not everybody in the class has a physics background (pure and applied mathematics students), and so there are likely to be questions which come up which show where I have to fill in some knowledge. We had a question about the invariant measure in special relativity (SR) and why there was a different sign in front of the time term compared with the space terms. I’ll do my best to explain here. Note that I am not explaining it in the precise chronological order of discoveries.
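As a signpost for where this is heading, the invariant quantity in question is the interval between two nearby events, written here in the $(-,+,+,+)$ sign convention (some texts use the overall opposite sign):

```latex
% Interval between nearby events; every inertial observer computes the same value.
ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2
```

That relative minus sign between the time term and the space terms is exactly what the question was about: it is what distinguishes the Lorentz transformations, which preserve $ds^2$, from ordinary rotations, which preserve the Euclidean distance $dx^2+dy^2+dz^2$.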

We start the picture off with relativity before SR – that is, Galilean relativity. This simply states that the laws of motion are the same in all inertial (non-accelerating) frames. That may straightaway sound like SR, but there’s a crucial ingredient missing which we will see in a bit.…

## The definite integral

I realise now, in all the excitement of the FTC, that I hadn’t written a post about the definite integral… that’s shocking! OK, here we go. The plan for this post:

• Look at our Riemann sums and think about taking a limit of them
• Define the definite integral
• Look at a couple of theorems about the definite integral
• Do an example
• Look at properties of definite integrals

That’s quite a lot, but we are more or less going to follow along with Stewart. Stewart just has a slightly different style to mine, so I recommend reading his for more detail, and mine for potentially a bit more intuition.

So, let’s begin…

We have seen in previous lectures/sections/semesters/lives that we can approximate the area under a curve by splitting it up into rectangular regions. Here are examples of splitting one function up into rectangles (and, in the last case, trapezoids, but you don’t have to worry about those).…
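As a concrete sketch of the rectangle picture (using $f(x)=x^2$ on $[0,1]$ as an assumed example – not one of the functions in the figures):

```python
def riemann_sum(f, a, b, n):
    """Approximate the area under f on [a, b] with n rectangles,
    taking heights at the right endpoint of each subinterval."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(1, n + 1))

# The exact area under x^2 on [0, 1] is 1/3; more rectangles get closer.
approx = riemann_sum(lambda x: x**2, 0, 1, 1000)
```

With 1000 rectangles the approximation agrees with $1/3$ to about three decimal places, and the definite integral is precisely the limit of this kind of sum as $n\rightarrow\infty$.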

## The Fundamental Theorem of Calculus part 2 (part ii)

OK, get ready for some Calculus-Fu!

We have now said that rather than taking pesky limits of Riemann sums to calculate areas under curves (i.e. definite integrals), all we need to do is find an antiderivative of the function that we are looking at.

As a reminder, to calculate the definite integral of a continuous function, we have:

$\int_a^b f(x)dx=F(b)-F(a)$

where $F$ is any antiderivative of $f$

Remember that to calculate the area under the curve of $f(x)=x^4$ from, let’s say 2 to 5, we had to write:

$\int_2^5 x^4 dx=\lim_{n\rightarrow \infty}\sum_{i=1}^n f(x_i)\Delta x=\lim_{n\rightarrow \infty} \sum_{i=1}^n f\left(2+\frac{3i}{n}\right)\frac{3}{n}=\lim_{n\rightarrow\infty}\frac{3}{n}\sum_{i=1}^n\left(2+\frac{3i}{n}\right)^4$

And at that point we had barely even started because we still had to actually evaluate this sum, which is a hell of a calculation…then we have to calculate the limit. What a pain.

Now, we are told that all we have to do is to find any antiderivative of $f(x)=x^4$ and we are basically done.

Can we find a function which, when we take its derivative gives us $x^4$?…
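Numerically, the payoff is easy to see (a sketch assuming the antiderivative $F(x)=x^5/5$, which is where this question is heading):

```python
def riemann_right(f, a, b, n):
    """Right-endpoint Riemann sum with n rectangles."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(1, n + 1))

f = lambda x: x**4
F = lambda x: x**5 / 5        # F'(x) = x**4, so F is an antiderivative of f

via_sum = riemann_right(f, 2, 5, 100_000)  # the painful route, truncated at n = 100000
via_ftc = F(5) - F(2)                      # the FTC route: 3093/5 = 618.6
```

The two values agree to a couple of decimal places, and the FTC route is a single subtraction instead of a hundred-thousand-term sum.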

## The Fundamental Theorem of Calculus part 2 (part i)

OK, now we come to the part of the FTC that you are going to use most. We are finally going to show the direct link between the definite integral and the antiderivative. I know that you’ve been holding your breath until this moment. Get ready to breathe a sigh of relief:

The Fundamental Theorem of Calculus, Part 2 (also known as the Evaluation Theorem)

If $f$ is continuous on $[a,b]$ then

$\int_a^b f(x) dx=F(b)-F(a)$

where $F$ is any antiderivative of $f$, i.e. any function such that $F'=f$.

————-

This means that, very excitingly, now to calculate the area under the curve of a continuous function we no longer have to do any ghastly Riemann sums. We just have to find an antiderivative!

OK, let’s prove this one straight away.

We’ll define:

$g(x)=\int_a^x f(t)dt$

and we know from the FTC part 1 how to take derivatives of this. It’s just $g'(x)=f(x)$. This says that $g$ is an antiderivative of $f$.…

## The Fundamental Theorem of Calculus part 1 (part iii)

So, we are now ready to prove the FTC part 1. We’re going to follow the proof in Stewart and add in some discussion as we go along to motivate what we are doing. What we are going to prove is that:

$\frac{d}{dx} \int_a^x f(t) dt=f(x)$

for $x\in [a,b]$ when $f$ is continuous on $[a,b]$.

Proof:

We define $g(x)=\int_a^x f(t)dt$ and we want to find the derivative of $g$. We will do this using the limit definition of the derivative, so let’s look at calculating this function at $x$ and at $x+h$ – i.e. how much does it change when we change $x$ by a little bit?

$g(x+h)-g(x)=\int_a^{x+h}f(t) dt-\int_a^x f(t) dt$

But remember that the definite integral is just the area, so this difference is the area between $a$ and $x+h$ minus the area between $a$ and $x$, which is just the area between $x$ and $x+h$. Using the properties of integrals, we can write this formally as:

$g(x+h)-g(x)=\int_a^{x+h}f(t) dt-\int_a^x f(t) dt=\left(\int_a^{x}f(t)dt+\int_x^{x+h}f(t)dt\right)-\int_a^{x}f(t)dt=\int_x^{x+h}f(t)dt$

and we can write, for $h\ne 0$:

$\frac{g(x+h)-g(x)}{h}=\frac{1}{h}\int_x^{x+h}f(t)dt$

Restated, we can think of this as the area between $x$ and $x+h$, divided by $h$.…
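This limiting argument can be checked numerically (a sketch assuming $f(t)=t^2$ and $a=0$, which are not values from the post):

```python
def integral(f, a, b, n=100_000):
    """Midpoint-rule approximation to the definite integral of f on [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

f = lambda t: t**2
g = lambda x: integral(f, 0, x)   # g(x) is the area under f from 0 to x

x, h = 1.0, 1e-4
slope = (g(x + h) - g(x)) / h     # close to f(x) = 1, as the FTC predicts
```

Shrinking $h$ further drives the difference quotient towards $f(x)$, exactly the limit taken in the proof.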