I came across the following YouTube video a while back, which uses a strange trick to accurately approximate square roots. I suggest watching at least the first minute, where the presenter explains how it's done:

Let’s do an example. Approximate to 2 decimal places: \sqrt{40}

First, YouTube tells us to find the nearest perfect square that's less than 40 (that's 36) and take its root, giving us 6. So our answer is, obviously, 6 point something. That something is a fraction, where the numerator is the difference between 40 and 36, and the denominator is 2 times 6. So we have:

 \sqrt{40} \approx \sqrt{36} +\frac{4}{12} = 6+ \frac{1}{3} \approx 6.33

So what’s the actual answer to 2 decimals? It’s 6.32. That’s what I like to call: pretty darn close. Naturally, there are a few catches to this technique: you need to know your perfect squares, you need to know your fractions, and things tend to get hard with larger numbers.
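
To make the recipe concrete, here is a minimal Python sketch of the forward method (the name forward_estimate is my own; the video doesn't give it one):

```python
import math

def forward_estimate(n):
    """Approximate sqrt(n) from the nearest perfect square at or below n."""
    root = math.isqrt(n)                 # floor of the true square root
    return root + (n - root ** 2) / (2 * root)

print(forward_estimate(40))              # 6.333..., versus math.sqrt(40) = 6.3245...
```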

But still, I thought this was surprisingly effective for such a simple piece of arcane trickery. A few questions came to mind:

  • How accurate are these estimates?
  • Can we do better?
  • Why on Earth does it even work in the first place?!?

If none of those questions intrigue you, this post probably won’t be your cup of tea. In the event that you are even slightly curious, keep reading.

To answer the first question, if you experiment a bit with this method you’ll quickly get the idea that the accuracy changes with different inputs, but follows some sort of pattern. So I wrote some Python code to calculate these square root approximations of all whole numbers up to 100. Plotting the errors of these estimates, we can see what’s happening under the hood:

[Plot: error of the forward estimate for whole numbers up to 100]
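
For the curious, here's a sketch of the kind of script behind this plot, assuming matplotlib is available (forward_estimate is repeated so the snippet runs on its own):

```python
import math
import matplotlib.pyplot as plt

def forward_estimate(n):
    root = math.isqrt(n)
    return root + (n - root ** 2) / (2 * root)

numbers = range(1, 101)
errors = [forward_estimate(n) - math.sqrt(n) for n in numbers]

plt.plot(numbers, errors)
plt.xlabel("number being rooted")
plt.ylabel("estimate minus true square root")
plt.show()
```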

It’s a glorified zig-zag, but one thing is immediately clear: the bigger the number, the better the approximation. Small numbers don’t play well with this method; the square root of 3 is the worst. You’ll also notice that the smallest errors can be found immediately after perfect squares, but as you approach the next perfect square, things get ugly. At perfect squares themselves, the error is zero (obviously).

Now, the practical question: can we do better? Duh, use a calculator. But in your head? Well, I had the idea that this method could be done in reverse, and thought it might help. The basics of this backward method are the same, except now you pick the nearest perfect square above the number you want to root, which means you’ll be subtracting the fraction.

Quick example. Approximate \sqrt{96} using this method. The next perfect square is 100, whose root is 10. As for the fraction, the numerator is the difference between 100 and 96, and the denominator is 2 times 10. So we have:

 \sqrt{96} \approx \sqrt{100} -\frac{4}{20} = 10- \frac{1}{5} = 9.80

This time, that happens to be the correct answer to 2 decimal places. 96 was far away from the previous perfect square, 81, but close to 100, so maybe we got an accurate estimate because we employed the backward version instead of the forward one. Hold that thought. Let’s compare the accuracy of these methods:

[Plot: errors of the forward and backward estimates compared]
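
For completeness, the backward version in code is just the mirror image of the forward one (again, the name backward_estimate is mine):

```python
import math

def backward_estimate(n):
    """Approximate sqrt(n) from the nearest perfect square at or above n."""
    root = math.isqrt(n)
    if root * root < n:                  # n isn't a perfect square, so step up
        root += 1
    return root - (root ** 2 - n) / (2 * root)

print(backward_estimate(96))             # 9.8, versus math.sqrt(96) = 9.798...
```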

These two methods are the perfect couple: where one lacks, the other thrives. We could now make much better approximations if we used whichever method yields the smaller error in each case. To do that, we can simply make a rule: find the perfect square closest to the number we want to root; if it’s below our number, apply the forward estimate; if it’s above, apply the backward estimate. So when rooting 40, we see it’s closer to 36 than to 49, so we apply the forward estimate. For 96, the opposite is true. By doing things this way, the error of this “hybrid” method is somewhat diminished:

[Plot: error of the hybrid estimate]
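
In code, the hybrid rule is a single comparison on top of the two helpers above; here's a sketch that runs on its own:

```python
import math

def hybrid_estimate(n):
    """Use whichever perfect square is closer to n (ties go to the one below)."""
    lower = math.isqrt(n)                            # root of the square below (or equal)
    upper = lower if lower ** 2 == n else lower + 1  # root of the square above
    if n - lower ** 2 <= upper ** 2 - n:             # closer to the square below
        return lower + (n - lower ** 2) / (2 * lower)
    return upper - (upper ** 2 - n) / (2 * upper)

print(hybrid_estimate(40))   # forward branch: 6.333...
print(hybrid_estimate(96))   # backward branch: 9.8
```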

Ok, there are still peaks, but these sit where the forward and backward estimates intersect; notice how small the error values now are compared to earlier.

This is where I end my analysis. With this hybrid method you can make fairly precise mental estimates of arbitrary square roots, but anyone familiar with these errors can easily improve their approximations further. Notice how all these errors are positive, i.e. the estimate is always greater than (or equal to) the actual value. Also, the peaks of the hybrid estimate now sit in the middle of the regions between perfect squares. So to improve your guess, you could subtract something small (usually 0.01 or 0.02) when square rooting a number in those “middle regions”, to account for the expected error. If we’d done this with the first example earlier, we might’ve got the answer spot on. It’s a bit of a thumb-suck, since it depends on where you are on the number line and how close you are to the peak, but this is not an exact science, so thumb-suck away.
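
If you’d rather not eyeball it, one way to bake that correction in might look like the sketch below; the 0.01 fudge and the “middle half” cut-off are arbitrary choices of mine, not a rule from the analysis above:

```python
import math

def hybrid_estimate(n):
    lower = math.isqrt(n)
    upper = lower if lower ** 2 == n else lower + 1
    if n - lower ** 2 <= upper ** 2 - n:
        return lower + (n - lower ** 2) / (2 * lower)
    return upper - (upper ** 2 - n) / (2 * upper)

def corrected_estimate(n):
    """Hybrid estimate minus a small fudge in the middle regions between squares."""
    est = hybrid_estimate(n)
    lower, upper = math.isqrt(n) ** 2, (math.isqrt(n) + 1) ** 2
    gap = upper - lower
    if lower + gap / 4 < n < upper - gap / 4:    # roughly the "middle region"
        est -= 0.01
    return est

print(corrected_estimate(40))                    # 6.323..., versus sqrt(40) = 6.3245...
```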

Finally, the important question: why does this even work? The answer is: because we’re secretly taking a linear approximation of the square root function. In other words, we’re drawing the tangent line to the square root curve at a point, and then approximating its neighbours using the tangent line instead of the curve. To be very clear:

[Figure: the tangent line to the square root curve, i.e. the linear approximation]

You can get the formula for this line straight out of the Taylor series expansion, but a little algebra and a dash of calculus would get you the same result:

 f(x) \approx  f(x_0) + f'(x_0)(x-x_0)

We could apply it to any function we like, but for now we’re only interested in square roots. Taking f(x) = \sqrt{x}, whose derivative is f'(x) = \frac{1}{2\sqrt{x}}, the formula becomes:

 \sqrt{x} \approx  \sqrt{x_0} + \frac{x-x_0}{2\sqrt{x_0}}

Ring a bell? If x is what we’re rooting, and x_0 is a nearby perfect square, then this is just a mathy way of saying what our YouTube friend was telling us from the beginning!
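
As a quick sanity check, plugging the first example (x_0 = 36, x = 40) into the tangent-line formula reproduces the YouTube estimate exactly:

```python
import math

x0, x = 36, 40
tangent = math.sqrt(x0) + (x - x0) / (2 * math.sqrt(x0))
print(tangent)           # 6.333..., the same as the forward estimate from earlier
print(math.sqrt(x))      # 6.3245..., the true value
```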

No arcane trickery after all. Maths just works.
