“Integration sounds like interrogation and that scares me”

I recently received a message from a friend, and the heading of this post perfectly describes what was said to me. It was followed by an interesting integration question, which read as follows:

 

[image: the integral in question]

 

I must admit, it does look quite scary. My immediate thought was that some sort of substitution was required, but I really had no idea as to where and how this should be done. Two pages and a headache later, I thought to myself: why don't I get a rough idea of what the answer should look like? Once again, let's start approximating things (as it turns out, my approximation gets me exactly the correct answer).

I looked at the integration bounds and noted that the point x = 3 was in fact the midpoint. I then decided to construct a Taylor polynomial for the above function around the point x = 3. It was done as follows:

 

[image: the Taylor polynomial constructed about x = 3]

 

You might be wondering why I didn't bother taking any more derivatives…
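Since the original integrand and bounds are in the image above, here is only a rough sketch of the technique in Python (with sympy), using a hypothetical integrand f(x) = sqrt(1 + sin(x)) and hypothetical bounds [1, 5] whose midpoint is x = 3: Taylor-expand about the midpoint, then integrate the resulting polynomial.

```python
# A rough sketch of the idea, not the original problem: the actual integrand and
# bounds were in the image above, so f(x) and the bounds [1, 5] below are
# hypothetical stand-ins (their midpoint is x = 3, as in the post).
import sympy as sp

x = sp.symbols('x')
f = sp.sqrt(1 + sp.sin(x))          # hypothetical integrand
a, b = 1, 5                         # hypothetical bounds, midpoint x = 3

# Taylor polynomial of f about the midpoint of the integration range.
taylor = sp.series(f, x, 3, 3).removeO()        # terms up to (x - 3)^2

# Integrating the polynomial is easy, and gives an approximation to the integral.
approx = sp.integrate(taylor, (x, a, b))
reference = sp.Integral(f, (x, a, b)).evalf()   # numerical value for comparison

print(float(approx), float(reference))
```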

By | December 23rd, 2016|Uncategorized|1 Comment

The least preferred, but maybe the most understandable way of approximating π

Why \pi? I assume this is the question on everyone's mind (whether you're a Math lover or not).

The simple answer would be that we all love pie, now don’t we?

Before I begin discussing any technicalities, I’d like to acknowledge that it is possible for some of us to find the concepts easy whilst others might struggle with them. This is the reason why I’m choosing to speak in a very simple and understandable manner. (I’m baby proofing my post!)

Firstly, let us have a look at the Maclaurin series of \arctan(x).

Aside: A Maclaurin series approximates a function by a polynomial around the point x = 0. The level of accuracy decreases as you move further away from x = 0. The only way to get an exact answer rather than an approximation is to let the sum go to infinity.

Below is a table of the first few derivatives of \arctan(x)…
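To see where this is heading, here is a small Python sketch (not from the post) that uses the end result, the Maclaurin series \arctan(x) = x - x^3/3 + x^5/5 - \cdots, together with \arctan(1) = \pi/4, to approximate \pi. The very slow convergence helps explain the "least preferred" in the title.

```python
# A minimal sketch: partial sums of the arctan Maclaurin series at x = 1,
# multiplied by 4, give a (very slowly converging) approximation of pi.
import math

def pi_from_arctan_series(terms):
    """Approximate pi using 4 * (partial sum of the arctan Maclaurin series at x = 1)."""
    total = 0.0
    for n in range(terms):
        total += (-1) ** n / (2 * n + 1)
    return 4 * total

for terms in (10, 100, 10_000):
    approx = pi_from_arctan_series(terms)
    print(terms, approx, abs(approx - math.pi))
```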

By | December 21st, 2016|Uncategorized|1 Comment

Maxwell’s Equations

Essentially, the entire theory of electromagnetism can be found in the following four equations:

\begin{aligned}\mathbf{\nabla \cdot E} &= \frac{\rho}{\epsilon_{0}} \\ \mathbf{\nabla \times E} &= - \frac{\partial{\mathbf{B}}}{\partial{t}}\\ \mathbf{\nabla \cdot B} &= 0\phantom{\frac{1}{2}}\\ \mathbf{\nabla \times B} &= \mu_{0} \mathbf{j}+\mu_{0} \epsilon_{0} \frac{\partial{\mathbf{E}}}{\partial{t}} \end{aligned}

These are Maxwell's equations in differential form, rather than the integral form in which they are often first introduced. I will discuss them in this form, however, as I believe the differential equations convey their physical meaning more elegantly, straight from the mathematics. Let's get started.


Fields

If you ever did high school physics, you should have some idea of what electric and magnetic fields are. Below is an example of each (depicted using field lines):

[image: example electric and magnetic field lines]

What these field lines show is the direction of the respective fields at each location (indicated by the arrows) as well as their relative strengths (indicated by the density of field lines). So what are these fields actually representing? Well, for electric fields, the field lines show you the direction in which a positively charged object would be pushed, while a negatively charged object would feel a force in the opposite direction…
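As a quick illustration of the first equation (this is not from the post), here is a small Python check that the divergence of a point charge's field vanishes away from the charge, where the charge density \rho is zero. Constants are dropped, so the field is simply E = r/|r|^3 in arbitrary units.

```python
# A small numerical check of div E = rho / eps_0: away from a point charge the
# charge density rho is zero, so the divergence of its field should vanish there.
import numpy as np

def point_charge_E(x, y, z):
    """Field of a unit point charge at the origin, E = r / |r|^3 (constants dropped)."""
    r3 = (x**2 + y**2 + z**2) ** 1.5
    return np.array([x, y, z]) / r3

def divergence(field, x, y, z, h=1e-5):
    """Central finite-difference estimate of div(field) at the point (x, y, z)."""
    dEx = (field(x + h, y, z)[0] - field(x - h, y, z)[0]) / (2 * h)
    dEy = (field(x, y + h, z)[1] - field(x, y - h, z)[1]) / (2 * h)
    dEz = (field(x, y, z + h)[2] - field(x, y, z - h)[2]) / (2 * h)
    return dEx + dEy + dEz

print(divergence(point_charge_E, 1.0, 2.0, 0.5))   # ~ 0 away from the charge
```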

By | December 1st, 2016|Uncategorized|0 Comments

Fashion, Faith and Fantasy in the New Physics of the Universe, by Roger Penrose – a review

 

Roger Penrose is unquestionably a giant of 20th century theoretical physics. He has been enormously influential in diverse areas of both mathematics and physics, from the nature of spacetime to twistor theory, to geometrical structures and beyond. His famous, but perhaps less well-accepted, theories on quantum consciousness, the collapse of the wave function, and visible imprints of cyclic cosmologies on our universe are thought-provoking, to say the least.

I will preface this review of his latest book “Fashion, Faith and Fantasy in the New Physics of the Universe” (FFaFitNPotU) with a slight detour to talk about his book “The Road to Reality” (TRtR), as there are some interesting contrasts and similarities. I see TRtR as a fascinating attempt to teach a large swathe of mathematics and physics from the ground up (wherever the ground really is). The book is some 1000 pages long, and goes at quite a pace through a number of very complicated topics, but it is enough, I believe, for the keen high school student to get an idea of some of the most important areas of mathematical physics…

By | November 5th, 2016|Book reviews, Reviews|1 Comment

A Linear algebra problem

I have this linear algebra problem in the context of quantum mechanics. Let \mathbf{f}_\lambda be a family of linear operators, so that to each \lambda \in \mathbb{R} we have a linear operator \mathbf{f}_\lambda : \mathcal{H} \to \mathcal{H}, where \mathcal{H} is a complex vector space if one is unfamiliar with functional analysis (like I am), or a Hilbert space if one is. Let's suppose that this family is differentiable.

Suppose further that \mathbf{f}_\lambda is always a Hermitian operator. Suppose that \mathbf{f}_\lambda has a discrete spectrum of eigenvalues f_1(\lambda), f_2(\lambda), \cdots. I need to show the following:

Theorem

D_\lambda f_n(\lambda) = \left\langle f_n(\lambda)\right|D_\lambda \mathbf{f}_\lambda \left|f_n(\lambda) \right\rangle

Now here is a “proof”. It is not quite rigorous, since there are probably a lot of technical details regarding functional analysis that I'm missing out on, but:

Proof We begin by differentiating the eigenvalue equation \mathbf{f}_\lambda \left| f_n(\lambda) \right\rangle = f_n(\lambda) \left| f_n(\lambda) \right\rangle with respect to \lambda using the product rule:

(D_\lambda \mathbf{f}_\lambda) \left|f_n(\lambda)\right\rangle + \mathbf{f}_\lambda (D_\lambda \left| f_n(\lambda) \right \rangle) = (D_\lambda f_n(\lambda)) \left| f_n(\lambda) \right \rangle + f_n(\lambda) (D_\lambda \left| f_n(\lambda) \right\rangle)

After multiplying by \left\langle f_n(\lambda) \right| and rearranging terms we have the following:

\left\langle f_n(\lambda)\right| D_\lambda f_n(\lambda) \left|f_n(\lambda)\right\rangle = \left\langle f_n(\lambda)\right| D_\lambda \mathbf{f}_\lambda \left| f_n(\lambda) \right\rangle + \left\langle f_n(\lambda)\right| \mathbf{f}_\lambda (D_\lambda \left|f_n(\lambda)\right\rangle) - \left\langle f_n(\lambda)\right| f_n(\lambda) (D_\lambda \left|f_n(\lambda)\right\rangle)

Now we can take the adjoint of both sides of the eigenvalue equation to get that \langle f_n(\lambda)| \mathbf{f}_\lambda = f_n(\lambda) \langle f_n(\lambda)|, since f_n(\lambda)^* = f_n(\lambda) because the eigenvalues of a Hermitian operator are real…
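As a sanity check (this is not part of the original argument), the theorem is easy to verify numerically for a finite-dimensional family, say \mathbf{f}_\lambda = A + \lambda B with A and B Hermitian matrices, so that D_\lambda \mathbf{f}_\lambda = B. A minimal sketch in Python:

```python
# Numerical check: compare a finite-difference derivative of the eigenvalues of
# f(lambda) = A + lambda*B with the expectation values <f_n(lambda)| B |f_n(lambda)>.
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

n = 4
A, B = random_hermitian(n), random_hermitian(n)
f = lambda lam: A + lam * B          # the operator family; D_lambda f_lambda = B

lam, d = 0.3, 1e-6
_, vecs = np.linalg.eigh(f(lam))     # eigenvectors |f_n(lambda)>

# Left-hand side: finite-difference derivative of each eigenvalue
# (eigh sorts eigenvalues, which is fine as long as they do not cross near lambda).
lhs = (np.linalg.eigvalsh(f(lam + d)) - np.linalg.eigvalsh(f(lam - d))) / (2 * d)

# Right-hand side: <f_n(lambda)| D_lambda f_lambda |f_n(lambda)> = <n| B |n>
rhs = np.real(np.diag(vecs.conj().T @ B @ vecs))

print(np.max(np.abs(lhs - rhs)))     # should be tiny (finite-difference error only)
```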

By | October 12th, 2016|Uncategorized|0 Comments

Dependent Types

This blog post will carry on from the previous one and introduce dependent types. So what is a dependent type? To motivate the idea let's talk about equality. Remember that we interpret propositions as types, so if we have x, y : A then the statement "x is equal to y" corresponds to some type, let's call it x =_A y. This type depends on its values: for example, we expect to be able to prove (i.e. construct) 3 =_{\mathbb{N}} 3, but not to be able to prove 2 =_{\mathbb{N}} 3, and so we will have an equality type that depends on its values. This idea is also being explored in various programming languages. These languages have a type like \mathrm{Vec}(x, A), where l : \mathrm{Vec}(x, A) means that l is a list of x elements from the type A. Since the length of the list is part of its type, which is known ahead of time, it is impossible to ask questions like, "What is the first element of this empty list?" Indeed, dependent types are so powerful that one can write a compiler and be sure that the compiler preserves the meaning of a program…
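As a rough illustration (in Lean 4, which is not mentioned in the post), both examples above can be written down directly. Note that the argument order of Vec below differs from the \mathrm{Vec}(x, A) notation used here.

```lean
-- A minimal sketch of the two examples above.

-- The equality type depends on its values: `3 = 3` has a proof, `2 = 3` does not.
example : 3 = 3 := rfl
-- example : 2 = 3 := rfl   -- rejected by the type checker

-- Length-indexed vectors: the length is part of the type.
inductive Vec (A : Type) : Nat → Type where
  | nil  : Vec A 0
  | cons : {n : Nat} → A → Vec A n → Vec A (n + 1)

-- `head` only accepts vectors of length n + 1, so "the first element of an
-- empty list" is a type error rather than a runtime error.
def Vec.head {A : Type} {n : Nat} : Vec A (n + 1) → A
  | Vec.cons a _ => a
```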

By | October 11th, 2016|Uncategorized|0 Comments

Checking direction fields

I was recently asked about how to spot which direction field corresponds to which differential equation. I hope that by working through a few examples here we will get a reasonable intuition as to how to do this.

Remember that a direction field is a method for visualising the general behaviour of the solutions of a first order differential equation. Given an equation of the form:

 

\frac{dy}{dx}=f(x,y)

 

where f is any function of x and y, a solution to this differential equation must be some function (or indeed family of functions) whose gradient satisfies the above relationship.

The first such equation that we looked at was the equation:

 

\frac{dy(x)}{dx}=x+y(x).

 

We are trying to find some function, or indeed family of functions, y(x) which satisfies this equation. We need to find a function whose derivative (y'(x)) at each point x is equal to the value of the function (i.e. y(x)) plus the value of x…
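As an illustration (not from the original post), here is a minimal Python/matplotlib sketch of how one might draw the direction field for this equation: at each grid point, draw a short arrow whose slope is f(x, y) = x + y.

```python
# A minimal direction-field sketch for dy/dx = x + y.
import numpy as np
import matplotlib.pyplot as plt

f = lambda x, y: x + y                      # right-hand side of dy/dx = f(x, y)

x, y = np.meshgrid(np.linspace(-3, 3, 21), np.linspace(-3, 3, 21))
slopes = f(x, y)

# Unit-length arrows in the direction (1, slope), so only the slope is shown.
norm = np.sqrt(1 + slopes**2)
plt.quiver(x, y, 1 / norm, slopes / norm, angles='xy')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Direction field for dy/dx = x + y')
plt.show()
```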

By | October 11th, 2016|Courses, First year, MAM1000, Undergraduate|1 Comment

Cellular Automaton


By | October 4th, 2016|Uncategorized|0 Comments

Group Theory in a Nutshell for Physicists, by Tony Zee – A review

I studied group theory for the first time around 15 years ago at the beginning of my PhD. There were six of us in the class, and I found it both a magical and a mysterious subject. We had a great lecturer, but the course was set up for theoretical physicists, for whom the tools were more important than the construction of the tools, and so a lot of ideas were left as mysterious boxes where the right answers were guaranteed so long as the algorithm was correctly followed.

Tony Zee is known for his incredible ability to lead the student on a path from little knowledge to an intuitive understanding of a topic, in a seemingly painless process. His books are not necessarily the most technically rigorous (note that this doesn't mean that they are wrong, but that the appropriate level of detail is chosen for the new learner so that the overarching ideas aren't fogged by unnecessary complication), but they are, in my opinion, some of the best texts I've ever come across for taking a learner from nothing to a working knowledge with which they can perform calculations…

By | September 11th, 2016|Book reviews, Reviews|1 Comment

Radius of convergence of a series, and approximating polynomials

I hinted today that there are sometimes issues when you do a polynomial approximation: if you try to find the value of a function a long way from the point about which you're approximating, sometimes you won't be able to do it. This is related to an idea called the radius of convergence of a series. In the following we are just plotting polynomials, but you can see that in the polynomial approximation for sin(x) (on the right), as we get more and more terms, we approximate the function better and better far away from the point x=1 (which is the point about which we are approximating the function). However, for the function \sqrt{1+x}, beyond x=3 the approximations are nowhere near the function itself. This is because that function has a radius of convergence of 2 when expanded about x=1. This is due to the behaviour of the function at x=-1, which is a distance 2 away…
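As a small numerical illustration (a sketch, not the plots from the post), here is Python/sympy code that evaluates Taylor polynomials of \sqrt{1+x} about x = 1 at a point inside the radius of convergence and at a point outside it.

```python
# Taylor polynomials of sqrt(1 + x) about x = 1, evaluated inside and outside the
# radius of convergence (which is 2, set by the behaviour at x = -1).
import sympy as sp

x = sp.symbols('x')
f = sp.sqrt(1 + x)

inside, outside = 2.5, 4.0            # |2.5 - 1| < 2, while |4.0 - 1| > 2
for degree in (2, 5, 10, 20):
    poly = sp.series(f, x, 1, degree + 1).removeO()
    print(degree,
          float(poly.subs(x, inside)),    # converges towards sqrt(3.5) ~ 1.8708
          float(poly.subs(x, outside)))   # drifts away as the degree grows
```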

By | August 19th, 2016|Courses, First year, MAM1000, Undergraduate|0 Comments