A linear algebra problem

I have this linear algebra problem in the context of quantum mechanics. Let \mathbf{f}_\lambda be a family of linear operators: to each \lambda \in \mathbf{R} we associate a linear operator \mathbf{f}_\lambda : \mathcal{H} \to \mathcal{H}, where \mathcal{H} is a complex vector space if one is unfamiliar with functional analysis (as I am), or a Hilbert space if one is. Let’s suppose that this family is differentiable in \lambda.

Suppose further that \mathbf{f}_\lambda is always a Hermitian operator, and that \mathbf{f}_\lambda has a discrete spectrum of eigenvalues f_1(\lambda), f_2(\lambda), \ldots. I need to show the following:


D_\lambda f_n(\lambda) = \left\langle f_n(\lambda)\right|D_\lambda \mathbf{f}_\lambda \left|f_n(\lambda) \right\rangle

Now here is a “proof.” It is not quite rigorous, since there are probably a lot of technical details regarding functional analysis that I’m missing out on, but:

Proof We begin by differentiating the eigenvalue equation \mathbf{f}_\lambda \left| f_n(\lambda) \right\rangle = f_n(\lambda) \left| f_n(\lambda) \right\rangle with respect to \lambda using the product rule:

(D_\lambda \mathbf{f}_\lambda) \left|f_n(\lambda)\right\rangle + \mathbf{f}_\lambda (D_\lambda \left| f_n(\lambda) \right \rangle) = (D_\lambda f_n(\lambda)) \left| f_n(\lambda) \right \rangle + f_n(\lambda) (D_\lambda \left| f_n(\lambda) \right\rangle)

After multiplying on the left by \left\langle f_n(\lambda) \right| and rearranging terms, we have the following:

\left\langle f_n(\lambda)\right| D_\lambda f_n(\lambda) \left|f_n(\lambda)\right\rangle = \left\langle f_n(\lambda)\right| D_\lambda \mathbf{f}_\lambda \left| f_n(\lambda) \right\rangle + \left\langle f_n(\lambda)\right| \mathbf{f}_\lambda (D_\lambda \left|f_n(\lambda)\right\rangle) - \left\langle f_n(\lambda)\right| f_n(\lambda) (D_\lambda \left|f_n(\lambda)\right\rangle)

Now we can take the adjoint of both sides of the eigenvalue equation to get that \langle f_n(\lambda)| \mathbf{f}_\lambda = f_n(\lambda) \langle f_n(\lambda)|, since f_n(\lambda)^* = f_n(\lambda) because the eigenvalues of a Hermitian operator are real.…
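As a sanity check (not a proof), one can test the identity numerically in finite dimensions. Below is a minimal sketch, assuming a hypothetical Hermitian family \mathbf{f}_\lambda = H_0 + \lambda V of 3\times 3 matrices, so that D_\lambda \mathbf{f}_\lambda = V; it compares a finite-difference derivative of an eigenvalue with the expectation value \left\langle f_n(\lambda)\right| V \left|f_n(\lambda)\right\rangle:

```python
import numpy as np

# A differentiable Hermitian family f(lam) = H0 + lam * V (illustrative example).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H0 = (A + A.conj().T) / 2   # Hermitian part of A
V = (B + B.conj().T) / 2    # Hermitian part of B; D_lam f(lam) = V here

def f(lam):
    return H0 + lam * V

lam, h = 0.7, 1e-6
# eigh returns real eigenvalues in ascending order for a Hermitian matrix
vals, vecs = np.linalg.eigh(f(lam))
n = 0                # track the lowest eigenvalue f_n(lam)
ket = vecs[:, n]     # normalised eigenvector |f_n(lam)>

# Central finite-difference approximation to D_lam f_n(lam)
fd = (np.linalg.eigh(f(lam + h))[0][n] - np.linalg.eigh(f(lam - h))[0][n]) / (2 * h)

# Expectation value <f_n(lam)| D_lam f_lam |f_n(lam)> = <f_n|V|f_n>
hf = (ket.conj() @ V @ ket).real

print(abs(fd - hf) < 1e-5)  # True
```

The two numbers agree up to finite-difference error, as the identity predicts for a non-degenerate eigenvalue.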

By | October 12th, 2016|Uncategorized|0 Comments

Dependent Types

This blog post will carry on from the previous one, and introduce dependent types. So what is a dependent type? To motivate the idea let’s talk about equality. Remember that we interpret propositions as types, so if we have x, y : A then the statement “x is equal to y” corresponds to some type, let’s call it x =_A y. This type depends on its values: for example, we expect to be able to prove (i.e. construct an element of) 3 =_{\mathbb{N}} 3, but not to be able to prove 2 =_{\mathbb{N}} 3, and so we have an equality type that depends on its values. This idea is also being explored in various programming languages. These languages have a type like \mathrm{Vec}(x, A), where l : \mathrm{Vec}(x, A) means that l is a list of x elements from the type A. Since the length of the list is part of its type, and hence known ahead of time, it is impossible to ask questions like, “What is the first element of this empty list?” Indeed dependent types are so powerful that one can write a compiler and be sure that the compiler preserves the meaning of a program.…
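These ideas can be written down directly in a dependently typed language; the following sketch (in Lean 4 syntax, as one illustration) shows the equality type and a length-indexed vector:

```lean
-- 3 = 3 is provable by reflexivity; 2 = 3 has no proof.
example : 3 = 3 := rfl

-- A length-indexed vector: the length is part of the type.
inductive Vec (A : Type) : Nat → Type where
  | nil  : Vec A 0
  | cons : A → Vec A n → Vec A (n + 1)

-- head is only defined for vectors of length n + 1, so asking for
-- the first element of an empty vector is a type error, not a runtime error.
def Vec.head : Vec A (n + 1) → A
  | .cons a _ => a
```

Here the type checker itself rules out `Vec.head Vec.nil`, which is exactly the “first element of an empty list” question from above.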

By | October 11th, 2016|Uncategorized|0 Comments

Checking direction fields

I was recently asked about how to spot which direction field corresponds to which differential equation. I hope that by working through a few examples here we will get a reasonable intuition as to how to do this.

Remember that a direction field is a method for getting the general behaviour of a first order differential equation. Given an equation of the form:

\frac{dy}{dx} = f(x, y)

for any function f of x and y, the solution to this differential equation must be some function (or indeed family of functions) where the gradient of the function satisfies the above relationship.

The first such equation that we looked at was the equation:

\frac{dy}{dx} = y + x

We are trying to find some function, or indeed family of functions, y(x) which satisfies this equation. We need to find a function whose derivative (y'(x)) at each point x is equal to the value of the function (i.e. y(x)), plus that value of x.…
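One can check a candidate family against this description numerically. Below is a minimal sketch, assuming the equation y'(x) = y(x) + x and the trial family y(x) = Ce^x - x - 1 (a standard solution of that equation); it compares a finite-difference derivative of y with y + x at a few points:

```python
import math

def y(x, C):
    # trial solution y(x) = C * e^x - x - 1
    return C * math.exp(x) - x - 1

def dy_dx(x, C, h=1e-6):
    # central finite-difference approximation to y'(x)
    return (y(x + h, C) - y(x - h, C)) / (2 * h)

# Check y'(x) == y(x) + x for several members of the family and several x
ok = all(
    abs(dy_dx(x, C) - (y(x, C) + x)) < 1e-5
    for C in (-2.0, 0.0, 1.5)
    for x in (-1.0, 0.0, 2.0)
)
print(ok)  # True
```

Analytically the check is immediate: y'(x) = Ce^x - 1 and y(x) + x = Ce^x - x - 1 + x = Ce^x - 1, so every member of the family satisfies the equation.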

By | October 11th, 2016|Courses, First year, MAM1000, Undergraduate|1 Comment

Cellular Automaton

By | October 4th, 2016|Uncategorized|0 Comments