In this post we introduce two important concepts in multivariate calculus: the gradient vector and the directional derivative. Both extend the idea of the derivative of a function of one variable, each in a different way. The aim of this post is to clarify what these concepts are, how they differ, and to show that the directional derivative is maximised in the direction of the gradient vector.
The gradient vector
The gradient vector is simply a vector of partial derivatives. So to find it, we can 1) find the partial derivatives and 2) put them into a vector. So far so good. Let’s start on some familiar territory: a function of 2 variables.
That is, let $f$ be a function of 2 variables, $x$ and $y$. Then the gradient vector can be written as:

$$\nabla f = \begin{pmatrix} \dfrac{\partial f}{\partial x} \\[4pt] \dfrac{\partial f}{\partial y} \end{pmatrix}$$
For a more tangible example, let , then:
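Since the original worked example did not survive here, the same two-step recipe (take the partials, stack them into a vector) can be sketched symbolically. This is a hedged illustration using a hypothetical function $f(x, y) = x^2 + 3xy$, not the example from the original post:

```python
import sympy as sp

x, y = sp.symbols("x y")
# Hypothetical stand-in function, chosen only for illustration.
f = x**2 + 3*x*y

# The gradient is just the vector of partial derivatives.
grad_f = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])
print(grad_f)  # Matrix([[2*x + 3*y], [3*x]])
```

Each entry is one partial derivative: differentiating with respect to $x$ treats $y$ as a constant, and vice versa.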
So far, so good. Now we can generalise this to a function taking in a vector $\mathbf{x}$…