This blog post carries on from the previous one and introduces dependent types. So what is a dependent type? To motivate the idea, let’s talk about equality. Remember that we interpret propositions as types, so if we have $a, b : A$ then the statement “$a$ is equal to $b$” corresponds to some type, let’s call it $a =_A b$. This type depends on its values: for example, we expect to be able to prove (i.e. construct) $2 + 2 =_{\mathbb{N}} 4$, but not to be able to prove $2 + 2 =_{\mathbb{N}} 5$, and so we will have an equality type that depends on its values. This idea is also being explored in various programming languages. These languages have a type like $\mathsf{Vec}(A, n)$, which is the type of lists of $n$ elements from the type $A$. Since the length of the list is part of its type, which is known ahead of time, it is *impossible* to ask questions like, “What is the first element of this empty list?” Indeed, dependent types are so powerful that one can write a compiler and be sure that the compiler preserves the meaning of a program. Such generalisations of type systems are useful in programming: they allow us to structure information in more general and precise ways. Another reason dependent type theory is interesting is its elegance. Dependent type theory allows values to become part of types, and makes every type a value, thereby uniting both notions.

## The Universe

The first question we need to ask is, “What is the type of a type?” The reason this is important is that we want to be able to speak of functions which return a type. For example, we might have the type $\mathsf{List}(A)$, which is the type of finite lists of elements from $A$. So $\mathsf{List}$ is a function that takes a type as input and returns a type as output, and since every function requires an input type and an output type, we need to be able to answer that question. So, the type of a type is called “the Universe,” and is denoted $\mathcal{U}$. The thing is, we have to be careful about this self-reference. For example, in Set Theory, when we give ourselves the ability to form “a set of all sets that satisfy some property,” we get a contradiction called Russell’s paradox. In a similar way, we get a contradiction if we just say that every type has the type $\mathcal{U}$, which would imply that $\mathcal{U} : \mathcal{U}$, and this can be used to derive a paradox of self-reference (known as Girard’s paradox). However, the proof of this isn’t as elegantly intuitive as Russell’s paradox. So instead we postulate an *infinite hierarchy* of universes, where we have for example $\mathcal{U}_0 : \mathcal{U}_1$ and $\mathcal{U}_1 : \mathcal{U}_2$, and every universe is a member of the universe one step larger than it. But we will just sweep this complexity under the rug and say things like $A : \mathcal{U}$, leaving out the size of the universe; there are formal ways to deal with such ambiguity, but they are complicated and we have a different path.
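To make the hierarchy concrete, here is a sketch in Lean (my choice of proof assistant, not the post’s own notation), whose universes `Type 0 : Type 1 : Type 2 : …` mirror the infinite hierarchy described above:

```lean
#check (Nat : Type)       -- an ordinary type lives in the first universe
#check (Type : Type 1)    -- the first universe lives in the second
#check (Type 1 : Type 2)  -- and so on, forever
```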

Already we have added quite a lot of power to our programming language. For example, consider the type $\mathsf{Vec}(A, n)$ of finite vectors of $n$ elements, each of the type $A$. We may define it by recursion on $n$ like this:

$$\mathsf{Vec}(A, 0) :\equiv \mathbf{1}$$

$$\mathsf{Vec}(A, \mathsf{succ}(n)) :\equiv A \times \mathsf{Vec}(A, n)$$

Or in mathematical English, we define the type of vectors of length zero as the unit type $\mathbf{1}$. Why does this make sense? A vector of length zero contains no elements of $A$, and yet one should be able to trivially construct an empty vector without specifying any information. So, we define a vector of length zero in $A$ to be an element of $\mathbf{1}$, which can be trivially constructed.

The next definition says that a vector of length $\mathsf{succ}(n)$ in $A$ is the same as an element of $A$ paired with a vector of length $n$ in $A$.
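This definition by recursion can be written almost verbatim in Lean (an illustrative sketch; `Vec` is my name for it, using Lean’s `Unit` for $\mathbf{1}$ and `×` for the product):

```lean
-- Vec A n, defined by recursion on n exactly as above:
-- length zero is the unit type, length n+1 is a pair
def Vec (A : Type) : Nat → Type
  | 0     => Unit
  | n + 1 => A × Vec A n

example : Vec Nat 3 := (1, 2, 3, ())  -- a vector of length three
```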

## Dependent Functions

Dependent functions are like regular functions $f : A \to B$, except that the type of the output may depend on the input. For example, what would be the type of the function that concatenates the vectors that we defined earlier? Well, the function takes any $n, m : \mathbb{N}$, which are the lengths of the vectors, one value with the type $\mathsf{Vec}(A, n)$ and another value with the type $\mathsf{Vec}(A, m)$, and returns a value with the type $\mathsf{Vec}(A, n + m)$, since when you concatenate two vectors the length of the result is the sum of the lengths of each vector. So the type of the output returned by the function depends on the input.

**Formation Rule** So when can you talk about having a dependent function? Well, you need an input type $A$, and a type family $B : A \to \mathcal{U}$, where if you put in $x : A$, then the output will have the type $B(x)$. Given these two pieces of data we have the type $\prod_{x : A} B(x)$ of dependent functions, which take $x : A$ as input and return a $B(x)$ as output. This type is pronounced “pi $x$ in $A$, of $B(x)$,” or we may speak of a function which “takes (any) $x : A$ as input and returns a $B(x)$ as output.”

For example, let’s assume that you have a type $A$ and an element $a : A$. Consider the function $\mathsf{repeat}_a$, which takes a number $n$ and gives you back a list with $n$ copies of $a$; so for example, when we put in $3$ we get back $(a, a, a)$. Given those assumptions, what type will $\mathsf{repeat}_a$ have? Well, if we implement it correctly we will have $\mathsf{repeat}_a : \prod_{n : \mathbb{N}} \mathsf{Vec}(A, n)$, which means that $\mathsf{repeat}_a$ takes a number $n$ and outputs a vector of length $n$ with elements from $A$. The formation rule allows us to say that $\prod_{n : \mathbb{N}} \mathsf{Vec}(A, n)$ is a type, since $\mathsf{Vec}(A, -)$ is a type family over $\mathbb{N}$: if you plug in a natural number $n$, you get a type $\mathsf{Vec}(A, n)$. From now on, however, I will leave out formation rules; they are pretty obvious most of the time.
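The formation rule can be checked mechanically. In Lean (again a sketch; the Π-type $\prod_{n : \mathbb{N}} \mathsf{Vec}(A, n)$ is spelled `(n : Nat) → Vec A n`, and `Vec` is redefined here so the block stands alone):

```lean
def Vec (A : Type) : Nat → Type
  | 0     => Unit
  | n + 1 => A × Vec A n

-- the formation rule in action: this Π-type is itself a type
#check ((A : Type) → (n : Nat) → Vec A n)
```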

**Introduction Rule** To make a dependent function, just like with a regular function, you need an expression that will be the body of your dependent function; let’s call it $\Phi$ like last time. If you want your dependent function to have the type $\prod_{x : A} B(x)$, you need to make sure that whenever $x : A$, $\Phi : B(x)$. Given this assumption, $\lambda x.\, \Phi : \prod_{x : A} B(x)$.

**Elimination Rule** Assume $f : \prod_{x : A} B(x)$; then one may use the dependent function by applying it to input. So, given $a : A$, we have $f(a) : B(a)$. Let me give an example of this rule: remember the function $\mathsf{repeat}_a : \prod_{n : \mathbb{N}} \mathsf{Vec}(A, n)$. What happens if you plug in $3$ for the input $n$? Well, you get a vector of length 3 in $A$ called $\mathsf{repeat}_a(3) : \mathsf{Vec}(A, 3)$ as output.

**Computation Rule** Just the same as regular functions: $(\lambda x.\, \Phi)(a) \equiv \Phi[a/x]$, the body with $a$ substituted for $x$.

**Uniqueness Principle** Given $f : \prod_{x : A} B(x)$, we have $f \equiv \lambda x.\, f(x)$.
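The four rules above can all be exercised in a few lines of Lean. This sketch uses `idDep`, a hypothetical name of mine for the polymorphic identity function, whose output type depends on its first input:

```lean
-- introduction rule: a λ-expression with a dependent type
def idDep : (A : Type) → A → A := fun A a => a

#check idDep Nat                  -- elimination: apply to a type, get Nat → Nat
example : idDep Nat 3 = 3 := rfl  -- computation: reduces to the body
```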

Just like before, dependent functions with multiple inputs will be defined using nested dependent functions. So for example, if I have a dependent function with two inputs $x : A$ and $y : B$, I will create something with the type $\prod_{x : A} \left( \prod_{y : B} C(x, y) \right)$, which I’ll simply write as $\prod_{x : A} \prod_{y : B} C(x, y)$. Also, if I have a dependent function which takes multiple inputs with the same type, say $x : A$ and $y : A$, I’ll write the type as $\prod_{x, y : A} C(x, y)$ for short.

### Induction

How do you construct a dependent function out of the natural numbers? So, let’s say we have a type family $C : \mathbb{N} \to \mathcal{U}$, and we want to make a function $f$ with $f : \prod_{n : \mathbb{N}} C(n)$. How do we do this? The idea is very similar to the recursion principle we covered last time.

First of all, we need the value of the function on $0$. Since our function is supposed to have the type $\prod_{n : \mathbb{N}} C(n)$, when we put in $0$ for $n$ we expect to get some value with the type $C(0)$. So let’s assume $c_0 : C(0)$.

Just like with recursion, we need to handle natural numbers of the form $\mathsf{succ}(n)$ somehow. Generalising the idea from the recursion principle, we assume $c_s : \prod_{n : \mathbb{N}} C(n) \to C(\mathsf{succ}(n))$. The function $c_s$ maps any natural number $n$ and the value of the function on $n$, which has the type $C(n)$, to the value of the function on $\mathsf{succ}(n)$, which has the type $C(\mathsf{succ}(n))$.

The induction rule is also called the dependent elimination rule, or the dependent recursion rule, since it is a generalisation to the setting of dependent types of the recursion/elimination rule we talked about last time.

Given these two pieces of information, $c_0$ and $c_s$, we can construct a dependent function $\mathsf{ind}(c_0, c_s) : \prod_{n : \mathbb{N}} C(n)$. So that’s the dependent version of the elimination rule for natural numbers. The next question is, “What happens when I apply input to this function $\mathsf{ind}(c_0, c_s)$?” The resulting computation rule is exactly the same as in the recursive case:

$$\mathsf{ind}(c_0, c_s)(0) \equiv c_0$$

$$\mathsf{ind}(c_0, c_s)(\mathsf{succ}(n)) \equiv c_s(n, \mathsf{ind}(c_0, c_s)(n))$$
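Here is the induction principle written out as a recursive definition in Lean (a sketch; `ind` is my name for it, and Lean derives it from its own built-in eliminator for `Nat`):

```lean
-- the induction principle for ℕ: C is the type family,
-- c0 the base case, cs the successor step
def ind {C : Nat → Type} (c0 : C 0)
    (cs : (n : Nat) → C n → C (n + 1)) : (n : Nat) → C n
  | 0     => c0
  | n + 1 => cs n (ind c0 cs n)

-- the first computation rule holds by definition
example {C : Nat → Type} (c0 : C 0)
    (cs : (n : Nat) → C n → C (n + 1)) : ind c0 cs 0 = c0 := rfl
```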

Let’s apply this rule to define a function $\mathsf{repeat} : \prod_{A : \mathcal{U}} \prod_{a : A} \prod_{n : \mathbb{N}} \mathsf{Vec}(A, n)$; intuitively, what $\mathsf{repeat}$ does is take $a : A$ and $n : \mathbb{N}$ and give me back the vector which has $n$ copies of $a$. Let’s start by introducing two variables for $A$ and $a$, and filling in a template for the induction principle for the natural numbers, since we want to define a dependent function that takes any $n$ and gives back a vector of $n$ copies of $a$:

$$\mathsf{repeat} :\equiv \lambda A.\, \lambda a.\, \mathsf{ind}(c_0, c_s)$$

So we have to define $c_0$ and $c_s$. Remember that $c_0$ is the value of the function that gives a vector of $n$ copies of $a$ when $n \equiv 0$. Its type is, recalling the definition, $\mathsf{Vec}(A, 0) \equiv \mathbf{1}$. So we have to construct an element of $\mathbf{1}$, and so we can trivially choose $\star$ for this. So we can put $\star$ for $c_0$ in the template and get this:

$$\mathsf{repeat} :\equiv \lambda A.\, \lambda a.\, \mathsf{ind}(\star, c_s)$$

Now we have to fill in $c_s$; what does it do? It is the recursive step that takes any $n : \mathbb{N}$ and the value of the function on $n$, and gives us back the value of the function on $\mathsf{succ}(n)$. Remember, we are trying to define a function that takes $n$ and gives you back a vector of $n$ copies of $a$. The value of this function on $n$ will be a vector of $n$ elements; it will have the type $\mathsf{Vec}(A, n)$. Similarly, the value of this function on $\mathsf{succ}(n)$ will have the type $\mathsf{Vec}(A, \mathsf{succ}(n))$. So to summarise, we have to define:

$$c_s : \prod_{n : \mathbb{N}} \mathsf{Vec}(A, n) \to \mathsf{Vec}(A, \mathsf{succ}(n))$$

Now let’s say $n$ and $v$ are the inputs to $c_s$; what should the output be? First of all, we can rewrite the type of the output using the definition, $\mathsf{Vec}(A, \mathsf{succ}(n)) \equiv A \times \mathsf{Vec}(A, n)$, which means that we have to output a pair of an element in $A$ and a vector of $n$ elements from $A$. Since $c_s$ should add another copy of $a$ to the start of the vector, we have no choice but to output the pair $(a, v)$. So we should define $c_s :\equiv \lambda n.\, \lambda v.\, (a, v)$, and plugging this back into our template we get the following completed definition:

$$\mathsf{repeat} :\equiv \lambda A.\, \lambda a.\, \mathsf{ind}(\star,\ \lambda n.\, \lambda v.\, (a, v))$$

Notice how we did not define $\mathsf{repeat}$ and then prove separately that $\mathsf{repeat}$ gives a vector of length $n$; rather, the type of $\mathsf{repeat}$ shows that it gives a vector with the correct length *by construction*.
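The whole construction fits in a few lines of Lean (a sketch; I call it `rep` since `repeat` is a reserved word there, and `Vec` is redefined so the block stands alone):

```lean
def Vec (A : Type) : Nat → Type
  | 0     => Unit
  | n + 1 => A × Vec A n

-- repeat: the length of the result is in the type, by construction
def rep (A : Type) (a : A) : (n : Nat) → Vec A n
  | 0     => ()
  | n + 1 => (a, rep A a n)

example : rep Nat 7 3 = (7, 7, 7, ()) := rfl
```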

## Dependent Pairs

On to dependent pairs. Dependent pairs generalise the product types $A \times B$. A dependent pair is a pair of things where the type of the second depends on the value of the first. The notation for the type is $\sum_{x : A} B(x)$, where $A$ is a type and $B : A \to \mathcal{U}$ is a type family.

**Introduction Rule** Given an appropriate pair of elements $a : A$ and $b : B(a)$, we may form $(a, b) : \sum_{x : A} B(x)$.
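For instance, in Lean (a sketch; `Σ'` is Lean’s dependent pair type that also accepts propositions, and `w` is a hypothetical name of mine):

```lean
-- introduction rule: a dependent pair packaging a witness
-- together with a fact about that very witness
def w : Σ' n : Nat, n + n = 4 := ⟨2, rfl⟩
```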

**Recursion principle** Suppose we want to construct a function from the type of dependent pairs to some other type $C$. Just like with the other types, the data we need will reflect the constructor for dependent pairs. Given a way to map the components of a dependent pair into $C$:

$$g : \prod_{x : A} \big( B(x) \to C \big)$$

There exists a function $\mathsf{rec}(g) : \left( \sum_{x : A} B(x) \right) \to C$ that satisfies the following:

**Computation rule** Given $a : A$ and $b : B(a)$, we have:

$$\mathsf{rec}(g)((a, b)) \equiv g(a)(b)$$

**Induction principle** Let’s now generalise the recursion rule to the dependent case. Suppose we have a type family $C : \left( \sum_{x : A} B(x) \right) \to \mathcal{U}$, equipped with data that reflects the constructor for a dependent pair:

$$g : \prod_{a : A} \prod_{b : B(a)} C((a, b))$$

Then we have a dependent function $\mathsf{ind}(g) : \prod_{p : \sum_{x : A} B(x)} C(p)$ satisfying the following:

**Computation rule** Given $a : A$ and $b : B(a)$, we have:

$$\mathsf{ind}(g)((a, b)) \equiv g(a)(b)$$
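The recursion principle and its computation rule can be sketched in Lean (illustrative; `recSigma` is my name, and pattern matching plays the role of the eliminator):

```lean
-- recursion for Σ: to map out of a pair type,
-- say what to do with the two components
def recSigma {A : Type} {B : A → Type} {C : Type}
    (g : (a : A) → B a → C) : (Σ a : A, B a) → C
  | ⟨a, b⟩ => g a b

-- the computation rule holds definitionally
example {A : Type} {B : A → Type} {C : Type}
    (g : (a : A) → B a → C) (a : A) (b : B a) :
    recSigma g ⟨a, b⟩ = g a b := rfl
```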

As an easy example, let us define the projections that extract the components of a dependent pair. First things first, let’s suppose $A : \mathcal{U}$ and $B : A \to \mathcal{U}$, and construct the first projection:

$$\mathsf{pr}_1 :\equiv \mathsf{rec}(\dots) : \left( \sum_{x : A} B(x) \right) \to A$$

We have to fill in the dots with a function with the type $\prod_{x : A} \big( B(x) \to A \big)$, but we can just ignore the given element of $B(x)$ to give the following definition for $\mathsf{pr}_1$:

$$\mathsf{pr}_1 :\equiv \mathsf{rec}(\lambda x.\, \lambda y.\, x)$$

What about the second projection? It will take any pair $p$ and give you its second component, which will have the type $B(\mathsf{pr}_1(p))$. So let’s define it then, starting with the template for the induction rule:

$$\mathsf{pr}_2 :\equiv \mathsf{ind}(\dots) : \prod_{p : \sum_{x : A} B(x)} B(\mathsf{pr}_1(p))$$

We have to define $g : \prod_{a : A} \prod_{b : B(a)} B(\mathsf{pr}_1((a, b)))$; however, $\mathsf{pr}_1((a, b)) \equiv a$ by the computation rule for recursion. So we just have to define $g : \prod_{a : A} \prod_{b : B(a)} B(a)$, but this is trivial:

$$\mathsf{pr}_2 :\equiv \mathsf{ind}(\lambda a.\, \lambda b.\, b)$$
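Both projections, written in Lean (a sketch; `pr1` and `pr2` are my names, and note how `pr2`’s output type mentions `pr1`, exactly as in the text):

```lean
def pr1 {A : Type} {B : A → Type} : (Σ a : A, B a) → A
  | ⟨a, _⟩ => a

-- the type of the second component depends on the first
def pr2 {A : Type} {B : A → Type} : (p : Σ a : A, B a) → B (pr1 p)
  | ⟨_, b⟩ => b
```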

## Dependent Types and Logic

Let’s talk about the logical interpretation of dependent types. First, in logical terms, what is a type family? What does $B : A \to \mathcal{U}$ mean in the context of logic? Well, $B$ is a function that associates to every element of $A$ a type. Since propositions are types, $B(x)$ is a proposition that depends on an element $x$ of $A$, that is to say, a *predicate* on $A$.

Now what about dependent functions? Given a type family $B : A \to \mathcal{U}$, the dependent function type $\prod_{x : A} B(x)$ is a way to map every element $x : A$ to some value in $B(x)$. Logically speaking, we are mapping any element $x$ to a corresponding proof of some proposition $B(x)$, since propositions are types and, in particular, proving a proposition is just constructing an element of a particular type. So in other words, we are showing that the proposition holds for every $x$; this means that the dependent function type is the way we represent “for all” statements in type theory. To put it another way, the statement “for all $x : A$, $P(x)$” corresponds to the type $\prod_{x : A} P(x)$.
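In Lean this correspondence is literal: a “for all” statement is a Π-type, and a proof of it is a dependent function (a sketch; `plusZero` is a hypothetical name, and this particular fact happens to hold by computation alone):

```lean
-- "for all n, n + 0 = n" is the Π-type (n : Nat) → n + 0 = n
theorem plusZero : (n : Nat) → n + 0 = n := fun _ => rfl

#check plusZero 5  -- applying the function gives a proof of 5 + 0 = 5
```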

This clarifies why we call the dependent generalisation of the recursion rule the “induction principle.” On the natural numbers, for example, the induction principle shows you how to construct a dependent function $\prod_{n : \mathbb{N}} C(n)$ by handling the base case with some $c_0 : C(0)$ and the recursive case with some $c_s : \prod_{n : \mathbb{N}} C(n) \to C(\mathsf{succ}(n))$. The usual principle of mathematical induction, which gives a method for proving “for all” statements, is just a special case of the induction principle for natural numbers.
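A Lean sketch of induction-as-proof (illustrative; unlike $n + 0 = n$, the statement $0 + n = n$ does not hold by mere computation, so it genuinely needs a base case and a successor step, just like $\mathsf{ind}(c_0, c_s)$):

```lean
theorem zeroPlus : (n : Nat) → 0 + n = n
  | 0     => rfl                                -- the base case c0
  | n + 1 => congrArg Nat.succ (zeroPlus n)     -- the step cs, using the
                                                -- value on n to handle n+1
```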

Finally, what about dependent pairs? Try to guess what their logical interpretation is before reading on. The type of dependent pairs is generated by pairs $(a, b)$ where $a : A$ and $b : B(a)$. So dependent pairs are pairs of elements together with proofs that some proposition holds for that element. Naturally, we interpret the type of dependent pairs $\sum_{x : A} B(x)$ as the proposition “there exists an element $x : A$ such that $B(x)$.” What the recursion rule for dependent pairs says, logically speaking, is that given a proposition $C$, we can show that $\sum_{x : A} B(x)$ implies $C$ by showing $\prod_{x : A} \big( B(x) \to C \big)$. Or in more words: by assuming an arbitrary element $x$ of $A$ exists such that $B(x)$, and deducing $C$ from that.

As an application of everything we’ve learned so far, we will *prove* the “type-theoretic axiom of choice.” Suppose we are given $A : \mathcal{U}$, $B : A \to \mathcal{U}$, and a type family $R : \prod_{x : A} \big( B(x) \to \mathcal{U} \big)$. Then we can construct:

$$\mathsf{ac} : \left( \prod_{x : A} \sum_{y : B(x)} R(x, y) \right) \to \left( \sum_{f : \prod_{x : A} B(x)} \prod_{x : A} R(x, f(x)) \right)$$

This seems at first glance to be very similar to the set-theoretic axiom of choice. We have a type $A$, and a family of types $B(x)$ associated with every element $x$ of $A$. If, for every $x : A$, there is at least one $y : B(x)$ satisfying some property $R(x, y)$, then there is a way of mapping these elements $x$ to a choice of $y$, called $f$. This $f$ satisfies the property that the chosen elements all satisfy $R$, i.e. $R(x, f(x))$ holds for every $x$.

The proof is quite trivial; I will write it with pattern-matching notation first:

$$\mathsf{ac}(g) :\equiv \left( \lambda x.\, \mathsf{pr}_1(g(x)),\ \lambda x.\, \mathsf{pr}_2(g(x)) \right)$$

Let us first construct the first component of the output of $\mathsf{ac}$, which is a function $f : \prod_{x : A} B(x)$. Let $x : A$; we need to construct an element of $B(x)$. However, $g(x) : \sum_{y : B(x)} R(x, y)$, so we can take its first component, and we define $f :\equiv \lambda x.\, \mathsf{pr}_1(g(x))$.

Now what about the second component? It needs to be a function $\prod_{x : A} R(x, f(x))$, and since $f(x) \equiv \mathsf{pr}_1(g(x))$, we just need to construct an element of $R(x, \mathsf{pr}_1(g(x)))$ for each $x : A$. Obviously, the appropriate thing to use is the second projection, since $\mathsf{pr}_2(g(x)) : R(x, \mathsf{pr}_1(g(x)))$ has the type we want.
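The whole proof, rendered as a Lean sketch (illustrative; `ac` is my name for it, `.1` and `.2` are Lean’s built-in projections playing the roles of $\mathsf{pr}_1$ and $\mathsf{pr}_2$):

```lean
-- the type-theoretic axiom of choice, proved by rearranging data:
-- the choice function is fun x => (g x).1, the guarantee is (g x).2
def ac {A : Type} {B : A → Type} {R : (x : A) → B x → Type}
    (g : (x : A) → Σ y : B x, R x y) :
    Σ f : (x : A) → B x, (x : A) → R x (f x) :=
  ⟨fun x => (g x).1, fun x => (g x).2⟩
```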

Done! Now we should reflect on this for a moment: in type theory we can *prove* this version of the axiom of choice; it is not an axiom. Why is it the case that a similar-seeming statement in set theory needs to be assumed, but the equivalent version in type theory can be proven? Essentially, there is a subtle difference between the notion of “there exists” that we have in type theory, based on dependent pairs, and the traditional set-theoretic notion. In type theory, existence means being able to construct an object satisfying particular properties. In set theory, this is not necessarily the case: one can show that something exists without giving a method for constructing it. So, in type theory, the statement “for every $x : A$ there exists a corresponding $y : B(x)$ such that $R(x, y)$” is a method of constructing a $y$ from a given $x$ such that we always have $R(x, y)$, and we just have to rearrange the data into two separate parts: firstly, a method for constructing a $y$ from a given $x$ (this is the function $f$), and secondly, a proof that this method always outputs elements that satisfy $R$. There are several ways of recovering the traditional notion of “existence,” which does not consist of construction, which we may talk about later. This is a special case of the general distinction between classical (a.k.a. traditional) logic and constructive logic, such as the one we have based on type theory. A full discussion of this distinction would be interesting, but unfortunately this blog post is already too long. So here are some exercises; I can’t give more interesting ones since we haven’t defined equality yet:

**Exercise** Implement concatenation for vectors. First implement it using pattern-matching notation, and then translate the pattern-matching notation to the recursion/induction principles for the appropriate types. Notice that if you fit your implementation of concatenation to your implementation of addition for natural numbers, you will make your life much easier.

**Exercise** Let’s say I have a type $B$ and a function $f : A \to B$; I can apply this function to each component of a vector in $\mathsf{Vec}(A, n)$ to get a vector in $\mathsf{Vec}(B, n)$. Implement a general procedure for applying a given function to each component of a vector:

$$\mathsf{map} : \prod_{n : \mathbb{N}} (A \to B) \to \mathsf{Vec}(A, n) \to \mathsf{Vec}(B, n)$$

**Exercise** Implement a procedure that calculates the product of all the components of a vector in $\mathsf{Vec}(\mathbb{N}, n)$, and a similar procedure that calculates the sum of all the components. If you see a similarity between these two, define a general procedure that captures that similarity and rewrite your definitions in terms of this general procedure.
