This is an episode in a series on mathematical logic approached with some rigour. Here, we will (still) be closely following the book by Peter B. Andrews: An Introduction To Mathematical Logic and Type Theory. We will look at the axiomatic structure of a logistic system we’ve been working on. The previous blogpost can be found here: (S1E01): The Rules.

An axiom is a statement that is taken to be true. The system we’ve been building since (S1E01) (called \mathcal{P}) has as its axioms all well-formed formulas (wffs) matching one of the following

(S1E02) (Sc0) axiom schemas:

  1. A \lor A \Rightarrow A
  2. A \Rightarrow B \lor A
  3. (A \Rightarrow B) \Rightarrow ((C \lor A) \Rightarrow (B \lor C))

We then also have just one primitive rule of inference:

  1. Modus Ponens
    For wffs A and B: if a wff Q of the form A \Rightarrow B is accepted, and A holds, then we can infer B.
    Example: Let Q = If it is an evergreen tree, then it doesn’t lose its leaves in winter and its leaves are always green. Here we have A = It is an evergreen tree. And B = It doesn’t lose its leaves throughout the year and its leaves are always green. Let us accept Q, and take a walk down the streets of some neighbourhood in South Africa with one of your friends who happens to be a botany major. We then come across these trees (Cape Holly), and your friend exclaims: “It’s the evergreen Cape Holly trees!”
    What deduction can you make about the colour of its leaves throughout the year? They’re always green! Note that in this instance, if a tree’s leaves are always green and never fall off, it happens to be true that it’s an evergreen; however, it’s not generally true that from a consequent (B) one can assert anything about the antecedent (A). i.e. the arrow goes from left to right (\Rightarrow), not from right to left.
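To make this concrete, here is a minimal sketch of Modus Ponens in Python. This is my own illustration, not from the book: the tuple encoding of wffs and the helper `imp` are assumptions I’m making for the example.

```python
# Wffs as nested tuples: variables are strings, ('not', A) is negation,
# ('or', A, B) is disjunction. A => B abbreviates ~A v B, as in P.
def imp(a, b):
    return ('or', ('not', a), b)

def modus_ponens(q, a):
    """Given an accepted wff q of the form a => b, and a, infer b."""
    if isinstance(q, tuple) and q[0] == 'or' and q[1] == ('not', a):
        return q[2]  # the consequent b
    raise ValueError("q is not of the form a => b for the given a")

# The evergreen example: A = "it is an evergreen tree",
# B = "its leaves are always green".
a, b = 'evergreen', 'always_green'
q = imp(a, b)
print(modus_ponens(q, a))  # -> 'always_green'
```

Note that `modus_ponens` only goes one way: given `b` alone there is no function call that recovers `a`, matching the remark that the arrow runs left to right.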

\mathcal{H} will denote a set containing wffs.
And then, we shall define what a proof is:

(S1E02) (Sc1) Defn: What is a Proof

A proof of a wff B from a set \mathcal{H} of antecedents (or hypotheses) is a finite sequence B_1, ..., B_m of wffs where B_m = B and, for each j \; (1 \leq j \leq m), at least one of the following holds:

  1. B_j is an axiom.
  2. B_j \in \mathcal{H}
  3. B_j is inferred (by Modus Ponens) from B_i and B_k, where i < j, k < j, and B_k has the form B_i \Rightarrow B_j.

Notation and Facts

  • A theorem is then a wff with a proof.
  • We write \mathcal{H} \vdash B to mean that B is derivable from \mathcal{H}; we also say B is a theorem of \mathcal{H}.
  • In the place of \{A_1,...,A_n\} \vdash B we write A_1,...,A_n \vdash B.
  • \mathcal{H}, A \vdash B will mean \mathcal{H} \cup \{A\} \vdash B.
  • \vdash B will mean B is derivable from the empty set, and is therefore a theorem.

When the set of antecedents is empty, B is derivable with no hypotheses at all. This reminds me of something known as vacuous truth. I encourage the reader to read the Wikipedia article on this; an excerpt:

“statements which can be characterized informally as vacuously true can be misleading. Such statements make reasonable assertions about qualified objects which do not actually exist. For example, a child might tell their parent “I ate every vegetable on my plate”, when there were no vegetables on the child’s plate to begin with.”

–  The Wikipedia Article I’m Talking About

However, vacuous truth and \vdash B aren’t the same thing! The former concerns statements about objects that do not actually exist, whereas the latter simply says that B can be proved without appealing to any hypotheses at all.

Testing if a finite sequence B_1,...,B_m of wffs is a proof.

A proof is generally of this form:
(.1)\; B_1
.
.
.
(.m)\; B_m
Where each line is a consequence of clause 1, 2, or 3 of (S1E02) (Sc1). That is, each line is either (1) an axiom, (2) a premise (antecedent), or (3) inferred from two prior lines through Modus Ponens. In general though, we’ll see later on that there are many more rules of inference we can use, derived from the information we already have about our system.
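Since clauses 1–3 are purely mechanical checks, we can sketch a proof-checker for \mathcal{P} in code. This is my own illustration (not from Andrews): wffs are nested tuples, variables are strings, and \Rightarrow is unfolded into \sim/\lor as the system defines it. As a test case I use the proof of \vdash p \lor \sim p, which appears later in this post.

```python
def imp(a, b):                      # A => B abbreviates ~A v B
    return ('or', ('not', a), b)

def dest_imp(w):                    # return (A, B) if w is ~A v B, else None
    if (isinstance(w, tuple) and w[0] == 'or'
            and isinstance(w[1], tuple) and w[1][0] == 'not'):
        return w[1][1], w[2]
    return None

def is_axiom(w):
    d = dest_imp(w)
    if d is None:
        return False
    x, y = d
    if x == ('or', y, y):                       # Ax 1: (A v A) => A
        return True
    if isinstance(y, tuple) and y[0] == 'or' and y[2] == x:
        return True                             # Ax 2: A => (B v A)
    d1, d2 = dest_imp(x), dest_imp(y)           # Ax 3: (A=>B) => ((C v A) => (B v C))
    if d1 and d2:
        a, b = d1
        ca, bc = d2
        if (isinstance(ca, tuple) and ca[0] == 'or'
                and isinstance(bc, tuple) and bc[0] == 'or'
                and ca[2] == a and bc[1] == b and ca[1] == bc[2]):
            return True
    return False

def is_proof(lines, hypotheses):
    """Clauses 1-3: each line is an axiom, a member of H, or follows by MP."""
    for j, w in enumerate(lines):
        by_mp = any(lines[i] == imp(lines[k], w)
                    for i in range(j) for k in range(j))
        if not (is_axiom(w) or w in hypotheses or by_mp):
            return False
    return True

# The proof of |- p v ~p, written out as a sequence of wffs:
p, np = 'p', ('not', 'p')
proof = [
    imp(imp(('or', p, p), p), imp(('or', np, ('or', p, p)), ('or', p, np))),  # Ax 3
    imp(('or', p, p), p),                                                     # Ax 1
    imp(('or', np, ('or', p, p)), ('or', p, np)),                             # MP
    imp(p, ('or', p, p)),                                                     # Ax 2
    ('or', p, np),                                                            # MP
]
print(is_proof(proof, set()))  # -> True
```

The checker brute-forces the MP clause by searching all pairs of earlier lines, which is fine for short proofs and keeps the sketch honest: it uses exactly the three justifications the definition allows, nothing more.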

For the reader that’s not familiar with Groups and Rings: to put it shortly, a Group is a set that is closed under some binary operation, i.e. if G is my set and I have any x, y \in G, then x \cdot y \in G. Example: \mathbb{Z} is closed under addition.
Rings have this property too, but with an additional binary operation. Example: \mathbb{R} is closed under standard addition and multiplication.
There is much more to Rings and Groups than this, but this is about all we’ll need for the next section. Read more about them on Wikipedia (Rings) and Wikipedia (Groups).

In Group theory, we have what are known as the group axioms, and likewise, for Ring theory, we have the ring axioms. The ring axioms extend the group axioms, and thus the set of antecedents for a Group \mathcal{G} is a subset of the set of antecedents for a Ring \mathcal{R}: we have \mathcal{G} \subset \mathcal{R}. Now, I’ll attempt to draw parallels between this fact and the theorems below.


It’s important to understand and know the following theorems.

Theorem (S1E02) (Sc2)

If \mathcal{H}_{1} \vdash B and \mathcal{H}_{1} \subseteq \mathcal{H}_{2}, then \mathcal{H}_{2} \vdash B.

For example: if I have a theorem stating that the identity element of a group (e.g. 0 \in \mathbb{Z} under the binary operation +) is unique, then that exact theorem also holds in Ring theory for some ring, e.g. \mathbb{R}.

Proof (S1E02) (Sc2)

Any proof of B from \mathcal{H}_{1} is also a proof of B from \mathcal{H}_{2}: the axioms and rules of inference are the same, and every line justified by membership in \mathcal{H}_{1} is also justified by membership in \mathcal{H}_{2}, since \mathcal{H}_{1} \subseteq \mathcal{H}_{2}.
So \mathcal{H}_{2} \vdash B.
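As a toy illustration of this monotonicity (my own, not from the book), here is a deliberately simplified checker that only allows the hypothesis and MP clauses (axioms omitted to keep it short). A sequence that checks out against \mathcal{H}_1 automatically checks out against any superset \mathcal{H}_2:

```python
def imp(a, b):                  # A => B encoded as ~A v B
    return ('or', ('not', a), b)

def derivable_check(lines, hyps):
    """Each line must be in hyps, or follow by MP from two earlier lines.
    (The axiom clause is omitted to keep this sketch short.)"""
    for j, w in enumerate(lines):
        by_mp = any(lines[i] == imp(lines[k], w)
                    for i in range(j) for k in range(j))
        if not (w in hyps or by_mp):
            return False
    return True

A, B = 'A', 'B'
proof_of_B = [A, imp(A, B), B]          # a proof of B from H1
H1 = {A, imp(A, B)}
H2 = H1 | {'C', 'D'}                    # any superset of H1

print(derivable_check(proof_of_B, H1))  # -> True
print(derivable_check(proof_of_B, H2))  # -> True: the same sequence still works
```

Nothing in the check ever asks whether a hypothesis is *absent*, so enlarging the set can never invalidate a line. That is the whole content of (S1E02) (Sc2).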

There is a direct consequence of this theorem (a corollary), shown in an excerpt from the book: [figure: the corollary from the book]
This result can be obtained by applying Modus Ponens to the first two statements in the antecedent over the set \mathcal{H}, and this you can do because of (S1E02) (Sc2).

Theorem (S1E02) (Sc3) Rule Of Substitution (Sub)

If \mathcal{H} \vdash A, and p_1, ..., p_n are distinct variables not occurring in any wff in \mathcal{H}, and \theta is a substitution of wffs for p_1, ..., p_n, then \mathcal{H} \vdash \theta A.

Proof (S1E02) (Sc3) (Sub)

If B_1,...,B_m is a proof of A = B_m from \mathcal{H}, then \theta (B_1), ..., \theta (B_m) is also a proof from \mathcal{H}: axiom instances map to axiom instances, wffs of \mathcal{H} are left untouched (since the p_i do not occur in them), and Modus Ponens steps are preserved. So \theta A is also a theorem of \mathcal{H}.
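A substitution itself can be sketched as a straightforward recursion over wff structure. Again this uses my own tuple encoding (variables as strings), not notation from the book:

```python
def subst(theta, w):
    """Apply substitution theta (dict: variable -> wff) to wff w."""
    if isinstance(w, str):                     # a variable
        return theta.get(w, w)
    return (w[0],) + tuple(subst(theta, part) for part in w[1:])

# theta replaces p by (q v r); an Ax 1 instance maps to an Ax 1 instance:
theta = {'p': ('or', 'q', 'r')}
ax1 = ('or', ('not', ('or', 'p', 'p')), 'p')   # (p v p) => p, unfolded
print(subst(theta, ax1))
# -> ('or', ('not', ('or', ('or', 'q', 'r'), ('or', 'q', 'r'))), ('or', 'q', 'r'))
```

Applying `subst` to every line of a proof produces exactly the sequence \theta(B_1), ..., \theta(B_m): since the recursion never changes the connectives, axiom instances stay axiom instances and MP steps stay MP steps.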

For the next theorems, it’s important that one understands what Modus Ponens is. Modus Ponens is a rule of inference stating that if I have a statement Q \equiv P \Rightarrow R, and both Q and P hold, then I can infer R. From here on, we will refer to Modus Ponens as MP. Also, we will abbreviate Axiom to Ax.

Theorem (S1E02) (Sc4)

If \mathcal{H} \vdash (A \Rightarrow B) and \mathcal{H} \vdash (C \lor A) then \mathcal{H} \vdash (B \lor C).

Proof (S1E02) (Sc4)

Suppose we have proofs for \mathcal{H} \vdash (A \Rightarrow B) and \mathcal{H} \vdash (C \lor A), say D_1, ..., D_i and E_1, ..., E_j respectively. So, we have…
.1)\; D_1
.
.
.
.i)\; A \Rightarrow B
.i+1)\; (A \Rightarrow B) \Rightarrow ((C \lor A) \Rightarrow (B \lor C)) \quad (Ax 3)
.i+2)\; (C \lor A) \Rightarrow (B \lor C) \quad (MP: .i+1 holds, infer .i+2 from .i)
.i+3)\; E_1
.
.
.
.i+2+j)\; C \lor A
.i+3+j)\; B \lor C \quad (MP: .i+2 holds, infer .i+3+j from .i+2+j)

Theorem (S1E02) (Sc4) is a derived rule of inference. Here we made use of MP and Axiom 3.

The following theorems are all written in the form \vdash A, where A is some well-formed formula. Notice there is no set of antecedents in this case. What this means is that these theorems are independent of any antecedents. They are thus the theorems of logic, and are true for any \mathcal{H}. These are important.

Theorem (S1E02) (Sc5)

\vdash p \lor \sim p.

Proof (S1E02) (Sc5)

.1)\; \vdash (p \lor p \Rightarrow p) \Rightarrow ((\sim p \lor [p \lor p]) \Rightarrow (p \lor \sim p)) \quad (Ax 3)
.2)\; \vdash p \lor p \Rightarrow p \quad (Ax 1)
.3)\; \vdash (\sim p \lor [p \lor p]) \Rightarrow (p \lor \sim p) \quad (MP: .1, .2)
.4)\; \vdash p \Rightarrow p \lor p, which written out is \sim p \lor [p \lor p] \quad (Ax 2)
.5)\; \vdash p \lor \sim p \quad (MP: .3, .4)

Theorem (S1E02) (Sc6)

\vdash p \Rightarrow \sim \sim p.

To be honest, this theorem’s proof (from the book) didn’t make any sense to me. After reading up on substitutions, I realised that some books simply allow the very substitutions we are here trying to justify, so that wasn’t too helpful. What was helpful, though, was my previous post. Recall this excerpt from the book: [figure: excerpt from the book on substitutions]

and also recall we wanted to prove \theta (\sim A) = q_0 = \sim (\theta A). I will remind the reader of the proof I used and some of the information I extracted from it:

Consider a wff A with some substitution \theta. If A is a wff, then we know it must have a formulation sequence (S1E01) (Climax 1). Consider a simultaneous substitution taking all formulas in the formulation sequence to variables q_0, ..., q_n, where q_0 = \sim q_n. So \theta (\sim A) = q_0 and \theta (A) = q_n. Putting this together, we get \theta (\sim A) = q_0 = \sim q_n = \sim (\theta A). This then tells us that there must exist a substitution \Psi such that \Psi (\sim \sim) gives us the empty formula.

This substitution does exist because we see in this post that q_0 = \sim \sim q_0.

Now for the proof:
In this proof, we’ll basically note that p \Rightarrow \sim \sim p \equiv \sim p \lor \sim \sim p, and to both sides we apply \Psi, which gives us p \Rightarrow p \equiv \sim p \lor p. If either is a theorem in some set \mathcal{H}, then so is the other (Sub). We already know from (S1E02) (Sc5) that this is the case, so we summarise our proof as follows:

Proof (S1E02) (Sc6)

\vdash p \Rightarrow \sim \sim p \quad (Sub: (S1E02) (Sc5))

Note that here we assumed commutativity of the disjunction. If this turns out not to hold, then we’ll have to come back and fix this at a later stage; likewise, if we ever need this theorem to prove commutativity itself, we’ll have to come back and reason differently.

A similar idea is used for the theorem \vdash \sim \sim p \Rightarrow p.

So we have \vdash \sim \sim p \iff p

Which is basically what we’ve stated above.
We are starting to notice some algebraic behaviour here: \sim cancels \sim, much like -(-x) = x.

Now, the proof of the next theorem is a bit long in the book. Instead, I’ll make use of \Psi for the sake of shortening this long-ass post.

Theorem (S1E02) (Sc7)

\vdash p \Rightarrow p.

Proof (S1E02) (Sc7)

\vdash p \Rightarrow p \quad (Sub: (S1E02) (Sc6)).

The next theorem (S1E02) (Sc8) says:

\vdash p \lor q \Rightarrow q \lor p.

Proof  (S1E02) (Sc8)

.1)\; \vdash (q \Rightarrow q) \Rightarrow ((p \lor q) \Rightarrow (q \lor p)) \quad (Ax 3, with A = B = q and C = p)
.2)\; \vdash p \lor q \Rightarrow q \lor p \quad (MP: .1 holds, and \vdash q \Rightarrow q by (S1E02) (Sc7))

We can do the same for \vdash q \lor p \Rightarrow p \lor q. And thus we have the commutativity we’ve been seeking. Here we relied on the result p \Rightarrow p; though trivial, our proof of it made use of another result that relied on commutativity, so it’s circular. The assistance of the reader would be greatly appreciated: how would we rectify this?

I will stop this post here seeing as it’ll just be a collection of theorems from hereon. I will, however, make a post on those theorems next. But I think this is a good place to stop.
Any feedback is greatly appreciated. Next, we look at some other useful theorems and start to investigate what all this means (the semantics of our syntax).

The other theorems I have not discussed in this blogpost are mostly algebraic and, for anyone who’s been proving theorems for quite some time, fairly obvious. For example: the transitive property of implication, a deduction theorem, a theorem of antecedent independence (if a consequent follows whether or not the antecedent holds, then that consequent must be a theorem of the theory), and also associativity of the disjunction (there’s a quick exercise in the book to show it holds both ways). If anyone is really reading this and wants further details, drop a comment.

