The Pilot.

This is an episode in a series on mathematical logic approached with some rigour. Here, we will be closely following the book by Peter B. Andrews: An Introduction To Mathematical Logic and Type Theory. In this episode, we will:

Part 1

1. Learn the primitive symbols of our system and the definition of a well-formed formula.
2. Show the equivalence of the well-formed formula definition to that of a formation sequence of a formula.
3. Revisit the principle of mathematical induction and complete induction on the natural numbers.
4. Learn the principle of induction on the construction of a well-formed formula.

Part 2

1. Learn about substitutions in the context of propositional logic, and use this idea to derive one of De Morgan’s Laws.

The only prerequisite for this post is basic knowledge of set theory.

Part 1.

(The Prelude) Before we start with logic, let’s revise two ideas which will be important later on. If one is already familiar with the principle of mathematical induction and strong induction, one may skip this.

(S1E01)  (Prelude 1) Principle Of Mathematical Induction.
We take it to be true that, given any property $R$: if the first member of a list, $P_0$, has this property, and if whenever an arbitrary member $P_n$ has the property the next member $P_{n+1}$ also has it, then all members of the list have the property $R$.
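As a quick worked example of this principle (my own illustration, not from the book), let $R$ be the property that $P_n$ states $0 + 1 + \cdots + n = \frac{n(n+1)}{2}$:

```latex
% Base case: P_0 holds, since 0 = \frac{0(0+1)}{2}.
% Inductive step: assume P_n holds. Then
0 + 1 + \cdots + n + (n+1) = \frac{n(n+1)}{2} + (n+1) = \frac{(n+1)(n+2)}{2},
% which is precisely P_{n+1}. By the principle, every P_n holds.
```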

(S1E01)  (Prelude 2) Strong Induction
For any property $R$: if, for each member $P_n$ of the list, the fact that every earlier member $P_j$ with $j < n$ has the property $R$ implies that $P_n$ has it too, then all the members of the list have the property $R$.

Logic and Reasoning

Let us first start by naming our system of reasoning; let’s call it $\mathcal{P}$. This system consists of two types of primitive symbols. These are symbols whose meanings aren’t defined in advance but are instead chosen depending on how the system is intended to be used.

(S1E01) (Scene 1) The Primitive Symbols:

1. Improper Symbols: $[$, $]$, $\sim$, $\lor$
2. Proper Symbols: a denumerable list of propositional variables: $p, p_1, p_2, \ldots$. As the name suggests, each of these holds a proposition.

The symbol $\sim$ is called negation; when applied to a propositional variable (or to a formula, which is defined later on), it negates it. The symbol $\lor$ is called disjunction; it joins propositional variables or formulas.

A formula is a finite sequence of primitive symbols, e.g. $\sim[\,]$ is a formula, and so is $]p$. But in the context of reasoning, what could $\sim[\,]$ possibly mean? How does one reason without any proposition? It is for this reason that we then define a well-formed formula (wff).

(S1E01)  (Scene 2) A formula is a well-formed formula $\iff$ it can be obtained by applying the following formation rules:

1. A propositional variable standing alone is a wff.
2. If $A$ is a wff, then $\sim A$ is a wff.
3. If $A$ and $B$ are wffs, then $[A \lor B]$ is a wff.
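To make the formation rules concrete, here is a minimal sketch of a wff recogniser (my own, not from Andrews), writing `~` for $\sim$, `v` for $\lor$, and `p`, `p1`, `p2`, … for the propositional variables:

```python
def is_wff(s: str) -> bool:
    """Check whether the string s is a well-formed formula (Scene 2).

    Rules: a variable alone is a wff; ~A is a wff when A is;
    [AvB] is a wff when A and B are."""
    # Rule 1: a propositional variable standing alone (p, p1, p2, ...).
    if s == "p" or (s.startswith("p") and s[1:].isdigit()):
        return True
    # Rule 2: ~A is a wff iff A is a wff.
    if s.startswith("~"):
        return is_wff(s[1:])
    # Rule 3: [AvB] -- the unique 'v' at bracket depth 1 splits A from B.
    if s.startswith("[") and s.endswith("]"):
        depth = 0
        for i, c in enumerate(s):
            if c == "[":
                depth += 1
            elif c == "]":
                depth -= 1
            elif c == "v" and depth == 1:
                return is_wff(s[1:i]) and is_wff(s[i + 1:-1])
    return False

print(is_wff("~[p1vp2]"))  # True
print(is_wff("~[]"))       # False -- a formula, but not well-formed
```

Note that the split in rule 3 is unambiguous: inside a wff $[A \lor B]$, any disjunction symbol belonging to $A$ or $B$ sits inside a further pair of brackets, so only the principal connective appears at depth 1.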

So, we define a wff in the following way. Let the $\Omega_{i}$ be exactly those sets of formulas satisfying the three conditions below, and define the set containing all wffs as $W = \bigcap_{i} \Omega_{i}$, where

1. $p \in \Omega_{i}$ for every propositional variable $p$.
2. If $A \in \Omega_{i}$, then $\sim A \in \Omega_{i}$.
3. If $A, B \in \Omega_{i}$, then $[A \lor B] \in \Omega_{i}$.

Here $A$ and $B$ are formulas. And thus any wff is a member of the set $W$.
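To see why the intersection picks out exactly the wffs: every $\Omega_{i}$ must contain everything reachable from the variables by rules 2 and 3, and the set of reachable formulas is itself one of the $\Omega_{i}$, so it is the smallest such set. A sketch of generating that smallest set (my own encoding, truncated by length so the process terminates):

```python
def wffs_up_to(max_len: int, variables=("p", "p1")):
    """Generate every wff of length <= max_len over the given variables:
    the smallest set closed under rules 1-3, truncated by length."""
    w = set(variables)                                        # rule 1
    while True:
        new = {"~" + a for a in w}                            # rule 2
        new |= {"[" + a + "v" + b + "]" for a in w for b in w}  # rule 3
        new = {x for x in new if len(x) <= max_len} - w
        if not new:   # nothing new appears: the least closed set is reached
            return w
        w |= new

w = wffs_up_to(7)
print("[pvp1]" in w, "pp" in w)  # True False
```

Every closed set contains everything this loop produces, so the loop's output sits inside the intersection; being closed itself (up to the length bound), it witnesses the intersection from the inside.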

(S1E01)  (Scene 3) An alternative definition uses the idea of a formation sequence (fs). A formation sequence for a formula $Y$ is a finite sequence of formulas $Y_1, \ldots, Y_m$, where $Y$ is $Y_m$ and, for each $i$ $(1 \leq i \leq m)$, one of the following conditions holds:

1. $Y_i$ is a propositional variable.
2. there is $j < i$ such that $Y_i$ is $\sim Y_j$.
3. there are indices $j < i$ and $k < i$ such that $Y_i$ is $[Y_j \lor Y_k]$.
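Scene 3 translates directly into a mechanical check; a sketch under the same encoding as before (`~` for $\sim$, `v` for $\lor$):

```python
def is_formation_sequence(seq):
    """Check Scene 3: each entry is a variable, ~Y_j for some earlier Y_j,
    or [Y_j v Y_k] for earlier Y_j and Y_k."""
    for i, y in enumerate(seq):
        earlier = seq[:i]
        is_var = y == "p" or (y.startswith("p") and y[1:].isdigit())
        is_neg = any(y == "~" + z for z in earlier)
        is_dis = any(y == "[" + a + "v" + b + "]"
                     for a in earlier for b in earlier)
        if not (is_var or is_neg or is_dis):
            return False
    return len(seq) > 0   # a formation sequence is non-empty

# A formation sequence for ~[p1vp2]:
print(is_formation_sequence(["p1", "p2", "[p1vp2]", "~[p1vp2]"]))  # True
```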

We will now show that Scene 2 and Scene 3 are equivalent, i.e. a formula is a wff $\leftrightarrow$ it has a formation sequence.

Before we do this, we will reformulate the principle for induction on the construction of well-formed formulas.

(S1E01)  (Scene 4) Induction on the construction of well-formed formulas.
Suppose $\chi$ is a property of formulas, and $\chi (A)$ means that a wff $A$ has property $\chi$. Suppose

1. $\chi (q)$ for every propositional variable $q$.
2. Whenever $\chi(A)$ holds for a wff $A$, then $\chi(\sim A)$.
3. Whenever $\chi(A)$ and $\chi(B)$, then $\chi([A \lor B])$.

Then every wff has the property $\chi$.

(S1E01) (Climax 1)
Theorem: Let $W$ be the set of all wffs. A formula $L \in W \leftrightarrow L$ has a formation sequence.

Proof:
Let $\alpha$ be the set of all formulas that have formation sequences. First we show that every wff is in $\alpha$, by induction on the construction of a wff. If $x$ is a propositional variable, then the one-term sequence $x$ is a formation sequence for $x$, so $x \in \alpha$. If $x \in \alpha$ with formation sequence $Y_1, \ldots, Y_m$ (so $Y_m$ is $x$), then $Y_1, \ldots, Y_m, \sim x$ is a formation sequence for $\sim x$, so $\sim x \in \alpha$. Finally, if $x, y \in \alpha$ with formation sequences $X_1, \ldots, X_r$ and $Y_1, \ldots, Y_s$, then $X_1, \ldots, X_r, Y_1, \ldots, Y_s, [x \lor y]$ is a formation sequence for $[x \lor y]$, so $[x \lor y] \in \alpha$. So by (S1E01) (Scene 4), every wff has a formation sequence, i.e. $W \subseteq \alpha$.

Now let $x \in \alpha$, with formation sequence $Y_1, \ldots, Y_m$ where $Y_m$ is $x$. We show by strong induction (Prelude 2) that each $Y_i$ is a wff. By (S1E01) (Scene 3) (1), $Y_i$ may be a propositional variable, and is then a wff by (Scene 2) (1). By (S1E01) (Scene 3) (2), $Y_i$ may be $\sim Y_j$ for some $j < i$; since $Y_j$ is a wff by the induction hypothesis, $Y_i$ is a wff by (Scene 2) (2). By (S1E01) (Scene 3) (3), $Y_i$ may be $[Y_j \lor Y_k]$ for some $j, k < i$; since $Y_j$ and $Y_k$ are wffs by the induction hypothesis, $Y_i$ is a wff by (Scene 2) (3). In particular $Y_m$, i.e. $x$, is a wff. We can then state that $\alpha \subseteq W$.

Therefore we have that $\alpha = W$: the wffs are exactly the formulas with formation sequences, and the result follows.

We can assign meanings to the primitive symbols, and generate new meanings, as shown below.
Picture(1)

The image comes from the textbook mentioned above.

(S1E01)  (denouement)  For Fun: I was listening to Mrs Potter’s Lullaby while writing this, and there’s a part in the lyrics that goes “Or the elephants will get out and forget to remember what you said“. Let’s have some fun with this sentence. First, we will define a map $\phi$ that maps any wff to True, False, both, or neither. If $\phi(p) = T$ or $F$, but not both and not neither, we say $\phi$ is a function with a wff parameter. If it maps some $p$ to both, then we say $p$ is a paradox. We will then define the proposition $q =$ “Elephants never forget” and assert that $\phi(q) = T$. Then, we’ll define $x =$ “elephants will forget to remember”. And finally, let’s say $y \equiv q \rightarrow x$. What will $\phi(y)$ be? Will $\phi$ be a function?

Well, $\phi(y) = F$, and $\phi$ is thus a function. That’s it, that’s the answer. The lyricist lied to us… or did he? Why or why not?
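If we play along and treat $\phi$ as an ordinary Boolean valuation (a toy sketch of my own, with $q$ true and $x$, its contradiction, false), the computation looks like this:

```python
def implies(a: bool, b: bool) -> bool:
    """Material implication: a -> b is false only when a is true and b is false."""
    return (not a) or b

phi_q = True          # q: "Elephants never forget", asserted true
phi_x = not phi_q     # x: "elephants will forget to remember", contradicts q
phi_y = implies(phi_q, phi_x)
print(phi_y)          # False: a true antecedent with a false consequent
```

One output for each input is exactly what makes this $\phi$ a function rather than a paradox-generator.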

CAUTION : The above definitions (in (S1E01)  (denouement)) may not necessarily be standard in mathematics, never mind mathematical logic. I just happen to like thinking of things in terms of functions, and this was just me messing around. Dankie! Ka leboa. (Thank you!)

Part 2

Before we introduce the next idea, it is worth noting that if we have formulas $X$ and $Y$, we can concatenate the two by writing $XY$.

(S1E01)  (Scene 5) Definition: Substitutions. A function $\theta$ from a set of formulas into a set of formulas is a substitution if and only if

1. $\theta X$ is the empty formula if and only if $X$ is the empty formula.
2. for all formulas $X$ and $Y$, $\theta XY = \theta X \theta Y$.

The two questions I had when I first saw this definition were: in this context, is the empty formula the kernel? And is the function $\theta$ a homomorphism? One can read the definitions and convince oneself that this is plausibly the case.

And we say that any two substitutions are actually the same if they behave in exactly the same way, i.e. if $\theta_{1} x = \theta_{2} x$ for every primitive symbol $x$, then $\theta_{1} = \theta_{2}$. Furthermore, we say that $\theta$ is finite if and only if the set of primitive symbols $x$ such that $\theta x \neq x$ is finite.
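To see the definition in action, here is a minimal sketch of a finite substitution (my own encoding, not the book’s notation): a formula is represented as a list of primitive symbols, and $\theta$ is determined by its action on those symbols, extended symbol by symbol so that $\theta XY = \theta X \, \theta Y$ holds by construction:

```python
def make_substitution(mapping: dict):
    """Build a substitution theta from its action on primitive symbols.

    A formula is a list of primitive symbols, e.g. ["[", "p1", "v", "p2", "]"].
    Symbols outside the mapping are left fixed, so theta is finite, and the
    empty formula [] is sent to the empty formula (condition 1)."""
    def theta(formula):
        out = []
        for sym in formula:
            out.extend(mapping.get(sym, [sym]))  # a symbol may map to a whole formula
        return out
    return theta

# theta replaces the variable p1 by the formula ~p2 and fixes everything else.
theta = make_substitution({"p1": ["~", "p2"]})
X = ["[", "p1", "v", "p2", "]"]
Y = ["~", "p1"]
print(theta(X))              # ['[', '~', 'p2', 'v', 'p2', ']']
# Condition 2 (homomorphism): theta(XY) = theta(X) theta(Y).
assert theta(X + Y) == theta(X) + theta(Y)
```

Two substitutions built from the same mapping agree on every primitive symbol, which matches the equality criterion above.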

Furthermore, the textbook introduces some useful notation:
Picture(2)

I will personally not be using this notation in this post.

I think it’s worth exploring this idea a bit more. To do this, I will reformulate some of the ideas in Picture(1) in terms of substitutions.
Before we do so though, let us just go through some important results:
Consider a wff $A$ with some substitution $\theta$. If $A$ is a wff, then we know it must have a formation sequence (S1E01) (Climax 1). Consider a simultaneous substitution of variables $q_0, \ldots, q_n$ for all the formulas in the formation sequence, where $q_0 = \sim q_n$. So $\theta(A) = q_n$ and $\theta(\sim A) = q_0$. Putting this together, we get $\theta(\sim A) = q_0 = \sim q_n = \sim(\theta A)$.

This then tells us that there must exist a substitution $\Psi$ such that $\Psi(\sim \sim)$ gives us the empty formula. The empty formula is otherwise known as an identity; an identity acts like $1$ in multiplication or $0$ in addition. Notice I say “an” identity and not “the” identity, because I am not quite sure yet whether it is unique. So $\Psi$ is a substitution that maps the double negation $\sim\sim$ to the empty formula. We can use this information to show that $\Psi(\sim \sim A) = A$. We can then define another substitution with $\theta(\sim \lor) = \land$.
I would like to use these two substitutions to derive one of De Morgan’s Laws, $[A \land B] = \sim [\sim A \lor \sim B]$. I will do this simply by computing $(\theta \circ \Psi)(\sim [\sim A \lor \sim B])$, where the circle denotes composition. But before we do this, we need to prove two important results:

1. For any substitution $\gamma$, we have $\gamma ( [A \lor B]) = [\gamma (A) \lor \gamma (B)]$ .
Proof:
Consider a wff $Q = [A \lor B]$. We know that this wff has a formation sequence $q_0, \ldots, q_m, \ldots, Q$, where $\gamma([A \lor B]) = Q$, $\gamma(A) = q_0$ and $\gamma(B) = q_m$. So we have that $\gamma([A \lor B]) = Q = [q_0 \lor q_m] = [\gamma(A) \lor \gamma(B)]$.
2. For any substitutions $\tau \;, \sigma$. The composition $\tau \circ \sigma$ is also a substitution.
Proof:
1. We have to check that $(\tau \circ \sigma)(A)$ is the empty formula if and only if $A$ is the empty formula. If $A$ is empty, then $\sigma$, being a substitution, returns the empty formula, which is passed into $\tau$; $\tau$ is a substitution, so it also returns the empty formula. The converse runs the same argument backwards, so the result follows.
2. We then have to check that $(\tau \circ \sigma)(AB) = \tau(\sigma(A))\,\tau(\sigma(B))$. I will leave this for the reader to check; just remember that $\tau$ and $\sigma$ are already substitutions and $A$ and $B$ are formulas.

$(\theta \circ \Psi)(\sim [\sim A \lor \sim B]) = (\theta \circ \Psi)([\sim\sim A \sim\lor \sim\sim B]) = \theta([A \sim\lor B]) = [A \land B]$.
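Whatever one makes of the symbol-pushing with $\theta$ and $\Psi$, the resulting law can be checked semantically with a quick truth table (a sketch treating $\land$, $\lor$ and $\sim$ as Boolean operations):

```python
from itertools import product

# De Morgan: [A and B] agrees with ~[~A v ~B] under every valuation.
for a, b in product([True, False], repeat=2):
    lhs = a and b
    rhs = not ((not a) or (not b))
    assert lhs == rhs
print("De Morgan's law holds under all four valuations")
```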
It would be nice if the reader defined substitutions to derive the other relationships in Picture(1).

I’ve pretty much woven my solutions to the exercises of this section into the post, and they may or may not be right. I’ve tried my best to ensure they make sense to me and have verified them where I can. This textbook does not have solutions, so I am not 100% sure, but I am convinced I have not made a booboo, maybe a typo here and there.

Finally, I will end with another idea which I may or may not have used implicitly in my post.
(S1E01)  (Scene 6) Claim: for distinct variables $p_1, p_2, \ldots, p_n$, wffs $B_1, B_2, \ldots, B_n$ and a wff $A$, if $\theta$ is a substitution with $\theta(p_i) = B_i$, then simultaneously substituting the $B_i$ for the $p_i$ in $A$ yields a wff. I will leave this to the reader to prove.

See you on the next episode. The next episode will be on the axiomatic structure of our system.
