
Free Logic Now! (v0.1)
Last Updated: October 27, 2009

Zachary Ernst
Department of Philosophy
University of Missouri-Columbia


Creative Commons Copyright (2009)

Contents
List of Symbols
Pregrets
Guiding Ideas

Part I. Propositional Logic
  Chapter 1. The Language of Propositional Logic
    1.1. Logic as an Artificial Language
    1.2. True and False
    1.3. Logical Entailment
    1.4. Primitive Connectives
  Chapter 2. Proofs in Propositional Logic
    2.1. Primitive Proof Rules of Propositional Logic
    2.2. Easy Proofs and a Few Rules
    2.3. The Harder (more interesting) Rules
    2.4. Examples of Primitive Proofs
    2.5. Derived Rules
  Chapter 3. Interlude: The Island of Knights and Knaves
    3.1. Smullyan's Island
    3.2. A Logical Method

Part II. Pure First-Order Logic
  Chapter 4. The Language of First-Order Logic
    4.1. The Need for First-Order Logic
    4.2. Grammar and First-Order Logic
  Chapter 5. Models of First-Order Logic
    5.1. Definitions and Examples
    5.2. More Complicated Models
    5.3. Validity in First-Order Logic
  Chapter 6. Proofs in First-Order Logic
    6.1. Introduction
    6.2. The Easy Rules
    6.3. The Harder Rules
    6.4. Quantifier Exchange

Part III. First-Order Logic with Identity and Functions
  Chapter 7. Identity
    7.1. The Need for Identity
    7.2. Introducing Identity
    7.3. Semantics for Identity
    7.4. Proof Rules for Identity
    7.5. Example: Family Relationships
  Chapter 8. Functions

Part IV. Applications of First-Order Logic

Part V. Computation and Decidability
  Chapter 9. Computable and Uncomputable Sequences
    9.1. Introduction
    9.2. Cantor's Diagonal Argument
    9.3. From Cantor to Computers
    9.4. Turing Machines
    9.5. An Uncomputable Function
    9.6. The Halting Problem
  Chapter 10. Undecidability of First-Order Logic
    10.1. Basic Concepts
    10.2. An Analogy
    10.3. Logical Representation
    10.4. Undecidability

Part VI. Gödel's Incompleteness Theorems
  Chapter 11. Logicism, Russell, and Hilbert
    11.1. Hilbert's Program
    11.2. Smullyan's Machine
  Chapter 12. The Machinery of Arithmetic
    12.1. The Language of Principia Mathematica
    12.2. High-Level Overview
    12.3. Gödel Numbering in Action

Appendix A. Definitions and Rules
Appendix B. Answers to Selected Exercises
Bibliography
Index


List of Symbols
The page number is where the symbol is defined in the text.

∧E          And-elimination rule, page 17
∧I          And-introduction rule, page 17
∧           Conjunction, page 11
A           Assumption rule, page 17
|S|         Cardinality of the set S, page 48
R(n)        The nth class sign in an arbitrarily ordered list of all class signs, page 73
a, b, c, …  Constants, page 30
DM          DeMorgan's Rule, page 23
⊨           Logical entailment (double turnstile), page 14
↔           Equivalence, page 11
∃           Existential quantifier, page 29
F           False, page 12
→           Implication, page 11
Z           Set of integers, page 49
K(n)        Kolmogorov complexity of the sequence n, page 54
qi          m-configuration (i.e., state) of a Turing machine, page 50
MTT         Modus tollens, page 23
            The class of all unprovable formulas, page 73
N           Set of natural numbers, page 48
N+          Set of natural numbers without zero, page 49
¬           Negation, page 11
∨I          Or-introduction rule, page 18
∨           Disjunction, page 11
PM          The system Principia Mathematica, page 68
Q+          The set of positive rationals, page 49
P, Q, R     Predicates, page 29
R           Set of real numbers, page 49
G(n)        Symbol occurring at square n on the tape of a Turing machine, page 50
            Transition rule of a Turing machine, page 51
T           True, page 12
M           Turing machine, page 51
⊢           Logical implication (turnstile), page 17
φ[p/ψ]      Uniform substitution of ψ for p throughout the formula φ, page 23
U           Universal Turing machine, page 55
∀           Universal quantifier, page 29
x, y, z, …  Variables, page 29

Pregrets¹

The good news is that you do not have to purchase a text for my class. The bad news is that instead, you will be subjected to my first and incomplete draft of my own text. This is very rough. I anticipate revising this quite a bit throughout the semester. There are lots of ways of presenting formal logic; this is one way. Although it may differ from other presentations, I have tried very hard to make this presentation as standard as possible in every important way. If you decide to go on to study more logic at a more advanced level, you will find that this material lays a good foundation regardless of how any additional material is presented. There should be plenty of exercises for practice, and I have also tried to give both formal and informal definitions of key concepts and terms.

¹ Pregrets are like regrets, except that you feel sorry about something you are about to do.

Guiding Ideas

My goal is to write a completely free logic text that is useful to at least one person other than me. This is a very incomplete draft, which I will continue to update as I teach my logic courses. The reason for writing a free logic text is to release it from the constraints that come with publishing. A free text does not have to be written so as to sell it to publishers, and can be updated and corrected continuously without waiting for the next printing. From the perspective of the instructor, it provides the opportunity to use a text that costs students nothing; this has the additional advantage that if there is material here that will not be used in a particular course, there is no money wasted in requiring this text. Quite honestly, I also like the idea of freeing the text from the clutches of university bookstores, which (in my experience, at least) exploit their virtual monopoly over the students and faculty. What I want to do here is briefly explain a little bit about how this book is organized, and why the proof systems are set up the way they are. The proof system I develop here is somewhat of a mixture of the systems presented in E.J. Lemmon's classic text, Beginning Logic [3], and the system in Colin Allen and Michael Hand's Logic Primer [1] (both of which I like very much, and have used in my courses). It is a natural deduction system in which assumptions are tracked in one column, and step justifications are given in another. When settling on a natural deduction system for an introductory text, it is necessary to find a balance between competing virtues. On the one hand, it is desirable to have a system in which the proof rules correspond as closely as possible to the sort of informal, syllogistic reasoning that is easy to teach, and easy to grasp.
On the other hand, one also would like for the system to reveal as much as possible about the structure of proofs, to correspond roughly to some axiomatic system, and to provide a solid foundation for future study of metatheory. I see the Allen and Hand text as leaning closer toward the first set of virtues, for example. In it, the reductio ad absurdum rule allows you to either add or delete a negation to the assumption, and the rule for discharging disjunctions is simply disjunctive syllogism. Students find it easier to grasp this presentation, but it somewhat obscures the role of proof by contradiction, and it also bears less of a similarity to presentations in more advanced texts. On the other end of the spectrum, we might cite Graeme Forbes's Modern Logic [2] (which I have also enjoyed using in my classes). In it, the proof rules are presented in such a way that weaker systems, such as intuitionistic logic, can be straightforwardly had by simply disallowing particular rules. However, the presentation gives up a lot of simplicity, since there is a profusion of different sets of proof rules. I've tried to strike a balance by using a system in which intuitionism can be had by deleting one rule, the reductio ad absurdum rule only allows us to add negations to the assumed formula, and the rule for eliminating disjunctions takes the form of a constructive dilemma instead of the simpler disjunctive syllogism. Derived rules are introduced as soon as possible, and so the instructor is free to emphasize these simpler rules if that is desired. I have kept the natural deduction system for first-order logic. In particular, I refuse to present the tableau method, which is a crime against logic because it obscures the role played by arbitrary instances of formulas in deriving universal statements and discharging existentials. To keep the presentation as simple as possible (and keeping in mind that the proof rules already in place are not the simplest ones available), the system does not utilize free variables at all. Of course, this means that if the students were to go on to study metatheory, they would be introduced to free variables for the first time.
However, for the purpose of an introductory course, I think that the interpretation of formulas with free variables is obscure enough to warrant their exclusion. And I also suspect that the students who are motivated to study metatheory do not need to see unbound variables in their introductory-level course, anyway. Quantifier logic is presented without functions and equality first, and these are added later. Models of first-order logic are presented using set-theoryish notation. Although I do mention the existence (and necessity) of infinite models, with an example or two, the presentation is almost completely confined to small, finite models. One important absence is worth noting. I have devoted almost no space whatsoever to the topic of translating formulas into English, and vice-versa. My (admittedly minority) opinion is that the material conditional and the extensional meaning of the quantifiers render classical logic such a poor model of natural language that it's better to ignore this topic almost completely. Also, my experience in teaching this course many times is that although students do well enough translating statements back and forth, they (justifiably) find the translations quite puzzling when they come up against the paradoxes of material implication, for example. This has the effect (again, in my minority opinion) of actually unraveling some of the progress they have made. Perhaps the fact that these translations are so poor makes them feel as if they are missing something important, which, of course, they aren't. In addition, this text will include selected topics in metatheory, suitable for a graduate or advanced undergraduate course. After the presentations of propositional and first-order logic, the book will have a complete presentation of Turing machines, basic notions of computability and uncomputability, a proof of the unsolvability of the halting problem, and Turing's proof of the undecidability of first-order logic. That section will be followed by a presentation of Gödel's Incompleteness Theorems. My hope is that by presenting all these topics together in one text, we will have the opportunity to achieve a unified and consistent presentation of both object-level and meta-level logic, which should be beneficial to the student and to the instructor.
When the text is finally completed, I hope it will be useful for a two-semester course in logic. Finally, this is a draft, indeed, a very drafty draft. The way I work is to get a skeleton of the project working as quickly as possible. Then I go back over it repeatedly to correct errors, fix inconsistencies in the presentation, and add additional material. Obviously, this will take a long time to complete, but I hope it may be useful to someone before the bugs in the text have all been exterminated. For reference, here's what I've got so far:

(1) Propositional Logic
    (a) Proof theory: Complete presentation, with exercises and examples. Some notation is still buggy.
    (b) Semantics: Complete presentation, with exercises and examples.
(2) First-order logic
    (a) Proof theory: Complete presentation; need lots more examples and exercises.
    (b) Semantics: Complete, but very brief, presentation of finite models. Need examples of infinite models, and more examples and exercises.
(3) Functions and equality: Complete presentation of equality. Needs more elaboration. Nothing yet for functions.
(4) Computation: Computable and uncomputable functions, with examples. Complete presentation of Turing machines. A quick but complete presentation of the halting problem is now roughed-in.
(5) Decidability: Just begun. This will be fast, I expect.
(6) Incompleteness: Partial discussions of classes, Peano arithmetic, and a very high-level overview of the proof.

Part I

Propositional Logic

CHAPTER 1

The Language of Propositional Logic

I was once in a large hospital building when I came across a man who had the formidable task of testing all of the smoke detectors in every room. This hospital employee was very tall; I am pretty confident he was over six and a half feet tall. This was an important asset to him, because he could simply reach up to the ceiling and pull down each smoke detector, rather than having to lug around a ladder. I almost said to him, "Boy, it sure is lucky you're so tall!", but instead, I asked him, "How many people today have said that it's lucky you're so tall?". He answered (and I'm sure he wasn't exaggerating) that it was in the hundreds. He was exasperated. If any of these hundreds of people had thought for a moment before speaking, each would have realized that their comment was so obvious that it really didn't need to be said, and that this poor guy had already heard it too many times.

I once heard an interview with the author of a book entitled The Black Swan. He had gone to give dozens of talks about his book, and every single host had served a particular brand of wine called Black Swan, thinking that this was funny and clever.¹ The author brought this up during the interview to make the point that although everyone thinks, hardly anyone thinks about thinking. How true! We all are guilty of saying things and arguing for conclusions that are either terribly obvious, or hopelessly misguided. And often, if we had simply taken a moment to think about our own thinking, we could avoid those awkward blunders.

In its most common form, logic simply is the study of proper reasoning. In a sense, the study of logic has no content whatsoever. It gives up any consideration of ordinary facts and judgments in order to study the proper (and improper) forms of arguing, reasoning, and judging. It does this by creating a mathematical theory of proper reasoning, which is both interesting in its own right, and which also can be studied by mathematicians.
¹ Unfortunately for the author, Black Swan wine is quite awful.

The major difficulty that logic addresses is that our natural languages, such as English, are terribly vague. You can see this when you think about how difficult it can be to even tell what a particular writer is trying to say. And of course, arguments can often be interpreted in many different ways. Whether or not you find a particular argument compelling often depends just as much on how you read the argument as on what the author actually wrote. Logic addresses this problem directly, by providing us with a so-called artificial language that is never vague or ambiguous. We will have perfectly precise rules for determining which expressions in our artificial language are meaningful and which are not. Furthermore, every meaningful expression in our language can be interpreted in precisely one way. With this language, we will have exact criteria for determining which arguments are good ones (and of course, we will have a precise definition of what constitutes a good argument).

1.1. Logic as an Artificial Language

Just as we construct English sentences out of words arranged according to particular rules, we will define logical sentences by showing how they are built up from their components.

Example 1. One thing that strikes some students as strange is that we are concerned only with the form of sentences and arguments, and not about their content. For example, consider the following two arguments:

(1) All fish live in the sea. All whales are fish. Therefore, all whales live in the sea.
(2) All rectangles are quadrilaterals. All squares are rectangles. Therefore, all squares are quadrilaterals.

It should be clear that argument (1) is not a good argument, in a particular way. After all, whales are not fish, so the second sentence is false. But argument (2) doesn't have that problem; it is a perfectly good argument.

But notice that in a particular way, both arguments are similar. Pretend for the moment that the first two sentences in argument (1) were true, but you didn't know if the last sentence is true. I think you'll agree that if you were to accept the first two sentences, it would be very irrational to deny the third sentence. The same is true of argument (2), with the difference that you don't have to pretend that the second sentence is true; it really is true. In a sense, we will say that these arguments are similar in that they are both valid, that is, if one were to accept the premises, then one would be forced to accept the conclusion. Put more precisely, we shall use the term valid in the following way:

Definition 2. An argument is valid if it is impossible for the premises to be true under the same circumstances in which the conclusion is false.

But why are we forced to accept the conclusion if we accept the premises? It is not because of any particular facts about fish, whales, rectangles, or squares. Rather, the arguments are valid because of their form. We could put in any word in place of "whales" and "fish", for example, and so long as we did this replacement consistently, we would also end up with an equally valid argument. Accordingly, logic does not care about the particular words in a sentence or argument; it is concerned only with how those words are arranged, relative to one another. Thus, in the language of propositional logic, we'll simply represent certain phrases by letters, not caring at all about what those letters are meant to stand for. So our first sentences in propositional logic will simply be lower-case letters such as p, q, r, . . . and so on. One fact about our natural languages is that we can form compound sentences out of simpler sentences by inserting certain words. For example, we could take the two sentences: It is sunny. It is raining. and combine them to form the sentence: It is sunny or it is raining.
Words like "or" can be used to connect two sentences together to form new sentences. So we will call such words connectives. The connectives we will be concerned with here are "and", "or", "if-then", and "just in case". So we could form the sentences: It is sunny and it is raining. If it is sunny, then it is raining. It is sunny just in case it is raining.


There is also one more connective, which does not combine two sentences together. This is the connective "not", which we can add to any sentence to form (what we shall call) its negation. Our artificial language will be written entirely in symbols, and so in addition to the letters that represent sentences, we will also have symbols for the connectives. These are:

(1) ∧ (and)
(2) ∨ (or)
(3) → (if-then)
(4) ↔ (just in case)
(5) ¬ (not)

Not every combination of symbols is meaningful in the language of propositional logic. We can't just write nonsense like pq and expect it to make sense. Rather, we have precise rules for forming meaningful expressions. We shall call these meaningful expressions well-formed formulae, which is typically abbreviated wff. The following rules define the set of wffs:

(1) Every propositional letter p, q, r, . . . is a wff.
(2) If φ and ψ are wffs, then so are (φ ∧ ψ), (φ ∨ ψ), (φ → ψ), and (φ ↔ ψ).
(3) If φ is a wff, then so is ¬φ.
(4) Nothing else is a wff.
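The formation rules above can be checked mechanically. The following Python sketch is mine, not the text's; it uses ASCII stand-ins for the connectives ('&', '|', '>', '=' for the binary connectives, '~' for negation), and the function name is chosen only for illustration:

```python
# A sketch of the wff-formation rules, with ASCII stand-ins for the
# connective symbols: & (and), | (or), > (if-then), = (just in case),
# ~ (not).

def is_wff(s):
    """Return True if the string s is a well-formed formula."""
    # Rule (1): every single propositional letter is a wff.
    if len(s) == 1 and s.islower():
        return True
    # Rule (3): if phi is a wff, then so is ~phi.
    if s.startswith('~'):
        return is_wff(s[1:])
    # Rule (2): binary compounds are fully parenthesized: (phi OP psi).
    if s.startswith('(') and s.endswith(')'):
        depth = 0
        for i, ch in enumerate(s[1:-1], start=1):
            if ch == '(':
                depth += 1
            elif ch == ')':
                depth -= 1
            elif depth == 0 and ch in '&|>=':
                return is_wff(s[1:i]) and is_wff(s[i + 1:-1])
    # Rule (4): nothing else is a wff.
    return False

print(is_wff('(p&q)'))      # True
print(is_wff('((p&q)|r)'))  # True
print(is_wff('p~q'))        # False
```

Note how rule (4) appears as the final `return False`: any string not produced by the first three rules is rejected.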

So we can construct wffs such as p, (p ∧ q), ((p ∧ q) → r), and so on. But strings of symbols such as pp q are not wffs, because they cannot be constructed from the rules. Throughout the rest of this text (and for the rest of your time on earth), you will need to be able to talk to other people who enjoy logic. So we need some terminology that is commonly used to talk about formulas and parts of formulas.

(1) We say that ¬φ is the negation of φ.
(2) The formula (φ → ψ) is an implication or a conditional. The formula φ is the antecedent, and ψ is the consequent.
(3) The formula (φ ∨ ψ) is a disjunction; each side of the disjunction is called a disjunct.
(4) The formula (φ ∧ ψ) is a conjunction; each side of the conjunction is called a conjunct.


(5) Any formula of the form (φ ↔ ψ) is called an equivalence; φ is the left-hand side of the equivalence, and ψ is the right-hand side.
(6) Whichever connective you would have added last in constructing a formula is its main connective.

You will notice that I've used lower-case Greek letters to stand for formulas. This is a common convention, because we want to emphasize that these statements apply to any wffs at all, not just particular wffs like p, q, and so on. Additionally, when it is necessary to refer to arbitrary sets of wffs, I will use upper-case Greek letters such as Γ, Δ, and so on. I will also use lower-case letters to stand for numbers; of course, this is a little ambiguous, because lower-case Latin letters are also used for formulas, but in context, it will be perfectly clear what I mean. Finally, when I need to refer to arbitrary sets of numbers (this will happen below, when we start talking about proofs), I will use upper-case Latin letters.

Exercise 1. Determine which of the following are wffs and which are not:
(1) (p p)
(2) ((p p)q)
(3) (p p q)
(4) (p pq)

1.2. True and False

So far, we have only been speaking of the formulas of propositional logic as strings of symbols without any meaning. This is because logic consists of two components: a syntax and a semantics. The syntax determines which symbols are well-formed, and what rules we may use to manipulate those symbols in acceptable ways. The semantics, on the other hand, is concerned with notions relating to the meaning of the symbols. Semantics tells us whether formulas are to be considered True or False, and how the meanings of formulas are built up from the meanings of those formulas' parts. Happily, the semantics for propositional logic is very simple (things will get more complicated when we consider quantifier logic later on, though).
The semantics boils down to a few simple rules that we will have to keep in mind. They are:

(1) Every wff has a truth-value of either True or False.
(2) No wff's truth-value is both True and False.
(3) The truth-value of a wff is determined by the truth-values of its components, according to the following rules:
    (a) A propositional variable may be either True or False.
    (b) The wff (φ ∨ ψ) is True if and only if either φ or ψ (or both) are True.
    (c) The wff (φ ∧ ψ) is True if and only if both φ and ψ are True.
    (d) The wff (φ → ψ) is False only if φ is True and ψ is False. It is True otherwise.
    (e) The wff (φ ↔ ψ) is True if and only if φ and ψ have the same truth-value.
    (f) The wff ¬φ is True if φ is False, and vice versa.²
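Rules (a) through (f) amount to a recursive evaluation procedure: the truth-value of a compound formula is computed from the truth-values of its parts. Here is a minimal Python sketch of that idea (the tuple encoding of formulas and the helper name are my own, not the text's):

```python
# A sketch of semantic rules (a)-(f). Formulas are nested tuples, e.g.
# ('if', 'p', ('not', ('if', 'q', 'p'))) for (p -> ~(q -> p)); the
# connective labels 'not', 'or', 'and', 'if', 'iff' are my own stand-ins.

def value(f, row):
    """Evaluate formula f under the assignment row, e.g. {'p': True}."""
    if isinstance(f, str):        # rule (a): a letter gets its assigned value
        return row[f]
    if f[0] == 'not':             # rule (f): ~phi is True iff phi is False
        return not value(f[1], row)
    op, a, b = f
    va, vb = value(a, row), value(b, row)
    if op == 'or':                # rule (b): True iff at least one disjunct is
        return va or vb
    if op == 'and':               # rule (c): True iff both conjuncts are
        return va and vb
    if op == 'if':                # rule (d): False only when True -> False
        return (not va) or vb
    if op == 'iff':               # rule (e): same truth-value on both sides
        return va == vb

# (p -> ~(q -> p)) under p = True, q = True:
f = ('if', 'p', ('not', ('if', 'q', 'p')))
print(value(f, {'p': True, 'q': True}))   # False
```

The recursion mirrors the formation rules exactly: each clause of `value` corresponds to one semantic rule.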

When we consider all the possible truth-values of a formula, we have to consider every possible way the values True and False could be assigned to every (distinct) propositional letter in the formula. For example, if we were interested in the formula (p → ¬(q → p)), we would note that there are two distinct propositional letters in the formula (counting the p only once). Since each letter can take on one of two different truth-values, there are 2 × 2 = 4 cases to consider. We may represent these cases in the following truth table:

(1.2.1)
 p | q
 T | T
 T | F
 F | T
 F | F

As I have said earlier, the truth-values of formulas are built up from the truth-values of their parts. So we figure out the truth-values of the formula by considering each part of the formula, one by one, starting with the smallest subformulas and moving up. So in Table 1.2.1, we have already started, by writing down all the truth-values for the smallest subformulas. So now, we write down a larger table, which will include not only the propositional letters, but every subformula, and finally, the
² I think it's easier to learn the truth-values of the connectives this way than by using a truth-table to refer back to. But if you prefer, the table corresponding to this list is Table (1.4.1) on page 15.



entire formula:

(1.2.2)
 p | q | (q → p) | ¬(q → p) | (p → ¬(q → p))
 T | T |    T    |          |
 T | F |    T    |          |
 F | T |    F    |          |
 F | F |    T    |          |

Here, we have written all the wffs that appear in the formula across the top, and we have also written down the truth-values for the formula (q → p), using the rules from page 12. If we were constructing the formula (p → ¬(q → p)) using the rules for forming wffs, the next subformula we would construct would be ¬(q → p), so we can fill in that column of the truth table. The connective that is added to form that formula is ¬, so we use the fact that the negation symbol (¬) changes the truth-value from True to False, and vice versa. So, looking at the column under (q → p), we simply reverse the truth-values as follows:

(1.2.3)
 p | q | (q → p) | ¬(q → p) | (p → ¬(q → p))
 T | T |    T    |    F     |
 T | F |    T    |    F     |
 F | T |    F    |    T     |
 F | F |    T    |    F     |

Finally, we are ready to fill in the last column of the truth table. For this column, we note that the last connective to be added is →, so we look at the left-hand side of it (where we find the subformula p), and the right-hand side (where we have ¬(q → p)). Looking at the corresponding columns (which are the first and fourth columns, respectively) we find the combinations (T, F), (T, F), (F, T), and (F, F). Since the semantic rule for → says that it is False only in the case where the first truth-value is True and the second is False, we get:

(1.2.4)
 p | q | (q → p) | ¬(q → p) | (p → ¬(q → p))
 T | T |    T    |    F     |       F
 T | F |    T    |    F     |       F
 F | T |    F    |    T     |       T
 F | F |    T    |    F     |       T

Of the possible ways in which a formula may take on its truth-values, we distinguish among three cases. First, like the example, it might turn out that there are both T's and F's. In this case, we say that the formula is contingent. Like the ordinary use of the word, this indicates that the truth-value of the formula depends upon the truth-values of its propositional letters. But it is also possible that the truth-value does not depend on its propositional letters at all. Here is a simple example of one such case:

(1.2.5)
 p | ¬p | (p ∨ ¬p)
 T | F  |    T
 F | T  |    T

Here, we note that regardless of the truth-value of p, the truth-value of (p ∨ ¬p) is always True. In this case, we say that (p ∨ ¬p) is a tautology, or that the formula is valid (both mean the same thing). On the other hand, it is also possible that a formula will always take the value False, regardless of the truth-values of its propositional letters. Here is one such case:

(1.2.6)
 p | ¬p | (p ∧ ¬p)
 T | F  |    F
 F | T  |    F

In this example, we have just switched the ∨ to ∧, but this has the effect of completely reversing the truth-values of the formula. In this case, since the formula always takes the value False, we say that it is a contradiction. These categories of contingent, valid, and contradictory will be very important later on. We will soon start describing a system for proving certain formulas using precisely defined rules of inference; it will turn out that the formulas that are tautologous are the only ones that are provable.

Exercise 2. Construct truth-tables for the following formulas, and indicate which ones are contingent, tautologous, or contradictory.
(1) (p p) q
(2) (p (p q) q
(3) (p q) (p q)
(4) (p q) (p q)
(5) (p q) (p q)
(6) (p q) (p q)
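The table-building procedure just described can be mirrored in a few lines of Python. This sketch is my own illustration (the formula encoding and helper name are assumptions, not the text's notation): it enumerates the four rows and classifies the formula as contingent, tautologous, or contradictory.

```python
# Rebuilding the final column of Table 1.2.4 mechanically, then
# classifying the formula. Only 'not' and 'if' are needed here.
from itertools import product

def value(f, row):
    """Evaluate a formula built from letters, 'not', and 'if' (->)."""
    if isinstance(f, str):
        return row[f]
    if f[0] == 'not':
        return not value(f[1], row)
    _, a, b = f                   # ('if', antecedent, consequent)
    return (not value(a, row)) or value(b, row)

f = ('if', 'p', ('not', ('if', 'q', 'p')))    # (p -> ~(q -> p))
column = []
for p, q in product([True, False], repeat=2): # rows TT, TF, FT, FF
    column.append(value(f, {'p': p, 'q': q}))
print(column)                     # [False, False, True, True]

# Classification: tautology (all True), contradiction (all False),
# contingent (both values appear).
if all(column):
    print('tautology')
elif not any(column):
    print('contradiction')
else:
    print('contingent')           # this formula is contingent
```

The computed column agrees with the hand-built table: False in the first two rows, True in the last two.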

Zachary Ernst 1.3. Logical Entailment Closely related to the concepts of tautology, contingency, and contradictory, is the concept of logical entailment, which closely mirrors what we want to clarify about validity. Logical entailment is a relation between a set of ws and some other w; this concept is meant to precisely model what we mean by an arguments being valid. Recall that an argument is valid just in case it is impossible for the premises to be true while the conclusion is false. In such a case, we would say that the conclusion is logically entailed by the premises. When we say that a w is logically entailed by other ws, we mean that for any assignment of truth-values to the premises, if the premises all take the truth-value True, then the entailed formula must also be True. For instance, take a simply example in which the formula (p q) is logically entailed by (p q). We can see that the rst formula is logically entailed by the second by looking at the following truth-table: the truth-table like so: (1.3.2) p T T F F q T F T F

Free Logic Now!

(p q) (q q) (p q) F T T T F F T T T T F T

To see that (p q) is, indeed, logically entailed by the other two formulae, we nd every row of the truth table in which (p q) and (q q) are both true; then we check whether (p q) is true in those rows. Here, there is only one such row we need to worry about (it is indicated in bold print), and in fact, (p q) is true there. So the formula is a logical entailment of the rst two. For a case in which a formula, (p q), is not entailed by the formulas {(p q), (p q)}, consider the following: (1.3.3) p q T T T F F T F F (p q) (p q) (p q) T F T T T T T T F F T T

(1.3.1)
p q (p q) (p q)
T T   T     T
T F   F     T
F T   F     F
F F   F     F

In this truth-table, we can see that the row in which p is False and q is True shows that (p q) is not logically entailed by the other two formulas. We represent logical entailment using a simple notation. When a formula φ logically entails ψ, we can concisely write this fact as φ ⊨ ψ. (This new symbol, ⊨, is usually pronounced "double turnstile.") When several formulas together entail another (as in Example 1.3.2), we gather the entailing formulas together in brackets; so we would write {φ, ψ} ⊨ χ.
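The definition of entailment can also be checked by brute force: enumerate every assignment and look for a row where all the entailing formulas are True but the entailed formula is False. A sketch in Python follows; the function name `entails` and the formula encoding are my own illustration, not the book's notation:

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Return True when every truth-value assignment that makes all the
    premises True also makes the conclusion True."""
    for row in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, row))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # found a counterexample row
    return True

# {p-implies-q, p} entails q, but p-implies-q alone does not entail p.
p_implies_q = lambda v: (not v["p"]) or v["q"]
print(entails([p_implies_q, lambda v: v["p"]], lambda v: v["q"], ["p", "q"]))  # True
print(entails([p_implies_q], lambda v: v["p"], ["p", "q"]))                    # False
```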

To see that (p q) is logically entailed by (p q), we find every row of the truth table in which (p q) is true. In this case, there is only one such row (I have put that row in bold print). When we look across the table at the corresponding row under (p q), we find that the formula also takes the value True. Therefore, there is no row in which (p q) is true, but in which (p q) is false. So the latter is logically entailed by the former. The same concept also applies in cases in which a formula is logically entailed by a set of formulas. For example, suppose that we want to know whether the formula (p q) is logically entailed by the two formulas {(p q), (q q)}. We can write the truth-table as in (1.3.2) above.

Exercise 3. Is it possible to give a formula that does not entail (p p)? If yes, provide one; if not, explain why not.

Exercise 4. Is it possible to give a formula that is not entailed by (p p)?

Exercise 5. For each pair of formulas, one is logically entailed by the other. Show which is logically entailed by giving a truth table:
(1) (p q), (p q)

(2) (p q), (p q)
(3) (p q), p
(4) (p q)
(5) p


1.4. Primitive Connectives

As austere as our artificial language may seem, we actually do not even need most of our connectives. In fact, we could have gotten by with only ¬ and any one of the other connectives. To illustrate, consider the truth tables for all of the connectives:

(1.4.1)
p q ¬p (p ∨ q) (p ∧ q) (p ⊃ q) (p ≡ q)
T T  F    T       T       T       T
T F  F    T       F       F       F
F T  T    T       F       T       F
F F  T    F       F       T       T

Now suppose that we had only ¬ and ∨ as connectives, but we wanted to write a formula that was equivalent to (p ∧ q). The following table shows that this is easily accomplished:

p q (p ∧ q) ¬(¬p ∨ ¬q)
T T    T        T
T F    F        F
F T    F        F
F F    F        F

(1.4.2)

Other examples will be left as exercises. Here they are:

Exercise 6. Using only ¬ and ∨, write formulas equivalent to each of the following:
(1) (p q)
(2) (p q)
(3) (p q)
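The adequacy of ¬ together with ∨ can be checked row by row. As a quick sketch (my own check, using Python's `not`/`or` for the truth-functions), the encoding of "p and q" as "not (not-p or not-q)" agrees with the direct conjunction on all four rows:

```python
from itertools import product

def conj(p, q):
    # the truth-function for (p and q)
    return p and q

def encoded(p, q):
    # the same truth-function built from negation and disjunction only
    return not ((not p) or (not q))

for p, q in product([True, False], repeat=2):
    assert conj(p, q) == encoded(p, q)
print("agrees on every row")
```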

Exercise 7. Suppose we had a new connective |, where (p|q) has the value False exactly when p is True and q is True (this is called "Sheffer's stroke"). Using only this connective, write formulas equivalent to each of the following, and show that they are equivalent by using a truth-table.
(1) (p q)
(2) (p q)

CHAPTER 2

Proofs in Propositional Logic

The previous section introduced the semantic rules for evaluating the truth and falsity of sentences of propositional logic. As I have said earlier, for each semantic notion, there is a corresponding syntactic notion. Here, the corresponding notion is that of proofs. As it turns out, we will spend most of our time learning about, and becoming proficient with, proofs. Because logic is concerned with understanding good forms of reasoning, our study of proofs will be concerned with determining which conclusions can validly be drawn from a set of assumptions, which we shall call premises. In this system, proofs are very particular, stylized objects that provide a sort of model of argumentation. But just as the language of logic is a more precise, austere version of our natural language, so too, proofs are austere versions of informal arguments. With a logical proof, we get a precise version of the key elements of the sorts of informal arguments that we are confronted with every day. When we come across an argument, say, by a politician who is trying to establish a conclusion about a policy decision, we are concerned to ask a few questions about it:
(1) What assumptions are being made within the argument?
(2) What information is being used to support each claim in the argument?
(3) How, exactly, is each piece of information being used to generate new pieces of information?
Typically, informal arguments will leave the answers to these questions implicit; it is very rare that an argument would ever be given that would make the answers to those questions clear. Indeed, the task of evaluating arguments is often very difficult precisely because we have to infer, perhaps quite indirectly, exactly how the

2.1. Primitive Proof Rules of Propositional Logic

argument is supposed to hang together. In contrast, a proof within our system must make every step perfectly clear in every relevant respect. By keeping in mind that proofs are supposed to mirror informal arguments, you will find it easier to keep track of all of the elements of proofs.

2.2. Easy Proofs and a Few Rules

When we try to convince someone of a conclusion, we try to justify our reasoning as much as it is practical to do so. It is as if we wanted to have an answer ready if we were ever asked any of the three questions above. Accordingly, proofs have a format that highlights those three requirements. A proof will be organized by having two columns. In the right-hand column, we will write the formula which we are asserting. In the left-hand column, we will write down our justification for making that assertion. These justifications can take either of two forms. On the one hand, we can simply write that we are assuming, without justification, the formula that is written on that line. On the other hand, we can write down the claim that our formula can be derived from other formulae that appear earlier in the proof. Let's begin by taking a very simple example:

(2.2.1)
1. 1. (p ∧ q)   A
2. 1. p         1, ∧E
3. 1. q         1, ∧E
4. 1. (q ∧ p)   3, 2, ∧I

We can think of this proof as showing that if we assume that (p ∧ q) is true, then so is (q ∧ p). Let's examine the proof, line by line, informally; after this is done, we will state precisely the proof rules used in it. Once we have shown that a particular conclusion can be derived from certain premises, we can show this through a notation called a sequent that is parallel to that of logical entailment. We write φ ⊢ ψ to

show that ψ can be derived from φ, so that in our example, the sequent would be (p ∧ q) ⊢ (q ∧ p) (and we unsurprisingly refer to the ⊢ as a "single turnstile," or just "turnstile"). And parallel to the double-turnstile notation of logical entailment, we can put more than one formula to the left of the turnstile, or none at all to indicate that a formula can be proven from no assumptions whatsoever. Now we need to explain the various parts of the proof. First, each line of the proof is numbered in the leftmost column; this is necessary because we will constantly be referring to previous lines of the proof. The second column, which here contains only 1s, is used to keep track of all the assumptions that are required in order to write down the formula. (In this example, our assumption column is not terribly interesting, but don't worry; it will get much more complicated later.) Next, we have the formulas which are being asserted, beginning with the premises, and ending with the conclusion. The right-hand column is the more complicated one. In it, we write down, using the proof rules that will be explained below, what justification we have for writing down that formula. Our first proof rule will appear to be quite trivial initially, but we will later see that the correct use of it is important, and sometimes very difficult. This is the rule which allows us to write assumptions into our proof; the idea behind this rule is that we can always write down any assumption whatsoever, so long as we mark it as an assumption:

Assumption: At any line of a proof, write down any formula whatsoever, copy the line number into the assumption column, and write an "A" in the third column.

(2.2.2)
1. 1. φ   A

The second rule used in the sample proof is called "and elimination," or ∧E for short.1 The intuitive idea behind it is that if you know that two statements are true, then each one by itself must be true. For example, if I were to tell you that it is sunny and that it is raining, you could assert, assuming that I have been truthful and that you know this, that it is sunny.

And Elimination: If (φ ∧ ψ) appears at line n in the proof, then write either φ or ψ, copy the assumptions from line n, and write n, ∧E.

(2.2.3)
1. M. (φ ∧ ψ)
2. M. φ   1, ∧E
3. M. ψ   1, ∧E

We can see this proof rule at work in the first line of Proof 2.2.1, in which we assume the formula (p ∧ q). Although it may seem illegitimate to simply write down any formula whatsoever, the point of the Assumption rule is to make sure that we flag the fact that the formula is an assumption. Our other proof rules will ensure that assumptions are not used illegitimately. Of course, it is not terribly interesting (or challenging!) to simply write assumptions. The other proof rules, of which there are several, can be thought of as ways of manipulating the assumptions, depending upon the formula's main connective. In fact, it is not unreasonable to think of a proof as a game in which the assumptions are transformed by these rules into the conclusion.

At this point, it is appropriate to say a few words about how this proof system is organized. As I said above, you can think of a proof as a set of transformations in which the assumptions are gradually transformed into the conclusion. If we think of the proof in this way, then we see that in order to construct a formula from others, we need to be able to get rid of connectives, thereby making the formula shorter, and we need to be able to lengthen the formula, which would require us to write down additional connectives. The proof rules thus come in two varieties; the first set of rules are elimination rules, in which formulas get shorter. The second set of rules are introduction rules, in which we get to write down new connectives and thereby make the formula longer. With this in mind, we need also to be able to introduce ∧s into formulas. We therefore need the next rule:

And Introduction: Given a formula φ at line k, and a formula ψ at line ℓ, write down (φ ∧ ψ), copy the assumptions from k and ℓ, and put k, ℓ, ∧I as the formula's justification.

(2.2.4)
1. M. φ
2. N. ψ
3. M, N. (φ ∧ ψ)   1, 2, ∧I
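The bookkeeping these rules describe (line numbers, assumption lists, justifications) can be made concrete as a small data structure. The following sketch is my own illustration of the ∧E and ∧I rules in Python; the names and the tuple encoding of formulas are assumptions of mine, not part of the book's system:

```python
from dataclasses import dataclass

@dataclass
class Line:
    num: int
    assumptions: frozenset  # line numbers of the assumptions in force
    formula: object         # e.g. ("and", "p", "q") or just "p"
    justification: str

def assume(num, formula):
    # The Assumption rule: the line depends on itself.
    return Line(num, frozenset([num]), formula, "A")

def and_elim(num, line, side):
    # From (X and Y) infer X (side=1) or Y (side=2), copying assumptions.
    op, left, right = line.formula
    assert op == "and"
    return Line(num, line.assumptions,
                left if side == 1 else right, f"{line.num}, ^E")

def and_intro(num, l1, l2):
    # From X and Y on two lines infer (X and Y), pooling assumptions.
    return Line(num, l1.assumptions | l2.assumptions,
                ("and", l1.formula, l2.formula),
                f"{l1.num}, {l2.num}, ^I")

# Proof 2.2.1: from (p and q), derive (q and p).
l1 = assume(1, ("and", "p", "q"))
l2 = and_elim(2, l1, 1)    # p
l3 = and_elim(3, l1, 2)    # q
l4 = and_intro(4, l3, l2)  # (q and p), still resting on assumption 1
print(l4.formula, sorted(l4.assumptions))
```

Notice how the assumption column of the book's proofs falls out automatically from copying and pooling the `assumptions` sets.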

I think you will find that the rules concerning the introduction and elimination of ∧ are not difficult to grasp. In my experience, the part of the proof system that seems odd at this point is the assumption list. But the oddness of it will go away
1This rule is always pronounced "wedge E," never "wedgie."



when we get to the rules that explicitly require that certain information appear in that column. So for now, try to put it out of your mind. In the remainder of this section, I will present a few more rules. These are the simplest rules, in that they do not require any changes to be made to the assumption list when they are used. In the next section, we will complete our survey of propositional logic's rules of inference. We next have to consider the other connectives, namely, ∨, ⊃, and ≡. For ∨, we will look at its introduction rule. This rule is extremely simple; indeed, it is so simple that many students find it somewhat odd. The upshot of this rule is that a person can always weaken a claim that they make by saying that either that claim, or something else, is true. In this proof system, this amounts to the following:

Or Introduction: Given a formula φ at line n, infer either (φ ∨ ψ) or (ψ ∨ φ). Copy the assumptions from line n, then write n, ∨I as the formula's justification.

(2.2.5)
1. M. φ
2. M. (φ ∨ ψ)   1, ∨I
3. M. (ψ ∨ φ)   1, ∨I

The elimination rule for ⊃, stated below, is applied by citing the lines containing (φ ⊃ ψ) and φ, and writing n, m, ⊃E as the justification.

(2.2.6)
1. M. (φ ⊃ ψ)
2. N. φ
3. M, N. ψ   1, 2, ⊃E


Finally, we have to consider the introduction and elimination rules for ≡. These rules are, I think, every bit as clear as the others, so long as we keep in mind the intended meaning of ≡; that is, the formula (φ ≡ ψ) is meant to be understood as "φ is true just in case ψ is true." And the way to understand that statement is as two conditionals: (1) If φ is true, so is ψ. (2) If ψ is true, so is φ. Thus, we can think of the ≡ symbol as being a shorthand for two ⊃ symbols. That is, the formula (φ ≡ ψ) is a shorthand for (φ ⊃ ψ) and (ψ ⊃ φ). This meaning is directly represented in the inference rules for statements involving ≡; that is, the inference rules allow us to move between the two different kinds of formulas.

Equivalence Elimination: Given a proof of (φ ≡ ψ) on line k, infer either (φ ⊃ ψ) or (ψ ⊃ φ). Copy the assumptions from line k, and write k, ≡E as the justification.

(2.2.7)
1. M. (φ ≡ ψ)
2. M. (φ ⊃ ψ)   1, ≡E
3. M. (ψ ⊃ φ)   1, ≡E

The corresponding introduction rule accomplishes the opposite: it says that if we prove both conditionals, then we are justified in asserting the equivalence.

Equivalence Introduction: Given a proof containing (φ ⊃ ψ) at line ℓ, and (ψ ⊃ φ) at line m, infer (φ ≡ ψ). Copy the assumptions from lines ℓ and m, and write for the justification ℓ, m, ≡I.

(2.2.8)
1. M. (φ ⊃ ψ)
2. N. (ψ ⊃ φ)
3. M, N. (φ ≡ ψ)   1, 2, ≡I

What seems to strike people as counterintuitive about the ∨I rule is that there is no restriction whatsoever on what formula you choose for ψ. In fact, ψ can come from anywhere at all in the proof, or it might appear nowhere in the proof at all. But we will see later on that often the key to completing a proof is making a strategically sound choice for what formula to use as ψ. Next, we will look at the elimination rule for ⊃. In some ways, this is the most basic of all proof rules; in fact, older systems of propositional logic often had only two rules, one of which was the one we shall call ⊃E, but which is most often referred to by its Latin name, modus ponens. This inference rule makes perfect sense if we keep in mind that the ⊃ symbol is supposed to represent the phrase "if...then...." So the formula (φ ⊃ ψ) would be understood informally as saying that if φ is true, then ψ must be true. Accordingly, its elimination rule is simply:

Horseshoe Elimination: Given a formula (φ ⊃ ψ) at line n, and φ at line m, infer ψ, copying the assumptions from both n and m, and writing n, m, ⊃E as the justification.

Our last proof rule is perhaps the simplest. It tells us that if I stack two ¬s in front of a formula, I can erase them. It's a very intuitive rule which we use to eliminate redundant strings of negations from the beginning of formulas:

Negation Elimination: Given a proof containing ¬¬φ at line k, derive φ. Copy the assumptions from line k, and write k, ¬E for the justification.

(2.2.9)
1. 1. ¬¬φ
2. 1. φ   1, ¬E

You will notice that in this proof rule's justification, we have a line number appear in parentheses. This number represents the assumption that was necessary for the use of this rule; and it also indicates that this assumption is being erased. The natural question here is why we would ever be justified in erasing an assumption from a proof. But if we attend closely to the intended meaning of ⊃, the answer is fairly clear. When we assert a formula of the form (φ ⊃ ψ), we are saying that if φ were true, then ψ would be true. We are emphatically not saying that φ is, indeed, true; we are only considering what would be the case were φ true. And this makes perfect sense, for we frequently make such claims even when we know for a fact that the statement in question is false. For example, I might say, "if I were very tall, then I could reach the top shelf," and this would be true in spite of the fact that I am not very tall. It is still perfectly meaningful (and true), even if I know, and am perfectly aware of the fact, that I am not very tall. There are two more such proof rules to cover in this section, and they also require us to assume certain formulas "for the sake of argument," so to speak. And like the previous rule, they also allow us to erase those temporary assumptions when the rule is used. The next rule is the one for eliminating statements involving ∨ as the main connective. The intuitive idea behind this rule is that if we know that the very same consequence follows from each of two possibilities, and we know that at least one of those two possibilities is true, then we may infer that the consequence is true. For example, I recall an incident where one of my professors went to the doctor with a rash. The doctor told him that the rash could have two different causes, and he thereby wanted to perform a test that would indicate which one was the cause.
My professor asked him what the treatment would be if the test indicated that the first one was the cause, and the doctor told him. Then he asked what the treatment would be if the second was the cause, and the doctor told him that it would be the very same treatment. My professor, having long been acquainted with this particular proof rule, suggested that instead of performing the test (which was expensive), couldn't he simply prescribe the treatment, since we already know that the treatment will be indicated by the test no matter how it turns out. The doctor thought this was a very clever idea. I think you will see this line of reasoning in the following:

Or Elimination: If (φ ∨ ψ) appears at line k, φ is assumed at line ℓ, ψ is assumed at line m, and χ appears at lines n and o,

2.3. The Harder (more interesting) Rules

Now we are ready to consider the rules that crucially involve the assumption list in a way that the previous rules do not. The feature that these rules have in common is that they involve temporarily assuming certain formulas, and then eliminating those assumptions after a particular goal has been reached. The best analogy to informal reasoning might be when we say that we assume something "for the sake of argument" or "merely hypothetically." When we say, "assume such-and-such for the sake of argument," we do not mean for our listener to actually believe that particular assertion, and we do not mean to indicate that we believe it, either. Rather, we usually want to explore what the consequences of that fact would be, if it were true. So, too, when we introduce new assumptions into a proof, we do so for the purpose of exploring what the consequences of it would be, if the assumption were true. Our first example of such a proof rule is the one which makes this method most explicit; it is the rule we use to introduce conditional statements into a proof, namely:

Horseshoe Introduction: If φ appears as an assumption at line k, and ψ appears in the proof at line ℓ, then infer (φ ⊃ ψ), copying the assumptions from line ℓ, but erasing the assumption k if it is among those assumptions. Write k, ℓ, ⊃I(k) for the justification.

(2.3.1)
1. 1. φ   A
   . . .
3. 1. ψ
4.    (φ ⊃ ψ)   1, 3, ⊃I(1)


then we may infer χ. We copy the assumptions from lines n and o, and write k, ℓ, n, m, o, ∨E(ℓ, m) as the justification.

(2.3.2)
1. M. (φ ∨ ψ)
2. 2. φ   A
   . . .
4. 2. χ
5. 5. ψ   A
   . . .
7. 5. χ
8. M. χ   1, 2, 4, 5, 7, ∨E(2, 5)

assumptions from lines ℓ and m, deleting any occurrence of k from the assumptions list. For the justification, use k, ℓ, m, RAA(k).

(2.3.3)
1. 1. ¬φ   A
   . . .
3. 1. ψ
   . . .
5. 1. ¬ψ
6.    φ   1, 3, 5, RAA(1)

2.4. Examples of Primitive Proofs

It is one thing to understand the proof rules in the abstract, the way I have presented them in the previous section. It is a different thing entirely to become proficient at using them to generate proofs. In this section, we'll see a few examples of propositional logic proofs that use the primitive proof rules. After you have become confident that you can produce these proofs, we will learn how these proofs can be shortened and simplified, while still remaining rigorous.

Example 8. Let us suppose we want to prove the sequent (p ⊃ q) ∧ (q ⊃ r) ⊢ (p ⊃ r). The following is a straightforward proof:

1. 1.    (p ⊃ q) ∧ (q ⊃ r)   A
2. 1.    (p ⊃ q)             1, ∧E
3. 1.    (q ⊃ r)             1, ∧E
4. 4.    p                   A
5. 1, 4. q                   2, 4, ⊃E
6. 1, 4. r                   3, 5, ⊃E
7. 1.    (p ⊃ r)             4, 6, ⊃I(4)


Another fact that strikes many as odd is that this proof rule doesn't allow us to write down any new formulas; it merely copies a formula that has already appeared (twice!). Unlike the previous rules we have seen, the purpose of the step in which ∨E is used is to discharge the assumptions. Really, what we are saying when we use the ∨E rule is, "I have now shown that regardless of which disjunct happens to be true, the very same result follows. So I can safely say that we don't need to make assumptions about which one is true." This assertion is reflected in the rule for ∨E because the rule allows us to erase those two assumptions. And when those assumptions are erased, we represent that fact by writing the assumptions in parentheses at the end of the justification line. The last proof rule we shall discuss here is called RAA, which stands for the Latin phrase reductio ad absurdum. It is also the rule which is used in so-called "proof by contradiction," which is also called an "indirect proof." The phrase "proof by contradiction" is probably the best description of how this proof rule works. When we use it to prove a formula, φ, we begin by assuming that φ is false. Then, we use whatever rules we can to show that we can prove a contradiction. Since contradictions are always false, we know that something has, so to speak, gone wrong when we derived that contradiction. And so we learn that our assumption must have been false; this fact allows us to derive φ, which is what we wanted. We'll present the rule itself first, and then give a simple, informal example.

RAA: If you have assumed ¬φ at line k, and derived the two formulas ψ and ¬ψ at lines ℓ and m, then infer φ. Copy the

(2.4.1)

Now, I'll give a sort of narration of each step of the proof, explaining why each step of the proof appears the way it does.
(1) This is the assumption. I have simply copied the left-hand side of the sequent, which is virtually always the first step of the proof.
(2) The first line has ∧ as its main connective, and I want to break down that formula. So the rule is ∧E.
(3) Same as line 2: I want to get the second conjunct from line 1, so ∧E is the appropriate rule again.


(4) This is the first line where some strategy appears. Looking at the conclusion, which is (p ⊃ r), I notice that I will need to introduce the ⊃ symbol (because that is the main connective). The proof rule, ⊃I, requires that I assume the antecedent (which is p), and try to derive the consequent (which is r). So I know that I need to assume p, which is what I do here.
(5) Even if I don't know what I should do next, it's easy to notice that I haven't done anything with lines 2 or 3 yet. The main connectives of each are ⊃, so the appropriate rule is ⊃E. It requires that I derive the antecedent of the conditional, and I notice that I have derived the antecedent of line 2 already; it is on line 4. So I can use ⊃E to get q, which is here.
(6) Exactly like line 5, I get r the same way.
(7) Remembering back to line 4, I know that I needed to assume p, and derive r. And indeed, I have derived r at line 6. This means that I have everything I need to derive my conclusion, (p ⊃ r). According to the rules, I copy the assumptions from lines 4 and 6, but I erase every occurrence of 4 in that list, so I have crossed it out. I note that the only assumption remaining is 1, which is what I wanted.


Example 9. Let's try another one which uses some of the other proof rules. Here, we'll prove (p ≡ ¬q) ⊢ (q ⊃ ¬p):

(2.4.2)
1. 1.    (p ≡ ¬q)   A
2. 1.    (p ⊃ ¬q)   1, ≡E
3. 1.    (¬q ⊃ p)   1, ≡E
4. 4.    q          A
5. 5.    p          A
6. 1, 5. ¬q         2, 5, ⊃E
7. 1, 4. ¬p         5, 6, 4, RAA(5)
8. 1.    (q ⊃ ¬p)   4, 7, ⊃I(4)

Let's look again at each step.
(1) This, like in the previous proof, is just the left-hand side of the sequent.
(2) So far, I have nothing to work with, so I simply use whatever rule happens to apply to the first line of the proof. Its main connective is ≡, so I get to use the ≡E rule.
(3) The ≡E rule can be used twice, so I am basically repeating the previous line again.
(4) Now I have nothing to work with. I have already used ≡E two times, and I can't use ⊃E without having derived the antecedent of those two conditionals. So I look at the conclusion that I'm trying to prove. It is a conditional, so I need to assume the antecedent and try to get the consequent. The antecedent is q, so I assume it.
(5) The formula q, which I've assumed in the previous line, is not terribly useful yet. So I look again at what I'm trying to prove. If I am to successfully use the ⊃I rule (which is my plan), then I will need to derive ¬p. But the only rule that allows me to derive a negation is RAA, and that rule requires me to assume the opposite of what I'm trying to prove. So I do that here, assuming p.
(6) Now I have enough to work with. I notice that I have both (p ⊃ ¬q) and p, so I can use ⊃E to get ¬q, which is done on this line.
(7) I have dutifully kept in mind that I need to look for a contradiction, because I have assumed p with the intention of using the rule RAA. And indeed, I have both q and ¬q. So I may deploy the RAA by citing the assumption p, plus the lines where q and ¬q appear. This rule allows me to discharge line 5.
(8) Again, I have kept in mind that I wanted to derive (q ⊃ ¬p) by assuming q and deriving ¬p. I have now done so. So I can use the ⊃I rule, and the proof is done.

The next proof is of a formula that is sometimes called the "law of the excluded middle," a name that was coined by Aristotle.2

(2.4.3)
1. 1. ¬(p ∨ ¬p)    A
2. 2. p            A
3. 2. (p ∨ ¬p)     2, ∨I
4. 1. ¬p           1, 2, 3, RAA(2)
5. 1. (p ∨ ¬p)     4, ∨I
6.    ¬¬(p ∨ ¬p)   1, 1, 5, RAA(1)
7.    (p ∨ ¬p)     6, ¬E

2Years ago, I once told my class that it was discovered by a professor named "Harris Doddle." This, of course, was a joke. But after I saw 99% of my class dutifully writing that name in their notes, I decided not to tell them it was a joke. I still feel kind of bad about that.

And last, here's an example of a proof involving the use of ∨E. We'll prove (p ∧ q) ∨ (p ∧ r) ⊢ p:


(2.4.4)

1. 1. (p ∧ q) ∨ (p ∧ r)   A
2. 2. (p ∧ q)             A
3. 2. p                   2, ∧E
4. 4. (p ∧ r)             A
5. 4. p                   4, ∧E
6. 1. p                   1, 2, 3, 4, 5, ∨E(2, 4)

(2.5.1)

1. 1.    (p ≡ q)     A
2. 1.    (p ⊃ q)     1, ≡E
3. 3.    ¬q          A
4. 4.    p           A
5. 1, 4. q           2, 4, ⊃E
6. 1, 3. ¬p          4, 3, 5, RAA(4)
7. 1.    (¬q ⊃ ¬p)   3, 6, ⊃I(3)

1. 1.    (p ≡ q)     A
2. 1.    (q ⊃ p)     1, ≡E
3. 3.    ¬p          A
4. 4.    q           A
5. 1, 4. p           2, 4, ⊃E
6. 1, 3. ¬q          4, 3, 5, RAA(4)
7. 1.    (¬p ⊃ ¬q)   3, 6, ⊃I(3)

Exercise 5. Prove the following sequents:
(1) (p q), p q
(2) (p q) (p q)
(3) (p q) (p q)
(4) (p q) (p q)
(5) (q p) (p q)
(6) (p q) (p q) (p q)

These proofs are extremely similar; in fact, they differ only in that the ps and qs are reversed in lines 2–7. It seems a waste of time to have to repeat the very same lines in proofs over and over again. After all, it is easy to see that if we can produce a particular sequence of proof steps, then we can also produce a sequence that differs only in the fact that the symbols have been uniformly and consistently replaced by a different set of symbols. The rule of Sequent Introduction allows us to use this fact to shorten our proofs. It says that when we prove something once, we can simply cite that proof and essentially use it as a single proof step. In the previous two proofs, the set of lines which could be shortened is 2–7. And the sequent that corresponds to those steps is the following:

(2.5.2)
(p ⊃ q) ⊢ (¬q ⊃ ¬p)

The idea behind our use of Sequent Introduction is that if we can substitute formulas in place of p and q in the formula (p ⊃ q), then we could write the corresponding substitution of (¬q ⊃ ¬p). To state this rigorously, we need the notion of a Uniform Substitution.

2.5. Derived Rules

As we practice writing proofs, it quickly becomes clear that we're repeating many of the same patterns over and over again. For example, consider the following two proofs:

Definition 10. Uniform Substitution: Let φ be an arbitrary formula of propositional logic, with propositional letters {p1, p2, …, pn}. Let Σ be a set of formulas



{φ1, φ2, …, φn}. Then ψ is a uniform substitution of Σ into φ if it results from substituting each φi for each pi. We may represent ψ by φ[p1/φ1, p2/φ2, …, pn/φn]. The formalism of Definition 10 may seem unfamiliar, but it is important to keep in mind the intuitive idea behind it. It merely says that the uniform substitution of a formula is the result of substituting formulas for propositional letters consistently throughout.

Example 11. These are examples of the use of the uniform substitution rule. The original formula is on the left, and the result of a uniform substitution is on the right.

(p q)   ((p q) r)
((p q) p)   (s (p q) s)
((p r) p)   ((p q) (s p)) (p q))

It is useful to go through these formulas to figure out exactly which uniform substitution was used to generate the formulas on the right. We can now give the glorious and time-saving...

Sequent Introduction Rule: If {φ1, φ2, …, φn} ⊢ ψ has been proven, and each χi is the result of the same uniform substitution applied to each φi, then write the corresponding substitution instance of ψ. As justification, cite the lines containing the various χi, plus the sequent itself, or a name of the sequent.

We will look at examples of the use of this rule shortly. But first, here is a list of commonly used sequents that come in very handy when working on proofs:

(2.5.3)
¬(p ∨ q) ⊢ (¬p ∧ ¬q)   DM
(¬p ∧ ¬q) ⊢ ¬(p ∨ q)   DM
¬(p ∧ q) ⊢ (¬p ∨ ¬q)   DM
(¬p ∨ ¬q) ⊢ ¬(p ∧ q)   DM
(p ⊃ q) ⊢ (¬p ∨ q)     Def ⊃
(¬p ∨ q) ⊢ (p ⊃ q)     Def ⊃
(p ⊃ q), ¬q ⊢ ¬p       MTT

These sequents are easier to remember if you have proven them yourself. Before we look at examples, a quick note about the names of these rules is in order. First, several of these rules are referred to as "DM" because these principles are often attributed to an early logician named Augustus DeMorgan in the nineteenth century.3 They express a relationship between "and" and "or." The first one, for example, may be interpreted to say that if neither p nor q is true, then p is not true, and q is not true. The others express similar principles. The rules called "Def ⊃" are so named because they simply express the truth conditions for formulas whose main connective is ⊃. Recall that (φ ⊃ ψ) has the truth-value False only if φ is True and ψ is False.4 Equivalently, (φ ⊃ ψ) is true whenever either φ is False or ψ is True. Looking at the Def ⊃ sequents, you can recognize that this is expressed by the assertion that (p ⊃ q) entails (¬p ∨ q). Finally, the MTT rule is one that is extremely useful. Its name stands for the Latin phrase modus tollendo tollens, which is a close relative of modus ponens. It says that if we know that φ implies ψ, and ψ is false, then so is φ. This is a kind of reasoning we use every day.

Exercise 12. Prove each of the sequents in Table 2.5.3 using only primitive proof rules.
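Definition 10 is easy to implement directly if formulas are represented as nested structures. The following sketch is my own illustration (the tuple representation and function name are not the book's); it substitutes formulas for propositional letters consistently throughout:

```python
def uniform_substitution(formula, mapping):
    """Replace every propositional letter in `formula` by the formula that
    `mapping` assigns to it. Letters are strings; compound formulas are
    tuples whose first element names the connective."""
    if isinstance(formula, str):
        return mapping.get(formula, formula)
    op, *args = formula
    return (op, *(uniform_substitution(a, mapping) for a in args))

# Substituting (p or q) for p in (p and p) yields ((p or q) and (p or q)),
# because the substitution must be made uniformly throughout.
result = uniform_substitution(("and", "p", "p"), {"p": ("or", "p", "q")})
print(result)
```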
For instance, you might know that if the car starts, then it must have gas. So without giving it a second thought, you also know that if the car does not have gas, then it will not start. This is just an informal use of the MTT rule. Now we are ready for an example of the Sequent Introduction rule. We will show that ⊢ ((p ⊃ q) ∨ (q ⊃ p)).

(2.5.4)
1. 1. ¬((p ⊃ q) ∨ (q ⊃ p))    A
2. 1. (¬(p ⊃ q) ∧ ¬(q ⊃ p))   1, DM
3. 1. ¬(p ⊃ q)                2, ∧E
4. 1. ¬(q ⊃ p)                2, ∧E
5. 1. ¬(¬p ∨ q)               3, Def ⊃
6. 1. ¬(¬q ∨ p)               4, Def ⊃
7. 1. p                       5, E
8. 1. ¬p                      6, E
9. 1. ¬¬((p ⊃ q) ∨ (q ⊃ p))   1, 7, 8, RAA(1)
10.   ((p ⊃ q) ∨ (q ⊃ p))     9, ¬E
3He allegedly gave the date of his birth by saying that he was x years old in the year x². If you know that he said this in the nineteenth century, then the date of his birth can be inferred from his statement.
4See page 12.



If you have proven this sequent using only the primitive proof rules, I think you will agree that this is vastly easier. The reason it is easier is that many steps involving introducing and eliminating assumptions have been eliminated; they are hiding in the steps which rely on derived rules.

Exercise 6. Prove each of the following twice: once using only primitive proof rules, and once using derived rules.
(1) ((p p) q)
(2) p, ((q p) s) ((s s) q)
(3) (p q) ((p (s q)) s)





CHAPTER 3

Interlude: The Island of Knights and Knaves


3.1. Smullyan's Island

Students often ask what propositional logic is good for. "Is it useful?", they want to know. My answer is, "Is cotton candy useful? Of course not! But we're not going to stop eating it, are we?" Logic is like cotton candy; it does not need to be useful. Here's one example of how fun propositional logic is. This is a kind of puzzle that is due to Raymond Smullyan, an excellent logician who has become quite well-known for his puzzles.

On a certain island, every person is either a Knight or a Knave. Knights always tell the truth; Knaves always lie. For fun, I like to go around the island asking people questions and trying to figure out who is a Knight and who is a Knave.

Example 13. Suppose you come across two inhabitants of the island, Al and Betty. Al tells you that one of them is a Knight and the other is a Knave. Betty tells you that Al is wrong about that. What are they?

How to solve this puzzle? Let's suppose for a moment that Al is a Knave. Then his statement is false, and so they are the same. That makes Betty a Knave, too. But she is saying that Al is wrong in his statement that they are different, which has turned out to be true. But Knaves never say true things! Therefore, Al must be a Knight. So his statement is true, which means that Betty is a Knave. And indeed, Betty is saying something false, for Al is right in that they are, in fact, different.

That was a pretty easy one. What about:

Example 14. This time, you meet three inhabitants of the island: Joe, Alice, and Zippy. First, you remind yourself that just because someone is named Zippy, that does not automatically make them a Knave. Joe says, "Alice is a knave and Zippy is a knight." Zippy then tells you that either he is a knight, or Joe is a knave (or both). Finally, Alice says, "Don't listen to Zippy; he's a Knave."

This can be solved in much the same manner as the previous puzzle. Just assume that someone is a Knight or a Knave, and see what else must be true as a result of that assumption. If you get a contradiction (for example, a Knight saying something false, or a Knave saying something true), then it must be that your initial assumption was wrong. If you get stuck, just start over and try making a different assumption.

Here's a very short, but interesting one:

Example 15. Bob says "I could claim to be a Knight." What is he? Do we even have enough information to say?

As it turns out, Bob has told us enough. Suppose he were a Knave. Then he must be lying; so contrary to what he said, he could not claim to be a Knight. But Knaves lie, so any Knave could claim to be a Knight. So it must be the case that Bob is actually a Knight.

We might go on to ask whether a Knight can claim to be a Knight. Of course! So can a Knight claim that he can claim to be a Knight? Well, since he can claim to be a Knight, he'd be telling the truth if he said that he could so claim. And since Knights tell the truth, the answer is that a Knight could, indeed, claim to be able to claim to be a Knight.

Example 16. You meet Sara on the island. She says, "I could claim...", but she mumbles a little bit and you can't hear the statement that she said she could claim. Do you know anything about that statement?

Surprisingly, in spite of the fact that you don't know if Sara is a Knight or a Knave, and you didn't hear the statement she said she could claim, you do know that whatever she said, it must be true. To see this, let's give the mumbled phrase a name, X. So Sara said, "I could claim X." We've got four cases to consider: X could be either true or false, and Sara could be either a Knight or a Knave.

(1) Let's start by assuming that X is true, and that Sara is a Knight. Since she's a Knight, and X is true, she could claim X. And since she could claim X, then she's telling the truth when she says that she could. Knights tell the truth, so this could happen. Sara could be a Knight if X is true.
(2) Suppose X is true, but Sara is a Knave. Then she couldn't possibly claim X, because she'd be telling the truth. So if she says that she could claim X, she's lying. But that's okay, because Knaves lie. So this case is possible: Sara could be a Knave if X is true.
(3) Suppose X is false, and Sara is a Knight. A Knight cannot say something false. So Sara would be saying something false if she said that she could claim X. But as a Knight, she can't do this. So this case is impossible.
(4) Last, let's suppose that X is false and Sara is a Knave. Sara could claim X, because Knaves lie. So if she said that she could claim X, she'd be saying something true, which is impossible because she's a Knave. So this case is impossible.

So it turns out that only cases 1 and 2 are possible; and in each of those, X is true. So although we still don't know if Sara is a Knight or a Knave, we do know that X has to be true.

3.2. A Logical Method

As you might suspect, these puzzles can be completely analyzed in propositional logic. It even turns out that the analysis isn't too awfully tedious. Our task is to find some way to represent the puzzles in propositional logic. We choose the representation so that once the puzzles are translated into propositional logic, we can take the formulas representing the information in the puzzle, and deduce the answer. Of course, when I say that we will deduce the answer, what I mean is that we will prove certain formulas which can be translated back into English, and whose translations are the correct answers.

Consider the first puzzle with Al and Betty. Al claims that he and Betty are not the same.
Let's pick a propositional letter a to stand for Al, and we'll use b to stand for Betty. More specifically, we will interpret each letter as representing the statement that the person is a Knight. So on this scheme, the formula ¬a would mean, "Al is a Knave." With this very simple scheme, we can write down translations of their claims:

(1) (a ∧ ¬b) ∨ (¬a ∧ b) : Either Al is a Knight and Betty is a Knave, or Al is a Knave and Betty is a Knight.
(2) ¬a : Al is a Knave.

Although the English phrases are not word-for-word copies of their statements, I think you'll quickly agree that they mean the same thing. We have to ask what we're to do with these translations. The only thing we know for sure about these formulas is that, because they represent the claims of each inhabitant, each is true just in case the speaker is a Knight. In other words:

(3.2.1)
a ↔ ((a ∧ ¬b) ∨ (¬a ∧ b))
b ↔ ¬a

Recall that we have already seen that Al is a Knight and Betty is a Knave. Interestingly, we can now prove that this is the solution: we can either prove (a ∧ ¬b), or we can examine a truth-table. For example:

1.   1.      a ↔ ((a ∧ ¬b) ∨ (¬a ∧ b))    A
2.   2.      b ↔ ¬a                         A
3.   1.      a → ((a ∧ ¬b) ∨ (¬a ∧ b))    1, ↔E
4.   1.      ((a ∧ ¬b) ∨ (¬a ∧ b)) → a    1, ↔E
5.   2.      b → ¬a                         2, ↔E
6.   2.      ¬a → b                         2, ↔E
7.   7.      ¬a                              A
8.   2,7.    b                               6, 7, →E
9.   2,7.    ¬a ∧ b                         7, 8, ∧I
10.  2,7.    (a ∧ ¬b) ∨ (¬a ∧ b)           9, ∨I
11.  1,2,7.  a                               4, 10, →E
12.  1,2.    ¬¬a                             7, 11, RAA(7)
13.  1,2.    a                               12, ¬¬E
14.  1,2.    ¬b                              5, 12, MTT
15.  1,2.    a ∧ ¬b                         13, 14, ∧I

What is particularly cute about doing a proof is that you can reconstruct the same line of reasoning that you might use when solving the puzzle informally. Notice, for instance, that we began informally by assuming that Al was a Knave; then it turned out that the assumption implied something impossible. Similarly, at line 7, we assume ¬a, which just represents the assertion that Al is a Knave. We get


a contradiction from that assumption, so that at line 12, we have inferred that Al cannot actually be a Knave. From there, the rest of the proof goes quickly.

Because proofs and truth-tables correspond so closely, we can also solve the puzzle by examining the truth-table for the two formulas:

a    b    a ↔ ((a ∧ ¬b) ∨ (¬a ∧ b))    b ↔ ¬a
T    T    F                               F
T    F    T                               T
F    T    F                               T
F    F    T                               F


(3.2.3)

You can see from the truth-table that the two formulas representing their claims are true only when a is true and b is false. Since a represents the statement, "Al is a Knight," and b's falsity represents the statement, "Betty is a Knave," this also gives us the correct answer to the puzzle.
Exercise 7. Translate Example 14 into propositional logic, and produce the corresponding proof of the answer, and a truth-table showing the answer.
Exercise 8. Translate Example 15 into propositional logic, and produce the corresponding proof and truth-table. (The translation is a bit tricky.)
Exercise 9. Show, using a truth-table, that whenever a resident of the island says, "I could claim X," then X must be true.
Exercise 10. In order to be a good puzzle, it must be impossible to solve it without examining every character's statement. Figure out how to use truth-tables to determine if a puzzle is good.
Exercise 11. In order to be a tough puzzle, it must be impossible to determine any single character's identity without examining every character's statement. Figure out how to use truth-tables to determine if a puzzle is tough.
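The truth-table scan behind this method is easy to automate. Here is a rough Python sketch (my own illustration; `solve` is an invented helper, not anything from the text) that brute-forces the Al and Betty puzzle using the biconditionals of 3.2.1: a letter is True when that person is a Knight, and each claim must be true exactly when its speaker is a Knight.

```python
from itertools import product

def solve():
    """Return every (a, b) assignment satisfying both formulas of 3.2.1."""
    sols = []
    for a, b in product([True, False], repeat=2):
        al_claim = (a and not b) or (not a and b)  # "one Knight, one Knave"
        betty_claim = not a                        # "Al is wrong", i.e. a Knave
        # A claim holds exactly when its speaker is a Knight.
        if a == al_claim and b == betty_claim:
            sols.append((a, b))
    return sols

print(solve())  # -> [(True, False)]: Al is a Knight, Betty is a Knave
```

The same loop, extended to three letters, handles Exercise 7, and counting how many rows survive is one way to approach Exercises 10 and 11.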




Part II

Pure First-Order Logic

CHAPTER 4

The Language of First-Order Logic

4.1. The Need for First-Order Logic

Propositional logic is, in some ways, a sort of foundation for thinking about logic. We can add features to it in order to generate more complex, but potentially more useful, systems. That is what we shall do in the following chapters. The logic we shall examine is called first-order logic.

In a word, the most prominent shortcoming of propositional logic is that the smallest formulas, the propositional letters, represent entire sentences. But usually, an argument is valid or invalid not because of the relations among the various sentences, but because of the structure of those sentences themselves. For example, consider the following simple argument:
(1) Every dog weighing at least one-hundred pounds is a good dog.
(2) Alexander is a dog weighing at least one-hundred pounds.
(3) Therefore, Alexander is a good dog.
This is obviously a trivial argument, but it adequately illustrates why we need first-order logic. For if we were to try to represent this argument in propositional logic, we run into difficulties. First, proofs in propositional logic proceed by way of rules that depend upon the connectives. But if we look at the argument above, we find that there are no sentences or phrases that correspond to any of the connectives; there is no use of and, or, if-then, and so on. So we really have no material in the argument that would enable us to represent its structure in propositional logic.

But the argument is clearly valid. Why is it valid, despite lacking any connectives? When we look at it in more detail, several features stand out. The argument crucially relies upon the correct use of the word, every. The first sentence makes a claim about a set of objects (big dogs) and claims that every member of that group has a particular property (being a good dog, in this case). The second sentence names a particular dog, and asserts that he is a member of that class. Finally, the third sentence applies to Alexander the generalization given in the first.

First-order logic has the resources to represent all of these operations, and indeed, many more. We shall examine first-order logic in much the same way as we did propositional logic, beginning by defining its language.

4.2. Grammar and First-Order Logic

Unary predicates. First-order logic closely mirrors the structure of sentences in natural language. Sentences of natural language contain several elements that will be explicitly represented in this artificial language. Consider the following sentence:
Every big dog is good.
We can extract many of the elements of first-order logic by considering the various parts of this simple sentence. We will consider them one by one.
(1) The word "every" is (what we shall call) a quantifier. Quantifiers determine whether we are talking about an entire set of things (every, all), or one thing in the set (there is a...). Other phrases may also be considered quantifiers; these are phrases such as "most," "many," "half of," and so on.
(2) Adjectives such as "good" and "big" will be called predicates.
The language of first-order logic formalizes these notions in the following way:
(1) We shall have the two quantifiers ∀ and ∃, which are interpreted as "for all" and "there exists," respectively. They are called the universal quantifier and the existential quantifier.
(2) We shall use capital letters P, Q, R, and so on to stand for properties that objects might have. They are called predicates.
(3) In addition, we shall have lower-case letters near the end of the alphabet, x, y, z, w, …, which play the role of pronouns such as it, she, he, and so on.



(4) Lower-case letters near the beginning of the alphabet, a, b, c, …, will be called constants, and they are interpreted as names of objects.
(5) We will also keep all the usual connectives from propositional logic, which shall have the same interpretation as before.

Example 17. Let's begin with a couple of very simple examples of formulas and their intended interpretations, and then we'll be in a position to introduce the formal rules for forming sentences of first-order logic.

∀x(P x) : Everything has the property P
∃x(P x) : Something has the property P
∀x(P x → Qx) : Everything that has the property P has Q
∃x(P x) → ∀x(P x) : If something has P then everything does

As I mentioned above, we will keep the connectives from propositional logic. However, we will drop the propositional letters from our language. Our formation rules are:
(1) If P is an n-ary predicate and a1, a2, …, an are constants, then P a1 a2 … an is a wff.
(2) If φ is a wff with at least one constant c appearing in it, then ∀x(φ′) is a wff, where φ′ is the result of replacing zero or more instances of c with the variable x, with the condition that x does not already appear in φ.
(3) If φ is a wff with at least one constant c appearing in it, then ∃x(φ′) is a wff, where φ′ is the result of replacing zero or more instances of c with the variable x, with the condition that x does not already appear in φ.
(4) All of the old rules for introducing connectives still hold.1

Example 18. The following is an example of the steps one may take to construct a wff in quantifier logic, with the formula on the left and the number of the appropriate rule on the right:

(4.2.1)
P a                    (1)
Qb                     (1)
¬P a                   (4)
¬P a ∧ Qb             (4)
∀x(¬P x ∧ Qb)         (2)
∃y∀x(¬P x ∧ Qy)       (3)

Binary and up. In the examples above, we have considered only literals such as P a, P b, and so on. But many properties and predicates involve more than one object; and if we look again at rule 1 for forming wffs of first-order logic, we can see that that rule allows for more than one constant to follow a predicate. For instance, we might want to say not only that Charlie is a father, but that Charlie is the father of a particular person, Bob. So we would need (what we shall call) a binary predicate, that is, one that takes two objects instead of one. These are expressed in the same way as so-called unary predicates like the ones above, except that there will be two constants following the predicate. So we would have F cb, perhaps, to represent the sentence, "Charlie is the father of Bob." It is important to emphasize that we are simply interpreting F as meaning "father of"; it is not as if there is any fixed meaning for the predicates. We interpret them as having one meaning sometimes, and another meaning at a different time.

Obviously, the order in which the constants appear will be important; this is as it should be. After all, we would not want to say that Bob is the father of Charlie merely because Charlie is the father of Bob. These are clearly two distinct sentences, and first-order logic will therefore treat P bc as a totally different formula from P cb.

Of course, when we allow our predicates to have two constants, we open the door to having them take as many constants as we like. So we easily have three-place, or ternary, predicates such as P abc. Ternary predicates may be interpreted to represent relationships between three objects, such as "is between," when we say that a is between b and c. Although there is no limit to the number of constants a predicate may take, in practice, we will usually keep the number down to two or three at the most.
1 Those who are familiar with other presentations of quantifier logic may notice that under this scheme, there cannot be a wff with unbound variables. For the purposes of teaching metatheory, this may be a drawback. However, my experience has been that there is little gained by allowing for unbound variables in an introduction to quantifier logic; and those who go on to study metatheory have no trouble adapting to a more complex presentation of the language.
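The quantifier formation rules are purely mechanical: pick a constant, swap in a variable that is not already present, and prefix a quantifier. A toy Python sketch (the string encoding and the `quantify` helper are my own, not the text's notation) retracing a construction like Example 18:

```python
def quantify(wff, const, var, quantifier):
    """Apply formation rule (2) or (3): replace occurrences of a constant
    with a fresh variable, then prefix the chosen quantifier."""
    assert const in wff, "the constant must appear in the wff"
    assert var not in wff, "the variable must not already appear"
    return f"{quantifier}{var}({wff.replace(const, var)})"

step = "(¬Pa ∧ Qb)"                    # built by rules (1) and (4)
step = quantify(step, "a", "x", "∀")   # rule (2)
step = quantify(step, "b", "y", "∃")   # rule (3)
print(step)                            # ∃y(∀x((¬Px ∧ Qy)))
```

A real implementation would parse formulas rather than do naive string replacement, but the two asserted side conditions are exactly the ones the rules impose.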




CHAPTER 5

Models of First-Order Logic

5.1. Definitions and Examples

As before, we shall approach first-order logic from two directions, by giving a semantics and a proof system. However, as you will see, both are significantly more complicated. In particular, the semantics for first-order logic is quite different from the method of truth-tables.

Quantifier logic expresses propositions about objects and what properties they have or lack. Accordingly, models will consist of a set of objects, plus facts about the properties they possess. All of the objects will comprise a set called the universe (sometimes called the universe of discourse), while the facts about their properties will be expressed in other sets, one for each predicate, called extensions. As before, we will look at a simple example first, before giving the formal definitions.

(5.1.1)
U = {a, b, c}
P = {a, b}
Q = {a, c}

Let us call this model M (throughout, I will use upper-case script letters to denote models). In M, there are three objects: a, b, and c. In this universe, there are only two properties, and they are named by the predicates P and Q. The letters following each predicate in the model tell us which objects have that property, so that a and b both have the property P, while a and c have Q. These lists are taken to define the predicates in the model completely; so if an object does not appear in a predicate's extension, then that object does not have the property. So in the model M, we know that c does not have the property P, and b does not have Q.

These basic facts about the objects and their properties are used to generate more complicated facts, and thereby tell us whether more complicated formulas have the values True or False. For example, since we have seen that c does not have P, then the formula P c is false. And because P c is false, then as you might expect, ¬P c is true; that is, negation simply reverses the truth-value of a formula just as it does in propositional logic. Accordingly, Qa is true, which also determines that ¬Qa is false. In this way, we build up the truth-value of a formula by considering each part of the formula, much as we did when we looked at propositional logic and truth-tables. Formally, we get the following rules for doing so:
(1) If P is a unary predicate and a is a constant in U, then P a is true just in case a is in the extension of P. It is false otherwise.
(2) If φ is a wff, then ∀xφ is true just in case every substitution instance of φ in which x is replaced by any constant in U is true.
(3) If φ is a wff, then ∃xφ is true just in case there is at least one substitution instance of φ in which x is replaced by a constant in U which is true.
(4) The truth-values of formulas whose main connective is ¬, ∧, ∨, or → are determined exactly as in propositional logic. See rule 3 on page 12.
It is a good idea to practice this process for more complicated formulas, so:
Exercise 12.
(1) P a P c
(2) P a Qb
(3) ((Qa Qb) (P a P b))

5.2. More Complicated Models

We also need to be able to represent models in which binary, ternary, or n-ary predicates are represented. Therefore, we need a convention for writing the extensions of such predicates. In this text, we'll use angle brackets to do so. So in order to write the extension for a binary predicate P, so that P ab and P aa are true,


Free Logic Now! substitute one constant for the rst x and another constant for the second x, so we cannot say that xQxx is satised by the formula Qab.

Exercise 13. For each formula, determine whether it is true or false in the A quick note for those who are encountering notation like this for the rst time. The following model: order in which the constants are listed in the angle brackets makes a dierence, so a, b does not mean the same thing as b, a . But the order in which the expressions U = {a, b, c, d} in the curly brackets appear does not matter at all. So the expression 5.2.1 is P = { a, a , b, b , c, c , d, d equivalent to: Q = { a, b , b, c , c, d , d, a (5.2.2) P = { a, a , a, b } (1) xP xx So as before, we now consider a very simple model in which P is binary: (2) xQxx (3) yQyy U = {a, b, c} (4) x(P xx Qxc) (5.2.3) P = { a, a , b, b , c, c } So far, the quantied formulas we have looked at have been very simple, with Q = { a, b , b, a } just a single quantier. However, as weve seen, we can construct formulas in which Clearly, the following formulas are all true: there are many quantiers. These are evaluated in models exactly the same way we (1) P aa evaluate the simpler formulas; however, it is a good idea to evaluate these formulas (2) P bb step by step. Lets work with the following model in some examples: (3) Qab U = {a, b, c} (4) Qba Q = { a, a , a, b , a, c , b, c , b, a } But now we should consider whether certain quantied formulas are true, and how we determine their truth values. Here are some examples:

Example 19. xyQxy This formula is true in the model. The main connective is , which is true only if there is some constant that can be uniformly Example. xP xx is true. To see this, we look at all of the uniform substitutions of {a, b, c} into the place where we see the variable x in the formula. Those substituted into x, and which renders the resulting formula true. So try substisubstitutions are P aa, P bb, and P cc. Because a, a , b, b , and c, c are all in the tuting a into x, which gives us the formula, yQay. That formula is true just in case we get a true formula no matter what constant we substitute into y. Since extension of P , the formula is true. there are three constants, we need to see if Qaa, Qab, and Qac are true. And be(1) xQxx is false. Again, we consider all of the uniform substitutions of cause Qaa , Qab , and Qac are all in the extension of Q, the formula is true. So variables into the variable x, and we quickly see that Qaa is false, because xyQxy is true in the model. a, a is not in the extension of Q. Because any formula with x as the main connective is true only if every substitution into x is true, the formula Example 20. xyQxy This formula is false in the model. Again, we work is false. with one quantier at a time, beginning with the main connective. Since the main (2) Indeed, even the much weaker formula xQxx is false. In order to be true, connective is , the formula is false if there is at least one constant we can substitute there would have to be some constant we could substitute into x to yield a into x to yield a false statement. Pick c for this job, so that were looking at the true formula. But neither Qaa, Qbb, or Qcc is true. And it is atly illegal to formula yQcy, which is true just in case there is some constant we can substitute for Draft Page 32 October 27, 2009



y, and yield a true formula. But neither c, a , c, b , nor c, c are in the extension whether the formula is valid. But in the case of rst-order logic, we cant tell how of Q, so there is no such constant. Therefore, the entire formula is false. many models we will have to examine; and instead of having a guarantee that there will be only 2n truth-table rows to examine, there are innitely many dierent rstExample 21. xy(Qxy Qyx) This formula is true in the model. We need order models!2 to nd constants for x and y that result in a true formula when theyre uniformly substituted. Choose a for x, and b for y, which gives us (Qab Qba). That conjunction is true just in case both conjuncts are true; and they are, because a, b is in the extension of Q, and b, a is in the extension of Q as well. 5.3. Validity in First-Order Logic Just as in propositional logic, we will have two parallel concepts of validity and provability. However, both will be somewhat more complex when we are dealing with rst-order logic.1 The denition of validity is closely analogous to the corresponding notion of validity in propositional logic. Recall that in propositional logic, a formula is valid just in case it is true in every row of its truth table. Similarly, we shall say that a formula is valid in rst-order logic just in case it is true in every model. The concept of logical entailment is similarly dened: Definition 22. A formula of rst-order logic is valid just in case, for every rst-order model M which denes an extension for every predicate in , M assigns the value True to . In symbols, we shall write to mean that is valid. A set of formulas logically entails a formula just in case for every model M, if M , then M .

As you might be beginning to suspect, it is signicantly more dicult to tell whether a rst-order formula is valid than it is to tell whether a propositional logic formula is valid. The reason is that when we are determining whether a formula from propositional logic is valid, we have a simple (although sometimes tedious) procedure for doing so we just build the truth-table. Furthermore, we know exactly how long the truth-table will be, so we can tell exactly how much work it will be to determine
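The one-quantifier-at-a-time evaluation used in Examples 19-21 is mechanical enough to code up directly. Below is a minimal Python sketch (my own encoding, not the text's notation: extensions become sets of tuples, and each quantifier becomes one pass over the universe, outermost first):

```python
# The model of Examples 19-21: U = {a, b, c} and a binary predicate Q.
U = {"a", "b", "c"}
Q = {("a", "a"), ("a", "b"), ("a", "c"), ("b", "c"), ("b", "a")}

ex19 = any(all((x, y) in Q for y in U) for x in U)  # ∃x∀yQxy
ex20 = all(any((x, y) in Q for y in U) for x in U)  # ∀x∃yQxy
ex21 = any(any((x, y) in Q and (y, x) in Q for y in U)
           for x in U)                               # ∃x∃y(Qxy ∧ Qyx)

print(ex19, ex20, ex21)  # True False True, matching Examples 19-21
```

Translating ∀ to `all` and ∃ to `any` works only because the universe is finite; the next section explains why no such recipe exists for validity across all models.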
1 In fact, there is a precise notion of what I mean when I say that first-order logic is more complex. As we've seen, there is a totally mechanical procedure for determining whether a formula is valid in propositional logic: we can just build a truth-table and check to see if we get all T's down the appropriate column. However, it has been proven that there is no mechanical procedure at all that will accomplish the same task in first-order logic; and by this, I don't mean that we just haven't come up with one yet; there really isn't any such procedure at all.

2 We will return to this point later, when we look at infinitely large first-order models. And if you go on to study more advanced topics in logic, this crucial difference between propositional and first-order logic turns out to be terribly important.
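This asymmetry can be made concrete: searching finite models can never confirm validity, but it can refute it by finding a single countermodel. A hedged Python sketch (the `countermodel` helper and its encoding are mine, purely for illustration):

```python
from itertools import product

def countermodel(formula, max_size=3):
    """Search models with one unary predicate P over universes of size
    1..max_size; return a falsifying (universe, extension) if one exists.
    Finding nothing proves NOTHING: there are infinitely many models."""
    for size in range(1, max_size + 1):
        universe = list(range(size))
        for bits in product([False, True], repeat=size):
            P = {i for i, b in zip(universe, bits) if b}
            if not formula(universe, P):
                return universe, P
    return None

# ∀xPx -> ∃xPx: no small countermodel (it is in fact valid).
print(countermodel(lambda U, P: not all(x in P for x in U)
                                or any(x in P for x in U)))   # None
# ∃xPx -> ∀xPx: refuted as soon as some but not all objects have P.
print(countermodel(lambda U, P: not any(x in P for x in U)
                                or all(x in P for x in U)))
```

The second search succeeds on a two-element universe, which is exactly the kind of refutation a truth-table gives in the propositional case; the difference is that here no bound on the search is ever enough.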




CHAPTER 6

Proofs in First-Order Logic

6.1. Introduction

In first-order logic, it turns out that the notions of validity and provability coincide perfectly: every valid formula is provable, and vice-versa. And when a set of formulas logically entails another formula, then that formula can be proven from them. In this chapter, we will demonstrate proofs in first-order logic, and look at plenty of examples.

Recall that in propositional logic, we had two rules for each connective: one for introducing the connective, and one for eliminating it. The same will be true here. Furthermore, all of the proof rules from propositional logic are still used in first-order logic; we will not change any of the proof rules at all. And of course, all of the derived rules will work, too. But we have introduced two more connectives, namely, the two quantifiers. So we will need four new proof rules.

There is also an interesting parallel between our new proof rules and some of the old ones. Think back to the rules for →, ∨, and ¬. For each of these, one rule was easy and one was hard. For →I, ∨E, and RAA, we had to make assumptions and then discharge those assumptions. But for →E, ∨I, and ¬¬E, we did not. The same pattern will occur when we look at the quantifiers. Specifically, the rules for ∀I and ∃E will require us to introduce and discharge assumptions, whereas the rules for ∀E and ∃I will not. So just as before, we will begin with the easy rules, and then move on to the harder ones.

6.2. The Easy Rules

It is quite important to keep in mind what the connectives are intended to mean; if this is kept in mind, then it is much easier to remember how the rules are used. So we begin with the rule for universal elimination. Recall that a formula of the form ∀xP x is intended to mean, everything has the property P. So if you learn that everything has a particular property, then you are entitled to claim that any particular object has that property. For example, if I am in a pessimistic mood and I believe that everything is hopeless, then I will also believe that my job prospects are hopeless. Or if I believe that everyone is ineffectual, then I will also believe that Bob is ineffectual. So to put the point in terms of quantifier logic, if I know that ∀xP x is true, then I may infer that P a, P b, and so on are true, too. Formally:

Universal Elimination: If ∀xφ appears at line n in the proof, then write φ[x/c] where c is any constant whatsoever. In other words, you may eliminate the outermost ∀, replacing the variable consistently with any constant whatsoever. Copy the assumptions from line n, and write n, ∀E for the justification.

(6.2.1)
1. M. ∀xP x
2. M. P a    1, ∀E

The other easy proof rule is the one for introducing existential quantifiers; and the informal justification for it is just as intuitive as the one for eliminating universal quantifiers. The rule is intended to capture the inference that if a particular object is known to have some property, then something has that property. For example, if I were to tell you that Alexander is a good dog, then you could infer that something is a good dog (namely, Alexander). Formally, the rule says that if we have a formula with a constant, then we may replace that constant with a (new) variable and put an existential quantifier in front of that formula:

Existential Introduction: If φ occurs in a proof at line n, c is a constant, and x is a variable not occurring in φ, then write ∃xφ[c/x]. Copy the assumptions from line n, and write n, ∃I.

(6.2.2)
1. M. P c
2. M. ∃xP x    1, ∃I
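Both easy rules are purely syntactic substitutions, which a few lines of code make vivid. This is a rough sketch using a throwaway string encoding of my own (not the book's formal notation), assuming formulas shaped like ∀x(Px):

```python
def universal_elim(wff, const):
    """∀E: from ∀x(...) infer the instance with const put in for x."""
    assert wff.startswith("∀"), "rule applies only to universal formulas"
    var, body = wff[1], wff[3:-1]       # ∀x(Px) -> variable 'x', body 'Px'
    return body.replace(var, const)

def existential_intro(wff, const, var):
    """∃I: from a wff about const, infer its existential generalization."""
    assert const in wff and var not in wff
    return f"∃{var}({wff.replace(const, var)})"

print(universal_elim("∀x(Px)", "a"))      # Pa
print(existential_intro("Pc", "c", "x"))  # ∃x(Px)
```

Neither function consults the assumption set; that is precisely what distinguishes these two rules from the harder pair in the next section.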

6.3. The Harder Rules

As promised, this section will explain, and give examples of, the other two proof rules that are part of first-order quantifier logic. Both of these rules are more difficult to understand at an intuitive level. But with enough practice, their meaning becomes quite clear.

Universal introduction. We shall begin with the rule for introducing universal quantifiers. As you will recall, any statement of the form ∀xP x is supposed to mean that every object has the property P. The difficulty with proving a statement of this form is that it says something about all the objects. And because there might be infinitely many objects, statements of that form assert, in a sense, infinitely many facts. But obviously, we cannot give infinitely many proofs; so we need to have a method that accomplishes the work that would be done by infinitely many proofs.

The way that we accomplish this task in first-order logic is by proving what we shall call an arbitrary instance of the formula. We may illustrate this method informally with an example from the Island of Knights and Knaves.

Recall that on the Island, everyone is either a Knight or a Knave; Knights always tell the truth, and Knaves always lie. Suppose we want to prove that any individual from the Island can claim to be a Knight. This is a universally quantified statement; we are asserting a fact about every person. I think you will find the following line of reasoning perfectly natural.

Suppose we pick a random person from the Island of Knights and Knaves; let's give that person a name to make it easier to talk about the example. So we call him Bob. Now, we know that Bob is either a Knight or a Knave. If he were a Knight, then he could claim to be a Knight because that would be the truth; if he were a Knave, then he could claim to be a Knight, because that would be a lie, and Knaves lie. So Bob could claim to be a Knight, regardless.

From this line of reasoning, I claim that every person on the island could claim to be a Knight. But why should this claim be justified? After all, we have only shown that one particular person, Bob, could claim to be a Knight. In our language, we will say that the reason why this proof shows that everyone can claim to be a Knight is because Bob is arbitrary. This means that we never assumed any facts about Bob that wouldn't apply equally well to any member of the Island. So, no matter who we might pick from the Island, we could essentially just repeat the proof, erasing Bob's name, and replacing it with the name of anyone whatsoever.

The universal introduction rule works in exactly the same way; if we want to prove ∀xP x, then we need to prove an arbitrary instance of it, perhaps P c. Then we are allowed to assert ∀xP x, provided that c (like Bob) is arbitrary. In the example above, we said that Bob was an arbitrary choice because we never assumed anything special about him; that is what guarantees that we could repeat the proof with anyone else we choose. So in first-order logic, you won't be surprised that the assumption set plays a major role. We can transform a proof of P c into a proof of ∀xP x, provided that c does not occur in any line mentioned in the assumptions. For example:

(6.3.1)
1. 1. ∀x(P x ∧ Qx)    A
2. 1. P c ∧ Qc        1, ∀E
3. 1. P c              2, ∧E
4. 1. ∀xP x           3, ∀I

Here, we have proven that if everything has both P and Q, then everything has P. It is obviously a simple proof, but it is enough to illustrate the correct use of the ∀I rule. Specifically, look at line 4, where the ∀I rule is used. The assumption set for that line has only the number 1 in it; and on line 1 itself, the constant c does not appear. The fact that the constant used in the transition from line 3 to line 4 does not appear is the key fact justifying the use of the ∀I rule. Formally, the rule is:

Universal Introduction: If φ appears in line n with assumption set M, and φ has some constant c, then write ∀xφ[c/x], where x does not appear in φ. Copy the assumption set M from line n. The step is only valid if c does not appear in any of the lines in M.

In order to illustrate the difference between correct and incorrect use of the ∀I rule, here are two examples. In the first one, the rule is used correctly; in the second, the

Draft

rule is used incorrectly.

∀x(Px → Qx), ∀x(Qx → Rx) ⊢ ∀x(Px → Rx)

(6.3.2)
1.  1.        ∀x(Px → Qx)    A
2.  2.        ∀x(Qx → Rx)    A
3.  1.        Pa → Qa        1, ∀E
4.  2.        Qa → Ra        2, ∀E
5.  5.        Pa             A
6.  1, 5.     Qa             3, 5, →E
7.  1, 2, 5.  Ra             4, 6, →E
8.  1, 2.     Pa → Ra        5, 7, →I(5)
9.  1, 2.     ∀x(Px → Rx)    8, ∀I

Here is a proof in which the rule is used incorrectly:

(6.3.3)
1.  1.     ∀x(Px → Qb)   A
2.  2.     Pa            A
3.  1.     Pa → Qb       1, ∀E
4.  1, 2.  Qb            3, 2, →E
5.  1, 2.  ∀xQx          4, ∀I   BAD!

To see that something has gone terribly wrong in 6.3.3, consider the sequent that has just been allegedly proven:

(6.3.4)   ∀x(Px → Qb), Pa ⊢ ∀xQx

The first premise says something about a particular object named b. Specifically, it says that if anything at all has the property P, then b has the property Q. The second premise says that there is some particular object named a, which has the property P. So the premises only assert facts about two specific objects, a and b. However, the conclusion says that everything has the property Q. But clearly, just because we have learned something about a and b, we have no guarantee about every object! So we ought to suspect right away that the proof is invalid. And indeed, it is. For on line 5, the proof illegitimately uses the ∀I rule. It is illegitimate because, when we look at the assumption set, there is a line listed (namely, 1) in which the constant b appears. So b is not an arbitrary instance; there is something special about it which is asserted in premise 1. So it is flatly illegal to use the formula Qb to infer that everything has the property Q.

In contrast, the proof in 6.3.2 is perfectly valid. When the ∀I rule is deployed at line 9, the assumptions are 1 and 2. Checking those lines, we find that the constant a does not appear there. So when we infer ∀x(Px → Rx) from (Pa → Ra), we can do so because the constant a is indeed arbitrary; the inference relies upon no special facts about a, so it would apply equally well to any object in the domain at all.
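Invalid uses of ∀I, like the one in 6.3.3, correspond to semantically invalid sequents, and for a small finite domain this can be confirmed mechanically. The sketch below is my own illustration (the function name and the dictionary encoding of interpretations are not from the text): it searches every interpretation of P, Q, a, and b over a two-element domain for a countermodel to sequent 6.3.4.

```python
from itertools import product

def countermodel_634():
    """Search a two-element domain for an interpretation making the
    premises of sequent 6.3.4 true and its conclusion false."""
    domain = [0, 1]
    subsets = list(product([False, True], repeat=len(domain)))
    for P_ext, Q_ext, a, b in product(subsets, subsets, domain, domain):
        P = dict(zip(domain, P_ext))
        Q = dict(zip(domain, Q_ext))
        premise1 = all((not P[x]) or Q[b] for x in domain)  # forall x (Px -> Qb)
        premise2 = P[a]                                     # Pa
        conclusion = all(Q[x] for x in domain)              # forall x Qx
        if premise1 and premise2 and not conclusion:
            return {"P": P, "Q": Q, "a": a, "b": b}
    return None

print(countermodel_634() is not None)  # True: a countermodel exists
```

Any interpretation where Qb is true, Pa is true, and Q fails of some other object witnesses the invalidity, which is exactly what the search finds.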

Existential elimination. There is only one more proof rule to learn for pure first-order logic, and that is the rule that eliminates existential quantifiers. The existential elimination rule also makes use of arbitrary instances of formulas, but it employs them in a slightly different way. The best way to think about existential elimination is that when we use it, we create a new name for an object whose identity we do not know. With the arbitrary name in place, we go on to infer more formulas until we have derived a formula that doesn't use the name at all. At that point, existential elimination is used. Like the ∨E rule, it does not allow us to write any new formulas; rather, it acts as a check to ensure that the arbitrary instance really is arbitrary. To illustrate the difference between correctly and incorrectly using an arbitrary instance, we might consider the following story. Suppose a murder has been committed, but the police have no idea who the murderer is. For the sake of being able to talk about the murderer, the police might give him or her a name; say, "Jack." If a detective were to decide to begin the investigation by interviewing everyone in the town named Jack, that would obviously be ridiculous. After all, the name is arbitrary, and no special facts about anyone who happens to be named Jack could possibly be used to validly draw any conclusions about the murderer. However, if the detective were to draw a conclusion that didn't rely at all upon the name the police have chosen, but was instead based upon evidence, that conclusion may be justified. In much the same way, when we use existential elimination, we begin with a claim about some object or other, but we do not know which object that might be. So we temporarily give the object a name, but we have to make sure that nothing depends upon the particular name we happen to have given it. So let us consider a simple proof using the existential elimination rule.
As usual, we will go over the proof, and then formally introduce the rule.

∀x(Px → Qx), ∃xPx ⊢ ∃xQx


(6.3.5)
1.  1.     ∀x(Px → Qx)   A
2.  2.     ∃xPx          A
3.  3.     Pa            A (for ∃E)
4.  1.     Pa → Qa       1, ∀E
5.  1, 3.  Qa            4, 3, →E
6.  1, 3.  ∃xQx          5, ∃I
7.  1, 2.  ∃xQx          2, 3, 6, ∃E(3)

∃x(Px ∨ Qx) ⊢ ∃xPx ∨ ∃xQx

(6.3.6)
1.   1.  ∃x(Px ∨ Qx)    A
2.   2.  Pa ∨ Qa        A (for ∃E)
3.   3.  Pa             A (for ∨E)
4.   3.  ∃xPx           3, ∃I
5.   3.  ∃xPx ∨ ∃xQx    4, ∨I
6.   6.  Qa             A (for ∨E)
7.   6.  ∃xQx           6, ∃I
8.   6.  ∃xPx ∨ ∃xQx    7, ∨I
9.   2.  ∃xPx ∨ ∃xQx    2, 3, 5, 6, 8, ∨E
10.  1.  ∃xPx ∨ ∃xQx    1, 2, 9, ∃E(2)

The proof expresses a simple line of reasoning. If we know that everything that has P also has Q (line 1), and we know that something has P (line 2), then we know that something has Q (line 7). Three lines of this proof are important for illustrating the ∃E rule. The first is line 3, in which we assume that a has the property P. One might wonder why this is an assumption, instead of being justified somehow by line 2, which guarantees that something has the property P. The reason why it is an assumption is that, although we know that something has the property P, we do not know that the object a does. We are temporarily assuming that the object that has P is specifically a; but that is, indeed, an assumption. From that point through line 6, we simply use the old proof rules just as we have before. At line 6, we seem to have proven the formula we want to have, namely, ∃xQx. But this does not complete the proof, because the assumptions on line 6 include 3, which is not a premise. Rather, line 3 is the assumption that was required for the ∃E rule. This assumption is discharged when we write down that formula again on line 7.

Existential Elimination: If ∃xφ is on line n with assumption set M, and the formula φ[c/x] is assumed at line m, and a formula ψ not containing c occurs at line p with assumption set M1, then write ψ with the assumption set M ∪ M1. Erase m from the assumption set, and write n, m, p, ∃E(m). This rule may only be used if (M ∪ M1) − m does not contain any line whose formula contains an instance of c.
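Sequents proved with these rules can also be spot-checked semantically. The following sketch is my own illustration (not part of the text's proof system): it brute-forces every interpretation over small finite domains and confirms that the sequent of 6.3.5, ∀x(Px → Qx), ∃xPx ⊢ ∃xQx, has no countermodel there.

```python
from itertools import product

def valid_635(max_size=4):
    """Check that no interpretation over a domain of up to max_size
    elements makes both premises of 6.3.5 true and the conclusion false."""
    for n in range(1, max_size + 1):
        domain = range(n)
        for P_ext in product([False, True], repeat=n):
            for Q_ext in product([False, True], repeat=n):
                P = dict(zip(domain, P_ext))
                Q = dict(zip(domain, Q_ext))
                prem1 = all((not P[x]) or Q[x] for x in domain)  # forall x (Px -> Qx)
                prem2 = any(P[x] for x in domain)                # exists x Px
                concl = any(Q[x] for x in domain)                # exists x Qx
                if prem1 and prem2 and not concl:
                    return False
    return True

print(valid_635())  # True
```

Of course, an exhaustive search over small domains is evidence, not a proof; the proof in 6.3.5 is what establishes validity for every domain.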

We have already seen a second example of the correct use of existential elimination: the proof in 6.3.6, which establishes the sequent ∃x(Px ∨ Qx) ⊢ ∃xPx ∨ ∃xQx.

6.4. Quantifier Exchange

When quantifier logic was first formulated by Gottlob Frege, it did not have anything corresponding to the existential quantifier. Instead, it had only a universal quantifier. But Frege recognized that the universal quantifier, combined with negation, was sufficient to form expressions that were equivalent to ones with the existential. His observation was that the phrase "there is a such-and-such" has the same meaning as "not everything isn't a such-and-such," which is unwieldy sounding, but nonetheless synonymous. In modern notation (such as the notation we're using here), the following sequents are all provable in quantifier logic:

(6.4.1)
¬∀xPx ⊢ ∃x¬Px        ∃x¬Px ⊢ ¬∀xPx
¬∃xPx ⊢ ∀x¬Px        ∀x¬Px ⊢ ¬∃xPx
∀xPx ⊢ ¬∃x¬Px        ¬∃x¬Px ⊢ ∀xPx
∃xPx ⊢ ¬∀x¬Px        ¬∀x¬Px ⊢ ∃xPx

These quantifier exchange rules may seem difficult to learn, but they are all instances of the same fact; namely, that if you ever have a quantifier with a negation on one side, you can move that negation to the other side of the quantifier, so long as you change the quantifier from an existential to a universal, or vice-versa. Here, we'll
prove a couple of them, leaving the rest as exercises:

∃x¬Px ⊢ ¬∀xPx

(6.4.2)
1.  1.  ∃x¬Px    A
2.  2.  ∀xPx     A (for RAA)
3.  3.  ¬Pa      A (for ∃E)
4.  2.  Pa       2, ∀E
5.  3.  ¬∀xPx    3, 4, RAA(2)
6.  1.  ¬∀xPx    1, 3, 5, ∃E(3)

Here's the second quantifier exchange rule:

∀xPx ⊢ ¬∃x¬Px

(6.4.3)
1.  1.     ∀xPx      A
2.  2.     ∃x¬Px     A (for RAA)
3.  3.     ¬Pa       A (for ∃E)
4.  1.     Pa        1, ∀E
5.  1, 3.  ¬∃x¬Px    4, 3, RAA(2)
6.  1, 2.  ¬∃x¬Px    2, 3, 5, ∃E(3)
7.  1.     ¬∃x¬Px    2, 6, RAA(2)

Exercise 23. Prove the remaining six quantifier exchange rules, using only primitive proof rules.

Like DeMorgan's rules, we will refer to each of the quantifier exchange rules by a single name, QE. So if we want to use any of the quantifier exchange rules, we will cite as justification the line number of the formula to which we are applying the rule, followed by QE.

Quantifier Exchange: If a formula of the form ¬∀xφ, ¬∃xφ, ∀x¬φ, ∃x¬φ, ¬∀x¬φ, or ¬∃x¬φ appears at line n of the proof with assumption set M, we may write ∃x¬φ, ∀x¬φ, ¬∃xφ, ¬∀xφ, ∃xφ, or ∀xφ (respectively). Copy the assumption set from line n, and write n, QE as the justification.

Exercise 14. Prove the following with and without using the QE rules:
(1) ∃x(∀yPy → Px)
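Formula (1) of Exercise 14, ∃x(∀yPy → Px), is true in every interpretation with a nonempty domain, and the quantifier exchange equivalences can likewise be spot-checked by brute force. A small sketch of such a check (my own illustration; the function name is made up):

```python
from itertools import product

def check(max_size=5):
    """On every interpretation over small nonempty domains, verify that
    (i) exists x (forall y Py -> Px) holds, and
    (ii) not-forall x Px and exists x not-Px have the same truth value."""
    for n in range(1, max_size + 1):
        for P_ext in product([False, True], repeat=n):
            P = list(P_ext)
            # exists x such that (all P true) implies P[x]
            ex14 = any((not all(P)) or P[x] for x in range(n))
            if not ex14:
                return False
            # quantifier exchange: ~(forall x Px) iff exists x ~Px
            if (not all(P)) != any(not P[x] for x in range(n)):
                return False
    return True

print(check())  # True
```

The first check makes the usual reasoning concrete: if everything has P, any witness will do; if something lacks P, the antecedent ∀yPy is false, so the conditional is true of that very object.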




Part III

First-Order Logic with Identity and Functions

CHAPTER 7

Identity

7.1. The Need for Identity

There are some basic tasks that can't be accomplished with the resources we have explained so far. Consider the following simple argument, which is not expressible in pure first-order logic.
(1) The first President of the United States had a white wig.
(2) George Washington was the first President of the United States.
(3) Therefore, George Washington had a white wig.
Clearly, the argument is perfectly valid. What makes the argument valid is the fact that the very same person can have two different names. Once we have recognized that two different names refer to the same person, then anything we have asserted using one name can also be asserted using the other name.¹

¹ Professional philosophers will gasp in horror at this last sentence, which is, as we have long since learned from Quine, false. But I'd like to ask permission to leave these issues aside for the time being. By the way, this sentence prevents this text from falling prey to the preface paradox. Clever, eh?

Here is another argument that is equally valid, and which we cannot express in first-order logic.
(1) There are exactly two pigeon-holes.
(2) There are exactly three pigeons.
(3) Each pigeon will fly into one of the pigeon-holes.
(4) Therefore, at least one pigeon-hole will have at least two pigeons.
In order to express this argument, we need to be able to express certain facts about numbers. Specifically, we need the resources to say that there are exactly two or three of something. But there are no available resources in first-order logic for doing this.

It might seem that the two arguments have nothing in common with each other; the first requires some facts about names, and the second requires facts about arithmetic. So one might reasonably suspect that in order to represent these arguments, we need to add two different mechanisms to first-order logic (perhaps something about names, and something else about numbers). But surprisingly, one simple addition will do the job; this is what we will be concerned with in this short chapter.

7.2. Introducing Identity

Let's return to the first argument about George Washington. If we had a constant to be interpreted as George Washington (say, g), and another constant to represent the first President of the United States (say, p), we need to say that g and p refer to the same person. To do this, we merely use the familiar equality symbol, =. So we say that:

(7.2.1)   p = g

We can use the same symbol, in combination with the quantifiers, to represent the assertion that there are (at least) two pigeons. Suppose that Px is to be interpreted as "x is a pigeon." The following formula expresses the fact that there are two pigeons:

(7.2.2)   ∃x∃y(Px ∧ Py ∧ x ≠ y)

Formula 7.2.2 says, first, that there exist an x and y such that they are pigeons. But this does not ensure that there are two pigeons; after all, it might turn out that x and y are really the same pigeon, just as George Washington and the first President of the United States turn out to be the same person. So we simply add the conjunct x ≠ y to rule out the possibility that our two pigeons are the same pigeon.

The observant reader will recognize that Formula 7.2.2 falls short of saying that there are exactly two pigeons. This is because it is perfectly compatible with the
existence of a third, distinct pigeon. So we need to say not only that there are two pigeons, but that there is no third pigeon. That turns out to be easy to do:

(7.2.3)   ∃x∃y(Px ∧ Py ∧ x ≠ y ∧ ∀z(Pz → (z = x ∨ z = y)))

In Formula 7.2.3, the first two conjuncts ("two pigeons") are exactly the same as before. But the subformula with the universal quantifier ("no more than two") is obviously new, and that subformula guarantees that there is no third pigeon. It says that if you were to have some z that's a pigeon, then either z and x are the same, or z and y are. So as you can see, we capture the notion that there are exactly a certain number of things by breaking that assertion into two parts: the first is that there are at least that many things, and the second is that there are no more than that many.

We can expand upon this technique to say that there are any (finite) number of things. For instance, if we wanted to say that there are exactly three pigeons, we would say:

(7.2.4)   ∃x∃y∃z(Px ∧ Py ∧ Pz ∧ x ≠ y ∧ x ≠ z ∧ y ≠ z ∧ ∀w(Pw → (w = x ∨ w = y ∨ w = z)))

As you will notice, Formula 7.2.4 is quite a bit longer than Formula 7.2.3. This is necessary because we have more possibilities that have to be excluded. That is, we need to exclude the possibility that x and y are the same, that x and z are the same, and that y and z are. If we wanted to say that there are exactly four pigeons, the required formula would become quite long and unwieldy (not to mention if we wanted to say that there are 157 of them).

7.3. Semantics for Identity

Recall that in the semantics for pure first-order logic, each model contains two types of information: the universe, and the extensions of the predicates. So a simple model might take the form:

(7.3.1)
U = {a, b, c}
P = {a, b}
Q = {⟨a, b⟩, ⟨b, c⟩}

In order to include facts about identity, we need to add a third component, which will simply list the pairs of objects in the domain that are identical; that is, the pairs ⟨x, y⟩ for which x = y takes the value True. For instance, in order to add to Model 7.3.1 the fact that a = b, we'll just rewrite it as:

(7.3.2)
U = {a, b, c}
P = {a, b}
Q = {⟨a, b⟩, ⟨b, c⟩}
Id = {⟨a, b⟩}

Here, we have added the fourth line, which lists the pairs of constants that are identical. It's important to keep in mind that Id is not a predicate, so it would be nonsense to write "Idxy."

We need to make an important point, now that identity has been introduced into first-order models. It is now possible to write something down which appears to describe a model, but actually doesn't. For example, consider the following, which is not a model:

(7.3.3)
U = {a, b, c}
P = {a}
Q = {⟨a, b⟩, ⟨b, c⟩}
Id = {⟨a, b⟩}

In this example, what we've written down fails to be a model, even though it is almost exactly like 7.3.2; the only difference is that b has been removed from the extension of P. The reason why it fails is because the model has a = b, so every property of a should also be a property of b. However, Pa is true, while Pb is false. In order for this to be a model, we would have to either eliminate ⟨a, b⟩ from Id, add b to the extension of P, or eliminate a from the extension of P.

7.4. Proof Rules for Identity

We now need to define the proof rules which concern the use of identity. Happily, these rules are quite simple, and are easy to understand in an intuitive way. The first rule expresses the simplest fact about identity: that every object is identical with itself:

Reflexivity of Identity: Any formula of the form c = c, where c is a constant, may be asserted at any point in the proof. The assumption set is empty, and the justification is Ref =.

(7.4.1)   1.   c = c   Ref =

The second rule of inference for identity expresses the fact that any two objects that are identical share all their properties. The way that we make use of this fact is by noting that if any formula is true, and it uses one constant, we may replace any occurrence of that constant with another one, so long as we have also established that they are identical to each other. For instance, if we know that Pc, and we have also established that c = d, then we may infer Pd. Formally:

Substitution of Identicals: If c1 = c2 appears at line n with assumption set M, and φ occurs at line m with assumption set N, then φ′ may be asserted, where φ′ is the result of either replacing some or all occurrences of c1 by c2 (or vice-versa) in φ. The assumption set is M ∪ N, and the justification is n, m, =.

(7.4.2)
1.  M.      c1 = c2
2.  N.      Pc1
3.  M ∪ N.  Pc2       1, 2, =

We can now look at a few simple proofs that express some basic facts about identity. First, we shall prove that identity is symmetric; that is, if x = y then y = x.

∀x∀y(x = y → y = x)

(7.4.3)
1.  1.  a = b                   A
2.      a = a                   Ref =
3.  1.  b = a                   1, 2, =
4.      a = b → b = a           3, →I(1)
5.      ∀y(a = y → y = a)       4, ∀I
6.      ∀x∀y(x = y → y = x)     5, ∀I

Similarly, we now prove that identity is transitive; that is, if x = y and y = z, then x = z.

∀x∀y∀z((x = y ∧ y = z) → x = z)

(7.4.4)
1.  1.  a = b ∧ b = c                        A
2.  1.  a = b                                1, ∧E
3.  1.  b = c                                1, ∧E
4.  1.  a = c                                2, 3, =
5.      (a = b ∧ b = c) → a = c              4, →I(1)
6.      ∀z((a = b ∧ b = z) → a = z)          5, ∀I
7.      ∀y∀z((a = y ∧ y = z) → a = z)        6, ∀I
8.      ∀x∀y∀z((x = y ∧ y = z) → x = z)      7, ∀I

7.5. Example: Family Relationships

An interesting fact about logic (and mathematics, generally) is that we can represent complex relationships among a set of objects by reducing those relationships to a simple set of rules. Then, it often turns out that the more difficult and less obvious facts about that set can be derived using only logic. A particularly clear example of this is that we can represent basic facts about family relationships, and derive more complex facts.

So let's begin by considering some simple facts about the relationship between parent and child. For the sake of simplicity, we'll assume that children have only one parent (we can accomplish the same tasks without this simplifying assumption, but the proofs are very unwieldy). Let's interpret Pxy as meaning "x is the parent of y." Then the following true facts about parents can be written in first-order logic with identity:

(7.5.1)
∀x∃yPyx   (Everyone has a parent.)
∀x∀y∀z((Pyx ∧ Pzx) → y = z)   (Everyone has at most one parent.)
¬∃xPxx   (Nobody is their own parent.)
∀x∀y(Pxy → ¬Pyx)   (If x is the parent of y, then y is not the parent of x.)
∀x∀y∀z((Pxy ∧ Pyz) → ¬Pxz)   (If x is the parent of y, and y is the parent of z, then x is not the parent of z.)

I think you will readily agree that all of the facts in 7.5.1 are true. Now, we can define other relationships, such as Grandparent, in terms of the P predicate. So we might ask, what does it mean when we say that a is the grandparent of b? The

answer is that there exists some c such that a is the parent of c, and c is the parent of b. In other words:

(7.5.2)   ∀x∀y(Gxy ↔ ∃z(Pxz ∧ Pzy))

Notice that we have not built into the definition of G any obvious facts that we all take for granted about grandparents. For instance, we have not assumed that it is impossible to be one's own grandparent. However, it turns out that we can prove that fact, just using the definition of G, and the facts we have asserted about P. The assertion that nobody is their own grandparent can be written simply, ¬∃xGxx, which we can prove like this:

(7.5.3)
1.   1.        ∀x∃yPyx                          A
2.   2.        ∀x∀y∀z((Pyx ∧ Pzx) → y = z)      A
3.   3.        ¬∃xPxx                           A
4.   4.        ∀x∀y(Pxy → ¬Pyx)                 A
5.   5.        ∀x∀y∀z((Pxy ∧ Pyz) → ¬Pxz)       A
6.   6.        ∀x∀y(Gxy ↔ ∃z(Pxz ∧ Pzy))        A
7.   7.        ∃xGxx                            A (for RAA)
8.   8.        Gaa                              A (for ∃E)
9.   6.        ∀y(Gay ↔ ∃z(Paz ∧ Pzy))          6, ∀E
10.  6.        Gaa ↔ ∃z(Paz ∧ Pza)              9, ∀E
11.  6.        Gaa → ∃z(Paz ∧ Pza)              10, ↔E
12.  6, 8.     ∃z(Paz ∧ Pza)                    11, 8, →E
13.  13.       Pab ∧ Pba                        A (for ∃E)
14.  4.        ∀y(Pay → ¬Pya)                   4, ∀E
15.  4.        Pab → ¬Pba                       14, ∀E
16.  13.       Pab                              13, ∧E
17.  13.       Pba                              13, ∧E
18.  4, 13.    ¬Pba                             15, 16, →E
19.  4, 13.    ¬∃xGxx                           17, 18, RAA(7)
20.  4, 6, 8.  ¬∃xGxx                           12, 13, 19, ∃E(13)
21.  4, 6, 7.  ¬∃xGxx                           7, 8, 20, ∃E(8)
22.  4, 6.     ¬∃xGxx                           7, 21, RAA(7)

The following are also provable in much the same way:
(1) ∀x∃y(Gyx)   (Everyone has a grandparent.)
(2) ∀x∀y∀z((Gyx ∧ Gzx) → y = z)   (Everyone has at most one grandparent.)
(3) ∀x∀y(Gxy → ¬Gyx)   (If x is y's grandparent, then y is not x's grandparent.)

Exercise 15. Prove (1)-(3) above by assuming the formulas in 7.5.1 and 7.5.2 above.
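Definition 7.5.2 treats G as the relational composition of P with itself, so conclusions like ¬∃xGxx can be spot-checked on concrete data. Here is a sketch with hypothetical family facts (the names and the helper function are mine, not the text's):

```python
def grandparents(parent):
    """G = { <x, y> : there is a z with <x, z> and <z, y> in parent },
    mirroring definition 7.5.2."""
    return {(x, y) for (x, z1) in parent for (z2, y) in parent if z1 == z2}

# Hypothetical parent facts: each pair (x, y) means "x is the parent of y".
P = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}
G = grandparents(P)
print(G)                            # the derived grandparent pairs
print(all(x != y for (x, y) in G))  # True: nobody is their own grandparent
```

As long as the parent data respects the acyclicity facts in 7.5.1, the derived relation will satisfy ¬∃xGxx, exactly as proof 7.5.3 predicts.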



CHAPTER 8

Functions


Part IV

Applications of First-Order Logic

The chapters in this part will demonstrate how to express a chunk of arithmetic and set theory in first-order logic. It will also demonstrate limitations of first-order logic in representing sufficiently rich theories.





Part V

Computation and Decidability

CHAPTER 9

Computable and Uncomputable Sequences


9.1. Introduction

We tend to think of computers as extremely powerful, and it is easy to think that there is nothing that a sufficiently powerful, cleverly programmed, computer cannot do. However, as we will see in this chapter, this is actually false. There are many (indeed, infinitely many) functions that cannot be computed by even an idealized computer. And yet, these functions are perfectly well-defined and easy to understand.

The results in this chapter were accomplished by several individuals, particularly Alan Turing, long before anything like a modern computer had been invented. In fact, it is a striking fact that before there were computers, there was a quite well-developed theory of computation that had already yielded many fascinating and important results. In this chapter, we will look at some of those results, and emphasize their importance for logic. As is our custom, we will motivate and explain these results informally first; then we will rigorously prove all the details.

9.2. Cantor's Diagonal Argument

We begin by considering an extremely simple, but surprisingly subtle, question: What does it mean for two sets to contain the same number of objects? The answer might seem obvious: they have the same number of objects when the number of objects in the first set is the same as the number of objects in the second set. That is, to determine whether two sets have the same number of objects, we count the number of objects in both sets, and if the resulting numbers are equal, then the sets have the same number of objects.

This procedure works perfectly well when we limit ourselves to finite sets, but it fails terribly when we consider infinite sets. For example, consider the set N, which is the set of natural numbers, {0, 1, 2, 3, ...}. Clearly, the set of even numbers, which we will call E, is a subset of N. After all, N contains infinitely many objects (the odd numbers) that are not contained in E. So intuitively, this would seem to imply that the set N has more objects than E. We will use vertical bars to represent the number of objects in a set, so this intuitive claim, in symbols, is |N| > |E|.

But in another sense, this claim that |N| > |E| is quite unclear and problematic. After all, we cannot count the number of objects in either set; both have infinitely many objects. And what sense does it make to say that one infinity is larger than another? What is needed is a criterion for telling whether two sets are the same size, but which does not require the objects in the set to be counted. In addition, this criterion must agree with the obvious fact that finite sets are of equal size if they have the same number of elements. The way this is done is to make use of functions, in the following way:

Definition 24. Two sets A and B are the same size (i.e. are of the same cardinality) if there exists a function f : A → B which is one-to-one.

For example, suppose we have two small sets, A = {a, b, c} and B = {x, y, z}. Clearly, these are of the same cardinality, and Definition 24 agrees. They are the same cardinality because there is some function f that carries each element of A to a unique element of B and which is one-to-one. For example, let f be defined as follows:

(9.2.1)
f(a) = x
f(b) = y
f(c) = z

Clearly, f is one-to-one: every element of B is the image of some element of A, and every element of A has one image in B.

The same criterion applies to infinite sets. So we will now return to the sets N and E. It turns out that these two sets are indeed of the same cardinality, because
the desired function f : N → E can be defined. It is just:

(9.2.2)   f(x) = 2x

To show that f is one-to-one, we need to show that every element of N is carried into some element of E, and that every element of E is the image of some element of N. The first part is obvious: each element n ∈ N is mapped to 2n, which is clearly even, and hence, an element of E. For the second part, we only need to note that every element of E, being an even number, can be written as 2n for some n. Then obviously, n is the element of N that is mapped onto it. So |N| = |E|.

Exercise 16. Using Definition 24, show that if |A| = |B|, and |B| = |C|, then |A| = |C|.

Exercise 17. Let Q+ be the set of positive rationals, that is, Q+ = {n/m | n, m ∈ N+}. Show that |N| = |Q+|.

Exercise 18. Let N+ be the set of natural numbers, without zero. Show that |N| = |N+|.

Exercise 19. Let Z be the set of positive and negative integers. Show that |N| = |Z|.

So far, the examples we have looked at are ones in which the cardinalities of the two sets turn out to be equal. This might cause the unwary reader to believe that this is always the case. But as Cantor showed in his famous diagonal argument, there are many cases in which this is not true. The example which concerned Cantor was the question of whether the set of real numbers R is of the same cardinality as N. His elegant and justly famous proof that |R| > |N| proceeds from the observation, which we shall take for granted here, that every real number can be expressed as an infinitely long sequence of decimals. So for instance, the number 1 can be written as 1.000..., and every rational number can be represented as a repeating sequence of decimals; for instance, 1/3 = .333....

Cantor's proof proceeds by a particularly clever reductio ad absurdum. So he assumes that there is some f : N → R which is one-to-one. If there were such an f, then we could list all of the real numbers in an infinitely long table by writing down the values of f(0), f(1), f(2), and so on. Thus, the table would look like this:

(9.2.3)
f(0)   3 1 4 1 5 ...
f(1)   1 0 0 0 0 ...
f(2)   1 0 1 0 1 ...
f(3)   6 1 8 0 3 ...
f(4)   2 1 7 1 8 ...
...

where the numbers to the right of the horizontal line are the decimal representations of real numbers.

Now, Cantor's diagonal argument gets its name from the next step. He asks us to consider a particular number generated from the table by the following procedure. We start with the digit in the upper-left of the table, and we move down and to the right, writing down each digit we reach. That sequence of digits corresponds to a real number. Now, we take each digit and add 1 to it (unless it is a 9, in which case we write 0). Now, consider that new number. Is it in the original table anywhere? The answer is no, for the following reason. It cannot be the first number on the list, because the first digit is different. Similarly, it cannot be the second number, because the second digit is different. By the same reasoning, it cannot be any number on the list, because it differs from each number in at least one digit.

(9.2.4)
1.   3 1 4 1 5 ...
2.   1 0 0 0 0 ...
3.   1 0 1 0 1 ...
4.   6 1 8 0 3 ...
5.   2 1 7 1 8 ...
diagonal:     3 0 1 0 8
new number:   4 1 2 1 9

But we assumed that the table contains every real number, so we have a contradiction. Our assumption for the reductio was that we can construct the desired function f which is one-to-one between N and R. Therefore, no such f exists, and so |R| > |N|.
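Cantor's construction is completely mechanical on any finite portion of the table: take the diagonal digits, add 1 to each (wrapping 9 to 0), and the result differs from every listed row. A sketch using the digits shown in 9.2.4 (the function name is mine):

```python
def diagonal_number(table):
    """Return the digit sequence that differs from row i at digit i:
    add 1 to each diagonal digit, wrapping 9 around to 0."""
    return [(row[i] + 1) % 10 for i, row in enumerate(table)]

# The five rows of digits from table 9.2.4.
table = [
    [3, 1, 4, 1, 5],
    [1, 0, 0, 0, 0],
    [1, 0, 1, 0, 1],
    [6, 1, 8, 0, 3],
    [2, 1, 7, 1, 8],
]
new = diagonal_number(table)
print(new)  # [4, 1, 2, 1, 9]
# The new number differs from every row at the diagonal position.
print(all(new[i] != row[i] for i, row in enumerate(table)))  # True
```

Since (d + 1) mod 10 is never equal to d, the constructed sequence is guaranteed to disagree with row i in its i-th digit, which is the whole content of the diagonal step.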

9.3. From Cantor to Computers

We have spent time going over the diagonal argument in detail because the line of reasoning innovated by Cantor has been deployed in contexts that are of direct concern to us here. In this brief section, we will look at an informal argument that certain sequences of numbers cannot be output by any computer, even an idealized computer with unlimited memory and time.

Consider any computer that is programmed in a language that has a finite number of words and symbols, which are arranged to form programs of any finite length. This is an extremely modest restriction; indeed, every computer and programming language have this property. Just think of the symbols on the keyboard as the ones that are allowed in the machine's programs. We can make a list of all of the possible computer programs in the following way. First, we write down each possible combination of symbols that could comprise the shortest possible program. We then arrange these programs in alphabetical order (settling on some arbitrary ordering for symbols that are not letters). Then, we continue this process for all possible programs that have one more symbol. In this way, any possible computer program will appear somewhere on the list. Each program will output some sequence of numbers, which we then write down on the list to the right of every program. So our list looks like this:

#1   3 1 4 1 5 ...
#2   1 0 0 0 0 ...
#3   1 0 1 0 1 ...
#4   6 1 8 0 3 ...
#5   2 1 7 1 8 ...
...

where each #i is the listing of a computer program. We now ask whether there is some sequence of numbers that is not output by any program in the list. Having already seen Cantor's diagonal argument, the answer should be obvious. We can construct a sequence of numbers by writing down all of the digits appearing down the diagonal in the list and adding 1 to each digit. By the same argument as before, this number does not appear anywhere on the list. Therefore, it is a sequence that is not output by any possible computer program. We say of such a sequence that it is noncomputable.

9.4. Turing Machines

It is quite a significant result that there are non-computable sequences; and so, the informal proof from the previous section has taught us something important (and perhaps surprising). However, the proof has not told us anything about what sequences, in particular, are non-computable. Indeed, one might wonder whether it would even be possible to describe a sequence that can't be computed. Alan Turing showed that it is possible to specify such sequences. However, he could not leave the notion of "computable" at the somewhat intuitive level that we have left it in the previous section. And of course, because programmable computers had not been invented, he could not rely upon any preexisting notion to suggest the correct concept of "computable." So Turing's first step was to formalize the concept by proposing a mathematical model of a computer. This model has come to be known as a Turing machine, and it has proven to be an extremely useful construction. This section will give a rigorous presentation of the Turing machine, and thereby lay the groundwork for the important results that follow.

A Turing machine has the following components:
(1) An infinitely long tape, which is divided into squares. Each square may be either blank, or contain a symbol. We will refer to each square by giving its position on the tape, so that we may refer to the nth square. The symbol occurring at square n will be denoted G(n).
(2) A machine that occupies one square of the tape at a time. The machine can: (a) read the symbol occurring at the square it occupies, (b) write a new symbol on the square it occupies, or erase the symbol, and (c) move to the right or left one square.
(3) Additionally, the machine may be in any one of a finite number of possible states q1, q2, ..., qn. Turing refers to these states as the machine's m-configuration.
(4) The machine's next action (moving, writing, erasing, or stopping) is determined by its state (or m-configuration) and the symbol at the machine's current square. So it is convenient to say that the pair ⟨qn, G(r)⟩ is the machine's configuration.

(Figure: the machine reads one square of the tape at a time, reading or writing to that square.)

In addition, the machine is programmed with a set of rules describing its behavior when it is in various possible configurations. These rules, in order to deal with the possible configurations and actions, have two components: the first is a configuration, the second is a pair describing an action and a state. For instance, the following is one rule from a possible machine program:

(9.4.1)   δ1 : a; 1 → E; c

This rule says that if the machine is in state a, and there is a 1 at the current square, then erase the symbol and go into state c. We will refer to such rules as transition rules, and each transition rule will (when necessary) be labeled δi for some i. Following Turing's notation, we will use German lower-case letters a, b, c, ... to describe states, and we will use E, P, R, L to refer to the actions of erasing, printing, moving to the right, and moving to the left, respectively. It is also possible that a machine's transition rules will require that the machine perform more than one action when it is in a particular configuration. So transition rules may take the form:

(9.4.2)   δ1 : b; 0 → E, P1, R; a

Here, the rule δ1 says that if the machine is in state b, and 0 is written on the current square, then erase the symbol, print 1 on the square, and then move right one square. In order to specify configurations in which the current square is blank, we will use the symbol ε (think "e" for "empty"). For instance:

(9.4.3)   δ1 : b; ε → P0; c

which specifies that if the machine is in state b and the current square is blank, then print 0 and enter state c. So we can now specify a Turing machine by listing a set of transition rules {δ1, δ2, ..., δn}. We will make the restriction that no two of the δi's can have the same left-hand side; this guarantees that for every configuration, there will be (at most) one unique rule for the Turing machine to follow.

We can now look at a few examples of some simple Turing machines. We shall use the convention of specifying the machine by listing its transition rules, followed by its initial state and position on the tape (if necessary). We shall use script letters such as M, N, and so on to name the machines.

Example 25. It is possible to program a Turing machine to output the sequence 110011001100... quite simply. The idea is that we have four states, which represent whether the machine is to write 0 or 1, and whether it is the first occurrence of that digit, or the second. The transition rules are:

(9.4.4)

Infinitely long tape divided into squares

1 : a; 2 : b; 3 : c; 4 : d;

M1

P 1, R; b P 1, R; c P 0, R; d P 0, R; a

We will specify that the machine starts in state a, and is given a blank tape. It thus moves according to rule 1 , writing 1, entering state b, and moving to the right. The next rule it uses is 2 , since it is now in state b and is on an empty square. So it writes 1 again, and enters state c. The next two stepts are 3 and 4 , in which it writes 0 on the next two squares. Then, it is again in state a, and the process repeats forever.
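Machines like M1 are easy to experiment with on an ordinary computer. The following sketch is an illustration only, not part of the text's formal development; the rule format simply mirrors the δ-rules above, with None standing in for a blank (ε) square.

```python
# A minimal Turing machine simulator (an illustration, not part of the
# formal development above). A rule maps a configuration (state, symbol)
# to a triple (symbol_to_write, move, next_state); None stands for a
# blank square, and move is -1 for L, +1 for R, or 0 for no movement.

def run(rules, state, steps):
    tape = {}                        # sparse tape: position -> symbol
    pos = 0
    for _ in range(steps):
        config = (state, tape.get(pos))
        if config not in rules:      # no applicable rule: halt
            break
        write, move, state = rules[config]
        if write is not None:
            tape[pos] = write
        pos += move
    return tape

# The machine M1 from Example 25: states a-d take turns printing the
# digits of 110011001100... on blank squares, always moving right.
M1 = {
    ('a', None): (1, +1, 'b'),       # delta-1
    ('b', None): (1, +1, 'c'),       # delta-2
    ('c', None): (0, +1, 'd'),       # delta-3
    ('d', None): (0, +1, 'a'),       # delta-4
}

tape = run(M1, 'a', 12)
print([tape[i] for i in range(12)])  # -> [1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0]
```

M2 from Example 26 can be run the same way: a rule such as δ3 becomes ('b', 0): (None, +1, 'b'), and the sparse tape handles the negative positions that M2 writes to on its left.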


Example 26. The next machine writes an endless sequence of 1s to the right, and an endless sequence of 0s to the left.

(9.4.5) M2:
δ1 : a; ε → P0; b
δ2 : b; ε → P1; c
δ3 : b; 0 → R; b
δ4 : b; 1 → R; b
δ5 : c; ε → P0; b
δ6 : c; 0 → L; c
δ7 : c; 1 → L; c
initial state: a
tape: blank

Exercise 20. Explain how M2 (from Example 9.4.5) works, and give a clear interpretation of the meanings of states a, b, and c by explaining what M2 is doing when it is in each of those states.

Examples 9.4.4 and 9.4.5 are quite straightforward: they require very few states and transition rules, and they do not require that the machine write symbols on the tape temporarily, erasing them later. But in order to specify a machine that outputs more complicated sequences, we need to have much more complex transition rules. In contrast, the Turing machine whose instructions are given in Figure 9.4.1 performs a much more complicated task. This machine, which we shall call Mf, calculates the Fibonacci sequence:

(9.4.6) 1, 1, 2, 3, 5, 8, 13, ...

whose nth member is the sum of the previous two elements, and where the first two elements are defined to be 1. The Fibonacci sequence is very interesting in its own right¹ (and even has its own journal, which publishes only papers having to do with the Fibonacci sequence). For our purposes here, it is merely used as a nontrivial example of what can be done with a Turing machine.

We need to be clear on what it means to say that a particular Turing machine calculates something. Clearly, a Turing machine is not like a pocket calculator, with buttons and a display that show obvious numbers and symbols. Instead, in order to claim that a Turing machine calculates a particular function or a particular sequence, we need to establish (what we shall call) a convention for representing the elements of that function or sequence. Specifically, we will often be interested in having the Turing machine calculate a numerical sequence by repeatedly applying arithmetical operations. Therefore, we need to have a convention that tells us how to interpret the machine's tape as a sequence of numbers. Furthermore, we need to define our convention in a way that requires only a finite number of different symbols to represent the numbers, since there are only a finite number of distinct symbols that can be written on the squares of the tape.

Of course, the usual Arabic numbering system is one such convention. We have a finite sequence of symbols (0, 1, 2, 3, ..., 9) and a convention that tells us how to read combinations of these symbols as distinct numbers. Our convention in this case, for so-called base-ten arithmetic, is to count each digit's position from the right-hand side of the sequence, raise 10 to that power, and multiply by the digit itself. We then add these multiplicands together to get the number represented by the sequence. So for example, the number:

1452

is read as

1·10³ + 4·10² + 5·10¹ + 2·10⁰

which sums² to 1,452.

Although we are so used to this particular convention for representing numbers that we apply the rules without thinking about it, we would suspect, correctly, that it would be a tremendous bother to program a Turing machine to do arithmetic with the numbers represented by base-10 Arabic numerals. We will instead adopt a much simpler convention that is quite useful and appropriate for Turing machines, but would be highly inappropriate and unwieldy for daily use. Specifically, we will represent a number n by a sequence of n 1s in a row.

¹An interesting feature of the Fibonacci sequence is that it converges on the so-called golden ratio. Let f(n) be the nth member of the sequence. Then lim_{n→∞} f(n)/f(n−1) = φ, where φ is the unique positive solution to the equation φ = 1 + 1/φ. The number φ is an irrational number approximately equal to 1.6180339887.

²Keeping in mind that n⁰ = 1, for all n ∈ R.


Figure 9.4.1. A Turing machine that calculates the Fibonacci sequence. Its transition rules, grouped by state, are:

a:  1 → P4, R; b    2 → R; a    3 → R; a    5 → ; e
b:  0 → P1; d    1 → R; b    2 → R; b    3 → R; b    5 → R; b
c:  0 → L; c    1 → L; c    2 → ; d    3 → L; c    4 → L; c    5 → L; d
d:  0 → R; d    1 → R; d    2 → R; d    3 → R; d    4 → P1, R; a
e:  1 → L; e    2 → P0, R; e1    3 → P0, L; e    5 → P0, L; e
e1: 0 → P2; s1    1 → R; e1
s1: 1 → R; s1    2 → R; s1    3 → R; s2    5 → R; s1
s2: 0 → P3, R; s3    1 → R; s2    2 → R; s2    3 → R; s2    5 → R; s2
s3: 0 → P4; s4    1 → R; s3
s4: 1 → L; s4    2 → ; a    3 → L; s4    5 → L; s4

So for example, the number 12 would simply be 111111111111. We will also adopt the convention that distinct numbers will be separated by a single 0 between them. So for example, the sequence:

1, 1, 1, 1, 0, 1, 1, 1, 1, 1

would be interpreted as the two numbers 4 and 5.

This gives us a clear way to represent the application of functions, or the generation of sequences of numbers. For example, if we were to claim that a particular Turing machine performed the function of addition on two numbers, we would mean that for any two numbers n and m, a tape which initially began with a sequence of n 1s, a 0, and then m 1s, such as (with n = m = 3):

1, 1, 1, 0, 1, 1, 1

would, after halting, leave the tape with the initial sequence of n 1s, a 0, and m 1s, followed by a 0 and then n + m 1s:

1, 1, ..., 1, 0, 1, 1, ..., 1, 0, 1, 1, ..., 1

Therefore, when we claim that a Turing machine calculates the Fibonacci sequence, we mean that it prints the sequence of digits on the tape that represent consecutive members of the sequence. So the initial portion of the tape, after the machine has run, would be:

1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, ...

representing the members 1, 1, 2, 3, 5, 8, ...

We shall also adopt a simplifying convention that, when specifying a Turing machine, we are permitted to stipulate that before the machine has run, the tape consists of an endless sequence of symbols, with any finite portion of the tape having any particular sequence of symbols we specify. So for example, we will be permitted, when specifying Mf, to stipulate that the tape consists of an endless sequence of 0s, except that the portion of the tape at which the head is initially located will contain the symbols 0, 1, 0, 1 (which are the initial two numbers of the Fibonacci sequence).

Exercise 21. Explain why this simplifying convention is harmless. Specifically, explain why we do not add any capacities to the Turing machine by allowing any finite portion of the tape to be pre-written with an arbitrary sequence of symbols.

Incidentally, we will also adopt the convention that Turing machines, unless otherwise stipulated, will leave their initial input intact, merely appending the result of their calculation to the end of the input on the tape.
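These representation conventions are easy to state as code. In the sketch below (the helper names are our own, not the text's), a number n is a run of n 1s and a 0 closes each number:

```python
# A sketch of the tape conventions above: a number n is a run of n 1s,
# and a single 0 separates (and here, terminates) distinct numbers.

def encode(numbers):
    """Lay a sequence of numbers out as tape symbols."""
    tape = []
    for n in numbers:
        tape.extend([1] * n)
        tape.append(0)               # separator after each number
    return tape

def decode(tape):
    """Read runs of 1s separated by 0s back as numbers."""
    numbers, run = [], 0
    for symbol in tape:
        if symbol == 1:
            run += 1
        else:                        # a 0 ends the current number
            numbers.append(run)
            run = 0
    if run:                          # a run at the very end of the tape
        numbers.append(run)
    return numbers

print(decode([1, 1, 1, 1, 0, 1, 1, 1, 1, 1]))  # -> [4, 5]
print(encode([1, 1, 2, 3, 5, 8])[:17])
# -> [1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0]
```

Decoding the example sequence from the text gives back 4 and 5, and encoding 1, 1, 2, 3, 5, 8 reproduces the initial portion of Mf's output tape.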


Exercise 22. In order to compute the Fibonacci sequence, one merely performs addition on successive pairs of elements of the sequence. This, essentially, is exactly what the machine Mf does. Referring to the transition rules in Figure 9.4.1, identify which transition rules perform addition, and explain briefly how they work.

Exercise 23. Describe informally how one would program a Turing machine to output the sequence of prime numbers.

9.5. An Uncomputable Function

In this section, we will look at a simple example of a function that cannot be computed by any Turing machine. Unlike our earlier proof that uncomputable functions exist, we will be able to specify exactly what this function is. We will not present a completely rigorous proof, but we will present enough detail that anyone wishing to fill in the gaps should be able to do so easily. The function, which we will denote K, computes the so-called Kolmogorov complexity of a finite sequence of symbols.

Definition 27. The Kolmogorov complexity of a finite sequence is the smallest number of transition rules defining the behavior of a Turing machine that outputs exactly that sequence.

For example, suppose we wanted to compute the Kolmogorov complexity of the sequence 1001. Many different sets of transition rules could be used to specify a Turing machine having exactly that output. Suppose that the shortest set of such transition rules was of length 3. Then K(1001) = 3. Thus, a Turing machine that calculates K(x) would begin with the sequence x written on its tape; after running, it would have the number K(x) on its tape.

To prove that K is uncomputable, we need to establish a preliminary result. We must show that there is no upper bound on the complexity of finite sequences. That is:

Theorem 28. For any n ∈ N, there is some sequence s such that K(s) > n.

Proof. Suppose not. Then there is some n ∈ N such that for any finite sequence s, K(s) ≤ n. Thus, every finite sequence can be output by some Turing machine having at most n transition rules. Because there are infinitely many finite sequences of symbols, and only finitely many distinct Turing machines having at most n transition rules, some Turing machine would have to output more than one finite sequence of symbols. However, because each Turing machine in fact outputs only one sequence, this is impossible. Therefore, there is no upper limit to the Kolmogorov complexity of finite sequences.

In order to prove that K is uncomputable, we will assume that it is computable. If it were computable, then we could define another Turing machine that performs other computations (which are clearly computable) in addition to computing K. In particular, one such Turing machine is easily shown to be impossible; therefore, no Turing machine can calculate K. Specifically:

Theorem 29. K is uncomputable.

Proof. Suppose it were computable. Then there is a machine MK that will compute the Kolmogorov complexity of any given sequence of 0s and 1s written on the tape. We now construct a machine that will output a sequence whose Kolmogorov complexity is greater than that machine's own length. And since this is impossible, given the definition of Kolmogorov complexity, this will show that K is uncomputable.

We first note that it is easy to construct a machine MC which will enumerate all possible sequences consisting of 0s and 1s. Therefore, we can combine MK and MC to construct a machine MKC that will successively write every sequence consisting of 0s and 1s, following each such sequence with a number representing the Kolmogorov complexity of that sequence. We further modify MKC so that it stops as soon as it has found a sequence whose Kolmogorov complexity is larger than the length of MKC; by Theorem 28, such a sequence must eventually turn up, so the search terminates. The machine then erases all sequences except that one, and also erases all the Kolmogorov values it has written on the tape. The remaining sequence is then copied to the beginning of the tape, and the machine halts. MKC has thereby output a sequence whose Kolmogorov complexity is greater than the machine's own length. This clearly violates the definition of Kolmogorov complexity, and so K is uncomputable.

9.6. The Halting Problem

Historically, the first noncomputable function to be identified was not K. In the very same paper in which Turing first introduces his Turing machine³, he gives a very interesting example of a noncomputable sequence. The function is now known as the halting problem.

³Of course, Turing himself did not name his construction the "Turing machine." He gave it the much more generic name of an "automatic machine."

The standard presentation of the halting problem today is quite different from the presentation given by Turing. In what follows, we will examine a proof that is much closer to the one he originally gave.

Turing's paper was clearly influenced by Kurt Gödel's seminal paper in which he proved that arithmetic was incomplete. We will later examine that proof in great detail in Part VI. But it was part of the great originality of the Incompleteness Theorems that they were highly self-referential. In short, Gödel described a way in which proofs about mathematics could be mapped onto mathematical operations, and this innovation is the key to the Incompleteness Theorems. Turing, no doubt influenced by Gödel's insight, showed that Turing machines could be used to simulate other Turing machines; in short, the behavior of a Turing machine can be described as the input to a Turing machine. This insight drives Turing's proof.

At the heart of the proof is a demonstration that a particular Turing machine can be constructed; this machine is called the Universal machine. We will denote it U. The behavior of U is that it begins with a set of transition rules listed on its tape, and then U simulates what a Turing machine would do if it were programmed with those rules. The vast majority of the proof consists of showing how U can be constructed.

In modern terms, we may think of U as an emulator, which is quite common in personal computers. The emulator is a program that contains information about other kinds of computers; if a piece of software is given to the emulator, then the computer is able to simulate exactly what that software would do if it were loaded into another computer. Of course, emulators themselves are just pieces of software, so it is possible to run an emulator inside another emulator. Theoretically, there is no limit to the number of copies of emulators that could be run inside of each other on an idealized computer.
A Turing machine, as we have seen, just is an idealized model of a computer. And just as U can be used to simulate any Turing machine, it could also be used to simulate itself. This is the key fact that made it useful for Turing to show that U can be constructed.

As a first approximation of how Turing's proof is done, we will assume (for reductio) that there is a machine, call it H, that can solve the halting problem. Because U is a universal machine, we can consider what U would output if it were simulating itself simulating H. It will turn out that its behavior is contradictory; in particular, we will show that such a machine can neither halt, nor can it run forever. But since, clearly, every machine must either eventually halt, or else run

forever, then we must conclude that H is impossible to construct. Therefore, there is no general solution to the halting problem.

Obviously, there is a lot of work to be done before we can consider the proof in detail. This work is highly detailed, but it is helpful to keep the above proof sketch in mind, so that we can see why these details are important.

Representation. As we have seen, the difficult part of the proof will be showing that U exists. But the very definition of U raises an immediate question: how do we represent a Turing machine in such a way that we can feed it into U? After all, like any Turing machine, U has a set of transition rules and a tape, each of which has only a finite number of symbols. So we will need to define a kind of language by which we can represent any of the infinite variety of Turing machines using only the symbols that are available to U. This section shows how we may accomplish this.

So far, we have been fairly liberal about how the transition rules of a Turing machine are written. We have allowed, for instance, the possibility that a single rule may contain more than one movement instruction. This is certainly a harmless convention, because we could always replace such a rule with one or more new rules, each of which moves the machine's head only one space. But now that we are going to find a standard representation for all Turing machines, it will be much easier if we are very strict about how the transition rules are written. So we will begin by introducing a standard form for writing down all transition rules.

We will construct the standard description in a few simple steps. Recall that transition rules have five components: an m-configuration and a tape symbol, which specify when the rule is to be applied, followed by a new symbol, a movement instruction, and a new m-configuration.
Thus, we can represent every transition rule by a sequence of five elements, which we will write:

qi sj sk M ql

where M is one of L, R, N, which stand for "move left," "move right," and "do not move," respectively. In order to represent a set of transition rules, we list the rules separated by a new symbol. We will use D for this purpose (which stands for "delimiter"). So if a machine has three transition rules, the standard form of those rules will be of the form:

qi1 sj1 sk1 M1 ql1 D qi2 sj2 sk2 M2 ql2 D qi3 sj3 sk3 M3 ql3


In addition, we need to specify the machine's initial m-configuration; so we will put that in front of the sequence, followed again by a D, as in:

qs D qi1 sj1 sk1 M1 ql1 D qi2 sj2 sk2 M2 ql2 D qi3 sj3 sk3 M3 ql3

Clearly, we can also ignore the possibility of a square's being blank; instead we will assume that a convention is in place according to which we may interpret "blank" as some symbol, say 0. Accordingly, we also may assume that every square of the tape has that symbol written on it initially.⁴

⁴If a machine were to require that the tape contain some information at the start of the machine's run, we could just add new states and commands, to be executed by the machine itself, that would write the required information on the tape.

Now we have to face a small problem. Because we are constructing one universal Turing machine that can simulate every possible Turing machine, we know that U will have only a finite number of states and symbols in its transition rules. But if U is to be able to handle any Turing machine whatsoever, then it will have to be able to simulate machines whose set of symbols and states is larger than U's. Thus, we have to be able to represent an unlimited number of possible states and symbols by using only a finite number of symbols in U. Of course, we do this all the time; we have only a finite number of Arabic numerals 0, 1, ..., 9, and they are able to represent an unlimited number of numbers. We will do the same here, but more simply. Specifically, we will think of the states and symbols of Turing machines as numbers, and we will represent a number n by an A followed by n Bs:

A BB...B (n Bs)

The last symbols we will need for the standard description of a Turing machine are the ones we will use to represent the movements of the machine across the tape. It is convenient to use L, R, and N. Transition rules (and the initial state of the machine) are said to be written in standard description when they are represented this way. For example, consider the following machine:

s1; 2 → P3, L; s2
s2; 3 → P2, R; s1
initial state s2

The standard description of this machine would therefore be:

ABB D AB ABB ABBB L ABB D ABB ABBB ABB R AB D
(initial) (first transition) (second transition)

It will be helpful for the remainder of the proof to think of the standard description of a Turing machine as a number. Then we can consider a sequence of Turing machines as if they were a sequence of numbers, and this allows us to consider a set of Turing machines as a set of computable numbers. Translating the standard description of a Turing machine into a number is trivial: we simply replace each symbol in the standard description with a digit.⁵ Then the Turing machine has been specified as a very large number. The particular substitution of symbols by digits is arbitrary. For example, suppose we translate A, B, D, L, R, and N as 1, 2, 3, 4, 5, and 6, respectively. Then the standard description above becomes:

122 3 12 122 1222 4 122 3 122 1222 122 5 12 3
(initial) (first transition) (second transition)

So we shall say that the machine's description number is:

12,231,212,212,224,122,312,212,221,225,123

Clearly, every Turing machine has a unique description number, but not every number is the description number of a Turing machine.

⁵Although this device of translating sequences of symbols into numbers is quite simple, it is the foundation of Gödel's Incompleteness Theorems. Accordingly, these numbers are often called "Gödel numbers."
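The standard description and the description number can both be computed mechanically. The sketch below (the helper names are our own, not the text's) reproduces the worked example:

```python
# A sketch of standard descriptions and description numbers. A state or
# symbol numbered n is written as 'A' followed by n 'B's; rules are
# five-tuples (state, symbol, new_symbol, move, new_state).

def letters(n):
    return 'A' + 'B' * n

def standard_description(initial_state, rules):
    out = letters(initial_state) + 'D'
    for q, s, s_new, move, q_new in rules:
        out += letters(q) + letters(s) + letters(s_new) + move + letters(q_new) + 'D'
    return out

def description_number(sd):
    digit = {'A': '1', 'B': '2', 'D': '3', 'L': '4', 'R': '5', 'N': '6'}
    return int(''.join(digit[c] for c in sd))

# The example machine: s1; 2 -> P3, L; s2 and s2; 3 -> P2, R; s1,
# with initial state s2.
rules = [(1, 2, 3, 'L', 2), (2, 3, 2, 'R', 1)]
sd = standard_description(2, rules)
print(sd)                      # -> ABBDABABBABBBLABBDABBABBBABBRABD
print(description_number(sd))  # -> 12231212212224122312212221225123
```

Ignoring the spacing used for readability in the text, this is exactly the standard description and description number given above.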


The Universal Turing Machine. We are now in a position to begin the proof that the universal Turing machine U exists. The proof will be constructive; that is, it will not merely tell us that U exists, it will tell us exactly how to construct it. As you might suspect, the transition rules for U are fairly complex. So we will allow ourselves to abbreviate the descriptions of U's transition rules by noting that there are certain tasks that a Turing machine can easily be programmed to perform. When we need to specify that U performs one of those tasks, we will not bother to specify exactly what states and transition rules are required; instead, we will assure ourselves that the correct states and transition rules could be given explicitly. These tasks include:


(1) Moving the read head left or right to the nth occurrence of a particular symbol.
(2) Comparing two sequences of symbols and determining whether they are identical.
(3) Finding the first unmarked square on the tape.
(4) Copying a sequence from one location on the tape to another.
(5) Moving a sequence of symbols from one location to another on the tape, erasing the original sequence.
(6) Marking the read head's present position, returning to it, and restoring that square of the tape to its previous symbol.

Exercise 24. Write the transition rules for Turing machines which can accomplish the tasks listed above.

We can now give an overview of how the Universal Turing machine U will simulate the behavior of another machine M. On U's tape, the head will begin on a square marked with a special symbol, which we will write here as ⊳; it marks the beginning of the tape. Following the symbol ⊳, we will have the transition rules of M, written as a standard description. After the transition rules, the tape will be marked with the symbol ⊲ to mark the end of the description of M.

As U runs, it maintains a record of every movement and action performed by M. That is, whenever M would have moved its head, U writes that movement on its tape; whenever M would have changed state, U records the new state by writing it on the tape; and whenever M would have written a symbol on the tape, U records that information as well.

We will therefore need a set of conventions for recording the record of M's movements. We shall use the following procedure for reading and writing this information on the tape:

(1) Each new movement of the machine will be marked by writing the symbol ":".
(2) Next, the state of M at the end of its movement will be written, using the same device of As and Bs as we used in the standard description of the particular machine.
(3) The end of the state will be followed by a D, as in the standard description.
(4) Next, U will write the symbol written by M, using the same notation as in the standard description. If M would not have written anything, then the same symbol as was previously on the square will be copied here.
(5) Finally, the movement of M will be recorded as either L, R, or N.

An example of such a tape is given in Figure 9.6.1. If we were to examine the output of U after a run, and consider only the sequences representing the output that M would have written, then we clearly have a representation that is equivalent to the output of M.

Now, of course, we have to actually demonstrate that the operations of U can, indeed, be programmed into a Turing machine. Before we do this, it will be extremely helpful to establish one more convention. For every symbol which occurs on the tape, we will introduce another symbol which is the same, except that it is followed by a prime. So for each of A, B, D, L, R, and so on, we will have A′, B′, D′, L′, R′, and so on. Throughout its operation, U will often replace a symbol by its primed counterpart. We will call this "marking" the square. Each time this happens, U will eventually go back to the square and restore the original symbol to it. We will call this "restoring" the square.⁶

⁶Turing used a similar convention, but in his presentation, only every other square was used for output; the square immediately to the left of a square was used to "mark" it by writing particular symbols on it. This complicates the presentation a bit, because it requires us to distinguish between the two kinds of squares. But the two conventions are obviously equivalent in every important way.

Exercise 30. Describe the transition rules for a Turing machine that adds two numbers, using the marking convention described above.

Theorem 31. The Universal Turing machine U can be constructed.

Proof. We prove that U exists by describing in detail what the required processes are, and arguing that they are each possible for the machine to perform. We will write U(M) for the universal Turing machine with the rules for M written initially on the tape, using the conventions described above, as depicted in Figure 9.6.1. The machine begins on the square immediately to the left of the symbol ⊳, and its configuration evolves according to the following procedure.

Stage(1): U finds the symbol ⊲, and writes ":" on the square immediately to its right. Then it finds the symbol ⊳ and copies the first sequence ABB...B onto the square to the right of the ":", followed by the delimiter D. This is how U simulates the first m-configuration of M.

[Figure: the tape of U during a run, with labeled regions: "description of machine"; then, for each simulated step, "begin state, movement, and symbol", "state", "movement", "symbol", and "begin next movement".]

Figure 9.6.1. The tape of U during a run.

Stage(2): By assumption, the symbol at the first square of machine M is blank, so U writes the appropriate symbol (recalling that we treat a blank square as a square with a predetermined symbol), followed by N. It then marks the tape symbol that it has just written; this mark represents the position of M on its own tape. Then, U returns to the square immediately to the left of the ":". This way, we have treated the initial movement of M as one in which it simply goes into a particular state, writing nothing, and not moving.

Stage(3): For each subsequent movement of M, we need to determine which transition rule to apply. This is done simply by checking which symbols in the description of M match the appropriate part of the sequence written after the last ":" on the tape. So U searches for the first sequence described between the ⊳ and ⊲ symbols that matches the state and symbol written after the last ":". If it finds none, then U halts. If it does find a match, then U marks it in the description of M.

Stage(4): The first blank square is found, and the machine writes ":" on it. Then, U returns to the marked square and copies the m-configuration to the right of the ":".

Stage(5): The machine finds the sequence representing the state of the tape from the previous move, and it copies that sequence to the first set of blank squares to the right. Recall that one of those squares will already be marked.

Stage(6): The machine finds the L, R, or N to the left. If it is L, then U restores the marked square at the end of the written portion of the tape, and marks the square immediately to its left. It acts analogously if the symbol is an R. If the symbol is N, then the square remains marked as it is. If the symbol was L, and there was no appropriate square to the left of the marked square, then the entire sequence representing the tape of M is shifted one square to the right in order to make room for a new marked square.
If there is no appropriate square to the right of the marked square, and the symbol was R, then the next square is rewritten to the blank symbol, and that square is marked.

Stage(7): The process, beginning at Stage(3), is repeated, with the exception that the marked square on the appropriate portion of U's tape is used to determine the symbol read by M.

Clearly, all of the processes used in the description of U can be accomplished by a Turing machine (see p. 57). Therefore, the universal Turing machine exists.

The Uncomputability of the Halting Problem. We are now in a position to show that there is no solution to the halting problem. We do this indirectly, by assuming that there is a solution (that is, a Turing machine that solves it), and we show that its behavior is contradictory.

Theorem 32. The halting problem is uncomputable.

Proof. Assume that there is a machine which takes the description number of a Turing machine and enters one state if that machine halts, and another state if it does not. Now we specify the Turing machine H as follows. For convenience, we will think of the tape of H as being composed of a set of columns. Let h be the description


number of H. Down the first column, H enumerates all the natural numbers up to and including h. For each number, H determines whether it is the description number of a machine that halts, writing 1 next to each such number, and 0 otherwise. For every number n with a 1 next to it, H invokes a copy of the universal Turing machine U, and simulates the behavior of the machine with description number n. Because each such machine would halt, and there is only a finite number of machines with description numbers less than or equal to h, H must itself halt.

When H reaches h, it simulates its own behavior. So it writes all of the natural numbers less than or equal to h, and repeats the same process. Clearly, this will cause H to fall into an endless loop: for each iteration, it will recopy all of the natural numbers less than or equal to h, and eventually invoke another copy of itself, repeating the same process again. Because this loop will never end, we know that H does not halt. But we have already established that H does halt. This is our desired contradiction, so there is some aspect of H that is impossible. We have already established that U exists, and it is clearly possible for H to list all of the natural numbers less than or equal to h. So the only aspect of H that could be the source of the contradiction is the assumption that it can determine whether a number is the description number of a Turing machine that halts. Therefore, there can be no such machine.

Exercise 25. Let the function f be defined as follows:

f(x) = 1 if x is the description number of a Turing machine which enters an even number of states during its run
f(x) = 0 if it enters an odd number of states, or runs forever


Show that f is an uncomputable function.

Exercise 26. Let the function g be defined as follows:

g(x, y) = 1 if x and y are description numbers of Turing machines which have identical output
g(x, y) = 0 otherwise

Prove that g is an uncomputable function by showing that if you could compute g, then you could compute Kolmogorov complexity (see page 54).

Exercise 27. Prove that g is uncomputable by showing that if you could compute g, then you could solve the Halting Problem.
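Although no machine can decide halting, simulation itself is entirely mechanical, which is the content of the universal machine U. The following sketch (my own illustration; the rule format is a simplification of the quintuples described in the text) simulates any transition table for a bounded number of steps:

```python
def run(rules, steps, state="a", blank="b"):
    """Simulate a Turing machine for at most `steps` moves.

    rules maps (state, symbol) to (symbol_to_write, move, next_state),
    where move is "L", "R", or "N". Returns (tape, halted).
    """
    tape, pos = {}, 0
    for _ in range(steps):
        symbol = tape.get(pos, blank)
        if (state, symbol) not in rules:      # no applicable rule: the machine halts
            return tape, True
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += {"L": -1, "R": 1, "N": 0}[move]
    return tape, False                        # step budget exhausted: no verdict

# A three-rule machine that writes 1, 0, 1 and then halts.
rules = {
    ("a", "b"): ("1", "R", "c"),
    ("c", "b"): ("0", "R", "e"),
    ("e", "b"): ("1", "N", "f"),
}
```

The `False` branch is exactly why a simulator does not solve the halting problem: after any finite budget, "not halted yet" tells us nothing about "never halts".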

CHAPTER 10

Undecidability of First-Order Logic

10.1. Basic Concepts

In his seminal paper, Turing also provided, almost in an off-hand manner, a simple proof that first-order quantifier logic is undecidable. In order to present his proof, we will need to get clear on a few basic concepts.

Decidability is a property that concerns whether there is a certain kind of procedure for settling questions regarding a formal system, such as logic or some particular area of mathematics.

Definition 33. A logic is decidable just in case, for any formula φ, there is a procedure that is guaranteed to answer, within a finite amount of time, whether φ is a theorem of that logic.

For example, propositional logic is decidable because it is possible to provide a truth-table for any propositional formula, and thereby determine whether that formula is valid. And because validity corresponds exactly to provability, we are able to tell whether that formula is a theorem of propositional logic.

We can note that in any of the formal systems we have discussed so far, it is possible to mechanically enumerate every possible proof in that system; we simply list all of the available axioms, apply every inference rule to them in every possible way, and repeat. Thus, one might come to the conclusion that every system is trivially decidable: we just enumerate all of the theorems and check whether the formula in question is among them.

However, we would be mistaken to conclude this, because the following situation is possible. We may enumerate theorems for a very long time but not yet come across the formula in question. At that point, we have no way of determining whether the formula will ever be encountered, or whether it is simply not a theorem at all. One might try to augment this procedure by simultaneously enumerating all of the first-order models. However, some of those models will be infinitely large, and it is not possible to put all of those models in a one-to-one correspondence with the natural numbers (for reasons which exactly parallel Cantor's diagonal argument). So we do not have any way at all of enumerating all of the models. It is for this reason that there is no decision procedure for first-order logic.

10.2. An Analogy

The strategy that Turing employed for proving the undecidability of first-order logic is to show that if there were such a decision procedure, then we could use that procedure to solve the halting problem. But since there is no solution to the halting problem, there must not be a decision procedure for first-order logic, either.

The overarching idea of the proof is that it is possible to specify a set of first-order premises having the property that each line of a proof which proceeds from those premises represents one stage in the evolution of a Turing machine. We can then specify a particular formula which has the property that if it is provable from a set of premises, then the machine halts. Therefore, if there is a way to determine whether a formula is provable, then there would also be a way to show that a Turing machine halts.

We will introduce this approach by way of a simple analogy. This analogy shows exactly how to represent a simple board game in first-order logic. Each step of a proof will correspond to the next state of the board in a game, and there is a solution for this game just in case there is a particular proof in first-order logic.

The board game is quite familiar to many people.¹ It is a triangular board with ten holes, nine of which have pegs in them. On each move, the player may take one peg, jump across another peg, and place the original peg in a hole on the opposite side of the jumped peg. The jumped peg is then removed, and play repeats.

¹It is always on the table at Cracker Barrel restaurants.


The goal is to leave exactly one peg at the end of the game. If this is accomplished, we will say that the player has solved the game. The game is pictured in Figure 10.2.1.

Figure 10.2.1. The peg and board game. Small grey circles indicate where a peg is located. In the play depicted, the peg in hole one jumps over the peg in hole three, and lands in hole six.

We may easily specify a formula in first-order logic which has two interesting properties:

(1) The formula is provable if and only if the game can be solved.
(2) A proof of the formula represents each step in a particular game that ends in the game being solved.

The specification of this formula is quite simple, and yet it does a good job of illustrating the general idea behind Turing's proof of the undecidability of first-order logic. We may construct this formula by noting that the entire state of the board, as well as all of the rules for moving the pegs, can be represented in first-order logic. To do so, we construct a ten-place predicate P which will indicate, by using two constants, whether each of the ten holes in the board has a peg in it. Let us say that the constant 1 will mean that there is a peg in that hole, and 0 will mean that the hole is empty. We will assign numbers to each of the holes in the board, as is depicted in Figure 10.2.1, which will tell us which place in the predicate contains information about the corresponding hole. So for instance, the board on the left side of the figure would be expressed as:

P(1, 1, 1, 1, 1, 0, 1, 1, 1, 1)

because hole six is empty, and the rest have pegs. After one move, the board's state is represented as:

P(0, 1, 0, 1, 1, 1, 1, 1, 1, 1)

which shows that the peg in hole one has been moved (thus, leaving that hole empty), the peg in hole three has been removed, and that there is now a peg in hole six. Of course, at any point in the game at which there is a peg in hole one, a peg in hole three, and no peg in hole six, that particular move could be made. It clearly does not matter whether any other particular hole has a peg or not. Thus, we could express this fact in terms of the P predicate as follows:

(10.2.1) ∀x1∀x2∀x3∀x4∀x5∀x6∀x7 (P(1, x1, 1, x2, x3, 0, x4, x5, x6, x7) → P(0, x1, 0, x2, x3, 1, x4, x5, x6, x7))

It is easy to see that an application of this formula by using universal instantiation and modus ponens would represent the move pictured in the figure above. Because the board has only a finite number of holes and pegs, only a finite number of formulas like 10.2.1 are needed if we are to represent every possible move in the game. Let us call the disjunction of all of those formulas R (for "rules").

We are still missing two components of the formula. First, we need to specify a starting position for the board. Typically, the game begins with hole six empty. So we will need the formula:

P(1, 1, 1, 1, 1, 0, 1, 1, 1, 1)

Let us call this formula S (for "start").

Finally, we need to consider all of the possible ways in which the game can be solved. Solutions are boards in which all but one hole is empty. So the formulas representing these states are:

P(1, 0, 0, 0, 0, 0, 0, 0, 0, 0)
P(0, 1, 0, 0, 0, 0, 0, 0, 0, 0)
P(0, 0, 1, 0, 0, 0, 0, 0, 0, 0)
...

Let us call the disjunction of all ten of these formulas E (for "end"). Now we consider the formula:

(10.2.2) (R ∧ S) → E
The antecedent of Formula 10.2.2 represents the rules of the game, and the initial state of the board. Thus, the consequent is derivable from it just in case there is a set of manipulations of the board, using the rules, which lead to one of the ten possible solutions. Thus, the formula is provable just in case the game can be solved.

The analogy to Turing machines is straightforward. As we have seen, it is possible to represent the evolution of a simple system by using first-order logic, so long as the initial state, and the rules which cause the system to change, can be faithfully represented as first-order formulas. Thus, in order to show that the system in question will enter (or can enter) a particular state, it suffices to prove the appropriate analog of Formula 10.2.2. Turing machines, although they are vastly more complex than the simple peg and board game, are also such a system. Thus, we can take any particular Turing machine and generate the appropriate formula that represents how that Turing machine behaves. And so, we can prove, in first-order logic, whether the machine will enter any particular state.
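The correspondence between proofs and plays of the game can be made concrete. Here is a sketch (my own illustration; the list of legal jumps assumes the hole numbering of Figure 10.2.1, with hole 1 at the apex and rows of 2, 3, and 4 holes below it). Searching the space of board states plays exactly the role that searching for a proof of Formula 10.2.2 plays in the text:

```python
# Jumps (i, j, k): a peg in hole i jumps over a peg in hole j into empty hole k.
JUMPS = [(1, 2, 4), (1, 3, 6), (2, 4, 7), (2, 5, 9), (3, 5, 8),
         (3, 6, 10), (4, 5, 6), (7, 8, 9), (8, 9, 10)]
JUMPS += [(k, j, i) for (i, j, k) in JUMPS]   # every jump also works in reverse

def solvable(pegs, seen=None):
    """True iff some sequence of jumps leaves exactly one peg."""
    seen = set() if seen is None else seen
    if len(pegs) == 1:
        return True
    for i, j, k in JUMPS:
        if i in pegs and j in pegs and k not in pegs:
            nxt = frozenset((pegs - {i, j}) | {k})
            if nxt not in seen:
                seen.add(nxt)
                if solvable(nxt, seen):
                    return True
    return False

START = frozenset(range(1, 11)) - {6}   # the starting board S: hole six empty
```

`solvable(START)` then answers the same question as the provability of (R ∧ S) → E: each recursive call corresponds to one application of a rule formula like 10.2.1.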

10.3. Logical Representation

Of course, it is much more difficult to represent a Turing machine in first-order logic. An important reason why it is more difficult is that Turing machines have an infinitely long tape, and the machine may be scanning any square of that tape at any given moment. But it is possible to do so, and we shall prove this fact now. We shall present a method which takes any Turing machine and yields a formula which is provable if and only if the machine halts. This representation of a Turing machine must keep track of the machine's state, the location of the square it is currently scanning, and the symbols which have been written on the tape. Of course, all of this information will change as the machine works; so we will need a device for identifying which step in the machine's evolution is being described. Finally, we will need the consequent of the formula to be true just in case the machine halts.

First, we will represent the state of the machine by using a set of predicates Si. For each possible state i of the machine, we shall have a predicate Si(x). The argument x represents which step of the machine we are describing. So Si(x) is interpreted as "at step x, the machine is in state i". Similarly, we shall have another predicate R(x, y) which shall represent the fact that the machine is reading square x at step y, and another set of predicates Ti(x, y) which shall mean that square x on the tape has symbol i at step y of the machine's evolution.

One fact that is not obvious is that we will need to add some additional information, using a few more predicates, to the formula, which describe the relationships that may obtain between the various steps in the run of the machine. For example, we will need to be able to say that step x is the one immediately before step y. To do so, we shall introduce a predicate N(x, y) which shall mean that step y occurs immediately after step x. Similarly, we shall use the infix predicate x < y to mean that step x occurs before step y. With these predicates in place, we can begin to construct the required formula. The first part of the formula will be a conjunction that gives us all the facts that we need about how the steps are related to one another. So it will contain the following:

∀x∀y∀z((x < y ∧ y < z) → x < z)   The < relation is transitive
∀x∃yN(x, y)   Every step has a successor
∀x¬N(x, x)   No step is its own successor
∀x∀y(x < y → ¬(y < x))   If x is before y, then y is not before x

Let us refer to the conjunction of these four formulas as Φ. Now we have to represent the initial state of the tape. As before, we arbitrarily choose a symbol, say b, to represent a blank square, and we use a constant 0 to represent the initial step of the machine. Then the following is the appropriate formula:

(10.3.1) ∀xTb(x, 0)

which says that every square has the symbol b written on it as the machine begins its run. Similarly, we also need to say which state the machine is in, and we will stipulate that the machine is scanning square 0 of the tape:

R(0, 0) ∧ Sa(0)

where a is the initial state of the machine.

We shall also represent the fact that each square of the machine has at most one symbol written on it. This ensures that as the machine writes new symbols on the tape, we know that the appropriate squares no longer contain the old symbol. This is easy to do. For every pair of distinct symbols i and j, we need to have the formula:

(10.3.2) ∀x∀y(Ti(x, y) → ¬Tj(x, y))

If there are n possible symbols that can be written on the tape, then there will be n² − n formulas necessary. Let us refer to the conjunction of these formulas, together with Formula 10.3.1, as Σ.
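The count n² − n can be checked directly: Formula 10.3.2 has one instance per ordered pair of distinct symbols. A small sketch (my own illustration, with the formulas rendered as plain strings):

```python
def distinctness_formulas(symbols):
    """One instance of 10.3.2 for each ordered pair of distinct symbols."""
    return [f"∀x∀y(T{i}(x, y) → ¬T{j}(x, y))"
            for i in symbols for j in symbols if i != j]
```

For the three-symbol alphabet b, 0, 1 this yields 3² − 3 = 6 formulas.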


We shall take a moment to note that, like steps, we will also need to say of one square that it immediately comes before another, and so on. So we shall make the N and < predicates do double duty; they will be used to describe relations among the machine's steps, as well as relations among the tape's squares.

Now we need to represent the machine's transition rules. This is the hardest part of representing the Turing machine. We will consider three cases, depending upon whether the transition rule causes the machine to scan the next square to the left, stay on the current square, or to move to the right. We will begin with the first case. Let us say that the transition rule says that if the machine is in state i, and is scanning a square containing symbol j, then the machine should write symbol k, move to the left, and enter state l. All other squares remain unchanged. This is represented as:

(10.3.3) ∀x∀y((Si(x) ∧ Tj(y, x) ∧ R(y, x)) → ∃x′∃y′((N(x, x′) ∧ N(y′, y)) ∧ (Tk(y, x′) ∧ R(y′, x′) ∧ Sl(x′)) ∧ ⋀m ∀z((z < y ∨ y < z) → (Tm(z, x) → Tm(z, x′)))))

(Recall that the large conjunction sign refers to a conjunction whose conjuncts are defined by the next formula and the subscript under the sign.) For each transition rule, there will be one conjunct in the formula's antecedent of the form of 10.3.3. It is easy to construct the other formulas that represent transition rules in which the machine scans the next square to the right, and in which the machine scans the same square.

Exercise 28. Write the two remaining formulas that are analogous to Formula 10.3.3, representing the cases in which the machine scans the square to the right, and in which the machine scans the same square.

Let us call the conjunction of these formulas Δ. The last ingredient that we shall need is a formula which asserts that the machine halts. Recall that a Turing machine halts just in case there is no transition rule corresponding to the machine's current state, and the symbol on the square the machine is scanning. It is easy to examine the set of transition rules and find any such state/symbol pairs that are not represented. Let us call P the set of state/symbol pairs that are represented in the machine's transition rules. So now consider the following formula:

(10.3.4) ∃x∃y ⋁⟨i,j⟩∉P (Si(x) ∧ R(y, x) ∧ Tj(y, x))

This formula asserts that there is some step x and square y such that at step x the machine is scanning square y, the machine is in state i, and the symbol j appears at that square. Since ⟨i, j⟩ ∉ P, the machine must halt. We will use Ω to refer to Formula 10.3.4.

10.4. Undecidability

We may now prove that the following formula is provable only if the corresponding machine halts:

(10.4.1) (Φ ∧ Σ ∧ Δ) → Ω

Theorem 34. Formula 10.4.1 is provable in first-order logic just in case the corresponding Turing machine halts.

Proof. We show this by demonstrating that for any step in M's evolution, M uses a transition rule just in case the corresponding state and symbol can be proven from the assumption that the antecedent of Formula 10.4.1 is true. From this, it follows immediately that if the machine enters a state in which it would halt, then the consequent of Formula 10.4.1 can be proven. We show this by induction on the number of steps.

For the basis step, we show that in the initial stage, the antecedent allows us to prove that the machine is in the proper initial state, and that it is currently scanning a square of the tape that is blank. But since the antecedent contains the corresponding terms as conjuncts in Σ, this is trivial.

For the inductive step, we assume that the tape has an arbitrary (finite) sequence of symbols written on it, that the machine is in state s, is currently scanning square n, has undergone m steps, and that the tape has the symbol o written on the current square. We also assume that the corresponding formulas can be proven from the antecedent of the formula, namely:

R(n, m) ∧ Ss(m) ∧ To(n, m)

Also, we assume that the appropriate Ti(p, m) can be proven for each square p on the tape at step m. We now show that for step m + 1, the same is true.
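The set P used in Formula 10.3.4 is mechanical to compute: it is just the set of state/symbol pairs that appear in the transition table, and the halting configurations are the pairs left over. A sketch (my own illustration, reusing a rule format of (state, symbol) mapping to (write, move, new state)):

```python
def halting_pairs(rules, states, symbols):
    """State/symbol pairs with no transition rule: exactly where the machine halts."""
    covered = set(rules)                       # this plays the role of the set P
    return {(s, m) for s in states for m in symbols} - covered

# A one-rule machine: it only knows what to do in state "a" on a blank "b".
rules = {("a", "b"): ("1", "R", "c")}
```

Every pair returned corresponds to one disjunct of Formula 10.3.4.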


Part VI

Gödel's Incompleteness Theorems

CHAPTER 11

Logicism, Russell, and Hilbert


11.1. Hilbert's Program

We will present a detailed proof of Gödel's Incompleteness Theorems. In order to understand the significance of these theorems, it is necessary to understand logicism, and how logicism figured into the most important unsolved problems at the beginning of the twentieth century...

11.2. Smullyan's Machine

A quite ingenious thought experiment by Raymond Smullyan provides a simple example of how incompleteness can arise in a very simple formal system. Suppose there is a machine with a tape output and a simple keyboard. The machine allows the user to type sequences of symbols with the keyboard; it will print those symbols on the tape only if the sequence is true. Every sequence the machine prints will be true; but we are not guaranteed that it will print every true sequence.

The keyboard has only a few keys. They are: N, P, R, and parentheses. The set of meaningful sequences consists of the following, with the interpretations given:

P(x)      x is printable by the machine.
NP(x)     x is not printable by the machine.
PR(x)     x(x) is printable by the machine.
NPR(x)    x(x) is not printable by the machine.

For example, suppose that you type P into the machine, and the machine prints it. Then you would know the following facts about the machine:

(1) It would be able to print P(P). That sequence asserts that P is printable, which (we are assuming) it is.
(2) Of course, the machine cannot print NP(P), because that would assert that P is not printable. But the machine prints only true statements, so it can't print a statement that says that it can't print P.
(3) The machine can print PR(P). To see this, note that PR(x) asserts that the machine can print x(x). Here, x = P, so PR(P) asserts that the machine can print P(P), which it can (as we showed above).

So now we have something that is quite close to being a formal system. Instead of a formula being provable, we are speaking of a sequence being printable. And just as a formal system has an intended interpretation (truth-tables, first-order models, etc.), so does the machine: namely, the symbols are interpreted as asserting facts about the machine's ability to print. Accordingly, we can ask whether the machine is sound and complete relative to the intended interpretation. Of course, the machine is definitely sound; it is simply stipulated that the machine prints only true sequences. Now we ask whether it is also complete, that is, can it print every true sequence?

The answer is that even with this incredibly simple example, we have enough information to prove that it is not complete. For consider the sequence:

(11.2.1) NPR(NPR)

Recall that NPR(x) asserts that x(x) is not printable. So the sequence 11.2.1 asserts that NPR(NPR) is not printable, and this, of course, is the very same sequence!

For all other expressions that are not meaningful to the machine, it can print some of them, and not print others. You have no information ahead of time regarding which meaningless sequences are printable. But of course, you're totally free to try some of them!

The chain of reasoning which leads us to an incompleteness proof is short, but not simple. We will break it down carefully, because this will provide us with a very high-level overview of the same reasoning that occurs in Gödel's Incompleteness Theorems.
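The self-reference is easy to see mechanically. A sketch (my own illustration, not part of Smullyan's presentation) that spells out what each meaningful sequence asserts:

```python
def meaning(seq):
    """Return what a meaningful sequence asserts, or None if it is meaningless."""
    # Check longer prefixes first so that "NPR(" is not mistaken for "NP(".
    forms = [("NPR(", "{0}({0}) is not printable"),
             ("PR(",  "{0}({0}) is printable"),
             ("NP(",  "{0} is not printable"),
             ("P(",   "{0} is printable")]
    for prefix, template in forms:
        if seq.startswith(prefix) and seq.endswith(")"):
            return template.format(seq[len(prefix):-1])
    return None
```

Applied to the sequence 11.2.1, the function returns a statement about that very sequence, which is the whole trick.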

(1) We consider the question of whether the machine can print NPR(NPR). Suppose that it can.
(2) Because the machine prints only true statements, we know that whatever the sequence asserts must be true.
(3) The sequence asserts that the machine cannot print NPR(NPR). We know this because NPR(x) asserts that it cannot print x(x); here, x = NPR.
(4) But we have assumed that it can print NPR(NPR), and this has led to the conclusion that it cannot print this sequence.

At this point, we might wonder whether the machine has fallen into some kind of paradox; if so, then we cannot learn anything except that the definition of this machine is incoherent. However, this is actually not a paradox at all; there is one circumstance in which its behavior (whatever that behavior might be) regarding the string NPR(NPR) is coherent. Only the following conditions, taken together, are coherent:

(1) The machine cannot print NPR(NPR). Therefore, NPR(NPR) is true.
(2) So the machine has at least one true statement that it cannot print.

This renders the machine sound (because it prints only true statements), but incomplete. In broad outline, this is exactly how the Incompleteness Theorems are proven. Every part of this simple example has a counterpart in the proof:

Sequences of the machine     Formulas of Principia Mathematica
Truth about the machine      Truths of arithmetic
Printable sequences          Provable formulas

What makes this example so (relatively) simple is that Smullyan's machine has certain capacities simply built in. Most importantly, it has the capacity to assert facts about its own printing capabilities. But also, its language has the capacity to assert a sentence that itself asserts a fact about that very sentence; specifically, the machine's language can assert NPR(NPR), which asserts that that very sentence cannot be printed. It was Gödel's quite brilliant insight that arithmetic also has those two capacities: arithmetic proofs can be represented as arithmetic equations, and it is possible to construct a sentence that asserts that it cannot be proven within arithmetic. I hope you agree with me that it is quite shocking that arithmetic can do these things! Accordingly, the vast majority of Gödel's proof is concerned with establishing these powers that arithmetic possesses. As you no doubt suspect, the process of establishing these facts about arithmetic is quite difficult and subtle. It is to that process that we now turn.

CHAPTER 12

The Machinery of Arithmetic

12.1. The Language of Principia Mathematica

As we have seen, Gödel's proof was in response to Russell and Whitehead's Principia Mathematica, which we have not presented here. But fortunately, we have discussed virtually all of the elements of its language that are required to understand the Incompleteness Theorems. Its language is not much more than a combination of first-order logic with identity, Peano arithmetic, and the language of classes. We shall call this system PM.

Logic Plus Classes. Besides the use of classes rather than sets, the other important difference from how we have presented logics earlier is that the presentation here is axiomatic. That is, instead of presenting a natural deduction system as we did in Parts I and II, we will specify the set of provable formulas by giving a small set of axioms, and a small set of rules of inference; the axioms are to be counted as provable, and any formula derivable from any other provable formulas by the rules of inference will also be counted as provable. The axioms and rules of inference are similar to those for propositional and quantifier logic, with a few complications. We will begin with the more familiar axioms:

p → (q → p)
(p → q) → (¬q → ¬p)
((p → q) ∧ (q → r)) → (p → r)
(¬p → p) → p
∀xφ → φ[c/x]   (c a constant)
∀x(φ → ψ) → (φ → ∀xψ)   (x not free in φ)

The rules of inference for this fragment of the system are:

φ, φ → ψ ⊢ ψ      φ ⊢ φ[ψ/p]

which are just the familiar rules of modus ponens and universal substitution. So far, the system is quite familiar. The main complication is the theory of classes, which incorporates Russell's theory of types.

Classes may be considered as a special kind of set, with one small complication. To understand them, we need only to contrast them with sets. Normally, we can construct a set by specifying a domain of objects, and identifying sets as (possibly empty) collections of objects from that domain. So if a domain is {a, b, c, d}, then we may have the sets {a}, {a, c}, ∅, and so on. We also help ourselves to sets of sets, so we would also be able to construct the sets {{a, b}, {b, c}}, {{a}}, and so on. It is in this last respect that classes differ from sets.

Classes are broken down into types, where a class of type 1 is a collection of objects. A class of type 2 is a collection of type-1 classes, and so on. We are expressly prohibited from mixing classes together. So for example, if the objects under consideration are a, b, and c, then we have:

{a, c}                        Type-1 class
{{a}, {a, b}}                 Type-2 class
{{{a}}, {{a}, {a, b}}}        Type-3 class
{a, {b, c}}                   Illegal!

Accordingly, when we have a quantifier, we shall specify what class of objects it quantifies over. For example, we will distinguish between expressions of the form ∀1xPx and ∀2xPx, where we take the first statement as quantifying over Type-1 classes, and the second as quantifying over Type-2 classes. We shall also put subscripts on variables to indicate their class, as in x1, x2, and so on.

Each system that uses sets or classes must have so-called comprehension schema, which are rules for defining classes. Here, we shall use the following comprehension schema:

(12.1.1) ∃xn+1∀xn(xn ∈ xn+1 ↔ a)

where a is any formula which contains only xn free. What 12.1.1 means is that for any formula a with one free variable, there must be a class which consists of all the objects which, when substituted into a, generate a true sentence. This guarantees that for any condition that can be written in the language, there is a class consisting of all the objects that satisfy that condition. You will notice that the subscripts under the x's indicate that the comprehension schema is really an infinite number of schema: for any condition that is satisfied by type-n objects, there is a class of type n + 1 consisting of those type-n objects that satisfy the condition.

Just as 12.1.1 ensures that there exists a class for every specifiable collection of objects (of the immediately preceding class), we need another condition which ensures that any two classes containing exactly the same objects must be identical. In other words, we need to know that classes are determined entirely by their members:

(12.1.2) ∀xn+1∀yn+1(∀zn(zn ∈ xn+1 ↔ zn ∈ yn+1) → xn+1 = yn+1)

We may now state the final rule of inference for the system, which is the following:

(12.1.3) (0 ∈ x2 ∧ ∀x1(x1 ∈ x2 → s(x1) ∈ x2)) → ∀x1(x1 ∈ x2)

The formula 12.1.3 represents the principle of mathematical induction...

Gödel extensively employs the notion of a class sign, which we can also understand by way of a comparison to sets. Consider how sets are usually described symbolically. For example, the set of prime numbers may be described this way:

(12.1.4) S = {x | x ∈ N & x > 1 & ∀y∀z((y, z ∈ N ∧ x = y · z) → (y = x ∨ y = 1))}

In 12.1.4, we define the set S by saying that it is the set of all x such that x satisfies the condition in the box.¹ When we look at the condition in the box, we see that it is an expression with one free variable, namely, x. This is what Gödel means by a class sign: it is the condition that is satisfied by exactly the members of that class.

¹The box, of course, not being part of the notation.

Peano Arithmetic. The axioms governing the natural numbers are the following (writing x′ for the successor of x):

(12.1.5a) x′ = y′ → x = y
(12.1.5b) x + y′ = (x + y)′
(12.1.5c) x + 0 = x
(12.1.5d) ¬(x′ = 0)
(12.1.5e) x · y′ = x + (x · y)
(12.1.5f) x · 0 = 0

Axioms 12.1.5a through 12.1.5f, together with the principle of mathematical induction 12.1.3, allow us to prove important facts about addition and multiplication. It is important, when learning Peano arithmetic, not to take for granted basic facts about arithmetic that have not yet been proven. For example, it would be natural to simply assume that x + y = y + x. This, of course, is true. But it must be proven, and not simply assumed. We begin by proving an important fact about zero.

Theorem 35. 0 + x = x, for all x.

Proof. We prove this by mathematical induction. For the basis case, we show that 0 + 0 = 0, which is immediate by 12.1.5c. Next, we must show that if 0 + a = a, then 0 + a′ = a′:

0 + a′ = (0 + a)′   (12.1.5b)
= a′   (inductive hypothesis)

This completes the inductive step, so 0 + x = x, for all x.

So we now know that zero is a left- and right-hand identity for addition. We now show:

Theorem 36. x′ + y = (x + y)′, for all x and y.

Proof. We must prove this by performing induction. For the base case, we show that x′ + 0 = (x + 0)′, for all x. This is immediate, by (12.1.5c). Next we assume
that for all x, x′ + a = (x + a)′. We need to show that for all x, x′ + a′ = (x + a′)′:

x′ + a′ = (x′ + a)′   (12.1.5b)
= ((x + a)′)′   (inductive hypothesis)
= (x + a′)′   (12.1.5b)

which completes the proof.

Next, we are ready to show that addition is commutative, that is:

Theorem 37. For all x and y, x + y = y + x.

Proof. First, we show that 0 + y = y + 0, for all y. This fact constitutes the basis case for the induction, which will show that x + y = y + x, for all x and y. That 0 + y = y + 0 is immediate, because we have already established in Theorem 35 that 0 + y = y. And of course, we have in Axiom 12.1.5c that y + 0 = y. Together, these give us that 0 + y = y + 0.

Next, we assume that a + y = y + a, and we show that a′ + y = y + a′:

a′ + y = (a + y)′
= (y + a)′
= y + a′

which completes the proof.

We now show that addition is associative, that is:

Theorem 38. For all x, y, and z, (x + y) + z = x + (y + z).

Proof. We show this by induction on the value of y. For the basis case, we need to show that (x + 0) + z = x + (0 + z). This is easy, because (x + 0) = x, and (0 + z) = z. Because x + z = x + z, we can then write (x + 0) + z = x + (0 + z).

For the inductive step, we assume that (x + a) + z = x + (a + z), and we need to show that (x + a′) + z = x + (a′ + z):

(x + a′) + z = (x + a)′ + z
= ((x + a) + z)′
= (x + (a + z))′
= x + (a + z)′
= x + (a′ + z)

which completes the proof.

Just as addition is commutative and associative, multiplication can be shown to be commutative and associative. The proofs of these facts are closely analogous to the corresponding proofs for addition. We begin by showing:

Theorem 39. 0 · x = 0.

Proof. We show this by induction on the value of x. For the basis case, we need to show that 0 · 0 = 0, which is immediate from the axioms. For the inductive step, we assume that 0 · a = 0, and we show that 0 · a′ = 0:

0 · a′ = 0 + (0 · a)
= 0 · a
= 0

which completes the proof.

We will prove one more fact about multiplication, leaving the rest as exercises.

Theorem 40. x′ · y = y + (x · y).

Proof. We show this by induction on the value of y. For the basis case, we need to show that x′ · 0 = 0 + (x · 0). But since we already know that x · 0 = 0 for all x, and 0 + 0 = 0, this is immediate. Next, we assume that x′ · a = a + (x · a), and show that x′ · a′ = a′ + (x · a′):

x′ · a′ = x′ + (x′ · a)
= x′ + (a + (x · a))
= (x + (a + (x · a)))′
= ((x + a) + (x · a))′
= ((a + x) + (x · a))′
= (a + x)′ + (x · a)
= (a′ + x) + (x · a)
= a′ + (x + (x · a))
= a′ + (x · a′)

which completes the proof.

Exercise 30. Show that multiplication is associative, that is, (x · y) · z = x · (y · z).

Exercise 31. Define exponentiation and show that x^a · x^b = x^(a+b).

Expressive Power of Principia Mathematica. With the theory of classes, rst-order quantier logic, and Peano arithmetic, we are able to express an extremely wide range of propositions concerning the arithmetic of natural numbers. These propositions fall into two types. The rst is a self-contained formula with a determinate truth-value; the second is a formula with one or more free variables that takes on a truth-value only when those free variables are replaced by terms. For example, consider the following formula: (12.1.6) x(x 0 = 0 )

Proof. We show this by induction on the value of y. So for the basis case, we need to show that x (0 + z) = (x 0) + (x z). But this is immediate from Axiom 12.1.5c and Theorem 39. For the inductive step, we assume that x (a + z) = (x a) + (x z). We must show that x (a + z) = (x a ) + (x z). x (a + z) = x (a + z) = x + (x (a + z))

This says that there is some x such that x times 2 equals 6. In other words, it asserts the true statement that 2 divides 6. Of course, the value of x that makes Having shown some important facts about addition and multiplication, we may the statement true the so-called witness of it is 3. And indeed, we can prove now show some important facts about the relationships between those two opera- that 3 2 = 6 by using the appropriate axioms and denitions of addition and multiplication. tions. One important such fact is that multiplication distributes over addition: Recall that a prime number is a number other than the number one that has no Theorem 41. Mupliplication distributes over addition that is, x (y + z) = divisors other than one and itself. So we may express the proposition that ve is a (x y) + (x z). prime number by the following formula: Exercise 29. Show that multiplication is commutative, that is, x y = y x. (12.1.7) which asserts that if two numbers multiplied together are equal to ve, then one of them has to be equal to one. Clearly, this takes care of the possibility that one of them will be equal to ve; for if one of them is equal to ve, then the other must be equal to one. By replacing the numeral in 12.1.7 with a free variable, and accounting for the fact that the number one is not prime, we can dene the class of primes as follows: (12.1.8) xy(x y = z (x = 0 y = 0 ) z = 1 xy(x y = 0 (x = 0 y = 0 )

= x + ((x a) + (x z)) = (x + (x a)) + (x z) = (x a ) + (x z)

2Incidentally, we have now shown that addition and multiplication over N in Peano arithmetic is a semiring.
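The recursive clauses used in these proofs translate directly into code. Here is a minimal Python sketch (my own illustration; nothing in it belongs to the text's formal system) that defines addition and multiplication by recursion on the second argument, mirroring the clauses x + 0 = x, x + y′ = (x + y)′, x · 0 = 0, and x · y′ = x + (x · y), and then spot-checks Theorems 36, 37, 38, and 41 on small values.

```python
# Peano-style arithmetic on ordinary ints: succ models the successor
# operation, and add/mul recurse on their second argument, mirroring
# x + 0 = x, x + y' = (x + y)', x * 0 = 0, and x * y' = x + (x * y).

def succ(n):
    return n + 1

def add(x, y):
    if y == 0:
        return x                    # x + 0 = x
    return succ(add(x, y - 1))      # x + y' = (x + y)'

def mul(x, y):
    if y == 0:
        return 0                    # x * 0 = 0
    return add(x, mul(x, y - 1))    # x * y' = x + (x * y)

# Spot-check the theorems on a small range of values.
for x in range(6):
    for y in range(6):
        assert add(succ(x), y) == succ(add(x, y))  # Theorem 36
        assert add(x, y) == add(y, x)              # Theorem 37
        for z in range(6):
            assert add(add(x, y), z) == add(x, add(y, z))          # Theorem 38
            assert mul(x, add(y, z)) == add(mul(x, y), mul(x, z))  # Theorem 41
```

A finite check like this is evidence rather than proof; it is the induction arguments above that establish the identities for every natural number.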

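Exercise 31 asks for a definition of exponentiation. One natural candidate, following the same recursive pattern as the clauses for addition and multiplication (this is my own suggestion, not the text's official definition), is x^0 = 0′ and x^(a′) = x · x^a. A quick Python check of the exercise's identity under that definition:

```python
# Exponentiation by recursion on the exponent, in the same style as
# the addition and multiplication clauses: x**0 = 1, x**(a+1) = x * x**a.

def power(x, a):
    if a == 0:
        return 1
    return x * power(x, a - 1)

# Check the identity x^a * x^b = x^(a+b) on small values.
for x in range(1, 5):
    for a in range(5):
        for b in range(5):
            assert power(x, a) * power(x, b) == power(x, a + b)
```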

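Formula 12.1.8 can also be read as an executable condition on z. The Python sketch below (my own paraphrase of the formula, not part of the text) replaces the unbounded universal quantifiers with the bounded range 0..z, which is harmless here because no factor of z can exceed z, and checks that the resulting condition agrees with the usual definition of primality.

```python
# The defining condition of 12.1.8: z is prime just in case every
# factorization x * y = z forces x = 1 or y = 1, and z itself is not 1.

def prime_by_formula(z):
    no_proper_factors = all(x * y != z or x == 1 or y == 1
                            for x in range(z + 1)
                            for y in range(z + 1))
    return no_proper_factors and z != 1

def prime_conventional(z):
    return z >= 2 and all(z % d != 0 for d in range(2, z))

# The two definitions agree on an initial segment of the naturals.
assert all(prime_by_formula(z) == prime_conventional(z) for z in range(50))
```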
Having shown that the class of primes is definable, we can stipulate that the new predicate Prime(x) shall abbreviate Formula 12.1.8. With the help of another defined predicate, we will be able to precisely express some non-trivial facts about the primes. For example, we can make use of the following:

(12.1.9) ∃x(¬(x = 0) ∧ x + y = z)

This expresses the condition that y < z by stating that there is a non-zero number x such that z = y + x. The existence of Formula 12.1.9 allows us to introduce the new predicate <, which has the usual meaning.

Now, suppose we want to express the fact that there are infinitely many primes. With Prime and <, this is easily accomplished:

(12.1.10) Prime(2) ∧ ∀x(Prime(x) → ∃y(x < y ∧ Prime(y)))

which could be expanded into:

∀x∀y(x · y = 0′′ → (x = 0′ ∨ y = 0′)) ∧ ¬(0′′ = 0′)
  ∧ ∀z((∀x∀y(x · y = z → (x = 0′ ∨ y = 0′)) ∧ ¬(z = 0′))
    → ∃w(∃v(¬(v = 0) ∧ z + v = w)
      ∧ ∀x∀y(x · y = w → (x = 0′ ∨ y = 0′)) ∧ ¬(w = 0′)))

This is obviously extremely awkward, but easily verified to be nothing more than the expanded form of Formula 12.1.10.

Exercise 32. Define the predicate which means x = y + 2.

Exercise 33. Twin primes are numbers that differ by exactly two, and which are both prime. For example, 11 and 13 are twin primes. Give an abbreviated and an expanded form of the predicate which means x and y are twin primes.

Exercise 34. The twin primes conjecture says that there are infinitely many twin primes. Express the twin primes conjecture in both an abbreviated and an expanded form.

Exercise 35. Define the predicate which means that the number x is divisible by two.

Exercise 36. Define the predicate which means that x and y are relatively prime; this means that x and y are not divisible by any of the same prime numbers. For instance, the pair 15, 49 would be in the class, because 15 is divisible only by the prime numbers 3 and 5, while 49 is divisible only by 7.

Exercise 37. Write the formula which asserts that every pair of prime numbers is relatively prime.

Exercise 38. Why can't we define x − y as being equal to the number z such that z + y = x?

12.2. High-Level Overview

We have already seen the device of Gödel numbering earlier, in the context of Turing machines. Specifically, we generated the description number of a Turing machine by substituting digits for symbols in the transition rules of a machine. We shall use exactly the same device here, except that instead of giving numbers to transition rules of a machine, the numbers will be given to mathematical expressions. As before, the exact substitution scheme is arbitrary; it need only be consistently deployed.

So in the discussion of arithmetic which is to follow, we will need to carefully distinguish between mathematical expressions and the numbers that represent them. Here, we will use the convention that regular type (x = 2 + 2) will be used as usual to represent a mathematical formula. When we refer to the number representing a mathematical formula, we shall put the expression in bold face, as in 2 + 2 = 4. Similarly, when we use a variable, we shall put it in bold face just in case a Gödel number is to be put in its place.

We begin by observing that the language consists of only a finite number of symbols. So, much like Turing machines, it is possible to enumerate all of the possible class signs and place them in an arbitrary order; we may think of this order as being alphabetical, for instance. To take a specific example, perhaps the set of class signs has been arranged in an order in which the first few elements are:

1) x + 1 = 0
2) x + x = 4
3) x + 2 = 5
...

Note that each of the formulas on the right counts as a class sign because each has exactly one free variable, namely, x. If we were to substitute some particular
numeral for x, then we would have a formula with a specific truth-value, and which may or may not have a proof.

It shall be important to refer to class signs by their index number; this will be done by using the function Rn, which shall name the nth class sign. So R1 is x + 1 = 0, for example. Because each Rn has no truth-value (because it has a free variable), we will be more interested in the formulas resulting from substituting a number for the free variable in each Ri. This will be denoted Rn(m): the result of substituting m for the free variable in Rn.

We shall adopt the convention of using sans-serif fonts to display predicates that have an intuitive meaning. This will be very common in the presentation of the Incompleteness Theorem, because much of the proof will be concerned with establishing that such predicates exist. Our first such predicate will be Provable(φ), which shall be intended to mean that there is a proof of φ. Our use of a particular font for these predicates is also meant to highlight the fact that these are predicates, rather than meta-linguistic words with their usual meaning (although they correspond closely to such meta-linguistic terms, of course). The propositional connectives will be used as always, so that ¬Provable(φ) shall mean that Provable(φ) is false; in other words, that there is no proof of φ.

A specific class sign, which we shall call N, will be particularly important. This class is defined in the following way:

(12.2.1) x ∈ N ↔ ¬Provable(Rx(x))

In other words, N shall be the class of all numbers n which, when substituted into the nth class sign, yield an unprovable formula. N, of course, just is another class sign, so it appears in the sequence of all class signs. We will say that it is at position q in the list. We may now show the following:

Theorem 42. If Principia Mathematica is consistent, then it is incomplete.

Proof. Assume that Principia Mathematica is consistent. Suppose that Rq(q) is false. Because Rq is the class sign asserting membership in N, the falsity of Rq(q) entails that q ∉ N. Since N is the class of numbers n such that ¬Provable(Rn(n)), we have that Provable(Rq(q)), so there is a proof of Rq(q). So if Rq(q) is false, then Principia Mathematica is inconsistent. Therefore, Rq(q) is true. Because Rq(q) is true, and it asserts that q ∈ N, we have ¬Provable(Rq(q)); therefore, there is no proof of Rq(q), and Principia Mathematica is therefore incomplete.

We have now to do the hard work of showing that the assumptions we have just made are correct. That is, we need to show that predicates such as Provable can, indeed, be defined in the language of Principia Mathematica.

12.3. Gödel Numbering in Action

The exact substitution of numbers for symbols, as we have already remarked, is arbitrary.
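To make the "arbitrary but consistent" substitution concrete, here is one possible scheme in Python (a toy encoding of my own devising, not Gödel's actual scheme or the one the text will use): each symbol of a small vocabulary is assigned a two-digit code, and the Gödel number of a formula is the concatenation of the codes of its symbols.

```python
# A toy Goedel numbering: each symbol gets a two-digit code, and a
# formula's number is the concatenation of its symbols' codes. Any
# such scheme works, provided it is applied consistently and can be
# reversed.

CODES = {'x': '10', '+': '11', '*': '12', '=': '13', '0': '14',
         "'": '15', '(': '16', ')': '17'}
DECODE = {code: symbol for symbol, code in CODES.items()}

def godel_number(formula):
    return int(''.join(CODES[ch] for ch in formula))

def decode(n):
    s = str(n)
    return ''.join(DECODE[s[i:i + 2]] for i in range(0, len(s), 2))

g = godel_number("x+0'=0''")
assert decode(g) == "x+0'=0''"                     # the coding is reversible
assert godel_number("x=0") != godel_number("0=x")  # distinct formulas, distinct numbers
```

Reversibility is the point: because distinct formulas always receive distinct numbers, statements about formulas can be traded for statements about numbers.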

APPENDIX A

Definitions and Rules


Here I will put a short summary of the proof rules, and so on.


APPENDIX B

Answers to Selected Exercises


1 The first and second are wffs. The others are not.

3 It is not possible. In order for a formula to not entail (p ∨ ¬p), it would be necessary for there to be a row of the truth-table in which that formula is true while (p ∨ ¬p) is false. But that formula is never false.

4 This is the answer to the first exercise in Chapter Two.

5 This is where the answer to the question will go...

7 This will be the first solution to the first problem in the Knights and Knaves chapter...

28 Here is where the solution will go.

29 The basis case has already been proven. So we assume that x · a = a · x, and show:

    x · a′ = x + (x · a)
           = x + (a · x)
           = a′ · x

30 We prove this by induction on the value of y. So for the basis case, we need to show that x · (0 · z) = (x · 0) · z, which is immediate from the fact that 0 is a left- and right-hand absorber. For the inductive step, we assume that x · (a · z) = (x · a) · z, and we show that:

    x · (a′ · z) = x · (z + (a · z))
                 = (x · z) + (x · (a · z))
                 = (x · z) + ((x · a) · z)
                 = (x · z) + (z · (x · a))
                 = (z · x) + (z · (x · a))
                 = z · (x + (x · a))
                 = z · (x · a′)
                 = (x · a′) · z

which completes the proof.


Index

and, 11
and elimination, 17
and introduction, 17
antecedent, 11
artificial language, 10
assumption column, 17
assumption rule, 17
Cantor, Georg, 49
class sign, 69
classes, 68
comprehension schema, 69
conditional, 11
conjunct, 11
conjunction, 11
connectives, 11
consequent, 11
contingent, 13
contradiction, 13
decidability
  definition of, 60
diagonal argument, 49
disjunct, 11
disjunction, 11
equivalence, 12
equivalence elimination, 18
equivalence introduction, 18

existential elimination, 37
Fibonacci sequence, 52
golden ratio, 52
halting problem
  uncomputability of, 58
horseshoe elimination, 18
horseshoe introduction, 19
if-then, 11
implication, 11
indirect proof, 20
just in case, 11
justification column, 17
Knights and Knaves, 25
  proofs, 26
  translations of, 26
  truth-tables for, 27
Kolmogorov complexity, 54
left-hand side of the equivalence, 12
logical entailment, 14
main connective, 12
modus ponens, 18, 68
natural language, 10
negation, 11
negation elimination, 19
not, 11
or, 11
or elimination, 19
or introduction, 18
Peano arithmetic, 69–71
peg and board game, 60
premises, 16
prime numbers, 71
  infinitude of, 72
proof by contradiction, see RAA
proofs, 16
quantifier exchange, 37
RAA, 20
reductio ad absurdum, see RAA
reflexivity of identity, 41
right-hand side of the equivalence, 12
Russell, Bertrand, 68
semantics, 12
semiring, 71
sequent, 16
sequent introduction, 23
Sheffer stroke, 15
Smullyan, Raymond, 25, 66
substitution of identicals, 42
syntax, 12
tautology, 13
truth tables, 12
truth-values, 12
  semantic rules, 12
Turing, Alan, 48
turnstile, 17
universal introduction, 35
universal substitution, 68
universal Turing machine, 56
valid
  informal notion of, 11
  semantic notion of, 13
well-formed formula, 11
  formation rules, 11
Whitehead, Alfred North, 68
