
Boolean algebra

en.wikipedia.org
Chapter 1

(ε, δ)-definition of limit

[Figure] Whenever a point x is within δ units of c, f(x) is within ε units of L.

In calculus, the (ε, δ)-definition of limit ("epsilon–delta definition of limit") is a formalization of the notion of limit.
The concept is due to Augustin-Louis Cauchy, who never gave an (ε, δ) definition of limit in his Cours d'Analyse,
but occasionally used ε, δ arguments in proofs. It was first given as a formal definition by Bernard Bolzano in 1817,
and the definitive modern statement was ultimately provided by Karl Weierstrass.[1][2] It makes rigorous the following
informal notion: the dependent expression f(x) approaches the value L as the variable x approaches the value c if f(x)
can be made as close as desired to L by taking x sufficiently close to c.

1.1 History

Although the Greeks examined limiting processes, such as the Babylonian method, they probably had no concept similar
to the modern limit.[3] The need for the concept of a limit came into force in the 17th century when Pierre de Fermat
attempted to find the slope of the tangent line at a point x of a function such as f(x) = x². Using a non-zero, but
almost zero quantity, E, Fermat performed the following calculation:
slope = (f(x + E) − f(x)) / E
      = ((x + E)² − x²) / E
      = (x² + 2xE + E² − x²) / E
      = (2xE + E²) / E
      = 2x + E = 2x.
The key to the above calculation is that since E is non-zero one can divide f(x + E) − f(x) by E, but since E is close
to 0, 2x + E is essentially 2x.[4] Quantities such as E are called infinitesimals. The problem with this calculation
is that mathematicians of the era were unable to rigorously define a quantity with properties of E,[5] although it was
common practice to 'neglect' higher power infinitesimals and this seemed to yield correct results.
This problem reappeared later in the 1600s at the center of the development of calculus because calculations such
as Fermat's are important to the calculation of derivatives. Isaac Newton first developed calculus via an infinitesimal
quantity called a fluxion. He developed them in reference to the idea of "an infinitely small moment in time..."[6]
However, Newton later rejected fluxions in favor of a theory of ratios that is close to the modern definition of
the limit.[6] Moreover, Isaac Newton was aware that the limit of the ratio of vanishing quantities was not itself a ratio,
as he wrote:

Those ultimate ratios ... are not actually ratios of ultimate quantities, but limits ... which they can
approach so closely that their difference is less than any given quantity...

Additionally, Newton occasionally explained limits in terms similar to the epsilon–delta definition.[7] Gottfried Wilhelm
Leibniz developed an infinitesimal of his own and tried to provide it with a rigorous footing, but it was still
greeted with unease by some mathematicians and philosophers.[8]
Augustin-Louis Cauchy gave a definition of limit in terms of a more primitive notion he called a variable quantity.
He never gave an epsilon–delta definition of limit (Grabiner 1981). Some of Cauchy's proofs contain indications of
the epsilon–delta method. Whether or not his foundational approach can be considered a harbinger of Weierstrass's
is a subject of scholarly dispute. Grabiner feels that it is, while Schubring (2005) disagrees.[1] Nakane concludes that
Cauchy and Weierstrass gave the same name to different notions of limit.[9]
Eventually, Weierstrass and Bolzano are credited with providing a rigorous footing for calculus in the form of the
modern definition of the limit.[1][10] The need for reference to an infinitesimal E was then removed,[11] and
Fermat's computation turned into the computation of the following limit:
lim_{h→0} (f(x + h) − f(x)) / h.
This is not to say that the limiting definition was free of problems, as, although it removed the need to use infinitesimals,
it did require the construction of the real numbers by Richard Dedekind.[12] This is also not to say that infinitesimals
have no place in modern mathematics, as later mathematicians were able to rigorously create infinitesimal quantities
as part of the hyperreal number or surreal number systems. Moreover, it is possible to rigorously develop calculus
with these quantities and they have other mathematical uses.[13]

1.2 Informal statement


A viable intuitive or provisional definition is that a "function f approaches the limit L near a" (symbolically, lim_{x→a} f(x) = L)
if we can make f(x) as close as we like to L by requiring that x be sufficiently close to, but unequal to, a.[14]
When we say that two things are close (such as f(x) and L or x and a) we mean that the distance between them is small.
When f(x), L, x, and a are real numbers, the distance between two numbers is the absolute value of the difference of
the two. Thus, when we say f(x) is close to L we mean |f(x) − L| is small. When we say that x and a are close, we
mean that |x − a| is small.[15]
When we say that we can make f(x) as close as we like to L, we mean that for all non-zero distances ε, we can make
the distance between f(x) and L smaller than ε.[15]
When we say that we can make f(x) as close as we like to L by requiring that x be sufficiently close to, but unequal
to, a, we mean that for all non-zero distances ε, there is some non-zero distance δ such that if the distance between
x and a is less than δ then the distance between f(x) and L is smaller than ε.[15]
The aspect that must be grasped is that the definition requires the following conversation. One is provided with any
challenge ε > 0 for a given f, a, and L. One must answer with a δ > 0 such that 0 < |x − a| < δ implies that
|f(x) − L| < ε. If one can provide an answer for any challenge, one has proven that the limit exists.

1.3 Precise statement and related statements

1.3.1 Precise statement for real valued functions


The (ε, δ) definition of the limit of a function is as follows:[15]
Let f be a real-valued function defined on a subset D of the real numbers. Let c be a limit point of D and let L be a
real number. We say that

lim_{x→c} f(x) = L

if for every ε > 0 there exists a δ > 0 such that, for all x ∈ D, if 0 < |x − c| < δ, then |f(x) − L| < ε.
Symbolically:

lim_{x→c} f(x) = L  ⟺  (∀ε > 0, ∃δ > 0, ∀x ∈ D, 0 < |x − c| < δ ⟹ |f(x) − L| < ε)

If D = [a, b] or D = ℝ, then the condition that c is a limit point is automatically met because closed real intervals
and the entire real line are perfect sets.
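
The definition can also be exercised numerically. The Python sketch below is an illustrative sanity check, not part of the definition and not a proof (it only samples finitely many points): it plays the ε–δ game for the claim lim_{x→1} (2x + 1) = 3, answering each challenge ε with δ = ε/2. The helper name check_limit_claim and the sampling scheme are assumptions made for this example.

    # Numerically exercise the (epsilon, delta) definition for one concrete claim.
    # This is only a finite spot check, not a proof: it samples points x with
    # 0 < |x - c| < delta and tests |f(x) - L| < eps for a few challenges eps.
    def check_limit_claim(f, c, L, delta_of_eps, eps_values, samples=1000):
        for eps in eps_values:
            delta = delta_of_eps(eps)          # our answer to the challenge eps
            for k in range(1, samples + 1):
                h = delta * k / (samples + 1)  # 0 < h < delta
                for x in (c - h, c + h):       # approach c from both sides, x != c
                    if not abs(f(x) - L) < eps:
                        return False, (eps, x)
        return True, None

    # Claim: lim_{x -> 1} (2x + 1) = 3, answered with delta = eps / 2.
    ok, counterexample = check_limit_claim(lambda x: 2 * x + 1, c=1.0, L=3.0,
                                           delta_of_eps=lambda eps: eps / 2,
                                           eps_values=[1.0, 0.1, 1e-4])
    print(ok)   # True on every sampled point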

1.3.2 Precise statement for functions between metric spaces


The definition can be generalized to functions that map between metric spaces. These spaces come with a function,
called a metric, that takes two points in the space and returns a real number that represents the distance between the
two points.[16] The generalized definition is as follows:[17]
Suppose f is defined on a subset D of a metric space X with a metric dX(x, y) and maps into a metric space Y with
a metric dY(x, y). Let c be a limit point of D and let L be a point of Y.
We say that

lim_{x→c} f(x) = L

if for every ε > 0 there exists a δ > 0 such that, for all x ∈ D, if 0 < dX(x, c) < δ, then dY(f(x), L) < ε.
Since d(x, y) = |x − y| is a metric on the real numbers, one can show that this definition generalizes the first definition
for real functions.[18]

1.3.3 Negation of the precise statement

The negation of the definition is as follows:[19]


Suppose f is defined on a subset D of a metric space X with a metric dX(x, y) and maps into a metric space Y with
a metric dY(x, y). Let c be a limit point of D and let L be a point of Y.
We say that

lim_{x→c} f(x) ≠ L

if there exists an ε > 0 such that for all δ > 0 there is an x ∈ D such that 0 < dX(x, c) < δ and dY(f(x), L) > ε.
We say that lim_{x→c} f(x) does not exist if for all L ∈ Y, lim_{x→c} f(x) ≠ L.
For the negation of a real-valued function defined on the real numbers, simply set dY(x, y) = dX(x, y) = |x − y|.

1.3.4 Precise statement for limits at infinity

The precise statement for limits at infinity is as follows:[16]

Suppose f is defined on a subset D of a metric space X with a metric dX(x, y) and maps into a metric space Y with
a metric dY(x, y). Let L ∈ Y.
We say that

lim_{x→∞} f(x) = L

if for every ε > 0, there is a real number N > 0 such that there is an x ∈ D where dX(x, 0) > N, and such that if
dX(x, 0) > N and x ∈ D, then dY(f(x), L) < ε.

1.4 Worked examples

1.4.1 Example 1

We will show that

lim_{x→0} x sin(1/x) = 0.

We let ε > 0 be given. We need to find a δ > 0 such that |x − 0| < δ implies |x sin(1/x) − 0| < ε.
Since sine is bounded above by 1 and below by −1,

|x sin(1/x) − 0| = |x sin(1/x)| = |x| |sin(1/x)| ≤ |x|.

Thus, if we take δ = ε, then |x| = |x − 0| < δ implies |x sin(1/x) − 0| ≤ |x| < ε, which completes the proof.
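
As a quick numerical sanity check of the bound |x sin(1/x)| ≤ |x| that drives this proof, one can sample points near 0 in Python; this is a finite spot check under the choice δ = ε made above, not a substitute for the argument.

    import math, random

    # Spot-check the inequality |x sin(1/x)| <= |x| used above; with delta = eps
    # it forces |x sin(1/x) - 0| < eps whenever 0 < |x| < delta.
    random.seed(0)
    eps = 1e-4
    delta = eps                                   # the choice made in the proof
    for _ in range(100_000):
        x = random.uniform(-delta, delta)
        if not 0 < abs(x) < delta:
            continue
        assert abs(x * math.sin(1 / x)) <= abs(x) < eps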

1.4.2 Example 2

Let us prove the statement that



lim_{x→a} x² = a²

for any real number a.

Let ε > 0 be given. We will find a δ > 0 such that |x − a| < δ implies |x² − a²| < ε.
We start by factoring:

|x² − a²| = |(x − a)(x + a)| = |x − a| |x + a|.

We recognize that |x − a| is the term bounded by δ, so we can presuppose a bound of 1 and later pick something
smaller than that for δ.[20]
So we suppose |x − a| < 1. Since |x| − |y| ≤ |x − y| holds in general for real numbers x and y, we have

|x| − |a| ≤ |x − a| < 1.

Thus,

|x| < 1 + |a|.

Thus, via the triangle inequality,

|x + a| ≤ |x| + |a| < 2|a| + 1.

Thus, if we further suppose that

|x − a| < ε / (2|a| + 1),

then

|x² − a²| < ε.

In summary, we set

δ = min(1, ε / (2|a| + 1)).

So, if |x − a| < δ, then

|x² − a²| = |x − a| |x + a| < (ε / (2|a| + 1)) |x + a| < (ε / (2|a| + 1)) (2|a| + 1) = ε.

Thus, we have found a δ such that |x − a| < δ implies |x² − a²| < ε. Thus, we have shown that

lim_{x→a} x² = a²

for any real number a.
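
The choice δ = min(1, ε / (2|a| + 1)) can also be spot-checked numerically; the Python sketch below is only a finite random search under that choice, not a proof, and the sampling ranges are arbitrary assumptions.

    import random

    # Spot-check Example 2: delta = min(1, eps / (2|a| + 1)) should force
    # |x**2 - a**2| < eps whenever 0 < |x - a| < delta.
    random.seed(1)
    for _ in range(100_000):
        a = random.uniform(-100, 100)
        eps = 10 ** random.uniform(-6, 2)
        delta = min(1.0, eps / (2 * abs(a) + 1))
        x = a + random.uniform(-delta, delta)
        if 0 < abs(x - a) < delta:
            assert abs(x * x - a * a) < eps, (a, eps, x)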



1.4.3 Example 3
Let us prove the statement that

lim_{x→5} (3x − 3) = 12.

This is easily shown through graphical understandings of the limit, and as such serves as a strong basis for introduction
to proof. According to the formal definition above, a limit statement is correct if and only if confining x to δ units of
c will inevitably confine f(x) to ε units of L. In this specific case, this means that the statement is true if and only
if confining x to δ units of 5 will inevitably confine

3x − 3

to ε units of 12. The overall key to showing this implication is to demonstrate how δ and ε must be related to each
other such that the implication holds. Mathematically, we want to show that

0 < |x − 5| < δ  ⟹  |(3x − 3) − 12| < ε.

Simplifying, factoring, and dividing 3 on the right hand side of the implication yields

|x − 5| < ε/3,

which immediately gives the required result if we choose

δ = ε/3.

Thus the proof is completed. The key to the proof lies in the ability of one to choose boundaries in x, and then
conclude corresponding boundaries in f(x), which in this case were related by a factor of 3, which is entirely due to
the slope of 3 in the line

y = 3x − 3.

1.5 Continuity
A function f is said to be continuous at c if it is both defined at c and its value at c equals the limit of f as x approaches
c:

lim_{x→c} f(x) = f(c).

If the condition 0 < |x − c| is left out of the definition of limit, then requiring f(x) to have a limit at c would be the
same as requiring f(x) to be continuous at c.
f is said to be continuous on an interval I if it is continuous at every point c of I.

1.6 Comparison with infinitesimal definition


Keisler proved that a hyperreal definition of limit reduces the quantifier complexity by two quantifiers.[21] Namely,
f(x) converges to a limit L as x tends to a if and only if for every infinitesimal e, the value f(x + e) is infinitely
close to L; see microcontinuity for a related definition of continuity, essentially due to Cauchy. Infinitesimal calculus
textbooks based on Robinson's approach provide definitions of continuity, derivative, and integral at standard
points in terms of infinitesimals. Once notions such as continuity have been thoroughly explained via the approach
using microcontinuity, the epsilon–delta approach is presented as well. Karel Hrbaček argues that the definitions of
continuity, derivative, and integration in Robinson-style non-standard analysis must be grounded in the ε–δ method
in order to also cover non-standard values of the input.[22] Błaszczyk et al. argue that microcontinuity is useful in
developing a transparent definition of uniform continuity, and characterize the criticism by Hrbaček as a "dubious
lament".[23] Hrbaček proposes an alternative non-standard analysis, which (unlike Robinson's) has many levels of
infinitesimals, so that limits at one level can be defined in terms of infinitesimals at the next level.[24]

1.7 See also


Continuous function

Limit of a sequence

List of calculus topics

1.8 References
[1] Grabiner, Judith V. (March 1983), "Who Gave You the Epsilon? Cauchy and the Origins of Rigorous Calculus" (PDF), The
American Mathematical Monthly, Mathematical Association of America, 90 (3): 185–194, JSTOR 2975545, doi:10.2307/2975545,
archived (PDF) from the original on 2009-05-04, retrieved 2009-05-01

[2] Cauchy, A.-L. (1823), "Septième Leçon – Valeurs de quelques expressions qui se présentent sous les formes indéterminées
∞/∞, ∞⁰, ... Relation qui existe entre le rapport aux différences finies et la fonction dérivée", Résumé des leçons données à
l'école royale polytechnique sur le calcul infinitésimal, Paris, archived from the original on 2009-05-04, retrieved 2009-05-01,
p. 44. Accessed 2009-05-01.

[3] Stillwell, John (1989). Mathematics and its history. New York: Springer-Verlag. pp. 38–39. ISBN 978-1-4899-0007-4.

[4] Stillwell, John (1989). Mathematics and its history. New York: Springer-Verlag. p. 104. ISBN 978-1-4899-0007-4.

[5] Stillwell, John (1989). Mathematics and its history. New York: Springer-Verlag. p. 106. ISBN 978-1-4899-0007-4.

[6] Buckley, Benjamin Lee (2012). The continuity debate: Dedekind, Cantor, du Bois-Reymond and Peirce on continuity and
infinitesimals. p. 31. ISBN 9780983700487.

[7] Pourciau, B. (2001), "Newton and the Notion of Limit", Historia Mathematica, 28 (1), doi:10.1006/hmat.2000.2301

[8] Buckley, Benjamin Lee (2012). The continuity debate: Dedekind, Cantor, du Bois-Reymond and Peirce on continuity and
infinitesimals. p. 32. ISBN 9780983700487.

[9] Nakane, Michiyo. Did Weierstrass's differential calculus have a limit-avoiding character? His definition of a limit in
ε–δ style. BSHM Bull. 29 (2014), no. 1, 51–59.

[10] Cauchy, A.-L. (1823), "Septième Leçon – Valeurs de quelques expressions qui se présentent sous les formes indéterminées
∞/∞, ∞⁰, ... Relation qui existe entre le rapport aux différences finies et la fonction dérivée", Résumé des leçons données à
l'école royale polytechnique sur le calcul infinitésimal, Paris, archived from the original on 2009-05-04, retrieved 2009-05-01,
p. 44.

[11] Buckley, Benjamin Lee (2012). The continuity debate: Dedekind, Cantor, du Bois-Reymond and Peirce on continuity and
infinitesimals. p. 33. ISBN 9780983700487.

[12] Buckley, Benjamin Lee (2012). The continuity debate: Dedekind, Cantor, du Bois-Reymond and Peirce on continuity and
infinitesimals. pp. 32–35. ISBN 9780983700487.

[13] Tao, Terence (2008). Structure and randomness: pages from year one of a mathematical blog. Providence, R.I.: American
Mathematical Society. pp. 95–110. ISBN 978-0-8218-4695-7.

[14] Spivak, Michael (2008). Calculus (4th ed.). Houston, Tex.: Publish or Perish. p. 90. ISBN 978-0914098911.

[15] Spivak, Michael (2008). Calculus (4th ed.). Houston, Tex.: Publish or Perish. p. 96. ISBN 978-0914098911.

[16] Rudin, Walter (1976). Principles of Mathematical Analysis. McGraw-Hill Science/Engineering/Math. p. 30. ISBN 978-
0070542358.

[17] Rudin, Walter (1976). Principles of Mathematical Analysis. McGraw-Hill Science/Engineering/Math. p. 83. ISBN 978-
0070542358.

[18] Rudin, Walter (1976). Principles of Mathematical Analysis. McGraw-Hill Science/Engineering/Math. p. 84. ISBN 978-
0070542358.

[19] Spivak, Michael (2008). Calculus (4th ed.). Houston, Tex.: Publish or Perish. p. 97. ISBN 978-0914098911.

[20] Spivak, Michael (2008). Calculus (4th ed.). Houston, Tex.: Publish or Perish. p. 95. ISBN 978-0914098911.

[21] Keisler, H. Jerome (2008), "Quantifiers in limits" (PDF), Andrzej Mostowski and foundational studies, IOS, Amsterdam,
pp. 151–170

[22] Hrbacek, K. (2007), "Stratified Analysis?", in Van Den Berg, I.; Neves, V., The Strength of Nonstandard Analysis, Springer

[23] Błaszczyk, Piotr; Katz, Mikhail; Sherry, David (2012), "Ten misconceptions from the history of analysis and their debunking",
Foundations of Science, arXiv:1202.4153, doi:10.1007/s10699-012-9285-8

[24] Hrbacek, K. (2009). "Relative set theory: Internal view". Journal of Logic and Analysis. 1.

1.9 Further reading


Grabiner, Judith V. (1982). The Origins of Cauchy's Rigorous Calculus. Courier Corporation. ISBN 978-0-
486-14374-3.

Schubring, Gert (2005). Conflicts Between Generalization, Rigor, and Intuition: Number Concepts Underlying
the Development of Analysis in 17th–19th Century France and Germany (illustrated ed.). Springer. ISBN
0-387-22836-5.
Chapter 2

2-valued morphism

2-valued morphism is a term used in mathematics[1] to describe a morphism that sends a Boolean algebra B onto a
two-element Boolean algebra 2 = {0, 1}. It is essentially the same thing as an ultrafilter on B.
A 2-valued morphism can be interpreted as representing a particular state of B. All propositions of B which are
mapped to 1 are considered true, all propositions mapped to 0 are considered false. Since this morphism conserves
the Boolean operators (negation, conjunction, etc.), the set of true propositions will not be inconsistent but will
correspond to a particular maximal conjunction of propositions, denoting the (atomic) state.
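As a concrete finite illustration of these two paragraphs (not taken from the article), the Python sketch below takes B to be the Boolean algebra of all subsets of {1, 2, 3} and the 2-valued morphism h induced by the principal ultrafilter at the element 2; the names universe, B and h are ad hoc choices for this example.

    from itertools import chain, combinations

    # B is the Boolean algebra of subsets of {1, 2, 3}; h(S) = 1 exactly when
    # 2 is in S (the principal ultrafilter at 2). The loop checks that h
    # preserves complement (negation), union (disjunction) and intersection
    # (conjunction), so it is a 2-valued morphism representing one "state" of B.
    universe = frozenset({1, 2, 3})
    B = [frozenset(s) for s in chain.from_iterable(
            combinations(sorted(universe), r) for r in range(len(universe) + 1))]

    def h(S):
        return 1 if 2 in S else 0

    for S in B:
        assert h(universe - S) == 1 - h(S)
        for T in B:
            assert h(S | T) == h(S) | h(T)
            assert h(S & T) == h(S) & h(T)
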
The transition between two states s1 and s2 of B, represented by 2-valued morphisms, can then be represented by an
automorphism f from B to B, such that s2 ∘ f = s1.
The possible states of different objects defined in this way can be conceived as representing potential events. The set of
events can then be structured in the same way as invariance of causal structure, or local-to-global causal connections
or even formal properties of global causal connections.
The morphisms between (non-trivial) objects could be viewed as representing causal connections leading from one
event to another one. For example, the morphism f above leads from event s1 to event s2. The sequences or paths of
morphisms for which there is no inverse morphism, could then be interpreted as defining horismotic or chronological
precedence relations. These relations would then determine a temporal order, a topology, and possibly a metric.
According to,[2] "A minimal realization of such a relationally determined space-time structure can be found. In this
model there are, however, no explicit distinctions. This is equivalent to a model where each object is characterized
by only one distinction: (presence, absence) or (existence, non-existence) of an event. In this manner, the 'arrows'
or the 'structural language' can then be interpreted as morphisms which conserve this unique distinction."[2]
If more than one distinction is considered, however, the model becomes much more complex, and the interpretation
of distinctional states as events, or morphisms as processes, is much less straightforward.

2.1 References
[1] Fleischer, Isidore (1993), "A Boolean formalization of predicate calculus", Algebras and orders (Montreal, PQ, 1991),
NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci., 389, Kluwer Acad. Publ., Dordrecht, pp. 193–198, MR 1233791.

[2] Heylighen, Francis (1990). A Structural Language for the Foundations of Physics. Brussels: International Journal of General
Systems 18, pp. 93–112.

2.2 External links


Representation and Change - A metarepresentational framework for the foundations of physical and cognitive
science

Chapter 3

Absorption (logic)

Absorption is a valid argument form and rule of inference of propositional logic.[1][2] The rule states that if P implies
Q, then P implies P and Q. The rule makes it possible to introduce conjunctions to proofs. It is called the law of
absorption because the term Q is absorbed by the term P in the consequent.[3] The rule can be stated:

P → Q
∴ P → (P ∧ Q)

where the rule is that wherever an instance of "P → Q" appears on a line of a proof, "P → (P ∧ Q)" can be
placed on a subsequent line.

3.1 Formal notation

The absorption rule may be expressed as a sequent:

(P → Q) ⊢ (P → (P ∧ Q))

where ⊢ is a metalogical symbol meaning that P → (P ∧ Q) is a syntactic consequence of (P → Q) in some logical
system;

and expressed as a truth-functional tautology or theorem of propositional logic. The principle was stated as a theorem
of propositional logic by Russell and Whitehead in Principia Mathematica as:

(P → Q) → (P → (P ∧ Q))

where P and Q are propositions expressed in some formal system.

3.2 Examples

If it will rain, then I will wear my coat.


Therefore, if it will rain then it will rain and I will wear my coat.


3.3 Proof by truth table
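
The truth table itself is not reproduced here; as a stand-in, the following Python sketch (an illustrative script whose helper names were chosen for this example) enumerates all four valuations and confirms that (P → Q) → (P → (P ∧ Q)) is true in every row, i.e. that absorption is a tautology.

    from itertools import product

    # Enumerate the truth table of (P -> Q) -> (P -> (P and Q)).
    def implies(a, b):
        return (not a) or b

    print("P     Q     (P->Q)->(P->(P&Q))")
    for P, Q in product([True, False], repeat=2):
        value = implies(implies(P, Q), implies(P, P and Q))
        print(f"{P!s:<5} {Q!s:<5} {value}")
        assert value          # a tautology: True in every row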

3.4 Formal proof

3.5 References
[1] Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall. p. 362.

[2] http://www.philosophypages.com/lg/e11a.htm

[3] Russell and Whitehead, Principia Mathematica


Chapter 4

Absorption law

In algebra, the absorption law or absorption identity is an identity linking a pair of binary operations.
Two binary operations, ∨ and ∧, are said to be connected by the absorption law if:

a ∨ (a ∧ b) = a ∧ (a ∨ b) = a.

A set equipped with two commutative, associative and idempotent binary operations ∨ (join) and ∧ (meet) that
are connected by the absorption law is called a lattice.
Examples of lattices include Boolean algebras, the set of sets with union and intersection operators, Heyting algebras,
and ordered sets with min and max operations.
In classical logic, and in particular Boolean algebra, the operations OR and AND, which are also denoted by ∨ and
∧, satisfy the lattice axioms, including the absorption law. The same is true for intuitionistic logic.
The absorption law does not hold in many other algebraic structures, such as commutative rings, e.g. the field of real
numbers, relevance logics, linear logics, and substructural logics. In the last case, there is no one-to-one correspondence
between the free variables of the defining pair of identities.
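
A minimal computational illustration (not from the article): in the lattice of subsets of a small finite set, with join as union and meet as intersection, the two absorption identities can be spot-checked in Python on random pairs of elements.

    import random

    # Spot-check a ∨ (a ∧ b) = a ∧ (a ∨ b) = a with ∨ = union and ∧ = intersection.
    random.seed(0)
    universe = range(10)
    for _ in range(1000):
        a = {x for x in universe if random.random() < 0.5}
        b = {x for x in universe if random.random() < 0.5}
        assert a | (a & b) == a
        assert a & (a | b) == a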

4.1 See also


Identity (mathematics)

4.2 References
Davey, B. A.; Priestley, H. A. (2002). Introduction to Lattices and Order (second ed.). Cambridge University
Press. ISBN 0-521-78451-4.

Hazewinkel, Michiel, ed. (2001) [1994], Absorption laws, Encyclopedia of Mathematics, Springer Sci-
ence+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

Weisstein, Eric W. Absorption Law. MathWorld.

Chapter 5

Admissible rule

This article is about rules of inference in logic systems. For the concept in decision theory, see admissible decision
rule.

In logic, a rule of inference is admissible in a formal system if the set of theorems of the system does not change
when that rule is added to the existing rules of the system. In other words, every formula that can be derived using
that rule is already derivable without that rule, so, in a sense, it is redundant. The concept of an admissible rule was
introduced by Paul Lorenzen (1955).

5.1 Definitions
Admissibility has been systematically studied only in the case of structural rules in propositional non-classical logics,
which we will describe next.
Let a set of basic propositional connectives be fixed (for instance, {→, ∧, ∨, ⊥} in the case of superintuitionistic
logics, or {→, ⊥, □} in the case of monomodal logics). Well-formed formulas are built freely using these connectives
from a countably infinite set of propositional variables p0, p1, .... A substitution σ is a function from formulas to
formulas which commutes with the connectives, i.e.,

σf(A1, ..., An) = f(σA1, ..., σAn)

for every connective f, and formulas A1, ..., An. (We may also apply substitutions to sets Γ of formulas, making
σΓ = {σA : A ∈ Γ}.) A Tarski-style consequence relation[1] is a relation ⊢ between sets of formulas, and formulas, such
that

1. A ⊢ A,
2. if Γ ⊢ A then Γ, Δ ⊢ A,
3. if Γ ⊢ A and Δ, A ⊢ B then Γ, Δ ⊢ B,

for all formulas A, B, and sets of formulas Γ, Δ. A consequence relation such that

1. if Γ ⊢ A then σΓ ⊢ σA

for all substitutions σ is called structural. (Note that the term structural as used here and below is unrelated to the
notion of structural rules in sequent calculi.) A structural consequence relation is called a propositional logic. A
formula A is a theorem of a logic ⊢ if ∅ ⊢ A.
For example, we identify a superintuitionistic logic L with its standard consequence relation ⊢L axiomatizable by
modus ponens and axioms, and we identify a normal modal logic with its global consequence relation ⊢L axiomatized
by modus ponens, necessitation, and axioms.


A structural inference rule[2] (or just rule for short) is given by a pair (Γ, B), usually written as

A1, ..., An / B,

where Γ = {A1, ..., An} is a finite set of formulas, and B is a formula. An instance of the rule is

σA1, ..., σAn / σB

for a substitution σ. The rule Γ/B is derivable in ⊢ if Γ ⊢ B. It is admissible if for every instance of the rule, σB
is a theorem whenever all formulas from σΓ are theorems.[3] In other words, a rule is admissible if, when added to
the logic, it does not lead to new theorems.[4] We also write Γ |∼ B if Γ/B is admissible. (Note that |∼ is a structural
consequence relation on its own.)
Every derivable rule is admissible, but not vice versa in general. A logic is structurally complete if every admissible
rule is derivable, i.e., ⊢ = |∼.[5]
In logics with a well-behaved conjunction connective (such as superintuitionistic or modal logics), a rule A1, ..., An / B
is equivalent to A1 ∧ ⋯ ∧ An / B with respect to admissibility and derivability. It is therefore customary to only deal
with unary rules A/B.

5.2 Examples
Classical propositional calculus (CPC) is structurally complete.[6] Indeed, assume that A/B is a non-derivable
rule, and fix an assignment v such that v(A) = 1 and v(B) = 0. Define a substitution σ such that for every
variable p, σp = ⊤ if v(p) = 1, and σp = ⊥ if v(p) = 0. Then σA is a theorem, but σB is not (in fact, ¬σB is a
theorem). Thus the rule A/B is not admissible either. (The same argument applies to any multi-valued logic L
complete with respect to a logical matrix all of whose elements have a name in the language of L.) A small
computational illustration of this substitution argument is given after this list.

The Kreisel–Putnam rule (a.k.a. Harrop's rule, or the independence of premise rule)

(KPR)   (¬p → q ∨ r) / ((¬p → q) ∨ (¬p → r))

is admissible in the intuitionistic propositional calculus (IPC). In fact, it is admissible in every superintuitionistic
logic.[7] On the other hand, the formula

(¬p → q ∨ r) → ((¬p → q) ∨ (¬p → r))

is not an intuitionistic tautology, hence KPR is not derivable in IPC. In particular, IPC is not structurally
complete.

The rule

□p / p

is admissible in many modal logics, such as K, D, K4, S4, GL (see this table for names of modal logics).
It is derivable in S4, but it is not derivable in K, D, K4, or GL.

The rule

◇p ∧ ◇¬p / ⊥

is admissible in every normal modal logic.[8] It is derivable in GL and S4.1, but it is not derivable in K,
D, K4, S4, or S5.

Löb's rule

(LR)   □p → p / p

is admissible (but not derivable) in the basic modal logic K, and it is derivable in GL. However, LR is
not admissible in K4. In particular, it is not true in general that a rule admissible in a logic L must be
admissible in its extensions.

The Gödel–Dummett logic (LC), and the modal logic Grz.3 are structurally complete.[9] The product fuzzy
logic is also structurally complete.[10]
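
The Python sketch below illustrates the substitution argument from the first example on the concrete non-derivable CPC rule (p ∨ q) / (p ∧ q); the choice of rule, valuation, and helper names are assumptions made for this illustration only.

    from itertools import product

    # Pick a valuation v with v(premise) = 1 and v(conclusion) = 0, substitute
    # the corresponding truth constants for the variables, and observe that the
    # substituted premise is a tautology while the substituted conclusion is
    # not, so the rule is not admissible in CPC.
    def premise(p, q):
        return p or q

    def conclusion(p, q):
        return p and q

    v = {"p": True, "q": False}                  # v(premise) = 1, v(conclusion) = 0
    assert premise(**v) and not conclusion(**v)

    def substitute(formula):
        # sigma(p) = T, sigma(q) = F: the result no longer depends on p, q
        return lambda p, q: formula(v["p"], v["q"])

    def is_tautology(formula):
        return all(formula(p, q) for p, q in product([True, False], repeat=2))

    assert is_tautology(substitute(premise))         # sigma(premise) is a theorem
    assert not is_tautology(substitute(conclusion))  # sigma(conclusion) is not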

5.3 Decidability and reduced rules


The basic question about admissible rules of a given logic is whether the set of all admissible rules is decidable.
Note that the problem is nontrivial even if the logic itself (i.e., its set of theorems) is decidable: the definition of
admissibility of a rule A/B involves an unbounded universal quantifier over all propositional substitutions, hence a
priori we only know that admissibility of a rule in a decidable logic is Π⁰₁ (i.e., its complement is recursively enumerable).
For instance, it is known that admissibility in the bimodal logics Ku and K4u (the extensions of K or K4 with the
universal modality) is undecidable.[11] Remarkably, decidability of admissibility in the basic modal logic K is a major
open problem.
Nevertheless, admissibility of rules is known to be decidable in many modal and superintuitionistic logics. The first
decision procedures for admissible rules in basic transitive modal logics were constructed by Rybakov, using the
reduced form of rules.[12] A modal rule in variables p0, ..., pk is called reduced if it has the form

⋁_{i=0}^{n} ( ⋀_{j=0}^{k} ¬⁰_{i,j} p_j ∧ ⋀_{j=0}^{k} ¬¹_{i,j} □p_j ) / p0,

where each ¬ᵘ_{i,j} is either blank, or negation ¬. For each rule r, we can effectively construct a reduced rule s (called
the reduced form of r) such that any logic admits (or derives) r if and only if it admits (or derives) s, by introducing
extension variables for all subformulas in A, and expressing the result in the full disjunctive normal form. It is thus
sufficient to construct a decision algorithm for admissibility of reduced rules.
Let ⋁_{i=0}^{n} φ_i / p0 be a reduced rule as above. We identify every conjunction φ_i with the set {¬⁰_{i,j} p_j, ¬¹_{i,j} □p_j | j ≤ k}
of its conjuncts. For any subset W of the set {φ_i | i ≤ n} of all conjunctions, let us define a Kripke model
M = ⟨W, R, ⊩⟩ by

φ_i ⊩ p_j  ⟺  p_j ∈ φ_i,

φ_i R φ_{i′}  ⟺  ∀j ≤ k (□p_j ∈ φ_i ⟹ {p_j, □p_j} ⊆ φ_{i′}).
Then the following provides an algorithmic criterion for admissibility in K4:[13]

Theorem. The rule ⋁_{i=0}^{n} φ_i / p0 is not admissible in K4 if and only if there exists a set W ⊆ {φ_i | i ≤ n} such that

1. φ_i ⊮ p0 for some i ≤ n,

2. φ_i ⊩ φ_i for every i ≤ n,

3. for every subset D of W there exist elements α, β ∈ W such that the equivalences

α ⊩ □p_j if and only if φ ⊩ p_j ∧ □p_j for every φ ∈ D,

β ⊩ □p_j if and only if β ⊩ p_j and φ ⊩ p_j ∧ □p_j for every φ ∈ D

hold for all j.

Similar criteria can be found for the logics S4, GL, and Grz.[14] Furthermore, admissibility in intuitionistic logic can
be reduced to admissibility in Grz using the Gödel–McKinsey–Tarski translation:[15]

A |∼IPC B if and only if T(A) |∼Grz T(B).

Rybakov (1997) developed much more sophisticated techniques for showing decidability of admissibility, which apply
to a robust (infinite) class of transitive (i.e., extending K4 or IPC) modal and superintuitionistic logics, including e.g.
S4.1, S4.2, S4.3, KC, Tk (as well as the above-mentioned logics IPC, K4, S4, GL, Grz).[16]
Despite being decidable, the admissibility problem has relatively high computational complexity, even in simple
logics: admissibility of rules in the basic transitive logics IPC, K4, S4, GL, Grz is coNEXP-complete.[17] This should
be contrasted with the derivability problem (for rules or formulas) in these logics, which is PSPACE-complete.[18]

5.4 Projectivity and unification

Admissibility in propositional logics is closely related to unification in the equational theory of modal or Heyting
algebras. The connection was developed by Ghilardi (1999, 2000). In the logical setup, a unifier of a formula A in a
logic L (an L-unifier for short) is a substitution σ such that σA is a theorem of L. (Using this notion, we can rephrase
admissibility of a rule A/B in L as "every L-unifier of A is an L-unifier of B".) An L-unifier σ is less general than an
L-unifier τ, written as σ ≼ τ, if there exists a substitution υ such that

⊢L σp ↔ υτp

for every variable p. A complete set of unifiers of a formula A is a set S of L-unifiers of A such that every L-unifier
of A is less general than some unifier from S. A most general unifier (mgu) of A is a unifier σ such that {σ} is a
complete set of unifiers of A. It follows that if S is a complete set of unifiers of A, then a rule A/B is L-admissible if
and only if every σ in S is an L-unifier of B. Thus we can characterize admissible rules if we can find well-behaved
complete sets of unifiers.
An important class of formulas which have a most general unifier are the projective formulas: these are formulas A
such that there exists a unifier σ of A such that

A ⊢L B ↔ σB

for every formula B. Note that σ is a mgu of A. In transitive modal and superintuitionistic logics with the finite model
property (fmp), one can characterize projective formulas semantically as those whose set of finite L-models has the
extension property:[19] if M is a finite Kripke L-model with a root r whose cluster is a singleton, and the formula A
holds in all points of M except for r, then we can change the valuation of variables in r so as to make A true in r as
well. Moreover, the proof provides an explicit construction of a mgu for a given projective formula A.
In the basic transitive logics IPC, K4, S4, GL, Grz (and more generally in any transitive logic with the fmp whose
set of finite frames satisfies another kind of extension property), we can effectively construct for any formula A its
projective approximation Π(A):[20] a finite set of projective formulas such that

1. P ⊢L A for every P ∈ Π(A),

2. every unifier of A is a unifier of a formula from Π(A).

It follows that the set of mgus of elements of Π(A) is a complete set of unifiers of A. Furthermore, if P is a projective
formula, then

P |∼L B if and only if P ⊢L B

for any formula B. Thus we obtain the following effective characterization of admissible rules:[21]

A |∼L B if and only if ∀P ∈ Π(A) (P ⊢L B).

5.5 Bases of admissible rules


Let L be a logic. A set R of L-admissible rules is called a basis[22] of admissible rules if every admissible rule Γ/B
can be derived from R and the derivable rules of L, using substitution, composition, and weakening. In other words,
R is a basis if and only if |∼L is the smallest structural consequence relation which includes ⊢L and R.
Notice that decidability of admissible rules of a decidable logic is equivalent to the existence of recursive (or recursively
enumerable) bases: on the one hand, the set of all admissible rules is a recursive basis if admissibility is decidable. On
the other hand, the set of admissible rules is always co-r.e., and if we further have an r.e. basis, it is also r.e., hence
it is decidable. (In other words, we can decide admissibility of A/B by the following algorithm: we start in parallel
two exhaustive searches, one for a substitution σ which unifies A but not B, and one for a derivation of A/B from R
and ⊢L. One of the searches has to eventually come up with an answer.) Apart from decidability, explicit bases of
admissible rules are useful for some applications, e.g. in proof complexity.[23]
For a given logic, we can ask whether it has a recursive or finite basis of admissible rules, and to provide an explicit
basis. If a logic has no finite basis, it can nevertheless have an independent basis: a basis R such that no proper subset
of R is a basis.
In general, very little can be said about existence of bases with desirable properties. For example, while tabular
logics are generally well-behaved, and always finitely axiomatizable, there exist tabular modal logics without a finite
or independent basis of rules.[24] Finite bases are relatively rare: even the basic transitive logics IPC, K4, S4, GL, Grz
do not have a finite basis of admissible rules,[25] though they have independent bases.[26]

5.5.1 Examples of bases


The empty set is a basis of L-admissible rules if and only if L is structurally complete.

Every extension of the modal logic S4.3 (including, notably, S5) has a finite basis consisting of the single rule[27]

◇p ∧ ◇¬p / ⊥.
Visser's rules

( ⋀_{i=1}^{n} (p_i → q_i) → p_{n+1} ∨ p_{n+2} ) ∨ r  /  ⋁_{j=1}^{n+2} ( ⋀_{i=1}^{n} (p_i → q_i) → p_j ) ∨ r,   n ≥ 1

are a basis of admissible rules in IPC or KC.[28]

The rules

( □q → ⋁_{i=1}^{n} p_i ) ∨ r  /  ⋁_{i=1}^{n} ( □q ∧ q → p_i ) ∨ r,   n ≥ 0

are a basis of admissible rules of GL.[29] (Note that the empty disjunction is defined as ⊥.)
The rules

( □(q → □q) → ⋁_{i=1}^{n} p_i ) ∨ r  /  ⋁_{i=1}^{n} ( □q → p_i ) ∨ r,   n ≥ 0

are a basis of admissible rules of S4 or Grz.[30]

5.6 Semantics for admissible rules


A rule Γ/B is valid in a modal or intuitionistic Kripke frame F = ⟨W, R⟩ if the following is true for every valuation ⊩
in F:

if ∀x ∈ W (x ⊩ A) for all A ∈ Γ, then ∀x ∈ W (x ⊩ B).

(The definition readily generalizes to general frames, if needed.)

Let X be a subset of W, and t a point in W. We say that t is

a reflexive tight predecessor of X, if for every y in W: t R y if and only if t = y or x = y or x R y for some x in
X,

an irreflexive tight predecessor of X, if for every y in W: t R y if and only if x = y or x R y for some x in X.

We say that a frame F has reflexive (irreflexive) tight predecessors, if for every finite subset X of W, there exists a
reflexive (irreflexive) tight predecessor of X in W.
We have:[31]

a rule is admissible in IPC if and only if it is valid in all intuitionistic frames which have reflexive tight predecessors,

a rule is admissible in K4 if and only if it is valid in all transitive frames which have reflexive and irreflexive
tight predecessors,

a rule is admissible in S4 if and only if it is valid in all transitive reflexive frames which have reflexive tight
predecessors,

a rule is admissible in GL if and only if it is valid in all transitive converse well-founded frames which have
irreflexive tight predecessors.

Note that apart from a few trivial cases, frames with tight predecessors must be infinite, hence admissible rules in
basic transitive logics do not enjoy the finite model property.

5.7 Structural completeness


While a general classification of structurally complete logics is not an easy task, we have a good understanding of
some special cases.
Intuitionistic logic itself is not structurally complete, but its fragments may behave differently. Namely, any disjunction-
free rule or implication-free rule admissible in a superintuitionistic logic is derivable.[32] On the other hand, the Mints
rule

((p → q) → p ∨ r) / (((p → q) → p) ∨ ((p → q) → r))

is admissible in intuitionistic logic but not derivable, and contains only implications and disjunctions.
We know the maximal structurally incomplete transitive logics. A logic is called hereditarily structurally complete
if every extension of it is structurally complete. For example, classical logic, as well as the logics LC and Grz.3
mentioned above, are hereditarily structurally complete. A complete description of hereditarily structurally complete
superintuitionistic and transitive modal logics was given by Citkin and Rybakov. Namely, a superintuitionistic logic
is hereditarily structurally complete if and only if it is not valid in any of the five Kripke frames[9]

Similarly, an extension of K4 is hereditarily structurally complete if and only if it is not valid in any of certain twenty
Kripke frames (including the five intuitionistic frames above).[9]
There exist structurally complete logics that are not hereditarily structurally complete: for example, Medvedev's logic
is structurally complete,[33] but it is included in the structurally incomplete logic KC.

5.8 Variants
A rule with parameters is a rule of the form

A(p1, ..., pn, s1, ..., sk) / B(p1, ..., pn, s1, ..., sk),

whose variables are divided into the regular variables pi, and the parameters si. The rule is L-admissible if every
L-unifier σ of A such that σsi = si for each i is also a unifier of B. The basic decidability results for admissible rules
also carry to rules with parameters.[34]
A multiple-conclusion rule is a pair (Γ, Δ) of two finite sets of formulas, written as

A1, ..., An / B1, ..., Bm.

Such a rule is admissible if every unifier σ of Γ is also a unifier of some formula from Δ.[35] For example, a logic L is
consistent iff it admits the rule

⊥ / ∅

and a superintuitionistic logic has the disjunction property iff it admits the rule

p ∨ q / p, q.

Again, basic results on admissible rules generalize smoothly to multiple-conclusion rules.[36] In logics with a variant
of the disjunction property, the multiple-conclusion rules have the same expressive power as single-conclusion rules:
for example, in S4 the rule above is equivalent to

A1, ..., An / □B1 ∨ ⋯ ∨ □Bm.

Nevertheless, multiple-conclusion rules can often be employed to simplify arguments.

In proof theory, admissibility is often considered in the context of sequent calculi, where the basic objects are sequents
rather than formulas. For example, one can rephrase the cut-elimination theorem as saying that the cut-free sequent
calculus admits the cut rule

Γ ⇒ A, Δ    Π, A ⇒ Λ  /  Γ, Π ⇒ Δ, Λ.

(By abuse of language, it is also sometimes said that the (full) sequent calculus admits cut, meaning its cut-free
version does.) However, admissibility in sequent calculi is usually only a notational variant for admissibility in the
corresponding logic: any complete calculus for (say) intuitionistic logic admits a sequent rule if and only if IPC admits
the formula rule which we obtain by translating each sequent Γ ⇒ Δ to its characteristic formula ⋀Γ → ⋁Δ.

5.9 Notes
[1] Blok & Pigozzi (1989), Kracht (2007)

[2] Rybakov (1997), Def. 1.1.3

[3] Rybakov (1997), Def. 1.7.2

[4] From de Jongh's theorem to intuitionistic logic of proofs

[5] Rybakov (1997), Def. 1.7.7

[6] Chagrov & Zakharyaschev (1997), Thm. 1.25

[7] Prucnal (1979), cf. Iemhoff (2006)

[8] Rybakov (1997), p. 439

[9] Rybakov (1997), Thms. 5.4.4, 5.4.8

[10] Cintula & Metcalfe (2009)

[11] Wolter & Zakharyaschev (2008)

[12] Rybakov (1997), §3.9

[13] Rybakov (1997), Thm. 3.9.3

[14] Rybakov (1997), Thms. 3.9.6, 3.9.9, 3.9.12; cf. Chagrov & Zakharyaschev (1997), §16.7

[15] Rybakov (1997), Thm. 3.2.2

[16] Rybakov (1997), §3.5

[17] Jeřábek (2007)

[18] Chagrov & Zakharyaschev (1997), §18.5

[19] Ghilardi (2000), Thm. 2.2

[20] Ghilardi (2000), p. 196

[21] Ghilardi (2000), Thm. 3.6

[22] Rybakov (1997), Def. 1.4.13

[23] Mints & Kojevnikov (2004)

[24] Rybakov (1997), Thm. 4.5.5

[25] Rybakov (1997), §4.2

[26] Jeřábek (2008)



[27] Rybakov (1997), Cor. 4.3.20

[28] Iemhoff (2001, 2005), Rozière (1992)

[29] Jeřábek (2005)

[30] Jeřábek (2005, 2008)

[31] Iemhoff (2001), Jeřábek (2005)

[32] Rybakov (1997), Thms. 5.5.6, 5.5.9

[33] Prucnal (1976)

[34] Rybakov (1997), §6.1

[35] Jeřábek (2005); cf. Kracht (2007), §7

[36] Jeřábek (2005, 2007, 2008)

5.10 References
W. Blok, D. Pigozzi, Algebraizable logics, Memoirs of the American Mathematical Society 77 (1989), no. 396,
1989.

A. Chagrov and M. Zakharyaschev, Modal Logic, Oxford Logic Guides vol. 35, Oxford University Press, 1997.
ISBN 0-19-853779-4

P. Cintula and G. Metcalfe, Structural completeness in fuzzy logics, Notre Dame Journal of Formal Logic 50
(2009), no. 2, pp. 153–182. doi:10.1215/00294527-2009-004

A. I. Citkin, On structurally complete superintuitionistic logics, Soviet Mathematics Doklady, vol. 19 (1978),
pp. 816–819.

S. Ghilardi, Unification in intuitionistic logic, Journal of Symbolic Logic 64 (1999), no. 2, pp. 859–880. Project
Euclid JSTOR

S. Ghilardi, Best solving modal equations, Annals of Pure and Applied Logic 102 (2000), no. 3, pp. 183–198.
doi:10.1016/S0168-0072(99)00032-9

R. Iemhoff, On the admissible rules of intuitionistic propositional logic, Journal of Symbolic Logic 66 (2001),
no. 1, pp. 281–294. Project Euclid JSTOR

R. Iemhoff, Intermediate logics and Visser's rules, Notre Dame Journal of Formal Logic 46 (2005), no. 1, pp.
65–81. doi:10.1305/ndjfl/1107220674

R. Iemhoff, On the rules of intermediate logics, Archive for Mathematical Logic 45 (2006), no. 5, pp. 581–599.
doi:10.1007/s00153-006-0320-8

E. Jeřábek, Admissible rules of modal logics, Journal of Logic and Computation 15 (2005), no. 4, pp. 411–431.
doi:10.1093/logcom/exi029

E. Jeřábek, Complexity of admissible rules, Archive for Mathematical Logic 46 (2007), no. 2, pp. 73–92.
doi:10.1007/s00153-006-0028-9

E. Jeřábek, Independent bases of admissible rules, Logic Journal of the IGPL 16 (2008), no. 3, pp. 249–267.
doi:10.1093/jigpal/jzn004

M. Kracht, Modal Consequence Relations, in: Handbook of Modal Logic (P. Blackburn, J. van Benthem, and
F. Wolter, eds.), Studies of Logic and Practical Reasoning vol. 3, Elsevier, 2007, pp. 492–545. ISBN 978-0-
444-51690-9

P. Lorenzen, Einführung in die operative Logik und Mathematik, Grundlehren der mathematischen Wissenschaften
vol. 78, Springer-Verlag, 1955.

G. Mints and A. Kojevnikov, Intuitionistic Frege systems are polynomially equivalent, Zapiski Nauchnyh Seminarov
POMI 316 (2004), pp. 129–146. gzipped PS

T. Prucnal, Structural completeness of Medvedev's propositional calculus, Reports on Mathematical Logic 6
(1976), pp. 103–105.

T. Prucnal, On two problems of Harvey Friedman, Studia Logica 38 (1979), no. 3, pp. 247–262. doi:10.1007/BF00405383

P. Rozière, Règles admissibles en calcul propositionnel intuitionniste, Ph.D. thesis, Université de Paris VII, 1992.
PDF

V. V. Rybakov, Admissibility of Logical Inference Rules, Studies in Logic and the Foundations of Mathematics
vol. 136, Elsevier, 1997. ISBN 0-444-89505-1

F. Wolter, M. Zakharyaschev, Undecidability of the unification and admissibility problems for modal and description
logics, ACM Transactions on Computational Logic 9 (2008), no. 4, article no. 25. doi:10.1145/1380572.1380574
PDF
Chapter 6

Affirming a disjunct

The formal fallacy of affirming a disjunct, also known as the fallacy of the alternative disjunct or a false exclusionary
disjunct, occurs when a deductive argument takes the following logical form:

A or B
A
Therefore, not B

Or in logical operators:

p ∨ q
p
⊢ ¬q

Where ⊢ denotes a logical assertion.

6.1 Explanation
The fallacy lies in concluding that one disjunct must be false because the other disjunct is true; in fact they may both be
true because "or" is defined inclusively rather than exclusively. It is a fallacy of equivocation between the operations
OR and XOR.
Affirming the disjunct should not be confused with the valid argument known as the disjunctive syllogism.
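
A brute-force truth-table search makes the invalidity concrete; the short Python sketch below (an illustration, with ad hoc names) looks for a row in which both premises of the form are true while the conclusion is false.

    from itertools import product

    # Counterexample search for "p or q; p; therefore not q": a row where both
    # premises hold but the conclusion fails shows the form is invalid
    # (here p = q = True, because "or" is inclusive).
    for p, q in product([True, False], repeat=2):
        premises_true = (p or q) and p
        conclusion = not q
        if premises_true and not conclusion:
            print(f"counterexample: p={p}, q={q}")   # prints p=True, q=True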

6.2 Examples
The following argument indicates the unsoundness of affirming a disjunct:

Max is a mammal or Max is a cat.

Max is a mammal.
Therefore, Max is not a cat.

This inference is unsound because all cats, by definition, are mammals.

A second example provides a first proposition that appears realistic and shows how an obviously flawed conclusion
still arises under this fallacy.

To be on the cover of Vogue Magazine, one must be a celebrity or very beautiful.

This month's cover was a celebrity.
Therefore, this celebrity is not very beautiful.


6.3 See also


Exclusive disjunction

Logical disjunction
Syllogistic fallacy

6.4 External links


Fallacy files: affirming a disjunct

Affirming a disjunct, logicallyfallacious.com


Chapter 7

Affirming the consequent

Affirming the consequent, sometimes called converse error, fallacy of the converse or confusion of necessity
and sufficiency, is a formal fallacy of inferring the converse from the original statement. The corresponding argument
has the general form:

P → Q, Q
∴ P

An argument of this form is invalid, i.e., the conclusion can be false even when statements 1 and 2 are true. Since P
was never asserted as the only sufficient condition for Q, other factors could account for Q (while P was false).[1][2]
To put it differently, if P implies Q, the only inference that can be made is non-Q implies non-P. (Non-P and non-Q
designate the opposite propositions to P and Q.) This is known as logical contraposition. Symbolically:

(P → Q) ⟺ (¬Q → ¬P)

The name affirming the consequent derives from the premise Q, which affirms the "then" clause of the conditional
premise.
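
The same point can be checked mechanically. The Python sketch below (illustrative only, with helper names chosen here) finds the counterexample row for the converse inference and confirms that, by contrast, the contrapositive equivalence holds in every row of the truth table.

    from itertools import product

    def implies(a, b):
        return (not a) or b

    # The converse inference P -> Q, Q |- P has a counterexample (P False,
    # Q True), while (P -> Q) <-> (not Q -> not P) holds in every row.
    counterexamples = [(P, Q) for P, Q in product([True, False], repeat=2)
                       if implies(P, Q) and Q and not P]
    print(counterexamples)                      # [(False, True)]

    assert all(implies(P, Q) == implies(not Q, not P)
               for P, Q in product([True, False], repeat=2))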

7.1 Examples
Example 1
One way to demonstrate the invalidity of this argument form is with a counterexample with true premises but an
obviously false conclusion. For example:

If someone owns Fort Knox, then he is rich.

Bill Gates is rich.
Therefore, Bill Gates owns Fort Knox.

Owning Fort Knox is not the only way to be rich. Any number of other ways exist to be rich.
However, one can affirm with certainty that if someone is not rich (non-Q), then this person does not own Fort
Knox (non-P). This is the contrapositive of the first statement, and it must be true if and only if the original statement
is true.
Example 2
Arguments of the same form can sometimes seem superficially convincing, as in the following example:

If I have the flu, then I have a sore throat.

I have a sore throat.
Therefore, I have the flu.

But having the flu is not the only cause of a sore throat since many illnesses cause sore throat, such as the common
cold or strep throat.
Affirming the consequent is commonly used in rationalization, and thus appears as a coping mechanism in some
people.
Example 3
In Catch-22,[3] the chaplain is interrogated for supposedly being 'Washington Irving'/'Irving Washington', who has
been blocking out large portions of soldiers' letters home. The colonel has found such a letter, but with the chaplain's
name signed.

'You can read, though, can't you?' the colonel persevered sarcastically. 'The author signed his name.'
'That's my name there.'
'Then you wrote it. Q.E.D.'

P in this case is 'The chaplain signs his own name', and Q 'The chaplain's name is written'. The chaplain's name may
be written, but he did not necessarily write it, as the colonel falsely concludes (and in fact he did not, as in the novel,
Yossarian signed the name[3]).

7.2 See also


Confusion of the inverse
Denying the antecedent

ELIZA effect
Fallacy of the single cause

Fallacy of the undistributed middle

Inference to the best explanation


Modus ponens

Modus tollens
Post hoc ergo propter hoc

Necessity and sufficiency

7.3 References
[1] "Affirming the Consequent". Fallacy Files. Fallacy Files. Retrieved 9 May 2013.

[2] Damer, T. Edward (2001). "Confusion of a Necessary with a Sufficient Condition". Attacking Faulty Reasoning (4th ed.).
Wadsworth. p. 150.

[3] Heller, Joseph (1994). Catch-22. Vintage. pp. 438, 8. ISBN 0-09-947731-9.
Chapter 8

Algebraic normal form

In Boolean algebra, the algebraic normal form (ANF), ring sum normal form (RSNF or RNF), Zhegalkin normal
form, or Reed–Muller expansion is a way of writing logical formulas in one of three subforms:

The entire formula is purely true or false:

1
0

One or more variables are ANDed together into a term. One or more terms are XORed together into ANF.
No NOTs are permitted:

a ⊕ b ⊕ ab ⊕ abc

or in standard propositional logic symbols:

a ⊕ b ⊕ (a ∧ b) ⊕ (a ∧ b ∧ c)

The previous subform with a purely true term:

1 ⊕ a ⊕ b ⊕ ab ⊕ abc

Formulas written in ANF are also known as Zhegalkin polynomials (Russian: полиномы Жегалкина) and Positive
Polarity (or Parity) Reed–Muller expressions.

8.1 Common uses


ANF is a normal form, which means that two equivalent formulas will convert to the same ANF, easily showing
whether two formulas are equivalent for automated theorem proving. Unlike other normal forms, it can be represented
as a simple list of lists of variable names. Conjunctive and disjunctive normal forms also require recording whether
each variable is negated or not. Negation normal form is unsuitable for that purpose, since it doesn't use equality as
its equivalence relation: a ∨ ¬a isn't reduced to the same thing as 1, even though they're equal.
Putting a formula into ANF also makes it easy to identify linear functions (used, for example, in linear feedback shift
registers): a linear function is one that is a sum of single literals. Properties of nonlinear feedback shift registers can
also be deduced from certain properties of the feedback function in ANF.

8.2 Performing operations within algebraic normal form


There are straightforward ways to perform the standard boolean operations on ANF inputs in order to get ANF results.
XOR (logical exclusive disjunction) is performed directly:


(1 ⊕ x) ⊕ (1 ⊕ x ⊕ y)
= 1 ⊕ x ⊕ 1 ⊕ x ⊕ y
= 1 ⊕ 1 ⊕ x ⊕ x ⊕ y
= y

NOT (logical negation) is XORing 1:[1]

¬(1 ⊕ x ⊕ y)
= 1 ⊕ (1 ⊕ x ⊕ y)
= 1 ⊕ 1 ⊕ x ⊕ y
= x ⊕ y

AND (logical conjunction) is distributed algebraically:[2]

(1 ⊕ x)(1 ⊕ x ⊕ y)
= 1(1 ⊕ x ⊕ y) ⊕ x(1 ⊕ x ⊕ y)
= (1 ⊕ x ⊕ y) ⊕ (x ⊕ x ⊕ xy)
= 1 ⊕ x ⊕ x ⊕ x ⊕ y ⊕ xy
= 1 ⊕ x ⊕ y ⊕ xy

OR (logical disjunction) uses either 1 ⊕ (1 ⊕ a)(1 ⊕ b)[3] (easier when both operands have purely true terms) or
a ⊕ b ⊕ ab[4] (easier otherwise):

(1 ⊕ x) + (1 ⊕ x ⊕ y)
= 1 ⊕ (1 ⊕ 1 ⊕ x)(1 ⊕ 1 ⊕ x ⊕ y)
= 1 ⊕ x(x ⊕ y)
= 1 ⊕ x ⊕ xy
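
These calculations can be mechanized. In the Python sketch below (an illustrative representation, not a standard library API), an ANF expression is a set of monomials, each monomial a frozenset of variable names, with the empty frozenset standing for the constant 1; the four operations then reproduce the worked examples above.

    # XOR is symmetric difference, AND distributes term by term, NOT XORs with 1,
    # and OR uses a + b = a XOR b XOR ab.
    ONE = frozenset()

    def xor(f, g):
        return set(f) ^ set(g)                    # duplicate terms cancel

    def and_(f, g):
        result = set()
        for m in f:
            for n in g:
                result ^= {m | n}                 # x*x = x; repeated products cancel
        return result

    def not_(f):
        return xor(f, {ONE})

    def or_(f, g):
        return xor(xor(f, g), and_(f, g))

    x, y = frozenset({"x"}), frozenset({"y"})
    f = {ONE, x}                                  # 1 XOR x
    g = {ONE, x, y}                               # 1 XOR x XOR y
    assert xor(f, g) == {y}                       # y
    assert not_(g) == {x, y}                      # x XOR y
    assert and_(f, g) == {ONE, x, y, x | y}       # 1 XOR x XOR y XOR xy
    assert or_(f, g) == {ONE, x, x | y}           # 1 XOR x XOR xy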

8.3 Converting to algebraic normal form


Each variable in a formula is already in pure ANF, so you only need to perform the formula's Boolean operations as
shown above to get the entire formula into ANF. For example:

x + (y ¬z)
x + (y(1 ⊕ z))
x + (y ⊕ yz)
x ⊕ (y ⊕ yz) ⊕ x(y ⊕ yz)
x ⊕ y ⊕ xy ⊕ yz ⊕ xyz

8.4 Formal representation


ANF is sometimes described in an equivalent way:

f(x1, x2, ..., xn) = a0 ⊕ a1x1 ⊕ a2x2 ⊕ ... ⊕ anxn ⊕ a_{1,2}x1x2 ⊕ ... ⊕ a_{1,2,...,n}x1x2...xn

where a0, a1, ..., a_{1,2,...,n} ∈ {0, 1} fully describes f.



8.4.1 Recursively deriving multiargument Boolean functions

There are only four functions with one argument:

f(x) = 0

f(x) = 1

f(x) = x

f(x) = 1 ⊕ x

To represent a function with multiple arguments one can use the following equality:

f(x1, x2, ..., xn) = g(x2, ..., xn) ⊕ x1 h(x2, ..., xn), where

g(x2, ..., xn) = f(0, x2, ..., xn)
h(x2, ..., xn) = f(0, x2, ..., xn) ⊕ f(1, x2, ..., xn)

Indeed,

if x1 = 0 then x1 h = 0 and so f(0, ...) = f(0, ...)

if x1 = 1 then x1 h = h and so f(1, ...) = f(0, ...) ⊕ f(0, ...) ⊕ f(1, ...)

Since both g and h have fewer arguments than f it follows that using this process recursively we will finish with
functions with one variable. For example, let us construct the ANF of f(x, y) = x ∨ y (logical or):

f(x, y) = f(0, y) ⊕ x(f(0, y) ⊕ f(1, y))

since f(0, y) = 0 ∨ y = y and f(1, y) = 1 ∨ y = 1

it follows that f(x, y) = y ⊕ x(y ⊕ 1)

by distribution, we get the final ANF: f(x, y) = y ⊕ xy ⊕ x = x ⊕ y ⊕ xy
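
The recursion f(x1, ..., xn) = g ⊕ x1 h can be turned directly into a small program. The Python sketch below is illustrative (the representation of polynomials as sets of index sets is an assumption of this example); it derives the ANF of a Boolean function given as a Python callable and reproduces the x ∨ y example.

    # A polynomial is a set of monomials; a monomial is a frozenset of argument
    # indices (0 for x1, 1 for x2, ...), and frozenset() is the constant 1.
    def anf(f, n, first=0):
        if n == 0:
            return {frozenset()} if f() else set()
        g = anf(lambda *rest: f(0, *rest), n - 1, first + 1)            # f with x1 = 0
        h = anf(lambda *rest: f(0, *rest) ^ f(1, *rest), n - 1, first + 1)
        return g ^ {m | {first} for m in h}                             # g XOR x1*h

    # Example from the text: f(x, y) = x OR y has ANF x XOR y XOR xy.
    poly = anf(lambda x, y: x | y, 2)
    print(sorted(tuple(sorted(m)) for m in poly))    # [(0,), (0, 1), (1,)], i.e. x, xy, y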

8.5 See also


Reed–Muller expansion

Zhegalkin normal form

Boolean function

Logical graph

Zhegalkin polynomial

Negation normal form

Conjunctive normal form

Disjunctive normal form

Karnaugh map

Boolean ring

8.6 References
[1] WolframAlpha NOT-equivalence demonstration: ¬a = 1 ⊕ a

[2] WolframAlpha AND-equivalence demonstration: (a ⊕ b)(c ⊕ d) = ac ⊕ ad ⊕ bc ⊕ bd

[3] From De Morgan's laws

[4] WolframAlpha OR-equivalence demonstration: a + b = a ⊕ b ⊕ ab

8.7 Further reading


Wegener, Ingo (1987). The complexity of Boolean functions. Wiley-Teubner. p. 6. ISBN 3-519-02107-2.

Presentation (PDF) (in German). University of Duisburg-Essen. Archived (PDF) from the original on 2017-
04-19. Retrieved 2017-04-19.

Maxfield, Clive "Max" (2006-11-29). "Reed-Muller Logic". Logic 101. EETimes. Part 3. Archived from the
original on 2017-04-19. Retrieved 2017-04-19.
Chapter 9

Allegory (category theory)

In the mathematical field of category theory, an allegory is a category that has some of the structure of the category
of sets and binary relations between them. Allegories can be used as an abstraction of categories of relations, and in
this sense the theory of allegories is a generalization of relation algebra to relations between different sorts. Allegories
are also useful in defining and investigating certain constructions in category theory, such as exact completions.
In this article we adopt the convention that morphisms compose from right to left, so RS means "first do S, then do
R".

9.1 Definition
An allegory is a category in which

every morphism R : X → Y is associated with an anti-involution, i.e. a morphism R° : Y → X satisfying
R°° = R and (RS)° = S°R°; and

every pair of morphisms R, S : X → Y with common domain/codomain is associated with an intersection, i.e.
a morphism R ∩ S : X → Y,

all such that

intersections are idempotent (R ∩ R = R), commutative (R ∩ S = S ∩ R), and associative ((R ∩ S) ∩ T = R ∩ (S ∩ T));

anti-involution distributes over intersection ((R ∩ S)° = S° ∩ R°);

composition is semi-distributive over intersection (R(S ∩ T) ⊆ RS ∩ RT, (R ∩ S)T ⊆ RT ∩ ST); and

the modularity law is satisfied: RS ∩ T ⊆ (R ∩ TS°)S.

Here, we are abbreviating using the order defined by the intersection: "R ⊆ S" means "R = R ∩ S".
A first example of an allegory is the category of sets and relations. The objects of this allegory are sets, and a morphism
X → Y is a binary relation between X and Y. Composition of morphisms is composition of relations; intersection of
morphisms is intersection of relations.
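
For this allegory of sets and relations, the axioms can be spot-checked by brute force on a small set. The Python sketch below is a finite sanity check under the stated composition convention (not a proof); it verifies the anti-involution law and the modularity law on random relations.

    import random
    from itertools import product

    random.seed(0)
    X = range(4)

    def compose(R, S):                       # "RS" means: first do S, then do R
        return {(x, z) for (y1, z) in R for (x, y2) in S if y1 == y2}

    def converse(R):                         # the anti-involution R° in Rel
        return {(y, x) for (x, y) in R}

    def random_relation():
        return {p for p in product(X, X) if random.random() < 0.4}

    for _ in range(200):
        R, S, T = random_relation(), random_relation(), random_relation()
        # (RS)° = S°R°
        assert converse(compose(R, S)) == compose(converse(S), converse(R))
        # modularity law: RS ∩ T ⊆ (R ∩ TS°)S
        assert compose(R, S) & T <= compose(R & compose(T, converse(S)), S)
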

9.2 Regular categories and allegories

9.2.1 Allegories of relations in regular categories


In a category C, a relation between objects X, Y is a span of morphisms X ← R → Y that is jointly monic. Two such
spans X ← S → Y and X ← T → Y are considered equivalent when there is an isomorphism between S and T that makes
everything commute, and strictly speaking relations are only defined up to equivalence (one may formalise this either
using equivalence classes or using bicategories). If the category C has products, a relation between X and Y is the
same thing as a monomorphism into X × Y (or an equivalence class of such). In the presence of pullbacks and a proper
factorization system, one can define the composition of relations. The composition of X ← R → Y ← S → Z is found by
first pulling back the cospan R → Y ← S and then taking the jointly-monic image of the resulting span X ← R ×_Y S → Z.
Composition of relations will be associative if the factorization system is appropriately stable. In this case one can
consider a category Rel(C), with the same objects as C, but where morphisms are relations between the objects. The
identity relations are the diagonals X → X × X.
Recall that a regular category is a category with finite limits and images in which covers are stable under pullback. A
regular category has a stable regular epi/mono factorization system. The category of relations for a regular category
is always an allegory. Anti-involution is defined by turning the source/target of the relation around, and intersections
are intersections of subobjects, computed by pullback.

9.2.2 Maps in allegories, and tabulations


A morphism R in an allegory A is called a map if it is entire (1 ⊆ R°R) and deterministic (RR° ⊆ 1). Another way of saying this: a map is a morphism that has a right adjoint in A, when A is considered, using the local order structure, as a 2-category. Maps in an allegory are closed under identity and composition. Thus there is a subcategory Map(A) of A, with the same objects but only the maps as morphisms. For a regular category C, there is an isomorphism of categories C ≅ Map(Rel(C)). In particular, a morphism in Map(Rel(Set)) is just an ordinary set function.
In an allegory, a morphism R : X → Y is tabulated by a pair of maps f : Z → X, g : Z → Y if gf° = R and f°f ∩ g°g = 1. An allegory is called tabular if every morphism has a tabulation. For a regular category C, the allegory Rel(C) is always tabular. On the other hand, for any tabular allegory A, the category Map(A) of maps is a locally regular category: it has pullbacks, equalizers and images that are stable under pullback. This is enough to study relations in Map(A) and, in this setting, A ≅ Rel(Map(A)).

9.2.3 Unital allegories and regular categories of maps


A unit in an allegory is an object U for which the identity is the largest morphism U → U, and such that from every other object there is an entire relation to U. An allegory with a unit is called unital. Given a tabular allegory A, the category Map(A) is a regular category (it has a terminal object) if and only if A is unital.

9.2.4 More sophisticated kinds of allegory


Additional properties of allegories can be axiomatized. Distributive allegories have a union-like operation that is
suitably well-behaved, and division allegories have a generalization of the division operation of relation algebra.
Power allegories are distributive division allegories with additional powerset-like structure. The connection between
allegories and regular categories can be developed into a connection between power allegories and toposes.

9.3 References
Peter Freyd, Andre Scedrov (1990). Categories, Allegories. Mathematical Library Vol 39. North-Holland.
ISBN 978-0-444-70368-2.

Peter Johnstone (2003). Sketches of an Elephant: A Topos Theory Compendium. Oxford Science Publications.
OUP. ISBN 0-19-852496-X.
Chapter 10

Alternating multilinear map

In mathematics, more specifically in multilinear algebra, an alternating multilinear map is a multilinear map with all arguments belonging to the same space (e.g., a bilinear form or a multilinear form) that is zero whenever any two adjacent arguments are equal.
The notion of alternatization (or alternatisation in British English) is used to derive an alternating multilinear map
from any multilinear map with all arguments belonging to the same space.

10.1 Definition
A multilinear map of the form $f : V^n \to W$ is said to be alternating if $f(x_1, \ldots, x_n) = 0$ whenever there exists $1 \leq i \leq n-1$ such that $x_i = x_{i+1}$.[1][2]

10.2 Example
In a Lie algebra, the multiplication is an alternating bilinear map called the Lie bracket.

10.3 Properties
If any distinct pair of components of an alternating multilinear map are equal, then such a map is zero:[1][3]

$f(x_1, \ldots, x_i, \ldots, x_j, \ldots, x_n) = 0$ whenever there exist $i$ and $j$ such that $i < j$ and $x_i = x_j$.

If the components of an alternating multilinear map are linearly dependent, then such a map is zero.

If any component $x_i$ of an alternating multilinear map is replaced by $x_i + c\,x_j$ for any $j \neq i$ and $c$ in the base ring R, then the value of that map is not changed.[3]

Every alternating multilinear map is antisymmetric.[4]

If n! is a unit in the base ring R, then every antisymmetric n-multilinear form is alternating.

10.4 Alternatization

Given a multilinear map of the form $f : V^n \to W$, the alternating multilinear map $g : V^n \to W$ defined by

$g(x_1, \ldots, x_n) := \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, f(x_{\sigma(1)}, \ldots, x_{\sigma(n)})$

is said to be the alternatization of $f$.
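A minimal computational sketch of this formula (assumed example, not from the text): it sums over all permutations with signs for a small bilinear map and checks that the result vanishes on repeated arguments. The helper names are hypothetical.

from itertools import permutations

def sign(perm):
    # Sign of a permutation (tuple of indices), counted by inversions.
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def alternatize(f, n):
    """Return the alternatization of an n-linear map f of n vector arguments."""
    def g(*xs):
        return sum(sign(p) * f(*(xs[i] for i in p)) for p in permutations(range(n)))
    return g

# Example: f(x, y) = x[0]*y[1] is bilinear on R^2; its alternatization is the
# determinant-like form x[0]*y[1] - y[0]*x[1], which vanishes when x == y.
f = lambda x, y: x[0] * y[1]
g = alternatize(f, 2)
print(g((1, 2), (3, 4)))   # 1*4 - 3*2 = -2
print(g((1, 2), (1, 2)))   # 0, as an alternating map must give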

Properties


The alternatization of an n-multilinear alternating map is n! times itself.

The alternatization of a symmetric map is zero.


The alternatization of a bilinear map is bilinear. Most notably, the alternatization of any cocycle is bilinear. This
fact plays a crucial role in identifying the second cohomology group of a lattice with the group of alternating
bilinear forms on a lattice.

10.5 See also


Alternating algebra

Bilinear map
Exterior algebra Alternating multilinear forms

Map (mathematics)
Multilinear algebra

Multilinear map
Multilinear form

Symmetrization

10.6 Notes
[1] Lang 2002, pp. 511–512.

[2] Bourbaki 2007, p. A III.80, 4.

[3] Dummit & Foote 2004, p. 436.

[4] Rotman 1995, p. 235.

10.7 References
Bourbaki, N. (2007). Éléments de mathématique. Algèbre. Chapitres 1 à 3 (reprint ed.). Springer.

Dummit, David S.; Foote, Richard M. (2004). Abstract Algebra (3rd ed.). Wiley.
Lang, Serge (2002). Algebra. Graduate Texts in Mathematics. 211 (revised 3rd ed.). Springer. ISBN 978-0-
387-95385-4. OCLC 48176673.
Rotman, Joseph J. (1995). An Introduction to the Theory of Groups. Graduate Texts in Mathematics. 148 (4th
ed.). Springer. ISBN 0-387-94285-8. OCLC 30028913.
Chapter 11

Analysis of Boolean functions

In mathematics and theoretical computer science, analysis of Boolean functions[1] is the study of real-valued functions on $\{0,1\}^n$ or $\{-1,1\}^n$ from a spectral perspective (such functions are sometimes known as pseudo-Boolean functions). The functions studied are often, but not always, Boolean-valued, making them Boolean functions. The area has found many applications in combinatorics, social choice theory, random graphs, and theoretical computer science, especially in hardness of approximation, property testing and PAC learning.

11.1 Basic concepts


We will mostly consider functions defined on the domain $\{-1,1\}^n$. Sometimes it is more convenient to work with the domain $\{0,1\}^n$ instead. If $f$ is defined on $\{-1,1\}^n$, then the corresponding function defined on $\{0,1\}^n$ is

$f_{01}(x_1, \ldots, x_n) = f((-1)^{x_1}, \ldots, (-1)^{x_n}).$

Similarly, for us a Boolean function is a $\{-1,1\}$-valued function, though often it is more convenient to consider $\{0,1\}$-valued functions instead.

11.1.1 Fourier expansion


Every real-valued function $f : \{-1,1\}^n \to \mathbb{R}$ has a unique expansion as a multilinear polynomial:

$f(x) = \sum_{S \subseteq [n]} \hat{f}(S) \chi_S(x), \qquad \chi_S(x) = \prod_{i \in S} x_i.$

This is the Hadamard transform of the function $f$, which is the Fourier transform in the group $\mathbb{Z}_2^n$. The coefficients $\hat{f}(S)$ are known as Fourier coefficients, and the entire sum is known as the Fourier expansion of $f$. The functions $\chi_S$ are known as Fourier characters, and they form an orthonormal basis for the space of all functions over $\{-1,1\}^n$, with respect to the inner product $\langle f, g \rangle = 2^{-n} \sum_{x \in \{-1,1\}^n} f(x) g(x)$.
The Fourier coefficients can be calculated using an inner product:

$\hat{f}(S) = \langle f, \chi_S \rangle.$

In particular, this shows that $\hat{f}(\emptyset) = \mathbb{E}[f]$. Parseval's identity states that

$\|f\|^2 = \mathbb{E}[f^2] = \sum_S \hat{f}(S)^2.$


If we skip $S = \emptyset$, then we get the variance of $f$:

$\mathrm{V}[f] = \sum_{S \neq \emptyset} \hat{f}(S)^2.$
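The following brute-force sketch (assumed example, not from the text) computes the Fourier coefficients of the 3-bit majority function from the inner-product formula and checks Parseval's identity numerically.

from itertools import product, combinations

n = 3
cube = list(product([-1, 1], repeat=n))

def maj(x):
    return 1 if sum(x) > 0 else -1

def chi(S, x):
    p = 1
    for i in S:
        p *= x[i]
    return p

def fourier_coefficient(f, S):
    # hat f(S) = 2^{-n} * sum_x f(x) chi_S(x)
    return sum(f(x) * chi(S, x) for x in cube) / 2 ** n

coeffs = {S: fourier_coefficient(maj, S)
          for k in range(n + 1) for S in combinations(range(n), k)}
print(coeffs)                                  # singletons get 1/2, {0,1,2} gets -1/2
parseval_lhs = sum(maj(x) ** 2 for x in cube) / 2 ** n
parseval_rhs = sum(c ** 2 for c in coeffs.values())
print(parseval_lhs, parseval_rhs)              # both equal 1 for a Boolean function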

11.1.2 Fourier degree and Fourier levels

The degree of a function $f : \{-1,1\}^n \to \mathbb{R}$ is the maximum $d$ such that $\hat{f}(S) \neq 0$ for some set $S$ of size $d$. In other words, the degree of $f$ is its degree as a multilinear polynomial.
It is convenient to decompose the Fourier expansion into levels: the Fourier coefficient $\hat{f}(S)$ is on level $|S|$.
The degree $d$ part of $f$ is

$f^{=d} = \sum_{|S|=d} \hat{f}(S) \chi_S.$

It is obtained from $f$ by zeroing out all Fourier coefficients not on level $d$.
We similarly define $f^{>d}$, $f^{<d}$, $f^{\geq d}$, $f^{\leq d}$.

11.1.3 Influence
The $i$'th influence of a function $f : \{-1,1\}^n \to \mathbb{R}$ can be defined in two equivalent ways:

$\mathrm{Inf}_i[f] = \mathbb{E}\left[\left(\frac{f - f^{\oplus i}}{2}\right)^2\right] = \sum_{S \ni i} \hat{f}(S)^2,$

$f^{\oplus i}(x_1, \ldots, x_n) = f(x_1, \ldots, x_{i-1}, -x_i, x_{i+1}, \ldots, x_n).$

If $f$ is Boolean then $\mathrm{Inf}_i[f]$ is the probability that flipping the $i$'th coordinate flips the value of the function:

$\mathrm{Inf}_i[f] = \Pr[f(x) \neq f^{\oplus i}(x)].$

If $\mathrm{Inf}_i[f] = 0$ then $f$ doesn't depend on the $i$'th coordinate.
The total influence of $f$ is the sum of all of its influences:

$\mathrm{Inf}[f] = \sum_{i=1}^n \mathrm{Inf}_i[f] = \sum_S |S| \hat{f}(S)^2.$

The total influence of a Boolean function is also the average sensitivity of the function. The sensitivity of a Boolean function $f$ at a given point is the number of coordinates $i$ such that if we flip the $i$'th coordinate, the value of the function changes. The average value of this quantity is exactly the total influence.
The total influence can also be defined using the discrete Laplacian of the Hamming graph, suitably normalized: $\mathrm{Inf}[f] = \langle f, Lf \rangle$.
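A short sketch (assumed example, not from the text) computing the two equivalent forms of influence for the 3-bit majority, together with the total influence; the helper names are hypothetical.

import math
from itertools import product, combinations

n = 3
cube = list(product([-1, 1], repeat=n))
f = lambda x: 1 if sum(x) > 0 else -1          # 3-bit majority

def flip(x, i):
    y = list(x); y[i] = -y[i]; return tuple(y)

def fourier(S):
    return sum(f(x) * math.prod(x[i] for i in S) for x in cube) / 2 ** n

subsets = [S for k in range(n + 1) for S in combinations(range(n), k)]

for i in range(n):
    inf_prob = sum(f(x) != f(flip(x, i)) for x in cube) / 2 ** n       # probability version
    inf_fourier = sum(fourier(S) ** 2 for S in subsets if i in S)      # spectral version
    print(i, inf_prob, inf_fourier)            # both 0.5 for each coordinate of majority

total = sum(len(S) * fourier(S) ** 2 for S in subsets)
print(total)                                   # 1.5, the average sensitivity of Maj_3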

11.1.4 Noise stability


Given 1 1 , we say that two random vectors x, y {1, 1}n are -correlated if the marginal distributions
of x, y are uniform, and E[xi yi ] = . Concretely, we can generate a pair of -correlated random variables by rst
choosing x, z {1, 1}n uniformly at random, and then choosing y according to one of the following two equivalent
rules, applied independently to each coordinate:
38 CHAPTER 11. ANALYSIS OF BOOLEAN FUNCTIONS

{ {
xi w.p., xi w.p. 1+
2 ,
yi = or yi =
zi w.p.1 . xi w.p. 1
2 .

We denote this distribution by y N (x) .


The noise stability of a function $f : \{-1,1\}^n \to \mathbb{R}$ at $\rho$ can be defined in two equivalent ways:

$\mathrm{Stab}_\rho[f] = \mathbb{E}_{x;\, y \sim N_\rho(x)}[f(x) f(y)] = \sum_{S \subseteq [n]} \rho^{|S|} \hat{f}(S)^2.$

For $0 \leq \delta \leq 1$, the noise sensitivity of $f$ at $\delta$ is

$\mathrm{NS}_\delta[f] = \frac{1}{2} - \frac{1}{2} \mathrm{Stab}_{1-2\delta}[f].$

If $f$ is Boolean, then this is the probability that the value of $f$ changes if we flip each coordinate with probability $\delta$, independently.
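A sketch (assumed example, not from the text): the noise stability of the 3-bit majority at $\rho = 0.5$, computed once exactly from the Fourier formula and once by sampling $\rho$-correlated pairs with the coordinate-wise flipping rule above.

import math, random
from itertools import product, combinations

n, rho = 3, 0.5
cube = list(product([-1, 1], repeat=n))
f = lambda x: 1 if sum(x) > 0 else -1

def fourier(S):
    return sum(f(x) * math.prod(x[i] for i in S) for x in cube) / 2 ** n

exact = sum(rho ** len(S) * fourier(S) ** 2
            for k in range(n + 1) for S in combinations(range(n), k))

def correlated_pair():
    x = [random.choice([-1, 1]) for _ in range(n)]
    # keep x_i with probability (1+rho)/2, flip it otherwise
    y = [xi if random.random() < (1 + rho) / 2 else -xi for xi in x]
    return tuple(x), tuple(y)

samples = 200_000
empirical = sum(f(x) * f(y) for x, y in (correlated_pair() for _ in range(samples))) / samples
print(exact, empirical)    # both close to 0.40625 for Maj_3 at rho = 0.5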

11.1.5 Noise operator


The noise operator $T_\rho$ is an operator taking a function $f : \{-1,1\}^n \to \mathbb{R}$ and returning another function $T_\rho f : \{-1,1\}^n \to \mathbb{R}$ given by

$(T_\rho f)(x) = \mathbb{E}_{y \sim N_\rho(x)}[f(y)] = \sum_{S \subseteq [n]} \rho^{|S|} \hat{f}(S) \chi_S(x).$

When $\rho > 0$, the noise operator can also be defined using a continuous-time Markov chain in which each bit is flipped independently with rate 1. The operator $T_\rho$ corresponds to running this Markov chain for $\frac{1}{2} \log \frac{1}{\rho}$ steps starting at $x$, and taking the average value of $f$ at the final state. This Markov chain is generated by the Laplacian of the Hamming graph, and this relates total influence to the noise operator.
Noise stability can be defined in terms of the noise operator: $\mathrm{Stab}_\rho[f] = \langle f, T_\rho f \rangle$.

11.1.6 Hypercontractivity
For $1 \leq q < \infty$, the $L_q$-norm of a function $f : \{-1,1\}^n \to \mathbb{R}$ is defined by

$\|f\|_q = \sqrt[q]{\mathbb{E}[|f|^q]}.$

We also define $\|f\|_\infty = \max_{x \in \{-1,1\}^n} |f(x)|$.
The hypercontractivity theorem states that for any $q > 2$ and $q' = 1/(1 - 1/q)$, whenever $\rho \leq 1/\sqrt{q-1}$,

$\|T_\rho f\|_q \leq \|f\|_2 \quad \text{and} \quad \|T_\rho f\|_2 \leq \|f\|_{q'}.$

Hypercontractivity is closely related to the logarithmic Sobolev inequalities of functional analysis.[2]
A similar result for $q < 2$ is known as reverse hypercontractivity.[3]

11.1.7 p-Biased analysis


In many situations the input to the function is not uniformly distributed over $\{-1,1\}^n$, but instead has a bias toward $-1$ or $1$. In these situations it is customary to consider functions over the domain $\{0,1\}^n$. For $0 < p < 1$, the $p$-biased measure $\mu_p$ is given by

$\mu_p(x) = p^{\sum_i x_i} (1-p)^{\sum_i (1-x_i)}.$

This measure can be generated by choosing each coordinate independently to be $1$ with probability $p$ and $0$ with probability $1-p$.
The classical Fourier characters are no longer orthogonal with respect to this measure. Instead, we use the following characters:

$\omega_S(x) = \left(\sqrt{\frac{p}{1-p}}\right)^{|\{i \in S : x_i = 0\}|} \left(-\sqrt{\frac{1-p}{p}}\right)^{|\{i \in S : x_i = 1\}|}.$

The $p$-biased Fourier expansion of $f$ is the expansion of $f$ as a linear combination of $p$-biased characters:

$f = \sum_{S \subseteq [n]} \hat{f}(S) \omega_S.$

We can extend the definitions of influence and the noise operator to the $p$-biased setting by using their spectral definitions.

Influence

The $i$'th influence is given by

$\mathrm{Inf}_i[f] = \sum_{S \ni i} \hat{f}(S)^2 = p(1-p)\, \mathbb{E}[(f - f^{\oplus i})^2].$

The total influence is the sum of the individual influences:

$\mathrm{Inf}[f] = \sum_{i=1}^n \mathrm{Inf}_i[f].$

Noise operator

A pair of $\rho$-correlated random variables can be obtained by choosing $x, z \sim \mu_p$ independently and $y \sim N_\rho(x)$, where $N_\rho$ is given by

$y_i = \begin{cases} x_i & \text{w.p. } \rho, \\ z_i & \text{w.p. } 1-\rho. \end{cases}$

The noise operator is then given by

$(T_\rho f)(x) = \sum_{S \subseteq [n]} \rho^{|S|} \hat{f}(S) \omega_S(x) = \mathbb{E}_{y \sim N_\rho(x)}[f(y)].$

Using this we can define the noise stability and the noise sensitivity, as before.

Russo–Margulis formula

The Russo–Margulis formula states that for monotone Boolean functions $f : \{0,1\}^n \to \{0,1\}$,

$\frac{d}{dp} \mathbb{E}_{x \sim \mu_p}[f(x)] = \frac{\mathrm{Inf}[f]}{p(1-p)} = \sum_{i=1}^n \Pr[f \neq f^{\oplus i}].$

Both the influence and the probabilities are taken with respect to $\mu_p$, and on the right-hand side we have the average sensitivity of $f$. If we think of $f$ as a property, then the formula states that as $p$ varies, the derivative of the probability that $f$ occurs at $p$ equals the average sensitivity at $p$.
The Russo–Margulis formula is key for proving sharp threshold theorems such as Friedgut's.

11.1.8 Gaussian space


One of the deepest results in the area, the invariance principle, connects the distribution of functions on the Boolean cube $\{-1,1\}^n$ to their distribution on Gaussian space, which is the space $\mathbb{R}^n$ endowed with the standard $n$-dimensional Gaussian measure.
Many of the basic concepts of Fourier analysis on the Boolean cube have counterparts in Gaussian space:

The counterpart of the Fourier expansion in Gaussian space is the Hermite expansion, which is an expansion to an infinite sum (converging in $L^2$) of multivariate Hermite polynomials.
The counterpart of total influence or average sensitivity for the indicator function of a set is Gaussian surface area, which is the Minkowski content of the boundary of the set.
The counterpart of the noise operator is the Ornstein–Uhlenbeck operator (related to the Mehler transform), given by $(U_\rho f)(x) = \mathbb{E}_{z \sim N(0,I)}[f(\rho x + \sqrt{1-\rho^2}\, z)]$, or alternatively by $(U_\rho f)(x) = \mathbb{E}[f(y)]$, where $x, y$ is a pair of $\rho$-correlated standard Gaussians.
Hypercontractivity holds (with appropriate parameters) in Gaussian space as well.

Gaussian space is more symmetric than the Boolean cube (for example, it is rotation invariant), and supports continuous arguments which may be harder to get through in the discrete setting of the Boolean cube. The invariance principle links the two settings, and allows deducing results on the Boolean cube from results on Gaussian space.

11.2 Basic results

11.2.1 Friedgut–Kalai–Naor theorem

If $f : \{-1,1\}^n \to \{-1,1\}$ has degree at most 1, then $f$ is either constant, equal to a coordinate, or equal to the negation of a coordinate. In particular, $f$ is a dictatorship: a function depending on at most one coordinate.
The Friedgut–Kalai–Naor theorem,[4] also known as the FKN theorem, states that if $f$ almost has degree 1 then it is close to a dictatorship. Quantitatively, if $f : \{-1,1\}^n \to \{-1,1\}$ and $\|f^{>1}\|^2 < \epsilon$, then $f$ is $O(\epsilon)$-close to a dictatorship, that is, $\|f - g\|^2 = O(\epsilon)$ for some Boolean dictatorship $g$, or equivalently, $\Pr[f \neq g] = O(\epsilon)$ for some Boolean dictatorship $g$.
Similarly, a Boolean function of degree at most $d$ depends on at most $d 2^{d-1}$ coordinates, making it a junta (a function depending on a constant number of coordinates). The Kindler–Safra theorem[5] generalizes the Friedgut–Kalai–Naor theorem to this setting. It states that if $f : \{-1,1\}^n \to \{-1,1\}$ satisfies $\|f^{>d}\|^2 < \epsilon$ then $f$ is $O(\epsilon)$-close to a Boolean function of degree at most $d$.

11.2.2 Kahn–Kalai–Linial theorem

The Poincaré inequality for the Boolean cube (which follows from formulas appearing above) states that for a function $f : \{-1,1\}^n \to \mathbb{R}$,

$\mathrm{V}[f] \leq \mathrm{Inf}[f] \leq \deg f \cdot \mathrm{V}[f].$

This implies that $\max_i \mathrm{Inf}_i[f] \geq \frac{\mathrm{V}[f]}{n}$.
The Kahn–Kalai–Linial theorem,[6] also known as the KKL theorem, states that if $f$ is Boolean then

$\max_i \mathrm{Inf}_i[f] = \Omega\!\left(\mathrm{V}[f]\,\frac{\log n}{n}\right).$

The bound given by the Kahn–Kalai–Linial theorem is tight, and is achieved by the Tribes function of Ben-Or and Linial:[7]

$(x_{1,1} \wedge \cdots \wedge x_{1,w}) \vee \cdots \vee (x_{2^w,1} \wedge \cdots \wedge x_{2^w,w}).$

The Kahn–Kalai–Linial theorem was one of the first results in the area, and was the one introducing hypercontractivity into the context of Boolean functions.

11.2.3 Friedgut's junta theorem

If $f : \{-1,1\}^n \to \{-1,1\}$ is an $M$-junta (a function depending on at most $M$ coordinates) then $\mathrm{Inf}[f] \leq M$ according to the Poincaré inequality.
Friedgut's theorem[8] is a converse to this result. It states that for any $\epsilon > 0$, the function $f$ is $\epsilon$-close to a Boolean junta depending on $\exp(\mathrm{Inf}[f]/\epsilon)$ coordinates.
Combined with the Russo–Margulis lemma, Friedgut's junta theorem implies that for every $p$, every monotone function is close to a junta with respect to $\mu_q$ for some $q \approx p$.

11.2.4 Invariance principle

The invariance principle[9] generalizes the Berry–Esseen theorem to non-linear functions.
The Berry–Esseen theorem states (among other things) that if $f = \sum_{i=1}^n c_i x_i$ and no $c_i$ is too large compared to the rest, then the distribution of $f$ over $\{-1,1\}^n$ is close to a normal distribution with the same mean and variance.
The invariance principle (in a special case) informally states that if $f$ is a multilinear polynomial of bounded degree over $x_1, \ldots, x_n$ and all influences of $f$ are small, then the distribution of $f$ under the uniform measure over $\{-1,1\}^n$ is close to its distribution in Gaussian space.

More formally, let $\psi$ be a univariate Lipschitz function, let $f = \sum_{S \subseteq [n]} \hat{f}(S) \chi_S$, let $k = \deg f$, and let $\epsilon = \max_i \sum_{S \ni i} \hat{f}(S)^2$. Suppose that $\sum_{S \neq \emptyset} \hat{f}(S)^2 \leq 1$. Then

$\left| \mathbb{E}_{x \in \{-1,1\}^n}[\psi(f(x))] - \mathbb{E}_{g \sim N(0,I)}[\psi(f(g))] \right| = O(k 9^k \epsilon).$

By choosing appropriate $\psi$, this implies that the distributions of $f$ under both measures are close in CDF distance, which is given by $\sup_t \left| \Pr[f(x) < t] - \Pr[f(g) < t] \right|$.
The invariance principle was the key ingredient in the original proof of the Majority is Stablest theorem.

11.3 Some applications

11.3.1 Linearity testing

A Boolean function $f : \{-1,1\}^n \to \{-1,1\}$ is linear if it satisfies $f(xy) = f(x) f(y)$, where $xy = (x_1 y_1, \ldots, x_n y_n)$. It is not hard to show that the Boolean linear functions are exactly the characters $\chi_S$.
In property testing we want to test whether a given function is linear. It is natural to try the following test: choose $x, y \in \{-1,1\}^n$ uniformly at random, and check that $f(xy) = f(x) f(y)$. If $f$ is linear then it always passes the test. Blum, Luby and Rubinfeld[10] showed that if the test passes with probability $1 - \epsilon$ then $f$ is $O(\epsilon)$-close to a Fourier character. Their proof was combinatorial.

Bellare et al.[11] gave an extremely simple Fourier-analytic proof, that also shows that if the test succeeds with probability $1/2 + \epsilon$, then $f$ is correlated with a Fourier character. Their proof relies on the following formula for the success probability of the test:

$\frac{1}{2} + \frac{1}{2} \sum_{S \subseteq [n]} \hat{f}(S)^3.$
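A simulation sketch (assumed example, not from the text): it runs the Blum–Luby–Rubinfeld test on a slightly corrupted dictator function and compares the empirical pass rate with the Fourier expression above. The choice of corrupted function is arbitrary and hypothetical.

import math, random
from itertools import product, combinations

n = 3
cube = list(product([-1, 1], repeat=n))

def f(x):
    # A corrupted dictator: equal to x_0 except on the single input (1, 1, 1).
    return -x[0] if x == (1, 1, 1) else x[0]

def fourier(S):
    return sum(f(x) * math.prod(x[i] for i in S) for x in cube) / 2 ** n

formula = 0.5 + 0.5 * sum(fourier(S) ** 3
                          for k in range(n + 1) for S in combinations(range(n), k))

trials = 200_000
passes = 0
for _ in range(trials):
    x = random.choice(cube)
    y = random.choice(cube)
    xy = tuple(a * b for a, b in zip(x, y))
    passes += (f(xy) == f(x) * f(y))
print(formula, passes / trials)   # the empirical pass rate matches the formula (about 0.656)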

11.3.2 Arrow's theorem

Arrow's impossibility theorem states that for three and more candidates, the only unanimous voting rule for which there is always a Condorcet winner is a dictatorship.
The usual proof of Arrow's theorem is combinatorial. Kalai[12] gave an alternative proof of this result in the case of three candidates using Fourier analysis. If $f : \{-1,1\}^n \to \{-1,1\}$ is the rule that assigns a winner among two candidates given their relative orders in the votes, then the probability that there is a Condorcet winner given a uniformly random vote is $\frac{3}{4} - \frac{3}{4} \mathrm{Stab}_{-1/3}[f]$, from which the theorem easily follows.
The FKN theorem implies that if $f$ is a rule for which there is almost always a Condorcet winner, then $f$ is close to a dictatorship.

11.3.3 Sharp thresholds


A classical result in the theory of random graphs states that the probability that a $G(n,p)$ random graph is connected tends to $e^{-e^{-c}}$ if $p = \frac{\log n + c}{n}$. This is an example of a sharp threshold: the width of the threshold window, which is $O(1/n)$, is asymptotically smaller than the threshold itself, which is roughly $\frac{\log n}{n}$. In contrast, the probability that a $G(n,p)$ graph contains a triangle tends to $e^{-c^3/6}$ when $p = \frac{c}{n}$. Here both the threshold window and the threshold itself are $\Theta(1/n)$, and so this is a coarse threshold.
Friedgut's sharp threshold theorem[13] states, roughly speaking, that a monotone graph property (a graph property is a property which doesn't depend on the names of the vertices) has a sharp threshold unless it is correlated with the appearance of small subgraphs. This theorem has been widely applied to analyze random graphs and percolation.
On a related note, the KKL theorem implies that the width of the threshold window is always at most $O(1/\log n)$.[14]

11.3.4 Majority is Stablest


Let $\mathrm{Maj}_n : \{-1,1\}^n \to \{-1,1\}$ denote the majority function on $n$ coordinates. Sheppard's formula gives the asymptotic noise stability of majority:

$\mathrm{Stab}_\rho[\mathrm{Maj}_n] \to 1 - \frac{2}{\pi} \arccos \rho.$

This is related to the probability that if we choose $x \in \{-1,1\}^n$ uniformly at random and form $y \in \{-1,1\}^n$ by flipping each bit of $x$ with probability $\frac{1-\rho}{2}$, then the majority stays the same:

$\mathrm{Stab}_\rho[\mathrm{Maj}_n] = 2 \Pr[\mathrm{Maj}_n(x) = \mathrm{Maj}_n(y)] - 1.$

There are Boolean functions with larger noise stability. For example, a dictatorship $x_i$ has noise stability $\rho$.
The Majority is Stablest theorem states, informally, that the only functions having noise stability larger than majority have influential coordinates. Formally, for every $\epsilon > 0$ there exists $\delta > 0$ such that if $f : \{-1,1\}^n \to \{-1,1\}$ has expectation zero and $\max_i \mathrm{Inf}_i[f] \leq \delta$, then $\mathrm{Stab}_\rho[f] \leq 1 - \frac{2}{\pi} \arccos \rho + \epsilon$.
The first proof of this theorem used the invariance principle in conjunction with an isoperimetric theorem of Borell in Gaussian space; since then more direct proofs were devised.
Majority is Stablest implies that the Goemans–Williamson approximation algorithm for MAX-CUT is optimal, assuming the unique games conjecture. This implication, due to Khot et al.,[15] was the impetus behind proving the theorem.

11.4 References
[1] O'Donnell, Ryan (2014). Analysis of Boolean functions. Cambridge University Press. ISBN 978-1-107-03832-5.

[2] Diaconis, Persi; Saloff-Coste, Laurent (1996). Logarithmic Sobolev inequalities for finite Markov chains. Ann. Appl. Probab. 6 (3): 695–750. doi:10.1214/aoap/1034968224.

[3] Mossel, Elchanan; Oleszkiewicz, Krzysztof; Sen, Arnab (2013). On reverse hypercontractivity. GAFA. 23 (3): 1062–1097. doi:10.1007/s00039-013-0229-4.

[4] Friedgut, Ehud; Kalai, Gil; Naor, Assaf (2002). Boolean functions whose Fourier transform is concentrated on the first two levels. Adv. Appl. Math. 29 (3): 427–437. doi:10.1016/S0196-8858(02)00024-6.

[5] Kindler, Guy (2002). 16. Property testing, PCP, and juntas (Thesis). Tel Aviv University.

[6] Kahn, Jeff; Kalai, Gil; Linial, Nati (1988). The influence of variables on Boolean functions. Proc. 29th Symp. on Foundations of Computer Science. SFCS'88. White Plains: IEEE. pp. 68–80. doi:10.1109/SFCS.1988.2192.

[7] Ben-Or, Michael; Linial, Nathan (1985). Collective coin flipping, robust voting schemes and minima of Banzhaf values. Proc. 26th Symp. on Foundations of Computer Science. SFCS'85. Portland, Oregon: IEEE. pp. 408–416. doi:10.1109/SFCS.1985.15.

[8] Friedgut, Ehud (1998). Boolean functions with low average sensitivity depend on few coordinates. Combinatorica. 18 (1): 474–483. doi:10.1007/PL00009809.

[9] Mossel, Elchanan; O'Donnell, Ryan; Oleszkiewicz, Krzysztof (2010). Noise stability of functions with low influences: Invariance and optimality. Ann. Math. 171 (1): 295–341. doi:10.4007/annals.2010.171.295.

[10] Blum, Manuel; Luby, Michael; Rubinfeld, Ronitt (1993). Self-testing/correcting with applications to numerical problems. J. Comput. Syst. Sci. 47 (3): 549–595. doi:10.1016/0022-0000(93)90044-W.

[11] Bellare, Mihir; Coppersmith, Don; Håstad, Johan; Kiwi, Marcos; Sudan, Madhu (1995). Linearity testing in characteristic two. Proc. 36th Symp. on Foundations of Computer Science. FOCS'95.

[12] Kalai, Gil (2002). A Fourier-theoretic perspective on the Condorcet paradox and Arrow's theorem. Adv. Appl. Math. 29 (3): 412–426. doi:10.1016/S0196-8858(02)00023-4.

[13] Friedgut, Ehud (1999). Sharp thresholds of graph properties and the k-SAT problem. J. Am. Math. Soc. 12 (4): 1017–1054. doi:10.1090/S0894-0347-99-00305-7.

[14] Friedgut, Ehud; Kalai, Gil (1996). Every monotone graph property has a sharp threshold. Proc. Am. Math. Soc. 124 (10): 2993–3002. doi:10.1090/S0002-9939-96-03732-X.

[15] Khot, Subhash; Kindler, Guy; Mossel, Elchanan; O'Donnell, Ryan (2007), Optimal inapproximability results for MAX-CUT and other two-variable CSPs? (PDF), SIAM Journal on Computing, 37 (1): 319–357, doi:10.1137/S0097539705447372
Chapter 12

Distributive property

Distributivity redirects here. It is not to be confused with Distributivism.


In abstract algebra and formal logic, the distributive property of binary operations generalizes the distributive law from elementary algebra. In propositional logic, distribution refers to two valid rules of replacement. The rules allow one to reformulate conjunctions and disjunctions within logical proofs.

Visualization of the distributive law for positive numbers: ab + ac = a(b + c).
For example, in arithmetic:

2 · (1 + 3) = (2 · 1) + (2 · 3), but 2 / (1 + 3) ≠ (2 / 1) + (2 / 3).

In the left-hand side of the first equation, the 2 multiplies the sum of 1 and 3; on the right-hand side, it multiplies the 1 and the 3 individually, with the products added afterwards. Because these give the same final answer (8), it is said that multiplication by 2 distributes over addition of 1 and 3. Since one could have put any real numbers in place of 2, 1, and 3 above, and still have obtained a true equation, we say that multiplication of real numbers distributes over addition of real numbers.

12.1 Definition
Given a set S and two binary operators ∗ and + on S, we say that the operation ∗

is left-distributive over + if, given any elements x, y, and z of S,

x ∗ (y + z) = (x ∗ y) + (x ∗ z),

is right-distributive over + if, given any elements x, y, and z of S,

(y + z) ∗ x = (y ∗ x) + (z ∗ x),

is distributive over + if it is left- and right-distributive.[1]

Notice that when ∗ is commutative, the three conditions above are logically equivalent.
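A brute-force sketch of this definition (assumed example, not from the article): the helper functions below check left- and right-distributivity of one operation over another on a small finite carrier set.

from itertools import product

def is_left_distributive(mul, add, S):
    return all(mul(x, add(y, z)) == add(mul(x, y), mul(x, z)) for x, y, z in product(S, repeat=3))

def is_right_distributive(mul, add, S):
    return all(mul(add(y, z), x) == add(mul(y, x), mul(z, x)) for x, y, z in product(S, repeat=3))

S = range(-3, 4)
print(is_left_distributive(lambda a, b: a * b, lambda a, b: a + b, S))   # True: * over +
print(is_right_distributive(lambda a, b: a * b, lambda a, b: a + b, S))  # True: * over +
print(is_left_distributive(max, min, S), is_right_distributive(max, min, S))  # True, True (a lattice)
print(is_left_distributive(lambda a, b: a - b, lambda a, b: a + b, S))   # False: - does not distribute over +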

12.2 Meaning
The operators used for examples in this section are the binary operations of addition ( + ) and multiplication ( · ) of numbers.
There is a distinction between left-distributivity and right-distributivity:

a · (b ± c) = a · b ± a · c (left-distributive)
(a ± b) · c = a · c ± b · c (right-distributive)

In either case, the distributive property can be described in words as:

To multiply a sum (or difference) by a factor, each summand (or minuend and subtrahend) is multiplied by this factor and the resulting products are added (or subtracted).

If the operation outside the parentheses (in this case, the multiplication) is commutative, then left-distributivity implies right-distributivity and vice versa.
One example of an operation that is only right-distributive is division, which is not commutative:

(a ± b) ÷ c = a ÷ c ± b ÷ c

In this case, left-distributivity does not apply:

a ÷ (b ± c) ≠ a ÷ b ± a ÷ c

The distributive laws are among the axioms for rings and fields. Examples of structures in which two operations are mutually related to each other by the distributive law are Boolean algebras such as the algebra of sets or the switching algebra. There are also combinations of operations that are not mutually distributive over each other; for example, addition is not distributive over multiplication.
Multiplying sums can be put into words as follows: when a sum is multiplied by a sum, multiply each summand of one sum with each summand of the other sum (keeping track of signs), and then add up all of the resulting products.

12.3 Examples

12.3.1 Real numbers


In the following examples, the use of the distributive law on the set of real numbers ℝ is illustrated. When multiplication is mentioned in elementary mathematics, it usually refers to this kind of multiplication. From the point of view of algebra, the real numbers form a field, which ensures the validity of the distributive law.

First example (mental and written multiplication)

During mental arithmetic, distributivity is often used unconsciously:

6 · 16 = 6 · (10 + 6) = 6 · 10 + 6 · 6 = 60 + 36 = 96

Thus, to calculate 6 · 16 in your head, you first multiply 6 · 10 and 6 · 6 and add the intermediate results. Written multiplication is also based on the distributive law.

Second example (with variables)

$3a^2 b \cdot (4a - 5b) = 3a^2 b \cdot 4a - 3a^2 b \cdot 5b = 12a^3 b - 15a^2 b^2$

Third example (with two sums)

$(a + b) \cdot (a - b) = a \cdot (a - b) + b \cdot (a - b) = a^2 - ab + ba - b^2 = a^2 - b^2$
$= (a + b) \cdot a - (a + b) \cdot b = a^2 + ba - ab - b^2 = a^2 - b^2$

Here the distributive law was applied twice, and it does not matter which bracket is first multiplied out.

Fourth example Here the distributive law is applied the other way around compared to the previous examples. Consider

$12a^3 b^2 - 30a^4 bc + 18a^2 b^3 c^2.$

Since the factor $6a^2 b$ occurs in all summands, it can be factored out. That is, due to the distributive law one obtains

$12a^3 b^2 - 30a^4 bc + 18a^2 b^3 c^2 = 6a^2 b(2ab - 5a^2 c + 3b^2 c^2).$

12.3.2 Matrices
The distributive law is valid for matrix multiplication. More precisely,

(A + B) · C = A · C + B · C

for all l × m matrices A, B and m × n matrices C, as well as

A · (B + C) = A · B + A · C

for all l × m matrices A and m × n matrices B, C. Because the commutative property does not hold for matrix multiplication, the second law does not follow from the first law. In this case, they are two different laws.
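A quick numerical check (assumed example, not from the article) of both matrix distributive laws using NumPy. Integer matrices are used so the comparison is exact.

import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3))   # l x m
B = rng.integers(-5, 5, size=(2, 3))   # l x m
C = rng.integers(-5, 5, size=(3, 4))   # m x n
D = rng.integers(-5, 5, size=(3, 4))   # m x n

print(np.array_equal((A + B) @ C, A @ C + B @ C))   # right distributivity over +
print(np.array_equal(A @ (C + D), A @ C + A @ D))   # left distributivity over +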

12.3.3 Other examples


1. Multiplication of ordinal numbers, in contrast, is only left-distributive, not right-distributive.
2. The cross product is left- and right-distributive over vector addition, though not commutative.
3. The union of sets is distributive over intersection, and intersection is distributive over union.
4. Logical disjunction (or) is distributive over logical conjunction (and), and vice versa.
5. For real numbers (and for any totally ordered set), the maximum operation is distributive over the minimum
operation, and vice versa: max(a, min(b, c)) = min(max(a, b), max(a, c)) and min(a, max(b, c)) = max(min(a,
b), min(a, c)).
6. For integers, the greatest common divisor is distributive over the least common multiple, and vice versa: gcd(a,
lcm(b, c)) = lcm(gcd(a, b), gcd(a, c)) and lcm(a, gcd(b, c)) = gcd(lcm(a, b), lcm(a, c)).
7. For real numbers, addition distributes over the maximum operation, and also over the minimum operation: a
+ max(b, c) = max(a + b, a + c) and a + min(b, c) = min(a + b, a + c).
8. For binomial multiplication, distribution is sometimes referred to as the FOIL Method[2] (First terms ac, Outer
ad, Inner bc, and Last bd) such as: (a + b) * (c + d) = ac + ad + bc + bd.
9. Polynomial multiplication is similar to that for binomials: (a + b) * (c + d + e) = ac + ad + ae + bc + bd + be.
10. Complex number multiplication is distributive: u(v + w) = uv + uw, (u + v)w = uw + vw

12.4 Propositional logic

12.4.1 Rule of replacement


In standard truth-functional propositional logic, distribution[3][4] in logical proofs uses two valid rules of replacement to expand individual occurrences of certain logical connectives, within some formula, into separate applications of those connectives across subformulas of the given formula. The rules are:

(P ∧ (Q ∨ R)) ⇔ ((P ∧ Q) ∨ (P ∧ R))

and

(P ∨ (Q ∧ R)) ⇔ ((P ∨ Q) ∧ (P ∨ R))

where "⇔", also written ↔, is a metalogical symbol representing "can be replaced in a proof with" or "is logically equivalent to".

12.4.2 Truth functional connectives


Distributivity is a property of some logical connectives of truth-functional propositional logic. The following logical equivalences demonstrate that distributivity is a property of particular connectives. The following are truth-functional tautologies.

Distribution of conjunction over conjunction: (P ∧ (Q ∧ R)) ↔ ((P ∧ Q) ∧ (P ∧ R))

Distribution of conjunction over disjunction: (P ∧ (Q ∨ R)) ↔ ((P ∧ Q) ∨ (P ∧ R))

Distribution of disjunction over conjunction: (P ∨ (Q ∧ R)) ↔ ((P ∨ Q) ∧ (P ∨ R))

Distribution of disjunction over disjunction: (P ∨ (Q ∨ R)) ↔ ((P ∨ Q) ∨ (P ∨ R))

Distribution of implication: (P → (Q → R)) ↔ ((P → Q) → (P → R))

Distribution of implication over equivalence: (P → (Q ↔ R)) ↔ ((P → Q) ↔ (P → R))

Distribution of disjunction over equivalence: (P ∨ (Q ↔ R)) ↔ ((P ∨ Q) ↔ (P ∨ R))

Double distribution:
((P ∧ Q) ∨ (R ∧ S)) ↔ (((P ∨ R) ∧ (P ∨ S)) ∧ ((Q ∨ R) ∧ (Q ∨ S)))
((P ∨ Q) ∧ (R ∨ S)) ↔ (((P ∧ R) ∨ (P ∧ S)) ∨ ((Q ∧ R) ∨ (Q ∧ S)))
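A truth-table sketch (not from the text) verifying two of the distribution tautologies above by enumerating all assignments to P, Q, R; the helper names are hypothetical.

from itertools import product

implies = lambda a, b: (not a) or b
iff     = lambda a, b: a == b

conj_over_disj = all(
    iff(p and (q or r), (p and q) or (p and r))
    for p, q, r in product([False, True], repeat=3))

impl_distribution = all(
    iff(implies(p, implies(q, r)), implies(implies(p, q), implies(p, r)))
    for p, q, r in product([False, True], repeat=3))

print(conj_over_disj, impl_distribution)   # True, True: both are tautologies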

12.5 Distributivity and rounding


In practice, the distributive property of multiplication (and division) over addition may appear to be compromised or lost because of the limitations of arithmetic precision. For example, the identity ⅓ + ⅓ + ⅓ = (1 + 1 + 1) / 3 appears to fail if the addition is conducted in decimal arithmetic; however, if many significant digits are used, the calculation will result in a closer approximation to the correct results. For example, if the arithmetical calculation takes the form 0.33333 + 0.33333 + 0.33333 = 0.99999 ≈ 1, this result is a closer approximation than if fewer significant digits had been used. Even when fractional numbers can be represented exactly in arithmetical form, errors will be introduced if those arithmetical values are rounded or truncated. For example, buying two books, each priced at £14.99 before a tax of 17.5%, in two separate transactions will actually save £0.01, over buying them together: £14.99 × 1.175 = £17.61 to the nearest £0.01, giving a total expenditure of £35.22, but £29.98 × 1.175 = £35.23. Methods such as banker's rounding may help in some cases, as may increasing the precision used, but ultimately some calculation errors are inevitable.
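A small sketch of the rounding effect described above (not from the text), using Python's decimal module with round-half-even ("banker's") rounding to two places.

from decimal import Decimal, ROUND_HALF_EVEN

price, rate = Decimal("14.99"), Decimal("1.175")
penny = Decimal("0.01")

separately = 2 * (price * rate).quantize(penny, rounding=ROUND_HALF_EVEN)   # tax rounded per book
together   = (2 * price * rate).quantize(penny, rounding=ROUND_HALF_EVEN)   # tax rounded once
print(separately, together)        # 35.22 versus 35.23
print(together - separately)       # 0.01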

12.6 Distributivity in rings


Distributivity is most commonly found in rings and distributive lattices.
A ring has two binary operations (commonly called "+" and "·"), and one of the requirements of a ring is that · must distribute over +. Most kinds of numbers (example 1) and matrices (example 4) form rings. A lattice is another kind of algebraic structure with two binary operations, ∧ and ∨. If either of these operations (say ∧) distributes over the other (∨), then ∨ must also distribute over ∧, and the lattice is called distributive. See also the article on distributivity (order theory).
Examples 4 and 5 are Boolean algebras, which can be interpreted either as a special kind of ring (a Boolean ring) or a special kind of distributive lattice (a Boolean lattice). Each interpretation is responsible for different distributive laws in the Boolean algebra. Examples 6 and 7 are distributive lattices which are not Boolean algebras.
Failure of one of the two distributive laws brings about near-rings and near-fields instead of rings and division rings respectively. The operations are usually configured to have the near-ring or near-field distributive on the right but not on the left.
Rings and distributive lattices are both special kinds of rigs, certain generalizations of rings. Those numbers in example 1 that don't form rings at least form rigs. Near-rigs are a further generalization of rigs that are left-distributive but not right-distributive; example 2 is a near-rig.

12.7 Generalizations of distributivity


In several mathematical areas, generalized distributivity laws are considered. This may involve the weakening of the above conditions or the extension to infinitary operations. Especially in order theory one finds numerous important variants of distributivity, some of which include infinitary operations, such as the infinite distributive law; others being defined in the presence of only one binary operation; the according definitions and their relations are given in the article distributivity (order theory). This also includes the notion of a completely distributive lattice.
In the presence of an ordering relation, one can also weaken the above equalities by replacing = by either ≤ or ≥. Naturally, this will lead to meaningful concepts only in some situations. An application of this principle is the notion of sub-distributivity as explained in the article on interval arithmetic.
In category theory, if (S, μ, ν) and (S′, μ′, ν′) are monads on a category C, a distributive law S.S′ → S′.S is a natural transformation λ : S.S′ → S′.S such that (S′, λ) is a lax map of monads S → S and (S, λ) is a colax map of monads S′ → S′. This is exactly the data needed to define a monad structure on S′.S: the multiplication map is S′μ.μ′S².S′λS and the unit map is η′S.η. See: distributive law between monads.
A generalized distributive law has also been proposed in the area of information theory.

12.7.1 Notions of antidistributivity


The ubiquitous identity that relates inverses to the binary operation in any group, namely $(xy)^{-1} = y^{-1}x^{-1}$, which is taken as an axiom in the more general context of a semigroup with involution, has sometimes been called an antidistributive property (of inversion as a unary operation).[5]
In the context of a near-ring, which removes the commutativity of the additively written group and assumes only one-sided distributivity, one can speak of (two-sided) distributive elements but also of antidistributive elements. The latter reverse the order of (the non-commutative) addition; assuming a left-nearring (i.e., one in which all elements distribute when multiplied on the left), then an antidistributive element a reverses the order of addition when multiplied to the right: (x + y)a = ya + xa.[6]
In the study of propositional logic and Boolean algebra, the term antidistributive law is sometimes used to denote the interchange between conjunction and disjunction when implication factors over them:[7]

(a ∨ b) ⇒ c ≡ (a ⇒ c) ∧ (b ⇒ c)

(a ∧ b) ⇒ c ≡ (a ⇒ c) ∨ (b ⇒ c)

These two tautologies are a direct consequence of the duality in De Morgan's laws.

12.8 Notes
[1] Distributivity of Binary Operations from Mathonline

[2] Kim Steward (2011) Multiplying Polynomials from Virtual Math Lab at West Texas A&M University

[3] Elliott Mendelson (1964) Introduction to Mathematical Logic, page 21, D. Van Nostrand Company

[4] Alfred Tarski (1941) Introduction to Logic, page 52, Oxford University Press

[5] Chris Brink; Wolfram Kahl; Gunther Schmidt (1997). Relational Methods in Computer Science. Springer. p. 4. ISBN
978-3-211-82971-4.

[6] Celestina Cotti Ferrero; Giovanni Ferrero (2002). Nearrings: Some Developments Linked to Semigroups and Groups.
Kluwer Academic Publishers. pp. 62 and 67. ISBN 978-1-4613-0267-4.

[7] Eric C.R. Hehner (1993). A Practical Theory of Programming. Springer Science & Business Media. p. 230. ISBN
978-1-4419-8596-5.

12.9 External links


A demonstration of the Distributive Law for integer arithmetic (from cut-the-knot)
Chapter 13

Associative property

This article is about the associative property in mathematics. For associativity in the central processing unit memory
cache, see CPU cache Associativity. For associativity in programming languages, see operator associativity.
Associative and non-associative redirect here. For associative and non-associative learning, see Learning Types.

In mathematics, the associative property[1] is a property of some binary operations. In propositional logic, associa-
tivity is a valid rule of replacement for expressions in logical proofs.
Within an expression containing two or more occurrences in a row of the same associative operator, the order in
which the operations are performed does not matter as long as the sequence of the operands is not changed. That is,
rearranging the parentheses in such an expression will not change its value. Consider the following equations:

(2 + 3) + 4 = 2 + (3 + 4) = 9
2 × (3 × 4) = (2 × 3) × 4 = 24.

Even though the parentheses were rearranged on each line, the values of the expressions were not altered. Since this holds true when performing addition and multiplication on any real numbers, it can be said that addition and multiplication of real numbers are associative operations.
Associativity is not the same as commutativity, which addresses whether or not the order of two operands changes the result. For example, the order doesn't matter in the multiplication of real numbers, that is, a × b = b × a, so we say that the multiplication of real numbers is a commutative operation.
Associative operations are abundant in mathematics; in fact, many algebraic structures (such as semigroups and categories) explicitly require their binary operations to be associative.
However, many important and interesting operations are non-associative; some examples include subtraction, exponentiation, and the vector cross product. In contrast to the theoretical properties of real numbers, the addition of floating point numbers in computer science is not associative, and the choice of how to associate an expression can have a significant effect on rounding error.

13.1 Definition
Formally, a binary operation ∗ on a set S is called associative if it satisfies the associative law:

(x ∗ y) ∗ z = x ∗ (y ∗ z) for all x, y, z in S.

Here, ∗ is used to replace the symbol of the operation, which may be any symbol, and even the absence of symbol (juxtaposition) as for multiplication.

(xy)z = x(yz) = xyz for all x, y, z in S.

The associative law can also be expressed in functional notation thus: f(f(x, y), z) = f(x, f(y, z)).


A binary operation ∗ on the set S is associative when this diagram commutes. That is, when the two paths from S × S × S to S compose to the same function from S × S × S to S.

13.2 Generalized associative law


If a binary operation is associative, repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression.[2] This is called the generalized associative law. For instance, a product of four elements may be written in five possible ways:

1. ((ab)c)d

2. (ab)(cd)

3. (a(bc))d

4. a((bc)d)

5. a(b(cd))

If the product operation is associative, the generalized associative law says that all these formulas will yield the same result, making the parentheses unnecessary. Thus the product can be written unambiguously as

abcd.

As the number of elements increases, the number of possible ways to insert parentheses grows quickly, but they remain unnecessary for disambiguation.
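A small sketch of the generalized associative law (assumed example, not from the article): it enumerates every parenthesization of a sequence under a binary operation. For an associative operation all groupings agree; for a non-associative one (subtraction) they generally do not.

def all_parenthesizations(op, xs):
    # Return the set of values obtainable by inserting parentheses into xs.
    if len(xs) == 1:
        return {xs[0]}
    results = set()
    for i in range(1, len(xs)):                   # split point between the two factors
        for left in all_parenthesizations(op, xs[:i]):
            for right in all_parenthesizations(op, xs[i:]):
                results.add(op(left, right))
    return results

print(all_parenthesizations(lambda a, b: a * b, (2, 3, 4, 5)))   # {120}: one value, five groupings
print(all_parenthesizations(lambda a, b: a - b, (2, 3, 4, 5)))   # several different values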

13.3 Examples
Some examples of associative operations include the following.

The concatenation of the three strings "hello", " ", "world" can be computed by concatenating the first two strings (giving "hello ") and appending the third string ("world"), or by joining the second and third string (giving " world") and concatenating the first string ("hello") with the result. The two methods produce the same result; string concatenation is associative (but not commutative).

In arithmetic, addition and multiplication of real numbers are associative; i.e.,



In the absence of the associative property, five factors a, b, c, d, e result in a Tamari lattice of order four, of possibly different products.

(x + y) + z = x + (y + z) = x + y + z
(x y)z = x(y z) = x y z

for all x, y, z ∈ ℝ.

Because of associativity, the grouping parentheses can be omitted without ambiguity.

In associative operations, (x ∗ y) ∗ z = x ∗ (y ∗ z).

The addition of real numbers is associative: (x + y) + z = x + (y + z).

Addition and multiplication of complex numbers and quaternions are associative. Addition of octonions is also
associative, but multiplication of octonions is non-associative.
The greatest common divisor and least common multiple functions act associatively.

gcd(gcd(x, y), z) = gcd(x, gcd(y, z)) = gcd(x, y, z)
lcm(lcm(x, y), z) = lcm(x, lcm(y, z)) = lcm(x, y, z)

for all x, y, z ∈ ℤ.

Taking the intersection or the union of sets:

(A ∩ B) ∩ C = A ∩ (B ∩ C) = A ∩ B ∩ C
(A ∪ B) ∪ C = A ∪ (B ∪ C) = A ∪ B ∪ C

for all sets A, B, C.

If M is some set and S denotes the set of all functions from M to M, then the operation of functional composition on S is associative:

(f ∘ g) ∘ h = f ∘ (g ∘ h) = f ∘ g ∘ h for all f, g, h ∈ S.

Slightly more generally, given four sets M, N, P and Q, with h : M → N, g : N → P, and f : P → Q, then

(f ∘ g) ∘ h = f ∘ (g ∘ h) = f ∘ g ∘ h

as before. In short, composition of maps is always associative.

Consider a set with three elements, A, B, and C. The following operation:

is associative. Thus, for example, A(BC)=(AB)C = A. This operation is not commutative.

Because matrices represent linear transformation functions, with matrix multiplication representing functional
composition, one can immediately conclude that matrix multiplication is associative. [3]

13.4 Propositional logic

13.4.1 Rule of replacement


In standard truth-functional propositional logic, association,[4][5] or associativity[6] are two valid rules of replacement.
The rules allow one to move parentheses in logical expressions in logical proofs. The rules are:

(P ∨ (Q ∨ R)) ⇔ ((P ∨ Q) ∨ R)

and

(P ∧ (Q ∧ R)) ⇔ ((P ∧ Q) ∧ R),

where "⇔" is a metalogical symbol representing "can be replaced in a proof with".

13.4.2 Truth functional connectives


Associativity is a property of some logical connectives of truth-functional propositional logic. The following logical
equivalences demonstrate that associativity is a property of particular connectives. The following are truth-functional
tautologies.
Associativity of disjunction:

((P ∨ Q) ∨ R) ↔ (P ∨ (Q ∨ R))

(P ∨ (Q ∨ R)) ↔ ((P ∨ Q) ∨ R)

Associativity of conjunction:

((P ∧ Q) ∧ R) ↔ (P ∧ (Q ∧ R))

(P ∧ (Q ∧ R)) ↔ ((P ∧ Q) ∧ R)

Associativity of equivalence:

((P ↔ Q) ↔ R) ↔ (P ↔ (Q ↔ R))

(P ↔ (Q ↔ R)) ↔ ((P ↔ Q) ↔ R)

13.5 Non-associative operation


A binary operation ∗ on a set S that does not satisfy the associative law is called non-associative. Symbolically,

(x ∗ y) ∗ z ≠ x ∗ (y ∗ z) for some x, y, z ∈ S.

For such an operation the order of evaluation does matter. For example:

Subtraction

(5 − 3) − 2 ≠ 5 − (3 − 2)

Division

(4 / 2) / 2 ≠ 4 / (2 / 2)

Exponentiation

$2^{(1^2)} \neq (2^1)^2$

Also note that infinite sums are not generally associative, for example:

(1 + −1) + (1 + −1) + (1 + −1) + (1 + −1) + (1 + −1) + (1 + −1) + ⋯ = 0

whereas

1 + (−1 + 1) + (−1 + 1) + (−1 + 1) + (−1 + 1) + (−1 + 1) + (−1 + 1) + ⋯ = 1
The study of non-associative structures arises from reasons somewhat different from the mainstream of classical algebra. One area within non-associative algebra that has grown very large is that of Lie algebras. There the associative law is replaced by the Jacobi identity. Lie algebras abstract the essential nature of infinitesimal transformations, and have become ubiquitous in mathematics.
There are other specific types of non-associative structures that have been studied in depth; these tend to come from some specific applications or areas such as combinatorial mathematics. Other examples are Quasigroup, Quasifield, Non-associative ring, Non-associative algebra and Commutative non-associative magmas.

13.5.1 Nonassociativity of floating point calculation

In mathematics, addition and multiplication of real numbers are associative. By contrast, in computer science, the addition and multiplication of floating point numbers are not associative, as rounding errors are introduced when dissimilar-sized values are joined together.[7]
To illustrate this, consider a floating point representation with a 4-bit mantissa:

$(1.000_2 \times 2^0 + 1.000_2 \times 2^0) + 1.000_2 \times 2^4 = 1.000_2 \times 2^1 + 1.000_2 \times 2^4 = 1.001_2 \times 2^4$
$1.000_2 \times 2^0 + (1.000_2 \times 2^0 + 1.000_2 \times 2^4) = 1.000_2 \times 2^0 + 1.000_2 \times 2^4 = 1.000_2 \times 2^4$

Even though most computers compute with 24 or 53 bits of mantissa,[8] this is an important source of rounding error, and approaches such as the Kahan summation algorithm are ways to minimise the errors. It can be especially problematic in parallel computing.[9][10]
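A short illustration of the same effect in ordinary double precision (assumed example, not from the text), analogous to the 4-bit-mantissa calculation above.

import math

big, small = 1.0e16, 1.0
print((small + small) + big == small + (small + big))    # False
print((small + small) + big, small + (small + big))      # 1.0000000000000002e+16 vs 1e+16
# Summation order therefore matters; math.fsum tracks the lost low-order bits.
print(math.fsum([small, small, big]))                    # 1.0000000000000002e+16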

13.5.2 Notation for non-associative operations


Main article: Operator associativity

In general, parentheses must be used to indicate the order of evaluation if a non-associative operation appears more than once in an expression. However, mathematicians agree on a particular order of evaluation for several common non-associative operations. This is simply a notational convention to avoid parentheses.
A left-associative operation is a non-associative operation that is conventionally evaluated from left to right, i.e.,

x ∗ y ∗ z = (x ∗ y) ∗ z
w ∗ x ∗ y ∗ z = ((w ∗ x) ∗ y) ∗ z for all w, x, y, z ∈ S

etc.

while a right-associative operation is conventionally evaluated from right to left:

x ∗ y ∗ z = x ∗ (y ∗ z)
w ∗ x ∗ y ∗ z = w ∗ (x ∗ (y ∗ z)) for all w, x, y, z ∈ S

etc.

Both left-associative and right-associative operations occur. Left-associative operations include the following:

Subtraction and division of real numbers:

x − y − z = (x − y) − z for all x, y, z ∈ ℝ;

x / y / z = (x / y) / z for all x, y, z ∈ ℝ with y ≠ 0, z ≠ 0.

Function application:

(f x y) = ((f x) y)

This notation can be motivated by the currying isomorphism.

Right-associative operations include the following:

Exponentiation of real numbers:

$x^{y^z} = x^{(y^z)}$

One reason exponentiation is right-associative is that a repeated left-associative exponentiation operation would be less useful. Multiple appearances could (and would) be rewritten with multiplication:

$(x^y)^z = x^{(yz)}$

An additional argument for exponentiation being right-associative is that the superscript inherently behaves as a set of parentheses; e.g. in the expression $2^{x+3}$ the addition is performed before the exponentiation despite there being no explicit parentheses $2^{(x+3)}$ wrapped around it. Thus given an expression such as $x^{y^z}$, it makes sense to require evaluating the full exponent $y^z$ of the base $x$ first.

Tetration via the up-arrow operator:

$a \uparrow\uparrow b = {}^{b}a$

Function definition:

ℤ → ℤ → ℤ = ℤ → (ℤ → ℤ)
x ↦ y ↦ x − y = x ↦ (y ↦ x − y)

Using right-associative notation for these operations can be motivated by the Curry–Howard correspondence and by the currying isomorphism.

Non-associative operations for which no conventional evaluation order is defined include the following.

Taking the cross product of three vectors:

a × (b × c) ≠ (a × b) × c for some a, b, c ∈ ℝ³

Taking the pairwise average of real numbers:

$\frac{(x + y)/2 + z}{2} \neq \frac{x + (y + z)/2}{2}$ for all x, y, z ∈ ℝ with x ≠ z.

Taking the relative complement of sets: (A∖B)∖C is not the same as A∖(B∖C). (Compare material nonimplication in logic.)

13.6 See also


Light's associativity test
A semigroup is a set with a closed associative binary operation.
Commutativity and distributivity are two other frequently discussed properties of binary operations.
Power associativity, alternativity, flexibility and N-ary associativity are weak forms of associativity.
Moufang identities also provide a weak form of associativity.

13.7 References
[1] Hungerford, Thomas W. (1974). Algebra (1st ed.). Springer. p. 24. ISBN 0387905189. Definition 1.1 (i) a(bc) = (ab)c for all a, b, c in G.

[2] Durbin, John R. (1992). Modern Algebra: an Introduction (3rd ed.). New York: Wiley. p. 78. ISBN 0-471-51001-7. If a1, a2, ..., an (n ≥ 2) are elements of a set with an associative operation, then the product a1 a2 ... an is unambiguous; that is, the same element will be obtained regardless of how parentheses are inserted in the product.

[3] Matrix product associativity. Khan Academy. Retrieved 5 June 2016.

[4] Moore and Parker

[5] Copi and Cohen

[6] Hurley

[7] Knuth, Donald, The Art of Computer Programming, Volume 3, section 4.2.2

[8] IEEE Computer Society (August 29, 2008). IEEE Standard for Floating-Point Arithmetic. IEEE. ISBN 978-0-7381-5753-5. doi:10.1109/IEEESTD.2008.4610935. IEEE Std 754-2008.

[9] Villa, Oreste; Chavarría-Miranda, Daniel; Gurumoorthi, Vidhya; Márquez, Andrés; Krishnamoorthy, Sriram, Effects of Floating-Point non-Associativity on Numerical Computations on Massively Multithreaded Systems (PDF), retrieved 2014-04-08

[10] Goldberg, David (March 1991). What Every Computer Scientist Should Know About Floating-Point Arithmetic (PDF). ACM Computing Surveys. 23 (1): 5–48. doi:10.1145/103162.103163. Retrieved 2016-01-20.
Chapter 14

Atomic formula

In mathematical logic, an atomic formula (also known simply as an atom) is a formula with no deeper propositional
structure, that is, a formula that contains no logical connectives or equivalently a formula that has no strict subformulas.
Atoms are thus the simplest well-formed formulas of the logic. Compound formulas are formed by combining the
atomic formulas using the logical connectives.
The precise form of atomic formulas depends on the logic under consideration; for propositional logic, for example, the atomic formulas are the propositional variables. For predicate logic, the atoms are predicate symbols together with their arguments, each argument being a term. In model theory, atomic formulas are merely strings of symbols with a given signature, which may or may not be satisfiable with respect to a given model.[1]

14.1 Atomic formula in first-order logic

The well-formed terms and propositions of ordinary first-order logic have the following syntax:
Terms:

$t ::= c \mid x \mid f(t_1, \ldots, t_n)$,

that is, a term is recursively defined to be a constant c (a named object from the domain of discourse), or a variable x (ranging over the objects in the domain of discourse), or an n-ary function f whose arguments are terms $t_k$. Functions map tuples of objects to objects.
Propositions:

$A, B, \ldots ::= P(t_1, \ldots, t_n) \mid A \wedge B \mid \top \mid A \vee B \mid \bot \mid A \supset B \mid \forall x.\, A \mid \exists x.\, A$,

that is, a proposition is recursively defined to be an n-ary predicate P whose arguments are terms $t_k$, or an expression composed of logical connectives (and, or) and quantifiers (for-all, there-exists) used with other propositions.
An atomic formula or atom is simply a predicate applied to a tuple of terms; that is, an atomic formula is a formula of the form $P(t_1, \ldots, t_n)$ for P a predicate, and the $t_i$ terms.
All other well-formed formulae are obtained by composing atoms with logical connectives and quantifiers.
For example, the formula $\forall x.\, P(x) \wedge \exists y.\, Q(y, f(x)) \vee \exists z.\, R(z)$ contains the atoms

$P(x)$

$Q(y, f(x))$

$R(z)$

When all of the terms in an atom are ground terms, then the atom is called a ground atom or ground predicate.
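A toy encoding of the term/atom grammar above as Python dataclasses (assumed example, not from the text), together with a check for ground atoms, i.e. atoms containing no variables.

from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Const:
    name: str

@dataclass(frozen=True)
class Func:
    name: str
    args: Tuple["Term", ...]

Term = Union[Var, Const, Func]

@dataclass(frozen=True)
class Atom:
    predicate: str
    args: Tuple[Term, ...]

def is_ground(node) -> bool:
    """An atom (or term) is ground when it contains no variables."""
    if isinstance(node, Var):
        return False
    if isinstance(node, Const):
        return True
    return all(is_ground(a) for a in node.args)   # Func and Atom both carry args

x, c = Var("x"), Const("c")
print(is_ground(Atom("P", (x,))))                    # False: contains a variable
print(is_ground(Atom("Q", (c, Func("f", (c,))))))    # True: a ground atom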


14.2 See also


In model theory, structures assign an interpretation to the atomic formulas.

In proof theory, polarity assignment for atomic formulas is an essential component of focusing.
Atomic sentence

14.3 References
[1] Wilfrid Hodges (1997). A Shorter Model Theory. Cambridge University Press. pp. 11–14. ISBN 0-521-58713-1.

14.4 Further reading


Hinman, P. (2005). Fundamentals of Mathematical Logic. A K Peters. ISBN 1-56881-262-0.
Chapter 15

Atomic sentence

In logic, an atomic sentence is a type of declarative sentence which is either true or false (may also be referred to as a proposition, statement or truthbearer) and which cannot be broken down into other simpler sentences. For example, "The dog ran" is an atomic sentence in natural language, whereas "The dog ran and the cat hid" is a molecular sentence in natural language.
From a logical analysis, the truth or falsity of sentences in general is determined by only two things: the logical form of the sentence and the truth or falsity of its simple sentences. This is to say, for example, that the truth of the sentence "John is Greek and John is happy" is a function of the meaning of "and", and the truth values of the atomic sentences "John is Greek" and "John is happy". However, the truth or falsity of an atomic sentence is not a matter that is within the scope of logic itself, but rather whatever art or science the content of the atomic sentence happens to be talking about.[1]
Logic has developed artificial languages, for example sentential calculus and predicate calculus, partly with the purpose of revealing the underlying logic of natural-language statements, the surface grammar of which may conceal the underlying logical structure; see Analytic Philosophy. In these artificial languages an atomic sentence is a string of symbols which can represent an elementary sentence in a natural language, and it can be defined as follows.
In a formal language, a well-formed formula (or wff) is a string of symbols constituted in accordance with the rules of syntax of the language. A term is a variable, an individual constant or an n-place function letter followed by n terms. An atomic formula is a wff consisting of either a sentential letter or an n-place predicate letter followed by n terms. A sentence is a wff in which any variables are bound. An atomic sentence is an atomic formula containing no variables. It follows that an atomic sentence contains no logical connectives, variables or quantifiers. A sentence consisting of one or more sentences and a logical connective is a compound (or molecular) sentence. See vocabulary in First-order logic.

15.1 Examples

15.1.1 Assumptions

In the following examples:


let F, G, H be predicate letters;
let a, b, c be individual constants;
let x, y, z be variables.

15.1.2 Atomic sentences

These wffs are atomic sentences; they contain no variables or conjunctions:

F(a)

H(b, a, c)


15.1.3 Atomic formulae


These wffs are atomic formulae, but are not sentences (atomic or otherwise) because they include free variables:

F(x)
G(a, z)
H(x, y, z)

15.1.4 Compound sentences


These wffs are compound sentences. They are sentences, but are not atomic sentences because they are not atomic
formulae:

∀x (F(x))
∃z (G(a, z))
∃x ∀y ∃z (H(x, y, z))
∀x ∃z (F(x) ∧ G(a, z))
∃x ∀y ∃z (G(a, z) ∨ H(x, y, z))

15.1.5 Compound formulae


These wffs are compound formulae. They are not atomic formulae but are built up from atomic formulae using logical
connectives. They are also not sentences because they contain free variables:

F(x) ∧ G(a, z)
G(a, z) ∨ H(x, y, z)

15.2 Interpretations
Main article: Interpretation (logic)

A sentence is either true or false under an interpretation which assigns values to the logical variables. We might
for example make the following assignments:
Individual Constants

a: Socrates
b: Plato
c: Aristotle

Predicates:

F: is sleeping
G: hates
H: … made … hit …

Sentential variables:

p: It is raining.

Under this interpretation the sentences discussed above would represent the following English statements:

p: It is raining.

F(a): Socrates is sleeping.

H(b, a, c): Plato made Socrates hit Aristotle.

∀x (F(x)): Everybody is sleeping.

∃z (G(a, z)): Socrates hates somebody.

∃x ∀y ∃z (H(x, y, z)): Somebody made everybody hit somebody. (They may not have all hit the same person
z, but they all did so because of the same person x.)

∀x ∃z (F(x) ∧ G(a, z)): Everybody is sleeping and Socrates hates somebody.

∃x ∀y ∃z (G(a, z) ∨ H(x, y, z)): Either Socrates hates somebody or somebody made everybody hit somebody.

15.3 Translating sentences from a natural language into an artificial language
Sentences in natural languages can be ambiguous, whereas the languages of the sentential logic and predicate logics
are precise. Translation can reveal such ambiguities and express precisely the intended meaning.
For example, take the English sentence Father Ted married Jack and Jill. Does this mean Jack married Jill? In
translating we might make the following assignments: Individual Constants

a: Father Ted

b: Jack

c: Jill

Predicates:

M: … officiated at the marriage of … to …

Using these assignments the sentence above could be translated as follows:

M(a, b, c): Father Ted officiated at the marriage of Jack and Jill.

∃x ∃y (M(a, b, x) ∧ M(a, c, y)): Father Ted officiated at the marriage of Jack to somebody and Father Ted
officiated at the marriage of Jill to somebody.

∃x ∃y (M(x, a, b) ∧ M(y, a, c)): Somebody officiated at the marriage of Father Ted to Jack and somebody
officiated at the marriage of Father Ted to Jill.

To establish which is the correct translation of Father Ted married Jack and Jill, it would be necessary to ask the
speaker exactly what was meant.

15.4 Philosophical signicance


Atomic sentences are of particular interest in philosophical logic and the theory of truth and, it has been argued, there
are corresponding atomic facts. An Atomic sentence (or possibly the meaning of an atomic sentence) is called an
elementary proposition by Wittgenstein and an atomic proposition by Russell:

4.2 The sense of a proposition is its agreement and disagreement with possibilities of existence and non-existence
of states of aairs. 4.21 The simplest kind of proposition, an elementary proposition, asserts the existence of a
state of aairs.: Wittgenstein, Tractatus Logico-Philosophicus, s:Tractatus Logico-Philosophicus.

A proposition (true or false) asserting an atomic fact is called an atomic proposition.: Russell, Introduction to
Tractatus Logico-Philosophicus, s:Tractatus Logico-Philosophicus/Introduction

see also [2] and [3] especially regarding elementary proposition and atomic proposition as discussed by Russell
and Wittgenstein

Note the distinction between an elementary/atomic proposition and an atomic fact


No atomic sentence can be deduced from (is not entailed by) any other atomic sentence, no two atomic sentences are
incompatible, and no sets of atomic sentences are self-contradictory. Wittgenstein made much of this in his Tractatus
Logico-Philosophicus. If there are any atomic sentences then there must be atomic facts which correspond to those
that are true, and the conjunction of all true atomic sentences would say all that was the case, i.e. the world, since,
according to Wittgenstein, "The world is all that is the case" (TLP 1). Similarly, the set of all sets of atomic sentences
corresponds to the set of all possible worlds (all that could be the case).
The T-schema, which embodies the theory of truth proposed by Alfred Tarski, denes the truth of arbitrary sentences
from the truth of atomic sentences.

15.5 See also


Logical atomism

Logical constant
Truthbearer

15.6 References
[1] Philosophy of Logic, Willard Van Orman Quine

[2] http://plato.stanford.edu/entries/logical-atomism/

[3] http://plato.stanford.edu/entries/wittgenstein-atomism/

15.7 Bibliography
Benson Mates, Elementary Logic, OUP, New York 1972 (Library of Congress Catalog Card no.74-166004)
Elliott Mendelson, Introduction to Mathematical Logic, Van Nostrand Reinhold Company, New York 1964

Wittgenstein, Tractatus Logico-Philosophicus: s:Tractatus Logico-Philosophicus.


Chapter 16

Balanced boolean function

In mathematics and computer science, a balanced boolean function is a boolean function whose output yields as
many 0s as 1s over its input set. This means that for a uniformly random input string of bits, the probability of getting
a 1 is 1/2.
An example of a balanced boolean function is the function that assigns a 1 to every even number and 0 to all odd
numbers (likewise the other way around). The same applies for functions assigning 1 to all positive numbers and 0
otherwise.
A Boolean function of n bits is balanced if it takes the value 1 with probability 1/2.
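
As a quick illustration, the following Python sketch (an assumption of this text, not part of the article) counts the outputs of an n-bit Boolean function over all 2ⁿ inputs and reports whether it is balanced; the parity function used in the example is balanced, while a constant function is not.

from itertools import product

def is_balanced(f, n):
    """Return True if the n-bit Boolean function f outputs 1 on exactly half of all 2**n inputs."""
    ones = sum(f(bits) for bits in product((0, 1), repeat=n))
    return ones == 2 ** (n - 1)

parity = lambda bits: sum(bits) % 2          # balanced: half of all inputs have odd weight
constant_zero = lambda bits: 0               # not balanced

print(is_balanced(parity, 4))        # True
print(is_balanced(constant_zero, 4)) # False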

16.1 Usage
Balanced boolean functions are primarily used in cryptography. If a function is not balanced, it will have a statistical
bias, making it subject to cryptanalysis such as the correlation attack.

16.2 See also


Bent function

16.3 References
Balanced boolean functions that can be evaluated so that every input bit is unlikely to be read, Annual ACM
Symposium on Theory of Computing

Chapter 17

Bent function

The four 2-ary Boolean functions with Hamming weight 1 are bent, i.e. their nonlinearity is 1 (which is what this diagram shows).
The following formula shows that a 2-ary function is bent when its nonlinearity is 1:

2^(2−1) − 2^(2/2−1) = 2 − 1 = 1

In the mathematical field of combinatorics, a bent function is a special type of Boolean function. This means it takes
several inputs and gives one output, each of which has two possible values (such as 0 and 1, or true and false). The
name is figurative. Bent functions are so called because they are as different as possible from all linear functions (the
simplest or straight-line functions) and from all affine functions (which preserve parallel lines). This makes the bent
functions naturally hard to approximate. Bent functions were defined and named in the 1960s by Oscar Rothaus in
research not published until 1976.[1] They have been extensively studied for their applications in cryptography, but
have also been applied to spread spectrum, coding theory, and combinatorial design. The definition can be extended
in several ways, leading to different classes of generalized bent functions that share many of the useful properties of
the original.
It is known that V. A. Eliseev and O. P. Stepchenkov studied bent functions, which they called minimal functions, in
the USSR in 1962, see.[2] However, their results have still not been declassified.

17.1 Walsh transform


Bent functions are defined in terms of the Walsh transform. The Walsh transform of a Boolean function f: Z₂ⁿ → Z₂
is the function f̂: Z₂ⁿ → Z given by

f̂(a) = Σ_{x ∈ Z₂ⁿ} (−1)^(f(x) + a·x)

where a·x = a₁x₁ + a₂x₂ + … + aₙxₙ (mod 2) is the dot product in Z₂ⁿ.[3] Alternatively, let S₀(a) = { x ∈ Z₂ⁿ :
f(x) = a·x } and S₁(a) = { x ∈ Z₂ⁿ : f(x) ≠ a·x }. Then |S₀(a)| + |S₁(a)| = 2ⁿ and hence

f̂(a) = |S₀(a)| − |S₁(a)| = 2|S₀(a)| − 2ⁿ.

For any Boolean function f and a ∈ Z₂ⁿ the transform lies in the range

−2ⁿ ≤ f̂(a) ≤ 2ⁿ.

Moreover, the linear function f₀(x) = a·x and the affine function f₁(x) = a·x + 1 correspond to the two extreme
cases, since

f̂₀(a) = 2ⁿ,  f̂₁(a) = −2ⁿ.

Thus, for each a ∈ Z₂ⁿ the value of f̂(a) characterizes where the function f(x) lies in the range from f₀(x) to f₁(x).
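
For small n the Walsh transform can be computed directly by brute force. The following Python sketch (an illustrative assumption of this text, not code from the article) evaluates f̂(a) for every a ∈ Z₂ⁿ and checks whether all coefficients have the same absolute value 2^(n/2), which is the bentness criterion discussed in the next section.

from itertools import product

def walsh_transform(f, n):
    """Return {a: sum over x of (-1)**(f(x) + a.x)} for all a in Z_2^n."""
    points = list(product((0, 1), repeat=n))
    def dot(a, x):
        return sum(ai * xi for ai, xi in zip(a, x)) % 2
    return {a: sum((-1) ** (f(x) ^ dot(a, x)) for x in points) for a in points}

def is_bent(f, n):
    """A Boolean function is bent iff |f^(a)| = 2**(n/2) for every a (so n must be even)."""
    target = 2 ** (n // 2)
    return n % 2 == 0 and all(abs(v) == target for v in walsh_transform(f, n).values())

# f(x1, x2) = x1*x2 is the simplest bent function; x1 XOR x2 is affine, hence not bent.
print(is_bent(lambda x: x[0] & x[1], 2))  # True
print(is_bent(lambda x: x[0] ^ x[1], 2))  # False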

17.2 Definition and properties


Rothaus defined a bent function as a Boolean function f: Z₂ⁿ → Z₂ whose Walsh transform has constant absolute
value. Bent functions are in a sense equidistant from all the affine functions, so they are equally hard to approximate
with any affine function.
The simplest examples of bent functions, written in algebraic normal form, are F(x₁,x₂) = x₁x₂ and G(x₁,x₂,x₃,x₄) =
x₁x₂ + x₃x₄. This pattern continues: x₁x₂ + x₃x₄ + … + xₙ₋₁xₙ is a bent function Z₂ⁿ → Z₂ for every even n, but
there is a wide variety of different types of bent functions as n increases.[4] The sequence of values (−1)^f(x), with
x ∈ Z₂ⁿ taken in lexicographical order, is called a bent sequence; bent functions and bent sequences have equivalent
properties. In this ±1 form, the Walsh transform is easily computed as

f̂(a) = W(2ⁿ) (−1)^f(a),

where W(2ⁿ) is the natural-ordered Walsh matrix and the sequence is treated as a column vector.[5]
Rothaus proved that bent functions exist only for even n, and that for a bent function f, |f̂(a)| = 2^(n/2) for all a ∈
Z₂ⁿ.[3] In fact, f̂(a) = 2^(n/2) (−1)^g(a), where g is also bent. In this case, ĝ(a) = 2^(n/2) (−1)^f(a), so f and g are
considered dual functions.[5]
Every bent function has a Hamming weight (number of times it takes the value 1) of 2^(n−1) ± 2^(n/2−1), and in fact
agrees with any affine function at one of those two numbers of points. So the nonlinearity of f (minimum number of
times it equals any affine function) is 2^(n−1) − 2^(n/2−1), the maximum possible. Conversely, any Boolean function
with nonlinearity 2^(n−1) − 2^(n/2−1) is bent.[3] The degree of f in algebraic normal form (called the nonlinear order
of f) is at most n/2 (for n > 2).[4]
Although bent functions are vanishingly rare among Boolean functions of many variables, they come in many different
kinds. There has been detailed research into special classes of bent functions, such as the homogeneous ones[6] or
those arising from a monomial over a finite field,[7] but so far the bent functions have defied all attempts at a complete
enumeration or classification.
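
To illustrate the nonlinearity property, here is a small brute-force Python check (a sketch assumed for this text, not taken from the article): it measures the minimum Hamming distance from G(x₁,x₂,x₃,x₄) = x₁x₂ + x₃x₄ to every affine function of four variables and compares it with the bound 2^(n−1) − 2^(n/2−1) = 6.

from itertools import product

n = 4
points = list(product((0, 1), repeat=n))

def truth_table(f):
    return [f(x) for x in points]

def bent_example(x):
    # G(x1, x2, x3, x4) = x1*x2 + x3*x4 over GF(2)
    return (x[0] & x[1]) ^ (x[2] & x[3])

# Enumerate all affine functions a.x + c over GF(2).
affine_tables = [
    [(sum(ai * xi for ai, xi in zip(a, x)) + c) % 2 for x in points]
    for a in product((0, 1), repeat=n) for c in (0, 1)
]

g_table = truth_table(bent_example)
nonlinearity = min(sum(u != v for u, v in zip(g_table, t)) for t in affine_tables)

print(sum(g_table))      # Hamming weight: 6, i.e. 2**(n-1) - 2**(n//2 - 1)
print(nonlinearity)      # 6, the maximum possible for n = 4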

17.3 Constructions
There are several types of constructions for bent functions.[2]

• combinatorial constructions: iterative constructions, Maiorana–McFarland construction, Partial Spreads, Dillon's
and Dobbertin's bent functions, minterm bent functions, Bent Iterative functions
• algebraic constructions: monomial bent functions with exponents of Gold, Dillon, Kasami, Canteaut–Leander
and Canteaut–Charpin–Kyureghyan; Niho bent functions, etc.

17.4 Applications
As early as 1982 it was discovered that maximum length sequences based on bent functions have cross-correlation and
autocorrelation properties rivalling those of the Gold codes and Kasami codes for use in CDMA.[8] These sequences
have several applications in spread spectrum techniques.
The properties of bent functions are naturally of interest in modern digital cryptography, which seeks to obscure
relationships between input and output. By 1988 Forré recognized that the Walsh transform of a function can be
used to show that it satisfies the Strict Avalanche Criterion (SAC) and higher-order generalizations, and recommended
this tool to select candidates for good S-boxes achieving near-perfect diffusion.[9] Indeed, the functions satisfying the
SAC to the highest possible order are always bent.[10] Furthermore, the bent functions are as far as possible from
having what are called linear structures, nonzero vectors a such that f(x + a) + f(x) is a constant. In the language of
differential cryptanalysis (introduced after this property was discovered) the derivative of a bent function f at every
nonzero point a (that is, f_a(x) = f(x + a) + f(x)) is a balanced Boolean function, taking on each value exactly half of
the time. This property is called perfect nonlinearity.[4]
Given such good diffusion properties, apparently perfect resistance to differential cryptanalysis, and resistance by
definition to linear cryptanalysis, bent functions might at first seem the ideal choice for secure cryptographic functions
such as S-boxes. Their fatal flaw is that they fail to be balanced. In particular, an invertible S-box cannot be constructed
directly from bent functions, and a stream cipher using a bent combining function is vulnerable to a correlation attack.
Instead, one might start with a bent function and randomly complement appropriate values until the result is balanced.
The modified function still has high nonlinearity, and as such functions are very rare the process should be much faster
than a brute-force search.[4] But functions produced in this way may lose other desirable properties, even failing
to satisfy the SAC, so careful testing is necessary.[10] A number of cryptographers have worked on techniques
for generating balanced functions that preserve as many of the good cryptographic qualities of bent functions as
possible.[11][12][13]
Some of this theoretical research has been incorporated into real cryptographic algorithms. The CAST design
procedure, used by Carlisle Adams and Stafford Tavares to construct the S-boxes for the block ciphers CAST-128 and
CAST-256, makes use of bent functions.[13] The cryptographic hash function HAVAL uses Boolean functions built
from representatives of all four of the equivalence classes of bent functions on six variables.[14] The stream cipher
Grain uses an NLFSR whose nonlinear feedback polynomial is, by design, the sum of a bent function and a linear
function.[15]
Applications of bent functions are listed in.[2]

17.5 Generalizations
More than 25 different generalizations of bent functions are described in.[2] There are algebraic generalizations (q-
valued bent functions, p-ary bent functions, bent functions over a finite field, generalized Boolean bent functions of
Schmidt, bent functions from a finite Abelian group into the set of complex numbers on the unit circle, bent functions
from a finite Abelian group into a finite Abelian group, non-Abelian bent functions, vectorial G-bent functions,
multidimensional bent functions on a finite Abelian group), combinatorial generalizations (symmetric bent functions,
homogeneous bent functions, rotation symmetric bent functions, normal bent functions, self-dual and anti-self-dual
bent functions, partially defined bent functions, plateaued functions, Z-bent functions and quantum bent functions)
and cryptographic generalizations (semi-bent functions, balanced bent functions, partially bent functions, hyper-bent
functions, bent functions of higher order, k-bent functions).

The most common class of generalized bent functions is the mod m type, f : Zₘⁿ → Zₘ such that

f̂(a) = Σ_{x ∈ Zₘⁿ} e^((2πi/m)(f(x) − a·x))

has constant absolute value m^(n/2). Perfect nonlinear functions f : Zₘⁿ → Zₘ, those such that for all nonzero a,
f(x + a) − f(x) takes on each value m^(n−1) times, are generalized bent. If m is prime, the converse is true. In most
cases only prime m are considered. For odd prime m, there are generalized bent functions for every positive n, even
and odd. They have many of the same good cryptographic properties as the binary bent functions.[16]
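
For small parameters the generalized Walsh coefficients can again be checked by brute force. The Python sketch below is an illustrative assumption of this text (the ternary example function is chosen arbitrarily): it computes f̂(a) for a function f : Zₘⁿ → Zₘ and tests whether every coefficient has absolute value m^(n/2).

from cmath import exp, pi
from itertools import product

def generalized_walsh(f, m, n):
    """Return {a: sum over x of exp((2*pi*i/m) * (f(x) - a.x))} for all a in Z_m^n."""
    points = list(product(range(m), repeat=n))
    def dot(a, x):
        return sum(ai * xi for ai, xi in zip(a, x)) % m
    return {a: sum(exp(2j * pi / m * (f(x) - dot(a, x))) for x in points) for a in points}

def is_generalized_bent(f, m, n):
    """Check that every generalized Walsh coefficient has absolute value m**(n/2)."""
    target = m ** (n / 2)
    return all(abs(abs(v) - target) < 1e-9 for v in generalized_walsh(f, m, n).values())

print(is_generalized_bent(lambda x: (x[0] * x[1]) % 3, 3, 2))   # True: x1*x2 mod 3 is generalized bent
print(is_generalized_bent(lambda x: (x[0] + x[1]) % 3, 3, 2))   # False: a linear function is not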
Semi-bent functions are an odd-order counterpart to bent functions. A semi-bent function is f : Zₘⁿ → Zₘ with n
odd, such that |f̂| takes only the values 0 and m^((n+1)/2). They also have good cryptographic characteristics, and
some of them are balanced, taking on all possible values equally often.[17]
The partially bent functions form a large class defined by a condition on the Walsh transform and autocorrelation
functions. All affine and bent functions are partially bent. This is in turn a proper subclass of the plateaued
functions.[18]
The idea behind the hyper-bent functions is to maximize the minimum distance to all Boolean functions coming
from bijective monomials on the finite field GF(2ⁿ), not just the affine functions. For these functions this distance is
constant, which may make them resistant to an interpolation attack.
Other related names have been given to cryptographically important classes of functions Z₂ⁿ → Z₂ⁿ, such as almost
bent functions and crooked functions. While not bent functions themselves (these are not even Boolean functions),
they are closely related to the bent functions and have good nonlinearity properties.

17.6 References
[1] O. S. Rothaus (May 1976). On Bent Functions. Journal of Combinatorial Theory, Series A. 20 (3): 300305. ISSN
0097-3165. doi:10.1016/0097-3165(76)90024-8. Retrieved 16 December 2013.

[2] N. Tokareva. Bent functions: results and applications to cryptography. Acad. Press. Elsevier. 2015. 220 pages. Retrieved
30 November 2016.

[3] C. Qu; J. Seberry; T. Xia (29 December 2001). Boolean Functions in Cryptography. Retrieved 14 September 2009.

[4] W. Meier; O. Staelbach (April 1989). Nonlinearity Criteria for Cryptographic Functions. Eurocrypt '89. pp. 549562.

[5] C. Carlet; L.E. Danielsen; M.G. Parker; P. Sol (19 May 2008). Self Dual Bent Functions (PDF). Fourth International
Workshop on Boolean Functions: Cryptography and Applications (BFCA '08). Retrieved 21 September 2009.

[6] T. Xia; J. Seberry; J. Pieprzyk; C. Charnes (June 2004). Homogeneous bent functions of degree n in 2n variables do not
exist for n > 3. Discrete Applied Mathematics. 142 (13): 127132. ISSN 0166-218X. doi:10.1016/j.dam.2004.02.006.
Retrieved 21 September 2009.

[7] A. Canteaut; P. Charpin; G. Kyureghyan (January 2008). A new class of monomial bent functions (PDF). Finite Fields
and Their Applications. 14 (1): 221241. ISSN 1071-5797. doi:10.1016/j.a.2007.02.004. Retrieved 21 September
2009.

[8] J. Olsen; R. Scholtz; L. Welch (November 1982). Bent-Function Sequences. IEEE Transactions on Information Theory.
IT-28 (6): 858864. ISSN 0018-9448. doi:10.1109/tit.1982.1056589. Archived from the original on 22 July 2011.
Retrieved 24 September 2009.

[9] R. Forr (August 1988). The Strict Avalanche Criterion: Spectral Properties of Boolean Functions and an Extended Deni-
tion. CRYPTO '88. pp. 450468.

[10] C. Adams; S. Tavares (January 1990). The Use of Bent Sequences to Achieve Higher-Order Strict Avalanche Criterion
in S-Box Design. Technical Report TR 90-013. Queens University. CiteSeerX 10.1.1.41.8374 .

[11] K. Nyberg (April 1991). Perfect nonlinear S-boxes. Eurocrypt '91. pp. 378386.

[12] J. Seberry; X. Zhang (December 1992). Highly Nonlinear 0-1 Balanced Boolean Functions Satisfying Strict Avalanche
Criterion. AUSCRYPT '92. pp. 143155. CiteSeerX 10.1.1.57.4992 .

[13] C. Adams (November 1997). Constructing Symmetric Ciphers Using the CAST Design Procedure. Designs, Codes and
Cryptography. 12 (3): 283316. ISSN 0925-1022. doi:10.1023/A:1008229029587. Archived from the original on 26
October 2008. Retrieved 20 September 2009.

[14] Y. Zheng; J. Pieprzyk; J. Seberry (December 1992). HAVALa one-way hashing algorithm with variable length of output.
AUSCRYPT '92. pp. 83104. Retrieved 20 June 2015.

[15] M. Hell; T. Johansson; A. Maximov; W. Meier. A Stream Cipher Proposal: Grain-128 (PDF). Retrieved 24 September
2009.

[16] K. Nyberg (May 1990). Constructions of bent functions and dierence sets. Eurocrypt '90. pp. 151160.

[17] K. Khoo; G. Gong; D. Stinson (February 2006). A new characterization of semi-bent and bent functions on nite elds
(PostScript). Designs, Codes and Cryptography. 38 (2): 279295. ISSN 0925-1022. doi:10.1007/s10623-005-6345-x.
Retrieved 24 September 2009.

[18] Y. Zheng; X. Zhang (November 1999). Plateaued Functions. Second International Conference on Information and Com-
munication Security (ICICS '99). pp. 284300. Retrieved 24 September 2009.

17.7 Further reading


C. Carlet (May 1993). Two New Classes of Bent Functions. Eurocrypt '93. pp. 77101.
J. Seberry; X. Zhang (March 1994). Constructions of Bent Functions from Two Known Bent Functions.
Australasian Journal of Combinatorics. 9: 2135. CiteSeerX 10.1.1.55.531 . ISSN 1034-4942.

T. Neumann (May 2006). Bent Functions. CiteSeerX 10.1.1.85.8731 .


Colbourn, Charles J.; Dinitz, Jerey H. (2006). Handbook of Combinatorial Designs (2nd ed.). CRC Press.
pp. 337339. ISBN 978-1-58488-506-1.

Cusick, T.W.; Stanica, P. (2009). Cryptographic Boolean Functions and Applications. Academic Press. ISBN
9780123748904.
Chapter 18

Bernays–Schönfinkel class

The Bernays–Schönfinkel class (also known as the Bernays–Schönfinkel–Ramsey class) of formulas, named after Paul
Bernays and Moses Schönfinkel (and Frank P. Ramsey), is a fragment of first-order logic formulas where satisfiability
is decidable.
It is the set of sentences that, when written in prenex normal form, have an ∃*∀* quantifier prefix and do not contain
any function symbols.
This class of logic formulas is also sometimes referred to as effectively propositional (EPR) since it can be effectively
translated into propositional logic formulas by a process of grounding or instantiation.
The satisfiability problem for this class is NEXPTIME-complete.[1]
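
As a rough illustration of the grounding idea (a sketch written for this text, with an invented toy formula, not an example from the article): an ∃*∀* sentence without function symbols can be made propositional by replacing the existential variables with fresh witness constants and then instantiating the universal variables over the finite set of constants occurring in the formula.

from itertools import product

# Toy sentence in the Bernays-Schoenfinkel class (an assumed example):
#   exists x . forall y . P(x) or not Q(x, y), over the constants {a, b}.
constants = ['a', 'b']

def ground(exist_vars, forall_vars, matrix, constants):
    """Ground an exists*-forall* sentence: Skolemize existentials to fresh
    constants, then expand the universal quantifiers over all constants.
    Returns one propositional formula per universal instantiation."""
    witnesses = {v: 'c_%s' % v for v in exist_vars}       # fresh Skolem constants
    domain = constants + list(witnesses.values())
    instances = []
    for values in product(domain, repeat=len(forall_vars)):
        assignment = dict(zip(forall_vars, values), **witnesses)
        instances.append(matrix(assignment))
    return instances

# Matrix of the toy sentence, returned as a readable propositional formula.
matrix = lambda s: 'P(%s) or not Q(%s, %s)' % (s['x'], s['x'], s['y'])

for clause in ground(['x'], ['y'], matrix, constants):
    print(clause)
# P(c_x) or not Q(c_x, a)
# P(c_x) or not Q(c_x, b)
# P(c_x) or not Q(c_x, c_x)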

18.1 See also


Prenex normal form

18.2 Notes
[1] Lewis, Harry R. (1980), Complexity results for classes of quanticational formulas, Journal of Computer and System
Sciences, 21 (3): 317353, MR 603587, doi:10.1016/0022-0000(80)90027-6

18.3 References
Ramsey, F. (1930), On a problem in formal logic, Proc. London Math. Soc., 30: 264286, doi:10.1112/plms/s2-
30.1.264

Piskac, R.; de Moura, L.; Bjorner, N. (December 2008), Deciding Eectively Propositional Logic with Equal-
ity (PDF), Microsoft Research Technical Report (2008-181)

Chapter 19

Head normal form

In the lambda calculus, a term is in beta normal form if no beta reduction is possible.[1] A term is in beta-eta
normal form if neither a beta reduction nor an eta reduction is possible. A term is in head normal form if there is
no beta-redex in head position.

19.1 Beta reduction


In the lambda calculus, a beta redex is a term of the form

(λx.A)M

A redex r is in head position in a term t, if t has the following shape:

λx₁ … λxₙ.(λx.A)M₁ M₂ … Mₘ, where n ≥ 0 and m ≥ 1, and where (λx.A)M₁ is the redex r.

A beta reduction is an application of the following rewrite rule to a beta redex contained in a term t:

(λx.A)M → A[x := M]

where A[x := M] is the result of substituting the term M for the variable x in the term A.
A head beta reduction is a beta reduction applied in head position, that is, of the following form:

λx₁ … λxₙ.(λx.A)M₁ M₂ … Mₘ → λx₁ … λxₙ.A[x := M₁]M₂ … Mₘ, where n ≥ 0 and m ≥ 1.

Any other reduction is an internal beta reduction.


A normal form is a term that does not contain any beta redex, i.e. that cannot be further reduced. More generally, a
head normal form is a term that does not contain a beta redex in head position, i.e. that cannot be further reduced
by a head reduction. Head normal forms are the terms of the following shape:

λx₁ … λxₙ.x M₁ M₂ … Mₘ, where x is a variable (if considering the simple lambda calculus), n ≥ 0 and m ≥ 0.

A head normal form is not always a normal form, because the applied arguments Mj need not be normal. However,
the converse is true: any normal form is also a head normal form. In fact, the normal forms are exactly the head
normal forms in which the subterms Mj are themselves normal forms. This gives an inductive syntactic description
of normal forms.
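
The following Python sketch (written for this text, with an assumed tuple-based term representation) implements one-step head beta reduction and the head-normal-form test for the pure lambda calculus; it is meant only to make the shapes above concrete, and it uses naive substitution without alpha-renaming, which is adequate for the example because the names are distinct.

# Terms: ('var', name), ('lam', name, body), ('app', fun, arg).

def subst(term, name, value):
    """Naive substitution of value for the variable name (no alpha-renaming)."""
    kind = term[0]
    if kind == 'var':
        return value if term[1] == name else term
    if kind == 'lam':
        return term if term[1] == name else ('lam', term[1], subst(term[2], name, value))
    return ('app', subst(term[1], name, value), subst(term[2], name, value))

def head_step(term):
    """Perform one head beta reduction, or return None if the term is in head normal form."""
    if term[0] == 'lam':                       # descend under the leading lambdas
        body = head_step(term[2])
        return None if body is None else ('lam', term[1], body)
    # peel off the arguments M1 ... Mm of the application spine
    spine, args = term, []
    while spine[0] == 'app':
        spine, args = spine[1], [spine[2]] + args
    if spine[0] == 'lam' and args:             # head redex (lambda x. A) M1
        reduced = subst(spine[2], spine[1], args[0])
        for extra in args[1:]:
            reduced = ('app', reduced, extra)
        return reduced
    return None                                # head position holds a variable: head normal form

# (lambda x. x) y  reduces in one head step to  y
term = ('app', ('lam', 'x', ('var', 'x')), ('var', 'y'))
print(head_step(term))          # ('var', 'y')
print(head_step(('var', 'y')))  # None: already in head normal form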


19.2 Reduction strategies


In general, a given term can contain several redexes, hence several dierent beta reductions could be applied. We
may specify a strategy to choose which redex to reduce.

Normal-order reduction is the strategy in which one continually applies the rule for beta reduction in head
position until no more such reductions are possible. At that point, the resulting term is in head normal form. One
then continues applying head reduction in the subterms Mj, from left to right. Stated otherwise, normal-order
reduction is the strategy that always reduces the leftmost outermost redex first.

By contrast, in applicative order reduction, one applies the internal reductions rst, and then only applies the
head reduction when no more internal reductions are possible.

Normal-order reduction is complete, in the sense that if a term has a head normal form, then normal-order reduction
will eventually reach it. By the syntactic description of normal forms above, this entails the same statement for a
fully normal form (this is the standardization theorem). By contrast, applicative order reduction may not terminate,
even when the term has a normal form. For example, using applicative order reduction, the following sequence of
reductions is possible:

(λx.z)((λw.www)(λw.www)) →
(λx.z)((λw.www)(λw.www)(λw.www)) →
(λx.z)((λw.www)(λw.www)(λw.www)(λw.www)) →
(λx.z)((λw.www)(λw.www)(λw.www)(λw.www)(λw.www)) →
...

But using normal-order reduction, the same starting point reduces quickly to normal form:

(λx.z)((λw.www)(λw.www)) → z

Sinot's director strings is one method by which the computational complexity of beta reduction can be optimized.

19.3 See also


Lambda calculus

Normal form (disambiguation)

19.4 References
[1] Beta normal form. Encyclopedia. TheFreeDictionary.com. Retrieved 18 November 2013.
Chapter 20

Biconditional elimination

Biconditional elimination is the name of two valid rules of inference of propositional logic. It allows for one to
infer a conditional from a biconditional. If (P ↔ Q) is true, then one may infer that (P → Q) is true, and also
that (Q → P) is true.[1] For example, if it's true that I'm breathing if and only if I'm alive, then it's true that if I'm
breathing, I'm alive; likewise, it's true that if I'm alive, I'm breathing. The rules can be stated formally as:

(P ↔ Q)
∴ (P → Q)

and

(P ↔ Q)
∴ (Q → P)

where the rule is that wherever an instance of "(P ↔ Q)" appears on a line of a proof, either "(P → Q)" or
"(Q → P)" can be placed on a subsequent line.

20.1 Formal notation


The biconditional elimination rule may be written in sequent notation:

(P ↔ Q) ⊢ (P → Q)

and

(P ↔ Q) ⊢ (Q → P)

where ⊢ is a metalogical symbol meaning that (P → Q), in the first case, and (Q → P) in the other are syntactic
consequences of (P ↔ Q) in some logical system;
or as the statement of a truth-functional tautology or theorem of propositional logic:

(P ↔ Q) → (P → Q)

(P ↔ Q) → (Q → P)

where P and Q are propositions expressed in some formal system.
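
A small truth-table check makes the tautology claim concrete. The Python sketch below (an assumption of this text, not part of the article) verifies that (P ↔ Q) → (P → Q) and (P ↔ Q) → (Q → P) hold for all four assignments of truth values to P and Q.

from itertools import product

implies = lambda a, b: (not a) or b     # material conditional
iff = lambda a, b: a == b               # biconditional

assignments = list(product((False, True), repeat=2))
print(all(implies(iff(p, q), implies(p, q)) for p, q in assignments))  # True
print(all(implies(iff(p, q), implies(q, p)) for p, q in assignments))  # True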


20.2 See also


Logical biconditional

20.3 References
[1] Cohen, S. Marc. Chapter 8: The Logic of Conditionals (PDF). University of Washington. Retrieved 8 October 2013.
Chapter 21

Biconditional introduction

In propositional logic, biconditional introduction[1][2][3] is a valid rule of inference. It allows for one to infer a
biconditional from two conditional statements. The rule makes it possible to introduce a biconditional statement into
a logical proof. If P → Q is true, and if Q → P is true, then one may infer that P ↔ Q is true. For example, from
the statements "if I'm breathing, then I'm alive" and "if I'm alive, then I'm breathing", it can be inferred that "I'm
breathing if and only if I'm alive". Biconditional introduction is the converse of biconditional elimination. The rule
can be stated formally as:

P → Q, Q → P
∴ P ↔ Q
where the rule is that wherever instances of "P → Q" and "Q → P" appear on lines of a proof, "P ↔ Q" can
validly be placed on a subsequent line.

21.1 Formal notation


The biconditional introduction rule may be written in sequent notation:

(P → Q), (Q → P) ⊢ (P ↔ Q)

where ⊢ is a metalogical symbol meaning that P ↔ Q is a syntactic consequence when P → Q and Q → P are
both in a proof;
or as the statement of a truth-functional tautology or theorem of propositional logic:

((P → Q) ∧ (Q → P)) → (P ↔ Q)

where P and Q are propositions expressed in some formal system.

21.2 References
[1] Hurley

[2] Moore and Parker

[3] Copi and Cohen

Chapter 22

Bidirectional transformation

In computer programming, bidirectional transformations (bx) are programs in which a single piece of code can
be run in several ways, such that the same data are sometimes considered as input, and sometimes as output. For
example, a bx run in the forward direction might transform input I into output O, while the same bx run backward
would take as input versions of I and O and produce a new version of I as its output.
Bidirectional model transformations are an important special case in which a model is input to such a program.
Some bidirectional languages are bijective. The bijectivity of a language is a severe restriction of its bidirectionality,[1]
because a bijective language is merely relating two different ways to present the very same information.
More general is a lens language, in which there is a distinguished forward direction (get) that takes a concrete input
to an abstract output, discarding some information in the process: the concrete state includes all the information that
is in the abstract state, and usually some more. The backward direction (put) takes a concrete state and an abstract
state and computes a new concrete state. Lenses are required to obey certain conditions to ensure sensible behaviour.
The most general case is that of symmetric bidirectional transformations. Here the two states that are related typically
share some information, but each also includes some information that is not included in the other.
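
As an illustration of the lens idea (a minimal sketch written for this text; the record shape and field names are invented), the forward get discards part of a concrete state, and the backward put merges an edited abstract view back into the original concrete state, so that the usual round-trip laws get(put(c, a)) = a and put(c, get(c)) = c hold.

# Concrete state: a full record; abstract view: just the visible fields.
def get(concrete):
    """Forward direction: expose only the 'name' and 'email' fields."""
    return {'name': concrete['name'], 'email': concrete['email']}

def put(concrete, abstract):
    """Backward direction: write the edited view back, preserving hidden fields."""
    updated = dict(concrete)
    updated.update(abstract)
    return updated

c = {'name': 'Ada', 'email': 'ada@example.org', 'id': 42}   # 'id' is hidden detail
a = get(c)
a['email'] = 'ada@example.com'                              # edit the abstract view

print(put(c, a))                  # {'name': 'Ada', 'email': 'ada@example.com', 'id': 42}
print(get(put(c, a)) == a)        # PutGet law holds for this lens
print(put(c, get(c)) == c)        # GetPut law holds for this lens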

22.1 Usage
Bidirectional transformations can be used to:

Maintain the consistency of several sources of information[2]

Provide an 'abstract view' to easily manipulate data and write them back to their source

22.2 Vocabulary
A bidirectional program which obeys certain round-trip laws is called a lens.

22.3 Examples of implementations


Boomerang is a programming language which allows writing lenses to process text data formats bidirectionally

Augeas is a conguration management library whose lens language is inspired by the Boomerang project

biXid is a programming language for processing XML data bidirectionally[3]

XSugar allows translation from XML to non-XML formats[4]


22.4 See also


Bidirectionalization

Reverse computation

22.5 References
[1] http://grace.gsdlab.org/images/e/e2/Nate-short.pdf

[2] http://www.cs.cornell.edu/~{}jnfoster/papers/grace-report.pdf

[3] http://arbre.is.s.u-tokyo.ac.jp/~{}hahosoya/papers/bixid.pdf

[4] http://www.brics.dk/xsugar/

22.6 External links


GRACE International Meeting on Bidirectional Transformations

Bidirectional Transformations: The Bx Wiki


Chapter 23

Bijection

A bijective function, f: X → Y, where set X is {1, 2, 3, 4} and set Y is {A, B, C, D}. For example, f(1) = D.

In mathematics, a bijection, bijective function or one-to-one correspondence is a function between the elements
of two sets, where each element of one set is paired with exactly one element of the other set, and each element of the
other set is paired with exactly one element of the rst set. There are no unpaired elements. In mathematical terms,


a bijective function f: X → Y is a one-to-one (injective) and onto (surjective) mapping of a set X to a set Y.
A bijection from the set X to the set Y has an inverse function from Y to X. If X and Y are nite sets, then the existence
of a bijection means they have the same number of elements. For innite sets the picture is more complicated, leading
to the concept of cardinal number, a way to distinguish the various sizes of innite sets.
A bijective function from a set to itself is also called a permutation.
Bijective functions are essential to many areas of mathematics including the definitions of isomorphism, homeomorphism,
diffeomorphism, permutation group, and projective map.

23.1 Definition
For more details on notation, see Function (mathematics) Notation.

For a pairing between X and Y (where Y need not be dierent from X) to be a bijection, four properties must hold:

1. each element of X must be paired with at least one element of Y,


2. no element of X may be paired with more than one element of Y,
3. each element of Y must be paired with at least one element of X, and
4. no element of Y may be paired with more than one element of X.

Satisfying properties (1) and (2) means that a bijection is a function with domain X. It is more common to see
properties (1) and (2) written as a single statement: Every element of X is paired with exactly one element of Y.
Functions which satisfy property (3) are said to be "onto Y " and are called surjections (or surjective functions).
Functions which satisfy property (4) are said to be "one-to-one functions" and are called injections (or injective
functions).[1] With this terminology, a bijection is a function which is both a surjection and an injection, or using
other words, a bijection is a function which is both one-to-one and onto.
Bijections are sometimes denoted by a two-headed rightwards arrow with tail (⤖, U+2916 RIGHTWARDS TWO-
HEADED ARROW WITH TAIL), as in f : X ⤖ Y. This symbol is a combination of the two-headed rightwards arrow
(↠, U+21A0 RIGHTWARDS TWO HEADED ARROW) sometimes used to denote surjections and the rightwards
arrow with a barbed tail (↣, U+21A3 RIGHTWARDS ARROW WITH TAIL) sometimes used to denote injections.

23.2 Examples

23.2.1 Batting line-up of a baseball or cricket team


Consider the batting line-up of a baseball or cricket team (or any list of all the players of any sports team where every
player holds a specic spot in a line-up). The set X will be the players on the team (of size nine in the case of baseball)
and the set Y will be the positions in the batting order (1st, 2nd, 3rd, etc.) The pairing is given by which player
is in what position in this order. Property (1) is satised since each player is somewhere in the list. Property (2) is
satised since no player bats in two (or more) positions in the order. Property (3) says that for each position in the
order, there is some player batting in that position and property (4) states that two or more players are never batting
in the same position in the list.

23.2.2 Seats and students of a classroom


In a classroom there are a certain number of seats. A bunch of students enter the room and the instructor asks them
all to be seated. After a quick look around the room, the instructor declares that there is a bijection between the set
of students and the set of seats, where each student is paired with the seat they are sitting in. What the instructor
observed in order to reach this conclusion was that:

1. Every student was in a seat (there was no one standing),



2. No student was in more than one seat,

3. Every seat had someone sitting there (there were no empty seats), and

4. No seat had more than one student in it.

The instructor was able to conclude that there were just as many seats as there were students, without having to count
either set.

23.3 More mathematical examples and some non-examples


For any set X, the identity function 1X: X → X, 1X(x) = x, is bijective.

The function f: R → R, f(x) = 2x + 1 is bijective, since for each y there is a unique x = (y − 1)/2 such that f(x)
= y. In more generality, any linear function over the reals, f: R → R, f(x) = ax + b (where a is non-zero) is a
bijection. Each real number y is obtained from (paired with) the real number x = (y − b)/a.

The function f: R → (−π/2, π/2), given by f(x) = arctan(x), is bijective since each real number x is paired
with exactly one angle y in the interval (−π/2, π/2) so that tan(y) = x (that is, y = arctan(x)). If the codomain
(−π/2, π/2) were made larger to include an integer multiple of π/2, then this function would no longer be onto
(surjective), since there is no real number which could be paired with the multiple of π/2 by this arctan function.

The exponential function, g: R → R, g(x) = eˣ, is not bijective: for instance, there is no x in R such that g(x) =
−1, showing that g is not onto (surjective). However, if the codomain is restricted to the positive real numbers
R⁺ ≡ (0, +∞), then g becomes bijective; its inverse (see below) is the natural logarithm function ln.

The function h: R → R⁺, h(x) = x² is not bijective: for instance, h(−1) = h(1) = 1, showing that h is not
one-to-one (injective). However, if the domain is restricted to R⁺₀ ≡ [0, +∞), then h becomes bijective; its
inverse is the positive square root function.

23.4 Inverses
A bijection f with domain X (indicated by f: X Y in functional notation) also denes a relation starting in Y and
going to X (by turning the arrows around). The process of turning the arrows around for an arbitrary function does
not, in general, yield a function, but properties (3) and (4) of a bijection say that this inverse relation is a function with
domain Y. Moreover, properties (1) and (2) then say that this inverse function is a surjection and an injection, that is,
the inverse function exists and is also a bijection. Functions that have inverse functions are said to be invertible. A
function is invertible if and only if it is a bijection.
Stated in concise mathematical notation, a function f: X Y is bijective if and only if it satises the condition

for every y in Y there is a unique x in X with y = f(x).

Continuing with the baseball batting line-up example, the function that is being dened takes as input the name of
one of the players and outputs the position of that player in the batting order. Since this function is a bijection, it has
an inverse function which takes as input a position in the batting order and outputs the player who will be batting in
that position.
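
For finite sets the inverse can be built simply by reversing the pairs, as in the following Python sketch (an assumption of this text; the example dictionary is invented to echo the batting line-up discussion): the construction succeeds, and round-trips correctly, exactly when the original mapping is a bijection.

def inverse(mapping):
    """Return the inverse of a finite bijection given as a dict; raise if it is not injective."""
    inv = {}
    for x, y in mapping.items():
        if y in inv:
            raise ValueError('not injective: %r and %r both map to %r' % (inv[y], x, y))
        inv[y] = x
    return inv

batting_position = {'Ruth': 3, 'Gehrig': 4, 'Combs': 1}   # player -> position (invented data)
position_to_player = inverse(batting_position)

print(position_to_player[4])                                                         # Gehrig
print(all(position_to_player[batting_position[p]] == p for p in batting_position))   # True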

23.5 Composition
The composition g ∘ f of two bijections f: X → Y and g: Y → Z is a bijection. The inverse of g ∘ f is (g ∘ f)⁻¹ =
(f⁻¹) ∘ (g⁻¹).

Conversely, if the composition g ∘ f of two functions is bijective, we can only say that f is injective and g is surjective.

A bijection composed of an injection (left) and a surjection (right).

23.6 Bijections and cardinality


If X and Y are nite sets, then there exists a bijection between the two sets X and Y if and only if X and Y have
the same number of elements. Indeed, in axiomatic set theory, this is taken as the denition of same number of
elements (equinumerosity), and generalising this denition to innite sets leads to the concept of cardinal number,
a way to distinguish the various sizes of innite sets.

23.7 Properties
A function f: R → R is bijective if and only if its graph meets every horizontal and vertical line exactly once.

If X is a set, then the bijective functions from X to itself, together with the operation of functional composition
(∘), form a group, the symmetric group of X, which is denoted variously by S(X), SX, or X! (X factorial).

Bijections preserve cardinalities of sets: for a subset A of the domain with cardinality |A| and subset B of the
codomain with cardinality |B|, one has the following equalities:

|f(A)| = |A| and |f⁻¹(B)| = |B|.

If X and Y are nite sets with the same cardinality, and f: X Y, then the following are equivalent:

1. f is a bijection.
2. f is a surjection.
3. f is an injection.

For a finite set S, there is a bijection between the set of possible total orderings of the elements and the set of
bijections from S to S. That is to say, the number of permutations of elements of S is the same as the number
of total orderings of that set, namely n!.

23.8 Bijections and category theory


Bijections are precisely the isomorphisms in the category Set of sets and set functions. However, the bijections are not
always the isomorphisms for more complex categories. For example, in the category Grp of groups, the morphisms
must be homomorphisms since they must preserve the group structure, so the isomorphisms are group isomorphisms
which are bijective homomorphisms.

23.9 Generalization to partial functions


The notion of one-to-one correspondence generalizes to partial functions, where they are called partial bijections,
although partial bijections are only required to be injective. The reason for this relaxation is that a (proper) partial
function is already undened for a portion of its domain; thus there is no compelling reason to constrain its inverse
to be a total function, i.e. dened everywhere on its domain. The set of all partial bijections on a given base set is
called the symmetric inverse semigroup.[2]
Another way of defining the same notion is to say that a partial bijection from A to B is any relation R (which turns
out to be a partial function) with the property that R is the graph of a bijection f: A′ → B′, where A′ is a subset of A
and B′ is a subset of B.[3]
When the partial bijection is on the same set, it is sometimes called a one-to-one partial transformation.[4] An
example is the Möbius transformation simply defined on the complex plane, rather than its completion to the extended
complex plane.[5]

23.10 Contrast with



Multivalued function

23.11 See also


Bijection, injection and surjection
Symmetric group
Bijective numeration
Bijective proof
Category theory
AxGrothendieck theorem

23.12 Notes
[1] There are names associated to properties (1) and (2) as well. A relation which satises property (1) is called a total relation
and a relation satisfying (2) is a single valued relation.

[2] Christopher Hollings (16 July 2014). Mathematics across the Iron Curtain: A History of the Algebraic Theory of Semigroups.
American Mathematical Society. p. 251. ISBN 978-1-4704-1493-1.

[3] Francis Borceux (1994). Handbook of Categorical Algebra: Volume 2, Categories and Structures. Cambridge University
Press. p. 289. ISBN 978-0-521-44179-7.

[4] Pierre A. Grillet (1995). Semigroups: An Introduction to the Structure Theory. CRC Press. p. 228. ISBN 978-0-8247-
9662-4.

[5] John Meakin (2007). Groups and semigroups: connections and contrasts. In C.M. Campbell, M.R. Quick, E.F. Robert-
son, G.C. Smith. Groups St Andrews 2005 Volume 2. Cambridge University Press. p. 367. ISBN 978-0-521-69470-4.
preprint citing Lawson, M. V. (1998). The Mbius Inverse Monoid. Journal of Algebra. 200 (2): 428. doi:10.1006/jabr.1997.7242.

23.13 References
This topic is a basic concept in set theory and can be found in any text which includes an introduction to set theory.
Almost all texts that deal with an introduction to writing proofs will include a section on set theory, so the topic may
be found in any of these:

Wolf (1998). Proof, Logic and Conjecture: A Mathematicians Toolbox. Freeman.

Sundstrom (2003). Mathematical Reasoning: Writing and Proof. Prentice-Hall.


Smith; Eggen; St.Andre (2006). A Transition to Advanced Mathematics (6th Ed.). Thomson (Brooks/Cole).

Schumacher (1996). Chapter Zero: Fundamental Notions of Abstract Mathematics. Addison-Wesley.


O'Leary (2003). The Structure of Proof: With Logic and Set Theory. Prentice-Hall.

Morash. Bridge to Abstract Mathematics. Random House.


Maddox (2002). Mathematical Thinking and Writing. Harcourt/ Academic Press.

Lay (2001). Analysis with an introduction to proof. Prentice Hall.

Gilbert; Vanstone (2005). An Introduction to Mathematical Thinking. Pearson Prentice-Hall.


Fletcher; Patty. Foundations of Higher Mathematics. PWS-Kent.

Iglewicz; Stoyle. An Introduction to Mathematical Reasoning. MacMillan.


Devlin, Keith (2004). Sets, Functions, and Logic: An Introduction to Abstract Mathematics. Chapman & Hall/
CRC Press.
D'Angelo; West (2000). Mathematical Thinking: Problem Solving and Proofs. Prentice Hall.

Cupillari. The Nuts and Bolts of Proofs. Wadsworth.


Bond. Introduction to Abstract Mathematics. Brooks/Cole.

Barnier; Feldman (2000). Introduction to Advanced Mathematics. Prentice Hall.


Ash. A Primer of Abstract Mathematics. MAA.

23.14 External links


Hazewinkel, Michiel, ed. (2001) [1994], Bijection, Encyclopedia of Mathematics, Springer Science+Business
Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

Weisstein, Eric W. Bijection. MathWorld.

Earliest Uses of Some of the Words of Mathematics: entry on Injection, Surjection and Bijection has the history
of Injection and related terms.
Chapter 24

Bijection, injection and surjection

In mathematics, injections, surjections and bijections are classes of functions distinguished by the manner in which
arguments (input expressions from the domain) and images (output expressions from the codomain) are related or
mapped to each other.
A function maps elements from its domain to elements in its codomain. Given a function f : X → Y

The function is injective (one-to-one) if every element of the codomain is mapped to by at most one element
of the domain. An injective function is an injection. Notationally:

∀x, x′ ∈ X, f(x) = f(x′) ⇒ x = x′.
Or, equivalently (using logical transposition),
∀x, x′ ∈ X, x ≠ x′ ⇒ f(x) ≠ f(x′).

The function is surjective (onto) if every element of the codomain is mapped to by at least one element of the
domain. (That is, the image and the codomain of the function are equal.) A surjective function is a surjection.
Notationally:

∀y ∈ Y, ∃x ∈ X such that y = f(x).

The function is bijective (one-to-one and onto or one-to-one correspondence) if every element of the
codomain is mapped to by exactly one element of the domain. (That is, the function is both injective and
surjective.) A bijective function is a bijection.

An injective function need not be surjective (not all elements of the codomain may be associated with arguments),
and a surjective function need not be injective (some images may be associated with more than one argument). The
four possible combinations of injective and surjective features are illustrated in the diagrams to the right.

24.1 Injection
Main article: Injective function
For more details on notation, see Function (mathematics) Notation.
A function is injective (one-to-one) if every possible element of the codomain is mapped to by at most one argument.
Equivalently, a function is injective if it maps distinct arguments to distinct images. An injective function is an
injection. The formal denition is the following.

The function f : X → Y is injective iff for all x, x′ ∈ X, we have f(x) = f(x′) ⇒ x = x′.

A function f : X → Y is injective if and only if X is empty or f is left-invertible; that is, there is a function g :
f(X) → X such that g o f = identity function on X. Here f(X) is the image of f.


Injective composition: the second function need not be injective.

Since every function is surjective when its codomain is restricted to its image, every injection induces a bijection
onto its image. More precisely, every injection f : X → Y can be factored as a bijection followed by an inclusion
as follows. Let fR : X → f(X) be f with codomain restricted to its image, and let i : f(X) → Y be the inclusion
map from f(X) into Y. Then f = i o fR. A dual factorisation is given for surjections below.
The composition of two injections is again an injection, but if g o f is injective, then it can only be concluded
that f is injective. See the gure at right.
Every embedding is injective.

24.2 Surjection
Main article: Surjective function
A function is surjective (onto) if every possible image is mapped to by at least one argument. In other words, every
element in the codomain has non-empty preimage. Equivalently, a function is surjective if its image is equal to its
codomain. A surjective function is a surjection. The formal denition is the following.

The function f : X → Y is surjective iff for all y ∈ Y, there is x ∈ X such that f(x) = y.

A function f : X → Y is surjective if and only if it is right-invertible, that is, if and only if there is a function
g: Y → X such that f o g = identity function on Y. (This statement is equivalent to the axiom of choice.)
By collapsing all arguments mapping to a given fixed image, every surjection induces a bijection defined on a
quotient of its domain. More precisely, every surjection f : X → Y can be factored as a non-bijection followed
by a bijection as follows. Let X/~ be the equivalence classes of X under the following equivalence relation: x
~ y if and only if f(x) = f(y). Equivalently, X/~ is the set of all preimages under f. Let P(~) : X → X/~ be the
projection map which sends each x in X to its equivalence class [x]~, and let fP : X/~ → Y be the well-defined
function given by fP([x]~) = f(x). Then f = fP o P(~). A dual factorisation is given for injections above.
The composition of two surjections is again a surjection, but if g o f is surjective, then it can only be concluded
that g is surjective. See the gure.

Surjective composition: the first function need not be surjective.

24.3 Bijection
Main article: Bijective function
A function is bijective if it is both injective and surjective. A bijective function is a bijection (one-to-one corre-
spondence). A function is bijective if and only if every possible image is mapped to by exactly one argument. This
equivalent condition is formally expressed as follow.

The function f : X → Y is bijective iff for all y ∈ Y, there is a unique x ∈ X such that f(x) = y.

A function f : X → Y is bijective if and only if it is invertible, that is, there is a function g: Y → X such that
g o f = identity function on X and f o g = identity function on Y. This function maps each image to its unique
preimage.

The composition of two bijections is again a bijection, but if g o f is a bijection, then it can only be concluded
that f is injective and g is surjective. (See the gure at right and the remarks above regarding injections and
surjections.)

The bijections from a set to itself form a group under composition, called the symmetric group.
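
For functions between finite sets the three definitions can be checked mechanically. The Python sketch below (written for this text; the dictionaries are toy examples) classifies a function given as a dict from its domain, relative to a stated codomain.

def is_injective(f):
    """f is a dict mapping each domain element to its image."""
    return len(set(f.values())) == len(f)

def is_surjective(f, codomain):
    return set(f.values()) == set(codomain)

def is_bijective(f, codomain):
    return is_injective(f) and is_surjective(f, codomain)

f = {1: 'D', 2: 'B', 3: 'C', 4: 'A'}          # bijection onto {A, B, C, D}
g = {1: 'A', 2: 'A', 3: 'B'}                  # neither injective nor surjective onto {A, B, C}

print(is_bijective(f, ['A', 'B', 'C', 'D']))                # True
print(is_injective(g), is_surjective(g, ['A', 'B', 'C']))   # False False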

24.4 Cardinality
Suppose you want to dene what it means for two sets to have the same number of elements. One way to do this is
to say that two sets have the same number of elements if and only if all the elements of one set can be paired with
the elements of the other, in such a way that each element is paired with exactly one element. Accordingly, we can
dene two sets to have the same number of elements if there is a bijection between them. We say that the two sets
have the same cardinality.
Likewise, we can say that set X has fewer than or the same number of elements as set Y if there is an injection
from X to Y . We can also say that set X has fewer than the number of elements in set Y if there is an injection
from X to Y but not a bijection between X and Y .

Bijective composition: the first function need not be surjective and the second function need not be injective.

24.5 Examples
It is important to specify the domain and codomain of each function since by changing these, functions which we
think of as the same may have different jectivity.

24.5.1 Injective and surjective (bijective)

For every set X the identity function idX and thus specifically R → R : x ↦ x.

R⁺ → R⁺ : x ↦ x², and thus also its inverse R⁺ → R⁺ : x ↦ √x.

The exponential function exp : R → R⁺ : x ↦ eˣ, and thus also its inverse the natural logarithm ln : R⁺ → R : x ↦ ln x.

24.5.2 Injective and non-surjective

The exponential function exp : R → R : x ↦ eˣ

24.5.3 Non-injective and surjective

R → R : x ↦ (x − 1)x(x + 1) = x³ − x

R → [−1, 1] : x ↦ sin(x)

24.5.4 Non-injective and non-surjective

R → R : x ↦ sin(x)

24.6 Properties
For every function f, subset X of the domain and subset Y of the codomain we have X ⊆ f⁻¹(f(X)) and
f(f⁻¹(Y)) ⊆ Y. If f is injective we have X = f⁻¹(f(X)) and if f is surjective we have f(f⁻¹(Y)) = Y.
For every function h : X → Y we can define a surjection H : X → h(X) : x ↦ h(x) and an injection I : h(X)
→ Y : x ↦ x. It follows that h = I ∘ H. This decomposition is unique up to isomorphism.

24.7 Category theory


In the category of sets, injections, surjections, and bijections correspond precisely to monomorphisms, epimorphisms,
and isomorphisms, respectively.

24.8 History
This terminology was originally coined by the Bourbaki group.

24.9 See also


Bijective function
Horizontal line test

Injective module
Injective function

Permutation
Surjective function

24.10 External links


Earliest Uses of Some of the Words of Mathematics: entry on Injection, Surjection and Bijection has the history
of Injection and related terms.
Chapter 25

Binary decision diagram

In computer science, a binary decision diagram (BDD) or branching program is a data structure that is used to
represent a Boolean function. On a more abstract level, BDDs can be considered as a compressed representation of
sets or relations. Unlike other compressed representations, operations are performed directly on the compressed rep-
resentation, i.e. without decompression. Other data structures used to represent a Boolean function include negation
normal form (NNF), and propositional directed acyclic graph (PDAG).

25.1 Definition
A Boolean function can be represented as a rooted, directed, acyclic graph, which consists of several decision nodes
and terminal nodes. There are two types of terminal nodes called 0-terminal and 1-terminal. Each decision node N
is labeled by Boolean variable VN and has two child nodes called low child and high child. The edge from node VN
to a low (or high) child represents an assignment of VN to 0 (resp. 1). Such a BDD is called 'ordered' if dierent
variables appear in the same order on all paths from the root. A BDD is said to be 'reduced' if the following two rules
have been applied to its graph:

Merge any isomorphic subgraphs.

Eliminate any node whose two children are isomorphic.

In popular usage, the term BDD almost always refers to Reduced Ordered Binary Decision Diagram (ROBDD in
the literature, used when the ordering and reduction aspects need to be emphasized). The advantage of an ROBDD is
that it is canonical (unique) for a particular function and variable order.[1] This property makes it useful in functional
equivalence checking and other operations like functional technology mapping.
A path from the root node to the 1-terminal represents a (possibly partial) variable assignment for which the repre-
sented Boolean function is true. As the path descends to a low (or high) child from a node, then that nodes variable
is assigned to 0 (resp. 1).
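
To make the data structure concrete, here is a compact Python sketch of a reduced ordered BDD (an illustration written for this text, not the article's algorithm): nodes are built through a unique table so that isomorphic subgraphs are merged, nodes whose two children coincide are eliminated, and evaluation follows the low or high edge according to the assignment.

class ROBDD:
    """Reduced ordered BDD built by Shannon expansion over a fixed variable order."""
    def __init__(self, num_vars):
        self.n = num_vars
        self.unique = {}          # (level, low, high) -> node, merges isomorphic subgraphs

    def node(self, level, low, high):
        if low == high:                       # elimination rule: redundant test
            return low
        key = (level, low, high)
        if key not in self.unique:            # merging rule: reuse an existing node
            self.unique[key] = key
        return self.unique[key]

    def build(self, f, level=0, partial=()):
        if level == self.n:
            return int(f(partial))            # terminal 0 or 1
        low = self.build(f, level + 1, partial + (0,))
        high = self.build(f, level + 1, partial + (1,))
        return self.node(level, low, high)

    def evaluate(self, root, assignment):
        node = root
        while node not in (0, 1):
            level, low, high = node
            node = high if assignment[level] else low
        return node

bdd = ROBDD(3)
root = bdd.build(lambda x: (x[0] and x[1]) or x[2])   # an example function f(x1, x2, x3)
print(bdd.evaluate(root, (0, 1, 1)))                  # 1
print(len(bdd.unique))                                # number of internal nodes after reduction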

25.1.1 Example

The left gure below shows a binary decision tree (the reduction rules are not applied), and a truth table, each rep-
resenting the function f (x1, x2, x3). In the tree on the left, the value of the function can be determined for a given
variable assignment by following a path down the graph to a terminal. In the gures below, dotted lines represent
edges to a low child, while solid lines represent edges to a high child. Therefore, to nd (x1=0, x2=1, x3=1), begin
at x1, traverse down the dotted line to x2 (since x1 has an assignment to 0), then down two solid lines (since x2 and
x3 each have an assignment to one). This leads to the terminal 1, which is the value of f (x1=0, x2=1, x3=1).
The binary decision tree of the left gure can be transformed into a binary decision diagram by maximally reducing
it according to the two reduction rules. The resulting BDD is shown in the right gure.


25.2 History
The basic idea from which the data structure was created is the Shannon expansion. A switching function is split
into two sub-functions (cofactors) by assigning one variable (cf. if-then-else normal form). If such a sub-function
is considered as a sub-tree, it can be represented by a binary decision tree. Binary decision diagrams (BDD) were
introduced by Lee,[2] and further studied and made known by Akers[3] and Boute.[4]
The full potential for ecient algorithms based on the data structure was investigated by Randal Bryant at Carnegie
Mellon University: his key extensions were to use a xed variable ordering (for canonical representation) and shared
sub-graphs (for compression). Applying these two concepts results in an ecient data structure and algorithms for
the representation of sets and relations.[5][6] By extending the sharing to several BDDs, i.e. one sub-graph is used
by several BDDs, the data structure Shared Reduced Ordered Binary Decision Diagram is dened.[7] The notion of a
BDD is now generally used to refer to that particular data structure.
In his video lecture Fun With Binary Decision Diagrams (BDDs),[8] Donald Knuth calls BDDs "one of the only really fundamental data structures that came out in the last twenty-five years" and mentions that Bryant's 1986 paper was for some time one of the most-cited papers in computer science.
Adnan Darwiche and his collaborators have shown that BDDs are one of several normal forms for Boolean functions, each induced by a different combination of requirements. Another important normal form identified by Darwiche is Decomposable Negation Normal Form or DNNF.

25.3 Applications
BDDs are extensively used in CAD software to synthesize circuits (logic synthesis) and in formal verification. There are several lesser-known applications of BDDs, including fault tree analysis, Bayesian reasoning, product configuration, and private information retrieval.[9][10]
Every arbitrary BDD (even if it is not reduced or ordered) can be directly implemented in hardware by replacing each node with a 2-to-1 multiplexer; each multiplexer can be directly implemented by a 4-LUT in an FPGA. It is not so simple to convert from an arbitrary network of logic gates to a BDD (unlike the and-inverter graph).

25.4 Variable ordering


The size of the BDD is determined both by the function being represented and by the chosen ordering of the variables. There exist Boolean functions f(x_1, ..., x_n) for which, depending upon the ordering of the variables, the number of nodes in the graph is linear (in n) in the best case and exponential in the worst case (e.g., a ripple carry adder). Consider the Boolean function f(x_1, ..., x_{2n}) = x_1 x_2 + x_3 x_4 + ... + x_{2n-1} x_{2n}. Using the variable ordering x_1 < x_3 < ... < x_{2n-1} < x_2 < x_4 < ... < x_{2n}, the BDD needs 2^{n+1} nodes to represent the function. Using the ordering x_1 < x_2 < x_3 < x_4 < ... < x_{2n-1} < x_{2n}, the BDD consists of 2n + 2 nodes.
It is of crucial importance to care about variable ordering when applying this data structure in practice. The problem of finding the best variable ordering is NP-hard.[11] For any constant c > 1 it is even NP-hard to compute a variable ordering resulting in an OBDD with a size that is at most c times larger than an optimal one.[12] However, there exist efficient heuristics to tackle the problem.[13]
There are functions for which the graph size is always exponential, independent of the variable ordering. This holds e.g. for the multiplication function.[1] In fact, the function computing the middle bit of the product of two n-bit numbers does not have an OBDD smaller than 2^{n/2}/61 - 4 vertices.[14] (If the multiplication function had polynomial-size OBDDs, it would show that integer factorization is in P/poly, which is not known to be true.[15])
Researchers have suggested refinements of the BDD data structure, giving rise to a number of related graphs, such as BMDs (binary moment diagrams), ZDDs (zero-suppressed decision diagrams), FDDs (free binary decision diagrams), PDDs (parity decision diagrams), and MTBDDs (multiple terminal BDDs).

25.5 Logical operations on BDDs


Many logical operations on BDDs can be implemented by polynomial-time graph manipulation algorithms:[16]:20

• conjunction
• disjunction
• negation
• existential abstraction
• universal abstraction

However, repeating these operations several times, for example when forming the conjunction or disjunction of a set of BDDs, may in the worst case result in an exponentially large BDD. This is because any of the preceding operations for two BDDs may result in a BDD with a size proportional to the product of the BDDs' sizes, and consequently for several BDDs the size may be exponential. Also, since constructing the BDD of a Boolean function solves the NP-complete Boolean satisfiability problem and the co-NP-complete tautology problem, constructing the BDD can take exponential time in the size of the Boolean formula even when the resulting BDD is small.
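These operations are typically implemented with a recursive "apply" procedure that performs a simultaneous Shannon expansion of both operands on their topmost variable. The following C sketch of the conjunction case is illustrative only: it omits the unique table (hash-consing) and the operation cache that a real BDD package needs in order to stay within polynomial time, and the names are not taken from any specific library.

#include <stdbool.h>
#include <stdlib.h>

struct node {
    int var;                       /* variable index; terminals use a large sentinel */
    bool value;                    /* terminal value */
    struct node *low, *high;       /* NULL for terminals */
};

struct node TRUE_NODE  = { 1 << 30, true,  NULL, NULL };
struct node FALSE_NODE = { 1 << 30, false, NULL, NULL };

static bool is_terminal(const struct node *n) { return n->low == NULL; }

static struct node *mk(int var, struct node *low, struct node *high) {
    if (low == high)               /* reduction rule: drop a redundant test */
        return low;
    struct node *n = malloc(sizeof *n);
    n->var = var; n->value = false; n->low = low; n->high = high;
    return n;                      /* a real package would hash-cons (share) nodes here */
}

/* Conjunction of two BDDs over the same variable order, by simultaneous
 * Shannon expansion on the smaller top variable. */
struct node *bdd_and(struct node *f, struct node *g) {
    if (is_terminal(f)) return f->value ? g : &FALSE_NODE;
    if (is_terminal(g)) return g->value ? f : &FALSE_NODE;
    int v = f->var < g->var ? f->var : g->var;
    struct node *f0 = (f->var == v) ? f->low  : f;
    struct node *f1 = (f->var == v) ? f->high : f;
    struct node *g0 = (g->var == v) ? g->low  : g;
    struct node *g1 = (g->var == v) ? g->high : g;
    return mk(v, bdd_and(f0, g0), bdd_and(f1, g1));   /* real code memoizes (f, g) pairs */
}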

25.6 See also


Boolean satisability problem

L/poly, a complexity class that captures the complexity of problems with polynomially sized BDDs

Model checking

Radix tree

Barringtons theorem

25.7 References
[1] Graph-Based Algorithms for Boolean Function Manipulation, Randal E. Bryant, 1986

[2] C. Y. Lee. Representation of Switching Circuits by Binary-Decision Programs. Bell System Technical Journal, 38:985–999, 1959.

[3] Sheldon B. Akers. Binary Decision Diagrams, IEEE Transactions on Computers, C-27(6):509–516, June 1978.

[4] Raymond T. Boute, The Binary Decision Machine as a programmable controller. EUROMICRO Newsletter, Vol. 1(2):16–22, January 1976.

[5] Randal E. Bryant. "Graph-Based Algorithms for Boolean Function Manipulation". IEEE Transactions on Computers, C-35(8):677–691, 1986.

[6] R. E. Bryant, "Symbolic Boolean Manipulation with Ordered Binary Decision Diagrams", ACM Computing Surveys, Vol. 24, No. 3 (September 1992), pp. 293–318.

[7] Karl S. Brace, Richard L. Rudell and Randal E. Bryant. "Efficient Implementation of a BDD Package". In Proceedings of the 27th ACM/IEEE Design Automation Conference (DAC 1990), pages 40–45. IEEE Computer Society Press, 1990.

[8] http://scpd.stanford.edu/knuth/index.jsp

[9] R.M. Jensen. CLab: A C+ + library for fast backtrack-free interactive product conguration. Proceedings of the Tenth
International Conference on Principles and Practice of Constraint Programming, 2004.

[10] H.L. Lipmaa. First CPIR Protocol with Data-Dependent Computation. ICISC 2009.

[11] Beate Bollig, Ingo Wegener. Improving the Variable Ordering of OBDDs Is NP-Complete, IEEE Transactions on Computers, 45(9):993–1002, September 1996.

[12] Detlef Sieling. The nonapproximability of OBDD minimization. Information and Computation 172, 103–138, 2002.

[13] Rice, Michael. A Survey of Static Variable Ordering Heuristics for Efficient BDD/MDD Construction (PDF).

[14] Philipp Woelfel. "Bounds on the OBDD-size of integer multiplication via universal hashing". Journal of Computer and System Sciences 71, pp. 520–534, 2005.

[15] Richard J. Lipton. BDDs and Factoring. Gödel's Lost Letter and P=NP, 2009.

[16] Andersen, H. R. (1999). An Introduction to Binary Decision Diagrams (PDF). Lecture Notes. IT University of Copen-
hagen.

R. Ubar, "Test Generation for Digital Circuits Using Alternative Graphs (in Russian)", in Proc. Tallinn Technical University, 1976, No. 409, Tallinn Technical University, Tallinn, Estonia, pp. 75–81.

25.8 Further reading


D. E. Knuth, "The Art of Computer Programming Volume 4, Fascicle 1: Bitwise tricks & techniques; Binary
Decision Diagrams (AddisonWesley Professional, March 27, 2009) viii+260pp, ISBN 0-321-58050-8. Draft
of Fascicle 1b available for download.

Ch. Meinel, T. Theobald, "Algorithms and Data Structures in VLSI-Design: OBDD Foundations and Appli-
cations, Springer-Verlag, Berlin, Heidelberg, New York, 1998. Complete textbook available for download.

Rüdiger Ebendt; Görschwin Fey; Rolf Drechsler (2005). Advanced BDD Optimization. Springer. ISBN 978-0-387-25453-1.

Bernd Becker; Rolf Drechsler (1998). Binary Decision Diagrams: Theory and Implementation. Springer.
ISBN 978-1-4419-5047-5.

25.9 External links


Fun With Binary Decision Diagrams (BDDs), lecture by Donald Knuth
List of BDD software libraries for several programming languages.
Chapter 26

Bitwise operation

In digital computer programming, a bitwise operation operates on one or more bit patterns or binary numerals at the
level of their individual bits. It is a fast, simple action directly supported by the processor, and is used to manipulate
values for comparisons and calculations.
On simple low-cost processors, bitwise operations are typically substantially faster than division, several times faster than multiplication, and sometimes significantly faster than addition. While modern processors usually perform addition and multiplication just as fast as bitwise operations due to their longer instruction pipelines and other architectural design choices, bitwise operations do commonly use less power because of the reduced use of resources.[1]

26.1 Bitwise operators


In the explanations below, any indication of a bit's position is counted from the right (least significant) side, advancing left. For example, the binary value 0001 (decimal 1) has zeroes at every position but the first one.

26.1.1 NOT

The bitwise NOT, or complement, is a unary operation that performs logical negation on each bit, forming the ones' complement of the given binary value. Bits that are 0 become 1, and those that are 1 become 0. For example:
NOT 0111 (decimal 7) = 1000 (decimal 8)
NOT 10101011 = 01010100
The bitwise complement is equal to the two's complement of the value minus one. If two's complement arithmetic is used, then NOT x = -x - 1.
For unsigned integers, the bitwise complement of a number is the mirror reflection of the number across the half-way point of the unsigned integer's range. For example, for 8-bit unsigned integers, NOT x = 255 - x, which can be visualized on a graph as a downward line that effectively flips an increasing range from 0 to 255 to a decreasing range from 255 to 0. A simple but illustrative example use is to invert a grayscale image, where each pixel is stored as an unsigned integer.
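As a brief illustration in C (an example added here, not part of the original text), the ~ operator computes the bitwise complement, and both identities above can be checked directly:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t x = 7;                               /* 0000 0111 */
    printf("%u\n", (unsigned)(uint8_t)~x);       /* 248, i.e. 255 - 7: the unsigned mirror image */

    int8_t s = 7;
    printf("%d\n", (int)(int8_t)~s);             /* -8, i.e. -s - 1 under two's complement */
    return 0;
}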

26.1.2 AND

A bitwise AND takes two equal-length binary representations and performs the logical AND operation on each pair
of the corresponding bits, by multiplying them. Thus, if both bits in the compared position are 1, the bit in the resulting binary representation is 1 (1 × 1 = 1); otherwise, the result is 0 (1 × 0 = 0 and 0 × 0 = 0). For example:
0101 (decimal 5) AND 0011 (decimal 3) = 0001 (decimal 1)
The operation may be used to determine whether a particular bit is set (1) or clear (0). For example, given a bit pattern
0011 (decimal 3), to determine whether the second bit is set we use a bitwise AND with a bit pattern containing 1
only in the second bit:
0011 (decimal 3) AND 0010 (decimal 2) = 0010 (decimal 2)

95
96 CHAPTER 26. BITWISE OPERATION

Because the result 0010 is non-zero, we know the second bit in the original pattern was set. This is often called bit
masking. (By analogy, the use of masking tape covers, or masks, portions that should not be altered or portions that
are not of interest. In this case, the 0 values mask the bits that are not of interest.)
The bitwise AND may be used to clear selected bits (or flags) of a register in which each bit represents an individual Boolean state. This technique is an efficient way to store a number of Boolean values using as little memory as possible.
For example, 0110 (decimal 6) can be considered a set of four flags, where the first and fourth flags are clear (0), and the second and third flags are set (1). The second bit may be cleared by using a bitwise AND with the pattern that has a zero only in the second bit:
0110 (decimal 6) AND 1101 (decimal 13) = 0100 (decimal 4)
Because of this property, it becomes easy to check the parity of a binary number by checking the value of the lowest
valued bit. Using the example above:
0110 (decimal 6) AND 0001 (decimal 1) = 0000 (decimal 0)
Because 6 AND 1 is zero, 6 is divisible by two and therefore even.
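For illustration, the masking idioms above look as follows in C (an example added here; the values mirror those in the text):

#include <stdio.h>

int main(void) {
    unsigned pattern = 0x3;              /* 0011 */
    if (pattern & 0x2)                   /* bit test: 0011 AND 0010 is non-zero */
        printf("second bit is set\n");

    unsigned flags = 0x6;                /* 0110: second and third flags set */
    flags &= 0xDu;                       /* clear the second flag: 0110 AND 1101 = 0100 */
    printf("flags = %x\n", flags);       /* prints 4 */

    printf("6 is %s\n", (6 & 1) ? "odd" : "even");   /* parity check via the lowest bit */
    return 0;
}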

26.1.3 OR

A bitwise OR takes two bit patterns of equal length and performs the logical inclusive OR operation on each pair of
corresponding bits. The result in each position is 0 if both bits are 0, while otherwise the result is 1. For example:
0101 (decimal 5) OR 0011 (decimal 3) = 0111 (decimal 7)
The bitwise OR may be used to set the selected bits of the register described above to 1. For example, the fourth bit of 0010 (decimal 2) may be set by performing a bitwise OR with the pattern with only the fourth bit set:
0010 (decimal 2) OR 1000 (decimal 8) = 1010 (decimal 10)
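In C this reads, for example (mirroring the values above; an illustration added here):

#include <stdio.h>

int main(void) {
    unsigned reg = 0x2;                  /* 0010 */
    reg |= 0x8;                          /* set the fourth bit: 0010 OR 1000 = 1010 */
    printf("reg = %x\n", reg);           /* prints "a" (decimal 10) */
    return 0;
}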

26.1.4 XOR

A bitwise XOR takes two bit patterns of equal length and performs the logical exclusive OR operation on each pair of corresponding bits. The result in each position is 1 if only one of the two bits is 1, but will be 0 if both are 0 or both are 1. In this we perform the comparison of two bits, being 1 if the two bits are different, and 0 if they are the same. For example:
0101 (decimal 5) XOR 0011 (decimal 3) = 0110 (decimal 6)
The bitwise XOR may be used to invert selected bits in a register (also called toggle or flip). Any bit may be toggled by XORing it with 1. For example, given the bit pattern 0010 (decimal 2), the second and fourth bits may be toggled by a bitwise XOR with a bit pattern containing 1 in the second and fourth positions:
0010 (decimal 2) XOR 1010 (decimal 10) = 1000 (decimal 8)
This technique may be used to manipulate bit patterns representing sets of Boolean states.
Assembly language programmers sometimes use XOR as a short-cut to setting the value of a register to zero. Per-
forming XOR on a value against itself always yields zero, and on many architectures this operation requires fewer
clock cycles and memory than loading a zero value and saving it to the register.
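In C, the toggling and zeroing idioms above look, for example, like this (an illustration added here):

#include <stdio.h>

int main(void) {
    unsigned reg = 0x2;                  /* 0010 */
    reg ^= 0xA;                          /* toggle second and fourth bits: 0010 XOR 1010 = 1000 */
    printf("reg = %x\n", reg);           /* prints 8 */

    reg ^= reg;                          /* XORing a value with itself always yields zero */
    printf("reg = %u\n", reg);           /* prints 0 */
    return 0;
}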

26.1.5 Mathematical equivalents

Assuming x ≥ y, for the non-negative integers the bitwise operations can be written as follows:

NOT x = \sum_{n=0}^{\lfloor \log_2 x \rfloor} 2^n \left[ \left( \left\lfloor \frac{x}{2^n} \right\rfloor \bmod 2 + 1 \right) \bmod 2 \right] = 2^{\lfloor \log_2 x \rfloor + 1} - 1 - x

x AND y = \sum_{n=0}^{\lfloor \log_2 x \rfloor} 2^n \left( \left\lfloor \frac{x}{2^n} \right\rfloor \bmod 2 \right) \left( \left\lfloor \frac{y}{2^n} \right\rfloor \bmod 2 \right)

x OR y = \sum_{n=0}^{\lfloor \log_2 x \rfloor} 2^n \left[ \left( \left\lfloor \frac{x}{2^n} \right\rfloor \bmod 2 + \left\lfloor \frac{y}{2^n} \right\rfloor \bmod 2 + \left( \left\lfloor \frac{x}{2^n} \right\rfloor \bmod 2 \right) \left( \left\lfloor \frac{y}{2^n} \right\rfloor \bmod 2 \right) \right) \bmod 2 \right]

x XOR y = \sum_{n=0}^{\lfloor \log_2 x \rfloor} 2^n \left[ \left( \left\lfloor \frac{x}{2^n} \right\rfloor \bmod 2 + \left\lfloor \frac{y}{2^n} \right\rfloor \bmod 2 \right) \bmod 2 \right] = \sum_{n=0}^{\lfloor \log_2 x \rfloor} 2^n \left[ \left( \left\lfloor \frac{x}{2^n} \right\rfloor + \left\lfloor \frac{y}{2^n} \right\rfloor \right) \bmod 2 \right]
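The formulas above can be checked mechanically. The following small C program (an illustration added here, not part of the original text) compares them with the machine operators for all 8-bit operand pairs with x ≥ y:

#include <assert.h>
#include <stdio.h>

/* bit(v, n) is floor(v / 2^n) mod 2, i.e. bit n of v. */
static unsigned bit(unsigned v, unsigned n) { return (v >> n) & 1u; }

int main(void) {
    for (unsigned x = 1; x < 256; x++) {
        for (unsigned y = 0; y <= x; y++) {              /* the formulas assume x >= y */
            unsigned and_ = 0, or_ = 0, xor_ = 0;
            for (unsigned n = 0; (x >> n) != 0; n++) {   /* n runs up to floor(log2 x) */
                and_ += (1u << n) * (bit(x, n) * bit(y, n));
                or_  += (1u << n) * ((bit(x, n) + bit(y, n) + bit(x, n) * bit(y, n)) % 2);
                xor_ += (1u << n) * ((bit(x, n) + bit(y, n)) % 2);
            }
            assert(and_ == (x & y) && or_ == (x | y) && xor_ == (x ^ y));
        }
    }
    printf("formulas agree with &, | and ^ for all 8-bit pairs with x >= y\n");
    return 0;
}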

26.2 Bit shifts


The bit shifts are sometimes considered bitwise operations because they treat a value as a series of bits rather than as a numerical quantity. In these operations the digits are moved, or shifted, to the left or right. Registers in a computer processor have a fixed width, so some bits will be shifted out of the register at one end, while the same number of bits are shifted in from the other end; the differences between bit shift operators lie in how they determine the values of the shifted-in bits.

26.2.1 Arithmetic shift


Main article: Arithmetic shift
In an arithmetic shift, the bits that are shifted out of either end are discarded. In a left arithmetic shift, zeros are shifted in on the right; in a right arithmetic shift, the sign bit (the MSB in two's complement) is shifted in on the left, thus preserving the sign of the operand. This statement is not reliable in the latest C language draft standard, however: if the value being shifted is negative, the result is implementation-defined, indicating the result is not necessarily consistent across platforms.[2]
(Figure: Left arithmetic shift)
(Figure: Right arithmetic shift)
This example uses an 8-bit register:
00010111 (decimal +23) LEFT-SHIFT = 00101110 (decimal +46)
10010111 (decimal -105) RIGHT-SHIFT = 11001011 (decimal -53)
In the first case, the leftmost digit was shifted past the end of the register, and a new 0 was shifted into the rightmost position. In the second case, the rightmost 1 was shifted out (perhaps into the carry flag), and a new 1 was copied into the leftmost position, preserving the sign of the number (but not reliably, according to the most recent C language draft standard, as noted above). Multiple shifts are sometimes shortened to a single shift by some number of digits.
For example:
00010111 (decimal +23) LEFT-SHIFT-BY-TWO = 01011100 (decimal +92)
A left arithmetic shift by n is equivalent to multiplying by 2^n (provided the value does not overflow), while a right arithmetic shift by n of a two's complement value is equivalent to dividing by 2^n and rounding toward negative infinity. If the binary number is treated as ones' complement, then the same right-shift operation results in division by 2^n and rounding toward zero.
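A short C illustration of the rounding behaviour (added here as an example; right-shifting a negative value is implementation-defined in C, so the second line shows the typical result on a two's-complement machine with an arithmetic right shift):

#include <stdio.h>

int main(void) {
    printf("%d\n", 23 << 1);     /* 46: a left shift by 1 multiplies by 2 */

    int b = -5;
    printf("%d\n", b >> 1);      /* typically -3: divide by 2, rounding toward negative infinity */
    printf("%d\n", b / 2);       /* -2: C integer division rounds toward zero instead */
    return 0;
}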

26.2.2 Logical shift

Main article: Logical shift

In a logical shift, zeros are shifted in to replace the discarded bits. Therefore, the logical and arithmetic left-shifts are
exactly the same.
26.2. BIT SHIFTS 99

However, as the logical right-shift inserts value 0 bits into the most significant bit, instead of copying the sign bit, it is ideal for unsigned binary numbers, while the arithmetic right-shift is ideal for signed two's complement binary numbers.

26.2.3 Rotate no carry


Main article: Circular shift

Another form of shift is the circular shift or bit rotation. In this operation, the bits are rotated as if the left and right
ends of the register were joined. The value that is shifted in on the right during a left-shift is whatever value was
shifted out on the left, and vice versa. This operation is useful if it is necessary to retain all the existing bits, and is
frequently used in digital cryptography.

26.2.4 Rotate through carry


Rotate through carry is similar to the rotate no carry operation, but the two ends of the register are separated by the carry flag. The bit that is shifted in (on either end) is the old value of the carry flag, and the bit that is shifted out (on the other end) becomes the new value of the carry flag.
A single rotate through carry can simulate a logical or arithmetic shift of one position by setting up the carry flag beforehand. For example, if the carry flag contains 0, then x RIGHT-ROTATE-THROUGH-CARRY-BY-ONE is a logical right-shift, and if the carry flag contains a copy of the sign bit, then x RIGHT-ROTATE-THROUGH-CARRY-BY-ONE is an arithmetic right-shift. For this reason, some microcontrollers such as low-end PICs just have rotate and rotate through carry, and don't bother with arithmetic or logical shift instructions.
Rotate through carry is especially useful when performing shifts on numbers larger than the processor's native word size, because if a large number is stored in two registers, the bit that is shifted off the end of the first register must come in at the other end of the second. With rotate-through-carry, that bit is saved in the carry flag during the first shift, ready to shift in during the second shift without any extra preparation.

26.2.5 Shifts in C, C++, C#, Go, Java, JavaScript, Pascal, Perl, PHP, Python and Ruby
In C-inspired languages, the left and right shift operators are "<<" and ">>", respectively. The number of places to
shift is given as the second argument to the shift operators. For example,
x = y << 2;

assigns x the result of shifting y to the left by two bits, which is equivalent to a multiplication by four.
Shifts can result in implementation-defined behavior or undefined behavior, so care must be taken when using them. The result of shifting by a bit count greater than or equal to the word's size is undefined behavior in C and C++.[3][4] Right-shifting a negative value is implementation-defined and not recommended by good coding practice;[5] the result of left-shifting a signed value is undefined if the result cannot be represented in the result type.[3] In C#, the right-shift is an arithmetic shift when the first operand is an int or long. If the first operand is of type uint or ulong, the right-shift is a logical shift.[6]

Circular shifts in C-family languages

Main article: Circular shift § Implementing circular shifts

The C-family of languages lack a rotate operator, but one can be synthesized from the shift operators. Care must be taken to ensure the statement is well formed to avoid undefined behavior and timing attacks in software with security requirements.[7] For example, a naive implementation that left-rotates a 32-bit unsigned value x by n positions is simply:
unsigned int x = ..., n = ...; unsigned int y = (x << n) | (x >> (32 - n));
100 CHAPTER 26. BITWISE OPERATION

However, a shift by 0 bits results in undefined behavior in the right-hand expression (x >> (32 - n)), because 32 - 0 is 32, and 32 is outside the range [0, 31] inclusive. A second try might result in:
unsigned int x = ..., n = ...; unsigned int y = n ? (x << n) | (x >> (32 - n)) : x;

where the shift amount is tested to ensure that it does not introduce undefined behavior. However, the branch adds an additional code path and presents an opportunity for timing analysis and attack, which is often not acceptable in high-integrity software.[7] In addition, the code compiles to multiple machine instructions, which is often less efficient than the processor's native instruction.
To avoid the undefined behavior and branches under GCC and Clang, the following should be used. The pattern is recognized by many compilers, and the compiler will emit a single rotate instruction:[8][9][10]
unsigned int x = ..., n = ...; unsigned int y = (x << n) | (x >> (-n & 31));

There are also compiler-specific intrinsics implementing circular shifts, like _rotl8, _rotl16, _rotr8, _rotr16 in Microsoft Visual C++. Clang provides some rotate intrinsics for Microsoft compatibility that suffer from the problems above.[10] GCC does not offer rotate intrinsics. Intel also provides x86 intrinsics.

Shifts in Java

In Java, all integer types are signed, so the "<<" and ">>" operators perform arithmetic shifts. Java adds the operator ">>>" to perform logical right shifts, but since the logical and arithmetic left-shift operations are identical for signed integers, there is no "<<<" operator in Java.
More details of Java shift operators:[11]
• The operators << (left shift), >> (signed right shift), and >>> (unsigned right shift) are called the shift operators.
• The type of the shift expression is the promoted type of the left-hand operand. For example, aByte >>> 2 is equivalent to ((int) aByte) >>> 2.
• If the promoted type of the left-hand operand is int, only the five lowest-order bits of the right-hand operand are used as the shift distance. It is as if the right-hand operand were subjected to a bitwise logical AND operator & with the mask value 0x1f (0b11111).[12] The shift distance actually used is therefore always in the range 0 to 31, inclusive.
• If the promoted type of the left-hand operand is long, then only the six lowest-order bits of the right-hand operand are used as the shift distance. It is as if the right-hand operand were subjected to a bitwise logical AND operator & with the mask value 0x3f (0b111111).[12] The shift distance actually used is therefore always in the range 0 to 63, inclusive.
• The value of n >>> s is n right-shifted s bit positions with zero-extension.
• In bit and shift operations, the type byte is implicitly converted to int. If the byte value is negative, the highest bit is one, and then ones are used to fill up the extra bytes in the int. So byte b1 = -5; int i = b1 | 0x0200; will give i == -5 as a result.

Shifts in JavaScript

In JavaScript, the bitwise operators convert their operands to 32-bit signed integers and operate on them bit by bit.[13]

Shifts in Pascal

In Pascal, as well as in all its dialects (such as Object Pascal and Standard Pascal), the left and right shift operators
are shl and shr, respectively. The number of places to shift is given as the second argument. For example, the
following assigns x the result of shifting y to the left by two bits:
x := y shl 2;

26.3 Other
• popcount, used in cryptography
• count leading zeros

26.4 Applications
Bitwise operations are necessary particularly in lower-level programming such as device drivers, low-level graphics,
communications protocol packet assembly, and decoding.
Although machines often have efficient built-in instructions for performing arithmetic and logical operations, all these operations can be performed by combining the bitwise operators and zero-testing in various ways.[14] For example, here is a pseudocode implementation of ancient Egyptian multiplication showing how to multiply two arbitrary integers a and b (a greater than b) using only bit shifts and addition:
c ← 0
while b ≠ 0
    if (b and 1) ≠ 0
        c ← c + a
    left shift a by 1
    right shift b by 1
return c

Another example is a pseudocode implementation of addition, showing how to calculate a sum of two integers a and
b using bitwise operators and zero-testing:
while a ≠ 0
    c ← b and a
    b ← b xor a
    left shift c by 1
    a ← c
return b
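For concreteness, the two pseudocode routines above can be translated directly into C (an illustration added here; the function names are arbitrary):

#include <assert.h>

/* Ancient Egyptian multiplication: only shifts, AND and addition. */
static unsigned multiply(unsigned a, unsigned b) {
    unsigned c = 0;
    while (b != 0) {
        if ((b & 1) != 0)
            c = c + a;
        a <<= 1;                 /* left shift a by 1 */
        b >>= 1;                 /* right shift b by 1 */
    }
    return c;
}

/* Addition built from XOR (sum ignoring carries) and AND (the carries). */
static unsigned add(unsigned a, unsigned b) {
    while (a != 0) {
        unsigned c = b & a;      /* carry bits */
        b = b ^ a;               /* sum without the carries */
        a = c << 1;              /* re-inject the carries one position to the left */
    }
    return b;
}

int main(void) {
    assert(multiply(23, 5) == 115);
    assert(add(23, 5) == 28);
    return 0;
}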

26.5 See also


Arithmetic logic unit
Bit manipulation
Bitboard
Bitwise operations in C
Boolean algebra (logic)
Double dabble
Find rst set
Karnaugh map
Logic gate
Logical operator
Primitive data type

26.6 References
[1] CMicrotek Low-power Design Blog. CMicrotek. Retrieved 12 August 2015.

[2] Garcia, Blandine (2011). INTERNATIONAL STANDARD ISO/IEC 9899:201x (PDF) (201x ed.). ISO/IEC. p. 95. Retrieved
7 September 2015.

[3] JTC1/SC22/WG14 N843 C programming language, section 6.5.7

[4] Arithmetic operators - cppreference.com. en.cppreference.com. Retrieved 2016-07-06.

[5] INT13-C. Use bitwise operators only on unsigned operands. CERT: Secure Coding Standards. Software Engineering
Institute, Carnegie Mellon University. Retrieved 7 September 2015.

[6] Operator (C# Reference)". Microsoft. Retrieved 14 July 2013.

[7] Near constant time rotate that does not violate the standards?". Stack Exchange Network. Retrieved 12 August 2015.

[8] Poor optimization of portable rotate idiom. GNU GCC Project. Retrieved 11 August 2015.

[9] Circular rotate that does not violate C/C++ standard?". Intel Developer Forums. Retrieved 12 August 2015.

[10] Constant not propagated into inline assembly, results in constraint 'I' expects an integer constant expression"". LLVM
Project. Retrieved 11 August 2015.

[11] The Java Language Specication, section 15.19. Shift Operators

[12] Chapter 15. Expressions. oracle.com.

[13] JavaScript Bitwise. W3Schools.com.

[14] Synthesizing arithmetic operations using bit-shifting tricks. Bisqwit.iki.fi. 15 February 2014. Retrieved 8 March 2014.

26.7 External links


Online Bitwise Calculator supports Bitwise AND, OR and XOR

Division using bitshifts


"Bitwise Operations Mod N" by Enrique Zeleny, Wolfram Demonstrations Project.

"Plots Of Compositions Of Bitwise Operations" by Enrique Zeleny, The Wolfram Demonstrations Project.
Chapter 27

Blake canonical form

In Boolean logic, a formula for a Boolean function f is in Blake canonical form (BCF),[1] also called the complete
sum of prime implicants,[2] the complete sum,[3] or the disjunctive prime form,[4] when it is a disjunction of all
the prime implicants of f.[1] The Blake canonical form is a disjunctive normal form.
The Blake canonical form is not necessarily minimal; however, all the terms of a minimal sum are contained in the Blake canonical form.[3]
It was introduced in 1937 by Archie Blake, who called it the "simplified canonical form";[5][6] it was named in honor of Blake by Frank Markham Brown in 1990.[1]
Blake discussed three methods for calculating the canonical form: exhaustion of implicants, iterated consensus, and multiplication. The iterated consensus method was rediscovered by Samson and Mills, Quine, and Bing.[1]

27.1 See also


Horn clause
Consensus theorem

27.2 References
[1] Brown, Frank Markham (2012) [2003, 1990]. Chapter 4: The Blake Canonical Form. Boolean Reasoning - The Logic of
Boolean Equations (reissue of 2nd ed.). Mineola, New York: Dover Publications, Inc. pp. 77. ISBN 978-0-486-42785-0.
ISBN 0-486-42785-4.

[2] Sasao, Tsutomu (1996). Ternary Decision Diagrams and their Applications. In Sasao, Tsutomu; Fujita, Masahira.
Representations of Discrete Functions. p. 278. ISBN 0792397207.

[3] Kandel, Abraham. Foundations of Digital Logic Design. p. 177.

[4] Donald E. Knuth, The Art of Computer Programming 4A: Combinatorial Algorithms, Part 1, 2011, p. 54

[5] Blake, Archie (1937). Canonical expressions in Boolean algebra (Dissertation). Department of Mathematics, University
of Chicago: University of Chicago Libraries.

[6] McKinsey, J. C. C., ed. (June 1938). Blake, Archie. Canonical expressions in Boolean algebra, Department of Mathemat-
ics, University of Chicago, 1937. The Journal of Symbolic Logic (Review). 3 (2:93). JSTOR 2267634. doi:10.2307/2267634.

Chapter 28

Boole's expansion theorem

Boole's expansion theorem, often referred to as the Shannon expansion or decomposition, is the identity F = x \cdot F_x + \overline{x} \cdot F_{\overline{x}}, where F is any Boolean function, x is a variable, \overline{x} is the complement of x, and F_x and F_{\overline{x}} are F with the argument x set equal to 1 and to 0 respectively.
The terms F_x and F_{\overline{x}} are sometimes called the positive and negative Shannon cofactors, respectively, of F with respect to x. These are functions, computed by the restrict operator as restrict(F, x, 1) and restrict(F, x, 0) respectively (see valuation (logic) and partial application).
It has been called the "fundamental theorem of Boolean algebra".[1] Besides its theoretical importance, it paved the way for binary decision diagrams, satisfiability solvers, and many other techniques relevant to computer engineering and formal verification of digital circuits.

28.1 Statement of the theorem


A more explicit way of stating the theorem is:

f(X_1, X_2, \dots, X_n) = X_1 \cdot f(1, X_2, \dots, X_n) + \overline{X_1} \cdot f(0, X_2, \dots, X_n)

The proof of the statement follows from direct use of mathematical induction, from the observation that f(X_1) = X_1 \cdot f(1) + \overline{X_1} \cdot f(0), and from expanding 2-ary and n-ary Boolean functions identically.

28.2 Variations and implications


XOR-Form: The statement also holds when the disjunction "+" is replaced by the XOR operator:

f(X_1, X_2, \dots, X_n) = X_1 \cdot f(1, X_2, \dots, X_n) \oplus \overline{X_1} \cdot f(0, X_2, \dots, X_n)

Dual form: There is a dual form of the Shannon expansion (which does not have a related XOR form):

f(X_1, X_2, \dots, X_n) = (X_1 + f(0, X_2, \dots, X_n)) \cdot (\overline{X_1} + f(1, X_2, \dots, X_n))

Repeated application for each argument leads to the Sum of Products (SoP) canonical form of the Boolean function f. For example, for n = 2 that would be

f(X_1, X_2) = X_1 \cdot f(1, X_2) + \overline{X_1} \cdot f(0, X_2)
            = X_1 X_2 \cdot f(1, 1) + X_1 \overline{X_2} \cdot f(1, 0) + \overline{X_1} X_2 \cdot f(0, 1) + \overline{X_1}\, \overline{X_2} \cdot f(0, 0)

Likewise, application of the dual form leads to the Product of Sums (PoS) canonical form (using the distributivity law of + over \cdot):

f(X_1, X_2) = (X_1 + f(0, X_2)) \cdot (\overline{X_1} + f(1, X_2))
            = (X_1 + X_2 + f(0, 0)) \cdot (X_1 + \overline{X_2} + f(0, 1)) \cdot (\overline{X_1} + X_2 + f(1, 0)) \cdot (\overline{X_1} + \overline{X_2} + f(1, 1))

28.3 Properties of Cofactors


Linear properties of cofactors: For a Boolean function F that is formed from two Boolean functions G and H, the following hold:

• If F = \overline{H} then F_x = \overline{H_x}
• If F = G \cdot H then F_x = G_x \cdot H_x
• If F = G + H then F_x = G_x + H_x
• If F = G \oplus H then F_x = G_x \oplus H_x

Characteristics of unate functions:

• If F is a positive unate function, then F = x \cdot F_x + F_{\overline{x}}
• If F is a negative unate function, then F = F_x + \overline{x} \cdot F_{\overline{x}}

28.4 Operations with Cofactors


Boolean difference: The Boolean difference or Boolean derivative of the function F with respect to the literal x is defined as:

\frac{\partial F}{\partial x} = F_x \oplus F_{\overline{x}}

Universal quantification: The universal quantification of F is defined as:

\forall x\, F = F_x \cdot F_{\overline{x}}

Existential quantification: The existential quantification of F is defined as:

\exists x\, F = F_x + F_{\overline{x}}
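These definitions are easy to verify on small functions stored as truth tables. The following C sketch (an illustration added here; the bitmask representation and names are assumptions of this example) stores a 3-variable function as an 8-bit truth table, computes both cofactors with respect to one variable, derives the Boolean difference and the two quantifications from them, and checks Boole's expansion pointwise:

#include <stdint.h>
#include <stdio.h>

/* A 3-variable Boolean function stored as an 8-bit truth table: bit i of tt
 * is f(x2, x1, x0) where i = (x2 << 2) | (x1 << 1) | x0. */
static int eval(uint8_t tt, unsigned i) { return (tt >> i) & 1; }

/* Cofactor with respect to variable v (0, 1 or 2), with x_v forced to val,
 * returned as a truth table over the same three variables. */
static uint8_t cofactor(uint8_t tt, unsigned v, int val) {
    uint8_t out = 0;
    for (unsigned i = 0; i < 8; i++) {
        unsigned j = val ? (i | (1u << v)) : (i & ~(1u << v));
        out |= (uint8_t)(eval(tt, j) << i);
    }
    return out;
}

int main(void) {
    uint8_t f = 0xCA;                        /* an arbitrary example function */
    unsigned v = 1;                          /* expand on x1 */
    uint8_t f1 = cofactor(f, v, 1), f0 = cofactor(f, v, 0);

    uint8_t diff   = f1 ^ f0;                /* Boolean difference dF/dx1 */
    uint8_t forall = f1 & f0;                /* universal quantification  */
    uint8_t exists = f1 | f0;                /* existential quantification */

    /* Boole's expansion: f = x1*F_x1 + (not x1)*F_(not x1), checked pointwise. */
    for (unsigned i = 0; i < 8; i++) {
        int x1 = (i >> v) & 1;
        int expanded = (x1 & eval(f1, i)) | ((!x1) & eval(f0, i));
        if (expanded != eval(f, i)) { printf("mismatch\n"); return 1; }
    }
    printf("expansion holds; diff=%02X forall=%02X exists=%02X\n",
           (unsigned)diff, (unsigned)forall, (unsigned)exists);
    return 0;
}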

28.5 History
George Boole presented this expansion as his Proposition II, To expand or develop a function involving any number of
logical symbols, in his Laws of Thought (1854),[2] and it was widely applied by Boole and other nineteenth-century
logicians.[3]
Claude Shannon mentioned this expansion, among other Boolean identities, in a 1948 paper,[4] and showed the switch-
ing network interpretations of the identity. In the literature of computer design and switching theory, the identity is
often incorrectly attributed to Shannon.[3]

28.6 Application to switching circuits


1. Binary decision diagrams follow from systematic use of this theorem

2. Any Boolean function can be implemented directly in a switching circuit using a hierarchy of basic multiplexers by repeated application of this theorem.

28.7 Notes
[1] Paul C. Rosenbloom, The Elements of Mathematical Logic, 1950, p. 5

[2] George Boole, An Investigation of the Laws of Thought: On which are Founded the Mathematical Theories of Logic and
Probabilities, 1854, p. 72 full text at Google Books

[3] Frank Markham Brown, Boolean Reasoning: The Logic of Boolean Equations, 2nd edition, 2003, p. 42

[4] Claude Shannon, The Synthesis of Two-Terminal Switching Circuits, Bell System Technical Journal 28:59–98, full text, p. 62

28.8 See also


ReedMuller expansion

28.9 External links


Shannons Decomposition Example with multiplexers.
Optimizing Sequential Cycles Through Shannon Decomposition and Retiming (PDF) Paper on application.
Chapter 29

Boolean algebra

For other uses, see Boolean algebra (disambiguation).

In mathematics and mathematical logic, Boolean algebra is the branch of algebra in which the values of the variables are the truth values true and false, usually denoted 1 and 0 respectively. Instead of elementary algebra, where the values of the variables are numbers and the prime operations are addition and multiplication, the main operations of Boolean algebra are the conjunction (and), denoted as ∧, the disjunction (or), denoted as ∨, and the negation (not), denoted as ¬. It is thus a formalism for describing logical relations in the same way that ordinary algebra describes numeric relations.
Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847), and set forth more fully in his An Investigation of the Laws of Thought (1854).[1] According to Huntington, the term Boolean algebra was first suggested by Sheffer in 1913.[2]
Boolean algebra has been fundamental in the development of digital electronics, and is provided for in all modern programming languages. It is also used in set theory and statistics.[3]

29.1 History
Boole's algebra predated the modern developments in abstract algebra and mathematical logic; it is however seen as connected to the origins of both fields.[4] In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, Huntington, and others until it reached the modern conception of an (abstract) mathematical structure.[4] For example, the empirical observation that one can manipulate expressions in the algebra of sets by translating them into expressions in Boole's algebra is explained in modern terms by saying that the algebra of sets is a Boolean algebra (note the indefinite article). In fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets.
In the 1930s, while studying switching circuits, Claude Shannon observed that one could also apply the rules of Boole's algebra in this setting, and he introduced switching algebra as a way to analyze and design circuits by algebraic means in terms of logic gates. Shannon already had at his disposal the abstract mathematical apparatus, thus he cast his switching algebra as the two-element Boolean algebra. In circuit engineering settings today, there is little need to consider other Boolean algebras, thus "switching algebra" and "Boolean algebra" are often used interchangeably.[5][6][7] Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Modern electronic design automation tools for VLSI circuits often rely on an efficient representation of Boolean functions known as (reduced ordered) binary decision diagrams (BDD) for logic synthesis and formal verification.[8]
Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra. Thus, Boolean logic is sometimes used to denote propositional calculus performed in this way.[9][10][11] Boolean algebra is not sufficient to capture logic formulas using quantifiers, like those from first-order logic. Although the development of mathematical logic did not follow Boole's program, the connection between his algebra and logic was later put on firm ground in the setting of algebraic logic, which also studies the algebraic systems of many other logics.[4] The problem of determining whether the variables of a given Boolean (propositional) formula can be assigned in such a way as to make the formula evaluate to true is called the Boolean satisfiability problem (SAT), and is of importance to theoretical computer science, being the first problem shown to be NP-complete. The closely related


model of computation known as a Boolean circuit relates time complexity (of an algorithm) to circuit complexity.

29.2 Values
Whereas in elementary algebra expressions denote mainly numbers, in Boolean algebra they denote the truth values false and true. These values are represented with the bits (or binary digits), namely 0 and 1. They do not behave like the integers 0 and 1, for which 1 + 1 = 2, but may be identified with the elements of the two-element field GF(2), that is, integer arithmetic modulo 2, for which 1 + 1 = 0. Addition and multiplication then play the Boolean roles of XOR (exclusive-or) and AND (conjunction) respectively, with disjunction x ∨ y (inclusive-or) definable as x + y + xy.
Boolean algebra also deals with functions which have their values in the set {0, 1}. A sequence of bits is a commonly used such function. Another common example is the subsets of a set E: to a subset F of E is associated the indicator function that takes the value 1 on F and 0 outside F. The most general example is the elements of a Boolean algebra, with all of the foregoing being instances thereof.
As with elementary algebra, the purely equational part of the theory may be developed without considering explicit values for the variables.[12]

29.3 Operations

29.3.1 Basic operations


The basic operations of Boolean calculus are as follows.

• AND (conjunction), denoted x ∧ y (sometimes x AND y or Kxy), satisfies x ∧ y = 1 if x = y = 1, and x ∧ y = 0 otherwise.
• OR (disjunction), denoted x ∨ y (sometimes x OR y or Axy), satisfies x ∨ y = 0 if x = y = 0, and x ∨ y = 1 otherwise.
• NOT (negation), denoted ¬x (sometimes NOT x, Nx or !x), satisfies ¬x = 0 if x = 1 and ¬x = 1 if x = 0.

Alternatively the values of x ∧ y, x ∨ y, and ¬x can be expressed by tabulating their values with truth tables as follows:

x y | x∧y  x∨y        x | ¬x
0 0 |  0    0         0 |  1
1 0 |  0    1         1 |  0
0 1 |  0    1
1 1 |  1    1

If the truth values 0 and 1 are interpreted as integers, these operations may be expressed with the ordinary operations of arithmetic, or by the minimum/maximum functions:

x ∧ y = x × y = min(x, y)
x ∨ y = x + y − (x × y) = max(x, y)
¬x = 1 − x

One may consider that only the negation and one of the two other operations are basic, because of the following identities that allow one to define conjunction in terms of negation and disjunction, and vice versa:

x ∧ y = ¬(¬x ∨ ¬y)
x ∨ y = ¬(¬x ∧ ¬y)
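As a quick, illustrative check (added here, not part of the original text), the following C program verifies the arithmetic expressions and the two identities above over all values in {0, 1}:

#include <assert.h>
#include <stdio.h>

int main(void) {
    for (int x = 0; x <= 1; x++) {
        for (int y = 0; y <= 1; y++) {
            assert((x && y) == x * y);            /* AND as multiplication (min) */
            assert((x || y) == x + y - x * y);    /* OR as x + y - xy (max)      */
            assert(!x == 1 - x);                  /* NOT as 1 - x                */

            assert((x && y) == !(!x || !y));      /* conjunction from negation and disjunction */
            assert((x || y) == !(!x && !y));      /* disjunction from negation and conjunction */
        }
    }
    printf("all identities hold over {0, 1}\n");
    return 0;
}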

29.3.2 Secondary operations


The three Boolean operations described above are referred to as basic, meaning that they can be taken as a basis
for other Boolean operations that can be built up from them by composition, the manner in which operations are
combined or compounded. Operations composed from the basic operations include the following examples:
29.4. LAWS 109

x → y = ¬x ∨ y
x ⊕ y = (x ∨ y) ∧ ¬(x ∧ y)
x ≡ y = ¬(x ⊕ y)

These definitions give rise to the following truth tables giving the values of these operations for all four possible inputs.

The first operation, x → y, or Cxy, is called material implication. If x is true then the value of x → y is taken to be that of y. But if x is false then the value of y can be ignored; however, the operation must return some truth value and there are only two choices, so the return value is the one that entails less, namely true. (Relevance logic addresses this by viewing an implication with a false premise as something other than either true or false.)
The second operation, x ⊕ y, or Jxy, is called exclusive or (often abbreviated as XOR) to distinguish it from disjunction as the inclusive kind. It excludes the possibility of both x and y being true. Defined in terms of arithmetic it is addition mod 2, where 1 + 1 = 0.
The third operation, the complement of exclusive or, is equivalence or Boolean equality: x ≡ y, or Exy, is true just when x and y have the same value. Hence x ⊕ y as its complement can be understood as x ≠ y, being true just when x and y are different. The counterpart of equivalence in arithmetic mod 2 is x + y + 1.
Given two operands, each with two possible values, there are 2^2 = 4 possible combinations of inputs. Because each output can have two possible values, there are a total of 2^4 = 16 possible binary Boolean operations.

29.4 Laws
A law of Boolean algebra is an identity such as x ∨ (y ∨ z) = (x ∨ y) ∨ z between two Boolean terms, where a Boolean term is defined as an expression built up from variables and the constants 0 and 1 using the operations ∧, ∨, and ¬. The concept can be extended to terms involving other Boolean operations such as ⊕, →, and ≡, but such extensions are unnecessary for the purposes to which the laws are put. Such purposes include the definition of a Boolean algebra as any model of the Boolean laws, and as a means for deriving new laws from old, as in the derivation of x ∨ (y ∨ z) = x ∨ (z ∨ y) from y ∨ z = z ∨ y, as treated in the section on axiomatization.

29.4.1 Monotone laws

Boolean algebra satisfies many of the same laws as ordinary algebra when one matches up ∨ with addition and ∧ with multiplication. In particular the following laws are common to both kinds of algebra:[13]

The following laws hold in Boolean Algebra, but not in ordinary algebra:

Taking x = 2 in the third law above shows that it is not an ordinary algebra law, since 2 × 2 = 4. The remaining five laws can be falsified in ordinary algebra by taking all variables to be 1; for example, in Absorption Law 1 the left hand side would be 1(1 + 1) = 2, while the right hand side would be 1, and so on.
All of the laws treated so far have been for conjunction and disjunction. These operations have the property that
changing either argument either leaves the output unchanged or the output changes in the same way as the input.
Equivalently, changing any variable from 0 to 1 never results in the output changing from 1 to 0. Operations with this
property are said to be monotone. Thus the axioms so far have all been for monotonic Boolean logic. Nonmono-
tonicity enters via complement as follows.[3]

29.4.2 Nonmonotone laws


The complement operation is defined by the following two laws.

Complementation 1: x ∧ ¬x = 0
Complementation 2: x ∨ ¬x = 1

All properties of negation including the laws below follow from the above two laws alone.[3]
In both ordinary and Boolean algebra, negation works by exchanging pairs of elements, whence in both algebras it satisfies the double negation law (also called the involution law):

Double negation: ¬(¬x) = x

But whereas ordinary algebra satisfies the two laws

(−x)(−y) = xy
(−x) + (−y) = −(x + y)

Boolean algebra satisfies De Morgan's laws:

De Morgan 1: ¬x ∧ ¬y = ¬(x ∨ y)
De Morgan 2: ¬x ∨ ¬y = ¬(x ∧ y)

29.4.3 Completeness
The laws listed above define Boolean algebra, in the sense that they entail the rest of the subject. The laws Complementation 1 and 2, together with the monotone laws, suffice for this purpose and can therefore be taken as one possible complete set of laws or axiomatization of Boolean algebra. Every law of Boolean algebra follows logically from these axioms. Furthermore, Boolean algebras can then be defined as the models of these axioms as treated in the section thereon.
To clarify, writing down further laws of Boolean algebra cannot give rise to any new consequences of these axioms, nor can it rule out any model of them. In contrast, in a list of some but not all of the same laws, there could have been Boolean laws that did not follow from those on the list, and moreover there would have been models of the listed laws that were not Boolean algebras.
This axiomatization is by no means the only one, or even necessarily the most natural, given that we did not pay attention to whether some of the axioms followed from others but simply chose to stop when we noticed we had enough laws, as treated further in the section on axiomatizations. Or the intermediate notion of axiom can be sidestepped altogether by defining a Boolean law directly as any tautology, understood as an equation that holds for all values of its variables over 0 and 1. All these definitions of Boolean algebra can be shown to be equivalent.

29.4.4 Duality principle


Principle: If {X, R} is a poset, then {X, R(inverse)} is also a poset.
There is nothing magical about the choice of symbols for the values of Boolean algebra. We could rename 0 and 1 to say α and β, and as long as we did so consistently throughout it would still be Boolean algebra, albeit with some obvious cosmetic differences.
But suppose we rename 0 and 1 to 1 and 0 respectively. Then it would still be Boolean algebra, and moreover operating on the same values. However it would not be identical to our original Boolean algebra because now we find ∧ behaving the way ∨ used to do and vice versa. So there are still some cosmetic differences to show that we've been fiddling with the notation, despite the fact that we're still using 0s and 1s.

But if in addition to interchanging the names of the values we also interchange the names of the two binary operations, now there is no trace of what we have done. The end product is completely indistinguishable from what we started with. We might notice that the columns for x ∧ y and x ∨ y in the truth tables had changed places, but that switch is immaterial.
When values and operations can be paired up in a way that leaves everything important unchanged when all pairs are switched simultaneously, we call the members of each pair dual to each other. Thus 0 and 1 are dual, and ∧ and ∨ are dual. The Duality Principle, also called De Morgan duality, asserts that Boolean algebra is unchanged when all dual pairs are interchanged.
One change we did not need to make as part of this interchange was to complement. We say that complement is a self-dual operation. The identity or do-nothing operation x (copy the input to the output) is also self-dual. A more complicated example of a self-dual operation is (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x). There is no self-dual binary operation that depends on both its arguments. A composition of self-dual operations is a self-dual operation. For example, if f(x, y, z) = (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x), then f(f(x, y, z), x, t) is a self-dual operation of four arguments x, y, z, t.
The principle of duality can be explained from a group theory perspective by the fact that there are exactly four functions that are one-to-one mappings (automorphisms) of the set of Boolean polynomials back to itself: the identity function, the complement function, the dual function and the contradual function (complemented dual). These four functions form a group under function composition, isomorphic to the Klein four-group, acting on the set of Boolean polynomials. Walter Gottschalk remarked that consequently a more appropriate name for the phenomenon would be the principle (or square) of quaternality.[14]

29.5 Diagrammatic representations

29.5.1 Venn diagrams


A Venn diagram[15] is a representation of a Boolean operation using shaded overlapping regions. There is one region for each variable, all circular in the examples here. The interior and exterior of region x correspond respectively to the values 1 (true) and 0 (false) for variable x. The shading indicates the value of the operation for each combination of regions, with dark denoting 1 and light denoting 0 (some authors use the opposite convention).
The three Venn diagrams in the figure below represent respectively conjunction x ∧ y, disjunction x ∨ y, and complement ¬x.

(Figure 2. Venn diagrams for conjunction x ∧ y, disjunction x ∨ y, and complement ¬x)

For conjunction, the region inside both circles is shaded to indicate that x ∧ y is 1 when both variables are 1. The other regions are left unshaded to indicate that x ∧ y is 0 for the other three combinations.
The second diagram represents disjunction x ∨ y by shading those regions that lie inside either or both circles. The third diagram represents complement ¬x by shading the region not inside the circle.
While we have not shown the Venn diagrams for the constants 0 and 1, they are trivial, being respectively a white
box and a dark box, neither one containing a circle. However we could put a circle for x in those boxes, in which
case each would denote a function of one argument, x, which returns the same value independently of x, called a
constant function. As far as their outputs are concerned, constants and constant functions are indistinguishable; the
difference is that a constant takes no arguments, called a zeroary or nullary operation, while a constant function takes one argument, which it ignores, and is a unary operation.
Venn diagrams are helpful in visualizing laws. The commutativity laws for ∧ and ∨ can be seen from the symmetry of the diagrams: a binary operation that was not commutative would not have a symmetric diagram, because interchanging x and y would have the effect of reflecting the diagram horizontally and any failure of commutativity would then appear as a failure of symmetry.
Idempotence of ∧ and ∨ can be visualized by sliding the two circles together and noting that the shaded area then becomes the whole circle, for both ∧ and ∨.
To see the first absorption law, x ∧ (x ∨ y) = x, start with the diagram in the middle for x ∨ y and note that the portion of the shaded area in common with the x circle is the whole of the x circle. For the second absorption law, x ∨ (x ∧ y) = x, start with the left diagram for x ∧ y and note that shading the whole of the x circle results in just the x circle being shaded, since the previous shading was inside the x circle.
The double negation law can be seen by complementing the shading in the third diagram for ¬x, which shades the x circle.
To visualize the first De Morgan's law, (¬x) ∧ (¬y) = ¬(x ∨ y), start with the middle diagram for x ∨ y and complement its shading so that only the region outside both circles is shaded, which is what the right hand side of the law describes. The result is the same as if we shaded that region which is both outside the x circle and outside the y circle, i.e. the conjunction of their exteriors, which is what the left hand side of the law describes.
The second De Morgan's law, (¬x) ∨ (¬y) = ¬(x ∧ y), works the same way with the two diagrams interchanged.
The first complement law, x ∧ ¬x = 0, says that the interior and exterior of the x circle have no overlap. The second complement law, x ∨ ¬x = 1, says that everything is either inside or outside the x circle.

29.5.2 Digital logic gates

Digital logic is the application of the Boolean algebra of 0 and 1 to electronic hardware consisting of logic gates
connected to form a circuit diagram. Each gate implements a Boolean operation, and is depicted schematically by
a shape indicating the operation. The shapes associated with the gates for conjunction (AND-gates), disjunction
(OR-gates), and complement (inverters) are as follows.[16]

The lines on the left of each gate represent input wires or ports. The value of the input is represented by a voltage
on the lead. For so-called active-high logic, 0 is represented by a voltage close to zero or ground, while 1 is
represented by a voltage close to the supply voltage; active-low reverses this. The line on the right of each gate
represents the output port, which normally follows the same voltage conventions as the input ports.
Complement is implemented with an inverter gate. The triangle denotes the operation that simply copies the input to
the output; the small circle on the output denotes the actual inversion complementing the input. The convention of
putting such a circle on any port means that the signal passing through this port is complemented on the way through,
whether it is an input or output port.
The Duality Principle, or De Morgan's laws, can be understood as asserting that complementing all three ports of an AND gate converts it to an OR gate and vice versa, as shown in Figure 4 below. Complementing both ports of an inverter however leaves the operation unchanged.
More generally one may complement any of the eight subsets of the three ports of either an AND or OR gate. The
resulting sixteen possibilities give rise to only eight Boolean operations, namely those with an odd number of 1s
in their truth table. There are eight such because the odd-bit-out can be either 0 or 1 and can go in any of four
positions in the truth table. There being sixteen binary Boolean operations, this must leave eight operations with an

even number of 1s in their truth tables. Two of these are the constants 0 and 1 (as binary operations that ignore both their inputs); four are the operations that depend nontrivially on exactly one of their two inputs, namely x, y, ¬x, and ¬y; and the remaining two are x ⊕ y (XOR) and its complement x ≡ y.

29.6 Boolean algebras

Main article: Boolean algebra (structure)

The term "algebra" denotes both a subject, namely the subject of algebra, and an object, namely an algebraic structure. Whereas the foregoing has addressed the subject of Boolean algebra, this section deals with mathematical objects called Boolean algebras, defined in full generality as any model of the Boolean laws. We begin with a special case of the notion definable without reference to the laws, namely concrete Boolean algebras, and then give the formal definition of the general notion.

29.6.1 Concrete Boolean algebras

A concrete Boolean algebra or field of sets is any nonempty set of subsets of a given set X closed under the set operations of union, intersection, and complement relative to X.[3]
(As an aside, historically X itself was required to be nonempty as well, to exclude the degenerate or one-element Boolean algebra, which is the one exception to the rule that all Boolean algebras satisfy the same equations, since the degenerate algebra satisfies every equation. However this exclusion conflicts with the preferred purely equational definition of Boolean algebra, there being no way to rule out the one-element algebra using only equations (0 ≠ 1 does not count, being a negated equation). Hence modern authors allow the degenerate Boolean algebra and let X be empty.)
Example 1. The power set 2^X of X, consisting of all subsets of X. Here X may be any set: empty, finite, infinite, or even uncountable.
Example 2. The empty set and X. This two-element algebra shows that a concrete Boolean algebra can be finite even when it consists of subsets of an infinite set. It can be seen that every field of subsets of X must contain the empty set and X. Hence no smaller example is possible, other than the degenerate algebra obtained by taking X to be empty so as to make the empty set and X coincide.
Example 3. The set of finite and cofinite sets of integers, where a cofinite set is one omitting only finitely many integers. This is clearly closed under complement, and is closed under union because the union of a cofinite set with any set is cofinite, while the union of two finite sets is finite. Intersection behaves like union with finite and cofinite interchanged.
Example 4. For a less trivial example of the point made by Example 2, consider a Venn diagram formed by n closed curves partitioning the diagram into 2^n regions, and let X be the (infinite) set of all points in the plane not on any curve but somewhere within the diagram. The interior of each region is thus an infinite subset of X, and every point in X is in exactly one region. Then the set of all 2^{2^n} possible unions of regions (including the empty set obtained as the union of the empty set of regions and X obtained as the union of all 2^n regions) is closed under union, intersection, and complement relative to X and therefore forms a concrete Boolean algebra. Again we have finitely many subsets of an infinite set forming a concrete Boolean algebra, with Example 2 arising as the case n = 0 of no curves.

29.6.2 Subsets as bit vectors


A subset Y of X can be identified with an indexed family of bits with index set X, with the bit indexed by x ∈ X being 1 or 0 according to whether or not x ∈ Y. (This is the so-called characteristic function notion of a subset.) For example, a 32-bit computer word consists of 32 bits indexed by the set {0, 1, 2, ..., 31}, with 0 and 31 indexing the low and high order bits respectively. For a smaller example, if X = {a, b, c}, where a, b, c are viewed as bit positions in that order from left to right, the eight subsets {}, {c}, {b}, {b,c}, {a}, {a,c}, {a,b}, and {a,b,c} of X can be identified with the respective bit vectors 000, 001, 010, 011, 100, 101, 110, and 111. Bit vectors indexed by the set of natural numbers are infinite sequences of bits, while those indexed by the reals in the unit interval [0,1] are packed too densely to be able to write conventionally but nonetheless form well-defined indexed families (imagine coloring every point of the interval [0,1] either black or white independently; the black points then form an arbitrary subset of [0,1]).
From this bit vector viewpoint, a concrete Boolean algebra can be defined equivalently as a nonempty set of bit vectors all of the same length (more generally, indexed by the same set) and closed under the bit vector operations of bitwise ∧, ∨, and ¬, as in 1010 ∧ 0110 = 0010, 1010 ∨ 0110 = 1110, and ¬1010 = 0101, the bit vector realizations of intersection, union, and complement respectively.

29.6.3 The prototypical Boolean algebra


Main article: two-element Boolean algebra

The set {0,1} and its Boolean operations as treated above can be understood as the special case of bit vectors of length one, which by the identification of bit vectors with subsets can also be understood as the two subsets of a one-element set. We call this the prototypical Boolean algebra, justified by the following observation.

The laws satisfied by all nondegenerate concrete Boolean algebras coincide with those satisfied by the prototypical Boolean algebra.

This observation is easily proved as follows. Certainly any law satisfied by all concrete Boolean algebras is satisfied by the prototypical one since it is concrete. Conversely any law that fails for some concrete Boolean algebra must have failed at a particular bit position, in which case that position by itself furnishes a one-bit counterexample to that law. Nondegeneracy ensures the existence of at least one bit position because there is only one empty bit vector.
The final goal of the next section can be understood as eliminating "concrete" from the above observation. We shall however reach that goal via the surprisingly stronger observation that, up to isomorphism, all Boolean algebras are concrete.

29.6.4 Boolean algebras: the definition


The Boolean algebras we have seen so far have all been concrete, consisting of bit vectors or equivalently of subsets
of some set. Such a Boolean algebra consists of a set and operations on that set which can be shown to satisfy the
laws of Boolean algebra.
Instead of showing that the Boolean laws are satised, we can instead postulate a set X, two binary operations on X,
and one unary operation, and require that those operations satisfy the laws of Boolean algebra. The elements of X
need not be bit vectors or subsets but can be anything at all. This leads to the more general abstract denition.

A Boolean algebra is any set with binary operations and and a unary operation thereon satisfying
the Boolean laws.[17]

For the purposes of this denition it is irrelevant how the operations came to satisfy the laws, whether by at or proof.
All concrete Boolean algebras satisfy the laws (by proof rather than at), whence every concrete Boolean algebra is
a Boolean algebra according to our denitions. This axiomatic denition of a Boolean algebra as a set and certain
operations satisfying certain laws or axioms by at is entirely analogous to the abstract denitions of group, ring, eld
etc. characteristic of modern or abstract algebra.
Given any complete axiomatization of Boolean algebra, such as the axioms for a complemented distributive lattice,
a sucient condition for an algebraic structure of this kind to satisfy all the Boolean laws is that it satisfy just those
axioms. The following is therefore an equivalent denition.

A Boolean algebra is a complemented distributive lattice.

The section on axiomatization lists other axiomatizations, any of which can be made the basis of an equivalent de-
nition.

29.6.5 Representable Boolean algebras

Although every concrete Boolean algebra is a Boolean algebra, not every Boolean algebra need be concrete. Let
n be a square-free positive integer, one not divisible by the square of an integer, for example 30 but not 12. The
operations of greatest common divisor, least common multiple, and division into n (that is, ¬x = n/x), can be shown
to satisfy all the Boolean laws when their arguments range over the positive divisors of n. Hence those divisors form
a Boolean algebra. These divisors are not subsets of a set, making the divisors of n a Boolean algebra that is not
concrete according to our denitions.
However, if we represent each divisor of n by the set of its prime factors, we nd that this nonconcrete Boolean algebra
is isomorphic to the concrete Boolean algebra consisting of all sets of prime factors of n, with union corresponding to
least common multiple, intersection to greatest common divisor, and complement to division into n. So this example
while not technically concrete is at least morally concrete via this representation, called an isomorphism. This
example is an instance of the following notion.

A Boolean algebra is called representable when it is isomorphic to a concrete Boolean algebra.
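The isomorphism in the divisor example can be checked by direct computation. The following is a minimal Python sketch (an illustration, not part of the original text): each divisor of 30 is represented by its set of prime factors, and least common multiple, greatest common divisor, and division into n then correspond to union, intersection, and complement.

from math import gcd

n = 30                                     # square-free: 30 = 2 * 3 * 5
divisors = [d for d in range(1, n + 1) if n % d == 0]
primes = [2, 3, 5]

def factors(d):
    """Represent a divisor by its set of prime factors."""
    return frozenset(p for p in primes if d % p == 0)

def lcm(a, b):
    return a * b // gcd(a, b)

for a in divisors:
    for b in divisors:
        assert factors(lcm(a, b)) == factors(a) | factors(b)   # join -> union
        assert factors(gcd(a, b)) == factors(a) & factors(b)   # meet -> intersection
    assert factors(n // a) == factors(n) - factors(a)          # complement -> set complement
print("the divisors of", n, "form a Boolean algebra isomorphic to the power set of", primes)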

The obvious next question is answered positively as follows.

Every Boolean algebra is representable.

That is, up to isomorphism, abstract and concrete Boolean algebras are the same thing. This quite nontrivial result
depends on the Boolean prime ideal theorem, a choice principle slightly weaker than the axiom of choice, and is
treated in more detail in the article Stone's representation theorem for Boolean algebras. This strong relationship
implies a weaker result strengthening the observation in the previous subsection to the following easy consequence of
representability.

The laws satised by all Boolean algebras coincide with those satised by the prototypical Boolean al-
gebra.

It is weaker in the sense that it does not of itself imply representability. Boolean algebras are special here, for example
a relation algebra is a Boolean algebra with additional structure but it is not the case that every relation algebra is
representable in the sense appropriate to relation algebras.

29.7 Axiomatizing Boolean algebra


Main articles: Axiomatization of Boolean algebras and Boolean algebras canonically defined

The above denition of an abstract Boolean algebra as a set and operations satisfying the Boolean laws raises the
question, what are those laws? A simple-minded answer is all Boolean laws, which can be dened as all equations
that hold for the Boolean algebra of 0 and 1. Since there are innitely many such laws this is not a terribly satisfactory
answer in practice, leading to the next question: does it suce to require only nitely many laws to hold?
In the case of Boolean algebras the answer is yes. In particular the finitely many equations we have listed above suffice. We say that Boolean algebra is finitely axiomatizable or finitely based.
Can this list be made shorter yet? Again the answer is yes. To begin with, some of the above laws are implied by some of the others. A sufficient subset of the above laws consists of the pairs of associativity, commutativity, and absorption laws, distributivity of ∧ over ∨ (or the other distributivity law, one suffices), and the two complement laws. In fact this is the traditional axiomatization of Boolean algebra as a complemented distributive lattice.

By introducing additional laws not listed above it becomes possible to shorten the list yet further. In 1933 Edward Huntington showed that if the basic operations are taken to be x∨y and ¬x, with x∧y considered a derived operation (e.g. via De Morgan's law in the form x∧y = ¬(¬x∨¬y)), then the equation ¬(¬x∨y)∨¬(¬x∨¬y) = x along with the two equations expressing associativity and commutativity of ∨ completely axiomatized Boolean algebra. When the only basic operation is the binary NAND operation ¬(x∧y), Stephen Wolfram has proposed in his book A New Kind of Science the single axiom ((xy)z)(x((xz)x)) = z as a one-equation axiomatization of Boolean algebra, where for convenience here xy denotes the NAND rather than the AND of x and y.

29.8 Propositional logic


Main article: Propositional calculus

Propositional logic is a logical system that is intimately connected to Boolean algebra.[3] Many syntactic concepts
of Boolean algebra carry over to propositional logic with only minor changes in notation and terminology, while
the semantics of propositional logic are dened via Boolean algebras in a way that the tautologies (theorems) of
propositional logic correspond to equational theorems of Boolean algebra.
Syntactically, every Boolean term corresponds to a propositional formula of propositional logic. In this transla-
tion between Boolean algebra and propositional logic, Boolean variables x, y,… become propositional variables (or atoms) P, Q,…, Boolean terms such as x∨y become propositional formulas P∨Q, 0 becomes false or ⊥, and 1 becomes true or ⊤. It is convenient when referring to generic propositions to use Greek letters φ, ψ,… as metavariables (variables outside the language of propositional calculus, used when talking about propositional calculus) to denote propositions.
The semantics of propositional logic rely on truth assignments. The essential idea of a truth assignment is that the
propositional variables are mapped to elements of a xed Boolean algebra, and then the truth value of a propositional
formula using these letters is the element of the Boolean algebra that is obtained by computing the value of the Boolean
term corresponding to the formula. In classical semantics, only the two-element Boolean algebra is used, while in
Boolean-valued semantics arbitrary Boolean algebras are considered. A tautology is a propositional formula that is
assigned truth value 1 by every truth assignment of its propositional variables to an arbitrary Boolean algebra (or,
equivalently, every truth assignment to the two element Boolean algebra).
These semantics permit a translation between tautologies of propositional logic and equational theorems of Boolean
algebra. Every tautology φ of propositional logic can be expressed as the Boolean equation φ = 1, which will be a theorem of Boolean algebra. Conversely every theorem φ = ψ of Boolean algebra corresponds to the tautologies (φ∨¬ψ) ∧ (¬φ∨ψ) and (φ∧ψ) ∨ (¬φ∧¬ψ). If → is in the language these last tautologies can also be written as (φ→ψ) ∧ (ψ→φ), or as two separate theorems φ→ψ and ψ→φ; if ≡ is available then the single tautology φ ≡ ψ can be used.
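As an illustration of the classical semantics, a tautology can be recognized mechanically by trying every truth assignment to the two-element Boolean algebra. The Python sketch below is not part of the original article; formulas are represented simply as functions over {0,1}.

from itertools import product

def is_tautology(formula, num_vars):
    """True when the formula evaluates to 1 under every assignment of 0/1
    truth values to its propositional variables."""
    return all(formula(*v) == 1 for v in product((0, 1), repeat=num_vars))

# P -> P, written with Boolean operations as (not P) or P
print(is_tautology(lambda p: (1 - p) | p, 1))                    # True
# P -> (Q -> P)
print(is_tautology(lambda p, q: (1 - p) | ((1 - q) | p), 2))     # True
# P -> Q is not a tautology
print(is_tautology(lambda p, q: (1 - p) | q, 2))                 # False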

29.8.1 Applications

One motivating application of propositional calculus is the analysis of propositions and deductive arguments in natural
language. Whereas the proposition if x = 3 then x+1 = 4 depends on the meanings of such symbols as + and 1, the
proposition if x = 3 then x = 3 does not; it is true merely by virtue of its structure, and remains true whether "x =
3" is replaced by "x = 4" or "the moon is made of green cheese". The generic or abstract form of this tautology is "if P then P", or in the language of Boolean algebra, "P → P".
Replacing P by "x = 3" or any other proposition is called instantiation of P by that proposition. The result of instantiating P in an abstract proposition is called an instance of the proposition. Thus "x = 3 → x = 3" is a tautology by virtue of being an instance of the abstract tautology "P → P". All occurrences of the instantiated variable must be instantiated with the same proposition, to avoid such nonsense as P → x = 3 or x = 3 → x = 4.
Propositional calculus restricts attention to abstract propositions, those built up from propositional variables using
Boolean operations. Instantiation is still possible within propositional calculus, but only by instantiating propositional
variables by abstract propositions, such as instantiating Q by Q→P in P→(Q→P) to yield the instance P→((Q→P)→P).
(The availability of instantiation as part of the machinery of propositional calculus avoids the need for metavariables
within the language of propositional calculus, since ordinary propositional variables can be considered within the
language to denote arbitrary propositions. The metavariables themselves are outside the reach of instantiation, not
being part of the language of propositional calculus but rather part of the same language for talking about it that this
sentence is written in, where we need to be able to distinguish propositional variables and their instantiations as being
distinct syntactic entities.)

29.8.2 Deductive systems for propositional logic


An axiomatization of propositional calculus is a set of tautologies called axioms and one or more inference rules for
producing new tautologies from old. A proof in an axiom system A is a nite nonempty sequence of propositions
each of which is either an instance of an axiom of A or follows by some rule of A from propositions appearing earlier
in the proof (thereby disallowing circular reasoning). The last proposition is the theorem proved by the proof. Every
nonempty initial segment of a proof is itself a proof, whence every proposition in a proof is itself a theorem. An
axiomatization is sound when every theorem is a tautology, and complete when every tautology is a theorem.[18]

Sequent calculus

Main article: Sequent calculus

Propositional calculus is commonly organized as a Hilbert system, whose operations are just those of Boolean algebra
and whose theorems are Boolean tautologies, those Boolean terms equal to the Boolean constant 1. Another form is
sequent calculus, which has two sorts, propositions as in ordinary propositional calculus, and pairs of lists of propo-
sitions called sequents, such as A∨B, A∧C ⊢ A, B∨C. The two halves of a sequent are called the antecedent and the succedent respectively. The customary metavariable denoting an antecedent or part thereof is Γ, and for a succedent Δ; thus Γ,A ⊢ Δ would denote a sequent whose succedent is a list Δ and whose antecedent is a list Γ with
an additional proposition A appended after it. The antecedent is interpreted as the conjunction of its propositions,
the succedent as the disjunction of its propositions, and the sequent itself as the entailment of the succedent by the
antecedent.
Entailment diers from implication in that whereas the latter is a binary operation that returns a value in a Boolean
algebra, the former is a binary relation which either holds or does not hold. In this sense entailment is an external form
of implication, meaning external to the Boolean algebra, thinking of the reader of the sequent as also being external
and interpreting and comparing antecedents and succedents in some Boolean algebra. The natural interpretation of ⊢ is as ≤ in the partial order of the Boolean algebra defined by x ≤ y just when x∨y = y. This ability to mix external
implication and internal implication in the one logic is among the essential dierences between sequent calculus
and propositional calculus.[19]

29.9 Applications
Boolean algebra as the calculus of two values is fundamental to computer circuits, computer programming, and
mathematical logic, and is also used in other areas of mathematics such as set theory and statistics.[3]

29.9.1 Computers
In the early 20th century, several electrical engineers intuitively recognized that Boolean algebra was analogous to
the behavior of certain types of electrical circuits. Claude Shannon formally proved such behavior was logically
equivalent to Boolean algebra in his 1937 masters thesis, A Symbolic Analysis of Relay and Switching Circuits.
Today, all modern general purpose computers perform their functions using two-value Boolean logic; that is, their
electrical circuits are a physical manifestation of two-value Boolean logic. They achieve this in various ways: as
voltages on wires in high-speed circuits and capacitive storage devices, as orientations of a magnetic domain in fer-
romagnetic storage devices, as holes in punched cards or paper tape, and so on. (Some early computers used decimal
circuits or mechanisms instead of two-valued logic circuits.)
Of course, it is possible to code more than two symbols in any given medium. For example, one might use respectively
0, 1, 2, and 3 volts to code a four-symbol alphabet on a wire, or holes of dierent sizes in a punched card. In practice,
the tight constraints of high speed, small size, and low power combine to make noise a major factor. This makes it
hard to distinguish between symbols when there are several possible symbols that could occur at a single site. Rather
than attempting to distinguish between four voltages on one wire, digital designers have settled on two voltages per
wire, high and low.
Computers use two-value Boolean circuits for the above reasons. The most common computer architectures use
ordered sequences of Boolean values, called bits, of 32 or 64 values, e.g. 01101000110101100101010101001011.
When programming in machine code, assembly language, and certain other programming languages, programmers
work with the low-level digital structure of the data registers. These registers operate on voltages, where zero volts
represents Boolean 0, and a reference voltage (often +5V, +3.3V, +1.8V) represents Boolean 1. Such languages
support both numeric operations and logical operations. In this context, numeric means that the computer treats
sequences of bits as binary numbers (base two numbers) and executes arithmetic operations like add, subtract, mul-
tiply, or divide. Logical refers to the Boolean logical operations of disjunction, conjunction, and negation between
two sequences of bits, in which each bit in one sequence is simply compared to its counterpart in the other sequence.
Programmers therefore have the option of working in and applying the rules of either numeric algebra or Boolean
algebra as needed. A core differentiating feature between these families of operations is the existence of the carry operation in the first but not the second.
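The contrast between the two families of operations can be illustrated with a short Python sketch (not part of the original text): the logical operations combine each bit only with its counterpart, while numeric addition lets a carry propagate from one bit position to the next.

a = 0b10101100
b = 0b11010110

# Logical operations: purely bitwise, no interaction between positions.
print(format(a & b, "08b"))    # conjunction:  10000100
print(format(a | b, "08b"))    # disjunction:  11111110
print(format(a ^ b, "08b"))    # exclusive or: 01111010

# Numeric operation: base-two addition, with a carry rippling upward
# and finally spilling into a ninth bit.
print(format(a + b, "09b"))    # 110000010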

29.9.2 Two-valued logic


Other areas where two values is a good choice are the law and mathematics. In everyday relaxed conversation, nuanced
or complex answers such as maybe or only on the weekend are acceptable. In more focused situations such as
a court of law or theorem-based mathematics however it is deemed advantageous to frame questions so as to admit
a simple yes-or-no answeris the defendant guilty or not guilty, is the proposition true or falseand to disallow
any other answer. However much of a straitjacket this might prove in practice for the respondent, the principle of
the simple yes-no question has become a central feature of both judicial and mathematical logic, making two-valued
logic deserving of organization and study in its own right.
A central concept of set theory is membership. Now an organization may permit multiple degrees of membership,
such as novice, associate, and full. With sets however an element is either in or out. The candidates for membership
in a set work just like the wires in a digital computer: each candidate is either a member or a nonmember, just as
each wire is either high or low.
Algebra being a fundamental tool in any area amenable to mathematical treatment, these considerations combine to
make the algebra of two values of fundamental importance to computer hardware, mathematical logic, and set theory.
Two-valued logic can be extended to multi-valued logic, notably by replacing the Boolean domain {0, 1} with the
unit interval [0,1], in which case rather than only taking values 0 or 1, any value between and including 0 and 1 can be
assumed. Algebraically, negation (NOT) is replaced with 1 − x, conjunction (AND) is replaced with multiplication (xy), and disjunction (OR) is defined via De Morgan's law as 1 − (1 − x)(1 − y). Interpreting these values as logical truth values yields
a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic. In these interpretations, a value
is interpreted as the degree of truth to what extent a proposition is true, or the probability that the proposition is
true.
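A minimal Python sketch of these [0,1]-valued operations (an illustration, not part of the original text):

def f_not(x):            # negation: 1 - x
    return 1.0 - x

def f_and(x, y):         # conjunction as multiplication
    return x * y

def f_or(x, y):          # disjunction via De Morgan's law from NOT and AND
    return 1.0 - (1.0 - x) * (1.0 - y)

# On the endpoints 0 and 1 these agree with ordinary two-valued logic ...
assert f_or(0, 1) == 1 and f_and(1, 1) == 1 and f_not(0) == 1
# ... and intermediate values behave as degrees of truth, or as probabilities
# of independent events.
print(f_or(0.3, 0.5))    # approximately 0.65
print(f_and(0.3, 0.5))   # approximately 0.15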

29.9.3 Boolean operations


The original application for Boolean operations was mathematical logic, where it combines the truth values, true or
false, of individual formulas.
Natural languages such as English have words for several Boolean operations, in particular conjunction (and), dis-
junction (or), negation (not), and implication (implies). "But not" is synonymous with "and not". When used to combine
situational assertions such as the block is on the table and cats drink milk, which naively are either true or false,
the meanings of these logical connectives often have the meaning of their logical counterparts. However, with de-
scriptions of behavior such as Jim walked through the door, one starts to notice dierences such as failure of
commutativity, for example the conjunction of Jim opened the door with Jim walked through the door in that
order is not equivalent to their conjunction in the other order, since and usually means and then in such cases. Ques-
tions can be similar: the order Is the sky blue, and why is the sky blue?" makes more sense than the reverse order.
Conjunctive commands about behavior are like behavioral assertions, as in get dressed and go to school. Disjunctive
commands such love me or leave me or sh or cut bait tend to be asymmetric via the implication that one alternative
is less preferable. Conjoined nouns such as tea and milk generally describe aggregation as with set union while tea or
milk is a choice. However context can reverse these senses, as in your choices are coee and tea which usually means
the same as your choices are coee or tea (alternatives). Double negation as in I don't not like milk rarely means
literally I do like milk but rather conveys some sort of hedging, as though to imply that there is a third possibility.
Not not P can be loosely interpreted as surely P, and although P necessarily implies not not P" the converse
is suspect in English, much as with intuitionistic logic. In view of the highly idiosyncratic usage of conjunctions in
natural languages, Boolean algebra cannot be considered a reliable framework for interpreting them.
Boolean operations are used in digital logic to combine the bits carried on individual wires, thereby interpreting them
over {0,1}. When a vector of n identical binary gates is used to combine two bit vectors each of n bits, the individual bit operations can be understood collectively as a single operation on values from a Boolean algebra with 2^n elements.
Naive set theory interprets Boolean operations as acting on subsets of a given set X. As we saw earlier this behavior
exactly parallels the coordinate-wise combinations of bit vectors, with the union of two sets corresponding to the
disjunction of two bit vectors and so on.
The 256-element free Boolean algebra on three generators is deployed in computer displays based on raster graphics,
which use bit blit to manipulate whole regions consisting of pixels, relying on Boolean operations to specify how
the source region should be combined with the destination, typically with the help of a third region called the mask.
Modern video cards offer all 2^(2^3) = 256 ternary operations for this purpose, with the choice of operation being a
one-byte (8-bit) parameter. The constants SRC = 0xaa or 10101010, DST = 0xcc or 11001100, and MSK = 0xf0 or
11110000 allow Boolean operations such as (SRC^DST)&MSK (meaning XOR the source and destination and then
AND the result with the mask) to be written directly as a constant denoting a byte calculated at compile time, 0x60
in the (SRC^DST)&MSK example, 0x66 if just SRC^DST, etc. At run time the video card interprets the byte as the
raster operation indicated by the original expression in a uniform way that requires remarkably little hardware and
which takes time completely independent of the complexity of the expression.
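The following hypothetical Python sketch (not part of the original text) illustrates how the one-byte operation code arises: evaluating a Boolean expression on the three constants yields the byte, and bit i of that byte is the value the ternary operation takes when the source, destination, and mask bits are the corresponding binary digits of i (a convention that can be read off from the constants themselves).

SRC, DST, MSK = 0xAA, 0xCC, 0xF0     # the three canonical 8-bit constants

# Evaluating the expression on the constants yields the raster-operation byte.
rop = (SRC ^ DST) & MSK
print(hex(rop))                       # 0x60
print(hex(SRC ^ DST))                 # 0x66, the code for plain XOR

# The byte is a truth table: with these constants, bit i of SRC is i & 1,
# bit i of DST is (i >> 1) & 1, and bit i of MSK is (i >> 2) & 1.
for i in range(8):
    s, d, m = i & 1, (i >> 1) & 1, (i >> 2) & 1
    assert ((rop >> i) & 1) == ((s ^ d) & m)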
Solid modeling systems for computer aided design oer a variety of methods for building objects from other objects,
combination by Boolean operations being one of them. In this method the space in which objects exist is understood
as a set S of voxels (the three-dimensional analogue of pixels in two-dimensional graphics) and shapes are dened as
subsets of S, allowing objects to be combined as sets via union, intersection, etc. One obvious use is in building a
complex shape from simple shapes simply as the union of the latter. Another use is in sculpting understood as removal
of material: any grinding, milling, routing, or drilling operation that can be performed with physical machinery on
physical materials can be simulated on the computer with the Boolean operation x y or x y, which in set theory
is set dierence, remove the elements of y from those of x. Thus given two shapes one to be machined and the other
the material to be removed, the result of machining the former to remove the latter is described simply as their set
dierence.
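A toy Python sketch of this idea (an illustration, not part of the original text): shapes are finite sets of voxel coordinates, and machining is literally set difference.

# A 4x4x4 block of material and a square hole to be drilled through it.
block = {(x, y, z) for x in range(4) for y in range(4) for z in range(4)}
drill = {(1, 1, z) for z in range(4)}

machined = block - drill                      # x and not y, i.e. set difference
print(len(block), len(drill), len(machined))  # 64 4 60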

Boolean searches

Search engine queries also employ Boolean logic. For this application, each web page on the Internet may be consid-
ered to be an element of a set. The following examples use a syntax supported by Google.[20]

Doublequotes are used to combine whitespace-separated words into a single search term.[21]
Whitespace is used to specify logical AND, as it is the default operator for joining search terms:

"Search term 1" "Search term 2"

The OR keyword is used for logical OR:

"Search term 1" OR "Search term 2"

A prexed minus sign is used for logical NOT:

"Search term 1" −"Search term 2"

29.10 See also


Binary number
Boolean algebra (structure)

Boolean algebras canonically dened

Booleo

Heyting algebra

Intuitionistic logic

List of Boolean algebra topics

Logic design

Propositional calculus

Relation algebra

Three-valued logic

Vector logic

29.11 References
[1] Boole, George (2003) [1854]. An Investigation of the Laws of Thought. Prometheus Books. ISBN 978-1-59102-089-9.

[2] The name Boolean algebra (or Boolean 'algebras') for the calculus originated by Boole, extended by Schröder, and per-
fected by Whitehead seems to have been rst suggested by Sheer, in 1913. E. V. Huntington, "New sets of independent
postulates for the algebra of logic, with special reference to Whitehead and Russells Principia mathematica", in Trans.
Amer. Math. Soc. 35 (1933), 274-304; footnote, page 278.

[3] Givant, Steven; Halmos, Paul (2009). Introduction to Boolean Algebras. Undergraduate Texts in Mathematics, Springer.
ISBN 978-0-387-40293-2.

[4] J. Michael Dunn; Gary M. Hardegree (2001). Algebraic methods in philosophical logic. Oxford University Press US. p. 2.
ISBN 978-0-19-853192-0.

[5] Norman Balabanian; Bradley Carlson (2001). Digital logic design principles. John Wiley. pp. 3940. ISBN 978-0-471-
29351-4., online sample

[6] Rajaraman & Radhakrishnan. Introduction To Digital Computer Design An 5Th Ed. PHI Learning Pvt. Ltd. p. 65. ISBN
978-81-203-3409-0.

[7] John A. Camara (2010). Electrical and Electronics Reference Manual for the Electrical and Computer PE Exam. www.
ppi2pass.com. p. 41. ISBN 978-1-59126-166-7.

[8] Shin-ichi Minato, Saburo Muroga (2007). Binary Decision Diagrams. In Wai-Kai Chen. The VLSI handbook (2nd ed.).
CRC Press. ISBN 978-0-8493-4199-1. chapter 29.

[9] Alan Parkes (2002). Introduction to languages, machines and logic: computable languages, abstract machines and formal
logic. Springer. p. 276. ISBN 978-1-85233-464-2.

[10] Jon Barwise; John Etchemendy; Gerard Allwein; Dave Barker-Plummer; Albert Liu (1999). Language, proof, and logic.
CSLI Publications. ISBN 978-1-889119-08-3.

[11] Ben Goertzel (1994). Chaotic logic: language, thought, and reality from the perspective of complex systems science. Springer.
p. 48. ISBN 978-0-306-44690-0.

[12] Halmos, Paul (1963). Lectures on Boolean Algebras. van Nostrand.

[13] O'Regan, Gerard (2008). A brief history of computing. Springer. p. 33. ISBN 978-1-84800-083-4.

[14] Steven R. Givant; Paul Richard Halmos (2009). Introduction to Boolean algebras. Springer. pp. 2122. ISBN 978-0-387-
40293-2.

[15] Venn, John (July 1880). I. On the Diagrammatic and Mechanical Representation of Propositions and Reasonings (PDF).
The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 5. Taylor & Francis. 10 (59): 118.
doi:10.1080/14786448008626877. Archived (PDF) from the original on 2017-05-16.

[16] Shannon, Claude (1949). The Synthesis of Two-Terminal Switching Circuits. Bell System Technical Journal. 28: 5998.
doi:10.1002/j.1538-7305.1949.tb03624.x.

[17] Koppelberg, Sabine (1989). General Theory of Boolean Algebras. Handbook of Boolean Algebras, Vol. 1 (ed. J. Donald
Monk with Robert Bonnet). Amsterdam: North Holland. ISBN 978-0-444-70261-6.

[18] Hausman, Alan; Howard Kahane; Paul Tidman (2010) [2007]. Logic and Philosophy: A Modern Introduction. Wadsworth
Cengage Learning. ISBN 0-495-60158-6.

[19] Girard, Jean-Yves; Paul Taylor; Yves Lafont (1990) [1989]. Proofs and Types. Cambridge University Press (Cambridge
Tracts in Theoretical Computer Science, 7). ISBN 0-521-37181-3.

[20] Not all search engines support the same query syntax. Additionally, some organizations (such as Google) provide special-
ized search engines that support alternate or extended syntax. (See e.g.,Syntax cheatsheet, Google codesearch supports
regular expressions).

[21] Doublequote-delimited search terms are called exact phrase searches in the Google documentation.

29.11.1 General
Mano, Morris; Ciletti, Michael D. (2013). Digital Design. Pearson. ISBN 978-0-13-277420-8.

29.12 Further reading


J. Eldon Whitesitt (1995). Boolean algebra and its applications. Courier Dover Publications. ISBN 978-0-
486-68483-3. Suitable introduction for students in applied elds.
Dwinger, Philip (1971). Introduction to Boolean algebras. Würzburg: Physica Verlag.
Sikorski, Roman (1969). Boolean Algebras (3/e ed.). Berlin: Springer-Verlag. ISBN 978-0-387-04469-9.
Bocheński, Józef Maria (1959). A Précis of Mathematical Logic. Translated from the French and German
editions by Otto Bird. Dordrecht, South Holland: D. Reidel.

Historical perspective

George Boole (1848). "The Calculus of Logic," Cambridge and Dublin Mathematical Journal III: 18398.
Theodore Hailperin (1986). Booles logic and probability: a critical exposition from the standpoint of contem-
porary algebra, logic, and probability theory (2nd ed.). Elsevier. ISBN 978-0-444-87952-3.
Dov M. Gabbay, John Woods, ed. (2004). The rise of modern logic: from Leibniz to Frege. Handbook of the
History of Logic. 3. Elsevier. ISBN 978-0-444-51611-4., several relevant chapters by Hailperin, Valencia,
and Grattan-Guinness
Calixto Badesa (2004). The birth of model theory: Löwenheim's theorem in the frame of the theory of rela-
tives. Princeton University Press. ISBN 978-0-691-05853-5., chapter 1, Algebra of Classes and Propositional
Calculus
Burris, Stanley, 2009. The Algebra of Logic Tradition. Stanford Encyclopedia of Philosophy.
Radomir S. Stankovic; Jaakko Astola (2011). From Boolean Logic to Switching Circuits and Automata: Towards
Modern Information Technology. Springer. ISBN 978-3-642-11681-0.

29.13 External links


Boolean Algebra chapter on All About Circuits
How Stuff Works – Boolean Logic
Science and Technology - Boolean Algebra contains a list and proof of Boolean theorems and laws.
Chapter 30

Boolean algebra (structure)

For an introduction to the subject, see Boolean algebra. For an alternative presentation, see Boolean algebras canon-
ically dened.

In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of
algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can
be seen as a generalization of a power set algebra or a eld of sets, or its elements can be viewed as generalized truth
values. It is also a special case of a De Morgan algebra and a Kleene algebra (with involution).
Every Boolean algebra gives rise to a Boolean ring, and vice versa, with ring multiplication corresponding to conjunction or meet ∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨). However, the theory of Boolean rings has an inherent asymmetry between the two operators, while the axioms and theorems of Boolean algebra express the symmetry of the theory described by the duality principle.[1]

Boolean lattice of subsets


30.1 History
The term Boolean algebra honors George Boole (18151864), a self-educated English mathematician. He intro-
duced the algebraic system initially in a small pamphlet, The Mathematical Analysis of Logic, published in 1847 in
response to an ongoing public controversy between Augustus De Morgan and William Hamilton, and later as a more
substantial book, The Laws of Thought, published in 1854. Booles formulation diers from that described above
in some important respects. For example, conjunction and disjunction in Boole were not a dual pair of operations.
Boolean algebra emerged in the 1860s, in papers written by William Jevons and Charles Sanders Peirce. The first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra. Boolean
algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V.
Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall Stone in the 1930s,
and with Garrett Birkhoff's 1940 Lattice Theory. In the 1960s, Paul Cohen, Dana Scott, and others found deep new results in mathematical logic and axiomatic set theory using offshoots of Boolean algebra, namely forcing and
Boolean-valued models.

30.2 Denition
A Boolean algebra is a six-tuple consisting of a set A, equipped with two binary operations ∧ (called "meet" or "and") and ∨ (called "join" or "or"), a unary operation ¬ (called "complement" or "not") and two elements 0 and 1 (called "bottom" and "top", or "least" and "greatest" element, also denoted by the symbols ⊥ and ⊤, respectively), such that for all elements a, b and c of A, the following axioms hold:[2]

Note, however, that the absorption law can be excluded from the set of axioms as it can be derived from the other
axioms (see Proven properties).
A Boolean algebra with only one element is called a trivial Boolean algebra or a degenerate Boolean algebra.
(Some authors require 0 and 1 to be distinct elements in order to exclude this case.)
It follows from the last three pairs of axioms above (identity, distributivity and complements), or from the absorption
axiom, that

a = b ∧ a if and only if a ∨ b = b.

The relation ≤ defined by a ≤ b if these equivalent conditions hold, is a partial order with least element 0 and greatest element 1. The meet a ∧ b and the join a ∨ b of two elements coincide with their infimum and supremum, respectively, with respect to ≤.
The first four pairs of axioms constitute a definition of a bounded lattice.
It follows from the first five pairs of axioms that any complement is unique.
The set of axioms is self-dual in the sense that if one exchanges ∧ with ∨ and 0 with 1 in an axiom, the result is
again an axiom. Therefore, by applying this operation to a Boolean algebra (or Boolean lattice), one obtains another
Boolean algebra with the same elements; it is called its dual.[3]

30.3 Examples
The simplest non-trivial Boolean algebra, the two-element Boolean algebra, has only two elements, 0 and 1,
and is dened by the rules:

It has applications in logic, interpreting 0 as false, 1 as true, ∧ as and, ∨ as or, and ¬ as not.
Expressions involving variables and the Boolean operations represent statement forms, and two
such expressions can be shown to be equal using the above axioms if and only if the corresponding
statement forms are logically equivalent.

The two-element Boolean algebra is also used for circuit design in electrical engineering; here 0
and 1 represent the two dierent states of one bit in a digital circuit, typically high and low voltage.
Circuits are described by expressions containing variables, and two such expressions are equal
for all values of the variables if and only if the corresponding circuits have the same input-output
behavior. Furthermore, every possible input-output behavior can be modeled by a suitable Boolean
expression.

The two-element Boolean algebra is also important in the general theory of Boolean algebras,
because an equation involving several variables is generally true in all Boolean algebras if and only
if it is true in the two-element Boolean algebra (which can be checked by a trivial brute force
algorithm for small numbers of variables). This can for example be used to show that the following
laws (Consensus theorems) are generally valid in all Boolean algebras:
(a ∨ b) ∧ (¬a ∨ c) ∧ (b ∨ c) ≡ (a ∨ b) ∧ (¬a ∨ c)
(a ∧ b) ∨ (¬a ∧ c) ∨ (b ∧ c) ≡ (a ∧ b) ∨ (¬a ∧ c)

The power set (set of all subsets) of any given nonempty set S forms a Boolean algebra, an algebra of sets, with the two operations ∨ := ∪ (union) and ∧ := ∩ (intersection). The smallest element 0 is the empty set and the largest element 1 is the set S itself.

After the two-element Boolean algebra, the simplest Boolean algebra is that dened by the power
set of two atoms:

The set of all subsets of S that are either finite or cofinite is a Boolean algebra, an algebra of sets.

Starting with the propositional calculus with sentence symbols, form the Lindenbaum algebra (that is, the set
of sentences in the propositional calculus modulo tautology). This construction yields a Boolean algebra. It is
in fact the free Boolean algebra on generators. A truth assignment in propositional calculus is then a Boolean
algebra homomorphism from this algebra to the two-element Boolean algebra.

Given any linearly ordered set L with a least element, the interval algebra is the smallest algebra of subsets of L containing all of the half-open intervals [a, b) such that a is in L and b is either in L or equal to ∞. Interval
algebras are useful in the study of Lindenbaum-Tarski algebras; every countable Boolean algebra is isomorphic
to an interval algebra.

For any natural number n, the set of all positive divisors of n, defining a ≤ b if a divides b, forms a distributive lattice. This lattice is a Boolean algebra if and only if n is square-free. The bottom and the top element of this
Boolean algebra is the natural number 1 and n, respectively. The complement of a is given by n/a. The meet
and the join of a and b is given by the greatest common divisor (gcd) and the least common multiple (lcm) of
a and b, respectively. The ring addition a+b is given by lcm(a,b)/gcd(a,b). The picture shows an example for
n = 30. As a counter-example, considering the non-square-free n=60, the greatest common divisor of 30 and
its complement 2 would be 2, while it should be the bottom element 1.

Other examples of Boolean algebras arise from topological spaces: if X is a topological space, then the col-
lection of all subsets of X which are both open and closed forms a Boolean algebra with the operations ∨ := ∪ (union) and ∧ := ∩ (intersection).

If R is an arbitrary ring and we dene the set of central idempotents by


A = { e ∈ R : e^2 = e, ex = xe for all x ∈ R }
then the set A becomes a Boolean algebra with the operations e ∨ f := e + f - ef and e ∧ f := ef.
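As noted in the first example above, an equation in several variables is valid in all Boolean algebras exactly when it holds in the two-element algebra, so laws such as the consensus theorems can be verified by a brute-force check. A minimal Python sketch (not part of the original text; the helper name is mine):

from itertools import product

def holds_in_two_element_algebra(law, num_vars):
    """Brute-force check of an equation over the two-element Boolean algebra."""
    return all(law(*v) for v in product((0, 1), repeat=num_vars))

# First consensus theorem:
# (a or b) and ((not a) or c) and (b or c)  =  (a or b) and ((not a) or c)
consensus = lambda a, b, c: ((a | b) & ((1 - a) | c) & (b | c)) == ((a | b) & ((1 - a) | c))
print(holds_in_two_element_algebra(consensus, 3))   # True, hence valid in every Boolean algebra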

30.4 Homomorphisms and isomorphisms


A homomorphism between two Boolean algebras A and B is a function f : A → B such that for all a, b in A:
Hasse diagram of the Boolean algebra of divisors of 30.

f(a ∨ b) = f(a) ∨ f(b),
f(a ∧ b) = f(a) ∧ f(b),
f(0) = 0,
f(1) = 1.

It then follows that f(¬a) = ¬f(a) for all a in A. The class of all Boolean algebras, together with this notion of
morphism, forms a full subcategory of the category of lattices.

30.5 Boolean rings


Main article: Boolean ring

Every Boolean algebra (A, ∧, ∨) gives rise to a ring (A, +, ·) by defining a + b := (a ∧ ¬b) ∨ (b ∧ ¬a) = (a ∨ b) ∧ ¬(a ∧ b) (this operation is called symmetric difference in the case of sets and XOR in the case of logic) and a · b := a ∧ b.
The zero element of this ring coincides with the 0 of the Boolean algebra; the multiplicative identity element of the
ring is the 1 of the Boolean algebra. This ring has the property that a · a = a for all a in A; rings with this property are called Boolean rings.
Conversely, if a Boolean ring A is given, we can turn it into a Boolean algebra by defining x ∨ y := x + y + (x · y) and x ∧ y := x · y.[4][5] Since these two constructions are inverses of each other, we can say that every Boolean ring arises from a Boolean algebra, and vice versa. Furthermore, a map f : A → B is a homomorphism of Boolean algebras if and only if it is a homomorphism of Boolean rings. The categories of Boolean rings and Boolean algebras are
equivalent.[6]
Hsiang (1985) gave a rule-based algorithm to check whether two arbitrary expressions denote the same value in every
Boolean ring. More generally, Boudet, Jouannaud, and Schmidt-Schauß (1989) gave an algorithm to solve equations
between arbitrary Boolean-ring expressions. Employing the similarity of Boolean rings and Boolean algebras, both
algorithms have applications in automated theorem proving.
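The passage between the two kinds of structure is easy to make concrete; the following minimal Python sketch (not part of the original text) carries it out on the two-element algebra, with ring addition realized as XOR and ring multiplication as AND.

def ring_add(a, b):      # a + b := (a and not b) or (b and not a), i.e. symmetric difference
    return (a & (1 - b)) | (b & (1 - a))

def ring_mul(a, b):      # a . b := a and b
    return a & b

def join(a, b):          # recovering join from the ring: a or b := a + b + a.b
    return ring_add(ring_add(a, b), ring_mul(a, b))

def meet(a, b):          # recovering meet from the ring: a and b := a.b
    return ring_mul(a, b)

for a in (0, 1):
    for b in (0, 1):
        assert join(a, b) == (a | b) and meet(a, b) == (a & b)
        assert ring_mul(a, a) == a    # the defining Boolean-ring identity a.a = a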

30.6 Ideals and filters


Main articles: Ideal (order theory) and Filter (mathematics)

An ideal of the Boolean algebra A is a subset I such that for all x, y in I we have x ∨ y in I and for all a in A we have a ∧ x in I. This notion of ideal coincides with the notion of ring ideal in the Boolean ring A. An ideal I of A is called prime if I ≠ A and if a ∧ b in I always implies a in I or b in I. Furthermore, for every a ∈ A we have that a ∧ ¬a = 0 ∈ I, and so if I is prime then a ∈ I or ¬a ∈ I for every a ∈ A. An ideal I of A is called maximal if I ≠ A and if the only ideal properly containing I is A itself. For an ideal I, if a ∉ I and ¬a ∉ I, then I ∪ {a} or I ∪ {¬a} is properly contained in another ideal J. Hence such an I is not maximal, and therefore the notions of prime ideal and maximal ideal are equivalent in Boolean algebras. Moreover, these notions coincide with the ring-theoretic ones of prime ideal and maximal ideal in the Boolean ring A.
The dual of an ideal is a filter. A filter of the Boolean algebra A is a subset p such that for all x, y in p we have x ∧ y in p and for all a in A we have a ∨ x in p. The dual of a maximal (or prime) ideal in a Boolean algebra is an ultrafilter.
Ultrafilters can alternatively be described as 2-valued morphisms from A to the two-element Boolean algebra. The statement "every filter in a Boolean algebra can be extended to an ultrafilter" is called the Ultrafilter Theorem and can not be proved in ZF, if ZF is consistent. Within ZF, it is strictly weaker than the axiom of choice. The Ultrafilter Theorem has many equivalent formulations: every Boolean algebra has an ultrafilter, every ideal in a Boolean algebra can be extended to a prime ideal, etc.

30.7 Representations
It can be shown that every finite Boolean algebra is isomorphic to the Boolean algebra of all subsets of a finite set. Therefore, the number of elements of every finite Boolean algebra is a power of two.
Stone's celebrated representation theorem for Boolean algebras states that every Boolean algebra A is isomorphic to the Boolean algebra of all clopen sets in some (compact totally disconnected Hausdorff) topological space.

30.8 Axiomatics
The first axiomatization of Boolean lattices/algebras in general was given by Alfred North Whitehead in 1898.[7][8] It included the above axioms and additionally x ∨ 1 = 1 and x ∧ 0 = 0. In 1904, the American mathematician Edward V. Huntington (1874–1952) gave probably the most parsimonious axiomatization based on ∧, ∨, ¬, even proving the associativity laws (see box).[9] He also proved that these axioms are independent of each other.[10] In 1933, Huntington set out the following elegant axiomatization for Boolean algebra. It requires just one binary operation + and a unary functional symbol n, to be read as 'complement', which satisfy the following laws:

1. Commutativity: x + y = y + x.

2. Associativity: (x + y) + z = x + (y + z).

3. Huntington equation: n(n(x) + y) + n(n(x) + n(y)) = x.

Herbert Robbins immediately asked: If the Huntington equation is replaced with its dual, to wit:

4. Robbins Equation: n(n(x + y) + n(x + n(y))) = x,

do (1), (2), and (4) form a basis for Boolean algebra? Calling (1), (2), and (4) a Robbins algebra, the question then
becomes: Is every Robbins algebra a Boolean algebra? This question (which came to be known as the Robbins
conjecture) remained open for decades, and became a favorite question of Alfred Tarski and his students. In 1996,
William McCune at Argonne National Laboratory, building on earlier work by Larry Wos, Steve Winker, and Bob Veroff, answered Robbins's question in the affirmative: Every Robbins algebra is a Boolean algebra. Crucial to McCune's proof was the automated reasoning program EQP he designed. For a simplification of McCune's proof, see Dahn (1998).
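Both the Huntington equation and the Robbins equation are easily seen to hold in the two-element Boolean algebra, and hence in all Boolean algebras; the hard direction, settled by McCune, was that the Robbins equation (with commutativity and associativity) implies all the Boolean laws. The brute-force check of the easy direction can be sketched in Python (an illustration, not part of the original text), reading + as join:

from itertools import product

def neg(x):               # n(x), the complement on {0, 1}
    return 1 - x

def huntington(x, y):     # n(n(x) + y) + n(n(x) + n(y)) = x
    return (neg(neg(x) | y) | neg(neg(x) | neg(y))) == x

def robbins(x, y):        # n(n(x + y) + n(x + n(y))) = x
    return neg(neg(x | y) | neg(x | neg(y))) == x

pairs = list(product((0, 1), repeat=2))
print(all(huntington(x, y) for x, y in pairs))   # True
print(all(robbins(x, y) for x, y in pairs))      # True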

30.9 Generalizations
Removing the requirement of existence of a unit from the axioms of Boolean algebra yields "generalized Boolean algebras". Formally, a distributive lattice B is a generalized Boolean lattice, if it has a smallest element 0 and for any elements a and b in B such that a ≤ b, there exists an element x such that a ∧ x = 0 and a ∨ x = b. Defining a ∖ b as the unique x such that (a ∧ b) ∨ x = a and (a ∧ b) ∧ x = 0, we say that the structure (B, ∧, ∨, ∖, 0) is a generalized Boolean algebra, while (B, ∨, 0) is a generalized Boolean semilattice. Generalized Boolean lattices are exactly the ideals of Boolean lattices.
A structure that satises all axioms for Boolean algebras except the two distributivity axioms is called an orthocomplemented
lattice. Orthocomplemented lattices arise naturally in quantum logic as lattices of closed subspaces for separable
Hilbert spaces.

30.10 See also

30.11 Notes
[1] Givant and Paul Halmos, 2009, p. 20

[2] Davey, Priestley, 1990, p.109, 131, 144

[3] Goodstein, R. L. (2012), Chapter 2: The self-dual system of axioms, Boolean Algebra, Courier Dover Publications, pp.
21, ISBN 9780486154978.

[4] Stone, 1936

[5] Hsiang, 1985, p.260

[6] Cohn (2003), p. 81.

[7] Padmanabhan, p. 73

[8] Whitehead, 1898, p.37

[9] Huntington, 1904, p.292-293, (rst of several axiomatizations by Huntington)

[10] Huntington, 1904, p.296

30.12 References
Brown, Stephen; Vranesic, Zvonko (2002), Fundamentals of Digital Logic with VHDL Design (2nd ed.),
McGrawHill, ISBN 978-0-07-249938-4. See Section 2.5.

A. Boudet; J.P. Jouannaud; M. Schmidt-Schauß (1989). Unification in Boolean Rings and Abelian Groups
(PDF). Journal of Symbolic Computation. 8: 449477. doi:10.1016/s0747-7171(89)80054-9.
Cohn, Paul M. (2003), Basic Algebra: Groups, Rings, and Fields, Springer, pp. 51, 7081, ISBN 9781852335878
Cori, Rene; Lascar, Daniel (2000), Mathematical Logic: A Course with Exercises, Oxford University Press,
ISBN 978-0-19-850048-3. See Chapter 2.
Dahn, B. I. (1998), Robbins Algebras are Boolean: A Revision of McCunes Computer-Generated Solution
of the Robbins Problem, Journal of Algebra, 208 (2): 526532, doi:10.1006/jabr.1998.7467.
B.A. Davey; H.A. Priestley (1990). Introduction to Lattices and Order. Cambridge Mathematical Textbooks.
Cambridge University Press.
Givant, Steven; Halmos, Paul (2009), Introduction to Boolean Algebras, Undergraduate Texts in Mathematics,
Springer, ISBN 978-0-387-40293-2.
Halmos, Paul (1963), Lectures on Boolean Algebras, Van Nostrand, ISBN 978-0-387-90094-0.
Halmos, Paul; Givant, Steven (1998), Logic as Algebra, Dolciani Mathematical Expositions, 21, Mathematical
Association of America, ISBN 978-0-88385-327-6.
Hsiang, Jieh (1985). Refutational Theorem Proving Using Term Rewriting Systems (PDF). AI. 25: 255300.
doi:10.1016/0004-3702(85)90074-8.
Edward V. Huntington (1904). Sets of Independent Postulates for the Algebra of Logic. Transactions of the
American Mathematical Society. 5: 288309. JSTOR 1986459. doi:10.1090/s0002-9947-1904-1500675-4.
Huntington, E. V. (1933), New sets of independent postulates for the algebra of logic (PDF), Transactions
of the American Mathematical Society, American Mathematical Society, 35 (1): 274304, JSTOR 1989325,
doi:10.2307/1989325.
Huntington, E. V. (1933), Boolean algebra: A correction, Transactions of the American Mathematical Society,
35 (2): 557558, JSTOR 1989783, doi:10.2307/1989783.
Mendelson, Elliott (1970), Boolean Algebra and Switching Circuits, Schaums Outline Series in Mathematics,
McGrawHill, ISBN 978-0-07-041460-0.
Monk, J. Donald; Bonnet, R., eds. (1989), Handbook of Boolean Algebras, North-Holland, ISBN 978-0-
444-87291-3. In 3 volumes. (Vol.1:ISBN 978-0-444-70261-6, Vol.2:ISBN 978-0-444-87152-7, Vol.3:ISBN
978-0-444-87153-4)
Padmanabhan, Ranganathan; Rudeanu, Sergiu (2008), Axioms for lattices and boolean algebras, World Scien-
tic, ISBN 978-981-283-454-6.
Sikorski, Roman (1966), Boolean Algebras, Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer Ver-
lag.
Stoll, R. R. (1963), Set Theory and Logic, W. H. Freeman, ISBN 978-0-486-63829-4. Reprinted by Dover
Publications, 1979.
Marshall H. Stone (1936). The Theory of Representations for Boolean Algebra. Transactions of the Ameri-
can Mathematical Society. 40: 37111. doi:10.1090/s0002-9947-1936-1501865-8.
A.N. Whitehead (1898). A Treatise on Universal Algebra. Cambridge University Press. ISBN 1-4297-0032-7.

30.13 External links


Hazewinkel, Michiel, ed. (2001) [1994], Boolean algebra, Encyclopedia of Mathematics, Springer Sci-
ence+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Stanford Encyclopedia of Philosophy: "The Mathematics of Boolean Algebra," by J. Donald Monk.
McCune W., 1997. Robbins Algebras Are Boolean JAR 19(3), 263276

Boolean Algebra by Eric W. Weisstein, Wolfram Demonstrations Project, 2007.

A monograph available free online:

Burris, Stanley N.; Sankappanavar, H. P., 1981. A Course in Universal Algebra. Springer-Verlag. ISBN
3-540-90578-2.
Weisstein, Eric W. Boolean Algebra. MathWorld.
Chapter 31

Boolean algebras canonically dened

Boolean algebras have been formally dened variously as a kind of lattice and as a kind of ring. This
article presents them, equally formally, as simply the models of the equational theory of two values, and
observes the equivalence of both the lattice and ring denitions to this more elementary one.

Boolean algebra is a mathematically rich branch of abstract algebra. Just as group theory deals with groups, and linear
algebra with vector spaces, Boolean algebras are models of the equational theory of the two values 0 and 1 (whose
interpretation need not be numerical). Common to Boolean algebras, groups, and vector spaces is the notion of an
algebraic structure, a set closed under zero or more operations satisfying certain equations.
Just as there are basic examples of groups, such as the group Z of integers and the permutation group Sn of permutations
of n objects, there are also basic examples of Boolean algebra such as the following.

The algebra of binary digits or bits 0 and 1 under the logical operations including disjunction, conjunction, and
negation. Applications include the propositional calculus and the theory of digital circuits.

The algebra of sets under the set operations including union, intersection, and complement. Applications
include any area of mathematics for which sets form a natural foundation.

Boolean algebra thus permits applying the methods of abstract algebra to mathematical logic, digital logic, and the
set-theoretic foundations of mathematics.
Unlike groups of finite order, which exhibit complexity and diversity and whose first-order theory is decidable only in special cases, all finite Boolean algebras share the same theorems and have a decidable first-order theory. Instead the intricacies of Boolean algebra are divided between the structure of infinite algebras and the algorithmic complexity of their syntactic structure.

31.1 Denition
Boolean algebra treats the equational theory of the maximal two-element finitary algebra, called the Boolean prototype, and the models of that theory, called Boolean algebras. These terms are defined as follows.
An algebra is a family of operations on a set, called the underlying set of the algebra. We take the underlying set of
the Boolean prototype to be {0,1}.
An algebra is finitary when each of its operations takes only finitely many arguments. For the prototype each argument of an operation is either 0 or 1, as is the result of the operation. The maximal such algebra consists of all finitary operations on {0,1}.
The number of arguments taken by each operation is called the arity of the operation. An operation on {0,1} of arity n, or n-ary operation, can be applied to any of 2^n possible values for its n arguments. For each choice of arguments the operation may return 0 or 1, whence there are 2^(2^n) n-ary operations.
The prototype therefore has two operations taking no arguments, called zeroary or nullary operations, namely zero
and one. It has four unary operations, two of which are constant operations, another is the identity, and the most
commonly used one, called negation, returns the opposite of its argument: 1 if 0, 0 if 1. It has sixteen binary
operations; again two of these are constant, another returns its rst argument, yet another returns its second, one is
called conjunction and returns 1 if both arguments are 1 and otherwise 0, another is called disjunction and returns 0 if
both arguments are 0 and otherwise 1, and so on. The number of (n+1)-ary operations in the prototype is the square
of the number of n-ary operations, so there are 16^2 = 256 ternary operations, 256^2 = 65,536 quaternary operations,
and so on.
A family is indexed by an index set. In the case of a family of operations forming an algebra, the indices are called
operation symbols, constituting the language of that algebra. The operation indexed by each symbol is called the
denotation or interpretation of that symbol. Each operation symbol species the arity of its interpretation, whence
all possible interpretations of a symbol have the same arity. In general it is possible for an algebra to interpret
distinct symbols with the same operation, but this is not the case for the prototype, whose symbols are in one-one correspondence with its operations. The prototype therefore has 2^(2^n) n-ary operation symbols, called the Boolean operation symbols and forming the language of Boolean algebra. Only a few operations have conventional symbols, such as ¬ for negation, ∧ for conjunction, and ∨ for disjunction. It is convenient to consider the i-th n-ary symbol to be ⁿfᵢ as done below in the section on truth tables.
An equational theory in a given language consists of equations between terms built up from variables using symbols
of that language. Typical equations in the language of Boolean algebra are x∨y = y∨x, x∧x = x, x∧¬x = y∧¬y, and x∧y = x.
An algebra satises an equation when the equation holds for all possible values of its variables in that algebra when
the operation symbols are interpreted as specied by that algebra. The laws of Boolean algebra are the equations in
the language of Boolean algebra satisfied by the prototype. The first three of the above examples are Boolean laws, but not the fourth since 1∧0 ≠ 1.
The equational theory of an algebra is the set of all equations satised by the algebra. The laws of Boolean algebra
therefore constitute the equational theory of the Boolean prototype.
A model of a theory is an algebra interpreting the operation symbols in the language of the theory and satisfying the
equations of the theory.

A Boolean algebra is any model of the laws of Boolean algebra.

That is, a Boolean algebra is a set and a family of operations thereon interpreting the Boolean operation symbols and
satisfying the same laws as the Boolean prototype.
If we dene a homologue of an algebra to be a model of the equational theory of that algebra, then a Boolean algebra
can be dened as any homologue of the prototype.
Example 1. The Boolean prototype is a Boolean algebra, since trivially it satises its own laws. It is thus the
prototypical Boolean algebra. We did not call it that initially in order to avoid any appearance of circularity in the
denition.

31.2 Basis
The operations need not be all explicitly stated. A basis is any set from which the remaining operations can be obtained by composition. A Boolean algebra may be defined from any of several different bases. Three bases for Boolean algebra are in common use, the lattice basis, the ring basis, and the Sheffer stroke or NAND basis. These bases impart respectively a logical, an arithmetical, and a parsimonious character to the subject.

The lattice basis originated in the 19th century with the work of Boole, Peirce, and others seeking an algebraic
formalization of logical thought processes.
The ring basis emerged in the 20th century with the work of Zhegalkin and Stone and became the basis of choice
for algebraists coming to the subject from a background in abstract algebra. Most treatments of Boolean algebra
assume the lattice basis, a notable exception being Halmos[1963] whose linear algebra background evidently
endeared the ring basis to him.
Since all finitary operations on {0,1} can be defined in terms of the Sheffer stroke NAND (or its dual NOR), the resulting economical basis has become the basis of choice for analyzing digital circuits, in particular gate arrays in digital electronics.

The common elements of the lattice and ring bases are the constants 0 and 1, and an associative commutative binary
operation, called meet x∧y in the lattice basis, and multiplication xy in the ring basis. The distinction is only termino-
logical. The lattice basis has the further operations of join, x∨y, and complement, ¬x. The ring basis has instead the
arithmetic operation x⊕y of addition (the symbol ⊕ is used in preference to + because the latter is sometimes given
the Boolean reading of join).
To be a basis is to yield all other operations by composition, whence any two bases must be intertranslatable. The
lattice basis translates x∨y to the ring basis as x⊕y⊕xy, and ¬x as x⊕1. Conversely the ring basis translates x⊕y to the
lattice basis as (x∨y)∧¬(x∧y).
Both of these bases allow Boolean algebras to be defined via a subset of the equational properties of the Boolean
operations. For the lattice basis, it suffices to define a Boolean algebra as a distributive lattice satisfying x∧¬x = 0
and x∨¬x = 1, called a complemented distributive lattice. The ring basis turns a Boolean algebra into a Boolean ring,
namely a ring satisfying x² = x.
Emil Post gave a necessary and sufficient condition for a set of operations to be a basis for the nonzeroary Boolean
operations. A nontrivial property is one shared by some but not all operations making up a basis. Post listed five
nontrivial properties of operations, identifiable with Post's five classes, each preserved by composition, and showed
that a set of operations formed a basis if, for each property, the set contained an operation lacking that property. (The
converse of Post's theorem, extending "if" to "if and only if", is the easy observation that a property from among these
five holding of every operation in a candidate basis will also hold of every operation formed by composition from that
candidate, whence by nontriviality of that property the candidate will fail to be a basis.) Post's five properties are:

monotone, no 0-1 input transition can cause a 1-0 output transition;

affine, representable with Zhegalkin polynomials that lack bilinear or higher terms, e.g. x⊕y⊕1 but not x∧y;

self-dual, so that complementing all inputs complements the output, as with ¬x, or the median operator x∧y ∨ y∧z ∨ z∧x,
or their negations;

strict (mapping the all-zeros input to zero);

costrict (mapping all-ones to one).

The NAND (dually NOR) operation lacks all these, thus forming a basis by itself.
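As a small illustration of the NAND basis, the following C sketch (an informal example added here, not drawn from Post's argument) builds negation, conjunction, and disjunction from NAND alone on the two-element domain:

    #include <stdio.h>

    /* Illustrative sketch: other Boolean operations obtained by composing NAND
       on {0,1}. The helper names are made up for the example. */
    static int nand(int x, int y) { return !(x && y); }

    static int not_(int x)        { return nand(x, x); }             /* NOT x  = x NAND x          */
    static int and_(int x, int y) { return not_(nand(x, y)); }       /* x AND y = NOT(x NAND y)    */
    static int or_(int x, int y)  { return nand(not_(x), not_(y)); } /* x OR y = (NOT x) NAND (NOT y) */

    int main(void) {
        for (int x = 0; x <= 1; x++)
            for (int y = 0; y <= 1; y++)
                printf("x=%d y=%d  not=%d and=%d or=%d\n",
                       x, y, not_(x), and_(x, y), or_(x, y));
        return 0;
    }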

31.3 Truth tables


The finitary operations on {0,1} may be exhibited as truth tables, thinking of 0 and 1 as the truth values false and
true. They can be laid out in a uniform and application-independent way that allows us to name, or at least number,
them individually. These names provide a convenient shorthand for the Boolean operations. The names of the n-ary
operations are binary numbers of 2^n bits. There being 2^(2^n) such operations, one cannot ask for a more succinct
nomenclature. Note that each finitary operation can be called a switching function.
This layout and associated naming of operations is illustrated here in full for arities from 0 to 2.

These tables continue at higher arities, with 2^n rows at arity n, each row giving a valuation or binding of the n variables
x₀,...,xₙ₋₁ and each column headed ⁿfᵢ giving the value ⁿfᵢ(x₀,...,xₙ₋₁) of the i-th n-ary operation at that valuation. The
operations include the variables, for example ¹f₂ is x₀ while ²f₁₀ is x₀ (as two copies of its unary counterpart) and
²f₁₂ is x₁ (with no unary counterpart). Negation or complement ¬x₀ appears as ¹f₁ and again as ²f₅, along with ²f₃
(¬x₁, which did not appear at arity 1), disjunction or union x₀∨x₁ as ²f₁₄, conjunction or intersection x₀∧x₁ as ²f₈,
implication x₀→x₁ as ²f₁₃, exclusive-or or symmetric difference x₀⊕x₁ as ²f₆, set difference x₀\x₁ as ²f₂, and so on.
As a minor detail important more for its form than its content, the operations of an algebra are traditionally organized
as a list. Although we are here indexing the operations of a Boolean algebra by the nitary operations on {0,1}, the
truth-table presentation above serendipitously orders the operations rst by arity and second by the layout of the tables
for each arity. This permits organizing the set of all Boolean operations in the traditional list format. The list order
for the operations of a given arity is determined by the following two rules.

(i) The i-th row in the left half of the table is the binary representation of i with its least signicant or 0-th
bit on the left (little-endian order, originally proposed by Alan Turing, so it would not be unreasonable
to call it Turing order).

(ii) The j-th column in the right half of the table is the binary representation of j, again in little-endian
order. In effect the subscript of the operation is the truth table of that operation. By analogy with Gödel
numbering of computable functions one might call this numbering of the Boolean operations the Boole
numbering.
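A small C sketch can illustrate this numbering informally (the helper name boolean_op is made up for the example): the value of the i-th n-ary operation at a valuation is simply the bit of i indexed by the little-endian number encoding that valuation.

    #include <stdio.h>

    /* Sketch of the numbering described above: the i-th n-ary operation maps
       the valuation (x0,...,x(n-1)) to bit v of i, where
       v = x0 + 2*x1 + 4*x2 + ... (little-endian). */
    static unsigned boolean_op(unsigned i, int n, const int x[]) {
        unsigned v = 0;
        for (int k = 0; k < n; k++)        /* encode the valuation little-endian */
            v |= (unsigned)(x[k] & 1) << k;
        return (i >> v) & 1u;              /* the subscript i is the truth table */
    }

    int main(void) {
        int x[2];
        /* 2f10 (i = 10, binary 1010) should reproduce x0, and 2f8 is x0 AND x1. */
        for (x[1] = 0; x[1] <= 1; x[1]++)
            for (x[0] = 0; x[0] <= 1; x[0]++)
                printf("x0=%d x1=%d  2f10=%u  2f8=%u\n",
                       x[0], x[1], boolean_op(10, 2, x), boolean_op(8, 2, x));
        return 0;
    }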

When programming in C or Java, bitwise disjunction is denoted x|y, conjunction x&y, and negation ~x. A program
can therefore represent for example the operation x∧(y∨z) in these languages as x&(y|z), having previously set x =
0xaa, y = 0xcc, and z = 0xf0 (the "0x" indicates that the following constant is to be read in hexadecimal or base 16),
either by assignment to variables or defined as macros. These one-byte (eight-bit) constants correspond to the columns
for the input variables in the extension of the above tables to three variables. This technique is almost universally
used in raster graphics hardware to provide a flexible variety of ways of combining and masking images, the typical
operations being ternary and acting simultaneously on source, destination, and mask bits.
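A minimal C sketch of this technique, using the three constants just mentioned:

    #include <stdio.h>

    int main(void) {
        /* Each one-byte constant is the column of an input variable in the
           three-variable truth table. */
        unsigned char x = 0xaa, y = 0xcc, z = 0xf0;

        /* The byte computed below is the truth table of the ternary operation
           x AND (y OR z); its 8 bits are the outputs at the 8 valuations. */
        unsigned char op = x & (y | z);
        printf("truth table of x&(y|z): 0x%02x\n", op);  /* prints 0xa8 */
        return 0;
    }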

31.4 Examples

31.4.1 Bit vectors

Example 2. All bit vectors of a given length form a Boolean algebra pointwise, meaning that any n-ary Boolean
operation can be applied to n bit vectors one bit position at a time. For example, the ternary OR of three bit vectors
each of length 4 is the bit vector of length 4 formed by or-ing the three bits in each of the four bit positions, thus
0100∨1000∨1001 = 1101. Another example is the truth tables above for the n-ary operations, whose columns are
all the bit vectors of length 2^n and which therefore can be combined pointwise, whence the n-ary operations form a
Boolean algebra. This works equally well for bit vectors of finite and infinite length, the only rule being that the bit
positions all be indexed by the same set in order that "corresponding position" be well defined.
The atoms of such an algebra are the bit vectors containing exactly one 1. In general the atoms of a Boolean algebra
are those elements x such that x∧y has only two possible values, x or 0.

31.4.2 Power set algebra

Example 3. The power set algebra, the set 2^W of all subsets of a given set W. This is just Example 2 in disguise,
with W serving to index the bit positions. Any subset X of W can be viewed as the bit vector having 1s in just those
bit positions indexed by elements of X. Thus the all-zero vector is the empty subset of W while the all-ones vector is
W itself, these being the constants 0 and 1 respectively of the power set algebra. The counterpart of disjunction x∨y
is union X∪Y, while that of conjunction x∧y is intersection X∩Y. Negation ¬x becomes ~X, complement relative to
W. There is also set difference X\Y = X∩~Y, symmetric difference (X\Y)∪(Y\X), ternary union X∪Y∪Z, and so on.
The atoms here are the singletons, those subsets with exactly one element.
Examples 2 and 3 are special cases of a general construct of algebra called direct product, applicable not just to
Boolean algebras but all kinds of algebra including groups, rings, etc. The direct product of any family Bᵢ of Boolean
algebras, where i ranges over some index set I (not necessarily finite or even countable), is a Boolean algebra consisting
of all I-tuples (...,xᵢ,...) whose i-th element is taken from Bᵢ. The operations of a direct product are the corresponding
operations of the constituent algebras acting within their respective coordinates; in particular operation ⁿfⱼ of the
product operates on n I-tuples by applying operation ⁿfⱼ of Bᵢ to the n elements in the i-th coordinate of the n tuples,
for all i in I.
When all the algebras being multiplied together in this way are the same algebra A we call the direct product a direct
power of A. The Boolean algebra of all 32-bit bit vectors is the two-element Boolean algebra raised to the 32nd power,
or power set algebra of a 32-element set, denoted 2^32. The Boolean algebra of all sets of integers is 2^Z. All Boolean
algebras we have exhibited thus far have been direct powers of the two-element Boolean algebra, justifying the name
power set algebra.

31.4.3 Representation theorems

It can be shown that every finite Boolean algebra is isomorphic to some power set algebra. Hence the cardinality
(number of elements) of a finite Boolean algebra is a power of 2, namely one of 1,2,4,8,...,2^n,... This is called a
representation theorem as it gives insight into the nature of finite Boolean algebras by giving a representation of
them as power set algebras.
This representation theorem does not extend to infinite Boolean algebras: although every power set algebra is a
Boolean algebra, not every Boolean algebra need be isomorphic to a power set algebra. In particular, whereas there
can be no countably infinite power set algebras (the smallest infinite power set algebra is the power set algebra 2^N of
sets of natural numbers, shown by Cantor to be uncountable), there exist various countably infinite Boolean algebras.
To go beyond power set algebras we need another construct. A subalgebra of an algebra A is any subset of A closed
under the operations of A. Every subalgebra of a Boolean algebra A must still satisfy the equations holding of A,
since any violation would constitute a violation for A itself. Hence every subalgebra of a Boolean algebra is a Boolean
algebra.
A subalgebra of a power set algebra is called a field of sets; equivalently a field of sets is a set of subsets of some set W
including the empty set and W and closed under finite union and complement with respect to W (and hence also under
finite intersection). Birkhoff's [1935] representation theorem for Boolean algebras states that every Boolean algebra
is isomorphic to a field of sets. Now Birkhoff's HSP theorem for varieties can be stated as: every class of models
of the equational theory of a class C of algebras is the Homomorphic image of a Subalgebra of a direct Product of
algebras of C. Normally all three of H, S, and P are needed; what the first of these two Birkhoff theorems shows is that
for the special case of the variety of Boolean algebras Homomorphism can be replaced by Isomorphism. Birkhoff's
HSP theorem for varieties in general therefore becomes Birkhoff's ISP theorem for the variety of Boolean algebras.

31.4.4 Other examples

It is convenient when talking about a set X of natural numbers to view it as a sequence x₀,x₁,x₂,... of bits, with xᵢ =
1 if and only if i ∈ X. This viewpoint will make it easier to talk about subalgebras of the power set algebra 2^N, which
this viewpoint makes the Boolean algebra of all sequences of bits. It also fits well with the columns of a truth table:
when a column is read from top to bottom it constitutes a sequence of bits, but at the same time it can be viewed as
the set of those valuations (assignments to variables in the left half of the table) at which the function represented by
that column evaluates to 1.
Example 4. Ultimately constant sequences. Any Boolean combination of ultimately constant sequences is ultimately
constant; hence these form a Boolean algebra. We can identify these with the integers by viewing the ultimately-zero
sequences as nonnegative binary numerals (bit 0 of the sequence being the low-order bit) and the ultimately-one
sequences as negative binary numerals (think two's complement arithmetic with the all-ones sequence being −1).
This makes the integers a Boolean algebra, with union being bitwise OR and complement being −x−1. There are
only countably many integers, so this infinite Boolean algebra is countable. The atoms are the powers of two, namely
1,2,4,.... Another way of describing this algebra is as the set of all finite and cofinite sets of natural numbers, with
the ultimately all-ones sequences corresponding to the cofinite sets, those sets omitting only finitely many natural
numbers.
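A short C sketch illustrating this identification (assuming the usual two's-complement representation of int, which is the representation the example has in mind):

    #include <stdio.h>

    int main(void) {
        /* Viewing integers as ultimately constant bit sequences, complement is
           bitwise NOT, which equals -x-1, and union is bitwise OR.
           The atoms of this algebra are the powers of two: 1, 2, 4, ... */
        int x = 12;                                    /* ...0001100 */
        printf("~x = %d, -x-1 = %d\n", ~x, -x - 1);    /* both print -13 */
        printf("union of 12 and 10 = %d\n", 12 | 10);  /* prints 14 */
        return 0;
    }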
Example 5. Periodic sequences. A sequence is called periodic when there exists some number n > 0, called a witness
to periodicity, such that xᵢ = xᵢ₊ₙ for all i ≥ 0. The period of a periodic sequence is its least witness. Negation leaves
period unchanged, while the disjunction of two periodic sequences is periodic, with period at most the least common
multiple of the periods of the two arguments (the period can be as small as 1, as happens with the union of any
sequence and its complement). Hence the periodic sequences form a Boolean algebra.
Example 5 resembles Example 4 in being countable, but differs in being atomless. The latter is because the conjunction
of any nonzero periodic sequence x with a sequence of greater period is neither 0 nor x. It can be shown that
all countably infinite atomless Boolean algebras are isomorphic, that is, up to isomorphism there is only one such
algebra.
Example 6. Periodic sequences with period a power of two. This is a proper subalgebra of Example 5 (a proper
subalgebra being a subalgebra that is not the whole algebra). These can be understood as the finitary operations, with
the first period of such a sequence giving the truth table of the operation it represents. For example, the truth table
of x₀ in the table of binary operations, namely ²f₁₀, has period 2 (and so can be recognized as using only the first
variable) even though 12 of the binary operations have period 4. When the period is 2^n the operation only depends
on the first n variables, the sense in which the operation is finitary. This example is also a countably infinite atomless
Boolean algebra. Hence Example 5 is isomorphic to a proper subalgebra of itself! Example 6, and hence Example
5, constitutes the free Boolean algebra on countably many generators, meaning the Boolean algebra of all finitary
operations on a countably infinite set of generators or variables.
Example 7. Ultimately periodic sequences, sequences that become periodic after an initial finite bout of lawlessness.
They constitute a proper extension of Example 5 (meaning that Example 5 is a proper subalgebra of Example 7)
and also of Example 4, since constant sequences are periodic with period one. Sequences may vary as to when they
settle down, but any finite set of sequences will all eventually settle down no later than their slowest-to-settle member,
whence ultimately periodic sequences are closed under all Boolean operations and so form a Boolean algebra. This
example has the same atoms and coatoms as Example 4, whence it is not atomless and therefore not isomorphic to
Example 5/6. However it contains an infinite atomless subalgebra, namely Example 5, and so is not isomorphic to
Example 4, every subalgebra of which must be a Boolean algebra of finite sets and their complements and therefore
atomic. This example is isomorphic to the direct product of Examples 4 and 5, furnishing another description of it.
Example 8. The direct product of the periodic-sequence algebra (Example 5) with any finite but nontrivial Boolean algebra.
(The trivial one-element Boolean algebra is the unique finite atomless Boolean algebra.) This resembles Example 7
in having both atoms and an atomless subalgebra, but differs in having only finitely many atoms. Example 8 is in fact
an infinite family of examples, one for each possible finite number of atoms.
These examples by no means exhaust the possible Boolean algebras, even the countable ones. Indeed, there are
uncountably many nonisomorphic countable Boolean algebras, which Jussi Ketonen [1978] classified completely in
terms of invariants representable by certain hereditarily countable sets.

31.5 Boolean algebras of Boolean operations


The n-ary Boolean operations themselves constitute a power set algebra 2^W, namely when W is taken to be the set of
2^n valuations of the n inputs. In terms of the naming system of operations ⁿfᵢ where i in binary is a column of a truth
table, the columns can be combined with Boolean operations of any arity to produce other columns present in the
table. That is, we can apply any Boolean operation of arity m to m Boolean operations of arity n to yield a Boolean
operation of arity n, for any m and n.
The practical significance of this convention for both software and hardware is that n-ary Boolean operations can be
represented as words of the appropriate length. For example, each of the 256 ternary Boolean operations can be
represented as an unsigned byte. The available logical operations such as AND and OR can then be used to form
new operations. If we take x, y, and z (dispensing with subscripted variables for now) to be 10101010, 11001100,
and 11110000 respectively (170, 204, and 240 in decimal, 0xaa, 0xcc, and 0xf0 in hexadecimal), their pairwise
conjunctions are x∧y = 10001000, y∧z = 11000000, and z∧x = 10100000, while their pairwise disjunctions are x∨y =
11101110, y∨z = 11111100, and z∨x = 11111010. The disjunction of the three conjunctions is 11101000, which also
happens to be the conjunction of three disjunctions. We have thus calculated, with a dozen or so logical operations
on bytes, that the two ternary operations

(x∧y)∨(y∧z)∨(z∧x)

and

(x∨y)∧(y∨z)∧(z∨x)

are actually the same operation. That is, we have proved the equational identity

(x∧y)∨(y∧z)∨(z∧x) = (x∨y)∧(y∨z)∧(z∨x),

for the two-element Boolean algebra. By the definition of Boolean algebra this identity must therefore hold in every
Boolean algebra.
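The calculation just described can be carried out directly; the following C sketch (an illustration added here, not part of the original argument) checks the identity by comparing the two bytes:

    #include <stdio.h>

    int main(void) {
        /* With these three masks every one of the 8 valuations occupies one bit
           position, so a single comparison of bytes checks the identity at all
           valuations at once. */
        unsigned char x = 0xaa, y = 0xcc, z = 0xf0;
        unsigned char lhs = (x & y) | (y & z) | (z & x);  /* 0xe8 */
        unsigned char rhs = (x | y) & (y | z) & (z | x);  /* 0xe8 */
        printf("lhs=0x%02x rhs=0x%02x equal=%d\n", lhs, rhs, lhs == rhs);
        return 0;
    }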
This ternary operation incidentally formed the basis for Grau's [1947] ternary Boolean algebras, which he axiomatized
in terms of this operation and negation. The operation is symmetric, meaning that its value is independent of any of
the 3! = 6 permutations of its arguments. The two halves of its truth table 11101000 are the truth tables for ∨, 1110,
and ∧, 1000, so the operation can be phrased as: if z then x∨y else x∧y. Since it is symmetric it can equally well be
phrased as either of: if x then y∨z else y∧z, or: if y then z∨x else z∧x. Viewed as a labeling of the 8-vertex 3-cube, the
upper half is labeled 1 and the lower half 0; for this reason it has been called the median operator, with the evident
generalization to any odd number of variables (odd in order to avoid the tie when exactly half the variables are 0).

31.6 Axiomatizing Boolean algebras


The technique we just used to prove an identity of Boolean algebra can be generalized to all identities in a systematic
way that can be taken as a sound and complete axiomatization of, or axiomatic system for, the equational laws of
Boolean logic. The customary formulation of an axiom system consists of a set of axioms that "prime the pump" with
some initial identities, along with a set of inference rules for inferring the remaining identities from the axioms and
previously proved identities. In principle it is desirable to have finitely many axioms; however as a practical matter
it is not necessary, since it is just as effective to have a finite axiom schema having infinitely many instances, each of
which when used in a proof can readily be verified to be a legal instance, the approach we follow here.
Boolean identities are assertions of the form s = t where s and t are n-ary terms, by which we shall mean here terms
whose variables are limited to x₀ through xₙ₋₁. An n-ary term is either an atom or an application. An application
ᵐfᵢ(t₀,...,tₘ₋₁) is a pair consisting of an m-ary operation ᵐfᵢ and a list or m-tuple (t₀,...,tₘ₋₁) of m n-ary terms
called operands.
Associated with every term is a natural number called its height. Atoms are of zero height, while applications are of
height one plus the height of their highest operand.
Now what is an atom? Conventionally an atom is either a constant (0 or 1) or a variable xᵢ where 0 ≤ i < n. For the
proof technique here it is convenient to define atoms instead to be n-ary operations ⁿfᵢ, which although treated here
as atoms nevertheless mean the same as ordinary terms of the exact form ⁿfᵢ(x₀,...,xₙ₋₁) (exact in that the variables
must be listed in the order shown without repetition or omission). This is not a restriction because atoms of this form
include all the ordinary atoms, namely the constants 0 and 1, which arise here as the n-ary operations ⁿf₀ and ⁿf₋₁ for
each n (abbreviating 2^(2^n)−1 to −1), and the variables x₀,...,xₙ₋₁, as can be seen from the truth tables where x₀ appears
as both the unary operation ¹f₂ and the binary operation ²f₁₀ while x₁ appears as ²f₁₂.
The following axiom schema and three inference rules axiomatize the Boolean algebra of n-ary terms.

A1. ᵐfᵢ(ⁿfⱼ₀,...,ⁿfⱼₘ₋₁) = ⁿf_(i∘ĵ), where (i∘ĵ)ᵥ = i_(ĵᵥ), with ĵ being j transpose, defined by (ĵᵥ)ᵤ = (jᵤ)ᵥ.
R1. With no premises infer t = t.
R2. From s = u and t = u infer s = t where s, t, and u are n-ary terms.
R3. From s₀ = t₀,...,sₘ₋₁ = tₘ₋₁ infer ᵐfᵢ(s₀,...,sₘ₋₁) = ᵐfᵢ(t₀,...,tₘ₋₁), where all terms sᵢ, tᵢ are
n-ary.

The meaning of the side condition on A1 is that i∘ĵ is that 2^n-bit number whose v-th bit is the ĵᵥ-th bit of i, where
the ranges of each quantity are u: m, v: 2^n, jᵤ: 2^(2^n), and ĵᵥ: 2^m. (So j is an m-tuple of 2^n-bit numbers while ĵ as the
transpose of j is a 2^n-tuple of m-bit numbers. Both j and ĵ therefore contain m2^n bits.)

A1 is an axiom schema rather than an axiom by virtue of containing metavariables, namely m, i, n, and j₀ through
jₘ₋₁. The actual axioms of the axiomatization are obtained by setting the metavariables to specific values. For
example, if we take m = n = i = j₀ = 1, we can compute the two bits of i∘ĵ from i₁ = 0 and i₀ = 1, so i∘ĵ = 2 (or 10
when written as a two-bit number). The resulting instance, namely ¹f₁(¹f₁) = ¹f₂, expresses the familiar axiom ¬¬x
= x of double negation. Rule R3 then allows us to infer ¬¬¬x = ¬x by taking s₀ to be ¹f₁(¹f₁) or ¬¬x₀, t₀ to be ¹f₂
or x₀, and ᵐfᵢ to be ¹f₁ or ¬.
For each m and n there are only finitely many axioms instantiating A1, namely 2^(2^m) × (2^(2^n))^m. Each instance is specified
by 2^m + m2^n bits.
We treat R1 as an inference rule, even though it is like an axiom in having no premises, because it is a domain-
independent rule along with R2 and R3 common to all equational axiomatizations, whether of groups, rings, or any
other variety. The only entity specic to Boolean algebras is axiom schema A1. In this way when talking about
dierent equational theories we can push the rules to one side as being independent of the particular theories, and
confine attention to the axioms as the only part of the axiom system characterizing the particular equational theory at
hand.
This axiomatization is complete, meaning that every Boolean law s = t is provable in this system. One first shows by
induction on the height of s that every Boolean law for which t is atomic is provable, using R1 for the base case (since
distinct atoms are never equal) and A1 and R3 for the induction step (s an application). This proof strategy amounts
to a recursive procedure for evaluating s to yield an atom. Then to prove s = t in the general case when t may be an
application, use the fact that if s = t is an identity then s and t must evaluate to the same atom, call it u. So first prove
s = u and t = u as above, that is, evaluate s and t using A1, R1, and R3, and then invoke R2 to infer s = t.
In A1, if we view the number n^m as the function type m→n, and m∘n as the application m(n), we can reinterpret the
numbers i, j, ĵ, and i∘ĵ as functions of type i: (m→2)→2, j: m→((n→2)→2), ĵ: (n→2)→(m→2), and i∘ĵ: (n→2)→2.
The definition (i∘ĵ)ᵥ = i_(ĵᵥ) in A1 then translates to (i∘ĵ)(v) = i(ĵ(v)), that is, i∘ĵ is defined to be the composition of i and ĵ
understood as functions. So the content of A1 amounts to defining term application to be essentially composition,
modulo the need to transpose the m-tuple j to make the types match up suitably for composition. This composition
is the one in Lawvere's previously mentioned category of power sets and their functions. In this way we have trans-
lated the commuting diagrams of that category, as the equational theory of Boolean algebras, into the equational
consequences of A1 as the logical representation of that particular composition law.

31.7 Underlying lattice structure


Underlying every Boolean algebra B is a partially ordered set or poset (B,≤). The partial order relation is defined
by x ≤ y just when x = x∧y, or equivalently when y = x∨y. Given a set X of elements of a Boolean algebra, an upper
bound on X is an element y such that for every element x of X, x ≤ y, while a lower bound on X is an element y such
that for every element x of X, y ≤ x.
A sup (supremum) of X is a least upper bound on X, namely an upper bound on X that is less than or equal to every upper
bound on X. Dually an inf (infimum) of X is a greatest lower bound on X. The sup of x and y always exists in the
underlying poset of a Boolean algebra, being x∨y, and likewise their inf exists, namely x∧y. The empty sup is 0 (the
bottom element) and the empty inf is 1 (top). It follows that every finite set has both a sup and an inf. Infinite subsets
of a Boolean algebra may or may not have a sup and/or an inf; in a power set algebra they always do.
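For the bit-vector algebras of Example 2 this order is easy to compute; the following C sketch (an illustration using machine words, with leq a made-up helper name) tests x ≤ y as x = x∧y and exhibits the sup and inf:

    #include <stdio.h>
    #include <stdbool.h>

    /* In a bit-vector Boolean algebra, x <= y exactly when x = x AND y;
       the sup of x and y is bitwise OR and the inf is bitwise AND. */
    static bool leq(unsigned x, unsigned y) { return (x & y) == x; }

    int main(void) {
        unsigned x = 0x06, y = 0x0e;
        printf("x<=y: %d, sup=0x%02x, inf=0x%02x\n", leq(x, y), x | y, x & y);
        return 0;
    }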
Any poset (B,≤) such that every pair x,y of elements has both a sup and an inf is called a lattice. We write x∨y for
the sup and x∧y for the inf. The underlying poset of a Boolean algebra always forms a lattice. The lattice is said to
be distributive when x∧(y∨z) = (x∧y)∨(x∧z), or equivalently when x∨(y∧z) = (x∨y)∧(x∨z), since either law implies
the other in a lattice. These are laws of Boolean algebra, whence the underlying poset of a Boolean algebra forms a
distributive lattice.
Given a lattice with a bottom element 0 and a top element 1, a pair x,y of elements is called complementary when
x∧y = 0 and x∨y = 1, and we then say that y is a complement of x and vice versa. Any element x of a distributive
lattice with top and bottom can have at most one complement. When every element of a lattice has a complement the
lattice is called complemented. It follows that in a complemented distributive lattice, the complement of an element
always exists and is unique, making complement a unary operation. Furthermore, every complemented distributive
lattice forms a Boolean algebra, and conversely every Boolean algebra forms a complemented distributive lattice.
This provides an alternative definition of a Boolean algebra, namely as any complemented distributive lattice. Each
of these three properties can be axiomatized with finitely many equations, whence these equations taken together
constitute a finite axiomatization of the equational theory of Boolean algebras.
In a class of algebras defined as all the models of a set of equations, it is usually the case that some algebras of the class
satisfy more equations than just those needed to qualify them for the class. The class of Boolean algebras is unusual in
that, with a single exception, every Boolean algebra satises exactly the Boolean identities and no more. The exception
is the one-element Boolean algebra, which necessarily satises every equation, even x = y, and is therefore sometimes
referred to as the inconsistent Boolean algebra.

31.8 Boolean homomorphisms


A Boolean homomorphism is a function h: A→B between Boolean algebras A, B such that for every Boolean operation
ᵐfᵢ,

h(ᵐfᵢ(x₀,...,xₘ₋₁)) = ᵐfᵢ(h(x₀),...,h(xₘ₋₁)).

The category Bool of Boolean algebras has as objects all Boolean algebras and as morphisms the Boolean homomor-
phisms between them.

There exists a unique homomorphism from the two-element Boolean algebra 2 to every Boolean algebra, since ho-
momorphisms must preserve the two constants and those are the only elements of 2. A Boolean algebra with this
property is called an initial Boolean algebra. It can be shown that any two initial Boolean algebras are isomorphic,
so up to isomorphism 2 is the initial Boolean algebra.
In the other direction, there may exist many homomorphisms from a Boolean algebra B to 2. Any such homomorphism
partitions B into those elements mapped to 1 and those to 0. The subset of B consisting of the former is called an
ultrafilter of B. When B is finite its ultrafilters pair up with its atoms; one atom is mapped to 1 and the rest to 0. Each
ultrafilter of B thus consists of an atom of B and all the elements above it; hence exactly half the elements of B are in
the ultrafilter, and there are as many ultrafilters as atoms.
For infinite Boolean algebras the notion of ultrafilter becomes considerably more delicate. The elements greater than or
equal to an atom always form an ultrafilter, but so do many other sets; for example, in the Boolean algebra of finite
and cofinite sets of integers the cofinite sets form an ultrafilter even though none of them are atoms. Likewise the
powerset of the integers has among its ultrafilters the set of all subsets containing a given integer; there are countably
many of these standard ultrafilters, which may be identified with the integers themselves, but there are uncountably
many more nonstandard ultrafilters. These form the basis for nonstandard analysis, providing representations for
such classically inconsistent objects as infinitesimals and delta functions.

31.9 Innitary extensions

Recall the definition of sup and inf from the section above on the underlying partial order of a Boolean algebra. A
complete Boolean algebra is one every subset of which has both a sup and an inf, even the infinite subsets. Gaifman
[1964] and Hales [1964] independently showed that infinite free complete Boolean algebras do not exist. This suggests
that a logic with set-sized-infinitary operations may have class-many terms, just as a logic with finitary operations
may have infinitely many terms.
There is however another approach to introducing infinitary Boolean operations: simply drop "finitary" from the
definition of Boolean algebra. A model of the equational theory of the algebra of all operations on {0,1} of arity
up to the cardinality of the model is called a complete atomic Boolean algebra, or CABA. (In place of this awkward
restriction on arity we could allow any arity, leading to a different awkwardness, that the signature would then be
larger than any set, that is, a proper class. One benefit of the latter approach is that it simplifies the definition of
homomorphism between CABAs of different cardinality.) Such an algebra can be defined equivalently as a complete
Boolean algebra that is atomic, meaning that every element is a sup of some set of atoms. Free CABAs exist for
all cardinalities of a set V of generators, namely the power set algebra 2^(2^V), this being the obvious generalization of
the finite free Boolean algebras. This neatly rescues infinitary Boolean logic from the fate the Gaifman–Hales result
seemed to consign it to.
The nonexistence of free complete Boolean algebras can be traced to failure to extend the equations of Boolean logic
suitably to all laws that should hold for infinitary conjunction and disjunction, in particular the neglect of distributivity
in the definition of complete Boolean algebra. A complete Boolean algebra is called completely distributive when
arbitrary conjunctions distribute over arbitrary disjunctions and vice versa. A Boolean algebra is a CABA if and only
if it is complete and completely distributive, giving a third definition of CABA. A fourth definition is as any Boolean
algebra isomorphic to a power set algebra.
A complete homomorphism is one that preserves all sups that exist, not just the finite sups, and likewise for infs. The
category CABA of all CABAs and their complete homomorphisms is dual to the category of sets and their functions,
meaning that it is equivalent to the opposite of that category (the category resulting from reversing all morphisms).
Things are not so simple for the category Bool of Boolean algebras and their homomorphisms, which Marshall Stone
showed in effect (though he lacked both the language and the conceptual framework to make the duality explicit) to
be dual to the category of totally disconnected compact Hausdorff spaces, subsequently called Stone spaces.
Another infinitary class intermediate between Boolean algebras and complete Boolean algebras is the notion of a
sigma-algebra. This is defined analogously to complete Boolean algebras, but with sups and infs limited to countable
arity. That is, a sigma-algebra is a Boolean algebra with all countable sups and infs. Because the sups and infs are of
bounded cardinality, unlike the situation with complete Boolean algebras, the Gaifman–Hales result does not apply
and free sigma-algebras do exist. Unlike the situation with CABAs, however, the free countably generated sigma-
algebra is not a power set algebra.

31.10 Other denitions of Boolean algebra


We have already encountered several definitions of Boolean algebra, as a model of the equational theory of the two-
element algebra, as a complemented distributive lattice, as a Boolean ring, and as a product-preserving functor from
a certain category (Lawvere). Two more definitions worth mentioning are:

Stone (1936) A Boolean algebra is the set of all clopen sets of a topological space. It is no limitation to require
the space to be a totally disconnected compact Hausdorff space, or Stone space; that is, every Boolean algebra
arises in this way, up to isomorphism. Moreover, if the two Boolean algebras formed as the clopen sets of two
Stone spaces are isomorphic, so are the Stone spaces themselves, which is not the case for arbitrary topological
spaces. This is just the reverse direction of the duality mentioned earlier from Boolean algebras to Stone spaces.
This definition is fleshed out by the next definition.

Johnstone (1982) A Boolean algebra is a filtered colimit of finite Boolean algebras.

(The circularity in this definition can be removed by replacing "finite Boolean algebra" by "finite power set" equipped
with the Boolean operations standardly interpreted for power sets.)
To put this in perspective, infinite sets arise as filtered colimits of finite sets, infinite CABAs as filtered limits of finite
power set algebras, and infinite Stone spaces as filtered limits of finite sets. Thus if one starts with the finite sets and
asks how these generalize to infinite objects, there are two ways: adding them gives ordinary or inductive sets while
multiplying them gives Stone spaces or profinite sets. The same choice exists for finite power set algebras as the
duals of finite sets: addition yields Boolean algebras as inductive objects while multiplication yields CABAs or power
set algebras as profinite objects.
A characteristic distinguishing feature is that the underlying topology of objects so constructed, when defined so as
to be Hausdorff, is discrete for inductive objects and compact for profinite objects. The topology of finite Hausdorff
spaces is always both discrete and compact, whereas for infinite spaces "discrete" and "compact" are mutually exclu-
sive. Thus when generalizing finite algebras (of any kind, not just Boolean) to infinite ones, "discrete" and "compact"
part company, and one must choose which one to retain. The general rule, for both finite and infinite algebras, is that
finitary algebras are discrete, whereas their duals are compact and feature infinitary operations. Between these two
extremes, there are many intermediate infinite Boolean algebras whose topology is neither discrete nor compact.

31.11 See also

31.12 References
Birkhoff, Garrett (1935). On the structure of abstract algebras. Proc. Camb. Phil. Soc. 31: 433–454. ISSN
0008-1981. doi:10.1017/s0305004100013463.

Boole, George (2003) [1854]. An Investigation of the Laws of Thought. Prometheus Books. ISBN 978-1-
59102-089-9.

Dwinger, Philip (1971). Introduction to Boolean algebras. Würzburg: Physica Verlag.

Gaifman, Haim (1964). Infinite Boolean Polynomials, I. Fundamenta Mathematicae. 54: 229–250. ISSN
0016-2736.

Givant, Steven; Halmos, Paul (2009). Introduction to Boolean Algebras. Undergraduate Texts in Mathematics,
Springer. ISBN 978-0-387-40293-2..

Grau, A.A. (1947). Ternary Boolean algebra. Bull. Am. Math. Soc. 33 (6): 567572. doi:10.1090/S0002-
9904-1947-08834-0.

Hales, Alfred W. (1964). On the Non-Existence of Free Complete Boolean Algebras. Fundamenta Mathe-
maticae. 54: 4566. ISSN 0016-2736.

Halmos, Paul (1963). Lectures on Boolean Algebras. van Nostrand. ISBN 0-387-90094-2.

--------, and Givant, Steven (1998) Logic as Algebra. Dolciani Mathematical Exposition, No. 21. Mathematical
Association of America.
Johnstone, Peter T. (1982). Stone Spaces. Cambridge, UK: Cambridge University Press. ISBN 978-0-521-
33779-3.
Ketonen, Jussi (1978). The structure of countable Boolean algebras. Annals of Mathematics. 108 (1): 4189.
JSTOR 1970929. doi:10.2307/1970929.
Koppelberg, Sabine (1989) General Theory of Boolean Algebras in Monk, J. Donald, and Bonnet, Robert,
eds., Handbook of Boolean Algebras, Vol. 1. North Holland. ISBN 978-0-444-70261-6.
Peirce, C. S. (1989) Writings of Charles S. Peirce: A Chronological Edition: 18791884. Kloesel, C. J. W., ed.
Indianapolis: Indiana University Press. ISBN 978-0-253-37204-8.
Lawvere, F. William (1963). Functorial semantics of algebraic theories. Proceedings of the National Academy
of Sciences. 50 (5): 869873. doi:10.1073/pnas.50.5.869.

Schröder, Ernst (1890–1910). Vorlesungen über die Algebra der Logik (exakte Logik), I–III. Leipzig: B.G.
Teubner.

Sikorski, Roman (1969). Boolean Algebras (3rd. ed.). Berlin: Springer-Verlag. ISBN 978-0-387-04469-9.
Stone, M. H. (1936). The Theory of Representation for Boolean Algebras. Transactions of the American
Mathematical Society. 40 (1): 37111. ISSN 0002-9947. JSTOR 1989664. doi:10.2307/1989664.
Tarski, Alfred (1983). Logic, Semantics, Metamathematics, Corcoran, J., ed. Hackett. 1956 1st edition edited
and translated by J. H. Woodger, Oxford Uni. Press. Includes English translations of the following two articles:
Tarski, Alfred (1929). Sur les classes closes par rapport à certaines opérations élémentaires. Fundamenta Mathematicae. 16: 195–97. ISSN 0016-2736.
Tarski, Alfred (1935). Zur Grundlegung der Booleschen Algebra, I. Fundamenta Mathematicae. 24: 177–98. ISSN 0016-2736.

Vladimirov, D.A. (1969). Bulevy algebry (Boolean algebras, in Russian; German translation Boolesche Algebren, 1974). Nauka (German translation Akademie-Verlag).
Chapter 32

Boolean conjunctive query

In the theory of relational databases, a Boolean conjunctive query is a conjunctive query without distinguished
predicates, i.e., a query in the form R₁(t₁) ∧ ... ∧ Rₙ(tₙ), where each Rᵢ is a relation symbol and each tᵢ is a
tuple of variables and constants; the number of elements in tᵢ is equal to the arity of Rᵢ. Such a query evaluates to
either true or false depending on whether the relations in the database contain the appropriate tuples of values, i.e.
the conjunction is valid according to the facts in the database.
As an example, if a database schema contains the relation symbols Father (binary, who's the father of whom) and
Employed (unary, who is employed), a conjunctive query could be Father(Mark, x) ∧ Employed(x). This query
evaluates to true if there exists an individual x who is a child of Mark and employed. In other words, this query
expresses the question: "does Mark have an employed child?"

32.1 See also


Logical conjunction

Conjunctive query

32.2 References
G. Gottlob; N. Leone; F. Scarcello (2001). The complexity of acyclic conjunctive queries. Journal of the
ACM (JACM). 48 (3): 431498. doi:10.1145/382780.382783.

Chapter 33

Boolean data type

In computer science, the Boolean data type is a data type, having two values (usually denoted true and false),
intended to represent the truth values of logic and Boolean algebra. It is named after George Boole, who first defined
an algebraic system of logic in the mid 19th century. The Boolean data type is primarily associated with conditional
statements, which allow different actions and change control flow depending on whether a programmer-specified
Boolean condition evaluates to true or false. It is a special case of a more general logical data type; logic need not
always be Boolean.

33.1 Generalities
In programming languages with a built-in Boolean data type, such as Pascal and Java, the comparison operators such
as > and ≠ are usually defined to return a Boolean value. Conditional and iterative commands may be defined to test
Boolean-valued expressions.
Languages with no explicit Boolean data type, like C90 and Lisp, may still represent truth values by some other data
type. Common Lisp uses an empty list for false, and any other value for true. C uses an integer type, where relational
expressions like i > j and logical expressions connected by && and || are defined to have value 1 if true and 0 if false,
whereas the test parts of if, while, for, etc., treat any non-zero value as true.[1][2] Indeed, a Boolean variable may be
regarded (and implemented) as a numerical variable with one binary digit (bit), which can store only two values. In
practice, Booleans in computers are most likely implemented as a full word rather than a single bit; this is usually
due to the ways computers transfer blocks of information.
Most programming languages, even those with no explicit Boolean type, have support for Boolean algebraic operations
such as conjunction (AND, &, *), disjunction (OR, |, +), equivalence (EQV, =, ==), exclusive or/non-equivalence
(XOR, NEQV, ^, !=), and negation (NOT, ~, !).
In some languages, like Ruby, Smalltalk, and Alice the true and false values belong to separate classes, i.e., True and
False, respectively, so there is no one Boolean type.
In SQL, which uses a three-valued logic for explicit comparisons because of its special treatment of Nulls, the Boolean
data type (introduced in SQL:1999) is also dened to include more than two truth values, so that SQL Booleans can
store all logical values resulting from the evaluation of predicates in SQL. A column of Boolean type can also be
restricted to just TRUE and FALSE though.

33.2 ALGOL and the built-in boolean type


One of the earliest programming languages to provide an explicit boolean data type was ALGOL 60 (1960) with
values true and false and logical operators denoted by the symbols '∧' (and), '∨' (or), '⊃' (implies), '≡' (equivalence),
and '¬' (not). Due to input device and character set limits on many computers of the time, however, most compilers
used alternative representations for many of the operators, such as AND or 'AND'.
This approach with boolean as a built-in (either primitive or otherwise predened) data type was adopted by many
later programming languages, such as Simula 67 (1967), ALGOL 68 (1970),[3] Pascal (1970), Ada (1980), Java
(1995), and C# (2000), among others.

33.3 Fortran
The rst version of FORTRAN (1957) and its successor FORTRAN II (1958) had no logical values or operations;
even the conditional IF statement took an arithmetic expression and branched to one of three locations according to
its sign; see arithmetic IF. FORTRAN IV (1962), however, followed the ALGOL 60 example by providing a Boolean
data type (LOGICAL), truth literals (.TRUE. and .FALSE.), Boolean-valued numeric comparison operators (.EQ.,
.GT., etc.), and logical operators (.NOT., .AND., .OR.). In FORMAT statements, a specific control character ('L')
was provided for the parsing or formatting of logical values.[4]

33.4 Lisp and Scheme


The language Lisp (1958) never had a built-in Boolean data type. Instead, conditional constructs like cond assume
that the logical value false is represented by the empty list (), which is defined to be the same as the special atom nil or
NIL; whereas any other s-expression is interpreted as true. For convenience, most modern dialects of Lisp predefine
the atom t to have value t, so that t can be used as a mnemonic notation for true.
This approach (any value can be used as a Boolean value) was retained in most Lisp dialects (Common Lisp, Scheme,
Emacs Lisp), and similar models were adopted by many scripting languages, even ones having a distinct Boolean type
or Boolean values; although which values are interpreted as false and which are true vary from language to language.
In Scheme, for example, the false value is an atom distinct from the empty list, so the latter is interpreted as true.

33.5 Pascal, Ada, and Haskell


The language Pascal (1970) introduced the concept of programmer-defined enumerated types. A built-in Boolean
data type was then provided as a predefined enumerated type with values FALSE and TRUE. By definition, all
comparisons, logical operations, and conditional statements applied to and/or yielded Boolean values. Otherwise, the
Boolean type had all the facilities which were available for enumerated types in general, such as ordering and use
as indices. In contrast, converting between Booleans and integers (or any other types) still required explicit tests or
function calls, as in ALGOL 60. This approach (Boolean is an enumerated type) was adopted by most later languages
which had enumerated types, such as Modula, Ada, and Haskell.

33.6 C, C++, Objective-C, AWK


Initial implementations of the language C (1972) provided no Boolean type, and to this day Boolean values are
commonly represented by integers (ints) in C programs. The comparison operators (>, ==, etc.) are defined to return
a signed integer (int) result, either 0 (for false) or 1 (for true). Logical operators (&&, ||, !, etc.) and condition-testing
statements (if, while) assume that zero is false and all other values are true.
After enumerated types (enums) were added to the American National Standards Institute version of C, ANSI C
(1989), many C programmers got used to defining their own Boolean types as such, for readability reasons. However,
enumerated types are equivalent to integers according to the language standards; so the effective identity between
Booleans and integers is still valid for C programs.
Standard C (since C99) provides a boolean type, called _Bool. By including the header stdbool.h one can use the
more intuitive name bool and the constants true and false. The language guarantees that any two true values will
compare equal (which was impossible to achieve before the introduction of the type). Boolean values still behave
as integers, can be stored in integer variables, and used anywhere integers would be valid, including in indexing,
arithmetic, parsing, and formatting. This approach (Boolean values are just integers) has been retained in all later
versions of C.
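A minimal C99 sketch of the facilities just described:

    #include <stdio.h>
    #include <stdbool.h>   /* bool, true, false (C99) */

    int main(void) {
        bool done = false;
        int  n    = 7;
        done = (n > 5);              /* a comparison yields 0 or 1 */
        printf("%d\n", done);        /* prints 1: bools print as integers */
        printf("%d\n", done + done); /* prints 2: they take part in arithmetic */
        return 0;
    }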
C++ has a separate Boolean data type bool, but with automatic conversions from scalar and pointer values that are
very similar to those of C. This approach was adopted also by many later languages, especially by some scripting
languages such as AWK.

Objective-C also has a separate Boolean data type BOOL, with possible values being YES or NO, equivalents of
true and false respectively.[5] Also, in Objective-C compilers that support C99, C's _Bool type can be used, since
Objective-C is a superset of C.

33.7 Perl and Lua


Perl has no boolean data type. Instead, any value can behave as boolean in boolean context (condition of if or while
statement, argument of && or ||, etc.). The number 0, the strings 0 and "", the empty list (), and the special value
undef evaluate to false.[6] All else evaluates to true.
Lua has a boolean data type, but non-boolean values can also behave as booleans. The non-value nil evaluates to false,
whereas every other data type always evaluates to true, regardless of value.

33.8 Python, Ruby, and JavaScript


Python, from version 2.3 forward, has a bool type which is a subclass of int, the standard integer type.[7] It has two
possible values: True and False, which are special versions of 1 and 0 respectively and behave as such in arithmetic
contexts. Also, a numeric value of zero (integer or fractional), the null value (None), the empty string, and empty
containers (i.e. lists, sets, etc.) are considered Boolean false; all other values are considered Boolean true by default.[8]
Classes can define how their instances are treated in a Boolean context through the special method __nonzero__
(Python 2) or __bool__ (Python 3). For containers, __len__ (the special method for determining the length of
containers) is used if the explicit Boolean conversion method is not defined.
In Ruby, in contrast, only nil (Rubys null value) and a special false object are false, all else (including the integer 0
and empty arrays) is true.
In JavaScript, the empty string (""), null, undefined, NaN, +0, −0 and false[9] are sometimes called falsy, and their
complement, truthy, to distinguish between strictly type-checked and coerced Booleans.[10] Languages such as PHP
also use this approach.

33.9 SQL
The SQL:1999 standard introduced a BOOLEAN data type as an optional feature (T031). When restricted by a NOT
NULL constraint, a SQL BOOLEAN behaves like Booleans in other languages. However, in SQL the BOOLEAN
type is nullable by default like all other SQL data types, meaning it can have the special null value also. Although
the SQL standard defines three literals for the BOOLEAN type (TRUE, FALSE, and UNKNOWN) it also says
that the NULL BOOLEAN and UNKNOWN may be used interchangeably to mean exactly the same thing.[11][12]
This has caused some controversy because the identication subjects UNKNOWN to the equality comparison rules
for NULL. More precisely UNKNOWN = UNKNOWN is not TRUE but UNKNOWN/NULL.[13] As of 2012 few
major SQL systems implement the T031 feature.[14] PostgreSQL is a notable exception, although it implements no
UNKNOWN literal; NULL can be used instead.[15]

33.10 See also


true and false (commands), for shell scripting
Shannon's expansion
stdbool.h, C99 definitions for boolean

33.11 References
[1] Kernighan, Brian W; Ritchie, Dennis M (1978). The C Programming Language (1st ed.). Englewood Cliffs, NJ: Prentice
Hall. p. 41. ISBN 0-13-110163-3.

[2] Plauger, PJ; Brodie, Jim (1992) [1989]. ANSI and ISO Standard C Programmers reference. Microsoft Press. pp. 8693.
ISBN 1-55615-359-7.

[3] Report on the Algorithmic Language ALGOL 68, Section 10.2.2. (PDF). August 1968. Retrieved 30 April 2007.

[4] Digital Equipment Corporation, DECSystem10 FORTRAN IV Programmers Reference Manual. Reprinted in Mathematical
Languages Handbook. Online version accessed 2011-11-16.

[5] https://developer.apple.com/library/ios/#documentation/cocoa/conceptual/ProgrammingWithObjectiveC/FoundationTypesandCollections/
FoundationTypesandCollections.html

[6] perlsyn - Perl Syntax / Truth and Falsehood. Retrieved 10 September 2013.

[7] Van Rossum, Guido (3 April 2002). PEP 285 -- Adding a bool type. Retrieved 15 May 2013.

[8] Expressions. Python v3.3.2 documentation. Retrieved 15 May 2013.

[9] ECMAScript Language Specication (PDF). p. 43.

[10] The Elements of JavaScript Style. Douglas Crockford. Retrieved 5 March 2011.

[11] C. Date (2011). SQL and Relational Theory: How to Write Accurate SQL Code. O'Reilly Media, Inc. p. 83. ISBN
978-1-4493-1640-2.

[12] ISO/IEC 9075-2:2011 4.5

[13] Martyn Prigmore (2007). Introduction to Databases With Web Applications. Pearson Education Canada. p. 197. ISBN
978-0-321-26359-9.

[14] Troels Arvin, Survey of BOOLEAN data type implementation

[15] http://www.postgresql.org/docs/current/static/datatype-boolean.html
Chapter 34

Boolean domain

In mathematics and abstract algebra, a Boolean domain is a set consisting of exactly two elements whose interpre-
tations include false and true. In logic, mathematics and theoretical computer science, a Boolean domain is usually
written as {0, 1},[1][2][3] {false, true}, {F, T},[4] {⊥, ⊤}[5] or B.[6][7]
The algebraic structure that naturally builds on a Boolean domain is the Boolean algebra with two elements. The
initial object in the category of bounded lattices is a Boolean domain.
In computer science, a Boolean variable is a variable that takes values in some Boolean domain. Some programming
languages feature reserved words or symbols for the elements of the Boolean domain, for example false and true.
However, many programming languages do not have a Boolean datatype in the strict sense. In C or BASIC, for
example, falsity is represented by the number 0 and truth is represented by the number 1 or −1, and all variables that
can take these values can also take any other numerical values.

34.1 Generalizations
The Boolean domain {0, 1} can be replaced by the unit interval [0,1], in which case rather than only taking values 0
or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x,
conjunction (AND) is replaced with multiplication (xy), and disjunction (OR) is defined via De Morgan's law to be
1 − (1 − x)(1 − y).
Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic
and probabilistic logic. In these interpretations, a value is interpreted as the "degree of truth", to what extent a
proposition is true, or the probability that the proposition is true.
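A small C sketch of these generalized operations (the function names are made up for the example):

    #include <stdio.h>

    /* The [0,1]-valued generalization described above. */
    static double f_not(double x)           { return 1.0 - x; }
    static double f_and(double x, double y) { return x * y; }
    static double f_or(double x, double y)  { return 1.0 - (1.0 - x) * (1.0 - y); }

    int main(void) {
        double x = 0.8, y = 0.3;
        printf("NOT x = %.2f, x AND y = %.2f, x OR y = %.2f\n",
               f_not(x), f_and(x, y), f_or(x, y));
        /* On the endpoints 0 and 1 these agree with the Boolean operations. */
        return 0;
    }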

34.2 See also


Boolean-valued function

34.3 Notes
[1] Dirk van Dalen, Logic and Structure. Springer (2004), page 15.

[2] David Makinson, Sets, Logic and Maths for Computing. Springer (2008), page 13.

[3] George S. Boolos and Richard C. Jeffrey, Computability and Logic. Cambridge University Press (1980), page 99.

[4] Elliott Mendelson, Introduction to Mathematical Logic (4th. ed.). Chapman & Hall/CRC (1997), page 11.

[5] Eric C. R. Hehner, A Practical Theory of Programming. Springer (1993, 2010), page 3.

[6] Ian Parberry (1994). Circuit Complexity and Neural Networks. MIT Press. p. 65. ISBN 978-0-262-16148-0.


[7] Jordi Cortadella; et al. (2002). Logic Synthesis for Asynchronous Controllers and Interfaces. Springer Science & Business
Media. p. 73. ISBN 978-3-540-43152-7.
Chapter 35

Boolean expression

In computer science, a Boolean expression is an expression in a programming language that produces a Boolean
value when evaluated, i.e. one of true or false. A Boolean expression may be composed of a combination of the
Boolean constants true or false, Boolean-typed variables, Boolean-valued operators, and Boolean-valued functions.[1]
Boolean expressions correspond to propositional formulas in logic and are a special case of Boolean circuits.[2]

35.1 Boolean operators


Most programming languages have the Boolean operators OR, AND and NOT; in C and some newer languages, these
are represented by "||" (double pipe character), "&&" (double ampersand) and "!" (exclamation point) respectively,
while the corresponding bitwise operations are represented by "|", "&" and "~" (tilde).[3] In the mathematical literature
the symbols used are often "+" (plus), "·" (dot) and overbar, or "∨" (cup), "∧" (cap) and "¬" or "′" (prime).

35.2 Examples
The expression 5 > 3 is evaluated as true.
The expression 3 > 5 is evaluated as false.
5>=3 and 3<=5 are equivalent Boolean expressions, both of which are evaluated as true.
typeof true returns boolean and typeof false returns boolean
Of course, most Boolean expressions will contain at least one variable (X > 3), and often more (X > Y).
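For instance, in C the expressions above can be written directly (the variables X and Y here are illustrative, not part of the original examples):

    #include <stdio.h>
    #include <stdbool.h>

    int main(void) {
        int X = 5, Y = 3;
        bool a = 5 > 3;              /* evaluates to true  */
        bool b = 3 > 5;              /* evaluates to false */
        bool c = (X > 3) && (X > Y); /* logical AND of two comparisons */
        printf("%d %d %d\n", a, b, c);
        return 0;
    }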

35.3 See also


Expression (computer science)
Expression (mathematics)

35.4 References
[1] Gries, David; Schneider, Fred B. (1993), Chapter 2. Boolean Expressions, A Logical Approach to Discrete Math, Mono-
graphs in Computer Science, Springer, p. 25, ISBN 9780387941158.
[2] van Melkebeek, Dieter (2000), Randomness and Completeness in Computational Complexity, Lecture Notes in Computer
Science, 1950, Springer, p. 22, ISBN 9783540414926.
[3] E.g. for Java see Brogden, William B.; Green, Marcus (2003), Java 2 Programmer, Que Publishing, p. 45, ISBN
9780789728616.


35.5 External links


The Calculus of Logic, by George Boole, Cambridge and Dublin Mathematical Journal Vol. III (1848), pp.
183–98.
Chapter 36

Boolean function

Not to be confused with Binary function.

In mathematics and logic, a (finitary) Boolean function (or switching function) is a function of the form f : B^k →
B, where B = {0, 1} is a Boolean domain and k is a non-negative integer called the arity of the function. In the case
where k = 0, the function is essentially a constant element of B.
Every k-ary Boolean function can be expressed as a propositional formula in k variables x₁, ..., xₖ, and two propo-
sitional formulas are logically equivalent if and only if they express the same Boolean function. There are 2^(2^k) k-ary
functions for every k.
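A short C sketch (an illustration added here, not part of the article) that enumerates all 2^(2^2) = 16 binary Boolean functions by treating each 4-bit number f as a truth table:

    #include <stdio.h>

    int main(void) {
        /* For k = 2 there are 16 Boolean functions; each is determined by its
           4-bit truth table, here taken to be the number f itself. */
        for (unsigned f = 0; f < 16; f++) {
            printf("f%-2u:", f);
            for (unsigned x2 = 0; x2 <= 1; x2++)
                for (unsigned x1 = 0; x1 <= 1; x1++) {
                    unsigned v = x1 | (x2 << 1);   /* index into the truth table */
                    printf(" %u", (f >> v) & 1u);
                }
            printf("\n");
        }
        return 0;
    }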

36.1 Boolean functions in applications


A Boolean function describes how to determine a Boolean value output based on some logical calculation from
Boolean inputs. Such functions play a basic role in questions of complexity theory as well as the design of circuits
and chips for digital computers. The properties of Boolean functions play a critical role in cryptography, particularly
in the design of symmetric key algorithms (see substitution box).
Boolean functions are often represented by sentences in propositional logic, and sometimes as multivariate polynomials
over GF(2), but more efficient representations are binary decision diagrams (BDD), negation normal forms, and
propositional directed acyclic graphs (PDAG).
In cooperative game theory, monotone Boolean functions are called simple games (voting games); this notion is
applied to solve problems in social choice theory.

36.2 See also


Algebra of sets

Boolean algebra

Boolean algebra topics

Boolean domain

Boolean-valued function

Logical connective

Truth function

Truth table

Symmetric Boolean function


Decision tree model

Evasive Boolean function


Indicator function

Balanced boolean function


Read-once function

3-ary Boolean functions

36.3 References
Crama, Y; Hammer, P. L. (2011), Boolean Functions, Cambridge University Press.
Hazewinkel, Michiel, ed. (2001) [1994], Boolean function, Encyclopedia of Mathematics, Springer Sci-
ence+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Janković, Dragan; Stanković, Radomir S.; Moraga, Claudio (November 2003). "Arithmetic expressions optimisation
using dual polarity property" (PDF). Serbian Journal of Electrical Engineering. 1 (71-80, number 1).
Archived from the original (PDF) on 2016-03-05. Retrieved 2015-06-07.

Mano, M. M.; Ciletti, M. D. (2013), Digital Design, Pearson.


Chapter 37

Boolean prime ideal theorem

In mathematics, a prime ideal theorem guarantees the existence of certain types of subsets in a given algebra. A
common example is the Boolean prime ideal theorem, which states that ideals in a Boolean algebra can be extended
to prime ideals. A variation of this statement for filters on sets is known as the ultrafilter lemma. Other theorems
are obtained by considering different mathematical structures with appropriate notions of ideals, for example, rings
and prime ideals (of ring theory), or distributive lattices and maximal ideals (of order theory). This article focuses
on prime ideal theorems from order theory.
Although the various prime ideal theorems may appear simple and intuitive, they cannot be deduced in general from
the axioms of Zermelo–Fraenkel set theory without the axiom of choice (abbreviated ZF). Instead, some of the
statements turn out to be equivalent to the axiom of choice (AC), while others, the Boolean prime ideal theorem
for instance, represent a property that is strictly weaker than AC. It is due to this intermediate status between ZF
and ZF + AC (ZFC) that the Boolean prime ideal theorem is often taken as an axiom of set theory. The abbreviations
BPI or PIT (for Boolean algebras) are sometimes used to refer to this additional axiom.

37.1 Prime ideal theorems

An order ideal is a (non-empty) directed lower set. If the considered partially ordered set (poset) has binary suprema
(a.k.a. joins), as do the posets within this article, then this is equivalently characterized as a non-empty lower set I
that is closed for binary suprema (i.e. x, y in I imply x ∨ y in I). An ideal I is prime if its set-theoretic complement
in the poset is a filter. Ideals are proper if they are not equal to the whole poset.
Historically, the first statement relating to later prime ideal theorems was in fact referring to filters, i.e. subsets that
are ideals with respect to the dual order. The ultrafilter lemma states that every filter on a set is contained within
some maximal (proper) filter, an ultrafilter. Recall that filters on sets are proper filters of the Boolean algebra of its
powerset. In this special case, maximal filters (i.e. filters that are not strict subsets of any proper filter) and prime
filters (i.e. filters that with each union of subsets X and Y contain also X or Y) coincide. The dual of this statement
thus assures that every ideal of a powerset is contained in a prime ideal.
The above statement led to various generalized prime ideal theorems, each of which exists in a weak and in a strong
form. Weak prime ideal theorems state that every non-trivial algebra of a certain class has at least one prime ideal.
In contrast, strong prime ideal theorems require that every ideal that is disjoint from a given filter can be extended
to a prime ideal that is still disjoint from that filter. In the case of algebras that are not posets, one uses different
substructures instead of filters. Many forms of these theorems are actually known to be equivalent, so that the assertion
that "PIT" holds is usually taken as the assertion that the corresponding statement for Boolean algebras (BPI) is valid.
Another variation of similar theorems is obtained by replacing each occurrence of prime ideal by maximal ideal. The
corresponding maximal ideal theorems (MIT) are often, though not always, stronger than their PIT equivalents.


37.2 Boolean prime ideal theorem


The Boolean prime ideal theorem is the strong prime ideal theorem for Boolean algebras. Thus the formal statement
is:

Let B be a Boolean algebra, let I be an ideal and let F be a filter of B, such that I and F are disjoint. Then
I is contained in some prime ideal of B that is disjoint from F.

The weak prime ideal theorem for Boolean algebras simply states:

Every Boolean algebra contains a prime ideal.

We refer to these statements as the weak and strong BPI. The two are equivalent, as the strong BPI clearly implies the
weak BPI, and the reverse implication can be achieved by using the weak BPI to find prime ideals in the appropriate
quotient algebra.
The BPI can be expressed in various ways. For this purpose, recall the following theorem:
For any ideal I of a Boolean algebra B, the following are equivalent:

I is a prime ideal.

I is a maximal ideal, i.e. for any proper ideal J, if I is contained in J then I = J.

For every element a of B, I contains exactly one of {a, ¬a}.

This theorem is a well-known fact for Boolean algebras. Its dual establishes the equivalence of prime filters and
ultrafilters. Note that the last property is in fact self-dual; only the prior assumption that I is an ideal gives the full
characterization. All of the implications within this theorem can be proven in ZF.
Thus the following (strong) maximal ideal theorem (MIT) for Boolean algebras is equivalent to BPI:

Let B be a Boolean algebra, let I be an ideal and let F be a filter of B, such that I and F are disjoint. Then
I is contained in some maximal ideal of B that is disjoint from F.

Note that one requires "global" maximality, not just maximality with respect to being disjoint from F. Yet, this
variation yields another equivalent characterization of BPI:

Let B be a Boolean algebra, let I be an ideal and let F be a filter of B, such that I and F are disjoint. Then
I is contained in some ideal of B that is maximal among all ideals disjoint from F.

The fact that this statement is equivalent to BPI is easily established by noting the following theorem: For any
distributive lattice L, if an ideal I is maximal among all ideals of L that are disjoint to a given filter F, then I is
a prime ideal. The proof for this statement (which can again be carried out in ZF set theory) is included in the article
on ideals. Since any Boolean algebra is a distributive lattice, this shows the desired implication.
All of the above statements are now easily seen to be equivalent. Going even further, one can exploit the fact that the dual
orders of Boolean algebras are exactly the Boolean algebras themselves. Hence, when taking the equivalent duals of
all former statements, one ends up with a number of theorems that equally apply to Boolean algebras, but where every
occurrence of ideal is replaced by filter. It is worth noting that for the special case where the Boolean algebra under
consideration is a powerset with the subset ordering, the "maximal filter theorem" is called the ultrafilter lemma.
Summing up, for Boolean algebras, the weak and strong MIT, the weak and strong PIT, and these statements with
filters in place of ideals are all equivalent. It is known that all of these statements are consequences of the Axiom of
Choice, AC (the easy proof makes use of Zorn's lemma), but cannot be proven in ZF (Zermelo–Fraenkel set theory
without AC), if ZF is consistent. Yet, the BPI is strictly weaker than the axiom of choice, though the proof of this
statement, due to J. D. Halpern and Azriel Lévy, is rather non-trivial.

37.3 Further prime ideal theorems

The prototypical properties that were discussed for Boolean algebras in the above section can easily be modified to
include more general lattices, such as distributive lattices or Heyting algebras. However, in these cases maximal ideals
are different from prime ideals, and the relation between PITs and MITs is not obvious.
Indeed, it turns out that the MITs for distributive lattices and even for Heyting algebras are equivalent to the axiom
of choice. On the other hand, it is known that the strong PIT for distributive lattices is equivalent to BPI (i.e. to the
MIT and PIT for Boolean algebras). Hence this statement is strictly weaker than the axiom of choice. Furthermore,
observe that Heyting algebras are not self dual, and thus using filters in place of ideals yields different theorems in
this setting. Maybe surprisingly, the MIT for the duals of Heyting algebras is not stronger than BPI, which is in sharp
contrast to the abovementioned MIT for Heyting algebras.
Finally, prime ideal theorems do also exist for other (not order-theoretical) abstract algebras. For example, the MIT
for rings implies the axiom of choice. This situation requires one to replace the order-theoretic term "filter" by other
concepts; for rings a "multiplicatively closed subset" is appropriate.

37.4 The ultrafilter lemma

A filter on a set X is a nonempty collection of nonempty subsets of X that is closed under finite intersection and under
superset. An ultrafilter is a maximal filter. The ultrafilter lemma states that every filter on a set X is a subset of
some ultrafilter on X.[1] This lemma is most often used in the study of topology. An ultrafilter that does not contain
finite sets is called non-principal. The ultrafilter lemma, and in particular the existence of non-principal ultrafilters
(consider the filter of all sets with finite complements), follows easily from Zorn's lemma.
The ultrafilter lemma is equivalent to the Boolean prime ideal theorem, with the equivalence provable in ZF set theory
without the axiom of choice. The idea behind the proof is that the subsets of any set form a Boolean algebra partially
ordered by inclusion, and any Boolean algebra is representable as an algebra of sets by Stone's representation theorem.

37.5 Applications

Intuitively, the Boolean prime ideal theorem states that there are "enough" prime ideals in a Boolean algebra in
the sense that we can extend every ideal to a maximal one. This is of practical importance for proving Stone's
representation theorem for Boolean algebras, a special case of Stone duality, in which one equips the set of all prime
ideals with a certain topology and can indeed regain the original Boolean algebra (up to isomorphism) from this data.
Furthermore, it turns out that in applications one can freely choose either to work with prime ideals or with prime
filters, because every ideal uniquely determines a filter: the set of all Boolean complements of its elements. Both
approaches are found in the literature.
Many other theorems of general topology that are often said to rely on the axiom of choice are in fact equivalent to
BPI. For example, the theorem that a product of compact Hausdorff spaces is compact is equivalent to it. If we leave
out "Hausdorff" we get a theorem equivalent to the full axiom of choice.
A not too well known application of the Boolean prime ideal theorem is the existence of a non-measurable set[2] (the
example usually given is the Vitali set, which requires the axiom of choice). From this and the fact that the BPI is
strictly weaker than the axiom of choice, it follows that the existence of non-measurable sets is strictly weaker than
the axiom of choice.
In linear algebra, the Boolean prime ideal theorem can be used to prove that any two bases of a given vector space
have the same cardinality.

37.6 See also

List of Boolean algebra topics



37.7 Notes
[1] Halpern, James D. (1966), "Bases in Vector Spaces and the Axiom of Choice", Proceedings of the American Mathematical
Society, American Mathematical Society, 17 (3): 670–673, JSTOR 2035388, doi:10.1090/S0002-9939-1966-0194340-1.

[2] Sierpiński, Wacław (1938), "Fonctions additives non complètement additives et fonctions non mesurables", Fundamenta
Mathematicae, 30: 96–99

37.8 References
Davey, B. A.; Priestley, H. A. (2002), Introduction to Lattices and Order (2nd ed.), Cambridge University Press,
ISBN 978-0-521-78451-1.

An easy to read introduction, showing the equivalence of PIT for Boolean algebras and distributive lattices.

Johnstone, Peter (1982), Stone Spaces, Cambridge studies in advanced mathematics, 3, Cambridge University
Press, ISBN 978-0-521-33779-3.

The theory in this book often requires choice principles. The notes on various chapters discuss the general
relation of the theorems to PIT and MIT for various structures (though mostly lattices) and give pointers
to further literature.

Banaschewski, B. (1983), "The power of the ultrafilter theorem", Journal of the London Mathematical Society
(2nd series), 27 (2): 193–202, doi:10.1112/jlms/s2-27.2.193.

Discusses the status of the ultralter lemma.

Erné, M. (2000), "Prime ideal theory for general algebras", Applied Categorical Structures, 8: 115–144, doi:10.1023/A:100861192

Gives many equivalent statements for the BPI, including prime ideal theorems for other algebraic structures.
PITs are considered as special instances of separation lemmas.
Chapter 38

Boolean ring

In mathematics, a Boolean ring R is a ring for which x² = x for all x in R,[1][2][3] such as the ring of integers modulo
2. That is, R consists only of idempotent elements.[4][5]
Every Boolean ring gives rise to a Boolean algebra, with ring multiplication corresponding to conjunction or meet
∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨, which would constitute a
semiring). Boolean rings are named after the founder of Boolean algebra, George Boole.

38.1 Notations

There are at least four different and incompatible systems of notation for Boolean rings and algebras.

In commutative algebra the standard notation is to use x + y = (x ∨ y) ∧ ¬(x ∧ y) for the ring sum of x and
y, and use xy = x ∧ y for their product.

In logic, a common notation is to use x ∧ y for the meet (same as the ring product) and use x ∨ y for the join,
given in terms of ring notation (given just above) by x + y + xy.

In set theory and logic it is also common to use x · y for the meet, and x + y for the join x ∨ y. This use of + is
different from the use in ring theory.

A rare convention is to use xy for the product and x ⊕ y for the ring sum, in an effort to avoid the ambiguity of
+.

Historically, the term "Boolean ring" has been used to mean a Boolean ring possibly without an identity, and
"Boolean algebra" has been used to mean a Boolean ring with an identity. The existence of the identity is necessary to
consider the ring as an algebra over the field of two elements: otherwise there cannot be a (unital) ring homomorphism
of the field of two elements into the Boolean ring. (This is the same as the old use of the terms "ring" and "algebra"
in measure theory.[lower-alpha 1])

38.2 Examples

One example of a Boolean ring is the power set of any set X, where the addition in the ring is symmetric difference,
and the multiplication is intersection. As another example, we can also consider the set of all finite or cofinite subsets
of X, again with symmetric difference and intersection as operations. More generally with these operations any field
of sets is a Boolean ring. By Stone's representation theorem every Boolean ring is isomorphic to a field of sets (treated
as a ring with these operations).
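As a concrete check of this example, a minimal Python sketch models the power-set Boolean ring of a three-element set with frozensets (illustrative code, not part of the cited references):

from itertools import chain, combinations

X = {1, 2, 3}
# All subsets of X, i.e. the elements of the power-set ring.
ring = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]

add = lambda a, b: a ^ b   # symmetric difference = ring addition
mul = lambda a, b: a & b   # intersection         = ring multiplication

# The defining identity x*x = x holds, as does its consequence x + x = 0.
assert all(mul(a, a) == a for a in ring)
assert all(add(a, a) == frozenset() for a in ring)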


Venn diagrams for the Boolean operations of conjunction, disjunction, and complement

38.3 Relation to Boolean algebras


Since the join operation ∨ in a Boolean algebra is often written additively, it makes sense in this context to denote
ring addition by ⊕, a symbol that is often used to denote exclusive or.
Given a Boolean ring R, for x and y in R we can define

x ∧ y = xy,

x ∨ y = x ⊕ y ⊕ xy,

¬x = 1 ⊕ x.

These operations then satisfy all of the axioms for meets, joins, and complements in a Boolean algebra. Thus every
Boolean ring becomes a Boolean algebra. Similarly, every Boolean algebra becomes a Boolean ring thus:

xy = x ∧ y,

x ⊕ y = (x ∨ y) ∧ ¬(x ∧ y).

If a Boolean ring is translated into a Boolean algebra in this way, and then the Boolean algebra is translated into a
ring, the result is the original ring. The analogous result holds beginning with a Boolean algebra.
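These two translations can be checked mechanically; the following Python sketch does so on the two-element Boolean ring {0, 1} with addition mod 2 (a minimal illustration only):

# Boolean ring operations on {0, 1}: addition mod 2, ordinary product.
radd = lambda x, y: (x + y) % 2
rmul = lambda x, y: x * y

# Ring -> algebra: meet, join, complement as defined above.
meet = lambda x, y: rmul(x, y)
join = lambda x, y: radd(radd(x, y), rmul(x, y))
comp = lambda x: radd(1, x)

# Algebra -> ring: product is the meet, sum is (x OR y) AND NOT (x AND y).
rmul2 = lambda x, y: meet(x, y)
radd2 = lambda x, y: meet(join(x, y), comp(meet(x, y)))

# Round trip: translating back recovers the original ring operations.
for x in (0, 1):
    for y in (0, 1):
        assert radd2(x, y) == radd(x, y) and rmul2(x, y) == rmul(x, y)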
A map between two Boolean rings is a ring homomorphism if and only if it is a homomorphism of the corresponding
Boolean algebras. Furthermore, a subset of a Boolean ring is a ring ideal (prime ring ideal, maximal ring ideal) if
and only if it is an order ideal (prime order ideal, maximal order ideal) of the Boolean algebra. The quotient ring of
a Boolean ring modulo a ring ideal corresponds to the factor algebra of the corresponding Boolean algebra modulo
the corresponding order ideal.

38.4 Properties of Boolean rings


Every Boolean ring R satisfies x ⊕ x = 0 for all x in R, because we know

x ⊕ x = (x ⊕ x)² = x² ⊕ x² ⊕ x² ⊕ x² = x ⊕ x ⊕ x ⊕ x

and since (R, ⊕) is an abelian group, we can subtract x ⊕ x from both sides of this equation, which gives x ⊕ x = 0. A
similar proof shows that every Boolean ring is commutative:

x ⊕ y = (x ⊕ y)² = x² ⊕ xy ⊕ yx ⊕ y² = x ⊕ xy ⊕ yx ⊕ y

and this yields xy ⊕ yx = 0, which means xy = yx (using the first property above).
The property x ⊕ x = 0 shows that any Boolean ring is an associative algebra over the field F2 with two elements, in
just one way. In particular, any finite Boolean ring has as cardinality a power of two. Not every associative algebra
with one over F2 is a Boolean ring: consider for instance the polynomial ring F2[X].
The quotient ring R/I of any Boolean ring R modulo any ideal I is again a Boolean ring. Likewise, any subring of a
Boolean ring is a Boolean ring.
Every prime ideal P in a Boolean ring R is maximal: the quotient ring R/P is an integral domain and also a Boolean
ring, so it is isomorphic to the field F2, which shows the maximality of P. Since maximal ideals are always prime,
prime ideals and maximal ideals coincide in Boolean rings.
Boolean rings are von Neumann regular rings.
Boolean rings are absolutely flat: this means that every module over them is flat.
Every finitely generated ideal of a Boolean ring is principal (indeed, (x,y) = (x+y+xy)).

38.5 Unification
Unification in Boolean rings is decidable,[6] that is, algorithms exist to solve arbitrary equations over Boolean rings.
Both unification and matching in finitely generated free Boolean rings are NP-complete, and NP-hard in finitely
presented Boolean rings.[7] (In fact, as any unification problem f(X) = g(X) in a Boolean ring can be rewritten as the
matching problem f(X) + g(X) = 0, the problems are equivalent.)
Unification in Boolean rings is unitary if all the uninterpreted function symbols are nullary and finitary otherwise
(i.e. if the function symbols not occurring in the signature of Boolean rings are all constants then there exists a most
general unifier, and otherwise the minimal complete set of unifiers is finite).[8]

38.6 See also


Ring-sum normal form

38.7 Notes
[1] When a Boolean ring has an identity, then a complement operation becomes definable on it, and a key characteristic of the
modern definitions of both Boolean algebra and sigma-algebra is that they have complement operations.

38.8 References
[1] Fraleigh (1976, p. 200)

[2] Herstein (1964, p. 91)

[3] McCoy (1968, p. 46)

[4] Fraleigh (1976, p. 25)

[5] Herstein (1964, p. 224)

[6] Martin, U.; Nipkow, T. (1986). "Unification in Boolean Rings". In Jörg H. Siekmann. Proc. 8th CADE. LNCS. 230.
Springer. pp. 506–513.

[7] Kandri-Rody, A., Kapur, D., and Narendran, P., "An ideal-theoretic approach to word problems and unification problems
over finitely presented commutative algebras", Proc. of the first Conference on Rewriting Techniques and Applications,
Dijon, France, May 1985, LNCS 202, Springer Verlag, New York, 345-364.

[8] A. Boudet; J.-P. Jouannaud; M. Schmidt-Schauß (1989). "Unification of Boolean Rings and Abelian Groups" (PDF).
Journal of Symbolic Computation. 8: 449–477. doi:10.1016/s0747-7171(89)80054-9.

38.9 Further reading


Atiyah, Michael Francis; Macdonald, I. G. (1969), Introduction to Commutative Algebra, Westview Press, ISBN
978-0-201-40751-8
Fraleigh, John B. (1976), A First Course In Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-
201-01984-1

Herstein, I. N. (1964), Topics In Algebra, Waltham: Blaisdell Publishing Company, ISBN 978-1114541016
McCoy, Neal H. (1968), Introduction To Modern Algebra (Revised ed.), Boston: Allyn and Bacon, LCCN
68015225
Ryabukhin, Yu. M. (2001) [1994], Boolean_ring, in Hazewinkel, Michiel, Encyclopedia of Mathematics,
Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

38.10 External links


John Armstrong, Boolean Rings
Chapter 39

Boolean satisfiability algorithm heuristics

Given a Boolean expression B with V = {v0, . . . , vn} variables, finding an assignment V of the variables such that
B(V) is true is called the Boolean satisfiability problem, frequently abbreviated SAT, and is seen as the canonical
NP-complete problem.
Although no algorithm is known to solve SAT in polynomial time, there are classes of SAT problems which
do have efficient algorithms that solve them. These classes of problems arise from many practical problems in AI
planning, circuit testing, and software verification.[1][2] Research on constructing efficient SAT solvers has been based
on various principles such as resolution, search, local search and random walk, binary decisions, and Stålmarck's
algorithm.[2]
Some of these algorithms are deterministic, while others may be stochastic.
As there exist polynomial-time algorithms to convert any Boolean expression to conjunctive normal form, such as
Tseitin's algorithm, posing SAT problems in CNF does not change their computational difficulty. SAT problems are
canonically expressed in CNF because CNF has certain properties that can help prune the search space and speed up
the search process.[2]

39.1 Branching heuristics in conflict-driven algorithms [2]


One of the cornerstone Conflict-Driven Clause Learning SAT solver algorithms is the DPLL algorithm. The algorithm
works by iteratively assigning free variables, and when the algorithm encounters a bad assignment, it backtracks
to a previous iteration and chooses a different assignment of variables. It relies on a branching heuristic to pick
the next free variable assignment; the branching algorithm effectively makes choosing the variable assignment into a
decision tree. Different implementations of this heuristic produce markedly different decision trees, and thus have a
significant effect on the efficiency of the solver.
Early branching heuristics (Bohm's heuristic, the Maximum Occurrences on Minimum sized clauses heuristic, and the
Jeroslow-Wang heuristic) can be regarded as greedy algorithms. Their basic premise is to choose a free variable
assignment that will satisfy the most already unsatisfied clauses in the Boolean expression. However, as Boolean
expressions get larger, more complicated, or more structured, these heuristics fail to capture useful information about
these problems that could improve efficiency; they often get stuck in local maxima or do not consider the distribution
of variables. Additionally, larger problems require more processing, as the operation of counting free variables in
unsatisfied clauses dominates the run-time.
Another heuristic called Variable State Independent Decaying Sum (VSIDS) attempts to score each variable. VSIDS
starts by looking at small portions of the Boolean expression and assigning each phase of a variable (a variable and its
negated complement) a score proportional to the number of clauses that variable phase is in. As VSIDS progresses
and searches more parts of the Boolean expression, periodically, all scores are divided by a constant. This discounts
the effect of the presence of variables in earlier-found clauses in favor of variables with a greater presence in more
recent clauses. VSIDS will select the variable phase with the highest score to determine where to branch.
VSIDS is quite effective because the scores of variable phases are independent of the current variable assignment, so
backtracking is much easier. Further, VSIDS guarantees that each variable assignment satisfies the greatest number
of recently searched segments of the Boolean expression.
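A minimal Python sketch of a VSIDS-style score table follows; the decay constant and data layout are illustrative assumptions, not the scheme of any particular solver:

from collections import defaultdict

class VSIDSScores:
    # One activity score per literal (variable phase); literals encoded +v / -v.
    def __init__(self, decay=0.5):
        self.score = defaultdict(float)
        self.decay = decay

    def bump_clause(self, clause):
        # Reward every literal appearing in a newly considered clause.
        for lit in clause:
            self.score[lit] += 1.0

    def decay_all(self):
        # Periodically damp old activity so recently seen clauses dominate.
        for lit in self.score:
            self.score[lit] *= self.decay

    def pick_branch_literal(self, unassigned):
        # Branch on the unassigned literal (phase) with the highest score.
        return max(unassigned, key=lambda lit: self.score[lit])

scores = VSIDSScores()
scores.bump_clause([1, -2, 3])
scores.bump_clause([-2, -3])
scores.decay_all()
print(scores.pick_branch_literal([1, -1, 2, -2, 3, -3]))   # -2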


39.2 Stochastic solvers [3]


MAX-SAT (the version of SAT in which the number of satisfied clauses is maximized) can also be solved
using probabilistic algorithms. If we are given a Boolean expression B, with V = {v0, . . . , vn} variables, and we set
each variable randomly, then each clause c, with |c| variables, has a chance of being satisfied by a particular variable
assignment of Pr(c is satisfied) = 1 − 2^(−|c|). This is because each literal in c has probability 1/2 of being satisfied, and
we only need one literal in c to be satisfied; this works for every |c| ≥ 1, so Pr(c is satisfied) = 1 − 2^(−|c|) ≥ 1/2.
Now we show that randomly assigning variable values is a 1/2-approximation algorithm, which means that it is an
optimal approximation algorithm unless P = NP. Suppose we are given a Boolean expression B = {ci}, i = 1, …, n, and let

xi = 1 if ci is satisfied, and xi = 0 if ci is not satisfied.

Then

E[number of satisfied clauses] = Σi E[xi] = Σi (1 − 2^(−|ci|)) ≥ Σi 1/2 = n/2 ≥ (1/2)·OPT

This algorithm cannot be further optimized, by the PCP theorem, unless P = NP.
Other stochastic SAT solvers, such as WalkSAT and GSAT, are an improvement on the above procedure. They start by
randomly assigning values to each variable and then traverse the given Boolean expression to identify which variables
to flip to minimize the number of unsatisfied clauses. They may randomly select a variable to flip or select a new
random variable assignment to escape local maxima, much like a simulated annealing algorithm.
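The following Python sketch combines the two ideas above, random assignment plus greedy flips with occasional random moves; it is a simplified illustration in the spirit of WalkSAT, not the published algorithm itself:

import random

def num_unsat(clauses, assign):
    # Clauses are lists of literals: +v means v, -v means NOT v.
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    return sum(not any(sat(l) for l in c) for c in clauses)

def walksat_like(clauses, n_vars, flips=1000, p_random=0.5):
    assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}
    for _ in range(flips):
        if num_unsat(clauses, assign) == 0:
            break
        if random.random() < p_random:
            v = random.randint(1, n_vars)        # random walk step
        else:                                    # greedy step: best single flip
            v = min(range(1, n_vars + 1),
                    key=lambda w: num_unsat(clauses, {**assign, w: not assign[w]}))
        assign[v] = not assign[v]
    return assign

clauses = [[1, -2], [-1, 2], [2, 3]]
a = walksat_like(clauses, 3)
print(a, "unsatisfied:", num_unsat(clauses, a))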

39.3 2-SAT heuristics


Unlike general SAT problems, 2-SAT problems are tractable. There exist algorithms that can compute the satisfiability
of a 2-SAT problem in polynomial time. This is a result of the constraint that each clause has only two variables:
when an algorithm sets a variable vi, the satisfaction of the clauses which contain vi but are not satisfied by that
variable assignment depends on the satisfaction of the second variable in those clauses, which leaves only one possible
assignment for those variables.

39.3.1 Backtracking
Suppose we are given the Boolean expressions:

B1 = (v3 ∨ ¬v2) ∧ (¬v1 ∨ ¬v3)
B2 = (v3 ∨ ¬v2) ∧ (¬v1 ∨ ¬v3) ∧ (¬v1 ∨ v2).
With B1, the algorithm can select v1 = true; then, to satisfy the second clause, the algorithm will need to set v3 = false,
and consequently, to satisfy the first clause, the algorithm will set v2 = false.
If the algorithm tries to satisfy B2 in the same way it tried to solve B1, then the third clause will remain unsatisfied.
This will cause the algorithm to backtrack, set v1 = false, and continue assigning variables further.

39.3.2 Graph reduction [4]


2-SAT problems can also be reduced to running a depth-first search on the strongly connected components of a graph.
Each variable phase (a variable and its negated complement) is connected to other variable phases based on implications.
In the same way, when the algorithm above tried to solve

B1: v1 = true ⇒ v3 = false ⇒ v2 = false, which is consistent with v1 = true.



However, when the algorithm tried to solve

B2: v1 = true ⇒ v3 = false ⇒ v2 = false ⇒ v1 = false,

which is a contradiction.
Once a 2-SAT problem is reduced to a graph, then if a depth-first search finds a strongly connected component containing
both phases of a variable, the 2-SAT problem is not satisfiable. Likewise, if the depth-first search does not find
a strongly connected component with both phases of a variable, then the 2-SAT problem is satisfiable.
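A compact Python sketch of this reduction is given below; it builds the implication graph and computes strongly connected components with Kosaraju's algorithm (an illustrative implementation, not taken from the cited reference):

def two_sat(n_vars, clauses):
    # clauses: 2-literal clauses over literals +v / -v.
    idx = lambda l: 2 * (abs(l) - 1) + (0 if l > 0 else 1)
    N = 2 * n_vars
    graph, rgraph = [[] for _ in range(N)], [[] for _ in range(N)]
    for a, b in clauses:
        # (a OR b) yields the implications NOT a -> b and NOT b -> a.
        for u, v in ((idx(-a), idx(b)), (idx(-b), idx(a))):
            graph[u].append(v)
            rgraph[v].append(u)

    order, seen = [], [False] * N
    def dfs1(u):
        seen[u] = True
        for v in graph[u]:
            if not seen[v]:
                dfs1(v)
        order.append(u)
    for u in range(N):
        if not seen[u]:
            dfs1(u)

    comp = [-1] * N
    def dfs2(u, c):
        comp[u] = c
        for v in rgraph[u]:
            if comp[v] == -1:
                dfs2(v, c)
    c = 0
    for u in reversed(order):
        if comp[u] == -1:
            dfs2(u, c)
            c += 1
    # Unsatisfiable iff some variable shares a component with its negation.
    return all(comp[2 * v] != comp[2 * v + 1] for v in range(n_vars))

# B2 from above: (v3 OR NOT v2) AND (NOT v1 OR NOT v3) AND (NOT v1 OR v2)
print(two_sat(3, [(3, -2), (-1, -3), (-1, 2)]))   # True (v1 = false works)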

39.4 Weighted SAT problems


Numerous weighted SAT problems exist as the optimization versions of the general SAT problem. In this class of
problems, each clause in a CNF Boolean expression is given a weight. The objective is to maximize or minimize the
total sum of the weights of the satisfied clauses given a Boolean expression. Weighted Max-SAT is the maximization
version of this problem, and Max-SAT is the instance of weighted Max-SAT where the weights of each
clause are the same. The partial Max-SAT problem is the problem where some clauses must necessarily be satisfied
(hard clauses) and the sum total of the weights of the rest of the clauses (soft clauses) is to be maximized or minimized,
depending on the problem. Partial Max-SAT represents an intermediary between Max-SAT (all clauses are soft) and
SAT (all clauses are hard).
Note that the stochastic solvers described above can also be used to find optimal approximations for Max-SAT.

39.4.1 Variable splitting[5]


Variable splitting is a tool to find upper and lower bounds on a Max-SAT problem. It involves splitting a variable a
into new variables for all but one occurrence of a in the original Boolean expression. For example, the Boolean
expression B = (a ∨ b ∨ c) ∧ (a ∨ e ∨ b) ∧ (a ∨ c ∨ f) will become B′ = (a ∨ b ∨ c) ∧ (a1 ∨ e ∨ b) ∧ (a2 ∨ c ∨ f),
with a, a1, a2, . . . , an being all distinct variables.
This relaxes the problem by introducing new variables into the Boolean expression, which has the effect of removing
many of the constraints in the expression. Because any assignment of the variables in B can be represented by an
assignment of the variables in B′, the minimization and maximization of the weights of B′ represent lower and upper
bounds on the minimization and maximization of the weights of B.

39.4.2 Partial Max-SAT


Partial Max-SAT can be solved by first considering all of the hard clauses and solving them as an instance of SAT. The
total maximum (or minimum) weight of the soft clauses can then be evaluated, given the variable assignment necessary to
satisfy the hard clauses, by optimizing the free variables (the variables that the satisfaction of the hard clauses
does not depend on). The latter step is an implementation of Max-SAT given some pre-defined variables. Of course,
different variable assignments that satisfy the hard clauses might have different optimal free variable assignments, so
it is necessary to check different hard clause satisfaction variable assignments.

39.5 Data structures for storing clauses [2]


As SAT solvers and practical SAT problems (e.g. circuit verification) get more advanced, the Boolean expressions of
interest may exceed millions of variables with several million clauses; therefore, efficient data structures to store and
evaluate the clauses must be used.
Expressions can be stored as a list of clauses, where each clause is a list of variables, much like an adjacency list.
Though these data structures are convenient for manipulation (adding elements, deleting elements, etc.), they rely on
many pointers, which increases their memory overhead, decreases cache locality, and increases cache misses, which
renders them impractical for problems with large clause counts and large clause sizes.

When clause sizes are large, a more efficient analogous implementation is to store the expression as a matrix whose
rows are the clauses and whose columns record the variables present in each clause, much
like an adjacency matrix. The elimination of pointers and the contiguous memory occupation of arrays serve to
decrease memory usage and increase cache locality and cache hits, which offers a run-time speed up compared to the
aforesaid implementation.
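The contrast between the two layouts can be sketched in a few lines of Python with NumPy (a toy illustration; production solvers use flat arrays and watched-literal schemes in C/C++):

import numpy as np

# List-of-lists layout: one Python list (and many pointers) per clause.
clauses = [[1, -2], [-1, 2, 3], [2, -3]]

# Matrix layout: rows = clauses, columns = variables,
# entries +1 / -1 / 0 for a positive, negative, or absent literal.
n_vars = 3
M = np.zeros((len(clauses), n_vars), dtype=np.int8)
for i, clause in enumerate(clauses):
    for lit in clause:
        M[i, abs(lit) - 1] = 1 if lit > 0 else -1

# Evaluate all clauses at once against an assignment (+1 = true, -1 = false).
assign = np.array([1, 1, -1], dtype=np.int8)      # v1=T, v2=T, v3=F
satisfied = (M * assign > 0).any(axis=1)
print(satisfied)                                   # [ True  True  True]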

39.6 References
[1] Aloul, Fadi A., "On Solving Optimization Problems Using Boolean Satisfiability", American University of Sharjah (2005),
http://www.aloul.net/Papers/faloul_icmsao05.pdf

[2] Zhang, Lintao; Malik, Sharad. "The Quest for Efficient Boolean Satisfiability Solvers", Department of Electrical Engineering,
Princeton University. https://www.princeton.edu/~{}chaff/publication/cade_cav_2002.pdf

[3] Sung, Phil. "Maximum Satisfiability" (2006) http://math.mit.edu/~{}goemans/18434S06/max-sat-phil.pdf

[4] Griffith, Richard. "Strongly Connected Components and the 2-SAT Problem in Dart". http://www.greatandlittle.com/
studios/index.php?post/2013/03/26/Strongly-Connected-Components-and-the-2-SAT-Problem-in-Dart

[5] Pipatsrisawat, Knot; Palyan, Akop; et al. "Solving Weighted Max-SAT Problems in a Reduced Search Space: A Performance
Analysis". University of California Computer Science Department http://reasoning.cs.ucla.edu/fetch.php?id=86&
type=pdf
Chapter 40

Boolean satisfiability problem

3SAT redirects here. For the Central European television network, see 3sat.

In computer science, the Boolean satisfiability problem (sometimes called the propositional satisfiability problem
and abbreviated as SATISFIABILITY or SAT) is the problem of determining if there exists an interpretation that
satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be
consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the
case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the
formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a
AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT
b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable.
SAT is the first problem that was proven to be NP-complete; see the Cook–Levin theorem. This means that all problems
in the complexity class NP, which includes a wide range of natural decision and optimization problems, are at most
as difficult to solve as SAT. There is no known algorithm that efficiently solves each SAT problem, and it is generally
believed that no such algorithm exists; yet this belief has not been proven mathematically, and resolving the question
of whether SAT has a polynomial-time algorithm is equivalent to the P versus NP problem, which is a famous open
problem in the theory of computing.
Nevertheless, as of 2016, heuristic SAT algorithms are able to solve problem instances involving tens of thousands
of variables and formulas consisting of millions of symbols,[1] which is sufficient for many practical SAT problems
from e.g. artificial intelligence, circuit design, and automatic theorem proving.

40.1 Basic definitions and terminology


A propositional logic formula, also called a Boolean expression, is built from variables, the operators AND (conjunction,
also denoted by ∧), OR (disjunction, ∨), NOT (negation, ¬), and parentheses. A formula is said to be satisfiable if
it can be made TRUE by assigning appropriate logical values (i.e. TRUE, FALSE) to its variables. The Boolean
satisfiability problem (SAT) is, given a formula, to check whether it is satisfiable. This decision problem is of
central importance in various areas of computer science, including theoretical computer science, complexity theory,
algorithmics, cryptography and artificial intelligence.
There are several special cases of the Boolean satisfiability problem in which the formulas are required to have a
particular structure. A literal is either a variable, then called a positive literal, or the negation of a variable, then
called a negative literal. A clause is a disjunction of literals (or a single literal). A clause is called a Horn clause
if it contains at most one positive literal. A formula is in conjunctive normal form (CNF) if it is a conjunction of
clauses (or a single clause). For example, x1 is a positive literal, ¬x2 is a negative literal, x1 ∨ ¬x2 is a clause, and (x1
∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is a formula in conjunctive normal form; its 1st and 3rd clauses are Horn clauses, but
its 2nd clause is not. The formula is satisfiable, choosing x1 = FALSE, x2 = FALSE, and x3 arbitrarily, since (FALSE
∨ ¬FALSE) ∧ (¬FALSE ∨ FALSE ∨ x3) ∧ ¬FALSE evaluates to (FALSE ∨ TRUE) ∧ (TRUE ∨ FALSE ∨ x3)
∧ TRUE, and in turn to TRUE ∧ TRUE ∧ TRUE (i.e. to TRUE). In contrast, the CNF formula a ∧ ¬a, consisting of
two clauses of one literal, is unsatisfiable, since for a=TRUE and a=FALSE it evaluates to TRUE ∧ ¬TRUE (i.e. to
FALSE) and FALSE ∧ ¬FALSE (i.e. again to FALSE), respectively.


For some versions of the SAT problem, it is useful to define the notion of a generalized conjunctive normal form
formula, viz. as a conjunction of arbitrarily many generalized clauses, the latter being of the form R(l1,...,ln) for some
Boolean operator R and (ordinary) literals li. Different sets of allowed Boolean operators lead to different problem
versions. As an example, R(¬x,a,b) is a generalized clause, and R(¬x,a,b) ∧ R(b,y,c) ∧ R(c,d,¬z) is a generalized
conjunctive normal form. This formula is used below, with R being the ternary operator that is TRUE just if exactly
one of its arguments is.
Using the laws of Boolean algebra, every propositional logic formula can be transformed into an equivalent conjunctive
normal form, which may, however, be exponentially longer. For example, transforming the formula (x1∧y1) ∨ (x2∧y2)
∨ ... ∨ (xn∧yn) into conjunctive normal form yields

(x1 ∨ x2 ∨ … ∨ xn) ∧
(y1 ∨ x2 ∨ … ∨ xn) ∧
(x1 ∨ y2 ∨ … ∨ xn) ∧
(y1 ∨ y2 ∨ … ∨ xn) ∧ ... ∧
(x1 ∨ x2 ∨ … ∨ yn) ∧
(y1 ∨ x2 ∨ … ∨ yn) ∧
(x1 ∨ y2 ∨ … ∨ yn) ∧
(y1 ∨ y2 ∨ … ∨ yn);

while the former is a disjunction of n conjunctions of 2 variables, the latter consists of 2^n clauses of n variables.

40.2 Complexity and restricted versions

40.2.1 Unrestricted satisfiability (SAT)


Main article: Cook–Levin theorem

SAT was the first known NP-complete problem, as proved by Stephen Cook at the University of Toronto in 1971[2]
and independently by Leonid Levin at the National Academy of Sciences in 1973.[3] Until that time, the concept of
an NP-complete problem did not even exist. The proof shows how every decision problem in the complexity class
NP can be reduced to the SAT problem for CNF[note 1] formulas, sometimes called CNFSAT. A useful property of
Cook's reduction is that it preserves the number of accepting answers. For example, deciding whether a given graph
has a 3-coloring is another problem in NP; if a graph has 17 valid 3-colorings, the SAT formula produced by the
Cook–Levin reduction will have 17 satisfying assignments.
NP-completeness only refers to the run-time of the worst case instances. Many of the instances that occur in practical
applications can be solved much more quickly. See Algorithms for solving SAT below.
SAT is trivial if the formulas are restricted to those in disjunctive normal form, that is, they are disjunctions of
conjunctions of literals. Such a formula is indeed satisfiable if and only if at least one of its conjunctions is satisfiable,
and a conjunction is satisfiable if and only if it does not contain both x and NOT x for some variable x. This can be
checked in linear time. Furthermore, if they are restricted to being in full disjunctive normal form, in which every
variable appears exactly once in every conjunction, they can be checked in constant time (each conjunction represents
one satisfying assignment). But it can take exponential time and space to convert a general SAT problem to disjunctive
normal form; for an example exchange "∧" and "∨" in the above exponential blow-up example for conjunctive normal
forms.
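The linear-time DNF check just described can be written directly; the following Python sketch takes a DNF formula as a list of conjunctions, each a list of +v/-v literals (the representation is an illustrative choice):

def dnf_satisfiable(dnf):
    # A DNF formula is satisfiable iff some conjunction contains
    # no complementary pair of literals x and NOT x.
    for conjunction in dnf:
        lits = set(conjunction)
        if not any(-lit in lits for lit in lits):
            return True
    return False

# (x1 AND NOT x1) OR (x2 AND x3)  ->  satisfiable via the second term
print(dnf_satisfiable([[1, -1], [2, 3]]))   # True
print(dnf_satisfiable([[1, -1]]))           # False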

40.2.2 3-satisfiability
Like the satisfiability problem for arbitrary formulas, determining the satisfiability of a formula in conjunctive normal
form where each clause is limited to at most three literals is NP-complete also; this problem is called 3-SAT,
3CNFSAT, or 3-satisfiability. To reduce the unrestricted SAT problem to 3-SAT, transform each clause l1 ∨ ⋯
∨ ln to a conjunction of n − 2 clauses

The 3-SAT instance (x ∨ x ∨ y) ∧ (¬x ∨ ¬y ∨ ¬y) ∧ (¬x ∨ y ∨ y) reduced to a clique problem. The green vertices form a 3-clique and
correspond to the satisfying assignment x=FALSE, y=TRUE.

(l1 ∨ l2 ∨ x2) ∧
(¬x2 ∨ l3 ∨ x3) ∧
(¬x3 ∨ l4 ∨ x4) ∧ ⋯ ∧
(¬xn−3 ∨ ln−2 ∨ xn−2) ∧
(¬xn−2 ∨ ln−1 ∨ ln)

where x2, ⋯, xn−2 are fresh variables not occurring elsewhere. Although the two formulas are not logically equivalent,
they are equisatisfiable. The formula resulting from transforming all clauses is at most 3 times as long as its original,
i.e. the length growth is polynomial.[4]
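The clause transformation above can be phrased as a short Python sketch (fresh variables are drawn from a counter; the encoding of literals as +v/-v is an illustrative choice):

def clause_to_3sat(clause, next_fresh):
    # Split a clause (list of literals) into an equisatisfiable
    # set of 3-literal clauses, introducing fresh variables as needed.
    if len(clause) <= 3:
        return [clause], next_fresh
    lits = list(clause)
    x = next_fresh
    out = [lits[:2] + [x]]                 # (l1 OR l2 OR x2)
    for l in lits[2:-2]:                   # (NOT x_i OR l_{i+1} OR x_{i+1})
        out.append([-x, l, x + 1])
        x += 1
    out.append([-x, lits[-2], lits[-1]])   # (NOT x_{n-2} OR l_{n-1} OR l_n)
    return out, x + 1

clauses, fresh = clause_to_3sat([1, 2, 3, 4, 5], next_fresh=6)
print(clauses)   # [[1, 2, 6], [-6, 3, 7], [-7, 4, 5]]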
3-SAT is one of Karp's 21 NP-complete problems, and it is used as a starting point for proving that other problems
are also NP-hard.[note 2] This is done by polynomial-time reduction from 3-SAT to the other problem. An example
of a problem where this method has been used is the clique problem: given a CNF formula consisting of c clauses,
the corresponding graph consists of a vertex for each literal, and an edge between each two non-contradicting[note 3]
literals from different clauses, cf. picture. The graph has a c-clique if and only if the formula is satisfiable.[5]
There is a simple randomized algorithm due to Schöning (1999) that runs in time (4/3)^n, where n is the number of
variables in the 3-SAT proposition, and succeeds with high probability in correctly deciding 3-SAT.[6]
The exponential time hypothesis asserts that no algorithm can solve 3-SAT (or indeed k-SAT for any k > 2) in time
exp(o(n)), i.e. fundamentally faster than exponential in the number of variables.
Selman, Mitchell, and Levesque (1996) give empirical data on the difficulty of randomly generated 3-SAT formulas,
depending on their size parameters. Difficulty is measured in the number of recursive calls made by a DPLL algorithm.[7]

3-satisfiability can be generalized to k-satisfiability (k-SAT, also k-CNF-SAT), when formulas in CNF are considered
with each clause containing up to k literals. However, since for any k ≥ 3, this problem can neither be easier than
3-SAT nor harder than SAT, and the latter two are NP-complete, so must be k-SAT.
Some authors restrict k-SAT to CNF formulas with exactly k literals. This doesn't lead to a different complexity class
either, as each clause l1 ∨ ⋯ ∨ lj with j < k literals can be padded with fixed dummy variables to l1 ∨ ⋯ ∨ lj ∨ dj+1 ∨ ⋯
∨ dk. After padding all clauses, 2^k − 1 extra clauses[note 4] have to be appended to ensure that only d1 = ⋯ = dk = FALSE
can lead to a satisfying assignment. Since k doesn't depend on the formula length, the extra clauses lead to a constant
increase in length. For the same reason, it does not matter whether duplicate literals are allowed in clauses (like e.g.
x ∨ y ∨ y), or not.

40.2.3 Exactly-1 3-satisfiability

Left: Schaefer's reduction of a 3-SAT clause x ∨ y ∨ z. The result of R is TRUE (1) if exactly one of its arguments is TRUE, and FALSE
(0) otherwise. All 8 combinations of values for x, y, z are examined, one per line. The fresh variables a,...,f can be chosen to satisfy
all clauses (exactly one green argument for each R) in all lines except the first, where x ∨ y ∨ z is FALSE. Right: A simpler reduction
with the same properties.

A variant of the 3-satisfiability problem is the one-in-three 3-SAT (also known variously as 1-in-3-SAT and exactly-1
3-SAT). Given a conjunctive normal form, the problem is to determine whether there exists a truth assignment to
the variables so that each clause has exactly one TRUE literal (and thus exactly two FALSE literals). In contrast,
ordinary 3-SAT requires that every clause has at least one TRUE literal. Formally, a one-in-three 3-SAT problem is
given as a generalized conjunctive normal form with all generalized clauses using a ternary operator R that is TRUE
just if exactly one of its arguments is. When all literals of a one-in-three 3-SAT formula are positive, the satisfiability
problem is called one-in-three positive 3-SAT.
One-in-three 3-SAT, together with its positive case, is listed as NP-complete problem LO4 in the standard reference,
Computers and Intractability: A Guide to the Theory of NP-Completeness by Michael R. Garey and David S. Johnson.
One-in-three 3-SAT was proved to be NP-complete by Thomas J. Schaefer as a special case of Schaefer's dichotomy
theorem, which asserts that any problem generalizing Boolean satisfiability in a certain way is either in the class P or
is NP-complete.[8]
Schaefer gives a construction allowing an easy polynomial-time reduction from 3-SAT to one-in-three 3-SAT. Let "(x
or y or z)" be a clause in a 3CNF formula. Add six fresh Boolean variables a, b, c, d, e, and f, to be used to simulate
this clause and no other. Then the formula R(x,a,d) ∧ R(y,b,d) ∧ R(a,b,e) ∧ R(c,d,f) ∧ R(z,c,FALSE) is satisfiable by
some setting of the fresh variables if and only if at least one of x, y, or z is TRUE; see picture (left). Thus any 3-SAT
instance with m clauses and n variables may be converted into an equisatisfiable one-in-three 3-SAT instance with
5m clauses and n+6m variables.[9] Another reduction involves only four fresh variables and three clauses: R(¬x,a,b) ∧
R(b,y,c) ∧ R(c,d,¬z), see picture (right).

40.2.4 2-satisfiability

Main article: 2-satisfiability

SAT is easier if the number of literals in a clause is limited to at most 2, in which case the problem is called 2-SAT.
This problem can be solved in polynomial time, and in fact is complete for the complexity class NL. If additionally
all OR operations in literals are changed to XOR operations, the result is called exclusive-or 2-satisfiability, which
is a problem complete for the complexity class SL = L.

40.2.5 Horn-satisfiability

Main article: Horn-satisfiability

The problem of deciding the satisfiability of a given conjunction of Horn clauses is called Horn-satisfiability, or
HORN-SAT. It can be solved in polynomial time by a single step of the unit propagation algorithm, which produces
the single minimal model of the set of Horn clauses (w.r.t. the set of literals assigned to TRUE). Horn-satisfiability is
P-complete. It can be seen as P's version of the Boolean satisfiability problem. Also, deciding the truth of quantified
Horn formulas can be done in polynomial time.[10]
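A small Python sketch of HORN-SAT by unit propagation follows (a simplified quadratic illustration of the idea, not the linear-time algorithm):

def horn_sat(clauses):
    # Clauses are lists of +v / -v literals, each with at most one positive.
    # Returns the minimal set of variables forced TRUE, or None if unsatisfiable.
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l > 0 and l in true_vars for l in clause):
                continue                         # clause already satisfied
            undecided = [l for l in clause if not (l < 0 and -l in true_vars)]
            if not undecided:
                return None                      # empty clause: unsatisfiable
            if len(undecided) == 1 and undecided[0] > 0:
                true_vars.add(undecided[0])      # unit clause forces a variable
                changed = True
    return true_vars

# (x1) AND (NOT x1 OR x2) AND (NOT x2 OR NOT x3)
print(horn_sat([[1], [-1, 2], [-2, -3]]))        # {1, 2}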
Horn clauses are of interest because they are able to express implication of one variable from a set of other variables.
Indeed, one such clause ¬x1 ∨ ... ∨ ¬xn ∨ y can be rewritten as x1 ∧ ... ∧ xn → y; that is, if x1,...,xn are all TRUE,
then y needs to be TRUE as well.
A generalization of the class of Horn formulae is that of renamable-Horn formulae, which is the set of formulae that
can be placed in Horn form by replacing some variables with their respective negation. For example, (x1 ∨ ¬x2)
∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is not a Horn formula, but can be renamed to the Horn formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ ¬y3)
∧ ¬x1 by introducing y3 as the negation of x3. In contrast, no renaming of (x1 ∨ ¬x2 ∨ ¬x3) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1
leads to a Horn formula. Checking the existence of such a replacement can be done in linear time; therefore, the
satisfiability of such formulae is in P, as it can be solved by first performing this replacement and then checking the
satisfiability of the resulting Horn formula.

40.2.6 XOR-satisfiability

Another special case is the class of problems where each clause contains XOR (i.e. exclusive or) rather than (plain)
OR operators.[note 5] This is in P, since an XOR-SAT formula can also be viewed as a system of linear equations mod
2, and can be solved in cubic time by Gaussian elimination;[11] see the box for an example. This recast is based on
the kinship between Boolean algebras and Boolean rings, and the fact that arithmetic modulo two forms a finite field.
Since a XOR b XOR c evaluates to TRUE if and only if exactly 1 or 3 members of {a,b,c} are TRUE, each solution
of the 1-in-3-SAT problem for a given CNF formula is also a solution of the XOR-3-SAT problem, and in turn each
solution of XOR-3-SAT is a solution of 3-SAT, cf. picture. As a consequence, for each CNF formula, it is possible
to solve the XOR-3-SAT problem defined by the formula, and based on the result infer either that the 3-SAT problem
is solvable or that the 1-in-3-SAT problem is unsolvable.
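The linear-algebra view can be made concrete with a small Python sketch that solves an XOR-SAT instance by Gaussian elimination over GF(2); each clause is given here as a set of variable indices whose XOR must equal a right-hand-side bit (negated literals can be folded into that bit), an encoding chosen purely for illustration:

def xor_sat(rows, rhs, n):
    # Solve a GF(2) linear system; returns one solution as a bit list, or None.
    rows = [set(r) for r in rows]
    rhs = list(rhs)
    pivot_of = {}
    for i in range(len(rows)):
        for v, j in pivot_of.items():        # eliminate already-chosen pivots
            if v in rows[i]:
                rows[i] ^= rows[j]
                rhs[i] ^= rhs[j]
        if not rows[i]:
            if rhs[i]:
                return None                  # 0 = 1: unsatisfiable
            continue
        pivot_of[min(rows[i])] = i           # choose a pivot variable for this row
    x = [0] * (n + 1)                        # free variables default to 0
    for v, i in sorted(pivot_of.items(), reverse=True):
        x[v] = rhs[i] ^ (sum(x[w] for w in rows[i] if w != v) % 2)
    return x[1:]

# (x1 XOR x2 = 1) AND (x2 XOR x3 = 0) AND (x1 XOR x3 = 1)
print(xor_sat([{1, 2}, {2, 3}, {1, 3}], [1, 0, 1], n=3))   # e.g. [1, 0, 0]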
Provided that the complexity classes P and NP are not equal, neither 2-, nor Horn-, nor XOR-satisfiability is NP-complete,
unlike SAT.

40.2.7 Schaefer's dichotomy theorem

Main article: Schaefer's dichotomy theorem

The restrictions above (CNF, 2CNF, 3CNF, Horn, XOR-SAT) bound the considered formulae to be conjunctions
of subformulae; each restriction states a specific form for all subformulae: for example, only binary clauses can be
subformulae in 2CNF.
Schaefer's dichotomy theorem states that, for any restriction to Boolean operators that can be used to form these subformulae,
the corresponding satisfiability problem is in P or NP-complete. The membership in P of the satisfiability
of 2CNF, Horn, and XOR-SAT formulae are special cases of this theorem.

40.3 Extensions of SAT


An extension that has gained significant popularity since 2003 is satisfiability modulo theories (SMT), which can
enrich CNF formulas with linear constraints, arrays, all-different constraints, uninterpreted functions,[12] etc. Such
extensions typically remain NP-complete, but very efficient solvers are now available that can handle many such kinds
of constraints.
The satisfiability problem becomes more difficult if both "for all" (∀) and "there exists" (∃) quantifiers are allowed
to bind the Boolean variables. An example of such an expression would be ∀x ∀y ∃z (x ∨ y ∨ z) ∧ (¬x ∨ ¬y ∨ ¬z);
it is valid, since for all values of x and y, an appropriate value of z can be found, viz. z=TRUE if both x and y are
FALSE, and z=FALSE else. SAT itself (tacitly) uses only ∃ quantifiers. If only ∀ quantifiers are allowed instead, the
so-called tautology problem is obtained, which is co-NP-complete. If both quantifiers are allowed, the problem is
called the quantified Boolean formula problem (QBF), which can be shown to be PSPACE-complete. It is widely
believed that PSPACE-complete problems are strictly harder than any problem in NP, although this has not yet been
proved. Using highly parallel P systems, QBF-SAT problems can be solved in linear time.[13]
Ordinary SAT asks if there is at least one variable assignment that makes the formula true. A variety of variants deal
with the number of such assignments:

MAJ-SAT asks if the majority of all assignments make the formula TRUE. It is known to be complete for PP,
a probabilistic class.
#SAT, the problem of counting how many variable assignments satisfy a formula, is a counting problem, not a
decision problem, and is #P-complete.
UNIQUE-SAT is the problem of determining whether a formula has exactly one assignment. It is complete for
US, the complexity class describing problems solvable by a non-deterministic polynomial time Turing machine
that accepts when there is exactly one nondeterministic accepting path and rejects otherwise.
UNAMBIGUOUS-SAT is the name given to the satisfiability problem when the input is restricted to formulas
having at most one satisfying assignment. A solving algorithm for UNAMBIGUOUS-SAT is allowed to
exhibit any behavior, including endless looping, on a formula having several satisfying assignments. Although
this problem seems easier, Valiant and Vazirani have shown[14] that if there is a practical (i.e. randomized
polynomial-time) algorithm to solve it, then all problems in NP can be solved just as easily.
MAX-SAT, the maximum satisfiability problem, is an FNP generalization of SAT. It asks for the maximum
number of clauses which can be satisfied by any assignment. It has efficient approximation algorithms, but is
NP-hard to solve exactly. Worse still, it is APX-complete, meaning there is no polynomial-time approximation
scheme (PTAS) for this problem unless P=NP.

Other generalizations include satisfiability for first- and second-order logic, constraint satisfaction problems, and 0-1
integer programming.

40.4 Self-reducibility
The SAT problem is self-reducible, that is, each algorithm which correctly answers if an instance of SAT is solvable
can be used to find a satisfying assignment. First, the question is asked on the given formula Φ. If the answer is "no",
the formula is unsatisfiable. Otherwise, the question is asked on the partly instantiated formula Φ{x1=TRUE}, i.e. Φ
with the first variable x1 replaced by TRUE, and simplified accordingly. If the answer is "yes", then x1=TRUE,
otherwise x1=FALSE. Values of other variables can be found subsequently in the same way. In total, n+1 runs of the
algorithm are required, where n is the number of distinct variables in Φ.
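The self-reduction can be spelled out in a short Python sketch; the brute-force yes/no oracle and the CNF encoding below are illustrative stand-ins for an actual SAT solver:

from itertools import product

def sat_oracle(clauses, n):
    # Toy yes/no oracle: clauses are lists of +v / -v literals.
    return any(all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
               for bits in product([False, True], repeat=n))

def simplify(clauses, var, value):
    # Substitute var := value: drop satisfied clauses, trim falsified literals.
    return [[l for l in c if abs(l) != var]
            for c in clauses
            if not any(abs(l) == var and (l > 0) == value for l in c)]

def extract_assignment(clauses, n):
    # At most n+1 oracle calls turn yes/no answers into a satisfying assignment.
    if not sat_oracle(clauses, n):
        return None
    assignment = {}
    for v in range(1, n + 1):
        candidate = simplify(clauses, v, True)
        if sat_oracle(candidate, n):
            assignment[v], clauses = True, candidate
        else:
            assignment[v], clauses = False, simplify(clauses, v, False)
    return assignment

print(extract_assignment([[1, -2], [-1, 2], [2, 3]], 3))   # {1: True, 2: True, 3: True}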
This property of self-reducibility is used in several theorems in complexity theory:

NP ⊆ P/poly ⇒ PH = Σ2 (Karp–Lipton theorem)

NP ⊆ BPP ⇒ NP = RP
P = NP ⇒ FP = FNP

40.5 Algorithms for solving SAT


Since the SAT problem is NP-complete, only algorithms with exponential worst-case complexity are known for it. In
spite of this, efficient and scalable algorithms for SAT were developed over the last decade and have contributed to
dramatic advances in our ability to automatically solve problem instances involving tens of thousands of variables and
millions of constraints (i.e. clauses).[1] Examples of such problems in electronic design automation (EDA) include
formal equivalence checking, model checking, formal verification of pipelined microprocessors,[12] automatic test

pattern generation, routing of FPGAs,[15] planning, and scheduling problems, and so on. A SAT-solving engine is
now considered to be an essential component in the EDA toolbox.
There are two classes of high-performance algorithms for solving instances of SAT in practice: the Conflict-Driven
Clause Learning algorithm, which can be viewed as a modern variant of the DPLL algorithm (well known implementations
include Chaff[16] and GRASP[17]), and stochastic local search algorithms, such as WalkSAT.
A DPLL SAT solver employs a systematic backtracking search procedure to explore the (exponentially sized) space
of variable assignments looking for satisfying assignments. The basic search procedure was proposed in two seminal
papers in the early 1960s (see references below) and is now commonly referred to as the Davis–Putnam–Logemann–
Loveland algorithm ("DPLL" or "DLL").[18][19] Theoretically, exponential lower bounds have been proved for the
DPLL family of algorithms.
In contrast, randomized algorithms like the PPSZ algorithm by Paturi, Pudlak, Saks, and Zane set variables in a
random order according to some heuristics, for example bounded-width resolution. If the heuristic can't find the
correct setting, the variable is assigned randomly. The PPSZ algorithm has a runtime of O(2^(0.386n)) for 3-SAT with
a single satisfying assignment. Currently this is the best-known runtime for this problem. In the setting with many
satisfying assignments the randomized algorithm by Schöning has a better bound.[6][20]
Modern SAT solvers (developed in the last ten years) come in two flavors: "conflict-driven" and "look-ahead".
Conflict-driven solvers augment the basic DPLL search algorithm with efficient conflict analysis, clause learning,
non-chronological backtracking (a.k.a. backjumping), as well as "two-watched-literals" unit propagation, adaptive
branching, and random restarts. These "extras" to the basic systematic search have been empirically shown to be essential
for handling the large SAT instances that arise in electronic design automation (EDA).[21] Look-ahead solvers
have especially strengthened reductions (going beyond unit-clause propagation) and the heuristics, and they are generally
stronger than conflict-driven solvers on hard instances (while conflict-driven solvers can be much better on large
instances which actually have an easy instance inside).
Modern SAT solvers are also having a significant impact on the fields of software verification, constraint solving in
artificial intelligence, and operations research, among others. Powerful solvers are readily available as free and open
source software. In particular, the conflict-driven MiniSAT, which was relatively successful at the 2005 SAT competition,
only has about 600 lines of code. A modern parallel SAT solver is ManySAT. It can achieve super linear
speed-ups on important classes of problems. An example of a look-ahead solver is march_dl, which won a prize at
the 2007 SAT competition.
Certain types of large random satisfiable instances of SAT can be solved by survey propagation (SP). Particularly
in hardware design and verification applications, satisfiability and other logical properties of a given propositional
formula are sometimes decided based on a representation of the formula as a binary decision diagram (BDD).
Almost all SAT solvers include time-outs, so they will terminate in reasonable time even if they cannot find a solution.
Different SAT solvers will find different instances easy or hard, and some excel at proving unsatisfiability, and others
at finding solutions. All of these behaviors can be seen in the SAT solving contests.[22]

40.6 See also


Unsatisfiable core
Satisfiability modulo theories
Counting SAT
Karloff–Zwick algorithm
Circuit satisfiability

40.7 Notes
[1] The SAT problem for arbitrary formulas is NP-complete, too, since it is easily shown to be in NP, and it cannot be easier
than SAT for CNF formulas.

[2] i.e. at least as hard as every other problem in NP. A decision problem is NP-complete if and only if it is in NP and is
NP-hard.

[3] i.e. such that one literal is not the negation of the other

[4] viz. all maxterms that can be built with d1,…,dk, except d1∨…∨dk

[5] Formally, generalized conjunctive normal forms with a ternary boolean operator R are employed, which is TRUE just if 1
or 3 of its arguments is. An input clause with more than 3 literals can be transformed into an equisatisfiable conjunction of
clauses with 3 literals similar to above; i.e. XOR-SAT can be reduced to XOR-3-SAT.

40.8 References
[1] Ohrimenko, Olga; Stuckey, Peter J.; Codish, Michael (2007), Propagation = Lazy Clause Generation, Principles and
Practice of Constraint Programming – CP 2007, Lecture Notes in Computer Science, 4741, pp. 544–558, doi:10.1007/978-
3-540-74970-7_39, modern SAT solvers can often handle problems with millions of constraints and hundreds of thousands
of variables.

[2] Cook, Stephen A. (1971). The Complexity of Theorem-Proving Procedures (PDF). Proceedings of the 3rd Annual ACM
Symposium on Theory of Computing: 151–158. doi:10.1145/800157.805047.

[3] Levin, Leonid (1973). Universal search problems (Russian: Универсальные переборные задачи, Universal'nye perebornye
zadachi). Problems of Information Transmission (Russian: Проблемы передачи информации, Problemy Peredachi In-
formatsii). 9 (3): 115–116. (pdf) (in Russian), translated into English by Trakhtenbrot, B. A. (1984). A survey of
Russian approaches to perebor (brute-force searches) algorithms. Annals of the History of Computing. 6 (4): 384–400.
doi:10.1109/MAHC.1984.10036.

[4] Alfred V. Aho; John E. Hopcroft; Jerey D. Ullman (1974). The Design and Analysis of Computer Algorithms. Addison-
Wesley.; here: Thm.10.4

[5] Aho, Hopcroft, Ullman[4] (1974); Thm.10.5

[6] Schöning, Uwe (Oct 1999). A Probabilistic Algorithm for k-SAT and Constraint Satisfaction Problems. Proc. 40th Ann.
Symp. Foundations of Computer Science (PDF). pp. 410–414. doi:10.1109/SFFCS.1999.814612.

[7] Bart Selman; David Mitchell; Hector Levesque (1996). Generating Hard Satisfiability Problems. Artificial Intelligence.
81: 17–29. doi:10.1016/0004-3702(95)00045-3.

[8] Schaefer, Thomas J. (1978). The complexity of satisfiability problems (PDF). Proceedings of the 10th Annual ACM
Symposium on Theory of Computing. San Diego, California. pp. 216–226.

[9] (Schaefer, 1978), p.222, Lemma 3.5

[10] Büning, H.K.; Karpinski, Marek; Flögel, A. (1995). Resolution for Quantified Boolean Formulas. Information and
Computation. Elsevier. 117 (1): 12–18. doi:10.1006/inco.1995.1025.

[11] Moore, Cristopher; Mertens, Stephan (2011), The Nature of Computation, Oxford University Press, p. 366, ISBN 9780199233212.

[12] R. E. Bryant, S. M. German, and M. N. Velev, Microprocessor Verification Using Efficient Decision Procedures for a Logic
of Equality with Uninterpreted Functions, in Analytic Tableaux and Related Methods, pp. 1–13, 1999.

[13] Alhazov, Artiom; Martín-Vide, Carlos; Pan, Linqiang (2003). Solving a PSPACE-Complete Problem by Recognizing P
Systems with Restricted Active Membranes. Fundamenta Informaticae. 58: 67–77.

[14] Valiant, L.; Vazirani, V. (1986). NP is as easy as detecting unique solutions (PDF). Theoretical Computer Science. 47:
85–93. doi:10.1016/0304-3975(86)90135-0.

[15] Gi-Joon Nam; Sakallah, K. A.; Rutenbar, R. A. (2002). A new FPGA detailed routing approach via search-based
Boolean satisfiability (PDF). IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 21 (6):
674. doi:10.1109/TCAD.2002.1004311.

[16] Moskewicz, M. W.; Madigan, C. F.; Zhao, Y.; Zhang, L.; Malik, S. (2001). Chaff: Engineering an Efficient SAT Solver
(PDF). Proceedings of the 38th conference on Design automation (DAC). p. 530. ISBN 1581132972. doi:10.1145/378239.379017.

[17] Marques-Silva, J. P.; Sakallah, K. A. (1999). GRASP: a search algorithm for propositional satisfiability (PDF). IEEE
Transactions on Computers. 48 (5): 506. doi:10.1109/12.769433.

[18] Davis, M.; Putnam, H. (1960). A Computing Procedure for Quantification Theory. Journal of the ACM. 7 (3): 201.
doi:10.1145/321033.321034.

[19] Davis, M.; Logemann, G.; Loveland, D. (1962). A machine program for theorem-proving (PDF). Communications of
the ACM. 5 (7): 394–397. doi:10.1145/368273.368557.
[20] An improved exponential-time algorithm for k-SAT, Paturi, Pudlák, Saks, Zane
[21] Vizel, Y.; Weissenbacher, G.; Malik, S. (2015). Boolean Satisfiability Solvers and Their Applications in Model Checking.
Proceedings of the IEEE. 103 (11). doi:10.1109/JPROC.2015.2455034.
[22] The international SAT Competitions web page. Retrieved 2007-11-15.

References are ordered by date of publication:

Michael R. Garey & David S. Johnson (1979). Computers and Intractability: A Guide to the Theory of NP-
Completeness. W.H. Freeman. ISBN 0-7167-1045-5. A9.1: LO1–LO7, pp. 259–260.
Marques-Silva, J.; Glass, T. (1999). Combinational equivalence checking using satisfiability and recursive
learning (PDF). Design, Automation and Test in Europe Conference and Exhibition, 1999. Proceedings (Cat.
No. PR00078). p. 145. ISBN 0-7695-0078-1. doi:10.1109/DATE.1999.761110.
Clarke, E.; Biere, A.; Raimi, R.; Zhu, Y. (2001). Bounded Model Checking Using Satisfiability Solving.
Formal Methods in System Design. 19: 7. doi:10.1023/A:1011276507260.
Giunchiglia, E.; Tacchella, A. (2004). Giunchiglia, Enrico; Tacchella, Armando, eds. Theory and Appli-
cations of Satisfiability Testing. Lecture Notes in Computer Science. 2919. ISBN 978-3-540-20851-8.
doi:10.1007/b95238.
Babic, D.; Bingham, J.; Hu, A. J. (2006). B-Cubing: New Possibilities for Efficient SAT-Solving (PDF).
IEEE Transactions on Computers. 55 (11): 1315. doi:10.1109/TC.2006.175.
Rodriguez, C.; Villagra, M.; Baran, B. (2007). Asynchronous team algorithms for Boolean Satisfiability
(PDF). 2007 2nd Bio-Inspired Models of Network, Information and Computing Systems. p. 66. doi:10.1109/BIMNICS.2007.46100
Carla P. Gomes; Henry Kautz; Ashish Sabharwal; Bart Selman (2008). Satisfiability Solvers. In Frank Van
Harmelen; Vladimir Lifschitz; Bruce Porter. Handbook of knowledge representation. Foundations of Artificial
Intelligence. 3. Elsevier. pp. 89–134. ISBN 978-0-444-52211-5. doi:10.1016/S1574-6526(07)03002-7.
Vizel, Y.; Weissenbacher, G.; Malik, S. (2015). Boolean Satisfiability Solvers and Their Applications in
Model Checking. Proceedings of the IEEE. 103 (11). doi:10.1109/JPROC.2015.2455034.

40.9 External links

40.9.1 SAT problem format


A SAT problem is often described in the DIMACS-CNF format: an input file in which each line represents a single
disjunction. For example, a file with the two lines
1 -5 4 0
-1 5 3 4 0
represents the formula "(x1 ∨ ¬x5 ∨ x4) ∧ (¬x1 ∨ x5 ∨ x3 ∨ x4)".
Another common format for this formula is the 7-bit ASCII representation "(x1 | ~x5 | x4) & (~x1 | x5 | x3 | x4)".
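A minimal sketch of how such a file could be read, assuming only the conventions stated above (a clause is a run of nonzero literals ended by 0; "c" comment lines and the "p cnf" header carry no clause data). The function name is illustrative.

```python
# Sketch: parse DIMACS-CNF text into a list of clauses (lists of signed ints).
def parse_dimacs(text):
    clauses, current = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(('c', 'p')):
            continue                      # skip comments and the problem header
        for token in line.split():
            literal = int(token)
            if literal == 0:
                clauses.append(current)   # 0 terminates the current clause
                current = []
            else:
                current.append(literal)
    return clauses

print(parse_dimacs("1 -5 4 0\n-1 5 3 4 0"))
# [[1, -5, 4], [-1, 5, 3, 4]]
```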

BCSAT is a tool that converts input les in human-readable format to the DIMACS-CNF format.

40.9.2 Online SAT solvers


BoolSAT – Solves formulas in the DIMACS-CNF format or in a more human-friendly format: "a and not b or
a". Runs on a server.
Logictools – Provides different solvers in JavaScript for learning, comparison and hacking. Runs in the browser.
minisat-in-your-browser – Solves formulas in the DIMACS-CNF format. Runs in the browser.
SATRennesPA – Solves formulas written in a user-friendly way. Runs on a server.
somerby.net/mack/logic – Solves formulas written in symbolic logic. Runs in the browser.

40.9.3 Offline SAT solvers

MiniSAT – DIMACS-CNF format.

Lingeling – won a gold medal in a 2011 SAT competition.

PicoSAT – an earlier solver from the Lingeling group.

Sat4j – DIMACS-CNF format. Java source code available.

Glucose – DIMACS-CNF format.

RSat – won a gold medal in a 2007 SAT competition.

UBCSAT – Supports unweighted and weighted clauses, both in the DIMACS-CNF format. C source code
hosted on GitHub.

CryptoMiniSat – won a gold medal in a 2011 SAT competition. C++ source code hosted on GitHub. Tries to
put many useful features of MiniSat 2.0 core, PrecoSat ver 236, and Glucose into one package, adding many
new features.

Spear – Supports bit-vector arithmetic. Can use the DIMACS-CNF format or the Spear format.

HyperSAT – Written to experiment with B-cubing search space pruning. Won 3rd place in a 2005 SAT
competition. An earlier and slower solver from the developers of Spear.

BASolver

ArgoSAT

Fast SAT Solver – based on genetic algorithms.

zChaff – not supported anymore.

BCSAT – human-readable boolean circuit format (also converts this format to the DIMACS-CNF format and
automatically links to MiniSAT or zChaff).

gini – Golang SAT solver with related tools.

40.9.4 SAT applications

WinSAT v2.04: A Windows-based SAT application made particularly for researchers.

40.9.5 Conferences

International Conference on Theory and Applications of Satisfiability Testing

40.9.6 Publications

Journal on Satisfiability, Boolean Modeling and Computation

Survey Propagation

40.9.7 Benchmarks
Forced Satisfiable SAT Benchmarks
SATLIB

Software Verification Benchmarks


Fadi Aloul SAT Benchmarks

SAT solving in general:

http://www.satlive.org

http://www.satisfiability.org

40.9.8 Evaluation of SAT solvers


Yearly evaluation of SAT solvers

SAT solvers evaluation results for 2008


International SAT Competitions

History

More information on SAT:

SAT and MAX-SAT for the Lay-researcher

This article includes material from a column in the ACM SIGDA e-newsletter by Prof. Karem Sakallah
Original text is available here
Chapter 41

Boolean-valued function

A Boolean-valued function (sometimes called a predicate or a proposition) is a function of the type f : X → B, where
X is an arbitrary set and where B is a Boolean domain, i.e. a generic two-element set (for example B = {0, 1}), whose
elements are interpreted as logical values, for example, 0 = false and 1 = true, i.e., a single bit of information.
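As a concrete (and deliberately trivial) illustration, the following sketch treats the predicate "is even" on the integers as a Boolean-valued function f : X → B with B = {False, True}; the choice of X and of the predicate is ours, made only for the example.

```python
# A Boolean-valued function f : X -> B with B = {False, True}.
# Here X is the integers and f is the "is even" predicate (illustrative choice).

def is_even(n: int) -> bool:
    return n % 2 == 0

# The induced characteristic/indicator view: the subset of X where f is true.
evens_up_to_10 = {n for n in range(11) if is_even(n)}
print(evens_up_to_10)  # {0, 2, 4, 6, 8, 10}
```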
In the formal sciences, mathematics, mathematical logic, statistics, and their applied disciplines, a Boolean-valued
function may also be referred to as a characteristic function, indicator function, predicate, or proposition. In all of
these uses it is understood that the various terms refer to a mathematical object and not the corresponding semiotic
sign or syntactic expression.
In formal semantic theories of truth, a truth predicate is a predicate on the sentences of a formal language, interpreted
for logic, that formalizes the intuitive concept that is normally expressed by saying that a sentence is true. A truth
predicate may have additional domains beyond the formal language domain, if that is what is required to determine
a final truth value.

41.1 References
Brown, Frank Markham (2003), Boolean Reasoning: The Logic of Boolean Equations, 1st edition, Kluwer
Academic Publishers, Norwell, MA. 2nd edition, Dover Publications, Mineola, NY, 2003.

Kohavi, Zvi (1978), Switching and Finite Automata Theory, 1st edition, McGraw-Hill, 1970. 2nd edition,
McGraw-Hill, 1978.

Korfhage, Robert R. (1974), Discrete Computational Structures, Academic Press, New York, NY.

Mathematical Society of Japan, Encyclopedic Dictionary of Mathematics, 2nd edition, 2 vols., Kiyosi Itô (ed.),
MIT Press, Cambridge, MA, 1993. Cited as EDM.

Minsky, Marvin L., and Papert, Seymour, A. (1988), Perceptrons, An Introduction to Computational Geometry,
MIT Press, Cambridge, MA, 1969. Revised, 1972. Expanded edition, 1988.

41.2 See also


Bit

Boolean data type

Boolean algebra (logic)

Boolean domain

Boolean logic

Propositional calculus


Truth table

Logic minimization
Indicator function

Predicate
Proposition

Finitary boolean function


Boolean function
Chapter 42

Boolean-valued model

In mathematical logic, a Boolean-valued model is a generalization of the ordinary Tarskian notion of structure from
model theory. In a Boolean-valued model, the truth values of propositions are not limited to true and false, but
instead take values in some fixed complete Boolean algebra.
Boolean-valued models were introduced by Dana Scott, Robert M. Solovay, and Petr Vopěnka in the 1960s in order to
help understand Paul Cohen's method of forcing. They are also related to Heyting algebra semantics in intuitionistic
logic.

42.1 Definition
Fix a complete Boolean algebra B[1] and a first-order language L; the signature of L will consist of a collection of
constant symbols, function symbols, and relation symbols.
A Boolean-valued model for the language L consists of a universe M, which is a set of elements (or names), together
with interpretations for the symbols. Specifically, the model must assign to each constant symbol of L an element of
M, and to each n-ary function symbol f of L and each n-tuple ⟨a0,...,an⟩ of elements of M, the model must assign
an element of M to the term f(a0,...,an).
Interpretation of the atomic formulas of L is more complicated. To each pair a and b of elements of M, the model
must assign a truth value ||a=b|| to the expression a=b; this truth value is taken from the Boolean algebra B. Similarly,
for each n-ary relation symbol R of L and each n-tuple ⟨a0,...,an⟩ of elements of M, the model must assign an
element of B to be the truth value ||R(a0,...,an)||.

42.2 Interpretation of other formulas and sentences


The truth values of the atomic formulas can be used to reconstruct the truth values of more complicated formulas,
using the structure of the Boolean algebra. For propositional connectives, this is easy; one simply applies the cor-
responding Boolean operators to the truth values of the subformulae. For example, if φ(x) and ψ(y,z) are formulas
with one and two free variables, respectively, and if a, b, c are elements of the model's universe to be substituted for
x, y, and z, then the truth value of

φ(a) ∧ ψ(b, c)

is simply

||φ(a) ∧ ψ(b, c)|| = ||φ(a)|| ∧ ||ψ(b, c)||

The completeness of the Boolean algebra is required to define truth values for quantified formulas. If φ(x) is a formula
with free variable x (and possibly other free variables that are suppressed), then



||∃x φ(x)|| = ⋁_{a∈M} ||φ(a)||,

where the right-hand side is to be understood as the supremum in B of the set of all truth values ||φ(a)|| as a ranges
over M.
The truth value of a formula is sometimes referred to as its probability. However, these are not probabilities in the
ordinary sense, because they are not real numbers, but rather elements of the complete Boolean algebra B.
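A small sketch of this evaluation, assuming B is the Boolean algebra of all subsets of a finite set (so meets are intersections, joins are unions, and the supremum over a finite universe M is just a finite union). The universe, the predicate, and all names below are illustrative.

```python
# Sketch: truth values in B = all subsets of a finite set W, with meet = &,
# join = |, 0 = empty set, 1 = W. Over a finite universe M the supremum in
# the quantifier clause reduces to a finite join, so completeness is automatic.

from functools import reduce

W = frozenset({0, 1, 2})      # B = powerset of W
TOP, BOT = W, frozenset()

def meet(a, b): return a & b  # ||phi and psi|| = ||phi|| meet ||psi||
def join(a, b): return a | b

M = ['a', 'b']                # a finite universe of names

# An illustrative B-valued predicate: each ||phi(m)|| is an element of B.
phi = {'a': frozenset({0, 1}), 'b': frozenset({2})}

# ||exists x phi(x)|| = join of the ||phi(m)||; ||forall x phi(x)|| = meet.
exists_phi = reduce(join, (phi[m] for m in M), BOT)
forall_phi = reduce(meet, (phi[m] for m in M), TOP)

print(exists_phi == TOP)  # True: the existential has truth value 1
print(forall_phi == BOT)  # True: the universal has truth value 0
```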

42.3 Boolean-valued models of set theory


Given a complete Boolean algebra B[1] there is a Boolean-valued model denoted by V^B, which is the Boolean-valued
analogue of the von Neumann universe V. (Strictly speaking, V^B is a proper class, so we need to reinterpret what it
means to be a model appropriately.) Informally, the elements of V^B are Boolean-valued sets. Given an ordinary set
A, every set either is or is not a member; but given a Boolean-valued set, every set has a certain, fixed probability of
being a member of A. Again, the probability is an element of B, not a real number. The concept of Boolean-valued
sets resembles, but is not the same as, the notion of a fuzzy set.
The (probabilistic) elements of the Boolean-valued set, in turn, are also Boolean-valued sets, whose elements are
also Boolean-valued sets, and so on. In order to obtain a non-circular definition of Boolean-valued set, they are
defined inductively in a hierarchy similar to the cumulative hierarchy. For each ordinal α of V, the set V^B_α is defined
as follows.

V^B_0 is the empty set.

V^B_{α+1} is the set of all functions from V^B_α to B. (Such a function represents a probabilistic subset of V^B_α;
if f is such a function, then for any x ∈ V^B_α, f(x) is the probability that x is in the set.)

If α is a limit ordinal, V^B_α is the union of V^B_β for β < α.

The class V^B is defined to be the union of all sets V^B_α.


It is also possible to relativize this entire construction to some transitive model M of ZF (or sometimes a fragment
thereof). The Boolean-valued model M^B is obtained by applying the above construction inside M. The restriction to
transitive models is not serious, as the Mostowski collapsing theorem implies that every reasonable (well-founded,
extensional) model is isomorphic to a transitive one. (If the model M is not transitive things get messier, as M's
interpretation of what it means to be a function or an ordinal may differ from the external interpretation.)
Once the elements of V^B have been defined as above, it is necessary to define B-valued relations of equality and
membership on V^B. Here a B-valued relation on V^B is a function from V^B × V^B to B. To avoid confusion with the usual
equality and membership, these are denoted by ||x=y|| and ||x∈y|| for x and y in V^B. They are defined as follows:

||x∈y|| is defined to be ⋁_{t∈Dom(y)} ( ||x=t|| ∧ y(t) )   ("x is in y if it is equal to something in y").

||x=y|| is defined to be ||x⊆y|| ∧ ||y⊆x||   ("x equals y if x and y are both subsets of each other"), where
||x⊆y|| is defined to be ⋀_{t∈Dom(x)} ( x(t) ⇒ ||t∈y|| )   ("x is a subset of y if all elements of x are in y")

The symbols ⋁ and ⋀ denote the least upper bound and greatest lower bound operations, respectively, in the complete
Boolean algebra B. At first sight the definitions above appear to be circular: ||∈|| depends on ||=||, which depends on
||⊆||, which depends on ||∈||. However, a close examination shows that the definition of ||∈|| only depends on ||∈||
for elements of smaller rank, so ||∈|| and ||=|| are well-defined functions from V^B × V^B to B.
It can be shown that the B-valued relations ||∈|| and ||=|| on V^B make V^B into a Boolean-valued model of set theory.
Each sentence of first-order set theory with no free variables has a truth value in B; it must be shown that the axioms
for equality and all the axioms of ZF set theory (written without free variables) have truth value 1 (the largest element
of B). This proof is straightforward, but it is long because there are many different axioms that need to be checked.
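The recursion in the definitions above is easier to see on a toy example. The following sketch computes ||x∈y|| and ||x=y|| for a few hand-built B-valued sets, taking B to be a powerset algebra and encoding the Boolean implication a ⇒ b as ¬a ∨ b; the classes and the sample sets are illustrative, not part of the standard construction.

```python
# Sketch: the B-valued relations ||x in y|| and ||x = y|| for finite B-valued
# sets, with B the powerset of W (meet = &, join = |, complement taken in W).

from functools import reduce

W = frozenset({0, 1})
BOT, TOP = frozenset(), W

def implies(a, b):
    # Boolean implication a => b, encoded as (not a) or b in the powerset algebra.
    return (W - a) | b

class BSet:
    """A B-valued set: a map from previously built B-valued sets to elements of B."""
    def __init__(self, members):
        self.members = members  # dict {BSet: frozenset (element of B)}

def mem(x, y):
    # ||x in y|| = join over t in dom(y) of (||x = t|| meet y(t))
    return reduce(frozenset.__or__,
                  (eq(x, t) & b for t, b in y.members.items()), BOT)

def subset(x, y):
    # ||x subset-of y|| = meet over t in dom(x) of (x(t) => ||t in y||)
    return reduce(frozenset.__and__,
                  (implies(b, mem(t, y)) for t, b in x.members.items()), TOP)

def eq(x, y):
    # ||x = y|| = ||x subset-of y|| meet ||y subset-of x||
    return subset(x, y) & subset(y, x)

empty = BSet({})                        # the name for the empty set
x = BSet({empty: frozenset({0})})       # "empty is in x" with truth value {0}
y = BSet({empty: TOP})                  # "empty is in y" with truth value 1

print(eq(empty, empty) == TOP)          # True
print(mem(empty, y) == TOP)             # True
print(mem(empty, x) == frozenset({0}))  # True
```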

42.4 Relationship to forcing


Set theorists use a technique called forcing to obtain independence results and to construct models of set theory for
other purposes. The method was originally developed by Paul Cohen but has been greatly extended since then. In
one form, forcing adds to the universe a generic subset of a poset, the poset being designed to impose interesting
properties on the newly added object. The wrinkle is that (for interesting posets) it can be proved that there simply is
no such generic subset of the poset. There are three usual ways of dealing with this:

syntactic forcing A forcing relation p ⊩ φ is defined between elements p of the poset and formulas φ of
the forcing language. This relation is defined syntactically and has no semantics; that is, no model is ever
produced. Rather, starting with the assumption that ZFC (or some other axiomatization of set theory) proves
the independent statement, one shows that ZFC must also be able to prove a contradiction. However, the
forcing is "over V"; that is, it is not necessary to start with a countable transitive model. See Kunen (1980) for
an exposition of this method.
countable transitive models One starts with a countable transitive model M of as much of set theory as is
needed for the desired purpose, and that contains the poset. Then there do exist filters on the poset that are
generic over M; that is, that meet all dense open subsets of the poset that happen also to be elements of M.
fictional generic objects Commonly, set theorists will simply pretend that the poset has a subset that is generic
over all of V. This generic object, in nontrivial cases, cannot be an element of V, and therefore does not really
exist. (Of course, it is a point of philosophical contention whether any sets really exist, but that is outside the
scope of the current discussion.) Perhaps surprisingly, with a little practice this method is useful and reliable,
but it can be philosophically unsatisfying.

42.4.1 Boolean-valued models and syntactic forcing


Boolean-valued models can be used to give semantics to syntactic forcing; the price paid is that the semantics is not
2-valued (true or false), but assigns truth values from some complete Boolean algebra. Given a forcing poset P,
there is a corresponding complete Boolean algebra B, often obtained as the collection of regular open subsets of P,
where the topology on P is defined by declaring all lower sets open (and all upper sets closed). (Other approaches to
constructing B are discussed below.)
Now the order on B (after removing the zero element) can replace P for forcing purposes, and the forcing relation
can be interpreted semantically by saying that, for p an element of B and φ a formula of the forcing language,

p ⊩ φ  if and only if  p ≤ ||φ||

where ||φ|| is the truth value of φ in V^B.


This approach succeeds in assigning a semantics to forcing over V without resorting to fictional generic objects. The
disadvantages are that the semantics is not 2-valued, and that the combinatorics of B are often more complicated than
those of the underlying poset P.

42.4.2 Boolean-valued models and generic objects over countable transitive models
One interpretation of forcing starts with a countable transitive model M of ZF set theory, a partially ordered set P,
and a generic subset G of P, and constructs a new model of ZF set theory from these objects. (The conditions that
the model be countable and transitive simplify some technical problems, but are not essential.) Cohen's construction
can be carried out using Boolean-valued models as follows.

Construct a complete Boolean algebra B as the complete Boolean algebra generated by the poset P.
Construct an ultrafilter U on B (or equivalently a homomorphism from B to the Boolean algebra {true, false})
from the generic subset G of P.
Use the homomorphism from B to {true, false} to turn the Boolean-valued model M^B of the section above into
an ordinary model of ZF.

We now explain these steps in more detail.


For any poset P there is a complete Boolean algebra B and a map e from P to B+ (the non-zero elements of B) such
that the image is dense, e(p) ≤ e(q) whenever p ≤ q, and e(p)∧e(q) = 0 whenever p and q are incompatible. This Boolean
algebra is unique up to isomorphism. It can be constructed as the algebra of regular open sets in the topological space
of P (with underlying set P, and a base given by the sets Up of elements q with q ≤ p).
The map from the poset P to the complete Boolean algebra B is not injective in general. The map is injective if and
only if P has the following property: if every r ≤ p is compatible with q, then p ≤ q.
The ultrafilter U on B is defined to be the set of elements b of B that are greater than some element of (the image
of) G. Given an ultrafilter U on a Boolean algebra, we get a homomorphism to {true, false} by mapping U to true
and its complement to false. Conversely, given such a homomorphism, the inverse image of true is an ultrafilter, so
ultrafilters are essentially the same as homomorphisms to {true, false}. (Algebraists might prefer to use maximal
ideals instead of ultrafilters: the complement of an ultrafilter is a maximal ideal, and conversely the complement of a
maximal ideal is an ultrafilter.)
If g is a homomorphism from a Boolean algebra B to a Boolean algebra C and M^B is any B-valued model of ZF (or
of any other theory for that matter) we can turn M^B into a C-valued model by applying the homomorphism g to
the value of all formulas. In particular if C is {true, false} we get a {true, false}-valued model. This is almost the
same as an ordinary model: in fact we get an ordinary model on the set of equivalence classes under ||=|| of a {true,
false}-valued model. So we get an ordinary model of ZF set theory by starting from M, a Boolean algebra B, and
an ultrafilter U on B. (The model of ZF constructed like this is not transitive. In practice one applies the Mostowski
collapsing theorem to turn this into a transitive model.)
We have seen that forcing can be done using Boolean-valued models, by constructing a Boolean algebra with ultrafilter
from a poset with a generic subset. It is also possible to go back the other way: given a Boolean algebra B, we can
form a poset P of all the nonzero elements of B, and a generic ultrafilter on B restricts to a generic set on P. So the
techniques of forcing and Boolean-valued models are essentially equivalent.

42.5 Notes
[1] B here is assumed to be nondegenerate; that is, 0 and 1 must be distinct elements of B. Authors writing on Boolean-valued
models typically take this requirement to be part of the definition of Boolean algebra, but authors writing on Boolean
algebras in general often do not.

42.6 References
Bell, J. L. (1985) Boolean-Valued Models and Independence Proofs in Set Theory, Oxford. ISBN 0-19-853241-
5
Grishin, V.N. (2001) [1994], Boolean-valued model (b/b016990), in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer
Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Jech, Thomas (2002). Set theory, third millennium edition (revised and expanded). Springer. ISBN 3-540-
44085-2. OCLC 174929965.
Kunen, Kenneth (1980). Set Theory: An Introduction to Independence Proofs. North-Holland. ISBN 0-444-
85401-0. OCLC 12808956.

Kusraev, A. G. and S. S. Kutateladze (1999). Boolean Valued Analysis. Kluwer Academic Publishers. ISBN
0-7923-5921-6. OCLC 41967176. Contains an account of Boolean-valued models and applications to Riesz
spaces, Banach spaces and algebras.
Manin, Yu. I. (1977). A Course in Mathematical Logic. Springer. ISBN 0-387-90243-0. OCLC 2797938.
Contains an account of forcing and Boolean-valued models written for mathematicians who are not set theorists.
Rosser, J. Barkley (1969). Simplified Independence Proofs, Boolean valued models of set theory. Academic
Press.
Chapter 43

Booleo

Booleo is a strategy card game using boolean logic gates. It was developed by Jonathan Brandt and Chris Kampf
with Sean P. Dennis in 2008, and it was first published by Tessera Games LLC in 2009.[1]

43.1 Game
The deck consists of 64 cards:

48 Gate cards using three Boolean operators AND, OR, and XOR

8 OR cards resolving to 1
8 OR cards resolving to 0
8 AND cards resolving to 1
8 AND cards resolving to 0
8 XOR cards resolving to 1
8 XOR cards resolving to 0

8 NOT cards

6 Initial Binary cards, each displaying a 0 and a 1 aligned to the two short ends of the card

2 Truth Tables (used for reference, not in play)

43.2 Play
Starting with a line of Initial Binary cards laid perpendicular to two facing players, the object of the game is to be
the first to complete a logical pyramid whose final output equals that of the rightmost Initial Binary card facing that
player.
The game is played in draw one, play one format. The pyramid consists of decreasing rows of gate cards, where the
outputs of any contiguous pair of cards comprise the input values to a single card in the following row. The pyramid,
therefore, has Initial Binary values as its base and tapers to a single card closest to the player. By tracing the flow
of values through any series of gates, every card placed in the pyramid must make logical sense, i.e. the inputs and
output value of every gate card must conform to the rule of that gate card.
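A tiny sketch of that rule check: given a gate card's operator, its two inputs, and the value it resolves to, verify that the operator applied to the inputs actually yields that value. The encoding of a card as (operator, inputs, output) is an assumption made only for illustration.

```python
# Sketch: check that a placed gate card is logically consistent, i.e. that
# op(left_input, right_input) equals the value the card resolves to.

OPS = {
    'AND': lambda a, b: a & b,
    'OR':  lambda a, b: a | b,
    'XOR': lambda a, b: a ^ b,
}

def card_is_valid(op, left, right, resolves_to):
    return OPS[op](left, right) == resolves_to

print(card_is_valid('XOR', 1, 0, 1))  # True: 1 XOR 0 resolves to 1
print(card_is_valid('AND', 1, 0, 1))  # False: an invalid placement
```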
The NOT cards are played against any of the Initial Binary cards in play, causing that card to be rotated 180 degrees,
literally flipping the value of that card from 0 to 1 or vice versa.
By changing the value of any Initial Binary, any and all gate cards which flow from it must be re-evaluated to ensure
its placement makes logical sense. If it does not, that gate card is removed from the player's pyramid.


Since both players' pyramids share the Initial Binary cards as a base, flipping an Initial Binary has an effect on both
players' pyramids. A principal strategy during game play is to invalidate gate cards in the opponent's logic pyramid
while doing as little damage as possible to one's own pyramid in the process.
Some logic gates are more robust than others to a change to their inputs. Therefore, not all logic gate cards have the
same strategic value.
The standard edition of the game does not contain NAND, NOR, or XNOR gates. It is possible, therefore, for a
player to arrive at an unresolvable pair of inputs.[2]

43.3 Variations
The number of cards in bOOleO will comfortably support a match between two players whose logic pyramids are six
cards wide at their base. By combining decks, it is possible to construct larger pyramids or to have matches among
more than two players. For example:

Four players may play individually or as facing teams by arranging a cross of Initial Binary cards,
where four logic pyramids extend like compass points in four directions
Four or more players may build partially overlapping pyramids from a long base of Initial Binary
cards

Tessera Games also published bOOleO-N Edition, which is identical to bOOleO with the exception that it uses the
inverse set of logic gates: NAND, NOR, and XNOR. bOOleO-N Edition may be played on its own, or it may be
combined with bOOleO.

43.4 References
[1] http://boardgamegeek.com/boardgame/40943/booleo

[2] Somma, Ryan. A Game of Boolean Logic Gates with an Ambiguous Spelling. Geeking Out. Retrieved 9 August 2017.
Chapter 44

Bounded quantifier

This article is about bounded quantification in mathematical logic. For bounded quantification in type theory, see
Bounded quantification.

In the study of formal theories in mathematical logic, bounded quantifiers are often included in a formal language
in addition to the standard quantifiers "∀" and "∃". Bounded quantifiers differ from "∀" and "∃" in that bounded
quantifiers restrict the range of the quantified variable. The study of bounded quantifiers is motivated by the fact that
determining whether a sentence with only bounded quantifiers is true is often not as difficult as determining whether
an arbitrary sentence is true.
Examples of bounded quantifiers in the context of real analysis include "∀x>0", "∃y<0", and "∀x∈ℝ". Informally,
"∀x>0" says "for all x where x is larger than 0", "∃y<0" says "there exists a y where y is less than 0" and "∀x∈ℝ" says
"for all x where x is a real number". For example, "∀x>0 ∃y<0 (x = y²)" says "every positive number is the square of
a negative number".

44.1 Bounded quantifiers in arithmetic


Suppose that L is the language of Peano arithmetic (the language of second-order arithmetic or arithmetic in all finite
types would work as well). There are two types of bounded quantifiers: ∀n < t and ∃n < t. These quantifiers
bind the number variable n and contain a numeric term t which may not mention n but which may have other free
variables. ("Numeric terms" here means terms such as "1 + 1", "2", "2 × 3", "m + 3", etc.)
These quantifiers are defined by the following rules (φ denotes formulas):

∀n < t φ  ⇔  ∀n (n < t → φ)

∃n < t φ  ⇔  ∃n (n < t ∧ φ)

There are several motivations for these quantifiers.

In applications of the language to recursion theory, such as the arithmetical hierarchy, bounded quantifiers add
no complexity. If φ is a decidable predicate then ∀n < t φ and ∃n < t φ are decidable as well.

In applications to the study of Peano arithmetic, the fact that a particular set can be defined with only bounded
quantifiers can have consequences for the computability of the set. For example, there is a definition of primality
using only bounded quantifiers: a number n is prime if and only if there are not two numbers strictly less than
n whose product is n (see the sketch after this list). There is no quantifier-free definition of primality in the language
⟨0, 1, +, ·, <, =⟩, however. The fact that there is a bounded-quantifier formula defining primality shows that the primality of each
number can be computably decided.
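As flagged above, the bounded-quantifier definition of primality translates directly into a brute-force check in which each bounded quantifier becomes a loop over numbers below n; the n > 1 guard handles 0 and 1, which the quoted definition leaves implicit.

```python
# Sketch: primality via the bounded-quantifier definition:
# n is prime iff n > 1 and there are no a < n and b < n with a * b = n.
# The nested loops are exactly the bounded quantifiers "exists a < n", "exists b < n".

def is_prime(n: int) -> bool:
    if n <= 1:
        return False
    return not any(a * b == n for a in range(n) for b in range(n))

print([n for n in range(20) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```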

In general, a relation on natural numbers is definable by a bounded formula if and only if it is computable in the
linear-time hierarchy, which is defined similarly to the polynomial hierarchy, but with linear time bounds instead of
polynomial. Consequently, all predicates definable by a bounded formula are Kalmár elementary, context-sensitive,
and primitive recursive.
In the arithmetical hierarchy, an arithmetical formula which contains only bounded quantifiers is called Σ⁰₀, Δ⁰₀, and
Π⁰₀. The superscript 0 is sometimes omitted.

44.2 Bounded quantifiers in set theory


Suppose that L is the language ⟨∈, . . . , =⟩ of Zermelo–Fraenkel set theory, where the ellipsis may be replaced
by term-forming operations such as a symbol for the powerset operation. There are two bounded quantifiers: ∀x ∈ t
and ∃x ∈ t. These quantifiers bind the set variable x and contain a term t which may not mention x but which may
have other free variables.
The semantics of these quantifiers is determined by the following rules:

∀x ∈ t (φ)  ⇔  ∀x (x ∈ t → φ)

∃x ∈ t (φ)  ⇔  ∃x (x ∈ t ∧ φ)
A ZF formula which contains only bounded quantifiers is called Σ₀, Δ₀, and Π₀. This forms the basis of the Lévy
hierarchy, which is defined analogously with the arithmetical hierarchy.
Bounded quantifiers are important in Kripke–Platek set theory and constructive set theory, where only Δ₀ separation
is included. That is, it includes separation for formulas with only bounded quantifiers, but not separation for other
formulas. In KP the motivation is the fact that whether a set x satisfies a bounded quantifier formula only depends on
the collection of sets that are close in rank to x (as the powerset operation can only be applied finitely many times to
form a term). In constructive set theory, it is motivated on predicative grounds.

44.3 See also


Subtyping – bounded quantification in type theory
System F<: – a polymorphic typed lambda calculus with bounded quantification

44.4 References
Hinman, P. (2005). Fundamentals of Mathematical Logic. A K Peters. ISBN 1-56881-262-0.

Kunen, K. (1980). Set theory: An introduction to independence proofs. Elsevier. ISBN 0-444-86839-9.
Chapter 45

Branching quantifier

In logic a branching quantifier,[1] also called a Henkin quantifier, finite partially ordered quantifier or even
nonlinear quantifier, is a partial ordering[2]

Qx1 . . . Qxn

of quantifiers for Q ∈ {∀, ∃}. It is a special case of generalized quantifier. In classical logic, quantifier prefixes are
linearly ordered such that the value of a variable ym bound by a quantifier Qm depends on the value of the variables

y1, ..., ym−1

bound by quantifiers

Qy1, ..., Qym−1

preceding Qm. In a logic with (finite) partially ordered quantification this is not in general the case.
Branching quantification first appeared in a 1959 conference paper of Leon Henkin.[3] Systems of partially ordered
quantification are intermediate in strength between first-order logic and second-order logic. They are being used as a
basis for Hintikka's and Gabriel Sandu's independence-friendly logic.

45.1 Definition and properties


The simplest Henkin quantifier QH is

(QH x1, x2, y1, y2) φ(x1, x2, y1, y2)  ≡  (∀x1 ∃y1 / ∀x2 ∃y2) φ(x1, x2, y1, y2),

where the prefix written here as (∀x1 ∃y1 / ∀x2 ∃y2) stands for the 2×2 array with the row ∀x1 ∃y1 above the row ∀x2 ∃y2, the two rows being incomparable in the partial order.
It (in fact every formula with a Henkin prefix, not just the simplest one) is equivalent to its second-order Skolemization,
i.e.

∃f ∃g ∀x1 ∀x2 φ(x1, x2, f(x1), g(x2))
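Over a finite domain, the Skolemized form can be checked by brute force: enumerate all functions f and g and test the matrix for every x1, x2. The domain and the sample matrix φ below are illustrative choices, and elements of the domain double as list indices purely for convenience.

```python
# Sketch: brute-force check of the Henkin prefix over a finite domain D via
# its Skolemization: exists f exists g forall x1 forall x2 phi(x1, x2, f(x1), g(x2)).

from itertools import product

D = [0, 1, 2]

def phi(x1, x2, y1, y2):
    # Sample matrix: the branching statement holds here with f = g = identity,
    # since then y1 + y2 == x1 + x2 for all x1, x2.
    return y1 + y2 == x1 + x2

def henkin_holds(phi, D):
    # All functions D -> D, represented as tuples indexed by the elements of D
    # (this assumes the elements of D are 0 .. len(D)-1).
    functions = list(product(D, repeat=len(D)))
    return any(
        all(phi(x1, x2, f[x1], g[x2]) for x1 in D for x2 in D)
        for f in functions for g in functions
    )

print(henkin_holds(phi, D))  # True
```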


It is also powerful enough to define the quantifier QN (i.e. "there are infinitely many"), defined as

(QN x) φ(x)  ≡  ∃a (QH x1, x2, y1, y2)[φ(a) ∧ (x1 = x2 ↔ y1 = y2) ∧ (φ(x1) → (φ(y1) ∧ y1 ≠ a))]


Several things follow from this, including the nonaxiomatizability of first-order logic with QH (first observed by
Ehrenfeucht), and its equivalence to the Σ¹₁-fragment of second-order logic (existential second-order logic); the
latter result was published independently in 1970 by Herbert Enderton[4] and W. Walkoe.[5]
The following quantifiers are also definable by QH.[2]


Rescher: "The number of φs is less than or equal to the number of ψs"

(QL x)(φx, ψx)  ≡  Card({x : φx}) ≤ Card({x : ψx})  ≡  (QH x1 x2 y1 y2)[(x1 = x2 ↔ y1 = y2) ∧ (φx1 → ψy1)]

Härtig: "The φs are equinumerous with the ψs"

(QI x)(φx, ψx)  ≡  (QL x)(φx, ψx) ∧ (QL x)(ψx, φx)

Chang: "The number of φs is equinumerous with the domain of the model"

(QC x)(φx)  ≡  (QL x)(x = x, φx)


The Henkin quantifier QH can itself be expressed as a type (4) Lindström quantifier.[2]

45.2 Relation to natural languages


Hintikka in a 1973 paper[6] advanced the hypothesis that some sentences in natural languages are best understood in
terms of branching quantifiers, for example: "some relative of each villager and some relative of each townsman hate
each other" is supposed to be interpreted, according to Hintikka, as:[7][8]

(∀x1 ∃y1 / ∀x2 ∃y2) [(V(x1) ∧ T(x2)) → (R(x1, y1) ∧ R(x2, y2) ∧ H(y1, y2) ∧ H(y2, y1))]

which is known to have no first-order logic equivalent.[7]


The idea of branching is not necessarily restricted to using the classical quantifiers as leaves. In a 1979 paper,[9]
Jon Barwise proposed variations of Hintikka sentences (as the above is sometimes called) in which the inner quan-
tifiers are themselves generalized quantifiers, for example: "Most villagers and most townsmen hate each other."[7]
Observing that Σ¹₁ is not closed under negation, Barwise also proposed a practical test to determine whether natu-
ral language sentences really involve branching quantifiers, namely to test whether their natural-language negation
involves universal quantification over a set variable (a Π¹₁ sentence).[10]
Hintikka's proposal was met with skepticism by a number of logicians because some first-order sentences like the one
below appear to capture well enough the natural language Hintikka sentence.

[∀x1 ∃y1 ∀x2 ∃y2 φ(x1, x2, y1, y2)] ∧ [∀x2 ∃y2 ∀x1 ∃y1 φ(x1, x2, y1, y2)], where

φ(x1, x2, y1, y2) denotes (V(x1) ∧ T(x2)) → (R(x1, y1) ∧ R(x2, y2) ∧ H(y1, y2) ∧ H(y2, y1))

Although much purely theoretical debate followed, it wasn't until 2009 that some empirical tests with students trained
in logic found that they are more likely to assign models matching the bidirectional first-order sentence rather than the
branching-quantifier sentence to several natural-language constructs derived from the Hintikka sentence. For instance,
students were shown undirected bipartite graphs, with squares and circles as vertices, and asked to say whether
sentences like "more than 3 circles and more than 3 squares are connected by lines" were correctly describing the
diagrams.[7]

45.3 See also


Game semantics
Dependence logic
Independence-friendly logic (IF logic)
Mostowski quantifier
Lindström quantifier
Nonfirstorderizability

45.4 References
[1] Stanley Peters; Dag Westerståhl (2006). Quantifiers in language and logic. Clarendon Press. pp. 66–72. ISBN 978-0-19-
929125-0.

[2] Antonio Badia (2009). Quantifiers in Action: Generalized Quantification in Query, Logical and Natural Languages. Springer.
p. 74–76. ISBN 978-0-387-09563-9.

[3] Henkin, L. Some Remarks on Infinitely Long Formulas. Infinitistic Methods: Proceedings of the Symposium on Founda-
tions of Mathematics, Warsaw, 2–9 September 1959, Państwowe Wydawnictwo Naukowe and Pergamon Press, Warsaw,
1961, pp. 167–183. OCLC 2277863

[4] Jaakko Hintikka and Gabriel Sandu, Game-theoretical semantics, in Handbook of logic and language, ed. J. van Benthem
and A. ter Meulen, Elsevier 2011 (2nd ed.) citing Enderton, H.B., 1970. Finite partially-ordered quantifiers. Z. Math.
Logik Grundlag. Math. 16, 393–397. doi:10.1002/malq.19700160802.

[5] Blass, A.; Gurevich, Y. (1986). Henkin quantifiers and complete problems (PDF). Annals of Pure and Applied Logic.
32: 1. doi:10.1016/0168-0072(86)90040-0. citing W. Walkoe, Finite partially-ordered quantification, J. Symbolic Logic
35 (1970) 535–555. JSTOR 2271440

[6] Hintikka, J. (1973). Quantifiers vs. Quantification Theory. Dialectica. 27 (3–4): 329–358. doi:10.1111/j.1746-
8361.1973.tb00624.x.

[7] Gierasimczuk, N.; Szymanik, J. (2009). Branching Quantification v. Two-way Quantification (PDF). Journal of Seman-
tics. 26 (4): 367. doi:10.1093/jos/ffp008.

[8] Sher, G. (1990). Ways of branching quantifiers. Linguistics and Philosophy. 13 (4): 393–422. doi:10.1007/BF00630749.

[9] Barwise, J. (1979). On branching quantifiers in English. Journal of Philosophical Logic. 8: 47–80. doi:10.1007/BF00258419.

[10] Hand, Michael (1998). The Journal of Symbolic Logic. 63 (4). Association for Symbolic Logic: 1611–1614. JSTOR
2586678.

45.5 External links


Game-theoretical quantifier at PlanetMath.
Chapter 46

Canonical normal form

In Boolean algebra, any Boolean function can be put into the canonical disjunctive normal form (CDNF) or
minterm canonical form and its dual canonical conjunctive normal form (CCNF) or maxterm canonical form.
Other canonical forms include the complete sum of prime implicants or Blake canonical form (and its dual), and the
algebraic normal form (also called Zhegalkin or Reed–Muller).
Minterms are called products because they are the logical AND of a set of variables, and maxterms are called sums
because they are the logical OR of a set of variables. These concepts are dual because of their complementary-
symmetry relationship as expressed by De Morgan's laws.
Two dual canonical forms of any Boolean function are a sum of minterms and a product of maxterms. The term
"Sum of Products" or "SoP" is widely used for the canonical form that is a disjunction (OR) of minterms. Its De
Morgan dual is a "Product of Sums" or "PoS" for the canonical form that is a conjunction (AND) of maxterms.
These forms can be useful for the simplification of these functions, which is of great importance in the optimization
of Boolean formulas in general and digital circuits in particular.

46.1 Summary

One application of Boolean algebra is digital circuit design. The goal may be to minimize the number of gates, to
minimize the settling time, etc.
There are sixteen possible functions of two variables, but in digital logic hardware, the simplest gate circuits implement
only four of them: conjunction (AND), disjunction (inclusive OR), and the respective complements of those (NAND
and NOR).
Most gate circuits accept more than 2 input variables; for example, the spaceborne Apollo Guidance Computer, which
pioneered the application of integrated circuits in the 1960s, was built with only one type of gate, a 3-input NOR,
whose output is true only when all 3 inputs are false.[1]

46.2 Minterms

For a boolean function of n variables x1 , . . . , xn , a product term in which each of the n variables appears once (in
either its complemented or uncomplemented form) is called a minterm. Thus, a minterm is a logical expression of n
variables that employs only the complement operator and the conjunction operator.
For example, abc′, ab′c and abc are 3 examples of the 8 minterms for a Boolean function of the three variables a,
b, and c. The customary reading of the first of these is a AND b AND NOT-c.
There are 2^n minterms of n variables, since a variable in the minterm expression can be in either its direct or its
complemented form, giving two choices per variable.


46.2.1 Indexing minterms


Minterms are often numbered by a binary encoding of the complementation pattern of the variables, where the
variables are written in a standard order, usually alphabetical. This convention assigns the value 1 to the direct form
(xi) and 0 to the complemented form (xi′); the minterm number is then Σ_{i=1}^{n} value(xi)·2^{n−i}. For example, minterm abc′ is
numbered 110₂ = 6₁₀ and denoted m6.

46.2.2 Functional equivalence


A given minterm n gives a true value (i.e., 1) for just one combination of the input variables. For example, minterm
5, a b′ c, is true only when a and c both are true and b is false: the input arrangement where a = 1, b = 0, c = 1 results
in 1.
Given the truth table of a logical function, it is possible to write the function as a sum of products. This is a special
form of disjunctive normal form. For example, if given the truth table for the arithmetic sum bit u of one bit position's
logic of an adder circuit, as a function of x and y from the addends and the carry in, ci:
Observing that the rows that have an output of 1 are the 2nd, 3rd, 5th, and 8th, we can write u as a sum of minterms
m1, m2, m4, and m7. If we wish to verify this: u(ci, x, y) = m1 + m2 + m4 + m7 = (ci′, x′, y) + (ci′, x, y′) +
(ci, x′, y′) + (ci, x, y), evaluated for all 8 combinations of the three variables, will match the table.
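The verification suggested above can be done mechanically. The sketch below encodes the minterms by index (variable order ci, x, y) and checks that the sum m1 + m2 + m4 + m7 agrees with the adder's sum bit, here written as (ci + x + y) mod 2, on all 8 rows.

```python
from itertools import product

def minterm(index, ci, x, y):
    # m_index is 1 on exactly one row: the one whose bits (ci, x, y) encode index.
    return int(4 * ci + 2 * x + y == index)

def u_canonical(ci, x, y):
    # u = m1 + m2 + m4 + m7 (logical OR of the four minterms)
    return max(minterm(i, ci, x, y) for i in (1, 2, 4, 7))

assert all(u_canonical(ci, x, y) == (ci + x + y) % 2
           for ci, x, y in product((0, 1), repeat=3))
print("sum of minterms m1+m2+m4+m7 matches the sum bit u on all 8 rows")
```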

46.3 Maxterms
For a boolean function of n variables x1 , . . . , xn , a sum term in which each of the n variables appears once (in
either its complemented or uncomplemented form) is called a maxterm. Thus, a maxterm is a logical expression
of n variables that employs only the complement operator and the disjunction operator. Maxterms are a dual of the
minterm idea (i.e., exhibiting a complementary symmetry in all respects). Instead of using ANDs and complements,
we use ORs and complements and proceed similarly.
For example, the following are two of the eight maxterms of three variables:

a + b' + c
a' + b + c

There are again 2^n maxterms of n variables, since a variable in the maxterm expression can also be in either its direct
or its complemented form, giving two choices per variable.

46.3.1 Indexing maxterms


Each maxterm is assigned an index based on the opposite conventional binary encoding used for minterms. The
maxterm convention assigns the value 0 to the direct form (xi ) and 1 to the complemented form (xi ) . For example,
we assign the index 6 to the maxterm a + b + c (110) and denote that maxterm as M 6 . Similarly M 0 of these three
variables is a + b + c (000) and M 7 is a + b + c (111).

46.3.2 Functional equivalence


It is apparent that maxterm n gives a false value (i.e., 0) for just one combination of the input variables. For example,
maxterm 5, a' + b + c', is false only when a and c both are true and b is false: the input arrangement where a = 1, b
= 0, c = 1 results in 0.
If one is given a truth table of a logical function, it is possible to write the function as a product of sums. This is
a special form of conjunctive normal form. For example, if given the truth table for the carry-out bit co of one bit
position's logic of an adder circuit, as a function of x and y from the addends and the carry in, ci:
Observing that the rows that have an output of 0 are the 1st, 2nd, 3rd, and 5th, we can write co as a product of
maxterms M0 , M1 , M2 and M4 . If we wish to verify this: co(ci, x, y) = M0 M1 M2 M4 = (ci + x + y) (ci + x + y')
(ci + x' + y) (ci' + x + y) evaluated for all 8 combinations of the three variables will match the table.
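The analogous mechanical check for the maxterm form: the product M0·M1·M2·M4 should agree with the adder's carry-out on all 8 rows, where the carry-out of a full adder is the majority of the three inputs.

```python
from itertools import product

def maxterm(index, ci, x, y):
    # M_index is 0 on exactly one row: the one whose bits (ci, x, y) encode index.
    return int(4 * ci + 2 * x + y != index)

def co_canonical(ci, x, y):
    # co = M0 * M1 * M2 * M4 (logical AND of the four maxterms)
    return min(maxterm(i, ci, x, y) for i in (0, 1, 2, 4))

assert all(co_canonical(ci, x, y) == int(ci + x + y >= 2)
           for ci, x, y in product((0, 1), repeat=3))
print("product of maxterms M0*M1*M2*M4 matches the carry-out co on all 8 rows")
```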

46.4 Dualization
The complement of a minterm is the respective maxterm. This can be easily verified by using de Morgan's law. For
example: M5 = a′ + b + c′ = (a·b′·c)′ = m5′

46.5 Non-canonical PoS and SoP forms


It is often the case that the canonical minterm form can be simplified to an equivalent SoP form. This simplified form
would still consist of a sum of product terms. However, in the simplified form, it is possible to have fewer product
terms and/or product terms that contain fewer variables. For example, the following 3-variable function:
has the canonical minterm representation: f = a′bc + abc, but it has an equivalent simplified form: f = bc. In this
trivial example, it is obvious that bc = a′bc + abc, but the simplified form has both fewer product terms, and the
term has fewer variables. The most simplified SoP representation of a function is referred to as a minimal SoP form.
In a similar manner, a canonical maxterm form can have a simplified PoS form.
While this example was easily simplified by applying normal algebraic methods [ f = (a′ + a)bc ], in less obvious
cases a convenient method for finding the minimal PoS/SoP form of a function with up to four variables is using a
Karnaugh map.
The minimal PoS and SoP forms are very important for finding optimal implementations of boolean functions and
minimizing logic circuits.
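The claimed equivalence is easy to confirm exhaustively; the sketch below evaluates both forms on all 8 assignments of (a, b, c).

```python
from itertools import product

def canonical(a, b, c):
    return ((1 - a) & b & c) | (a & b & c)   # a'bc + abc

def simplified(a, b, c):
    return b & c                             # bc

assert all(canonical(a, b, c) == simplified(a, b, c)
           for a, b, c in product((0, 1), repeat=3))
print("a'bc + abc equals bc on all 8 assignments")
```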

46.6 Application example


The sample truth tables for minterms and maxterms above are sufficient to establish the canonical form for a single
bit position in the addition of binary numbers, but are not sufficient to design the digital logic unless your inventory
of gates includes AND and OR. Where performance is an issue (as in the Apollo Guidance Computer), the available
parts are more likely to be NAND and NOR because of the complementing action inherent in transistor logic. The
values are defined as voltage states, one near ground and one near the DC supply voltage V, e.g. +5 VDC. If the
higher voltage is defined as the 1 true value, a NOR gate is the simplest possible useful logical element.
Specifically, a 3-input NOR gate may consist of 3 bipolar junction transistors with their emitters all grounded, their
collectors tied together and linked to V through a load impedance. Each base is connected to an input signal, and the
common collector point presents the output signal. Any input that is a 1 (high voltage) to its base shorts its transistor's
emitter to its collector, causing current to flow through the load impedance, which brings the collector voltage (the
output) very near to ground. That result is independent of the other inputs. Only when all 3 input signals are 0 (low
voltage) do the emitter-collector impedances of all 3 transistors remain very high. Then very little current flows, and
the voltage-divider effect with the load impedance imposes on the collector point a high voltage very near to V.
The complementing property of these gate circuits may seem like a drawback when trying to implement a function in
canonical form, but there is a compensating bonus: such a gate with only one input implements the complementing
function, which is required frequently in digital logic.
This example assumes the Apollo parts inventory: 3-input NOR gates only, but the discussion is simplified by sup-
posing that 4-input NOR gates are also available (in Apollo, those were compounded out of pairs of 3-input NORs).

46.6.1 Canonical and non-canonical consequences of NOR gates

Fact #1: a set of 8 NOR gates, if their inputs are all combinations of the direct and complement forms of the 3
input variables ci, x, and y, always produce minterms, never maxterms; that is, of the 8 gates required to process
all combinations of 3 input variables, only one has the output value 1. That's because a NOR gate, despite its name,
could better be viewed (using De Morgan's law) as the AND of the complements of its input signals.
Fact #2: the reason Fact #1 is not a problem is the duality of minterms and maxterms, i.e. each maxterm is the
complement of the like-indexed minterm, and vice versa.
In the minterm example above, we wrote u(ci, x, y) = m1 + m2 + m4 + m7 but to perform this with a 4-input
NOR gate we need to restate it as a product of sums (PoS), where the sums are the opposite maxterms. That is,
u(ci, x, y) = AND( M0 , M3 , M5 , M6 ) = NOR( m0 , m3 , m5 , m6 ). Truth tables:
In the maxterm example above, we wrote co(ci, x, y) = M0 M1 M2 M4 but to perform this with a 4-input NOR gate
we need to notice the equality to the NOR of the same minterms. That is,
co(ci, x, y) = AND( M0 , M1 , M2 , M4 ) = NOR( m0 , m1 , m2 , m4 ). Truth tables:
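Both restatements can be sanity-checked by modelling a NOR gate as a function that outputs 1 exactly when all of its inputs are 0, and feeding it the minterm outputs row by row; the encoding below is an illustrative sketch.

```python
from itertools import product

def nor(*inputs):
    # A NOR gate outputs 1 exactly when every input is 0.
    return int(not any(inputs))

def minterm(index, ci, x, y):
    return int(4 * ci + 2 * x + y == index)

for ci, x, y in product((0, 1), repeat=3):
    u = nor(*(minterm(i, ci, x, y) for i in (0, 3, 5, 6)))
    co = nor(*(minterm(i, ci, x, y) for i in (0, 1, 2, 4)))
    assert u == (ci + x + y) % 2          # u = NOR(m0, m3, m5, m6)
    assert co == int(ci + x + y >= 2)     # co = NOR(m0, m1, m2, m4)
print("NOR-of-minterms forms reproduce u and co on all 8 rows")
```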

46.6.2 Design trade-offs considered in addition to canonical forms


One might suppose that the work of designing an adder stage is now complete, but we haven't addressed the fact that
all 3 of the input variables have to appear in both their direct and complement forms. There's no difficulty about the
addends x and y in this respect, because they are static throughout the addition and thus are normally held in latch
circuits that routinely have both direct and complement outputs. (The simplest latch circuit made of NOR gates is a
pair of gates cross-coupled to make a flip-flop: the output of each is wired as one of the inputs to the other.) There is
also no need to create the complement form of the sum u. However, the carry out of one bit position must be passed
as the carry into the next bit position in both direct and complement forms. The most straightforward way to do this
is to pass co through a 1-input NOR gate and label the output co', but that would add a gate delay in the worst possible
place, slowing down the rippling of carries from right to left. An additional 4-input NOR gate building the canonical
form of co' (out of the opposite minterms as co) solves this problem.

co'(ci, x, y) = AND(M3, M5, M6, M7) = NOR(m3, m5, m6, m7).

Truth tables:
The trade-off to maintain full speed in this way includes an unexpected cost (in addition to having to use a bigger
gate). If we'd just used that 1-input gate to complement co, there would have been no use for the minterm m7, and
the gate that generated it could have been eliminated. Nevertheless, it's still a good trade.
Now we could have implemented those functions exactly according to their SoP and PoS canonical forms, by turning
NOR gates into the functions specified. A NOR gate is made into an OR gate by passing its output through a 1-input
NOR gate; and it is made into an AND gate by passing each of its inputs through a 1-input NOR gate. However,
this approach not only increases the number of gates used, but also doubles the number of gate delays processing the
signals, cutting the processing speed in half. Consequently, whenever performance is vital, going beyond canonical
forms and doing the Boolean algebra to make the unenhanced NOR gates do the job is well worthwhile.

46.6.3 Top-down vs. bottom-up design


We have now seen how the minterm/maxterm tools can be used to design an adder stage in canonical form with the
addition of some Boolean algebra, costing just 2 gate delays for each of the outputs. That's the top-down way to
design the digital circuit for this function, but is it the best way? The discussion has focused on identifying fastest as
best, and the augmented canonical form meets that criterion flawlessly, but sometimes other factors predominate.
The designer may have a primary goal of minimizing the number of gates, and/or of minimizing the fanouts of signals
to other gates since big fanouts reduce resilience to a degraded power supply or other environmental factors. In such
a case, a designer may develop the canonical-form design as a baseline, then try a bottom-up development, and finally
compare the results.
The bottom-up development involves noticing that u = ci XOR (x XOR y), where XOR means eXclusive OR [true
when either input is true but not when both are true], and that co = ci x + x y + y ci. One such development takes
twelve NOR gates in all: six 2-input gates and two 1-input gates to produce u in 5 gate delays, plus three 2-input
gates and one 3-input gate to produce co' in 2 gate delays. The canonical baseline took eight 3-input NOR gates plus
three 4-input NOR gates to produce u, co and co' in 2 gate delays. If the circuit inventory actually includes 4-input
NOR gates, the top-down canonical design looks like a winner in both gate count and speed. But if (contrary to our
convenient supposition) the circuits are actually 3-input NOR gates, of which two are required for each 4-input NOR
function, then the canonical design takes 14 gates compared to 12 for the bottom-up approach, but still produces the
sum digit u considerably faster. The fanout comparison is tabulated as:
What's a decision-maker to do? An observant one will have noticed that the description of the bottom-up development
mentions co' as an output but not co. Does that design simply never need the direct form of the carry out? Well, yes
and no. At each stage, the calculation of co' depends only on ci', x' and y', which means that the carry propagation
ripples along the bit positions just as fast as in the canonical design without ever developing co. The calculation of u,
which does require ci to be made from ci' by a 1-input NOR, is slower but for any word length the design only pays
that penalty once (when the leftmost sum digit is developed). That's because those calculations overlap, each in what
amounts to its own little pipeline without affecting when the next bit position's sum bit can be calculated. And, to be
sure, the co' out of the leftmost bit position will probably have to be complemented as part of the logic determining
whether the addition overflowed. But using 3-input NOR gates, the bottom-up design is very nearly as fast for doing
parallel addition on a non-trivial word length, cuts down on the gate count, and uses lower fanouts ... so it wins if gate
count and/or fanout are paramount!
We'll leave the exact circuitry of the bottom-up design of which all these statements are true as an exercise for the
interested reader, assisted by one more algebraic formula: u = [ci(x XOR y) + ci'(x XOR y)']'. Decoupling the carry
propagation from the sum formation in this way is what elevates the performance of a carry-lookahead adder over
that of a ripple carry adder.
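The following short Python sketch is an illustration added here, not the article's actual circuit: it brute-force checks that the formulas quoted above, u = ci XOR (x XOR y) and co = ci x + x y + y ci, agree with ordinary binary addition, and that co' can indeed be formed from the complemented inputs ci', x', y' with three 2-input NORs feeding one 3-input NOR, consistent with the gate count described in the text.

def nor(*inputs):
    # n-input NOR gate: 1 only when every input is 0
    return int(not any(inputs))

def reference(x, y, ci):
    # sum and carry-out of a full adder, straight from arithmetic
    total = x + y + ci
    return total & 1, total >> 1          # (u, co)

def formulas(x, y, ci):
    # the Boolean formulas quoted in the text
    u = ci ^ (x ^ y)                      # ci XOR (x XOR y)
    co = (ci & x) | (x & y) | (y & ci)    # ci x + x y + y ci
    return u, co

def carry_out_complement(xn, yn, cin):
    # co' from complemented inputs: NOR(ci x, x y, y ci), each AND built as a 2-input NOR
    return nor(nor(cin, xn), nor(xn, yn), nor(yn, cin))

for x in (0, 1):
    for y in (0, 1):
        for ci in (0, 1):
            u, co = reference(x, y, ci)
            assert formulas(x, y, ci) == (u, co)
            assert carry_out_complement(1 - x, 1 - y, 1 - ci) == 1 - co
print("all 8 input combinations agree")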
To see how NOR gate logic was used in the Apollo Guidance Computer's ALU, visit http://klabs.org/history/ech/
agc_schematics/index.htm, select any of the 4-BIT MODULE entries in the Index to Drawings, and expand images
as desired.

46.7 See also


Algebraic normal form
Canonical form

Blake canonical form

List of Boolean algebra topics

46.8 Footnotes
[1] Hall, Eldon C. (1996). Journey to the Moon: The History of the Apollo Guidance Computer. AIAA. ISBN 1-56347-185-X.

46.9 References
Bender, Edward A.; Williamson, S. Gill (2005). A Short Course in Discrete Mathematics. Mineola, NY: Dover
Publications, Inc. ISBN 0-486-43946-1.
The authors demonstrate a proof that any Boolean (logic) function can be expressed in either disjunctive or
conjunctive normal form (cf pages 5–6); the proof simply proceeds by creating all 2^N rows of N Boolean
variables and demonstrates that each row (minterm or maxterm) has a unique Boolean expression. Any
Boolean function of the N variables can be derived from a composite of the rows whose minterm or maxterm
are logical 1s (trues)

McCluskey, E. J. (1965). Introduction to the Theory of Switching Circuits. NY: McGrawHill Book Company.
p. 78. LCCN 65-17394. Canonical expressions are defined and described

Hill, Fredrick J.; Peterson, Gerald R. (1974). Introduction to Switching Theory and Logical Design (2nd ed.).
NY: John Wiley & Sons. p. 101. ISBN 0-471-39882-9. Minterm and maxterm designation of functions

46.10 External links


Boole, George (1848). Translated by Wilkins, David R. "The Calculus of Logic". Cambridge and Dublin
Mathematical Journal. III: 183–198.
Chapter 47

Cantor algebra

For the algebras encoding a bijection from an infinite set X onto the product X × X, sometimes called Cantor algebras,
see Jónsson–Tarski algebra.

In mathematics, a Cantor algebra, named after Georg Cantor, is one of two closely related Boolean algebras, one
countable and one complete.
The countable Cantor algebra is the Boolean algebra of all clopen subsets of the Cantor set. This is the free Boolean
algebra on a countable number of generators. Up to isomorphism, this is the only nontrivial Boolean algebra that is
both countable and atomless.
The complete Cantor algebra is the complete Boolean algebra of Borel subsets of the reals modulo meager sets (Balcar
& Jech 2006). It is isomorphic to the completion of the countable Cantor algebra. (The complete Cantor algebra is
sometimes called the Cohen algebra, though "Cohen algebra" usually refers to a different type of Boolean algebra.)
The complete Cantor algebra was studied by von Neumann in 1935 (later published as (von Neumann 1998)), who
showed that it is not isomorphic to the random algebra of Borel subsets modulo measure zero sets.

47.1 References
Balcar, Bohuslav; Jech, Thomas (2006), Weak distributivity, a problem of von Neumann and the mystery of
measurability, Bulletin of Symbolic Logic, 12 (2): 241266, MR 2223923

von Neumann, John (1998) [1960], Continuous geometry, Princeton Landmarks in Mathematics, Princeton
University Press, ISBN 978-0-691-05893-1, MR 0120174

Chapter 48

Chaff algorithm

Chaff is an algorithm for solving instances of the Boolean satisfiability problem in programming. It was designed
by researchers at Princeton University, United States. The algorithm is an instance of the DPLL algorithm with a
number of enhancements for efficient implementation.

48.1 Implementations
Some available implementations of the algorithm in software are mChaff and zChaff, the latter one being the most
widely known and used. zChaff was originally written by Dr. Lintao Zhang, now at Microsoft Research, hence the
"z". It is now maintained by researchers at Princeton University and available for download as both source code and
binaries on Linux. zChaff is free for non-commercial use.
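As a small illustration (not part of the original text), SAT solvers in this family consume propositional CNF in the standard DIMACS format, where each positive integer names a variable, a negative integer names its negation, and 0 ends a clause. The helper below, with an arbitrary file name chosen for the example, writes (x1 OR NOT x2) AND (x2 OR x3) in that format so a DIMACS-compatible solver can read it.

def write_dimacs(clauses, num_vars, path):
    # clauses: list of clauses, each a list of signed variable indices
    with open(path, "w") as f:
        f.write(f"p cnf {num_vars} {len(clauses)}\n")
        for clause in clauses:
            f.write(" ".join(str(lit) for lit in clause) + " 0\n")

# (x1 OR NOT x2) AND (x2 OR x3)
write_dimacs([[1, -2], [2, 3]], num_vars=3, path="example.cnf")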

48.2 References
M. Moskewicz, C. Madigan, Y. Zhao, L. Zhang, S. Malik. "Chaff: Engineering an Efficient SAT Solver", 39th
Design Automation Conference (DAC 2001), Las Vegas, ACM 2001.
Vizel, Y.; Weissenbacher, G.; Malik, S. (2015). "Boolean Satisfiability Solvers and Their Applications in
Model Checking". Proceedings of the IEEE. 103 (11). doi:10.1109/JPROC.2015.2455034.

48.3 External links


Web page about zChaff

Chapter 49

Clause (logic)

For other uses, see Clause (disambiguation).

In logic, a clause is an expression formed from a finite collection of literals (variables or their negations) that is true
either whenever at least one of the literals that form it is true (a disjunctive clause, the most common use of the term),
or when all of the literals that form it are true (a conjunctive clause, a less common use of the term). That is, it is a
finite disjunction[1] or conjunction of literals, depending on the context. Clauses are usually written as follows, where
the symbols li are literals:

l1 ∨ ⋯ ∨ ln
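The following sketch shows one common machine representation of such clauses (a convention of this example, not something mandated by the text): a literal is a (variable, polarity) pair and a disjunctive clause is a frozenset of literals. A clause is true under an assignment as soon as one of its literals is satisfied.

from typing import Dict, FrozenSet, Tuple

Literal = Tuple[str, bool]          # ("x", True) means x, ("x", False) means NOT x
Clause = FrozenSet[Literal]

def clause_is_true(clause: Clause, assignment: Dict[str, bool]) -> bool:
    # a disjunctive clause holds when at least one literal is satisfied
    return any(assignment[var] == polarity for var, polarity in clause)

c: Clause = frozenset({("p", True), ("q", False)})     # p OR NOT q
print(clause_is_true(c, {"p": False, "q": False}))     # True, via NOT q
print(clause_is_true(frozenset(), {}))                 # empty clause: always False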

49.1 Empty clauses


A clause can be empty (defined from an empty set of literals). The empty clause is denoted by various symbols.
The truth evaluation of an empty clause is always false. This is justified by considering that false
is the neutral element of the monoid ({false, true}, ∨).

49.2 Implicative form


Every nonempty clause is logically equivalent to an implication of a head from a body, where the head is an arbitrary
literal of the clause and the body is the conjunction of the negations of the other literals. That is, if a truth assignment
causes a clause to be true, and none of the literals of the body satisfy the clause, then the head must also be true.
This equivalence is commonly used in logic programming, where clauses are usually written as an implication in this
form. More generally, the head may be a disjunction of literals. If b1 , . . . , bm are the literals in the body of a clause
and h1 , . . . , hn are those of its head, the clause is usually written as follows:

h1, . . . , hn ← b1, . . . , bm.

If n = 1 and m = 0, the clause is called a (Prolog) fact.

If n = 1 and m > 0, the clause is called a (Prolog) rule.

If n = 0 and m > 0, the clause is called a (Prolog) query.

If n > 1, the clause is no longer Horn.


49.3 See also


Conjunctive normal form

Disjunctive normal form


Horn clause

49.4 References
[1] Chang, Chin-Liang; Richard Char-Tung Lee (1973). Symbolic Logic and Mechanical Theorem Proving. Academic Press.
p. 48. ISBN 0-12-170350-9.

49.5 External links


Clause logic related terminology

Clause simultaneously translated in several languages and meanings


Chapter 50

Cohen algebra

Not to be confused with Cohen ring or RankinCohen algebra.


For the quotient of the algebra of Borel sets by the ideal of meager sets, sometimes called the Cohen algebra, see
Cantor algebra.

In mathematical set theory, a Cohen algebra, named after Paul Cohen, is a type of Boolean algebra used in the
theory of forcing. A Cohen algebra is a Boolean algebra whose completion is isomorphic to the completion of a free
Boolean algebra (Koppelberg 1993).

50.1 References
Koppelberg, Sabine (1993), Characterizations of Cohen algebras, Papers on general topology and applications
(Madison, WI, 1991), Annals of the New York Academy of Sciences, 704, New York Academy of Sciences,
pp. 222237, MR 1277859, doi:10.1111/j.1749-6632.1993.tb52525.x

Chapter 51

Cointerpretability

In mathematical logic, cointerpretability is a binary relation on formal theories: a formal theory T is cointerpretable
in another such theory S, when the language of S can be translated into the language of T in such a way that S proves
every formula whose translation is a theorem of T. The translation here is required to preserve the logical structure
of formulas.
This concept, in a sense dual to interpretability, was introduced by Japaridze (1993), who also proved that, for theories
of Peano arithmetic and any stronger theories with effective axiomatizations, cointerpretability is equivalent to Σ1
-conservativity.

51.1 See also


Cotolerance

interpretability logic.
Tolerance (in logic)

51.2 References
Japaridze (Dzhaparidze), Giorgi (Giorgie) (1993), A generalized notion of weak interpretability and the corre-
sponding modal logic, Annals of Pure and Applied Logic, 61 (1-2): 113160, MR 1218658, doi:10.1016/0168-
0072(93)90201-N.
Japaridze, Giorgi; de Jongh, Dick (1998), The logic of provability, in Buss, Samuel R., Handbook of Proof
Theory, Studies in Logic and the Foundations of Mathematics, 137, Amsterdam: North-Holland, pp. 475546,
MR 1640331, doi:10.1016/S0049-237X(98)80022-0.

Chapter 52

Collapsing algebra

In mathematics, a collapsing algebra is a type of Boolean algebra sometimes used in forcing to reduce (collapse)
the size of cardinals. The posets used to generate collapsing algebras were introduced by Azriel Lévy (1963).
The collapsing algebra of λ^ω is a complete Boolean algebra with at least λ elements but generated by a countable
number of elements. As the size of countably generated complete Boolean algebras is unbounded, this shows that
there is no free complete Boolean algebra on a countable number of elements.

52.1 Definition
There are several slightly different sorts of collapsing algebras.
If λ and κ are cardinals, then the Boolean algebra of regular open sets of the product space κ^λ is a collapsing algebra.
Here λ and κ are both given the discrete topology. There are several different options for the topology of κ^λ. The
simplest option is to take the usual product topology. Another option is to take the topology generated by open sets
consisting of functions whose value is specified on less than λ elements of λ.

52.2 References
Bell, J. L. (1985). Boolean-Valued Models and Independence Proofs in Set Theory. Oxford Logic Guides. 12
(2nd ed.). Oxford: Oxford University Press (Clarendon Press). ISBN 0-19-853241-5. Zbl 0585.03021.
Jech, Thomas (2003). Set theory (third millennium (revised and expanded) ed.). Springer-Verlag. ISBN 3-
540-44085-2. OCLC 174929965. Zbl 1007.03002.
Lévy, Azriel (1963). "Independence results in set theory by Cohen's method. IV". Notices Amer. Math. Soc.
10: 593.

Chapter 53

Commutative property

For other uses, see Commute (disambiguation).

An operation ∘ is commutative iff x ∘ y = y ∘ x for each x and y. This image illustrates this property with the concept of an
operation as a "calculation machine". It doesn't matter for the output x ∘ y or y ∘ x respectively which order the arguments x and y
have; the final outcome is the same.

In mathematics, a binary operation is commutative if changing the order of the operands does not change the result.
It is a fundamental property of many binary operations, and many mathematical proofs depend on it. Most familiar
as the name of the property that says 3 + 4 = 4 + 3 or 2 × 5 = 5 × 2, the property can also be used in more
advanced settings. The name is needed because there are operations, such as division and subtraction, that do not
have it (for example, 3 − 5 ≠ 5 − 3); such operations are not commutative, and so are referred to as noncommutative
operations. The idea that simple operations such as the multiplication and addition of numbers are commutative was
for many years implicitly assumed. Thus, this property was not named until the 19th century, when mathematics
started to become formalized.[1][2] A corresponding property exists for binary relations; a binary relation is said to be
symmetric if the relation applies regardless of the order of its operands; for example, equality is symmetric as two
equal mathematical objects are equal regardless of their order.[3]


53.1 Common uses


The commutative property (or commutative law) is a property generally associated with binary operations and functions.
If the commutative property holds for a pair of elements under a certain binary operation then the two elements are
said to commute under that operation.

53.2 Mathematical definitions


Further information: Symmetric function

The term commutative is used in several related senses.[4][5]

1. A binary operation ∗ on a set S is called commutative if:


x ∗ y = y ∗ x for all x, y ∈ S
An operation that does not satisfy the above property is called non-commutative.
2. One says that x commutes with y under ∗ if:
x ∗ y = y ∗ x

3. A binary function f : A × A → B is called commutative if:


f(x, y) = f(y, x) for all x, y ∈ A

53.3 Examples

53.3.1 Commutative operations in everyday life

The cumulation of apples, which can be seen as an addition of natural numbers, is commutative.

Putting on socks resembles a commutative operation since which sock is put on first is unimportant. Either
way, the result (having both socks on) is the same. In contrast, putting on underwear and trousers is not
commutative.
The commutativity of addition is observed when paying for an item with cash. Regardless of the order the bills
are handed over in, they always give the same total.

53.3.2 Commutative operations in mathematics

The addition of vectors is commutative, because a + b = b + a.

Two well-known examples of commutative binary operations:[4]

The addition of real numbers is commutative, since

y + z = z + y for all y, z ∈ ℝ

For example, 4 + 5 = 5 + 4, since both expressions equal 9.

The multiplication of real numbers is commutative, since

y z = z y for all y, z ∈ ℝ

For example, 3 × 5 = 5 × 3, since both expressions equal 15.



Some binary truth functions are also commutative, since the truth tables for the functions are the same when
one changes the order of the operands.

For example, the logical biconditional function p ↔ q is equivalent to q ↔ p. This function is also written
as p IFF q, or as p ≡ q, or as Epq.
The last form is an example of the most concise notation in the article on truth functions, which lists the
sixteen possible binary truth functions of which eight are commutative: Vpq = Vqp; Apq (OR) = Aqp;
Dpq (NAND) = Dqp; Epq (IFF) = Eqp; Jpq = Jqp; Kpq (AND) = Kqp; Xpq (NOR) = Xqp; Opq = Oqp.

Further examples of commutative binary operations include addition and multiplication of complex numbers,
addition and scalar multiplication of vectors, and intersection and union of sets.
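A quick brute-force check of the commutativity claims above can be written in a few lines of Python (an illustration added here, not from the source). A binary truth function f is commutative exactly when f(p, q) = f(q, p) for all four input pairs; the descriptive names below stand in for the Polish-notation letters used in the text.

from itertools import product

truth_functions = {
    "AND":     lambda p, q: p and q,
    "OR":      lambda p, q: p or q,
    "NAND":    lambda p, q: not (p and q),
    "NOR":     lambda p, q: not (p or q),
    "XOR":     lambda p, q: p != q,
    "IFF":     lambda p, q: p == q,
    "IMPLIES": lambda p, q: (not p) or q,   # not commutative
}

for name, f in truth_functions.items():
    commutative = all(f(p, q) == f(q, p) for p, q in product([False, True], repeat=2))
    print(f"{name}: {'commutative' if commutative else 'not commutative'}")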

53.3.3 Noncommutative operations in daily life


Concatenation, the act of joining character strings together, is a noncommutative operation. For example,

EA + T = EAT ≠ TEA = T + EA

Washing and drying clothes resembles a noncommutative operation; washing and then drying produces a
markedly different result to drying and then washing.
Rotating a book 90° around a vertical axis then 90° around a horizontal axis produces a different orientation
than when the rotations are performed in the opposite order.
The twists of the Rubik's Cube are noncommutative. This can be studied using group theory.
Thought processes are noncommutative: A person asked a question (A) and then a question (B) may give
different answers to each question than a person asked first (B) and then (A), because asking a question may
change the person's state of mind.

53.3.4 Noncommutative operations in mathematics


Some noncommutative binary operations:[6]

Subtraction and division

Subtraction is noncommutative, since 0 − 1 ≠ 1 − 0.


Division is noncommutative, since 1 ÷ 2 ≠ 2 ÷ 1.

Truth functions

Some truth functions are noncommutative, since the truth tables for the functions are different when one changes the
order of the operands. For example, the truth tables for f(A, B) = A ∧ ¬B (A AND NOT B) and f(B, A) = B ∧ ¬A
are

For the eight noncommutative functions, Bqp = Cpq; Mqp = Lpq; Cqp = Bpq; Lqp = Mpq; Fqp = Gpq; Iqp = Hpq;
Gqp = Fpq; Hqp = Ipq.[7]

Matrix multiplication

Matrix multiplication is almost always noncommutative, for example:

\begin{bmatrix} 0 & 2 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix} \neq \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix}

Vector product

The vector product (or cross product) of two vectors in three dimensions is anti-commutative; i.e., b × a = −(a × b).

53.4 History and etymology

The first known use of the term was in a French journal published in 1814

Records of the implicit use of the commutative property go back to ancient times. The Egyptians used the commutative
property of multiplication to simplify computing products.[8][9] Euclid is known to have assumed the commutative
property of multiplication in his book Elements.[10] Formal uses of the commutative property arose in the late 18th
and early 19th centuries, when mathematicians began to work on a theory of functions. Today the commutative
property is a well-known and basic property used in most branches of mathematics.
The first recorded use of the term commutative was in a memoir by François Servois in 1814,[1][11] which used the
word commutatives when describing functions that have what is now called the commutative property. The word is a
combination of the French word commuter meaning "to substitute or switch" and the suffix -ative meaning "tending
to", so the word literally means "tending to substitute or switch". The term then appeared in English in 1838[2] in
Duncan Farquharson Gregory's article entitled "On the real nature of symbolical algebra" published in 1840 in the
Transactions of the Royal Society of Edinburgh.[12]

53.5 Propositional logic

53.5.1 Rule of replacement


In truth-functional propositional logic, commutation,[13][14] or commutativity[15] refer to two valid rules of replacement.
The rules allow one to transpose propositional variables within logical expressions in logical proofs. The rules are:

(P ∨ Q) ⇔ (Q ∨ P)
and

(P ∧ Q) ⇔ (Q ∧ P)
where "⇔" is a metalogical symbol representing "can be replaced in a proof with".

53.5.2 Truth functional connectives


Commutativity is a property of some logical connectives of truth-functional propositional logic. The following logical
equivalences demonstrate that commutativity is a property of particular connectives. The following are truth-functional
tautologies.

Commutativity of conjunction: (P ∧ Q) ↔ (Q ∧ P)

Commutativity of disjunction: (P ∨ Q) ↔ (Q ∨ P)

Commutativity of implication (also called the law of permutation):


(P → (Q → R)) ↔ (Q → (P → R))

Commutativity of equivalence (also called the complete commutative law of equivalence):


(P ↔ Q) ↔ (Q ↔ P)

53.6 Set theory


In group and set theory, many algebraic structures are called commutative when certain operands satisfy the com-
mutative property. In higher branches of mathematics, such as analysis and linear algebra the commutativity of
well-known operations (such as addition and multiplication on real and complex numbers) is often used (or implicitly
assumed) in proofs.[16][17][18]

53.7 Mathematical structures and commutativity


A commutative semigroup is a set endowed with a total, associative and commutative operation.

If the operation additionally has an identity element, we have a commutative monoid

An abelian group, or commutative group is a group whose group operation is commutative.[17]

A commutative ring is a ring whose multiplication is commutative. (Addition in a ring is always commutative.)[19]

In a field both addition and multiplication are commutative.[20]

53.8 Related properties

53.8.1 Associativity
Main article: Associative property

The associative property is closely related to the commutative property. The associative property of an expression
containing two or more occurrences of the same operator states that the order operations are performed in does not
affect the final result, as long as the order of terms doesn't change. In contrast, the commutative property states that
the order of the terms does not affect the final result.
Most commutative operations encountered in practice are also associative. However, commutativity does not imply
associativity. A counterexample is the function

f(x, y) = (x + y) / 2,

which is clearly commutative (interchanging x and y does not affect the result), but it is not associative (since, for
example, f(−4, f(0, +4)) = −1 but f(f(−4, 0), +4) = +1 ). More such examples may be found in commutative
non-associative magmas.

53.8.2 Distributive

Main article: Distributive property

53.8.3 Symmetry

Graph showing the symmetry of the addition function

Main article: Symmetry in mathematics

Some forms of symmetry can be directly linked to commutativity. When a commutative operator is written as a
binary function then the resulting function is symmetric across the line y = x. As an example, if we let a function f
represent addition (a commutative operation) so that f(x,y) = x + y then f is a symmetric function, which can be seen
in the image on the right.
For relations, a symmetric relation is analogous to a commutative operation, in that if a relation R is symmetric, then
aRb ⇔ bRa.

53.9 Non-commuting operators in quantum mechanics


Main article: Canonical commutation relation

In quantum mechanics as formulated by Schrödinger, physical variables are represented by linear operators such as x
(meaning multiply by x) and d/dx. These two operators do not commute, as may be seen by considering the effect of
their compositions x · (d/dx) and (d/dx) · x (also called products of operators) on a one-dimensional wave function ψ(x):

x · (d/dx) ψ = x · ψ′  ≠  (d/dx)(x · ψ) = ψ + x · ψ′

According to the uncertainty principle of Heisenberg, if the two operators representing a pair of variables do not
commute, then that pair of variables are mutually complementary, which means they cannot be simultaneously measured
or known precisely. For example, the position and the linear momentum in the x-direction of a particle are
represented by the operators x and −iℏ ∂/∂x, respectively (where ℏ is the reduced Planck constant). This is the same
example except for the constant −iℏ, so again the operators do not commute and the physical meaning is that the
position and linear momentum in a given direction are complementary.

53.10 See also


Anticommutativity

Centralizer or Commutant

Commutative diagram

Commutative (neurophysiology)

Commutator

Parallelogram law

Particle statistics (for commutativity in physics)

Quasi-commutative property

Trace monoid

53.11 Notes
[1] Cabillón and Miller, Commutative and Distributive

[2] Flood, Raymond; Rice, Adrian; Wilson, Robin, eds. (2011). Mathematics in Victorian Britain. Oxford University Press.
p. 4.

[3] Weisstein, Eric W. Symmetric Relation. MathWorld.

[4] Krowne, p.1

[5] Weisstein, Commute, p.1

[6] Yark, p.1.

[7] Jozef Maria Bochenski (1959), Precis of Mathematical Logic, rev., Albert Menne, ed. and trans., Otto Bird, New York:
Gordon and Breach, Part II, Sec. 3.32, 16 dyadic truth functors, (truth tables), p. 11.

[8] Lumpkin, p.11

[9] Gay and Shute, p.?

[10] O'Conner and Robertson, Real Numbers

[11] O'Conner and Robertson, Servois

[12] D. F. Gregory (1840). On the real nature of symbolical algebra. Transactions of the Royal Society of Edinburgh. 14:
208216.

[13] Moore and Parker

[14] Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall.

[15] Hurley, Patrick (1991). A Concise Introduction to Logic 4th edition. Wadsworth Publishing.

[16] Axler, p.2

[17] Gallian, p.34

[18] p. 26,87

[19] Gallian p.236

[20] Gallian p.250

53.12 References

53.12.1 Books
Axler, Sheldon (1997). Linear Algebra Done Right, 2e. Springer. ISBN 0-387-98258-2.

Abstract algebra theory. Covers commutativity in that context. Uses property throughout book.

Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall.

Gallian, Joseph (2006). Contemporary Abstract Algebra, 6e. Boston, Mass.: Houghton Mifflin. ISBN 0-618-
51471-6.

Linear algebra theory. Explains commutativity in chapter 1, uses it throughout.

Goodman, Frederick (2003). Algebra: Abstract and Concrete, Stressing Symmetry, 2e. Prentice Hall. ISBN
0-13-067342-0.

Abstract algebra theory. Uses commutativity property throughout book.

Hurley, Patrick (1991). A Concise Introduction to Logic 4th edition. Wadsworth Publishing.

53.12.2 Articles
https://web.archive.org/web/20070713072942/http://www.ethnomath.org/resources/lumpkin1997.pdf Lump-
kin, B. (1997). The Mathematical Legacy Of Ancient Egypt - A Response To Robert Palter. Unpublished
manuscript.

Article describing the mathematical ability of ancient civilizations.

Robins, R. Gay, and Charles C. D. Shute. 1987. The Rhind Mathematical Papyrus: An Ancient Egyptian Text.
London: British Museum Publications Limited. ISBN 0-7141-0944-4

Translation and interpretation of the Rhind Mathematical Papyrus.

53.12.3 Online resources


Hazewinkel, Michiel, ed. (2001) [1994], Commutativity, Encyclopedia of Mathematics, Springer Science+Business
Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

Krowne, Aaron, Commutative at PlanetMath.org., Accessed 8 August 2007.

Denition of commutativity and examples of commutative operations



Weisstein, Eric W. Commute. MathWorld.

, Accessed 8 August 2007.

Explanation of the term commute

Yark. Examples of non-commutative operations at PlanetMath.org., Accessed 8 August 2007

Examples proving some noncommutative operations

O'Conner, J J and Robertson, E F. MacTutor history of real numbers, Accessed 8 August 2007

Article giving the history of the real numbers

Cabillón, Julio and Miller, Jeff. Earliest Known Uses Of Mathematical Terms, Accessed 22 November 2008

Page covering the earliest uses of mathematical terms

O'Conner, J J and Robertson, E F. MacTutor biography of François Servois, Accessed 8 August 2007

Biography of François Servois, who first used the term


Chapter 54

Commutativity of conjunction

In propositional logic, the commutativity of conjunction is a valid argument form and truth-functional tautology.
It is considered to be a law of classical logic. It is the principle that the conjuncts of a logical conjunction may switch
places with each other, while preserving the truth-value of the resulting proposition.[1]

54.1 Formal notation


Commutativity of conjunction can be expressed in sequent notation as:

(P ∧ Q) ⊢ (Q ∧ P)
and

(Q ∧ P) ⊢ (P ∧ Q)
where ⊢ is a metalogical symbol meaning that (Q ∧ P) is a syntactic consequence of (P ∧ Q), in the one case, and
(P ∧ Q) is a syntactic consequence of (Q ∧ P) in the other, in some logical system;
or in rule form:

P ∧ Q
∴ Q ∧ P
and

Q ∧ P
∴ P ∧ Q
where the rule is that wherever an instance of "(P ∧ Q)" appears on a line of a proof, it can be replaced with
"(Q ∧ P)" and wherever an instance of "(Q ∧ P)" appears on a line of a proof, it can be replaced with "(P ∧ Q)";
or as the statement of a truth-functional tautology or theorem of propositional logic:

(P ∧ Q) → (Q ∧ P)
and

(Q ∧ P) → (P ∧ Q)
where P and Q are propositions expressed in some formal system.


54.2 Generalized principle


For any propositions H1, H2, ..., Hn and any permutation σ(n) of the numbers 1 through n, it is the case that:

H1 ∧ H2 ∧ ... ∧ Hn

is equivalent to

Hσ(1) ∧ Hσ(2) ∧ ... ∧ Hσ(n).

For example, if H1 is

It is raining

H2 is

Socrates is mortal

and H3 is

2+2=4

then
It is raining and Socrates is mortal and 2+2=4
is equivalent to
Socrates is mortal and 2+2=4 and it is raining
and the other orderings of the predicates.

54.3 References
[1] Elliott Mendelson (1997). Introduction to Mathematical Logic. CRC Press. ISBN 0-412-80830-7.
Chapter 55

Complete Boolean algebra

This article is about a type of mathematical structure. For complete sets of Boolean operators, see Functional com-
pleteness.

In mathematics, a complete Boolean algebra is a Boolean algebra in which every subset has a supremum (least
upper bound). Complete Boolean algebras are used to construct Boolean-valued models of set theory in the theory
of forcing. Every Boolean algebra A has an essentially unique completion, which is a complete Boolean algebra
containing A such that every element is the supremum of some subset of A. As a partially ordered set, this completion
of A is the Dedekind–MacNeille completion.
More generally, if κ is a cardinal then a Boolean algebra is called κ-complete if every subset of cardinality less than
κ has a supremum.

55.1 Examples
Every finite Boolean algebra is complete.

The algebra of subsets of a given set is a complete Boolean algebra.

The regular open sets of any topological space form a complete Boolean algebra. This example is of particular
importance because every forcing poset can be considered as a topological space (a base for the topology
consisting of sets that are the set of all elements less than or equal to a given element). The corresponding
regular open algebra can be used to form Boolean-valued models which are then equivalent to generic extensions
by the given forcing poset.

The algebra of all measurable subsets of a σ-finite measure space, modulo null sets, is a complete Boolean
algebra. When the measure space is the unit interval with the σ-algebra of Lebesgue measurable sets, the
Boolean algebra is called the random algebra.

The algebra of all measurable subsets of a measure space is an ℵ1-complete Boolean algebra, but is not usually
complete.

The algebra of all subsets of an infinite set that are finite or have finite complement is a Boolean algebra but is
not complete.

The Boolean algebra of all Baire sets modulo meager sets in a topological space with a countable base is
complete; when the topological space is the real numbers the algebra is sometimes called the Cantor algebra.

Another example of a Boolean algebra that is not complete is the Boolean algebra P(ω) of all sets of natural
numbers, quotiented out by the ideal Fin of finite subsets. The resulting object, denoted P(ω)/Fin, consists of
all equivalence classes of sets of naturals, where the relevant equivalence relation is that two sets of naturals are
equivalent if their symmetric difference is finite. The Boolean operations are defined analogously, for example,
if A and B are two equivalence classes in P(ω)/Fin, we define A ∨ B to be the equivalence class of a ∪ b, where
a and b are some (any) elements of A and B respectively.


Now let a0, a1, ... be pairwise disjoint infinite sets of naturals, and let A0, A1, ... be their corresponding
equivalence classes in P(ω)/Fin. Then given any upper bound X of A0, A1, ... in P(ω)/Fin, we can find
a lesser upper bound, by removing from a representative for X one element of each an. Therefore the
An have no supremum.

A Boolean algebra is complete if and only if its Stone space of prime ideals is extremally disconnected.

55.2 Properties of complete Boolean algebras


Sikorski's extension theorem states that

if A is a subalgebra of a Boolean algebra B, then any homomorphism from A to a complete Boolean algebra C can be
extended to a morphism from B to C.

Every subset of a complete Boolean algebra has a supremum, by definition; it follows that every subset also has
an infimum (greatest lower bound).
For a complete Boolean algebra both infinite distributive laws hold.
For a complete Boolean algebra the infinite de Morgan's laws hold.

55.3 The completion of a Boolean algebra


The completion of a Boolean algebra can be defined in several equivalent ways:

The completion of A is (up to isomorphism) the unique complete Boolean algebra B containing A such that A
is dense in B; this means that for every nonzero element of B there is a smaller non-zero element of A.
The completion of A is (up to isomorphism) the unique complete Boolean algebra B containing A such that
every element of B is the supremum of some subset of A.

The completion of a Boolean algebra A can be constructed in several ways:

The completion is the Boolean algebra of regular open sets in the Stone space of prime ideals of A. Each
element x of A corresponds to the open set of prime ideals not containing x (which is open and closed, and
therefore regular).
The completion is the Boolean algebra of regular cuts of A. Here a cut is a subset U of A+ (the non-zero
elements of A) such that if q is in U and p ≤ q then p is in U, and is called regular if whenever p is not in U there
is some r ≤ p such that U has no elements ≤ r. Each element p of A corresponds to the cut of elements ≤ p.

If A is a metric space and B its completion then any isometry from A to a complete metric space C can be extended to
a unique isometry from B to C. The analogous statement for complete Boolean algebras is not true: a homomorphism
from a Boolean algebra A to a complete Boolean algebra C cannot necessarily be extended to a (supremum preserving)
homomorphism of complete Boolean algebras from the completion B of A to C. (By Sikorski's extension theorem it
can be extended to a homomorphism of Boolean algebras from B to C, but this will not in general be a homomorphism
of complete Boolean algebras; in other words, it need not preserve suprema.)

55.4 Free κ-complete Boolean algebras


Unless the Axiom of Choice is relaxed,[1] free complete Boolean algebras generated by a set do not exist (unless the set
is finite). More precisely, for any cardinal κ, there is a complete Boolean algebra of cardinality 2^κ greater than κ that
is generated as a complete Boolean algebra by a countable subset; for example the Boolean algebra of regular open
sets in the product space κ^ω, where κ has the discrete topology. A countable generating set consists of all sets am,n
for m, n integers, consisting of the elements x ∈ κ^ω such that x(m) < x(n). (This Boolean algebra is called a collapsing
algebra, because forcing with it collapses the cardinal κ onto ω.)
In particular the forgetful functor from complete Boolean algebras to sets has no left adjoint, even though it is continuous
and the category of Boolean algebras is small-complete. This shows that the solution set condition in Freyd's
adjoint functor theorem is necessary.
Given a set X, one can form the free Boolean algebra A generated by this set and then take its completion B. However
B is not a free complete Boolean algebra generated by X (unless X is finite or AC is omitted), because a function
from X to a free Boolean algebra C cannot in general be extended to a (supremum-preserving) morphism of Boolean
algebras from B to C.
On the other hand, for any fixed cardinal κ, there is a free (or universal) κ-complete Boolean algebra generated by
any given set.

55.5 See also


Complete lattice
Complete Heyting algebra

55.6 References
[1] Stavi, Jonathan (1974), "A model of ZF with an infinite free complete Boolean algebra" (reprint), Israel Journal of Math-
ematics, 20 (2): 149–163, doi:10.1007/BF02757883.

Johnstone, Peter T. (1982), Stone spaces, Cambridge University Press, ISBN 0-521-33779-8
Koppelberg, Sabine (1989), Monk, J. Donald; Bonnet, Robert, eds., Handbook of Boolean algebras, 1, Ams-
terdam: North-Holland Publishing Co., pp. xx+312, ISBN 0-444-70261-X, MR 0991565
Monk, J. Donald; Bonnet, Robert, eds. (1989), Handbook of Boolean algebras, 2, Amsterdam: North-Holland
Publishing Co., ISBN 0-444-87152-7, MR 0991595

Monk, J. Donald; Bonnet, Robert, eds. (1989), Handbook of Boolean algebras, 3, Amsterdam: North-Holland
Publishing Co., ISBN 0-444-87153-5, MR 0991607

Vladimirov, D.A. (2001) [1994], Boolean algebra, in Hazewinkel, Michiel, Encyclopedia of Mathematics,
Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Chapter 56

Composition of relations

In mathematics, the composition of binary relations is a concept of forming a new relation S ∘ R from two given
relations R and S, having as its best-known special case the composition of functions.

56.1 Definition
If R ⊆ X × Y and S ⊆ Y × Z are two binary relations, then their composition S ∘ R is the relation

S ∘ R = {(x, z) ∈ X × Z | ∃y ∈ Y : (x, y) ∈ R ∧ (y, z) ∈ S}.

In other words, S ∘ R ⊆ X × Z is defined by the rule that says (x, z) ∈ S ∘ R if and only if there is an element
y ∈ Y such that x R y S z (i.e. (x, y) ∈ R and (y, z) ∈ S).
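A minimal sketch of this definition in Python (the relation and element names are invented for the example): relations are plain sets of ordered pairs, and compose(S, R) returns S ∘ R, i.e. all pairs (x, z) linked through some middle element y.

def compose(S, R):
    # return S o R for binary relations given as sets of ordered pairs
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

# R relates people to cities, S relates cities to countries.
R = {("alice", "paris"), ("bob", "tokyo")}
S = {("paris", "france"), ("tokyo", "japan")}

print(compose(S, R))   # {('alice', 'france'), ('bob', 'japan')}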
In particular fields, authors might denote by R ∘ S what is defined here to be S ∘ R. The convention chosen here is
such that function composition (with the usual notation) is obtained as a special case, when R and S are functional
relations. Some authors[1] prefer to write ∘l and ∘r explicitly when necessary, depending whether the left or the right
relation is the first one applied.
A further variation encountered in computer science is the Z notation: ∘ is used to denote the traditional (right)
composition, but ⨾ (a fat open semicolon with Unicode code point U+2A3E) denotes left composition.[2][3] This use
of semicolon coincides with the notation for function composition used (mostly by computer scientists) in category
theory,[4] as well as the notation for dynamic conjunction within linguistic dynamic semantics.[5] The semicolon
notation (with this semantic) was introduced by Ernst Schröder in 1895.[6]
The binary relations R ⊆ X × Y are sometimes regarded as the morphisms R : X → Y in a category Rel which
has the sets as objects. In Rel, composition of morphisms is exactly composition of relations as defined above. The
category Set of sets is a subcategory of Rel that has the same objects but fewer morphisms. A generalization of this
is found in the theory of allegories.

56.2 Properties
Composition of relations is associative.
The inverse relation of S ∘ R is (S ∘ R)⁻¹ = R⁻¹ ∘ S⁻¹. This property makes the set of all binary relations on a set a
semigroup with involution.
The composition of (partial) functions (i.e. functional relations) is again a (partial) function.
If R and S are injective, then S ∘ R is injective, which conversely implies only the injectivity of R.
If R and S are surjective, then S ∘ R is surjective, which conversely implies only the surjectivity of S.
The set of binary relations on a set X (i.e. relations from X to X) together with (left or right) relation composition
forms a monoid with zero, where the identity map on X is the neutral element, and the empty set is the zero element.


56.3 Join: another form of composition


Main article: Join (relational algebra)

Other forms of composition of relations, which apply to general n-place relations instead of binary relations, are
found in the join operation of relational algebra. The usual composition of two binary relations as dened here can
be obtained by taking their join, leading to a ternary relation, followed by a projection that removes the middle
component.

56.4 Composition in terms of matrices


If R is a relation between two finite sets then one obtains an associated adjacency matrix MR (see here). Then if R
and S are two relations which can be composed, the matrix MS∘R is exactly the matrix product MR MS, where it is
understood that 1 + 1 = 1. (Note also the reversal of order of R and S.)
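A small Python sketch of this matrix view (the example relations are invented here): adjacency matrices over {0, 1}, multiplied with "OR of ANDs" so that 1 + 1 = 1. For R from X to Y and S from Y to Z, the product MR MS is the adjacency matrix of S ∘ R.

def bool_matmul(A, B):
    # Boolean matrix product: C[i][k] = OR over j of (A[i][j] AND B[j][k])
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[int(any(A[i][j] and B[j][k] for j in range(inner)))
             for k in range(cols)] for i in range(rows)]

M_R = [[1, 0],      # x1 R y1
       [0, 1]]      # x2 R y2
M_S = [[0, 1],      # y1 S z2
       [1, 0]]      # y2 S z1

print(bool_matmul(M_R, M_S))   # [[0, 1], [1, 0]] -- adjacency matrix of S o R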

56.5 See also


Binary relation
Relation algebra

Demonic composition
Function composition

Join (SQL)
Logical matrix

56.6 Notes
[1] Kilp, Knauer & Mikhalev, p. 7

[2] ISO/IEC 13568:2002(E), p. 23

[3] http://www.fileformat.info/info/unicode/char/2a3e/index.htm

[4] http://www.math.mcgill.ca/triples/Barr-Wells-ctcs.pdf, p. 6

[5] http://plato.stanford.edu/entries/dynamic-semantics/#EncDynTypLog

[6] Paul Taylor (1999). Practical Foundations of Mathematics. Cambridge University Press. p. 24. ISBN 978-0-521-63107-5.
A free HTML version of the book is available at http://www.cs.man.ac.uk/~pt/Practical_Foundations/

56.7 References
M. Kilp, U. Knauer, A.V. Mikhalev, Monoids, Acts and Categories with Applications to Wreath Products and
Graphs, De Gruyter Expositions in Mathematics vol. 29, Walter de Gruyter, 2000, ISBN 3-11-015248-7.
Chapter 57

Conditional quantifier

In logic, a conditional quantifier is a kind of Lindström quantifier (or generalized quantifier) QA that, relative to a
classical model A, satisfies some or all of the following conditions ("X" and "Y" range over arbitrary formulas in one
free variable):
(The implication arrow denotes material implication in the metalanguage.) The minimal conditional logic M is characterized
by the first six properties, and stronger conditional logics include some of the other ones. For example, the
quantifier ∀A, which can be viewed as set-theoretic inclusion, satisfies all of the above except [symmetry]. Clearly
[symmetry] holds for ∃A while e.g. [contraposition] fails.
A semantic interpretation of conditional quantifiers involves a relation between sets of subsets of a given structure,
i.e. a relation between properties defined on the structure. Some of the details can be found in the article Lindström
quantifier.
Conditional quantifiers are meant to capture certain properties concerning conditional reasoning at an abstract level.
Generally, it is intended to clarify the role of conditionals in a first-order language as they relate to other connectives,
such as conjunction or disjunction. While they can cover nested conditionals, the greater the complexity of the formula,
specifically the greater the number of conditional nestings, the less helpful they are as a methodological tool for
understanding conditionals, at least in some sense. Compare this methodological strategy for conditionals with that of
first-degree entailment logics.

57.1 References
Serge Lapierre. "Conditionals and Quantifiers", in Quantifiers, Logic, and Language, Stanford University, pp. 237–
253, 1995.

Chapter 58

Conditioned disjunction

In logic, conditioned disjunction (sometimes called conditional disjunction) is a ternary logical connective introduced
by Church.[1] Given operands p, q, and r, which represent truth-valued propositions, the meaning of the
conditioned disjunction [p, q, r] is given by:

[p, q, r] ↔ ((q → p) ∧ (¬q → r))

In words, [p, q, r] is equivalent to: "if q then p, else r", or "p or r, according as q or not q". This may also be stated
as "q implies p, and not q implies r". So, for any values of p, q, and r, the value of [p, q, r] is the value of p when q
is true, and is the value of r otherwise.
The conditioned disjunction is also equivalent to:

(q ∧ p) ∨ (¬q ∧ r)

and has the same truth table as the ternary (?:) operator in many programming languages.
In conjunction with truth constants denoting each truth-value, conditioned disjunction is truth-functionally complete
for classical logic.[2] Its truth table is the following:

p q r | [p, q, r]
T T T |     T
T T F |     T
T F T |     T
T F F |     F
F T T |     F
F T F |     F
F F T |     T
F F F |     F

There are other truth-functionally complete ternary connectives.
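A quick Python check (added here as an illustration, not from the source) that the descriptions above coincide: Church's definition (q → p) ∧ (¬q → r), the disjunctive form (q ∧ p) ∨ (¬q ∧ r), and the ternary ?: operator, written in Python as "p if q else r".

from itertools import product

def implies(a, b):
    return (not a) or b

for p, q, r in product([False, True], repeat=3):
    church      = implies(q, p) and implies(not q, r)   # (q -> p) AND (NOT q -> r)
    disjunctive = (q and p) or ((not q) and r)          # (q AND p) OR (NOT q AND r)
    ternary     = p if q else r                         # the ?: operator
    assert church == disjunctive == ternary

print("all 8 rows agree")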

58.1 References
[1] Church, Alonzo (1956). Introduction to Mathematical Logic. Princeton University Press.

[2] Wesselkamper, T., "A sole sufficient operator", Notre Dame Journal of Formal Logic, Vol. XVI, No. 1 (1975), pp. 86-88.

Chapter 59

Conjunction elimination

In propositional logic, conjunction elimination (also called and elimination, ∧ elimination,[1] or simplification)[2][3][4]
is a valid immediate inference, argument form and rule of inference which makes the inference that,
if the conjunction A and B is true, then A is true, and B is true. The rule makes it possible to shorten longer proofs
by deriving one of the conjuncts of a conjunction on a line by itself.
An example in English:

It's raining and it's pouring.


Therefore it's raining.

The rule consists of two separate sub-rules, which can be expressed in formal language as:

P ∧ Q
∴ P
and

P ∧ Q
∴ Q
The two sub-rules together mean that, whenever an instance of "P ∧ Q" appears on a line of a proof, either "P"
or "Q" can be placed on a subsequent line by itself. The above example in English is an application of the first
sub-rule.

59.1 Formal notation


The conjunction elimination sub-rules may be written in sequent notation:

(P ∧ Q) ⊢ P

and

(P ∧ Q) ⊢ Q

where ⊢ is a metalogical symbol meaning that P is a syntactic consequence of P ∧ Q and Q is also a syntactic
consequence of P ∧ Q in some logical system;
and expressed as truth-functional tautologies or theorems of propositional logic:


(P ∧ Q) → P

and

(P ∧ Q) → Q

where P and Q are propositions expressed in some formal system.

59.2 References
[1] David A. Duffy (1991). Principles of Automated Theorem Proving. New York: Wiley. Sect.3.1.2.1, p.46

[2] Copi and Cohen

[3] Moore and Parker

[4] Hurley
Chapter 60

Conjunction introduction

Conjunction introduction (often abbreviated simply as conjunction and also called and introduction[1][2][3]) is
a valid rule of inference of propositional logic. The rule makes it possible to introduce a conjunction into a logical
proof. It is the inference that if the proposition p is true, and proposition q is true, then the logical conjunction of the
two propositions p and q is true. For example, if it's true that it's raining, and it's true that I'm inside, then it's true
that it's raining and I'm inside. The rule can be stated:

P, Q
∴ P ∧ Q
where the rule is that wherever an instance of "P" and "Q" appear on lines of a proof, a "P ∧ Q" can be placed
on a subsequent line.

60.1 Formal notation


The conjunction introduction rule may be written in sequent notation:

P, Q ⊢ P ∧ Q

where ⊢ is a metalogical symbol meaning that P ∧ Q is a syntactic consequence if P and Q are each on lines of a
proof in some logical system;
where P and Q are propositions expressed in some formal system.

60.2 References
[1] Hurley, Patrick (1991). A Concise Introduction to Logic 4th edition. Wadsworth Publishing. pp. 34651.

[2] Copi and Cohen

[3] Moore and Parker

Chapter 61

Conjunctive normal form

In Boolean logic, a formula is in conjunctive normal form (CNF) or clausal normal form if it is a conjunction of
one or more clauses, where a clause is a disjunction of literals; otherwise put, it is an AND of ORs. As a normal
form, it is useful in automated theorem proving. It is similar to the product of sums form used in circuit theory.
All conjunctions of literals and all disjunctions of literals are in CNF, as they can be seen as conjunctions of one-
literal clauses and conjunctions of a single clause, respectively. As in the disjunctive normal form (DNF), the only
propositional connectives a formula in CNF can contain are and, or, and not. The not operator can only be used as
part of a literal, which means that it can only precede a propositional variable or a predicate symbol.
In automated theorem proving, the notion "clausal normal form" is often used in a narrower sense, meaning a par-
ticular representation of a CNF formula as a set of sets of literals.

61.1 Examples and non-examples


All of the following formulas in the variables A, B, C, D, and E are in conjunctive normal form:

¬A ∧ (B ∨ C)
(A ∨ B) ∧ (¬B ∨ C ∨ ¬D) ∧ (D ∨ ¬E)
A ∨ B
A ∧ B

The third formula is in conjunctive normal form because it is viewed as a conjunction with just one conjunct, namely
the clause A ∨ B. Incidentally, the last two formulas are also in disjunctive normal form.
The following formulas are not in conjunctive normal form:

¬(B ∨ C), since an OR is nested within a NOT


(A ∧ B) ∨ C
A ∧ (B ∨ (D ∧ E)), since an AND is nested within an OR

Every formula can be equivalently written as a formula in conjunctive normal form. In particular this is the case for
the three non-examples just mentioned; they are respectively equivalent to the following three formulas, which are in
conjunctive normal form:

¬B ∧ ¬C
(A ∨ C) ∧ (B ∨ C)
A ∧ (B ∨ D) ∧ (B ∨ E).
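The three conversions above can be reproduced mechanically; the sketch below does so with the sympy library's boolean-algebra helpers, an assumption of this illustration rather than something the article itself relies on, and the printed results may appear with the arguments in a different order.

from sympy import symbols
from sympy.logic.boolalg import to_cnf

A, B, C, D, E = symbols("A B C D E")

print(to_cnf(~(B | C)))            # expected: ~B & ~C
print(to_cnf((A & B) | C))         # expected: (A | C) & (B | C)
print(to_cnf(A & (B | (D & E))))   # expected: A & (B | D) & (B | E)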


61.2 Conversion into CNF


Every propositional formula can be converted into an equivalent formula that is in CNF. This transformation is based
on rules about logical equivalences: the double negation law, De Morgan's laws, and the distributive law.
Since all logical formulae can be converted into an equivalent formula in conjunctive normal form, proofs are often
based on the assumption that all formulae are CNF. However, in some cases this conversion to CNF can lead to an
exponential explosion of the formula. For example, translating the following non-CNF formula into CNF produces a
formula with 2^n clauses:

(X1 ∧ Y1) ∨ (X2 ∧ Y2) ∨ ⋯ ∨ (Xn ∧ Yn).

In particular, the generated formula is:

(X1 ∨ X2 ∨ ⋯ ∨ Xn) ∧ (Y1 ∨ X2 ∨ ⋯ ∨ Xn) ∧ (X1 ∨ Y2 ∨ ⋯ ∨ Xn) ∧ (Y1 ∨ Y2 ∨ ⋯ ∨ Xn) ∧ ⋯ ∧ (Y1 ∨ Y2 ∨ ⋯ ∨ Yn).

This formula contains 2^n clauses; each clause contains either Xi or Yi for each i.
There exist transformations into CNF that avoid an exponential increase in size by preserving satisfiability rather than
equivalence.[1][2] These transformations are guaranteed to only linearly increase the size of the formula, but introduce
new variables. For example, the above formula can be transformed into CNF by adding variables Z1, . . . , Zn as
follows:

(Z1 ∨ ⋯ ∨ Zn) ∧ (¬Z1 ∨ X1) ∧ (¬Z1 ∨ Y1) ∧ ⋯ ∧ (¬Zn ∨ Xn) ∧ (¬Zn ∨ Yn).

An interpretation satisfies this formula only if at least one of the new variables is true. If this variable is Zi, then
both Xi and Yi are true as well. This means that every model that satisfies this formula also satisfies the original one.
On the other hand, only some of the models of the original formula satisfy this one: since the Zi are not mentioned
in the original formula, their values are irrelevant to satisfaction of it, which is not the case in the last formula. This
means that the original formula and the result of the translation are equisatisfiable but not equivalent.
An alternative translation, the Tseitin transformation, includes also the clauses Zi ∨ ¬Xi ∨ ¬Yi. With these clauses,
the formula implies Zi ↔ Xi ∧ Yi; this formula is often regarded to define Zi to be a name for Xi ∧ Yi.
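The contrast between the two translations can be made concrete with a short sketch (an illustration added here; the literal and variable naming is arbitrary). Literals are strings, "~" marks negation, and a CNF is a list of clauses, each clause a list of literals.

from itertools import product

def naive_cnf(n):
    # distribute OR over AND: one clause per choice of Xi-or-Yi in each pair (2^n clauses)
    return [[f"{letter}{i}" for i, letter in enumerate(choice, start=1)]
            for choice in product("XY", repeat=n)]

def equisatisfiable_cnf(n):
    # introduce fresh variables Z1..Zn: (Z1 v ... v Zn) plus (~Zi v Xi) and (~Zi v Yi)
    clauses = [[f"Z{i}" for i in range(1, n + 1)]]
    for i in range(1, n + 1):
        clauses.append([f"~Z{i}", f"X{i}"])
        clauses.append([f"~Z{i}", f"Y{i}"])
    return clauses

n = 4
print(len(naive_cnf(n)))            # 16 = 2^n clauses
print(len(equisatisfiable_cnf(n)))  # 9 = 2n + 1 clauses, at the cost of n extra variables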

61.3 First-order logic


In first-order logic, conjunctive normal form can be taken further to yield the clausal normal form of a logical formula,
which can then be used to perform first-order resolution in resolution-based automated theorem proving.
See below for an example.

61.4 Computational complexity


An important set of problems in computational complexity involves finding assignments to the variables of a boolean
formula expressed in conjunctive normal form, such that the formula is true. The k-SAT problem is the problem of
finding a satisfying assignment to a boolean formula expressed in CNF in which each disjunction contains at most k
variables. 3-SAT is NP-complete (like any other k-SAT problem with k > 2) while 2-SAT is known to have solutions
in polynomial time. As a consequence,[3] the task of converting a formula into a DNF, preserving satisfiability, is
NP-hard; dually, converting into CNF, preserving validity, is also NP-hard; hence equivalence-preserving conversion
into DNF or CNF is again NP-hard.
Typical problems in this case involve formulas in "3CNF": conjunctive normal form with no more than three variables
per conjunct. Examples of such formulas encountered in practice can be very large, for example with 100,000
variables and 1,000,000 conjuncts.

A formula in CNF can be converted into an equisatisfiable formula in "k-CNF" (for k ≥ 3) by replacing each conjunct
with more than k variables X1 ∨ ⋯ ∨ Xk ∨ ⋯ ∨ Xn by two conjuncts X1 ∨ ⋯ ∨ X(k−1) ∨ Z and ¬Z ∨ Xk ∨ ⋯ ∨ Xn
with Z a new variable, and repeating as often as necessary.
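A minimal sketch of this splitting step, specialized to k = 3 (the fresh-variable names Z1, Z2, ... are an arbitrary choice of this example): clauses are lists of string literals with "~" marking negation.

def to_3cnf(clauses):
    # split every clause longer than 3 literals using fresh linking variables
    result, counter = [], 0
    for clause in clauses:
        while len(clause) > 3:
            counter += 1
            z = f"Z{counter}"
            result.append(clause[:2] + [z])          # X1 v X2 v Z
            clause = [f"~{z}"] + clause[2:]          # ~Z v X3 v ... v Xn
        result.append(clause)
    return result

print(to_3cnf([["A", "B", "C", "D", "E"]]))
# [['A', 'B', 'Z1'], ['~Z1', 'C', 'Z2'], ['~Z2', 'D', 'E']]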

61.5 Converting from first-order logic


To convert first-order logic to CNF:[4]

1. Convert to negation normal form.

(a) Eliminate implications and equivalences: repeatedly replace P → Q with ¬P ∨ Q; replace P ↔ Q
with (P ∨ ¬Q) ∧ (¬P ∨ Q). Eventually, this will eliminate all occurrences of → and ↔.
(b) Move NOTs inwards by repeatedly applying De Morgan's law. Specifically, replace ¬(P ∨ Q) with
(¬P) ∧ (¬Q); replace ¬(P ∧ Q) with (¬P) ∨ (¬Q); and replace ¬¬P with P; replace ¬(∀x P(x))
with ∃x ¬P(x); ¬(∃x P(x)) with ∀x ¬P(x). After that, a ¬ may occur only immediately before a
predicate symbol.

2. Standardize variables

(a) For sentences like (∀x P(x)) ∨ (∃x Q(x)) which use the same variable name twice, change the name of
one of the variables. This avoids confusion later when dropping quantifiers. For example, ∀x [∃y Animal(y)
∧ ¬Loves(x, y)] ∨ [∃y Loves(y, x)] is renamed to ∀x [∃y Animal(y) ∧ ¬Loves(x, y)] ∨ [∃z Loves(z, x)].

3. Skolemize the statement

(a) Move quantifiers outwards: repeatedly replace P ∧ (∀x Q(x)) with ∀x (P ∧ Q(x)); replace P ∨ (∀x Q(x))
with ∀x (P ∨ Q(x)); replace P ∧ (∃x Q(x)) with ∃x (P ∧ Q(x)); replace P ∨ (∃x Q(x)) with ∃x (P ∨
Q(x)). These replacements preserve equivalence, since the previous variable standardization step ensured
that x doesn't occur in P. After these replacements, a quantifier may occur only in the initial prefix
of the formula, but never inside a ¬, ∧, or ∨.
(b) Repeatedly replace ∀x1 . . . ∀xn ∃y P(y) with ∀x1 . . . ∀xn P(f(x1, . . . , xn)), where f is a new n-ary
function symbol, a so-called "Skolem function". This is the only step that preserves only satisfiability
rather than equivalence. It eliminates all existential quantifiers.

4. Drop all universal quantifiers.

5. Distribute ORs inwards over ANDs: repeatedly replace P ∨ (Q ∧ R) with (P ∨ Q) ∧ (P ∨ R).

As an example, the formula saying "Anyone who loves all animals, is in turn loved by someone" is converted into CNF
(and subsequently into clause form in the last line) as follows (highlighting replacement rule redexes in red):
Informally, the Skolem function g(x) can be thought of as yielding the person by whom x is loved, while f(x) yields
the animal (if any) that x doesn't love. The 3rd-last line from below then reads as "x doesn't love the animal f(x),
or else x is loved by g(x)".
The 2nd-last line from above, (Animal(f(x)) ∨ Loves(g(x), x)) ∧ (¬Loves(x, f(x)) ∨ Loves(g(x), x)), is the CNF.

61.6 Notes
[1] Tseitin (1968)

[2] Jackson and Sheridan (2004)

[3] since one way to check a CNF for satisfiability is to convert it into a DNF, the satisfiability of which can be checked in
linear time

[4] Artificial Intelligence: A Modern Approach [1995...] Russell and Norvig



61.7 See also


Algebraic normal form

Disjunctive normal form


Horn clause

QuineMcCluskey algorithm

61.8 References
Paul Jackson, Daniel Sheridan: Clause Form Conversions for Boolean Circuits. In: Holger H. Hoos, David G.
Mitchell (Eds.): Theory and Applications of Satisfiability Testing, 7th International Conference, SAT 2004,
Vancouver, BC, Canada, May 10–13, 2004, Revised Selected Papers. Lecture Notes in Computer Science
3542, Springer 2005, pp. 183–198

G.S. Tseitin: On the complexity of derivation in propositional calculus. In: Slisenko, A.O. (ed.) Structures in
Constructive Mathematics and Mathematical Logic, Part II, Seminars in Mathematics (translated from Rus-
sian), pp. 115125. Steklov Mathematical Institute (1968)

61.9 External links


Hazewinkel, Michiel, ed. (2001) [1994], Conjunctive normal form, Encyclopedia of Mathematics, Springer
Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

Java applet for converting to CNF and DNF, showing laws used
Mayuresh S. Pardeshi, Dr. Bashirahamed F. Momin Conversion of cnf to dnf using grid computing IEEE,
ISBN 978-1-4673-2816-6
Mayuresh S. Pardeshi, Dr. Bashirahamed F. Momin Conversion of cnf to dnf using grid computing in parallel
IEEE, ISBN 978-1-4799-4041-7
Chapter 62

Consensus theorem

In Boolean algebra, the consensus theorem or rule of consensus[1] is the identity:

xy ∨ x′z ∨ yz = xy ∨ x′z

The consensus or resolvent of the terms xy and x′z is yz. It is the conjunction of all the unique literals of the terms,
excluding the literal that appears unnegated in one term and negated in the other.
The conjunctive dual of this equation is:

(x ∨ y)(x′ ∨ z)(y ∨ z) = (x ∨ y)(x′ ∨ z)

62.1 Proof
xy ∨ x′z ∨ yz = xy ∨ x′z ∨ (x ∨ x′)yz
= xy ∨ x′z ∨ xyz ∨ x′yz
= (xy ∨ xyz) ∨ (x′z ∨ x′yz)
= xy(1 ∨ z) ∨ x′z(1 ∨ y)
= xy ∨ x′z
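The identity can also be confirmed exhaustively; the short Python check below (an illustration added here, with 1 − x playing the role of the complement x′) tests all eight assignments.

from itertools import product

for x, y, z in product([0, 1], repeat=3):
    lhs = (x & y) | ((1 - x) & z) | (y & z)   # xy + x'z + yz
    rhs = (x & y) | ((1 - x) & z)             # xy + x'z
    assert lhs == rhs

print("consensus theorem holds for all 8 assignments")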

62.2 Consensus
The consensus or consensus term of two conjunctive terms of a disjunction is defined when one term contains the
literal a and the other the literal a′, an opposition. The consensus is the conjunction of the two terms, omitting both
a and a′, and repeated literals. For example, the consensus of x′yz and wy′z is wx′z.[2] The consensus is undefined if
there is more than one opposition.
For the conjunctive dual of the rule, the consensus y ∨ z can be derived from (x ∨ y) and (x′ ∨ z) through the resolution
inference rule. This shows that the LHS is derivable from the RHS (if A → B then A → AB; replacing A with RHS
and B with (y ∨ z)). The RHS can be derived from the LHS simply through the conjunction elimination inference
rule. Since RHS → LHS and LHS → RHS (in propositional calculus), then LHS = RHS (in Boolean algebra).

62.3 Applications
In Boolean algebra, repeated consensus is the core of one algorithm for calculating the Blake canonical form of a
formula.[2]
In digital logic, including the consensus term in a circuit can eliminate race hazards.[3]


62.4 History
The concept of consensus was introduced by Archie Blake in 1937, related to the Blake canonical form.[4] It was
rediscovered by Samson and Mills in 1954[5] and by Quine in 1955.[6] Quine coined the term "consensus". Robinson
used it for clauses in 1965 as the basis of his "resolution principle".[7][8]

62.5 References
[1] Frank Markham Brown, Boolean Reasoning: The Logic of Boolean Equations, 2nd edition 2003, p. 44

[2] Frank Markham Brown, Boolean Reasoning: The Logic of Boolean Equations, 2nd edition 2003, p. 81

[3] M. Rafiquzzaman, Fundamentals of Digital Logic and Microcontrollers, 6th edition (2014), ISBN 1118855795, p. 75

[4] Canonical expressions in Boolean algebra, Dissertation, Department of Mathematics, University of Chicago, 1937, re-
viewed in J. C. C. McKinsey, The Journal of Symbolic Logic 3:2:93 (June 1938) doi:10.2307/2267634 JSTOR 2267634

[5] Edward W. Samson, Burton E. Mills, Air Force Cambridge Research Center, Technical Report 54-21, April 1954

[6] Willard van Orman Quine, The problem of simplifying truth functions, American Mathematical Monthly 59:521-531,
1952 JSTOR 2308219

[7] John Alan Robinson, A Machine-Oriented Logic Based on the Resolution Principle, Journal of the ACM 12:1: 23–41.

[8] Donald Ervin Knuth, The Art of Computer Programming 4A: Combinatorial Algorithms, part 1, p. 539

62.6 Further reading


Roth, Charles H. Jr. and Kinney, Larry L. (2004, 2010). Fundamentals of Logic Design, 6th Ed., p. 66.
Chapter 63

Consequentia mirabilis

Consequentia mirabilis (Latin for admirable consequence), also known as Clavius's Law, is used in traditional
and classical logic to establish the truth of a proposition from the inconsistency of its negation.[1] It is thus similar
to reductio ad absurdum, but it can prove a proposition true using just its negation. It states that if a proposition is
a consequence of its negation, then it is true, for consistency. It can thus be demonstrated without using any other
principle, but that of consistency. (Barnes[2] claims in passing that the term 'consequentia mirabilis' refers only to the inference of the proposition from the inconsistency of its negation, and that the term 'Lex Clavia' (or Clavius' Law) refers to the inference of the proposition's negation from the inconsistency of the proposition.)
In formal notation: (¬A → A) → A, which is equivalent to ¬(¬A → A) ∨ A.
Consequentia mirabilis was a pattern of argument popular in 17th century Europe that first appeared in a fragment
of Aristotle's Protrepticus: If we ought to philosophise, then we ought to philosophise; and if we ought not to
philosophise, then we ought to philosophise (i.e. in order to justify this view); in any case, therefore, we ought to
philosophise.[3]
The most famous example is perhaps the Cartesian cogito ergo sum: Even if one can question the validity of the
thinking, no one can deny that they are thinking.

63.1 See also


Ex falso quodlibet
Tertium non datur

Peirces law

63.2 References
[1] Sainsbury, Richard. Paradoxes. Cambridge University Press, 2009, p. 128.

[2] Barnes, Julian. The Pre-Socratic Philosophers: The Arguments of the Philosophers. Routledge, 1982, p. 277.

[3] Kneale, William (1957). Aristotle and the Consequentia Mirabilis. The Journal of Hellenic Studies. 77 (1): 6266.
JSTOR 628635.

Chapter 64

Constructive dilemma

Constructive dilemma[1][2][3] is a name of a valid rule of inference of propositional logic. It is the inference that,
if P implies Q and R implies S and either P or R is true, then Q or S has to be true. In sum, if two conditionals are
true and at least one of their antecedents is, then at least one of their consequents must be too. Constructive dilemma
is the disjunctive version of modus ponens, whereas, destructive dilemma is the disjunctive version of modus tollens.
The rule can be stated:

P → Q, R → S, P ∨ R
∴ Q ∨ S
where the rule is that whenever instances of "P → Q", "R → S", and "P ∨ R" appear on lines of a proof, "Q ∨ S" can be placed on a subsequent line.

64.1 Formal notation


The constructive dilemma rule may be written in sequent notation:

(P → Q), (R → S), (P ∨ R) ⊢ (Q ∨ S)
where ⊢ is a metalogical symbol meaning that Q ∨ S is a syntactic consequence of P → Q, R → S, and P ∨ R in some logical system;
and expressed as a truth-functional tautology or theorem of propositional logic:

(((P → Q) ∧ (R → S)) ∧ (P ∨ R)) → (Q ∨ S)


where P , Q , R and S are propositions expressed in some formal system.
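The tautology above is easy to confirm by brute force. The short Python check below (not part of the article; implies is a helper defined just for this sketch) evaluates it under all sixteen truth-value assignments.

from itertools import product

def implies(a, b):
    return (not a) or b

# (((P -> Q) and (R -> S)) and (P or R)) -> (Q or S) holds for every assignment.
assert all(
    implies((implies(p, q) and implies(r, s)) and (p or r), q or s)
    for p, q, r, s in product([True, False], repeat=4)
)
print("Constructive dilemma is a truth-functional tautology.")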

64.2 Variable English


If P then Q. If R then S. P or R. Therefore, Q or S.

64.3 Natural language example


If I win a million dollars, I will donate it to an orphanage.
If my friend wins a million dollars, he will donate it to a wildlife fund.
I win a million dollars or my friend wins a million dollars.
Therefore, either an orphanage will get a million dollars, or a wildlife fund will get a million dollars.


The dilemma derives its name from the transfer of the disjunctive operator from the premises to the conclusion.

64.4 References
[1] Hurley, Patrick. A Concise Introduction to Logic With Ilrn Printed Access Card. Wadsworth Pub Co, 2008. Page 361

[2] Moore and Parker

[3] Copi and Cohen


Chapter 65

Contour set

In mathematics, contour sets generalize and formalize the everyday notions of

everything superior to something


everything superior or equivalent to something
everything inferior to something
everything inferior or equivalent to something.

65.1 Formal definitions


Given a relation ≽ on pairs of elements of set X

≽ ⊆ X²

and an element x of X

x ∈ X

The upper contour set of x is the set of all y that are related to x:

{ y : y ≽ x }

The lower contour set of x is the set of all y such that x is related to them:

{ y : x ≽ y }

The strict upper contour set of x is the set of all y that are related to x without x being in this way related to any of them:

{ y : (y ≽ x) ∧ ¬(x ≽ y) }

The strict lower contour set of x is the set of all y such that x is related to them without any of them being in this way related to x:

{ y : (x ≽ y) ∧ ¬(y ≽ x) }


The formal expressions of the last two may be simplified if we have defined

≻ = { (a, b) : (a ≽ b) ∧ ¬(b ≽ a) }

so that a ≻ b means that a is related to b but b is not related to a, in which case the strict upper contour set of x is

{ y : y ≻ x }

and the strict lower contour set of x is

{ y : x ≻ y }

65.1.1 Contour sets of a function


In the case of a function f() considered in terms of relation ▷, reference to the contour sets of the function is implicitly to the contour sets of the implied relation

(a ≽ b) ⇔ [f(a) ▷ f(b)]
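The definitions above translate directly into code. The following Python sketch (the names and interface are chosen for this example and are not part of the article) builds the four contour sets of an element x from a finite set X and a relation given as a predicate r(a, b) meaning "a is related to b".

def contour_sets(X, r, x):
    upper        = {y for y in X if r(y, x)}
    lower        = {y for y in X if r(x, y)}
    strict_upper = {y for y in X if r(y, x) and not r(x, y)}
    strict_lower = {y for y in X if r(x, y) and not r(y, x)}
    return upper, lower, strict_upper, strict_lower

# Anticipating the arithmetic example below: X a finite set of numbers,
# r(a, b) meaning a >= b, and x = 3.
X = {1, 2, 3, 4, 5}
print(contour_sets(X, lambda a, b: a >= b, 3))
# ({3, 4, 5}, {1, 2, 3}, {4, 5}, {1, 2})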

65.2 Examples

65.2.1 Arithmetic
Consider a real number x, and the relation ≥. Then

the upper contour set of x would be the set of numbers that were greater than or equal to x ,

the strict upper contour set of x would be the set of numbers that were greater than x ,

the lower contour set of x would be the set of numbers that were less than or equal to x , and

the strict lower contour set of x would be the set of numbers that were less than x .

Consider, more generally, the relation

(a ≽ b) ⇔ [f(a) ≥ f(b)]

Then

the upper contour set of x would be the set of all y such that f(y) ≥ f(x),

the strict upper contour set of x would be the set of all y such that f (y) > f (x) ,

the lower contour set of x would be the set of all y such that f(x) ≥ f(y), and

the strict lower contour set of x would be the set of all y such that f (x) > f (y) .

It would be technically possible to define contour sets in terms of the relation

(a ≽ b) ⇔ [f(a) ≤ f(b)]

though such definitions would tend to confound ready understanding.


In the case of a real-valued function f() (whose arguments might or might not be themselves real numbers), reference to the contour sets of the function is implicitly to the contour sets of the relation

(a ≽ b) ⇔ [f(a) ≥ f(b)]

Note that the arguments to f() might be vectors, and that the notation used might instead be

[(a₁, a₂, …) ≽ (b₁, b₂, …)] ⇔ [f(a₁, a₂, …) ≥ f(b₁, b₂, …)]

65.2.2 Economic

In economics, the set X could be interpreted as a set of goods and services or of possible outcomes, the relation ≻ as strict preference, and the relationship ≽ as weak preference. Then

the upper contour set, or better set,[1] of x would be the set of all goods, services, or outcomes that were at
least as desired as x ,

the strict upper contour set of x would be the set of all goods, services, or outcomes that were more desired
than x ,

the lower contour set, or worse set,[1] of x would be the set of all goods, services, or outcomes that were no
more desired than x , and

the strict lower contour set of x would be the set of all goods, services, or outcomes that were less desired than
x.

Such preferences might be captured by a utility function u(), in which case

the upper contour set of x would be the set of all y such that u(y) ≥ u(x),

the strict upper contour set of x would be the set of all y such that u(y) > u(x),

the lower contour set of x would be the set of all y such that u(x) ≥ u(y), and

the strict lower contour set of x would be the set of all y such that u(x) > u(y).

65.3 Complementarity
On the assumption that ≽ is a total ordering of X, the complement of the upper contour set is the strict lower contour set.

X² \ { y : y ≽ x } = { y : x ≻ y }

X² \ { y : x ≻ y } = { y : y ≽ x }
and the complement of the strict upper contour set is the lower contour set.

X² \ { y : y ≻ x } = { y : x ≽ y }

X² \ { y : x ≽ y } = { y : y ≻ x }

65.4 See also


Epigraph

Hypograph

65.5 References
[1] Robert P. Gilles (1996). Economic Exchange and Social Organization: The Edgeworthian Foundations of General Equilib-
rium Theory. Springer. p. 35.

65.6 Bibliography
Andreu Mas-Colell, Michael D. Whinston, and Jerry R. Green, Microeconomic Theory (LCC HB172.M6247
1995), p43. ISBN 0-19-507340-1 (cloth) ISBN 0-19-510268-1 (paper)
Chapter 66

Contradiction

For other uses, see Contradiction (disambiguation).


In classical logic, a contradiction consists of a logical incompatibility between two or more propositions. It occurs
when the propositions, taken together, yield two conclusions which form the logical, usually opposite inversions of
each other. Illustrating a general tendency in applied logic, Aristotle's law of noncontradiction states that One cannot
say of something that it is and that it is not in the same respect and at the same time.
By extension, outside of classical logic, one can speak of contradictions between actions when one presumes that their
motives contradict each other.

66.1 History

By creation of a paradox, Plato's Euthydemus dialogue demonstrates the need for the notion of contradiction. In the
ensuing dialogue Dionysodorus denies the existence of contradiction, all the while that Socrates is contradicting
him:

"... I in my astonishment said: What do you mean Dionysodorus? I have often heard, and have been
amazed to hear, this thesis of yours, which is maintained and employed by the disciples of Protagoras and
others before them, and which to me appears to be quite wonderful, and suicidal as well as destructive,
and I think that I am most likely to hear the truth about it from you. The dictum is that there is no such
thing as a falsehood; a man must either say what is true or say nothing. Is not that your position?"

Indeed, Dionysodorus agrees that there is no such thing as false opinion ... there is no such thing as ignorance and
demands of Socrates to Refute me. Socrates responds But how can I refute you, if, as you say, to tell a falsehood
is impossible?".[1]

66.2 In formal logic

Note: The symbol ⊥ (falsum) represents an arbitrary contradiction, with the dual tee symbol ⊤ used to denote an arbitrary tautology. Contradiction is sometimes symbolized by "Opq", and tautology by "Vpq". The turnstile symbol, ⊢, is often read as "yields" or "proves".

In classical logic, particularly in propositional and first-order logic, a proposition φ is a contradiction if and only if φ ⊢ ⊥. Since for contradictory φ it is true that ⊢ φ → ψ for all ψ (because ⊥ ⊢ ψ), one may prove any proposition from a set of axioms which contains contradictions. This is called the "principle of explosion" or ex falso quodlibet ("from falsity, whatever you like").
In a complete logic, a formula is contradictory if and only if it is unsatisfiable.


This diagram shows the contradictory relationships between categorical propositions in the square of opposition of Aristotelian logic.

66.2.1 Proof by contradiction

Main article: Proof by contradiction

For a proposition φ it is true that ⊢ φ, i.e. that φ is a tautology, i.e. that it is always true, if and only if ¬φ ⊢ ⊥, i.e. if the negation of φ is a contradiction. Therefore, a proof that ¬φ ⊢ ⊥ also proves that φ is true. The use of this fact constitutes the technique of the proof by contradiction, which mathematicians use extensively. This applies only in a logic using the excluded middle A ∨ ¬A as an axiom.

66.2.2 Symbolic representation

In mathematics, the symbol used to represent a contradiction within a proof varies. Some symbols that may be used to represent a contradiction include ↯, Opq, ⇒⇐, ⊥, and ※; in any symbolism, a contradiction may be substituted for the truth value "false", as symbolized, for instance, by "0". It is not uncommon to see Q.E.D. or some variant immediately after a contradiction symbol; this occurs in a proof by contradiction, to indicate that the original assumption was false and that its negation must therefore be true.

66.2.3 The notion of contradiction in an axiomatic system and a proof of its consistency

A consistency proof requires (i) an axiomatic system (ii) a demonstration that it is not the case that both the formula p and its negation ~p can be derived in the system. But by whatever method one goes about it, all consistency proofs would seem to necessitate the primitive notion of contradiction; moreover, it seems as if this notion would simultaneously have to be outside the formal system in the definition of tautology.
When Emil Post in his 1921 Introduction to a general theory of elementary propositions extended his proof of the
consistency of the propositional calculus (i.e. the logic) beyond that of Principia Mathematica (PM) he observed that
with respect to a generalized set of postulates (i.e. axioms) he would no longer be able to automatically invoke the
notion of contradiction such a notion might not be contained in the postulates:

The prime requisite of a set of postulates is that it be consistent. Since the ordinary notion of consistency involves that of contradiction, which again involves negation, and since this function does not appear in general as a primitive in [the generalized set of postulates] a new definition must be given.[2]

Posts solution to the problem is described in the demonstration An Example of a Successful Absolute Proof of Con-
sistency oered by Ernest Nagel and James R. Newman in their 1958 Gdel's Proof. They too observe a problem
with respect to the notion of contradiction with its usual truth values of truth and falsity. They observe that:

The property of being a tautology has been defined in notions of truth and falsity. Yet these notions obviously involve a reference to something outside the formula calculus. Therefore, the procedure mentioned in the text in effect offers an interpretation of the calculus, by supplying a model for the system. This being so, the authors have not done what they promised, namely, 'to define a property of formulas in terms of purely structural features of the formulas themselves'. [Indeed] ... proofs of consistency which are based on models, and which argue from the truth of axioms to their consistency, merely shift the problem.[3]

Given some primitive formulas such as PM's primitives S1 V S2 [inclusive OR], ~S (negation), one is forced to define the axioms in terms of these primitive notions. In a thorough manner Post demonstrates in PM, and defines (as do Nagel and Newman, see below), that the property of tautologous, as yet to be defined, is "inherited": if one begins with a set of tautologous axioms (postulates) and a deduction system that contains substitution and modus ponens then a consistent system will yield only tautologous formulas.
So what will be the definition of tautologous?
Nagel and Newman create two mutually exclusive and exhaustive classes K1 and K2 into which fall (the outcome of) the axioms when their variables (e.g. S1 and S2) are assigned from these classes. This also applies to the primitive formulas. For example: "A formula having the form S1 V S2 is placed into class K2 if both S1 and S2 are in K2; otherwise it is placed in K1", and "A formula having the form ~S is placed in K2, if S is in K1; otherwise it is placed in K1".[4]
Nagel and Newman can now define the notion of tautologous: a formula is a tautology if, and only if, it falls in the class K1 no matter in which of the two classes its elements are placed.[5] Now the property of being tautologous is described without reference to a model or an interpretation.

For example, given a formula such as ~S1 V S2 and an assignment of K1 to S1 and K2 to S2 one can evaluate the formula and place its outcome in one or the other of the classes. The assignment of K1 to S1 places ~S1 in K2, and now we can see that our assignment causes the formula to fall into class K2. Thus by definition our formula is not a tautology.
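The class-based evaluation just described can be mimicked mechanically. The following Python sketch (not from Post or from Nagel and Newman; the function names are invented here) treats K1 and K2 purely as labels and applies the two placement rules for "S1 V S2" and "~S".

def v(a, b):                       # placement rule for "S1 V S2"
    return "K2" if (a, b) == ("K2", "K2") else "K1"

def neg(a):                        # placement rule for "~S"
    return "K2" if a == "K1" else "K1"

# The example from the text: ~S1 V S2 with S1 assigned K1 and S2 assigned K2
# falls into class K2, so it is not a tautology.
print(v(neg("K1"), "K2"))          # K2

# A formula is tautologous iff it falls into K1 under every assignment;
# S1 V ~S1 passes this purely structural test.
print(all(v(s1, neg(s1)) == "K1" for s1 in ("K1", "K2")))   # True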

Post observed that, if the system were inconsistent, a deduction in it (that is, the last formula in a sequence of formulas
derived from the tautologies) could ultimately yield S itself. As an assignment to variable S can come from either
class K1 or K2 , the deduction violates the inheritance characteristic of tautology, i.e. the derivation must yield an
(evaluation of a formula) that will fall into class K1. From this, Post was able to derive the following definition of inconsistency without the use of the notion of contradiction:

Definition. A system will be said to be inconsistent if it yields the assertion of the unmodified variable p
[S in the Newman and Nagel examples].

In other words, the notion of contradiction can be dispensed with when constructing a proof of consistency; what replaces it is the notion of mutually exclusive and exhaustive classes. More interestingly, an axiomatic system need not include the notion of contradiction.

66.3 Philosophy
Adherents of the epistemological theory of coherentism typically claim that as a necessary condition of the justification of a belief, that belief must form a part of a logically non-contradictory system of beliefs. Some dialetheists, including Graham Priest, have argued that coherence may not require consistency.[6]

66.3.1 Pragmatic contradictions

A pragmatic contradiction occurs when the very statement of the argument contradicts the claims it purports. An
inconsistency arises, in this case, because the act of utterance, rather than the content of what is said, undermines its
conclusion.[7]

66.3.2 Dialectical materialism

In dialectical materialism: Contradiction, as derived from Hegelianism, usually refers to an opposition inherently existing within one realm, one unified force or object. This contradiction, as opposed to metaphysical thinking, is not an objectively impossible thing, because these contradicting forces exist in objective reality, not cancelling each other out, but actually defining each other's existence. According to Marxist theory, such a contradiction can be found, for example, in the fact that:

(a) enormous wealth and productive powers coexist alongside:


(b) extreme poverty and misery;
(c) the existence of (a) being contrary to the existence of (b).

Hegelian and Marxist theory stipulates that the dialectic nature of history will lead to the sublation, or synthesis, of its
contradictions. Marx therefore postulated that history would logically make capitalism evolve into a socialist society
where the means of production would equally serve the exploited and suering class of society, thus resolving the
prior contradiction between (a) and (b).[8]
Mao Zedongs philosophical essay On Contradiction (1937) furthered Marx and Lenins thesis and suggested that all
existence is the result of contradiction.[9]

66.4 Outside formal logic


Colloquial usage can label actions or statements as contradicting each other when due (or perceived as due) to
presuppositions which are contradictory in the logical sense.
Proof by contradiction is used in mathematics to construct proofs.

Contradiction on Graham's Hierarchy of Disagreement

66.5 See also


Argument Clinic, a Monty Python sketch which shows two disputants only repeatedly using contradictions in
their argument

Auto-antonym

Contrary (logic)

Double standard

Doublethink

Irony

Oxymoron

Paraconsistent logic

Paradox

Tautology

TRIZ

66.6 Footnotes
[1] Dialog Euthydemus from The Dialogs of Plato translated by Benjamin Jowett appearing in: BK 7 Plato: Robert Maynard Hutchins, editor in chief, 1952, Great Books of the Western World, Encyclopædia Britannica, Inc., Chicago.

[2] Post 1921 Introduction to a general theory of elementary propositions in van Heijenoort 1967:272.

[3] boldface italics added, Nagel and Newman:109-110.

[4] Nagel and Newman:110-111

[5] Nagel and Newman:111

[6] In Contradiction: A Study of the Transconsistent By Graham Priest

[7] Stoljar, Daniel (2006). Ignorance and Imagination. Oxford University Press - U.S. p. 87. ISBN 0-19-530658-9.

[8] Sørensen, M. K. (2006). "CAPITAL AND LABOUR: CAN THE CONFLICT BE SOLVED?". Retrieved 28 May 2017.

[9] ON CONTRADICTION

66.7 References
Józef Maria Bocheński 1960 Précis of Mathematical Logic, translated from the French and German editions by Otto Bird, D. Reidel, Dordrecht, South Holland.
Jean van Heijenoort 1967 From Frege to Gödel: A Source Book in Mathematical Logic 1879–1931, Harvard University Press, Cambridge, MA, ISBN 0-674-32449-8 (pbk.)
Ernest Nagel and James R. Newman 1958 Gödel's Proof, New York University Press, Card Catalog Number: 58-5610.

66.8 External links


Hazewinkel, Michiel, ed. (2001) [1994], Contradiction (inconsistency)", Encyclopedia of Mathematics, Springer
Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

Hazewinkel, Michiel, ed. (2001) [1994], Contradiction, law of, Encyclopedia of Mathematics, Springer
Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

Horn, Laurence R. Contradiction. Stanford Encyclopedia of Philosophy.


Chapter 67

Contraposition

For contraposition in the eld of traditional logic, see Contraposition (traditional logic). For contraposition in the
eld of symbolic logic, see Transposition (logic).

In logic, contraposition is an inference that says that a conditional statement is logically equivalent to its contrapositive. The contrapositive of the statement has its antecedent and consequent inverted and flipped: the contrapositive of P → Q is thus ¬Q → ¬P. For instance, the proposition "All bats are mammals" can be restated as the conditional "If something is a bat, then it is a mammal". Now, the law says that statement is identical to the contrapositive "If something is not a mammal, then it is not a bat."
The contrapositive can be compared with three other relationships between conditional statements:

Inversion (the inverse), ¬P → ¬Q


"If something is not a bat, then it is not a mammal. Unlike the contrapositive, the inverses truth value is not
at all dependent on whether or not the original proposition was true, as evidenced here. The inverse here is
clearly not true.
Conversion (the converse), Q → P
"If something is a mammal, then it is a bat. The converse is actually the contrapositive of the inverse and so
always has the same truth value as the inverse, which is not necessarily the same as that of the original propo-
sition.
Negation, ¬(P → Q)
"There exists a bat that is not a mammal. " If the negation is true, the original proposition (and by extension
the contrapositive) is false. Here, of course, the negation is false.

Note that if P → Q is true and we are given that Q is false, ¬Q, it can logically be concluded that P must be false, ¬P. This is often called the law of contrapositive, or the modus tollens rule of inference.

67.1 Intuitive explanation


Consider the Euler diagram shown. According to this diagram, if something is in A, it must be in B as well. So we can interpret "all of A is in B" as:

A → B

It is also clear that anything that is not within B (the blue region) cannot be within A, either. This statement,

¬B → ¬A

is the contrapositive. Therefore, we can say that


(A → B) ↔ (¬B → ¬A)

Practically speaking, this may make life much easier when trying to prove something. For example, if we want to prove that every girl in the United States (A) has brown hair (B), we can try to directly prove A → B by checking all girls in the United States to see if they all have brown hair. Alternatively, we can try to prove ¬B → ¬A by checking all girls without brown hair to see if they are all outside the US. This means that if we find at least one girl without brown hair within the US, we will have disproved ¬B → ¬A, and equivalently A → B.
To conclude, for any statement where A implies B, then not B always implies not A. Proving or disproving either one
of these statements automatically proves or disproves the other. They are fully equivalent.

67.2 Formal definition


A proposition Q is implicated by a proposition P when the following relationship holds:

(P → Q)

This states that, if P, then Q", or, if Socrates is a man, then Socrates is human. In a conditional such as this, P is
the antecedent, and Q is the consequent. One statement is the contrapositive of the other only when its antecedent
is the negated consequent of the other, and vice versa. The contrapositive of the example is

(¬Q → ¬P)
That is, "If not-Q, then not-P", or, more clearly, "If Q is not the case, then P is not the case." Using our example, this is rendered "If Socrates is not human, then Socrates is not a man." This statement is said to be contraposed to the original and is logically equivalent to it. Due to their logical equivalence, stating one effectively states the other; when one is true, the other is also true. Likewise with falsity.
Strictly speaking, a contraposition can only exist in two simple conditionals. However, a contraposition may also exist in two complex conditionals, if they are similar. Thus, ∀x(Px → Qx), or "All Ps are Qs", is contraposed to ∀x(¬Qx → ¬Px), or "All non-Qs are non-Ps".

67.3 Simple proof by definition of a conditional


In first-order logic, the conditional is defined as:

A → B ≡ ¬A ∨ B
We have:

¬A ∨ B ≡ B ∨ ¬A
≡ ¬B → ¬A

67.4 Simple proof by contradiction


Let:

(A → B) ∧ ¬B
It is given that, if A is true, then B is true, and it is also given that B is not true. We can then show that A must not
be true by contradiction. For, if A were true, then B would have to also be true (given). However, it is given that B is
not true, so we have a contradiction. Therefore, A is not true (assuming that we are dealing with concrete statements
that are either true or not true):

(A → B) → (¬B → ¬A)
We can apply the same process the other way round:

(¬B → ¬A) ∧ A
We also know that B is either true or not true. If B is not true, then A is also not true. However, it is given that A is
true; so, the assumption that B is not true leads to contradiction and must be false. Therefore, B must be true:

(¬B → ¬A) → (A → B)
Combining the two proved statements makes them logically equivalent:

(A → B) ↔ (¬B → ¬A)

67.5 More rigorous proof of the equivalence of contrapositives


Logical equivalence between two propositions means that they are true together or false together. To prove that
contrapositives are logically equivalent, we need to understand when material implication is true or false.

P → Q

This is only false when P is true and Q is false. Therefore, we can reduce this proposition to the statement "False when P and not-Q" (i.e. "True when it is not the case that P and not-Q"):

¬(P ∧ ¬Q)

The elements of a conjunction can be reversed with no effect (by commutativity):

¬(¬Q ∧ P)

We define R as equal to ¬Q, and S as equal to ¬P (from this, ¬S is equal to ¬¬P, which is equal to just P):

¬(R ∧ ¬S)

This reads "It is not the case that (R is true and S is false)", which is the definition of a material conditional. We can then make this substitution:

R → S

When we swap our definitions of R and S, we arrive at the following:

¬Q → ¬P
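The equivalence argued above can also be confirmed by brute force. The short Python check below (not from the article; implies is a helper defined only for this sketch) runs through all four assignments, and additionally shows that the inverse and the converse always agree with each other, as noted in the comparison section that follows.

from itertools import product

def implies(a, b):
    return (not a) or b

for p, q in product([True, False], repeat=2):
    assert implies(p, q) == implies(not q, not p)      # statement == contrapositive
    assert implies(not p, not q) == implies(q, p)      # inverse == converse
print("P -> Q and ~Q -> ~P agree on all four assignments.")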

67.6 Comparisons

67.6.1 Examples
Take the statement "All red objects have color." This can be equivalently expressed as "If an object is red, then it has
color."

The contrapositive is "If an object does not have color, then it is not red." This follows logically from our initial
statement and, like it, it is evidently true.
The inverse is "If an object is not red, then it does not have color." An object which is blue is not red, and still
has color. Therefore, in this case the inverse is false.
The converse is "If an object has color, then it is red." Objects can have other colors, of course, so, the converse
of our statement is false.
The negation is "There exists a red object that does not have color." This statement is false because the initial
statement which it negates is true.

In other words, the contrapositive is logically equivalent to a given conditional statement, though not sufficient for a biconditional.
Similarly, take the statement "All quadrilaterals have four sides," or equivalently expressed "If a polygon is a quadri-
lateral, then it has four sides."

The contrapositive is "If a polygon does not have four sides, then it is not a quadrilateral." This follows logically,
and as a rule, contrapositives share the truth value of their conditional.

The inverse is "If a polygon is not a quadrilateral, then it does not have four sides." In this case, unlike the last
example, the inverse of the argument is true.

The converse is "If a polygon has four sides, then it is a quadrilateral." Again, in this case, unlike the last
example, the converse of the argument is true.

The negation is "There is at least one quadrilateral that does not have four sides." This statement is clearly
false.

Since the statement and the converse are both true, it is called a biconditional, and can be expressed as "A polygon is a quadrilateral if, and only if, it has four sides." (The phrase if and only if is sometimes abbreviated iff.) That is, having four sides is both necessary to be a quadrilateral, and alone sufficient to deem it a quadrilateral.

67.6.2 Truth
If a statement is true, then its contrapositive is true (and vice versa).

If a statement is false, then its contrapositive is false (and vice versa).

If a statements inverse is true, then its converse is true (and vice versa).

If a statements inverse is false, then its converse is false (and vice versa).

If a statements negation is false, then the statement is true (and vice versa).

If a statement (or its contrapositive) and the inverse (or the converse) are both true or both false, it is known
as a logical biconditional.

67.7 Application
Because the contrapositive of a statement always has the same truth value (truth or falsity) as the statement itself,
it can be a powerful tool for proving mathematical theorems. A proof by contraposition (contrapositive) is a direct
proof of the contrapositive of a statement.[1] However, indirect methods such as proof by contradiction can also be
used with contraposition, as, for example, in the proof of the
irrationality of the square root of 2. By the denition
of a rational number, the statement can be made that "If 2 is rational, then it can be expressed as an irreducible
fraction".
This statement is true because it is a restatement of a denition. The contrapositive of this statement is
"If 2 cannot be expressed as an irreducible fraction, then it is not rational". This contrapositive, like the original
statement, is also true.Therefore, if it can be proven that 2 cannot be expressed as an irreducible fraction, then it
must be the case that 2 is not a rational number. The latter can be proved by contradiction.
The previous example employed the contrapositive of a definition to prove a theorem. One can also prove a theorem by proving the contrapositive of the theorem's statement. To prove that if a positive integer N is a non-square number, its square root is irrational, we can equivalently prove its contrapositive, that if a positive integer N has a square root that is rational, then N is a square number. This can be shown by setting N equal to the rational expression a/b with a and b being positive integers with no common prime factor, and squaring to obtain N = a²/b², and noting that since N is a positive integer b = 1, so that N = a², a square number.

67.8 Correspondence to other mathematical frameworks

67.8.1 Probability calculus


Contraposition represents an instance of Bayes' theorem which in a specific form can be expressed as:

Pr(¬P | ¬Q) = Pr(¬Q | ¬P) a(¬P) / [ Pr(¬Q | ¬P) a(¬P) + Pr(¬Q | P) a(P) ].

In the equation above the conditional probability Pr(Q | P) generalizes the logical statement P → Q, i.e. in addition to assigning TRUE or FALSE we can also assign any probability to the statement. The term a(P) denotes the base rate (aka. the prior probability) of P. Assume that Pr(Q | P) = 1 is equivalent to P → Q being TRUE, and that Pr(Q | P) = 0 is equivalent to P → Q being FALSE. It is then easy to see that Pr(¬P | ¬Q) = 1 when Pr(Q | P) = 1, i.e. when P → Q is TRUE. This is because Pr(¬Q | P) = 1 − Pr(Q | P) = 0, so that the fraction on the right-hand side of the equation above is equal to 1, and hence Pr(¬P | ¬Q) = 1, which is equivalent to ¬Q → ¬P being TRUE. Hence, Bayes' theorem represents a generalization of contraposition.[2]
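A quick numerical sketch (the probability values below are arbitrary and chosen only for illustration, not taken from the article) shows the behaviour described above: as soon as Pr(Q | P) = 1, the formula forces Pr(¬P | ¬Q) = 1.

def pr_notp_given_notq(pr_q_given_p, pr_q_given_notp, a_p):
    # Direct transcription of the formula above, with a(P) the base rate of P.
    a_notp = 1 - a_p
    pr_notq_given_notp = 1 - pr_q_given_notp
    pr_notq_given_p = 1 - pr_q_given_p
    numerator = pr_notq_given_notp * a_notp
    denominator = numerator + pr_notq_given_p * a_p
    return numerator / denominator

# With Pr(Q | P) = 1 (i.e. P -> Q is TRUE), the result is 1 regardless of the
# other (arbitrarily chosen) parameters.
print(pr_notp_given_notq(pr_q_given_p=1.0, pr_q_given_notp=0.4, a_p=0.3))  # 1.0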

67.8.2 Subjective logic


Contraposition represents an instance of the subjective Bayes' theorem in subjective logic expressed as:

(ω^A_(¬P|¬Q), ω^A_(¬P|Q)) = (ω^A_(Q|P), ω^A_(Q|¬P)) ϕ̃ a_P,

where (ω^A_(Q|P), ω^A_(Q|¬P)) denotes a pair of binomial conditional opinions, as expressed by source A. The parameter a_P denotes the base rate (aka. the prior probability) of P. The pair of inverted conditional opinions is denoted (ω^A_(¬P|¬Q), ω^A_(¬P|Q)). The conditional opinion ω^A_(Q|P) generalizes the logical statement P → Q, i.e. in addition to assigning TRUE or FALSE the source A can assign any subjective opinion to the statement. The case where ω^A_(Q|P) is an absolute TRUE opinion is equivalent to source A saying that P → Q is TRUE, and the case where ω^A_(Q|P) is an absolute FALSE opinion is equivalent to source A saying that P → Q is FALSE. In the case when the conditional opinion ω^A_(Q|P) is absolute TRUE the subjective Bayes' theorem operator ϕ̃ of subjective logic produces an absolute FALSE conditional opinion ω^A_(P|¬Q) and thereby an absolute TRUE conditional opinion ω^A_(¬P|¬Q), which is equivalent to ¬Q → ¬P being TRUE. Hence, the subjective Bayes' theorem represents a generalization of both contraposition and Bayes' theorem.[3]

67.9 See also


Reductio ad absurdum

67.10 References
[1] Smith, Douglas; Eggen, Maurice; St. Andre, Richard (2001), A Transition to Advanced Mathematics (5th ed.), Brooks/Cole,
p. 37, ISBN 0-534-38214-2

[2] Audun Jøsang 2016:2

[3] Audun Jøsang 2016:92

67.11 Sources
Audun Jøsang, 2016, Subjective Logic: A Formalism for Reasoning Under Uncertainty, Springer, Cham, ISBN 978-3-319-42337-1
Chapter 68

Contraposition (traditional logic)

In traditional logic, contraposition is a form of immediate inference in which from a given proposition another is inferred having for its subject the contradictory of the original predicate, and in some cases involving a change of quality (affirmation or negation).[1] For its symbolic expression in modern logic, see the rule of transposition. Contraposition also has distinctive philosophical applications, distinct from the other traditional inference processes of conversion and obversion, where equivocation varies with different proposition types.

68.1 Traditional logic


In traditional logic the process of contraposition is a schema composed of several steps of inference involving
categorical propositions and classes.[2] A categorical proposition contains a subject and predicate where the exis-
tential impact of the copula implies the proposition as referring to a class with at least one member, in contrast to
the conditional form of hypothetical or materially implicative propositions, which are compounds of other proposi-
tions, e.g. If P, then Q, where P and Q are both propositions, and their existential impact is dependent upon further
propositions where in quantification existence is instantiated (existential instantiation).
Conversion by contraposition is the simultaneous interchange and negation of the subject and predicate, and is valid only for the type A and type O propositions of Aristotelian logic, with considerations for the validity of an E type proposition with limitations and changes in quantity. This is considered full contraposition. Since in the process of contraposition the obverse can be obtained in all four types of traditional propositions, yielding propositions with the contradictory of the original predicate, contraposition is first obtained by converting the obvert of the original proposition. Thus, partial contraposition can be obtained conditionally in an E type proposition with a change
in quantity. Because nothing is said in the definition of contraposition with regard to the predicate of the inferred proposition, it can be either the original subject, or its contradictory, resulting in two contrapositives which are the obverts of one another in the A, O, and E type propositions.[3]
By example: from an original, 'A' type categorical proposition,

All residents are voters,

which presupposes that all classes have members and the existential import presumed in the form of categorical propositions, one can derive first by obversion the 'E' type proposition,

No residents are non-voters.

The contrapositive of the original proposition is then derived by conversion to another 'E' type proposition,

No non-voters are residents.

The process is completed by further obversion resulting in the 'A' type proposition that is the obverted contrapositive
of the original proposition,


All non-voters are non-residents.

The schema of contraposition:[4]


Notice that contraposition is a valid form of immediate inference only when applied to A and O propositions. It
is not valid for I propositions, where the obverse is an O proposition which has no converse. The contraposition
of the E proposition is valid only with limitations (per accidens). This is because the obverse of the E proposition
is an A proposition which cannot be validly converted except by limitation, that is, contraposition plus a change in
the quantity of the proposition from universal to particular.
Also, notice that contraposition is a method of inference which may require the use of other rules of inference.
The contrapositive is the product of the method of contraposition, with dierent outcomes depending upon whether
the contraposition is full, or partial. The successive applications of conversion and obversion within the process of
contraposition may be given by a variety of names.
The process of the logical equivalence of a statement and its contrapositive as defined in traditional class logic is not one of the axioms of propositional logic. In traditional logic there is more than one contrapositive inferred from each original statement. In regard to the A proposition this is circumvented in the symbolism of modern logic by the rule of transposition, or the law of contraposition. In its technical usage within the field of philosophic logic, the term contraposition may be limited by logicians (e.g. Irving Copi, Susan Stebbing) to traditional logic and categorical propositions. In this sense the use of the term contraposition is usually referred to by transposition when applied to hypothetical propositions or material implications.

68.2 See also

68.3 Notes
[1] Brody, Bobuch A. Glossary of Logical Terms. Encyclopedia of Philosophy. Vol. 5-6, p. 61. Macmillan, 1973. Also,
Stebbing, L. Susan. A Modern Introduction to Logic. Seventh edition, p.65-66. Harper, 1961, and Irving Copis Introduction
to Logic, p. 141, Macmillan, 1953. All sources give virtually identical denitions.

[2] Irving Copis Introduction to Logic, pp. 123-157, Macmillan, 1953.

[3] Brody, p. 61. Macmillan, 1973. Also, Stebbing, p.65-66, Harper, 1961, and Copi, p. 141-143, Macmillan, 1953.

[4] Stebbing, L. Susan. A Modern Introduction to Logic. Seventh edition, p. 66. Harper, 1961.

68.4 References
Blumberg, Albert E. Logic, Modern. Encyclopedia of Philosophy, Vol.5, Macmillan, 1973.
Brody, Bobuch A. Glossary of Logical Terms. Encyclopedia of Philosophy. Vol. 5-6, p. 61. Macmillan,
1973.
Copi, Irving. Introduction to Logic. MacMillan, 1953.

Copi, Irving. Symbolic Logic. MacMillan, 1979, fth edition.


Prior, A.N. Logic, Traditional. Encyclopedia of Philosophy, Vol.5, Macmillan, 1973.

Stebbing, Susan. A Modern Introduction to Logic. Cromwell Company, 1931.


Chapter 69

Converse implication

Converse implication is the converse of implication. That is to say, for any two propositions P and Q, if Q implies P, then P is the converse implication of Q.
It may take the following forms:

p ⊂ q, Bpq, or p ← q

69.1 Definition

69.1.1 Truth table

The truth table of A ← B

69.1.2 Venn diagram


The Venn diagram of If B then A (the white area shows where the statement is false)

69.2 Properties
truth-preserving: The interpretation under which all variables are assigned a truth value of 'true' produces a truth
value of 'true' as a result of converse implication.

69.3 Symbol

69.4 Natural language


Not q without p.
p if q.


69.5 Boolean Algebra


(A + B')

69.6 See also


Logical connective
Material conditional
Chapter 70

Converse nonimplication

In logic, converse nonimplication[1] is a logical connective which is the negation of the converse of implication.

70.1 Definition
p ↚ q, which is the same as ¬(p ← q)

70.1.1 Truth table


The truth table of p ↚ q.[2]

70.1.2 Venn diagram


The Venn Diagram of It is not the case that B implies A (the red area is true).
Also related to the relative complement (set theory), where the relative complement of A in B is denoted B ∖ A.

70.2 Properties
falsehood-preserving: The interpretation under which all variables are assigned a truth value of 'false' produces a
truth value of 'false' as a result of converse nonimplication

70.3 Symbol
Alternatives for p ↚ q are

p ~← q : combines converse implication's left arrow (←) with negation's tilde (~).
Mpq : uses a prefixed capital letter.
p ↚ q : combines converse implication's left arrow (←) denied by means of a stroke (/).


70.4 Natural language

70.4.1 Grammatical

Classic passive aggressive: yeah, no

70.4.2 Rhetorical

not A but B

70.4.3 Colloquial

70.5 Boolean algebra


Converse nonimplication in a general Boolean algebra is defined as q ↚ p = q′ ∧ p.
Example of a 2-element Boolean algebra: the 2 elements {0,1} with 0 as zero and 1 as unity element, operator ′ as complement operator, ∨ as join operator and ∧ as meet operator, build the Boolean algebra of propositional logic.
Example of a 4-element Boolean algebra: the 4 divisors {1,2,3,6} of 6 with 1 as zero and 6 as unity element, operator ^c (codivisor of 6) as complement operator, ∨ (least common multiple) as join operator and ∧ (greatest common divisor) as meet operator, build a Boolean algebra.
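The 4-element example can be checked mechanically. The Python sketch below (written for this text, not taken from the article) realizes q ↚ p = q′ ∧ p on the divisors of 6, using 6/x as the complement and the greatest common divisor as meet (the join, by least common multiple, is not needed here).

from math import gcd

def comp(x):                 # complement: the codivisor of 6
    return 6 // x

def conv_nonimpl(q, p):      # q ?- p  =  q' AND p, with AND realized by gcd
    return gcd(comp(q), p)

divisors = [1, 2, 3, 6]
# 1 is the zero element of this algebra: it is left-neutral ...
print([conv_nonimpl(1, p) for p in divisors])   # [1, 2, 3, 6]
# ... and right-absorbing, matching the properties listed in the next subsection.
print([conv_nonimpl(p, 1) for p in divisors])   # [1, 1, 1, 1]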

70.5.1 Properties

Non-associative

r ↚ (q ↚ p) = (r ↚ q) ↚ p iff rp = 0 (in a two-element Boolean algebra the latter condition is reduced to r = 0 or p = 0). Hence in a nontrivial Boolean algebra converse nonimplication is nonassociative.

(r ↚ q) ↚ p = (r ↚ q)′ p   (by definition)
= (r′ q)′ p   (by definition)
= (r + q′)p   (De Morgan's laws)
= (r + r′q′)p   (absorption law)
= rp + r′q′p
= rp + r′(q′ p)
= rp + r′(q ↚ p)   (by definition)

Clearly, it is associative iff rp = 0.

Non-commutative

q ↚ p = p ↚ q iff q = p. Hence converse nonimplication is noncommutative.

Neutral and absorbing elements

0 is a left neutral element (0 ↚ p = p) and a right absorbing element (p ↚ 0 = 0).

1 ↚ p = 0, p ↚ 1 = p′, and p ↚ p = 0.

Implication q → p is the dual of converse nonimplication q ↚ p.



70.6 Computer science


An example for converse nonimplication in computer science can be found when performing a right outer join on a
set of tables from a database, if records not matching the join-condition from the left table are being excluded.[3]

70.7 References
[1] Lehtonen, Eero, and Poikonen, J.H.

[2] Knuth 2011, p. 49

[3] http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html

Knuth, Donald E. (2011). The Art of Computer Programming, Volume 4A: Combinatorial Algorithms, Part 1
(1st ed.). Addison-Wesley Professional. ISBN 0-201-03804-8.
Chapter 71

Correlation immunity

In mathematics, the correlation immunity of a Boolean function is a measure of the degree to which its outputs are uncorrelated with some subset of its inputs. Specifically, a Boolean function is said to be correlation-immune of order m if every subset of m or fewer variables in x₁, x₂, …, xₙ is statistically independent of the value of f(x₁, x₂, …, xₙ).

71.1 Definition
A function f : F₂ⁿ → F₂ is k-th order correlation immune if for any independent n binary random variables X₀, …, Xₙ₋₁, the random variable Z = f(X₀, …, Xₙ₋₁) is independent from any random vector (Xi1, …, Xik) with 0 ≤ i1 < … < ik < n.
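A brute-force check of this definition is straightforward for small n. The Python sketch below (the function name and interface are invented for this example) tests whether the output distribution is unchanged after fixing any k ≤ m of the inputs, assuming the inputs are independent and uniform.

from itertools import combinations, product

def is_correlation_immune(f, n, m):
    inputs = list(product([0, 1], repeat=n))
    p_one = sum(f(*x) for x in inputs) / len(inputs)
    for k in range(1, m + 1):
        for subset in combinations(range(n), k):
            for fixed in product([0, 1], repeat=k):
                rows = [x for x in inputs
                        if all(x[i] == v for i, v in zip(subset, fixed))]
                # The conditional output distribution must equal the overall one.
                if sum(f(*x) for x in rows) / len(rows) != p_one:
                    return False
    return True

# XOR of all inputs is correlation-immune of order n - 1 ...
print(is_correlation_immune(lambda a, b, c: a ^ b ^ c, 3, 2))   # True
# ... while AND is not even first-order correlation-immune.
print(is_correlation_immune(lambda a, b, c: a & b & c, 3, 1))   # False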

71.2 Results in cryptography


When used in a stream cipher as a combining function for linear feedback shift registers, a Boolean function with low-
order correlation-immunity is more susceptible to a correlation attack than a function with correlation immunity of
high order.
Siegenthaler showed that the correlation immunity m of a Boolean function of algebraic degree d of n variables satisfies m + d ≤ n; for a given set of input variables, this means that a high algebraic degree will restrict the maximum possible correlation immunity. Furthermore, if the function is balanced then m + d ≤ n − 1.[1]

71.3 References
[1] T. Siegenthaler (September 1984). Correlation-Immunity of Nonlinear Combining Functions for Cryptographic Appli-
cations. IEEE Transactions on Information Theory. 30 (5): 776780. doi:10.1109/TIT.1984.1056949.

71.3.1 Further reading


1. Cusick, Thomas W. & Stanica, Pantelimon (2009). Cryptographic Boolean functions and applications. Aca-
demic Press. ISBN 9780123748904.

Chapter 72

Counting quantication

A counting quantifier is a mathematical term for a quantifier of the form "there exists at least k elements that satisfy property X". In first-order logic with equality, counting quantifiers can be defined in terms of ordinary quantifiers, so in this context they are a notational shorthand. However, they are interesting in the context of logics such as two-variable logic with counting that restrict the number of variables in formulas. Also, generalized counting quantifiers that say "there exists infinitely many" are not expressible using a finite number of formulas in first-order logic.

72.1 See also


Uniqueness quantication

72.2 References
Erich Grädel, Martin Otto, and Eric Rosen. "Two-Variable Logic with Counting is Decidable". In Proceedings of 12th IEEE Symposium on Logic in Computer Science LICS '97, Warsaw, 1997. Postscript file. OCLC 282402933

Chapter 73

Cut rule

In mathematical logic, the cut rule is an inference rule of sequent calculus. It is a generalisation of the classical modus ponens inference rule. Its meaning is that, if a formula A appears as a conclusion in one proof and as a hypothesis in another, then another proof in which the formula A does not appear can be deduced. In the particular case of modus ponens, for example, occurrences of man are eliminated from "Every man is mortal", "Socrates is a man" to deduce "Socrates is mortal".

73.1 Formal notation


Formal notation in sequent calculus notation:

Γ ⊢ A, Δ    Γ′, A ⊢ Δ′
――――――――――――――――――――― (cut)
Γ, Γ′ ⊢ Δ, Δ′

73.2 Elimination
The cut rule is the subject of an important theorem, the cut elimination theorem. It states that any judgement that
possesses a proof in the sequent calculus that makes use of the cut rule also possesses a cut-free proof, that is, a proof
that does not make use of the cut rule.

Chapter 74

DavisPutnam algorithm

The Davis–Putnam algorithm was developed by Martin Davis and Hilary Putnam for checking the validity of a first-order logic formula using a resolution-based decision procedure for propositional logic. Since the set of valid first-order formulas is recursively enumerable but not recursive, there exists no general algorithm to solve this problem. Therefore, the Davis–Putnam algorithm only terminates on valid formulas. Today, the term "Davis–Putnam algorithm" is often used synonymously with the resolution-based propositional decision procedure that is actually only one of the steps of the original algorithm.

74.1 Overview
The procedure is based on Herbrand's theorem, which implies that an unsatisfiable formula has an unsatisfiable ground instance, and on the fact that a formula is valid if and only if its negation is unsatisfiable. Taken together, these facts imply that to prove the validity of φ it is enough to prove that a ground instance of ¬φ is unsatisfiable. If φ is not valid, then the search for an unsatisfiable ground instance will not terminate.
The procedure roughly consists of these three parts:

put the formula in prenex form and eliminate quantifiers

generate all propositional ground instances, one by one

check if each instance is satisfiable

The last part is probably the most innovative one, and works as follows:

for every variable in the formula
    for every clause c containing the variable and every clause n containing the negation of the variable
        resolve c and n and add the resolvent to the formula
    remove all original clauses containing the variable or its negation

At each step, the intermediate formula generated is equisatisfiable, but possibly not equivalent, to the original formula. The resolution step leads to a worst-case exponential blow-up in the size of the formula.
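The variable-elimination step just described can be written compactly in Python. The sketch below is not the authors' original formulation; it simply represents clauses as frozensets of integer literals, with −v standing for the negation of variable v, and drops tautological resolvents.

def eliminate(clauses, var):
    pos = [c for c in clauses if var in c]
    neg = [c for c in clauses if -var in c]
    rest = [c for c in clauses if var not in c and -var not in c]
    resolvents = []
    for cp in pos:
        for cn in neg:
            r = (cp - {var}) | (cn - {-var})
            if not any(-lit in r for lit in r):   # skip tautological resolvents
                resolvents.append(frozenset(r))
    return rest + resolvents

# Eliminating variable 1 from {1,2}, {-1,3}, {-2,-3} keeps {-2,-3} and adds the
# resolvent {2,3}; deriving an empty clause at any point would mean "unsatisfiable".
clauses = [frozenset({1, 2}), frozenset({-1, 3}), frozenset({-2, -3})]
print(eliminate(clauses, 1))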
The Davis–Putnam–Logemann–Loveland algorithm is a 1962 refinement of the propositional satisfiability step of the Davis–Putnam procedure which requires only a linear amount of memory in the worst case. It still forms the basis for today's (as of 2015) most efficient complete SAT solvers.

74.2 See also


Herbrandization


74.3 References
Davis, Martin; Putnam, Hilary (1960). "A Computing Procedure for Quantification Theory". Journal of the ACM. 7 (3): 201–215. doi:10.1145/321033.321034.
Davis, Martin; Logemann, George; Loveland, Donald (1962). "A Machine Program for Theorem Proving". Communications of the ACM. 5 (7): 394–397. doi:10.1145/368273.368557.

R. Dechter; I. Rish. "Directional Resolution: The Davis–Putnam Procedure, Revisited". In J. Doyle and E. Sandewall and P. Torasso. Principles of Knowledge Representation and Reasoning: Proc. of the Fourth International Conference (KR'94). Morgan Kaufmann. pp. 134–145.
John Harrison (2009). Handbook of practical logic and automated reasoning. Cambridge University Press. pp.
7990. ISBN 978-0-521-89957-4.
Chapter 75

De Morgan's laws

In propositional logic and Boolean algebra, De Morgan's laws[1][2][3] are a pair of transformation rules that are both valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules allow the expression of conjunctions and disjunctions purely in terms of each other via negation.
The rules can be expressed in English as:

the negation of a disjunction is the conjunction of the negations; and


the negation of a conjunction is the disjunction of the negations;

or

the complement of the union of two sets is the same as the intersection of their complements; and
the complement of the intersection of two sets is the same as the union of their complements.

In set theory and Boolean algebra, these are written formally as

(A ∪ B)ᶜ = Aᶜ ∩ Bᶜ,
(A ∩ B)ᶜ = Aᶜ ∪ Bᶜ,

where

A and B are sets,

Aᶜ is the complement of A,

∩ is the intersection, and

∪ is the union.

In formal language, the rules are written as

¬(P ∨ Q) ⟺ (¬P) ∧ (¬Q),

and

¬(P ∧ Q) ⟺ (¬P) ∨ (¬Q)

where


De Morgans laws represented with Venn diagrams. In each case, the resultant set is the set of all points in any shade of blue.

P and Q are propositions,


¬ is the negation logic operator (NOT),
∧ is the conjunction logic operator (AND),
∨ is the disjunction logic operator (OR),
⟺ is a metalogical symbol meaning "can be replaced in a logical proof with".

Applications of the rules include simplification of logical expressions in computer programs and digital circuit designs. De Morgan's laws are an example of a more general concept of mathematical duality.

75.1 Formal notation


The negation of conjunction rule may be written in sequent notation:

¬(P ∧ Q) ⊢ (¬P ∨ ¬Q).

The negation of disjunction rule may be written as:

¬(P ∨ Q) ⊢ (¬P ∧ ¬Q).

In rule form: negation of conjunction

¬(P ∧ Q)
∴ ¬P ∨ ¬Q
and negation of disjunction

¬(P ∨ Q)
∴ ¬P ∧ ¬Q
and expressed as a truth-functional tautology or theorem of propositional logic:

¬(P ∧ Q) ↔ (¬P ∨ ¬Q),
¬(P ∨ Q) ↔ (¬P ∧ ¬Q),

where P and Q are propositions expressed in some formal system.

75.1.1 Substitution form


De Morgans laws are normally shown in the compact form above, with negation of the output on the left and negation
of the inputs on the right. A clearer form for substitution can be stated as:

P ∧ Q ⟺ ¬(¬P ∨ ¬Q),
P ∨ Q ⟺ ¬(¬P ∧ ¬Q).

This emphasizes the need to invert both the inputs and the output, as well as change the operator, when doing a
substitution.

75.1.2 Set theory and Boolean algebra


In set theory and Boolean algebra, it is often stated as "union and intersection interchange under complementation",[4] which can be formally expressed as:

(A ∪ B)ᶜ = Aᶜ ∩ Bᶜ,
(A ∩ B)ᶜ = Aᶜ ∪ Bᶜ,

where:

Aᶜ is the complement of A,

∩ is the intersection operator (AND),

∪ is the union operator (OR).

The generalized form is:

( ⋂i∈I Ai )ᶜ = ⋃i∈I Aiᶜ,

( ⋃i∈I Ai )ᶜ = ⋂i∈I Aiᶜ,

where I is some, possibly uncountable, indexing set.


In set notation, De Morgan's laws can be remembered using the mnemonic "break the line, change the sign".[5]
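For a quick sanity check of the two set-theoretic identities (a small example written for this text, not part of the article), complements can be taken relative to an explicit universe U:

U = set(range(10))
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
comp = lambda s: U - s          # complement relative to U

print(comp(A | B) == comp(A) & comp(B))   # True: complement of a union
print(comp(A & B) == comp(A) | comp(B))   # True: complement of an intersection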

75.1.3 Engineering

In electrical and computer engineering, De Morgan's laws are commonly written as:

(A · B)̅ ≡ A̅ + B̅

and

(A + B)̅ ≡ A̅ · B̅,

where:

· is a logical AND,

+ is a logical OR,

the overbar is the logical NOT of what is underneath the overbar.

75.1.4 Text searching

De Morgans laws commonly apply to text searching using Boolean operators AND, OR, and NOT. Consider a set of
documents containing the words cars and trucks. De Morgans laws hold that these two searches will return the
same set of documents:

Search A: NOT (cars OR trucks)


Search B: (NOT cars) AND (NOT trucks)

The corpus of documents containing cars or trucks can be represented by four documents:

Document 1: Contains only the word cars.


Document 2: Contains only trucks.
Document 3: Contains both cars and trucks.
Document 4: Contains neither cars nor trucks.

To evaluate Search A, clearly the search (cars OR trucks) will hit on Documents 1, 2, and 3. So the negation of
that search (which is Search A) will hit everything else, which is Document 4.
Evaluating Search B, the search (NOT cars) will hit on documents that do not contain cars, which is Documents 2
and 4. Similarly the search (NOT trucks) will hit on Documents 1 and 4. Applying the AND operator to these two
searches (which is Search B) will hit on the documents that are common to these two searches, which is Document 4.
A similar evaluation can be applied to show that the following two searches will return the same set of documents
(Documents 1, 2, 4):

Search C: NOT (cars AND trucks),


Search D: (NOT cars) OR (NOT trucks).
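The four-document corpus above can be modelled directly with Python sets (a small sketch written for this text, not part of the article), which makes the equivalence of the paired searches easy to verify:

docs = {
    1: {"cars"},
    2: {"trucks"},
    3: {"cars", "trucks"},
    4: set(),
}
all_docs = set(docs)
hits = lambda word: {d for d, words in docs.items() if word in words}

search_a = all_docs - (hits("cars") | hits("trucks"))              # NOT (cars OR trucks)
search_b = (all_docs - hits("cars")) & (all_docs - hits("trucks"))
print(search_a == search_b == {4})                                 # True

search_c = all_docs - (hits("cars") & hits("trucks"))              # NOT (cars AND trucks)
search_d = (all_docs - hits("cars")) | (all_docs - hits("trucks"))
print(search_c == search_d == {1, 2, 4})                           # True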

75.2 History
The laws are named after Augustus De Morgan (1806–1871),[6] who introduced a formal version of the laws to classical propositional logic. De Morgan's formulation was influenced by the algebraization of logic undertaken by George Boole, which later cemented De Morgan's claim to the find. Nevertheless, a similar observation was made by Aristotle, and was known to Greek and Medieval logicians.[7] For example, in the 14th century, William of Ockham wrote down the words that would result by reading the laws out.[8] Jean Buridan, in his Summulae de Dialectica, also describes rules of conversion that follow the lines of De Morgan's laws.[9] Still, De Morgan is given credit for stating the laws in the terms of modern formal logic, and incorporating them into the language of logic. De Morgan's laws can be proved easily, and may even seem trivial.[10] Nonetheless, these laws are helpful in making valid inferences in proofs and deductive arguments.

75.3 Informal proof


De Morgans theorem may be applied to the negation of a disjunction or the negation of a conjunction in all or part
of a formula.

75.3.1 Negation of a disjunction


In the case of its application to a disjunction, consider the following claim: it is false that either of A or B is true,
which is written as:

¬(A ∨ B).

In that it has been established that neither A nor B is true, then it must follow that both A is not true and B is not true,
which may be written directly as:

(¬A) ∧ (¬B).

If either A or B were true, then the disjunction of A and B would be true, making its negation false. Presented in
English, this follows the logic that since two things are both false, it is also false that either of them is true.
Working in the opposite direction, the second expression asserts that A is false and B is false (or equivalently that not
A and not B are true). Knowing this, a disjunction of A and B must be false also. The negation of said disjunction
must thus be true, and the result is identical to the first claim.

75.3.2 Negation of a conjunction


The application of De Morgan's theorem to a conjunction is very similar to its application to a disjunction both in
form and rationale. Consider the following claim: "it is false that A and B are both true", which is written as:

¬(A ∧ B).

In order for this claim to be true, either or both of A or B must be false, for if they both were true, then the conjunction
of A and B would be true, making its negation false. Thus, at least one of A and B must be false (or
equivalently, at least one of not A and not B must be true). This may be written directly as:

(¬A) ∨ (¬B).

Presented in English, this follows the logic that since it is false that two things are both true, at least one of them
must be false.
Working in the opposite direction again, the second expression asserts that at least one of not A and not B must
be true, or equivalently that at least one of A and B must be false. Since at least one of them must be false, then their
conjunction would likewise be false. Negating said conjunction thus results in a true expression, and this expression
is identical to the first claim.
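Both informal arguments can also be confirmed by brute force over the four possible truth assignments. The short Python sketch below is an illustration, not part of the proof:

    from itertools import product

    # Exhaustive truth-table check of the two propositional De Morgan laws.
    for a, b in product([False, True], repeat=2):
        assert (not (a or b)) == ((not a) and (not b))   # negation of a disjunction
        assert (not (a and b)) == ((not a) or (not b))   # negation of a conjunction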

75.4 Formal proof


The proof that (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ is completed in two steps by proving both (A ∩ B)ᶜ ⊆ Aᶜ ∪ Bᶜ and Aᶜ ∪ Bᶜ ⊆
(A ∩ B)ᶜ.

75.4.1 Part 1

Let x ∈ (A ∩ B)ᶜ. Then, x ∉ A ∩ B.


Because A ∩ B = {y | y ∈ A ∧ y ∈ B}, it must be the case that x ∉ A or x ∉ B.
If x ∉ A, then x ∈ Aᶜ, so x ∈ Aᶜ ∪ Bᶜ.
Similarly, if x ∉ B, then x ∈ Bᶜ, so x ∈ Aᶜ ∪ Bᶜ.
Thus, ∀x (x ∈ (A ∩ B)ᶜ → x ∈ Aᶜ ∪ Bᶜ);
that is, (A ∩ B)ᶜ ⊆ Aᶜ ∪ Bᶜ.

75.4.2 Part 2

To prove the reverse direction, let x ∈ Aᶜ ∪ Bᶜ, and assume x ∉ (A ∩ B)ᶜ.

Under that assumption, it must be the case that x ∈ A ∩ B,
so it follows that x ∈ A and x ∈ B, and thus x ∉ Aᶜ and x ∉ Bᶜ.
However, that means x ∉ Aᶜ ∪ Bᶜ, in contradiction to the hypothesis that x ∈ Aᶜ ∪ Bᶜ;
therefore, the assumption x ∉ (A ∩ B)ᶜ must not be the case, meaning that x ∈ (A ∩ B)ᶜ.
Hence, ∀x (x ∈ Aᶜ ∪ Bᶜ → x ∈ (A ∩ B)ᶜ),
that is, Aᶜ ∪ Bᶜ ⊆ (A ∩ B)ᶜ.

75.4.3 Conclusion

If Aᶜ ∪ Bᶜ ⊆ (A ∩ B)ᶜ and (A ∩ B)ᶜ ⊆ Aᶜ ∪ Bᶜ, then (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ; this concludes the proof of De
Morgan's law.
The other De Morgan's law, (A ∪ B)ᶜ = Aᶜ ∩ Bᶜ, is proven similarly.

[Figure: De Morgan's laws represented as a circuit with logic gates.]

75.5 Generalising De Morgan duality


In extensions of classical propositional logic, the duality still holds (that is, to any logical operator one can always
find its dual), since in the presence of the identities governing negation, one may always introduce an operator that
is the De Morgan dual of another. This leads to an important property of logics based on classical logic, namely
the existence of negation normal forms: any formula is equivalent to another formula where negations only occur
applied to the non-logical atoms of the formula. The existence of negation normal forms drives many applications,
for example in digital circuit design, where it is used to manipulate the types of logic gates, and in formal logic, where
it is needed to find the conjunctive normal form and disjunctive normal form of a formula. Computer programmers
use them to simplify or properly negate complicated logical conditions. They are also often useful in computations
in elementary probability theory.
Let one define the dual of any propositional operator P(p, q, ...) depending on elementary propositions p, q, ... to be
the operator Pᵈ defined by

Pᵈ(p, q, ...) = ¬P(¬p, ¬q, ...).

75.6 Extension to predicate and modal logic


This duality can be generalised to quantifiers, so for example the universal quantifier and existential quantifier are
duals:

∀x P(x) ≡ ¬[∃x ¬P(x)]
∃x P(x) ≡ ¬[∀x ¬P(x)]
To relate these quantifier dualities to the De Morgan laws, set up a model with some small number of elements in its
domain D, such as

D = {a, b, c}.

Then

∀x P(x) ≡ P(a) ∧ P(b) ∧ P(c)


and

∃x P(x) ≡ P(a) ∨ P(b) ∨ P(c).


But, using De Morgan's laws,

P(a) ∧ P(b) ∧ P(c) ≡ ¬(¬P(a) ∨ ¬P(b) ∨ ¬P(c))


and

P(a) ∨ P(b) ∨ P(c) ≡ ¬(¬P(a) ∧ ¬P(b) ∧ ¬P(c)),


verifying the quantifier dualities in the model.
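For a finite domain such as D = {a, b, c}, this verification can be automated. The Python sketch below is illustrative; it checks both quantifier dualities for every possible predicate on the three-element domain.

    from itertools import product

    D = ["a", "b", "c"]

    # Enumerate all 2**3 predicates P: D -> bool and check both dualities.
    for values in product([False, True], repeat=len(D)):
        table = dict(zip(D, values))
        P = lambda x, table=table: table[x]
        # "For all x, P(x)"  is equivalent to  "not (there exists x with not P(x))".
        assert all(P(x) for x in D) == (not any(not P(x) for x in D))
        # "There exists x with P(x)"  is equivalent to  "not (for all x, not P(x))".
        assert any(P(x) for x in D) == (not all(not P(x) for x in D))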
Then, the quantifier dualities can be extended further to modal logic, relating the box ("necessarily") and diamond
("possibly") operators:

□p ≡ ¬◇¬p,
◇p ≡ ¬□¬p.
In its application to the alethic modalities of possibility and necessity, Aristotle observed this case, and in the case of
normal modal logic, the relationship of these modal operators to the quantification can be understood by setting up
models using Kripke semantics.

75.7 See also


Isomorphism – NOT operator as isomorphism between positive logic and negative logic
List of Boolean algebra topics
Positive logic

75.8 References
[1] Copi and Cohen
[2] Hurley, Patrick J. (2015), A Concise Introduction to Logic (12th ed.), Cengage Learning, ISBN 978-1-285-19654-1
[3] Moore and Parker
[4] Boolean Algebra by R. L. Goodstein. ISBN 0-486-45894-6
[5] 2000 Solved Problems in Digital Electronics by S. P. Bali
[6] DeMorgan's Theorems at mtsu.edu
[7] Bocheński's History of Formal Logic
[8] William of Ockham, Summa Logicae, part II, sections 32 and 33.
[9] Jean Buridan, Summula de Dialectica. Trans. Gyula Klima. New Haven: Yale University Press, 2001. See especially
Treatise 1, Chapter 7, Section 5. ISBN 0-300-08425-0
[10] Augustus De Morgan (1806–1871) by Robert H. Orr

75.9 External links


Hazewinkel, Michiel, ed. (2001) [1994], Duality principle, Encyclopedia of Mathematics, Springer Sci-
ence+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Weisstein, Eric W. "de Morgan's Laws". MathWorld.

de Morgan's laws at PlanetMath.org.


Chapter 76

Deductive closure

For other uses, see Closure.

Deductive closure is a property of a set of objects (usually the objects in question are statements). A set of objects,
O, is said to exhibit closure or to be closed under a given operation, R, provided that for every object, x, if x is a
member of O and x is R-related to any object, y, then y is a member of O.[1] In the context of statements, a deductive
closure is the set of all the statements that can be deduced from a given set of statements.
In propositional logic, the set of all true propositions exhibits deductive closure: if set O is the set of true propositions,
and operation R is logical consequence ("⊢"), then provided that proposition p is a member of O and p is R-related
to q (i.e., p ⊢ q), q is also a member of O.
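For a finite relation R given explicitly as a set of pairs, the closure of a set under R can be computed by a simple fixed-point loop. The Python sketch below is an illustration; the pairs in R are invented for the example and stand for "x yields y".

    # Smallest superset of `start` that is closed under the relation R (finite case).
    def closure(start, R):
        closed = set(start)
        changed = True
        while changed:
            changed = False
            for x, y in R:
                if x in closed and y not in closed:
                    closed.add(y)
                    changed = True
        return closed

    # Invented example: from "p" one can derive "q", and from "q" one can derive "r".
    R = {("p", "q"), ("q", "r")}
    print(closure({"p"}, R))   # {'p', 'q', 'r'}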

76.1 Epistemic closure


Main article: Epistemic closure

In epistemology, many philosophers have debated, and continue to debate, whether particular subsets of propositions, especially
ones ascribing knowledge or justification of a belief to a subject, are closed under deduction.

76.2 References
[1] Peter D. Klein, Closure, The Cambridge Dictionary of Philosophy (second edition)

Chapter 77

Demonic composition

In mathematics, demonic composition is an operation on binary relations that is somewhat comparable to ordinary
composition of relations but is robust to refinement of the relations into (partial) functions or injective relations.
Unlike ordinary composition of relations, demonic composition is not associative.

77.1 Denition
Suppose R is a binary relation between X and Y and S is a relation between Y and Z. Their right demonic composition
R ; S is a relation between X and Z. Its graph is defined as

{(x, z) | x (S ∘ R) z ∧ ∀y ∈ Y (x R y → y S z)}.

Conversely, their left demonic composition R ; S is defined by

{(x, z) | x (S ∘ R) z ∧ ∀y ∈ Y (y S z → x R y)}.
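For finite relations represented as sets of pairs, the right demonic composition can be computed directly from this definition. The Python sketch below is illustrative; the sets and pairs are invented for the example.

    # Ordinary composition S ∘ R and right demonic composition for finite relations.
    def compose(R, S):
        return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

    def right_demonic(R, S, Y):
        # Keep (x, z) only if every y with x R y also satisfies y S z.
        return {(x, z) for (x, z) in compose(R, S)
                if all((y, z) in S for y in Y if (x, y) in R)}

    Y = {"a", "b"}
    R = {(1, "a"), (1, "b"), (2, "a")}
    S = {("a", 10)}                    # "b" is related to nothing in Z
    print(compose(R, S))               # {(1, 10), (2, 10)}
    print(right_demonic(R, S, Y))      # {(2, 10)}; 1 is dropped because 1 R b but not b S 10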

77.2 References
Backhouse, Roland; van der Woude, Jaap (1993), "Demonic operators and monotype factors", Mathematical
Structures in Computer Science, 3 (4): 417–433, MR 1249420, doi:10.1017/S096012950000030X.

Chapter 78

Denying the antecedent

Denying the antecedent, sometimes also called inverse error or fallacy of the inverse, is a formal fallacy of
inferring the inverse from the original statement. It is committed by reasoning in the form:[1]

If P, then Q.
Therefore, if not P, then not Q.

which may also be phrased as

P → Q (P implies Q)
∴ ¬P → ¬Q (therefore, not-P implies not-Q)[1]

Arguments of this form are invalid. Informally, this means that arguments of this form do not give good reason to
establish their conclusions, even if their premises are true.
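Invalidity can be demonstrated mechanically by exhibiting a truth assignment that makes both premises true and the conclusion false. The Python sketch below is an illustration of that search:

    from itertools import product

    def implies(a, b):
        return (not a) or b

    # Look for an assignment where "P implies Q" and "not P" hold but "not Q" fails.
    for p, q in product([False, True], repeat=2):
        premises = implies(p, q) and (not p)
        conclusion = not q
        if premises and not conclusion:
            print("Counterexample:", {"P": p, "Q": q})   # P=False, Q=True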
The name denying the antecedent derives from the premise "not P", which denies the "if" clause of the conditional
premise.
One way to demonstrate the invalidity of this argument form is with a counterexample with true premises but an
obviously false conclusion. For example:

If you are a ski instructor, then you have a job.


You are not a ski instructor
Therefore, you have no job[1]

That argument is intentionally bad, but arguments of the same form can sometimes seem superficially convincing, as
in the following example offered by Alan Turing in the article "Computing Machinery and Intelligence":

If each man had a definite set of rules of conduct by which he regulated his life he would be no better
than a machine. But there are no such rules, so men cannot be machines.[2]

However, men could still be machines that do not follow a definite set of rules. Thus, this argument (as Turing intends)
is invalid.
It is possible that an argument that denies the antecedent could be valid, if the argument instantiates some other valid
form. For example, if the claims P and Q express the same proposition, then the argument would be trivially valid,
as it would beg the question. In everyday discourse, however, such cases are rare, typically only occurring when the
if-then premise is actually an "if and only if" claim (i.e., a biconditional/equality). For example:

If I am President of the United States, then I can veto Congress.


I am not President.
Therefore, I cannot veto Congress.


The above argument is not valid, but would be if the first premise ended thus: "...and if I can veto Congress, then I
am the U.S. President" (as is in fact true). More to the point, the validity of the new argument stems not from denying
the antecedent, but modus tollens (denying the consequent).

78.1 See also


Affirming the consequent

Modus ponens
Modus tollens

Necessity and sufficiency

78.2 References
[1] Matthew C. Harris. Denying the antecedent. Khan academy.

[2] Turing, Alan (October 1950), "Computing Machinery and Intelligence", Mind, LIX (236): 433–460, ISSN 0026-4423,
doi:10.1093/mind/LIX.236.433, retrieved 2008-08-18

78.3 External links


FallacyFiles.org: Denying the Antecedent

safalra.com: Denying The Antecedent


Chapter 79

Derivative algebra (abstract algebra)

In abstract algebra, a derivative algebra is an algebraic structure of the signature

<A, ·, +, ', 0, 1, D>

where

<A, ·, +, ', 0, 1>

is a Boolean algebra and D is a unary operator, the derivative operator, satisfying the identities:

1. 0^D = 0

2. x^{DD} ≤ x + x^D
3. (x + y)^D = x^D + y^D.

x^D is called the derivative of x. Derivative algebras provide an algebraic abstraction of the derived set operator in
topology. They also play the same role for the modal logic wK4 = K + (p ∧ □p → □□p) that Boolean algebras play for
ordinary propositional logic.

79.1 References
Esakia, L., Intuitionistic logic and modality via topology, Annals of Pure and Applied Logic, 127 (2004) 155-
170

McKinsey, J.C.C. and Tarski, A., The Algebra of Topology, Annals of Mathematics, 45 (1944) 141-191

Chapter 80

Destructive dilemma

Destructive dilemma[1][2] is the name of a valid rule of inference of propositional logic. It is the inference that, if P
implies Q and R implies S and either Q is false or S is false, then either P or R must be false. In sum, if two conditionals
are true, but one of their consequents is false, then one of their antecedents has to be false. Destructive dilemma is
the disjunctive version of modus tollens. The disjunctive version of modus ponens is the constructive dilemma. The
rule can be stated:

P → Q, R → S, ¬Q ∨ ¬S
∴ ¬P ∨ ¬R

where the rule is that wherever instances of "P → Q", "R → S", and "¬Q ∨ ¬S" appear on lines of a proof,
"¬P ∨ ¬R" can be placed on a subsequent line.

80.1 Formal notation


The destructive dilemma rule may be written in sequent notation:

(P → Q), (R → S), (¬Q ∨ ¬S) ⊢ (¬P ∨ ¬R)

where ⊢ is a metalogical symbol meaning that ¬P ∨ ¬R is a syntactic consequence of P → Q, R → S, and
¬Q ∨ ¬S in some logical system;
and expressed as a truth-functional tautology or theorem of propositional logic:

(((P → Q) ∧ (R → S)) ∧ (¬Q ∨ ¬S)) → (¬P ∨ ¬R)

where P, Q, R, and S are propositions expressed in some formal system.
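The tautology above can also be confirmed exhaustively over the sixteen truth assignments of P, Q, R, and S. The Python sketch below is an illustration, not a formal proof:

    from itertools import product

    def implies(a, b):
        return (not a) or b

    # Whenever the three premises hold, the conclusion (not P) or (not R) must hold as well.
    for p, q, r, s in product([False, True], repeat=4):
        premises = implies(p, q) and implies(r, s) and ((not q) or (not s))
        conclusion = (not p) or (not r)
        assert (not premises) or conclusion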

80.2 Natural language example

If it rains, we will stay inside.


If it is sunny, we will go for a walk.
Either we will not stay inside, or we will not go for a walk, or both.
Therefore, either it will not rain, or it will not be sunny, or both.


80.3 Proof

80.4 Example proof


The validity of this argument structure can be shown by using both conditional proof (CP) and reductio ad absurdum
(RAA) in the following way:

80.5 References
[1] Hurley, Patrick. A Concise Introduction to Logic With Ilrn Printed Access Card. Wadsworth Pub Co, 2008. Page 361

[2] Moore and Parker

80.6 Bibliography
Howard-Snyder, Frances; Howard-Snyder, Daniel; Wasserman, Ryan. The Power of Logic (4th ed.). McGraw-
Hill, 2009, ISBN 978-0-07-340737-1, p. 414.

80.7 External links


http://mathworld.wolfram.com/DestructiveDilemma.html
Chapter 81

Homogeneous relation

Relation (mathematics)" redirects here. For a more general notion of relation, see nitary relation. For a more
combinatorial viewpoint, see theory of relations. For other uses, see Relation (disambiguation).

In mathematics, a binary relation on a set A is a collection of ordered pairs of elements of A. In other words, it is a
subset of the Cartesian product A² = A × A. More generally, a binary relation between two sets A and B is a subset
of A × B. The terms correspondence, dyadic relation and 2-place relation are synonyms for binary relation.
An example is the "divides" relation between the set of prime numbers P and the set of integers Z, in which every
prime p is associated with every integer z that is a multiple of p (but with no integer that is not a multiple of p). In
this relation, for instance, the prime 2 is associated with numbers that include 4, 0, 6, 10, but not 1 or 9; and the
prime 3 is associated with numbers that include 0, 6, and 9, but not 4 or 13.
Binary relations are used in many branches of mathematics to model concepts like "is greater than", "is equal to", and
divides in arithmetic, "is congruent to" in geometry, is adjacent to in graph theory, is orthogonal to in linear
algebra and many more. The concept of function is dened as a special kind of binary relation. Binary relations are
also heavily used in computer science.
A binary relation is the special case n = 2 of an n-ary relation R ⊆ A₁ × … × Aₙ, that is, a set of n-tuples where the
jth component of each n-tuple is taken from the jth domain Aⱼ of the relation. An example for a ternary relation on
Z × Z × Z is "... lies between ... and ...", containing e.g. the triples (5,2,8), (5,8,2), and (−4,9,−7).
In some systems of axiomatic set theory, relations are extended to classes, which are generalizations of sets. This
extension is needed for, among other things, modeling the concepts of is an element of or is a subset of in set
theory, without running into logical inconsistencies such as Russell's paradox.

81.1 Formal denition

A binary relation R between arbitrary sets (or classes) X (the set of departure) and Y (the set of destination or
codomain) is specified by its graph G, which is a subset of the Cartesian product X × Y. The binary relation R itself
is usually identified with its graph G, but some authors define it as an ordered triple (X, Y, G), which is otherwise
referred to as a correspondence.[1]
The statement (x, y) ∈ G is read "x is R-related to y", and is denoted by xRy or R(x, y). The latter notation corresponds
to viewing R as the characteristic function of the subset G of X × Y, i.e. R(x, y) equals 1 (true) if (x, y) ∈ G, and
0 (false) otherwise.
The order of the elements in each pair of G is important: if a ≠ b, then aRb and bRa can be true or false, independently
of each other. Resuming the above example, the prime 3 divides the integer 9, but 9 doesn't divide 3.
The domain of R is the set of all x such that xRy for at least one y. The range of R is the set of all y such that xRy
for at least one x. The field of R is the union of its domain and its range.[2][3][4]


81.1.1 Is a relation more than its graph?


According to the definition above, two relations with identical graphs but different domains or different codomains
are considered different. For example, if G = {(1, 2), (1, 3), (2, 7)}, then (Z, Z, G), (R, N, G), and (N, R, G) are
three distinct relations, where Z is the set of integers, R is the set of real numbers and N is the set of natural numbers.
Especially in set theory, binary relations are often defined as sets of ordered pairs, identifying binary relations with
their graphs. The domain of a binary relation R is then defined as the set of all x such that there exists at least one
y such that (x, y) ∈ R, the range of R is defined as the set of all y such that there exists at least one x such that
(x, y) ∈ R, and the field of R is the union of its domain and its range.[2][3][4]
A special case of this difference in points of view applies to the notion of function. Many authors insist on
distinguishing between a function's codomain and its range. Thus, a single "rule", like mapping every real number x to
x², can lead to distinct functions f: R → R and f: R → R⁺, depending on whether the images under that
rule are understood to be reals or, more restrictively, non-negative reals. But others view functions as simply sets of
ordered pairs with unique first components. This difference in perspectives does raise some nontrivial issues. As an
example, the former camp considers surjectivity (or being onto) as a property of functions, while the latter sees it
as a relationship that functions may bear to sets.
Either approach is adequate for most uses, provided that one attends to the necessary changes in language, notation,
and the definitions of concepts like restrictions, composition, inverse relation, and so on. The choice between the two
definitions usually matters only in very formal contexts, like category theory.

81.1.2 Example
Example: Suppose there are four objects {ball, car, doll, gun} and four persons {John, Mary, Ian, Venus}. Suppose
that John owns the ball, Mary owns the doll, and Venus owns the car. Nobody owns the gun and Ian owns nothing.
Then the binary relation is owned by is given as

R = ({ball, car, doll, gun}, {John, Mary, Ian, Venus}, {(ball, John), (doll, Mary), (car, Venus)}).

Thus the first element of R is the set of objects, the second is the set of persons, and the last element is a set of ordered
pairs of the form (object, owner).
The pair (ball, John), denoted by ball R John, means that the ball is owned by John.
Two different relations could have the same graph. For example: the relation

({ball, car, doll, gun}, {John, Mary, Venus}, {(ball, John), (doll, Mary), (car, Venus)})

is different from the previous one as everyone is an owner. But the graphs of the two relations are the same.
Nevertheless, R is usually identified or even defined as G(R) and "an ordered pair (x, y) ∈ G(R)" is usually denoted as
"(x, y) ∈ R".[5]

81.2 Special types of binary relations


Some important types of binary relations R between two sets X and Y are listed below. To emphasize that X and Y
can be different sets, some authors call such binary relations heterogeneous.[6][7]
Uniqueness properties:

injective (also called left-unique[8]): for all x and z in X and y in Y it holds that if xRy and zRy then x = z. For
example, the green relation in the diagram is injective, but the red relation is not, as it relates e.g. both x = −5
and z = +5 to y = 25.
functional (also called univalent[9] or right-unique[8] or right-definite[10]): for all x in X, and y and z in Y
it holds that if xRy and xRz then y = z; such a binary relation is called a partial function. Both relations in
the picture are functional. An example for a non-functional relation can be obtained by rotating the red graph
clockwise by 90 degrees, i.e. by considering the relation x = y², which relates e.g. x = 25 to both y = −5 and z = +5.

[Figure: Example relations between real numbers. Red: y = x². Green: y = 2x + 20.]

one-to-one (also written 1-to-1): injective and functional. The green relation is one-to-one, but the red is not.

Totality properties (only definable if the sets of departure X resp. destination Y are specified; not to be confused with
a total relation):

left-total:[8] for all x in X there exists a y in Y such that xRy. For example, R is left-total when it is a function
or a multivalued function. Note that this property, although sometimes also referred to as total, is different
from the definition of total in the next section. Both relations in the picture are left-total. The relation x = y²,
obtained from the above rotation, is not left-total, as it doesn't relate, e.g., x = −14 to any real number y.
surjective (also called right-total[8] or onto): for all y in Y there exists an x in X such that xRy. The green
relation is surjective, but the red relation is not, as it doesn't relate any real number x to e.g. y = −14.

Uniqueness and totality properties:

A function: a relation that is functional and left-total. Both the green and the red relation are functions.

An injective function: a relation that is injective, functional, and left-total.

A surjective function or surjection: a relation that is functional, left-total, and right-total.

A bijection: a surjective one-to-one or surjective injective function is said to be bijective, also known as
one-to-one correspondence.[11] The green relation is bijective, but the red is not.
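For finite sets, each of these uniqueness and totality properties reduces to a short check over the pairs of the relation. The Python sketch below is illustrative; the relation chosen here mimics the red y = x² example on a small finite domain.

    # Property checks for a finite relation rel ⊆ X × Y, given as a set of pairs.
    def injective(rel):
        return all(x1 == x2 for (x1, y1) in rel for (x2, y2) in rel if y1 == y2)

    def functional(rel):
        return all(y1 == y2 for (x1, y1) in rel for (x2, y2) in rel if x1 == x2)

    def left_total(rel, X):
        return all(any(a == x for (a, _) in rel) for x in X)

    def surjective(rel, Y):
        return all(any(b == y for (_, b) in rel) for y in Y)

    X, Y = {-2, -1, 0, 1, 2}, {0, 1, 4}
    square = {(x, x * x) for x in X}                      # finite analogue of y = x²
    print(functional(square), injective(square))          # True False
    print(left_total(square, X), surjective(square, Y))   # True True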

81.2.1 Difunctional
Less commonly encountered is the notion of difunctional (or regular) relation, defined as a relation R such that
R = R∘R⁻¹∘R.[12]
To understand this notion better, it helps to consider a relation as mapping every element x ∈ X to a set xR = { y ∈ Y
| xRy }.[12] This set is sometimes called the successor neighborhood of x in R; one can define the predecessor
neighborhood analogously.[13] Synonymous terms for these notions are afterset and respectively foreset.[6]
A difunctional relation can then be equivalently characterized as a relation R such that wherever x₁R and x₂R have a
non-empty intersection, then these two sets coincide; formally x₁R ∩ x₂R ≠ ∅ implies x₁R = x₂R.[12]
As examples, any function or any functional (right-unique) relation is difunctional; the converse doesn't hold. If one
considers a relation R from set to itself (X = Y), then if R is both transitive and symmetric (i.e. a partial equivalence
relation), then it is also difunctional.[14] The converse of this latter statement also doesn't hold.
A characterization of difunctional relations, which also explains their name, is to consider two functions f: A → C
and g: B → C and then define the following set which generalizes the kernel of a single function as joint kernel: ker(f,
g) = { (a, b) ∈ A × B | f(a) = g(b) }. Every difunctional relation R ⊆ A × B arises as the joint kernel of two functions
f: A → C and g: B → C for some set C.[15]
In automata theory, the term rectangular relation has also been used to denote a difunctional relation. This
terminology is justified by the fact that when represented as a boolean matrix, the columns and rows of a difunctional
relation can be arranged in such a way as to present rectangular blocks of true on the (asymmetric) main diagonal.[16]
Other authors however use the term rectangular to denote any heterogeneous relation whatsoever.[7]

81.3 Relations over a set


If X = Y then we simply say that the binary relation is over X, or that it is an endorelation over X.[17] In computer
science, such a relation is also called a homogeneous (binary) relation.[7][17][18] Some types of endorelations are
widely studied in graph theory, where they are known as simple directed graphs permitting loops.
The set of all binary relations Rel(X) on a set X is the set 2^(X × X), which is a Boolean algebra augmented with the
involution of mapping of a relation to its inverse relation. For the theoretical explanation see Relation algebra.
Some important properties that a binary relation R over a set X may have are:

reflexive: for all x in X it holds that xRx. For example, "greater than or equal to" (≥) is a reflexive relation but
"greater than" (>) is not.

irreflexive (or strict): for all x in X it holds that not xRx. For example, > is an irreflexive relation, but ≥ is not.

coreflexive relation: for all x and y in X it holds that if xRy then x = y.[19] An example of a coreflexive relation
is the relation on integers in which each odd number is related to itself and there are no other relations. The
equality relation is the only example of a both reflexive and coreflexive relation, and any coreflexive relation is
a subset of the identity relation.

The previous 3 alternatives are far from being exhaustive; e.g. the red relation y = x² from the
above picture is neither irreflexive, nor coreflexive, nor reflexive, since it contains the pair
(0,0), and (2,4), but not (2,2), respectively.

symmetric: for all x and y in X it holds that if xRy then yRx. "Is a blood relative of" is a symmetric relation,
because x is a blood relative of y if and only if y is a blood relative of x.

antisymmetric: for all x and y in X, if xRy and yRx then x = y. For example, ≥ is anti-symmetric; so is >, but
vacuously (the condition in the definition is always false).[20]

asymmetric: for all x and y in X, if xRy then not yRx. A relation is asymmetric if and only if it is both
anti-symmetric and irreflexive.[21] For example, > is asymmetric, but ≥ is not.

transitive: for all x, y and z in X it holds that if xRy and yRz then xRz. For example, "is ancestor of" is transitive,
while "is parent of" is not. A transitive relation is irreflexive if and only if it is asymmetric.[22]

total: for all x and y in X it holds that xRy or yRx (or both). This definition for total is different from left total
in the previous section. For example, ≥ is a total relation.

trichotomous: for all x and y in X exactly one of xRy, yRx or x = y holds. For example, > is a trichotomous
relation, while the relation "divides" on natural numbers is not.[23]

Right Euclidean: for all x, y and z in X it holds that if xRy and xRz, then yRz.

Left Euclidean: for all x, y and z in X it holds that if yRx and zRx, then yRz.

Euclidean: A Euclidean relation is both left and right Euclidean. Equality is a Euclidean relation because if
x=y and x=z, then y=z.

serial: for all x in X, there exists y in X such that xRy. "Is greater than" is a serial relation on the integers. But
it is not a serial relation on the positive integers, because there is no y in the positive integers such that 1>y.[24]
However, "is less than" is a serial relation on the positive integers, the rational numbers and the real numbers.
Every reflexive relation is serial: for a given x, choose y = x. A serial relation can be equivalently characterized
as every element having a non-empty successor neighborhood (see the previous section for the definition of this
notion). Similarly an inverse serial relation is a relation in which every element has non-empty predecessor
neighborhood.[13]

set-like (or local): for every x in X, the class of all y such that yRx is a set. (This makes sense only if relations
on proper classes are allowed.) The usual ordering < on the class of ordinal numbers is set-like, while its inverse
> is not.

A relation that is reflexive, symmetric, and transitive is called an equivalence relation. A relation that is symmetric,
transitive, and serial is also reflexive. A relation that is only symmetric and transitive (without necessarily being
reflexive) is called a partial equivalence relation.
A relation that is reflexive, antisymmetric, and transitive is called a partial order. A partial order that is total is called
a total order, simple order, linear order, or a chain.[25] A linear order where every nonempty subset has a least element
is called a well-order.
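Over a small finite set these properties can be tested directly. The Python sketch below is illustrative; it checks the divisibility relation on {1, ..., 6}, which turns out to be reflexive, antisymmetric, and transitive (a partial order) but neither symmetric nor total.

    # Property checks for a homogeneous relation R over a finite set X.
    X = set(range(1, 7))
    divides = {(a, b) for a in X for b in X if b % a == 0}   # "a divides b"

    def reflexive(R, X):
        return all((x, x) in R for x in X)

    def symmetric(R):
        return all((y, x) in R for (x, y) in R)

    def antisymmetric(R):
        return all(x == y for (x, y) in R if (y, x) in R)

    def transitive(R):
        return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

    def total(R, X):
        return all((x, y) in R or (y, x) in R for x in X for y in X)

    print(reflexive(divides, X), antisymmetric(divides), transitive(divides))  # True True True
    print(symmetric(divides), total(divides, X))                               # False False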

81.4 Operations on binary relations


If R, S are binary relations over X and Y, then each of the following is a binary relation over X and Y:

Union: R ∪ S ⊆ X × Y, defined as R ∪ S = { (x, y) | (x, y) ∈ R or (x, y) ∈ S }. For example, ≥ is the union of >
and =.

Intersection: R ∩ S ⊆ X × Y, defined as R ∩ S = { (x, y) | (x, y) ∈ R and (x, y) ∈ S }.

If R is a binary relation over X and Y, and S is a binary relation over Y and Z, then the following is a binary relation
over X and Z: (see main article composition of relations)

Composition: S ∘ R, also denoted R ; S (or R ∘ S), defined as S ∘ R = { (x, z) | there exists y ∈ Y such that (x, y)
∈ R and (y, z) ∈ S }. The order of R and S in the notation S ∘ R used here agrees with the standard notational
order for composition of functions. For example, the composition "is mother of" ∘ "is parent of" yields "is
maternal grandparent of", while the composition "is parent of" ∘ "is mother of" yields "is grandmother of".

A relation R on sets X and Y is said to be contained in a relation S on X and Y if R is a subset of S, that is, if x R y
always implies x S y. In this case, if R and S disagree, R is also said to be smaller than S. For example, > is contained
in ≥.
If R is a binary relation over X and Y, then the following is a binary relation over Y and X:

Inverse or converse: R⁻¹, defined as R⁻¹ = { (y, x) | (x, y) ∈ R }. A binary relation over a set is equal to its
inverse if and only if it is symmetric. See also duality (order theory). For example, "is less than" (<) is the
inverse of "is greater than" (>).

If R is a binary relation over X, then each of the following is a binary relation over X:

Reflexive closure: R^=, defined as R^= = { (x, x) | x ∈ X } ∪ R, or the smallest reflexive relation over X containing
R. This can be proven to be equal to the intersection of all reflexive relations containing R.
Reflexive reduction: R^≠, defined as R^≠ = R \ { (x, x) | x ∈ X }, or the largest irreflexive relation over X
contained in R.
Transitive closure: R^+, defined as the smallest transitive relation over X containing R. This can be seen to be
equal to the intersection of all transitive relations containing R.
Reflexive transitive closure: R*, defined as R* = (R^+)^=, the smallest preorder containing R.
Reflexive transitive symmetric closure: R^≡, defined as the smallest equivalence relation over X containing
R.
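Composition and the closure operators are straightforward to realize for finite relations. The Python sketch below is illustrative; the family facts are invented for the example.

    def compose(S, R):
        """S ∘ R = {(x, z) : there exists y with (x, y) in R and (y, z) in S}."""
        return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

    def transitive_closure(R):
        """Smallest transitive relation containing R (finite case)."""
        closure = set(R)
        while True:
            larger = closure | compose(closure, closure)
            if larger == closure:
                return closure
            closure = larger

    is_parent_of = {("Alice", "Carol"), ("Carol", "Dan")}
    is_mother_of = {("Carol", "Dan")}

    print(compose(is_mother_of, is_parent_of))  # {('Alice', 'Dan')}: "is maternal grandparent of"
    print(transitive_closure(is_parent_of))     # adds ('Alice', 'Dan'): "is ancestor of"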

81.4.1 Complement
If R is a binary relation over X and Y, then the following is too:

The complement S is defined as x S y if not x R y. For example, on real numbers, ≤ is the complement of >.

The complement of the inverse is the inverse of the complement.


If X = Y, the complement has the following properties:

If a relation is symmetric, the complement is too.


The complement of a reflexive relation is irreflexive and vice versa.
The complement of a strict weak order is a total preorder and vice versa.

The complement of the inverse has these same properties.

81.4.2 Restriction
The restriction of a binary relation on a set X to a subset S is the set of all pairs (x, y) in the relation for which x and
y are in S.
If a relation is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, total, trichotomous, a partial
order, total order, strict weak order, total preorder (weak order), or an equivalence relation, its restrictions are too.
However, the transitive closure of a restriction is a subset of the restriction of the transitive closure, i.e., in general
not equal. For example, restricting the relation "x is parent of y" to females yields the relation "x is mother of
the woman y"; its transitive closure doesn't relate a woman with her paternal grandmother. On the other hand, the

transitive closure of "is parent of" is "is ancestor of"; its restriction to females does relate a woman with her paternal
grandmother.
Also, the various concepts of completeness (not to be confused with being total) do not carry over to restrictions.
For example, on the set of real numbers a property of the relation "≤" is that every non-empty subset S of R with an
upper bound in R has a least upper bound (also called supremum) in R. However, for a set of rational numbers this
supremum is not necessarily rational, so the same property does not hold on the restriction of the relation "≤" to the
set of rational numbers.
The left-restriction (right-restriction, respectively) of a binary relation between X and Y to a subset S of its domain
(codomain) is the set of all pairs (x, y) in the relation for which x (y) is an element of S.

81.4.3 Algebras, categories, and rewriting systems


Various operations on binary endorelations can be treated as giving rise to an algebraic structure, known as relation
algebra. It should not be confused with relational algebra which deals in finitary relations (and in practice also finite
and many-sorted).
For heterogeneous binary relations, a category of relations arises.[7]
Despite their simplicity, binary relations are at the core of an abstract computation model known as an abstract
rewriting system.

81.5 Sets versus classes


Certain mathematical relations, such as "equal to", "member of", and "subset of", cannot be understood to be binary
relations as defined above, because their domains and codomains cannot be taken to be sets in the usual systems of
axiomatic set theory. For example, if we try to model the general concept of "equality" as a binary relation =, we
must take the domain and codomain to be the class of all sets, which is not a set in the usual set theory.
In most mathematical contexts, references to the relations of equality, membership and subset are harmless because
they can be understood implicitly to be restricted to some set in the context. The usual work-around to this problem
is to select a "large enough" set A, that contains all the objects of interest, and work with the restriction =_A instead of
=. Similarly, the "subset of" relation needs to be restricted to have domain and codomain P(A) (the power set of
a specific set A): the resulting set relation can be denoted ⊆_A. Also, the "member of" relation needs to be restricted
to have domain A and codomain P(A) to obtain a binary relation ∈_A that is a set. Bertrand Russell has shown that
assuming ∈ to be defined on all sets leads to a contradiction in naive set theory.
Another solution to this problem is to use a set theory with proper classes, such as NBG or Morse–Kelley set theory,
and allow the domain and codomain (and so the graph) to be proper classes: in such a theory, equality, membership,
and subset are binary relations without special comment. (A minor modification needs to be made to the concept of
the ordered triple (X, Y, G), as normally a proper class cannot be a member of an ordered tuple; or of course one
can identify the function with its graph in this context.)[26] With this definition one can for instance define a function
relation between every set and its power set.

81.6 The number of binary relations


The number of distinct binary relations on an n-element set is 2^(n²) (sequence A002416 in the OEIS):
Notes:

The number of irreflexive relations is the same as that of reflexive relations.


The number of strict partial orders (irreflexive transitive relations) is the same as that of partial orders.
The number of strict weak orders is the same as that of total preorders.
The total orders are the partial orders that are also total preorders. The number of preorders that are neither
a partial order nor a total preorder is, therefore, the number of preorders, minus the number of partial orders,
minus the number of total preorders, plus the number of total orders: 0, 0, 0, 3, and 85, respectively.

The number of equivalence relations is the number of partitions, which is the Bell number.

The binary relations can be grouped into pairs (relation, complement), except that for n = 0 the relation is its own
complement. The non-symmetric ones can be grouped into quadruples (relation, complement, inverse, inverse com-
plement).

81.7 Examples of common binary relations


order relations, including strict orders:

greater than
greater than or equal to
less than
less than or equal to
divides (evenly)
is a subset of

equivalence relations:

equality
is parallel to (for affine spaces)
is in bijection with
isomorphy

dependency relation, a finite, symmetric, reflexive relation.

independency relation, a symmetric, irreflexive relation which is the complement of some dependency relation.

81.8 See also


Confluence (term rewriting)

Hasse diagram

Incidence structure

Logic of relatives

Order theory

Triadic relation

81.9 Notes
[1] Encyclopedic dictionary of Mathematics. MIT. 2000. pp. 13301331. ISBN 0-262-59020-4.

[2] Suppes, Patrick (1972) [originally published by D. van Nostrand Company in 1960]. Axiomatic Set Theory. Dover. ISBN
0-486-61630-4.

[3] Smullyan, Raymond M.; Fitting, Melvin (2010) [revised and corrected republication of the work originally published in
1996 by Oxford University Press, New York]. Set Theory and the Continuum Problem. Dover. ISBN 978-0-486-47484-7.

[4] Levy, Azriel (2002) [republication of the work published by Springer-Verlag, Berlin, Heidelberg and New York in 1979].
Basic Set Theory. Dover. ISBN 0-486-42079-5.

[5] Megill, Norman (5 August 1993). df-br (Metamath Proof Explorer)". Retrieved 18 November 2016.

[6] Christodoulos A. Floudas; Panos M. Pardalos (2008). Encyclopedia of Optimization (2nd ed.). Springer Science & Business
Media. pp. 299300. ISBN 978-0-387-74758-3.

[7] Michael Winter (2007). Goguen Categories: A Categorical Approach to L-fuzzy Relations. Springer. pp. xxi. ISBN
978-1-4020-6164-6.

[8] Kilp, Knauer and Mikhalev: p. 3. The same four denitions appear in the following:

Peter J. Pahl; Rudolf Damrath (2001). Mathematical Foundations of Computational Engineering: A Handbook.
Springer Science & Business Media. p. 506. ISBN 978-3-540-67995-0.
Eike Best (1996). Semantics of Sequential and Parallel Programs. Prentice Hall. pp. 1921. ISBN 978-0-13-
460643-9.
Robert-Christoph Riemann (1999). Modelling of Concurrent Systems: Structural and Semantical Methods in the High
Level Petri Net Calculus. Herbert Utz Verlag. pp. 2122. ISBN 978-3-89675-629-9.

[9] Gunther Schmidt, 2010. Relational Mathematics. Cambridge University Press, ISBN 978-0-521-76268-7, Chapt. 5

[10] Mäs, Stephan (2007), "Reasoning on Spatial Semantic Integrity Constraints", Spatial Information Theory: 8th International
Conference, COSIT 2007, Melbourne, Australia, September 19–23, 2007, Proceedings, Lecture Notes in Computer Science,
4736, Springer, pp. 285–302, doi:10.1007/978-3-540-74788-8_18

[11] Note that the use of correspondence here is narrower than as general synonym for binary relation.

[12] Chris Brink; Wolfram Kahl; Gunther Schmidt (1997). Relational Methods in Computer Science. Springer Science &
Business Media. p. 200. ISBN 978-3-211-82971-4.

[13] Yao, Y. (2004). Semantics of Fuzzy Sets in Rough Set Theory. Transactions on Rough Sets II. Lecture Notes in Computer
Science. 3135. p. 309. ISBN 978-3-540-23990-1. doi:10.1007/978-3-540-27778-1_15.

[14] William Craig (2006). Semigroups Underlying First-order Logic. American Mathematical Soc. p. 72. ISBN 978-0-8218-
6588-0.

[15] Gumm, H. P.; Zarrad, M. (2014). Coalgebraic Simulations and Congruences. Coalgebraic Methods in Computer Science.
Lecture Notes in Computer Science. 8446. p. 118. ISBN 978-3-662-44123-7. doi:10.1007/978-3-662-44124-4_7.

[16] Julius Richard Büchi (1989). Finite Automata, Their Algebras and Grammars: Towards a Theory of Formal Expressions.
Springer Science & Business Media. pp. 35–37. ISBN 978-1-4613-8853-1.

[17] M. E. Müller (2012). Relational Knowledge Discovery. Cambridge University Press. p. 22. ISBN 978-0-521-19021-3.

[18] Peter J. Pahl; Rudolf Damrath (2001). Mathematical Foundations of Computational Engineering: A Handbook. Springer
Science & Business Media. p. 496. ISBN 978-3-540-67995-0.

[19] Fonseca de Oliveira, J. N., & Pereira Cunha Rodrigues, C. D. J. (2004). Transposing Relations: From Maybe Functions
to Hash Tables. In Mathematics of Program Construction (p. 337).

[20] Smith, Douglas; Eggen, Maurice; St. Andre, Richard (2006), A Transition to Advanced Mathematics (6th ed.), Brooks/Cole,
p. 160, ISBN 0-534-39900-2

[21] Nievergelt, Yves (2002), Foundations of Logic and Mathematics: Applications to Computer Science and Cryptography,
Springer-Verlag, p. 158.

[22] Flaška, V.; Ježek, J.; Kepka, T.; Kortelainen, J. (2007). Transitive Closures of Binary Relations I (PDF). Prague: School
of Mathematics – Physics, Charles University. p. 1. Lemma 1.1 (iv). This source refers to asymmetric relations as "strictly
antisymmetric".

[23] Since neither 5 divides 3, nor 3 divides 5, nor 3=5.

[24] Yao, Y.Y.; Wong, S.K.M. (1995). Generalization of rough sets using relationships between attribute values (PDF).
Proceedings of the 2nd Annual Joint Conference on Information Sciences: 3033..

[25] Joseph G. Rosenstein, Linear orderings, Academic Press, 1982, ISBN 0-12-597680-1, p. 4

[26] Tarski, Alfred; Givant, Steven (1987). A formalization of set theory without variables. American Mathematical Society. p.
3. ISBN 0-8218-1041-3.

81.10 References
M. Kilp, U. Knauer, A.V. Mikhalev, Monoids, Acts and Categories: with Applications to Wreath Products and
Graphs, De Gruyter Expositions in Mathematics vol. 29, Walter de Gruyter, 2000, ISBN 3-11-015248-7.
Gunther Schmidt, 2010. Relational Mathematics. Cambridge University Press, ISBN 978-0-521-76268-7.

81.11 External links


Hazewinkel, Michiel, ed. (2001) [1994], Binary relation, Encyclopedia of Mathematics, Springer Science+Business
Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Chapter 82

Disjunction elimination

In propositional logic, disjunction elimination[1][2] (sometimes named proof by cases, case analysis, or "or
elimination") is the valid argument form and rule of inference that allows one to eliminate a disjunctive statement from a
logical proof. It is the inference that if a statement P implies a statement Q and a statement R also implies Q, then
if either P or R is true, then Q has to be true. The reasoning is simple: since at least one of the statements P and R
is true, and since either of them would be sufficient to entail Q, Q is certainly true.

If I'm inside, I have my wallet on me.


If I'm outside, I have my wallet on me.
It is true that either I'm inside or I'm outside.
Therefore, I have my wallet on me.

The rule can be stated as:

P → Q, R → Q, P ∨ R
∴ Q
where the rule is that whenever instances of "P → Q", "R → Q", and "P ∨ R" appear on lines of a proof, "Q" can be
placed on a subsequent line.

82.1 Formal notation


The disjunction elimination rule may be written in sequent notation:

(P → Q), (R → Q), (P ∨ R) ⊢ Q

where ⊢ is a metalogical symbol meaning that Q is a syntactic consequence of P → Q, R → Q, and P ∨ R in
some logical system;
and expressed as a truth-functional tautology or theorem of propositional logic:

(((P → Q) ∧ (R → Q)) ∧ (P ∨ R)) → Q

where P, Q, and R are propositions expressed in some formal system.

82.2 See also


Disjunction


Argument in the alternative

Disjunct normal form

82.3 References
[1] Archived copy. Archived from the original on 2015-04-18. Retrieved 2015-04-09.

[2] http://www.cs.gsu.edu/~{}cscskp/Automata/proofs/node6.html
Chapter 83

Disjunction introduction

Disjunction introduction or addition (also called or introduction)[1][2][3] is a rule of inference of propositional


logic and almost every other deduction system. The rule makes it possible to introduce disjunctions to logical proofs.
It is the inference that if P is true, then P or Q must be true.

Socrates is a man.
Therefore, Socrates is a man or pigs are flying in formation over the English Channel.

The rule can be expressed as:

P
∴ P ∨ Q

where the rule is that whenever instances of "P" appear on lines of a proof, "P ∨ Q" can be placed on a subsequent
line.
More generally, it is also a simple valid argument form; this means that if the premise is true, then the conclusion is
also true, as any rule of inference should be, and an immediate inference, as it has a single proposition in its premises.
Disjunction introduction is not a rule in some paraconsistent logics because in combination with other rules of logic,
it leads to explosion (i.e. everything becomes provable) and paraconsistent logic tries to avoid explosion and to be
able to reason with contradictions. One of the solutions is to introduce disjunction with over rules. See Tradeoffs in
Paraconsistent logic.

83.1 Formal notation


The disjunction introduction rule may be written in sequent notation:

P ⊢ (P ∨ Q)

where ⊢ is a metalogical symbol meaning that P ∨ Q is a syntactic consequence of P in some logical system;
and expressed as a truth-functional tautology or theorem of propositional logic:

P → (P ∨ Q)

where P and Q are propositions expressed in some formal system.


83.2 References
[1] Hurley

[2] Moore and Parker

[3] Copi and Cohen


Chapter 84

Disjunctive normal form

In boolean logic, a disjunctive normal form (DNF) is a standardization (or normalization) of a logical formula
which is a disjunction of conjunctive clauses; it can also be described as an OR of ANDs, a sum of products, or (in
philosophical logic) a cluster concept. As a normal form, it is useful in automated theorem proving.

84.1 Denition
A logical formula is considered to be in DNF if and only if it is a disjunction of one or more conjunctions of one or
more literals.[1]:153 A DNF formula is in full disjunctive normal form if each of its variables appears exactly once
in every clause. As in conjunctive normal form (CNF), the only propositional operators in DNF are and, or, and not.
The not operator can only be used as part of a literal, which means that it can only precede a propositional variable.
The following is a formal grammar for DNF:

1. disjunction → (conjunction ∨ disjunction)

2. disjunction → conjunction

3. conjunction → (literal ∧ conjunction)

4. conjunction → literal

5. literal → ¬variable

6. literal → variable

Where variable is any variable.


For example, all of the following formulae are in DNF:

(A ∧ ¬B ∧ ¬C) ∨ (¬D ∧ E ∧ F)

(A ∧ B) ∨ C

A ∧ B

However, the following formulae are not in DNF:

¬(A ∨ B), since an OR is nested within a NOT

A ∨ (B ∧ (C ∨ D)), since an OR is nested within an AND


[Figure: Karnaugh map of one disjunctive normal form of the example Boolean function of A, B, C, D.]

84.2 Conversion to DNF

Converting a formula to DNF involves using logical equivalences, such as double negative elimination, De Morgan's
laws, and the distributive law.
All logical formulas can be converted into an equivalent disjunctive normal form.[1]:152-153 However, in some cases
conversion to DNF can lead to an exponential explosion of the formula. For example, the DNF of a logical formula
of the following form has 2ⁿ terms:

(X₁ ∨ Y₁) ∧ (X₂ ∨ Y₂) ∧ … ∧ (Xₙ ∨ Yₙ)

Any particular Boolean function can be represented by one and only one[note 1] full disjunctive normal form, one of
the canonical forms. In contrast, two different plain disjunctive normal forms may denote the same Boolean function;
see pictures.
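One simple, if exponential, way to obtain the full disjunctive normal form is to read it off the truth table, taking one conjunction per satisfying assignment. The Python sketch below is an illustration with an arbitrarily chosen function f; it is not taken from any particular library.

    from itertools import product

    def full_dnf(f, variables):
        """Full DNF of a Boolean function f, read off its truth table."""
        clauses = []
        for values in product([False, True], repeat=len(variables)):
            if f(dict(zip(variables, values))):
                literals = [v if val else "~" + v for v, val in zip(variables, values)]
                clauses.append("(" + " & ".join(literals) + ")")
        return " | ".join(clauses) if clauses else "False"

    # Example: (A and B) or C, expanded into its full disjunctive normal form.
    f = lambda env: (env["A"] and env["B"]) or env["C"]
    print(full_dnf(f, ["A", "B", "C"]))
    # (~A & ~B & C) | (~A & B & C) | (A & ~B & C) | (A & B & ~C) | (A & B & C)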

[Figure: Karnaugh map of a second, different disjunctive normal form of the same function. Despite the different grouping, the same fields contain a 1 as in the previous map.]

84.3 Complexity issues


An important variation used in the study of computational complexity is k-DNF. A formula is in k-DNF if it is in
DNF and each clause contains at most k literals. Dually to CNFs, the problem of deciding whether a given DNF is
true for every variable assignment is co-NP-complete; the same holds if only k-DNFs are considered.

84.4 See also


Algebraic normal form

Boolean function

Boolean-valued function

Conjunctive normal form

Horn clause

Karnaugh map

Logical graph
Propositional logic

Quine–McCluskey algorithm
Truth table

84.5 Notes
[1] Ignoring variations based on associativity and commutativity of AND and OR.

84.6 References
[1] B.A. Davey and H.A. Priestley (1990). Introduction to Lattices and Order. Cambridge Mathematical Textbooks. Cambridge
University Press.

84.7 External links


Hazewinkel, Michiel, ed. (2001) [1994], Disjunctive normal form, Encyclopedia of Mathematics, Springer
Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Chapter 85

Disjunctive syllogism

In classical logic, disjunctive syllogism[1][2] (historically known as modus tollendo ponens) is a valid argument
form which is a syllogism having a disjunctive statement for one of its premises.[3][4]

The breach is a safety violation, or it is not subject to fines.


The breach is not a safety violation.
Therefore, it is not subject to fines.

In propositional logic, disjunctive syllogism (also known as disjunction elimination and or elimination, or
abbreviated ∨E),[5][6][7][8] is a valid rule of inference. If we are told that at least one of two statements is true; and
also told that it is not the former that is true; we can infer that it has to be the latter that is true. If P is true or Q is
true and P is false, then Q is true. The reason this is called "disjunctive syllogism" is that, first, it is a syllogism, a
three-step argument, and second, it contains a logical disjunction, which simply means an "or" statement. "P or Q"
is a disjunction; P and Q are called the statement's disjuncts. The rule makes it possible to eliminate a disjunction
from a logical proof. It is the rule that:

P ∨ Q, ¬P
∴ Q
where the rule is that whenever instances of "P ∨ Q", and "¬P" appear on lines of a proof, "Q" can be placed
on a subsequent line.
Disjunctive syllogism is closely related and similar to hypothetical syllogism, in that it is also type of syllogism, and
also the name of a rule of inference. It is also related to the Law of noncontradiction and the Law of excluded middle,
two of the three traditional laws of thought.

85.1 Formal notation


The disjunctive syllogism rule may be written in sequent notation:

P ∨ Q, ¬P ⊢ Q

where ⊢ is a metalogical symbol meaning that Q is a syntactic consequence of P ∨ Q and ¬P in some logical system;
and expressed as a truth-functional tautology or theorem of propositional logic:

((P ∨ Q) ∧ ¬P) → Q

where P and Q are propositions expressed in some formal system.


85.2 Natural language examples


Here is an example:

I will choose soup or I will choose salad.


I will not choose soup.
Therefore, I will choose salad.

Here is another example:

It is red or it is blue.
It is not blue.
Therefore, it is red.

85.3 Inclusive and exclusive disjunction


Please observe that the disjunctive syllogism works whether 'or' is considered 'exclusive' or 'inclusive' disjunction. See
below for the definitions of these terms.
There are two kinds of logical disjunction:

inclusive means "and/or": at least one of them is true, or maybe both.


exclusive ("exclusive or", "xor") means exactly one must be true, but they cannot both be.

The widely used English language concept of "or" is often ambiguous between these two meanings, but the difference
is pivotal in evaluating disjunctive arguments.
This argument:

P or Q.
Not P.
Therefore, Q.

is valid and indifferent between both meanings. However, only in the exclusive meaning is the following form valid:

Either (only) P or (only) Q.


P.
Therefore, not Q.

However, if the fact is true it does not commit the fallacy.


With the inclusive meaning you could draw no conclusion from the first two premises of that argument. See affirming
a disjunct.

85.4 Related argument forms


Unlike modus ponendo ponens and modus ponendo tollens, with which it should not be confused, disjunctive syllogism
is often not made an explicit rule or axiom of logical systems, as the above arguments can be proven with a (slightly
devious) combination of reductio ad absurdum and disjunction elimination.
Other forms of syllogism:

hypothetical syllogism

categorical syllogism

Disjunctive syllogism holds in classical propositional logic and intuitionistic logic, but not in some paraconsistent
logics.[9]

85.5 References
[1] Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall. p. 362.

[2] Hurley, Patrick (1991). A Concise Introduction to Logic 4th edition. Wadsworth Publishing. pp. 3201.

[3] Hurley

[4] Copi and Cohen

[5] Sanford, David Hawley. 2003. If P, Then Q: Conditionals and the Foundations of Reasoning. London, UK: Routledge: 39

[6] Hurley

[7] Copi and Cohen

[8] Moore and Parker

[9] Chris Mortensen, Inconsistent Mathematics, Stanford encyclopedia of philosophy, First published Tue Jul 2, 1996; substan-
tive revision Thu Jul 31, 2008
Chapter 86

Distributive property

Distributivity redirects here. It is not to be confused with Distributivism.


In abstract algebra and formal logic, the distributive property of binary operations generalizes the distributive
law from elementary algebra. In propositional logic, distribution refers to two valid rules of replacement. The rules
allow one to reformulate conjunctions and disjunctions within logical proofs.

[Figure: Visualization of the distributive law for positive numbers, a·b + a·c = a·(b + c).]
For example, in arithmetic:

2 ⋅ (1 + 3) = (2 ⋅ 1) + (2 ⋅ 3), but 2 / (1 + 3) ≠ (2 / 1) + (2 / 3).

In the left-hand side of the first equation, the 2 multiplies the sum of 1 and 3; on the right-hand side, it multiplies the
1 and the 3 individually, with the products added afterwards. Because these give the same final answer (8), it is said
that multiplication by 2 distributes over addition of 1 and 3. Since one could have put any real numbers in place of
2, 1, and 3 above, and still have obtained a true equation, we say that multiplication of real numbers distributes over
addition of real numbers.

86.1 Denition
Given a set S and two binary operators ∗ and + on S, we say that the operation ∗:
is left-distributive over + if, given any elements x, y, and z of S,

x ∗ (y + z) = (x ∗ y) + (x ∗ z),

is right-distributive over + if, given any elements x, y, and z of S,

(y + z) ∗ x = (y ∗ x) + (z ∗ x),


is distributive over + if it is left- and right-distributive.[1]


Notice that when ∗ is commutative, the three conditions above are logically equivalent.

86.2 Meaning
The operators used for examples in this section are the binary operations of addition (+) and multiplication (⋅) of
numbers.
There is a distinction between left-distributivity and right-distributivity:

a ⋅ (b ± c) = a ⋅ b ± a ⋅ c (left-distributive)
(a ± b) ⋅ c = a ⋅ c ± b ⋅ c (right-distributive)

In either case, the distributive property can be described in words as:


To multiply a sum (or difference) by a factor, each summand (or minuend and subtrahend) is multiplied by this factor
and the resulting products are added (or subtracted).
If the operation outside the parentheses (in this case, the multiplication) is commutative, then left-distributivity implies
right-distributivity and vice versa.
One example of an operation that is only right-distributive is division, which is not commutative:

(a ± b) ÷ c = a ÷ c ± b ÷ c

In this case, left-distributivity does not apply:

a ÷ (b ± c) ≠ a ÷ b ± a ÷ c

The distributive laws are among the axioms for rings and fields. Examples of structures in which two operations are
mutually related to each other by the distributive law are Boolean algebras such as the algebra of sets or the switching
algebra. There are also combinations of operations that are not mutually distributive over each other; for example,
addition is not distributive over multiplication.
Multiplying sums can be put into words as follows: When a sum is multiplied by a sum, multiply each summand of
one sum with each summand of the other sum (keeping track of signs), and then add up all of the resulting products.

86.3 Examples

86.3.1 Real numbers


In the following examples, the use of the distributive law on the set of real numbers R is illustrated. When
multiplication is mentioned in elementary mathematics, it usually refers to this kind of multiplication. From the point of
view of algebra, the real numbers form a field, which ensures the validity of the distributive law.

First example (mental and written multiplication)

During mental arithmetic, distributivity is often used unconsciously:

6 · 16 = 6 · (10 + 6) = 6 · 10 + 6 · 6 = 60 + 36 = 96

Thus, to calculate 6 · 16 in your head, you first multiply 6 · 10 and 6 · 6 and add the intermediate results. Written multiplication is also based on the distributive law.

Second example (with variables)

3a²b · (4a − 5b) = 3a²b · 4a − 3a²b · 5b = 12a³b − 15a²b²

Third example (with two sums)

(a + b) · (a − b) = a · (a − b) + b · (a − b) = a² − ab + ba − b² = a² − b²
                  = (a + b) · a − (a + b) · b = a² + ba − ab − b² = a² − b²

Here the distributive law was applied twice, and it does not matter which bracket is first multiplied out.

Fourth Example Here the distributive law is applied the other way around compared to the previous examples.
Consider

12a³b² − 30a⁴bc + 18a²b³c².

Since the factor 6a²b occurs in all summands, it can be factored out. That is, due to the distributive law one obtains

12a³b² − 30a⁴bc + 18a²b³c² = 6a²b(2ab − 5a²c + 3b²c²).
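The same factoring can be reproduced with a computer algebra system. The sketch below assumes SymPy is available; sympy.factor is a standard SymPy function, though the printed ordering of the factored terms may differ from the text.

    import sympy as sp

    a, b, c = sp.symbols('a b c')
    expr = 12*a**3*b**2 - 30*a**4*b*c + 18*a**2*b**3*c**2
    # factor() pulls out the common factor 6*a**2*b, as in the example above.
    print(sp.factor(expr))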

86.3.2 Matrices
The distributive law is valid for matrix multiplication. More precisely,

(A + B) · C = A · C + B · C

for all l × m matrices A, B and m × n matrices C, as well as

A · (B + C) = A · B + A · C

for all l × m matrices A and m × n matrices B, C. Because the commutative property does not hold for matrix multiplication, the second law does not follow from the first law. In this case, they are two different laws.
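Both matrix laws can be spot-checked numerically; the following sketch (an illustration added here) assumes NumPy is available and uses small random integer matrices of compatible shapes.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.integers(-5, 5, size=(2, 3))
    B = rng.integers(-5, 5, size=(2, 3))
    C = rng.integers(-5, 5, size=(3, 4))
    D = rng.integers(-5, 5, size=(3, 4))

    # Right-distributivity: (A + B) C = A C + B C
    print(np.array_equal((A + B) @ C, A @ C + B @ C))   # True
    # Left-distributivity: A (C + D) = A C + A D
    print(np.array_equal(A @ (C + D), A @ C + A @ D))   # True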

86.3.3 Other examples


1. Multiplication of ordinal numbers, in contrast, is only left-distributive, not right-distributive.
2. The cross product is left- and right-distributive over vector addition, though not commutative.
3. The union of sets is distributive over intersection, and intersection is distributive over union.
4. Logical disjunction (or) is distributive over logical conjunction (and), and vice versa.
5. For real numbers (and for any totally ordered set), the maximum operation is distributive over the minimum
operation, and vice versa: max(a, min(b, c)) = min(max(a, b), max(a, c)) and min(a, max(b, c)) = max(min(a,
b), min(a, c)).
6. For integers, the greatest common divisor is distributive over the least common multiple, and vice versa: gcd(a, lcm(b, c)) = lcm(gcd(a, b), gcd(a, c)) and lcm(a, gcd(b, c)) = gcd(lcm(a, b), lcm(a, c)); a brute-force check of this and the following example appears after this list.
7. For real numbers, addition distributes over the maximum operation, and also over the minimum operation: a
+ max(b, c) = max(a + b, a + c) and a + min(b, c) = min(a + b, a + c).
8. For binomial multiplication, distribution is sometimes referred to as the FOIL Method[2] (First terms ac, Outer
ad, Inner bc, and Last bd) such as: (a + b) * (c + d) = ac + ad + bc + bd.
9. Polynomial multiplication is similar to that for binomials: (a + b) * (c + d + e) = ac + ad + ae + bc + bd + be.
10. Complex number multiplication is distributive: u(v + w) = uv + uw, (u + v)w = uw + vw
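As promised above, examples 6 and 7 can be verified by brute force over a small range of integers; the sketch below (added here as an illustration) uses math.gcd from the standard library and a small lcm helper.

    from itertools import product
    from math import gcd

    def lcm(x, y):
        return x * y // gcd(x, y)

    nums = range(1, 13)
    # Example 6: gcd distributes over lcm.
    print(all(gcd(a, lcm(b, c)) == lcm(gcd(a, b), gcd(a, c))
              for a, b, c in product(nums, repeat=3)))   # True
    # Example 7: addition distributes over max and over min.
    print(all(a + max(b, c) == max(a + b, a + c) and
              a + min(b, c) == min(a + b, a + c)
              for a, b, c in product(nums, repeat=3)))   # True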

86.4 Propositional logic

86.4.1 Rule of replacement


In standard truth-functional propositional logic, distribution[3][4] in logical proofs uses two valid rules of replacement
to expand individual occurrences of certain logical connectives, within some formula, into separate applications of
those connectives across subformulas of the given formula. The rules are:

(P ∧ (Q ∨ R)) ⇔ ((P ∧ Q) ∨ (P ∧ R))

and

(P ∨ (Q ∧ R)) ⇔ ((P ∨ Q) ∧ (P ∨ R))

where " ", also written , is a metalogical symbol representing can be replaced in a proof with or is logically
equivalent to.

86.4.2 Truth functional connectives


Distributivity is a property of some logical connectives of truth-functional propositional logic. The following logical
equivalences demonstrate that distributivity is a property of particular connectives. The following are truth-functional
tautologies.

Distribution of conjunction over conjunction: (P ∧ (Q ∧ R)) ↔ ((P ∧ Q) ∧ (P ∧ R))

Distribution of conjunction over disjunction: (P ∧ (Q ∨ R)) ↔ ((P ∧ Q) ∨ (P ∧ R))

Distribution of disjunction over conjunction: (P ∨ (Q ∧ R)) ↔ ((P ∨ Q) ∧ (P ∨ R))

Distribution of disjunction over disjunction: (P ∨ (Q ∨ R)) ↔ ((P ∨ Q) ∨ (P ∨ R))

Distribution of implication: (P → (Q → R)) ↔ ((P → Q) → (P → R))

Distribution of implication over equivalence: (P → (Q ↔ R)) ↔ ((P → Q) ↔ (P → R))

Distribution of disjunction over equivalence: (P ∨ (Q ↔ R)) ↔ ((P ∨ Q) ↔ (P ∨ R))

Double distribution:

((P ∧ Q) ∨ (R ∧ S)) ↔ (((P ∨ R) ∧ (P ∨ S)) ∧ ((Q ∨ R) ∧ (Q ∨ S)))

((P ∨ Q) ∧ (R ∨ S)) ↔ (((P ∧ R) ∨ (P ∧ S)) ∨ ((Q ∧ R) ∨ (Q ∧ S)))
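Because these are truth-functional tautologies, each can be verified by checking every assignment of truth values; the short Python sketch below is an illustration added here, with implies as an ad hoc helper for the material conditional.

    from itertools import product

    def implies(p, q):
        # Material conditional.
        return (not p) or q

    bools = (True, False)

    # Distribution of conjunction over disjunction.
    print(all((p and (q or r)) == ((p and q) or (p and r))
              for p, q, r in product(bools, repeat=3)))                 # True
    # Distribution of implication.
    print(all(implies(p, implies(q, r)) == implies(implies(p, q), implies(p, r))
              for p, q, r in product(bools, repeat=3)))                 # True
    # Double distribution (first form).
    print(all(((p and q) or (r and s)) ==
              (((p or r) and (p or s)) and ((q or r) and (q or s)))
              for p, q, r, s in product(bools, repeat=4)))              # True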

86.5 Distributivity and rounding


In practice, the distributive property of multiplication (and division) over addition may appear to be compromised or lost because of the limitations of arithmetic precision. For example, the identity 1/3 + 1/3 + 1/3 = (1 + 1 + 1)/3 appears to fail if the addition is conducted in decimal arithmetic; however, if many significant digits are used, the calculation will result in a closer approximation to the correct result. For example, if the arithmetical calculation takes the form 0.33333 + 0.33333 + 0.33333 = 0.99999 ≈ 1, this result is a closer approximation than if fewer significant digits had been used. Even when fractional numbers can be represented exactly in arithmetical form, errors will be introduced if those arithmetical values are rounded or truncated. For example, buying two books, each priced at £14.99 before a tax of 17.5%, in two separate transactions will actually save £0.01 over buying them together: £14.99 × 1.175 = £17.61 to the nearest £0.01, giving a total expenditure of £35.22, but £29.98 × 1.175 = £35.23. Methods such as banker's rounding may help in some cases, as may increasing the precision used, but ultimately some calculation errors are inevitable.
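The book-buying example can be reproduced directly; the lines below (an added illustration) round each step to two decimal places, the way a till would.

    price, vat = 14.99, 1.175                # price and tax multiplier from the example
    separately = 2 * round(price * vat, 2)   # two transactions, rounded each time
    together = round(2 * price * vat, 2)     # one transaction, rounded once
    print(separately, together)              # 35.22 35.23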

86.6 Distributivity in rings


Distributivity is most commonly found in rings and distributive lattices.
A ring has two binary operations (commonly called "+" and "·"), and one of the requirements of a ring is that · must distribute over +. Most kinds of numbers (example 1) and matrices (example 4) form rings. A lattice is another kind of algebraic structure with two binary operations, ∨ and ∧. If either of these operations (say ∧) distributes over the other (∨), then ∨ must also distribute over ∧, and the lattice is called distributive. See also the article on distributivity (order theory).
Examples 4 and 5 are Boolean algebras, which can be interpreted either as a special kind of ring (a Boolean ring) or a special kind of distributive lattice (a Boolean lattice). Each interpretation is responsible for different distributive laws in the Boolean algebra. Examples 6 and 7 are distributive lattices which are not Boolean algebras.
Failure of one of the two distributive laws brings about near-rings and near-fields instead of rings and division rings respectively. The operations are usually configured to have the near-ring or near-field distributive on the right but not on the left.
Rings and distributive lattices are both special kinds of rigs, which are certain generalizations of rings. Those numbers in example 1 that do not form rings at least form rigs. Near-rigs are a further generalization of rigs that are left-distributive but not right-distributive; example 2 is a near-rig.

86.7 Generalizations of distributivity


In several mathematical areas, generalized distributivity laws are considered. This may involve the weakening of the above conditions or the extension to infinitary operations. Especially in order theory one finds numerous important variants of distributivity, some of which include infinitary operations, such as the infinite distributive law; others are defined in the presence of only one binary operation. The corresponding definitions and their relations are given in the article distributivity (order theory). This also includes the notion of a completely distributive lattice.
In the presence of an ordering relation, one can also weaken the above equalities by replacing = by either ≤ or ≥. Naturally, this will lead to meaningful concepts only in some situations. An application of this principle is the notion of sub-distributivity as explained in the article on interval arithmetic.
In category theory, if (S, μ, η) and (S′, μ′, η′) are monads on a category C, a distributive law S.S′ → S′.S is a natural transformation λ : S.S′ → S′.S such that (S′, λ) is a lax map of monads S → S and (S, λ) is a colax map of monads S′ → S′. This is exactly the data needed to define a monad structure on S′.S: the multiplication map is built from μ and μ′ after applying λ in the middle, and the unit map comes from the two units η and η′. See: distributive law between monads.
A generalized distributive law has also been proposed in the area of information theory.

86.7.1 Notions of antidistributivity


The ubiquitous identity that relates inverses to the binary operation in any group, namely (xy)⁻¹ = y⁻¹x⁻¹, which is taken as an axiom in the more general context of a semigroup with involution, has sometimes been called an antidistributive property (of inversion as a unary operation).[5]
In the context of a near-ring, which removes the commutativity of the additively written group and assumes only one-sided distributivity, one can speak of (two-sided) distributive elements but also of antidistributive elements. The latter reverse the order of (the non-commutative) addition; assuming a left near-ring (i.e. one in which all elements distribute when multiplied on the left), an antidistributive element a reverses the order of addition when multiplied to the right: (x + y)a = ya + xa.[6]
In the study of propositional logic and Boolean algebra, the term antidistributive law is sometimes used to denote
the interchange between conjunction and disjunction when implication factors over them:[7]

(a ∨ b) ⇒ c ≡ (a ⇒ c) ∧ (b ⇒ c)

(a ∧ b) ⇒ c ≡ (a ⇒ c) ∨ (b ⇒ c)

These two tautologies are a direct consequence of the duality in De Morgan's laws.

86.8 Notes
[1] Distributivity of Binary Operations from Mathonline

[2] Kim Steward (2011) Multiplying Polynomials from Virtual Math Lab at West Texas A&M University

[3] Elliott Mendelson (1964) Introduction to Mathematical Logic, page 21, D. Van Nostrand Company

[4] Alfred Tarski (1941) Introduction to Logic, page 52, Oxford University Press

[5] Chris Brink; Wolfram Kahl; Gunther Schmidt (1997). Relational Methods in Computer Science. Springer. p. 4. ISBN
978-3-211-82971-4.

[6] Celestina Cotti Ferrero; Giovanni Ferrero (2002). Nearrings: Some Developments Linked to Semigroups and Groups.
Kluwer Academic Publishers. pp. 62 and 67. ISBN 978-1-4613-0267-4.

[7] Eric C.R. Hehner (1993). A Practical Theory of Programming. Springer Science & Business Media. p. 230. ISBN
978-1-4419-8596-5.

86.9 External links


A demonstration of the Distributive Law for integer arithmetic (from cut-the-knot)
Chapter 87

DiVincenzo's criteria

The DiVincenzo criteria are a list of conditions that are necessary for constructing a quantum computer, proposed by the theoretical physicist David P. DiVincenzo in his 2000 paper "The Physical Implementation of Quantum Computation".[1] Quantum computation was first proposed by Richard Feynman[2] (1982) as a means to efficiently simulate quantum systems. There have been many proposals for how to construct a quantum computer, all of which meet with varying degrees of success against the different challenges of constructing quantum devices. Some of these proposals involve using superconducting qubits, trapped ions, liquid and solid state nuclear magnetic resonance, or optical cluster states, all of which have remarkable prospects; however, they all have issues that prevent practical implementation. The DiVincenzo criteria are a list of conditions that are necessary for constructing the quantum computer as proposed by Feynman.
The DiVincenzo criteria consist of 5+2 conditions that an experimental setup must satisfy in order to successfully implement quantum algorithms such as Grover's search algorithm or Shor factorisation. The 2 additional conditions are necessary in order to implement quantum communication, such as that used in quantum key distribution. One can consider DiVincenzo's criteria for a classical computer and demonstrate that these are satisfied. Comparing each statement between the classical and quantum regimes highlights both the complications that arise in dealing with quantum systems and the source of the quantum speed-up.

87.1 Statement of the criteria


In order to construct a quantum computer, the following conditions must be met by the experimental setup. The first five are necessary for quantum computation and the remaining two are necessary for quantum communication.

1. A scalable physical system with well characterised qubits.

2. The ability to initialise the state of the qubits to a simple fiducial state.

3. Long relevant decoherence times.

4. A universal set of quantum gates.

5. A qubit-specific measurement capability.

6. The ability to interconvert stationary and flying qubits.

7. The ability to faithfully transmit flying qubits between specified locations.

87.2 Why the DiVincenzo criteria?


DiVincenzo's criteria were proposed after many attempts to construct a quantum computer. Below we state why these statements are important and present examples to highlight them.


87.2.1 Scalable with well characterised qubits

Most models of quantum computation require the use of qubits for computation. Some models use qudits, and in this case the first criterion is logically extended. Quantum mechanically, a qubit is defined as a 2-level system with some energy gap. This can sometimes be difficult to implement physically, and so we can focus on a particular transition of atomic levels, etc. Whatever system we choose, we require that the system remain (almost) always in the subspace of these two levels, and in doing so we can say it is a well characterised qubit. An example of a system that is not well characterised would be 2 one-electron quantum dots (potential wells where we only allow single-electron occupation). The electron being in one well or the other is properly characterised as a single qubit; however, if we consider a state such as |00⟩ + |11⟩ then this would correspond to a two-qubit state. Such a state is not physically allowed (we only permit single-electron occupation), and so we cannot say that we have 2 well characterised qubits.
With today's technology it is simple to create a system that has a well characterised qubit, but it is a challenge to create a system that has an arbitrary number of well characterised qubits. Currently one of the biggest problems being faced is that we require exponentially larger experimental setups in order to accommodate a greater number of qubits. The quantum computer is capable of exponential speed-ups over current classical algorithms for prime factorisation of numbers, but if this requires an exponentially large setup then our advantage is lost. In the case of liquid state nuclear magnetic resonance[3] it was found that the macroscopic size of the system caused the computational qubits to be initialised in a highly mixed state. In spite of this, a computation model was found that could still use these mixed states for computation, but the more mixed these states are, the weaker the induction signal corresponding to a quantum measurement. If this signal is below the noise level, a solution is to increase the size of the sample to boost the signal strength, and this is the source of the non-scalability of liquid state NMR as a means for quantum computation. One could say that as the number of computational qubits increases they become less well characterised, until we reach a threshold at which they are no longer useful.

87.2.2 The ability to initialise the state of the qubits to a simple fiducial state

All models of quantum computation (and classical computation) are based on performing some operations on a state (qubit or bit) and finally measuring/reading out a result, a procedure that is dependent on the initial state of the system. In particular, the unitary nature of quantum mechanics makes initialisation of the qubits extremely important. In many cases the approach to initialising a state is to let the system anneal into the ground state, and then the computation can start. This is of particular importance when you consider quantum error correction, a procedure for performing quantum processes that are robust to certain types of noise, which requires a large supply of freshly initialised qubits. This places restrictions on how fast the initialisation needs to be. An example of annealing is described in Petta et al.[4] (2005), where a Bell pair of electrons is prepared in quantum dots. This procedure relies on T1 to anneal the system, and the paper is dedicated to measuring the T2 relaxation time of the quantum dot system. This quantum dot system gives an idea of the timescales involved in initialising a system by annealing (~milliseconds), and this would become a fundamental issue given that the decoherence time is shorter than the initialisation time. Alternative approaches (usually involving optical pumping[5]) have been developed to reduce the initialisation time and improve the fidelity of the procedure.

87.2.3 Long relevant decoherence times

The emergence of classicality in large quantum systems comes about from the increased decoherence experienced by macroscopic systems. The timescale associated with this loss of quantum behaviour then becomes important when constructing large quantum computation systems. The quantum resources used by quantum computing models (superposition and/or entanglement) are quickly destroyed by decoherence, and so long decoherence times are desired, much longer than the average gate time, so that decoherence can be combated with error correction and/or dynamical decoupling. In solid state NMR using NV centres, the orbital electron experiences short decoherence times, making computations problematic. The proposed solution has been to encode the qubit into the nuclear spin of the nitrogen atom, which increases the decoherence time. In other systems, such as the quantum dot, strong environmental effects limit the T2 decoherence time. One of the problems in satisfying this criterion is that systems that can be manipulated quickly (through strong interactions) tend to experience decoherence via these very same strong interactions, and so there is a trade-off between the ability to implement control and increased decoherence.

87.2.4 A universal set of quantum gates

Both in classical computing and quantum computing, the algorithms that we are permitted to implement are restricted by the gates we are capable of applying to the state. In the case of quantum computing, a universal quantum computer (a quantum Turing machine) can be constructed with a very small set of 1- and 2-qubit gates. Any experimental setup that manages to have well characterised qubits, quick faithful initialisation, and long decoherence times must also be capable of influencing the Hamiltonian of the system in order to implement coherent changes capable of realising a universal set of gates. Perfect implementation of gates is not always necessary, as gate sequences can be created that are more robust to certain systematic and random noise models.[6] Liquid state NMR was one of the first setups capable of implementing a universal set of gates through the use of precise timing and magnetic field pulses; however, as mentioned above, this system was not scalable.
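As a purely illustrative aside (not part of DiVincenzo's paper), one commonly cited small universal set consists of the Hadamard gate, the π/8 (T) gate, and CNOT; the NumPy sketch below builds these matrices and checks that each is unitary, which is the minimal requirement for a coherent gate.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
    T = np.diag([1, np.exp(1j * np.pi / 4)])          # pi/8 gate
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])                   # controlled-NOT on two qubits

    def is_unitary(U):
        return np.allclose(U.conj().T @ U, np.eye(U.shape[0]))

    print(all(is_unitary(U) for U in (H, T, CNOT)))   # True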

87.2.5 A qubit-specific measurement capability

For any process applied to a quantum state of qubits, the final measurement is of fundamental importance when performing computations. If our system allows for non-demolition projective measurements, then, in principle, these can be used for state preparation. Measurement is at the core of all quantum algorithms, especially in concepts such as teleportation. Note that some measurement techniques are not 100% efficient, and so these tend to be corrected by repeating the experiment in order to increase the success rate. Examples of reliable measurement devices are found in optical systems, where homodyne detectors have reached the point of reliably counting how many photons have passed through the detecting cross-section. For more challenging measurement systems we can look at quantum dots in Petta et al.[4] (2005), where they use the energy gap between |01⟩ + |10⟩ and |01⟩ − |10⟩ to measure the relative spins of the 2 electrons.

87.2.6 The ability to interconvert stationary and flying qubits and the ability to faithfully transmit flying qubits between specified locations

These two conditions are only necessary when considering quantum communication protocols, such as quantum key distribution, that involve the exchange of coherent quantum states or the exchange of entangled qubits (for example the BB84 protocol). When pairs of entangled qubits are created in some experimental setup, usually these qubits are 'stationary' and cannot be moved from the laboratory. If these qubits can be teleported to flying qubits, such as qubits encoded into the polarisation of a photon, then we can consider sending entangled photons to a third party and having them extract that information, leaving two entangled stationary qubits at two different locations. The ability to transmit the flying qubit without decoherence is a major problem. Currently at the Institute for Quantum Computing there are efforts to produce a pair of entangled photons and transmit one of the photons to some other part of the world by reflecting it off of a satellite. The main issue now is the decoherence the photon experiences whilst interacting with particles in the atmosphere. Similarly, some attempts have been made to use optical fibres, although the attenuation of the signal has stopped this from becoming a reality.

87.3 Acknowledgements
This article is based principally on DiVincenzo's 2000 paper[1] and lectures from the Perimeter Institute for Theoretical Physics.

87.4 See also

Quantum computing

Nuclear magnetic resonance quantum computer

Trapped ion quantum computer



87.5 References
[1] DiVincenzo, David P. (2000-04-13). The Physical Implementation of Quantum Computation. Fortschritte der Physik.
48: 771783. arXiv:quant-ph/0002077 [quant-ph]. doi:10.1002/1521-3978(200009)48:9/11<771::AID-PROP771>3.0.CO;2-
E.

[2] Feynman, R. P. (June 1982). Simulating physics with computers. International Journal of Theoretical Physics. 21 (6):
467488. Bibcode:1982IJTP...21..467F. doi:10.1007/BF02650179.

[3] Menicucci NC, Caves CM (2002). Local realistic model for the dynamics of bulk-ensemble NMR information pro-
cessing. Physical Review Letters. 88 (16): 167901. Bibcode:2002PhRvL..88p7901M. PMID 11955265. arXiv:quant-
ph/0111152 . doi:10.1103/PhysRevLett.88.167901.

[4] Petta, J. R.; Johnson, A. C.; Taylor, J. M.; Laird, E. A.; Yacoby, A.; Lukin, M. D.; Marcus, C. M.; Hanson, M. P.; Gossard,
A. C. (September 2005). Coherent Manipulation of Coupled Electron Spins in Semiconductor Quantum Dots. Science.
309 (5744): 21802184. doi:10.1126/science.1116955.

[5] Atatüre, Mete; Dreiser, Jan; Badolato, Antonio; Högele, Alexander; Karrai, Khaled; Imamoglu, Atac (April 2006).
"Quantum-Dot Spin-State Preparation with Near-Unity Fidelity". Science. 312 (5773): 551-553. PMID 16601152.
doi:10.1126/science.1126074.

[6] Green, Todd J.; Sastrawan, Jarrah; Uys, Hermann; Biercuk, Michael J. (September 2013). Arbitrary quantum control of
qubits in the presence of universal noise. New Journal of Physics. 15 (9): 095004. doi:10.1088/1367-2630/15/9/095004.
Chapter 88

Domain of discourse

In the formal sciences, the domain of discourse, also called the universe of discourse, universal set, or simply
universe, is the set of entities over which certain variables of interest in some formal treatment may range.

88.1 Overview
The domain of discourse is usually identified in the preliminaries, so that there is no need in the further treatment to specify each time the range of the relevant variables.[1] Many logicians distinguish, sometimes only tacitly, between the domain of a science and the universe of discourse of a formalization of the science.[2] Giuseppe Peano formalized number theory (the arithmetic of positive integers) taking its domain to be the positive integers and the universe of discourse to include all numbers, not just integers.

88.2 Examples
For example, in an interpretation of first-order logic, the domain of discourse is the set of individuals over which the quantifiers range. In one interpretation, the domain of discourse could be the set of real numbers; in another interpretation, it could be the set of natural numbers. If no domain of discourse has been identified, a proposition such as ∀x (x² ≠ 2) is ambiguous. If the domain of discourse is the set of real numbers, the proposition is false, with x = √2 as a counterexample; if the domain is the set of natural numbers, the proposition is true, since 2 is not the square of any natural number.
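The effect of the choice of domain can be imitated with finite stand-ins; the Python lines below (an illustration added here, using a small sample of reals and an initial segment of the naturals) evaluate the universally quantified statement over each.

    import math

    def proposition_holds(domain, tol=1e-12):
        # "For every x in the domain, x squared is not 2" (with a float tolerance).
        return all(abs(x * x - 2) > tol for x in domain)

    reals_sample = [0.5, 1.0, math.sqrt(2), 3.0]   # contains the counterexample sqrt(2)
    naturals_sample = range(0, 100)

    print(proposition_holds(reals_sample))     # False
    print(proposition_holds(naturals_sample))  # True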

88.3 Universe of discourse


The term universe of discourse generally refers to the collection of objects being discussed in a specific discourse. In model-theoretical semantics, a universe of discourse is the set of entities that a model is based on. The concept universe of discourse is generally attributed to Augustus De Morgan (1846), but the name was used for the first time in history by George Boole (1854) on page 42 of his Laws of Thought, in a long and incisive passage well worth study. Boole's definition is quoted below. The concept, probably discovered independently by Boole in 1847, played a crucial role in his philosophy of logic, especially in his stunning principle of wholistic reference.
A database is a model of some aspect of the reality of an organisation. It is conventional to call this reality the universe of discourse or domain of discourse.

88.4 Boole's 1854 definition


In every discourse, whether of the mind conversing with its own thoughts, or of the individual in his intercourse with others, there is an assumed or expressed limit within which the subjects of its operation are confined. The most unfettered discourse is that in which the words we use are understood in the widest possible application, and for them the limits of discourse are co-extensive with those of the universe itself. But more usually we confine ourselves to a less spacious field. Sometimes, in discoursing of men we imply (without expressing the limitation) that it is of men only under certain circumstances and conditions that we speak, as of civilized men, or of men in the vigour of life, or of men under some other condition or relation. Now, whatever may be the extent of the field within which all the objects of our discourse are found, that field may properly be termed the universe of discourse. Furthermore, this universe of discourse is in the strictest sense the ultimate subject of the discourse.
George Boole, [3]


88.5 See also


Domain of a function

Domain theory

Interpretation (logic)

Term algebra

Universe (mathematics)

88.6 References
[1] Corcoran, John. Universe of discourse. Cambridge Dictionary of Philosophy, Cambridge University Press, 1995, p. 941.

[2] José Miguel Sagüillo, "Domains of sciences, universe of discourse, and omega arguments", History and Philosophy of Logic, vol. 20 (1999), pp. 267-280.

[3] Page 42: George Boole. 1854/2003. The Laws of Thought. Facsimile of 1854 edition, with an introduction by J. Corcoran. Buffalo: Prometheus Books (2003). Reviewed by James van Evra in Philosophy in Review 24 (2004): 167-169.
Chapter 89

Donkey sentence

Donkey sentences are sentences that contain a pronoun whose reference is clear to the reader (it is bound semantically)
but whose syntactical role in the sentence poses challenges to grammarians.[1][2] The pronoun in question is sometimes
termed a donkey pronoun or donkey anaphora.
The following sentences are examples of donkey sentences.

Every farmer who owns a donkey beats it. Peter Geach (1962), Reference and Generality

Every police officer who arrested a murderer insulted him.

Such sentences defy straightforward attempts to generate their formal language equivalents. The difficulty is with understanding how English speakers parse such sentences.[3]

89.1 History
Peter Geach's original donkey sentence was a counterexample to Richard Montague's proposal for a generalized formal representation of quantification in natural language (see Geach 1962). The example was reused by David Lewis (1975), Gareth Evans (1977) and many others, and is still quoted in recent publications.

89.2 Features
Features of the sentence "Every farmer who owns a donkey beats it" require careful consideration for adequate description (though reading "each" in place of "every" does simplify the formal analysis). The donkey pronoun in this case is the word it. The indefinite article 'a' is normally understood as an existential quantifier, but the most natural reading of the donkey sentence requires it to be understood as a nested universal quantifier.
There is nothing wrong with donkey sentences: they are grammatically correct, they are well-formed, their syntax is regular. They are also logically meaningful, they have well-defined truth conditions, and their semantics are unambiguous. However, it is difficult to explain how donkey sentences produce their semantic results, and how those results generalize consistently with all other language use. If such an analysis were successful, it might allow a computer program to accurately translate natural language forms into logical form.[4] The question is, how are natural language users, apparently effortlessly, agreeing on the meaning of sentences like these?
There may be several equivalent ways of describing this process. In fact, Hans Kamp (1981) and Irene Heim (1982) independently proposed very similar accounts in different terminology, which they called discourse representation theory (DRT) and file change semantics (FCS) respectively.
In 2007, Adrian Brasoveanu published studies of donkey pronoun analogs in Hindi, and analyses of complex and modal versions of donkey pronouns in English.


89.3 Discourse representation theory


Donkey sentences became a major force in advancing semantic research in the 1980s, with the introduction of discourse representation theory (DRT). During that time, an effort was made to settle the inconsistencies which arose from the attempts to translate donkey sentences into first-order logic.
Donkey sentences present the following problem when represented in first-order logic: the systematic translation of every existential expression in the sentence into existential quantifiers produces an incorrect representation of the sentence, since it leaves a free occurrence of the variable y in BEAT(x, y):

∀x (FARMER(x) ∧ ∃y (DONKEY(y) ∧ OWNS(x, y)) → BEAT(x, y))


Trying to extend the scope of the existential quantifier also does not solve the problem:

∀x ∃y ((FARMER(x) ∧ DONKEY(y) ∧ OWNS(x, y)) → BEAT(x, y))


In this case, the logical translation fails to give correct truth conditions to donkey sentences: imagine a situation where there is a farmer owning a donkey and a pig, and not beating either of them. The formula will be true in that situation, because for each farmer we need to find at least one object that either is not a donkey owned by this farmer, or is beaten by the farmer. Hence, if this object denotes the pig, the sentence will be true in that situation.
A correct translation into first-order logic for the donkey sentence seems to be:

∀x ∀y ((FARMER(x) ∧ DONKEY(y) ∧ OWNS(x, y)) → BEAT(x, y))


Unfortunately, this translation leads to a serious problem of inconsistency. One possible interpretation, for example, might be that every farmer that owns any donkeys beats every donkey. Clearly this is rarely the intended meaning. Indefinites must sometimes be interpreted as existential quantifiers, and other times as universal quantifiers, without any apparent regularity.
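The difference between the second and third translations can be made concrete by evaluating them in the small model just described (one farmer who owns a donkey and a pig and beats neither). The Python sketch below is an illustration added here; the predicate names mirror those in the formulas.

    from itertools import product

    entities = ['farmer1', 'donkey1', 'pig1']
    FARMER = {'farmer1'}
    DONKEY = {'donkey1'}
    OWNS = {('farmer1', 'donkey1'), ('farmer1', 'pig1')}
    BEAT = set()                                   # the farmer beats nothing

    def implies(p, q):
        return (not p) or q

    # forall x exists y ((FARMER(x) and DONKEY(y) and OWNS(x,y)) -> BEAT(x,y))
    extended_scope = all(
        any(implies(x in FARMER and y in DONKEY and (x, y) in OWNS, (x, y) in BEAT)
            for y in entities)
        for x in entities)

    # forall x forall y ((FARMER(x) and DONKEY(y) and OWNS(x,y)) -> BEAT(x,y))
    universal = all(
        implies(x in FARMER and y in DONKEY and (x, y) in OWNS, (x, y) in BEAT)
        for x, y in product(entities, repeat=2))

    print(extended_scope)   # True, even though no donkey is beaten
    print(universal)        # False, matching the intended reading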
The solution that DRT provides for the donkey sentence problem can be roughly outlined as follows: the common semantic function of non-anaphoric noun phrases is the introduction of a new discourse referent, which is in turn available for the binding of anaphoric expressions. No quantifiers are introduced into the representation, thus overcoming the scope problem that the logical translations had.

89.4 See also


Epsilon calculus
Garden path sentence
Generic antecedent
Lambda calculus
Montague grammar
Singular they

89.5 Notes
[1] Emar Maier describes donkey pronouns as "bound but not c-commanded" in a Linguist List review of Paul D. Elbourne's Situations and Individuals (MIT Press, 2006).

[2] Barker and Shan define a donkey pronoun as "a pronoun that lies outside the restrictor of a quantifier or the antecedent of a conditional, yet covaries with some quantificational element inside it, usually an indefinite." Chris Barker and Chung-chieh Shan, 'Donkey Anaphora is Simply Binding', Archived May 15, 2008, at the Wayback Machine, colloquium presentation, Frankfurt, 2007.

[3] David Lewis describes this as his motivation for considering the issue in the introduction to Papers in Philosophical Logic, a collection of reprints of his articles: "There was no satisfactory way to assign relative scopes to quantifier phrases." (CUP, 1998: 2.)

[4] Alistair Knott, 'An Algorithmic Framework for Specifying the Semantics of Discourse Relations, Computational Intelli-
gence 16 (2000).

89.6 References
Kamp, H. and Reyle, U. 1993. From Discourse to Logic. Kluwer, Dordrecht.

Kadmon, N. 2001. Formal Pragmatics: Semantics, Pragmatics, Presupposition, and Focus. Oxford: Blackwell
Publishers.

89.7 Further reading


Abbott, Barbara. 'Donkey Demonstratives. Natural Language Semantics 10 (2002): 285298.

Barker, Chris. 'Individuation and Quantication'. Linguistic Inquiry 30 (1999): 683691.

Barker, Chris. 'Presuppositions for Proportional Quantiers. Natural Language Semantics 4 (1996): 237259.

Brasoveanu, Adrian. Structured Nominal and Modal Reference. Rutgers University PhD dissertation, 2007.

Burgess, John P. E Pluribus Unum: Plural Logic and Set Theory', Philosophia Mathematica 12' (2004):
193221.

Cheng, Lisa LS and C-T James Huang. 'Two Types of Donkey Sentences. Natural Language Semantics 4
(1996): 121163.

Cohen, Ariel. Think Generic! Stanford, California: CSLI Publications, 1999.

Conway, L. and S. Crain. 'Donkey Anaphora in Child Grammar'. In Proceedings of the North East Linguistics
Society (NELS) 25. University of Massachusetts Amherst, 1995.

Evans, Gareth. 'Pronouns. Linguistic Inquiry 11 (1980): 337362.

Geach Peter. Reference and Generality: An Examination of Some Medieval and Modern Theories. Ithaca, New
York: Cornell University Press, 1962.

Geurts, Bart. Presuppositions and Pronouns. Oxford: Elsevier, 1999.

Harman, Gilbert. 'Anaphoric Pronouns as Bound Variables: Syntax or Semantics?' Language 52 (1976):
7881.

Heim, Irene. 'E-Type Pronouns and Donkey Anaphora'. Linguistics and Philosophy 13 (1990): 137177.

Heim, Irene. The Semantics of Denite and Indenite Noun Phrases. University of Massachusetts Amherst
PhD dissertation, 1982.

Just, MA. 'Comprehending Quantied Sentences: The Relation between Sentencepicture and Semantic Mem-
ory Verication'. Cognitive Psychology 6 (1974): 216236.

Just, MA and PA Carpenter. 'Comprehension of Negation with Quantication'. Journal of Verbal Learning
and Verbal Behavior 10 (1971): 244253.

Kanazawa, Makoto. 'Singular Donkey Pronouns Are Semantically Singular'. Linguistics and Philosophy 24
(2001): 383403.

Kanazawa, Makoto. 'Weak vs. Strong Readings of Donkey Sentences and Monotonicity Inference in a Dy-
namic Setting'. Linguistics and Philosophy 17 (1994): 109158.

Krifka, Manfred. 'Pragmatic Strengthening in Plural Predications and Donkey Sentences. In Proceedings from
Semantics and Linguistic Theory (SALT) 6. Ithaca, New York: Cornell University, 1996. Pages 136153.

Lappin, Shalom. 'An Intensional Parametric Semantics for Vague Quantiers. Linguistics and Philosophy 23
(2000): 599620.

Lappin, Shalom Lappin and Nissim Francez. 'E-type Pronouns, i-Sums, and Donkey Anaphora'. Linguistics
and Philosophy 17 (1994): 391428.

Lappin, Shalom. 'Donkey Pronouns Unbound'. Theoretical Linguistics 15 (1989): 263286.

Lewis, David. Parts of Classes, Oxford: Blackwell Publishing, 1991.

Lewis, David. 'General Semantics. Synthese 22 (1970): 1827.

Partee, Barbara H. 'Opacity, Coreference, and Pronouns. Synthese 21 (1970): 359385.

Montague, Richard. 'Universal Grammar'. Theoria 26 (1970): 373398.

Neale, Stephen. Descriptions. Cambridge: MIT Press, 1990.

Neale, Stephen. 'Descriptive Pronouns and Donkey Anaphora'. Journal of Philosophy 87 (1990): 113-150.

Quine, Willard Van Orman. Word and Object. Cambridge, Massachusetts: MIT Press, 1970.

Rooij, Robert van. 'Free Choice Counterfactual Donkeys. Journal of Semantics 23 (2006): 383402.

Yoon, Y-E. Weak and Strong Interpretations of Quantiers and Denite NPs in English and Korean. University
of Texas at Austin PhD dissertation, 1994.


89.8 External links


The Handbook of Philosophical Logic

Discourse Representation Theory

Introduction to Discourse Representation Theory

SEP Entry

Archive of CSI 5386 Donkey Sentence Discussion

Barker, Chris. 'A Presuppositional Account of Proportional Ambiguity'. In Proceedings of Semantic and Lin-
guistic Theory (SALT) 3. Ithaca, New York: Cornell University, 1993. Pages 118.

Brasoveanu, Adrian. 'Donkey Pluralities: Plural Information States vs. Non-Atomic Individuals. In Proceed-
ings of Sinn und Bedeutung 11. Edited by E. Puig-Waldmller. Barcelona: Pompeu Fabra University, 2007.
Pages 106120.

Evans, Gareth. 'Pronouns, Quantiers, and Relative Clauses (I)'. Canadian Journal of Philosophy 7 (1977):
467536.

Geurts, Bart. 'Donkey Business. Linguistics and Philosophy 25 (2002): 129156.

Huang, C-T James. 'Logical Form'. Chapter 3 in Government and Binding Theory and the Minimalist Program:
Principles and Parameters in Syntactic Theory edited by Gert Webelhuth. Oxford and Cambridge: Blackwell
Publishing, 1995. Pages 127177.

Kamp, Hans. 'A Theory of Truth and Semantic Representation'. In J. Groenendijk and others (eds.). Formal
Methods in the Study of Language. Amsterdam: Mathematics Center, 1981.

Kitagawa, Yoshihishi. 'Copying Variables. Chapter 2 in Functional Structure(s), Form and Interpretation:
Perspectives from East Asian Languages. Edited by Yen-hui Audrey Li and others. Routledge, 2003. Pages
2864.

Lewis, David. 'Adverbs of Quantication'. In Formal Semantics of Natural Language. Edited by Edward L
Keenan. Cambridge: Cambridge University Press, 1975. Pages 315.

Montague, Richard. 'The Proper Treatment of Quantication in Ordinary English'. In KJJ Hintikka and others
(eds). Proceedings of the 1970 Stanford Workshop on Grammar and Semantics. Dordrecht: Reidel, 1973. Pages
212242.
Chapter 90

Double negation

This article is about the logical concept. For the linguistic concept, see double negative.

In propositional logic, double negation is the theorem that states that "If a statement is true, then it is not the case that the statement is not true." This is expressed by saying that a proposition A is logically equivalent to not (not-A), or by the formula A ≡ ~(~A), where the sign ≡ expresses logical equivalence and the sign ~ expresses negation.[1]
Like the law of the excluded middle, this principle is considered to be a law of thought in classical logic,[2] but it is disallowed by intuitionistic logic.[3] The principle was stated as a theorem of propositional logic by Russell and Whitehead in Principia Mathematica as:

∗4·13. ⊢. p ≡ ∼(∼p)[4]
This is the principle of double negation, i.e. a proposition is equivalent to the falsehood of its negation.

90.1 Double negative elimination


Double negative introduction and double negative elimination (also called double negation introduction and double negation elimination, or simply double negation) are two valid rules of replacement. They are the inferences that if A is true, then not not-A is true, and its converse, that if not not-A is true, then A is true. The rules allow one to introduce or eliminate a negation from a logical proof. They are based on the equivalence of, for example, "It is false that it is not raining." and "It is raining."
The double negation introduction rule is:

P ⇔ ¬¬P

and the double negation elimination rule is:

¬¬P ⇔ P

where "⇔" is a metalogical symbol representing "can be replaced in a proof with".

90.1.1 Formal notation


The double negation introduction rule may be written in sequent notation:

P ⊢ ¬¬P

The double negation elimination rule may be written as:

¬¬P ⊢ P

In rule form:

P
∴ ¬¬P

and

¬¬P
∴ P

or as a tautology (plain propositional calculus sentence):

P → ¬¬P

and

¬¬P → P

These can be combined together into a single biconditional formula:

¬¬P ↔ P

Since biconditionality is an equivalence relation, any instance of ¬¬A in a well-formed formula can be replaced by A, leaving unchanged the truth-value of the well-formed formula.
Double negative elimination is a theorem of classical logic, but not of weaker logics such as intuitionistic logic and minimal logic. Double negation introduction is a theorem of both intuitionistic logic and minimal logic, as is ¬¬¬A ⊢ ¬A.
Because of their constructive character, a statement such as "It's not the case that it's not raining" is weaker than "It's raining." The latter requires a proof of rain, whereas the former merely requires a proof that rain would not be contradictory. This distinction also arises in natural language in the form of litotes.
In set theory we also have the negation operation of the complement which obeys this property: a set A and a set (Aᶜ)ᶜ (where Aᶜ represents the complement of A) are the same.
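As a purely classical (two-valued) check, the biconditional ¬¬P ↔ P can be confirmed by brute force; the lines below are an added illustration and, of course, say nothing about its status in intuitionistic or minimal logic.

    # Classical truth-table check of not-not-P <-> P.
    for P in (True, False):
        assert (not (not P)) == P
    print("double negation holds classically")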

90.2 See also


GdelGentzen negative translation

90.3 References
[1] Or alternate symbolism such as A ↔ ¬(¬A) or Kleene's *49°: A ∾ ¬¬A (Kleene 1952:119; in the original Kleene uses an elongated tilde for logical equivalence, approximated here with a lazy S.)

[2] Hamilton is discussing Hegel in the following: In the more recent systems of philosophy, the universality and necessity
of the axiom of Reason has, with other logical laws, been controverted and rejected by speculators on the absolute.[On
principle of Double Negation as another law of Thought, see Fries, Logik, 41, p. 190; Calker, Denkiehre odor Logic und
Dialecktik, 165, p. 453; Beneke, Lehrbuch der Logic, 64, p. 41.]" (Hamilton 1860:68)

[3] The o of Kleenes formula *49o indicates the demonstration is not valid for both systems [classical system and intuitionistic
system]", Kleene 1952:101.

[4] PM 1952 reprint of 2nd edition 1927 pages 101-102, page 117.

90.4 Bibliography
William Hamilton, 1860, Lectures on Metaphysics and Logic, Vol. II. Logic; Edited by Henry Mansel and John
Veitch, Boston, Gould and Lincoln.
Christoph Sigwart, 1895, Logic: The Judgment, Concept, and Inference; Second Edition, Translated by Helen
Dendy, Macmillan & Co. New York.

Stephen C. Kleene, 1952, Introduction to Metamathematics, 6th reprinting with corrections 1971, North-
Holland Publishing Company, Amsterdam NY, ISBN 0-7204-2103-9.

Stephen C. Kleene, 1967, Mathematical Logic, Dover edition 2002, Dover Publications, Inc, Mineola N.Y.
ISBN 0-486-42533-9

Alfred North Whitehead and Bertrand Russell, Principia Mathematica to *56, 2nd edition 1927, reprint 1962,
Cambridge at the University Press.
Chapter 91

Drinker paradox

The drinker paradox (also known as the drinker's theorem, the drinker's principle, or the drinking principle) is a theorem of classical predicate logic which can be stated as "There is someone in the pub such that, if he is drinking, then everyone in the pub is drinking." It was popularised by the mathematical logician Raymond Smullyan, who called it the "drinking principle" in his 1978 book What Is the Name of this Book?[1]
The apparently paradoxical nature of the statement comes from the way it is usually stated in natural language. It seems counterintuitive both that there could be a person who is causing the others to drink, or that there could be a person such that, all through the night, that one person were always the last to drink. The first objection comes from confusing formal "if then" statements with causation (see Correlation does not imply causation or Relevance logic for logics which demand relevant relationships between premise and consequent, unlike classical logic assumed here). The formal statement of the theorem is timeless, eliminating the second objection because the person the statement holds true for at one instant is not necessarily the same person it holds true for at any other instant.
The formal statement of the theorem is

∃x∈P. [D(x) → ∀y∈P. D(y)],

where D is an arbitrary predicate and P is an arbitrary nonempty set.

91.1 Proofs
The proof begins by recognizing it is true that either everyone in the pub is drinking, or at least one person in the pub
is not drinking. Consequently, there are two cases to consider:[1][2]

1. Suppose everyone is drinking. For any particular person, it cannot be wrong to say that if that particular person is drinking, then everyone in the pub is drinking, because everyone is in fact drinking. Since "everyone" includes that particular person, the conditional holds for them trivially.[1][2]
2. Otherwise at least one person is not drinking. For any nondrinking person, the statement if that particular
person is drinking, then everyone in the pub is drinking is formally true: its antecedent (that particular person
is drinking) is false, therefore the statement is true due to the nature of material implication in formal logic,
which states that If P, then Q is always true if P is false.[1][2] (These kinds of statements are said to be
vacuously true.)

A slightly more formal way of expressing the above is to say that, if everybody drinks, then anyone can be the witness for the validity of the theorem; and if someone does not drink, then that particular non-drinking individual can be the witness to the theorem's validity.[3]
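The two-case argument can also be checked exhaustively for small finite pubs; the Python sketch below (an illustration added here) enumerates every drinking pattern on pubs of one to four patrons and confirms that a suitable witness always exists.

    from itertools import product

    def drinker_theorem_holds(pub_size):
        patrons = range(pub_size)
        for drinks in product((True, False), repeat=pub_size):
            everyone_drinks = all(drinks)
            # Is there an x with: D(x) -> (for all y, D(y)) ?
            witness_exists = any((not drinks[x]) or everyone_drinks for x in patrons)
            if not witness_exists:
                return False
        return True

    print(all(drinker_theorem_holds(n) for n in range(1, 5)))   # True for non-empty pubs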
The proof above is essentially model-theoretic (it can be formalized as such). A purely syntactic proof is possible and can even be mechanized (in Otter, for example), but only for an equisatisfiable rather than an equivalent negation of the theorem.[4] Namely, the negation of the theorem is

¬[∃x. [D(x) → ∀y. D(y)]]

which is equivalent to the prenex normal form

∀x ∃y. [D(x) ∧ ¬D(y)]

By Skolemization the above is equisatisfiable with

∀x. [D(x) ∧ ¬D(f(x))]

The resolution of the two clauses D(x) and ¬D(f(x)) results in an empty set of clauses (i.e. a contradiction), thus proving the negation of the theorem is unsatisfiable. The resolution is slightly non-straightforward because it involves a search based on Herbrand's theorem for ground instances that are propositionally unsatisfiable. The bound variable x is first instantiated with a constant d (making use of the assumption that the domain is non-empty), resulting in the Herbrand universe:[5]

{d, f(d), f(f(d)), f(f(f(d))), ...}
One can sketch the following natural deduction:[4]
∀x. [D(x) ∧ ¬D(f(x))]  ⊢  D(d) ∧ ¬D(f(d))          (∀E, x := d)
                       ⊢  ¬D(f(d))                  (∧E)
∀x. [D(x) ∧ ¬D(f(x))]  ⊢  D(f(d)) ∧ ¬D(f(f(d)))     (∀E, x := f(d))
                       ⊢  D(f(d))                   (∧E)
¬D(f(d)),  D(f(d))     ⊢  ⊥                         (¬E)
Or spelled out:

1. Instantiating x with d yields D(d) ∧ ¬D(f(d)), which implies ¬D(f(d)).
2. x is then instantiated with f(d), yielding D(f(d)) ∧ ¬D(f(f(d))), which implies D(f(d)).

Observe that D(f(d)) and ¬D(f(d)) unify syntactically in their predicate arguments. An (automated) search thus finishes in two steps:[5]

1. D(d) ∧ ¬D(f(d))

2. D(d) ∧ ¬D(f(d)), D(f(d)) ∧ ¬D(f(f(d)))

The proof by resolution given here uses the law of excluded middle, the axiom of choice, and non-emptiness of the
domain as premises.[4]

91.2 Explanation of paradoxicality


The paradox is ultimately based on the principle of formal logic that the statement A B is true whenever A is
false, i.e., any statement follows from a false statement[1] (ex falso quodlibet).
What is important to the paradox is that the conditional in classical (and intuitionistic) logic is the material conditional.
It has the property that A B is true if B is true or if A is false (in classical logic, but not intuitionistic logic, this is
also a necessary condition).
So as it was applied here, the statement if he is drinking, everyone is drinking was taken to be correct in one case,
if everyone was drinking, and in the other case, if he was not drinking even though his drinking may not have had
anything to do with anyone elses drinking.
On the other hand, in natural language, typically if ... then ... is used as an indicative conditional.

91.3 History and variations


Smullyan in his 1978 book attributes the naming of The Drinking Principle to his graduate students.[1] He also
discusses variants (obtained by replacing D with other, more dramatic predicates):

there is a woman on earth such that if she becomes sterile, the whole human race will die out. Smullyan
writes that this formulation emerged from a conversation he had with philosopher John Bacon.[1]

A dual version of the Principle: there is at least one person such that if anybody drinks, then he does.[1]

As "Smullyan's Drinker's principle" or just "Drinker's principle" it appears in H.P. Barendregt's "The quest for correctness" (1996), accompanied by some machine proofs. Since then it has made regular appearances as an example in publications about automated reasoning; it is sometimes used to contrast the expressiveness of proof assistants.[4][5][6]

91.3.1 Non-empty domain


In the setting with empty domains allowed, the drinker paradox must be formulated as follows:[7]
A set P satisfies

∃x∈P. [D(x) → ∀y∈P. D(y)]

if and only if it is non-empty.


Or in words:

If and only if there is someone in the pub, there is someone in the pub such that, if he is drinking, then
everyone in the pub is drinking.

91.4 See also


List of paradoxes
Reication (linguistics)

Temporal logic
Relevance logic

91.5 References
[1] Raymond Smullyan (1978). What is the Name of this Book? The Riddle of Dracula and Other Logical Puzzles. Prentice
Hall. chapter 14. How to Prove Anything. (topic) 250. The Drinking Principle. pp. 209211. ISBN 0-13-955088-7.

[2] H.P. Barendregt (1996). The quest for correctness. Images of SMC Research 1996 (PDF). Stichting Mathematisch
Centrum. pp. 5455. ISBN 978-90-6196-462-9.

[3] Peter J. Cameron (1999). Sets, Logic and Categories. Springer. p. 91. ISBN 978-1-85233-056-9.

[4] Marc Bezem, Dimitri Hendriks (2008). Clausification in Coq.

[5] J. Harrison (2008). Automated and Interactive Theorem Proving. In Orna Grumberg; Tobias Nipkow; Christian Pfaller.
Formal Logical Methods for System Security and Correctness. IOS Press. pp. 123124. ISBN 978-1-58603-843-4.

[6] Freek Wiedijk. 2001. Mizar Light for HOL Light. In Proceedings of the 14th International Conference on Theorem
Proving in Higher Order Logics (TPHOLs '01), Richard J. Boulton and Paul B. Jackson (Eds.). Springer-Verlag, London,
UK, 378-394.

[7] Martín Escardó; Paulo Oliva. "Searchable Sets, Dubuc-Penon Compactness, Omniscience Principles, and the Drinker Paradox" (PDF). Computability in Europe 2010: 2.
Chapter 92

Empty domain

In first-order logic the empty domain is the empty set having no members. In traditional and classical logic domains are restrictedly non-empty in order that certain theorems be valid. Interpretations with an empty domain are shown to be a trivial case by a convention originating at least in 1927 with Bernays and Schönfinkel (though possibly earlier) but oft-attributed to Quine 1951. The convention is to assign any formula beginning with a universal quantifier the value truth, while any formula beginning with an existential quantifier is assigned the value falsehood. This follows from the idea that existentially quantified statements have existential import (i.e. they imply the existence of something) while universally quantified statements do not. This interpretation reportedly stems from George Boole in the late 19th century but this is debatable. In modern model theory, it follows immediately from the truth conditions for quantified sentences:

A ⊨ ∃x φ(x)   iff there is an a ∈ A such that A ⊨ φ[a]

A ⊨ ∀x φ(x)   iff every a ∈ A is such that A ⊨ φ[a]

In other words, an existential quantification of the open formula φ(x) is true in a model iff there is some element in the domain (of the model) that satisfies the formula; i.e. iff that element has the property denoted by the open formula. A universal quantification of an open formula φ(x) is true in a model iff every element in the domain satisfies that formula. (Note that in the metalanguage, "everything that is such that X is such that Y" is interpreted as a universal generalization of the material conditional "if anything is such that X then it is such that Y". Also, the quantifiers are given their usual objectual readings, so that a positive existential statement has existential import, while a universal one does not.) An analogous case concerns the empty conjunction and the empty disjunction. The semantic clauses for, respectively, conjunctions and disjunctions are given by

A ⊨ φ₁ ∧ … ∧ φₙ   iff ∀i (1 ≤ i ≤ n), A ⊨ φᵢ

A ⊨ φ₁ ∨ … ∨ φₙ   iff ∃i (1 ≤ i ≤ n), A ⊨ φᵢ.

It is easy to see that the empty conjunction is trivially true, and the empty disjunction trivially false.
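Python's built-in all() and any() follow exactly this convention on the empty case, which gives a one-line way to see it (an illustration added here):

    empty_domain = []
    # Universal quantification over an empty domain is vacuously true ...
    print(all(x > x for x in empty_domain))    # True, even for an unsatisfiable predicate
    # ... while existential quantification is false.
    print(any(x == x for x in empty_domain))   # False, even for a trivially true predicate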
Logics whose theorems are valid in every domain, including the empty one, were first considered by Jaśkowski 1934, Mostowski 1951, Hailperin 1953, Quine 1954, Leonard 1956, and Hintikka 1959. While Quine called such logics "inclusive logic", they are now referred to as free logic.

92.1 See also


Table of logic symbols


[Image caption] In modern logic only the contradictories in the square of opposition apply, because domains may be empty. (Black areas are empty, red areas are nonempty.)
Chapter 93

Evasive Boolean function

In mathematics, an evasive Boolean function (of n variables) is a Boolean function for which every decision tree
algorithm has running time of exactly n. Consequently, every decision tree algorithm that represents the function has,
at worst case, a running time of n.

93.1 Examples

93.1.1 An example of a non-evasive Boolean function

The following is a Boolean function of the three variables x, y, z:

f(x, y, z) = (x ∧ y) ∨ (¬x ∧ z)

(where ∧ is the bitwise "and", ∨ is the bitwise "or", and ¬ is the bitwise "not").
This function is not evasive, because there is a decision tree that solves it by checking exactly two variables: the algorithm first checks the value of x. If x is true, the algorithm checks the value of y and returns it.

( (¬x = false) ⇒ ((¬x ∧ z) = false) )

If x is false, the algorithm checks the value of z and returns it.
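The two-query decision tree can be written out and checked against the full truth table; the Python sketch below is an illustration added here.

    from itertools import product

    def f(x, y, z):
        # The non-evasive function above: (x and y) or ((not x) and z).
        return (x and y) or ((not x) and z)

    def decision_tree(x, y, z):
        # Queries only two of the three variables.
        return y if x else z

    print(all(f(x, y, z) == decision_tree(x, y, z)
              for x, y, z in product((False, True), repeat=3)))   # True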

93.1.2 A simple example of an evasive Boolean function


Consider the simple "and" function on three variables:

f(x, y, z) = x ∧ y ∧ z

A worst-case input (for every algorithm) is 1, 1, 1. In every order we choose to check the variables, we have to check all of them. (Note that in general there could be a different worst-case input for every decision tree algorithm.) Hence the functions "and" and "or" (on n variables) are evasive.

93.2 Binary zero-sum games


For the case of binary zero-sum games, every evaluation function is evasive.
In every zero-sum game, the value of the game is achieved by the minimax algorithm (player 1 tries to maximize the profit, and player 2 tries to minimize the cost).
In the binary case, the max function equals the bitwise "or", and the min function equals the bitwise "and".
A decision tree for this game will be of the following form:

every leaf will have a value in {0, 1};

every internal node is associated with one of {"and", "or"}.


For every such tree with n leaves, the running time in the worst case is n (meaning that the algorithm must check all the leaves).
We will exhibit an adversary that produces a worst-case input: for every leaf that the algorithm checks, the adversary will answer 0 if the leaf's parent is an "or" node, and 1 if the parent is an "and" node.
This input (0 for all the children of "or" nodes, and 1 for all the children of "and" nodes) forces the algorithm to check all the leaves, as in the second example:

in order to calculate the "or" result, if all children are 0 we must check them all;

in order to calculate the "and" result, if all children are 1 we must check them all.

93.3 See also


Aanderaa–Karp–Rosenberg conjecture, the conjecture that every nontrivial monotone graph property is evasive.
Chapter 94

Exceptional isomorphism

In mathematics, an exceptional isomorphism, also called an accidental isomorphism, is an isomorphism between members ai and bj of two families (usually infinite) of mathematical objects that is not an example of a pattern of such isomorphisms.[note 1] These coincidences are at times considered a matter of trivia,[1] but in other respects they can give rise to other phenomena, notably exceptional objects.[1] In the below, coincidences are listed in all places they occur.

94.1 Groups

94.1.1 Finite simple groups


The exceptional isomorphisms between the series of finite simple groups mostly involve projective special linear
groups and alternating groups, and are:[1]

L2(4) ≅ L2(5) ≅ A5, the smallest non-abelian simple group (order 60);

L2(7) ≅ L3(2), the second-smallest non-abelian simple group (order 168), see PSL(2,7);

L2(9) ≅ A6,

L4(2) ≅ A8,

PSU4(2) ≅ PSp4(3), between a projective special unitary group and a projective symplectic group.

94.1.2 Groups of Lie type


In addition to the aforementioned, there are some isomorphisms involving SL, PSL, GL, PGL, and the natural maps
between these. For example, the groups over F5 have a number of exceptional isomorphisms:

PSL(2, 5) ≅ A5 ≅ I, the alternating group on five elements, or equivalently the icosahedral group;

PGL(2, 5) ≅ S5, the symmetric group on five elements;

SL(2, 5) ≅ 2⋅A5 ≅ 2I, the double cover of the alternating group A5, or equivalently the binary icosahedral group.

94.1.3 Alternating groups and symmetric groups


There are coincidences between alternating groups and small groups of Lie type:

L2(4) ≅ L2(5) ≅ A5,


The compound of five tetrahedra expresses the exceptional isomorphism between the icosahedral group and the alternating group on five letters.

L2(9) ≅ Sp4(2)′ ≅ A6,

Sp4(2) ≅ S6,

L4(2) ≅ O6+(2)′ ≅ A8,

O6+(2) ≅ S8.

These can all be explained in a systematic way by using linear algebra (and the action of Sn on affine n-space) to
define the isomorphism going from the right side to the left side. (The above isomorphisms for A8 and S8 are linked
via the exceptional isomorphism SL4/μ2 ≅ SO6.) There are also some coincidences with symmetries of regular
polyhedra: the alternating group A5 agrees with the icosahedral group (itself an exceptional object), and the double
cover of the alternating group A5 is the binary icosahedral group.

94.1.4 Cyclic groups


Cyclic groups of small order especially arise in various ways, for instance:

C2 ≅ {±1} ≅ O(1) ≅ Spin(1) ≅ Z×, the last being the group of units of the integers

94.1.5 Spheres

The spheres S0, S1, and S3 admit group structures, which arise in various ways:

S0 ≅ O(1),

S1 ≅ SO(2) ≅ U(1) ≅ Spin(2),

S3 ≅ Spin(3) ≅ SU(2) ≅ Sp(1).

94.1.6 Coxeter groups

The exceptional isomorphisms of connected Dynkin diagrams: B2 ≅ C2, A3 ≅ D3, A4 ≅ E4, D5 ≅ E5.



There are some exceptional isomorphisms of Coxeter diagrams, yielding isomorphisms of the corresponding Coxeter
groups and of polytopes realizing the symmetries. These are:

A2 = I2(3) (the 2-simplex is the regular 3-gon/triangle);

BC2 = I2(4) (the 2-cube (square) = 2-cross-polytope (diamond) = regular 4-gon)
A3 = D3 (the 3-simplex (tetrahedron) is the 3-demihypercube (demicube), as per the diagram)
A1 = B1 = C1 (= D1?)
D2 = A1 × A1
A4 = E4
D5 = E5

Closely related ones occur in Lie theory for Dynkin diagrams.

94.2 Lie theory


In low dimensions, there are isomorphisms among the classical Lie algebras and classical Lie groups called accidental
isomorphisms. For instance, there are isomorphisms between low-dimensional spin groups and certain classical
Lie groups, due to low-dimensional isomorphisms between the root systems of the different families of simple Lie
algebras, visible as isomorphisms of the corresponding Dynkin diagrams:

Trivially, A0 = B0 = C0 = D0
A1 = B1 = C1, or sl2 ≅ so3 ≅ sp1
B2 = C2, or so5 ≅ sp2
D2 = A1 × A1, or so4 ≅ sl2 ⊕ sl2; note that these are disconnected, but part of the D-series
A3 = D3, or sl4 ≅ so6
A4 = E4; the E-series usually starts at 6, but can be started at 4, yielding isomorphisms
D5 = E5

Spin(1) = O(1)
Spin(2) = U(1) = SO(2)
Spin(3) = Sp(1) = SU(2)
Spin(4) = Sp(1) × Sp(1)
Spin(5) = Sp(2)
Spin(6) = SU(4)

94.3 See also


Exceptional object
Mathematical coincidence, for numerical coincidences

94.4 Notes
[1] Because these series of objects are presented differently, they are not identical objects (do not have identical descriptions),
but turn out to describe the same object, hence one refers to this as an isomorphism, not an equality (identity).

94.5 References
[1] Wilson, Robert A. (2009), "Chapter 1: Introduction", The finite simple groups, Graduate Texts in Mathematics 251,
Berlin, New York: Springer-Verlag, ISBN 978-1-84800-987-5, Zbl 1203.20012, doi:10.1007/978-1-84800-988-2, 2007
preprint; Chapter doi:10.1007/978-1-84800-988-2_1.
Chapter 95

Exclusive or

XOR redirects here. For the logic gate, see XOR gate. For other uses, see XOR (disambiguation).

Exclusive or or exclusive disjunction is a logical operation that outputs true only when inputs differ (one is true, the
other is false).[1]
It is symbolized by the prefix operator J[2] and by the infix operators XOR (/ˌɛks ˈɔːr/), EOR, EXOR, ⊻, ⩒, ⩛, ⊕, ↮, and
≢. The negation of XOR is the logical biconditional, which outputs true only when both inputs are the same.
It gains the name "exclusive or" because the meaning of "or" is ambiguous when both operands are true; the exclusive
or operator excludes that case. This is sometimes thought of as "one or the other but not both". This could be written
as "A or B, but not, A and B".
More generally, XOR is true only when an odd number of inputs are true. A chain of XORs (a XOR b XOR c XOR
d and so on) is true whenever an odd number of the inputs are true and is false whenever an even number of inputs
are true.

95.1 Truth table


The truth table of A XOR B shows that it outputs true whenever the inputs differ (0 denotes false, 1 denotes true):

0 XOR 0 = 0
0 XOR 1 = 1
1 XOR 0 = 1
1 XOR 1 = 0

95.2 Equivalences, elimination, and introduction


Exclusive disjunction essentially means 'either one, but not both nor none'. In other words, the statement is true if and
only if one is true and the other is false. For example, if two horses are racing, then one of the two will win the race,
but not both of them. The exclusive disjunction p ⊕ q, or Jpq, can be expressed in terms of the logical conjunction
(logical and, ∧), the disjunction (logical or, ∨), and the negation (¬) as follows:

p ⊕ q = (p ∧ ¬q) ∨ (¬p ∧ q)

The exclusive disjunction p ⊕ q can also be expressed in the following way:

p ⊕ q = (p ∨ q) ∧ ¬(p ∧ q)

This representation of XOR may be found useful when constructing a circuit or network, because it has only one ¬
operation and a small number of ∧ and ∨ operations. A proof of this identity is given below:


Arguments on the left combined by XOR. This is a binary Walsh matrix (cf. Hadamard code).

p ⊕ q = (p ∧ ¬q) ∨ (¬p ∧ q)
      = ((p ∧ ¬q) ∨ ¬p) ∧ ((p ∧ ¬q) ∨ q)
      = ((p ∨ ¬p) ∧ (¬q ∨ ¬p)) ∧ ((p ∨ q) ∧ (¬q ∨ q))
      = (¬p ∨ ¬q) ∧ (p ∨ q)
      = ¬(p ∧ q) ∧ (p ∨ q)

It is sometimes useful to write p ⊕ q in the following way:

p ⊕ q = ¬((p ∧ q) ∨ (¬p ∧ ¬q))

or:

p ⊕ q = (p ∨ q) ∧ (¬p ∨ ¬q)

This equivalence can be established by applying De Morgan's laws twice to the fourth line of the above proof.
The exclusive or is also equivalent to the negation of a logical biconditional, by the rules of material implication (a
material conditional is equivalent to the disjunction of the negation of its antecedent and its consequent) and material
equivalence.
In summary, we have, in mathematical and in engineering notation:

p ⊕ q = (p ∧ ¬q) ∨ (¬p ∧ q) = pq′ + p′q
      = (p ∨ q) ∧ (¬p ∨ ¬q) = (p + q)(p′ + q′)
      = (p ∨ q) ∧ ¬(p ∧ q) = (p + q)(pq)′

95.3 Relation to modern algebra


Although the operators ∧ (conjunction) and ∨ (disjunction) are very useful in logic systems, they fail a more generalizable
structure in the following way:
The systems ({T, F}, ∧) and ({T, F}, ∨) are monoids, but neither is a group. This unfortunately prevents the
combination of these two systems into larger structures, such as a mathematical ring.
However, the system using exclusive or ({T, F}, ⊕) is an abelian group. The combination of the operators ∧ and ⊕
over the elements {T, F} produces the well-known field F2. This field can represent any logic obtainable with the system
(∧, ∨) and has the added benefit of the arsenal of algebraic analysis tools for fields.
More specifically, if one associates F with 0 and T with 1, one can interpret the logical AND operation as multiplication
on F2 and the XOR operation as addition on F2:

r = p ∧ q  ⇔  r = p · q (mod 2)
r = p ⊕ q  ⇔  r = p + q (mod 2)

Using this basis to describe a boolean system is referred to as algebraic normal form.
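A quick illustration in Python (the encoding of F as 0 and T as 1 follows the paragraph above; the helper names are ours):

    def xor(p, q):
        return (p + q) % 2      # XOR is addition modulo 2

    def conj(p, q):
        return (p * q) % 2      # AND is multiplication modulo 2

    # Check against the logical definitions on all four input pairs.
    for p in (0, 1):
        for q in (0, 1):
            assert xor(p, q) == int(bool(p) != bool(q))
            assert conj(p, q) == int(bool(p) and bool(q))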

95.4 Exclusive or in English


The Oxford English Dictionary explains either ... or as follows:

The primary function of either, etc., is to emphasize the perfect indifference of the two (or more)
things or courses ... ; but a secondary function is to emphasize the mutual exclusiveness, = either of the
two, but not both.[3]

The exclusive-or explicitly states one or the other, but not neither nor both. However, the mapping correspondence
between formal Boolean operators and natural language conjunctions is far from simple or one-to-one, and has been
studied for decades in linguistics and analytic philosophy.
Following this kind of common-sense intuition about or, it is sometimes argued that in many natural languages,
English included, the word or has an exclusive sense.[4] The exclusive disjunction of a pair of propositions, (p, q),
is supposed to mean that p is true or q is true, but not both. For example, it might be argued that the normal intention
of a statement like You may have coee, or you may have tea is to stipulate that exactly one of the conditions can be
true. Certainly under some circumstances a sentence like this example should be taken as forbidding the possibility
of ones accepting both options. Even so, there is good reason to suppose that this sort of sentence is not disjunctive
at all. If all we know about some disjunction is that it is true overall, we cannot be sure which of its disjuncts is true.
For example, if a woman has been told that her friend is either at the snack bar or on the tennis court, she cannot
validly infer that he is on the tennis court. But if her waiter tells her that she may have coee or she may have tea,
she can validly infer that she may have tea. Nothing classically thought of as a disjunction has this property. This is
so even given that she might reasonably take her waiter as having denied her the possibility of having both coee and
tea.
In English, the construct either ... or is usually used to indicate exclusive or and or generally used for inclusive.
But in Spanish, the word "o" (or) can be used in the form "p o q" (exclusive) or the form "o p o q" (inclusive). Some
may contend that any binary or other n-ary exclusive or is true if and only if it has an odd number of true inputs
(this is not, however, the only reasonable definition; for example, digital xor gates with multiple inputs typically do
not use that definition), and that there is no conjunction in English that has this general property. For example, Barrett
and Stenner contend in the 1971 article The Myth of the Exclusive 'Or'" (Mind, 80 (317), 116121) that no author

has produced an example of an English or-sentence that appears to be false because both of its inputs are true, and
brush off or-sentences such as "The light bulb is either on or off" as reflecting particular facts about the world rather
than the nature of the word "or". However, the "barber paradox" (Everybody in town shaves himself or is shaved
by the barber; who shaves the barber?) would not be paradoxical if "or" could not be exclusive (although a purist
could say that "either" is required in the statement of the paradox).
Whether these examples can be considered natural language is another question. Certainly when one sees a menu
stating Lunch special: sandwich and soup or salad (parsed as sandwich and (soup or salad)" according to common
usage in the restaurant trade), one would not expect to be permitted to order both soup and salad. Nor would one
expect to order neither soup nor salad, because that belies the nature of the special, that ordering the two items
together is cheaper than ordering them a la carte. Similarly, a lunch special consisting of one meat, French fries or
mashed potatoes and vegetable would consist of three items, only one of which would be a form of potato. If one
wanted to have meat and both kinds of potatoes, one would ask if it were possible to substitute a second order of
potatoes for the vegetable. And, one would not expect to be permitted to have both types of potato and vegetable,
because the result would be a vegetable plate rather than a meat plate.

95.5 Alternative symbols


The symbol used for exclusive disjunction varies from one field of application to the next, and even depends on the
properties being emphasized in a given context of discussion. In addition to the abbreviation "XOR", any of the
following symbols may also be seen:

A plus sign (+). This makes sense mathematically because exclusive disjunction corresponds to addition modulo
2, which has the following addition table, clearly isomorphic to the one above: 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, 1 + 1 = 0.

The use of the plus sign has the added advantage that all of the ordinary algebraic properties of mathematical
rings and fields can be used without further ado. However, the plus sign is also used for inclusive disjunction
in some notation systems.
A plus sign that is modified in some way, such as being encircled (⊕). This usage faces the objection that this
same symbol is already used in mathematics for the direct sum of algebraic structures.
A prefixed J, as in Jpq.
An inclusive disjunction symbol (∨) that is modified in some way, such as being underlined (⊻) or with a dot
above (⩒).
In several programming languages, such as C, C++, C#, D, Java, Perl, Ruby, PHP and Python, a caret (^) is
used to denote the bitwise XOR operator. This is not used outside of programming contexts because it is too
easily confused with other uses of the caret.

The symbol , sometimes written as >< or as >-<.

95.6 Properties
Commutativity: yes
Associativity: yes
Distributivity: The exclusive or doesn't distribute over any binary function (not even itself), but logical conjunction
distributes over exclusive or: C ∧ (A ⊕ B) = (C ∧ A) ⊕ (C ∧ B). (Conjunction and exclusive or form the
multiplication and addition operations of a field GF(2), and as in any field they obey the distributive law.)
Idempotency: no
Monotonicity: no
Truth-preserving: no. When all inputs are true, the output is not true.

Falsehood-preserving: yes. When all inputs are false, the output is false.

Walsh spectrum: (2,0,0,2)

Non-linearity: 0. The function is linear.

If using binary values for true (1) and false (0), then exclusive or works exactly like addition modulo 2.

95.7 Computer science

Traditional symbolic representation of an XOR logic gate, with inputs A and B and output Q.

95.7.1 Bitwise operation

Main article: Bitwise operation


Exclusive disjunction is often used for bitwise operations. Examples:

1 XOR 1 = 0

1 XOR 0 = 1

0 XOR 1 = 1

0 XOR 0 = 0

1110₂ XOR 1001₂ = 0111₂ (this is equivalent to addition without carry)

As noted above, since exclusive disjunction is identical to addition modulo 2, the bitwise exclusive disjunction of two
n-bit strings is identical to the standard vector addition in the vector space (Z/2Z)ⁿ.
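In Python, for example, the caret operator performs exactly this addition without carry (the operands below are the ones from the example above):

    a = 0b1110
    b = 0b1001
    print(format(a ^ b, "04b"))      # 0111: bitwise XOR, i.e. addition without carry
    # The same result, computed as component-wise addition in (Z/2Z)^4:
    print([(x + y) % 2 for x, y in zip([1, 1, 1, 0], [1, 0, 0, 1])])   # [0, 1, 1, 1]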
In computer science, exclusive disjunction has several uses:

It tells whether two bits are unequal.

It is an optional bit-flipper (the deciding input chooses whether to invert the data input).

It tells whether there is an odd number of 1 bits (A ⊕ B ⊕ C ⊕ D ⊕ E is true iff an odd number of the variables
are true).

Nimber addition is the exclusive or of nonnegative integers in binary representation. This is also the vector addition in (Z/2Z)⁴.

In logical circuits, a simple adder can be made with an XOR gate to add the numbers, and a series of AND, OR and
NOT gates to create the carry output.
On some computer architectures, it is more efficient to store a zero in a register by XOR-ing the register with itself
(bits XOR-ed with themselves are always zero) instead of loading and storing the value zero.
In simple threshold activated neural networks, modeling the XOR function requires a second layer because XOR is
not a linearly separable function.
Exclusive-or is sometimes used as a simple mixing function in cryptography, for example, with one-time pad or Feistel
network systems.

Exclusive-or is also heavily used in block ciphers such as AES (Rijndael) or Serpent and in block cipher implemen-
tation (CBC, CFB, OFB or CTR).
Similarly, XOR can be used in generating entropy pools for hardware random number generators. The XOR operation
preserves randomness, meaning that a random bit XORed with a non-random bit will result in a random bit. Multiple
sources of potentially random data can be combined using XOR, and the unpredictability of the output is guaranteed
to be at least as good as the best individual source.[5]
XOR is used in RAID 3–6 for creating parity information. For example, RAID can back up bytes 10011100₂ and
01101100₂ from two (or more) hard drives by XORing the just mentioned bytes, resulting in 11110000₂, and writing
it to another drive. Under this method, if any one of the three hard drives is lost, the lost byte can be re-created by
XORing bytes from the remaining drives. For instance, if the drive containing 01101100₂ is lost, 10011100₂ and
11110000₂ can be XORed to recover the lost byte.[6]
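A small Python sketch of this recovery (the byte values are taken from the example above; this illustrates only the parity idea, not any particular RAID implementation):

    drive1 = 0b10011100
    drive2 = 0b01101100
    parity = drive1 ^ drive2                 # 0b11110000, written to the parity drive

    # If drive2 is lost, XORing the surviving drive with the parity restores it.
    recovered = drive1 ^ parity
    assert recovered == drive2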
XOR is also used to detect an overflow in the result of a signed binary arithmetic operation. If the leftmost retained
bit of the result is not the same as the infinite number of digits to the left, then that means overflow occurred. XORing
those two bits will give a 1 if there is an overflow.
XOR can be used to swap two numeric variables in computers, using the XOR swap algorithm; however this is
regarded as more of a curiosity and not encouraged in practice.
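The swap itself is three XOR assignments; a sketch in Python on plain integers (and, as the text notes, more of a curiosity than recommended practice):

    x, y = 23, 42
    x ^= y
    y ^= x        # y now holds the original x
    x ^= y        # x now holds the original y
    assert (x, y) == (42, 23)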
XOR linked lists leverage XOR properties in order to save space to represent doubly linked list data structures.
In computer graphics, XOR-based drawing methods are often used to manage such items as bounding boxes and
cursors on systems without alpha channels or overlay planes.

95.8 Encodings
Apart from the ASCII codes, the operator is encoded at U+22BB ⊻ XOR (HTML &#8891;) and U+2295 ⊕ CIRCLED
PLUS (HTML &#8853; &oplus;), both in the block Mathematical Operators.

95.9 See also


Material conditional (Paradox)

Affirming a disjunct

Ampheck

Boolean algebra (logic)

Boolean domain

Boolean function

Boolean-valued function

Controlled NOT gate

Disjunctive syllogism

First-order logic

Inclusive or

Involution

List of Boolean algebra topics

Logical graph

Logical value

Operation

Parity bit

Propositional calculus
Rule 90

Symmetric dierence
XOR cipher

XOR gate
XOR linked list

95.10 Notes
[1] Germundsson, Roger; Weisstein, Eric. XOR. MathWorld. Wolfram Research. Retrieved 17 June 2015.

[2] Craig, Edward, ed. (1998), Routledge Encyclopedia of Philosophy, 10, Taylor & Francis, p. 496, ISBN 9780415073103

[3] or, conj.2 (adv.3) 2a Oxford English Dictionary, second edition (1989). OED Online.

[4] Jennings quotes numerous authors saying that the word or has an exclusive sense. See Chapter 3, The First Myth of
'Or'":
Jennings, R. E. (1994). The Genealogy of Disjunction. New York: Oxford University Press.

[5] Davies, Robert B (28 February 2002). Exclusive OR (XOR) and hardware random number generators (PDF). Retrieved
28 August 2013.

[6] Nobel, Rickard (26 July 2011). How RAID 5 actually works. Retrieved 23 March 2017.

95.11 External links


An example of XOR being used in cryptography
Chapter 96

Existential generalization

In predicate logic, existential generalization[1][2] (also known as existential introduction, ∃I) is a valid rule of
inference that allows one to move from a specific statement, or one instance, to a quantified generalized statement, or
existential proposition. In first-order logic, it is often used as a rule for the existential quantifier (∃) in formal proofs.
Example: "Rover loves to wag his tail. Therefore, something loves to wag its tail."
In the Fitch-style calculus:

Q(a) → ∃x Q(x)

Where a replaces all free instances of x within Q(x).[3]
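For illustration, the rule as a one-line proof term in Lean (a sketch; the names α, Q, a and h are ours):

    -- Existential introduction: from a witness a with Q a, conclude ∃ x, Q x.
    example (α : Type) (Q : α → Prop) (a : α) (h : Q a) : ∃ x, Q x :=
      ⟨a, h⟩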

96.1 Quine
Universal instantiation and existential generalization are two aspects of a single principle, for instead of saying that
"∀x x = x" implies "Socrates = Socrates", we could as well say that the denial "Socrates ≠ Socrates" implies "∃x x ≠ x".
The principle embodied in these two operations is the link between quantifications and the singular statements that
are related to them as instances. Yet it is a principle only by courtesy. It holds only in the case where a term names
and, furthermore, occurs referentially.[4]

96.2 See also


Inference rules

96.3 References
[1] Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall.

[2] Hurley, Patrick (1991). A Concise Introduction to Logic 4th edition. Wadsworth Publishing.

[3] pg. 347. Jon Barwise and John Etchemendy, Language proof and logic Second Ed., CSLI Publications, 2008.

[4] Willard van Orman Quine; Roger F. Gibson (2008). V.24. Reference and Modality. Quintessence. Cambridge, Mass:
Belknap Press of Harvard University Press. Here: p.366.

Chapter 97

Existential instantiation

In predicate logic, existential instantiation (also called existential elimination)[1][2][3] is a valid rule of inference
which says that, given a formula of the form (∃x)φ(x), one may infer φ(c) for a new constant or variable symbol
c. The rule has the restriction that the constant or variable c introduced by the rule must be a new term that has not
occurred earlier in the proof.
In one formal notation, the rule may be denoted

(∃x)Fx ∴ Fa,

where a is an arbitrary term that has not been a part of our proof thus far.

97.1 See also


existential fallacy

97.2 References
[1] Hurley, Patrick. A Concise Introduction to Logic. Wadsworth Pub Co, 2008.

[2] Copi and Cohen

[3] Moore and Parker

Chapter 98

Existential quantification

"Existential quantifier" redirects here. For the symbol conventionally used for this quantifier, see Turned E.
"∃" redirects here. It is not to be confused with Ǝ.

In predicate logic, an existential quantification is a type of quantifier, a logical constant which is interpreted as "there
exists", "there is at least one", or "for some". Some sources use the term existentialization to refer to existential
quantification.[1] It is usually denoted by the turned E (∃) logical operator symbol, which, when used together with
a predicate variable, is called an existential quantifier ("∃x" or "∃(x)"). Existential quantification is distinct from
universal quantification ("for all"), which asserts that the property or relation holds for all members of the domain.

98.1 Basics
Consider a formula that states that some natural number multiplied by itself is 25.

0·0 = 25, or 1·1 = 25, or 2·2 = 25, or 3·3 = 25, and so on.

This would seem to be a logical disjunction because of the repeated use of or. However, the and so on makes this
impossible to integrate and to interpret as a disjunction in formal logic. Instead, the statement could be rephrased
more formally as

For some natural number n, n·n = 25.

This is a single statement using existential quantication.


This statement is more precise than the original one, as the phrase "and so on" does not necessarily include all natural
numbers, and nothing more. Since the domain was not stated explicitly, the phrase could not be interpreted formally.
In the quantified statement, on the other hand, the natural numbers are mentioned explicitly.
This particular example is true, because 5 is a natural number, and when we substitute 5 for n, we produce 5·5 =
25, which is true. It does not matter that "n·n = 25" is only true for a single natural number, 5; even the existence
of a single solution is enough to prove the existential quantification true. In contrast, "For some even number n, n·n
= 25" is false, because there are no even solutions.
The domain of discourse, which specifies which values the variable n is allowed to take, is therefore of critical
importance in a statement's truth or falsity. Logical conjunctions are used to restrict the domain of discourse to
fulfill a given predicate. For example:

For some positive odd number n, n·n = 25

is logically equivalent to

For some natural number n, n is odd and n·n = 25.


Here, and is the logical conjunction.


In symbolic logic, "∃" (a backwards letter "E" in a sans-serif font) is used to indicate existential quantification.[2]
Thus, if P(a, b, c) is the predicate "a·b = c" and ℕ is the set of natural numbers, then

∃n∈ℕ P(n, n, 25)

is the (true) statement

For some natural number n, n·n = 25.

Similarly, if Q(n) is the predicate "n is even", then

∃n∈ℕ (Q(n) ∧ P(n, n, 25))

is the (false) statement

For some natural number n, n is even and n·n = 25.

In mathematics, the proof of a some statement may be achieved either by a constructive proof, which exhibits an
object satisfying the some statement, or by a nonconstructive proof which shows that there must be such an object
but without exhibiting one.

98.2 Properties

98.2.1 Negation
A quantified propositional function is a statement; thus, like statements, quantified functions can be negated. The
symbol ¬ is used to denote negation.
For example, if P(x) is the propositional function "x is greater than 0 and less than 1", then, for a domain of discourse
X of all natural numbers, the existential quantification "There exists a natural number x which is greater than 0 and
less than 1" is symbolically stated:

∃x∈X P(x)

This can be demonstrated to be irrevocably false. Truthfully, it must be said, "It is not the case that there is a natural
number x that is greater than 0 and less than 1", or, symbolically:

¬∃x∈X P(x)

If there is no element of the domain of discourse for which the statement is true, then it must be false for all of those
elements. That is, the negation of

∃x∈X P(x)

is logically equivalent to "For any natural number x, x is not greater than 0 and less than 1", or:

∀x∈X ¬P(x)

Generally, then, the negation of a propositional function's existential quantification is a universal quantification of that
propositional function's negation; symbolically,

¬∃x∈X P(x) ≡ ∀x∈X ¬P(x)
A common error is stating "all persons are not married" (i.e. "there exists no person who is married") when "not all
persons are married" (i.e. "there exists a person who is not married") is intended:

¬∃x∈X P(x) ≡ ∀x∈X ¬P(x) ≢ ¬∀x∈X P(x) ≡ ∃x∈X ¬P(x)


Negation is also expressible through a statement of "for no", as opposed to "for some":

∄x∈X P(x) ≡ ¬∃x∈X P(x)

Unlike the universal quantifier, the existential quantifier distributes over logical disjunctions:

∃x∈X (P(x) ∨ Q(x)) ≡ (∃x∈X P(x) ∨ ∃x∈X Q(x))
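Over a finite domain these laws can be checked directly; a small Python illustration (the domain and the example predicates P and Q are our own choices):

    X = range(10)                      # a finite domain of discourse
    P = lambda x: 0 < x < 1            # "x is greater than 0 and less than 1"

    # not (∃x P(x)) is equivalent to ∀x ¬P(x)
    assert (not any(P(x) for x in X)) == all(not P(x) for x in X)

    # ∃ distributes over ∨ : ∃x (P(x) or Q(x)) iff (∃x P(x)) or (∃x Q(x))
    Q = lambda x: x % 2 == 0
    assert any(P(x) or Q(x) for x in X) == (any(P(x) for x in X) or any(Q(x) for x in X))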

98.2.2 Rules of Inference


A rule of inference is a rule justifying a logical step from hypothesis to conclusion. There are several rules of inference
which utilize the existential quantier.
Existential introduction (∃I) concludes that, if the propositional function is known to be true for a particular element
of the domain of discourse, then it must be true that there exists an element for which the propositional function is true.
Symbolically,

P(a) → ∃x∈X P(x)
Existential elimination, when conducted in a Fitch-style deduction, proceeds by entering a new sub-derivation while
substituting an existentially quantified variable for a subject which does not appear within any active sub-derivation.
If a conclusion can be reached within this sub-derivation in which the substituted subject does not appear, then one
can exit that sub-derivation with that conclusion. The reasoning behind existential elimination (∃E) is as follows: If
it is given that there exists an element for which the propositional function is true, and if a conclusion can be reached
by giving that element an arbitrary name, that conclusion is necessarily true, as long as it does not contain the name.
Symbolically, for an arbitrary c and for a proposition Q in which c does not appear:

∃x∈X P(x) → ((P(c) → Q) → Q)


P(c) → Q must be true for all values of c over the same domain X; otherwise, the logic does not follow: if c is not
arbitrary, and is instead a specific element of the domain of discourse, then stating P(c) might unjustifiably give more
information about that object.

98.2.3 The empty set


The formula ∃x∈∅ P(x) is always false, regardless of P(x). This is because ∅ denotes the empty set, and no x of any
description (let alone an x fulfilling a given predicate P(x)) exists in the empty set. See also vacuous truth.

98.3 As adjoint
Main article: Universal quantification § As adjoint

In category theory and the theory of elementary topoi, the existential quantifier can be understood as the left adjoint of
a functor between power sets, the inverse image functor of a function between sets; likewise, the universal quantifier
is the right adjoint.[3]

98.4 HTML encoding of existential quantifiers


The symbols are encoded as U+2203 ∃ THERE EXISTS (HTML &#8707; &exist;, as a mathematical symbol) and
U+2204 ∄ THERE DOES NOT EXIST (HTML &#8708;).

98.5 See also


First-order logic

List of logic symbols, for the Unicode symbol ∃


Quantifier variance

Quantifiers
Uniqueness quantification

98.6 Notes
[1] Allen, Colin; Hand, Michael (2001). Logic Primer. MIT Press. ISBN 0262303965.

[2] This symbol is also known as the existential operator. It is sometimes represented with V.

[3] Saunders Mac Lane, Ieke Moerdijk, (1992) Sheaves in Geometry and Logic Springer-Verlag. ISBN 0-387-97710-4 See
page 58

Chapter 99

Exportation (logic)

Exportation[1][2][3][4] is a valid rule of replacement in propositional logic. The rule allows conditional statements
having conjunctive antecedents to be replaced by statements having conditional consequents and vice versa in logical
proofs. It is the rule that:

((P ∧ Q) → R) ⇔ (P → (Q → R))

Where "⇔" is a metalogical symbol representing "can be replaced in a proof with".

99.1 Formal notation


The exportation rule may be written in sequent notation:

((P ∧ Q) → R) ⊢ (P → (Q → R))

where ⊢ is a metalogical symbol meaning that (P → (Q → R)) is a syntactic equivalent of ((P ∧ Q) → R) in
some logical system;
or in rule form:

(P ∧ Q) → R ∴ P → (Q → R)   and   P → (Q → R) ∴ (P ∧ Q) → R

where the rule is that wherever an instance of "(P ∧ Q) → R" appears on a line of a proof, it can be replaced with
"P → (Q → R)", and vice versa;
or as the statement of a truth-functional tautology or theorem of propositional logic:

((P ∧ Q) → R) ↔ (P → (Q → R))

where P, Q, and R are propositions expressed in some logical system.

99.2 Natural language

99.2.1 Truth values

At any time, if P → Q is true, it can be replaced by P → (P ∧ Q).

One possible case for P → Q is for P to be true and Q to be true; thus P ∧ Q is also true, and P → (P ∧ Q) is true.
Another possible case sets P as false and Q as true. Thus, P ∧ Q is false and P → (P ∧ Q) is false → false; false → false is true.
The last case occurs when both P and Q are false. Thus, P ∧ Q is false and P → (P ∧ Q) is true.


99.2.2 Example
It rains and the sun shines implies that there is a rainbow.
Thus, if it rains, then the sun shines implies that there is a rainbow.

99.3 Proof
The following proof uses Material Implication, double negation, De Morgan's Laws, the negation of the conditional
statement, the Associative Property of conjunction, the negation of another conditional statement, and double negation
again, in that order, to derive the result.
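Since the proof table itself is not reproduced here, a quick truth-table check in Python confirms the equivalence (a sketch; the implication helper is ours):

    from itertools import product

    def implies(a, b):
        return (not a) or b

    # Exportation: ((P and Q) -> R) is equivalent to (P -> (Q -> R)) for all truth values.
    for P, Q, R in product([False, True], repeat=3):
        assert implies(P and Q, R) == implies(P, implies(Q, R))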

99.4 Relation to functions


Exportation is associated with Currying via the Curry–Howard correspondence.

99.5 References
[1] Hurley, Patrick (1991). A Concise Introduction to Logic 4th edition. Wadsworth Publishing. pp. 3645.

[2] Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall. p. 371.

[3] Moore and Parker

[4] http://www.philosophypages.com/lg/e11b.htm
Chapter 100

Extension (predicate logic)

The extension of a predicate (a truth-valued function) is the set of tuples of values that, used as arguments, satisfy
the predicate. Such a set of tuples is a relation.

100.1 Examples
For example, the statement "d2 is the weekday following d1" can be seen as a truth function associating to each
tuple (d2, d1) the value true or false. The extension of this truth function is, by convention, the set of all such tuples
associated with the value true, i.e.
{(Monday, Sunday), (Tuesday, Monday), (Wednesday, Tuesday), (Thursday, Wednesday), (Friday, Thursday), (Saturday, Friday), (Sunday, Saturday)}
By examining this extension we can conclude that "Tuesday is the weekday following Saturday" (for example) is false.
Using set-builder notation, the extension of the n-ary predicate Φ can be written as

{(x1, ..., xn) | Φ(x1, ..., xn)}.
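A short Python illustration of the same idea (the predicate and the finite universe of days are taken from the example above):

    days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]

    def follows(d2, d1):
        # "d2 is the weekday following d1"
        return days.index(d2) == (days.index(d1) + 1) % 7

    # The extension: all argument tuples for which the predicate is true.
    extension = {(d2, d1) for d2 in days for d1 in days if follows(d2, d1)}
    print(("Tuesday", "Saturday") in extension)   # False
    print(("Monday", "Sunday") in extension)      # True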

100.2 Relationship with characteristic function


If the values 0 and 1 in the range of a characteristic function are identified with the values false and true, respectively
(making the characteristic function a predicate), then for all relations R and predicates Φ the following two statements
are equivalent:

Φ is the characteristic function of R

R is the extension of Φ

100.3 See also


Extensional logic

Extensional set

Extensionality

Intension


Chapter 101

False (logic)

See also: Falsity

In logic, false or untrue is the state of possessing negative truth value or a nullary logical connective. In a truth-functional
system of propositional logic it is one of two postulated truth values, along with its negation, truth.[1] Usual
notations of the false are 0 (especially in Boolean logic and computer science), O (in prefix notation, Opq), and the
up tack symbol ⊥.[2]
Another approach is used for several formal theories (for example, intuitionistic propositional calculus) where the
false is a propositional constant (i.e. a nullary connective) ⊥, the truth value of this constant being always false in the
sense above.[3][4][5]

101.1 In classical logic and Boolean logic

Boolean logic defines the false in both senses mentioned above: "0" is a propositional constant, whose value by
definition is 0. In a classical propositional calculus, depending on the chosen set of fundamental connectives, the false
may or may not have a dedicated symbol. Such formulas as p ∧ ¬p and ¬(p → p) may be used instead.
In both systems the negation of the truth gives false. The negation of false is equivalent to the truth not only in classical
logic and Boolean logic, but also in most other logical systems, as explained below.

101.2 False, negation and contradiction

In most logical systems, negation, material conditional and false are related as:

¬p ⇔ (p → ⊥)

This is the definition of negation in some systems,[6] such as intuitionistic logic, and can be proven in propositional
calculi where negation is a fundamental connective. Because p → p is usually a theorem or axiom, a consequence is
that the negation of false (¬⊥) is true.
A contradiction is a statement which entails the false, i.e. φ ⊢ ⊥. Using the equivalence above, the fact that φ is a
contradiction may be derived, for example, from ⊢ ¬φ. Contradiction and the false are sometimes not distinguished,
especially due to the Latin term falsum denoting both. Contradiction means a statement is proven to be false, but the
false itself is a proposition which is defined to be opposite to the truth.
Logical systems may or may not contain the principle of explosion (in Latin, ex falso quodlibet), ⊥ ⊢ φ.


101.3 Consistency
Main article: Consistency

A formal theory using the "⊥" connective is defined to be consistent if and only if the false is not among its theorems.
In the absence of propositional constants, some substitutes such as those mentioned above may be used instead to define
consistency.

101.4 See also


Contradiction
Logical truth

Tautology (logic) (for symbolism of logical truth)

101.5 References
[1] Jennifer Fisher, On the Philosophy of Logic, Thomson Wadsworth, 2007, ISBN 0-495-00888-5, p. 17.

[2] Willard Van Orman Quine, Methods of Logic, 4th ed, Harvard University Press, 1982, ISBN 0-674-57176-2, p. 34.

[3] George Edward Hughes and D.E. Londey, The Elements of Formal Logic, Methuen, 1965, p. 151.

[4] Leon Horsten and Richard Pettigrew, Continuum Companion to Philosophical Logic, Continuum International Publishing
Group, 2011, ISBN 1-4411-5423-X, p. 199.

[5] Graham Priest, An Introduction to Non-Classical Logic: From If to Is, 2nd ed, Cambridge University Press, 2008, ISBN
0-521-85433-4, p. 105.

[6] Dov M. Gabbay and Franz Guenthner (eds), Handbook of Philosophical Logic, Volume 6, 2nd ed, Springer, 2002, ISBN
1-4020-0583-0, p. 12.
Chapter 102

Fiber (mathematics)

In mathematics, the term fiber (or fibre in British English) can have two meanings, depending on the context:

1. In naive set theory, the fiber of the element y in the set Y under a map f : X → Y is the inverse image of the
singleton {y} under f.

2. In algebraic geometry, the notion of a fiber of a morphism of schemes must be defined more carefully because,
in general, not every point is closed.

102.1 Definitions

102.1.1 Fiber in naive set theory


Let f : X → Y be a map. The fiber of an element y ∈ Y, commonly denoted by f⁻¹(y), is defined as

f⁻¹({y}) = {x ∈ X | f(x) = y}.

In various applications, this is also called:

the inverse image of {y} under the map f

the preimage of {y} under the map f

the level set of the function f at the point y.

The term level set is only used if f maps into the real numbers, so y is simply a number. If f is a continuous
function and if y is in the image of f, then the level set of y under f is a curve in 2D, a surface in 3D and, more
generally, a hypersurface of dimension d − 1.
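For a finite set, the fiber is just a preimage computation; a small Python sketch (the map f and the domain are arbitrary examples of ours):

    X = range(-3, 4)
    f = lambda x: x * x            # a map f : X -> Y

    def fiber(y):
        # f^-1({y}) = {x in X : f(x) = y}
        return {x for x in X if f(x) == y}

    print(fiber(4))   # {-2, 2}
    print(fiber(5))   # set(): the fiber over a point not in the image is empty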

102.1.2 Fiber in algebraic geometry


In algebraic geometry, if f : X → Y is a morphism of schemes, the fiber of a point p in Y is the fibered product
X ×_Y Spec k(p), where k(p) is the residue field at p.

102.2 Terminological variance


The recommended practice is to use the terms fiber, inverse image, preimage, and level set as follows:

the fiber of the element y under the map f


the inverse image of the set {y} under the map f


the preimage of the set {y} under the map f
the level set of the function f at the point y.

By abuse of language, the following terminology is sometimes used but should be avoided:

the fiber of the map f at the element y


the inverse image of the map f at the element y
the preimage of the map f at the element y
the level set of the point y under the map f.

102.3 See also


Fibration
Fiber bundle

Fiber product
Image (category theory)

Image (mathematics)

Inverse relation
Kernel (mathematics)

Level set
Preimage

Relation
Zero set
Chapter 103

Field of sets

Set algebra redirects here. For the basic properties and laws of sets, see Algebra of sets.

In mathematics, a field of sets is a pair ⟨X, F⟩ where X is a set and F is an algebra over X, i.e., a non-empty subset
of the power set of X closed under the intersection and union of pairs of sets and under complements of individual
sets. In other words, F forms a subalgebra of the power set Boolean algebra of X. (Many authors refer to F itself
as a field of sets. The word "field" in "field of sets" is not used with the meaning of field from field theory.) Elements
of X are called points and those of F are called complexes and are said to be the admissible sets of X.
Fields of sets play an essential role in the representation theory of Boolean algebras. Every Boolean algebra can be
represented as a field of sets.
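A small Python sketch of the definition (the ground set and the family of complexes are arbitrary examples of ours; frozenset is used so that sets can themselves be collected in a set):

    X = frozenset({1, 2, 3, 4})
    F = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), X}

    def is_field_of_sets(X, F):
        # Non-empty, and closed under pairwise union, pairwise intersection, and complement.
        if not F:
            return False
        closed_complement = all(X - A in F for A in F)
        closed_union = all(A | B in F for A in F for B in F)
        closed_intersection = all(A & B in F for A in F for B in F)
        return closed_complement and closed_union and closed_intersection

    print(is_field_of_sets(X, F))   # True: F is a four-element field of sets over X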

103.1 Fields of sets in the representation theory of Boolean algebras

103.1.1 Stone representation


Every finite Boolean algebra can be represented as a whole power set, namely the power set of its set of atoms; each element
of the Boolean algebra corresponds to the set of atoms below it (the join of which is the element). This power set
representation can be constructed more generally for any complete atomic Boolean algebra.
In the case of Boolean algebras which are not complete and atomic we can still generalize the power set representation
by considering fields of sets instead of whole power sets. To do this we first observe that the atoms of a finite Boolean
algebra correspond to its ultrafilters and that an atom is below an element of a finite Boolean algebra if and only if
that element is contained in the ultrafilter corresponding to the atom. This leads us to construct a representation of a
Boolean algebra by taking its set of ultrafilters and forming complexes by associating with each element of the Boolean
algebra the set of ultrafilters containing that element. This construction does indeed produce a representation of the
Boolean algebra as a field of sets and is known as the Stone representation. It is the basis of Stone's representation
theorem for Boolean algebras and an example of a completion procedure in order theory based on ideals or filters,
similar to Dedekind cuts.
Alternatively one can consider the set of homomorphisms onto the two element Boolean algebra and form complexes
by associating each element of the Boolean algebra with the set of such homomorphisms that map it to the top
element. (The approach is equivalent as the ultrafilters of a Boolean algebra are precisely the pre-images of the top
elements under these homomorphisms.) With this approach one sees that Stone representation can also be regarded
as a generalization of the representation of finite Boolean algebras by truth tables.

103.1.2 Separative and compact fields of sets: towards Stone duality


A field of sets is called separative (or differentiated) if and only if for every pair of distinct points there is a
complex containing one and not the other.
A field of sets is called compact if and only if for every proper filter over X the intersection of all the complexes
contained in the filter is non-empty.


These definitions arise from considering the topology generated by the complexes of a field of sets. Given a field of
sets X = ⟨X, F⟩ the complexes form a base for a topology; we denote the corresponding topological space by T(X).
Then:

T(X) is always a zero-dimensional space.

T(X) is a Hausdorff space if and only if X is separative.

T(X) is a compact space with compact open sets F if and only if X is compact.

T(X) is a Boolean space with clopen sets F if and only if X is both separative and compact (in which case it
is described as being descriptive).

The Stone representation of a Boolean algebra is always separative and compact; the corresponding Boolean space is
known as the Stone space of the Boolean algebra. The clopen sets of the Stone space are then precisely the complexes
of the Stone representation. The area of mathematics known as Stone duality is founded on the fact that the Stone
representation of a Boolean algebra can be recovered purely from the corresponding Stone space, whence a duality
exists between Boolean algebras and Boolean spaces.

103.2 Fields of sets with additional structure

103.2.1 Sigma algebras and measure spaces


If an algebra over a set is closed under countable intersections and countable unions, it is called a sigma algebra
and the corresponding field of sets is called a measurable space. The complexes of a measurable space are called
measurable sets.
A measure space is a triple ⟨X, F, μ⟩ where ⟨X, F⟩ is a measurable space and μ is a measure defined on it. If μ
is in fact a probability measure we speak of a probability space and call its underlying measurable space a sample
space. The points of a sample space are called samples and represent potential outcomes while the measurable sets
(complexes) are called events and represent properties of outcomes for which we wish to assign probabilities. (Many
use the term sample space simply for the underlying set of a probability space, particularly in the case where every
subset is an event.) Measure spaces and probability spaces play a foundational role in measure theory and probability
theory respectively.
The Loomis–Sikorski theorem provides a Stone-type duality between abstract sigma algebras and measurable spaces.

103.2.2 Topological fields of sets


A topological field of sets is a triple ⟨X, T, F⟩ where ⟨X, T⟩ is a topological space and ⟨X, F⟩ is a field of sets
which is closed under the closure operator of T or equivalently under the interior operator, i.e. the closure and interior
of every complex is also a complex. In other words, F forms a subalgebra of the power set interior algebra on ⟨X, T⟩.
Every interior algebra can be represented as a topological field of sets with its interior and closure operators corresponding
to those of the topological space.
Given a topological space the clopen sets trivially form a topological field of sets as each clopen set is its own interior
and closure. The Stone representation of a Boolean algebra can be regarded as such a topological field of sets.

Algebraic fields of sets and Stone fields

A topological field of sets is called algebraic if and only if there is a base for its topology consisting of complexes.
If a topological field of sets is both compact and algebraic then its topology is compact and its compact open sets are
precisely the open complexes. Moreover, the open complexes form a base for the topology.
Topological fields of sets that are separative, compact and algebraic are called Stone fields and provide a generalization
of the Stone representation of Boolean algebras. Given an interior algebra we can form the Stone representation of

its underlying Boolean algebra and then extend this to a topological field of sets by taking the topology generated by
the complexes corresponding to the open elements of the interior algebra (which form a base for a topology). These
complexes are then precisely the open complexes and the construction produces a Stone field representing the interior
algebra - the Stone representation.

103.2.3 Preorder fields


A preorder field is a triple ⟨X, ≤, F⟩ where ⟨X, ≤⟩ is a preordered set and ⟨X, F⟩ is a field of sets.
Like the topological fields of sets, preorder fields play an important role in the representation theory of interior algebras.
Every interior algebra can be represented as a preorder field with its interior and closure operators corresponding
to those of the Alexandrov topology induced by the preorder. In other words,

Int(S) = {x ∈ X : there exists a y ∈ S with y ≤ x} and

Cl(S) = {x ∈ X : there exists a y ∈ S with x ≤ y} for all S ∈ F

Preorder fields arise naturally in modal logic where the points represent the possible worlds in the Kripke semantics
of a theory in the modal logic S4 (a formal mathematical abstraction of epistemic logic), the preorder represents the
accessibility relation on these possible worlds in this semantics, and the complexes represent sets of possible worlds
in which individual sentences in the theory hold, providing a representation of the Lindenbaum–Tarski algebra of the
theory.

Algebraic and canonical preorder fields

A preorder field is called algebraic if and only if it has a set of complexes A which determines the preorder in the
following manner: x ≤ y if and only if for every complex S ∈ A, x ∈ S implies y ∈ S. The preorder fields
obtained from S4 theories are always algebraic, the complexes determining the preorder being the sets of possible
worlds in which the sentences of the theory closed under necessity hold.
A separative compact algebraic preorder field is said to be canonical. Given an interior algebra, by replacing the
topology of its Stone representation with the corresponding canonical preorder (specialization preorder) we obtain a
representation of the interior algebra as a canonical preorder field. By replacing the preorder by its corresponding
Alexandrov topology we obtain an alternative representation of the interior algebra as a topological field of sets. (The
topology of this "Alexandrov representation" is just the Alexandrov bi-coreflection of the topology of the Stone
representation.)

103.2.4 Complex algebras and fields of sets on relational structures


The representation of interior algebras by preorder fields can be generalized to a representation theorem for arbitrary
(normal) Boolean algebras with operators. For this we consider structures ⟨X, (Ri)I, F⟩ where ⟨X, (Ri)I⟩ is a
relational structure, i.e. a set with an indexed family of relations defined on it, and ⟨X, F⟩ is a field of sets. The complex
algebra (or algebra of complexes) determined by a field of sets X = ⟨X, (Ri)I, F⟩ on a relational structure
is the Boolean algebra with operators

C(X) = ⟨F, ∩, ∪, ′, ∅, X, (fi)I⟩

where for all i ∈ I, if Ri is a relation of arity n + 1, then fi is an operator of arity n and for all S1, ..., Sn ∈ F

fi(S1, ..., Sn) = {x ∈ X : there exist x1 ∈ S1, ..., xn ∈ Sn such that Ri(x1, ..., xn, x)}

This construction can be generalized to fields of sets on arbitrary algebraic structures having both operators and
relations, as operators can be viewed as a special case of relations. If F is the whole power set of X then C(X) is
called a full complex algebra or power algebra.
Every (normal) Boolean algebra with operators can be represented as a field of sets on a relational structure in the
sense that it is isomorphic to the complex algebra corresponding to the field.
(Historically the term complex was first used in the case where the algebraic structure was a group and has its origins
in 19th century group theory where a subset of a group was called a complex.)

103.3 See also


List of Boolean algebra topics

Algebra of sets
Sigma algebra

Measure theory

Probability theory
Interior algebra

Alexandrov topology
Stone's representation theorem for Boolean algebras

Stone duality
Boolean ring

Preordered field


103.5 External links


Hazewinkel, Michiel, ed. (2001) [1994], Algebra of sets, Encyclopedia of Mathematics, Springer Sci-
ence+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Chapter 104

Finitary relation

This article is about the set-theoretic notion of relation. For the common case, see binary relation.
For other uses, see Relation (disambiguation).

In mathematics, a finitary relation has a finite number of places. In set theory and logic, a relation is a property
that assigns truth values to k-tuples of individuals. Typically, the property describes a possible connection between
the components of a k-tuple. For a given set of k-tuples, a truth value is assigned to each k-tuple according to
whether the property does or does not hold.
An example of a ternary relation (i.e., between three individuals) is: "X was introduced to Y by Z", where (X, Y, Z)
is a 3-tuple of persons; for example, "Beatrice Wood was introduced to Henri-Pierre Roché by Marcel Duchamp" is
true, while "Karl Marx was introduced to Friedrich Engels by Queen Victoria" is false.

104.1 Informal introduction


Relation is formally defined in the next section. In this section we introduce the concept of a relation with a familiar
everyday example. Consider the relation involving three roles that people might play, expressed in a statement of the
form "X thinks that Y likes Z". The facts of a concrete situation could be organized in a table like the following:

Person X   Person Y   Person Z
Alice      Bob        Denise
Charles    Alice      Bob
Charles    Charles    Alice
Denise     Denise     Denise

Each row of the table records a fact or makes an assertion of the form "X thinks that Y likes Z". For instance, the
first row says, in effect, "Alice thinks that Bob likes Denise". The table represents a relation S over the set P of people
under discussion:

P = {Alice, Bob, Charles, Denise}.

The data of the table are equivalent to the following set of ordered triples:

S = {(Alice, Bob, Denise), (Charles, Alice, Bob), (Charles, Charles, Alice), (Denise, Denise, Denise)}.
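In Python, the same relation can be written down directly as a set of tuples (a sketch; the membership test mirrors the notation S(Alice, Bob, Denise) used below, and the last line anticipates the characteristic function discussed later in this chapter):

    S = {("Alice", "Bob", "Denise"),
         ("Charles", "Alice", "Bob"),
         ("Charles", "Charles", "Alice"),
         ("Denise", "Denise", "Denise")}

    # "Alice thinks that Bob likes Denise" is asserted by the relation:
    print(("Alice", "Bob", "Denise") in S)      # True
    print(("Bob", "Alice", "Charles") in S)     # False

    # The characteristic function of S maps a triple to 1 or 0.
    chi = lambda t: 1 if t in S else 0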

By a slight abuse of notation, it is usual to write S(Alice, Bob, Denise) to say the same thing as the first row of
the table. The relation S is a ternary relation, since there are three items involved in each row. The relation itself
is a mathematical object defined in terms of concepts from set theory (i.e., the relation is a subset of the Cartesian
product on {Person X, Person Y, Person Z}), that carries all of the information from the table in one neat package.
Mathematically, then, a relation is simply an ordered set.
The table for relation S is an extremely simple example of a relational database. The theoretical aspects of databases
are the specialty of one branch of computer science, while their practical impacts have become all too familiar in our
everyday lives. Computer scientists, logicians, and mathematicians, however, tend to see different things when they
look at these concrete examples and samples of the more general concept of a relation.
For one thing, databases are designed to deal with empirical data, and experience is always finite, whereas mathematics
at the very least concerns itself with potential infinity. This difference in perspective brings up a number of ideas that
may be usefully introduced at this point, if by no means covered in depth.


104.2 Relations with a small number of places


The variable k giving the number of "places" in the relation, 3 for the above example, is a non-negative integer,
called the relation's arity, adicity, or dimension. A relation with k places is variously called a k-ary, a k-adic, or
a k-dimensional relation. Relations with a finite number of places are called finite-place or finitary relations. It
is possible to generalize the concept to include infinitary relations between infinitudes of individuals, for example
infinite sequences; however, in this article only finitary relations are discussed, which will from now on simply be
called relations.
Since there is only one 0-tuple, the so-called empty tuple ( ), there are only two zero-place relations: the one that
always holds, and the one that never holds. They are sometimes useful for constructing the base case of an induction
argument. One-place relations are called unary relations. For instance, any set (such as the collection of Nobel
laureates) can be viewed as a collection of individuals having some property (such as that of having been awarded
the Nobel prize). Two-place relations are called binary relations or, in the past, dyadic relations. Binary relations are
very common, given the ubiquity of relations such as:

Equality and inequality, denoted by signs such as ' = ' and ' < ' in statements like ' 5 < 12 ';
Being a divisor of, denoted by the sign ' | ' in statements like ' 13 | 143 ';
Set membership, denoted by the sign '∈' in statements like '1 ∈ ℕ'.

A k -ary relation is a straightforward generalization of a binary relation.

104.3 Formal definitions


When two objects, qualities, classes, or attributes, viewed together by the mind, are seen under some
connexion, that connexion is called a relation.
Augustus De Morgan[1]

The simpler of the two definitions of k-place relations encountered in mathematics is:
Definition 1. A relation L over the sets X1, …, Xk is a subset of their Cartesian product, written L ⊆ X1 × ⋯ × Xk.
Relations are classified according to the number of sets in the defining Cartesian product, in other words, according
to the number of terms following L. Hence:

Lu denotes a unary relation or property;

Luv or uLv denote a binary relation;
Luvw denotes a ternary relation;
Luvwx denotes a quaternary relation.

Relations with more than four terms are usually referred to as k-ary or n-ary, for example, a 5-ary relation. A k-ary
relation is simply a set of k-tuples.
The second definition makes use of an idiom that is common in mathematics, stipulating that "such and such is an
n-tuple" in order to ensure that such and such a mathematical object is determined by the specification of n component
mathematical objects. In the case of a relation L over k sets, there are k + 1 things to specify, namely, the k sets plus
a subset of their Cartesian product. In the idiom, this is expressed by saying that L is a (k + 1)-tuple.
Definition 2. A relation L over the sets X1, …, Xk is a (k + 1)-tuple L = (X1, …, Xk, G(L)), where G(L) is a subset
of the Cartesian product X1 × ⋯ × Xk. G(L) is called the graph of L.
Elements of a relation are more briefly denoted by using boldface characters, for example, the constant element a =
(a1, …, ak) or the variable element x = (x1, …, xk).
A statement of the form "a is in the relation L" or "a satisfies L" is taken to mean that a is in L under the first
definition and that a is in G(L) under the second definition.
The following considerations apply under either definition:

The sets Xj for j = 1 to k are called the domains of the relation. Under the first definition, the relation does not uniquely determine a given sequence of domains.

If all of the domains Xj are the same set X, then it is simpler to refer to L as a k-ary relation over X.

If any of the domains Xj is empty, then the defining Cartesian product is empty, and the only relation over such a sequence of domains is the empty relation L = ∅. Hence it is commonly stipulated that all of the domains be nonempty.

As a rule, whatever definition best fits the application at hand will be chosen for that purpose, and anything that falls under it will be called a relation for the duration of that discussion. If it becomes necessary to distinguish the two definitions, an entity satisfying the second definition may be called an embedded or included relation.
If L is a relation over the domains X1, ..., Xk, it is conventional to consider a sequence of terms called variables, x1, ..., xk, that are said to range over the respective domains.
Let a Boolean domain B be a two-element set, say, B = {0, 1}, whose elements can be interpreted as logical values, typically 0 = false and 1 = true. The characteristic function of the relation L, written χL or χ(L), is the Boolean-valued function χL : X1 × ... × Xk → B, defined in such a way that χL(x) = 1 just in case the k-tuple x is in the relation L. Such a function can also be called an indicator function, particularly in probability and statistics, to avoid confusion with the notion of a characteristic function in probability theory.
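To make the characteristic (indicator) function concrete, here is a minimal Python sketch; the particular domains and relation are assumptions chosen for illustration:

    # Domains (assumed for illustration): X1 = X2 = {0, 1, 2}.
    # L is the binary relation "strictly less than", given extensionally.
    L = {(x, y) for x in range(3) for y in range(3) if x < y}

    def chi_L(x, y):
        """Characteristic (indicator) function of L: 1 if (x, y) is in L, else 0."""
        return 1 if (x, y) in L else 0

    print(chi_L(0, 2))  # 1, since (0, 2) is in L
    print(chi_L(2, 0))  # 0, since (2, 0) is not in L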
It is conventional in applied mathematics, computer science, and statistics to refer to a Boolean-valued function like χL as a k-place predicate. From the more abstract viewpoint of formal logic and model theory, the relation L constitutes a logical model or a relational structure that serves as one of many possible interpretations of some k-place predicate symbol.
Because relations arise in many scientific disciplines as well as in many branches of mathematics and logic, there
is considerable variation in terminology. This article treats a relation as the set-theoretic extension of a relational
concept or term. A variant usage reserves the term relation to the corresponding logical entity, either the logical
comprehension, which is the totality of intensions or abstract properties that all of the elements of the relation in
extension have in common, or else the symbols that are taken to denote these elements and intensions. Further, some
writers of the latter persuasion introduce terms with more concrete connotations, like relational structure, for the
set-theoretic extension of a given relational concept.

104.4 History
The logician Augustus De Morgan, in work published around 1860, was the first to articulate the notion of relation in anything like its present sense. He also stated the first formal results in the theory of relations (on De Morgan and relations, see Merrill 1990). Charles Sanders Peirce restated and extended De Morgan's results. Bertrand Russell's Principles of Mathematics (1938; 1st ed. 1903) was historically important, in that it brought together in one place many 19th-century results on relations, especially orders, by Peirce, Gottlob Frege, Georg Cantor, Richard Dedekind, and others. Russell and A. N. Whitehead made free use of these results in their Principia Mathematica.

104.5 Notes
[1] De Morgan, A. (1858) "On the syllogism, part 3", in Heath, P., ed. (1966) On the Syllogism and Other Logical Writings. Routledge. p. 119.

104.6 See also


Correspondence (mathematics)

Functional relation

Incidence structure

Hypergraph

Logic of relatives
Logical matrix
Partial order
Projection (set theory)
Reflexive relation
Relation algebra
Sign relation
Transitive relation
Relational algebra
Relational model
Predicate (mathematical logic)

104.7 References
Peirce, C.S. (1870), "Description of a Notation for the Logic of Relatives, Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic", Memoirs of the American Academy of Arts and Sciences 9, 317–78, 1870. Reprinted, Collected Papers CP 3.45–149, Chronological Edition CE 2, 359–429.
Ulam, S.M. and Bednarek, A.R. (1990), "On the Theory of Relational Structures and Schemata for Parallel Computation", pp. 477–508 in A.R. Bednarek and Françoise Ulam (eds.), Analogies Between Analogies: The Mathematical Reports of S.M. Ulam and His Los Alamos Collaborators, University of California Press, Berkeley, CA.

104.8 Bibliography
Bourbaki, N. (1994) Elements of the History of Mathematics, John Meldrum, trans. Springer-Verlag.
Carnap, Rudolf (1958) Introduction to Symbolic Logic with Applications. Dover Publications.
Halmos, P.R. (1960) Naive Set Theory. Princeton NJ: D. Van Nostrand Company.
Lawvere, F.W., and R. Rosebrugh (2003) Sets for Mathematics, Cambridge Univ. Press.
Lucas, J. R. (1999) Conceptual Roots of Mathematics. Routledge.
Maddux, R.D. (2006) Relation Algebras, vol. 150 in Studies in Logic and the Foundations of Mathematics. Elsevier Science.
Merrill, Dan D. (1990) Augustus De Morgan and the logic of relations. Kluwer.
Peirce, C.S. (1984) Writings of Charles S. Peirce: A Chronological Edition, Volume 2, 1867-1871. Peirce
Edition Project, eds. Indiana University Press.
Russell, Bertrand (1903/1938) The Principles of Mathematics, 2nd ed. Cambridge Univ. Press.
Suppes, Patrick (1960/1972) Axiomatic Set Theory. Dover Publications.
Tarski, A. (1956/1983) Logic, Semantics, Metamathematics, Papers from 1923 to 1938, J.H. Woodger, trans.
1st edition, Oxford University Press. 2nd edition, J. Corcoran, ed. Indianapolis IN: Hackett Publishing.
Ulam, S.M. (1990) Analogies Between Analogies: The Mathematical Reports of S.M. Ulam and His Los Alamos
Collaborators, in A.R. Bednarek and Françoise Ulam, eds., University of California Press.
Fraïssé, R., Theory of Relations (North Holland; 2000).
Chapter 105

First-order logic

"Predicate logic" redirects here. For logics admitting predicate or function variables, see Higher-order logic.

First-order logic, also known as first-order predicate calculus and predicate logic, is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man" one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier and x is a variable.[1] This distinguishes it from propositional logic, which does not use quantifiers or relations.[2]
A theory about a topic is usually a first-order logic together with a specified domain of discourse over which the quantified variables range, finitely many functions from that domain to itself, finitely many predicates defined on that domain, and a set of axioms believed to hold for those things. Sometimes "theory" is understood in a more formal sense, which is just a set of sentences in first-order logic.
The adjective "first-order" distinguishes first-order logic from higher-order logic, in which there are predicates having predicates or functions as arguments, or in which one or both of predicate quantifiers or function quantifiers are permitted.[3] In first-order theories, predicates are often associated with sets. In interpreted higher-order theories, predicates may be interpreted as sets of sets.
There are many deductive systems for first-order logic which are both sound (all provable statements are true in all models) and complete (all statements which are true in all models are provable). Although the logical consequence relation is only semidecidable, much progress has been made in automated theorem proving in first-order logic. First-order logic also satisfies several metalogical theorems that make it amenable to analysis in proof theory, such as the Löwenheim–Skolem theorem and the compactness theorem.
First-order logic is the standard for the formalization of mathematics into axioms and is studied in the foundations of mathematics. Peano arithmetic and Zermelo–Fraenkel set theory are axiomatizations of number theory and set theory, respectively, into first-order logic. No first-order theory, however, has the strength to uniquely describe a structure with an infinite domain, such as the natural numbers or the real line. Axiom systems that do fully describe these two structures (that is, categorical axiom systems) can be obtained in stronger logics such as second-order logic.
The foundations of first-order logic were developed independently by Gottlob Frege and Charles Sanders Peirce.[4] For a history of first-order logic and how it came to dominate formal logic, see José Ferreirós (2001).

105.1 Introduction
While propositional logic deals with simple declarative propositions, first-order logic additionally covers predicates and quantification.
A predicate takes an entity or entities in the domain of discourse as input and outputs either True or False. Consider the two sentences "Socrates is a philosopher" and "Plato is a philosopher". In propositional logic, these sentences are viewed as being unrelated and might be denoted, for example, by variables such as p and q. The predicate "is a philosopher" occurs in both sentences, which have a common structure of "a is a philosopher". The variable a is instantiated as "Socrates" in the first sentence and is instantiated as "Plato" in the second sentence. While first-order


logic allows for the use of predicates, such as "is a philosopher" in this example, propositional logic does not.[5]
Relationships between predicates can be stated using logical connectives. Consider, for example, the first-order formula "if a is a philosopher, then a is a scholar". This formula is a conditional statement with "a is a philosopher" as its hypothesis and "a is a scholar" as its conclusion. The truth of this formula depends on which object is denoted by a, and on the interpretations of the predicates "is a philosopher" and "is a scholar".
Quantifiers can be applied to variables in a formula. The variable a in the previous formula can be universally quantified, for instance, with the first-order sentence "For every a, if a is a philosopher, then a is a scholar". The universal quantifier "for every" in this sentence expresses the idea that the claim "if a is a philosopher, then a is a scholar" holds for all choices of a.
The negation of the sentence "For every a, if a is a philosopher, then a is a scholar" is logically equivalent to the sentence "There exists a such that a is a philosopher and a is not a scholar". The existential quantifier "there exists" expresses the idea that the claim "a is a philosopher and a is not a scholar" holds for some choice of a.
The predicates "is a philosopher" and "is a scholar" each take a single variable. In general, predicates can take several variables. In the first-order sentence "Socrates is the teacher of Plato", the predicate "is the teacher of" takes two variables.
An interpretation (or model) of a first-order formula specifies what each predicate means and the entities that can instantiate the variables. These entities form the domain of discourse or universe, which is usually required to be a nonempty set. For example, in an interpretation with the domain of discourse consisting of all human beings and the predicate "is a philosopher" understood as "was the author of the Republic", the sentence "There exists a such that a is a philosopher" is seen as being true, as witnessed by Plato.

105.2 Syntax
There are two key parts of first-order logic. The syntax determines which collections of symbols are legal expressions in first-order logic, while the semantics determine the meanings behind these expressions.

105.2.1 Alphabet
Unlike natural languages, such as English, the language of first-order logic is completely formal, so that it can be mechanically determined whether a given expression is legal. There are two key types of legal expressions: terms, which intuitively represent objects, and formulas, which intuitively express predicates that can be true or false. The terms and formulas of first-order logic are strings of symbols which together form the alphabet of the language. As with all formal languages, the nature of the symbols themselves is outside the scope of formal logic; they are often regarded simply as letters and punctuation symbols.
It is common to divide the symbols of the alphabet into logical symbols, which always have the same meaning, and non-logical symbols, whose meaning varies by interpretation. For example, the logical symbol ∧ always represents "and"; it is never interpreted as "or". On the other hand, a non-logical predicate symbol such as Phil(x) could be interpreted to mean "x is a philosopher", "x is a man named Philip", or any other unary predicate, depending on the interpretation at hand.

Logical symbols

There are several logical symbols in the alphabet, which vary by author but usually include:

The quantifier symbols ∀ and ∃

The logical connectives: ∧ for conjunction, ∨ for disjunction, → for implication, ↔ for biconditional, ¬ for negation. Occasionally other logical connective symbols are included. Some authors use Cpq instead of →, and Epq instead of ↔, especially in contexts where → is used for other purposes. Moreover, the horseshoe ⊃ may replace →; the triple-bar ≡ may replace ↔; a tilde (~), Np, or Fpq may replace ¬; ||, or Apq may replace ∨; and &, Kpq, or the middle dot, ⋅, may replace ∧, especially if these symbols are not available for technical reasons. (Note: the aforementioned symbols Cpq, Epq, Np, Apq, and Kpq are used in Polish notation.)
Parentheses, brackets, and other punctuation symbols. The choice of such symbols varies depending on context.

An infinite set of variables, often denoted by lowercase letters at the end of the alphabet x, y, z, ... . Subscripts are often used to distinguish variables: x0, x1, x2, ... .
An equality symbol (sometimes, identity symbol) =; see the section on equality below.

Not all of these symbols are required: only one of the quantifiers, negation and conjunction, variables, brackets and equality suffice. There are numerous minor variations that may define additional logical symbols:

Sometimes the truth constants T, Vpq, or ⊤, for "true" and F, Opq, or ⊥, for "false" are included. Without any such logical operators of valence 0, these two constants can only be expressed using quantifiers.
Sometimes additional logical connectives are included, such as the Sheffer stroke, Dpq (NAND), and exclusive or, Jpq.

Non-logical symbols

The non-logical symbols represent predicates (relations), functions and constants on the domain of discourse. It used to be standard practice to use a fixed, infinite set of non-logical symbols for all purposes. A more recent practice is to use different non-logical symbols according to the application one has in mind. Therefore, it has become necessary to name the set of all non-logical symbols used in a particular application. This choice is made via a signature.[6]
The traditional approach is to have only one, infinite, set of non-logical symbols (one signature) for all applications. Consequently, under the traditional approach there is only one language of first-order logic.[7] This approach is still common, especially in philosophically oriented books.

1. For every integer n ≥ 0 there is a collection of n-ary, or n-place, predicate symbols. Because they represent relations between n elements, they are also called relation symbols. For each arity n we have an infinite supply of them:
P^n_0, P^n_1, P^n_2, P^n_3, ...
2. For every integer n ≥ 0 there are infinitely many n-ary function symbols:
f^n_0, f^n_1, f^n_2, f^n_3, ...

In contemporary mathematical logic, the signature varies by application. Typical signatures in mathematics are {1, ·} or just {·} for groups, or {0, 1, +, ·, <} for ordered fields. There are no restrictions on the number of non-logical symbols. The signature can be empty, finite, or infinite, even uncountable. Uncountable signatures occur for example in modern proofs of the Löwenheim–Skolem theorem.
In this approach, every non-logical symbol is of one of the following types.

1. A predicate symbol (or relation symbol) with some valence (or arity, number of arguments) greater than or equal to 0. These are often denoted by uppercase letters P, Q, R, ... .
Relations of valence 0 can be identified with propositional variables. For example, P, which can stand for any statement.
For example, P(x) is a predicate variable of valence 1. One possible interpretation is "x is a man".
Q(x, y) is a predicate variable of valence 2. Possible interpretations include "x is greater than y" and "x is the father of y".
2. A function symbol, with some valence greater than or equal to 0. These are often denoted by lowercase letters f, g, h, ... .
Examples: f(x) may be interpreted as "the father of x". In arithmetic, it may stand for "−x". In set theory, it may stand for "the power set of x". In arithmetic, g(x, y) may stand for "x + y". In set theory, it may stand for "the union of x and y".
Function symbols of valence 0 are called constant symbols, and are often denoted by lowercase letters at the beginning of the alphabet a, b, c, ... . The symbol a may stand for Socrates. In arithmetic, it may stand for 0. In set theory, such a constant may stand for the empty set.

The traditional approach can be recovered in the modern approach by simply specifying the custom signature to
consist of the traditional sequences of non-logical symbols.

105.2.2 Formation rules


The formation rules define the terms and formulas of first-order logic.[8] When terms and formulas are represented as strings of symbols, these rules can be used to write a formal grammar for terms and formulas. These rules are generally context-free (each production has a single symbol on the left side), except that the set of symbols may be allowed to be infinite and there may be many start symbols, for example the variables in the case of terms.

Terms

The set of terms is inductively defined by the following rules:

1. Variables. Any variable is a term.

2. Functions. Any expression f(t1, ..., tn) of n arguments (where each argument ti is a term and f is a function symbol of valence n) is a term. In particular, symbols denoting individual constants are nullary function symbols, and are thus terms.

Only expressions which can be obtained by finitely many applications of rules 1 and 2 are terms. For example, no expression involving a predicate symbol is a term.

Formulas

The set of formulas (also called well-formed formulas[9] or WFFs) is inductively defined by the following rules:

1. Predicate symbols. If P is an n-ary predicate symbol and t1, ..., tn are terms then P(t1, ..., tn) is a formula.
2. Equality. If the equality symbol is considered part of logic, and t1 and t2 are terms, then t1 = t2 is a formula.
3. Negation. If φ is a formula, then ¬φ is a formula.
4. Binary connectives. If φ and ψ are formulas, then (φ → ψ) is a formula. Similar rules apply to other binary logical connectives.
5. Quantifiers. If φ is a formula and x is a variable, then ∀x φ (for all x, φ holds) and ∃x φ (there exists x such that φ) are formulas.

Only expressions which can be obtained by finitely many applications of rules 1–5 are formulas. The formulas obtained from the first two rules are said to be atomic formulas.
For example,

∀x ∀y (P(f(x)) → ¬(P(x) → Q(f(y), x, z)))

is a formula, if f is a unary function symbol, P a unary predicate symbol, and Q a ternary predicate symbol. On the other hand, ∀x x is not a formula, although it is a string of symbols from the alphabet.
The role of the parentheses in the definition is to ensure that any formula can only be obtained in one way by following the inductive definition (in other words, there is a unique parse tree for each formula). This property is known as unique readability of formulas. There are many conventions for where parentheses are used in formulas. For example, some authors use colons or full stops instead of parentheses, or change the places in which parentheses are inserted. Each author's particular definition must be accompanied by a proof of unique readability.
This definition of a formula does not support defining an if-then-else function ite(c, a, b), where c is a condition expressed as a formula, that would return a if c is true, and b if it is false. This is because both predicates and functions can only accept terms as parameters, but the first parameter is a formula. Some languages built on first-order logic, such as SMT-LIB 2.0, add this.[10]
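As an informal illustration of the inductive definitions of terms and formulas above, the following Python sketch encodes them as small dataclasses; the class and field names are inventions of this sketch, not standard notation:

    # Terms and formulas as plain Python dataclasses, mirroring the inductive
    # definitions above (rule numbers refer to the lists in this section).
    from dataclasses import dataclass
    from typing import Tuple, Union

    @dataclass(frozen=True)
    class Var:            # Terms, rule 1: any variable is a term
        name: str

    @dataclass(frozen=True)
    class Func:           # Terms, rule 2: f(t1, ..., tn) is a term
        symbol: str
        args: Tuple["Term", ...]

    Term = Union[Var, Func]

    @dataclass(frozen=True)
    class Pred:           # Formulas, rule 1: P(t1, ..., tn) is an atomic formula
        symbol: str
        args: Tuple[Term, ...]

    @dataclass(frozen=True)
    class Not:            # Formulas, rule 3: negation
        body: "Formula"

    @dataclass(frozen=True)
    class Implies:        # Formulas, rule 4: one binary connective
        left: "Formula"
        right: "Formula"

    @dataclass(frozen=True)
    class ForAll:         # Formulas, rule 5: universal quantification
        var: str
        body: "Formula"

    Formula = Union[Pred, Not, Implies, ForAll]

    # The example formula from the text:
    # forall x forall y (P(f(x)) -> not (P(x) -> Q(f(y), x, z)))
    example = ForAll("x", ForAll("y",
        Implies(Pred("P", (Func("f", (Var("x"),)),)),
                Not(Implies(Pred("P", (Var("x"),)),
                            Pred("Q", (Func("f", (Var("y"),)), Var("x"), Var("z"))))))))

Because only the listed constructors exist, every value built this way corresponds to a formula obtainable by finitely many applications of the rules, which is exactly the point of the inductive definition.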

Notational conventions

For convenience, conventions have been developed about the precedence of the logical operators, to avoid the need to write parentheses in some cases. These rules are similar to the order of operations in arithmetic. A common convention is:

¬ is evaluated first

∧ and ∨ are evaluated next

Quantifiers are evaluated next

→ is evaluated last.

Moreover, extra punctuation not required by the definition may be inserted to make formulas easier to read. Thus the formula

¬∀x P(x) → ∃x ¬P(x)

might be written as

(¬[∀x P(x)]) → ∃x [¬P(x)].

In some fields, it is common to use infix notation for binary relations and functions, instead of the prefix notation defined above. For example, in arithmetic, one typically writes "2 + 2 = 4" instead of "=(+(2,2),4)". It is common to regard formulas in infix notation as abbreviations for the corresponding formulas in prefix notation, cf. also term structure vs. representation.
The definitions above use infix notation for binary connectives such as →. A less common convention is Polish notation, in which one writes →, ∧, and so on in front of their arguments rather than between them. This convention allows all punctuation symbols to be discarded. Polish notation is compact and elegant, but rarely used in practice because it is hard for humans to read. In Polish notation, the formula

∀x ∀y (P(f(x)) → ¬(P(x) → Q(f(y), x, z)))

becomes "∀x∀y→Pfx¬→PxQfyxz".

105.2.3 Free and bound variables


Main article: Free variables and bound variables

In a formula, a variable may occur free or bound. Intuitively, a variable is free in a formula if it is not quantified: in ∀y P(x, y), variable x is free while y is bound. The free and bound variables of a formula are defined inductively as follows.

1. Atomic formulas. If φ is an atomic formula then x is free in φ if and only if x occurs in φ. Moreover, there are no bound variables in any atomic formula.

2. Negation. x is free in ¬φ if and only if x is free in φ. x is bound in ¬φ if and only if x is bound in φ.

3. Binary connectives. x is free in (φ → ψ) if and only if x is free in either φ or ψ. x is bound in (φ → ψ) if and only if x is bound in either φ or ψ. The same rule applies to any other binary connective in place of →.

4. Quantifiers. x is free in ∀y φ if and only if x is free in φ and x is a different symbol from y. Also, x is bound in ∀y φ if and only if x is y or x is bound in φ. The same rule holds with ∃ in place of ∀.

For example, in ∀x ∀y (P(x) → Q(x, f(x), z)), x and y are bound variables, z is a free variable, and w is neither because it does not occur in the formula.
Free and bound variables of a formula need not be disjoint sets: x is both free and bound in P(x) → ∀x Q(x).
Freeness and boundness can be also specialized to specific occurrences of variables in a formula. For example, in P(x) → ∀x Q(x), the first occurrence of x is free while the second is bound. In other words, the x in P(x) is free while the x in ∀x Q(x) is bound.
A formula in first-order logic with no free variables is called a first-order sentence. These are the formulas that will have well-defined truth values under an interpretation. For example, whether a formula such as Phil(x) is true must depend on what x represents. But the sentence ∃x Phil(x) will be either true or false in a given interpretation.
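A minimal Python sketch of the inductive computation of free variables, under the simplifying assumption that predicate arguments are bare variables (no nested function terms); the nested-tuple encoding of formulas is an invention of this sketch:

    # Formulas as nested tuples, e.g. ("pred", "P", ("x",)) for P(x),
    # ("not", f), ("implies", f, g), ("forall", "x", f), ("exists", "x", f).
    def free_vars(phi):
        """Free variables of a formula, following the inductive clauses above."""
        kind = phi[0]
        if kind == "pred":                    # atomic: every occurring variable is free
            return set(phi[2])
        if kind == "not":
            return free_vars(phi[1])
        if kind == "implies":                 # any binary connective behaves the same way
            return free_vars(phi[1]) | free_vars(phi[2])
        if kind in ("forall", "exists"):      # the quantifier binds its variable
            return free_vars(phi[2]) - {phi[1]}
        raise ValueError(f"unknown formula: {phi!r}")

    # P(x) -> forall x Q(x): x is free (first occurrence) even though it is also bound.
    phi = ("implies", ("pred", "P", ("x",)), ("forall", "x", ("pred", "Q", ("x",))))
    print(free_vars(phi))  # {'x'}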

105.2.4 Example: ordered abelian groups


In mathematics the language of ordered abelian groups has one constant symbol 0, one unary function symbol −, one binary function symbol +, and one binary relation symbol ≤. Then:

The expressions +(x, y) and +(x, +(y, −(z))) are terms. These are usually written as x + y and x + y − z.
The expressions +(x, y) = 0 and ≤(+(x, +(y, −(z))), +(x, y)) are atomic formulas. These are usually written as x + y = 0 and x + y − z ≤ x + y.
The expression (∀x ∀y [≤(+(x, y), z)] → ∀x ∀y [+(x, y) = 0]) is a formula, which is usually written as ∀x∀y(x + y ≤ z) → ∀x∀y(x + y = 0). This formula has one free variable, z.

The axioms for ordered abelian groups can be expressed as a set of sentences in the language. For example, the axiom stating that the group is commutative is usually written (∀x)(∀y)[x + y = y + x].

105.3 Semantics
An interpretation of a first-order language assigns a denotation to all non-logical constants in that language. It also determines a domain of discourse that specifies the range of the quantifiers. The result is that each term is assigned an object that it represents, and each sentence is assigned a truth value. In this way, an interpretation provides semantic meaning to the terms and formulas of the language. The study of the interpretations of formal languages is called formal semantics. What follows is a description of the standard or Tarskian semantics for first-order logic. (It is also possible to define game semantics for first-order logic, but aside from requiring the axiom of choice, game semantics agree with Tarskian semantics for first-order logic, so game semantics will not be elaborated herein.)
The domain of discourse D is a nonempty set of "objects" of some kind. Intuitively, a first-order formula is a statement about these objects; for example, ∃x P(x) states the existence of an object x such that the predicate P is true of it. The domain of discourse is the set of considered objects. For example, one can take D to be the set of integer numbers.
The interpretation of a function symbol is a function. For example, if the domain of discourse consists of integers, a function symbol f of arity 2 can be interpreted as the function that gives the sum of its arguments. In other words, the symbol f is associated with the function I(f) which, in this interpretation, is addition.
The interpretation of a constant symbol is a function from the one-element set D^0 to D, which can be simply identified with an object in D. For example, an interpretation may assign the value I(c) = 10 to the constant symbol c.
The interpretation of an n-ary predicate symbol is a set of n-tuples of elements of the domain of discourse. This means that, given an interpretation, a predicate symbol, and n elements of the domain of discourse, one can tell whether the predicate is true of those elements according to the given interpretation. For example, an interpretation I(P) of a binary predicate symbol P may be the set of pairs of integers such that the first one is less than the second. According to this interpretation, the predicate P would be true if its first argument is less than the second.

105.3.1 First-order structures


Main article: Structure (mathematical logic)

The most common way of specifying an interpretation (especially in mathematics) is to specify a structure (also called a model; see below). The structure consists of a nonempty set D that forms the domain of discourse and an interpretation I of the non-logical terms of the signature. This interpretation is itself a function:

Each function symbol f of arity n is assigned a function I(f) from D^n to D. In particular, each constant symbol of the signature is assigned an individual in the domain of discourse.
Each predicate symbol P of arity n is assigned a relation I(P) over D^n or, equivalently, a function from D^n to {true, false}. Thus each predicate symbol is interpreted by a Boolean-valued function on D.
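For illustration only, such a structure can be written down directly in Python as a finite domain plus an interpretation map; all names and the particular interpretation below are assumptions of this sketch:

    # A first-order structure given directly: a finite domain D and an
    # interpretation I of one constant symbol c, one binary function symbol f,
    # and one binary predicate symbol P (here interpreted as "less than").
    D = set(range(5))                                      # domain {0, 1, 2, 3, 4}

    I = {
        "c": 3,                                            # constant symbol -> element of D
        "f": lambda a, b: (a + b) % 5,                     # function symbol -> function D x D -> D
        "P": {(a, b) for a in D for b in D if a < b},      # predicate symbol -> relation over D
    }

    def P_holds(a, b):
        """The Boolean-valued function on D x D induced by the relation I(P)."""
        return (a, b) in I["P"]

    print(P_holds(1, 4))  # True
    print(P_holds(4, 1))  # False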

105.3.2 Evaluation of truth values


A formula evaluates to true or false given an interpretation, and a variable assignment μ that associates an element of the domain of discourse with each variable. The reason that a variable assignment is required is to give meanings to formulas with free variables, such as y = x. The truth value of this formula changes depending on whether x and y denote the same individual.
First, the variable assignment μ can be extended to all terms of the language, with the result that each term maps to a single element of the domain of discourse. The following rules are used to make this assignment:

1. Variables. Each variable x evaluates to μ(x).

2. Functions. Given terms t1, ..., tn that have been evaluated to elements d1, ..., dn of the domain of discourse, and an n-ary function symbol f, the term f(t1, ..., tn) evaluates to (I(f))(d1, ..., dn).

Next, each formula is assigned a truth value. The inductive definition used to make this assignment is called the T-schema.

1. Atomic formulas (1). A formula P(t1, ..., tn) is associated the value true or false depending on whether ⟨v1, ..., vn⟩ ∈ I(P), where v1, ..., vn are the evaluations of the terms t1, ..., tn and I(P) is the interpretation of P, which by assumption is a subset of D^n.
2. Atomic formulas (2). A formula t1 = t2 is assigned true if t1 and t2 evaluate to the same object of the domain of discourse (see the section on equality below).
3. Logical connectives. A formula in the form ¬φ, φ → ψ, etc. is evaluated according to the truth table for the connective in question, as in propositional logic.
4. Existential quantifiers. A formula ∃x φ(x) is true according to M and μ if there exists an evaluation μ′ of the variables that only differs from μ regarding the evaluation of x and such that φ is true according to the interpretation M and the variable assignment μ′. This formal definition captures the idea that ∃x φ(x) is true if and only if there is a way to choose a value for x such that φ(x) is satisfied.
5. Universal quantifiers. A formula ∀x φ(x) is true according to M and μ if φ(x) is true for every pair composed by the interpretation M and some variable assignment μ′ that differs from μ only on the value of x. This captures the idea that ∀x φ(x) is true if every possible choice of a value for x causes φ(x) to be true.

If a formula does not contain free variables, and so is a sentence, then the initial variable assignment does not affect its truth value. In other words, a sentence is true according to M and μ if and only if it is true according to M and every other variable assignment μ′.
There is a second common approach to defining truth values that does not rely on variable assignment functions. Instead, given an interpretation M, one first adds to the signature a collection of constant symbols, one for each element of the domain of discourse in M; say that for each d in the domain the constant symbol c_d is fixed. The interpretation is extended so that each new constant symbol is assigned to its corresponding element of the domain. One now defines truth for quantified formulas syntactically, as follows:

1. Existential quantifiers (alternate). A formula ∃x φ(x) is true according to M if there is some d in the domain of discourse such that φ(c_d) holds. Here φ(c_d) is the result of substituting c_d for every free occurrence of x in φ.

2. Universal quantifiers (alternate). A formula ∀x φ(x) is true according to M if, for every d in the domain of discourse, φ(c_d) is true according to M.

This alternate approach gives exactly the same truth values to all sentences as the approach via variable assignments.
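A small Python sketch of the T-schema over a finite domain, following the assignment-based clauses above; the nested-tuple encoding of formulas and the particular interpretation are assumptions of this sketch:

    # Evaluating formulas over a finite structure, clause by clause as in the
    # T-schema above. Formulas are nested tuples; predicate arguments are variables.
    D = {0, 1, 2}
    I = {"P": {(a, b) for a in D for b in D if a < b}}     # P interpreted as "less than"

    def holds(phi, mu):
        """Truth value of phi under the variable assignment mu (dict: variable -> element)."""
        kind = phi[0]
        if kind == "pred":                                 # atomic formula P(t1, ..., tn)
            return tuple(mu[v] for v in phi[2]) in I[phi[1]]
        if kind == "not":
            return not holds(phi[1], mu)
        if kind == "implies":                              # truth table for the conditional
            return (not holds(phi[1], mu)) or holds(phi[2], mu)
        if kind == "exists":                               # vary only the quantified variable
            return any(holds(phi[2], {**mu, phi[1]: d}) for d in D)
        if kind == "forall":
            return all(holds(phi[2], {**mu, phi[1]: d}) for d in D)
        raise ValueError(f"unknown formula: {phi!r}")

    # forall x exists y P(x, y): false here, since no element is greater than 2.
    sentence = ("forall", "x", ("exists", "y", ("pred", "P", ("x", "y"))))
    print(holds(sentence, {}))  # False

Since the sentence has no free variables, the empty assignment suffices, matching the remark above that the initial assignment does not affect the truth value of a sentence.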

105.3.3 Validity, satisability, and logical consequence


See also: Satisability

If a sentence φ evaluates to True under a given interpretation M, one says that M satisfies φ; this is denoted M ⊨ φ. A sentence is satisfiable if there is some interpretation under which it is true.
Satisfiability of formulas with free variables is more complicated, because an interpretation on its own does not determine the truth value of such a formula. The most common convention is that a formula with free variables is said to be satisfied by an interpretation if the formula remains true regardless which individuals from the domain of discourse are assigned to its free variables. This has the same effect as saying that a formula is satisfied if and only if its universal closure is satisfied.
A formula is logically valid (or simply valid) if it is true in every interpretation. These formulas play a role similar to tautologies in propositional logic.
A formula φ is a logical consequence of a formula ψ if every interpretation that makes ψ true also makes φ true. In this case one says that φ is logically implied by ψ.

105.3.4 Algebraizations
An alternate approach to the semantics of first-order logic proceeds via abstract algebra. This approach generalizes the Lindenbaum–Tarski algebras of propositional logic. There are three ways of eliminating quantified variables from first-order logic that do not involve replacing quantifiers with other variable binding term operators:

Cylindric algebra, by Alfred Tarski and his coworkers;

Polyadic algebra, by Paul Halmos;

Predicate functor logic, mainly due to Willard Quine.

These algebras are all lattices that properly extend the two-element Boolean algebra.
Tarski and Givant (1987) showed that the fragment of first-order logic that has no atomic sentence lying in the scope of more than three quantifiers has the same expressive power as relation algebra. This fragment is of great interest because it suffices for Peano arithmetic and most axiomatic set theory, including the canonical ZFC. They also prove that first-order logic with a primitive ordered pair is equivalent to a relation algebra with two ordered pair projection functions.

105.3.5 First-order theories, models, and elementary classes


A first-order theory of a particular signature is a set of axioms, which are sentences consisting of symbols from that signature. The set of axioms is often finite or recursively enumerable, in which case the theory is called effective. Some authors require theories to also include all logical consequences of the axioms. The axioms are considered to hold within the theory and from them other sentences that hold within the theory can be derived.
A first-order structure that satisfies all sentences in a given theory is said to be a model of the theory. An elementary class is the set of all structures satisfying a particular theory. These classes are a main subject of study in model theory.
Many theories have an intended interpretation, a certain model that is kept in mind when studying the theory. For example, the intended interpretation of Peano arithmetic consists of the usual natural numbers with their usual operations. However, the Löwenheim–Skolem theorem shows that most first-order theories will also have other, nonstandard models.

A theory is consistent if it is not possible to prove a contradiction from the axioms of the theory. A theory is complete if, for every formula in its signature, either that formula or its negation is a logical consequence of the axioms of the theory. Gödel's incompleteness theorem shows that effective first-order theories that include a sufficient portion of the theory of the natural numbers can never be both consistent and complete.
For more information on this subject see List of first-order theories and Theory (mathematical logic).

105.3.6 Empty domains

Main article: Empty domain

The definition above requires that the domain of discourse of any interpretation must be a nonempty set. There are settings, such as inclusive logic, where empty domains are permitted. Moreover, if a class of algebraic structures includes an empty structure (for example, there is an empty poset), that class can only be an elementary class in first-order logic if empty domains are permitted or the empty structure is removed from the class.
There are several difficulties with empty domains, however:

Many common rules of inference are only valid when the domain of discourse is required to be nonempty. One example is the rule stating that φ ∨ ∃x ψ implies ∃x (φ ∨ ψ) when x is not a free variable in φ. This rule, which is used to put formulas into prenex normal form, is sound in nonempty domains, but unsound if the empty domain is permitted.

The definition of truth in an interpretation that uses a variable assignment function cannot work with empty domains, because there are no variable assignment functions whose range is empty. (Similarly, one cannot assign interpretations to constant symbols.) This truth definition requires that one must select a variable assignment function (μ above) before truth values for even atomic formulas can be defined. Then the truth value of a sentence is defined to be its truth value under any variable assignment, and it is proved that this truth value does not depend on which assignment is chosen. This technique does not work if there are no assignment functions at all; it must be changed to accommodate empty domains.

Thus, when the empty domain is permitted, it must often be treated as a special case. Most authors, however, simply exclude the empty domain by definition.

105.4 Deductive systems


A deductive system is used to demonstrate, on a purely syntactic basis, that one formula is a logical consequence of another formula. There are many such systems for first-order logic, including Hilbert-style deductive systems, natural deduction, the sequent calculus, the tableaux method, and resolution. These share the common property that a deduction is a finite syntactic object; the format of this object, and the way it is constructed, vary widely. These finite deductions themselves are often called derivations in proof theory. They are also often called proofs, but are completely formalized unlike natural-language mathematical proofs.
A deductive system is sound if any formula that can be derived in the system is logically valid. Conversely, a deductive system is complete if every logically valid formula is derivable. All of the systems discussed in this article are both sound and complete. They also share the property that it is possible to effectively verify that a purportedly valid deduction is actually a deduction; such deduction systems are called effective.
A key property of deductive systems is that they are purely syntactic, so that derivations can be verified without considering any interpretation. Thus a sound argument is correct in every possible interpretation of the language, regardless whether that interpretation is about mathematics, economics, or some other area.
In general, logical consequence in first-order logic is only semidecidable: if a sentence A logically implies a sentence B then this can be discovered (for example, by searching for a proof until one is found, using some effective, sound, complete proof system). However, if A does not logically imply B, this does not mean that A logically implies the negation of B. There is no effective procedure that, given formulas A and B, always correctly decides whether A logically implies B.

105.4.1 Rules of inference

Further information: List of rules of inference

A rule of inference states that, given a particular formula (or set of formulas) with a certain property as a hypothesis, another specific formula (or set of formulas) can be derived as a conclusion. The rule is sound (or truth-preserving) if it preserves validity in the sense that whenever any interpretation satisfies the hypothesis, that interpretation also satisfies the conclusion.
For example, one common rule of inference is the rule of substitution. If t is a term and φ is a formula possibly containing the variable x, then φ[t/x] is the result of replacing all free instances of x by t in φ. The substitution rule states that for any φ and any term t, one can conclude φ[t/x] from φ provided that no free variable of t becomes bound during the substitution process. (If some free variable of t becomes bound, then to substitute t for x it is first necessary to change the bound variables of φ to differ from the free variables of t.)
To see why the restriction on bound variables is necessary, consider the logically valid formula φ given by ∃x (x = y), in the signature of (0, 1, +, ×, =) of arithmetic. If t is the term "x + 1", the formula φ[t/y] is ∃x (x = x + 1), which will be false in many interpretations. The problem is that the free variable x of t became bound during the substitution. The intended replacement can be obtained by renaming the bound variable x of φ to something else, say z, so that the formula after substitution is ∃z (z = x + 1), which is again logically valid.
The substitution rule demonstrates several common aspects of rules of inference. It is entirely syntactical; one can tell whether it was correctly applied without appeal to any interpretation. It has (syntactically defined) limitations on when it can be applied, which must be respected to preserve the correctness of derivations. Moreover, as is often the case, these limitations are necessary because of interactions between free and bound variables that occur during syntactic manipulations of the formulas involved in the inference rule.
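The following Python sketch implements a capture-avoiding substitution φ[t/x] in the spirit of the rule just described, including the renaming step from the example; the term and formula encodings (and the fresh-variable names) are assumptions of this sketch:

    # Capture-avoiding substitution phi[t/x]. Terms are variable names (strings)
    # or ("f", (arg1, ..., argn)); formulas are nested tuples as in earlier sketches.
    import itertools

    def term_vars(t):
        return {t} if isinstance(t, str) else set().union(*map(term_vars, t[1]))

    def subst_term(t, x, s):
        if isinstance(t, str):
            return s if t == x else t
        return (t[0], tuple(subst_term(a, x, s) for a in t[1]))

    def free_vars(phi):
        kind = phi[0]
        if kind == "pred":
            return set().union(set(), *map(term_vars, phi[2]))
        if kind == "not":
            return free_vars(phi[1])
        if kind == "implies":
            return free_vars(phi[1]) | free_vars(phi[2])
        return free_vars(phi[2]) - {phi[1]}                # "forall" or "exists"

    def subst(phi, x, t):
        """phi[t/x]: replace free occurrences of x by t, renaming bound variables if needed."""
        kind = phi[0]
        if kind == "pred":
            return (kind, phi[1], tuple(subst_term(a, x, t) for a in phi[2]))
        if kind == "not":
            return (kind, subst(phi[1], x, t))
        if kind == "implies":
            return (kind, subst(phi[1], x, t), subst(phi[2], x, t))
        y, body = phi[1], phi[2]
        if y == x:                                         # x is bound here: nothing to replace
            return phi
        if y in term_vars(t) and x in free_vars(body):     # y would capture a free variable of t
            fresh = next(v for v in (f"v{i}" for i in itertools.count())
                         if v not in term_vars(t) | free_vars(body))
            body, y = subst(body, y, fresh), fresh         # rename the bound variable first
        return (kind, y, subst(body, x, t))

    # The example from the text: substitute t = x + 1 for y in  exists x (x = y).
    phi = ("exists", "x", ("pred", "=", ("x", "y")))
    print(subst(phi, "y", ("+", ("x", "1"))))
    # ('exists', 'v0', ('pred', '=', ('v0', ('+', ('x', '1')))))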

105.4.2 Hilbert-style systems and natural deduction

A deduction in a Hilbert-style deductive system is a list of formulas, each of which is a logical axiom, a hypothesis that has been assumed for the derivation at hand, or follows from previous formulas via a rule of inference. The logical axioms consist of several axiom schemas of logically valid formulas; these encompass a significant amount of propositional logic. The rules of inference enable the manipulation of quantifiers. Typical Hilbert-style systems have a small number of rules of inference, along with several infinite schemas of logical axioms. It is common to have only modus ponens and universal generalization as rules of inference.
Natural deduction systems resemble Hilbert-style systems in that a deduction is a finite list of formulas. However, natural deduction systems have no logical axioms; they compensate by adding additional rules of inference that can be used to manipulate the logical connectives in formulas in the proof.

105.4.3 Sequent calculus

Further information: Sequent calculus

The sequent calculus was developed to study the properties of natural deduction systems. Instead of working with one formula at a time, it uses sequents, which are expressions of the form

A1, ..., An ⊢ B1, ..., Bk,

where A1, ..., An, B1, ..., Bk are formulas and the turnstile symbol ⊢ is used as punctuation to separate the two halves. Intuitively, a sequent expresses the idea that (A1 ∧ ... ∧ An) implies (B1 ∨ ... ∨ Bk).

105.4.4 Tableaux method

Further information: Method of analytic tableaux



Unlike the methods just described, the derivations in the tableaux method are not lists of formulas. Instead, a derivation is a tree of formulas. To show that a formula A is provable, the tableaux method attempts to demonstrate that the negation of A is unsatisfiable. The tree of the derivation has ¬A at its root; the tree branches in a way that reflects the structure of the formula. For example, to show that C ∨ D is unsatisfiable requires showing that C and D are each unsatisfiable; this corresponds to a branching point in the tree with parent C ∨ D and children C and D.

105.4.5 Resolution

The resolution rule is a single rule of inference that, together with unification, is sound and complete for first-order logic. As with the tableaux method, a formula is proved by showing that the negation of the formula is unsatisfiable. Resolution is commonly used in automated theorem proving.
The resolution method works only with formulas that are disjunctions of atomic formulas; arbitrary formulas must first be converted to this form through Skolemization. The resolution rule states that from the hypotheses A1 ∨ ... ∨ Ak ∨ C and B1 ∨ ... ∨ Bl ∨ ¬C, the conclusion A1 ∨ ... ∨ Ak ∨ B1 ∨ ... ∨ Bl can be obtained.

105.4.6 Provable identities


Many identities can be proved, which establish equivalences between particular formulas. These identities allow for rearranging formulas by moving quantifiers across other connectives, and are useful for putting formulas in prenex normal form. Some provable identities include:

¬∀x P(x) ⇔ ∃x ¬P(x)
¬∃x P(x) ⇔ ∀x ¬P(x)
∀x ∀y P(x, y) ⇔ ∀y ∀x P(x, y)
∃x ∃y P(x, y) ⇔ ∃y ∃x P(x, y)
∀x P(x) ∧ ∀x Q(x) ⇔ ∀x (P(x) ∧ Q(x))
∃x P(x) ∨ ∃x Q(x) ⇔ ∃x (P(x) ∨ Q(x))
P ∧ ∃x Q(x) ⇔ ∃x (P ∧ Q(x)) (where x must not occur free in P)
P ∨ ∀x Q(x) ⇔ ∀x (P ∨ Q(x)) (where x must not occur free in P)
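Such identities can be sanity-checked (not proved) by brute force over a small finite domain; the following Python sketch does so for the first identity, with the domain size chosen arbitrarily:

    # Brute-force check of  not(forall x P(x))  <=>  exists x not P(x)
    # over every interpretation of a unary predicate P on a 3-element domain.
    from itertools import product

    D = [0, 1, 2]
    for truth_values in product([False, True], repeat=len(D)):
        P = dict(zip(D, truth_values))           # one interpretation of P
        lhs = not all(P[d] for d in D)           # not forall x P(x)
        rhs = any(not P[d] for d in D)           # exists x not P(x)
        assert lhs == rhs
    print("identity holds in all 8 interpretations over a 3-element domain")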

105.5 Equality and its axioms


There are several different conventions for using equality (or identity) in first-order logic. The most common convention, known as first-order logic with equality, includes the equality symbol as a primitive logical symbol which is always interpreted as the real equality relation between members of the domain of discourse, such that the two given members are the same member. This approach also adds certain axioms about equality to the deductive system employed. These equality axioms are:

1. Reflexivity. For each variable x, x = x.

2. Substitution for functions. For all variables x and y, and any function symbol f,

x = y → f(..., x, ...) = f(..., y, ...).

3. Substitution for formulas. For any variables x and y and any formula φ(x), if φ′ is obtained by replacing any number of free occurrences of x in φ with y, such that these remain free occurrences of y, then

x = y → (φ → φ′).

These are axiom schemas, each of which specifies an infinite set of axioms. The third schema is known as Leibniz's law, the principle of substitutivity, the indiscernibility of identicals, or the replacement property. The second schema, involving the function symbol f, is (equivalent to) a special case of the third schema, using the formula

x = y → (f(..., x, ...) = z → f(..., y, ...) = z).

Many other properties of equality are consequences of the axioms above, for example:

1. Symmetry. If x = y then y = x.
2. Transitivity. If x = y and y = z then x = z.

105.5.1 First-order logic without equality


An alternate approach considers the equality relation to be a non-logical symbol. This convention is known as first-order logic without equality. If an equality relation is included in the signature, the axioms of equality must now be added to the theories under consideration, if desired, instead of being considered rules of logic. The main difference between this method and first-order logic with equality is that an interpretation may now interpret two distinct individuals as "equal" (although, by Leibniz's law, these will satisfy exactly the same formulas under any interpretation). That is, the equality relation may now be interpreted by an arbitrary equivalence relation on the domain of discourse that is congruent with respect to the functions and relations of the interpretation.
When this second convention is followed, the term normal model is used to refer to an interpretation where no distinct individuals a and b satisfy a = b. In first-order logic with equality, only normal models are considered, and so there is no term for a model other than a normal model. When first-order logic without equality is studied, it is necessary to amend the statements of results such as the Löwenheim–Skolem theorem so that only normal models are considered.
First-order logic without equality is often employed in the context of second-order arithmetic and other higher-order theories of arithmetic, where the equality relation between sets of natural numbers is usually omitted.

105.5.2 Defining equality within a theory


If a theory has a binary formula A(x, y) which satisfies reflexivity and Leibniz's law, the theory is said to have equality, or to be a theory with equality. The theory may not have all instances of the above schemas as axioms, but rather as derivable theorems. For example, in theories with no function symbols and a finite number of relations, it is possible to define equality in terms of the relations, by defining the two terms s and t to be equal if any relation is unchanged by changing s to t in any argument.
Some theories allow other ad hoc definitions of equality:

In the theory of partial orders with one relation symbol ≤, one could define s = t to be an abbreviation for s ≤ t ∧ t ≤ s.
In set theory with one relation ∈, one may define s = t to be an abbreviation for ∀x (s ∈ x ↔ t ∈ x) ∧ ∀x (x ∈ s ↔ x ∈ t). This definition of equality then automatically satisfies the axioms for equality. In this case, one should replace the usual axiom of extensionality, which can be stated as ∀x∀y[∀z(z ∈ x ↔ z ∈ y) → x = y], with an alternative formulation ∀x∀y[∀z(z ∈ x ↔ z ∈ y) → ∀z(x ∈ z ↔ y ∈ z)], which says that if sets x and y have the same elements, then they also belong to the same sets.

105.6 Metalogical properties


One motivation for the use of first-order logic, rather than higher-order logic, is that first-order logic has many metalogical properties that stronger logics do not have. These results concern general properties of first-order logic itself, rather than properties of individual theories. They provide fundamental tools for the construction of models of first-order theories.

105.6.1 Completeness and undecidability


Gödel's completeness theorem, proved by Kurt Gödel in 1929, establishes that there are sound, complete, effective deductive systems for first-order logic, and thus the first-order logical consequence relation is captured by finite provability. Naively, the statement that a formula φ logically implies a formula ψ depends on every model of φ; these

models will in general be of arbitrarily large cardinality, and so logical consequence cannot be eectively veried by
checking every model. However, it is possible to enumerate all nite derivations and search for a derivation of from
. If is logically implied by , such a derivation will eventually be found. Thus rst-order logical consequence is
semidecidable: it is possible to make an eective enumeration of all pairs of sentences (,) such that is a logical
consequence of .
Unlike propositional logic, rst-order logic is undecidable (although semidecidable), provided that the language has
at least one predicate of arity at least 2 (other than equality). This means that there is no decision procedure that
determines whether arbitrary formulas are logically valid. This result was established independently by Alonzo Church
and Alan Turing in 1936 and 1937, respectively, giving a negative answer to the Entscheidungsproblem posed by
David Hilbert in 1928. Their proofs demonstrate a connection between the unsolvability of the decision problem for
rst-order logic and the unsolvability of the halting problem.
There are systems weaker than full rst-order logic for which the logical consequence relation is decidable. These
include propositional logic and monadic predicate logic, which is rst-order logic restricted to unary predicate symbols
and no function symbols. Other logics with no function symbols which are decidable are the guarded fragment of
rst-order logic, as well as two-variable logic. The BernaysSchnnkel class of rst-order formulas is also decidable.
Decidable subsets of rst-order logic are also studied in the framework of description logics.

105.6.2 The Löwenheim–Skolem theorem


The Löwenheim–Skolem theorem shows that if a first-order theory of cardinality λ has an infinite model, then it has models of every infinite cardinality greater than or equal to λ. One of the earliest results in model theory, it implies that it is not possible to characterize countability or uncountability in a first-order language. That is, there is no first-order formula φ(x) such that an arbitrary structure M satisfies φ if and only if the domain of discourse of M is countable (or, in the second case, uncountable).
The Löwenheim–Skolem theorem implies that infinite structures cannot be categorically axiomatized in first-order logic. For example, there is no first-order theory whose only model is the real line: any first-order theory with an infinite model also has a model of cardinality larger than the continuum. Since the real line is infinite, any theory satisfied by the real line is also satisfied by some nonstandard models. When the Löwenheim–Skolem theorem is applied to first-order set theories, the nonintuitive consequences are known as Skolem's paradox.

105.6.3 The compactness theorem


The compactness theorem states that a set of first-order sentences has a model if and only if every finite subset of it has a model. This implies that if a formula is a logical consequence of an infinite set of first-order axioms, then it is a logical consequence of some finite number of those axioms. This theorem was proved first by Kurt Gödel as a consequence of the completeness theorem, but many additional proofs have been obtained over time. It is a central tool in model theory, providing a fundamental method for constructing models.
The compactness theorem has a limiting effect on which collections of first-order structures are elementary classes. For example, the compactness theorem implies that any theory that has arbitrarily large finite models has an infinite model. Thus the class of all finite graphs is not an elementary class (the same holds for many other algebraic structures).
There are also more subtle limitations of first-order logic that are implied by the compactness theorem. For example, in computer science, many situations can be modeled as a directed graph of states (nodes) and connections (directed edges). Validating such a system may require showing that no "bad" state can be reached from any "good" state. Thus one seeks to determine if the good and bad states are in different connected components of the graph. However, the compactness theorem can be used to show that connected graphs are not an elementary class in first-order logic, and there is no formula φ(x, y) of first-order logic, in the logic of graphs, that expresses the idea that there is a path from x to y. Connectedness can be expressed in second-order logic, however, but not with only existential set quantifiers, as Σ¹₁ also enjoys compactness.

105.6.4 Lindström's theorem


Main article: Lindström's theorem

Per Lindström showed that the metalogical properties just discussed actually characterize first-order logic in the sense that no stronger logic can also have those properties (Ebbinghaus and Flum 1994, Chapter XIII). Lindström defined a class of abstract logical systems, and a rigorous definition of the relative strength of a member of this class. He established two theorems for systems of this type:

A logical system satisfying Lindström's definition that contains first-order logic and satisfies both the Löwenheim–Skolem theorem and the compactness theorem must be equivalent to first-order logic.

A logical system satisfying Lindström's definition that has a semidecidable logical consequence relation and satisfies the Löwenheim–Skolem theorem must be equivalent to first-order logic.

105.7 Limitations
Although first-order logic is sufficient for formalizing much of mathematics, and is commonly used in computer science and other fields, it has certain limitations. These include limitations on its expressiveness and limitations of the fragments of natural languages that it can describe.
For instance, first-order logic is undecidable, meaning a sound, complete and terminating decision algorithm for logical validity is impossible. This has led to the study of interesting decidable fragments such as C2, first-order logic with two variables and the counting quantifiers ∃≥n and ∃≤n (these quantifiers are, respectively, "there exists at least n" and "there exists at most n").[11]

105.7.1 Expressiveness

The Löwenheim–Skolem theorem shows that if a first-order theory has any infinite model, then it has infinite models of every cardinality. In particular, no first-order theory with an infinite model can be categorical. Thus there is no first-order theory whose only model has the set of natural numbers as its domain, or whose only model has the set of real numbers as its domain. Many extensions of first-order logic, including infinitary logics and higher-order logics, are more expressive in the sense that they do permit categorical axiomatizations of the natural numbers or real numbers. This expressiveness comes at a metalogical cost, however: by Lindström's theorem, the compactness theorem and the downward Löwenheim–Skolem theorem cannot hold in any logic stronger than first-order.

105.7.2 Formalizing natural languages

First-order logic is able to formalize many simple quantifier constructions in natural language, such as "every person who lives in Perth lives in Australia". But there are many more complicated features of natural language that cannot be expressed in (single-sorted) first-order logic. "Any logical system which is appropriate as an instrument for the analysis of natural language needs a much richer structure than first-order predicate logic."[12]

105.8 Restrictions, extensions, and variations

There are many variations of first-order logic. Some of these are inessential in the sense that they merely change
notation without affecting the semantics. Others change the expressive power more significantly, by extending the
semantics through additional quantifiers or other new logical symbols. For example, infinitary logics permit formulas
of infinite size, and modal logics add symbols for possibility and necessity.

105.8.1 Restricted languages

First-order logic can be studied in languages with fewer logical symbols than were described above.

Because ∃x φ(x) can be expressed as ¬∀x ¬φ(x), and ∀x φ(x) can be expressed as ¬∃x ¬φ(x), either of the
two quantifiers ∃ and ∀ can be dropped.

Since φ ∨ ψ can be expressed as ¬(¬φ ∧ ¬ψ), and φ ∧ ψ can be expressed as ¬(¬φ ∨ ¬ψ), either ∨ or ∧ can
be dropped. In other words, it is sufficient to have ¬ and ∨, or ¬ and ∧, as the only logical connectives.

Similarly, it is sufficient to have only ¬ and → as logical connectives, or to have only the Sheffer stroke (NAND)
or the Peirce arrow (NOR) operator.

It is possible to entirely avoid function symbols and constant symbols, rewriting them via predicate symbols
in an appropriate way. For example, instead of using a constant symbol 0 one may use a predicate 0(x)
(interpreted as x = 0), and replace every predicate such as P(0, y) with ∃x (0(x) ∧ P(x, y)). A
function such as f(x1, x2, ..., xn) will similarly be replaced by a predicate F(x1, x2, ..., xn, y) interpreted
as y = f(x1, x2, ..., xn). This change requires adding additional axioms to the theory at hand, so that
interpretations of the predicate symbols used have the correct semantics (an example of such an axiom is sketched after this list).
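As a sketch of the kind of axiom that is needed (the specific formula is added here as an illustration, not taken from the article): for the predicate F replacing an n-ary function symbol f, one adds an axiom stating that F is total and functional,

∀x1 … ∀xn ∃y ( F(x1, …, xn, y) ∧ ∀z (F(x1, …, xn, z) → z = y) ),

so that every interpretation of F really is the graph of a function.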

Restrictions such as these are useful as a technique to reduce the number of inference rules or axiom schemas in
deductive systems, which leads to shorter proofs of metalogical results. The cost of the restrictions is that it becomes
more difficult to express natural-language statements in the formal system at hand, because the logical connectives
used in the natural language statements must be replaced by their (longer) definitions in terms of the restricted collection of logical connectives. Similarly, derivations in the limited systems may be longer than derivations in systems
that include additional connectives. There is thus a trade-off between the ease of working within the formal system
and the ease of proving results about the formal system.
It is also possible to restrict the arities of function symbols and predicate symbols, in sufficiently expressive theories.
One can in principle dispense entirely with functions of arity greater than 2 and predicates of arity greater than 1
in theories that include a pairing function. This is a function of arity 2 that takes pairs of elements of the domain
and returns an ordered pair containing them. It is also sufficient to have two predicate symbols of arity 2 that define
projection functions from an ordered pair to its components. In either case it is necessary that the natural axioms for
a pairing function and its projections are satisfied.

105.8.2 Many-sorted logic

Ordinary first-order interpretations have a single domain of discourse over which all quantifiers range. Many-sorted
first-order logic allows variables to have different sorts, which have different domains. This is also called typed first-order logic, and the sorts called types (as in data type), but it is not the same as first-order type theory. Many-sorted
first-order logic is often used in the study of second-order arithmetic.
When there are only finitely many sorts in a theory, many-sorted first-order logic can be reduced to single-sorted
first-order logic.[13] One introduces into the single-sorted theory a unary predicate symbol for each sort in the many-sorted theory, and adds an axiom saying that these unary predicates partition the domain of discourse. For example,
if there are two sorts, one adds predicate symbols P1(x) and P2(x) and the axiom

∀x (P1(x) ∨ P2(x)) ∧ ¬∃x (P1(x) ∧ P2(x)).

Then the elements satisfying P1 are thought of as elements of the first sort, and elements satisfying P2 as elements
of the second sort. One can quantify over each sort by using the corresponding predicate symbol to limit the range
of quantification. For example, to say there is an element of the first sort satisfying formula φ(x), one writes

∃x (P1(x) ∧ φ(x)).

105.8.3 Additional quantifiers

Additional quantifiers can be added to first-order logic.

Sometimes it is useful to say that "P(x) holds for exactly one x", which can be expressed as ∃!x P(x). This
notation, called uniqueness quantification, may be taken to abbreviate a formula such as ∃x (P(x) ∧ ∀y (P(y)
→ (x = y))).

First-order logic with extra quantifiers has new quantifiers Qx, ..., with meanings such as "there are many x
such that ...". Also see branching quantifiers and the plural quantifiers of George Boolos and others.

Bounded quantifiers are often used in the study of set theory or arithmetic.

105.8.4 Infinitary logics

Main article: Infinitary logic

Infinitary logic allows infinitely long sentences. For example, one may allow a conjunction or disjunction of infinitely
many formulas, or quantification over infinitely many variables. Infinitely long sentences arise in areas of mathematics
including topology and model theory.
Infinitary logic generalizes first-order logic to allow formulas of infinite length. The most common way in which
formulas can become infinite is through infinite conjunctions and disjunctions. However, it is also possible to admit
generalized signatures in which function and relation symbols are allowed to have infinite arities, or in which quantifiers
can bind infinitely many variables. Because an infinite formula cannot be represented by a finite string, it is necessary
to choose some other representation of formulas; the usual representation in this context is a tree. Thus formulas are,
essentially, identified with their parse trees, rather than with the strings being parsed.
The most commonly studied infinitary logics are denoted Lαβ, where α and β are each either cardinal numbers
or the symbol ∞. In this notation, ordinary first-order logic is Lωω. In the logic L∞ω, arbitrary conjunctions or
disjunctions are allowed when building formulas, and there is an unlimited supply of variables. More generally, the
logic that permits conjunctions or disjunctions with less than κ constituents is known as Lκω. For example, Lω1ω
permits countable conjunctions and disjunctions.
The set of free variables in a formula of Lκω can have any cardinality strictly less than κ, yet only finitely many of
them can be in the scope of any quantifier when a formula appears as a subformula of another.[14] In other infinitary
logics, a subformula may be in the scope of infinitely many quantifiers. For example, in Lκ∞, a single universal or existential quantifier may bind arbitrarily many variables simultaneously. Similarly, the logic Lκλ permits simultaneous
quantification over fewer than λ variables, as well as conjunctions and disjunctions of size less than κ.
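A standard illustration of the extra expressive power (added here for concreteness; it does not appear in the source text) is the class of torsion groups: in Lω1ω one can say that every element has finite order with a single countably infinite disjunction,

∀x ((x = e) ∨ (x·x = e) ∨ (x·x·x = e) ∨ …),

a property that, by a compactness argument, is not axiomatizable by any first-order theory in the language of groups.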

105.8.5 Non-classical and modal logics

Intuitionistic first-order logic uses intuitionistic rather than classical propositional calculus; for example, ¬¬φ
need not be equivalent to φ.

First-order modal logic allows one to describe other possible worlds as well as this contingently true world
which we inhabit. In some versions, the set of possible worlds varies depending on which possible world one
inhabits. Modal logic has extra modal operators with meanings which can be characterized informally as, for
example "it is necessary that φ" (true in all possible worlds) and "it is possible that φ" (true in some possible
world). With standard first-order logic we have a single domain and each predicate is assigned one extension.
With first-order modal logic we have a domain function that assigns each possible world its own domain, so that
each predicate gets an extension only relative to these possible worlds. This allows us to model cases where,
for example, Alex is a Philosopher, but might have been a Mathematician, and might not have existed at all.
In the first possible world P(a) is true, in the second P(a) is false, and in the third possible world there is no a
in the domain at all.

First-order fuzzy logics are first-order extensions of propositional fuzzy logics rather than classical propositional
calculus.

105.8.6 Fixpoint logic

Fixpoint logic extends first-order logic by adding the closure under the least fixed points of positive operators.[15]

105.8.7 Higher-order logics

Main article: Higher-order logic

The characteristic feature of first-order logic is that individuals can be quantified, but not predicates. Thus

∃a (Phil(a))

is a legal first-order formula, but

∃Phil (Phil(a))

is not, in most formalizations of first-order logic. Second-order logic extends first-order logic by adding the latter
type of quantification. Other higher-order logics allow quantification over even higher types than second-order logic
permits. These higher types include relations between relations, functions from relations to relations between relations,
and other higher-type objects. Thus the "first" in first-order logic describes the type of objects that can be quantified.
Unlike first-order logic, for which only one semantics is studied, there are several possible semantics for second-order
logic. The most commonly employed semantics for second-order and higher-order logic is known as full
semantics. The combination of additional quantifiers and the full semantics for these quantifiers makes higher-order
logic stronger than first-order logic. In particular, the (semantic) logical consequence relation for second-order and
higher-order logic is not semidecidable; there is no effective deduction system for second-order logic that is sound
and complete under full semantics.
Second-order logic with full semantics is more expressive than first-order logic. For example, it is possible to create
axiom systems in second-order logic that uniquely characterize the natural numbers and the real line. The cost of
this expressiveness is that second-order and higher-order logics have fewer attractive metalogical properties than first-order logic. For example, the Löwenheim–Skolem theorem and compactness theorem of first-order logic become
false when generalized to higher-order logics with full semantics.
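For a concrete (added) example of such a categorical axiomatization: second-order Peano arithmetic replaces the first-order induction schema with the single induction axiom

∀P ( (P(0) ∧ ∀n (P(n) → P(n+1))) → ∀n P(n) ),

which quantifies over all subsets P of the domain; under full semantics, every model of these axioms is isomorphic to the standard natural numbers.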

105.9 Automated theorem proving and formal methods

Further information: First-order theorem proving

Automated theorem proving refers to the development of computer programs that search and find derivations (formal
proofs) of mathematical theorems. Finding derivations is a difficult task because the search space can be very large; an
exhaustive search of every possible derivation is theoretically possible but computationally infeasible for many systems
of interest in mathematics. Thus complicated heuristic functions are developed to attempt to find a derivation in less
time than a blind search.
The related area of automated proof verification uses computer programs to check that human-created proofs are
correct. Unlike complicated automated theorem provers, verification systems may be small enough that their correctness can be checked both by hand and through automated software verification. This validation of the proof verifier
is needed to give confidence that any derivation labeled as "correct" is actually correct.
Some proof verifiers, such as Metamath, insist on having a complete derivation as input. Others, such as Mizar and
Isabelle, take a well-formatted proof sketch (which may still be very long and detailed) and fill in the missing pieces
by doing simple proof searches or applying known decision procedures: the resulting derivation is then verified by a
small core kernel. Many such systems are primarily intended for interactive use by human mathematicians: these
are known as proof assistants. They may also use formal logics that are stronger than first-order logic, such as type
theory. Because a full derivation of any nontrivial result in a first-order deductive system will be extremely long for
a human to write,[16] results are often formalized as a series of lemmas, for which derivations can be constructed
separately.
Automated theorem provers are also used to implement formal verification in computer science. In this setting,
theorem provers are used to verify the correctness of programs and of hardware such as processors with respect to a
formal specification. Because such analysis is time-consuming and thus expensive, it is usually reserved for projects
in which a malfunction would have grave human or financial consequences.

105.10 See also


ACL2 A Computational Logic for Applicative Common Lisp.
Equiconsistency
Extension by definitions
Herbrandization
Higher-order logic
List of logic symbols
Löwenheim number
Nonfirstorderizability
Prenex normal form
Relational algebra
Relational model
Second-order logic
Skolem normal form
Tarski's World
Truth table
Type (model theory)
Prolog

105.11 Notes
[1] Hodgson, Dr. J. P. E., "First Order Logic", Saint Joseph's University, Philadelphia, 1995.

[2] Hughes, G. E., & Cresswell, M. J., A New Introduction to Modal Logic (London: Routledge, 1996), p.161.

[3] Mendelson, Elliott (1964). Introduction to Mathematical Logic. Van Nostrand Reinhold. p. 56.

[4] Eric M. Hammer: Semantics for Existential Graphs, Journal of Philosophical Logic, Volume 27, Issue 5 (October 1998),
page 489: "Development of first-order logic independently of Frege, anticipating prenex and Skolem normal forms"

[5] Goertzel, B., Geisweiller, N., Coelho, L., Janičić, P., & Pennachin, C., Real-World Reasoning: Toward Scalable, Uncertain
Spatiotemporal, Contextual and Causal Inference (Amsterdam & Paris: Atlantis Press, 2011), p. 29.

[6] The word language is sometimes used as a synonym for signature, but this can be confusing because language can also
refer to the set of formulas.

[7] More precisely, there is only one language of each variant of one-sorted first-order logic: with or without equality, with or
without functions, with or without propositional variables, ....

[8] Smullyan, R. M., First-order Logic (New York: Dover Publications, 1968), p. 5.

[9] Some authors who use the term well-formed formula use formula to mean any string of symbols from the alphabet.
However, most authors in mathematical logic use formula to mean well-formed formula and have no term for non-well-
formed formulas. In every context, it is only the well-formed formulas that are of interest.

[10] The SMT-LIB Standard: Version 2.0, by Clark Barrett, Aaron Stump, and Cesare Tinelli. http://smtlib.cs.uiowa.edu/
language.shtml

[11] Horrocks 2010

[12] Gamut 1991, p. 75



[13] Herbert Enderton. A Mathematical Introduction to Logic (2nd Edition). Academic Press, 2001, pp.296-299.

[14] Some authors only admit formulas with finitely many free variables in Lκω, and more generally only formulas with < λ free
variables in Lκλ.

[15] Bosse, Uwe (1993). "An Ehrenfeucht–Fraïssé game for fixpoint logic and stratified fixpoint logic". In Börger, Egon.
Computer Science Logic: 6th Workshop, CSL'92, San Miniato, Italy, September 28 – October 2, 1992. Selected Papers.
Lecture Notes in Computer Science. 702. Springer-Verlag. pp. 100–114. ISBN 3-540-56992-8. Zbl 0808.03024.

[16] Avigad et al. (2007) discuss the process of formally verifying a proof of the prime number theorem. The formalized proof
required approximately 30,000 lines of input to the Isabelle proof verifier.

105.12 References
Andrews, Peter B. (2002); An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof,
2nd ed., Berlin: Kluwer Academic Publishers. Available from Springer.

Avigad, Jeremy; Donnelly, Kevin; Gray, David; and Ra, Paul (2007); A formally veried proof of the prime
number theorem, ACM Transactions on Computational Logic, vol. 9 no. 1 doi:10.1145/1297658.1297660

Barwise, Jon (1977); An Introduction to First-Order Logic, in Barwise, Jon, ed. (1982). Handbook of
Mathematical Logic. Studies in Logic and the Foundations of Mathematics. Amsterdam, NL: North-Holland.
ISBN 978-0-444-86388-1.

Barwise, Jon; and Etchemendy, John (2000); Language Proof and Logic, Stanford, CA: CSLI Publications
(Distributed by the University of Chicago Press)

Bocheński, Józef Maria (2007); A Précis of Mathematical Logic, Dordrecht, NL: D. Reidel, translated from
the French and German editions by Otto Bird

Ferreirós, José (2001); "The Road to Modern Logic – An Interpretation", Bulletin of Symbolic Logic, Volume
7, Issue 4, 2001, pp. 441–484, doi:10.2307/2687794, JSTOR 2687794

Gamut, L. T. F. (1991); Logic, Language, and Meaning, Volume 2: Intensional Logic and Logical Grammar,
Chicago, Illinois: University of Chicago Press, ISBN 0-226-28088-8

Hilbert, David; and Ackermann, Wilhelm (1950); Principles of Mathematical Logic, Chelsea (English translation of Grundzüge der theoretischen Logik, 1928 German first edition)

Hodges, Wilfrid (2001); Classical Logic I: First Order Logic, in Goble, Lou (ed.); The Blackwell Guide to
Philosophical Logic, Blackwell

Ebbinghaus, Heinz-Dieter; Flum, Jörg; and Thomas, Wolfgang (1994); Mathematical Logic, Undergraduate
Texts in Mathematics, Berlin, DE/New York, NY: Springer-Verlag, Second Edition, ISBN 978-0-387-94258-2

Rautenberg, Wolfgang (2010), A Concise Introduction to Mathematical Logic (3rd ed.), New York, NY: Springer
Science+Business Media, ISBN 978-1-4419-1220-6, doi:10.1007/978-1-4419-1221-3

Tarski, Alfred and Givant, Steven (1987); A Formalization of Set Theory without Variables. Vol.41 of American
Mathematical Society colloquium publications, Providence RI: American Mathematical Society, ISBN 978-
0821810415.

105.13 External links


Hazewinkel, Michiel, ed. (2001) [1994], Predicate calculus, Encyclopedia of Mathematics, Springer Sci-
ence+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

Stanford Encyclopedia of Philosophy: Shapiro, Stewart; "Classical Logic". Covers syntax, model theory, and
metatheory for first-order logic in the natural deduction style.

Magnus, P. D.; forall x: an introduction to formal logic. Covers formal semantics and proof theory for first-order logic.
Metamath: an ongoing online project to reconstruct mathematics as a huge first-order theory, using first-order
logic and the axiomatic set theory ZFC. Principia Mathematica modernized.
Podnieks, Karl; Introduction to mathematical logic

Cambridge Mathematics Tripos Notes (typeset by John Fremlin). These notes cover part of a past Cambridge
Mathematics Tripos course taught to undergraduate students (usually) within their third year. The course is
entitled "Logic, Computation and Set Theory" and covers Ordinals and cardinals, Posets and Zorn's Lemma,
Propositional logic, Predicate logic, Set theory and Consistency issues related to ZFC and other set theories.

Tree Proof Generator can validate or invalidate formulas of first-order logic through the semantic tableaux
method.
Chapter 106

First-order predicate

In mathematical logic, a first-order predicate is a predicate that takes only individuals (constants or variables) as
arguments.[1] Compare second-order predicate and higher-order predicate.
This is not to be confused with a one-place predicate or monad, which is a predicate that takes only one argument.
For example, the expression "is a planet" is a one-place predicate, while the expression "is father of" is a two-place
predicate.

106.1 See also


First-order predicate calculus

Monadic predicate calculus

106.2 References
[1] Flew, Antony (1984), A Dictionary of Philosophy: Revised Second Edition, Macmillan, p. 147, ISBN 9780312209230.

Chapter 107

Formation rule

In mathematical logic, formation rules are rules for describing which strings of symbols formed from the alphabet of
a formal language are syntactically valid within the language. These rules only address the location and manipulation
of the strings of the language. They do not describe anything else about a language, such as its semantics (i.e. what
the strings mean). (See also formal grammar.)

107.1 Formal language

Main article: Formal language

A formal language is an organized set of symbols, the essential feature being that it can be precisely defined in terms
of just the shapes and locations of those symbols. Such a language can be defined, then, without any reference to any
meanings of any of its expressions; it can exist before any interpretation is assigned to it, that is, before it has any
meaning. A formal grammar determines which symbols and sets of symbols are formulas in a formal language.

107.2 Formal systems


Main article: Formal system

A formal system (also called a logical calculus, or a logical system) consists of a formal language together with a
deductive apparatus (also called a deductive system). The deductive apparatus may consist of a set of transformation
rules (also called inference rules) or a set of axioms, or have both. A formal system is used to derive one expression
from one or more other expressions. Propositional and predicate calculi are examples of formal systems.

107.3 Propositional and predicate logic


The formation rules of a propositional calculus may, for instance, take a form such that:

if we take φ to be a propositional formula we can also take ¬φ to be a formula;

if we take φ and ψ to be propositional formulas we can also take (φ ∧ ψ), (φ ∨ ψ), (φ → ψ) and (φ ↔ ψ) to also be
formulas.

A predicate calculus will usually include all the same rules as a propositional calculus, with the addition of quantifiers
such that if we take φ to be a formula of propositional logic and ξ as a variable then we can take (∀ξ)φ and (∃ξ)φ each
to be formulas of our predicate calculus.
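Such formation rules can be applied mechanically. The following short sketch (added here; the representation and function name are invented for the example) checks whether a nested-tuple encoding of a propositional formula is well formed under exactly the rules above:

# A formula is represented as:
#   a string            -> a propositional variable such as "p" or "q"
#   ("not", f)           -> negation
#   (op, f, g)           -> binary connective, op in {"and", "or", "implies", "iff"}
BINARY = {"and", "or", "implies", "iff"}

def is_formula(f):
    """Return True if f is well formed under the formation rules above."""
    if isinstance(f, str):                       # base case: a propositional variable
        return True
    if isinstance(f, tuple) and len(f) == 2 and f[0] == "not":
        return is_formula(f[1])                  # rule: if phi is a formula, so is (not phi)
    if isinstance(f, tuple) and len(f) == 3 and f[0] in BINARY:
        return is_formula(f[1]) and is_formula(f[2])   # rule for the binary connectives
    return False                                 # anything else is not well formed

# (p and q) -> not r  is well formed; ("and", "p") is not.
print(is_formula(("implies", ("and", "p", "q"), ("not", "r"))))  # True
print(is_formula(("and", "p")))                                  # False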


107.4 See also


Finite state automaton
Chapter 108

Formula game

A formula game is an artificial game represented by a fully quantified Boolean formula. Players' turns alternate and
the space of possible moves is denoted by bound variables. If a variable is universally quantified, the formula following
it has the same truth value as the formula beginning with the universal quantifier regardless of the move taken. If a
variable is existentially quantified, the formula following it has the same truth value as the formula beginning with the
existential quantifier for at least one move available at the turn. Turns alternate, and a player loses if he cannot move
at his turn. In computational complexity theory, the language FORMULA-GAME is defined as all formulas φ such
that Player 1 has a winning strategy in the game represented by φ. FORMULA-GAME is PSPACE-complete.
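To make the game concrete, here is a small added sketch that decides who wins such a game by evaluating the fully quantified Boolean formula recursively. The representation and function names are invented for this illustration, and the exponential recursion only mirrors the definition; it is not the polynomial-space algorithm behind the PSPACE-completeness result.

from itertools import product

# A fully quantified Boolean formula in prenex form:
#   prefix - list of (quantifier, variable) pairs, e.g. [("exists", "x"), ("forall", "y")]
#   matrix - a function from an assignment (dict) to True/False
def player1_wins(prefix, matrix, assignment=None):
    """Player 1 chooses existentially quantified variables, Player 2 the universal ones.
    Returns True exactly when Player 1 has a winning strategy."""
    assignment = dict(assignment or {})
    if not prefix:                       # no moves left: Player 1 wins iff the matrix is true
        return matrix(assignment)
    (quant, var), rest = prefix[0], prefix[1:]
    outcomes = (player1_wins(rest, matrix, {**assignment, var: v}) for v in (False, True))
    return any(outcomes) if quant == "exists" else all(outcomes)

# Example: exists x forall y ((x or y) and (not x or not y)) asks whether Player 1 can
# fix x so that x XOR y holds for every y; she cannot, so the answer is False.
matrix1 = lambda a: (a["x"] or a["y"]) and (not a["x"] or not a["y"])
print(player1_wins([("exists", "x"), ("forall", "y")], matrix1))  # False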

108.1 References
Sipser, Michael. (2006). Introduction to the Theory of Computation. Boston: Thomson Course Technology.

Chapter 109

Free Boolean algebra

In mathematics, a free Boolean algebra is a Boolean algebra with a distinguished set of elements, called generators,
such that:

1. Each element of the Boolean algebra can be expressed as a finite combination of generators, using the Boolean
operations, and
2. The generators are as independent as possible, in the sense that there are no relationships among them (again in
terms of finite expressions using the Boolean operations) that do not hold in every Boolean algebra no matter
which elements are chosen.

109.1 A simple example


The generators of a free Boolean algebra can represent independent propositions. Consider, for example, the propo-
sitions John is tall and Mary is rich. These generate a Boolean algebra with four atoms, namely:

John is tall, and Mary is rich;


John is tall, and Mary is not rich;
John is not tall, and Mary is rich;
John is not tall, and Mary is not rich.

Other elements of the Boolean algebra are then logical disjunctions of the atoms, such as John is tall and Mary is
not rich, or John is not tall and Mary is rich. In addition there is one more element, FALSE, which can be thought
of as the empty disjunction; that is, the disjunction of no atoms.
This example yields a Boolean algebra with 16 elements; in general, for finite n, the free Boolean algebra with n
generators has 2^n atoms, and therefore 2^(2^n) elements.
If there are infinitely many generators, a similar situation prevails except that now there are no atoms. Each element of
the Boolean algebra is a combination of finitely many of the generating propositions, with two such elements deemed
identical if they are logically equivalent.
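The 16-element count can be checked directly: with n generators, an element of the free Boolean algebra is determined by its truth table on the 2^n atoms. The following added sketch (not from the article) enumerates the free Boolean algebra on two generators as the set of Boolean functions of two arguments:

from itertools import product

# Each element of the free Boolean algebra on generators p, q can be identified with a
# Boolean function f(p, q); the four atoms correspond to the four rows of its truth table.
rows = list(product([False, True], repeat=2))              # the 2^2 = 4 atoms
elements = set(product([False, True], repeat=len(rows)))   # one truth value per atom

print(len(rows), len(elements))   # 4 atoms, 2**4 = 16 elements

# The generator p is the element whose truth table returns the first coordinate:
p = tuple(r[0] for r in rows)
q = tuple(r[1] for r in rows)
# Boolean operations are computed pointwise on truth tables, e.g. join (OR):
p_or_q = tuple(a or b for a, b in zip(p, q))
print(p_or_q in elements)         # True: the set is closed under the operations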

109.2 Category-theoretic denition


In the language of category theory, free Boolean algebras can be dened simply in terms of an adjunction between
the category of sets and functions, Set, and the category of Boolean algebras and Boolean algebra homomorphisms,
BA. In fact, this approach generalizes to any algebraic structure denable in the framework of universal algebra.
Above, we said that a free Boolean algebra is a Boolean algebra with a set of generators that behave a certain way;
alternatively, one might start with a set and ask which algebra it generates. Every set X generates a free Boolean
algebra FX defined as the algebra such that for every algebra B and function f : X → B, there is a unique Boolean
algebra homomorphism f̄ : FX → B that extends f. Diagrammatically,

where iX is the inclusion, and the dashed arrow denotes uniqueness. The idea is that once one chooses where to send
the elements of X, the laws for Boolean algebra homomorphisms determine where to send everything else in the free
algebra FX. If FX contained elements inexpressible as combinations of elements of X, then f̄ wouldn't be unique,
and if the elements of X weren't sufficiently independent, then f̄ wouldn't be well defined! It is easily shown that FX
is unique (up to isomorphism), so this definition makes sense. It is also easily shown that a free Boolean algebra with
generating set X, as defined originally, is isomorphic to FX, so the two definitions agree.

The Hasse diagram of the free Boolean algebra on two generators, p and q. Take p (left circle) to be "John is tall" and q (right
circle) to be "Mary is rich". The atoms are the four elements in the row just above FALSE.

One shortcoming of the above definition is that the diagram doesn't capture that f̄ is a homomorphism; since it is a
diagram in Set each arrow denotes a mere function. We can fix this by separating it into two diagrams, one in BA and
one in Set. To relate the two, we introduce a functor U : BA → Set that "forgets" the algebraic structure, mapping
algebras and homomorphisms to their underlying sets and functions.
If we interpret the top arrow as a diagram in BA and the bottom triangle as a diagram in Set, then this diagram
properly expresses that every function f : X → B extends to a unique Boolean algebra homomorphism f̄ : FX → B.
The functor U can be thought of as a device to pull the homomorphism f̄ back into Set so it can be related to f.
The remarkable aspect of this is that the latter diagram is one of the various (equivalent) definitions of when two
functors are adjoint. Our F easily extends to a functor Set → BA, and our definition of X generating a free Boolean
algebra FX is precisely that U has a left adjoint F.

109.3 Topological realization

The free Boolean algebra with κ generators, where κ is a finite or infinite cardinal number, may be realized as the
collection of all clopen subsets of {0,1}^κ, given the product topology assuming that {0,1} has the discrete topology.
For each α < κ, the αth generator is the set of all elements of {0,1}^κ whose αth coordinate is 1. In particular, the free
Boolean algebra with ℵ0 generators is the collection of all clopen subsets of a Cantor space, sometimes called the
Cantor algebra. Surprisingly, this collection is countable. In fact, while the free Boolean algebra with n generators,
n finite, has cardinality 2^(2^n), the free Boolean algebra with ℵ0 generators, as for any free algebra with ℵ0 generators
and countably many finitary operations, has cardinality ℵ0.
For more on this topological approach to free Boolean algebra, see Stone's representation theorem for Boolean algebras.

109.4 See also

Boolean algebra (structure)

Generating set

109.5 References
Steve Awodey (2006) Category Theory (Oxford Logic Guides 49). Oxford University Press.

Paul Halmos and Steven Givant (1998) Logic as Algebra. Mathematical Association of America.

Saunders Mac Lane (1998) Categories for the Working Mathematician. 2nd ed. (Graduate Texts in Mathematics
5). Springer-Verlag.

Saunders Mac Lane (1999) Algebra, 3d. ed. American Mathematical Society. ISBN 0-8218-1646-2.

Robert R. Stoll, 1963. Set Theory and Logic, chpt. 6.7. Dover reprint 1979.
Chapter 110

Free variables and bound variables

For bound variables in computer programming, see Name binding.

In mathematics, and in other disciplines involving formal languages, including mathematical logic and computer
science, a free variable is a notation that specifies places in an expression where substitution may take place. Some
older books use the terms real variable and apparent variable for free variable and bound variable. The idea is
related to a placeholder (a symbol that will later be replaced by some literal string), or a wildcard character that
stands for an unspecified symbol.
In computer programming, the term free variable refers to variables used in a function that are neither local variables
nor parameters of that function.[1] The term non-local variable is often a synonym in this context.
A bound variable is a variable that was previously free, but has been bound to a specific value or set of values. For
example, the variable x becomes a bound variable when we write:

'For all x, (x + 1)² = x² + 2x + 1.'

or

'There exists x such that x² = 2.'

In either of these propositions, it does not matter logically whether we use x or some other letter. However, it could
be confusing to use the same letter again elsewhere in some compound proposition. That is, free variables become
bound, and then in a sense retire from being available as stand-in values for other values in the creation of formulae.
The term dummy variable is also sometimes used for a bound variable (more often in general mathematics than in
computer science), but that use can create an ambiguity with the definition of dummy variables in regression analysis.

110.1 Examples
Before stating a precise definition of free variable and bound variable, the following are some examples that perhaps
make these two concepts clearer than the definition would:
In the expression

∑_(k=1)^(10) f(k, n),

n is a free variable and k is a bound variable; consequently the value of this expression depends on the value of n, but
there is nothing called k on which it could depend.
In the expression



∫₀^∞ x^(y−1) e^(−x) dx,

y is a free variable and x is a bound variable; consequently the value of this expression depends on the value of y, but
there is nothing called x on which it could depend.
In the expression

lim_(h→0) [f(x + h) − f(x)] / h,
x is a free variable and h is a bound variable; consequently the value of this expression depends on the value of x, but
there is nothing called h on which it could depend.
In the expression

∀x ∃y [φ(x, y, z)],

z is a free variable and x and y are bound variables; consequently the logical value of this expression depends on the
value of z, but there is nothing called x or y on which it could depend.

110.1.1 Variable-binding operators


The following

∑_(x∈S),  ∏_(x∈S),  ∫₀^∞ ⋯ dx,  lim_(x→0),  ∀x,  ∃x

are variable-binding operators. Each of them binds the variable x for some set S.
Note that many of these are operators which act on functions of the bound variable. In more complicated contexts,
such notations can become awkward and confusing. It can be useful to switch to notations which make the binding
explicit, such as


∑_(1,…,10) (k ↦ f(k, n))

for sums or

D(x ↦ x² + 2x + 1)

for differentiation.

110.2 Formal explanation


Variable-binding mechanisms occur in different contexts in mathematics, logic and computer science. In all cases,
however, they are purely syntactic properties of expressions and variables in them. For this section we can summarize syntax by identifying an expression with a tree whose leaf nodes are variables, constants, function constants
or predicate constants and whose non-leaf nodes are logical operators. This expression can then be determined by
doing an inorder traversal of the tree. Variable-binding operators are logical operators that occur in almost every
formal language. Languages that do not have them may be either extremely inexpressive or extremely difficult to
use. A binding operator Q takes two arguments: a variable v and an expression P, and when applied to its arguments

Tree summarizing the syntax of the expression ∀x ((∃y A(x)) ∨ B(z))

produces a new expression Q(v, P). The meaning of binding operators is supplied by the semantics of the language
and does not concern us here.
Variable binding relates three things: a variable v, a location a for that variable in an expression and a non-leaf node
n of the form Q(v, P). Note: we define a location in an expression as a leaf node in the syntax tree. Variable binding
occurs when that location is below the node n.
In the lambda calculus, x is a bound variable in the term M = λx. T and a free variable of T. We say x is bound in
M and free in T. If T contains a subterm λx. U then x is rebound in this term. This nested, inner binding of x is said
to shadow the outer binding. Occurrences of x in U are free occurrences of the new x.
Variables bound at the top level of a program are technically free variables within the terms to which they are bound
but are often treated specially because they can be compiled as fixed addresses. Similarly, an identifier bound to a
recursive function is also technically a free variable within its own body but is treated specially.
A closed term is one containing no free variables.
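The distinction can be computed mechanically. The following added sketch, with invented names, collects the free variables of a lambda-calculus term represented as nested tuples, following exactly the rule that λ binds its variable in the body:

# Term representation:
#   "x"                  -> a variable
#   ("lam", "x", body)   -> lambda abstraction, binds x in body
#   ("app", f, a)        -> application
def free_vars(term):
    """Return the set of variables occurring free in term."""
    if isinstance(term, str):
        return {term}                           # a variable occurs free in itself
    tag = term[0]
    if tag == "lam":
        _, var, body = term
        return free_vars(body) - {var}          # the binder removes var from the free set
    if tag == "app":
        _, f, a = term
        return free_vars(f) | free_vars(a)
    raise ValueError("not a term: %r" % (term,))

# In M = lambda x. T with T = ("app", "x", "y"):  x is bound in M and free in T.
T = ("app", "x", "y")
M = ("lam", "x", T)
print(free_vars(T))   # {'x', 'y'}
print(free_vars(M))   # {'y'}  -- a term with no free variables would be "closed"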

110.2.1 Function expressions


To give an example from mathematics, consider an expression which defines a function

f = [(x1, …, xn) ↦ t]

where t is an expression. t may contain some, all or none of the x1, …, xn and it may contain other variables. In this
case we say that the function definition binds the variables x1, …, xn.
In this manner, function definition expressions of the kind shown above can be thought of as the variable binding
operator, analogous to the lambda expressions of lambda calculus. Other binding operators, like the summation sign,
can be thought of as higher-order functions applying to a function. So, for example, the expression

∑_(x∈A) x²

could be treated as a notation for

∑_A (x ↦ x²)

where ∑_S f is an operator with two parameters, a one-parameter function and a set to evaluate that function over.
The other operators listed above can be expressed in similar ways; for example, the universal quantifier ∀x ∈ S P(x)
can be thought of as an operator that evaluates to the logical conjunction of the boolean-valued function P applied
over the (possibly infinite) set S.

110.3 Natural language


When analyzed in formal semantics, natural languages can be seen to have free and bound variables. In English,
personal pronouns like he, she, they, etc. can act as free variables.

Lisa found her book.

In the sentence above, the possessive pronoun her is a free variable. It may refer to the previously mentioned Lisa or to
any other female. In other words, her book could be referring to Lisas book (an instance of coreference) or to a book
that belongs to a dierent female (e.g. Janes book). Whoever the referent of her is can be established according to
the situational (i.e. pragmatic) context. The identity of the referent can be shown using coindexing subscripts where
i indicates one referent and j indicates a second referent (dierent from i). Thus, the sentence Lisa found her book
has the following interpretations:

Lisai found heri book. (interpretation #1: her = of Lisa)


Lisai found herj book. (interpretation #2: her = of a female that is not Lisa)

The distinction is not purely of academic interest, as some languages do actually have dierent forms for heri and
herj: for example, Norwegian translates coreferent heri as sin and noncoreferent herj as hennes.
English does allow specifying coreference, but it is optional, as both interpretations of the previous example are valid.

Lisai found heri own book. (interpretation #1: her = of Lisa)


*Lisai found herj own book. (interpretation #2: her = of a female that is not Lisa)

However, reexive pronouns, such as himself, herself, themselves, etc., and reciprocal pronouns, such as each other,
act as bound variables. In a sentence like the following:

Jane hurt herself.

the reexive herself can only refer to the previously mentioned antecedent Jane. It can never refer to a dierent
female person. In other words, the person being hurt and the person doing the hurting are both the same person, i.e.
Jane. The semantics of this sentence is abstractly: JANE hurt JANE. And it cannot be the case that this sentence
could mean JANE hurt LISA. The reexive herself must refer and can only refer to the previously mentioned Jane. In
this sense, the variable herself is bound to the noun Jane that occurs in subject position. Indicating the coindexation,
the rst interpretation with Jane and herself coindexed is permissible, but the other interpretation where they are not
coindexed is ungrammatical (the ungrammatical interpretation is indicated with an asterisk):

Janei hurt herselfi. (interpretation #1: herself = Jane)


*Janei hurt herselfj. (interpretation #2: herself = a female that is not Jane)

Note that the coreference binding can be represented using a lambda expression as mentioned in the previous Formal
explanation section. The sentence with the reflexive could be represented as

(λx. x hurt x)Jane

in which Jane is the subject referent argument and λx. x hurt x is the predicate function (a lambda abstraction) with
the lambda notation and x indicating both the semantic subject and the semantic object of the sentence as being bound.
This returns the semantic interpretation JANE hurt JANE with JANE being the same person.
Pronouns can also behave in a dierent way. In the sentence below

Ashley hit her.

the pronoun her can only refer to a female that is not Ashley. This means that it can never have a reexive meaning
equivalent to Ashley hit herself. The grammatical and ungrammatical interpretations are:

*Ashleyi hit heri. (interpretation #1: her = Ashley)


Ashleyi hit herj. (interpretation #2: her = a female that is not Ashley)

The first interpretation is impossible. Only the second interpretation is permitted by the grammar.
Thus, it can be seen that reflexives and reciprocals are bound variables (known technically as anaphors) while true
pronouns are free variables in some grammatical structures but variables that cannot be bound in other grammatical
structures. The binding phenomena found in natural languages were particularly important to the syntactic government
and binding theory (see also: Binding (linguistics)).

110.4 See also


Closure (computer science)

Combinatory logic
Lambda lifting

Name binding
Scope (programming)

110.5 References
[1] Free variables in Lisp
Chapter 111

Frege system

In proof complexity, a Frege system is a propositional proof system whose proofs are sequences of formulas derived
using a finite set of sound and implicationally complete inference rules. Frege systems (more often known as Hilbert
systems in general proof theory) are named after Gottlob Frege.

111.1 Formal definition


Let K be a finite functionally complete set of Boolean connectives, and consider propositional formulas built from
variables p0, p1, p2, ... using K-connectives. A Frege rule is an inference rule of the form

r = (B1, …, Bn) / B,

where B1, ..., Bn, B are formulas. If R is a finite set of Frege rules, then F = (K, R) defines a derivation system in the
following way. If X is a set of formulas, and A is a formula, then an F-derivation of A from axioms X is a sequence
of formulas A1, ..., Am such that Am = A, and every Ak is a member of X, or it is derived from some of the formulas
Ai, i < k, by a substitution instance of a rule from R. An F-proof of a formula A is an F-derivation of A from the
empty set of axioms (X = ∅). F is called a Frege system if

F is sound: every F-provable formula is a tautology.

F is implicationally complete: for every formula A and a set of formulas X, if X entails A, then there is an
F-derivation of A from X.

The length (number of lines) in a proof A1, ..., Am is m. The size of the proof is the total number of symbols.
A derivation system F as above is refutationally complete, if for every inconsistent set of formulas X, there is an
F-derivation of a fixed contradiction from X.
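Soundness of a proposed Frege rule can be checked by brute force over truth assignments. The following added sketch, with invented helper names, verifies that modus ponens, written as the rule (p, p → q) / q, is sound, i.e. that every assignment satisfying all premises satisfies the conclusion:

from itertools import product

def implies(a, b):
    return (not a) or b

def rule_is_sound(premises, conclusion, variables):
    """A rule is sound if every truth assignment making all premises true makes the conclusion true."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(prem(env) for prem in premises) and not conclusion(env):
            return False
    return True

# Modus ponens:  p, p -> q  |-  q
premises = [lambda e: e["p"], lambda e: implies(e["p"], e["q"])]
conclusion = lambda e: e["q"]
print(rule_is_sound(premises, conclusion, ["p", "q"]))   # True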

111.2 Examples
Frege's propositional calculus is a Frege system.

There are many examples of sound Frege rules on the Propositional calculus page.

Resolution is not a Frege system because it only operates on clauses, not on formulas built in an arbitrary way
by a functionally complete set of connectives. Moreover, it is not implicationally complete, i.e. we cannot
conclude A ∨ B from A. However, adding the weakening rule A / (A ∨ B) makes it implicationally complete.
Resolution is also refutationally complete.


111.3 Properties
Reckhow's theorem (1979) states that all Frege systems are p-equivalent.

Natural deduction and sequent calculus (Gentzen system with cut) are also p-equivalent to Frege systems.
There are polynomial-size Frege proofs of the pigeonhole principle (Buss 1987).

Frege systems are considered to be fairly strong systems. Unlike, say, resolution, there are no known superlinear
lower bounds on the number of lines in Frege proofs, and the best known lower bounds on the size of the proofs
are quadratic.

The minimal number of rounds in the prover-adversary game needed to prove a tautology φ is proportional to
the logarithm of the minimal number of steps in a Frege proof of φ.

Proof strengths of different systems.

111.4 References
Krajíček, Jan (1995). Bounded Arithmetic, Propositional Logic, and Complexity Theory, Cambridge University Press.

Cook, Stephen; Reckhow, Robert A. (1979). "The Relative Efficiency of Propositional Proof Systems". Journal of Symbolic Logic. 44 (1). pp. 36–50. JSTOR 2273702.

Buss, S. R. (1987). "Polynomial size proofs of the propositional pigeonhole principle", Journal of Symbolic
Logic 52, pp. 916–927.

Pudlák, P., Buss, S. R. (1995). "How to lie without being (easily) convicted and the lengths of proofs in
propositional calculus", in: Computer Science Logic '94 (Pacholski and Tiuryn eds.), Springer LNCS 933,
1995, pp. 151–162.
Chapter 112

Frege's propositional calculus

In mathematical logic, Frege's propositional calculus was the first axiomatization of propositional calculus. It was
invented by Gottlob Frege, who also invented predicate calculus, in 1879 as part of his second-order predicate calculus
(although Charles Peirce was the first to use the term "second-order" and developed his own version of the predicate
calculus independently of Frege).
It makes use of just two logical operators: implication and negation, and it is constituted by six axioms and one
inference rule: modus ponens.
Axioms
THEN-1: A → (B → A)
THEN-2: (A → (B → C)) → ((A → B) → (A → C))
THEN-3: (A → (B → C)) → (B → (A → C))
FRG-1: (A → B) → (¬B → ¬A)
FRG-2: ¬¬A → A
FRG-3: A → ¬¬A
Inference Rule
MP: P, P → Q ⊢ Q
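Each axiom scheme is a classical tautology, which can be confirmed mechanically. The following added sketch (not part of the article) checks all six schemes on every assignment to A, B, C:

from itertools import product

def implies(a, b):
    return (not a) or b

# Frege's six axiom schemes, written as Boolean functions of the metavariables A, B, C.
axioms = {
    "THEN-1": lambda A, B, C: implies(A, implies(B, A)),
    "THEN-2": lambda A, B, C: implies(implies(A, implies(B, C)),
                                      implies(implies(A, B), implies(A, C))),
    "THEN-3": lambda A, B, C: implies(implies(A, implies(B, C)),
                                      implies(B, implies(A, C))),
    "FRG-1":  lambda A, B, C: implies(implies(A, B), implies(not B, not A)),
    "FRG-2":  lambda A, B, C: implies(not (not A), A),
    "FRG-3":  lambda A, B, C: implies(A, not (not A)),
}

for name, axiom in axioms.items():
    assert all(axiom(A, B, C) for A, B, C in product([False, True], repeat=3))
    print(name, "is a tautology")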
Frege's propositional calculus is equivalent to any other classical propositional calculus, such as the standard PC
with 11 axioms. Frege's PC and standard PC share two common axioms: THEN-1 and THEN-2. Notice that axioms
THEN-1 through THEN-3 only make use of (and define) the implication operator, whereas axioms FRG-1 through
FRG-3 define the negation operator.
The following theorems will aim to find the remaining nine axioms of standard PC within the theorem-space of
Frege's PC, showing that the theory of standard PC is contained within the theory of Frege's PC.
(A theory, also called here, for figurative purposes, a theorem-space, is a set of theorems which are a subset of a
universal set of well-formed formulas. The theorems are linked to each other in a directed manner by inference rules,
forming a sort of dendritic network. At the roots of the theorem-space are found the axioms, which generate the
theorem-space much like a generating set generates a group.)
Rule THEN-1*: A ⊢ B → A
Rule THEN-2*: A → (B → C) ⊢ (A → B) → (A → C)
Rule THEN-3*: A → (B → C) ⊢ B → (A → C)
Rule FRG-1*: A → B ⊢ ¬B → ¬A
Rule TH1*: A → B, B → C ⊢ A → C
Theorem TH1: (A → B) → ((B → C) → (A → C))
Theorem TH2: A → (¬A → B)
Theorem TH3: ¬A → (A → B)
Theorem TH4: ¬(A → ¬B) → A
Theorem TH5: (A → ¬B) → (B → ¬A)
Theorem TH6: ¬(A → ¬B) → B
Theorem TH7: A → A
Theorem TH8: A → ((A → B) → B)
Theorem TH9: B → ((A → B) → B)
Theorem TH10: A → (B → ¬(A → ¬B))
Note: ¬(A → ¬B) → A (TH4), ¬(A → ¬B) → B (TH6), and A → (B → ¬(A → ¬B)) (TH10), so ¬(A → ¬B) behaves just like
A ∧ B (compare with axioms AND-1, AND-2, and AND-3).
Theorem TH11: (A → B) → ((A → ¬B) → ¬A)
TH11 is axiom NOT-1 of standard PC, called "reductio ad absurdum".
Theorem TH12: ((A → B) → C) → (A → (B → C))
Theorem TH13: (B → (B → C)) → (B → C)
Rule TH14*: A → (B → P), P → Q ⊢ A → (B → Q)
Theorem TH15: ((A → B) → (A → C)) → (A → (B → C))
Theorem TH15 is the converse of axiom THEN-2.
Theorem TH16: (¬A → ¬B) → (B → A)
Theorem TH17: (¬A → B) → (¬B → A)
Compare TH17 with theorem TH5.
Theorem TH18: ((A → B) → B) → (¬A → B)
Theorem TH19: (A → C) → ((B → C) → (((A → B) → B) → C))
Note: A → ((A → B) → B) (TH8), B → ((A → B) → B) (TH9), and (A → C) → ((B → C) → (((A → B) → B) → C)) (TH19), so
((A → B) → B) behaves just like A ∨ B. (Compare with axioms OR-1, OR-2, and OR-3.)
Theorem TH20: (A → ¬A) → ¬A
TH20 corresponds to axiom NOT-3 of standard PC, called "tertium non datur".
Theorem TH21: A → (¬A → B)
TH21 corresponds to axiom NOT-2 of standard PC, called "ex contradictione quodlibet".
All the axioms of standard PC have been derived from Frege's PC, after having let
A ∧ B := ¬(A → ¬B) and A ∨ B := (A → B) → B. These expressions are not unique, e.g. A ∨ B could also have been defined
as (B → A) → A, ¬A → B, or ¬B → A. Notice, though, that the definition A ∨ B := (A → B) → B contains no negations. On
the other hand, A ∧ B cannot be defined in terms of implication alone, without using negation.
In a sense, the expressions A ∧ B and A ∨ B can be thought of as "black boxes". Inside, these black boxes contain
formulas made up only of implication and negation. The black boxes can contain anything, as long as when plugged
into the AND-1 through AND-3 and OR-1 through OR-3 axioms of standard PC the axioms remain true. These
axioms provide complete syntactic definitions of the conjunction and disjunction operators.
The next set of theorems will aim to find the remaining four axioms of Frege's PC within the theorem-space of
standard PC, showing that the theory of Frege's PC is contained within the theory of standard PC.
Theorem ST1: A ∨ ¬A
Theorem ST2: A → ¬¬A
ST2 is axiom FRG-3 of Frege's PC.
Theorem ST3: B ∨ C → (¬C → B)
Theorem ST4: ¬¬A → A
ST4 is axiom FRG-2 of Frege's PC.
Theorem ST5: (A → (B → C)) → (B → (A → C))
ST5 is axiom THEN-3 of Frege's PC.
Theorem ST6: (A → B) → (¬B → ¬A)

ST6 is axiom FRG-1 of Frege's PC.


Each of Frege's axioms can be derived from the standard axioms, and each of the standard axioms can be derived
from Frege's axioms. This means that the two sets of axioms are interdependent and there is no axiom in one set
which is independent from the other set. Therefore, the two sets of axioms generate the same theory: Frege's PC is
equivalent to standard PC.
(For if the theories were different, then one of them would contain theorems not contained by the other theory.
These theorems could be derived from their own theory's axiom set: but as has been shown this entire axiom set can be
derived from the other theory's axiom set, which means that the given theorems can actually be derived solely from
the other theory's axiom set, so that the given theorems also belong to the other theory. Contradiction: thus the two
axiom sets span the same theorem-space. By construction: Any theorem derived from the standard axioms can be
derived from Frege's axioms, and vice versa, by first proving as theorems the axioms of the other theory as shown
above and then using those theorems as lemmas to derive the desired theorem.)

112.1 See also


Begriffsschrift

112.2 References
Buss, Samuel (1998). "An introduction to proof theory". Handbook of proof theory. Elsevier. pp. 1–78.
ISBN 0-444-89840-9.
Chapter 113

Frege's theorem

In metalogic and metamathematics, Frege's theorem is a metatheorem that states that the Peano axioms of arithmetic
can be derived in second-order logic from Hume's principle. It was first proven, informally, by Gottlob Frege in his
Die Grundlagen der Arithmetik (The Foundations of Arithmetic), published in 1884, and proven more formally in
his Grundgesetze der Arithmetik (The Basic Laws of Arithmetic), published in two volumes, in 1893 and 1903. The
theorem was re-discovered by Crispin Wright in the early 1980s and has since been the focus of significant work. It
is at the core of the philosophy of mathematics known as neo-logicism.

113.1 Frege's theorem in propositional logic

In propositional logic, Frege's theorem refers to this tautology:

(P → (Q → R)) → ((P → Q) → (P → R))

A truth table gives a proof: for all possible assignments of false or true to P, Q, and R, each subformula is evaluated
according to the rules for the material conditional, the result being shown below its main operator. The whole formula
evaluates to true in every case, i.e. it is a tautology. In fact, its antecedent and its consequent are even equivalent.
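The tautology can also be checked mechanically; the following short added verification enumerates all eight assignments:

from itertools import product

def implies(a, b):
    return (not a) or b

# Frege's theorem as a propositional tautology:
frege = lambda P, Q, R: implies(implies(P, implies(Q, R)),
                                implies(implies(P, Q), implies(P, R)))

assert all(frege(P, Q, R) for P, Q, R in product([False, True], repeat=3))
print("a tautology: true under all", 2**3, "assignments")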

113.2 References
Zalta, Edward (2013), "Frege's Theorem and Foundations for Arithmetic", Stanford Encyclopedia of Philosophy.

Gottlob Frege (1884). Die Grundlagen der Arithmetik: eine logisch-mathematische Untersuchung über den
Begriff der Zahl (PDF) (in German). Breslau: Verlage Wilhelm Koebner.

Gottlob Frege (1893). Grundgesetze der Arithmetik (in German). 1. Jena: Verlag Hermann Pohle. Edition
in modern notation

Gottlob Frege (1903). Grundgesetze der Arithmetik (in German). 2. Jena: Verlag Hermann Pohle. Edition
in modern notation

Chapter 114

Functional completeness

In logic, a functionally complete set of logical connectives or Boolean operators is one which can be used to express
all possible truth tables by combining members of the set into a Boolean expression.[1][2] A well-known complete set
of connectives is { AND, NOT }, consisting of binary conjunction and negation. Each of the singleton sets { NAND
} and { NOR } is functionally complete.
In a context of propositional logic, functionally complete sets of connectives are also called (expressively) ade-
quate.[3]
From the point of view of digital electronics, functional completeness means that every possible logic gate can be
realized as a network of gates of the types prescribed by the set. In particular, all logic gates can be assembled from
either only binary NAND gates, or only binary NOR gates.

114.1 Introduction
Modern texts on logic typically take as primitive some subset of the connectives: conjunction (∧); disjunction (∨);
negation (¬); material conditional (→); and possibly the biconditional (↔). Further connectives can be defined,
if so desired, by defining them in terms of these primitives. For example, NOR (sometimes denoted ↓, the negation
of the disjunction) can be expressed as conjunction of two negations:

A ↓ B := ¬A ∧ ¬B

Similarly, the negation of the conjunction, NAND (sometimes denoted as ↑), can be defined in terms of disjunction
and negation. It turns out that every binary connective can be defined in terms of {¬, ∧, ∨, →, ↔}, so this set is
functionally complete.
However, it still contains some redundancy: this set is not a minimal functionally complete set, because the conditional
and biconditional can be defined in terms of the other connectives as

A → B := ¬A ∨ B
A ↔ B := (A → B) ∧ (B → A).

It follows that the smaller set {¬, ∧, ∨} is also functionally complete. But this is still not minimal, as ∨ can be defined
as

A ∨ B := ¬(¬A ∧ ¬B).

Alternatively, ∧ may be defined in terms of ∨ in a similar manner, or ∨ may be defined in terms of →:

A ∨ B := ¬A → B.


No further simplifications are possible. Hence, every two-element set of connectives containing ¬ and one of
{∧, ∨, →} is a minimal functionally complete subset of {¬, ∧, ∨, →, ↔}.

114.2 Formal definition


Given the Boolean domain B = {0,1}, a set F of Boolean functions : Bni B is functionally complete if the clone
on B generated by the basic functions contains all functions : Bn B, for all strictly positive integers n 1. In
other words, the set is functionally complete if every Boolean function that takes at least one variable can be expressed
in terms of the functions . Since every Boolean function of at least one variable can be expressed in terms of binary
Boolean functions, F is functionally complete if and only if every binary Boolean function can be expressed in terms
of the functions in F.
A more natural condition would be that the clone generated by F consist of all functions : Bn B, for all integers n
0. However, the examples given above are not functionally complete in this stronger sense because it is not possible
to write a nullary function, i.e. a constant expression, in terms of F if F itself does not contain at least one nullary
function. With this stronger denition, the smallest functionally complete sets would have 2 elements.
Another natural condition would be that the clone generated by F together with the two nullary constant functions
be functionally complete or, equivalently, functionally complete in the strong sense of the previous paragraph. The
example of the Boolean function given by S(x, y, z) = z if x = y and S(x, y, z) = x otherwise shows that this condition
is strictly weaker than functional completeness.[4][5][6]

114.3 Characterization of functional completeness

Further information: Post's lattice

Emil Post proved that a set of logical connectives is functionally complete if and only if it is not a subset of any of
the following sets of connectives:

The monotonic connectives; changing the truth value of any connected variables from F to T without changing
any from T to F never makes these connectives change their return value from T to F, e.g. ∨, ∧, ⊤, ⊥.
The affine connectives, such that each connected variable either always or never affects the truth value these
connectives return, e.g. ¬, ↔, ↮, ⊤, ⊥.
The self-dual connectives, which are equal to their own de Morgan dual; if the truth values of all variables are
reversed, so is the truth value these connectives return, e.g. ¬, MAJ(p,q,r).
The truth-preserving connectives; they return the truth value T under any interpretation which assigns T to
all variables, e.g. ∨, ∧, ⊤, →, ↔.
The falsity-preserving connectives; they return the truth value F under any interpretation which assigns F to
all variables, e.g. ∨, ∧, ⊥, ⊕, ↮.

In fact, Post gave a complete description of the lattice of all clones (sets of operations closed under composition and
containing all projections) on the two-element set {T, F}, nowadays called Post's lattice, which implies the above
result as a simple corollary: the five mentioned sets of connectives are exactly the maximal clones.

114.4 Minimal functionally complete operator sets


When a single logical connective or Boolean operator is functionally complete by itself, it is called a Sheffer function[7] or sometimes a sole sufficient operator. There are no unary operators with this property. NAND and NOR,
which are dual to each other, are the only two binary Sheffer functions. These were discovered, but not published, by
Charles Sanders Peirce around 1880, and rediscovered independently and published by Henry M. Sheffer in 1913.[8]
In digital electronics terminology, the binary NAND gate and the binary NOR gate are the only binary universal logic
gates.

The following are the minimal functionally complete sets of logical connectives with arity ≤ 2:[9]

One element: {↑}, {↓}.


Two elements {, } , {, } , {, } , {, } , {, } , {, } , {, } , {, } , {, } , {, } ,
{, } , {, } , {, } , {, } , {, } , {, } , {, } , {, } .
Three elements {, , } , {, , } , {, , } , {, , } , {, , } , {, , } .

There are no minimal functionally complete sets of more than three at-most-binary logical connectives.[9] In order to
keep the lists above readable, operators that ignore one or more inputs have been omitted. For example, an operator
that ignores the first input and outputs the negation of the second could be substituted for a unary negation.

114.5 Examples
Examples of using the NAND() completeness. As illustrated by,[10]
A = A A
A B = (A B) = (A B) (A B)
A B = (A A) (B B)
Examples of using the NOR() completeness. As illustrated by,[11]
A = A A
A B = (A A) (B B)
A B = (A B) = (A B) (A B)

Note that an electronic circuit or a software function can be optimized by the reuse of intermediate results, which reduces the number of gates. For instance, the A ∧ B operation, when expressed with ↑ gates, is implemented with the reuse of A ↑ B:

X = (A ↑ B); A ∧ B = X ↑ X
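These identities can be checked mechanically; the following minimal Python sketch (function names are illustrative) mirrors the NAND constructions above, including the reuse of the shared subexpression X = A ↑ B, and verifies them against Python's built-in operators.

```python
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a):        # ¬A = A ↑ A
    return nand(a, a)

def and_(a, b):     # A ∧ B = (A ↑ B) ↑ (A ↑ B)
    x = nand(a, b)  # reuse of the shared subexpression X = A ↑ B
    return nand(x, x)

def or_(a, b):      # A ∨ B = (A ↑ A) ↑ (B ↑ B)
    return nand(nand(a, a), nand(b, b))

# Exhaustive check over all truth-value combinations
assert all(not_(a) == (not a) for a in (False, True))
assert all(and_(a, b) == (a and b) and or_(a, b) == (a or b)
           for a in (False, True) for b in (False, True))
```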

114.6 In other domains


Apart from logical connectives (Boolean operators), functional completeness can be introduced in other domains.
For example, a set of reversible gates is called functionally complete if it can express every reversible operator.
The 3-input Fredkin gate is functionally complete by itself, a sole sufficient reversible operator. There are many
other three-input universal logic gates, such as the Toffoli gate.

114.7 Set theory


There is an isomorphism between the algebra of sets and the Boolean algebra, that is, they have the same structure.
Then, if we map Boolean operators into set operators, the text above is also valid for sets: there are
many minimal complete sets of set-theoretic operators that can generate any other set relation. The more popular
minimal complete operator sets are {∁, ∩} and {∁, ∪}, where ∁ denotes set complement.
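For example, under the set reading, union is definable from complement and intersection by De Morgan's law, mirroring the completeness of {¬, ∧} for connectives; a minimal Python sketch with an illustrative universe:

```python
# Working inside a fixed universe U, {complement, intersection} suffices:
# A ∪ B = (A′ ∩ B′)′ by De Morgan's law.
U = set(range(10))            # illustrative universe
A, B = {1, 2, 3}, {3, 4, 5}   # illustrative events

def complement(X):
    return U - X

def union_from_intersection(A, B):
    return complement(complement(A) & complement(B))

assert union_from_intersection(A, B) == A | B
```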

114.8 See also


Completeness (logic)
Algebra of sets
Boolean algebra

114.9 References
[1] Enderton, Herbert (2001), A mathematical introduction to logic (2nd ed.), Boston, MA: Academic Press, ISBN 978-0-12-
238452-3. (Complete set of logical connectives).

[2] Nolt, John; Rohatyn, Dennis; Varzi, Achille (1998), Schaums outline of theory and problems of logic (2nd ed.), New York:
McGrawHill, ISBN 978-0-07-046649-4. ("[F]unctional completeness of [a] set of logical operators).

[3] Smith, Peter (2003), An introduction to formal logic, Cambridge University Press, ISBN 978-0-521-00804-4. (Denes
expressively adequate, shortened to adequate set of connectives in a section heading.)

[4] Wesselkamper, T.C. (1975), A sole sucient operator, Notre Dame Journal of Formal Logic, 16: 8688, doi:10.1305/ndj/1093891614

[5] Massey, G.J. (1975), Concerning an alleged Sheer function, Notre Dame Journal of Formal Logic, 16 (4): 549550,
doi:10.1305/ndj/1093891898

[6] Wesselkamper, T.C. (1975), A Correction To My Paper A. Sole Sucient Operator, Notre Dame Journal of Formal
Logic, 16 (4): 551, doi:10.1305/ndj/1093891899

[7] The term was originally restricted to binary operations, but since the end of the 20th century it is used more generally.
Martin, N.M. (1989), Systems of logic, Cambridge University Press, p. 54, ISBN 978-0-521-36770-7.

[8] Scharle, T.W. (1965), Axiomatization of propositional calculus with Sheer functors, Notre Dame J. Formal Logic, 6
(3): 209217, doi:10.1305/ndj/1093958259.

[9] Wernick, William (1942) Complete Sets of Logical Functions, Transactions of the American Mathematical Society 51:
11732. In his list on the last page of the article, Wernick does not distinguish between and , or between and .

[10] NAND Gate Operations at http://hyperphysics.phy-astr.gsu.edu/hbase/electronic/nand.html

[11] NOR Gate Operations at http://hyperphysics.phy-astr.gsu.edu/hbase/electronic/nor.html


Chapter 115

Game semantics

Game semantics (German: dialogische Logik, translated as dialogical logic) is an approach to formal semantics that
grounds the concepts of truth or validity on game-theoretic concepts, such as the existence of a winning strategy for
a player, somewhat resembling Socratic dialogues or medieval theory of Obligationes.

115.1 History
In the late 1950s Paul Lorenzen was the first to introduce a game semantics for logic, and it was further developed
by Kuno Lorenz. At almost the same time as Lorenzen, Jaakko Hintikka developed a model-theoretical approach
known in the literature as GTS (game-theoretical semantics). Since then, a number of different game semantics have been studied in logic.
Shahid Rahman (Lille) and collaborators developed dialogical logic into a general framework for the study of logical and
philosophical issues related to logical pluralism. Beginning in 1994 this triggered a kind of Renaissance with lasting
consequences. This new philosophical impulse experienced a parallel renewal in the fields of theoretical computer
science, computational linguistics, artificial intelligence and the formal semantics of programming languages, for
instance the work of Johan van Benthem and collaborators in Amsterdam who looked thoroughly at the interface
between logic and games, and Hanno Nickau who addressed the full abstraction problem in programming languages
by means of games. New results in linear logic by J-Y. Girard in the interfaces between mathematical game theory
and logic on one hand and argumentation theory and logic on the other hand resulted in the work of many others,
including S. Abramsky, J. van Benthem, A. Blass, D. Gabbay, M. Hyland, W. Hodges, R. Jagadeesan, G. Japaridze,
E. Krabbe, L. Ong, H. Prakken, G. Sandu, D. Walton, and J. Woods, who placed game semantics at the center of a
new concept in logic in which logic is understood as a dynamic instrument of inference.

115.2 Classical logic


The simplest application of game semantics is to propositional logic. Each formula of this language is interpreted
as a game between two players, known as the "Verifier" and the "Falsifier". The Verifier is given "ownership" of
all the disjunctions in the formula, and the Falsifier is likewise given ownership of all the conjunctions. Each move
of the game consists of allowing the owner of the dominant connective to pick one of its branches; play will then
continue in that subformula, with whichever player controls its dominant connective making the next move. Play ends
when a primitive proposition has been so chosen by the two players; at this point the Verifier is deemed the winner
if the resulting proposition is true, and the Falsifier is deemed the winner if it is false. The original formula will be
considered true precisely when the Verifier has a winning strategy, while it will be false whenever the Falsifier has
the winning strategy.
If the formula contains negations or implications, other, more complicated, techniques may be used. For example, a
negation should be true if the thing negated is false, so it must have the effect of interchanging the roles of the two
players.
More generally, game semantics may be applied to predicate logic; the new rules allow a dominant quantifier to be
removed by its "owner" (the Verifier for existential quantifiers and the Falsifier for universal quantifiers) and its bound
variable replaced at all occurrences by an object of the owner's choosing, drawn from the domain of quantification.
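A minimal Python sketch of the propositional game just described (the tuple encoding is my own, not taken from the literature cited here): a formula is true under a valuation exactly when the Verifier has a winning strategy, which the function computes by letting each player choose the branch that is best for them.

```python
# Formulas are nested tuples: ('atom', name), ('or', f, g), ('and', f, g).
def verifier_wins(formula, valuation):
    kind = formula[0]
    if kind == 'atom':
        return valuation[formula[1]]   # play has reached a primitive proposition
    _, left, right = formula
    if kind == 'or':                   # the Verifier owns ∨ and picks her best branch
        return verifier_wins(left, valuation) or verifier_wins(right, valuation)
    if kind == 'and':                  # the Falsifier owns ∧ and picks the branch worst for the Verifier
        return verifier_wins(left, valuation) and verifier_wins(right, valuation)
    raise ValueError(f"unknown connective: {kind}")

# (p ∨ q) ∧ q : the Verifier has a winning strategy exactly when q is true
f = ('and', ('or', ('atom', 'p'), ('atom', 'q')), ('atom', 'q'))
print(verifier_wins(f, {'p': True,  'q': False}))   # False
print(verifier_wins(f, {'p': False, 'q': True}))    # True
```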


Note that a single counterexample falsifies a universally quantified statement, and a single example suffices to verify
an existentially quantified one. Assuming the axiom of choice, the game-theoretical semantics for classical first-
order logic agree with the usual model-based (Tarskian) semantics. For classical first-order logic the winning strategy
for the verifier essentially consists of finding adequate Skolem functions and witnesses. For example, if S denotes
∀x∃y φ(x, y), then an equisatisfiable statement for S is ∃f∀x φ(x, f(x)). The Skolem function f (if it exists) actually
codifies a winning strategy for the verifier of S by returning a witness for the existential sub-formula for every choice
of x the falsifier might make.[1]
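To make the last point concrete, here is a small Python check with an illustrative formula and Skolem function of my own choosing rather than anything from the cited literature: for φ(x, y) := (y > x) over the integers, f(x) = x + 1 codifies a winning strategy for the verifier of ∀x∃y φ(x, y).

```python
# φ(x, y) := y > x over the integers; the Skolem function f(x) = x + 1 answers
# every falsifier move x with a witness, so the verifier always wins.
phi = lambda x, y: y > x
skolem_f = lambda x: x + 1

# Spot-check the strategy on a finite sample of falsifier moves.
assert all(phi(x, skolem_f(x)) for x in range(-1000, 1000))
```
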
The above definition was first formulated by Jaakko Hintikka as part of his GTS interpretation. The original version
of game semantics for classical (and intuitionistic) logic due to Paul Lorenzen and Kuno Lorenz was not defined in
terms of models but of winning strategies over formal dialogues (P. Lorenzen, K. Lorenz 1978, S. Rahman and L.
Keiff 2005). Shahid Rahman and Tero Tulenheimo developed an algorithm to convert GTS winning strategies for
classical logic into dialogical winning strategies and vice versa.
For most common logics, including the ones above, the games that arise from them have perfect information - that
is, the two players always know the truth values of each primitive, and are aware of all preceding moves in the game.
However, with the advent of game semantics, logics, such as the Independence-friendly logic of Hintikka and Sandu,
with a natural semantics in terms of games of imperfect information have been proposed.

115.3 Intuitionistic logic, denotational semantics, linear logic, logical pluralism

The primary motivation for Lorenzen and Kuno Lorenz was to find a game-theoretic (their term was "dialogical",
Dialogische Logik) semantics for intuitionistic logic. Andreas Blass[2] was the first to point out connections between
game semantics and linear logic. This line was further developed by Samson Abramsky, Radhakrishnan Jagadeesan,
Pasquale Malacaria and independently Martin Hyland and Luke Ong, who placed special emphasis on compositionality, i.e. the definition of strategies inductively on the syntax. Using game semantics, the authors mentioned above
have solved the long-standing problem of defining a fully abstract model for the programming language PCF. Consequently, game semantics has led to fully abstract semantic models for a variety of programming languages and to
new semantic-directed methods of software verification by software model checking.
Shahid Rahman and Helge Rückert extended the dialogical approach to the study of several non-classical logics such
as modal logic, relevance logic, free logic and connexive logic. Recently, Rahman and collaborators developed the
dialogical approach into a general framework aimed at the discussion of logical pluralism.[3]

115.4 Quantifiers
Foundational considerations of game semantics have been more emphasised by Jaakko Hintikka and Gabriel Sandu,
especially for independence-friendly logic (IF logic, more recently information-friendly logic), a logic with branching
quantifiers. It was thought that the principle of compositionality fails for these logics, so that a Tarskian truth definition
could not provide a suitable semantics. To get around this problem, the quantifiers were given a game-theoretic
meaning. Specifically, the approach is the same as in classical propositional logic, except that the players do not always
have perfect information about previous moves by the other player. Wilfrid Hodges has proposed a compositional
semantics and proved it equivalent to game semantics for IF logics.

115.5 Computability logic


Japaridze's computability logic is a game-semantical approach to logic in an extreme sense, treating games as targets
to be serviced by logic rather than as technical or foundational means for studying or justifying logic. Its starting
philosophical point is that logic is meant to be a universal, general-utility intellectual tool for navigating the real
world and, as such, it should be construed semantically rather than syntactically, because it is semantics that serves as
a bridge between the real world and otherwise meaningless formal systems (syntax). Syntax is thus secondary, interesting
only as much as it services the underlying semantics. From this standpoint, Japaridze has repeatedly criticized the
often followed practice of adjusting semantics to some already existing target syntactic constructions, with Lorenzen's
approach to intuitionistic logic being an example. This line of thought then proceeds to argue that the semantics, in
turn, should be a game semantics, because games offer the most comprehensive, coherent, natural, adequate and
convenient mathematical models for the very essence of all navigational activities of agents: their interactions with
the surrounding world.[4] Accordingly, the logic-building paradigm adopted by computability logic is to identify the
most natural and basic operations on games, treat those operators as logical operations, and then look for sound and
complete axiomatizations of the sets of game-semantically valid formulas. On this path a host of familiar or unfamiliar
logical operators have emerged in the open-ended language of computability logic, with several sorts of negations,
conjunctions, disjunctions, implications, quantifiers and modalities. Games are played between two agents: a machine
and its environment, where the machine is required to follow only effective strategies. This way, games are seen as
interactive computational problems, and the machine's winning strategies for them as solutions to those problems.
It has been established that computability logic is robust with respect to reasonable variations in the complexity of
allowed strategies, which can be brought down as low as logarithmic space and polynomial time (one does not imply
the other in interactive computations) without affecting the logic. All this explains the name computability logic and
determines applicability in various areas of computer science. Classical logic, independence-friendly logic and certain
extensions of linear and intuitionistic logics turn out to be special fragments of CoL, obtained merely by disallowing
certain groups of operators or atoms.

115.6 See also


Computability logic
Dependence logic
Ehrenfeucht–Fraïssé game
Independence-friendly logic
Interactive computation
Intuitionistic logic
Ludics

115.7 References
[1] J. Hintikka and G. Sandu, 2009, Game-Theoretical Semantics in Keith Allan (ed.) Concise Encyclopedia of Semantics,
Elsevier, ISBN 0-08095-968-7, pp. 341343

[2] Andreas R. Blass

[3] http://stl.recherche.univ-lille3.fr/sitespersonnels/rahman/accueilrahman.html

[4] G. Japaridze, In the beginning was game semantics. In: Games: Unifying Logic, Language and Philosophy. O. Majer,
A.-V. Pietarinen and T. Tulenheimo, eds. Springer 2009, pp. 249-350.

115.7.1 Articles
S. Abramsky and R.Jagadeesan, Games and full completeness for multiplicative linear logic. Journal of Sym-
bolic Logic 59 (1994): 543-574.
A. Blass, A game semantics for linear logic. Annals of Pure and Applied Logic 56 (1992): 151-166.
D.R. Ghica, Applications of Game Semantics: From Program Analysis to Hardware Synthesis. 2009 24th Annual
IEEE Symposium on Logic In Computer Science: 17-26. ISBN 978-0-7695-3746-7.
G. Japaridze, Introduction to computability logic. Annals of Pure and Applied Logic 123 (2003): 1-99.
G. Japaridze, In the beginning was game semantics. In Ondrej Majer, Ahti-Veikko Pietarinen and Tero Tulen-
heimo (editors), Games: Unifying logic, Language and Philosophy. Springer (2009).
Krabbe, E. C. W., 2001. Dialogue Foundations: Dialogue Logic Restituted [title has been misprinted as
"...Revisited"], Supplement to the Proceedings of The Aristotelian Society 75: 33-49.

H. Nickau (1994). Hereditarily Sequential Functionals. In A. Nerode; Yu.V. Matiyasevich. Proc. Symp.
Logical Foundations of Computer Science: Logic at St. Petersburg. Lecture Notes in Computer Science. 813.
Springer-Verlag. pp. 253264. doi:10.1007/3-540-58140-5_25.

S. Rahman and L. Kei, On how to be a dialogician. In Daniel Vanderken (ed.), Logic Thought and Action,
Springer (2005), 359-408. ISBN 1-4020-2616-1.

S. Rahman and T. Tulenheimo, From Games to Dialogues and Back: Towards a General Frame for Validity. In
Ondrej Majer, Ahti-Veikko Pietarinen and Tero Tulenheimo (editors), Games: Unifying logic, Language and
Philosophy. Springer (2009).
Johan van Benthem (2003). Logic and Game Theory: Close Encounters of the Third Kind. In G. E. Mints;
Reinhard Muskens. Games, logic, and constructive sets. CSLI Publications. ISBN 978-1-57586-449-5.

115.7.2 Books
T. Aho and A-V. Pietarinen (eds.) Truth and Games. Essays in honour of Gabriel Sandu. Societas Philosophica
Fennica (2006).ISBN 951-9264-57-4.
J. van Benthem, G. Heinzmann, M. Rebuschi and H. Visser (eds.) The Age of Alternative Logics. Springer
(2006).ISBN 978-1-4020-5011-4.
R. Inhetveen: Logik. Eine dialog-orientierte Einfhrung., Leipzig 2003 ISBN 3-937219-02-1

L. Kei Le Pluralisme Dialogique. Thesis Universit de Lille 3 (2007).

K. Lorenz, P. Lorenzen: Dialogische Logik, Darmstadt 1978


P. Lorenzen: Lehrbuch der konstruktiven Wissenschaftstheorie, Stuttgart 2000 ISBN 3-476-01784-2

O. Majer, A.-V. Pietarinen and T. Tulenheimo (editors). Games: Unifying Logic, Language and Philosophy.
Springer (2009).

S. Rahman, ber Dialogue protologische Kategorien und andere Seltenheiten. Frankfurt 1993 ISBN 3-631-
46583-1

S. Rahman and H. Rckert (editors), New Perspectives in Dialogical Logic. Synthese 127 (2001) ISSN 0039-
7857.

J. Redmond & M. Fontaine, How to play dialogues. An introduction to Dialogical Logic. London, College
Publications (Col. Dialogues and the Games of Logic. A Philosophical Perspective N 1). (ISBN 978-1-
84890-046-2)

115.8 External links


Computability Logic Homepage

GALOP: Workshop on Games for Logic and Programming Languages


Game Semantics or Linear Logic?

Internet Encyclopedia of Philosophy entry on Dialogical Logic

Stanford Encyclopedia of Philosophy entry on Logic and Games


SEP entry on Dialogical Logic
Chapter 116

Generalized quantifier

In linguistic semantics, a generalized quantifier is an expression that denotes a set of sets. This is the standard
semantics assigned to quantified noun phrases. For example, the generalized quantifier every boy denotes the set of
sets of which every boy is a member.

{X | {x | x is a boy} ⊆ X}

This treatment of quantifiers has been essential in achieving a compositional semantics for sentences containing
quantifiers.[1][2]

116.1 Type theory


A version of type theory is often used to make the semantics of different kinds of expressions explicit. The standard
construction defines the set of types recursively as follows:

1. e and t are types.

2. If a and b are both types, then so is ⟨a, b⟩.

3. Nothing is a type, except what can be constructed on the basis of lines 1 and 2 above.

Given this definition, we have the simple types e and t, but also a countable infinity of complex types, some of which
include:

⟨e, t⟩; ⟨t, t⟩; ⟨⟨e, t⟩, t⟩; ⟨e, ⟨e, t⟩⟩; ⟨⟨e, t⟩, ⟨⟨e, t⟩, t⟩⟩; ...

Expressions of type e denote elements of the universe of discourse, the set of entities the discourse is about.
This set is usually written as D_e. Examples of type e expressions include John and he.

Expressions of type t denote a truth value, usually rendered as the set {0, 1}, where 0 stands for false and 1
stands for true. Examples of expressions that are sometimes said to be of type t are sentences or propositions.

Expressions of type ⟨e, t⟩ denote functions from the set of entities to the set of truth values. This set of functions
is rendered as D_t^{D_e}. Such functions are characteristic functions of sets. They map every individual that is an
element of the set to true, and everything else to false. It is common to say that they denote sets rather than
characteristic functions, although, strictly speaking, the latter is more accurate. Examples of expressions of
this type are predicates, nouns and some kinds of adjectives.


In general, expressions of complex type ⟨a, b⟩ denote functions from the set of entities of type a to the set of
entities of type b, a construct we can write as follows: D_b^{D_a}.

We can now assign types to the words in our sentence above (Every boy sleeps) as follows.

Type(boy) = ⟨e, t⟩
Type(sleeps) = ⟨e, t⟩
Type(every) = ⟨⟨e, t⟩, ⟨⟨e, t⟩, t⟩⟩

Thus, every denotes a function from a set to a function from a set to a truth value. Put differently, it denotes a function
from a set to a set of sets. It is that function which for any two sets A, B gives every(A)(B) = 1 if and only if A ⊆ B.

116.2 Typed lambda calculus


A useful way to write complex functions is the lambda calculus. For example, one can write the meaning of sleeps as
the following lambda expression, which is a function from an individual x to the proposition that x sleeps.

λx. sleep(x)

Such lambda terms are functions whose domain is what precedes the period, and whose range is the type of thing
that follows the period. If x is a variable that ranges over elements of D_e, then the following lambda term denotes
the identity function on individuals:

λx. x

We can now write the meaning of every with the following lambda term, where X, Y are variables of type ⟨e, t⟩:

λX. λY. X ⊆ Y

If we abbreviate the meaning of boy and sleeps as "B" and "S", respectively, we have that the sentence every boy sleeps
now means the following:

(λX. λY. X ⊆ Y)(B)(S)   [β-reduction]
(λY. B ⊆ Y)(S)   [β-reduction]
B ⊆ S

The expression every is a determiner. Combined with a noun, it yields a generalized quantifier of type ⟨⟨e, t⟩, t⟩.
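To make the type assignment concrete, the following minimal Python sketch lets finite sets stand in for characteristic functions and renders determiners as curried functions of type ⟨⟨e, t⟩, ⟨⟨e, t⟩, t⟩⟩; the toy model and all names are illustrative.

```python
# A determiner takes a restrictor set A and returns a generalized quantifier,
# i.e. a test on predicate sets B (sets stand in for type-⟨e,t⟩ functions).
def every(A):
    return lambda B: A <= B              # every(A)(B) = 1 iff A ⊆ B

def no(A):
    return lambda B: not (A & B)         # no(A)(B) = 1 iff A ∩ B = ∅

def exactly_three(A):
    return lambda B: len(A & B) == 3     # |A ∩ B| = 3

boys   = {'al', 'bo', 'cy'}
sleeps = {'al', 'bo', 'cy', 'dee'}

print(every(boys)(sleeps))          # True:  every boy sleeps
print(no(boys)(sleeps))             # False
print(exactly_three(boys)(sleeps))  # True
```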

116.3 Properties

116.3.1 Monotonicity
Monotone increasing GQs

A generalized quantifier GQ is said to be monotone increasing, also called upward entailing, just in case, for any two
sets X and Y the following holds:

if X ⊆ Y, then GQ(X) entails GQ(Y).

The GQ every boy is monotone increasing. For example, the set of things that run fast is a subset of the set of things
that run. Therefore, the first sentence below entails the second:

1. Every boy runs fast.

2. Every boy runs.

Monotone decreasing GQs

A GQ is said to be monotone decreasing, also called downward entailing, just in case, for any two sets X and Y, the
following holds:

If X ⊆ Y, then GQ(Y) entails GQ(X).

An example of a monotone decreasing GQ is no boy. For this GQ we have that the first sentence below entails the
second.

1. No boy runs.

2. No boy runs fast.

The lambda term for the determiner no is the following. It says that the two sets have an empty intersection.

λX. λY. X ∩ Y = ∅

Monotone decreasing GQs are among the expressions that can license a negative polarity item, such as any. Monotone
increasing GQs do not license negative polarity items.

1. Good: No boy has any money.

2. Bad: *Every boy has any money.
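These properties can be verified by brute force over a small universe; the Python sketch below (universe and quantifiers chosen purely for illustration) confirms that every boy is upward but not downward monotone, while no boy is the reverse.

```python
# GQ is upward monotone if X ⊆ Y implies GQ(X) entails GQ(Y),
# downward monotone if the implication runs the other way.
from itertools import combinations

def subsets(S):
    return [set(c) for r in range(len(S) + 1) for c in combinations(S, r)]

U = {1, 2, 3}
boys = {1, 2}
every_boy = lambda B: boys <= B          # "every boy ..."
no_boy    = lambda B: not (boys & B)     # "no boy ..."

def upward(GQ):
    return all(GQ(Y) for X in subsets(U) for Y in subsets(U) if X <= Y and GQ(X))

def downward(GQ):
    return all(GQ(X) for X in subsets(U) for Y in subsets(U) if X <= Y and GQ(Y))

print(upward(every_boy), downward(every_boy))  # True False
print(upward(no_boy), downward(no_boy))        # False True
```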

Non-monotone GQs

A GQ is said to be non-monotone if it is neither monotone increasing nor monotone decreasing. An example of such
a GQ is exactly three boys. Neither of the following two sentences entails the other.

1. Exactly three students ran.

2. Exactly three students ran fast.

The first sentence doesn't entail the second. The fact that the number of students that ran is exactly three doesn't
entail that each of these students ran fast, so the number of students that did that can be smaller than 3. Conversely,
the second sentence doesn't entail the first. The sentence exactly three students ran fast can be true, even though the
number of students who merely ran (i.e. not so fast) is greater than 3.
The lambda term for the (complex) determiner exactly three is the following. It says that the cardinality of the
intersection between the two sets equals 3.

λX. λY. |X ∩ Y| = 3

116.3.2 Conservativity
A determiner D is said to be conservative if the following equivalence holds:

D(A)(B) ↔ D(A)(A ∩ B)

For example, the following two sentences are equivalent.

1. Every boy sleeps.


2. Every boy is a boy who sleeps.

It has been proposed that all natural language determiners (i.e. in every language) are conservative (Barwise and
Cooper 1981). The expression only is not conservative; the following two sentences are not equivalent. It is, in
fact, not common to analyze only as a determiner. Rather, it is standardly treated as a focus-sensitive adverb.

1. Only boys sleep.

2. Only boys are boys who sleep.
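Conservativity can be tested the same way; in the sketch below (an illustrative brute-force check, with only included purely as a foil), every and exactly n pass while only fails.

```python
# Brute-force check of D(A)(B) ↔ D(A)(A ∩ B) over all subsets of a small universe.
from itertools import combinations

def subsets(S):
    return [set(c) for r in range(len(S) + 1) for c in combinations(S, r)]

U = {1, 2, 3}
every   = lambda A: (lambda B: A <= B)
exactly = lambda n: (lambda A: (lambda B: len(A & B) == n))
only    = lambda A: (lambda B: B <= A)   # not standardly analysed as a determiner

def conservative(D):
    return all(D(A)(B) == D(A)(A & B) for A in subsets(U) for B in subsets(U))

print(conservative(every))       # True
print(conservative(exactly(2)))  # True
print(conservative(only))        # False
```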

116.4 See also


Lindström quantifier
Branching quantifier

116.5 References
[1] Montague, Richard: 1974, 'The proper treatment of quantication in English', in R. Montague, Formal Philosophy, ed. by
R. Thomason (New Haven).

[2] Barwise, Jon and Robin Cooper. 1981. Generalized quantiers and natural language. Linguistics and Philosophy 4: 159-
219.

116.6 Further reading


Stanley Peters; Dag Westersthl (2006). Quantiers in language and logic. Clarendon Press. ISBN 978-0-19-
929125-0.
Antonio Badia (2009). Quantiers in Action: Generalized Quantication in Query, Logical and Natural Lan-
guages. Springer. ISBN 978-0-387-09563-9.

116.7 External links


Dag Westersthl, 2011. 'Generalized Quantiers'. Stanford Encyclopedia of Philosophy.
Chapter 117

George Boole

Boole redirects here. For other uses, see Boole (disambiguation).


George Boole (/buːl/; 2 November 1815 – 8 December 1864) was an English mathematician, educator, philosopher
and logician. He worked in the fields of differential equations and algebraic logic, and is best known as the author of
The Laws of Thought (1854) which contains Boolean algebra. Boolean logic is credited with laying the foundations
for the information age.[3] Boole maintained that:

No general method for the solution of questions in the theory of probabilities can be established
which does not explicitly recognise, not only the special numerical bases of the science, but also those
universal laws of thought which are the basis of all reasoning, and which, whatever they may be as to
their essence, are at least mathematical as to their form.[4]

117.1 Early life


Boole was born in Lincoln, Lincolnshire, England, the son of John Boole Sr (17791848), a shoemaker[5] and Mary
Ann Joyce.[6] He had a primary school education, and received lessons from his father, but had little further formal
and academic teaching. William Brooke, a bookseller in Lincoln, may have helped him with Latin, which he may
also have learned at the school of Thomas Bainbridge. He was self-taught in modern languages.[2] At age 16, Boole
became the breadwinner for his parents and three younger siblings, taking up a junior teaching position in Doncaster
at Heighams School.[7] He taught briey in Liverpool.[1]
Boole participated in the Mechanics Institute, in the Greyfriars, Lincoln, which was founded in 1833.[2][8] Edward
Bromhead, who knew John Boole through the institution, helped George Boole with mathematics books[9] and he was
given the calculus text of Sylvestre Franois Lacroix by the Rev. George Stevens Dickson of St Swithins, Lincoln.[10]
Without a teacher, it took him many years to master calculus.[1]
At age 19, Boole successfully established his own school in Lincoln. Four years later he took over Halls Academy in
Waddington, outside Lincoln, following the death of Robert Hall. In 1840 he moved back to Lincoln, where he ran a
boarding school.[1] Boole immediately became involved in the Lincoln Topographical Society, serving as a member of
the committee, and presenting a paper entitled, On the origin, progress and tendencies Polytheism, especially amongst
the ancient Egyptians, and Persians, and in modern India. [11] on 30 November 1841.
Boole became a prominent local gure, an admirer of John Kaye, the bishop.[12] He took part in the local campaign
for early closing.[2] With E. R. Larken and others he set up a building society in 1847.[13] He associated also with the
Chartist Thomas Cooper, whose wife was a relation.[14]
From 1838 onwards Boole was making contacts with sympathetic British academic mathematicians and reading more
widely. He studied algebra in the form of symbolic methods, as far as these were understood at the time, and began
to publish research papers.[1]


Booles House and School at 3 Pottergate in Lincoln

Greyfriars, Lincoln, which housed the Mechanics Institute



Plaque from the house in Lincoln

117.2 Professor at Cork

Booles status as mathematician was recognised by his appointment in 1849 as the rst professor of mathematics
at Queens College, Cork (now University College Cork (UCC)) in Ireland. He met his future wife, Mary Everest,
there in 1850 while she was visiting her uncle John Ryall who was Professor of Greek. They married some years
later in 1855.[15] He maintained his ties with Lincoln, working there with E. R. Larken in a campaign to reduce
prostitution.[16]

117.3 Honours and awards

Boole was awarded the Keith Medal by the Royal Society of Edinburgh in 1855 [17] and was elected a Fellow of
the Royal Society (FRS) in 1857.[10] He received honorary degrees of LL.D. from the University of Dublin and the
University of Oxford.[18]

117.4 Death

In late November 1864, Boole walked, in heavy rain, from his home at Licheld Cottage in Ballintemple[19] to the
university, a distance of three miles, and lectured wearing his wet clothes.[20] He soon became ill, developing a severe
cold and high fever, or possibly pneumonia.[21] Booles condition worsened and on 8 December 1864, he died of
fever-induced pleural eusion.
He was buried in the Church of Ireland cemetery of St Michaels, Church Road, Blackrock (a suburb of Cork). There
is a commemorative plaque inside the adjoining church.[22]

The house at 5 Grenville Place in Cork, in which Boole lived between 1849 and 1855, and where he wrote The Laws of Thought

117.5 Works

Booles rst published paper was Researches in the theory of analytical transformations, with a special application to
the reduction of the general equation of the second order, printed in the Cambridge Mathematical Journal in February
1840 (Volume 2, no. 8, pp. 6473), and it led to a friendship between Boole and Duncan Farquharson Gregory, the
editor of the journal. His works are in about 50 articles and a few separate publications.[23]
In 1841 Boole published an inuential paper in early invariant theory.[10] He received a medal from the Royal Society

Booles gravestone in Blackrock, Cork, Ireland

for his memoir of 1844, On A General Method of Analysis. It was a contribution to the theory of linear dierential
equations, moving from the case of constant coecients on which he had already published, to variable coecients.[24]
The innovation in operational methods is to admit that operations may not commute.[25] In 1847 Boole published The
Mathematical Analysis of Logic, the rst of his works on symbolic logic.[26]

117.5.1 Differential equations

Boole completed two systematic treatises on mathematical subjects during his lifetime. The Treatise on Dierential
Equations[27] appeared in 1859, and was followed, the next year, by a Treatise on the Calculus of Finite Dierences, a
sequel to the former work.

117.5.2 Analysis

In 1857, Boole published the treatise On the Comparison of Transcendents, with Certain Applications to the Theory of
Definite Integrals,[28] in which he studied the sum of residues of a rational function. Among other results, he proved
what is now called Boole's identity:

mes{ x ∈ ℝ | (1/π) Σ_k a_k/(x − b_k) ≥ t } = (Σ_k a_k)/(πt)

for any real numbers a_k > 0, b_k, and t > 0.[29] Generalisations of this identity play an important role in the theory of
the Hilbert transform.[29]

117.5.3 Symbolic logic

Main article: Boolean algebra



Detail of stained glass window in Lincoln Cathedral dedicated to Boole

In 1847 Boole published the pamphlet Mathematical Analysis of Logic. He later regarded it as a flawed exposition
of his logical system, and wanted An Investigation of the Laws of Thought on Which are Founded the Mathematical
Theories of Logic and Probabilities to be seen as the mature statement of his views. Contrary to widespread belief,
Boole never intended to criticise or disagree with the main principles of Aristotle's logic. Rather he intended to
systematise it, to provide it with a foundation, and to extend its range of applicability.[30] Boole's initial involvement
in logic was prompted by a current debate on quantification, between Sir William Hamilton, who supported the theory
of "quantification of the predicate", and Boole's supporter Augustus De Morgan, who advanced a version of De
Morgan duality, as it is now called. Boole's approach was ultimately much further reaching than either side's in the
controversy.[31] It founded what was first known as the "algebra of logic" tradition.[32]

Plaque beneath Booles window in Lincoln Cathedral

Among his many innovations is his principle of wholistic reference, which was later, and probably independently,
adopted by Gottlob Frege and by logicians who subscribe to standard rst-order logic. A 2003 article[33] provides a
systematic comparison and critical evaluation of Aristotelian logic and Boolean logic; it also reveals the centrality of
wholistic reference in Booles philosophy of logic.

1854 denition of universe of discourse

In every discourse, whether of the mind conversing with its own thoughts, or of the individual in his
intercourse with others, there is an assumed or expressed limit within which the subjects of its opera-
tion are conned. The most unfettered discourse is that in which the words we use are understood in
the widest possible application, and for them the limits of discourse are co-extensive with those of the
universe itself. But more usually we conne ourselves to a less spacious eld. Sometimes, in discoursing
of men we imply (without expressing the limitation) that it is of men only under certain circumstances
and conditions that we speak, as of civilised men, or of men in the vigour of life, or of men under some
other condition or relation. Now, whatever may be the extent of the eld within which all the objects of
our discourse are found, that eld may properly be termed the universe of discourse. Furthermore, this
universe of discourse is in the strictest sense the ultimate subject of the discourse.[34]

Treatment of addition in logic

Boole conceived of elective symbols of his kind as an algebraic structure. But this general concept was not available
to him: he did not have the segregation standard in abstract algebra of postulated (axiomatic) properties of operations,
and deduced properties.[35] His work was a beginning to the algebra of sets, again not a concept available to Boole as a
familiar model. His pioneering efforts encountered specific difficulties, and the treatment of addition was an obvious
difficulty in the early days.
Boole replaced the operation of multiplication by the word 'and' and addition by the word 'or'. But in Boole's original

system, + was a partial operation: in the language of set theory it would correspond only to disjoint union of subsets.
Later authors changed the interpretation, commonly reading it as exclusive or, or in set theory terms symmetric
dierence; this step means that addition is always dened.[32][36]
In fact there is the other possibility, that + should be read as disjunction,[35] This other possibility extends from the
disjoint union case, where exclusive or and non-exclusive or both give the same answer. Handling this ambiguity was
an early problem of the theory, reecting the modern use of both Boolean rings and Boolean algebras (which are
simply dierent aspects of one type of structure). Boole and Jevons struggled over just this issue in 1863, in the form
of the correct evaluation of x + x. Jevons argued for the result x, which is correct for + as disjunction. Boole kept the
result as something undened. He argued against the result 0, which is correct for exclusive or, because he saw the
equation x + x = 0 as implying x = 0, a false analogy with ordinary algebra.[10]
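In modern terms the disagreement is just the choice between the Boolean-ring reading of + (exclusive or) and the Boolean-algebra reading (inclusive or, Jevons's choice); a short Python illustration, with x standing for a truth value:

```python
x = True                 # any truth value

print(x ^ x)             # Boolean-ring reading,    x + x = 0:  False
print(x or x)            # Jevons's (disjunction),  x + x = x:  True
```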

117.5.4 Probability theory


The second part of the Laws of Thought contained a corresponding attempt to discover a general method in probabili-
ties. Here the goal was algorithmic: from the given probabilities of any system of events, to determine the consequent
probability of any other event logically connected with those events.[37]

117.6 Legacy
Boolean algebra is named after him, as is the crater Boole on the Moon. The keyword Bool represents a Boolean
datatype in many programming languages, though Pascal and Java, among others, both use the full name Boolean.[38]
The library, underground lecture theatre complex and the Boole Centre for Research in Informatics[39] at University
College Cork are named in his honour. A road called Boole Heights in Bracknell, Berkshire is named after him.

117.6.1 19th-century development


Booles work was extended and rened by a number of writers, beginning with William Stanley Jevons. Augustus De
Morgan had worked on the logic of relations, and Charles Sanders Peirce integrated his work with Booles during the
1870s.[40] Other signicant gures were Platon Sergeevich Poretskii, and William Ernest Johnson. The conception of
a Boolean algebra structure on equivalent statements of a propositional calculus is credited to Hugh MacColl (1877),
in work surveyed 15 years later by Johnson.[40] Surveys of these developments were published by Ernst Schrder,
Louis Couturat, and Clarence Irving Lewis.

117.6.2 20th-century development


In 1921 the economist John Maynard Keynes published a book on probability theory, A Treatise of Probability.
Keynes believed that Boole had made a fundamental error in his denition of independence which vitiated much
of his analysis.[41] In his book The Last Challenge Problem, David Miller provides a general method in accord with
Booles system and attempts to solve the problems recognised earlier by Keynes and others. Theodore Hailperin
showed much earlier that Boole had used the correct mathematical denition of independence in his worked out
problems [42]
Booles work and that of later logicians initially appeared to have no engineering uses. Claude Shannon attended
a philosophy class at the University of Michigan which introduced him to Booles studies. Shannon recognised
that Booles work could form the basis of mechanisms and processes in the real world and that it was therefore
highly relevant. In 1937 Shannon went on to write a masters thesis, at the Massachusetts Institute of Technology, in
which he showed how Boolean algebra could optimise the design of systems of electromechanical relays then used in
telephone routing switches. He also proved that circuits with relays could solve Boolean algebra problems. Employing
the properties of electrical switches to process logic is the basic concept that underlies all modern electronic digital
computers. Victor Shestakov at Moscow State University (19071987) proposed a theory of electric switches based
on Boolean logic even earlier than Claude Shannon in 1935 on the testimony of Soviet logicians and mathematicians
Sofya Yanovskaya, Gaaze-Rapoport, Roland Dobrushin, Lupanov, Medvedev and Uspensky, though they presented
their academic theses in the same year, 1938. But the rst publication of Shestakovs result took place only in 1941 (in
Russian). Hence, Boolean algebra became the foundation of practical digital circuit design; and Boole, via Shannon
and Shestakov, provided the theoretical grounding for the Information Age.[43]

In modern notation, the free Boolean algebra on basic propositions p and q arranged in a Hasse diagram. The Boolean combinations
make up 16 dierent propositions, and the lines show which are logically related.

117.6.3 21st-century celebration

Booles legacy surrounds us everywhere, in the computers, information storage and retrieval, electronic circuits and
controls that support life, learning and communications in the 21st century. His pivotal advances in mathematics,
logic and probability provided the essential groundwork for modern mathematics, microelectronic engineering and
computer science.

University College Cork.[3]


2015 saw the 200th anniversary of George Booles birth. To mark the bicentenary year, University College Cork
joined admirers of Boole around the world to celebrate his life and legacy.
UCCs George Boole 200[44] project, featured events, student outreach activities and academic conferences on Booles
legacy in the digital age, including a new edition of Desmond MacHale's 1985 biography The Life and Work of George
Boole: A Prelude to the Digital Age,[45] 2014).
The search engine Google marked the 200th anniversary of his birth on 2 November 2015 with an algebraic reimaging
of its Google Doodle.[3]
Litcheld Cottage in Ballintemple, Cork, where Boole lived for the last two years of his life, bears a memorial
plaque. His former residence, in Grenville Place, is being restored through a collaboration between UCC and Cork
City Council, as the George Boole House of Innovation, after the city council acquired the premises under the Derelict
Sites Act.[46]

117.7 Views
Booles views were given in four published addresses: The Genius of Sir Isaac Newton; The Right Use of Leisure; The
Claims of Science; and The Social Aspect of Intellectual Culture.[47] The rst of these was from 1835, when Charles
Anderson-Pelham, 1st Earl of Yarborough gave a bust of Newton to the Mechanics Institute in Lincoln.[48] The
second justied and celebrated in 1847 the outcome of the successful campaign for early closing in Lincoln, headed
by Alexander Leslie-Melville, of Branston Hall.[49] The Claims of Science was given in 1851 at Queens College,
Cork.[50] The Social Aspect of Intellectual Culture was also given in Cork, in 1855 to the Cuvierian Society.[51]
Though his biographer Des MacHale describes Boole as an agnostic deist,[52][53] Boole read a wide variety of
Christian theology. Combining his interests in mathematics and theology, he compared the Christian trinity of Father,
Son, and Holy Ghost with the three dimensions of space, and was attracted to the Hebrew conception of God as an
absolute unity. Boole considered converting to Judaism but in the end was said to have chosen Unitarianism. Boole
came to speak against a what he saw as prideful scepticism, and instead, favoured the belief in a Supreme Intelligent
Cause.[54] He also declared I rmly believe, for the accomplishment of a purpose of the Divine Mind.[55][56] In
addition, he stated that he perceived teeming evidences of surrounding design" and concluded that the course of
this world is not abandoned to chance and inexorable fate.[57][58]
Two inuences on Boole were later claimed by his wife, Mary Everest Boole: a universal mysticism tempered by
Jewish thought, and Indian logic.[59] Mary Boole stated that an adolescent mystical experience provided for his lifes
work:

My husband told me that when he was a lad of seventeen a thought struck him suddenly, which
became the foundation of all his future discoveries. It was a ash of psychological insight into the
conditions under which a mind most readily accumulates knowledge [...] For a few years he supposed
himself to be convinced of the truth of the Bible as a whole, and even intended to take orders as a
clergyman of the English Church. But by the help of a learned Jew in Lincoln he found out the true
nature of the discovery which had dawned on him. This was that mans mind works by means of some
mechanism which functions normally towards Monism.[60]

In Ch. 13 of Laws of Thought Boole used examples of propositions from Baruch Spinoza and Samuel Clarke. The
work contains some remarks on the relationship of logic to religion, but they are slight and cryptic.[61] Boole was
apparently disconcerted at the books reception just as a mathematical toolset:

George afterwards learned, to his great joy, that the same conception of the basis of Logic was held by
Leibnitz, the contemporary of Newton. De Morgan, of course, understood the formula in its true sense;
he was Booles collaborator all along. Herbert Spencer, Jowett, and Robert Leslie Ellis understood, I feel
sure; and a few others, but nearly all the logicians and mathematicians ignored [953] the statement that
the book was meant to throw light on the nature of the human mind; and treated the formula entirely as
a wonderful new method of reducing to logical order masses of evidence about external fact.[60]

Mary Boole claimed that there was profound inuence via her uncle George Everest of Indian thought on
George Boole, as well as on Augustus De Morgan and Charles Babbage:

Think what must have been the eect of the intense Hinduizing of three such men as Babbage,
De Morgan, and George Boole on the mathematical atmosphere of 183065. What share had it in
generating the Vector Analysis and the mathematics by which investigations in physical science are now
conducted?[60]

117.8 Family
In 1855 he married Mary Everest (niece of George Everest), who later wrote several educational works on her
husbands principles.
The Booles had ve daughters:

Mary Ellen(18561908)[62] who married the mathematician and author Charles Howard Hinton and had four
children: George (18821943), Eric (*1884), William (18861909)[63] and Sebastian (18871923) inventor

of the Jungle gym. After the sudden death of her husband, Mary Ellen committed suicide, in Washington,
D.C., in May 1908.[64] Sebastian had three children:

Jean Hinton (married name Rosner) (19172002) peace activist.


William H. Hinton (19192004) visited China in the 1930s and 40s and wrote an inuential account of
the Communist land reform.
Joan Hinton (19212010) worked for the Manhattan Project and lived in China from 1948 until her death
on 8 June 2010; she was married to Sid Engst.

Margaret, (1858 1935) married Edward Ingram Taylor, an artist.

Their elder son Georey Ingram Taylor became a mathematician and a Fellow of the Royal Society.
Their younger son Julian was a professor of surgery.

Alicia (18601940), who made important contributions to four-dimensional geometry.

Lucy Everest (18621904), who was the rst female professor of chemistry in England.

Ethel Lilian (18641960), who married the Polish scientist and revolutionary Wilfrid Michael Voynich and
was the author of the novel The Gady.

117.9 See also


Boolean algebra, a logical calculus of truth values or set membership

Boolean algebra (structure), a set with operations resembling logical ones

Boolean ring, a ring consisting of idempotent elements

Boolean circuit, a mathematical model for digital logical circuits.

Boolean data type is a data type, having two values (usually denoted true and false)

Boolean expression, an expression in a programming language that produces a Boolean value when evaluated

Boolean function, a function that determines Boolean values or operators

Boolean model (probability theory), a model in stochastic geometry

Boolean network, a certain network consisting of a set of Boolean variables whose state is determined by other
variables in the network

Boolean processor, a 1-bit variables computing unit

Boolean satisability problem

Booles syllogistic is a logic invented by 19th-century British mathematician George Boole, which attempts to
incorporate the empty set.

List of Boolean algebra topics

List of pioneers in computer science

117.10 Notes
[1] O'Connor, John J.; Robertson, Edmund F., George Boole, MacTutor History of Mathematics archive, University of St
Andrews.

[2] Hill, p. 149; Google Books.

[3] Who is George Boole: the mathematician behind the Google doodle. Sydney Morning Herald. 2 November 2015.

[4] Boole, George (2012) [Originally published by Watts & Co., London, in 1952]. Rhees, Rush, ed. Studies in Logic and
Probability (Reprint ed.). Mineola, NY: Dover Publications. p. 273. ISBN 978-0-486-48826-4. Retrieved 27 October
2015.

[5] John Boole. Lincoln Boole Foundation. Retrieved 6 November 2015.

[6] Chisholm, Hugh, ed. (1911). "Boole, George". Encyclopdia Britannica (11th ed.). Cambridge University Press.

[7] Rhees, Rush. (1954) George Boole as Student and Teacher. By Some of His Friends and Pupils, Proceedings of the
Royal Irish Academy. Section A: Mathematical and Physical Sciences. Vol. 57. Royal Irish Academy

[8] Society for the History of Astronomy, Lincolnshire.

[9] Edwards, A. W. F. Bromhead, Sir Edward Thomas French. Oxford Dictionary of National Biography (online ed.). Oxford
University Press. doi:10.1093/ref:odnb/37224. (Subscription or UK public library membership required.)

[10] Burris, Stanley. George Boole. Stanford Encyclopedia of Philosophy.

[11] A Selection of Papers relative to the County of Lincoln, read before the Lincolnshire Topographical Society, 1841-1842.
Printed by W. and B. Brooke, High-Street, Lincoln, 1843.

[12] Hill, p. 172 note 2; Google Books.

[13] Hill, p. 130 note 1; Google Books.

[14] Hill, p. 148; Google Books.

[15] Ronald Calinger, Vita mathematica: historical research and integration with teaching (1996), p. 292; Google Books.

[16] Hill, p. 138 note 4; Google Books.

[17] Keith Awards 1827-1890. Canmbridge Journals Online. Retrieved 29 November 2014.

[18] Ivor Grattan-Guinness, Grard Bornet, George Boole: Selected manuscripts on logic and its philosophy (1997), p. xiv;
Google Books.

[19] Dublin City Quick Search: Buildings of Ireland: National Inventory of Architectural Heritage.

[20] Barker, Tommy (13 June 2015). Have a look inside the home of UCC maths professor George Boole. Irish Examiner.
Retrieved 6 November 2015.

[21] Stanford Encyclopedia of Philosophy

[22] Death-His Life-- George Boole 200.

[23] A list of Booles memoirs and papers is in the Catalogue of Scientic Memoirs published by the Royal Society, and in the
supplementary volume on dierential equations, edited by Isaac Todhunter. To the Cambridge Mathematical Journal and
its successor, the Cambridge and Dublin Mathematical Journal, Boole contributed 22 articles in all. In the third and fourth
series of the Philosophical Magazine are found 16 papers. The Royal Society printed six memoirs in the Philosophical
Transactions, and a few other memoirs are to be found in the Transactions of the Royal Society of Edinburgh and of the
Royal Irish Academy, in the Bulletin de l'Acadmie de St-Ptersbourg for 1862 (under the name G. Boldt, vol. iv. pp.
198215), and in Crelles Journal. Also included is a paper on the mathematical basis of logic, published in the Mechanics
Magazine in 1848.

[24] Andrei Nikolaevich Kolmogorov, Adolf Pavlovich Yushkevich (editors), Mathematics of the 19th Century: function theory
according to Chebyshev, ordinary dierential equations, calculus of variations, theory of nite dierences (1998), pp. 1302;
Google Books.

[25] Jeremy Gray, Karen Hunger Parshall, Episodes in the History of Modern Algebra (18001950) (2007), p. 66; Google
Books.

[26] George Boole, The Mathematical Analysis of Logic, Being an Essay towards a Calculus of Deductive Reasoning (London,
England: Macmillan, Barclay, & Macmillan, 1847).

[27] George Boole, A treatsie on dierential equations (1859), Internet Archive.

[28] Boole, George (1857). On the Comparison of Transcendents, with Certain Applications to the Theory of Denite Inte-
grals. Philosophical Transactions of the Royal Society of London. 147: 745803. JSTOR 108643. doi:10.1098/rstl.1857.0037.

[29] Cima, Joseph A.; Matheson, Alec; Ross, William T. (2005). The Cauchy transform. Quadrature domains and their
applications. Oper. Theory Adv. Appl. 156. Basel: Birkhuser. pp. 79111. MR 2129737.

[30] John Corcoran, Aristotles Prior Analytics and Booles Laws of Thought, History and Philosophy of Logic, vol. 24 (2003),
pp. 261288.

[31] Grattan-Guinness, I. Boole, George. Oxford Dictionary of National Biography (online ed.). Oxford University Press.
doi:10.1093/ref:odnb/2868. (Subscription or UK public library membership required.)

[32] Witold Marciszewski (editor), Dictionary of Logic as Applied in the Study of Language (1981), pp. 1945.

[33] Corcoran, John (2003). Aristotles Prior Analytics and Booles Laws of Thought. History and Philosophy of Logic,
24: 261288. Reviewed by Risto Vilkko. Bulletin of Symbolic Logic, 11(2005) 8991. Also by Marcel Guillaume,
Mathematical Reviews 2033867 (2004m:03006).

[34] George Boole. 1854/2003. The Laws of Thought, facsimile of 1854 edition, with an introduction by John Corcoran.
Bualo: Prometheus Books (2003). Reviewed by James van Evra in Philosophy in Review.24 (2004) 167169.

[35] Andrei Nikolaevich Kolmogorov, Adolf Pavlovich Yushkevich, Mathematics of the 19th Century: mathematical logic, al-
gebra, number theory, probability theory (2001), pp. 15 (note 15)16; Google Books.

[36] Burris, Stanley. The Algebra of Logic Tradition. Stanford Encyclopedia of Philosophy.

[37] Boole, George (1854). An Investigation of the Laws of Thought. London: Walton & Maberly. pp. 265275.

[38] P. J. Brown, Pascal from Basic, Addison-Wesley, 1982. ISBN 0-201-13789-5, page 72

[39] Boole Centre for Research in Informatics.

[40] Ivor Grattan-Guinness, Grard Bornet, George Boole: Selected manuscripts on logic and its philosophy (1997), p. xlvi;
Google Books.

[41] Chapter XVI, p. 167, section 6 of A treatise on probability, volume 4: The central error in his system of probability
arises out of his giving two inconsistent denitions of 'independence' (2) He rst wins the readers acquiescence by giving a
perfectly correct denition: Two events are said to be independent when the probability of either of them is unaected by
our expectation of the occurrence or failure of the other. (3) But a moment later he interprets the term in quite a dierent
sense; for, according to Booles second denition, we must regard the events as independent unless we are told either that
they must concur or that they cannot concur. That is to say, they are independent unless we know for certain that there is,
in fact, an invariable connection between them. The simple events, x, y, z, will be said to be conditioned when they are not
free to occur in every possible combination; in other words, when some compound event depending upon them is precluded
from occurring. ... Simple unconditioned events are by denition independent. (1) In fact as long as xz is possible, x and
z are independent. This is plainly inconsistent with Booles rst denition, with which he makes no attempt to reconcile it.
The consequences of his employing the term independence in a double sense are far-reaching. For he uses a method of
reduction which is only valid when the arguments to which it is applied are independent in the rst sense, and assumes that
it is valid if they are independent in second sense. While his theorems are true if all propositions or events involved are
independent in the rst sense, they are not true, as he supposes them to be, if the events are independent only in the second
sense.

[42] ZETETIC GLEANINGS.

[43] That dissertation has since been hailed as one of the most signicant masters theses of the 20th century. To all intents and
purposes, its use of binary code and Boolean algebra paved the way for the digital circuitry that is crucial to the operation
of modern computers and telecommunications equipment."Emerson, Andrew (8 March 2001). Claude Shannon. United
Kingdom: The Guardian.

[44] George Boole 200 - George Boole Bicentenary Celebrations.

[45] Cork University Press

[46] Boolean logic meets Victorian gothic in leafy Cork suburb.

[47] 1902 Britannica article by Jevons; online text.

[48] James Gasser, A Boole Anthology: recent and classical studies in the logic of George Boole (2000), p. 5; Google Books.

[49] Gasser, p. 10; Google Books.

[50] Boole, George (1851). The Claims of Science, especially as founded in its relations to human nature; a lecture. Retrieved 4
March 2012.

[51] Boole, George (1855). The Social Aspect of Intellectual Culture: an address delivered in the Cork Athenum, May 29th,
1855 : at the soire of the Cuvierian Society. George Purcell & Co. Retrieved 4 March 2012.

[52] International Association for Semiotic Studies; International Council for Philosophy and Humanistic Studies; International
Social Science Council (1995). A tale of two amateurs. Semiotica, Volume 105. Mouton. p. 56. MacHales biography
calls George Boole 'an agnostic deist'. Both Booles classication of 'religious philosophies as monistic, dualistic, and
trinitarian left little doubt about their preference for 'the unity religion', whether Judaic or Unitarian.

[53] International Association for Semiotic Studies; International Council for Philosophy and Humanistic Studies; International
Social Science Council (1996). Semiotica, Volume 105. Mouton. p. 17. MacHale does not repress this or other evidence
of the Booles nineteenth-century beliefs and practices in the paranormal and in religious mysticism. He even concedes
that George Booles many distinguished contributions to logic and mathematics may have been motivated by his distinctive
religious beliefs as an agnostic deist and by an unusual personal sensitivity to the suerings of other people.

[54] Boole, George. Studies in Logic and Probability. 2002. Courier Dover Publications. p. 201-202

[55] Boole, George. Studies in Logic and Probability. 2002. Courier Dover Publications. p. 451

[56] Some-Side of a Scientic Mind (2013). pp. 112-3. The University Magazine, 1878. London: Forgotten Books. (Original
work published 1878)

[57] Concluding remarks of his treatise of Clarke and Spinoza, as found in Boole, George (2007). An Investigation of the
Laws of Thought. Cosimo, Inc. Chap . XIII. p. 217-218. (Original work published 1854)

[58] Boole, George (1851). The claims of science, especially as founded in its relations to human nature; a lecture, Volume 15.
p. 24

[59] Jonardon Ganeri (2001), Indian Logic: a reader, Routledge, p. 7, ISBN 0-7007-1306-9; Google Books.

[60] Boole, Mary Everest Indian Thought and Western Science in the Nineteenth Century, Boole, Mary Everest Collected Works
eds. E. M. Cobham and E. S. Dummer, London, Daniel 1931 pp.947967

[61] Grattan-Guinness and Bornet, p. 16; Google Books.

[62] Family and Genealogy - His Life George Boole 200. Georgeboole.com. Retrieved 7 March 2016.

[63] Smothers In Orchard in The Los Angeles Times v. 27 February 1909.

[64] `My Right To Die, Woman Kills Self in The Washington Times v. 28 May 1908 (PDF); Mrs. Mary Hinton A Suicide in The
New York Times v. 29 May 1908 (PDF).

117.11 References
University College Cork, George Boole 200 Bicentenary Celebration, GeorgeBoole.com.
Chisholm, Hugh, ed. (1911). "Boole, George". Encyclopædia Britannica (11th ed.). Cambridge University Press.
Ivor Grattan-Guinness, The Search for Mathematical Roots 1870–1940. Princeton University Press. 2000.
Francis Hill (1974), Victorian Lincoln; Google Books.
Des MacHale, George Boole: His Life and Work. Boole Press. 1985.
Des MacHale, The Life and Work of George Boole: A Prelude to the Digital Age (new edition). Cork University
Press. 2014
Stephen Hawking, God Created the Integers. Running Press, Philadelphia. 2007.

117.12 External links


Roger Parsons article on Boole
George Boole: A 200-Year View by Stephen Wolfram.
Works by George Boole at Project Gutenberg
Works by or about George Boole at Internet Archive

George Boole's work as first Professor of Mathematics in University College, Cork, Ireland

George Boole website


Author prole in the database zbMATH
Chapter 118

Goodman–Nguyen–van Fraassen algebra

A Goodman–Nguyen–van Fraassen algebra is a type of conditional event algebra (CEA) that embeds the standard Boolean algebra of unconditional events in a larger algebra which is itself Boolean. The goal (as with all CEAs) is to equate the conditional probability P(A ∩ B) / P(A) with the probability of a conditional event, P(A → B), for more than just trivial choices of A, B, and P.

118.1 Construction of the algebra


Given set Ω, which is the set of possible outcomes, and set F of subsets of Ω (so that F is the set of possible events), consider an infinite Cartesian product of the form E1 × E2 × ⋯ × En × Ω × Ω × Ω × ⋯, where E1, E2, ⋯ En are members of F. Such a product specifies the set of all infinite sequences whose first element is in E1, whose second element is in E2, ⋯, and whose nth element is in En, and all of whose elements are in Ω. Note that one such product is the one where E1 = E2 = ⋯ = En = Ω, i.e., the set Ω × Ω × Ω × ⋯. Designate this set as Ω̂; it is the set of all infinite sequences whose elements are in Ω.
A new Boolean algebra is now formed, whose elements are subsets of Ω̂. To begin with, any event which was formerly represented by subset A of Ω is now represented by Â = A × Ω × Ω × ⋯.
Additionally, however, for events A and B, let the conditional event A → B be represented as the following infinite union of disjoint sets:

[(A ∩ B) × Ω × Ω × Ω × ⋯] ∪
[A′ × (A ∩ B) × Ω × Ω × Ω × ⋯] ∪
[A′ × A′ × (A ∩ B) × Ω × Ω × Ω × ⋯] ∪ ⋯.

The motivation for this representation of conditional events will be explained shortly. Note that the construction can be iterated; A and B can themselves be conditional events.
Intuitively, unconditional event A ought to be representable as conditional event Ω → A. And indeed: because Ω ∩ A = A and Ω′ = ∅, the infinite union representing Ω → A reduces to A × Ω × Ω × ⋯.
Let F̂ now be a set of subsets of Ω̂, which contains representations of all events in F and is otherwise just large enough to be closed under construction of conditional events and under the familiar Boolean operations. F̂ is a Boolean algebra of conditional events which contains a Boolean algebra corresponding to the algebra of ordinary events.

118.2 Denition of the extended probability function


Corresponding to the newly constructed logical objects, called conditional events, is a new definition of a probability function, P̂, based on a standard probability function P:


P̂(E1 × E2 × ⋯ × En × Ω × Ω × ⋯) = P(E1)P(E2) ⋯ P(En)P(Ω)P(Ω)P(Ω) ⋯ = P(E1)P(E2) ⋯ P(En), since P(Ω) = 1.

It follows from the definition of P̂ that P̂(Â) = P(A). Thus P̂ = P over the domain of P.

118.3 P(A → B) = P(B|A)


Now comes the insight which motivates all of the preceding work. For P, the original probability function, P(A′) = 1 − P(A), and therefore P(B|A) = P(A ∩ B) / P(A) can be rewritten as P(A ∩ B) / [1 − P(A′)]. The factor 1 / [1 − P(A′)], however, can in turn be represented by its Maclaurin series expansion, 1 + P(A′) + P(A′)² + ⋯. Therefore, P(B|A) = P(A ∩ B) + P(A′)P(A ∩ B) + P(A′)²P(A ∩ B) + ⋯.
The right side of the equation is exactly the expression for the probability P̂ of A → B, just defined as a union of carefully chosen disjoint sets. Thus that union can be taken to represent the conditional event A → B, such that P̂(A → B) = P(B|A) for any choice of A, B, and P. But since P̂ = P over the domain of P, the hat notation is optional. So long as the context is understood (i.e., conditional event algebra), one can write P(A → B) = P(B|A), with P now being the extended probability function.
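A quick numerical sanity check of this identity is easy to run. The following Python sketch (with purely illustrative values for P(A) and P(A ∩ B), not taken from the source) sums a truncated version of the series above:

# Illustrative check: P(A ∩ B) * sum_k P(A')^k should equal P(B|A) = P(A ∩ B) / P(A).
p_A = 0.4            # hypothetical P(A)
p_AB = 0.1           # hypothetical P(A ∩ B)

series = sum(p_AB * (1 - p_A) ** k for k in range(200))   # truncated Maclaurin series
print(series)        # ~0.25
print(p_AB / p_A)    # 0.25, i.e. P(B|A)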

118.4 References
Bamber, Donald, I. R. Goodman, and H. T. Nguyen. 2004. "Deduction from Conditional Knowledge." Soft Computing 8: 247–255.
Goodman, I. R., R. P. S. Mahler, and H. T. Nguyen. 1999. "What is conditional event algebra and why should you care?" SPIE Proceedings, Vol. 3720.
Chapter 119

Head normal form

In the lambda calculus, a term is in beta normal form if no beta reduction is possible.[1] A term is in beta-eta
normal form if neither a beta reduction nor an eta reduction is possible. A term is in head normal form if there is
no beta-redex in head position.

119.1 Beta reduction


In the lambda calculus, a beta redex is a term of the form

(λx.A)M

A redex r is in head position in a term t, if t has the following shape:

λx1 ⋯ λxn.(λx.A) M1 M2 ⋯ Mm, where n ≥ 0 and m ≥ 1, and where (λx.A) M1 is the redex r.

A beta reduction is an application of the following rewrite rule to a beta redex contained in a term t :

(λx.A)M → A[x := M]

where A[x := M ] is the result of substituting the term M for the variable x in the term A .
A head beta reduction is a beta reduction applied in head position, that is, of the following form:

λx1 ⋯ λxn.(λx.A) M1 M2 ⋯ Mm → λx1 ⋯ λxn.A[x := M1] M2 ⋯ Mm, where n ≥ 0 and m ≥ 1.

Any other reduction is an internal beta reduction.


A normal form is a term that does not contain any beta redex, i.e. that cannot be further reduced. More generally, a
head normal form is a term that does not contain a beta redex in head position, i.e. that cannot be further reduced
by a head reduction. Head normal forms are the terms of the following shape:

λx1 ⋯ λxn.x M1 M2 ⋯ Mm, where x is a variable (if considering the simple lambda calculus), n ≥ 0 and m ≥ 0.

A head normal form is not always a normal form, because the applied arguments Mj need not be normal. However,
the converse is true: any normal form is also a head normal form. In fact, the normal forms are exactly the head
normal forms in which the subterms Mj are themselves normal forms. This gives an inductive syntactic description
of normal forms.
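For concreteness, the shape just described can be checked mechanically. The following is a minimal sketch (not from the source), assuming lambda terms encoded as nested tuples ('var', x), ('lam', x, body) and ('app', f, a):

# Minimal sketch (assumed encoding): decide whether a term is in (head) normal form.
def spine(term):
    # Strip the leading lambdas, then peel applications to expose the head.
    while term[0] == 'lam':
        term = term[2]
    args = []
    while term[0] == 'app':
        args.append(term[2])
        term = term[1]
    return term, list(reversed(args))        # head, [M1, ..., Mm]

def is_head_normal_form(term):
    head, _ = spine(term)
    return head[0] == 'var'                  # the head is a variable, not a redex

def is_normal_form(term):
    head, args = spine(term)
    return head[0] == 'var' and all(is_normal_form(a) for a in args)

# Example: (lambda x. x) is a normal form; ((lambda x. x) y) is not even a head normal form.
identity = ('lam', 'x', ('var', 'x'))
print(is_normal_form(identity))                               # True
print(is_head_normal_form(('app', identity, ('var', 'y'))))   # False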


119.2 Reduction strategies


In general, a given term can contain several redexes, hence several different beta reductions could be applied. We may specify a strategy to choose which redex to reduce.

Normal-order reduction is the strategy in which one continually applies the rule for beta reduction in head position until no more such reductions are possible. At that point, the resulting term is in head normal form. One then continues applying head reduction in the subterms Mj, from left to right. Stated otherwise, normal-order reduction is the strategy that always reduces the leftmost outermost redex first.

By contrast, in applicative order reduction, one applies the internal reductions rst, and then only applies the
head reduction when no more internal reductions are possible.

Normal-order reduction is complete, in the sense that if a term has a head normal form, then normal-order reduction will eventually reach it. By the syntactic description of normal forms above, this entails the same statement for a fully normal form (this is the standardization theorem). By contrast, applicative order reduction may not terminate, even when the term has a normal form. For example, using applicative order reduction, the following sequence of reductions is possible:

(λx.z)((λw.www)(λw.www))
→ (λx.z)((λw.www)(λw.www)(λw.www))
→ (λx.z)((λw.www)(λw.www)(λw.www)(λw.www))
→ (λx.z)((λw.www)(λw.www)(λw.www)(λw.www)(λw.www))
→ ...

But using normal-order reduction, the same starting point reduces quickly to normal form:

(λx.z)((λw.www)(λw.www))
→ z
Sinot's director strings is one method by which the computational complexity of beta reduction can be optimized.

119.3 See also


Lambda calculus

Normal form (disambiguation)

119.4 References
[1] Beta normal form. Encyclopedia. TheFreeDictionary.com. Retrieved 18 November 2013.
Chapter 120

Herbrandization

The Herbrandization of a logical formula (named after Jacques Herbrand) is a construction that is dual to the Skolemization of a formula. Thoralf Skolem had considered the Skolemizations of formulas in prenex form as part of his proof of the Löwenheim–Skolem theorem (Skolem 1920). Herbrand worked with this dual notion of Herbrandization, generalized to apply to non-prenex formulas as well, in order to prove Herbrand's theorem (Herbrand 1930).
The resulting formula is not necessarily equivalent to the original one. As with Skolemization, which only preserves satisfiability, Herbrandization, being Skolemization's dual, preserves validity: the resulting formula is valid if and only if the original one is.

120.1 Denition and examples


Let F be a formula in the language of first-order logic. We may assume that F contains no variable that is bound by two different quantifier occurrences, and that no variable occurs both bound and free. (That is, F could be relettered to ensure these conditions, in such a way that the result is an equivalent formula.)
The Herbrandization of F is then obtained as follows:

First, replace any free variables in F by constant symbols.


Second, delete all quantifiers on variables that are either (1) universally quantified and within an even number of negation signs, or (2) existentially quantified and within an odd number of negation signs.
Finally, replace each such variable v with a function symbol f_v(x1, . . . , xk), where x1, . . . , xk are the variables that are still quantified, and whose quantifiers govern v.

For instance, consider the formula F := ∀y∃x[R(y, x) ∧ ¬∃z S(x, z)]. There are no free variables to replace. The variables y, z are the kind we consider for the second step, so we delete the quantifiers ∀y and ∃z. Finally, we then replace y with a constant c_y (since there were no other quantifiers governing y), and we replace z with a function symbol f_z(x):

F^H = ∃x[R(c_y, x) ∧ ¬S(x, f_z(x))].

The Skolemization of a formula is obtained similarly, except that in the second step above, we would delete quantifiers on variables that are either (1) existentially quantified and within an even number of negations, or (2) universally quantified and within an odd number of negations. Thus, considering the same F from above, its Skolemization would be:

F^S = ∀y[R(y, f_x(y)) ∧ ¬∃z S(f_x(y), z)].

To understand the significance of these constructions, see Herbrand's theorem or the Löwenheim–Skolem theorem.
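For concreteness, the steps above can be mechanized for closed formulas built from ¬, ∧ and ∨. The following is a minimal sketch (not from the source), assuming formulas encoded as nested tuples with all bound variable names distinct; the constructors 'pred' and 'fun' and the helper names are illustrative choices, and a function symbol applied to an empty list of governing variables plays the role of a constant such as c_y:

# Minimal Herbrandization sketch under the assumptions stated above.
def herbrandize(f, positive=True, governing=()):
    kind = f[0]
    if kind == 'pred':                       # atomic formula: nothing to change
        return f
    if kind == 'not':                        # a negation sign flips the polarity
        return ('not', herbrandize(f[1], not positive, governing))
    if kind in ('and', 'or'):
        return (kind,
                herbrandize(f[1], positive, governing),
                herbrandize(f[2], positive, governing))
    v, body = f[1], f[2]                     # kind is 'forall' or 'exists'
    delete = (kind == 'forall') == positive  # universal/even or existential/odd negations
    if delete:
        term = ('fun', 'f_' + v, governing)  # nullary term acts as the constant c_v
        return substitute(herbrandize(body, positive, governing), v, term)
    return (kind, v, herbrandize(body, positive, governing + (v,)))

def substitute(f, v, term):
    # Replace the deleted variable v by term inside predicate argument lists.
    if f[0] == 'pred':
        return ('pred', f[1], tuple(term if a == v else a for a in f[2]))
    if f[0] == 'not':
        return ('not', substitute(f[1], v, term))
    if f[0] in ('forall', 'exists'):
        return (f[0], f[1], substitute(f[2], v, term))
    return (f[0], substitute(f[1], v, term), substitute(f[2], v, term))

# The example above: F = forall y exists x [ R(y, x) and not exists z S(x, z) ]
F = ('forall', 'y', ('exists', 'x',
     ('and', ('pred', 'R', ('y', 'x')),
             ('not', ('exists', 'z', ('pred', 'S', ('x', 'z')))))))
print(herbrandize(F))
# ('exists', 'x', ('and', ('pred', 'R', (('fun', 'f_y', ()), 'x')),
#                         ('not', ('pred', 'S', ('x', ('fun', 'f_z', ('x',)))))))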


120.2 See also


Predicate functor logic

120.3 References
Skolem, T. "Logico-combinatorial investigations in the satisfiability or provability of mathematical propositions: A simplified proof of a theorem by L. Löwenheim and generalizations of the theorem". (In van Heijenoort 1967, 252–63.)

Herbrand, J. "Investigations in proof theory: The properties of true propositions". (In van Heijenoort 1967, 525–81.)

van Heijenoort, J. From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931. Harvard University Press, 1967.
Chapter 122

Homogeneous relation

"Relation (mathematics)" redirects here. For a more general notion of relation, see finitary relation. For a more combinatorial viewpoint, see theory of relations. For other uses, see Relation (disambiguation).

In mathematics, a binary relation on a set A is a collection of ordered pairs of elements of A. In other words, it is a subset of the Cartesian product A² = A × A. More generally, a binary relation between two sets A and B is a subset of A × B. The terms correspondence, dyadic relation and 2-place relation are synonyms for binary relation.
An example is the "divides" relation between the set of prime numbers P and the set of integers Z, in which every prime p is associated with every integer z that is a multiple of p (but with no integer that is not a multiple of p). In this relation, for instance, the prime 2 is associated with numbers that include −4, 0, 6, 10, but not 1 or 9; and the prime 3 is associated with numbers that include 0, 6, and 9, but not 4 or 13.
Binary relations are used in many branches of mathematics to model concepts like "is greater than", "is equal to", and
divides in arithmetic, "is congruent to" in geometry, is adjacent to in graph theory, is orthogonal to in linear
algebra and many more. The concept of function is dened as a special kind of binary relation. Binary relations are
also heavily used in computer science.
A binary relation is the special case n = 2 of an n-ary relation R ⊆ A1 × ⋯ × An, that is, a set of n-tuples where the jth component of each n-tuple is taken from the jth domain Aj of the relation. An example for a ternary relation on Z × Z × Z is "... lies between ... and ...", containing e.g. the triples (5,2,8), (5,8,2), and (−4,9,−7).
In some systems of axiomatic set theory, relations are extended to classes, which are generalizations of sets. This
extension is needed for, among other things, modeling the concepts of is an element of or is a subset of in set
theory, without running into logical inconsistencies such as Russells paradox.

122.1 Formal denition

A binary relation R between arbitrary sets (or classes) X (the set of departure) and Y (the set of destination or codomain) is specified by its graph G, which is a subset of the Cartesian product X × Y. The binary relation R itself is usually identified with its graph G, but some authors define it as an ordered triple (X, Y, G), which is otherwise referred to as a correspondence.[1]
The statement (x, y) ∈ G is read "x is R-related to y", and is denoted by xRy or R(x, y). The latter notation corresponds to viewing R as the characteristic function of the subset G of X × Y, i.e. R(x, y) equals 1 (true) if (x, y) ∈ G, and 0 (false) otherwise.
The order of the elements in each pair of G is important: if a ≠ b, then aRb and bRa can be true or false, independently of each other. Resuming the above example, the prime 3 divides the integer 9, but 9 doesn't divide 3.
The domain of R is the set of all x such that xRy for at least one y. The range of R is the set of all y such that xRy for at least one x. The field of R is the union of its domain and its range.[2][3][4]


122.1.1 Is a relation more than its graph?


According to the definition above, two relations with identical graphs but different domains or different codomains are considered different. For example, if G = {(1, 2), (1, 3), (2, 7)}, then (Z, Z, G), (R, N, G), and (N, R, G) are three distinct relations, where Z is the set of integers, R is the set of real numbers and N is the set of natural numbers.
Especially in set theory, binary relations are often defined as sets of ordered pairs, identifying binary relations with their graphs. The domain of a binary relation R is then defined as the set of all x such that there exists at least one y such that (x, y) ∈ R, the range of R is defined as the set of all y such that there exists at least one x such that (x, y) ∈ R, and the field of R is the union of its domain and its range.[2][3][4]
A special case of this difference in points of view applies to the notion of function. Many authors insist on distinguishing between a function's codomain and its range. Thus, a single rule, like mapping every real number x to x², can lead to distinct functions f: R → R and f: R → R⁺, depending on whether the images under that rule are understood to be reals or, more restrictively, non-negative reals. But others view functions as simply sets of ordered pairs with unique first components. This difference in perspectives does raise some nontrivial issues. As an example, the former camp considers surjectivity (or being onto) as a property of functions, while the latter sees it as a relationship that functions may bear to sets.
Either approach is adequate for most uses, provided that one attends to the necessary changes in language, notation, and the definitions of concepts like restrictions, composition, inverse relation, and so on. The choice between the two definitions usually matters only in very formal contexts, like category theory.

122.1.2 Example
Example: Suppose there are four objects {ball, car, doll, gun} and four persons {John, Mary, Ian, Venus}. Suppose
that John owns the ball, Mary owns the doll, and Venus owns the car. Nobody owns the gun and Ian owns nothing.
Then the binary relation is owned by is given as

R = ({ball, car, doll, gun}, {John, Mary, Ian, Venus}, {(ball, John), (doll, Mary), (car, Venus)}).

Thus the rst element of R is the set of objects, the second is the set of persons, and the last element is a set of ordered
pairs of the form (object, owner).
The pair (ball, John), denoted by ballRJohn, means that the ball is owned by John.
Two different relations could have the same graph. For example: the relation

({ball, car, doll, gun}, {John, Mary, Venus}, {(ball, John), (doll, Mary), (car, Venus)})

is different from the previous one as everyone is an owner. But the graphs of the two relations are the same.
Nevertheless, R is usually identified or even defined as G(R), and "an ordered pair (x, y) ∈ G(R)" is usually denoted as "(x, y) ∈ R".[5]

122.2 Special types of binary relations


Some important types of binary relations R between two sets X and Y are listed below. To emphasize that X and Y can be different sets, some authors call such binary relations heterogeneous.[6][7] (A small programmatic check of several of the properties below is sketched after these lists.)
Uniqueness properties:

injective (also called left-unique[8]): for all x and z in X and y in Y it holds that if xRy and zRy then x = z. For example, the green relation in the diagram is injective, but the red relation is not, as it relates e.g. both x = −5 and z = +5 to y = 25.
functional (also called univalent[9] or right-unique[8] or right-definite[10]): for all x in X, and y and z in Y it holds that if xRy and xRz then y = z; such a binary relation is called a partial function. Both relations in the picture are functional. An example for a non-functional relation can be obtained by rotating the red graph clockwise by 90 degrees, i.e. by considering the relation x = y², which relates e.g. x = 25 to both y = −5 and z = +5.
438 CHAPTER 122. HOMOGENEOUS RELATION

Example relations between real numbers. Red: y = x². Green: y = 2x + 20.

one-to-one (also written 1-to-1): injective and functional. The green relation is one-to-one, but the red is not.

Totality properties (only definable if the sets of departure X resp. destination Y are specified; not to be confused with a total relation):

left-total:[8] for all x in X there exists a y in Y such that xRy. For example, R is left-total when it is a function or a multivalued function. Note that this property, although sometimes also referred to as total, is different from the definition of total in the next section. Both relations in the picture are left-total. The relation x = y², obtained from the above rotation, is not left-total, as it doesn't relate, e.g., x = −14 to any real number y.
surjective (also called right-total[8] or onto): for all y in Y there exists an x in X such that xRy. The green relation is surjective, but the red relation is not, as it doesn't relate any real number x to e.g. y = −14.

Uniqueness and totality properties:

A function: a relation that is functional and left-total. Both the green and the red relation are functions.

An injective function: a relation that is injective, functional, and left-total.

A surjective function or surjection: a relation that is functional, left-total, and right-total.

A bijection: a surjective one-to-one or surjective injective function is said to be bijective, also known as
one-to-one correspondence.[11] The green relation is bijective, but the red is not.
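The sketch promised above: a minimal, illustrative way (not from the source) to test these four properties for a finite relation given as a set of pairs together with explicit sets X and Y, applied here to the "is owned by" example from earlier. The set names and helper functions are assumptions made for the example:

# Property checks for a finite heterogeneous relation R ⊆ X × Y (given as a set of pairs).
def injective(R, X, Y):    # same second component forces same first component
    return all(x == z for (x, y) in R for (z, w) in R if y == w)

def functional(R, X, Y):   # same first component forces same second component
    return all(y == w for (x, y) in R for (z, w) in R if x == z)

def left_total(R, X, Y):   # every x in X is related to some y in Y
    return all(any((x, y) in R for y in Y) for x in X)

def surjective(R, X, Y):   # every y in Y is related to some x in X
    return all(any((x, y) in R for x in X) for y in Y)

# "is owned by": X = objects, Y = persons.
X = {'ball', 'car', 'doll', 'gun'}
Y = {'John', 'Mary', 'Ian', 'Venus'}
R = {('ball', 'John'), ('doll', 'Mary'), ('car', 'Venus')}
print(injective(R, X, Y), functional(R, X, Y))    # True True
print(left_total(R, X, Y), surjective(R, X, Y))   # False False (the gun, resp. Ian, is unrelated)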

122.2.1 Difunctional
Less commonly encountered is the notion of difunctional (or regular) relation, defined as a relation R such that R = R R⁻¹ R.[12]
To understand this notion better, it helps to consider a relation as mapping every element x ∈ X to a set xR = { y ∈ Y | xRy }.[12] This set is sometimes called the successor neighborhood of x in R; one can define the predecessor neighborhood analogously.[13] Synonymous terms for these notions are afterset and respectively foreset.[6]
A difunctional relation can then be equivalently characterized as a relation R such that wherever x₁R and x₂R have a non-empty intersection, then these two sets coincide; formally x₁R ∩ x₂R ≠ ∅ implies x₁R = x₂R.[12]
As examples, any function or any functional (right-unique) relation is difunctional; the converse doesn't hold. If one
considers a relation R from set to itself (X = Y), then if R is both transitive and symmetric (i.e. a partial equivalence
relation), then it is also difunctional.[14] The converse of this latter statement also doesn't hold.
A characterization of difunctional relations, which also explains their name, is to consider two functions f: A → C and g: B → C and then define the following set which generalizes the kernel of a single function as joint kernel: ker(f, g) = { (a, b) ∈ A × B | f(a) = g(b) }. Every difunctional relation R ⊆ A × B arises as the joint kernel of two functions f: A → C and g: B → C for some set C.[15]
In automata theory, the term rectangular relation has also been used to denote a difunctional relation. This terminology is justified by the fact that when represented as a boolean matrix, the columns and rows of a difunctional relation can be arranged in such a way as to present rectangular blocks of true on the (asymmetric) main diagonal.[16] Other authors however use the term "rectangular" to denote any heterogeneous relation whatsoever.[7]

122.3 Relations over a set


If X = Y then we simply say that the binary relation is over X, or that it is an endorelation over X.[17] In computer
science, such a relation is also called a homogeneous (binary) relation.[7][17][18] Some types of endorelations are
widely studied in graph theory, where they are known as simple directed graphs permitting loops.
The set of all binary relations Rel(X) on a set X is the set 2^(X × X), which is a Boolean algebra augmented with the involution of mapping of a relation to its inverse relation. For the theoretical explanation see Relation algebra.
Some important properties that a binary relation R over a set X may have are:

reflexive: for all x in X it holds that xRx. For example, "greater than or equal to" (≥) is a reflexive relation but "greater than" (>) is not.

irreflexive (or strict): for all x in X it holds that not xRx. For example, > is an irreflexive relation, but ≥ is not.

coreflexive relation: for all x and y in X it holds that if xRy then x = y.[19] An example of a coreflexive relation is the relation on integers in which each odd number is related to itself and there are no other relations. The equality relation is the only example of a both reflexive and coreflexive relation, and any coreflexive relation is a subset of the identity relation.

The previous 3 alternatives are far from being exhaustive; e.g. the red relation y = x² from the above picture is neither irreflexive, nor coreflexive, nor reflexive, since it contains the pair (0,0), and (2,4), but not (2,2), respectively.

symmetric: for all x and y in X it holds that if xRy then yRx. Is a blood relative of is a symmetric relation,
because x is a blood relative of y if and only if y is a blood relative of x.

antisymmetric: for all x and y in X, if xRy and yRx then x = y. For example, ≥ is anti-symmetric; so is >, but vacuously (the condition in the definition is always false).[20]

asymmetric: for all x and y in X, if xRy then not yRx. A relation is asymmetric if and only if it is both anti-symmetric and irreflexive.[21] For example, > is asymmetric, but ≥ is not.

transitive: for all x, y and z in X it holds that if xRy and yRz then xRz. For example, "is ancestor of" is transitive, while "is parent of" is not. A transitive relation is irreflexive if and only if it is asymmetric.[22]

total: for all x and y in X it holds that xRy or yRx (or both). This definition for total is different from left total in the previous section. For example, ≥ is a total relation.

trichotomous: for all x and y in X exactly one of xRy, yRx or x = y holds. For example, > is a trichotomous relation, while the relation "divides" on natural numbers is not.[23]

Right Euclidean: for all x, y and z in X it holds that if xRy and xRz, then yRz.

Left Euclidean: for all x, y and z in X it holds that if yRx and zRx, then yRz.

Euclidean: A Euclidean relation is both left and right Euclidean. Equality is a Euclidean relation because if
x=y and x=z, then y=z.

serial: for all x in X, there exists y in X such that xRy. "Is greater than" is a serial relation on the integers. But it is not a serial relation on the positive integers, because there is no y in the positive integers such that 1 > y.[24] However, "is less than" is a serial relation on the positive integers, the rational numbers and the real numbers. Every reflexive relation is serial: for a given x, choose y = x. A serial relation can be equivalently characterized as every element having a non-empty successor neighborhood (see the previous section for the definition of this notion). Similarly an inverse serial relation is a relation in which every element has non-empty predecessor neighborhood.[13]

set-like (or local): for every x in X, the class of all y such that yRx is a set. (This makes sense only if relations
on proper classes are allowed.) The usual ordering < on the class of ordinal numbers is set-like, while its inverse
> is not.

A relation that is reflexive, symmetric, and transitive is called an equivalence relation. A relation that is symmetric, transitive, and serial is also reflexive. A relation that is only symmetric and transitive (without necessarily being reflexive) is called a partial equivalence relation.
A relation that is reflexive, antisymmetric, and transitive is called a partial order. A partial order that is total is called a total order, simple order, linear order, or a chain.[25] A linear order where every nonempty subset has a least element is called a well-order.
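For finite relations these properties can be tested directly. A minimal illustrative sketch (not from the source), using the "divides" relation on {1, 2, 3} as an assumed test case:

# Property checks for a finite relation R over a set X, given as a set of pairs.
def reflexive(R, X):     return all((x, x) in R for x in X)
def symmetric(R, X):     return all((y, x) in R for (x, y) in R)
def antisymmetric(R, X): return all(x == y for (x, y) in R if (y, x) in R)
def transitive(R, X):    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

def is_equivalence(R, X):
    return reflexive(R, X) and symmetric(R, X) and transitive(R, X)

def is_partial_order(R, X):
    return reflexive(R, X) and antisymmetric(R, X) and transitive(R, X)

X = {1, 2, 3}
divides = {(x, y) for x in X for y in X if y % x == 0}
print(is_partial_order(divides, X), is_equivalence(divides, X))   # True False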

122.4 Operations on binary relations


If R, S are binary relations over X and Y, then each of the following is a binary relation over X and Y:

Union: R ∪ S ⊆ X × Y, defined as R ∪ S = { (x, y) | (x, y) ∈ R or (x, y) ∈ S }. For example, ≥ is the union of > and =.

Intersection: R ∩ S ⊆ X × Y, defined as R ∩ S = { (x, y) | (x, y) ∈ R and (x, y) ∈ S }.

If R is a binary relation over X and Y, and S is a binary relation over Y and Z, then the following is a binary relation
over X and Z: (see main article composition of relations)

Composition: S ∘ R, also denoted R ; S (or R ∘ S), defined as S ∘ R = { (x, z) | there exists y ∈ Y, such that (x, y) ∈ R and (y, z) ∈ S }. The order of R and S in the notation S ∘ R, used here agrees with the standard notational order for composition of functions. For example, the composition "is mother of" ∘ "is parent of" yields "is maternal grandparent of", while the composition "is parent of" ∘ "is mother of" yields "is grandmother of".

A relation R on sets X and Y is said to be contained in a relation S on X and Y if R is a subset of S, that is, if x R y always implies x S y. In this case, if R and S disagree, R is also said to be smaller than S. For example, > is contained in ≥.
If R is a binary relation over X and Y, then the following is a binary relation over Y and X:

Inverse or converse: R⁻¹, defined as R⁻¹ = { (y, x) | (x, y) ∈ R }. A binary relation over a set is equal to its inverse if and only if it is symmetric. See also duality (order theory). For example, "is less than" (<) is the inverse of "is greater than" (>).

If R is a binary relation over X, then each of the following is a binary relation over X (a small sketch of these closure operations on finite relations follows this list):

Reflexive closure: R^=, defined as R^= = { (x, x) | x ∈ X } ∪ R or the smallest reflexive relation over X containing R. This can be proven to be equal to the intersection of all reflexive relations containing R.
Reflexive reduction: R^≠, defined as R^≠ = R \ { (x, x) | x ∈ X } or the largest irreflexive relation over X contained in R.
Transitive closure: R^+, defined as the smallest transitive relation over X containing R. This can be seen to be equal to the intersection of all transitive relations containing R.
Reflexive transitive closure: R^*, defined as R^* = (R^+)^=, the smallest preorder containing R.
Reflexive transitive symmetric closure: R^≡, defined as the smallest equivalence relation over X containing R.
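The sketch referred to above: an illustrative (not authoritative) implementation of composition and transitive closure for finite relations represented as sets of pairs. The names and the example relation are assumptions made for the demonstration:

# Composition and transitive closure for finite relations (sets of (x, y) pairs).
def compose(S, R):
    # S o R = { (x, z) | there is some y with (x, y) in R and (y, z) in S }
    return {(x, z) for (x, y) in R for (w, z) in S if y == w}

def transitive_closure(R):
    closure = set(R)
    while True:
        enlarged = closure | compose(closure, closure)
        if enlarged == closure:
            return closure            # fixed point reached: smallest transitive superset
        closure = enlarged

parent = {('alice', 'bob'), ('bob', 'carol')}
print(transitive_closure(parent))     # also contains ('alice', 'carol'), i.e. "is ancestor of"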

122.4.1 Complement
If R is a binary relation over X and Y, then the following is a binary relation over X and Y as well:

The complement S is defined as x S y if not x R y. For example, on real numbers, ≤ is the complement of >.

The complement of the inverse is the inverse of the complement.


If X = Y, the complement has the following properties:

If a relation is symmetric, the complement is too.


The complement of a reflexive relation is irreflexive and vice versa.
The complement of a strict weak order is a total preorder and vice versa.

The complement of the inverse has these same properties.

122.4.2 Restriction
The restriction of a binary relation on a set X to a subset S is the set of all pairs (x, y) in the relation for which x and
y are in S.
If a relation is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, total, trichotomous, a partial order, total order, strict weak order, total preorder (weak order), or an equivalence relation, its restrictions are too.
However, the transitive closure of a restriction is a subset of the restriction of the transitive closure, i.e., in general
not equal. For example, restricting the relation "x is parent of y" to females yields the relation "x is mother of
the woman y"; its transitive closure doesn't relate a woman with her paternal grandmother. On the other hand, the

transitive closure of is parent of is is ancestor of"; its restriction to females does relate a woman with her paternal
grandmother.
Also, the various concepts of completeness (not to be confused with being total) do not carry over to restrictions.
For example, on the set of real numbers a property of the relation "≤" is that every non-empty subset S of R with an upper bound in R has a least upper bound (also called supremum) in R. However, for a set of rational numbers this supremum is not necessarily rational, so the same property does not hold on the restriction of the relation "≤" to the set of rational numbers.
The left-restriction (right-restriction, respectively) of a binary relation between X and Y to a subset S of its domain
(codomain) is the set of all pairs (x, y) in the relation for which x (y) is an element of S.

122.4.3 Algebras, categories, and rewriting systems


Various operations on binary endorelations can be treated as giving rise to an algebraic structure, known as relation algebra. It should not be confused with relational algebra which deals in finitary relations (and in practice also finite and many-sorted).
For heterogeneous binary relations, a category of relations arises.[7]
Despite their simplicity, binary relations are at the core of an abstract computation model known as an abstract
rewriting system.

122.5 Sets versus classes


Certain mathematical relations, such as equal to, member of, and subset of, cannot be understood to be binary
relations as defined above, because their domains and codomains cannot be taken to be sets in the usual systems of
axiomatic set theory. For example, if we try to model the general concept of equality as a binary relation =, we
must take the domain and codomain to be the class of all sets, which is not a set in the usual set theory.
In most mathematical contexts, references to the relations of equality, membership and subset are harmless because
they can be understood implicitly to be restricted to some set in the context. The usual work-around to this problem
is to select a "large enough" set A, that contains all the objects of interest, and work with the restriction =_A instead of =. Similarly, the "subset of" relation ⊆ needs to be restricted to have domain and codomain P(A) (the power set of a specific set A): the resulting set relation can be denoted ⊆_A. Also, the "member of" relation needs to be restricted to have domain A and codomain P(A) to obtain a binary relation ∈_A that is a set. Bertrand Russell has shown that assuming ∈ to be defined on all sets leads to a contradiction in naive set theory.
Another solution to this problem is to use a set theory with proper classes, such as NBG or Morse–Kelley set theory, and allow the domain and codomain (and so the graph) to be proper classes: in such a theory, equality, membership, and subset are binary relations without special comment. (A minor modification needs to be made to the concept of the ordered triple (X, Y, G), as normally a proper class cannot be a member of an ordered tuple; or of course one can identify the function with its graph in this context.)[26] With this definition one can for instance define a function relation between every set and its power set.

122.6 The number of binary relations


The number of distinct binary relations on an n-element set is 2^(n²) (sequence A002416 in the OEIS):
Notes:

The number of irreflexive relations is the same as that of reflexive relations.

The number of strict partial orders (irreflexive transitive relations) is the same as that of partial orders.
The number of strict weak orders is the same as that of total preorders.
The total orders are the partial orders that are also total preorders. The number of preorders that are neither
a partial order nor a total preorder is, therefore, the number of preorders, minus the number of partial orders,
minus the number of total preorders, plus the number of total orders: 0, 0, 0, 3, and 85, respectively.

the number of equivalence relations is the number of partitions, which is the Bell number.

The binary relations can be grouped into pairs (relation, complement), except that for n = 0 the relation is its own
complement. The non-symmetric ones can be grouped into quadruples (relation, complement, inverse, inverse com-
plement).
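For very small sets these counts can be verified by brute force; an illustrative sketch (not from the source) for a 3-element set, checking the total count, the equal numbers of reflexive and irreflexive relations, and the Bell number of equivalence relations:

# Brute-force enumeration of all binary relations on a 3-element set.
from itertools import combinations, chain

X = [0, 1, 2]
pairs = [(x, y) for x in X for y in X]

def subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def reflexive(R):   return all((x, x) in R for x in X)
def irreflexive(R): return all((x, x) not in R for x in X)
def symmetric(R):   return all((y, x) in R for (x, y) in R)
def transitive(R):  return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

relations = [set(R) for R in subsets(pairs)]
print(len(relations))                                        # 512 = 2**(3*3)
print(sum(reflexive(R) for R in relations))                  # 64
print(sum(irreflexive(R) for R in relations))                # 64
print(sum(reflexive(R) and symmetric(R) and transitive(R)
          for R in relations))                               # 5 = Bell number B_3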

122.7 Examples of common binary relations


order relations, including strict orders:

greater than
greater than or equal to
less than
less than or equal to
divides (evenly)
is a subset of

equivalence relations:

equality
is parallel to (for affine spaces)
is in bijection with
isomorphy

dependency relation, a finite, symmetric, reflexive relation.

independency relation, a symmetric, irreflexive relation which is the complement of some dependency relation.

122.8 See also


Confluence (term rewriting)

Hasse diagram

Incidence structure

Logic of relatives

Order theory

Triadic relation

122.9 Notes
[1] Encyclopedic dictionary of Mathematics. MIT. 2000. pp. 1330–1331. ISBN 0-262-59020-4.

[2] Suppes, Patrick (1972) [originally published by D. van Nostrand Company in 1960]. Axiomatic Set Theory. Dover. ISBN
0-486-61630-4.

[3] Smullyan, Raymond M.; Fitting, Melvin (2010) [revised and corrected republication of the work originally published in
1996 by Oxford University Press, New York]. Set Theory and the Continuum Problem. Dover. ISBN 978-0-486-47484-7.

[4] Levy, Azriel (2002) [republication of the work published by Springer-Verlag, Berlin, Heidelberg and New York in 1979].
Basic Set Theory. Dover. ISBN 0-486-42079-5.

[5] Megill, Norman (5 August 1993). df-br (Metamath Proof Explorer)". Retrieved 18 November 2016.

[6] Christodoulos A. Floudas; Panos M. Pardalos (2008). Encyclopedia of Optimization (2nd ed.). Springer Science & Business
Media. pp. 299300. ISBN 978-0-387-74758-3.

[7] Michael Winter (2007). Goguen Categories: A Categorical Approach to L-fuzzy Relations. Springer. pp. xxi. ISBN
978-1-4020-6164-6.

[8] Kilp, Knauer and Mikhalev: p. 3. The same four denitions appear in the following:

Peter J. Pahl; Rudolf Damrath (2001). Mathematical Foundations of Computational Engineering: A Handbook.
Springer Science & Business Media. p. 506. ISBN 978-3-540-67995-0.
Eike Best (1996). Semantics of Sequential and Parallel Programs. Prentice Hall. pp. 1921. ISBN 978-0-13-
460643-9.
Robert-Christoph Riemann (1999). Modelling of Concurrent Systems: Structural and Semantical Methods in the High
Level Petri Net Calculus. Herbert Utz Verlag. pp. 2122. ISBN 978-3-89675-629-9.

[9] Gunther Schmidt, 2010. Relational Mathematics. Cambridge University Press, ISBN 978-0-521-76268-7, Chapt. 5

[10] Mäs, Stephan (2007), "Reasoning on Spatial Semantic Integrity Constraints", Spatial Information Theory: 8th International Conference, COSIT 2007, Melbourne, Australia, September 19–23, 2007, Proceedings, Lecture Notes in Computer Science, 4736, Springer, pp. 285–302, doi:10.1007/978-3-540-74788-8_18

[11] Note that the use of correspondence here is narrower than as general synonym for binary relation.

[12] Chris Brink; Wolfram Kahl; Gunther Schmidt (1997). Relational Methods in Computer Science. Springer Science &
Business Media. p. 200. ISBN 978-3-211-82971-4.

[13] Yao, Y. (2004). Semantics of Fuzzy Sets in Rough Set Theory. Transactions on Rough Sets II. Lecture Notes in Computer
Science. 3135. p. 309. ISBN 978-3-540-23990-1. doi:10.1007/978-3-540-27778-1_15.

[14] William Craig (2006). Semigroups Underlying First-order Logic. American Mathematical Soc. p. 72. ISBN 978-0-8218-
6588-0.

[15] Gumm, H. P.; Zarrad, M. (2014). Coalgebraic Simulations and Congruences. Coalgebraic Methods in Computer Science.
Lecture Notes in Computer Science. 8446. p. 118. ISBN 978-3-662-44123-7. doi:10.1007/978-3-662-44124-4_7.

[16] Julius Richard Büchi (1989). Finite Automata, Their Algebras and Grammars: Towards a Theory of Formal Expressions. Springer Science & Business Media. pp. 35–37. ISBN 978-1-4613-8853-1.

[17] M. E. Müller (2012). Relational Knowledge Discovery. Cambridge University Press. p. 22. ISBN 978-0-521-19021-3.

[18] Peter J. Pahl; Rudolf Damrath (2001). Mathematical Foundations of Computational Engineering: A Handbook. Springer
Science & Business Media. p. 496. ISBN 978-3-540-67995-0.

[19] Fonseca de Oliveira, J. N., & Pereira Cunha Rodrigues, C. D. J. (2004). Transposing Relations: From Maybe Functions
to Hash Tables. In Mathematics of Program Construction (p. 337).

[20] Smith, Douglas; Eggen, Maurice; St. Andre, Richard (2006), A Transition to Advanced Mathematics (6th ed.), Brooks/Cole,
p. 160, ISBN 0-534-39900-2

[21] Nievergelt, Yves (2002), Foundations of Logic and Mathematics: Applications to Computer Science and Cryptography,
Springer-Verlag, p. 158.

[22] Flaška, V.; Ježek, J.; Kepka, T.; Kortelainen, J. (2007). Transitive Closures of Binary Relations I (PDF). Prague: School of Mathematics – Physics, Charles University. p. 1. Lemma 1.1 (iv). This source refers to asymmetric relations as "strictly antisymmetric".

[23] Since neither 5 divides 3, nor 3 divides 5, nor 3=5.

[24] Yao, Y.Y.; Wong, S.K.M. (1995). Generalization of rough sets using relationships between attribute values (PDF).
Proceedings of the 2nd Annual Joint Conference on Information Sciences: 3033..

[25] Joseph G. Rosenstein, Linear orderings, Academic Press, 1982, ISBN 0-12-597680-1, p. 4

[26] Tarski, Alfred; Givant, Steven (1987). A formalization of set theory without variables. American Mathematical Society. p.
3. ISBN 0-8218-1041-3.

122.10 References
M. Kilp, U. Knauer, A.V. Mikhalev, Monoids, Acts and Categories: with Applications to Wreath Products and
Graphs, De Gruyter Expositions in Mathematics vol. 29, Walter de Gruyter, 2000, ISBN 3-11-015248-7.
Gunther Schmidt, 2010. Relational Mathematics. Cambridge University Press, ISBN 978-0-521-76268-7.

122.11 External links


Hazewinkel, Michiel, ed. (2001) [1994], Binary relation, Encyclopedia of Mathematics, Springer Science+Business
Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Chapter 123

Horn clause

In mathematical logic and logic programming, a Horn clause is a logical formula of a particular rule-like form which gives it useful properties for use in logic programming, formal specification, and model theory. Horn clauses are named for the logician Alfred Horn, who first pointed out their significance in 1951.[1]

123.1 Denition
A Horn clause is a clause (a disjunction of literals) with at most one positive, i.e. unnegated, literal.
Conversely, a disjunction of literals with at most one negated literal is called a dual-Horn clause.
A Horn clause with exactly one positive literal is a definite clause; a definite clause with no negative literals is sometimes called a fact; and a Horn clause without a positive literal is sometimes called a goal clause (note that the empty clause consisting of no literals is a goal clause). These three kinds of Horn clauses are illustrated in the following propositional example:
In the non-propositional case, all variables[note 2] in a clause are implicitly universally quantified with scope the entire clause. Thus, for example:

¬human(X) ∨ mortal(X)

stands for:

∀X( ¬human(X) ∨ mortal(X) )

which is logically equivalent to:

∀X ( human(X) → mortal(X) )

123.1.1 Signicance

Horn clauses play a basic role in constructive logic and computational logic. They are important in automated theorem proving by first-order resolution, because the resolvent of two Horn clauses is itself a Horn clause, and the resolvent of a goal clause and a definite clause is a goal clause. These properties of Horn clauses can lead to greater efficiencies in proving a theorem (represented as the negation of a goal clause).
Propositional Horn clauses are also of interest in computational complexity. The problem of finding truth value assignments to make a conjunction of propositional Horn clauses true is a P-complete problem, solvable in linear time,[2] and sometimes called HORNSAT. (The unrestricted Boolean satisfiability problem is an NP-complete problem however.) Satisfiability of first-order Horn clauses is undecidable.
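The forward-chaining idea behind HORNSAT can be sketched in a few lines. The following illustrative version is not the linear-time Dowling–Gallier algorithm (which uses per-clause counters to avoid rescanning); it assumes clauses given as (body, head) pairs of atom names, with head None encoding a goal clause and an empty body encoding a fact:

# Naive fixed-point version of Horn-clause satisfiability checking.
def horn_sat(clauses):
    derived = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if set(body) <= derived:
                if head is None:
                    return False, derived      # a goal clause is violated: unsatisfiable
                if head not in derived:
                    derived.add(head)
                    changed = True
    return True, derived                       # derived atoms form the minimal model

# Example: fact human(socrates); rule human(socrates) -> mortal(socrates);
# goal clause denying mortal(socrates).  Unsatisfiability means the query is entailed.
clauses = [((), 'human(socrates)'),
           (('human(socrates)',), 'mortal(socrates)'),
           (('mortal(socrates)',), None)]
print(horn_sat(clauses))   # (False, {...}): the goal succeeds, i.e. mortal(socrates) is entailed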


123.2 Logic programming


Horn clauses are also the basis of logic programming, where it is common to write definite clauses in the form of an implication:

(p ∧ q ∧ ... ∧ t) → u

In fact, the resolution of a goal clause with a definite clause to produce a new goal clause is the basis of the SLD resolution inference rule, used to implement logic programming in the programming language Prolog.
In logic programming a definite clause behaves as a goal-reduction procedure. For example, the Horn clause written above behaves as the procedure:

to show u, show p and show q and ... and show t.

To emphasize this reverse use of the clause, it is often written in the reverse form:

u ← (p ∧ q ∧ ... ∧ t)

In Prolog this is written as:


u :- p, q, ..., t.
In logic programming and datalog, computation and query evaluation are performed by representing the negation of a problem to be solved as a goal clause. For example, the problem of solving the existentially quantified conjunction of positive literals:

∃X (p ∧ q ∧ ... ∧ t)

is represented by negating the problem (denying that it has a solution), and representing it in the logically equivalent form of a goal clause:

∀X (false ← p ∧ q ∧ ... ∧ t)

In Prolog this is written as:


:- p, q, ..., t.
Solving the problem amounts to deriving a contradiction, which is represented by the empty clause (or false). The
solution of the problem is a substitution of terms for the variables in the goal clause, which can be extracted from the
proof of contradiction. Used in this way, goal clauses are similar to conjunctive queries in relational databases, and
Horn clause logic is equivalent in computational power to a universal Turing machine.
The Prolog notation is actually ambiguous, and the term "goal clause" is sometimes also used ambiguously. The variables in a goal clause can be read as universally or existentially quantified, and deriving "false" can be interpreted either as deriving a contradiction or as deriving a successful solution of the problem to be solved.
Van Emden and Kowalski (1976) investigated the model theoretic properties of Horn clauses in the context of logic programming, showing that every set of definite clauses D has a unique minimal model M. An atomic formula A is logically implied by D if and only if A is true in M. It follows that a problem P represented by an existentially quantified conjunction of positive literals is logically implied by D if and only if P is true in M. The minimal model semantics of Horn clauses is the basis for the stable model semantics of logic programs.[3]

123.3 Notes
[1] Like in resolution theorem proving, the intuitive meanings "show φ" and "assume ¬φ" are synonymous (indirect proof); they both correspond to the same formula, viz. ¬φ. This way, a mechanical proving tool needs to maintain only one set of formulas (assumptions), rather than two sets (assumptions and (sub)goals).
[2] Formula constituent names differ between propositional and first-order logic: an atomic formula is just a propositional variable in the former logic, while it is composed of a predicate symbol and appropriately many terms, each of which may contain domain variables, in the latter logic. Domain variables are meant here.

123.4 References
[1] Horn, Alfred (1951). "On sentences which are true of direct unions of algebras". Journal of Symbolic Logic. 16 (1): 14–21. doi:10.2307/2268661.

[2] Dowling, William F.; Gallier, Jean H. (1984). "Linear-time algorithms for testing the satisfiability of propositional Horn formulae". Journal of Logic Programming. 1 (3): 267–284. doi:10.1016/0743-1066(84)90014-1.

[3] van Emden, M. H.; Kowalski, R. A. (1976). "The semantics of predicate logic as a programming language" (PDF). Journal of the ACM. 23 (4): 733–742. doi:10.1145/321978.321991.
Chapter 124

Hypostatic abstraction

Hypostatic abstraction in mathematical logic, also known as hypostasis or subjectal abstraction, is a formal
operation that transforms a predicate into a relation; for example Honey is sweet is transformed into Honey has
sweetness. The relation is created between the original subject and a new term that represents the property expressed
by the original predicate.
Hypostasis changes a propositional formula of the form X is Y to another one of the form X has the property of being
Y or X has Y-ness. The logical functioning of the second object Y-ness consists solely in the truth-values of those
propositions that have the corresponding abstract property Y as the predicate. The object of thought introduced in
this way may be called a hypostatic object and in some senses an abstract object and a formal object.
The above definition is adapted from the one given by Charles Sanders Peirce (CP 4.235, "The Simplest Mathematics" (1902), in Collected Papers, CP 4.227–323). As Peirce describes it, the main point about the formal operation of hypostatic abstraction, insofar as it operates on formal linguistic expressions, is that it converts an adjective or predicate into an extra subject, thus increasing by one the number of subject slots (called the arity or adicity) of the main predicate.
The transformation of honey is sweet into honey possesses sweetness can be viewed in several ways:

The grammatical trace of this hypostatic transformation is a process that extracts the adjective sweet from the
predicate is sweet, replacing it by a new, increased-arity predicate possesses, and as a by-product of the reaction,
as it were, precipitating out the substantive sweetness as a second subject of the new predicate.
The abstraction of hypostasis takes the concrete physical sense of taste found in honey is sweet and gives it formal
metaphysical characteristics in honey has sweetness.


124.1 See also

124.2 References
Peirce, C.S., Collected Papers of Charles Sanders Peirce, vols. 1–6 (1931–1935), Charles Hartshorne and Paul Weiss, eds., vols. 7–8 (1958), Arthur W. Burks, ed., Harvard University Press, Cambridge, MA.

124.3 External links


J. Jay Zeman, Peirce on Abstraction
Chapter 125

Hypothetical syllogism

In classical logic, hypothetical syllogism is a valid argument form which is a syllogism having a conditional statement
for one or both of its premises.[1][2]

If I do not wake up, then I cannot go to work.


If I cannot go to work, then I will not get paid.
Therefore, if I do not wake up, then I will not get paid.

In propositional logic, hypothetical syllogism is the name of a valid rule of inference[3][4] (often abbreviated HS and
sometimes also called the chain argument, chain rule, or the principle of transitivity of implication). Hypothetical
syllogism is one of the rules in classical logic that is not always accepted in certain systems of non-classical logic. The
rule may be stated:

P → Q, Q → R
∴ P → R
where the rule is that whenever instances of "P → Q", and "Q → R" appear on lines of a proof, "P → R" can be placed on a subsequent line.
Hypothetical syllogism is closely related and similar to disjunctive syllogism, in that it is also a type of syllogism, and also the name of a rule of inference.

125.1 Formal notation


The hypothetical syllogism rule may be written in sequent notation:

(P → Q), (Q → R) ⊢ (P → R)

where ⊢ is a metalogical symbol meaning that P → R is a syntactic consequence of P → Q and Q → R in some logical system;
and expressed as a truth-functional tautology or theorem of propositional logic:

((P → Q) ∧ (Q → R)) → (P → R)

where P, Q, and R are propositions expressed in some formal system.
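This tautology is easily confirmed by checking all eight truth assignments; a quick illustrative brute-force check:

# Verify ((P -> Q) and (Q -> R)) -> (P -> R) over every truth assignment.
from itertools import product

def implies(a, b):
    return (not a) or b

print(all(implies(implies(p, q) and implies(q, r), implies(p, r))
          for p, q, r in product([False, True], repeat=3)))   # True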

125.2 See also


Modus Ponens


Modus Tollens

Affirming the consequent


Denying the antecedent

Transitive relation

125.3 References
[1] Hurley

[2] Copi and Cohen

[3] Hurley

[4] Copi and Cohen

125.4 External links


Philosophy Index: Hypothetical Syllogism
Chapter 126

Idempotence

For the concept in matrix algebra, see Idempotent matrix.

Idempotence (UK: /ˌɪdɛmˈpoʊtəns/;[1] US: /ˌaɪdəmˈpoʊtəns/ EYE-dəm-POH-təns)[2] is the property of certain operations in mathematics and computer science, that can be applied multiple times without changing the result beyond the initial application. The concept of idempotence arises in a number of places in abstract algebra (in particular, in the theory of projectors and closure operators) and functional programming (in which it is connected to the property of referential transparency).
The term was introduced by Benjamin Peirce[3] in the context of elements of algebras that remain invariant when raised to a positive integer power, and literally means "(the quality of having) the same power", from idem + potence (same + power).
There are several meanings of idempotence, depending on what the concept is applied to:
There are several meanings of idempotence, depending on what the concept is applied to:

A unary operation (or function) is idempotent if, whenever it is applied twice to any value, it gives the same result as if it were applied once; i.e., f(f(x)) ≡ f(x). For example, the absolute value function, where abs(abs(x)) ≡ abs(x), is idempotent.

Given a binary operation, an idempotent element (or simply an idempotent) for the operation is a value for which the operation, when given that value for both of its operands, gives that value as the result. For example, the number 1 is an idempotent of multiplication: 1 × 1 = 1.

A binary operation is called idempotent if all elements are idempotent elements with respect to the operation. In other words, whenever it is applied to two equal values, it gives that value as the result. For example, the function giving the maximum value of two equal values is idempotent: max(x, x) ≡ x.

126.1 Denitions

126.1.1 Unary operation

A unary operation f , that is, a map from some set S into itself, is called idempotent if, for all x in S ,

f (f (x)) = f (x)

In particular, the identity function id_S, defined by id_S(x) = x, is idempotent, as is the constant function K_c, where c is an element of S, defined by K_c(x) = c.
An important class of idempotent functions is given by projections in a vector space. An example of a projection is the function π_xy defined by π_xy(x, y, z) = (x, y, 0), which projects an arbitrary point in 3D space to a point on the xy-plane, where the third coordinate (z) is equal to 0.


A unary operation f: S → S is idempotent if it maps each element of S to a fixed point of f. We can partition a set with n elements into k chosen fixed points and n − k non-fixed points, and then k^(n−k) is the number of different idempotent functions. Hence, taking into account all possible partitions,

∑_{k=0}^{n} C(n, k) · k^(n−k)

is the total number of possible idempotent functions on the set. The integer sequence of the number of idempotent
functions as given by the sum above for n = {0, 1, 2, . . . } starts with 1, 1, 3, 10, 41, 196, 1057, 6322, 41393, . . . .
(sequence A000248 in the OEIS)
Neither the property of being idempotent nor that of being not is preserved under composition of unary functions.[4] As an example for the former, f(x) = x mod 3 and g(x) = max(x, 5) are both idempotent, but f ∘ g is not,[5] although g ∘ f happens to be.[6] As an example for the latter, the negation function ¬ on truth values isn't idempotent, but ¬ ∘ ¬ is.
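The counting formula can be cross-checked against brute-force enumeration for small n; an illustrative sketch (not from the source):

# Compare brute-force enumeration of idempotent functions on {0, ..., n-1}
# with sum_k C(n, k) * k**(n-k), for small n (OEIS A000248).
from itertools import product
from math import comb

def brute_force(n):
    count = 0
    for f in product(range(n), repeat=n):          # f as a tuple: f[x] is f(x)
        if all(f[f[x]] == f[x] for x in range(n)):
            count += 1
    return count

def formula(n):
    return sum(comb(n, k) * k ** (n - k) for k in range(n + 1))

print([brute_force(n) for n in range(5)])   # [1, 1, 3, 10, 41]
print([formula(n) for n in range(5)])       # [1, 1, 3, 10, 41]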

126.1.2 Idempotent elements and binary operations

Given a binary operation · on a set S, an element x is said to be idempotent (with respect to ·) if:

x · x = x.

In particular an identity element of ·, if it exists, is idempotent with respect to the operation ·, and the same is
true of an absorbing element. The binary operation · itself is called idempotent if every element of S is idempotent.
That is, for all x ∈ S, where ∈ denotes set membership:

x · x = x.

For example, the operations of set union and set intersection are both idempotent, as are logical conjunction and
logical disjunction, and, in general, the meet and join operations of a lattice.
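For a concrete binary operation, the short sketch below (an illustrative helper, not a library function) lists the idempotent elements of multiplication modulo n, i.e. the x with x · x ≡ x (mod n).

```python
def idempotents_mod(n):
    """Elements x of Z/nZ with x * x == x (mod n), i.e. the idempotents
    of multiplication modulo n."""
    return [x for x in range(n) if (x * x) % n == x]

print(idempotents_mod(6))   # [0, 1, 3, 4]
print(idempotents_mod(10))  # [0, 1, 5, 6]
```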

126.1.3 Connections

The connections between the three notions are as follows.

The statement that the binary operation · on a set S is idempotent is equivalent to the statement that every
element of S is idempotent for ·.

The defining property of unary idempotence, f(f(x)) = f(x) for x in the domain of f, can equivalently be
rewritten as f ∘ f = f, using the binary operation of function composition denoted by ∘. Thus, the statement
that f is an idempotent unary operation on S is equivalent to the statement that f is an idempotent element with
respect to the function composition operation ∘ on functions from S to S.

126.2 Common examples

126.2.1 Functions

As mentioned above, the identity map and the constant maps are always idempotent maps. The absolute value function
of a real or complex argument, and the floor function of a real argument are idempotent.
The function that assigns to every subset U of some topological space X the closure of U is idempotent on the power
set P (X) of X . It is an example of a closure operator; all closure operators are idempotent functions.

The operation of subtracting the mean of a list of numbers from every number in the list is idempotent. For example,
consider the numbers 3, 6, 8, 8, and 10. The mean is (3 + 6 + 8 + 8 + 10)/5 = 35/5 = 7. Subtracting 7 from every number in
the list yields −4, −1, 1, 1, 3. The mean of that list is ((−4) + (−1) + 1 + 1 + 3)/5 = 0/5 = 0. Subtracting 0 from every
number in that list yields the same list.
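The mean-subtraction example can be replayed directly; the helper below is only an illustration of the claim.

```python
def subtract_mean(numbers):
    """Subtract the arithmetic mean of the list from every entry."""
    mean = sum(numbers) / len(numbers)
    return [x - mean for x in numbers]

data = [3, 6, 8, 8, 10]
once = subtract_mean(data)
twice = subtract_mean(once)
print(once)           # [-4.0, -1.0, 1.0, 1.0, 3.0]
print(once == twice)  # True: applying the operation again changes nothing
```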

126.2.2 Formal languages


The Kleene star and Kleene plus operators used to express repetition in formal languages are idempotent.

126.2.3 Idempotent ring elements


Main article: Idempotent (ring theory)

An idempotent element of a ring is, by definition, an element that is idempotent for the ring's multiplication.[7] That
is, an element a is idempotent precisely when a² = a.
Idempotent elements of rings yield direct decompositions of modules, and play a role in describing other homological
properties of the ring. While "idempotent" usually refers to the multiplication operation of a ring, there are rings in
which both operations are idempotent: Boolean algebras are such an example.

126.2.4 Other examples


In Boolean algebra, both the 'logical and' and the 'logical or' operations are idempotent. This implies that every
element of a Boolean algebra is idempotent with respect to both of these operations. Specifically, x ∧ x = x and
x ∨ x = x for all x. In linear algebra, projections are idempotent. In fact, the projections of a vector space are
exactly the idempotent elements of the ring of linear transformations of the vector space. After fixing a basis, it can
be shown that the matrix of a projection with respect to this basis is an idempotent matrix. An idempotent semiring
(also sometimes called a dioid) is a semiring whose addition (not multiplication) is idempotent. If both operations of
the semiring are idempotent, then the semiring is called doubly idempotent.[8]

126.3 Computer science meaning


See also: Referential transparency (computer science), Reentrant (subroutine), and Stable sort

In computer science, the term idempotent is used more comprehensively to describe an operation that will produce
the same results if executed once or multiple times.[9] This may have a different meaning depending on the context
in which it is applied. In the case of methods or subroutine calls with side effects, for instance, it means that the
modified state remains the same after the first call. In functional programming, though, an idempotent function is
one that has the property f(f(x)) = f(x) for any value x.[10]
This is a very useful property in many situations, as it means that an operation can be repeated or retried as often
as necessary without causing unintended effects. With non-idempotent operations, the algorithm may have to keep
track of whether the operation was already performed or not.
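A minimal sketch of this contrast, with entirely hypothetical function names: setting a value is idempotent and can be retried safely, while incrementing a counter is not.

```python
state = {"address": None, "orders": 0}

def set_address(new_address):
    """Idempotent: the final state is the same however many times it runs."""
    state["address"] = new_address

def place_order():
    """Not idempotent: each call changes the state further."""
    state["orders"] += 1

for _ in range(3):
    set_address("42 Main St")   # retried, e.g. after a timeout, without harm
    place_order()               # retrying here would place extra orders

print(state)  # {'address': '42 Main St', 'orders': 3}
```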

126.3.1 Examples
A function looking up a customer's name and address in a database is typically idempotent, since this will not cause
the database to change. Similarly, changing a customer's address is typically idempotent, because the final address
will be the same no matter how many times it is submitted. However, placing an order for a car for the customer is
typically not idempotent, since running the call several times will lead to several orders being placed. Canceling an
order is idempotent, because the order remains canceled no matter how many requests are made.
A composition of idempotent methods or subroutines, however, is not necessarily idempotent if a later method in
the sequence changes a value that an earlier method depends on; idempotence is not closed under composition. For
example, suppose the initial value of a variable is 3 and there is a sequence that reads the variable, then changes it to 5,
and then reads it again. Each step in the sequence is idempotent: both steps reading the variable have no side effects
and changing a variable to 5 will always have the same effect no matter how many times it is executed. Nonetheless,
executing the entire sequence once produces the output (3, 5), but executing it a second time produces the output (5,
5), so the sequence is not idempotent.[11]
In the Hypertext Transfer Protocol (HTTP), idempotence and safety are the major attributes that separate HTTP
verbs. Of the major HTTP verbs, GET, PUT, and DELETE should be implemented in an idempotent manner
according to the standard, but POST need not be.[11] GET retrieves a resource; PUT stores content at a resource;
and DELETE eliminates a resource. As in the example above, reading data usually has no side effects, so it is
idempotent (in fact nullipotent). Storing and deleting a given set of content are each usually idempotent as long as the
request specifies a location or identifier that uniquely identifies that resource and only that resource again in the future.
The PUT and DELETE operations with unique identifiers reduce to the simple case of assignment to an immutable
variable of either a value or the null-value, respectively, and are idempotent for the same reason; the end result is
always the same as the result of the initial execution.
Violation of the unique identification requirement in storage or deletion typically causes violation of idempotence.
For example, storing or deleting a given set of content without specifying a unique identifier: POST requests, which
do not need to be idempotent, often do not contain unique identifiers, so the creation of the identifier is delegated
to the receiving system, which then creates a corresponding new record. Similarly, PUT and DELETE requests with
nonspecific criteria may result in different outcomes depending on the state of the system; for example, a request to
delete the most recent record. In each case, subsequent executions will further modify the state of the system, so they
are not idempotent.
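To make the HTTP discussion concrete without a network, here is a toy in-memory resource store (purely hypothetical; it is not an HTTP library and only mimics the semantics described above): PUT and DELETE against a client-chosen identifier are idempotent, whereas POST, which lets the receiving system mint the identifier, is not.

```python
import itertools

class ResourceStore:
    """Toy model of the verb semantics discussed above."""
    def __init__(self):
        self.resources = {}
        self._ids = itertools.count(1)

    def put(self, resource_id, content):
        # Idempotent: repeating the call leaves the same final state.
        self.resources[resource_id] = content

    def delete(self, resource_id):
        # Idempotent: the resource stays absent no matter how often this runs.
        self.resources.pop(resource_id, None)

    def post(self, content):
        # Not idempotent: every call creates a new resource with a fresh id.
        new_id = next(self._ids)
        self.resources[new_id] = content
        return new_id

store = ResourceStore()
for _ in range(3):
    store.put("user/42", {"name": "Ada"})   # same state after 1 or 3 calls
    store.post({"name": "Ada"})             # three distinct resources created
store.delete("user/42")
store.delete("user/42")                     # second delete changes nothing
print(sorted(store.resources))              # [1, 2, 3]
```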
In Event Stream Processing, idempotence refers to the ability of a system to produce the same outcome, even if an
event or message is received more than once.
In a load-store architecture, instructions that might possibly cause a page fault are idempotent. So if a page fault
occurs, the OS can load the page from disk and then simply re-execute the faulted instruction. In a processor where
such instructions are not idempotent, dealing with page faults is much more complex.
When reformatting output, pretty-printing is expected to be idempotent. In other words, if the output is already
pretty, there should be nothing to do for the pretty-printer.

126.4 Applied examples


Applied examples that many people could encounter in their day-to-day lives include elevator call buttons and crosswalk
buttons.[12] The initial activation of the button moves the system into a requesting state, until the request is
satisfied. Subsequent activations of the button between the initial activation and the request being satisfied have no
effect.

126.5 See also


Closure operator
Fixed point (mathematics)
Idempotent of a code
Nilpotent
Idempotent matrix
Idempotent relation – a generalization of idempotence to binary relations
List of matrices
Pure function
Referential transparency
Iterated function

Biordered set

Involution (mathematics)

126.6 References
[1] idempotence. Oxford English Dictionary (3rd ed.). Oxford University Press. 2010.

[2] idempotent. Merriam-Webster.

[3] Polcino & Sehgal (2002), p. 127.

[4] If f and g commute, i.e. if f ∘ g = g ∘ f, then idempotency of both f and g implies that of f ∘ g, since (f ∘ g) ∘ (f ∘ g) = (f ∘
f) ∘ (g ∘ g) = f ∘ g, using the associativity of composition.

[5] e.g. f(g(7)) = f(7) = 1, but f(g(1)) = f(5) = 2 ≠ 1

[6] also showing that commutation of f and g is not a necessary condition for idempotency preservation

[7] See Hazewinkel et al. (2004), p. 2.

[8] Gondran & Minoux. Graphs, dioids and semirings. Springer, 2008, p. 34

[9] Rodriguez, Alex. RESTful Web services: The basics. IBM developerWorks. IBM. Retrieved 24 April 2013.

[10] http://foldoc.org/idempotent

[11] IETF, Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content. See also HyperText Transfer Protocol.

[12] https://web.archive.org/web/20110523081716/http://www.nclabor.com/elevator/geartrac.pdf For example, this design specification includes a detailed algorithm for when elevator cars will respond to subsequent calls for service.

126.7 Further reading


idempotent at FOLDOC

Goodearl, K. R. (1991), von Neumann regular rings (2 ed.), Malabar, FL: Robert E. Krieger Publishing Co.
Inc., pp. xviii+412, ISBN 0-89464-632-X, MR 1150975 (93m:16006)

Gunawardena, Jeremy (1998), "An introduction to idempotency", in Gunawardena, Jeremy, Idempotency.
Based on a workshop, Bristol, UK, October 3–7, 1994 (PDF), Cambridge: Cambridge University Press, pp.
1–49, Zbl 0898.16032
Hazewinkel, Michiel, ed. (2001) [1994], Idempotent, Encyclopedia of Mathematics, Springer Science+Business
Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Hazewinkel, Michiel; Gubareni, Nadiya; Kirichenko, V. V. (2004), Algebras, rings and modules. vol. 1,
Mathematics and its Applications, 575, Dordrecht: Kluwer Academic Publishers, pp. xii+380, ISBN 1-4020-
2690-0, MR 2106764 (2006a:16001)

Lam, T. Y. (2001), A rst course in noncommutative rings, Graduate Texts in Mathematics, 131 (2 ed.), New
York: Springer-Verlag, pp. xx+385, ISBN 0-387-95183-0, MR 1838439 (2002c:16001)

Lang, Serge (1993), Algebra (Third ed.), Reading, Mass.: Addison-Wesley, ISBN 978-0-201-55540-0, Zbl
0848.13001 p. 443
Peirce, Benjamin. Linear Associative Algebra 1870.

Polcino Milies, César; Sehgal, Sudarshan K. (2002), An introduction to group rings, Algebras and Applications,
1, Dordrecht: Kluwer Academic Publishers, pp. xii+371, ISBN 1-4020-0238-6, MR 1896125 (2003b:16026)
Chapter 127

Idempotency of entailment

Idempotency of entailment is a property of logical systems that states that one may derive the same consequences
from many instances of a hypothesis as from just one. This property can be captured by a structural rule called
contraction, and in such systems one may say that entailment is idempotent if and only if contraction is an admissible
rule.
Rule of Contraction: from

A, C, C ⊢ B

is derived

A, C ⊢ B.

Or in sequent calculus notation,

Γ, C, C ⊢ B
-----------
Γ, C ⊢ B

127.1 See also


No-deleting theorem

Chapter 128

Material equivalence

"Iff" redirects here. For other uses, see IFF (disambiguation).

"↔" redirects here. It is not to be confused with bidirectional traffic.

Logical symbols representing iff

In logic and related fields such as mathematics and philosophy, if and only if (shortened as iff) is a biconditional logical
connective between statements.
In that it is biconditional, the connective can be likened to the standard material conditional ("only if", equal to "if
... then") combined with its reverse ("if"); hence the name. The result is that the truth of either one of the connected
statements requires the truth of the other (i.e. either both statements are true, or both are false). It is controversial
whether the connective thus defined is properly rendered by the English "if and only if", with its pre-existing meaning.
There is nothing to stop one from stipulating that we may read this connective as "only if and if", although this may
lead to confusion.
In writing, phrases commonly used, with debatable propriety, as alternatives to "P if and only if Q" include Q is
necessary and sufficient for P, P is equivalent (or materially equivalent) to Q (compare material implication), P precisely
if Q, P precisely (or exactly) when Q, P exactly in case Q, and P just in case Q.[1] Many authors regard "iff" as unsuitable
in formal writing;[2] others use it freely.[3]
In logic formulae, logical symbols are used instead of these phrases; see the discussion of notation.

128.1 Denition

The truth table of P ↔ Q is as follows:[4][5]

P	Q	P ↔ Q
T	T	T
T	F	F
F	T	F
F	F	T

Note that it is equivalent to that produced by the XNOR gate, and opposite to that produced by the XOR gate.
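The table can be generated mechanically; the sketch below treats material equivalence as equality of truth values (exactly the XNOR gate) and prints all four rows.

```python
from itertools import product

def iff(p, q):
    """Material equivalence: true exactly when P and Q have the same truth value."""
    return p == q

for p, q in product([True, False], repeat=2):
    print(p, q, iff(p, q))
# True  True  True
# True  False False
# False True  False
# False False True
```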

128.2 Usage

128.2.1 Notation

The corresponding logical symbols are "↔", "⇔", and "≡", and sometimes "iff". These are usually treated as equivalent.
However, some texts of mathematical logic (particularly those on first-order logic, rather than propositional
logic) make a distinction between these, in which the first, ↔, is used as a symbol in logic formulas, while ⇔ is used
in reasoning about those logic formulas (e.g., in metalogic). In Łukasiewicz's notation, it is the prefix symbol 'E'.
Another term for this logical connective is exclusive nor.


128.2.2 Proofs

In most logical systems, one proves a statement of the form "P iff Q" by proving "if P, then Q" and "if Q, then P".
Proving this pair of statements sometimes leads to a more natural proof, since there are not obvious conditions in
which one would infer a biconditional directly. An alternative is to prove the disjunction "(P and Q) or (not-P and
not-Q)", which itself can be inferred directly from either of its disjuncts; that is, because "iff" is truth-functional, "P
iff Q" follows if P and Q have both been shown true, or both false.

128.2.3 Origin of iff and pronunciation

Usage of the abbreviation "iff" first appeared in print in John L. Kelley's 1955 book General Topology.[6] Its invention
is often credited to Paul Halmos, who wrote "I invented 'iff,' for 'if and only if', but I could never believe I was really
its first inventor."[7]
It is somewhat unclear how "iff" was meant to be pronounced. In current practice, the single 'word' "iff" is almost
always read as the four words "if and only if". However, in the preface of General Topology, Kelley suggests that it
should be read differently: "In some cases where mathematical content requires 'if and only if' and euphony demands
something less I use Halmos' 'iff'". The authors of one discrete mathematics textbook suggest:[8] "Should you need
to pronounce iff, really hang on to the 'ff' so that people hear the difference from 'if'", implying that iff could be
pronounced as /ɪfː/.

128.3 Distinction from "if" and "only if"


1. "Madison will eat the fruit if it is an apple." (equivalent to "Only if Madison will eat the fruit, it is an
apple" or "Madison will eat the fruit ← fruit is an apple")

This states simply that Madison will eat fruits that are apples. It does not, however, exclude the
possibility that Madison might also eat bananas or other types of fruit. All that is known for certain
is that she will eat any and all apples that she happens upon. That the fruit is an apple is a sucient
condition for Madison to eat the fruit.

2. "Madison will eat the fruit only if it is an apple." (equivalent to "If Madison will eat the fruit, then it is
an apple" or "Madison will eat the fruit → fruit is an apple")

This states that the only fruit Madison will eat is an apple. It does not, however, exclude the possi-
bility that Madison will refuse an apple if it is made available, in contrast with (1), which requires
Madison to eat any available apple. In this case, that a given fruit is an apple is a necessary condition
for Madison to be eating it. It is not a sucient condition since Madison might not eat all the apples
she is given.

3. "Madison will eat the fruit if and only if it is an apple." (equivalent to "Madison will eat the fruit ↔ fruit
is an apple")

This statement makes it clear that Madison will eat all and only those fruits that are apples. She
will not leave any apple uneaten, and she will not eat any other type of fruit. That a given fruit is an
apple is both a necessary and a sucient condition for Madison to eat the fruit.

Sufficiency is the converse of necessity. That is to say, given P→Q (i.e. if P then Q), P would be a sufficient condition
for Q, and Q would be a necessary condition for P. Also, given P→Q, it is true that ¬Q→¬P (where ¬ is the negation
operator, i.e. "not"). This means that the relationship between P and Q, established by P→Q, can be expressed in the
following, all equivalent, ways:

P is sufficient for Q
Q is necessary for P
¬Q is sufficient for ¬P
¬P is necessary for ¬Q

As an example, take (1), above, which states P→Q, where P is "the fruit in question is an apple" and Q is "Madison
will eat the fruit in question". The following are four equivalent ways of expressing this very relationship:

If the fruit in question is an apple, then Madison will eat it.
Only if Madison will eat the fruit in question, is it an apple.
If Madison will not eat the fruit in question, then it is not an apple.
Only if the fruit in question is not an apple, will Madison not eat it.

So we see that (2), above, can be restated in the form of if...then as "If Madison will eat the fruit in question, then it
is an apple"; taking this in conjunction with (1), we find that (3) can be stated as "If the fruit in question is an apple,
then Madison will eat it; and if Madison will eat the fruit, then it is an apple."

128.4 In terms of Euler diagrams


[Euler diagram: A is a proper subset of B. A number is in A only if it is in B; a number is in B if it is in A.]
[Euler diagram: C is a subset, but not a proper subset, of B. A number is in B if and only if it is in C, and a number is in C if and only if it is in B.]

Euler diagrams show logical relationships among events, properties, and so forth. "P only if Q", "if P then Q", and
"P→Q" all mean that P is a subset, either proper or improper, of Q. "P if Q", "if Q then P", and "Q→P" all mean that
Q is a proper or improper subset of P. "P if and only if Q" and "Q if and only if P" both mean that the sets P and Q
are identical to each other.

128.5 More general usage


Iff is used outside the field of logic, wherever logic is applied, especially in mathematical discussions. It has the same
meaning as above: it is an abbreviation for if and only if, indicating that one statement is both necessary and sufficient
for the other. This is an example of mathematical jargon. (However, as noted above, if, rather than iff, is more often
used in statements of definition.)
The elements of X are all and only the elements of Y is used to mean: for any z in the domain of discourse, z is in
X if and only if z is in Y.

128.6 See also


Covariance

Logical biconditional

Logical equality
Necessary and sucient condition

Polysyllogism

128.7 Footnotes
[1] Weisstein, Eric W. "Iff." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/Iff.html

[2] E.g. Daepp, Ulrich; Gorkin, Pamela (2011), Reading, Writing, and Proving: A Closer Look at Mathematics, Undergraduate
Texts in Mathematics, Springer, p. 52, ISBN 9781441994790, While it can be a real time-saver, we don't recommend it
in formal writing.

[3] Rothwell, Edward J.; Cloud, Michael J. (2014), Engineering Writing by Design: Creating Formal Documents of Lasting
Value, CRC Press, p. 98, ISBN 9781482234312, It is common in mathematical writing.

[4] p <=> q. Wolfram|Alpha

[5] "If and only if", UHM Department of Mathematics: "Theorems which have the form 'P if and only if Q' are much prized in
mathematics. They give what are called necessary and sufficient conditions, and give completely equivalent and hopefully
interesting new ways to say exactly the same thing."

[6] General Topology, reissue ISBN 978-0-387-90125-1

[7] Nicholas J. Higham (1998). Handbook of writing for the mathematical sciences (2nd ed.). SIAM. p. 24. ISBN 978-0-
89871-420-3.

[8] Maurer, Stephen B.; Ralston, Anthony (2005). Discrete Algorithmic Mathematics (3rd ed.). Boca Raton, Fla.: CRC Press.
p. 60. ISBN 1568811667.

128.8 External links


Language Log: Just in Case

Southern California Philosophy for philosophy graduate students: Just in Case


Chapter 129

Implicant

In Boolean logic, an implicant is a covering (sum term or product term) of one or more minterms in a sum of
products (or maxterms in product of sums) of a Boolean function. Formally, a product term P in a sum of products
is an implicant of the Boolean function F if P implies F. More precisely:

P implies F (and thus is an implicant of F) if F also takes the value 1 whenever P equals 1.

where

F is a Boolean function of n variables.


P is a product term.

This means that P ≤ F with respect to the natural ordering of the Boolean space. For instance, the function

f (x, y, z, w) = xy + yz + w

is implied by xy , by xyz , by xyzw , by w and many others; these are the implicants of f .

129.1 Prime implicant


A prime implicant of a function is an implicant that cannot be covered by a more general (more reduced, meaning
with fewer literals) implicant. W. V. Quine defined a prime implicant of F to be an implicant that is minimal, that
is, such that the removal of any literal from P results in a non-implicant for F. Essential prime implicants (aka core prime
implicants) are prime implicants that cover an output of the function that no combination of other prime implicants
is able to cover.
Using the example above, one can easily see that while xy (and others) is a prime implicant, xyz and xyzw are not.
From the latter, multiple literals can be removed to make it prime:

x , y and z can be removed, yielding w .


Alternatively, z and w can be removed, yielding xy .
Finally, x and w can be removed, yielding yz .

The process of removing literals from a Boolean term is called expanding the term. Expanding by one literal doubles
the number of input combinations for which the term is true (in binary Boolean algebra). Using the example function
above, we may expand xyz to xy or to yz without changing the cover of f .[1]
The sum of all prime implicants of a Boolean function is called its complete sum, minimal covering sum, or Blake
canonical form.
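A brute-force check of these claims is straightforward. The sketch below (illustrative helper names; restricted to product terms of uncomplemented variables, which suffices for the example function as printed) enumerates the implicants of f(x, y, z, w) = xy + yz + w and filters out those that are not prime.

```python
from itertools import combinations, product

VARS = ("x", "y", "z", "w")

def f(x, y, z, w):
    """The example function f(x, y, z, w) = xy + yz + w."""
    return (x and y) or (y and z) or w

def is_implicant(term):
    """A product term (here: a set of un-complemented variables) implies f
    if f is 1 on every assignment that makes all literals of the term 1."""
    return all(
        f(**dict(zip(VARS, values)))
        for values in product([0, 1], repeat=4)
        if all(dict(zip(VARS, values))[v] for v in term)
    )

terms = [set(c) for r in range(1, 5) for c in combinations(VARS, r)]
implicants = [t for t in terms if is_implicant(t)]
primes = [t for t in implicants
          if not any(s < t and is_implicant(s) for s in implicants)]

print([sorted(t) for t in implicants])
print([sorted(t) for t in primes])   # [['w'], ['x', 'y'], ['y', 'z']]
```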


129.2 See also


QuineMcCluskey algorithm

Karnaugh map
Petrick's method

129.3 References
[1] De Micheli, Giovanni. Synthesis and Optimization of Digital Circuits. McGraw-Hill, Inc., 1994

129.4 External links


Slides explaining implicants, prime implicants and essential prime implicants

Examples of nding essential prime implicants using K-map


Chapter 130

Implication graph

[Figure: an implication graph representing a 2-satisfiability instance in the variables x0, ..., x6.]

In mathematical logic, an implication graph is a skew-symmetric directed graph G(V, E) composed of vertex set
V and directed edge set E. Each vertex in V represents the truth status of a Boolean literal, and each directed edge
from vertex u to vertex v represents the material implication If the literal u is true then the literal v is also true.
Implication graphs were originally used for analyzing complex Boolean expressions.


130.1 Applications
A 2-satisfiability instance in conjunctive normal form can be transformed into an implication graph by replacing
each of its disjunctions by a pair of implications. For example, the statement (x0 ∨ x1) can be rewritten as the
pair (¬x0 → x1), (¬x1 → x0). An instance is satisfiable if and only if no literal and its negation belong to the
same strongly connected component of its implication graph; this characterization can be used to solve 2-satisfiability
instances in linear time.[1]
In CDCL SAT-solvers, unit propagation can be naturally associated with an implication graph that captures all possible
ways of deriving all implied literals from decision literals,[2] which is then used for clause learning.
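The following sketch illustrates the construction and the satisfiability criterion. It is a deliberately simple quadratic-time version (it tests mutual reachability directly rather than computing strongly connected components in linear time); the clause encoding, with +i for xi and −i for ¬xi, and all helper names are assumptions for the example.

```python
from collections import defaultdict

def implication_graph(clauses):
    """Build the implication graph of a 2-CNF formula.

    Clauses are pairs of integer literals: +i means x_i, -i means NOT x_i.
    Each clause (a OR b) contributes the edges (NOT a -> b) and (NOT b -> a)."""
    g = defaultdict(set)
    for a, b in clauses:
        g[-a].add(b)
        g[-b].add(a)
    return g

def reachable(g, start):
    """Set of literals reachable from start by following directed edges."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in g[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def satisfiable_2sat(clauses):
    """Unsatisfiable iff some x and NOT x lie in the same strongly connected
    component, i.e. each is reachable from the other."""
    g = implication_graph(clauses)
    variables = {abs(lit) for clause in clauses for lit in clause}
    for x in variables:
        if x in reachable(g, -x) and -x in reachable(g, x):
            return False
    return True

print(satisfiable_2sat([(1, 2), (-1, 2)]))                    # True
print(satisfiable_2sat([(1, 2), (-1, 2), (-2, 1), (-1, -2)])) # False
```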

130.2 References
[1] Aspvall, Bengt; Plass, Michael F.; Tarjan, Robert E. (1979). "A linear-time algorithm for testing the truth of certain
quantified boolean formulas". Information Processing Letters. 8 (3): 121–123. doi:10.1016/0020-0190(79)90002-4.

[2] Paul Beame; Henry Kautz; Ashish Sabharwal (2003). "Understanding the Power of Clause Learning" (PDF). IJCAI. pp.
1194–1201.
Chapter 131

Implicational propositional calculus

In mathematical logic, the implicational propositional calculus is a version of classical propositional calculus which
uses only one connective, called implication or conditional. In formulas, this binary operation is indicated by "implies",
"if ..., then ...", "→", "⊃", etc.

131.1 Virtual completeness as an operator


Implication alone is not functionally complete as a logical operator because one cannot form all other two-valued
truth functions from it. However, if one has a propositional formula which is known to be false and uses that as if it
were a nullary connective for falsity, then one can define all other truth functions. So implication is virtually complete
as an operator. If P, Q, and F are propositions and F is known to be false, then:

¬P is equivalent to P → F
P ∧ Q is equivalent to (P → (Q → F)) → F
P ∨ Q is equivalent to (P → Q) → Q
P ↔ Q is equivalent to ((P → Q) → ((Q → P) → F)) → F

More generally, since the above operators are known to be functionally complete, it follows that any truth function
can be expressed in terms of "→" and "F", if we have a proposition F which is known to be false.
It is worth noting that F is not definable from → and arbitrary sentence variables: any formula constructed from →
and propositional variables must receive the value true when all of its variables are evaluated to true. It follows as a
corollary that {→} is not functionally complete. It cannot, for example, be used to define the two-place truth function
that always returns false.
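These equivalences are easy to confirm by truth table; the sketch below treats F as a constant known to be false and checks each claimed definition against the usual connectives on every valuation (the helper name implies is illustrative).

```python
from itertools import product

def implies(p, q):
    """Truth function of the material conditional."""
    return (not p) or q

F = False  # a propositional constant known to be false

for p, q in product([False, True], repeat=2):
    assert (not p) == implies(p, F)                                            # NOT
    assert (p and q) == implies(implies(p, implies(q, F)), F)                  # AND
    assert (p or q) == implies(implies(p, q), q)                               # OR
    assert (p == q) == implies(implies(implies(p, q), implies(implies(q, p), F)), F)  # IFF

print("all four definitions check out on every valuation")
```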

131.2 Axiom system


The following statements are considered tautologies (irreducible and intuitively true, by definition).

Axiom schema 1 is P → (Q → P).
Axiom schema 2 is (P → (Q → R)) → ((P → Q) → (P → R)).
Axiom schema 3 (Peirce's law) is ((P → Q) → P) → P.
The one non-nullary rule of inference (modus ponens) is: from P and P → Q infer Q.

Where in each case, P, Q, and R may be replaced by any formulas which contain only "→" as a connective. If Σ is a
set of formulas and A a formula, then Σ ⊢ A means that A is derivable using the axioms and rules above and formulas
from Σ as additional hypotheses.


Łukasiewicz (1948) found an axiom system for the implicational calculus, which replaces the schemas 1–3 above
with a single schema

((P → Q) → R) → ((R → P) → (S → P)).

He also argued that there is no shorter axiom system.

131.3 Basic properties of derivation


Since all axioms and rules of the calculus are schemata, derivation is closed under substitution:

If Σ ⊢ A, then σ(Σ) ⊢ σ(A),

where σ is any substitution (of formulas using only implication).

The implicational propositional calculus also satisfies the deduction theorem:

If Σ, A ⊢ B, then Σ ⊢ A → B.

As explained in the deduction theorem article, this holds for any axiomatic extension of the system containing axiom
schemas 1 and 2 above and modus ponens.

131.4 Completeness
The implicational propositional calculus is semantically complete with respect to the usual two-valued semantics of
classical propositional logic. That is, if Σ is a set of implicational formulas, and A is an implicational formula entailed
by Σ, then Σ ⊢ A.

131.4.1 Proof

A proof of the completeness theorem is outlined below. First, using the compactness theorem and the deduction
theorem, we may reduce the completeness theorem to its special case with empty Σ, i.e., we only need to show that
every tautology is derivable in the system.
The proof is similar to completeness of full propositional logic, but it also uses the following idea to overcome the
functional incompleteness of implication. If A and F are formulas, then A → F is equivalent to (A*) → F, where
A* is the result of replacing in A all, some, or none of the occurrences of F by falsity. Similarly, (A → F) → F is
equivalent to A* → F. So under some conditions, one can use them as substitutes for saying A* is false or A* is true
respectively.
We first observe some basic facts about derivability:

(1) A → B, B → C ⊢ A → C

Indeed, we can derive A → (B → C) using Axiom 1, and then derive A → C by modus ponens
(twice) from Ax. 2.

(2) A → B ⊢ (B → C) → (A → C)

This follows from (1) by the deduction theorem.

(3) A → C, (A → B) → C ⊢ C

If we further assume C → B, we can derive A → B using (1), then we derive C by modus
ponens. This shows A → C, (A → B) → C, C → B ⊢ C, and the deduction theorem
gives A → C, (A → B) → C ⊢ (C → B) → C. We apply Ax. 3 to obtain (3).

Let F be an arbitrary fixed formula. For any formula A, we define A^0 = (A → F) and A^1 = ((A → F) → F). Let us
consider only formulas in propositional variables p1, ..., pn. We claim that for every formula A in these variables and
every truth assignment e,

(4) p1^{e(p1)}, ..., pn^{e(pn)} ⊢ A^{e(A)}.

We prove (4) by induction on A. The base case A = pi is trivial. Let A = (B → C). We distinguish three cases:

1. e(C) = 1. Then also e(A) = 1. We have

(C → F) → F ⊢ ((B → C) → F) → F

by applying (2) twice to the axiom C → (B → C). Since we have derived (C → F) → F by the
induction hypothesis, we can infer ((B → C) → F) → F.

2. e(B) = 0. Then again e(A) = 1. The deduction theorem applied to (3) gives

B → F ⊢ ((B → C) → F) → F.

Since we have derived B → F by the induction hypothesis, we can infer ((B → C) → F) → F.

3. e(B) = 1 and e(C) = 0. Then e(A) = 0. We have

(B → F) → F, C → F, B → C ⊢ B → F by (1), ⊢ F by modus ponens,

thus (B → F) → F, C → F ⊢ (B → C) → F by the deduction theorem. We have derived (B →
F) → F and C → F by the induction hypothesis, hence we can infer (B → C) → F. This completes
the proof of (4).

Now let A be a tautology in variables p1, ..., pn. We will prove by reverse induction on k = n, ..., 0 that for every
assignment e,

(5) p1^{e(p1)}, ..., pk^{e(pk)} ⊢ A^1.

The base case k = n is a special case of (4). Assume that (5) holds for k + 1; we will show it for k. By applying the
deduction theorem to the induction hypothesis, we obtain

p1^{e(p1)}, ..., pk^{e(pk)} ⊢ (pk+1 → F) → A^1,
p1^{e(p1)}, ..., pk^{e(pk)} ⊢ ((pk+1 → F) → F) → A^1,

by first setting e(pk+1) = 0 and second setting e(pk+1) = 1. From this we derive (5) using (3).
For k = 0 we obtain that the formula A^1, i.e., (A → F) → F, is provable without assumptions. Recall that F was an
arbitrary formula; thus we can choose F = A, which gives us provability of the formula (A → A) → A. Since A → A
is provable by the deduction theorem, we can infer A.
This proof is constructive. That is, given a tautology, one could actually follow the instructions and create a proof
of it from the axioms. However, the length of such a proof increases exponentially with the number of propositional
variables in the tautology, hence it is not a practical method for any but the very shortest tautologies.

131.5 The Bernays–Tarski axiom system


The Bernays–Tarski axiom system is often used. In particular, Łukasiewicz's paper derives the Bernays–Tarski axioms
from Łukasiewicz's sole axiom as a means of showing its completeness.
It differs from the axiom schemas above by replacing axiom schema 2, (P→(Q→R))→((P→Q)→(P→R)), with

Axiom schema 2': (P→Q)→((Q→R)→(P→R))

which is called hypothetical syllogism. This makes derivation of the deduction meta-theorem a little more difficult,
but it can still be done.
We show that from P→(Q→R) and P→Q one can derive P→R. This fact can be used in lieu of axiom schema 2 to
get the meta-theorem.

1. P→(Q→R) given
2. P→Q given
3. (P→Q)→((Q→R)→(P→R)) ax 2'
4. (Q→R)→(P→R) mp 2,3
5. (P→(Q→R))→(((Q→R)→(P→R))→(P→(P→R))) ax 2'
6. ((Q→R)→(P→R))→(P→(P→R)) mp 1,5
7. P→(P→R) mp 4,6
8. (P→(P→R))→(((P→R)→R)→(P→R)) ax 2'
9. ((P→R)→R)→(P→R) mp 7,8
10. (((P→R)→R)→(P→R))→(P→R) ax 3
11. P→R mp 9,10 qed

131.6 Testing whether a formula of the implicational propositional calculus is a tautology

Main articles: Tautology (logic) § Efficient verification and the Boolean satisfiability problem, and Boolean satisfiability
problem § Algorithms for solving SAT

In this case, a useful technique is to presume that the formula is not a tautology and attempt to find a valuation which
makes it false. If one succeeds, then it is indeed not a tautology. If one fails, then it is a tautology.
Example of a non-tautology:
Suppose [(A→B)→((C→A)→E)]→([F→((C→D)→E)]→[(A→F)→(D→E)]) is false.
Then (A→B)→((C→A)→E) is true; F→((C→D)→E) is true; A→F is true; D is true; and E is false.
Since D is true, C→D is true. So the truth of F→((C→D)→E) is equivalent to the truth of F→E.
Then since E is false and F→E is true, we get that F is false.
Since A→F is true, A is false. Thus A→B is true and (C→A)→E is true.
C→A is false, so C is true.
The value of B does not matter, so we can arbitrarily choose it to be true.
Summing up, the valuation which sets B, C and D to be true and A, E and F to be false will make
[(A→B)→((C→A)→E)]→([F→((C→D)→E)]→[(A→F)→(D→E)]) false. So it is not a tautology.
Example of a tautology:
Example of a tautology:
Suppose ((A→B)→C)→((C→A)→(D→A)) is false.
Then (A→B)→C is true; C→A is true; D is true; and A is false.
Since A is false, A→B is true. So C is true. Thus A must be true, contradicting the fact that it is false.
Thus there is no valuation which makes ((A→B)→C)→((C→A)→(D→A)) false. Consequently, it is a tautology.
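Since the calculus is semantically complete, a formula is a theorem exactly when it is a tautology, so a brute-force truth-table check decides theoremhood. The sketch below (an assumed nested-pair encoding of formulas, with illustrative helper names) tests Peirce's law and the tautology worked out above.

```python
from itertools import product

def implies(p, q):
    """Truth function of the material conditional."""
    return (not p) or q

def is_tautology(formula, variables):
    """Brute-force check over all 2^n valuations.

    A formula is either a variable name (string) or a pair
    (antecedent, consequent) standing for antecedent -> consequent."""
    def evaluate(f, valuation):
        if isinstance(f, str):
            return valuation[f]
        antecedent, consequent = f
        return implies(evaluate(antecedent, valuation), evaluate(consequent, valuation))

    return all(
        evaluate(formula, dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# Peirce's law ((P -> Q) -> P) -> P:
peirce = ((("P", "Q"), "P"), "P")
print(is_tautology(peirce, ["P", "Q"]))              # True

# ((A -> B) -> C) -> ((C -> A) -> (D -> A)) from the text:
example = ((("A", "B"), "C"), (("C", "A"), ("D", "A")))
print(is_tautology(example, ["A", "B", "C", "D"]))   # True
```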

131.7 Adding an axiom schema


What would happen if another axiom schema were added to those listed above? There are two cases: (1) it is a
tautology; or (2) it is not a tautology.
If it is a tautology, then the set of theorems remains the set of tautologies as before. However, in some cases it may
be possible to find significantly shorter proofs for theorems. Nevertheless, the minimum length of proofs of theorems
will remain unbounded, that is, for any natural number n there will still be theorems which cannot be proved in n or
fewer steps.
If the new axiom schema is not a tautology, then every formula becomes a theorem (which makes the concept of
a theorem useless in this case). What is more, there is then an upper bound on the minimum length of a proof of
every formula, because there is a common method for proving every formula. For example, suppose the new axiom
schema were ((B→C)→C)→B. Then ((A→(A→A))→(A→A))→A is an instance (one of the new axioms) and also not
a tautology. But [((A→(A→A))→(A→A))→A]→A is a tautology and thus a theorem due to the old axioms (using the
completeness result above). Applying modus ponens, we get that A is a theorem of the extended system. Then all
one has to do to prove any formula is to replace A by the desired formula throughout the proof of A. This proof will
have the same number of steps as the proof of A.

131.8 An alternative axiomatization


The axioms listed above primarily work through the deduction metatheorem to arrive at completeness. Here is another
axiom system which aims directly at completeness without going through the deduction metatheorem.
First we have axiom schemas which are designed to efficiently prove the subset of tautologies which contain only one
propositional variable.

aa 1: A→A
aa 2: (A→B)→(A→(C→B))
aa 3: A→((B→C)→((A→B)→C))
aa 4: A→(B→A)

The proof of each such tautology would begin with two parts (hypothesis and conclusion) which are the same. Then
insert additional hypotheses between them. Then insert additional tautological hypotheses (which are true even when
the sole variable is false) into the original hypothesis. Then add more hypotheses outside (on the left). This procedure
will quickly give every tautology containing only one variable. (The symbol "" in each axiom schema indicates where
the conclusion used in the completeness proof begins. It is merely a comment, not a part of the formula.)
Consider any formula Φ which may contain A, B, C1, ..., Cn and ends with A as its final conclusion. Then we take

aa 5: Φ⁻→(Φ⁺→Φ)

as an axiom schema, where Φ⁻ is the result of replacing B by A throughout Φ and Φ⁺ is the result of replacing B by
(A→A) throughout Φ. This is a schema for axiom schemas since there are two levels of substitution: in the first, Φ is
substituted (with variations); in the second, any of the variables (including both A and B) may be replaced by arbitrary
formulas of the implicational propositional calculus. This schema allows one to prove tautologies with more than one
variable by considering the case when B is false and the case when B is true.
If the variable which is the final conclusion of a formula takes the value true, then the whole formula takes the value
true regardless of the values of the other variables. Consequently if A is true, then Φ⁻, Φ⁺, and Φ⁻→(Φ⁺→Φ) are
all true. So without loss of generality, we may assume that A is false. Notice that Φ is a tautology if and only if both
Φ⁻ and Φ⁺ are tautologies. But while Φ has n+2 distinct variables, Φ⁻ and Φ⁺ both have n+1. So the question of
whether a formula is a tautology has been reduced to the question of whether certain formulas with one variable each
are all tautologies. Also notice that Φ⁻→(Φ⁺→Φ) is a tautology regardless of whether Φ is, because if Φ is false then
either Φ⁻ or Φ⁺ will be false depending on whether B is false or true.
Examples:
Deriving Peirce's law

1. [((P→P)→P)→P]→([((P→(P→P))→P)→P]→[((P→Q)→P)→P]) aa 5
2. P→P aa 1
3. (P→P)→((P→P)→(((P→P)→P)→P)) aa 3
4. (P→P)→(((P→P)→P)→P) mp 2,3
5. ((P→P)→P)→P mp 2,4
6. [((P→(P→P))→P)→P]→[((P→Q)→P)→P] mp 5,1
7. P→(P→P) aa 4
8. (P→(P→P))→((P→P)→(((P→(P→P))→P)→P)) aa 3
9. (P→P)→(((P→(P→P))→P)→P) mp 7,8
10. ((P→(P→P))→P)→P mp 2,9
11. ((P→Q)→P)→P mp 10,6 qed

Deriving Łukasiewicz' sole axiom

1. [((P→Q)→P)→((P→P)→(S→P))]→([((P→Q)→(P→P))→(((P→P)→P)→(S→P))]→[((P→Q)→R)→((R→P)→(S→P))]) aa 5
2. [((P→P)→P)→((P→P)→(S→P))]→([((P→(P→P))→P)→((P→P)→(S→P))]→[((P→Q)→P)→((P→P)→(S→P))]) aa 5
3. P→(S→P) aa 4
4. (P→(S→P))→(P→((P→P)→(S→P))) aa 2
5. P→((P→P)→(S→P)) mp 3,4
6. P→P aa 1
7. (P→P)→((P→((P→P)→(S→P)))→[((P→P)→P)→((P→P)→(S→P))]) aa 3
8. (P→((P→P)→(S→P)))→[((P→P)→P)→((P→P)→(S→P))] mp 6,7
9. ((P→P)→P)→((P→P)→(S→P)) mp 5,8
10. [((P→(P→P))→P)→((P→P)→(S→P))]→[((P→Q)→P)→((P→P)→(S→P))] mp 9,2
11. P→(P→P) aa 4
12. (P→(P→P))→((P→((P→P)→(S→P)))→[((P→(P→P))→P)→((P→P)→(S→P))]) aa 3
13. (P→((P→P)→(S→P)))→[((P→(P→P))→P)→((P→P)→(S→P))] mp 11,12
14. ((P→(P→P))→P)→((P→P)→(S→P)) mp 5,13
15. ((P→Q)→P)→((P→P)→(S→P)) mp 14,10
16. [((P→Q)→(P→P))→(((P→P)→P)→(S→P))]→[((P→Q)→R)→((R→P)→(S→P))] mp 15,1



17. (P→P)→((P→(S→P))→[((P→P)→P)→(S→P)]) aa 3
18. (P→(S→P))→[((P→P)→P)→(S→P)] mp 6,17
19. ((P→P)→P)→(S→P) mp 3,18
20. (((P→P)→P)→(S→P))→[((P→Q)→(P→P))→(((P→P)→P)→(S→P))] aa 4
21. ((P→Q)→(P→P))→(((P→P)→P)→(S→P)) mp 19,20
22. ((P→Q)→R)→((R→P)→(S→P)) mp 21,16 qed

Using a truth table to verify Łukasiewicz' sole axiom would require consideration of 16 = 2⁴ cases since it contains 4
distinct variables. In this derivation, we were able to restrict consideration to merely 3 cases: R is false and Q is false,
R is false and Q is true, and R is true. However, because we are working within the formal system of logic (instead of
outside it, informally), each case required much more effort.

131.9 See also


Deduction theorem

List of logic systems#Implicational propositional calculus


Peirce's law

Propositional calculus
Tautology (logic)

Truth table
Valuation (logic)

131.10 References
Mendelson, Elliot (1997) Introduction to Mathematical Logic, 4th ed. London: Chapman & Hall.
Łukasiewicz, Jan (1948) "The shortest axiom of the implicational calculus of propositions", Proc. Royal Irish
Academy, vol. 52, sec. A, no. 3, pp. 25–33.
Chapter 132

Inclusion (Boolean algebra)

In Boolean algebra (structure), the inclusion relation a ≤ b is defined as ab′ = 0 and is the Boolean analogue to the
subset relation in set theory. Inclusion is a partial order.
The inclusion relation a < b can be expressed in many ways:

a < b
ab′ = 0
a′ + b = 1
b′ < a′
a + b = b
ab = a

The inclusion relation has a natural interpretation in various Boolean algebras: in the subset algebra, the subset relation;
in arithmetic Boolean algebra, divisibility; in the algebra of propositions, material implication; in the two-element
algebra, the set { (0,0), (0,1), (1,1) }.
Some useful properties of the inclusion relation are:

a ≤ a + b
ab ≤ a

The inclusion relation may be used to define Boolean intervals such that a ≤ x ≤ b. A Boolean algebra whose carrier
set is restricted to the elements in an interval is itself a Boolean algebra.
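In the subset algebra these formulations can be verified mechanically. The sketch below (illustrative code, with U − a playing the role of the complement a′) checks that they all agree with the ordinary subset relation on every pair of subsets of a three-element set.

```python
from itertools import combinations

U = frozenset({1, 2, 3})
subsets = [frozenset(c) for r in range(4) for c in combinations(U, r)]

def included(a, b):
    """a <= b in the sense of the article: a AND NOT b is the bottom element 0."""
    return a & (U - b) == frozenset()

for a in subsets:
    for b in subsets:
        expected = a <= b                        # the ordinary subset relation
        assert included(a, b) == expected        # ab' = 0
        assert ((U - a) | b == U) == expected    # a' + b = 1
        assert ((U - b) <= (U - a)) == expected  # b' <= a'
        assert (a | b == b) == expected          # a + b = b
        assert (a & b == a) == expected          # ab = a

print("all formulations of inclusion agree on the subsets of a 3-element set")
```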

132.1 References
Frank Markham Brown, Boolean Reasoning: The Logic of Boolean Equations, 2nd edition, 2003, p. 52

Chapter 133

Independence of premise

In proof theory and constructive mathematics, the principle of independence of premise states that if φ and ∃x θ
are sentences in a formal theory and φ → ∃x θ is provable, then ∃x (φ → θ) is provable. Here x cannot be a free
variable of φ.
The principle is valid in classical logic. Its main application is in the study of intuitionistic logic, where the principle
is not always valid.

133.1 In classical logic


The principle of independence of premise is valid in classical logic because of the law of the excluded middle. Assume
that φ → ∃x θ is provable. Then, if φ holds, there is an x satisfying θ; but if φ does not hold, then any x satisfies
φ → θ. In either case, there is some x such that φ → θ. Thus ∃x (φ → θ) is provable.

133.2 In intuitionistic logic


The principle of independence of premise is not generally valid in intuitionistic logic (Avigad and Feferman 1999).
This can be illustrated by the BHK interpretation, which says that in order to prove φ → ∃x θ intuitionistically, one
must create a function that takes a proof of φ and returns a proof of ∃x θ. Here the proof of φ itself is an input to the
function and may be used to construct x. On the other hand, a proof of ∃x (φ → θ) must first demonstrate a particular
x, and then provide a function that converts a proof of φ into a proof of θ in which x has that particular value.
As a weak counterexample, suppose θ(x) is some decidable predicate of a natural number such that it is not known
whether any x satisfies θ. For example, θ may say that x is a formal proof of some mathematical conjecture whose
provability is not known. Let the formula φ be ∃z θ(z). Then φ → ∃x θ is trivially provable. However, to prove ∃x
(φ → θ), one must demonstrate a particular value of x such that, if any value of x satisfies θ, then the one that was
chosen satisfies θ. This cannot be done without already knowing whether ∃x θ holds, and thus ∃x (φ → θ) is not
intuitionistically provable in this situation.

133.3 References
Jeremy Avigad and Solomon Feferman (1999). "Gödel's functional ('Dialectica') interpretation" (PDF). In S.
Buss ed., The Handbook of Proof Theory, North-Holland. pp. 337–405.

Chapter 134

Indicative conditional

In natural languages, an indicative conditional[1][2] is the logical operation given by statements of the form If A then
B. Unlike the material conditional, an indicative conditional does not have a stipulated denition. The philosophical
literature on this operation is broad, and no clear consensus has been reached.

134.1 Distinctions between the material conditional and the indicative con-
ditional
The material conditional does not always function in accordance with everyday if-then reasoning. Therefore there
are drawbacks with using the material conditional to represent if-then statements.
One problem is that the material conditional allows implications to be true even when the antecedent is irrelevant to
the consequent. For example, it's commonly accepted that the sun is made of gas, on one hand, and that 3 is a prime
number, on the other. The standard definition of implication allows us to conclude that, if the sun is made of gas,
then 3 is a prime number. This is arguably synonymous with the following: the sun's being made of gas makes 3 be
a prime number. Many people intuitively think that this is false, because the sun and the number three simply have
nothing to do with one another. Logicians have tried to address this concern by developing alternative logics, e.g.,
relevance logic.
For a related problem, see vacuous truth.
Another issue is that the material conditional is not designed to deal with counterfactuals and other cases that people
often nd in if-then reasoning. This has inspired people to develop modal logic.
A further problem is that the material conditional is such that (P AND ¬P) → Q, regardless of what Q is taken to
mean. That is, a contradiction implies that absolutely everything is true. Logicians concerned with this have tried to
develop paraconsistent logics.

134.2 Psychology and indicative conditionals


Most behavioral experiments on conditionals in the psychology of reasoning have been carried out with indicative
conditionals, causal conditionals, and counterfactual conditionals. People readily make the modus ponens inference:
given "if A then B" and given A, they conclude B. By contrast, only about half of participants in experiments make the
modus tollens inference: given "if A then B" and given not-B, they conclude not-A, while the remainder say that nothing
follows (Evans et al., 1993). When participants are given counterfactual conditionals, they make both the modus
ponens and the modus tollens inferences (Byrne, 2005).

134.3 See also


Material conditional


Counterfactual conditional

Logical consequence
Strict conditional

134.4 References
[1] Stalnaker, R, Philosophia (1975)

[2] Ellis, B, Australasian Journal of Philosophy (1984)

134.5 Further reading


Byrne, R.M.J. (2005). The Rational Imagination: How People Create Counterfactual Alternatives to Reality.
Cambridge, MA: MIT Press.

Edgington, Dorothy. (2006). Conditionals. The Stanford Encyclopedia of Philosophy, Edward Zalta (ed.).
http://plato.stanford.edu/entries/conditionals/.
Evans, J. St. B. T., Newstead, S. and Byrne, R. M. J. (1993). Human Reasoning: The Psychology of Deduction.
Hove, Psychology Press.
Chapter 135

Intensional logic

Not to be confused with intentional logic.

Intensional logic is an approach to predicate logic that extends first-order logic, which has quantifiers that range
over the individuals of a universe (extensions), by additional quantifiers that range over terms that may have such
individuals as their value (intensions). The distinction between intensional and extensional entities is parallel to the
distinction between sense and reference.

135.1 Its place inside logic


Logic is the study of proof and deduction as manifested in language (abstracting from any underlying psychological
or biological processes).[1] Logic is not a closed, completed science, and presumably it will never stop developing:
the logical analysis can penetrate into varying depths of the language[2] (sentences regarded as atomic, or split into
predicates applied to individual terms, or even revealing such fine logical structures as modal, temporal, dynamic, or
epistemic ones).
In order to achieve its special goal, logic was forced to develop its own formal tools, most notably its own grammar,
detached from simply making direct use of the underlying natural language.[3] Functors belong to the most important
categories in logical grammar (along with basic categories like sentence and individual name[4]): a functor can be
regarded as an incomplete expression with argument places to fill in. If we fill them in with appropriate subexpressions,
then the resulting entirely completed expression can be regarded as a result, an output.[5] Thus, a functor acts
like a function sign,[6] taking on input expressions, resulting in a new, output expression.[5]
Semantics links expressions of language to the outside world. Logical semantics, too, has developed its own structure.
Semantic values can be attributed to expressions in basic categories: the reference of an individual name (the designated
object named by it) is called its extension; and as for sentences, their truth value is also called their extension.[7]
As for functors, some of them are simpler than others: extension can be attributed to them in a simple way. In the case of
a so-called extensional functor we can in a sense abstract from the material part of its inputs and output, and regard
the functor as a function turning directly the extension of its input(s) into the extension of its output. Of course, it
is assumed that we can do so at all: the extension of the input expression(s) determines the extension of the resulting
expression. Functors for which this assumption does not hold are called intensional.[8]
Natural languages abound with intensional functors;[9] this can be illustrated by intensional statements. Extensional
logic cannot reach inside such fine logical structures of the language; it stops at a coarser level. Attempts at such
deep logical analysis have a long past: authors as early as Aristotle had already studied modal syllogisms.[10] Gottlob
Frege developed a kind of two-dimensional semantics: for resolving questions like those of intensional statements, he
introduced a distinction between two semantic values: sentences (and individual terms) have both an extension
and an intension.[6] These semantic values can be interpreted and transferred also to functors (except for intensional
functors, which have only intension).
As mentioned, motivations for settling problems that belong today to intensional logic have a long past. As for
attempts at formalization, the development of calculi often preceded the finding of their corresponding formal
semantics. Intensional logic is not alone in that: Gottlob Frege, too, accompanied his (extensional) calculus with

detailed explanations of the semantical motivations, but the formal foundation of its semantics appeared only in the
20th century. Thus, in the history of the development of intensional logic, patterns sometimes repeated themselves
similar to those seen earlier for extensional logic.[11]
There are some intensional logic systems that claim to fully analyze the common language:

Transparent Intensional Logic


Modal logic

135.2 Modal logic


Main article: Modal logic

Modal logic is historically the earliest area in the study of intensional logic, originally motivated by formalizing
necessity and possibility (recently, this original motivation belongs to alethic logic, just one of the many branches
of modal logic).[12]
Modal logic can also be regarded as the simplest appearance of such studies: it extends extensional logic with just
a few sentential functors:[13] these are intensional, and they are interpreted (in the metarules of semantics) as
quantifying over possible worlds. For example, the Necessity operator (the 'square') when applied to a sentence A
says: 'The sentence "('square')A" is true in world i if and only if it is true in all worlds accessible from world i'. The
corresponding Possibility operator (the 'diamond') when applied to A asserts that "('diamond')A" is true in world i if
and only if A is true in some world (at least one) accessible from world i. The exact semantic content of these assertions
therefore depends crucially on the nature of the accessibility relation. For example, is world i accessible from itself? The
answer to this question characterizes the precise nature of the system, and many exist, answering moral and temporal
questions (in a temporal system, the accessibility relation covers states or 'instants', and only the future is accessible from
a given moment; the Necessity operator corresponds to 'for all future moments' in this logic). The operators are related
to one another by dualities similar to those relating the quantifiers[14] (for example by the analogous correspondents of
De Morgan's laws): something is necessary if and only if its negation is not possible, i.e. inconsistent. Syntactically, the
operators are not quantifiers; they do not bind variables,[15] but govern whole sentences. This gives rise to the problem
of referential opacity, i.e. the problem of quantifying over or 'into' modal contexts. The operators appear in the grammar
as sentential functors;[14] they are called modal operators.[15]
As mentioned, precursors of modal logic include Aristotle. Medieval scholastic discussions accompanied its development,
for example about de re versus de dicto modalities: said in recent terms, in the de re modality the modal
functor is applied to an open sentence, and the variable is bound by a quantifier whose scope includes the whole intensional
subterm.[10]
Modern modal logic began with Clarence Irving Lewis, whose work was motivated by establishing the notion of
strict implication.[16] The possible-worlds approach enabled more exact study of semantical questions. Exact formalization
resulted in Kripke semantics (developed by Saul Kripke, Jaakko Hintikka, Stig Kanger).[13]

135.3 Type theoretical intensional logic


Already in 1951, Alonzo Church had developed an intensional calculus. The semantical motivations were explained
expressively, but of course without those tools that we now know for establishing semantics for modal logic in a formal way,
because they had not been invented then:[17] Church did not provide formal semantic definitions.[18]
Later, the possible-worlds approach to semantics provided tools for a comprehensive study in intensional semantics.
Richard Montague could preserve the most important advantages of Church's intensional calculus in his system. Unlike
its forerunner, Montague grammar was built in a purely semantical way: a simpler treatment became possible,
thanks to the new formal tools invented since Church's work.[17]

135.4 See also


Extensionality

Kripke semantics

FregeChurch ontology

135.5 Notes
[1] Ruzsa 2000, p. 10

[2] Ruzsa 2000, p. 13

[3] Ruzsa 2000, p. 12

[4] Ruzsa 2000, p. 21

[5] Ruzsa 2000, p. 22

[6] Ruzsa 2000, p. 24

[7] Ruzsa 2000, pp. 22–23

[8] Ruzsa 2000, pp. 25–26

[9] Ruzsa 1987, p. 724

[10] Ruzsa 2000, pp. 246–247

[11] Ruzsa 2000, p. 128

[12] Ruzsa 2000, p. 252

[13] Ruzsa 2000, p. 247

[14] Ruzsa 2000, p. 245

[15] Ruzsa 2000, p. 269

[16] Ruzsa 2000, p. 256

[17] Ruzsa 2000, p. 297

[18] Ruzsa 1989, p. 492

135.6 References
Melvin Fitting (2004). "First-order intensional logic". Annals of Pure and Applied Logic 127:171–193. The
2003 preprint is used in this article.

Melvin Fitting (2007). "Intensional Logic". In the Stanford Encyclopedia of Philosophy.

Ruzsa, Imre (1984), Klasszikus, modális és intenzionális logika (in Hungarian), Budapest: Akadémiai Kiadó,
ISBN 963-05-3084-8. Translation of the title: Classical, modal and intensional logic.

Ruzsa, Imre (1987), "Függelék. Az utolsó két évtized", in Kneale, William; Kneale, Martha, A logika fejlődése
(in Hungarian), Budapest: Gondolat, pp. 695–734, ISBN 963-281-780-X. Original: The Development of
Logic. Translation of the title of the appendix by Ruzsa, present only in the Hungarian publication: "The last
two decades".

Ruzsa, Imre (1988), Logikai szintaxis és szemantika (in Hungarian), 1, Budapest: Akadémiai Kiadó, ISBN
963-05-4720-1. Translation of the title: Syntax and semantics of logic.

Ruzsa, Imre (1989), Logikai szintaxis és szemantika, 2, Budapest: Akadémiai Kiadó, ISBN 963-05-5313-9.

Ruzsa, Imre (2000), Bevezetés a modern logikába, Osiris tankönyvek (in Hungarian), Budapest: Osiris, ISBN
963-379-978-3. Translation of the title: Introduction to modern logic.

135.7 External links


Fitting, Melvin. Intensional logic. Stanford Encyclopedia of Philosophy.
Chapter 136

Interior algebra

In abstract algebra, an interior algebra is a certain type of algebraic structure that encodes the idea of the topological
interior of a set. Interior algebras are to topology and the modal logic S4 what Boolean algebras are to set theory and
ordinary propositional logic. Interior algebras form a variety of modal algebras.

136.1 Denition
An interior algebra is an algebraic structure with the signature

⟨S, ·, +, ′, 0, 1, I⟩

where

⟨S, ·, +, ′, 0, 1⟩

is a Boolean algebra and postfix I designates a unary operator, the interior operator, satisfying the identities:

1. xI ≤ x
2. xII = xI
3. (xy)I = xI yI
4. 1I = 1

xI is called the interior of x.


The dual of the interior operator is the closure operator C dened by xC = ((x)I ). xC is called the closure of x. By
the principle of duality, the closure operator satises the identities:

1. xC x
2. xCC = xC
3. (x + y)C = xC + yC
4. 0C = 0

If the closure operator is taken as primitive, the interior operator can be defined as xI = ((x′)C)′. Thus the theory
of interior algebras may be formulated using the closure operator instead of the interior operator, in which case one
considers closure algebras of the form ⟨S, ·, +, ′, 0, 1, C⟩, where ⟨S, ·, +, ′, 0, 1⟩ is again a Boolean algebra and C satisfies
the above identities for the closure operator. Closure and interior algebras form dual pairs, and are paradigmatic
instances of Boolean algebras with operators. The early literature on this subject (mainly Polish topology) invoked
closure operators, but the interior operator formulation eventually became the norm.
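As a concrete check, here is a small Python sketch (illustrative only; the three-element set and its topology are assumptions chosen for the example) that verifies the four interior identities, together with the dual closure fact x ≤ xC, on a power-set Boolean algebra whose interior operator is the topological interior:

from itertools import chain, combinations

# Power-set Boolean algebra of X with the topological interior as the operator I.
X = frozenset({1, 2, 3})
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), X]   # an assumed topology on X

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def interior(a):                       # aI = union of the open sets below a
    return frozenset(chain.from_iterable(o for o in opens if o <= a))

def closure(a):                        # dual operator: aC = ((a')I)'
    return X - interior(X - a)

for a in subsets(X):
    assert interior(a) <= a                                    # 1. xI <= x
    assert interior(interior(a)) == interior(a)                # 2. xII = xI
    assert a <= closure(a)                                     # dual of 1: x <= xC
    for b in subsets(X):
        assert interior(a & b) == interior(a) & interior(b)    # 3. (xy)I = xI yI
assert interior(X) == X                                        # 4. 1I = 1
print("all interior-algebra identities hold")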


136.2 Open and closed elements


Elements of an interior algebra satisfying the condition xI = x are called open. The complements of open elements
are called closed and are characterized by the condition xC = x. An interior of an element is always open and the
closure of an element is always closed. Interiors of closed elements are called regular open and closures of open
elements are called regular closed. Elements which are both open and closed are called clopen. 0 and 1 are clopen.
An interior algebra is called Boolean if all its elements are open (and hence clopen). Boolean interior algebras
can be identified with ordinary Boolean algebras as their interior and closure operators provide no meaningful
additional structure. A special case is the class of trivial interior algebras which are the single element interior algebras
characterized by the identity 0 = 1.

136.3 Morphisms of interior algebras

136.3.1 Homomorphisms
Interior algebras, by virtue of being algebraic structures, have homomorphisms. Given two interior algebras A and B,
a map f : A → B is an interior algebra homomorphism if and only if f is a homomorphism between the underlying
Boolean algebras of A and B, that also preserves interiors and closures. Hence:

f(xI ) = f(x)I ;
f(xC ) = f(x)C .

136.3.2 Topomorphisms
Topomorphisms are another important, and more general, class of morphisms between interior algebras. A map f :
A → B is a topomorphism if and only if f is a homomorphism between the Boolean algebras underlying A and B, that
also preserves the open and closed elements of A. Hence:

If x is open in A, then f(x) is open in B;


If x is closed in A, then f(x) is closed in B.

Every interior algebra homomorphism is a topomorphism, but not every topomorphism is an interior algebra homo-
morphism.

136.4 Relationships to other areas of mathematics

136.4.1 Topology
Given a topological space X = ⟨X, T⟩ one can form the power set Boolean algebra of X:

⟨P(X), ∩, ∪, ′, ∅, X⟩

and extend it to an interior algebra

A(X) = ⟨P(X), ∩, ∪, ′, ∅, X, I⟩,

where I is the usual topological interior operator. For all S ⊆ X it is defined by

SI = ∪{O : O ⊆ S and O is open in X}

For all S ⊆ X the corresponding closure operator is given by

SC = ∩{C : S ⊆ C and C is closed in X}

SI is the largest open subset of S and SC is the smallest closed superset of S in X. The open, closed, regular open,
regular closed and clopen elements of the interior algebra A(X) are just the open, closed, regular open, regular closed
and clopen subsets of X respectively in the usual topological sense.
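For instance, the following Python sketch (the four-point space and its topology are assumptions made for illustration) computes SI and SC for a concrete subset exactly as defined above and tests openness and closedness:

# The interior algebra A(X) of a small finite topological space.
X = frozenset({1, 2, 3, 4})
T = [frozenset(), frozenset({1}), frozenset({2, 3}), frozenset({1, 2, 3}), X]  # assumed topology

def interior(S):                 # SI = union of the open sets contained in S
    return frozenset().union(*(O for O in T if O <= S))

def closure(S):                  # SC = intersection of the closed sets containing S
    result = X
    for C in (X - O for O in T):
        if S <= C:
            result &= C
    return result

S = frozenset({1, 2})
print(sorted(interior(S)))                                 # [1]          -> largest open subset of S
print(sorted(closure(S)))                                  # [1, 2, 3, 4] -> smallest closed superset of S
print(interior(frozenset({2, 3})) == frozenset({2, 3}))    # True: {2, 3} is open
print(closure(frozenset({4})) == frozenset({4}))           # True: {4} is closed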
Every complete atomic interior algebra is isomorphic to an interior algebra of the form A(X) for some topological
space X. Moreover, every interior algebra can be embedded in such an interior algebra, giving a representation of an
interior algebra as a topological field of sets. The properties of the structure A(X) are the very motivation for the
definition of interior algebras. Because of this intimate connection with topology, interior algebras have also been
called topo-Boolean algebras or topological Boolean algebras.
Given a continuous map between two topological spaces

f : X → Y

we can define a complete topomorphism

A(f) : A(Y) → A(X)

by

A(f)(S) = f−1[S]

for all subsets S of Y. Every complete topomorphism between two complete atomic interior algebras can be derived
in this way. If Top is the category of topological spaces and continuous maps and Cit is the category of complete
atomic interior algebras and complete topomorphisms then Top and Cit are dually isomorphic and A : Top → Cit
is a contravariant functor that is a dual isomorphism of categories. A(f) is a homomorphism if and only if f is a
continuous open map.
Under this dual isomorphism of categories many natural topological properties correspond to algebraic properties, in
particular connectedness properties correspond to irreducibility properties:

X is empty if and only if A(X) is trivial


X is indiscrete if and only if A(X) is simple
X is discrete if and only if A(X) is Boolean
X is almost discrete if and only if A(X) is semisimple
X is finitely generated (Alexandrov) if and only if A(X) is operator complete, i.e. its interior and closure
operators distribute over arbitrary meets and joins respectively
X is connected if and only if A(X) is directly indecomposable
X is ultraconnected if and only if A(X) is finitely subdirectly irreducible
X is compact ultra-connected if and only if A(X) is subdirectly irreducible

Generalized topology

The modern formulation of topological spaces in terms of topologies of open subsets motivates an alternative
formulation of interior algebras: A generalized topological space is an algebraic structure of the form

⟨B, ·, +, ′, 0, 1, T⟩

where ⟨B, ·, +, ′, 0, 1⟩ is a Boolean algebra as usual, and T is a unary relation on B (subset of B) such that:

1. 0, 1 ∈ T
2. T is closed under arbitrary joins (i.e. if a join of an arbitrary subset of T exists then it will be in T)
3. T is closed under finite meets
4. For every element b of B, the join ∨{a ∈ T : a ≤ b} exists

T is said to be a generalized topology in the Boolean algebra.


Given an interior algebra its open elements form a generalized topology. Conversely given a generalized topological
space

⟨B, ·, +, ′, 0, 1, T⟩

we can define an interior operator on B by bI = ∨{a ∈ T : a ≤ b}, thereby producing an interior algebra whose open
elements are precisely T. Thus generalized topological spaces are equivalent to interior algebras.
Considering interior algebras to be generalized topological spaces, topomorphisms are then the standard homomor-
phisms of Boolean algebras with added relations, so that standard results from universal algebra apply.

Neighbourhood functions and neighbourhood lattices

The topological concept of neighbourhoods can be generalized to interior algebras: An element y of an interior algebra
is said to be a neighbourhood of an element x if x ≤ yI. The set of neighbourhoods of x is denoted by N(x) and
forms a filter. This leads to another formulation of interior algebras:
A neighbourhood function on a Boolean algebra is a mapping N from its underlying set B to its set of filters, such
that:

1. For all x ∈ B, max{y ∈ B : x ∈ N(y)} exists

2. For all x, y ∈ B, x ∈ N(y) if and only if there is a z ∈ B such that y ≤ z ≤ x and z ∈ N(z).

The mapping N of elements of an interior algebra to their filters of neighbourhoods is a neighbourhood function on
the underlying Boolean algebra of the interior algebra. Moreover, given a neighbourhood function N on a Boolean
algebra with underlying set B, we can define an interior operator by xI = max{y ∈ B : x ∈ N(y)}, thereby obtaining an
interior algebra. N(x) will then be precisely the filter of neighbourhoods of x in this interior algebra. Thus interior
algebras are equivalent to Boolean algebras with specified neighbourhood functions.
In terms of neighbourhood functions, the open elements are precisely those elements x such that x ∈ N(x). In terms
of open elements, x ∈ N(y) if and only if there is an open element z such that y ≤ z ≤ x.
Neighbourhood functions may be defined more generally on (meet)-semilattices, producing the structures known as
neighbourhood (semi)lattices. Interior algebras may thus be viewed as precisely the Boolean neighbourhood lattices,
i.e. those neighbourhood lattices whose underlying semilattice forms a Boolean algebra.

136.4.2 Modal logic


Given a theory (set of formal sentences) M in the modal logic S4, we can form its Lindenbaum–Tarski algebra:

L(M) = ⟨M/~, ∧, ∨, ¬, F, T, □⟩

where ~ is the equivalence relation on sentences in M given by p ~ q if and only if p and q are logically equivalent
in M, and M/~ is the set of equivalence classes under this relation. Then L(M) is an interior algebra. The interior
operator in this case corresponds to the modal operator □ (necessarily), while the closure operator corresponds to
◇ (possibly). This construction is a special case of a more general result for modal algebras and modal logic.
The open elements of L(M) correspond to sentences that are only true if they are necessarily true, while the closed
elements correspond to those that are only false if they are necessarily false.
Because of their relation to S4, interior algebras are sometimes called S4 algebras or Lewis algebras, after the
logician C. I. Lewis, who first proposed the modal logics S4 and S5.

136.4.3 Preorders

Since interior algebras are (normal) Boolean algebras with operators, they can be represented by fields of sets on
appropriate relational structures. In particular, since they are modal algebras, they can be represented as fields of sets
on a set with a single binary relation, called a modal frame. The modal frames corresponding to interior algebras
are precisely the preordered sets. Preordered sets (also called S4-frames) provide the Kripke semantics of the modal
logic S4, and the connection between interior algebras and preorders is deeply related to their connection with modal
logic.
Given a preordered set X = ⟨X, ≤⟩ we can construct an interior algebra

B(X) = ⟨P(X), ∩, ∪, ′, ∅, X, I⟩

from the power set Boolean algebra of X where the interior operator I is given by

SI = {x ∈ X : for all y ∈ X, x ≤ y implies y ∈ S} for all S ⊆ X.

The corresponding closure operator is given by

SC = {x ∈ X : there exists a y ∈ S with x ≤ y} for all S ⊆ X.

SI is the set of all worlds inaccessible from worlds outside S, and SC is the set of all worlds accessible from some
world in S. Every interior algebra can be embedded in an interior algebra of the form B(X) for some preordered set
X, giving the above-mentioned representation as a field of sets (a preorder field).
This construction and representation theorem is a special case of the more general result for modal algebras and
modal frames. In this regard, interior algebras are particularly interesting because of their connection to topology.
The construction provides the preordered set X with a topology, the Alexandrov topology, producing a topological
space T(X) whose open sets are:

{O ⊆ X : for all x ∈ O and all y ∈ X, x ≤ y implies y ∈ O}.

The corresponding closed sets are:

{C ⊆ X : for all x ∈ C and all y ∈ X, y ≤ x implies y ∈ C}.

In other words, the open sets are the ones whose worlds are inaccessible from outside (the up-sets), and the closed sets
are the ones for which every outside world is inaccessible from inside (the down-sets). Moreover, B(X) = A(T(X)).
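The following Python sketch (the three-element total preorder is an assumption chosen for illustration) computes SI and SC directly from these formulas:

# The interior algebra B(X) of a finite preordered set; the relation is given explicitly.
X = {0, 1, 2}
leq = {(0, 0), (1, 1), (2, 2), (0, 1), (0, 2), (1, 2)}   # total preorder 0 <= 1 <= 2

def interior(S):   # SI = {x : for all y, x <= y implies y in S}
    return {x for x in X if all(y in S for y in X if (x, y) in leq)}

def closure(S):    # SC = {x : there exists y in S with x <= y}
    return {x for x in X if any((x, y) in leq for y in S)}

print(interior({1, 2}))   # {1, 2}: an up-set, hence open
print(interior({0, 1}))   # set(): the only up-set inside {0, 1} is the empty set
print(closure({2}))       # {0, 1, 2}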

136.4.4 Monadic Boolean algebras

Any monadic Boolean algebra can be considered to be an interior algebra where the interior operator is the universal
quantifier and the closure operator is the existential quantifier. The monadic Boolean algebras are then precisely the
variety of interior algebras satisfying the identity xIC = xI. In other words, they are precisely the interior algebras in
which every open element is closed or, equivalently, in which every closed element is open. Moreover, such interior
algebras are precisely the semisimple interior algebras. They are also the interior algebras corresponding to the modal
logic S5, and so have also been called S5 algebras.
In the relationship between preordered sets and interior algebras they correspond to the case where the preorder is
an equivalence relation, reflecting the fact that such preordered sets provide the Kripke semantics for S5. This also
reflects the relationship between the monadic logic of quantification (for which monadic Boolean algebras provide
an algebraic description) and S5, where the modal operators □ (necessarily) and ◇ (possibly) can be interpreted
in the Kripke semantics using monadic universal and existential quantification, respectively, without reference to an
accessibility relation.

136.4.5 Heyting algebras


The open elements of an interior algebra form a Heyting algebra and the closed elements form a dual Heyting algebra.
The regular open elements and regular closed elements correspond to the pseudo-complemented elements and dual
pseudo-complemented elements of these algebras respectively and thus form Boolean algebras. The clopen elements
correspond to the complemented elements and form a common subalgebra of these Boolean algebras as well as of
the interior algebra itself. Every Heyting algebra can be represented as the open elements of an interior algebra.
Heyting algebras play the same role for intuitionistic logic that interior algebras play for the modal logic S4 and
Boolean algebras play for propositional logic. The relation between Heyting algebras and interior algebras reflects
the relationship between intuitionistic logic and S4, in which one can interpret theories of intuitionistic logic as S4
theories closed under necessity.

136.4.6 Derivative algebras


Given an interior algebra A, the closure operator obeys the axioms of the derivative operator D. Hence we can form
a derivative algebra D(A) with the same underlying Boolean algebra as A by using the closure operator as a derivative
operator.
Thus interior algebras are derivative algebras. From this perspective, they are precisely the variety of derivative
algebras satisfying the identity xD ≥ x. Derivative algebras provide the appropriate algebraic semantics for the modal
logic WK4. Hence derivative algebras stand to topological derived sets and WK4 as interior/closure algebras stand
to topological interiors/closures and S4.
Given a derivative algebra V with derivative operator D, we can form an interior algebra I(V) with the same underlying
Boolean algebra as V, with interior and closure operators defined by xI = x·((x′)D)′ and xC = x + xD, respectively. Thus
every derivative algebra can be regarded as an interior algebra. Moreover, given an interior algebra A, we have
I(D(A)) = A. However, D(I(V)) = V does not necessarily hold for every derivative algebra V.

136.5 Metamathematics
Grzegorczyk proved the elementary theory of closure algebras undecidable.[1]

136.6 Notes
[1] Andrzej Grzegorczyk (1951) Undecidability of some topological theories, Fundamenta Mathematicae 38: 137-52.

136.7 References
Blok, W.A., 1976, Varieties of interior algebras, Ph.D. thesis, University of Amsterdam.

Esakia, L., 2004, "Intuitionistic logic and modality via topology, Annals of Pure and Applied Logic 127:
155-70.

McKinsey, J.C.C. and Alfred Tarski, 1944, The Algebra of Topology, Annals of Mathematics 45: 141-91.
Naturman, C.A., 1991, Interior Algebras and Topology, Ph.D. thesis, University of Cape Town Department of
Mathematics.
Chapter 137

Intermediate logic

In mathematical logic, a superintuitionistic logic is a propositional logic extending intuitionistic logic. Classical
logic is the strongest consistent superintuitionistic logic; thus, other consistent superintuitionistic logics are called
intermediate logics (the logics are intermediate between intuitionistic logic and classical logic).[1]

137.1 Denition
A superintuitionistic logic is a set L of propositional formulas in a countable set of variables pi satisfying the following
properties:

1. all axioms of intuitionistic logic belong to L;


2. if F and G are formulas such that F and F → G both belong to L, then G also belongs to L (closure
under modus ponens);
3. if F(p1, p2, ..., pn) is a formula of L, and G1, G2, ..., Gn are any formulas, then F(G1, G2, ..., Gn)
belongs to L (closure under substitution).

Such a logic is intermediate if furthermore

4. L is not the set of all formulas.

137.2 Properties and examples


There exists a continuum of different intermediate logics. Specific intermediate logics are often constructed by adding
one or more axioms to intuitionistic logic, or by a semantical description. Examples of intermediate logics include:

intuitionistic logic (IPC, Int, IL, H)

classical logic (CPC, Cl, CL): IPC + p ∨ ¬p = IPC + ¬¬p → p = IPC + ((p → q) → p) → p

the logic of the weak excluded middle (KC, Jankov's logic, De Morgan logic[2]): IPC + ¬p ∨ ¬¬p

Gödel–Dummett logic (LC, G): IPC + (p → q) ∨ (q → p)

Kreisel–Putnam logic (KP): IPC + (¬p → (q ∨ r)) → ((¬p → q) ∨ (¬p → r))

Medvedev's logic of finite problems (LM, ML): defined semantically as the logic of all frames of the form
⟨P(X) \ {X}, ⊆⟩ for finite sets X (Boolean hypercubes without top), as of 2015 not known to be recursively
axiomatizable

realizability logics


Scott's logic (SL): IPC + ((¬¬p → p) → (p ∨ ¬p)) → (¬p ∨ ¬¬p)

Smetanich's logic (SmL): IPC + (¬q → p) → (((p → q) → p) → p)

logics of bounded cardinality (BCn): IPC + ⋁_{i=0}^{n} (⋀_{j<i} pj → pi)

logics of bounded width, also known as the logic of bounded anti-chains (BWn, BAn): IPC + ⋁_{i=0}^{n} (⋀_{j≠i} pj → pi)

logics of bounded depth (BDn): IPC + pn ∨ (pn → (pn−1 ∨ (pn−1 → ... → (p2 ∨ (p2 → (p1 ∨ ¬p1)))...)))

logics of bounded top width (BTWn): IPC + ⋁_{i=0}^{n} (⋀_{j<i} ¬¬pj → pi)

logics of bounded branching (Tn, BBn): IPC + ⋀_{i=0}^{n} ((pi → ⋁_{j≠i} pj) → ⋁_{j≠i} pj) → ⋁_{i=0}^{n} pi

Gödel n-valued logics (Gn): LC + BCn = LC + BDn

Superintuitionistic or intermediate logics form a complete lattice with intuitionistic logic as the bottom and the in-
consistent logic (in the case of superintuitionistic logics) or classical logic (in the case of intermediate logics) as the
top. Classical logic is the only coatom in the lattice of superintuitionistic logics; the lattice of intermediate logics also
has a unique coatom, namely SmL.
The tools for studying intermediate logics are similar to those used for intuitionistic logic, such as Kripke semantics.
For example, Gödel–Dummett logic has a simple semantic characterization in terms of total orders.

137.3 Semantics
Given a Heyting algebra H, the set of propositional formulas that are valid in H is an intermediate logic. Conversely,
given an intermediate logic it is possible to construct its Lindenbaum algebra which is a Heyting algebra.
An intuitionistic Kripke frame F is a partially ordered set, and a Kripke model M is a Kripke frame with valuation such
that {x | M, x ⊩ p} is an upper subset of F. The set of propositional formulas that are valid in F is an intermediate
logic. Given an intermediate logic L it is possible to construct a Kripke model M such that the logic of M is L (this
construction is called the canonical model). A Kripke frame with this property may not exist, but a general frame always
does.

137.4 Relation to modal logics


Main article: Modal companion

Let A be a propositional formula. The Gödel–Tarski translation of A is defined recursively as follows:

T(pn) = □pn

T(¬A) = □¬T(A)

T(A ∧ B) = T(A) ∧ T(B)

T(A ∨ B) = T(A) ∨ T(B)

T(A → B) = □(T(A) → T(B))
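A minimal Python sketch of the translation, using a tuple-based representation of formulas invented for this example ('box' standing for the necessity operator □):

# Goedel-Tarski translation T; formulas are nested tuples such as
# ('var', n), ('not', A), ('and', A, B), ('or', A, B), ('imp', A, B).
def T(f):
    op = f[0]
    if op == 'var':
        return ('box', f)                                  # T(pn) = box pn
    if op == 'not':
        return ('box', ('not', T(f[1])))                   # T(not A) = box not T(A)
    if op in ('and', 'or'):
        return (op, T(f[1]), T(f[2]))                      # T distributes over conjunction/disjunction
    if op == 'imp':
        return ('box', ('imp', T(f[1]), T(f[2])))          # T(A -> B) = box(T(A) -> T(B))
    raise ValueError(f)

p, q = ('var', 1), ('var', 2)
print(T(('imp', p, q)))
# ('box', ('imp', ('box', ('var', 1)), ('box', ('var', 2))))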

If M is a modal logic extending S4 then ρM = {A | T(A) ∈ M} is a superintuitionistic logic, and M is called a modal
companion of ρM. In particular:

IPC = ρS4

KC = ρS4.2

LC = ρS4.3

CPC = ρS5

For every intermediate logic L there are many modal logics M such that L = ρM.

137.5 See also


List of logic systems

137.6 References
[1] Intermediate logic. Encyclopedia of Mathematics. Retrieved 19 August 2017.

[2] Constructive Logic and the Medvedev Lattice, Sebastiaan A. Terwijn, Notre Dame J. Formal Logic, Volume 47, Number
1 (2006), 73-82.

Toshio Umezawa. On logics intermediate between intuitionistic and classical predicate logic. Journal of Symbolic
Logic, 24(2):141–153, June 1959.
Alexander Chagrov, Michael Zakharyaschev. Modal Logic. Oxford University Press, 1997.
Chapter 138

Inverse trigonometric functions

In mathematics, the inverse trigonometric functions (occasionally also called arcus functions,[1][2][3][4][5] an-
titrigonometric functions[6] or cyclometric functions[7][8][9] ) are the inverse functions of the trigonometric func-
tions (with suitably restricted domains). Specically, they are the inverses of the sine, cosine, tangent, cotangent,
secant, and cosecant functions, and are used to obtain an angle from any of the angles trigonometric ratios. Inverse
trigonometric functions are widely used in engineering, navigation, physics, and geometry.

138.1 Notation
There are several notations used for the inverse trigonometric functions.
The most common convention is to name inverse trigonometric functions using an arc- prefix, e.g., arcsin(x), arccos(x),
arctan(x), etc.[6] This convention is used throughout the article. When measuring in radians, an angle of θ radians
will correspond to an arc whose length is rθ, where r is the radius of the circle. Thus, in the unit circle, the arc
whose cosine is x is the same as the angle whose cosine is x, because the length of the arc of the circle in radii is
the same as the measurement of the angle in radians.[10] Similarly, in computer programming languages the inverse
trigonometric functions are usually called asin, acos, atan.
The notations sin−1(x), cos−1(x), tan−1(x), etc., as introduced by John Herschel in 1813,[11][12] are often used as
well in English[6] sources, but this convention logically conflicts with the common semantics for expressions like
sin2(x), which refer to numeric power rather than function composition, and therefore may result in confusion between
multiplicative inverse and compositional inverse. The confusion is somewhat ameliorated by the fact that each of the
reciprocal trigonometric functions has its own name; for example, (cos(x))−1 = sec(x). Nevertheless, certain authors
advise against using it for its ambiguity.[6][13]
Another convention used by a few authors[14] is to use a majuscule (capital/upper-case) first letter along with a −1
superscript, e.g., Sin−1(x), Cos−1(x), Tan−1(x), etc., which avoids confusing them with the multiplicative inverse,
which should be represented by sin−1(x), cos−1(x), etc.

138.2 Basic properties

138.2.1 Principal values

Since none of the six trigonometric functions are one-to-one, they are restricted in order to have inverse functions.
Therefore, the ranges of the inverse functions are proper subsets of the domains of the original functions.
For example, using function in the sense of multivalued functions, just as the square root function y = √x could be
defined from y² = x, the function y = arcsin(x) is defined so that sin(y) = x. For a given real number x, with −1 ≤ x
≤ 1, there are multiple (in fact, countably infinitely many) numbers y such that sin(y) = x; for example, sin(0) = 0,
but also sin(π) = 0, sin(2π) = 0, etc. When only one value is desired, the function may be restricted to its principal
branch. With this restriction, for each x in the domain the expression arcsin(x) will evaluate only to a single value,
called its principal value. These properties apply to all the inverse trigonometric functions.


The principal inverses are listed in the following table.


(Note: Some authors define the range of arcsecant to be (0 ≤ y < π/2 or π ≤ y < 3π/2), because the tangent
function is nonnegative on this domain. This makes some computations more consistent. For example, using this
range, tan(arcsec(x)) = √(x² − 1), whereas with the range (0 ≤ y < π/2 or π/2 < y ≤ π), we would have to write
tan(arcsec(x)) = ±√(x² − 1), since tangent is nonnegative on 0 ≤ y < π/2 but nonpositive on π/2 < y ≤ π. For a similar
reason, the same authors define the range of arccosecant to be −π < y ≤ −π/2 or 0 < y ≤ π/2.)
If x is allowed to be a complex number, then the range of y applies only to its real part.

138.2.2 Relationships between trigonometric functions and inverse trigonometric functions

Trigonometric functions of inverse trigonometric functions are tabulated below. A quick way to derive them is by
considering the geometry of a right-angled triangle, with one side of length 1, and another side of length x (any real
number between 0 and 1), then applying the Pythagorean theorem and definitions of the trigonometric ratios. Purely
algebraic derivations are longer.

138.2.3 Relationships among the inverse trigonometric functions

Complementary angles:


arccos(x) = π/2 − arcsin(x)

arccot(x) = π/2 − arctan(x)

arccsc(x) = π/2 − arcsec(x)

Negative arguments:

arcsin(−x) = −arcsin(x)
arccos(−x) = π − arccos(x)
arctan(−x) = −arctan(x)
arccot(−x) = π − arccot(x)
arcsec(−x) = π − arcsec(x)
arccsc(−x) = −arccsc(x)

Reciprocal arguments:
arccos(1/x) = arcsec(x)

arcsin(1/x) = arccsc(x)

arctan(1/x) = π/2 − arctan(x) = arccot(x), if x > 0

arctan(1/x) = −π/2 − arctan(x) = arccot(x) − π, if x < 0

arccot(1/x) = π/2 − arccot(x) = arctan(x), if x > 0

arccot(1/x) = 3π/2 − arccot(x) = π + arctan(x), if x < 0

arcsec(1/x) = arccos(x)

arccsc(1/x) = arcsin(x)

If you only have a fragment of a sine table:

arccos(x) = arcsin(√(1 − x²)), if 0 ≤ x ≤ 1

arccos(x) = ½ arccos(2x² − 1), if 0 ≤ x ≤ 1

arcsin(x) = ½ arccos(1 − 2x²), if 0 ≤ x ≤ 1

arctan(x) = arcsin(x/√(x² + 1))
Whenever the square root of a complex number is used here, we choose the root with the positive real part (or positive
imaginary part if the square was negative real).
From the half-angle formula, tan(θ/2) = sin(θ)/(1 + cos(θ)), we get:

arcsin(x) = 2 arctan(x/(1 + √(1 − x²)))

arccos(x) = 2 arctan(√(1 − x²)/(1 + x)), if −1 < x ≤ +1

arctan(x) = 2 arctan(x/(1 + √(1 + x²)))

138.2.4 Arctangent addition formula


arctan(u) + arctan(v) = arctan((u + v)/(1 − uv)) (mod π), uv ≠ 1.
This is derived from the tangent addition formula

tan(α + β) = (tan(α) + tan(β))/(1 − tan(α) tan(β)),

by letting

α = arctan(u), β = arctan(v).
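A quick numerical check in Python; the correction by ±π when uv > 1 is one standard way of making the "mod π" qualification explicit:

import math

def arctan_sum(u, v):
    val = math.atan((u + v) / (1 - u * v))
    if u * v > 1:                       # the identity only holds modulo pi
        val += math.pi if u > 0 else -math.pi
    return val

for u, v in [(0.3, 0.5), (2.0, 3.0), (-2.0, -3.0)]:
    print(abs(math.atan(u) + math.atan(v) - arctan_sum(u, v)) < 1e-12)   # True True True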

138.3 In calculus

138.3.1 Derivatives of inverse trigonometric functions

Main article: Dierentiation of trigonometric functions

The derivatives for complex values of z are as follows:

d/dz arcsin(z) = 1/√(1 − z²); z ≠ −1, +1

d/dz arccos(z) = −1/√(1 − z²); z ≠ −1, +1

d/dz arctan(z) = 1/(1 + z²); z ≠ −i, +i

d/dz arccot(z) = −1/(1 + z²); z ≠ −i, +i

d/dz arcsec(z) = 1/(z²√(1 − 1/z²)); z ≠ −1, 0, +1

d/dz arccsc(z) = −1/(z²√(1 − 1/z²)); z ≠ −1, 0, +1

Only for real values of x:

d/dx arcsec(x) = 1/(|x|√(x² − 1)); |x| > 1

d/dx arccsc(x) = −1/(|x|√(x² − 1)); |x| > 1

For a sample derivation: if θ = arcsin(x), we get:

d arcsin(x)/dx = dθ/d sin(θ) = dθ/(cos(θ) dθ) = 1/cos(θ) = 1/√(1 − sin²(θ)) = 1/√(1 − x²)

138.3.2 Expression as denite integrals

Integrating the derivative and fixing the value at one point gives an expression for the inverse trigonometric function
as a definite integral:

arcsin(x) = ∫₀ˣ dz/√(1 − z²), |x| ≤ 1

arccos(x) = ∫ₓ¹ dz/√(1 − z²), |x| ≤ 1

arctan(x) = ∫₀ˣ dz/(z² + 1)

arccot(x) = ∫ₓ^∞ dz/(z² + 1)

arcsec(x) = ∫₁ˣ dz/(z√(z² − 1)), x ≥ 1

arccsc(x) = ∫ₓ^∞ dz/(z√(z² − 1)), x ≥ 1

When x equals 1, the integrals with limited domains are improper integrals, but still well-defined.
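These integral representations are easy to check numerically, as in the following Python sketch (a crude midpoint rule is used purely for illustration):

import math

def midpoint(f, a, b, n=100000):        # composite midpoint rule
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

x = 0.5
print(abs(midpoint(lambda z: 1 / math.sqrt(1 - z * z), 0, x) - math.asin(x)) < 1e-6)  # True
print(abs(midpoint(lambda z: 1 / (z * z + 1), 0, x) - math.atan(x)) < 1e-6)           # True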

138.3.3 Innite series


Like the sine and cosine functions, the inverse trigonometric functions can be calculated using power series, as follows.
For arcsine, the series can be derived by expanding its derivative, 1/√(1 − z²), as a binomial series, and integrating term
by term (using the integral definition as above). The series for arctangent can similarly be derived by expanding its
derivative 1/(1 + z²) in a geometric series and applying the integral definition above (see Leibniz series).

arcsin(z) = z + (1/2)(z³/3) + ((1·3)/(2·4))(z⁵/5) + ((1·3·5)/(2·4·6))(z⁷/7) + ···
         = ∑_{n=0}^{∞} ((2n − 1)!!/(2n)!!) · z^{2n+1}/(2n + 1)
         = ∑_{n=0}^{∞} ((2n choose n)/(4ⁿ(2n + 1))) · z^{2n+1};  |z| ≤ 1

arctan(z) = z − z³/3 + z⁵/5 − z⁷/7 + ··· = ∑_{n=0}^{∞} ((−1)ⁿ z^{2n+1})/(2n + 1);  |z| ≤ 1, z ≠ i, −i
Series for the other inverse trigonometric functions can be given in terms of these according to the relationships given
above. For example, arccos(x) = π/2 − arcsin(x), arccsc(x) = arcsin(1/x), and so on. Another series is given
by:[15]

2(arcsin(x/2))² = ∑_{n=1}^{∞} x^{2n}/(n² (2n choose n))

Leonhard Euler found a more efficient series for the arctangent, which is:

arctan(z) = (z/(1 + z²)) ∑_{n=0}^{∞} ∏_{k=1}^{n} (2k z²)/((2k + 1)(1 + z²)).

(Notice that the term in the sum for n = 0 is the empty product, which is 1.)
Alternatively, this can be expressed:

arctan(z) = ∑_{n=0}^{∞} (2^{2n} (n!)²)/((2n + 1)!) · z^{2n+1}/(1 + z²)^{n+1}
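The following Python sketch compares partial sums of the power series with partial sums of Euler's series at a point near the boundary of convergence, where the difference in efficiency is visible:

import math

def arctan_power_series(z, terms):
    return sum((-1) ** n * z ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

def arctan_euler_series(z, terms):
    total, product = 0.0, 1.0           # product = prod_{k=1..n} 2k z^2 / ((2k+1)(1+z^2))
    for n in range(terms):
        total += product
        product *= 2 * (n + 1) * z * z / ((2 * (n + 1) + 1) * (1 + z * z))
    return z / (1 + z * z) * total

z, exact = 0.9, math.atan(0.9)
print(abs(arctan_power_series(z, 20) - exact))   # on the order of 1e-4: slow convergence near |z| = 1
print(abs(arctan_euler_series(z, 20) - exact))   # several orders of magnitude smaller with the same 20 terms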

Variant: Continued fractions for arctangent

Two alternatives to the power series for arctangent are these generalized continued fractions:

arctan(z) = z/(1 + (1z)²/(3 − 1z² + (3z)²/(5 − 3z² + (5z)²/(7 − 5z² + (7z)²/(9 − 7z² + ···)))))

arctan(z) = z/(1 + (1z)²/(3 + (2z)²/(5 + (3z)²/(7 + (4z)²/(9 + ···)))))

The second of these is valid in the cut complex plane. There are two cuts, from −i to the point at infinity, going down
the imaginary axis, and from i to the point at infinity, going up the same axis. It works best for real numbers running
from −1 to 1. The partial denominators are the odd natural numbers, and the partial numerators (after the first) are
just (nz)², with each perfect square appearing once. The first was developed by Leonhard Euler; the second by Carl
Friedrich Gauss utilizing the Gaussian hypergeometric series.
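The second continued fraction can be evaluated by truncating it and working from the bottom up, as in this Python sketch (the truncation depth is an arbitrary choice for the example):

import math

def arctan_cf(z, depth=12):
    value = 0.0
    for n in range(depth, 0, -1):        # partial numerators (nz)^2, partial denominators 3, 5, 7, ...
        value = (n * z) ** 2 / (2 * n + 1 + value)
    return z / (1 + value)

for z in (0.25, 0.5, 1.0):
    print(abs(arctan_cf(z) - math.atan(z)))   # errors shrink rapidly for |z| <= 1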

138.3.4 Indenite integrals of inverse trigonometric functions


For real and complex values of z:

∫ arcsin(z) dz = z arcsin(z) + √(1 − z²) + C

∫ arccos(z) dz = z arccos(z) − √(1 − z²) + C

∫ arctan(z) dz = z arctan(z) − ½ ln(1 + z²) + C

∫ arccot(z) dz = z arccot(z) + ½ ln(1 + z²) + C

∫ arcsec(z) dz = z arcsec(z) − ln[z(1 + √((z² − 1)/z²))] + C

∫ arccsc(z) dz = z arccsc(z) + ln[z(1 + √((z² − 1)/z²))] + C

For real x ≥ 1:

∫ arcsec(x) dx = x arcsec(x) − ln(x + √(x² − 1)) + C

∫ arccsc(x) dx = x arccsc(x) + ln(x + √(x² − 1)) + C

For all real x not between −1 and 1:

∫ arcsec(x) dx = x arcsec(x) − sgn(x) ln|x + √(x² − 1)| + C

∫ arccsc(x) dx = x arccsc(x) + sgn(x) ln|x + √(x² − 1)| + C

The absolute value is necessary to compensate for both negative and positive values of the arcsecant and arccosecant
functions. The signum function is also necessary due to the absolute values in the derivatives of the two functions,
which create two different solutions for positive and negative values of x. These can be further simplified using the
logarithmic definitions of the inverse hyperbolic functions:

∫ arcsec(x) dx = x arcsec(x) − arcosh(|x|) + C

∫ arccsc(x) dx = x arccsc(x) + arcosh(|x|) + C

The absolute value in the argument of the arcosh function creates a negative half of its graph, making it identical to
the signum logarithmic function shown above.
All of these antiderivatives can be derived using integration by parts and the simple derivative forms shown above.

Example

Using ∫ u dv = uv − ∫ v du (i.e. integration by parts), set

u = arcsin(x), dv = dx
du = dx/√(1 − x²), v = x

Then

∫ arcsin(x) dx = x arcsin(x) − ∫ x/√(1 − x²) dx,

which by a simple substitution yields the final result:

∫ arcsin(x) dx = x arcsin(x) + √(1 − x²) + C

138.4 Extension to complex plane


Since the inverse trigonometric functions are analytic functions, they can be extended from the real line to the complex
plane. This results in functions with multiple sheets and branch points. One possible way of defining the extensions
is:

arctan(z) = ∫₀ᶻ dx/(1 + x²), z ≠ −i, +i

where the part of the imaginary axis which does not lie strictly between −i and +i is the cut between the principal
sheet and other sheets;

arcsin(z) = arctan(z/√(1 − z²)), z ≠ −1, +1

where (the square-root function has its cut along the negative real axis and) the part of the real axis which does not
lie strictly between −1 and +1 is the cut between the principal sheet of arcsin and other sheets;

arccos(z) = π/2 − arcsin(z), z ≠ −1, +1

which has the same cut as arcsin;

arccot(z) = π/2 − arctan(z), z ≠ −i, i
2

which has the same cut as arctan;

arcsec(z) = arccos(1/z), z ≠ −1, 0, +1

where the part of the real axis between −1 and +1 inclusive is the cut between the principal sheet of arcsec and other
sheets;

arccsc(z) = arcsin(1/z), z ≠ −1, 0, +1

which has the same cut as arcsec.

138.4.1 Logarithmic forms


These functions may also be expressed using complex logarithms. This extends in a natural fashion their domain to
the complex plane.

arcsin(z) = −i ln(iz + √(1 − z²)) = arccsc(1/z)

arccos(z) = −i ln(z + √(z² − 1)) = π/2 + i ln(iz + √(1 − z²)) = π/2 − arcsin(z) = arcsec(1/z)

arctan(z) = ½ i [ln(1 − iz) − ln(1 + iz)] = arccot(1/z)

arccot(z) = ½ i [ln(1 − i/z) − ln(1 + i/z)] = arctan(1/z)

arcsec(z) = −i ln(√(1/z² − 1) + 1/z) = i ln(√(1 − 1/z²) + i/z) + π/2 = π/2 − arccsc(z) = arccos(1/z)

arccsc(z) = −i ln(√(1 − 1/z²) + i/z) = arcsin(1/z)

Elementary proofs of these relations proceed via expansion to exponential forms of the trigonometric functions.

Example proof

sin(θ) = z

θ = arcsin(z)

Using the exponential definition of sine, one obtains

z = (e^{iθ} − e^{−iθ})/(2i)

Let

ξ = e^{iθ}

Solving for ξ:

z = (ξ − 1/ξ)/(2i)

2iz = ξ − 1/ξ

ξ − 2iz − 1/ξ = 0

ξ² − 2izξ − 1 = 0

ξ = iz ± √(1 − z²) = e^{iθ}

iθ = ln(iz ± √(1 − z²))

θ = −i ln(iz ± √(1 − z²))

(the positive branch is chosen)

θ = arcsin(z) = −i ln(iz + √(1 − z²))
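The logarithmic form just derived can be spot-checked numerically against Python's built-in complex arcsine:

import cmath

def arcsin_log(z):
    return -1j * cmath.log(1j * z + cmath.sqrt(1 - z * z))

for z in (0.5, -0.7, 0.3 + 0.4j):
    print(abs(arcsin_log(z) - cmath.asin(z)) < 1e-12)   # True True True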

138.5 Applications

138.5.1 General solutions

Each of the trigonometric functions is periodic in the real part of its argument, running through all its values twice in
each interval of 2π. Sine and cosecant begin their period at 2kπ − π/2 (where k is an integer), finish it at 2kπ + π/2,
and then reverse themselves over 2kπ + π/2 to 2kπ + 3π/2. Cosine and secant begin their period at 2kπ, finish it at
2kπ + π, and then reverse themselves over 2kπ + π to 2kπ + 2π. Tangent begins its period at 2kπ − π/2, finishes it at
2kπ + π/2, and then repeats it (forward) over 2kπ + π/2 to 2kπ + 3π/2. Cotangent begins its period at 2kπ, finishes
it at 2kπ + π, and then repeats it (forward) over 2kπ + π to 2kπ + 2π.
This periodicity is reflected in the general inverses, where k is some integer:

sin(y) = x ⇔ y = arcsin(x) + 2kπ or y = π − arcsin(x) + 2kπ

sin(y) = x ⇔ y = (−1)^k arcsin(x) + kπ

cos(y) = x ⇔ y = arccos(x) + 2kπ or y = 2π − arccos(x) + 2kπ

cos(y) = x ⇔ y = ±arccos(x) + 2kπ

tan(y) = x ⇔ y = arctan(x) + kπ

cot(y) = x ⇔ y = arccot(x) + kπ

sec(y) = x ⇔ y = arcsec(x) + 2kπ or y = 2π − arcsec(x) + 2kπ

csc(y) = x ⇔ y = arccsc(x) + 2kπ or y = π − arccsc(x) + 2kπ



Application: nding the angle of a right triangle

Inverse trigonometric functions are useful when trying to determine the remaining two angles of a right triangle when
the lengths of the sides of the triangle are known. Recalling the right-triangle definitions of sine, for example, it
follows that

θ = arcsin(opposite/hypotenuse).

Often, the hypotenuse is unknown and would need to be calculated before using arcsine or arccosine using the
Pythagorean Theorem: a² + b² = h², where h is the length of the hypotenuse. Arctangent comes in handy in
this situation, as the length of the hypotenuse is not needed.

θ = arctan(opposite/adjacent).

For example, suppose a roof drops 8 feet as it runs out 20 feet. The roof makes an angle θ with the horizontal, where
θ may be computed as follows:

θ = arctan(opposite/adjacent) = arctan(rise/run) = arctan(8/20) ≈ 21.8°.

138.5.2 In computer science and engineering


Two-argument variant of arctangent

Main article: atan2

The two-argument atan2 function computes the arctangent of y/x given y and x, but with a range of (−π, π]. In
other words, atan2(y, x) is the angle between the positive x-axis of a plane and the point (x, y) on it, with positive
sign for counter-clockwise angles (upper half-plane, y > 0), and negative sign for clockwise angles (lower half-plane,
y < 0). It was first introduced in many computer programming languages, but it is now also common in other fields
of science and engineering.
In terms of the standard arctan function, that is with range of (−π/2, π/2), it can be expressed as follows:


atan2(y, x) =
  arctan(y/x)        if x > 0
  arctan(y/x) + π    if y ≥ 0, x < 0
  arctan(y/x) − π    if y < 0, x < 0
  π/2                if y > 0, x = 0
  −π/2               if y < 0, x = 0
  undefined          if y = 0, x = 0

It also equals the principal value of the argument of the complex number x + iy.
This function may also be defined using the tangent half-angle formulae as follows:

atan2(y, x) = 2 arctan(y/(√(x² + y²) + x))

provided that either x > 0 or y ≠ 0. However, this fails if given x ≤ 0 and y = 0, so the expression is unsuitable for
computational use.
The above argument order (y, x) seems to be the most common, and in particular is used in ISO standards such as
the C programming language, but a few authors may use the opposite convention (x, y) so some caution is warranted.
These variations are detailed at atan2.
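The piecewise definition can be transcribed directly; the sketch below (the function name is an assumption made for the example) compares it with Python's math.atan2 at a few points:

import math

def atan2_from_arctan(y, x):
    if x > 0:
        return math.atan(y / x)
    if x < 0 and y >= 0:
        return math.atan(y / x) + math.pi
    if x < 0 and y < 0:
        return math.atan(y / x) - math.pi
    if x == 0 and y > 0:
        return math.pi / 2
    if x == 0 and y < 0:
        return -math.pi / 2
    raise ValueError("atan2(0, 0) is undefined")

for y, x in [(1, 1), (1, -1), (-1, -1), (2, 0), (-2, 0)]:
    assert abs(atan2_from_arctan(y, x) - math.atan2(y, x)) < 1e-12
print("matches math.atan2 on the sample points")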

Arctangent function with location parameter

In many applications the solution y of the equation x = tan(y) is to come as close as possible to a given value
−∞ < η < ∞. The adequate solution is produced by the parameter-modified arctangent function

y = arctanη(x) := arctan(x) + π · rni((η − arctan(x))/π).

The function rni rounds to the nearest integer.
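A small Python sketch of this parameter-modified arctangent, using Python's built-in round as the rni function (an implementation assumption):

import math

def arctan_eta(x, eta):
    base = math.atan(x)
    return base + math.pi * round((eta - base) / math.pi)

x, eta = 1.0, 10.0
y = arctan_eta(x, eta)
print(abs(math.tan(y) - x) < 1e-9)            # True: y solves tan(y) = x
print(abs(y - eta) <= math.pi / 2 + 1e-9)     # True: y is the solution closest to eta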

Numerical accuracy

For angles near 0 and π, arccosine is ill-conditioned and will thus calculate the angle with reduced accuracy in a
computer implementation (due to the limited number of digits).[16] Similarly, arcsine is inaccurate for angles near
−π/2 and π/2.

138.6 See also


Inverse exsecant

Inverse versine

Inverse hyperbolic function

List of integrals of inverse trigonometric functions

List of trigonometric identities

Trigonometric function

138.7 References
[1] Taczanowski, Stefan (1978-10-01). On the optimization of some geometric parameters in 14 MeV neutron activation
analysis. Nuclear Instruments and Methods. ScienceDirect. 155 (3): 543546. Retrieved 2017-07-26.

[2] Hazewinkel, Michiel (1994) [1987]. Encyclopaedia of Mathematics (unabridged reprint ed.). Kluwer Academic Publishers
/ Springer Science & Business Media. ISBN 978-155608010-4. ISBN 1556080107.

[3] Ebner, Dieter (2005-07-25). Preparatory Course in Mathematics (PDF) (6 ed.). Department of Physics, University of
Konstanz. Archived (PDF) from the original on 2017-07-26. Retrieved 2017-07-26.

[4] Mejlbro, Leif (2010-11-11). Stability, Riemann Surfaces, Conformal Mappings - Complex Functions Theory (PDF) (1 ed.).
Ventus Publishing ApS / Bookboon. ISBN 978-87-7681-702-2. ISBN 87-7681-702-4. Archived (PDF) from the original
on 2017-07-26. Retrieved 2017-07-26.

[5] Durán, Mario (2012). Mathematical methods for wave propagation in science and engineering. 1: Fundamentals (1 ed.).
Ediciones UC. p. 88. ISBN 978-956141314-6. ISBN 956141314-0.

[6] Hall, Arthur Graham; Frink, Fred Goodrich (January 1909). Chapter II. The Acute Angle [14] Inverse trigonometric
functions. Written at Ann Arbor, Michigan, USA. Trigonometry. Part I: Plane Trigonometry. New York, USA: Henry
Holt and Company / Norwood Press / J. S. Cushing Co. - Berwick & Smith Co., Norwood, Massachusetts, USA. p.
15. Retrieved 2017-08-12. [] = arcsin m: It is frequently read "arc-sine m" or "anti-sine m, since two mutually
inverse functions are said each to be the anti-function of the other. [] A similar symbolic relation holds for the other
trigonometric functions. [] This notation is universally used in Europe and is fast gaining ground in this country. A less
desirable symbol, = sin1 m, is still found in English and American texts. The notation = inv sin m is perhaps better
still on account of its general applicability. []

[7] Klein, Christian Felix (1924) [1902]. Elementarmathematik vom höheren Standpunkt aus: Arithmetik, Algebra, Analysis
(in German). 1 (3rd ed.). Berlin: J. Springer.

[8] Klein, Christian Felix (2004) [1932]. Elementary Mathematics from an Advanced Standpoint: Arithmetic, Algebra, Analysis.
Translated by Hedrick, E. R.; Noble, C. A. (Translation of 3rd German ed.). Dover Publications, Inc. / The Macmillan
Company. ISBN 978-0-48643480-3. ISBN 0-48643480-X. Retrieved 2017-08-13.

[9] Dörrie, Heinrich (1965). Triumph der Mathematik. Translated by Antin, David. Dover Publications. p. 69. ISBN 0-486-
61348-8.

[10] Beach, Frederick Converse; Rines, George Edwin, eds. (1912). Inverse trigonometric functions. The Americana: a
universal reference library. 21.

[11] Cajori, Florian (1919). A History of Mathematics (2 ed.). New York, USA: The Macmillan Company. p. 272.

[12] Herschel, John Frederick William (1813). On a remarkable Application of Cotess Theorem. Philosophical Transactions.
Royal Society, London. 103 (1): 8.

[13] Korn, Grandino Arthur; Korn, Theresa M. (2000) [1961]. 21.2.4. Inverse Trigonometric Functions. Mathematical
handbook for scientists and engineers: Denitions, theorems, and formulars for reference and review (3 ed.). Mineola, New
York, USA: Dover Publications, Inc. p. 811. ISBN 978-0-486-41147-7.

[14] Bhatti, Sanaullah; Nawab-ud-Din; Ahmed, Bashir; Yousuf, S. M.; Taheem, Allah Bukhsh (1999). Dierentiation of
Trigonometric, Logarithmic and Exponential Functions. In Ellahi, Mohammad Maqbool; Dar, Karamat Hussain; Hussain,
Faheem. Calculus and Analytic Geometry (1 ed.). Lahore: Punjab Textbook Board. p. 140.

[15] Borwein, Jonathan; Bailey, David; Girgensohn, Roland (2004). Experimentation in Mathematics: Computational Paths to
Discovery (1 ed.). Wellesley, MA, USA: A. K. Peters. p. 51. ISBN 1-56881-136-5.

[16] Gade, Kenneth (2010). A non-singular horizontal position representation (PDF). The Journal of Navigation. Cambridge
University Press. 63 (3): 395417. doi:10.1017/S0373463309990415.

138.8 External links


Weisstein, Eric W. Inverse Trigonometric Functions. MathWorld.

Weisstein, Eric W. Inverse Tangent. MathWorld.


The usual principal values of the arctan(x) and arccot(x) functions graphed on the cartesian plane.

Principal values of the arcsec(x) and arccsc(x) functions graphed on the cartesian plane.

A right triangle.
Chapter 139

Join (sigma algebra)

In mathematics, the join of two sigma algebras over the same set X is the coarsest sigma algebra containing both.[1]
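On a finite set a sigma algebra is simply a finite algebra of sets, so the join can be computed by closing the union of the two sigma algebras under complementation and union. The following Python sketch does exactly that (the set X and the two sigma algebras are assumptions chosen for the example):

from itertools import combinations

X = frozenset({1, 2, 3, 4})
A = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), X}   # sigma algebra generated by {1, 2}
B = {frozenset(), frozenset({1, 3}), frozenset({2, 4}), X}   # sigma algebra generated by {1, 3}

def join(sig1, sig2):
    current = set(sig1) | set(sig2)
    while True:                                   # close under complement and union
        new = {X - S for S in current}
        new |= {S | T for S, T in combinations(current, 2)}
        if new <= current:
            return current
        current |= new

J = join(A, B)
print(len(J), frozenset({1}) in J)   # 16 True: here the join is the full power set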

139.1 References
[1] Peter Walters, An Introduction to Ergodic Theory (1982) Springer Verlag, ISBN 0-387-90599-5

Chapter 140

Veitch chart

The Karnaugh map (KM or K-map) is a method of simplifying Boolean algebra expressions. Maurice Karnaugh
introduced it in 1953[1] as a refinement of Edward Veitch's 1952 Veitch chart,[2][3] which actually was a rediscovery
of Allan Marquand's 1881 logical diagram[4] aka Marquand diagram[3] but with a focus now set on its utility for
switching circuits.[3] Veitch charts are therefore also known as Marquand–Veitch diagrams,[3] and Karnaugh maps as
Karnaugh–Veitch maps (KV maps).
The Karnaugh map reduces the need for extensive calculations by taking advantage of humans' pattern-recognition
capability.[1] It also permits the rapid identification and elimination of potential race conditions.
The required Boolean results are transferred from a truth table onto a two-dimensional grid where, in Karnaugh maps,
the cells are ordered in Gray code,[5][3] and each cell position represents one combination of input conditions, while
each cell value represents the corresponding output value. Optimal groups of 1s or 0s are identified, which represent
the terms of a canonical form of the logic in the original truth table.[6] These terms can be used to write a minimal
Boolean expression representing the required logic.
Karnaugh maps are used to simplify real-world logic requirements so that they can be implemented using a minimum
number of physical logic gates. A sum-of-products expression can always be implemented using AND gates feeding
into an OR gate, and a product-of-sums expression leads to OR gates feeding an AND gate.[7] Karnaugh maps can
also be used to simplify logic expressions in software design. Boolean conditions, as used for example in conditional
statements, can get very complicated, which makes the code difficult to read and to maintain. Once minimised,
canonical sum-of-products and product-of-sums expressions can be implemented directly using AND and OR logic
operators.[8]

140.1 Example
Karnaugh maps are used to facilitate the simplication of Boolean algebra functions. For example, consider the
Boolean function described by the following truth table.
Following are two dierent notations describing the same function in unsimplied Boolean algebra, using the Boolean
variables A, B, C, D, and their inverses.


f (A, B, C, D) = mi , i {6, 8, 9, 10, 11, 12, 13, 14} where mi are the minterms to map (i.e., rows that
have output 1 in the truth table).

f (A, B, C, D) = Mi , i {0, 1, 2, 3, 4, 5, 7, 15} where Mi are the maxterms to map (i.e., rows that have
output 0 in the truth table).

140.1.1 Karnaugh map

In the example above, the four input variables can be combined in 16 different ways, so the truth table has 16 rows,
and the Karnaugh map has 16 positions. The Karnaugh map is therefore arranged in a 4 × 4 grid.


An example Karnaugh map. This image actually shows two Karnaugh maps: for the function f, using minterms (colored rectangles),
and for its complement, using maxterms (gray rectangles). In the image, E() signifies a sum of minterms, denoted in the article as
∑mi. The annotations on the map read f(A,B,C,D) = E(6,8,9,10,11,12,13,14), F = AC' + AB' + BCD' + AD', and
F = (A + B)(A + C)(B' + C' + D')(A + D').

The row and column indices (shown across the top, and down the left side of the Karnaugh map) are ordered in Gray
code rather than binary numerical order. Gray code ensures that only one variable changes between each pair of
adjacent cells. Each cell of the completed Karnaugh map contains a binary digit representing the function's output
for that combination of inputs.
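The Gray-code ordering of the labels is easy to generate; the following Python sketch produces the 2-bit sequence 00, 01, 11, 10 used for the row and column labels and checks the single-bit-change property, including the wrap-around:

def gray_code(bits):
    return [n ^ (n >> 1) for n in range(2 ** bits)]

codes = gray_code(2)
print([format(g, "02b") for g in codes])   # ['00', '01', '11', '10']
assert all(bin(codes[i] ^ codes[(i + 1) % len(codes)]).count("1") == 1
           for i in range(len(codes)))     # adjacent labels differ in exactly one bit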
After the Karnaugh map has been constructed, it is used to find one of the simplest possible forms (a canonical
form) for the information in the truth table. Adjacent 1s in the Karnaugh map represent opportunities to simplify
the expression. The minterms ('minimal terms') for the final expression are found by encircling groups of 1s in the
map. Minterm groups must be rectangular and must have an area that is a power of two (i.e., 1, 2, 4, 8...). Minterm
rectangles should be as large as possible without containing any 0s. Groups may overlap in order to make each one
larger. The optimal groupings in the example below are marked by the green, red and blue lines, and the red and
green groups overlap. The red group is a 2 × 2 square, the green group is a 4 × 1 rectangle, and the overlap area is
indicated in brown.
The cells are often denoted by a shorthand which describes the logical value of the inputs that the cell covers. For
example, AD would mean a cell which covers the 2×2 area where A and D are true, i.e. the cells numbered 13, 9,
15, 11 in the diagram above. On the other hand, AD' would mean the cells where A is true and D is false (that is, D'
is true).

K-map drawn on a torus, and in a plane. The dot-marked cells are adjacent.

K-map construction. Instead of containing output values, this diagram shows the numbers of outputs, therefore it is not a Karnaugh
map.
The grid is toroidally connected, which means that rectangular groups can wrap across the edges (see picture). Cells
on the extreme right are actually 'adjacent' to those on the far left; similarly, so are those at the very top and those
at the bottom. Therefore, AD' can be a valid term (it includes cells 12 and 8 at the top, and wraps to the bottom to
include cells 10 and 14), as is B'D', which includes the four corners.

In three dimensions, one can bend a rectangle into a torus.

140.1.2 Solution
Once the Karnaugh map has been constructed and the adjacent 1s linked by rectangular and square boxes, the algebraic
minterms can be found by examining which variables stay the same within each box.
For the red grouping:

A is the same and is equal to 1 throughout the box, therefore it should be included in the algebraic representation
of the red minterm.
B does not maintain the same state (it shifts from 1 to 0), and should therefore be excluded.
C does not change. It is always 0, so its complement, NOT-C, should be included. Thus, C' should be included.
D changes, so it is excluded.

Thus the first minterm in the Boolean sum-of-products expression is AC'.


For the green grouping, A and B maintain the same state, while C and D change. B is 0 and has to be negated before
it can be included. The second term is therefore AB'. Note that it is acceptable that the green grouping overlaps with
the red one.
In the same way, the blue grouping gives the term BCD'.
The solutions of each grouping are combined: the normal form of the circuit is AC' + AB' + BCD'.
Thus the Karnaugh map has guided a simplification of

f(A, B, C, D) = A'BCD' + AB'C'D' + AB'C'D + AB'CD' + AB'CD + ABC'D' + ABC'D + ABCD'
             = AC' + AB' + BCD'
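This simplification can be confirmed by brute force: the following Python sketch evaluates the minimized expression over all 16 input combinations and recovers exactly the original minterm list:

from itertools import product

def f_min(a, b, c, d):                     # AC' + AB' + BCD'
    return (a and not c) or (a and not b) or (b and c and not d)

minterms = {8 * a + 4 * b + 2 * c + d
            for a, b, c, d in product((0, 1), repeat=4) if f_min(a, b, c, d)}
print(sorted(minterms))                    # [6, 8, 9, 10, 11, 12, 13, 14]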

Diagram showing two K-maps. The K-map for the function f(A, B, C, D) is shown as colored rectangles which correspond to
minterms. The brown region is an overlap of the red 2×2 square and the green 4×1 rectangle. The K-map for the inverse of f is
shown as gray rectangles, which correspond to maxterms. The annotations read f(A,B,C,D) = E(6,8,9,10,11,12,13,14),
F = AC' + AB' + BCD', and F = (A + B)(A + C)(B' + C' + D').

It would also have been possible to derive this simplification by carefully applying the axioms of Boolean algebra, but
the time it takes to do that grows exponentially with the number of terms.

140.1.3 Inverse
The inverse of a function is solved in the same way by grouping the 0s instead.
The three terms to cover the inverse are all shown with grey boxes with different colored borders:

brown: A'B'

gold: A'C'

blue: BCD

This yields the inverse:

f(A, B, C, D)' = A'B' + A'C' + BCD

Through the use of De Morgan's laws, the product of sums can be determined:

f(A, B, C, D) = (f(A, B, C, D)')'
             = (A'B' + A'C' + BCD)'
             = (A + B)(A + C)(B' + C' + D')

140.1.4 Don't cares

The value of f for ABCD = 1111 is replaced by a don't care. This removes the green term completely and allows the red term to be
larger. It also allows the blue inverse term to shift and become larger. The annotations read f(A,B,C,D) = E(6,8,9,10,11,12,13,14),
F = A + BCD', and F = (A + B)(A + C)(A + D').

Karnaugh maps also allow easy minimizations of functions whose truth tables include "don't care" conditions. A
don't care condition is a combination of inputs for which the designer doesn't care what the output is. Therefore,
don't care conditions can either be included in or excluded from any rectangular group, whichever makes it larger.
They are usually indicated on the map with a dash or X.
The example on the right is the same as the example above but with the value of f(1,1,1,1) replaced by a don't care.
This allows the red term to expand all the way down and, thus, removes the green term completely.
This yields the new minimum equation:

f(A, B, C, D) = A + BCD'

Note that the first term is just A, not AC'. In this case, the don't care has dropped a term (the green rectangle);
simplified another (the red one); and removed the race hazard (removing the yellow term as shown in the following
section on race hazards).
The inverse case is simplified as follows:

f(A, B, C, D)' = A'B' + A'C' + A'D

140.2 Race hazards

140.2.1 Elimination
Karnaugh maps are useful for detecting and eliminating race conditions. Race hazards are very easy to spot using a
Karnaugh map, because a race condition may exist when moving between any pair of adjacent, but disjoint, regions
circumscribed on the map. However, because of the nature of Gray coding, adjacent has a special definition explained
above: we are in fact moving on a torus, rather than a rectangle, wrapping around the top, bottom, and the sides.

In the example above, a potential race condition exists when C is 1 and D is 0, A is 1, and B changes from 1
to 0 (moving from the blue state to the green state). For this case, the output is defined to remain unchanged
at 1, but because this transition is not covered by a specific term in the equation, a potential for a glitch (a
momentary transition of the output to 0) exists.
There is a second potential glitch in the same example that is more difficult to spot: when D is 0 and A and B
are both 1, with C changing from 1 to 0 (moving from the blue state to the red state). In this case the glitch
wraps around from the top of the map to the bottom.

Whether glitches will actually occur depends on the physical nature of the implementation, and whether we need to
worry about it depends on the application. In clocked logic, it is enough that the logic settles on the desired value in
time to meet the timing deadline. In our example, we are not considering clocked logic.
In our case, an additional term of AD' would eliminate the potential race hazard, bridging between the green and blue
output states or blue and red output states: this is shown as the yellow region (which wraps around from the bottom
to the top of the right half) in the adjacent diagram.
The term is redundant in terms of the static logic of the system, but such redundant, or consensus terms, are often
needed to assure race-free dynamic performance.
Similarly, an additional term of A'D must be added to the inverse to eliminate another potential race hazard. Applying
De Morgan's laws creates another product of sums expression for f, but with a new factor of (A + D').

140.2.2 2-variable map examples


The following are all the possible 2-variable, 2 × 2 Karnaugh maps. Listed with each is the minterms as a function
of ∑m() and the race hazard free (see previous section) minimum equation. A minterm is defined as an expression
that gives the most minimal form of expression of the mapped variables. All possible horizontal and vertical interconnected
blocks can be formed. These blocks must be of the size of the powers of 2 (1, 2, 4, 8, 16, 32, ...). These
expressions create a minimal logical mapping of the minimal logic variable expressions for the binary expressions to
be mapped. Here are all the blocks with one field.
A block can be continued across the bottom, top, left, or right of the chart. That can even wrap beyond the edge
of the chart for variable minimization. This is because each logic variable corresponds to each vertical column and
horizontal row. A visualization of the k-map can be considered cylindrical. The fields at edges on the left and right
are adjacent, and the top and bottom are adjacent. K-maps for 4 variables must be depicted as a donut or torus shape.
The four corners of the square drawn by the k-map are adjacent. Still more complex maps are needed for 5 variables
and more.

Race hazards are present in this diagram.

Above diagram with consensus terms added to avoid race hazards.

∑m(0); K = 0

∑m(1); K = A'B'

∑m(2); K = AB'

∑m(3); K = A'B

∑m(4); K = AB

∑m(1,2); K = B'

∑m(1,3); K = A'

∑m(1,4); K = A'B' + AB

∑m(2,3); K = AB' + A'B

∑m(2,4); K = A

∑m(3,4); K = B

∑m(1,2,3); K = A' + B'

∑m(1,2,4); K = A + B'

∑m(1,3,4); K = A' + B

∑m(2,3,4); K = A + B

∑m(1,2,3,4); K = 1

140.3 Other graphical methods


Alternative graphical minimization methods include:

Marquand diagram (1881) by Allan Marquand (1853–1924)[4][3]

Harvard minimizing chart (1951) by Howard H. Aiken and Martha L. Whitehouse of the Harvard Computation
Laboratory[9][1][10][11]

Veitch chart (1952) by Edward Veitch (1924–2013)[2][3]

Svoboda's graphical aids (1956) and triadic map by Antonín Svoboda (1907–1980)[12][13][14][15]

Händler circle graph (aka Händlerscher Kreisgraph, Kreisgraph nach Händler, Händler-Kreisgraph, Händler-
Diagramm, Minimisierungsgraph [sic]) (1958) by Wolfgang Händler (1920–1998)[16][17][18][14][19][20][21][22][23]

Graph method (1965) by Herbert Kortum (1907–1979)[24][25][26][27][28][29]

140.4 See also


Circuit minimization
Espresso heuristic logic minimizer
List of Boolean algebra topics
Quine–McCluskey algorithm
Algebraic normal form (ANF)
Ring sum normal form (RSNF)

Zhegalkin normal form


Reed-Muller expansion
Venn diagram
Punnett square (a similar diagram in biology)

140.5 References
[1] Karnaugh, Maurice (November 1953) [1953-04-23, 1953-03-17]. The Map Method for Synthesis of Combinational Logic
Circuits (PDF). Transactions of the American Institute of Electrical Engineers part I. 72 (9): 593–599. doi:10.1109/TCE.1953.6371932.
Paper 53-217. Archived (PDF) from the original on 2017-04-16. Retrieved 2017-04-16. (NB. Also contains a short review
by Samuel H. Caldwell.)

[2] Veitch, Edward W. (1952-05-03) [1952-05-02]. A Chart Method for Simplifying Truth Functions. ACM Annual Conference/Annual Meeting: Proceedings of the 1952 ACM Annual Meeting (Pittsburgh). New York, USA: ACM: 127–133. doi:10.1145/609784.609801.

[3] Brown, Frank Markham (2012) [2003, 1990]. Boolean Reasoning - The Logic of Boolean Equations (reissue of 2nd ed.).
Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-42785-0.

[4] Marquand, Allan (1881). XXXIII: On Logical Diagrams for n terms. The London, Edinburgh, and Dublin Philosophical
Magazine and Journal of Science. 5. 12 (75): 266270. doi:10.1080/14786448108627104. Retrieved 2017-05-15. (NB.
Quite many secondary sources erroneously cite this work as A logical diagram for n terms or On a logical diagram for
n terms.)

[5] Wakerly, John F. (1994). Digital Design: Principles & Practices. New Jersey, USA: Prentice Hall. pp. 222, 48–49. ISBN 0-13-211459-3. (NB. The two page sections taken together say that K-maps are labeled with Gray code. The first section says that they are labeled with a code that changes only one bit between entries and the second section says that such a code is called Gray code.)

[6] Belton, David (April 1998). Karnaugh Maps Rules of Simplication. Archived from the original on 2017-04-18.
Retrieved 2009-05-30.

[7] Dodge, Nathan B. (September 2015). Simplifying Logic Circuits with Karnaugh Maps (PDF). The University of Texas
at Dallas, Erik Jonsson School of Engineering and Computer Science. Archived (PDF) from the original on 2017-04-18.
Retrieved 2017-04-18.

[8] Cook, Aaron. Using Karnaugh Maps to Simplify Code. Quantum Rarity. Archived from the original on 2017-04-18.
Retrieved 2012-10-07.

[9] Aiken, Howard H.; Blaauw, Gerrit; Burkhart, William; Burns, Robert J.; Cali, Lloyd; Canepa, Michele; Ciampa, Carmela
M.; Coolidge, Jr., Charles A.; Fucarile, Joseph R.; Gadd, Jr., J. Orten; Gucker, Frank F.; Harr, John A.; Hawkins, Robert
L.; Hayes, Miles V.; Hofheimer, Richard; Hulme, William F.; Jennings, Betty L.; Johnson, Stanley A.; Kalin, Theodore;
Kincaid, Marshall; Lucchini, E. Edward; Minty, William; Moore, Benjamin L.; Remmes, Joseph; Rinn, Robert J.; Roche,
John W.; Sanbord, Jacquelin; Semon, Warren L.; Singer, Theodore; Smith, Dexter; Smith, Leonard; Strong, Peter F.;
Thomas, Helene V.; Wang, An; Whitehouse, Martha L.; Wilkins, Holly B.; Wilkins, Robert E.; Woo, Way Dong; Lit-
tle, Elbert P.; McDowell, M. Scudder (1952) [January 1951]. Chapter V: Minimizing charts. Synthesis of electronic
computing and control circuits (second printing, revised ed.). Wright-Patterson Air Force Base: Harvard University Press
(Cambridge, Massachusetts, USA) / Geoffrey Cumberlege Oxford University Press (London). pp. preface, 50–67. Re-
trieved 2017-04-16. [] Martha Whitehouse constructed the minimizing charts used so profusely throughout this book,
and in addition prepared minimizing charts of seven and eight variables for experimental purposes. [] Hence, the present
writer is obliged to record that the general algebraic approach, the switching function, the vacuum-tube operator, and the
minimizing chart are his proposals, and that he is responsible for their inclusion herein. [] (NB. Work commenced in
April 1948.)

[10] Phister, Jr., Montgomery (1959) [December 1958]. Logical design of digital computers. New York, USA: John Wiley &
Sons Inc. pp. 7583. ISBN 0471688053.

[11] Curtis, H. Allen (1962). A new approach to the design of switching circuits. Princeton: D. van Nostrand Company.

[12] Svoboda, Antonín (1956). Graficko-mechanické pomůcky užívané při analyse a synthese kontaktových obvodů [Utilization of graphical-mechanical aids for the analysis and synthesis of contact circuits]. Stroje na zpracování informací [Symposium IV on information processing machines] (in Czech). IV. Prague: Czechoslovak Academy of Sciences, Research Institute of Mathematical Machines. pp. 9–21.

[13] Svoboda, Antonín (1956). Graphical Mechanical Aids for the Synthesis of Relay Circuits. Nachrichtentechnische Fachberichte (NTF), Beihefte der Nachrichtentechnischen Zeitschrift (NTZ). Braunschweig, Germany: Vieweg-Verlag.

[14] Steinbuch, Karl W.; Weber, Wolfgang; Heinemann, Traute, eds. (1974) [1967]. Taschenbuch der Informatik - Band II - Struktur und Programmierung von EDV-Systemen. Taschenbuch der Nachrichtenverarbeitung (in German). 2 (3 ed.). Berlin, Germany: Springer-Verlag. pp. 25, 62, 96, 122–123, 238. ISBN 3-540-06241-6. LCCN 73-80607.

[15] Svoboda, Antonín; White, Donnamaie E. (2016) [1979-08-01]. Advanced Logical Circuit Design Techniques (PDF) (retyped electronic reissue ed.). Garland STPM Press (original issue) / WhitePubs (reissue). ISBN 978-0-8240-7014-4. Archived (PDF) from the original on 2017-04-14. Retrieved 2017-04-15.

[16] Händler, Wolfgang (1958). Ein Minimisierungsverfahren zur Synthese von Schaltkreisen: Minimisierungsgraphen (Dissertation) (in German). Technische Hochschule Darmstadt. D 17. (NB. Although written by a German, the title contains an anglicism; the correct German term would be Minimierung instead of Minimisierung.)

[17] Händler, Wolfgang (2013) [1961]. Zum Gebrauch von Graphen in der Schaltkreis- und Schaltwerktheorie. In Peschl, Ernst Ferdinand; Unger, Heinz. Colloquium über Schaltkreis- und Schaltwerk-Theorie - Vortragsauszüge vom 26. bis 28. Oktober 1960 in Bonn - Band 3 von Internationale Schriftenreihe zur Numerischen Mathematik [International Series of Numerical Mathematics] (ISNM) (in German). 3. Institut für Angewandte Mathematik, Universität Saarbrücken, Rheinisch-Westfälisches Institut für Instrumentelle Mathematik: Springer Basel AG / Birkhäuser Verlag Basel. pp. 169–198. ISBN 978-3-0348-5771-0. doi:10.1007/978-3-0348-5770-3.

[18] Berger, Erich R.; Händler, Wolfgang (1967) [1962]. Steinbuch, Karl W.; Wagner, Siegfried W., eds. Taschenbuch der Nachrichtenverarbeitung (in German) (2 ed.). Berlin, Germany: Springer-Verlag OHG. pp. 64, 1034–1035, 1036, 1038. LCCN 67-21079. Title No. 1036. "[...] Übersichtlich ist die Darstellung nach Händler, die sämtliche Punkte, numeriert nach dem Gray-Code [...], auf dem Umfeld eines Kreises anordnet. Sie erfordert allerdings sehr viel Platz. [...]" [Händler's illustration, where all points, numbered according to the Gray code, are arranged on the circumference of a circle, is easily comprehensible. It needs, however, a lot of space.]

[19] Hotz, Günter (1974). Schaltkreistheorie [Switching circuit theory]. DeGruyter Lehrbuch (in German). Walter de Gruyter & Co. p. 117. ISBN 3-11-00-2050-5. "[...] Der Kreisgraph von Händler ist für das Auffinden von Primimplikanten gut brauchbar. Er hat den Nachteil, daß er schwierig zu zeichnen ist. Diesen Nachteil kann man allerdings durch die Verwendung von Schablonen verringern. [...]" [The circle graph by Händler is well suited to find prime implicants. A disadvantage is that it is difficult to draw. This can be remedied using stencils.]

[20] Informatik Sammlung Erlangen (ISER)" (in German). Erlangen, Germany: Friedrich-Alexander Universitt. 2012-03-
13. Retrieved 2017-04-12. (NB. Shows a picture of a Kreisgraph by Händler.)

[21] Informatik Sammlung Erlangen (ISER) - Impressum (in German). Erlangen, Germany: Friedrich-Alexander Universität.
2012-03-13. Archived from the original on 2012-02-26. Retrieved 2017-04-15. (NB. Shows a picture of a Kreisgraph by
Händler.)

[22] Zemanek, Heinz (2013) [1990]. Geschichte der Schaltalgebra [History of circuit switching algebra]. In Broy, Manfred. Informatik und Mathematik [Computer Sciences and Mathematics] (in German). Springer-Verlag. pp. 43–72. ISBN 9783642766770. "Einen Weg besonderer Art, der damals zu wenig beachtet wurde, wies W. Händler in seiner Dissertation [...] mit einem Kreisdiagramm. [...]" (NB. Collection of papers at a colloquium held at the Bayerische Akademie der Wissenschaften, 1989-06-12/14, in honor of Friedrich L. Bauer.)

[23] Bauer, Friedrich Ludwig; Wirsing, Martin (March 1991). Elementare Aussagenlogik (in German). Berlin / Heidelberg: Springer-Verlag. pp. 54–56, 71, 112–113, 138–139. ISBN 978-3-540-52974-3. "[...] handelt es sich um ein Händler-Diagramm [...], mit den Würfelecken als Ecken eines 2^m-gons. [...] Abb. [...] zeigt auch Gegenstücke für andere Dimensionen. Durch waagerechte Linien sind dabei Tupel verbunden, die sich nur in der ersten Komponente unterscheiden; durch senkrechte Linien solche, die sich nur in der zweiten Komponente unterscheiden; durch 45°-Linien und 135°-Linien solche, die sich nur in der dritten Komponente unterscheiden usw. Als Nachteil der Händler-Diagramme wird angeführt, daß sie viel Platz beanspruchen. [...]"

[24] Kortum, Herbert (1965). Minimierung von Kontaktschaltungen durch Kombination von Kürzungsverfahren und Graphenmethoden. messen-steuern-regeln (msr) (in German). Verlag Technik. 8 (12): 421–425.

[25] Kortum, Herbert (1966). Konstruktion und Minimierung von Halbleiterschaltnetzwerken mittels Graphentransformation. messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (1): 9–12.

[26] Kortum, Herbert (1966). Weitere Bemerkungen zur Minimierung von Schaltnetzwerken mittels Graphenmethoden. messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (3): 96–102.

[27] Kortum, Herbert (1966). Weitere Bemerkungen zur Behandlung von Schaltnetzwerken mittels Graphen. messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (5): 151–157.

[28] Kortum, Herbert (1967). "Über zweckmäßige Anpassung der Graphenstruktur diskreter Systeme an vorgegebene Aufgabenstellungen". messen-steuern-regeln (msr) (in German). Verlag Technik. 10 (6): 208–211.

[29] Tafel, Hans Jörg (1971). 4.3.5. Graphenmethode zur Vereinfachung von Schaltfunktionen. Written at RWTH, Aachen, Germany. Einführung in die digitale Datenverarbeitung [Introduction to digital information processing] (in German). Munich, Germany: Carl Hanser Verlag. pp. 98–105, 107–113. ISBN 3-446-10569-7.

140.6 Further reading


Katz, Randy Howard (1998) [1994]. Contemporary Logic Design. The Benjamin/Cummings Publishing Company. pp. 70–85. ISBN 0-8053-2703-7. doi:10.1016/0026-2692(95)90052-7.

Vingron, Shimon Peter (2004) [2003-11-05]. Karnaugh Maps. Switching Theory: Insight Through Predicate Logic. Berlin, Heidelberg, New York: Springer-Verlag. pp. 57–76. ISBN 3-540-40343-4.

Wickes, William E. (1968). Logic Design with Integrated Circuits. New York, USA: John Wiley & Sons. pp. 36–49. LCCN 68-21185. A refinement of the Venn diagram in that circles are replaced by squares and arranged in a form of matrix. The Veitch diagram labels the squares with the minterms. Karnaugh assigned 1s and 0s to the squares and their labels and deduced the numbering scheme in common use.

Maxfield, Clive "Max" (2006-11-29). Reed-Muller Logic. Logic 101. EETimes. Part 3. Archived from the original on 2017-04-19. Retrieved 2017-04-19.

140.7 External links


Detect Overlapping Rectangles, by Herbert Glarner.
Using Karnaugh maps in practical applications, Circuit design project to control traffic lights.

K-Map Tutorial for 2,3,4 and 5 variables


Karnaugh Map Example

POCKETPC BOOLEAN FUNCTION SIMPLIFICATION, Ledion Bitincka George E. Antoniou


Chapter 141

Law of excluded middle

Not to be confused with fallacy of the excluded middle.


This article uses forms of logical notation. For a concise description of the symbols used in this notation, see List of
logic symbols.

In logic, the law of excluded middle (or the principle of excluded middle) is the third of the three classic laws of
thought. It states that for any proposition, either that proposition is true, or its negation is true.
The law is also known as the law (or principle) of the excluded third, in Latin principium tertii exclusi. Another
Latin designation for this law is tertium non datur: no third (possibility) is given.
The earliest known formulation is in Aristotle's discussion of the principle of non-contradiction, first proposed in On
Interpretation,[1] where he says that of two contradictory propositions (i.e. where one proposition is the negation of
the other) one must be true, and the other false.[2] He also states it as a principle in the Metaphysics book 3, saying that
it is necessary in every case to affirm or deny,[3] and that it is impossible that there should be anything between the
two parts of a contradiction.[4] The principle was stated as a theorem of propositional logic by Russell and Whitehead
in Principia Mathematica as:
*2.11. ⊢ . p ∨ ~p.[5]
The principle should not be confused with the semantical principle of bivalence, which states that every proposition
is either true or false.

141.1 Classic laws of thought


The principle of excluded middle, along with its complement, the law of contradiction (the second of the three classic
laws of thought), are correlates of the law of identity (the first of these laws).

141.2 Analogous laws


Some systems of logic have different but analogous laws. For some finite n-valued logics, there is an analogous law
called the law of excluded n+1th. If negation is cyclic and "∨" is a max operator, then the law can be expressed
in the object language by (P ∨ ~P ∨ ~~P ∨ ... ∨ ~...~P), where "~...~" represents n−1 negation signs and "∨ ... ∨" n−1
disjunction signs. It is easy to check that the sentence must receive at least one of the n truth values (and not a value
that is not one of the n).
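Under one common reading of this construction (truth values 0, ..., n−1 with the top value designated, cyclic negation, and max as disjunction), the formula always evaluates to the designated value. A quick Python check for n = 3, offered only as an illustration of that reading:

N_VALUES = 3

def neg(v):
    # Cyclic negation: 0 -> 1 -> 2 -> 0
    return (v + 1) % N_VALUES

def disj(*vs):
    # "Max" disjunction
    return max(vs)

for p in range(N_VALUES):
    # P v ~P v ~~P covers every truth value, so the max is always the top value.
    assert disj(p, neg(p), neg(neg(p))) == N_VALUES - 1
print("P v ~P v ~~P always receives the designated (top) value.")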
Other systems reject the law entirely.

141.3 Examples
For example, if P is the proposition:


Socrates is mortal.

then the law of excluded middle holds that the logical disjunction:

Either Socrates is mortal, or it is not the case that Socrates is mortal.

is true by virtue of its form alone. That is, the middle position, that Socrates is neither mortal nor not-mortal, is
excluded by logic, and therefore either the first possibility (Socrates is mortal) or its negation (it is not the case that
Socrates is mortal) must be true.
An example of an argument that depends on the law of excluded middle follows.[6] We seek to prove that there exist
two irrational numbers a and b such that

a^b is rational.

It is known that √2 is irrational (see proof). Consider the number

√2^√2

Clearly (excluded middle) this number is either rational or irrational. If it is rational, the proof is complete, and

a = √2 and b = √2.

But if √2^√2 is irrational, then let

a = √2^√2 and b = √2.

Then

a^b = (√2^√2)^√2 = √2^(√2·√2) = √2^2 = 2

and 2 is certainly rational. This concludes the proof.


In the above argument, the assertion this number is either rational or irrational invokes the law of excluded middle.
An intuitionist, for example, would not accept this argument without further support for that statement. This might
come in the form of a proof that the number in question is in fact irrational (or rational, as the case may be); or a
finite algorithm that could determine whether the number is rational.

141.3.1 The Law in non-constructive proofs over the innite


The above proof is an example of a non-constructive proof disallowed by intuitionists:

The proof is non-constructive because it doesn't give specific numbers a and b that satisfy the theorem
but only two separate possibilities, one of which must work. (Actually a = √2^√2 is irrational but there
is no known easy proof of that fact.) (Davis 2000:220)

(Constructive proofs of the specific example above are not hard to produce; for example a = √2 and b = log2 9, both
easily shown irrational, where a^b = 3; a proof allowed by intuitionists.)
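A quick numerical sanity check of this constructive example in Python; it only illustrates the arithmetic (a^b = 2^(log2 9 / 2) = 9^(1/2) = 3) and is of course not itself a proof of irrationality:

import math

a = math.sqrt(2)
b = math.log2(9)
print(a ** b)                       # approximately 3, up to floating-point rounding
print(math.isclose(a ** b, 3.0))    # True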
By non-constructive Davis means that "a proof that there actually are mathematic entities satisfying certain conditions
would not have to provide a method to exhibit explicitly the entities in question" (p. 85). Such proofs presume the
existence of a totality that is complete, a notion disallowed by intuitionists when extended to the infinite; for them
the infinite can never be completed:

In classical mathematics there occur non-constructive or indirect existence proofs, which intuition-
ists do not accept. For example, to prove there exists an n such that P(n), the classical mathematician
may deduce a contradiction from the assumption for all n, not P(n). Under both the classical and the
intuitionistic logic, by reductio ad absurdum this gives not for all n, not P(n). The classical logic allows
this result to be transformed into there exists an n such that P(n), but not in general the intuitionistic...
the classical meaning, that somewhere in the completed infinite totality of the natural numbers there
occurs an n such that P(n), is not available to him, since he does not conceive the natural numbers as a
completed totality.[7] (Kleene 1952:4950)

Indeed, David Hilbert and Luitzen E. J. Brouwer both give examples of the law of excluded middle extended to the
infinite. Hilbert's example: the assertion that either there are only finitely many prime numbers or there are infinitely
many (quoted in Davis 2000:97); and Brouwer's: Every mathematical species is either finite or infinite. (Brouwer
1923 in van Heijenoort 1967:336).
In general, intuitionists allow the use of the law of excluded middle when it is confined to discourse over finite
collections (sets), but not when it is used in discourse over infinite sets (e.g. the natural numbers). Thus intuitionists
absolutely disallow the blanket assertion: For all propositions P concerning infinite sets D: P or ~P" (Kleene
1952:48).

For more about the conflict between the intuitionists (e.g. Brouwer) and the formalists (Hilbert) see
Foundations of mathematics and Intuitionism.

Putative counterexamples to the law of excluded middle include the liar paradox or Quine's paradox. Certain res-
olutions of these paradoxes, particularly Graham Priest's dialetheism as formalised in LP, have the law of excluded
middle as a theorem, but resolve out the Liar as both true and false. In this way, the law of excluded middle is true,
but because truth itself, and therefore disjunction, is not exclusive, it says next to nothing if one of the disjuncts is
paradoxical, or both true and false.

141.4 History

141.4.1 Aristotle
Aristotle wrote that ambiguity can arise from the use of ambiguous names, but cannot exist in the facts themselves:

It is impossible, then, that being a man should mean precisely not being a man, if man not
only signifies something about one subject but also has one significance. ... And it will not be possible
to be and not to be the same thing, except in virtue of an ambiguity, just as if one whom we call man,
and others were to call not-man"; but the point in question is not this, whether the same thing can at
the same time be and not be a man in name, but whether it can be in fact. (Metaphysics 4.4, W.D. Ross
(trans.), GBWW 8, 525526).

Aristotle's assertion that "...it will not be possible to be and not to be the same thing", which would be written in
propositional logic as ¬(P ∧ ¬P), is a statement modern logicians could call the law of excluded middle (P ∨ ¬P), as
distribution of the negation of Aristotle's assertion makes them equivalent, regardless that the former claims that no
statement is both true and false, while the latter requires that any statement is either true or false.
However, Aristotle also writes, since it is impossible that contradictories should be at the same time true of the same
thing, obviously contraries also cannot belong at the same time to the same thing (Book IV, CH 6, p. 531). He then
proposes that there cannot be an intermediate between contradictories, but of one subject we must either affirm or
deny any one predicate (Book IV, CH 7, p. 531). In the context of Aristotle's traditional logic, this is a remarkably
precise statement of the law of excluded middle, P ∨ ¬P.

141.4.2 Leibniz
Its usual form, Every judgment is either true or false [footnote 9]..."(from Kolmogorov in van Hei-
jenoort, p. 421) footnote 9: This is Leibniz's very simple formulation (see Nouveaux Essais, IV,2)....
(ibid p 421)

141.4.3 Bertrand Russell and Principia Mathematica


Bertrand Russell asserts a distinction between the law of excluded middle and the law of noncontradiction. In
The Problems of Philosophy, he cites three Laws of Thought as more or less self-evident or a priori in the sense
of Aristotle:

1. Law of identity: Whatever is, is.


2. Law of noncontradiction: Nothing can both be and not be.
3. Law of excluded middle: Everything must either be or not be.

These three laws are samples of self-evident logical principles... (p. 72)

It is correct, at least for bivalent logic (i.e. it can be seen with a Karnaugh map) that Russell's Law (2) removes
the middle of the inclusive-or used in his law (3). And this is the point of Reichenbach's demonstration that some
believe the exclusive-or should take the place of the inclusive-or.
About this issue (in admittedly very technical terms) Reichenbach observes:

The tertium non datur


29. (x)[f(x) ∨ ~f(x)]
is not exhaustive in its major terms and is therefore an inflated formula. This fact may perhaps
explain why some people consider it unreasonable to write (29) with the inclusive-'or', and
want to have it written with the sign of the exclusive-'or'

30. (x)[f(x) ⊕ ~f(x)], where the symbol "⊕" signifies exclusive-or[8]


in which form it would be fully exhaustive and therefore nomological in the narrower sense.
(Reichenbach, p. 376)

In line (30) the "(x)" means "for all" or "for every", a form used by Russell and Reichenbach; today the symbolism is
usually ∀x. Thus an example of the expression would look like this:

(pig): (Flies(pig) ⊕ ~Flies(pig))

(For all instances of "pig" seen and unseen): ("Pig does fly" or "Pig does not fly" but not both simultaneously)

A formal definition from Principia Mathematica

Principia Mathematica (PM) defines the law of excluded middle formally:

*2.1 : ~p ∨ p (PM p. 101)


Example: Either it is true that this is red, or it is true that this is not red. Hence it is true that
this is red or this is not red. (See below for more about how this is derived from the primitive axioms).

So just what is "truth" and "falsehood"? At the opening PM quickly announces some definitions:

Truth-values. The truth-values of a proposition is truth if it is true and falsehood if it is false*


[*This phrase is due to Frege]...the truth-value of p ∨ q is truth if the truth-value of either p or q is
truth, and is falsehood otherwise ... that of "~ p is the opposite of that of p... (p. 7-8)

This is not much help. But later, in a much deeper discussion, (Definition and systematic ambiguity of Truth and
Falsehood Chapter II part III, p. 41) PM defines truth and falsehood in terms of a relationship between the a
and the b and the percipient. For example This 'a' is 'b'" (e.g. This 'object a' is 'red'") really means "'object a' is
a sense-datum and "'red' is a sense-datum, and they stand in relation to one another and in relation to I. Thus
what we really mean is: I perceive that 'This object a is red'" and this is an undeniable-by-3rd-party truth.
PM further defines a distinction between a "sense-datum" and a "sensation":

That is, when we judge (say) this is red, what occurs is a relation of three terms, the mind, and
this, and red. On the other hand, when we perceive the redness of this, there is a relation of two
terms, namely the mind and the complex object the redness of this (pp. 4344).

Russell reiterated his distinction between sense-datum and sensation in his book The Problems of Philosophy
(1912) published at the same time as PM (19101913):

Let us give the name of sense-data to the things that are immediately known in sensation: such
things as colours, sounds, smells, hardnesses, roughnesses, and so on. We shall give the name sensation
to the experience of being immediately aware of these things... The colour itself is a sense-datum, not a
sensation. (p. 12)

Russell further described his reasoning behind his definitions of truth and falsehood in the same book (Chapter
XII Truth and Falsehood).

Consequences of the law of excluded middle in Principia Mathematica

From the law of excluded middle, formula *2.1 in Principia Mathematica, Whitehead and Russell derive some of
the most powerful tools in the logician's argumentation toolkit. (In Principia Mathematica, formulas and propositions
are identified by a leading asterisk and two numbers, such as "*2.1".)
*2.1 ~p ∨ p This is the Law of excluded middle (PM, p. 101).
The proof of *2.1 is roughly as follows: primitive idea *1.08 defines p → q = ~p ∨ q. Substituting p for q in this
rule yields p → p = ~p ∨ p. Since p → p is true (this is Theorem *2.08, which is proved separately), then ~p ∨ p must
be true.
*2.11 p ∨ ~p (Permutation of the assertions is allowed by axiom *1.4)
*2.12 p → ~(~p) (Principle of double negation, part 1: if "this rose is red" is true then it's not true that "'this rose is
not-red' is true".)
*2.13 p ∨ ~{~(~p)} (Lemma together with *2.12 used to derive *2.14)
*2.14 ~(~p) → p (Principle of double negation, part 2)
*2.15 (~p → q) → (~q → p) (One of the four Principles of transposition. Similar to *1.03, *1.16 and *1.17. A very
long demonstration was required here.)
*2.16 (p → q) → (~q → ~p) (If it's true that "If this rose is red then this pig flies" then it's true that "If this pig doesn't
fly then this rose isn't red.")
*2.17 (~p → ~q) → (q → p) (Another of the Principles of transposition.)
*2.18 (~p → p) → p (Called The complement of reductio ad absurdum. It states that a proposition which follows
from the hypothesis of its own falsehood is true (PM, pp. 103–104).)
Most of these theorems, in particular *2.1, *2.11, and *2.14, are rejected by intuitionism. These tools are recast
into another form that Kolmogorov cites as Hilbert's four axioms of implication and Hilbert's two axioms of
negation (Kolmogorov in van Heijenoort, p. 335).
Propositions *2.12 and *2.14, double negation": The intuitionist writings of L. E. J. Brouwer refer to what he calls
the principle of the reciprocity of the multiple species, that is, the principle that for every system the correctness of a
property follows from the impossibility of the impossibility of this property (Brouwer, ibid, p. 335).
This principle is commonly called the principle of double negation (PM, pp. 101–102). From the law of excluded
middle (*2.1 and *2.11), PM derives principle *2.12 immediately. We substitute ~p for p in *2.11 to yield ~p ∨
~(~p), and by the definition of implication (i.e. *1.01 p → q = ~p ∨ q) then ~p ∨ ~(~p) = p → ~(~p). QED (The
derivation of *2.14 is a bit more involved.)
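As an illustration (not part of PM's own axiomatic development), the two-valued truth-table semantics makes these theorems easy to verify mechanically. The following Python sketch checks a few of the one-variable theorems above, encoding implication as ~p ∨ q:

def implies(p, q):
    # Material implication: p -> q is defined as (not p) or q
    return (not p) or q

theorems = {
    "*2.1  ~p v p":         lambda p: (not p) or p,
    "*2.11 p v ~p":         lambda p: p or (not p),
    "*2.12 p -> ~(~p)":     lambda p: implies(p, not (not p)),
    "*2.14 ~(~p) -> p":     lambda p: implies(not (not p), p),
    "*2.18 (~p -> p) -> p": lambda p: implies(implies(not p, p), p),
}

for name, formula in theorems.items():
    # Each formula must evaluate to True for both truth values of p.
    assert all(formula(p) for p in (True, False)), name
print("All listed one-variable theorems hold under classical two-valued semantics.")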

141.5 Use in computer science proofs


The law of excluded middle can be used to prove the decidability of certain computational problems. Usually, de-
cidability is proved by showing an algorithm that solves the problem (i.e. a constructive proof). However, in some
cases it is possible to prove that a problem is decidable without showing an algorithm that solves it.
For example, consider the following constant function f:

f = { 1   if Goldbach's conjecture is true
    { 0   otherwise

By the law of excluded middle, Goldbachs conjecture is either true or false. If it is true then f is 1, and the required
algorithm is just print 1. If it is false then the required algorithm is just print 0. In either case, there is a simple,
one-line algorithm that prints f, so by denition, f is computationally decidable. It is true that we don't know which
algorithm to use, but we do know that an algorithm exists.
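A minimal Python sketch of the point being made (the function names are illustrative): both candidate algorithms are trivial constants, and the law of excluded middle guarantees that one of them computes f, even though we cannot at present say which.

def f_if_goldbach_is_true():
    # Correct algorithm in the case that Goldbach's conjecture holds
    return 1

def f_if_goldbach_is_false():
    # Correct algorithm in the case that Goldbach's conjecture fails
    return 0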
A slightly more complicated example is:

f(n) = { 1   if the string 0n occurs in the decimal representation of π
       { 0   otherwise

The function f is computable because, by the law of excluded middle, there are only two possibilities to consider:

For every positive integer n, the string 0n appears in the decimal representation of π. In this case, the algorithm
that always returns 1 is always correct.
There is a largest integer N such that 0N appears in the decimal representation of π. In this case the following
algorithm (with the value N hard-coded) is always correct:

Zeros-in-pi(n):
if (n > N) then return 0 else return 1

We have no idea which of these possibilities is correct, or what value of N is the right one in the second case.
Nevertheless, one of these algorithms is guaranteed to be correct. Thus, there is an algorithm to decide whether a
string of n zeros appears in π; the problem is decidable.[9]

141.6 Criticisms
Many modern logic systems replace the law of excluded middle with the concept of negation as failure. Instead
of a proposition either being true or false, a proposition is either true or not able to be proven true.[10] These two
dichotomies only differ in logical systems that are not complete. The principle of negation as failure is used as a
foundation for autoepistemic logic, and is widely used in logic programming. In these systems, the programmer is
free to assert the law of excluded middle as a true fact, but it is not built-in a priori into these systems.
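A toy Python sketch of negation as failure under a closed-world assumption; the knowledge base and predicate names here are hypothetical, chosen only to illustrate the idea rather than any particular logic-programming system.

# A proposition is treated as "not true" exactly when it cannot be proven
# from the known facts, rather than being false in the classical sense.
known_facts = {"bird(tweety)", "cat(felix)"}

def provable(p):
    return p in known_facts

def naf_not(p):
    """'not p' succeeds iff p is not provable (negation as failure)."""
    return not provable(p)

print(naf_not("penguin(tweety)"))   # True: not provable, so assumed not to hold
print(naf_not("bird(tweety)"))      # False: provable from the facts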
Mathematicians such as L. E. J. Brouwer and Arend Heyting have also contested the usefulness of the law of excluded
middle in the context of modern mathematics.[11]

141.7 See also


BrouwerHilbert controversy: an account on the formalist-intuitionist divide around the Law of the excluded
middle
Consequentia mirabilis
Diaconescu's theorem
Intuitionistic logic
Law of bivalence
Law of excluded fourth
Law of excluded middle is untrue in many-valued logics such as ternary logic and fuzzy logic

Laws of thought
Liar paradox
Limited principle of omniscience
Logical graphs: a graphical syntax for propositional logic
Peirce's law: another way of turning intuition classical
Logical determinism - the application of excluded middle to modal propositions
Non-affirming negation in the Prasangika school of Buddhism, another system in which the law of excluded
middle is untrue.

141.8 Footnotes
[1] Geach p. 74

[2] On Interpretation, c. 9

[3] Metaphysics 2, 996b 2630

[4] Metaphysics 7, 1011b 2627

[5] Alfred North Whitehead, Bertrand Russell (1910), Principia Mathematica, Cambridge, p. 105

[6] This well-known example of a non-constructive proof depending on the law of excluded middle can be found in many
places, for example: Megill, Norman. Metamath: A Computer Language for Pure Mathematics, footnote on p. 17,. and
Davis 2000:220, footnote 2.

[7] In a comparative analysis (pp. 43–59) of the three "-isms" (and their foremost spokesmen), Logicism (Russell and Whitehead),
Intuitionism (Brouwer) and Formalism (Hilbert), Kleene turns his thorough eye toward intuitionism, its founder
Brouwer, and the intuitionists' complaints with respect to the law of excluded middle as applied to arguments over the
completed infinite.

[8] The original symbol as used by Reichenbach is an upside down V, nowadays used for AND. The AND for Reichenbach
is the same as that used in Principia Mathematica -- a dot, cf. p. 27 where he shows a truth table where he defines "a.b".
Reichenbach defines the exclusive-or on p. 35 as the negation of the equivalence. One sign used nowadays is a circle
with a + in it, i.e. ⊕ (because in binary, a ⊕ b yields modulo-2 addition -- addition without carry). Other signs are ≢ (not
identical to), or ≠ (not equal to).

[9] Adapted from: How can it be decidable whether π has some sequence of digits?". Computer Science Stack Exchange.
Retrieved 21 November 2014.

[10] Clark, Keith (1978). Logic and Data Bases (PDF). Springer-Verlag. pp. 293322 (Negation as a failure). doi:10.1007/978-
1-4684-3384-5_11.

[11] Proof and Knowledge in Mathematics by Michael Detlefsen

141.9 References
Aquinas, Thomas, "Summa Theologica", Fathers of the English Dominican Province (trans.), Daniel J. Sulli-
van (ed.), vols. 19–20 in Robert Maynard Hutchins (ed.), Great Books of the Western World, Encyclopædia
Britannica, Inc., Chicago, IL, 1952. Cited as GB 19–20.
Aristotle, "Metaphysics", W.D. Ross (trans.), vol. 8 in Robert Maynard Hutchins (ed.), Great Books of the
Western World, Encyclopædia Britannica, Inc., Chicago, IL, 1952. Cited as GB 8. 1st published, W.D. Ross
(trans.), The Works of Aristotle, Oxford University Press, Oxford, UK.
Martin Davis 2000, Engines of Logic: Mathematicians and the Origin of the Computer, W. W. Norton &
Company, NY, ISBN 0-393-32229-7 pbk.
Dawson, J., Logical Dilemmas, The Life and Work of Kurt Gödel, A.K. Peters, Wellesley, MA, 1997.

van Heijenoort, J., From Frege to Gödel, A Source Book in Mathematical Logic, 1879–1931, Harvard University
Press, Cambridge, MA, 1967. Reprinted with corrections, 1977.
Luitzen Egbertus Jan Brouwer, 1923, On the significance of the principle of excluded middle in mathematics,
especially in function theory [reprinted with commentary, p. 334, van Heijenoort]
Andrei Nikolaevich Kolmogorov, 1925, On the principle of excluded middle, [reprinted with commentary, p.
414, van Heijenoort]
Luitzen Egbertus Jan Brouwer, 1927, On the domains of definitions of functions, [reprinted with commentary,
p. 446, van Heijenoort] Although not directly germane, in his (1923) Brouwer uses certain words defined in
this paper.

Luitzen Egbertus Jan Brouwer, 1927(2), Intuitionistic reflections on formalism, [reprinted with commentary, p.
490, van Heijenoort]

Stephen C. Kleene 1952 original printing, 1971 6th printing with corrections, 10th printing 1991, Introduction
to Metamathematics, North-Holland Publishing Company, Amsterdam NY, ISBN 0-7204-2103-9.
Kneale, W. and Kneale, M., The Development of Logic, Oxford University Press, Oxford, UK, 1962. Reprinted
with corrections, 1975.
Alfred North Whitehead and Bertrand Russell, Principia Mathematica to *56, Cambridge at the University
Press 1962 (Second Edition of 1927, reprinted). Extremely difficult because of arcane symbolism, but a must-
have for serious logicians.

Bertrand Russell, An Inquiry Into Meaning and Truth. The William James Lectures for 1940 Delivered at
Harvard University.

Bertrand Russell, The Problems of Philosophy, With a New Introduction by John Perry, Oxford University Press,
New York, 1997 edition (first published 1912). Very easy to read: Russell was a wonderful writer.

Bertrand Russell, The Art of Philosophizing and Other Essays, Littlefield, Adams & Co., Totowa, NJ, 1974
edition (first published 1968). Includes a wonderful essay on The Art of drawing Inferences.

Hans Reichenbach, Elements of Symbolic Logic, Dover, New York, 1947, 1975.
Tom Mitchell, Machine Learning, WCB McGraw-Hill, 1997.

Constance Reid, Hilbert, Copernicus: Springer-Verlag New York, Inc. 1996, first published 1969. Contains a
wealth of biographical information, much derived from interviews.

Bart Kosko, Fuzzy Thinking: The New Science of Fuzzy Logic, Hyperion, New York, 1993. Fuzzy thinking at
its finest. But a good introduction to the concepts.

David Hume, An Inquiry Concerning Human Understanding, reprinted in Great Books of the Western World
Encyclopædia Britannica, Volume 35, 1952, p. 449 ff. This work was published by Hume in 1758 as his
rewrite of his juvenile Treatise of Human Nature: Being An attempt to introduce the experimental method of
Reasoning into Moral Subjects Vol. I, Of The Understanding first published 1739, reprinted as: David Hume, A
Treatise of Human Nature, Penguin Classics, 1985. Also see: David Applebaum, The Vision of Hume, Vega,
London, 2001: a reprint of a portion of An Inquiry starts on p. 94

141.10 External links


Contradiction entry in the Stanford Encyclopedia of Philosophy
Chapter 142

Law of identity

This article uses forms of logical notation. For a concise description of the symbols used in this notation,
see List of logic symbols.

In logic, the law of identity is the first of the three classical laws of thought. It states that each thing is identical
with itself. By this it is meant that each thing (be it a universal or a particular) is composed of its own unique set of
characteristic qualities or features, which the ancient Greeks called its essence.
In its symbolic representation, "a = a", "Epp", or "For all x: x = x".
In logical discourse, violations of the Law of Identity (LOI) result in the informal logical fallacy known as equivocation.[1]
That is to say, we cannot use the same term in the same discourse while having it signify different senses or meanings,
even though the different meanings are conventionally prescribed to that term. Corollaries of the law of identity,
"Epp", "p if and only if p", are the law of noncontradiction, "NEpNp", "not that p if and only if not p"; and the law of
excluded middle, "JpNp", "p or not p, exclusively",[2] in which the prefix J represents the exclusive or, the negation
of the prefix E, the logical biconditional.
In everyday language, violations of the LOI introduce ambiguity into the discourse, making it difficult to form an
interpretation at the desired level of specificity. The LOI also allows for substitution.
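Reading the Łukasiewicz-style prefixes as E = biconditional, N = negation and J = exclusive or, the three corollary forms can be checked by truth table; a short Python illustration (not from the source):

def E(p, q): return p == q    # biconditional
def N(p):    return not p     # negation
def J(p, q): return p != q    # exclusive or

for p in (True, False):
    assert E(p, p)            # Epp:    p if and only if p
    assert N(E(p, N(p)))      # NEpNp:  not (p if and only if not p)
    assert J(p, N(p))         # JpNp:   p or not p, exclusively
print("All three corollary forms are tautologies in two-valued logic.")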

142.1 History
The earliest recorded use of the law appears to occur in Plato's dialogue Theaetetus (185a), wherein Socrates attempts
to establish that what we call sounds and colours are two different classes of thing:

Socrates: With regard to sound and colour, in the first place, do you think this about both: do they
exist?
Theaetetus: Yes.
Socrates: Then do you think that each differs to the other, and is identical to itself?
Theaetetus: Certainly.
Socrates: And that both are two and each of them one?
Theaetetus: Yes, that too.

Aristotle takes recourse to the law of identitythough he does not identify it as suchin an attempt to negatively
demonstrate the law of non-contradiction. However, in doing so, he shows that the law of non-contradiction is not
the more fundamental of the two:

First then this at least is obviously true, that the word 'be' or 'not be' has a definite meaning, so that
not everything will be 'so and not so'. Again, if 'man' has one meaning, let this be 'two-footed animal';
by having one meaning I understand this:-if 'man' means 'X', then if A is a man 'X' will be what 'being
a man' means for him. It makes no difference even if one were to say a word has several meanings, if
only they are limited in number; for to each definition there might be assigned a different word. For in-
stance, we might say that 'man' has not one meaning but several, one of which would have one definition,
viz. 'two-footed animal', while there might be also several other definitions if only they were limited in
number; for a peculiar name might be assigned to each of the definitions. If, however, they were not
limited but one were to say that the word has an infinite number of meanings, obviously reasoning would
be impossible; for not to have one meaning is to have no meaning, and if words have no meaning our
reasoning with one another, and indeed with ourselves, has been annihilated; for it is impossible to think
of anything if we do not think of one thing; but if this is possible, one name might be assigned to this
thing.
Aristotle, Metaphysics, Book IV, Part 4

It is used explicitly only once in Aristotle, in a proof in the Prior Analytics:[3][4]

When A belongs to the whole of B and C, and is predicated of nothing else, and B belongs to all C,
A and B must convert; for since A is said only of B and C, and B is predicated both of itself and of C, it
is clear that B will be said of everything of which A is said, excepting of A itself.

Both Thomas Aquinas (Met. IV., lect. 6) and Duns Scotus (Quaest. sup. Met. IV., Q. 3) follow Aristotle. Antonius
Andreas, the Spanish disciple of Scotus (d. 1320), argues that the first place should belong to the law Every Being
is a Being (Omne Ens est Ens, Qq. in Met. IV., Q. 4), but the late scholastic writer Francisco Suarez (Disp. Met.
III., 3) disagreed, also preferring to follow Aristotle.
Another possible allusion to the same principle may be found in the writings of Nicholas of Cusa (1431-1464) where
he says:

... there cannot be several things exactly the same, for in that case there would not be several things,
but the same thing itself. Therefore all things both agree with and dier from one another.[5]

Gottfried Wilhelm Leibniz claimed that the law of Identity, which he expresses as Everything is what it is, is the
first primitive truth of reason which is affirmative, and the law of noncontradiction is the first negative truth (Nouv.
Ess. IV., 2, i), arguing that the statement that a thing is what it is, is prior to the statement that it is not another
thing (Nouv. Ess. IV., 7, 9). Wilhelm Wundt credits Gottfried Leibniz with the symbolic formulation, A is A.[6]
George Boole, in the introduction to his treatise The Laws of Thought made the following observation with respect
to the nature of language and those principles that must inhere naturally within them, if they are to be intelligible:

There exist, indeed, certain general principles founded in the very nature of language, by which the
use of symbols, which are but the elements of scientific language, is determined. To a certain extent
these elements are arbitrary. Their interpretation is purely conventional: we are permitted to employ
them in whatever sense we please. But this permission is limited by two indispensable conditions, first,
that from the sense once conventionally established we never, in the same process of reasoning, depart;
secondly, that the laws by which the process is conducted be founded exclusively upon the above fixed
sense or meaning of the symbols employed.

John Locke (Essay Concerning Human Understanding IV. vii. iv. (Of Maxims) says:

[...] whenever the mind with attention considers any proposition, so as to perceive the two ideas
signified by the terms, and affirmed or denied one of the other to be the same or different; it is presently
and infallibly certain of the truth of such a proposition; and this equally whether these propositions be
in terms standing for more general ideas, or such as are less so: e.g., whether the general idea of Being
be affirmed of itself, as in this proposition, "whatsoever is, is"; or a more particular idea be affirmed of
itself, as "a man is a man"; or, "whatsoever is white is white" [...]

African Spir proclaims the law of identity as the fundamental law of knowledge, which is opposed to the changing
appearance of the empirical reality.[7]

142.2 See also


Aristotle's Organon
Law of thought
Equality (mathematics)
Dialetheism[8]
Gilles Deleuze's Difference and Repetition[9]

142.2.1 Allusions
Rose is a rose is a rose is a rose

142.2.2 People
Thomas Aquinas
Aristotle
René Descartes
Keith Donnellan
Steve Ditko
Thomas Hobbes
David Kaplan
Saul Kripke
Gottfried Wilhelm Leibniz
John Locke
Plato
Willard Van Orman Quine
Hilary Putnam
Ayn Rand
Baruch Spinoza
John Searle
Afrikan Spir

142.3 References
[1] Things are said to be named 'equivocally' when, though they have a common name, the definition corresponding with the
name differs for each.

[2] Jozef Maria Bochenski (1959), Precis of Mathematical Logic, rev., Albert Menne, ed. and trans., Otto Bird, New York:
Gordon and Breach, Part II, The Logic of Sentences, Sect. 3.32, p. 11, and Sect. 3.92, p. 14.

[3] Wang, Hao (10 June 2016). From Mathematics to Philosophy (Routledge Revivals)". Routledge via Google Books.

[4] Thomas, Ivo (1 April 1974). On a passage of Aristotle. Notre Dame J. Formal Logic. 15 (2): 347–348. doi:10.1305/ndjfl/1093891315
via Project Euclid.

[5] De Venatione Sapientiae, 23.

[6] La philosophie éternelle ou traditionnelle, la métaphysique, la logique, la raison et l'intelligence

[7] Forschung nach der Gewissheit in der Erkenntniss der Wirklichkeit, Leipzig, J.G. Findel, 1869 and Denken und Wirklichkeit:
Versuch einer Erneuerung der kritischen Philosophie, Leipzig, J. G. Findel, 1873.

[8]

[9]
Chapter 143

Law of noncontradiction

For the Fargo episode, see The Law of Non-Contradiction.

This article uses forms of logical notation. For a concise description of the symbols used in this notation,
see List of logic symbols.

In classical logic, the law of non-contradiction (LNC) (or the law of contradiction or the principle of non-
contradiction (PNC), or the principle of contradiction) is the second of the three classic laws of thought. It states
that contradictory statements cannot both be true in the same sense at the same time, e.g. the two propositions "A is
B " and "A is not B " are mutually exclusive.
The principle was stated as a theorem of propositional logic by Russell and Whitehead in Principia Mathematica as:

*3.24. ⊢ . ~(p . ~p) [1]

The law of non-contradiction, along with its complement, the law of excluded middle (the third of the three classic
laws of thought) and the law of identity (the first of the three classic laws of thought), partitions its logical Universe
into exactly two parts; it creates a dichotomy wherein the two parts are mutually exclusive and jointly exhaustive.
The law of non-contradiction is merely an expression of the mutually exclusive aspect of that dichotomy, and the law
of excluded middle, an expression of its jointly exhaustive aspect.

143.1 Interpretations
One difficulty in applying the law of non-contradiction is ambiguity in the propositions. For instance, if time is not
explicitly specified as part of the propositions A and B, then A may be B at one time, and not at another. A and B
may in some cases be made to sound mutually exclusive linguistically even though A may be partly B and partly not
B at the same time. However, it is impossible to predicate of the same thing, at the same time, and in the same sense,
the absence and the presence of the same fixed quality.

143.1.1 Heraclitus
According to both Plato and Aristotle,[2] Heraclitus was said to have denied the law of non-contradiction. This is
quite likely[3] if, as Plato pointed out, the law of non-contradiction does not hold for changing things in the world.
If a philosophy of Becoming is not possible without change, then (the potential of) what is to become must already
exist in the present object. In "We step and do not step into the same rivers; we are and we are not", both Heraclituss
and Platos object simultaneously must, in some sense, be both what it now is and have the potential (dynamis) of
what it might become.[4]
Unfortunately, so little remains of Heraclitus' aphorisms that not much about his philosophy can be said with certainty.
He seems to have held that strife of opposites is universal both within and without, therefore both opposite existents or
qualities must simultaneously exist, although in some instances in different respects. The road up and down are one


and the same" implies either the road leads both ways, or there can be no road at all. This is the logical complement of
the law of non-contradiction. According to Heraclitus, change, and the constant conflict of opposites is the universal
logos of nature.

143.1.2 Protagoras
Personal subjective perceptions or judgments can only be said to be true at the same time in the same respect, in which
case, the law of non-contradiction must be applicable to personal judgments. The most famous saying of Protagoras
is: "Man is the measure of all things: of things which are, that they are, and of things which are not, that they are
not".[5] However, Protagoras was referring to things that are used by or in some way related to humans. This makes a
great difference in the meaning of his aphorism. Properties, social entities, ideas, feelings, judgements, etc. originate
in the human mind. However, Protagoras has never suggested that man must be the measure of stars, or the motion
of the stars.

143.1.3 Parmenides
Parmenides employed an ontological version of the law of non-contradiction to prove that being is and to deny the
void, change, and motion. He also similarly disproved contrary propositions. In his poem On Nature, he said,

the only routes of inquiry there are for thinking:

the one that [it] is and that [it] cannot not be


is the path of Persuasion (for it attends upon truth)
the other, that [it] is not and that it is right that [it] not be,
this I point out to you is a path wholly inscrutable
for you could not know what is not (for it is not to be accomplished)
nor could you point it out For the same thing is for thinking and for being

The nature of the is or what-is in Parmenides is a highly contentious subject. Some have taken it to be whatever
exists, some to be whatever is or can be the object of scientic inquiry.[6]

143.1.4 Socrates
In Platos early dialogues, Socrates uses the elenctic method to investigate the nature or denition of ethical concepts
such as justice or virtue. Elenctic refutation depends on a dichotomous thesis, one that may be divided into exactly
two mutually exclusive parts, only one of which may be true. Then Socrates goes on to demonstrate the contrary of
the commonly accepted part using the law of non-contradiction. According to Gregory Vlastos,[7] the method has
the following steps:

1. Socrates interlocutor asserts a thesis, for example Courage is endurance of the soul, which Socrates considers
false and targets for refutation.

2. Socrates secures his interlocutors agreement to further premises, for example Courage is a fine thing and
Ignorant endurance is not a fine thing.

3. Socrates then argues, and the interlocutor agrees, that these further premises imply the contrary of the original
thesis, in this case it leads to: courage is not endurance of the soul.

4. Socrates then claims that he has shown that his interlocutors thesis is false and that its negation is true.

143.1.5 Platos synthesis


Plato's version of the law of non-contradiction states that "The same thing clearly cannot act or be acted upon in the
same part or in relation to the same thing at the same time, in contrary ways" (The Republic (436b)). In this, Plato
carefully phrases three axiomatic restrictions on action or reaction: 1) in the same part, 2) in the same relation, 3) at

the same time. The effect is to momentarily create a frozen, timeless state, somewhat like figures frozen in action on
the frieze of the Parthenon.[8]
This way, he accomplishes two essential goals for his philosophy. First, he logically separates the Platonic world
of constant change[9] from the formally knowable world of momentarily fixed physical objects.[10][11] Second, he
provides the conditions for the dialectic method to be used in finding definitions, as for example in the Sophist. So
Platos law of non-contradiction is the empirically derived necessary starting point for all else he has to say.[12]
In contrast, Aristotle reverses Platos order of derivation. Rather than starting with experience, Aristotle begins a
priori with the law of non-contradiction as the fundamental axiom of an analytic philosophical system.[13] This axiom
then necessitates the fixed, realist model. Now, he starts with much stronger logical foundations than Plato's non-
contrariety of action in reaction to conflicting demands from the three parts of the soul.

143.1.6 Aristotles contribution


The traditional source of the law of non-contradiction is Aristotle's Metaphysics where he gives three different
versions.[14]

1. ontological: It is impossible that the same thing belong and not belong to the same thing at the same time and
in the same respect. (1005b19-20)

2. psychological: No one can believe that the same thing can (at the same time) be and not be. (1005b23-24)[15]

3. logical: The most certain of all basic principles is that contradictory propositions are not true simultaneously.
(1011b13-14)

Aristotle attempts several proofs of this law. He first argues that every expression has a single meaning (otherwise
we could not communicate with one another). This rules out the possibility that by to be a man, not to be a man
is meant. But man means two-footed animal (for example), and so if anything is a man, it is necessary (by virtue
of the meaning of man) that it must be a two-footed animal, and so it is impossible at the same time for it not to
be a two-footed animal. Thus it is not possible to say truly at the same time that the same thing is and is not a man
(Metaphysics 1006b 35). Another argument is that anyone who believes something cannot believe its contradiction
(1008b).

Why does he not just get up first thing and walk into a well or, if he finds one, over a cliff? In fact, he
seems rather careful about cliffs and wells.[16]

143.1.7 Avicenna
Avicenna's commentary on the Metaphysics illustrates the common view that the Law of Noncontradiction and
their like are among the things that do not require our elaboration. Avicennas words for the obdurate are quite
facetious: he must be subjected to the conflagration of fire, since 'fire' and 'not fire' are one. Pain must be inflicted
on him through beating, since 'pain' and 'no pain' are one. And he must be denied food and drink, since eating and
drinking and the abstention from both are one [and the same].[17]

143.1.8 Eastern philosophy


The law of non-contradiction is found in ancient Indian logic as a meta-rule in the Shrauta Sutras, the grammar of
Pāṇini,[18] and the Brahma Sutras attributed to Vyasa. It was later elaborated on by medieval commentators such as
Madhvacharya.[19]

143.1.9 Leibniz and Kant


Leibniz and Kant adopted a different statement, by which the law assumes an essentially different meaning. Their
formula is A is not not-A; in other words it is impossible to predicate of a thing a quality which is its contradictory.
Unlike Aristotles law this law deals with the necessary relation between subject and predicate in a single judgment.

For example, in Gottlob Ernst Schulze's Aenesidemus, it is asserted, " nothing supposed capable of being thought
may contain contradictory characteristics. Whereas Aristotle states that one or other of two contradictory proposi-
tions must be false, the Kantian law states that a particular kind of proposition is in itself necessarily false. On the
other hand, there is a real connection between the two laws. The denial of the statement A is not-A presupposes some
knowledge of what A is, i.e. the statement A is A. In other words, a judgment about A is implied.
Kants analytical judgments of propositions depend on presupposed concepts which are the same for all people. His
statement, regarded as a logical principle purely and apart from material facts, does not therefore amount to more
than that of Aristotle, which deals simply with the significance of negation.

143.1.10 Modern logics

Traditionally, in Aristotle's classical logical calculus, in evaluating any proposition there are only two possible truth
values, true and false. An obvious extension to classical two-valued logic is a many-valued logic for more than
two possible values. In logic, a many- or multi-valued logic is a propositional calculus in which there are more than
two values. Those most popular in the literature are three-valued (e.g., Łukasiewicz's and Kleene's), which accept the
values true, false, and unknown, finite-valued with more than three values, and the infinite-valued (e.g. fuzzy
logic and probability logic) logics.

143.1.11 Dialetheism

Graham Priest advocates the view that under some conditions, some statements can be both true and false simulta-
neously, or may be true and false at dierent times. Dialetheism arises from formal logical paradoxes, such as the
Liars paradox and Russells paradox.[20]

143.2 Alleged impossibility of its proof or denial

As is true of all axioms of logic, the law of non-contradiction is alleged to be neither verifiable nor falsifiable, on
the grounds that any proof or disproof must use the law itself prior to reaching the conclusion. In other words, in
order to verify or falsify the laws of logic one must resort to logic as a weapon, an act which would essentially be
self-defeating.[21] Since the early 20th century, certain logicians have proposed logics that deny the validity of the law.
Collectively, these logics are known as "paraconsistent" or inconsistency-tolerant logics. But not all paraconsistent
logics deny the law, since they are not necessarily completely agnostic to inconsistencies in general. Graham Priest
advances the strongest thesis of this sort, which he calls "dialetheism".
In several axiomatic derivations of logic,[22] this is effectively resolved by showing that (P ∨ ¬P) and its negation
are constants, and simply defining TRUE as (P ∨ ¬P) and FALSE as (P ∧ ¬P), without taking a position as to the
principle of bivalence or the law of excluded middle.
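A brief truth-table check (classical two-valued semantics) that the two formulas do serve as constants, offered only as an illustration:

for P in (True, False):
    assert (P or not P) is True      # (P v ~P) is constantly true
    assert (P and not P) is False    # (P & ~P) is constantly false
print("TRUE := (P or not P) and FALSE := (P and not P) are constant, as claimed.")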
Some, such as David Lewis, have objected to paraconsistent logic on the ground that it is simply impossible for a statement and its negation to be jointly true.[23] A related objection is that "negation" in paraconsistent logic is not really negation; it is merely a subcontrary-forming operator.[24][25]

143.3 In popular culture

The Fargo episode "The Law of Non-Contradiction", which takes its name from the law, was noted for its several elements relating to the law of non-contradiction, as the episode's main character faces several paradoxes. For example, she is still the acting chief of police while having been demoted from the position, and tries to investigate a man that both was and was not named Ennis Stussy, and who both was and was not her stepfather. It also features the story of a robot who, after having spent millions of years unable to help humanity, is told that he greatly helped mankind all along by observing history.[26]

143.4 See also


Contradiction
First principle
Identity (philosophy)
Law of excluded middle
Law of identity
Laws of thought
Liar's paradox
Peirce's law
Principle of bivalence
Principle of explosion
Principle of sufficient reason
Reductio ad absurdum
Oxymoron

143.5 Bibliography
Aristotle (1998). Lawson-Tancred, H., ed. Aristotle's Metaphysics. Penguin.
Béziau (2000).
Lewis (1982).
Łukasiewicz, Jan (1971) [1910 in Polish], "On the Principle of Contradiction in Aristotle", Review of Metaphysics, 24: 485–509
Slater (1995).

143.6 References
[1] Alfred North Whitehead, Bertrand Russell (1910), Principia Mathematica, Cambridge, pp. 116117

[2] Aristotle, Metaphysics (IV,1005b), to suppose that the same thing is and is not, as some imagine that Heraclitus says

[3] Heraclitus, Fragments 36,57,59 (Bywater)

[4] Cornford, F.M., Platos Theory of Knowledge, p. 234

[5] (80B1 DK). According to Platos Theaetetus, section 152a.

[6] Curd, Patricia, Presocratic Philosophy, The Stanford Encyclopedia of Philosophy (Summer 2011 Edition), Edward N.
Zalta (ed.), URL = http://plato.stanford.edu/archives/sum2011/entries/presocratics/

[7] Gregory Vlastos, 'The Socratic Elenchus, Oxford Studies in Ancient Philosophy I, Oxford 1983, 2758.

[8] James Danaher, The Laws of Thought The restrictions Plato places on the laws of thought (i.e., in the same respect, and
at the same time,) are an attempt to isolate the object of thought by removing it from all other time but the present and
all respects but one.

[9] Platos Divided Line describes the four Platonic worlds

[10] Cratylus, starting at 439e



[11] A thing which is F at one time, or in one way, or in one relation, or from one point of view, will be all too often not-F, at
another time, in another way (Metaphysical Paradox in Gregory Vlastos, Platonic Studies, p.50)

[12] Two Principles of Noncontradiction in Samuel Scolnicov, Platos Parmenides, pp.12-16

[13] Similarly, Kant remarked that Newton "by no means dared to prove this law a priori, and therefore appealed rather to
experience" (Metaphysical Foundations, 4:449)

[14] Łukasiewicz (1971), p. 487

[15] Whitaker, CWA Aristotles De Interpretatione: Contradiction and Dialectic page 184

[16] 1008b, trans. Lawson-Tancred

[17] Avicenna, Metaphysics, I.8 53.1315 (sect. 12 [p. 43] in ed. Michael Marmura); commenting on Aristotle, Topics
I.11.105a45. The editorial addition (brackets) is present in Marmuras translation.

[18] Frits Staal (1988), Universals: Studies in Indian Logic and Linguistics, Chicago, pp. 10928 (cf. Bull, Malcolm (1999),
Seeing Things Hidden, Verso, p. 53, ISBN 1-85984-263-1)

[19] Dasgupta, Surendranath (1991), A History of Indian Philosophy, Motilal Banarsidass, p. 110, ISBN 81-208-0415-5

[20] Graham Priest; Francesco Berto (2013). Dialetheism, (Stanford Encyclopedia of Philosophy)

[21] S.M. Cohen, Aristotle on the Principle of Non-Contradiction "Aristotles solution in the Posterior Analytics is to distinguish
between episteme (scientic knowledge) and nous (intuitive intellect). First principles, such as PNC, are not objects of scientic
knowledge - since they are not demonstrable - but are still known, since they are grasped by nous".

[22] Stephen Wolfram, A New Kind Of Science, ISBN 1-57955-008-8

[23] See Lewis (1982), p.

[24] See Slater (1995), p.

[25] Bziau (2000), p.

[26] Is Fargo Still Fargo If Its In Los Angeles? You Betcha!". Uproxx. May 3, 2017. Retrieved May 6, 2017.

143.7 Further reading


Benardete, Seth (1989). Socrates Second Sailing: On Platos Republic. University of Chicago Press.

143.8 External links


S.M. Cohen, "Aristotle on the Principle of Non-Contradiction", Canadian Journal of Philosophy, Vol. 16, No.
3

James Danaher (2004), "The Laws of Thought", The Philosopher, Vol. LXXXXII No. 1
Paula Gottlieb, "Aristotle on Non-contradiction" (Stanford Encyclopedia of Philosophy)

Laurence Horn, "Contradiction" (Stanford Encyclopedia of Philosophy)


Graham Priest and Francesco Berto, "Dialetheism" (Stanford Encyclopedia of Philosophy)

Graham Priest and Koji Tanaka, "Paraconsistent logic" (Stanford Encyclopedia of Philosophy)
Peter Suber, "Non-Contradiction and Excluded Middle", Earlham College
Chapter 144

Laws of Form

Laws of Form (hereinafter LoF) is a book by G. Spencer-Brown, published in 1969, that straddles the boundary
between mathematics and philosophy. LoF describes three distinct logical systems:

The "primary arithmetic" (described in Chapter 4 of LoF), whose models include Boolean arithmetic;
The "primary algebra" (Chapter 6 of LoF), whose models include the two-element Boolean algebra (hereinafter abbreviated 2), Boolean logic, and the classical propositional calculus;
"Equations of the second degree" (Chapter 11), whose interpretations include finite automata and Alonzo Church's Restricted Recursive Arithmetic (RRA).

Boundary algebra is Dr Philip Meguire's (2011) term[1] for the union of the primary algebra (hereinafter abbreviated "pa") and the primary arithmetic. "Laws of Form" sometimes loosely refers to the "pa" as well as to LoF.

144.1 The book


LoF emerged from work in electronic engineering its author did around 1960, and from subsequent lectures on
mathematical logic he gave under the auspices of the University of London's extension program. LoF has appeared
in several editions, the most recent being a 1997 German translation, and has never gone out of print.
The mathematics fills only about 55 pages and is rather elementary. But LoF's mystical and declamatory prose, and its love of paradox, make it a challenging read for all. Spencer-Brown was influenced by Wittgenstein and R. D. Laing. LoF also echoes a number of themes from the writings of Charles Sanders Peirce, Bertrand Russell, and Alfred North Whitehead.
The entire book is written in an operational way, giving instructions to the reader instead of telling him what is. In accordance with G. Spencer-Brown's interest in paradoxes, the only sentence that makes a statement that something is, is the statement which says no such statements are used in this book.[2] Except for this one sentence, the book can be seen as an example of E-Prime.

144.2 Reception
Ostensibly a work of formal mathematics and philosophy, LoF became something of a cult classic, praised in the
Whole Earth Catalog. Those who agree point to LoF as embodying an enigmatic mathematics of consciousness,
its algebraic symbolism capturing an (perhaps even the) implicit root of cognition: the ability to distinguish.
LoF argues that primary algebra reveals striking connections among logic, Boolean algebra, and arithmetic, and the
philosophy of language and mind.
Banaschewski (1977)[3] argues that the pa is nothing but new notation for Boolean algebra. Indeed, the two-element Boolean algebra 2 can be seen as the intended interpretation of the pa. Yet the notation of the pa:

Fully exploits the duality characterizing not just Boolean algebras but all lattices;


Highlights how syntactically distinct statements in logic and 2 can have identical semantics;
Dramatically simplifies Boolean algebra calculations, and proofs in sentential and syllogistic logic.

Moreover, the syntax of the pa can be extended to formal systems other than 2 and sentential logic, resulting in boundary mathematics (see Related Work below).
LoF has influenced, among others, Heinz von Foerster, Louis Kauffman, Niklas Luhmann, Humberto Maturana, Francisco Varela and William Bricken. Some of these authors have modified the primary algebra in a variety of interesting ways.
LoF claimed that certain well-known mathematical conjectures of very long standing, such as the Four Color Theorem, Fermat's Last Theorem, and the Goldbach conjecture, are provable using extensions of the pa. Spencer-Brown eventually circulated a purported proof of the Four Color Theorem, but it met with skepticism.[4]

144.3 The form (Chapter 1)


The symbol, also called the "mark" or "cross", is the essential feature of the Laws of Form. In Spencer-Brown's inimitable and enigmatic fashion, the Mark symbolizes the root of cognition, i.e., the dualistic Mark indicates the capability of differentiating a "this" from "everything else but this".
In LoF, a Cross denotes the drawing of a distinction, and can be thought of as signifying the following, all at once:

The act of drawing a boundary around something, thus separating it from everything else;
That which becomes distinct from everything by drawing the boundary;
Crossing from one side of the boundary to the other.

All three ways imply an action on the part of the cognitive entity (e.g., person) making the distinction. As LoF puts
it:

The rst command:


Draw a distinction
can well be expressed in such ways as:
Let there be a distinction,
Find a distinction,
See a distinction,
Describe a distinction,
Dene a distinction,
Or:
Let a distinction be drawn. (LoF, Notes to chapter 2)

The counterpoint to the Marked state is the Unmarked state, which is simply nothing, the void, or the un-expressable infinite represented by a blank space. It is simply the absence of a Cross. No distinction has been made and nothing has been crossed. The Marked state and the void are the two primitive values of the Laws of Form.
The Cross can be seen as denoting the distinction between two states, one considered as a symbol and another
not so considered. From this fact arises a curious resonance with some theories of consciousness and language.
Paradoxically, the Form is at once Observer and Observed, and is also the creative act of making an observation. LoF
(excluding back matter) closes with the words:
"...the rst distinction, the Mark and the observer are not only interchangeable, but, in the form, identical.
C. S. Peirce came to a related insight in the 1890s; see Related Work.

144.4 The primary arithmetic (Chapter 4)


The syntax of the primary arithmetic (PA) goes as follows. There are just two atomic expressions:

The empty Cross ;


All or part of the blank page (the void).

There are two inductive rules:

A Cross may be written over any expression;


Any two expressions may be concatenated.

The semantics of the primary arithmetic are perhaps nothing more than the sole explicit definition in LoF: "Distinction is perfect continence".
Let the unmarked state be a synonym for the void. Let an empty Cross denote the marked state. To cross is to
move from one value, the unmarked or marked state, to the other. We can now state the arithmetical axioms A1
and A2, which ground the primary arithmetic (and hence all of the Laws of Form):
A1. The law of Calling. Calling twice from a state is indistinguishable from calling once. To make a distinction twice has the same effect as making it once. For example, saying "Let there be light" and then saying "Let there be light" again, is the same as saying it once. Formally, in the parenthesis notation used in this entry:

() () = ()

A2. The law of Crossing. After crossing from the unmarked to the marked state, crossing again ("recrossing") starting from the marked state returns one to the unmarked state. Hence recrossing annuls crossing. Formally:

(()) =      (the blank, i.e. the unmarked state)

In both A1 and A2, the expression to the right of '=' has fewer symbols than the expression to the left of '='. This suggests that every primary arithmetic expression can, by repeated application of A1 and A2, be simplified to one of two states: the marked or the unmarked state. This is indeed the case, and the result is the expression's simplification. The two fundamental metatheorems of the primary arithmetic state that:

Every finite expression has a unique simplification. (T3 in LoF);

Starting from an initial marked or unmarked state, complicating an expression by a finite number of repeated applications of A1 and A2 cannot yield an expression whose simplification differs from the initial state. (T4 in LoF).

Thus the relation of logical equivalence partitions all primary arithmetic expressions into two equivalence classes: those that simplify to the Cross, and those that simplify to the void.
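Simplification can be mechanized directly from A1 and A2. The following Python sketch is only an illustration in the parenthesis notation used in this entry (it is not code from LoF): it rewrites with Crossing and Calling until a fixed point is reached, which by T3 is either the Cross () or the void.

def simplify(expr: str) -> str:
    """Simplify a primary-arithmetic expression written in parenthesis notation.

    Repeatedly applies A2, Crossing ("(())" -> the blank), and A1, Calling
    ("()()" -> "()"), until no rule applies; by T3 the result is "()" (marked)
    or the empty string (unmarked).
    """
    while True:
        reduced = expr.replace("(())", "").replace("()()", "()")
        if reduced == expr:
            return expr
        expr = reduced

assert simplify("((())())") == ""    # simplifies to the unmarked state
assert simplify("()((()))") == "()"  # simplifies to the marked state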
A1 and A2 have loose analogs in the properties of series and parallel electrical circuits, and in other ways of dia-
gramming processes, including owcharting. A1 corresponds to a parallel connection and A2 to a series connection,
with the understanding that making a distinction corresponds to changing how two points in a circuit are connected,
and not simply to adding wiring.
The primary arithmetic is analogous to the following formal languages from mathematics and computer science:

A Dyck language of order 1 with a null alphabet;


The simplest context-free language in the Chomsky hierarchy;
A rewrite system that is strongly normalizing and conuent.

The phrase calculus of indications in LoF is a synonym for primary arithmetic.



144.4.1 The notion of canon


A concept peculiar to LoF is that of canon. While LoF does not define canon, the following two excerpts from the Notes to chapter 2 are apt:

The more important structures of command are sometimes called canons. They are the ways in
which the guiding injunctions appear to group themselves in constellations, and are thus by no means
independent of each other. A canon bears the distinction of being outside (i.e., describing) the system
under construction, but a command to construct (e.g., 'draw a distinction'), even though it may be of
central importance, is not a canon. A canon is an order, or set of orders, to permit or allow, but not to
construct or create.

"...the primary form of mathematical communication is not description but injunction... Music is
a similar art form, the composer does not even attempt to describe the set of sounds he has in mind,
much less the set of feelings occasioned through them, but writes down a set of commands which, if
they are obeyed by the performer, can result in a reproduction, to the listener, of the composers original
experience.

These excerpts relate to the distinction in metalogic between the object language, the formal language of the logi-
cal system under discussion, and the metalanguage, a language (often a natural language) distinct from the object
language, employed to exposit and discuss the object language. The rst quote seems to assert that the canons are
part of the metalanguage. The second quote seems to assert that statements in the object language are essentially
commands addressed to the reader by the author. Neither assertion holds in standard metalogic.

144.5 The primary algebra (Chapter 6)

144.5.1 Syntax
Given any valid primary arithmetic expression, insert into one or more locations any number of Latin letters bearing optional numerical subscripts; the result is a pa formula. Letters so employed in mathematics and logic are called variables. A pa variable indicates a location where one can write the primitive value () or its complement (()). Multiple instances of the same variable denote multiple locations of the same primitive value.

144.5.2 Rules governing logical equivalence


The sign '=' may link two logically equivalent expressions; the result is an equation. By logically equivalent is meant
that the two expressions have the same simplication. Logical equivalence is an equivalence relation over the set of
pa formulas, governed by the rules R1 and R2. Let C and D be formulae each containing at least one instance of
the subformula A:

R1, Substitution of equals. Replace one or more instances of A in C by B, resulting in E. If A=B, then C=E.
R2, Uniform replacement. Replace all instances of A in C and D with B. C becomes E and D becomes F. If
C=D, then E=F. Note that A=B is not required.

R2 is employed very frequently in pa demonstrations (see below), almost always silently. These rules are routinely
invoked in logic and most of mathematics, nearly always unconsciously.
The pa consists of equations, i.e., pairs of formulae linked by an inx '='. R1 and R2 enable transforming one
equation into another. Hence the pa is an equational formal system, like the many algebraic structures, including
Boolean algebra, that are varieties. Equational logic was common before Principia Mathematica (e.g., Peirce,1,2,3
Johnson 1892), and has present-day advocates (Gries and Schneider 1993).
Conventional mathematical logic consists of tautological formulae, signalled by a prefixed turnstile. To denote that the pa formula A is a tautology, simply write "A = ()". If one replaces '=' in R1 and R2 with the biconditional, the resulting rules hold in conventional logic. However, conventional logic relies mainly on the rule modus ponens; thus conventional logic is ponential. The equational-ponential dichotomy distills much of what distinguishes mathematical logic from the rest of mathematics.

144.5.3 Initials
An initial is a pa equation verifiable by a decision procedure and as such is not an axiom. LoF lays down the initials:

J1 : ((A)A) = .

The absence of anything to the right of the "=" above, is deliberate.

J2 : ((A)(B))C = ((AC)(BC)).

J2 is the familiar distributive law of sentential logic and Boolean algebra.


Another set of initials, friendlier to calculations, is:

J0 : (())A = A.
J1a : (A)A = ()
C2 : A(AB) = A(B).

It is thanks to C2 that the pa is a lattice. By virtue of J1a, it is a complemented lattice whose upper bound is (). By
J0, (()) is the corresponding lower bound and identity element. J0 is also an algebraic version of A2 and makes clear
the sense in which (()) aliases with the blank page.
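Because 2 is a model of the pa (see the Interpretations section below), the initials can also be checked by brute force under the reading in which juxtaposition is OR, enclosure is NOT, and the empty Cross is True. The following Python sketch is only an illustration of that fact, not part of LoF or of Meguire's treatment.

from itertools import product

# Boolean reading used in the Interpretations section: the unmarked state is
# False, the empty Cross () is True, enclosure (x ...) is NOT of the OR of the
# contents, and juxtaposition is OR.
def cross(*xs):
    return not any(xs)

MARKED, UNMARKED = True, False

for A, B, C in product((True, False), repeat=3):
    assert cross(cross(A), A) == UNMARKED                  # J1:  ((A)A) = void
    assert (cross(cross(A), cross(B)) or C) == \
           cross(cross(A or C), cross(B or C))             # J2:  ((A)(B))C = ((AC)(BC))
    assert (cross(cross()) or A) == A                      # J0:  (())A = A
    assert (cross(A) or A) == MARKED                       # J1a: (A)A = ()
    assert (A or cross(A, B)) == (A or cross(B))           # C2:  A(AB) = A(B)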
T13 in LoF generalizes C2 as follows. Any pa (or sentential logic) formula B can be viewed as an ordered tree with
branches. Then:
T13: A subformula A can be copied at will into any depth of B greater than that of A, as long as A and its copy are
in the same branch of B. Also, given multiple instances of A in the same branch of B, all instances but the shallowest
are redundant.
While a proof of T13 would require induction, the intuition underlying it should be clear.
C2 or its equivalent is named:

Generation in LoF;
Exclusion in Johnson (1892);
Pervasion in the work of William Bricken.

Perhaps the rst instance of an axiom or rule with the power of C2 was the Rule of (De)Iteration, combining T13
and AA=A, of C. S. Peirce's existential graphs.
LoF asserts that concatenation can be read as commuting and associating by default and hence need not be explicitly
assumed or demonstrated. (Peirce made a similar assertion about his existential graphs.) Let a period be a temporary
notation to establish grouping. That concatenation commutes and associates may then be demonstrated from the:

Initial AC.D=CD.A and the consequence AA=A (Byrne 1946). This result holds for all lattices, because AA=A
is an easy consequence of the absorption law, which holds for all lattices;
Initials AC.D=AD.C and J0. Since J0 holds only for lattices with a lower bound, this method holds only for
bounded lattices (which include the pa and 2). Commutativity is trivial; just set A=(()). Associativity: AC.D =
CA.D = CD.A = A.CD.

Having demonstrated associativity, the period can be discarded.


The initials in Meguire (2011) are AC.D=CD.A, called B1; B2, J0 above; B3, J1a above; and B4, C2. By design,
these initials are very similar to the axioms for an abelian group, G1-G3 below.

144.5.4 Proof theory

The pa contains three kinds of proved assertions:

Consequence is a pa equation verified by a demonstration. A demonstration consists of a sequence of steps, each step justified by an initial or a previously demonstrated consequence.

Theorem is a statement in the metalanguage verified by a proof, i.e., an argument, formulated in the metalanguage, that is accepted by trained mathematicians and logicians.

Initial, defined above. Demonstrations and proofs invoke an initial as if it were an axiom.

The distinction between consequence and theorem holds for all formal systems, including mathematics and logic, but is usually not made explicit. A demonstration or decision procedure can be carried out and verified by computer. The proof of a theorem cannot be.
Let A and B be pa formulas. A demonstration of A=B may proceed in either of two ways:

Modify A in steps until B is obtained, or vice versa;

Simplify both (A)B and (B)A to (). This is known as a calculation.

Once A=B has been demonstrated, A=B can be invoked to justify steps in subsequent demonstrations. pa demonstra-
tions and calculations often require no more than J1a, J2, C2, and the consequences ()A=() (C3 in LoF), ((A))=A
(C1), and AA=A (C5).
The consequence (((A)B)C) = (AC)((B)C), C7 in LoF, enables an algorithm, sketched in LoFs proof of T14, that
transforms an arbitrary pa formula to an equivalent formula whose depth does not exceed two. The result is a normal
form, the pa analog of the conjunctive normal form. LoF (T14-15) proves the pa analog of the well-known Boolean
algebra theorem that every formula has a normal form.
Let A be a subformula of some formula B. When paired with C3, J1a can be viewed as the closure condition for
calculations: B is a tautology if and only if A and (A) both appear in depth 0 of B. A related condition appears in
some versions of natural deduction. A demonstration by calculation is often little more than:

Invoking T13 repeatedly to eliminate redundant subformulae;

Erasing any subformulae having the form ((A)A).

The last step of a calculation always invokes J1a.


LoF includes elegant new proofs of the following standard metatheory:

Completeness: all pa consequences are demonstrable from the initials (T17).

Independence: J1 cannot be demonstrated from J2 and vice versa (T18).

That sentential logic is complete is taught in every first university course in mathematical logic. But university courses in Boolean algebra seldom mention the completeness of 2.

144.5.5 Interpretations

If the Marked and Unmarked states are read as the Boolean values 1 and 0 (or True and False), the pa interprets 2
(or sentential logic). LoF shows how the pa can interpret the syllogism. Each of these interpretations is discussed
in a subsection below. Extending the pa so that it could interpret standard first-order logic has yet to be done, but
Peirce's beta existential graphs suggest that this extension is feasible.

Two-element Boolean algebra 2

The pa is an elegant minimalist notation for the two-element Boolean algebra 2. Let:

One of Boolean meet (·) or join (+) interpret concatenation;

The complement of A interpret (A);

0 (1) interpret the empty Mark if meet (join) interprets concatenation.

If meet (join) interprets AC, then join (meet) interprets ((A)(C)). Hence the pa and 2 are isomorphic but for one detail: pa complementation can be nullary, in which case it denotes a primitive value. Modulo this detail, 2 is a model of the primary algebra. The primary arithmetic suggests the following arithmetic axiomatization of 2: 1+1 = 1+0 = 0+1 = 1 = ~0, and 0+0 = 0 = ~1.
The set B of the two primitive values (the marked and unmarked states) is the Boolean domain or carrier. In the language of universal algebra, the pa is the algebraic structure ⟨B, − −, (−), ()⟩ of type ⟨2, 1, 0⟩. The expressive adequacy of the Sheffer stroke points to the pa also being a ⟨B, (− −), ()⟩ algebra of type ⟨2, 0⟩. In both cases, the identities are J1a, J0, C2, and ACD = CDA. Since the pa and 2 are isomorphic, 2 can be seen as a ⟨B, +, ¬, 1⟩ algebra of type ⟨2, 1, 0⟩. This description of 2 is simpler than the conventional one, namely a ⟨B, +, ·, ¬, 1, 0⟩ algebra of type ⟨2, 2, 1, 0, 0⟩.

Sentential logic

Let the blank page denote "True" or "False", and let a Cross be read as "Not". Then the primary arithmetic has the following sentential reading:

(the blank)  =  False

()  =  True  =  not False

(())  =  Not True  =  False

The pa interprets sentential logic as follows. A letter represents any given sentential expression. Thus:

(A)  interprets  Not A

AB  interprets  A Or B

(A)B  interprets  Not A Or B, or If A Then B.

((A)(B))  interprets  Not (Not A Or Not B)
or Not (If A Then Not B)
or A And B.

(((A)B)(A(B))), ((A)(B))(AB) both interpret A if and only if B or A is equivalent to B.


Thus any expression in sentential logic has a pa translation. Equivalently, the pa interprets sentential logic. Given an assignment of every variable to the Marked or Unmarked states, this pa translation reduces to a primary arithmetic expression, which can be simplified. Repeating this exercise for all possible assignments of the two primitive values to each variable reveals whether the original expression is tautological or satisfiable. This is an example of a decision procedure, one more or less in the spirit of conventional truth tables. Given some pa formula containing N variables, this decision procedure requires simplifying 2^N primary arithmetic formulae. For a less tedious decision procedure more in the spirit of Quine's truth value analysis, see Meguire (2003).
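The 2^N-case decision procedure just described can be sketched directly in Python. The sketch below is only an illustration (in the parenthesis notation of this entry, with single capital letters as variables); it is not Meguire's procedure. Each assignment substitutes () or the blank for every variable, and the resulting primary-arithmetic expression is simplified with A1 and A2; the formula is a tautology exactly when every case simplifies to the Cross.

from itertools import product

def simplify(expr: str) -> str:
    """Reduce a variable-free expression to "()" (marked) or "" (unmarked) via A1 and A2."""
    while True:
        reduced = expr.replace("(())", "").replace("()()", "()")
        if reduced == expr:
            return expr
        expr = reduced

def is_tautology(formula: str) -> bool:
    """Brute-force decision procedure: 2**N substitutions followed by simplification."""
    variables = sorted({ch for ch in formula if ch.isalpha()})
    for values in product(("()", ""), repeat=len(variables)):
        table = dict(zip(variables, values))
        instance = "".join(table.get(ch, ch) for ch in formula)
        if simplify(instance) != "()":      # marked = True under this reading
            return False
    return True

assert is_tautology("(A)A")       # Not A, Or A
assert not is_tautology("(A)B")   # Not A, Or B -- not a tautology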
Schwartz (1981) proved that the pa is equivalent – syntactically, semantically, and proof theoretically – with the classical propositional calculus. Likewise, it can be shown that the pa is syntactically equivalent with expressions built up in the usual way from the classical truth values true and false, the logical connectives NOT, OR, and AND, and parentheses.
Interpreting the Unmarked State as False is wholly arbitrary; that state can equally well be read as True. All that is
required is that the interpretation of concatenation change from OR to AND. IF A THEN B now translates as (A(B))
instead of (A)B. More generally, the pa is self-dual, meaning that any pa formula has two sentential or Boolean
readings, each the dual of the other. Another consequence of self-duality is the irrelevance of De Morgans laws;
those laws are built into the syntax of the pa from the outset.
The true nature of the distinction between the pa on the one hand, and 2 and sentential logic on the other, now
emerges. In the latter formalisms, complementation/negation operating on nothing is not well-formed. But an
empty Cross is a well-formed pa expression, denoting the Marked state, a primitive value. Hence a nonempty Cross
is an operator, while an empty Cross is an operand because it denotes a primitive value. Thus the pa reveals that
the heretofore distinct mathematical concepts of operator and operand are in fact merely dierent facets of a single
fundamental action, the making of a distinction.

Syllogisms

Appendix 2 of LoF shows how to translate traditional syllogisms and sorites into the pa. A valid syllogism is simply one whose pa translation simplifies to an empty Cross. Let A* denote a literal, i.e., either A or (A), indifferently. Then all syllogisms that do not require that one or more terms be assumed nonempty are one of 24 possible permutations of a generalization of Barbara whose pa equivalent is (A*B)((B)C*)A*C*. These 24 possible permutations include the 19 syllogistic forms deemed valid in Aristotelian and medieval logic. This pa translation of syllogistic logic also suggests that the pa can interpret monadic and term logic, and that the pa has affinities to the Boolean term schemata of Quine (1982: Part II).

144.5.6 An example of calculation


The following calculation of Leibniz's nontrivial Praeclarum Theorema exemplifies the demonstrative power of the pa. Let C1 be ((A))=A, and let OI mean that variables and subformulae have been reordered in a way that commutativity and associativity permit. Because the only commutative connective appearing in the Theorema is conjunction, it is simpler to translate the Theorema into the pa using the dual interpretation. The objective then becomes one of simplifying that translation to (()).

[(P→R)∧(Q→S)]→[(P∧Q)→(R∧S)]. Praeclarum Theorema.

((P(R))(Q(S))((PQ(RS)))). pa translation.

= ((P(R))P(Q(S))Q(RS)). OI; C1.

= (((R))((S))PQ(RS)). Invoke C2 2x to eliminate the bold letters in the previous expression; OI.

= (RSPQ(RS)). C1,2x.

= ((RSPQ)RSPQ). C2; OI.

= (()). J1.

Remarks:

C1 (C2) is repeatedly invoked in a fairly mechanical way to eliminate nested parentheses (variable instances).
This is the essence of the calculation method;

A single invocation of J1 (or, in other contexts, J1a) terminates the calculation. This too is typical;

Experienced users of the pa are free to invoke OI silently. OI aside, the demonstration requires a mere 7 steps.
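Independently of the pa calculation, the Theorema itself can be confirmed by an ordinary truth-table sweep over its four variables, as in the short Python check below (an illustration only).

from itertools import product

# Brute-force truth-table check of the Praeclarum Theorema:
# [(P -> R) and (Q -> S)] -> [(P and Q) -> (R and S)]
def implies(a: bool, b: bool) -> bool:
    return (not a) or b

assert all(
    implies(implies(P, R) and implies(Q, S),
            implies(P and Q, R and S))
    for P, Q, R, S in product((True, False), repeat=4)
)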

144.5.7 A technical aside

Given some standard notions from mathematical logic and some suggestions in Bostock (1997: 83, fn 11, 12), {} and _ (underscore added for clarity) may be interpreted as the classical bivalent truth values. Let the extension of an n-place atomic formula be the set of ordered n-tuples of individuals that satisfy it (i.e., for which it comes out true). Let a sentential variable be a 0-place atomic formula, whose extension is a classical truth value, by definition. An ordered 2-tuple is an ordered pair, whose standard (Kuratowski's) set-theoretic definition is <a,b> = {{a},{a,b}}, where a, b are individuals. Ordered n-tuples for any n > 2 may be obtained from ordered pairs by a well-known recursive construction. Dana Scott has remarked that the extension of a sentential variable can also be seen as the empty ordered pair (ordered 0-tuple), {{},{}} = _, because {a,a} = {a} for all a. Hence _ has the interpretation "True". Reading {} as "False" follows naturally.
It should be noted that choosing _ as True is arbitrary. All of the Laws of Form algebra and calculus work perfectly as long as {} != _.

144.5.8 Relation to magmas

The pa embodies a point noted by Huntington in 1933: Boolean algebra requires, in addition to one unary operation,
one, and not two, binary operations. Hence the seldom-noted fact that Boolean algebras are magmas. (Magmas
were called groupoids until the latter term was appropriated by category theory.) To see this, note that the pa is a
commutative:

Semigroup because pa juxtaposition commutes and associates;

Monoid with identity element (()), by virtue of J0.

Groups also require a unary operation, called inverse, the group counterpart of Boolean complementation. Let (a)
denote the inverse of a. Let () denote the group identity element. Then groups and the pa have the same signatures,
namely they are both ⟨− −, (−), ()⟩ algebras of type ⟨2, 1, 0⟩. Hence the pa is a boundary algebra. The axioms for an
abelian group, in boundary notation, are:

G1. abc = acb (assuming association from the left);

G2. ()a = a;

G3. (a)a = ().

From G1 and G2, the commutativity and associativity of concatenation may be derived, as above. Note that G3 and
J1a are identical. G2 and J0 would be identical if (())=() replaced A2. This is the dening arithmetical identity of
group theory, in boundary notation.
The pa diers from an abelian group in two ways:

From A2, it follows that (()) ≠ (). If the pa were a group, (())=() would hold, and one of (a)a=(()) or a()=a
would have to be a pa consequence. Note that () and (()) are mutual pa complements, as group theory requires,
so that ((())) = () is true of both group theory and the pa;

C2 most clearly demarcates the pa from other magmas, because C2 enables demonstrating the absorption law
that denes lattices, and the distributive law central to Boolean algebra.

Both A2 and C2 follow from B 's being an ordered set.



144.6 Equations of the second degree (Chapter 11)


Chapter 11 of LoF introduces equations of the second degree, composed of recursive formulae that can be seen as having infinite depth. Some recursive formulae simplify to the marked or unmarked state. Others oscillate indefinitely between the two states depending on whether a given depth is even or odd. Specifically, certain recursive formulae can be interpreted as oscillating between true and false over successive intervals of time, in which case a formula is deemed to have an "imaginary" truth value. Thus the flow of time may be introduced into the pa.
Turney (1986) shows how these recursive formulae can be interpreted via Alonzo Church's Restricted Recursive Arithmetic (RRA). Church introduced RRA in 1955 as an axiomatic formalization of finite automata. Turney (1986) presents a general method for translating equations of the second degree into Church's RRA, illustrating his method using the formulae E1, E2, and E4 in chapter 11 of LoF. This translation into RRA sheds light on the names Spencer-Brown gave to E1 and E4, namely "memory" and "counter". RRA thus formalizes and clarifies LoF's notion of an imaginary truth value.
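The flavour of such an oscillation can be conveyed by the simplest re-entrant equation, f = (f), whose value at each step is the Cross applied to its previous value. The Python sketch below is only an informal illustration under the Boolean reading; it is not Turney's RRA translation, nor LoF's E1, E2 or E4 as given there.

# Iterate the re-entrant equation f = (f): each step applies the Cross (NOT,
# under the Boolean reading) to the previous value, so the value oscillates
# between the unmarked and marked states instead of settling on either.
state = False                        # start in the unmarked state
history = []
for step in range(6):
    history.append("marked" if state else "unmarked")
    state = not state                # f at the next step is the Cross of f
print(history)
# ['unmarked', 'marked', 'unmarked', 'marked', 'unmarked', 'marked']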

144.7 Related work


Gottfried Leibniz, in memoranda not published before the late 19th and early 20th centuries, invented Boolean logic. His notation was isomorphic to that of LoF: concatenation read as conjunction, and "non-(X)" read as the complement of X. Leibniz's pioneering role in algebraic logic was foreshadowed by Lewis (1918) and Rescher (1954). But a full appreciation of Leibniz's accomplishments had to await the work of Wolfgang Lenzen, published in the 1980s and reviewed in Lenzen (2004).
Charles Sanders Peirce (18391914) anticipated the pa in three veins of work:

1. Two papers he wrote in 1886 proposed a logical algebra employing but one symbol, the streamer, nearly iden-
tical to the Cross of LoF. The semantics of the streamer are identical to those of the Cross, except that Peirce
never wrote a streamer with nothing under it. An excerpt from one of these papers was published in 1976,[5]
but they were not published in full until 1993.[6]

2. In a 1902 encyclopedia article,[7] Peirce notated Boolean algebra and sentential logic in the manner of this
entry, except that he employed two styles of brackets, toggling between '(', ')' and '[', ']' with each increment in
formula depth.

3. The syntax of his alpha existential graphs is merely concatenation, read as conjunction, and enclosure by ovals,
read as negation.[8] If pa concatenation is read as conjunction, then these graphs are isomorphic to the pa
(Kauman 2001).

Ironically, LoF cites vol. 4 of Peirces Collected Papers, the source for the formalisms in (2) and (3) above. (1)-(3)
were virtually unknown at the time when (1960s) and in the place where (UK) LoF was written. Peirces semiotics,
about which LoF is silent, may yet shed light on the philosophical aspects of LoF.
Kauman (2001) discusses another notation similar to that of LoF, that of a 1917 article by Jean Nicod, who was a
disciple of Bertrand Russell's.
The above formalisms are, like the pa, all instances of boundary mathematics, i.e., mathematics whose syntax is limited to letters and brackets (enclosing devices). A minimalist syntax of this nature is a boundary notation. Boundary notation is free of infix, prefix, or postfix operator symbols. The very well known curly braces ('{', '}') of set theory can be seen as a boundary notation.
The work of Leibniz, Peirce, and Nicod is innocent of metatheory, as they wrote before Emil Post's landmark 1920 paper (which LoF cites), proving that sentential logic is complete, and before Hilbert and Łukasiewicz showed how to prove axiom independence using models.
Craig (1979) argued that the world, and how humans perceive and interact with that world, has a rich Boolean
structure. Craig was an orthodox logician and an authority on algebraic logic.
Second-generation cognitive science emerged in the 1970s, after LoF was written. On cognitive science and its relevance to Boolean algebra, logic, and set theory, see Lakoff (1987) (see index entries under "Image schema examples: container") and Lakoff and Núñez (2001). Neither book cites LoF.

The biologists and cognitive scientists Humberto Maturana and his student Francisco Varela both discuss LoF in
their writings, which identify distinction as the fundamental cognitive act. The Berkeley psychologist and cognitive
scientist Eleanor Rosch has written extensively on the closely related notion of categorization.
Other formal systems with possible anities to the primary algebra include:

Mereology which typically has a lattice structure very similar to that of Boolean algebra. For a few authors,
mereology is simply a model of Boolean algebra and hence of the primary algebra as well.

Mereotopology, which is inherently richer than Boolean algebra;

The system of Whitehead (1934), whose fundamental primitive is indication.

The primary arithmetic and algebra are a minimalist formalism for sentential logic and Boolean algebra. Other
minimalist formalisms having the power of set theory include:

The lambda calculus;

Combinatory logic with two (S and K) or even one (X) primitive combinators;

Mathematical logic done with merely three primitive notions: one connective, NAND (whose pa translation
is (AB) or, dually, (A)(B)), universal quantication, and one binary atomic formula, denoting set membership.
This is the system of Quine (1951).

The beta existential graphs, with a single binary predicate denoting set membership. This has yet to be explored.
The alpha graphs mentioned above are a special case of the beta graphs.

144.8 See also


Boolean algebra (Simple English Wikipedia)

Boolean algebra (introduction)

Boolean algebra (logic)

Boolean algebra (structure)

Boolean algebras canonically dened

Boolean logic

Entitative graph

Existential graph

List of Boolean algebra topics

Propositional calculus

Two-element Boolean algebra

144.9 Notes
[1] Meguire, P. (2011) Boundary Algebra: A Simpler Approach to Basic Logic and Boolean Algebra. Saarbrcken: VDM
Publishing Ltd. 168pp

[2] Felix Lau: Die Form der Paradoxie, 2005 Carl-Auer Verlag, ISBN 9783896703521

[3] B. Banaschewski (Jul 1977). On G. Spencer Browns Laws of Form. Notre Dame Journal of Formal Logic. 18 (3):
507509.

[4] For a sympathetic evaluation, see Kauman (2001).



[5] Qualitative Logic, MS 736 (c. 1886) in Eisele, Carolyn, ed. 1976. The New Elements of Mathematics by Charles S.
Peirce. Vol. 4, Mathematical Philosophy. (The Hague) Mouton: 101-15.1

[6] Qualitative Logic, MS 582 (1886) in Kloesel, Christian et al., eds., 1993. Writings of Charles S. Peirce: A Chronological
Edition, Vol. 5, 1884-1886. Indiana University Press: 323-71. The Logic of Relatives: Qualitative and Quantitative,
MS 584 (1886) in Kloesel, Christian et al., eds., 1993. Writings of Charles S. Peirce: A Chronological Edition, Vol. 5,
1884-1886. Indiana University Press: 372-78.

[7] Reprinted in Peirce, C.S. (1933) Collected Papers of Charles Sanders Peirce, Vol. 4, Charles Hartshorne and Paul Weiss,
eds. Harvard University Press. Paragraphs 378-383

[8] The existential graphs are described at length in Peirce, C.S. (1933) Collected Papers, Vol. 4, Charles Hartshorne and Paul
Weiss, eds. Harvard University Press. Paragraphs 347-529.

144.10 References
Editions of Laws of Form:

1969. London: Allen & Unwin, hardcover.


1972. Crown Publishers, hardcover: ISBN 0-517-52776-6
1973. Bantam Books, paperback. ISBN 0-553-07782-1
1979. E.P. Dutton, paperback. ISBN 0-525-47544-3
1994. Portland OR: Cognizer Company, paperback. ISBN 0-9639899-0-1
1997 German translation, titled Gesetze der Form. Lübeck: Bohmeier Verlag. ISBN 3-89094-321-7
2008 Bohmeier Verlag, Leipzig, 5th international edition. ISBN 978-3-89094-580-4

Bostock, David, 1997. Intermediate Logic. Oxford Univ. Press.

Byrne, Lee, 1946, Two Formulations of Boolean Algebra, Bulletin of the American Mathematical Society:
268-71.

Craig, William (1979). Boolean Logic and the Everyday Physical World. Proceedings and Addresses of the
American Philosophical Association. 52 (6): 75178. JSTOR 3131383. doi:10.2307/3131383.

David Gries, and Schneider, F B, 1993. A Logical Approach to Discrete Math. Springer-Verlag.

William Ernest Johnson, 1892, The Logical Calculus, Mind 1 (n.s.): 3-30.

Louis H. Kauffman, 2001, "The Mathematics of C.S. Peirce", Cybernetics and Human Knowing 8: 79-110.

------, 2006, "Reformulating the Map Color Theorem."

------, 2006a. "Laws of Form - An Exploration in Mathematics and Foundations." Book draft (hence big).

Lenzen, Wolfgang, 2004, "Leibnizs Logic" in Gabbay, D., and Woods, J., eds., The Rise of Modern Logic:
From Leibniz to Frege (Handbook of the History of Logic Vol. 3). Amsterdam: Elsevier, 1-83.

Lakoff, George, 1987. Women, Fire, and Dangerous Things. University of Chicago Press.

-------- and Rafael E. Núñez, 2001. Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being. Basic Books.

Meguire, P. G. (2003). "Discovering Boundary Algebra: A Simplified Notation for Boolean Algebra and the Truth Functors". International Journal of General Systems. 32: 25–87. doi:10.1080/0308107031000075690.

--------, 2011. Boundary Algebra: A Simpler Approach to Basic Logic and Boolean Algebra. VDM Publishing
Ltd. ISBN 978-3639367492. The source for much of this entry, including the notation which encloses in
parentheses what LoF places under a cross. Steers clear of the more speculative aspects of LoF.

Willard Quine, 1951. Mathematical Logic, 2nd ed. Harvard University Press.

--------, 1982. Methods of Logic, 4th ed. Harvard University Press.



Rescher, Nicholas (1954). Leibnizs Interpretation of His Logical Calculi. Journal of Symbolic Logic. 18:
113. doi:10.2307/2267644.
Schwartz, Daniel G. (1981). "Isomorphisms of G. Spencer-Brown's Laws of Form and F. Varelas Calculus for
Self-Reference. International Journal of General Systems. 6 (4): 23955. doi:10.1080/03081078108934802.
Turney, P. D. (1986). "Laws of Form and Finite Automata. International Journal of General Systems. 12 (4):
30718. doi:10.1080/03081078608934939.
A. N. Whitehead, 1934, Indication, classes, number, validation, Mind 43 (n.s.): 281-97, 543. The corrigenda
on p. 543 are numerous and important, and later reprints of this article do not incorporate them.
Dirk Baecker (ed.) (1993), Kalkül der Form. Suhrkamp; Dirk Baecker (ed.), Probleme der Form. Suhrkamp.

Dirk Baecker (ed.) (1999), Problems of Form, Stanford University Press.


Dirk Baecker (ed.) (2013), A Mathematics of Form, A Sociology of Observers, Cybernetics & Human Knowing,
vol. 20, no. 3-4.

144.11 External links


Laws of Form, archive of website by Richard Shoup.
Spencer-Browns talks at Esalen, 1973. Self-referential forms are introduced in the section entitled Degree
of Equations and the Theory of Types.
Louis H. Kauman, "Box Algebra, Boundary Mathematics, Logic, and Laws of Form."

Kissel, Matthias, "A nonsystematic but easy to understand introduction to Laws of Form."
The Laws of Form Forum, where the primary algebra and related formalisms have been discussed since 2002.

A meeting with G.S.B by Moshe Klein


Chapter 145

Lindström quantifier

In mathematical logic, a Lindström quantifier is a generalized polyadic quantifier. Lindström quantifiers are a generalization of first-order quantifiers, such as the existential quantifier, the universal quantifier, and the counting quantifiers. They were introduced by Per Lindström in 1966 and were later studied for their applications in logic in computer science and database query languages.

145.1 Generalization of first-order quantifiers


In order to facilitate discussion, some notational conventions need explaining. The expression

φ^{A,x,a} = {x ∈ A : A ⊨ φ[x, a]}

for A an L-structure (or L-model) in a language L, φ an L-formula, and a a tuple of elements of the domain dom(A) of A. In other words, φ^{A,x,a} denotes a (monadic) property defined on dom(A). In general, where x is replaced by an n-tuple x of free variables, φ^{A,x,a} denotes an n-ary relation defined on dom(A). Each quantifier Q_A is relativized to a structure, since each quantifier is viewed as a family of relations (between relations) on that structure. For a concrete example, take the universal and existential quantifiers ∀ and ∃, respectively. Their truth conditions can be specified as

A ⊨ ∀x φ[x, a]  ⟺  φ^{A,x,a} ∈ ∀_A

A ⊨ ∃x φ[x, a]  ⟺  φ^{A,x,a} ∈ ∃_A,

where ∀_A is the singleton whose sole member is dom(A), and ∃_A is the set of all non-empty subsets of dom(A) (i.e. the power set of dom(A) minus the empty set). In other words, each quantifier is a family of properties on dom(A), so each is called a monadic quantifier. Any quantifier defined as an n > 0-ary relation between properties on dom(A) is called monadic. Lindström introduced polyadic ones that are n > 0-ary relations between relations on domains of structures.
Before we go on to Lindström's generalization, notice that any family of properties on dom(A) can be regarded as a monadic generalized quantifier. For example, the quantifier "there are exactly n things such that ..." is a family of subsets of the domain of a structure, each of which has cardinality n. Then, "there are exactly 2 things such that φ" is true in A iff the set of things that are φ is a member of the set of all subsets of dom(A) of size 2.
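As a concrete illustration (the function and predicate names below are ad hoc, not from Lindström's paper), such a monadic quantifier can be evaluated on a finite structure by computing the extension of the formula and testing its cardinality:

def exactly_n(n, domain, phi):
    """Monadic generalized quantifier 'there are exactly n things such that phi':
    true in a finite structure iff the extension of phi has cardinality n."""
    extension = {x for x in domain if phi(x)}
    return len(extension) == n

# Example structure: domain {0, ..., 5}; phi(x) = "x is even and positive".
domain = range(6)
assert exactly_n(2, domain, lambda x: x % 2 == 0 and x > 0)   # extension is {2, 4}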
A Lindström quantifier is a polyadic generalized quantifier, so instead of being a relation between subsets of the domain, it is a relation between relations defined on the domain. For example, the quantifier Q x1 x2 y1 z1 z2 z3 (φ(x1 x2), ψ(y1), θ(z1 z2 z3)) is defined semantically as

A ⊨ Q x1 x2 y1 z1 z2 z3 (φ, ψ, θ)[a]  ⟺  (φ^{A,x1x2,a}, ψ^{A,y1,a}, θ^{A,z1z2z3,a}) ∈ Q_A

where


φ^{A,x,a} = {(x1, . . . , xn) ∈ A^n : A ⊨ φ[x, a]}

for an n-tuple x of variables.


Lindström quantifiers are classified according to the number structure of their parameters. For example, Qxy φ(x) ψ(y) is a type (1,1) quantifier, whereas Qxy φ(x, y) is a type (2) quantifier. An example of a type (1,1) quantifier is Härtig's quantifier testing equicardinality, i.e. the extension of {⟨A, B⟩ : A, B ⊆ M, |A| = |B|}. An example of a type (4) quantifier is the Henkin quantifier.
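For instance, the type (1,1) Härtig quantifier can be evaluated on a finite structure by comparing the cardinalities of two extensions. The Python sketch below is an ad hoc illustration, not a definition taken from the cited literature.

def hartig(domain, phi, psi):
    """Type (1,1) Hartig quantifier: Ixy phi(x) psi(y) holds iff the extensions
    of phi and psi over the domain have the same cardinality."""
    ext_phi = {x for x in domain if phi(x)}
    ext_psi = {y for y in domain if psi(y)}
    return len(ext_phi) == len(ext_psi)

domain = range(10)
assert hartig(domain, lambda x: x < 5, lambda y: y % 2 == 0)       # 5 and 5
assert not hartig(domain, lambda x: x < 3, lambda y: y % 2 == 0)   # 3 versus 5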

145.2 Expressiveness hierarchy


The first result in this direction was obtained by Lindström (1966) who showed that a type (1,1) quantifier was not definable in terms of a type (1) quantifier. After Lauri Hella (1989) developed a general technique for proving the relative expressiveness of quantifiers, the resulting hierarchy turned out to be lexicographically ordered by quantifier type:

(1) < (1, 1) < . . . < (2) < (2, 1) < (2, 1, 1) < . . . < (2, 2) < . . . (3) < . . .

For every type t, there is a quantifier of that type that is not definable in first-order logic extended with quantifiers that are of types less than t.

145.3 As precursors to Lindström's theorem


Although Lindström had only partially developed the hierarchy of quantifiers which now bear his name, it was enough for him to observe that some nice properties of first-order logic are lost when it is extended with certain generalized quantifiers. For example, adding a "there exist finitely many" quantifier results in a loss of compactness, whereas adding a "there exist uncountably many" quantifier to first-order logic results in a logic no longer satisfying the Löwenheim–Skolem theorem. In 1969 Lindström proved a much stronger result now known as Lindström's theorem, which intuitively states that first-order logic is the strongest logic having both properties.

145.4 Algorithmic characterization

145.5 References
Lindström, P. (1966). "First order predicate logic with generalized quantifiers". Theoria. 32: 186–195. doi:10.1111/j.1755-2567.1966.tb00600.x.
L. Hella. "Definability hierarchies of generalized quantifiers", Annals of Pure and Applied Logic, 43(3): 235–271, 1989, doi:10.1016/0168-0072(89)90070-5.
L. Hella. "Logical hierarchies in PTIME". In Proceedings of the 7th IEEE Symposium on Logic in Computer Science, 1992.
L. Hella, K. Luosto, and J. Väänänen. "The hierarchy theorem for generalized quantifiers". Journal of Symbolic Logic, 61(3): 802–817, 1996.
Burtschick, Hans-Jörg; Vollmer, Heribert (1999), "Lindström Quantifiers and Leaf Language Definability", ECCC TR96-005
Westerståhl, Dag (2001), "Quantifiers", in Goble, Lou, The Blackwell Guide to Philosophical Logic, Blackwell Publishing, pp. 437–460.
Antonio Badia (2009). Quantifiers in Action: Generalized Quantification in Query, Logical and Natural Languages. Springer. ISBN 978-0-387-09563-9.

145.6 Further reading


Jouko Väänänen (ed.), Generalized Quantifiers and Computation. 9th European Summer School in Logic, Language, and Information. ESSLLI'97 Workshop. Aix-en-Provence, France, August 11–22, 1997. Revised Lectures, Springer Lecture Notes in Computer Science 1754, ISBN 3-540-66993-0

145.7 External links


Dag Westerståhl, 2011. 'Generalized Quantifiers'. Stanford Encyclopedia of Philosophy.
Chapter 146

List of Boolean algebra topics

This is a list of topics around Boolean algebra and propositional logic.

146.1 Articles with a wide scope and introductions


Algebra of sets

Boolean algebra (structure)

Boolean algebra

Field of sets

Logical connective

Propositional calculus

146.2 Boolean functions and connectives


Ampheck

Boolean algebras canonically dened

Conditioned disjunction

Evasive Boolean function

Exclusive or

Functional completeness

Logical biconditional

Logical conjunction

Logical disjunction

Logical equality

Logical implication

Logical negation

Logical NOR

Lupanov representation


Majority function

Material conditional

Peirce arrow

Sheffer stroke

Sole sucient operator

Symmetric Boolean function

Symmetric difference

Zhegalkin polynomial

146.3 Examples of Boolean algebras


Boolean domain

Interior algebra

Lindenbaum–Tarski algebra

Two-element Boolean algebra

146.4 Extensions and generalizations


Complete Boolean algebra

Derivative algebra (abstract algebra)

First-order logic

Free Boolean algebra

De Morgan algebra

Heyting algebra

Monadic Boolean algebra

skew Boolean algebra

146.5 Syntax
Algebraic normal form

Boolean conjunctive query

Canonical form (Boolean algebra)

Conjunctive normal form

Disjunctive normal form

Formal system

146.6 Technical applications


And-inverter graph

Logic gate

Boolean analysis

146.7 Theorems and specic laws


Boolean prime ideal theorem

Compactness theorem

Consensus theorem

De Morgans laws

Duality (order theory)

Laws of classical logic

Peirces law

Stones representation theorem for Boolean algebras

146.8 People
Boole, George

De Morgan, Augustus

Jevons, William Stanley

Peirce, Charles Sanders

Stone, Marshall Harvey

Venn, John

Zhegalkin, Ivan Ivanovich

146.9 Philosophy
Booles syllogistic

Boolean implicant

Entitative graph

Existential graph

Laws of Form

Logical graph

146.10 Visualization
Truth table

Karnaugh map
Venn diagram

146.11 Unclassied
Boolean function

Boolean-valued function
Boolean-valued model

Boolean satisfiability problem


Indicator function (also called the characteristic function, but that term is used in probability theory for a
dierent concept)
Espresso heuristic logic minimizer

Logical matrix
Logical value

Stone duality
Stone space

Topological Boolean algebra


Chapter 147

List of logic systems

This article contains a list of sample Hilbert-style deductive systems for propositional logic.

147.1 Classical propositional calculus systems


Classical propositional calculus is the standard propositional logic. Its intended semantics is bivalent and its main property is that it is syntactically complete, meaning that no new axiom that is not already a consequence of the existing axioms can be added without making the logic inconsistent. Many different equivalent complete axiom systems have been formulated. They differ in the choice of basic connectives used, which in all cases have to be functionally complete (i.e. able to express by composition all n-ary truth tables), and in the exact choice of axioms over the chosen basis of connectives.

147.1.1 Implication and negation


The formulations here use implication and negation {→, ¬} as a functionally complete set of basic connectives. Every logic system requires at least one non-nullary rule of inference. Classical propositional calculus typically uses the rule of modus ponens:

A, A → B ⊢ B.

We assume this rule is included in all systems below unless stated otherwise.
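As an informal sanity check (not part of any of the axiomatizations cited below), the following Python sketch verifies over the two truth values that the three axioms of Łukasiewicz's third system listed further below are tautologies, and that modus ponens preserves truth.

from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b

# The three axioms of Lukasiewicz's third system (see below) as truth functions.
axioms = [
    lambda A, B, C: implies(A, implies(B, A)),
    lambda A, B, C: implies(implies(A, implies(B, C)),
                            implies(implies(A, B), implies(A, C))),
    lambda A, B, C: implies(implies(not A, not B), implies(B, A)),
]

for A, B, C in product((True, False), repeat=3):
    # Each axiom schema is a two-valued tautology ...
    assert all(axiom(A, B, C) for axiom in axioms)
    # ... and modus ponens preserves truth: if A and A -> B hold, so does B.
    assert not (A and implies(A, B)) or B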
Frege's axiom system:[1]

A → (B → A)
(A → (B → C)) → ((A → B) → (A → C))
(A → B) → (¬B → ¬A)
¬¬A → A
A → ¬¬A
Hilbert's axiom system:[1]

A → (B → A)
(A → (B → C)) → (B → (A → C))
(B → C) → ((A → B) → (A → C))
A → (¬A → B)
(A → B) → ((¬A → B) → B)
Łukasiewicz's axiom systems:[1]

First:

(A → B) → ((B → C) → (A → C))
(¬A → A) → A
A → (¬A → B)

Second:

((A → B) → C) → (¬A → C)
((A → B) → C) → (B → C)
(¬A → C) → ((B → C) → ((A → B) → C))

Third:

A → (B → A)
(A → (B → C)) → ((A → B) → (A → C))
(¬A → ¬B) → (B → A)

Fourth:

(A → B) → ((B → C) → (A → C))
A → (¬A → B)
(¬A → B) → ((B → A) → A)
Łukasiewicz and Tarski's axiom system:[2]

[(A → (B → A)) → ([(¬C → (D → ¬E)) → [(C → (D → F)) → ((E → D) → (E → F))]] → G)] → (H → G)

Meredith's axiom system:

((((A → B) → (¬C → ¬D)) → C) → E) → ((E → A) → (D → A))

Mendelson's axiom system:[3]

A → (B → A)
(A → (B → C)) → ((A → B) → (A → C))
(¬A → ¬B) → ((¬A → B) → A)
Russell's axiom system:[1]

A → (B → A)
(A → B) → ((B → C) → (A → C))
(A → (B → C)) → (B → (A → C))
¬¬A → A
(A → ¬A) → ¬A
(A → ¬B) → (B → ¬A)
Sobociski's axiom systems:[1]

First:

(A B) (B (A C))
A (B (C A))
(A B) ((A B) B)

Second:

A (A B)
A (B (C A))
(A C) ((B C) ((A B) C))

147.1.2 Implication and falsum


Instead of negation, classical logic can also be formulated using the functionally complete set {→, ⊥} of connectives.
Tarski–Bernays–Wajsberg axiom system:

(A → B) → ((B → C) → (A → C))
A → (B → A)
((A → B) → A) → A
⊥ → A

Church's axiom system:

A → (B → A)
(A → (B → C)) → ((A → B) → (A → C))
((A → ⊥) → ⊥) → A

Meredith's axiom systems:

First:[4][5][6]

((((A → B) → (C → ⊥)) → D) → E) → ((E → A) → (C → A))

Second:[4]

((A → B) → ((⊥ → C) → D)) → ((D → A) → (E → (F → A)))



147.1.3 Negation and disjunction


Instead of implication, classical logic can also be formulated using the functionally complete set {¬, ∨} of connectives. These formulations use the following rule of inference:

A, ¬A ∨ B ⊢ B.

Russell–Bernays axiom system:

¬(¬B ∨ C) ∨ (¬(A ∨ B) ∨ (A ∨ C))
¬(A ∨ B) ∨ (B ∨ A)
¬A ∨ (B ∨ A)
¬(A ∨ A) ∨ A

Meredith's axiom systems:[7]

First:

((A B) (C (D E))) ((D A) (C (E A)))

Second:

((A B) (C (D E))) ((E D) (C (A D)))

Third:

((A B) (C (D E))) ((C A) (E (D A)))

Dually, classical propositional logic can be defined using only conjunction and negation.

147.1.4 Sheers stroke


Because Sheffer's stroke (also known as the NAND operator) is functionally complete, it can be used to create an entire formulation of propositional calculus. NAND formulations use a rule of inference called Nicod's modus ponens:

A, A | (B | C) ⊢ C.

Nicods axiom system:[4]

(A | (B | C)) | [(E | (E | E)) | ((D | B) | [(A | D) | (A | D)])]

Łukasiewicz's axiom systems:[4]

First:

(A | (B | C)) | [(D | (D | D)) | ((D | B) | [(A | D) | (A | D)])]

Second:

(A | (B | C)) | [(A | (C | A)) | ((D | B) | [(A | D) | (A | D)])]

Wajsbergs axiom system:[4]

(A | (B | C)) | [((D | C) | [(A | D) | (A | D)]) | (A | (A | B))]

Argonne axiom systems:[4]

First:

(A | (B | C)) | [(A | (B | C)) | ((D | C) | [(C | D) | (A | D)])]

Second:

(A | (B | C)) | [([(B | D) | (A | D)] | (D | B)) | ((C | B) | A)] [8]

Computer analysis by Argonne has revealed > 60 additional single axiom systems that can be used to formulate NAND
propositional calculus.[6]
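The functional completeness of the stroke can be made concrete: the usual connectives are definable from NAND alone, as the following Python check illustrates (these are the standard definitions, not part of the axiom systems above).

from itertools import product

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# Standard definitions of NOT, AND, OR and -> in terms of the Sheffer stroke.
for A, B in product((True, False), repeat=2):
    assert nand(A, A) == (not A)                       # NOT A   =  A | A
    assert nand(nand(A, B), nand(A, B)) == (A and B)   # A AND B =  (A|B) | (A|B)
    assert nand(nand(A, A), nand(B, B)) == (A or B)    # A OR B  =  (A|A) | (B|B)
    assert nand(A, nand(B, B)) == ((not A) or B)       # A -> B  =  A | (B|B)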

147.2 Implicational propositional calculus


The implicational propositional calculus is the fragment of the classical propositional calculus which only admits the implication connective. It is not functionally complete (because it lacks the ability to express falsity and negation), but it is nevertheless syntactically complete. The implicational calculi below use modus ponens as an inference rule.
Bernays–Tarski axiom system:[9]

A → (B → A)
(A → B) → ((B → C) → (A → C))
((A → B) → A) → A
Łukasiewicz and Tarski's axiom systems:

First:[9]

[(A → (B → A)) → [([((C → D) → E) → F] → [(D → F) → (C → F)]) → G]] → G

Second:[9]

[(A → B) → ((C → D) → E)] → ([F → ((C → D) → E)] → [(A → F) → (D → E)])

Third:

((A → B) → (C → D)) → (E → ((D → A) → (C → A)))

Fourth:

((A → B) → (C → D)) → ((D → A) → (E → (C → A)))

Łukasiewicz's axiom system:[10][9]

((A → B) → C) → ((C → A) → (D → A))

147.3 Intuitionistic and intermediate logics


Intuitionistic logic is a subsystem of classical logic. It is commonly formulated with {→, ∧, ∨, ⊥} as the set of
(functionally complete) basic connectives. It is not syntactically complete, since it lacks the excluded middle A ∨ ¬A or
Peirce's law ((A → B) → A) → A, which can be added without making the logic inconsistent. It has modus ponens as its
inference rule, and the following axioms:

A → (B → A)

(A → (B → C)) → ((A → B) → (A → C))


(A ∧ B) → A
(A ∧ B) → B
A → (B → (A ∧ B))
A → (A ∨ B)
B → (A ∨ B)
(A → C) → ((B → C) → ((A ∨ B) → C))
⊥ → A
Alternatively, intuitionistic logic may be axiomatized using {→, ∧, ∨, ¬} as the set of basic connectives, replacing
the last axiom with

(A → ¬A) → ¬A

¬A → (A → B)
Intermediate logics are in between intuitionistic logic and classical logic. Here are a few intermediate logics:

Jankov logic (KC) is an extension of intuitionistic logic, which can be axiomatized by the intuitionistic axiom
system plus the axiom[11]

¬A ∨ ¬¬A.

Gödel–Dummett logic (LC) can be axiomatized over intuitionistic logic by adding the axiom[11]

(A → B) ∨ (B → A).

147.4 Positive implicational calculus


The positive implicational calculus is the implicational fragment of intuitionistic logic. The calculi below use modus
ponens as an inference rule.
Łukasiewicz's axiom system:

A → (B → A)
(A → (B → C)) → ((A → B) → (A → C))
Meredith's axiom systems:

First:

E → ((A → B) → (((D → A) → (B → C)) → (A → C)))

Second:

A → (B → A)
(A → B) → ((A → (B → C)) → (A → C))

Third:

((A → B) → C) → (D → ((B → (C → E)) → (B → E)))[12]

Hilbert's axiom systems:

First:

(A → (A → B)) → (A → B)
(B → C) → ((A → B) → (A → C))
(A → (B → C)) → (B → (A → C))
A → (B → A)

Second:

(A → (A → B)) → (A → B)
(A → B) → ((B → C) → (A → C))
A → (B → A)

Third:

A → A
(A → B) → ((B → C) → (A → C))
(B → C) → ((A → B) → (A → C))
(A → (A → B)) → (A → B)

147.5 Positive propositional calculus


Positive propositional calculus is the fragment of intuitionistic logic using only the (non-functionally complete)
connectives {→, ∧, ∨}. It can be axiomatized by any of the above-mentioned calculi for positive implicational calculus
together with the axioms

(A ∧ B) → A

(A ∧ B) → B
A → (B → (A ∧ B))
A → (A ∨ B)
B → (A ∨ B)
(A → C) → ((B → C) → ((A ∨ B) → C))
Optionally, we may also include the connective ↔ and the axioms

(A ↔ B) → (A → B)

(A ↔ B) → (B → A)
(A → B) → ((B → A) → (A ↔ B))
Johansson's minimal logic can be axiomatized by any of the axiom systems for positive propositional calculus and
expanding its language with the nullary connective ⊥, with no additional axiom schemas. Alternatively, it can also
be axiomatized in the language {→, ∧, ∨, ¬} by expanding the positive propositional calculus with the axiom

(A → ¬B) → (B → ¬A)

or the pair of axioms

(A → B) → (¬B → ¬A)

A → ¬¬A
Intuitionistic logic in the language with negation can be axiomatized over the positive calculus by the pair of axioms

(A → ¬B) → (B → ¬A)

¬A → (A → B)
or the pair of axioms[13]

(A → ¬A) → ¬A

¬A → (A → B)
Classical logic in the language {→, ∧, ∨, ¬} can be obtained from the positive propositional calculus by adding the
axiom

(¬A → ¬B) → (B → A)

or the pair of axioms



(A → B) → (¬B → ¬A)

¬¬A → A
Fitch calculus takes any of the axiom systems for positive propositional calculus and adds the axioms[13]

¬A → (A → B)

¬¬A → A
¬(A ∨ B) → (¬A ∧ ¬B)
¬(A ∧ B) → (¬A ∨ ¬B)
Note that the first and third axioms are also valid in intuitionistic logic.

147.6 Equivalential calculus


Equivalential calculus is the subsystem of classical propositional calculus that only allows the (functionally incomplete)
equivalence connective, denoted here as ↔. The rule of inference used in these systems is as follows:

A, A ↔ B ⊢ B

Iséki's axiom system:[14]

((A ↔ C) ↔ (B ↔ A)) ↔ (C ↔ B)

(A ↔ (B ↔ C)) ↔ ((A ↔ B) ↔ C)
Iséki–Arai axiom system:[15]

A ↔ A

(A ↔ B) ↔ (B ↔ A)
(A ↔ B) ↔ ((B ↔ C) ↔ (A ↔ C))
Arai's axiom systems:

First:

(A ↔ (B ↔ C)) ↔ ((A ↔ B) ↔ C)

((A ↔ C) ↔ (B ↔ A)) ↔ (C ↔ B)

Second:

(A ↔ B) ↔ (B ↔ A)

((A ↔ C) ↔ (B ↔ A)) ↔ (C ↔ B)

Łukasiewicz's axiom systems:[16]

First:

(A ↔ B) ↔ ((C ↔ B) ↔ (A ↔ C))

Second:

(A ↔ B) ↔ ((A ↔ C) ↔ (C ↔ B))

Third:

(A ↔ B) ↔ ((C ↔ A) ↔ (B ↔ C))

Meredith's axiom systems:[16]

First:

((A ↔ B) ↔ C) ↔ (B ↔ (C ↔ A))

Second:

A ↔ ((B ↔ (A ↔ C)) ↔ (C ↔ B))

Third:

(A ↔ (B ↔ C)) ↔ (C ↔ (A ↔ B))

Fourth:

(A ↔ B) ↔ (C ↔ ((B ↔ C) ↔ A))

Fifth:

(A ↔ B) ↔ (C ↔ ((C ↔ B) ↔ A))

Sixth:

((A ↔ (B ↔ C)) ↔ C) ↔ (B ↔ A)

Seventh:

((A ↔ (B ↔ C)) ↔ B) ↔ (C ↔ A)

Kalman's axiom system:[16]

A ↔ ((B ↔ (C ↔ A)) ↔ (C ↔ B))

Winker's axiom systems:[16]

First:

A ↔ ((B ↔ C) ↔ ((A ↔ C) ↔ B))

Second:

A ↔ ((B ↔ C) ↔ ((C ↔ A) ↔ B))

XCB axiom system:[16]

A ↔ (((A ↔ B) ↔ (C ↔ B)) ↔ C)

147.7 References
[1] Yasuyuki Imai, Kiyoshi Iséki, On axiom systems of propositional calculi, I, Proceedings of the Japan Academy. Volume
41, Number 6 (1965), 436439.

[2] Part XIII: Shtar Tanaka. On axiom systems of propositional calculi, XIII. Proc. Japan Acad., Volume 41, Number 10
(1965), 904907.

[3] Elliott Mendelson, Introduction to Mathematical Logic, Van Nostrand, New York, 1979, p. 31.

[4] [Fitelson, 2001] New Elegant Axiomatizations of Some Sentential Logics by Branden Fitelson

[5] (Computer analysis by Argonne has revealed this to be the shortest single axiom with least variables for propositional
calculus).

[6] Some New Results in Logical Calculi Obtained Using Automated Reasoning, Zac Ernst, Ken Harris, & Branden Fitelson,
http://www.mcs.anl.gov/research/projects/AR/award-2001/fitelson.pdf

[7] C. Meredith, Single axioms for the systems (C, N), (C, 0) and (A, N) of the two-valued propositional calculus, Journal of
Computing Systems, pp. 155164, 1954.

[8] , p. 9, A Spectrum of Applications of Automated Reasoning, Larry Wos; arXiv:cs/0205078v1

[9] Investigations into the Sentential Calculus in Logic, Semantics, Metamathematics: Papers from 1923 to 1938 by Alfred
Tarski, Corcoran, J., ed. Hackett. 1st edition edited and translated by J. H. Woodger, Oxford Uni. Press. (1956)

[10] Łukasiewicz, J. (1948). The Shortest Axiom of the Implicational Calculus of Propositions. Proceedings of the Royal
Irish Academy. Section A: Mathematical and Physical Sciences, 52, 2533. Retrieved from https://www.jstor.org/stable/
20488489

[11] A. Chagrov, M. Zakharyaschev, Modal logic, Oxford University Press, 1997.

[12] C. Meredith, A single axiom of positive logic, Journal of Computing Systems, p. 169170, 1954.

[13] L. H. Hackstaff, Systems of Formal Logic, Springer, 1966.

[14] Kiyoshi Iséki, On axiom systems of propositional calculi, XV, Proceedings of the Japan Academy. Volume 42, Number 3
(1966), 217220.

[15] Yoshinari Arai, On axiom systems of propositional calculi, XVII, Proceedings of the Japan Academy. Volume 42, Number
4 (1966), 351354.

[16] XCB, the Last of the Shortest Single Axioms for the Classical Equivalential Calculus, LARRY WOS, DOLPH ULRICH,
BRANDEN FITELSON; arXiv:cs/0211015v1
Chapter 148

List of rules of inference

This is a list of rules of inference, logical laws that relate to mathematical formulae.

148.1 Introduction
Rules of inference are syntactical transform rules which one can use to infer a conclusion from a premise to create
an argument. A set of rules can be used to infer any valid conclusion if it is complete, while never inferring an invalid
conclusion, if it is sound. A sound and complete set of rules need not include every rule in the following list, as many
of the rules are redundant, and can be proven with the other rules.
Discharge rules permit inference from a subderivation based on a temporary assumption. Below, the notation

φ ⊢ ψ

indicates such a subderivation from the temporary assumption φ to ψ.

148.2 Rules for classical sentential calculus


Sentential calculus is also known as propositional calculus.

148.2.1 Rules for negations


Reductio ad absurdum (or Negation Introduction):

(φ ⊢ ψ), (φ ⊢ ¬ψ) ⊢ ¬φ

Reductio ad absurdum (related to the law of excluded middle):

(¬φ ⊢ ψ), (¬φ ⊢ ¬ψ) ⊢ φ

Noncontradiction (or Negation Elimination):

φ, ¬φ ⊢ ψ


Double negation elimination:

¬¬φ ⊢ φ

Double negation introduction:

φ ⊢ ¬¬φ

148.2.2 Rules for conditionals


Deduction theorem (or Conditional Introduction):

(φ ⊢ ψ) ⊢ φ → ψ

Modus ponens (or Conditional Elimination):

φ → ψ, φ ⊢ ψ

Modus tollens:

φ → ψ, ¬ψ ⊢ ¬φ

148.2.3 Rules for conjunctions


Adjunction (or Conjunction Introduction):

φ, ψ ⊢ φ ∧ ψ

Simplification (or Conjunction Elimination):

φ ∧ ψ ⊢ φ        φ ∧ ψ ⊢ ψ


148.2.4 Rules for disjunctions

Addition (or Disjunction Introduction):

φ ⊢ φ ∨ ψ        ψ ⊢ φ ∨ ψ

Case analysis (or Proof by Cases or Argument by Cases):

φ → χ, ψ → χ, φ ∨ ψ ⊢ χ

Disjunctive syllogism:

φ ∨ ψ, ¬φ ⊢ ψ

Constructive dilemma:

φ → χ, ψ → ξ, φ ∨ ψ ⊢ χ ∨ ξ

148.2.5 Rules for biconditionals

Biconditional introduction:

φ → ψ, ψ → φ ⊢ φ ↔ ψ

Biconditional elimination:

φ ↔ ψ ⊢ φ → ψ        φ ↔ ψ ⊢ ψ → φ



148.3 Rules of classical predicate calculus


In the following rules, φ(β/α) is exactly like φ except for having the term β everywhere φ has the free variable α.

Universal Generalization (or Universal Introduction):

φ(β/α) ⊢ ∀α φ

Restriction 1: β is a variable which does not occur in φ.


Restriction 2: β is not mentioned in any hypothesis or undischarged assumptions.

Universal Instantiation (or Universal Elimination):

∀α φ ⊢ φ(β/α)

Restriction: No free occurrence of α in φ falls within the scope of a quantifier quantifying a variable occurring in β.

Existential Generalization (or Existential Introduction):

φ(β/α) ⊢ ∃α φ

Restriction: No free occurrence of α in φ falls within the scope of a quantifier quantifying a variable occurring in β.

Existential Instantiation (or Existential Elimination):

∃α φ, (φ(β/α) ⊢ ψ) ⊢ ψ

Restriction 1: β is a variable which does not occur in φ.


Restriction 2: There is no occurrence, free or bound, of β in ψ.
Restriction 3: β is not mentioned in any hypothesis or undischarged assumptions.

148.4 Rules of substructural logic


The following are special cases of universal generalization and existential elimination; these occur in substructural
logics, such as linear logic.

Rule of weakening (or monotonicity of entailment) (aka no-cloning theorem):

Γ ⊢ ψ implies Γ, φ ⊢ ψ

Rule of contraction (or idempotency of entailment) (aka no-deleting theorem):

Γ, φ, φ ⊢ ψ implies Γ, φ ⊢ ψ

148.5 Table: Rules of Inference


The rules above can be summed up in the following table.[1] The "Tautology" column shows how to interpret the
notation of a given rule.
All rules use the basic logic operators. A complete table of logic operators is shown by a truth table, giving defini-
tions of all the possible (16) truth functions of 2 boolean variables (p, q):
where T = true and F = false, and, the columns are the logical operators: 0, false, Contradiction; 1, NOR, Logical
NOR; 2, Converse nonimplication; 3, ¬p, Negation; 4, Material nonimplication; 5, ¬q, Negation; 6, XOR, Exclusive
disjunction; 7, NAND, Logical NAND; 8, AND, Logical conjunction; 9, XNOR, If and only if, Logical bicondi-
tional; 10, q, Projection function; 11, if/then, Logical implication; 12, p, Projection function; 13, then/if, Converse
implication; 14, OR, Logical disjunction; 15, true, Tautology.
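The column numbering is simply each operator's four-row truth table read as a 4-bit number. A minimal Python sketch (the helper name truth_function is illustrative, not from the cited table) regenerates that indexing:

```python
# Rows of the two-variable truth table, in the order (p, q) = (T, T), (T, F), (F, T), (F, F).
rows = [(True, True), (True, False), (False, True), (False, False)]

# Operator number k (0..15) is its truth table read as a 4-bit number,
# with the (T, T) row as the most significant bit.
def truth_function(k):
    bits = [(k >> (3 - i)) & 1 for i in range(4)]
    return {row: bool(bit) for row, bit in zip(rows, bits)}

# Spot-check a few columns named in the text.
for name, k in [("AND", 8), ("OR", 14), ("IF/THEN", 11), ("XOR", 6)]:
    f = truth_function(k)
    print(name, k, [f[r] for r in rows])
# AND 8 [True, False, False, False]; OR 14 [True, True, True, False]; ...
```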
Each logic operator can be used in an assertion about variables and operations, showing a basic rule of inference.
Examples:

The column-14 operator (OR) shows the Addition rule: when p = T (the hypothesis selects the first two lines of the
table), we see (at column 14) that p∨q = T.

We can also see that, with the same premise, other conclusions are valid: columns 12, 14 and 15
are T.

The column-8 operator (AND) shows the Simplification rule: when p∧q = T (first line of the table), we see that
p = T.

With this premise, we also conclude that q = T, p∨q = T, etc., as shown by columns 9–15.

The column-11 operator (IF/THEN) shows the Modus ponens rule: when p→q = T and p = T, only one line of the
truth table (the first) satisfies these two conditions. On this line, q is also true. Therefore, whenever p → q is
true and p is true, q must also be true.

Machines and well-trained people use this look-at-the-table approach to do basic inferences, and to check whether other infer-
ences (for the same premises) can be obtained.

148.5.1 Example 1
Let us consider the following assumptions: If it rains today, then we will not go on a canoe trip today. If we do not go
on a canoe trip today, then we will go on a canoe trip tomorrow. Therefore (the mathematical symbol for therefore is
∴), if it rains today, we will go on a canoe trip tomorrow. To make use of the rules of inference in the above table
we let p be the proposition It rains today, q be We will not go on a canoe trip today and let r be We will go on a
canoe trip tomorrow. Then this argument is of the form:
p → q
q → r
∴ p → r

148.5.2 Example 2
Let us consider a more complex set of assumptions: It is not sunny today and it is colder than yesterday. We will
go swimming only if it is sunny, If we do not go swimming, then we will have a barbecue, and If we will have
a barbecue, then we will be home by sunset lead to the conclusion We will be home by sunset. Proof by rules of
inference: Let p be the proposition It is sunny today, q the proposition It is colder than yesterday, r the proposition
We will go swimming, s the proposition We will have a barbecue, and t the proposition We will be home by
sunset. Then the hypotheses become ¬p ∧ q, r → p, ¬r → s and s → t. Using our intuition we conjecture that
the conclusion might be t. Using the rules of inference table we can prove the conjecture easily:
1. ¬p ∧ q (hypothesis); 2. ¬p (simplification of 1); 3. r → p (hypothesis); 4. ¬r (modus tollens of 2 and 3);
5. ¬r → s (hypothesis); 6. s (modus ponens of 4 and 5); 7. s → t (hypothesis); 8. t (modus ponens of 6 and 7).
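The entailment can also be confirmed semantically by brute force over all truth assignments; the Python sketch below is a minimal illustration (the variable names mirror the propositions above and are not from the cited source), checking that no assignment satisfies the four hypotheses while making t false:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Check: {not p and q, r -> p, not r -> s, s -> t} entails t.
counterexamples = []
for p, q, r, s, t in product([False, True], repeat=5):
    hypotheses = [
        (not p) and q,      # It is not sunny today and it is colder than yesterday
        implies(r, p),      # We will go swimming only if it is sunny
        implies(not r, s),  # If we do not go swimming, then we will have a barbecue
        implies(s, t),      # If we will have a barbecue, then we will be home by sunset
    ]
    if all(hypotheses) and not t:
        counterexamples.append((p, q, r, s, t))

print(counterexamples)  # [] -- no assignment makes all premises true and t false
```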

148.6 References
[1] Kenneth H. Rosen: Discrete Mathematics and its Applications, Fifth Edition, p. 58.

148.7 See also


List of logic systems
Chapter 149

List of valid argument forms

Of the many and varied argument forms that can possibly be constructed, only very few are valid argument forms.
In order to evaluate these forms, statements are put into logical form. Logical form replaces any sentences or ideas
with letters to remove any bias from content and allow one to evaluate the argument without any bias due to its subject
matter.[1]
Being a valid argument does not necessarily mean the conclusion will be true. It is valid because if the premises are
true, then the conclusion has to be true. This can be proven for any valid argument form using a truth table which
shows that there is no situation in which there are all true premises and a false conclusion.[2]
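That truth-table test can be mechanized directly. The sketch below is a minimal illustration (the helper is_valid and its signature are illustrative choices, not from the cited sources): it declares an argument form valid exactly when no assignment makes every premise true and the conclusion false, and applies this to modus ponens.

```python
from itertools import product

def is_valid(variables, premises, conclusion):
    """Valid iff no assignment makes all premises true and the conclusion false."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

# Modus ponens: If A then B; A; therefore B.
premises = [
    lambda env: (not env["A"]) or env["B"],  # If A, then B
    lambda env: env["A"],                    # A
]
conclusion = lambda env: env["B"]            # Therefore B

print(is_valid(["A", "B"], premises, conclusion))  # True
```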

149.1 Valid syllogistic forms


In syllogistic logic, there are 256 possible ways to construct categorical syllogisms using the A, E, I, and O statement
forms in the square of opposition. Of the 256, only 24 are valid forms. Of the 24 valid forms, 15 are unconditionally
valid, and 9 are conditionally valid.

149.1.1 Unconditionally valid

149.1.2 Conditionally valid

149.2 Valid propositional forms

149.2.1 Modus ponens


One valid argument form is known as modus ponens, not to be confused with modus tollens, which is another valid
argument form with a like-sounding name and structure. Modus ponens (sometimes abbreviated as MP) says that
if one thing is true, then another will be. It then states that the first is true. The conclusion is that the second thing is
true.[3] It is shown below in logical form.

If A, then B
A
Therefore B

Before being put into logical form the above statement could have been something like below.

If Kelly does not finish his homework, he will not go to class


Kelly did not finish his homework
Therefore, Kelly will not go to class

The first two statements are the premises while the third is the conclusion derived from them.


149.2.2 Modus tollens


Another form of argument is known as modus tollens (commonly abbreviated MT). In this form, you start with the
same first premise as with modus ponens. However, the second part of the premise is denied, leading to the conclusion
that the rst part of the premise should be denied as well. It is shown below in logical form.

If A, then B
Not B
Therefore not A.[3]

When modus tollens is used with actual content, it looks like below.

If the Saints win the Super Bowl, there will be a party in New Orleans that night
There was no party in New Orleans that night
Therefore, the Saints did not win the Super Bowl

149.2.3 Hypothetical syllogism


Much like modus ponens and modus tollens, hypothetical syllogism (sometimes abbreviated as HS) contains two
premises and a conclusion. It is, however, slightly more complicated than the first two. In short, it states that if one
thing happens, another will as well. If that second thing happens, a third will follow it. Therefore, if the first thing
happens, it is inevitable that the third will too.[3] It is shown below in logical form.

If A, then B
If B, then C
Therefore if A, then C

When put into words it looks like below.

If it rains today, I will wear my rain jacket


If I wear my rain jacket, I will keep dry
Therefore if it rains today, I will keep dry

149.2.4 Disjunctive syllogism


Disjunctive syllogism (sometimes abbreviated DS) has one of the same characteristics as modus tollens in that it
contains a premise, then in a second premise it denies a statement, leading to the conclusion. In Disjunctive Syllogism,
the first premise establishes two options. The second takes one away, so the conclusion states that the remaining one
must be true.[3] It is shown below in logical form.

A or B
Not A
Therefore B

When A and B are replaced with real-life examples, it looks like below.

Either you will see Joe in class today or he will oversleep


You did not see Joe in class today
Therefore Joe overslept

Disjunctive syllogism takes two options and narrows it down to one.



149.2.5 Constructive dilemma


Another valid form of argument is known as constructive dilemma or sometimes just 'dilemma'. It does not leave
the user with one statement alone at the end of the argument; instead, it gives an option of two different statements.
The first premise gives an option of two different statements. Then it states that if the first one happens, there will
be a particular outcome and if the second happens, there will be a separate outcome. The conclusion is that either
the first outcome or the second outcome will happen. The criticism with this form is that it does not give a definitive
conclusion; just a statement of possibilities.[3] When it is written in argument form it looks like below.

A or B
If A then C
If B then D
Therefore C or D

When content is inserted in place of the letters, it looks like below.

Bill will either take the stairs or the elevator to his room
If he takes the stairs, he will be tired when he gets to his room
If he takes the elevator, he will miss the start of the football game on TV
Therefore Bill will either be tired when he gets to his room or he will miss the start of the football game

There is a slightly different version of dilemma that uses negation rather than affirming something, known as destructive
dilemma. When put in argument form it looks like below.

If A then C
If B then D
Not C or not D
Therefore not A or not B [4]

149.3 References
[1] May, Robert (1993). Logical form: its structure and derivation. Cambridge, Mass: MIT Press.

[2] Stanley, Jason (30 August 2000). Context and Logical Form. Linguistics and Philosophy. 23 (4).

[3] Johnson, Robert (2006). A Logic Book: Fundamentals of Reasoning. Cengage Learning.

[4] Elugardo, Reinaldo (1 September 2001). Logical Form and the Vernacular. Mind and Language. 16 (4).
Chapter 150

Literal (mathematical logic)

In mathematical logic, a literal is an atomic formula (atom) or its negation. The definition mostly appears in proof
theory (of classical logic), e.g. in conjunctive normal form and the method of resolution.
Literals can be divided into two types:

A positive literal is just an atom.


A negative literal is the negation of an atom.

For a literal l, the complementary literal is a literal corresponding to the negation of l; we can write l̄ to denote
the complementary literal of l. More precisely, if l is x then l̄ is ¬x, and if l is ¬x then l̄ is x.
In the context of a formula in conjunctive normal form, a literal is pure if the literal's complement does not appear
in the formula.

150.1 Examples
In propositional calculus a literal is simply a propositional variable or its negation.
In predicate calculus a literal is an atomic formula or its negation, where an atomic formula is a predicate symbol
applied to some terms, P(t1, . . . , tn), with the terms recursively defined starting from constant symbols, variable
symbols, and function symbols. For example, ¬Q(f(g(x), y, 2), x) is a negative literal with the constant symbol 2,
the variable symbols x, y, the function symbols f, g, and the predicate symbol Q.
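One concrete way to see the definitions of complement and purity is to represent a literal as a signed atom. The Python sketch below is a minimal illustration under that assumed representation (it is not a standard library interface); a CNF formula is given as a list of clauses, each a set of literals:

```python
# A literal is a pair (atom, positive?): ("x", True) is x, ("x", False) is not-x.
def complement(literal):
    atom, positive = literal
    return (atom, not positive)

def is_pure(literal, cnf):
    """A literal is pure in a CNF formula if its complement never appears."""
    return all(complement(literal) not in clause for clause in cnf)

# (x or not-y) and (x or z) and (not-z or not-y)
cnf = [
    {("x", True), ("y", False)},
    {("x", True), ("z", True)},
    {("z", False), ("y", False)},
]

print(is_pure(("x", True), cnf))   # True  -- not-x never occurs
print(is_pure(("z", True), cnf))   # False -- not-z occurs in the third clause
```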

150.2 References
Samuel R. Buss (1998). An introduction to proof theory. In Samuel R. Buss. Handbook of proof theory.
Elsevier. pp. 178. ISBN 0-444-89840-9.

Chapter 151

Logic alphabet

The logic alphabet, also called the X-stem Logic Alphabet (XLA), constitutes an iconic set of symbols that system-
atically represents the sixteen possible binary truth functions of logic. The logic alphabet was developed by Shea
Zellweger. The major emphasis of his iconic logic alphabet is to provide a more cognitively ergonomic notation for
logic. Zellweger's visually iconic system more readily reveals, to the novice and expert alike, the underlying symmetry
relationships and geometric properties of the sixteen binary connectives within Boolean algebra.

151.1 Truth functions


Truth functions are functions from sequences of truth values to truth values. A unary truth function, for example,
takes a single truth value and maps it onto another truth value. Similarly, a binary truth function maps ordered pairs
of truth values onto truth values, while a ternary truth function maps ordered triples of truth values onto truth values,
and so on.
In the unary case, there are two possible inputs, viz. T and F, and thus four possible unary truth functions: one
mapping T to T and F to F, one mapping T to F and F to F, one mapping T to T and F to T, and finally one mapping
T to F and F to T, this last one corresponding to the familiar operation of logical negation. In the form of a table,
the four unary truth functions may be represented as follows.
In the binary case, there are four possible inputs, viz. (T,T), (T,F), (F,T), and (F,F), thus yielding sixteen possible
binary truth functions. Quite generally, for any number n, there are 2^(2^n) possible n-ary truth functions. The sixteen
possible binary truth functions are listed in the table below.

151.2 Content
Zellweger's logic alphabet offers a visually systematic way of representing each of the sixteen binary truth functions.
The idea behind the logic alphabet is to first represent the sixteen binary truth functions in the form of a square
matrix rather than the more familiar tabular format seen in the table above, and then to assign a letter shape to each
of these matrices. Letter shapes are derived from the distribution of Ts in the matrix. When drawing a logic symbol,
one passes through each square with assigned F values while stopping in a square with assigned T values. In the
extreme examples, the symbol for tautology is an X (stops in all four squares), while the symbol for contradiction is an
O (passing through all squares without stopping). The square matrix corresponding to each binary truth function, as
well as its corresponding letter shape, are displayed in the table below.

151.3 Signicance
The interest of the logic alphabet lies in its aesthetic, symmetric, and geometric qualities. These qualities combine
to allow an individual to more easily, rapidly and visually manipulate the relationships between entire truth tables. A
logic operation performed on a two dimensional logic alphabet connective, with its geometric qualities, produces a
symmetry transformation. When a symmetry transformation occurs, each input symbol, without any further thought,


immediately changes into the correct output symbol. For example, by reflecting the symbol for NAND (viz. 'h')
across the vertical axis we produce the symbol for one connective, by reflecting it across the horizontal axis we produce
the symbol for another, and by reflecting it across both the horizontal and vertical axes we produce the symbol for a third.
Similar symmetry transformations can be obtained by operating upon the other symbols.
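These symmetry transformations can be checked directly on truth tables: reflecting the square matrix of a connective amounts to negating one or both of its inputs. The Python sketch below is a minimal illustration of that fact; which axis corresponds to which input depends on how the square is oriented, so the pairing of axes with inputs here is an assumption, and the connective names are computed from truth tables rather than from Zellweger's letter shapes. It shows that the three reflections of NAND give the two one-way implications and OR.

```python
from itertools import product

ROWS = list(product([True, False], repeat=2))  # (p, q) in the order TT, TF, FT, FF

def table(f):
    return tuple(f(p, q) for p, q in ROWS)

NAMES = {
    table(lambda p, q: not (p and q)): "NAND",
    table(lambda p, q: p or q): "OR",
    table(lambda p, q: (not p) or q): "implication (p -> q)",
    table(lambda p, q: p or (not q)): "converse implication (p <- q)",
}

def reflect(f, flip_p=False, flip_q=False):
    """Negating an input corresponds to reflecting the square truth matrix."""
    return lambda p, q: f((not p) if flip_p else p, (not q) if flip_q else q)

nand = lambda p, q: not (p and q)
for flip_p, flip_q in [(False, True), (True, False), (True, True)]:
    print(flip_p, flip_q, NAMES[table(reflect(nand, flip_p, flip_q))])
# The three reflections of NAND yield the two one-way implications and OR.
```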
In effect, the X-stem Logic Alphabet is derived from three disciplines that have been stacked and combined: (1)
mathematics, (2) logic, and (3) semiotics. This happens because, in keeping with the mathelogical semiotics, the
connectives have been custom designed in the form of geometric letter shapes that serve as iconic replicas of their
corresponding square-framed truth tables. Logic cannot do it alone. Logic is sandwiched between mathematics and
semiotics. Indeed, Zellweger has constructed intriguing structures involving the symbols of the logic alphabet on
the basis of these symmetries. The considerable aesthetic appeal of the logic alphabet has led to exhibitions of
Zellwegers work at the Museum of Jurassic Technology in Los Angeles, among other places.
The value of the logic alphabet lies in its use as a visually simpler pedagogical tool than the traditional system for logic
notation. The logic alphabet eases the introduction to the fundamentals of logic, especially for children, at much earlier
stages of cognitive development. Because the logic notation system, in current use today, is so deeply embedded in our
computer culture, the logic alphabet's adoption and value by the field of logic itself, at this juncture, is questionable.
Additionally, systems of natural deduction, for example, generally require introduction and elimination rules for each
connective, meaning that the use of all sixteen binary connectives would result in a highly complex proof system.
Various subsets of the sixteen binary connectives (e.g., {∨, &, →, ~}, {∨, ~}, {&, ~}, {→, ~}) are themselves functionally
complete in that they suffice to define the remaining connectives. In fact, both NAND and NOR are sole sufficient
operators, meaning that the remaining connectives can all be dened solely in terms of either of them. Nonetheless,
the logic alphabet's 2-dimensional geometric letter shapes along with its group symmetry properties can help ease the
learning curve for children and adult students alike, as they become familiar with the interrelations and operations on
all 16 binary connectives. Giving children and students this advantage is a decided gain.

151.4 See also


Polish notation
Propositional logic

Boolean function
Boolean algebra (logic)

Logic gate

151.5 External links


Page dedicated to Zellwegers logic alphabet

Exhibition in a small museum: Flickr photopage, including a discussion between Tilman Piesk and probably
Shea Zellweger
Chapter 152

Logic optimization

For other uses, see Minimisation.

Logic optimization, a part of logic synthesis in electronics, is the process of finding an equivalent representation of
the specified logic circuit under one or more specified constraints. Generally the circuit is constrained to minimum
chip area meeting a prespecified delay.

152.1 Introduction
With the advent of logic synthesis, one of the biggest challenges faced by the electronic design automation (EDA)
industry was to find the best netlist representation of the given design description. While two-level logic optimiza-
tion had long existed in the form of the QuineMcCluskey algorithm, later followed by the Espresso heuristic logic
minimizer, the rapidly improving chip densities, and the wide adoption of HDLs for circuit description, formalized
the logic optimization domain as it exists today.
Today, logic optimization is divided into various categories:
Based on circuit representation

Two-level logic optimization

Multi-level logic optimization

Based on circuit characteristics

Sequential logic optimization

Combinational logic optimization

Based on type of execution

Graphical optimization methods

Tabular optimization methods

Algebraic optimization methods

While a two-level circuit representation of circuits strictly refers to the flattened view of the circuit in terms of SOPs
(sum-of-products), which is more applicable to a PLA implementation of the design, a multi-level representation
is a more generic view of the circuit in terms of arbitrarily connected SOPs, POSs (product-of-sums), factored form
etc. Logic optimization algorithms generally work either on the structural (SOPs, factored form) or functional (BDDs,
ADDs) representation of the circuit.


152.2 Two-level versus multi-level representations

If we have two functions F 1 and F 2 :

F1 = AB + AC + AD,

F2 = A'B + A'C + A'E.

The above 2-level representation takes six product terms and 24 transistors in CMOS Rep.
A functionally equivalent representation in multilevel can be:

P = B + C.

F 1 = AP + AD.

F 2 = A'P + A'E.

While the number of levels here is 3, the total number of product terms and literals reduce because of the sharing of
the term B + C.
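As a sanity check, the factored multi-level forms compute exactly the same functions as the flat two-level forms on every input. The Python sketch below is a minimal, illustrative verification of that claim (variable names follow the example above):

```python
from itertools import product

for A, B, C, D, E in product([0, 1], repeat=5):
    nA = 1 - A                      # A' (complement of A)
    F1_two_level = (A & B) | (A & C) | (A & D)
    F2_two_level = (nA & B) | (nA & C) | (nA & E)
    P = B | C                       # shared term
    F1_multi_level = (A & P) | (A & D)
    F2_multi_level = (nA & P) | (nA & E)
    assert F1_two_level == F1_multi_level
    assert F2_two_level == F2_multi_level

print("Both factored forms match their two-level equivalents on all 32 inputs.")
```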
Similarly, we distinguish between sequential and combinational circuits, whose behavior can be described in terms
of nite-state machine state tables/diagrams or by Boolean functions and relations respectively.

152.3 Circuit minimization in Boolean algebra

In Boolean algebra, circuit minimization is the problem of obtaining the smallest logic circuit (Boolean formula) that
represents a given Boolean function or truth table. The unbounded circuit minimization problem was long-conjectured
to be Σ₂P-complete,[1] a result finally proved in 2008, but there are effective heuristics such as Karnaugh maps and
the Quine–McCluskey algorithm that facilitate the process.

152.3.1 Purpose

The problem with having a complicated circuit (i.e. one with many elements, such as logic gates) is that each element
takes up physical space in its implementation and costs time and money to produce in itself. Circuit minimization
may be one form of logic optimization used to reduce the area of complex logic in integrated circuits.

152.3.2 Example

While there are many ways to minimize a circuit, this is an example that minimizes (or simplifies) a Boolean function.
Note that the Boolean function carried out by the circuit is directly related to the algebraic expression from which the
function is implemented.[2] Consider the circuit used to represent (A ∧ ¬B) ∨ (¬A ∧ B). It is evident that two negations,
two conjunctions, and a disjunction are used in this statement. This means that to build the circuit one would need
two inverters, two AND gates, and an OR gate.
We can simplify (minimize) the circuit by applying logical identities or using intuition. Since the example states that
A is true when B is false or the other way around, we can conclude that this simply means A ≠ B. In terms of logical
gates, inequality simply means an XOR gate (exclusive or). Therefore, (A ∧ ¬B) ∨ (¬A ∧ B) is equivalent to A ≠ B. Then
the two circuits shown below are equivalent:

[Figure: original circuit built from two inverters, two AND gates, and an OR gate, with inputs A and B]

[Figure: simplified (minimized) circuit consisting of a single XOR gate, with inputs A and B]

You can additionally check the correctness of the result using a truth table.
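For instance, a four-row truth table confirms the simplification; the short Python sketch below (an illustrative check, not part of any cited tool) compares the original expression with the XOR form on every input:

```python
for A in (False, True):
    for B in (False, True):
        original = (A and not B) or (not A and B)   # (A AND NOT B) OR (NOT A AND B)
        minimized = A != B                          # XOR, i.e. A is not equal to B
        print(A, B, original, minimized)
        assert original == minimized                # identical on all four rows
```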

152.4 Graphical two-level logic minimization methods


Graphical minimization methods for two-level logic include:

Marquand diagram (1881) by Allan Marquand (1853–1924)[3][4]

Harvard minimizing chart (1951) by Howard H. Aiken and Martha L. Whitehouse of the Harvard Computation
Laboratory[5][6][7][8]
Veitch chart (1952) by Edward Veitch (1924–2013)[9][4]
Karnaugh map (1953) by Maurice Karnaugh (born 1924)
Svoboda's graphical aids (1956) and triadic map by Antonín Svoboda (1907–1980)[10][11][12][13]
Händler circle graph (aka Händlerscher Kreisgraph, Kreisgraph nach Händler, Händler-Kreisgraph, Händler-
Diagramm, Minimisierungsgraph [sic]) (1958) by Wolfgang Händler (1920–1998)[14][15][16][12][17][18][19][20][21]
Graph method (1965) by Herbert Kortum (1907–1979)[22][23][24][25][26][27]

152.5 See also


Binary decision diagram
Circuit minimization
Espresso heuristic logic minimizer
Karnaugh map
Petricks method
Prime implicant
Circuit complexity

Function composition
Function decomposition
Gate underutilization

152.6 References
[1] Buchfuhrer, D.; Umans, C. (2011). The complexity of Boolean formula minimization. Journal of Computer and System
Sciences. 77: 142. doi:10.1016/j.jcss.2010.06.011. This is an extended version of the conference paper Buchfuhrer,
D.; Umans, C. (2008). The Complexity of Boolean Formula Minimization. Automata, Languages and Programming.
Lecture Notes in Computer Science. 5125. p. 24. ISBN 978-3-540-70574-1. doi:10.1007/978-3-540-70575-8_3.

[2] M. Mano, C. Kime. Logic and Computer Design Fundamentals (Fourth Edition). Pg 54

[3] Marquand, Allan (1881). XXXIII: On Logical Diagrams for n terms. The London, Edinburgh, and Dublin Philosophical
Magazine and Journal of Science. 5. 12 (75): 266270. doi:10.1080/14786448108627104. Retrieved 2017-05-15. (NB.
Quite many secondary sources erroneously cite this work as A logical diagram for n terms or On a logical diagram for
n terms.)

[4] Brown, Frank Markham (2012) [2003, 1990]. Boolean Reasoning - The Logic of Boolean Equations (reissue of 2nd ed.).
Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-42785-0.

[5] Aiken, Howard H.; Blaauw, Gerrit; Burkhart, William; Burns, Robert J.; Cali, Lloyd; Canepa, Michele; Ciampa, Carmela
M.; Coolidge, Jr., Charles A.; Fucarile, Joseph R.; Gadd, Jr., J. Orten; Gucker, Frank F.; Harr, John A.; Hawkins, Robert
L.; Hayes, Miles V.; Hofheimer, Richard; Hulme, William F.; Jennings, Betty L.; Johnson, Stanley A.; Kalin, Theodore;
Kincaid, Marshall; Lucchini, E. Edward; Minty, William; Moore, Benjamin L.; Remmes, Joseph; Rinn, Robert J.; Roche,
John W.; Sanbord, Jacquelin; Semon, Warren L.; Singer, Theodore; Smith, Dexter; Smith, Leonard; Strong, Peter F.;
Thomas, Helene V.; Wang, An; Whitehouse, Martha L.; Wilkins, Holly B.; Wilkins, Robert E.; Woo, Way Dong; Lit-
tle, Elbert P.; McDowell, M. Scudder (1952) [January 1951]. Chapter V: Minimizing charts. Synthesis of electronic
computing and control circuits (second printing, revised ed.). Write-Patterson Air Force Base: Harvard University Press
(Cambridge, Massachusetts, USA) / Georey Cumberlege Oxford University Press (London). pp. preface, 5067. Re-
trieved 2017-04-16. [] Martha Whitehouse constructed the minimizing charts used so profusely throughout this book,
and in addition prepared minimizing charts of seven and eight variables for experimental purposes. [] Hence, the present
writer is obliged to record that the general algebraic approach, the switching function, the vacuum-tube operator, and the
minimizing chart are his proposals, and that he is responsible for their inclusion herein. [] (NB. Work commenced in
April 1948.)

[6] Karnaugh, Maurice (November 1953) [1953-04-23, 1953-03-17]. The Map Method for Synthesis of Combinational Logic
Circuits (PDF). Transactions of the American Institute of Electrical Engineers part I. 72 (9): 593599. doi:10.1109/TCE.1953.6371932.
Paper 53-217. Retrieved 2017-04-16. (NB. Also contains a short review by Samuel H. Caldwell.)

[7] Phister, Jr., Montgomery (1959) [December 1958]. Logical design of digital computers. New York, USA: John Wiley &
Sons Inc. pp. 7583. ISBN 0471688053.

[8] Curtis, H. Allen (1962). A new approach to the design of switching circuits. Princeton: D. van Nostrand Company.

[9] Veitch, Edward W. (1952-05-03) [1952-05-02]. A Chart Method for Simplifying Truth Functions. ACM Annual Con-
ference/Annual Meeting: Proceedings of the 1952 ACM Annual Meeting (Pittsburg). New York, USA: ACM: 127133.
doi:10.1145/609784.609801.

[10] Svoboda, Antonn (1956). Gracko-mechanick pomcky uvan pi analyse a synthese kontaktovch obvod [Utilization
of graphical-mechanical aids for the analysis and synthesis of contact circuits]. Stroje na zpracovn informac [Symphosium
IV on information processing machines] (in Czech). IV. Prague: Czechoslovak Academy of Sciences, Research Institute
of Mathematical Machines. pp. 921.

[11] Svoboda, Antonn (1956). Graphical Mechanical Aids for the Synthesis of Relay Circuits. Nachrichtentechnische Fach-
berichte (NTF), Beihefte der Nachrichtentechnischen Zeitschrift (NTZ). Braunschweig, Germany: Vieweg-Verlag.

[12] Steinbuch, Karl W.; Weber, Wolfgang; Heinemann, Traute, eds. (1974) [1967]. Taschenbuch der Informatik - Band II
- Struktur und Programmierung von EDV-Systemen. Taschenbuch der Nachrichtenverarbeitung (in German). 2 (3 ed.).
Berlin, Germany: Springer-Verlag. pp. 25, 62, 96, 122123, 238. ISBN 3-540-06241-6. LCCN 73-80607.

[13] Svoboda, Antonn; White, Donnamaie E. (2016) [1979-08-01]. Advanced Logical Circuit Design Techniques (PDF) (re-
typed electronic reissue ed.). Garland STPM Press (original issue) / WhitePubs (reissue). ISBN 978-0-8240-7014-4.
Archived (PDF) from the original on 2016-03-15. Retrieved 2017-04-15.

[14] Hndler, Wolfgang (1958). Ein Minimisierungsverfahren zur Synthese von Schaltkreisen: Minimisierungsgraphen (Disser-
tation) (in German). Technische Hochschule Darmstadt. D 17. (NB. Although written by a German, the title contains an
anglicism; the correct German term would be Minimierung instead of Minimisierung.)

[15] Hndler, Wolfgang (2013) [1961]. Zum Gebrauch von Graphen in der Schaltkreis- und Schaltwerktheorie. In Peschl,
Ernst Ferdinand; Unger, Heinz. Colloquium ber Schaltkreis- und Schaltwerk-Theorie - Vortragsauszge vom 26. bis 28.
Oktober 1960 in Bonn - Band 3 von Internationale Schriftenreihe zur Numerischen Mathematik [International Series of
Numerical Mathematics] (ISNM) (in German). 3. Institut fr Angewandte Mathematik, Universitt Saarbrcken, Rheinisch-
Westflisches Institut fr Instrumentelle Mathematik: Springer Basel AG / Birkhuser Verlag Basel. pp. 169198. ISBN
978-3-0348-5771-0. doi:10.1007/978-3-0348-5770-3.

[16] Berger, Erich R.; Hndler, Wolfgang (1967) [1962]. Steinbuch, Karl W.; Wagner, Siegfried W., eds. Taschenbuch der
Nachrichtenverarbeitung (in German) (2 ed.). Berlin, Germany: Springer-Verlag OHG. pp. 64, 10341035, 1036, 1038.
LCCN 67-21079. Title No. 1036. [] bersichtlich ist die Darstellung nach Hndler, die smtliche Punkte, numeriert
nach dem Gray-Code [], auf dem Umfeld eines Kreises anordnet. Sie erfordert allerdings sehr viel Platz. [] [Hndlers
illustration, where all points, numbered according to the Gray code, are arranged on the circumference of a circle, is easily
comprehensible. It needs, however, a lot of space.]

[17] Hotz, Gnter (1974). Schaltkreistheorie [Switching circuit theory]. DeGruyter Lehrbuch (in German). Walter de Gruyter
& Co. p. 117. ISBN 3-11-00-2050-5. [] Der Kreisgraph von Hndler ist fr das Aunden von Primimplikanten
gut brauchbar. Er hat den Nachteil, da er schwierig zu zeichnen ist. Diesen Nachteil kann man allerdings durch die
Verwendung von Schablonen verringern. [] [The circle graph by Hndler is well suited to nd prime implicants. A
disadvantage is that it is dicult to draw. This can be remedied using stencils.]

[18] Informatik Sammlung Erlangen (ISER)" (in German). Erlangen, Germany: Friedrich-Alexander Universitt. 2012-03-
13. Retrieved 2017-04-12. (NB. Shows a picture of a Kreisgraph by Hndler.)

[19] Informatik Sammlung Erlangen (ISER) - Impressum (in German). Erlangen, Germany: Friedrich-Alexander Universitt.
2012-03-13. Archived from the original on 2012-02-26. Retrieved 2017-04-15. (NB. Shows a picture of a Kreisgraph by
Hndler.)

[20] Zemanek, Heinz (2013) [1990]. Geschichte der Schaltalgebra [History of circuit switching algebra]. In Broy, Man-
fred. Informatik und Mathematik [Computer Sciences and Mathematics] (in German). Springer-Verlag. pp. 4372. ISBN
9783642766770. Einen Weg besonderer Art, der damals zu wenig beachtet wurde, wies W. Hndler in seiner Dissertation
[] mit einem Kreisdiagramm. [] (NB. Collection of papers at a colloquium held at the Bayerische Akademie der
Wissenschaften, 1989-06-12/14, in honor of Friedrich L. Bauer.)

[21] Bauer, Friedrich Ludwig; Wirsing, Martin (March 1991). Elementare Aussagenlogik (in German). Berlin / Heidelberg:
Springer-Verlag. pp. 5456, 71, 112113, 138139. ISBN 978-3-540-52974-3. [] handelt es sich um ein Hndler-
Diagramm [], mit den Wrfelecken als Ecken eines 2m -gons. [] Abb. [] zeigt auch Gegenstcke fr andere Di-
mensionen. Durch waagerechte Linien sind dabei Tupel verbunden, die sich nur in der ersten Komponente unterscheiden;
durch senkrechte Linien solche, die sich nur in der zweiten Komponente unterscheiden; durch 45-Linien und 135-Linien
solche, die sich nur in der dritten Komponente unterscheiden usw. Als Nachteil der Hndler-Diagramme wird angefhrt,
da sie viel Platz beanspruchen. []

[22] Kortum, Herbert (1965). Minimierung von Kontaktschaltungen durch Kombination von Krzungsverfahren und Graphen-
methoden. messen-steuern-regeln (msr) (in German). Verlag Technik. 8 (12): 421425.

[23] Kortum, Herbert (1966). Konstruktion und Minimierung von Halbleiterschaltnetzwerken mittels Graphentransformation.
messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (1): 912.

[24] Kortum, Herbert (1966). Weitere Bemerkungen zur Minimierung von Schaltnetzwerken mittels Graphenmethoden.
messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (3): 96102.

[25] Kortum, Herbert (1966). Weitere Bemerkungen zur Behandlung von Schaltnetzwerken mittels Graphen. messen-steuern-
regeln (msr) (in German). Verlag Technik. 9 (5): 151157.

[26] Kortum, Herbert (1967). "ber zweckmige Anpassung der Graphenstruktur diskreter Systeme an vorgegebene Auf-
gabenstellungen. messen-steuern-regeln (msr) (in German). Verlag Technik. 10 (6): 208211.

[27] Tafel, Hans Jrg (1971). 4.3.5. Graphenmethode zur Vereinfachung von Schaltfunktionen. Written at RWTH, Aachen,
Germany. Einfhrung in die digitale Datenverarbeitung [Introduction to digital information processing] (in German). Mu-
nich, Germany: Carl Hanser Verlag. pp. 98105, 107113. ISBN 3-446-10569-7.

152.7 Further reading


De Micheli, Giovanni (1994). Synthesis and Optimization of Digital Circuits. McGraw-Hill. ISBN 0-07-
016333-2. (NB. Chapters 7-9 cover combinatorial two-level, combinatorial multi-level, and respectively se-
quential circuit optimization.)

Hachtel, Gary D.; Somenzi, Fabio (2006) [1996]. Logic Synthesis and Verication Algorithms. Springer Science
& Business Media. ISBN 978-0-387-31005-3.

Zvi Kohavi, Niraj K. Jha. Switching and Finite Automata Theory. 3rd ed. Cambridge University Press. 2009.
ISBN 978-0-521-85748-2, chapters 46
Knuth, Donald E. (2010). chapter 7.1.2: Boolean Evaluation. The Art of Computer Programming. 4A.
Addison-Wesley. pp. 96133. ISBN 0-201-03804-8.
Multi-level minimization part I, partII: CMU lectures slides by Rob A. Rutenbar

Tomaszewski, S. P., Celik, I. U., Antoniou, G. E., WWW-based Boolean function minimization International
Journal of Applied Mathematics and Computer Science, VOL 13; PART 4, pages 577-584, 2003.
Chapter 153

Logic redundancy

Logic redundancy occurs in a digital gate network containing circuitry that does not affect the static logic function.
There are several reasons why logic redundancy may exist. One reason is that it may have been added deliberately to
suppress transient glitches (caused by a race condition) in the output signals by having two or more product terms
overlap with a third one.
Consider the following equation:

Y = AB + A'C + BC.

The third product term BC is a redundant consensus term. If A switches from 1 to 0 while B = 1 and C = 1, Y
remains 1. During the transition of signal A in logic gates, both the first and second term may be 0 momentarily. The
third term prevents a glitch since its value of 1 in this case is not affected by the transition of signal A.
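Because BC is a consensus term, dropping it never changes the static value of Y; only the transient behaviour differs. The Python sketch below is a minimal, illustrative exhaustive check of that equivalence:

```python
from itertools import product

for A, B, C in product([0, 1], repeat=3):
    with_consensus = (A & B) | ((1 - A) & C) | (B & C)   # Y = AB + A'C + BC
    without_consensus = (A & B) | ((1 - A) & C)          # consensus term BC removed
    assert with_consensus == without_consensus

print("BC is logically redundant: both forms agree on all 8 input combinations.")
```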
Another reason for logic redundancy is poor design practices which unintentionally result in logically redundant
terms. This causes an unnecessary increase in network complexity, and possibly hampers the ability to test manu-
factured designs using traditional test methods (single stuck-at fault models). (Note: testing might be possible using
IDDQ models.)

153.1 Removing logic redundancy


Logic redundancy is, in general, not desired. Redundancy, by definition, requires extra parts (in this case: logical
terms) which raises the cost of implementation (either actual cost of physical parts or CPU time to process). Logic
redundancy can be removed by several well-known techniques, such as Karnaugh maps, the Quine–McCluskey algo-
rithm, and the heuristic computer method.

153.2 Adding logic redundancy


Main article: hazard (logic)
In some cases it may be desirable to add logic redundancy. One of those cases is to avoid race conditions whereby
an output can fluctuate because different terms are racing to turn off and on. To explain this in more concrete terms
the Karnaugh map to the right shows the minterms and maxterms for the following function:

f(A, B, C, D) = E(6, 8, 9, 10, 11, 12, 13, 14).

The boxes represent the minimal AND/OR terms needed to implement this function:

F = AC' + AB' + BCD'.


[Karnaugh map figure]
f(A,B,C,D) = E(6,8,9,10,11,12,13,14)
F = AC' + AB' + BCD'
F = (A+B)(A+C)(B'+C'+D')
A k-map showing a particular logic function

The k-map visually shows where race conditions occur in the minimal expression by having gaps between minterms or
gaps between maxterms. For example, the gap between the blue and green rectangles. If the input ABCD = 1110
were to change to ABCD = 1010 then a race will occur between BCD' turning off and AB' turning on. If the
blue term switches off before the green turns on then the output will fluctuate and may register as 0. Another race
condition is between the blue and the red for the transition of ABCD = 1110 to ABCD = 1100.
The race condition is removed by adding in logic redundancy, which is contrary to the aims of using a k-map in
the first place. Both minterm race conditions are covered by addition of the yellow term AD'. (The maxterm race
condition is covered by addition of the green-bordered grey term A + D'.)
In this case, the addition of logic redundancy has stabilized the output to avoid output fluctuations because terms are
racing each other to change state.

153.3 See also



[Karnaugh map figure]
f(A,B,C,D) = E(6,8,9,10,11,12,13,14)
F = AC' + AB' + BCD' + AD'
F = (A+B)(A+C)(B'+C'+D')(A+D')
Above k-map with the AD' term added to avoid race hazards
Chapter 154

Logical biconditional

In logic and mathematics, the logical biconditional (sometimes known as the material biconditional) is the logical
connective of two statements asserting "p if and only if q", where p is an antecedent and q is a consequent.[1] This is
often abbreviated "p iff q". The operator is denoted using a double-headed arrow (↔), a prefixed E (Epq), an equality
sign (=), an equivalence sign (≡), or EQV. It is logically equivalent to (p → q) ∧ (q → p). It is also logically equivalent
to "(p and q) or (not p and not q)" (or the XNOR (exclusive nor) boolean operator), meaning both or neither.
The only difference from the material conditional is the case when the hypothesis is false but the conclusion is true. In
that case, in the conditional, the result is true, yet in the biconditional the result is false.
In the conceptual interpretation, a = b means All a 's are b 's and all b 's are a 's"; in other words, the sets a and b
coincide: they are identical. This does not mean that the concepts have the same meaning. Examples: triangle and
trilateral, equiangular trilateral and equilateral triangle. The antecedent is the subject and the consequent is the
predicate of a universal affirmative proposition.
In the propositional interpretation, a b means that a implies b and b implies a; in other words, that the propositions
are equivalent, that is to say, either true or false at the same time. This does not mean that they have the same meaning.
Example: The triangle ABC has two equal sides, and The triangle ABC has two equal angles. The antecedent is
the premise or the cause and the consequent is the consequence. When an implication is translated by a hypothetical
(or conditional) judgment the antecedent is called the hypothesis (or the condition) and the consequent is called the
thesis.
A common way of demonstrating a biconditional is to use its equivalence to the conjunction of two converse conditionals,
demonstrating these separately.
When both members of the biconditional are propositions, it can be separated into two conditionals, of which one
is called a theorem and the other its reciprocal. Thus whenever a theorem and its reciprocal are true we have a
biconditional. A simple theorem gives rise to an implication whose antecedent is the hypothesis and whose consequent
is the thesis of the theorem.
It is often said that the hypothesis is the sufficient condition of the thesis, and the thesis the necessary condition of
the hypothesis; that is to say, it is sufficient that the hypothesis be true for the thesis to be true; while it is necessary
that the thesis be true for the hypothesis to be true also. When a theorem and its reciprocal are true we say that its
hypothesis is the necessary and sucient condition of the thesis; that is to say, that it is at the same time both cause
and consequence.

154.1 Denition
Logical equality (also known as biconditional) is an operation on two logical values, typically the values of two
propositions, that produces a value of true if and only if both operands are false or both operands are true.

154.1.1 Truth table

The truth table for A ↔ B (also written as A ≡ B, A = B, or A EQ B) is as follows:


More than two statements combined by ↔ are ambiguous:


x1 ↔ x2 ↔ x3 ↔ ... ↔ xn may be meant as (((x1 ↔ x2) ↔ x3) ↔ ...) ↔ xn,
or may be used to say that all xi are together true or together false: (¬x1 ∧ ... ∧ ¬xn) ∨ (x1 ∧ ... ∧ xn)
Only for zero or two arguments are these two readings the same.
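The two readings really do come apart for three or more arguments. The Python sketch below is a minimal illustration (the helper name iff is an arbitrary choice) comparing the left-associated chain with the "all together true or together false" reading on three inputs:

```python
from itertools import product
from functools import reduce

def iff(a, b):
    return a == b  # biconditional of two truth values

for xs in product([False, True], repeat=3):
    chained = reduce(iff, xs)            # ((x1 <-> x2) <-> x3)
    all_same = all(xs) or not any(xs)    # all together true or together false
    print(xs, chained, all_same)
# The two readings differ, e.g. chained is False but all_same is True on (False, False, False).
```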
The following truth tables show the same bit pattern only in the line with no argument and in the lines with two
arguments:

x1 ↔ ... ↔ xn meant as equivalent to ¬(¬x1 ⊕ ... ⊕ ¬xn):


The central Venn diagram below,
and line (ABC) in this matrix
represent the same operation.

The left Venn diagram below, and the lines (AB) in these matrices represent the same operation.

154.1.2 Venn diagrams

Red areas stand for true (as in for and).

154.2 Properties
commutativity: yes
associativity: yes

x1 ↔ ... ↔ xn meant as shorthand for


(¬x1 ∧ ... ∧ ¬xn) ∨ (x1 ∧ ... ∧ xn): The Venn diagram directly below,
and line (ABC) in this matrix
represent the same operation.

distributivity: Biconditional doesn't distribute over any binary function (not even itself),
but logical disjunction (see there) distributes over biconditional.
idempotency: no

monotonicity: no
truth-preserving: yes
When all inputs are true, the output is true.
falsehood-preserving: no
When all inputs are false, the output is not false.
Walsh spectrum: (2,0,0,2)
Nonlinearity: 0 (the function is linear)

154.3 Rules of inference

Main article: Rules of inference

Like all connectives in first-order logic, the biconditional has rules of inference that govern its use in formal proofs.

154.3.1 Biconditional introduction

Main article: Biconditional introduction

Biconditional introduction allows you to infer that, if B follows from A, and A follows from B, then A if and only if
B.
For example, from the statements if I'm breathing, then I'm alive and if I'm alive, then I'm breathing, it can be
inferred that I'm breathing if and only if I'm alive or, equally inferrable, I'm alive if and only if I'm breathing.
B → A, A → B ⊢ A ↔ B
B → A, A → B ⊢ B ↔ A

154.3.2 Biconditional elimination

Biconditional elimination allows one to infer a conditional from a biconditional: if (A ↔ B) is true, then one may
infer one direction of the biconditional, (A → B) and (B → A).
For example, if it's true that I'm breathing if and only if I'm alive, then it's true that if I'm breathing, I'm alive; likewise,
it's true that if I'm alive, I'm breathing.
Formally:
(A ↔ B) ⊢ (A → B)
also
(A ↔ B) ⊢ (B → A)

154.4 Colloquial usage


One unambiguous way of stating a biconditional in plain English is of the form "b if a and a if b". Another is "a
if and only if b". Slightly more formally, one could say "b implies a and a implies b". The plain English "if" may
sometimes be used as a biconditional. One must weigh context heavily.
For example, I'll buy you a new wallet if you need one may be meant as a biconditional, since the speaker doesn't
intend a valid outcome to be buying the wallet whether or not the wallet is needed (as in a conditional). However, it
is cloudy if it is raining is not meant as a biconditional, since it can be cloudy while not raining.

154.5 See also

If and only if

Logical equivalence

Logical equality

XNOR gate

Biconditional elimination

Biconditional introduction

154.6 Notes
[1] Handbook of Logic, page 81

154.7 References
Brennan, Joseph G. Handbook of Logic, 2nd Edition. Harper & Row. 1961

This article incorporates material from Biconditional on PlanetMath, which is licensed under the Creative Commons
Attribution/Share-Alike License.
Chapter 155

Logical conjunction

Venn diagram of A ∧ B

In logic and mathematics, and is the truth-functional operator of logical conjunction; the and of a set of operands is
true if and only if all of its operands are true. The logical connective that represents this operator is typically written
as ∧ or ⋅.
"A and B" is true only if A is true and B is true.
An operand of a conjunction is a conjunct.
The term logical conjunction is also used for the greatest lower bound in lattice theory.
Related concepts in other elds are:

In natural language, the coordinating conjunction and.

In programming languages, the short-circuit and control structure.


Venn diagram of A ∧ B ∧ C

In set theory, intersection.

In predicate logic, universal quantification.

155.1 Notation
And is usually denoted by an infix operator: in mathematics and logic, ∧ or ⋅; in electronics, ⋅; and in programming
languages, &, &&, or and. In Jan Łukasiewicz's prefix notation for logic, the operator is K, for Polish koniunkcja.[1]

155.2 Denition
Logical conjunction is an operation on two logical values, typically the values of two propositions, that produces a
value of true if and only if both of its operands are true.
The conjunctive identity is 1, which is to say that AND-ing an expression with 1 will never change the value of the
expression. In keeping with the concept of vacuous truth, when conjunction is defined as an operator or function of

arbitrary arity, the empty conjunction (AND-ing over an empty set of operands) is often defined as having the result
1.

155.2.1 Truth table

Conjunctions of the arguments on the left: the true bits form a Sierpinski triangle.

The truth table of A ∧ B:

155.2.2 Defined by other operators


In systems where the logical connective ∧ is not a primitive, it may be defined as[2]

A ∧ B = ¬(A → ¬B)

155.3 Introduction and elimination rules


As a rule of inference, conjunction introduction is a classically valid, simple argument form. The argument form has
two premises, A and B. Intuitively, it permits the inference of their conjunction.

A,
B.
Therefore, A and B.

or in logical operator notation:

A,
B
⊢ A ∧ B
Here is an example of an argument that ts the form conjunction introduction:

Bob likes apples.


Bob likes oranges.
Therefore, Bob likes apples and oranges.

Conjunction elimination is another classically valid, simple argument form. Intuitively, it permits the inference from
any conjunction of either element of that conjunction.

A and B.
Therefore, A.

...or alternately,

A and B.
Therefore, B.

In logical operator notation:

A ∧ B
⊢ A
...or alternately,

A ∧ B
⊢ B

155.4 Properties
commutativity: yes
associativity: yes
distributivity: with various operations, especially with or
idempotency: yes

monotonicity: yes
truth-preserving: yes
When all inputs are true, the output is true.
falsehood-preserving: yes
When all inputs are false, the output is false.
Walsh spectrum: (1,1,1,1)
Nonlinearity: 1 (the function is bent)
If using binary values for true (1) and false (0), then logical conjunction works exactly like normal arithmetic multiplication.

155.5 Applications in computer engineering

AND logic gate

In high-level computer programming and digital electronics, logical conjunction is commonly represented by an infix
operator, usually as a keyword such as AND, an algebraic multiplication, or the ampersand symbol "&". Many
languages also provide short-circuit control structures corresponding to logical conjunction.
Logical conjunction is often used for bitwise operations, where 0 corresponds to false and 1 to true:

0 AND 0 = 0,
0 AND 1 = 0,
1 AND 0 = 0,
1 AND 1 = 1.

The operation can also be applied to two binary words viewed as bitstrings of equal length, by taking the bitwise
AND of each pair of bits at corresponding positions. For example:

11000110 AND 10100011 = 10000010.

This can be used to select part of a bitstring using a bit mask. For example, 10011101 AND 00001000 = 00001000
extracts the fifth bit of an 8-bit bitstring.
In computer networking, bit masks are used to derive the network address of a subnet within an existing network
from a given IP address, by ANDing the IP address and the subnet mask.
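These bitwise operations can be reproduced directly in most programming languages. The Python sketch below is a minimal illustration of the word-wise AND, the bit-mask extraction, and the subnet computation described above (the IP address used is an example from the documentation range, chosen here for illustration):

```python
word1, word2 = 0b11000110, 0b10100011
print(format(word1 & word2, "08b"))        # 10000010 -- bitwise AND of the two words

value, mask = 0b10011101, 0b00001000
print(format(value & mask, "08b"))         # 00001000 -- the mask extracts a single bit

# Network address: AND each octet of the IP address with the subnet mask.
ip = [192, 0, 2, 130]                      # illustrative address
subnet_mask = [255, 255, 255, 0]
network = [a & m for a, m in zip(ip, subnet_mask)]
print(network)                             # [192, 0, 2, 0]
```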
Logical conjunction AND is also used in SQL operations to form database queries.
The CurryHoward correspondence relates logical conjunction to product types.

155.6 Set-theoretic correspondence


The membership of an element of an intersection set in set theory is defined in terms of a logical conjunction: x ∈ A ∩
B if and only if (x ∈ A) ∧ (x ∈ B). Through this correspondence, set-theoretic intersection shares several properties
with logical conjunction, such as associativity, commutativity, and idempotence.

155.7 Natural language


As with other notions formalized in mathematical logic, the logical conjunction and is related to, but not the same
as, the grammatical conjunction and in natural languages.
English and has properties not captured by logical conjunction. For example, and sometimes implies order. For
example, They got married and had a child in common discourse means that the marriage came before the child.

The word and can also imply a partition of a thing into parts, as "The American flag is red, white, and blue." Here
it is not meant that the flag is at once red, white, and blue, but rather that it has a part of each color.

155.8 See also


And-inverter graph
AND gate
Binary and
Bitwise AND
Boolean algebra (logic)
Boolean algebra topics
Boolean conjunctive query
Boolean domain
Boolean function
Boolean-valued function
Conjunction introduction
Conjunction elimination
De Morgan's laws
First-order logic
Fréchet inequalities
Grammatical conjunction
Logical disjunction
Logical negation
Logical graph
Logical value
Operation
PeanoRussell notation
Propositional calculus

155.9 References
[1] Józef Maria Bocheński (1959), A Précis of Mathematical Logic, translated by Otto Bird from the French and German
editions, Dordrecht, North Holland: D. Reidel, passim.
[2] Smith, Peter. Types of proof system (PDF). p. 4.

155.10 External links


Hazewinkel, Michiel, ed. (2001) [1994], Conjunction, Encyclopedia of Mathematics, Springer Science+Business
Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Wolfram MathWorld: Conjunction
Chapter 156

Logical connective

This article is about connectives in logical systems. For connectors in natural languages, see discourse connective.
For other logical symbols, see List of logic symbols.

In logic, a logical connective (also called a logical operator) is a symbol or word used to connect two or more
sentences (of either a formal or a natural language) in a grammatically valid way, such that the value of the compound
sentence produced depends only on that of the original sentences and on the meaning of the connective.
The most common logical connectives are binary connectives (also called dyadic connectives) which join two
sentences which can be thought of as the function's operands. Also commonly, negation is considered to be a unary
connective.
Logical connectives along with quantifiers are the two main types of logical constants used in formal systems such
as propositional logic and predicate logic. Semantics of a logical connective is often, but not always, presented as a
truth function.
A logical connective is similar to but not equivalent to a conditional operator.[1]

156.1 In language

156.1.1 Natural language

In the grammar of natural languages two sentences may be joined by a grammatical conjunction to form a gram-
matically compound sentence. Some but not all such grammatical conjunctions are truth functions. For example,
consider the following sentences:

A: Jack went up the hill.

B: Jill went up the hill.

C: Jack went up the hill and Jill went up the hill.

D: Jack went up the hill so Jill went up the hill.

The words and and so are grammatical conjunctions joining the sentences (A) and (B) to form the compound sentences
(C) and (D). The and in (C) is a logical connective, since the truth of (C) is completely determined by (A) and (B):
it would make no sense to affirm (A) and (B) but deny (C). However, so in (D) is not a logical connective, since it
would be quite reasonable to affirm (A) and (B) but deny (D): perhaps, after all, Jill went up the hill to fetch a pail of
water, not because Jack had gone up the hill at all.
Various English words and word pairs express logical connectives, and some of them are synonymous. Examples are:


156.1.2 Formal languages

In formal languages, truth functions are represented by unambiguous symbols. These symbols are called logical
connectives, logical operators, propositional operators, or, in classical logic, "truth-functional connectives. See
well-formed formula for the rules which allow new well-formed formulas to be constructed by joining other well-
formed formulas using truth-functional connectives.
Logical connectives can be used to link more than two statements, so one can speak about "n-ary logical connective.

156.2 Common logical connectives

156.2.1 List of common logical connectives

Commonly used logical connectives include

Negation (not): ¬, N (prefix), ~

Conjunction (and): ∧, K (prefix), &, ·

Disjunction (or): ∨, A (prefix)

Material implication (if...then): →, C (prefix), ⇒, ⊃

Biconditional (if and only if): ↔, E (prefix), ≡, =

Alternative names for biconditional are iff, xnor and bi-implication.


For example, the meaning of the statements it is raining and I am indoors is transformed when the two are combined
with logical connectives. For statement P = It is raining and Q = I am indoors:

It is not raining (¬P)

It is raining and I am indoors (P ∧ Q)

It is raining or I am indoors (P ∨ Q)

If it is raining, then I am indoors (P → Q)

If I am indoors, then it is raining (Q → P)

I am indoors if and only if it is raining (P ↔ Q)

It is also common to consider the always true formula and the always false formula to be connectives:

True formula (⊤, 1, V [prefix], or T)

False formula (⊥, 0, O [prefix], or F)

156.2.2 History of notations

Negation: the symbol ¬ appeared in Heyting in 1929[2][3] (compare to Frege's notation for the negation of A
in his Begriffsschrift);
the symbol ~ appeared in Russell in 1908;[4] an alternative notation is to add a horizontal line on top of the
formula, as in P̄; another alternative notation is to use a prime symbol as in P'.

Conjunction: the symbol ∧ appeared in Heyting in 1929[2] (compare to Peano's use of the set-theoretic notation
of intersection ∩[5] ); & appeared at least in Schönfinkel in 1924;[6] · comes from Boole's interpretation of logic
as an elementary algebra.

Disjunction: the symbol ∨ appeared in Russell in 1908[4] (compare to Peano's use of the set-theoretic notation
of union ∪); the symbol + is also used, in spite of the ambiguity coming from the fact that the + of ordinary
elementary algebra is an exclusive or when interpreted logically in a two-element ring; punctually in the history
a + together with a dot in the lower right corner has been used by Peirce.[7]
Implication: the symbol → can be seen in Hilbert in 1917;[8] ⊃ was used by Russell in 1908[4] (compare to
Peano's inverted C notation); ⇒ was used in Vax.[9]
Biconditional: the symbol ≡ was used at least by Russell in 1908;[4] ↔ was used at least by Tarski in 1940;[10]
⇔ was used in Vax; other symbols appeared punctually in the history, such as ⊃⊂ in Gentzen,[11] ~ in Schönfinkel[6]
or ⊂⊃ in Chazal.[12]
True: the symbol 1 comes from Boole's interpretation of logic as an elementary algebra over the two-element
Boolean algebra; other notations include ⊤ (to be found in Peano).

False: the symbol 0 comes also from Boole's interpretation of logic as a ring; other notations include ⊥ (to be
found in Peano).

Some authors used letters for connectives at some time of the history: u. for conjunction (German's "und" for
"and") and o. for disjunction (German's "oder" for "or") in earlier works by Hilbert (1904); Np for negation, Kpq
for conjunction, Dpq for alternative denial, Apq for disjunction, Xpq for joint denial, Cpq for implication, Epq for
biconditional in Łukasiewicz (1929);[13] cf. Polish notation.

156.2.3 Redundancy
Such a logical connective as converse implication "←" is actually the same as material conditional with swapped
arguments; thus, the symbol for converse implication is redundant. In some logical calculi (notably, in classical logic)
certain essentially different compound statements are logically equivalent. A less trivial example of a redundancy is
the classical equivalence between ¬P ∨ Q and P → Q. Therefore, a classical-based logical system does not need the
conditional operator "→" if "¬" (not) and "∨" (or) are already in use, or may use the "→" only as a syntactic sugar for
a compound having one negation and one disjunction.
There are sixteen Boolean functions associating the input truth values P and Q with four-digit binary outputs.[14]
These correspond to possible choices of binary logical connectives for classical logic. Different implementations of
classical logic can choose different functionally complete subsets of connectives.
One approach is to choose a minimal set, and define other connectives by some logical form, as in the example with
the material conditional above. The following are the minimal functionally complete sets of operators in classical
logic whose arities do not exceed 2:

One element {↑}, {↓}.


Two elements {∨, ¬} , {∧, ¬} , {→, ¬} , {←, ¬} , {→, ⊥} , {←, ⊥} , {→, ↮} , {←, ↮} , {→, ↚} , {→, ↛} ,
{←, ↚} , {←, ↛} , {↛, ¬} , {↚, ¬} , {↛, ⊤} , {↚, ⊤} , {↮, ⊤} , {↔, ⊥} .
Three elements {∨, ↔, ⊥} , {∨, ↔, ↮} , {∨, ↮, ⊤} , {∧, ↔, ⊥} , {∧, ↔, ↮} , {∧, ↮, ⊤} .

See more details about functional completeness in classical logic at Functional completeness in truth function.
Another approach is to use with equal rights connectives of a certain convenient and functionally complete, but not
minimal set. This approach requires more propositional axioms, and each equivalence between logical forms must be
either an axiom or provable as a theorem.
The situation, however, is more complicated in intuitionistic logic. Of its five connectives, {∧, ∨, →, ⊥, ¬}, only nega-
tion "¬" can be reduced to other connectives (see details). Neither conjunction, disjunction, nor material conditional
has an equivalent form constructed of the other four logical connectives.

156.3 Properties
Some logical connectives possess properties which may be expressed in the theorems containing the connective. Some
of those properties that a logical connective may have are:

Associativity: Within an expression containing two or more of the same associative connectives in a row, the
order of the operations does not matter as long as the sequence of the operands is not changed.
Commutativity: The operands of the connective may be swapped preserving logical equivalence to the original
expression.
Distributivity: A connective denoted by distributes over another connective denoted by +, if a (b + c) = (a
b) + (a c) for all operands a, b, c.
Idempotence: Whenever the operands of the operation are the same, the compound is logically equivalent to
the operand.
Absorption: A pair of connectives ∧, ∨ satisfies the absorption law if a ∧ (a ∨ b) = a for all operands a, b.
Monotonicity: If f(a1 , ..., an) ≤ f(b1 , ..., bn) for all a1 , ..., an, b1 , ..., bn ∈ {0,1} such that a1 ≤ b1 , a2 ≤ b2 ,
..., an ≤ bn. E.g., ∨, ∧, ⊤, ⊥.
Affinity: Each variable always makes a difference in the truth-value of the operation or it never makes a
difference. E.g., ¬, ↔, ↮, ⊤, ⊥.
Duality: To read the truth-value assignments for the operation from top to bottom on its truth table is the same
as taking the complement of reading the table of the same or another connective from bottom to top. Without
resorting to truth tables it may be formulated as g̃(¬a1 , ..., ¬an) = ¬g(a1 , ..., an). E.g., ¬.
Truth-preserving: The compound is a tautology whenever all of its arguments are tautologies. E.g., ∨, ∧, ⊤, →,
↔. (see validity)
Falsehood-preserving: The compound is a contradiction whenever all of its arguments are contradictions. E.g., ∨,
∧, ↮, ⊥, ↛. (see validity)
Involutivity (for unary connectives): f(f(a)) = a. E.g. negation in classical logic.

For classical and intuitionistic logic, the "=" symbol means that corresponding implications "…→…" and "…←…" for
logical compounds can be both proved as theorems, and the "≤" symbol means that "…→…" for logical compounds
is a consequence of corresponding "…→…" connectives for propositional variables. Some many-valued logics may
have incompatible definitions of equivalence and order (entailment).
Both conjunction and disjunction are associative, commutative and idempotent in classical logic, most varieties of
many-valued logic and intuitionistic logic. The same is true about distributivity of conjunction over disjunction and
disjunction over conjunction, as well as for the absorption law.
In classical logic and some varieties of many-valued logic, conjunction and disjunction are dual, and negation is
self-dual, the latter is also self-dual in intuitionistic logic.

156.4 Order of precedence


As a way of reducing the number of necessary parentheses, one may introduce precedence rules: ¬ has higher
precedence than ∧, ∧ higher than ∨, and ∨ higher than →. So for example, P ∨ Q ∧ ¬R → S is short for
(P ∨ (Q ∧ (¬R))) → S.
Here is a table that shows a commonly used precedence of logical operators.[15]

Operator   Precedence
¬          1
∧          2
∨          3
→          4
↔          5
However, not all compilers use the same order; for instance, an ordering in which disjunction is lower precedence than
implication or bi-implication has also been used.[16] Sometimes the precedence between conjunction and disjunction is
left unspecified, requiring it to be given explicitly in a formula with parentheses. The order of precedence determines
which connective is the main connective when interpreting a non-atomic formula.
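Python's not, and and or keywords happen to follow the same relative ordering (negation binds tighter than conjunction, which binds tighter than disjunction), so the grouping described above can be checked mechanically; a minimal sketch:

from itertools import product

# P or Q and not R parses as P or (Q and (not R)), mirroring the precedence table.
for P, Q, R in product([False, True], repeat=3):
    assert (P or Q and not R) == (P or (Q and (not R)))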

156.5 Computer science


A truth-functional approach to logical operators is implemented as logic gates in digital circuits. Practically all digital
circuits (the major exception is DRAM) are built up from NAND, NOR, NOT, and transmission gates; see more
details in Truth function in computer science. Logical operators over bit vectors (corresponding to finite Boolean
algebras) are bitwise operations.
But not every usage of a logical connective in computer programming has a Boolean semantic. For example, lazy
evaluation is sometimes implemented for P ∧ Q and P ∨ Q, so these connectives are not commutative if either of the
expressions P, Q has side effects. Also, a conditional, which in some sense corresponds to the material conditional
connective, is essentially non-Boolean because for "if (P) then Q;" the consequent Q is not executed if the antecedent
P is false (although the compound as a whole counts as successful, i.e. "true", in that case). This is closer to intuitionist and
constructivist views on the material conditional than to classical logic's ones.
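A small Python example makes the non-Boolean behaviour visible: with short-circuit (lazy) evaluation, whether the right operand's side effect happens depends on the left operand, so swapping operands can change what the program does. The helper name below is illustrative only.

def noisy(name, value):
    # Report evaluation so that short-circuiting becomes observable.
    print("evaluating", name)
    return value

noisy("P", False) and noisy("Q", True)   # prints only "evaluating P"
noisy("P", True) or noisy("Q", True)     # prints only "evaluating P"
noisy("Q", True) or noisy("P", True)     # swapping operands changes the side effects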

156.6 See also

156.7 Notes
[1] Cogwheel. "What is the difference between logical and conditional /operator/". Stack Overflow. Retrieved 9 April 2015.
[2] Heyting (1929) Die formalen Regeln der intuitionistischen Logik.
[3] Denis Roegel (2002), Petit panorama des notations logiques du 20e siècle (see chart on page 2).
[4] Russell (1908) Mathematical logic as based on the theory of types (American Journal of Mathematics 30, p. 222–262, also
in From Frege to Gödel edited by van Heijenoort).
[5] Peano (1889) Arithmetices principia, nova methodo exposita.
[6] Schönfinkel (1924) Über die Bausteine der mathematischen Logik, translated as On the building blocks of mathematical
logic in From Frege to Gödel edited by van Heijenoort.
[7] Peirce (1867) On an improvement in Boole's calculus of logic.
[8] Hilbert (1917/1918) Prinzipien der Mathematik (Bernays course notes).
[9] Vax (1982) Lexique logique, Presses Universitaires de France.
[10] Tarski (1940) Introduction to logic and to the methodology of deductive sciences.
[11] Gentzen (1934) Untersuchungen über das logische Schließen.
[12] Chazal (1996): Éléments de logique formelle.
[13] See Roegel
[14] Bocheński (1959), A Précis of Mathematical Logic, passim.
[15] O'Donnell, John; Hall, Cordelia; Page, Rex (2007), Discrete Mathematics Using a Computer, Springer, p. 120, ISBN
9781846285981.
[16] Jackson, Daniel (2012), Software Abstractions: Logic, Language, and Analysis, MIT Press, p. 263, ISBN 9780262017152.

156.8 References
Bocheński, Józef Maria (1959), A Précis of Mathematical Logic, translated from the French and German edi-
tions by Otto Bird, D. Reidel, Dordrecht, South Holland.
Enderton, Herbert (2001), A Mathematical Introduction to Logic (2nd ed.), Boston, MA: Academic Press,
ISBN 978-0-12-238452-3
Gamut, L.T.F (1991), Chapter 2, Logic, Language and Meaning, 1, University of Chicago Press, pp. 5464,
OCLC 21372380
Rautenberg, W. (2010), A Concise Introduction to Mathematical Logic (3rd ed.), New York: Springer Sci-
ence+Business Media, ISBN 978-1-4419-1220-6, doi:10.1007/978-1-4419-1221-3.

156.9 Further reading


Lloyd Humberstone (2011). The Connectives. MIT Press. ISBN 978-0-262-01654-4.

156.10 External links


Hazewinkel, Michiel, ed. (2001) [1994], Propositional connective, Encyclopedia of Mathematics, Springer
Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Lloyd Humberstone (2010), "Sentence Connectives in Formal Logic", Stanford Encyclopedia of Philosophy
(An abstract algebraic logic approach to connectives.)
John MacFarlane (2005), "Logical constants", Stanford Encyclopedia of Philosophy.
Chapter 157

Logical consequence

Entailment redirects here. For other uses, see Entail (disambiguation).


Therefore redirects here. For the therefore symbol (), see Therefore sign.
Logical implication redirects here. For the binary connective, see Material conditional.
"" redirects here. For the symbol, see Double turnstile.

Logical consequence (also entailment) is a fundamental concept in logic, which describes the relationship between
statements that holds true when one statement logically follows from one or more statements. A valid logical argument
is one in which the conclusions are entailed by the premises, because the conclusions are consequences of the premises.
The philosophical analysis of logical consequence involves the questions: In what sense does a conclusion follow from
its premises? and What does it mean for a conclusion to be a consequence of premises?[1] All of philosophical logic
is meant to provide accounts of the nature of logical consequence and the nature of logical truth.[2]
Logical consequence is necessary and formal, by way of examples that explain with formal proof and models of
interpretation.[1] A sentence is said to be a logical consequence of a set of sentences, for a given language, if and only
if, using only logic (i.e. without regard to any interpretations of the sentences) the sentence must be true if every
sentence in the set is true.[3]
Logicians make precise accounts of logical consequence regarding a given language L , either by constructing a
deductive system for L or by formal intended semantics for language L. The Polish logician Alfred Tarski identified
three features of an adequate characterization of entailment: (1) The logical consequence relation relies on the logical
form of the sentences, (2) The relation is a priori, i.e. it can be determined whether or not it holds without regard to
empirical evidence (sense experience), and (3) The logical consequence relation has a modal component.[3]

157.1 Formal accounts

The most widely prevailing view on how to best account for logical consequence is to appeal to formality. This is to say
that whether statements follow from one another logically depends on the structure or logical form of the statements
without regard to the contents of that form.
Syntactic accounts of logical consequence rely on schemes using inference rules. For instance, we can express the
logical form of a valid argument as:
All X are Y . All Y are Z . Therefore, all X are Z .
This argument is formally valid, because every instance of an argument constructed using this scheme is valid.
This is in contrast to an argument like "Fred is Mike's brother's son. Therefore Fred is Mike's nephew." Since this
argument depends on the meanings of the words "brother", "son", and "nephew", the statement "Fred is Mike's
nephew" is a so-called material consequence of "Fred is Mike's brother's son", not a formal consequence. A formal
consequence must be true in all cases, however this is an incomplete definition of formal consequence, since even the
argument "P is Q's brother's son, therefore P is Q's nephew" is valid in all cases, but is not a formal argument.[1]


157.2 A priori property of logical consequence


If you know that Q follows logically from P, no information about the possible interpretations of P or Q will affect that
knowledge. Our knowledge that Q is a logical consequence of P cannot be influenced by empirical knowledge.[1] De-
ductively valid arguments can be known to be so without recourse to experience, so they must be knowable a priori.[1]
However, formality alone does not guarantee that logical consequence is not influenced by empirical knowledge. So
the a priori property of logical consequence is considered to be independent of formality.[1]

157.3 Proofs and models


The two prevailing techniques for providing accounts of logical consequence involve expressing the concept in terms
of proofs and via models. The study of the syntactic consequence (of a logic) is called (its) proof theory whereas the
study of (its) semantic consequence is called (its) model theory.[4]

157.3.1 Syntactic consequence



A formula A is a syntactic consequence[5][6][7][8] within some formal system FS of a set Γ of formulas if there is a
formal proof in FS of A from the set Γ.

Γ ⊢FS A

Syntactic consequence does not depend on any interpretation of the formal system.[9]

157.3.2 Semantic consequence



A formula A is a semantic consequence within some formal system FS of a set of statements Γ

Γ ⊨FS A,

if and only if there is no model I in which all members of Γ are true and A is false.[10] Or, in other words, the set of
the interpretations that make all members of Γ true is a subset of the set of the interpretations that make A true.
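For propositional languages with finitely many variables, the definition above can be checked by brute force over all interpretations. The sketch below (function and variable names are illustrative, not from any standard library) treats formulas as Python functions from a valuation to a truth value:

from itertools import product

def entails(premises, conclusion, variables):
    # Γ ⊨ A holds iff no valuation makes every premise true and the conclusion false.
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False   # counter-model found
    return True

P = lambda v: v["P"]
Q = lambda v: v["Q"]
P_implies_Q = lambda v: (not v["P"]) or v["Q"]
print(entails([P, P_implies_Q], Q, ["P", "Q"]))      # True  (modus ponens)
print(entails([P_implies_Q, Q], P, ["P", "Q"]))      # False (affirming the consequent)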

157.4 Modal accounts


Modal accounts of logical consequence are variations on the following basic idea:

Γ ⊨ A is true if and only if it is necessary that if all of the elements of Γ are true, then A is true.

Alternatively (and, most would say, equivalently):

Γ ⊨ A is true if and only if it is impossible for all of the elements of Γ to be true and A false.

Such accounts are called modal because they appeal to the modal notions of logical necessity and logical possibility.
'It is necessary that' is often expressed as a universal quantifier over possible worlds, so that the accounts above translate
as:

Γ ⊨ A is true if and only if there is no possible world at which all of the elements of Γ are true and A is
false (untrue).

Consider the modal account in terms of the argument given as an example above:

All frogs are green.


Kermit is a frog.
Therefore, Kermit is green.

The conclusion is a logical consequence of the premises because we can't imagine a possible world where (a) all frogs
are green; (b) Kermit is a frog; and (c) Kermit is not green.

157.4.1 Modal-formal accounts


Modal-formal accounts of logical consequence combine the modal and formal accounts above, yielding variations on
the following basic idea:

Γ ⊨ A if and only if it is impossible for an argument with the same logical form as Γ / A to have true
premises and a false conclusion.

157.4.2 Warrant-based accounts


The accounts considered above are all truth-preservational, in that they all assume that the characteristic feature of a
good inference is that it never allows one to move from true premises to an untrue conclusion. As an alternative, some
have proposed "warrant-preservational accounts, according to which the characteristic feature of a good inference
is that it never allows one to move from justifiably assertible premises to a conclusion that is not justifiably assertible.
This is (roughly) the account favored by intuitionists such as Michael Dummett.

157.4.3 Non-monotonic logical consequence


Main article: Non-monotonic logic

The accounts discussed above all yield monotonic consequence relations, i.e. ones such that if A is a consequence of
Γ, then A is a consequence of any superset of Γ. It is also possible to specify non-monotonic consequence relations
to capture the idea that, e.g., 'Tweety can fly' is a logical consequence of

{Birds can typically fly, Tweety is a bird}

but not of

{Birds can typically fly, Tweety is a bird, Tweety is a penguin}.

For more on this, see Belief revision#Non-monotonic inference relation.

157.5 See also

157.6 Notes
[1] Beall, JC and Restall, Greg, Logical Consequence The Stanford Encyclopedia of Philosophy (Fall 2009 Edition), Edward
N. Zalta (ed.).

[2] Quine, Willard Van Orman, Philosophy of Logic.



[3] McKeon, Matthew, Logical Consequence Internet Encyclopedia of Philosophy.


[4] Kosta Dosen (1996). Logical consequence: a turn in style. In Maria Luisa Dalla Chiara; Kees Doets; Daniele Mundici;
Johan van Benthem. Logic and Scientific Methods: Volume One of the Tenth International Congress of Logic, Methodology
and Philosophy of Science, Florence, August 1995. Springer. p. 292. ISBN 978-0-7923-4383-7.
[5] Dummett, Michael (1993) Frege: philosophy of language Harvard University Press, p.82
[6] Lear, Jonathan (1986) Aristotle and Logical Theory Cambridge University Press, 136p.
[7] Creath, Richard, and Friedman, Michael (2007) The Cambridge companion to Carnap Cambridge University Press, 371p.
[8] FOLDOC: syntactic consequence
[9] Hunter, Geoffrey, Metalogic: An Introduction to the Metatheory of Standard First-Order Logic, University of California
Press, 1971, p. 75.
[10] Etchemendy, John, Logical consequence, The Cambridge Dictionary of Philosophy

157.7 Resources
Anderson, A.R.; Belnap, N.D., Jr. (1975), Entailment, 1, Princeton, NJ: Princeton.
Augusto, Luis M. (2017), Logical consequences. Theory and applications: An introduction. London: College
Publications. Series: Mathematical logic and foundations.
Barwise, Jon; Etchemendy, John (2008), Language, Proof and Logic, Stanford: CSLI Publications.
Brown, Frank Markham (2003), Boolean Reasoning: The Logic of Boolean Equations 1st edition, Kluwer
Academic Publishers, Norwell, MA. 2nd edition, Dover Publications, Mineola, NY, 2003.
Davis, Martin, (editor) (1965), The Undecidable, Basic Papers on Undecidable Propositions, Unsolvable Prob-
lems And Computable Functions, New York: Raven Press. Papers include those by Gödel, Church, Rosser,
Kleene, and Post.
Dummett, Michael (1991), The Logical Basis of Metaphysics, Harvard University Press.
Edgington, Dorothy (2001), Conditionals, Blackwell in Lou Goble (ed.), The Blackwell Guide to Philosophical
Logic.
Edgington, Dorothy (2006), Conditionals in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy.
Etchemendy, John (1990), The Concept of Logical Consequence, Harvard University Press.
Goble, Lou, ed. (2001), The Blackwell Guide to Philosophical Logic, Blackwell.
Hanson, William H (1997), The concept of logical consequence, The Philosophical Review, 106 365409.
Hendricks, Vincent F. (2005), Thought 2 Talk: A Crash Course in Reflection and Expression, New York: Au-
tomatic Press / VIP, ISBN 87-991013-7-8
Planchette, P. A. (2001), Logical Consequence in Goble, Lou, ed., The Blackwell Guide to Philosophical Logic.
Blackwell.
Quine, W.V. (1982), Methods of Logic, Cambridge, MA: Harvard University Press (1st ed. 1950), (2nd ed.
1959), (3rd ed. 1972), (4th edition, 1982).
Shapiro, Stewart (2002), Necessity, meaning, and rationality: the notion of logical consequence in D. Jacquette,
ed., A Companion to Philosophical Logic. Blackwell.
Tarski, Alfred (1936), On the concept of logical consequence Reprinted in Tarski, A., 1983. Logic, Semantics,
Metamathematics, 2nd ed. Oxford University Press. Originally published in Polish and German.
A paper on 'implication' from math.niu.edu, Implication
A denition of 'implicant' AllWords
Ryszard Wójcicki (1988). Theory of Logical Calculi: Basic Theory of Consequence Operations. Springer.
ISBN 978-90-277-2785-5.

157.8 External links


Logical Consequence. Stanford Encyclopedia of Philosophy.

Logical consequence at the Indiana Philosophy Ontology Project


Logical consequence. Internet Encyclopedia of Philosophy.

Logical consequence at PhilPapers

Hazewinkel, Michiel, ed. (2001) [1994], Implication, Encyclopedia of Mathematics, Springer Science+Business
Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Chapter 158

Logical disjunction

Disjunction redirects here. For the logic gate, see OR gate. For separation of chromosomes, see Meiosis. For
disjunctions in distribution, see Disjunct distribution.
In logic and mathematics, or is the truth-functional operator of (inclusive) disjunction, also known as alternation;

Venn diagram of A ∨ B

the or of a set of operands is true if and only if one or more of its operands is true. The logical connective that
represents this operator is typically written as ∨ or +.
"A or B" is true if A is true, or if B is true, or if both A and B are true.
In logic, or by itself means the inclusive or, distinguished from an exclusive or, which is false when both of its
arguments are true, while an or is true in that case.
An operand of a disjunction is called a disjunct.
Related concepts in other elds are:


Venn diagram of A ∨ B ∨ C

In natural language, the coordinating conjunction or.

In programming languages, the short-circuit or control structure.

In set theory, union.

In predicate logic, existential quantication.

158.1 Notation

Or is usually expressed with an infix operator: in mathematics and logic, ∨; in electronics, +; and in most program-
ming languages, |, ||, or or. In Jan Łukasiewicz's prefix notation for logic, the operator is A, for Polish alternatywa
(English: alternative).[1]

158.2 Definition
Logical disjunction is an operation on two logical values, typically the values of two propositions, that has a value
of false if and only if both of its operands are false. More generally, a disjunction is a logical formula that can have
one or more literals separated only by 'or's. A single literal is often considered to be a degenerate disjunction.
The disjunctive identity is false, which is to say that the or of an expression with false has the same value as the original
expression. In keeping with the concept of vacuous truth, when disjunction is defined as an operator or function of
arbitrary arity, the empty disjunction (OR-ing over an empty set of operands) is generally defined as false.

158.2.1 Truth table


The truth table of A ∨ B:

158.3 Properties
The following properties apply to disjunction:

associativity: a ∨ (b ∨ c) ≡ (a ∨ b) ∨ c
commutativity: a ∨ b ≡ b ∨ a
distributivity: (a ∨ (b ∧ c)) ≡ ((a ∨ b) ∧ (a ∨ c))

(a ∨ (b ∨ c)) ≡ ((a ∨ b) ∨ (a ∨ c))

(a ∨ (b ↔ c)) ≡ ((a ∨ b) ↔ (a ∨ c))

idempotency: a ∨ a ≡ a
monotonicity: (a → b) → ((c ∨ a) → (c ∨ b))

(a → b) → ((a ∨ c) → (b ∨ c))

truth-preserving: The interpretation under which all variables are assigned a truth value of 'true' produces a
truth value of 'true' as a result of disjunction.
falsehood-preserving: The interpretation under which all variables are assigned a truth value of 'false' pro-
duces a truth value of 'false' as a result of disjunction.

158.4 Symbol
The mathematical symbol for logical disjunction varies in the literature. In addition to the word "or", and the formula
"Apq", the symbol "∨", deriving from the Latin word vel ("either", "or"), is commonly used for disjunction. For
example: "A ∨ B" is read as "A or B". Such a disjunction is false if both A and B are false. In all other cases it is
true.
All of the following are disjunctions:

A ∨ B
¬A ∨ B
A ∨ ¬B ∨ ¬C ∨ D ∨ ¬E.
The corresponding operation in set theory is the set-theoretic union.

158.5 Applications in computer science

OR logic gate

Operators corresponding to logical disjunction exist in most programming languages.

158.5.1 Bitwise operation

Disjunction is often used for bitwise operations. Examples:

0 or 0 = 0

0 or 1 = 1

1 or 0 = 1

1 or 1 = 1

1010 or 1100 = 1110

The or operator can be used to set bits in a bit field to 1, by or-ing the field with a constant field with the relevant bits
set to 1. For example, x = x | 0b00000001 will force the final bit to 1 while leaving other bits unchanged.

158.5.2 Logical operation

Many languages distinguish between bitwise and logical disjunction by providing two distinct operators; in languages
following C, bitwise disjunction is performed with the single pipe (|) and logical disjunction with the double pipe (||)
operators.
Logical disjunction is usually short-circuited; that is, if the first (left) operand evaluates to true then the second (right)
operand is not evaluated. The logical disjunction operator thus usually constitutes a sequence point.
In a parallel (concurrent) language, it is possible to short-circuit both sides: they are evaluated in parallel, and if one
terminates with value true, the other is interrupted. This operator is thus called the parallel or.
Although in most languages the type of a logical disjunction expression is boolean and thus can only have the value
true or false, in some (such as Python and JavaScript) the logical disjunction operator returns one of its operands: the
first operand if it evaluates to a true value, and the second operand otherwise.
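A short Python illustration of this operand-returning behaviour (the default-value idiom shown is a common pattern, not part of any particular API):

print(0 or "fallback")        # -> fallback   (first operand falsy, second returned)
print("value" or "fallback")  # -> value      (first operand truthy, returned as-is)

def greet(name=None):
    # A common idiom: use the second operand as a default when the first is falsy.
    return "Hello, " + (name or "world")

print(greet())       # Hello, world
print(greet("Ada"))  # Hello, Ada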

158.5.3 Constructive disjunction

The CurryHoward correspondence relates a constructivist form of disjunction to tagged union types.

158.6 Union
The membership of an element of a union set in set theory is defined in terms of a logical disjunction: x ∈ A ∪ B if
and only if (x ∈ A) ∨ (x ∈ B). Because of this, logical disjunction satisfies many of the same identities as set-theoretic
union, such as associativity, commutativity, distributivity, and de Morgan's laws.

158.7 Natural language


As with other notions formalized in mathematical logic, the meaning of the natural-language coordinating conjunction
or is closely related to, but different from, the logical or. For example, "Please ring me or send an email" likely means
do one or the other, but not both. On the other hand, "Her grades are so good that she's either very bright or studies
hard" does not exclude the possibility of both. In other words, in ordinary language or (even if used with either)
can mean the inclusive or exclusive or.

158.8 See also

158.9 Notes
George Boole, closely following analogy with ordinary mathematics, premised, as a necessary condition to
the definition of x + y, that x and y were mutually exclusive. Jevons, and practically all mathematical logi-
cians after him, advocated, on various grounds, the denition of logical addition in a form which does not
necessitate mutual exclusiveness.

158.10 References
[1] Józef Maria Bocheński (1959), A Précis of Mathematical Logic, translated by Otto Bird from the French and German
editions, Dordrecht, North Holland: D. Reidel, passim.

158.11 External links


Hazewinkel, Michiel, ed. (2001) [1994], Disjunction, Encyclopedia of Mathematics, Springer Science+Business
Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

Aloni, Maria. Disjunction. Stanford Encyclopedia of Philosophy.


Eric W. Weisstein. Disjunction. From MathWorldA Wolfram Web Resource
Chapter 159

Logical equality

For the corresponding concept in combinational logic, see XNOR gate.


Logical equality is a logical operator that corresponds to equality in Boolean algebra and to the logical biconditional

XNOR logic gate symbol

in propositional calculus. It gives the functional value true if both functional arguments have the same logical value,
and false if they are different.
It is customary practice in various applications, if not always technically precise, to indicate the operation of logical
equality on the logical operands x and y by any of the following forms:

x ↔ y    x ⇔ y    Exy
x EQ y    x = y

Some logicians, however, draw a firm distinction between a functional form, like those in the left column, which they
interpret as an application of a function to a pair of arguments and thus a mere indication that the value of the
compound expression depends on the values of the component expressions and an equational form, like those in
the right column, which they interpret as an assertion that the arguments have equal values, in other words, that the
functional value of the compound expression is true.
In mathematics, the plus sign "+" almost invariably indicates an operation that satisfies the axioms assigned to addition
in the type of algebraic structure that is known as a field. For Boolean algebra, this means that the logical operation
signified by "+" is not the same as the inclusive disjunction signified by "∨" but is actually equivalent to the logical
inequality operator signified by "≠", or what amounts to the same thing, the exclusive disjunction signified by "XOR"
or "⊕". Naturally, these variations in usage have caused some failures to communicate between mathematicians and
switching engineers over the years. At any rate, one has the following array of corresponding forms for the symbols
associated with logical inequality:


x + y    x ⊕ y    Jxy
x XOR y    x ≠ y

This explains why EQ is often called "XNOR" in the combinational logic of circuit engineers, since it is the nega-
tion of the XOR operation; NXOR is a less commonly used alternative.[1] Another rationalization of the admittedly
circuitous name XNOR is that one begins with the both false operator NOR and then adds the eXception or
both true.

159.1 Definition
Logical equality is an operation on two logical values, typically the values of two propositions, that produces a value
of true if and only if both operands are false or both operands are true.
The truth table of p EQ q (also written as p = q, p ↔ q, p ≡ q, or p == q) is as follows:

The Venn diagram of A EQ B (red part is true)

159.2 Alternative descriptions


The form (x = y) is equivalent to the form (x ∧ y) ∨ (¬x ∧ ¬y).
(x = y) = ¬(x ⊕ y) = (¬x ⊕ y) = (x ⊕ ¬y) = (x ∧ y) ∨ (¬x ∧ ¬y) = (x ∨ ¬y) ∧ (¬x ∨ y)
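These equivalences can be confirmed by enumerating the four valuations, for instance with a few lines of Python (here ^ is the exclusive-or on booleans):

for x in (False, True):
    for y in (False, True):
        eq = (x == y)
        assert eq == (not (x ^ y))
        assert eq == ((x and y) or (not x and not y))
        assert eq == ((x or not y) and (not x or y))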
For the operands x and y, the truth table of the logical equality operator is as follows:

159.3 See also


Boolean function

If and only if
Logical equivalence

Logical biconditional

Propositional calculus

159.4 References
[1] Keeton, Brian; Cavaness, Chuck; Friesen, Geoff (2001), Using Java 2, Que Publishing, p. 112, ISBN 9780789724687.

159.5 External links


Mathworld, XNOR
Chapter 160

Logical matrix

A logical matrix, binary matrix, relation matrix, Boolean matrix, or (0,1) matrix is a matrix with entries from
the Boolean domain B = {0, 1}. Such a matrix can be used to represent a binary relation between a pair of finite sets.

160.1 Matrix representation of a relation


If R is a binary relation between the finite indexed sets X and Y (so R ⊆ X × Y), then R can be represented by the
logical matrix M whose row and column indices index the elements of X and Y, respectively, such that the entries of
M are defined by:

Mi,j = 1 if (xi, yj) ∈ R,
Mi,j = 0 if (xi, yj) ∉ R.

In order to designate the row and column numbers of the matrix, the sets X and Y are indexed with positive integers:
i ranges from 1 to the cardinality (size) of X and j ranges from 1 to the cardinality of Y. See the entry on indexed sets
for more detail.

160.1.1 Example
The binary relation R on the set {1, 2, 3, 4} is defined so that aRb holds if and only if a divides b evenly, with no
remainder. For example, 2R4 holds because 2 divides 4 without leaving a remainder, but 3R4 does not hold because
when 3 divides 4 there is a remainder of 1. The following set is the set of pairs for which the relation R holds.

{(1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 4), (3, 3), (4, 4)}.

The corresponding representation as a Boolean matrix is:


⎡1 1 1 1⎤
⎢0 1 0 1⎥
⎢0 0 1 0⎥
⎣0 0 0 1⎦.

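A small Python sketch (function names are illustrative) builds exactly this matrix from the divisibility relation:

def relation_matrix(X, Y, relation):
    # Logical (0,1) matrix of a binary relation R ⊆ X × Y.
    return [[1 if relation(x, y) else 0 for y in Y] for x in X]

divides = lambda a, b: b % a == 0
for row in relation_matrix([1, 2, 3, 4], [1, 2, 3, 4], divides):
    print(row)
# [1, 1, 1, 1]
# [0, 1, 0, 1]
# [0, 0, 1, 0]
# [0, 0, 0, 1]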
160.2 Other examples


A permutation matrix is a (0,1)-matrix, all of whose columns and rows each have exactly one nonzero element.

A Costas array is a special case of a permutation matrix


An incidence matrix in combinatorics and finite geometry has ones to indicate incidence between points (or
vertices) and lines of a geometry, blocks of a block design, or edges of a graph (discrete mathematics)
A design matrix in analysis of variance is a (0,1)-matrix with constant row sums.
An adjacency matrix in graph theory is a matrix whose rows and columns represent the vertices and whose
entries represent the edges of the graph. The adjacency matrix of a simple, undirected graph is a binary
symmetric matrix with zero diagonal.
The biadjacency matrix of a simple, undirected bipartite graph is a (0,1)-matrix, and any (0,1)-matrix arises
in this way.
The prime factors of a list of m square-free, n-smooth numbers can be described as an m × π(n) (0,1)-matrix,
where π is the prime-counting function and aij is 1 if and only if the jth prime divides the ith number. This
representation is useful in the quadratic sieve factoring algorithm.
A bitmap image containing pixels in only two colors can be represented as a (0,1)-matrix in which the 0s
represent pixels of one color and the 1s represent pixels of the other color.
A binary matrix can be used to check the game rules in the game of Go [1]

160.3 Some properties


The matrix representation of the equality relation on a nite set is an identity matrix, that is, one whose entries on the
diagonal are all 1, while the others are all 0.
If the Boolean domain is viewed as a semiring, where addition corresponds to logical OR and multiplication to logical
AND, the matrix representation of the composition of two relations is equal to the matrix product of the matrix
representations of these relations. This product can be computed in expected time O(n²).[2]
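A direct, unoptimised Python rendering of this Boolean matrix product (it runs in O(n³) time rather than the expected-time bound cited above) shows how composition of relations reduces to OR-of-ANDs:

def bool_matrix_product(A, B):
    # Product over the semiring ({0,1}, OR, AND): if A represents R ⊆ X×Y and
    # B represents S ⊆ Y×Z, the result represents the composition of the two relations.
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[int(any(A[i][k] and B[k][j] for k in range(inner)))
             for j in range(cols)]
            for i in range(rows)]

R = [[1, 1], [0, 1]]
S = [[0, 1], [1, 0]]
print(bool_matrix_product(R, S))   # [[1, 1], [1, 0]]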
Frequently operations on binary matrices are defined in terms of modular arithmetic mod 2, that is, the elements
are treated as elements of the Galois field GF(2) = ℤ₂. They arise in a variety of representations and have a number
of more restricted special forms. They are applied e.g. in XOR-satisfiability.
The number of distinct m-by-n binary matrices is equal to 2^(mn), and is thus finite.

160.4 See also


List of matrices
Binatorix (a binary De Bruijn torus)
Redheffer matrix
Relation algebra

160.5 Notes
[1] Petersen, Kjeld (February 8, 2013). Binmatrix. Retrieved August 11, 2017.
[2] Patrick E. O'Neil, Elizabeth J. O'Neil (1973). A Fast Expected Time Algorithm for Boolean Matrix Multiplication and
Transitive Closure (PDF). Information and Control. 22 (2): 132138. doi:10.1016/s0019-9958(73)90228-3. The
algorithm relies on addition being idempotent, cf. p.134 (bottom).

160.6 References
Hogben, Leslie (2006), Handbook of Linear Algebra (Discrete Mathematics and Its Applications), Boca Raton:
Chapman & Hall/CRC, ISBN 978-1-58488-510-8, section 31.3, Binary Matrices
Kim, Ki Hang, Boolean Matrix Theory and Applications, ISBN 0-8247-1788-0

160.7 External links


Hazewinkel, Michiel, ed. (2001) [1994], Logical matrix, Encyclopedia of Mathematics, Springer Science+Business
Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Chapter 161

Logical NOR

This article is about NOR in the logical sense. For the electronic gate, see NOR gate. For other uses, see Nor.
Peirce arrow redirects here. It is not to be confused with Pierce-Arrow, an automobile manufacturer.
In boolean logic, logical nor or joint denial is a truth-functional operator which produces a result that is the negation

Venn diagram of A ↓ B
(the NOR part in red)

of logical or. That is, a sentence of the form (p NOR q) is true precisely when neither p nor q is true, i.e. when both
of p and q are false. In grammar, nor is a coordinating conjunction.
The NOR operator is also known as Peirce's arrow; Charles Sanders Peirce introduced the symbol ↓ for it,[1] and
demonstrated that the logical NOR is completely expressible: by combining uses of the logical NOR it is possible
to express any logical operation on two variables. Thus, as with its dual, the NAND operator (a.k.a. the Sheffer
stroke, symbolized as either | or /), NOR can be used by itself, without any other logical operator, to constitute a
logical formal system (making NOR functionally complete). It is also known as Quine's dagger (his symbol was ↓),
the ampheck (from Greek ἀμφήκης, "cutting both ways") by Peirce,[2] or neither-nor.


One way of expressing p NOR q is p ∨ q with a bar drawn over it, where the symbol ∨ signifies OR and the bar signifies the negation of the
expression under it: in essence, simply ¬(p ∨ q). Other ways of expressing p NOR q are Xpq, and p + q with a bar drawn over it.
The computer used in the spacecraft that first carried humans to the moon, the Apollo Guidance Computer, was
constructed entirely using NOR gates with three inputs.[3]

161.1 Definition
The NOR operation is a logical operation on two logical values, typically the values of two propositions, that produces
a value of true if and only if both operands are false. In other words, it produces a value of false if and only if at least
one operand is true.

161.1.1 Truth table


The truth table of A NOR B (also written as A ↓ B) is as follows:

161.2 Properties
Logical NOR does not possess any of the five qualities (truth-preserving, false-preserving, linear, monotonic, self-
dual) required to be absent from at least one member of a set of functionally complete operators. Thus, the set
containing only NOR suffices as a complete set.

161.3 Introduction, elimination, and equivalencies


NOR has the interesting feature that all other logical operators can be expressed by interlaced NOR operations. The
logical NAND operator also has this ability.
The logical NOR is the negation of the disjunction:
Expressed in terms of NOR ↓, the usual operators of propositional logic are:
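As a concrete illustration of this functional completeness, the short Python sketch below (names are illustrative) builds the usual connectives from NOR alone and checks them exhaustively:

def NOR(p, q):
    return not (p or q)

def NOT(p):        return NOR(p, p)
def OR(p, q):      return NOR(NOR(p, q), NOR(p, q))
def AND(p, q):     return NOR(NOR(p, p), NOR(q, q))
def IMPLIES(p, q): return NOR(NOR(NOR(p, p), q), NOR(NOR(p, p), q))  # ¬p ∨ q via NOR only

for p in (False, True):
    for q in (False, True):
        assert NOT(p) == (not p)
        assert OR(p, q) == (p or q)
        assert AND(p, q) == (p and q)
        assert IMPLIES(p, q) == ((not p) or q)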

161.4 See also

161.5 References
[1] Hans Kleine Büning; Theodor Lettmann (1999). Propositional logic: deduction and algorithms. Cambridge University
Press. p. 2. ISBN 978-0-521-63017-7.

[2] C.S. Peirce, CP 4.264

[3] Hall, Eldon C. (1996), Journey to the Moon: The History of the Apollo Guidance Computer, Reston, Virginia, USA: AIAA,
p. 196, ISBN 1-56347-185-X
Chapter 162

Logical truth

Logical truth is one of the most fundamental concepts in logic, and there are different theories on its nature. A
logical truth is a statement which is true, and remains true under all reinterpretations of its components other than its
logical constants. It is a type of analytic statement. All of philosophical logic can be thought of as providing accounts
of the nature of logical truth, as well as logical consequence.[1]
Logical truths (including tautologies) are truths which are considered to be necessarily true. This is to say that they
are considered to be such that they could not be untrue and no situation could arise which would cause us to reject a
logical truth. It must be true in every sense of intuition, practices, and bodies of beliefs. However, it is not universally
agreed that there are any statements which are necessarily true.
A logical truth is considered by some philosophers to be a statement which is true in all possible worlds. This is
contrasted with facts (which may also be referred to as contingent claims or synthetic claims) which are true in this
world, as it has historically unfolded, but which is not true in at least one possible world, as it might have unfolded.
The proposition If p and q, then p and the proposition All married people are married are logical truths because
they are true due to their inherent structure and not because of any facts of the world. Later, with the rise of formal
logic a logical truth was considered to be a statement which is true under all possible interpretations.
The existence of logical truths has been put forward by rationalist philosophers as an objection to empiricism because
they hold that it is impossible to account for our knowledge of logical truths on empiricist grounds. Empiricists
commonly respond to this objection by arguing that logical truths (which they usually deem to be mere tautologies),
are analytic and thus do not purport to describe the world.

162.1 Logical truths and analytic truths

Main article: Analyticsynthetic distinction

Logical truths, being analytic statements, do not contain any information about any matters of fact. Other than logical
truths, there is also a second class of analytic statements, typified by "no bachelor is married". The characteristic
of such a statement is that it can be turned into a logical truth by substituting synonyms for synonyms salva veritate.
No bachelor is married can be turned into no unmarried man is married by substituting unmarried man for its
synonym bachelor.
In his essay Two Dogmas of Empiricism, the philosopher W. V. O. Quine called into question the distinction between
analytic and synthetic statements. It was this second class of analytic statements that caused him to note that the
concept of analyticity itself stands in need of clarification, because it seems to depend on the concept of synonymy,
which stands in need of clarification. In his conclusion, Quine rejects that logical truths are necessary truths. Instead
he posits that the truth-value of any statement can be changed, including logical truths, given a re-evaluation of the
truth-values of every other statement in one's complete theory.


162.2 Truth values and tautologies


Main article: Tautology (logic)

Considering different interpretations of the same statement leads to the notion of truth value. The simplest approach
to truth values means that the statement may be true in one case, but false in another. In one sense of the term
tautology, it is any type of formula or proposition which turns out to be true under any possible interpretation of
its terms (may also be called a valuation or assignment depending upon the context). This is synonymous to logical
truth.
However, the term tautology is also commonly used to refer to what could more specifically be called truth-
functional tautologies. Whereas a tautology or logical truth is true solely because of the logical terms it contains
in general (e.g. "every", "some", and is), a truth-functional tautology is true because of the logical terms it contains
which are logical connectives (e.g. "or", "and", and "nor"). Not all logical truths are tautologies of such a kind.

162.3 Logical truth and logical constants


Main article: Logical constant

Logical constants, including logical connectives and quantifiers, can all be reduced conceptually to logical truth. For
instance, two statements or more are logically incompatible if, and only if their conjunction is logically false. One
statement logically implies another when it is logically incompatible with the negation of the other. A statement is
logically true if, and only if its opposite is logically false. The opposite statements must contradict one another. In
this way all logical connectives can be expressed in terms of preserving logical truth. The logical form of a sentence
is determined by its semantic or syntactic structure and by the placement of logical constants. Logical constants
determine whether a statement is a logical truth when they are combined with a language that limits its meaning.
Therefore, until it is determined how to make a distinction between all logical constants regardless of their language,
it is impossible to know the complete truth of a statement or argument.[2]

162.4 Logical truth and rules of inference


The concept of logical truth is closely connected to the concept of a rule of inference.[3]

162.5 Logical truth and logical positivism


Logical positivism was a movement in the 20th century and was followed to a great extent in Europe and the United
States. It is a structured method of determining how valid the knowledge is. It introduced mathematics and the
natural sciences into the field of philosophy. It is also known as scientific empiricism. Logical positivism considers
philosophy as an analytical inquiry. Philosophy was deciphered as an activity rather than a theory. Logical positivists
worked to explain the language of science by showing that scientific theories could be broken down to logical truths
along with the experience of the five senses. The concepts created in this movement are very closely followed today in
the West. Logical positivism was a way to decipher whether a statement was truly a logical truth by means of relating
it to a scientific theory or mathematics. It determined the validity of the statement as well as gave it the rank of a
logical truth or a false truth.[4]

162.6 Non-classical logics


Main article: Non-classical logic

Non-classical logic is the name given to formal systems which differ in a significant way from standard logical systems
such as propositional and predicate logic. There are several ways in which this is done, including by way of extensions,

deviations, and variations. The aim of these departures is to make it possible to construct different models of logical
consequence and logical truth.[5]

162.7 See also


Contradiction

False (logic)

Satisability
Tautology (logic) (for symbolism of logical truth)

Theorem
Validity

162.8 References
[1] Quine, Willard Van Orman, Philosophy of logic

[2] MacFarlane, J. (May 16, 2005). Logical Constants.

[3] Alfred Ayer, Language, Truth, and Logic

[4] Logical Positivism. Columbia Electronic Encyclopedia. 2016.

[5] Theodore Sider, Logic for philosophy

162.9 External links


Logical Truth. Stanford Encyclopedia of Philosophy.

Logical truth at the Indiana Philosophy Ontology Project


Logical truth at PhilPapers
Chapter 163

Lupanov representation

Lupanov's (k, s)-representation, named after Oleg Lupanov, is a way of representing Boolean circuits so as to show
that the reciprocal of the Shannon effect holds. Shannon had shown that almost all Boolean functions of n variables need
a circuit of size at least 2ⁿ/n. The reciprocal is that:

All Boolean functions of n variables can be computed with a circuit of at most 2ⁿ/n + o(2ⁿ/n)
gates.

163.1 Definition
The idea is to represent the values of a Boolean function f in a table of 2^k rows, representing the possible values of
the k first variables x1, ..., xk, and 2^(n−k) columns representing the values of the other variables.
Let A1, ..., Ap be a partition of the rows of this table such that for i < p, |Ai| = s and |Ap| = s′ ≤ s. Let fi(x) = f(x)
if x ∈ Ai, and fi(x) = 0 otherwise.
Moreover, let Bi,w be the set of the columns whose intersection with Ai is w.

163.2 See also


Course material describing the Lupanov representation
An additional example from the same course material

Chapter 164

Maharam algebra

In mathematics, a Maharam algebra is a complete Boolean algebra with a continuous submeasure. They were
introduced by Maharam (1947).

164.1 Definitions
A continuous submeasure or Maharam submeasure on a Boolean algebra is a real-valued function m such that

m(0) = 0, m(1) = 1, m(x) > 0 if x ≠ 0.

If x < y then m(x) < m(y)


m(x ∨ y) ≤ m(x) + m(y)

If xn is a decreasing sequence with intersection 0, then the sequence m(xn) has limit 0.

A Maharam algebra is a complete Boolean algebra with a continuous submeasure.

164.2 Examples
Every probability measure is a continuous submeasure, so as the corresponding Boolean algebra of measurable sets
modulo measure zero sets is complete it is a Maharam algebra.

164.3 References
Balcar, Bohuslav; Jech, Thomas (2006), Weak distributivity, a problem of von Neumann and the mystery of
measurability, Bulletin of Symbolic Logic, 12 (2): 241266, MR 2223923, Zbl 1120.03028, doi:10.2178/bsl/1146620061
Maharam, Dorothy (1947), An algebraic characterization of measure algebras, Annals of Mathematics (2),
48: 154167, JSTOR 1969222, MR 0018718, Zbl 0029.20401, doi:10.2307/1969222
Velickovic, Boban (2005), ccc forcing and splitting reals, Israel J. Math., 147: 209220, MR 2166361, Zbl
1118.03046, doi:10.1007/BF02785365

Chapter 165

Majority function

In Boolean logic, the majority function (also called the median operator) is a function from n inputs to one out-
put. The value of the operation is false when n/2 or more arguments are false, and true otherwise. Alternatively,
representing true values as 1 and false values as 0, we may use the formula

Majority(p1, ..., pn) = ⌊ 1/2 + ((p1 + ··· + pn) − 1/2) / n ⌋.

The "1/2 in the formula serves to break ties in favor of zeros when n is even. If the term "1/2 is omitted, the
formula can be used for a function that breaks ties in favor of ones.
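A direct Python transcription of the formula (names are illustrative), together with the full-adder carry use discussed in the next section:

from math import floor

def majority(*bits):
    # Floor formula above: ties on an even number of inputs go to 0
    # because of the -1/2 term.
    n = len(bits)
    return floor(1/2 + (sum(bits) - 1/2) / n)

print(majority(1, 0, 1))      # 1
print(majority(1, 1, 0, 0))   # 0  (tie broken in favor of zeros)

# Carry output of a full adder = 3-input majority of a, b and the incoming carry.
a, b, carry_in = 1, 1, 0
print(majority(a, b, carry_in))   # 1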

165.1 Boolean circuits

Four bit majority circuit

A majority gate is a logical gate used in circuit complexity and other applications of Boolean circuits. A majority gate
returns true if and only if more than 50% of its inputs are true.
For instance, in a full adder, the carry output is found by applying a majority function to the three inputs, although
frequently this part of the adder is broken down into several simpler logical gates.


Many systems have triple modular redundancy; they use the majority function for majority logic decoding to imple-
ment error correction.
A major result in circuit complexity asserts that the majority function cannot be computed by AC0 circuits of subex-
ponential size.

165.2 Monotone formulae for majority


For n = 1 the median operator is just the unary identity operation x. For n = 3 the ternary median operator can be expressed using conjunction and disjunction as xy + yz + zx. Remarkably, this expression denotes the same operation independently of whether the symbol + is interpreted as inclusive "or" or exclusive "or".
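This can be verified exhaustively; the short Python check below (illustrative, not part of the article) confirms that xy + yz + zx computes the 3-input majority regardless of whether + is read as inclusive or exclusive or.

from itertools import product

for x, y, z in product((0, 1), repeat=3):
    terms = (x & y, y & z, z & x)
    with_inclusive_or = terms[0] | terms[1] | terms[2]
    with_exclusive_or = terms[0] ^ terms[1] ^ terms[2]
    majority = int(x + y + z >= 2)
    assert with_inclusive_or == with_exclusive_or == majority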
For an arbitrary n there exists a monotone formula for majority of size O(n^5.3).[1] This is proved using the probabilistic method; thus, this formula is non-constructive. However, one can obtain an explicit formula for majority of polynomial size using a sorting network of Ajtai, Komlós, and Szemerédi.
The majority function produces 1 when more than half of the inputs are 1; it produces 0 when more than half the inputs are 0. Most applications deliberately force an odd number of inputs so they don't have to deal with the question of what happens when exactly half the inputs are 0 and exactly half the inputs are 1. The few systems that calculate the majority function on an even number of inputs are often biased towards 0: they produce 0 when exactly half the inputs are 0; for example, a 4-input majority gate has a 0 output only when two or more 0s appear at its inputs.[2] In a few systems, a 4-input majority network randomly chooses 1 or 0 when exactly two 0s appear at its inputs.[3]

165.3 Properties
For any x, y, and z, the ternary median operator ⟨x, y, z⟩ satisfies the following equations.

⟨x, y, y⟩ = y

⟨x, y, z⟩ = ⟨z, x, y⟩

⟨x, y, z⟩ = ⟨x, z, y⟩

⟨⟨x, w, y⟩, w, z⟩ = ⟨x, w, ⟨y, w, z⟩⟩

An abstract system satisfying these as axioms is a median algebra.

165.4 Notes
[1] Valiant, Leslie (1984). Short monotone formulae for the majority function. Journal of Algorithms. 5 (3): 363366.
doi:10.1016/0196-6774(84)90016-6.

[2] Peterson, William Wesley; Weldon, E.J. (1972). Error-correcting Codes. MIT Press. ISBN 9780262160391.

[3] Chaouiya, Claudine; Ourrad, Ouerdia; Lima, Ricardo (July 2013). Majority Rules with Random Tie-Breaking in Boolean
Gene Regulatory Networks. PLoS ONE. 8 (7). Public Library of Science. doi:10.1371/journal.pone.0069626.

165.5 References

Knuth, Donald E. (2008). Introduction to combinatorial algorithms and Boolean functions. The Art of Com-
puter Programming. 4a. Upper Saddle River, NJ: Addison-Wesley. pp. 6474. ISBN 0-321-53496-4.

165.6 See also


Media related to Majority functions at Wikimedia Commons

Boolean algebra (structure)


Boolean algebras canonically dened

BoyerMoore majority vote algorithm

Majority problem (cellular automaton)


Chapter 166

Veitch chart

The Karnaugh map (KM or K-map) is a method of simplifying Boolean algebra expressions. Maurice Karnaugh introduced it in 1953[1] as a refinement of Edward Veitch's 1952 Veitch chart,[2][3] which actually was a rediscovery of Allan Marquand's 1881 logical diagram[4] aka Marquand diagram[3] but with a focus now set on its utility for switching circuits.[3] Veitch charts are therefore also known as Marquand–Veitch diagrams,[3] and Karnaugh maps as Karnaugh–Veitch maps (KV maps).
The Karnaugh map reduces the need for extensive calculations by taking advantage of humans' pattern-recognition capability.[1] It also permits the rapid identification and elimination of potential race conditions.
The required Boolean results are transferred from a truth table onto a two-dimensional grid where, in Karnaugh maps,
the cells are ordered in Gray code,[5][3] and each cell position represents one combination of input conditions, while
each cell value represents the corresponding output value. Optimal groups of 1s or 0s are identified, which represent
the terms of a canonical form of the logic in the original truth table.[6] These terms can be used to write a minimal
Boolean expression representing the required logic.
Karnaugh maps are used to simplify real-world logic requirements so that they can be implemented using a minimum
number of physical logic gates. A sum-of-products expression can always be implemented using AND gates feeding
into an OR gate, and a product-of-sums expression leads to OR gates feeding an AND gate.[7] Karnaugh maps can
also be used to simplify logic expressions in software design. Boolean conditions, as used for example in conditional
statements, can get very complicated, which makes the code difficult to read and to maintain. Once minimised,
canonical sum-of-products and product-of-sums expressions can be implemented directly using AND and OR logic
operators.[8]

166.1 Example
Karnaugh maps are used to facilitate the simplification of Boolean algebra functions. For example, consider the Boolean function described by the following truth table.
Following are two different notations describing the same function in unsimplified Boolean algebra, using the Boolean variables A, B, C, D, and their inverses.


f(A, B, C, D) = ∑ m_i, i ∈ {6, 8, 9, 10, 11, 12, 13, 14}, where m_i are the minterms to map (i.e., rows that have output 1 in the truth table).

f(A, B, C, D) = ∏ M_i, i ∈ {0, 1, 2, 3, 4, 5, 7, 15}, where M_i are the maxterms to map (i.e., rows that have output 0 in the truth table).
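For readers who prefer to see the example computationally, the following Python sketch (illustrative names, not part of the article) rebuilds the function from its minterm list and checks it against the minimal sum-of-products F = AC' + AB' + BCD' derived in the Solution section below.

from itertools import product

MINTERMS = {6, 8, 9, 10, 11, 12, 13, 14}   # rows where f = 1 in the example

def f_from_minterms(a, b, c, d):
    # Read ABCD as a binary row number and look it up in the minterm list.
    row = (a << 3) | (b << 2) | (c << 1) | d
    return int(row in MINTERMS)

def f_simplified(a, b, c, d):
    # Minimal sum-of-products found with the K-map: AC' + AB' + BCD'.
    # The final "& 1" keeps the result a single bit, since ~ on a Python int
    # yields a negative number rather than a one-bit complement.
    return ((a & ~c) | (a & ~b) | (b & c & ~d)) & 1

for a, b, c, d in product((0, 1), repeat=4):
    assert f_from_minterms(a, b, c, d) == f_simplified(a, b, c, d)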

166.1.1 Karnaugh map

In the example above, the four input variables can be combined in 16 different ways, so the truth table has 16 rows, and the Karnaugh map has 16 positions. The Karnaugh map is therefore arranged in a 4 × 4 grid.


[Figure: an example Karnaugh map for f(A,B,C,D) = ∑m(6, 8, 9, 10, 11, 12, 13, 14), with F = AC' + AB' + BCD' + AD' and F = (A + B)(A + C)(B' + C' + D')(A + D'). The image actually shows two Karnaugh maps: one for the function, using minterms (colored rectangles), and one for its complement, using maxterms (gray rectangles). In the image, E() signifies a sum of minterms, denoted in the article as ∑ m_i.]

The row and column indices (shown across the top, and down the left side of the Karnaugh map) are ordered in Gray code rather than binary numerical order. Gray code ensures that only one variable changes between each pair of adjacent cells. Each cell of the completed Karnaugh map contains a binary digit representing the function's output for that combination of inputs.
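The reflected Gray code used for these labels is easy to generate; the following Python sketch (illustrative, not part of the article) produces the 2-bit sequence 00, 01, 11, 10 and checks that adjacent labels, including the wrap-around pair, differ in exactly one bit.

def gray_code(bits):
    # Reflected Gray code: the i-th codeword is i XOR (i >> 1).
    return [format(i ^ (i >> 1), '0{}b'.format(bits)) for i in range(2 ** bits)]

labels = gray_code(2)                       # ['00', '01', '11', '10']
for a, b in zip(labels, labels[1:] + labels[:1]):
    assert sum(x != y for x, y in zip(a, b)) == 1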
After the Karnaugh map has been constructed, it is used to find one of the simplest possible forms (a canonical form) for the information in the truth table. Adjacent 1s in the Karnaugh map represent opportunities to simplify the expression. The minterms (minimal terms) for the final expression are found by encircling groups of 1s in the map. Minterm groups must be rectangular and must have an area that is a power of two (i.e., 1, 2, 4, 8). Minterm rectangles should be as large as possible without containing any 0s. Groups may overlap in order to make each one larger. The optimal groupings in the example below are marked by the green, red and blue lines, and the red and green groups overlap. The red group is a 2 × 2 square, the green group is a 4 × 1 rectangle, and the overlap area is indicated in brown.
The cells are often denoted by a shorthand which describes the logical value of the inputs that the cell covers. For example, AD would mean a cell which covers the 2 × 2 area where A and D are true, i.e. the cells numbered 13, 9, 15, 11 in the diagram above. On the other hand, AD' would mean the cells where A is true and D is false (that is, D' is true).

[Figure: K-map drawn on a torus, and in a plane. The dot-marked cells are adjacent.]

[Figure: K-map construction. Each cell position holds the row number 0-15 obtained by reading ABCD as a binary number. Instead of containing output values, this diagram shows the numbers of outputs, therefore it is not a Karnaugh map.]
The grid is toroidally connected, which means that rectangular groups can wrap across the edges (see picture). Cells on the extreme right are actually 'adjacent' to those on the far left; similarly, so are those at the very top and those at the bottom. Therefore, AD' can be a valid term: it includes cells 12 and 8 at the top, and wraps to the bottom to include cells 10 and 14, as is B'D', which includes the four corners.

In three dimensions, one can bend a rectangle into a torus.

166.1.2 Solution
Once the Karnaugh map has been constructed and the adjacent 1s linked by rectangular and square boxes, the algebraic minterms can be found by examining which variables stay the same within each box.
For the red grouping:

A is the same and is equal to 1 throughout the box, therefore it should be included in the algebraic representation of the red minterm.
B does not maintain the same state (it shifts from 1 to 0), and should therefore be excluded.
C does not change. It is always 0, so its complement, NOT-C, should be included. Thus, C' should be included.
D changes, so it is excluded.

Thus the first minterm in the Boolean sum-of-products expression is AC'.


For the green grouping, A and B maintain the same state, while C and D change. B is 0 and has to be negated before it can be included. The second term is therefore AB'. Note that it is acceptable that the green grouping overlaps with the red one.
In the same way, the blue grouping gives the term BCD'.
The solutions of each grouping are combined: the normal form of the circuit is AC' + AB' + BCD'.
Thus the Karnaugh map has guided a simplification of

f(A, B, C, D) = A'BCD' + AB'C'D' + AB'C'D + AB'CD' + AB'CD + ABC'D' + ABC'D + ABCD'
             = AC' + AB' + BCD'

[Figure: diagram showing two K-maps for f(A,B,C,D) = ∑m(6, 8, 9, 10, 11, 12, 13, 14), with F = AC' + AB' + BCD' and F = (A + B)(A + C)(B' + C' + D'). The K-map for the function f(A, B, C, D) is shown as colored rectangles which correspond to minterms. The brown region is an overlap of the red 2 × 2 square and the green 4 × 1 rectangle. The K-map for the inverse of f is shown as gray rectangles, which correspond to maxterms.]

It would also have been possible to derive this simplication by carefully applying the axioms of boolean algebra, but
the time it takes to do that grows exponentially with the number of terms.

166.1.3 Inverse
The inverse of a function is solved in the same way by grouping the 0s instead.
The three terms to cover the inverse are all shown with grey boxes with different colored borders:

brown: A'B'

gold: A'C'

blue: BCD

This yields the inverse:

f'(A, B, C, D) = A'B' + A'C' + BCD

Through the use of De Morgan's laws, the product of sums can be determined:

f(A, B, C, D) = (A'B' + A'C' + BCD)'
f(A, B, C, D) = (A + B)(A + C)(B' + C' + D')

166.1.4 Don't cares

[Figure: K-map for f(A,B,C,D) = ∑m(6, 8, 9, 10, 11, 12, 13, 14) with the value of f for ABCD = 1111 replaced by a don't care, giving F = A + BCD' and F = (A + B)(A + C)(A + D'). This removes the green term completely and allows the red term to be larger. It also allows the blue inverse term to shift and become larger.]

Karnaugh maps also allow easy minimizations of functions whose truth tables include "don't care" conditions. A
don't care condition is a combination of inputs for which the designer doesn't care what the output is. Therefore,
don't care conditions can either be included in or excluded from any rectangular group, whichever makes it larger.
They are usually indicated on the map with a dash or X.
The example on the right is the same as the example above but with the value of f(1,1,1,1) replaced by a don't care.
This allows the red term to expand all the way down and, thus, removes the green term completely.
This yields the new minimum equation:

f(A, B, C, D) = A + BCD'

Note that the first term is just A, not AC'. In this case, the don't care has dropped a term (the green rectangle), simplified another (the red one), and removed the race hazard (removing the yellow term as shown in the following section on race hazards).
The inverse case is simplified as follows:

f'(A, B, C, D) = A'B' + A'C' + A'D

166.2 Race hazards

166.2.1 Elimination
Karnaugh maps are useful for detecting and eliminating race conditions. Race hazards are very easy to spot using a
Karnaugh map, because a race condition may exist when moving between any pair of adjacent, but disjoint, regions
circumscribed on the map. However, because of the nature of Gray coding, "adjacent" has a special definition explained above - we're in fact moving on a torus, rather than a rectangle, wrapping around the top, bottom, and the sides.

In the example above, a potential race condition exists when C is 1 and D is 0, A is 1, and B changes from 1 to 0 (moving from the blue state to the green state). For this case, the output is defined to remain unchanged at 1, but because this transition is not covered by a specific term in the equation, a potential for a glitch (a momentary transition of the output to 0) exists.
There is a second potential glitch in the same example that is more difficult to spot: when D is 0 and A and B are both 1, with C changing from 1 to 0 (moving from the blue state to the red state). In this case the glitch wraps around from the top of the map to the bottom.

Whether glitches will actually occur depends on the physical nature of the implementation, and whether we need to
worry about it depends on the application. In clocked logic, it is enough that the logic settles on the desired value in
time to meet the timing deadline. In our example, we are not considering clocked logic.
In our case, an additional term of AD' would eliminate the potential race hazard, bridging between the green and blue output states or blue and red output states: this is shown as the yellow region (which wraps around from the bottom to the top of the right half) in the adjacent diagram.
The term is redundant in terms of the static logic of the system, but such redundant, or consensus, terms are often needed to assure race-free dynamic performance.
Similarly, an additional term of A'D must be added to the inverse to eliminate another potential race hazard. Applying De Morgan's laws creates another product-of-sums expression for f, but with a new factor of (A + D').

166.2.2 2-variable map examples


The following are all the possible 2-variable, 2 × 2 Karnaugh maps. Listed with each is the minterms as a function of ∑m() and the race-hazard-free (see previous section) minimum equation. A minterm is defined as an expression that gives the most minimal form of expression of the mapped variables. All possible horizontal and vertical interconnected blocks can be formed. These blocks must be of the size of the powers of 2 (1, 2, 4, 8, 16, 32, ...).

[Figure: K-map for f(A,B,C,D) = ∑m(6, 8, 9, 10, 11, 12, 13, 14), with F = AC' + AB' + BCD' and F = (A + B)(A + C)(B' + C' + D'). Race hazards are present in this diagram.]

These expressions create a minimal logical mapping of the minimal logic variable expressions for the binary expressions to be mapped. Here are all the blocks with one field.
A block can be continued across the bottom, top, left, or right of the chart. That can even wrap beyond the edge of the chart for variable minimization. This is because each logic variable corresponds to each vertical column and horizontal row. A visualization of the k-map can be considered cylindrical. The fields at edges on the left and right are adjacent, and the top and bottom are adjacent. K-maps for 4 variables must be depicted as a donut or torus shape. The four corners of the square drawn by the k-map are adjacent. Still more complex maps are needed for 5 variables and more.

[Figure: the above diagram with consensus terms added to avoid race hazards, giving F = AC' + AB' + BCD' + AD' and F = (A + B)(A + C)(B' + C' + D')(A + D').]

[Figures: the sixteen possible 2-variable, 2 × 2 Karnaugh maps, one per subset of minterms. For each, the minterm list and the minimum equations for the function (K) and its inverse (K') are:]

f(A,B) = ∑m(); K = 0; K' = 1

f(A,B) = ∑m(1); K = A'B'; K' = A + B

f(A,B) = ∑m(2); K = AB'; K' = A' + B

f(A,B) = ∑m(3); K = A'B; K' = A + B'

f(A,B) = ∑m(4); K = AB; K' = A' + B'

f(A,B) = ∑m(1,2); K = B'; K' = B

f(A,B) = ∑m(1,3); K = A'; K' = A

f(A,B) = ∑m(1,4); K = A'B' + AB; K' = AB' + A'B

f(A,B) = ∑m(2,3); K = AB' + A'B; K' = A'B' + AB

f(A,B) = ∑m(2,4); K = A; K' = A'

f(A,B) = ∑m(3,4); K = B; K' = B'

f(A,B) = ∑m(1,2,3); K = A' + B'; K' = AB

f(A,B) = ∑m(1,2,4); K = A + B'; K' = A'B

f(A,B) = ∑m(1,3,4); K = A' + B; K' = AB'

f(A,B) = ∑m(2,3,4); K = A + B; K' = A'B'

f(A,B) = ∑m(1,2,3,4); K = 1; K' = 0

166.3 Other graphical methods


Alternative graphical minimization methods include:

Marquand diagram (1881) by Allan Marquand (1853–1924)[4][3]

Harvard minimizing chart (1951) by Howard H. Aiken and Martha L. Whitehouse of the Harvard Computation Laboratory[9][1][10][11]

Veitch chart (1952) by Edward Veitch (1924–2013)[2][3]

Svoboda's graphical aids (1956) and triadic map by Antonín Svoboda (1907–1980)[12][13][14][15]

Händler circle graph (aka Händlerscher Kreisgraph, Kreisgraph nach Händler, Händler-Kreisgraph, Händler-Diagramm, Minimisierungsgraph [sic]) (1958) by Wolfgang Händler (1920–1998)[16][17][18][14][19][20][21][22][23]

Graph method (1965) by Herbert Kortum (1907–1979)[24][25][26][27][28][29]

166.4 See also


Circuit minimization
Espresso heuristic logic minimizer
List of Boolean algebra topics
QuineMcCluskey algorithm
Algebraic normal form (ANF)
Ring sum normal form (RSNF)

Zhegalkin normal form


Reed-Muller expansion
Venn diagram
Punnett square (a similar diagram in biology)

166.5 References
[1] Karnaugh, Maurice (November 1953) [1953-04-23, 1953-03-17]. The Map Method for Synthesis of Combinational Logic
Circuits (PDF). Transactions of the American Institute of Electrical Engineers part I. 72 (9): 593599. doi:10.1109/TCE.1953.6371932.
Paper 53-217. Archived (PDF) from the original on 2017-04-16. Retrieved 2017-04-16. (NB. Also contains a short review
by Samuel H. Caldwell.)

[2] Veitch, Edward W. (1952-05-03) [1952-05-02]. A Chart Method for Simplifying Truth Functions. ACM Annual Con-
ference/Annual Meeting: Proceedings of the 1952 ACM Annual Meeting (Pittsburg). New York, USA: ACM: 127133.
doi:10.1145/609784.609801.

[3] Brown, Frank Markham (2012) [2003, 1990]. Boolean Reasoning - The Logic of Boolean Equations (reissue of 2nd ed.).
Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-42785-0.

[4] Marquand, Allan (1881). XXXIII: On Logical Diagrams for n terms. The London, Edinburgh, and Dublin Philosophical
Magazine and Journal of Science. 5. 12 (75): 266270. doi:10.1080/14786448108627104. Retrieved 2017-05-15. (NB.
Quite many secondary sources erroneously cite this work as A logical diagram for n terms or On a logical diagram for
n terms.)

[5] Wakerly, John F. (1994). Digital Design: Principles & Practices. New Jersey, USA: Prentice Hall. pp. 222, 4849. ISBN
0-13-211459-3. (NB. The two page sections taken together say that K-maps are labeled with Gray code. The rst section
says that they are labeled with a code that changes only one bit between entries and the second section says that such a code
is called Gray code.)

[6] Belton, David (April 1998). Karnaugh Maps Rules of Simplication. Archived from the original on 2017-04-18.
Retrieved 2009-05-30.

[7] Dodge, Nathan B. (September 2015). Simplifying Logic Circuits with Karnaugh Maps (PDF). The University of Texas
at Dallas, Erik Jonsson School of Engineering and Computer Science. Archived (PDF) from the original on 2017-04-18.
Retrieved 2017-04-18.

[8] Cook, Aaron. Using Karnaugh Maps to Simplify Code. Quantum Rarity. Archived from the original on 2017-04-18.
Retrieved 2012-10-07.

[9] Aiken, Howard H.; Blaauw, Gerrit; Burkhart, William; Burns, Robert J.; Cali, Lloyd; Canepa, Michele; Ciampa, Carmela
M.; Coolidge, Jr., Charles A.; Fucarile, Joseph R.; Gadd, Jr., J. Orten; Gucker, Frank F.; Harr, John A.; Hawkins, Robert
L.; Hayes, Miles V.; Hofheimer, Richard; Hulme, William F.; Jennings, Betty L.; Johnson, Stanley A.; Kalin, Theodore;
Kincaid, Marshall; Lucchini, E. Edward; Minty, William; Moore, Benjamin L.; Remmes, Joseph; Rinn, Robert J.; Roche,
John W.; Sanbord, Jacquelin; Semon, Warren L.; Singer, Theodore; Smith, Dexter; Smith, Leonard; Strong, Peter F.;
Thomas, Helene V.; Wang, An; Whitehouse, Martha L.; Wilkins, Holly B.; Wilkins, Robert E.; Woo, Way Dong; Lit-
tle, Elbert P.; McDowell, M. Scudder (1952) [January 1951]. Chapter V: Minimizing charts. Synthesis of electronic
computing and control circuits (second printing, revised ed.). Write-Patterson Air Force Base: Harvard University Press
(Cambridge, Massachusetts, USA) / Georey Cumberlege Oxford University Press (London). pp. preface, 5067. Re-
trieved 2017-04-16. [] Martha Whitehouse constructed the minimizing charts used so profusely throughout this book,
and in addition prepared minimizing charts of seven and eight variables for experimental purposes. [] Hence, the present
writer is obliged to record that the general algebraic approach, the switching function, the vacuum-tube operator, and the
minimizing chart are his proposals, and that he is responsible for their inclusion herein. [] (NB. Work commenced in
April 1948.)

[10] Phister, Jr., Montgomery (1959) [December 1958]. Logical design of digital computers. New York, USA: John Wiley &
Sons Inc. pp. 7583. ISBN 0471688053.

[11] Curtis, H. Allen (1962). A new approach to the design of switching circuits. Princeton: D. van Nostrand Company.

[12] Svoboda, Antonn (1956). Gracko-mechanick pomcky uvan pi analyse a synthese kontaktovch obvod [Utilization
of graphical-mechanical aids for the analysis and synthesis of contact circuits]. Stroje na zpracovn informac [Symphosium
IV on information processing machines] (in Czech). IV. Prague: Czechoslovak Academy of Sciences, Research Institute
of Mathematical Machines. pp. 921.

[13] Svoboda, Antonn (1956). Graphical Mechanical Aids for the Synthesis of Relay Circuits. Nachrichtentechnische Fach-
berichte (NTF), Beihefte der Nachrichtentechnischen Zeitschrift (NTZ). Braunschweig, Germany: Vieweg-Verlag.

[14] Steinbuch, Karl W.; Weber, Wolfgang; Heinemann, Traute, eds. (1974) [1967]. Taschenbuch der Informatik - Band II
- Struktur und Programmierung von EDV-Systemen. Taschenbuch der Nachrichtenverarbeitung (in German). 2 (3 ed.).
Berlin, Germany: Springer-Verlag. pp. 25, 62, 96, 122123, 238. ISBN 3-540-06241-6. LCCN 73-80607.

[15] Svoboda, Antonn; White, Donnamaie E. (2016) [1979-08-01]. Advanced Logical Circuit Design Techniques (PDF) (re-
typed electronic reissue ed.). Garland STPM Press (original issue) / WhitePubs (reissue). ISBN 978-0-8240-7014-4.
Archived (PDF) from the original on 2017-04-14. Retrieved 2017-04-15.

[16] Hndler, Wolfgang (1958). Ein Minimisierungsverfahren zur Synthese von Schaltkreisen: Minimisierungsgraphen (Disser-
tation) (in German). Technische Hochschule Darmstadt. D 17. (NB. Although written by a German, the title contains an
anglicism; the correct German term would be Minimierung instead of Minimisierung.)

[17] Hndler, Wolfgang (2013) [1961]. Zum Gebrauch von Graphen in der Schaltkreis- und Schaltwerktheorie. In Peschl,
Ernst Ferdinand; Unger, Heinz. Colloquium ber Schaltkreis- und Schaltwerk-Theorie - Vortragsauszge vom 26. bis 28.
Oktober 1960 in Bonn - Band 3 von Internationale Schriftenreihe zur Numerischen Mathematik [International Series of
Numerical Mathematics] (ISNM) (in German). 3. Institut fr Angewandte Mathematik, Universitt Saarbrcken, Rheinisch-
Westflisches Institut fr Instrumentelle Mathematik: Springer Basel AG / Birkhuser Verlag Basel. pp. 169198. ISBN
978-3-0348-5771-0. doi:10.1007/978-3-0348-5770-3.

[18] Berger, Erich R.; Hndler, Wolfgang (1967) [1962]. Steinbuch, Karl W.; Wagner, Siegfried W., eds. Taschenbuch der
Nachrichtenverarbeitung (in German) (2 ed.). Berlin, Germany: Springer-Verlag OHG. pp. 64, 10341035, 1036, 1038.
LCCN 67-21079. Title No. 1036. [] bersichtlich ist die Darstellung nach Hndler, die smtliche Punkte, numeriert
nach dem Gray-Code [], auf dem Umfeld eines Kreises anordnet. Sie erfordert allerdings sehr viel Platz. [] [Hndlers
illustration, where all points, numbered according to the Gray code, are arranged on the circumference of a circle, is easily
comprehensible. It needs, however, a lot of space.]

[19] Hotz, Gnter (1974). Schaltkreistheorie [Switching circuit theory]. DeGruyter Lehrbuch (in German). Walter de Gruyter
& Co. p. 117. ISBN 3-11-00-2050-5. [] Der Kreisgraph von Hndler ist fr das Aunden von Primimplikanten
gut brauchbar. Er hat den Nachteil, da er schwierig zu zeichnen ist. Diesen Nachteil kann man allerdings durch die
Verwendung von Schablonen verringern. [] [The circle graph by Hndler is well suited to nd prime implicants. A
disadvantage is that it is dicult to draw. This can be remedied using stencils.]

[20] Informatik Sammlung Erlangen (ISER)" (in German). Erlangen, Germany: Friedrich-Alexander Universitt. 2012-03-
13. Retrieved 2017-04-12. (NB. Shows a picture of a Kreisgraph by Hndler.)

[21] Informatik Sammlung Erlangen (ISER) - Impressum (in German). Erlangen, Germany: Friedrich-Alexander Universitt.
2012-03-13. Archived from the original on 2012-02-26. Retrieved 2017-04-15. (NB. Shows a picture of a Kreisgraph by
Hndler.)

[22] Zemanek, Heinz (2013) [1990]. Geschichte der Schaltalgebra [History of circuit switching algebra]. In Broy, Man-
fred. Informatik und Mathematik [Computer Sciences and Mathematics] (in German). Springer-Verlag. pp. 4372. ISBN
9783642766770. Einen Weg besonderer Art, der damals zu wenig beachtet wurde, wies W. Hndler in seiner Dissertation
[] mit einem Kreisdiagramm. [] (NB. Collection of papers at a colloquium held at the Bayerische Akademie der
Wissenschaften, 1989-06-12/14, in honor of Friedrich L. Bauer.)

[23] Bauer, Friedrich Ludwig; Wirsing, Martin (March 1991). Elementare Aussagenlogik (in German). Berlin / Heidelberg:
Springer-Verlag. pp. 5456, 71, 112113, 138139. ISBN 978-3-540-52974-3. [] handelt es sich um ein Hndler-
Diagramm [], mit den Wrfelecken als Ecken eines 2m -gons. [] Abb. [] zeigt auch Gegenstcke fr andere Di-
mensionen. Durch waagerechte Linien sind dabei Tupel verbunden, die sich nur in der ersten Komponente unterscheiden;
durch senkrechte Linien solche, die sich nur in der zweiten Komponente unterscheiden; durch 45-Linien und 135-Linien
solche, die sich nur in der dritten Komponente unterscheiden usw. Als Nachteil der Hndler-Diagramme wird angefhrt,
da sie viel Platz beanspruchen. []

[24] Kortum, Herbert (1965). Minimierung von Kontaktschaltungen durch Kombination von Krzungsverfahren und Graphen-
methoden. messen-steuern-regeln (msr) (in German). Verlag Technik. 8 (12): 421425.

[25] Kortum, Herbert (1966). Konstruktion und Minimierung von Halbleiterschaltnetzwerken mittels Graphentransformation.
messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (1): 912.

[26] Kortum, Herbert (1966). Weitere Bemerkungen zur Minimierung von Schaltnetzwerken mittels Graphenmethoden.
messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (3): 96102.

[27] Kortum, Herbert (1966). Weitere Bemerkungen zur Behandlung von Schaltnetzwerken mittels Graphen. messen-steuern-
regeln (msr) (in German). Verlag Technik. 9 (5): 151157.

[28] Kortum, Herbert (1967). "ber zweckmige Anpassung der Graphenstruktur diskreter Systeme an vorgegebene Auf-
gabenstellungen. messen-steuern-regeln (msr) (in German). Verlag Technik. 10 (6): 208211.

[29] Tafel, Hans Jrg (1971). 4.3.5. Graphenmethode zur Vereinfachung von Schaltfunktionen. Written at RWTH, Aachen,
Germany. Einfhrung in die digitale Datenverarbeitung [Introduction to digital information processing] (in German). Mu-
nich, Germany: Carl Hanser Verlag. pp. 98105, 107113. ISBN 3-446-10569-7.

166.6 Further reading


Katz, Randy Howard (1998) [1994]. Contemporary Logic Design. The Benjamin/Cummings Publishing Com-
pany. pp. 7085. ISBN 0-8053-2703-7. doi:10.1016/0026-2692(95)90052-7.

Vingron, Shimon Peter (2004) [2003-11-05]. Karnaugh Maps. Switching Theory: Insight Through Predicate
Logic. Berlin, Heidelberg, New York: Springer-Verlag. pp. 5776. ISBN 3-540-40343-4.

Wickes, William E. (1968). Logic Design with Integrated Circuits. New York, USA: John Wiley & Sons.
pp. 3649. LCCN 68-21185. A renement of the Venn diagram in that circles are replaced by squares and
arranged in a form of matrix. The Veitch diagram labels the squares with the minterms. Karnaugh assigned 1s
and 0s to the squares and their labels and deduced the numbering scheme in common use.

Maxeld, Clive Max (2006-11-29). Reed-Muller Logic. Logic 101. EETimes. Part 3. Archived from the
original on 2017-04-19. Retrieved 2017-04-19.

166.7 External links


Detect Overlapping Rectangles, by Herbert Glarner.
Using Karnaugh maps in practical applications, Circuit design project to control trac lights.

K-Map Tutorial for 2,3,4 and 5 variables


Karnaugh Map Example

POCKETPC BOOLEAN FUNCTION SIMPLIFICATION, Ledion Bitincka George E. Antoniou


Chapter 167

Material conditional

Logical conditional redirects here. For other related meanings, see Conditional statement.
Not to be confused with Material inference.
The material conditional (also known as material implication, material consequence, or simply implication, implies, or conditional) is a logical connective (or a binary operator) that is often symbolized by a forward arrow "→". The material conditional is used to form statements of the form p → q (termed a conditional statement) which is read as "if p then q". Unlike the English construction "if...then...", the material conditional statement p → q does not specify a causal relationship between p and q. It is merely to be understood to mean "if p is true, then q is also true" such that the statement p → q is false only when p is true and q is false.[1] The material conditional only states that q is true when (but not necessarily only when) p is true, and makes no claim that p causes q.

[Figure: Venn diagram of the truth function of the material conditional A → B. The circle on the left bounds all members of set A, and the one on the right bounds all members of set B. The red area describes all members for which the material conditional is true, and the white area describes all members for which it is false. The material conditional differs significantly from a natural language's "if...then..." statement. It is only false when both the antecedent A is true and the consequent B is false.]


The material conditional is also symbolized using:

1. p ⊃ q (Although this symbol may be used for the superset symbol in set theory.);
2. p ⇒ q (Although this symbol is often used for logical consequence (i.e., logical implication) rather than for material conditional.)
3. Cpq (using Łukasiewicz notation)

With respect to the material conditionals above:

p is termed the antecedent of the conditional, and


q is termed the consequent of the conditional.

Conditional statements may be nested such that either or both of the antecedent or the consequent may themselves be conditional statements. In the example "(p → q) → (r → s)" (meaning "if the truth of p implies the truth of q, then the truth of r implies the truth of s"), both the antecedent and the consequent are conditional statements.
In classical logic p → q is logically equivalent to ¬(p ∧ ¬q) and, by De Morgan's Law, logically equivalent to ¬p ∨ q.[2] Whereas, in minimal logic (and therefore also intuitionistic logic) p → q only logically entails ¬(p ∧ ¬q); and in intuitionistic logic (but not minimal logic) ¬p ∨ q entails p → q.

167.1 Definitions of the material conditional


Logicians have many different views on the nature of material implication and approaches to explain its sense.[3]

167.1.1 As a truth function


In classical logic, the compound p→q is logically equivalent to the negative compound: not both p and not q. Thus the compound p→q is false if and only if both p is true and q is false. By the same stroke, p→q is true if and only if either p is false or q is true (or both). Thus → is a function from pairs of truth values of the components p, q to truth values of the compound p→q, whose truth value is entirely a function of the truth values of the components. Hence, this interpretation is called truth-functional. The compound p→q is logically equivalent also to ¬p∨q (either not p, or q (or both)), and to ¬q→¬p (if not q then not p). But it is not equivalent to ¬p→¬q, which is equivalent to q→p.

Truth table

The truth table associated with the material conditional p→q is identical to that of ¬p∨q. It is as follows:
It may also be useful to note that in Boolean algebra, true and false can be denoted as 1 and 0 respectively with an equivalent table.
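The following Python sketch (illustrative only, not from the article) makes the truth-functional reading concrete: it defines p → q as "not (p and not q)" and confirms, for all four combinations of truth values, that it agrees with "not p or q".

from itertools import product

def implies(p, q):
    # Material conditional p -> q: false exactly when p is true and q is false.
    return not (p and not q)

print('p      q      p -> q')
for p, q in product((True, False), repeat=2):
    assert implies(p, q) == ((not p) or q)   # same truth function as "not p or q"
    print('{!s:<6} {!s:<6} {!s}'.format(p, q, implies(p, q)))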

167.1.2 As a formal connective


The material conditional can be considered as a symbol of a formal theory, taken as a set of sentences, satisfying all the classical inferences involving →, in particular the following characteristic rules:

1. Modus ponens;
2. Conditional proof;
3. Classical contraposition;
4. Classical reductio ad absurdum.

Unlike the truth-functional one, this approach to logical connectives permits the examination of structurally identical propositional forms in various logical systems, where somewhat different properties may be demonstrated. For example, in intuitionistic logic, which rejects proofs by contraposition as valid rules of inference, (p → q) ⇒ ¬p ∨ q is not a propositional theorem, but the material conditional is used to define negation.

167.2 Formal properties


When studying logic formally, the material conditional is distinguished from the semantic consequence relation ⊨. We say A ⊨ B if every interpretation that makes A true also makes B true. However, there is a close relationship between the two in most logics, including classical logic. For example, the following principles hold:

If Γ ⊨ ψ then ∅ ⊨ (φ1 ∧ … ∧ φn) → ψ for some φ1, . . . , φn in Γ. (This is a particular form of the deduction theorem. In words, it says that if Γ models ψ this means that ψ can be deduced just from some subset of the theorems in Γ.)

The converse of the above

Both → and ⊨ are monotonic; i.e., if Γ ⊨ ψ then Δ ∪ Γ ⊨ ψ, and if φ → ψ then (φ ∧ α) → (ψ ∨ β) for any φ, ψ, α, β. (In terms of structural rules, this is often referred to as weakening or thinning.)

These principles do not hold in all logics, however. Obviously they do not hold in non-monotonic logics, nor do they hold in relevance logics.
Other properties of implication (the following expressions are always true, for any logical values of variables):

distributivity: (s → (p → q)) → ((s → p) → (s → q))

transitivity: (a → b) → ((b → c) → (a → c))

reflexivity: a → a

totality: (a → b) ∨ (b → a)

truth preserving: The interpretation under which all variables are assigned a truth value of 'true' produces a truth value of 'true' as a result of material implication.

commutativity of antecedents: (a → (b → c)) ≡ (b → (a → c))

Note that a → (b → c) is logically equivalent to (a ∧ b) → c; this property is sometimes called un/currying.

Because of these properties, it is convenient to adopt a right-associative notation for → where a → b → c denotes a → (b → c).
Comparison of Boolean truth tables shows that a → b is equivalent to ¬a ∨ b, and one is an equivalent replacement for the other in classical logic. See material implication (rule of inference).

167.3 Philosophical problems with material conditional


Outside of mathematics, it is a matter of some controversy as to whether the truth function for material implica-
tion provides an adequate treatment of conditional statements in a natural language such as English, i.e., indicative
conditionals and counterfactual conditionals. An indicative conditional is a sentence in the indicative mood with a
conditional clause attached. A counterfactual conditional is a false-to-fact sentence in the subjunctive mood.[4] That
is to say, critics argue that in some non-mathematical cases, the truth value of a compound statement, if p then q",
is not adequately determined by the truth values of p and q.[4] Examples of non-truth-functional statements include:
"q because p", "p before q" and it is possible that p".[4]
"[Of] the sixteen possible truth-functions of A and B, material implication is the only serious candidate. First, it is
uncontroversial that when A is true and B is false, If A, B" is false. A basic rule of inference is modus ponens: from
If A, B" and A, we can infer B. If it were possible to have A true, B false and If A, B" true, this inference would
be invalid. Second, it is uncontroversial that If A, B" is sometimes true when A and B are respectively (true, true),
or (false, true), or (false, false) Non-truth-functional accounts agree that If A, B" is false when A is true and B
is false; and they agree that the conditional is sometimes true for the other three combinations of truth-values for
the components; but they deny that the conditional is always true in each of these three cases. Some agree with the
truth-functionalist that when A and B are both true, If A, B" must be true. Some do not, demanding a further relation
between the facts that A and that B.[4]

The truth-functional theory of the conditional was integral to Frege's new logic (1879). It was taken
up enthusiastically by Russell (who called it material implication), Wittgenstein in the Tractatus, and
the logical positivists, and it is now found in every logic text. It is the rst theory of conditionals which
students encounter. Typically, it does not strike students as obviously correct. It is logic's first surprise. Yet, as the textbooks testify, it does a creditable job in many circumstances. And it has many defenders. It is a strikingly simple theory: "If A, B" is false when A is true and B is false. In all other cases, "If A, B" is true. It is thus equivalent to "~(A&~B)" and to "~A or B". "A ⊃ B" has, by stipulation, these truth conditions.
Dorothy Edgington, The Stanford Encyclopedia of Philosophy, Conditionals[4]

The meaning of the material conditional can sometimes be used in the English "if condition then consequence" construction (a kind of conditional sentence), where condition and consequence are to be filled with English sentences. However, this construction also implies a reasonable connection between the condition (protasis) and consequence (apodosis) (see Connexive logic).
The material conditional can yield some unexpected truths when expressed in natural language. For example, any material conditional statement with a false antecedent is true (see vacuous truth). So the statement "if 2 is odd then 2 is even" is true. Similarly, any material conditional with a true consequent is true. So the statement "if I have a penny in my pocket then Paris is in France" is always true, regardless of whether or not there is a penny in my pocket. These
problems are known as the paradoxes of material implication, though they are not really paradoxes in the strict sense;
that is, they do not elicit logical contradictions. These unexpected truths arise because speakers of English (and other
natural languages) are tempted to equivocate between the material conditional and the indicative conditional, or other
conditional statements, like the counterfactual conditional and the material biconditional.
It is not surprising that a rigorously defined truth-functional operator does not correspond exactly to all notions of
implication or otherwise expressed by 'if ... then ...' sentences in natural languages. For an overview of some of the
various analyses, formal and informal, of conditionals, see the References section below. Relevance logic attempts
to capture these alternate concepts of implication that material implication glosses over.

167.4 See also

167.4.1 Conditionals

Counterfactual conditional

Indicative conditional

Corresponding conditional

Strict conditional

167.5 References
[1] Magnus, P.D (January 6, 2012). forallx: An Introduction to Formal Logic (PDF). Creative Commons. p. 25. Retrieved
28 May 2013.

[2] Teller, Paul (January 10, 1989). A Modern Formal Logic Primer: Sentence Logic Volume 1 (PDF). Prentice Hall. p.
54. Retrieved 28 May 2013.

[3] Clarke, Matthew C. (March 1996). A Comparison of Techniques for Introducing Material Implication. Cornell Univer-
sity. Retrieved March 4, 2012.

[4] Edgington, Dorothy (2008). Edward N. Zalta, ed. Conditionals. The Stanford Encyclopedia of Philosophy (Winter 2008
ed.).

167.6 Further reading


Brown, Frank Markham (2003), Boolean Reasoning: The Logic of Boolean Equations, 1st edition, Kluwer
Academic Publishers, Norwell, MA. 2nd edition, Dover Publications, Mineola, NY, 2003.
Edgington, Dorothy (2001), Conditionals, in Lou Goble (ed.), The Blackwell Guide to Philosophical Logic,
Blackwell.

Quine, W.V. (1982), Methods of Logic, (1st ed. 1950), (2nd ed. 1959), (3rd ed. 1972), 4th edition, Harvard
University Press, Cambridge, MA.

Stalnaker, Robert, Indicative Conditionals, Philosophia, 5 (1975): 269286.

167.7 External links


Edgington, Dorothy. Conditionals. Stanford Encyclopedia of Philosophy.
Chapter 168

Material equivalence

"Iff" redirects here. For other uses, see IFF (disambiguation).


"" redirects here. It is not to be confused with Bidirectional trac.

Logical symbols representing iff

In logic and related fields such as mathematics and philosophy, if and only if (shortened "iff") is a biconditional logical connective between statements.
In that it is biconditional, the connective can be likened to the standard material conditional ("only if", equal to "if ... then") combined with its reverse ("if"); hence the name. The result is that the truth of either one of the connected statements requires the truth of the other (i.e. either both statements are true, or both are false). It is controversial whether the connective thus defined is properly rendered by the English "if and only if", with its pre-existing meaning. There is nothing to stop one from stipulating that we may read this connective as "only if and if", although this may lead to confusion.
In writing, phrases commonly used, with debatable propriety, as alternatives to "P if and only if Q" include Q is necessary and sufficient for P, P is equivalent (or materially equivalent) to Q (compare material implication), P precisely if Q, P precisely (or exactly) when Q, P exactly in case Q, and P just in case Q.[1] Many authors regard "iff" as unsuitable in formal writing;[2] others use it freely.[3]
In logic formulae, logical symbols are used instead of these phrases; see the discussion of notation.

168.1 Definition

The truth table of P ↔ Q is as follows:[4][5]


Note that it is equivalent to that produced by the XNOR gate, and opposite to that produced by the XOR gate.
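As a concrete illustration (not from the article), the following Python sketch treats P ↔ Q as equality of truth values and checks that it matches XNOR, the negation of exclusive or, on all four combinations.

from itertools import product

def iff(p, q):
    # Biconditional P <-> Q: true exactly when P and Q have the same truth value.
    return p == q

for p, q in product((True, False), repeat=2):
    assert iff(p, q) == (not (p ^ q))                      # XNOR
    assert iff(p, q) == ((p and q) or (not p and not q))   # both true or both false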

168.2 Usage

168.2.1 Notation

The corresponding logical symbols are "", " ", and "", and sometimes i. These are usually treated as equiv-
alent. However, some texts of mathematical logic (particularly those on rst-order logic, rather than propositional
logic) make a distinction between these, in which the rst, , is used as a symbol in logic formulas, while is used
in reasoning about those logic formulas (e.g., in metalogic). In ukasiewicz's notation, it is the prex symbol 'E'.
Another term for this logical connective is exclusive nor.


168.2.2 Proofs

In most logical systems, one proves a statement of the form "P iff Q" by proving "if P, then Q" and "if Q, then P". Proving this pair of statements sometimes leads to a more natural proof since there are not obvious conditions in which one would infer a biconditional directly. An alternative is to prove the disjunction "(P and Q) or (not-P and not-Q)", which itself can be inferred directly from either of its disjuncts; that is, because "iff" is truth-functional, "P iff Q" follows if P and Q have both been shown true, or both false.

168.2.3 Origin of iff and pronunciation

Usage of the abbreviation "iff" first appeared in print in John L. Kelley's 1955 book General Topology.[6] Its invention is often credited to Paul Halmos, who wrote "I invented 'iff,' for 'if and only if', but I could never believe I was really its first inventor."[7]
It is somewhat unclear how "iff" was meant to be pronounced. In current practice, the single 'word' "iff" is almost always read as the four words "if and only if". However, in the preface of General Topology, Kelley suggests that it should be read differently: "In some cases where mathematical content requires 'if and only if' and euphony demands something less I use Halmos' 'iff'". The authors of one discrete mathematics textbook suggest:[8] "Should you need to pronounce iff, really hang on to the 'ff' so that people hear the difference from 'if'", implying that iff could be pronounced with a prolonged 'f'.

168.3 Distinction from "if" and "only if"


1. Madison will eat the fruit if it is an apple. (equivalent to "Only if Madison will eat the fruit, it is an apple;" or "Madison will eat the fruit ← fruit is an apple")

This states simply that Madison will eat fruits that are apples. It does not, however, exclude the possibility that Madison might also eat bananas or other types of fruit. All that is known for certain is that she will eat any and all apples that she happens upon. That the fruit is an apple is a sufficient condition for Madison to eat the fruit.

2. Madison will eat the fruit only if it is an apple. (equivalent to "If Madison will eat the fruit, then it is an apple" or "Madison will eat the fruit → fruit is an apple")

This states that the only fruit Madison will eat is an apple. It does not, however, exclude the possibility that Madison will refuse an apple if it is made available, in contrast with (1), which requires Madison to eat any available apple. In this case, that a given fruit is an apple is a necessary condition for Madison to be eating it. It is not a sufficient condition since Madison might not eat all the apples she is given.

3. Madison will eat the fruit if and only if it is an apple. (equivalent to "Madison will eat the fruit ↔ fruit is an apple")

This statement makes it clear that Madison will eat all and only those fruits that are apples. She will not leave any apple uneaten, and she will not eat any other type of fruit. That a given fruit is an apple is both a necessary and a sufficient condition for Madison to eat the fruit.

Sufficiency is the converse of necessity. That is to say, given P→Q (i.e. if P then Q), P would be a sufficient condition for Q, and Q would be a necessary condition for P. Also, given P→Q, it is true that ¬Q→¬P (where ¬ is the negation operator, i.e. "not"). This means that the relationship between P and Q, established by P→Q, can be expressed in the following, all equivalent, ways:

P is sufficient for Q
Q is necessary for P
¬Q is sufficient for ¬P
¬P is necessary for ¬Q

As an example, take (1), above, which states P→Q, where P is "the fruit in question is an apple" and Q is "Madison will eat the fruit in question". The following are four equivalent ways of expressing this very relationship:

If the fruit in question is an apple, then Madison will eat it.


Only if Madison will eat the fruit in question, is it an apple.
If Madison will not eat the fruit in question, then it is not an apple.
Only if the fruit in question is not an apple, will Madison not eat it.

So we see that (2), above, can be restated in the form of if...then as If Madison will eat the fruit in question, then it
is an apple"; taking this in conjunction with (1), we nd that (3) can be stated as If the fruit in question is an apple,
then Madison will eat it; and if Madison will eat the fruit, then it is an apple.

168.4 In terms of Euler diagrams


[Figure: A is a proper subset of B. A number is in A only if it is in B; a number is in B if it is in A.]

[Figure: C is a subset but not a proper subset of B. A number is in B if and only if it is in C, and a number is in C if and only if it is in B.]

Euler diagrams show logical relationships among events, properties, and so forth. "P only if Q", "if P then Q", and "P→Q" all mean that P is a subset, either proper or improper, of Q. "P if Q", "if Q then P", and "Q→P" all mean that Q is a proper or improper subset of P. "P if and only if Q" and "Q if and only if P" both mean that the sets P and Q are identical to each other.

168.5 More general usage


Iff is used outside the field of logic, wherever logic is applied, especially in mathematical discussions. It has the same meaning as above: it is an abbreviation for "if and only if", indicating that one statement is both necessary and sufficient for the other. This is an example of mathematical jargon. (However, as noted above, "if", rather than "iff", is more often used in statements of definition.)
The elements of X are all and only the elements of Y is used to mean: for any z in the domain of discourse, z is in
X if and only if z is in Y.

168.6 See also


Covariance

Logical biconditional

Logical equality
Necessary and sucient condition

Polysyllogism

168.7 Footnotes
[1] Weisstein, Eric W. "Iff." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/Iff.html

[2] E.g. Daepp, Ulrich; Gorkin, Pamela (2011), Reading, Writing, and Proving: A Closer Look at Mathematics, Undergraduate
Texts in Mathematics, Springer, p. 52, ISBN 9781441994790, While it can be a real time-saver, we don't recommend it
in formal writing.

[3] Rothwell, Edward J.; Cloud, Michael J. (2014), Engineering Writing by Design: Creating Formal Documents of Lasting
Value, CRC Press, p. 98, ISBN 9781482234312, It is common in mathematical writing.

[4] p <=> q. Wolfram|Alpha

[5] If and only if, UHM Department of Mathematics, Theorems which have the form P if and only Q are much prized in
mathematics. They give what are called necessary and sucient conditions, and give completely equivalent and hopefully
interesting new ways to say exactly the same thing..

[6] General Topology, reissue ISBN 978-0-387-90125-1

[7] Nicholas J. Higham (1998). Handbook of writing for the mathematical sciences (2nd ed.). SIAM. p. 24. ISBN 978-0-
89871-420-3.

[8] Maurer, Stephen B.; Ralston, Anthony (2005). Discrete Algorithmic Mathematics (3rd ed.). Boca Raton, Fla.: CRC Press.
p. 60. ISBN 1568811667.

168.8 External links


Language Log: Just in Case

Southern California Philosophy for philosophy graduate students: Just in Case


Chapter 169

Material implication (rule of inference)

For other uses, see Material implication.


Not to be confused with Material inference.

In propositional logic, material implication [1][2] is a valid rule of replacement that allows for a conditional statement
to be replaced by a disjunction in which the antecedent is negated. The rule states that P implies Q is logically equivalent
to not-P or Q and can replace each other in logical proofs.

P → Q ⇔ ¬P ∨ Q

Where "⇔" is a metalogical symbol representing "can be replaced in a proof with".

169.1 Formal notation


The material implication rule may be written in sequent notation:

(P → Q) ⊢ (¬P ∨ Q)

where ⊢ is a metalogical symbol meaning that (¬P ∨ Q) is a syntactic consequence of (P → Q) in some logical system;
or in rule form:

P → Q
∴ ¬P ∨ Q

where the rule is that wherever an instance of "P → Q" appears on a line of a proof, it can be replaced with "¬P ∨ Q";
or as the statement of a truth-functional tautology or theorem of propositional logic:

(P → Q) → (¬P ∨ Q)

where P and Q are propositions expressed in some formal system.

169.2 Example
An example is:


If it is a bear, then it can swim.


Thus, it is not a bear or it can swim.

where P is the statement it is a bear and Q is the statement it can swim.


If it was found that the bear could not swim, written symbolically as P ∧ ¬Q, then both sentences are false but otherwise they are both true.

169.3 References
[1] Hurley, Patrick (1991). A Concise Introduction to Logic (4th ed.). Wadsworth Publishing. pp. 3645.

[2] Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall. p. 371.
Chapter 170

Material nonimplication

Venn diagram of A ↛ B

Material nonimplication or abjunction (Latin ab = "from", junctio = "joining") is the negation of material implication. That is to say that for any two propositions P and Q, the material nonimplication from P to Q is true if and only if the negation of the material implication from P to Q is true. This is more naturally stated as that the material nonimplication from P to Q is true only if P is true and Q is false.
It may be written using logical notation as:

p ↛ q
Lpq
p ⇏ q

And is equivalent to:


p ∧ ¬q

170.1 Definition

170.1.1 Truth table

170.2 Properties
falsehood-preserving: The interpretation under which all variables are assigned a truth value of false produces a
truth value of false as a result of material nonimplication.

170.3 Symbol
The symbol for material nonimplication is simply a crossed-out material implication symbol: ↛. Its Unicode code point is 8603 in decimal (U+219B).

170.4 Natural language

170.4.1 Grammatical

170.4.2 Rhetorical
p but not q.

170.5 Boolean algebra


Further information: Boolean algebra

(A'+B)'

170.6 Computer science


Bitwise operation: A&(~B)
Logical operation: A&&(!B)
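A short Python sketch (illustrative only, not from the article) ties these together: the logical form "p and not q" is the negation of the material conditional, and the bitwise form A & ~B computes the same connective position by position on machine words.

from itertools import product

def nonimplication(p, q):
    # Material nonimplication: true only when p is true and q is false.
    return p and not q

for p, q in product((True, False), repeat=2):
    assert nonimplication(p, q) == (not ((not p) or q))    # negation of p -> q

# Bitwise form on 4-bit values: A AND (NOT B), masked back to 4 bits.
A, B = 0b1100, 0b1010
assert (A & ~B) & 0b1111 == 0b0100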

170.7 See also


Implication

170.8 References
Chapter 171

Mereology

In philosophy and mathematical logic, mereology (from the Greek μέρος, root: μερε-, "part", and the suffix -logy "study, discussion, science") is the study of parts and the wholes they form. Whereas set theory is founded on the membership relation between a set and its elements, mereology emphasizes the meronomic relation between entities, which (from a set-theoretic perspective) is closer to the concept of inclusion between sets.
Mereology has been explored in various ways as applications of predicate logic to formal ontology, in each of which
mereology is an important part. Each of these elds provides its own axiomatic denition of mereology. A common
element of such axiomatizations is the assumption, shared with inclusion, that the part-whole relation orders its uni-
verse, meaning that everything is a part of itself (reexivity), that a part of a part of a whole is itself a part of that
whole (transitivity), and that two distinct entities cannot each be a part of the other (antisymmetry), thus forming
a poset. A variant of this axiomatization denies that anything is ever part of itself (irreexivity) while accepting
transitivity, from which antisymmetry follows automatically.
Although mereology is an application of mathematical logic, what could be argued to be a sort of proto-geometry,
it has been wholly developed by logicians, ontologists, linguists, engineers, and computer scientists, especially those
working in articial intelligence.
Mereology can also refer to formal work in General Systems Theory on system decomposition and parts, wholes
and boundaries (by, e.g., Mihajlo D. Mesarovic (1970), Gabriel Kron (1963), or Maurice Jessel (see Bowden (1989,
1998)). A hierarchical version of Gabriel Kron's Network Tearing was published by Keith Bowden (1991), reflecting David Lewis's ideas on Gunk. Such ideas appear in theoretical computer science and physics, often in combination
with Sheaf, Topos, or Category Theory. See also the work of Steve Vickers on (parts of) specications in computer
science, Joseph Goguen on physical systems, and Tom Etter (1996, 1998) on link theory and quantum mechanics.
In computer science, the class concept of object-oriented programming is in some ways similar to mereological no-
tions, but is a kind of and is a part of are not the same notion. Class/sub-class relationships are not explicit
language constructs in either imperative programs or declarative programs. Method inheritance enriches this appli-
cation of mereology by providing for passing procedural information down the part-whole relation, thereby making
method inheritance a naturally arising aspect of mereology.

171.1 History

Informal part-whole reasoning was consciously invoked in metaphysics and ontology from Plato (in particular, in the
second half of the Parmenides) and Aristotle onwards, and more or less unwittingly in 19th-century mathematics
until the triumph of set theory around 1910. Ivor Grattan-Guinness (2001) sheds much light on part-whole reasoning
during the 19th and early 20th centuries, and reviews how Cantor and Peano devised set theory. In seventh century
India, parts and wholes were studied extensively by Dharmakirti (see [1] ). In Europe, however, it appears that the
rst to reason consciously and at length about parts and wholes was Edmund Husserl, in 1901, in the second volume
of Logical Investigations - Third Investigation: On the Theory of Wholes and Parts (Husserl 1970 is the English
translation). However, the word mereology is absent from his writings, and he employed no symbolism even though
his doctorate was in mathematics.


Stanisław Leśniewski coined "mereology" in 1927, from the Greek word μέρος (méros, "part"), to refer to a formal theory of part-whole he devised in a series of highly technical papers published between 1916 and 1931, and translated in Leśniewski (1992). Leśniewski's student Alfred Tarski, in his Appendix E to Woodger (1937) and the paper translated as Tarski (1984), greatly simplified Leśniewski's formalism. Other students (and students of students) of Leśniewski elaborated this Polish mereology over the course of the 20th century. For a good selection of the literature on Polish mereology, see Srzednicki and Rickey (1984). For a survey of Polish mereology, see Simons (1987). Since 1980 or so, however, research on Polish mereology has been almost entirely historical in nature.
A.N. Whitehead planned a fourth volume of Principia Mathematica, on geometry, but never wrote it. His 1914
correspondence with Bertrand Russell reveals that his intended approach to geometry can be seen, with the benefit
of hindsight, as mereological in essence. This work culminated in Whitehead (1916) and the mereological systems
of Whitehead (1919, 1920).
In 1930, Henry Leonard completed a Harvard Ph.D. dissertation in philosophy, setting out a formal theory of the
part-whole relation. This evolved into the calculus of individuals of Goodman and Leonard (1940). Goodman
revised and elaborated this calculus in the three editions of Goodman (1951). The calculus of individuals is the
starting point for the post-1970 revival of mereology among logicians, ontologists, and computer scientists, a revival
well-surveyed in Simons (1987) and Casati and Varzi (1999).

171.2 Axioms and primitive notions


Reflexivity: A basic choice in defining a mereological system is whether to consider things to be parts of themselves. In naive set theory a similar question arises: whether a set is to be considered a subset of itself. In both cases, "yes" gives rise to paradoxes analogous to Russell's paradox: Let there be an object O such that every object that is not a proper part of itself is a proper part of O. Is O a proper part of itself? No, because no object is a proper part of itself; and yes, because it meets the specified requirement for inclusion as a proper part of O. In set theory, a set is often termed an improper subset of itself. Given such paradoxes, mereology requires an axiomatic formulation.
A mereological system is a first-order theory (with identity) whose universe of discourse consists of wholes and their
respective parts, collectively called objects. Mereology is a collection of nested and non-nested axiomatic systems,
not unlike the case with modal logic.
The treatment, terminology, and hierarchical organization below follow Casati and Varzi (1999: Ch. 3) closely. For
a more recent treatment, correcting certain misconceptions, see Hovda (2008). Lower-case letters denote variables
ranging over objects. Following each symbolic axiom or denition is the number of the corresponding formula in
Casati and Varzi, written in bold.
A mereological system requires at least one primitive binary relation (dyadic predicate). The most conventional choice for such a relation is parthood (also called inclusion), "x is a part of y", written Pxy. Nearly all systems require that parthood partially order the universe. The following defined relations, required for the axioms below, follow immediately from parthood alone:

An immediate defined predicate is "x is a proper part of y", written PPxy, which holds (i.e., is satisfied, comes out true) if Pxy is true and Pyx is false. Compared to parthood (which is a partial order), ProperPart is a strict partial order.

PPxy ↔ (Pxy ∧ ¬Pyx).   3.3
An object lacking proper parts is an atom. The mereological universe consists of all objects we wish to
think about, and all of their proper parts:

Overlap: x and y overlap, written Oxy, if there exists an object z such that Pzx and Pzy both hold.

Oxy ↔ ∃z[Pzx ∧ Pzy].   3.1


The parts of z, the overlap or product of x and y, are precisely those objects that are parts of both x
and y.

Underlap: x and y underlap, written Uxy, if there exists an object z such that x and y are both parts of z.

Uxy ↔ ∃z[Pxz ∧ Pyz].   3.2



Overlap and Underlap are reflexive, symmetric, and intransitive.


Systems vary in what relations they take as primitive and as defined. For example, in extensional mereologies (defined below), parthood can be defined from Overlap as follows:

Pxy ↔ ∀z[Ozx → Ozy].   3.31

The axioms are:

Parthood partially orders the universe:

M1, Reflexive: An object is a part of itself.


Pxx.   P.1
M2, Antisymmetric: If Pxy and Pyx both hold, then x and y are the same object.
(Pxy ∧ Pyx) → x = y.   P.2
M3, Transitive: If Pxy and Pyz, then Pxz.
(Pxy ∧ Pyz) → Pxz.   P.3

M4, Weak Supplementation: If PPxy holds, there exists a z such that Pzy holds but Ozx does not.

PPxy → ∃z[Pzy ∧ ¬Ozx].   P.4

M5, Strong Supplementation: Replace "PPxy holds" in M4 with "Pyx does not hold".

¬Pyx → ∃z[Pzy ∧ ¬Ozx].   P.5

M5', Atomistic Supplementation: If Pxy does not hold, then there exists an atom z such that Pzx holds but
Ozy does not.

¬Pxy → ∃z[Pzx ∧ ¬Ozy ∧ ¬∃v[PPvz]].   P.5'

Top: There exists a universal object, designated W, such that PxW holds for any x.

∃W∀x[PxW].   3.20
Top is a theorem if M8 holds.

Bottom: There exists an atomic null object, designated N, such that PNx holds for any x.

∃N∀x[PNx].   3.22

M6, Sum: If Uxy holds, there exists a z, called the sum or fusion of x and y, such that the objects that overlap z are just those objects that overlap either x or y.

Uxy → ∃z∀v[Ovz ↔ (Ovx ∨ Ovy)].   P.6

M7, Product: If Oxy holds, there exists a z, called the product of x and y, such that the parts of z are just
those objects that are parts of both x and y.

Oxy → ∃z∀v[Pvz ↔ (Pvx ∧ Pvy)].   P.7


If Oxy does not hold, x and y have no parts in common, and the product of x and y is undefined.

M8, Unrestricted Fusion: Let φ(x) be a first-order formula in which x is a free variable. Then the fusion of all objects satisfying φ exists.

∃x[φ(x)] → ∃z∀y[Oyz ↔ ∃x[φ(x) ∧ Oyx]].   P.8


M8 is also called General Sum Principle, Unrestricted Mereological Composition, or Universalism.
M8 corresponds to the principle of unrestricted comprehension of naive set theory, which gives rise to Russell's paradox. There is no mereological counterpart to this paradox simply because parthood, unlike set membership, is reflexive.

M8', Unique Fusion: The fusions whose existence M8 asserts are also unique. P.8'
M9, Atomicity: All objects are either atoms or fusions of atoms.

∃y[Pyx ∧ ¬∃z[PPzy]].   P.10
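Because M1-M4 are first-order conditions, they can be checked mechanically on any finite candidate model. The following Python sketch uses an illustrative toy model (a whole w with three atomic proper parts; the names and helpers are invented here, not taken from Casati and Varzi) and verifies reflexivity, antisymmetry, transitivity, and weak supplementation:

# Toy parthood relation: atoms a, b, c and their fusion w.
objects = {"a", "b", "c", "w"}
P = {(x, x) for x in objects} | {("a", "w"), ("b", "w"), ("c", "w")}   # Pxy: x is a part of y

def proper_part(x, y):                 # PPxy := Pxy and not Pyx
    return (x, y) in P and (y, x) not in P

def overlap(x, y):                     # Oxy := some z is a part of both x and y
    return any((z, x) in P and (z, y) in P for z in objects)

assert all((x, x) in P for x in objects)                               # M1, reflexivity
assert all(x == y for (x, y) in P if (y, x) in P)                      # M2, antisymmetry
assert all((x, z) in P for (x, y1) in P for (y2, z) in P if y1 == y2)  # M3, transitivity
assert all(any((z, y) in P and not overlap(z, x) for z in objects)     # M4, weak supplementation
           for x in objects for y in objects if proper_part(x, y))
print("M1-M4 hold in this model")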

171.3 Various systems


Simons (1987), Casati and Varzi (1999) and Hovda (2008) describe many mereological systems whose axioms are
taken from the above list. We adopt the boldface nomenclature of Casati and Varzi. The best-known such system
is the one called classical extensional mereology, hereinafter abbreviated CEM (other abbreviations are explained
below). In CEM, P.1 through P.8' hold as axioms or are theorems. M9, Top, and Bottom are optional.
The systems in the table below are partially ordered by inclusion, in the sense that, if all the theorems of system A are
also theorems of system B, but the converse is not necessarily true, then B includes A. The resulting Hasse diagram
is similar to that in Fig. 2, and Fig. 3.2 in Casati and Varzi (1999: 48).
There are two equivalent ways of asserting that the universe is partially ordered: Assume either M1-M3, or that
Proper Parthood is transitive and asymmetric, hence a strict partial order. Either axiomatization results in the system
M. M2 rules out closed loops formed using Parthood, so that the part relation is well-founded. Sets are well-founded
if the axiom of regularity is assumed. The literature contains occasional philosophical and common-sense objections
to the transitivity of Parthood.
M4 and M5 are two ways of asserting supplementation, the mereological analog of set complementation, with M5
being stronger because M4 is derivable from M5. M and M4 yield minimal mereology, MM. MM, reformulated in
terms of Proper Part, is Simons's (1987) preferred minimal system.
In any system in which M5 or M5' is assumed or can be derived, it can be proved that two objects having the same proper parts are identical. This property is known as Extensionality, a term borrowed from set theory, for which extensionality is the defining axiom. Mereological systems in which Extensionality holds are termed extensional, a
fact denoted by including the letter E in their symbolic names.
M6 asserts that any two underlapping objects have a unique sum; M7 asserts that any two overlapping objects have a
unique product. If the universe is finite or if Top is assumed, then the universe is closed under sum. Universal closure
of Product and of supplementation relative to W requires Bottom. W and N are, evidently, the mereological analog of
the universal and empty sets, and Sum and Product are, likewise, the analogs of set-theoretical union and intersection.
If M6 and M7 are either assumed or derivable, the result is a mereology with closure.
Because Sum and Product are binary operations, M6 and M7 admit the sum and product of only a finite number of
objects. The fusion axiom, M8, enables taking the sum of innitely many objects. The same holds for Product, when
dened. At this point, mereology often invokes set theory, but any recourse to set theory is eliminable by replacing
a formula with a quantied variable ranging over a universe of sets by a schematic formula with one free variable.
The formula comes out true (is satised) whenever the name of an object that would be a member of the set (if it
existed) replaces the free variable. Hence any axiom with sets can be replaced by an axiom schema with monadic
atomic subformulae. M8 and M8' are schemas of just this sort. The syntax of a first-order theory can describe only a denumerable number of sets; hence, only denumerably many sets may be eliminated in this fashion, but this limitation
is not binding for the sort of mathematics contemplated here.
If M8 holds, then W exists for infinite universes. Hence, Top need be assumed only if the universe is infinite and M8 does not hold. It is interesting to note that Top (postulating W) is not controversial, but Bottom (postulating N) is. Leśniewski rejected Bottom, and most mereological systems follow his example (an exception is the work of Richard
Milton Martin). Hence, while the universe is closed under sum, the product of objects that do not overlap is typically
undened. A system with W but not N is isomorphic to:

A Boolean algebra lacking a 0



A join semilattice bounded from above by 1. Binary fusion and W interpret join and 1, respectively.

Postulating N renders all possible products definable, but also transforms classical extensional mereology into a set-
free model of Boolean algebra.
If sets are admitted, M8 asserts the existence of the fusion of all members of any nonempty set. Any mereolog-
ical system in which M8 holds is called general, and its name includes G. In any general mereology, M6 and M7
are provable. Adding M8 to an extensional mereology results in general extensional mereology, abbreviated GEM;
moreover, the extensionality renders the fusion unique. Conversely, if the fusion asserted by M8 is assumed unique, so that M8' replaces M8, then, as Tarski (1929) had shown, M3 and M8' suffice to axiomatize GEM, a remarkably economical result. Simons (1987: 38-41) lists a number of GEM theorems.
M2 and a finite universe necessarily imply Atomicity, namely that everything either is an atom or includes atoms among its proper parts. If the universe is infinite, Atomicity requires M9. Adding M9 to any mereological system X results in the atomistic variant thereof, denoted AX. Atomicity permits economies; for instance, M5' implies Atomicity and extensionality, and yields an alternative axiomatization of AGEM.

171.4 Set theory


The notion of "subset" in set theory is not entirely the same as the notion of "subpart" in mereology. Stanisław Leśniewski rejected set theory as related to, but not the same as, nominalism (see https://plato.stanford.edu/entries/nominalism-metaphysics/). For a long time, nearly all philosophers and mathematicians avoided mereology, seeing it as tantamount to a rejection of set theory. Goodman too was a nominalist, and his fellow nominalist Richard Milton Martin employed a version of the calculus of individuals throughout his career, starting in 1941.
Much early work on mereology was motivated by a suspicion that set theory was ontologically suspect, and that
Occam's razor requires that one minimise the number of posits in one's theory of the world and of mathematics.
Mereology replaces talk of sets of objects with talk of sums of objects, objects being no more than the various
things that make up wholes.
Many logicians and philosophers reject these motivations, on such grounds as:

They deny that sets are in any way ontologically suspect

Occam's razor, when applied to abstract objects like sets, is either a dubious principle or simply false

Mereology itself is guilty of proliferating new and ontologically suspect entities such as fusions.

For a survey of attempts to found mathematics without using set theory, see Burgess and Rosen (1997).
In the 1970s, thanks in part to Eberle (1970), it gradually came to be understood that one can employ mereology
regardless of one's ontological stance regarding sets. This understanding is called the ontological innocence of
mereology. This innocence stems from mereology being formalizable in either of two equivalent ways:

Quantied variables ranging over a universe of sets

Schematic predicates with a single free variable.

Once it became clear that mereology is not tantamount to a denial of set theory, mereology became largely accepted
as a useful tool for formal ontology and metaphysics.
In set theory, singletons are atoms that have no (non-empty) proper parts; many consider set theory useless or
incoherent (not well-founded) if sets cannot be built up from unit sets. The calculus of individuals was thought to
require that an object either have no proper parts, in which case it is an atom, or be the mereological sum of atoms.
Eberle (1970), however, showed how to construct a calculus of individuals lacking "atoms", i.e., one where every object has a proper part (defined below) so that the universe is infinite.
There are analogies between the axioms of mereology and those of standard Zermelo-Fraenkel set theory (ZF), if
Parthood is taken as analogous to subset in set theory. On the relation of mereology and ZF, also see Bunt (1985).
One of the very few contemporary set theorists to discuss mereology is Potter (2004).

Lewis (1991) went further, showing informally that mereology, augmented by a few ontological assumptions and
plural quantication, and some novel reasoning about singletons, yields a system in which a given individual can
be both a member and a subset of another individual. In the resulting system, the axioms of ZFC (and of Peano
arithmetic) are theorems.
Forrest (2002) revises Lewis's analysis by first formulating a generalization of CEM, called "Heyting mereology", whose sole nonlogical primitive is Proper Part, assumed transitive and antireflexive. There exists a fictitious null
individual that is a proper part of every individual. Two schemas assert that every lattice join exists (lattices are
complete) and that meet distributes over join. On this Heyting mereology, Forrest erects a theory of pseudosets,
adequate for all purposes to which sets have been put.

171.5 Mathematics
Husserl never claimed that mathematics could or should be grounded in part-whole rather than set theory. Lesniewski
consciously derived his mereology as an alternative to set theory as a foundation of mathematics, but did not work
out the details. Goodman and Quine (1947) tried to develop the natural and real numbers using the calculus of
individuals, but were mostly unsuccessful; Quine did not reprint that article in his Selected Logic Papers. In a series of
chapters in the books he published in the last decade of his life, Richard Milton Martin set out to do what Goodman
and Quine had abandoned 30 years prior. A recurring problem with attempts to ground mathematics in mereology
is how to build up the theory of relations while abstaining from set-theoretic definitions of the ordered pair. Martin
argued that Eberles (1970) theory of relational individuals solved this problem.
Topological notions of boundaries and connection can be married to mereology, resulting in mereotopology; see
Casati and Varzi (1999: chpts. 4,5). Whitehead's 1929 Process and Reality contains a good deal of informal
mereotopology.

171.6 Natural language


Bunt (1985), a study of the semantics of natural language, shows how mereology can help understand such phenomena as the mass-count distinction and verb aspect. But Nicolas (2008) argues that a different logical framework, called plural logic, should be used for that purpose. Also, natural language often employs "part of" in ambiguous ways (Simons 1987 discusses this at length). Hence, it is unclear how, if at all, one can translate certain natural language expressions into mereological predicates. Steering clear of such difficulties may require limiting the interpretation of
mereology to mathematics and natural science. Casati and Varzi (1999), for example, limit the scope of mereology
to physical objects.

171.7 Metaphysics
In metaphysics there are many troubling questions pertaining to parts and wholes. One question addresses constitution
and persistence, another asks about composition.

171.7.1 Mereological constitution

In metaphysics, there are several puzzles concerning cases of mereological constitution.[2] That is, what makes up a
whole. We are still concerned with parts and wholes, but instead of looking at what parts make up a whole, we are
wondering what a thing is made of, such as its materials: e.g. the bronze in a bronze statue. Below are two of the
main puzzles that philosophers use to discuss constitution.
Ship of Theseus: Briefly, the puzzle goes something like this. There is a ship called the Ship of Theseus. Over time the boards start to rot, so we remove the boards and place them in a pile. First question: is the ship made of the new boards the same as the ship that had all the old boards? Second, if we reconstruct a ship using all of the old planks, etc. from the Ship of Theseus, and we also have a ship that was built out of new boards (each added one-by-one over time to replace old decaying boards), which ship is the real Ship of Theseus?

Statue and Lump of Clay: Roughly, a sculptor decides to mold a statue out of a lump of clay. At time t1 the sculptor
has a lump of clay. After many manipulations at time t2 there is a statue. The question asked is, is the lump of clay
and the statue (numerically) identical? If so, how and why?[3]
Constitution typically has implications for views on persistence: how does an object persist over time if any of its parts
(materials) change or are removed, as is the case with humans who lose cells, change height, hair color, memories,
and yet we are said to be the same person today as we were when we were first born. For example, Ted Sider is the
same today as he was when he was bornhe just changed. But how can this be if many parts of Ted today did not
exist when Ted was just born? Is it possible for things, such as organisms to persist? And if so, how? There are
several views that attempt to answer this question. Some of the views are as follows (note, there are several other
views):[4][5]
(a) Constitution View. This view accepts cohabitation. That is, two objects share exactly the same matter. Here, it
follows, that there are no temporal parts.
(b) Mereological Essentialism, which states that the only objects that exist are quantities of matter, which are things defined by their parts. The object persists if matter is removed (or the form changes); but the object ceases to exist if
any matter is destroyed.
(c) Dominant Sorts. This is the view that tracing is determined by which sort is dominant; they reject cohabitation. For example, lump does not equal statue because they're different sorts.
(d) Nihilismwhich makes the claim that no objects exist, except simples, so there is no persistence problem.
(e) 4 Dimensionalism, or Temporal Parts (may also go by the names Perdurantism or Exdurantism), which roughly
states that aggregates of temporal parts are intimately related. For example, two roads merging, momentarily and
spatially, are still one road, because they share a part.
(f) 3 Dimensionalism (may also go by the name Endurantism), where the object is wholly present. That is, the
persisting object retains numerical identity.

171.7.2 Mereological composition


One question that is addressed by philosophers is which is more fundamental: parts, wholes, or neither?[6][7][8][9][10][11][12][13][14][15]
Another pressing question is called the Special Composition Question (SCQ): For any Xs, when is it the case that
there is a Y such that the Xs compose Y?[4][16][17][18][19][20][21] This question has caused philosophers to run in three different directions: nihilism, universal composition (UC), or a moderate view (restricted composition). The first two views are considered extreme since the first denies composition, and the second allows any and all non-spatially overlapping objects to compose another object. The moderate view encompasses several theories that try to make sense of the SCQ without saying 'no' to composition or 'yes' to unrestricted composition.

Fundamentality

There are philosophers who are concerned with the question of fundamentality: that is, which is more ontologically fundamental, the parts or their wholes. There are several responses to this question, though one of the default assumptions is that the parts are more fundamental; that is, the whole is grounded in its parts. This is the mainstream view. Another view, explored by Schaffer (2010), is monism, where the parts are grounded in the whole. Schaffer does not just mean that, say, the parts that make up my body are grounded in my body. Rather, Schaffer argues that the whole cosmos is more fundamental and everything else is a part of the cosmos. Then, there is the identity theory, which claims that there is no hierarchy or fundamentality to parts and wholes. Instead, wholes are just (or equivalent to) their parts. There can also be a two-object view, which says that the wholes are not equal to the parts; they are numerically distinct from one another. Each of these theories has benefits and costs associated with it.[6][7][8][9]

Special Composition Question (SCQ)

Philosophers want to know when some Xs compose something Y. There are several kinds of responses:

One response to this question is called nihilism. Nihilism states that there are no mereological complex objects (read: composite objects); there are only simples. Nihilists do not entirely reject composition because they do think that simples compose themselves, but this is a different point. More formally, nihilists would say:

Necessarily, for any non-overlapping Xs, there is an object composed of the Xs if and only if there is only one of the Xs.[17][21][22] This theory, though well explored, has its own set of problems, which include, but are not limited to: conflicts with experience and common sense, incompatibility with atomless gunk, and lack of support from space-time physics.[17][21]

Another prominent response is called universal composition (UC). UC says that so long as the Xs do not spatially overlap, the Xs can compose a complex object. Universal compositionalists are also described as supporters of unrestricted composition. More formally: Necessarily, for any non-overlapping Xs, there is a Y such that Y is composed of the Xs. For example, someone's left thumb, the top half of another person's right shoe, and a quark in the center of their galaxy can compose a complex object according to universal composition. Likewise, this theory has some issues, most of them dealing with the clash with our experience that such randomly chosen parts make up a complex whole, and with the worry that far too many objects are posited in our ontology.

A third response (perhaps less explored than the previous two) includes a range of restricted composition views.
Though there are several views, they all share a common idea: that there is a restriction on what counts as a
complex object: some (but not all) Xs come together to compose a complex Y. Some of these theories include:

(a) Contact View: the Xs compose a complex Y if and only if the Xs are in contact;
(b) Fastenation: the Xs compose a complex Y if and only if the Xs are fastened;
(c) Cohesion: the Xs compose a complex Y if and only if the Xs cohere (cannot be pulled apart or moved in relation to each other without breaking);
(d) Fusion: the Xs compose a complex Y if and only if the Xs are fused (fusion is when the Xs are joined together such that there is no boundary);
(e) VIPA (van Inwagen's Proposed Answer): the Xs compose a complex Y if and only if either the activities of the Xs constitute a life or there is only one of the Xs;[22] and
(f) Brutal Composition: "It's just the way things are." There is no true, nontrivial, and finitely long answer.[23]
This is not an exhaustive list, as many more hypotheses continue to be explored. However, a common problem with these theories is that they are vague. It remains unclear what "fastened" or "life" mean, for example. There are also many other issues within the restricted composition responses, though many of them depend on which theory is being discussed.[17]

171.8 Important surveys


The books by Simons (1987) and Casati and Varzi (1999) differ in their strengths:

Simons (1987) sees mereology primarily as a way of formalizing ontology and metaphysics. His strengths
include the connections between mereology and:

The work of Stanisław Leśniewski and his descendants


Various continental philosophers, especially Edmund Husserl
Contemporary English-speaking technical philosophers such as Kit Fine and Roderick Chisholm
Recent work on formal ontology and metaphysics, including continuants, occurrents, class nouns, mass
nouns, and ontological dependence and integrity
Free logic as a background logic
Extending mereology with tense logic and modal logic
Boolean algebras and lattice theory.

Casati and Varzi (1999) see mereology primarily as a way of understanding the material world and how humans
interact with it. Their strengths include the connections between mereology and:

A proto-geometry for physical objects


Topology and mereotopology, especially boundaries, regions, and holes

A formal theory of events


Theoretical computer science
The writings of Alfred North Whitehead, especially his Process and Reality and work descended therefrom.[24]

Simons devotes considerable effort to elucidating historical notations. The notation of Casati and Varzi is often used.
Both books include excellent bibliographies. To these works should be added Hovda (2008), which presents the latest
state of the art on the axiomatization of mereology.

171.9 See also


Attitude polarization

Gunk (mereology)

Implicate and explicate order according to David Bohm

Laws of Form by G. Spencer Brown

Mereological essentialism

Mereological nihilism

Mereotopology

Meronomy

Meronymy

Monad (Greek philosophy)

Plural quantication

Quantier variance

Simple (philosophy)

Whiteheads point-free geometry

171.10 References
[1] Dunne, John D., 2004. Foundations of Dharmakirtis Philosophy. Wisdom Publications.

[2] Mereological constitution. Stanford Encyclopedia of Philosophy.

[3] Rea, Michael (1995). The Problem of Material Constitution. The Philosophical Review. 104.4: 525552.

[4] Ney, Alyssa (2014). Metaphysics: An Introduction. Routledge.

[5] In Theodore Sider, John Hawthorne & Dean W. Zimmerman (eds.), Contemporary Debates in Metaphysics. Blackwell Pub.
241-262 (2007).

[6] Healey, Richard; Uffink, Jos (2013). Part and Whole in Physics: An Introduction. Studies in History and Philosophy of Science Part B. 44.1: 20-21.

[7] Healey, Richard (2013). Physical Composition. Studies in History and Philosophy of Science Part B. 44.1: 4862.

[8] Kadanoff, Leo (2013). Relating Theories Via Renormalization. Studies in History and Philosophy of Science Part B. 44.1: 22-39.

[9] Ghirardi, GianCarlo (2013). The Parts and the Whole: Collapse Theories and Systems with Identical Constituents.
Studies in History and Philosophy of Science Part B. 44.1: 4047.

[10] Schaffer, Jonathan (2010). Monism: The Priority of the Whole. Philosophical Review. 119.1: 31-76.

[11] Cameron, Ross (2014). Parts Generate the Whole but they are not Identical to it. Oxford University Press.

[12] Loss, Roberto (2016). Parts Ground the Whole and are Identical to it. Australasian Journal of Philosophy. 94.3.

[13] Cotnoir, Aaron (2014). Composition as Identity: Framing the Debate. Oxford University Press.

[14] Sider, Ted (2015). Nothing Over and Above. Grazer Philosophische Studien. 91: 191216.

[15] Wallace, Megan (2011). Composition as Identity: Pt. I & II. Philosophy Compass. 6.11: 804827.

[16] James van Cleve (2008). The Moon and Sixpence: A Defense of Mereological Universalism. In Sider, Ted. Contempo-
rary Debates in Metaphysics. Blackwell Publishing.

[17] Ned Markosian (2008). Restricted Composition. In Sider, Ted. Contemporary Debates in Metaphysics. Blackwell
Publishing. pp. 341363.

[18] McDaniel, Kris (2010). Parts and Wholes. Philosophy Compass. 5.5: 412425.

[19] Korman, Daniel; Carmichael, Chad (2016). Composition (Draft: 9/29/15)". Oxford Handbooks Online.

[20] Varzi, Achille. Mereology.

[21] Sider, Ted (2013). Against Parthood. Oxford Studies in Metaphysics. 8: 237293.

[22] van Inwagen, Peter (1990). Material Beings. Cornell University Press.

[23] Markosian, Ned (1998). Brutal Composition. Philosophical Studies. 92: 211249.

[24] Cf. Peter Simons, Whitehead and Mereology, in Guillaume Durand et Michel Weber (diteurs), Les principes de la
connaissance naturelle dAlfred North Whitehead Alfred North Whiteheads Principles of Natural Knowledge, Frankfurt
/ Paris / Lancaster, ontos verlag, 2007. See also the relevant entries of Michel Weber and Will Desmond, (eds.), Handbook
of Whiteheadian Process Thought, Frankfurt / Lancaster, ontos verlag, Process Thought X1 & X2, 2008.

Bowden, Keith, 1991. Hierarchical Tearing: An Efficient Holographic Algorithm for System Decomposition, Int. J. General Systems, Vol. 24(1), pp 23-38.
Bowden, Keith, 1998. Huygens Principle, Physics and Computers. Int. J. General Systems, Vol. 27(1-3),pp.
932.
Bunt, Harry, 1985. Mass terms and model-theoretic semantics. Cambridge Univ. Press.
Burgess, John, and Rosen, Gideon, 1997. A Subject with No Object. Oxford Univ. Press.
Burkhardt, H., and Dufour, C.A., 1991, Part/Whole I: History in Burkhardt, H., and Smith, B., eds., Hand-
book of Metaphysics and Ontology. Muenchen: Philosophia Verlag.
Casati, R., and Varzi, A., 1999. Parts and Places: the structures of spatial representation. MIT Press.
Eberle, Rolf, 1970. Nominalistic Systems. Kluwer.
Etter, Tom, 1996. Quantum Mechanics as a Branch of Mereology in Tooli T., et al., PHYSCOMP96, Proceed-
ings of the Fourth Workshop on Physics and Computation, New England Complex Systems Institute.
Etter, Tom, 1998. Process, System, Causality and Quantum Mechanics. SLAC-PUB-7890, Stanford Linear
Accelerator Centre.
Forrest, Peter, 2002, "Nonclassical mereology and its application to sets", Notre Dame Journal of Formal Logic
43: 79-94.
Goodman, Nelson, 1977 (1951). The Structure of Appearance. Kluwer.
Goodman, Nelson, and Quine, Willard, 1947, Steps toward a constructive nominalism, Journal of Symbolic
Logic 12: 97-122.
Gruszczynski R., and Pietruszczak A., 2008, "Full development of Tarski's geometry of solids", Bulletin of Symbolic Logic 14: 481-540. A system of geometry based on Leśniewski's mereology, with basic properties of mereological structures.
Hovda, Paul, 2008, "What is classical mereology?" Journal of Philosophical Logic 38(1): 55-82.

Husserl, Edmund, 1970. Logical Investigations, Vol. 2. Findlay, J.N., trans. Routledge.

Kron, Gabriel, 1963, Diakoptics: The Piecewise Solution of Large Scale Systems. Macdonald, London.
Lewis, David K., 1991. Parts of Classes. Blackwell.

Leonard, H.S., and Goodman, Nelson, 1940, The calculus of individuals and its uses, Journal of Symbolic
Logic 5: 4555.

Leśniewski, Stanisław, 1992. Collected Works. Surma, S.J., Srzednicki, J.T., Barnett, D.I., and Rickey, V.F., editors and translators. Kluwer.

Lucas, J. R., 2000. Conceptual Roots of Mathematics. Routledge. Chpts. 9.12 and 10 discuss mereology, mereotopology, and the related theories of A.N. Whitehead, all strongly influenced by the unpublished writings of David Bostock.
Mesarovic, M.D., Macko, D., and Takahara, Y., 1970, Theory of Multilevel, Hierarchical Systems. Aca-
demic Press.

Nicolas, David, 2008, "Mass nouns and plural logic", Linguistics and Philosophy 31(2): 21144.
Pietruszczak A., 1996, "Mereological sets of distributive classes", Logic and Logical Philosophy 4: 105-22.
Constructs, using mereology, mathematical entities from set theoretical classes.
Pietruszczak A., 2005, "Pieces of mereology", Logic and Logical Philosophy 14: 211-34. Basic mathematical properties of Leśniewski's mereology.
Potter, Michael, 2004. Set Theory and Its Philosophy. Oxford Univ. Press.

Simons, Peter, 1987 (reprinted 2000). Parts: A Study in Ontology. Oxford Univ. Press.
Srzednicki, J. T. J., and Rickey, V. F., eds., 1984. Leśniewski's Systems: Ontology and Mereology. Kluwer.

Tarski, Alfred, 1984 (1956), Foundations of the Geometry of Solids in his Logic, Semantics, Metamathemat-
ics: Papers 192338. Woodger, J., and Corcoran, J., eds. and trans. Hackett.

Varzi, Achille C., 2007, "Spatial Reasoning and Ontology: Parts, Wholes, and Locations" in Aiello, M. et al.,
eds., Handbook of Spatial Logics. Springer-Verlag: 945-1038.

Whitehead, A.N., 1916, La Theorie Relationiste de l'Espace, Revue de Metaphysique et de Morale 23: 423-
454. Translated as Hurley, P.J., 1979, The relational theory of space, Philosophy Research Archives 5: 712-
741.
------, 1919. An Enquiry Concerning the Principles of Natural Knowledge. Cambridge Univ. Press. 2nd ed.,
1925.
------, 1920. The Concept of Nature. Cambridge Univ. Press. 2004 paperback, Prometheus Books. Being the
1919 Tarner Lectures delivered at Trinity College, Cambridge.
------, 1978 (1929). Process and Reality. Free Press.

Woodger, J. H., 1937. The Axiomatic Method in Biology. Cambridge Univ. Press.

171.11 External links


Stanford Encyclopedia of Philosophy:
"Mereology" Achille Varzi.
"Boundary" Achille Varzi.
Chapter 172

Modal algebra

In algebra and logic, a modal algebra is a structure ⟨A, ∧, ∨, ¬, 0, 1, □⟩ such that

⟨A, ∧, ∨, ¬, 0, 1⟩ is a Boolean algebra,

□ is a unary operation on A satisfying □1 = 1 and □(x ∧ y) = □x ∧ □y for all x, y in A.
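A concrete way to see the definition is to build the complex algebra of a small Kripke frame: the power set of the worlds is a Boolean algebra, and □X = {w : every successor of w lies in X} satisfies both identities. The following minimal Python sketch uses an invented three-world frame purely for illustration:

from itertools import combinations

worlds = {0, 1, 2}
R = {(0, 1), (1, 2), (2, 2)}                       # accessibility relation (invented)

def box(X):
    # worlds all of whose R-successors lie in X
    return frozenset(w for w in worlds if all(v in X for (u, v) in R if u == w))

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

one = frozenset(worlds)
assert box(one) == one                             # box 1 = 1
for x in subsets(worlds):
    for y in subsets(worlds):
        assert box(x & y) == box(x) & box(y)       # box(x and y) = box(x) and box(y)
print("modal algebra identities hold for this frame")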

Modal algebras provide models of propositional modal logics in the same way as Boolean algebras are models of
classical logic. In particular, the variety of all modal algebras is the equivalent algebraic semantics of the modal logic
K in the sense of abstract algebraic logic, and the lattice of its subvarieties is dually isomorphic to the lattice of normal
modal logics.
Stone's representation theorem can be generalized to the Jónsson-Tarski duality, which ensures that each modal algebra can be represented as the algebra of admissible sets in a modal general frame.

172.1 See also


interior algebra

Heyting algebra

172.2 References
A. Chagrov and M. Zakharyaschev, Modal Logic, Oxford Logic Guides vol. 35, Oxford University Press, 1997.
ISBN 0-19-853779-4

Chapter 173

Modal operator

A modal connective (or modal operator) is a logical connective for modal logic. It is an operator which forms
propositions from propositions. In general, a modal operator has the formal property of being non-truth-functional,
and is intuitively characterized by expressing a modal attitude (such as necessity, possibility, belief, or knowledge)
about the proposition to which the operator is applied.

173.1 Modality interpreted


There are several ways to interpret modal operators in modal logic, including: alethic, deontic, axiological, epistemic,
and doxastic.

173.1.1 Alethic
Alethic modal operators (M-operators) determine the fundamental conditions of possible worlds, especially causality, time-space parameters, and the action capacity of persons. They indicate the possibility, impossibility and necessity of actions, states of affairs, events, people, and qualities in the possible worlds.

173.1.2 Deontic
Deontic modal operators (P-operators) influence the construction of possible worlds as proscriptive or prescriptive
norms, i.e. they indicate what is prohibited, obligatory, or permitted.

173.1.3 Axiological
Axiological modal operators (G-operators) transform the world's entities into values and disvalues as seen by a social
group, a culture, or a historical period. Axiological modalities are highly subjective categories: what is good for one
person may be considered as bad by another one.

173.1.4 Epistemic
Epistemic modal operators (K-operators) reflect the level of knowledge, ignorance and belief in the possible world.

173.1.5 Doxastic
Doxastic modal operators express belief in statements.

Chapter 174

Modus non excipiens

In logic, modus non excipiens[1][2] is a valid rule of inference that is closely related to modus ponens. This argument
form was created by Bart Verheij to address certain arguments which are types of modus ponens arguments, but must
be considered to be invalid. An instance of a particular modus ponens type argument is

A large majority accept A as true. Therefore, there exists a presumption in favor of A.

However, this is an argumentum ad populum, and is not deductively valid. The problem can be addressed by drawing
a distinction between two types of inference identied by Verheij:
Modus ponens:

Premises:
As a rule, if P then Q
P

Conclusion:
Q

and
Modus non excipiens

Premises:
As a rule, if P then Q
P
It is not the case that there is an exception to the rule that if P then Q

Conclusion:
Q

[1] Bart Verheij, Logic, Context and Valid Inference Or: Can There be a Logic of Law, 2000. Available on bart.verheij@metajur.unimaas.nl,

[2] Walton, Douglas; Are Some Modus Ponens Arguments Deductively Invalid?, University of Winnipeg

Chapter 175

Modus ponendo tollens

Modus ponendo tollens (Latin: "mode that by affirming, denies")[1] is a valid rule of inference for propositional logic,
sometimes abbreviated MPT.[2] It is closely related to modus ponens and modus tollens. It is usually described as
having the form:

1. Not both A and B


2. A
3. Therefore, not B

For example:

1. Ann and Bill cannot both win the race.


2. Ann won the race.
3. Therefore, Bill cannot have won the race.

As E.J. Lemmon describes it: "Modus ponendo tollens is the principle that, if the negation of a conjunction holds and also one of its conjuncts, then the negation of its other conjunct holds."[3]
In logic notation this can be represented as:

1. ¬(A ∧ B)
2. A
3. ∴ ¬B

Based on the Sheffer stroke (alternative denial), "|", the inference can also be formalized in this way:

1. A | B
2. A
3. ∴ ¬B
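The validity of the schema can be confirmed by brute force over the four truth-value assignments, as in this minimal Python sketch:

# In every row where both premises hold, the conclusion holds as well.
for A in (False, True):
    for B in (False, True):
        premises = (not (A and B)) and A
        if premises:
            assert not B
print("modus ponendo tollens is truth-table valid")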

175.1 References
[1] Stone, Jon R. 1996. Latin for the Illiterati: Exorcizing the Ghosts of a Dead Language. London, UK: Routledge:60.

[2] Politzer, Guy & Carles, Laure. 2001. 'Belief Revision and Uncertain Reasoning'. Thinking and Reasoning. 7:217-234.

[3] Lemmon, Edward John. 2001. Beginning Logic. Taylor and Francis/CRC Press: 61.

Chapter 176

Modus ponens

In propositional logic, modus ponendo ponens (Latin for "the way that affirms by affirming"; generally abbreviated to MP or modus ponens[1] ) or implication elimination is a rule of inference.[2] It can be summarized as "P implies Q and P are both asserted to be true, so therefore Q must be true." The history of modus ponens goes back to antiquity.[3]
Modus ponens is closely related to another valid form of argument, modus tollens. Both have apparently similar
but invalid forms such as affirming the consequent, denying the antecedent, and evidence of absence. Constructive
dilemma is the disjunctive version of modus ponens. Hypothetical syllogism is closely related to modus ponens and
sometimes thought of as double modus ponens.

176.1 Formal notation


The modus ponens rule may be written in sequent notation:

P → Q, P ⊢ Q

where ⊢ is a metalogical symbol meaning that Q is a syntactic consequence of P → Q and P in some logical system;
or as the statement of a truth-functional tautology or theorem of propositional logic:

((P → Q) ∧ P) → Q

where P and Q are propositions expressed in some formal system.

176.2 Explanation
The argument form has two premises (hypotheses). The first premise is the if-then or conditional claim, namely that P implies Q. The second premise is that P, the antecedent of the conditional claim, is true. From these two
premises it can be logically concluded that Q, the consequent of the conditional claim, must be true as well. In
articial intelligence, modus ponens is often called forward chaining.
An example of an argument that fits the form modus ponens:

If today is Tuesday, then John will go to work.


Today is Tuesday.
Therefore, John will go to work.

This argument is valid, but this has no bearing on whether any of the statements in the argument are true; for modus
ponens to be a sound argument, the premises must be true for any true instances of the conclusion. An argument

can be valid but nonetheless unsound if one or more premises are false; if an argument is valid and all the premises
are true, then the argument is sound. For example, John might be going to work on Wednesday. In this case, the
reasoning for John's going to work (because it is Wednesday) is unsound. The argument is not only sound on Tuesdays
(when John goes to work), but valid on every day of the week. A propositional argument using modus ponens is said
to be deductive.
In single-conclusion sequent calculi, modus ponens is the Cut rule. The cut-elimination theorem for a calculus says
that every proof involving Cut can be transformed (generally, by a constructive method) into a proof without Cut,
and hence that Cut is admissible.
The Curry-Howard correspondence between proofs and programs relates modus ponens to function application: if f is a function of type P → Q and x is of type P, then f x is of type Q.
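A hedged illustration of that reading uses Python type hints with placeholder classes P and Q (the names are purely illustrative): a value of type Callable[[P], Q] plays the role of a proof of P → Q, and applying it to a value of type P yields a value of type Q.

from typing import Callable

class P: ...
class Q: ...

def detach(f: Callable[[P], Q], x: P) -> Q:
    # "f x is of type Q": function application mirrors modus ponens.
    return f(x)

q: Q = detach(lambda p: Q(), P())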

176.3 Justification via truth table


The validity of modus ponens in classical two-valued logic can be clearly demonstrated by use of a truth table.
In instances of modus ponens we assume as premises that p → q is true and p is true. Only one line of the truth table, the first, satisfies these two conditions (p and p → q). On this line, q is also true. Therefore, whenever p → q is true and p is true, q must also be true.
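The same check can be run mechanically; this small Python sketch enumerates all four valuations and confirms that the single row satisfying both premises also satisfies the conclusion:

def implies(p, q):
    return (not p) or q

rows = [(p, q) for p in (True, False) for q in (True, False)]
satisfying = [(p, q) for (p, q) in rows if implies(p, q) and p]
print(satisfying)                       # [(True, True)]: exactly one row
assert all(q for (_, q) in satisfying)  # and q is true on it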

176.4 Status
While modus ponens is one of the most commonly used argument forms in logic it must not be mistaken for a
logical law; rather, it is one of the accepted mechanisms for the construction of deductive proofs that includes the
rule of definition and the rule of substitution.[4] Modus ponens allows one to eliminate a conditional statement
from a logical proof or argument (the antecedents) and thereby not carry these antecedents forward in an ever-
lengthening string of symbols; for this reason modus ponens is sometimes called the rule of detachment[5] or the
law of detachment.[6] Enderton, for example, observes that modus ponens can produce shorter formulas from longer
ones,[7] and Russell observes that the process of the inference cannot be reduced to symbols. Its sole record is the
occurrence of q [the consequent] . . . an inference is the dropping of a true premise; it is the dissolution of an
implication.[8]
A justification for the "trust in inference" is the belief that if the two former assertions [the antecedents] are not in error, the final assertion [the consequent] is not in error.[9] In other words: if one statement or proposition implies a second one, and the first statement or proposition is true, then the second one is also true. If P implies Q and P is true, then Q is true.[10]

176.5 Correspondence to other mathematical frameworks

176.5.1 Probability calculus

Modus ponens represents an instance of the Law of total probability, which for a binary variable is expressed as:

Pr(Q) = Pr(Q | P) Pr(P) + Pr(Q | ¬P) Pr(¬P),

where e.g. Pr(Q) denotes the probability of Q and the conditional probability Pr(Q | P) generalizes the logical implication P → Q. Assume that Pr(Q) = 1 is equivalent to Q being TRUE, and that Pr(Q) = 0 is equivalent to Q being FALSE. It is then easy to see that Pr(Q) = 1 when Pr(Q | P) = 1 and Pr(P) = 1. Hence, the law of total probability represents a generalization of modus ponens.[11]
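A numeric sketch of this reading (the probability values are arbitrary illustrative numbers, not taken from the cited source):

def total_probability(q_given_p, q_given_not_p, p):
    # Pr(Q) = Pr(Q|P)Pr(P) + Pr(Q|not P)Pr(not P)
    return q_given_p * p + q_given_not_p * (1 - p)

# Degenerate case recovering modus ponens: Pr(Q|P) = 1 and Pr(P) = 1 force Pr(Q) = 1.
assert total_probability(1.0, 0.3, 1.0) == 1.0

# A graded case with uncertain premises yields an intermediate Pr(Q).
print(total_probability(0.9, 0.2, 0.7))   # 0.69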

176.5.2 Subjective logic

Modus ponens represents an instance of the binomial deduction operator in subjective logic expressed as:
ω^A_{Q‖P} = (ω^A_{Q|P}, ω^A_{Q|¬P}) ⊛ ω^A_P,

where ω^A_P denotes the subjective opinion about P as expressed by source A, and the conditional opinion ω^A_{Q|P} generalizes the logical implication P → Q. The deduced marginal opinion about Q is denoted by ω^A_{Q‖P}. The case where ω^A_P is an absolute TRUE opinion about P is equivalent to source A saying that P is TRUE, and the case where ω^A_P is an absolute FALSE opinion about P is equivalent to source A saying that P is FALSE. The deduction operator ⊛ of subjective logic produces an absolute TRUE deduced opinion ω^A_{Q‖P} when the conditional opinion ω^A_{Q|P} is absolute TRUE and the antecedent opinion ω^A_P is absolute TRUE. Hence, subjective logic deduction represents a generalization of both modus ponens and the Law of total probability.[12]

176.6 Possible fallacies


The fallacy of affirming the consequent is a common misinterpretation of the modus ponens.

176.7 See also


Condensed detachment

What the Tortoise Said to Achilles

176.8 References
[1] Stone, Jon R. (1996). Latin for the Illiterati: Exorcizing the Ghosts of a Dead Language. London, UK: Routledge: 60.

[2] Enderton 2001:110

[3] Susanne Bobzien (2002). The Development of Modus Ponens in Antiquity, Phronesis 47, No. 4, 2002.

[4] Alfred Tarski 1946:47. Also Enderton 2001:110.

[5] Tarski 1946:47

[6] https://www.encyclopediaofmath.org/index.php/Modus_ponens

[7] Enderton 2001:111

[8] Whitehead and Russell 1927:9

[9] Whitehead and Russell 1927:9

[10] Jago, Mark (2007). Formal Logic. Humanities-Ebooks LLP. ISBN 978-1-84760-041-7.

[11] Audun Jøsang 2016: 2

[12] Audun Jøsang 2016: 92

176.9 Sources
Herbert B. Enderton, 2001, A Mathematical Introduction to Logic Second Edition, Harcourt Academic Press,
Burlington MA, ISBN 978-0-12-238452-3.

Audun Jøsang, 2016, Subjective Logic: A Formalism for Reasoning Under Uncertainty. Springer, Cham, ISBN 978-3-319-42337-1

Alfred North Whitehead and Bertrand Russell 1927 Principia Mathematica to *56 (Second Edition) paperback
edition 1962, Cambridge at the University Press, London UK. No ISBN, no LCCCN.

Alfred Tarski 1946 Introduction to Logic and to the Methodology of the Deductive Sciences 2nd Edition, reprinted
by Dover Publications, Mineola NY. ISBN 0-486-28462-X (pbk).

176.10 External links


Hazewinkel, Michiel, ed. (2001) [1994], Modus ponens, Encyclopedia of Mathematics, Springer Science+Business
Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Modus ponens at PhilPapers

Modus ponens at Wolfram MathWorld


Chapter 177

Modus tollens

In propositional logic, modus tollens[1][2][3][4] (or modus tollendo tollens, and also denying the consequent)[5] (Latin for "the way that denies by denying")[6] is a valid argument form and a rule of inference. It is an application of the general truth that if a statement is true, then so is its contrapositive.
The first to explicitly describe the argument form modus tollens were the Stoics.[7]
The inference rule modus tollens validates the inference from P implies Q and the contradictory of Q to the contra-
dictory of P .
The modus tollens rule can be stated formally as:

P → Q, ¬Q
∴ ¬P

where P → Q stands for the statement "P implies Q". ¬Q stands for "it is not the case that Q" (or in brief "not Q").
Then, whenever "P → Q" and "¬Q" each appear by themselves as a line of a proof, then "¬P" can validly be
placed on a subsequent line. The history of the inference rule modus tollens goes back to antiquity.[8]
Modus tollens is closely related to modus ponens. There are two similar, but invalid, forms of argument: affirming the
consequent and denying the antecedent. See also contraposition and proof by contrapositive.

177.1 Formal notation


The modus tollens rule may be written in sequent notation:

P → Q, ¬Q ⊢ ¬P

where ⊢ is a metalogical symbol meaning that ¬P is a syntactic consequence of P → Q and ¬Q in some logical system;
or as the statement of a truth-functional tautology or theorem of propositional logic:

((P → Q) ∧ ¬Q) → ¬P

where P and Q are propositions expressed in some formal system;


or including assumptions:

Γ ⊢ P → Q,    Γ ⊢ ¬Q
∴ Γ ⊢ ¬P
though since the rule does not change the set of assumptions, this is not strictly necessary.


More complex rewritings involving modus tollens are often seen, for instance in set theory:

P ⊆ Q
x ∉ Q
∴ x ∉ P
(P is a subset of Q. x is not in Q. Therefore, x is not in P.)
Also in first-order predicate logic:

∀x: P(x) → Q(x)
∃x: ¬Q(x)
∴ ∃x: ¬P(x)
(For all x if x is P then x is Q. There exists some x that is not Q. Therefore, there exists some x that is not P.)
Strictly speaking these are not instances of modus tollens, but they may be derived from modus tollens using a few
extra steps.
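The set-theoretic rewriting above is easy to exercise directly; in this minimal Python sketch the particular sets and element are invented for illustration:

P = {1, 2, 3}
Q = {1, 2, 3, 4, 5}
assert P <= Q            # P is a subset of Q

x = 7
assert x not in Q        # x is not in Q
assert x not in P        # ... therefore x is not in P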

177.2 Explanation
Requirements:

1. The argument has two premises.


2. The first premise is a conditional or if-then statement, for example that if P then Q.
3. The second premise is that it is not the case that Q.
4. From these two premises, it can be logically concluded that it is not the case that P.

Consider an example:

If the watch-dog detects an intruder, the watch-dog will bark.


The watch-dog did not bark.
Therefore, no intruder was detected by the watch-dog.

Supposing that the premises are both true (the dog will bark if it detects an intruder, and does indeed not bark), it
follows that no intruder has been detected. This is a valid argument since it is not possible for the conclusion to be
false if the premises are true. (It is conceivable that there may have been an intruder that the dog did not detect,
but that does not invalidate the argument; the rst premise is if the watch-dog detects an intruder. The thing of
importance is that the dog detects or does not detect an intruder, not whether there is one.)
Another example:

If I am the axe murderer, then I can use an axe.


I cannot use an axe.
Therefore, I am not the axe murderer.

Another example:

If Rex is a chicken, then he is a bird.


Rex is not a bird.
Therefore, Rex is not a chicken.

177.3 Relation to modus ponens


Every use of modus tollens can be converted to a use of modus ponens and one use of transposition to the premise
which is a material implication. For example:

If P, then Q. (premise material implication)


If not Q , then not P. (derived by transposition)
Not Q . (premise)
Therefore, not P. (derived by modus ponens)

Likewise, every use of modus ponens can be converted to a use of modus tollens and transposition.

177.4 Justification via truth table


The validity of modus tollens can be clearly demonstrated through a truth table.
In instances of modus tollens we assume as premises that p → q is true and q is false. There is only one line of the truth table, the fourth line, which satisfies these two conditions. In this line, p is false. Therefore, in every instance in which p → q is true and q is false, p must also be false.
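Mechanically, the same verification looks like this in a minimal Python sketch:

def implies(p, q):
    return (not p) or q

satisfying = [(p, q) for p in (True, False) for q in (True, False)
              if implies(p, q) and not q]
print(satisfying)                          # [(False, False)]: the fourth row only
assert all(not p for (p, _) in satisfying)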

177.5 Formal proof

177.5.1 Via disjunctive syllogism

177.5.2 Via reductio ad absurdum

177.5.3 Via contraposition

177.6 Correspondence to other mathematical frameworks

177.6.1 Probability calculus


Modus tollens represents an instance of the Law of total probability combined with Bayes' theorem expressed as:

Pr(P) = Pr(P | Q) Pr(Q) + Pr(P | ¬Q) Pr(¬Q),

where the conditionals Pr(P | Q) and Pr(P | ¬Q) are obtained with (the extended form of) Bayes' theorem expressed as:

Pr(P | Q) = Pr(Q | P) a(P) / [Pr(Q | P) a(P) + Pr(Q | ¬P) a(¬P)]   and   Pr(P | ¬Q) = Pr(¬Q | P) a(P) / [Pr(¬Q | P) a(P) + Pr(¬Q | ¬P) a(¬P)].

In the equations above Pr(Q) denotes the probability of Q, and a(P) denotes the base rate (aka. prior probability) of P. The conditional probability Pr(Q | P) generalizes the logical statement P → Q, i.e. in addition to assigning TRUE or FALSE we can also assign any probability to the statement. Assume that Pr(Q) = 1 is equivalent to Q being TRUE, and that Pr(Q) = 0 is equivalent to Q being FALSE. It is then easy to see that Pr(P) = 0 when Pr(Q | P) = 1 and Pr(Q) = 0. This is because Pr(Q | P) = 1 implies Pr(¬Q | P) = 0, so that Pr(P | ¬Q) = 0 in the last equation. Therefore, the product terms in the first equation always have a zero factor, so that Pr(P) = 0, which is equivalent to P being FALSE. Hence, the law of total probability combined with Bayes' theorem represents a generalization of modus tollens.[9]
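A numeric sketch of the degenerate case (the base rate and the value of Pr(Q | ¬P) are arbitrary illustrative numbers; the helper name is invented):

def pr_p_given(q_given_p, q_given_not_p, a_p, observed_q):
    # Pr(P|Q) if observed_q else Pr(P|not Q), via the extended Bayes' theorem above.
    if observed_q:
        num = q_given_p * a_p
        den = q_given_p * a_p + q_given_not_p * (1 - a_p)
    else:
        num = (1 - q_given_p) * a_p
        den = (1 - q_given_p) * a_p + (1 - q_given_not_p) * (1 - a_p)
    return num / den

# Pr(Q|P) = 1 (the implication holds) and Q is observed to be FALSE:
# then Pr(P | not Q) = 0, so Pr(P) = 0, recovering modus tollens.
assert pr_p_given(q_given_p=1.0, q_given_not_p=0.4, a_p=0.5, observed_q=False) == 0.0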

177.6.2 Subjective logic


Modus tollens represents an instance of the abduction operator in subjective logic expressed as:

ω^A_{P‖Q} = (ω^A_{Q|P}, ω^A_{Q|¬P}) ⊚ (a_P, ω^A_Q),

where ω^A_Q denotes the subjective opinion about Q, and (ω^A_{Q|P}, ω^A_{Q|¬P}) denotes a pair of binomial conditional opinions, as expressed by source A. The parameter a_P denotes the base rate (aka. the prior probability) of P. The abduced marginal opinion on P is denoted ω^A_{P‖Q}. The conditional opinion ω^A_{Q|P} generalizes the logical statement P → Q, i.e. in addition to assigning TRUE or FALSE the source A can assign any subjective opinion to the statement. The case where ω^A_Q is an absolute TRUE opinion is equivalent to source A saying that Q is TRUE, and the case where ω^A_Q is an absolute FALSE opinion is equivalent to source A saying that Q is FALSE. The abduction operator ⊚ of subjective logic produces an absolute FALSE abduced opinion ω^A_{P‖Q} when the conditional opinion ω^A_{Q|P} is absolute TRUE and the consequent opinion ω^A_Q is absolute FALSE. Hence, subjective logic abduction represents a generalization of both modus tollens and of the Law of total probability combined with Bayes' theorem.[10]

177.7 See also


Evidence of absence
Non sequitur

Proof by contradiction
Proof by contrapositive

177.8 Notes
[1] University of North Carolina, Philosophy Department, Logic Glossary. Accessdate on 31 October 2007.

[2] Copi and Cohen

[3] Hurley

[4] Moore and Parker

[5] Sanford, David Hawley (2003). If P, Then Q: Conditionals and the Foundations of Reasoning (2nd ed.). London: Routledge.
p. 39. ISBN 0-415-28368-X. [Modus] tollens is always an abbreviation for modus tollendo tollens, the mood that by denying
denies.

[6] Stone, Jon R. (1996). Latin for the Illiterati: Exorcizing the Ghosts of a Dead Language. London: Routledge. p. 60. ISBN
0-415-91775-1.

[7] Stanford Encyclopedia of Philosophy: Ancient Logic: The Stoics"

[8] Susanne Bobzien (2002). The Development of Modus Ponens in Antiquity, Phronesis 47.

[9] Audun Jøsang 2016: p. 2

[10] Audun Jøsang 2016: p. 92

177.9 Sources
Audun Jøsang, 2016, Subjective Logic: A Formalism for Reasoning Under Uncertainty. Springer, Cham, ISBN 978-3-319-42337-1

177.10 External links


Modus Tollens at Wolfram MathWorld
Chapter 178

Monadic Boolean algebra

In abstract algebra, a monadic Boolean algebra is an algebraic structure A with signature

⟨·, +, ', 0, 1, ∃⟩ of type ⟨2, 2, 1, 0, 0, 1⟩,

where ⟨A, ·, +, ', 0, 1⟩ is a Boolean algebra.


The monadic/unary operator ∃ denotes the existential quantifier, which satisfies the identities (using the received prefix notation for ∃):

∃0 = 0
∃x ≥ x
∃(x + y) = ∃x + ∃y
∃x∃y = ∃(x∃y).

∃x is the existential closure of x. Dual to ∃ is the unary operator ∀, the universal quantifier, defined as ∀x := (∃(x'))'.
A monadic Boolean algebra has a dual definition and notation that take ∀ as primitive and ∃ as defined, so that ∃x := (∀(x'))'. (Compare this with the definition of the dual Boolean algebra.) Hence, with this notation, an algebra A has signature ⟨·, +, ', 0, 1, ∀⟩, with ⟨A, ·, +, ', 0, 1⟩ a Boolean algebra, as before. Moreover, ∀ satisfies the following dualized version of the above identities:

1. ∀1 = 1
2. ∀x ≤ x
3. ∀(xy) = ∀x∀y
4. ∀x + ∀y = ∀(x + ∀y).

∀x is the universal closure of x.
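As a concrete sketch (the universe {0,…,7} and the partition used here are invented for illustration, not taken from the text): on the power-set Boolean algebra of a small finite set, encoded as bit masks in C, the map sending x to the union of the blocks of a fixed partition that meet x is a monadic operator, and the four identities for ∃ listed above can be checked by brute force.

#include <stdio.h>
#include <stdint.h>

/* Power set of {0,...,7} as 8-bit masks; a fixed partition into blocks.
   E(x) = union of all blocks that meet x is an existential operator. */
static const uint8_t blocks[] = { 0x03, 0x0C, 0xF0 };   /* {0,1},{2,3},{4..7} */
#define NBLOCKS (sizeof blocks / sizeof blocks[0])

static uint8_t E(uint8_t x) {               /* existential closure of x */
    uint8_t r = 0;
    for (size_t i = 0; i < NBLOCKS; i++)
        if (blocks[i] & x) r |= blocks[i];  /* keep blocks that meet x */
    return r;
}

int main(void) {
    int ok = 1;
    for (int x = 0; x < 256; x++) {
        ok &= (E(0) == 0);                          /* E0 = 0        */
        ok &= ((x & E(x)) == x);                    /* x <= Ex       */
        for (int y = 0; y < 256; y++) {
            ok &= (E(x | y) == (E(x) | E(y)));      /* E(x+y)=Ex+Ey  */
            ok &= ((E(x) & E(y)) == E(x & E(y)));   /* ExEy = E(xEy) */
        }
    }
    printf("all monadic identities hold: %s\n", ok ? "yes" : "no");
    return 0;
}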

178.1 Discussion
Monadic Boolean algebras have an important connection to topology. If ∀ is interpreted as the interior operator of topology, (1)-(3) above plus the axiom ∀(∀x) = ∀x make up the axioms for an interior algebra. But ∀(∀x) = ∀x can be proved from (1)-(4). Moreover, an alternative axiomatization of monadic Boolean algebras consists of the (reinterpreted) axioms for an interior algebra, plus ∀(∀x)' = (∀x)' (Halmos 1962: 22). Hence monadic Boolean algebras are the semisimple interior/closure algebras such that:

The universal (dually, existential) quantier interprets the interior (closure) operator;


All open (or closed) elements are also clopen.

A more concise axiomatization of monadic Boolean algebra is (1) and (2) above, plus ∀(x∀y) = ∀x∀y (Halmos 1962: 21). This axiomatization obscures the connection to topology.
Monadic Boolean algebras form a variety. They are to monadic predicate logic what Boolean algebras are to propositional
logic, and what polyadic algebras are to rst-order logic. Paul Halmos discovered monadic Boolean algebras while
working on polyadic algebras; Halmos (1962) reprints the relevant papers. Halmos and Givant (1998) includes an
undergraduate treatment of monadic Boolean algebra.
Monadic Boolean algebras also have an important connection to modal logic. The modal logic S5, viewed as a
theory in S4, is a model of monadic Boolean algebras in the same way that S4 is a model of interior algebra. Like-
wise, monadic Boolean algebras supply the algebraic semantics for S5. Hence S5-algebra is a synonym for monadic
Boolean algebra.

178.2 See also


clopen set
interior algebra

Kuratowski closure axioms


ukasiewiczMoisil algebra

modal logic
monadic logic

178.3 References
Paul Halmos, 1962. Algebraic Logic. New York: Chelsea.

------ and Steven Givant, 1998. Logic as Algebra. Mathematical Association of America.
Chapter 179

Monadic predicate calculus

In logic, the monadic predicate calculus (also called monadic rst-order logic) is the fragment of rst-order logic
in which all relation symbols in the signature are monadic (that is, they take only one argument), and there are no
function symbols. All atomic formulas are thus of the form P (x) , where P is a relation symbol and x is a variable.
Monadic predicate calculus can be contrasted with polyadic predicate calculus, which allows relation symbols that
take two or more arguments.

179.1 Expressiveness
The absence of polyadic relation symbols severely restricts what can be expressed in the monadic predicate calculus.
It is so weak that, unlike the full predicate calculus, it is decidable: there is a decision procedure that determines whether a given formula of monadic predicate calculus is logically valid (true for all nonempty domains).[1][2] Adding
a single binary relation symbol to monadic logic, however, results in an undecidable logic.

179.2 Relationship with term logic


The need to go beyond monadic logic was not appreciated until the work on the logic of relations, by Augustus De
Morgan and Charles Sanders Peirce in the nineteenth century, and by Frege in his 1879 Begrisschrit. Prior to the
work of these three men, term logic (syllogistic logic) was widely considered adequate for formal deductive reasoning.
Inferences in term logic can all be represented in the monadic predicate calculus. For example the syllogism

All dogs are mammals.


No mammal is a bird.
Thus, no dog is a bird.

can be notated in the language of monadic predicate calculus as

(∀x (D(x) → M(x))) ∧ ¬(∃y (M(y) ∧ B(y))) → ¬(∃z (D(z) ∧ B(z)))

where D , M and B denote the predicates of being, respectively, a dog, a mammal, and a bird.
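The decidability mentioned above can be made concrete for this syllogism. Because the formula uses only the monadic predicates D, M, B, every domain element is characterized by which of the three predicates it satisfies, so validity depends only on which of the 2³ = 8 element "types" are inhabited. The following sketch (hypothetical, not from the article) brute-forces all nonempty sets of inhabited types in C.

#include <stdio.h>

/* Each type t in 0..7 encodes which of D, M, B an element satisfies. */
#define D(t) ((t) & 1)
#define M(t) ((t) & 2)
#define B(t) ((t) & 4)

int main(void) {
    int valid = 1;
    for (int types = 1; types < 256; types++) {   /* nonempty sets of inhabited types */
        int all_d_m = 1, some_m_b = 0, some_d_b = 0;
        for (int t = 0; t < 8; t++) {
            if (!(types & (1 << t))) continue;    /* type t not inhabited */
            if (D(t) && !M(t)) all_d_m = 0;       /* violates: all D are M   */
            if (M(t) && B(t))  some_m_b = 1;      /* some M is B             */
            if (D(t) && B(t))  some_d_b = 1;      /* some D is B             */
        }
        /* premises true but conclusion false => formula not valid */
        if (all_d_m && !some_m_b && some_d_b) valid = 0;
    }
    printf("syllogism valid in all interpretations: %s\n", valid ? "yes" : "no");
    return 0;
}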
Conversely, monadic predicate calculus is not signicantly more expressive than term logic. Each formula in the
monadic predicate calculus is equivalent to a formula in which quantiers appear only in closed subformulas of the
form

∃x P1(x) ∧ ⋯ ∧ Pn(x) ∧ ¬P′1(x) ∧ ⋯ ∧ ¬P′m(x)


or

∀x ¬P1(x) ∨ ⋯ ∨ ¬Pn(x) ∨ P′1(x) ∨ ⋯ ∨ P′m(x),

These formulas slightly generalize the basic judgements considered in term logic. For example, this form allows statements such as "Every mammal is either a herbivore or a carnivore (or both)", (∀x M(x) → H(x) ∨ C(x)). Reasoning about such statements can, however, still be handled within the framework of term logic, although not by the 19 classical Aristotelian syllogisms alone.
Taking propositional logic as given, every formula in the monadic predicate calculus expresses something that can
likewise be formulated in term logic. On the other hand, a modern view of the problem of multiple generality in
traditional logic concludes that quantiers cannot nest usefully if there are no polyadic predicates to relate the bound
variables.

179.3 Variants
The formal system described above is sometimes called the pure monadic predicate calculus, where "pure" signifies the absence of function letters. Allowing monadic function letters changes the logic only superficially, whereas admitting even a single binary function letter results in an undecidable logic.
Monadic second-order logic allows predicates of higher arity in formulas, but restricts second-order quantication to
unary predicates, i.e. the only second-order variables allowed are subset variables.

179.4 Footnotes
[1] Heinrich Behmann, Beiträge zur Algebra der Logik, insbesondere zum Entscheidungsproblem, in Mathematische Annalen (1922)

[2] Löwenheim, L. (1915) "Über Möglichkeiten im Relativkalkül", Mathematische Annalen 76: 447-470. Translated as "On possibilities in the calculus of relatives" in Jean van Heijenoort, 1967. A Source Book in Mathematical Logic, 1879-1931. Harvard Univ. Press: 228-51.
Chapter 180

Monotonicity of entailment

Monotonicity of entailment is a property of many logical systems that states that the hypotheses of any derived fact
may be freely extended with additional assumptions. In sequent calculi this property can be captured by an inference
rule called weakening, or sometimes thinning, and in such systems one may say that entailment is monotone if and
only if the rule is admissible. Logical systems with this property are occasionally called monotonic logics in order to
dierentiate them from non-monotonic logics.

180.1 Weakening rule


To illustrate, consider the natural deduction sequent:

Γ ⊢ C

That is, on the basis of a list of assumptions Γ, one can prove C. Weakening, by adding an assumption A, allows one to conclude:

Γ, A ⊢ C

For example, the syllogism All men are mortal. Socrates is a man. Therefore Socrates is mortal. can be weakened
by adding a premise: All men are mortal. Socrates is a man. Cows produce milk. Therefore Socrates is mortal.
The validity of the original conclusion is not changed by the addition of premises.
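For classical propositional logic the rule is easy to check mechanically. The following C sketch (the premises and helper names are invented for illustration) brute-forces all valuations over two atoms and shows that an entailment that holds for Γ still holds after an extra, irrelevant premise is added.

#include <stdio.h>

typedef int (*Formula)(int p, int q);

/* Gamma |= concl: every valuation satisfying all hypotheses satisfies concl. */
static int entails(Formula hyps[], int n, Formula concl) {
    for (int p = 0; p <= 1; p++)
        for (int q = 0; q <= 1; q++) {
            int all = 1;
            for (int i = 0; i < n; i++) all &= hyps[i](p, q);
            if (all && !concl(p, q)) return 0;   /* counter-valuation found */
        }
    return 1;
}

static int imp_pq(int p, int q) { return !p || q; }          /* p -> q */
static int atom_p(int p, int q) { (void)q; return p; }
static int atom_q(int p, int q) { (void)p; return q; }
static int extra (int p, int q) { (void)p; (void)q; return 1; } /* irrelevant premise, e.g. "cows produce milk" */

int main(void) {
    Formula g1[] = { imp_pq, atom_p };          /* p -> q, p        |- q */
    Formula g2[] = { imp_pq, atom_p, extra };   /* weakened sequent |- q */
    printf("Gamma |- q: %d, Gamma,A |- q: %d\n",
           entails(g1, 2, atom_q), entails(g2, 3, atom_q));
    return 0;
}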

180.2 Non-monotonic logics


Main article: Non-monotonic logic

In most logics, weakening is either an inference rule or a metatheorem if the logic doesn't have an explicit rule.
Notable exceptions are:

Strict logic or relevant logic, where every hypothesis must be necessary for the conclusion.
Linear logic which disallows arbitrary contraction in addition to arbitrary weakening.
Bunched implications where weakening is restricted to additive composition.
Various types of default reasoning.
Abductive reasoning, the process of deriving the most likely explanations of the known facts.
Reasoning about knowledge, where statements specifying that something is not known need to be retracted
when that thing is learned.


180.3 See also


Contraction

Exchange rule
Substructural logic

No-cloning theorem
Chapter 181

Near sets

Figure 1. Descriptively, very near sets

In mathematics, near sets are either spatially close or descriptively close. Spatially close sets have nonempty intersection.
In other words, spatially close sets are not disjoint sets, since they always have at least one element in common. De-
scriptively close sets contain elements that have matching descriptions. Such sets can be either disjoint or non-disjoint
sets. Spatially near sets are also descriptively near sets.
The underlying assumption with descriptively close sets is that such sets contain elements that have location and
measurable features such as colour and frequency of occurrence. The description of the element of a set is dened
by a feature vector. Comparison of feature vectors provides a basis for measuring the closeness of descriptively near sets.

Figure 2. Descriptively, minimally near sets

Near set theory provides a formal basis for the observation, comparison, and classification of elements in sets
based on their closeness, either spatially or descriptively. Near sets oer a framework for solving problems based on
human perception that arise in areas such as image processing, computer vision as well as engineering and science
problems.
Near sets have a variety of applications in areas such as topology[37] , pattern detection and classication[50] , abstract al-
gebra[51] , mathematics in computer science[38] , and solving a variety of problems based on human perception[42][82][47][52][56]
that arise in areas such as image analysis[54][14][46][17][18] , image processing[40] , face recognition[13] , ethology[64] , as
well as engineering and science problems[55][64][42][19][17][18] . From the beginning, descriptively near sets have proved
to be useful in applications of topology[37] , and visual pattern recognition [50] , spanning a broad spectrum of applica-
tions that include camouage detection, micropaleontology, handwriting forgery detection, biomedical image analy-
sis, content-based image retrieval, population dynamics, quotient topology, textile design, visual merchandising, and
topological psychology.
As an illustration of the degree of descriptive nearness between two sets, consider an example of the Henry colour
model for varying degrees of nearness between sets of picture elements in pictures (see, e.g.,[17] 4.3). The two pairs
of ovals in Fig. 1 and Fig. 2 contain coloured segments. Each segment in the gures corresponds to an equivalence
class where all pixels in the class have similar descriptions, i.e., picture elements with similar colours. The ovals in
Fig.1 are closer to each other descriptively than the ovals in Fig. 2.

181.1 History
It has been observed that the simple concept of nearness unifies various concepts of topological structures[20] inasmuch as the category Near of all nearness spaces and nearness preserving maps contains the categories sTop (symmetric topological spaces and continuous maps[3]), Prox (proximity spaces and δ-maps[8][67]), Unif (uniform spaces and uniformly continuous maps[81][77]) and Cont (contiguity spaces and contiguity maps[24]) as embedded full subcategories[20][59].
The categories εANear and εAMer are shown to be full supercategories of various well-known categories, including the category sTop of symmetric topological spaces and continuous maps, and the category Met∞ of extended metric spaces and nonexpansive maps. The notation A ↪ B reads "category A is embedded in category B". The categories εAMer and εANear are supercategories for a variety of familiar categories[76] shown in Fig. 3. Let εANear denote the category of all ε-approach nearness spaces and contractions, and let εAMer denote the category of all ε-approach merotopic spaces and contractions.

Figure 3. Supercats

Among these familiar categories is sTop, the symmetric form of Top (see category of topological spaces), the category with objects that are topological spaces and morphisms that are continuous maps between them[1][32]. Met∞, with objects that are extended metric spaces, is a subcategory of εAP (having objects ε-approach spaces and contractions) (see also[57][75]). Let ρX, ρY be extended pseudometrics on nonempty sets X, Y, respectively. The map f : (X, ρX) → (Y, ρY) is a contraction if and only if f : (X, νDρX) → (Y, νDρY) is a contraction. For nonempty subsets A, B ∈ 2^X, the distance function Dρ : 2^X × 2^X → [0, ∞] is defined by

Dρ(A, B) = inf {ρ(a, b) : a ∈ A, b ∈ B}, if A and B are not empty,
Dρ(A, B) = ∞, if A or B is empty.

Thus εAP is embedded as a full subcategory in εANear by the functor F : εAP → εANear defined by F((X, ρ)) = (X, νDρ) and F(f) = f. Since the category Met∞ of extended metric spaces and nonexpansive maps is a full subcategory of εAP, it follows that εANear is also a full supercategory of Met∞. The category εANear is a topological construct[76].

Figure 4. Frigyes Riesz, 1880-1956

The notions of near and far[A] in mathematics can be traced back to works by Johann Benedict Listing and Felix Hausdorff. The related notions of resemblance and similarity can be traced back to J. H. Poincaré, who introduced sets of similar sensations (nascent tolerance classes) to represent the results of G. T. Fechner's sensation sensitivity experiments[10] and a framework for the study of resemblance in representative spaces as models of what he termed physical continua[63][60][61]. The elements of a physical continuum (pc) are sets of sensations. The notion of a pc and various representative spaces (tactile, visual, motor spaces) were introduced by Poincaré in an 1894 article on the mathematical continuum[63], an 1895 article on space and geometry[60] and a compendious 1902 book on science

and hypothesis[61] followed by a number of elaborations, e.g.,[62] . The 1893 and 1895 articles on continua (Pt. 1,
ch. II) as well as representative spaces and geometry (Pt. 2, ch IV) are included as chapters in[61] . Later, F. Riesz
introduced the concept of proximity or nearness of pairs of sets at the International Congress of Mathematicians
(ICM) in 1908[65] .
During the 1960s, E.C. Zeeman introduced tolerance spaces in modelling visual perception[83]. A.B. Sossinsky observed in 1986[71] that the main idea underlying tolerance space theory comes from Poincaré, especially[60]. In 2002, Z. Pawlak and J. Peters[B] considered an informal approach to the perception of the nearness of physical objects such as snowflakes that was not limited to spatial nearness. In 2006, a formal approach to the descriptive nearness of
objects was considered by J. Peters, A. Skowron and J. Stepaniuk[C] in the context of proximity spaces[39][33][35][21] .
In 2007, descriptively near sets were introduced by J. Peters[D][E] followed by the introduction of tolerance near
sets[41][45] . Recently, the study of descriptively near sets has led to algebraic[22][51] , topological and proximity space[37]
foundations of such sets.

181.2 Nearness of sets


The adjective near in the context of near sets is used to denote the fact that observed feature value dierences of
distinct objects are small enough to be considered indistinguishable, i.e., within some tolerance.
The exact idea of closeness or 'resemblance' or of 'being within tolerance' is universal enough to appear, quite naturally,
in almost any mathematical setting (see, e.g.,[66] ). It is especially natural in mathematical applications: practical
problems, more often than not, deal with approximate input data and only require viable results with a tolerable level
of error[71] .
The words near and far are used in daily life and it was an incisive suggestion of F. Riesz[65] that these intuitive
concepts be made rigorous. He introduced the concept of nearness of pairs of sets at the ICM in Rome in 1908. This
concept is useful in simplifying the teaching of calculus and advanced calculus. For example, the passage from an intuitive definition of continuity of a function at a point to its rigorous epsilon-delta definition is sometimes difficult for teachers to explain and for students to understand. Intuitively, continuity can be explained using nearness language, i.e., a function f : R → R is continuous at a point c, provided points {x} near c go into points {f(x)} near f(c). Using Riesz's idea, this definition can be made more precise and its contrapositive is the familiar definition[4][36].

181.3 Generalization of set intersection


From a spatial point of view, nearness (aka proximity) is considered a generalization of set intersection. For disjoint
sets, a form of nearness set intersection is dened in terms of a set of objects (extracted from disjoint sets) that have
similar features within some tolerance (see, e.g., 3 in[80] ). For example, the ovals in Fig. 1 are considered near each
other, since these ovals contain pairs of classes that display similar (visually indistinguishable) colours.

181.4 Efremovi proximity space


Let X denote a metric topological space that is endowed with one or more proximity relations and let 2^X denote the collection of all subsets of X. The collection 2^X is called the power set of X.
There are many ways to define Efremovič proximities on topological spaces (discrete proximity, standard proximity, metric proximity, Čech proximity, Alexandroff proximity, and Freudenthal proximity). For details, see § 2, pp. 93-94 in[6]. The focus here is on standard proximity on a topological space. For A, B ⊂ X, A is near B (denoted by A δ B), provided their closures share a common point.
The closure of a subset A ∈ 2^X (denoted by cl(A)), the usual Kuratowski closure of a set[F] introduced in § 4, p. 20[27], is defined by

cl(A) = {x ∈ X : D(x, A) = 0}, where

D(x, A) = inf {d(x, a) : a ∈ A}.

That is, cl(A) is the set of all points x in X that are close to A (D(x, A) is the Hausdorff distance (see § 22, p. 128, in[15]) between x and the set A, and d(x, a) = |x - a| is the standard distance). A standard proximity relation is defined by

δ = {(A, B) ∈ 2^X × 2^X : cl(A) ∩ cl(B) ≠ ∅}.

Whenever sets A and B have no points in common, the sets are far from each other (denoted A δ̸ B).
The following EF-proximity[G] space axioms are given by Jurij Michailov Smirnov[67] based on what Vadim Arsenyevič Efremovič introduced during the first half of the 1930s[8]. Let A, B, E ∈ 2^X.

EF.1 If the set A is close to B, then B is close to A.

EF.2 A ∪ B is close to E, if and only if, at least one of the sets A or B is close to E.

EF.3 Two points are close, if and only if, they are the same point.

EF.4 All sets are far from the empty set ∅.

EF.5 For any two sets A and B which are far from each other, there exist C, D ∈ 2^X, C ∪ D = X, such that A is far from C and B is far from D (Efremovič-axiom).

The pair (X, δ) is called an EF-proximity space. In this context, a space is a set with some added structure. With a proximity space X, the structure of X is induced by the EF-proximity relation δ. In a proximity space X, the closure of A in X coincides with the intersection of all closed sets that contain A.

Theorem 1[67] The closure of any set A in the proximity space X is the set of points x ∈ X that are close to A.

181.5 Visualization of EF-axiom


Let the set X be represented by the points inside the rectangular region in Fig. 5. Also, let A, B be any two non-intersecting subsets (i.e. subsets spatially far from each other) in X, as shown in Fig. 5. Let C^c = X \ C (the complement of the set C). Then from the EF-axiom, observe the following:

A δ̸ B,
B ⊂ C,
D = C^c,
X = D ∪ C,
A ⊂ D, hence, we can write
A δ̸ B ⇒ A δ̸ C and B δ̸ D, for some C, D in X so that C ∪ D = X.

181.6 Descriptive proximity space


Descriptively near sets were introduced as a means of solving classification and pattern recognition problems arising from disjoint sets that resemble each other[44][43]. Recently, the connections between near sets in EF-spaces and near sets in descriptive EF-proximity spaces have been explored in[53][48].
Again, let X be a metric topological space and let Φ = {φ1, . . . , φn} be a set of probe functions that represent features of each x ∈ X. The assumption made here is that X contains non-abstract points that have measurable features such as gradient orientation. A non-abstract point has a location and features that can be measured (see § 3 in [26]).
A probe function φ : X → R represents a feature of a sample point in X. The mapping Φ : X → R^n is defined by Φ(x) = (φ1(x), . . . , φn(x)), where R^n is an n-dimensional real Euclidean vector space. Φ(x) is a feature vector for x, which provides a description of x ∈ X. For example, this leads to a proximal view of sets of picture points in digital images[48].

Figure 5. Example of a descriptive EF-proximity relation between sets A, B, and C^c

To obtain a descriptive proximity relation (denoted by δΦ), one first chooses a set of probe functions. Let Q : 2^X → 2^{R^n} be a mapping of a subset of 2^X into a subset of 2^{R^n}. For example, let A, B ∈ 2^X and let Q(A), Q(B) denote sets of descriptions of points in A, B, respectively. That is,

Q(A) = {Φ(a) : a ∈ A},
Q(B) = {Φ(b) : b ∈ B}.

The expression A δΦ B reads "A is descriptively near B". Similarly, A δ̸Φ B reads "A is descriptively far from B". The descriptive proximity of A and B is defined by

A δΦ B ⇔ Q(cl(A)) ∩ Q(cl(B)) ≠ ∅.

The descriptive intersection ∩Φ of A and B is defined by

A ∩Φ B = {x ∈ A ∪ B : Φ(x) ∈ Q(A) and Φ(x) ∈ Q(B)}.

That is, x ∈ A ∪ B is in A ∩Φ B, provided Φ(x) = Φ(a) = Φ(b) for some a ∈ A, b ∈ B. Observe that A and B can be disjoint and yet A ∩Φ B can be nonempty. The descriptive proximity relation δΦ is defined by

δΦ = {(A, B) ∈ 2^X × 2^X : cl(A) ∩Φ cl(B) ≠ ∅}.

Whenever sets A and B have no points with matching descriptions, the sets are descriptively far from each other (denoted by A δ̸Φ B).
The binary relation δΦ is a descriptive EF-proximity, provided the following axioms are satisfied for A, B, E ⊂ X.

dEF.1 If the set A is descriptively close to B, then B is descriptively close to A.

dEF.2 A ∪ B is descriptively close to E, if and only if, at least one of the sets A or B is descriptively close to E.

dEF.3 Two points x, y ∈ X are descriptively close, if and only if, the description of x matches the description of y.

dEF.4 All nonempty sets are descriptively far from the empty set ∅.

dEF.5 For any two sets A and B which are descriptively far from each other, there exist C, D ∈ 2^X, C ∪ D = X, such that A is descriptively far from C and B is descriptively far from D (Descriptive Efremovič axiom).

The pair (X, δΦ) is called a descriptive proximity space.
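In the simplest case of a single integer-valued probe (e.g. one colour channel), descriptive nearness of two finite sets amounts to checking whether their description sets intersect. The following C sketch (the point values and the names in_descriptions and descriptively_near are invented for illustration) does exactly that.

#include <stdio.h>

/* Points are represented only by their single probe value. */
static int in_descriptions(int v, const int *set, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (set[i] == v) return 1;   /* some point of the set has description v */
    return 0;
}

/* 1 if A and B are descriptively near, i.e. Q(A) and Q(B) intersect */
static int descriptively_near(const int *A, size_t na, const int *B, size_t nb) {
    for (size_t i = 0; i < na; i++)
        if (in_descriptions(A[i], B, nb)) return 1;
    return 0;
}

int main(void) {
    int A[] = { 10, 12, 37 };   /* disjoint "pixel" sets, described by probe values */
    int B[] = { 90, 37, 51 };
    printf("A and B descriptively near: %s\n",
           descriptively_near(A, 3, B, 3) ? "yes" : "no");   /* share value 37 */
    return 0;
}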

181.7 Proximal relator spaces


A relator is a nonvoid family of relations R on a nonempty set X[72]. The pair (X, R) (also denoted X(R)) is called a relator space. Relator spaces are natural generalizations of ordered sets and uniform spaces[73][74]. With the introduction of a family of proximity relations Rδ on X, we obtain a proximal relator space (X, Rδ). For simplicity, we consider only two proximity relations, namely, the Efremovič proximity δ[8] and the descriptive proximity δΦ in defining the descriptive relator RδΦ[53][48]. The pair (X, RδΦ) is called a proximal relator space[49]. In this work, X denotes a metric topological space that is endowed with the relations in a proximal relator. With the introduction of (X, RδΦ), the traditional closure of a subset (e.g.,[9][7]) can be compared with the more recent descriptive closure of a subset.
In a proximal relator space X, the descriptive closure of a set A (denoted by clΦ(A)) is defined by

clΦ(A) = {x ∈ X : Φ(x) ∈ Q(cl(A))}.

That is, x ∈ X is in the descriptive closure of A, provided the closure of Φ(x) and the closure of Q(cl(A)) have at least one element in common.

Theorem 2 [50] The descriptive closure of any set A in the descriptive EF-proximity space (X, RδΦ) is the set of points x ∈ X that are descriptively close to A.

Theorem 3 [50] Kuratowski closure of a set A is a subset of the descriptive closure of A in a descriptive EF-proximity space.

Theorem 4 [49] Let (X, RδΦ) be a proximal relator space, A ⊂ X. Then cl(A) ⊆ clΦ(A).

Proof Let x ∈ cl(A). Then Φ(x) = Φ(a) for some a ∈ cl(A), so that Φ(x) ∈ Q(cl(A)). Consequently, x ∈ clΦ(A). Hence, cl(A) ⊆ clΦ(A).

In a proximal relator space, EF-proximity δ leads to the following results for descriptive proximity δΦ.

Theorem 5 [49] Let (X, RδΦ) be a proximal relator space, A, B, C ⊂ X. Then

1° A δ B implies A δΦ B.
2° (A ∪ B) δ C implies (A ∪ B) δΦ C.
3° cl A δ cl B implies cl A δΦ cl B.

Proof

1°: A δ B ⇒ A ∩ B ≠ ∅. For x ∈ A ∩ B, Φ(x) ∈ Q(A) and Φ(x) ∈ Q(B). Consequently, A δΦ B.

2°: 1° ⇒ 2°.

3°: cl A δ cl B implies that cl A and cl B have at least one point in common. Hence, 1° ⇒ 3°.

181.8 Descriptive δ-neighbourhoods

Figure 6. Example depicting δ-neighbourhoods

In a pseudometric proximal relator space X, the neighbourhood of a point x ∈ X (denoted by N_{x,ε}), for ε > 0, is defined by

N_{x,ε} = {y ∈ X : d(x, y) < ε}.

The interior of a set A (denoted by int(A)) and the boundary of A (denoted by bdy(A)) in a proximal relator space X are defined by

int(A) = {x ∈ X : N_{x,ε} ⊆ A},

bdy(A) = cl(A) \ int(A).



A set A has a natural strong inclusion in a set B associated with δ[5][6] (denoted by A ≪ B), provided A ⊂ int B, i.e., A δ̸ (X \ int B) (A is far from the complement of int B). Correspondingly, a set A has a descriptive strong inclusion in a set B associated with δΦ (denoted by A ≪Φ B), provided Q(A) ⊂ Q(int B), i.e., A δ̸Φ (X \ int B) (Q(A) is far from the complement of int B).
Let ≪Φ be a descriptive δ-neighbourhood relation defined by

≪Φ = {(A, B) ∈ 2^X × 2^X : Q(A) ⊂ Q(int B)}.

That is, A ≪Φ B, provided the description of each a ∈ A is contained in the set of descriptions of the points b ∈ int B. Now observe that any A, B in the proximal relator space X such that A δ̸Φ B have disjoint δΦ-neighbourhoods, i.e.,

A δ̸Φ B ⇔ A ≪Φ E1, B ≪Φ E2, for some E1, E2 ⊂ X (see Fig. 6).

Theorem 6 [50] Any two sets descriptively far from each other belong to disjoint descriptive δΦ-neighbourhoods in a descriptive proximity space X.

A consideration of strong containment of a nonempty set in another set leads to the study of hit-and-miss topologies
and the Wijsman topology[2] .

181.9 Tolerance near sets


Let ε be a real number greater than zero. In the study of sets that are proximally near within some tolerance, the set of proximity relations RδΦ is augmented with a pseudometric tolerance proximity relation (denoted by δΦ,ε) defined by

DΦ(A, B) = inf {d(Φ(a), Φ(b)) : Φ(a) ∈ Q(A), Φ(b) ∈ Q(B)},

d(Φ(a), Φ(b)) = Σ_{i=1}^{n} |φi(a) - φi(b)|,

δΦ,ε = {(A, B) ∈ 2^X × 2^X : |DΦ(cl(A), cl(B))| < ε}.

Let RδΦ,ε = RδΦ ∪ {δΦ,ε}. In other words, a nonempty set equipped with the proximal relator RδΦ,ε has underlying structure provided by the proximal relator RδΦ and provides a basis for the study of tolerance near sets in X that are near within some tolerance. Sets A, B in a descriptive pseudometric proximal relator space (X, RδΦ,ε) are tolerance near sets (i.e., A δΦ,ε B), provided

DΦ(A, B) < ε.

181.10 Tolerance classes and preclasses


Relations with the same formal properties as similarity relations of sensations considered by Poincaré[62] are nowadays, after Zeeman[83], called tolerance relations. A tolerance τ on a set O is a relation τ ⊆ O × O that is reflexive and symmetric. In algebra, the term tolerance relation is also used in a narrow sense to denote reflexive and symmetric relations defined on universes of algebras that are also compatible with operations of a given algebra, i.e., they are generalizations of congruence relations (see e.g.,[12]). In referring to such relations, the term algebraic tolerance or the term algebraic tolerance relation is used. Transitive tolerance relations are equivalence relations. A set O together with a tolerance τ is called a tolerance space (denoted (O, τ)). A set A ⊆ O is a τ-preclass (or briefly preclass when τ is understood) if and only if for any x, y ∈ A, (x, y) ∈ τ.
The family of all preclasses of a tolerance space is naturally ordered by set inclusion and preclasses that are maximal with respect to set inclusion are called τ-classes or just classes, when τ is understood. The family of all classes of the space (O, τ) is particularly interesting and is denoted by Hτ(O). The family Hτ(O) is a covering of O[58].
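The preclass condition is straightforward to test mechanically. The sketch below (the probe values, ε = 0.1 and the helper name is_preclass are invented sample choices with |Φ| = 1, not the article's data) checks in C whether a candidate subset of objects is a preclass, i.e., whether every pair of its members is tolerant.

#include <stdio.h>
#include <math.h>

#define EPS 0.1
#define N   6

static const double phi[N] = { 0.10, 0.15, 0.18, 0.50, 0.55, 0.95 };

static int tolerant(int x, int y) {
    return fabs(phi[x] - phi[y]) <= EPS;   /* (x, y) in the tolerance relation */
}

/* a preclass needs every pair of its members to be tolerant */
static int is_preclass(const int *members, int n) {
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (!tolerant(members[i], members[j])) return 0;
    return 1;
}

int main(void) {
    int a[] = { 0, 1, 2 };   /* values 0.10, 0.15, 0.18: all pairs within 0.1 */
    int b[] = { 0, 1, 3 };   /* 0.10 and 0.50 differ by more than 0.1         */
    printf("{x0,x1,x2} preclass: %d\n", is_preclass(a, 3));
    printf("{x0,x1,x3} preclass: %d\n", is_preclass(b, 3));
    return 0;
}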

The work on similarity by Poincaré and Zeeman presages the introduction of near sets[44][43] and research on similarity relations, e.g.,[79]. In science and engineering, tolerance near sets are a practical application of the study of sets that are near within some tolerance. A tolerance ε ∈ (0, ∞] is directly related to the idea of closeness or resemblance (i.e., being within some tolerance) in comparing objects. By way of application of Poincaré's approach in defining visual spaces and Zeeman's approach to tolerance relations, the basic idea is to compare objects such as image patches in the interior of digital images.

181.10.1 Examples
Simple Example
The following simple example demonstrates the construction of tolerance classes from real data. Consider the 20 objects in the table below with |Φ| = 1.

Let a tolerance relation be defined as

≅Φ,ε = {(x, y) ∈ O × O : ‖Φ(x) - Φ(y)‖_2 ≤ ε}

Then, setting ε = 0.1 gives the following tolerance classes:

H≅Φ,ε(O) = {{x1, x8, x10, x11}, {x1, x9, x10, x11, x14},
{x2, x7, x18, x19},
{x3, x12, x17},
{x4, x13, x20}, {x4, x18},
{x5, x6, x15, x16}, {x5, x6, x15, x20},
{x6, x13, x20}}.

Observe that each object in a tolerance class satisfies the condition ‖Φ(x) - Φ(y)‖_2 ≤ ε, and that almost all of the objects appear in more than one class. Moreover, there would be twenty classes if the indiscernibility relation was used since there are no two objects with matching descriptions.
Image Processing Example

Figure 7. Example of images that are near each other. (a) and (b) Images from the freely available LeavesDataset (see, e.g.,
www.vision.caltech.edu/archive.html).

The following example is based on digital images. Let a subimage be defined as a small subset of pixels belonging to a digital image such that the pixels contained in the subimage form a square. Then, let the sets X and Y respectively represent the subimages obtained from two different images, and let O = {X ∪ Y}. Finally, let the description of an object be given by the Green component in the RGB color model. The next step is to find all the tolerance classes using the tolerance relation defined in the previous example. Using this information, tolerance classes can be formed containing objects that have similar (within some small ε) values for the Green component in the RGB colour model. Furthermore, images that are near (similar) to each other should have tolerance classes divided among both images (instead of tolerance classes contained solely in one of the images). For example, the figure accompanying this example shows a subset of the tolerance classes obtained from two leaf images. In this figure, each tolerance class is assigned a separate colour. As can be seen, the two leaves share similar tolerance classes. This example highlights a need to measure the degree of nearness of two sets.

181.11 Nearness measure


Let (U, RδΦ,ε) denote a particular descriptive pseudometric EF-proximal relator space equipped with the proximity relation δΦ,ε, with nonempty subsets X, Y ∈ 2^U and with the tolerance relation τΦ,ε defined in terms of a set of probes Φ and with ε ∈ (0, ∞], where

τΦ,ε = {(x, y) ∈ U × U : ‖Φ(x) - Φ(y)‖ ≤ ε}.

Figure 8. Examples of degree of nearness between two sets: (a) High degree of nearness, and (b) Low degree of nearness.

Further, assume Z = X ∪ Y and let HτΦ,ε(Z) denote the family of all classes in the space (Z, τΦ,ε).
Let A ⊆ X, B ⊆ Y. The distance DtNM : 2^U × 2^U → [0, ∞] is defined by

DtNM(X, Y) = 1 - tNM(A, B), if X and Y are not empty,
DtNM(X, Y) = ∞, if X or Y is empty,

where

tNM(A, B) = ( Σ_{C ∈ HτΦ,ε(Z)} |C| )^{-1} · Σ_{C ∈ HτΦ,ε(Z)} |C| · min(|C ∩ A|, |C ∩ B|) / max(|C ∩ A|, |C ∩ B|).

The details concerning tNM are given in[14][16][17]. The idea behind tNM is that sets that are similar should have a similar number of objects in each tolerance class. Thus, for each tolerance class obtained from the covering of Z = X ∪ Y, tNM counts the number of objects that belong to X and Y and takes the ratio (as a proper fraction) of
their cardinalities. Furthermore, each ratio is weighted by the total size of the tolerance class (thus giving importance
to the larger classes) and the nal result is normalized by dividing by the sum of all the cardinalities. The range of
tN M is in the interval [0,1], where a value of 1 is obtained if the sets are equivalent (based on object descriptions)
and a value of 0 is obtained if they have no descriptions in common.
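A hedged sketch of that computation in C (the per-class counts are invented sample data, the function name tNM is hypothetical, and X and Y are assumed disjoint so that |C| = |C ∩ X| + |C ∩ Y|):

#include <stdio.h>

/* Weighted-ratio nearness: |C|-weighted average of min/max of the class
   counts, normalised by the total class size over the covering of Z. */
static double tNM(const int inX[], const int inY[], int nclasses) {
    double total = 0.0, weighted = 0.0;
    for (int i = 0; i < nclasses; i++) {
        int cx = inX[i], cy = inY[i];
        int size = cx + cy;                 /* |C|, with X and Y disjoint */
        int mn = cx < cy ? cx : cy;
        int mx = cx > cy ? cx : cy;
        total += size;
        if (mx > 0) weighted += (double)size * mn / mx;
    }
    return total > 0.0 ? weighted / total : 0.0;
}

int main(void) {
    int inX[] = { 4, 3, 5 };    /* objects of X in each of three classes */
    int inY[] = { 4, 2, 0 };    /* objects of Y in the same classes      */
    printf("tNM = %.3f\n", tNM(inX, inY, 3));
    return 0;
}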
As an example of the degree of nearness between two sets, consider the figure below in which each image consists of two sets of objects, X and Y. Each colour in the figures corresponds to a set where all the objects in the class share the same description. The idea behind tNM is that the nearness of sets in a perceptual system is based on the cardinality of tolerance classes that they share. Thus, the sets on the left side of the figure are closer (more near) to each other in terms of their descriptions than the sets on the right side of the figure.

181.12 Near set evaluation and recognition (NEAR) system

Figure 9. NEAR system GUI.

The Near set Evaluation and Recognition (NEAR) system is a system developed to demonstrate practical applications
of near set theory to the problems of image segmentation evaluation and image correspondence. It was motivated
by a need for a freely available software tool that can provide results for research and to generate interest in near
set theory. The system implements a Multiple Document Interface (MDI) where each separate processing task is
performed in its own child frame. The objects (in the near set sense) in this system are subimages of the images being
processed and the probe functions (features) are image processing functions dened on the subimages. The system
was written in C++ and was designed to facilitate the addition of new processing tasks and probe functions. Currently,
the system performs six major tasks, namely, displaying equivalence and tolerance classes for an image, performing
segmentation evaluation, measuring the nearness of two images, performing Content Based Image Retrieval (CBIR),
and displaying the output of processing an image using a specic probe function.

181.13 Proximity System


The Proximity System is an application developed to demonstrate descriptive-based topological approaches to near-
ness and proximity within the context of digital image analysis. The Proximity System grew out of the work of S.
Naimpally and J. Peters on Topological Spaces. The Proximity System was written in Java and is intended to run in
two different operating environments, namely on Android smartphones and tablets, as well as desktop platforms running the Java Virtual Machine.

Figure 10. The Proximity System.

With respect to the desktop environment, the Proximity System is a cross-platform
Java application for Windows, OSX, and Linux systems, which has been tested on Windows 7 and Debian Linux
using the Sun Java 6 Runtime. In terms of the implementation of the theoretical approaches, both the Android and the desktop-based applications use the same back-end libraries to perform the description-based calculations; the only differences are the user interface and the fact that the Android version has fewer available features due to restrictions on system resources.

181.14 See also


Alternative set theory
Category:Mathematical relations
Category:Topology
Feature vector
Proximity space
Rough set
Topology

181.15 Notes
1. ^ J.R. Isbell observed that the notions near and far are important in a uniform space. Sets A, B are far (uniformly distal), provided {A, B} is a discrete collection. A nonempty set U is a uniform neighbourhood of a set A, provided the complement of U is far from A. See § 33 in [23]
2. ^ The intuition that led to the discovery of descriptively near sets is given in Pawlak, Z.;Peters, J.F. (2002,
2007) Jak blisko (How Near)". Systemy Wspomagania Decyzji I 57 (109)
3. ^ Descriptively near sets are introduced in[48] . The connections between traditional EF-proximity and descrip-
tive EF-proximity are explored in [37] .

4. ^ Reminiscent of M. Pavel's approach, descriptions of members of sets of objects are defined relative to vectors of values obtained from real-valued functions called probes. See, Pavel, M. (1993). Fundamentals of Pattern Recognition. 2nd ed. New York: Marcel Dekker, for the introduction of probe functions considered in the context of image registration.
5. ^ A non-spatial view of near sets appears in, C.J. Mozzochi, M.S. Gagrat, and S.A. Naimpally, Symmetric generalized topological structures, Exposition Press, Hicksville, NY, 1976., and, more recently, nearness of disjoint sets X and Y based on resemblance between pairs of elements x ∈ X, y ∈ Y (i.e. x and y have similar feature vectors Φ(x), Φ(y) and the norm ‖Φ(x) - Φ(y)‖_p < ε). See, e.g.,[43][42][53].
6. ^ The basic facts about closure of a set were first pointed out by M. Fréchet in[11], and elaborated by B. Knaster and C. Kuratowski in[25].
7. ^ Observe that up to the 1970s, proximity meant EF-proximity, since this is the one that was studied intensively. The pre-1970 work on proximity spaces is exemplified by the series of papers by J. M. Smirnov during the first half of the 1950s[68][67][69][70], culminating in the compendious collection of results by S.A. Naimpally and B.D. Warrack[34]. But in view of later developments, there is a need to distinguish between various proximities. A basic proximity or Čech-proximity was introduced by E. Čech during the late 1930s (see § 25 A.1, pp. 439-440 in [78]). The conditions for the non-symmetric case for a proximity were introduced by S. Leader[28] and for the symmetric case by M.W. Lodato[29][30][31].

181.16 References
1. ^ Adámek, J.; Herrlich, H.; Strecker, G. E. (1990). Abstract and Concrete Categories. London: Wiley-Interscience. pp. ix+482.
2. ^ Beer, G. (1993), Topologies on Closed and Closed Convex Sets, London, UK: Kluwer Academic Pub., pp. xi + 340.
3. ^ Bentley, H. L.; Colebunders, E.; Vandermissen, E. (2009), "A convenient setting for completions and function spaces", in Mynard, F.; Pearl, E., Contemporary Mathematics, Providence, RI: American Mathematical Society, pp. 37-88.
4. ^ Cameron, P.; Hockingand, J. G.; Naimpally, S. A. (1974). Nearnessa better approach to continuity and
limits. American Mathematical Monthly. 81 (7): 739745. doi:10.2307/2319561.
5. ^ Di Concilio, A. (2008), "Action, uniformity and proximity", in Naimpally, S. A.; Di Maio, G., Theory and Applications of Proximity, Nearness and Uniformity, Seconda Università di Napoli, Napoli: Prentice-Hall, pp. 71-88.
6. ^ a b Di Concilio, A. (2009). Proximity: A powerful tool in extension theory, function spaces, hyperspaces,
boolean algebras and point-free geometry. Contemporary Mathematics. 486: 89114. doi:10.1090/conm/486/09508.
7. ^ Devi, R.; Selvakumar, A.; Vigneshwaran, M. (2010). " (I, ) -generalized semi-closed sets in topological
spaces. FILOMAT. 24 (1): 97100. doi:10.2298/l1001097d.
8. ^ a b c Efremovič, V. A. (1952). "The geometry of proximity I (in Russian)". Mat. Sb. (N.S.). 31 (73): 189-200.
9. ^ Peters, J. F. (2008). A note on a-open sets and e -sets. FILOMAT. 22 (1): 8996.
10. ^ Fechner, G. T. (1966). Elements of Psychophysics, vol. I. London, UK: Hold, Rinehart & Winston. pp. H.
E. Adlers trans. of Elemente der Psychophysik, 1860.
11. ^ Fréchet, M. (1906). "Sur quelques points du calcul fonctionnel". Rend. Circ. Mat. Palermo. 22: 1-74. doi:10.1007/bf03018603.
12. ^ Grätzer, G.; Wenzel, G. H. (1989). "Tolerances, covering systems, and the axiom of choice". Archivum Mathematicum. 25 (1-2): 27-34.
13. ^ Gupta, S.; Patnaik, K. (2008). Enhancing performance of face recognition systems by using near set ap-
proach for selecting facial features. Journal of Theoretical and Applied Information Technology. 4 (5): 433
441.

14. ^ a b Hassanien, A. E.; Abraham, A.; Peters, J. F.; Schaefer, G.; Henry, C. (2009). Rough sets and near sets in
medical imaging: A review, IEEE. Transactions on Information Technology in Biomedicine. 13 (6): 955968.
doi:10.1109/TITB.2009.2017017.

15. ^ Hausdorff, F. (1914). Grundzüge der Mengenlehre. Leipzig: Veit and Company. pp. viii + 476.

16. ^ Henry, C.; Peters, J. F. (2010). Perception-based image classication, International. Journal of Intelligent
Computing and Cybernetics. 3 (3): 410430. doi:10.1108/17563781011066701.

17. ^ a b c d Henry, C. J. (2010), Near sets: Theory and applications, Ph.D. thesis, Dept. Elec. Comp. Eng., Uni.
of MB, supervisor: J.F. Peters

18. ^ a b Henry, C.; Peters, J. F. (2011). Arthritic hand-nger movement similarity measurements: Tolerance near
set approach. Computational and Mathematical Methods in Medicine. 2011: 114. doi:10.1155/2011/569898.

19. ^ Henry, C. J.; Ramanna, S. (2011). Parallel Computation in Finding Near Neighbourhoods. Lecture Notes
in Computer Science: 523532.

20. ^ a b Herrlich, H. (1974). A concept of nearness. General Topology and its Applications. 4: 191212.
doi:10.1016/0016-660x(74)90021-x.

21. ^ Hocking, J. G.; Naimpally, S. A. (2009), Nearness - a better approach to continuity and limits, Allahabad Mathematical Society Lecture Note Series, 3, Allahabad: The Allahabad Mathematical Society, pp. iv+66, ISBN 978-81-908159-1-8.

22. ^ İnan, E.; Öztürk, M. A. (2012). "Near groups on nearness approximation spaces". Hacettepe Journal of Mathematics and Statistics. 41 (4): 545-558.

23. ^ Isbell, J. R. (1964). Uniform spaces. Providence, Rhode Island: American Mathematical Society. pp. xi +
175.

24. ^ Ivanova, V. M.; Ivanov, A. A. (1959). Contiguity spaces and bicompact extensions of topological spaces
(russian)". Dokl. Akad. Nauk SSSR. 127: 2022.

25. ^ Knaster, B.; Kuratowski, C. (1921). Sur les ensembles connexes. Fundamenta Mathematicae. 2: 206255.

26. ^ Kovár, M. M. (2011). "A new causal topology and why the universe is co-compact". arXiv:1112.0817 [math-ph].

27. ^ Kuratowski, C. (1958), Topologie I, Warsaw: Panstwowe Wydawnictwo Naukowe, pp. XIII + 494.

28. ^ Leader, S. (1967). Metrization of proximity spaces. Proceedings of the American Mathematical Society.
18: 10841088. doi:10.2307/2035803.

29. ^ Lodato, M. W. (1962), On topologically induced generalized proximity relations, Ph.D. thesis, Rutgers
University

30. ^ Lodato, M. W. (1964). On topologically induced generalized proximity relations I. Proceedings of the
American Mathematical Society. 15: 417422. doi:10.2307/2034517.

31. ^ Lodato, M. W. (1966). On topologically induced generalized proximity relations II. Pacic Journal of
Mathematics. 17: 131135.

32. ^ MacLane, S. (1971). Categories for the working mathematician. Berlin: Springer. pp. v+262pp.

33. ^ Mozzochi, C. J.; Naimpally, S. A. (2009), Uniformity and Proximity, Allahabad Mathematical Society Lecture Note Series, 2, Allahabad: The Allahabad Mathematical Society, pp. xii+153, ISBN 978-81-908159-1-8.

34. ^ Naimpally, S. A. (1970). Proximity spaces. Cambridge, UK: Cambridge University Press. pp. x+128. ISBN
978-0-521-09183-1.

35. ^ Naimpally, S. A. (2009). Proximity approach to problems in topology and analysis. Munich, Germany:
Oldenbourg Verlag. pp. ix + 204. ISBN 978-3-486-58917-7.

36. ^ Naimpally, S. A.; Peters, J. F. (2013). Preservation of continuity. Scientiae Mathematicae Japonicae. 76
(2): 17.
37. ^ a b c d Naimpally, S. A.; Peters, J. F. (2013). Topology with Applications. Topological Spaces via Near and
Far. Singapore: World Scientic.
38. ^ Naimpally, S. A.; Peters, J. F.; Wolski, M. (2013). Near set theory and applications. Special Issue in
Mathematics in Computer Science. 7. Berlin: Springer. p. 136.
39. ^ Naimpally, S. A.; Warrack, B. D. (1970), Proximity Spaces, Cambridge Tract in Mathematics, 59, Cambridge, UK: Cambridge University Press, pp. x+128.
40. ^ Pal, S. K.; Peters, J. F. (2010). Rough fuzzy image analysis. Foundations and methodologies. London, UK,:
CRC Press, Taylor & Francis Group. ISBN 9781439803295.
41. ^ Peters, J. F. (2009). Tolerance near sets and image correspondence. International Journal of Bio-Inspired
Computation. 1 (4): 239245. doi:10.1504/ijbic.2009.024722.
42. ^ a b c Peters, J. F.; Wasilewski, P. (2009). Foundations of near sets. Information Sciences. 179 (18): 3091
3109. doi:10.1016/j.ins.2009.04.018.
43. ^ a b c Peters, J. F. (2007). Near sets. General theory about nearness of objects. Applied Mathematical
Sciences. 1 (53): 26092629.
44. ^ a b Peters, J. F. (2007). Near sets. Special theory about nearness of objects. Fundamenta Informaticae. 75
(14): 407433.
45. ^ Peters, J. F. (2010). Corrigenda and addenda: Tolerance near sets and image correspondence. International
Journal of Bio-Inspired Computation. 2 (5): 310318. doi:10.1504/ijbic.2010.036157.
46. ^ Peters, J. F. (2011), "How near are Zdzisław Pawlak's paintings? Merotopic distance between regions of interest", in Skowron, A.; Suraj, S., Intelligent Systems Reference Library volume dedicated to Prof. Zdzisław Pawlak, Berlin: Springer, pp. 1-19.
47. ^ Peters, J. F. (2011), "Sufficiently near sets of neighbourhoods", in Yao, J. T.; Ramanna, S.; Wang, G.; et al., Lecture Notes in Artificial Intelligence 6954, Berlin: Springer, pp. 17-24.
48. ^ a b c d Peters, J. F. (2013). Near sets: An introduction. Mathematics in Computer Science. 7 (1): 39.
doi:10.1007/s11786-013-0149-6.
49. ^ a b c Peters, J. F. (2014). Proximal relator spaces. FILOMAT: 15 (in press).
50. ^ a b c d e Peters, J. F. (2014). Topology of Digital Images. Visual Pattern Discovery in Proximity Spaces. 63.
Springer. p. 342. ISBN 978-3-642-53844-5.
51. ^ a b Peters, J. F.; İnan, E.; Öztürk, M. A. (2014). "Spatial and descriptive isometries in proximity spaces". General Mathematics Notes. 21 (2): 125-134.
52. ^ Peters, J. F.; Naimpally, S. A. (2011). Approach spaces for near families. General Mathematics Notes. 2
(1): 159164.
53. ^ a b c Peters, J. F.; Naimpally, S. A. (2011). General Mathematics Notes. 2 (1): 159-164.
54. ^ Peters, J. F.; Puzio, L. (2009). Image analysis with anisotropic wavelet-based nearness measures. Interna-
tional Journal of Computational Intelligence Systems. 2 (3): 168183. doi:10.1016/j.ins.2009.04.018.
55. ^ Peters, J. F.; Shahfar, S.; Ramanna, S.; Szturm, T. (2007), "Biologically-inspired adaptive learning: A near set approach", Frontiers in the Convergence of Bioscience and Information Technologies, Korea.
56. ^ Peters, J. F.; Tiwari, S. (2011). Approach merotopies and near lters. Theory and application. General
Mathematics Notes. 3 (1): 3245.
57. ^ Peters, J. F.; Tiwari, S. (2011). Approach merotopies and near lters. Theory and application. General
Mathematics Notes. 3 (1): 3245.

58. ^ Peters, J. F.; Wasilewski, P. (2012). Tolerance spaces: Origins, theoretical aspects and applications. In-
formation Sciences. 195: 211225. doi:10.1016/j.ins.2012.01.023.
59. ^ Picado, J. Weil nearness spaces. Portugaliae Mathematica. 55 (2): 233254.
60. ^ a b c Poincaré, J. H. (1895). "L'espace et la géométrie". Revue de métaphysique et de morale. 3: 631-646.
61. ^ a b c Poincaré, J. H. (1902). "Sur certaines surfaces algébriques; troisième complément à l'analysis situs". Bulletin de la Société mathématique de France. 30: 49-70.
62. ^ a b Poincaré, J. H. (1913 & 2009). Dernières pensées, trans. by J.W. Bolduc as Mathematics and Science: Last Essays. Paris & NY: Flammarion & Kessinger.
63. ^ a b Poincaré, J. H. (1894). "Sur la nature du raisonnement mathématique". Revue de métaphysique et de morale. 2: 371-384.
64. ^ a b Ramanna, S.; Meghdadi, A. H. (2009). Measuring resemblances between swarm behaviours: A percep-
tual tolerance near set approach. Fundamenta Informaticae. 95 (4): 533552. doi:10.3233/FI-2009-163.
65. ^ a b Riesz, F. (1908). "Stetigkeitsbegriff und abstrakte Mengenlehre" (PDF). Atti del IV Congresso Internazionale dei Matematici II: 18-24.
66. ^ Shreider, J. A. (1975). Equality, resemblance, and order. Russia: Mir Publishers. p. 279.
67. ^ a b c d Smirnov, J. M. (1952). On proximity spaces. Mat. Sb. (N.S.). 31 (73): 543574 (English translation:
Amer. Math. Soc. Trans. Ser. 2, 38, 1964, 535).
68. ^ Smirnov, J. M. (1952). "On proximity spaces in the sense of V.A. Efremovič". Math. Sb. (N.S.). 84: 895-898, English translation: Amer. Math. Soc. Trans. Ser. 2, 38, 1964, 1-4.
69. ^ Smirnov, J. M. (1954). On the completeness of proximity spaces. I.. Trudy Moskov. Mat. Ob. 3:
271306, English translation: Amer. Math. Soc. Trans. Ser. 2, 38, 1964, 3774.
70. ^ Smirnov, J. M. (1955). On the completeness of proximity spaces. II.. Trudy Moskov. Mat. Ob. 4:
421438, English translation: Amer. Math. Soc. Trans. Ser. 2, 38, 1964, 7594.
71. ^ a b Sossinsky, A. B. (1986). Tolerance space theory and some applications. Acta Applicandae Mathemati-
cae. 5 (2): 137167. doi:10.1007/bf00046585.
72. ^ Száz, Á. (1997). "Uniformly, proximally and topologically compact relators". Mathematica Pannonica. 8 (1): 103-116.
73. ^ Száz, Á. (1987). "Basic tools and mild continuities in relator spaces". Acta Mathematica Hungarica. 50: 177-201. doi:10.1007/bf01903935.
74. ^ Száz, Á. (2000). "An extension of Kelley's closed relation theorem to relator spaces". FILOMAT. 14: 49-71.
75. ^ Tiwari, S. (2010), Some aspects of general topology and applications. Approach merotopic structures and
applications, Ph.D. thesis, Dept. of Math., Allahabad (U.P.), India, supervisor: M. khare
76. ^ a b Tiwari, S.; Peters, J. F. (2013). A new approach to the study of extended metric spaces. Mathematica
Aeterna. 3 (7): 565577.
77. ^ Tukey, J. W. (1940), Convergence and Uniformity in Topology, Annals of Mathematics Studies, AM2, Princeton, NJ: Princeton Univ. Press, p. 90.
78. ^ Čech, E. (1966). Topological Spaces, revised ed. by Z. Frolík and M. Katětov. London: John Wiley & Sons. p. 893.
79. ^ Wasilewski, P. (2004), On selected similarity relations and their applications into cognitive science, Ph.D.
thesis, Dept. Logic
80. ^ Wasilewski, P.; Peters, J. F.; Ramanna, S. (2011). Perceptual tolerance intersection. Transactions on
Rough Sets XIII: 159174.
81. ^ Weil, A. (1938), Sur les espaces à structure uniforme et sur la topologie générale, Actualités scientifiques et industrielles, Paris: Hermann & cie.

82. ^ Wolski, M. (2010). Perception and classication. A note on near sets and rough sets. Fundamenta Infor-
maticae. 101: 143155.
83. ^ a b Zeeman, E. C. (1962), "The topology of the brain and visual perception", in Fort, Jr., M. K., Topology of 3-Manifolds and Related Topics, University of Georgia Institute Conference Proceedings (1962): Prentice-Hall, pp. 240-256.

181.17 Further reading


Naimpally, S. A.; Peters, J. F. (2013). Topology with Applications. Topological Spaces via Near and Far. World Scientific Publishing Co. Pte. Ltd. ISBN 978-981-4407-65-6.
Naimpally, S. A.; Peters, J. F.; Wolski, M. (2013), Near Set Theory and Applications, Mathematics in Computer
Science, 7 (1), Berlin: Springer
Peters, J. F. (2014), Topology of Digital Images. Visual Pattern Discovery in Proximity Spaces, Intelligent
Systems Reference Library, 63, Berlin: Springer
Henry, C. J.; Peters, J. F. (2012), Near set evaluation and recognition (NEAR) system V3.0, UM CI Laboratory
Technical Report No. TR-2009-015, Computational Intelligence Laboratory, University of Manitoba
Concilio, A. Di (2014). Proximity: A powerful tool in extension theory, function spaces, hyperspaces, boolean
algebras and point-free geometry. Computational Intelligence Laboratory, University of Manitoba. UM CI
Laboratory Technical Report No. TR-2009-021.

Peters, J. F.; Naimpally, S. A. (2012). Applications of near sets (PDF). Notices of the American Mathematical
Society. 59 (4): 536542. CiteSeerX 10.1.1.371.7903 .
Chapter 182

Negation

For other uses, see Negation (disambiguation) and NOT gate.

In logic, negation, also called logical complement, is an operation that takes a proposition p to another proposition "not p", written ¬p, which is interpreted intuitively as being true when p is false, and false when p is true. Negation is thus a unary (single-argument) logical connective. It may be applied as an operation on notions, propositions, truth values, or semantic values more generally. In classical logic, negation is normally identified with the truth function that takes truth to falsity and vice versa. In intuitionistic logic, according to the Brouwer-Heyting-Kolmogorov interpretation, the negation of a proposition p is the proposition whose proofs are the refutations of p.

182.1 Denition

"No agreement exists as to the possibility of defining negation, as to its logical status, function, and meaning, as to its field of applicability..., and as to the interpretation of the negative judgment", (F.H. Heinemann 1944).[1]
Classical negation is an operation on one logical value, typically the value of a proposition, that produces a value of true when its operand is false and a value of false when its operand is true. So, if statement A is true, then ¬A (pronounced "not A") would therefore be false; and conversely, if A is false, then ¬A would be true.
The truth table of ¬p is as follows:

p       ¬p
True    False
False   True

Classical negation can be defined in terms of other logical operations. For example, ¬p can be defined as p → F, where "→" is logical consequence and F is absolute falsehood. Conversely, one can define F as p & ¬p for any proposition p, where "&" is logical conjunction. The idea here is that any contradiction is false. While these ideas work in both classical and intuitionistic logic, they do not work in paraconsistent logic, where contradictions are not necessarily false. But in classical logic, we get a further identity: p → q can be defined as ¬p ∨ q, where "∨" is logical disjunction: "not p, or q".
Algebraically, classical negation corresponds to complementation in a Boolean algebra, and intuitionistic negation to
pseudocomplementation in a Heyting algebra. These algebras provide a semantics for classical and intuitionistic logic
respectively.

182.2 Notation

The negation of a proposition p is notated in different ways in various contexts of discussion and fields of application. Among these variants are the following: ¬p, ~p, -p, Np, and p′.
In set theory \ is also used to indicate 'not member of': U \ A is the set of all members of U that are not members of A.
No matter how it is notated or symbolized, the negation ¬p / ~p can be read as "it is not the case that p", "not that p", or usually more simply as "not p".


182.3 Properties

182.3.1 Double negation


Within a system of classical logic, double negation, that is, the negation of the negation of a proposition p, is logically equivalent to p. Expressed in symbolic terms, ¬¬p ≡ p. In intuitionistic logic, a proposition implies its double negation but not conversely. This marks one important difference between classical and intuitionistic negation. Algebraically, classical negation is called an involution of period two.
However, in intuitionistic logic we do have the equivalence of ¬¬¬p and ¬p. Moreover, in the propositional case, a sentence is classically provable if its double negation is intuitionistically provable. This result is known as Glivenko's theorem.

182.3.2 Distributivity
De Morgan's laws provide a way of distributing negation over disjunction and conjunction:

¬(a ∨ b) ≡ (¬a ∧ ¬b), and
¬(a ∧ b) ≡ (¬a ∨ ¬b).
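Since there are only four combinations of truth values for a and b, both laws can be verified exhaustively; a small illustrative C program (not part of the article) does so using C's !, && and || operators.

#include <stdio.h>

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++) {
            int law1 = (!(a || b)) == (!a && !b);   /* not(a or b)  == not a and not b */
            int law2 = (!(a && b)) == (!a || !b);   /* not(a and b) == not a or not b  */
            printf("a=%d b=%d: law1 holds=%d law2 holds=%d\n", a, b, law1, law2);
        }
    return 0;
}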

182.3.3 Linearity
Let ⊕ denote the logical xor operation. In Boolean algebra, a linear function is one such that:
There exist a0, a1, ..., an ∈ {0,1} such that f(b1, ..., bn) = a0 ⊕ (a1 ∧ b1) ⊕ ... ⊕ (an ∧ bn), for all b1, ..., bn ∈ {0,1}.
Another way to express this is that each variable always makes a difference in the truth-value of the operation or it never makes a difference. Negation is a linear logical operator.

182.3.4 Self dual


In Boolean algebra a self dual function is one such that:
f(a1, ..., an) = ~f(~a1, ..., ~an) for all a1, ..., an ∈ {0,1}. Negation is a self dual logical operator.

182.4 Rules of inference


There are a number of equivalent ways to formulate rules for negation. One usual way to formulate classical negation
in a natural deduction setting is to take as primitive rules of inference negation introduction (from a derivation of p to
both q and ¬q, infer ¬p; this rule also being called reductio ad absurdum), negation elimination (from p and ¬p infer
q; this rule also being called ex falso quodlibet), and double negation elimination (from ¬¬p infer p). One obtains the
rules for intuitionistic negation the same way but by excluding double negation elimination.
Negation introduction states that if an absurdity can be drawn as conclusion from p then p must not be the case (i.e.
p is false (classically) or refutable (intuitionistically) or etc.). Negation elimination states that anything follows from
an absurdity. Sometimes negation elimination is formulated using a primitive absurdity sign ⊥. In this case the rule
says that from p and ¬p follows an absurdity. Together with double negation elimination one may infer our originally
formulated rule, namely that anything follows from an absurdity.
Typically the intuitionistic negation ¬p of p is defined as p → ⊥. Then negation introduction and elimination are just
special cases of implication introduction (conditional proof) and elimination (modus ponens). In this case one must
also add as a primitive rule ex falso quodlibet.

182.5 Programming
As in mathematics, negation is used in computer science to construct logical statements.

if (!(r == t)) { /*...statements executed when r does NOT equal t...*/ }

The "!" signifies logical NOT in B, C, and languages with a C-inspired syntax such as C++, Java, JavaScript, Perl,
and PHP. "NOT" is the operator used in ALGOL 60, BASIC, and languages with an ALGOL- or BASIC-inspired
syntax such as Pascal, Ada, Eiffel and Seed7. Some languages (C++, Perl, etc.) provide more than one operator for
negation. A few languages like PL/I and Ratfor use ¬ for negation. Some modern computers and operating systems
will display ¬ as ! on files encoded in ASCII. Most modern languages allow the above statement to be shortened from
if (!(r == t)) to if (r != t), which sometimes yields faster programs when the compiler/interpreter does not optimize
the longer form.
In computer science there is also bitwise negation. This takes the value given and switches all the binary 1s to 0s
and 0s to 1s. See bitwise operation. It is often used to create the ones' complement ("~" in C or C++) and, by extension,
the two's complement (simplified to "-" or the negative sign, since this is equivalent to taking the arithmetic negative
of the number), which creates the opposite (negative value equivalent), i.e. the mathematical complement of the
value (when both values are added together they create a whole).
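On two's-complement integers, bitwise NOT and arithmetic negation are related by ~x == -x - 1 (equivalently, -x == ~x + 1). A minimal Python sketch of that relationship (an illustration, not part of the original article):

for x in (0, 1, 7, -3, 42):
    # ~x flips every bit of x; adding 1 to the result gives the arithmetic negative of x
    assert ~x == -x - 1
    assert -x == ~x + 1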
To get the absolute (positive equivalent) value of a given integer the following would work as the "-" changes it from
negative to positive (it is negative because x < 0 yields true)
unsigned int abs(int x) { if (x < 0) return -x; else return x; }

To demonstrate logical negation:


unsigned int abs(int x) { if (!(x < 0)) return x; else return -x; }

Inverting the condition and reversing the outcomes produces code that is logically equivalent to the original code, i.e.
will have identical results for any input (note that depending on the compiler used, the actual instructions performed
by the computer may dier).
This convention occasionally surfaces in written speech, as computer-related slang for not. The phrase !voting, for
example, means not voting.

182.6 Kripke semantics


In Kripke semantics where the semantic values of formulae are sets of possible worlds, negation can be taken to mean
set-theoretic complementation. (See also possible world semantics.)

182.7 See also


Logical conjunction

Logical disjunction

NOT gate

Bitwise NOT

Ampheck

Apophasis

Binary opposition

Cyclic negation

Double negative elimination

Grammatical polarity

Negation (linguistics)

Negation as failure

Plato's beard
Square of opposition

182.8 References
[1] Horn, Laurence R (2001). Chapter 1. A NATURAL HISTORY OF NEGATION (PDF). Stanford University: CLSI Pub-
lications. p. 1. ISBN 1-57586-336-7. Retrieved 29 Dec 2013.

182.9 Further reading


Gabbay, Dov, and Wansing, Heinrich, eds., 1999. What is Negation?, Kluwer.
Horn, L., 2001. A Natural History of Negation, University of Chicago Press.

G. H. von Wright, 195359, On the Logic of Negation, Commentationes Physico-Mathematicae 22.

Wansing, Heinrich, 2001, Negation, in Goble, Lou, ed., The Blackwell Guide to Philosophical Logic, Blackwell.
Tettamanti, Marco; Manenti, Rosa; Della Rosa, Pasquale A.; Falini, Andrea; Perani, Daniela; Cappa, Stefano
F.; Moro, Andrea (2008). Negation in the brain: Modulating action representation. NeuroImage. 43 (2):
358367. PMID 18771737. doi:10.1016/j.neuroimage.2008.08.004.

182.10 External links


Horn, Laurence R.; Wansing, Heinrich. Negation. Stanford Encyclopedia of Philosophy.
Hazewinkel, Michiel, ed. (2001) [1994], Negation, Encyclopedia of Mathematics, Springer Science+Business
Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
NOT, on MathWorld
Chapter 183

Negation as failure

Negation as failure (NAF, for short) is a non-monotonic inference rule in logic programming, used to derive not p
(i.e. that p is assumed not to hold) from failure to derive p. Note that not p can be different from the statement ¬p
of the logical negation of p, depending on the completeness of the inference algorithm and thus also on the formal
logic system.
Negation as failure has been an important feature of logic programming since the earliest days of both Planner and
Prolog. In Prolog, it is usually implemented using Prologs extralogical constructs.

183.1 Planner semantics


In Planner, negation as failure could be implemented as follows:

if (not (goal p)), then (assert ¬p)

which says that if an exhaustive search to prove p fails, then assert ¬p.[1] Note that the above example uses true
mathematical negation, which cannot be expressed in Prolog.

183.2 Prolog semantics


In pure Prolog, NAF literals of the form not p can occur in the body of clauses and can be used to derive other NAF
literals. For example, given only the four clauses

p ← q ∧ not r

q ← s
q ← t
t ←
NAF derives not s, not r and p.
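The derivation can be mimicked mechanically. Below is a minimal Python sketch (an illustration under assumed data structures, not the article's algorithm or any Prolog internals) that evaluates this propositional program with negation as failure: an atom succeeds if some clause body for it succeeds, and a NAF literal not a succeeds exactly when a fails.

# Clause bodies: plain strings are atoms, ('not', a) is a NAF literal.
program = {
    'p': [['q', ('not', 'r')]],
    'q': [['s'], ['t']],
    't': [[]],                              # a fact: clause with an empty body
}

def derivable(atom, seen=()):
    if atom in seen:                        # crude loop guard for this tiny example
        return False
    for body in program.get(atom, []):      # an atom with no clauses simply fails
        if all(not derivable(lit[1]) if isinstance(lit, tuple)
               else derivable(lit, seen + (atom,))
               for lit in body):
            return True
    return False

print(derivable('p'), derivable('r'), derivable('s'))   # True False False

This matches the intended answers: t holds, hence q; r and s fail, so not r succeeds and p is derived.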

183.3 Completion semantics


The semantics of NAF remained an open issue until Keith Clark [1978] showed that it is correct with respect to the
completion of the logic program, where, loosely speaking, "only" and "←" are interpreted as "if and only if", written
as "iff" or "↔".
For example, the completion of the four clauses above is


p ↔ q ∧ not r

q ↔ s ∨ t

t ↔ true

r ↔ false

s ↔ false

The NAF inference rule simulates reasoning explicitly with the completion, where both sides of the equivalence are
negated and negation on the right-hand side is distributed down to atomic formulae. For example, to show not p ,
NAF simulates reasoning with the equivalences

not p ↔ not q ∨ r

not q ↔ not s ∧ not t

not t ↔ false

not r ↔ true

not s ↔ true

In the non-propositional case, the completion needs to be augmented with equality axioms, to formalise the assump-
tion that individuals with distinct names are distinct. NAF simulates this by failure of unication. For example, given
only the two clauses

p(a)

p(b)

NAF derives not p(c) .


The completion of the program is

p(X) ↔ X = a ∨ X = b

augmented with unique names axioms and domain closure axioms.


The completion semantics is closely related both to circumscription and to the closed world assumption.

183.4 Autoepistemic semantics


The completion semantics justifies interpreting the result not p of a NAF inference as the classical negation ¬p of p.
However, Michael Gelfond [1987] showed that it is also possible to interpret not p literally as "p can not be shown",
"p is not known" or "p is not believed", as in autoepistemic logic. The autoepistemic interpretation was developed
further by Gelfond and Lifschitz [1988] and is the basis of answer set programming.
The autoepistemic semantics of a pure Prolog program P with NAF literals is obtained by expanding P with a set Δ
of ground (variable-free) NAF literals that is stable in the sense that

Δ = { not p | p is not implied by P ∪ Δ }



In other words, a set Δ of assumptions about what can not be shown is stable if and only if Δ is the set of all sentences
that truly can not be shown from the program P expanded by Δ. Here, because of the simple syntax of pure Prolog
programs, "implied by" can be understood very simply as derivability using modus ponens and universal instantiation
alone.
A program can have zero, one or more stable expansions. For example,

p ← not p

has no stable expansions.

p ← not q

has exactly one stable expansion Δ = { not q }

p ← not q

q ← not p
has exactly two stable expansions Δ1 = { not p } and Δ2 = { not q }.
The autoepistemic interpretation of NAF can be combined with classical negation, as in extended logic programming
and answer set programming. Combining the two negations, it is possible to express, for example

¬p ← not p (the closed world assumption) and

p ← not ¬p (p holds by default).

183.5 Footnotes
[1] Clark, Keith (1978). Logic and Data Bases (PDF). Springer-Verlag. pp. 293322 (Negation as a failure). doi:10.1007/978-
1-4684-3384-5_11.

183.6 References
K. Clark [1978, 1987]. Negation as failure. Readings in nonmonotonic reasoning, Morgan Kaufmann Publish-
ers, pages 311-325.
M. Gelfond [1987] On Stratified Autoepistemic Theories, Proc. AAAI, pages 207-211.

M. Gelfond and V. Lifschitz [1988] The Stable Model Semantics for Logic Programming, Proc. 5th Inter-
national Conference and Symposium on Logic Programming (R. Kowalski and K. Bowen, eds), MIT Press,
pages 1070-1080.
J.C. Shepherdson [1984] Negation as failure: a comparison of Clark's completed data base and Reiter's closed
world assumption, Journal of Logic Programming, vol 1, 1984, pages 51-81.
J.C. Shepherdson [1985] Negation as failure II, Journal of Logic Programming, vol 3, 1985, pages 185-202.

183.7 External links


Report from the W3C Workshop on Rule Languages for Interoperability. Includes notes on NAF and SNAF
(scoped negation as failure).
Chapter 184

Negation introduction

Negation introduction is a rule of inference, or transformation rule, in the field of propositional calculus.
Negation introduction states that if a given antecedent implies both the consequent and its complement, then the
antecedent is a contradiction.[1][2]

184.1 Formal notation


This can be written as: (P → Q) ∧ (P → ¬Q) → ¬P
An example of its use would be an attempt to prove two contradictory statements from a single fact. For example, if
a person were to state When the phone rings I get happy and then later state When the phone rings I get annoyed,
the logical inference which is made from this contradictory information is that the person is making a false statement
about the phone ringing.
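As a sanity check, the schema above is a classical tautology. A minimal Python sketch (an illustration, not part of the article) verifying it by enumerating truth values:

from itertools import product

def implies(a, b):
    return (not a) or b

# (P -> Q) and (P -> not Q) -> not P, checked over all truth assignments
assert all(implies(implies(P, Q) and implies(P, not Q), not P)
           for P, Q in product([False, True], repeat=2))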

184.2 External links


Category:Propositional Calculus on ProofWiki (GFDLed)

184.3 References
[1] Wansing (Ed.), Heinrich (1996). Negation: A notion in focus. Berlin: Walter de Gruyter. ISBN 3110147696.

[2] Haegeman, Lilliane (30 Mar 1995). The Syntax of Negation. Cambridge: Cambridge University Press. p. 70. ISBN
0521464927.

Chapter 185

Negation normal form

In mathematical logic, a formula is in negation normal form if the negation operator (¬, not) is only applied to
variables and the only other allowed Boolean operators are conjunction (∧, and) and disjunction (∨, or).
Negation normal form is not a canonical form: for example, a ∧ (b ∨ ¬c) and (a ∧ b) ∨ (a ∧ ¬c) are equivalent, and
are both in negation normal form.
In classical logic and many modal logics, every formula can be brought into this form by replacing implications and
equivalences by their definitions, using De Morgan's laws to push negation inwards, and eliminating double negations.
This process can be represented using the following rewrite rules (Handbook of Automated Reasoning 1, p. 204):

A → B ⇒ ¬A ∨ B

¬(A ∨ B) ⇒ ¬A ∧ ¬B
¬(A ∧ B) ⇒ ¬A ∨ ¬B
¬¬A ⇒ A
¬∃xA ⇒ ∀x¬A
¬∀xA ⇒ ∃x¬A
[In these rules, the symbol → indicates logical implication in the formula being rewritten, and ⇒ is the rewriting
operation.]
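A minimal Python sketch of the propositional rewrite rules above (an illustration under an assumed tuple representation, not part of the article; quantifiers are omitted). Formulas are nested tuples such as ('imp', A, B), ('and', A, B), ('or', A, B), ('not', A), or a variable name.

def nnf(f):
    if isinstance(f, str):                      # a variable is already in NNF
        return f
    op = f[0]
    if op == 'imp':                             # A -> B  =>  ~A v B
        return ('or', nnf(('not', f[1])), nnf(f[2]))
    if op in ('and', 'or'):
        return (op, nnf(f[1]), nnf(f[2]))
    if op == 'not':
        g = f[1]
        if isinstance(g, str):                  # negation applied to a variable: keep
            return ('not', g)
        if g[0] == 'not':                       # ~~A  =>  A
            return nnf(g[1])
        if g[0] == 'and':                       # ~(A ^ B)  =>  ~A v ~B
            return ('or', nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] == 'or':                        # ~(A v B)  =>  ~A ^ ~B
            return ('and', nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] == 'imp':                       # rewrite the implication first, then push ~ inwards
            return nnf(('not', ('or', ('not', g[1]), g[2])))
    raise ValueError(f)

print(nnf(('not', ('or', 'A', ('not', 'C')))))  # ('and', ('not', 'A'), 'C')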
A formula in negation normal form can be put into the stronger conjunctive normal form or disjunctive normal form
by applying distributivity.

185.1 Examples and counterexamples


The following formulae are all in negation normal form:

(A ∨ B) ∧ C

(A ∧ (¬B ∨ C) ∧ ¬C) ∨ D
A ∨ ¬B
A ∧ ¬B
The first example is also in conjunctive normal form and the last two are in both conjunctive normal form and
disjunctive normal form, but the second example is in neither.
The following formulae are not in negation normal form:


A → B

¬(A ∨ B)
¬(A ∧ B)
¬(A ∨ ¬C)
They are however respectively equivalent to the following formulae in negation normal form:

¬A ∨ B

¬A ∧ ¬B
¬A ∨ ¬B
¬A ∧ C

185.2 References
Alan J.A. Robinson and Andrei Voronkov, Handbook of Automated Reasoning 1:203 (2001) ISBN 0444829490.

185.3 External links


Java applet for converting logical formula to Negation Normal Form, showing laws used
Chapter 186

Nicod's axiom

Nicod's axiom (named after Jean Nicod) is an axiom in propositional calculus that can be used as a sole wff in a
two-axiom formalization of zeroth-order logic.
The axiom states that the following always has a true truth value.

((φ ⊼ (χ ⊼ ψ)) ⊼ ((τ ⊼ (τ ⊼ τ)) ⊼ ((θ ⊼ χ) ⊼ ((φ ⊼ θ) ⊼ (φ ⊼ θ)))))[1]

To utilize this axiom, Nicod made a rule of inference, called Nicod's modus ponens.
1. φ
2. (φ ⊼ (χ ⊼ ψ))
∴ ψ [2]
In 1931, Mordechaj Wajsberg found an adequate and easier-to-work-with alternative.

((φ ⊼ (ψ ⊼ χ)) ⊼ (((θ ⊼ χ) ⊼ ((φ ⊼ θ) ⊼ (φ ⊼ θ))) ⊼ (φ ⊼ (φ ⊼ ψ))))[3]

186.1 References
[1] http://us.metamath.org/mpegif/nic-ax.html

[2] http://us.metamath.org/mpegif/nic-mp.html

[3] http://www.wolframscience.com/nksonline/page-1151a-text

186.2 External links


Works related to A Reduction in the number of the Primitive Propositions of Logic at Wikisource

Chapter 187

Open formula

An open formula is a formula that contains at least one free variable. Some educational resources use the term
"open sentence",[1] but this use conflicts with the definition of "sentence" as a formula that does not contain any free
variables.

187.1 See also


First-order logic
Higher-order logic

Quantier (logic)
Predicate (mathematical logic)

187.2 References and notes


[1] http://mathforum.org/library/drmath/view/53280.html

Chapter 188

Parity function

In Boolean algebra, a parity function is a Boolean function whose value is 1 if and only if the input vector has an
odd number of ones. The parity function of two inputs is also known as the XOR function.
The parity function is notable for its role in theoretical investigation of circuit complexity of Boolean functions.
The output of the Parity Function is the Parity bit.

188.1 Definition

The n-variable parity function is the Boolean function f : {0, 1}^n → {0, 1} with the property that f(x) = 1 if and
only if the number of ones in the vector x ∈ {0, 1}^n is odd. In other words, f is defined as follows:

f(x) = x1 ⊕ x2 ⊕ ⋯ ⊕ xn
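A minimal Python sketch of this definition (an illustration, not part of the article):

def parity(bits):
    """Return 1 if the input vector has an odd number of ones, else 0."""
    result = 0
    for b in bits:
        result ^= b              # XOR accumulates x1 ⊕ x2 ⊕ ... ⊕ xn
    return result

print(parity([1, 0, 1]))         # 0: two ones (even)
print(parity([1, 1, 1, 0]))      # 1: three ones (odd)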

188.2 Properties

Parity only depends on the number of ones and is therefore a symmetric Boolean function.
The n-variable parity function and its negation are the only Boolean functions for which all disjunctive normal forms
have the maximal number of 2^(n-1) monomials of length n and all conjunctive normal forms have the maximal number
of 2^(n-1) clauses of length n.[1]

188.3 Circuit complexity

In the early 1980s, Merrick Furst, James Saxe and Michael Sipser[2] and independently Miklós Ajtai[3] established
super-polynomial lower bounds on the size of constant-depth Boolean circuits for the parity function, i.e., they showed
that polynomial-size constant-depth circuits cannot compute the parity function. Similar results were also established
for the majority, multiplication and transitive closure functions, by reduction from the parity function.[2]
Håstad (1987) established tight exponential lower bounds on the size of constant-depth Boolean circuits for the parity
function. Håstad's Switching Lemma is the key technical tool used for these lower bounds and Johan Håstad was
awarded the Gödel Prize for this work in 1994. The precise result is that depth-k circuits with AND, OR, and NOT
gates require size exp(Ω(n^(1/(k-1)))) to compute the parity function. This is asymptotically almost optimal as there are
depth-k circuits computing parity which have size exp(O(n^(1/(k-1)))).


188.4 Infinite version


An infinite parity function is a function f : {0, 1}^ω → {0, 1} mapping every infinite binary string to 0 or 1, having
the following property: if w and v are infinite binary strings differing only on a finite number of coordinates then
f(w) = f(v) if and only if w and v differ on an even number of coordinates.
Assuming the axiom of choice it can be easily proved that parity functions exist and there are 2^c many of them, as many
as the number of all functions from {0, 1}^ω to {0, 1}. It is enough to take one representative per equivalence class of
the relation ≈ defined as follows: w ≈ v if w and v differ at a finite number of coordinates. Having such representatives,
we can map all of them to 0; the rest of the values of f are then determined unambiguously.
Infinite parity functions are often used in theoretical computer science and set theory because of their simple
definition and, on the other hand, their descriptive complexity. For example, it can be shown that an inverse image
f^(-1)[0] is a non-Borel set.

188.5 See also


Related topics

Error Correction

Error Detection

The output of the function

Parity bit

188.6 References
[1] Ingo Wegener, Randall J. Pruim, Complexity Theory, 2005, ISBN 3-540-21045-8, p. 260

[2] Merrick Furst, James Saxe and Michael Sipser, Parity, Circuits, and the Polynomial-Time Hierarchy, Annu. Intl. Symp.
Found.Coimputer Sci., 1981, Theory of Computing Systems, vol. 17, no. 1, 1984, pp. 1327, doi:10.1007/BF01744431

[3] Mikls Ajtai, " 11 -Formulae on Finite Structures, Annals of Pure and Applied Logic, 24 (1983) 148.

Hstad, Johan (1987), Computational limitations of small depth circuits (PDF), Ph.D. thesis, Massachusetts
Institute of Technology.
Chapter 189

Partial function

Not to be confused with partial function of a multilinear map or the mathematical concept of a piecewise function.

In mathematics, a partial function from X to Y (written as f: X ⇀ Y) is a function f: X′ → Y, for some subset X′

of X. It generalizes the concept of a function f: X → Y by not forcing f to map every element of X to an element
of Y (only some subset X′ of X). If X′ = X, then f is called a total function and is equivalent to a function. Partial
functions are often used when the exact domain, X′, is not known (e.g. many functions in computability theory).
Specifically, we will say that for any x ∈ X, either:

f(x) = y ∈ Y (it is defined as a single element in Y) or

f(x) is undefined.

For example, we can consider the square root function restricted to the integers

g: Z → Z

g(n) = √n.
Thus g(n) is only defined for n that are perfect squares (i.e., 0, 1, 4, 9, 16, ...). So, g(25) = 5, but g(26) is undefined.
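A minimal Python sketch of g (an illustration, not from the article), treating "undefined" as raising an error:

import math

def g(n: int) -> int:
    """Integer square root, defined only on perfect squares."""
    r = math.isqrt(n)                 # requires n >= 0; raises ValueError otherwise
    if r * r != n:
        raise ValueError(f"g is undefined at {n}")
    return r

print(g(25))    # 5
# g(26) raises ValueError: g is undefined at 26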

189.1 Basic concepts


There are two distinct meanings in current mathematical usage for the notion of the domain of a partial function.
Most mathematicians, including recursion theorists, use the term "domain of f" for the set of all values x such that
f(x) is defined (X′ above). But some, particularly category theorists, consider the domain of a partial function f: X ⇀
Y to be X, and refer to X′ as the domain of definition. Similarly, the term range can refer to either the codomain or
the image of a function.
Occasionally, a partial function with domain X and codomain Y is written as f: X ⇸ Y, using an arrow with vertical
stroke.
A partial function is said to be injective or surjective when the total function given by the restriction of the partial
function to its domain of denition is. A partial function may be both injective and surjective.
Because a function is trivially surjective when restricted to its image, the term partial bijection denotes a partial
function which is injective.[1]
An injective partial function may be inverted to an injective partial function, and a partial function which is both
injective and surjective has an injective function as inverse. Furthermore, a total function which is injective may be
inverted to an injective partial function.
The notion of transformation can be generalized to partial functions as well. A partial transformation is a function
f: A B, where both A and B are subsets of some set X.[1]


189.2 Total function


Total function is a synonym for function. The use of the adjective total is to suggest that it is a special case of a
partial function (specically, a total function with domain X is a special case of a partial function over X). The adjective
will typically be used for clarity in contexts where partial functions are common, for example in computability theory.

189.3 Discussion and examples


The first diagram above represents a partial function that is not a total function, since the element 1 in the left-hand set
is not associated with anything in the right-hand set, whereas the second diagram represents a total function, since
every element of the left-hand set is associated with exactly one element of the right-hand set.

189.3.1 Natural logarithm


Consider the natural logarithm function mapping the real numbers to themselves. The logarithm of a non-positive
real is not a real number, so the natural logarithm function doesn't associate any real number in the codomain with
any non-positive real number in the domain. Therefore, the natural logarithm function is not a total function when
viewed as a function from the reals to themselves, but it is a partial function. If the domain is restricted to only include
the positive reals (that is, if the natural logarithm function is viewed as a function from the positive reals to the reals),
then the natural logarithm is a total function.

189.3.2 Subtraction of natural numbers


Subtraction of natural numbers (non-negative integers) can be viewed as a partial function:

f : N × N ⇀ N
f(x, y) = x − y.
It is defined only when x ≥ y.

189.3.3 Bottom element


In denotational semantics a partial function is considered as returning the bottom element when it is undefined.
In computer science a partial function corresponds to a subroutine that raises an exception or loops forever. The IEEE
floating point standard defines a not-a-number value which is returned when a floating point operation is undefined
and exceptions are suppressed, e.g. when the square root of a negative number is requested.
In a programming language where function parameters are statically typed, a function may be defined as a partial
function because the language's type system cannot express the exact domain of the function, so the programmer
instead gives it the smallest domain which is expressible as a type and contains the true domain.

189.3.4 In Category theory


In category theory, when considering the operation of morphism composition in concrete categories, the compo-
sition operation ∘ : hom(C) × hom(C) → hom(C) is a total function if and only if ob(C) has one element. The
reason for this is that two morphisms f : X → Y and g : U → V can only be composed as g ∘ f if Y = U, that is,
the codomain of f must equal the domain of g.
The category of sets and partial functions is equivalent to but not isomorphic with the category of pointed sets and
point-preserving maps.[2] One textbook notes that This formal completion of sets and partial maps by adding im-
proper, innite elements was reinvented many times, in particular, in topology (one-point compactication) and
in theoretical computer science.[3]
The category of sets and partial bijections is equivalent to its dual.[4] It is the prototypical inverse category.[5]

189.3.5 In abstract algebra


Partial algebra generalizes the notion of universal algebra to partial operations. An example would be a field, in which
the multiplicative inversion is the only proper partial operation (because division by zero is not defined).[6]
The set of all partial functions (partial transformations) on a given base set, X, forms a regular semigroup called the
semigroup of all partial transformations (or the partial transformation semigroup on X), typically denoted by PT X
.[7][8][9] The set of all partial bijections on X forms the symmetric inverse semigroup.[7][8]

189.3.6 Charts and atlases for manifolds and fiber bundles


Charts in the atlases which specify the structure of manifolds and fiber bundles are partial functions. In the case of
manifolds, the domain is the point set of the manifold. In the case of fiber bundles, the domain is the total space of
the fiber bundle. In these applications, the most important construction is the transition map, which is the composite
of one chart with the inverse of another. The initial classification of manifolds and fiber bundles is largely expressed
in terms of constraints on these transition maps.
The reason for the use of partial functions instead of total functions is to permit general global topologies to be
represented by stitching together local patches to describe the global structure. The patches are the domains where
the charts are defined.

189.4 See also


Bijection

Injective function

Surjective function

Multivalued function

Densely dened operator

189.5 References
[1] Christopher Hollings (2014). Mathematics across the Iron Curtain: A History of the Algebraic Theory of Semigroups.
American Mathematical Society. p. 251. ISBN 978-1-4704-1493-1.

[2] Lutz Schrder (2001). Categories: a free tour. In Jrgen Koslowski and Austin Melton. Categorical Perspectives. Springer
Science & Business Media. p. 10. ISBN 978-0-8176-4186-3.

[3] Neal Koblitz; B. Zilber; Yu. I. Manin (2009). A Course in Mathematical Logic for Mathematicians. Springer Science &
Business Media. p. 290. ISBN 978-1-4419-0615-1.

[4] Francis Borceux (1994). Handbook of Categorical Algebra: Volume 2, Categories and Structures. Cambridge University
Press. p. 289. ISBN 978-0-521-44179-7.

[5] Marco Grandis (2012). Homological Algebra: The Interplay of Homology with Distributive Lattices and Orthodox Semi-
groups. World Scientic. p. 55. ISBN 978-981-4407-06-9.

[6] Peter Burmeister (1993). Partial algebras an introductory survey. In Ivo G. Rosenberg and Gert Sabidussi. Algebras
and Orders. Springer Science & Business Media. ISBN 978-0-7923-2143-9.

[7] Alfred Hoblitzelle Cliord; G. B. Preston (1967). The Algebraic Theory of Semigroups. Volume II. American Mathematical
Soc. p. xii. ISBN 978-0-8218-0272-4.

[8] Peter M. Higgins (1992). Techniques of semigroup theory. Oxford University Press, Incorporated. p. 4. ISBN 978-0-19-
853577-5.

[9] Olexandr Ganyushkin; Volodymyr Mazorchuk (2008). Classical Finite Transformation Semigroups: An Introduction.
Springer Science & Business Media. pp. 16 and 24. ISBN 978-1-84800-281-4.

Martin Davis (1958), Computability and Unsolvability, McGrawHill Book Company, Inc, New York. Re-
published by Dover in 1982. ISBN 0-486-61471-9.
Stephen Kleene (1952), Introduction to Meta-Mathematics, North-Holland Publishing Company, Amsterdam,
Netherlands, 10th printing with corrections added on 7th printing (1974). ISBN 0-7204-2103-9.
Harold S. Stone (1972), Introduction to Computer Organization and Data Structures, McGrawHill Book Com-
pany, New York.
Chapter 190

Peirce's law

In logic, Peirce's law is named after the philosopher and logician Charles Sanders Peirce. It was taken as an axiom
in his first axiomatisation of propositional logic. It can be thought of as the law of excluded middle written in a form
that involves only one sort of connective, namely implication.
In propositional calculus, Peirce's law says that ((P→Q)→P)→P. Written out, this means that P must be true if there
is a proposition Q such that the truth of P follows from the truth of "if P then Q". In particular, when Q is taken to
be a false formula, the law says that if P must be true whenever it implies falsity, then P is true. In this way Peirce's
law implies the law of excluded middle.
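A minimal Python sketch (an illustration, not part of the article) confirming by truth-table enumeration that ((P→Q)→P)→P is a classical tautology:

from itertools import product

def implies(a, b):
    return (not a) or b

assert all(implies(implies(implies(P, Q), P), P)
           for P, Q in product([False, True], repeat=2))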
Peirce's law does not hold in intuitionistic logic or intermediate logics and cannot be deduced from the deduction
theorem alone.
Under the Curry–Howard isomorphism, Peirce's law is the type of continuation operators, e.g. call/cc in Scheme.[1]

190.1 History
Here is Peirce's own statement of the law:

A fifth icon is required for the principle of excluded middle and other propositions connected with it.
One of the simplest formulae of this kind is:

{(x → y) → x} → x.

This is hardly axiomatical. That it is true appears as follows. It can only be false by the final consequent
x being false while its antecedent (x → y) → x is true. If this is true, either its consequent, x, is true, when
the whole formula would be true, or its antecedent x → y is false. But in the last case the antecedent of
x → y, that is x, must be true. (Peirce, the Collected Papers 3.384).

Peirce goes on to point out an immediate application of the law:

From the formula just given, we at once get:

{(x → y) → a} → x,

where the a is used in such a sense that (x → y) → a means that from (x → y) every proposition follows.
With that understanding, the formula states the principle of excluded middle, that from the falsity of the
denial of x follows the truth of x. (Peirce, the Collected Papers 3.384).

Warning: ((x→y)→a)→x is not a tautology. However, [a→x]→[((x→y)→a)→x] is a tautology.

190.2 Other proofs of Peirce's law


Showing that Peirce's law applies does not mean that P→Q or Q is true; we have that P is true, but only (P→Q)→P, not
P→(P→Q) (see affirming the consequent).
Simple proof: ((p → q) → p) ≡ (¬(p → q) ∨ p) ≡ (¬(¬p ∨ q) ∨ p) ≡ ((p ∧ ¬q) ∨ p) ≡ ((p ∧ ¬q) ∨ (p ∧ 1)) ≡ (p ∧ (¬q ∨ 1)) ≡ (p ∧ 1) ≡ p, so ((p → q) → p) → p reduces to p → p.


190.3 Using Peirce's law with the deduction theorem


Peirce's law allows one to enhance the technique of using the deduction theorem to prove theorems. Suppose one
is given a set of premises Γ and one wants to deduce a proposition Z from them. With Peirce's law, one can add
(at no cost) additional premises of the form Z→P to Γ. For example, suppose we are given P→Z and (P→Q)→Z
and we wish to deduce Z so that we can use the deduction theorem to conclude that (P→Z)→(((P→Q)→Z)→Z) is a
theorem. Then we can add another premise Z→Q. From that and P→Z, we get P→Q. Then we apply modus ponens
with (P→Q)→Z as the major premise to get Z. Applying the deduction theorem, we get that (Z→Q)→Z follows from
the original premises. Then we use Peirce's law in the form ((Z→Q)→Z)→Z and modus ponens to derive Z from the
original premises. Then we can finish off proving the theorem as we originally intended.

P→Z 1. hypothesis
(P→Q)→Z 2. hypothesis
Z→Q 3. hypothesis
P 4. hypothesis
Z 5. modus ponens using steps 4 and 1
Q 6. modus ponens using steps 5 and 3
P→Q 7. deduction from 4 to 6
Z 8. modus ponens using steps 7 and 2
(Z→Q)→Z 9. deduction from 3 to 8
((Z→Q)→Z)→Z 10. Peirce's law
Z 11. modus ponens using steps 9 and 10
((P→Q)→Z)→Z 12. deduction from 2 to 11
(P→Z)→(((P→Q)→Z)→Z) 13. deduction from 1 to 12 QED

190.4 Completeness of the implicational propositional calculus


Main article: Implicational propositional calculus

One reason that Peirce's law is important is that it can substitute for the law of excluded middle in the logic which
only uses implication. The sentences which can be deduced from the axiom schemas:

P→(Q→P)
(P→(Q→R))→((P→Q)→(P→R))
((P→Q)→P)→P
from P and P→Q infer Q

(where P, Q, R contain only "→" as a connective) are all the tautologies which use only "→" as a connective.

190.5 Notes
[1] A Formulae-as-Types Notion of Control - Griffin defines K on page 3 as an equivalent to Scheme's call/cc and then discusses
its type being the equivalent of Peirce's law at the end of section 5 on page 9.

190.6 Further reading


Main article: Charles Sanders Peirce bibliography

Peirce, C.S., On the Algebra of Logic: A Contribution to the Philosophy of Notation, American Journal of
Mathematics 7, 180202 (1885). Reprinted, the Collected Papers of Charles Sanders Peirce 3.359403 and the
Writings of Charles S. Peirce: A Chronological Edition 5, 162190.

Peirce, C.S., Collected Papers of Charles Sanders Peirce, Vols. 16, Charles Hartshorne and Paul Weiss (eds.),
Vols. 78, Arthur W. Burks (ed.), Harvard University Press, Cambridge, MA, 19311935, 1958.
Chapter 191

Petrick's method

In Boolean algebra, Petrick's method (also known as the branch-and-bound method) is a technique described by
Stanley R. Petrick (1931–2006)[1][2] in 1956[3] for determining all minimum sum-of-products solutions from a prime
implicant chart. Petrick's method is very tedious for large charts, but it is easy to implement on a computer.

1. Reduce the prime implicant chart by eliminating the essential prime implicant rows and the corresponding
columns.

2. Label the rows of the reduced prime implicant chart P1 , P2 , P3 , P4 , etc.

3. Form a logical function P which is true when all the columns are covered. P consists of a product of sums
where each sum term has the form (Pi0 + Pi1 + ⋯ + PiN), where each Pij represents a row covering column
i.

4. Reduce P to a minimum sum of products by multiplying out and applying X + XY = X .

5. Each term in the result represents a solution, that is, a set of rows which covers all of the minterms in the
table. To determine the minimum solutions, rst nd those terms which contain a minimum number of prime
implicants.

6. Next, for each of the terms found in step ve, count the number of literals in each prime implicant and nd the
total number of literals.

7. Choose the term or terms composed of the minimum total number of literals, and write out the corresponding
sums of prime implicants.

Example of Petrick's method[4]


Following is the function we want to reduce:


f(A, B, C) = Σm(0, 1, 2, 5, 6, 7)

The prime implicant chart from the Quine-McCluskey algorithm is as follows:


             | 0 1 2 5 6 7
-------------|------------
K (0,1) a'b' | X X
L (0,2) a'c' | X   X
M (1,5) b'c  |   X   X
N (2,6) bc'  |     X   X
P (5,7) ac   |       X   X
Q (6,7) ab   |         X X
Based on the X marks in the table above, build a product of sums of the rows where each row is added, and columns
are multiplied together:
(K+L)(K+M)(L+N)(M+P)(N+Q)(P+Q)
Use the distributive law to turn that expression into a sum of products. Also use the following equivalences to simplify
the final expression: X + XY = X, XX = X and X + X = X
= (K+L)(K+M)(L+N)(M+P)(N+Q)(P+Q) = (K+LM)(N+LQ)(P+MQ) = (KN+KLQ+LMN+LMQ)(P+MQ) = KNP
+ KLPQ + LMNP + LMPQ + KMNQ + KLMQ + LMNQ + LMQ


Now use again the following equivalence to further reduce the equation: X + XY = X
= KNP + KLPQ + LMNP + LMQ + KMNQ
Choose products with fewest terms, in this example, there are two products with three terms:
KNP LMQ
Choose term or terms with fewest total literals. In our example, the two products both expand to six literals total
each:
KNP expands to a'b'+ bc'+ ac LMQ expands to a'c'+ b'c + ab
So either one can be used. In general, application of Petricks method is tedious for large charts, but it is easy to
implement on a computer.
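Steps 3 to 5 are mechanical, which is why the method is easy to program. The following minimal Python sketch (an illustration under assumed data structures, not Petrick's original formulation) multiplies out the product of sums for the chart above and applies the absorption law X + XY = X; it recovers the two minimum covers KNP and LMQ.

def petrick(columns):
    # columns: one set of covering row labels per column of the chart.
    # Each product term is represented as a frozenset of row labels.
    terms = {frozenset()}
    for col in columns:
        # distribute: multiply the running sum of products by (R1 + R2 + ...)
        terms = {t | {row} for t in terms for row in col}
        # absorption X + XY = X: drop any term that strictly contains another
        terms = {t for t in terms if not any(u < t for u in terms)}
    fewest = min(len(t) for t in terms)
    return [sorted(t) for t in terms if len(t) == fewest]

chart = [{'K', 'L'}, {'K', 'M'}, {'L', 'N'},   # columns 0, 1, 2
         {'M', 'P'}, {'N', 'Q'}, {'P', 'Q'}]   # columns 5, 6, 7
print(petrick(chart))   # [['K', 'N', 'P'], ['L', 'M', 'Q']] (in either order)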

191.1 References
[1] Unknown. Biographical note. Retrieved 2017-04-12. Stanley R. Petrick was born in Cedar Rapids, Iowa on August 16,
1931. He attended the Roosevelt High School and received a B. S. degree in Mathematics from the Iowa State University
in 1953. During 1953 to 1955 he attended MIT while on active duty as an Air Force ocer and received the S. M. degree
from the Department of Electrical Engineering in 1955. He was elected to Sigma Xi in 1955.
Mr. Petrick has been associated with the Applied Mathematics Board of the Data Sciences Laboratory at the Air Force
Cambridge Research Laboratories since 1955 and his recent studies at MIT have been partially supported by AFCRL.
During 1959-1962 he held the position of Lecturer in Mathematics in the Evening Graduate Division of Northeastern
University.
Mr. Petrick is currently a member of the Linguistic Society of America, The Linguistic Circle of New York, The American
Mathematical Association, The Association for Computing Machinery, and the Association for Machine Translation and
Computational Linguistics.

[2] Obituaries - Cedar Rapids - Stanley R. Petrick. The Gazette. 2006-08-05. p. 16. Retrieved 2017-04-12. [] CEDAR
RAPIDS Stanley R. Petrick, 74, formerly of Cedar Rapids, died July 27, 2006, in Presbyterian/St. Lukes Hospital, Denver,
Colo., following a 13-year battle with leukemia. A memorial service will be held Aug. 14 at the United Presbyterian
Church in Laramie, Wyo., where he lived for many years. [] Stan Petrick was born in Cedar Rapids on Aug. 6, 1931 to
Catherine Hunt Petrick and Fred Petrick. He graduated from Roosevelt High School in 1949 and received a B.S. degree
in mathematics from Iowa State University. Stan married Mary Ethel Buxton in 1953.
He joined the U.S. Air Force and was assigned as a student ocer studying digital computation at MIT, where he earned an
M.S. degree. He was then assigned to the Applied Mathematics Branch of the Air Force Cambridge Research Laboratory
and while there earned a Ph.D. in linguistics.
He spent 20 years in the Theoretical and Computational Linguistics Group of the Mathematical Sciences Department at
IBM's T.J. Watson Research Center, conducting research in formal language theory. He had served as an assistant director
of the Mathematical Sciences Department, chairman of the Special Interest Group on Symbolic and Algebraic Manipulation
of the Association for Computing Machinery and president of the Association for Computational Linguistics. He authored
many technical publications.
He taught three years at Northeastern University and 13 years at the Pratt Institute. Dr. Petrick joined the University of
Wyoming in 1987, where he was instrumental in developing and implementing the Ph.D. program in the department and
served as a thesis adviser for many graduate students. He retired in 1995. [] (NB. Includes a photo of Stanley R. Petrick.)

[3] Petrick, Stanley R. (1956-04-10). A Direct Determination of the Irredundant Forms of a Boolean Function from the Set
of Prime Implicants. Bedford, Cambridge, MA, USA: Air Force Cambridge Research Center. AFCRC Technical Report
TR-56-110.

[4] http://www.mrc.uidaho.edu/mrc/people/jff/349/lect.10 Lecture #10: Petricks Method

191.2 Further reading


Roth, Jr., Charles H. Fundamentals of Logic Design

191.3 External links


Tutorial on Quine-McCluskey and Petricks method (pdf).

Prime Implicant Simplication Using Petricks Method


Chapter 192

Plural quantification

In mathematics and logic, plural quantification is the theory that an individual variable x may take on plural, as
well as singular, values. As well as substituting individual objects such as Alice, the number 1, the tallest building in
London etc. for x, we may substitute both Alice and Bob, or all the numbers between 0 and 10, or all the buildings
in London over 20 stories.
The point of the theory is to give first-order logic the power of set theory, but without any "existential commitment"
to such objects as sets. The classic expositions are Boolos 1984, and Lewis 1991.

192.1 History

The view is commonly associated with George Boolos, though it is older (see notably Simons 1982), and is related
to the view of classes defended by John Stuart Mill and other nominalist philosophers. Mill argued that universals or
classes are not a peculiar kind of thing, having an objective existence distinct from the individual objects that fall
under them, but is neither more nor less than the individual things in the class. (Mill 1904, II. ii. 2,also I. iv. 3).
A similar position was also discussed by Bertrand Russell in chapter VI of Russell (1903), but later dropped in favour
of a no-classes theory. See also Gottlob Frege 1895 for a critique of an earlier view defended by Ernst Schroeder.
The general idea can be traced back to Leibniz. (Levey 2011, pp. 129133)
Interest revived in plurals with work in linguistics in the 1970s by Remko Scha, Godehard Link, Fred Landman,
Roger Schwarzschild, Peter Lasersohn and others, who developed ideas for a semantics of plurals.

192.2 Background and motivation

192.2.1 Multigrade (variably polyadic) predicates and relations

Sentences like

Alice and Bob cooperate.


Alice, Bob and Carol cooperate.

are said to involve a multigrade (aka variably polyadic, also anadic) predicate or relation (cooperate in this
example), meaning that they stand for the same concept even though they don't have a xed arity (cf. Linnebo &
Nicolas 2008). The notion of multigrade relation/predicate has appeared as early as the 1940s and has been notably
used by Quine (cf. Morton 1975). Plural quantication deals with formalizing the quantication over the variable-
length arguments of such predicates, e.g. "xx cooperate where xx is a plural variable. Note that in this example it
makes no sense, semantically, to instantiate xx with the name of a single person.


192.2.2 Nominalism

Main article: Nominalism

Broadly speaking, nominalism denies the existence of universals (abstract entities), like sets, classes, relations, prop-
erties, etc. Thus the plural logic(s) were developed as an attempt to formalize reasoning about plurals, such as those
involved in multigrade predicates, apparently without resorting to notions that nominalists deny, e.g. sets.
Standard first-order logic has difficulties in representing some sentences with plurals. Most well-known is the Geach–
Kaplan sentence "some critics admire only one another". Kaplan proved that it is nonfirstorderizable (the proof can
be found in that article). Hence its paraphrase into a formal language commits us to quantification over (i.e. the
existence of) sets. But some find it implausible that a commitment to sets is essential in explaining these sentences.
Note that an individual instance of the sentence, such as "Alice, Bob and Carol admire only one another", need not
involve sets and is equivalent to the conjunction of the following first-order sentences:

∀x(if Alice admires x, then x = Bob or x = Carol)

∀x(if Bob admires x, then x = Alice or x = Carol)
∀x(if Carol admires x, then x = Alice or x = Bob)

where x ranges over all critics [it being taken as read that critics cannot admire themselves]. But this seems to be an
instance of "some people admire only one another", which is nonfirstorderizable.
Boolos argued that 2nd-order monadic quantication may be systematically interpreted in terms of plural quantica-
tion, and that, therefore, 2nd-order monadic quantication is ontologically innocent.[1]
Later, Oliver & Smiley (2001), Rayo (2002), Yi (2005) and McKay (2006) argued that sentences such as

They are shipmates


They are meeting together
They lifted a piano
They are surrounding a building
They admire only one another

also cannot be interpreted in monadic second order logic. This is because predicates such as are shipmates, are
meeting together, are surrounding a building are not distributive. A predicate F is distributive if, whenever some
things are F, each one of them is F. But in standard logic, every monadic predicate is distributive. Yet such sentences
also seem innocent of any existential assumptions, and do not involve quantication.
So one can propose a unied account of plural terms that allows for both distributive and non-distributive satisfaction
of predicates, while defending this position against the singularist assumption that such predicates are predicates of
sets of individuals (or of mereological sums).
Several writers have suggested that plural logic opens the prospect of simplifying the foundations of mathematics,
avoiding the paradoxes of set theory, and simplifying the complex and unintuitive axiom sets needed in order to avoid
them.
Recently, Linnebo & Nicolas (2008) have suggested that natural languages often contain superplural variables (and
associated quantiers) such as these people, those people, and these other people compete against each other (e.g.
as teams in an online game), while Nicolas (2008) has argued that plural logic should be used to account for the
semantics of mass nouns, like wine and furniture.

192.3 Formal definition


This section presents a simple formulation of plural logic/quantication approximately the same as given by Boolos
in Nominalist Platonism (Boolos 1985).

192.3.1 Syntax
Sub-sentential units are defined as

Predicate symbols F, G, etc. (with appropriate arities, which are left implicit)
Singular variable symbols x, y, etc.
Plural variable symbols xx, yy, etc.

Full sentences are defined as

If F is an n-ary predicate symbol, and x0, . . . , xn are singular variable symbols, then F(x0, . . . , xn) is a
sentence.
If P is a sentence, then so is ¬P
If P and Q are sentences, then so is P ∧ Q
If P is a sentence and x is a singular variable symbol, then ∃x.P is a sentence
If x is a singular variable symbol and yy is a plural variable symbol, then x ≺ yy is a sentence (where ≺ is usually
interpreted as the relation "is one of")
If P is a sentence and xx is a plural variable symbol, then ∃xx.P is a sentence

The last two lines are the only essentially new component to the syntax for plural logic. Other logical symbols denable
in terms of these can be used freely as notational shorthands.
This logic turns out to be equi-interpretable with monadic second order logic.

192.3.2 Model theory


Plural logics model theory/semantics is where the logics lack of sets is cashed out. A model is dened as a tuple
(D, V, s, R) where D is the domain, V is a collection of valuations VF for each predicate name F in the usual sense,
and s is a Tarskian sequence (assignment of values to variables) in the usual sense (i.e. a map from singular variable
symbols to elements of D ). The new component R is a binary relation relating values in the domain to plural variable
symbols.
Satisfaction is given as

(D, V, s, R) ⊨ F(x0, . . . , xn) iff (s(x0), . . . , s(xn)) ∈ VF

(D, V, s, R) ⊨ ¬P iff (D, V, s, R) ⊭ P
(D, V, s, R) ⊨ P ∧ Q iff (D, V, s, R) ⊨ P and (D, V, s, R) ⊨ Q
(D, V, s, R) ⊨ ∃x.P iff there is an s′ ≈x s such that (D, V, s′, R) ⊨ P
(D, V, s, R) ⊨ x ≺ yy iff s(x) R yy
(D, V, s, R) ⊨ ∃xx.P iff there is an R′ ≈xx R such that (D, V, s, R′) ⊨ P

Where for singular variable symbols, s′ ≈x s means that for all singular variable symbols y other than x, it holds
that s′(y) = s(y), and for plural variable symbols, R′ ≈xx R means that for all plural variable symbols yy other than xx,
and for all objects of the domain d, it holds that d R′ yy if and only if d R yy.
As in the syntax, only the last two are truly new in plural logic. Boolos observes that by using assignment relations R,
the domain does not have to include sets, and therefore plural logic achieves ontological innocence while still retaining
the ability to talk about the extensions of a predicate. Thus, the plural logic comprehension schema ∃xx.∀y.(y ≺ xx
↔ F(y)) does not yield Russell's paradox because the quantification of plural variables does not quantify over the domain.
Another aspect of the logic as Boolos defines it, crucial to this bypassing of Russell's paradox, is the fact that sentences
of the form F(xx) are not well-formed: predicate names can only combine with singular variable symbols, not plural
variable symbols.
This can be taken as the simplest, and most obvious argument that plural logic as Boolos dened it is ontologically
innocent.

192.4 Criticism
Philippe de Rouilhan (2000) has argued that Boolos relied on the assumption, never defended in detail, that plural
expressions in ordinary language are manifestly and obviously free of existential commitment. But when I utter
there are critics who admire only one another is it manifest and obvious that I am only committing myself with
respect to critics? Or is Boolos victim of a grammatical illusion (p. 10)? Consider

There is at least one critic who admires only himself.


There are critics who admire only one another

The rst case is clearly innocent. But what about the second? There is an obvious logical dierence, since in the rst
case the plural is distributive, in the second, it is collective, and irreducibly so. How is it obvious that this dierence
is innocent? Also, the second is equivalent to

Some group (or collection) of critics is such that they admire only one another

But what is a group or collection in this sense? That is the whole problem. Perhaps Boolos has accorded a kind
of innocence to [the second] that would actually belong only to the rst.

192.5 See also


variadic function
generalized quantier

192.6 Notes
[1] Harman, Gilbert; Lepore, Ernest (2013), A Companion to W. V. O. Quine, Blackwell Companions to Philosophy, John
Wiley & Sons, p. 390, ISBN 9781118608029.

192.7 References
George Boolos, 1984, To be is to be the value of a variable (or to be some values of some variables), Journal
of Philosophy 81: 430449. In Boolos 1998, 5472.
--------, 1985, Nominalist platonism. Philosophical Review 94: 327344. In Boolos 1998, 7387.
--------, 1998. Logic, Logic, and Logic. Harvard University Press.
Burgess, J.P., From Frege to Friedman: A Dream Come True?"
--------, 2004, E Pluribus Unum: Plural Logic and Set Theory, Philosophia Mathematica 12(3): 193221.
Cameron, J. R., 1999, Plural Reference, Ratio.
Cocchiarella, Nino (2002). On the Logic of Classes as Many. Studia Logica. 70: 303338. doi:10.1023/A:1015190829525.
De Rouilhan, P., 2002, On What There Are, Proceedings of the Aristotelian Society: 183200.
Gottlob Frege, 1895, A critical elucidation of some points in E. Schroeders Vorlesungen Ueber Die Algebra
der Logik, Archiv fr systematische Philosophie: 433456.
Landman, F., 2000. Events and Plurality. Kluwer.
Laycock, Henry (2006), Words without Objects, Oxford: Clarendon Press, ISBN 9780199281718, doi:10.1093/0199281718.001.0
David K. Lewis, 1991. Parts of Classes. London: Blackwell.

Linnebo, ystein; Nicolas, David. Superplurals in English (PDF). Analysis. 68 (3): 18697. doi:10.1093/analys/68.3.186.

McKay, Thomas J. (2006), Plural Predication, New York: Oxford University Press, ISBN 978-0-19-927814-5
John Stuart Mill, 1904, A System of Logic, 8th ed. London: .

Nicolas, David (2008). Mass nouns and plural logic (PDF). Linguistics and Philosophy. 31 (2): 211244.
doi:10.1007/s10988-008-9033-2.

Oliver, Alex; Smiley, Timothy (2001). Strategies for a Logic of Plurals. Philosophical Quarterly. 51 (204):
289306. doi:10.1111/j.0031-8094.2001.00231.x.

Oliver, Alex (2004). Multigrade Predicates. Mind. 113: 609681. doi:10.1093/mind/113.452.609.


Rayo, Agustn (2002). Word and Objects. Nos. 36: 43664. doi:10.1111/1468-0068.00379.

--------, 2006, Beyond Plurals, in Rayo and Uzquiano (2006).

--------, 2007, Plurals, forthcoming in Philosophy Compass.


--------, and Gabriel Uzquiano, eds., 2006. Absolute Generality Oxford University Press.

Bertrand Russell, B., 1903. The Principles of Mathematics. Oxford Univ. Press.
Peter Simons, 1982, Plural Reference and Set Theory, in Barry Smith, ed., Parts and Moments: Studies in
Logic and Formal Ontology. Munich: Philosophia Verlag.
--------, 1987. Parts. Oxford University Press.

Uzquiano, Gabriel (2003). Plural Quantication and Classes. Philosophia Mathematica. 11 (1): 6781.
doi:10.1093/philmat/11.1.67.

Yi, Byeong-Uk (1999). Is two a property?". Journal of Philosophy. 95: 163190.


--------, 2005, The Logic and Meaning of Plurals, Part I, Journal of Philosophical Logic 34: 459506.

Adam Morton. Complex individuals and multigrade relations. Nos (1975): 309-318. JSTOR 2214634
Samuel Levey (2011) Logical theory in Leibniz in Brandon C. Look (ed.) The Continuum Companion to
Leibniz, Continuum International Publishing Group, ISBN 0826429750

192.8 External links


Linnebo, ystein. Plural quantication. Stanford Encyclopedia of Philosophy.
Moltmann, Friederike. (August 2012) "Plural Reference and Reference to a Plurality. A Reassessment of the
Linguistic Facts"
A more extensive bibliography

http://lumiere.ens.fr/~{}amari/genius/PapersSeminar/Nicolas-Semantics-for-plurals-Handout-0110.pdf
Chapter 193

Poretsky's law of forms

In Boolean algebra, Poretsky's law of forms shows that the single Boolean equation f(X) = 0 is equivalent to
g(X) = h(X) if and only if g = f ⊕ h, where ⊕ represents exclusive or.
The law of forms was discovered by Platon Poretsky.

193.1 References
Frank Markham Brown, Boolean Reasoning: The Logic of Boolean Equations, 2nd edition, 2003, p. 100
Louis Couturat, The Algebra Of Logic, 1914, p. 53, section 0.43

Clarence Irving Lewis, A Survey of Symbolic Logic, 1918, p. 145, section 7.15

193.2 External links


Transhuman Reections - Poretsky Form to Solve

Chapter 194

Predicate (mathematical logic)

Predicate (logic)" redirects here. For other uses, see Predicate (disambiguation) Logic.

In mathematical logic, a predicate is commonly understood to be a Boolean-valued function P: X → {true, false},

called the predicate on X. However, predicates have many different uses and interpretations in mathematics and logic,
and their precise definition, meaning and use will vary from theory to theory. So, for example, when a theory defines
the concept of a relation, then a predicate is simply the characteristic function or the indicator function of a relation.
However, not all theories have relations, or are founded on set theory, and so one must be careful with the proper
definition and semantic interpretation of a predicate.

194.1 Simplified overview


Informally, a predicate is a statement that may be true or false depending on the values of its variables.[1] It can
be thought of as an operator or function that returns a value that is either true or false.[2] For example, predicates
are sometimes used to indicate set membership: when talking about sets, it is sometimes inconvenient or impossible
to describe a set by listing all of its elements. Thus, a predicate P(x) will be true or false, depending on whether x
belongs to a set.
Predicates are also commonly used to talk about the properties of objects, by defining the set of all objects that have
some property in common. So, for example, when P is a predicate on X, one might sometimes say P is a property of
X. Similarly, the notation P(x) is used to denote a sentence or statement P concerning the variable object x. The set
defined by P(x) is written as {x | P(x)}, and is just a collection of all the objects for which P is true.
For instance, {x | x is a natural number less than 4} is the set {1,2,3}.
If t is an element of the set {x | P(x)}, then the statement P(t) is true.
Here, P(x) is referred to as the predicate, and x the subject of the proposition. Sometimes, P(x) is also called a
propositional function, as each choice of x produces a proposition.
A simple form of predicate is a Boolean expression, in which case the inputs to the expression are themselves Boolean
values, combined using Boolean operations. Similarly, a Boolean expression with inputs predicates is itself a more
complex predicate.
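A minimal Python sketch of these ideas (an illustration, not part of the article): a predicate as a Boolean-valued function, and the set it defines via a set-builder-style comprehension.

def P(x):
    # the predicate "x is a natural number less than 4"
    # (natural numbers taken to start at 1, as in the example above)
    return isinstance(x, int) and 1 <= x < 4

universe = range(10)
S = {x for x in universe if P(x)}   # the set {x | P(x)}, relative to a universe
print(S)                            # {1, 2, 3}
print(P(2), P(7))                   # True False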

194.2 Formal definition


The precise semantic interpretation of an atomic formula and an atomic sentence will vary from theory to theory.

In propositional logic, atomic formulas are called propositional variables.[3] In a sense, these are nullary (i.e.
0-arity) predicates.

In first-order logic, an atomic formula consists of a predicate symbol applied to an appropriate number of terms.


In set theory, predicates are understood to be characteristic functions or set indicator functions, i.e. functions
from a set element to a truth value. Set-builder notation makes use of predicates to define sets.
In autoepistemic logic, which rejects the law of excluded middle, predicates may be true, false, or simply
unknown; i.e. a given collection of facts may be insufficient to determine the truth or falsehood of a predicate.
In fuzzy logic, predicates are the characteristic functions of a probability distribution. That is, the strict
true/false valuation of the predicate is replaced by a quantity interpreted as the degree of truth.

194.3 See also


Free variables and bound variables
Predicate functor logic

Truthbearer
Multigrade predicate

Opaque predicate
Classifying topos

binary relation

194.4 References
[1] Cunningham, Daniel W. (2012). A Logical Introduction to Proof. New York: Springer. p. 29. ISBN 9781461436317.

[2] Haas, Guy M. What If? (Predicates)". Introduction to Computer Programming. Berkeley Foundation for Opportunities in
IT (BFOIT). Retrieved 20 July 2013.

[3] Lavrov, Igor Andreevich and Larisa Maksimova (2003). Problems in Set Theory, Mathematical Logic, and the Theory of
Algorithms. New York: Springer. p. 52. ISBN 0306477122.

194.5 External links


Introduction to predicates
Chapter 195

Predicate functor logic

In mathematical logic, predicate functor logic (PFL) is one of several ways to express first-order logic (also known
as predicate logic) by purely algebraic means, i.e., without quantified variables. PFL employs a small number of
algebraic devices called predicate functors (or predicate modifiers) that operate on terms to yield terms. PFL is
mostly the invention of the logician and philosopher Willard Quine.

195.1 Motivation
The source for this section, as well as for much of this entry, is Quine (1976). Quine proposed PFL as a way of algebraizing first-order logic in a manner analogous to how Boolean algebra algebraizes propositional logic. He designed PFL to have exactly the expressive power of first-order logic with identity. Hence the metamathematics of PFL are exactly those of first-order logic with no interpreted predicate letters: both logics are sound, complete, and undecidable. Most work Quine published on logic and mathematics in the last 30 years of his life touched on PFL in
some way.
Quine took "functor" from the writings of his friend Rudolf Carnap, the first to employ it in philosophy and mathematical logic, and defined it as follows:

The word functor, grammatical in import but logical in habitat... is a sign that attaches to one or
more expressions of given grammatical kind(s) to produce an expression of a given grammatical kind.
(Quine 1982: 129)

Ways other than PFL to algebraize first-order logic include:

Cylindric algebra by Alfred Tarski and his American students. The simplified cylindric algebra proposed in Bernays (1959) led Quine to write the paper containing the first use of the phrase "predicate functor";

The polyadic algebra of Paul Halmos. By virtue of its economical primitives and axioms, this algebra most
resembles PFL;

Relation algebra algebraizes the fragment of first-order logic consisting of formulas having no atomic formula lying in the scope of more than three quantifiers. That fragment suffices, however, for Peano arithmetic and the axiomatic set theory ZFC; hence relation algebra, unlike PFL, is incompletable. Most work on relation algebra since about 1920 has been by Tarski and his American students. The power of relation algebra did not become manifest until the monograph Tarski and Givant (1987), published after the three important papers bearing on PFL, namely Bacon (1985), Kuhn (1983), and Quine (1976);

Combinatory logic builds on combinators, higher order functions whose domain is another combinator or function, and whose range is yet another combinator. Hence combinatory logic goes beyond first-order logic by having the expressive power of set theory, which makes combinatory logic vulnerable to paradoxes. A predicate
functor, on the other hand, simply maps predicates (also called terms) into predicates.


PFL is arguably the simplest of these formalisms, yet also the one about which the least has been written.
Quine had a lifelong fascination with combinatory logic, attested to by his (1976) and his introduction to the translation
in Van Heijenoort (1967) of the paper by the Russian logician Moses Schönfinkel founding combinatory logic. When
Quine began working on PFL in earnest, in 1959, combinatory logic was commonly deemed a failure for the following
reasons:

Until Dana Scott began writing on the model theory of combinatory logic in the late 1960s, almost only Haskell
Curry, his students, and Robert Feys in Belgium worked on that logic;
Satisfactory axiomatic formulations of combinatory logic were slow in coming. In the 1930s, some formulations
of combinatory logic were found to be inconsistent. Curry also discovered the Curry paradox, peculiar to
combinatory logic;
The lambda calculus, with the same expressive power as combinatory logic, was seen as a superior formalism.

195.2 Kuhns formalization


The PFL syntax, primitives, and axioms described in this section are largely Kuhns (1983). The semantics of the
functors are Quines (1982). The rest of this entry incorporates some terminology from Bacon (1985).

195.2.1 Syntax
An atomic term is an upper case Latin letter, I and S excepted, followed by a numerical superscript called its degree, or
by concatenated lower case variables, collectively known as an argument list. The degree of a term conveys the same
information as the number of variables following a predicate letter. An atomic term of degree 0 denotes a Boolean
variable or a truth value. The degree of I is invariably 2 and so is not indicated.
The combinatory (the word is Quine's) predicate functors, all monadic and peculiar to PFL, are Inv, inv, ∃, +, and p. A term is either an atomic term, or constructed by the following recursive rule: if τ is a term, then Invτ, invτ, ∃τ, +τ, and pτ are terms. A functor with a superscript n, n a natural number > 1, denotes n consecutive applications (iterations) of that functor.
A formula is either a term or defined by the recursive rule: if α and β are formulas, then αβ and ~(α) are likewise formulas. Hence "~" is another monadic functor, and concatenation is the sole dyadic predicate functor. Quine called these functors alethic. The natural interpretation of "~" is negation; that of concatenation is any connective that, when combined with negation, forms a functionally complete set of connectives. Quine's preferred functionally complete set was conjunction and negation, so concatenated terms are taken as conjoined. The notation + is Bacon's (1985); all other notation is Quine's (1976; 1982). The alethic part of PFL is identical to the Boolean term schemata of Quine (1982).
As is well known, the two alethic functors could be replaced by a single dyadic functor with the following syntax and semantics: if α and β are formulas, then (αβ) is a formula whose semantics are "not (α and/or β)" (see NAND and NOR).

195.2.2 Axioms and semantics


Quine set out neither axiomatization nor proof procedure for PFL. The following axiomatization of PFL, one of two
proposed in Kuhn (1983), is concise and easy to describe, but makes extensive use of free variables and so does not
do full justice to the spirit of PFL. Kuhn gives another axiomatization dispensing with free variables, but that is harder
to describe and that makes extensive use of dened functors. Kuhn proved both of his PFL axiomatizations sound
and complete.
This section is built around the primitive predicate functors and a few defined ones. The alethic functors can be axiomatized by any set of axioms for sentential logic whose primitives are negation and one of ∧ or ∨. Equivalently, all tautologies of sentential logic can be taken as axioms.
Quine's (1982) semantics for each predicate functor are stated below in terms of abstraction (set builder notation), followed by either the relevant axiom from Kuhn (1983), or a definition from Quine (1976). The notation {x1...xn : Fx1...xn} denotes the set of n-tuples satisfying the atomic formula Fx1...xn.

Identity, I, is defined as:

IFx1x2...xn ↔ Fx1x1...xn ∧ Fx2x2...xn.

Identity is reflexive (Ixx), symmetric (Ixy → Iyx), transitive ((Ixy ∧ Iyz) → Ixz), and obeys the substitution property:

(Fx1...xn ∧ Ix1y) → Fyx2...xn.

Padding, +, adds a variable to the left of any argument list.

+F^n =def {x0x1...xn : F^n x1...xn}.

+Fx1...xn ↔ Fx2...xn.

Cropping, ∃, erases the leftmost variable in any argument list.

∃F^n =def {x2...xn : ∃x1 F^n x1...xn}.

Fx1...xn → ∃Fx2...xn.
Cropping enables two useful defined functors:

Reflection, S:

SF^n =def {x2...xn : F^n x2x2...xn}.

SF^n ↔ ∃IF^n.
S generalizes the notion of reflexivity to all terms of any finite degree greater than 2. N.B: S should not be confused with the primitive combinator S of combinatory logic.

Cartesian product, ×;

F m Gn F m m Gn .
Here only, Quine adopted an infix notation, because this infix notation for Cartesian product is very well established in mathematics. Cartesian product allows restating conjunction as follows:

F^m x1...xm ∧ G^n x1...xn ↔ (F^m × G^n)x1...xm x1...xn.

Reorder the concatenated argument list so as to shift a pair of duplicate variables to the far left, then invoke S to
eliminate the duplication. Repeating this as many times as required results in an argument list of length max(m,n).
The next three functors enable reordering argument lists at will.

Major inversion, Inv, rotates the variables in an argument list to the right, so that the last variable becomes the first.

InvF^n =def {x1...xn : F^n xn x1...x(n-1)}.

InvFx1...xn ↔ Fxn x1...x(n-1).

Minor inversion, inv, swaps the first two variables in an argument list.

invF^n =def {x1...xn : F^n x2x1...xn}.

invFx1...xn ↔ Fx2x1...xn.

Permutation, p, rotates the second through last variables in an argument list to the left, so that the second variable becomes the last.

pF^n =def {x1...xn : F^n x1x3...xn x2}.

pFx1...xn ↔ InvinvFx1x3...xn x2.
Given an argument list consisting of n variables, p implicitly treats the last n−1 variables like a bicycle chain, with each variable constituting a link in the chain. One application of p advances the chain by one link. k consecutive applications of p to F^n move the (k+1)th variable to the second argument position in F.
When n=2, Inv and inv merely interchange x1 and x2. When n=1, they have no effect. Hence p has no effect when n<3.
Kuhn (1983) takes Major inversion and Minor inversion as primitive. The notation p in Kuhn corresponds to inv; he has no analog to Permutation and hence has no axioms for it. If, following Quine (1976), p is taken as primitive, Inv and inv can be defined as nontrivial combinations of +, ∃, and iterated p.
The following table summarizes how the functors affect the degrees of their arguments.

Expression                          Degree
pF^n; InvF^n; invF^n; ~F^n; IF^n    no change
+F^(n-1); ∃F^(n+1); SF^(n+1)        n
F^m G^n; F^m × G^n                  max(m, n)
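The functors can be made concrete by modelling terms as finite relations, i.e. sets of tuples over a fixed finite domain. The following Python sketch is an illustration of the semantics given above, not Quine's or Kuhn's own apparatus; the domain, the function names, and the sample relation are assumptions made for the example.

    DOMAIN = {0, 1, 2}

    def pad(F):            # +F: add a dummy leftmost argument
        return {(x0, *t) for x0 in DOMAIN for t in F}

    def crop(F):           # the existential functor: erase the leftmost argument
        return {t[1:] for t in F}

    def Inv(F):            # major inversion: the last argument becomes the first
        return {(t[-1], *t[:-1]) for t in F}

    def inv(F):            # minor inversion: swap the first two arguments
        return {(t[1], t[0], *t[2:]) for t in F}

    def p(F):              # permutation: rotate arguments 2..n to the left
        return {(t[0], *t[2:], t[1]) for t in F}

    def S(F):              # reflection: identify the first two arguments
        return {t[1:] for t in F if t[0] == t[1]}

    # Example: F is "less than" on the domain, a term of degree 2.
    F = {(a, b) for a in DOMAIN for b in DOMAIN if a < b}
    print(crop(F))                              # {(1,), (2,)}: things something is less than
    print(S(F))                                 # set(): nothing is less than itself
    print(Inv(F) == {(b, a) for (a, b) in F})   # True: the converse relation

One can check against the table that pad raises the degree (tuple length) by one, crop and S lower it by one, and Inv, inv, and p leave it unchanged.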

195.2.3 Rules
All instances of a predicate letter may be replaced by another predicate letter of the same degree, without affecting validity. The rules are:

Modus ponens;
Let α and β be PFL formulas in which x1 does not appear. Then if (α ↔ βFx1...xn) is a PFL theorem, then (α ↔ β∃Fx2...xn) is likewise a PFL theorem.

195.2.4 Some useful results


Instead of axiomatizing PFL, Quine (1976) proposed the following conjectures as candidate axioms.

n−1 consecutive iterations of p restores the status quo ante:

F^n ↔ p^(n−1)F^n

+ and ∃ annihilate each other:


Negation distributes over +, ∃, and p:
+ and p distribute over conjunction:
Identity has the interesting implication:

IF n pn2 p + F n

Quine also conjectured the rule: if α is a PFL theorem, then so are pα, +α, and ∃α.

195.3 Bacons work


Bacon (1985) takes the conditional, negation, Identity, Padding, and Major and Minor inversion as primitive, and Cropping as defined. Employing terminology and notation differing somewhat from the above, Bacon (1985) sets out two formulations of PFL:

A natural deduction formulation in the style of Frederick Fitch. Bacon proves this formulation sound and
complete in full detail.
An axiomatic formulation which Bacon asserts, but does not prove, equivalent to the preceding one. Some of
these axioms are simply Quine conjectures restated in Bacons notation.

Bacon also:

Discusses the relation of PFL to the term logic of Sommers (1982), and argues that recasting PFL using a
syntax proposed in Lockwoods appendix to Sommers, should make PFL easier to read, use, and teach";
Touches on the group theoretic structure of Inv and inv;
Mentions that sentential logic, monadic predicate logic, the modal logic S5, and the Boolean logic of (un)permuted
relations, are all fragments of PFL.

195.4 From rst-order logic to PFL


The following algorithm is adapted from Quine (1976: 300-2). Given a closed formula of first-order logic, first do the following:

Attach a numerical subscript to every predicate letter, stating its degree;


Translate all universal quantifiers into existential quantifiers and negation;
Restate all atomic formulas of the form x=y as Ixy.

Now apply the following algorithm to the preceding result:


1. Translate the matrices of the most deeply nested quantifiers into disjunctive normal form, consisting of disjuncts of conjuncts of terms, negating atomic terms as required. The resulting subformula contains only negation, conjunction, disjunction, and existential quantification.
2. Distribute the existential quantifiers over the disjuncts in the matrix using the rule of passage (Quine 1982: 119):

∃x[φ(x) ∨ ψ(x)] ↔ (∃xφ(x) ∨ ∃xψ(x)).

3. Replace conjunction by Cartesian product, by invoking the fact:

(F m Gn ) (F m Gn ) (F m m Gn ); m < n.

4. Concatenate the argument lists of all atomic terms, and move the concatenated list to the far right of the subformula.
5. Use Inv and inv to move all instances of the quantified variable (call it y) to the left of the argument list.
6. Invoke S as many times as required to eliminate all but the last instance of y. Eliminate y by prefixing the subformula with one instance of ∃.
7. Repeat (1)-(6) until all quantified variables have been eliminated. Eliminate any disjunctions falling within the scope of a quantifier by invoking the equivalence:

(φ ∨ ψ ∨ ...) ↔ ¬(¬φ ∧ ¬ψ ∧ ...).

The reverse translation, from PFL to first-order logic, is discussed in Quine (1976: 302-4).
The canonical foundation of mathematics is axiomatic set theory, with a background logic consisting of first-order logic with identity, with a universe of discourse consisting entirely of sets. There is a single predicate letter of degree 2, interpreted as set membership. The PFL translation of the canonical axiomatic set theory ZFC is not difficult, as no ZFC axiom requires more than 6 quantified variables.[1]
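As a small worked illustration (not taken from Quine 1976), consider the closed first-order sentence ∃x(Fx ∧ Gx), with F and G of degree 1. Using the semantics of ×, S, and ∃ given earlier, the translation runs:

    Fx ∧ Gx        ↔  (F × G)xx        (restate the conjunction via Cartesian product)
    (F × G)xx      ↔  S(F × G)x        (S eliminates the duplicated variable)
    ∃x(Fx ∧ Gx)    ↔  ∃S(F × G)        (∃ discharges the quantified variable)

so the PFL rendering is the variable-free, degree-0 term ∃S(F × G).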

195.5 See also


algebraic logic

combinatory logic
cylindric algebra

relation algebra

term logic

195.6 Footnotes
[1] Metamath axioms.

195.7 References
Bacon, John, 1985, The completeness of a predicate-functor logic, Journal of Symbolic Logic 50: 903-26.

Paul Bernays, 1959, "Über eine natürliche Erweiterung des Relationenkalküls" in Heyting, A., ed., Constructivity in Mathematics. North Holland: 1-14.

Kuhn, Stephen T., 1983, "An Axiomatization of Predicate Functor Logic," Notre Dame Journal of Formal
Logic 24: 233-41.

Willard Quine, 1976, Algebraic Logic and Predicate Functors in Ways of Paradox and Other Essays, enlarged
ed. Harvard Univ. Press: 283-307.

--------, 1982. Methods of Logic, 4th ed. Harvard Univ. Press. Chpt. 45.
Sommers, Fred, 1982. The Logic of Natural Language. Oxford Univ. Press.

Alfred Tarski and Givant, Steven, 1987. A Formalization of Set Theory Without Variables. AMS.

Jean Van Heijenoort, 1967. From Frege to Gödel: A Source Book on Mathematical Logic. Harvard Univ. Press.

195.8 External links


An introduction to predicate-functor logic (one-click download, PS file) by Mats Dahllöf (Department of Linguistics, Uppsala University)
Chapter 196

Predicate variable

In first-order logic, a predicate variable is a predicate letter which can stand for a relation (between terms) but which has not been specifically assigned any particular relation (or meaning). In first-order logic (FOL) they can be more properly called metalinguistic variables. In higher-order logic, predicate variables correspond to propositional variables which can stand for well-formed formulas of the same logic, and such variables can be quantified by means of (at least) second-order quantifiers.

196.1 Usage
In the metavariable sense, a predicate variable can be used to define an axiom schema. Predicate variables should be distinguished from predicate constants, which could be represented either with a different (exclusive) set of predicate letters, or by their own symbols which really do have their own specific meaning in their domain of discourse: e.g. =, ∈, ≤, <, ⊆, ... .
If letters are used for predicate constants as well as for predicate variables, then there has to be a way of distinguishing between them. For example, letters W, X, Y, Z could be designated to represent predicate variables, whereas letters A, B, C,..., U, V could represent predicate constants. If these letters are not enough, then numerical subscripts can be appended, e.g. X1, X2, X3,... However, if the predicate variables are not perceived (or defined) as belonging to the vocabulary of the predicate calculus, then they are predicate metavariables, whereas the rest of the predicate letters are just called predicate letters. The metavariables are thus understood to be used to code for axiom schemata and theorem schemata (derived from the axiom schemata). Whether the predicate letters are constants or variables is a subtle point: they are not constants in the same sense that =, ∈, ≤, <, ⊆ are predicate constants, or that 1, 2, 3, √2, π, e are numerical constants.
Another option is to use Greek lower-case letters to represent such metavariable predicates. Then, such letters could be used to represent entire well-formed formulae (wff) of the predicate calculus: any free variable terms of the wff could be incorporated as terms of the Greek-letter predicate. This is the first step towards creating a higher-order logic.
If predicate variables are only allowed to be bound to predicate letters of zero arity (which have no arguments), where such letters represent propositions, then such variables are propositional variables, and any predicate logic which allows second-order quantifiers to be used to bind such propositional variables is a second-order predicate calculus, or second-order logic.
If predicate variables are also allowed to be bound to predicate letters which are unary or have higher arity, and when such letters represent propositional functions, such that the domain of the arguments is mapped to a range of different propositions, and when such variables can be bound by quantifiers to such sets of propositions, then the result is a higher-order predicate calculus, or higher-order logic.
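Over a finite domain the idea of quantifying a predicate variable can be made concrete: a unary predicate is just a subset of the domain, so a second-order quantifier becomes a loop over all subsets. The Python sketch below is an illustration under that finite-domain assumption, not a general account of higher-order logic; the domain and names are mine.

    from itertools import chain, combinations

    domain = {0, 1, 2}

    def all_predicates(dom):
        # every unary predicate on dom, represented by its extension (a subset)
        items = list(dom)
        return [frozenset(c) for c in chain.from_iterable(
            combinations(items, r) for r in range(len(items) + 1))]

    # "For every predicate X and every x, either X(x) or not X(x)":
    holds = all(all((x in X) or (x not in X) for x in domain)
                for X in all_predicates(domain))
    print(holds)   # True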

196.2 References
Rudolf Carnap and William H. Meyer. Introduction to Symbolic Logic and Its Applications. Dover Publications
(June 1, 1958). ISBN 0-486-60453-5

Chapter 197

Prenex normal form

A formula of the predicate calculus is in prenex[1] normal form if it is written as a string of quantifiers (referred to as the prefix) followed by a quantifier-free part (referred to as the matrix).
Every formula in classical logic is equivalent to a formula in prenex normal form. For example, if φ(y), ψ(z), and ρ(x) are quantifier-free formulas with the free variables shown then

∀x∃y∀z(φ(y) ∨ (ψ(z) → ρ(x)))

is in prenex normal form with matrix φ(y) ∨ (ψ(z) → ρ(x)), while

∀x((∃yφ(y)) ∨ ((∃zψ(z)) → ρ(x)))

is logically equivalent but not in prenex normal form.

197.1 Conversion to prenex form


Every first-order formula is logically equivalent (in classical logic) to some formula in prenex normal form. There are several conversion rules that can be recursively applied to convert a formula to prenex normal form. The rules depend on which logical connectives appear in the formula.

197.1.1 Conjunction and disjunction


The rules for conjunction and disjunction say that

(∀xφ) ∧ ψ is equivalent to ∀x(φ ∧ ψ),
(∀xφ) ∨ ψ is equivalent to ∀x(φ ∨ ψ);

and

(∃xφ) ∧ ψ is equivalent to ∃x(φ ∧ ψ),
(∃xφ) ∨ ψ is equivalent to ∃x(φ ∨ ψ).

The equivalences are valid when x does not appear as a free variable of ψ; if x does appear free in ψ, one can rename the bound x in (∃xφ) and obtain the equivalent (∃x′φ[x/x′]).
For example, in the language of rings,

(∃x(x² = 1)) ∧ (0 = y) is equivalent to ∃x(x² = 1 ∧ 0 = y),


but

(∃x(x² = 1)) ∧ (0 = x) is not equivalent to ∃x(x² = 1 ∧ 0 = x)

because the formula on the left is true in any ring when the free variable x is equal to 0, while the formula on the right has no free variables and is false in any nontrivial ring. So (∃x(x² = 1)) ∧ (0 = x) will be first rewritten as (∃x′(x′² = 1)) ∧ (0 = x) and then put in prenex normal form ∃x′(x′² = 1 ∧ 0 = x).

197.1.2 Negation
The rules for negation say that

¬∃xφ is equivalent to ∀x¬φ

and

¬∀xφ is equivalent to ∃x¬φ.

197.1.3 Implication
There are four rules for implication: two that remove quantifiers from the antecedent and two that remove quantifiers from the consequent. These rules can be derived by rewriting the implication φ → ψ as ¬φ ∨ ψ and applying the rules for disjunction above. As with the rules for disjunction, these rules require that the variable quantified in one subformula does not appear free in the other subformula.
The rules for removing quantifiers from the antecedent are:

(∀xφ) → ψ is equivalent to ∃x(φ → ψ),
(∃xφ) → ψ is equivalent to ∀x(φ → ψ).

The rules for removing quantifiers from the consequent are:

φ → (∃xψ) is equivalent to ∃x(φ → ψ),
φ → (∀xψ) is equivalent to ∀x(φ → ψ).
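The classical rules above lend themselves to a mechanical treatment. The following Python sketch (my own formula representation, not from the article) pulls quantifiers outward for formulas built from atoms with ∧, ∨, ¬ and →, under the simplifying assumption that all bound variables are already distinct and never occur free in the other subformula, so no renaming is needed.

    # Formulas are nested tuples: ('forall', 'x', phi), ('exists', 'x', phi),
    # ('not', phi), ('and', a, b), ('or', a, b), ('implies', a, b), or an atom string.
    DUAL = {'forall': 'exists', 'exists': 'forall'}

    def prenex(f):
        if isinstance(f, str):                       # quantifier-free atom
            return f
        op = f[0]
        if op in ('forall', 'exists'):
            return (op, f[1], prenex(f[2]))
        if op == 'not':
            g = prenex(f[1])
            if g[0] in DUAL:                         # not Qx phi  becomes  Q*x not phi
                return (DUAL[g[0]], g[1], prenex(('not', g[2])))
            return ('not', g)
        if op in ('and', 'or'):
            a, b = prenex(f[1]), prenex(f[2])
            if a[0] in DUAL:                         # (Qx phi) op psi  becomes  Qx(phi op psi)
                return (a[0], a[1], prenex((op, a[2], b)))
            if b[0] in DUAL:
                return (b[0], b[1], prenex((op, a, b[2])))
            return (op, a, b)
        if op == 'implies':
            a, b = prenex(f[1]), prenex(f[2])
            if a[0] in DUAL:                         # a quantifier in the antecedent flips
                return (DUAL[a[0]], a[1], prenex(('implies', a[2], b)))
            if b[0] in DUAL:                         # a quantifier in the consequent does not
                return (b[0], b[1], prenex(('implies', a, b[2])))
            return (op, a, b)
        raise ValueError(op)

    # (phi or exists-x psi) -> forall-z rho, the formula used in the example below:
    f = ('implies', ('or', 'phi', ('exists', 'x', 'psi')), ('forall', 'z', 'rho'))
    print(prenex(f))
    # ('forall', 'x', ('forall', 'z', ('implies', ('or', 'phi', 'psi'), 'rho')))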

197.1.4 Example
Suppose that φ, ψ, and ρ are quantifier-free formulas and no two of these formulas share any free variable. Consider the formula

(φ ∨ ∃xψ) → ∀zρ

By recursively applying the rules starting at the innermost subformulas, the following sequence of logically equivalent formulas can be obtained:

(φ ∨ ∃xψ) → ∀zρ
(∃x(φ ∨ ψ)) → ∀zρ
¬(∃x(φ ∨ ψ)) ∨ ∀zρ
(∀x¬(φ ∨ ψ)) ∨ ∀zρ
∀x(¬(φ ∨ ψ) ∨ ∀zρ)
∀x((φ ∨ ψ) → ∀zρ)
∀x(∀z((φ ∨ ψ) → ρ))
∀x∀z((φ ∨ ψ) → ρ)

This is not the only prenex form equivalent to the original formula. For example, by dealing with the consequent before the antecedent in the example above, the prenex form

∀z∀x((φ ∨ ψ) → ρ)

can be obtained:

∀z((φ ∨ ∃xψ) → ρ)
∀z((∃x(φ ∨ ψ)) → ρ)
∀z(∀x((φ ∨ ψ) → ρ))
∀z∀x((φ ∨ ψ) → ρ)

197.1.5 Intuitionistic logic

The rules for converting a formula to prenex form make heavy use of classical logic. In intuitionistic logic, it is not true that every formula is logically equivalent to a prenex formula. The negation connective is one obstacle, but not the only one. The implication operator is also treated differently in intuitionistic logic than classical logic; in intuitionistic logic, it is not definable using disjunction and negation.
The BHK interpretation illustrates why some formulas have no intuitionistically-equivalent prenex form. In this interpretation, a proof of

(∃xφ(x)) → ∃yψ(y)   (1)

is a function which, given a concrete x and a proof of φ(x), produces a concrete y and a proof of ψ(y). In this case it is allowable for the value of y to be computed from the given value of x. A proof of

∃y(∃xφ(x) → ψ(y)),   (2)

on the other hand, produces a single concrete value of y and a function that converts any proof of ∃xφ(x) into a proof of ψ(y). If each x satisfying φ can be used to construct a y satisfying ψ but no such y can be constructed without knowledge of such an x then formula (1) will not be equivalent to formula (2).
The rules for converting a formula to prenex form that do fail in intuitionistic logic are:

(1) ∀x(φ ∨ ψ) implies (∀xφ) ∨ ψ,
(2) ∀x(φ ∨ ψ) implies φ ∨ (∀xψ),
(3) (∀xφ) → ψ implies ∃x(φ → ψ),
(4) φ → (∃xψ) implies ∃x(φ → ψ),
(5) ¬∀xφ implies ∃x¬φ,

(x does not appear as a free variable of ψ in (1) and (3); x does not appear as a free variable of φ in (2) and (4)).

197.2 Use of prenex form


Some proof calculi will only deal with a theory whose formulae are written in prenex normal form. The concept is
essential for developing the arithmetical hierarchy and the analytical hierarchy.
Gödel's proof of his completeness theorem for first-order logic presupposes that all formulae have been recast in prenex normal form.

197.3 See also


Herbrandization

Skolemization
Arithmetical hierarchy

197.4 Notes
[1] The term 'prenex' comes from the Latin praenexus tied or bound up in front, past participle of praenectere (archived as
of May 27, 2011 at )

197.5 References
Hinman, P. (2005), Fundamentals of Mathematical Logic, A K Peters, ISBN 978-1-56881-262-5
Chapter 198

Principle of distributivity

The principle of distributivity states that the algebraic distributive law is valid for classical logic, where both logical conjunction and logical disjunction are distributive over each other so that for any propositions A, B and C the equivalences

A ∧ (B ∨ C) ↔ (A ∧ B) ∨ (A ∧ C)

and

A ∨ (B ∧ C) ↔ (A ∨ B) ∧ (A ∨ C)

hold.
The principle of distributivity is valid in classical logic, but invalid in quantum logic.
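The classical validity can be checked by brute force over all eight assignments of truth values to A, B and C; the snippet below is an illustrative Python check, not part of the article.

    from itertools import product

    for A, B, C in product([False, True], repeat=3):
        assert (A and (B or C)) == ((A and B) or (A and C))
        assert (A or (B and C)) == ((A or B) and (A or C))
    print("Both distributive laws hold under all classical valuations.")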
The article "Is Logic Empirical?" discusses the case that quantum logic is the correct, empirical logic, on the grounds
that the principle of distributivity is inconsistent with a reasonable interpretation of quantum phenomena.

Chapter 199

Principle of explosion

EFQ redirects here. For the literary baseball journal, see Elysian Fields Quarterly.
Ex falso quodlibet redirects here. For the software, see Ex Falso (software).

The principle of explosion (Latin: ex falso (sequitur) quodlibet (EFQ), "from falsehood, anything (follows)", or ex contradictione (sequitur) quodlibet (ECQ), "from contradiction, anything (follows)"), or the principle of Pseudo-Scotus, is the law of classical logic, intuitionistic logic and similar logical systems, according to which any statement can be proven from a contradiction.[1] That is, once a contradiction has been asserted, any proposition (including its negation) can be inferred from it.
As a demonstration of the principle, consider two contradictory statements All lemons are yellow and Not all
lemons are yellow, and suppose (for the sake of argument) that both are simultaneously true. If that is the case,
anything can be proven, e.g. unicorns exist, by using the following argument:

1. We know that All lemons are yellow as it is dened to be true.


2. Therefore, the statement that (All lemons are yellow OR unicorns exist) must also be true, since the rst
part is true.
3. However, if Not all lemons are yellow (and this is also dened to be true), unicorns must exist otherwise
statement 2 would be false. It has thus been proven that unicorns exist. The same could be applied to any
assertion, including the statement unicorns do not exist.

Due to the principle of explosion, the existence of a contradiction (inconsistency) in a formal axiomatic system is
disastrous; since any statement can be proved true it trivializes the concepts of truth and falsity.[2] Around the turn
of the 20th century, the discovery of contradictions such as Russells paradox at the foundations of mathematics thus
threatened the entire structure of mathematics. Mathematicians such as Gottlob Frege, Ernst Zermelo, Abraham Fraenkel, and Thoralf Skolem put much effort into revising set theory to eliminate these contradictions, resulting in the modern Zermelo–Fraenkel set theory.
In a different solution to these problems, some mathematicians have devised alternate theories of logic called paraconsistent logics, which eliminate the principle of explosion.[2] These allow some contradictory statements to be proved without affecting other proofs. In artificial intelligence and models of human reasoning it is common for such logics to be used. Truth maintenance systems are AI models which try to capture this process.

199.1 Symbolic representation


In symbolic logic, the principle of explosion can be expressed in the following way

∀P ∀Q : (P ∧ ¬P) → Q

(For any statements P and Q, if P and not-P are both true, then Q is true)


199.2 Proof
Below is a formal proof of the principle using symbolic logic:

1. P ∧ ¬P (assumption)
2. P (from (1) by conjunction elimination)
3. ¬P (from (1) by conjunction elimination)
4. P ∨ Q (from (2) by disjunction introduction)
5. Q (from (3) and (4) by disjunctive syllogism)
6. (P ∧ ¬P) → Q (from (5) by conditional proof, discharging assumption 1)

This is just the symbolic version of the informal argument given in the introduction, with P standing for all lemons
are yellow and Q standing for Unicorns exist. From all lemons are yellow and not all lemons are yellow (1),
we infer all lemons are yellow (2) and not all lemons are yellow (3); from all lemons are yellow (2), we infer
all lemons are yellow or unicorns exist (4); and from not all lemons are yellow (3) and all lemons are yellow or
unicorns exist (4), we infer unicorns exist (5). Hence, if all lemons are yellow and not all lemons are yellow, then
unicorns exist.
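The semantic content of the argument can also be checked mechanically: under classical two-valued semantics, (P ∧ ¬P) → Q comes out true under every assignment. The following Python snippet is an illustrative sanity check, not part of the article.

    from itertools import product

    def implies(a, b):
        return (not a) or b

    print(all(implies(p and (not p), q)
              for p, q in product([False, True], repeat=2)))   # True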

199.2.1 Semantic argument


An alternate argument for the principle stems from model theory. A sentence P is a semantic consequence of a set of sentences Γ only if every model of Γ is a model of P. But there is no model of the contradictory set (P ∧ ¬P). A fortiori, there is no model of (P ∧ ¬P) that is not a model of Q. Thus, vacuously, every model of (P ∧ ¬P) is a model of Q. Thus Q is a semantic consequence of (P ∧ ¬P).

199.3 Paraconsistent logic


Paraconsistent logics have been developed that allow for sub-contrary forming operators. Model-theoretic paraconsistent logicians often deny the assumption that there can be no model of {φ, ¬φ} and devise semantical systems in which there are such models. Alternatively, they reject the idea that propositions can be classified as true or false. Proof-theoretic paraconsistent logics usually deny the validity of one of the steps necessary for deriving an explosion, typically including disjunctive syllogism, disjunction introduction, and reductio ad absurdum.

199.4 Use
The metamathematical value of the principle of explosion is that for any logical system where this principle holds,
any derived theory which proves ⊥ (or an equivalent form, φ ∧ ¬φ) is worthless because all its statements would
become theorems, making it impossible to distinguish truth from falsehood. That is to say, the principle of explosion
is an argument for the law of non-contradiction in classical logic, because without it all truth statements become
meaningless.

199.5 See also


Consequentia mirabilis – Clavius's Law

Dialetheism – belief in the existence of true contradictions

Law of excluded middle – every proposition is either true or not true

Law of noncontradiction – no proposition can be both true and not true

Paraconsistent logic – a family of logics used to address contradictions

Paradox of entailment – a seeming paradox derived from the principle of explosion

Reductio ad absurdum – concluding that a proposition is false because it produces a contradiction

Trivialism – the belief that all statements of the form P and not-P are true

199.6 References
[1] Carnielli, W. and Marcos, J. (2001) Ex contradictione non sequitur quodlibet Proc. 2nd Conf. on Reasoning and Logic
(Bucharest, July 2000)

[2] McKubre-Jordens, Maarten (August 2011). This is not a carrot: Paraconsistent mathematics. Plus Magazine. Millennium
Mathematics Project. Retrieved January 14, 2017.
Chapter 200

Product term

In Boolean logic, a product term is a conjunction of literals, where each literal is either a variable or its negation.

200.1 Examples
Examples of product terms include:

AB

A ∧ (¬B) ∧ (¬C)
¬A

200.2 Origin
The terminology comes from the similarity of AND to multiplication as in the ring structure of Boolean rings.

200.3 Minterms
For a boolean function of n variables x1 , . . . , xn , a product term in which each of the n variables appears once (in
either its complemented or uncomplemented form) is called a minterm. Thus, a minterm is a logical expression of n
variables that employs only the complement operator and the conjunction operator.
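As an illustration (the function and variable names here are my own, not from the references), the minterms of a Boolean function can be enumerated by evaluating it on every assignment and keeping the conjunctions of literals for which it is true:

    from itertools import product

    def f(x1, x2, x3):                 # example function: majority of three inputs
        return (x1 and x2) or (x1 and x3) or (x2 and x3)

    minterms = []
    for bits in product([False, True], repeat=3):
        if f(*bits):
            literals = [name if value else name + "'"        # ' marks the complement
                        for name, value in zip(("x1", "x2", "x3"), bits)]
            minterms.append(" ".join(literals))

    print(" + ".join(minterms))
    # x1' x2 x3 + x1 x2' x3 + x1 x2 x3' + x1 x2 x3

The disjunction of these minterms is the function's canonical sum-of-products form.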

200.4 References
Fredrick J. Hill, and Gerald R. Peterson, 1974, Introduction to Switching Theory and Logical Design, Second
Edition, John Wiley & Sons, NY, ISBN 0-471-39882-9

Chapter 201

Proof by contradiction

In logic, proof by contradiction is a form of proof, and more specically a form of indirect proof, that establishes
the truth or validity of a proposition. It starts by assuming that the opposite proposition is true, and then shows that
such an assumption leads to a contradiction. Proof by contradiction is also known as indirect proof, apagogical
argument, proof by assuming the opposite, and reductio ad impossibilem. It is a particular kind of the more
general form of argument known as reductio ad absurdum.[1][2]
G. H. Hardy described proof by contradiction as "one of a mathematician's finest weapons", saying "It is a far finer gambit than any chess gambit: a chess player may offer the sacrifice of a pawn or even a piece, but a mathematician offers the game."[1]

201.1 Principle
Proof by contradiction is based on the law of noncontradiction as first formalized as a metaphysical principle by Aristotle. Noncontradiction is also a theorem in propositional logic. This states that an assertion or mathematical statement cannot be both true and false. That is, a proposition Q and its negation ¬Q ("not-Q") cannot both be true. In a proof by contradiction it is shown that the denial of the statement being proved results in such a contradiction. It has the form of a reductio ad absurdum argument. If P is the proposition to be proved:

1. P is assumed to be false, that is ¬P is true.
2. It is shown that ¬P implies two mutually contradictory assertions, Q and ¬Q.
3. Since Q and ¬Q cannot both be true, the assumption that P is false must be wrong, and P must be true.

An alternate form derives a contradiction with the statement to be proved itself:

1. P is assumed to be false.
2. It is shown that ¬P implies P.
3. Since ¬P and P cannot both be true, the assumption must be wrong and P must be true.

An existence proof by contradiction assumes that some object doesn't exist, and then proves that this would lead to
a contradiction; thus, such an object must exist. Although it is quite freely used in mathematical proofs, not every
school of mathematical thought accepts this kind of nonconstructive proof as universally valid.

201.1.1 Law of the excluded middle


Main article: Law of the excluded middle

Proof by contradiction also depends on the law of the excluded middle, also first formulated by Aristotle. This states that either an assertion or its negation must be true:

∀P (P ∨ ¬P)
(For all propositions P, either P or not-P is true)

That is, there is no other truth value besides "true" and "false" that a proposition can take. Combined with the principle of noncontradiction, this means that exactly one of P and ¬P is true. In proof by contradiction, this permits the conclusion that since the possibility of ¬P has been excluded, P must be true.
The law of the excluded middle is accepted in virtually all formal logics; however, some intuitionist mathematicians do not accept it, and thus reject proof by contradiction as a proof technique.

201.2 Relationship with other proof techniques


Proof by contradiction is closely related to proof by contrapositive, and the two are sometimes confused, though they are distinct methods. The main distinction is that a proof by contrapositive applies only to statements of the form P → Q (i.e., implications), whereas the technique of proof by contradiction applies to statements Q of any form:

Proof by contradiction (general): assume ¬Q and derive a contradiction.
This corresponds, in the framework of propositional logic, to the equivalence Q ⇔ ¬¬Q ⇔ (¬Q → ⊥), where ⊥ is the logical contradiction, or false value.

In the case where the statement to be proven is an implication P → Q, let us look at the differences between direct proof, proof by contrapositive, and proof by contradiction:

Direct proof: assume P and show Q.
Proof by contrapositive: assume ¬Q and show ¬P.
This corresponds to the equivalence P → Q ⇔ ¬Q → ¬P.
Proof by contradiction: assume P and ¬Q and derive a contradiction.
This corresponds to the equivalence P → Q ⇔ (¬P ∨ Q) ⇔ ¬(P ∧ ¬Q) ⇔ (P ∧ ¬Q → ⊥).

201.3 Examples

201.3.1 Irrationality of the square root of 2

A classic proof by contradiction from mathematics is the proof that the square root of 2 is irrational.[3] If it were rational, it could be expressed as a fraction a/b in lowest terms, where a and b are integers, at least one of which is odd. But if a/b = √2, then a² = 2b². Therefore a² must be even. Because the square of an odd number is odd, that in turn implies that a is even. This means that b must be odd because a/b is in lowest terms.
On the other hand, if a is even, then a² is a multiple of 4. If a² is a multiple of 4 and a² = 2b², then 2b² is a multiple of 4, and therefore b² is even, and so is b.
So b is odd and even, a contradiction. Therefore the initial assumption, that √2 can be expressed as a fraction, must be false.

201.3.2 The length of the hypotenuse


The method of proof by contradiction has also been used to show that for any non-degenerate right triangle, the length of the hypotenuse is less than the sum of the lengths of the two remaining sides.[4] The proof relies on the Pythagorean theorem. Letting c be the length of the hypotenuse and a and b the lengths of the legs, the claim is that a + b > c.
The claim is negated to assume that a + b ≤ c. Squaring both sides results in (a + b)² ≤ c² or, equivalently, a² + 2ab + b² ≤ c². A triangle is non-degenerate if each edge has positive length, so it may be assumed that a and b are greater than 0. Therefore, a² + b² < a² + 2ab + b² ≤ c². The transitive relation may be reduced to a² + b² < c². It is known from the Pythagorean theorem that a² + b² = c². This results in a contradiction since strict inequality and equality are mutually exclusive. The latter was a result of the Pythagorean theorem and the former the assumption that a + b ≤ c. The contradiction means that it is impossible for both to be true and it is known that the Pythagorean theorem holds. It follows that the assumption that a + b ≤ c must be false and hence a + b > c, proving the claim.

201.3.3 No least positive rational number


Consider the proposition, P: "there is no smallest rational number greater than 0". In a proof by contradiction, we start by assuming the opposite, ¬P: that there is a smallest rational number, say, r.
Now r/2 is a rational number greater than 0 and smaller than r. (In the above symbolic argument, "r/2 is the smallest rational number" would be Q and "r (which is different from r/2) is the smallest rational number" would be ¬Q.)
But that contradicts our initial assumption, ¬P, that r was the smallest rational number. So we can conclude that the original proposition, P, must be true: there is no smallest rational number greater than 0.

201.3.4 Other
For other examples, see proof that the square root of 2 is not rational (where indirect proofs different from the above one can be found) and Cantor's diagonal argument.

201.4 Notation
Proofs by contradiction sometimes end with the word "Contradiction!". Isaac Barrow and Baermann used the notation Q.E.A., for "quod est absurdum" ("which is absurd"), along the lines of Q.E.D., but this notation is rarely used today.[5]
A graphical symbol sometimes used for contradictions is a downwards zigzag arrow lightning symbol (U+21AF: ↯), for example in Davey and Priestley.[6] Others sometimes used include a pair of opposing arrows (as → ← or ⇒ ⇐), struck-out arrows (↮), a stylized form of hash (such as U+2A33: ⨳), or the "reference mark" (U+203B: ※).[7][8] The up tack symbol (U+22A5: ⊥) used by philosophers and logicians (see contradiction) also appears, but is often avoided due to its usage for orthogonality.

201.5 See also


Proof by innite descent, a form of proof by contradiction

201.6 References
[1] G. H. Hardy, A Mathematician's Apology; Cambridge University Press, 1992. ISBN 9780521427067. PDF p.19.

[2] S. M. Cohen, Introduction to Logic, Chapter 5 proof by contradiction ... Also called indirect proof or reductio ad
absurdum ...

[3] Alfeld, Peter (16 August 1996). Why is the square root of 2 irrational?". Understanding Mathematics, a study guide.
Department of Mathematics, University of Utah. Retrieved 6 February 2013.

[4] Stone, Peter. Logic, Sets, and Functions: Honors (PDF). Course materials. pp 1423: Department of Computer Sciences,
The University of Texas at Austin. Retrieved 6 February 2013.

[5] Math Forum Discussions.

[6] B. Davey and H.A. Priestley, Introduction to lattices and order, Cambridge University Press, 2002.

[7] The Comprehensive LaTeX Symbol List, pg. 20. http://www.ctan.org/tex-archive/info/symbols/comprehensive/symbols-a4.


pdf

[8] Gary Hardegree, Introduction to Modal Logic, Chapter 2, pg. II2. https://wayback.archive.org/web/20110607061046/
http://people.umass.edu/gmhwww/511/pdf/c02.pdf

201.7 Further reading and external links


Franklin, James (2011). Proof in Mathematics: An Introduction. chapter 6: Kew. ISBN 978-0-646-54509-7.

Proof by Contradiction from Larry W. Cusicks How To Write Proofs


Reductio ad Absurdum Internet Encyclopedia of Philosophy; ISSN 2161-0002
Chapter 202

Proof by contrapositive

In logic, the contrapositive of a conditional statement is formed by negating both terms and reversing the direction
of inference. Explicitly, the contrapositive of the statement if A, then B is if not B, then not A. A statement and
its contrapositive are logically equivalent: if the statement is true, then its contrapositive is true, and vice versa.[1]
In mathematics, proof by contraposition is a rule of inference used in proofs. This rule infers a conditional statement
from its contrapositive.[2] In other words, the conclusion if A, then B is drawn from the single premise if not B,
then not A.

202.1 Example
Let x be an integer.

To prove: If x² is even, then x is even.

Although a direct proof can be given, we choose to prove this statement by contraposition. The contrapositive of the above statement is:

If x is not even, then x² is not even.

This latter statement can be proven as follows. Suppose x is not even. Then x is odd. The product of two odd numbers is odd, hence x² = x·x is odd. Thus x² is not even.
Having proved the contrapositive, we infer the original statement.[3]

202.2 See also


Contraposition
Modus tollens
Reductio ad absurdum
Proof by contradiction: relationship with other proof techniques.

202.3 References
[1] Regents Exam Prep, contrapositive denition
[2] Larry Cusicks (CSU-Fresno) How to write proofs tutorial
[3] Franklin, J.; A. Daoud (2011). Proof in Mathematics: An Introduction. Sydney: Kew Books. ISBN 0-646-54509-4. (p.
50).

Chapter 203

Proposition

This article is about the term in logic and philosophy. For other uses, see Proposition (disambiguation).
Not to be confused with preposition.

The term proposition has a broad use in contemporary philosophy. It is used to refer to some or all of the following:
the primary bearers of truth-value, the objects of belief and other "propositional attitudes" (i.e., what is believed,
doubted, etc.), the referents of that-clauses, and the meanings of declarative sentences. Propositions are the sharable
objects of attitudes and the primary bearers of truth and falsity. This stipulation rules out certain candidates for
propositions, including thought- and utterance-tokens which are not sharable, and concrete events or facts, which
cannot be false.[1]

203.1 Historical usage

203.1.1 By Aristotle
Aristotelian logic identifies a proposition as a sentence which affirms or denies a predicate of a subject with the help of a copula. An Aristotelian proposition may take the form "All men are mortal" or "Socrates is a man". In the first example the subject is "men", predicate is "mortal" and copula is "are". In the second example the subject is "Socrates", the predicate is "a man" and copula is "is".

203.1.2 By the logical positivists


Often propositions are related to closed sentences to distinguish them from what is expressed by an open sentence.
In this sense, propositions are statements that are truth-bearers. This conception of a proposition was supported by
the philosophical school of logical positivism.
Some philosophers argue that some (or all) kinds of speech or actions besides the declarative ones also have propositional content. For example, yes–no questions present propositions, being inquiries into the truth value of them. On the other hand, some signs can be declarative assertions of propositions without forming a sentence nor even being linguistic, e.g. traffic signs convey definite meaning which is either true or false.
Propositions are also spoken of as the content of beliefs and similar intentional attitudes such as desires, preferences,
and hopes. For example, I desire that I have a new car, or I wonder whether it will snow" (or, whether it is the
case that it will snow). Desire, belief, and so on, are thus called propositional attitudes when they take this sort of
content.

203.1.3 By Russell
Bertrand Russell held that propositions were structured entities with objects and properties as constituents. Wittgen-
stein held that a proposition is the set of possible worlds/states of aairs in which it is true. One important dierence
between these views is that on the Russellian account, two propositions that are true in all the same states of aairs


can still be differentiated. For instance, the proposition that two plus two equals four is distinct on a Russellian account from three plus three equals six. If propositions are sets of possible worlds, however, then all mathematical truths (and all other necessary truths) are the same set (the set of all possible worlds).

203.2 Relation to the mind


In relation to the mind, propositions are discussed primarily as they t into propositional attitudes. Propositional
attitudes are simply attitudes characteristic of folk psychology (belief, desire, etc.) that one can take toward a propo-
sition (e.g. 'it is raining,' 'snow is white,' etc.). In English, propositions usually follow folk psychological attitudes by
a that clause (e.g. Jane believes that it is raining). In philosophy of mind and psychology, mental states are often
taken to primarily consist in propositional attitudes. The propositions are usually said to be the mental content of
the attitude. For example, if Jane has a mental state of believing that it is raining, her mental content is the proposi-
tion 'it is raining.' Furthermore, since such mental states are about something (namely propositions), they are said to
be intentional mental states. Philosophical debates surrounding propositions as they relate to propositional attitudes
have also recently centered on whether they are internal or external to the agent or whether they are mind-dependent
or mind-independent entities (see the entry on internalism and externalism in philosophy of mind).

203.3 Treatment in logic


As noted above, in Aristotelian logic a proposition is a particular kind of sentence, one which affirms or denies a predicate of a subject with the help of a copula. Aristotelian propositions take forms like "All men are mortal" and "Socrates is a man".
Propositions show up in modern formal logic as objects of a formal language. A formal language begins with different types of symbols. These types can include variables, operators, function symbols, predicate (or relation) symbols, quantifiers, and propositional constants. (Grouping symbols are often added for convenience in using the language but do not play a logical role.) Symbols are concatenated together according to recursive rules in order to construct strings to which truth-values will be assigned. The rules specify how the operators, function and predicate symbols, and quantifiers are to be concatenated with other strings. A proposition is then a string with a specific form. The form that a proposition takes depends on the type of logic.
The type of logic called propositional, sentential, or statement logic includes only operators and propositional constants as symbols in its language. The propositions in this language are propositional constants, which are considered atomic propositions, and composite propositions, which are composed by recursively applying operators to propositions. Application here is simply a short way of saying that the corresponding concatenation rule has been applied.
The types of logics called predicate, quantificational, or n-order logic include variables, operators, predicate and function symbols, and quantifiers as symbols in their languages. The propositions in these logics are more complex. First, terms must be defined. A term is (i) a variable or (ii) a function symbol applied to the number of terms required by the function symbol's arity. For example, if + is a binary function symbol and x, y, and z are variables, then x+(y+z) is a term, which might be written with the symbols in various orders. A proposition is (i) a predicate symbol applied to the number of terms required by its arity, (ii) an operator applied to the number of propositions required by its arity, or (iii) a quantifier applied to a proposition. For example, if "=" is a binary predicate symbol and "∀" is a quantifier, then ∀x,y,z [(x = y) → (x+z = y+z)] is a proposition. This more complex structure of propositions allows these logics to make finer distinctions between inferences, i.e., to have greater expressive power.
In this context, propositions are also called sentences, statements, statement forms, formulas, and well-formed formu-
las, though these terms are usually not synonymous within a single text. This denition treats propositions as syntactic
objects, as opposed to semantic or mental objects. That is, propositions in this sense are meaningless, formal, abstract
objects. They are assigned meaning and truth-values by mappings called interpretations and valuations, respectively.
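The separation between syntactic objects and the valuations that give them truth-values can be sketched in a few lines of Python (an illustration of the general idea, not of any particular formal system; the tuple encoding is an assumption of the example):

    def evaluate(formula, valuation):
        if isinstance(formula, str):                  # a propositional constant
            return valuation[formula]
        op, *args = formula
        if op == 'not':
            return not evaluate(args[0], valuation)
        if op == 'and':
            return evaluate(args[0], valuation) and evaluate(args[1], valuation)
        if op == 'or':
            return evaluate(args[0], valuation) or evaluate(args[1], valuation)
        if op == 'implies':
            return (not evaluate(args[0], valuation)) or evaluate(args[1], valuation)
        raise ValueError(op)

    # The bare string of symbols (A and not-B) -> C is meaningless until a valuation is supplied:
    formula = ('implies', ('and', 'A', ('not', 'B')), 'C')
    print(evaluate(formula, {'A': True, 'B': False, 'C': False}))   # False
    print(evaluate(formula, {'A': True, 'B': True,  'C': False}))   # True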

203.4 Objections to propositions


Attempts to provide a workable denition of proposition include

Two meaningful declarative sentences express the same proposition if and only if they mean the

same thing.

thus defining proposition in terms of synonymity. For example, "Snow is white" (in English) and "Schnee ist weiß" (in German) are different sentences, but they say the same thing, so they express the same proposition.

Two meaningful declarative sentence-tokens express the same proposition if and only if they mean
the same thing.

Unfortunately, the above definitions have the result that two sentences/sentence-tokens which have the same meaning and thus express the same proposition could have different truth-values, e.g. "I am Spartacus" said by Spartacus and said by John Smith; and e.g. "It is Wednesday" said on a Wednesday and on a Thursday.
A number of philosophers and linguists claim that all definitions of a proposition are too vague to be useful. For
them, it is just a misleading concept that should be removed from philosophy and semantics. W.V. Quine maintained
that the indeterminacy of translation prevented any meaningful discussion of propositions, and that they should be
discarded in favor of sentences.[2] Strawson advocated the use of the term "statement".

203.5 See also


Main contention

203.6 References
[1] Propositions (Stanford Encyclopedia of Philosophy)". Plato.stanford.edu. Retrieved 2014-06-23.

[2] Quine W.V. Philosophy of Logic, Prentice-Hall NJ USA: 1970, pp 1-14

203.7 External links


Stanford Encyclopedia of Philosophy articles on:
Propositions, by Matthew McGrath
Singular Propositions, by Greg Fitch
Structured Propositions, by Jerey C. King
Chapter 204

Propositional calculus

Propositional calculus (also called propositional logic, sentential calculus, sentential logic, or sometimes zeroth-
order logic) is the branch of logic concerned with the study of propositions (whether they are true or false) that are
formed by other propositions with the use of logical connectives, and how their value depends on the truth value of
their components.

204.1 Explanation
Logical connectives are found in natural languages. In English for example, some examples are and (conjunction),
or (disjunction), not (negation) and if (but only when used to denote material conditional).
The following is an example of a very simple inference within the scope of propositional logic:

Premise 1: If it's raining then it's cloudy.

Premise 2: It's raining.
Conclusion: It's cloudy.

Both premises and the conclusion are propositions. The premises are taken for granted and then with the application
of modus ponens (an inference rule) the conclusion follows.
As propositional logic is not concerned with the structure of propositions beyond the point where they can't be decom-
posed anymore by logical connectives, this inference can be restated replacing those atomic statements with statement
letters, which are interpreted as variables representing statements:

P → Q
P
∴ Q

The same can be stated succinctly in the following way:

P → Q, P ⊢ Q

When P is interpreted as "It's raining" and Q as "it's cloudy" the above symbolic expressions can be seen to exactly correspond with the original expression in natural language. Not only that, but they will also correspond with any other inference of this form, which will be valid on the same basis that this inference is.
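The validity of the form itself can be confirmed by brute force: every valuation that makes both premises true also makes the conclusion true. The snippet below is an illustrative check in Python, not part of the article.

    from itertools import product

    def implies(a, b):
        return (not a) or b

    valid = all(q                                 # the conclusion Q holds ...
                for p, q in product([False, True], repeat=2)
                if implies(p, q) and p)           # ... whenever P -> Q and P both hold
    print(valid)   # True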
Propositional logic may be studied through a formal system in which formulas of a formal language may be interpreted
to represent propositions. A system of inference rules and axioms allows certain formulas to be derived. These
derived formulas are called theorems and may be interpreted to be true propositions. A constructed sequence of such


formulas is known as a derivation or proof and the last formula of the sequence is the theorem. The derivation may
be interpreted as proof of the proposition represented by the theorem.
When a formal system is used to represent formal logic, only statement letters are represented directly. The natural
language propositions that arise when they're interpreted are outside the scope of the system, and the relation between
the formal system and its interpretation is likewise outside the formal system itself.
Usually in truth-functional propositional logic, formulas are interpreted as having either a truth value of true or a
truth value of false. Truth-functional propositional logic and systems isomorphic to it, are considered to be zeroth-
order logic.

204.2 History
Main article: History of logic

Although propositional logic (which is interchangeable with propositional calculus) had been hinted at by earlier philosophers, it was developed into a formal logic by Chrysippus in the 3rd century BC[1] and expanded by his successors, the Stoics. The logic was focused on propositions. This advancement was different from the traditional syllogistic logic, which was focused on terms. However, later in antiquity, the propositional logic developed by the Stoics was no longer understood. Consequently, the system was essentially reinvented by Peter Abelard in the 12th century.[2]
Propositional logic was eventually refined using symbolic logic. The 17th/18th-century mathematician Gottfried Leibniz has been credited with being the founder of symbolic logic for his work with the calculus ratiocinator. Although his work was the first of its kind, it was unknown to the larger logical community. Consequently, many of the
advances achieved by Leibniz were recreated by logicians like George Boole and Augustus De Morgan completely
independent of Leibniz.[3]
Just as propositional logic can be considered an advancement from the earlier syllogistic logic, Gottlob Frege's predicate logic was an advancement from the earlier propositional logic. One author describes predicate logic as combining "the distinctive features of syllogistic logic and propositional logic."[4] Consequently, predicate logic ushered in a new era in logic's history; however, advances in propositional logic were still made after Frege, including Natural Deduction, Truth-Trees and Truth-Tables. Natural deduction was invented by Gerhard Gentzen and Jan Łukasiewicz. Truth-Trees were invented by Evert Willem Beth.[5] The invention of truth-tables, however, is of uncertain attribution.
Within works by Frege[6] and Bertrand Russell,[7] are ideas influential to the invention of truth tables. The actual tabular structure (being formatted as a table), itself, is generally credited to either Ludwig Wittgenstein or Emil Post (or both, independently).[6] Besides Frege and Russell, others credited with having ideas preceding truth-tables include Philo, Boole, Charles Sanders Peirce,[8] and Ernst Schröder. Others credited with the tabular structure include Jan Łukasiewicz, Ernst Schröder, Alfred North Whitehead, William Stanley Jevons, John Venn, and Clarence Irving Lewis.[7] Ultimately, some have concluded, like John Shosky, that "It is far from clear that any one person should be given the title of 'inventor' of truth-tables."[7]

204.3 Terminology
In general terms, a calculus is a formal system that consists of a set of syntactic expressions (well-formed formulas), a distinguished subset of these expressions (axioms), plus a set of formal rules that define a specific binary relation, intended to be interpreted as logical equivalence, on the space of expressions.
When the formal system is intended to be a logical system, the expressions are meant to be interpreted as statements, and the rules, known to be inference rules, are typically intended to be truth-preserving. In this setting, the rules (which may include axioms) can then be used to derive ("infer") formulas representing true statements from given formulas representing true statements.
The set of axioms may be empty, a nonempty finite set, a countably infinite set, or be given by axiom schemata. A formal grammar recursively defines the expressions and well-formed formulas of the language. In addition a semantics may be given which defines truth and valuations (or interpretations).
The language of a propositional calculus consists of

1. a set of primitive symbols, variously referred to as atomic formulas, placeholders, proposition letters, or vari-
ables, and
2. a set of operator symbols, variously interpreted as logical operators or logical connectives.

A well-formed formula is any atomic formula, or any formula that can be built up from atomic formulas by means of
operator symbols according to the rules of the grammar.
Mathematicians sometimes distinguish between propositional constants, propositional variables, and schemata. Propositional constants represent some particular proposition, while propositional variables range over the set of all atomic propositions. Schemata, however, range over all propositions. It is common to represent propositional constants by A, B, and C, propositional variables by P, Q, and R, and schematic letters are often Greek letters, most often φ, ψ, and χ.

204.4 Basic concepts


The following outlines a standard propositional calculus. Many different formulations exist which are all more or less equivalent but differ in the details of:

1. their language, that is, the particular collection of primitive symbols and operator symbols,
2. the set of axioms, or distinguished formulas, and
3. the set of inference rules.

Any given proposition may be represented with a letter called a 'propositional constant', analogous to representing a number by a letter in mathematics, for instance, a = 5. All propositions require exactly one of two truth-values: true or false. For example, let P be the proposition that it is raining outside. This will be true (P) if it is raining outside and false otherwise (¬P).

We then define truth-functional operators, beginning with negation. ¬P represents the negation of P, which can be thought of as the denial of P. In the example above, ¬P expresses that it is not raining outside, or by a more standard reading: "It is not the case that it is raining outside." When P is true, ¬P is false; and when P is false, ¬P is true. ¬¬P always has the same truth-value as P.
Conjunction is a truth-functional connective which forms a proposition out of two simpler propositions, for example, P and Q. The conjunction of P and Q is written P ∧ Q, and expresses that each is true. We read P ∧ Q as "P and Q". For any two propositions, there are four possible assignments of truth values:

1. P is true and Q is true
2. P is true and Q is false
3. P is false and Q is true
4. P is false and Q is false

The conjunction of P and Q is true in case 1 and is false otherwise. Where P is the proposition that it is raining outside and Q is the proposition that a cold-front is over Kansas, P ∧ Q is true when it is raining outside and there is a cold-front over Kansas. If it is not raining outside, then P ∧ Q is false; and if there is no cold-front over Kansas, then P ∧ Q is false.

Disjunction resembles conjunction in that it forms a proposition out of two simpler propositions. We write it P ∨ Q, and it is read "P or Q". It expresses that either P or Q is true. Thus, in the cases listed above, the disjunction of P with Q is true in all cases except case 4. Using the example above, the disjunction expresses that it is either raining outside or there is a cold front over Kansas. (Note, this use of disjunction is supposed to resemble the use of the English word "or". However, it is most like the English inclusive "or", which can be used to express the truth of at least one of two propositions. It is not like the English exclusive "or", which expresses the truth of exactly one of two propositions. That is to say, the exclusive "or" is false when both P and Q are true (case 1). An example of the exclusive or is: "You may have a bagel or a pastry, but not both." Often in natural language, given the appropriate context, the addendum "but not both" is omitted but implied. In mathematics, however, "or" is always inclusive or; if exclusive or is meant it will be specified, possibly by "xor".)

Material conditional also joins two simpler propositions, and we write P → Q, which is read "if P then Q". The proposition to the left of the arrow is called the antecedent and the proposition to the right is called the consequent. (There is no such designation for conjunction or disjunction, since they are commutative operations.) It expresses that Q is true whenever P is true. Thus P → Q is true in every case above except case 2, because this is the only case when P is true but Q is not. Using the example, "if P then Q" expresses that if it is raining outside then there is a cold-front over Kansas. The material conditional is often confused with physical causation. The material conditional, however, only relates two propositions by their truth-values, which is not the relation of cause and effect. It is contentious in the literature whether the material implication represents logical causation.

Biconditional joins two simpler propositions, and we write P ↔ Q, which is read "P if and only if Q". It expresses that P and Q have the same truth-value; thus P ↔ Q is true in cases 1 and 4, and false otherwise.

It is extremely helpful to look at the truth tables for these different operators, as well as the method of analytic tableaux.
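
These tables are easy to generate mechanically. The short Python sketch below is purely illustrative (it is not part of the standard presentation of the calculus); it prints the four truth-value assignments for P and Q in the order of cases 1-4 above, computing the conditional as "(not P) or Q" and the biconditional as equality of truth values.

    # Illustrative sketch: truth tables for ¬, ∧, ∨, → and ↔.
    from itertools import product

    def fmt(v):
        return "T" if v else "F"

    print("P Q | ¬P  P∧Q  P∨Q  P→Q  P↔Q")
    for P, Q in product([True, False], repeat=2):      # cases 1-4, in order
        print(f"{fmt(P)} {fmt(Q)} |  {fmt(not P)}    {fmt(P and Q)}    "
              f"{fmt(P or Q)}    {fmt((not P) or Q)}    {fmt(P == Q)}")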

204.4.1 Closure under operations


Propositional logic is closed under truth-functional connectives. That is to say, for any proposition φ, ¬φ is also a proposition. Likewise, for any propositions φ and ψ, φ ∧ ψ is a proposition, and similarly for disjunction, conditional, and biconditional. This implies that, for instance, φ ∧ ψ is a proposition, and so it can be conjoined with another proposition. In order to represent this, we need to use parentheses to indicate which proposition is conjoined with which. For instance, P ∧ Q ∧ R is not a well-formed formula, because we do not know if we are conjoining P ∧ Q with R or if we are conjoining P with Q ∧ R. Thus we must write either (P ∧ Q) ∧ R to represent the former, or P ∧ (Q ∧ R) to represent the latter. By evaluating the truth conditions, we see that both expressions have the same truth conditions (will be true in the same cases), and moreover that any proposition formed by arbitrary conjunctions will have the same truth conditions, regardless of the location of the parentheses. This means that conjunction is associative; however, one should not assume that parentheses never serve a purpose. For instance, the sentence P ∧ (Q ∨ R) does not have the same truth conditions as (P ∧ Q) ∨ R, so they are different sentences distinguished only by the parentheses. One can verify this by the truth-table method referenced above.
Note: For any arbitrary number of propositional constants, we can form a finite number of cases which list their possible truth-values. A simple way to generate this is by truth tables, in which one writes P, Q, ..., Z for any list of k propositional constants, that is to say, any list of propositional constants with k entries. Below this list, one writes 2^k rows, and below P one fills in the first half of the rows with true (or T) and the second half with false (or F). Below Q one fills in one-quarter of the rows with T, then one-quarter with F, then one-quarter with T and the last quarter with F. The next column alternates between true and false for each eighth of the rows, then sixteenths, and so on, until the last propositional constant varies between T and F for each row. This will give a complete listing of cases or truth-value assignments possible for those propositional constants.
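
The halving-and-quartering pattern just described is exactly the order in which the 2^k assignments are produced by iterating over all k-tuples of truth values with the first constant varying slowest. A small illustrative Python sketch (the constant names are arbitrary):

    # Illustrative sketch: enumerate all 2**k truth-value assignments for k
    # propositional constants, first constant varying slowest (halves, then
    # quarters, then eighths, and so on).
    from itertools import product

    def assignments(constants):
        for values in product(["T", "F"], repeat=len(constants)):
            yield dict(zip(constants, values))

    for row in assignments(["P", "Q", "R"]):
        print(row)    # 2**3 = 8 rows; P flips every 4 rows, Q every 2, R every row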

204.4.2 Argument
The propositional calculus then defines an argument to be a list of propositions. A valid argument is a list of propositions, the last of which follows from, or is implied by, the rest. All other arguments are invalid. The simplest valid argument is modus ponens, one instance of which is the following list of propositions:

1. P → Q
2. P
∴ Q

This is a list of three propositions, each line is a proposition, and the last follows from the rest. The first two lines are called premises, and the last line the conclusion. We say that any proposition C follows from any set of propositions (P1, ..., Pn), if C must be true whenever every member of the set (P1, ..., Pn) is true. In the argument above, for

any P and Q, whenever P → Q and P are true, necessarily Q is true. Notice that, when P is true, we cannot consider cases 3 and 4 (from the truth table). When P → Q is true, we cannot consider case 2. This leaves only case 1, in which Q is also true. Thus Q is implied by the premises.
This generalizes schematically. Thus, where φ and ψ may be any propositions at all,

1. φ → ψ
2. φ
∴ ψ

Other argument forms are convenient, but not necessary. Given a complete set of axioms (see below for one such set), modus ponens is sufficient to prove all other argument forms in propositional logic, thus they may be considered to be derivative. Note, this is not true of the extension of propositional logic to other logics like first-order logic. First-order logic requires at least one additional rule of inference in order to obtain completeness.
The significance of argument in formal logic is that one may obtain new truths from established truths. In the first example above, given the two premises, the truth of Q is not yet known or stated. After the argument is made, Q is deduced. In this way, we define a deduction system to be a set of all propositions that may be deduced from another set of propositions. For instance, given the set of propositions A = {P ∨ Q, ¬Q ∧ R, (P ∨ Q) → R}, we can define a deduction system, Γ, which is the set of all propositions which follow from A. Reiteration is always assumed, so P ∨ Q, ¬Q ∧ R, (P ∨ Q) → R ∈ Γ. Also, from the first element of A, the last element, as well as modus ponens, R is a consequence, and so R ∈ Γ. Because we have not included sufficiently complete axioms, though, nothing else may be deduced. Thus, even though most deduction systems studied in propositional logic are able to deduce (P ∨ Q) ↔ (¬P → Q), this one is too weak to prove such a proposition.

204.5 Generic description of a propositional calculus


A propositional calculus is a formal system L = L(A, Ω, Z, I), where:

The alpha set A is a finite set of elements called proposition symbols or propositional variables. Syntactically speaking, these are the most basic elements of the formal language L, otherwise referred to as atomic formulas or terminal elements. In the examples to follow, the elements of A are typically the letters p, q, r, and so on.
The omega set Ω is a finite set of elements called operator symbols or logical connectives. The set Ω is partitioned into disjoint subsets as follows:

Ω = Ω0 ∪ Ω1 ∪ . . . ∪ Ωj ∪ . . . ∪ Ωm.

In this partition, Ωj is the set of operator symbols of arity j.

In the more familiar propositional calculi, Ω is typically partitioned as follows:

Ω1 = {¬},

Ω2 ⊆ {∧, ∨, →, ↔}.

A frequently adopted convention treats the constant logical values as operators of arity zero, thus:

Ω0 = {0, 1}.

Some writers use the tilde (~), or N, instead of ¬; and some use the ampersand (&), the prefixed K, or ⋅ instead of ∧. Notation varies even more for the set of logical values, with symbols like {false, true}, {F, T}, or {⊥, ⊤} all being seen in various contexts instead of {0, 1}.

The zeta set Z is a finite set of transformation rules that are called inference rules when they acquire logical applications.
The iota set I is a finite set of initial points that are called axioms when they receive logical interpretations.

The language of L, also known as its set of formulas, well-formed formulas, is inductively defined by the following rules:

1. Base: Any element of the alpha set A is a formula of L.

2. If p1, p2, ..., pj are formulas and f is in Ωj, then (f(p1, p2, ..., pj)) is a formula.

3. Closed: Nothing else is a formula of L.

Repeated applications of these rules permit the construction of complex formulas. For example:

1. By rule 1, p is a formula.
2. By rule 2, ¬p is a formula.
3. By rule 1, q is a formula.
4. By rule 2, (¬p ∨ q) is a formula.
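
The inductive definition translates directly into a recursive test for well-formedness. The following Python sketch is illustrative only: it fixes one particular alpha set and one particular assignment of arities (rather than the fully general L), and represents formulas as nested tuples such as ("¬", "p") and ("∨", ("¬", "p"), "q").

    # Illustrative sketch of the inductive definition of the language of L.
    ALPHA = {"p", "q", "r", "s", "t", "u"}                 # an example alpha set A
    ARITY = {"¬": 1, "∧": 2, "∨": 2, "→": 2, "↔": 2}       # operators by arity

    def is_formula(x):
        if isinstance(x, str):                             # Base: elements of A
            return x in ALPHA
        if isinstance(x, tuple) and x and x[0] in ARITY:   # (f, p1, ..., pj)
            f, args = x[0], x[1:]
            return len(args) == ARITY[f] and all(is_formula(a) for a in args)
        return False                                       # Closed: nothing else

    print(is_formula(("∨", ("¬", "p"), "q")))              # True, mirroring steps 1-4
    print(is_formula(("∧", "p")))                          # False: wrong arity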

204.6 Example 1. Simple axiom system


Let L1 = L(A, Ω, Z, I), where A, Ω, Z, I are defined as follows:

The alpha set A is a finite set of symbols that is large enough to supply the needs of a given discussion, for example:

A = {p, q, r, s, t, u}.

Of the three connectives for conjunction, disjunction, and implication (∧, ∨, and →), one can be taken as primitive and the other two can be defined in terms of it and negation (¬).[9] Indeed, all of the logical connectives can be defined in terms of a sole sufficient operator. The biconditional (↔) can of course be defined in terms of conjunction and implication, with a ↔ b defined as (a → b) ∧ (b → a).

Ω = Ω1 ∪ Ω2

Ω1 = {¬},
Ω2 = {→}.

An axiom system discovered by Jan Łukasiewicz formulates a propositional calculus in this language as follows. The axioms are all substitution instances of:

(p → (q → p))

((p → (q → r)) → ((p → q) → (p → r)))

((¬p → ¬q) → (q → p))

The rule of inference is modus ponens (i.e., from p and (p → q), infer q). Then a ∨ b is defined as ¬a → b, and a ∧ b is defined as ¬(a → ¬b). This system is used in the Metamath set.mm formal proof database.
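
Under the usual two-valued reading of → these definitions behave as expected. The following illustrative Python sketch (not part of the axiom system itself) checks that ¬a → b has the truth table of a ∨ b, that ¬(a → ¬b) has the truth table of a ∧ b, and that every truth-value instance of the three axiom schemata is true:

    # Illustrative truth-table check of the defined connectives and the axioms.
    from itertools import product

    def imp(a, b):                      # material conditional a → b
        return (not a) or b

    for a, b in product([False, True], repeat=2):
        assert imp(not a, b) == (a or b)            # a ∨ b  defined as  ¬a → b
        assert (not imp(a, not b)) == (a and b)     # a ∧ b  defined as  ¬(a → ¬b)

    for p, q, r in product([False, True], repeat=3):
        assert imp(p, imp(q, p))                                     # axiom 1
        assert imp(imp(p, imp(q, r)), imp(imp(p, q), imp(p, r)))     # axiom 2
        assert imp(imp(not p, not q), imp(q, p))                     # axiom 3
    print("all checks passed")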

204.7 Example 2. Natural deduction system


Let L2 = L(A, Ω, Z, I), where A, Ω, Z, I are defined as follows:

The alpha set A is a finite set of symbols that is large enough to supply the needs of a given discussion, for example:

A = {p, q, r, s, t, u}.

The omega set Ω = Ω1 ∪ Ω2 partitions as follows:

Ω1 = {¬},

Ω2 = {∧, ∨, →, ↔}.

In the following example of a propositional calculus, the transformation rules are intended to be interpreted as the
inference rules of a so-called natural deduction system. The particular system presented here has no initial points,
which means that its interpretation for logical applications derives its theorems from an empty axiom set.

The set of initial points is empty, that is, I = ∅.


The set of transformation rules, Z , is described as follows:

Our propositional calculus has ten inference rules. These rules allow us to derive other true formulas given a set of formulas that are assumed to be true. The first nine simply state that we can infer certain well-formed formulas from other well-formed formulas. The last rule, however, uses hypothetical reasoning in the sense that in the premise of the rule we temporarily assume an (unproven) hypothesis to be part of the set of inferred formulas to see if we can infer a certain other formula. Since the first nine rules don't do this they are usually described as non-hypothetical rules, and the last one as a hypothetical rule.
In describing the transformation rules, we may introduce a metalanguage symbol ⊢. It is basically a convenient shorthand for saying "infer that". The format is Γ ⊢ ψ, in which Γ is a (possibly empty) set of formulas called premises, and ψ is a formula called the conclusion. The transformation rule Γ ⊢ ψ means that if every proposition in Γ is a theorem (or has the same truth value as the axioms), then ψ is also a theorem. Note that, considering the following rule Conjunction introduction, we will know that whenever Γ has more than one formula, we can always safely reduce it into one formula using conjunction. So, for short, from that time on we may represent Γ as one formula instead of a set. Another omission for convenience is when Γ is an empty set, in which case Γ may not appear.

Negation introduction: From (p → q) and (p → ¬q), infer ¬p.

That is, {(p → q), (p → ¬q)} ⊢ ¬p.

Negation elimination: From ¬p, infer (p → r).

That is, {¬p} ⊢ (p → r).

Double negation elimination: From ¬¬p, infer p.

That is, ¬¬p ⊢ p.

Conjunction introduction: From p and q, infer (p ∧ q).

That is, {p, q} ⊢ (p ∧ q).

Conjunction elimination: From (p ∧ q), infer p. From (p ∧ q), infer q.

That is, (p ∧ q) ⊢ p and (p ∧ q) ⊢ q.

Disjunction introduction: From p, infer (p ∨ q). From q, infer (p ∨ q).

That is, p ⊢ (p ∨ q) and q ⊢ (p ∨ q).

Disjunction elimination: From (p ∨ q) and (p → r) and (q → r), infer r.

That is, {p ∨ q, p → r, q → r} ⊢ r.

Biconditional introduction: From (p → q) and (q → p), infer (p ↔ q).

That is, {p → q, q → p} ⊢ (p ↔ q).

Biconditional elimination: From (p ↔ q), infer (p → q). From (p ↔ q), infer (q → p).

That is, (p ↔ q) ⊢ (p → q) and (p ↔ q) ⊢ (q → p).

Modus ponens (conditional elimination): From p and (p → q), infer q.

That is, {p, p → q} ⊢ q.

Conditional proof (conditional introduction): From [accepting p allows a proof of q], infer (p → q).

That is, (p ⊢ q) ⊢ (p → q).
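
To make the shape of these rules concrete, here is an illustrative Python sketch (not a full proof checker) in which formulas are nested tuples such as ("→", "p", "q") and a few of the non-hypothetical rules are written as functions taking premises to a conclusion:

    # Illustrative sketch: three of the rules above as functions on formulas.
    def conjunction_introduction(p, q):
        return ("∧", p, q)                        # from p and q, infer (p ∧ q)

    def conjunction_elimination_left(conjunction):
        op, p, q = conjunction
        assert op == "∧"
        return p                                  # from (p ∧ q), infer p

    def modus_ponens(p, conditional):
        op, antecedent, consequent = conditional
        assert op == "→" and antecedent == p
        return consequent                         # from p and (p → q), infer q

    pq = conjunction_introduction("p", "q")                   # ('∧', 'p', 'q')
    print(modus_ponens(pq, ("→", ("∧", "p", "q"), "r")))      # prints 'r'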

204.8 Basic and derived argument forms

204.9 Proofs in propositional calculus


One of the main uses of a propositional calculus, when interpreted for logical applications, is to determine relations
of logical equivalence between propositional formulas. These relationships are determined by means of the available
transformation rules, sequences of which are called derivations or proofs.
In the discussion to follow, a proof is presented as a sequence of numbered lines, with each line consisting of a single formula followed by a reason or justification for introducing that formula. Each premise of the argument, that is, an assumption introduced as a hypothesis of the argument, is listed at the beginning of the sequence and is marked as a "premise" in lieu of other justification. The conclusion is listed on the last line. A proof is complete if every line follows from the previous ones by the correct application of a transformation rule. (For a contrasting approach, see proof-trees.)

204.9.1 Example of a proof


To be shown that A → A.

One possible proof of this (which, though valid, happens to contain more steps than are necessary) may be arranged as follows:

Interpret A ⊢ A as "Assuming A, infer A". Read ⊢ A → A as "Assuming nothing, infer that A implies A", or "It is a tautology that A implies A", or "It is always true that A implies A".

204.10 Soundness and completeness of the rules


The crucial properties of this set of rules are that they are sound and complete. Informally this means that the rules are correct and that no other rules are required. These claims can be made more formal as follows.
We define a truth assignment as a function that maps propositional variables to true or false. Informally such a truth assignment can be understood as the description of a possible state of affairs (or possible world) where certain statements are true and others are not. The semantics of formulas can then be formalized by defining for which state of affairs they are considered to be true, which is what is done by the following definition.
We define when such a truth assignment A satisfies a certain well-formed formula with the following rules:

A satisfies the propositional variable P if and only if A(P) = true

A satisfies ¬φ if and only if A does not satisfy φ

A satisfies (φ ∧ ψ) if and only if A satisfies both φ and ψ

A satisfies (φ ∨ ψ) if and only if A satisfies at least one of either φ or ψ

A satisfies (φ → ψ) if and only if it is not the case that A satisfies φ but not ψ

A satisfies (φ ↔ ψ) if and only if A satisfies both φ and ψ or satisfies neither one of them

With this definition we can now formalize what it means for a formula φ to be implied by a certain set S of formulas. Informally this is true if in all worlds that are possible given the set of formulas S the formula φ also holds. This leads to the following formal definition: We say that a set S of well-formed formulas semantically entails (or implies) a certain well-formed formula φ if all truth assignments that satisfy all the formulas in S also satisfy φ.
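
For formulas over finitely many variables both satisfaction and semantic entailment can be checked by brute force. The sketch below is illustrative only: formulas are encoded as nested tuples (a bare string is a propositional variable), and entailment is tested by enumerating every truth assignment over a given list of variables.

    # Illustrative sketch: satisfaction and brute-force semantic entailment.
    from itertools import product

    def satisfies(A, phi):                  # A is a dict: variable -> True/False
        if isinstance(phi, str):
            return A[phi]
        op = phi[0]
        if op == "¬": return not satisfies(A, phi[1])
        if op == "∧": return satisfies(A, phi[1]) and satisfies(A, phi[2])
        if op == "∨": return satisfies(A, phi[1]) or satisfies(A, phi[2])
        if op == "→": return (not satisfies(A, phi[1])) or satisfies(A, phi[2])
        if op == "↔": return satisfies(A, phi[1]) == satisfies(A, phi[2])
        raise ValueError(op)

    def entails(S, phi, variables):
        """True when every assignment satisfying all of S also satisfies phi."""
        for values in product([True, False], repeat=len(variables)):
            A = dict(zip(variables, values))
            if all(satisfies(A, s) for s in S) and not satisfies(A, phi):
                return False
        return True

    # {P, P → Q} semantically entails Q (modus ponens is truth-preserving):
    print(entails(["P", ("→", "P", "Q")], "Q", ["P", "Q"]))        # True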
Finally we define syntactical entailment such that φ is syntactically entailed by S if and only if we can derive it with the inference rules that were presented above in a finite number of steps. This allows us to formulate exactly what it means for the set of inference rules to be sound and complete:
Soundness: If the set of well-formed formulas S syntactically entails the well-formed formula φ then S semantically entails φ.
Completeness: If the set of well-formed formulas S semantically entails the well-formed formula φ then S syntactically entails φ.
For the above set of rules this is indeed the case.

204.10.1 Sketch of a soundness proof


(For most logical systems, this is the comparatively "simple" direction of proof.)
Notational conventions: Let G be a variable ranging over sets of sentences. Let A, B and C range over sentences. For "G syntactically entails A" we write "G proves A". For "G semantically entails A" we write "G implies A".
We want to show: (A)(G) (if G proves A, then G implies A).
We note that "G proves A" has an inductive definition, and that gives us the immediate resources for demonstrating claims of the form "If G proves A, then ...". So our proof proceeds by induction.

I. Basis. Show: If A is a member of G, then G implies A.

II. Basis. Show: If A is an axiom, then G implies A.

III. Inductive step (induction on n, the length of the proof):

(a) Assume for arbitrary G and A that if G proves A in n or fewer steps, then G implies A.
(b) For each possible application of a rule of inference at step n + 1, leading to a new theorem B, show that G implies B.

Notice that Basis Step II can be omitted for natural deduction systems because they have no axioms. When used,
Step II involves showing that each of the axioms is a (semantic) logical truth.
The Basis steps demonstrate that the simplest provable sentences from G are also implied by G, for any G. (The proof is simple, since the semantic fact that a set implies any of its members is also trivial.) The Inductive step will systematically cover all the further sentences that might be provable (by considering each case where we might reach a logical conclusion using an inference rule) and shows that if a new sentence is provable, it is also logically implied. (For example, we might have a rule telling us that from "A" we can derive "A or B". In III.(a) we assume that if A is provable it is implied. We also know that if A is provable then "A or B" is provable. We have to show that then "A or B" too is implied. We do so by appeal to the semantic definition and the assumption we just made. A is provable from G, we assume. So it is also implied by G. So any semantic valuation making all of G true makes A true. But any valuation making A true makes "A or B" true, by the defined semantics for "or". So any valuation which makes all of G true makes "A or B" true. So "A or B" is implied.) Generally, the Inductive step will consist of a lengthy but simple case-by-case analysis of all the rules of inference, showing that each preserves semantic implication.
By the definition of provability, there are no sentences provable other than by being a member of G, an axiom, or following by a rule; so if all of those are semantically implied, the deduction calculus is sound.

204.10.2 Sketch of completeness proof


(This is usually the much harder direction of proof.)
We adopt the same notational conventions as above.
We want to show: If G implies A, then G proves A. We proceed by contraposition: We show instead that if G does
not prove A then G does not imply A.

1. G does not prove A. (Assumption)

2. If G does not prove A, then we can construct an (infinite) Maximal Set, G∗, which is a superset of G and which also does not prove A.

(a) Place an ordering on all the sentences in the language (e.g., shortest first, and equally long ones in extended alphabetical ordering), and number them (E1, E2, ...)
(b) Define a series Gn of sets (G0, G1, ...) inductively:

i. G0 = G
ii. If Gk ∪ {Ek+1} proves A, then Gk+1 = Gk
iii. If Gk ∪ {Ek+1} does not prove A, then Gk+1 = Gk ∪ {Ek+1}

(c) Define G∗ as the union of all the Gn. (That is, G∗ is the set of all the sentences that are in any Gn.)
(d) It can be easily shown that

i. G∗ contains (is a superset of) G (by (b.i));

ii. G∗ does not prove A (because if it proved A then some sentence was added to some Gn which caused it to prove A; but this was ruled out by definition); and

iii. G∗ is a Maximal Set with respect to A: if any more sentences whatever were added to G∗, it would prove A. (Because if it were possible to add any more sentences, they should have been added when they were encountered during the construction of the Gn, again by definition.)

3. If G∗ is a Maximal Set with respect to A, then it is truth-like. This means that it contains the sentence C only if it does not contain the sentence not-C; if it contains C and contains "If C then B" then it also contains B; and so forth.

4. If G∗ is truth-like there is a G∗-Canonical valuation of the language: one that makes every sentence in G∗ true and everything outside G∗ false while still obeying the laws of semantic composition in the language.

5. A G∗-canonical valuation will make our original set G all true, and make A false.

6. If there is a valuation on which G are true and A is false, then G does not (semantically) imply A.

QED

204.10.3 Another outline for a completeness proof


If a formula is a tautology, then there is a truth table for it which shows that each valuation yields the value true for the
formula. Consider such a valuation. By mathematical induction on the length of the subformulas, show that the truth
or falsity of the subformula follows from the truth or falsity (as appropriate for the valuation) of each propositional
variable in the subformula. Then combine the lines of the truth table together two at a time by using "(P is true implies
S) implies ((P is false implies S) implies S)". Keep repeating this until all dependencies on propositional variables
have been eliminated. The result is that we have proved the given tautology. Since every tautology is provable, the
logic is complete.

204.11 Interpretation of a truth-functional propositional calculus


An interpretation of a truth-functional propositional calculus P is an assignment to each propositional symbol of P of one or the other (but not both) of the truth values truth (T) and falsity (F), and an assignment to the connective symbols of P of their usual truth-functional meanings. An interpretation of a truth-functional propositional calculus may also be expressed in terms of truth tables.[11]
For n distinct propositional symbols there are 2^n distinct possible interpretations. For any particular symbol a, for example, there are 2^1 = 2 possible interpretations:

1. a is assigned T, or
2. a is assigned F.

For the pair a, b there are 2^2 = 4 possible interpretations:

1. both are assigned T,


2. both are assigned F,
3. a is assigned T and b is assigned F, or
4. a is assigned F and b is assigned T.[11]

Since P has ℵ0, that is, denumerably many propositional symbols, there are 2^ℵ0 = c, and therefore uncountably many distinct possible interpretations of P.[11]

204.11.1 Interpretation of a sentence of truth-functional propositional logic


Main article: Interpretation (logic)

If φ and ψ are formulas of P and I is an interpretation of P then:

A sentence of propositional logic is true under an interpretation I iff I assigns the truth value T to that sentence. If a sentence is true under an interpretation, then that interpretation is called a model of that sentence.

φ is false under an interpretation I iff φ is not true under I.[11]

A sentence of propositional logic is logically valid if it is true under every interpretation.

⊨ φ means that φ is logically valid.

A sentence ψ of propositional logic is a semantic consequence of a sentence φ iff there is no interpretation under which φ is true and ψ is false.

A sentence of propositional logic is consistent iff it is true under at least one interpretation. It is inconsistent if it is not consistent.

Some consequences of these definitions:

For any given interpretation a given formula is either true or false.[11]

No formula is both true and false under the same interpretation.[11]

φ is false for a given interpretation iff ¬φ is true for that interpretation; and φ is true under an interpretation iff ¬φ is false under that interpretation.[11]

If φ and (φ → ψ) are both true under a given interpretation, then ψ is true under that interpretation.[11]

If ⊨P φ and ⊨P (φ → ψ), then ⊨P ψ.[11]

¬φ is true under I iff φ is not true under I.

(φ → ψ) is true under I iff either φ is not true under I or ψ is true under I.[11]

A sentence ψ of propositional logic is a semantic consequence of a sentence φ iff (φ → ψ) is logically valid, that is, φ ⊨P ψ iff ⊨P (φ → ψ).[11]

204.12 Alternative calculus


It is possible to define another version of propositional calculus, which defines most of the syntax of the logical operators by means of axioms, and which uses only one inference rule.

204.12.1 Axioms

Let φ, χ, and ψ stand for well-formed formulas. (The well-formed formulas themselves would not contain any Greek letters, but only capital Roman letters, connective operators, and parentheses.) Then the axioms are as follows:

Axiom THEN-2 may be considered to be a distributive property of implication with respect to implication.

Axioms AND-1 and AND-2 correspond to conjunction elimination. The relation between AND-1 and AND-2 reflects the commutativity of the conjunction operator.

Axiom AND-3 corresponds to conjunction introduction.

Axioms OR-1 and OR-2 correspond to disjunction introduction. The relation between OR-1 and OR-2 reflects the commutativity of the disjunction operator.

Axiom NOT-1 corresponds to reductio ad absurdum.

Axiom NOT-2 says that anything can be deduced from a contradiction.

Axiom NOT-3 is called "tertium non datur" (Latin: "a third is not given") and reflects the semantic valuation of propositional formulas: a formula can have a truth-value of either true or false. There is no third truth-value, at least not in classical logic. Intuitionistic logicians do not accept the axiom NOT-3.

204.12.2 Inference rule

The inference rule is modus ponens:

φ, (φ → ψ) ⊢ ψ.

204.12.3 Meta-inference rule

Let a demonstration be represented by a sequence, with hypotheses to the left of the turnstile and the conclusion to the right of the turnstile. Then the deduction theorem can be stated as follows:

If the sequence

φ1, φ2, ..., φn, χ ⊢ ψ

has been demonstrated, then it is also possible to demonstrate the sequence

φ1, φ2, ..., φn ⊢ χ → ψ

This deduction theorem (DT) is not itself formulated with propositional calculus: it is not a theorem of propositional calculus, but a theorem about propositional calculus. In this sense, it is a meta-theorem, comparable to theorems about the soundness or completeness of propositional calculus.
On the other hand, DT is so useful for simplifying the syntactical proof process that it can be considered and used as another inference rule, accompanying modus ponens. In this sense, DT corresponds to the natural conditional proof inference rule which is part of the first version of propositional calculus introduced in this article.
The converse of DT is also valid:

If the sequence

φ1, φ2, ..., φn ⊢ χ → ψ

has been demonstrated, then it is also possible to demonstrate the sequence

φ1, φ2, ..., φn, χ ⊢ ψ

in fact, the validity of the converse of DT is almost trivial compared to that of DT:

If

φ1, ..., φn ⊢ χ → ψ

then

(1) φ1, ..., φn, χ ⊢ χ → ψ
(2) φ1, ..., φn, χ ⊢ χ

and from (1) and (2) can be deduced

(3) φ1, ..., φn, χ ⊢ ψ

by means of modus ponens, Q.E.D.

The converse of DT has powerful implications: it can be used to convert an axiom into an inference rule. For example, the axiom AND-1,

⊢ φ ∧ χ → φ

can be transformed by means of the converse of the deduction theorem into the inference rule

φ ∧ χ ⊢ φ

which is conjunction elimination, one of the ten inference rules used in the first version (in this article) of the propositional calculus.

204.12.4 Example of a proof


The following is an example of a (syntactical) demonstration, involving only axioms THEN-1 and THEN-2:
Prove: A → A (Reflexivity of implication).
Proof:

1. (A → ((B → A) → A)) → ((A → (B → A)) → (A → A))

Axiom THEN-2 with φ = A, χ = B → A, ψ = A

2. A → ((B → A) → A)

Axiom THEN-1 with φ = A, χ = B → A

3. (A → (B → A)) → (A → A)

From (1) and (2) by modus ponens.

4. A → (B → A)

Axiom THEN-1 with φ = A, χ = B

5. A → A

From (3) and (4) by modus ponens.
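
The proof can be verified mechanically: lines 1, 2 and 4 are axiom instances, and lines 3 and 5 each follow by modus ponens. The following illustrative Python sketch encodes the formulas as nested tuples and checks the two modus ponens steps:

    # Illustrative check of the modus ponens steps in the proof above.
    def imp(a, b):                         # build the formula a → b
        return ("→", a, b)

    A, B = "A", "B"
    line1 = imp(imp(A, imp(imp(B, A), A)), imp(imp(A, imp(B, A)), imp(A, A)))
    line2 = imp(A, imp(imp(B, A), A))
    line4 = imp(A, imp(B, A))

    def modus_ponens(p, p_implies_q):
        op, antecedent, consequent = p_implies_q
        assert op == "→" and antecedent == p
        return consequent

    line3 = modus_ponens(line2, line1)
    line5 = modus_ponens(line4, line3)
    print(line3 == imp(imp(A, imp(B, A)), imp(A, A)))    # True
    print(line5 == imp(A, A))                            # True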

204.13 Equivalence to equational logics


The preceding alternative calculus is an example of a Hilbert-style deduction system. In the case of propositional systems the axioms are terms built with logical connectives and the only inference rule is modus ponens. Equational logic as standardly used informally in high school algebra is a different kind of calculus from Hilbert systems. Its theorems are equations and its inference rules express the properties of equality, namely that it is a congruence on terms that admits substitution.
Classical propositional calculus as described above is equivalent to Boolean algebra, while intuitionistic propositional calculus is equivalent to Heyting algebra. The equivalence is shown by translation in each direction of the theorems of the respective systems. Theorems φ of classical or intuitionistic propositional calculus are translated as equations φ = 1 of Boolean or Heyting algebra respectively. Conversely theorems x = y of Boolean or Heyting algebra are translated as theorems (x → y) ∧ (y → x) of classical or intuitionistic calculus respectively, for which x ≡ y is a standard abbreviation. In the case of Boolean algebra x = y can also be translated as (x ∧ y) ∨ (¬x ∧ ¬y), but this translation is incorrect intuitionistically.
In both Boolean and Heyting algebra, inequality x ≤ y can be used in place of equality. The equality x = y is expressible as a pair of inequalities x ≤ y and y ≤ x. Conversely the inequality x ≤ y is expressible as the equality x ∧ y = x, or as x ∨ y = y. The significance of inequality for Hilbert-style systems is that it corresponds to the latter's deduction or entailment symbol ⊢. An entailment

φ1, φ2, . . . , φn ⊢ ψ

is translated in the inequality version of the algebraic framework as

φ1 ∧ φ2 ∧ . . . ∧ φn ≤ ψ

Conversely the algebraic inequality x ≤ y is translated as the entailment

x ⊢ y

The difference between implication x → y and inequality or entailment x ≤ y or x ⊢ y is that the former is internal to the logic while the latter is external. Internal implication between two terms is another term of the same kind. Entailment as external implication between two terms expresses a metatruth outside the language of the logic, and is considered part of the metalanguage. Even when the logic under study is intuitionistic, entailment is ordinarily understood classically as two-valued: either the left side entails, or is less-or-equal to, the right side, or it is not.
Similar but more complex translations to and from algebraic logics are possible for natural deduction systems as
described above and for the sequent calculus. The entailments of the latter can be interpreted as two-valued, but a

more insightful interpretation is as a set, the elements of which can be understood as abstract proofs organized as
the morphisms of a category. In this interpretation the cut rule of the sequent calculus corresponds to composition
in the category. Boolean and Heyting algebras enter this picture as special categories having at most one morphism
per homset, i.e., one proof per entailment, corresponding to the idea that existence of proofs is all that matters: any
proof will do and there is no point in distinguishing them.

204.14 Graphical calculi


It is possible to generalize the definition of a formal language from a set of finite sequences over a finite basis to include many other sets of mathematical structures, so long as they are built up by finitary means from finite materials. What's more, many of these families of formal structures are especially well-suited for use in logic.
For example, there are many families of graphs that are close enough analogues of formal languages that the concept of a calculus is quite easily and naturally extended to them. Indeed, many species of graphs arise as parse graphs in the syntactic analysis of the corresponding families of text structures. The exigencies of practical computation on formal languages frequently demand that text strings be converted into pointer structure renditions of parse graphs, simply as a matter of checking whether strings are well-formed formulas or not. Once this is done, there are many advantages to be gained from developing the graphical analogue of the calculus on strings. The mapping from strings to parse graphs is called parsing and the inverse mapping from parse graphs to strings is achieved by an operation that is called traversing the graph.

204.15 Other logical calculi


Propositional calculus is about the simplest kind of logical calculus in current use. It can be extended in several ways. (Aristotelian syllogistic calculus, which is largely supplanted in modern logic, is in some ways simpler but in other ways more complex than propositional calculus.) The most immediate way to develop a more complex logical calculus is to introduce rules that are sensitive to more fine-grained details of the sentences being used.
First-order logic (a.k.a. first-order predicate logic) results when the atomic sentences of propositional logic are broken up into terms, variables, predicates, and quantifiers, all keeping the rules of propositional logic with some new ones introduced. (For example, from "All dogs are mammals" we may infer "If Rover is a dog then Rover is a mammal".) With the tools of first-order logic it is possible to formulate a number of theories, either with explicit axioms or by rules of inference, that can themselves be treated as logical calculi. Arithmetic is the best known of these; others include set theory and mereology. Second-order logic and other higher-order logics are formal extensions of first-order logic. Thus, it makes sense to refer to propositional logic as zeroth-order logic, when comparing it with these logics.
Modal logic also offers a variety of inferences that cannot be captured in propositional calculus. For example, from "Necessarily p" we may infer that p. From p we may infer "It is possible that p". The translation between modal logics and algebraic logics concerns classical and intuitionistic logics but with the introduction of a unary operator on Boolean or Heyting algebras, different from the Boolean operations, interpreting the possibility modality, and in the case of Heyting algebra a second operator interpreting necessity (for Boolean algebra this is redundant since necessity is the De Morgan dual of possibility). The first operator preserves 0 and disjunction while the second preserves 1 and conjunction.
Many-valued logics are those allowing sentences to have values other than true and false. (For example, "neither" and "both" are standard "extra values"; "continuum logic" allows each sentence to have any of an infinite number of "degrees of truth" between true and false.) These logics often require calculational devices quite distinct from propositional calculus. When the values form a Boolean algebra (which may have more than two or even infinitely many values), many-valued logic reduces to classical logic; many-valued logics are therefore only of independent interest when the values form an algebra that is not Boolean.

204.16 Solvers
Finding solutions to propositional logic formulas is an NP-complete problem. However, practical methods exist (e.g., DPLL algorithm, 1962; Chaff algorithm, 2001) that are very fast for many useful cases. Recent work has extended the SAT solver algorithms to work with propositions containing arithmetic expressions; these are the SMT solvers.
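
As an illustration of the kind of procedure involved (a bare-bones sketch, not the optimized solvers named above), the following Python function implements the basic DPLL recursion on formulas in conjunctive normal form, with each clause given as a frozenset of non-zero integers (a positive integer for a variable, a negative one for its negation):

    # Bare-bones DPLL sketch on CNF clauses.
    def dpll(clauses, assignment=None):
        assignment = dict(assignment or {})
        while True:                                    # unit propagation
            units = [next(iter(c)) for c in clauses if len(c) == 1]
            if not units:
                break
            lit = units[0]
            assignment[abs(lit)] = lit > 0
            clauses = [c - {-lit} for c in clauses if lit not in c]
            if any(len(c) == 0 for c in clauses):
                return None                            # conflict on this branch
        if not clauses:
            return assignment                          # every clause satisfied
        lit = next(iter(clauses[0]))                   # branch on some literal
        for choice in (lit, -lit):
            result = dpll(clauses + [frozenset([choice])], assignment)
            if result is not None:
                return result
        return None

    # (p ∨ q) ∧ (¬p ∨ q) ∧ (¬q ∨ r) is satisfiable:
    print(dpll([frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2, 3})]))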

204.17 See also

204.17.1 Higher logical levels


First-order logic
Second-order propositional logic
Second-order logic
Higher-order logic

204.17.2 Related topics

204.18 References
[1] Bobzien, Susanne (1 January 2016). Zalta, Edward N., ed. The Stanford Encyclopedia of Philosophy via Stanford
Encyclopedia of Philosophy.
[2] Marenbon, John (2007). Medieval philosophy: an historical and philosophical introduction. Routledge. p. 137.
[3] Peckhaus, Volker (1 January 2014). Zalta, Edward N., ed. The Stanford Encyclopedia of Philosophy via Stanford Ency-
clopedia of Philosophy.
[4] Hurley, Patrick (2007). A Concise Introduction to Logic 10th edition. Wadsworth Publishing. p. 392.
[5] Beth, Evert W.; "Semantic entailment and formal derivability", series: Mededelingen van de Koninklijke Nederlandse Akademie van Wetenschappen, Afdeling Letterkunde, Nieuwe Reeks, vol. 18, no. 13, Noord-Hollandsche Uitg. Mij., Amsterdam, 1955, pp. 309–42. Reprinted in Jaakko Hintikka (ed.) The Philosophy of Mathematics, Oxford University Press, 1969
[6] Truth in Frege
[7] Russell: the Journal of Bertrand Russell Studies.
[8] Anellis, Irving H. (2012). "Peirce's Truth-functional Analysis and the Origin of the Truth Table". History and Philosophy of Logic. 33: 87–97. doi:10.1080/01445340.2011.621702.
[9] Wernick, William (1942) "Complete Sets of Logical Functions", Transactions of the American Mathematical Society 51, pp. 117–132.
[10] Toida, Shunichi (2 August 2009). Proof of Implications. CS381 Discrete Structures/Discrete Mathematics Web Course
Material. Department Of Computer Science, Old Dominion University. Retrieved 10 March 2010.
[11] Hunter, Geoffrey (1971). Metalogic: An Introduction to the Metatheory of Standard First-Order Logic. University of California Press. ISBN 0-520-02356-0.

204.19 Further reading


Brown, Frank Markham (2003), Boolean Reasoning: The Logic of Boolean Equations, 1st edition, Kluwer
Academic Publishers, Norwell, MA. 2nd edition, Dover Publications, Mineola, NY.
Chang, C.C. and Keisler, H.J. (1973), Model Theory, North-Holland, Amsterdam, Netherlands.
Kohavi, Zvi (1978), Switching and Finite Automata Theory, 1st edition, McGraw-Hill, 1970. 2nd edition, McGraw-Hill, 1978.
Korfhage, Robert R. (1974), Discrete Computational Structures, Academic Press, New York, NY.
Lambek, J. and Scott, P.J. (1986), Introduction to Higher Order Categorical Logic, Cambridge University Press,
Cambridge, UK.
Mendelson, Elliot (1964), Introduction to Mathematical Logic, D. Van Nostrand Company.

204.19.1 Related works


Hofstadter, Douglas (1979). Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books. ISBN 978-0-465-02656-2.

204.20 External links


Klement, Kevin C. (2006), Propositional Logic, in James Fieser and Bradley Dowden (eds.), Internet Ency-
clopedia of Philosophy, Eprint.

Formal Predicate Calculus, contains a systematic formal development along the lines of Alternative calculus

forall x: an introduction to formal logic, by P.D. Magnus, covers formal semantics and proof theory for sen-
tential logic.

Chapter 2 / Propositional Logic from Logic In Action


Propositional sequent calculus prover on Project Nayuki. (note: implication can be input in the form !X|Y, and a sequent can be a single formula prefixed with > and having no commas)
Chapter 205

Propositional directed acyclic graph

A propositional directed acyclic graph (PDAG) is a data structure that is used to represent a Boolean function. A
Boolean function can be represented as a rooted, directed acyclic graph of the following form:

Leaves are labeled with ⊤ (true), ⊥ (false), or a Boolean variable.

Non-leaves are △ (logical and), ▽ (logical or) and ◇ (logical not).

△- and ▽-nodes have at least one child.

◇-nodes have exactly one child.

Leaves labeled with ⊤ (⊥) represent the constant Boolean function which always evaluates to 1 (0). A leaf labeled with a Boolean variable x is interpreted as the assignment x = 1, i.e. it represents the Boolean function which evaluates to 1 if and only if x = 1. The Boolean function represented by a △-node is the one that evaluates to 1, if and only if the Boolean functions of all its children evaluate to 1. Similarly, a ▽-node represents the Boolean function that evaluates to 1, if and only if the Boolean function of at least one child evaluates to 1. Finally, a ◇-node represents the complementary Boolean function of its child, i.e. the one that evaluates to 1, if and only if the Boolean function of its child evaluates to 0.
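
The evaluation rule just described is a direct recursion over the graph. The following Python sketch is illustrative only (nodes are written as ad hoc nested tuples; an actual PDAG implementation would share repeated subgraphs so that each node is stored once):

    # Illustrative PDAG evaluation: a node is True/False (constant leaf),
    # a string (variable leaf), or a tuple ('and'|'or'|'not', children...).
    def evaluate(node, assignment):
        if isinstance(node, bool):
            return node                                 # constant leaf
        if isinstance(node, str):
            return assignment[node]                     # variable leaf
        op, *children = node
        if op == "and":
            return all(evaluate(c, assignment) for c in children)
        if op == "or":
            return any(evaluate(c, assignment) for c in children)
        if op == "not":
            return not evaluate(children[0], assignment)
        raise ValueError(op)

    pdag = ("or", ("and", "x1", "x2"), ("not", "x3"))
    print(evaluate(pdag, {"x1": True, "x2": False, "x3": False}))   # True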

205.1 PDAG, BDD, and NNF


Every binary decision diagram (BDD) and every negation normal form (NNF) are also a PDAG with some
particular properties. The following pictures represent the Boolean function f (x1, x2, x3) = x1 x2 x3 +
x1 x2 + x2 x3 :

205.2 See also


Data structure

Boolean satisability problem

Proposition

205.3 References
M. Wachter & R. Haenni, Propositional DAGs: a New Graph-Based Language for Representing Boolean
Functions, KR'06, 10th International Conference on Principles of Knowledge Representation and Reasoning,
Lake District, UK, 2006.


M. Wachter & R. Haenni, Probabilistic Equivalence Checking with Propositional DAGs, Technical Report
iam-2006-001, Institute of Computer Science and Applied Mathematics, University of Bern, Switzerland,
2006.

M. Wachter, R. Haenni & J. Jonczy, "Reliability and Diagnostics of Modular Systems: a New Probabilistic Approach", DX'06, 18th International Workshop on Principles of Diagnosis, Peñaranda de Duero, Burgos, Spain, 2006.
Chapter 206

Propositional formula

In propositional logic, a propositional formula is a type of syntactic formula which is well formed and has a truth value. If the values of all variables in a propositional formula are given, it determines a unique truth value. A propositional formula may also be called a propositional expression, a sentence, or a sentential formula.
A propositional formula is constructed from simple propositions, such as "five is greater than three", or propositional variables such as P and Q, using connectives such as NOT, AND, OR, and IMPLIES; for example:

(P AND NOT Q) IMPLIES (P OR Q).

In mathematics, a propositional formula is often more briefly referred to as a "proposition", but, more precisely, a propositional formula is not a proposition but a formal expression that denotes a proposition, a formal object under discussion, just like an expression such as "x + y" is not a value, but denotes a value. In some contexts, maintaining the distinction may be of importance.

206.1 Propositions
For the purposes of the propositional calculus, propositions (utterances, sentences, assertions) are considered to be either simple or compound.[1] Compound propositions are considered to be linked by sentential connectives, some of the most common of which are "AND", "OR", "IF ... THEN ...", "NEITHER ... NOR ...", "... IS EQUIVALENT TO ...". The linking semicolon ";", and connective "BUT" are considered to be expressions of "AND". A sequence of discrete sentences are considered to be linked by "AND"s, and formal analysis applies a recursive "parenthesis rule" with respect to sequences of simple propositions (see more below about well-formed formulas).

For example: The assertion "This cow is blue. That horse is orange but this horse here is purple." is actually a compound proposition linked by "AND"s: ( ("This cow is blue" AND "that horse is orange") AND "this horse here is purple" ).

Simple propositions are declarative in nature, that is, they make assertions about the condition or nature of a particular object of sensation, e.g. "This cow is blue", "There's a coyote!" ("That coyote IS there, behind the rocks.").[2] Thus the simple "primitive" assertions must be about specific objects or specific states of mind. Each must have at least a subject (an immediate object of thought or observation), a verb (in the active voice and present tense preferred), and perhaps an adjective or adverb. "Dog!" probably implies "I see a dog" but should be rejected as too ambiguous.

Example: "That purple dog is running", "This cow is blue", "Switch M31 is closed", "This cap is off", "Tomorrow is Friday."

For the purposes of the propositional calculus a compound proposition can usually be reworded into a series of simple sentences, although the result will probably sound stilted.


206.1.1 Relationship between propositional and predicate formulas


The predicate calculus goes a step further than the propositional calculus to an "analysis of the inner structure of propositions".[3] It breaks a simple sentence down into two parts: (i) its subject (the object (singular or plural) of discourse) and (ii) a predicate (a verb or possibly verb-clause that asserts a quality or attribute of the object(s)). The predicate calculus then generalizes the "subject|predicate" form (where | symbolizes concatenation (stringing together) of symbols) into a form with the following blank-subject structure "___|predicate", and the predicate in turn generalized to all things with that property.

Example: "This blue pig has wings" becomes two sentences in the propositional calculus: "This pig has wings" AND "This pig is blue", whose internal structure is not considered. In contrast, in the predicate calculus, the first sentence breaks into "this pig" as the subject, and "has wings" as the predicate. Thus it asserts that object "this pig" is a member of the class (set, collection) of "winged things". The second sentence asserts that object "this pig" has an attribute "blue" and thus is a member of the class of "blue things". One might choose to write the two sentences connected with AND as:

p|W AND p|B

The generalization of "this pig" to a (potential) member of two classes "winged things" and "blue things" means that it has a truth-relationship with both of these classes. In other words, given a domain of discourse "winged things", either we find p to be a member of this domain or not. Thus we have a relationship W (wingedness) between p (pig) and { T, F }: W(p) evaluates to { T, F }. Likewise for B (blueness) and p (pig) and { T, F }: B(p) evaluates to { T, F }. So we now can analyze the connected assertions "B(p) AND W(p)" for their overall truth-value, i.e.:

( B(p) AND W(p) ) evaluates to { T, F }

In particular, simple sentences that employ notions of "all", "some", "a few", "one of", etc. are treated by the predicate calculus. Along with the new function symbolism "F(x)" two new symbols are introduced: ∀ (For all), and ∃ (There exists ..., At least one of ... exists, etc.). The predicate calculus, but not the propositional calculus, can establish the formal validity of the following statement:

"All blue pigs have wings but some pigs have no wings, hence some pigs are not blue."

206.1.2 Identity
Tarski asserts that the notion of IDENTITY (as distinguished from LOGICAL EQUIVALENCE) lies outside the
propositional calculus; however, he notes that if a logic is to be of use for mathematics and the sciences it must contain
a theory of IDENTITY.[4] Some authors refer to predicate logic with identity to emphasize this extension. See
more about this below.

206.2 An algebra of propositions, the propositional calculus


An algebra (and there are many different ones), loosely defined, is a method by which a collection of symbols called variables together with some other symbols such as parentheses (, ) and some sub-set of symbols such as *, +, ~, &, ∨, =, ≡, ¬ are manipulated within a system of rules. These symbols, and well-formed strings of them, are said to represent objects, but in a specific algebraic system these objects do not have meanings. Thus work inside the algebra becomes an exercise in obeying certain laws (rules) of the algebra's syntax (symbol-formation) rather than in semantics (meaning) of the symbols. The meanings are to be found outside the algebra.
For a well-formed sequence of symbols in the algebra (a formula) to have some usefulness outside the algebra the symbols are assigned meanings and eventually the variables are assigned values; then by a series of rules the formula is evaluated.
When the values are restricted to just two and applied to the notion of simple sentences (e.g. spoken utterances or written assertions) linked by propositional connectives this whole algebraic system of symbols and rules and evaluation-methods is usually called the propositional calculus or the sentential calculus.

While some of the familiar rules of arithmetic algebra continue to hold in the algebra of propositions (e.g. the
commutative and associative laws for AND and OR), some do not (e.g. the distributive laws for AND, OR and
NOT).

206.2.1 Usefulness of propositional formulas

Analysis: In deductive reasoning, philosophers, rhetoricians and mathematicians reduce arguments to formulas and then study them (usually with truth tables) for correctness (soundness). For example: Is the following argument sound?

"Given that consciousness is sufficient for an artificial intelligence and only conscious entities can pass the Turing test, before we can conclude that a robot is an artificial intelligence the robot must pass the Turing test."

Engineers analyze the logic circuits they have designed using synthesis techniques and then apply various reduction and minimization techniques to simplify their designs.
Synthesis: Engineers in particular synthesize propositional formulas (that eventually end up as circuits of symbols) from truth tables. For example, one might write down a truth table for how binary addition should behave given the addition of variables b and a and "carry_in" ci, and the results "carry_out" co and "sum" Σ:

Example: in row 5, ( (b+a) + ci ) = ( (1+0) + 1 ) = the number 2. Written as a binary number this is 10₂, where "co"=1 and "Σ"=0 as shown in the right-most columns.

206.2.2 Propositional variables

The simplest type of propositional formula is a propositional variable. Propositions that are simple (atomic), symbolic expressions are often denoted by variables named a, b, or A, B, etc. A propositional variable is intended to represent an atomic proposition (assertion), such as "It is Saturday" = a (here the symbol = means "... is assigned the variable named ...") or "I only go to the movies on Monday" = b.

206.2.3 Truth-value assignments, formula evaluations

Evaluation of a propositional formula begins with assignment of a truth value to each variable. Because each variable represents a simple sentence, the truth values are being applied to the "truth" or "falsity" of these simple sentences.
Truth values in rhetoric, philosophy and mathematics: The truth values are only two: { TRUTH "T", FALSITY "F" }. An empiricist puts all propositions into two broad classes: analytic (true no matter what, e.g. tautology) and synthetic (derived from experience and thereby susceptible to confirmation by third parties: the verification theory of meaning).[5] Empiricists hold that, in general, to arrive at the truth-value of a synthetic proposition, meanings (pattern-matching templates) must first be applied to the words, and then these meaning-templates must be matched against whatever it is that is being asserted. For example, my utterance "That cow is blue!" Is this statement a TRUTH? Truly I said it. And maybe I am seeing a blue cow; unless I am lying, my statement is a TRUTH relative to the object of my (perhaps flawed) perception. But is the blue cow "really there"? What do you see when you look out the same window? In order to proceed with a verification, you will need a prior notion (a template) of both "cow" and "blue", and an ability to match the templates against the object of sensation (if indeed there is one).
Truth values in engineering: Engineers try to avoid notions of truth and falsity that bedevil philosophers, but in the final analysis engineers must trust their measuring instruments. In their quest for robustness, engineers prefer to pull known objects from a small library: objects that have well-defined, predictable behaviors even in large combinations (hence their name for the propositional calculus: "combinatorial logic"). The fewest behaviors of a single object are two (e.g. { OFF, ON }, { open, shut }, { UP, DOWN } etc.), and these are put in correspondence with { 0, 1 }. Such elements are called digital; those with a continuous range of behaviors are called analog. Whenever decisions must be made in an analog system, quite often an engineer will convert an analog behavior (the door is 45.32146% UP) to digital (e.g. DOWN=0) by use of a comparator.[6]

Thus an assignment of meaning of the variables and the two value-symbols { 0, 1 } comes from "outside" the formula that represents the behavior of the (usually) compound object. An example is a garage door with two "limit switches", one for UP labelled SW_U and one for DOWN labelled SW_D, and whatever else is in the door's circuitry. Inspection of the circuit (either the diagram or the actual objects themselves: door, switches, wires, circuit board, etc.) might reveal that, on the circuit board "node 22" goes to +0 volts when the contacts of switch SW_D are mechanically in contact ("closed") and the door is in the "down" position (95% down), and "node 29" goes to +0 volts when the door is 95% UP and the contacts of switch SW_U are in mechanical contact ("closed").[7] The engineer must define the meanings of these voltages and all possible combinations (all 4 of them), including the "bad" ones (e.g. both nodes 22 and 29 at 0 volts, meaning that the door is open and closed at the same time). The circuit mindlessly responds to whatever voltages it experiences without any awareness of TRUTH or FALSEHOOD, RIGHT or WRONG, SAFE or DANGEROUS.

206.3 Propositional connectives


Arbitrary propositional formulas are built from propositional variables and other propositional formulas using propositional
connectives. Examples of connectives include:

The unary negation connective ¬. If α is a formula, then ¬α is a formula.

The classical binary connectives ∧, ∨, →, ↔. Thus, for example, if α and β are formulas, so is (α → β).

Other binary connectives, such as NAND, NOR, and XOR

The ternary connective IF ... THEN ... ELSE ...

Constant 0-ary connectives ⊤ and ⊥ (alternately, constants { T, F }, { 1, 0 } etc.)

The theory-extension connective EQUALS (alternately, IDENTITY, or the sign " = " as distinguished from the logical connective ↔)

206.3.1 Connectives of rhetoric, philosophy and mathematics

The following are the connectives common to rhetoric, philosophy and mathematics together with their truth tables.
The symbols used will vary from author to author and between elds of endeavor. In general the abbreviations T
and F stand for the evaluations TRUTH and FALSITY applied to the variables in the propositional formula (e.g.
the assertion: That cow is blue will have the truth-value T for Truth or F for Falsity, as the case may be.).
The connectives go by a number of dierent word-usages, e.g. a IMPLIES b is also said IF a THEN b. Some of
these are shown in the table.

206.3.2 Engineering connectives

In general, the engineering connectives are just the same as the mathematics connectives excepting they tend to
evaluate with 1 = T and 0 = F. This is done for the purposes of analysis/minimization and synthesis of
formulas by use of the notion of minterms and Karnaugh maps (see below). Engineers also use the words logical
product from Boole's notion (a*a = a) and logical sum from Jevons' notion (a+a = a).[8]

206.3.3 CASE connective: IF ... THEN ... ELSE ...

The IF ... THEN ... ELSE ... connective appears as the simplest form of CASE operator of recursion theory
and computation theory and is the connective responsible for conditional gotos (jumps, branches). From this one
connective all other connectives can be constructed (see more below). Although " IF c THEN b ELSE a " sounds
like an implication it is, in its most reduced form, a switch that makes a decision and oers as outcome only one of
two alternatives a or b (hence the name switch statement in the C programming language).[9]
The following three propositions are equivalent (as indicated by the logical equivalence sign ≡):

Engineering symbols have varied over the years, but these are commonplace. Sometimes they appear simply as boxes with symbols
in them. a and b are called the inputs and c is called the output. An output will typically connect to an input (unless it is the
nal connective); this accomplishes the mathematical notion of substitution.

1. ( IF 'counter is zero' THEN 'go to instruction b' ELSE 'go to instruction a' )
2. ( (c → b) & (~c → a) ) ≡ " ( IF 'counter is zero' THEN 'go to instruction b' ) AND ( IF 'it is NOT the case that counter is zero' THEN 'go to instruction a' ) "
3. ( (c & b) ∨ (~c & a) ) ≡ " ( 'counter is zero' AND 'go to instruction b' ) OR ( 'it is NOT the case that counter is zero' AND 'go to instruction a' ) "

Thus IF ... THEN ... ELSE, unlike implication, does not evaluate to an ambiguous TRUTH when the first proposition is false, i.e. when c = F in (c → b). For example, most people would reject the following compound proposition
as a nonsensical non sequitur because the second sentence is not connected in meaning to the rst.[10]

Example: The proposition " IF 'Winston Churchill was Chinese' THEN 'The sun rises in the east' " evaluates as a TRUTH given that 'Winston Churchill was Chinese' is a FALSEHOOD and 'The sun rises in the east' evaluates as a TRUTH.

In recognition of this problem, the sign → of formal implication in the propositional calculus is called material implication to distinguish it from the everyday, intuitive implication.[11]
The use of the IF ... THEN ... ELSE construction avoids controversy because it oers a completely deterministic
choice between two stated alternatives; it oers two objects (the two alternatives b and a), and it selects between
them exhaustively and unambiguously.[12] In the truth table below, d1 is the formula: ( (IF c THEN b) AND (IF NOT-c THEN a) ). Its fully reduced form d2 is the formula: ( (c AND b) OR (NOT-c AND a) ). The two formulas are
equivalent as shown by the columns "=d1 and "=d2. Electrical engineers call the fully reduced formula the AND-
OR-SELECT operator. The CASE (or SWITCH) operator is an extension of the same idea to n possible, but mutually
exclusive outcomes. Electrical engineers call the CASE operator a multiplexer.
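A small sketch of these ideas (the function names mux and case are invented for the example): the AND-OR-SELECT form as a Boolean expression, and CASE read as an n-way multiplexer that picks exactly one of its alternatives.

# The AND-OR-SELECT form of IF c THEN b ELSE a.
def mux(c, b, a):
    return (c and b) or ((not c) and a)

# CASE/SWITCH as an n-way multiplexer: the selector picks exactly one alternative.
def case(selector, *alternatives):
    return alternatives[selector]

print(mux(True, 1, 0), mux(False, 1, 0))   # -> 1 0
print(case(2, "a0", "a1", "a2", "a3"))     # -> a2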

206.3.4 IDENTITY and evaluation


The first table of this section stars (***) the entry "logical equivalence" to note the fact that logical equivalence is not the same thing as identity. For example, most would agree that the assertion That cow is blue is identical to the assertion That cow is blue. On the other hand, logical equivalence sometimes appears in speech as in this example:
" 'The sun is shining' means 'I'm biking' " Translated into a propositional formula the words become: IF 'the sun is
shining' THEN 'I'm biking', AND IF 'I'm biking' THEN 'the sun is shining'":[13]

" IF 's' THEN 'b' AND IF 'b' THEN 's' " is written as ((s → b) & (b → s)) or in an abbreviated form as (s ↔ b). As the rightmost symbol string is a definition for a new symbol in terms of the symbols on the left, the use of the IDENTITY sign = is appropriate:

((s → b) & (b → s)) = (s ↔ b)

Different authors use different signs for logical equivalence, among them ↔, ⇔, and ≡ (Suppes, Goodstein, Hamilton, Robbin, and Bender and Williamson variously). Typically identity is written as the equals sign =. One exception to this rule is found
in Principia Mathematica. For more about the philosophy of the notion of IDENTITY see Leibnizs law.
As noted above, Tarski considers IDENTITY to lie outside the propositional calculus, but he asserts that without the notion, logic is insufficient for mathematics and the deductive sciences. In fact the sign = comes into the propositional calculus when a formula is to be evaluated.[14]
In some systems there are no truth tables, but rather just formal axioms (e.g. strings of symbols from a set { ~, →, (, ), variables p1, p2, p3, ... } and formula-formation rules (rules about how to make more symbol strings from previous strings by use of e.g. substitution and modus ponens). The result of such a calculus will be another formula (i.e. a
well-formed symbol string). Eventually, however, if one wants to use the calculus to study notions of validity and
truth, one must add axioms that dene the behavior of the symbols called the truth values {T, F} ( or {1, 0}, etc.)
relative to the other symbols.
For example, Hamilton uses two symbols = and ≠ when he defines the notion of a valuation v of any wffs A and B in his formal statement calculus L. A valuation v is a function from the wffs of his system L to the range (output) { T, F }, given that each variable p1, p2, p3 in a wff is assigned an arbitrary truth value { T, F }.

The two definitions (i) and (ii) define the equivalent of the truth tables for the ~ (NOT) and → (IMPLICATION) connectives of his system. The first one derives F ≠ T and T ≠ F, in other words " v(A) does not mean v(~A) "; definition (ii) specifies the third row in the truth table, and the other three rows then come from an application of definition (i). In particular (ii) assigns the value F (or a meaning of F) to the entire expression. The definitions
also serve as formation rules that allow substitution of a value previously derived into a formula:
Some formal systems specify these valuation axioms at the outset in the form of certain formulas such as the law of contradiction or the laws of identity and nullity. The choice of which ones to use, together with laws such as commutation and distribution, is up to the system's designer as long as the set of axioms is complete (i.e. sufficient to form and to evaluate any well-formed formula created in the system).
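A minimal sketch of such a valuation, assuming formulas are represented as nested Python tuples (a representation invented here purely for illustration, not Hamilton's notation): a variable is a string, ('~', A) is negation, and ('->', A, B) is implication.

def v(wff, assignment):
    # Valuation of a wff built from variables, '~' and '->'.
    # `assignment` maps variable names to True/False; the branches below
    # mirror the truth tables for NOT and IMPLICATION.
    if isinstance(wff, str):                  # a propositional variable
        return assignment[wff]
    if wff[0] == '~':                         # v(~A) is the opposite of v(A)
        return not v(wff[1], assignment)
    if wff[0] == '->':                        # v(A -> B) is F only when v(A)=T and v(B)=F
        return (not v(wff[1], assignment)) or v(wff[2], assignment)
    raise ValueError("unknown connective")

# v( ~p1 -> p2 ) with p1 = False, p2 = False evaluates to False.
print(v(('->', ('~', 'p1'), 'p2'), {'p1': False, 'p2': False}))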

206.4 More complex formulas


As shown above, the CASE (IF c THEN b ELSE a) connective is constructed either from the 2-argument connectives IF...THEN... and AND, or from OR and AND and the 1-argument NOT. Connectives such as the n-argument AND (a & b & c & ... & n) and OR (a ∨ b ∨ c ∨ ... ∨ n) are constructed from strings of two-argument AND and OR and written in abbreviated form without the parentheses. These, and other connectives as well, can then be used as building
blocks for yet further connectives. Rhetoricians, philosophers, and mathematicians use truth tables and the various
theorems to analyze and simplify their formulas.
Electrical engineers use drawn symbols and connect them with lines that stand for the mathematical acts of substitution and replacement. They then verify their drawings with truth tables and simplify the expressions as shown below by use of Karnaugh maps or the theorems. In this way engineers have created a host of combinatorial logic (i.e. connectives without feedback) such as decoders, encoders, multifunction gates, majority logic, binary adders, arithmetic logic units, etc.

206.4.1 Denitions
A definition creates a new symbol and its behavior, often for the purposes of abbreviation. Once the definition is presented, either form of the equivalent symbol or formula can be used. The following symbolism =D follows the convention of Reichenbach.[15] Some examples of convenient definitions drawn from the symbol set { ~, &, (, ) } and variables follow. Each definition produces a logically equivalent formula that can be used for substitution or replacement.

definition of a new variable: (c & d) =D s


OR: ~(~a & ~b) =D (a ∨ b)
IMPLICATION: (~a ∨ b) =D (a → b)
XOR: (~a & b) ∨ (a & ~b) =D (a ⊕ b)
LOGICAL EQUIVALENCE: ( (a → b) & (b → a) ) =D ( a ≡ b )

206.4.2 Axiom and denition schemas


The denitions above for OR, IMPLICATION, XOR, and logical equivalence are actually schemas (or schemata),
that is, they are models (demonstrations, examples) for a general formula format but shown (for illustrative purposes)
with specic letters a, b, c for the variables, whereas any variable letters can go in their places as long as the letter
substitutions follow the rule of substitution below.

Example: In the definition (~a ∨ b) =D (a → b), other variable-symbols such as SW2 and CON1 might be used, i.e. formally:

a =D SW2, b =D CON1, so we would have as an instance of the definition schema: (~SW2 ∨ CON1) =D (SW2 → CON1)

206.4.3 Substitution versus replacement


Substitution: The variable or sub-formula to be substituted with another variable, constant, or sub-formula must be
replaced in all instances throughout the overall formula.

Example: (c & d) ∨ (p & ~(c & ~d)), but (q1 & ~q2) ≡ d. Now wherever variable d occurs, substitute (q1 & ~q2):

(c & (q1 & ~q2)) ∨ (p & ~(c & ~(q1 & ~q2)))

Replacement: (i) the formula to be replaced must be within a tautology, i.e. logically equivalent (connected by ≡ or ↔) to the formula that replaces it, and (ii) unlike substitution it is permissible for the replacement to occur only in one place (i.e. for one formula).

Example: Use this set of formula schemas/equivalences:

1. ( (a ∨ 0) ≡ a ).
2. ( (a & ~a) ≡ 0 ).
3. ( (~a ∨ b) =D (a → b) ).
4. ( ~(~a) ≡ a )

1. start with "a": a

2. Use 1 to replace "a" with (a ∨ 0): (a ∨ 0)
3. Use the notion of "schema" to substitute b for a in 2: ( (b & ~b) ≡ 0 )
4. Use 2 to replace 0 with (b & ~b): ( a ∨ (b & ~b) )
5. (see below for how to distribute " a ∨ " over (b & ~b), etc.)

206.5 Inductive denition


The classical presentation of propositional logic (see Enderton 2002) uses the connectives ¬, ∧, ∨, →, ↔. The set
of formulas over a given set of propositional variables is inductively dened to be the smallest set of expressions such
that:

Each propositional variable in the set is a formula,


(¬φ) is a formula whenever φ is, and
(φ ∗ ψ) is a formula whenever φ and ψ are formulas and ∗ is one of the binary connectives ∧, ∨, →, ↔.

This inductive denition can be easily extended to cover additional connectives.


The inductive definition can also be rephrased in terms of a closure operation (Enderton 2002). Let V denote a set of propositional variables and let X_V denote the set of all strings from an alphabet including symbols in V, left and right parentheses, and all the logical connectives under consideration. Each logical connective corresponds to a formula-building operation, a function that takes strings in X_V to strings in X_V:

Given a string z, the operation E¬(z) returns (¬z).


Given strings y and z, the operation E∧(y, z) returns (y ∧ z). There are similar operations E∨, E→, and E↔ corresponding to the other binary connectives.

The set of formulas over V is defined to be the smallest subset of X_V containing V and closed under all the formula-building operations.
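The formula-building operations can be sketched directly as string operations (the names E_neg and E_and below simply transliterate the E¬ and E∧ notation above; this is only an illustration of the closure idea, not a full implementation):

def E_neg(z):
    # Formula-building operation for negation: given a string z, return (¬z).
    return "(¬" + z + ")"

def E_and(y, z):
    # Formula-building operation for conjunction: given strings y and z, return (y ∧ z).
    return "(" + y + " ∧ " + z + ")"

# Starting from the variables and closing under the operations yields formulas:
p, q = "p", "q"
print(E_and(E_neg(p), q))   # -> ((¬p) ∧ q)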

206.6 Parsing formulas


The following laws of the propositional calculus are used to reduce complex formulas. The laws can be easily
veried with truth tables. For each law, the principal (outermost) connective is associated with logical equivalence
or identity =. A complete analysis of all 2^n combinations of truth-values for its n distinct variables will result in
a column of 1s (Ts) underneath this connective. This nding makes each law, by denition, a tautology. And, for
a given law, because its formula on the left and right are equivalent (or identical) they can be substituted for one
another.

Example: The following truth table is De Morgan's law for the behavior of NOT over OR: ~(a ∨ b) ≡ (~a & ~b). To the left of the principal connective (yellow column labelled taut) the formula ~(b ∨ a) evaluates to (1, 0, 0, 0) under the label P. On the right of taut the formula (~(b) & ~(a)) also evaluates to (1, 0, 0, 0) under the label Q. As the two columns have equivalent evaluations, the logical equivalence under taut evaluates to (1, 1, 1, 1), i.e. P ≡ Q. Thus either formula can be substituted for the other if it appears in a larger formula.
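Such truth-table checks are mechanical, so one way to picture them is as a small brute-force routine (a sketch only; the helper name and the lambda encoding of formulas are choices made for this example, not anything from the article): enumerate all 2^n assignments and compare the two formulas row by row.

from itertools import product

def equivalent(f, g, n):
    # True when formulas f and g (functions of n Boolean arguments)
    # evaluate identically on all 2**n rows of the truth table.
    return all(f(*row) == g(*row) for row in product([False, True], repeat=n))

# De Morgan's law for OR: ~(a ∨ b) ≡ (~a & ~b)
print(equivalent(lambda a, b: not (a or b),
                 lambda a, b: (not a) and (not b), 2))   # -> True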

Enterprising readers might challenge themselves to invent an axiomatic system that uses the symbols { ∨, &, ~, (, ), variables a, b, c }, the formation rules specified above, and as few as possible of the laws listed below, and then derive as theorems the others as well as the truth-table valuations for ∨, &, and ~. One set attributed to Huntington (1904) (Suppes:204) uses eight of the laws defined below.
Note that if used in an axiomatic system, the symbols 1 and 0 (or T and F) are considered to be wffs and thus obey all the same rules as the variables. Thus the laws listed below are actually axiom schemas, that is, they stand in place of an infinite number of instances. Thus ( x ∨ y ) ≡ ( y ∨ x ) might be used in one instance as ( p ∨ 0 ) ≡ ( 0 ∨ p ) and in another instance as ( 1 ∨ q ) ≡ ( q ∨ 1 ), etc.

206.6.1 Connective seniority (symbol rank)


In general, to avoid confusion during analysis and evaluation of propositional formulas, make liberal use of parentheses. However, quite often authors leave them out. To parse a complicated formula one first needs to know the seniority, or rank, that each of the connectives (excepting *) has over the other connectives. To well-form a formula, start with the connective with the highest rank and add parentheses around its components, then move down in rank (paying close attention to the connective's scope, over which it is working). From most- to least-senior, with the predicate signs ∀x and ∃x, the IDENTITY = and arithmetic signs added for completeness:[16]

≡ (LOGICAL EQUIVALENCE)
→ (IMPLICATION)
& (AND)
∨ (OR)
~ (NOT)
∀x (FOR ALL x)
∃x (THERE EXISTS AN x)
= (IDENTITY)
+ (arithmetic sum)
* (arithmetic multiply)
' (s, arithmetic successor).

Thus the formula can be parsed. But note that, because NOT does not obey the distributive law, the parentheses around the inner formula (~c & ~d) are mandatory:

Example: " d & c ∨ w " rewritten is ( (d & c) ∨ w )


Example: " a & a → b ≡ a & ~a ∨ b " rewritten (rigorously) is

≡ has seniority: ( ( a & a → b ) ≡ ( a & ~a ∨ b ) )

→ has seniority: ( ( a & (a → b) ) ≡ ( a & ~a ∨ b ) )
& has seniority both sides: ( ( (a) & (a → b) ) ) ≡ ( ( (a) & (~a ∨ b) ) )
~ has seniority: ( ( (a) & (a → b) ) ) ≡ ( ( (a) & (~(a) ∨ b) ) )
check 9 ( -parentheses and 9 ) -parentheses: ( ( (a) & (a → b) ) ) ≡ ( ( (a) & (~(a) ∨ b) ) )

Example:

d & c ∨ p & ~(c & ~d) ≡ c & d ∨ p & c ∨ p & ~d rewritten is ( ( (d & c) ∨ ( p & ~(c & ~(d)) ) ) ≡ ( (c & d) ∨ (p & c) ∨ (p & ~(d)) ) )

206.6.2 Commutative and associative laws

Both AND and OR obey the commutative law and associative law:

Commutative law for OR: ( a ∨ b ) ≡ ( b ∨ a )

Commutative law for AND: ( a & b ) ≡ ( b & a )

Associative law for OR: (( a ∨ b ) ∨ c ) ≡ ( a ∨ (b ∨ c) )

Associative law for AND: (( a & b ) & c ) ≡ ( a & (b & c) )

Omitting parentheses in strings of AND and OR: The connectives are considered to be unary (one-variable, e.g.
NOT) and binary (i.e. two-variable AND, OR, IMPLIES). For example:

( (c & d) ∨ (p & c) ∨ (p & ~d) ) above should be written ( ((c & d) ∨ (p & c)) ∨ (p & ~(d)) ) or possibly ( (c & d) ∨ ( (p & c) ∨ (p & ~(d)) ) )

However, a truth-table demonstration shows that the form without the extra parentheses is perfectly adequate.
Omitting parentheses with regards to a single-variable NOT: While ~(a) where a is a single variable is perfectly
clear, ~a is adequate and is the usual way this literal would appear. When the NOT is over a formula with more than
one symbol, then the parentheses are mandatory, e.g. ~(a ∨ b).

206.6.3 Distributive laws

OR distributes over AND and AND distributes over OR. NOT does not distribute over AND or OR. See below about
De Morgans law:

Distributive law for OR: ( c ∨ ( a & b) ) ≡ ( (c ∨ a) & (c ∨ b) )

Distributive law for AND: ( c & ( a ∨ b) ) ≡ ( (c & a) ∨ (c & b) )

206.6.4 De Morgans laws

NOT, when distributed over OR or AND, does something peculiar (again, these can be veried with a truth-table):

De Morgan's law for OR: ~(a ∨ b) ≡ (~a & ~b)

De Morgan's law for AND: ~(a & b) ≡ (~a ∨ ~b)

206.6.5 Laws of absorption

Absorption, in particular the rst one, causes the laws of logic to dier from the laws of arithmetic:

Absorption (idempotency) for OR: (a ∨ a) ≡ a

Absorption (idempotency) for AND: (a & a) ≡ a

206.6.6 Laws of evaluation: Identity, nullity, and complement

The sign " = " (as distinguished from logical equivalence ≡, alternately ↔ or ⇔) symbolizes the assignment of value or meaning. Thus the string (a & ~(a)) symbolizes 0, i.e. it means the same thing as the symbol 0. In some systems this will be an axiom (definition), perhaps shown as ( (a & ~(a)) =D 0 ); in other systems, it may be derived in the truth table below:

Commutation of equality: (a = b) ≡ (b = a)

Identity for OR: (a ∨ 0) = a or (a ∨ F) = a

Identity for AND: (a & 1) = a or (a & T) = a

Nullity for OR: (a ∨ 1) = 1 or (a ∨ T) = T

Nullity for AND: (a & 0) = 0 or (a & F) = F

Complement for OR: (a ∨ ~a) = 1 or (a ∨ ~a) = T, law of excluded middle

Complement for AND: (a & ~a) = 0 or (a & ~a) = F, law of contradiction

206.6.7 Double negative (involution)

~(~a) = a

206.7 Well-formed formulas (wffs)

A key property of formulas is that they can be uniquely parsed to determine the structure of the formula in terms
of its propositional variables and logical connectives. When formulas are written in inx notation, as above, unique
readability is ensured through an appropriate use of parentheses in the denition of formulas. Alternatively, formulas
can be written in Polish notation or reverse Polish notation, eliminating the need for parentheses altogether.
The inductive denition of inx formulas in the previous section can be converted to a formal grammar in Backus-
Naur form:
<formula> ::= <propositional variable> | ( ¬ <formula> ) | ( <formula> ∧ <formula> ) | ( <formula> ∨ <formula> ) | ( <formula> → <formula> ) | ( <formula> ↔ <formula> )

It can be shown that any expression matched by the grammar has a balanced number of left and right parentheses, and
any nonempty initial segment of a formula has more left than right parentheses.[17] This fact can be used to give an
algorithm for parsing formulas. For example, suppose that an expression x begins with ( . Starting after the second
symbol, match the shortest subexpression y of x that has balanced parentheses. If x is a formula, there is exactly one
symbol left after this expression, this symbol is a closing parenthesis, and y itself is a formula. This idea can be used
to generate a recursive descent parser for formulas.
Example of parenthesis counting:
This method locates as 1 the principal connective, the connective under which the overall evaluation of the formula occurs, i.e. the one with the outer-most parentheses (which are often omitted).[18] It also locates the inner-most connective where one would begin evaluation of the formula without the use of a truth table, e.g. at level 6.
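In the same spirit, a compact recursive-descent parser for the parenthesized grammar above might look like the following sketch (it assumes, purely for brevity, that variables are single letters):

CONNECTIVES = "∧∨→↔"

def parse(s, i=0):
    # Recursive-descent parser; returns (tree, next_index).
    if s[i].isalpha():                      # <propositional variable>
        return s[i], i + 1
    if s[i] != '(':
        raise SyntaxError("expected a variable or '('")
    if s[i + 1] == '¬':                     # ( ¬ <formula> )
        sub, j = parse(s, i + 2)
        assert s[j] == ')'
        return ('¬', sub), j + 1
    left, j = parse(s, i + 1)               # ( <formula> connective <formula> )
    op = s[j]
    assert op in CONNECTIVES
    right, k = parse(s, j + 1)
    assert s[k] == ')'
    return (op, left, right), k + 1

tree, _ = parse("((¬a)∨(b∧c))")
print(tree)   # -> ('∨', ('¬', 'a'), ('∧', 'b', 'c'))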

206.7.1 Wffs versus valid formulas in inferences

The notion of valid argument is usually applied to inferences in arguments, but arguments reduce to propositional
formulas and can be evaluated the same as any other propositional formula. Here a valid inference means: The
formula that represents the inference evaluates to truth beneath its principal connective, no matter what truth-
values are assigned to its variables, i.e. the formula is a tautology.[19] Quite possibly a formula will be well-formed
but not valid. Another way of saying this is: Being well-formed is necessary for a formula to be valid but it is not
sucient. The only way to nd out if it is both well-formed and valid is to submit it to verication with a truth table
or by use of the laws":

Example 1: What does one make of the following difficult-to-follow assertion? Is it valid? "If it's sunny, but if the frog is croaking then it's not sunny, then it's the same as saying that the frog isn't croaking." Convert this to a propositional formula as follows:

" 'a AND (IF b THEN NOT-a)' IS THE SAME AS SAYING 'NOT-b' " where " a " represents "it's sunny" and " b " represents "the frog is croaking":
( ( (a) & ( (b) → ~(a) ) ) ≡ ~(b) )
This is well-formed, but is it valid? In other words, when evaluated will this yield a tautology (all T) beneath the logical-equivalence symbol ≡? The answer is NO, it is not valid. However, if reconstructed as an implication then the argument is valid.
Saying "it's sunny, but if the frog is croaking then it's not sunny" implies that "the frog isn't croaking". Other circumstances may be preventing the frog from croaking: perhaps a crane ate it.

Example 2 (from Reichenbach via Bertrand Russell):

If pigs have wings, some winged animals are good to eat. Some winged animals are good to eat,
so pigs have wings.
( ( ((a) → (b)) & (b) ) → (a) ) is well formed, but an invalid argument as shown by the red evaluation under the principal implication:

The engineering symbol for the NAND connective (the 'stroke') can be used to build any propositional formula. The notion that truth
(1) and falsity (0) can be dened in terms of this connective is shown in the sequence of NANDs on the left, and the derivations of
the four evaluations of a NAND b are shown along the bottom. The more common method is to use the denition of the NAND from
the truth table.

206.8 Reduced sets of connectives


A set of logical connectives is called complete if every propositional formula is tautologically equivalent to a formula with just the connectives in that set. There are many complete sets of connectives, including {∧, ¬}, {∨, ¬}, and {→, ¬}. There are two binary connectives that are complete on their own, corresponding to NAND and NOR, respectively.[20] Some pairs are not complete, for example {∧, ∨}.

206.8.1 The stroke (NAND)

The binary connective corresponding to NAND is called the Sheffer stroke, and is written with a vertical bar | or vertical arrow ↑. The completeness of this connective was noted in Principia Mathematica (1927:xvii). Since it is complete on its own, all other connectives can be expressed using only the stroke. For example, where the symbol " ≡ " represents logical equivalence:

~p ≡ p|p
p → q ≡ p|~q
p ∨ q ≡ ~p|~q
p & q ≡ ~(p|q)

In particular, the zero-ary connectives ⊤ (representing truth) and ⊥ (representing falsity) can be expressed using the stroke:

⊤ ≡ (a|(a|a))

⊥ ≡ (⊤|⊤)
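These reductions can be sketched as ordinary Boolean functions (a sketch only; the upper-case names are invented for the example), each one built solely from the stroke:

def nand(p, q):
    # The Sheffer stroke: p | q is False exactly when both inputs are True.
    return not (p and q)

def NOT(p):        return nand(p, p)              # ~p ≡ p|p
def IMPLIES(p, q): return nand(p, NOT(q))         # p → q ≡ p|~q
def OR(p, q):      return nand(NOT(p), NOT(q))    # p ∨ q ≡ ~p|~q
def AND(p, q):     return NOT(nand(p, q))         # p & q ≡ ~(p|q)

print(OR(False, True), AND(True, True), IMPLIES(True, False))   # -> True True False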

206.8.2 IF ... THEN ... ELSE

This connective together with { 0, 1 } (or { F, T } or { ⊥, ⊤ }) forms a complete set. In the following the IF...THEN...ELSE relation (c, b, a) = d represents ( (c → b) & (~c → a) ) ≡ ( (c & b) ∨ (~c & a) ) = d

(c, b, a):
(c, 0, 1) ≡ ~c
(c, b, 1) ≡ (c → b)
(c, c, a) ≡ (c ∨ a)
(c, b, c) ≡ (c & b)

Example: The following shows how a theorem-based proof of "(c, b, 1) ≡ (c → b)" would proceed; below the proof is its truth-table verification. (Note: (c → b) is defined to be (~c ∨ b) ):

Begin with the reduced form: ( (c & b) ∨ (~c & a) )


Substitute 1 for a: ( (c & b) ∨ (~c & 1) )
Identity (~c & 1) = ~c: ( (c & b) ∨ (~c) )
Law of commutation for ∨: ( (~c) ∨ (c & b) )
Distribute " ~c ∨ " over (c & b): ( ((~c) ∨ c ) & ((~c) ∨ b ) )
Law of excluded middle ( ((~c) ∨ c ) = 1 ): ( (1) & ((~c) ∨ b ) )
Distribute " (1) & " over ((~c) ∨ b): ( ((1) & (~c)) ∨ ((1) & b ) )
Commutativity and Identity ( (1 & ~c) = (~c & 1) = ~c, and (1 & b) ≡ (b & 1) ≡ b ): ( ~c ∨ b )
( ~c ∨ b ) is defined as c → b.  Q. E. D.

In the following truth table the column labelled taut for tautology evaluates logical equivalence (symbolized here by ≡) between the two columns labelled d. Because all four rows under taut are 1s, the equivalence indeed represents a tautology.

206.9 Normal forms


An arbitrary propositional formula may have a very complicated structure. It is often convenient to work with formulas
that have simpler forms, known as normal forms. Some common normal forms include conjunctive normal form
and disjunctive normal form. Any propositional formula can be reduced to its conjunctive or disjunctive normal form.

206.9.1 Reduction to normal form


Reduction to normal form is relatively simple once a truth table for the formula is prepared. But further attempts to minimize the number of literals (see below) require some tools: reduction by De Morgan's laws and truth tables can be unwieldy, but Karnaugh maps are very suitable for a small number of variables (5 or less). Some sophisticated
tabular methods exist for more complex circuits with multiple outputs but these are beyond the scope of this article;
for more see QuineMcCluskey algorithm.

Literal, term and alterm

In electrical engineering a variable x or its negation ~(x) is lumped together into a single notion called a literal. A
string of literals connected by ANDs is called a term. A string of literals connected by OR is called an alterm.
Typically the literal ~(x) is abbreviated ~x. Sometimes the &-symbol is omitted altogether in the manner of algebraic
multiplication.

Examples
1. a, b, c, d are variables. ((( a & ~(b) ) & ~(c)) & d) is a term. This can be abbreviated as (a & ~b & ~c &
d), or a~b~cd.
2. p, q, r, s are variables. ((( p ∨ ~(q) ) ∨ r ) ∨ ~(s) ) is an alterm. This can be abbreviated as (p ∨ ~q ∨ r ∨ ~s).

Minterms

In the same way that a 2^n-row truth table displays the evaluation of a propositional formula for all 2^n possible values of its variables, n variables produce a 2^n-square Karnaugh map (even though we cannot draw it in its full-dimensional realization). For example, 3 variables produce 2^3 = 8 rows and 8 Karnaugh squares; 4 variables produce 16 truth-table rows and 16 squares and therefore 16 minterms. Each Karnaugh-map square and its corresponding truth-table evaluation represents one minterm.
Any propositional formula can be reduced to the logical sum (OR) of the active (i.e. 1"- or T"-valued) minterms.
When in this form the formula is said to be in disjunctive normal form. But even though it is in this form, it is not
necessarily minimized with respect to either the number of terms or the number of literals.
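As a sketch of how the disjunctive normal form can be read off mechanically (the helper name, variable names and the example formula are all chosen for illustration): every row of the truth table on which the formula evaluates to 1 contributes one minterm, and the result is their logical sum.

from itertools import product

def dnf_from_truth_table(f, names):
    # Return the OR of the active (1-valued) minterms of f as a string.
    minterms = []
    for row in product([0, 1], repeat=len(names)):
        if f(*row):
            literals = [n if v else "~" + n for n, v in zip(names, row)]
            minterms.append("(" + " & ".join(literals) + ")")
    return " ∨ ".join(minterms)

# Example formula in three variables c, b, a:
print(dnf_from_truth_table(lambda c, b, a: (c and b) or (not c and a), ["c", "b", "a"]))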
In the following table, observe the peculiar numbering of the rows: (0, 1, 3, 2, 6, 7, 5, 4, 0). The first column is the decimal equivalent of the binary equivalent of the digits cba, in other words:

Example
cba (base 2) = c*2^2 + b*2^1 + a*2^0:
cba = (c=1, b=0, a=1) = 101 (base 2) = 1*2^2 + 0*2^1 + 1*2^0 = 5 (base 10)

This numbering comes about because as one moves down the table from row to row only one variable at a time changes its value. Gray code is derived from this notion. This notion can be extended to three- and four-dimensional hypercubes called Hasse diagrams, where each corner's variables change only one at a time as one moves around the edges of the cube. Hasse diagrams (hypercubes) flattened into two dimensions are either Veitch diagrams or Karnaugh maps (these are virtually the same thing).
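A short sketch of one standard way to list the n-bit Gray code (the reflect-and-prefix construction, a well-known recipe rather than anything specific to this article); for n = 3 it reproduces exactly the row ordering 0, 1, 3, 2, 6, 7, 5, 4 mentioned above.

def gray_code(n):
    # Return the list of n-bit Gray-code strings, one bit changing per step.
    codes = [""]
    for _ in range(n):
        # Prefix 0 to the current list and 1 to its mirror image.
        codes = ["0" + c for c in codes] + ["1" + c for c in reversed(codes)]
    return codes

print(gray_code(3))
# -> ['000', '001', '011', '010', '110', '111', '101', '100']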
When working with Karnaugh maps one must always keep in mind that the top edge wraps around to the bottom edge, and the left edge wraps around to the right edge; the Karnaugh diagram is really a three- or four- or n-dimensional flattened object.

206.9.2 Reduction by use of the map method (Veitch, Karnaugh)


Veitch improved the notion of Venn diagrams by converting the circles to abutting squares, and Karnaugh simplied
the Veitch diagram by converting the minterms, written in their literal-form (e.g. ~abc~d) into numbers.[21] The
method proceeds as follows:

Produce the formula's truth table

Produce the formula's truth table. Number its rows using the binary equivalents of the variables (usually just sequentially 0 through 2^n - 1) for n variables.

Technically, the propositional function has been reduced to its (unminimized) disjunctive normal form: each row has its minterm expression and these can be OR'd to produce the formula in its (unminimized) disjunctive normal form.

Example: ((c & d) ∨ (p & ~(c & (~d)))) = q in disjunctive normal form is:

( (~p & d & c ) ∨ (p & d & c) ∨ (p & d & ~c) ∨ (p & ~d & ~c) ) = q

However, this formula can be reduced both in the number of terms (from 4 to 3) and in the total count of its literals (12 to 6).

Create the formula's Karnaugh map

Use the values of the formula (e.g. p) found by the truth-table method and place them into their respective (associated) Karnaugh squares (these are numbered per the Gray code convention). If values of d for don't care appear in the table, this adds flexibility during the reduction phase.

Reduce minterms

Minterms of adjacent (abutting) 1-squares (T-squares) can be reduced with respect to the number of their literals, and the number of terms also will be reduced in the process. Two abutting squares (2 x 1 horizontal or 1 x 2 vertical,
even the edges represent abutting squares) lose one literal, four squares in a 4 x 1 rectangle (horizontal or vertical)
or 2 x 2 square (even the four corners represent abutting squares) lose two literals, eight squares in a rectangle lose 3
literals, etc. (One seeks out the largest square or rectangles and ignores the smaller squares or rectangles contained
totally within it. ) This process continues until all abutting squares are accounted for, at which point the propositional
formula is minimized.
For example, squares #3 and #7 abut. These two abutting squares can lose one literal (e.g. p from squares #3 and
#7), four squares in a rectangle or square lose two literals, eight squares in a rectangle lose 3 literals, etc. (One seeks
out the largest square or rectangles.) This process continues until all abutting squares are accounted for, at which
point the propositional formula is said to be minimized.
Example: The map method usually is done by inspection. The following example expands the algebraic method to
show the trick behind the combining of terms on a Karnaugh map:

Minterms #3 and #7 abut, #7 and #6 abut, and #4 and #6 abut (because the table's edges wrap around). So each of these pairs can be reduced.

Observe that by the Idempotency law (A ∨ A) = A, we can create more terms. Then by the associative and distributive laws the variables that are to disappear can be paired, and then made to disappear with the Law of contradiction (x & ~x) = 0. The following uses brackets [ and ] only to keep track of the terms; they have no special significance:

Put the formula in disjunctive normal form with the formula to be reduced:

q = ( (~p & d & c ) ∨ (p & d & c) ∨ (p & d & ~c) ∨ (p & ~d & ~c) ) = ( #3 ∨ #7 ∨ #6 ∨ #4 )

Idempotency (absorption) (A ∨ A) = A:

( #3 ∨ [ #7 ∨ #7 ] ∨ [ #6 ∨ #6 ] ∨ #4 )

Associative law (x ∨ (y ∨ z)) = ( (x ∨ y) ∨ z ):

( [ #3 ∨ #7 ] ∨ [ #7 ∨ #6 ] ∨ [ #6 ∨ #4 ] )
[ (~p & d & c ) ∨ (p & d & c) ] ∨ [ (p & d & c) ∨ (p & d & ~c) ] ∨ [ (p & d & ~c) ∨ (p & ~d & ~c) ].

Distributive law ( x & (y ∨ z) ) = ( (x & y) ∨ (x & z) ):

( [ (d & c) ∨ (~p & p) ] ∨ [ (p & d) ∨ (~c & c) ] ∨ [ (p & ~c) ∨ (d & ~d) ] )

Commutative law and law of contradiction (x & ~x) = (~x & x) = 0:

( [ (d & c) ∨ (0) ] ∨ [ (p & d) ∨ (0) ] ∨ [ (p & ~c) ∨ (0) ] )

Law of identity ( x ∨ 0 ) = x leading to the reduced form of the formula:

q = ( (d & c) ∨ (p & d) ∨ (p & ~c) )

Verify reduction with a truth table
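One way to carry out this check is a brute-force enumeration of all eight assignments of p, d, c (a sketch; the two lambdas simply transcribe the unreduced and reduced formulas from the derivation above):

from itertools import product

original = lambda p, d, c: ((not p and d and c) or (p and d and c)
                            or (p and d and not c) or (p and not d and not c))
reduced  = lambda p, d, c: (d and c) or (p and d) or (p and not c)

# All 2**3 rows of the truth table must agree for the reduction to be correct.
assert all(original(p, d, c) == reduced(p, d, c)
           for p, d, c in product([False, True], repeat=3))
print("reduction verified")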

206.10 Impredicative propositions


Given the following examples-as-denitions, what does one make of the subsequent reasoning:

(1) This sentence is simple. (2) This sentence is complex, and it is conjoined by AND.

Then assign the variable s to the left-most sentence This sentence is simple. Define compound c = not simple = ~s, and assign c = ~s to This sentence is compound; assign j to It [this sentence] is conjoined by AND. The second sentence can be expressed as:

( NOT(s) AND j )

If truth values are to be placed on the sentences c = ~s and j, then all are clearly FALSEHOODS: e.g. This sentence is complex is a FALSEHOOD (it is simple, by definition). So their conjunction (AND) is a falsehood. But when taken in its assembled form, the sentence is a TRUTH.
This is an example of the paradoxes that result from an impredicative definition, that is, when an object m has a property P, but the object m is defined in terms of property P.[22] The best advice for a rhetorician or one involved in deductive analysis is to avoid impredicative definitions but at the same time be on the lookout for them, because they can indeed create paradoxes. Engineers, on the other hand, put them to work in the form of propositional formulas with feedback.

206.11 Propositional formula with feedback


The notion of a propositional formula appearing as one of its own variables requires a formation rule that allows the assignment of the formula to a variable. In general there is no stipulation (in either axiomatic or truth-table systems of objects and relations) that forbids this from happening.[23]
The simplest case occurs when an OR formula becomes one of its own inputs, e.g. p = q. Begin with (p ∨ s) = q, then let p = q. Observe that q's definition depends on itself q as well as on s and the OR connective; this definition of q is thus impredicative. Either of two conditions can result:[24] oscillation or memory.
It helps to think of the formula as a black box. Without knowledge of what is going on inside the formula-"box
from the outside it would appear that the output is no longer a function of the inputs alone. That is, sometimes
one looks at q and sees 0 and other times 1. To avoid this problem one has to know the state (condition) of the
hidden variable p inside the box (i.e. the value of q fed back and assigned to p). When this is known the apparent
inconsistency goes away.
To understand [predict] the behavior of formulas with feedback requires the more sophisticated analysis of sequential
circuits. Propositional formulas with feedback lead, in their simplest form, to state machines; they also lead to
memories in the form of Turing tapes and counter-machine counters. From combinations of these elements one
can build any sort of bounded computational model (e.g. Turing machines, counter machines, register machines,
Macintosh computers, etc.).

206.11.1 Oscillation
In the abstract (ideal) case the simplest oscillating formula is a NOT fed back to itself: ~(~(p=q)) = q. Analysis of
an abstract (ideal) propositional formula in a truth-table reveals an inconsistency for both p=1 and p=0 cases: When
p=1, q=0, this cannot be because p=q; ditto for when p=0 and q=1.
Oscillation with delay: If a delay[25] (ideal or non-ideal) is inserted in the abstract formula between p and q then p will oscillate between 1 and 0: 101010...101... ad infinitum. If either the delay or the NOT is not abstract (i.e. not ideal), the type of analysis to be used will depend upon the exact nature of the objects that make up the oscillator; such things fall outside mathematics and into engineering.
Analysis requires a delay to be inserted and then the loop cut between the delay and the input p. The delay must
be viewed as a kind of proposition that has qd (q-delayed) as output for q as input. This new proposition adds
another column to the truth table. The inconsistency is now between qd and p as shown in red; two stable states
resulting:

206.11.2 Memory
Without delay, inconsistencies must be eliminated from a truth table analysis. With the notion of delay, this con-
dition presents itself as a momentary inconsistency between the fed-back output variable q and p = q .
A truth table reveals the rows where inconsistencies occur between p = q at the input and q at the output. After
breaking the feed-back,[26] the truth table construction proceeds in the conventional manner. But afterwards, in
every row the output q is compared to the now-independent input p and any inconsistencies between p and q are noted
(i.e. p=0 together with q=1, or p=1 and q=0); when the line is remade both are rendered impossible by the Law
of contradiction ~(p & ~p). Rows revealing inconsistencies are either considered transient states or just eliminated
as inconsistent and hence impossible.

Once-flip memory

About the simplest memory results when the output of an OR feeds back to one of its inputs, in this case output q feeds back into p. Given that the formula is first evaluated (initialized) with p=0 & q=0, it will flip once when set by s=1. Thereafter, output q will sustain q in the flipped condition (state q=1). This behavior, now time-dependent, is shown by the state diagram to the right of the once-flip.

Flip-flop memory

The next simplest case is the set-reset flip-flop shown below the once-flip. Given that r=0 & s=0 and q=0 at the outset, it is set (s=1) in a manner similar to the once-flip. It however has a provision to reset q=0 when r=1. An additional complication occurs if both set=1 and reset=1. In this formula, set=1 forces the output q=1, so if (s=0 & r=1) the flip-flop will be reset, or if (s=1 & r=0) the flip-flop will be set. In the abstract (ideal) instance in which s=1 → s=0 and r=1 → r=0 simultaneously, the formula q will be indeterminate (undecidable). Due to delays in real OR, AND and NOT gates the result will be unknown at the outset but thereafter predictable.

Clocked flip-flop memory

The formula known as clocked flip-flop memory (c is the clock and d is the data) is given below. It works as follows: When c = 0 the data d (either 0 or 1) cannot get through to affect output q. When c = 1 the data d gets through and output q follows d's value. When c goes from 1 to 0 the last value of the data remains trapped at output q. As long as c=0, d can change value without causing q to change.

Examples

1. ( ( c & d ) ∨ ( p & ( ~( c & ~( d ) ) ) ) ) = q, but now let p = q:


2. ( ( c & d ) ∨ ( q & ( ~( c & ~( d ) ) ) ) ) = q
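A tiny simulation sketch of formula 2, iterating the feedback by re-evaluating the expression with the previous output fed back in as q (an idealized step-by-step update standing in for the delay discussed earlier; the input sequence is invented for the example):

def clocked_latch(q, c, d):
    # One evaluation of ( (c & d) ∨ ( q & ~(c & ~d) ) ) with the output fed back as q.
    return (c and d) or (q and not (c and not d))

q = False
for c, d in [(0, 1), (1, 1), (0, 0), (0, 1), (1, 0), (0, 1)]:
    q = clocked_latch(q, bool(c), bool(d))
    print("c=%d d=%d -> q=%d" % (c, d, q))
# While c=0 the data d has no effect on q; when c=1 the output q follows d,
# and the last value of d is held after c returns to 0.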

The state diagram is similar in shape to the flip-flop's state diagram, but with different labelling on the transitions.

206.12 Historical development


Bertrand Russell (1912:74) lists three laws of thought that derive from Aristotle: (1) The law of identity: Whatever
is, is., (2) The law of contradiction: Nothing cannot both be and not be, and (3) The law of excluded middle:
Everything must be or not be.

Example: Here O is an expression about an object's BEING or QUALITY:


1. Law of Identity: O = O
2. Law of contradiction: ~(O & ~(O))
3. Law of excluded middle: (O ∨ ~(O))

The use of the word everything in the law of excluded middle renders Russell's expression of this law open to debate. If restricted to an expression about BEING or QUALITY with reference to a finite collection of objects (a finite universe of discourse), the members of which can be investigated one after another for the presence or absence of the assertion, then the law is considered intuitionistically appropriate. Thus an assertion such as: "This object must either BE or NOT BE (in the collection)", or "This object must either have this QUALITY or NOT have this QUALITY (relative to the objects in the collection)" is acceptable. See more at Venn diagram.
Although a propositional calculus originated with Aristotle, the notion of an algebra applied to propositions had to wait until the early 19th century. In an (adverse) reaction to the 2000-year tradition of Aristotle's syllogisms, John Locke's Essay concerning human understanding (1690) used the word semiotics (theory of the use of symbols). By 1826 Richard Whately had critically analyzed the syllogistic logic with a sympathy toward Locke's semiotics. George Bentham's work (1827) resulted in the notion of the quantification of the predicate (nowadays symbolized as ∀, "for all"). A row instigated by William Hamilton over a priority dispute with Augustus De Morgan inspired George Boole to write up his ideas on logic, and to publish them as MAL [Mathematical Analysis of Logic] in 1847 (Grattan-Guinness and Bornet 1997:xxviii).
About his contribution Grattan-Guinness and Bornet comment:

Boole's principal single innovation was [the] law [ x^n = x ] for logic: it stated that the mental acts of choosing the property x and choosing x again and again is the same as choosing x once... As consequence of it he formed the equations x(1-x)=0 and x+(1-x)=1 which for him expressed respectively the law of contradiction and the law of excluded middle (p. xxvii). For Boole 1 was the universe of discourse and 0 was nothing.

Gottlob Frege's massive undertaking (1879) resulted in a formal calculus of propositions, but his symbolism is so daunting that it had little influence excepting on one person: Bertrand Russell. First as the student of Alfred North Whitehead he studied Frege's work and suggested a (famous and notorious) emendation with respect to it (1904) around the problem of an antinomy that he discovered in Frege's treatment (cf. Russell's paradox). Russell's work led to a collaboration with Whitehead that, in the year 1912, produced the first volume of Principia Mathematica (PM). It is here that what we consider modern propositional logic first appeared. In particular, PM introduces NOT and OR and the assertion symbol ⊦ as primitives. In terms of these notions they define IMPLICATION ( def. *1.01: ~p ∨ q ), then AND (def. *3.01: ~(~p ∨ ~q) ), then EQUIVALENCE p ≡ q (*4.01: (p → q) & ( q → p ) ).

Henry M. Sheffer (1921) and Jean Nicod demonstrate that only one connective, the stroke |, is sufficient to express all propositional formulas.
Emil Post (1921) develops the truth-table method of analysis in his Introduction to a general theory of elementary propositions. He notes Nicod's stroke |.
Whitehead and Russell add an introduction to their 1927 re-publication of PM adding, in part, a favorable
treatment of the stroke.

Computation and switching logic:

William Eccles and F. W. Jordan (1919) describe a trigger relay made from a vacuum tube.
George Stibitz (1937) invents the binary adder using mechanical relays. He builds this on his kitchen table.

Example: Given binary bits a and b and carry-in (c_in), their sum and carry-out (c_out) are:
( ( a XOR b ) XOR c_in ) = sum;
( ( a & b ) ∨ ( c_in & ( a XOR b ) ) ) = c_out (see the sketch after this list).

Alan Turing builds a multiplier using relays (1937-1938). He has to hand-wind his own relay coils to do this.
Textbooks about switching circuits appear in early 1950s.
Willard Quine 1952 and 1955, E. W. Veitch 1952, and M. Karnaugh (1953) develop map-methods for simplifying propositional functions.
George H. Mealy (1955) and Edward F. Moore (1956) address the theory of sequential (i.e. switching-circuit)
machines.
E. J. McCluskey and H. Shorr develop a method for simplifying propositional (switching) circuits (1962).
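A sketch of the binary-adder example above as a one-bit full adder (the function name is invented for the example; bits are 0/1 integers, and the carry-out expression is the standard full-adder formula):

def full_adder(a, b, c_in):
    # Returns (sum_bit, carry_out) for bits a, b and carry-in c_in.
    sum_bit = (a ^ b) ^ c_in                 # ( a XOR b ) XOR c_in
    c_out = (a & b) | (c_in & (a ^ b))       # ( a & b ) ∨ ( c_in & ( a XOR b ) )
    return sum_bit, c_out

print(full_adder(1, 1, 0))   # -> (0, 1)
print(full_adder(1, 1, 1))   # -> (1, 1)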

206.13 Footnotes
[1] Hamilton 1978:1

[2] PM p. 91 eschews "the" because they require a clear-cut "object of sensation"; they stipulate the use of "this".

[3] (italics added) Reichenbach p.80.

[4] Tarski p.54-68. Suppes calls IDENTITY a further rule of inference and has a brief development around it; Robbin,
Bender and Williamson, and Goodstein introduce the sign and its usage without comment or explanation. Hamilton p. 37
employs two signs ≠ and = with respect to the valuation of a formula in a formal calculus. Kleene p. 70 and Hamilton p.
52 place it in the predicate calculus, in particular with regards to the arithmetic of natural numbers.

[5] Empiricists eschew the notion of a priori (built-in, born-with) knowledge. Radical reductionists such as John Locke and David Hume held that "every idea must either originate directly in sense experience or else be compounded of ideas thus originating"; quoted from Quine reprinted in 1996 The Emergence of Logical Empiricism, Garland Publishing Inc. http://www.marxists.org/reference/subject/philosophy/works/us/quine.htm

[6] Neural net modelling offers a good mathematical model for a comparator as follows: Given a signal S and a threshold thr, subtract thr from S and substitute this difference d into a sigmoid function: for large gains k, e.g. k=100, 1/( 1 + e^(-k*d) ) = 1/( 1 + e^(-k*(S-thr)) ) ≈ { 0, 1 }. For example, if The door is DOWN means The door is less than 50% of the way
up, then a threshold thr=0.5 corresponding to 0.5*5.0 = +2.50 volts could be applied to a linear measuring-device with
an output of 0 volts when fully closed and +5.0 volts when fully open.

[7] In actuality the digital 1 and 0 are dened over non-overlapping ranges e.g. { 1 = +5/+0.2/1.0 volts, 0 = +0.5/0.2 volts
}. When a value falls outside the dened range(s) the value becomes u -- unknown; e.g. +2.3 would be u.

[8] While the notion of logical product is not so peculiar (e.g. 0*0=0, 0*1=0, 1*0=0, 1*1=1), the notion that 1+1=1 is peculiar; in fact (a "+" b) = (a + (b - a*b)) where "+" is the logical sum but + and - are the true arithmetic counterparts. Occasionally all four notions do appear in a formula: A AND B = 1/2*( A plus B minus ( A XOR B ) ) (cf p. 146 in John Wakerly 1978, Error Detecting Codes, Self-Checking Circuits and Applications, North-Holland, New York, ISBN 0-444-00259-6 pbk.)

[9] A careful look at its Karnaugh map shows that IF...THEN...ELSE can also be expressed, in a rather round-about way, in
terms of two exclusive-ORs: ( (b AND (c XOR a)) OR (a AND (c XOR b)) ) = d.

[10] Robbin p. 3.

[11] Rosenbloom p. 30 and p. 54 discusses this problem of implication at some length. Most philosophers and mathematicians
just accept the material denition as given above. But some do not, including the intuitionists; they consider it a form of
the law of excluded middle misapplied.

[12] Indeed, exhaustive selection between alternatives -- mutual exclusion -- is required by the definition that Kleene gives the CASE operator (Kleene 1952:229)

[13] The use of quote marks around the expressions is not accidental. Tarski comments on the use of quotes in his 18. Identity
of things and identity of their designations; use of quotation marks p. 58.

[14] Hamilton p. 37. Bender and Williamson p. 29 state In what follows, we'll replace equals with the symbol " "
(equivalence) which is usually used in logic. We use the more familiar " = " for assigning meaning and values.

[15] Reichenbach p. 20-22 follows the conventions of PM. The symbol =D is in the metalanguage and is not a formal symbol; it has the following meaning: "the symbol ' s ' is to have the same meaning as the formula '(c & d)' ".

[16] Rosenbloom 1950:32. Kleene 1952:73-74 ranks all 11 symbols.

[17] cf Minsky 1967:75, section 4.2.3 The method of parenthesis counting. Minsky presents a state machine that will do the
job, and by use of induction (recursive denition) Minsky proves the method and presents a theorem as the result. A
fully generalized parenthesis grammar requires an innite state machine (e.g. a Turing machine) to do the counting.

[18] Robbin p. 7

[19] cf Reichenbach p. 68 for a more involved discussion: If the inference is valid and the premises are true, the inference is
called conclusive.

[20] As well as the first three, Hamilton pp. 19-22 discusses logics built from only | (NAND), and from only ↓ (NOR).

[21] Wickes 1967:36. Wickes offers a good example of 8 of the 2 x 4 (3-variable) maps and 16 of the 4 x 4 (4-variable) maps. As an arbitrary 3-variable map could represent any one of 2^8 = 256 2x4 maps, and an arbitrary 4-variable map could represent any one of 2^16 = 65,536 different formula-evaluations, writing down every one is infeasible.

[22] This definition is given by Stephen Kleene. Both Kurt Gödel and Kleene believed that the classical paradoxes are uniformly
examples of this sort of denition. But Kleene went on to assert that the problem has not been solved satisfactorily and
impredicative denitions can be found in analysis. He gives as example the denition of the least upper bound (l.u.b) u of
M. Given a Dedekind cut of the number line C and the two parts into which the number line is cut, i.e. M and (C - M),
l.u.b. = u is dened in terms of the notion M, whereas M is dened in terms of C. Thus the denition of u, an element
of C, is dened in terms of the totality C and this makes its denition impredicative. Kleene asserts that attempts to argue
this away can be used to uphold the impredicative denitions in the paradoxes.(Kleene 1952:43).

[23] McCluskey comments that it could be argued that the analysis is still incomplete because the word statement The outputs
are equal to the previous values of the inputs has not been obtained"; he goes on to dismiss such worries because English
is not a formal language in a mathematical sense, [and] it is not really possible to have a formal procedure for obtaining
word statements (p. 185).

[24] More precisely, given enough loop gain, either oscillation or memory will occur (cf McCluskey p. 191-2). In abstract
(idealized) mathematical systems adequate loop gain is not a problem.

[25] The notion of delay and the principle of local causation as caused ultimately by the speed of light appears in Robin Gandy
(1980), "Church's thesis and Principles for Mechanisms", in J. Barwise, H. J. Keisler and K. Kunen, eds., The Kleene
Symposium, North-Holland Publishing Company (1980) 123-148. Gandy considered this to be the most important of his
principles: Contemporary physics rejects the possibility of instantaneous action at a distance (p. 135). Gandy was Alan
Turing's student and close friend.

[26] McCluskey p. 194-5 discusses breaking the loop and inserts amplifiers to do this; Wickes (p. 118-121) discusses inserting delays. McCluskey p. 195 discusses the problem of races caused by delays.

206.14 References
Bender, Edward A. and Williamson, S. Gill, 2005, A Short Course in Discrete Mathematics, Dover Publications,
Mineola NY, ISBN 0-486-43946-1. This text is used in a lower division two-quarter [computer science]
course at UC San Diego.

Enderton, H. B., 2002, A Mathematical Introduction to Logic. Harcourt/Academic Press. ISBN 0-12-238452-0

Goodstein, R. L., (Pergamon Press 1963), 1966, (Dover edition 2007), Boolean Algebra, Dover Publications, Inc., Mineola, New York, ISBN 0-486-45894-6. Emphasis on the notion of algebra of classes with set-theoretic symbols such as ∩, ∪, ' (NOT), ⊂ (IMPLIES). Later Goodstein replaces these with &, ∨, ~, → (respectively) in his treatment of Sentence Logic pp. 76-93.

Ivor Grattan-Guinness and Gérard Bornet 1997, George Boole: Selected Manuscripts on Logic and its Philosophy, Birkhäuser Verlag, Basel, ISBN 978-0-8176-5456-6 (Boston).

A. G. Hamilton 1978, Logic for Mathematicians, Cambridge University Press, Cambridge UK, ISBN 0-521-
21838-1.

E. J. McCluskey 1965, Introduction to the Theory of Switching Circuits, McGraw-Hill Book Company, New
York. No ISBN. Library of Congress Catalog Card Number 65-17394. McCluskey was a student of Willard
Quine and developed some notable theorems with Quine and on his own. For those interested in the history,
the book contains a wealth of references.
Marvin L. Minsky 1967, Computation: Finite and Infinite Machines, Prentice-Hall, Inc., Englewood Cliffs, N.J. No ISBN. Library of Congress Catalog Card Number 67-12342. Useful especially for computability, plus good sources.

Paul C. Rosenbloom 1950, Dover edition 2005, The Elements of Mathematical Logic, Dover Publications, Inc.,
Mineola, New York, ISBN 0-486-44617-4.

Joel W. Robbin 1969, 1997, Mathematical Logic: A First Course, Dover Publications, Inc., Mineola, New
York, ISBN 0-486-45018-X (pbk.).

Patrick Suppes 1957 (1999 Dover edition), Introduction to Logic, Dover Publications, Inc., Mineola, New York.
ISBN 0-486-40687-3 (pbk.). This book is in print and readily available.
On his page 204 in a footnote he references his set of axioms to E. V. Huntington, "Sets of Independent Postulates for the Algebra of Logic", Transactions of the American Mathematical Society, Vol. 5 (1904) pp. 288-309.

Alfred Tarski 1941 (1995 Dover edition), Introduction to Logic and to the Methodology of Deductive Sciences,
Dover Publications, Inc., Mineola, New York. ISBN 0-486-28462-X (pbk.). This book is in print and readily
available.
Jean van Heijenoort 1967, 3rd printing with emendations 1976, From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931, Harvard University Press, Cambridge, Massachusetts. ISBN 0-674-32449-8 (pbk.) Translations/reprints of Frege (1879), Russell's letter to Frege (1902) and Frege's letter to Russell (1902), Richard's paradox (1905), and Post (1921) can be found here.
Alfred North Whitehead and Bertrand Russell 1927 2nd edition, paperback edition to *53 1962, Principia Mathematica, Cambridge University Press, no ISBN. In the years between the first edition of 1912 and the 2nd edition of 1927, H. M. Sheffer 1921 and M. Jean Nicod (no year cited) brought to Russell's and Whitehead's attention that what they considered their primitive propositions (connectives) could be reduced to a single one, |, nowadays known as the stroke or NAND (NOT-AND, NEITHER ... NOR...). Russell and Whitehead discuss this in their Introduction to the Second Edition and make the definitions as discussed above.
William E. Wickes 1968, Logic Design with Integrated Circuits, John Wiley & Sons, Inc., New York. No ISBN. Library of Congress Catalog Card Number: 68-21185. Tight presentation of engineering's analysis and synthesis methods; references McCluskey 1965. Unlike Suppes, Wickes' presentation of Boolean algebra starts with a set of postulates of a truth-table nature and then derives the customary theorems from them (p. 18).

A truth table will contain 2^n rows, where n is the number of variables (e.g. three variables p, d, c produce 2^3 = 8 rows).

Steps in the reduction using a Karnaugh map. The final result is the OR (logical sum) of the three reduced terms.

About the simplest memory results when the output of an OR feeds back to one of its inputs, in this case output q feeding back into p. The next simplest is the flip-flop shown below the once-flip. Analysis of these sorts of formulas can be done by either cutting the feedback path(s) or inserting (ideal) delay in the path. A cut path and an assumption that no delay occurs anywhere in the circuit results in inconsistencies for some of the total states (combinations of inputs and outputs, e.g. (p=0, s=1, r=1) results in an inconsistency). When delay is present these inconsistencies are merely transient and expire when the delay(s) expire. The drawings on the right are called state diagrams.

A clocked flip-flop memory (c is the clock and d is the data). The data can change at any time when clock c=0; when clock c=1 the output q tracks the value of data d. When c goes from 1 to 0 it traps the value of d, i.e. d = q, and this value continues to appear at q no matter what d does (as long as c remains 0).
Chapter 207

Propositional function

A propositional function in logic is a sentence expressed in a way that would assume the value of true or false, except that within the sentence is a variable (x) that is not defined or specified, which leaves the statement undetermined. Of course, the sentence can consist of several such variables (e.g. n variables, in which case the function takes n arguments). As a mathematical function, A(x) or A(x1, x2, ..., xn), the propositional function is abstracted from predicates or propositional forms. As an example, let's imagine the predicate "x is hot". The substitution of any entity for x will produce a specific proposition that can be described as either true or false, even though "x is hot" on its own has no value as either a true or false statement. However, when you assign x a value, such as lava, the function then has the value true; while if you assign x a value like ice, the function then has the value false.
Propositional functions are useful in set theory for the formation of sets. For example, in 1903 Bertrand Russell wrote
in The Principles of Mathematics (page 106):

"...it has become necessary to take propositional function as a primitive notion.

Later Russell examined the problem of whether propositional functions were predicative or not, and he proposed two
theories to try to get at this question: the zig-zag theory and the ramified theory of types.[1]
A propositional function, or a predicate, in a variable x is a sentence p(x) involving x that becomes a proposition
when we give x a definite value from the set of values it can take.
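To make the idea concrete, here is a minimal Python sketch (the predicate name and its extension are illustrative assumptions, not part of the article): a propositional function is a one-place predicate that yields a truth value only once an argument is supplied.

# A propositional function A(x): "x is hot".  The set of hot things is an
# assumed, toy extension used only for illustration.
HOT_THINGS = {"lava", "fire", "steam"}

def is_hot(x):
    """Return the truth value of the proposition obtained by substituting x."""
    return x in HOT_THINGS

print(is_hot("lava"))   # True  -- "lava is hot" is a proposition with a truth value
print(is_hot("ice"))    # False -- "ice is hot" is likewise a proposition
# is_hot itself, with x unspecified, has no truth value.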

207.1 See also


Predicate (mathematical logic)

Boolean-valued function
Formula (logic)

Sentence (logic)
Open sentence

207.2 References
[1] Tiles, Mary (2004). The philosophy of set theory an historical introduction to Cantors paradise (Dover ed.). Mineola, N.Y.:
Dover Publications. p. 159. ISBN 978-0-486-43520-6. Retrieved 1 February 2013.

Chapter 208

Propositional proof system

In propositional calculus and proof complexity a propositional proof system (pps), also called a Cook–Reckhow
propositional proof system, is a system for proving classical propositional tautologies.

208.1 Mathematical denition


Formally a pps is a polynomial-time function P whose range is the set of all propositional tautologies (denoted
TAUT).[1] If A is a formula, then any x such that P(x) = A is called a P-proof of A. The condition defining a pps
can be broken up as follows:

Completeness: every propositional tautology has a P-proof,

Soundness: if a propositional formula has a P-proof then it is a tautology,

Efficiency: P runs in polynomial time.

In general, a proof system for a language L is a polynomial-time function whose range is L. Thus, a propositional
proof system is a proof system for TAUT.
Sometimes the following alternative definition is considered: a pps is given as a proof-verification algorithm P(A,x)
with two inputs. If P accepts the pair (A,x) we say that x is a P-proof of A. P is required to run in polynomial time,
and moreover, it must hold that A has a P-proof if and only if it is a tautology.
If P1 is a pps according to the first definition, then P2, defined by: P2(A,x) if and only if P1(x) = A, is a pps according
to the second definition. Conversely, if P2 is a pps according to the second definition, then P1 defined by

P1(x, A) = A if P2(A, x) holds, and P1(x, A) = ⊤ otherwise

(P1 takes pairs as input) is a pps according to the first definition, where ⊤ is a fixed tautology.
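As an illustration of the two definitions and of the translation between them, the following Python sketch uses the simple "truth-table" proof system, in which a proof of a tautology is the (exponentially long, but polynomial-time verifiable) list of all rows of its truth table; the function names and the toy formula encoding are assumptions made for this example, not part of the Cook–Reckhow definition.

from itertools import product

def evaluate(formula, assignment):
    """Evaluate a formula given as a Python boolean expression over the assignment's names."""
    return bool(eval(formula, {}, assignment))      # toy evaluator, illustration only

def P2(A, x):
    """Verifier-style pps: x must list every assignment, and A must hold on each of them."""
    formula, variables = A
    expected = [dict(zip(variables, bits)) for bits in product([False, True], repeat=len(variables))]
    return x == expected and all(evaluate(formula, a) for a in x)

TOP = ("p or not p", ("p",))                        # a fixed tautology, playing the role of the article's ⊤

def P1(x, A):
    """Function-style pps obtained from P2, as in the text: maps proofs onto tautologies."""
    return A if P2(A, x) else TOP

A = ("(p and q) or (not p) or (not q)", ("p", "q"))
proof = [dict(zip(("p", "q"), bits)) for bits in product([False, True], repeat=2)]
print(P2(A, proof))        # True: the verifier accepts the truth-table proof
print(P1(proof, A) == A)   # True: P1 maps the proof to the tautology it proves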

208.2 Algorithmic interpretation


One can view the second definition as a non-deterministic algorithm for solving membership in TAUT. This means that
proving a superpolynomial proof-size lower bound for a pps would rule out the existence of a certain class of polynomial-
time algorithms based on that pps.
As an example, exponential proof-size lower bounds in resolution for the pigeonhole principle imply that any algorithm
based on resolution cannot decide TAUT or SAT efficiently and will fail on pigeonhole-principle tautologies.
This is significant because the class of algorithms based on resolution includes most current propositional proof
search algorithms and modern industrial SAT solvers.


208.3 History
Historically, Frege's propositional calculus was the first propositional proof system. The general definition of a
propositional proof system is due to Stephen Cook and Robert A. Reckhow (1979).[1]

208.4 Relation with computational complexity theory


Propositional proof systems can be compared using the notion of p-simulation. A propositional proof system P p-
simulates Q (written as P ≤p Q) when there is a polynomial-time function F such that P(F(x)) = Q(x) for every x.[1]
That is, given a Q-proof x, we can find in polynomial time a P-proof of the same tautology. If P ≤p Q and Q ≤p P,
the proof systems P and Q are p-equivalent. There is also a weaker notion of simulation: a pps P simulates or weakly
p-simulates a pps Q if there is a polynomial p such that for every Q-proof x of a tautology A, there is a P-proof y of A
such that the length of y, |y|, is at most p(|x|). (Some authors use the words p-simulation and simulation interchangeably
for either of these two concepts, usually the latter.)
A propositional proof system is called p-optimal if it p-simulates all other propositional proof systems, and it is
optimal if it simulates all other pps. A propositional proof system P is polynomially bounded (also called super) if
every tautology has a short (i.e., polynomial-size) P-proof.
If P is polynomially bounded and Q simulates P, then Q is also polynomially bounded.
The set of propositional tautologies, TAUT, is a coNP-complete set. A propositional proof system is a certificate-
verifier for membership in TAUT. Existence of a polynomially bounded propositional proof system means that there
is a verifier with polynomial-size certificates, i.e., TAUT is in NP. In fact these two statements are equivalent, i.e.,
there is a polynomially bounded propositional proof system if and only if the complexity classes NP and coNP are
equal.[1]
Some equivalence classes of proof systems under simulation or p-simulation are closely related to theories of bounded
arithmetic; they are essentially non-uniform versions of the bounded arithmetic, in the same way that circuit classes
are non-uniform versions of resource-based complexity classes. Extended Frege systems (allowing the introduction
of new variables by definition) correspond in this way to polynomially-bounded systems, for example. Where the
bounded arithmetic in turn corresponds to a circuit-based complexity class, there are often similarities between the
theory of proof systems and the theory of the circuit families, such as matching lower bound results and separations.
For example, just as counting cannot be done by an AC0 circuit family of subexponential size, many tautologies
relating to the pigeonhole principle cannot have subexponential proofs in a proof system based on bounded-depth
formulas (and in particular, not by resolution-based systems, since they rely solely on depth 1 formulas).

208.5 Examples of propositional proof systems


Some examples of propositional proof systems studied are:

Propositional Resolution and various restrictions and extensions of it like DPLL algorithm

Natural deduction

Sequent calculus

Frege system

Extended Frege

Polynomial calculus

Nullstellensatz system

Cutting-plane method

Relationship between some common proof systems

208.6 References
[1] Cook, Stephen; Reckhow, Robert A. (1979). The Relative Efficiency of Propositional Proof Systems. Journal of Symbolic
Logic. 44 (1). pp. 36–50.

208.7 Further reading


Samuel Buss (1998), An introduction to proof theory, in: Handbook of Proof Theory (ed. S.R.Buss), Else-
vier (1998).

P. Pudlák (1998), The lengths of proofs, in: Handbook of Proof Theory (ed. S.R.Buss), Elsevier, (1998).

P. Beame and T. Pitassi (1998). Propositional proof complexity: past, present and future. Technical Report
TR98-067, Electronic Colloquium on Computational Complexity.

Nathan Segerlind (2007), The Complexity of Propositional Proofs, Bulletin of Symbolic Logic 13(4): 417–481

J. Krajíček (1995), Bounded Arithmetic, Propositional Logic, and Complexity Theory, Cambridge University
Press.
J. Krajíček, Proof complexity, in: Proc. 4th European Congress of Mathematics (ed. A. Laptev), EMS, Zurich,
pp. 221–231, (2005).
J. Krajíček, Propositional proof complexity I. and Proof complexity and arithmetic.

Stephen Cook and Phuong Nguyen, Logical Foundations of Proof Complexity, Cambridge University Press,
2010 (draft from 2008)

Robert Reckhow, On the Lengths of Proofs in the Propositional Calculus, PhD Thesis, 1975.

208.8 External links


Proof Complexity
Chapter 209

Propositional variable

In mathematical logic, a propositional variable (also called a sentential variable or sentential letter) is a variable
which can either be true or false. Propositional variables are the basic building-blocks of propositional formulas,
used in propositional logic and higher logics.

209.1 Uses
Formulas in logic are typically built up recursively from some propositional variables, some number of logical con-
nectives, and some logical quantifiers. Propositional variables are the atomic formulas of propositional logic.

Example

In a given propositional logic, we might dene a formula as follows:

Every propositional variable is a formula.


Given a formula X, the negation ¬X is a formula.

Given two formulas X and Y, and a binary connective b (such as the logical conjunction ∧), then (X b Y) is a
formula. (Note the parentheses.)

In this way, all of the formulas of propositional logic are built up from propositional variables as a basic unit. Propo-
sitional variables should not be confused with the metavariables which appear in the typical axioms of propositional
calculus; the latter effectively range over well-formed formulae.

209.2 In first order logic


Propositional variables are represented as nullary predicates in first order logic.

209.3 See also

209.4 References
Smullyan, Raymond M. First-Order Logic. 1968. Dover edition, 1995. Chapter 1.1: Formulas of Propositional
Logic.

Chapter 210

Quantificational variability effect

Quantificational variability effect (QVE) is the intuitive equivalence of certain sentences with quantificational
adverbs (Q-adverbs) and sentences without these, but with quantificational determiner phrases (DP) in argument
position instead.

1. (a) A cat is usually smart. (Q-adverb)

1. (b) Most cats are smart. (DP)

2. (a) A dog is always smart. (Q-adverb)

2. (b) All dogs are smart. (DP)[1]

Analysis of QVE is widely cited as entering the literature with David Lewis' "Adverbs of Quantification" (1975),
where he proposes QVE as a solution to Peter Geach's donkey sentence (1962). Terminology, and comprehen-
sive analysis, is normally attributed to Stephen Berman's "Situation-Based Semantics for Adverbs of Quantification"
(1987).

210.1 See also


Donkey pronoun

210.2 Notes
[1] Adapted from Endriss and Hinterwimmer (2005).

210.3 External links


Core text

Lewis, David. 'Adverbs of Quantication'. In Formal Semantics of Natural Language. Edited by Edward L
Keenan. Cambridge: Cambridge University Press, 1975. Pages 315.

Other texts available online

Endriss, Cornelia and Stefan Hinterwimmer. 'The Non-Uniformity of Quanticational Variability Eects: A
Comparison of Singular Indenites, Bare Plurals and Plural Denites. Belgian Journal of Linguistics 19 (2005):
93120.


210.4 Literature
Core texts

Berman, Stephen. The Semantics of Open Sentences. PhD thesis. University of Massachusetts Amherst, 1991.

Berman, Stephen. 'An Analysis of Quantier Variability in Indirect Questions. In MIT Working Papers in
Linguistics 11. Edited by Phil Branigan and others. Cambridge: MIT Press, 1989. Pages 116.

Berman, Stephen. 'Situation-Based Semantics for Adverbs of Quantication'. In University of Massachusetts


Occasional Papers 12. Edited by J. Blevins and Anne Vainikka. Graduate Linguistic Student Association
(GLSA), University of Massachusetts Amherst, 1987. Pages 4568.

Select bibliography
Chapter 211

Quantifier (linguistics)

In linguistics and grammar, a quantifier is a type of determiner, such as all, some, many, few, a lot, and no (but not
numerals), that indicates quantity.
Quantification is also used in logic, where it is a formula constructor that produces new formulas from old ones.
Natural languages' determiners have been argued to correspond to logical quantifiers at the semantic level.

211.1 Introduction
All known human languages make use of quantification (Wiese 2004). For example, in English:

Every glass in my recent order was chipped.


Some of the people standing across the river have white armbands.
Most of the people I talked to didn't have a clue who the candidates were.
A lot of people are smart.

The words in italics are quantifiers. There exists no simple way of reformulating any one of these expressions as
a conjunction or disjunction of sentences, each a simple predicate of an individual such as "That wine glass was
chipped". These examples also suggest that the construction of quantified expressions in natural language can be
syntactically very complicated. Fortunately, for mathematical assertions, the quantification process is syntactically
more straightforward.
The study of quantification in natural languages is much more difficult than the corresponding problem for formal
languages. This comes in part from the fact that the grammatical structure of natural language sentences may conceal
the logical structure. Moreover, mathematical conventions strictly specify the range of validity for formal language
quantifiers; for natural language, specifying the range of validity requires dealing with non-trivial semantic problems.
For example, the sentence "Someone gets mugged in New York every 10 minutes" does not identify whether it is the
same person getting mugged every 10 minutes; see also below.
Montague grammar gives a novel formal semantics of natural languages. Its proponents argue that it provides a much
more natural formal rendering of natural language than the traditional treatments of Frege, Russell and Quine.

211.2 Nesting
The order of quantifiers is critical to meaning. While mathematical formal notation requires writing quantifiers in
front, thus avoiding ambiguity, problems arise in natural (or mixed) language when quantifiers are also appended:

"∃A: ∀B: C" – unambiguous

"there is an A such that ∀B: C" – unambiguous

"there is an A such that for all B, C" – unambiguous, provided that the separation between B and C is clear

"there is an A such that C for all B" – it is often clear that what is meant is

"there is an A such that (C for all B)", formally: "∃A: ∀B: C"

but it could be interpreted as
"(there is an A such that C) for all B", formally: "∀B: ∃A: C"

"there is an A such that C ∀B" suggests more strongly that the first is meant; this may be reinforced by the
layout, for example by putting "C ∀B" on a new line.

211.3 History
Term logic, also called Aristotelian logic, treats quantification in a manner that is closer to natural language, and also
less suited to formal analysis. Term logic treated All, Some and No in the 4th century BC, in an account also touching
on the alethic modalities. Starting with Gottlob Frege's 1879 Begriffsschrift, Charles Sanders Peirce's 1885 work, and
Bertrand Russell's 1903 Principles of Mathematics, quantifiers were introduced into mathematical logic formalism.
See Quantifier (logic)#History for details.

211.4 See also


Generalized quantifier: the standard semantics assigned to determiner phrases
Indefinite pronoun

Number names

211.5 References
Dag Westersthl (2001). Quantiers, in Goble, Lou, ed., The Blackwell Guide to Philosophical Logic. Black-
well.

Stanley Peters, Dag Westersthl (2002). "Quantiers."


Heike Wiese (2003). Numbers, language, and the human mind. Cambridge University Press. ISBN 0-521-
83182-2.
Edward Keenan; Denis Paperno (2012). Handbook of Quantiers in Natural Language. Studies in Linguistics
and Philosophy. 90. Springer Science & Business Media. p. 16. ISBN 9400726813.
Chapter 212

Quantifier (logic)

In logic, quantification specifies the quantity of specimens in the domain of discourse that satisfy an open formula.
The two most common quantifiers mean "for all" and "there exists". For example, in arithmetic, quantifiers allow
one to say that the natural numbers go on forever, by writing that for all n (where n is a natural number), there is
another number (say, the successor of n) which is one bigger than n.
A language element which generates a quantification (such as "every") is called a quantifier. The resulting expression
is a quantified expression; it is said to be quantified over the predicate (such as "the natural number x has a successor")
whose free variable is bound by the quantifier. In formal languages, quantification is a formula constructor that
produces new formulas from old ones. The semantics of the language specifies how the constructor is interpreted.
Two fundamental kinds of quantification in predicate logic are universal quantification and existential quantification.
The traditional symbol for the universal quantifier "all" is "∀", a rotated letter "A", and for the existential quantifier
"exists" it is "∃", a rotated letter "E". These quantifiers have been generalized beginning with the work of Mostowski
and Lindström.
Quantification is used as well in natural languages; examples of quantifiers in English are "for all", "for some", "many",
"few", "a lot", and "no"; see Quantifier (linguistics) for details.

212.1 Mathematics
Consider the following statement:

1 · 2 = 1 + 1, and 2 · 2 = 2 + 2, and 3 · 2 = 3 + 3, ..., and 100 · 2 = 100 + 100, and ..., etc.

This has the appearance of an infinite conjunction of propositions. From the point of view of formal languages this is
immediately a problem, since syntax rules are expected to generate finite objects. The example above is fortunate in
that there is a procedure to generate all the conjuncts. However, if an assertion were to be made about every irrational
number, there would be no way to enumerate all the conjuncts, since irrationals cannot be enumerated. A succinct
formulation which avoids these problems uses universal quantification:

For each natural number n, n · 2 = n + n.

A similar analysis applies to the disjunction,

1 is equal to 5 + 5, or 2 is equal to 5 + 5, or 3 is equal to 5 + 5, ... , or 100 is equal to 5 + 5, or ..., etc.

which can be rephrased using existential quantification:

For some natural number n, n is equal to 5+5.
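The finite fragments of these two statements can be phrased directly with Python's all() and any(), which act as bounded universal and existential quantifiers over a finite range (the bound of 100 below is just the point at which the informal statements above trail off):

# Bounded analogues of the quantified statements above.
print(all(n * 2 == n + n for n in range(1, 101)))   # "for each natural number n (up to 100), n*2 = n+n"
print(any(n == 5 + 5 for n in range(1, 101)))       # "for some natural number n (up to 100), n = 5+5"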


212.2 Algebraic approaches to quantification


It is possible to devise abstract algebras whose models include formal languages with quantification, but progress has
been slow and interest in such algebra has been limited. Three approaches have been devised to date:

Relation algebra, invented by Augustus De Morgan, and developed by Charles Sanders Peirce, Ernst Schröder,
Alfred Tarski, and Tarski's students. Relation algebra cannot represent any formula with quantifiers nested
more than three deep. Surprisingly, the models of relation algebra include the axiomatic set theory ZFC and
Peano arithmetic;
Cylindric algebra, devised by Alfred Tarski, Leon Henkin, and others;
The polyadic algebra of Paul Halmos.

212.3 Notation
The two most common quantifiers are the universal quantifier and the existential quantifier. The traditional symbol
for the universal quantifier is "∀", a rotated letter "A", which stands for "for all" or "all". The corresponding symbol
for the existential quantifier is "∃", a rotated letter "E", which stands for "there exists" or "exists".
An example of translating a quantified English statement would be as follows. Given the statement, "Each of Peter's
friends either likes to dance or likes to go to the beach", we can identify key aspects and rewrite using symbols including
quantifiers. So, let X be the set of all Peter's friends, P(x) the predicate "x likes to dance", and Q(x) the predicate "x
likes to go to the beach". Then the above sentence can be written in formal notation as ∀x∈X, P(x) ∨ Q(x), which
is read, "for every x that is a member of X, P applies to x or Q applies to x".
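The translation can be checked mechanically over a finite domain. In the following Python sketch the set of friends and the extensions of P and Q are invented purely for illustration:

X = {"Alice", "Bob", "Carol"}            # an assumed finite set of Peter's friends
likes_to_dance = {"Alice", "Bob"}        # assumed extension of P(x)
likes_the_beach = {"Carol"}              # assumed extension of Q(x)

def P(x): return x in likes_to_dance
def Q(x): return x in likes_the_beach

# The sentence "for every x in X, P(x) or Q(x)", i.e. ∀x∈X, P(x) ∨ Q(x)
print(all(P(x) or Q(x) for x in X))      # True under these assumed extensions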
Some other quantified expressions are constructed as follows,

∃x P        ∀x P

for a formula P. These two expressions (using the definitions above) are read as "there exists a friend of Peter who likes
to dance" and "all friends of Peter like to dance" respectively. Variant notations include, for set X and set members
x:

(∃x)P    (∃x . P)    ∃x · P    (∃x : P)    ∃x(P)    ∃x P    ∃x, P    ∃x∈X P    ∃x:X P


All of these variations also apply to universal quantification. Other variations for the universal quantifier are


(x) P        ⋀x P

Some versions of the notation explicitly mention the range of quantification. The range of quantification must always
be specified; for a given mathematical theory, this can be done in several ways:

Assume a fixed domain of discourse for every quantification, as is done in Zermelo–Fraenkel set theory,
Fix several domains of discourse in advance and require that each variable have a declared domain, which is the
type of that variable. This is analogous to the situation in statically typed computer programming languages,
where variables have declared types.
Mention explicitly the range of quantication, perhaps using a symbol for the set of all objects in that domain
or the type of the objects in that domain.

One can use any variable as a quantified variable in place of any other, under certain restrictions in which variable
capture does not occur. Even if the notation uses typed variables, variables of that type may be used.
Informally or in natural language, the "∃x" or "∀x" might appear after or in the middle of P(x). Formally, however,
the phrase that introduces the dummy variable is placed in front.
Mathematical formulas mix symbolic expressions for quantifiers with natural language quantifiers such as

For every natural number x, ....


There exists an x such that ....
For at least one x.

Keywords for uniqueness quantification include:

For exactly one natural number x, ....


There is one and only one x such that ....

Further, x may be replaced by a pronoun. For example,

For every natural number, its product with 2 equals its sum with itself
Some natural number is prime.

212.4 Nesting
The order of quantifiers is critical to meaning, as is illustrated by the following two propositions:

For every natural number n, there exists a natural number s such that s = n².

This is clearly true; it just asserts that every natural number has a square. The meaning of the assertion in which the
quantifiers are turned around is different:

There exists a natural number s such that for every natural number n, s = n².

This is clearly false; it asserts that there is a single natural number s that is at the same time the square of every
natural number. This is because the syntax directs that any variable cannot be a function of subsequently introduced
variables.
A less trivial example from mathematical analysis is the pair of concepts of uniform and pointwise continuity, whose def-
initions differ only by an exchange in the positions of two quantifiers. A function f from R to R is called

pointwise continuous if ∀ε>0 ∀x∈R ∃δ>0 ∀h∈R (|h| < δ ⟹ |f(x) − f(x + h)| < ε)
uniformly continuous if ∀ε>0 ∃δ>0 ∀x∈R ∀h∈R (|h| < δ ⟹ |f(x) − f(x + h)| < ε)

In the former case, the particular value chosen for δ can be a function of both ε and x, the variables that precede it.
In the latter case, δ can be a function only of ε, i.e. it has to be chosen independently of x. For example, f(x) = x²
satisfies pointwise, but not uniform, continuity. In contrast, interchanging the two initial universal quantifiers in the
definition of pointwise continuity does not change the meaning.
The maximum depth of nesting of quantifiers inside a formula is called its quantifier rank.

212.5 Equivalent expressions


If D is a domain of x and P(x) is a predicate dependent on x, then the universal proposition can be expressed as

∀x∈D P(x)

This notation is known as restricted or relativized or bounded quantification. Equivalently,

∀x (x∈D → P(x))

The existential proposition can be expressed with bounded quantification as

∃x∈D P(x)

or equivalently

∃x (x∈D ∧ P(x))

Together with negation, only one of either the universal or existential quantifier is needed to perform both tasks:

¬(∀x∈D P(x)) ≡ ∃x∈D ¬P(x),

which shows that to disprove a "for all x" proposition, one needs no more than to find an x for which the predicate is
false. Similarly,

¬(∃x∈D P(x)) ≡ ∀x∈D ¬P(x),

to disprove a "there exists an x" proposition, one needs to show that the predicate is false for all x.
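Both dualities can be spot-checked on a finite domain; the following short Python sketch (with an arbitrarily chosen domain and predicate) mirrors the two displayed equivalences:

D = range(10)                     # a finite domain, chosen arbitrarily
def P(x): return x % 2 == 0       # a sample predicate

# ¬(∀x∈D P(x))  ≡  ∃x∈D ¬P(x)
print((not all(P(x) for x in D)) == any(not P(x) for x in D))   # True

# ¬(∃x∈D P(x))  ≡  ∀x∈D ¬P(x)
print((not any(P(x) for x in D)) == all(not P(x) for x in D))   # True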

212.6 Range of quantification


Every quantification involves one specific variable and a domain of discourse or range of quantification of that variable.
The range of quantification specifies the set of values that the variable takes. In the examples above, the range of
quantification is the set of natural numbers. Specification of the range of quantification allows us to express the
difference between asserting that a predicate holds for some natural number or for some real number. Expository
conventions often reserve some variable names such as "n" for natural numbers and "x" for real numbers, although
relying exclusively on naming conventions cannot work in general, since ranges of variables can change in the course
of a mathematical argument.
A more natural way to restrict the domain of discourse uses guarded quantification. For example, the guarded quan-
tification

For some natural number n, n is even and n is prime

means

For some even number n, n is prime.

In some mathematical theories a single domain of discourse fixed in advance is assumed. For example, in Zermelo–
Fraenkel set theory, variables range over all sets. In this case, guarded quantifiers can be used to mimic a smaller
range of quantification. Thus in the example above, to express

For every natural number n, n² = n + n

in Zermelo–Fraenkel set theory, it can be said

For every n, if n belongs to N, then n² = n + n,

where N is the set of all natural numbers.



212.7 Formal semantics


Mathematical semantics is the application of mathematics to study the meaning of expressions in a formal language.
It has three elements: a mathematical specification of a class of objects via syntax, a mathematical specification of
various semantic domains, and the relation between the two, which is usually expressed as a function from syntactic
objects to semantic ones. This article only addresses the issue of how quantifier elements are interpreted.
Given a model-theoretical logical framework, the syntax of a formula can be given by a syntax tree. Quantifiers have
scope, and a variable x is free if it is not within the scope of a quantification for that variable. Thus in

∀x (∃y B(x, y)) ∨ C(y, x)

the occurrence of both x and y in C(y, x) is free.

Syntactic tree illustrating scope and variable capture

An interpretation for first-order predicate calculus assumes as given a domain of individuals X. A formula A whose free
variables are x1, ..., xn is interpreted as a boolean-valued function F(v1, ..., vn) of n arguments, where each argument
ranges over the domain X. Boolean-valued means that the function assumes one of the values T (interpreted as truth)
or F (interpreted as falsehood). The interpretation of the formula

∀xn A(x1, ..., xn)

is the function G of n−1 arguments such that G(v1, ..., vn−1) = T if and only if F(v1, ..., vn−1, w) = T for every w in
X. If F(v1, ..., vn−1, w) = F for at least one value of w, then G(v1, ..., vn−1) = F. Similarly the interpretation of the
formula

∃xn A(x1, ..., xn)

is the function H of n−1 arguments such that H(v1, ..., vn−1) = T if and only if F(v1, ..., vn−1, w) = T for at least one
w and H(v1, ..., vn−1) = F otherwise.
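For a finite domain the construction of G and H from F can be written out directly. The following Python sketch fixes n = 3 and an arbitrary domain and interpretation F, both of which are assumptions for the example:

X = {0, 1, 2, 3}                         # the domain of individuals (assumed)

def F(v1, v2, w):                        # boolean-valued interpretation of A(x1, x2, x3), assumed
    return v1 + v2 > w

def G(v1, v2):                           # interpretation of ∀x3 A(x1, x2, x3)
    return all(F(v1, v2, w) for w in X)

def H(v1, v2):                           # interpretation of ∃x3 A(x1, x2, x3)
    return any(F(v1, v2, w) for w in X)

print(G(1, 1), H(1, 1))                  # False True
print(G(2, 2), H(2, 2))                  # True True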

The semantics for uniqueness quantification requires first-order predicate calculus with equality. This means there
is given a distinguished two-placed predicate "="; the semantics is also modified accordingly so that "=" is always
interpreted as the two-place equality relation on X. The interpretation of

∃!xn A(x1, ..., xn)
then is the function of n−1 arguments, which is the logical and of the interpretations of

∃xn A(x1, ..., xn)
∀y, z {A(x1, ..., xn−1, y) ∧ A(x1, ..., xn−1, z) ⟹ y = z}.

212.8 Paucal, multal and other degree quantifiers


See also: Fubini's theorem and measurable

None of the quantifiers previously discussed apply to a quantification such as

There are many integers n < 100, such that n is divisible by 2 or 3 or 5.

One possible interpretation mechanism can be obtained as follows: Suppose that in addition to a semantic domain X,
we have given a probability measure P defined on X and cutoff numbers 0 < a ≤ b ≤ 1. If A is a formula with free
variables x1, ..., xn whose interpretation is the function F of variables v1, ..., vn then the interpretation of

∃many xn A(x1, ..., xn−1, xn)


is the function of v1, ..., vn−1 which is T if and only if

P{w : F(v1, ..., vn−1, w) = T} ≥ b


and F otherwise. Similarly, the interpretation of

∃few xn A(x1, ..., xn−1, xn)


is the function of v1, ..., vn−1 which is F if and only if

0 < P{w : F(v1, ..., vn−1, w) = T} ≤ a


and T otherwise.
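Over a finite domain with the uniform probability measure this interpretation is easy to realize; in the Python sketch below the cutoffs a and b are arbitrary choices, and the example predicate is the one from the opening sentence of this section:

X = range(1, 100)                            # the integers n < 100
a, b = 0.2, 0.5                              # cutoff numbers 0 < a <= b <= 1 (assumed)

def measure(pred):
    """P{w : pred(w) = T} under the uniform measure on X."""
    return sum(1 for w in X if pred(w)) / len(X)

def many(pred):
    return measure(pred) >= b

def few(pred):
    return 0 < measure(pred) <= a

print(many(lambda n: n % 2 == 0 or n % 3 == 0 or n % 5 == 0))   # True: 73 of the 99 integers qualify
print(few(lambda n: n % 97 == 0))                               # True: only n = 97 qualifies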

212.9 Other quantifiers


A few other quantifiers have been proposed over time. In particular, the solution quantifier,[1] noted § (section sign)
and read "those". For example:

§n∈N [n² ≤ 4] = {0, 1, 2}
is read "those n in N such that n² ≤ 4 are in {0, 1, 2}". The same construct is expressible in set-builder notation:

{n ∈ N : n² ≤ 4} = {0, 1, 2}
Some other quantifiers sometimes used in mathematics include:

There are infinitely many elements such that...

For all but finitely many elements... (sometimes expressed as "for almost all elements...").

There are uncountably many elements such that...

For all but countably many elements...

For all elements in a set of positive measure...

For all elements except those in a set of measure zero...

212.10 History
Term logic, also called Aristotelian logic, treats quantification in a manner that is closer to natural language, and also
less suited to formal analysis. Term logic treated All, Some and No in the 4th century BC, in an account also touching
on the alethic modalities.
In 1827, George Bentham published his Outline of a new system of logic, with a critical examination of Dr Whately's
Elements of Logic, describing the principle of the quantifier, but the book was not widely circulated.[2]
William Hamilton claimed to have coined the terms "quantify" and "quantification", most likely in his Edinburgh
lectures c. 1840. Augustus De Morgan confirmed this in 1847, but modern usage began with De Morgan in 1862
where he makes statements such as "We are to take in both all and some-not-all as quantifiers".[3]
Gottlob Frege, in his 1879 Begriffsschrift, was the first to employ a quantifier to bind a variable ranging over a domain
of discourse and appearing in predicates. He would universally quantify a variable (or relation) by writing the variable
over a dimple in an otherwise straight line appearing in his diagrammatic formulas. Frege did not devise an explicit
notation for existential quantification, instead employing his equivalent of ~∀x~, or contraposition. Frege's treatment
of quantification went largely unremarked until Bertrand Russell's 1903 Principles of Mathematics.
In work that culminated in Peirce (1885), Charles Sanders Peirce and his student Oscar Howard Mitchell indepen-
dently invented universal and existential quantifiers, and bound variables. Peirce and Mitchell wrote Πx and Σx where
we now write ∀x and ∃x. Peirce's notation can be found in the writings of Ernst Schröder, Leopold Loewenheim,
Thoralf Skolem, and Polish logicians into the 1950s. Most notably, it is the notation of Kurt Gödel's landmark 1930
paper on the completeness of first-order logic, and 1931 paper on the incompleteness of Peano arithmetic.
Peirce's approach to quantification also influenced William Ernest Johnson and Giuseppe Peano, who invented yet
another notation, namely (x) for the universal quantification of x and (in 1897) ∃x for the existential quantification of
x. Hence for decades, the canonical notation in philosophy and mathematical logic was "(x)P" to express "all individuals
in the domain of discourse have the property P", and "(∃x)P" for "there exists at least one individual in the domain
of discourse having the property P". Peano, who was much better known than Peirce, in effect diffused the latter's
thinking throughout Europe. Peano's notation was adopted by the Principia Mathematica of Whitehead and Russell,
Quine, and Alonzo Church. In 1935, Gentzen introduced the ∀ symbol, by analogy with Peano's ∃ symbol. ∀ did not
become canonical until the 1960s.
Around 1895, Peirce began developing his existential graphs, whose variables can be seen as tacitly quantified.
Whether the shallowest instance of a variable is even or odd determines whether that variable's quantification is
universal or existential. (Shallowness is the contrary of depth, which is determined by the nesting of negations.)
Peirce's graphical logic has attracted some attention in recent years by those researching heterogeneous reasoning
and diagrammatic inference.

212.11 See also


Generalized quantifier: a higher-order property used as standard semantics of quantified noun phrases

Lindström quantifier: a generalized polyadic quantifier

Quantifier elimination

Augustus De Morgan (1806-1871) was the first to use "quantifier" in the modern way.

212.12 References
[1] Hehner, Eric C. R., 2004, Practical Theory of Programming, 2nd edition, p. 28
[2] George Bentham, Outline of a new system of logic: with a critical examination of Dr. Whately's Elements of Logic (1827);
Thoemmes; Facsimile edition (1990) ISBN 1-85506-029-9
[3] Peters, Stanley; Westerståhl, Dag (2006-04-27). Quantifiers in Language and Logic. Clarendon Press. pp. 34. ISBN
978-0-19-929125-0.

Barwise, Jon; and Etchemendy, John, 2000. Language Proof and Logic. CSLI (University of Chicago Press)
and New York: Seven Bridges Press. A gentle introduction to first-order logic by two first-rate logicians.

Frege, Gottlob, 1879. Begrisschrift. Translated in Jean van Heijenoort, 1967. From Frege to Gdel: A Source
Book on Mathematical Logic, 1879-1931. Harvard University Press. The rst appearance of quantication.
Hilbert, David; and Ackermann, Wilhelm, 1950 (1928). Principles of Mathematical Logic. Chelsea. Transla-
tion of Grundzge der theoretischen Logik. Springer-Verlag. The 1928 rst edition is the rst time quantication
was consciously employed in the now-standard manner, namely as binding variables ranging over some xed
domain of discourse. This is the dening aspect of rst-order logic.
Peirce, C. S., 1885, On the Algebra of Logic: A Contribution to the Philosophy of Notation, American Journal
of Mathematics, Vol. 7, pp. 180202. Reprinted in Kloesel, N. et al., eds., 1993. Writings of C. S. Peirce, Vol.
5. Indiana University Press. The rst appearance of quantication in anything like its present form.

Reichenbach, Hans, 1975 (1947). Elements of Symbolic Logic, Dover Publications. The quantiers are dis-
cussed in chapters 18 Binding of variables through 30 Derivations from Synthetic Premises.

Westersthl, Dag, 2001, Quantiers, in Goble, Lou, ed., The Blackwell Guide to Philosophical Logic. Black-
well.
Wiese, Heike, 2003. Numbers, language, and the human mind. Cambridge University Press. ISBN 0-521-
83182-2.

212.13 External links


Hazewinkel, Michiel, ed. (2001) [1994], Quantier, Encyclopedia of Mathematics, Springer Science+Business
Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Stanford Encyclopedia of Philosophy:

"Classical Logic by Stewart Shapiro. Covers syntax, model theory, and metatheory for rst order logic
in the natural deduction style.
"Generalized quantiers" by Dag Westersthl.

Peters, Stanley; Westersthl, Dag (2002). "Quantiers."


Chapter 213

Quantifier rank

In mathematical logic, the quantifier rank of a formula is the depth of nesting of its quantifiers. It plays an essential
role in model theory.
Notice that the quantifier rank is a property of the formula itself (i.e. the expression in a language). Thus two logically
equivalent formulae can have different quantifier ranks, when they express the same thing in different ways.

213.1 Definition
Quantifier Rank of a Formula in First-order language (FO)
Let φ be a FO formula. The quantifier rank of φ, written qr(φ), is defined as

qr(φ) = 0, if φ is atomic.

qr(φ1 ∧ φ2) = qr(φ1 ∨ φ2) = max(qr(φ1), qr(φ2)).

qr(¬φ) = qr(φ).

qr(∃x φ) = qr(φ) + 1.

Remarks

We write FO[n] for the set of all first-order formulas φ with qr(φ) ≤ n.

Relational FO[n] (without function symbols) is always of finite size, i.e. it contains only a finite number of formulas
up to logical equivalence.

Notice that in prenex normal form the quantifier rank of φ is exactly the number of quantifiers appearing in
φ.

Quantifier Rank of a higher order Formula

For fixpoint logic, with a least fixed point operator LFP:

qr([LFP φ] y) = 1 + qr(φ)

...
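The first-order clauses of the definition above translate directly into a short recursive procedure. In the following Python sketch formulas are encoded as nested tuples (an encoding invented for this example), and the quantifier clause is applied to both ∃ and ∀:

def qr(phi):
    """Quantifier rank of a formula encoded as a nested tuple."""
    op = phi[0]
    if op == "atom":                       # qr(φ) = 0 for atomic φ
        return 0
    if op in ("and", "or"):                # qr(φ1 ∧ φ2) = qr(φ1 ∨ φ2) = max(qr(φ1), qr(φ2))
        return max(qr(phi[1]), qr(phi[2]))
    if op == "not":                        # qr(¬φ) = qr(φ)
        return qr(phi[1])
    if op in ("exists", "forall"):         # qr(∃x φ) = qr(φ) + 1 (and likewise for ∀)
        return qr(phi[2]) + 1
    raise ValueError("unknown constructor: " + op)

# ∀x ∃y R(x, y) has quantifier rank 2
print(qr(("forall", "x", ("exists", "y", ("atom", "R(x,y)")))))          # 2
# ∃x R(y, x) ∧ ∃x R(x, y) has quantifier rank 1: the quantifiers are not nested
print(qr(("and", ("exists", "x", ("atom", "R(y,x)")),
                 ("exists", "x", ("atom", "R(x,y)")))))                  # 1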

213.2 Examples
A sentence of quantifier rank 2:

∀x ∃y R(x, y)

A formula of quantifier rank 1:

∃x R(y, x) ∧ ∃x R(x, y)

A formula of quantifier rank 0:

R(x, y) ∧ x = y

A sentence in prenex normal form of quantifier rank 3:

∀x ∃y ∀z ((x = y ∧ xRy) ∨ (x = z ∧ zRx))

A sentence, equivalent to the previous, although of quantifier rank 2:

∀x (∃y (x = y ∧ xRy) ∨ ∀z (x = z ∧ zRx))

213.3 See also


Prenex normal form

Ehrenfeucht game
Quantifier

213.4 References
Ebbinghaus, Heinz-Dieter; Flum, Jörg (1995), Finite Model Theory, Springer, ISBN 978-3-540-60149-4.
Grädel, Erich; Kolaitis, Phokion G.; Libkin, Leonid; Maarten, Marx; Spencer, Joel; Vardi, Moshe Y.; Venema,
Yde; Weinstein, Scott (2007), Finite model theory and its applications, Texts in Theoretical Computer Science.
An EATCS Series, Berlin: Springer-Verlag, p. 133, ISBN 978-3-540-00428-8, Zbl 1133.03001.

213.5 External links


Quantifier Rank Spectrum of L-infinity-omega, BA Thesis, 2000
Chapter 214

Quantifier variance

The term quantifier variance refers to the claim that there is no uniquely best ontological language with which to describe
the world.[1] According to Hirsch, it is an outgrowth of Urmson's dictum:

If two sentences are equivalent to each other, then while the use of one rather than the other may
be useful for some philosophical purposes, it is not the case that one will be nearer to reality than the
other...We can say a thing this way, and we can say it that way, sometimes...But it is no use asking which
is the logically or metaphysically right way to say it.[2]
James Opie Urmson, Philosophical Analysis, p. 186

The term quantifier variance rests upon the philosophical term 'quantifier', more precisely the existential quantifier. A
'quantifier' is an expression like "there exists at least one such-and-such".[3]

214.1 Quantifiers
Main articles: Existential quantification and Quantification (logic)

The word quantifier in the introduction refers to a variable used in a domain of discourse, a collection of objects
under discussion. In daily life, the domain of discourse could be 'apples', or 'persons', or even everything.[4] In a more
technical arena, the domain of discourse could be 'integers', say. The quantifier variable x, say, in the given domain
of discourse can take on the 'value' of, or designate, any object in the domain. The presence of a particular object, say a
'unicorn', is expressed in the manner of symbolic logic as:

∃x; x is a unicorn.

Here the 'turned E', or ∃, is read as "there exists..." and is called the symbol for existential quantification. Relations be-
tween objects also can be expressed using quantifiers. For example, in the domain of integers (denoting the quantifier
variable by n, a customary choice for an integer) we can indirectly identify '5' by its relation with the number '25':

∃n; n · n = 25.

If we want to point out specifically that the domain of integers is meant, we could write:

∃n ∈ ℤ; n · n = 25.

Here, ∈ = "is a member of..." and is called the symbol for set membership; and ℤ denotes the set of integers.
There are a variety of expressions that serve the same purpose in various ontologies, and they are accordingly all
quantifier expressions.[1] Quantifier variance is then one argument concerning exactly what expressions can be con-
strued as quantifiers, and just which arguments of a quantifier, that is, which substitutions for "such-and-such", are
permissible.[5]


214.2 Usage, not 'existence'?


Hirsch says the notion of quantifier variance is a concept concerning how languages work, and is not connected to
the ontological question of what 'really' exists.[6] That view is not universal.[7]
The thesis underlying quantifier variance was stated by Putnam:

The logical primitives themselves, and in particular the notions of object and existence, have a mul-
titude of different uses rather than one absolute 'meaning'.[8]
Hilary Putnam, Truth and Convention, p. 71

Citing this quotation from Putnam, Wasserman states: "This thesis – the thesis that there are many meanings for the
existential quantifier that are equally neutral and equally adequate for describing all the facts – is often referred to as
the doctrine of quantifier variance".[7]
Hirsch's quantifier variance has been connected to Carnap's idea of a linguistic framework as a 'neo'-Carnapian
view, namely, the view that there are a number of equally good meanings of the logical quantifiers; choosing one
of these frameworks is to be understood analogously to choosing a Carnapian framework.[9] Of course, not all
philosophers (notably Quine and the 'neo'-Quineans) subscribe to the notion of multiple linguistic frameworks.[9] See
meta-ontology.
Hirsch himself suggests some care in connecting his version of quantifier variance with Carnap: "Let's not call any
philosophers quantifier variantists unless they are clearly committed to the idea that (most of) the things that exist are
completely independent of language." In this connection Hirsch says "I have a problem, however, in calling Carnap
a quantifier variantist, insofar as he is often viewed as a verificationist anti-realist."[1] Although Thomasson does
not think Carnap is properly considered to be an antirealist, she still disassociates Carnap from Hirsch's version of
quantifier variance: "I'll argue, however, that Carnap in fact is not committed to quantifier variance in anything like
Hirsch's sense, and that he [Carnap] does not rely on it in his ways of deflating metaphysical debates."[10]

214.3 See also


Ordinary language philosophy
Metaphilosophy
Mereology

214.4 References
[1] Eli Hirsch (2011). "Introduction". Quantifier Variance and Realism: Essays in Metaontology.
Oxford University Press. p. xii. ISBN 0199732116.

[2] J.O. Urmson (1967). Philosophical analysis: its development between the two world wars. Oxford University Press. p. 186.
Quoted by Eli Hirsch.

[3] A 'quantifier' in symbolic logic originally was the part of statements involving the logic symbols ∀ (for all) and ∃ (there
exists), as in an expression like "for all such-and-such, P is true" (∀x: P(x)) or "there exists at least one such-and-such
such that P is true" (∃x: P(x)), where such-and-such, or x, is an element of a set and P is a proposition or assertion.
However, the idea of a quantifier has since been generalized. See Dag Westerståhl (April 19, 2011). Edward N. Zalta, ed.
"Generalized Quantifiers". The Stanford Encyclopedia of Philosophy (Summer 2011 Edition).

[4] Alan Hausman; Howard. Kahane; Paul. Tidman (2012). Domain of discourse. Logic and Philosophy: A Modern
Introduction (12th ed.). Cengage Learning. p. 194. ISBN 113305000X.

[5] Theodore Sider (2011). Writing the book of the world. Oxford University Press. p. 175. ISBN 0199697906. Quantier
variance, on my formulation, says that there are many candidates for being meant by quantiers; but the quantier vari-
antist needn't take this quantication [that is, this variety of quantier-meanings] seriously...we could choose to use the
sentence There exists something that is composed of [such-and-such composite object]" so it comes out true, or we could
choose to use it so it comes out false; and under neither choice would our words [get closer to reality] than under the other.
[Italics added, 'cute' in-group phrases replaced, as indicated by brackets]

[6] Eli Hirsch (2011). "Chapter 12: Ontology and alternative languages". Quantifier Variance and Realism: Essays in Metaon-
tology. Oxford University Press. pp. 220–250. ISBN 0199780714. "I take it for granted that the
world and the things in it exist for the most part in complete independence of our knowledge or language. Our linguistic
choices do not determine what exists, but determine what we are to mean by the words 'what exists' and related words."

[7] Wasserman, Ryan (April 5, 2013). Edward N. Zalta, ed. Material Constitution. The Stanford Encyclopedia of Philosophy
(Summer 2013 Edition).

[8] Hilary Putnam (1987). "Truth and convention: On Davidson's refutation of conceptual relativism". Dialectica. 41: 69–77.
doi:10.1111/j.1746-8361.1987.tb00880.x.

[9] Helen Beebee; Nikk Effingham; Philip Goff (2012). Metaphysics: The Key Concepts. Taylor & Francis. p. 125. ISBN
0203835255.

[10] Amie L. Thomasson. Carnap and the Prospects for Easy Ontology. Retrieved 2013-06-20. To be published in Ontology
after Carnap, Stephan Blatti and Sandra LaPointe, eds., Oxford University Press.
Chapter 215

Quasi-commutative property

In mathematics, the quasi-commutative property is an extension or generalization of the general commutative
property. This property is used in specific applications with various definitions.

215.1 Applied to matrices


Two matrices p and q are said to have the commutative property whenever

pq = qp

The quasi-commutative property in matrices is defined[1] as follows. Given two non-commutable matrices x and y with

xy − yx = z

they satisfy the quasi-commutative property whenever z satisfies the following properties:

xz = zx

yz = zy
An example is found in the matrix mechanics introduced by Heisenberg as a version of quantum mechanics. In this
mechanics, p and q are infinite matrices corresponding respectively to the momentum and position variables of a
particle.[1] These matrices are written out at Matrix mechanics#Harmonic oscillator, and z = iℏ times the infinite unit
matrix, where ℏ is the reduced Planck constant.
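Because the p and q of matrix mechanics are infinite matrices, a small finite example is easier to check by machine. The 3×3 matrices below are my own choice (generators of the Heisenberg group), not the ones from the article, but they exhibit the same quasi-commutative behaviour:

import numpy as np

x = np.array([[0, 1, 0],
              [0, 0, 0],
              [0, 0, 0]])
y = np.array([[0, 0, 0],
              [0, 0, 1],
              [0, 0, 0]])

z = x @ y - y @ x                       # z = xy - yx (here a nonzero matrix unit)
print(np.array_equal(x @ y, y @ x))     # False: x and y do not commute
print(np.array_equal(x @ z, z @ x))     # True:  xz = zx
print(np.array_equal(y @ z, z @ y))     # True:  yz = zy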

215.2 Applied to functions


A function f, defined as follows:

f : X × Y → X

is said to be quasi-commutative[2] if for all x ∈ X and for all y1, y2 ∈ Y,

f(f(x, y1), y2) = f(f(x, y2), y1)
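A standard concrete instance of such a function, and the one underlying the one-way accumulators of reference [2], is modular exponentiation; the modulus chosen in the Python sketch below is arbitrary:

n = 2 ** 61 - 1                     # an arbitrary modulus

def f(x, y):
    """f : X × Y → X with X = Z_n and Y the positive integers."""
    return pow(x, y, n)             # x**y mod n

x, y1, y2 = 12345, 67, 89
print(f(f(x, y1), y2) == f(f(x, y2), y1))   # True: accumulating y1 and y2 in either order agrees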


215.3 See also


Commutative property

Accumulator (cryptography)

215.4 References
[1] Neal H. McCoy. On quasi-commutative matrices. Transactions of the American Mathematical Society, 36(2), 327340.

[2] Benaloh, J., & De Mare, M. (1994, January). One-way accumulators: A decentralized alternative to digital signatures. In
Advances in CryptologyEUROCRYPT93 (pp. 274285). Springer Berlin Heidelberg.
Chapter 216

Quine–McCluskey algorithm

The Quine–McCluskey algorithm (or the method of prime implicants) is a method used for minimization of
Boolean functions that was developed by Willard V. Quine and extended by Edward J. McCluskey.[1][2][3] It is func-
tionally identical to Karnaugh mapping, but the tabular form makes it more efficient for use in computer algorithms,
and it also gives a deterministic way to check that the minimal form of a Boolean function has been reached. It is
sometimes referred to as the tabulation method.
The method involves two steps:

1. Finding all prime implicants of the function.

2. Use those prime implicants in a prime implicant chart to find the essential prime implicants of the function, as
well as other prime implicants that are necessary to cover the function.

216.1 Complexity
Although more practical than Karnaugh mapping when dealing with more than four variables, the Quine–McCluskey
algorithm also has a limited range of use since the problem it solves is NP-hard: the runtime of the Quine–McCluskey
algorithm grows exponentially with the number of variables. It can be shown that for a function of n variables the
upper bound on the number of prime implicants is 3ⁿ/ln(n). If n = 32 there may be over 6.5 × 10¹⁵ prime implicants.
Functions with a large number of variables have to be minimized with potentially non-optimal heuristic methods, of
which the Espresso heuristic logic minimizer was the de facto standard in 1995.[4]

216.2 Example

216.2.1 Step 1: finding prime implicants


Minimizing an arbitrary function:


f(A, B, C, D) = Σ m(4, 8, 10, 11, 12, 15) + d(9, 14).

This expression says that the output function f will be 1 for the minterms 4, 8, 10, 11, 12 and 15 (denoted by the 'm'
term). But it also says that we don't care about the output for the 9 and 14 combinations (denoted by the 'd' term). ('x'
stands for don't care.)
One can easily form the canonical sum of products expression from this table, simply by summing the minterms
(leaving out don't-care terms) where the function evaluates to one:

f(A,B,C,D) = A′BC′D′ + AB′C′D′ + AB′CD′ + AB′CD + ABC′D′ + ABCD.


which is not minimal. So to optimize, all minterms that evaluate to one are first placed in a minterm table. Don't-care
terms are also added into this table, so they can be combined with minterms:
At this point, one can start combining minterms with other minterms. If two terms vary by only a single digit changing,
that digit can be replaced with a dash indicating that the digit doesn't matter. Terms that can't be combined any more
are marked with an asterisk (*). When going from Size 2 to Size 4, treat '-' as a third bit value. For instance, -110
and -100 can be combined, as well as -110 and -11-, but -110 and 011- cannot. (Trick: Match up the '-' first.)
Note: In this example, none of the terms in the size 4 implicants table can be combined any further. Be aware that
this processing should otherwise be continued (size 8 etc.).
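The combining rule just described can be written as a small helper; this Python sketch (my own, not part of the original method description) merges two implicants, written as strings over {'0', '1', '-'}, exactly when they differ in a single position:

def combine(a, b):
    """Merge two implicants differing in exactly one position, else return None."""
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diff) != 1:
        return None
    i = diff[0]
    return a[:i] + "-" + a[i + 1:]

print(combine("1100", "1000"))   # '1-00'  (minterms 12 and 8)
print(combine("1-10", "1-11"))   # '1-1-'  (size-2 implicants m(10,14) and m(11,15))
print(combine("-110", "011-"))   # None    (the dashes do not match up)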

216.2.2 Step 2: prime implicant chart


None of the terms can be combined any further than this, so at this point we construct an essential prime implicant
table. Along the side go the prime implicants that have just been generated, and along the top go the minterms
specified earlier. The don't-care terms are not placed on top; they are omitted from this section because they are not
necessary inputs.
To find the essential prime implicants, we run along the top row. We have to look for columns with only 1 "X". If a
column has only 1 "X", this means that the minterm can only be covered by 1 prime implicant. This prime implicant
is essential.
For example: in the first column, with minterm 4, there is only 1 "X". This means that m(4,12) is essential. So we
place a star next to it. Minterm 15 also has only 1 "X", so m(10,11,14,15) is also essential. Now all columns with 1
"X" are covered.
The second prime implicant can be 'covered' by the third and fourth, and the third prime implicant can be 'covered'
by the second and first, and neither is thus essential. If a prime implicant is essential then, as would be expected, it is
necessary to include it in the minimized boolean equation. In some cases, the essential prime implicants do not cover
all minterms, in which case additional procedures for chart reduction can be employed. The simplest additional
procedure is trial and error, but a more systematic way is Petrick's method. In the current example, the essential
prime implicants do not handle all of the minterms, so, in this case, one can combine the essential implicants with
one of the two non-essential ones to yield one equation:

f(A,B,C,D) = BC′D′ + AB′ + AC   or   f(A,B,C,D) = BC′D′ + AD′ + AC [5]

Both of those final equations are functionally equivalent to the original, verbose equation:

f(A,B,C,D) = A′BC′D′ + AB′C′D′ + AB′C′D + AB′CD′ + AB′CD + ABC′D′ + ABCD′ + ABCD.
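The chart-reading step can likewise be sketched in a few lines of Python. The four prime implicants below are those of this example (in their bit-string form, with the covers they imply), and the code picks out the essential ones and the single minterm left uncovered:

prime_implicants = {
    "-100": {4, 12},             # BC'D'
    "10--": {8, 9, 10, 11},      # AB'
    "1--0": {8, 10, 12, 14},     # AD'
    "1-1-": {10, 11, 14, 15},    # AC
}
required = {4, 8, 10, 11, 12, 15}          # don't-cares 9 and 14 are not on the chart

# A prime implicant is essential if it is the only one covering some required minterm.
essential = {pi for pi, cover in prime_implicants.items()
             if any(sum(m in c for c in prime_implicants.values()) == 1
                    for m in cover & required)}
print(sorted(essential))                   # ['-100', '1-1-'], i.e. BC'D' and AC

uncovered = required - set().union(*(prime_implicants[pi] for pi in essential))
print(uncovered)                           # {8}: cover it with either AB' or AD'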

216.3 See also


Buchberger's algorithm: analogous algorithm for algebraic geometry
Petrick's method

216.4 References
[1] Quine, Willard Van Orman (October 1952). The Problem of Simplifying Truth Functions. The American Mathematical
Monthly. 59 (8): 521–531. JSTOR 2308219. doi:10.2307/2308219.
[2] Quine, Willard Van Orman (November 1955). A Way to Simplify Truth Functions. The American Mathematical Monthly.
62 (9): 627–631. JSTOR 2307285. doi:10.2307/2307285.
[3] McCluskey, Jr., Edward J. (November 1956). Minimization of Boolean Functions. Bell System Technical Journal. 35
(6): 1417–1444. doi:10.1002/j.1538-7305.1956.tb03835.x. Retrieved 2014-08-24.
[4] Nelson, Victor P.; et al. (1995). Digital Logic Circuit Analysis and Design. Prentice Hall. p. 234. Retrieved 2014-08-26.
[5] Logic Friday program

216.5 External links


Quine-McCluskey Solver, by Hatem Hassan.

Quine-McCluskey algorithm implementation with a search of all solutions, by Frdric Carpon.


Modied Quine-McCluskey Method, by Vitthal Jadhav, Amar Buchade.

All about Quine-McClusky, article by Jack Crenshaw comparing Quine-McClusky to Karnaugh maps

Karma 3, A set of logic synthesis tools including Karnaugh maps, Quine-McCluskey minimization, BDDs,
probabilities, teaching module and more. Logic Circuits Synthesis Labs (LogiCS) - UFRGS, Brazil.

A. Costa BFunc, QMC based boolean logic simpliers supporting up to 64 inputs / 64 outputs (independently)
or 32 outputs (simultaneously)

Python Implementation by Robert Dick, with an optimized version.


Python Implementation for symbolically reducing Boolean expressions.

Quinessence, an open source implementation written in Free Pascal by Marco Caminati.


QCA an open source, R based implementation used in the social sciences, by Adrian Dua

A series of two articles describing the algorithm(s) implemented in R: rst article and second article. The R
implementation is exhaustive and it oers complete and exact solutions. It processes up to 20 input variables.

minBool an implementation by Andrey Popov.


QMC applet, an applet for a step by step analyze of the QMC- algorithm by Christian Roth

C++ implementation SourceForge.net C++ program implementing the algorithm.


Perl Module by Darren M. Kulp.

Tutorial Tutorial on Quine-McCluskey and Petricks method (pdf).


Petrick C++ implementation (including Petrick) based on the tutorial above

C program Public Domain console based C program on SourceForge.net.


Tomaszewski, S. P., Celik, I. U., Antoniou, G. E., WWW-based Boolean function minimization INTER-
NATIONAL JOURNAL OF APPLIED MATHEMATICS AND COMPUTER SCIENCE, VOL 13; PART
4, pages 577-584, 2003.
For a fully worked out example visit: http://www.cs.ualberta.ca/~{}amaral/courses/329/webslides/Topic5-QuineMcCluskey/
sld024.htm
An excellent resource detailing each step: Olivier Coudert Two-level logic minimization: an overview IN-
TEGRATION, the VLSI journal, 17-2, pp. 97140, October 1994
The Boolean Bot: A JavaScript implementation for the web: http://booleanbot.com/

open source gui QMC minimizer


Computer Simulation Codes for the Quine-McCluskey Method, by Sourangsu Banerji.
Chapter 217

Random algebra

In set theory, the random algebra or random real algebra is the Boolean algebra of Borel sets of the unit interval
modulo the ideal of measure zero sets. It is used in random forcing to add random reals to a model of set theory.
The random algebra was studied by John von Neumann in 1935 (in work later published as Neumann (1998, p. 253))
who showed that it is not isomorphic to the Cantor algebra of Borel sets modulo meager sets. Random forcing was
introduced by Solovay (1970).

217.1 References
Bartoszyński, Tomek (2010), Invariants of measure and category, Handbook of set theory, 2, Springer, pp.
491–555, MR 2768686
Bukovský, Lev (1977), Random forcing, Set theory and hierarchy theory, V (Proc. Third Conf., Bierutowice,
1976), Lecture Notes in Math., 619, Berlin: Springer, pp. 101–117, MR 0485358
Solovay, Robert M. (1970), A model of set-theory in which every set of reals is Lebesgue measurable, Annals
of Mathematics. Second Series, 92: 1–56, ISSN 0003-486X, JSTOR 1970696, MR 0265151
Neumann, John von (1998) [1960], Continuous geometry, Princeton Landmarks in Mathematics, Princeton
University Press, ISBN 978-0-691-05893-1, MR 0120174

Chapter 218

Read-once function

In mathematics, a read-once function is a special type of Boolean function that can be described by a Boolean
expression in which each variable appears only once.
More precisely, the expression is required to use only the operations of logical conjunction, logical disjunction, and
negation. By applying De Morgan's laws, such an expression can be transformed into one in which negation is used
only on individual variables (still with each variable appearing only once). By replacing each negated variable with a
new positive variable representing its negation, such a function can be transformed into an equivalent positive read-
once Boolean function, represented by a read-once expression without negations.[1]

218.1 Examples
For example, for three variables a, b, and c, the expressions

a ∧ b ∧ c

a ∧ (b ∨ c)
(a ∧ b) ∨ c
a ∨ b ∨ c
are all read-once (as are the other functions obtained by permuting the variables in these expressions). However, the
Boolean median operation, given by the expression

(a ∧ b) ∨ (a ∧ c) ∨ (b ∧ c)

is not read-once: this formula has more than one copy of each variable, and there is no equivalent formula that uses
each variable only once.[2]

218.2 Characterization
The disjunctive normal form of a (positive) read-once function is not generally itself read-once. Nevertheless, it
carries important information about the function. In particular, if one forms a co-occurrence graph in which the
vertices represent variables, and edges connect pairs of variables that both occur in the same prime implicant (term of
the disjunctive normal form), then the co-occurrence graph of a read-once function is necessarily a cograph. More precisely, a positive
Boolean function is read-once if and only if its co-occurrence graph is a cograph, and in addition every maximal clique
of the co-occurrence graph forms one of the conjunctions (prime implicants) of the disjunctive normal form.[3] That
is, when interpreted as a function on sets of vertices of its co-occurrence graph, a read-once function is true for
sets of vertices that contain a maximal clique, and false otherwise. For instance the median function has the same
co-occurrence graph as the conjunction of three variables, a triangle graph, but the three-vertex complete subgraph
of this graph (the whole graph) forms a subset of a clause only for the conjunction and not for the median.[4] Two
variables of a positive read-once expression are adjacent in the co-occurrence graph if and only if their lowest common
ancestor in the expression is a conjunction,[5] so the expression tree can be interpreted as a cotree for the corresponding
cograph.[6]
Another alternative characterization of positive read-once functions combines their disjunctive and conjunctive normal
form. A positive function of a given system of variables, that uses all of its variables, is read-once if and only if every
prime implicant of the disjunctive normal form and every clause of the conjunctive normal form have exactly one
variable in common.[7]
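To make the last characterization concrete, the following small Python sketch (the representation of prime implicants and clauses as sets of variable names is ours, not taken from the cited sources) checks the "exactly one variable in common" condition for a ∧ (b ∨ c) and for the median function.

```python
from itertools import product

def shares_exactly_one_variable(prime_implicants, clauses):
    """Characterization check: every DNF prime implicant and every CNF clause
    of the positive function share exactly one variable."""
    return all(len(p & c) == 1 for p, c in product(prime_implicants, clauses))

# a AND (b OR c): prime implicants {a,b}, {a,c}; CNF clauses {a}, {b,c}
print(shares_exactly_one_variable(
    [{"a", "b"}, {"a", "c"}], [{"a"}, {"b", "c"}]))            # True (read-once)

# median(a,b,c): prime implicants ab, ac, bc; CNF clauses a∨b, a∨c, b∨c
print(shares_exactly_one_variable(
    [{"a", "b"}, {"a", "c"}, {"b", "c"}],
    [{"a", "b"}, {"a", "c"}, {"b", "c"}]))                     # False (not read-once)
```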

218.3 Recognition
It is possible to recognize read-once functions from their disjunctive normal form expressions in polynomial time.[8]
It is also possible to find a read-once expression for a positive read-once function, given access to the function only
through a black box that allows its evaluation at any truth assignment, using only a quadratic number of function
evaluations.[9]

218.4 Notes
[1] Golumbic & Gurvich (2011), p. 519.

[2] Golumbic & Gurvich (2011), p. 520.

[3] Golumbic & Gurvich (2011), Theorem 10.1, p. 521; Golumbic, Mintz & Rotics (2006).

[4] Golumbic & Gurvich (2011), Examples f2 and f3, p. 521.

[5] Golumbic & Gurvich (2011), Lemma 10.1, p. 529.

[6] Golumbic & Gurvich (2011), Remark 10.4, pp. 540541.

[7] Gurvich (1977); Mundici (1989); Karchmer et al. (1993).

[8] Golumbic & Gurvich (2011), Theorem 10.8, p. 541; Golumbic, Mintz & Rotics (2006); Golumbic, Mintz & Rotics (2008).

[9] Golumbic & Gurvich (2011), Theorem 10.9, p. 548; Angluin, Hellerstein & Karpinski (1993).

218.5 References
Angluin, Dana; Hellerstein, Lisa; Karpinski, Marek (1993), Learning read-once formulas with queries,
Journal of the ACM, 40 (1): 185210, MR 1202143, doi:10.1145/138027.138061.
Golumbic, Martin C.; Gurvich, Vladimir (2011), Read-once functions (PDF), in Crama, Yves; Hammer, Pe-
ter L., Boolean functions, Encyclopedia of Mathematics and its Applications, 142, Cambridge University Press,
Cambridge, pp. 519560, ISBN 978-0-521-84751-3, MR 2742439, doi:10.1017/CBO9780511852008.
Golumbic, Martin Charles; Mintz, Aviad; Rotics, Udi (2006), Factoring and recognition of read-once func-
tions using cographs and normality and the readability of functions associated with partial k-trees, Discrete
Applied Mathematics, 154 (10): 14651477, MR 2222833, doi:10.1016/j.dam.2005.09.016.
Golumbic, Martin Charles; Mintz, Aviad; Rotics, Udi (2008), An improvement on the complexity of fac-
toring read-once Boolean functions, Discrete Applied Mathematics, 156 (10): 16331636, MR 2432929,
doi:10.1016/j.dam.2008.02.011.
Gurvich, V. A. (1977), Repetition-free Boolean functions, Uspekhi Matematicheskikh Nauk, 32 (1(193)):
183184, MR 0441560.
Karchmer, M.; Linial, N.; Newman, I.; Saks, M.; Wigderson, A. (1993), Combinatorial characterization of
read-once formulae, Discrete Mathematics, 114 (1-3): 275282, MR 1217758, doi:10.1016/0012-365X(93)90372-
Z.

Mundici, Daniele (1989), Functions computed by monotone Boolean formulas with no repeated variables,
Theoretical Computer Science, 66 (1): 113114, MR 1018849, doi:10.1016/0304-3975(89)90150-3.
Chapter 219

Reduct

This article is about a relation on algebraic structures. For reducts in abstract rewriting, see Confluence (abstract
rewriting).

In universal algebra and in model theory, a reduct of an algebraic structure is obtained by omitting some of the
operations and relations of that structure. The converse of reduct is expansion.

219.1 Denition
Let A be an algebraic structure (in the sense of universal algebra) or equivalently a structure in the sense of model
theory, organized as a set X together with an indexed family of operations and relations on that set, with index set
I. Then the reduct of A defined by a subset J of I is the structure consisting of the set X and the J-indexed family of
operations and relations whose j-th operation or relation for j ∈ J is the j-th operation or relation of A. That is, this
reduct is the structure A with the omission of those operations and relations i for which i is not in J.
A structure A is an expansion of B just when B is a reduct of A. That is, reduct and expansion are mutual converses.

219.2 Examples
The monoid (Z, +, 0) of integers under addition is a reduct of the group (Z, +, −, 0) of integers under addition and
negation, obtained by omitting negation. By contrast, the monoid (N, +, 0) of natural numbers under addition is not
the reduct of any group.
Conversely, the group (Z, +, −, 0) is the expansion of the monoid (Z, +, 0), expanding it with the operation of negation.
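The example can be illustrated with a small Python sketch (the dictionary-based encoding of a structure is only an informal illustration of ours, not standard notation): a structure is a carrier together with named operations, and a reduct simply forgets some of them.

```python
# A structure as a carrier plus a dict of named operations; the reduct keeps
# only a chosen subset of the operations, leaving the carrier untouched.
group_Z = {
    "carrier": "Z",
    "ops": {"+": lambda a, b: a + b, "-": lambda a: -a, "0": 0},
}

def reduct(structure, keep):
    """Return the reduct obtained by omitting every operation not in `keep`."""
    return {
        "carrier": structure["carrier"],
        "ops": {name: op for name, op in structure["ops"].items() if name in keep},
    }

monoid_Z = reduct(group_Z, {"+", "0"})   # (Z, +, 0) as a reduct of (Z, +, -, 0)
print(sorted(monoid_Z["ops"]))           # ['+', '0']
```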

219.3 References
Burris, Stanley N.; H. P. Sankappanavar (1981). A Course in Universal Algebra. Springer. ISBN 3-540-90578-
2.
Hodges, Wilfrid (1993). Model theory. Cambridge University Press. ISBN 0-521-30442-3.

Chapter 220

Reductio ad absurdum

In logic, reductio ad absurdum (Latin for "reduction to absurdity"; or argumentum ad absurdum, "argument
to absurdity") is a form of argument which attempts either to disprove a statement by showing it inevitably leads to
a ridiculous, absurd, or impractical conclusion, or to prove one by showing that if it were not true, the result would
be absurd or impossible.[1][2] Traced back to classical Greek philosophy in Aristotle's Prior Analytics (Greek: ἡ εἰς
ἄτοπον ἀπαγωγή, translit. hē eis átopon apagōgḗ, lit. 'reduction to the impossible'),[2] this technique has been used
throughout history in both formal mathematical and philosophical reasoning, as well as in debate.
Examples of arguments using reductio ad absurdum are as follows:

The Earth cannot be flat, otherwise we would find people falling off the edge.

There is no smallest positive rational number, because if there were, then it could be divided by two to get a
smaller one.

The first example shows that it would be absurd to argue that the Earth is flat, because it would lead to an outcome
that is impossible since it contradicts laws of nature. The second example is a mathematical proof by contradiction
directly bound to the mathematical concept of infinitesimals, which argues that the denial of the premise would result
in a logical contradiction (there is a smallest number and yet there is a number smaller than it).[3]

220.1 Greek philosophy

This technique is used throughout Greek philosophy, beginning with Presocratic philosophers. The earliest Greek
example of a reductio argument is supposedly in fragments of a satirical poem attributed to Xenophanes of Colophon
(c.570 c.475 BC).[4] Criticizing Homer's attribution of human faults to the gods, he states that humans also believe
that the gods' bodies have human form. But if horses and oxen could draw, they would draw the gods with horse and
oxen bodies. The gods cannot have both forms, so this is a contradiction. Therefore the attribution of other human
characteristics to the gods, such as human faults, is also false.
The earlier dialogues of Plato (424–348 BC), relating the debates of his teacher Socrates, raised the use of reductio
arguments to a formal dialectical method (Elenchus), now called the Socratic method,[5] which is taught in law schools.
Typically Socrates' opponent would make an innocuous assertion; then Socrates, by a step-by-step train of reasoning,
bringing in other background assumptions, would make the person admit that the assertion resulted in an absurd
or contradictory conclusion, forcing him to abandon his assertion. The technique was also a focus of the work of
Aristotle (384–322 BC).
Greek mathematicians used the technique to prove fundamental propositions which include the preceding propositions
to prove the antithetical or superposition arguments.[6] Euclid (c. mid 3rd–4th centuries BC) and Archimedes (c.
287 – c. 212 BC) are two very early examples.


220.2 Principle of non-contradiction


Aristotle clarified the connection between contradiction and falsity in his principle of non-contradiction, which states
that a proposition cannot be both true and false.[7][8] That is, a proposition and its negation cannot both be true.
Therefore, if a proposition Q and its negation ¬Q (not-Q) can both be derived logically from a premise, it can be
concluded that the premise is false. This technique, called proof by contradiction, has formed the basis of reductio ad
absurdum arguments in formal fields like logic and mathematics.
The principle of non-contradiction has seemed absolutely undeniable to most philosophers. However a few philoso-
phers such as Heraclitus and Hegel have accepted contradictions.

220.2.1 Principle of explosion and paraconsistent logic

A curious logical consequence of the principle of non-contradiction is that a contradiction implies any statement; if a
contradiction is accepted, any proposition (or its negation) can be proved from it.[8] This is known as the principle of
explosion (Latin: ex falso quodlibet, "from a falsehood, anything [follows]", or ex contradictione sequitur quodlibet,
"from a contradiction, anything follows"), or the principle of pseudo-Scotus.

∀Q: (P ∧ ¬P) → Q
(for all Q, P and not-P implies Q)

The discovery of contradictions at the foundations of mathematics at the beginning of the 20th century, such as
Russell's paradox, threatened the entire structure of mathematics due to the principle of explosion. This has led
a few philosophers such as Newton da Costa, Walter Carnielli and Graham Priest to reject the principle of non-
contradiction, giving rise to theories such as paraconsistent logic and its particular form, dialetheism, which accepts
that there exist statements that are both true and false.
Paraconsistent logics usually deny that the principle of explosion holds for all sentences in logic, which amounts
to denying that a contradiction entails everything (what is called deductive explosion). The logics of formal in-
consistency (LFIs) are a family of paraconsistent logics where the notions of contradiction and consistency are not
coincident; although the validity of the principle of explosion is not accepted for all sentences, it is accepted for
consistent sentences. Most paraconsistent logics, such as the LFIs, also reject the principle of non-contradiction.

220.3 Straw man argument


Main article: Straw man

A fallacious argument similar to reductio ad absurdum often seen in polemical debate is the straw man logical
fallacy.[9][10] A straw man argument attempts to refute a given proposition by showing that a slightly different or
inaccurate form of the proposition (the "straw man") has an absurd, unpleasant, or ridiculous consequence, relying
on the audience failing to notice that the argument does not actually apply to the original proposition.

220.4 See also


Argument from fallacy

Contraposition

List of Latin phrases

Mathematical proof

Prasangika

220.5 References
[1] reductio ad absurdum, Collins English Dictionary Complete and Unabridged (12th ed.), 2014 [1991], retrieved October
29, 2016

[2] Nicholas Rescher. Reductio ad absurdum. The Internet Encyclopedia of Philosophy. Retrieved 21 July 2009.

[3] Howard-Snyder, Frances; Howard-Snyder, Daniel; Wasserman, Ryan (30 March 2012). The Power of Logic (5th ed.).
McGraw-Hill Higher Education. ISBN 0078038197.

[4] Daigle, Robert W. (1991). The reductio ad absurdum argument prior to Aristotle. Masters Thesis. San Jose State Univ.
Retrieved August 22, 2012.

[5] Bobzien, Susanne (2006). Ancient Logic. Stanford Encyclopedia of Philosophy. The Metaphysics Research Lab, Stanford
University. Retrieved August 22, 2012.

[6] Miller, Nathaniel (2007). Euclid and His Twentieth Century Rivals: Diagrams in the Logic of Euclidean Geometry
(PDF). Stanford. Stanford: CSLI Publications. p. 20. Retrieved January 15, 2017.

[7] Ziembiński, Zygmunt (2013). Practical Logic. Springer. p. 95. ISBN 940175604X.

[8] Ferguson, Thomas Macaulay; Priest, Graham (2016). A Dictionary of Logic. Oxford University Press. p. 146. ISBN
0192511556.

[9] Lapakko, David (2009). Argumentation: Critical Thinking in Action. iUniverse. p. 119. ISBN 1440168385.

[10] Van Den Brink-Budgen, Roy (2011). Critical Thinking for Students. Little, Brown Book Group. p. 89. ISBN 1848034202.

220.6 External links


Reductio ad absurdum. Internet Encyclopedia of Philosophy.
Chapter 221

Reed–Muller expansion

In Boolean logic, a Reed–Muller expansion (or Davio expansion) is a decomposition of a Boolean function.
For a Boolean function f(x_1, …, x_n) we set, with respect to x_i:

f_{x_i}(x) = f(x_1, …, x_{i−1}, 1, x_{i+1}, …, x_n)

f_{x̄_i}(x) = f(x_1, …, x_{i−1}, 0, x_{i+1}, …, x_n)

∂f/∂x_i = f_{x_i}(x) ⊕ f_{x̄_i}(x)

as the positive and negative cofactors of f, and the boolean derivation of f, where ⊕ denotes the XOR operator.
Then we have for the Reed–Muller or positive Davio expansion:

f = f_{x̄_i} ⊕ x_i ∂f/∂x_i.

This equation is written in a way that it resembles a Taylor expansion of f about x_i = 0. There is a similar
decomposition corresponding to an expansion about x_i = 1 (negative Davio):

f = f_{x_i} ⊕ x̄_i ∂f/∂x_i.

Repeated application of the Reed–Muller expansion results in an XOR polynomial in x_1, …, x_n:

f = a_1 ⊕ a_2 x_1 ⊕ a_3 x_2 ⊕ a_4 x_1 x_2 ⊕ … ⊕ a_{2^n} x_1 x_2 ⋯ x_n

This representation is unique and sometimes also called Reed–Muller expansion.[1]


E.g. for n = 2 the result would be

f(x_1, x_2) = f_{x̄_1 x̄_2} ⊕ (∂f_{x̄_2}/∂x_1) x_1 ⊕ (∂f_{x̄_1}/∂x_2) x_2 ⊕ (∂²f/∂x_1 ∂x_2) x_1 x_2

where

∂²f/∂x_1 ∂x_2 = f_{x_1 x_2} ⊕ f_{x_1 x̄_2} ⊕ f_{x̄_1 x_2} ⊕ f_{x̄_1 x̄_2}

For n = 3 the result would be

f(x_1, x_2, x_3) = f_{x̄_1 x̄_2 x̄_3} ⊕ (∂f_{x̄_2 x̄_3}/∂x_1) x_1 ⊕ (∂f_{x̄_1 x̄_3}/∂x_2) x_2 ⊕ (∂f_{x̄_1 x̄_2}/∂x_3) x_3 ⊕ (∂²f_{x̄_3}/∂x_1 ∂x_2) x_1 x_2 ⊕ (∂²f_{x̄_2}/∂x_1 ∂x_3) x_1 x_3 ⊕ (∂²f_{x̄_1}/∂x_2 ∂x_3) x_2 x_3 ⊕ (∂³f/∂x_1 ∂x_2 ∂x_3) x_1 x_2 x_3

where

∂³f/∂x_1 ∂x_2 ∂x_3 = f_{x_1 x_2 x_3} ⊕ f_{x_1 x_2 x̄_3} ⊕ f_{x_1 x̄_2 x_3} ⊕ f_{x_1 x̄_2 x̄_3} ⊕ f_{x̄_1 x_2 x_3} ⊕ f_{x̄_1 x_2 x̄_3} ⊕ f_{x̄_1 x̄_2 x_3} ⊕ f_{x̄_1 x̄_2 x̄_3}
This n = 3 case can be given a cubical geometric interpretation (or a graph-theoretic interpretation) as follows: when
moving along the edge from x̄_1 x̄_2 x̄_3 to x_1 x̄_2 x̄_3, XOR up the functions of the two end-vertices of the edge in order
to obtain the coefficient of x_1. To move from x̄_1 x̄_2 x̄_3 to x_1 x_2 x̄_3 there are two shortest paths: one is a two-edge path
passing through x_1 x̄_2 x̄_3 and the other one a two-edge path passing through x̄_1 x_2 x̄_3. These two paths encompass
four vertices of a square, and XORing up the functions of these four vertices yields the coefficient of x_1 x_2. Finally, to
move from x̄_1 x̄_2 x̄_3 to x_1 x_2 x_3 there are six shortest paths which are three-edge paths, and these six paths encompass
all the vertices of the cube, therefore the coefficient of x_1 x_2 x_3 can be obtained by XORing up the functions of all
eight of the vertices. (The other, unmentioned coefficients can be obtained by symmetry.)
The shortest paths all involve monotonic changes to the values of the variables, whereas non-shortest paths all involve
non-monotonic changes of such variables; or, to put it another way, the shortest paths all have lengths equal to the
Hamming distance between the starting and destination vertices. This means that it should be easy to generalize an
algorithm for obtaining coefficients from a truth table by XORing up values of the function from appropriate rows of a
truth table, even for hyperdimensional cases (n = 4 and above). Between the starting and destination rows of a truth
table, some variables have their values remaining fixed: find all the rows of the truth table such that those variables
likewise remain fixed at those given values, then XOR up their functions and the result should be the coefficient for
the monomial corresponding to the destination row. (In such a monomial, include any variable whose value is 1 (at that
row) and exclude any variable whose value is 0 (at that row), instead of including the negation of the variable whose
value is 0, as in the minterm style.)
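The procedure just described can be written down directly. The following Python sketch (an illustration under our own conventions, not code from the cited reference) computes every coefficient of the XOR polynomial by XORing the truth-table values over exactly those rows whose 1-positions lie inside the monomial.

```python
from itertools import product

def anf_coefficients(f, n):
    """Coefficient of monomial m = XOR of f over all rows whose 1-positions
    form a subset of m (rows and monomials are 0/1 tuples of length n)."""
    coeffs = {}
    for m in product((0, 1), repeat=n):
        acc = 0
        for row in product((0, 1), repeat=n):
            if all(r <= v for r, v in zip(row, m)):   # row's 1s lie inside m
                acc ^= f(*row)
        coeffs[m] = acc
    return coeffs

maj = lambda a, b, c: (a & b) | (a & c) | (b & c)     # the Boolean median
print({m: c for m, c in anf_coefficients(maj, 3).items() if c})
# {(0, 1, 1): 1, (1, 0, 1): 1, (1, 1, 0): 1}  ->  maj = x2 x3 XOR x1 x3 XOR x1 x2
```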
Similar to binary decision diagrams (BDDs), where nodes represent Shannon expansions with respect to the corresponding
variable, we can define a decision diagram based on the Reed–Muller expansion. These decision diagrams are called
functional BDDs (FBDDs).

221.1 Derivations
The Reed–Muller expansion can be derived from the XOR-form of the Shannon decomposition, using the identity
x̄ = 1 ⊕ x:

f = x_i f_{x_i} ⊕ x̄_i f_{x̄_i}
  = x_i f_{x_i} ⊕ (1 ⊕ x_i) f_{x̄_i}
  = x_i f_{x_i} ⊕ f_{x̄_i} ⊕ x_i f_{x̄_i}
  = f_{x̄_i} ⊕ x_i ∂f/∂x_i.

Derivation of the expansion for n = 2:

f = f_{x̄_1} ⊕ x_1 ∂f/∂x_1
  = (f_{x̄_2} ⊕ x_2 ∂f/∂x_2)_{x̄_1} ⊕ x_1 ∂(f_{x̄_2} ⊕ x_2 ∂f/∂x_2)/∂x_1
  = f_{x̄_1 x̄_2} ⊕ x_2 ∂f_{x̄_1}/∂x_2 ⊕ x_1 (∂f_{x̄_2}/∂x_1 ⊕ x_2 ∂²f/∂x_1 ∂x_2)
  = f_{x̄_1 x̄_2} ⊕ x_2 ∂f_{x̄_1}/∂x_2 ⊕ x_1 ∂f_{x̄_2}/∂x_1 ⊕ x_1 x_2 ∂²f/∂x_1 ∂x_2.

Derivation of the second-order boolean derivative:

∂²f/∂x_1 ∂x_2 = ∂/∂x_1 (∂f/∂x_2) = ∂/∂x_1 (f_{x_2} ⊕ f_{x̄_2})
  = (f_{x_2} ⊕ f_{x̄_2})_{x_1} ⊕ (f_{x_2} ⊕ f_{x̄_2})_{x̄_1}
  = f_{x_1 x_2} ⊕ f_{x_1 x̄_2} ⊕ f_{x̄_1 x_2} ⊕ f_{x̄_1 x̄_2}.
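As a quick sanity check of the identity derived above, the following Python snippet (an illustrative example of ours, with an arbitrarily chosen function f) verifies the positive Davio expansion with respect to x1 on all inputs.

```python
from itertools import product

f = lambda x1, x2, x3: (x1 & x2) ^ x3        # an arbitrary example function

for x1, x2, x3 in product((0, 1), repeat=3):
    pos = f(1, x2, x3)                       # positive cofactor  f_{x1}
    neg = f(0, x2, x3)                       # negative cofactor  f_{x-bar 1}
    deriv = pos ^ neg                        # boolean derivative df/dx1
    assert f(x1, x2, x3) == neg ^ (x1 & deriv)
print("positive Davio expansion verified for this f on all inputs")
```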

221.2 See also


Algebraic normal form (ANF)
Ring sum normal form (RSNF)

Zhegalkin normal form


Karnaugh map

Irving Stoy Reed


David Eugene Muller

Reed–Muller code

221.3 References
[1] Kebschull, U.; Schubert, E.; Rosenstiel, W. (1992). Multilevel logic synthesis based on functional decision diagrams.
Proceedings 3rd European Conference on Design Automation.

221.4 Further reading


Kebschull, U.; Rosenstiel, W. (1993). Efficient graph-based computation and manipulation of functional
decision diagrams. Proceedings 4th European Conference on Design Automation: 278–282.

Maxfield, Clive "Max" (2006-11-29). Reed-Muller Logic. Logic 101. EETimes. Part 3. Archived from the
original on 2017-04-19. Retrieved 2017-04-19.
Chapter 222

Relation algebra

Not to be confused with relational algebra, a framework for finitary relations and relational databases.

In mathematics and abstract algebra, a relation algebra is a residuated Boolean algebra expanded with an involution
called converse, a unary operation. The motivating example of a relation algebra is the algebra 2^(X²) of all binary
relations on a set X, that is, subsets of the Cartesian square X², with R•S interpreted as the usual composition of
binary relations R and S, and with the converse of R interpreted as the inverse relation.
Relation algebra emerged in the 19th-century work of Augustus De Morgan and Charles Peirce, which culminated
in the algebraic logic of Ernst Schröder. The equational form of relation algebra treated here was developed by
Alfred Tarski and his students, starting in the 1940s. Tarski and Givant (1987) applied relation algebra to a variable-
free treatment of axiomatic set theory, with the implication that mathematics founded on set theory could itself be
conducted without variables.

222.1 Denition
A relation algebra (L, ∧, ∨, ⁻, 0, 1, •, I, ˘) is an algebraic structure equipped with the Boolean operations of conjunc-
tion x∧y, disjunction x∨y, and negation x⁻, the Boolean constants 0 and 1, the relational operations of composition
x•y and converse x˘, and the relational constant I, such that these operations and constants satisfy certain equations
constituting an axiomatization of relation algebras. Roughly, a relation algebra is to a system of binary relations on a
set containing the empty (0), complete (1), and identity (I) relations and closed under these five operations as a group
is to a system of permutations of a set containing the identity permutation and closed under composition and inverse.
However, the first-order theory of relation algebras is not complete for such systems of binary relations.
Following Jónsson and Tsinakis (1993) it is convenient to define additional operations x◁y = x•y˘, and, dually, x▷y
= x˘•y. Jónsson and Tsinakis showed that I◁x = x▷I, and that both were equal to x˘. Hence a relation algebra can
equally well be defined as an algebraic structure (L, ∧, ∨, ⁻, 0, 1, •, I, ◁, ▷). The advantage of this signature over
the usual one is that a relation algebra can then be defined in full simply as a residuated Boolean algebra for which
I▷x is an involution, that is, I▷(I▷x) = x. The latter condition can be thought of as the relational counterpart of the
equation 1/(1/x) = x for ordinary arithmetic reciprocal, and some authors use reciprocal as a synonym for converse.
Since residuated Boolean algebras are axiomatized with finitely many identities, so are relation algebras. Hence the
latter form a variety, the variety RA of relation algebras. Expanding the above definition as equations yields the
following finite axiomatization.

222.1.1 Axioms
The axioms B1-B10 below are adapted from Givant (2006: 283), and were first set out by Tarski in 1948.[1]
L is a Boolean algebra under binary disjunction, ∨, and unary complementation ()⁻:

B1: A ∨ B = B ∨ A
B2: A ∨ (B ∨ C) = (A ∨ B) ∨ C
B3: (A⁻ ∨ B)⁻ ∨ (A⁻ ∨ B⁻)⁻ = A

This axiomatization of Boolean algebra is due to Huntington (1933). Note that the meet of the implied Boolean
algebra is not the • operator (even though it distributes over ∨ like a meet does), nor is the 1 of the Boolean algebra
the I constant.
L is a monoid under binary composition (•) and nullary identity I:

B4: A•(B•C) = (A•B)•C

B5: A•I = A

Unary converse ()˘ is an involution with respect to composition:

B6: A˘˘ = A
B7: (A•B)˘ = B˘•A˘

Axiom B6 defines conversion as an involution, whereas B7 expresses the antidistributive property of conversion
relative to composition.[2]
Converse and composition distribute over disjunction:

B8: (A∨B)˘ = A˘ ∨ B˘
B9: (A∨B)•C = (A•C) ∨ (B•C)

B10 is Tarski's equational form of the fact, discovered by Augustus De Morgan, that A•B ≤ C⁻ ⟺ A˘•C ≤ B⁻ ⟺ C•B˘
≤ A⁻.

B10: (A˘•(A•B)⁻) ∨ B⁻ = B⁻

These axioms are ZFC theorems; for the purely Boolean B1-B3, this fact is trivial. After each of the following axioms
is shown the number of the corresponding theorem in Chapter 3 of Suppes (1960), an exposition of ZFC: B4 27, B5
45, B6 14, B7 26, B8 16, B9 23.

222.2 Expressing properties of binary relations in RA


The following table shows how many of the usual properties of binary relations can be expressed as succinct RA
equalities or inequalities. Below, an inequality of the form A ≤ B is shorthand for the Boolean equation A ∨ B = B.
The most complete set of results of this nature is Chapter C of Carnap (1958), where the notation is rather distant
from that of this entry. Chapter 3.2 of Suppes (1960) contains fewer results, presented as ZFC theorems and using a
notation that more resembles that of this entry. Neither Carnap nor Suppes formulated their results using the RA of
this entry, or in an equational manner.

222.3 Expressive power


The metamathematics of RA are discussed at length in Tarski and Givant (1987), and more briey in Givant (2006).
RA consists entirely of equations manipulated using nothing more than uniform replacement and the substitution of
equals for equals. Both rules are wholly familiar from school mathematics and from abstract algebra generally. Hence
RA proofs are carried out in a manner familiar to all mathematicians, unlike the case in mathematical logic generally.
RA can express any (and up to logical equivalence, exactly the) first-order logic (FOL) formulas containing no more
than three variables. (A given variable can be quantified multiple times and hence quantifiers can be nested arbitrarily
deeply by reusing variables.) Surprisingly, this fragment of FOL suffices to express Peano arithmetic and almost
all axiomatic set theories ever proposed. Hence RA is, in effect, a way of algebraizing nearly all mathematics,
while dispensing with FOL and its connectives, quantifiers, turnstiles, and modus ponens. Because RA can express
Peano arithmetic and set theory, Gödel's incompleteness theorems apply to it; RA is incomplete, incompletable, and
undecidable. (N.B. The Boolean algebra fragment of RA is complete and decidable.)
The representable relation algebras, forming the class RRA, are those relation algebras isomorphic to some re-
lation algebra consisting of binary relations on some set, and closed under the intended interpretation of the RA
operations. It is easily shown, e.g. using the method of pseudoelementary classes, that RRA is a quasivariety, that
is, axiomatizable by a universal Horn theory. In 1950, Roger Lyndon proved the existence of equations holding in
RRA that did not hold in RA. Hence the variety generated by RRA is a proper subvariety of the variety RA. In
1955, Alfred Tarski showed that RRA is itself a variety. In 1964, Donald Monk showed that RRA has no finite
axiomatization, unlike RA, which is finitely axiomatized by definition.

222.3.1 Q-Relation Algebras


An RA is a Q-Relation Algebra (QRA) if, in addition to B1-B10, there exist some A and B such that (Tarski and
Givant 1987: 8.4):

Q0: A˘•A ≤ I
Q1: B˘•B ≤ I
Q2: A˘•B = 1

Essentially these axioms imply that the universe has a (non-surjective) pairing relation whose projections are A and
B. It is a theorem that every QRA is a RRA (Proof by Maddux, see Tarski & Givant 1987: 8.4(iii) ).
Every QRA is representable (Tarski and Givant 1987). That not every relation algebra is representable is a fun-
damental way RA differs from QRA and Boolean algebras, which, by Stone's representation theorem for Boolean
algebras, are always representable as sets of subsets of some set, closed under union, intersection, and complement.

222.4 Examples
1. Any Boolean algebra can be turned into a RA by interpreting conjunction as composition (the monoid multipli-
cation •), i.e. x•y is defined as x∧y. This interpretation requires that converse interpret identity (y˘ = y), and that both
residuals y\x and x/y interpret the conditional y→x (i.e., ¬y∨x).
2. The motivating example of a relation algebra depends on the definition of a binary relation R on a set X as any
subset R ⊆ X², where X² is the Cartesian square of X. The power set 2^(X²) consisting of all binary relations on X is
a Boolean algebra. While 2^(X²) can be made a relation algebra by taking R•S = R∧S, as per example (1) above, the
standard interpretation of • is instead x(R•S)z = ∃y: xRy ∧ ySz. That is, the ordered pair (x,z) belongs to the relation R•S
just when there exists y ∈ X such that (x,y) ∈ R and (y,z) ∈ S. This interpretation uniquely determines R\S as consisting
of all pairs (y,z) such that for all x ∈ X, if xRy then xSz. Dually, S/R consists of all pairs (x,y) such that for all z ∈ X,
if yRz then xSz. The translation y˘ = ¬(y\¬I) then establishes the converse R˘ of R as consisting of all pairs (y,x) such
that (x,y) ∈ R. (A small computational sketch of this example is given after example 5 below.)
3. An important generalization of the previous example is the power set 2^E where E ⊆ X² is any equivalence relation on
the set X. This is a generalization because X² is itself an equivalence relation, namely the complete relation consisting
of all pairs. While 2^E is not a subalgebra of 2^(X²) when E ≠ X² (since in that case it does not contain the relation X²,
the top element 1 being E instead of X²), it is nevertheless turned into a relation algebra using the same definitions
of the operations. Its importance resides in the definition of a representable relation algebra as any relation algebra
isomorphic to a subalgebra of the relation algebra 2^E for some equivalence relation E on some set. The previous
section says more about the relevant metamathematics.
4. Let G be a group. Then the power set 2^G is a relation algebra with the obvious Boolean algebra operations, com-
position given by the product of group subsets, the converse by the inverse subset (A⁻¹ = {a⁻¹ | a ∈ A}), and
the identity by the singleton subset {e}. There is a relation algebra homomorphism embedding 2^G in 2^(G×G) which
sends each subset A ⊆ G to the relation R_A = {(g, h) ∈ G × G | h ∈ Ag}. The image of this homomorphism is
the set of all right-invariant relations on G.
5. If group sum or product interprets composition, group inverse interprets converse, group identity interprets I, and
if R is a one-to-one correspondence, so that R˘•R = R•R˘ = I,[3] then L is a group as well as a monoid. B4-B7 become
well-known theorems of group theory, so that RA becomes a proper extension of group theory as well as of Boolean
algebra.
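A minimal computational illustration of example 2 is sketched below in Python (the function names and the particular relations R and S are ours); it implements composition, converse and the right residual on a three-element set, and spot-checks axiom B7 together with the residuation inequality R•(R\S) ≤ S.

```python
# Binary relations on a small set X, represented as sets of ordered pairs.
X = {0, 1, 2}

def compose(R, S):
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def converse(R):
    return {(y, x) for (x, y) in R}

def right_residual(R, S):
    # R\S: all (y, z) such that for every x in X, xRy implies xSz
    return {(y, z) for y in X for z in X
            if all((x, z) in S for x in X if (x, y) in R)}

R = {(0, 1), (1, 2)}
S = {(1, 1), (2, 0)}
assert converse(compose(R, S)) == compose(converse(S), converse(R))   # B7
assert compose(R, right_residual(R, S)) <= S                          # R•(R\S) ⊆ S
print("B7 and the residuation inequality hold for this R and S")
```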

222.5 Historical remarks


De Morgan founded RA in 1860, but C. S. Peirce took it much further and became fascinated with its philosophical
power. The work of De Morgan and Peirce came to be known mainly in the extended and definitive form Ernst
Schröder gave it in Vol. 3 of his Vorlesungen (1890–1905). Principia Mathematica drew strongly on Schröder's RA,
but acknowledged him only as the inventor of the notation. In 1912, Alwin Korselt proved that a particular formula
in which the quantifiers were nested four deep had no RA equivalent.[4] This fact led to a loss of interest in RA until
Tarski (1941) began writing about it. His students have continued to develop RA down to the present day. Tarski
returned to RA in the 1970s with the help of Steven Givant; this collaboration resulted in the monograph by Tarski
and Givant (1987), the definitive reference for this subject. For more on the history of RA, see Maddux (1991, 2006).

222.6 Software
RelMICS / Relational Methods in Computer Science maintained by Wolfram Kahl

Carsten Sinz: ARA / An Automatic Theorem Prover for Relation Algebras

222.7 See also

222.8 Footnotes
[1] Alfred Tarski (1948) Abstract: Representation Problems for Relation Algebras, Bulletin of the AMS 54: 80.

[2] Chris Brink; Wolfram Kahl; Gunther Schmidt (1997). Relational Methods in Computer Science. Springer. pp. 4 and 8.
ISBN 978-3-211-82971-4.

[3] Tarski, A. (1941), p. 87.

[4] Korselt did not publish his finding. It was first published in Leopold Loewenheim (1915) "Über Möglichkeiten im Rela-
tivkalkül", Mathematische Annalen 76: 447–470. Translated as On possibilities in the calculus of relatives in Jean van
Heijenoort, 1967. A Source Book in Mathematical Logic, 1879–1931. Harvard Univ. Press: 228–251.

222.9 References
Rudolf Carnap (1958) Introduction to Symbolic Logic and its Applications. Dover Publications.

Givant, Steven (2006). The calculus of relations as a foundation for mathematics. Journal of Automated
Reasoning. 37: 277322. doi:10.1007/s10817-006-9062-x.

Halmos, P. R., 1960. Naive Set Theory. Van Nostrand.

Leon Henkin, Alfred Tarski, and Monk, J. D., 1971. Cylindric Algebras, Part 1, and 1985, Part 2. North
Holland.

Hirsch R., and Hodkinson, I., 2002, Relation Algebra by Games, vol. 147 in Studies in Logic and the Foundations
of Mathematics. Elsevier Science.

Jónsson, Bjarni; Tsinakis, Constantine (1993). Relation algebras as residuated Boolean algebras. Algebra
Universalis. 30: 469–478. doi:10.1007/BF01195378.

Maddux, Roger (1991). The Origin of Relation Algebras in the Development and Axiomatization of the
Calculus of Relations (PDF). Studia Logica. 50 (34): 421455. doi:10.1007/BF00370681.

--------, 2006. Relation Algebras, vol. 150 in Studies in Logic and the Foundations of Mathematics. Elsevier
Science.
Patrick Suppes, 1960. Axiomatic Set Theory. Van Nostrand. Dover reprint, 1972. Chapter 3.

Gunther Schmidt, 2010. Relational Mathematics. Cambridge University Press.


Tarski, Alfred (1941). On the calculus of relations. Journal of Symbolic Logic. 6: 7389. JSTOR 2268577.
doi:10.2307/2268577.
------, and Givant, Steven, 1987. A Formalization of Set Theory without Variables. Providence RI: American
Mathematical Society.

222.10 External links


Yohji AKAMA, Yasuo Kawahara, and Hitoshi Furusawa, "Constructing Allegory from Relation Algebra and
Representation Theorems."

Richard Bird, Oege de Moor, Paul Hoogendijk, "Generic Programming with Relations and Functors."
R.P. de Freitas and Viana, "A Completeness Result for Relation Algebra with Binders."

Peter Jipsen:
Relation algebras. In Mathematical structures. If there are problems with LaTeX, see an old HTML
version here.
"Foundations of Relations and Kleene Algebra."
"Computer Aided Investigations of Relation Algebras."
"A Gentzen System And Decidability For Residuated Lattices.
Vaughan Pratt:

"Origins of the Calculus of Binary Relations." A historical treatment.


"The Second Calculus of Binary Relations."
Priss, Uta:

"An FCA interpretation of Relation Algebra."


"Relation Algebra and FCA" Links to publications and software

Kahl, Wolfram, and Schmidt, Gunther, "Exploring (Finite) Relation Algebras Using Tools Written in Haskell."
See homepage of the whole project.
Chapter 223

Relation construction

In logic and mathematics, relation construction and relational constructibility have to do with the ways that one
relation is determined by an indexed family or a sequence of other relations, called the relation dataset. The relation
in the focus of consideration is called the faciendum. The relation dataset typically consists of a specied relation
over sets of relations, called the constructor, the factor, or the method of construction, plus a specied set of other
relations, called the faciens, the ingredients, or the makings.
Relation composition and relation reduction are special cases of relation constructions.

223.1 See also


Projection
Relation

Relation composition

Relation reduction

Chapter 224

Representation (mathematics)

In mathematics, representation is a very general relationship that expresses similarities between objects. Roughly
speaking, a collection Y of mathematical objects may be said to represent another collection X of objects, provided
that the properties and relationships existing among the representing objects yi conform in some consistent way to
those existing among the corresponding represented objects xi. Somewhat more formally, for a set of properties
and relations, a -representation of some structure X is a structure Y that is the image of X under a s homomorphism
that preserves . The label representation is sometimes also applied to the homomorphism itself.

224.1 Representation theory


Perhaps the most well-developed example of this general notion is the subfield of abstract algebra called representation
theory, which studies the representing of elements of algebraic structures by linear transformations of vector spaces.

224.2 Other examples


Although the term representation theory is well established in the algebraic sense discussed above, there are many
other uses of the term representation throughout mathematics.

224.2.1 Graph theory


An active area of graph theory is the exploration of isomorphisms between graphs and other structures. A key
class of such problems stems from the fact that, like adjacency in undirected graphs, intersection of sets (or, more
precisely, non-disjointness) is a symmetric relation. This gives rise to the study of intersection graphs for innumerable
families of sets.[1] One foundational result here, due to Paul Erdős and colleagues, is that every n-vertex graph may
be represented in terms of intersection among subsets of a set of size no more than n²/4.[2]
Representing a graph by such algebraic structures as its adjacency matrix and Laplacian matrix gives rise to the field
of spectral graph theory.[3]
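A small Python sketch (our own construction, simpler and less economical than the Erdős bound quoted above) illustrates the idea of an intersection representation: assign to each vertex the set of its incident edges, plus a private token for isolated vertices, and adjacency becomes non-disjointness of the assigned sets.

```python
from itertools import combinations

def intersection_representation(vertices, edges):
    """Map each vertex to its incident edges plus a private token."""
    return {v: {frozenset(e) for e in edges if v in e} | {("private", v)}
            for v in vertices}

V = {1, 2, 3, 4}
E = [{1, 2}, {2, 3}, {1, 3}]          # a triangle plus the isolated vertex 4
S = intersection_representation(V, E)

for u, w in combinations(V, 2):
    adjacent = {u, w} in E
    assert adjacent == bool(S[u] & S[w])
print("adjacency coincides with set intersection for this graph")
```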

224.2.2 Order theory


Dual to the observation above that every graph is an intersection graph is the fact that every partially ordered set is
isomorphic to a collection of sets ordered by the containment (or inclusion) relation ⊆. Among the posets that arise
as the containment orders for natural classes of objects are the Boolean lattices and the orders of dimension n.[4]
Many partial orders arise from (and thus can be represented by) collections of geometric objects. Among them are
the n-ball orders. The 1-ball orders are the interval-containment orders, and the 2-ball orders are the so-called circle
orders, the posets representable in terms of containment among disks in the plane. A particularly nice result in this
field is the characterization of the planar graphs as those graphs whose vertex-edge incidence relations are circle
orders.[5]


There are also geometric representations that are not based on containment. Indeed, one of the best studied classes
among these are the interval orders,[6] which represent the partial order in terms of what might be called disjoint
precedence of intervals on the real line: each element x of the poset is represented by an interval [x1 , x2 ] such that for
any y and z in the poset, y is below z if and only if y2 < z1 .

224.2.3 Polysemy
Under certain circumstances, a single function f : X → Y is at once an isomorphism from several mathematical struc-
tures on X. Since each of those structures may be thought of, intuitively, as a meaning of the image Y (one of the
things that Y is trying to tell us), this phenomenon is called polysemy, a term borrowed from linguistics. Examples
include:

intersection polysemy: pairs of graphs G1 and G2 on a common vertex set V that can be simultaneously
represented by a single collection of sets Sv such that any distinct vertices u and w in V...

are adjacent in G1 if and only if their corresponding sets intersect (Su ∩ Sw ≠ ∅), and


are adjacent in G2 if and only if the complements do (Su^C ∩ Sw^C ≠ ∅).[7]

competition polysemy: motivated by the study of ecological food webs, in which pairs of species may have
prey in common or have predators in common. A pair of graphs G1 and G2 on one vertex set is competition
polysemic if and only if there exists a single directed graph D on the same vertex set such that any distinct
vertices u and v...

are adjacent in G1 if and only if there is a vertex w such that both u→w and v→w are arcs in D,
and
are adjacent in G2 if and only if there is a vertex w such that both w→u and w→v are arcs in D.[8]

interval polysemy: pairs of posets P1 and P2 on a common ground set that can be simultaneously represented
by a single collection of real intervals that is an interval-order representation of P1 and an interval-containment
representation of P2.[9]

224.3 See also


Representation theorems
Model theory

224.4 References
[1] McKee, Terry A.; McMorris, F. R. (1999), Topics in Intersection Graph Theory, SIAM Monographs on Discrete
Mathematics and Applications, Philadelphia: Society for Industrial and Applied Mathematics, ISBN 0-89871-430-
3, MR 1672910

[2] Erdős, Paul; Goodman, A. W.; Pósa, Louis (1966), The representation of a graph by set intersections, Canadian Journal
of Mathematics, 18 (1): 106–112, MR 0186575, doi:10.4153/cjm-1966-014-3

[3] Biggs, Norman (1994), Algebraic Graph Theory, Cambridge Mathematical Library, Cambridge University Press,
ISBN 978-0-521-45897-9, MR 1271140

[4] Trotter, William T. (1992), Combinatorics and Partially Ordered Sets: Dimension Theory, Johns Hopkins Series in the
Mathematical Sciences, Baltimore: The Johns Hopkins University Press, ISBN 978-0-8018-4425-6, MR 1169299

[5] Scheinerman, Edward (1991), A note on planar graphs and circle orders, SIAM Journal on Discrete Mathematics,
4 (3): 448451, MR 1105950, doi:10.1137/0404040

[6] Fishburn, Peter C. (1985), Interval Orders and Interval Graphs: A Study of Partially Ordered Sets, Wiley-Interscience
Series in Discrete Mathematics, John Wiley & Sons, ISBN 978-0-471-81284-5, MR 0776781

[7] Tanenbaum, Paul J. (1999), Simultaneous intersection representation of pairs of graphs, Journal of Graph Theory,
32 (2): 171190, MR 1709659, doi:10.1002/(SICI)1097-0118(199910)32:2<171::AID-JGT7>3.0.CO;2-N

[8] Fischermann, Miranca; Knoben, Werner; Kremer, Dirk; Rautenbach, Dieter (2004), Competition polysemy,
Discrete Mathematics, 282 (13): 251255, MR 2059526, doi:10.1016/j.disc.2003.11.014

[9] Tanenbaum, Paul J. (1996), Simultaneous representation of interval and interval-containment orders, Order, 13
(4): 339350, MR 1452517, doi:10.1007/BF00405593
Chapter 225

Residuated Boolean algebra

In mathematics, a residuated Boolean algebra is a residuated lattice whose lattice structure is that of a Boolean
algebra. Examples include Boolean algebras with the monoid taken to be conjunction, the set of all formal languages
over a given alphabet under concatenation, the set of all binary relations on a given set X under relational composi-
tion, and more generally the power set of any equivalence relation, again under relational composition. The original
application was to relation algebras as a finitely axiomatized generalization of the binary relation example, but there
exist interesting examples of residuated Boolean algebras that are not relation algebras, such as the language example.

225.1 Denition
A residuated Boolean algebra is an algebraic structure (L, ∧, ∨, ¬, 0, 1, •, I, \, /) such that

(i) (L, ∧, ∨, •, I, \, /) is a residuated lattice, and

(ii) (L, ∧, ∨, ¬, 0, 1) is a Boolean algebra.

An equivalent signature better suited to the relation algebra application is (L, ∧, ∨, ¬, 0, 1, •, I, ▷, ◁) where the unary
operations x\ and x▷ are intertranslatable in the manner of De Morgan's laws via

x\y = ¬(x▷¬y), x▷y = ¬(x\¬y), and dually /y and ◁y as

x/y = ¬(¬x◁y), x◁y = ¬(¬x/y),

with the residuation axioms in the residuated lattice article reorganized accordingly (replacing z by ¬z) to read

(x▷z)∧y = 0 ⟺ (x•y)∧z = 0 ⟺ (z◁y)∧x = 0

This De Morgan dual reformulation is motivated and discussed in more detail in the section below on conjugacy.
Since residuated lattices and Boolean algebras are each definable with finitely many equations, so are residuated
Boolean algebras, whence they form a finitely axiomatizable variety.

225.2 Examples
1. Any Boolean algebra, with the monoid multiplication taken to be conjunction and both residuals taken to be
material implication x→y. Of the remaining 15 binary Boolean operations that might be considered in place of
conjunction for the monoid multiplication, only five meet the monotonicity requirement, namely 0, 1, x, y, and
x∨y. Setting y = z = 0 in the residuation axiom y ≤ x\z ⟺ x•y ≤ z, we have 0 ≤ x\0 ⟺ x•0 ≤ 0, which is falsified
by taking x = 1 when x•y = 1, x, or x∨y. The dual argument for z/y rules out x•y = y. This just leaves x•y = 0 (a
constant binary operation independent of x and y), which satisfies almost all the axioms when the residuals are
both taken to be the constant operation x/y = x\y = 1. The axiom it fails is x•I = x = I•x, for want of a suitable
value for I. Hence conjunction is the only binary Boolean operation making the monoid multiplication that of
a residuated Boolean algebra.


2. The power set 2^(X²), made a Boolean algebra as usual with ∪, ∩ and complement relative to X², and made a
monoid with relational composition. The monoid unit I is the identity relation {(x,x) | x ∈ X}. The right residual
R\S is defined by x(R\S)y if and only if for all z in X, zRx implies zSy. Dually the left residual S/R is defined by
y(S/R)x if and only if for all z in X, xRz implies ySz.
3. The power set 2^(Σ*), made a Boolean algebra as for example 2, but with language concatenation for the monoid.
Here the set Σ is used as an alphabet while Σ* denotes the set of all finite (including empty) words over that
alphabet. The concatenation L•M of languages L and M consists of all words uv such that u ∈ L and v ∈ M.
The monoid unit is the language {ε} consisting of just the empty word ε. The right residual M\L consists of
all words w over Σ such that Mw ⊆ L. The left residual L/M is the same with wM in place of Mw.
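Example 3 can be made concrete for finite languages. The sketch below (our own illustration; it assumes M is non-empty and both languages are finite) computes the right residual M\L = {w : Mw ⊆ L} by testing every suffix of a word of L as a candidate w.

```python
def concat(L, M):
    return {u + v for u in L for v in M}

def right_residual(M, L):
    # Candidate residual words are suffixes of words in L (assumes M non-empty).
    candidates = {w[i:] for w in L for i in range(len(w) + 1)}
    return {w for w in candidates if concat(M, {w}) <= L}

L = {"ab", "abb", "b", "bb"}
M = {"a", ""}                          # "" is the empty word
print(sorted(right_residual(M, L)))    # ['b', 'bb']
assert concat(M, right_residual(M, L)) <= L
```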

225.3 Conjugacy
The De Morgan duals ▷ and ◁ of residuation arise as follows. Among residuated lattices, Boolean algebras are special
by virtue of having a complementation operation ¬. This permits an alternative expression of the three inequalities

y ≤ x\z ⟺ x•y ≤ z ⟺ x ≤ z/y

in the axiomatization of the two residuals in terms of disjointness, via the equivalence x ≤ y ⟺ x∧¬y = 0. Abbreviating
x∧y = 0 to x # y as the expression of their disjointness, and substituting ¬z for z in the axioms, they become with a
little Boolean manipulation

¬(x\¬z) # y ⟺ x•y # z ⟺ ¬(¬z/y) # x

Now ¬(x\¬z) is reminiscent of De Morgan duality, suggesting that x\ be thought of as a unary operation f, defined by
f(y) = x\y, that has a De Morgan dual f*(y) = ¬f(¬y), analogous to ∀x φ(x) = ¬∃x ¬φ(x). Denoting this dual operation as
x▷, we define x▷z as ¬(x\¬z). Similarly we define another operation z◁y as ¬(¬z/y). By analogy with x\ as the residual
operation associated with the operation x•, we refer to x▷ as the conjugate operation, or simply conjugate, of x•.
Likewise ◁y is the conjugate of •y. Unlike residuals, conjugacy is an equivalence relation between operations: if f
is the conjugate of g then g is also the conjugate of f, i.e. the conjugate of the conjugate of f is f. Another advantage
of conjugacy is that it becomes unnecessary to speak of right and left conjugates, that distinction now being inherited
from the difference between x• and •x, which have as their respective conjugates x▷ and ◁x. (But this advantage
accrues also to residuals when x\ is taken to be the residual operation to x•.)
All this yields (along with the Boolean algebra and monoid axioms) the following equivalent axiomatization of a
residuated Boolean algebra.

y # x▷z ⟺ x•y # z ⟺ x # z◁y

With this signature it remains the case that this axiomatization can be expressed as finitely many equations.

225.4 Converse
In examples 2 and 3 it can be shown that x▷I = I◁x. In example 2 both sides equal the converse x˘ of x, while
in example 3 both sides are I when x contains the empty word ε and 0 otherwise. In the former case x˘˘ = x. This is
impossible for the latter because x▷I retains hardly any information about x. Hence in example 2 we can substitute
x˘ for x in x▷I = x˘ = I◁x and cancel (soundly) to give

x˘▷I = x = I◁x˘.

x˘˘ = x can be proved from these two equations. Tarski's notion of a relation algebra can be defined as a residuated
Boolean algebra having an operation x˘ satisfying these two equations.
The cancellation step in the above is not possible for example 3, which therefore is not a relation algebra, x˘ being
uniquely determined as x▷I.
Consequences of this axiomatization of converse include x˘˘ = x, (¬x)˘ = ¬(x˘), (x∨y)˘ = x˘∨y˘, and (x•y)˘ = y˘•x˘.

225.5 References
Bjarni Jónsson and Constantine Tsinakis, Relation algebras as residuated Boolean algebras, Algebra Universalis,
30 (1993) 469-478.
Peter Jipsen, Computer aided investigations of relation algebras, Ph.D. Thesis, Vanderbilt University, May 1992.
Chapter 226

Resolution (logic)

In mathematical logic and automated theorem proving, resolution is a rule of inference leading to a refutation
theorem-proving technique for sentences in propositional logic and first-order logic. In other words, iteratively apply-
ing the resolution rule in a suitable way allows for telling whether a propositional formula is satisfiable and for proving
that a first-order formula is unsatisfiable. Attempting to prove a satisfiable first-order formula as unsatisfiable may
result in a nonterminating computation; this problem doesn't occur in propositional logic.
The resolution rule can be traced back to Davis and Putnam (1960);[1] however, their algorithm required trying all
ground instances of the given formula. This source of combinatorial explosion was eliminated in 1965 by John
Alan Robinson's syntactical unification algorithm, which allowed one to instantiate the formula during the proof on
demand just as far as needed to keep refutation completeness.[2]
The clause produced by a resolution rule is sometimes called a resolvent.

226.1 Resolution in propositional logic

226.1.1 Resolution rule


The resolution rule in propositional logic is a single valid inference rule that produces a new clause implied by two
clauses containing complementary literals. A literal is a propositional variable or the negation of a propositional
variable. Two literals are said to be complements if one is the negation of the other (in the following, ¬c is taken to
be the complement to c). The resulting clause contains all the literals that do not have complements. Formally:

a_1 ∨ … ∨ a_{i−1} ∨ c ∨ a_{i+1} ∨ … ∨ a_n,    b_1 ∨ … ∨ b_{j−1} ∨ ¬c ∨ b_{j+1} ∨ … ∨ b_m
--------------------------------------------------------------------------------
a_1 ∨ … ∨ a_{i−1} ∨ a_{i+1} ∨ … ∨ a_n ∨ b_1 ∨ … ∨ b_{j−1} ∨ b_{j+1} ∨ … ∨ b_m

where

all the a's, b's and c are literals,
the dividing line stands for "entails".

The clause produced by the resolution rule is called the resolvent of the two input clauses. It is the principle of
consensus applied to clauses rather than terms.[3]
When the two clauses contain more than one pair of complementary literals, the resolution rule can be applied
(independently) for each such pair; however, the result is always a tautology.
Modus ponens can be seen as a special case of resolution (of a one-literal clause and a two-literal clause).

p → q,    p
-----------
q

is equivalent to

¬p ∨ q,    p
-----------
q

226.1.2 A resolution technique


When coupled with a complete search algorithm, the resolution rule yields a sound and complete algorithm for de-
ciding the satisfiability of a propositional formula, and, by extension, the validity of a sentence under a set of axioms.
This resolution technique uses proof by contradiction and is based on the fact that any sentence in propositional logic
can be transformed into an equivalent sentence in conjunctive normal form.[4] The steps are as follows; a small
illustrative sketch of the resulting saturation procedure is given at the end of this subsection.

All sentences in the knowledge base and the negation of the sentence to be proved (the conjecture) are con-
junctively connected.
The resulting sentence is transformed into a conjunctive normal form with the conjuncts viewed as elements in
a set, S, of clauses.[4]
For example, (A1 ∨ A2) ∧ (B1 ∨ B2 ∨ B3) ∧ (¬C1) gives rise to the set S = {A1 ∨ A2, B1 ∨ B2 ∨ B3, ¬C1}.
The resolution rule is applied to all possible pairs of clauses that contain complementary literals. After each
application of the resolution rule, the resulting sentence is simplied by removing repeated literals. If the
sentence contains complementary literals, it is discarded (as a tautology). If not, and if it is not yet present in
the clause set S, it is added to S, and is considered for further resolution inferences.
If after applying a resolution rule the empty clause is derived, the original formula is unsatisfiable (or contra-
dictory), and hence it can be concluded that the initial conjecture follows from the axioms.
If, on the other hand, the empty clause cannot be derived, and the resolution rule cannot be applied to derive
any more new clauses, the conjecture is not a theorem of the original knowledge base.

One instance of this algorithm is the original Davis–Putnam algorithm that was later refined into the DPLL algorithm
that removed the need for explicit representation of the resolvents.
This description of the resolution technique uses a set S as the underlying data-structure to represent resolution deriva-
tions. Lists, Trees and Directed Acyclic Graphs are other possible and common alternatives. Tree representations
are more faithful to the fact that the resolution rule is binary. Together with a sequent notation for clauses, a tree
representation also makes it clear to see how the resolution rule is related to a special case of the cut-rule, restricted
to atomic cut-formulas. However, tree representations are not as compact as set or list representations, because they
explicitly show redundant subderivations of clauses that are used more than once in the derivation of the empty clause.
Graph representations can be as compact in the number of clauses as list representations and they also store structural
information regarding which clauses were resolved to derive each resolvent.
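The following compact Python sketch (an illustration of the steps above, not the article's pseudocode; the clause and literal encodings are ours) represents clauses as frozensets of string literals and saturates the clause set with the resolution rule, discarding tautologies, until either the empty clause appears or no new clauses can be derived.

```python
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    for lit in c1:
        if negate(lit) in c2:
            r = (c1 - {lit}) | (c2 - {negate(lit)})
            if not any(negate(l) in r for l in r):    # discard tautologies
                yield frozenset(r)

def satisfiable(clauses):
    clauses = set(clauses)
    while True:
        new = {r for c1, c2 in combinations(clauses, 2) for r in resolvents(c1, c2)}
        if frozenset() in new:
            return False                              # empty clause derived
        if new <= clauses:
            return True                               # saturated without refutation
        clauses |= new

# (a ∨ b) ∧ (¬a ∨ b) ∧ (¬b) is unsatisfiable; (a ∨ b) ∧ (¬b) is satisfiable.
print(satisfiable([frozenset({"a", "b"}), frozenset({"~a", "b"}), frozenset({"~b"})]))  # False
print(satisfiable([frozenset({"a", "b"}), frozenset({"~b"})]))                          # True
```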

226.2 A simple example


a ∨ b,    ¬a ∨ c
----------------
b ∨ c

In plain language: Suppose a is false. In order for the premise a ∨ b to be true, b must be true. Alternatively, suppose
a is true. In order for the premise ¬a ∨ c to be true, c must be true. Therefore, regardless of the falsehood or veracity of
a, if both premises hold, then the conclusion b ∨ c is true.

226.3 Resolution in rst order logic


The resolution rule can be generalized to first-order logic to:[5]

Γ1 ∪ {L1}    Γ2 ∪ {L2}
------------------------- φ
(Γ1 ∪ Γ2)φ

where φ is a most general unifier of L1 and the complement of L2, and Γ1 and Γ2 have no common variables.

226.3.1 Example
The clauses P(x) ∨ Q(x) and ¬P(b) can apply this rule with [b/x] as unifier.
Here x is a variable and b is a constant.

P(x) ∨ Q(x)    ¬P(b)
--------------------- [b/x]
Q(b)

Here we see that

The clauses P(x) ∨ Q(x) and ¬P(b) are the inference's premises
Q(b) (the resolvent of the premises) is its conclusion.
The literal P(x) is the left resolved literal,
The literal ¬P(b) is the right resolved literal,
P is the resolved atom or pivot.
[b/x] is the most general unifier of the resolved literals.
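The example can be reproduced mechanically with a small amount of code. The Python sketch below (the term encoding, the variable convention and the function names are ours, and it omits the occurs check) computes a most general unifier and performs the single resolution step shown above.

```python
# Variables are strings starting with "?", constants are plain strings, an atom
# is a tuple (predicate, *arguments), a literal is (sign, atom) with sign "+"/"-".
def unify(s, t, subst=None):
    subst = dict(subst or {})

    def walk(u):
        while isinstance(u, str) and u.startswith("?") and u in subst:
            u = subst[u]
        return u

    s, t = walk(s), walk(t)
    if s == t:
        return subst
    if isinstance(s, str) and s.startswith("?"):
        subst[s] = t
        return subst
    if isinstance(t, str) and t.startswith("?"):
        subst[t] = s
        return subst
    if isinstance(s, tuple) and isinstance(t, tuple) and len(s) == len(t):
        for a, b in zip(s, t):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                                    # clash: not unifiable

def substitute(x, subst):
    if isinstance(x, tuple):
        return tuple(substitute(e, subst) for e in x)
    while isinstance(x, str) and x.startswith("?") and x in subst:
        x = subst[x]
    return x

def resolve(clause1, lit1, clause2, lit2):
    """One resolution step on the complementary literals lit1 and lit2."""
    assert lit1[0] != lit2[0]                      # opposite signs
    mgu = unify(lit1[1], lit2[1])
    rest = (clause1 - {lit1}) | (clause2 - {lit2})
    return {(sign, substitute(atom, mgu)) for sign, atom in rest}

# P(x) ∨ Q(x) and ¬P(b) resolve, with unifier [b/x], to Q(b):
c1 = {("+", ("P", "?x")), ("+", ("Q", "?x"))}
c2 = {("-", ("P", "b"))}
print(resolve(c1, ("+", ("P", "?x")), c2, ("-", ("P", "b"))))   # {('+', ('Q', 'b'))}
```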

226.3.2 Informal explanation


In first-order logic, resolution condenses the traditional syllogisms of logical inference down to a single rule.
To understand how resolution works, consider the following example syllogism of term logic:

All Greeks are Europeans.


Homer is a Greek.
Therefore, Homer is a European.

Or, more generally:

∀x. P(x) → Q(x)
P(a)
----------------
Q(a)

To recast the reasoning using the resolution technique, first the clauses must be converted to conjunctive normal
form (CNF). In this form, all quantification becomes implicit: universal quantifiers on variables (X, Y, ...) are simply
omitted as understood, while existentially-quantified variables are replaced by Skolem functions.

¬P(x) ∨ Q(x)
P(a)
Q(a)
So the question is, how does the resolution technique derive the last clause from the rst two? The rule is simple:

Find two clauses containing the same predicate, where it is negated in one clause but not in the other.
Perform a unification on the two predicates. (If the unification fails, you made a bad choice of predicates. Go
back to the previous step and try again.)

If any unbound variables which were bound in the unified predicates also occur in other predicates in the two
clauses, replace them with their bound values (terms) there as well.

Discard the unified predicates, and combine the remaining ones from the two clauses into a new clause, also
joined by the "∨" operator.

To apply this rule to the above example, we find the predicate P occurs in negated form

¬P(X)

in the first clause, and in non-negated form

P(a)

in the second clause. X is an unbound variable, while a is a bound value (term). Unifying the two produces the
substitution

X ↦ a

Discarding the unified predicates, and applying this substitution to the remaining predicates (just Q(X), in this case),
produces the conclusion:

Q(a)

For another example, consider the syllogistic form

All Cretans are islanders.


All islanders are liars.
Therefore all Cretans are liars.

Or more generally,

∀X P(X) → Q(X)
∀X Q(X) → R(X)
Therefore, ∀X P(X) → R(X)

In CNF, the antecedents become:

¬P(X) ∨ Q(X)
¬Q(Y) ∨ R(Y)

(Note that the variable in the second clause was renamed to make it clear that variables in different clauses are distinct.)
Now, unifying Q(X) in the first clause with ¬Q(Y) in the second clause means that X and Y become the same variable
anyway. Substituting this into the remaining clauses and combining them gives the conclusion:

¬P(X) ∨ R(X)

The resolution rule, as defined by Robinson, also incorporated factoring, which unifies two literals in the same clause,
before or during the application of resolution as defined above. The resulting inference rule is refutation-complete,[6]
in that a set of clauses is unsatisfiable if and only if there exists a derivation of the empty clause using resolution alone.

226.4 Paramodulation
Paramodulation is a related technique for reasoning on sets of clauses where the predicate symbol is equality. It
generates all equal versions of clauses, except reflexive identities. The paramodulation operation takes a positive
from clause, which must contain an equality literal. It then searches an into clause with a subterm that unifies with one
side of the equality. The subterm is then replaced by the other side of the equality. The general aim of paramodulation
is to reduce the system to atoms, reducing the size of the terms when substituting.[7]

226.5 Implementations
CARINE

Gandalf

Otter

Prover9

SNARK

SPASS

Vampire

226.6 See also


Condensed detachment an earlier version of resolution

Inductive logic programming

Inverse resolution

Logic programming

Method of analytic tableaux

SLD resolution

Resolution inference

226.7 Notes
[1] Martin Davis, Hilary Putnam (1960). A Computing Procedure for Quantification Theory. J. ACM. 7 (3): 201–215.
doi:10.1145/321033.321034. Here: p. 210, III. Rule for Eliminating Atomic Formulas.

[2] J.A. Robinson (Jan 1965). A Machine-Oriented Logic Based on the Resolution Principle. Journal of the ACM. 12 (1):
2341. doi:10.1145/321250.321253.

[3] D.E. Knuth, The Art of Computer Programming 4A: Combinatorial Algorithms, part 1, p. 539

[4] Leitsch, Alexander (1997), The resolution calculus, EATCS Monographs in Theoretical Computer Science, Springer, p.
11, Before applying the inference method itself, we transform the formulas to quantifier-free conjunctive normal form.

[5] Enrique P. Arís, Juan L. González y Fernando M. Rubio, Lógica Computacional, Thomson, (2005).

[6] Stuart J. Russell; Peter Norvig (2009). Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall. p. 350 (=p.286
in the 1st edition of 1995)

[7] Nieuwenhuis, Robert; Rubio, Alberto. Paramodulation-Based Theorem Proving. Handbook of Automated Reasoning.

226.8 References
Robinson, J. Alan (1965). A Machine-Oriented Logic Based on the Resolution Principle. Journal of the
ACM (JACM). 12 (1): 23–41. doi:10.1145/321250.321253.
Leitsch, Alexander (1997). The Resolution Calculus. Springer.

Gallier, Jean H. (1986). Logic for Computer Science: Foundations of Automatic Theorem Proving. Harper &
Row Publishers.
Lee, Chin-Liang Chang, Richard Char-Tung (1987). Symbolic logic and mechanical theorem proving ([reprint]
ed.). San Diego: Academic Press. ISBN 0-12-170350-9.

Approaches to non-clausal resolution, i.e. resolution of first-order formulas that need not be in clausal normal form,
are presented in:

D. Wilkins (1973). QUEST A Non-Clausal Theorem Proving System (Masters Thesis). Univ. of Essex,
England.

Neil V. Murray (Feb 1979). A Proof Procedure for Quantifier-Free Non-Clausal First Order Logic (Technical
report). Syracuse Univ. 2-79. (Cited from Manna, Waldinger, 1980 as: A Proof Procedure for Non-Clausal
First-Order Logic, 1978)

Zohar Manna, Richard Waldinger (Jan 1980). A Deductive Approach to Program Synthesis. ACM Transactions
on Programming Languages and Systems. 2: 90–121. doi:10.1145/357084.357090. A preprint appeared
in Dec 1978 as SRI Technical Note 177
N. V. Murray (1982). Completely Non-Clausal Theorem Proving. Artificial Intelligence. 18: 67–85.
doi:10.1016/0004-3702(82)90011-x.
Schmerl, U.R. (1988). Resolution on Formula-Trees. Acta Informatica. 25: 425438. doi:10.1007/bf02737109.
Summary

226.9 External links


Alex Sakharov. Resolution Principle. MathWorld.

Alex Sakharov. Resolution. MathWorld.


Chapter 227

Resolution inference

In propositional logic, a resolution inference is an instance of the following rule:[1]

Γ₁ ∪ {ℓ}    Γ₂ ∪ {ℓ̄}
--------------------- |ℓ|
      Γ₁ ∪ Γ₂

We call:

The clauses Γ₁ ∪ {ℓ} and Γ₂ ∪ {ℓ̄} are the inference's premises
Γ₁ ∪ Γ₂ (the resolvent of the premises) is its conclusion.
The literal ℓ is the left resolved literal,
The literal ℓ̄ is the right resolved literal,
|ℓ| is the resolved atom or pivot.

This rule can be generalized to first-order logic to:[2]

Γ₁ ∪ {L₁}    Γ₂ ∪ {L₂}
---------------------- φ
     (Γ₁ ∪ Γ₂)φ

where φ is a most general unifier of L₁ and L̄₂ and Γ₁ and Γ₂ have no common variables.

227.1 Example
The rule can be applied to the clauses P(x), Q(x) and ¬P(b), with [b/x] as unifier.
Here x is a variable and b is a constant.

P(x), Q(x)    ¬P(b)
-------------------- [b/x]
       Q(b)
Here we see that

The clauses P(x), Q(x) and ¬P(b) are the inference's premises
Q(b) (the resolvent of the premises) is its conclusion.
The literal P(x) is the left resolved literal,
The literal ¬P(b) is the right resolved literal,
P is the resolved atom or pivot.
[b/x] is the most general unifier of the resolved literals.
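The propositional form of the rule is easy to mechanise. The sketch below (illustrative Python; clauses are encoded
as frozensets of string literals with '~' marking negation, an encoding chosen only for this example) computes every
resolvent of two clauses together with its pivot.

# Illustrative sketch of the propositional resolution inference (not library code).
def complement(literal):
    return literal[1:] if literal.startswith('~') else '~' + literal

def resolvents(gamma1, gamma2):
    """All conclusions Gamma1 ∪ Gamma2 obtainable by resolving on one pivot."""
    out = {}
    for lit in gamma1:
        if complement(lit) in gamma2:
            pivot = lit.lstrip('~')                     # the resolved atom
            out[pivot] = (gamma1 - {lit}) | (gamma2 - {complement(lit)})
    return out

premise1 = frozenset({'p', 'q'})       # p ∨ q
premise2 = frozenset({'~p', 'r'})      # ¬p ∨ r
print(resolvents(premise1, premise2))  # q ∨ r, with pivot p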


227.2 Notes
[1] Fontaine, Pascal; Merz, Stephan; Woltzenlogel Paleo, Bruno. Compression of Propositional Resolution Proofs via Partial
Regularization. 23rd International Conference on Automated Deduction, 2011.

[2] Enrique P. Arís, Juan L. González y Fernando M. Rubio, Lógica Computacional, Thomson, (2005).
Chapter 228

Robbins algebra

In abstract algebra, a Robbins algebra is an algebra containing a single binary operation, usually denoted by ∨, and
a single unary operation usually denoted by ¬. These operations satisfy the following axioms:
For all elements a, b, and c:

1. Associativity: a ∨ (b ∨ c) = (a ∨ b) ∨ c
2. Commutativity: a ∨ b = b ∨ a
3. Robbins equation: ¬(¬(a ∨ b) ∨ ¬(a ∨ ¬b)) = a

For many years, it was conjectured, but unproven, that all Robbins algebras are Boolean algebras. This was proved
in 1996, so the term Robbins algebra is now simply a synonym for Boolean algebra.
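One direction of the equivalence is elementary: every Boolean algebra satisfies the Robbins axioms. The following
Python sketch (an illustrative brute-force check over the two-element Boolean algebra only, not a proof of the general
theorem) verifies this when ∨ is interpreted as Boolean join and ¬ as complement.

# Brute-force check that the two-element Boolean algebra satisfies the Robbins axioms.
from itertools import product

def join(a, b): return a or b      # interpret ∨ as Boolean join
def neg(a):     return not a       # interpret ¬ as Boolean complement

for a, b, c in product([False, True], repeat=3):
    assert join(a, join(b, c)) == join(join(a, b), c)                # associativity
    assert join(a, b) == join(b, a)                                  # commutativity
    assert neg(join(neg(join(a, b)), neg(join(a, neg(b))))) == a     # Robbins equation
print("All three axioms hold on {False, True}.")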

228.1 History
In 1933, Edward Huntington proposed a new set of axioms for Boolean algebras, consisting of (1) and (2) above,
plus:

Huntington's equation: ¬(¬a ∨ b) ∨ ¬(¬a ∨ ¬b) = a.

From these axioms, Huntington derived the usual axioms of Boolean algebra.
Very soon thereafter, Herbert Robbins posed the Robbins conjecture, namely that the Huntington equation could
be replaced with what came to be called the Robbins equation, and the result would still be Boolean algebra: ∨ would
interpret Boolean join and ¬ Boolean complement. Boolean meet and the constants 0 and 1 are easily defined from
the Robbins algebra primitives. Pending verification of the conjecture, the system of Robbins was called Robbins
algebra.
Verifying the Robbins conjecture required proving Huntington's equation, or some other axiomatization of a Boolean
algebra, as theorems of a Robbins algebra. Huntington, Robbins, Alfred Tarski, and others worked on the problem,
but failed to find a proof or counterexample.
William McCune proved the conjecture in 1996, using the automated theorem prover EQP. For a complete proof
of the Robbins conjecture in one consistent notation and following McCune closely, see Mann (2003). Dahn (1998)
simplified McCune's machine proof.

228.2 See also


Boolean algebra
Algebraic structure


228.3 References
Dahn, B. I. (1998) Abstract to "Robbins Algebras Are Boolean: A Revision of McCune's Computer-Generated
Solution of Robbins Problem," Journal of Algebra 208(2): 526–32.
Mann, Allen (2003) "A Complete Proof of the Robbins Conjecture."

William McCune, "Robbins Algebras Are Boolean," With links to proofs and other papers.
Chapter 229

Rule of inference

In logic, a rule of inference, inference rule or transformation rule is a logical form consisting of a function which
takes premises, analyzes their syntax, and returns a conclusion (or conclusions). For example, the rule of inference
called modus ponens takes two premises, one in the form If p then q and another in the form p, and returns the
conclusion q. The rule is valid with respect to the semantics of classical logic (as well as the semantics of many other
non-classical logics), in the sense that if the premises are true (under an interpretation), then so is the conclusion.
Typically, a rule of inference preserves truth, a semantic property. In many-valued logic, it preserves a general
designation. But a rule of inference's action is purely syntactic, and does not need to preserve any semantic property:
any function from sets of formulae to formulae counts as a rule of inference. Usually only rules that are recursive
are important; i.e. rules such that there is an effective procedure for determining whether any given formula is the
conclusion of a given set of formulae according to the rule. An example of a rule that is not effective in this sense is
the infinitary ω-rule.[1]
Popular rules of inference in propositional logic include modus ponens, modus tollens, and contraposition. First-order
predicate logic uses rules of inference to deal with logical quantifiers.

229.1 The standard form of rules of inference


In formal logic (and many related areas), rules of inference are usually given in the following standard form:
Premise#1
Premise#2
...
Premise#n
----------
Conclusion
This expression states that whenever in the course of some logical derivation the given premises have been obtained,
the specified conclusion can be taken for granted as well. The exact formal language that is used to describe both
premises and conclusions depends on the actual context of the derivations. In a simple case, one may use logical
formulae, such as in:

A → B
A
------
B
This is the modus ponens rule of propositional logic. Rules of inference are often formulated as schemata employing
metavariables.[2] In the rule (schema) above, the metavariables A and B can be instantiated to any element of the
universe (or sometimes, by convention, a restricted subset such as propositions) to form an infinite set of inference
rules.
A proof system is formed from a set of rules chained together to form proofs, also called derivations. Any derivation
has only one final conclusion, which is the statement proved or derived. If premises are left unsatisfied in the derivation,
then the derivation is a proof of a hypothetical statement: "if the premises hold, then the conclusion holds."
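Because a rule of inference acts purely on syntax, it can be modelled as a (partial) function on formulas. The Python
sketch below (illustrative; the tuple encoding of formulas is an assumption made for this example) implements modus
ponens and rejects premise pairs that do not instantiate the schema.

# Formulas as nested tuples: a propositional letter is a string,
# an implication A → B is ('->', A, B).  (Encoding chosen for this sketch.)

def modus_ponens(premise1, premise2):
    """From A → B and A, return B; raise if the premises do not match the schema."""
    if isinstance(premise1, tuple) and premise1[0] == '->' and premise1[1] == premise2:
        return premise1[2]
    raise ValueError("premises do not instantiate the modus ponens schema")

a_implies_b = ('->', 'p', 'q')
print(modus_ponens(a_implies_b, 'p'))   # 'q'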


229.2 Axiom schemas and axioms

Inference rules may also be stated in this form: (1) zero or more premises, (2) a turnstile symbol ⊢, which means
infers, proves, or concludes, and (3) a conclusion. This form usually embodies the relational (as opposed to
functional) view of a rule of inference, where the turnstile stands for a deducibility relation holding between premises
and conclusion.
An inference rule containing no premises is called an axiom schema or, if it contains no metavariables, simply an
axiom.[2]
Rules of inference must be distinguished from axioms of a theory. In terms of semantics, axioms are valid assertions.
Axioms are usually regarded as starting points for applying rules of inference and generating a set of conclusions. Or,
in less technical terms:
Rules are statements about the system, axioms are statements in the system. For example:

The rule that from p we can infer Provable(p) is a statement that says if you've proven p , it follows that p
is provable. This rule holds in Peano arithmetic, for example.

The axiom p → Provable(p) would mean that every true statement is provable. This axiom does not hold in
Peano arithmetic.

Rules of inference play a vital role in the specification of logical calculi as they are considered in proof theory, such
as the sequent calculus and natural deduction.

229.3 Example: Hilbert systems for two propositional logics

In a Hilbert system, the premises and conclusion of the inference rules are simply formulae of some language, usually
employing metavariables. For graphical compactness of the presentation and to emphasize the distinction between
axioms and rules of inference, this section uses the sequent notation (⊢) instead of a vertical presentation of rules.
The formal language for classical propositional logic can be expressed using just negation (¬), implication (→) and
propositional symbols. A well-known axiomatization, comprising three axiom schemata and one inference rule
(modus ponens), is:
(CA1) ⊢ A → (B → A)
(CA2) ⊢ (A → (B → C)) → ((A → B) → (A → C))
(CA3) ⊢ (¬A → ¬B) → (B → A)
(MP) A, A → B ⊢ B
It may seem redundant to have two notions of inference in this case, ⊢ and →. In classical propositional logic, they
indeed coincide; the deduction theorem states that A ⊢ B if and only if ⊢ A → B. There is however a distinction worth
emphasizing even in this case: the first notation describes a deduction, that is an activity of passing from sentences to
sentences, whereas A → B is simply a formula made with a logical connective, implication in this case. Without an
inference rule (like modus ponens in this case), there is no deduction or inference. This point is illustrated in Lewis
Carroll's dialogue called "What the Tortoise Said to Achilles".[3]
For some non-classical logics, the deduction theorem does not hold. For example, the three-valued logic Ł3 of
Łukasiewicz can be axiomatized as:[4]
(CA1) ⊢ A → (B → A)
(LA2) ⊢ (A → B) → ((B → C) → (A → C))
(CA3) ⊢ (¬A → ¬B) → (B → A)
(LA4) ⊢ ((A → ¬A) → A) → A
(MP) A, A → B ⊢ B
This sequence differs from classical logic by the change in axiom 2 and the addition of axiom 4. The classical
deduction theorem does not hold for this logic, however a modified form does hold, namely A ⊢ B if and only if
⊢ A → (A → B).[5]
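To see the schemata and modus ponens at work, the following Python sketch (illustrative; the tuple encoding and
helper names are assumptions of this example) builds a standard five-step Hilbert-style derivation of A → A from
CA1, CA2 and MP.

# Formulas as nested tuples: ('->', A, B) is A → B.  Helpers construct axiom instances.
def imp(a, b):  return ('->', a, b)

def ca1(a, b):        # CA1: A → (B → A)
    return imp(a, imp(b, a))

def ca2(a, b, c):     # CA2: (A → (B → C)) → ((A → B) → (A → C))
    return imp(imp(a, imp(b, c)), imp(imp(a, b), imp(a, c)))

def mp(p1, p2):       # modus ponens: from A → B and A, conclude B
    assert p1[0] == '->' and p1[1] == p2, "premises do not match"
    return p1[2]

A = 'A'
step1 = ca2(A, imp(A, A), A)        # (A → ((A→A) → A)) → ((A → (A→A)) → (A → A))
step2 = ca1(A, imp(A, A))           # A → ((A→A) → A)
step3 = mp(step1, step2)            # (A → (A→A)) → (A → A)
step4 = ca1(A, A)                   # A → (A→A)
step5 = mp(step3, step4)            # A → A
print(step5)                        # ('->', 'A', 'A')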

229.4 Admissibility and derivability


Main article: Admissible rule

In a set of rules, an inference rule could be redundant in the sense that it is admissible or derivable. A derivable
rule is one whose conclusion can be derived from its premises using the other rules. An admissible rule is one
whose conclusion holds whenever the premises hold. All derivable rules are admissible. To appreciate the difference,
consider the following set of rules for defining the natural numbers (the judgment n nat asserts the fact that n is a
natural number):

--------
 0 nat

 n nat
---------
s(n) nat

The first rule states that 0 is a natural number, and the second states that s(n) is a natural number if n is. In this proof
system, the following rule, demonstrating that the second successor of a natural number is also a natural number, is
derivable:

  n nat
-----------
s(s(n)) nat

Its derivation is the composition of two uses of the successor rule above. The following rule for asserting the existence
of a predecessor for any nonzero number is merely admissible:

s(n) nat
--------
 n nat
This is a true fact of natural numbers, as can be proven by induction. (To prove that this rule is admissible, assume a
derivation of the premise and induct on it to produce a derivation of n nat .) However, it is not derivable, because it
depends on the structure of the derivation of the premise. Because of this, derivability is stable under additions to the
proof system, whereas admissibility is not. To see the difference, suppose the following nonsense rule were added to
the proof system:

---------
s(−3) nat

In this new system, the double-successor rule is still derivable. However, the rule for finding the predecessor is no
longer admissible, because there is no way to derive −3 nat. The brittleness of admissibility comes from the way it
is proved: since the proof can induct on the structure of the derivations of the premises, extensions to the system add
new cases to this proof, which may no longer hold.
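The contrast can be made concrete by representing derivations explicitly. In the Python sketch below (an illustrative
encoding chosen for this example, with judgments as strings and derivations as tuples), the double-successor rule is
literally a composition of two uses of the successor rule, so it survives any extension of the system, while the
predecessor rule has to take the given derivation apart and therefore breaks when new ways of deriving s(...) nat
are added.

# Derivations of the judgment "n nat" as explicit trees: (conclusion, list of premises).
def zero_rule():
    return ('0 nat', [])                       # axiom: no premises

def succ_rule(deriv):
    term = deriv[0][:-len(' nat')]             # extract "n" from "n nat"
    return ('s(' + term + ') nat', [deriv])

# Derivable: its "derivation" is just two applications of the successor rule.
def double_succ_rule(deriv):
    return succ_rule(succ_rule(deriv))

# Merely admissible: it must inspect the derivation it is handed.
def pred_rule(deriv):
    conclusion, premises = deriv
    assert conclusion.startswith('s(') and premises, "no sub-derivation to return"
    return premises[0]

two = double_succ_rule(zero_rule())
print(two[0])                  # s(s(0)) nat
print(pred_rule(two)[0])       # s(0) nat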
Admissible rules can be thought of as theorems of a proof system. For instance, in a sequent calculus where cut
elimination holds, the cut rule is admissible.

229.5 See also


Immediate inference

Inference objection

Law of thought

List of rules of inference

Logical truth

Structural rule

229.6 References
[1] Boolos, George; Burgess, John; Jeffrey, Richard C. (2007). Computability and logic. Cambridge: Cambridge University
Press. p. 364. ISBN 0-521-87752-0.

[2] John C. Reynolds (2009) [1998]. Theories of Programming Languages. Cambridge University Press. p. 12. ISBN 978-0-
521-10697-9.

[3] Kosta Dosen (1996). Logical consequence: a turn in style. In Maria Luisa Dalla Chiara; Kees Doets; Daniele Mundici;
Johan van Benthem. Logic and Scientific Methods: Volume One of the Tenth International Congress of Logic, Methodology
and Philosophy of Science, Florence, August 1995. Springer. p. 290. ISBN 978-0-7923-4383-7. preprint (with different
pagination)

[4] Bergmann, Merrie (2008). An introduction to many-valued and fuzzy logic: semantics, algebras, and derivation systems.
Cambridge University Press. p. 100. ISBN 978-0-521-88128-9.

[5] Bergmann, Merrie (2008). An introduction to many-valued and fuzzy logic: semantics, algebras, and derivation systems.
Cambridge University Press. p. 114. ISBN 978-0-521-88128-9.
Chapter 230

Rule of replacement

In logic, a rule of replacement[1][2][3] is a transformation rule that may be applied to only a particular segment
of an expression. A logical system may be constructed so that it uses either axioms, rules of inference, or both as
transformation rules for logical expressions in the system. Whereas a rule of inference is always applied to a whole
logical expression, a rule of replacement may be applied to only a particular segment. Within the context of a logical
proof, logically equivalent expressions may replace each other. Rules of replacement are used in propositional logic
to manipulate propositions.
Common rules of replacement include de Morgan's laws, commutation, association, distribution, double negation,[4]
transposition, material implication, material equivalence, exportation, and tautology.

230.1 References
[1] Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall.

[2] Hurley, Patrick (1991). A Concise Introduction to Logic 4th edition. Wadsworth Publishing.

[3] Moore and Parker

[4] not admitted in intuitionistic logic

Chapter 231

Structural rule

For the type of rule used in linguistics, see Phrase structure rule.

In proof theory, a structural rule is an inference rule that does not refer to any logical connective, but instead operates
on the judgment or sequents directly. Structural rules often mimic intended meta-theoretic properties of the logic.
Logics that deny one or more of the structural rules are classified as substructural logics.

231.1 Common structural rules


Three common structural rules are:

Weakening, where the hypotheses or conclusion of a sequent may be extended with additional members. In
symbolic form (writing the premise of each rule before the slash and its conclusion after it), weakening rules can be
written as Γ ⊢ Σ / Γ, A ⊢ Σ on the left of the turnstile, and Γ ⊢ Σ / Γ ⊢ Σ, A on the right.

Contraction, where two equal (or unifiable) members on the same side of a sequent may be replaced by a
single member (or common instance). Symbolically: Γ, A, A ⊢ Σ / Γ, A ⊢ Σ and Γ ⊢ A, A, Σ / Γ ⊢ A, Σ. Also known as factoring in
automated theorem proving systems using resolution. Known as idempotency of entailment in classical logic.

Exchange, where two members on the same side of a sequent may be swapped. Symbolically:
Γ₁, A, Γ₂, B, Γ₃ ⊢ Σ / Γ₁, B, Γ₂, A, Γ₃ ⊢ Σ
and Σ ⊢ Γ₁, A, Γ₂, B, Γ₃ / Σ ⊢ Γ₁, B, Γ₂, A, Γ₃. (This is also known as the permutation rule.)

A logic without any of the above structural rules would interpret the sides of a sequent as pure sequences; with
exchange, they are multisets; and with both contraction and exchange they are sets.
These are not the only possible structural rules. A famous structural rule is known as cut. Considerable effort is spent
by proof theorists in showing that cut rules are superfluous in various logics. More precisely, what is shown is that cut
is only (in a sense) a tool for abbreviating proofs, and does not add to the theorems that can be proved. The successful
'removal' of cut rules, known as cut elimination, is directly related to the philosophy of computation as normalization
(see Curry–Howard correspondence); it often gives a good indication of the complexity of deciding a given logic.

231.2 See also


Affine logic
Linear logic

Ordered logic
Strict logic

Separation logic

Chapter 232

Scope (logic)

In logic, the scope of a quantifier or a quantification is the part of the formula to which the quantifier applies. It
is written right after the quantifier, often in parentheses. Some authors describe this as including the variable written right
after the universal or existential quantifier. In the formula ∃xP, for example, P (or xP [1] ) is the scope of the quantifier ∃x (or ∃).
A variable in the formula is free if and only if it does not occur in the scope of any quantifier for that variable. A term
is free for a variable in the formula (i.e. free to substitute for that variable where it occurs free) if and only if that variable
does not occur free in the scope of any quantifier for any variable in the term.

232.1 See also


Modal scope fallacy

232.2 Notes
[1] Bell, John L.; Machover, Moshé (April 15, 2007). Chapter 1. Beginning mathematical logic. A Course in Mathematical
Logic. Elsevier Science Ltd. p. 17. ISBN 978-0-7204-2844-5.

Chapter 233

Second-order predicate

In mathematical logic, a second-order predicate is a predicate that takes a first-order predicate as an argument.[1]
Compare higher-order predicate.
The idea of second-order predication was introduced by the German mathematician and philosopher Frege. It is
based on his idea that a predicate such as "is a philosopher" designates a concept, rather than an object.[2] Sometimes
a concept can itself be the subject of a proposition, such as in "There are no Bosnian philosophers." In this case,
we are not saying anything of any Bosnian philosophers, but of the concept "is a Bosnian philosopher" that it is not
satisfied. Thus the predicate "is not satisfied" attributes something to the concept "is a Bosnian philosopher", and is
thus a second-level predicate.
This idea is the basis of Frege's theory of number.[3]

233.1 References
[1] Yaqub, Aladdin M. (2013), An Introduction to Logical Theory, Broadview Press, p. 288, ISBN 9781551119939.

[2] Oppy, Graham (2007), Ontological Arguments and Belief in God, Cambridge University Press, p. 145, ISBN 9780521039000.

[3] Kremer, Michael (1985), Frege's theory of number and the distinction between function and object, Philosophical Studies,
47 (3): 313–323, MR 788101, doi:10.1007/BF00355206.

Chapter 234

Second-order propositional logic

A second-order propositional logic is a propositional logic extended with quantification over propositions. A special
case are the logics that allow second-order Boolean propositions, where quantifiers may range either just over the
Boolean truth values, or over the Boolean-valued truth functions.
The most widely known formalism is the intuitionistic logic with impredicative quantification, system F. Parigot
(1997) showed how this calculus can be extended to admit classical logic.

234.1 See also


Boolean satisability problem
Second-order arithmetic

Second-order logic
Type theory

234.2 References
Parigot, Michel (1997). Proofs of strong normalisation for second order classical natural deduction. Journal of
Symbolic Logic 62(4): 1461–1479.

Chapter 235

Sentence (mathematical logic)

This article is a technical mathematical article in the area of predicate logic. For the ordinary English
language meaning see Sentence (linguistics), for a less technical introductory article see Statement (logic).

In mathematical logic, a sentence of a predicate logic is a boolean-valued well-formed formula with no free variables.
A sentence can be viewed as expressing a proposition, something that must be true or false. The restriction of having
no free variables is needed to make sure that sentences can have concrete, fixed truth values: as the free variables of
a (general) formula can range over several values, the truth value of such a formula may vary.
Sentences without any logical connectives or quantifiers in them are known as atomic sentences, by analogy to atomic
formulas. Sentences are then built up out of atomic formulas by applying connectives and quantifiers.
A set of sentences is called a theory; thus, individual sentences may be called theorems. To properly evaluate the
truth (or falsehood) of a sentence, one must make reference to an interpretation of the theory. For first-order theories,
interpretations are commonly called structures. Given a structure or interpretation, a sentence will have a fixed truth
value. A theory is satisfiable when there is some interpretation under which all of its sentences are true. The study of algorithms
to automatically discover interpretations of theories that render all sentences as being true is known as the satisfiability modulo theories problem.

235.1 Example
The following example is in rst-order logic.

∀y ∃x (x² = y)

is a sentence. This sentence is true in the positive real numbers ℝ⁺, false in the real numbers ℝ, and true in the complex
numbers ℂ. (In plain English, this sentence is interpreted to mean that every member of the structure concerned is
the square of a member of that particular structure.) On the other hand, the formula

∃x (x² = y)

is not a sentence, because of the presence of the free variable y. In the structure of the real numbers, this formula
is true if we substitute (arbitrarily) y = 2, but is false if y = −2. (It is the presence of a free variable, rather than the
inconstant truth value, that matters; for example, even in the structure of the complex numbers, where the statement
is always true, it is still not considered a sentence.) Such a formula may be called a predicate instead.
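The dependence of a sentence's truth value on the chosen structure can be checked directly in small finite structures.
The Python sketch below (illustrative; it evaluates the sentence in the integers modulo n rather than in ℝ or ℂ)
computes the truth value of ∀y ∃x (x² = y).

# Evaluate the sentence ∀y ∃x (x·x = y) in the finite structure Z_n (integers mod n).
def every_element_is_a_square(n):
    universe = range(n)
    return all(any((x * x) % n == y for x in universe) for y in universe)

print(every_element_is_a_square(2))   # True:  0 and 1 are both squares mod 2
print(every_element_is_a_square(3))   # False: 2 is not a square mod 3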

235.2 See also


Ground expression
Open sentence


Statement (logic)

Proposition

235.3 References
Hinman, P. (2005). Fundamentals of Mathematical Logic. A K Peters. ISBN 1-56881-262-0.

Rautenberg, Wolfgang (2010), A Concise Introduction to Mathematical Logic (3rd ed.), New York: Springer
Science+Business Media, ISBN 978-1-4419-1220-6, doi:10.1007/978-1-4419-1221-3.
Chapter 236

Sequential composition

In computer science, the process calculi (or process algebras) are a diverse family of related approaches for formally
modelling concurrent systems. Process calculi provide a tool for the high-level description of interactions,
communications, and synchronizations between a collection of independent agents or processes. They also provide
algebraic laws that allow process descriptions to be manipulated and analyzed, and permit formal reasoning about
equivalences between processes (e.g., using bisimulation). Leading examples of process calculi include CSP, CCS,
ACP, and LOTOS.[1] More recent additions to the family include the π-calculus, the ambient calculus, PEPA, the
fusion calculus and the join-calculus.

236.1 Essential features


While the variety of existing process calculi is very large (including variants that incorporate stochastic behaviour,
timing information, and specializations for studying molecular interactions), there are several features that all process
calculi have in common:[2]

Representing interactions between independent processes as communication (message-passing), rather than as


modification of shared variables.

Describing processes and systems using a small collection of primitives, and operators for combining those
primitives.

Defining algebraic laws for the process operators, which allow process expressions to be manipulated using
equational reasoning.

236.2 Mathematics of processes


To define a process calculus, one starts with a set of names (or channels) whose purpose is to provide means of
communication. In many implementations, channels have rich internal structure to improve efficiency, but this is
abstracted away in most theoretic models. In addition to names, one needs a means to form new processes from old
ones. The basic operators, always present in some form or other, allow:[3]

parallel composition of processes

specification of which channels to use for sending and receiving data

sequentialization of interactions

hiding of interaction points

recursion or process replication


236.2.1 Parallel composition

Parallel composition of two processes P and Q, usually written P | Q, is the key primitive distinguishing the process
calculi from sequential models of computation. Parallel composition allows computation in P and Q to proceed
simultaneously and independently. But it also allows interaction, that is synchronisation and flow of information from
P to Q (or vice versa) on a channel shared by both. Crucially, an agent or process can be connected to more than one
channel at a time.
Channels may be synchronous or asynchronous. In the case of a synchronous channel, the agent sending a message
waits until another agent has received the message. Asynchronous channels do not require any such synchronization.
In some process calculi (notably the π-calculus) channels themselves can be sent in messages through (other) channels,
allowing the topology of process interconnections to change. Some process calculi also allow channels to be created
during the execution of a computation.

236.2.2 Communication

Interaction can be (but isn't always) a directed flow of information. That is, input and output can be distinguished as
dual interaction primitives. Process calculi that make such distinctions typically define an input operator (e.g. x(v))
and an output operator (e.g. x̄⟨y⟩), both of which name an interaction point (here x) that is used to synchronise
with a dual interaction primitive.
Should information be exchanged, it will flow from the outputting to the inputting process. The output primitive will
specify the data to be sent. In x̄⟨y⟩, this data is y. Similarly, if an input expects to receive data, one or more bound
variables will act as place-holders to be substituted by data, when it arrives. In x(v), v plays that role. The choice of
the kind of data that can be exchanged in an interaction is one of the key features that distinguishes different process
calculi.

236.2.3 Sequential composition

Sometimes interactions must be temporally ordered. For example, it might be desirable to specify algorithms such as:
first receive some data on x and then send that data on y. Sequential composition can be used for such purposes. It is
well known from other models of computation. In process calculi, the sequentialisation operator is usually integrated
with input or output, or both. For example, the process x(v)·P will wait for an input on x. Only when this input
has occurred will the process P be activated, with the received data through x substituted for identifier v.

236.2.4 Reduction semantics

The key operational reduction rule, containing the computational essence of process calculi, can be given solely in
terms of parallel composition, sequentialization, input, and output. The details of this reduction vary among the
calculi, but the essence remains roughly the same. The reduction rule is:

x̄⟨y⟩·P | x(v)·Q → P | Q[y/v]

The interpretation of this reduction rule is:

1. The process x̄⟨y⟩·P sends a message, here y, along the channel x. Dually, the process x(v)·Q receives that
message on channel x.

2. Once the message has been sent, x̄⟨y⟩·P becomes the process P, while x(v)·Q becomes the process Q[y/v],
which is Q with the place-holder v substituted by y, the data received on x.

The class of processes that P is allowed to range over as the continuation of the output operation substantially influences
the properties of the calculus.
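The rule can be animated with a few lines of code. The following Python sketch (illustrative only; the tuple
representation of processes is an assumption of this example and is not the syntax of any particular calculus)
synchronises one output with one input on the same channel and performs the substitution Q[y/v].

# Illustrative sketch of the reduction  x̄⟨y⟩·P | x(v)·Q  →  P | Q[y/v].
# Processes as nested tuples: ('out', channel, datum, continuation),
# ('in', channel, variable, continuation), ('nil',).

def substitute(process, var, datum):
    """Replace the placeholder variable by the received datum throughout Q."""
    kind = process[0]
    if kind == 'nil':
        return process
    if kind == 'out':
        _, ch, d, cont = process
        return ('out',
                datum if ch == var else ch,
                datum if d == var else d,
                substitute(cont, var, datum))
    if kind == 'in':
        _, ch, v, cont = process
        ch = datum if ch == var else ch
        if v == var:                       # an inner binder shadows var
            return ('in', ch, v, cont)
        return ('in', ch, v, substitute(cont, var, datum))

def reduce_pair(sender, receiver):
    """Apply the reduction rule to one output and one input on the same channel."""
    assert sender[0] == 'out' and receiver[0] == 'in' and sender[1] == receiver[1]
    _, _, datum, p_cont = sender
    _, _, var, q_cont = receiver
    return p_cont, substitute(q_cont, var, datum)

# x̄⟨y⟩·0  |  x(v)·v̄⟨z⟩·0   reduces to   0 | ȳ⟨z⟩·0
sender   = ('out', 'x', 'y', ('nil',))
receiver = ('in', 'x', 'v', ('out', 'v', 'z', ('nil',)))
print(reduce_pair(sender, receiver))   # (('nil',), ('out', 'y', 'z', ('nil',)))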

236.2.5 Hiding

Processes do not limit the number of connections that can be made at a given interaction point. But interaction points
allow interference (i.e. interaction). For the synthesis of compact, minimal and compositional systems, the ability to
restrict interference is crucial. Hiding operations allow control of the connections made between interaction points
when composing agents in parallel. Hiding can be denoted in a variety of ways. For example, in the π-calculus the
hiding of a name x in P can be expressed as (ν x)P, while in CSP it might be written as P \ {x}.

236.2.6 Recursion and replication

The operations presented so far describe only finite interaction and are consequently insufficient for full computability,
which includes non-terminating behaviour. Recursion and replication are operations that allow finite descriptions
of infinite behaviour. Recursion is well known from the sequential world. Replication !P can be understood as
abbreviating the parallel composition of a countably infinite number of P processes:

!P = P |!P

236.2.7 Null process

Process calculi generally also include a null process (variously denoted as nil, 0, STOP, δ, or some other appropriate
symbol) which has no interaction points. It is utterly inactive and its sole purpose is to act as the inductive anchor on
top of which more interesting processes can be generated.

236.3 Discrete and continuous process algebra


Process algebra has been studied for discrete time and continuous time (real time or dense time).[4]

236.4 History
In the first half of the 20th century, various formalisms were proposed to capture the informal concept of a computable
function, with μ-recursive functions, Turing machines and the lambda calculus possibly being the best-known
examples today. The surprising fact that they are essentially equivalent, in the sense that they are all encodable into
each other, supports the Church-Turing thesis. Another shared feature is more rarely commented on: they all are
most readily understood as models of sequential computation. The subsequent consolidation of computer science re-
quired a more subtle formulation of the notion of computation, in particular explicit representations of concurrency
and communication. Models of concurrency such as the process calculi, Petri nets in 1962, and the actor model in
1973 emerged from this line of inquiry.
Research on process calculi began in earnest with Robin Milner's seminal work on the Calculus of Communicating
Systems (CCS) during the period from 1973 to 1980. C.A.R. Hoare's Communicating Sequential Processes (CSP)
first appeared in 1978, and was subsequently developed into a full-fledged process calculus during the early 1980s.
There was much cross-fertilization of ideas between CCS and CSP as they developed. In 1982 Jan Bergstra and
Jan Willem Klop began work on what came to be known as the Algebra of Communicating Processes (ACP), and
introduced the term process algebra to describe their work.[1] CCS, CSP, and ACP constitute the three major branches
of the process calculi family: the majority of the other process calculi can trace their roots to one of these three calculi.

236.5 Current research


Various process calculi have been studied and not all of them fit the paradigm sketched here. The most prominent
example may be the ambient calculus. This is to be expected as process calculi are an active field of study. Currently
research on process calculi focuses on the following problems.

Developing new process calculi for better modeling of computational phenomena.

Finding well-behaved subcalculi of a given process calculus. This is valuable because (1) most calculi are fairly
wild in the sense that they are rather general and not much can be said about arbitrary processes; and (2)
computational applications rarely exhaust the whole of a calculus. Rather they use only processes that are very
constrained in form. Constraining the shape of processes is mostly studied by way of type systems.

Logics for processes that allow one to reason about (essentially) arbitrary properties of processes, following the
ideas of Hoare logic.

Behavioural theory: what does it mean for two processes to be the same? How can we decide whether two
processes are different or not? Can we find representatives for equivalence classes of processes? Generally,
processes are considered to be the same if no context, that is other processes running in parallel, can detect a
difference. Unfortunately, making this intuition precise is subtle and mostly yields unwieldy characterisations of
equality (which in most cases must also be undecidable, as a consequence of the halting problem). Bisimulations
are a technical tool that aids reasoning about process equivalences.

Expressivity of calculi. Programming experience shows that certain problems are easier to solve in some
languages than in others. This phenomenon calls for a more precise characterisation of the expressivity of
calculi modeling computation than that afforded by the Church-Turing thesis. One way of doing this is to
consider encodings between two formalisms and see what properties encodings can potentially preserve. The
more properties can be preserved, the more expressive the target of the encoding is said to be. For process
calculi, the celebrated results are that the synchronous π-calculus is more expressive than its asynchronous
variant, has the same expressive power as the higher-order π-calculus, but is less than the ambient calculus.

Using process calculus to model biological systems (stochastic π-calculus, BioAmbients, Beta Binders, BioPEPA,
Brane calculus). It is thought by some that the compositionality offered by process-theoretic tools can help
biologists to organise their knowledge more formally.

236.6 Software implementations


The ideas behind process algebra have given rise to several tools including:

CADP

Concurrency Workbench

mCRL2 toolset

236.7 Relationship to other models of concurrency


The history monoid is the free object that is generically able to represent the histories of individual communicating
processes. A process calculus is then a formal language imposed on a history monoid in a consistent fashion.[5] That
is, a history monoid can only record a sequence of events, with synchronization, but does not specify the allowed
state transitions. Thus, a process calculus is to a history monoid what a formal language is to a free monoid (a formal
language is a subset of the set of all possible finite-length strings of an alphabet generated by the Kleene star).
The use of channels for communication is one of the features distinguishing the process calculi from other models of
concurrency, such as Petri nets and the Actor model (see Actor model and process calculi). One of the fundamental
motivations for including channels in the process calculi was to enable certain algebraic techniques, thereby making
it easier to reason about processes algebraically.

236.8 See also


Stochastic probe

236.9 References
[1] Baeten, J.C.M. (2004). A brief history of process algebra (PDF). Rapport CSR 04-02. Vakgroep Informatica, Technische
Universiteit Eindhoven.

[2] Pierce, Benjamin. Foundational Calculi for Programming Languages. The Computer Science and Engineering Handbook.
CRC Press. pp. 21902207. ISBN 0-8493-2909-4.

[3] Baeten, J.C.M.; Bravetti, M. (August 2005). A Generic Process Algebra. Algebraic Process Calculi: The First Twenty
Five Years and Beyond (BRICS Notes Series NS-05-3). Bertinoro, Forlì, Italy: BRICS, Department of Computer Science,
University of Aarhus. Retrieved 2007-12-29.

[4] Baeten, J. C. M.; Middelburg, C. A. Process algebra with timing: Real time and discrete time. CiteSeerX 10.1.1.42.729
.

[5] Mazurkiewicz, Antoni (1995). Introduction to Trace Theory. In Diekert, V.; Rozenberg, G. The Book of Traces
(PostScript). Singapore: World Scientific. pp. 3–41. ISBN 981-02-2058-8.

236.10 Further reading


Matthew Hennessy: Algebraic Theory of Processes, The MIT Press, ISBN 0-262-08171-7.
C. A. R. Hoare: Communicating Sequential Processes, Prentice Hall, ISBN 0-13-153289-8.

This book has been updated by Jim Davies at the Oxford University Computing Laboratory and the new
edition is available for download as a PDF file at the Using CSP website.

Robin Milner: A Calculus of Communicating Systems, Springer Verlag, ISBN 0-387-10235-3.


Robin Milner: Communicating and Mobile Systems: the Pi-Calculus, Springer Verlag, ISBN 0-521-65869-1.

Andrew Mironov: Theory of processes


Chapter 237

Sheffer stroke

Venn diagram of A ↑ B

In Boolean functions and propositional calculus, the Sheffer stroke, named after Henry M. Sheffer, written "|" (see
vertical bar, not to be confused with "||" which is often used to represent disjunction), "Dpq", or "↑" (an upwards
arrow), denotes a logical operation that is equivalent to the negation of the conjunction operation, expressed in ordinary
language as "not both". It is also called nand ("not and") or the alternative denial, since it says in effect that at least
one of its operands is false. In Boolean algebra and digital electronics it is known as the NAND operation.
Like its dual, the NOR operator (also known as the Peirce arrow or Quine dagger), NAND can be used by itself,
without any other logical operator, to constitute a logical formal system (making NAND functionally complete). This
property makes the NAND gate crucial to modern digital electronics, including its use in computer processor design.


237.1 Definition
The NAND operation is a logical operation on two logical values. It produces a value of true, if and only if at
least one of the propositions is false.

237.1.1 Truth table


The truth table of A NAND B (also written as A | B, Dpq, or A ↑ B) is as follows:

A      B      A ↑ B
T      T      F
T      F      T
F      T      T
F      F      T

237.2 History
The stroke is named after Henry M. Sheffer, who in 1913 published a paper in the Transactions of the American
Mathematical Society (Sheffer 1913) providing an axiomatization of Boolean algebras using the stroke, and proved
its equivalence to a standard formulation thereof by Huntington employing the familiar operators of propositional
logic (and, or, not). Because of self-duality of Boolean algebras, Sheffer's axioms are equally valid for either of the
NAND or NOR operations in place of the stroke. Sheffer interpreted the stroke as a sign for nondisjunction (NOR)
in his paper, mentioning non-conjunction only in a footnote and without a special sign for it. It was Jean Nicod who
first used the stroke as a sign for non-conjunction (NAND) in a paper of 1917 and which has since become current
practice.[1] Russell and Whitehead used the Sheffer stroke in the 1927 second edition of Principia Mathematica and
suggested it as a replacement for the "or" and "not" operations of the first edition.
Charles Sanders Peirce (1880) had discovered the functional completeness of NAND or NOR more than 30 years
earlier, using the term ampheck (for 'cutting both ways'), but he never published his finding.

237.3 Properties
NAND does not possess any of the following five properties, each of which is required to be absent from, and the
absence of all of which is sufficient for, at least one member of a set of functionally complete operators: truth-
preservation, falsity-preservation, linearity, monotonicity, self-duality. (An operator is truth- (falsity-) preserving
if its value is truth (falsity) whenever all of its arguments are truth (falsity).) Therefore {NAND} is a functionally
complete set.
This can also be realized as follows: All three elements of the functionally complete set {AND, OR, NOT} can be
constructed using only NAND. Thus the set {NAND} must be functionally complete as well.
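These constructions can be checked exhaustively. The Python sketch below (illustrative) defines NOT, AND and OR
from NAND alone and compares them against the built-in Boolean operators on all inputs.

# Verify that NOT, AND and OR can be expressed with NAND alone.
from itertools import product

def nand(a, b):
    return not (a and b)

def not_(a):      return nand(a, a)                       # ¬A     = A ↑ A
def and_(a, b):   return nand(nand(a, b), nand(a, b))     # A ∧ B  = (A ↑ B) ↑ (A ↑ B)
def or_(a, b):    return nand(nand(a, a), nand(b, b))     # A ∨ B  = (A ↑ A) ↑ (B ↑ B)

for a, b in product([False, True], repeat=2):
    assert not_(a) == (not a)
    assert and_(a, b) == (a and b)
    assert or_(a, b) == (a or b)
print("NOT, AND and OR all recovered from NAND alone.")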

237.4 Introduction, elimination, and equivalencies


The Sheffer stroke is the negation of the conjunction: P ↑ Q ⇔ ¬(P ∧ Q).
Expressed in terms of NAND ↑, the usual operators of propositional logic are:

¬P ⇔ P ↑ P
P ∧ Q ⇔ (P ↑ Q) ↑ (P ↑ Q)
P ∨ Q ⇔ (P ↑ P) ↑ (Q ↑ Q)
P → Q ⇔ P ↑ (Q ↑ Q)

237.5 Formal system based on the Sheffer stroke


The following is an example of a formal system based entirely on the Sheffer stroke, yet having the functional
expressiveness of the propositional logic:

237.5.1 Symbols
pn for natural numbers n
(|)
The Sheffer stroke commutes but does not associate (e.g., (T|T)|F = T, but T|(T|F) = F). Hence any formal system
including the Sheffer stroke must also include a means of indicating grouping. We shall employ '(' and ')' to this effect.

We also write p, q, r, … instead of p0, p1, p2, ….

237.5.2 Syntax
Construction Rule I: For each natural number n, the symbol pn is a well-formed formula (wff), called an atom.
Construction Rule II: If X and Y are wffs, then (X|Y) is a wff.
Closure Rule: Any formulae which cannot be constructed by means of the first two Construction Rules are not wffs.
The letters U, V, W, X, and Y are metavariables standing for wffs.
A decision procedure for determining whether a formula is well-formed goes as follows: deconstruct the formula
by applying the Construction Rules backwards, thereby breaking the formula into smaller subformulae. Then repeat
this recursive deconstruction process to each of the subformulae. Eventually the formula should be reduced to its
atoms, but if some subformula cannot be so reduced, then the formula is not a wff.
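The decision procedure can be written out directly. The following Python sketch (illustrative; it accepts strings such
as "(p|(q|r))" and, as a simplification, treats any single lowercase letter as an atom in place of the indexed atoms
p0, p1, ...) deconstructs a candidate formula recursively.

# Recursive well-formedness check for the Sheffer-stroke language.
def is_wff(s):
    if len(s) == 1:
        return s.isalpha() and s.islower()        # Construction Rule I: an atom
    if not (s.startswith('(') and s.endswith(')')):
        return False                              # Closure Rule: nothing else is a wff
    inner, depth = s[1:-1], 0
    for i, ch in enumerate(inner):                # find the top-level stroke
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
        elif ch == '|' and depth == 0:
            return is_wff(inner[:i]) and is_wff(inner[i+1:])   # Construction Rule II
    return False

print(is_wff('(p|(q|r))'))    # True
print(is_wff('(p|q|r)'))      # False: the stroke is strictly binary
print(is_wff('p|q'))          # False: missing the enclosing parentheses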

237.5.3 Calculus
All wffs of the form

((U|(V|W))|((Y|(Y|Y))|((X|V)|((U|X)|(U|X)))))

are axioms. Instances of

(U|(V|W)), U ⊢ W

are inference rules.

237.5.4 Simplification
Since the only connective of this logic is |, the symbol | could be discarded altogether, leaving only the parentheses to
group the letters. A pair of parentheses must always enclose a pair of wffs. Examples of theorems in this simplified
notation are

(p(p(q(q((pq)(pq)))))),

(p(p((qq)(pp)))).

The notation can be simplified further, by letting

(U) := (UU)
((U)) ≡ U

for any U. This simplification causes the need to change some rules:

1. More than two letters are allowed within parentheses.

2. Letters or ws within parentheses are allowed to commute.

3. Repeated letters or ws within a same set of parentheses can be eliminated.

The result is a parenthetical version of the Peirce existential graphs.


Another way to simplify the notation is to eliminate parenthesis by using Polish Notation. For example, the earlier
examples with only parenthesis could be rewritten using only strokes as follows

(p(p(q(q((pq)(pq)))))) becomes
|p|p|q|q||pq|pq, and

(p(p((qq)(pp)))) becomes
|p|p||qq|pp.

This follows the same rules as the parenthesis version, with opening parenthesis replaced with a Sheffer stroke and
the (redundant) closing parenthesis removed.

237.6 See also


List of logic symbols

AND gate

Boolean domain

CMOS

Gate equivalent (GE)

Laws of Form

Logic gate

Logical graph

NAND Flash Memory

NAND logic

NAND gate

NOR gate

NOT gate

OR gate

Peirce's law

Peirce arrow = NOR

Propositional logic

Sole sucient operator

XOR gate

Wolfram axiom

237.7 Notes
[1] Church (1956:134)

237.8 References
Bocheński, Józef Maria (1960), Précis of Mathematical Logic, rev., Albert Menne, edited and translated from
the French and German editions by Otto Bird, Dordrecht, South Holland: D. Reidel.
Church, Alonzo, (1956) Introduction to mathematical logic, Vol. 1, Princeton: Princeton University Press.

Nicod, Jean G. P. (1917). A Reduction in the Number of Primitive Propositions of Logic. Proceedings of
the Cambridge Philosophical Society. 19: 32–41.
Charles Sanders Peirce, 1880, A Boolian [sic] Algebra with One Constant, in Hartshorne, C. and Weiss, P.,
eds., (1931–35) Collected Papers of Charles Sanders Peirce, Vol. 4: 12–20, Cambridge: Harvard University
Press.

Sheffer, H. M. (1913), A set of five independent postulates for Boolean algebras, with application to logical
constants, Transactions of the American Mathematical Society, 14: 481–488, JSTOR 1988701, doi:10.2307/1988701

237.9 External links


http://hyperphysics.phy-astr.gsu.edu/hbase/electronic/nand.html
implementations of 2 and 4-input NAND gates

Proofs of some axioms by Stroke function by Yasuo Set @ Project Euclid


Chapter 238

Sigma-algebra

"-algebra redirects here. For an algebraic structure admitting a given signature of operations, see Universal al-
gebra.

In mathematical analysis and in probability theory, a σ-algebra (also σ-field) on a set X is a collection Σ of subsets
of X that includes the empty subset, is closed under complement, and is closed under countable unions and countable
intersections. The pair (X, Σ) is called a measurable space.
A σ-algebra specializes the concept of a set algebra. An algebra of sets needs only to be closed under the union or
intersection of finitely many subsets.[1]
The main use of σ-algebras is in the definition of measures; specifically, the collection of those subsets for which
a given measure is defined is necessarily a σ-algebra. This concept is important in mathematical analysis as the
foundation for Lebesgue integration, and in probability theory, where it is interpreted as the collection of events which
can be assigned probabilities. Also, in probability, σ-algebras are pivotal in the definition of conditional expectation.
In statistics, (sub) σ-algebras are needed for the formal mathematical definition of a sufficient statistic,[2] particularly
when the statistic is a function or a random process and the notion of conditional density is not applicable.
If X = {a, b, c, d}, one possible σ-algebra on X is Σ = { ∅, {a, b}, {c, d}, {a, b, c, d} }, where ∅ is the empty set. In
general, a finite algebra is always a σ-algebra.
If {A1, A2, A3, …} is a countable partition of X then the collection of all unions of sets in the partition (including
the empty set) is a σ-algebra.
A more useful example is the set of subsets of the real line formed by starting with all open intervals and adding
in all countable unions, countable intersections, and relative complements and continuing this process (by transfinite
iteration through all countable ordinals) until the relevant closure properties are achieved (a construction known as
the Borel hierarchy).

238.1 Motivation
There are at least three key motivators for σ-algebras: defining measures, manipulating limits of sets, and managing
partial information characterized by sets.

238.1.1 Measure
A measure on X is a function that assigns a non-negative real number to subsets of X; this can be thought of as making
precise a notion of size or volume for sets. We want the size of the union of disjoint sets to be the sum of their
individual sizes, even for an infinite sequence of disjoint sets.
One would like to assign a size to every subset of X, but in many natural settings, this is not possible. For example,
the axiom of choice implies that when the size under consideration is the ordinary notion of length for subsets of the
real line, then there exist sets for which no size exists, for example, the Vitali sets. For this reason, one considers
instead a smaller collection of privileged subsets of X. These subsets will be called the measurable sets. They are


closed under operations that one would expect for measurable sets, that is, the complement of a measurable set is a
measurable set and the countable union of measurable sets is a measurable set. Non-empty collections of sets with
these properties are called σ-algebras.

238.1.2 Limits of sets


Many uses of measure, such as the probability concept of almost sure convergence, involve limits of sequences of
sets. For this, closure under countable unions and intersections is paramount. Set limits are defined as follows on
σ-algebras.

The limit supremum of a sequence A1, A2, A3, ..., each of which is a subset of X, is

lim sup_{n→∞} An = ⋂_{n=1}^{∞} ⋃_{m=n}^{∞} Am.

The limit infimum of a sequence A1, A2, A3, ..., each of which is a subset of X, is

lim inf_{n→∞} An = ⋃_{n=1}^{∞} ⋂_{m=n}^{∞} Am.

If, in fact,

lim inf_{n→∞} An = lim sup_{n→∞} An,

then lim_{n→∞} An exists as that common set.
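For a concrete toy illustration (an example constructed here, not taken from the literature), both limits can be
computed directly from these definitions for an eventually periodic sequence of finite sets.

# lim sup / lim inf of a sequence of sets, computed from the definitions.
# A_n alternates between {1, 2} and {2, 3}; only 2 is in all but finitely many A_n.
def A(n):
    return {1, 2} if n % 2 == 0 else {2, 3}

N = 50   # a finite horizon suffices here because the sequence is periodic
limsup = set.intersection(*[set.union(*[A(m) for m in range(n, N)]) for n in range(N - 1)])
liminf = set.union(*[set.intersection(*[A(m) for m in range(n, N)]) for n in range(N - 1)])

print(limsup)   # {1, 2, 3}: each of 1, 2, 3 occurs in infinitely many A_n
print(liminf)   # {2}: only 2 belongs to all A_n from some point on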

238.1.3 Sub σ-algebras


In much of probability, especially when conditional expectation is involved, one is concerned with sets that represent
only part of all the possible information that can be observed. This partial information can be characterized with a
smaller σ-algebra which is a subset of the principal σ-algebra; it consists of the collection of subsets relevant only to
and determined only by the partial information. A simple example suffices to illustrate this idea.
Imagine you and another person are betting on a game that involves flipping a coin repeatedly and observing whether
it comes up Heads (H) or Tails (T). Since you and your opponent are each infinitely wealthy, there is no limit to how
long the game can last. This means the sample space Ω must consist of all possible infinite sequences of H or T:

Ω = {H, T}^∞ = {(x1, x2, x3, . . . ) : xi ∈ {H, T}, i ≥ 1}.


However, after n flips of the coin, you may want to determine or revise your betting strategy in advance of the next
flip. The observed information at that point can be described in terms of the 2^n possibilities for the first n flips.
Formally, since you need to use subsets of Ω, this is codified as the σ-algebra

G_n = {A × {H, T}^∞ : A ⊆ {H, T}^n}.

Observe that then

G_1 ⊆ G_2 ⊆ G_3 ⊆ ⋯ ⊆ G_∞,

where G_∞ is the smallest σ-algebra containing all the others.

238.2 Definition and properties

238.2.1 Definition
Let X be some set, and let 2^X represent its power set. Then a subset Σ ⊆ 2^X is called a σ-algebra if it satisfies the
following three properties:[3]

1. X is in Σ, and X is considered to be the universal set in the following context.

2. Σ is closed under complementation: If A is in Σ, then so is its complement, X \ A.

3. Σ is closed under countable unions: If A1, A2, A3, ... are in Σ, then so is A = A1 ∪ A2 ∪ A3 ∪ ⋯.

From these properties, it follows that the σ-algebra is also closed under countable intersections (by applying De
Morgan's laws).
It also follows that the empty set ∅ is in Σ, since by (1) X is in Σ and (2) asserts that its complement, the empty
set, is also in Σ. Moreover, since {X, ∅} satisfies condition (3) as well, it follows that {X, ∅} is the smallest possible
σ-algebra on X. The largest possible σ-algebra on X is 2^X.
Elements of the σ-algebra are called measurable sets. An ordered pair (X, Σ), where X is a set and Σ is a σ-algebra
over X, is called a measurable space. A function between two measurable spaces is called a measurable function if
the preimage of every measurable set is measurable. The collection of measurable spaces forms a category, with the
measurable functions as morphisms. Measures are defined as certain types of functions from a σ-algebra to [0, ∞].
A σ-algebra is both a π-system and a Dynkin system (λ-system). The converse is true as well, by Dynkin's theorem
(below).
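On a finite set the three defining properties can be verified mechanically; countable unions then reduce to pairwise
unions. The Python sketch below (illustrative code written for the example Σ = {∅, {a, b}, {c, d}, X} given earlier)
checks them.

# Check the defining properties of a σ-algebra on a finite example.
from itertools import combinations

X = frozenset({'a', 'b', 'c', 'd'})
sigma = {frozenset(), frozenset({'a', 'b'}), frozenset({'c', 'd'}), X}

def is_sigma_algebra(collection, universe):
    if universe not in collection:                              # property 1
        return False
    if any(universe - A not in collection for A in collection): # property 2: complements
        return False
    return all(A | B in collection                              # property 3: (finite) unions
               for A, B in combinations(collection, 2))

print(is_sigma_algebra(sigma, X))                               # True
print(is_sigma_algebra({frozenset(), frozenset({'a'}), X}, X))  # False: {b, c, d} is missing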

238.2.2 Dynkin's π-λ theorem


This theorem (or the related monotone class theorem) is an essential tool for proving many results about properties
of specific σ-algebras. It capitalizes on the nature of two simpler classes of sets, namely the following.

A π-system P is a collection of subsets of X that is closed under finitely many intersections, and
a Dynkin system (or λ-system) D is a collection of subsets of X that contains X and is closed under
complement and under countable unions of disjoint subsets.

Dynkin's π-λ theorem says, if P is a π-system and D is a Dynkin system that contains P then the σ-algebra σ(P)
generated by P is contained in D. Since certain π-systems are relatively simple classes, it may not be hard to verify
that all sets in P enjoy the property under consideration while, on the other hand, showing that the collection D of all
subsets with the property is a Dynkin system can also be straightforward. Dynkin's π-λ theorem then implies that
all sets in σ(P) enjoy the property, avoiding the task of checking it for an arbitrary set in σ(P).
One of the most fundamental uses of the π-λ theorem is to show equivalence of separately defined measures or
integrals. For example, it is used to equate a probability for a random variable X with the Lebesgue-Stieltjes integral
typically associated with computing the probability:

P(X ∈ A) = ∫_A F(dx) for all A in the Borel σ-algebra on R,

where F(x) is the cumulative distribution function for X, defined on R, while P is a probability measure, defined on
a σ-algebra Σ of subsets of some sample space Ω.

238.2.3 Combining σ-algebras


Suppose {Σα : α ∈ A} is a collection of σ-algebras on a space X.

The intersection of a collection of σ-algebras is a σ-algebra. To emphasize its character as a σ-algebra, it often
is denoted by:

⋀_{α∈A} Σα.

Sketch of Proof: Let Σ* denote the intersection. Since X is in every Σα, Σ* is not empty. Closure under
complement and countable unions for every Σα implies the same must be true for Σ*. Therefore, Σ* is a
σ-algebra.

The union of a collection of σ-algebras is not generally a σ-algebra, or even an algebra, but it generates a
σ-algebra known as the join which typically is denoted

⋁_{α∈A} Σα = σ(⋃_{α∈A} Σα).

A π-system that generates the join is

P = {⋂_{i=1}^{n} A_i : A_i ∈ Σ_{αi}, αi ∈ A, n ≥ 1}.

Sketch of Proof: By the case n = 1, it is seen that each Σα ⊆ P, so

⋃_{α∈A} Σα ⊆ P.

This implies

σ(⋃_{α∈A} Σα) ⊆ σ(P)

by the definition of a σ-algebra generated by a collection of subsets. On the other hand,

P ⊆ σ(⋃_{α∈A} Σα)

which, by Dynkin's π-λ theorem, implies

σ(P) ⊆ σ(⋃_{α∈A} Σα).

238.2.4 σ-algebras for subspaces


Suppose Y is a subset of X and let (X, Σ) be a measurable space.

The collection {Y ∩ B : B ∈ Σ} is a σ-algebra of subsets of Y.


Suppose (Y, Λ) is a measurable space. The collection {A ⊆ X : A ∩ Y ∈ Λ} is a σ-algebra of subsets of X.

238.2.5 Relation to σ-ring


A σ-algebra is just a σ-ring that contains the universal set X.[4] A σ-ring need not be a σ-algebra, as for example
measurable subsets of zero Lebesgue measure in the real line are a σ-ring, but not a σ-algebra since the real line
has infinite measure and thus cannot be obtained by their countable union. If, instead of zero measure, one takes
measurable subsets of finite Lebesgue measure, those are a ring but not a σ-ring, since the real line can be obtained
by their countable union yet its measure is not finite.

238.2.6 Typographic note


σ-algebras are sometimes denoted using calligraphic capital letters, or the Fraktur typeface. Thus (X, Σ) may be
denoted as (X, ℱ) or (X, 𝔉).

238.3 Particular cases and examples

238.3.1 Separable σ-algebras


A separable σ-algebra (or separable σ-field) is a σ-algebra F that is a separable space when considered as a metric
space with metric ρ(A, B) = μ(A △ B) for A, B ∈ F and a given measure μ (and with △ being the symmetric dif-
ference operator).[5] Note that any σ-algebra generated by a countable collection of sets is separable, but the converse
need not hold. For example, the Lebesgue σ-algebra is separable (since every Lebesgue measurable set is equivalent
to some Borel set) but not countably generated (since its cardinality is higher than continuum).
A separable measure space has a natural pseudometric that renders it separable as a pseudometric space. The distance
between two sets is defined as the measure of the symmetric difference of the two sets. Note that the symmetric
difference of two distinct sets can have measure zero; hence the pseudometric as defined above need not be a true
metric. However, if sets whose symmetric difference has measure zero are identified into a single equivalence class,
the resulting quotient set can be properly metrized by the induced metric. If the measure space is separable, it can
be shown that the corresponding metric space is, too.

238.3.2 Simple set-based examples


Let X be any set.

The family consisting only of the empty set and the set X, called the minimal or trivial σ-algebra over X.
The power set of X, called the discrete σ-algebra.
The collection {∅, A, Aᶜ, X} is a simple σ-algebra generated by the subset A.
The collection of subsets of X which are countable or whose complements are countable is a σ-algebra (which
is distinct from the power set of X if and only if X is uncountable). This is the σ-algebra generated by the
singletons of X. Note: countable includes finite or empty.
The collection of all unions of sets in a countable partition of X is a σ-algebra.

238.3.3 Stopping time sigma-algebras


A stopping time τ can define a σ-algebra Fτ, the so-called stopping time sigma-algebra, which in a filtered probability
space describes the information up to the random time τ in the sense that, if the filtered probability space is interpreted
as a random experiment, the maximum information that can be found out about the experiment from arbitrarily often
repeating it until the time τ is Fτ.[6]

238.4 σ-algebras generated by families of sets

238.4.1 σ-algebra generated by an arbitrary family


Let F be an arbitrary family of subsets of X. Then there exists a unique smallest σ-algebra which contains every set
in F (even though F may or may not itself be a σ-algebra). It is, in fact, the intersection of all σ-algebras containing
F. (See intersections of σ-algebras above.) This σ-algebra is denoted σ(F) and is called the σ-algebra generated by F.
If F is empty, then σ(F) = {X, ∅}. Otherwise σ(F) consists of all the subsets of X that can be made from elements of
F by a countable number of complement, union and intersection operations.

For a simple example, consider the set X = {1, 2, 3}. Then the σ-algebra generated by the single subset {1} is
σ({{1}}) = {∅, {1}, {2, 3}, {1, 2, 3}}. By an abuse of notation, when a collection of subsets contains only one
element, A, one may write σ(A) instead of σ({A}); in the prior example σ({1}) instead of σ({{1}}). Indeed, using
σ(A1, A2, ...) to mean σ({A1, A2, ...}) is also quite common.
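On a finite set the generated σ-algebra can be computed directly by closing the family under complements and unions, since countable unions reduce to finite ones there. The following Python sketch (a hypothetical helper written for this example, not part of any standard library) reproduces the example above:

def generated_sigma_algebra(X, family):
    # Smallest collection of subsets of the finite set X that contains `family`
    # and is closed under complement and union; on a finite X this coincides
    # with the sigma-algebra generated by `family`.
    X = frozenset(X)
    sets = {frozenset(), X} | {frozenset(A) for A in family}
    changed = True
    while changed:
        changed = False
        current = list(sets)
        for A in current:
            if X - A not in sets:
                sets.add(X - A)
                changed = True
            for B in current:
                if A | B not in sets:
                    sets.add(A | B)
                    changed = True
    return sets

# sigma({{1}}) on X = {1, 2, 3}: the four sets {}, {1}, {2, 3}, {1, 2, 3}
print(sorted(map(sorted, generated_sigma_algebra({1, 2, 3}, [{1}]))))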
There are many families of subsets that generate useful -algebras. Some of these are presented here.

238.4.2 σ-algebra generated by a function


If f is a function from a set X to a set Y and B is a σ-algebra of subsets of Y, then the σ-algebra generated by
the function f, denoted by σ(f), is the collection of all inverse images f⁻¹(S) of the sets S in B, i.e.

σ(f) = {f⁻¹(S) | S ∈ B}.

A function f from a set X to a set Y is measurable with respect to a σ-algebra Σ of subsets of X if and only if σ(f) is
a subset of Σ.
One common situation, and understood by default if B is not specied explicitly, is when Y is a metric or topological
space and B is the collection of Borel sets on Y.
If f is a function from X to Rⁿ then σ(f) is generated by the family of subsets which are inverse images of
intervals/rectangles in Rⁿ:

σ(f) = σ( {f⁻¹((a1, b1] × ⋯ × (an, bn]) : ai, bi ∈ R} ).
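For finite sets the definition can be carried out literally: σ(f) is just the collection of preimages of the members of B. A small illustrative sketch in Python (the function and sets below are invented for the example):

def sigma_of_f(X, f, B):
    # sigma(f) = { f^{-1}(S) : S in B }, computed by taking preimages.
    return {frozenset(x for x in X if f(x) in S) for S in B}

X = {1, 2, 3, 4}
f = lambda x: x % 2                   # maps X into Y = {0, 1}
B = [set(), {0}, {1}, {0, 1}]         # the discrete sigma-algebra on Y
print(sorted(map(sorted, sigma_of_f(X, f, B))))
# the four preimages: {}, {1, 3}, {2, 4}, {1, 2, 3, 4}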

A useful property is the following. Assume f is a measurable map from (X, ΣX) to (S, ΣS) and g is a measurable
map from (X, ΣX) to (T, ΣT). If there exists a measurable map h from (T, ΣT) to (S, ΣS) such that f(x) = h(g(x))
for all x, then σ(f) ⊆ σ(g). If S is finite or countably infinite or, more generally, (S, ΣS) is a standard Borel space
(e.g., a separable complete metric space with its associated Borel sets), then the converse is also true.[7] Examples of
standard Borel spaces include Rⁿ with its Borel sets and R^∞ with the cylinder σ-algebra described below.

238.4.3 Borel and Lebesgue σ-algebras


An important example is the Borel algebra over any topological space: the σ-algebra generated by the open sets (or,
equivalently, by the closed sets). Note that this σ-algebra is not, in general, the whole power set. For a non-trivial
example that is not a Borel set, see the Vitali set or Non-Borel sets.
On the Euclidean space Rⁿ, another σ-algebra is of importance: that of all Lebesgue measurable sets. This σ-algebra
contains more sets than the Borel σ-algebra on Rⁿ and is preferred in integration theory, as it gives a complete measure
space.

238.4.4 Product σ-algebra


Let (X1, Σ1) and (X2, Σ2) be two measurable spaces. The σ-algebra for the corresponding product space X1 × X2
is called the product σ-algebra and is defined by

Σ1 × Σ2 = σ( {B1 × B2 : B1 ∈ Σ1, B2 ∈ Σ2} ).

Observe that {B1 × B2 : B1 ∈ Σ1, B2 ∈ Σ2} is a π-system.


The Borel -algebra for Rn is generated by half-innite rectangles and by nite rectangles. For example,

B(Rn ) = ({(, b1 ] (, bn ] : bi R}) = ({(a1 , b1 ] (an , bn ] : ai , bi R}) .

For each of these two examples, the generating family is a -system.



238.4.5 σ-algebra generated by cylinder sets


Suppose

X ⊆ R^T = {f : f(t) ∈ R, t ∈ T}

is a set of real-valued functions. Let B(R) denote the Borel subsets of R. A cylinder subset of X is a finitely restricted
set defined as

Ct1,…,tn(B1, …, Bn) = {f ∈ X : f(ti) ∈ Bi, 1 ≤ i ≤ n}.

Each

{Ct1,…,tn(B1, …, Bn) : Bi ∈ B(R), 1 ≤ i ≤ n}

is a π-system that generates a σ-algebra Σt1,…,tn. Then the family of subsets

FX = ⋃_{n=1}^{∞} ⋃_{ti∈T, i≤n} Σt1,…,tn

is an algebra that generates the cylinder σ-algebra for X. This σ-algebra is a subalgebra of the Borel σ-algebra
determined by the product topology of R^T restricted to X.
An important special case is when T is the set of natural numbers and X is a set of real-valued sequences. In this
case, it suffices to consider the cylinder sets

Cn(B1, …, Bn) = (B1 × ⋯ × Bn × R^∞) ∩ X = {(x1, x2, …, xn, xn+1, …) ∈ X : xi ∈ Bi, 1 ≤ i ≤ n},

for which

Σn = σ( {Cn(B1, …, Bn) : Bi ∈ B(R), 1 ≤ i ≤ n} )

is a non-decreasing sequence of σ-algebras.

238.4.6 σ-algebra generated by random variable or vector


Suppose (Ω, Σ, P) is a probability space. If Y : Ω → Rⁿ is measurable with respect to the Borel σ-algebra on Rⁿ
then Y is called a random variable (n = 1) or random vector (n > 1). The σ-algebra generated by Y is

σ(Y) = {Y⁻¹(A) : A ∈ B(Rⁿ)}.

238.4.7 σ-algebra generated by a stochastic process


Suppose (Ω, Σ, P) is a probability space and R^T is the set of real-valued functions on T. If Y : Ω → X ⊆ R^T is
measurable with respect to the cylinder σ-algebra σ(FX) (see above) for X then Y is called a stochastic process or
random process. The σ-algebra generated by Y is

σ(Y) = {Y⁻¹(A) : A ∈ σ(FX)} = σ( {Y⁻¹(A) : A ∈ FX} ),

the σ-algebra generated by the inverse images of cylinder sets.



238.5 See also


Join (sigma algebra)

Measurable function
Sample space

Sigma ring

Sigma additivity

238.6 References
[1] Probability, Mathematical Statistics, Stochastic Processes. Random. University of Alabama in Huntsville, Department
of Mathematical Sciences. Retrieved 30 March 2016.

[2] Billingsley, Patrick (2012). Probability and Measure (Anniversary ed.). Wiley. ISBN 978-1-118-12237-2.

[3] Rudin, Walter (1987). Real & Complex Analysis. McGraw-Hill. ISBN 0-07-054234-1.

[4] Vestrup, Eric M. (2009). The Theory of Measures and Integration. John Wiley & Sons. p. 12. ISBN 978-0-470-31795-2.

[5] Džamonja, Mirna; Kunen, Kenneth (1995). Properties of the class of measure separable compact spaces (PDF). Fundamenta
Mathematicae: 262. If μ is a Borel measure on X, the measure algebra of (X, μ) is the Boolean algebra of all Borel sets
modulo μ-null sets. If μ is finite, then such a measure algebra is also a metric space, with the distance between the two
sets being the measure of their symmetric difference. Then, we say that μ is separable iff this metric space is separable as
a topological space.

[6] Fischer, Tom (2013). On simple representations of stopping times and stopping time sigma-algebras. Statistics and
Probability Letters. 83 (1): 345349. doi:10.1016/j.spl.2012.09.024.

[7] Kallenberg, Olav (2001). Foundations of Modern Probability (2nd ed.). Springer. p. 7. ISBN 0-387-95313-2.

238.7 External links


Hazewinkel, Michiel, ed. (2001) [1994], Algebra of sets, Encyclopedia of Mathematics, Springer Sci-
ence+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Sigma Algebra from PlanetMath.
Chapter 239

Skolem normal form

In mathematical logic, a formula of rst-order logic is in Skolem normal form if it is in prenex normal form with only
universal rst-order quantiers.
Every rst-order formula may be converted into Skolem normal form while not changing its satisability via a process
called Skolemization (sometimes spelled Skolemnization). The resulting formula is not necessarily equivalent to
the original one, but is equisatisable with it: it is satisable if and only if the original one is satisable.[1]
Reduction to Skolem normal form is a method for removing existential quantiers from formal logic statements, often
performed as the rst step in an automated theorem prover.

239.1 Examples
The simplest form of Skolemization is for existentially quantified variables which are not inside the scope of a universal
quantifier. These may be replaced simply by creating new constants. For example, ∃x P(x) may be changed to P(c),
where c is a new constant (does not occur anywhere else in the formula).
More generally, Skolemization is performed by replacing every existentially quantied variable y with a term f (x1 , . . . , xn )
whose function symbol f is new. The variables of this term are as follows. If the formula is in prenex normal form,
x1 , . . . , xn are the variables that are universally quantied and whose quantiers precede that of y . In general, they
are the variables that are quantied universally and such that y occurs in the scope of their quantiers. The function
f introduced in this process is called a Skolem function (or Skolem constant if it is of zero arity) and the term is
called a Skolem term.
As an example, the formula ∀x∃y∀z.P(x, y, z) is not in Skolem normal form because it contains the existential
quantifier ∃y. Skolemization replaces y with f(x), where f is a new function symbol, and removes the quantification
over y. The resulting formula is ∀x∀z.P(x, f(x), z). The Skolem term f(x) contains x, but not z, because the
quantifier to be removed ∃y is in the scope of ∀x, but not in that of ∀z; since this formula is in prenex normal form,
this is equivalent to saying that, in the list of quantifiers, x precedes y while z does not. The formula obtained by this
transformation is satisfiable if and only if the original formula is.
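A minimal Python sketch of this transformation for formulas already in prenex normal form follows; the tuple-based formula representation and the helper names are invented for this illustration, not a standard library:

import itertools

_fresh = itertools.count()

def skolemize_prenex(prefix, matrix):
    # prefix: list of ('forall', var) / ('exists', var); matrix: nested tuples.
    # Each existential variable is replaced by a Skolem term built from the
    # universal variables whose quantifiers precede it.
    universals, subst = [], {}
    for quant, var in prefix:
        if quant == 'forall':
            universals.append(var)
        else:
            sk = "f%d" % next(_fresh)        # fresh Skolem function symbol
            subst[var] = (sk, *universals) if universals else sk
    def apply(t):
        if isinstance(t, str):
            return subst.get(t, t)
        return tuple(apply(a) for a in t)
    return [q for q in prefix if q[0] == 'forall'], apply(matrix)

# forall x exists y forall z . P(x, y, z)  becomes  forall x forall z . P(x, f0(x), z)
print(skolemize_prenex([('forall', 'x'), ('exists', 'y'), ('forall', 'z')],
                       ('P', 'x', 'y', 'z')))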

239.2 How Skolemization works


Skolemization works by applying a second-order equivalence in conjunction with the definition of first-order satisfiability. The equivalence provides a way for moving an existential quantifier before a universal one.

∀x ( R(g(x)) ∨ ∃y R(x, y) )   ⟺   ∃f ∀x ( R(g(x)) ∨ R(x, f(x)) )

where

f(x) is a function that maps x to y.


Intuitively, the sentence for every x there exists a y such that R(x, y) " is converted into the equivalent form there
exists a function f mapping every x into a y such that, for every x it holds R(x, f (x)) ".
This equivalence is useful because the denition of rst-order satisability implicitly existentially quanties over the
evaluation of function symbols. In particular, a rst-order formula is satisable if there exists a model M and an
evaluation of the free variables of the formula that evaluate the formula to true. The model contains the evaluation
of all function symbols; therefore, Skolem functions are implicitly existentially quantied. In the example above,
∀x.R(x, f(x)) is satisfiable if and only if there exists a model M, which contains an evaluation for f, such that
∀x.R(x, f(x)) is true for some evaluation of its free variables (none in this case). This may be expressed in second
order as ∃f ∀x.R(x, f(x)). By the above equivalence, this is the same as the satisfiability of ∀x∃y.R(x, y).
At the meta-level, first-order satisfiability of a formula φ may be written with a little abuse of notation as ∃M ∃μ (M, μ ⊨ φ), where M is a model, μ is an evaluation of the free variables, and ⊨ means that φ is true in M under μ. Since
first-order models contain the evaluation of all function symbols, any Skolem function a formula contains is implicitly
existentially quantified by ∃M. As a result, after replacing an existential quantifier over variables with an existential
quantifier over functions at the front of the formula, the formula may still be treated as a first-order one by removing
these existential quantifiers. This final step of treating ∃f ∀x.R(x, f(x)) as ∀x.R(x, f(x)) may be completed
because functions are implicitly existentially quantified by ∃M in the definition of first-order satisfiability.
Correctness of Skolemization may be shown on the example formula F1 = ∀x1 … ∀xn ∃y R(x1, …, xn, y) as
follows. This formula is satisfied by a model M if and only if, for each possible value for x1, …, xn in the domain
of the model, there exists a value for y in the domain of the model that makes R(x1, …, xn, y) true. By
the axiom of choice, there exists a function f such that y = f(x1, …, xn). As a result, the formula F2 =
∀x1 … ∀xn R(x1, …, xn, f(x1, …, xn)) is satisfiable, because it has the model obtained by adding the evaluation
of f to M. This shows that F1 is satisfiable only if F2 is satisfiable as well. Conversely, if F2 is
satisfiable, then there exists a model M′ that satisfies it; this model includes an evaluation for the function f such
that, for every value of x1, …, xn, the formula R(x1, …, xn, f(x1, …, xn)) holds. As a result, F1 is satisfied by
the same model because one may choose, for every value of x1, …, xn, the value y = f(x1, …, xn), where f is
evaluated according to M′.

239.3 Uses of Skolemization

One of the uses of Skolemization is automated theorem proving. For example, in the method of analytic tableaux,
whenever a formula whose leading quantier is existential occurs, the formula obtained by removing that quantier
via Skolemization may be generated. For example, if ∃x.δ(x, y1, …, yn) occurs in a tableau, where x, y1, …, yn
are the free variables of δ(x, y1, …, yn), then δ(f(y1, …, yn), y1, …, yn) may be added to the same branch
of the tableau. This addition does not alter the satisability of the tableau: every model of the old formula may be
extended, by adding a suitable evaluation of f , to a model of the new formula.
This form of Skolemization is an improvement over classical Skolemization in that only variables that are free in the
formula are placed in the Skolem term. This is an improvement because the semantics of tableau may implicitly place
the formula in the scope of some universally quantied variables that are not in the formula itself; these variables are
not in the Skolem term, while they would be there according to the original denition of Skolemization. Another
improvement that may be used is applying the same Skolem function symbol for formulae that are identical up to
variable renaming.[2]
Another use is in the resolution method for rst order logic, where formulas are represented as sets of clauses under-
stood to be universally quantied. (For an example see drinker paradox.)

239.4 Skolem theories

In general, if T is a theory and for each formula F with free variables x1 , . . . , xn , y there is a Skolem function, then
T is called a Skolem theory.[3] For example, by the above, arithmetic with the Axiom of Choice is a Skolem theory.
Every Skolem theory is model complete, i.e. every substructure of a model is an elementary substructure. Given a
model M of a Skolem theory T, the smallest substructure containing a certain set A is called the Skolem hull of A.
The Skolem hull of A is an atomic prime model over A.

239.5 History
SNF is named after Thoralf Skolem.

239.6 See also


Herbrandization, the dual of Skolemization
Predicate functor logic

239.7 Notes
[1] Normal Forms and Skolemization (PDF). max planck institut informatik. Retrieved 15 December 2012.

[2] R. Hähnle. Tableaux and related methods. Handbook of Automated Reasoning.

[3] Sets, Models and Proofs (3.3) by I. Moerdijk and J. van Oosten

239.8 References
Hodges, Wilfrid (1997), A shorter model theory, Cambridge University Press, ISBN 978-0-521-58713-6

239.9 External links


Hazewinkel, Michiel, ed. (2001) [1994], Skolem function, Encyclopedia of Mathematics, Springer Sci-
ence+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

Skolemization on PlanetMath.org
Skolemization by Hector Zenil, The Wolfram Demonstrations Project.

Weisstein, Eric W. SkolemizedForm. MathWorld.


Chapter 240

SLD resolution

SLD resolution (Selective Linear Denite clause resolution) is the basic inference rule used in logic programming. It
is a renement of resolution, which is both sound and refutation complete for Horn clauses.

240.1 The SLD inference rule

Given a goal clause:

¬L1 ∨ ⋯ ∨ ¬Li ∨ ⋯ ∨ ¬Ln

with selected literal ¬Li, and an input definite clause:

L ∨ ¬K1 ∨ ⋯ ∨ ¬Km

whose positive literal (atom) L unifies with the atom Li of the selected literal ¬Li, SLD resolution derives another
goal clause, in which the selected literal is replaced by the negative literals of the input clause and the unifying
substitution θ is applied:

(¬L1 ∨ ⋯ ∨ ¬K1 ∨ ⋯ ∨ ¬Km ∨ ⋯ ∨ ¬Ln)θ

In the simplest case, in propositional logic, the atoms Li and L are identical, and the unifying substitution θ is
vacuous. However, in the more general case, the unifying substitution is necessary to make the two literals identical.
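In the propositional case the rule reduces to replacing the selected atom by the body of a matching clause, and a refutation is a sequence of such steps ending in the empty goal. A rough Python sketch (the clause representation and function names are assumptions made for this example):

def sld_step(goal, clause, selected=0):
    # Replace the selected atom of the goal clause by the body of a definite
    # clause whose head equals it (propositional case: no unification needed).
    head, body = clause
    if goal[selected] != head:
        raise ValueError("selected literal does not match the clause head")
    return goal[:selected] + list(body) + goal[selected + 1:]

def sld_refute(goal, program):
    # Depth-first search for an SLD refutation, selecting the leftmost literal
    # as Prolog does; returns True if the empty goal clause is derivable.
    if not goal:
        return True
    for head, body in program:
        if head == goal[0] and sld_refute(list(body) + goal[1:], program):
            return True
    return False

program = [('q', ('p',)), ('p', ())]      # q :- p.    p.
print(sld_refute(['q'], program))         # True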

240.2 The origin of the name SLD

The name SLD resolution was given by Maarten van Emden for the unnamed inference rule introduced by Robert
Kowalski.[1] Its name is derived from SL resolution,[2] which is both sound and refutation complete for the unrestricted
clausal form of logic. SLD stands for SL resolution with Denite clauses.
In both, SL and SLD, L stands for the fact that a resolution proof can be restricted to a linear sequence of clauses:
C1, C2, ..., Cl
where the top clause C1 is an input clause, and every other clause Ci+1 is a resolvent one of whose parents is the
previous clause Ci . The proof is a refutation if the last clause Cl is the empty clause.
In SLD, all of the clauses in the sequence are goal clauses, and the other parent is an input clause. In SL resolution,
the other parent is either an input clause or an ancestor clause earlier in the sequence.
In both SL and SLD, S stands for the fact that the only literal resolved upon in any clause Ci is one that is uniquely
selected by a selection rule or selection function. In SL resolution, the selected literal is restricted to one which has
been most recently introduced into the clause. In the simplest case, such a last-in-rst-out selection function can be
specied by the order in which literals are written, as in Prolog. However, the selection function in SLD resolution is
more general than in SL resolution and in Prolog. There is no restriction on the literal that can be selected.


240.3 The computational interpretation of SLD resolution


In clausal logic, an SLD refutation demonstrates that the input set of clauses is unsatisable. In logic programming,
however, an SLD refutation also has a computational interpretation. The top clause ¬L1 ∨ ⋯ ∨ ¬Li ∨ ⋯ ∨ ¬Ln
can be interpreted as the denial of a conjunction of subgoals L1 ∧ ⋯ ∧ Li ∧ ⋯ ∧ Ln. The derivation of clause
Ci+1 from Ci is the derivation, by means of backward reasoning, of a new set of sub-goals using an input clause as
a goal-reduction procedure. The unifying substitution both passes input from the selected subgoal to the body of
the procedure and simultaneously passes output from the head of the procedure to the remaining unselected subgoals.
The empty clause is simply an empty set of subgoals, which signals that the initial conjunction of subgoals in the top
clause has been solved.

240.4 SLD resolution strategies


SLD resolution implicitly denes a search tree of alternative computations, in which the initial goal clause is associated
with the root of the tree. For every node in the tree and for every denite clause in the program whose positive literal
unies with the selected literal in the goal clause associated with the node, there is a child node associated with the
goal clause obtained by SLD resolution.
A leaf node, which has no children, is a success node if its associated goal clause is the empty clause. It is a failure
node if its associated goal clause is non-empty but its selected literal unies with the positive literal of no input clause.
SLD resolution is non-deterministic in the sense that it does not determine the search strategy for exploring the search
tree. Prolog searches the tree depth-rst, one branch at a time, using backtracking when it encounters a failure node.
Depth-rst search is very ecient in its use of computing resources, but is incomplete if the search space contains
innite branches and the search strategy searches these in preference to nite branches: the computation does not
terminate. Other search strategies, including breadth-rst, best-rst, and branch-and-bound search are also possible.
Moreover, the search can be carried out sequentially, one node at a time, or in parallel, many nodes simultaneously.
SLD resolution is also non-deterministic in the sense, mentioned earlier, that the selection rule is not determined by
the inference rule, but is determined by a separate decision procedure, which can be sensitive to the dynamics of the
program execution process.
The SLD resolution search space is an or-tree, in which dierent branches represent alternative computations. In the
case of propositional logic programs, SLD can be generalised so that the search space is an and-or tree, whose nodes
are labelled by single literals, representing subgoals, and nodes are joined either by conjunction or by disjunction. In
the general case, where conjoint subgoals share variables, the and-or tree representation is more complicated.

240.5 Example
Given the logic program:
q :- p
p
and the top-level goal:
q
the search space consists of a single branch, in which q is reduced to p which is reduced to the empty set of subgoals,
signalling a successful computation. In this case, the program is so simple that there is no role for the selection
function and no need for any search.
In clausal logic, the program is represented by the set of clauses:
q ∨ ¬p
p
and the top-level goal is represented by the goal clause with a single negative literal:
¬q
The search space consists of the single refutation:

¬q, ¬p, false
where false represents the empty clause.
If the following clause were added to the program:
q :- r
then there would be an additional branch in the search space, whose leaf node r is a failure node. In Prolog, if this
clause were added to the front of the original program, then Prolog would use the order in which the clauses are
written to determine the order in which the branches of the search space are investigated. Prolog would try this new
branch rst, fail, and then backtrack to investigate the single branch of the original program and succeed.
If the clause
p :- p
were now added to the program, then the search tree would contain an innite branch. If this clause were tried rst,
then Prolog would go into an innite loop and not nd the successful branch.

240.6 SLDNF
SLDNF[3] is an extension of SLD resolution to deal with negation as failure. In SLDNF, goal clauses can contain
negation as failure literals, say of the form not(p) , which can be selected only if they contain no variables. When
such a variable-free literal is selected, a subproof (or subcomputation) is attempted to determine whether there is an
SLDNF refutation starting from the corresponding unnegated literal p as top clause. The selected subgoal not(p)
succeeds if the subproof fails, and it fails if the subproof succeeds.

240.7 See also


John Alan Robinson

240.8 References
[1] Robert Kowalski Predicate Logic as a Programming Language Memo 70, Department of Articial Intelligence, Ed-
inburgh University. 1973. Also in Proceedings IFIP Congress, Stockholm, North Holland Publishing Co., 1974, pp.
569-574.

[2] Robert Kowalski and Donald Kuehner, Linear Resolution with Selection Function Articial Intelligence, Vol. 2, 1971,
pp. 227-60.

[3] Krzysztof Apt and Maarten van Emden, Contributions to the Theory of Logic Programming, Journal of the Association
for Computing Machinery. Vol, 1982 - portal.acm.org

Jean Gallier, SLD-Resolution and Logic Programming chapter 9 of Logic for Computer Science: Foundations
of Automatic Theorem Proving, 2003 online revision (free to download), originally published by Wiley, 1986
John C. Shepherdson, SLDNF-Resolution with Equality, Journal of Automated Reasoning 8: 297-306, 1992;
denes semantics with respect to which SLDNF-resolution with equality is sound and complete

240.9 External links


Denition from the Free On-Line Dictionary of Computing
Chapter 241

Standard translation

In modal logic, standard translation is a way of transforming formulas of modal logic into formulas of rst-order
logic which capture the meaning of the modal formulas. Standard translation is dened inductively on the structure of
the formula. In short, atomic formulas are mapped onto unary predicates and the objects in the rst-order language
are the accessible worlds. The logical connectives from propositional logic remain untouched and the modal operators
are transformed into rst-order formulas according to their semantics.

241.1 Definition
Standard translation is defined as follows:

STx(p) ≡ P(x), where p is an atomic formula; P(x) is true when p holds in world x.

STx(⊤) ≡ ⊤

STx(⊥) ≡ ⊥

STx(¬φ) ≡ ¬STx(φ)

STx(φ ∧ ψ) ≡ STx(φ) ∧ STx(ψ)

STx(φ ∨ ψ) ≡ STx(φ) ∨ STx(ψ)

STx(φ → ψ) ≡ STx(φ) → STx(ψ)

STx(◇m φ) ≡ ∃y (Rm(x, y) ∧ STy(φ))

STx(□m φ) ≡ ∀y (Rm(x, y) → STy(φ))

In the above, x is the world from which the formula is evaluated. Initially, a free variable x is used and whenever
a modal operator needs to be translated, a fresh variable is introduced to indicate that the remainder of the formula
needs to be evaluated from that world. Here, the subscript m refers to the accessibility relation that should be used:
normally, □ and ◇ refer to a relation R of the Kripke model but more than one accessibility relation can exist (a
multimodal logic) in which case subscripts are used. For example, □a and ◇a refer to an accessibility relation Ra
and □b and ◇b to Rb in the model. Alternatively, it can also be placed inside the modal symbol.

241.2 Example
As an example, when standard translation is applied to □□p, we expand the outer box to get

∀y (R(x, y) → STy(□p))

meaning that we have now moved from x to an accessible world y and we now evaluate the remainder of the formula,
□p, in each of those accessible worlds.
The whole standard translation of this example becomes

∀y (R(x, y) → ∀z (R(y, z) → P(z)))

which precisely captures the semantics of two boxes in modal logic. The formula □□p holds in x when for all
accessible worlds y from x and for all accessible worlds z from y, the predicate P is true for z. Note that the
formula is also true when no such accessible worlds exist. In case x has no accessible worlds then R(x, y) is false but
the whole formula is vacuously true: an implication is also true when the antecedent is false.
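The recursion in the definition is easy to mechanize. The following Python sketch produces the translation as a string; the tuple encoding of modal formulas and the variable-naming scheme are choices made only for this example:

import itertools

def standard_translation(formula, world='x', counter=None):
    # Formulas are nested tuples: ('p',), ('not', f), ('and', f, g), ('or', f, g),
    # ('implies', f, g), ('box', f), ('dia', f).  Returns a first-order string.
    counter = counter or itertools.count()
    def st(f, w):
        op = f[0]
        if op == 'not':
            return "~" + st(f[1], w)
        if op in ('and', 'or', 'implies'):
            sym = {'and': ' & ', 'or': ' | ', 'implies': ' -> '}[op]
            return "(" + st(f[1], w) + sym + st(f[2], w) + ")"
        if op == 'box':
            v = "y%d" % next(counter)        # fresh world variable
            return "forall %s (R(%s,%s) -> %s)" % (v, w, v, st(f[1], v))
        if op == 'dia':
            v = "y%d" % next(counter)
            return "exists %s (R(%s,%s) & %s)" % (v, w, v, st(f[1], v))
        return "%s(%s)" % (op.upper(), w)    # atomic proposition p becomes P(w)
    return st(formula, world)

# ST_x(box box p):
print(standard_translation(('box', ('box', ('p',)))))
# forall y0 (R(x,y0) -> forall y1 (R(y0,y1) -> P(y1)))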

241.3 Standard translation and modal depth


The modal depth of a formula also becomes apparent in the translation to rst-order logic. When the modal depth of a
formula is k, then the rst-order logic formula contains a 'chain' of k transitions from the starting world x . The worlds
are 'chained' in the sense that these worlds are visited by going from accessible to accessible world. Informally, the
number of transitions in the 'longest chain' of transitions in the rst-order formula is the modal depth of the formula.
The modal depth of the formula used in the example above is two. The rst-order formula indicates that the transitions
from x to y and from y to z are needed to verify the validity of the formula. This is also the modal depth of the
formula as each modal operator increases the modal depth by one.

241.4 References
Modal Logic: A Semantic Perspective, Patrick Blackburn and Johan van Benthem
Chapter 242

Stone functor

In mathematics, the Stone functor is a functor S: Top^op → Bool, where Top is the category of topological spaces
and Bool is the category of Boolean algebras and Boolean homomorphisms. It assigns to each topological space X
the Boolean algebra S(X) of its clopen subsets, and to each morphism f^op : X → Y in Top^op (i.e., a continuous map
f : Y → X) the homomorphism S(f): S(X) → S(Y) given by S(f)(Z) = f⁻¹[Z].

242.1 See also


Stones representation theorem for Boolean algebras
Pointless topology

242.2 References
Abstract and Concrete Categories. The Joy of Cats. Jiří Adámek, Horst Herrlich, George E. Strecker.

Peter T. Johnstone, Stone Spaces. (1982) Cambridge university Press ISBN 0-521-23893-5

Chapter 243

Stone space

In topology, and related areas of mathematics, a Stone space is a non-empty compact totally disconnected Hausdorff
space.[1] Such spaces are also called profinite spaces.[2] They are named after Marshall Harvey Stone.
A form of Stone's representation theorem for Boolean algebras states that every Boolean algebra is isomorphic to the
Boolean algebra of clopen sets of a Stone space. This isomorphism forms a category-theoretic duality between the
categories of Boolean algebras and Stone spaces.

243.1 References
[1] Hazewinkel, Michiel, ed. (2001) [1994], Stone space, Encyclopedia of Mathematics, Springer Science+Business Media
B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

[2] Stone space in nLab

Chapter 244

Stones representation theorem for


Boolean algebras

In mathematics, Stone's representation theorem for Boolean algebras states that every Boolean algebra is isomorphic
to a certain field of sets. The theorem is fundamental to the deeper understanding of Boolean algebra that emerged
in the first half of the 20th century. The theorem was first proved by Marshall H. Stone (1936). Stone was led to it
by his study of the spectral theory of operators on a Hilbert space.

244.1 Stone spaces


Each Boolean algebra B has an associated topological space, denoted here S(B), called its Stone space. The points in
S(B) are the ultrafilters on B, or equivalently the homomorphisms from B to the two-element Boolean algebra. The
topology on S(B) is generated by a (closed) basis consisting of all sets of the form

{x ∈ S(B) | b ∈ x},

where b is an element of B. This is the topology of pointwise convergence of nets of homomorphisms into the two-element Boolean algebra.
For every Boolean algebra B, S(B) is a compact totally disconnected Hausdorff space; such spaces are called Stone
spaces (also profinite spaces). Conversely, given any topological space X, the collection of subsets of X that are clopen
(both closed and open) is a Boolean algebra.

244.2 Representation theorem


A simple version of Stone's representation theorem states that every Boolean algebra B is isomorphic to the algebra
of clopen subsets of its Stone space S(B). The isomorphism sends an element b ∈ B to the set of all ultrafilters that
contain b. This is a clopen set because of the choice of topology on S(B) and because B is a Boolean algebra.
Restating the theorem using the language of category theory; the theorem states that there is a duality between the
category of Boolean algebras and the category of Stone spaces. This duality means that in addition to the corre-
spondence between Boolean algebras and their Stone spaces, each homomorphism from a Boolean algebra A to a
Boolean algebra B corresponds in a natural way to a continuous function from S(B) to S(A). In other words, there is
a contravariant functor that gives an equivalence between the categories. This was an early example of a nontrivial
duality of categories.
The theorem is a special case of Stone duality, a more general framework for dualities between topological spaces
and partially ordered sets.
The proof requires either the axiom of choice or a weakened form of it. Specically, the theorem is equivalent to the
Boolean prime ideal theorem, a weakened choice principle that states that every Boolean algebra has a prime ideal.


An extension of the classical Stone duality to the category of Boolean spaces (= zero-dimensional locally compact
Hausdorff spaces) and continuous maps (respectively, perfect maps) was obtained by G. D. Dimov (respectively, by
H. P. Doctor) (see the references below).

244.3 See also


Field of sets

List of Boolean algebra topics


Stonean space

Stone functor
Pronite group

Representation theorem

244.4 References
Paul Halmos, and Givant, Steven (1998) Logic as Algebra. Dolciani Mathematical Expositions No. 21. The
Mathematical Association of America.
Johnstone, Peter T. (1982) Stone Spaces. Cambridge University Press. ISBN 0-521-23893-5.

Marshall H. Stone (1936) "The Theory of Representations of Boolean Algebras," Transactions of the American
Mathematical Society 40: 37-111.

G. D. Dimov (2012) Some generalizations of the Stone Duality Theorem. Publ. Math. Debrecen 80: 255293.
H. P. Doctor (1964) The categories of Boolean lattices, Boolean rings and Boolean spaces. Canad. Math.
Bulletin 7: 245252.

A monograph available free online:

Burris, Stanley N., and H.P. Sankappanavar, H. P.(1981) A Course in Universal Algebra. Springer-Verlag.
ISBN 3-540-90578-2.
Chapter 245

Strict conditional

In logic, a strict conditional is a conditional governed by a modal operator, that is, a logical connective of modal
logic. It is logically equivalent to the material conditional of classical logic, combined with the necessity operator from
modal logic. For any two propositions p and q, the formula p → q says that p materially implies q while □(p → q) says
that p strictly implies q.[1] Strict conditionals are the result of Clarence Irving Lewis's attempt to find a conditional for
logic that can adequately express indicative conditionals in natural language.[2] They have also been used in studying
Molinist theology.[3]

245.1 Avoiding paradoxes


The strict conditionals may avoid paradoxes of material implication. The following statement, for example, is not
correctly formalized by material implication:

If Bill Gates had graduated in Medicine, then Elvis never died.

This condition should clearly be false: the degree of Bill Gates has nothing to do with whether Elvis is still alive.
However, the direct encoding of this formula in classical logic using material implication leads to:

Bill Gates graduated in Medicine → Elvis never died.

This formula is true because whenever the antecedent A is false, a formula A → B is true. Hence, this formula is not
an adequate translation of the original sentence. An encoding using the strict conditional is:

□ (Bill Gates graduated in Medicine → Elvis never died)
In modal logic, this formula means (roughly) that, in every possible world in which Bill Gates graduated in Medicine,
Elvis never died. Since one can easily imagine a world where Bill Gates is a Medicine graduate and Elvis is dead, this
formula is false. Hence, this formula seems a correct translation of the original sentence.

245.2 Problems
Although the strict conditional is much closer to being able to express natural language conditionals than the material
conditional, it has its own problems with consequents that are necessarily true (such as 2 + 2 = 4) or antecedents that
are necessarily false.[4] The following sentence, for example, is not correctly formalized by a strict conditional:

If Bill Gates graduated in Medicine, then 2 + 2 = 4.

Using strict conditionals, this sentence is expressed as:

□ (Bill Gates graduated in Medicine → 2 + 2 = 4)

In modal logic, this formula means that, in every possible world where Bill Gates graduated in medicine, it holds that
2 + 2 = 4. Since 2 + 2 is equal to 4 in all possible worlds, this formula is true, although it does not seem that the
original sentence should be. A similar situation arises with 2 + 2 = 5, which is necessarily false:

If 2 + 2 = 5, then Bill Gates graduated in Medicine.

Some logicians view this situation as indicating that the strict conditional is still unsatisfactory. Others have noted
that the strict conditional cannot adequately express counterfactual conditionals,[5] and that it does not satisfy certain
logical properties.[6] In particular, the strict conditional is transitive, while the counterfactual conditional is not.[7]
Some logicians, such as Paul Grice, have used conversational implicature to argue that, despite apparent difficulties,
the material conditional is just fine as a translation for the natural language 'if...then...'. Others still have turned to
relevance logic to supply a connection between the antecedent and consequent of provable conditionals.

245.3 See also


Counterfactual conditional
Indicative conditional
Material conditional
Logical consequence
Corresponding conditional

245.4 References
[1] Graham Priest, An Introduction to Non-Classical Logic: From if to is, 2nd ed, Cambridge University Press, 2008, ISBN
0-521-85433-4, p. 72.

[2] Nicholas Bunnin and Jiyuan Yu (eds), The Blackwell Dictionary of Western Philosophy, Wiley, 2004, ISBN 1-4051-0679-4,
strict implication, p. 660.

[3] Jonathan L. Kvanvig, Creation, Deliberation, and Molinism, in Destiny and Deliberation: Essays in Philosophical Theol-
ogy, Oxford University Press, 2011, ISBN 0-19-969657-8, p. 127136.

[4] Roy A. Sorensen, A Brief History of the Paradox: Philosophy and the labyrinths of the mind, Oxford University Press, 2003,
ISBN 0-19-515903-9, p. 105.

[5] Jens S. Allwood, Lars-Gunnar Andersson, and sten Dahl, Logic in Linguistics, Cambridge University Press, 1977, ISBN
0-521-29174-7, p. 120.

[6] Hans Rott and Vtezslav Hork, Possibility and Reality: Metaphysics and Logic, ontos verlag, 2003, ISBN 3-937202-24-2,
p. 271.

[7] John Bigelow and Robert Pargetter, Science and Necessity, Cambridge University Press, 1990, ISBN 0-521-39027-3, p.
116.

245.5 Bibliography
Edgington, Dorothy, 2001, Conditionals, in Goble, Lou, ed., The Blackwell Guide to Philosophical Logic.
Blackwell.

For an introduction to non-classical logic as an attempt to nd a better translation of the conditional, see:

Priest, Graham, 2001. An Introduction to Non-Classical Logic. Cambridge Univ. Press.

For an extended philosophical discussion of the issues mentioned in this article, see:

Mark Sainsbury, 2001. Logical Forms. Blackwell Publishers.

Jonathan Bennett, 2003. A Philosophical Guide to Conditionals. Oxford Univ. Press.


Chapter 246

Structural rule

For the type of rule used in linguistics, see Phrase structure rule.

In proof theory, a structural rule is an inference rule that does not refer to any logical connective, but instead operates
on the judgment or sequents directly. Structural rules often mimic intended meta-theoretic properties of the logic.
Logics that deny one or more of the structural rules are classied as substructural logics.

246.1 Common structural rules


Three common structural rules are:

Weakening, where the hypotheses or conclusion of a sequent may be extended with additional members. In
symbolic form weakening rules can be written (premise / conclusion) as Γ ⊢ Δ / Γ, A ⊢ Δ on the left of the
turnstile, and Γ ⊢ Δ / Γ ⊢ A, Δ on the right.

Contraction, where two equal (or unifiable) members on the same side of a sequent may be replaced by a
single member (or common instance). Symbolically: Γ, A, A ⊢ Δ / Γ, A ⊢ Δ and Γ ⊢ A, A, Δ / Γ ⊢ A, Δ.
Also known as factoring in automated theorem proving systems using resolution. Known as idempotency of
entailment in classical logic.

Exchange, where two members on the same side of a sequent may be swapped. Symbolically:
Γ1, A, Γ2, B, Γ3 ⊢ Δ / Γ1, B, Γ2, A, Γ3 ⊢ Δ and Γ ⊢ Δ1, A, Δ2, B, Δ3 / Γ ⊢ Δ1, B, Δ2, A, Δ3.
(This is also known as the permutation rule.)

A logic without any of the above structural rules would interpret the sides of a sequent as pure sequences; with
exchange, they are multisets; and with both contraction and exchange they are sets.
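If sequents are represented naively as a pair of lists, the three rules become simple list manipulations, which also makes the sequence/multiset/set distinction above concrete. A rough Python sketch (the representation and helper names are assumptions of this example):

def weaken_left(antecedent, succedent, A):
    # Weakening on the left: add A to the hypotheses.
    return antecedent + [A], succedent

def contract_left(antecedent, succedent, A):
    # Contraction on the left: merge two occurrences of A into one.
    i = antecedent.index(A)
    j = antecedent.index(A, i + 1)
    return antecedent[:j] + antecedent[j + 1:], succedent

def exchange_left(antecedent, succedent, i, j):
    # Exchange on the left: swap the members at positions i and j.
    ant = list(antecedent)
    ant[i], ant[j] = ant[j], ant[i]
    return ant, succedent

seq = (['A', 'B'], ['C'])                            # the sequent A, B |- C
print(weaken_left(*seq, 'D'))                        # (['A', 'B', 'D'], ['C'])
print(exchange_left(*seq, 0, 1))                     # (['B', 'A'], ['C'])
print(contract_left(['A', 'A', 'B'], ['C'], 'A'))    # (['A', 'B'], ['C'])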
These are not the only possible structural rules. A famous structural rule is known as cut. Considerable effort is spent
by proof theorists in showing that cut rules are superfluous in various logics. More precisely, what is shown is that cut
is only (in a sense) a tool for abbreviating proofs, and does not add to the theorems that can be proved. The successful
'removal' of cut rules, known as cut elimination, is directly related to the philosophy of computation as normalization
(see Curry–Howard correspondence); it often gives a good indication of the complexity of deciding a given logic.

246.2 See also


Ane logic
Linear logic

Ordered logic
Strict logic

Separation logic

Chapter 247

Substitution (logic)

Not to be confused with Substitution (algebra).

Substitution is a fundamental concept in logic. A substitution is a syntactic transformation on formal expressions.


To apply a substitution to an expression means to consistently replace its variable, or placeholder, symbols by other
expressions. The resulting expression is called a substitution instance of the original expression.

247.1 Propositional logic

247.1.1 Definition

Where ψ and φ represent formulas of propositional logic, ψ is a substitution instance of φ if and only if ψ may be
obtained from φ by substituting formulas for symbols in φ, always replacing an occurrence of the same symbol by an
occurrence of the same formula. For example:

(R → S) & (T → S)

is a substitution instance of:

P & Q

and

(A ↔ A) ↔ (A ↔ A)

is a substitution instance of:

(A ↔ A)

In some deduction systems for propositional logic, a new expression (a proposition) may be entered on a line of a
derivation if it is a substitution instance of a previous line of the derivation (Hunter 1971, p. 118). This is how new
lines are introduced in some axiomatic systems. In systems that use rules of transformation, a rule may include the
use of a substitution instance for the purpose of introducing certain variables into a derivation.
In rst-order logic, every closed propositional formula that can be derived from an open propositional formula a by
substitution is said to be a substitution instance of a. If a is a closed propositional formula we count a itself as its only
substitution instance.


247.1.2 Tautologies

A propositional formula is a tautology if it is true under every valuation (or interpretation) of its predicate symbols.
If φ is a tautology, and θ is a substitution instance of φ, then θ is again a tautology. This fact implies the soundness
of the deduction rule described in the previous section.

247.2 First-order logic


In first-order logic, a substitution is a total mapping σ: V → T from variables to terms; the notation { x1 ↦ t1, ...,
xk ↦ tk }[note 1] refers to a substitution mapping each variable xi to the corresponding term ti, for i=1,...,k, and every
other variable to itself; the xi must be pairwise distinct. Applying that substitution to a term t is written in postfix
notation as t { x1 ↦ t1, ..., xk ↦ tk }; it means to (simultaneously) replace every occurrence of each xi in t by ti.[note 2]
The result tσ of applying a substitution σ to a term t is called an instance of that term t. For example, applying the
substitution { x ↦ z, z ↦ h(a,y) } to the term

The domain dom(σ) of a substitution σ is commonly defined as the set of variables actually replaced, i.e. dom(σ)
= { x ∈ V | xσ ≠ x }. A substitution is called a ground substitution if it maps all variables of its domain to ground,
i.e. variable-free, terms. The substitution instance tσ of a ground substitution is a ground term if all of t's variables
are in σ's domain, i.e. if vars(t) ⊆ dom(σ). A substitution σ is called a linear substitution if tσ is a linear term for
some (and hence every) linear term t containing precisely the variables of σ's domain, i.e. with vars(t) = dom(σ). A
substitution σ is called a flat substitution if xσ is a variable for every variable x. A substitution σ is called a renaming
substitution if it is a permutation on the set of all variables. Like every permutation, a renaming substitution σ always
has an inverse substitution σ⁻¹, such that tσσ⁻¹ = t = tσ⁻¹σ for every term t. However, it is not possible to define an
inverse for an arbitrary substitution.
For example, { x ↦ 2, y ↦ 3+4 } is a ground substitution, { x ↦ x1, y ↦ y2+4 } is non-ground and non-flat, but
linear, { x ↦ y2, y ↦ y2+4 } is non-linear and non-flat, { x ↦ y2, y ↦ y2 } is flat, but non-linear, { x ↦ x1, y ↦
y2 } is both linear and flat, but not a renaming, since it maps both y and y2 to y2; each of these substitutions has the
set {x,y} as its domain. An example for a renaming substitution is { x ↦ x1, x1 ↦ y, y ↦ y2, y2 ↦ x }, it has the
inverse { x ↦ y2, y2 ↦ y, y ↦ x1, x1 ↦ x }. The flat substitution { x ↦ z, y ↦ z } cannot have an inverse, since e.g.
(x+y) { x ↦ z, y ↦ z } = z+z, and the latter term cannot be transformed back to x+y, as the information about the
origin a z stems from is lost. The ground substitution { x ↦ 2 } cannot have an inverse due to a similar loss of origin
information e.g. in (x+2) { x ↦ 2 } = 2+2, even if replacing constants by variables was allowed by some fictitious
kind of generalized substitutions.
Two substitutions are considered equal if they map each variable to structurally equal result terms, formally: σ = τ if
xσ = xτ for each variable x ∈ V. The composition of two substitutions σ = { x1 ↦ t1, ..., xk ↦ tk } and τ = { y1 ↦
u1, ..., yl ↦ ul } is obtained by removing from the substitution { x1 ↦ t1τ, ..., xk ↦ tkτ, y1 ↦ u1, ..., yl ↦ ul } those
pairs yi ↦ ui for which yi ∈ { x1, ..., xk }. The composition of σ and τ is denoted by στ. Composition is an associative
operation, and is compatible with substitution application, i.e. (ρσ)τ = ρ(στ), and (tσ)τ = t(στ), respectively, for every
substitutions ρ, σ, τ, and every term t. The identity substitution, which maps every variable to itself, is the neutral
element of substitution composition. A substitution σ is called idempotent if σσ = σ, and hence tσσ = tσ for every
term t. The substitution { x1 ↦ t1, ..., xk ↦ tk } is idempotent if and only if none of the variables xi occurs in any ti.
Substitution composition is not commutative, that is, στ may be different from τσ, even if σ and τ are idempotent.[1][2]
For example, { x ↦ 2, y ↦ 3+4 } is equal to { y ↦ 3+4, x ↦ 2 }, but different from { x ↦ 2, y ↦ 7 }. The
substitution { x ↦ y+y } is idempotent, e.g. ((x+y) {x↦y+y}) {x↦y+y} = ((y+y)+y) {x↦y+y} = (y+y)+y, while the
substitution { x ↦ x+y } is non-idempotent, e.g. ((x+y) {x↦x+y}) {x↦x+y} = ((x+y)+y) {x↦x+y} = ((x+y)+y)+y.
An example for non-commuting substitutions is { x ↦ y } { y ↦ z } = { x ↦ z, y ↦ z }, but { y ↦ z } { x ↦ y } = {
x ↦ y, y ↦ z }.
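A small Python sketch of substitution application and composition over terms encoded as nested tuples; the representation and helper names are invented for this illustration, but the composition follows the recipe just described:

def apply_subst(term, sigma):
    # Apply substitution sigma (a dict mapping variable names to terms) to a term.
    # Terms are variable/constant names (str) or tuples ('f', arg1, ..., argn).
    if isinstance(term, str):
        return sigma.get(term, term)
    return (term[0],) + tuple(apply_subst(arg, sigma) for arg in term[1:])

def compose(sigma, tau):
    # Composition sigma tau: apply sigma first, then tau, so that
    # apply_subst(t, compose(sigma, tau)) == apply_subst(apply_subst(t, sigma), tau).
    composed = {x: apply_subst(t, tau) for x, t in sigma.items()}
    composed.update({y: u for y, u in tau.items() if y not in sigma})
    # drop trivial bindings x -> x so the domain stays minimal
    return {x: t for x, t in composed.items() if t != x}

s = {'x': 'y'}          # { x |-> y }
t = {'y': 'z'}          # { y |-> z }
print(compose(s, t))    # {'x': 'z', 'y': 'z'}
print(compose(t, s))    # {'y': 'z', 'x': 'y'}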

247.3 See also


Substitution property in Equality (mathematics)#Some basic logical properties of equality

First-order logic#Rules of inference

Universal instantiation
Lambda calculus#Substitution

Truth-value semantics
Unication (computer science)

Metavariable
Mutatis mutandis

Rule of replacement

247.4 Notes
[1] Some authors use [ t1/x1, ..., tk/xk ] to denote that substitution, e.g. M. Wirsing (1990). Jan van Leeuwen, ed. Algebraic
Specification. Handbook of Theoretical Computer Science. B. Elsevier. pp. 675–788., here: p. 682;

[2] From a term algebra point of view, the set T of terms is the free term algebra over the set V of variables, hence for each
substitution mapping σ: V → T there is a unique homomorphism σ̂: T → T that agrees with σ on V ⊆ T; the above-defined
application of σ to a term t is then viewed as applying the function σ̂ to the argument t.

247.5 References
Crabb, M. (2004). On the Notion of Substitution. Logic Journal of the IGPL, 12, 111-124.

Curry, H. B. (1952) On the denition of substitution, replacement and allied notions in an abstract formal system.
Revue philosophique de Louvain 50, 251269.

Hunter, G. (1971). Metalogic: An Introduction to the Metatheory of Standard First Order Logic. University of
California Press. ISBN 0-520-01822-2

Kleene, S. C. (1967). Mathematical Logic. Reprinted 2002, Dover. ISBN 0-486-42533-9

[1] David A. Duy (1991). Principles of Automated Theorem Proving. Wiley.; here: p.73-74

[2] Franz Baader, Wayne Snyder (2001). Alan Robinson and Andrei Voronkov, ed. Unication Theory (PDF). Elsevier. pp.
439526.; here: p.445-446

247.6 External links


Substitution in nLab
Chapter 248

Surjective function

Onto redirects here. For other uses, see wiktionary:onto.

In mathematics, a function f from a set X to a set Y is surjective (or onto), or a surjection, if for every element y in
the codomain Y of f there is at least one element x in the domain X of f such that f(x) = y. It is not required that x
is unique; the function f may map one or more elements of X to the same element of Y.
The term surjective and the related terms injective and bijective were introduced by Nicolas Bourbaki,[1] a group
of mainly French 20th-century mathematicians who under this pseudonym wrote a series of books presenting an
exposition of modern advanced mathematics, beginning in 1935. The French prex sur means over or above and
relates to the fact that the image of the domain of a surjective function completely covers the functions codomain.
Any function induces a surjection by restricting its codomain to its range. Every surjective function has a right inverse,
and every function with a right inverse is necessarily a surjection. The composite of surjective functions is always
surjective. Any function can be decomposed into a surjection and an injection.

248.1 Definition
For more details on notation, see Function (mathematics) Notation.

A surjective function is a function whose image is equal to its codomain. Equivalently, a function f with domain
X and codomain Y is surjective if for every y in Y there exists at least one x in X with f(x) = y. Surjections are
sometimes denoted by a two-headed rightwards arrow (U+21A0 ↠ RIGHTWARDS TWO HEADED ARROW),[2]
as in f : X ↠ Y.
Symbolically,

If f : X → Y, then f is said to be surjective if

∀y ∈ Y, ∃x ∈ X, f(x) = y

A surjective function from domain X to codomain Y. The function is surjective because every point in the codomain is the value of f(x) for at least one point x in the domain.

248.2 Examples
For any set X, the identity function idX on X is surjective.
The function f : Z → {0, 1} defined by f(n) = n mod 2 (that is, even integers are mapped to 0 and odd integers to 1)
is surjective.
The function f : R → R defined by f(x) = 2x + 1 is surjective (and even bijective), because for every real number y
we have an x such that f(x) = y: an appropriate x is (y − 1)/2.
The function f : R → R defined by f(x) = x³ − 3x is surjective, because the pre-image of any real number y is the
solution set of the cubic polynomial equation x³ − 3x − y = 0 and every cubic polynomial with real coefficients has at
least one real root. However, this function is not injective (and hence not bijective) since e.g. the pre-image of y = 2
is {x = −1, x = 2}. (In fact, the pre-image of this function for every y, −2 ≤ y ≤ 2, has more than one element.)
The function g : R → R defined by g(x) = x² is not surjective, because there is no real number x such that x² = −1.
However, the function g : R → R₀⁺ defined by g(x) = x² (with restricted codomain) is surjective because for every y
in the nonnegative real codomain Y there is at least one x in the real domain X such that x² = y.
The natural logarithm function ln : (0, +∞) → R is a surjective and even bijective mapping from the set of positive
real numbers to the set of all real numbers. Its inverse, the exponential function, is not surjective as its range is the set
of positive real numbers and its domain is usually defined to be the set of all real numbers. The matrix exponential
is not surjective when seen as a map from the space of all n×n matrices to itself. It is, however, usually defined as a
map from the space of all n×n matrices to the general linear group of degree n, i.e. the group of all n×n invertible
matrices. Under this definition the matrix exponential is surjective for complex matrices, although still not surjective
for real matrices.
The projection from a cartesian product A × B to one of its factors is surjective unless the other factor is empty.
In a 3D video game vectors are projected onto a 2D flat screen by means of a surjective function.
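For finite domains and codomains, surjectivity can be checked directly by comparing the image with the codomain, as in this small Python sketch (the sample functions are only illustrations):

def is_surjective(f, domain, codomain):
    # Every element of the codomain must be hit by at least one element of the domain.
    return set(map(f, domain)) == set(codomain)

# f(n) = n mod 2 from {0,...,9} onto {0, 1} is surjective
print(is_surjective(lambda n: n % 2, range(10), {0, 1}))       # True
# g(x) = x*x from {-2,...,2} to {0,...,4} is not (e.g. 2 and 3 are never hit)
print(is_surjective(lambda x: x * x, range(-2, 3), range(5)))  # False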

A non-surjective function from domain X to codomain Y. The smaller oval inside Y is the image (also called range) of f. This
function is not surjective, because the image does not fill the whole codomain. In other words, Y is colored in a two-step process:
First, for every x in X, the point f(x) is colored yellow; Second, all the rest of the points in Y, that are not yellow, are colored blue.
The function f is surjective only if there are no blue points.

248.3 Properties

A function is bijective if and only if it is both surjective and injective.


If (as is often done) a function is identified with its graph, then surjectivity is not a property of the function itself, but
rather a relationship between the function and its codomain. Unlike injectivity, surjectivity cannot be read off of the
graph of the function alone.

248.3.1 Surjections as right invertible functions

The function g : Y → X is said to be a right inverse of the function f : X → Y if f(g(y)) = y for every y in Y (g can be
undone by f). In other words, g is a right inverse of f if the composition f o g of g and f in that order is the identity
function on the domain Y of g. The function g need not be a complete inverse of f because the composition in the
other order, g o f, may not be the identity function on the domain X of f. In other words, f can undo or "reverse" g,
but cannot necessarily be reversed by it.
Every function with a right inverse is necessarily a surjection. The proposition that every surjective function has a
right inverse is equivalent to the axiom of choice.
If f : X → Y is surjective and B is a subset of Y, then f(f⁻¹(B)) = B. Thus, B can be recovered from its preimage
f⁻¹(B).
For example, in the first illustration, above, there is some function g such that g(C) = 4. There is also some function
f such that f(4) = C. It doesn't matter that g(C) can also equal 3; it only matters that f reverses g.
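For a finite surjection the right inverse mentioned above can be constructed explicitly by picking one preimage for each element of the codomain; no appeal to the axiom of choice is needed in the finite case. A Python sketch with invented helper names:

def right_inverse(f, domain, codomain):
    # Build a right inverse g of a surjective finite function f, i.e. f(g(y)) = y
    # for every y in the codomain, by picking one preimage per value.
    table = {}
    for x in domain:
        table.setdefault(f(x), x)      # keep the first preimage found
    if set(table) != set(codomain):
        raise ValueError("f is not surjective onto the given codomain")
    return lambda y: table[y]

f = lambda n: n % 3
g = right_inverse(f, range(10), {0, 1, 2})
print(all(f(g(y)) == y for y in (0, 1, 2)))   # True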

Interpretation for surjective functions in the Cartesian plane, defined by the mapping f : X → Y, where y = f(x), X = domain of
function, Y = range of function. Every element in the range is mapped onto from an element in the domain, by the rule f. There
may be a number of domain elements which map to the same range element. That is, every y in Y is mapped from an element x in
X; more than one x can map to the same y. Left: Only one domain is shown which makes f surjective. Right: two possible domains
X1 and X2 are shown.

Non-surjective functions in the Cartesian plane. Although some parts of the function are surjective, where elements y in Y do have
a value x in X such that y = f(x), some parts are not. Left: There is y0 in Y, but there is no x0 in X such that y0 = f(x0). Right:
There are y1, y2 and y3 in Y, but there are no x1, x2, and x3 in X such that y1 = f(x1), y2 = f(x2), and y3 = f(x3).

Surjective composition: the first function need not be surjective.

Another surjective function. (This one happens to be a bijection)

A non-surjective function. (This one happens to be an injection)

248.3.2 Surjections as epimorphisms

A function f : X → Y is surjective if and only if it is right-cancellative:[3] given any functions g, h : Y → Z, whenever g
o f = h o f, then g = h. This property is formulated in terms of functions and their composition and can be generalized
to the more general notion of the morphisms of a category and their composition. Right-cancellative morphisms are
called epimorphisms. Specifically, surjective functions are precisely the epimorphisms in the category of sets. The
prefix epi is derived from the Greek preposition ἐπί meaning over, above, on.
Any morphism with a right inverse is an epimorphism, but the converse is not true in general. A right inverse g of a
morphism f is called a section of f. A morphism with a right inverse is called a split epimorphism.

248.3.3 Surjections as binary relations

Any function with domain X and codomain Y can be seen as a left-total and right-unique binary relation between X
and Y by identifying it with its function graph. A surjective function with domain X and codomain Y is then a binary
relation between X and Y that is right-unique and both left-total and right-total.

248.3.4 Cardinality of the domain of a surjection

The cardinality of the domain of a surjective function is greater than or equal to the cardinality of its codomain: If f
: X → Y is a surjective function, then X has at least as many elements as Y, in the sense of cardinal numbers. (The
proof appeals to the axiom of choice to show that a function g : Y → X satisfying f(g(y)) = y for all y in Y exists. g
is easily seen to be injective, thus the formal definition of |Y| ≤ |X| is satisfied.)
Specifically, if both X and Y are finite with the same number of elements, then f : X → Y is surjective if and only if
f is injective.
Given two sets X and Y, the notation X ≤* Y is used to say that either X is empty or that there is a surjection from Y
onto X. Using the axiom of choice one can show that X ≤* Y and Y ≤* X together imply that |Y| = |X|, a variant of
the Schröder–Bernstein theorem.

248.3.5 Composition and decomposition

The composite of surjective functions is always surjective: If f and g are both surjective, and the codomain of g
is equal to the domain of f, then f ∘ g is surjective. Conversely, if f ∘ g is surjective, then f is surjective (but g,
the function applied first, need not be). These properties generalize from surjections in the category of sets to any
epimorphisms in any category.

Any function can be decomposed into a surjection and an injection: For any function h : X → Z there exist a surjection
f : X → Y and an injection g : Y → Z such that h = g ∘ f. To see this, define Y to be the sets h⁻¹(z) where z is in
Z. These sets are disjoint and partition X. Then f carries each x to the element of Y which contains it, and g carries
each element of Y to the point in Z to which h sends its points. Then f is surjective since it is a projection map, and
g is injective by definition.
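The following Python sketch (an illustration assuming a finite domain with hashable values, not code from the article) carries out this decomposition, taking Y to be the set of fibers h⁻¹(z):

# Decompose h : X -> Z as h = g ∘ f with f surjective and g injective,
# where Y is the set of fibers h⁻¹(z).

def decompose(h, domain):
    fibers = {}                                   # z -> set of preimages h⁻¹(z)
    for x in domain:
        fibers.setdefault(h(x), set()).add(x)
    Y = {z: frozenset(s) for z, s in fibers.items()}
    f = lambda x: Y[h(x)]                         # send x to the fiber containing it (surjective onto the fibers)
    g = lambda fiber: h(next(iter(fiber)))        # send a fiber to its common image point (injective)
    return f, g, set(Y.values())

h = lambda n: n // 2
f, g, Y = decompose(h, range(6))
assert all(g(f(x)) == h(x) for x in range(6))     # h = g ∘ f on the whole domain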

248.3.6 Induced surjection and induced bijection


Any function induces a surjection by restricting its codomain to its range. Any surjective function induces a bijection
defined on a quotient of its domain by collapsing all arguments mapping to a given fixed image. More precisely, every
surjection f : A → B can be factored as a projection followed by a bijection as follows. Let A/~ be the equivalence
classes of A under the following equivalence relation: x ~ y if and only if f(x) = f(y). Equivalently, A/~ is the set of
all preimages under f. Let P(~) : A → A/~ be the projection map which sends each x in A to its equivalence class
[x]~, and let fP : A/~ → B be the well-defined function given by fP([x]~) = f(x). Then f = fP ∘ P(~).

248.4 See also


Bijection, injection and surjection
Cover (algebra)

Covering map
Enumeration

Fiber bundle
Index set

Section (category theory)

248.5 Notes
[1] Miller, Jeff, Injection, Surjection and Bijection, Earliest Uses of Some of the Words of Mathematics, Tripod.

[2] Arrows Unicode (PDF). Retrieved 2013-05-11.

[3] Goldblatt, Robert (2006) [1984]. Topoi, the Categorial Analysis of Logic (Revised ed.). Dover Publications. ISBN 978-0-
486-45026-1. Retrieved 2009-11-25.

248.6 References
Bourbaki, Nicolas (2004) [1968]. Theory of Sets. Springer. ISBN 978-3-540-22525-6.
Chapter 249

Suslin algebra

In mathematics, a Suslin algebra is a Boolean algebra that is complete, atomless, countably distributive, and satisfies
the countable chain condition. They are named after Mikhail Yakovlevich Suslin.
The existence of Suslin algebras is independent of the axioms of ZFC, and is equivalent to the existence of Suslin
trees or Suslin lines.

249.1 References
Jech, Thomas (2003). Set theory (third millennium (revised and expanded) ed.). Springer-Verlag. ISBN 3-
540-44085-2. OCLC 174929965. Zbl 1007.03002.

Chapter 250

Symmetric Boolean function

In mathematics, a symmetric Boolean function is a Boolean function whose value does not depend on the permutation
of its input bits, i.e., it depends only on the number of ones in the input.[1]
From the definition it follows that there are 2^(n+1) symmetric n-ary Boolean functions. It implies that instead of the truth
table, traditionally used to represent Boolean functions, one may use a more compact representation for an n-variable
symmetric Boolean function: the (n + 1)-vector, whose i-th entry (i = 0, ..., n) is the value of the function on an input
vector with i ones.
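As a small illustration (a sketch in Python, not part of the article), the (n + 1)-vector representation can be turned directly into a function:

# Build an n-ary symmetric Boolean function from its (n+1)-entry value vector:
# entry i is the value of the function on inputs containing exactly i ones.

def symmetric_from_vector(v):
    n = len(v) - 1
    def f(*bits):
        assert len(bits) == n
        return v[sum(bits)]          # the value depends only on the number of ones
    return f

parity3 = symmetric_from_vector([0, 1, 0, 1])    # value vector of 3-input parity
assert parity3(1, 0, 0) == 1 and parity3(1, 1, 0) == 0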

250.1 Special cases


A number of special cases are recognized.[1]

Threshold functions: their value is 1 on input vectors with k or more ones for a fixed k

Exact-value functions: their value is 1 on input vectors with k ones for a fixed k
Counting functions: their value is 1 on input vectors with the number of ones congruent to k mod m for fixed
k, m
Parity functions: their value is 1 if the input vector has an odd number of ones.

250.2 References
[1] Ingo Wegener, The Complexity of Symmetric Boolean Functions, in: Computation Theory and Logic, Lecture Notes in
Computer Science, vol. 270, 1987, pp. 433–442

250.3 See also


Majority function

Chapter 251

Syncategorematic term

In scholastic logic, a syncategorematic term (syncategorema) is a word that cannot serve as the subject or the pred-
icate of a proposition, and thus cannot stand for any of Aristotles categories, but can be used with other terms to
form a proposition. Words such as 'all', 'and', 'if' are examples of such terms.[1]
The distinction between categorematic and syncategorematic terms was established in ancient Greek grammar. Words
that designate self-sufficient entities (i.e., nouns or adjectives) were called categorematic, and those that do not stand
by themselves were dubbed syncategorematic (i.e., prepositions, logical connectives, etc.). Priscian in his Institutiones
grammaticae[2] translates the word as consignificantia. Scholastics retained the difference, which became a dissertable
topic after the 13th century revival of logic. William of Sherwood, a representative of terminism, wrote a treatise
called Syncategoremata. Later his pupil, Peter of Spain, produced a similar work entitled Syncategoreumata.[3]
In propositional calculus, a syncategorematic term is a term that has no individual meaning (a term with an individual
meaning is called categorematic). Whether a term is syncategorematic or not is determined by the way it is defined
or introduced in the language.
In the common definition of propositional logic, examples of syncategorematic terms are the logical connectives. Let
us take the connective ∧ for instance; its semantic rule is:
[[φ ∧ ψ]] = 1 iff [[φ]] = [[ψ]] = 1
So its meaning is defined when it occurs in combination with two formulas φ and ψ. But it has no meaning when
taken in isolation, i.e. [[∧]] is not defined.
We could however define the ∧ in a different manner, e.g., using λ-abstraction: (λb.(λv.b(v)(b))), which expects
a pair of Boolean-valued arguments, i.e., arguments that are either TRUE or FALSE, defined as (λx.(λy.x)) and
(λx.(λy.y)) respectively. This is an expression of type ⟨⟨t,t⟩,t⟩. Its meaning is thus a binary function from pairs of
entities of type truth-value to an entity of type truth-value. Under this definition it would be non-syncategorematic,
or categorematic. Note that while this definition would formally define the ∧ function, it requires the use of λ-
abstraction, in which case the λ itself is introduced syncategorematically, thus simply moving the issue up another
level of abstraction.
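For illustration, the λ-terms above can be mirrored directly with Python lambdas (a sketch using the Church encodings named in the text, not part of the original article):

# Church booleans: TRUE = λx.λy.x, FALSE = λx.λy.y, and the categorematic
# definition of ∧ as (λb.(λv.b(v)(b))).

TRUE = lambda x: lambda y: x
FALSE = lambda x: lambda y: y
AND = lambda b: lambda v: b(v)(b)

assert AND(TRUE)(TRUE) is TRUE
assert AND(TRUE)(FALSE) is FALSE
assert AND(FALSE)(TRUE) is FALSE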

251.1 Notes
[1] Grant, p. 120.

[2] Priscian, Institutiones grammaticae, II, 15

[3] Peter of Spain, Stanford Encyclopedia of Philosophy online

251.2 References
Grant, Edward, God and Reason in the Middle Ages, Cambridge University Press (July 30, 2001), ISBN 978-
0-521-00337-7.

Chapter 252

System L

System L is a natural deductive logic developed by E.J. Lemmon.[1] Derived from Suppes' method,[2] it represents
natural deduction proofs as sequences of justified steps. Both methods are derived from Gentzen's 1934/1935 natural
deduction system,[3] in which proofs were presented in tree-diagram form rather than in the tabular form of Suppes
and Lemmon. Although the tree-diagram layout has advantages for philosophical and educational purposes, the
tabular layout is much more convenient for practical applications.
A similar tabular layout is presented by Kleene.[4] The main difference is that Kleene does not abbreviate the left-hand
sides of assertions to line numbers, preferring instead to either give full lists of precedent propositions or alternatively
indicate the left-hand sides by bars running down the left of the table to indicate dependencies. However, Kleene's
version has the advantage that it is presented, although only very sketchily, within a rigorous framework of metamath-
ematical theory, whereas the books by Suppes[2] and Lemmon[1] are applications of the tabular layout for teaching
introductory logic.

252.1 Description of the deductive system


The syntax of proof is governed by nine primitive rules:

1. The Rule of Assumption (A)


2. Modus Ponendo Ponens (MPP)
3. The Rule of Double Negation (DN)
4. The Rule of Conditional Proof (CP)
5. The Rule of ∧-introduction (∧I)
6. The Rule of ∧-elimination (∧E)
7. The Rule of ∨-introduction (∨I)
8. The Rule of ∨-elimination (∨E)
9. Reductio Ad Absurdum (RAA)

In system L, a proof has a definition with the following conditions:

1. it has a finite sequence of well-formed formulas (or wffs),


2. each line of it is justified by a rule of the system L,
3. the last line of the proof is what is intended, and this last line of the proof uses only the premises that were
given, if any.

If no premise is given, the sequent is called a theorem. Therefore, the definition of a theorem in system L is:

a theorem is a sequent that can be proved in system L, using an empty set of assumptions.


252.2 Examples
An example of the proof of a sequent (Modus Tollendo Tollens in this case):
An example of the proof of a sequent (a theorem in this case):
Each rule of system L has its own requirements for the type of input(s) or entry(es) that it can accept and has its own
way of treating and calculating the assumptions used by its inputs.

252.3 History of tabular natural deduction systems


The historical development of tabular-layout natural deduction systems, which are rule-based, and which indicate
antecedent propositions by line numbers (and related methods such as vertical bars or asterisks) includes the following
publications.

1940: In a textbook, Quine[5] indicated antecedent dependencies by line numbers in square brackets, antici-
pating Suppes' 1957 line-number notation.

1950: In a textbook, Quine (1982, pp. 241–255) demonstrated a method of using one or more asterisks to the
left of each line of proof to indicate dependencies. This is equivalent to Kleene's vertical bars. (It is not totally
clear if Quine's asterisk notation appeared in the original 1950 edition or was added in a later edition.)

1957: An introduction to practical logic theorem proving in a textbook by Suppes (1999, pp. 25–150). This
indicated dependencies (i.e. antecedent propositions) by line numbers at the left of each line.

1963: Stoll (1979, pp. 183–190, 215–219) uses sets of line numbers to indicate antecedent dependencies of
the lines of sequential logical arguments based on natural deduction inference rules.

1965: The entire textbook by Lemmon (1965) is an introduction to logic proofs using a method based on that
of Suppes.

1967: In a textbook, Kleene (2002, pp. 50–58, 128–130) briefly demonstrated two kinds of practical logic
proofs, one system using explicit quotations of antecedent propositions on the left of each line, the other system
using vertical bar-lines on the left to indicate dependencies.[6]

252.4 See also


Natural deduction

Sequent calculus

Deductive systems

252.5 Notes
[1] See Lemmon 1965 for an introductory presentation of Lemmon's natural deduction system.

[2] See Suppes 1999, pp. 25–150, for an introductory presentation of Suppes' natural deduction system.

[3] Gentzen 1934, Gentzen 1935.

[4] Kleene 2002, pp. 50–56, 128–130.

[5] Quine (1981). See particularly pages 91–93 for Quine's line-number notation for antecedent dependencies.

[6] A particular advantage of Kleene's tabular natural deduction systems is that he proves the validity of the inference rules for
both propositional calculus and predicate calculus. See Kleene 2002, pp. 44–45, 118–119.

252.6 References
Gentzen, Gerhard Karl Erich (1934). Untersuchungen über das logische Schließen. I. Mathematische
Zeitschrift. 39 (2): 176–210. doi:10.1007/BF01201353. (English translation Investigations into Logical De-
duction in Szabo.)

Gentzen, Gerhard Karl Erich (1935). Untersuchungen über das logische Schließen. II. Mathematische
Zeitschrift. 39 (3): 405–431. doi:10.1007/bf01201363.

Kleene, Stephen Cole (2002) [1967]. Mathematical logic. Mineola, New York: Dover Publications. ISBN
978-0-486-42533-7.
Lemmon, Edward John (1965). Beginning logic. Thomas Nelson. ISBN 0-17-712040-1.

Quine, Willard Van Orman (1981) [1940]. Mathematical logic (Revised ed.). Cambridge, Massachusetts:
Harvard University Press. ISBN 978-0-674-55451-1.

Quine, Willard Van Orman (1982) [1950]. Methods of logic (Fourth ed.). Cambridge, Massachusetts: Harvard
University Press. ISBN 978-0-674-57176-1.

Stoll, Robert Roth (1979) [1963]. Set Theory and Logic. Mineola, New York: Dover Publications. ISBN
978-0-486-63829-4.

Suppes, Patrick Colonel (1999) [1957]. Introduction to logic. Mineola, New York: Dover Publications. ISBN
978-0-486-40687-9.

Szabo, M.E. (1969). The collected papers of Gerhard Gentzen. Amsterdam: North-Holland.

252.7 External links


Pelletier, Jeff, "A History of Natural Deduction and Elementary Logic Textbooks."
Chapter 253

Tarski's World

Tarski's World is a computer-based introduction to first-order logic written by Jon Barwise and John Etchemendy. It
is named after the mathematical logician Alfred Tarski. The package includes a book, which serves as a textbook and
manual, and a computer program which together serve as an introduction to the semantics of logic through games
in which simple, three-dimensional worlds are populated with various geometric figures and these are used to test
the truth or falsehood of first-order logic sentences. The program is also included in the Language, Proof and Logic
package.[1][2][3][4][5]

253.1 The programme


Barwise, J., & Etchemendy, J. (1993). Tarski's World. Stanford, Calif: CSLI Publ.
Barker-Plummer, D., Barwise, J., & Etchemendy, J. (2008). Tarski's World. Stanford, Calif: CSLI Publica-
tions.
The Openproof Project at CSLI: home page of the Tarski's World courseware package, Dave Barker-Plummer,
Jon Barwise and John Etchemendy in collaboration with Albert Liu

253.2 References
[1] Goldson, D., (1994) Review of The Language of First-Order Logic, including the Macintosh Program Tarski's World. The
Philosophical Quarterly, 44, 175, 272–275.

[2] Fallis, D., (1999). Review of The Language of First-Order Logic, Including the IBM-Compatible Windows Version of
Tarski's World 4.0. Journal of Symbolic Logic, 64, 2, 916–918.

[3] Compton, K. J., (1993). Review of The Language of First-Order Logic, including the Program Tarski's World. Journal of
Symbolic Logic, 58, 1, 362–363.

[4] Bailhache, P. (1992). Review of The Language of First-Order Logic, Including the Macintosh Tarski's World. Studia
Logica, 51, 1, 145–147.

[5] Goldson, D., Reeves, S. and R. Bornat (1993) A Review of Several Programs for the Teaching of Logic, The Computer
Journal, Volume 36, Issue 4, pp. 373–386

253.3 External links


A short video clip showing how to use the Tarski's World program for Language, Proof and Logic.

Chapter 254

Tautology (logic)

In logic, a tautology (from the Greek word ταυτολογία) is a formula that is true in every possible interpretation.
Philosopher Ludwig Wittgenstein first applied the term to redundancies of propositional logic in 1921. (It had been
used earlier to refer to rhetorical tautologies, and continues to be used in that alternative sense.) A formula is satisfiable
if it is true under at least one interpretation, and thus a tautology is a formula whose negation is unsatisfiable. Unsat-
isfiable statements, both through negation and affirmation, are known formally as contradictions. A formula that is
neither a tautology nor a contradiction is said to be logically contingent. Such a formula can be made either true or
false based on the values assigned to its propositional variables. The double turnstile notation ⊨ S is used to indicate
that S is a tautology. Tautology is sometimes symbolized by "Vpq", and contradiction by "Opq". The tee symbol ⊤
is sometimes used to denote an arbitrary tautology, with the dual symbol ⊥ (falsum) representing an arbitrary con-
tradiction; in any symbolism, a tautology may be substituted for the truth value "true", as symbolized, for instance,
by "1".
Tautologies are a key concept in propositional logic, where a tautology is defined as a propositional formula that is
true under any possible Boolean valuation of its propositional variables. A key property of tautologies in propositional
logic is that an effective method exists for testing whether a given formula is always satisfied (or, equivalently, whether
its negation is unsatisfiable).
The definition of tautology can be extended to sentences in predicate logic, which may contain quantifiers, unlike
sentences of propositional logic. In propositional logic, there is no distinction between a tautology and a logically valid
formula. In the context of predicate logic, many authors define a tautology to be a sentence that can be obtained by
taking a tautology of propositional logic and uniformly replacing each propositional variable by a first-order formula
(one formula per propositional variable). The set of such formulas is a proper subset of the set of logically valid
sentences of predicate logic (which are the sentences that are true in every model).

254.1 History
The word tautology was used by the ancient Greeks to describe a statement that was asserted to be true merely by
virtue of saying the same thing twice, a pejorative meaning that is still used for rhetorical tautologies. Between 1800
and 1940, the word gained new meaning in logic, and is currently used in mathematical logic to denote a certain type
of propositional formula, without the pejorative connotations it originally possessed.
In 1800, Immanuel Kant wrote in his book Logic:

The identity of concepts in analytical judgments can be either explicit (explicita) or non-explicit (im-
plicita). In the former case analytic propositions are tautological."

Here analytic proposition refers to an analytic truth, a statement in natural language that is true solely because of the
terms involved.
In 1884, Gottlob Frege proposed in his Grundlagen that a truth is analytic exactly if it can be derived using logic.
But he maintained a distinction between analytic truths (those true based only on the meanings of their terms) and
tautologies (statements devoid of content).


In 1921, in his Tractatus Logico-Philosophicus, Ludwig Wittgenstein proposed that statements that can be deduced
by logical deduction are tautological (empty of meaning) as well as being analytic truths. Henri Poincaré had made
similar remarks in Science and Hypothesis in 1905. Although Bertrand Russell at first argued against these remarks
by Wittgenstein and Poincaré, claiming that mathematical truths were not only non-tautologous but were synthetic,
he later spoke in favor of them in 1918:

Everything that is a proposition of logic has got to be in some sense or the other like a tautology. It has
got to be something that has some peculiar quality, which I do not know how to define, that belongs to
logical propositions but not to others.

Here logical proposition refers to a proposition that is provable using the laws of logic.
During the 1930s, the formalization of the semantics of propositional logic in terms of truth assignments was devel-
oped. The term tautology began to be applied to those propositional formulas that are true regardless of the truth
or falsity of their propositional variables. Some early books on logic (such as Symbolic Logic by C. I. Lewis and
Langford, 1932) used the term for any proposition (in any formal logic) that is universally valid. It is common in
presentations after this (such as Stephen Kleene 1967 and Herbert Enderton 2002) to use tautology to refer to a logi-
cally valid propositional formula, but to maintain a distinction between tautology and logically valid in the context of
rst-order logic (see below).

254.2 Background
Main article: propositional logic

Propositional logic begins with propositional variables, atomic units that represent concrete propositions. A for-
mula consists of propositional variables connected by logical connectives, built up in such a way that the truth of the
overall formula can be deduced from the truth or falsity of each variable. A valuation is a function that assigns each
propositional variable either T (for truth) or F (for falsity). So, for example, using the propositional variables A and
B, the binary connectives and representing disjunction and conjunction respectively, and the unary connective
representing negation, the following formula can be obtained:: (A B) (A) (B) . A valuation here must
assign to each of A and B either T or F. But no matter how this assignment is made, the overall formula will come out
true. For if the rst conjunction (A B) is not satised by a particular valuation, then one of A and B is assigned F,
which will cause the corresponding later disjunct to be T.

254.3 Definition and examples


A formula of propositional logic is a tautology if the formula itself is always true regardless of which valuation is
used for the propositional variables.
There are infinitely many tautologies. Examples include:

(A ∨ ¬A) ("A or not A"), the law of the excluded middle. This formula has only one propositional variable,
A. Any valuation for this formula must, by definition, assign A one of the truth values true or false, and assign
¬A the other truth value.

(A → B) ⇔ (¬B → ¬A) ("if A implies B, then not-B implies not-A", and vice versa), which expresses the
law of contraposition.

((¬A → B) ∧ (¬A → ¬B)) → A ("if not-A implies both B and its negation not-B, then not-A must be false,
then A must be true"), which is the principle known as reductio ad absurdum.

¬(A ∧ B) ⇔ (¬A ∨ ¬B) ("if not both A and B, then not-A or not-B", and vice versa), which is known as De
Morgan's law.

((A → B) ∧ (B → C)) → (A → C) ("if A implies B and B implies C, then A implies C"), which is the
principle known as syllogism.

((A ∨ B) ∧ (A → C) ∧ (B → C)) → C ("if at least one of A or B is true, and each implies C, then C must
be true as well"), which is the principle known as proof by cases.

A minimal tautology is a tautology that is not the instance of a shorter tautology.

(A ∨ B) → (A ∨ B) is a tautology, but not a minimal one, because it is an instantiation of C → C.

254.4 Verifying tautologies


The problem of determining whether a formula is a tautology is fundamental in propositional logic. If there are n
variables occurring in a formula then there are 2^n distinct valuations for the formula. Therefore, the task of deter-
mining whether or not the formula is a tautology is a finite, mechanical one: one need only evaluate the truth value of
the formula under each of its possible valuations. One algorithmic method for verifying that every valuation causes
this sentence to be true is to make a truth table that includes every possible valuation.
For example, consider the formula

((A ∧ B) → C) ⇔ (A → (B → C)).

There are 8 possible valuations for the propositional variables A, B, C, represented by the first three columns of the
following table. The remaining columns show the truth of subformulas of the formula above, culminating in a column
showing the truth value of the original formula under each valuation.
Because each row of the final column shows T, the sentence in question is verified to be a tautology.
It is also possible to define a deductive system (proof system) for propositional logic, as a simpler variant of the
deductive systems employed for first-order logic (see Kleene 1967, Sec 1.9 for one such system). A proof of a
tautology in an appropriate deduction system may be much shorter than a complete truth table (a formula with n
propositional variables requires a truth table with 2^n lines, which quickly becomes infeasible as n increases). Proof
systems are also required for the study of intuitionistic propositional logic, in which the method of truth tables cannot
be employed because the law of the excluded middle is not assumed.
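As a sketch of the truth-table method (illustrative Python, not part of the article), the example formula above can be checked by enumerating all 2^n valuations:

# Check a propositional formula under every valuation of its variables.
from itertools import product

def implies(p, q):
    return (not p) or q

def is_tautology(formula, n_vars):
    return all(formula(*vals) for vals in product([False, True], repeat=n_vars))

# ((A ∧ B) → C) ⇔ (A → (B → C)), the example formula above:
example = lambda a, b, c: implies(a and b, c) == implies(a, implies(b, c))
print(is_tautology(example, 3))   # True: every row of the truth table is T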

254.5 Tautological implication


Main article: Tautological consequence

A formula R is said to tautologically imply a formula S if every valuation that causes R to be true also causes S to
be true. This situation is denoted R ⊨ S. It is equivalent to the formula R → S being a tautology (Kleene 1967 p.
27).
For example, let S be A ∧ (B ∨ ¬B). Then S is not a tautology, because any valuation that makes A false will make
S false. But any valuation that makes A true will make S true, because B ∨ ¬B is a tautology. Let R be the formula
A ∧ C. Then R ⊨ S, because any valuation satisfying R makes A true and thus makes S true.
It follows from the definition that if a formula R is a contradiction then R tautologically implies every formula, because
there is no truth valuation that causes R to be true and so the definition of tautological implication is trivially satisfied.
Similarly, if S is a tautology then S is tautologically implied by every formula.

254.6 Substitution
Main article: Substitution instance

There is a general procedure, the substitution rule, that allows additional tautologies to be constructed from a given
tautology (Kleene 1967 sec. 3). Suppose that S is a tautology and for each propositional variable A in S a fixed
sentence SA is chosen. Then the sentence obtained by replacing each variable A in S with the corresponding sentence
SA is also a tautology.
For example, let S be the tautology

(A ∧ B) → (A ∨ B).

Let SA be C ∨ D and let SB be C → E. It follows from the substitution rule that the sentence

((C ∨ D) ∧ (C → E)) → ((C ∨ D) ∨ (C → E))

is a tautology, too. In turn, a tautology may be substituted for the truth value "true".

254.7 Semantic completeness and soundness


An axiomatic system is complete if every tautology is a theorem (derivable from axioms). An axiomatic system is
sound if every theorem is a tautology.

254.8 Efficient verification and the Boolean satisfiability problem


The problem of constructing practical algorithms to determine whether sentences with large numbers of propositional
variables are tautologies is an area of contemporary research in the area of automated theorem proving.
The method of truth tables illustrated above is provably correct: the truth table for a tautology will end in a column
with only T, while the truth table for a sentence that is not a tautology will contain a row whose final column is F, and
the valuation corresponding to that row is a valuation that does not satisfy the sentence being tested. This method
for verifying tautologies is an effective procedure, which means that given unlimited computational resources it can
always be used to mechanistically determine whether a sentence is a tautology. This means, in particular, the set of
tautologies over a fixed finite or countable alphabet is a decidable set.
As an efficient procedure, however, truth tables are constrained by the fact that the number of valuations that must be
checked increases as 2^k, where k is the number of variables in the formula. This exponential growth in the computation
length renders the truth table method useless for formulas with thousands of propositional variables, as contemporary
computing hardware cannot execute the algorithm in a feasible time period.
The problem of determining whether there is any valuation that makes a formula true is the Boolean satisfiability
problem; the problem of checking tautologies is equivalent to this problem, because verifying that a sentence S is a
tautology is equivalent to verifying that there is no valuation satisfying ¬S. It is known that the Boolean satisfiability
problem is NP-complete, and widely believed that there is no polynomial-time algorithm that can perform it. Current
research focuses on finding algorithms that perform well on special classes of formulas, or terminate quickly on
average even though some inputs may cause them to take much longer.

254.9 Tautologies versus validities in first-order logic


The fundamental definition of a tautology is in the context of propositional logic. The definition can be extended,
however, to sentences in first-order logic (see Enderton (2002, p. 114) and Kleene (1967 secs. 17–18)). These sen-
tences may contain quantifiers, unlike sentences of propositional logic. In the context of first-order logic, a distinction
is maintained between logical validities, sentences that are true in every model, and tautologies, which are a proper
subset of the first-order logical validities. In the context of propositional logic, these two terms coincide.
A tautology in first-order logic is a sentence that can be obtained by taking a tautology of propositional logic and
uniformly replacing each propositional variable by a first-order formula (one formula per propositional variable). For
example, because A ∨ ¬A is a tautology of propositional logic, (∀x(x = x)) ∨ ¬(∀x(x = x)) is a tautology in
first-order logic. Similarly, in a first-order language with unary relation symbols R, S, T, the following sentence is a
tautology:

(((∃x Rx) ∧ (∃x Sx)) → ∀x Tx) ⇔ ((∃x Rx) → ((∃x Sx) → ∀x Tx)).

It is obtained by replacing A with ∃x Rx, B with ∃x Sx, and C with ∀x Tx in the propositional tautology ((A
∧ B) → C) ⇔ (A → (B → C)).
Not all logical validities are tautologies in first-order logic. For example, the sentence

(∀x Rx) → ∃x Rx

is true in any first-order interpretation, but it corresponds to the propositional sentence A → B which is not a tautology
of propositional logic.

254.10 See also

254.10.1 Normal forms


Algebraic normal form
Conjunctive normal form

Disjunctive normal form


Logic optimization

254.10.2 Related logical topics

254.11 References
Bocheński, J. M. (1959) Précis of Mathematical Logic, translated from the French and German editions by
Otto Bird, Dordrecht, South Holland: D. Reidel.
Enderton, H. B. (2002) A Mathematical Introduction to Logic, Harcourt/Academic Press, ISBN 0-12-238452-0.

Kleene, S. C. (1967) Mathematical Logic, reprinted 2002, Dover Publications, ISBN 0-486-42533-9.
Reichenbach, H. (1947). Elements of Symbolic Logic, reprinted 1980, Dover, ISBN 0-486-24004-5

Wittgenstein, L. (1921). Logisch-philosophische Abhandlung, Annalen der Naturphilosophie (Leipzig), v. 14,


pp. 185–262, reprinted in English translation as Tractatus logico-philosophicus, New York City and London,
1922.

254.12 External links


Hazewinkel, Michiel, ed. (2001) [1994], Tautology, Encyclopedia of Mathematics, Springer Science+Business
Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

Weisstein, Eric W. Tautology. MathWorld.


Chapter 255

Tautology (rule of inference)

In propositional logic, tautology is one of two commonly used rules of replacement.[1][2][3] The rules are used to
eliminate redundancy in disjunctions and conjunctions when they occur in logical proofs. They are:
The principle of idempotency of disjunction:

P ∨ P ⇔ P

and the principle of idempotency of conjunction:

P ∧ P ⇔ P

Where "⇔" is a metalogical symbol representing "can be replaced in a logical proof with".

255.1 Formal notation


Theorems are those logical formulas φ where ⊢ φ is the conclusion of a valid proof,[4] while the equivalent semantic
consequence ⊨ φ indicates a tautology.
The tautology rule may be expressed as a sequent:

P ∨ P ⊢ P

and

P ∧ P ⊢ P

where ⊢ is a metalogical symbol meaning that P is a syntactic consequence of P ∨ P, in the one case, P ∧ P in the
other, in some logical system;
or as a rule of inference:

P ∨ P
∴ P
and

P ∧ P
∴ P


where the rule is that wherever an instance of "P ∨ P" or "P ∧ P" appears on a line of a proof, it can be replaced
with "P";
or as the statement of a truth-functional tautology or theorem of propositional logic. The principle was stated as a
theorem of propositional logic by Russell and Whitehead in Principia Mathematica as:

(P ∨ P) ↔ P

and

(P ∧ P) ↔ P

where P is a proposition expressed in some formal system.

255.2 References
[1] Hurley, Patrick (1991). A Concise Introduction to Logic 4th edition. Wadsworth Publishing. pp. 36–45.

[2] Copi and Cohen

[3] Moore and Parker

[4] Logic in Computer Science, p. 13


Chapter 256

Ternary equivalence relation

In mathematics, a ternary equivalence relation is a kind of ternary relation analogous to a binary equivalence
relation. A ternary equivalence relation is symmetric, reflexive, and transitive. The classic example is the relation
of collinearity among three points in Euclidean space. In an abstract set, a ternary equivalence relation determines a
collection of equivalence classes or pencils that form a linear space in the sense of incidence geometry. In the same
way, a binary equivalence relation on a set determines a partition.

256.1 Definition
A ternary equivalence relation on a set X is a relation E ⊆ X³, written [a, b, c], that satisfies the following axioms:

1. Symmetry: If [a, b, c] then [b, c, a] and [c, b, a]. (Therefore also [a, c, b], [b, a, c], and [c, a, b].)

2. Reflexivity: [a, b, b]. Equivalently, if a, b, and c are not all distinct, then [a, b, c].

3. Transitivity: If a ≠ b and [a, b, c] and [a, b, d] then [b, c, d]. (Therefore also [a, c, d].)
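As an illustration (a Python sketch assuming exact integer coordinates, not code from the article), the classic collinearity relation can be checked against these axioms on a few triples:

# Collinearity of three points in the plane as a ternary relation [a, b, c].

def collinear(a, b, c):
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    return (bx - ax) * (cy - ay) == (by - ay) * (cx - ax)   # zero cross product

p, q, r = (0, 0), (1, 1), (2, 2)
assert collinear(p, q, r) and collinear(q, r, p) and collinear(r, q, p)   # symmetry
assert collinear(p, q, q)                                                 # reflexivity
assert not collinear(p, q, (1, 0))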

256.2 References
Araújo, João; Konieczny, Janusz (2007), A method of finding automorphism groups of endomorphism
monoids of relational systems, Discrete Mathematics, 307: 1609–1620, doi:10.1016/j.disc.2006.09.029
Bachmann, Friedrich, Aufbau der Geometrie aus dem Spiegelungsbegriff

Karzel, Helmut (2007), Loops related to geometric structures, Quasigroups and Related Systems, 15: 47–76
Karzel, Helmut; Pianta, Silvia (2008), Binary operations derived from symmetric permutation sets and appli-
cations to absolute geometry, Discrete Mathematics, 308: 415–421, doi:10.1016/j.disc.2006.11.058
Karzel, Helmut; Marchi, Mario; Pianta, Silvia (December 2010), The defect in an invariant reflection struc-
ture, Journal of Geometry, 99 (1-2): 67–87, doi:10.1007/s00022-010-0058-7
Lingenberg, Rolf (1979), Metric planes and metric vector spaces, Wiley

Rainich, G.Y. (1952), Ternary relations in geometry and algebra, Michigan Mathematical Journal, 1 (2):
97–111, doi:10.1307/mmj/1028988890

Szmielew, Wanda (1981), On n-ary equivalence relations and their application to geometry, Warsaw: Instytut
Matematyczny Polskiej Akademi Nauk

Chapter 257

Ternary relation

In mathematics, a ternary relation or triadic relation is a finitary relation in which the number of places in the
relation is three. Ternary relations may also be referred to as 3-adic, 3-ary, 3-dimensional, or 3-place.
Just as a binary relation is formally defined as a set of pairs, i.e. a subset of the Cartesian product A × B of some sets
A and B, so a ternary relation is a set of triples, forming a subset of the Cartesian product A × B × C of three sets A,
B and C.
An example of a ternary relation in elementary geometry is the collinearity of points.

257.1 Examples

257.1.1 Binary functions

Further information: Graph of a function and Binary function

A function f : A × B → C in two variables, taking values in two sets A and B, respectively, is formally a function that
associates to every pair (a, b) in A × B an element f(a, b) in C. Therefore, its graph consists of pairs of the form ((a,
b), f(a, b)). Such pairs in which the first element is itself a pair are often identified with triples. This makes the graph
of f a ternary relation between A, B and C, consisting of all triples (a, b, f(a, b)), for all a in A and b in B.
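A short Python sketch of this identification (illustrative, not from the article):

# The graph of a binary function f : A × B → C as a set of triples (a, b, f(a, b)).

A, B = {1, 2}, {10, 20}
f = lambda a, b: a + b

graph = {(a, b, f(a, b)) for a in A for b in B}
print(sorted(graph))   # [(1, 10, 11), (1, 20, 21), (2, 10, 12), (2, 20, 22)]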

257.1.2 Cyclic orders

Main article: Cyclic order

Given any set A whose elements are arranged on a circle, one can define a ternary relation R on A, i.e. a subset of A³
= A × A × A, by stipulating that R(a, b, c) holds if and only if the elements a, b and c are pairwise different and when
going from a to c in a clockwise direction one passes through b. For example, if A = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
12 } represents the hours on a clock face, then R(8, 12, 4) holds and R(12, 8, 4) does not hold.
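A minimal Python sketch of this clock-face relation (illustrative code assuming numeric hour labels, not from the article):

# R(a, b, c): a, b, c are pairwise different and, going clockwise from a,
# one passes through b before reaching c.

def R(a, b, c, m=12):
    if len({a, b, c}) < 3:
        return False
    return (b - a) % m < (c - a) % m

assert R(8, 12, 4) and not R(12, 8, 4)   # the examples from the text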

257.1.3 Betweenness relations

Main article: Betweenness relation

257.1.4 Congruence relation

Main article: Congruence modulo m


The ordinary congruence of arithmetic

a ≡ b (mod m)

which holds for three integers a, b, and m if and only if m divides a − b, formally may be considered as a ternary
relation. However, usually, this instead is considered as a family of binary relations between the a and the b, indexed
by the modulus m. For each fixed m, indeed this binary relation has some natural properties, like being an equivalence
relation; while the combined ternary relation in general is not studied as one relation.

257.1.5 Typing relation


Main article: Simply typed lambda calculus § Typing rules

A typing relation Γ ⊢ e : σ indicates that e is a term of type σ in context Γ, and is thus a ternary relation between
contexts, terms and types.

257.2 Further reading


Myers, Dale (1997), An interpretive isomorphism between binary and ternary relations, in Mycielski, Jan;
Rozenberg, Grzegorz; Salomaa, Arto, Structures in Logic and Computer Science, Lecture Notes in Computer
Science, 1261, Springer, pp. 84–105, ISBN 3-540-63246-8, doi:10.1007/3-540-63246-8_6

Novák, Vítězslav (1996), Ternary structures and partial semigroups, Czechoslovak Mathematical Journal, 46
(1): 111–120, hdl:10338.dmlcz/127275
Novák, Vítězslav; Novotný, Miroslav (1989), Transitive ternary relations and quasiorderings, Archivum
Mathematicum, 25 (1–2): 5–12, hdl:10338.dmlcz/107333
Novák, Vítězslav; Novotný, Miroslav (1992), Binary and ternary relations, Mathematica Bohemica, 117 (3):
283–292, hdl:10338.dmlcz/126278
Novotný, Miroslav (1991), Ternary structures and groupoids, Czechoslovak Mathematical Journal, 41 (1):
90–98, hdl:10338.dmlcz/102437
Šlapal, Josef (1993), Relations and topologies, Czechoslovak Mathematical Journal, 43 (1): 141–150, hdl:10338.dmlcz/128381
Chapter 258

Transposition (logic)

In propositional logic, transposition[1][2][3] is a valid rule of replacement that permits one to switch the antecedent
with the consequent of a conditional statement in a logical proof if they are also both negated. It is the inference from
the truth of "A implies B" to the truth of "Not-B implies not-A", and conversely.[4][5] It is very closely related to the rule
of inference modus tollens. It is the rule that:
(P → Q) ⇔ (¬Q → ¬P)
Where "⇔" is a metalogical symbol representing "can be replaced in a proof with".

258.1 Formal notation


The transposition rule may be expressed as a sequent:

(P → Q) ⊢ (¬Q → ¬P)
where ⊢ is a metalogical symbol meaning that (¬Q → ¬P) is a syntactic consequence of (P → Q) in some logical
system;
or as a rule of inference:

P → Q
∴ ¬Q → ¬P
where the rule is that wherever an instance of "P → Q" appears on a line of a proof, it can be replaced with
"¬Q → ¬P";
or as the statement of a truth-functional tautology or theorem of propositional logic. The principle was stated as a
theorem of propositional logic by Russell and Whitehead in Principia Mathematica as:

(P → Q) ↔ (¬Q → ¬P)
where P and Q are propositions expressed in some formal system.

258.2 Traditional logic

258.2.1 Form of transposition


In the inferred proposition, the consequent is the contradictory of the antecedent in the original proposition, and the
antecedent of the inferred proposition is the contradictory of the consequent of the original proposition. The symbol
for material implication signifies the proposition as a hypothetical, or the "if-then" form, e.g. "if P then Q".


The biconditional statement of the rule of transposition (⇔) refers to the relation between hypothetical (→) propo-
sitions, with each proposition including an antecedent and consequential term. As a matter of logical inference, to
transpose or convert the terms of one proposition requires the conversion of the terms of the propositions on both
sides of the biconditional relationship. Meaning, to transpose or convert (P → Q) to (Q → P) requires that the other
proposition, (~Q → ~P), be transposed or converted to (~P → ~Q). Otherwise, to convert the terms of one proposition
and not the other renders the rule invalid, violating the sufficient condition and necessary condition of the terms of
the propositions, where the violation is that the changed proposition commits the fallacy of denying the antecedent
or affirming the consequent by means of illicit conversion.
The truth of the rule of transposition is dependent upon the relations of sufficient condition and necessary condition
in logic.

258.2.2 Sufficient condition


In the proposition "If P then Q", the occurrence of 'P' is sufficient reason for the occurrence of 'Q'. 'P', as an individual
or a class, materially implicates 'Q', but the relation of 'Q' to 'P' is such that the converse proposition "If Q then P"
does not necessarily have sufficient condition. The rule of inference for sufficient condition is modus ponens, which
is an argument for conditional implication:
Premise (1): If P, then Q
Premise (2): P
Conclusion: Therefore, Q

258.2.3 Necessary condition


Since the converse of premise (1) is not valid, all that can be stated of the relationship of 'P' and 'Q' is that in the
absence of 'Q', 'P' does not occur, meaning that 'Q' is the necessary condition for 'P'. The rule of inference for
necessary condition is modus tollens:
Premise (1): If P, then Q
Premise (2): not Q
Conclusion: Therefore, not P

258.2.4 Necessity and sufficiency example


An example traditionally used by logicians contrasting sufficient and necessary conditions is the statement "If there
is fire, then oxygen is present". An oxygenated environment is necessary for fire or combustion, but simply because
there is an oxygenated environment does not necessarily mean that fire or combustion is occurring. While one can
infer that fire stipulates the presence of oxygen, from the presence of oxygen the converse "If there is oxygen present,
then fire is present" cannot be inferred. All that can be inferred from the original proposition is that "If oxygen is not
present, then there cannot be fire".

258.2.5 Relationship of propositions


The symbol for the biconditional ("⇔") signifies the relationship between the propositions is both necessary and
sufficient, and is verbalized as "if and only if", or, according to the example "If P then Q 'if and only if' if not Q then
not P".
Necessary and sufficient conditions can be explained by analogy in terms of the concepts and the rules of immediate
inference of traditional logic. In the categorical proposition "All S is P", the subject term 'S' is said to be distributed,
that is, all members of its class are exhausted in its expression. Conversely, the predicate term 'P' cannot be said to
be distributed, or exhausted in its expression because it is indeterminate whether every instance of a member of 'P'
as a class is also a member of 'S' as a class. All that can be validly inferred is that "Some P are S". Thus, the type
'A' proposition "All P is S" cannot be inferred by conversion from the original 'A' type proposition "All S is P". All
that can be inferred is the type "A" proposition "All non-P is non-S" (Note that (P → Q) and (~Q → ~P) are both 'A'
type propositions). Grammatically, one cannot infer "all mortals are men" from "All men are mortal". An 'A' type
proposition can only be immediately inferred by conversion when both the subject and predicate are distributed, as
in the inference "All bachelors are unmarried men" from "All unmarried men are bachelors".

258.2.6 Transposition and the method of contraposition


In traditional logic the reasoning process of transposition as a rule of inference is applied to categorical propositions
through contraposition and obversion,[6] a series of immediate inferences where the rule of obversion is first applied to
the original categorical proposition "All S is P"; yielding the obverse "No S is non-P". In the obversion of the original
proposition to an 'E' type proposition, both terms become distributed. The obverse is then converted, resulting in
"No non-P is S", maintaining distribution of both terms. The "No non-P is S" is again obverted, resulting in the
[contrapositive] "All non-P is non-S". Since nothing is said in the definition of contraposition with regard to the
predicate of the inferred proposition, it is permissible that it could be the original subject or its contradictory, and
the predicate term of the resulting 'A' type proposition is again undistributed. This results in two contrapositives, one
where the predicate term is distributed, and another where the predicate term is undistributed.[7]

258.2.7 Differences between transposition and contraposition


Note that the method of transposition and contraposition should not be confused. Contraposition is a type of
immediate inference in which from a given categorical proposition another categorical proposition is inferred which
has as its subject the contradictory of the original predicate. Since nothing is said in the definition of contraposi-
tion with regard to the predicate of the inferred proposition, it is permissible that it could be the original subject or
its contradictory. This is in contradistinction to the form of the propositions of transposition, which may be ma-
terial implication, or a hypothetical statement. The difference is that in its application to categorical propositions
the result of contraposition is two contrapositives, each being the obvert of the other,[8] i.e. "No non-P is S" and
"All non-P is non-S". The distinction between the two contrapositives is absorbed and eliminated in the principle of
transposition, which presupposes the mediate inferences[9] of contraposition and is also referred to as the law of
contraposition.[10]

258.3 Transposition in mathematical logic


See Transposition (mathematics), Set theory

258.4 Proof

258.5 See also

258.6 References
[1] Hurley, Patrick (2011). A Concise Introduction to Logic (11th ed.). Cengage Learning. p. 414.

[2] Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall. p. 371.

[3] Moore and Parker

[4] Brody, Baruch A. Glossary of Logical Terms. Encyclopedia of Philosophy. Vol. 5–6, p. 76. Macmillan, 1973.

[5] Copi, Irving M. Symbolic Logic. 5th ed. Macmillan, 1979. See the Rules of Replacement, pp. 39–40.

[6] Stebbing, 1961, pp. 65–66. For reference to the initial step of contraposition as obversion and conversion, see Copi, 1953,
p. 141.

[7] See Stebbing, 1961, pp. 65–66. Also, for reference to the immediate inferences of obversion, conversion, and obversion
again, see Copi, 1953, p. 141.

[8] See Stebbing, 1961, p. 66.

[9] For an explanation of the absorption of obversion and conversion as mediate inferences see: Copi, Irving. Symbolic Logic.
pp. 171–74, MacMillan, 1979, fifth edition.

[10] Prior, A.N. Logic, Traditional. Encyclopedia of Philosophy, Vol. 5, Macmillan, 1973.

258.7 Further reading


Brody, Baruch A. Glossary of Logical Terms. Encyclopedia of Philosophy. Vol. 5-6, p. 61. Macmillan,
1973.

Copi, Irving. Introduction to Logic. MacMillan, 1953.


Copi, Irving. Symbolic Logic. MacMillan, 1979, fth edition.

Prior, A.N. Logic, Traditional. Encyclopedia of Philosophy, Vol. 5, Macmillan, 1973.


Stebbing, Susan. A Modern Introduction to Logic. Harper, 1961, Seventh edition

258.8 External links


Improper Transposition (Fallacy Files)
Chapter 259

True quantified Boolean formula

In computational complexity theory, the language TQBF is a formal language consisting of the true quantified
Boolean formulas. A (fully) quantified Boolean formula is a formula in quantified propositional logic where every
variable is quantified (or bound), using either existential or universal quantifiers, at the beginning of the sentence.
Such a formula is equivalent to either true or false (since there are no free variables). If such a formula evaluates to
true, then that formula is in the language TQBF. It is also known as QSAT (Quantified SAT).

259.1 Overview
In computational complexity theory, the quantified Boolean formula problem (QBF) is a generalization of the
Boolean satisfiability problem in which both existential quantifiers and universal quantifiers can be applied to each
variable. Put another way, it asks whether a quantified sentential form over a set of Boolean variables is true or false.
For example, the following is an instance of QBF:

∀x ∃y ∃z ((x ∨ z) ∧ y)
QBF is the canonical complete problem for PSPACE, the class of problems solvable by a deterministic or nonde-
terministic Turing machine in polynomial space and unlimited time.[1] Given the formula in the form of an abstract
syntax tree, the problem can be solved easily by a set of mutually recursive procedures which evaluate the formula.
Such an algorithm uses space proportional to the height of the tree, which is linear in the worst case, but uses time
exponential in the number of quantifiers.
Provided that MA ⊊ PSPACE, which is widely believed, QBF cannot be solved, nor can a given solution even be
verified, in either deterministic or probabilistic polynomial time (in fact, unlike the satisfiability problem, there's no
known way to specify a solution succinctly). It is trivial to solve using an alternating Turing machine in linear time,
which is no surprise since in fact AP = PSPACE, where AP is the class of problems alternating machines can solve
in polynomial time.[2]
When the seminal result IP = PSPACE was shown (see interactive proof system), it was done by exhibiting an
interactive proof system that could solve QBF by solving a particular arithmetization of the problem.[3]
QBF formulas have a number of useful canonical forms. For example, it can be shown that there is a polynomial-
time many-one reduction that will move all quantifiers to the front of the formula and make them alternate between
universal and existential quantifiers. There is another reduction that proved useful in the IP = PSPACE proof where
no more than one universal quantifier is placed between each variable's use and the quantifier binding that variable.
This was critical in limiting the number of products in certain subexpressions of the arithmetization.

259.2 Prenex normal form


A fully quantified Boolean formula can be assumed to have a very specific form, called prenex normal form. It has two
basic parts: a portion containing only quantifiers and a portion containing an unquantified Boolean formula usually
denoted as φ. If there are n Boolean variables, the entire formula can be written as

∃x1 ∀x2 ∃x3 ⋯ Qn xn φ(x1, x2, x3, . . . , xn)

where every variable falls within the scope of some quantifier. By introducing dummy variables, any formula in
prenex normal form can be converted into a sentence where existential and universal quantifiers alternate. Using the
dummy variable y1,

∃x1 ∃x2 φ(x1, x2) ↦ ∃x1 ∀y1 ∃x2 φ(x1, x2)

The second sentence has the same truth value but follows the restricted syntax. Assuming fully quantified Boolean
formulas to be in prenex normal form is a frequent feature of proofs.

259.3 Solving
There is a simple recursive algorithm for determining whether a QBF is in TQBF (i.e. is true). Given some QBF

Q1 x1 Q2 x2 ⋯ Qn xn φ(x1, x2, . . . , xn).

If the formula contains no quantifiers, we can just return the formula. Otherwise, we take off the first quantifier and
check both possible values for the first variable:

A = Q2 x2 ⋯ Qn xn φ(0, x2, . . . , xn),

B = Q2 x2 ⋯ Qn xn φ(1, x2, . . . , xn).
If Q1 = ∃, then return A ∨ B. If Q1 = ∀, then return A ∧ B.
How fast does this algorithm run? For every quantifier in the initial QBF, the algorithm makes two recursive calls on
only a linearly smaller subproblem. This gives the algorithm an exponential runtime O(2^n).
How much space does this algorithm use? Within each invocation of the algorithm, it needs to store the intermediate
results of computing A and B. Every recursive call takes off one quantifier, so the total recursive depth is linear in the
number of quantifiers. Formulas that lack quantifiers can be evaluated in space logarithmic in the number of variables.
The initial QBF was fully quantified, so there are at least as many quantifiers as variables. Thus, this algorithm uses
O(n + log n) = O(n) space. This makes the TQBF language part of the PSPACE complexity class.
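A Python sketch of this recursive procedure (an illustration under an assumed input representation, not code from the article): the quantifier prefix is a list of ('E', var) or ('A', var) pairs and the matrix is a predicate over an assignment dictionary.

# Recursive TQBF evaluation: strip the first quantifier, try both values,
# combine with ∨ for ∃ and with ∧ for ∀.

def eval_qbf(quantifiers, phi, assignment=None):
    assignment = dict(assignment or {})
    if not quantifiers:                           # no quantifiers left: evaluate the matrix
        return phi(assignment)
    (q, var), rest = quantifiers[0], quantifiers[1:]
    a = eval_qbf(rest, phi, {**assignment, var: False})
    b = eval_qbf(rest, phi, {**assignment, var: True})
    return (a or b) if q == 'E' else (a and b)

# The overview's example ∀x ∃y ∃z ((x ∨ z) ∧ y) evaluates to true:
phi = lambda v: (v['x'] or v['z']) and v['y']
print(eval_qbf([('A', 'x'), ('E', 'y'), ('E', 'z')], phi))   # True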

259.4 PSPACE-completeness
The TQBF language serves in complexity theory as the canonical PSPACE-complete problem. Being PSPACE-
complete means that a language is in PSPACE and that the language is also PSPACE-hard. The algorithm above
shows that TQBF is in PSPACE. Showing that TQBF is PSPACE-hard requires showing that any language in the
complexity class PSPACE can be reduced to TQBF in polynomial time. I.e.,

∀L ∈ PSPACE, L ≤p TQBF.

This means that, for a PSPACE language L, whether an input x is in L can be decided by checking whether f(x) is in
TQBF, for some function f that is required to run in polynomial time (relative to the length of the input). Symbolically,

x ∈ L ⟺ f(x) ∈ TQBF.

Proving that TQBF is PSPACE-hard requires specification of f.



So, suppose that L is a PSPACE language. This means that L can be decided by a polynomial space deterministic
Turing machine (DTM). This is very important for the reduction of L to TQBF, because the configurations of any
such Turing Machine can be represented as Boolean formulas, with Boolean variables representing the state of the
machine as well as the contents of each cell on the Turing Machine tape, with the position of the Turing Machine head
encoded in the formula by the formula's ordering. In particular, our reduction will use the variables c1 and c2, which
represent two possible configurations of the DTM for L, and a natural number t, in constructing a QBF φ_{c1,c2,t} which
is true if and only if the DTM for L can go from the configuration encoded in c1 to the configuration encoded in c2 in
no more than t steps. The function f, then, will construct from the DTM for L a QBF φ_{cstart,caccept,T}, where cstart
is the DTM's starting configuration, caccept is the DTM's accepting configuration, and T is the maximum number of
steps the DTM could need to move from one configuration to the other. We know that T = O(exp(n)), where n is the
length of the input, because this bounds the total number of possible configurations of the relevant DTM. Of course,
it cannot take the DTM more steps than there are possible configurations to reach caccept unless it enters a loop, in
which case it will never reach caccept anyway.
At this stage of the proof, we have already reduced the question of whether an input formula w (encoded, of course,
in cstart) is in L to the question of whether the QBF φ_{cstart,caccept,T}, i.e., f(w), is in TQBF. The remainder of
this proof proves that f can be computed in polynomial time.
For t = 1, computation of φ_{c1,c2,t} is straightforward: either one of the configurations changes to the other in one
step or it does not. Since the Turing Machine that our formula represents is deterministic, this presents no problem.
For t > 1, computation of φ_{c1,c2,t} involves a recursive evaluation, looking for a so-called "middle point" m1. In
this case, we rewrite the formula as follows:

φ_{c1,c2,t} = ∃m1 (φ_{c1,m1,t/2} ∧ φ_{m1,c2,t/2}).

This converts the question of whether c1 can reach c2 in t steps to the question of whether c1 reaches a middle point
m1 in t/2 steps, which itself reaches c2 in t/2 steps. The answer to the latter question of course gives the answer to
the former.
Now, t is only bounded by T, which is exponential (and so not polynomial) in the length of the input. Additionally,
each recursive layer virtually doubles the length of the formula. (The variable m1 is only one midpointfor greater t,
there are more stops along the way, so to speak.) So the time required to recursively evaluate c1 ,c2 ,t in this manner
could be exponential as well, simply because the formula could become exponentially large. This problem is solved
by universally quantifying using variables c3 and c4 over the conguration pairs (e.g., {(c1 , m1 ), (m1 , c2 )} ), which
prevents the length of the formula from expanding due to recursive layers. This yields the following interpretation of
c1 ,c2 ,t :

c1 ,c2 ,t = m1 (c3 , c4 ) {(c1 , m1 ), (m1 , c2 )}(c3 ,c4 ,t/2 ).

This version of the formula can indeed be computed in polynomial time, since any one instance of it can be computed
in polynomial time. The universally quantified ordered pair simply tells us that, whichever choice of (c3, c4) is made
from the pair set, φ_{c3,c4,t/2} must hold.
Thus, for every L ∈ PSPACE, L ≤p TQBF, so TQBF is PSPACE-hard. Together with the above result that TQBF is in
PSPACE, this completes the proof that TQBF is a PSPACE-complete language.
(This proof follows Sipser 2006 pp. 310313 in all essentials. Papadimitriou 1994 also includes a proof.)
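The role of the universal quantification in keeping f(w) small can be seen in a toy sketch. The following Python fragment is not the literal reduction: reach1 stands in for the polynomial-size one-step formula, E and A abbreviate the quantifiers, and all names are ad hoc. It only compares how fast the two formula constructions grow.

def naive(a, b, t):
    """Naive recursion: the formula roughly doubles at every level."""
    if t <= 1:
        return f"reach1({a},{b})"
    m = f"m{t}"
    half = (t + 1) // 2
    return f"E{m}.({naive(a, m, half)} & {naive(m, b, half)})"

def quantified(a, b, t):
    """Quantified trick: only one copy of the subformula per level."""
    if t <= 1:
        return f"reach1({a},{b})"
    m = f"m{t}"
    half = (t + 1) // 2
    return (f"E{m}. A(c3,c4) in {{({a},{m}),({m},{b})}}. "
            f"{quantified('c3', 'c4', half)}")

print(len(naive("c_start", "c_accept", 2**10)))       # grows roughly linearly in t
print(len(quantified("c_start", "c_accept", 2**10)))  # grows roughly with log t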

259.5 Miscellany
One important subproblem in TQBF is the Boolean satisfiability problem. In this problem, you wish to know
whether a given Boolean formula can be made true with some assignment of variables. This is equivalent to
the TQBF using only existential quantifiers:

∃x1 ⋯ ∃xn φ(x1, . . . , xn)

This is also an example of the larger result NP ⊆ PSPACE, which follows directly from the observation
that a polynomial time verifier for a proof of a language accepted by a NTM (non-deterministic Turing
machine) requires polynomial space to store the proof.

Any class in the polynomial hierarchy (PH) has TQBF as a hard problem. In other words, for the class com-
prising all languages L for which there exists a poly-time TM V, a verifier, such that for all input x and some
constant i,

x ∈ L ⟺ ∃y1 ∀y2 ⋯ Qi yi V(x, y1, y2, . . . , yi) = 1,

there is a corresponding QBF formulation,

∃x1 ∀x2 ⋯ Qi xi φ(x1, x2, . . . , xi) = 1,

where the xi are vectors of Boolean variables.

It is important to note that while TQBF the language is defined as the collection of true quantified Boolean
formulas, the abbreviation TQBF is often used (even in this article) to stand for a totally quantified Boolean
formula, often simply called a QBF (quantified Boolean formula, understood as "fully" or "totally" quanti-
fied). It is important to distinguish contextually between the two uses of the abbreviation TQBF in reading the
literature.

A TQBF can be thought of as a game played between two players, with alternating moves. Existentially quan-
tified variables are equivalent to the notion that a move is available to a player at a turn. Universally quantified
variables mean that the outcome of the game does not depend on what move a player makes at that turn. Also, a
TQBF whose first quantifier is existential corresponds to a formula game in which the first player has a winning
strategy.

A TQBF for which the quantified formula is in 2-CNF may be solved in linear time, by an algorithm involving
strong connectivity analysis of its implication graph. The 2-satisfiability problem is a special case of TQBF for
these formulas, in which every quantifier is existential.[4][5]

There is a systematic treatment of restricted versions of quantified Boolean formulas (giving Schaefer-type
classifications) provided in an expository paper by Hubie Chen.[6]

259.6 Notes and references


[1] M. Garey & D. Johnson (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman,
San Francisco, California. ISBN 0-7167-1045-5.

[2] A. Chandra, D. Kozen, and L. Stockmeyer (1981). "Alternation". Journal of the ACM. 28 (1): 114–133. doi:10.1145/322234.322243.

[3] Adi Shamir (1992). "IP = PSPACE". Journal of the ACM. 39 (4): 869–877. doi:10.1145/146585.146609.

[4] Krom, Melven R. (1967). "The Decision Problem for a Class of First-Order Formulas in Which all Disjunctions are Binary".
Zeitschrift für Mathematische Logik und Grundlagen der Mathematik. 13: 15–20. doi:10.1002/malq.19670130104.

[5] Aspvall, Bengt; Plass, Michael F.; Tarjan, Robert E. (1979). "A linear-time algorithm for testing the truth of certain
quantified boolean formulas" (PDF). Information Processing Letters. 8 (3): 121–123. doi:10.1016/0020-0190(79)90002-
4.

[6] Chen, Hubie (December 2009). "A Rendezvous of Logic, Complexity, and Algebra". ACM Computing Surveys. ACM. 42
(1): 1. doi:10.1145/1592451.1592453.

Fortnow & Homer (2003) provides some historical background for PSPACE and TQBF.

Zhang (2003) provides some historical background of Boolean formulas.



Arora, Sanjeev. (2001). COS 522: Computational Complexity. Lecture Notes, Princeton University. Retrieved
October 10, 2005.
Fortnow, Lance & Steve Homer. (2003, June). A short history of computational complexity. The Computa-
tional Complexity Column, 80. Retrieved October 9, 2005.
Papadimitriou, C. H. (1994). Computational Complexity. Reading: Addison-Wesley.

Sipser, Michael. (2006). Introduction to the Theory of Computation. Boston: Thomson Course Technology.
Zhang, Lintao. (2003). Searching for truth: Techniques for satisfiability of Boolean formulas. Retrieved
October 10, 2005.

259.7 See also


CookLevin theorem, stating that SAT is NP-complete
Generalized geography

259.8 External links


The Quantied Boolean Formulas Library (QBFLIB)
DepQBF - a search-based solver for quantified Boolean formulas

International Workshop on Quantied Boolean Formulas


Chapter 260

Truth table

A truth table is a mathematical table used in logic, specifically in connection with Boolean algebra, Boolean func-
tions, and propositional calculus, which sets out the functional values of logical expressions on each of their func-
tional arguments, that is, for each combination of values taken by their logical variables (Enderton, 2001). In partic-
ular, truth tables can be used to show whether a propositional expression is true for all legitimate input values, that
is, logically valid.
A truth table has one column for each input variable (for example, P and Q), and one final column showing all of
the possible results of the logical operation that the table represents (for example, P XOR Q). Each row of the truth
table contains one possible configuration of the input variables (for instance, P = true, Q = false), and the result of the
operation for those values. See the examples below for further clarification. Ludwig Wittgenstein is often credited
with inventing the truth table in his Tractatus Logico-Philosophicus,[1] though it appeared at least a year earlier in a
paper on propositional logic by Emil Leon Post.[2]

260.1 Unary operations


There are 4 unary operations:

Always true

Never true, unary falsum

Unary Identity

Unary negation

260.1.1 Logical true

The output value is always true, regardless of the input value of p

260.1.2 Logical false

The output value is never true: that is, always false, regardless of the input value of p

260.1.3 Logical identity

Logical identity is an operation on one logical value p, for which the output value remains p.
The truth table for the logical identity operator is as follows:


260.1.4 Logical negation


Logical negation is an operation on one logical value, typically the value of a proposition, that produces a value of
true if its operand is false and a value of false if its operand is true.
The truth table for NOT p (also written as ¬p, Np, Fpq, or ~p) is as follows:

260.2 Binary operations


There are 16 possible truth functions of two binary variables:

260.2.1 Truth table for all binary logical operators


Here is an extended truth table giving definitions of all possible truth functions of two Boolean variables P and Q:[note 1]
where

T = true.
F = false.
The Com row indicates whether an operator, op, is commutative: P op Q = Q op P.
The L id row shows the operator's left identities, if it has any: values I such that I op Q = Q.
The R id row shows the operator's right identities, if it has any: values I such that P op I = P.[note 2]

The four combinations of input values for p, q are read by row from the table above. The output function for each p,
q combination can be read, by row, from the table.
Key:
The following table is oriented by column, rather than by row. There are four columns rather than four rows, to
display the four combinations of p, q as input.
p: T T F F
q: T F T F
There are 16 rows in this key, one row for each binary function of the two binary variables p, q. For example, in
row 2 of this key, the value of converse nonimplication (↚) is solely T for the column denoted by the unique
combination p = F, q = T; while in row 2, the value of that ↚ operation is F for the three remaining columns of p, q.
The output row for ↚ is thus
2: F F T F
and the 16-row[3] key is
Logical operators can also be visualized using Venn diagrams.
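The sixteen output rows of the key can be generated mechanically. The sketch below is illustrative only; it assumes the rows are numbered so that row k lists the bits of k over the column order p, q = TT, TF, FT, FF used above.

# Enumerate all 16 truth functions of two Boolean variables, printing each
# function's output row in the column order p,q = TT, TF, FT, FF.
inputs = [(True, True), (True, False), (False, True), (False, False)]

for index in range(16):
    # The 4 output bits of function number `index`, most significant bit first.
    outputs = [(index >> k) & 1 for k in range(3, -1, -1)]
    row = " ".join("T" if bit else "F" for bit in outputs)
    print(f"{index:2d}: {row}")

# Row 2 (F F T F) coincides with converse nonimplication, i.e. (not p) and q.
assert [(not p) and q for p, q in inputs] == [False, False, True, False]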

260.2.2 Logical conjunction (AND)


Logical conjunction is an operation on two logical values, typically the values of two propositions, that produces a
value of true if both of its operands are true.
The truth table for p AND q (also written as p ∧ q, Kpq, p & q, or p ⋅ q) is as follows:
In ordinary language terms, if both p and q are true, then the conjunction p ∧ q is true. For all other assignments of
logical values to p and to q the conjunction p ∧ q is false.
It can also be said that if p, then p ∧ q is q, otherwise p ∧ q is p.

260.2.3 Logical disjunction (OR)


Logical disjunction is an operation on two logical values, typically the values of two propositions, that produces a
value of true if at least one of its operands is true.
The truth table for p OR q (also written as p ∨ q, Apq, p || q, or p + q) is as follows:

Stated in English, if p, then p ∨ q is p, otherwise p ∨ q is q.

260.2.4 Logical implication

Logical implication and the material conditional are both associated with an operation on two logical values, typically
the values of two propositions, which produces a value of false if the first operand is true and the second operand is
false, and a value of true otherwise.
The truth table associated with the logical implication p implies q (symbolized as p ⇒ q, or more rarely Cpq) is as
follows:
The truth table associated with the material conditional if p then q (symbolized as p → q) is as follows:
It may also be useful to note that p ⇒ q and p → q are equivalent to ¬p ∨ q.

260.2.5 Logical equality

Logical equality (also known as biconditional) is an operation on two logical values, typically the values of two
propositions, that produces a value of true if both operands are false or both operands are true.
The truth table for p XNOR q (also written as p ↔ q, Epq, p = q, or p ≡ q) is as follows:
So p EQ q is true if p and q have the same truth value (both true or both false), and false if they have different truth
values.

260.2.6 Exclusive disjunction

Exclusive disjunction is an operation on two logical values, typically the values of two propositions, that produces a
value of true if one but not both of its operands is true.
The truth table for p XOR q (also written as p ⊕ q, Jpq, p ≠ q, or p ≢ q) is as follows:
For two propositions, XOR can also be written as (p ∧ ¬q) ∨ (¬p ∧ q).

260.2.7 Logical NAND

The logical NAND is an operation on two logical values, typically the values of two propositions, that produces a
value of false if both of its operands are true. In other words, it produces a value of true if at least one of its operands
is false.
The truth table for p NAND q (also written as p ↑ q, Dpq, or p | q) is as follows:
It is frequently useful to express a logical operation as a compound operation, that is, as an operation that is built up or
composed from other operations. Many such compositions are possible, depending on the operations that are taken
as basic or primitive and the operations that are taken as composite or derivative.
In the case of logical NAND, it is clearly expressible as a compound of NOT and AND.
The negation of a conjunction, ¬(p ∧ q), and the disjunction of negations, (¬p) ∨ (¬q), can be tabulated as follows:

260.2.8 Logical NOR

The logical NOR is an operation on two logical values, typically the values of two propositions, that produces a value
of true if both of its operands are false. In other words, it produces a value of false if at least one of its operands is
true. ↓ is also known as the Peirce arrow after its inventor, Charles Sanders Peirce, and is a sole sufficient operator.
The truth table for p NOR q (also written as p ↓ q, or Xpq) is as follows:
The negation of a disjunction, ¬(p ∨ q), and the conjunction of negations, (¬p) ∧ (¬q), can be tabulated as follows:

Inspection of the tabular derivations for NAND and NOR, under each assignment of logical values to the functional
arguments p and q, produces the identical patterns of functional values for ¬(p ∧ q) as for (¬p) ∨ (¬q), and for ¬(p ∨ q)
as for (¬p) ∧ (¬q). Thus the first and second expressions in each pair are logically equivalent, and may be substituted
for each other in all contexts that pertain solely to their logical values.
This equivalence is one of De Morgan's laws.

260.3 Applications
Truth tables can be used to prove many other logical equivalences. For example, consider the following truth table:
This demonstrates the fact that p ⇒ q is logically equivalent to ¬p ∨ q.

260.3.1 Truth table for most commonly used logical operators


Here is a truth table that gives definitions of the 6 most commonly used out of the 16 possible truth functions of two
Boolean variables P and Q:
where

T = true
F = false
∧ = AND (logical conjunction)
∨ = OR (logical disjunction)
⊕ = XOR (exclusive or)
≡ = XNOR (exclusive nor)
⇒ = conditional "if-then"
⇐ = conditional "then-if"
⇔ = biconditional "if-and-only-if".

260.3.2 Condensed truth tables for binary operators


For binary operators, a condensed form of truth table is also used, where the row headings and the column headings
specify the operands and the table cells specify the result. For example, Boolean logic uses this condensed truth table
notation:
This notation is useful especially if the operations are commutative, although one can additionally specify that the
rows are the first operand and the columns are the second operand. This condensed notation is particularly useful
in discussing multi-valued extensions of logic, as it significantly cuts down on combinatoric explosion of the number
of rows otherwise needed. It also provides for a quickly recognizable characteristic "shape" of the distribution of the
values in the table which can assist the reader in grasping the rules more quickly.

260.3.3 Truth tables in digital logic


Truth tables are also used to specify the function of hardware look-up tables (LUTs) in digital logic circuitry. For
an n-input LUT, the truth table will have 2^n values (or rows in the above tabular format), completely specifying
a Boolean function for the LUT. By representing each Boolean value as a bit in a binary number, truth table values
can be efficiently encoded as integer values in electronic design automation (EDA) software. For example, a 32-bit
integer can encode the truth table for a LUT with up to 5 inputs.
When using an integer representation of a truth table, the output value of the LUT can be obtained by calculating a bit
index k based on the input values of the LUT, in which case the LUT's output value is the kth bit of the integer. For
example, to evaluate the output value of a LUT given an array of n Boolean input values, the bit index of the truth table's
output value can be computed as follows: if the ith input is true, let Vi = 1, else let Vi = 0. Then the kth bit of the
binary representation of the truth table is the LUT's output value, where k = V0·2^0 + V1·2^1 + V2·2^2 + ⋯ + Vn·2^n.
Truth tables are a simple and straightforward way to encode Boolean functions, however given the exponential growth
in size as the number of inputs increase, they are not suitable for functions with a large number of inputs. Other
representations which are more memory efficient are text equations and binary decision diagrams.
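The bit-index computation described above can be sketched in a few lines of Python. The function name lut_output and the 3-input majority example are ad hoc illustrations, not part of any particular EDA tool.

def lut_output(table: int, inputs: list[bool]) -> bool:
    """Return the LUT output: bit k of `table`, where k is the input
    vector read as a little-endian binary number."""
    k = sum(int(v) << i for i, v in enumerate(inputs))
    return bool((table >> k) & 1)

# Example: a 3-input majority function packed into an 8-bit truth table.
majority = 0
for k in range(8):
    bits = [(k >> i) & 1 for i in range(3)]
    if sum(bits) >= 2:
        majority |= 1 << k

print(bin(majority))                               # 0b11101000
print(lut_output(majority, [True, True, False]))   # True
print(lut_output(majority, [True, False, False]))  # False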

260.3.4 Applications of truth tables in digital electronics


In digital electronics and computer science (fields of applied logic engineering and mathematics), truth tables can be
used to reduce basic Boolean operations to simple correlations of inputs to outputs, without the use of logic gates or
code. For example, a binary addition can be represented with the truth table:

A B | C R
1 1 | 1 0
1 0 | 0 1
0 1 | 0 1
0 0 | 0 0

where A = First Operand, B = Second Operand, C = Carry, R = Result.
This truth table is read left to right:

Value pair (A, B) equals value pair (C, R).

Or for this example, A plus B equals result R, with the carry C.

Note that this table does not describe the logic operations necessary to implement this operation, rather it simply
specifies the function of inputs to output values.
With respect to the result, this example may be arithmetically viewed as modulo 2 binary addition, and as logically
equivalent to the exclusive-or (exclusive disjunction) binary logic operation.
In this case it can be used for only very simple inputs and outputs, such as 1s and 0s. However, if the number of types
of values one can have on the inputs increases, the size of the truth table will increase.
For instance, in an addition operation, one needs two operands, A and B. Each can have one of two values, zero or
one. The number of combinations of these two values is 2^2, or four. So the result is four possible outputs of C and
R. If one were to use base 3, the size would increase to 3^2, or nine possible outputs.
The first addition example above is called a half-adder. A full-adder is when the carry from the previous operation
is provided as input to the next adder. Thus, a truth table of eight rows would be needed to describe a full adder's
logic:

A B C* | C R
0 0 0  | 0 0
0 1 0  | 0 1
1 0 0  | 0 1
1 1 0  | 1 0
0 0 1  | 0 1
0 1 1  | 1 0
1 0 1  | 1 0
1 1 1  | 1 1

Same as previous, but C* = Carry from previous adder.
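The eight-row table can be regenerated mechanically, since the full adder simply adds three bits and splits the sum into a carry and a result bit. The following is only an illustrative sketch; rows are listed in binary counting order rather than in the order above.

from itertools import product

# Regenerate the full-adder table: sum three input bits and split the
# result into a carry bit C and a result bit R.
print("A B C* | C R")
for a, b, carry_in in product((0, 1), repeat=3):
    total = a + b + carry_in
    carry_out, result = divmod(total, 2)
    print(f"{a} {b} {carry_in}  | {carry_out} {result}")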

260.4 History
Irving Anellis has done the research to show that C. S. Peirce appears to be the earliest logician (in 1893) to devise a
truth table matrix.[4] From the summary of his paper:

In 1997, John Shosky discovered, on the verso of a page of the typed transcript of Bertrand Russell's
1912 lecture on "The Philosophy of Logical Atomism", truth table matrices. The matrix for negation is
Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein.
It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a truth
table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An
unpublished manuscript by Peirce identified as having been composed in 1883–84 in connection with
the composition of Peirce's "On the Algebra of Logic: A Contribution to the Philosophy of Notation"
that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth
table for the conditional.

260.5 Notes
[1] Information about notation may be found in Bocheński (1959), Enderton (2001), and Quine (1982).

[2] The operators here with equal left and right identities (XOR, AND, XNOR, and OR) are also commutative monoids because
they are also associative. While this distinction may be irrelevant in a simple discussion of logic, it can be quite important
in more advanced mathematics. For example, in category theory an enriched category is described as a base category
enriched over a monoid, and any of these operators can be used for enrichment.

260.6 See also


Boolean domain

Boolean-valued function

Espresso heuristic logic minimizer

Excitation table

First-order logic

Functional completeness

Karnaugh maps

Logic gate

Logical connective

Logical graph

Method of analytic tableaux

Propositional calculus

Truth function

260.7 References
[1] Georg Henrik von Wright (1955). Ludwig Wittgenstein, A Biographical Sketch. The Philosophical Review. 64 (4):
527545 (p. 532, note 9). JSTOR 2182631. doi:10.2307/2182631.

[2] Emil Post (July 1921). Introduction to a general theory of elementary propositions. American Journal of Mathematics.
43 (3): 163185. JSTOR 2370324. doi:10.2307/2370324.

[3] Ludwig Wittgenstein (1922) Tractatus Logico-Philosophicus Proposition 5.101

[4] Anellis, Irving H. (2012). Peirces Truth-functional Analysis and the Origin of the Truth Table. History and Philosophy
of Logic. 33: 8797. doi:10.1080/01445340.2011.621702.

260.8 Further reading


Bocheński, Józef Maria (1959), A Précis of Mathematical Logic, translated from the French and German edi-
tions by Otto Bird, Dordrecht, South Holland: D. Reidel.

Enderton, H. (2001). A Mathematical Introduction to Logic, second edition, New York: Harcourt Academic
Press. ISBN 0-12-238452-0

Quine, W.V. (1982), Methods of Logic, 4th edition, Cambridge, MA: Harvard University Press.

260.9 External links


Hazewinkel, Michiel, ed. (2001) [1994], Truth table, Encyclopedia of Mathematics, Springer Science+Business
Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
Truth Tables, Tautologies, and Logical Equivalence

PEIRCE'S TRUTH-FUNCTIONAL ANALYSIS AND THE ORIGIN OF TRUTH TABLES by Irving H.


Anellis
Converting truth tables into Boolean expressions
Chapter 261

Two-element Boolean algebra

In mathematics and abstract algebra, the two-element Boolean algebra is the Boolean algebra whose underlying set
(or universe or carrier) B is the Boolean domain. The elements of the Boolean domain are 1 and 0 by convention, so
that B = {0, 1}. Paul Halmos's name for this algebra "2" has some following in the literature, and will be employed
here.

261.1 Denition
B is a partially ordered set and the elements of B are also its bounds.
An operation of arity n is a mapping from B^n to B. Boolean algebra consists of two binary operations and unary
complementation. The binary operations have been named and notated in various ways. Here they are called 'sum'
and 'product', and notated by infix '+' and '∙', respectively. Sum and product commute and associate, as in the usual
algebra of real numbers. As for the order of operations, brackets are decisive if present. Otherwise '∙' precedes '+'.
Hence A∙B + C is parsed as (A∙B) + C and not as A∙(B + C). Complementation is denoted by writing an overbar
over its argument. The numerical analog of the complement of X is 1 − X. In the language of universal algebra, a
Boolean algebra is a ⟨B, +, ∙, ¯, 1, 0⟩ algebra of type ⟨2, 2, 1, 0, 0⟩.
Either one-to-one correspondence between {0,1} and {True,False} yields classical bivalent logic in equational form,
with complementation read as NOT. If 1 is read as True, '+' is read as OR, and '∙' as AND, and vice versa if 1 is read
as False.

261.2 Some basic identities


2 can be seen as grounded in the following trivial "Boolean arithmetic":

1 + 1 = 1 + 0 = 0 + 1 = 1
0 + 0 = 0
0 ∙ 0 = 0 ∙ 1 = 1 ∙ 0 = 0
1 ∙ 1 = 1
1̄ = 0
0̄ = 1

Note that:

'+' and '∙' work exactly as in numerical arithmetic, except that 1 + 1 = 1. '+' and '∙' are derived by analogy from
numerical arithmetic; simply set any nonzero number to 1.

Swapping 0 and 1, and '+' and '∙', preserves truth; this is the essence of the duality pervading all Boolean algebras.


This Boolean arithmetic suffices to verify any equation of 2, including the axioms, by examining every possible
assignment of 0s and 1s to each variable (see decision procedure).
The following equations may now be verified:

A + A = A
A ∙ A = A
A + 0 = A
A + 1 = 1
A ∙ 0 = 0
A̿ = A

Each of '+' and '∙' distributes over the other:

A ∙ (B + C) = A ∙ B + A ∙ C;

A + (B ∙ C) = (A + B) ∙ (A + C).

That '∙' distributes over '+' agrees with elementary algebra, but not '+' over '∙'. For this and other reasons, a sum of
products (leading to a NAND synthesis) is more commonly employed than a product of sums (leading to a NOR
synthesis).
Each of '+' and '∙' can be defined in terms of the other and complementation:

A ∙ B = \overline{\bar{A} + \bar{B}}

A + B = \overline{\bar{A} ∙ \bar{B}}.

We only need one binary operation, and concatenation suffices to denote it. Hence concatenation and overbar suffice
to notate 2. This notation is also that of Quine's Boolean term schemata. Letting (X) denote the complement of X
and "()" denote either 0 or 1 yields the syntax of the primary algebra.
A basis for 2 is a set of equations, called axioms, from which all of the above equations (and more) can be de-
rived. There are many known bases for all Boolean algebras and hence for 2. An elegant basis notated using only
concatenation and overbar is:

1. ABC = BCA (Concatenation commutes, associates)

2. AĀ = 1 (2 is a complemented lattice, with an upper bound of 1)

3. A0 = A (0 is the lower bound)

4. AAB = AB (2 is a distributive lattice)

Where concatenation = OR, 1 = true, and 0 = false, or concatenation = AND, 1 = false, and 0 = true. (Overbar is
negation in both cases.)
If 0 = 1, (1)–(3) are the axioms for an abelian group.
(1) only serves to prove that concatenation commutes and associates. First assume that (1) associates from either the
left or the right, then prove commutativity. Then prove association from the other direction. Associativity is simply
association from the left and right combined.
This basis makes for an easy approach to proof, called "calculation", that proceeds by simplifying expressions to 0 or
1, by invoking axioms (2)–(4), and the elementary identities AA = A, A̿ = A, 1 + A = 1, and the distributive law.
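Verification by exhaustion over 2 is easy to mechanize. The following sketch (the function names bsum, bprod and comp are ad hoc) checks several of the identities above over all assignments of 0s and 1s.

from itertools import product

def bsum(a, b):      # '+' of 2  (note that 1 + 1 = 1)
    return a | b

def bprod(a, b):     # '.' of 2
    return a & b

def comp(a):         # overbar (complementation)
    return 1 - a

# A few of the identities listed in this section, checked on every assignment.
for A, B, C in product((0, 1), repeat=3):
    assert bsum(A, A) == A
    assert bsum(A, 0) == A and bsum(A, 1) == 1 and bprod(A, 0) == 0
    assert comp(comp(A)) == A
    assert bprod(A, bsum(B, C)) == bsum(bprod(A, B), bprod(A, C))
    assert bsum(A, bprod(B, C)) == bprod(bsum(A, B), bsum(A, C))
    assert bprod(A, B) == comp(bsum(comp(A), comp(B)))   # De Morgan form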

261.3 Metatheory
De Morgan's theorem states that if one does the following, in the given order, to any Boolean function:

Complement every variable;

Swap '+' and '∙' operators (taking care to add brackets to ensure the order of operations remains the same);

Complement the result,

the result is logically equivalent to what you started with. Repeated application of De Morgan's theorem to parts of
a function can be used to drive all complements down to the individual variables.
A powerful and nontrivial metatheorem states that any theorem of 2 holds for all Boolean algebras.[1] Conversely,
an identity that holds for an arbitrary nontrivial Boolean algebra also holds in 2. Hence all the mathematical content
of Boolean algebra is captured by 2. This theorem is useful because any equation in 2 can be verified by a decision
procedure. Logicians refer to this fact as "2 is decidable". All known decision procedures require a number of steps
that is an exponential function of the number of variables N appearing in the equation to be verified. Whether there
exists a decision procedure whose steps are a polynomial function of N falls under the P = NP conjecture.

261.4 See also


Boolean algebra

Bounded set
Lattice (order)

Order theory

261.5 References
[1] Givant, S., and Halmos, P. (2009) Introduction to Boolean Algebras, Springer Verlag. Theorem 9.

261.6 Further reading


Many elementary texts on Boolean algebra were published in the early years of the computer era. Perhaps the best
of the lot, and one still in print, is:

Mendelson, Elliot, 1970. Schaums Outline of Boolean Algebra. McGrawHill.

The following items reveal how the two-element Boolean algebra is mathematically nontrivial.

Stanford Encyclopedia of Philosophy: "The Mathematics of Boolean Algebra," by J. Donald Monk.


Burris, Stanley N., and H.P. Sankappanavar, H. P., 1981. A Course in Universal Algebra. Springer-Verlag.
ISBN 3-540-90578-2.
Chapter 262

Unimodality

Unimodal redirects here. For the company that promotes personal rapid transit, see SkyTran.

In mathematics, unimodality means possessing a unique mode. More generally, unimodality means there is only a
single highest value, somehow dened, of some mathematical object.[1]

262.1 Unimodal probability distribution

Figure 1. Probability density functions of normal distributions, an example of unimodal distributions. (The plotted curves have (mean, variance) parameters (0, 0.2), (0, 1.0), (0, 5.0), and (−2, 0.5).)

In statistics, a unimodal probability distribution or unimodal distribution is a probability distribution which has


Figure 2. a simple bimodal distribution.

Figure 3. a distribution which, though strictly unimodal, is usually referred to as bimodal.

a single mode. As the term mode has multiple meanings, so does the term unimodal.
Strictly speaking, a mode of a discrete probability distribution is a value at which the probability mass function (pmf)
takes its maximum value. In other words, it is a most likely value. A mode of a continuous probability distribution
is a value at which the probability density function (pdf) attains its maximum value. Note that in both cases there
can be more than one mode, since the maximum value of either the pmf or the pdf can be attained at more than one
value.
If there is a single mode, the distribution function is called unimodal. If it has more modes it is bimodal (2),
trimodal (3), etc., or in general, multimodal.[2] Figure 1 illustrates normal distributions, which are unimodal.
Other examples of unimodal distributions include Cauchy distribution, Students t-distribution, chi-squared distribu-
tion and exponential distribution. Among discrete distributions, the binomial distribution and Poisson distribution

can be seen as unimodal, though for some parameters they can have two adjacent values with the same probability.
Figure 2 illustrates a bimodal distribution.
Figure 3 illustrates a distribution with a single global maximum which by strict denition is unimodal. However,
confusingly, and mostly with continuous distributions, when a pdf function has multiple local maxima it is common
to refer to all of the local maxima as modes of the distribution. Therefore, if a pdf has more than one local maximum
it is referred to as multimodal. Under this common denition, Figure 3 illustrates a bimodal distribution.

262.1.1 Other denitions

Other denitions of unimodality in distribution functions also exist.


In continuous distributions, unimodality can be defined through the behavior of the cumulative distribution function
(cdf).[3] If the cdf is convex for x < m and concave for x > m, then the distribution is unimodal, m being the mode.
Note that under this definition the uniform distribution is unimodal,[4] as well as any other distribution in which the
maximum distribution is achieved for a range of values, e.g. the trapezoidal distribution. Note also that usually this
definition allows for a discontinuity at the mode; usually in a continuous distribution the probability of any single
value is zero, while this definition allows for a non-zero probability, or an "atom of probability", at the mode.
Criteria for unimodality can also be defined through the characteristic function of the distribution[3] or through its
Laplace–Stieltjes transform.[5]
Another way to define a unimodal discrete distribution is by the occurrence of sign changes in the sequence of dif-
ferences of the probabilities.[6] A discrete distribution with a probability mass function, {p_n ; n = …, −1, 0, 1, …},
is called unimodal if the sequence …, p_{−2} − p_{−1}, p_{−1} − p_0, p_0 − p_1, p_1 − p_2, … has exactly one sign change
(when zeroes don't count).
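This sign-change criterion is easy to check mechanically for a finitely supported probability mass function. The sketch below embeds the pmf in the integers with zero tails and counts sign changes of the consecutive differences; the function name is ad hoc.

def is_unimodal_pmf(p):
    """True if the finitely supported pmf p has exactly one sign change
    in its difference sequence (zero differences are ignored)."""
    q = [0.0] + list(p) + [0.0]                 # embed in Z with zero tails
    diffs = [b - a for a, b in zip(q, q[1:]) if b != a]
    signs = [1 if d > 0 else -1 for d in diffs]
    changes = sum(1 for s, t in zip(signs, signs[1:]) if s != t)
    return changes == 1

print(is_unimodal_pmf([0.1, 0.2, 0.4, 0.2, 0.1]))   # True  (single peak)
print(is_unimodal_pmf([0.3, 0.1, 0.2, 0.1, 0.3]))   # False (two peaks)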

262.1.2 Uses and results

One reason for the importance of distribution unimodality is that it allows for several important results. Several
inequalities are given below which are only valid for unimodal distributions. Thus, it is important to assess whether
or not a given data set comes from a unimodal distribution. Several tests for unimodality are given in the article on
multimodal distribution.

262.1.3 Inequalities

See also: Chebychevs inequality Unimodal distributions

Gausss inequality

A first important result is Gauss's inequality.[7] Gauss's inequality gives an upper bound on the probability that a value
lies more than any given distance from its mode. This inequality depends on unimodality.

Vysochanskij–Petunin inequality

A second is the Vysochanskij–Petunin inequality,[8] a refinement of the Chebyshev inequality. The Chebyshev in-
equality guarantees that in any probability distribution, "nearly all" the values are "close to" the mean value. The
Vysochanskij–Petunin inequality refines this to even nearer values, provided that the distribution function is contin-
uous and unimodal. Further results were shown by Sellke & Sellke.[9]

Mode, median and mean

Gauss also showed in 1823 that for a unimodal distribution[10]

σ ≤ ω ≤ 2σ

and

|ν − μ| ≤ √(3/4) ω,

where the median is ν, the mean is μ and ω is the root mean square deviation from the mode.
It can be shown for a unimodal distribution that the median ν and the mean μ lie within (3/5)^(1/2) ≈ 0.7746 standard
deviations of each other.[11] In symbols,

|ν − μ| / σ ≤ √(3/5),

where |·| is the absolute value.
A similar relation holds between the median and the mode θ: they lie within 3^(1/2) ≈ 1.732 standard deviations of each
other:

|ν − θ| / σ ≤ √3.

It can also be shown that the mean and the mode lie within 3^(1/2) standard deviations of each other:

|μ − θ| / σ ≤ √3.

Skewness and kurtosis

Rohatgi and Szekely have shown that the skewness and kurtosis of a unimodal distribution are related by the inequality[12]

γ² − κ ≤ 6/5,

where κ is the kurtosis and γ is the skewness.
Klaassen, Mokveld, and van Es derived a slightly different inequality (shown below) from the one derived by Rohatgi
and Szekely (shown above), which tends to be more inclusive (i.e., yield more positives) in tests of unimodality:[13]

γ² − κ ≤ 186/125.

262.2 Unimodal function


As the term "modal" applies to data sets and probability distributions, and not in general to functions, the definitions
above do not apply. The definition of "unimodal" was extended to functions of real numbers as well.
A common definition is as follows: a function f(x) is a unimodal function if for some value m, it is monotonically
increasing for x ≤ m and monotonically decreasing for x ≥ m. In that case, the maximum value of f(x) is f(m) and
there are no other local maxima.
Proving unimodality is often hard. One way consists in using the definition of that property, but it turns out to be
suitable for simple functions only. A general method based on derivatives exists,[14] but it does not succeed for every
function despite its simplicity.

Examples of unimodal functions include quadratic polynomial functions with a negative quadratic coefficient, tent
map functions, and more.
The above is sometimes referred to as "strong unimodality", from the fact that the monotonicity implied is strong
monotonicity. A function f(x) is a weakly unimodal function if there exists a value m for which it is weakly mono-
tonically increasing for x ≤ m and weakly monotonically decreasing for x ≥ m. In that case, the maximum value
f(m) can be reached for a continuous range of values of x. An example of a weakly unimodal function which is not
strongly unimodal is every other row in Pascal's triangle.
Depending on context, unimodal function may also refer to a function that has only one local minimum, rather than
maximum.[15] For example, local unimodal sampling, a method for doing numerical optimization, is often demon-
strated with such a function. It can be said that a unimodal function under this extension is a function with a single
local extremum.
One important property of unimodal functions is that the extremum can be found using search algorithms such as
golden section search, ternary search or successive parabolic interpolation.
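As an illustration of this property, the following sketch locates the maximum of a strongly unimodal function by ternary search; the function name and tolerance are ad hoc.

def ternary_search_max(f, lo, hi, tol=1e-9):
    """Locate the maximizer of a unimodal function f on [lo, hi]."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1          # the mode cannot lie in [lo, m1)
        else:
            hi = m2          # the mode cannot lie in (m2, hi]
    return (lo + hi) / 2

# Example: f(x) = -(x - 2)^2 is unimodal with its maximum at x = 2.
print(round(ternary_search_max(lambda x: -(x - 2) ** 2, -10.0, 10.0), 6))  # 2.0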

262.3 Other extensions


A function f(x) is "S-unimodal" (often referred to as "S-unimodal map") if its Schwarzian derivative is negative for
all x ≠ c, where c is the critical point.[16]
In computational geometry if a function is unimodal it permits the design of efficient algorithms for finding the
extrema of the function.[17]
A more general definition, applicable to a function f(X) of a vector variable X, is that f is unimodal if there is a one-to-
one differentiable mapping X = G(Z) such that f(G(Z)) is convex. Usually one would want G(Z) to be continuously
differentiable with nonsingular Jacobian matrix.
Quasiconvex functions and quasiconcave functions extend the concept of unimodality to functions whose arguments
belong to higher-dimensional Euclidean spaces.

262.4 See also


Bimodal distribution

262.5 References
[1] Weisstein, Eric W. Unimodal. MathWorld.

[2] Weisstein, Eric W. Mode. MathWorld.

[3] A.Ya. Khinchin (1938). On unimodal distributions. Trams. Res. Inst. Math. Mech. (in Russian). University of Tomsk.
2 (2): 17.

[4] Ushakov, N.G. (2001) [1994], Unimodal distribution, in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer
Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4

[5] Vladimirovich Gnedenko and Victor Yu Korolev (1996). Random summation: limit theorems and applications. CRC-Press.
ISBN 0-8493-2875-6. p. 31

[6] Medgyessy, P. (March 1972). On the unimodality of discrete distributions. Periodica Mathematica Hungarica. 2 (14):
245257. doi:10.1007/bf02018665.

[7] Gauss, C. F. (1823). Theoria Combinationis Observationum Erroribus Minimis Obnoxiae, Pars Prior. Commentationes
Societatis Regiae Scientiarum Gottingensis Recentiores. 5.

[8] D. F. Vysochanskij, Y. I. Petunin (1980). Justication of the 3 rule for unimodal distributions. Theory of Probability
and Mathematical Statistics. 21: 2536.

[9] Sellke, T.M.; Sellke, S.H. (1997). Chebyshev inequalities for unimodal distributions. American Statistician. American
Statistical Association. 51 (1): 3440. JSTOR 2684690. doi:10.2307/2684690.

[10] Gauss C.F. Theoria Combinationis Observationum Erroribus Minimis Obnoxiae. Pars Prior. Pars Posterior. Supplemen-
tum. Theory of the Combination of Observations Least Subject to Errors. Part One. Part Two. Supplement. 1995.
Translated by G.W. Stewart. Classics in Applied Mathematics Series, Society for Industrial and Applied Mathematics,
Philadelphia

[11] Basu, Sanjib, and Anirban DasGupta. The mean, median, and mode of unimodal distributions: a characterization. Theory
of Probability & Its Applications 41.2 (1997): 210-223.

[12] Rohatgi VK, Szekely GJ (1989) Sharp inequalities between skewness and kurtosis. Statistics & Probability Letters 8:297-
299

[13] Klaassen CAJ, Mokveld PJ, van Es B (2000) Squared skewness minus kurtosis bounded by 186/125 for unimodal distri-
butions. Stat & Prob Lett 50 (2) 131135

[14] On the unimodality of METRIC Approximation subject to normally distributed demands. (PDF). Method in appendix
D, Example in theorem 2 page 5. Retrieved 2013-08-28.

[15] Mathematical Programming Glossary.. Retrieved 2010-07-07.

[16] See e.g. John Guckenheimer and Stewart Johnson (July 1990). Distortion of S-Unimodal Maps. The Annals of Mathe-
matics, Second Series. 132 (1). pp. 71130. doi:10.2307/1971501.

[17] Godfried T. Toussaint (June 1984). Complexity, convexity, and unimodality. International Journal of Computer and
Information Sciences. 13 (3). pp. 197217. doi:10.1007/bf00979872.
Chapter 263

Uniqueness quantication

Unique (mathematics)" redirects here. For other uses, see Unique (disambiguation).

In mathematics and logic, the phrase "there is one and only one" is used to indicate that exactly one object with a
certain property exists. In mathematical logic, this sort of quantification is known as uniqueness quantification or
unique existential quantification.
Uniqueness quantification is often denoted with the symbols "∃!" or "∃=1". For example, the formal statement

∃! n ∈ ℕ (n − 2 = 4)

may be read aloud as "there is exactly one natural number n such that n − 2 = 4".

263.1 Proving uniqueness


The most common technique for proving unique existence is to first prove existence of an entity with the desired condition;
then, to assume there exist two entities (say, a and b) that both satisfy the condition, and logically deduce their equality,
i.e. a = b.
As a simple high school example, to show x + 2 = 5 has exactly one solution, we first show by demonstration that at
least one solution exists, namely 3; the proof of this part is simply the calculation

3 + 2 = 5.

We now assume that there are two solutions, namely a and b, satisfying x + 2 = 5. Thus

a + 2 = 5 and b + 2 = 5.

By transitivity of equality,

a + 2 = b + 2.

By cancellation,

a = b.

This simple example shows how a proof of uniqueness is done, the end result being the equality of the two quantities
that satisfy the condition.


Both existence and uniqueness must be proven, in order to conclude that there exists exactly one solution.
An alternative way to prove uniqueness is to prove there exists a value a satisfying the condition, and then proving
that, for all x , the condition for x implies x = a .

263.2 Reduction to ordinary existential and universal quantication


Uniqueness quantification can be expressed in terms of the existential and universal quantifiers of predicate logic by
defining the formula ∃!x P(x) to mean literally,

∃x (P(x) ∧ ¬∃y (P(y) ∧ y ≠ x))

which is the same as

∃x (P(x) ∧ ∀y (P(y) → y = x)).

An equivalent definition that has the virtue of separating the notions of existence and uniqueness into two clauses, at
the expense of brevity, is

∃x P(x) ∧ ∀y ∀z ((P(y) ∧ P(z)) → y = z).

Another equivalent definition with the advantage of brevity is

∃x ∀y (P(y) ↔ y = x).
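Over a finite domain the reduction above can be evaluated directly. The sketch below (the helper name exists_unique is ad hoc, and the example mirrors the one at the start of the chapter) encodes ∃x (P(x) ∧ ∀y (P(y) → y = x)) with Python's any and all.

def exists_unique(domain, P):
    """There is an x with P(x), and any y with P(y) equals x."""
    return any(P(x) and all((not P(y)) or y == x for y in domain)
               for x in domain)

naturals_up_to_10 = range(11)
print(exists_unique(naturals_up_to_10, lambda n: n - 2 == 4))   # True  (only n = 6)
print(exists_unique(naturals_up_to_10, lambda n: n % 2 == 0))   # False (many even n)
print(exists_unique(naturals_up_to_10, lambda n: n + 20 == 4))  # False (no such n)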

263.3 Generalizations
One generalization of uniqueness quantification is counting quantification. This includes both quantification of the
form "exactly k objects exist such that …" as well as "infinitely many objects exist such that …" and "only finitely
many objects exist such that …". The first of these forms is expressible using ordinary quantifiers, but the latter two
cannot be expressed in ordinary first-order logic.[1]
Uniqueness depends on a notion of equality. Loosening this to some coarser equivalence relation yields quantification
of uniqueness up to that equivalence (under this framework, regular uniqueness is "uniqueness up to equality"). For
example, many concepts in category theory are defined to be unique up to isomorphism.

263.4 See also


One-hot
Singleton (mathematics)

263.5 References
Kleene, Stephen (1952). Introduction to Metamathematics. Ishi Press International. p. 199.
Andrews, Peter B. (2002). An introduction to mathematical logic and type theory to truth through proof (2.
ed.). Dordrecht: Kluwer Acad. Publ. p. 233. ISBN 1-4020-0763-9.

[1] This is a consequence of the compactness theorem.


Chapter 264

Universal generalization

In predicate logic, generalization (also universal generalization or universal introduction,[1][2][3] GEN) is a valid
inference rule. It states that if ⊢ P(x) has been derived, then ⊢ ∀x P(x) can be derived.

264.1 Generalization with hypotheses


The full generalization rule allows for hypotheses to the left of the turnstile, but with restrictions. Assume Γ is a set
of formulas, φ a formula, and Γ ⊢ φ(y) has been derived. The generalization rule states that Γ ⊢ ∀x φ(x) can be
derived if y is not mentioned in Γ and x does not occur in φ.
These restrictions are necessary for soundness. Without the first restriction, one could conclude ∀x P(x) from the
hypothesis P(y). Without the second restriction, one could make the following deduction:

1. ∀z ∃w (z ≠ w) (Hypothesis)

2. ∃w (y ≠ w) (Universal instantiation)

3. y ≠ x (Existential instantiation)

4. ∀x (x ≠ x) (Faulty universal generalization)

This purports to show that ∀z ∃w (z ≠ w) ⊢ ∀x (x ≠ x), which is an unsound deduction.

264.2 Example of a proof


Prove: ∀x (P(x) → Q(x)) → (∀x P(x) → ∀x Q(x)) is derivable from ∀x (P(x) → Q(x)) and ∀x P(x).
Proof:
In this proof, universal generalization was used in step 8. The deduction theorem was applicable in steps 10 and 11
because the formulas being moved have no free variables.

264.3 See also


First-order logic

Hasty generalization

Universal instantiation


264.4 References
[1] Copi and Cohen

[2] Hurley

[3] Moore and Parker


Chapter 265

Universal instantiation

In predicate logic universal instantiation[1][2][3] (UI, also called universal specification or universal elimination,
and sometimes confused with dictum de omni) is a valid rule of inference from a truth about each member of a class
of individuals to the truth about a particular individual of that class. It is generally given as a quantification rule for
the universal quantifier but it can also be encoded in an axiom. It is one of the basic principles used in quantification
theory.
Example: "All dogs are mammals. Fido is a dog. Therefore Fido is a mammal."
In symbols the rule as an axiom schema is

∀x A(x) → A(a/x),

for some term a and where A(a/x) is the result of substituting a for all free occurrences of x in A. A(a/x) is an
instance of ∀x A(x).
And as a rule of inference it is

from ⊢ ∀x A infer ⊢ A(a/x),

with A(a/x) the same as above.
Irving Copi noted that universal instantiation "... follows from variants of rules for 'natural deduction', which were
devised independently by Gerhard Gentzen and Stanisław Jaśkowski in 1934."[4]

265.1 Quine

Universal instantiation and existential generalization are two aspects of a single principle, for instead of saying that
"∀x (x = x)" implies "Socrates = Socrates", we could as well say that the denial "Socrates ≠ Socrates" implies "∃x (x ≠ x)".
The principle embodied in these two operations is the link between quantifications and the singular statements that
are related to them as instances. Yet it is a principle only by courtesy. It holds only in the case where a term names
and, furthermore, occurs referentially.[5]

265.2 See also

Existential generalization

Existential quantication

Inference rules


265.3 References
[1] Irving M. Copi; Carl Cohen; Kenneth McMahon (Nov 2010). Introduction to Logic. Pearson Education. ISBN 978-
0205820375.

[2] Hurley

[3] Moore and Parker

[4] Copi, Irving M. (1979). Symbolic Logic, 5th edition, Prentice Hall, Upper Saddle River, NJ

[5] Willard van Orman Quine; Roger F. Gibson (2008). V.24. Reference and Modality. Quintessence. Cambridge, Mass:
Belknap Press of Harvard University Press. Here: p.366.
Chapter 266

Universal quantication

"" redirects here. For similar symbols, see Turned A.

In predicate logic, a universal quantification is a type of quantifier, a logical constant which is interpreted as "given
any" or "for all". It expresses that a propositional function can be satisfied by every member of a domain of discourse.
In other words, it is the predication of a property or relation to every member of the domain. It asserts that a predicate
within the scope of a universal quantifier is true of every value of a predicate variable.
It is usually denoted by the turned A (∀) logical operator symbol, which, when used together with a predicate variable,
is called a universal quantifier ("∀x", "∀(x)", or sometimes by "(x)" alone). Universal quantification is distinct from
existential quantification ("there exists"), which asserts that the property or relation holds only for at least one member
of the domain.
Quantification in general is covered in the article on quantification (logic). The symbol is encoded U+2200 ∀ FOR
ALL (HTML &#8704; &forall; as a mathematical symbol).

266.1 Basics
Suppose it is given that

2·0 = 0 + 0, and 2·1 = 1 + 1, and 2·2 = 2 + 2, etc.

This would seem to be a logical conjunction because of the repeated use of "and". However, the "etc." cannot be
interpreted as a conjunction in formal logic. Instead, the statement must be rephrased:

For all natural numbers n, 2·n = n + n.

This is a single statement using universal quantification.

This statement can be said to be more precise than the original one. While the "etc." informally includes natural
numbers, and nothing more, this was not rigorously given. In the universal quantification, on the other hand, the
natural numbers are mentioned explicitly.
This particular example is true, because any natural number could be substituted for n and the statement "2·n = n +
n" would be true. In contrast,

For all natural numbers n, 2·n > 2 + n

is false, because if n is substituted with, for instance, 1, the statement "2·1 > 2 + 1" is false. It is immaterial that "2·n
> 2 + n" is true for most natural numbers n: even the existence of a single counterexample is enough to prove the
universal quantification false.
On the other hand, for all composite numbers n, 2·n > 2 + n is true, because none of the counterexamples are
composite numbers. This indicates the importance of the domain of discourse, which specifies which values n can
take.[note 1] In particular, note that if the domain of discourse is restricted to consist only of those objects that satisfy
a certain predicate, then for universal quantification this requires a logical conditional. For example,

For all composite numbers n, 2·n > 2 + n

is logically equivalent to

For all natural numbers n, if n is composite, then 2·n > 2 + n.

Here the if ... then construction indicates the logical conditional.

266.1.1 Notation
In symbolic logic, the universal quantifier symbol ∀ (an inverted "A" in a sans-serif font, Unicode U+2200) is used
to indicate universal quantification.[1]
For example, if P(n) is the predicate "2·n > 2 + n" and ℕ is the set of natural numbers, then

∀n ∈ ℕ P(n)

is the (false) statement:

For all natural numbers n, 2·n > 2 + n.

Similarly, if Q(n) is the predicate "n is composite", then

∀n ∈ ℕ (Q(n) → P(n))

is the (true) statement:

For all natural numbers n, if n is composite, then 2·n > 2 + n

and since "n is composite" implies that n must already be a natural number, we can shorten this statement to the
equivalent:

∀n (Q(n) → P(n))

For all composite numbers n, 2·n > 2 + n.

Several variations in the notation for quantification (which apply to all forms) can be found in the quantification article.
There is a special notation used only for universal quantification, which is given:

(∀n ∈ ℕ) P(n)

The parentheses indicate universal quantification by default.

266.2 Properties

266.2.1 Negation
Note that a quantified propositional function is a statement; thus, like statements, quantified functions can be negated.
The notation most mathematicians and logicians utilize to denote negation is: ¬. However, some use the tilde (~).
For example, if P(x) is the propositional function "x is married", then, for a universe of discourse X of all living
human beings, the universal quantification

Given any living person x, that person is married

is given:

∀x ∈ X P(x)

It can be seen that this is irrevocably false. Truthfully, it is stated that

It is not the case that, given any living person x, that person is married

or, symbolically:

¬∀x ∈ X P(x)

If the statement is not true for every element of the universe of discourse, then, presuming the universe of discourse
is non-empty, there must be at least one element for which the statement is false. That is, the negation of ∀x ∈ X P(x)
is logically equivalent to "There exists a living person x who is not married", or:

∃x ∈ X ¬P(x)

Generally, then, the negation of a propositional function's universal quantification is an existential quantification of
that propositional function's negation; symbolically,

¬∀x ∈ X P(x) ≡ ∃x ∈ X ¬P(x)

It is erroneous to state "all persons are not married" (i.e. "there exists no person who is married") when it is meant
that "not all persons are married" (i.e. "there exists a person who is not married"):

¬∃x ∈ X P(x) ≡ ∀x ∈ X ¬P(x) ≢ ¬∀x ∈ X P(x) ≡ ∃x ∈ X ¬P(x)
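A minimal check of this negation law on a finite domain, using Python's all and any as stand-ins for ∀ and ∃; the names and the marital-status data are made up for the illustration.

people = ["alice", "bob", "carol"]
married = {"alice": True, "bob": False, "carol": True}
P = lambda x: married[x]

not_forall = not all(P(x) for x in people)   # not (for all x, P(x))
exists_not = any(not P(x) for x in people)   # there exists x with not P(x)
forall_not = all(not P(x) for x in people)   # for all x, not P(x)

print(not_forall == exists_not)   # True: the two sides of the equivalence agree
print(forall_not == not_forall)   # False here: "all unmarried" is a stronger claim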

266.2.2 Other connectives


The universal (and existential) quantifier moves unchanged across the logical connectives ∧, ∨, →, and ↚, as long as
the other operand is not affected; that is:

P(x) ∧ (∃y ∈ Y Q(y)) ≡ ∃y ∈ Y (P(x) ∧ Q(y))

P(x) ∨ (∃y ∈ Y Q(y)) ≡ ∃y ∈ Y (P(x) ∨ Q(y)), provided that Y ≠ ∅

P(x) → (∃y ∈ Y Q(y)) ≡ ∃y ∈ Y (P(x) → Q(y)), provided that Y ≠ ∅
P(x) ↚ (∃y ∈ Y Q(y)) ≡ ∃y ∈ Y (P(x) ↚ Q(y))
P(x) ∧ (∀y ∈ Y Q(y)) ≡ ∀y ∈ Y (P(x) ∧ Q(y)), provided that Y ≠ ∅
P(x) ∨ (∀y ∈ Y Q(y)) ≡ ∀y ∈ Y (P(x) ∨ Q(y))
P(x) → (∀y ∈ Y Q(y)) ≡ ∀y ∈ Y (P(x) → Q(y))
P(x) ↚ (∀y ∈ Y Q(y)) ≡ ∀y ∈ Y (P(x) ↚ Q(y)), provided that Y ≠ ∅

Conversely, for the logical connectives ↑, ↓, ↛, and ←, the quantifiers flip:

P(x) ↑ (∃y ∈ Y Q(y)) ≡ ∀y ∈ Y (P(x) ↑ Q(y))

P(x) ↓ (∃y ∈ Y Q(y)) ≡ ∀y ∈ Y (P(x) ↓ Q(y)), provided that Y ≠ ∅

P(x) ↛ (∃y ∈ Y Q(y)) ≡ ∀y ∈ Y (P(x) ↛ Q(y)), provided that Y ≠ ∅

P(x) ← (∃y ∈ Y Q(y)) ≡ ∀y ∈ Y (P(x) ← Q(y))

P(x) ↑ (∀y ∈ Y Q(y)) ≡ ∃y ∈ Y (P(x) ↑ Q(y)), provided that Y ≠ ∅

P(x) ↓ (∀y ∈ Y Q(y)) ≡ ∃y ∈ Y (P(x) ↓ Q(y))

P(x) ↛ (∀y ∈ Y Q(y)) ≡ ∃y ∈ Y (P(x) ↛ Q(y))

P(x) ← (∀y ∈ Y Q(y)) ≡ ∃y ∈ Y (P(x) ← Q(y)), provided that Y ≠ ∅

266.2.3 Rules of inference

A rule of inference is a rule justifying a logical step from hypothesis to conclusion. There are several rules of inference
which utilize the universal quantifier.
Universal instantiation concludes that, if the propositional function is known to be universally true, then it must be
true for any arbitrary element of the universe of discourse. Symbolically, this is represented as

∀x ∈ X P(x) → P(c)

where c is a completely arbitrary element of the universe of discourse.

Universal generalization concludes the propositional function must be universally true if it is true for any arbitrary
element of the universe of discourse. Symbolically, for an arbitrary c,

P(c) → ∀x ∈ X P(x).

The element c must be completely arbitrary; else, the logic does not follow: if c is not arbitrary, and is instead a
specific element of the universe of discourse, then P(c) only implies an existential quantification of the propositional
function.

266.2.4 The empty set

By convention, the formula ∀x ∈ ∅ P(x) is always true, regardless of the formula P(x); see vacuous truth.

266.3 Universal closure


The universal closure of a formula φ is the formula with no free variables obtained by adding a universal quantifier
for every free variable in φ. For example, the universal closure of

P(y) ∧ ∃x Q(x, z)

is

∀y ∀z (P(y) ∧ ∃x Q(x, z)).



266.4 As adjoint
In category theory and the theory of elementary topoi, the universal quantifier can be understood as the right adjoint of
a functor between power sets, the inverse image functor of a function between sets; likewise, the existential quantifier
is the left adjoint.[2]
For a set X, let PX denote its powerset. For any function f : X → Y between sets X and Y, there is an inverse
image functor f* : PY → PX between powersets, that takes subsets of the codomain of f back to subsets of its
domain. The left adjoint of this functor is the existential quantifier ∃f and the right adjoint is the universal quantifier
∀f.
That is, ∃f : PX → PY is a functor that, for each subset S ⊂ X, gives the subset ∃f S ⊂ Y given by

∃f S = {y ∈ Y | there exists x ∈ S with f(x) = y}.

Likewise, the universal quantifier ∀f : PX → PY is given by

∀f S = {y ∈ Y | for all x, f(x) = y implies x ∈ S}.

The more familiar form of the quantifiers as used in first-order logic is obtained by taking the function f to be the
unique function ! : X → 1, so that P(1) = {T, F} is the two-element set holding the values true and false, a subset
S is that subset for which the predicate S(x) holds, and

P(!) : P(1) → P(X)

T ↦ X
F ↦ {}

∃! S = ∃x. S(x)
∀! S = ∀x. S(x)

The universal and existential quantifiers given above generalize to the presheaf category.
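For finite sets, both adjoints can be computed directly. The following sketch is an illustration only (the function names exists_f and forall_f are ad hoc); it implements the two set-valued operators defined above.

def exists_f(f, S, Y):
    """Direct image: y is hit by some element of S."""
    return {y for y in Y if any(f(x) == y for x in S)}

def forall_f(f, S, X, Y):
    """Universal image: every preimage of y lies inside S."""
    return {y for y in Y if all(x in S for x in X if f(x) == y)}

X = {1, 2, 3, 4}
Y = {"even", "odd"}
f = lambda x: "even" if x % 2 == 0 else "odd"

S = {1, 2, 3}
print(exists_f(f, S, Y))      # {'even', 'odd'}: both parities are hit by S
print(forall_f(f, S, X, Y))   # {'odd'}: every odd element of X is in S, but 4 is not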

266.5 See also


Existential quantication
First-order logic
List of logic symbolsfor the Unicode symbol

266.6 Notes
[1] Further information on using domains of discourse with quantified statements can be found in the Quantification (logic)
article.

266.7 References
[1] The inverted A was used in the 19th century by Charles Sanders Peirce as a logical symbol for 'un-American' (unamer-
ican). Page 320 in Randall Dipert, "Peirces deductive logic". In Cheryl Misak, ed. The Cambridge Companion to Peirce.
2004
[2] Saunders Mac Lane, Ieke Moerdijk, (1992) Sheaves in Geometry and Logic Springer-Verlag. ISBN 0-387-97710-4 See
page 58

Hinman, P. (2005). Fundamentals of Mathematical Logic. A K Peters. ISBN 1-56881-262-0.


Franklin, J. and Daoud, A. (2011). Proof in Mathematics: An Introduction. Kew Books. ISBN 978-0-646-
54509-7. (ch. 2)

266.8 External links


The dictionary denition of every at Wiktionary
Chapter 267

Unsatisable core

In mathematical logic, given an unsatisfiable Boolean propositional formula in conjunctive normal form, a subset of
clauses whose conjunction is still unsatisfiable is called an unsatisfiable core of the original formula.
Many SAT solvers can produce a resolution graph which proves the unsatisfiability of the original problem. This can
be analyzed to produce a smaller unsatisfiable core.
An unsatisfiable core is called a minimal unsatisfiable core if every proper subset (allowing removal of any arbitrary
clause or clauses) of it is satisfiable. Thus, such a core is a local minimum, though not necessarily a global one. There
are several practical methods of computing minimal unsatisfiable cores.[1][2]
A minimum unsatisfiable core contains the smallest number of the original clauses required to still be unsatisfiable. No
practical algorithms for computing the minimum core are known (see work on algorithms for computing minimal
unsatisfiable subsets). Notice the terminology: whereas the minimal unsatisfiable core is a local problem with an easy
solution, the minimum unsatisfiable core is a global problem with no known easy solution.
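A deletion-based sketch of computing a minimal (not minimum) unsatisfiable core is shown below. It is illustrative only: clauses are lists of signed integers, satisfiability is decided by brute force rather than by a SAT solver, and all names are ad hoc.

from itertools import product

def satisfiable(clauses):
    """Brute-force satisfiability test for a tiny CNF formula."""
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    for bits in product((False, True), repeat=len(variables)):
        model = dict(zip(variables, bits))
        if all(any(model[abs(l)] == (l > 0) for l in clause) for clause in clauses):
            return True
    return False

def minimal_core(clauses):
    """Drop any clause whose removal keeps the formula unsatisfiable."""
    core = list(clauses)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if not satisfiable(trial):
            core = trial          # clause i is redundant for unsatisfiability
        else:
            i += 1                # clause i is needed; keep it
    return core

# (x) & (~x) & (y | z) is unsatisfiable; the last clause is redundant.
clauses = [[1], [-1], [2, 3]]
print(minimal_core(clauses))      # [[1], [-1]]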

267.1 References
[1] N. Dershowitz, Z. Hanna, and A. Nadel, A Scalable Algorithm for Minimal Unsatisfiable Core Extraction

[2] Stefan Szeider, Minimal unsatisfiable formulas with bounded clause-variable difference are fixed-parameter tractable

Chapter 268

Vector logic

Vector logic[1][2] is an algebraic model of elementary logic based on matrix algebra. Vector logic assumes that the
truth values map on vectors, and that the monadic and dyadic operations are executed by matrix operators.

268.1 Overview
Classic binary logic is represented by a small set of mathematical functions depending on one (monadic) or two
(dyadic) variables. In the binary set, the value 1 corresponds to true and the value 0 to false. A two-valued vector
logic requires a correspondence between the truth-values true (t) and false (f), and two q-dimensional normalized
column vectors composed by real numbers s and n, hence:

t ↦ s and f ↦ n

(where q ≥ 2 is an arbitrary natural number, and "normalized" means that the length of the vector is 1; usually s
and n are orthogonal vectors). This correspondence generates a space of vector truth-values: V2 = {s, n}. The basic
logical operations defined using this set of vectors lead to matrix operators.
The operations of vector logic are based on the scalar product between q-dimensional column vectors: uᵀv = ⟨u, v⟩:
the orthonormality between vectors s and n implies that ⟨u, v⟩ = 1 if u = v, and ⟨u, v⟩ = 0 if u ≠ v.

268.1.1 Monadic operators


The monadic operators result from the application Mon : V2 → V2, and the associated matrices have q rows and q
columns. The two basic monadic operators for this two-valued vector logic are the identity and the negation:

Identity: A logical identity ID(p) is represented by the matrix I = ss^T + nn^T. This matrix operates as follows:
Ip = p, p ∈ V2; due to the orthogonality of s with respect to n, we have Is = ss^T s + nn^T s = s⟨s, s⟩ + n⟨n, s⟩ = s,
and conversely In = n.

Negation: A logical negation ¬p is represented by the matrix N = ns^T + sn^T. Consequently, Ns = n and Nn = s.

The involutory behavior of the logical negation, namely that ¬(¬p) equals p, corresponds to the fact that N² = I.
It is important to note that this vector logic identity matrix is not generally an identity matrix in the sense of
matrix algebra.
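
A minimal numerical sketch of these monadic operators, assuming NumPy, q = 2, and the standard basis vectors as one possible choice of the orthonormal pair s and n:

import numpy as np

s = np.array([[1.0], [0.0]])   # truth value "true"  as a column vector
n = np.array([[0.0], [1.0]])   # truth value "false" as a column vector

I = s @ s.T + n @ n.T          # identity operator: I s = s, I n = n
N = n @ s.T + s @ n.T          # negation operator: N s = n, N n = s

assert np.allclose(I @ s, s) and np.allclose(I @ n, n)
assert np.allclose(N @ s, n) and np.allclose(N @ n, s)
assert np.allclose(N @ N, I)   # involution: N squared is the vector logic identity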

268.1.2 Dyadic operators


The 16 two-valued dyadic operators correspond to functions of the type Dyad : V2 ⊗ V2 → V2; the dyadic matrices
have q rows and q² columns. The matrices that execute these dyadic operations are based on the properties of the
Kronecker product.
Two properties of this product are essential for the formalism of vector logic:


1. The mixed-product property

If A, B, C and D are matrices of such size that one can form the matrix products AC and BD, then

(A ⊗ B)(C ⊗ D) = AC ⊗ BD

2. Distributive transpose

The operation of transposition is distributive over the Kronecker product:

(A ⊗ B)^T = A^T ⊗ B^T.

Using these properties, expressions for dyadic logic functions can be obtained:

Conjunction. The conjunction (p∧q) is executed by a matrix that acts on two vector truth-values: C(u ⊗ v).
This matrix reproduces the features of the classical conjunction truth-table in its formulation:

C = s(s ⊗ s)^T + n(s ⊗ n)^T + n(n ⊗ s)^T + n(n ⊗ n)^T

and verifies

C(s ⊗ s) = s,

C(s ⊗ n) = C(n ⊗ s) = C(n ⊗ n) = n.

Disjunction. The disjunction (p∨q) is executed by the matrix

D = s(s ⊗ s)^T + s(s ⊗ n)^T + s(n ⊗ s)^T + n(n ⊗ n)^T,

D(s ⊗ s) = D(s ⊗ n) = D(n ⊗ s) = s

D(n ⊗ n) = n.

Implication. The implication corresponds in classical logic to the expression p → q ≡ ¬p ∨ q. The vector logic
version of this equivalence leads to a matrix that represents this implication in vector logic: L = D(N ⊗ I).
The explicit expression for this implication is:

L = s(s ⊗ s)^T + n(s ⊗ n)^T + s(n ⊗ s)^T + n(n ⊗ n)^T,

and the properties of classical implication are satisfied:

L(s ⊗ s) = L(n ⊗ s) = L(n ⊗ n) = s and
L(s ⊗ n) = n.

Equivalence and Exclusive or. In vector logic the equivalence p≡q is represented by the following matrix:

E = s(s ⊗ s)^T + n(s ⊗ n)^T + n(n ⊗ s)^T + s(n ⊗ n)^T

with

E(s ⊗ s) = E(n ⊗ n) = s

E(s ⊗ n) = E(n ⊗ s) = n.

The exclusive or is the negation of the equivalence, ¬(p≡q); it corresponds to the matrix

X = NE

given explicitly by

X = n(s ⊗ s)^T + s(s ⊗ n)^T + s(n ⊗ s)^T + n(n ⊗ n)^T,

with

X(s ⊗ s) = X(n ⊗ n) = n

X(s ⊗ n) = X(n ⊗ s) = s.

NAND and NOR

The matrices S and P correspond to the Sheffer (NAND) and the Peirce (NOR) operations, respectively:

S = NC

P = ND
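
Continuing the same assumed basis, the dyadic matrices can be assembled with Kronecker products (np.kron) and checked against their truth tables; this is a sketch, not code from the references:

import numpy as np

s = np.array([[1.0], [0.0]]); n = np.array([[0.0], [1.0]])
kron = np.kron

C = s @ kron(s, s).T + n @ (kron(s, n) + kron(n, s) + kron(n, n)).T   # conjunction
D = s @ (kron(s, s) + kron(s, n) + kron(n, s)).T + n @ kron(n, n).T   # disjunction
N = n @ s.T + s @ n.T                                                 # negation
L = D @ kron(N, np.eye(2))                                            # implication
E = s @ (kron(s, s) + kron(n, n)).T + n @ (kron(s, n) + kron(n, s)).T # equivalence
X = N @ E                                                             # exclusive or
S = N @ C                                                             # Sheffer (NAND)
P = N @ D                                                             # Peirce (NOR)

assert np.allclose(C @ kron(s, n), n) and np.allclose(C @ kron(s, s), s)
assert np.allclose(D @ kron(s, n), s) and np.allclose(D @ kron(n, n), n)
assert np.allclose(L @ kron(s, n), n) and np.allclose(L @ kron(n, s), s)
assert np.allclose(X @ kron(s, s), n) and np.allclose(X @ kron(s, n), s)
assert np.allclose(S @ kron(s, s), n) and np.allclose(P @ kron(n, n), s)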

268.1.3 De Morgan's law

In the two-valued logic, the conjunction and the disjunction operations satisfy De Morgan's law: p∧q ≡ ¬(¬p ∨ ¬q),
and its dual: p∨q ≡ ¬(¬p ∧ ¬q). For the two-valued vector logic this law is also verified:

C(u ⊗ v) = ND(Nu ⊗ Nv), where u and v are two logic vectors.

The Kronecker product implies the following factorization:

C(u ⊗ v) = ND(N ⊗ N)(u ⊗ v).

Then it can be proved that in the two-dimensional vector logic De Morgan's law is a law involving operators, and
not only a law concerning operations:[3]

C = ND(N ⊗ N)
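
With the same assumed basis, the operator form of De Morgan's law can be checked numerically:

import numpy as np

s = np.array([[1.0], [0.0]]); n = np.array([[0.0], [1.0]])
N = n @ s.T + s @ n.T
C = s @ np.kron(s, s).T + n @ (np.kron(s, n) + np.kron(n, s) + np.kron(n, n)).T
D = s @ (np.kron(s, s) + np.kron(s, n) + np.kron(n, s)).T + n @ np.kron(n, n).T

assert np.allclose(C, N @ D @ np.kron(N, N))   # C = N D (N kron N) as matrices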

268.1.4 Law of contraposition


In the classical propositional calculus, the law of contraposition p → q ≡ ¬q → ¬p is proved because the equivalence
holds for all the possible combinations of truth-values of p and q.[4] Instead, in vector logic, the law of contraposition
emerges from a chain of equalities within the rules of matrix algebra and Kronecker products, as shown in what
follows:

L(u ⊗ v) = D(N ⊗ I)(u ⊗ v) = D(Nu ⊗ v) = D(Nu ⊗ NNv) =

D(NNv ⊗ Nu) = D(N ⊗ I)(Nv ⊗ Nu) = L(Nv ⊗ Nu)

This result is based on the fact that D, the disjunction matrix, represents a commutative operation.

268.2 Many-valued two-dimensional logic


Many-valued logic was developed by many researchers, particularly by Jan Łukasiewicz, and allows extending logical
operations to truth-values that include uncertainties.[5] In the case of two-valued vector logic, uncertainties in the
truth values can be introduced using vectors with s and n weighted by probabilities.
Let f = εs + δn, with ε, δ ∈ [0, 1] and ε + δ = 1, be this kind of probabilistic vector. Here, the many-valued
character of the logic is introduced a posteriori via the uncertainties introduced in the inputs.[1]

268.2.1 Scalar projections of vector outputs


The outputs of this many-valued logic can be projected on scalar functions and generate a particular class of prob-
abilistic logic with similarities with the many-valued logic of Reichenbach.[6][7][8] Given two vectors u = αs + α′n
and v = βs + β′n and a dyadic logical matrix G, a scalar probabilistic logic is provided by the projection over
vector s:

Val(scalars) = s^T G(vectors)

Here are the main results of these projections:

NOT(α) = s^T Nu = 1 - α
OR(α, β) = s^T D(u ⊗ v) = α + β - αβ
AND(α, β) = s^T C(u ⊗ v) = αβ
IMPL(α, β) = s^T L(u ⊗ v) = 1 - α(1 - β)
XOR(α, β) = s^T X(u ⊗ v) = α + β - 2αβ

The associated negations are:

NOR(α, β) = 1 - OR(α, β)
NAND(α, β) = 1 - AND(α, β)
EQUI(α, β) = 1 - XOR(α, β)

If the scalar values belong to the set {0, ½, 1}, this many-valued scalar logic is, for many of the operators, almost
identical to the 3-valued logic of Łukasiewicz. Also, it has been proved that when the monadic or dyadic operators
act over probabilistic vectors belonging to this set, the output is also an element of this set.[3]
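
A quick numerical check of these projection formulas, again with the standard basis; the weights 0.7 and 0.4 are arbitrary example values:

import numpy as np

s = np.array([[1.0], [0.0]]); n = np.array([[0.0], [1.0]])
kron = np.kron
N = n @ s.T + s @ n.T
C = s @ kron(s, s).T + n @ (kron(s, n) + kron(n, s) + kron(n, n)).T
D = s @ (kron(s, s) + kron(s, n) + kron(n, s)).T + n @ kron(n, n).T

alpha, beta = 0.7, 0.4
u = alpha * s + (1 - alpha) * n      # probabilistic truth vectors
v = beta * s + (1 - beta) * n

assert np.isclose((s.T @ N @ u).item(), 1 - alpha)                            # NOT
assert np.isclose((s.T @ C @ kron(u, v)).item(), alpha * beta)                # AND
assert np.isclose((s.T @ D @ kron(u, v)).item(), alpha + beta - alpha * beta) # OR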

268.3 History
The approach has been inspired by neural network models based on the use of high-dimensional matrices and vectors.[9][10]
Vector logic is a direct translation into a matrix-vector formalism of the classical Boolean polynomials.[11] This kind
of formalism has been applied to develop a fuzzy logic in terms of complex numbers.[12] Other matrix and vector
approaches to logical calculus have been developed in the framework of quantum physics, computer science
and optics.[13][14][15] Early attempts to use linear algebra to represent logic operations go back to Peirce and
Copilowish.[16] The Indian biophysicist G. N. Ramachandran developed a formalism using algebraic matrices and
vectors to represent many operations of classical Jain logic known as Syad and Saptbhangi Indian logic.[17] It
requires independent affirmative evidence for each assertion in a proposition, and does not make the assumption of
binary complementation.

268.4 Boolean polynomials


George Boole established the development of logical operations as polynomials.[11] For the case of monadic operators
(such as identity or negation), the Boolean polynomials look as follows:

f(x) = f(1)x + f(0)(1 - x)

The four different monadic operations result from the different binary values for the coefficients. The identity operation
requires f(1) = 1 and f(0) = 0, and negation occurs if f(1) = 0 and f(0) = 1. For the 16 dyadic operators, the Boolean
polynomials are of the form:

f(x, y) = f(1,1)xy + f(1,0)x(1 - y) + f(0,1)(1 - x)y + f(0,0)(1 - x)(1 - y)

The dyadic operations can be translated to this polynomial format when the coefficients f take the values indicated in
the respective truth tables. For instance, the NAND operation requires that:

f(1,1) = 0 and f(1,0) = f(0,1) = f(0,0) = 1.

These Boolean polynomials can be immediately extended to any number of variables, producing a large potential
variety of logical operators. In vector logic, the matrix-vector structure of logical operators is an exact translation to
the format of linear algebra of these Boolean polynomials, where x and 1 - x correspond to the vectors s and n respectively
(the same for y and 1 - y). In the example of NAND, f(1,1) = n and f(1,0) = f(0,1) = f(0,0) = s, and the matrix version
becomes:

S = n(s ⊗ s)^T + s[(s ⊗ n)^T + (n ⊗ s)^T + (n ⊗ n)^T]
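
As a quick sanity check of the dyadic polynomial form, the NAND coefficients just derived can be evaluated over {0, 1} (boolean_poly is a name introduced here for illustration):

def boolean_poly(coeffs, x, y):
    # f(x,y) = f(1,1)xy + f(1,0)x(1-y) + f(0,1)(1-x)y + f(0,0)(1-x)(1-y)
    f11, f10, f01, f00 = coeffs
    return f11*x*y + f10*x*(1 - y) + f01*(1 - x)*y + f00*(1 - x)*(1 - y)

nand_coeffs = (0, 1, 1, 1)   # f(1,1) = 0 and f(1,0) = f(0,1) = f(0,0) = 1
for x in (0, 1):
    for y in (0, 1):
        assert boolean_poly(nand_coeffs, x, y) == int(not (x and y))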

268.5 Extensions
Vector logic can be extended to include many truth values, since large dimensional vector spaces allow the creation
of many orthogonal truth values and the corresponding logical matrices.[2]

Logical modalities can be fully represented in this context, with a recursive process inspired by neural models.[2][18]

Some cognitive problems about logical computations can be analyzed using this formalism, in particular recursive
decisions. Any logical expression of classical propositional calculus can be naturally represented by a tree
structure.[4] This fact is retained by vector logic, and has been partially used in neural models focused on
the investigation of the branched structure of natural languages.[19][20][21][22][23][24]

The computation via reversible operations such as the Fredkin gate can be implemented in vector logic. Such
implementations provide explicit expressions for matrix operators that produce the input format and the output
filtering necessary for obtaining computations.[2][3]

Elementary cellular automata can be analyzed using the operator structure of vector logic; this analysis leads
to a spectral decomposition of the laws governing their dynamics.[25][26]

In addition, based on this formalism, a discrete differential and integral calculus has been developed.[27]

268.6 See also


Fuzzy logic
Quantum logic
Boolean algebra
Propositional calculus
George Boole
Jan ukasiewicz

268.7 References
[1] Mizraji, E. (1992). Vector logics: the matrix-vector representation of logical calculus. Fuzzy Sets and Systems, 50, 179
185, 1992

[2] Mizraji, E. (2008) Vector logic: a natural algebraic representation of the fundamental logical gates. Journal of Logic and
Computation, 18, 97121, 2008

[3] Mizraji, E. (1996) The operators of vector logic. Mathematical Logic Quarterly, 42, 2739

[4] Suppes, P. (1957) Introduction to Logic, Van Nostrand Reinhold, New York.

[5] ukasiewicz, J. (1980) Selected Works. L. Borkowski, ed., pp. 153178. North-Holland, Amsterdam, 1980

[6] Rescher, N. (1969) Many-Valued Logic. McGrawHill, New York

[7] Blanch, R. (1968) Introduction la Logique Contemporaine, Armand Colin, Paris

[8] Klir, G.J., Yuan, G. (1995) Fuzzy Sets and Fuzzy Logic. PrenticeHall, New Jersey

[9] Kohonen, T. (1977) Associative Memory: A System-Theoretical Approach. Springer-Verlag, New York

[10] Mizraji, E. (1989) Context-dependent associations in linear distributed memories. Bulletin of Mathematical Biology, 50,
195205

[11] Boole, G. (1854) An Investigation of the Laws of Thought, on which are Founded the Theories of Logic and Probabilities.
Macmillan, London, 1854; Dover, New York Reedition, 1958

[12] Dick, S. (2005) Towards complex fuzzy logic. IEEE Transactions on Fuzzy Systems, 15,405414, 2005

[13] Mittelstaedt, P. (1968) Philosophische Probleme der Modernen Physik, Bibliographisches Institut, Mannheim

[14] Stern, A. (1988) Matrix Logic: Theory and Applications. North-Holland, Amsterdam

[15] Westphal, J., Hardy, J. (2005) Logic as a vector system. Journal of Logic and Computation, 15, 751765

[16] Copilowish, I.M. (1948) Matrix development of the calculus of relations. Journal of Symbolic Logic, 13, 193203

[17] Jain, M.K. (2011) Logic of evidence-based inference propositions, Current Science, 16631672, 100

[18] Mizraji, E. (1994) Modalities in vector logic. Notre Dame Journal of Formal Logic, 35, 272283

[19] Mizraji, E., Lin, J. (2002) The dynamics of logical decisions. Physica D, 168169, 386396

[20] beim Graben, P., Potthast, R. (2009). Inverse problems in dynamic cognitive modeling. Chaos, 19, 015103

[21] beim Graben, P., Pinotsis, D., Saddy, D., Potthast, R. (2008). Language processing with dynamic elds. Cogn. Neurodyn.,
2, 7988

[22] beim Graben, P., Gerth, S., Vasishth, S.(2008) Towards dynamical system models of language-related brain potentials.
Cogn. Neurodyn., 2, 229255

[23] beim Graben, P., Gerth, S. (2012) Geometric representations for minimalist grammars. Journal of Logic, Language and
Information, 21, 393-432 .

[24] Binazzi, A.(2012) Cognizione logica e modelli mentali. Studi sulla formazione, 12012, pag. 6984

[25] Mizraji, E. (2006) The parts and the whole: inquiring how the interaction of simple subsystems generates complexity.
International Journal of General Systems, 35, pp. 395415.

[26] Arruti, C., Mizraji, E. (2006) Hidden potentialities. International Journal of General Systems, 35, 461469.

[27] Mizraji, E. (2015) Dierential and integral calculus for logical operations. A matrixvector approach Journal of Logic and
Computation 25, 613-638, 2015
Chapter 269

Veitch chart

The Karnaugh map (KM or K-map) is a method of simplifying Boolean algebra expressions. Maurice Karnaugh
introduced it in 1953[1] as a refinement of Edward Veitch's 1952 Veitch chart,[2][3] which actually was a rediscovery
of Allan Marquand's 1881 logical diagram[4] aka Marquand diagram,[3] but with a focus now set on its utility for
switching circuits.[3] Veitch charts are therefore also known as Marquand–Veitch diagrams,[3] and Karnaugh maps as
Karnaugh–Veitch maps (KV maps).
The Karnaugh map reduces the need for extensive calculations by taking advantage of humans' pattern-recognition
capability.[1] It also permits the rapid identification and elimination of potential race conditions.
The required Boolean results are transferred from a truth table onto a two-dimensional grid where, in Karnaugh maps,
the cells are ordered in Gray code,[5][3] and each cell position represents one combination of input conditions, while
each cell value represents the corresponding output value. Optimal groups of 1s or 0s are identified, which represent
the terms of a canonical form of the logic in the original truth table.[6] These terms can be used to write a minimal
Boolean expression representing the required logic.
Karnaugh maps are used to simplify real-world logic requirements so that they can be implemented using a minimum
number of physical logic gates. A sum-of-products expression can always be implemented using AND gates feeding
into an OR gate, and a product-of-sums expression leads to OR gates feeding an AND gate.[7] Karnaugh maps can
also be used to simplify logic expressions in software design. Boolean conditions, as used for example in conditional
statements, can get very complicated, which makes the code difficult to read and to maintain. Once minimised,
canonical sum-of-products and product-of-sums expressions can be implemented directly using AND and OR logic
operators.[8]

269.1 Example
Karnaugh maps are used to facilitate the simplification of Boolean algebra functions. For example, consider the
Boolean function described by the following truth table.
Following are two different notations describing the same function in unsimplified Boolean algebra, using the Boolean
variables A, B, C, D, and their inverses.

f(A, B, C, D) = Σ m_i, i ∈ {6, 8, 9, 10, 11, 12, 13, 14}, where m_i are the minterms to map (i.e., rows that
have output 1 in the truth table).

f(A, B, C, D) = Π M_i, i ∈ {0, 1, 2, 3, 4, 5, 7, 15}, where M_i are the maxterms to map (i.e., rows that have
output 0 in the truth table).

269.1.1 Karnaugh map

In the example above, the four input variables can be combined in 16 different ways, so the truth table has 16 rows,
and the Karnaugh map has 16 positions. The Karnaugh map is therefore arranged in a 4 × 4 grid.

An example Karnaugh map. This image actually shows two Karnaugh maps: for the function f, using minterms (colored rectangles), and for its complement, using maxterms (gray rectangles). In the image, E() signifies a sum of minterms, denoted in the article as Σ m_i. [f(A,B,C,D) = Σm(6,8,9,10,11,12,13,14); F = AC' + AB' + BCD' + AD'; F = (A+B)(A+C)(B'+C'+D')(A+D')]

The row and column indices (shown across the top, and down the left side of the Karnaugh map) are ordered in Gray
code rather than binary numerical order. Gray code ensures that only one variable changes between each pair of
adjacent cells. Each cell of the completed Karnaugh map contains a binary digit representing the function's output
for that combination of inputs.
After the Karnaugh map has been constructed, it is used to find one of the simplest possible forms (a canonical
form) for the information in the truth table. Adjacent 1s in the Karnaugh map represent opportunities to simplify
the expression. The minterms ('minimal' terms) for the final expression are found by encircling groups of 1s in the
map. Minterm groups must be rectangular and must have an area that is a power of two (i.e., 1, 2, 4, 8). Minterm
rectangles should be as large as possible without containing any 0s. Groups may overlap in order to make each one
larger. The optimal groupings in the example below are marked by the green, red and blue lines, and the red and
green groups overlap. The red group is a 2 × 2 square, the green group is a 4 × 1 rectangle, and the overlap area is
indicated in brown.
The cells are often denoted by a shorthand which describes the logical value of the inputs that the cell covers. For
example, AD would mean a cell which covers the 2×2 area where A and D are true, i.e. the cells numbered 13, 9,
15, 11 in the diagram above. On the other hand, AD' would mean the cells where A is true and D is false (that is, D'
is true).

K-map drawn on a torus, and in a plane. The dot-marked cells are adjacent.

K-map construction. Instead of containing output values, this diagram shows the numbers of the outputs; therefore it is not a Karnaugh map.
The grid is toroidally connected, which means that rectangular groups can wrap across the edges (see picture). Cells
on the extreme right are actually 'adjacent' to those on the far left; similarly, so are those at the very top and those
at the bottom. Therefore, AD' can be a valid term (it includes cells 12 and 8 at the top, and wraps to the bottom to
include cells 10 and 14), as is B'D', which includes the four corners.

In three dimensions, one can bend a rectangle into a torus.

269.1.2 Solution
Once the Karnaugh map has been constructed and the adjacent 1s linked by rectangular and square boxes, the algebraic
minterms can be found by examining which variables stay the same within each box.
For the red grouping:

A is the same and is equal to 1 throughout the box, therefore it should be included in the algebraic representation
of the red minterm.
B does not maintain the same state (it shifts from 1 to 0), and should therefore be excluded.
C does not change: it is always 0, so its complement, NOT-C, should be included. Thus, C' should be included.
D changes, so it is excluded.

Thus the first minterm in the Boolean sum-of-products expression is AC'.

For the green grouping, A and B maintain the same state, while C and D change. B is 0 and has to be negated before
it can be included. The second term is therefore AB'. Note that it is acceptable that the green grouping overlaps with
the red one.
In the same way, the blue grouping gives the term BCD'.
The solutions of each grouping are combined: the normal form of the circuit is AC' + AB' + BCD'.
Thus the Karnaugh map has guided a simplification of

f(A, B, C, D) = A'BCD' + AB'C'D' + AB'C'D + AB'CD' + AB'CD + ABC'D' + ABC'D + ABCD'
             = AC' + AB' + BCD'
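
The result can be sanity-checked by brute force; the sketch below only confirms that the minimized expression agrees with the original minterm list, it does not perform the K-map grouping itself:

from itertools import product

minterms = {6, 8, 9, 10, 11, 12, 13, 14}

for A, B, C, D in product((0, 1), repeat=4):
    index = A * 8 + B * 4 + C * 2 + D                  # row number in the truth table
    truth = index in minterms
    simplified = (A and not C) or (A and not B) or (B and C and not D)
    assert bool(simplified) == truth                   # AC' + AB' + BCD' matches f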
Diagram showing two K-maps. The K-map for the function f(A, B, C, D) is shown as colored rectangles which correspond to minterms. The brown region is an overlap of the red 2×2 square and the green 4×1 rectangle. The K-map for the inverse of f is shown as gray rectangles, which correspond to maxterms. [f(A,B,C,D) = Σm(6,8,9,10,11,12,13,14); F = AC' + AB' + BCD'; F = (A+B)(A+C)(B'+C'+D')]

It would also have been possible to derive this simplification by carefully applying the axioms of Boolean algebra, but
the time it takes to do that grows exponentially with the number of terms.

269.1.3 Inverse
The inverse of a function is solved in the same way by grouping the 0s instead.
The three terms to cover the inverse are all shown with grey boxes with different colored borders:

brown: A'B'

gold: A'C'

blue: BCD

This yields the inverse:

f(A, B, C, D)' = A'B' + A'C' + BCD

Through the use of De Morgan's laws, the product of sums can be determined:

f(A, B, C, D) = (A'B' + A'C' + BCD)'
f(A, B, C, D) = (A + B)(A + C)(B' + C' + D')

269.1.4 Don't cares

The value of f for ABCD = 1111 is replaced by a don't care. This removes the green term completely and allows the red term to be larger. It also allows the blue inverse term to shift and become larger. [f(A,B,C,D) = Σm(6,8,9,10,11,12,13,14), with m(15) a don't care; F = A + BCD'; F = (A+B)(A+C)(A+D')]

Karnaugh maps also allow easy minimizations of functions whose truth tables include "don't care" conditions. A
don't care condition is a combination of inputs for which the designer doesn't care what the output is. Therefore,
don't care conditions can either be included in or excluded from any rectangular group, whichever makes it larger.
They are usually indicated on the map with a dash or X.
The example on the right is the same as the example above but with the value of f(1,1,1,1) replaced by a don't care.
This allows the red term to expand all the way down and, thus, removes the green term completely.
This yields the new minimum equation:

f(A, B, C, D) = A + BCD'

Note that the first term is just A, not AC'. In this case, the don't care has dropped a term (the green rectangle);
simplified another (the red one); and removed the race hazard (removing the yellow term as shown in the following
section on race hazards).
The inverse case is simplified as follows:

f(A, B, C, D)' = A'B' + A'C' + A'D

269.2 Race hazards

269.2.1 Elimination
Karnaugh maps are useful for detecting and eliminating race conditions. Race hazards are very easy to spot using a
Karnaugh map, because a race condition may exist when moving between any pair of adjacent, but disjoint, regions
circumscribed on the map. However, because of the nature of Gray coding, adjacent has a special definition explained
above: we're in fact moving on a torus, rather than a rectangle, wrapping around the top, bottom, and the sides.

In the example above, a potential race condition exists when C is 1 and D is 0, A is 1, and B changes from 1
to 0 (moving from the blue state to the green state). For this case, the output is defined to remain unchanged
at 1, but because this transition is not covered by a specific term in the equation, a potential for a glitch (a
momentary transition of the output to 0) exists.
There is a second potential glitch in the same example that is more difficult to spot: when D is 0 and A and B
are both 1, with C changing from 1 to 0 (moving from the blue state to the red state). In this case the glitch
wraps around from the top of the map to the bottom.

Whether glitches will actually occur depends on the physical nature of the implementation, and whether we need to
worry about it depends on the application. In clocked logic, it is enough that the logic settles on the desired value in
time to meet the timing deadline. In our example, we are not considering clocked logic.
In our case, an additional term of AD' would eliminate the potential race hazard, bridging between the green and blue
output states or blue and red output states: this is shown as the yellow region (which wraps around from the bottom
to the top of the right half) in the adjacent diagram.
The term is redundant in terms of the static logic of the system, but such redundant, or consensus, terms are often
needed to assure race-free dynamic performance.
Similarly, an additional term of A'D must be added to the inverse to eliminate another potential race hazard. Applying
De Morgan's laws creates another product of sums expression for f, but with a new factor of (A + D').

269.2.2 2-variable map examples


The following are all the possible 2-variable, 2×2 Karnaugh maps. Listed with each is the minterms as a function
of Σm() and the race hazard free (see previous section) minimum equation. A minterm is defined as an expression
that gives the most minimal form of expression of the mapped variables. All possible horizontal and vertical inter-
connected blocks can be formed. These blocks must be of the size of the powers of 2 (1, 2, 4, 8, 16, 32, ...). These
expressions create a minimal logical mapping of the minimal logic variable expressions for the binary expressions to
be mapped. Here are all the blocks with one field.
A block can be continued across the bottom, top, left, or right of the chart. That can even wrap beyond the edge
of the chart for variable minimization. This is because each logic variable corresponds to each vertical column and
horizontal row. A visualization of the k-map can be considered cylindrical. The fields at edges on the left and right
are adjacent, and the top and bottom are adjacent. K-maps for 4 variables must be depicted as a donut or torus shape.
The four corners of the square drawn by the k-map are adjacent. Still more complex maps are needed for 5 variables
and more.

Race hazards are present in this diagram. [f(A,B,C,D) = Σm(6,8,9,10,11,12,13,14); F = AC' + AB' + BCD'; F = (A+B)(A+C)(B'+C'+D')]

Above diagram with consensus terms added to avoid race hazards. [F = AC' + AB' + BCD' + AD'; F = (A+B)(A+C)(B'+C'+D')(A+D')]

f(A,B) = Σm(): K = 0; K' = 1
f(A,B) = Σm(1): K = A'B'; K' = A + B
f(A,B) = Σm(2): K = AB'; K' = A' + B
f(A,B) = Σm(3): K = A'B; K' = A + B'
f(A,B) = Σm(4): K = AB; K' = A' + B'
f(A,B) = Σm(1,2): K = B'; K' = B
f(A,B) = Σm(1,3): K = A'; K' = A
f(A,B) = Σm(1,4): K = A'B' + AB; K' = AB' + A'B
f(A,B) = Σm(2,3): K = AB' + A'B; K' = A'B' + AB
f(A,B) = Σm(2,4): K = A; K' = A'
f(A,B) = Σm(3,4): K = B; K' = B'
f(A,B) = Σm(1,2,3): K = A' + B'; K' = AB
f(A,B) = Σm(1,2,4): K = A + B'; K' = A'B
f(A,B) = Σm(1,3,4): K = A' + B; K' = AB'
f(A,B) = Σm(2,3,4): K = A + B; K' = A'B'
f(A,B) = Σm(1,2,3,4): K = 1; K' = 0

269.3 Other graphical methods


Alternative graphical minimization methods include:

Marquand diagram (1881) by Allan Marquand (1853–1924)[4][3]

Harvard minimizing chart (1951) by Howard H. Aiken and Martha L. Whitehouse of the Harvard Computation
Laboratory[9][1][10][11]

Veitch chart (1952) by Edward Veitch (1924–2013)[2][3]

Svoboda's graphical aids (1956) and triadic map by Antonín Svoboda (1907–1980)[12][13][14][15]

Händler circle graph (aka Händlerscher Kreisgraph, Kreisgraph nach Händler, Händler-Kreisgraph, Händler-
Diagramm, Minimisierungsgraph [sic]) (1958) by Wolfgang Händler (1920–1998)[16][17][18][14][19][20][21][22][23]

Graph method (1965) by Herbert Kortum (1907–1979)[24][25][26][27][28][29]

269.4 See also


Circuit minimization
Espresso heuristic logic minimizer
List of Boolean algebra topics
QuineMcCluskey algorithm
Algebraic normal form (ANF)
Ring sum normal form (RSNF)

Zhegalkin normal form


Reed-Muller expansion
Venn diagram
Punnett square (a similar diagram in biology)

269.5 References
[1] Karnaugh, Maurice (November 1953) [1953-04-23, 1953-03-17]. The Map Method for Synthesis of Combinational Logic
Circuits (PDF). Transactions of the American Institute of Electrical Engineers part I. 72 (9): 593599. doi:10.1109/TCE.1953.6371932.
Paper 53-217. Archived (PDF) from the original on 2017-04-16. Retrieved 2017-04-16. (NB. Also contains a short review
by Samuel H. Caldwell.)

[2] Veitch, Edward W. (1952-05-03) [1952-05-02]. A Chart Method for Simplifying Truth Functions. ACM Annual Con-
ference/Annual Meeting: Proceedings of the 1952 ACM Annual Meeting (Pittsburg). New York, USA: ACM: 127133.
doi:10.1145/609784.609801.

[3] Brown, Frank Markham (2012) [2003, 1990]. Boolean Reasoning - The Logic of Boolean Equations (reissue of 2nd ed.).
Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-42785-0.

[4] Marquand, Allan (1881). XXXIII: On Logical Diagrams for n terms. The London, Edinburgh, and Dublin Philosophical
Magazine and Journal of Science. 5. 12 (75): 266270. doi:10.1080/14786448108627104. Retrieved 2017-05-15. (NB.
Quite many secondary sources erroneously cite this work as A logical diagram for n terms or On a logical diagram for
n terms.)

[5] Wakerly, John F. (1994). Digital Design: Principles & Practices. New Jersey, USA: Prentice Hall. pp. 222, 4849. ISBN
0-13-211459-3. (NB. The two page sections taken together say that K-maps are labeled with Gray code. The rst section
says that they are labeled with a code that changes only one bit between entries and the second section says that such a code
is called Gray code.)

[6] Belton, David (April 1998). Karnaugh Maps Rules of Simplication. Archived from the original on 2017-04-18.
Retrieved 2009-05-30.

[7] Dodge, Nathan B. (September 2015). Simplifying Logic Circuits with Karnaugh Maps (PDF). The University of Texas
at Dallas, Erik Jonsson School of Engineering and Computer Science. Archived (PDF) from the original on 2017-04-18.
Retrieved 2017-04-18.

[8] Cook, Aaron. Using Karnaugh Maps to Simplify Code. Quantum Rarity. Archived from the original on 2017-04-18.
Retrieved 2012-10-07.

[9] Aiken, Howard H.; Blaauw, Gerrit; Burkhart, William; Burns, Robert J.; Cali, Lloyd; Canepa, Michele; Ciampa, Carmela
M.; Coolidge, Jr., Charles A.; Fucarile, Joseph R.; Gadd, Jr., J. Orten; Gucker, Frank F.; Harr, John A.; Hawkins, Robert
L.; Hayes, Miles V.; Hofheimer, Richard; Hulme, William F.; Jennings, Betty L.; Johnson, Stanley A.; Kalin, Theodore;
Kincaid, Marshall; Lucchini, E. Edward; Minty, William; Moore, Benjamin L.; Remmes, Joseph; Rinn, Robert J.; Roche,
John W.; Sanbord, Jacquelin; Semon, Warren L.; Singer, Theodore; Smith, Dexter; Smith, Leonard; Strong, Peter F.;
Thomas, Helene V.; Wang, An; Whitehouse, Martha L.; Wilkins, Holly B.; Wilkins, Robert E.; Woo, Way Dong; Lit-
tle, Elbert P.; McDowell, M. Scudder (1952) [January 1951]. Chapter V: Minimizing charts. Synthesis of electronic
computing and control circuits (second printing, revised ed.). Write-Patterson Air Force Base: Harvard University Press
(Cambridge, Massachusetts, USA) / Georey Cumberlege Oxford University Press (London). pp. preface, 5067. Re-
trieved 2017-04-16. [] Martha Whitehouse constructed the minimizing charts used so profusely throughout this book,
and in addition prepared minimizing charts of seven and eight variables for experimental purposes. [] Hence, the present
writer is obliged to record that the general algebraic approach, the switching function, the vacuum-tube operator, and the
minimizing chart are his proposals, and that he is responsible for their inclusion herein. [] (NB. Work commenced in
April 1948.)

[10] Phister, Jr., Montgomery (1959) [December 1958]. Logical design of digital computers. New York, USA: John Wiley &
Sons Inc. pp. 7583. ISBN 0471688053.

[11] Curtis, H. Allen (1962). A new approach to the design of switching circuits. Princeton: D. van Nostrand Company.

[12] Svoboda, Antonn (1956). Gracko-mechanick pomcky uvan pi analyse a synthese kontaktovch obvod [Utilization
of graphical-mechanical aids for the analysis and synthesis of contact circuits]. Stroje na zpracovn informac [Symphosium
IV on information processing machines] (in Czech). IV. Prague: Czechoslovak Academy of Sciences, Research Institute
of Mathematical Machines. pp. 921.

[13] Svoboda, Antonn (1956). Graphical Mechanical Aids for the Synthesis of Relay Circuits. Nachrichtentechnische Fach-
berichte (NTF), Beihefte der Nachrichtentechnischen Zeitschrift (NTZ). Braunschweig, Germany: Vieweg-Verlag.

[14] Steinbuch, Karl W.; Weber, Wolfgang; Heinemann, Traute, eds. (1974) [1967]. Taschenbuch der Informatik - Band II
- Struktur und Programmierung von EDV-Systemen. Taschenbuch der Nachrichtenverarbeitung (in German). 2 (3 ed.).
Berlin, Germany: Springer-Verlag. pp. 25, 62, 96, 122123, 238. ISBN 3-540-06241-6. LCCN 73-80607.

[15] Svoboda, Antonn; White, Donnamaie E. (2016) [1979-08-01]. Advanced Logical Circuit Design Techniques (PDF) (re-
typed electronic reissue ed.). Garland STPM Press (original issue) / WhitePubs (reissue). ISBN 978-0-8240-7014-4.
Archived (PDF) from the original on 2017-04-14. Retrieved 2017-04-15.

[16] Hndler, Wolfgang (1958). Ein Minimisierungsverfahren zur Synthese von Schaltkreisen: Minimisierungsgraphen (Disser-
tation) (in German). Technische Hochschule Darmstadt. D 17. (NB. Although written by a German, the title contains an
anglicism; the correct German term would be Minimierung instead of Minimisierung.)

[17] Hndler, Wolfgang (2013) [1961]. Zum Gebrauch von Graphen in der Schaltkreis- und Schaltwerktheorie. In Peschl,
Ernst Ferdinand; Unger, Heinz. Colloquium ber Schaltkreis- und Schaltwerk-Theorie - Vortragsauszge vom 26. bis 28.
Oktober 1960 in Bonn - Band 3 von Internationale Schriftenreihe zur Numerischen Mathematik [International Series of
Numerical Mathematics] (ISNM) (in German). 3. Institut fr Angewandte Mathematik, Universitt Saarbrcken, Rheinisch-
Westflisches Institut fr Instrumentelle Mathematik: Springer Basel AG / Birkhuser Verlag Basel. pp. 169198. ISBN
978-3-0348-5771-0. doi:10.1007/978-3-0348-5770-3.

[18] Berger, Erich R.; Hndler, Wolfgang (1967) [1962]. Steinbuch, Karl W.; Wagner, Siegfried W., eds. Taschenbuch der
Nachrichtenverarbeitung (in German) (2 ed.). Berlin, Germany: Springer-Verlag OHG. pp. 64, 10341035, 1036, 1038.
LCCN 67-21079. Title No. 1036. [] bersichtlich ist die Darstellung nach Hndler, die smtliche Punkte, numeriert
nach dem Gray-Code [], auf dem Umfeld eines Kreises anordnet. Sie erfordert allerdings sehr viel Platz. [] [Hndlers
illustration, where all points, numbered according to the Gray code, are arranged on the circumference of a circle, is easily
comprehensible. It needs, however, a lot of space.]

[19] Hotz, Gnter (1974). Schaltkreistheorie [Switching circuit theory]. DeGruyter Lehrbuch (in German). Walter de Gruyter
& Co. p. 117. ISBN 3-11-00-2050-5. [] Der Kreisgraph von Hndler ist fr das Aunden von Primimplikanten
gut brauchbar. Er hat den Nachteil, da er schwierig zu zeichnen ist. Diesen Nachteil kann man allerdings durch die
Verwendung von Schablonen verringern. [] [The circle graph by Hndler is well suited to nd prime implicants. A
disadvantage is that it is dicult to draw. This can be remedied using stencils.]

[20] Informatik Sammlung Erlangen (ISER)" (in German). Erlangen, Germany: Friedrich-Alexander Universitt. 2012-03-
13. Retrieved 2017-04-12. (NB. Shows a picture of a Kreisgraph by Hndler.)

[21] Informatik Sammlung Erlangen (ISER) - Impressum (in German). Erlangen, Germany: Friedrich-Alexander Universitt.
2012-03-13. Archived from the original on 2012-02-26. Retrieved 2017-04-15. (NB. Shows a picture of a Kreisgraph by
Hndler.)

[22] Zemanek, Heinz (2013) [1990]. Geschichte der Schaltalgebra [History of circuit switching algebra]. In Broy, Man-
fred. Informatik und Mathematik [Computer Sciences and Mathematics] (in German). Springer-Verlag. pp. 4372. ISBN
9783642766770. Einen Weg besonderer Art, der damals zu wenig beachtet wurde, wies W. Hndler in seiner Dissertation
[] mit einem Kreisdiagramm. [] (NB. Collection of papers at a colloquium held at the Bayerische Akademie der
Wissenschaften, 1989-06-12/14, in honor of Friedrich L. Bauer.)

[23] Bauer, Friedrich Ludwig; Wirsing, Martin (March 1991). Elementare Aussagenlogik (in German). Berlin / Heidelberg:
Springer-Verlag. pp. 5456, 71, 112113, 138139. ISBN 978-3-540-52974-3. [] handelt es sich um ein Hndler-
Diagramm [], mit den Wrfelecken als Ecken eines 2m -gons. [] Abb. [] zeigt auch Gegenstcke fr andere Di-
mensionen. Durch waagerechte Linien sind dabei Tupel verbunden, die sich nur in der ersten Komponente unterscheiden;
durch senkrechte Linien solche, die sich nur in der zweiten Komponente unterscheiden; durch 45-Linien und 135-Linien
solche, die sich nur in der dritten Komponente unterscheiden usw. Als Nachteil der Hndler-Diagramme wird angefhrt,
da sie viel Platz beanspruchen. []

[24] Kortum, Herbert (1965). Minimierung von Kontaktschaltungen durch Kombination von Krzungsverfahren und Graphen-
methoden. messen-steuern-regeln (msr) (in German). Verlag Technik. 8 (12): 421425.

[25] Kortum, Herbert (1966). Konstruktion und Minimierung von Halbleiterschaltnetzwerken mittels Graphentransformation.
messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (1): 912.

[26] Kortum, Herbert (1966). Weitere Bemerkungen zur Minimierung von Schaltnetzwerken mittels Graphenmethoden.
messen-steuern-regeln (msr) (in German). Verlag Technik. 9 (3): 96102.

[27] Kortum, Herbert (1966). Weitere Bemerkungen zur Behandlung von Schaltnetzwerken mittels Graphen. messen-steuern-
regeln (msr) (in German). Verlag Technik. 9 (5): 151157.

[28] Kortum, Herbert (1967). "ber zweckmige Anpassung der Graphenstruktur diskreter Systeme an vorgegebene Auf-
gabenstellungen. messen-steuern-regeln (msr) (in German). Verlag Technik. 10 (6): 208211.

[29] Tafel, Hans Jrg (1971). 4.3.5. Graphenmethode zur Vereinfachung von Schaltfunktionen. Written at RWTH, Aachen,
Germany. Einfhrung in die digitale Datenverarbeitung [Introduction to digital information processing] (in German). Mu-
nich, Germany: Carl Hanser Verlag. pp. 98105, 107113. ISBN 3-446-10569-7.

269.6 Further reading


Katz, Randy Howard (1998) [1994]. Contemporary Logic Design. The Benjamin/Cummings Publishing Com-
pany. pp. 7085. ISBN 0-8053-2703-7. doi:10.1016/0026-2692(95)90052-7.

Vingron, Shimon Peter (2004) [2003-11-05]. Karnaugh Maps. Switching Theory: Insight Through Predicate
Logic. Berlin, Heidelberg, New York: Springer-Verlag. pp. 5776. ISBN 3-540-40343-4.

Wickes, William E. (1968). Logic Design with Integrated Circuits. New York, USA: John Wiley & Sons.
pp. 3649. LCCN 68-21185. A renement of the Venn diagram in that circles are replaced by squares and
arranged in a form of matrix. The Veitch diagram labels the squares with the minterms. Karnaugh assigned 1s
and 0s to the squares and their labels and deduced the numbering scheme in common use.

Maxeld, Clive Max (2006-11-29). Reed-Muller Logic. Logic 101. EETimes. Part 3. Archived from the
original on 2017-04-19. Retrieved 2017-04-19.

269.7 External links


Detect Overlapping Rectangles, by Herbert Glarner.
Using Karnaugh maps in practical applications, Circuit design project to control trac lights.

K-Map Tutorial for 2,3,4 and 5 variables


Karnaugh Map Example

POCKETPC BOOLEAN FUNCTION SIMPLIFICATION, Ledion Bitincka George E. Antoniou


Chapter 270

Witness (mathematics)

In mathematical logic, a witness is a specific value t to be substituted for the variable x of an existential statement of the
form ∃x φ(x) such that φ(t) is true.

270.1 Examples
For example, a theory T of arithmetic is said to be inconsistent if there exists a proof in T of the formula 0 = 1. The
formula I(T), which says that T is inconsistent, is thus an existential formula. A witness for the inconsistency of T is
a particular proof of 0 = 1 in T.
Boolos, Burgess, and Jeffrey (2002:81) define the notion of a witness with the example, in which S is an n-place
relation on natural numbers, R is an n-place recursive relation, and ↔ indicates logical equivalence (if and only if):

"S(x1, ..., xn) ↔ ∃y R(x1, ..., xn, y)

"A y such that R holds of the xi may be called a 'witness' to the relation S holding of the xi (provided
we understand that when the witness is a number rather than a person, a witness only testifies to what
is true). In this particular example, B-B-J have defined S to be (positively) recursively semidecidable, or
simply semirecursive.

270.2 Henkin witnesses


In predicate calculus, a Henkin witness for a sentence ∃x φ(x) in a theory T is a term c such that T proves φ(c)
(Hinman 2005:196). The use of such witnesses is a key technique in the proof of Gödel's completeness theorem
presented by Leon Henkin in 1949.

270.3 Relation to game semantics


The notion of witness leads to the more general idea of game semantics. In the case of the sentence ∃x φ(x) the winning
strategy for the verifier is to pick a witness for φ. For more complex formulas involving universal quantifiers, the
existence of a winning strategy for the verifier depends on the existence of appropriate Skolem functions. For example,
if S denotes ∀x ∃y φ(x, y), then an equisatisfiable statement for S is ∃f ∀x φ(x, f(x)). The Skolem function f (if it
exists) actually codifies a winning strategy for the verifier of S by returning a witness for the existential sub-formula
for every choice of x the falsifier might make.

270.4 See also


Certicate (complexity), an analogous concept in computational complexity theory


270.5 References
George S. Boolos, John P. Burgess, and Richard C. Jeffrey, 2002, Computability and Logic: Fourth Edition,
Cambridge University Press, ISBN 0-521-00758-5.
Leon Henkin, 1949, "The completeness of the first-order functional calculus", Journal of Symbolic Logic v. 14
n. 3, pp. 159–166.

Peter G. Hinman, 2005, Fundamentals of mathematical logic, A.K. Peters, ISBN 1-56881-262-0.
J. Hintikka and G. Sandu, 2009, Game-Theoretical Semantics in Keith Allan (ed.) Concise Encyclopedia of
Semantics, Elsevier, ISBN 0-08095-968-7, pp. 341343
Chapter 271

Wolfram axiom

The Wolfram axiom is the result of a computer exploration undertaken by Stephen Wolfram[1] in his A New Kind of
Science looking for the shortest single axiom equivalent to the axioms of Boolean algebra (or propositional calculus).
The result[2] of his search was an axiom with six Nands and three variables equivalent to Boolean algebra:

((a | b) | c) | (a | ((a | c) | a)) = c

Here the vertical bar represents the Nand logical operation (also known as the Sheffer stroke), with the following
meaning: p Nand q is true if and only if not both p and q are true. It is named for Henry M. Sheffer, who proved that
all the usual operators of Boolean algebra (Not, And, Or, Implies) could be expressed in terms of Nand. This means
that logic can be set up using a single operator.
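
Since the axiom is an identity of Boolean algebra, it must in particular hold in the two-element algebra; a brute-force check of this necessary (though by itself not sufficient) condition is straightforward:

from itertools import product

def nand(p, q):
    return int(not (p and q))

# Check ((a|b)|c) | (a|((a|c)|a)) = c for all Boolean values of a, b, c.
for a, b, c in product((0, 1), repeat=3):
    lhs = nand(nand(nand(a, b), c), nand(a, nand(nand(a, c), a)))
    assert lhs == c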
Wolfram's 25 candidates are precisely the set of Sheffer identities of length less than or equal to 15 elements (excluding
mirror images) that have no noncommutative models of size less than or equal to 4 (variables).[3]
Researchers have known for some time that single equational axioms (i.e., 1-bases) exist for Boolean algebra, including
representation in terms of disjunction and negation and in terms of the Sheffer stroke. Wolfram proved that there
were no smaller candidate 1-bases than the axiom he found, using the techniques described in his NKS book. The
proof is given in two pages (in 4-point type) in Wolfram's book. Wolfram's axiom is, therefore, the single simplest
axiom by number of operators and variables needed to reproduce Boolean algebra.
Sheffer identities were independently obtained by different means and reported in a technical memorandum[4] in June
2000, acknowledging correspondence with Wolfram in February 2000 in which Wolfram discloses to have found the
axiom in 1999 while preparing his book. In [5] it is also shown that a pair of equations (conjectured by Stephen Wolfram)
are equivalent to Boolean algebra.

271.1 See also


Boolean algebra

271.2 References
[1] Stephen Wolfram, A New Kind of Science, 2002, p. 808811 and 1174.

[2] Rudy Rucker, A review of NKS, The Mathematical Association of America, Monthly 110, 2003.

[3] William McCune, Robert Veroff, Branden Fitelson, Kenneth Harris, Andrew Feist and Larry Wos, "Short Single Axioms
for Boolean algebra", J. Automated Reasoning, 2002.

[4] Robert Veroff and William McCune, "A Short Sheffer Axiom for Boolean algebra", Technical Memorandum No. 244

[5] Robert Veroff, "Short 2-Bases for Boolean algebra in Terms of the Sheffer stroke". Tech. Report TR-CS-2000-25, Computer
Science Department, University of New Mexico, Albuquerque, NM


271.3 External links


Stephen Wolfram, 2002, "A New Kind of Science, online.

Weisstein, Eric W. Wolfram Axiom. MathWorld.

http://hyperphysics.phy-astr.gsu.edu/hbase/electronic/nand.html
Weisstein, Eric W. Boolean algebra. MathWorld.

Weisstein, Eric W. Robbins Axiom. MathWorld.

Weisstein, Eric W. Huntington Axiom. MathWorld.


Chapter 272

Zeroth-order logic

Zeroth-order logic is first-order logic without variables or quantifiers. Some authors use the phrase "zeroth-order
logic" as a synonym for the propositional calculus,[1] but an alternative definition extends propositional logic by adding
constants, operations, and relations on non-Boolean values.[2] Every zeroth-order language in this broader sense is
complete and compact.[2]

272.1 References
[1] Andrews, Peter B. (2002), An introduction to mathematical logic and type theory: to truth through proof, Applied Logic Se-
ries, 27 (Second ed.), Kluwer Academic Publishers, Dordrecht, p. 201, ISBN 1-4020-0763-9, MR 1932484, doi:10.1007/978-
94-015-9934-4.

[2] Tao, Terence (2010), 1.4.2 Zeroth-order logic, An epsilon of room, II, American Mathematical Society, Providence, RI,
pp. 2731, ISBN 978-0-8218-5280-4, MR 2780010, doi:10.1090/gsm/117.

Chapter 273

Zhegalkin polynomial

Zhegalkin (also Zegalkin or Gegalkine) polynomials form one of many possible representations of the operations
of Boolean algebra. Introduced by the Russian mathematician I. I. Zhegalkin in 1927, they are the polynomials of
ordinary high school algebra interpreted over the integers mod 2. The resulting degeneracies of modular arithmetic
result in Zhegalkin polynomials being simpler than ordinary polynomials, requiring neither coefficients nor exponents.
Coefficients are redundant because 1 is the only nonzero coefficient. Exponents are redundant because in arithmetic
mod 2, x² = x. Hence a polynomial such as 3x²y⁵z is congruent to, and can therefore be rewritten as, xyz.

273.1 Boolean equivalent


Prior to 1927 Boolean algebra had been considered a calculus of logical values with logical operations of conjunction,
disjunction, negation, etc. Zhegalkin showed that all Boolean operations could be written as ordinary numeric
polynomials, thinking of the logical constants 0 and 1 as integers mod 2. The logical operation of conjunction is realized
as the arithmetic operation of multiplication xy, and logical exclusive-or as arithmetic addition mod 2 (written
here as x⊕y to avoid confusion with the common use of + as a synonym for inclusive-or ∨). Logical complement
¬x is then derived from 1 and ⊕ as x⊕1. Since ∧ and ⊕ form a sufficient basis for the whole of Boolean algebra,
meaning that all other logical operations are obtainable as composites of these basic operations, it follows that the
polynomials of ordinary algebra can represent all Boolean operations, allowing Boolean reasoning to be performed
reliably by appealing to the familiar laws of high school algebra without the distraction of the differences from high
school algebra that arise with disjunction in place of addition mod 2.
An example application is the representation of the Boolean 2-out-of-3 threshold or median operation as the Zhegalkin
polynomial xy ⊕ yz ⊕ zx, which is 1 when at least two of the variables are 1 and 0 otherwise.
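
A quick check of this representation over the integers mod 2 (in Python, * is multiplication and ^ on 0/1 values is addition mod 2):

from itertools import product

for x, y, z in product((0, 1), repeat=3):
    zhegalkin = x * y ^ y * z ^ z * x     # xy + yz + zx over GF(2)
    median = int(x + y + z >= 2)          # the 2-out-of-3 threshold
    assert zhegalkin == median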

273.2 Formal properties


Formally a Zhegalkin monomial is the product of a finite set of distinct variables (hence square-free), including
the empty set whose product is denoted 1. There are 2^n possible Zhegalkin monomials in n variables, since each
monomial is fully specified by the presence or absence of each variable. A Zhegalkin polynomial is the sum (exclusive-
or) of a set of Zhegalkin monomials, with the empty set denoted by 0. A given monomial's presence or absence
in a polynomial corresponds to that monomial's coefficient being 1 or 0 respectively. The Zhegalkin monomials,
being linearly independent, span a 2^n-dimensional vector space over the Galois field GF(2) (NB: not GF(2^n), whose
multiplication is quite different). The 2^(2^n) vectors of this space, i.e. the linear combinations of those monomials as
unit vectors, constitute the Zhegalkin polynomials. The exact agreement with the number of Boolean operations on
n variables, which exhaust the n-ary operations on {0,1}, furnishes a direct counting argument for completeness of
the Zhegalkin polynomials as a Boolean basis.
This vector space is not equivalent to the free Boolean algebra on n generators because it lacks complementation
(bitwise logical negation) as an operation (equivalently, because it lacks the top element as a constant). This is
not to say that the space is not closed under complementation or lacks top (the all-ones vector) as an element, but
rather that the linear transformations of this and similarly constructed spaces need not preserve complement and top.


Those that do preserve them correspond to the Boolean homomorphisms, e.g. there are four linear transformations
from the vector space of Zhegalkin polynomials over one variable to that over none, only two of which are Boolean
homomorphisms.

273.3 Method of computation

Computing the Zhegalkin polynomial for an example function P by the table method (figure).

There are three known methods generally used for the computation of the Zhegalkin polynomial:

Using the method of indeterminate coefficients

By constructing the canonical disjunctive normal form

By using tables

273.3.1 The method of indeterminate coefficients

Using the method of indeterminate coefficients, a linear system consisting of all the tuples of the function and their
values is generated. Solving the linear system gives the coefficients of the Zhegalkin polynomial.

273.3.2 Using the canonical disjunctive normal form

Using this method, the canonical disjunctive normal form (a fully expanded disjunctive normal form) is computed
first. Then the negations in this expression are replaced by an equivalent expression using the mod-2 sum of the
variable and 1. The disjunction signs are changed to addition mod 2, the brackets are opened, and the resulting
Boolean expression is simplified. This simplification results in the Zhegalkin polynomial.
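
One standard way to pass from a truth table to the Zhegalkin (algebraic normal form) coefficients is the mod-2 Möbius ("butterfly") transform sketched below; this is not necessarily the table method referred to above, but it yields the same polynomial:

def anf_coefficients(truth_table):
    # truth_table[i] is the value of f at input i, where bit j of i is the j-th variable.
    coeffs = list(truth_table)
    n = len(coeffs).bit_length() - 1      # number of variables (table length must be 2**n)
    for j in range(n):
        step = 1 << j
        for i in range(len(coeffs)):
            if i & step:
                coeffs[i] ^= coeffs[i ^ step]
    return coeffs                         # coeffs[m] = 1 iff the monomial with variable set m appears

# Two-variable examples, truth tables ordered as f(00), f(10), f(01), f(11):
print(anf_coefficients([0, 0, 0, 1]))     # AND -> [0, 0, 0, 1], i.e. the polynomial xy
print(anf_coefficients([0, 1, 1, 1]))     # OR  -> [0, 1, 1, 1], i.e. x + y + xy over GF(2)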

273.4 Related work


In the same year as Zhegalkin's paper (1927) the American mathematician E. T. Bell published a sophisticated
arithmetization of Boolean algebra based on Dedekind's ideal theory and general modular arithmetic (as opposed to
arithmetic mod 2). The much simpler arithmetic character of Zhegalkin polynomials was first noticed in the west
(independently, communication between Soviet and Western mathematicians being very limited in that era) by the
American mathematician Marshall Stone in 1936 when he observed, while writing up his celebrated Stone duality
theorem, that the supposedly loose analogy between Boolean algebras and rings could in fact be formulated as an
exact equivalence holding for both finite and infinite algebras, leading him to substantially reorganize his paper.

273.5 See also


Ivan Ivanovich Zhegalkin

Algebraic normal form (ANF)


Ring sum normal form (RSNF)

Reed-Muller expansion
Boolean algebra (logic)

Boolean domain
Boolean function

Boolean-valued function
Karnaugh map

273.6 References
Bell, Eric Temple (1927). Arithmetic of Logic. Transactions of the American Mathematical Society. Trans-
actions of the American Mathematical Society, Vol. 29, No. 3. 29 (3): 597611. JSTOR 1989098.
doi:10.2307/1989098.
Gindikin, S. G. (1972). Algebraic Logic (Russian: ). Moscow: Nauka (English
translation Springer-Verlag 1985). ISBN 0-387-96179-8.

Stone, Marshall (1936). The Theory of Representations for Boolean Algebras. Transactions of the American
Mathematical Society. Transactions of the American Mathematical Society, Vol. 40, No. 1. 40 (1): 37111.
ISSN 0002-9947. JSTOR 1989664. doi:10.2307/1989664.
Zhegalkin, Ivan Ivanovich (1927). On the Technique of Calculating Propositions in Symbolic Logic. Matematicheskii
Sbornik. 43: 928.
1014 CHAPTER 273. ZHEGALKIN POLYNOMIAL

273.7 Text and image sources, contributors, and licenses


273.7.1 Text
(, )-denition of limit Source: https://en.wikipedia.org/wiki/(%CE%B5%2C_%CE%B4)-definition_of_limit?oldid=795520514 Con-
tributors: Michael Hardy, Dan Koehl, William M. Connolley, Bkell, Tea2min, Giftlite, Bender235, Sligocki, AtomAnt, Rjwilmsi, Chobot,
DVdm, Ninly, Arthur Rubin, SmackBot, InverseHypercube, Nbarth, Gobonobo, Aleenf1, Lottamiata, CBM, Cydebot, Matthew Fen-
nell, Thenub314, Magioladitis, JamesBWatson, Maurice Carbonaro, Dispenser, Skier Dude, Brvman, LokiClock, Anonymous Dissident,
Dmcq, Katzmik, This, that and the other, Amahoney, Niceguyedc, Computer97, DumZiBoT, Addbot, TutterMouse, TheFreeloader,
PV=nRT, Yobot, AnomieBOT, Omnipaedista, Oscarjquintana, Locobot, WebCiteBOT, Citation bot 1, Tkuvho, Serols, Trappist the
monk, Lexischemen~enwiki, Wuschelkopf, Wikipelli, Dcirovic, Slawekb, Josve05a, Usb10, ClueBot NG, Mesoderm, Joel B. Lewis, Help-
ful Pixie Bot, BG19bot, Walrus068, Amp71, , BattyBot, Myxomatosis57, Kephir, SamCardioNgo, Dakkagon, Rastus
Vernon, Ailenus, Jamisonsloan, Stillmorepeople, Zyvov, Some1Redirects4You, Jcrewseiii, InternetArchiveBot, GreenC bot, Dualspace,
Amiwikieditor, Imminent77, Xuzsagon, Jon Kolbert, FrozenJelloAndReason, Glena-b, StudentsDesk, Lucas Luo and Anonymous: 68
2-valued morphism Source: https://en.wikipedia.org/wiki/2-valued_morphism?oldid=626107757 Contributors: BD2412, Trovatore,
Gzabers, Pietdesomere, SmackBot, Bluebot, Epasseto, Iridescent, CmdrObot, Ste4k, David Eppstein, Onopearls, Hans Adler, PhiRho,
Jesse V. and Anonymous: 3
Absorption (logic) Source: https://en.wikipedia.org/wiki/Absorption_(logic)?oldid=790658532 Contributors: Michael Hardy, Gregbard,
PamD, Alejandrocaro35, SwisterTwister, Dcirovic, Dooooot, BukLauBrah, Zeiimer, Deacon Vorbis and Anonymous: 3
Absorption law Source: https://en.wikipedia.org/wiki/Absorption_law?oldid=759178643 Contributors: Cyp, Schneelocke, Charles Matthews,
David Newton, Dcoetzee, Dysprosia, Tea2min, Lethe, Macrakis, Bob.v.R, Chalst, Szquirrel, Bookandcoee, Kbdank71, Awis, Chobot,
NTBot~enwiki, Trovatore, Poulpy, SmackBot, RDBury, Mhss, Octahedron80, 16@r, BranStark, CRGreathouse, Gregbard, Cydebot,
PamD, David Eppstein, Policron, Jamelan, SieBot, JackSchmidt, Denisarona, Hans Adler, Skarebo, Addbot, Erik9bot, Constructive
editor, Trappist the monk, Dcirovic, ClarCharr and Anonymous: 14
Admissible rule Source: https://en.wikipedia.org/wiki/Admissible_rule?oldid=782060683 Contributors: Timo Honkasalo, Rpyle731,
Chalst, EmilJ, Hailey C. Shannon, BD2412, Rjwilmsi, Brighterorange, YurikBot, Pacogo7, Zvika, SmackBot, Nihonjoe, Lambiam,
Woodshed, CBM, Simeon, Gregbard, Magioladitis, SchreiberBike, Addbot, Yobot, Omnipaedista, DrilBot, I dream of horses, Tijfo098,
Frietjes, Jerey Bosboom, WillemienH, InternetArchiveBot, Magic links bot and Anonymous: 3
Arming a disjunct Source: https://en.wikipedia.org/wiki/Affirming_a_disjunct?oldid=787365930 Contributors: Bryan Derksen, Mr-
wojo, Vzbs34, Taak, Silence, Chi Sigma, HasharBot~enwiki, Knucmo2, Bookandcoee, KSchutte, Dysmorodrepanis~enwiki, Shinmawa,
Niggurath, SmackBot, Jon Awbrey, Andeggs, Grumpyyoungman01, Gregbard, Wizymon, Graymornings, Argel1200, XDanielx, Clue-
Bot, SolomonFreer, Addbot, Srich32977, Machine Elf 1735, Frayr, Philocentric, EmausBot, Jeraphine Gryphon, Justincheng12345-bot,
Admiral.Mercurial, Apollo The Logician and Anonymous: 13
Affirming the consequent Source: https://en.wikipedia.org/wiki/Affirming_the_consequent?oldid=798482849 Contributors: Bryan Derk-
sen, Tarquin, Larry Sanger, Mrwojo, Michael Hardy, Voidvector, Theresa knott, Cadr, Peter Damian (original account), Wik, Corey, Rur-
sus, Alba, Sundar, Taak, Neilc, Ljhenshall, Ludimer~enwiki, Rdsmith4, SjolanderM, Klemen Kocjancic, Mindspillage, Rich Farmbrough,
Silence, Raistlinjones, Bender235, Kjoonlee, Srd2005, Thickslab, JRM, Walkiped, Corvi42, Kwdavids, Sumergocognito, The Nameless,
Graham87, Nneonneo, Dianelos, Akhenaten0, N8cantor, YurikBot, KSchutte, Newagelink, Shawnc, SmackBot, Bloomingdedalus, Fac-
torial, DMacks, Andeggs, John Bentley, Loodog, Loadmaster, Grumpyyoungman01, Dl2000, Peter1c, CBM, Forest51690, Gregbard,
MC10, Steel, Teratornis, 271828182, Neil Brown, AntiVandalBot, Isilanes, Magioladitis, Swpb, THobern, Manwiththemasterplan, Phil-
ogo, Jamelan, Jfromcanada, Aprofe1, Snaxalotl, Alexbot, Rabbiz, Suseno, Kyu-san, Legobot, Denispir, Sonia, Shock Brigade Harvester
Boris, Humanoid12, ArthurBot, RibotBOT, Logicchecker, Momergil, George Orwell III, B F Gray, Philocentric, The electron, ZroBot,
Korruski, ClueBot NG, TyWMick, SomeDudeWithAUserName, Parcly Taxel, Jeraphine Gryphon, Bhmunos, DevAudio, ChrisGualtieri,
RogerDulhunty, Hillary.conway, Foelering, BrayLockBoy, Luis150902, Xddddddddddddddddddddddddddddddddddddddddddddddddddds-
dfs, Hu5k3rDu and Anonymous: 54
Algebraic normal form Source: https://en.wikipedia.org/wiki/Algebraic_normal_form?oldid=776413263 Contributors: Michael Hardy,
Charles Matthews, Olathe, CyborgTosser, Macrakis, Mairi, Oleg Alexandrov, Linas, Ner102, GBL, Jon Awbrey, CBM, Salgueiro~enwiki,
Magioladitis, JackSchmidt, Hans Adler, Uscitizenjason, Legobot, Yobot, Omnipaedista, Klbrain, Matthiaspaul, Jiri 1984, Jochen Burghardt,
YiFeiBot, Anarchyte and Anonymous: 5
Allegory (category theory) Source: https://en.wikipedia.org/wiki/Allegory_(category_theory)?oldid=794705221 Contributors: The Anome,
Crislax, Aempirei, Woohookitty, Mindmatrix, Jfr26, Sam Staton, Yobot, ComputScientist, RjwilmsiBot and Anonymous: 7
Alternating multilinear map Source: https://en.wikipedia.org/wiki/Alternating_multilinear_map?oldid=791823927 Contributors: Michael
Hardy, Rich Farmbrough, Tompw, Vipul, BD2412, Malcolma, Elonka, CBM, Mclay1, David Eppstein, Haseldon, Alsosaid1987, Loki-
Clock, JP.Martin-Flatin, AnomieBOT, Erik9bot, Paine Ellsworth, Quondum, Bomazi, BG19bot, Solomon7968, Jamesholbert, Quiddital
and Anonymous: 1
Analysis of Boolean functions Source: https://en.wikipedia.org/wiki/Analysis_of_Boolean_functions?oldid=795936394 Contributors:
Michael Hardy, Magioladitis, Dodger67, Onel5969, Yuval Filmus, Kbabej, The garmine, DrStrauss and TheSandDoctor
Antidistributive Source: https://en.wikipedia.org/wiki/Distributive_property?oldid=798567342 Contributors: AxelBoldt, Tarquin, Youssef-
san, Toby Bartels, Patrick, Xavic69, Michael Hardy, Andres, Ideyal, Dysprosia, Malcohol, Andrewman327, Shizhao, PuzzletChung, Ro-
manm, Chris Roy, Wikibot, Tea2min, Giftlite, Markus Krtzsch, Dissident, Nodmonkey, Mike Rosoft, Smimram, Discospinster, Paul
August, ESkog, Rgdboer, EmilJ, Bobo192, Robotje, Smalljim, Jumbuck, Arthena, Keenan Pepper, Mykej, Bsadowski1, Blaxthos, Linas,
Evershade, Isnow, Marudubshinki, Salix alba, Vegaswikian, Nneonneo, Bgura, FlaBot, Alexb@cut-the-knot.com, Mathbot, Andy85719,
Ichudov, DVdm, YurikBot, Michael Slone, Grafen, Trovatore, Bota47, Banus, Melchoir, Yamaguchi , Gilliam, Bluebot, Ladislav the
Posthumous, Octahedron80, UNV, Jiddisch~enwiki, Khazar, FrozenMan, Bando26, 16@r, Dicklyon, EdC~enwiki, Engelec, Exzakin,
Jokes Free4Me, Simeon, Gregbard, Thijs!bot, Barticus88, Marek69, Nezzadar, Escarbot, Mhaitham.shammaa, Salgueiro~enwiki, JAnD-
bot, Onkel Tuca~enwiki, Acroterion, Drewmutt, Numbo3, Katalaveno, AntiSpamBot, GaborLajos, Lyctc, Idioma-bot, Janice Margaret
Vian, Montchav, TXiKiBoT, Oshwah, Anonymous Dissident, Dictouray, Oxfordwang, Martin451, Skylarkmichelle, Jackfork, Envi-
roboy, Dmcq, AlleborgoBot, Gerakibot, Bentogoa, Flyer22 Reborn, Radon210, Hello71, Denisarona, ClueBot, The Thing That Should
Not Be, Cli, Mild Bill Hiccup, Niceguyedc, Goldkingtut5, Excirial, Jusdafax, NuclearWarfare, NERIC-Security, Pichpich, Mm40, Ad-
dbot, Jojhutton, Ronhjones, Zarcadia, Favonian, Squandermania, Jarble, Ben Ben, Legobot, Luckas-bot, AnomieBOT, Materialscientist,
NFD9001, Greatfermat, False vacuum, RibotBOT, Intelligentsium, Pinethicket, I dream of horses, MastiBot, Andrea105, Slon02, Saul34,
J36miles, John of Reading, Davejohnsan, Orphan Wiki, Super48paul, Sp33dyphil, Slawekb, Quondum, BrokenAnchorBot, TyA, Don-
ner60, Chewings72, DASHBotAV, AlecJansen, ClueBot NG, Wcherowi, IfYouDoIfYouDon't, Dreth, O.Koslowski, Asukite, Widr, Vib-
hijain, Helpful Pixie Bot, Pmi1924, BG19bot, TCN7JM, Saulpila2000, Dan653, Forkloop, CallofDutyboy9, EuroCarGT, Sandeep.ps4,
Christian314, Ivashikhmin, IsraphelMac, Mogism, Darcourse, Makecat-bot, Stephan Kulla, Lugia2453, Gphilip, Brirush, Wywin, BB-
GUN101, ElHef, DavidLeighEllis, Shaun9876, Pkramer2021, Kitkat1234567880, Kcolemantwin3, Gracecandy1143, Justin15w, David88063,
Abruce123412, Amortias, Solid Frog, Loraof, Jj 1213 wiki, Dalangster, Iwamwickham, 123456me123456, RedPanda25, Some1Redirects4You,
ProprioMe OW, CLCStudent, Friendlyyoshi, Pockybits, Bender the Bot, Deacon Vorbis, RileyBugz, Magic links bot, Ya boy biggie smalle
and Anonymous: 276
Associative property Source: https://en.wikipedia.org/wiki/Associative_property?oldid=797057619 Contributors: AxelBoldt, Zundark,
Jeronimo, Andre Engels, XJaM, Christian List, Toby~enwiki, Toby Bartels, Patrick, Michael Hardy, Ellywa, Andres, Pizza Puzzle, Ideyal,
Charles Matthews, Dysprosia, Kwantus, Robbot, Mattblack82, Wikibot, Wereon, Robinh, Tea2min, Giftlite, Smjg, Lethe, Herbee,
Brona, Jason Quinn, Deleting Unnecessary Words, Creidieki, Rich Farmbrough, Guanabot, Paul August, Bender235, Rgdboer, Jum-
buck, Alansohn, Gary, Burn, H2g2bob, Bsadowski1, Oleg Alexandrov, Linas, Palica, Graham87, Deadcorpse, SixWingedSeraph, Fre-
plySpang, Yurik, Josh Parris, Rjwilmsi, Salix alba, Mathbot, Chobot, YurikBot, Thane, AdiJapan, BOT-Superzerocool, Bota47, Banus,
KnightRider~enwiki, Mmernex, Melchoir, PJTraill, Thumperward, Fuzzform, SchftyThree, Octahedron80, Zven, Furby100, Wine Guy,
Smooth O, Cybercobra, Cothrun, Will Beback, SashatoBot, Frentos, IronGargoyle, Physis, 16@r, JRSpriggs, CBM, Gregbard, Xtv, Mr
Gronk, Rbanzai, Thijs!bot, Egrin, Marek69, Escarbot, Salgueiro~enwiki, DAGwyn, David Eppstein, JCraw, Anaxial, Trusilver, Acala-
mari, Pyrospirit, Krishnachandranvn, Daniel5Ko, GaborLajos, Darklich14, LokiClock, AlnoktaBOT, Rei-bot, Wiae, SieBot, Gerakibot,
Flyer22 Reborn, Hello71, Trefgy, Classicalecon, ClueBot, The Thing That Should Not Be, Cli, Razimantv, Mudshark36, Auntof6,
DragonBot, Watchduck, Bender2k14, SoxBot III, Addbot, Some jerk on the Internet, Download, BepBot, Tide rolls, Jarble, Ettrig,
Luckas-bot, Yobot, Worldbruce, TaBOT-zerem, Kuraga, Materialscientist, MauritsBot, GrouchoBot, PI314r, Charvest, Ex13, Erik9bot,
MacMed, Pinethicket, MastiBot, Fox Wilson, Ujoimro, GoingBatty, ZroBot, NuclearDuckie, Quondum, D.Lazard, Donner60, EdoBot,
Crown Prince, ClueBot NG, Wcherowi, Matthiaspaul, Kevin Gorman, Widr, BG19bot, Furkaocean, IkamusumeFan, SuperNerd137,
Jimw338, Rang213, Stephan Kulla, Lugia2453, Jshynn, Epicgenius, Kenrambergm2374, Southparkfan, Matthew Kastor, Wbakeriii,
Gareld Gareld, Raycheng200, Zowayix001, Subcientico, Loraof, Julietdeltalima, Monarchrob1, Luis150902, Allthefoxes, Deepgrass,
Testamartestamatwrersvs, Here2help, Ya boy biggie smalle and Anonymous: 140
Atomic formula Source: https://en.wikipedia.org/wiki/Atomic_formula?oldid=754398090 Contributors: Hyacinth, Aleph4, Creidieki,
Kaustuv, Paul August, Spayrard, Linas, BD2412, Tizio, Trovatore, Hughitt1, Mhss, Jon Awbrey, Mets501, E-boy, CBM, HenningTh-
ielemann, Gregbard, Cydebot, Thijs!bot, MartinBot, Naohiro19, Gurchzilla, The best gamer, Dessources, VolkovBot, Morenooso, Kyle
the bot, Philogo, RatnimSnave, Curtdbz, Addbot, Charletan, Yobot, Citation bot, Xqbot, D'ohBot, Dinamik-bot, RjwilmsiBot, Classerre,
Alpha Quadrant (alt), Tijfo098, Helpful Pixie Bot, Wasbeer, Jianhui67, Arpitzain and Anonymous: 15
Atomic sentence Source: https://en.wikipedia.org/wiki/Atomic_sentence?oldid=756992271 Contributors: AugPi, OkPerson, Creidieki,
Nortexoid, Oleg Alexandrov, Joriki, Linas, BD2412, Rjwilmsi, TheDJ, Rick Norwood, Tomisti, Mhss, Byelf2007, Iridescent, Drefty-
mac, CRGreathouse, CBM, Gregbard, Cydebot, Julian Mendez, DWRZ, Zorakoid, Juliancolton, Tomer T, Philogo, JimJJewett, Arda
Xi, Flyer22 Reborn, Alejandrocaro35, Botsjeh, WikHead, Addbot, Zorrobot, Headlikeawhole, FrescoBot, BrideOfKripkenstein, Jo-
erom5883, LittleWink, Mhiji, Blitzmut, Khazar2, Camila Cavalcanti Nery, Erick Lucena, Kaliratra and Anonymous: 11
Balanced boolean function Source: https://en.wikipedia.org/wiki/Balanced_boolean_function?oldid=749850455 Contributors: Bearcat,
Xezbeth, Allens, Fetchcomms, Uncle Milty, Addbot, HRoestBot, Gzorg, AvicAWB, Quondum, ClueBot NG, M.r.ebraahimi, Snow Bliz-
zard, Prakhar098, Qetuth, Govind285, Ryanguyaneseboy, Lobopizza, Int80 and Anonymous: 2
Bent function Source: https://en.wikipedia.org/wiki/Bent_function?oldid=791120153 Contributors: Phil Boswell, Rich Farmbrough,
Will Orrick, Rjwilmsi, Marozols, Yahya Abdal-Aziz, Ntsimp, Headbomb, David Eppstein, Anonymous Dissident, Watchduck, Yobot,
AdmiralHood, Nageh, Citation bot 1, Tom.Reding, Trappist the monk, Gzorg, EmausBot, Quondum, Ebehn, Wcherowi, Helpful Pixie
Bot, ChrisGualtieri, Pintoch, Zieglerk, Monkbot, Cryptowarrior, InternetArchiveBot and Anonymous: 5
Bernays–Schönfinkel class Source: https://en.wikipedia.org/wiki/Bernays%E2%80%93Sch%C3%B6nfinkel_class?oldid=785930739
Contributors: David.Monniaux, MSGJ, Rich Farmbrough, EmilJ, Blotwell, Rjwilmsi, NavarroJ, CBM, Widefox, Magioladitis, David
Eppstein, Izno, Yobot, TuvianNavy, Citation bot 1, Trappist the monk, RjwilmsiBot, Fakusb and Anonymous: 1
Beta normal form Source: https://en.wikipedia.org/wiki/Beta_normal_form?oldid=772223164 Contributors: Michael Hardy, Dominus,
Dysprosia, Ruakh, Spayrard, Tromp, Linas, BD2412, StevenDaryl, William Lovas, Salsb, Jpbowen, Cedar101, Donhalcon, SmackBot,
Mhss, Addbot, Erik9bot, Trappist the monk, BG19bot, JonathasDantas, Synthwave.94, Shallchang, Malan and Anonymous: 4
Biconditional elimination Source: https://en.wikipedia.org/wiki/Biconditional_elimination?oldid=637933553 Contributors: Patrick,
Justin Johnson, Angela, Rich Farmbrough, GregorB, Graham87, SmackBot, Bluebot, Lambiam, Jim.belk, CBM, Gregbard, Alejan-
drocaro35, LilHelpa, Erik9bot, Gamewizard71, Mark viking and Anonymous: 4
Biconditional introduction Source: https://en.wikipedia.org/wiki/Biconditional_introduction?oldid=619411971 Contributors: Patrick,
Justin Johnson, Jitse Niesen, Sketchee, Graham87, Arthur Rubin, SmackBot, Bluebot, Jim.belk, CBM, Gregbard, VolkovBot, Legobot,
KamikazeBot, Erik9bot, Gamewizard71, AvicBot, Dooooot and Anonymous: 9
Bidirectional transformation Source: https://en.wikipedia.org/wiki/Bidirectional_transformation?oldid=758680067 Contributors: Michael
Hardy, Bearcat, Raphink, Ruud Koot, Frappyjohn, Educres, Jarble, Yobot, Winterst, Borkicator, Mr Sheep Measham, BG19bot and
Tmslnz
Bijection Source: https://en.wikipedia.org/wiki/Bijection?oldid=798557885 Contributors: Damian Yerrick, AxelBoldt, Tarquin, Jan Hid-
ders, XJaM, Toby Bartels, Michael Hardy, Wshun, TakuyaMurata, GTBacchus, Karada, , Glenn, Poor Yorick, Rob Hooft,
Pizza Puzzle, Hashar, Hawthorn, Charles Matthews, Dcoetzee, Dysprosia, Hyacinth, David Shay, Ed g2s, Bevo, Robbot, Fredrik, Ben-
wing, Bkell, Salty-horse, Tea2min, Giftlite, Jorge Stol, Alberto da Calvairate~enwiki, MarkSweep, Tsemii, Vivacissamamente, Guan-
abot, Guanabot2, Quistnix, Paul August, Ignignot, MisterSheik, Nickj, Kevin Lamoreau, Obradovic Goran, Pearle, HasharBot~enwiki,
Dallashan~enwiki, ABCD, Schapel, Palica, MarSch, Salix alba, FlaBot, VKokielov, Nihiltres, RexNL, Chobot, YurikBot, Michael
Slone, Spl, Member, SmackBot, RDBury, Mmernex, Octahedron80, Mhym, Bwoodacre, Dreadstar, Davipo, Loadmaster, Mets501,
Dreftymac, Hilverd, Johnfuhrmann, Bill Malloy, Domitori, JRSpriggs, CmdrObot, Gregbard, Yaris678, Sam Staton, Panzer raccoon!,
Kilva, AbcXyz, Escarbot, Salgueiro~enwiki, JAnDbot, David Eppstein, Martynas Patasius, Paulnwatts, Cpiral, GaborLajos, Policron,
Diegovb, UnicornTapestry, Yomcat, Wykypydya, Bongoman666, SieBot, Paradoctor, Flyer22 Reborn, Paolo.dL, Smaug123, MiNom-
breDeGuerra, JackSchmidt, I Spel Good~enwiki, Peiresc~enwiki, Classicalecon, Adrianwn, Biagioli, Watchduck, Hans Adler, Huma-
nengr, Neuralwarp, Baudway, FactChecker1199, Kal-El-Bot, Subversive.sound, Tanhabot, Glane23, PV=nRT, Meisam, Legobot, Luckas-
bot, Yobot, Ash4Math, Shvahabi, Omnipaedista, RibotBOT, Thehelpfulbot, FrescoBot, MarcelB612, CodeBlock, MastiBot, FoxBot,
Duoduoduo, Xnn, EmausBot, Hikaslap, TuHan-Bot, Checkingfax, Cobaltcigs, Wikfr, Karthikndr, Wikiloop, Anita5192, ClueBot NG,
Akuindo, Wcherowi, Widr, Strike Eagle, PhnomPencil, Knwlgc, Dhoke sanket, Victor Yus, Dexbot, Cerabot~enwiki, JPaestpreornJe-
olhlna, Yardimsever, CasaNostra, KoriganStone, Whamsicore, JMP EAX, Pierre MC0123, Kiwist, Sweepy, Abjee, Bender the Bot,
Oatsy, JoeO1806 and Anonymous: 103
Bijection, injection and surjection Source: https://en.wikipedia.org/wiki/Bijection%2C_injection_and_surjection?oldid=799333364
Contributors: The Anome, TakuyaMurata, PingPongBoy, Revolver, Charles Matthews, Altenmann, Mattaschen, Tea2min, Rock69~enwiki,
Antonis Christodes, Giftlite, MathKnight, Jcw69, Rich Farmbrough, Luqui, Bender235, Blotwell, Oleg Alexandrov, Ryan Reich, Jacj,
Dpr, MarSch, Salix alba, FlaBot, Mathbot, Nihiltres, YurikBot, Hairy Dude, Conscious, KSmrq, Grubber, Zwobot, Cconnett, Master-
campbell, InverseHypercube, XudongGuan~enwiki, Da nuke, Lambiam, Mike Fikes, CBM, Gregbard, Jac16888, Barticus88, Konradek,
Headbomb, NLuchs, Magioladitis, R'n'B, Quantling, Perel, Oshwah, Versus22, Eraheem, Addbot, MrOllie, Jarble, Yobot, AnomieBOT,
Materialscientist, DannyAsher, Omnipaedista, Sawomir Biay, Oxonienses, Dmwpowers, Chharvey, Wikfr, George Makepeace, Anita5192,
ClueBot NG, Wcherowi, TricksterWolf, Jochen Burghardt, CSB radio, One Of Seven Billion, AwesomeEvilGenius, Sphinx-muse, Jascharn-
horst, Nbro, Deacon Vorbis, Pharask and Anonymous: 67
Binary decision diagram Source: https://en.wikipedia.org/wiki/Binary_decision_diagram?oldid=792679397 Contributors: Taw, Heron,
Michael Hardy, Charles Matthews, Greenrd, Furrykef, David.Monniaux, Rorro, Michael Snow, Gtrmp, Laudaka, Andris, Ryan Clark,
Sam Hocevar, McCart42, Andreas Kaufmann, Jkl, Kakesson, Uli, EmilJ, AshtonBenson, Mdd, Dirk Beyer~enwiki, Sade, IMeowbot,
Ruud Koot, YHoshua, Bluemoose, GregorB, Matumio, Qwertyus, Kinu, Brighterorange, YurikBot, KSchutte, Trovatore, Mikeblas, Jess
Riedel, SmackBot, Jcarroll, Karlbrace, LouScheer, Derek farn, CmdrObot, Pce3@ij.net, Jay.Here, Wikid77, Headbomb, Ajo Mama,
Bobke, Hermel, Nouiz, Karsten Strehl, David Eppstein, Hitanshu D, Boute, Rohit.nadig, Aaron Rotenberg, Karltk, Rdhettinger, AMC-
Costa, Trivialist, Pranavashok, Sun Creator, Addbot, YannTM, Zorrobot, Yobot, Amirobot, Jason Recliner, Esq., SailorH, Twri, J04n,
Bigfootsbigfoot, Britlak, MondalorBot, Sirutan, Onel5969, Dewritech, EleferenBot, Ort43v, Elaz85, Tijfo098, Helpful Pixie Bot, Cal-
abe1992, BG19bot, Solomon7968, Happyuk, Divy.dv, ChrisGualtieri, Denim1965, Lone boatman, Mark viking, Behandeem, Melcous,
Thibaut120094, Damonamc, Cjdrake1, JMP EAX, InternetArchiveBot, Magic links bot and Anonymous: 91
Bitwise operation Source: https://en.wikipedia.org/wiki/Bitwise_operation?oldid=800409914 Contributors: AxelBoldt, Tim Starling,
Wapcaplet, Delirium, MichaelJanich, Ahoerstemeier, Dwo, Dcoetzee, Furrykef, Lowellian, Tea2min, Giftlite, DavidCary, AlistairMcMil-
lan, Nayuki, Nathan Hamblen, Vadmium, Ak301, Jimwilliams57, Andreas Kaufmann, Discospinster, Xezbeth, Paul August, Bender235,
ESkog, ZeroOne, Plugwash, Pilatus, RoyBoy, Spoon!, JeR, Hooperbloob, Cburnett, Suruena, Voxadam, Forderud, Brookie, Jonathan
de Boyne Pollard, Distalzou, Bluemoose, Kevin.dickerson, Turnstep, Tslocum, Qwertyus, Kbdank71, Sjakkalle, XP1, Moskvax, Math-
bot, GnniX, Quuxplusone, Visor, YurikBot, Wavelength, NTBot~enwiki, FrenchIsAwesome, Locke411, Rsrikanth05, Troller Trolling
Rodriguez, Trovatore, Sekelsenmat, Mikeblas, Klutzy, Cedar101, Cmglee, Amalthea, SmackBot, Incnis Mrsi, Mr link, Rmosler2100,
Plainjane335, @modi, Oli Filth, Baa, Torzsmokus, Teehee123, BlindWanderer, Loadmaster, Optakeover, Glen Pepicelli, Hu12, Yageroy,
Amniarix, Ceran, CRGreathouse, RomanXNS, ClearQ, Thijs!bot, Kubanczyk, N5iln, Acetate~enwiki, Widefox, Guy Macon, Chico75,
Snjrdn, Altamel, JAnDbot, ZZninepluralZalpha, Hackster78, David Eppstein, Gwern, Numbo3, Ian.thomson, Owlgorithm, NewEng-
landYankee, Rbakker99, Seraphim, Quindraco, Sillygwailo, Yintan, SimonTrew, ClueBot, Panchoy, Watchduck, Heckledpie, RedYeti,
Dthomsen8, Dsimic, Addbot, WQDW412, Jarble, Luckas-bot, Yobot, Tubybb, Fraggle81, Timeroot, AnomieBOT, Xqbot, Jayeshsen-
jaliya, Cal27, Rimcob, Frosted14, Perplextase, Intelliproject, 0x0309, Tilkax, Jopazhani, D'ohBot, Vareyn, Kmdouglass, Guillefc, Zvn,
Jveldeb, EmausBot, Set theorist, Noloader, Dewritech, Mateen Ulhaq, Scgtrp, Da Scientist, ZroBot, Dennis714, Jajabinks97, Sbmeirow,
Wikiloop, ClueBot NG, Matthiaspaul, Jcgoble3, Sharkqwy, Episcophagus, Helpful Pixie Bot, Iste Praetor, BG19bot, SimonZucker-
braun, AtrumVentus, PartTimeGnome, Johnny honestly, BattyBot, Oalders, Sfarney, Tagremover, FoCuSandLeArN, Fuebar, Poka-
janje, Jochen Burghardt, Zziccardi, Ben-Yeudith, Artoria2e5, Edgarphs, Franois Robere, Zenzhong8383, JoseEN, Zeenamoe, Xxiggy,
CoolOppo, AdityaKPorwal, Kajhutan, Eavestn, User000name, Boehm, JaimeGallego, JustSomeRandomPersonWithAComputer, Ttt74,
Alokaabasan123, Fmadd, NoToleranceForIntolerance, Peacecop kalmer and Anonymous: 229
Blake canonical form Source: https://en.wikipedia.org/wiki/Blake_canonical_form?oldid=797759780 Contributors: Macrakis, David
Eppstein, Jarble, Matthiaspaul, Kmzayeem and Magic links bot
Boole's expansion theorem Source: https://en.wikipedia.org/wiki/Boole%27s_expansion_theorem?oldid=791896639 Contributors: Michael
Hardy, SebastianHelm, Charles Matthews, Giftlite, SamB, Macrakis, McCart42, Photonique, Qwertyus, Siddhant, Trovatore, Closed-
mouth, SmackBot, Javalenok, Bwgames, Freewol, Harrigan, AndrewHowse, Hamaryns, Plm209, DAGwyn, Cebus, LOTRrules, Denis-
arona, Addbot, Loz777, Yobot, Omnipaedista, AManWithNoPlan, BabbaQ, KLBot2, Muammar Gadda, Dwija Prasad De, Bernatis123,
Engheta, InternetArchiveBot, Bender the Bot, Xhan0o, Zensei2x and Anonymous: 19
Boolean algebra Source: https://en.wikipedia.org/wiki/Boolean_algebra?oldid=800235639 Contributors: William Avery, Michael Hardy,
Dan Koehl, Tacvek, Hyacinth, Dimadick, Tea2min, Thorwald, Paul August, Bender235, ESkog, El C, EmilJ, Coolcaesar, Wtmitchell,
Mindmatrix, Michiel Helvensteijn, BD2412, Rjwilmsi, Pleiotrop3, GnniX, Jrtayloriv, Rotsor, Wavelength, Trovatore, MacMog, Arthur
Rubin, Rms125a@hotmail.com, Caballero1967, Sardanaphalus, SmackBot, Incnis Mrsi, Gilliam, Tamfang, Lambiam, Wvbailey, Khazar,
Iridescent, Vaughan Pratt, CBM, Neelix, Widefox, QuiteUnusual, Magioladitis, David Eppstein, TonyBrooke, Glrx, Jmajeremy, Nwbee-
son, Cebus, Hurkyl, JohnBlackburne, Oshwah, Tavix, Jackfork, PericlesofAthens, CMBJ, Waldhorn, Soler97, Jruderman, Francvs,
Binksternet, Bruceschuman, Excirial, Hugo Herbelin, Johnuniq, Pgallert, Fluernutter, Favonian, Yobot, AnomieBOT, Danielt998, Ma-
terialscientist, Citation bot, MetaNest, Kivgaen, Pinethicket, Minusia, Oxonienses, Gamewizard71, Trappist the monk, Jordgette, It-
sZippy, Rbaleksandar, Jmencisom, Winner 42, Dcirovic, D.Lazard, Sbmeirow, Pun, Tijfo098, SemanticMantis, LZ6387, ClueBot
NG, LuluQ, Matthiaspaul, Abecedarius, Delusion23, Jiri 1984, Calisthenis, Helpful Pixie Bot, Shantnup, BG19bot, Northamerica1000,
Ivannoriel, Supernerd11, Robert Thyder, LanaEditArticles, Brad7777, Wolfmanx122, Proxyma, Soa karampataki, Muammar Gadda,
Cerabot~enwiki, Fuebar, Telfordbuck, Ruby Murray, Rlwood1, Shevek1981, Seppi333, The Rahul Jain, Matthew Kastor, LarsHugo,
Happy Attack Dog, Abc 123 def 456, Trax support, Lich counter, Mathematical Truth, Ksarnek, LukasMatt, Anjana Larka, Petr.savicky,
Myra Gul, DiscantX, Striker0614, Masih.bist, KasparBot, Jamieddd, Da3mon, MegaManiaZ, Bawb131, Prayasjain7, Simplexity22,
Striker0615, Integrvl, Fmadd, Pioniepl, Bender the Bot, 72, Wikishovel, Thbreacker, Antgaucho, Neehalsharrma1419 and Anonymous:
152
Boolean algebra (structure) Source: https://en.wikipedia.org/wiki/Boolean_algebra_(structure)?oldid=791495665 Contributors: Ax-
elBoldt, Mav, Bryan Derksen, Zundark, Tarquin, Taw, Jeronimo, Ed Poor, Perry Bebbington, XJaM, Toby Bartels, Heron, Camem-
bert, Michael Hardy, Pit~enwiki, Shellreef, Justin Johnson, GTBacchus, Ellywa, , DesertSteve, Samuel~enwiki, Charles
Matthews, Timwi, Dcoetzee, Dysprosia, Jitse Niesen, OkPerson, Maximus Rex, Imc, Fibonacci, Mosesklein, Sandman~enwiki, John-
leemk, JorgeGG, Robbot, Josh Cherry, Fredrik, Romanm, Voodoo~enwiki, Robinh, Ruakh, Tea2min, Ancheta Wis, Giftlite, Markus
Krtzsch, Lethe, MSGJ, Elias, Eequor, Pvemburg, Macrakis, Gauss, Ukexpat, Eduardoporcher, Barnaby dawson, Talkstosocks, Poccil,
Guanabot, Cacycle, Slipstream, Ivan Bajlo, Mani1, Paul August, Bunny Angel13, Plugwash, Elwikipedista~enwiki, Chalst, Nortexoid,
Jojit fb, Wrs1864, Masashi49, Msh210, Andrewpmk, ABCD, Water Bottle, Cburnett, Alai, Klparrot, Woohookitty, Linas, Igny, Uncle G,
Kzollman, Graham87, Magister Mathematicae, Ilya, Qwertyus, SixWingedSeraph, Rjwilmsi, Isaac Rabinovitch, MarSch, KamasamaK,
Staecker, GOD, Salix alba, Yamamoto Ichiro, FlaBot, Mathbot, Alhutch, Celestianpower, Scythe33, Chobot, Visor, Nagytibi, Yurik-
Bot, RobotE, Hairy Dude, Baccala@freesoft.org, KSmrq, Joebeone, Archelon, Wiki alf, Trovatore, Yahya Abdal-Aziz, Bota47, Ott2,
Kompik, StuRat, Cullinane, , Arthur Rubin, JoanneB, Ilmari Karonen, Bsod2, SmackBot, FunnyYetTasty, Incnis Mrsi, Uny-
oyega, SaxTeacher, Btwied, Srnec, ERcheck, Izehar, Ciacchi, Cybercobra, Clarepawling, Jon Awbrey, Lambiam, Cronholm144, Meco,
Condem, Avantman42, Zero sharp, Vaughan Pratt, Makeemlighter, CBM, Gregbard, Sopoforic, Pce3@ij.net, Sommacal alfonso, Julian
Mendez, Tawkerbot4, Thijs!bot, Sagaciousuk, Colin Rowat, Tellyaddict, KrakatoaKatie, AntiVandalBot, JAnDbot, MER-C, Magioladitis,
Albmont, Omicron18, David Eppstein, Honx~enwiki, Dai mingjie, Samtheboy, Policron, Fylwind, Pleasantville, Enoksrd, Anonymous
Dissident, Plclark, The Tetrast, Fwehrung, Escher26, Wing gundam, CBM2, WimdeValk, He7d3r, Hans Adler, Andreasabel, Hugo
Herbelin, Aguitarheroperson, Download, LinkFA-Bot, ., Jarble, Yobot, Ht686rg90, 2D, AnomieBOT, RobertEves92, Citation
bot, ArthurBot, LilHelpa, Gonzalcg, RibotBOT, SassoBot, Charvest, Constructive editor, FrescoBot, Irmy, Citation bot 1, Microme-
sistius, DixonDBot, EmausBot, Faceless Enemy, KHamsun, Dcirovic, Thecheesykid, D.Lazard, Sbmeirow, Tijfo098, ChuispastonBot,
Anita5192, ClueBot NG, Delusion23, Jiri 1984, Widr, Helpful Pixie Bot, Solomon7968, CitationCleanerBot, ChrisGualtieri, Tagre-
mover, Freeze S, Dexbot, Kephir, Jochen Burghardt, Cosmia Nebula, GeoreyT2000, JMP EAX, Bender the Bot, Magic links bot and
Anonymous: 156
Boolean algebras canonically defined Source: https://en.wikipedia.org/wiki/Boolean_algebras_canonically_defined?oldid=782969258
Contributors: Zundark, Michael Hardy, Tea2min, Pmanderson, D6, Paul August, EmilJ, Woohookitty, Linas, BD2412, Rjwilmsi, Salix
alba, Hairy Dude, KSmrq, Robertvan1, Arthur Rubin, Bluebot, Jon Awbrey, Lambiam, Assulted Peanut, Vaughan Pratt, CBM, Gregbard,
Barticus88, Headbomb, Nick Number, Magioladitis, Srice13, David Eppstein, STBot, R'n'B, Jeepday, Michael.Deckers, The Tetrast,
Wing gundam, Fratrep, Dabomb87, Hans Adler, DOI bot, Bte99, Yobot, Pcap, AnomieBOT, Citation bot, RJGray, FrescoBot, Irmy,
Citation bot 1, Set theorist, Klbrain, Tommy2010, Tijfo098, Helpful Pixie Bot, Solomon7968, Zeke, the Mad Horrorist, Fuebar, JJMC89,
Magic links bot and Anonymous: 11
Boolean conjunctive query Source: https://en.wikipedia.org/wiki/Boolean_conjunctive_query?oldid=799407316 Contributors: Trylks,
Tizio, Cedar101, Gregbard, Alaibot, DOI bot, Cdrdata, Dcirovic and Anonymous: 5
Boolean data type Source: https://en.wikipedia.org/wiki/Boolean_data_type?oldid=799640491 Contributors: CesarB, Nikai, Charles
Matthews, Greenrd, Furrykef, Grendelkhan, Robbot, Sbisolo, Wlievens, Mattaschen, Mfc, Tea2min, Enochlau, Jorge Stol, Mboverload,
Fanf, SamSim, Zondor, Bender235, The Noodle Incident, Spoon!, Causa sui, Grick, R. S. Shaw, A-Day, Rising~enwiki, Minority Report,
Metron4, Cburnett, Tony Sidaway, RainbowOfLight, DanBishop, Chris Mason, Tabletop, Marudubshinki, Josh Parris, Rjwilmsi, Salix
alba, Darguz Parsilvan, AndyKali, Vlad Patryshev, Mathbot, Spankthecrumpet, DevastatorIIC, NevilleDNZ, RussBot, Arado, Trovatore,
Thiseye, Mikeblas, Ospalh, Bucketsofg, Praetorian42, Max Schwarz, Wknight94, Richardcavell, Andyluciano~enwiki, HereToHelp, JLa-
Tondre, SmackBot, Melchoir, Renku, Mscuthbert, Sam Pointon, Gilliam, Cyclomedia, Gracenotes, Royboycrashfan, Kindall, Kcordina,
Jpaulm, BIL, Cybercobra, Decltype, Paddy3118, Richard001, Henning Makholm, Lambiam, Doug Bell, Amine Brikci N, Hiiiiiiiiii-
iiiiiiiiiii, Mirag3, EdC~enwiki, Jafet, Nyktos, SqlPac, Jokes Free4Me, Sgould, Torc2, Thijs!bot, Epbr123, JustAGal, AntiVandalBot,
Guy Macon, Seaphoto, JAnDbot, Db099221, Datrukup, Martinkunev, Bongwarrior, Soulbot, Tonyfaull, Burbble, Gwern, MwGamera,
Rettetast, Gah4, Zorakoid, Letter M, Raise exception, Adammck, Joeinwap, Philip Trueman, Qwayzer, Technopat, Hqb, Ashrentum,
Mcclarke, Jamelan, Jamespurs, LOTRrules, Kbrose, SieBot, Jerryobject, SimonTrew, SoulComa, Ctxppc, ClueBot, Hitherebrian, Pointil-
list, AmirOnWiki, DumZiBoT, Cmcqueen1975, Nickha0, Addbot, Mortense, Btx40, Lightbot, Gail, Luckas-bot, Yobot, Wonder, Gtz,
NeatNit, Xqbot, Miym, GrouchoBot, Shirik, Mark Renier, Zero Thrust, Machine Elf 1735, HRoestBot, MastiBot, FoxBot, Staaki, Vrena-
tor, RjwilmsiBot, Ripchip Bot, EmausBot, WikitanvirBot, Ahmed Fouad(the lord of programming), Thecheesykid, Donner60, Tijfo098,
ClueBot NG, Kylecbenton, Rezabot, Masssly, BG19bot, Hasegeli, MusikAnimal, Chmarkine, Aaron613, The Anonymouse, Lesser Car-
tographies, Monkbot, Zahardzhan, Eurodyne, FiendYT, Milos996, Blahblehblihblohbluh27, KolbertBot and Anonymous: 162
Boolean domain Source: https://en.wikipedia.org/wiki/Boolean_domain?oldid=788112863 Contributors: Toby Bartels, Asparagus, El
C, Cje~enwiki, Versageek, Jerey O. Gustafson, Salix alba, DoubleBlue, Closedmouth, Bibliomaniac15, SmackBot, Incnis Mrsi, C.Fred,
Mhss, Bluebot, Octahedron80, Nbarth, Ccero, NickPenguin, Jon Awbrey, JzG, Coredesat, Slakr, KJS77, CBM, AndrewHowse, Gogo
Dodo, Pascal.Tesson, Xuanji, Hut 8.5, Brigit Zilwaukee, Yolanda Zilwaukee, Hans Dunkelberg, Matthi3010, Boute, Jamelan, Maelgwn-
bot, Francvs, Cli, Wolf of the Steppes, Doubtentry, Icharus Ixion, Hans Adler, Buchanans Navy Sec, Overstay, Marsboat, Viva La
Information Revolution!, Autocratic Uzbek, Poke Salat Annie, Flower Mound Belle, Navy Pierre, Mrs. Lovetts Meat Puppets, Chester
County Dude, Southeast Penna Poppa, Delaware Valley Girl, Theonlydavewilliams, Addbot, Yobot, Erik9bot, EmausBot, ClueBot NG,
BG19bot, Monkbot and Anonymous: 8
Boolean expression Source: https://en.wikipedia.org/wiki/Boolean_expression?oldid=798497825 Contributors: Patrick, Andreas Kauf-
mann, Bobo192, Giraedata, Eclecticos, BorgHunter, YurikBot, Michael Slone, Arado, Trovatore, StuRat, SmackBot, Rune X2, Miguel
Andrade, Sct72, Tamfang, Timcrall, David Eppstein, R'n'B, ClueBot, R000t, Addbot, Ptbotgourou, Materialscientist, Xqbot, Erik9bot,
Serols, Gryllida, NameIsRon, ClueBot NG, BG19bot, Solomon7968, Pratyya Ghosh, Jaqoc, Bender the Bot and Anonymous: 19
Boolean function Source: https://en.wikipedia.org/wiki/Boolean_function?oldid=791935592 Contributors: Patrick, Michael Hardy, Ci-
phergoth, Charles Matthews, Hyacinth, Michael Snow, Giftlite, Matt Crypto, Neilc, Gadum, Clemwang, Murtasa, Arthena, Oleg Alexan-
drov, Mindmatrix, Jok2000, CharlesC, Waldir, Qwertyus, Ner102, RobertG, Gene.arboit, NawlinWiki, Trovatore, TheKoG, SDS, Smack-
Bot, Mhss, Jon Awbrey, Poa, Bjankuloski06en~enwiki, Loadmaster, Eassin, Gregbard, Ntsimp, Pce3@ij.net, Shyguy92, Steveprutz, David
Eppstein, BigrTex, Trusilver, AltiusBimm, TheSeven, Policron, Tatrgel, TreasuryTag, TXiKiBoT, Spinningspark, Kumioko (renamed),
ClueBot, Watchduck, Farisori, Hans Adler, Addbot, Liquidborn, Luckas-bot, Amirobot, AnomieBOT, Chillout003, Twri, Quebec99,
Ayda D, Xqbot, Omnipaedista, Erik9bot, Nageh, Theorist2, EmausBot, Sivan.rosenfeld, ClueBot NG, Jiri 1984, Rezabot, WikiPuppies,
Allanthebaws, Int80, Nigellwh, Hannasnow, Anaszt5, InternetArchiveBot and Anonymous: 31
Boolean prime ideal theorem Source: https://en.wikipedia.org/wiki/Boolean_prime_ideal_theorem?oldid=784473773 Contributors:
AxelBoldt, Michael Hardy, Chinju, Eric119, AugPi, Charles Matthews, Dfeuer, Dysprosia, Aleph4, Tea2min, Giftlite, Markus Krtzsch,
MarkSweep, Vivacissamamente, Mathbot, Trovatore, Hennobrandsma, Ott2, Mhss, Jon Awbrey, CRGreathouse, CBM, Myasuda, Head-
bomb, RobHar, Trioculite, David Eppstein, Kope, TexD, Geometry guy, Alexey Muranov, Hugo Herbelin, Addbot, Xqbot, RoodyAlien,
Gonzalcg, FrescoBot, Citation bot 1, Tkuvho, Bomazi, Helpful Pixie Bot, PhnomPencil, Avsmal, Absolutelypuremilk and Anonymous:
16
Boolean ring Source: https://en.wikipedia.org/wiki/Boolean_ring?oldid=792365063 Contributors: AxelBoldt, Michael Hardy, Takuya-
Murata, AugPi, Charles Matthews, Dcoetzee, Dysprosia, Fredrik, Giftlite, Jason Quinn, AshtonBenson, Oleg Alexandrov, Ruud Koot,
Rjwilmsi, Salix alba, R.e.b., Michael Slone, Hwasungmars, Trovatore, Vanished user 1029384756, Crasshopper, Cedar101, NSLE,
Mhss, Valley2city, Bluebot, Nbarth, Amazingbrock, Rschwieb, Vaughan Pratt, TAnthony, Albmont, Jeepday, TXiKiBoT, Wing gundam,
JackSchmidt, Watchduck, Hans Adler, Alexey Muranov, Addbot, Download, Zorrobot, Luckas-bot, Yobot, Ptbotgourou, JackieBot, The
Banner, Aliotra, Tom.Reding, Jakito, Dcirovic, Jasonanaggie, Anita5192, Matthiaspaul, Toploftical, Jochen Burghardt, Paul2520 and
Anonymous: 20
Boolean satisfiability algorithm heuristics Source: https://en.wikipedia.org/wiki/Boolean_satisfiability_algorithm_heuristics?oldid=
765344495 Contributors: Michael Hardy, Teb728, Postcard Cathy, Malcolmxl5, John of Reading, BG19bot, Surt91, Iordantrenkov and
Anonymous: 3
Boolean satisfiability problem Source: https://en.wikipedia.org/wiki/Boolean_satisfiability_problem?oldid=795334382 Contributors:
Damian Yerrick, LC~enwiki, Brion VIBBER, The Anome, Ap, Jan Hidders, ChangChienFu, B4hand, Dwheeler, Twanvl, Nealmcb,
Michael Hardy, Tim Starling, Cole Kitchen, Chinju, Zeno Gantner, Karada, Michael Shields, Schneelocke, Timwi, Dcoetzee, Dysprosia,
Wik, Markhurd, Jimbreed, Jecar, Hh~enwiki, David.Monniaux, Naddy, MathMartin, Nilmerg, Alex R S, Saforrest, Snobot, Giftlite, Ev-
eryking, Dratman, Elias, Mellum, Bacchiad, Gdr, Karl-Henner, Sam Hocevar, Creidieki, McCart42, Mjuarez, Leibniz, FT2, Night Gyr,
DcoetzeeBot~enwiki, Bender235, ZeroOne, Ben Standeven, Clement Cherlin, Chalst, Obradovic Goran, Quaternion, Diego Moya, Sur-
realWarrior, Cdetrio, Sl, Twiki-walsh, Zander, Drbreznjev, Linas, Queerudite, Jok2000, Graham87, BD2412, Mamling, Rjwilmsi, Tizio,
Mathbot, YurikBot, Hairy Dude, Canadaduane, Msoos, Jpbowen, Hohohob, Cedar101, Janto, RG2, SmackBot, Doomdayx, FlashSheri-
dan, DBeyer, Cachedio, Chris the speller, Javalenok, Mhym, LouScheer, Zarrapastro, Localzuk, Jon Awbrey, SashatoBot, Wvbailey,
J. Finkelstein, Igor Markov, Mets501, Dan Gluck, Tawkerbot2, Ylloh, CRGreathouse, CBM, Gregbard, AndrewHowse, Julian Mendez,
Oerjan, Electron9, Alphachimpbot, Wasell, Mmn100, Robby, A3nm, David Eppstein, Gwern, Andre.holzner, R'n'B, CommonsDelinker,
Vegasprof, Enmiles, Naturalog, VolkovBot, Dejan Jovanovi, LokiClock, Maghnus, Bovineboy2008, Jobu0101, Luuva, Piyush Sriva, Lo-
gan, Hattes, Fratrep, Svick, Fancieryu, PerryTachett, PsyberS, DFRussia, Simon04, Mutilin, DragonBot, Oliver Kullmann, Bender2k14,
Hans Adler, Rswarbrick, Max613, DumZiBoT, Arlolra, ~enwiki, Alexius08, Yury.chebiryak, Sergei, Favonian, Legobot,
Yobot, Mqasem, AnomieBOT, Erel Segal, Kingpin13, Citation bot, ArthurBot, Weichaoliu, Miym, Nameless23, Vaxquis, FrescoBot,
Artem M. Pelenitsyn, Milimetr88, Ahalwright, Guarani.py, Tom.Reding, Skyerise, Cnwilliams, Daniel.noland, MrX, Yaxy2k, Jowa
fan, Siteswapper, John of Reading, Wiki.Tango.Foxtrot, GoingBatty, PoeticVerse, Dcirovic, Rafaelgm, Dennis714, Zephyrus Tavvier,
Tigerauna, Orange Suede Sofa, Tijfo098, Chaotic iak, Musatovatattdotnet, Helpful Pixie Bot, Taneltammet, Saragh90, Cyberbot II,
ChrisGualtieri, JYBot, Dexbot, Girondaniel, Adrians wikiname, Jochen Burghardt, Natematic, Me, Myself, and I are Here, Sravan11k,
SiraRaven, Feydun, Juliusbier, Jacob irwin, Djhulme, RaBOTnik, 22merlin, Monkbot, Song of Spring, JeremiahY, KevinGoedecke,
EightTwoThreeFiveOneZeroSevenThreeOne, Marketanova984, InternetArchiveBot, Inblah, GreenC bot, Blue Edits, Bender the Bot,
EditingGirae and Anonymous: 177
Boolean-valued function Source: https://en.wikipedia.org/wiki/Boolean-valued_function?oldid=773507247 Contributors: Toby Bartels,
MathMartin, Giftlite, El C, Versageek, Oleg Alexandrov, Jerey O. Gustafson, Linas, BD2412, Qwertyus, Salix alba, DoubleBlue, Titoxd,
Trovatore, Closedmouth, Arthur Rubin, Bibliomaniac15, SmackBot, C.Fred, Jim62sch, Mhss, Chris the speller, Bluebot, NickPenguin,
Jon Awbrey, JzG, Coredesat, Slakr, Tawkerbot2, CRGreathouse, CBM, Sdorrance, Gregbard, Gogo Dodo, Thijs!bot, Dougher, Hut
8.5, Brigit Zilwaukee, Yolanda Zilwaukee, Seb26, Jamelan, Maelgwnbot, WimdeValk, Wolf of the Steppes, Doubtentry, Icharus Ixion,
DragonBot, Hans Adler, Buchanans Navy Sec, Overstay, Marsboat, Viva La Information Revolution!, Autocratic Uzbek, Poke Salat
Annie, Flower Mound Belle, Navy Pierre, Mrs. Lovetts Meat Puppets, Chester County Dude, Southeast Penna Poppa, Delaware Valley
Girl, AnomieBOT, Samppi111, Omnipaedista, Cnwilliams, Tijfo098, Frietjes, BG19bot, AK456, Kephir and Anonymous: 9
Boolean-valued model Source: https://en.wikipedia.org/wiki/Boolean-valued_model?oldid=782969282 Contributors: Michael Hardy,
TakuyaMurata, Charles Matthews, Tea2min, Giftlite, Marcos, Ryan Reich, BD2412, R.e.b., Mathbot, Trovatore, SmackBot, Mhss,
Ligulembot, Mets501, Zero sharp, CBM, Gregbard, Cydebot, Newbyguesses, Alexey Muranov, Addbot, Lightbot, Citation bot, DrilBot,
Kiefer.Wolfowitz, Dewritech, Radegast, Magic links bot and Anonymous: 7
Booleo Source: https://en.wikipedia.org/wiki/Booleo?oldid=794759598 Contributors: Furrykef, 2005, Mindmatrix, Gregbard, ImageR-
emovalBot, Mild Bill Hiccup, Yobot, AnomieBOT, Nemesis63, LilHelpa, Jesse V., Wcherowi, Jranbrandt, Zeke, the Mad Horrorist,
Ducknish, MrNiceGuy1113, Shevek1981 and Anonymous: 2
Bounded quantifier Source: https://en.wikipedia.org/wiki/Bounded_quantifier?oldid=775922650 Contributors: EmilJ, Ruud Koot, Hairy
Dude, JRSpriggs, CBM, Cydebot, Headbomb, Magioladitis, Philosophy.dude, Unzerlegbarkeit, Yobot, Pcap, Citation bot, Chharvey, Ti-
jfo098, Helpful Pixie Bot and Anonymous: 1
Branching quantifier Source: https://en.wikipedia.org/wiki/Branching_quantifier?oldid=748266895 Contributors: Charles Matthews,
Mike Rosoft, ChrisX, EmilJ, Nortexoid, BD2412, SmackBot, Chris the speller, Mets501, JRSpriggs, CBM, Czego szukasz, Addbot,
LaaknorBot, AnomieBOT, Citation bot, Chricho, Tijfo098, Pietro Galliani, Helpful Pixie Bot, Illia Connell, Dexbot, Jodosma, Inter-
netArchiveBot, Bender the Bot and Anonymous: 3
Canonical normal form Source: https://en.wikipedia.org/wiki/Canonical_normal_form?oldid=794763438 Contributors: Michael Hardy,
Ixfd64, Charles Matthews, Dysprosia, Sanders muc, Giftlite, Gil Dawson, Macrakis, Mako098765, Discospinster, ZeroOne, MoraSique,
Bookandcoee, Bkkbrad, Joeythehobo, Gurch, Fresheneesz, Bgwhite, YurikBot, Wavelength, Trovatore, Modify, SDS, SmackBot, Mhss,
MK8, Cybercobra, Jon Awbrey, Wvbailey, Freewol, Eric Le Bigot, Myasuda, Odwl, Pasc06, MER-C, Phosphoricx, Henriknordmark,
Jwh335, Gmoose1, AlnoktaBOT, AlleborgoBot, WereSpielChequers, Gregie156, WheezePuppet, WurmWoode, Sarbruis, Mhayeshk,
Hans Adler, Addbot, Douglaslyon, SpBot, Mess, Linket, Wrelwser43, Bluere 000, Sumurai8, Xqbot, Hughbs, Zvn, Reach Out to the
Truth, Semmendinger, Iamnitin, Tijfo098, ClueBot NG, Matthiaspaul, BG19bot, ChrisGualtieri, Enterprisey, NottNott, Sammitysam
and Anonymous: 56
Cantor algebra Source: https://en.wikipedia.org/wiki/Cantor_algebra?oldid=747432762 Contributors: R.e.b. and Bender the Bot
Chaff algorithm Source: https://en.wikipedia.org/wiki/Chaff_algorithm?oldid=792773626 Contributors: Stephan Schulz, McCart42,
Andreas Kaufmann, Tizio, Salix alba, NavarroJ, Banes, Jpbowen, SmackBot, Bsilverthorn, Mets501, CBM, Pgr94, Cydebot, Alaibot,
Widefox, Hermel, TAnthony, David Eppstein, Fraulein451 and Anonymous: 4
Clause (logic) Source: https://en.wikipedia.org/wiki/Clause_(logic)?oldid=782501939 Contributors: Greenrd, Stephan Schulz, Yuriz,
Tizio, Jrtayloriv, SmackBot, Mhss, PieRRoMaN, Charivari, Slakr, Amalas, Pgr94, Simeon, Gregbard, Cydebot, David Eppstein, Rafael
Keller Menezes, VVVBot, Lgavel, Lartoven, Addbot, Luckas-bot, Yobot, Calle, Animist, EmausBot, Tijfo098, Mikhail Ryazanov, Helpful
Pixie Bot, Compulogger, ChrisGualtieri, Monkbot and Anonymous: 15
Cohen algebra Source: https://en.wikipedia.org/wiki/Cohen_algebra?oldid=683514051 Contributors: Michael Hardy, R.e.b. and David
Eppstein
Cointerpretability Source: https://en.wikipedia.org/wiki/Cointerpretability?oldid=758760624 Contributors: Charles Matthews, Dys-
prosia, Kntg, PWilkinson, Aleph0~enwiki, Oleg Alexandrov, Mathbot, Gregbard, David Eppstein, Hans Adler, Gamewizard71 and
Anonymous: 1
Collapsing algebra Source: https://en.wikipedia.org/wiki/Collapsing_algebra?oldid=615789032 Contributors: R.e.b., David Eppstein
and Deltahedron
Commutative property Source: https://en.wikipedia.org/wiki/Commutative_property?oldid=795025567 Contributors: AxelBoldt, Zun-
dark, Tarquin, Andre Engels, Christian List, Toby~enwiki, Toby Bartels, Patrick, Michael Hardy, Wshun, Ixfd64, GTBacchus, Aho-
erstemeier, Snoyes, Jll, Pizza Puzzle, Ideyal, Charles Matthews, Wikiborg, Dysprosia, Furrykef, Jeq, Robbot, RedWolf, Romanm,
Robinh, Isopropyl, Tea2min, Enochlau, Giftlite, BenFrantzDale, Lupin, Herbee, Peruvianllama, Waltpohl, Frencheigh, Gdr, Knutux,
OverlordQ, B.d.mills, Chris Howard, Mormegil, Rich Farmbrough, Ebelular, Mikael Brockman, Dbachmann, Paul August, MisterSheik,
El C, Szquirrel, Touriste, Samadam, Malcolm rowe, Jumbuck, Arthena, Mattpickman, Mlessard, Burn, Mlm42, Stillnotelf, M3tainfo,
Tony Sidaway, Oleg Alexandrov, The JPS, Linas, Justinlebar, Je3000, Palica, Ashmoo, Graham87, Josh Parris, Rjwilmsi, Salix alba,
Vegaswikian, FlaBot, VKokielov, Ground Zero, Srleer, Masnevets, Reetep, YurikBot, Hairy Dude, Wolfmankurd, Michael Slone,
Sasuke Sarutobi, Rick Norwood, Samuel Huang, Derek.cashman, FF2010, Cedar101, Petri Krohn, Vicarious, SmackBot, YellowMon-
key, Slashme, Melchoir, KocjoBot~enwiki, Jab843, PJTraill, Chris the speller, Bluebot, Master of Puppets, Thumperward, SchftyThree,
Complexica, Octahedron80, DHN-bot~enwiki, JustUser, Cybercobra, Wybot, Thehakimboy, Acdx, Bando26, 16@r, A. Parrot, Childzy,
Dan Gluck, Iridescent, Dreftymac, DBooth, CBM, Robert.McGibbon, Floridi~enwiki, Unmitigated Success, Gregbard, MichaelRWolf,
Cydebot, Larsnostdal, Kozuch, Dinominant, JamesAM, Headbomb, Second Quantization, Wmasterj, Thomprod, Dzer0, Grayshi, Escar-
bot, Fr33ke, AntiVandalBot, Nacho Librarian, Gcm, 100110100, Magioladitis, Mikemill, Wikidudeman, DAGwyn, Dirac66, JoergenB,
MartinBot, JLHunter, Nev1, Daniele.tampieri, Haseldon, Policron, KylieTastic, Sarregouset, Useight, VolkovBot, TreasuryTag, Am Fio-
saigear~enwiki, Philip Trueman, TXiKiBoT, Oshwah, Anonymous Dissident, Aaron Rotenberg, Geometry guy, Spinningspark, Life,
Liberty, Property, SieBot, Ivan tambuk, Legion , Toddst1, Flyer22 Reborn, Xvani, Weston.pace, OKBot, Mike2vil, Francvs, Classi-
calecon, ClueBot, Cli, Bloodholds, R000t, CounterVandalismBot, Deathnomad, Excirial, Ftbhrygvn, Joe8824, Nas ru, Stephen Pop-
pitt, Addbot, LaaknorBot, SpBot, Gail, Luckas-bot, Yobot, Weisicong, AnomieBOT, DemocraticLuntz, Citation bot, MauritsBot, Xqbot,
Dithridge, 12cookk, Ubcule, GrouchoBot, Omnipaedista, Dger, Steve Quinn, HamburgerRadio, MacMed, Pinethicket, Adlerbot, Serols,
Psimmler, ThinkEnemies, JV Smithy, Onel5969, Ujoimro, Aceshooter, Slightsmile, Checkingfax, Quondum, Joshlepaknpsa, Wayne
Slam, Arnaugir, Scientic29, ClueBot NG, Wcherowi, Gilderien, Sayginer, Marechal Ney, Widr, AvocatoBot, Mark Arsten, Ameulen11,
CeraBot, Cyberbot II, ChrisGualtieri, None but shining hours, Khazar2, Dexbot, FoCuSandLeArN, Stephan Kulla, Fox2k11, Jochen
Burghardt, Stewwie, Pjpeters, Dskjhgds, DavidLeighEllis, Spacenut42, Stannic, Davidliu0421, Wikibritannica, Niallhoranluv123, JMP
EAX, WillemienH, Troolium, Esquivalience, Hinmatowyalahtqit, Holt Mcdougal, 3 of Diamonds, Tiger7890, Un Fou, InternetArchive-
Bot, Anareth, Manishkrisna108, RainFall, 72, Deacon Vorbis, Indig0 league p0kem0n and Anonymous: 203
Commutativity of conjunction Source: https://en.wikipedia.org/wiki/Commutativity_of_conjunction?oldid=630181027 Contributors:
Charles Matthews, OkPerson, Hyacinth, Mboverload, Robertbowerman, SmackBot, Reedy, CommBoy, Mihirgk, Gregbard, Cydebot,
PhilKnight, Yobot, Citation bot, Gamewizard71, Helpful Pixie Bot and Anonymous: 2
Complete Boolean algebra Source: https://en.wikipedia.org/wiki/Complete_Boolean_algebra?oldid=712984941 Contributors: Michael
Hardy, Silversh, Charles Matthews, Giftlite, AshtonBenson, Jemiller226, R.e.b., Mathbot, Scythe33, Shanel, Trovatore, Closedmouth,
SmackBot, Melchoir, Mhss, Mets501, Zero sharp, Vaughan Pratt, CBM, Cydebot, Headbomb, Noobeditor, Tim Ayeles, Hans Adler, Ad-
dbot, Angelobear, Citation bot, VictorPorton, Qm2008q, Citation bot 1, Dcirovic, Helpful Pixie Bot, BG19bot, K9re11 and Anonymous:
7
Composition of relations Source: https://en.wikipedia.org/wiki/Composition_of_relations?oldid=783534848 Contributors: Rp, AugPi,
Charles Matthews, Tea2min, Giftlite, EmilJ, Oliphaunt, MFH, SixWingedSeraph, MarSch, Nbarth, Lambiam, Happy-melon, CBM, Sam
Staton, David Eppstein, Chiswick Chap, Synthebot, Classicalecon, Hans Adler, Addbot, Luckas-bot, Yobot, Pcap, FrescoBot, Gamewiz-
ard71, Quondum, Wcherowi, Nbrader, Agile Antechinus, DeathOfBalance, JMP EAX, Bender the Bot, Magic links bot and Anonymous:
8
Conditional quantifier Source: https://en.wikipedia.org/wiki/Conditional_quantifier?oldid=729604050 Contributors: Jeq, Nortexoid,
CBM, CharlotteWebb, Uanfala, ChrisGualtieri and Jochen Burghardt
Conditioned disjunction Source: https://en.wikipedia.org/wiki/Conditioned_disjunction?oldid=681250017 Contributors: DavidWBrooks,
HarmonicSphere, Nortexoid, GregorB, BD2412, SmackBot, Elonka, Lambiam, CBM, Gregbard, Addbot, Yobot, HRoestBot, ZroBot
and Matthew Kastor
Conjunction elimination Source: https://en.wikipedia.org/wiki/Conjunction_elimination?oldid=662661702 Contributors: Owen, Rob-
bot, Smjg, Gdr, Jiy, Anthony Appleyard, Oleg Alexandrov, Algebraist, Arthur Rubin, SmackBot, Jim.belk, IronGargoyle, JHunterJ,
Grumpyyoungman01, Nicko6, CBM, Gregbard, Thijs!bot, Dfrg.msc, Father Goose, Markus Prokott, Fratrep, Silvergoat, Dylan620, Ale-
jandrocaro35, Addbot, AnnaFrance, Jarble, Luckas-bot, Yobot, AnomieBOT, Erik9bot, Andrewjameskirk, Dooooot, Jochen Burghardt
and Anonymous: 9
Conjunction introduction Source: https://en.wikipedia.org/wiki/Conjunction_introduction?oldid=783845089 Contributors: Justin John-
son, Silversh, Big Bob the Finder, Bloodshedder, Jiy, Rich Farmbrough, AllyUnion, Oleg Alexandrov, Graham87, YurikBot, Arthur
Rubin, SmackBot, Kurykh, Jim.belk, CBM, Gregbard, Alejandrocaro35, Legobot, Yobot, Constructive editor, LucienBOT, Set theorist,
Dooooot and Anonymous: 8
Conjunctive normal form Source: https://en.wikipedia.org/wiki/Conjunctive_normal_form?oldid=800324129 Contributors: Bryan Derk-
sen, Robert Merkel, BenBaker, Arvindn, Toby Bartels, B4hand, Michael Hardy, Aquatopia~enwiki, Ldo, Thesilverbail, Jleedev, Giftlite,
CyborgTosser, Niteowlneils, Macrakis, Gubbubu, Wzwz, Ary29, Erik Garrison, MementoVivere, ESkog, Cherlin, Obradovic Goran,
Poromenos, Oleg Alexandrov, Linas, Jacobolus, Eclecticos, Simsong, Graham87, BD2412, Tizio, Salix alba, Mike Segal, Mathbot, Jrtay-
loriv, Fresheneesz, Masnevets, Chobot, Dresdnhope, YurikBot, Hairy Dude, Cedar101, GrinBot~enwiki, SmackBot, Jpvinall, Rotemliss,
Mhss, Bluebot, PrimeHunter, Jon Awbrey, Danlev, Ylloh, CBM, Andkore, Myasuda, Simeon, Gregbard, Blaisorblade, Thijs!bot, Hermel,
Magioladitis, A3nm, Aydos~enwiki, Mikhail Dvorkin, AntiSpamBot, Policron, Ross Fraser, Bandanna, TXiKiBoT, Mx2323, Jamelan,
AlleborgoBot, IsleLaMotte, Tesi1700, Dardasavta, Alejandrocaro35, DumZiBoT, Addbot, Download, Jarble, Yobot, AnomieBOT, IM
Serious, LilHelpa, Snikeris, Hobsonlane, D'ohBot, Mikolasj, EmausBot, Frostyandy2k, Mo ainm, Bobogoobo, Tijfo098, Cognominally,
Petrb, Hofmic, OSDevLabs, Jamesx12345, Jochen Burghardt, SA 13 Bro, Tleo, NicoScribe, GreenC bot, Bchangip, Magic links bot,
JerryWexler and Anonymous: 80
Consensus theorem Source: https://en.wikipedia.org/wiki/Consensus_theorem?oldid=783547962 Contributors: AugPi, Macrakis, Rich
Farmbrough, Trovatore, Firetrap9254, Gregbard, Sabar, Kruckenberg.1, Magioladitis, Success dreamer89, Wiae, AlleborgoBot, Niceguyedc,
Addbot, Yobot, Pcap, Erik9bot, Merlion444, EmausBot, Kenathte, ZroBot, Wcherowi, Matthiaspaul, Solomon7968, Dexbot, Darcourse,
Hiswoundedwarriors2, Deacon Vorbis, Magic links bot and Anonymous: 9
Consequentia mirabilis Source: https://en.wikipedia.org/wiki/Consequentia_mirabilis?oldid=748700172 Contributors: Rich Farmbrough,
RL0919, SmackBot, Kilbosh, Byelf2007, Gobonobo, Owlbuster, Gregbard, Thijs!bot, Vanished User 1203921309213982139821, JoDonHo,
Evud, Addbot, Erik9bot, Machine Elf 1735, Peter Damian (temporary), Dhanyavaada, Magmalex, HiW-Bot, AvicAWB and Anonymous:
5
Constructive dilemma Source: https://en.wikipedia.org/wiki/Constructive_dilemma?oldid=638343034 Contributors: Michael Hardy,
Charles Matthews, Jiy, Rich Farmbrough, Nortexoid, Zenosparadox, Oleg Alexandrov, Awis, Arthur Rubin, SmackBot, Jim.belk, Greg-
bard, Nozzer42, ClueBot, Arunsingh16, Alejandrocaro35, Yobot, Govindjsk, Erik9bot, Gamewizard71, PrinceXantar, Dooooot and
Anonymous: 8
Contour set Source: https://en.wikipedia.org/wiki/Contour_set?oldid=786602004 Contributors: Giftlite, John Quiggin, Ms2ger, CBM,
WillowW, Arch dude, David Eppstein, SlamDiego, RockMFR, Addbot, Xp54321, Econotechie, AndersBot, Flewis, Kornsystem69,
WaysToEscape, Neil P. Quinn, Marcocapelle, MahdiBot, Yamaha5, Bender the Bot and Anonymous: 1
Contradiction Source: https://en.wikipedia.org/wiki/Contradiction?oldid=799097851 Contributors: Ryguasu, Michael Hardy, Dominus,
Kku, CesarB, Jpatokal, Ootachi, Netsnipe, Dysprosia, Jitse Niesen, OkPerson, Markhurd, Pedant17, Hyacinth, Andyfugard, HarryHen-
ryGebel, Robbot, Ajd, Ancheta Wis, Giftlite, Jason Quinn, Siroxo, SWAdair, Utcursch, Andycjp, Jdevine, Piotrus, Jokestress, Poccil, Dis-
cospinster, Rich Farmbrough, Hydrox, Lulu of the Lotus-Eaters, Lycurgus, Chalst, Smalljim, Jumbuck, BanyanTree, RainbowOfLight,
Oleg Alexandrov, Velho, Daira Hopwood, Ekem, BD2412, Kbdank71, Porcher, Mayumashu, Rillian, NeonMerlin, Sango123, AndriuZ,
Vanished user psdwnef3niurunuh234ruhfwdb7, Spencerk, Algebraist, Roboto de Ajvol, YurikBot, Wavelength, Thane, Chooserr, Jp-
bowen, Aldux, GHcool, M3taphysical, Carabinieri, Katieh5584, SmackBot, Eskimbot, Bluebot, GeneralManager, Can't sleep, clown will
eat me, Talmage, Xyzzy n, Panchitaville, Springnuts, Acebrock, Wvbailey, JackLumber, 16@r, A. Parrot, Mets501, Siebrand, Xinyu, Mar-
tious, Courcelles, George100, Rnickel, Zarex, CBM, Lazulilasher, Gregbard, Cydebot, Peterdjones, GassyGuy, Litnin200, Andyjsmith,
Who123, Marek69, EdJohnston, AntiVandalBot, Organous, JAnDbot, Husond, Hut 8.5, RebelRobot, Connormah, Bongwarrior, VoABot
II, Epsilon0, CCS81, CommonsDelinker, J.delanoy, Pharaoh of the Wizards, Maurice Carbonaro, Sumek, Katalaveno, Mrg3105, Poli-
cron, KylieTastic, Squids and Chips, Perostoj, Gary1234, VolkovBot, Nburden, Zekechills, A4bot, Victor2399, Broadbot, Graymornings,
Ryguy1994, Barkeep, SieBot, Yintan, Keilana, Anchor Link Bot, Francvs, ClueBot, NickCT, Snigbrook, Mild Bill Hiccup, PixelBot, The
Founders Intent, Nvvchar, SchreiberBike, Muro Bot, Johnuniq, Feministo, Addbot, Tcncv, Atethnekos, CarsracBot, Glane23, Tassede-
the, 84user, Tide rolls, , Legobot, Yobot, Trinitrix, AnomieBOT, Killiondude, Jim1138, Kingpin13, Iamtheman217, RandomAct,
Materialscientist, JohnnyB256, ArthurBot, Holen4, LilHelpa, Xqbot, Jerey Mall, Anna Frodesiak, Srich32977, GrouchoBot, Oursi-
pan, Shadowjams, FrescoBot, Robo37, Pinethicket, RedBot, Barras, Gamewizard71, TobeBot, Wikielwikingo, TjBot, WikitanvirBot,
Solomonfromnland, PBS-AWB, Thomasdav, Orange Suede Sofa, Fillupp, Tziemer991, ClueBot NG, Wcherowi, Joefromrandb, Helpful
Pixie Bot, Blue Mist 1, CitationCleanerBot, Lysheh, Abootmoose, Vs3894, Struwwelpeter, Nathanielrst, Hair, Alexander1257, Me, My-
self, and I are Here, Jodosma, Ugog Nizdast, Quenhitran, Lukeskywanker76, Jaymczone, Crossswords, Woodbird966, WordSeventeen,
Loraof, Ashika Bieber, Nkkenbuer, ScoobyDooWop, Bender the Bot, Crashed greek, Khinterlong, Magic links bot and Anonymous:
168
Contraposition Source: https://en.wikipedia.org/wiki/Contraposition?oldid=800894767 Contributors: Edward, Ixfd64, Stevenj, Stismail,
Amerindianarts, Melaen, Dasunst3r, BD2412, ScottJ, FlaBot, Kwammi, Chobot, Roboto de Ajvol, Bamgooly, Widdma, Red Slash,
Jaymax, Big Brother 1984, Doncram, Googl, Carabinieri, HereToHelp, Otto ter Haar, SmackBot, Javalenok, HLwiKi, NickPenguin,
Byelf2007, John, DouglasCalvert, Iridescent, Harold f, Dycedarg, CBM, WeggeBot, Andkore, Simeon, Gregbard, Blindman shady,
Gimmetrow, Thijs!bot, Epbr123, Raeven0, Jheiv, Prgrmr@wrk, Okloster, McSly, Shinju, Jimmaths, Deleet, Synthebot, AlleborgoBot,
SieBot, Wilson44691, Fratrep, Jonlandrum, Pointylittlethingy, XDanielx, Josang, The Thing That Should Not Be, Metaprimer, Mild Bill
Hiccup, Alejandrocaro35, Aquillyne, SchreiberBike, DumZiBoT, Addbot, Tide rolls, AtiwH, Luckas-bot, Ptbotgourou, Burning.amer,
AnomieBOT, Makeswell, Dendropithecus, Elliottwolf, Omnipaedista, Erik9bot, FrescoBot, Dashed, Miracle Pen, Katovatzschyn, Ksing-
hal, Reijoslav, Klbrain, JSquish, ClueBot NG, Wcherowi, Frietjes, Widr, Henneyj, Blue Mist 1, Jaysn1, Narky Blert, Inverted-Pun and
Anonymous: 78
Contraposition (traditional logic) Source: https://en.wikipedia.org/wiki/Contraposition_(traditional_logic)?oldid=800894792 Contrib-
utors: Michael Hardy, Kku, Charles Matthews, BenFrantzDale, Robertbowerman, Amerindianarts, PsiXi, Oleg Alexandrov, OwenX,
Woohookitty, BD2412, Red Slash, Doncram, SGNDave, Mhss, Cybercobra, JimStyle61093475, Gregbard, Julian Mendez, Barticus88,
Singularity, STBot, N4nojohn, Aspects, Mild Bill Hiccup, CallMeLee, AnomieBOT, LilHelpa, Dendropithecus, Klbrain, Frietjes, Hansen
Sebastian and Anonymous: 13
Converse implication Source: https://en.wikipedia.org/wiki/Converse_implication?oldid=795582224 Contributors: Urhixidur, PWilkin-
son, GregorB, BD2412, Kbdank71, DouglasCalvert, CBM, Gregbard, Cydebot, David Eppstein, Francvs, Watchduck, Hans Adler, Ad-
dbot, Meisam, Yobot, Dante Cardoso Pinto de Almeida, Gorthian and Anonymous: 6
Converse nonimplication Source: https://en.wikipedia.org/wiki/Converse_nonimplication?oldid=799771457 Contributors: Michael Hardy,
Rich Farmbrough, EmilJ, GregorB, Kbdank71, Vegaswikian, Stormbay, Cedar101, SmackBot, CBM, Gregbard, Cydebot, David Epp-
stein, Francvs, Watchduck, Yobot, AnomieBOT, Materialscientist, Tikurion, LucienBOT, Metadjinn~enwiki, ClueBot NG, MerlIwBot,
Solomon7968, Roshan220195, SupperNope, Minawa Yukino, KardoPaska, Timothyjosephwood and Anonymous: 5
Correlation immunity Source: https://en.wikipedia.org/wiki/Correlation_immunity?oldid=783587365 Contributors: Michael Hardy,
Apokrif, Ner102, Rjwilmsi, Intgr, Pascal.Tesson, Thijs!bot, Magioladitis, Addbot, DOI bot, Yobot, Monkbot, Cryptowarrior, Magic
links bot and Anonymous: 4
Counting quantification Source: https://en.wikipedia.org/wiki/Counting_quantification?oldid=622115029 Contributors: Silversh, Oleg
Alexandrov, BD2412, Mets501, George100, CBM and Anonymous: 3
Cut rule Source: https://en.wikipedia.org/wiki/Cut_rule?oldid=791881930 Contributors: Michael Hardy, Chalst, Henning Makholm,
David Eppstein, Taemyr, Dawynn, Yobot, AnomieBOT, TomT0m and Robevans123
Davis–Putnam algorithm Source: https://en.wikipedia.org/wiki/Davis%E2%80%93Putnam_algorithm?oldid=776063534 Contributors:
Michael Hardy, Silversh, Stephan Schulz, Gdm, Ary29, McCart42, Andreas Kaufmann, C S, Fawcett5, Iannigb, Rgrig, Linas, Tizio,
Mathbot, Algebraist, Jpbowen, SmackBot, Acipsen, Freak42, Jon Awbrey, CRGreathouse, CBM, Pgr94, Myasuda, Simeon, Gregbard,
Cydebot, Alaibot, Liquid-aim-bot, Salgueiro~enwiki, Magioladitis, David Eppstein, R'n'B, Botx, Fuenfundachtzig, AlleborgoBot, Hans
Adler, Addbot, DOI bot, Luckas-bot, Yobot, DemocraticLuntz, Omnipaedista, Citation bot 1, Trappist the monk, RjwilmsiBot, Mo ainm,
Tijfo098, Helpful Pixie Bot, Jochen Burghardt, MostlyListening, Ugog Nizdast, Omg panda bear, Monkbot, Starswager18, Dorybadboy
and Anonymous: 9
De Morgan's laws Source: https://en.wikipedia.org/wiki/De_Morgan%27s_laws?oldid=798093722 Contributors: The Anome, Tarquin,
Jeronimo, Mudlock, Michael Hardy, TakuyaMurata, Ihcoyc, Ijon, AugPi, DesertSteve, Charles Matthews, Dcoetzee, Dino, Choster, Dys-
prosia, Xiaodai~enwiki, Hyacinth, David Shay, SirPeebles, Fredrik, Dor, Hadal, Giftlite, Starblue, DanielZM, Guppynsoup, Smimram,
Bender235, ESkog, Chalst, Art LaPella, EmilJ, Scrutcheld, Linj, Alphax, Boredzo, Larryv, Jumbuck, Smylers, Oleg Alexandrov, Linas,
Mindmatrix, Bkkbrad, Btyner, Graham87, Sj, Miserlou, The wub, Marozols, Mathbot, Subtractive, DVdm, YurikBot, Wavelength,
RobotE, Hairy Dude, Michael Slone, BMAH07, Cori.schlegel, Saric, Cdiggins, Lt-wiki-bot, Rodrigoq~enwiki, SmackBot, RDBury,
Gilliam, MooMan1, Mhss, JRSP, DHN-bot~enwiki, Ebertek, DHeyward, Coolv, Cybercobra, Jon Awbrey, Vina-iwbot~enwiki, Petrejo,
Gobonobo, Darktemplar, 16@r, Loadmaster, Drae, MTSbot~enwiki, Adambiswanger1, Nutster, JForget, Gregbard, Kanags, Thijs!bot,
Epbr123, Jojan, Helgus, Futurebird, AntiVandalBot, Widefox, Hannes Eder, MikeLynch, JAnDbot, Jqavins, Nitku, Stdazi, Gwern,
General Jazza, Snood1205, R'n'B, Bongomatic, Fruits Monster, Javawizard, Kratos 84, Policron, TWiStErRob, VolkovBot, TXiKiBoT,
Oshwah, Drake Redcrest, Ttennebkram, Epgui, Smoseson, SieBot, Squelle, Fratrep, Melcombe, Into The Fray, Mx. Granger, Clue-
Bot, B1atv, Mild Bill Hiccup, Cholmeister, PixelBot, Alejandrocaro35, Hans Adler, Cldoyle, Rror, Alexius08, Addbot, Mitch feaster,
Tide rolls, Luckas-bot, Yobot, Linket, KamikazeBot, Eric-Wester, AnomieBOT, Joule36e5, Materialscientist, DannyAsher, Obersach-
sebot, Xqbot, Capricorn42, Boongie, Action ben, JascalX, Omnipaedista, Jsorr, Mfwitten, Rapsar, Stpasha, RBarryYoung, DixonDBot,
Teknad, EmausBot, WikitanvirBot, Mbonet, Vernonmorris1, Donner60, Chewings72, Davikrehalt, Llightex, ClueBot NG, Wcherowi,
BarrelProof, Benjgil, Widr, Helpful Pixie Bot, David815, Sylvier11, Waleed.598, ChromaNebula, Jochen Burghardt, Epicgenius, Blue-
mathman, G S Palmer, 7Sidz, Idonei, Scotus12, Loraof, LatinAddict, Danlarteygh, Luis150902, Robert S. Barlow, Gomika, Bender the
Bot, Wanliusa, Deacon Vorbis, Magic links bot and Anonymous: 169
Deductive closure Source: https://en.wikipedia.org/wiki/Deductive_closure?oldid=547342594 Contributors: Bumm13, Guppynsoup,
John Quiggin, Koavf, RussBot, KSchutte, Ninly, RDBrown, NeilFraser, Pjrm, CBM, Gregbard, Cydebot, Thijs!bot, Andrewaskew, Hans
Adler, Addbot, Tassedethe and Anonymous: 3
Demonic composition Source: https://en.wikipedia.org/wiki/Demonic_composition?oldid=633771432 Contributors: Michael Hardy,
David Eppstein, LokiClock, Classicalecon, AnomieBOT and Anonymous: 2
Denying the antecedent Source: https://en.wikipedia.org/wiki/Denying_the_antecedent?oldid=791952016 Contributors: Bryan Derk-
sen, Mrwojo, Zocky, Voidvector, HarmonicSphere, Iulianu, William M. Connolley, WhisperToMe, Rursus, Mattaschen, Taak, Elembis,
TiMike, Silence, Kaveh, Sasquatch, Jeltz, Bookandcoee, Angr, Waldir, KYPark, Gaga~enwiki, Supermor, YurikBot, Pseudomonas,
Jpeob, Shawnc, Ybbor, NickelShoe, SmackBot, Ck4829, Bluebot, Factorial, Furby100, Richard001, Andeggs, DavidHOzAu, Gihanuk,
Gregbard, Steel, 271828182, Isilanes, DavidSTaylor, Obscurans, Mark.camp, Hiddenhearts, Lyctc, Gen. Quon, Jamelan, Gerakibot,
DancingPhilosopher, CharlesGillingham, Jfromcanada, Drmies, Addbot, Xaquseg, Luckas-bot, Gongshow, AnomieBOT, Gemtpm, Free-
KnowledgeCreator, Allformweek, ZroBot, Akerans, ClueBot NG, Rkohar, ChrisGualtieri, BrianPansky, SilverSylvester, Apollo The
Logician and Anonymous: 42
Derivative algebra (abstract algebra) Source: https://en.wikipedia.org/wiki/Derivative_algebra_(abstract_algebra)?oldid=628688481
Contributors: Giftlite, EmilJ, Oleg Alexandrov, Trovatore, Mhss, Bluebot, Mets501, CBM, Davyzhu, Addbot, Unara, Brad7777 and
Anonymous: 4
Destructive dilemma Source: https://en.wikipedia.org/wiki/Destructive_dilemma?oldid=788041735 Contributors: Michael Hardy, Zen-
su, Jiy, Rich Farmbrough, Zenosparadox, Oleg Alexandrov, Lsu, MagneticFlux, SmackBot, Cybercobra, Tesseran, Jim.belk, Floridi~enwiki,
Gregbard, Adavidb, Niceguyedc, Alejandrocaro35, Erik9bot, 478jjjz, Helpful Pixie Bot, Greenm22, Dooooot, Magic links bot and
Anonymous: 11
Difunctional Source: https://en.wikipedia.org/wiki/Binary_relation?oldid=796087488 Contributors: AxelBoldt, Bryan Derksen, Zun-
dark, Tarquin, Jan Hidders, Roadrunner, Mjb, Tomo, Patrick, Xavic69, Michael Hardy, Wshun, Isomorphic, Dominus, Ixfd64, Takuya-
Murata, Charles Matthews, Timwi, Dcoetzee, Jitse Niesen, Robbot, Chocolateboy, MathMartin, Tea2min, Giftlite, Fropu, Dratman,
Jorge Stol, Jlr~enwiki, Andycjp, Quarl, Guanabot, Yuval madar, Slipstream, Paul August, Elwikipedista~enwiki, Shanes, EmilJ, Ran-
dall Holmes, Ardric47, Obradovic Goran, Eje211, Alansohn, Dallashan~enwiki, Keenan Pepper, PAR, Adrian.benko, Oleg Alexan-
drov, Joriki, Linas, Apokrif, MFH, Dpv, Pigcatian, Penumbra2000, Fresheneesz, Chobot, YurikBot, Hairy Dude, Koeyahoo, Trova-
tore, Bota47, Arthur Rubin, Netrapt, SmackBot, Royalguard11, SEIBasaurus, Cybercobra, Jon Awbrey, Turms, Lambiam, Dbtfz, Mr
Stephen, Mets501, Dreftymac, Happy-melon, Petr Matas, CRGreathouse, CBM, Yrodro, WillowW, Xantharius, Thijs!bot, Egrin,
Rlupsa, Marek69, Fayenatic london, JAnDbot, MER-C, TAnthony, Magioladitis, Vanish2, Avicennasis, David Eppstein, Robin S, Akurn,
Adavidb, LajujKej, Owlgorithm, Djjrjr, Policron, DavidCBryant, Quux0r, VolkovBot, Boute, Vipinhari, Anonymous Dissident, PaulTa-
nenbaum, Jackfork, Wykypydya, Dmcq, AlleborgoBot, AHMartin, Ocsenave, Sftd, Paradoctor, Henry Delforn (old), MiNombreDeGuerra,
DuaneLAnderson, Anchor Link Bot, CBM2, Classicalecon, ClueBot, Snigbrook, Rhubbarb, Hans Adler, SounderBruce, SilvonenBot,
BYS2, Plmday, Addbot, LinkFA-Bot, Tide rolls, Jarble, Legobot, Luckas-bot, Yobot, Ht686rg90, Lesliepogi, Pcap, Labus, Nallim-
bot, Reindra, FredrikMeyer, AnomieBOT, Floquenbeam, Royote, Hahahaha4, Materialscientist, Belkovich, Citation bot, Racconish,
Jellystones, Xqbot, Isheden, Geero, GhalyBot, Ernsts, Howard McCay, Constructive editor, Mark Renier, Mfwitten, RandomDSdevel,
NearSetAccount, SpaceFlight89, Yunshui, Miracle Pen, Brambleclawx, RjwilmsiBot, Nomen4Omen, Chharvey, SporkBot, OnePt618,
Sameer143, Socialservice, ResearchRave, ClueBot NG, Wcherowi, Frietjes, Helpful Pixie Bot, Koertefa, BG19bot, ChrisGualtieri,
YFdyh-bot, Dexbot, Makecat-bot, ScitDei, Lerutit, Jochen Burghardt, Jodosma, Karim132, Cosmia Nebula, Monkbot, Pratincola, ,
Neycalazans, Some1Redirects4You, The Quixotic Potato, Luis150902, Magic links bot and Anonymous: 114
Disjunction elimination Source: https://en.wikipedia.org/wiki/Disjunction_elimination?oldid=754678363 Contributors: Michael Hardy,
Justin Johnson, Cimon Avaro, Evercat, Arcadian, Linas, GregorB, Graham87, Jameshsher, Arthur Rubin, SmackBot, Kurykh, Cyber-
cobra, Jim.belk, Mets501, CBM, Gregbard, Julian Mendez, Vzaliva, Hotfeba, Paradoctor, Alejandrocaro35, Erik9bot, Dooooot, Inter-
netArchiveBot and Anonymous: 4
Disjunction introduction Source: https://en.wikipedia.org/wiki/Disjunction_introduction?oldid=769592942 Contributors: Amillar, Justin
Johnson, Evercat, Sam Hocevar, Esperant, Jiy, Rctay, Graham87, SmackBot, Jim.belk, Gregbard, Alejandrocaro35, Download, Legobot,
Luckas-bot, Calle, Erik9bot, TomT0m, Pbruins84, Voomoo, GSS-1987 and Anonymous: 12
Disjunctive normal form Source: https://en.wikipedia.org/wiki/Disjunctive_normal_form?oldid=781236232 Contributors: Bryan Derk-
sen, BenBaker, Toby Bartels, B4hand, Sarrazip, Altenmann, Tea2min, DavidCary, Bnn, Brona, CyborgTosser, Macrakis, Wzwz, Me-
mentoVivere, ZeroOne, EmilJ, Haham hanuka, Linas, A3r0, Graham87, BD2412, Tizio, Fresheneesz, Roboto de Ajvol, GrinBot~enwiki,
Ajm81, Mhss, Bluebot, Jon Awbrey, Ben Spinozoan, CBM, Simeon, Gregbard, Widefox, Dougher, Smerdis, Batenka~enwiki, Kundu,
Policron, Jamelan, Tvdm, Alejandrocaro35, Hans Adler, Addbot, Linket, AnomieBOT, Groovenstein, Doulos Christos, Gryllida, Igor
Yalovecky, Diego Grez Bot, Jiri 1984, Intervallic, ZiYouXunLu, Ref1fois, Jochen Burghardt, Nikhilponnuru and Anonymous: 32
Disjunctive syllogism Source: https://en.wikipedia.org/wiki/Disjunctive_syllogism?oldid=784193183 Contributors: AugPi, Cimon Avaro,
Evercat, Charles Matthews, Dysprosia, Taak, Jiy, Rich Farmbrough, Guanabot, ESkog, Jumbuck, Bookandcoee, Oleg Alexandrov,
FlaBot, Kwhittingham, Mathbot, Jameshsher, YurikBot, KSchutte, Lomn, Shadro, Arthur Rubin, Pentasyllabic, SmackBot, Wlmg,
Mhss, Bluebot, Kingdon, Jim.belk, Sophomoric, Tawkerbot2, Gregbard, Nmajdan, Thijs!bot, Mcguire, Anarchia, It Is Me Here, Jame-
lan, Alejandrocaro35, UnCatBot, Flash94, Addbot, SpillingBot, Yobot, AnomieBOT, John of Reading, Donner60, Dooooot, Me, Myself,
and I are Here and Anonymous: 31
Distributive property Source: https://en.wikipedia.org/wiki/Distributive_property?oldid=798567342 Contributors: AxelBoldt, Tarquin,
Youssefsan, Toby Bartels, Patrick, Xavic69, Michael Hardy, Andres, Ideyal, Dysprosia, Malcohol, Andrewman327, Shizhao, PuzzletChung,
Romanm, Chris Roy, Wikibot, Tea2min, Giftlite, Markus Krtzsch, Dissident, Nodmonkey, Mike Rosoft, Smimram, Discospinster, Paul
August, ESkog, Rgdboer, EmilJ, Bobo192, Robotje, Smalljim, Jumbuck, Arthena, Keenan Pepper, Mykej, Bsadowski1, Blaxthos, Linas,
Evershade, Isnow, Marudubshinki, Salix alba, Vegaswikian, Nneonneo, Bgura, FlaBot, Alexb@cut-the-knot.com, Mathbot, Andy85719,
Ichudov, DVdm, YurikBot, Michael Slone, Grafen, Trovatore, Bota47, Banus, Melchoir, Yamaguchi , Gilliam, Bluebot, Ladislav the
Posthumous, Octahedron80, UNV, Jiddisch~enwiki, Khazar, FrozenMan, Bando26, 16@r, Dicklyon, EdC~enwiki, Engelec, Exzakin,
Jokes Free4Me, Simeon, Gregbard, Thijs!bot, Barticus88, Marek69, Nezzadar, Escarbot, Mhaitham.shammaa, Salgueiro~enwiki, JAnD-
bot, Onkel Tuca~enwiki, Acroterion, Drewmutt, Numbo3, Katalaveno, AntiSpamBot, GaborLajos, Lyctc, Idioma-bot, Janice Margaret
Vian, Montchav, TXiKiBoT, Oshwah, Anonymous Dissident, Dictouray, Oxfordwang, Martin451, Skylarkmichelle, Jackfork, Envi-
roboy, Dmcq, AlleborgoBot, Gerakibot, Bentogoa, Flyer22 Reborn, Radon210, Hello71, Denisarona, ClueBot, The Thing That Should
Not Be, Cli, Mild Bill Hiccup, Niceguyedc, Goldkingtut5, Excirial, Jusdafax, NuclearWarfare, NERIC-Security, Pichpich, Mm40, Ad-
dbot, Jojhutton, Ronhjones, Zarcadia, Favonian, Squandermania, Jarble, Ben Ben, Legobot, Luckas-bot, AnomieBOT, Materialscientist,
NFD9001, Greatfermat, False vacuum, RibotBOT, Intelligentsium, Pinethicket, I dream of horses, MastiBot, Andrea105, Slon02, Saul34,
J36miles, John of Reading, Davejohnsan, Orphan Wiki, Super48paul, Sp33dyphil, Slawekb, Quondum, BrokenAnchorBot, TyA, Don-
ner60, Chewings72, DASHBotAV, AlecJansen, ClueBot NG, Wcherowi, IfYouDoIfYouDon't, Dreth, O.Koslowski, Asukite, Widr, Vib-
hijain, Helpful Pixie Bot, Pmi1924, BG19bot, TCN7JM, Saulpila2000, Dan653, Forkloop, CallofDutyboy9, EuroCarGT, Sandeep.ps4,
Christian314, Ivashikhmin, IsraphelMac, Mogism, Darcourse, Makecat-bot, Stephan Kulla, Lugia2453, Gphilip, Brirush, Wywin, BB-
GUN101, ElHef, DavidLeighEllis, Shaun9876, Pkramer2021, Kitkat1234567880, Kcolemantwin3, Gracecandy1143, Justin15w, David88063,
Abruce123412, Amortias, Solid Frog, Loraof, Jj 1213 wiki, Dalangster, Iwamwickham, 123456me123456, RedPanda25, Some1Redirects4You,
ProprioMe OW, CLCStudent, Friendlyyoshi, Pockybits, Bender the Bot, Deacon Vorbis, RileyBugz, Magic links bot, Ya boy biggie smalle
and Anonymous: 276
DiVincenzo's criteria Source: https://en.wikipedia.org/wiki/DiVincenzo%27s_criteria?oldid=797507338 Contributors: Rjwilmsi, Magi-
oladitis, BG19bot, GrammarFascist, BattyBot, Mhhossein, GeoreyT2000, Reetssydney, QI Explorations 2016, KolbertBot and Anony-
mous: 2
Domain of discourse Source: https://en.wikipedia.org/wiki/Domain_of_discourse?oldid=799063431 Contributors: AxelBoldt, Takuya-
Murata, Minesweeper, William M. Connolley, Silversh, Leonard G., Karol Langner, Spayrard, Mdd, Oleg Alexandrov, Daranz, Velho,
Linas, BD2412, RussBot, SmackBot, Mhss, Byelf2007, Lambiam, Mets501, Big Smooth, CBM, Gregbard, Escarbot, David Eppstein,
Maurice Carbonaro, Tparameter, Nwbeeson, Dan Polansky, Cheaposgrungy, Ctxppc, Kai-Hendrik, ZuluPapa5, DumZiBoT, Addbot,
MrOllie, SpBot, BOOLE1847, Luckas-bot, Andy.melnikov, Aditya, Materialscientist, DSisyphBot, Omnipaedista, Erik9bot, Pangur Ban
My Cat, Hpvpp, Azurengar, Blue Mist 1, ChrisGualtieri, Kephir, Brirush, Rupert loup, JHU1959, Mark Jhomel, Dsing43 and Anonymous:
14
Donkey sentence Source: https://en.wikipedia.org/wiki/Donkey_sentence?oldid=799836667 Contributors: Michael Hardy, Jason Quinn,
Joriki, BD2412, Clean Copy, Alastair Haines, Geekdiva, Cnilep, Fuddle, Hkotek, AnomieBOT, Klbrain, JonRicheld, Kyoakoa, Narky
Blert, Kanjuzi, InternetArchiveBot, GreenC bot, Bender the Bot, KolbertBot and Anonymous: 5
Double negation Source: https://en.wikipedia.org/wiki/Double_negation?oldid=775777141 Contributors: Angela, Dysprosia, Wik, Ja-
son Quinn, Cybercobra, David ekstrand, Wvbailey, CBM, Myasuda, Gregbard, Headbomb, Alan U. Kennington, Fratrep, TFCforever,
Alejandrocaro35, Haruth, Helpful Pixie Bot, ChrisGualtieri, YiFeiBot and Anonymous: 7
Drinker paradox Source: https://en.wikipedia.org/wiki/Drinker_paradox?oldid=791210019 Contributors: Michael Shulman, Michael
Hardy, ESnyder2, Hyacinth, Phil Boswell, Bearcat, Kaustuv, Leonbloy, RJHall, BRW, Jclemens, Koavf, Hairy Dude, Anomalocaris, Mal-
colma, Tonywalton, AndrewWTaylor, Robertd, SmackBot, McGeddon, Miquonranger03, Acid Ammo, Chris3145, Andeggs, Byelf2007,
Anguis, JRSpriggs, John Moore 309, CRGreathouse, Jaxad0127, Gregbard, Eubulide, Zickzack, Qwerty Binary, Anna Lincoln, Dmcq,
TJRC, Paradoctor, Yoda of Borg, ClueBot, Yobot, Citation bot, Sawomir Biay, Tkuvho, Danchristensen, RjwilmsiBot, John of Read-
ing, Dcirovic, Staszek Lem, Tijfo098, N1ghtshade3, ClueBot NG, Jamesrmeyer, Groupuscule, Bonniemylove, Andreschulz, 786b6364,
Kernsters, Jochen Burghardt, Tinkinswood, 9mjb, There is a T101 in your kitchen, Clark.meng.90, Bender the Bot and Anonymous: 35
Empty domain Source: https://en.wikipedia.org/wiki/Empty_domain?oldid=610499708 Contributors: Nortexoid, Oleg Alexandrov,
DTM, CBM, Gregbard, MiNombreDeGuerra, Watchduck, Yobot, Omnipaedista, Erik9bot, Schwede66, ChrisGualtieri and Anonymous:
2
Evasive Boolean function Source: https://en.wikipedia.org/wiki/Evasive_Boolean_function?oldid=782036740 Contributors: Michael
Hardy, David Eppstein, Mild Bill Hiccup, Watchduck, Certes, Addbot, , Yobot, AnomieBOT, MuedThud, Xnn, Sivan.rosenfeld
and Dewritech
Exceptional isomorphism Source: https://en.wikipedia.org/wiki/Exceptional_isomorphism?oldid=793941144 Contributors: Michael
Hardy, Charles Matthews, Tea2min, Rjwilmsi, Koavf, GnniX, Wavelength, SmackBot, Nbarth, Tamfang, Headbomb, David Eppstein,
Maxzimet, Citation bot, Twri, Jamontaldi, Teddyktchan and Anonymous: 5
Exclusive or Source: https://en.wikipedia.org/wiki/Exclusive_or?oldid=786945060 Contributors: AxelBoldt, Toby Bartels, Heron, Dwheeler,
Stevertigo, Hfastedge, Michael Hardy, Erik Zachte, DopeshJustin, Nixdorf, Lousyd, Graue, TakuyaMurata, Karada, Looxix~enwiki, IM-
SoP, Charles Matthews, Dcoetzee, Malcohol, The Anomebot, Furrykef, Nthomas, Fibonacci, Indefatigable, AnonMoos, Olathe, Robbot,
Fredrik, Sbisolo, L3prador, Merovingian, Henrygb, Tea2min, Centrx, Giftlite, Nickdc, Fo0bar, Taak, Ram434, Quackor, Dnas, Bob.v.R,
Bigpeteb, Taka, Wjl, Gscshoyru, Creidieki, Imroy, Narsil, Samboy, SocratesJedi, Paul August, Plugwash, Kwamikagami, Spoon!, R. S.
Shaw, Blotwell, Larryv, Helix84, Sam Korn, Knucmo2, Jumbuck, Beamishboy, Jhertel, Water Bottle, Feb30th1712, Bookandcoee, Ling
Kah Jai, Kay Dekker, Oleg Alexandrov, Mindmatrix, Bholleman, LOL, Barrylb, Tokek, GraemeLeggett, Gerbrant, BD2412, Kbdank71,
DePiep, Reisio, Snaekid, Golem Unity, Amire80, Stephonovich, Miserlou, Jobarts, FlaBot, VKokielov, Mathbot, Gurch, Quuxplusone,
Fresheneesz, Mahlon, Sderose, Roboto de Ajvol, Wavelength, TSO1D, Daverocks, IanManka, Rsrikanth05, Philopedia, Hwasungmars,
Dijxtra, Trovatore, CecilWard, Mditto, Delos~enwiki, Mike92591, Square87~enwiki, PurplePlatypus, RG2, DrJolo, SmackBot, Fire-
man bi, Melchoir, FlashSheridan, Ohnoitsjamie, Winterheart, Bluebot, Da nuke, Thumperward, ToobMug, Jjbeard~enwiki, Jsmethers,
Richlumaui, Munibert, UU, Cybercobra, Samineru, Pulu, Derek R Bullamore, Jon Awbrey, BlackTerror, Manderr, Sugarskane, Good-
nightmush, Bjankuloski06en~enwiki, 16@r, Loadmaster, Hvn0413, Marklsimon, Iridescent, Aeons, Origin415, Eassin, Htmlland, Jafet,
CRGreathouse, Wafulz, CBM, Ibadibam, Jokes Free4Me, Jesse Viviano, Shandris, TheTito, Gregbard, Cydebot, Tonyv414, Horaceli,
Wdspann, Yellowdesk, Hamaryns, JAnDbot, Deective, Marmor10, David Eppstein, DerHexer, Patstuart, Wkussmaul, Gwern, Santiago
Saint James, Andre.holzner, Elemeno, Themania, R'n'B, CommonsDelinker, LedgendGamer, Supuhstar, Nwbeeson, SJP, Dennis Malan-
dro, Rosenkreuz, Priceman86, WarddrBOT, Jfrascencio, TedColes, Inductiveload, SieBot, ToePeu.bot, This, that and the other, Aeoza,
Flyer22 Reborn, ~enwiki, Francvs, Separa, ClueBot, Justin W Smith, Boing! said Zebedee, Richard B. Frost, Watchduck, Q
oooooooo, Hans Adler, Apparition11, C. A. Russell, Addbot, Melab-1, Btx40, Download, CarsracBot, FiriBot, Zorrobot, Jarble, Meisam,
Luckas-bot, Yobot, TaBOT-zerem, Gongshow, AnomieBOT, AmritasyaPutra, SnorlaxMonster, Xqbot, Ziyaeen, KommX, GrouchoBot,
Samppi111, Pandamonia, Silverfox11202, Xenfreak, Iwwwwwwi, Igor Yalovecky, Todd434, Najeeb1010, Mjaked, ZroBot, Acwilson9,
Tijfo098, RockMagnetist, ClueBot NG, Roxanne Feline, Wcherowi, PetScarecrow, VladikVP, N-double-u, H2o8polo, BG19bot, Pac-
erier, Xor logician, Klilidiplomus, Merglee, IsraphelMac, Darcourse, SiBr4, Cerabot~enwiki, Dingo591, Muelleum, Kennyr87, Loraof,
Esquivalience, Hamiddavoodishandiz, TheMagikBOT, Normand Martel, Dufaer and Anonymous: 209
Existential generalization Source: https://en.wikipedia.org/wiki/Existential_generalization?oldid=799499504 Contributors: Hyacinth,
Loadmaster, Yarou, Gregbard, Alejandrocaro35, RJGray, Wcherowi, BG19bot, Fan Singh Long and Jochen Burghardt
Existential instantiation Source: https://en.wikipedia.org/wiki/Existential_instantiation?oldid=780259318 Contributors: InverseHyper-
cube, CBM, Gregbard, Alejandrocaro35, Quondum, ClueBot NG, Theinactivist and Anonymous: 4
Existential quantification Source: https://en.wikipedia.org/wiki/Existential_quantification?oldid=784488277 Contributors: The Cunc-
tator, Tarquin, Andre Engels, Toby Bartels, Caltrop, Michael Hardy, Chinju, TakuyaMurata, Poor Yorick, Andres, Charles Matthews,
Dcoetzee, Dysprosia, Stephan Schulz, Ashley Y, Giftlite, Ketil, Urhixidur, Kine, Nortexoid, Ish ishwar, Oleg Alexandrov, Joriki, Marudub-
shinki, BD2412, Waninoco, DePiep, Pdelong, Salix alba, Jameshsher, RobotE, Hede2000, Jengelh, Dpakoha, Twin Bird, Tomisti, Arthur
Rubin, Jsnx, SmackBot, Mhss, Clconway, Cybercobra, Tim Q. Wells, Mets501, CBM, Neelix, Gregbard, Cydebot, Julian Mendez,
Robertinventor, Thijs!bot, Skomorokh, Quentar~enwiki, Bradgib, Richiar, Jimmaths, Notecardforfree, Rei-bot, Anonymous Dissident,
Yoda1522, TheSmuel, Naleh, PixelBot, Brews ohare, MystBot, Addbot, Jarble, Luckas-bot, Yobot, Sardoodledom, E235, Citation bot,
RedBot, Miracle Pen, Dcirovic, Tentontunic, ClueBot NG, Helpful Pixie Bot, Faus, Titodutta, Solomon7968, Kc0gnu, Jochen Burghardt,
Me, Myself, and I are Here, Jonash36, GigaMario5, Dvfdvdfvbd, Magic links bot, MagneticInk and Anonymous: 44
Exportation (logic) Source: https://en.wikipedia.org/wiki/Exportation_(logic)?oldid=795995402 Contributors: Michael Hardy, Jitse
Niesen, Arthur Rubin, ShakespeareFan00, HenningThielemann, Gregbard, R'n'B, Niceguyedc, Alejandrocaro35, Yobot, SwisterTwister,
RjwilmsiBot, John of Reading, Dooooot, Charles.w.chambers, Errosica and Anonymous: 1
Extension (predicate logic) Source: https://en.wikipedia.org/wiki/Extension_(predicate_logic)?oldid=788453222 Contributors: Paul
August, Salix alba, SmackBot, Nbarth, AndrewWarden, Lambiam, CBM, Gregbard, Ktr101, RichardBergmair, Omnipaedista, Erik9bot,
SD5bot and AK456
False (logic) Source: https://en.wikipedia.org/wiki/False_(logic)?oldid=799942215 Contributors: Toby Bartels, Tea2min, PWilkinson,
Incnis Mrsi, IronGargoyle, Hvn0413, P199, ShelfSkewed, Gregbard, Oshwah, Paradoctor, Bentogoa, Francvs, Ocer781, AnomieBOT,
Mcoupal, ClueBot NG, Masssly, Helpful Pixie Bot, Lugia2453, Eyesnore, Milos996, AdrianLego, Barry886, Redboy376, Profkane,
Bender the Bot, Connigma, Magic links bot and Anonymous: 4
Fiber (mathematics) Source: https://en.wikipedia.org/wiki/Fiber_(mathematics)?oldid=798233635 Contributors: Michael Hardy, Chinju,
Charles Matthews, Bkonrad, Oleg Alexandrov, Christopher Thomas, MarSch, LkNsngth, Jon Awbrey, Krasnoludek, JRSpriggs, CBM,
Kilva, OrenBochman, Camrn86, LokiClock, Dmcq, JP.Martin-Flatin, Addbot, Ptbotgourou, Ciphers, Erik9bot, Artem M. Pelenitsyn,
Tom.Reding, ZroBot, Roman3, Beaumont877, Qetuth, SillyBunnies, Deacon Vorbis, Volunteer1234 and Anonymous: 9
Field of sets Source: https://en.wikipedia.org/wiki/Field_of_sets?oldid=798201434 Contributors: Charles Matthews, David Shay, Tea2min,
Giftlite, William Elliot, Rich Farmbrough, Paul August, Touriste, DaveGorman, Kuratowskis Ghost, Bart133, Oleg Alexandrov, Salix
alba, YurikBot, Trovatore, Mike Dillon, Arthur Rubin, That Guy, From That Show!, SmackBot, Mhss, Gala.martin, Stotr~enwiki,
Marek69, Mathematrucker, R'n'B, Lamro, BotMultichill, VVVBot, Hans Adler, Addbot, DaughterofSun, Jarble, AnomieBOT, Cita-
tion bot, Kiefer.Wolfowitz, Yahia.barie, EmausBot, Tijfo098, ClueBot NG, MerlIwBot, BattyBot, Deltahedron, Mohammad Abubakar
and Anonymous: 14
Finitary relation Source: https://en.wikipedia.org/wiki/Finitary_relation?oldid=782854645 Contributors: Damian Yerrick, AxelBoldt,
The Anome, Tarquin, Jan Hidders, Patrick, Michael Hardy, Wshun, Kku, Ellywa, Andres, Charles Matthews, Dcoetzee, Hyacinth,
Robbot, Romanm, MathMartin, Tea2min, Alan Liefting, Marc Venot, Giftlite, Almit39, Zfr, Starx, PhotoBox, Erc, ArnoldReinhold,
Paul August, Elwikipedista~enwiki, Randall Holmes, Obradovic Goran, Oleg Alexandrov, Woohookitty, Mangojuice, Michiel Helven-
steijn, Isnow, Qwertyus, Dpr, MarSch, Salix alba, Oblivious, Mathbot, Jrtayloriv, Chobot, YurikBot, Hairy Dude, Dmharvey, RussBot,
Muu-karhu, Zwobot, Bota47, Arthur Rubin, Reyk, Netrapt, Claygate, JoanneB, Pred, GrinBot~enwiki, SmackBot, Mmernex, Uny-
oyega, Nbarth, DHN-bot~enwiki, Tinctorius, Jon Awbrey, Henning Makholm, Lambiam, Dfass, Newone, Aeons, CRGreathouse, Greg-
bard, King Bee, Kilva, Escarbot, Salgueiro~enwiki, Nosbig, JAnDbot, .anacondabot, Tarif Ezaz, VoABot II, Jonny Cache, DerHexer,
R'n'B, Mike.lifeguard, And Dedicated To, Aervanath, VolkovBot, Rponamgi, The Tetrast, Mscman513, GirasoleDE, Newbyguesses,
SieBot, Phe-bot, Paolo.dL, Siorale, Skeptical scientist, Sheez Louise, Mild Bill Hiccup, DragonBot, Cenarium, Palnot, Cat Dancer WS,
Kal-El-Bot, Addbot, MrOllie, ChenzwBot, Ariel Black, SpBot, Yobot, Ptbotgourou, Bgttgb, QueenCake, Dinnertimeok, AnomieBOT,
Jim1138, JRB-Europe, Xqbot, Nishantjr, Anne Bauval, Howard McCay, Paine Ellsworth, Throw it in the Fire, RandomDSdevel, Miracle
Pen, Mean as custard, Straightontillmorning, ZroBot, Cackleberry Airman, Paulmiko, Tijfo098, Mister Stan, ClueBot NG, Deer*lake,
Frietjes, BG19bot, ChrisGualtieri, Fuebar, Brirush, Mark viking, Andrei Petre, 65HCA7, KasparBot, Some1Redirects4You, MA-
HONEY.ALTMAN, Amintajdar and Anonymous: 53
First-order logic Source: https://en.wikipedia.org/wiki/First-order_logic?oldid=797367379 Contributors: AxelBoldt, The Anome, Ben-
Baker, Dwheeler, Youandme, Stevertigo, Frecklefoot, Edward, Patrick, Michael Hardy, Kwertii, Kku, Ixfd64, Chinju, Zeno Gantner,
Minesweeper, Looxix~enwiki, TallJosh, Julesd, AugPi, Dpol, Jod, Nzrs0006, Charles Matthews, Timwi, Dcoetzee, Dysprosia, Greenrd,
Markhurd, Hyacinth, David.Monniaux, Robbot, Fredrik, Vanden, Wikibot, Jleedev, Tea2min, Filemon, Snobot, Giftlite, Xplat, Kim
Bruning, Lethe, Jorend, Guanaco, Siroxo, Gubbubu, Mmm~enwiki, Utcursch, Kusunose, Almit39, Karl-Henner, Creidieki, Urhixidur,
Lucidish, Mormegil, Rich Farmbrough, Guanabot, Paul August, Bender235, Elwikipedista~enwiki, Pmetzger, Spayrard, Chalst, Nile,
Rsmelt, EmilJ, Marner, Randall Holmes, Per Olofsson, Nortexoid, Spug, ToastieIL, AshtonBenson, Obradovic Goran, Mpeisenbr, Of-
ciallyover, Msh210, Axl, Harburg, Dhruvee, Caesura, BRW, Iannigb, Omphaloscope, Apolkhanov, Bookandcoee, Oleg Alexandrov,
Kendrick Hang, Hq3473, Joriki, Velho, Kelly Martin, Linas, Ahouseholder, Ruud Koot, BD2412, SixWingedSeraph, Grammarbot,
Rjwilmsi, Tizio, .digamma, MarSch, Mike Segal, Ekspiulo, R.e.b., Penumbra2000, Mathbot, Banazir, NavarroJ, Chobot, Bgwhite, Jayme,
Roboto de Ajvol, Wavelength, Borgx, Michael Slone, Marcus Cyron, Meloman, Trovatore, Expensivehat, Hakeem.gadi, JECompton,
Saric, Arthur Rubin, Cedar101, Netrapt, Nahaj, Katieh5584, RG2, Otto ter Haar, Jsnx, SmackBot, InverseHypercube, Brick Thrower,
Yamaguchi , Slaniel, NickGarvey, Mhss, Foxjwill, Onceler, Jon Awbrey, Turms, Henning Makholm, Tesseran, Byelf2007, Lambiam,
Cdills, Dbtfz, Richard L. Peterson, Cronholm144, Physis, Loadmaster, Mets501, Pezant, Phuzion, Mike Fikes, JulianMendez, Dan Gluck,
Iridescent, Hilverd, Zero sharp, JRSpriggs, 8754865, CRGreathouse, CBM, Mindfruit, Gregbard, Fl, Danman3459, Blaisorblade, Ju-
lian Mendez, Juansempere, Eubulide, Malleus Fatuorum, Mojo Hand, Headbomb, RobHar, Nick Number, Rriegs, Klausness, Eleuther,
Jirislaby, VictorAnyakin, Childoftv, Tigranes Damaskinos, JAnDbot, Ahmed saeed, Thenub314, RubyQ, Igodard, Martinkunev, Alas-
tair Haines, Jay Gatsby, LookingGlass, A3nm, David Eppstein, Pkrecker, TechnoFaye, Avakar, Exostor, Pomte, Maurice Carbonaro,
WarthogDemon, Inquam, SpigotMap, Policron, Heyitspeter, Mistercupcake, Camrn86, English Subtitle, Crowne, Voorlandt, The Tetrast,
Philogo, LBehounek, Jesin, VanishedUserABC, Paradoctor, Kgoarany, RJaguar3, ConcernedScientist, Lord British, Ljf255, SouthLake,
Kumioko (renamed), DesolateReality, Anchor Link Bot, Wireless99, Randomblue, CBM2, NoBu11, Francvs, Classicalecon, Phyte,
NicDumZ, Jan1nad, Gherson2, Mild Bill Hiccup, Dkf11, Nanobear~enwiki, Nanmus, Watchduck, Cacadril, Hans Adler, Djk3, Will-
hig, Palnot, WikHead, Subversive.sound, Sameer0s, Addbot, Norman Ramsey, Histre, Pdibner, Tassedethe, ., Snaily, Legobot,
Yobot, Ht686rg90, Cloudyed, Pcap, AnakngAraw, AnomieBOT, Citation bot, TitusCarus, Grim23, Ejars, Omnipaedista, Raulshc, Fres-
coBot, Hobsonlane, Mark Renier, Liiiii, Citation bot 1, Tkuvho, DrilBot, I dream of horses, Sh Najd, 34jjkky, Rlcolasanti, Dian-
naa, Reach Out to the Truth, Lauri.pirttiaho, WildBot, Gf uip, Klbrain, Carbo1200, Be hajian, Chharvey, Sampletalk, Bulwersator,
Jaseemabid, Tijfo098, Templatetypedef, ClueBot NG, Johannes Schtzel, MerlIwBot, Daviddwd, BG19bot, Pacerier, Lifeformnoho,
Dhruvbaldawa, Virago250, Solomon7968, Rjs.swarnkar, Sanpra1989, Dexbot, Deltahedron, Gabefair, Jochen Burghardt, Meltingwood,
Hoppeduppeanut, Cptwunderlich, Seppi333, Finnusertop, Holyseven007, Wilbertcr, 22merlin, Threerealtrees, Immanuel Thoughtmaker,
Jwinder47, Mario Casteln Castro, Purgy Purgatorio, Comp-heur-intel, Broswald, Timothyjosephwood, DiViNiX, Ndesh26, The Terrible
Toes, Scirocco0316, BardRapt, DIYeditor, Swclyde, Magic links bot and Anonymous: 273
First-order predicate Source: https://en.wikipedia.org/wiki/First-order_predicate?oldid=772185084 Contributors: Tobias Hoevekamp,
Awaterl, Michael Hardy, Oliver Pereira, Silversh, Alex S, Pfortuny, Rholton, SimonMayer, Brona, Jabowery, Vina, Oleg Alexandrov,
Graham87, Fram, SmackBot, Mets501, CBM, Gregbard, David Eppstein, Philogo, Sophivorus, Erik9bot, Bender the Bot and Anonymous:
1
Formation rule Source: https://en.wikipedia.org/wiki/Formation_rule?oldid=635230107 Contributors: Michael Hardy, Hyacinth, Tim-
rollpickering, Giftlite, EmilJ, BD2412, Kbdank71, Arthur Rubin, SmackBot, Gregbard, Cydebot, Cpiral, ClueBot, Hans Adler, Addbot,
Tahu88810, Xqbot, The Wiki ghost, Snotbot, Brirush and Anonymous: 1
Formula game Source: https://en.wikipedia.org/wiki/Formula_game?oldid=662182480 Contributors: Michael Hardy, Bearcat, Alai,
BD2412, ForgeGod, Bluebot, Gregbard, Complex (de) and Deutschgirl
Free Boolean algebra Source: https://en.wikipedia.org/wiki/Free_Boolean_algebra?oldid=784766886 Contributors: Zundark, Chris-
martin, Charles Matthews, CSTAR, Chalst, Oleg Alexandrov, BD2412, R.e.b., Mathbot, Trovatore, Arthur Rubin, SmackBot, Mhss,
Vaughan Pratt, CBM, Gregbard, R'n'B, Output~enwiki, Watchduck, Addbot, Daniel Brown, AnomieBOT, LilHelpa, Jiri 1984, Helpful
Pixie Bot, Magic links bot and Anonymous: 6
Free variables and bound variables Source: https://en.wikipedia.org/wiki/Free_variables_and_bound_variables?oldid=790708541 Con-
tributors: Toby Bartels, Edward, Michael Hardy, Emperorbma, Charles Matthews, Bevo, Robbot, MathMartin, Pengo, Tea2min, Ssd,
CSTAR, SeanProctor, Spayrard, Shenme, Homerjay, Diego Moya, Ish ishwar, Daira Hopwood, MFH, BD2412, R.e.b., Eubot, Freshe-
neesz, Algebraist, YurikBot, SimonMorgan, Wasseralm, Mustard~enwiki, SmackBot, Mhss, Jerome Charles Potts, Shunpiker, Cyberco-
bra, Jon Awbrey, Henning Makholm, Ysoldak, MagnaMopus, Physis, Paul Foxworthy, CBM, Gregbard, Nick Number, JohnPaulPagano,
AnAj, Abcarter, Faizhaider, Usien6, R'n'B, Arronax50, Trumpet marietta 45750, Xnuala, Camrn86, Aaron Rotenberg, WereSpielChe-
quers, Reuqr, Ddxc, Maxalbanese, Classicalecon, Shaded0, 718 Bot, SchreiberBike, Franklin.vp, Subversive.sound, Addbot, Ojb500,
Yobot, Pcap, Gongshow, 4th-otaku, Iitmadras, Cmccormick8, FrescoBot, Maggyero, Winterst, Lotje, WikitanvirBot, BattyBot, Treb-
Dozer, Deacon Vorbis and Anonymous: 44
Frege system Source: https://en.wikipedia.org/wiki/Frege_system?oldid=798658418 Contributors: Michael Hardy, EmilJ, Myasuda,
Headbomb, Bynne, FrescoBot, Empty Buer and Anonymous: 1
Frege's propositional calculus Source: https://en.wikipedia.org/wiki/Frege%27s_propositional_calculus?oldid=774571120 Contributors:
Kku, AugPi, Charles Matthews, MathMartin, Gubbubu, Rich Farmbrough, Wclark, Cmdrjameson, Jpbowen, SmackBot, Mhss, Mets501,
CBM, Gregbard, Julian Mendez, Vantelimus, R'n'B, DorganBot, The Tetrast, Addbot, Lightbot, Yobot, Taneb, LilHelpa, EmausBot and
Anonymous: 4
Frege's theorem Source: https://en.wikipedia.org/wiki/Frege%27s_theorem?oldid=768289762 Contributors: Chinju, AugPi, Tea2min,
Elwikipedista~enwiki, Oleg Alexandrov, Algebraist, NawlinWiki, Joth, SmackBot, Shadow1, CRGreathouse, CBM, Gregbard, Rgheck,
NorthernThunder, David Eppstein, VolkovBot, FrankEM, Geometry guy, Dangercrow, Alexbot, Addbot, Neodop, Luckas-bot, AnomieBOT,
Xqbot, Gilo1969, El Caro, Chatsam, Erik9bot, EmausBot, ZroBot, Jochen Burghardt, TheAvatard and Anonymous: 7
Functional completeness Source: https://en.wikipedia.org/wiki/Functional_completeness?oldid=795193008 Contributors: Slrubenstein,
Michael Hardy, Paul Murray, Ancheta Wis, Kaldari, Guppynsoup, EmilJ, Nortexoid, Domster, CBright, LOL, Paxsimius, Qwertyus,
Kbdank71, MarSch, Nihiltres, Jameshsher, R.e.s., Cedar101, RichF, SmackBot, InverseHypercube, CBM, Gregbard, Cydebot, Krauss,
Swpb, Sergey Marchenko, Joshua Issac, FMasic, Saralee Arrowood Viognier, Francvs, Hans Adler, Cnoguera, Dsimic, Addbot, Yobot,
AnomieBOT, TechBot, Infvwl, Citation bot 1, Abazgiri, Dixtosa, ZroBot, Tijfo098, ClueBot NG, Helpful Pixie Bot, BG19bot, Wck000
and Anonymous: 29
Game semantics Source: https://en.wikipedia.org/wiki/Game_semantics?oldid=797717416 Contributors: Edward, Denny, Charles Matthews,
Giftlite, Thv, Betelgeuse, Kntg, Smimram, Ben Standeven, Chalst, Vasiliscul, Nortexoid, Mpeisenbr, Velho, Kzollman, Dysepsion,
BD2412, Rjwilmsi, Mathbot, Michael Slone, Pacogo7, Ott2, Draicone, SmackBot, Mhss, Cybercobra, Chenli, CBM, Gregbard, Cydebot,
Nick Number, Falcor84, Cometstyles, Squids and Chips, Camrn86, Jamelan, Fratrep, DFRussia, Quercus basaseachicensis, Doprendek,
DumZiBoT, XLinkBot, Dthomsen8, Algebran, Addbot, Yobot, 4bpp, John of Reading, Tijfo098, Snotbot, Pietro Galliani, Helpful Pixie
Bot, Modelpractice, Knife-in-the-drawer, Magic links bot, KolbertBot and Anonymous: 25
Generalized quantifier Source: https://en.wikipedia.org/wiki/Generalized_quantifier?oldid=799499235 Contributors: Nortexoid, Woohookitty,
BD2412, Rjwilmsi, Salix alba, RussBot, Neither, Gryon, Gregbard, CapnPrep, RJGray, Tijfo098, Helpful Pixie Bot, Curb Chain,
Kyoakoa, AK456 and Anonymous: 8
George Boole Source: https://en.wikipedia.org/wiki/George_Boole?oldid=799946153 Contributors: Mav, Tarquin, Deb, Karen John-
son, William Avery, Heron, Hirzel, Hephaestos, Michael Hardy, Dcljr, Ellywa, Ahoerstemeier, Stan Shebs, Poor Yorick, BRG, Charles
Matthews, RickK, Reddi, Doradus, Markhurd, Hyacinth, Grendelkhan, Samsara, Proteus, Lumos3, Dimadick, Frisket, Robbot, Jaredwf,
Fredrik, Altenmann, Romanm, Smallweed, Pingveno, Blainster, Wereon, Alan Liefting, Snobot, Ancheta Wis, Giftlite, Inter, Tom har-
rison, Peruvianllama, Jason Quinn, Zoney, Djegan, Isidore, Gadum, Antandrus, PeterMKehoe, DragonySixtyseven, Pmanderson,
Icairns, Almit39, Zfr, Jackiespeel, TiMike, TonyW, Babelsch, Lucidish, D6, Discospinster, Zaheen, Xezbeth, Bender235, Djordjes,
Elwikipedista~enwiki, Kaszeta, El C, Rgdboer, Chalst, Kwamikagami, Mwanner, Kyz, Bobo192, Ruszewski, Smalljim, Bollar, Roy da
Vinci, Jumbuck, Arthena, Andrew Gray, ABCD, Orelstrigo, Wtmitchell, Notjim, Alai, Umapathy, Oleg Alexandrov, Nuker~enwiki,
Woohookitty, FeanorStar7, MattGiuca, Scjessey, Mandarax, RichardWeiss, Graham87, Lastorset, BD2412, Ketiltrout, Rjwilmsi, Crazy-
nas, The wub, MarnetteD, Yamamoto Ichiro, FlaBot, Emarsee, RexNL, Gurch, Goeagles4321, Wgfcrafty, Sodin, Introvert, Chobot,
Jaraalbe, Guliolopez, Peterl, YurikBot, Hairy Dude, RussBot, SpuriousQ, CambridgeBayWeather, Rsrikanth05, TheGrappler, Wiki alf,
Trovatore, Bayle Shanks, Banes, Samir, Tomisti, Nikkimaria, CWenger, ArielGold, Caballero1967, Katieh5584, JDspeeder1, Finell,
Tinlv7, SmackBot, Derek Andrews, Incnis Mrsi, Hydrogen Iodide, C.Fred, Jagged 85, Renesis, Eskimbot, HalfShadow, Gilliam, Slaniel,
Skizzik, Irlchrism, Bluebot, Keegan, Da nuke, Djln, MalafayaBot, DHN-bot~enwiki, Can't sleep, clown will eat me, Tamfang, DRahier,
Ww2censor, Mhym, Addshore, SundarBot, Fuhghettaboutit, Ghiraddje, Studentmrb, Jon Awbrey, Ske2, Bejnar, Kukini, Ged UK, Ohcon-
fucius, Lilhinx, SashatoBot, EDUCA33E, Breno, AppaBalwant, IronGargoyle, Ckatz, A. Parrot, Grumpyyoungman01, Timmy1, Naa-
man Brown, Ojan, Gernch, Tawkerbot2, Amniarix, Xcentaur, Tanthalas39, Ale jrb, CBM, Mr. Science, Fordmadoxfraud, Gregbard,
Logicus, Doctormatt, Cydebot, Grahamec, JFreeman, ST47, ArcherOne, DumbBOT, Nabokov, Alaibot, Wdspann, Malleus Fatuorum,
Thijs!bot, Jan Blanicky, Brainboy109, Tapir Terric, Adw2000, Pfranson, Escarbot, AntiVandalBot, RobotG, Seaphoto, Deective,
MER-C, Matthew Fennell, Db099221, Alpinu, TAnthony, Cgilmer, .anacondabot, Acroterion, Magioladitis, Bongwarrior, VoABot
II, Dr.h.friedman, Rivertorch, Waacstats, Jim Douglas, Illuminismus, SandStone, Animum, David Eppstein, Spellmaster, Edward321,
DGG, MartinBot, Genghiskhanviet, Rettetast, Keith D, R'n'B, Shellwood, J.delanoy, Skeptic2, Sageofwisdom, AltiusBimm, Alphapeta,
Chiswick Chap, JonMcLoone, KylieTastic, WJBscribe, Mxmsj, Kolja21, Inwind, CA387, Squids and Chips, Mateck, Idioma-bot, Lights,
VolkovBot, ABF, Pleasantville, Je G., Ryan032, Philip Trueman, Martinevans123, TXiKiBoT, Oshwah, Dwight666, A4bot, Hqb, Mi-
randa, GcSwRhIc, Akramm1, Vanished user ikijeirw34iuaeolaseric, Anna Lincoln, Ontoraul, The Tetrast, TedColes, BotKung, Dun-
can.Hull, Softlavender, Sylent, Brianga, Logan, EmxBot, Steven Weston, BlueBart, Cj1340, Newbyguesses, Cwkmail, Xenophon777,
Anglicanus, Monegasque, Meigan Way, Lightmouse, OKBot, Kumioko (renamed), Msrasnw, Adam Cuerden, Denisarona, Randy Kryn,
Heureka!, Loren.wilton, Martarius, Sfan00 IMG, ClueBot, Fyyer, Professorial, Gylix, Supertouch, Arakunem, CounterVandalismBot, Ot-
tawahitech, Excirial, Ketchup1147, El bot de la dieta, Thingg, Tvwatch~enwiki, LincsPaul, XLinkBot, RogDel, YeIrishJig, Chanakal, Ne-
penthes, Little Mountain 5, Alexius08, Addbot, Wsvlqc, Logicist, Ashishlohorung, Any820, Ironholds, Chimes12, Gzhanstong, LPChang,
Chzz, Favonian, Uscitizenjason, Ehrenkater, BOOLE1847, Zorrobot, Margin1522, Luckas-bot, Yobot, OrgasGirl, The Grumpy Hacker,
PMLawrence, 2008CM, ValBaz, AnomieBOT, DemocraticLuntz, Kingpin13, Materialscientist, Puncakes, Bob Burkhardt, LilHelpa,
Xqbot, Addihockey10, Capricorn42, 4twenty42o, Jmundo, Miym, Omnipaedista, Shubinator, BSTemple, A.amitkumar, GreenC, Ob-
mijtinokcus, FrescoBot, Boldstep, Atlantia, 444x1, Plucas58, Martinvl, At-par, Pmokeefe, Serols, Tyssul, Cnwilliams, 19cass20, Lotje,
Fox Wilson, Abcpathros, LilyKitty, JamAKiska, Sideways713, DARTH SIDIOUS 2, Mandolinface, John of Reading, WikitanvirBot,
IncognitoErgoSum, Mallspeeps, Dcirovic, K6ka, Kkm010, PBS-AWB, Daonguyen95, Knight1993, Luisrock2008, Rcsprinter123, Don-
ner60, Chris857, Peter Karlsen, TYelliot, Petrb, ClueBot NG, Jnorton7558, Derick1259, Goose friend, Yashowardhani, MickyDripping,
Widr, Australopithecus2, Helpful Pixie Bot, Indiangrove, BG19bot, MusikAnimal, Prashantgonarkar, Payppp, Solomon7968, Snow
Blizzard, Lekro, Rjparsons, Toploftical, Ninmacer20, FoCuSandLeArN, Webclient101, Lugia2453, VIAFbot, Grembleben, Jassson-
pet, Jochen Burghardt, Cadillac000, Blankslate8, Nimetapoeg, Red-eyed demon, Tentinator, Eric Corbett, Noyster, OccultZone, Crow,
NABRASA, Hypotune, POLY1956, Melcous, Vieque, BethNaught, Prisencolin, Trax support, Kinetic37, TheWarLizard, Wolverne,
MRD2014, JC713, RyanTQuinn, Slugsayshi, Anonimeco, Lewismason1lm, Mihai.savastre, Eteethan, GB200UCC, DatWillFarrLad,
PennywisePedantry, Sicostel, Tmould1, GeneralizationsAreBad, KasparBot, ProprioMe OW, Feminist, CyberWarfare, MBlaze Light-
ning, Erfasser, CLCStudent, Lockedsmith, Spli Joint Blunt, Dusti1000, Wobwob8888, Dicldicldicl, Chrissymad, Highly Ridiculous,
Pictomania, Doyen786, Bender the Bot, KAP03, Suede Cat, Verideous, Prasanthk18 and Anonymous: 514
Goodman–Nguyen–van Fraassen algebra Source: https://en.wikipedia.org/wiki/Goodman%E2%80%93Nguyen%E2%80%93van_
Fraassen_algebra?oldid=753394249 Contributors: Michael Hardy, Trovatore, CharlotteWebb, Knorlin, Psinu, Good Olfactory, Yobot,
Worldbruce, DrilBot, Marcocapelle, Brad7777, Mark viking and Anonymous: 1
Head normal form Source: https://en.wikipedia.org/wiki/Beta_normal_form?oldid=772223164 Contributors: Michael Hardy, Dominus,
Dysprosia, Ruakh, Spayrard, Tromp, Linas, BD2412, StevenDaryl, William Lovas, Salsb, Jpbowen, Cedar101, Donhalcon, SmackBot,
Mhss, Addbot, Erik9bot, Trappist the monk, BG19bot, JonathasDantas, Synthwave.94, Shallchang, Malan and Anonymous: 4
Herbrand normal form Source: https://en.wikipedia.org/wiki/Herbrandization?oldid=775528064 Contributors: Tea2min, AshtonBen-
son, Bjones, Linas, SmackBot, MarshBot, Mere Interlocutor, Omnipaedista, MorganGreen, BG19bot and Anonymous: 3
Herbrandization Source: https://en.wikipedia.org/wiki/Herbrandization?oldid=775528064 Contributors: Tea2min, AshtonBenson, Bjones,
Linas, SmackBot, MarshBot, Mere Interlocutor, Omnipaedista, MorganGreen, BG19bot and Anonymous: 3
Homogeneous relation Source: https://en.wikipedia.org/wiki/Binary_relation?oldid=796087488 Contributors: AxelBoldt, Bryan Derk-
sen, Zundark, Tarquin, Jan Hidders, Roadrunner, Mjb, Tomo, Patrick, Xavic69, Michael Hardy, Wshun, Isomorphic, Dominus, Ixfd64,
TakuyaMurata, Charles Matthews, Timwi, Dcoetzee, Jitse Niesen, Robbot, Chocolateboy, MathMartin, Tea2min, Giftlite, Fropu,
Dratman, Jorge Stol, Jlr~enwiki, Andycjp, Quarl, Guanabot, Yuval madar, Slipstream, Paul August, Elwikipedista~enwiki, Shanes,
EmilJ, Randall Holmes, Ardric47, Obradovic Goran, Eje211, Alansohn, Dallashan~enwiki, Keenan Pepper, PAR, Adrian.benko, Oleg
Alexandrov, Joriki, Linas, Apokrif, MFH, Dpv, Pigcatian, Penumbra2000, Fresheneesz, Chobot, YurikBot, Hairy Dude, Koeyahoo,
Trovatore, Bota47, Arthur Rubin, Netrapt, SmackBot, Royalguard11, SEIBasaurus, Cybercobra, Jon Awbrey, Turms, Lambiam, Dbtfz,
Mr Stephen, Mets501, Dreftymac, Happy-melon, Petr Matas, CRGreathouse, CBM, Yrodro, WillowW, Xantharius, Thijs!bot, Egrif-
n, Rlupsa, Marek69, Fayenatic london, JAnDbot, MER-C, TAnthony, Magioladitis, Vanish2, Avicennasis, David Eppstein, Robin
S, Akurn, Adavidb, LajujKej, Owlgorithm, Djjrjr, Policron, DavidCBryant, Quux0r, VolkovBot, Boute, Vipinhari, Anonymous Dis-
sident, PaulTanenbaum, Jackfork, Wykypydya, Dmcq, AlleborgoBot, AHMartin, Ocsenave, Sftd, Paradoctor, Henry Delforn (old), Mi-
NombreDeGuerra, DuaneLAnderson, Anchor Link Bot, CBM2, Classicalecon, ClueBot, Snigbrook, Rhubbarb, Hans Adler, Sounder-
Bruce, SilvonenBot, BYS2, Plmday, Addbot, LinkFA-Bot, Tide rolls, Jarble, Legobot, Luckas-bot, Yobot, Ht686rg90, Lesliepogi, Pcap,
Labus, Nallimbot, Reindra, FredrikMeyer, AnomieBOT, Floquenbeam, Royote, Hahahaha4, Materialscientist, Belkovich, Citation bot,
Racconish, Jellystones, Xqbot, Isheden, Geero, GhalyBot, Ernsts, Howard McCay, Constructive editor, Mark Renier, Mfwitten, Ran-
domDSdevel, NearSetAccount, SpaceFlight89, Yunshui, Miracle Pen, Brambleclawx, RjwilmsiBot, Nomen4Omen, Chharvey, Spork-
Bot, OnePt618, Sameer143, Socialservice, ResearchRave, ClueBot NG, Wcherowi, Frietjes, Helpful Pixie Bot, Koertefa, BG19bot,
ChrisGualtieri, YFdyh-bot, Dexbot, Makecat-bot, ScitDei, Lerutit, Jochen Burghardt, Jodosma, Karim132, Cosmia Nebula, Monkbot,
Pratincola, , Neycalazans, Some1Redirects4You, The Quixotic Potato, Luis150902, Magic links bot and Anonymous: 114
Horn clause Source: https://en.wikipedia.org/wiki/Horn_clause?oldid=789355311 Contributors: Vkuncak, Edward, Michael Hardy,
Karada, Angela, Charles Matthews, Dcoetzee, Doradus, Thadk, Ldo, Robbot, Fieldmethods, Altenmann, Kwi, Cek, Tom harrison, Gub-
bubu, Sigfpe, Xmlizer, Luqui, Jonathanischoice, Elwikipedista~enwiki, Chalst, EmilJ, Karlheg, Jumbuck, Woohookitty, Linas, Jacobo-
lus, MattGiuca, BD2412, Tizio, Tawker, YurikBot, Ritchy, Bota47, Ott2, GrinBot~enwiki, Mhss, Theone256, NYKevin, Rsimmonds01,
Ft1~enwiki, CRGreathouse, Tajko, Gregbard, A Softer Answer, Thijs!bot, Hannes Eder, Kevin.cohen, David Eppstein, Paullakso, In-
quam, Rod57, Knverma, Jamelan, RatnimSnave, Logperson, Hans Adler, Addbot, Lightbot, Jarble, Ptbotgourou, AnomieBOT, Rubinbot,
ArthurBot, MIRROR, Olexa Riznyk, Nunoplopes, EmausBot, Tijfo098, ClueBot NG, Gareth Grith-Jones, Osnetwork, Michael Zeising,
Helpful Pixie Bot, BLNarayanan, Compulogger, ChrisGualtieri, Hmainsbot1, Jochen Burghardt, Rintdusts, Marcello sachs, Jackee1234
and Anonymous: 35
Hypostatic abstraction Source: https://en.wikipedia.org/wiki/Hypostatic_abstraction?oldid=692525468 Contributors: Bevo, MistToys,
El C, Diego Moya, Versageek, Jerey O. Gustafson, Magister Mathematicae, DoubleBlue, TeaDrinker, Brandmeister (old), Closedmouth,
C.Fred, Rajah9, JasonMR, Jon Awbrey, Inhahe, JzG, Slakr, CBM, Gogo Dodo, Hut 8.5, Brigit Zilwaukee, Yolanda Zilwaukee, Karrade,
Mike V, The Tetrast, Rjd0060, Wolf of the Steppes, Doubtentry, Icharus Ixion, Hans Adler, Buchanans Navy Sec, Mr. Peabodys Boy,
Overstay, Marsboat, Unco Guid, Viva La Information Revolution!, Autocratic Uzbek, Poke Salat Annie, Flower Mound Belle, Navy
Pierre, Mrs. Lovetts Meat Puppets, Chester County Dude, Southeast Penna Poppa, Delaware Valley Girl, Denispir, AnomieBOT, Paine
Ellsworth, Gamewizard71, PhnomPencil and Anonymous: 3
Hypothetical syllogism Source: https://en.wikipedia.org/wiki/Hypothetical_syllogism?oldid=638420630 Contributors: Rossami, Charles
Matthews, I R Solecism, Dysprosia, Taak, Jiy, Chalst, Arthena, RJFJR, Oleg Alexandrov, Mel Etitis, Lsloan, Algebraist, Noam~enwiki,
Arthur Rubin, SmackBot, Haza-w, Mhss, Bluebot, Cybercobra, Jim.belk, Simeon, Gregbard, Cydebot, JamesBWatson, .Jos~enwiki,
Squids and Chips, Jamelan, Chenzw, ClueBot, Alexbot, Alejandrocaro35, SchreiberBike, Addbot, BobHelix, Chzz, Luckas-bot, The
Parting Glass, Erik9bot, ClueBot NG, Dooooot, Flosfa, Brad7777, LatinAddict and Anonymous: 33
Idempotence Source: https://en.wikipedia.org/wiki/Idempotence?oldid=799748778 Contributors: Damian Yerrick, AxelBoldt, Zundark,
Taw, Patrick, Chas zzz brown, Michael Hardy, Ahoerstemeier, Stevan White, Andres, Zarius, Revolver, Charles Matthews, Dysprosia, Jitse
Niesen, Glimz~enwiki, Hyacinth, Robbot, Mattblack82, Altenmann, Tea2min, Giftlite, Fropu, Pashute, Mboverload, DefLog~enwiki,
J. 'mach' wust, Joseph Myers, Alex Cohn, Tothebarricades.tk, Andreas Kaufmann, Gcanyon, Mormegil, Rgdboer, Kwamikagami, Nigelj,
Sleske, Obradovic Goran, Mdd, Abdulqabiz, Dirac1933, Bookandcoee, Simetrical, Igny, MFH, Grammarbot, Ketiltrout, MarkHudson,
Tokigun, FlaBot, Mathbot, Vonkje, DVdm, Roboto de Ajvol, Angus Lepper, RobotE, Hairy Dude, Michael Slone, Gaius Cornelius,
FuzzyBSc, Aeusoes1, Deskana, Trovatore, Sangwine, Sfnhltb, Crasshopper, Tomisti, Kompik, Cedar101, Evilbu, SmackBot, Elronx-
enu, Octahedron80, Nbarth, Syberghost, Cybercobra, Lambiam, Robosh, Loadmaster, JHunterJ, MedeaMelana, Rschwieb, Alex Selby,
Dreftymac, Happy-melon, CRGreathouse, CmdrObot, CBM, Gregbard, Julian Mendez, Thijs!bot, Kilva, Konradek, DB Durham NC,
RichardVeryard, RobHar, Gioto, Ouc, JAnDbot, Deective, Randal Oulton, Danculley, Vanish2, Albmont, Brewhaha@edmc.net, David
Eppstein, Emw, Cander0000, Stolsvik, Catskineater, Robin S, Gwern, Bostonvaulter, TomyDuby, Krishnachandranvn, Policron, John-
Blackburne, TXiKiBoT, Una Smith, Broadbot, Jamelan, SieBot, LungZeno, Roujo, Ctxppc, Svick, Stjarnblom, Niceguyedc, DragonBot,
Alexbot, He7d3r, Nardog, Herbert1000, Rswarbrick, Qwfp, Vog, Gniemeyer, Addbot, MrOllie, Download, Legobot, Luckas-bot, Yobot,
AnomieBOT, Gtz, JackieBot, NOrbeck, Kaoru Itou, DKPHA, RandomDSdevel, Pinethicket, Serols, Ms7821, Duoduoduo, Timtem-
pleton, Peoplemerge, Wham Bam Rock II, Quondum, Dumbier, ClueBot NG, BG19bot, Deltahedron, Jochen Burghardt, 00prometheus,
JaconaFrere, Acominym, Viam Ferream, Loraof, Ira Leviton, CLCStudent, Krebert, InternetArchiveBot, Endless Hair and Anonymous:
90
Idempotency of entailment Source: https://en.wikipedia.org/wiki/Idempotency_of_entailment?oldid=744541692 Contributors: GT-
Bacchus, Hyacinth, Chalst, Wood Thrush, Melaen, RJFJR, MadMax, SmackBot, Gregbard, Fadesga, Chaosdruid, Addbot, Erik9bot,
ChuispastonBot, Spockticus and Anonymous: 6
If and only if Source: https://en.wikipedia.org/wiki/If_and_only_if?oldid=794464701 Contributors: Damian Yerrick, AxelBoldt, Matthew
Woodcraft, Vicki Rosenzweig, Zundark, Tarquin, Larry_Sanger, Toby Bartels, Ark~enwiki, Camembert, Stevertigo, Patrick, Chas zzz
brown, Michael Hardy, Wshun, DopeshJustin, Dante Alighieri, Dominus, SGBailey, Wwwwolf, Delirium, Georey~enwiki, Stevenj,
Kingturtle, UserGoogol, Andres, Evercat, Jacquerie27, Adam Conover, Revolver, Wikiborg, Dysprosia, Itai, Ledge, McKay, Robbot, Psy-
chonaut, Henrygb, Ruakh, Diberri, Tea2min, Adam78, Enochlau, Giftlite, DavidCary, var Arnfjr Bjarmason, Mellum, Chinasaur,
Jason Quinn, Taak, Superfrank~enwiki, Wmahan, LiDaobing, Mvc, Neutrality, Urhixidur, Ropers, Jewbacca, Karl Dickman, PhotoBox,
Brianjd, Paul August, Sunborn, Elwikipedista~enwiki, Chalst, Edward Z. Yang, Pearle, Ekevu, IgorekSF, Msh210, Interiot, ABCD, Still-
notelf, Suruena, Voltagedrop, Rhialto, Forderud, Oleg Alexandrov, Joriki, Velho, Woohookitty, Mindmatrix, Ruud Koot, Ryan Reich,
Adjam, Pope on a Rope, Eyu100, R.e.b., FlaBot, Mathbot, Rbonvall, Glenn L, BMF81, Bgwhite, RussBot, Postglock, Voidxor, El Pollo
Diablo, Gadget850, Jkelly, Danielpi, Lt-wiki-bot, TheMadBaron, Nzzl, Arthur Rubin, Netrapt, SmackBot, Ttzz, Gloin~enwiki, Incnis
Mrsi, InverseHypercube, Melchoir, GoOdCoNtEnT, Thumperward, Javalenok, Jonatan Swift, Peterwhy, Acdx, Shirifan, Evildictaitor,
Abolen, Rainwarrior, Dicklyon, Mets501, Yuide, Shoeofdeath, CRGreathouse, CBM, Joshwa, Picaroon, Gregbard, Eu.stefan, Letranova,
Thijs!bot, Egrin, Schneau, Jojan, Davkal, AgentPeppermint, Urdutext, Holyknight33, Escarbot, WinBot, Serpents Choice, Timlevin,
Singularity, David Eppstein, Msknathan, MartinBot, AstroHurricane001, Vanished user 47736712, Kenneth M Burke, Alsosaid1987,
DorganBot, Bsroiaadn, TXiKiBoT, Anonymous Dissident, Abyaly, Ichtaca, Mouse is back, Rjgodoy, TrippingTroubadour, KjellG, Alle-
borgoBot, Lillingen, SieBot, Iamthedeus, This, that and the other, Smaug123, Skippydo, Warman06~enwiki, Tiny plastic Grey Knight,
Francvs, Minehava, ClueBot, Surfeited, BANZ111, Master11218, WestwoodMatt, Excirial, He7d3r, Pfhorrest, Kmddmk, Addbot, Ron-
hjones, Wikimichael22, Lightbot, Jarble, Yobot, Bryan.burgers, AnomieBOT, Nejatarn, Ciphers, Quintus314, , Wissens-
Drster, Ex13, Rapsar, Mikespedia, Lotje, Igor Yalovecky, EmausBot, Ruxkor, Chricho, Sugarfoot1001, Tijfo098, FeatherPluma, Clue-
Bot NG, Wcherowi, Widr, MerlIwBot, Helpful Pixie Bot, Solomon7968, Chmarkine, CarrieVS, Me, Myself, and I are Here, Ekips39,
Epicgenius, Dr Lindsay B Yeates, Seppi333, Matthew Kastor, Cpt Wise, Loraof, Bender the Bot, Imminent77, Halo0520, Here2help,
Magic links bot and Anonymous: 147
Implicant Source: https://en.wikipedia.org/wiki/Implicant?oldid=799907636 Contributors: Michael Hardy, Charles Matthews, Jmabel,
Macrakis, McCart42, Svdb, Mailer diablo, Pako, Mathbot, Fresheneesz, YurikBot, Buster79, Pwoestyn, HopeSeekr of xMule, Smack-
Bot, Mhss, Chendy, Jon Awbrey, Courcelles, Nviladkar, Odwl, Sri go, Genuineleather, Squids and Chips, VolkovBot, Ra2007, Addbot,
MrOllie, Materialscientist, Portisere, DrilBot, Fcdesign, Matthiaspaul, Ceklock and Anonymous: 25
Implication graph Source: https://en.wikipedia.org/wiki/Implication_graph?oldid=745821013 Contributors: Altenmann, Vadmium,
PWilkinson, GregorB, CBM, Thisisraja, David Eppstein, DavidCBryant, R0uge, DOI bot, Twri, Dcirovic, ClueBot NG, BG19bot, 0a.io
and Anonymous: 3
Implicational propositional calculus Source: https://en.wikipedia.org/wiki/Implicational_propositional_calculus?oldid=798294979 Con-
tributors: Michael Hardy, EmilJ, BD2412, Qwertyus, Grafen, Cedar101, RDBury, Mhss, Byelf2007, JRSpriggs, CmdrObot, CBM, Greg-
bard, Cydebot, Thijs!bot, Balloonguy, R'n'B, N4nojohn, Hotfeba, Graymornings, SloppyG, Hugo Herbelin, Addbot, BG19bot, Deacon
Vorbis, KolbertBot and Anonymous: 3
Inclusion (Boolean algebra) Source: https://en.wikipedia.org/wiki/Inclusion_(Boolean_algebra)?oldid=567164022 Contributors: Macrakis
Independence of premise Source: https://en.wikipedia.org/wiki/Independence_of_premise?oldid=607144839 Contributors: CBM, Greg-
bard, Yobot and Anonymous: 3
Indicative conditional Source: https://en.wikipedia.org/wiki/Indicative_conditional?oldid=791696975 Contributors: Ryguasu, Patrick,
Michael Hardy, TakuyaMurata, Iseeaboar, Charles Matthews, Dysprosia, Radgeek, Phil Boswell, Robbot, Snobot, Recentchanges, Bnn,
Aquaeur, , Paul August, Oleg Alexandrov, Mwilde, Kzollman, Elvarg, Margosbot~enwiki, Fresheneesz, YurikBot, KSchutte, Copy-
man~enwiki, Bota47, Pacogo7, Arthur Rubin, J-gyorke, SmackBot, Mhss, Bluebot, Jaymay, Gregbard, WinBot, AlleborgoBot, Eluard,
Otr500, Addbot, Donjoe334, Machine Elf 1735, MindShifts, Staszek Lem, Hanlon1755, ChrisGualtieri, Lingzhi and Anonymous: 24
Intensional logic Source: https://en.wikipedia.org/wiki/Intensional_logic?oldid=788452152 Contributors: Michael Hardy, Charles Matthews,
Beland, Chalst, Tabor, Rend~enwiki, BD2412, KSchutte, SmackBot, Physis, Gregbard, AndrewHowse, Alaibot, Nick Number, EagleFan,
Ontoraul, Dan Polansky, SieBot, KathrynLybarger, Johnuniq, Tomiiik, Addbot, AKappa, LilHelpa, The Land Surveyor, Omnipaedista,
Citation bot 1, Miracle Pen, John of Reading, Klbrain, PBS-AWB, Rlithgow, Tijfo098, Helpful Pixie Bot, Billy Ockham, Gcevanna,
Jodosma, Monkbot, MaxPlankton and Anonymous: 7
Interior algebra Source: https://en.wikipedia.org/wiki/Interior_algebra?oldid=798387132 Contributors: Zundark, Michael Hardy, Charles
Matthews, Hyacinth, Giftlite, Kuratowskis Ghost, Oleg Alexandrov, Linas, Mathbot, Trovatore, SmackBot, Mhss, Bejnar, Mets501,
Gregbard, Gogo Dodo, MetsBot, R'n'B, LokiClock, Aspects, Hans Adler, Jarble, Yobot, Omnipaedista, EmausBot, Dewritech and Anony-
mous: 11
Intermediate logic Source: https://en.wikipedia.org/wiki/Intermediate_logic?oldid=796249393 Contributors: AugPi, Silversh, Charles
Matthews, Leibniz, EmilJ, Grue, Lysdexia, Oleg Alexandrov, Kbdank71, Curpsbot-unicodify, Mhss, Byelf2007, Mets501, CBM, Greg-
bard, Cydebot, Rsmyth, Kenneth M Burke, LokiClock, Hugo Herbelin, Addbot, Lightbot, AnomieBOT, Xqbot, WikitanvirBot, Hpvpp,
Frietjes, Jtron2000 and Anonymous: 8
Inverse trigonometric functions Source: https://en.wikipedia.org/wiki/Inverse_trigonometric_functions?oldid=800788646 Contribu-
tors: XJaM, Patrick, Michael Hardy, Stevenj, Dysprosia, Fibonacci, Robbot, Tea2min, Giftlite, Anville, SURIV, Daniel,levine, Pmander-
son, Abdull, Discospinster, Osrevad, Bender235, Zenohockey, Army1987, Jrme, Alansohn, Anthony Appleyard, Wtmitchell, Stradi-
variusTV, Armando, Gerbrant, Emallove, R.e.b., Kri, Glenn L, Salvatore Ingala, Chobot, Visor, DVdm, Algebraist, YurikBot, Wave-
length, Sceptre, Hede2000, KSmrq, Grafen, Int 80h, NorsemanII, Bamse, RDBury, Maksim-e~enwiki, Thelukeeect, Eskimbot, Mhss,
Mirokado, JCSantos, PrimeHunter, Deathanatos, V1adis1av, BentSm, Saippuakauppias, Lambiam, Eridani, Ian Vaughan, ChaoticLlama,
CapitalR, JRSpriggs, Conrad.Irwin, HenningThielemann, Fommil, Rian.sanderson, Palmtree3000, Zalgo, Thijs!bot, Spikedmilk, Nonag-
onal Spider, Headbomb, EdJohnston, BigJohnHenry, Luna Santin, Hannes Eder, Pichote, JAnDbot, Ricardo sandoval, Jetstreamer,
JNW, Albmont, Gammy, JoergenB, Ac44ck, Gwern, Isamil, Mythealias, Pomte, Knorlin, TungstenWolfram, Hennessey, Patrick, Bo-
bianite, BentonMiller, Sigmundur, DavidCBryant, Alsosaid1987, Alan U. Kennington, VolkovBot, Indubitably, LokiClock, Justtysen,
VasilievVV, Riku92mr, Anonymous Dissident, Corvus coronoides, Rdengler, Dmcq, Vertciel, Logan, CagedKiller360, Ck, Aly89,
AlanUS, ClueBot, JoeHillen, Excirial, Bender2k14, Cenarium, Leandropls, Kiensvay, Nikhilkrgvr, Aaron north, Dthomsen8, DaL33T,
Addbot, Fgnievinski, Iceblock, Zarcadia, Sleepaholic, Jasper Deng, Zorrobot, Luckas-bot, Yobot, Tohd8BohaithuGh1, Ptbotgourou,
TaBOT-zerem, AnomieBOT, Archon 2488, JackieBot, Nickweedon, Geek1337~enwiki, Diego Queiroz, Txebixev, St.nerol, Hdullin,
GrouchoBot, Uniwersalista, SassoBot, Prari, Nixphoeni, D'ohBot, Kusluj, Emjaye, Number Googol, Serols, Double sharp, Trappist the
monk, Adammerlinsmith, TjBot, Jowa fan, EmausBot, ModWilson, Velowiki, X-4-V-I, RA0808, Wham Bam Rock II, Dcirovic, ZroBot,
Michael.YX.Wu, Isaac Euler, Tolly4bolly, Jay-Sebastos, Colin.campbell.27, Maschen, ChuispastonBot, Rmashhadi, Anita5192, ClueBot
NG, Matthiaspaul, Hdreuter, Helpful Pixie Bot, KLBot2, Vagobot, Gar, Crh23, YatharthROCK, , StevinSimon,
Tfr000, Modalanalytiker, Pratyya Ghosh, Ahmed Magdy Hosny, Brirush, Yardimsever, Wamiq, Jerming, Mathmensch, Blackbombchu,
Pqnlrn, DTL LAPOS, Iperetta, De Riban5, Monkbot, Cpt Wise, Arsenal CR7, Cdserio99, JaimeGallego, Deacon Vorbis and Anonymous:
188
Join (sigma algebra) Source: https://en.wikipedia.org/wiki/Join_(sigma_algebra)?oldid=786309626 Contributors: Tsirel, Linas, Magic
links bot and Anonymous: 1
Karnaugh map Source: https://en.wikipedia.org/wiki/Karnaugh_map?oldid=800150074 Contributors: Bryan Derksen, Zundark, LA2,
PierreAbbat, Fubar Obfusco, Heron, BL~enwiki, Michael Hardy, Chan siuman, Justin Johnson, Seav, Chadloder, Iulianu, Nveitch, Bog-
dangiusca, GRAHAMUK, Jitse Niesen, Fuzheado, Colin Marquardt, Furrykef, Omegatron, Vaceituno, Ckape, Robbot, Naddy, Tex-
ture, Paul Murray, Ancheta Wis, Giftlite, DocWatson42, SamB, Bovlb, Macrakis, Mobius, Goat-see, Ktvoelker, Grunt, Perey, Dis-
cospinster, Caesar, Dcarter, MeltBanana, Murtasa, ZeroOne, Plugwash, Nigelj, Unstable-Element, Obradovic Goran, Pearle, Mdd, Phy-
zome, Jumbuck, Fritzpoll, Snowolf, Wtshymanski, Cburnett, Bonzo, Kenyon, Acerperi, Wikiklrsc, Dionyziz, Eyreland, Marudubshinki,
Jake Wartenberg, MarSch, Mike Segal, Oblivious, Ligulem, Ademkader, Mathbot, Winhunter, Fresheneesz, Tardis, LeCire~enwiki,
Bgwhite, YurikBot, RobotE, RussBot, SpuriousQ, B-Con, Anomie, Arichnad, Trovatore, RolandYoung, RazorICE, RUL3R, Rohan-
mittal, Cedar101, Tim Parenti, Gulliveig, HereToHelp, RG2, Sinan Taifour, SmackBot, InverseHypercube, Thunder Wolf, Edgar181,
Gilliam, Bluebot, Thumperward, Villarinho, Moonshiner, DHN-bot~enwiki, Locriani, Sct72, HLwiKi, Michael.Pohoreski, Hex4def6,
SashatoBot, Wvbailey, MagnaMopus, Freewol, Vobrcz, Jmgonzalez, Augustojd, CRGreathouse, CBM, Jokes Free4Me, Reywas92, Czar
Kirk, Tkynerd, Thijs!bot, Headbomb, JustAGal, Jonnie5, CharlotteWebb, RazoreRobin, Leuko, Ndyguy, VoABot II, Swpb, Gantoniou,
Carrige, R'n'B, Yim~enwiki, JoeFloyd, Aervanath, FreddieRic, KylieTastic, Sigra~enwiki, TXiKiBoT, Cyberjoac, Cremepu222, Mar-
tinPackerIBM, Kelum.kosala, Spinningspark, FxBit, Pitel, Serprex, SieBot, VVVBot, Aeoza, IdreamofJeanie, OKBot, Svick, Rrfwiki,
WimdeValk, Justin W Smith, Rjd0060, Unbuttered Parsnip, Czarko, Dsamarin, Watchduck, Sps00789, Hans Adler, Gciriani, B.Zsolt,
Jmanigold, Tullywinters, ChyranandChloe, Avoided, Cmr08, Writer130, Addbot, DOI bot, Loafers, Delaszk, Dmenet, AgadaUrbanit,
Luckas-bot, Kartano, Hhedeshian, SwisterTwister, Mhayes46, AnomieBOT, Jim1138, Utility Knife, Citation bot, Dannamite, Arthur-
Bot, Pnettle, Miym, GrouchoBot, TunLuek, Abed pacino, Macjohn2, BillNace, Amplitude101, Pdebonte, Biker Biker, Pinethicket,
RedBot, The gulyan89, SpaceFlight89, Trappist the monk, Vrenator, Katragadda465, RjwilmsiBot, Alessandro.goulartt, Zap Rows-
dower, Norlik, Njoutram, Rocketrod1960, Voomoo, ClueBot NG, Bukwoy, Matthiaspaul, AHA.SOLAX, Frietjes, Imyourfoot, Widr,
Danim, Jk2q3jrklse, Spudpuppy, Nbeverly, Ceklock, Giorgos.antoniou, Icigic, CARPON, Usmanraza9, Wolfmanx122, Shidh, Elec-
tricmun11, EuroCarGT, Yaxinr, Mrphious, Jochen Burghardt, Mdcoope3, TheEpTic, Akosibrixy, Microchirp, Cheater00, Lennerton,
GreenWeasel11, Loraof, Scipsycho, BILL ABK, Acayl, ShigaIntern, InternetArchiveBot, GreenC bot, Gerdhuebner, Abduw09, Dhoni
barath, NoahB123, Ngonz424, Arun8277 and Anonymous: 279
Law of excluded middle Source: https://en.wikipedia.org/wiki/Law_of_excluded_middle?oldid=800734047 Contributors: LC~enwiki,
Zundark, Tarquin, Andre Engels, Patrick, Dominus, Wapcaplet, Justin Johnson, Chinju, CesarB, Snoyes, Error, AugPi, Evercat, Sethma-
honey, Cherkash, Schneelocke, Charles Matthews, Dcoetzee, Doradus, Hyacinth, Populus, Drernie, Altenmann, TittoAssini, Guy Pe-
ters, Xanzzibar, Tea2min, Nagelfar, Giftlite, Dbenbenn, Peruvianllama, Elias, Jorend, Dmmaus, Colinb, Guppynsoup, Leibniz, Ben-
der235, Nortexoid, Pearle, Lysdexia, Jumbuck, Hackwrench, Kocio, Velho, Simetrical, Linas, LOL, IHendry, StradivariusTV, Before
My Ken, Teemu Leisti, Graham87, Dweinberger, Rjwilmsi, Guyd, Rangek, Ian Pitchford, Margosbot~enwiki, Riki, WhyBeNormal,
CiaPan, Bgwhite, Hairy Dude, Gaius Cornelius, BirgitteSB, Misza13, Bota47, Closedmouth, Eigenlambda, SmackBot, Lestrade, Rtc,
Reedy, Wjmallard, Eskimbot, Aberrantgeek, Mhss, Zonko, Colonies Chris, WikiPedant, Pegua, Sommers, Apostolos Margaritis, BIL,
Cybercobra, Omgoleus, Jon Awbrey, Byelf2007, Lambiam, ArglebargleIV, Wvbailey, Loadmaster, Lim Wei Quan, Grumpyyoung-
man01, Mets501, Philippschaumann, Chetvorno, George100, CmdrObot, Sdorrance, Myasuda, Gregbard, Cydebot, Asmeurer, Husond,
CosineKitty, Smartcat, Magioladitis, Spontini, The Real Marauder, Scaro, VolkovBot, Freequark~enwiki, Optigan13, Mray1, SieBot,
Laocon11, Iamthedeus, Soler97, Larek, KathrynLybarger, DeaconJohnFairfax, Mild Bill Hiccup, Stevnewb, Flyingpasta, Addbot, With
goodness in mind, Download, SamatBot, Squandermania, Yinweichen, Luckas-bot, Andrewrp, Erel Segal, Mauro Lanari, Xqbot, Pe-
ter Damian, Red van man, GliderMaven, Tkuvho, LittleWink, RedBot, Gamewizard71, TobeBot, Lotje, Gnothiseautonpantarei, Petrus
Damianus, Magmalex, WikitanvirBot, Faolin42, Wham Bam Rock II, Houiostesmoiras, Tijfo098, Neil P. Quinn, RGGehue, Helpful
Pixie Bot, Ameulen11, DarafshBot, Khazar2, Schwatzwutz, Balljust, 22merlin, Narky Blert, IWillBuildTheRoads and Anonymous: 86
Law of identity Source: https://en.wikipedia.org/wiki/Law_of_identity?oldid=798742280 Contributors: Shii, Stevertigo, Ihcoyc, Charles
Matthews, Fvw, Academic Challenger, Doidimais Brasil, Gwalla, Herbee, TiMike, Oknazevad, Amicuspublilius, Smyth, Brian0918,
Causa sui, Giraedata, PWilkinson, Exomnium, Wtmitchell, Karbinski, Teemu Leisti, BD2412, Qwertyus, RL0919, Petri Krohn, Smack-
Bot, Kentyman, InverseHypercube, Mgreenbe, XxAvalanchexX, Mhss, Bluebot, Nbarth, WikiPedant, Cybercobra, Herb-Sewell, Love-
Monkey, Mladek, Michael Rogers, Byelf2007, Lambiam, Tim bates, Avedomni, Nabeth, Hayats, Mlinck, George100, Gregbard, Cydebot,
Frzl, Scarpy, Thijs!bot, Andyjsmith, Ernalve, WikiSlasher, JAnDbot, Skomorokh, BenB4, Magioladitis, Tito-, DAGwyn, MartinBot, Nin-
jaLore, N4nojohn, J.delanoy, Zurishaddai, Andysoh, Steven J. Anderson, Billinghurst, Fishtron, KoshVorlon, Bballguy7100, Superbeecat,
Francvs, Metaprimer, Hafspajen, GoEThe, Trivialist, Masterpiece2000, MilesAgain, DumZiBoT, Gonzonoir, Gerhardvalentin, Skarebo,
WikHead, Man, Addbot, Willking1979, C6541, Cssiitcic, Jan eissfeldt, Luckas-bot, Yobot, Ptbotgourou, RigdzinPhurba, Paradoxe
allemand, AnomieBOT, Galoubet, Xqbot, GrouchoBot, Peter Damian, Omnipaedista, Xenfreak, PlyrStar93, Jikybebna, Gamewizard71,
Askedonty, Gehue, ClueBot NG, RGGehue, Caute AF, Hofman stern, Der Blaue Wolf, BG19bot, MisterCake, Profaneprimate, Euphile-
tos, Hmainsbot1, Jochen Burghardt, Manofperson, Pietro13, Finnusertop, SJ Defender, Narky Blert, Nkkenbuer, Bender the Bot and
Anonymous: 63
Law of noncontradiction Source: https://en.wikipedia.org/wiki/Law_of_noncontradiction?oldid=800360554 Contributors: Tobias Ho-
evekamp, LC~enwiki, Ryguasu, PhilipMW, Michael Hardy, Wapcaplet, AugPi, Dod1, Evercat, Schneelocke, Charles Matthews, Dys-
prosia, OldNick, Hyacinth, Tea2min, Tagishsimon, Gadum, Alexf, Jonathancamp, Smyth, Mdd, Jumbuck, Velho, Teemu Leisti, BD2412,
Messenger88, Wragge, Chobot, YurikBot, Hairy Dude, Retodon8, Thane, Anomalocaris, Pnrj, Hakeem.gadi, Fustbariclation~enwiki,
SmackBot, YellowMonkey, Jagged 85, AnOddName, Izzynn, Mhss, Daniel J. Forman, WikiPedant, Jon Awbrey, Giorgiomugnaini, Mya-
suda, Gregbard, Farzaneh, Julian Mendez, Ernalve, Serenity id, BenMcLean, TheRepairMan, TAnthony, Magioladitis, Lenticel, OAC,
Heyitspeter, Stormwind77, Wiae, Ageyban, Larek, Metapunk, Zakinstein, Francvs, Innite.magic, SchreiberBike, Paralipsis, DumZi-
BoT, Addbot, With goodness in mind, Peter Damian (old), Unzerlegbarkeit, Yobot, AnomieBOT, Fern 24, FrescoBot, I dream of horses,
Gamewizard71, LilyKitty, Janburse, ClueBot NG, Hyliad, Helpful Pixie Bot, BG19bot, Blue Mist 1, MisterCake, Don of Cherry, Leprof
7272, Michipedian, Kennethaw88, Loraof, PaulBustion87, Mcginnisjd, Jbryniar1 and Anonymous: 44
Laws of Form Source: https://en.wikipedia.org/wiki/Laws_of_Form?oldid=790730653 Contributors: Zundark, Michael Hardy, Qaz,
Charles Matthews, Timwi, Imc, Blainster, Giftlite, Lupin, Supergee, Sigfpe, Ebear422, Creidieki, Sam, CALR, Rich Farmbrough, Leib-
niz, John Vandenberg, PWilkinson, Arthena, Rodw, Suruena, Bluemoose, Waldir, Rjwilmsi, Salix alba, FayssalF, Chobot, Bgwhite,
Hairy Dude, Cyferx, RussBot, IanManka, Gaius Cornelius, Grafen, Trovatore, Mike Dillon, Reyk, SmackBot, Lavintzin, Scdevine,
AustinKnight, Jpvinall, Commander Keane bot, Chris the speller, Autarch, Concerned cynic, Ernestrome, Tompsci, Jon Awbrey, Robosh,
Mets501, Rschwieb, Nehrams2020, Paul Foxworthy, Philip ea, CBM, Gregbard, Chris83, AndrewHowse, Cydebot, M a s, PamD,
Nick Number, Abracadab, Leolaursen, Magioladitis, Pdturney, Ccrummer, EagleFan, David Eppstein, JaGa, Gwern, R'n'B, Kingding,
N4nojohn, Adavidb, Station1, The Tetrast, Nerketur, Sapphic, Newbyguesses, Paradoctor, Gerold Broser, Randy Kryn, Kai-Hendrik,
Dutton Peabody, Hans Adler, SchreiberBike, Ospix, Palnot, XLinkBot, Addbot, CountryBot, Yobot, Denispir, AnomieBOT, Daniel-
gschwartz, Citation bot, LilHelpa, CXCV, J04n, Omnipaedista, FrescoBot, Citation bot 1, Skyerise, EmausBot, The Nut, RANesbit,
Tijfo098, Wcherowi, NULL, Helpful Pixie Bot, BG19bot, PhnomPencil, CitationCleanerBot, Jochen Burghardt, BruceME, Eyesnore,
Dirkbaecker, GreenC bot, Jmcgnh, Bender the Bot, Deacon Vorbis and Anonymous: 49
Lindström quantifier Source: https://en.wikipedia.org/wiki/Lindstr%C3%B6m_quantifier?oldid=787005020 Contributors: Michael
Hardy, Creidieki, Nortexoid, Gene Nygaard, CBM, David Eppstein, VanishedUserABC, Hans Adler, Yobot, Oracle of Truth, AnomieBOT,
John of Reading, Tijfo098, Xlae, Dexbot, Jochen Burghardt and Anonymous: 4
List of Boolean algebra topics Source: https://en.wikipedia.org/wiki/List_of_Boolean_algebra_topics?oldid=744472575 Contributors:
Michael Hardy, Charles Matthews, Michael Snow, Neilc, ZeroOne, Oleg Alexandrov, FlaBot, Mathbot, Rvireday, Scythe33, YurikBot,
Trovatore, StuRat, Mhss, GBL, Fplay, MichaelBillington, Jon Awbrey, Igor Markov, Syrcatbot, Gregbard, Cydebot, Pce3@ij.net, The
Transhumanist, Kruckenberg.1, The Tetrast, WimdeValk, Joaopitacosta, Niceguyedc, Hans Adler, Addbot, Verbal, Sz-iwbot, Gamewiz-
ard71, CaroleHenson, Zeke, the Mad Horrorist, Sam Sailor, Liz, Matthew Kastor and Anonymous: 10
List of logic systems Source: https://en.wikipedia.org/wiki/List_of_logic_systems?oldid=798796070 Contributors: Michael Hardy, Markhurd,
EmilJ, SmackBot, JRSpriggs, Gregbard, Nick Number, R'n'B, Radagast3, Sun Creator, Hugo Herbelin, Omnipaedista, Sawomir Biay,
Tkuvho, UniversumExNihilo, ClueBot NG, BjrnF, Ddupard68, KolbertBot and Anonymous: 4
List of rules of inference Source: https://en.wikipedia.org/wiki/List_of_rules_of_inference?oldid=790779308 Contributors: Toby Bar-
tels, Humanoid, AugPi, Poor Yorick, Jitse Niesen, Bkell, Jiy, ArnoldReinhold, ZeroOne, El C, Qwertyus, Mathbot, JMRyan, Shadro,
Mhss, Colonies Chris, Sidmow, Syrcatbot, Stwalkerster, JRSpriggs, CBM, Gregbard, Julian Mendez, Fireice, Magioladitis, Jonathanzung,
Rezarob, Anonymous Dissident, Kumioko (renamed), Excirial, Marc van Leeuwen, Jarble, Yobot, LilHelpa, Mark Renier, Momergil,
JamesMazur22, Innity ive, ClueBot NG, Mogism, TheKing44, YiFeiBot, Golopotw, Wikishelt, BjrnF, Paul Er Coranjeh, Deacon
Vorbis and Anonymous: 34
List of valid argument forms Source: https://en.wikipedia.org/wiki/List_of_valid_argument_forms?oldid=780297126 Contributors:
Wavelength, Josh3580, Gregbard, McSly, Flyer22 Reborn, Josve05a, Pdyoung, BattyBot, AntiCompositeNumber, Hollth, BjrnF, Qzd
and Anonymous: 21
Literal (mathematical logic) Source: https://en.wikipedia.org/wiki/Literal_(mathematical_logic)?oldid=695750754 Contributors: Obradovic
Goran, Kbdank71, Mhss, Tsca.bot, CBM, Simeon, Gregbard, Cydebot, Thijs!bot, Egrin, R'n'B, Mikhail Dvorkin, Matj Grabovsk,
Yobot, Termininja, Mikolasj, Krassotkin, Lagenar, Tijfo098, Tiago de Jesus Neves and Anonymous: 8
Logic alphabet Source: https://en.wikipedia.org/wiki/Logic_alphabet?oldid=787895463 Contributors: Michael Hardy, Topbanana, Giftlite,
Rich Farmbrough, Kenyon, Oleg Alexandrov, BD2412, Cactus.man, Trovatore, EAderhold, Gregbard, PamD, Nick Number, Danger,
Smerdis, R'n'B, Romicron, Algotr, Slysplace, ImageRemovalBot, Alksentrs, Watchduck, ResidueOfDesign, Saeed.Veradi, WilliamB-
Hall, Ettrig, DrilBot, LittleWink, Gamewizard71, Masssly, Bender the Bot and Anonymous: 10
Logic optimization Source: https://en.wikipedia.org/wiki/Logic_optimization?oldid=800573937 Contributors: Zundark, Michael Hardy,
Abdull, Simon Fenney, Diego Moya, Wtshymanski, SmackBot, Sct72, Cydebot, MarshBot, Hitanshu D, NovaSTL, WimdeValk, Dekart,
Delaszk, AnomieBOT, Quebec99, Klbrain, Tijfo098, Matthiaspaul, Masssly, InternetArchiveBot, GreenC bot, PrimeBOT and Anony-
mous: 4
Logic redundancy Source: https://en.wikipedia.org/wiki/Logic_redundancy?oldid=645028636 Contributors: Michael Hardy, Nurg, Giftlite,
Cburnett, SmackBot, Chris the speller, Mart22n, WimdeValk, TeamX, The Thing That Should Not Be, Addbot, Yobot and Anonymous:
5
Logical biconditional Source: https://en.wikipedia.org/wiki/Logical_biconditional?oldid=789976828 Contributors: Patrick, TakuyaMu-
rata, BAxelrod, Dysprosia, Snobot, Giftlite, DavidCary, Recentchanges, Lethe, Andycjp, Discospinster, Paul August, Elwikipedista~enwiki,
Oleg Alexandrov, Velho, MattGiuca, BD2412, Kbdank71, Jittat~enwiki, Gaius Cornelius, Arthur Rubin, SmackBot, InverseHypercube,
Melchoir, WookieInHeat, GoOdCoNtEnT, Bluebot, Qphilo, Wen D House, Radagast83, Jon Awbrey, Lambiam, Bjankuloski06en~enwiki,
Rainwarrior, Mets501, Adambiswanger1, CBM, Gregbard, Cydebot, Julian Mendez, Shirulashem, Letranova, Infovarius, Commons-
Delinker, J.delanoy, Freekh, Anonymous Dissident, Wolfrock, Graymornings, Tiny plastic Grey Knight, Francvs, DEMcAdams, ClueBot,
Watchduck, Alejandrocaro35, MilesAgain, 1ForTheMoney, Addbot, Jarble, Meisam, Yobot, Worldbruce, TaBOT-zerem, AnomieBOT,
Machine Elf 1735, 777sms, Mijelliott, Kuzmaka, JSquish, Chharvey, Wayne Slam, DASHBotAV, Chester Markel, Masssly, MerlIwBot,
Pine, Hanlon1755, Jujutsuan, Bender the Bot and Anonymous: 39
Logical conjunction Source: https://en.wikipedia.org/wiki/Logical_conjunction?oldid=793188756 Contributors: AxelBoldt, Toby Bar-
tels, Enchanter, B4hand, Mintguy, Stevertigo, Chas zzz brown, Michael Hardy, EddEdmondson, Justin Johnson, TakuyaMurata, AugPi,
Poor Yorick, Andres, Dysprosia, Jitse Niesen, Fredrik, Voodoo~enwiki, Goodralph, Snobot, Giftlite, Oberiko, Lethe, Yekrats, Jason
Quinn, Macrakis, Brockert, Leonard Vertighel, ALE!, Wikimol, Rdsmith4, Poccil, Richie, RuiMalheiro, Cfailde, SocratesJedi, Paul
August, Eric Kvaalen, Emvee~enwiki, Rzelnik, Ling Kah Jai, Oleg Alexandrov, Mindmatrix, Bluemoose, Btyner, LimoWreck, Gra-
ham87, BD2412, Kbdank71, VKokielov, Jameshsher, Fresheneesz, Chobot, Hede2000, Dijxtra, Trovatore, Mditto, EAderhold, Van-
ished user 34958, JoanneB, Tom Morris, Melchoir, Bluebot, Cybercobra, Richard001, Jon Awbrey, Lambiam, Clark Mobarry, TastyPou-
tine, JoshuaF, Happy-melon, Daniel5127, Gregbard, Cydebot, Thijs!bot, JAnDbot, Slacka123, VoABot II, Vujke, Gwern, Oren0, San-
tiago Saint James, Crisneda2000, R'n'B, CommonsDelinker, AdrienChen, On This Continent, GaborLajos, Policron, Fylwind, Enix150,
Trevor Goodyear, Hotfeba, TXiKiBoT, Geometry guy, Wiae, Wolfrock, SieBot, WarrenPlatts, Oxymoron83, Majorbrainy, Callowschool-
boy, Francvs, Classicalecon, DEMcAdams, Niceguyedc, Watchduck, Hans Adler, Lab-oratory, Addbot, MrOllie, CarsracBot, Meisam,
Legobot, Luckas-bot, Yobot, BG SpaceAce, , No names available, MastiBot, H.ehsaan, Magmalex, EmausBot, Mjaked,
2andrewknyazev, Frietjes, Masssly, Scwarebang, Golopotw, Interapple and Anonymous: 93
Logical connective Source: https://en.wikipedia.org/wiki/Logical_connective?oldid=799269299 Contributors: AxelBoldt, Rmhermen,
Christian List, Stevertigo, Michael Hardy, Dominus, Justin Johnson, TakuyaMurata, Ahoerstemeier, AugPi, Andres, Dysprosia, Hy-
acinth, Robbot, Sbisolo, Ojigiri~enwiki, Filemon, Snobot, Giftlite, DavidCary, Risk one, Siroxo, Boothinator, Wiki Wikardo, Kaldari,
Sam Hocevar, Indolering, Abdull, R, Jiy, Guanabot, Paul August, Bender235, ZeroOne, Elwikipedista~enwiki, Charm, Chalst, Shanes,
EmilJ, Spoon!, Kappa, SurrealWarrior, Suruena, Bookandcoee, Oleg Alexandrov, Joriki, Mindmatrix, Graham87, BD2412, Kbdank71,
Hiding, Fresheneesz, Chobot, YurikBot, RussBot, Gaius Cornelius, Rick Norwood, Trovatore, Cullinane, Arthur Rubin, Cedar101,
Masquatto, Nahaj, SmackBot, Incnis Mrsi, InvictaHOG, JRSP, Chris the speller, Bluebot, Tolmaion, Jon Awbrey, Lambiam, Nishkid64,
Bjankuloski06en~enwiki, JHunterJ, RichardF, Iridescent, JRSpriggs, CRGreathouse, CBM, Gregbard, Cydebot, Julian Mendez, Dumb-
BOT, Letranova, Jdm64, Danger, DuncanHill, TAnthony, David Eppstein, Nleclerc~enwiki, Drewmutt, R'n'B, Christian424, GoatGuy,
Darkvix, Arcanedude91, Policron, VolkovBot, Je G., TXiKiBoT, Anonymous Dissident, Philogo, Dmcq, Sergio01, SieBot, Gerakibot,
Yintan, Skippydo, Huku-chan, Denisarona, Francvs, ClueBot, Justin W Smith, Ktr101, Watchduck, ZuluPapa5, Hans Adler, MilesAgain,
Hugo Herbelin, Djk3, Johnuniq, Addbot, Mortense, Melab-1, Download, SpBot, Peti610botH, Loupeter, Yobot, Amirobot, AnomieBOT,
Jim1138, Racconish, ArthurBot, Xqbot, El Caro, Sophivorus, Entropeter, FrescoBot, Citation bot 1, RandomDSdevel, Pinethicket, Tim-
boat, Der Elbenkoenig, Dude1818, Orenburg1, Hriber, Greenfernglade, Ipersite, BAICAN XXX, ,, Seabuoy, Mentibot, Tijfo098,
Mhiji, ClueBot NG, Thebombzen, Matthiaspaul, Masssly, Helpful Pixie Bot, Owarihajimari, Weaktofu, Hanlon1755, Fuebar, Tom-
mor7835, Everymorning, Star767, Dai Pritchard, Sk8rcoolkat6969, Joserbala, ExperiencedArticleFixer, Student342, Mikeharwitz, Hy-
perbolick and Anonymous: 96
Logical consequence Source: https://en.wikipedia.org/wiki/Logical_consequence?oldid=799557658 Contributors: The Anome, Anders
Feder, Hyacinth, Ancheta Wis, Giftlite, Mani1, Bender235, Chalst, Eric Kvaalen, BDD, Velho, Dionyziz, Macaddct1984, BD2412, Kb-
dank71, Koavf, Mathbot, Algebraist, Siddhant, Borgx, Alynna Kasmira, Arthur Rubin, SmackBot, Incnis Mrsi, Kintetsubualo, Bluebot,
Javalenok, DMacks, Dbtfz, Grumpyyoungman01, Slakr, Inquisitus, KyleP, Igoldste, CBM, Gregbard, Cydebot, Gimmetrow, Thijs!bot,
Luna Santin, Albany NY, Magioladitis, Trusilver, Maurice Carbonaro, VolkovBot, Philogo, Jamelan, Graymornings, Wemlands, Cnilep,
Botev, Aplex, ClueBot, Tomas e, Sps00789, Panyd, Hans Adler, Good Olfactory, Iranway, Addbot, Niriel, Anypodetos, AnomieBOT,
RJGray, Gilo1969, RibotBOT, Minister Alkabaz, Machine Elf 1735, I dream of horses, Toolnut, MoreNet, Adam.a.a.golding, Dcirovic,
PBS-AWB, Staszek Lem, SpikeballUnion, Tijfo098, Tziemer991, ClueBot NG, Wbm1058, Hanlon1755, Hugopako, , Aubrey-
bardo, TuCove, X1X2X3, Chas. Caltrop and Anonymous: 37
Logical disjunction Source: https://en.wikipedia.org/wiki/Logical_disjunction?oldid=781949804 Contributors: AxelBoldt, Bryan Derk-
sen, Tarquin, Toby Bartels, B4hand, Mintguy, Patrick, D, Michael Hardy, Pit~enwiki, Stephen C. Carlson, Ixfd64, Justin Johnson, Takuya-
Murata, Poor Yorick, DesertSteve, Dysprosia, Colin Marquardt, Robbot, Kowey, Voodoo~enwiki, Tea2min, Giftlite, Recentchanges,
Lethe, Proslaes, Macrakis, Espetkov, Siefca, Bact, Poccil, Guanabot, SocratesJedi, Paul August, ZeroOne, Jnestorius, Daemondust,
Blinken, Obradovic Goran, Hesperian, Emvee~enwiki, Ling Kah Jai, Oleg Alexandrov, Thryduulf, Mindmatrix, Kzollman, Bluemoose,
Mandarax, LimoWreck, BD2412, Kbdank71, Xiao Li, FlaBot, Gringo300, Mathbot, Fresheneesz, Chobot, YurikBot, Gaius Cornelius,
Dijxtra, Trovatore, Tony1, Mditto, Acetic Acid, Vanished user 34958, Nahaj, Katieh5584, Tom Morris, Melchoir, BiT, Bluebot, Kurykh,
OrangeDog, Cybercobra, Charles Merriam, Jon Awbrey, EdC~enwiki, Doc Daneeka, RekishiEJ, CBM, Gregbard, Cydebot, Julian
Mendez, PamD, Thijs!bot, Wikid77, Moulder, Nick Number, JAnDbot, Arachnocapitalist, Slacka123, Laymanal, Magioladitis, Tony
Winter, David65536, Santiago Saint James, CommonsDelinker, On This Continent, Supuhstar, Policron, Althepal, Enix150, Ajfweb,
VolkovBot, TXiKiBoT, Gwib, Ontoraul, Bbukh, World.suman, SieBot, Soler97, Ctxppc, AlanUS, Anyeverybody, Francvs, Classicale-
con, ClueBot, C xong, Rumping, Watchduck, Hans Adler, Dthomsen8, Wernhervonbraun, MrVanBot, CarsracBot, AndersBot, FiriBot,
Tripsone, Meisam, Legobot, Luckas-bot, Maxdamantus, Charlatino, AnomieBOT, The High Fin Sperm Whale, MauritsBot, Xqbot,
RadiX, FrescoBot, RedBot, Gamewizard71, Dinamik-bot, EmausBot, Matthewbeckler, PBS-AWB, 2andrewknyazev, Pengkeu, ClueBot
NG, Masssly, Scwarebang, BG19bot, PhnomPencil, CarrieVS, Fuebar, Jochen Burghardt, Lemnaminor, JMurphy73 and Anonymous: 93
Logical equality Source: https://en.wikipedia.org/wiki/Logical_equality?oldid=770314203 Contributors: Toby Bartels, Patrick, Ixfd64,
Lethe, Peruvianllama, Paul August, AzaToth, Oleg Alexandrov, Mindmatrix, BD2412, Canderson7, FlaBot, YurikBot, Daverocks, Dijx-
tra, Trovatore, Arthur Rubin, SmackBot, Melchoir, Bluebot, Jerome Charles Potts, Jjbeard~enwiki, Radagast83, Jon Awbrey, Gregbard,
Cydebot, Julian Mendez, Letranova, Thijs!bot, Escarbot, David Eppstein, Infovarius, MartinBot, Santiago Saint James, R'n'B, SieBot, Bot-
Multichill, Aeoza, Sitush, Francvs, Rumping, Hans Adler, Addbot, AnomieBOT, 2ndjpeg, Gamewizard71, Kuzmaka, Mikhail Ryazanov,
Wcherowi, Masssly, MerlIwBot, Faus, Trinitresque, MikeShafe, Bender the Bot and Anonymous: 22
Logical matrix Source: https://en.wikipedia.org/wiki/Logical_matrix?oldid=795046783 Contributors: AugPi, Carlossuarez46, Paul Au-
gust, El C, Oleg Alexandrov, Jerey O. Gustafson, BD2412, RxS, Rjwilmsi, DoubleBlue, Nihiltres, TeaDrinker, BOT-Superzerocool,
Wknight94, Closedmouth, SmackBot, InverseHypercube, C.Fred, Aksi great, Octahedron80, MaxSem, Jon Awbrey, Lambiam, JzG,
Slakr, Mets501, Happy-melon, CBM, , Jheiv, Hut 8.5, Brusegadi, Catgut, David Eppstein, Brigit Zilwaukee,
Yolanda Zilwaukee, Policron, Cerberus0, TXiKiBoT, Seb26, ClueBot, Cli, Blanchardb, RABBU, REBBU, DEBBU, DABBU, BAB-
BU, RABBU, Wolf of the Steppes, REBBU, Doubtentry, DEBBU, Education Is The Basis Of Law And Order, -Midorihana-,
Bare In Mind, Preveiling Opinion Of Dominant Opinion Group, Buchanans Navy Sec, Overstay, Marsboat, Unco Guid, Poke Salat An-
nie, Flower Mound Belle, Mrs. Lovetts Meat Puppets, Addbot, Breggen, Floquenbeam, Erik9bot, FrescoBot, Kimmy007, EmausBot,
Quondum, Tijfo098, Masssly, Deyvid Setti, Helpful Pixie Bot, Jochen Burghardt, Suelru, Zeiimer, Pyrrhonist05 and Anonymous: 15
Logical NOR Source: https://en.wikipedia.org/wiki/Logical_NOR?oldid=776409800 Contributors: Dreamyshade, Bryan Derksen, Stev-
ertigo, Edward, Ixfd64, Hermeneus, Jallan, Colin Marquardt, Furrykef, Grendelkhan, Cameronc, SirPeebles, Jerzy, DocWatson42, Gub-
bubu, Kaldari, Alex Cohn, Urhixidur, MementoVivere, ArnoldReinhold, SocratesJedi, Paul August, Sietse Snel, EmilJ, Nortexoid, Io-
lar~enwiki, Emvee~enwiki, Bookandcoee, Mindmatrix, BD2412, Qwertyus, Kbdank71, Maxim Razin, Chobot, YurikBot, RussBot,
Dijxtra, Trovatore, JMRyan, Krymzon, Tyomitch, Melchoir, Stimpy, Ohnoitsjamie, Mhss, Chris the speller, Bluebot, Nbarth, Ccero,
UU, Jon Awbrey, Bjankuloski06en~enwiki, Loadmaster, Pukkie, CBM, Gregbard, Cydebot, Thijs!bot, Hut 8.5, Vujke, Santiago Saint
James, Policron, The Tetrast, IronMaidenRocks, BotKung, Jean-Frdric, Dogah, WarrenPlatts, Prestonmag, Francvs, Adrianwn, Watch-
duck, Hans Adler, Addbot, Nesarose, MrOllie, A:-)Brunu, Meisam, GorgeUbuasha, Nallimbot, Queen of the Dishpan, AnomieBOT,
RokerHRO, Jkbw, RibotBOT, SassoBot, Branzillo, Lantern Leatherhead, Dega180, Gamewizard71, Igotta Lemma, Igor Yalovecky, Web-
ber Jocky, Metadjinn~enwiki, Pangur Ban My Cat, Iserdo, Tijfo098, Masssly, Helpful Pixie Bot, GKFX, 7804j, YiFeiBot, BU Rob13,
Bender the Bot, Yagirlyagirl and Anonymous: 49
Logical truth Source: https://en.wikipedia.org/wiki/Logical_truth?oldid=799943081 Contributors: Dcljr, Markhurd, Kjell Andr, Mcapdev-
ila, Jason Quinn, Dionyziz, Marudubshinki, BD2412, Al Silonov, Incnis Mrsi, Chris the speller, Dbtfz, CRGreathouse, Gregbard, Cydebot,
Keith D, Francvs, ClueBot, Ocer781, Addbot, Xqbot, Gilo1969, Omnipaedista, Onjacktallcuca, PBS-AWB, Connoriscool123, Inyesta,
Mikhail Ryazanov, ClueBot NG, Marek Mazurkiewicz, Masssly, Dr Lindsay B Yeates, McLean.Alex, Little Slislo, Seadowns, Laiba951,
SteveNewcomb and Anonymous: 17
Lupanov representation Source: https://en.wikipedia.org/wiki/Lupanov_representation?oldid=692960339 Contributors: Michael Hardy,
Oleg Alexandrov, Welsh, A3nm, AnomieBOT, Alvin Seville, RobinK, Maalosh and Anonymous: 1
Maharam algebra Source: https://en.wikipedia.org/wiki/Maharam_algebra?oldid=745941092 Contributors: Finlay McWalter, R.e.b.,
David Eppstein, Deltahedron and Anonymous: 1
Majority function Source: https://en.wikipedia.org/wiki/Majority_function?oldid=742283469 Contributors: Tobias Hoevekamp, Jz-
cool, Michael Hardy, Ckape, Robbot, DavidCary, ABCD, Bluebot, Radagast83, Lambiam, J. Finkelstein, Gregbard, Pascal.Tesson, Al-
phachimpbot, Magioladitis, Vanish2, David Eppstein, Ilyaraz, Alexei Kopylov, TFCforever, DOI bot, Balabiot, Legobot, Luckas-bot,
Yobot, Rubinbot, Citation bot, Citation bot 1, , Jesse V., Monkbot, EDickenson and Anonymous: 7
Marquand diagram Source: https://en.wikipedia.org/wiki/Karnaugh_map?oldid=800150074 Contributors: Bryan Derksen, Zundark,
LA2, PierreAbbat, Fubar Obfusco, Heron, BL~enwiki, Michael Hardy, Chan siuman, Justin Johnson, Seav, Chadloder, Iulianu, Nveitch,
Bogdangiusca, GRAHAMUK, Jitse Niesen, Fuzheado, Colin Marquardt, Furrykef, Omegatron, Vaceituno, Ckape, Robbot, Naddy, Tex-
ture, Paul Murray, Ancheta Wis, Giftlite, DocWatson42, SamB, Bovlb, Macrakis, Mobius, Goat-see, Ktvoelker, Grunt, Perey, Dis-
cospinster, Caesar, Dcarter, MeltBanana, Murtasa, ZeroOne, Plugwash, Nigelj, Unstable-Element, Obradovic Goran, Pearle, Mdd, Phy-
zome, Jumbuck, Fritzpoll, Snowolf, Wtshymanski, Cburnett, Bonzo, Kenyon, Acerperi, Wikiklrsc, Dionyziz, Eyreland, Marudubshinki,
Jake Wartenberg, MarSch, Mike Segal, Oblivious, Ligulem, Ademkader, Mathbot, Winhunter, Fresheneesz, Tardis, LeCire~enwiki,
Bgwhite, YurikBot, RobotE, RussBot, SpuriousQ, B-Con, Anomie, Arichnad, Trovatore, RolandYoung, RazorICE, RUL3R, Rohan-
mittal, Cedar101, Tim Parenti, Gulliveig, HereToHelp, RG2, Sinan Taifour, SmackBot, InverseHypercube, Thunder Wolf, Edgar181,
Gilliam, Bluebot, Thumperward, Villarinho, Moonshiner, DHN-bot~enwiki, Locriani, Sct72, HLwiKi, Michael.Pohoreski, Hex4def6,
SashatoBot, Wvbailey, MagnaMopus, Freewol, Vobrcz, Jmgonzalez, Augustojd, CRGreathouse, CBM, Jokes Free4Me, Reywas92, Czar
Kirk, Tkynerd, Thijs!bot, Headbomb, JustAGal, Jonnie5, CharlotteWebb, RazoreRobin, Leuko, Ndyguy, VoABot II, Swpb, Gantoniou,
Carrige, R'n'B, Yim~enwiki, JoeFloyd, Aervanath, FreddieRic, KylieTastic, Sigra~enwiki, TXiKiBoT, Cyberjoac, Cremepu222, Mar-
tinPackerIBM, Kelum.kosala, Spinningspark, FxBit, Pitel, Serprex, SieBot, VVVBot, Aeoza, IdreamofJeanie, OKBot, Svick, Rrfwiki,
WimdeValk, Justin W Smith, Rjd0060, Unbuttered Parsnip, Czarko, Dsamarin, Watchduck, Sps00789, Hans Adler, Gciriani, B.Zsolt,
Jmanigold, Tullywinters, ChyranandChloe, Avoided, Cmr08, Writer130, Addbot, DOI bot, Loafers, Delaszk, Dmenet, AgadaUrbanit,
Luckas-bot, Kartano, Hhedeshian, SwisterTwister, Mhayes46, AnomieBOT, Jim1138, Utility Knife, Citation bot, Dannamite, Arthur-
Bot, Pnettle, Miym, GrouchoBot, TunLuek, Abed pacino, Macjohn2, BillNace, Amplitude101, Pdebonte, Biker Biker, Pinethicket,
RedBot, The gulyan89, SpaceFlight89, Trappist the monk, Vrenator, Katragadda465, RjwilmsiBot, Alessandro.goulartt, Zap Rows-
dower, Norlik, Njoutram, Rocketrod1960, Voomoo, ClueBot NG, Bukwoy, Matthiaspaul, AHA.SOLAX, Frietjes, Imyourfoot, Widr,
Danim, Jk2q3jrklse, Spudpuppy, Nbeverly, Ceklock, Giorgos.antoniou, Icigic, CARPON, Usmanraza9, Wolfmanx122, Shidh, Elec-
tricmun11, EuroCarGT, Yaxinr, Mrphious, Jochen Burghardt, Mdcoope3, TheEpTic, Akosibrixy, Microchirp, Cheater00, Lennerton,
GreenWeasel11, Loraof, Scipsycho, BILL ABK, Acayl, ShigaIntern, InternetArchiveBot, GreenC bot, Gerdhuebner, Abduw09, Dhoni
barath, NoahB123, Ngonz424, Arun8277 and Anonymous: 279
Material conditional Source: https://en.wikipedia.org/wiki/Material_conditional?oldid=789802675 Contributors: William Avery, Dcljr,
AugPi, Charles Matthews, Dcoetzee, Doradus, Cholling, Giftlite, Jason Quinn, Nayuki, TedPavlic, Elwikipedista~enwiki, Nortexoid,
Vesal, Eric Kvaalen, BD2412, Kbdank71, Martin von Gagern, Joel D. Reid, Nihiltres, Fresheneesz, Vonkje, NevilleDNZ, Bgwhite,
RussBot, KSchutte, NawlinWiki, Trovatore, Avraham, Closedmouth, Arthur Rubin, SyntaxPC, Fctk~enwiki, SmackBot, Amcbride, In-
cnis Mrsi, Pokipsy76, BiT, Mhss, Jaymay, Tisthammerw, Sholto Maud, Wen D House, Cybercobra, Jon Awbrey, Oceanofperceptions,
Byelf2007, Grumpyyoungman01, Clark Mobarry, Beefyt, Rory O'Kane, Dansiman, Dreftymac, Eassin, JRSpriggs, Gregbard, FilipeS,
Cydebot, Julian Mendez, Thijs!bot, Egrin, Jojan, Escarbot, Applemeister, WinBot, Salgueiro~enwiki, JAnDbot, Olaf, Alastair Haines,
Arno Matthias, JaGa, Santiago Saint James, Pharaoh of the Wizards, Pyrospirit, SFinside, Anonymous Dissident, The Tetrast, Cnilep,
Radagast3, Newbyguesses, Lightbreather, Paradoctor, Iamthedeus, Soler97, BrightR, Francvs, Classicalecon, Josang, Ruy thompson,
Watchduck, Alejandrocaro35, Hans Adler, Djk3, Marc van Leeuwen, Tbsdy lives, Addbot, Melab-1, Fyrael, Morriswa, SpellingBot,
CarsracBot, Chzz, Jarble, Meisam, Luckas-bot, AnomieBOT, Erel Segal, Sonia, Pnq, Bearnfder, FrescoBot, Greyfriars, Machine Elf
1735, RedBot, MoreNet, Beyond My Ken, John of Reading, PBS-AWB, Hgetnet, Staszek Lem, Hibou57, ClueBot NG, Movses-bot,
Jiri 1984, Masssly, Pacerier, Dooooot, Noobnubcakes, Hanlon1755, Leif Czerny, CarrieVS, Jochen Burghardt, Lukekfreeman, Lingzhi,
NickDragonRyder, Indomitavis, Rathkirani, AnotherPseudonym, Xerula, Matthew Kastor, Mathematical Truth, Loraof, David de Wit,
Uncle Roy and Anonymous: 80
Material equivalence Source: https://en.wikipedia.org/wiki/If_and_only_if?oldid=794464701 Contributors: Damian Yerrick, Axel-
Boldt, Matthew Woodcraft, Vicki Rosenzweig, Zundark, Tarquin, Larry_Sanger, Toby Bartels, Ark~enwiki, Camembert, Stevertigo,
Patrick, Chas zzz brown, Michael Hardy, Wshun, DopeshJustin, Dante Alighieri, Dominus, SGBailey, Wwwwolf, Delirium, Geof-
frey~enwiki, Stevenj, Kingturtle, UserGoogol, Andres, Evercat, Jacquerie27, Adam Conover, Revolver, Wikiborg, Dysprosia, Itai, Ledge,
McKay, Robbot, Psychonaut, Henrygb, Ruakh, Diberri, Tea2min, Adam78, Enochlau, Giftlite, DavidCary, var Arnfjr Bjarmason,
Mellum, Chinasaur, Jason Quinn, Taak, Superfrank~enwiki, Wmahan, LiDaobing, Mvc, Neutrality, Urhixidur, Ropers, Jewbacca, Karl
Dickman, PhotoBox, Brianjd, Paul August, Sunborn, Elwikipedista~enwiki, Chalst, Edward Z. Yang, Pearle, Ekevu, IgorekSF, Msh210,
Interiot, ABCD, Stillnotelf, Suruena, Voltagedrop, Rhialto, Forderud, Oleg Alexandrov, Joriki, Velho, Woohookitty, Mindmatrix, Ruud
Koot, Ryan Reich, Adjam, Pope on a Rope, Eyu100, R.e.b., FlaBot, Mathbot, Rbonvall, Glenn L, BMF81, Bgwhite, RussBot, Post-
glock, Voidxor, El Pollo Diablo, Gadget850, Jkelly, Danielpi, Lt-wiki-bot, TheMadBaron, Nzzl, Arthur Rubin, Netrapt, SmackBot, Ttzz,
Gloin~enwiki, Incnis Mrsi, InverseHypercube, Melchoir, GoOdCoNtEnT, Thumperward, Javalenok, Jonatan Swift, Peterwhy, Acdx,
Shirifan, Evildictaitor, Abolen, Rainwarrior, Dicklyon, Mets501, Yuide, Shoeofdeath, CRGreathouse, CBM, Joshwa, Picaroon, Greg-
bard, Eu.stefan, Letranova, Thijs!bot, Egrin, Schneau, Jojan, Davkal, AgentPeppermint, Urdutext, Holyknight33, Escarbot, WinBot,
Serpents Choice, Timlevin, Singularity, David Eppstein, Msknathan, MartinBot, AstroHurricane001, Vanished user 47736712, Ken-
neth M Burke, Alsosaid1987, DorganBot, Bsroiaadn, TXiKiBoT, Anonymous Dissident, Abyaly, Ichtaca, Mouse is back, Rjgodoy, Trip-
pingTroubadour, KjellG, AlleborgoBot, Lillingen, SieBot, Iamthedeus, This, that and the other, Smaug123, Skippydo, Warman06~enwiki,
Tiny plastic Grey Knight, Francvs, Minehava, ClueBot, Surfeited, BANZ111, Master11218, WestwoodMatt, Excirial, He7d3r, Pfhorrest,
Kmddmk, Addbot, Ronhjones, Wikimichael22, Lightbot, Jarble, Yobot, Bryan.burgers, AnomieBOT, Nejatarn, Ciphers, Quintus314,
, WissensDrster, Ex13, Rapsar, Mikespedia, Lotje, Igor Yalovecky, EmausBot, Ruxkor, Chricho, Sugarfoot1001, Ti-
jfo098, FeatherPluma, ClueBot NG, Wcherowi, Widr, MerlIwBot, Helpful Pixie Bot, Solomon7968, Chmarkine, CarrieVS, Me, Myself,
and I are Here, Ekips39, Epicgenius, Dr Lindsay B Yeates, Seppi333, Matthew Kastor, Cpt Wise, Loraof, Bender the Bot, Imminent77,
Halo0520, Here2help, Magic links bot and Anonymous: 147
Material implication (rule of inference) Source: https://en.wikipedia.org/wiki/Material_implication_(rule_of_inference)?oldid=789423739
Contributors: Toby Bartels, Jason Quinn, Nihiltres, RussBot, KSchutte, Yoninah, Arthur Rubin, Incnis Mrsi, Gregbard, Alejandrocaro35,
Quondum, Helpful Pixie Bot, Dooooot, CarrieVS, Jochen Burghardt, David9550, Simplexity22 and Anonymous: 12
Material nonimplication Source: https://en.wikipedia.org/wiki/Material_nonimplication?oldid=787658856 Contributors: Kaldari, BD2412,
Kbdank71, Chris Capoccia, MacMog, Cedar101, SmackBot, Cybercobra, Bjankuloski06en~enwiki, Gregbard, Cydebot, David Eppstein,
Maurice Carbonaro, Anzurio, Francvs, Classicalecon, BANZ111, Alex836, Watchduck, Addbot, Meisam, Luckas-bot, Yobot, FrescoBot,
Olexa Riznyk, Jesse V., EmausBot, Jontturi, Matthew Kastor, Casamajor and Anonymous: 7
Mereology Source: https://en.wikipedia.org/wiki/Mereology?oldid=797172364 Contributors: Zundark, Michael Hardy, Kku, Docu,
Charles Matthews, Dysprosia, WhisperToMe, Pedant17, Merovingian, Wile E. Heresiarch, Nagelfar, Rich Farmbrough, Liberatus, EmilJ,
Orbst, Pspealman, Andreala, Joriki, Woohookitty, Linas, RHaworth, Kzollman, Noetica, Turnstep, BD2412, NonNobis~enwiki, Sderose,
CiaPan, Chobot, DanMS, Archelon, Trovatore, Arthur Rubin, Reyk, Npeters22, Tropylium, SmackBot, Thorseth, Bazonka, Clconway,
Colonies Chris, Lpgeen, Drphilharmonic, Kuru, Khazar, Atoll, Dr Greg, Mets501, Vaughan Pratt, Sdorrance, Gregbard, Cydebot,
Blaisorblade, EdJohnston, Dawnseeker2000, Hamaryns, Gabriel Kielland, David Eppstein, R'n'B, Buttons to Push Buttons, AstroHur-
ricane001, Inimino, Nikk50, RickardV, Biglovinb, Ross Fraser, Samlyn.josfyn, Funandtrvl, Antoni Barau, Wordsmith, BookLubber,
Uncletravelinmatt, SieBot, WereSpielChequers, Aristolaos, Linforest, Skandha101, Anapazapa, Niceguyedc, Pointillist, Keithbowden,
Dsmntl, Brews ohare, Arjayay, SchreiberBike, Aitias, Palnot, Rror, Bert Carpenter, Krifka, Addbot, Tassedethe, Yobot, AnomieBOT,
Materialscientist, Chikuku, Omnipaedista, The Wiki ghost, FrescoBot, T of Locri, Machine Elf 1735, Jonesey95, MastiBot, RestChem,
Arborrhythms, EmausBot, John of Reading, Set theorist, ZroBot, ChuispastonBot, ClueBot NG, David.schoonover, Joel B. Lewis, Faus,
Curb Chain, BG19bot, Blue Mist 1, Brad7777, Qetuth, Wikilew45, CarrieVS, Jochen Burghardt, Me, Myself, and I are Here, Bio-
geographist, Tango303, Ostomachion, Norbornene, Kk, Tlendriss, InternetArchiveBot, Taschu269, Greaber, Deacon Vorbis and Anony-
mous: 60
Modal algebra Source: https://en.wikipedia.org/wiki/Modal_algebra?oldid=787289589 Contributors: EmilJ, Mhss, Addbot and Prime-
BOT
Modal operator Source: https://en.wikipedia.org/wiki/Modal_operator?oldid=739464088 Contributors: Markhurd, Hyacinth, Alten-
mann, Filemon, Velho, BD2412, SmackBot, BenetD, JRSpriggs, CBM, Gregbard, Nick Number, Cic, Gwern, R'n'B, Erhasalz, Tanl13,
Mild Bill Hiccup, TheOldJacobite, Addbot, EmbraceParadox, Jim1138, PhnomPencil, Sydactive and Anonymous: 2
Modus non excipiens Source: https://en.wikipedia.org/wiki/Modus_non_excipiens?oldid=718509607 Contributors: Michael Hardy,
Gregbard and Josve05a
Modus ponendo tollens Source: https://en.wikipedia.org/wiki/Modus_ponendo_tollens?oldid=790738218 Contributors: Evercat, Mike
Rosoft, Macai, Amsoman, FlaBot, Carolynparrishfan, Mikeblas, Arthur Rubin, Srnec, Gobonobo, Jim.belk, Gregbard, HenryHRich,
Anarchia, Addbot, Aaagmnr, Rmtzr, Dooooot, Wvenialbo, Deacon Vorbis and Anonymous: 18
Modus ponens Source: https://en.wikipedia.org/wiki/Modus_ponens?oldid=800300154 Contributors: AxelBoldt, Zundark, The Anome,
Tarquin, Larry Sanger, Andre Engels, Rootbeer, Ryguasu, Frecklefoot, Michael Hardy, Voidvector, Liftarn, J'raxis, AugPi, BAxelrod,
Charles Matthews, Dysprosia, Jitse Niesen, Andyfugard, Ruakh, Giftlite, Jrquinlisk, Leonard G., 20040302, Siroxo, Matt Crypto, Neilc,
Toytoy, Antandrus, Yayay, Sword~enwiki, Jiy, Rich Farmbrough, Elwikipedista~enwiki, Jonon, Nortexoid, Obradovic Goran, Jumbuck,
Marabean, M7, Trylks, Dandv, Ruud Koot, Waldir, Marudubshinki, Graham87, Rjwilmsi, Notapipe, Alexb@cut-the-knot.com, RexNL,
Spencerk, WhyBeNormal, YurikBot, Vecter, KSmrq, Shawn81, Schoen, Voidxor, Noam~enwiki, Hakeem.gadi, Pacogo7, Tharos, Otto
ter Haar, Incnis Mrsi, Eskimbot, Mhss, Nicolas.Wu, Cybercobra, Spiritia, Wvbailey, Gobonobo, Robosh, Jim.belk, Don Warren, CBM,
Gregbard, Cydebot, Steel, Thijs!bot, Leuko, TAnthony, Magioladitis, Yocko, Mbarbier, Ercarter, David Eppstein, Heliac, Anarchia,
Nathanjones15, Policron, ABF, TXiKiBoT, Broadbot, Cgwaldman, Jamelan, Keepssouth, Chenzw, Paradoctor, Yintan, Svick, Classi-
calecon, Velvetron, Josang, Logperson, Alejandrocaro35, Darkicebot, Addbot, Luckas-bot, Yobot, AnomieBOT, Jim1138, Jo3sampl,
Xqbot, Tyrol5, Romnempire, Machine Elf 1735, WillNess, Wingman4l7, ClueBot NG, Trustedgunny, DBigXray, Svartskgg, MRG90,
Dooooot, Planeswalkerdude, Firerere2, HFS-er, Nimrodomer, PrimeBOT, Narutoarya and Anonymous: 88
Modus tollens Source: https://en.wikipedia.org/wiki/Modus_tollens?oldid=800894775 Contributors: AxelBoldt, The Cunctator, Void-
vector, J'raxis, Kingturtle, Andrewa, AugPi, Charles Matthews, Dcoetzee, Dysprosia, Andyfugard, Banno, Chuunen Baka, Highland-
wolf, Jrquinlisk, Wwoods, Jorend, 20040302, Sundar, Nayuki, Neilc, Pgan002, Toytoy, Iantresman, Neutrality, Brianjd, EricBright,
Jiy, Bender235, Elwikipedista~enwiki, Charm, El C, Obradovic Goran, Keenan Pepper, Kenyon, Oleg Alexandrov, Philbarker, Waldir,
Graham87, KYPark, FlaBot, Notapipe, Mathbot, Nivaca, Spencerk, YurikBot, Schoen, W33v1l, Icelight, Voidxor, Doncram, FF2010,
Shawnc, Georey.landis, Otto ter Haar, KnightRider~enwiki, Anastrophe, Chris the speller, Leland McInnes, Wen D House, Richard001,
Gobonobo, Tim bates, Robosh, Jim.belk, Gregbard, Cydebot, Steel, Rieman 82, Mr Gronk, Wejstheman, Thijs!bot, Pampas Cat, Win-
Bot, JAnDbot, Husond, Nyttend, Anarchia, Nullie, Hiddenhearts, Izno, VolkovBot, Human step, Chaos5023, The Wonky Gnome, Fer-
engi, Cgwaldman, Jamelan, Dictioneer, Hotbelgo, Michael Wyckmans, Josang, Wanderer57, Liempt, Auntof6, Alejandrocaro35, Alastair
Carnegie, Mccaskey, Addbot, Luckas-bot, Yobot, Tojasonharris, KDS4444, 1exec1, Xqbot, Anders, Anime Addict AA, FrescoBot,
MondalorBot, Full-date unlinking bot, Lightlowemon, Ripchip Bot, Tesseract2, Farrest, Klbrain, Arno Peters, JSquish, Exfenestracide,
Kranix, ClueBot NG, TheJrLinguist, Dooooot, Flosfa, Kephir, Lambda Fairy, Me, Myself, and I are Here, Franois Robere, Bobblond,
Kolyvansky, Lukesnydermusic, Luis150902 and Anonymous: 112
Monadic Boolean algebra Source: https://en.wikipedia.org/wiki/Monadic_Boolean_algebra?oldid=623204166 Contributors: Michael
Hardy, Charles Matthews, Kuratowskis Ghost, Oleg Alexandrov, Trovatore, Mhss, Gregbard, R'n'B, Safek, Hans Adler, Alexey Muranov,
Addbot, Tijfo098, JMP EAX and Anonymous: 4
Monadic predicate calculus Source: https://en.wikipedia.org/wiki/Monadic_predicate_calculus?oldid=782678778 Contributors: Michael
Hardy, Hyacinth, Rich Farmbrough, RussAbbott, BD2412, Patrickr, Arthur Rubin, Mhss, Henning Makholm, ShakespeareFan00, CBM,
Ezrakilty, Myasuda, Gregbard, AndrewHowse, Magioladitis, David Eppstein, The Tetrast, Kumioko (renamed), Nekiko, Addbot, Pcap,
Klbrain, Wham Bam Rock II, Tijfo098 and Anonymous: 9
Monotonicity of entailment Source: https://en.wikipedia.org/wiki/Monotonicity_of_entailment?oldid=775024401 Contributors: Hy-
acinth, Joyous!, Chalst, Smmurphy, Josh Parris, Helvetius, Intgr, Open2universe, SmackBot, JanusDC, Sabik, Floridi~enwiki, Gregbard,
Thijs!bot, Meredyth, Mcclarke, Addbot, Constructive editor, Bweilz and Anonymous: 10
Near sets Source: https://en.wikipedia.org/wiki/Near_sets?oldid=799138119 Contributors: Michael Hardy, Bearcat, Gandalf61, Giftlite,
Rjwilmsi, Algebraist, SmackBot, MartinPoulter, Headbomb, NSH001, Sebras, Salih, Philip Trueman, JL-Bot, SchreiberBike, Tassede-
the, Jarble, Alvin Seville, Erik9bot, FrescoBot, NSH002, NearSetAccount, Jonesey95, AManWithNoPlan, Frietjes, Helpful Pixie Bot,
BG19bot, Pintoch, Jochen Burghardt, ChristopherJamesHenry, KolbertBot and Anonymous: 2
Negation Source: https://en.wikipedia.org/wiki/Negation?oldid=800103558 Contributors: AxelBoldt, Zundark, Arvindn, Toby Bartels,
William Avery, Ryguasu, Youandme, Stevertigo, Edward, Patrick, Ihcoyc, Andres, Hyacinth, David Shay, Omegatron, Francs2000, Rob-
bot, PBS, Lowellian, Hadal, Wikibot, Benc, Tea2min, Adam78, Giftlite, DocWatson42, Nayuki, Chameleon, Neilc, Jonathan Grynspan,
Poccil, Paul August, Chalst, EmilJ, Spoon!, John Vandenberg, Nortexoid, Rajah, Deryck Chan, Daf, Pazouzou, Obradovic Goran, Pearle,
Knucmo2, Musiphil, Cesarschirmer~enwiki, RainbowOfLight, Forderud, Eric Qel-Droma, Oleg Alexandrov, Mindmatrix, Troels.jensen~enwiki,
Bluemoose, BrydoF1989, TAKASUGI Shinji, BD2412, Kbdank71, Rjwilmsi, FlaBot, Gparker, Chobot, DTOx, Visor, Roboto de Ajvol,
YurikBot, RussBot, Xihr, RJC, Gaius Cornelius, Cookman, Trovatore, Vanished user 1029384756, Dhollm, Mditto, SmackBot, Knowl-
edgeOfSelf, Eskimbot, Bluebot, Iain.dalton, Thumperward, Ewjw, Furby100, Rrburke, Mr.Z-man, UU, Cybercobra, Revengeful Lob-
ster, Decltype, Jon Awbrey, Quatloo, Byelf2007, Lambiam, Christoel K~enwiki, Loadmaster, Hans Bauer, Adambiswanger1, Mudd1,
TheTito, Andkore, Simeon, Gregbard, FilipeS, Cydebot, Reywas92, Thijs!bot, Zron, Escarbot, Djihed, Slacka123, Catgut, WhatamIdo-
ing, Gwern, Santiago Saint James, R'n'B, Kavadi carrier, Policron, Enix150, VolkovBot, Semmelweiss, Pasixxxx, Cs-Reaperman, PG-
SONIC, TXiKiBoT, Gwib, Anonymous Dissident, Ontoraul, HeirloomGardener, AlleborgoBot, SieBot, Ivan tambuk, Soler97, Francvs,
ClueBot, PixelBot, Alejandrocaro35, Holothurion, Wikidsp, AHRtbA==, HumphreyW, DumZiBoT, Mifter, Addbot, ConCompS, An-
dersBot, Gail, Jarble, Meisam, Qwertymith, Legobot, Luckas-bot, AnomieBOT, Nasnema, Oursipan, Zhentmdfan, Pinethicket, Half
price, MastiBot, DixonDBot, Mayoife, Xnn, Shadex9999, EmausBot, PBS-AWB, ClueBot NG, Wcherowi, Strcat, Wdchk, Masssly, Tito-
dutta, Hadi Payami, Victor Yus, ChrisGualtieri, YFdyh-bot, Dexbot, Averruncus, GinAndChronically, Solid Frog, Eteethan, Reybansingh,
CAPTAIN RAJU, Prahlad balaji, Sephistication, Deacon Vorbis, MereTechnicality, Ethan9870 and Anonymous: 102
Negation as failure Source: https://en.wikipedia.org/wiki/Negation_as_failure?oldid=784011976 Contributors: LittleDan, Robbot, Robert-
bowerman, H2g2bob, Oleg Alexandrov, GregorB, Tizio, NotInventedHere, Mhss, Bluebot, CBM, Simeon, Gregbard, Thijs!bot, Kermes-
beere, Hamaryns, Nosbig, Robert Kowalski, Pasixxxx, Una Smith, Runewiki777, Baosheng, Vlifschitz, Logperson, Eusebius, Addbot,
Albertzeyer, BG19bot, Balljust, Fitindia, Deacon Vorbis and Anonymous: 14
Negation introduction Source: https://en.wikipedia.org/wiki/Negation_introduction?oldid=675899777 Contributors: Michael Hardy,
Jitse Niesen, JRSpriggs, Alejandrocaro35, Matthew Kastor, Mcpheeandrew, C1776M and Joejoebob
Negation normal form Source: https://en.wikipedia.org/wiki/Negation_normal_form?oldid=798064600 Contributors: Vkuncak, Silver-
sh, Olathe, Macrakis, Starblue, Obradovic Goran, Pearle, Oleg Alexandrov, Linas, Kbdank71, Mets501, CRGreathouse, CBM, Greg-
bard, Cydebot, Christian75, Amikake3, Brian Geppert, Rpgoldman, Addbot, Jarble, Luckas-bot, Tomdwrightenator, Janburse, Kejia,
Mogism, Ayush3292 and Anonymous: 8
Nicod's axiom Source: https://en.wikipedia.org/wiki/Nicod%27s_axiom?oldid=698014895 Contributors: Michael Hardy, GrafZahl, Ser
Amantio di Nicolao, Gregbard, Henryk Tannhuser, Omnipaedista, Empty Buer, UniversumExNihilo and Anonymous: 1
Open formula Source: https://en.wikipedia.org/wiki/Open_formula?oldid=791881504 Contributors: Toby Bartels, Michael Hardy, Oliver
Pereira, AugPi, Pizza Puzzle, Ruakh, Alan Liefting, Jason Quinn, Klemen Kocjancic, Rgdboer, EmilJ, Oleg Alexandrov, Linas, LOL,
Ledy, BD2412, Kbdank71, Jshadias, Mathbot, DevastatorIIC, Wimt, ENeville, Malcolma, Sharkb, Melchoir, Bigbluesh, Mhss, Chris
the speller, Clconway, Atoll, CBM, Gregbard, AndrewHowse, Cydebot, Alaibot, Nemti, Mhaitham.shammaa, JamesBWatson, David
Eppstein, JaGa, Unisonus, AlnoktaBOT, Paradoctor, Randomblue, ClueBot, Fadesga, Addbot, Yobot, Magog the Ogre, AnomieBOT,
Etincelles, ClueBot NG, Jochen Burghardt, Suelru, MagneticInk and Anonymous: 17
Parity function Source: https://en.wikipedia.org/wiki/Parity_function?oldid=786594379 Contributors: Michael Hardy, Giftlite, Eyre-
land, Qwertyus, Ott2, Cedar101, Ylloh, CBM, R'n'B, The enemies of god, M gol, Addbot, Luckas-bot, Twri, Amilevy, Helpful Pixie Bot,
Drqwertysilence, Bender the Bot and Anonymous: 1
Partial function Source: https://en.wikipedia.org/wiki/Partial_function?oldid=795247326 Contributors: AxelBoldt, The Anome, Jan
Hidders, Edemaine, Michael Hardy, Pizza Puzzle, Charles Matthews, Timwi, Zoicon5, Altenmann, MathMartin, Henrygb, Tea2min,
Tosha, Connelly, Giftlite, Lethe, Waltpohl, Jabowery, DRE, Paul August, Rgdboer, Pearle, Sligocki, Schapel, Artur adib, LunaticFringe,
Oleg Alexandrov, , Mpatel, MFH, Salix alba, Dougluce, FlaBot, VKokielov, Jameshsher, Vonkje, Hairy Dude, Grubber, Trovatore,
Kooky, Arthur Rubin, Crystallina, Jbalint, SmackBot, Incnis Mrsi, Reedy, Ppntori, Chris the speller, Bluebot, Tsca.bot, TheArchivist,
Ccero, Wvbailey, Etatoby, CBM, Blaisorblade, Dvandersluis, Dricherby, Albmont, Timlevin, Ravikaushik, Joshua Issac, Alan U. Ken-
nington, TrippingTroubadour, Dmcq, Paolo.dL, , Classicalecon, ClueBot, Gulmammad, AmirOnWiki, Maheshexp, El bot de la
dieta, Lightbot, Legobot, Luckas-bot, Yobot, Pcap, AnomieBOT, Isheden, Brookswift, NoJr0xx, Nicolas Perrault III, SkpVwls, Unbit-
wise, Exfenestracide, AvicAWB, Quondum, Wcherowi, This lousy T-shirt, YannickN, Jergas, Tomtom2357, JamesHaigh, SJ Defender,
JMP EAX, Shaelja, GSS-1987, Quiddital, Bender the Bot and Anonymous: 44
Peirce's law Source: https://en.wikipedia.org/wiki/Peirce%27s_law?oldid=721310426 Contributors: Michael Hardy, Charles Matthews,
Doradus, Hyacinth, Banno, Ashley Y, GreatWhiteNortherner, Giftlite, FunnyMan3595, Leibniz, Roo72, Chalst, JRM, Nortexoid, Ricky81682,
Ossiemanners, Obsidian-fox, Pabix, SmackBot, Mhss, Furby100, Jon Awbrey, 16@r, Mets501, JRSpriggs, Gregbard, Julian Mendez, Hut
8.5, Four Dog Night, Jesper Carlstrom, Squids and Chips, The Tetrast, VVVBot, Phe-bot, XLinkBot, Addbot, Doctor Dillamond, MrOl-
lie, Ariel Black, Tassedethe, Queen of the Dishpan, La Mejor Ratonera, Tkuvho, Pinethicket, Branzillo, Lantern Leatherhead, Gamewiz-
ard71, Aratus Soli, Savourneen Deelish, Webber Jocky, Pangur Ban My Cat, Iserdo, Deep Atlantis, Janburse, Mkoconnor, BG19bot and
Anonymous: 19
Petrick's method Source: https://en.wikipedia.org/wiki/Petrick%27s_method?oldid=776095422 Contributors: Michael Hardy, Willem,
Kulp, Paul August, Oleg Alexandrov, Mindmatrix, Tizio, Fresheneesz, Trovatore, SmackBot, Andrei Stroe, Jay Uv., MystBot, Addbot,
Luckas-bot, Timeroot, ArthurBot, Harry0xBd, Njoutram, Matthiaspaul, Wolfmanx122 and Anonymous: 8
Plural quantification Source: https://en.wikipedia.org/wiki/Plural_quantification?oldid=793545088 Contributors: Arvindn, Michael
Hardy, CesarB, Peter Damian (original account), GreatWhiteNortherner, LadyPuball, Gzornenplatz, Augur, Ben Standeven, Etimbo,
Nortexoid, Kzollman, BD2412, Rjwilmsi, DanMS, SmackBot, CBM, Gregbard, AndrewHowse, Headbomb, David Eppstein, Cosmic
Latte, Ontoraul, Henry laycock, Welsh-girl-Lowri, DumZiBoT, Addbot, Yobot, AnomieBOT, Citation bot, Chikuku, Omnipaedista, ,
Francis Lima, PBS-AWB, Tijfo098, Helpful Pixie Bot, DBigXray, BG19bot, ChrisGualtieri, Jovelyn Bortanog, DA - DP, Jodosma,
Bender the Bot and Anonymous: 12
Poretsky's law of forms Source: https://en.wikipedia.org/wiki/Poretsky%27s_law_of_forms?oldid=762099079 Contributors: Bearcat,
Macrakis, Malcolma and WikiWhatthe
Predicate (mathematical logic) Source: https://en.wikipedia.org/wiki/Predicate_(mathematical_logic)?oldid=783738271 Contributors:
Zundark, Toby Bartels, William Avery, SimonP, Marknew, Alex S, Hyacinth, Barbara Shack, Jason Quinn, Abdull, Bender235, Gold-
enRing, Csar (usurped), Arthena, Kenyon, Velho, Linas, Action potential, SmackBot, Nbarth, Mladilozof, Maksim-bot, Mmehdi.g,
FrozenMan, Joseph Solis in Australia, Scsibug, CBM, Gregbard, Thijs!bot, Odoncaoa, JAnDbot, Soroush83, R'n'B, Llorenzi, VolkovBot,
Lradrama, Philogo, Sandik~enwiki, UncleFluy, Svick, Undisputedloser, Hans Adler, Neftas, MystBot, Addbot, Jarble, AnomieBOT,
Lh389, Jburlinson, LucienBOT, Bunyk, Gamewizard71, Onel5969, EmausBot, Ego White Tray, Tijfo098, Masssly, David R MacKay,
MOS 11, Hyperbolick and Anonymous: 21
Predicate functor logic Source: https://en.wikipedia.org/wiki/Predicate_functor_logic?oldid=760806487 Contributors: Michael Hardy,
AugPi, Woohookitty, BD2412, Qwertyus, Hv, Cedar101, Myasuda, Gregbard, Nick Number, Thenub314, R'n'B, Nono64, Fratrep,
CBM2, Classicalecon, SchreiberBike, Palnot, Yobot, AnomieBOT, BrideOfKripkenstein, SchreyP, PequalsNP, Qetuth, CarrieVS, Jochen
Burghardt and Anonymous: 7
Predicate variable Source: https://en.wikipedia.org/wiki/Predicate_variable?oldid=799429333 Contributors: AugPi, Charles Matthews,
BD2412, SmackBot, Mhss, CBM, Gregbard, Julian Mendez, Ego White Tray, Tijfo098, Nfer, KLBot2, Ira Leviton, MartinZ, Deacon
Vorbis and Anonymous: 3
Prenex normal form Source: https://en.wikipedia.org/wiki/Prenex_normal_form?oldid=783966640 Contributors: The Anome, Michael
Hardy, AugPi, Charles Matthews, Dysprosia, Greenrd, Pfortuny, Gandalf61, Lockeownzj00, 4pq1injbok, Fuxx, Oleg Alexandrov, Joriki,
Linas, MattGiuca, BD2412, Reetep, Jayme, PhS, SmackBot, Mhss, Esoth~enwiki, CRGreathouse, CBM, Thijs!bot, Jakob.scholbach,
Toobaz, AlleborgoBot, SieBot, IsleLaMotte, PixelBot, Addbot, SamatBot, Coreyoconnor, AnomieBOT, Omnipaedista, Epiglottisz, Wik-
itanvirBot, Jimw338 and Anonymous: 22
Principle of distributivity Source: https://en.wikipedia.org/wiki/Principle_of_distributivity?oldid=760583961 Contributors: Charles
Matthews, Tea2min, Chalst, Oleg Alexandrov, Btyner, Rjwilmsi, Jb-adder, Shirahadasha, CBM, Gregbard, Cydebot, Erudecorp, Gzhanstong,
LiederLover1982, AnomieBOT, Omnipaedista, Erik9bot, Gamewizard71, Eskilp, Mark viking and Anonymous: 6
Principle of explosion Source: https://en.wikipedia.org/wiki/Principle_of_explosion?oldid=790773377 Contributors: Michael Hardy,
Dominus, Kku, Tgeorgescu, ThirdParty, AugPi, Schneelocke, Furrykef, Hyacinth, Fibonacci, Ruakh, Kachooney, Leibniz, Smyth, Chalst,
Lysdexia, Rh~enwiki, MoraSique, Ruud Koot, Btyner, Hairy Dude, Ed de Jonge, Sneftel, SmackBot, Incnis Mrsi, Jushi, Mhss, Thumper-
ward, Epastore, Jon Awbrey, Byelf2007, Lambiam, LesterRoquefort, Dbtfz, Shadowcrimejas, ILikeThings, Chetvorno, Banedon, Greg-
bard, Cheesefather, Nick Number, WinBot, Majorly, Arno Matthias, Nyttend, Dave Dial, Heyitspeter, Joshua Issac, Cayafas, Wiae,
AlleborgoBot, SieBot, Toddst1, IdNotFound, Niceguyedc, Otr500, Addbot, SamatBot, Luckas-bot, AnomieBOT, Duland21, Fundoctor,
FrescoBot, I dream of horses, Magmalex, Jfmantis, EmausBot, Snied, Denbosch, TitaniumCarbide, Helpful Pixie Bot, BG19bot, Dualus,
Punk physicist, Mark Arsten, Doublethink1984, Chip Wildon Forster, Chrismorey, An0nym0us7r011, SkateTier, Arun2462, David de
Wit, Mtheorylord, Deacon Vorbis and Anonymous: 58
Product term Source: https://en.wikipedia.org/wiki/Product_term?oldid=786524106 Contributors: Alai, Oleg Alexandrov, Trovatore,
Mets501, CBM, Yobot, Materialscientist, Erik9bot, ClueBot NG, Brirush and Anonymous: 3
Proof by contradiction Source: https://en.wikipedia.org/wiki/Proof_by_contradiction?oldid=799574434 Contributors: AxelBoldt, Mag-
nus Manske, Lee Daniel Crocker, Vicki Rosenzweig, Mav, Zundark, The Anome, Tarquin, Larry Sanger, Andre Engels, Roadrun-
ner, FvdP, B4hand, Patrick, Chas zzz brown, Michael Hardy, Oliver Pereira, DopeshJustin, Kidburla, Dominus, Dcljr, Skysmith,
Andrewa, Scott, EdH, Ideyal, Revolver, Charles Matthews, Populus, Fibonacci, Leonariso, Robbot, Mohan ravichandran, Altenmann,
Romanm, Chancemill, Sam Spade, Bkell, UtherSRG, Centrx, Giftlite, Barbara Shack, Everyking, Cortina, 20040302, Chrismear, Ja-
son Quinn, Chameleon, Gubbubu, Toytoy, Michaelcarraher, Rdsmith4, Grossdomestic, Peter Kwok, Neale Monks, Lacrimosus, Hy-
drox, Roybb95~enwiki, Paul August, Brian0918, DrewRobinson, Cretog8, Dungodung, Man vyi, Larryv, Jonsafari, Nsaa, Rd232, Burn,
Bart133, Omphaloscope, Out180, Chamaeleon, Axeman89, Kazvorpal, Harvestdancer, Mindmatrix, Rodrigo Rocha~enwiki, Hdante,
Graham87, BD2412, Chun-hian, Patrick Zanon, X1011, Mathbot, Markkbilbo, Mattman00000, Lemuel Gulliver, Glenn L, Chobot,
Reetep, Algebraist, YurikBot, Severa, Zaroblue05, Kerowren, Dkg11hu, Doctor Whom, TERdON, Bota47, Andersersej~enwiki, MagicOgre,
SV Resolution, PurplePlatypus, Cmglee, Seanjacksontc, SmackBot, Pgk, BiT, Wje, Psiphiorg, Qwasty, ScottForschler, Jfsamper, Nbarth,
Kobayen, Simpsons contributor, Aerobird, Ioscius, Mhym, Byelf2007, Lambiam, Dbtfz, Pliny, Loodog, Robosh, Mgiganteus1, Grumpyy-
oungman01, Xiaphias, Dr.K., Emx~enwiki, Ncosmob, Chetvorno, Wafulz, CBM, Gregbard, Mattbuck, Cydebot, Shirulashem, Marcel-
Lionheart, Epbr123, Kajisol, Pampas Cat, Flarity, CZeke, Hopiakuta, JAnDbot, Pedro, LookingGlass, NeighborTotoro, Electriceel,
TheBusiness, JuanPaBJ16, Neonguru, AltiusBimm, Colincbn, Pmbcomm, Kesal, DorganBot, VolkovBot, Jimmaths, TXiKiBoT, Liko81,
Melsaran, Synthebot, GlassFET, Katzmik, Arkwatem, The Evil Spartan, Luciengav, Gamall Wednesday Ida, Marc van Leeuwen, Silvo-
nenBot, NjardarBot, MrVanBot, CarsracBot, Nikie42, AgadaUrbanit, Tide rolls, Alexander.mitsos, Legobot, Yobot, Cm001, Majestic-
chimp, AnomieBOT, OpenFuture, RavShimon, Dave3457, Lipsquid, Sawomir Biay, LegendFSL, Double sharp, Duoduoduo, Petron-
iusArb, DexDor, Jdl22, Bastian964, Makecat, Mehdi, ClueBot NG, Ayhorse, Helpful Pixie Bot, Blue Mist 1, Crh23, Vesta Zenobia,
P76837, Vclam068, Darcourse, Jochen Burghardt, Password is DOB, Xin-Xin W., Golopotw, The Original Bob, Maths314, Loraof,
Baking Soda, Deacon Vorbis, PrimeBOT, KolbertBot and Anonymous: 164
Proof by contrapositive Source: https://en.wikipedia.org/wiki/Proof_by_contrapositive?oldid=800894781 Contributors: Andyfugard,
Centrx, Jason Quinn, Brianjd, BD2412, UsaSatsui, Doncram, Carabinieri, Georey.landis, SmackBot, InverseHypercube, NickPen-
guin, Byelf2007, Gregbard, Cydebot, Daniel5Ko, Jimmaths, DesolateReality, Gamall Wednesday Ida, PerryTachett, Badgernet, Alexan-
der.mitsos, , Yobot, Erik9bot, BertSeghers, Klbrain, Jackofhats, Wcherowi, Helpful Pixie Bot, CitationCleanerBot, Any-
thingcouldhappen, Monkbot, Loraof and Anonymous: 8
Proposition Source: https://en.wikipedia.org/wiki/Proposition?oldid=789387401 Contributors: AxelBoldt, Mav, Toby Bartels, Zoe,
Stevertigo, K.lee, Michael Hardy, Zeno Gantner, TakuyaMurata, Minesweeper, Evercat, Sethmahoney, Conti, Reddi, Greenrd, Markhurd,
Hyacinth, Banno, RedWolf, Ojigiri~enwiki, Timrollpickering, Tea2min, Giftlite, Jason Quinn, Stevietheman, Antandrus, Superborsuk,
Sebbe, Amicuspublilius, Martpol, Hapsiainen, Vanished user lp09qa86ft, Chalst, Phiwum, Duesentrieb, Bobo192, Larryv, MPerel, He-
lix84, V2Blast, Ish ishwar, Emvee~enwiki, RJFJR, Bobrayner, Philthecow, Velho, Woohookitty, Kzollman, Isnow, Patl, Brolin Em-
pey, Lakitu~enwiki, Fresheneesz, Bornhj, YurikBot, Hairy Dude, Rick Norwood, Trovatore, Wknight94, Finell, SmackBot, Evanreyes,
Ignacioerrico, Bluebot, Jaymay, DHN-bot~enwiki, Cybercobra, Richard001, Lacatosias, Jon Awbrey, Vina-iwbot~enwiki, Byelf2007,
Harryboyles, SilkTork, Ckatz, 16@r, Grumpyyoungman01, Stwalkerster, Caiaa, Levineps, Iridescent, JoeBot, Gveret Tered, Eastlaw,
CRGreathouse, CBM, Sdorrance, Andkore, Gregbard, Juansempere, Yesterdog, Thijs!bot, Barticus88, Kredal, AllenFerguson, Voyaging,
NSH001, JAnDbot, MER-C, Leolaursen, Bookinvestor, Connormah, VoABot II, WhatamIdoing, Pomte, Thirdright, J.delanoy, Ali, Gin-
sengbomb, Katalaveno, Coppertwig, Nieske, Funandtrvl, King Lopez, ABF, TXiKiBoT, Philogo, Tracerbullet11, Cnilep, Barkeep, SieBot,
Legion , Oxymoron83, OKBot, ClueBot, The Thing That Should Not Be, Watchduck, Estirabot, Hans Adler, Hugo Herbelin, DumZi-
BoT, Makotoy, Crazy Boris with a red beard, Dthomsen8, Dwnelson, SilvonenBot, Good Olfactory, Addbot, Andrewghutchison, LAAFan,
Luckas-bot, TheSuave, Denyss, THEN WHO WAS PHONE?, Ehuss, KamikazeBot, AnomieBOT, E235, Yalckram, Wortafad, Arthur-
Bot, Sophivorus, Omnipaedista, FrescoBot, BrideOfKripkenstein, Motomuku, Pinethicket, A8UDI, Serols, Monkeymanman, Gamewiz-
ard71, FoxBot, Lotje, TheMesquito, Daliot, EmausBot, John of Reading, Eekerz, Look2See1, Honestrosewater, Britannic124, Dcirovic,
Bollyje, Coasterlover1994, Chewings72, ClueBot NG, Wcherowi, MelbourneStar, Satellizer, Masssly, Helpful Pixie Bot, Hans-Jrgen
Streicher~enwiki, Marcocapelle, , ChrisGualtieri, Jochen Burghardt, Eyesnore, Purnendu Karmakar, Damin A. Fernndez Beanato,
DetectiveKraken, SanketDash, Ashika Bieber, Eavestn, ExperiencedArticleFixer, Nkkenbuer, Tomi adeola, Dogovan and Anonymous:
176
Propositional calculus Source: https://en.wikipedia.org/wiki/Propositional_calculus?oldid=798963524 Contributors: The Anome, Tar-
quin, Jan Hidders, Tzartzam, Michael Hardy, JakeVortex, Kku, Justin Johnson, Minesweeper, Looxix~enwiki, AugPi, Rossami, Ev-
ercat, BAxelrod, Charles Matthews, Dysprosia, Hyacinth, Ed g2s, UninvitedCompany, BobDrzyzgula, Robbot, Benwing, MathMartin,
Rorro, GreatWhiteNortherner, Marc Venot, Ancheta Wis, Giftlite, Lethe, Jason Quinn, Gubbubu, Gadum, LiDaobing, Beland, Grauw,
Almit39, Kutulu, Creidieki, Urhixidur, PhotoBox, EricBright, Extrapiramidale, Rich Farmbrough, Guanabot, FranksValli, Paul August,
Glenn Willen, Elwikipedista~enwiki, Tompw, Chalst, BrokenSegue, Cmdrjameson, Nortexoid, Varuna, Red Winged Duck, ABCD, Xee,
Nightstallion, Bookandcoee, Oleg Alexandrov, Japanese Searobin, Joriki, Linas, Mindmatrix, Ruud Koot, Trevor Andersen, Waldir,
Graham87, BD2412, Qwertyus, Kbdank71, Porcher, Koavf, PlatypeanArchcow, Margosbot~enwiki, Kri, No Swan So Fine, Roboto
de Ajvol, Hairy Dude, Russell C. Sibley, Gaius Cornelius, Ihope127, Rick Norwood, Trovatore, TechnoGuyRob, Jpbowen, Cruise,
Voidxor, Jerome Kelly, Arthur Rubin, Cedar101, Reyk, Teply, GrinBot~enwiki, SmackBot, Michael Meyling, Imz, Incnis Mrsi, Srnec,
Mhss, Bluebot, Cybercobra, Clean Copy, Jon Awbrey, Andeggs, Ohconfucius, Lambiam, Wvbailey, Scientizzle, Loadmaster, Mets501,
Pejman47, JulianMendez, Adriatikus, Zero sharp, JRSpriggs, George100, Harold f, 8754865, Vaughan Pratt, CBM, ShelfSkewed,
Sdorrance, Gregbard, Cydebot, Julian Mendez, Taneli HUUSKONEN, EdJohnston, Applemeister, GeePriest, Salgueiro~enwiki, JAnD-
bot, Thenub314, Hut 8.5, Magioladitis, Paroswiki, MetsBot, JJ Harrison, Epsilon0, Santiago Saint James, Anaxial, R'n'B, N4nojohn,
Wideshanks, TomS TDotO, Created Equal, The One I Love, Our Fathers, STBotD, Mistercupcake, VolkovBot, JohnBlackburne, TXiKi-
BoT, Lynxmb, The Tetrast, Philogo, Wiae, General Reader, Jmath666, VanishedUserABC, Sapphic, Newbyguesses, SieBot, Iamthedeus,
, Jimmycleveland, OKBot, Svick, Huku-chan, Francvs, ClueBot, Unica111, Wysprgr2005, Garyzx, Niceguyedc,
Thinker1221, Shivakumar2009, Estirabot, Alejandrocaro35, Reuben.cornel, Hans Adler, MilesAgain, Djk3, Lightbearer, Addbot, Rdan-
neskjold, Legobot, Yobot, Tannkrem, Stefan.vatev, Jean Santeuil, AnomieBOT, LlywelynII, Materialscientist, Ayda D, Doezxcty, Cwchng,
Omnipaedista, SassoBot, January2009, Thehelpfulbot, FrescoBot, LucienBOT, Xenfreak, HRoestBot, Dinamik-bot, Steve03Mills, Emaus-
Bot, John of Reading, 478jjjz, Chharvey, Chewings72, Bomazi, Tijfo098, ClueBot NG, Golden herring, MrKoplin, Frietjes, Helpful
Pixie Bot, BG19bot, Llandale, Brad7777, Wolfmanx122, Hanlon1755, Khazar2, Jochen Burghardt, Mark viking, Mrellisdee, Christian
Nassif-Haynes, Matthew Kastor, Marco volpe, Jwinder47, Mario Casteln Castro, Eavestn, SiriusGR, CLCStudent, Quiddital, DIYeditor,
CaryaSun, BobU, Nicolai uy and Anonymous: 166
Propositional directed acyclic graph Source: https://en.wikipedia.org/wiki/Propositional_directed_acyclic_graph?oldid=561376261
Contributors: Selket, Andreas Kaufmann, BD2412, Brighterorange, Trovatore, Bluebot, Nbarth, CmdrObot, RUN, MetsBot, Aagtbdfoua,
DRap, Mvinyals, Dennis714, Tijfo098 and Anonymous: 2
Propositional formula Source: https://en.wikipedia.org/wiki/Propositional_formula?oldid=799359014 Contributors: Michael Hardy,
Hyacinth, Timrollpickering, Tea2min, Filemon, Giftlite, Golbez, PWilkinson, Klparrot, Bookandcoee, Woohookitty, Linas, Mindma-
trix, Tabletop, BD2412, Kbdank71, Rjwilmsi, Bgwhite, YurikBot, Hairy Dude, RussBot, Open2universe, Cedar101, SmackBot, Hmains,
Chris the speller, Bluebot, Colonies Chris, Tsca.bot, Jon Awbrey, Muhammad Hamza, Lambiam, Wvbailey, Wizard191, Iridescent,
Happy-melon, ChrisCork, CBM, Gregbard, Cydebot, Julian Mendez, Headbomb, Nick Number, Arch dude, Djihed, R'n'B, Raise ex-
ception, Wiae, Billinghurst, Spinningspark, WRK, Maelgwnbot, Jaded-view, Mild Bill Hiccup, Neuralwarp, Addbot, Yobot, Adelpine,
AnomieBOT, Neurolysis, LilHelpa, The Evil IP address, Kwiki, John of Reading, Klbrain, ClueBot NG, Kevin Gorman, Helpful Pixie
Bot, BG19bot, PhnomPencil, Wolfmanx122, Dexbot, Jochen Burghardt, Mark viking, Knife-in-the-drawer, JJMC89 and Anonymous:
21
Propositional function Source: https://en.wikipedia.org/wiki/Propositional_function?oldid=726320246 Contributors: Michael Hardy,
TakuyaMurata, Jason Quinn, Rgdboer, Linas, Thekohser, Arthur Rubin, Gregbard, Uncle uncle uncle, Izno, Ost316, Addbot, Mont-
blanc1988, Orenburg1, Helpful Pixie Bot, YFdyh-bot, JYBot, TE5ITA and Anonymous: 4
Propositional proof system Source: https://en.wikipedia.org/wiki/Propositional_proof_system?oldid=749391746 Contributors: Michael
Hardy, Ben Standeven, EmilJ, RDT, CBM, Gregbard, Magioladitis, David Eppstein, R'n'B, LittleWink, Jocme, Wcherowi, Solomon7968,
CitationCleanerBot and Anonymous: 3
Propositional variable Source: https://en.wikipedia.org/wiki/Propositional_variable?oldid=635471524 Contributors: Giftlite, Creidieki,
Aisaac, Woohookitty, Kbdank71, Mitsukai, Trovatore, Mhss, Foxjwill, Jon Awbrey, Mets501, CBM, Gregbard, AndrewHowse, Cydebot,
Julian Mendez, Thijs!bot, Pomte, Pdabrowiecki, Addbot, Amirobot, Tijfo098, Brirush and Anonymous: 2
Quantificational variability effect Source: https://en.wikipedia.org/wiki/Quantificational_variability_effect?oldid=791882005 Con-
tributors: Michael Hardy, Greenrd, Alastair Haines, David Eppstein, Pollinosisss, InternetArchiveBot, GreenC bot and Bender the Bot
Quantifier (linguistics) Source: https://en.wikipedia.org/wiki/Quantifier_(linguistics)?oldid=800331220 Contributors: Joeblakesley, Os-
hwah, AnomieBOT, Materialscientist, Deltahedron, Jochen Burghardt, Magic links bot and Anonymous: 3
Quantifier (logic) Source: https://en.wikipedia.org/wiki/Quantifier_(logic)?oldid=797156257 Contributors: Hyacinth, Trylks, R.e.b.,
Arthur Rubin, CBM, Chrisahn, Chiswick Chap, Daniel5Ko, AnomieBOT, LilHelpa, JamesHDavenport, John of Reading, Quondum,
Pacerier, David.moreno72, Kephir, Jochen Burghardt, Dough34, Crito10, Eduardo Cortez, MagneticInk and Anonymous: 7
Quantifier rank Source: https://en.wikipedia.org/wiki/Quantifier_rank?oldid=742552603 Contributors: Bearcat, BD2412, Ott2, Ster-
ling, Magioladitis, Yobot, Modelpractice, David.moreno72, AK456, Deltahedron, InternetArchiveBot, GreenC bot, Emlili and Anony-
mous: 3
Quantifier variance Source: https://en.wikipedia.org/wiki/Quantifier_variance?oldid=742703784 Contributors: Bender235, SlimVir-
gin, Rjwilmsi, Gregbard, Snowded, Brews ohare, Omnipaedista, FreeKnowledgeCreator, Machine Elf 1735, Moswento, Dcirovic, Spec-
tral sequence, Jochen Burghardt, Zeiimer, Bender the Bot and Anonymous: 1
Quasi-commutative property Source: https://en.wikipedia.org/wiki/Quasi-commutative_property?oldid=780824208 Contributors: Michael
Hardy, Dirac66, GrahamHardy, Marius siuram, OccultZone, Hydronium Hydroxide and Anonymous: 1
Quine–McCluskey algorithm Source: https://en.wikipedia.org/wiki/Quine%E2%80%93McCluskey_algorithm?oldid=793370921 Con-
tributors: Michael Hardy, TakuyaMurata, AugPi, Charles Matthews, Dcoetzee, Dysprosia, Roachmeister, Noeckel, Lithiumhead, Giftlite,
Gzornenplatz, Two Bananas, Simoneau, McCart42, Revision17, Ralph Corderoy, Jkl, Bender235, James Foster, Alansohn, RJFJR,
Kobold, Oleg Alexandrov, Woohookitty, Ruud Koot, Mendaliv, Rjwilmsi, Wikibofh, Dar-Ape, Mathbot, Ysangkok, Fresheneesz, Anti-
matter15, CiaPan, Chobot, YurikBot, Pi Delport, Andrew Bunn, Trovatore, Hv, Hgomersall, Cedar101, Gulliveig, Modify, Skryskalla,
Looper5920, Gilliam, Mhss, Durova, Allan McInnes, Cybercobra, Akshaysrinivasan, Jon Awbrey, Romanski, Tlesher, Dfass, Huntscor-
pio, Pqrstuv, Iridescent, Chetvorno, Gregbard, Elanthiel, QuiteUnusual, Salgueiro~enwiki, BBar, Johnbibby, Narfanator, Gwern, Infran-
gible, Jim.henderson, OneWorld22, Potopi, LordAnubisBOT, Andionita, VolkovBot, AlnoktaBOT, Jay Uv., W1k13rh3nry, WimdeValk,
ClueBot, AMCCosta, Dusa.adrian, Ra2007, Addbot, AgadaUrbanit, Luckas-bot, Yobot, Ipatrol, Sz-iwbot, RedLunchBag, Gulyan89,
DixonDBot, EmausBot, John of Reading, Dusadrian, Alessandro.goulartt, Njoutram, Clementina, Matthiaspaul, Ceklock, Citation-
CleanerBot, CARPON, Wh1chwh1tch, HHadavi, BattyBot, MatthewIreland, Cyberbot II, ChrisGualtieri, Snilan, Monkbot, Jakobjb,
LuckyBulldog, Srdrucker, GreenC bot, Zernity and Anonymous: 138
Random algebra Source: https://en.wikipedia.org/wiki/Random_algebra?oldid=747624533 Contributors: R.e.b., Moonraker, Bender
the Bot and Anonymous: 1
Read-once function Source: https://en.wikipedia.org/wiki/Read-once_function?oldid=723729058 Contributors: David Eppstein
Reduct Source: https://en.wikipedia.org/wiki/Reduct?oldid=618205741 Contributors: John Baez, Spring Rubber, SmackBot, Vaughan
Pratt, Tikiwont, Hans Adler, Backslash Forwardslash, Gf uip, EmausBot, Bgeron, Monkbot and Anonymous: 3
Reductio ad absurdum Source: https://en.wikipedia.org/wiki/Reductio_ad_absurdum?oldid=798413257 Contributors: Michael Hardy,
Dominus, Julesd, BenKovitz, Hyacinth, Xanzzibar, Centrx, Giftlite, Suspekt~enwiki, 20040302, Paul August, Bender235, Prodicus,
Richard Arthur Norton (1958- ), Mindmatrix, TheAlphaWolf, Mandarax, Spezied, Mendaliv, XP1, Theinsomniac4life, Savethemooses,
Diza, Jemptymethod, Newagelink, Thnidu, HereToHelp, Williamjacobs, SmackBot, InverseHypercube, Bomac, Bmearns, BenAveling,
Hibbleton, Occultations, Byelf2007, Loodog, Gobonobo, Wizard191, Chetvorno, N2e, Gregbard, Jordan Brown, Rstrug, PamD, Sense-
maker, Gapooh007, JAnDbot, Asnac, Matthew Fennell, Albany NY, JamesBWatson, Nyttend, Cardamon, ForestAngel, David Epp-
stein, Nev1, Uranium grenade, Ohms law, GDW13, Treisijs, Anarchangel, Y, KirbenS, Iamthedeus, Hello71, Yoda of Borg, Martarius,
Reydeyo, Excirial, Alexbot, SpikeToronto, Zebrasil, Tired time, Mrszantogabor, HumphreyW, Gerhardvalentin, Ronhjones, AgadaUr-
banit, Numbo3-bot, Yobot, Majestic-chimp, Angel ivanov angelov, Nallimbot, Tojasonharris, AnomieBOT, Ciphers, OpenFuture, Mark-
worthen, Transity, Alumnum, Dave3457, Much noise, Hirpex, Momergil, Tom.Reding, Thedarkknight491, Pappaj333, Dashed, Polli-
nosisss, Jordgette, Philocentric, LilyKitty, Diyan.boyanov, Wolfzx99, BertSeghers, EmausBot, Jrjspencer, Dcirovic, PBS-AWB, ,
Donner60, Sudozero, ClueBot NG, TheConduqtor, Ayhorse, Primergrey, Jeraphine Gryphon, BG19bot, Original Position, Blue Mist 1,
E1618978, Snow Blizzard, Cengime, JSWIFT13, Esszet, Lugia2453, Elephantplot, Jochen Burghardt, You Can Act Like A Man, Scythe-
mantic, Yamaha5, Aubreybardo, Lance Chance, Immanuel Thoughtmaker, Adam31415926535, Elmeter, Joshwond, Bualo Bill Cody,
David Reisling, Permstrump, Deacon Vorbis, DIYeditor, Jeaden and Anonymous: 105
Reed–Muller expansion Source: https://en.wikipedia.org/wiki/Reed%E2%80%93Muller_expansion?oldid=794983448 Contributors:
Michael Hardy, AugPi, Macrakis, Macha, DavidCBryant, Cebus, Fyrael, Legobot, Yobot, Jason Recliner, Esq., RobinK, Klbrain, Matthi-
aspaul and Anonymous: 4
Relation algebra Source: https://en.wikipedia.org/wiki/Relation_algebra?oldid=799654562 Contributors: Zundark, Michael Hardy, AugPi,
Charles Matthews, Tea2min, Lethe, Fropu, Mboverload, D6, Elwikipedista~enwiki, Giraedata, AshtonBenson, Woohookitty, Paul Car-
penter, BD2412, Rjwilmsi, Koavf, Tillmo, Wavelength, Ott2, Cedar101, Mhss, Concerned cynic, Nbarth, Jon Awbrey, Lambiam, Physis,
Mets501, Vaughan Pratt, CBM, Gregbard, Sam Staton, King Bee, JustAGal, Balder ten Cate, David Eppstein, R'n'B, Leyo, Ramsey2006,
Plasticup, JohnBlackburne, The Tetrast, Linelor, Hans Adler, Addbot, QuadrivialMind, Yobot, AnomieBOT, Nastor, LilHelpa, Xqbot,
Samppi111, Charvest, FrescoBot, Irmy, Sjcjoosten, SchreyP, Seabuoy, BG19bot, CitationCleanerBot, Brad7777, Khazar2, Lerutit, RPI,
JaconaFrere, Narky Blert, SaltHerring, Some1Redirects4You and Anonymous: 41
Relation construction Source: https://en.wikipedia.org/wiki/Relation_construction?oldid=564892457 Contributors: Charles Matthews,
Carlossuarez46, Kaldari, Paul August, El C, DoubleBlue, Nihiltres, TeaDrinker, Gwernol, Wknight94, Closedmouth, SmackBot, Jon
Awbrey, JzG, Coredesat, Slakr, CBM, Gogo Dodo, Hut 8.5, Brusegadi, David Eppstein, Brigit Zilwaukee, Yolanda Zilwaukee, Ars Tottle,
The Proposition That, Spellcast, Corvus cornix, Seb26, Maelgwnbot, Blanchardb, RABBU, REBBU, DEBBU, DABBU, Wolf of the
Steppes, REBBU, Doubtentry, Education Is The Basis Of Law And Order, Bare In Mind, Preveiling Opinion Of Dominant Opinion
Group, VOC, Buchanans Navy Sec, Kaiba, Marsboat, Trainshift, Pluto Car, Unco Guid, Viva La Information Revolution!, Flower
Mound Belle, Editortothemasses, Navy Pierre, Mrs. Lovetts Meat Puppets, Unknown Justin, West Goshen Guy, Southeast Penna Poppa,
Delaware Valley Girl, Erik9bot and Lerutit
Representation (mathematics) Source: https://en.wikipedia.org/wiki/Representation_(mathematics)?oldid=738119982 Contributors:
Michael Hardy, Giftlite, El C, Linas, BD2412, SixWingedSeraph, Rjwilmsi, MarSch, Reyk, Magioladitis, A3nm, David Eppstein,
NewEnglandYankee, PaulTanenbaum, Geometry guy, SieBot, Addbot, Twri, Trappist the monk, Helpful Pixie Bot, BattyBot, Mrt3366
and Anonymous: 4
Residuated Boolean algebra Source: https://en.wikipedia.org/wiki/Residuated_Boolean_algebra?oldid=777417470 Contributors: Tea2min,
Gracefool, PWilkinson, Cedar101, Mhss, Vaughan Pratt, Ctxppc, Addbot, Yobot, Charvest and Anonymous: 2
Resolution (logic) Source: https://en.wikipedia.org/wiki/Resolution_(logic)?oldid=782574837 Contributors: Kku, AugPi, Charles Matthews,
Robbot, Stephan Schulz, Tea2min, Thaukko~enwiki, Jorend, Macrakis, Gubbubu, Mu, Mukerjee, Kmweber, Jiy, Peter M Gerdes,
MoraSique, MattGiuca, Ruud Koot, GregorB, Xitdedragon, Rjwilmsi, Tizio, Ekspiulo, Fresheneesz, Ogai, Gaius Cornelius, Jpbowen,
GrinBot~enwiki, DVD R W, Bluebot, Acipsen, Lifetime, Henning Makholm, Byelf2007, Antonielly, SwordAngel, T boyd, Wjejske-
newr, Zero sharp, CBM, Pgr94, Simeon, Gregbard, JLD, Esowteric, Hannes Eder, Matt Kovacs, Barnardb, Pdbogen, David Eppstein,
N4nojohn, Shmenonpie, Hquain, AHMartin, RatnimSnave, Toddst1, Jdaloner, Prohlep, Methossant, Alexbot, Ceilican, Addbot, L1chn,
Tassedethe, Lightbot, Jarble, Termar, Luckas-bot, Yobot, Cloudyed, AnomieBOT, D'ohBot, Sawomir Biay, Mikolasj, LittleWink, Ir-
bisgreif, Gamewizard71, Porphyrus, RjwilmsiBot, Lagenar, D.Lazard, Tijfo098, Masssly, Helpful Pixie Bot, BG19bot, Solomon7968,
Mathnerd314159, Jochen Burghardt, Dough34, Monkbot, 0a.io and Anonymous: 47
Resolution inference Source: https://en.wikipedia.org/wiki/Resolution_inference?oldid=720960978 Contributors: Michael Hardy, Greg-
bard, Ceilican, BG19bot, Ezequiel234 and Anonymous: 2
Robbins algebra Source: https://en.wikipedia.org/wiki/Robbins_algebra?oldid=722337346 Contributors: Tea2min, Giftlite, Nick8325,
Zaslav, John Vandenberg, Qwertyus, Salix alba, Trovatore, Christian75, Jdvelasc, SieBot, Thehotelambush, Addbot, Lightbot, Pcap, Spiros
Bousbouras, Xqbot, Irbisgreif, Afteread, Shishir332, Rcsprinter123 and Anonymous: 14
Rule of inference Source: https://en.wikipedia.org/wiki/Rule_of_inference?oldid=761854035 Contributors: Michael Hardy, Darkwind,
Poor Yorick, Rossami, BAxelrod, Hyacinth, Ldo, Timrollpickering, Markus Krtzsch, Jason Quinn, Khalid hassani, Neilc, Quadell,
CSTAR, Lucidish, MeltBanana, Bender235, Elwikipedista~enwiki, EmilJ, Nortexoid, Giraedata, Joriki, Ruud Koot, Hurricane Angel,
Waldir, BD2412, Kbdank71, Emallove, Brighterorange, Algebraist, YurikBot, Rsrikanth05, Cleared as led, Arthur Rubin, Fram, Nahaj,
Elwood j blues, Yamaguchi , Mhss, Chlewbot, Byelf2007, ArglebargleIV, Robosh, Tktktk, Jim.belk, Physis, JHunterJ, Grumpyy-
oungman01, Dan Gluck, CRGreathouse, CBM, Simeon, Gregbard, Cydebot, Thijs!bot, Epbr123, LokiClock, TXiKiBoT, Cli, Euse-
bius, Alejandrocaro35, Addbot, Luckas-bot, AnomieBOT, Citation bot, GrouchoBot, RibotBOT, WillMall, Undsoweiter, Jonesey95,
Gamewizard71, Onel5969, TomT0m, Tesseract2, Helptry, Dcirovic, Neptilo, Tijfo098, ClueBot NG, Delphinebbd, Ginsuloft, Sweepy,
Anareth and Anonymous: 31
Rule of replacement Source: https://en.wikipedia.org/wiki/Rule_of_replacement?oldid=689615639 Contributors: Michael Hardy, ENeville,
Arthur Rubin, Gregbard, Cli, Legobot, SwisterTwister, FrescoBot, Olexa Riznyk, IfYouDoIfYouDon't, Helpful Pixie Bot, BG19bot,
Dooooot, Jochen Burghardt and Anonymous: 6
Rule of weakening Source: https://en.wikipedia.org/wiki/Structural_rule?oldid=736195870 Contributors: Charles Matthews, Hyacinth,
Ruakh, Kaustuv, STHayden, RDBury, Mhss, Fplay, Byelf2007, Soumyasch, Gregbard, Pi zero, Hans Adler, Addbot, Milksea, Noamz,
Mattg82, Erik9bot, RobinK, TomT0m, Mark viking, W. P. Uzer and Anonymous: 6
Scope (logic) Source: https://en.wikipedia.org/wiki/Scope_(logic)?oldid=785079386 Contributors: RJGray, I dream of horses, Doy-
oon1995, GeoreyT2000, DrStrauss and Krankmeister1917
Second-order predicate Source: https://en.wikipedia.org/wiki/Second-order_predicate?oldid=746908769 Contributors: Awaterl, Michael
Hardy, Oliver Pereira, Dori, Alex S, Elwikipedista~enwiki, Oleg Alexandrov, GregorB, Graham87, Tachs, Fram, SmackBot, CBM, Greg-
bard, David Eppstein, YSSYguy, Erik9bot, Molleya123 and Anonymous: 4
Second-order propositional logic Source: https://en.wikipedia.org/wiki/Second-order_propositional_logic?oldid=620964980 Contributors: Michael Hardy, Jason Quinn, Chalst, Meloman, Gregbard, Epsilon0 and Jochen Burghardt
Sentence (mathematical logic) Source: https://en.wikipedia.org/wiki/Sentence_(mathematical_logic)?oldid=789501864 Contributors:
Toby Bartels, Michael Hardy, Silversh, Charles Matthews, Giftlite, Spayrard, Rgdboer, Oleg Alexandrov, Linas, BD2412, Qwertyus,
Mkehrt, 4C~enwiki, Ihope127, Rick Norwood, Trovatore, Bbaumer, Maksim-e~enwiki, Bigbluesh, Mhss, Mets501, CBM, Gregbard,
Skier Dude, Anonymous Dissident, Philogo, Ctxppc, DesolateReality, Alejandrocaro35, Addbot, TaBOT-zerem, Citation bot, MastiBot,
ClueBot NG, Masssly, MerlIwBot, Helpful Pixie Bot, Georgydunaev and Anonymous: 6
Sequential composition Source: https://en.wikipedia.org/wiki/Process_calculus?oldid=797441231 Contributors: Michael Hardy, AlexR,
Theresa knott, Charles Matthews, Zoicon5, Phil Boswell, Wmahan, Neilc, Solitude, Leibniz, Linas, Ruud Koot, MarSch, XP1, Jamesh-
sher, Vonkje, GangofOne, Wavelength, Koeyahoo, CarlHewitt, Gareth Jones, RabidDeity, Jpbowen, SockPuppetVandal, Voidxor,
Misza13, SmackBot, McGeddon, Chris the speller, Nbarth, Allan McInnes, Ezrakilty, Sam Staton, Blaisorblade, Thijs!bot, Dougher,
Skraedingie, Barkjon, Roxy the dog, MystBot, Addbot, Tassedethe, Lightbot, Yobot, AnomieBOT, GrouchoBot, BehnazCh, Vasywriter,
Ifai, WikitanvirBot, Clayrat, Serketan, Bethnim, Architectchao, Helpful Pixie Bot, BG19bot, Ulidtko, Pintoch, Dough34, PrimeBOT
and Anonymous: 40
Sheffer stroke Source: https://en.wikipedia.org/wiki/Sheffer_stroke?oldid=786911580 Contributors: AxelBoldt, Fubar Obfusco, David
spector, Vik-Thor, Michael Hardy, AugPi, Jouster, Dcoetzee, Dysprosia, Markhurd, Hyacinth, Cameronc, Johnleemk, Robbot, Saaska,
Rorro, Paul Murray, Snobot, Giftlite, DocWatson42, Brouhaha, Zigger, Gubbubu, Halo, Sam, Urhixidur, Ratiocinate, Rich Farmbrough,
Leibniz, Pie4all88, TheJames, SocratesJedi, Paul August, Chalst, EmilJ, Nortexoid, Redfarmer, Emvee~enwiki, Dominic, Bookandcoee,
Drakferion, Woohookitty, Mindmatrix, Steven Luo, Ruud Koot, Wayward, BD2412, Qwertyus, Kbdank71, Rjwilmsi, R.e.b., Ademkader,
FlaBot, Mathbot, George Leung, Algebraist, RobotE, Sceptre, Imagist, Archelon, Ksyrie, NormalAsylum, Dijxtra, Trovatore, Nad, Yahya
Abdal-Aziz, Prolineserver, JMRyan, Rohanmittal, Luethi, JoanneB, SmackBot, Melchoir, Mhss, Chris the speller, Bluebot, Thumper-
ward, UU, Cybercobra, Jon Awbrey, Lambiam, Loadmaster, Yoderj, CBM, Ezrakilty, Gregbard, Nilfanion, Rotiro, Cydebot, Julian
Mendez, Asenine, SpK, Widefox, Royas, MER-C, Magioladitis, VoABot II, Vujke, Seba5618, Santiago Saint James, Kloisiie, Olmsfam,
Somejan, Josephholsten, The Tetrast, Philogo, Manusharma, Jamelan, Inductiveload, Dogah, CultureDrone, Francvs, ClueBot, Plastik-
spork, Achlaug, Watchduck, Dspark76, Hans Adler, Addbot, Meisam, Bunnyhop11, TaBOT-zerem, Erud, M&M987, Dante Cardoso
Pinto de Almeida, LittleWink, Dhanyavaada, Omerta-ve, Dega180, Gamewizard71, SobakaKachalova, RjwilmsiBot, Arielkoiman, Set
theorist, Zahnradzacken, Hpvpp, SporkBot, ClueBot NG, Masssly, Widr, Jones11235813, MerlIwBot, SOFooBah, Yamaha5, Brianlen,
SarahRMadden, Soa Koutsouveli, Narky Blert, AntonioRuedaToicen and Anonymous: 69
Sigma-algebra Source: https://en.wikipedia.org/wiki/Sigma-algebra?oldid=800163136 Contributors: AxelBoldt, Zundark, Tarquin, Iwn-
bap, Miguel~enwiki, Michael Hardy, Chinju, Karada, Stevan White, Charles Matthews, Dysprosia, Vrable, AndrewKepert, Fibonacci,
McKay, Robbot, Romanm, Aetheling, Ruakh, Tea2min, Giftlite, Lethe, MathKnight, Mboverload, Gubbubu, Gauss, Barnaby dawson, Vi-
vacissamamente, William Elliot, ArnoldReinhold, Paul August, Bender235, Zaslav, Elwikipedista~enwiki, MisterSheik, EmilJ, SgtThroat,
Jung dalglish, Tsirel, Passw0rd, Msh210, Jheald, Cmapm, Ultramarine, Oleg Alexandrov, Linas, Graham87, Salix alba, FlaBot, Mathbot,
Jrtayloriv, Chobot, Jayme, YurikBot, Lucinos~enwiki, Archelon, Trovatore, Mindthief, Solstag, Crasshopper, Dinno~enwiki, Nielses,
SmackBot, Melchoir, JanusDC, Object01, Dan Hoey, MalafayaBot, RayAYang, Nbarth, DHN-bot~enwiki, Javalenok, Gala.martin,
Henning Makholm, Lambiam, Dbtfz, Jim.belk, Mets501, Stotr~enwiki, Madmath789, CRGreathouse, CBM, David Cooke, Mct mht,
Blaisorblade, Xantharius, , Thijs!bot, Marek69, Escarbot, Keith111, Forgetfulfunctor, Quentar~enwiki, MSBOT, Magioladitis, Ro-
gierBrussee, Paartha, Joeabauer, Hippasus, Policron, Cerberus0, Digby Tantrum, Jmath666, Alfredo J. Herrera Lago, StevenJohnston,
Ocsenave, Tcamps42, SieBot, Melcombe, MicoFils~enwiki, Andrewbt, The Thing That Should Not Be, Mild Bill Hiccup, BearMa-
chine, 1ForTheMoney, DumZiBoT, Addbot, Luckas-bot, Yobot, Li3939108, Amirmath, Godvjrt, Xqbot, RibotBOT, Charvest, Fres-
coBot, BrideOfKripkenstein, AstaBOTh15, Stpasha, RedBot, Soumyakundu, Wikiborg4711, Stj6, TjBot, Max139, KHamsun, Ra5749,
Mikhail Ryazanov, ClueBot NG, Marcocapelle, Thegamer 298, QuarkyPi, Brad7777, AntanO, Shalapaws, Crawfoal, Dexbot, Y256,
Jochen Burghardt, A koyee314, Limit-theorem, Mark viking, NumSPDE, Moyaccercchi, Lewis Goudy, Killaman slaughtermaster, Daren-
Cline, 5D2262B74, Byyourleavesir, Amateur bert, KolbertBot and Anonymous: 99
Skolem normal form Source: https://en.wikipedia.org/wiki/Skolem_normal_form?oldid=798013127 Contributors: Charles Matthews,
Dysprosia, Aleph4, Gandalf61, Ashley Y, Thesilverbail, Jason Quinn, Gubbubu, Nortexoid, Oleg Alexandrov, Linas, Tizio, Winterstein,
Mathbot, Jrtayloriv, Fresheneesz, Jayme, Hairy Dude, 4C~enwiki, Maxme~enwiki, Jsnx, SmackBot, Leechuck, Commander Keane bot,
Mhss, Chris the speller, Xyzzy n, Tompsci, ILikeThings, Myasuda, Kayobee, Rriegs, David Eppstein, Policron, 83d40m, Heyitspeter,
VolkovBot, Pleasantville, Kyle the bot, IsleLaMotte, PixelBot, Hans Adler, El bot de la dieta, Addbot, Legobot, Yobot, AnomieBOT,
Xqbot, Omnipaedista, Klangenfurt, John of Reading, KYLEMONGER, Tijfo098, Wcherowi, BG19bot, Ruchkin, Jochen Burghardt,
Mark viking, Kal71, Jtron2000 and Anonymous: 29
SLD resolution Source: https://en.wikipedia.org/wiki/SLD_resolution?oldid=790003495 Contributors: AugPi, JeremyA, Pmcjones,
Grafen, SmackBot, AlexDitto, Momet, Pgr94, Gregbard, Alaibot, Thenub314, Tomaxer, Ctxppc, PaulBrinkley, PipepBot, Logperson,
Eusebius, Addbot, Jarble, Albertzeyer, Carbo1200, Tijfo098, BG19bot and Anonymous: 20
Standard translation Source: https://en.wikipedia.org/wiki/Standard_translation?oldid=433391110 Contributors: Simeon and Ponyo
Stone functor Source: https://en.wikipedia.org/wiki/Stone_functor?oldid=786531729 Contributors: Bearcat, Porton, John Z, TenPound-
Hammer, CBM, Shawn in Montreal, Blanchardb, AtheWeatherman, IkamusumeFan and Anonymous: 5
Stone space Source: https://en.wikipedia.org/wiki/Stone_space?oldid=770379543 Contributors: Michael Hardy, Charles Matthews, Markus
Krtzsch, David Eppstein, GeoreyT2000 and Anonymous: 1
Stone's representation theorem for Boolean algebras Source: https://en.wikipedia.org/wiki/Stone%27s_representation_theorem_for_
Boolean_algebras?oldid=800707600 Contributors: Zundark, Michael Hardy, Chinju, AugPi, Smack, Naddy, Tea2min, Giftlite, Markus
Krtzsch, Fropu, Vivacissamamente, Pjacobi, Chalst, Porton, Blotwell, Tsirel, Kuratowskis Ghost, Aleph0~enwiki, Linas, R.e.b., Yurik-
Bot, Trovatore, SmackBot, BeteNoir, Sharpcomputing, Mhss, Rschwieb, CBM, Christian75, Thijs!bot, JanCK, David Eppstein, Falcor84,
R'n'B, StevenJohnston, YonaBot, Alexbot, Beroal, EEng, Addbot, Luckas-bot, Yobot, GrouchoBot, Tkuvho, Slawekb, Nosuchforever,
Dexbot, TheCoeeAddict, Larry Eaglet, KolbertBot and Anonymous: 17
Strict conditional Source: https://en.wikipedia.org/wiki/Strict_conditional?oldid=788499731 Contributors: Michael Hardy, Charles
Matthews, Hyacinth, J heisenberg, Guanaco, Mu, Aquaeur, Citizensunshine, Chalst, Cdegough, Alansohn, Kbdank71, Tizio, KSchutte,
Arthur Rubin, Incnis Mrsi, Mhss, Melbournian, Ocanter, Beefyt, Gregbard, Cydebot, Magioladitis, Philogo, Graymornings, Addbot, Tide
rolls, Luckas-bot, AnomieBOT, Gemtpm, Machine Elf 1735, Hobbes Goodyear, D.Lazard, ClueBot NG, Helpful Pixie Bot, Hanlon1755,
Wadh27, Bender the Bot, Magic links bot and Anonymous: 17
Structural rule Source: https://en.wikipedia.org/wiki/Structural_rule?oldid=736195870 Contributors: Charles Matthews, Hyacinth, Ruakh, Kaustuv, STHayden, RDBury, Mhss, Fplay, Byelf2007, Soumyasch, Gregbard, Pi zero, Hans Adler, Addbot, Milksea, Noamz,
Mattg82, Erik9bot, RobinK, TomT0m, Mark viking, W. P. Uzer and Anonymous: 6
Substitution (logic) Source: https://en.wikipedia.org/wiki/Substitution_(logic)?oldid=798009645 Contributors: The Anome, Markhurd,
Chalst, Jhertel, Ruud Koot, Gdrbot, Cedar101, JDspeeder1, SmackBot, CBM, Gregbard, Cydebot, R'n'B, VolkovBot, Mild Bill Hiccup,
Addbot, Tassedethe, Jarble, Pcap, Erik9bot, Bomazi, Tijfo098, Mhiji, Jochen Burghardt, Lewis Goudy, SmerdjakovK and Anonymous:
6
Surjective function Source: https://en.wikipedia.org/wiki/Surjective_function?oldid=774567777 Contributors: AxelBoldt, Tarquin, Amil-
lar, XJaM, Toby Bartels, Michael Hardy, Wshun, Pit~enwiki, Karada, , Glenn, Jeandr du Toit, Hashar, Hawthorn, Charles
Matthews, Dysprosia, David Shay, Ed g2s, Phil Boswell, Aleph4, Robbot, Fredrik, Tea2min, Giftlite, Lethe, Jason Quinn, Jorge Stol,
Matt Crypto, Keeyu, Rheun, MarkSweep, AmarChandra, Tsemii, TheObtuseAngleOfDoom, Vivacissamamente, Rich Farmbrough,
Quistnix, Paul August, Bender235, Nandhp, Kevin Lamoreau, Larryv, Obradovic Goran, Dallashan~enwiki, Anders Kaseorg, ABCD,
Schapel, Oleg Alexandrov, Tbsmith, Mindmatrix, LOL, Waldir, Rjwilmsi, MarSch, FlaBot, Nihiltres, Chobot, Manscher, Algebraist,
Angus Lepper, Ksnortum, Rick Norwood, Sbyrnes321, SmackBot, Rotemliss, Bluebot, Javalenok, TedE, Soapergem, Dreadstar, Saip-
puakauppias, MickPurcell, 16@r, Inquisitus, CBM, MatthewMain, Gregbard, Marqueed, Sam Staton, Pjvpjv, Prolog, Salgueiro~enwiki,
JAnDbot, JamesBWatson, JJ Harrison, Martynas Patasius, MartinBot, TechnoFaye, Malerin, Dubhe.sk, Theabsurd, UnicornTapestry, Eli-
uha gmail.com, Anonymous Dissident, SieBot, SLMarcus, Paolo.dL, Peiresc~enwiki, Classicalecon, Jmcclaskey54, UKoch, Watchduck,
Bender2k14, SchreiberBike, Neuralwarp, Petru Dimitriu, Matthieumarechal, Kal-El-Bot, Addbot, Download, PV=nRT, , Zorrobot,
Jarble, Legobot, Luckas-bot, Yobot, Fraggle81, II MusLiM HyBRiD II, Xqbot, TechBot, Shvahabi, Raamaiden, Omnipaedista, Ap-
plebringer, Erik9bot, LucienBOT, Tbhotch, Xnn, Jowa fan, EmausBot, PrisonerOfIce, WikitanvirBot, GoingBatty, AManWithNoPlan,
Sasuketiimer, Maschen, Mjbmrbot, Anita5192, ClueBot NG, Helpful Pixie Bot, BG19bot, Cispyre, Lfahlberg, JPaestpreornJeolhlna,
TranquilHope and Anonymous: 92
Suslin algebra Source: https://en.wikipedia.org/wiki/Suslin_algebra?oldid=616031609 Contributors: R.e.b. and David Eppstein
Symmetric Boolean function Source: https://en.wikipedia.org/wiki/Symmetric_Boolean_function?oldid=742464417 Contributors: Michael
Hardy, Watchduck, Addbot, Luckas-bot, Twri, HamburgerRadio, DixonDBot, ZroBot, Mark viking and Anonymous: 1
Syncategorematic term Source: https://en.wikipedia.org/wiki/Syncategorematic_term?oldid=788980365 Contributors: AugPi, Drag-
onySixtyseven, Mike Rosoft, Paul August, Wbeek, Fram, SmackBot, Hmains, Tim Q. Wells, Markyb23, Gregbard, Cydebot, Ael 2,
SonofPorkins, LlywelynII, Omnipaedista, Dale Chock, John of Reading, Honestrosewater, Here today, gone tomorrow, HistorianofLogic,
Grammar conquistador, Helpful Pixie Bot, Khazar2, PrimeBOT and Anonymous: 7
System L Source: https://en.wikipedia.org/wiki/System_L?oldid=800739452 Contributors: Michael Hardy, Rjwilmsi, Cydebot, Richard-
Veryard, Alan U. Kennington, Hugo Herbelin, Aiken drum, Lchris314, KolbertBot and Anonymous: 3
Tarski's World Source: https://en.wikipedia.org/wiki/Tarski%27s_World?oldid=795997931 Contributors: Michael Hardy, AugPi, Bee-
Jay~enwiki, Erp, Gregbard, Msrasnw, AnomieBOT, BG19bot, Deltahedron, Lucker423, Bender the Bot and Anonymous: 1
Tautology (logic) Source: https://en.wikipedia.org/wiki/Tautology_(logic)?oldid=795058151 Contributors: Michael Hardy, Chris-martin,
Doradus, Markhurd, Tpbradbury, Furrykef, Hyacinth, Robbot, Fredrik, Kuszi, Giftlite, Allefant, Robertbowerman, Laurascudder, Nor-
texoid, Jumbuck, Anthony Appleyard, Hoary, Sligocki, Omphaloscope, Zntrip, Pchov, Miss Madeline, Apokrif, BD2412, Kbdank71,
Strait, Seliopou, Hairy Dude, RussBot, Chaser, ENeville, Trovatore, Cedar101, SmackBot, PizzaMargherita, Eupedia, Benjaminevans82,
Ccero, Jon Awbrey, Wvbailey, Dbtfz, Wtwilson3, JorisvS, Bjankuloski06en~enwiki, SimonATL, Rnb, Eastlaw, CBM, AshLin, Neelix,
RoddyYoung, Simeon, Gregbard, Cydebot, Rieman 82, Julian Mendez, Bsmntbombdood, Thijs!bot, Nick Number, Madder, Drake Wil-
son, JAnDbot, Robina Fox, Magioladitis, Rivertorch, Tanstaa28, Hitanshu D, Victor Blacus, Maurice Carbonaro, Policron, Diego, Squids
and Chips, StevenHirlston, VolkovBot, Kagnu, AlnoktaBOT, Mf140, Philogo, Jamelan, Brianga, SieBot, Laocon11, Indianandrew,
Fratrep, Francvs, XDanielx, Blanchardb, Auntof6, Jemmy Button, Homonihilis, Kasufcgslfguhvsne, Hans Adler, Lkruijsw, Gerhard-
valentin, WikHead, Addbot, MrVanBot, Mohsenkazempur, Meisam, Royalasa, Sz-iwbot, Materialscientist, Amanster, LilHelpa, Ayda
D, Xqbot, GrouchoBot, LennyCZ, RibotBOT, Jmlullo, Neurosojourn, Jusses2, MastiBot, Tbhotch, George Richard Leeming, Vramasub,
ArsenalTechKB, ClueBot NG, Kejia, Masssly, Langing, Benzband, Hanlon1755, ChrisGualtieri, zen, Jochen Burghardt, Golopotw,
Neurovibes, CoeeAddictUK and Anonymous: 90
Tautology (rule of inference) Source: https://en.wikipedia.org/wiki/Tautology_(rule_of_inference)?oldid=790777644 Contributors: Michael
Hardy, ENeville, BiT, Gregbard, GDW13, Alejandrocaro35, Tinton5, Tbhotch, RjwilmsiBot, John of Reading, Deacon Vorbis and
Anonymous: 5
Ternary equivalence relation Source: https://en.wikipedia.org/wiki/Ternary_equivalence_relation?oldid=756215679 Contributors: Rjwilmsi,
Melchoir and David Eppstein
Ternary relation Source: https://en.wikipedia.org/wiki/Ternary_relation?oldid=768559737 Contributors: Michael Hardy, Charles Matthews,
Tea2min, Ancheta Wis, Abdull, Paul August, El C, Rgdboer, Versageek, Oleg Alexandrov, Jerey O. Gustafson, RxS, DoubleBlue, Ni-
hiltres, TeaDrinker, Wknight94, Closedmouth, Luk, Sardanaphalus, SmackBot, KnowledgeOfSelf, Melchoir, C.Fred, BiT, Aksi great,
Nbarth, Jon Awbrey, Lambiam, JzG, Tim Q. Wells, Slakr, Politepunk, General Eisenhower, Happy-melon, Tawkerbot2, CBM, Gogo
Dodo, Alaibot, Luna Santin, Hut 8.5, Transcendence, Brusegadi, David Eppstein, JoergenB, Santiago Saint James, Brigit Zilwaukee,
Yolanda Zilwaukee, Fallopius Manque, Mike V, CardinalDan, Rei-bot, Seb26, GlobeGores, Lucien Odette, REBBU, RABBU, Wolf
of the Steppes, REBBU, Doubtentry, DEBBU, Education Is The Basis Of Law And Order, Bare In Mind, Preveiling Opinion Of
Dominant Opinion Group, Hans Adler, Buchanans Navy Sec, Mr. Peabodys Boy, Overstay, Marsboat, Unco Guid, Viva La Information
Revolution!, Autocratic Uzbek, Poke Salat Annie, Flower Mound Belle, Navy Pierre, Mrs. Lovetts Meat Puppets, Unknown Justin, IP
Phreely, West Goshen Guy, Delaware Valley Girl, Addbot, Jarble, Yobot, Vini 17bot5, AnomieBOT, In digma, Erik9bot, I dream of
horses, AManWithNoPlan, CitationCleanerBot and Anonymous: 5
Transposition (logic) Source: https://en.wikipedia.org/wiki/Transposition_(logic)?oldid=800894790 Contributors: AxelBoldt, Mrwojo,
Bdesham, Michael Hardy, Dominus, Charles Matthews, BenFrantzDale, Kwamikagami, Circeus, Cmdrjameson, Bgeer, Amerindianarts,
BD2412, Gaius Cornelius, Doncram, Crystallina, SmackBot, Reedy, Mhss, Drae, Gregbard, David Eppstein, Greenwoodtree, CitizenB,
Alan U. Kennington, Watchduck, Gerhardvalentin, Fluernutter, Noideta, Srich32977, RjwilmsiBot, Klbrain, Quondum, SporkBot,
Dooooot, Bender the Bot and Anonymous: 16
True quantified Boolean formula Source: https://en.wikipedia.org/wiki/True_quantified_Boolean_formula?oldid=741858594 Contributors: Edward, Michael Hardy, Charles Matthews, Neilc, Creidieki, EmilJ, Spug, Giraedata, Kusma, Twin Bird, Cedar101, Smack-
Bot, Karmastan, ForgeGod, Radagast83, Lambiam, Drae, Gregbard, Michael Fourman, David Eppstein, TXiKiBoT, Ilia Kr., Mis-
terspock1, Addbot, DOI bot, Citation bot, MauritsBot, Xqbot, Miym, Milimetr88, Citation bot 1, RobinK, RjwilmsiBot, EmausBot,
Dcirovic, KYLEMONGER, ChuispastonBot, Helpful Pixie Bot, Khazar2, Jochen Burghardt and Anonymous: 13
Truth table Source: https://en.wikipedia.org/wiki/Truth_table?oldid=800454606 Contributors: Mav, Bryan Derksen, Tarquin, Larry
Sanger, Webmaestro, Heron, Hephaestos, Stephen pomes, Bdesham, Patrick, Michael Hardy, Wshun, Liftarn, Ixfd64, Justin Johnson,
Delirium, Jimfbleak, AugPi, Andres, DesertSteve, Charles Matthews, Dcoetzee, Dysprosia, Markhurd, Hyacinth, Pakaran, Banno, Rob-
bot, RedWolf, Snobot, Ancheta Wis, Giftlite, Lethe, Jason Quinn, Vadmium, Lst27, Antandrus, JimWae, Schnits, Creidieki, Joyous!,
Rich Farmbrough, ArnoldReinhold, Paul August, CanisRufus, Gershwinrb, Robotje, Billymac00, Nortexoid, Photonique, Jonsafari, Mdd,
LutzL, Alansohn, Gary, Noosphere, Cburnett, Crystalllized, Bookandcoee, Oleg Alexandrov, Mindmatrix, Bluemoose, Abd, Graham87,
BD2412, Kbdank71, Xxpor, Rjwilmsi, JVz, Koavf, Tangotango, Bubba73, FlaBot, Maustrauser, Fresheneesz, Aeroknight, Chobot,
DVdm, Bgwhite, YurikBot, Wavelength, SpuriousQ, Trovatore, Sir48, Kyle Barbour, Cheese Sandwich, Pooryorick~enwiki, Rofthorax,
Cedar101, CWenger, LeonardoRob0t, Cmglee, KnightRider~enwiki, SmackBot, InverseHypercube, KnowledgeOfSelf, Vilerage, Can-
thusus, The Rhymesmith, Gilliam, Mhss, Gaiacarra, Deli nk, Can't sleep, clown will eat me, Chlewbot, Cybercobra, Uthren, Gschadow,
Jon Awbrey, Antonielly, Nakke, Dr Smith, Parikshit Narkhede, Beetstra, Dicklyon, Mets501, Iridescent, Richardcook, Danlev, CR-
Greathouse, CBM, WeggeBot, Gregbard, Slazenger, Starylon, Cydebot, Flowerpotman, Julian Mendez, Lee, Letranova, Oreo Priest, Anti-
VandalBot, Kitty Davis, Quintote, Vantelimus, K ganju, JAnDbot, Avaya1, Olaf, Holly golightly, Johnbrownsbody, R27smith200245, San-
tiago Saint James, Sevillana~enwiki, CZ Top, Aston Martian, On This Continent, LordAnubisBOT, Bergin, NewEnglandYankee, Policron,
Lights, VolkovBot, The Tetrast, Wiae, AlleborgoBot, Logan, SieBot, Paradoctor, Krawi, Djayjp, Flyer22 Reborn, WimdeValk, Francvs,
Someone the Person, JP.Martin-Flatin, Ocer781, ParisianBlade, Hans Adler, XTerminator2000, Wstorr, Vegetator, Aitias, Qwfp,
Staticshakedown, Addbot, Ghettoblaster, AgadaUrbanit, Ehrenkater, Kiril Simeonovski, C933103, Clon89, Luckas-bot, Yobot, Nallim-
bot, Fox89, Materialscientist, Racconish, Quad4rax, Xqbot, Addihockey10, RibotBOT, Jonesey95, Tom.Reding, MastiBot, Fixer88,
TBloemink, Onel5969, Mean as custard, K6ka, ZroBot, Tijfo098, ChuispastonBot, ClueBot NG, Smtchahal, Akuindo, Wcherowi,
Millermk, WikiPuppies, Jk2q3jrklse, Wbm1058, Bmusician, Ceklock, Joydeep, Supernerd11, CitationCleanerBot, Annina.kari, Achal
Singh, Wolfmanx122, La marts boys, JYBot, Darcourse, Seppi333, UY Scuti, Richard Kohar, ChamithN, Swashski, Aichotoitinhyeu97,
KasparBot, Adam9007, NoToleranceForIntolerance, The devil dipak, Deacon Vorbis, United Massachusetts and Anonymous: 296
Two-element Boolean algebra Source: https://en.wikipedia.org/wiki/Two-element_Boolean_algebra?oldid=786518528 Contributors:
Zundark, Michael Hardy, GTBacchus, Julesd, Nurg, Giftlite, Plugwash, Oleg Alexandrov, Linas, Igny, Salix alba, Trovatore, SmackBot,
Incnis Mrsi, Mhss, Nbarth, Nakon, NickPenguin, Lambiam, CmdrObot, CBM, Gregbard, Pjvpjv, Nick Number, Avaya1, David Epp-
stein, Gwern, R'n'B, WimdeValk, Classicalecon, Ngebendi, Hans Adler, Palnot, Addbot, Luckas-bot, AnomieBOT, FrescoBot, Jordgette,
Gernot.salzer, Tijfo098, Matthiaspaul, Tagremover, MCAllen91, Kephir, Assiliza, Fuebar, Bobanobahoba and Anonymous: 11
Unimodality Source: https://en.wikipedia.org/wiki/Unimodality?oldid=787274704 Contributors: Michael Hardy, Komap, Cherkash,
Henrygb, Commander Keane, Rjwilmsi, Wavelength, Bhny, Tropylium, Wen D House, Henning Makholm, JzG, CRGreathouse, Ma-
gioladitis, David Eppstein, DrMicro, Asaduzaman, Melcombe, Muhandes, Juanm55, Addbot, Nate Wessel, Yobot, Flavonoid, Isheden,
FrescoBot, Or michael, Duoduoduo, Gypped, Tgoossens, Helpful Pixie Bot, BG19bot, Op47, CitationCleanerBot, BattyBot, Chris-
Gualtieri, Limit-theorem, Virion123, Loraof, Iamyatin and Anonymous: 16
Uniqueness quantification Source: https://en.wikipedia.org/wiki/Uniqueness_quantification?oldid=796694069 Contributors: Toby Bar-
tels, Michael Hardy, Dominus, Liftarn, AugPi, Nikai, Dysprosia, Jogloran, Jni, MathMartin, Tea2min, Nagelfar, Giftlite, ErikNY, Ger-
rit, EmilJ, Kine, Mareino, Alansohn, Gpvos, Richwales, Oleg Alexandrov, Apokrif, Marudubshinki, Qwertyus, DVdm, Hairy Dude,
Ihope127, SmackBot, Melchoir, Ohnoitsjamie, Mhss, MalafayaBot, TenPoundHammer, The Man in Question, Dr Greg, Waggers, Mets501,
Shoeofdeath, JRSpriggs, Intel4004, CRGreathouse, CBM, FilipeS, Cydebot, Alaibot, Thijs!bot, James086, JAnDbot, Steveprutz, CTF83!,
David Eppstein, It Is Me Here, Quux0r, Crisperdue, VictorMak, SieBot, Tiddly Tom, BotMultichill, ClueBot, Robmac87, Hans Adler,
Pugget, XLinkBot, MystBot, Addbot, ., PV=nRT, Ptbotgourou, AnomieBOT, ItsAlwaysLupus, Materialscientist, OllieFury, In-
ferno, Lord of Penguins, Rocco.Rossi, Constructive editor, I dream of horses, Serols, Horcrux92, Slawekb, ClueBot NG, Peter James,
Widr, Wbm1058, Jpeterson8425$, Luca Balsanelli, BattyBot, Jochen Burghardt, K9re11, Innite0694, BU Rob13, Luis150902, Samuel
Vaughn D. Duncan, Deacon Vorbis and Anonymous: 44
Universal generalization Source: https://en.wikipedia.org/wiki/Universal_generalization?oldid=697631248 Contributors: Dieter Si-
mon, Michael Hardy, Raymer, AugPi, Dcoetzee, Oleg Alexandrov, Vegaswikian, Ilmari Karonen, SmackBot, Mhss, Jim.belk, CBM,
Gregbard, Peterdjones, Julian Mendez, Fyedernoggersnodden, Minimiscience, Rettetast, Nieske, Hans Adler, Addbot, Yobot, Erik9bot,
Lotje, ChuispastonBot, Vsaraph, Op47 and Anonymous: 12
Universal instantiation Source: https://en.wikipedia.org/wiki/Universal_instantiation?oldid=795794529 Contributors: Michael Hardy,
Kku, Charles Matthews, Hyacinth, Nortexoid, Awared, Akrabbim, SmackBot, Mhss, Byelf2007, Jim.belk, CBM, Simeon, Gregbard,
WinBot, JamesBWatson, GrahamHardy, SieBot, RatnimSnave, Eusebius, Alejandrocaro35, Burket, Addbot, Yobot, AnomieBOT, RJGray,
Erik9bot, Active Banana, Dcirovic, Vsaraph, Fan Singh Long, Jochen Burghardt, Mark viking, KathieHarine, ClarCharr and Anonymous:
8
Universal quantification Source: https://en.wikipedia.org/wiki/Universal_quantification?oldid=798784072 Contributors: Toby Bar-
tels, Ryguasu, Michael Hardy, Poor Yorick, Andres, Dcoetzee, Dysprosia, Robbot, Henrygb, Bkell, Giftlite, DocWatson42, Urhix-
idur, Thorwald, Kenj0418, Mykhal, Bobo192, Jpceayene, Spug, Joriki, Mindmatrix, Ruud Koot, Neocapitalist, Marudubshinki, Jsha-
dias, Salix alba, DoubleBlue, Laurentius, RobotE, Hairy Dude, Hede2000, Ihope127, Lemon-s, Dpakoha, Grafen, Tomisti, Arthur Ru-
bin, GrinBot~enwiki, Eigenlambda, SmackBot, Slashme, Pgk, Mhss, Clconway, Cinephile2, Khukri, Lambiam, 16@r, Mets501, Zero
sharp, DBooth, CBM, Gregbard, Cydebot, Benzi455, Julian Mendez, Robertinventor, Salgueiro~enwiki, JAnDbot, VoABot II, Avi-
cennasis, Felliax08, JMyrleFuller, Aeron Daly, Jimmaths, PGSONIC, Anonymous Dissident, The Thing That Should Not Be, Mild
Bill Hiccup, Jusdafax, Computer97, Addbot, Wsvlqc, Luckas-bot, Maxdamantus, AnomieBOT, E235, Citation bot, Jangirke, Fres-
coBot, Kiefer.Wolfowitz, Edsu, Miracle Pen, WikiPerfector, Quondum, ClueBot NG, Rich Smith, Helpful Pixie Bot, Faus, RedDotTrail,
Solomon7968, CitationCleanerBot, Jochen Burghardt, , Ktlabe, Logoprofeta, Bender the Bot, MagneticInk and Anonymous: 51
Unsatisfiable core Source: https://en.wikipedia.org/wiki/Unsatisfiable_core?oldid=759872195 Contributors: Edward, Michael Hardy,
Jok2000, DBeyer, Salmar, LouScheer, Gregbard, Alaibot, D123488, EgoWumpus, Neiyay and Anonymous: 8
Vector logic Source: https://en.wikipedia.org/wiki/Vector_logic?oldid=743447485 Contributors: Michael Hardy, Chris the speller, Mya-
suda, Almadana, Paradoctor, Yobot, FrescoBot, Josve05a, Frietjes, BG19bot, DPL bot and Anonymous: 9
Veitch chart Source: https://en.wikipedia.org/wiki/Karnaugh_map?oldid=800150074 Contributors: Bryan Derksen, Zundark, LA2, Pier-
reAbbat, Fubar Obfusco, Heron, BL~enwiki, Michael Hardy, Chan siuman, Justin Johnson, Seav, Chadloder, Iulianu, Nveitch, Bogdan-
giusca, GRAHAMUK, Jitse Niesen, Fuzheado, Colin Marquardt, Furrykef, Omegatron, Vaceituno, Ckape, Robbot, Naddy, Texture, Paul
Murray, Ancheta Wis, Giftlite, DocWatson42, SamB, Bovlb, Macrakis, Mobius, Goat-see, Ktvoelker, Grunt, Perey, Discospinster, Cae-
sar, Dcarter, MeltBanana, Murtasa, ZeroOne, Plugwash, Nigelj, Unstable-Element, Obradovic Goran, Pearle, Mdd, Phyzome, Jumbuck,
Fritzpoll, Snowolf, Wtshymanski, Cburnett, Bonzo, Kenyon, Acerperi, Wikiklrsc, Dionyziz, Eyreland, Marudubshinki, Jake Warten-
berg, MarSch, Mike Segal, Oblivious, Ligulem, Ademkader, Mathbot, Winhunter, Fresheneesz, Tardis, LeCire~enwiki, Bgwhite, Yurik-
Bot, RobotE, RussBot, SpuriousQ, B-Con, Anomie, Arichnad, Trovatore, RolandYoung, RazorICE, RUL3R, Rohanmittal, Cedar101,
Tim Parenti, Gulliveig, HereToHelp, RG2, Sinan Taifour, SmackBot, InverseHypercube, Thunder Wolf, Edgar181, Gilliam, Bluebot,
Thumperward, Villarinho, Moonshiner, DHN-bot~enwiki, Locriani, Sct72, HLwiKi, Michael.Pohoreski, Hex4def6, SashatoBot, Wvbai-
ley, MagnaMopus, Freewol, Vobrcz, Jmgonzalez, Augustojd, CRGreathouse, CBM, Jokes Free4Me, Reywas92, Czar Kirk, Tkynerd,
Thijs!bot, Headbomb, JustAGal, Jonnie5, CharlotteWebb, RazoreRobin, Leuko, Ndyguy, VoABot II, Swpb, Gantoniou, Carrige, R'n'B,
Yim~enwiki, JoeFloyd, Aervanath, FreddieRic, KylieTastic, Sigra~enwiki, TXiKiBoT, Cyberjoac, Cremepu222, MartinPackerIBM,
Kelum.kosala, Spinningspark, FxBit, Pitel, Serprex, SieBot, VVVBot, Aeoza, IdreamofJeanie, OKBot, Svick, Rrfwiki, WimdeValk,
Justin W Smith, Rjd0060, Unbuttered Parsnip, Czarko, Dsamarin, Watchduck, Sps00789, Hans Adler, Gciriani, B.Zsolt, Jmanigold, Tul-
lywinters, ChyranandChloe, Avoided, Cmr08, Writer130, Addbot, DOI bot, Loafers, Delaszk, Dmenet, AgadaUrbanit, Luckas-bot, Kar-
tano, Hhedeshian, SwisterTwister, Mhayes46, AnomieBOT, Jim1138, Utility Knife, Citation bot, Dannamite, ArthurBot, Pnettle, Miym,
GrouchoBot, TunLuek, Abed pacino, Macjohn2, BillNace, Amplitude101, Pdebonte, Biker Biker, Pinethicket, RedBot, The gulyan89,
SpaceFlight89, Trappist the monk, Vrenator, Katragadda465, RjwilmsiBot, Alessandro.goulartt, Zap Rowsdower, Norlik, Njoutram,
Rocketrod1960, Voomoo, ClueBot NG, Bukwoy, Matthiaspaul, AHA.SOLAX, Frietjes, Imyourfoot, Widr, Danim, Jk2q3jrklse, Spud-
puppy, Nbeverly, Ceklock, Giorgos.antoniou, Icigic, CARPON, Usmanraza9, Wolfmanx122, Shidh, Electricmun11, EuroCarGT, Yax-
inr, Mrphious, Jochen Burghardt, Mdcoope3, TheEpTic, Akosibrixy, Microchirp, Cheater00, Lennerton, GreenWeasel11, Loraof, Scipsy-
cho, BILL ABK, Acayl, ShigaIntern, InternetArchiveBot, GreenC bot, Gerdhuebner, Abduw09, Dhoni barath, NoahB123, Ngonz424,
Arun8277 and Anonymous: 279
Witness (mathematics) Source: https://en.wikipedia.org/wiki/Witness_(mathematics)?oldid=786594176 Contributors: Michael Hardy,
RussBot, Wvbailey, CBM, David Eppstein, WereSpielChequers, BlazerKnight, Tijfo098, Helpful Pixie Bot, Qetuth and ChrisGualtieri
Wolfram axiom Source: https://en.wikipedia.org/wiki/Wolfram_axiom?oldid=780584081 Contributors: Michael Hardy, Bearcat, Greg-
bard, Nick Number, Magioladitis, Pleasantville, Addbot, FrescoBot, EmausBot, Bourbaki78, BG19bot, G McGurk and Anonymous:
6
Zeroth-order logic Source: https://en.wikipedia.org/wiki/Zeroth-order_logic?oldid=787325531 Contributors: Michael Hardy, Aleph4,
Giftlite, Lethe, Kaldari, Paul August, Versageek, Kbdank71, RxS, Jameshsher, Michael Slone, Trovatore, SmackBot, Unschool, C.Fred,
Mhss, Jon Awbrey, Lambiam, JzG, Coredesat, Slakr, Mets501, CBM, Gregbard, Cydebot, Gogo Dodo, Julian Mendez, Majorly, Hut 8.5,
David Eppstein, Santiago Saint James, Ars Tottle, The Proposition That, The One I Love, Redrocket, CardinalDan, Seb26, GlobeGores,
DionysiusThrax, Maelgwnbot, Nn123645, Humain-comme, Hans Adler, Trainshift, Unco Guid, Viva La Information Revolution!, Au-
tocratic Uzbek, Poke Salat Annie, Flower Mound Belle, Navy Pierre, Mrs. Lovetts Meat Puppets, Unknown Justin, IP Phreely, West
Goshen Guy, Delaware Valley Girl, Addbot, ClueBot NG, Fraulein451, Mapplejacks and Anonymous: 11
Zhegalkin polynomial Source: https://en.wikipedia.org/wiki/Zhegalkin_polynomial?oldid=776268342 Contributors: Michael Hardy,
Macrakis, Bkkbrad, Rjwilmsi, GBL, Vaughan Pratt, CRGreathouse, Myasuda, Gregbard, Alaibot, Towopedia, Dougher, Jeepday, Hans
Adler, Addbot, DOI bot, Legobot, Luckas-bot, Yobot, Citation bot, Citation bot 1, Klbrain, Matthiaspaul, Jochen Burghardt, Nwezeakunel-
son and Anonymous: 1

273.7.2 Images
File:0001_0001_0001_1110_nonlinearity.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/66/0001_0001_0001_1110_nonlinearity.svg License: Public domain Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
File:2010-05-26_at_18-05-02.jpg Source: https://upload.wikimedia.org/wikipedia/commons/4/4e/2010-05-26_at_18-05-02.jpg License:
CC BY 3.0 Contributors: Own work Original artist: Marcovanhogan
File:3_Pottergate_-_geograph.org.uk_-_657140.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/a2/3_Pottergate_-_
geograph.org.uk_-_657140.jpg License: CC BY-SA 2.0 Contributors: From geograph.org.uk Original artist: Richard Croft
File:AND_Gate_diagram.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/41/AND_Gate_diagram.svg License: Pub-
lic domain Contributors: No machine-readable source provided. Own work assumed (based on copyright claims). Original artist: No
machine-readable author provided. Palmtree3000 assumed (based on copyright claims).
File:Ambox_important.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b4/Ambox_important.svg License: Public do-
main Contributors: Own work based on: Ambox scales.svg Original artist: Dsmurat, penubag
File:Arcsecant_Arccosecant.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/56/Arcsecant_Arccosecant.svg License:
CC BY-SA 3.0 Contributors: Own work Original artist: Geek3
File:Arcsine_Arccosine.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b4/Arcsine_Arccosine.svg License: CC BY-
SA 3.0 Contributors: Own work Original artist: Geek3
File:Arctangent_Arccotangent.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/9a/Arctangent_Arccotangent.svg Li-
cense: CC BY-SA 3.0 Contributors: Own work Original artist: Geek3
File:Associatividadecat.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a7/Associatividadecat.svg License: Public domain Contributors: This file was derived from Associatividadecat.png Original artist: Associatividadecat.png: Campani
File:Associativity_of_binary_operations_(without_question_marks).svg Source: https://upload.wikimedia.org/wikipedia/commons/
2/2e/Associativity_of_binary_operations_%28without_question_marks%29.svg License: CC0 Contributors: derivative work of File:Associativity
of binary operations.svg Original artist: File:Associativity of binary operations.svg: Talonnn
File:Associativity_of_real_number_addition.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f6/Associativity_of_real_
number_addition.svg License: CC BY 3.0 Contributors: Own work Original artist: Stephan Kulla (User:Stephan Kulla)
File:BDD.png Source: https://upload.wikimedia.org/wikipedia/commons/9/91/BDD.png License: CC-BY-SA-3.0 Contributors: Trans-
ferred from en.wikipedia to Commons. Original artist: The original uploader was IMeowbot at English Wikipedia
File:BDD2pdag.png Source: https://upload.wikimedia.org/wikipedia/commons/f/f4/BDD2pdag.png License: CC-BY-SA-3.0 Contrib-
utors: Transferred from en.wikipedia to Commons. Original artist: RUN at English Wikipedia
File:BDD2pdag_simple.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/90/BDD2pdag_simple.svg License: CC-BY-
SA-3.0 Contributors: Self made from BDD2pdag_simple.png (here and on English Wikipedia) Original artist: User:Selket and User:RUN
(original)
File:BDD_Variable_Ordering_Bad.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/28/BDD_Variable_Ordering_Bad.
svg License: CC-BY-SA-3.0 Contributors: self-made using CrocoPat, a tool for relational programming, and GraphViz dot, a tool for graph
layout Original artist: Dirk Beyer
File:BDD_Variable_Ordering_Good.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4b/BDD_Variable_Ordering_
Good.svg License: CC-BY-SA-3.0 Contributors: self-made using CrocoPat, a tool for relational programming, and GraphViz dot, a tool
for graph layout Original artist: Dirk Beyer
File:BDD_simple.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/14/BDD_simple.svg License: CC-BY-SA-3.0 Con-
tributors: self-made using CrocoPat, a tool for relational programming, and GraphViz dot, a tool for graph layout Original artist: Dirk
Beyer
File:Begriffsschrift_connective1.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/67/Begriffsschrift_connective1.svg
License: Public domain Contributors: Own work Original artist: Guus Hoekman
File:Bijection.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a5/Bijection.svg License: Public domain Contributors:
enwiki Original artist: en:User:Schapel
File:Bijective_composition.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a2/Bijective_composition.svg License: Pub-
lic domain Contributors: ? Original artist: ?
File:Bimodal.png Source: https://upload.wikimedia.org/wikipedia/commons/e/e2/Bimodal.png License: CC-BY-SA-3.0 Contributors:
? Original artist: ?
File:Bimodal_geological.PNG Source: https://upload.wikimedia.org/wikipedia/commons/b/bc/Bimodal_geological.PNG License: Pub-
lic domain Contributors: Own work Original artist: Tungsten
File:Binary_math_expression_tree.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/2a/Binary_math_expression_tree.
svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Jozef Sivek
File:Bloch_Sphere.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f4/Bloch_Sphere.svg License: CC BY-SA 3.0 Con-
tributors: Own work Original artist: Glosser.ca
File:BoolePlacque.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/ad/BoolePlacque.jpg License: Public domain Con-
tributors: Own work Original artist: Logicus
File:BoolePlaque2.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/9d/BoolePlaque2.jpg License: Public domain Con-
tributors: Own work Original artist: Logicus
File:BooleWindow(bottom_third).jpg Source: https://upload.wikimedia.org/wikipedia/commons/f/f8/BooleWindow%28bottom_third%
29.jpg License: Public domain Contributors: Own work Original artist: Logicus
File:Boole_House_Cork.jpg Source: https://upload.wikimedia.org/wikipedia/en/f/fb/Boole_House_Cork.jpg License: CC0 Contributors: self-made Original artist: SandStone
File:Boolean_functions_like_1000_nonlinearity.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/21/Boolean_functions_
like_1000_nonlinearity.svg License: Public domain Contributors: Own work Original artist: Lipedia
File:Boolean_satisfiability_vs_true_literal_counts.png Source: https://upload.wikimedia.org/wikipedia/commons/4/42/Boolean_satisfiability_
vs_true_literal_counts.png License: CC BY-SA 3.0 Contributors: Own work Original artist: Jochen Burghardt
File:CardContin.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/75/CardContin.svg License: Public domain Contrib-
utors: en:Image:CardContin.png Original artist: en:User:Trovatore, recreated by User:Stannered
File:Circuit-minimization.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b4/Circuit-minimization.svg License: CC
BY-SA 3.0 Contributors: Self-made, based on public-domain raster image Circuit-minimization.jpg, by user Uoft ftw, from Wikipedia.
Original artist: Steaphan Greene (talk)
File:Codomain2.SVG Source: https://upload.wikimedia.org/wikipedia/commons/6/64/Codomain2.SVG License: Public domain Con-
tributors: Own work (Original text: I created this work entirely by myself.) Original artist: Damien Karras (talk)
File:Commons-logo.svg Source: https://upload.wikimedia.org/wikipedia/en/4/4a/Commons-logo.svg License: PD Contributors: ? Orig-
inal artist: ?
File:Commutative_Addition.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/36/Commutative_Addition.svg License: CC-BY-SA-3.0 Contributors: self-made using previous GFDL work by Melchoir Original artist: Weston.pace. Attribution: Apples in image were created by Melchoir
File:Commutative_Word_Origin.PNG Source: https://upload.wikimedia.org/wikipedia/commons/d/da/Commutative_Word_Origin.
PNG License: Public domain Contributors: Annales de Gergonne, Tome V, pg. 98 Original artist: Francois Servois
File:Commutativity_of_binary_operations_(without_question_mark).svg Source: https://upload.wikimedia.org/wikipedia/commons/
1/17/Commutativity_of_binary_operations_%28without_question_mark%29.svg License: CC0 Contributors: derivative work of File:
Commutativity of binary operations.svg Original artist: File:Commutativity of binary operations.svg: Talonnn
File:Complex_ArcCot.jpg Source: https://upload.wikimedia.org/wikipedia/commons/6/60/Complex_ArcCot.jpg License: Public do-
main Contributors: Eigenes Werk (own work) made with mathematica Original artist: Jan Homann
File:Complex_ArcCsc.jpg Source: https://upload.wikimedia.org/wikipedia/commons/f/fd/Complex_ArcCsc.jpg License: Public do-
main Contributors: Eigenes Werk (own work) made with mathematica Original artist: Jan Homann
File:Complex_ArcSec.jpg Source: https://upload.wikimedia.org/wikipedia/commons/e/ec/Complex_ArcSec.jpg License: Public do-
main Contributors: Eigenes Werk (own work) made with mathematica Original artist: Jan Homann
File:Complex_arccos.jpg Source: https://upload.wikimedia.org/wikipedia/commons/4/4d/Complex_arccos.jpg License: Public domain
Contributors: made with mathematica, own work Original artist: Jan Homann
File:Complex_arcsin.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/be/Complex_arcsin.jpg License: Public domain
Contributors: made with mathematica, own work Original artist: Jan Homann
File:Complex_arctan.jpg Source: https://upload.wikimedia.org/wikipedia/commons/f/f5/Complex_arctan.jpg License: Public domain
Contributors: made with mathematica, own work Original artist: Jan Homann
File:Compound_of_five_tetrahedra.png Source: https://upload.wikimedia.org/wikipedia/commons/6/6c/Compound_of_five_tetrahedra.
png License: Attribution Contributors: Transferred from en.wikipedia to Commons. Original artist: ?
File:Crystal_Clear_app_kedit.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e8/Crystal_Clear_app_kedit.svg License:
LGPL Contributors: Own work Original artist: w:User:Tkgd, Everaldo Coelho and YellowIcon
File:DeMorganGates.GIF Source: https://upload.wikimedia.org/wikipedia/commons/3/3a/DeMorganGates.GIF License: CC BY 3.0
Contributors: Own work Original artist: Vaughan Pratt
File:DeMorgan_Logic_Circuit_diagram_DIN.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/db/DeMorgan_Logic_
Circuit_diagram_DIN.svg License: Public domain Contributors: Own work Original artist: MichaelFrey
File:De_Morgan_Augustus.jpg Source: https://upload.wikimedia.org/wikipedia/commons/2/2c/De_Morgan_Augustus.jpg License: Pub-
lic domain Contributors: Memoir of Augustus De Morgan Original artist: Sophia Elizabeth De Morgan
File:Demorganlaws.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/06/Demorganlaws.svg License: CC BY-SA 4.0
Contributors: Own work Original artist: Teknad
File:Descriptive.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c7/Descriptive.svg License: CC BY-SA 3.0 Contrib-
utors: Own work Original artist: ChristopherJamesHenry
File:Descriptively_Near_Sets.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/8c/Descriptively_Near_Sets.svg License:
CC BY-SA 3.0 Contributors: Own work Original artist: ChristopherJamesHenry
File:Dynkin_Diagram_Isomorphisms.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/03/Dynkin_Diagram_Isomorphisms.
svg License: CC BY-SA 3.0 Contributors:
Connected_Dynkin_Diagrams.svg Original artist: Connected_Dynkin_Diagrams.svg: R. A. Nonenmacher
File:EANear_and_eAMer_Supercategories.png Source: https://upload.wikimedia.org/wikipedia/commons/f/f1/EANear_and_eAMer_
Supercategories.png License: CC BY-SA 3.0 Contributors: Own work Original artist: ChristopherJamesHenry
File:Edit-clear.svg Source: https://upload.wikimedia.org/wikipedia/en/f/f2/Edit-clear.svg License: Public domain Contributors: The
Tango! Desktop Project. Original artist:
The people from the Tango! project. And according to the meta-data in the file, specifically: Andreas Nilsson, and Jakub Steiner (although
minimally).
File:Emoji_u1f510.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/35/Emoji_u1f510.svg License: Apache License
2.0 Contributors: https://github.com/googlei18n/noto-emoji/blob/f2a4f72/svg/emoji_u1f510.svg Original artist: Google
File:Example_of_A_is_a_proper_subset_of_B.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/56/Example_of_A_
is_a_proper_subset_of_B.svg License: CC0 Contributors: Own work Original artist: Stephan Kulla (User:Stephan Kulla)
File:Example_of_C_is_no_proper_subset_of_B.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/59/Example_of_C_
is_no_proper_subset_of_B.svg License: CC0 Contributors: Own work Original artist: Stephan Kulla (User:Stephan Kulla)
File:Folder_Hexagonal_Icon.svg Source: https://upload.wikimedia.org/wikipedia/en/4/48/Folder_Hexagonal_Icon.svg License: Cc-
by-sa-3.0 Contributors: ? Original artist: ?
File:Four-Bit_Majority_Circuit.png Source: https://upload.wikimedia.org/wikipedia/commons/4/4c/Four-Bit_Majority_Circuit.png
License: CC BY-SA 4.0 Contributors: Own work Original artist: EDickenson
File:Free-Boolean-algebra-unit-sloppy.png Source: https://upload.wikimedia.org/wikipedia/commons/7/7e/Free-Boolean-algebra-unit-sloppy.
png License: Public domain Contributors: LaTeXiT Original artist: Daniel Brown
File:Free-Boolean-algebra-unit.png Source: https://upload.wikimedia.org/wikipedia/commons/5/5c/Free-Boolean-algebra-unit.png
License: Public domain Contributors: LaTeXiT Original artist: Daniel Brown
File:Free-boolean-algebra-hasse-diagram.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/72/Free-boolean-algebra-hasse-diagram.
svg License: CC0 Contributors: Own work Original artist: Chris-martin
File:Frigyes_Riesz.jpeg Source: https://upload.wikimedia.org/wikipedia/commons/c/cb/Frigyes_Riesz.jpeg License: Public domain
Contributors: ? Original artist: ?
File:George_Boole_color.jpg Source: https://upload.wikimedia.org/wikipedia/commons/c/ce/George_Boole_color.jpg License: Public
domain Contributors: http://schools.keldysh.ru/sch444/museum/1_17-19.htm Original artist: Unknown
File:Giuseppe_Peano.jpg Source: https://upload.wikimedia.org/wikipedia/commons/3/3a/Giuseppe_Peano.jpg License: Public domain
Contributors: School of Mathematics and Statistics, University of St Andrews, Scotland [1] Original artist: Unknown
File:Globe_of_letters.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/de/Globe_of_letters.svg License: LGPL Contributors: Gnome-globe.svg, Globe of letters.png Original artist: Seahen
File:Graham's_Hierarchy_of_Disagreement-en.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a3/Graham%27s_
Hierarchy_of_Disagreement-en.svg License: CC BY 3.0 Contributors: hand-coded by uploader; based on en:Image:Graham's Hierarchy of Disagreement.jpg by Loudacris (originally from blog.createdebate.com) (Transferred from en.wikipedia to Commons by
Cloudbound.) Original artist: Loudacris. Modified by Rocket000
File:Graph_of_non-injective,_non-surjective_function_(red)_and_of_bijective_function_(green).gif Source: https://upload.wikimedia.
org/wikipedia/commons/b/b0/Graph_of_non-injective%2C_non-surjective_function_%28red%29_and_of_bijective_function_%28green%
29.gif License: CC BY-SA 3.0 Contributors: Own work Original artist: Jochen Burghardt
File:Greyfriars,_Lincoln_-_geograph.org.uk_-_106215.jpg Source: https://upload.wikimedia.org/wikipedia/commons/f/fe/Greyfriars%
2C_Lincoln_-_geograph.org.uk_-_106215.jpg License: CC BY-SA 2.0 Contributors: From geograph.org.uk Original artist: Dave Hitch-
borne
File:Hasse2Free.png Source: https://upload.wikimedia.org/wikipedia/commons/7/7c/Hasse2Free.png License: Public domain Contrib-
utors: ? Original artist: ?
File:Hasse_diagram_of_powerset_of_3.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/ea/Hasse_diagram_of_powerset_
of_3.svg License: CC-BY-SA-3.0 Contributors: self-made using graphviz's dot. Original artist: KSmrq
File:Hypostasis-diagram.png Source: https://upload.wikimedia.org/wikipedia/commons/a/a2/Hypostasis-diagram.png License: Public
domain Contributors: Own work. Transferred from the English Wikipedia Original artist: User:Drini
File:IMG_Tree.gif Source: https://upload.wikimedia.org/wikipedia/commons/6/6e/IMG_Tree.gif License: CC-BY-SA-3.0 Contribu-
tors: Transferred from zh.wikipedia to Commons by Shizhao using CommonsHelper. Original artist: The original uploader was Mhss at
Chinese Wikipedia
File:Illustration_of_distributive_property_with_rectangles.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/74/Illustration_
of_distributive_property_with_rectangles.svg License: CC0 Contributors: Own work Original artist: Stephan Kulla (User:Stephan Kulla)
File:Implication_graph.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/2f/Implication_graph.svg License: Public do-
main Contributors: Own work Original artist: David Eppstein
File:Injection.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/02/Injection.svg License: Public domain Contributors: ?
Original artist: ?
File:Injective_composition.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/df/Injective_composition.svg License: Pub-
lic domain Contributors: ? Original artist: ?
File:Internet_map_1024.jpg Source: https://upload.wikimedia.org/wikipedia/commons/d/d2/Internet_map_1024.jpg License: CC BY
2.5 Contributors: Originally from the English Wikipedia; description page is/was here. Original artist: The Opte Project
File:K-map_2x2_1,2,3,4.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/eb/K-map_2x2_1%2C2%2C3%2C4.svg Li-
cense: CC-BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_1,2,3.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/48/K-map_2x2_1%2C2%2C3.svg License: CC-
BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_1,2,4.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/42/K-map_2x2_1%2C2%2C4.svg License: CC-
BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_1,2.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c0/K-map_2x2_1%2C2.svg License: CC-BY-
SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_1,3,4.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/77/K-map_2x2_1%2C3%2C4.svg License: CC-
BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_1,3.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0b/K-map_2x2_1%2C3.svg License: CC-BY-
SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_1,4.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/8d/K-map_2x2_1%2C4.svg License: CC-BY-
SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_1.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d2/K-map_2x2_1.svg License: CC-BY-SA-3.0 Con-
tributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_2,3,4.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/80/K-map_2x2_2%2C3%2C4.svg License: CC-
BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_2,3.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/01/K-map_2x2_2%2C3.svg License: CC-BY-
SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_2,4.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4a/K-map_2x2_2%2C4.svg License: CC-BY-
SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_2.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/9f/K-map_2x2_2.svg License: CC-BY-SA-3.0 Con-
tributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_3,4.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/55/K-map_2x2_3%2C4.svg License: CC-BY-
SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_3.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/db/K-map_2x2_3.svg License: CC-BY-SA-3.0 Con-
tributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_4.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/9e/K-map_2x2_4.svg License: CC-BY-SA-3.0 Con-
tributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_2x2_none.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f5/K-map_2x2_none.svg License: CC-BY-SA-
3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:K-map_6,8,9,10,11,12,13,14.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b7/K-map_6%2C8%2C9%2C10%
2C11%2C12%2C13%2C14.svg License: CC-BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist:
en:User:Cburnett
File:K-map_6,8,9,10,11,12,13,14_anti-race.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/02/K-map_6%2C8%2C9%
2C10%2C11%2C12%2C13%2C14_anti-race.svg License: CC-BY-SA-3.0 Contributors: This vector image was created with Inkscape.
Original artist: en:User:Cburnett
File:K-map_6,8,9,10,11,12,13,14_don't_care.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/00/K-map_6%2C8%
2C9%2C10%2C11%2C12%2C13%2C14_don%27t_care.svg License: CC-BY-SA-3.0 Contributors: This vector image was created with
Inkscape. Original artist: en:User:Cburnett
File:K-map_minterms_A.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1a/K-map_minterms_A.svg License: CC-
BY-SA-3.0 Contributors: en:User:Cburnett - modification of Image:K-map_minterms.svg Original artist: Werneuchen
File:Karnaugh6.gif Source: https://upload.wikimedia.org/wikipedia/commons/3/3b/Karnaugh6.gif License: CC BY-SA 3.0 Contribu-
tors: Own work Original artist: Jochen Burghardt
File:Karnaugh_map_KV_4mal4_18.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0c/Karnaugh_map_KV_4mal4_
18.svg License: Public domain Contributors: Own work Original artist: RosarioVanTulpe
File:Karnaugh_map_KV_4mal4_19.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d4/Karnaugh_map_KV_4mal4_
19.svg License: Public domain Contributors: Own work Original artist: RosarioVanTulpe
File:KleinBottle-01.png Source: https://upload.wikimedia.org/wikipedia/commons/4/46/KleinBottle-01.png License: Public domain
Contributors: ? Original artist: ?
File:LAlphabet_AND.jpg Source: https://upload.wikimedia.org/wikipedia/en/7/73/LAlphabet_AND.jpg License: Cc-by-sa-3.0 Con-
tributors: ? Original artist: ?
File:LAlphabet_AND_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/c/c1/LAlphabet_AND_table.jpg License: Cc-by-
sa-3.0 Contributors: ? Original artist: ?
File:LAlphabet_F.jpg Source: https://upload.wikimedia.org/wikipedia/en/f/fd/LAlphabet_F.jpg License: Cc-by-sa-3.0 Contributors: ?
Original artist: ?
File:LAlphabet_FI.jpg Source: https://upload.wikimedia.org/wikipedia/en/d/d2/LAlphabet_FI.jpg License: Cc-by-sa-3.0 Contributors:
? Original artist: ?
File:LAlphabet_FI_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/a/a3/LAlphabet_FI_table.jpg License: Cc-by-sa-3.0
Contributors: ? Original artist: ?
File:LAlphabet_F_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/6/6f/LAlphabet_F_table.jpg License: Cc-by-sa-3.0
Contributors: ? Original artist: ?
File:LAlphabet_IFF.jpg Source: https://upload.wikimedia.org/wikipedia/en/2/26/LAlphabet_IFF.jpg License: Cc-by-sa-3.0 Contribu-
tors: ? Original artist: ?
File:LAlphabet_IFF_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/3/30/LAlphabet_IFF_table.jpg License: Cc-by-sa-
3.0 Contributors: ? Original artist: ?
File:LAlphabet_IFTHEN.jpg Source: https://upload.wikimedia.org/wikipedia/en/c/c8/LAlphabet_IFTHEN.jpg License: Cc-by-sa-
3.0 Contributors: ? Original artist: ?
File:LAlphabet_IFTHEN_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/0/0d/LAlphabet_IFTHEN_table.jpg License:
Cc-by-sa-3.0 Contributors: ? Original artist: ?
File:LAlphabet_NAND.jpg Source: https://upload.wikimedia.org/wikipedia/en/6/6e/LAlphabet_NAND.jpg License: Cc-by-sa-3.0 Con-
tributors: ? Original artist: ?
File:LAlphabet_NAND_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/c/c7/LAlphabet_NAND_table.jpg License: Cc-
by-sa-3.0 Contributors: ? Original artist: ?
File:LAlphabet_NFI.jpg Source: https://upload.wikimedia.org/wikipedia/en/f/f6/LAlphabet_NFI.jpg License: Cc-by-sa-3.0 Contribu-
tors: ? Original artist: ?
File:LAlphabet_NFI_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/b/ba/LAlphabet_NFI_table.jpg License: Cc-by-sa-
3.0 Contributors: ? Original artist: ?
File:LAlphabet_NIF.jpg Source: https://upload.wikimedia.org/wikipedia/en/4/49/LAlphabet_NIF.jpg License: Cc-by-sa-3.0 Contrib-
utors: ? Original artist: ?
File:LAlphabet_NIF_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/3/35/LAlphabet_NIF_table.jpg License: Cc-by-sa-
3.0 Contributors: ? Original artist: ?
File:LAlphabet_NOR.jpg Source: https://upload.wikimedia.org/wikipedia/en/b/b9/LAlphabet_NOR.jpg License: Cc-by-sa-3.0 Con-
tributors: ? Original artist: ?
File:LAlphabet_NOR_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/1/12/LAlphabet_NOR_table.jpg License: Cc-by-
sa-3.0 Contributors: ? Original artist: ?
File:LAlphabet_NOTP.jpg Source: https://upload.wikimedia.org/wikipedia/en/a/a0/LAlphabet_NOTP.jpg License: Cc-by-sa-3.0 Con-
tributors: ? Original artist: ?
File:LAlphabet_NOTP_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/9/92/LAlphabet_NOTP_table.jpg License: Cc-
by-sa-3.0 Contributors: ? Original artist: ?
File:LAlphabet_NOTQ.jpg Source: https://upload.wikimedia.org/wikipedia/en/0/0d/LAlphabet_NOTQ.jpg License: Cc-by-sa-3.0 Con-
tributors: ? Original artist: ?
File:LAlphabet_NOTQ_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/e/e1/LAlphabet_NOTQ_table.jpg License: Cc-
by-sa-3.0 Contributors: ? Original artist: ?
File:LAlphabet_OR.jpg Source: https://upload.wikimedia.org/wikipedia/en/9/99/LAlphabet_OR.jpg License: Cc-by-sa-3.0 Contribu-
tors: ? Original artist: ?
File:LAlphabet_OR_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/0/09/LAlphabet_OR_table.jpg License: Cc-by-sa-
3.0 Contributors: ? Original artist: ?
File:LAlphabet_P.jpg Source: https://upload.wikimedia.org/wikipedia/en/b/bd/LAlphabet_P.jpg License: Cc-by-sa-3.0 Contributors:
? Original artist: ?
File:LAlphabet_P_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/0/0a/LAlphabet_P_table.jpg License: Cc-by-sa-3.0
Contributors: ? Original artist: ?
File:LAlphabet_Q.jpg Source: https://upload.wikimedia.org/wikipedia/en/1/13/LAlphabet_Q.jpg License: Cc-by-sa-3.0 Contributors:
? Original artist: ?
File:LAlphabet_Q_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/4/47/LAlphabet_Q_table.jpg License: Cc-by-sa-3.0
Contributors: ? Original artist: ?
File:LAlphabet_T.jpg Source: https://upload.wikimedia.org/wikipedia/en/d/d4/LAlphabet_T.jpg License: Cc-by-sa-3.0 Contributors:
? Original artist: ?
File:LAlphabet_T_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/b/b4/LAlphabet_T_table.jpg License: Cc-by-sa-3.0
Contributors: ? Original artist: ?
File:LAlphabet_XOR.jpg Source: https://upload.wikimedia.org/wikipedia/en/2/22/LAlphabet_XOR.jpg License: Cc-by-sa-3.0 Con-
tributors: ? Original artist: ?
File:LAlphabet_XOR_table.jpg Source: https://upload.wikimedia.org/wikipedia/en/8/82/LAlphabet_XOR_table.jpg License: Cc-by-
sa-3.0 Contributors: ? Original artist: ?
File:LampFlowchart.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/91/LampFlowchart.svg License: CC-BY-SA-
3.0 Contributors: vector version of Image:LampFlowchart.png Original artist: svg by Booyabazooka

File:Laws_of_Form_-_a_and_b.gif Source: https://upload.wikimedia.org/wikipedia/commons/e/e0/Laws_of_Form_-_a_and_b.gif Li-
cense: CC-BY-SA-3.0 Contributors: ? Original artist: ?
File:Laws_of_Form_-_a_or_b.gif Source: https://upload.wikimedia.org/wikipedia/commons/3/36/Laws_of_Form_-_a_or_b.gif Li-
cense: CC-BY-SA-3.0 Contributors: ? Original artist: ?
File:Laws_of_Form_-_cross.gif Source: https://upload.wikimedia.org/wikipedia/commons/0/06/Laws_of_Form_-_cross.gif License:
CC-BY-SA-3.0 Contributors: Sam (talk) (Uploads) Original artist: Sam (talk) (Uploads)
File:Laws_of_Form_-_double_cross.gif Source: https://upload.wikimedia.org/wikipedia/commons/f/ff/Laws_of_Form_-_double_cross.
gif License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
File:Laws_of_Form_-_if_a_then_b.gif Source: https://upload.wikimedia.org/wikipedia/commons/4/4f/Laws_of_Form_-_if_a_then_
b.gif License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
File:Laws_of_Form_-_not_a.gif Source: https://upload.wikimedia.org/wikipedia/commons/0/09/Laws_of_Form_-_not_a.gif License:
CC-BY-SA-3.0 Contributors: ? Original artist: ?
File:Lebesgue_Icon.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c9/Lebesgue_Icon.svg License: Public domain
Contributors: w:Image:Lebesgue_Icon.svg Original artist: w:User:James pic
File:Linguistics_stub.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/dc/Linguistics_stub.svg License: Public domain
Contributors: ? Original artist: ?
File:Lock-green.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/65/Lock-green.svg License: CC0 Contributors: en:
File:Free-to-read_lock_75.svg Original artist: User:Trappist the monk
File:Logic.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e7/Logic.svg License: CC BY-SA 3.0 Contributors: Own
work Original artist: It Is Me Here
File:LogicGates.GIF Source: https://upload.wikimedia.org/wikipedia/commons/4/41/LogicGates.GIF License: CC BY 3.0 Contribu-
tors: Own work Original artist: Vaughan Pratt
File:Logic_portal.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/7c/Logic_portal.svg License: CC BY-SA 3.0 Con-
tributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
File:Logical_connectives_Hasse_diagram.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3e/Logical_connectives_
Hasse_diagram.svg License: Public domain Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
File:Límite_01.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d1/L%C3%ADmite_01.svg License: Public domain
Contributors: Own work Original artist: User:HiTe
File:Merge-arrow.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/aa/Merge-arrow.svg License: Public domain Con-
tributors: ? Original artist: ?
File:Minimally_Descriptive_Near_Sets.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/32/Minimally_Descriptive_
Near_Sets.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: ChristopherJamesHenry
File:Multigrade_operator_AND.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/74/Multigrade_operator_AND.svg
License: Public domain Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
File:Multigrade_operator_XNOR.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/66/Multigrade_operator_XNOR.
svg License: Public domain Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
File:Multigrade_operator_XOR.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1e/Multigrade_operator_XOR.svg
License: Public domain Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
File:Multigrade_operator_all_or_nothing.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3b/Multigrade_operator_
all_or_nothing.svg License: Public domain Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
File:NearImages.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0d/NearImages.svg License: Public domain Contrib-
utors: Own work Original artist: NearSetAccount
File:Near_gui.jpg Source: https://upload.wikimedia.org/wikipedia/commons/f/f3/Near_gui.jpg License: CC BY-SA 3.0 Contributors:
Own work Original artist: NearSetAccount
File:Nicolas_P._Rougier's_rendering_of_the_human_brain.png Source: https://upload.wikimedia.org/wikipedia/commons/7/73/
Nicolas_P._Rougier%27s_rendering_of_the_human_brain.png License: GPL Contributors: http://www.loria.fr/~rougier Original artist:
Nicolas Rougier
File:Non-surjective_function2.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f0/Non-surjective_function2.svg Li-
cense: CC BY-SA 3.0 Contributors: http://en.wikipedia.org/wiki/File:Non-surjective_function.svg Original artist: original version: Maschen,
the correction: raamaiden
File:Normal_distribution_pdf.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fb/Normal_distribution_pdf.svg License:
CC-BY-SA-3.0 Contributors: ? Original artist: ?
File:Nuvola_apps_edu_mathematics_blue-p.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3e/Nuvola_apps_edu_
mathematics_blue-p.svg License: GPL Contributors: Derivative work from Image:Nuvola apps edu mathematics.png and Image:Nuvola
apps edu mathematics-p.svg Original artist: David Vignoni (original icon); Flamurai (SVG conversion); bayo (color)
File:Or-gate-en.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4c/Or-gate-en.svg License: CC-BY-SA-3.0 Contrib-
utors: ? Original artist: ?
File:P_vip.svg Source: https://upload.wikimedia.org/wikipedia/en/6/69/P_vip.svg License: PD Contributors: ? Original artist: ?
File:Partial_function.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/cd/Partial_function.svg License: Public domain
Contributors: ? Original artist: ?
File:People_icon.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/37/People_icon.svg License: CC0 Contributors: Open-
Clipart Original artist: OpenClipart
File:PinocchioChiostri22.jpg Source: https://upload.wikimedia.org/wikipedia/commons/f/f0/PinocchioChiostri22.jpg License: Public
domain Contributors: Scan of a book: Collodi, Le avventure di Pinocchio, 1901 Original artist: Carlo Chiostri (1863 - 1939)
File:Portal-puzzle.svg Source: https://upload.wikimedia.org/wikipedia/en/f/fd/Portal-puzzle.svg License: Public domain Contributors:
? Original artist: ?
File:Proofstrength.png Source: https://upload.wikimedia.org/wikipedia/en/9/90/Proofstrength.png License: PD Contributors:
I (Bynne (talk)) created this work entirely by myself. Original artist:
Bynne (talk)
File:Prop-tableau-4.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/21/Prop-tableau-4.svg License: CC-BY-SA-3.0
Contributors: Transferred from en.wikipedia to Commons by Piquart using CommonsHelper. Original artist: Tizio at English Wikipedia
File:Propositional_formula_3.png Source: https://upload.wikimedia.org/wikipedia/commons/d/dc/Propositional_formula_3.png Li-
cense: CC-BY-SA-3.0 Contributors: Drawn by wvbailey in Autosketch then imported into Adobe Acrobat and exported as .png. Original
artist: User:Wvbailey
File:Propositional_formula_NANDs.png Source: https://upload.wikimedia.org/wikipedia/commons/c/c9/Propositional_formula_NANDs.
png License: CC-BY-SA-3.0 Contributors: Own work Original artist: User:Wvbailey
File:Propositional_formula_connectives_1.png Source: https://upload.wikimedia.org/wikipedia/commons/c/ca/Propositional_formula_
connectives_1.png License: CC-BY-SA-3.0 Contributors: Own work by the original uploader Original artist: User:Wvbailey
File:Propositional_formula_flip_flops_1.png Source: https://upload.wikimedia.org/wikipedia/commons/5/5b/Propositional_formula_
flip_flops_1.png License: CC-BY-SA-3.0 Contributors: Own work by the original uploader Original artist: User:Wvbailey
File:Propositional_formula_maps_1.png Source: https://upload.wikimedia.org/wikipedia/commons/b/bb/Propositional_formula_maps_
1.png License: CC-BY-SA-3.0 Contributors: Own work by the original uploader Original artist: User:Wvbailey
File:Propositional_formula_maps_2.png Source: https://upload.wikimedia.org/wikipedia/commons/9/90/Propositional_formula_maps_
2.png License: CC-BY-SA-3.0 Contributors: Own work by the original uploader Original artist: User:Wvbailey
File:Propositional_formula_oscillator_1.png Source: https://upload.wikimedia.org/wikipedia/commons/e/e3/Propositional_formula_
oscillator_1.png License: CC-BY-SA-3.0 Contributors: Own work by the original uploader Original artist: User:Wvbailey
File:Proximity_System.png Source: https://upload.wikimedia.org/wikipedia/commons/c/cd/Proximity_System.png License: CC BY-
SA 3.0 Contributors: Own work Original artist: ChristopherJamesHenry
File:Question_book-new.svg Source: https://upload.wikimedia.org/wikipedia/en/9/99/Question_book-new.svg License: Cc-by-sa-3.0
Contributors:
Created from scratch in Adobe Illustrator. Based on Image:Question book.png created by User:Equazcion Original artist:
Tkgd2007
File:Rotate_left.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/09/Rotate_left.svg License: CC-BY-SA-3.0 Contrib-
utors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:Rotate_left_logically.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/5c/Rotate_left_logically.svg License: CC-
BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:Rotate_left_through_carry.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/71/Rotate_left_through_carry.svg Li-
cense: CC-BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:Rotate_right.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/37/Rotate_right.svg License: CC-BY-SA-3.0 Con-
tributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:Rotate_right_arithmetically.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/37/Rotate_right_arithmetically.svg
License: CC-BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:Rotate_right_logically.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/64/Rotate_right_logically.svg License: CC-
BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:Rotate_right_through_carry.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/27/Rotate_right_through_carry.svg
License: CC-BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: en:User:Cburnett
File:Rubik's_cube_v3.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b6/Rubik%27s_cube_v3.svg License: CC-
BY-SA-3.0 Contributors: Image:Rubik's cube v2.svg Original artist: User:Booyabazooka, User:Meph666 modified by User:Niabot
File:Sample_EF-space.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/bf/Sample_EF-space.svg License: CC BY-SA
3.0 Contributors: Own work Original artist: ChristopherJamesHenry
File:Sat_reduced_to_Clique_from_Sipser.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a5/Sat_reduced_to_Clique_
from_Sipser.svg License: CC BY-SA 3.0 Contributors: Own work (Original text: I (Thore Husfeldt (talk)) created this work entirely by
myself.) Original artist: Thore Husfeldt (talk)
File:Schaefer's_3-SAT_to_1-in-3-SAT_reduction.gif Source: https://upload.wikimedia.org/wikipedia/commons/9/9b/Schaefer%
27s_3-SAT_to_1-in-3-SAT_reduction.gif License: CC BY-SA 3.0 Contributors: Own work Original artist: Jochen Burghardt
File:Semigroup_associative.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/80/Semigroup_associative.svg License:
CC BY-SA 4.0 Contributors: Own work Original artist: IkamusumeFan
File:Sinus_und_Kosinus_am_Einheitskreis_1.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/72/Sinus_und_Kosinus_
am_Einheitskreis_1.svg License: CC0 Contributors: Own work Original artist: Stephan Kulla (User:Stephan Kulla)
File:Socrates.png Source: https://upload.wikimedia.org/wikipedia/commons/c/cd/Socrates.png License: Public domain Contributors:
Transferred from en.wikipedia to Commons. Original artist: The original uploader was Magnus Manske at English Wikipedia Later
versions were uploaded by Optimager at en.wikipedia.
File:Software_spanner.png Source: https://upload.wikimedia.org/wikipedia/commons/8/82/Software_spanner.png License: CC-BY-
SA-3.0 Contributors: Transferred from en.wikipedia to Commons by Rockfang. Original artist: CharlesC at English Wikipedia
File:Square_of_opposition,_set_diagrams.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/51/Square_of_opposition%
2C_set_diagrams.svg License: Public domain Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
File:Stone_functor.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d2/Stone_functor.svg License: CC BY-SA 4.0 Con-
tributors: Own work Original artist: IkamusumeFan
File:Surjection.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/6c/Surjection.svg License: Public domain Contribu-
tors: ? Original artist: ?
File:Surjective_composition.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a2/Surjective_composition.svg License:
Public domain Contributors: ? Original artist: ?
File:Surjective_function.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4d/Surjective_function.svg License: Public
domain Contributors: Own work Original artist: Maschen
File:Symbol_book_class2.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/89/Symbol_book_class2.svg License: CC
BY-SA 2.5 Contributors: Made by Lokal_Profil by combining: Original artist: Lokal_Profil
File:Symmetry_Of_Addition.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4a/Symmetry_Of_Addition.svg License:
Public domain Contributors: Own work Original artist: Weston.pace
File:T_30.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/10/T_30.svg License: CC0 Contributors: Own work Original
artist: Mini-oh
File:Tamari_lattice.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/46/Tamari_lattice.svg License: Public domain Con-
tributors: Own work Original artist: David Eppstein
File:Text_document_with_red_question_mark.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a4/Text_document_
with_red_question_mark.svg License: Public domain Contributors: Created by bdesham with Inkscape; based upon Text-x-generic.svg
from the Tango project. Original artist: Benjamin D. Esham (bdesham)
File:Torus_from_rectangle.gif Source: https://upload.wikimedia.org/wikipedia/commons/6/60/Torus_from_rectangle.gif License: Pub-
lic domain Contributors: Own work Original artist: Kieff
File:Total_function.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/02/Total_function.svg License: Public domain
Contributors: ? Original artist: ?
File:Translation_to_english_arrow.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/8a/Translation_to_english_arrow.
svg License: CC-BY-SA-3.0 Contributors: Own work, based on :Image:Translation_arrow.svg. Created in Adobe Illustrator CS3 Original
artist: tkgd2007
File:Trigonometric_functions_and_inverse.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0b/Trigonometric_functions_
and_inverse.svg License: CC0 Contributors: Own work Original artist: Maschen
File:Trigonometric_functions_and_inverse2.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d8/Trigonometric_functions_
and_inverse2.svg License: CC0 Contributors: Own work Original artist: Maschen
File:Trigonometric_functions_and_inverse3.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fa/Trigonometric_functions_
and_inverse3.svg License: CC0 Contributors: Own work Original artist: Maschen
File:Trigonometric_functions_and_inverse4.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/24/Trigonometric_functions_
and_inverse4.svg License: CC0 Contributors: Own work Original artist: Maschen
File:Trigonometric_functions_and_inverse5.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/17/Trigonometric_functions_
and_inverse5.svg License: CC0 Contributors: Own work Original artist: Maschen
File:Trigonometric_functions_and_inverse6.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/61/Trigonometric_functions_
and_inverse6.svg License: CC0 Contributors: Own work Original artist: Maschen
File:Trigonometry_triangle.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/7e/Trigonometry_triangle.svg License: CC-
BY-SA-3.0 Contributors: Transferred from en.wikipedia to Commons. Original artist: The original uploader was Tarquin at English
Wikipedia Later versions were uploaded by Limaner at en.wikipedia.
File:Tsitkin_frames.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/18/Tsitkin_frames.svg License: CC BY-SA 3.0
Contributors: Own work Original artist: EmilJ
File:Unbalanced_scales.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fe/Unbalanced_scales.svg License: Public do-
main Contributors: ? Original artist: ?
File:Vector_Addition.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3d/Vector_Addition.svg License: Public do-
main Contributors: No machine-readable source provided. Own work assumed (based on copyright claims). Original artist: No machine-
readable author provided. Booyabazooka assumed (based on copyright claims).
File:Venn00.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/5c/Venn00.svg License: Public domain Contributors: Own
work Original artist: Watchduck (a.k.a. Tilman Piesk)
File:Venn0001.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/99/Venn0001.svg License: Public domain Contributors:
? Original artist: ?
File:Venn0010.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/5a/Venn0010.svg License: Public domain Contributors:
? Original artist: ?
File:Venn0011.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/76/Venn0011.svg License: Public domain Contributors:
? Original artist: ?
File:Venn01.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/06/Venn01.svg License: Public domain Contributors: Own
work Original artist: Watchduck (a.k.a. Tilman Piesk)
File:Venn0100.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e6/Venn0100.svg License: Public domain Contributors:
? Original artist: ?
File:Venn0101.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/10/Venn0101.svg License: Public domain Contributors:
? Original artist: ?
File:Venn0110.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/46/Venn0110.svg License: Public domain Contributors:
? Original artist: ?
File:Venn0111.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/30/Venn0111.svg License: Public domain Contributors:
? Original artist: ?
File:Venn10.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/73/Venn10.svg License: Public domain Contributors: Own
work Original artist: Watchduck (a.k.a. Tilman Piesk)
File:Venn1000.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3c/Venn1000.svg License: Public domain Contributors:
? Original artist: ?
File:Venn1001.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/47/Venn1001.svg License: Public domain Contributors:
? Original artist: ?
File:Venn1010.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/eb/Venn1010.svg License: Public domain Contributors:
? Original artist: ?
File:Venn1011.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1e/Venn1011.svg License: Public domain Contributors:
? Original artist: ?
File:Venn11.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3f/Venn11.svg License: Public domain Contributors: Own
work Original artist: Watchduck (a.k.a. Tilman Piesk)
File:Venn1100.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/31/Venn1100.svg License: Public domain Contributors:
? Original artist: ?
File:Venn1101.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/35/Venn1101.svg License: Public domain Contributors:
? Original artist: ?
File:Venn1110.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/cb/Venn1110.svg License: Public domain Contributors:
? Original artist: ?
File:Venn_0000_0001.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3e/Venn_0000_0001.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0000_0011.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fb/Venn_0000_0011.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0000_0101.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e5/Venn_0000_0101.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0000_1010.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4b/Venn_0000_1010.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0000_1111.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0e/Venn_0000_1111.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0001_0000.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/ba/Venn_0001_0000.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0001_0001.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/bc/Venn_0001_0001.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0001_0100.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f8/Venn_0001_0100.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0001_0101.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c3/Venn_0001_0101.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0001_1011.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/65/Venn_0001_1011.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0011_0000.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/cf/Venn_0011_0000.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0011_1100.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/2f/Venn_0011_1100.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0011_1111.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/56/Venn_0011_1111.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0101_0101.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/85/Venn_0101_0101.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0101_1010.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a9/Venn_0101_1010.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0110_0110.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/bd/Venn_0110_0110.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0110_1001.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/ae/Venn_0110_1001.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_0111_1111.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/ee/Venn_0111_1111.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_1000_0001.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/de/Venn_1000_0001.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_1001_1001.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/35/Venn_1001_1001.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_1010_0101.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/be/Venn_1010_0101.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_1011_1011.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/61/Venn_1011_1011.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_1011_1101.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/ca/Venn_1011_1101.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_1100_0011.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b7/Venn_1100_0011.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_1101_1011.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b8/Venn_1101_1011.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_1111_1011.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/9b/Venn_1111_1011.svg License: Public do-
main Contributors: ? Original artist: ?
File:Venn_A_intersect_B.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/6d/Venn_A_intersect_B.svg License: Pub-
lic domain Contributors: Own work Original artist: Cepheus
File:Venn_A_subset_B.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b0/Venn_A_subset_B.svg License: Public do-
main Contributors: Own work Original artist: User:Booyabazooka
File:Vennandornot.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/ae/Vennandornot.svg License: Public domain Con-
tributors: Own work Original artist: Watchduck
File:Visulization_of_nearness_measure.jpg Source: https://upload.wikimedia.org/wikipedia/commons/c/c9/Visulization_of_nearness_
measure.jpg License: CC BY-SA 3.0 Contributors: Own work Original artist: NearSetAccount
File:Wiki_letter_w.svg Source: https://upload.wikimedia.org/wikipedia/en/6/6c/Wiki_letter_w.svg License: Cc-by-sa-3.0 Contributors:
? Original artist: ?
File:Wiki_letter_w_cropped.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1c/Wiki_letter_w_cropped.svg License:
CC-BY-SA-3.0 Contributors: This file was derived from Wiki letter w.svg
Original artist: Derivative work by Thumperward
File:Wikibooks-logo-en-noslogan.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/df/Wikibooks-logo-en-noslogan.
svg License: CC BY-SA 3.0 Contributors: Own work Original artist: User:Bastique, User:Ramac et al.
File:Wikibooks-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fa/Wikibooks-logo.svg License: CC BY-SA 3.0
Contributors: Own work Original artist: User:Bastique, User:Ramac et al.
File:Wikidata-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/ff/Wikidata-logo.svg License: Public domain Con-
tributors: Own work Original artist: User:Planemad
File:Wikinews-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/24/Wikinews-logo.svg License: CC BY-SA 3.0
Contributors: This is a cropped version of Image:Wikinews-logo-en.png. Original artist: Vectorized by Simon 01:05, 2 August 2006
(UTC) Updated by Time3000 17 April 2007 to use official Wikinews colours and appear correctly on dark backgrounds. Originally
uploaded by Simon.
File:Wikiquote-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fa/Wikiquote-logo.svg License: Public domain
Contributors: Own work Original artist: Rei-artur
File:Wikisource-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4c/Wikisource-logo.svg License: CC BY-SA
3.0 Contributors: Rei-artur Original artist: Nicholas Moreau
File:Wikiversity-logo-Snorky.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1b/Wikiversity-logo-en.svg License:
CC BY-SA 3.0 Contributors: Own work Original artist: Snorky
File:Wikiversity-logo-en.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1b/Wikiversity-logo-en.svg License: CC BY-
SA 3.0 Contributors: Own work Original artist: Snorky
File:Wikiversity-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/91/Wikiversity-logo.svg License: CC BY-SA
3.0 Contributors: Snorky (optimized and cleaned up by verdy_p) Original artist: Snorky (optimized and cleaned up by verdy_p)
File:Wiktionary-logo-en-v2.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/99/Wiktionary-logo-en-v2.svg License:
CC-BY-SA-3.0 Contributors: ? Original artist: ?
File:Wiktionary-logo-v2.svg Source: https://upload.wikimedia.org/wikipedia/en/0/06/Wiktionary-logo-v2.svg License: CC-BY-SA-
3.0 Contributors: ? Original artist: ?
File:X-or.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b5/X-or.svg License: Public domain Contributors: No machine-
readable source provided. Own work assumed (based on copyright claims). Original artist: No machine-readable author provided. The
Unbound assumed (based on copyright claims).
File:XNOR_ANSI_Labelled.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b8/XNOR_ANSI_Labelled.svg License:
Public domain Contributors: Own work Original artist: Inductiveload
File:XOR_ANSI_Labelled.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/17/XOR_ANSI_Labelled.svg License: Pub-
lic domain Contributors: Own work Original artist: Inductiveload
File:Z2^4;_Cayley_table;_binary.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d6/Z2%5E4%3B_Cayley_table%3B_binary.svg License: CC BY 3.0 Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
File:Преобразование_таблицы_истинности_в_полином_Жегалкина_методом_треугольника.gif (converting a truth table into a Zhegalkin polynomial by the triangle method) Source: https://upload.
wikimedia.org/wikipedia/commons/d/df/%D0%9F%D1%80%D0%B5%D0%BE%D0%B1%D1%80%D0%B0%D0%B7%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5_%D1%82%D0%B0%D0%B1%D0%BB%D0%B8%D1%86%D1%8B_%D0%B8%D1%81%D1%82%D0%B8%D0%BD%D0%BD%D0%BE%D1%81%D1%82%D0%B8_%D0%B2_%D0%BF%D0%BE%D0%BB%D0%B8%D0%BD%D0%BE%D0%BC_%D0%96%D0%B5%D0%B3%D0%B0%D0%BB%D0%BA%D0%B8%D0%BD%D0%B0_%D0%BC%D0%B5%D1%82%D0%BE%D0%B4%D0%BE%D0%BC_%D1%82%D1%80%D0%B5%D1%83%D0%B3%D0%BE%D0%BB%D1%8C%D0%BD%D0%B8%D0%BA%D0%B0.gif License: CC BY-SA 3.0 Contributors: Own work Original artist: AdmiralHood (talk) 07:35, 6 June 2011 (UTC)

273.7.3 Content license


Creative Commons Attribution-Share Alike 3.0
