Mathematical fallacy
From Wikipedia, the free encyclopedia
In mathematics, certain kinds of mistaken proof are often exhibited, and sometimes collected, as
illustrations of a concept of mathematical fallacy. There is a distinction between a simple mistake
and a mathematical fallacy in a proof: a mistake in a proof leads to an invalid proof just in the same
way, but in the best-known examples of mathematical fallacies, there is some concealment in the
presentation of the proof. For example, the reason validity fails may be a division by zero that is
hidden by algebraic notation. There is a striking quality of the mathematical fallacy: as typically
presented, it leads not only to an absurd result, but does so in a crafty or clever way.[1] Therefore
these fallacies, for pedagogic reasons, usually take the form of spurious proofs of obvious
contradictions. Although the proofs are flawed, the errors, usually by design, are comparatively
subtle, or designed to show that certain steps are conditional, and should not be applied in the cases
that are the exceptions to the rules.
The traditional way of presenting a mathematical fallacy is to give an invalid step of deduction mixed
in with valid steps, so that the meaning of fallacy is here slightly different from the logical fallacy.
The latter applies normally to a form of argument that is not a genuine rule of logic, whereas the
problematic mathematical step is typically a correct rule applied with a tacit wrong assumption.
Beyond pedagogy, the resolution of a fallacy can lead to deeper insights into a subject (such as the
introduction of Pasch's axiom of Euclidean geometry).[2] Pseudaria, an ancient lost book of false
proofs, is attributed to Euclid.[3]
Contents
1 Howlers
2 Division by zero
2.1 All numbers equal zero
2.2 All numbers equal all other numbers
2.3 Determinants equal to zero
3 Multivalued functions
3.1 Multivalued sinusoidal functions
3.2 Multivalued complex logarithms
4 Calculus
4.1 Indefinite integrals
4.2 Variable ambiguity
5 Infinite series
5.1 Associative law
5.2 Divergent series
6 Power and root
6.1 Positive and negative roots
6.2 Extraneous solutions
6.3 Complex roots
6.4 Fundamental theorem of algebra
7 Inequalities
8 Geometry
8.1 Any angle is zero
8.2 Fallacy of the isosceles triangle
9 See also
10 Notes
11 References
12 External links
Howlers
A correct result obtained by an incorrect line of reasoning is an example of a mathematical argument
that is true but invalid. This is the case, for instance, in the calculation

16/64 = 1/4, obtained by striking out the digit 6 common to the numerator and the denominator.

Although the conclusion 16/64 = 1/4 is correct, there is a fallacious, invalid cancellation in the
middle step. Bogus proofs constructed to produce a correct result in spite of incorrect logic are
known as howlers.[4]
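The digit-cancellation howler can be checked exhaustively. The following sketch (the variable names are illustrative, not from any source) searches all proper two-digit fractions for cases where striking the shared middle digit happens to preserve the value:

```python
# Anomalous cancellation: "cancelling" a shared digit from numerator and
# denominator is invalid reasoning, yet for a few two-digit fractions it
# happens to give the right value (e.g. 16/64 = 1/4). Enumerate them all.
from fractions import Fraction

howlers = []
for a in range(1, 10):          # numerator digits a, b  -> value 10a + b
    for b in range(1, 10):      # b is the digit to be "cancelled"
        for c in range(1, 10):  # denominator digits b, c -> value 10b + c
            num, den = 10 * a + b, 10 * b + c
            if num >= den:      # keep proper fractions only
                continue
            # striking the shared digit b leaves a/c
            if Fraction(num, den) == Fraction(a, c):
                howlers.append((num, den))

print(howlers)  # -> [(16, 64), (19, 95), (26, 65), (49, 98)]
```

These four are the only proper two-digit fractions for which this bogus cancellation gives a correct result.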
Division by zero
The division-by-zero fallacy has many variants, such as the following:

1. x = 0
2. x(x − 1) = 0
3. x − 1 = 0
4. x = 1

The error here is in going from the second to the third line. The reasoning is started by assuming x
to be equal to 0; therefore, on the second to the third line: if x = 0, then division by x is
arithmetically meaningless.
The following example uses division by zero to "prove" that 2 = 1, but can be modified to prove that
any number equals any other number.
1. Let a and b be equal, nonzero quantities: a = b
2. Multiply through by a: a² = ab
3. Subtract b²: a² − b² = ab − b²
4. Factor both sides: (a − b)(a + b) = b(a − b)
5. Divide out (a − b): a + b = b
6. Observing that a = b: b + b = b, i.e. 2b = b
7. Divide by b: 2 = 1
Q.E.D.[6]
The fallacy is in line 5: the progression from line 4 to line 5 involves division by a − b, which is zero
since a equals b. Since division by zero is undefined, the argument is invalid. Moreover, the only
values for which lines 5, 6, and 7 could all hold are a = b = 0, and the flaw reappears in line 7,
where one must divide by b (that is, by 0) to produce the fallacy; this "solution" also contradicts
the original premise that a and b are nonzero. A similar invalid proof would be to say that since
2 × 0 = 1 × 0 (which is true), one can divide by zero to obtain 2 = 1. An obvious modification
"proves" that any two real numbers are equal.
Many variants of this fallacy exist. For instance, it is possible to attempt to "repair" the proof by
supposing that a and b have a definite nonzero value to begin with, for instance, at the outset one
can suppose that a and b are both equal to one:
However, as already noted, the step in line 5, where the equation is divided by a − b, is still a
division by zero. Since division by zero is undefined, the argument is invalid.
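The hidden division can be made explicit numerically; asking a computer to perform the "divide out" step fails instead of producing 2 = 1 (a minimal sketch, with a = b = 1 chosen arbitrarily):

```python
# The step "divide out (a - b)" hides a division by zero when a = b.  Checking
# the algebra numerically makes the hidden step fail instead of "proving" 2 = 1.
a = b = 1
assert a * a - b * b == b * (a - b)   # line 4: (a - b)(a + b) = b(a - b), i.e. 0 = 0

try:
    (a * a - b * b) / (a - b)         # "dividing out (a - b)" divides by zero
    step_valid = True
except ZeroDivisionError:
    step_valid = False

print(step_valid)   # -> False: the passage from line 4 to line 5 is meaningless
```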
A "proof" that all numbers are equal to zero follows. Suppose we have the following system of linear
equations:

c1x1 + c1x2 + · · · + c1xn = c1
c2x1 + c2x2 + · · · + c2xn = c2
· · ·
cnx1 + cnx2 + · · · + cnxn = cn

Dividing the first equation by c1, we get x1 + x2 + · · · + xn = 1. Let us now try to solve the system
via Cramer's rule:

xi = |Ai| / |A|

Since each column of the coefficient matrix is equal to the resultant column vector, the columns of
each Ai are linearly dependent, so |Ai| = 0, and we have

xi = |Ai| / |A| = 0 for all i.

Q.E.D.
This proof is fallacious because Cramer's rule can only be applied to systems with a unique solution;
however, all the equations in the system are obviously equivalent, and thus are insufficient to
provide a unique solution. The fallacy occurs when we try to divide |Ai| by |A|, as both are equal to 0.
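A coefficient matrix whose columns all coincide always has zero determinant, so the division in Cramer's rule is 0/0. A small sketch (with n = 3 and arbitrary illustrative constants c = (1, 2, 3)) confirms this:

```python
# Cramer's rule computes x_i = |A_i| / |A|, which presupposes |A| != 0.  In the
# system above every column of the coefficient matrix equals the vector of
# constants c, so |A| = 0 and the rule does not apply.
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

c = [1, 2, 3]
A = [[ci, ci, ci] for ci in c]      # every column equals the vector c
print(det3(A))                      # -> 0: all |A_i| = |A| = 0, so x_i = 0/0
```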
Multivalued functions
Functions that are not one-to-one have multivalued inverses, and such inverse functions are not well
defined over the entire range of the original function.
Multivalued sinusoidal functions
x = 2π
sin(x) = 0
x = arcsin(0)
x=0
2π = 0
Q.E.D.
The problem is in the third step, where we take the arcsin of each side. Since the arcsin is an
infinitely multivalued function, x = arcsin(0) is not necessarily true.
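The step x = arcsin(0) silently picks one value of a multivalued inverse. A short sketch using `math.asin`, which returns the principal value in [−π/2, π/2], shows that arcsin does not undo sin outside that interval:

```python
# arcsin is a right-inverse of sin only on [-pi/2, pi/2]: math.asin returns the
# principal value, so asin(sin(x)) recovers x only inside that interval.
import math

x = 2 * math.pi
y = math.asin(math.sin(x))   # principal value near 0, not 2*pi
print(abs(y) < 1e-9)         # -> True: arcsin(0) = 0, which is not 2*pi
```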
Multivalued complex logarithms

e^(πi) = −1 and e^(3πi) = −1,

so we have

ln(e^(πi)) = ln(e^(3πi)),

and hence

πi = 3πi.

Dividing by πi gives

1 = 3.
Q.E.D.
The mistake is that the rule ln(e^x) = x is in general valid only for real x, not for complex x. The
complex logarithm is actually multivalued, with ln(−1) = (2k + 1)πi for any integer k, so we
see that πi and 3πi are two among the infinitely many possible values for ln(−1).
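Python's `cmath.log` returns only the principal branch, with imaginary part in (−π, π], which is exactly the single-valued convention the fallacy ignores:

```python
# The complex logarithm is multivalued: ln(-1) = (2k + 1)*pi*i for every integer
# k, but cmath.log returns only the principal branch (imaginary part in (-pi, pi]).
import cmath, math

principal = cmath.log(-1 + 0j)
print(principal)   # -> approximately pi*i

# both pi*i and 3*pi*i exponentiate back to -1, so both are "logarithms" of -1:
candidates = [(2 * k + 1) * math.pi * 1j for k in (0, 1)]
checks = [abs(cmath.exp(w) + 1) < 1e-9 for w in candidates]
print(checks)      # -> [True, True]
```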
Calculus
Calculus as the mathematical study of infinitesimal change and limits can lead to mathematical
fallacies if the properties of integrals and differentials are ignored.
Indefinite integrals
[citation needed]
The following "proof" that 0 = 1 can be modified to "prove" that any number equals any other
number. Begin with the evaluation of the indefinite integral

∫ (1/x) dx

Through integration by parts, let

u = 1/x and dv = dx

Thus,

du = −(1/x²) dx and v = x

so that, by the integration by parts formula ∫ u dv = uv − ∫ v du,

∫ (1/x) dx = (1/x) · x − ∫ x · (−1/x²) dx = 1 + ∫ (1/x) dx

Subtracting ∫ (1/x) dx from both sides,

0 = 1

Q.E.D.
The error in this proof lies in an improper use of the integration by parts technique. Upon use of the
formula, a constant, C, must be added to the right-hand side of the equation. This is due to the
derivation of the integration by parts formula; the derivation involves the integration of an equation
and so a constant must be added. In most uses of the integration by parts technique, this initial
addition of C is ignored until the end when C is added a second time. However, in this case, the
constant must be added immediately because the remaining two integrals cancel each other out.
In other words, the second-to-last line is correct (1 added to any antiderivative of 1/x is still an
antiderivative of 1/x); but the last line is not. You cannot cancel the two occurrences of the
indefinite integral of 1/x, because they are not necessarily equal: there are infinitely many
antiderivatives of a function, all differing by a constant. In this case, the antiderivatives on both
sides differ by 1.
This problem can be avoided if we use definite integrals (i.e. use bounds). Then in the second to last
line, 1 would be evaluated between some bounds, which would always evaluate to 1 - 1 = 0. The
remaining definite integrals on both sides would indeed be equal.
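This resolution can be checked numerically. The sketch below (using a simple midpoint rule and the arbitrary bounds a = 1, b = 2) confirms that with definite integrals the boundary term is 1 − 1 = 0 and both sides agree:

```python
# With bounds, integration by parts of 1/x is consistent: u = 1/x, v = x gives
# the boundary term [u*v] from a to b = 1 - 1 = 0, and the two sides agree.
import math

def integral(f, a, b, n=100_000):
    # composite midpoint rule, accurate enough for this check
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

a, b = 1.0, 2.0
lhs = integral(lambda x: 1 / x, a, b)
boundary = (1 / b) * b - (1 / a) * a                 # [u*v]_a^b = 1 - 1 = 0
rhs = boundary + integral(lambda x: x * (1 / x**2), a, b)
print(abs(lhs - rhs) < 1e-6)   # -> True; both sides equal ln 2
```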
Variable ambiguity
[citation needed]
x = 1

Taking the derivative of each side,

1 = 0

Q.E.D.
The error in this proof is that it treats x as a variable, rather than as the constant specified by
x = 1, when taking the derivative. Taking the proper derivative of the constant x yields the
correct result, 0 = 0.
Infinite series
As in the case of surreal and aleph numbers, mathematical situations which involve manipulation of
infinite series can lead to logical contradictions if care is not taken to remember the properties of
such series.
Associative law
[citation needed]
0 = 0 + 0 + 0 + · · ·
  = (1 − 1) + (1 − 1) + (1 − 1) + · · ·
  = 1 + (−1 + 1) + (−1 + 1) + · · ·
  = 1 + 0 + 0 + · · ·      (of course −1 + 1 = 0)
  = 1
Q.E.D.
The error here is that the associative law cannot be applied freely to an infinite sum unless the sum
is absolutely convergent (see also conditionally convergent). Here that sum is 1 − 1 + 1 − 1 + · · ·, a
classic divergent series. In this particular argument, the second line gives the sequence of partial
sums 0, 0, 0, ... (which converges to 0) while the third line gives the sequence of partial sums 1, 1,
1, ... (which converges to 1), so these expressions need not be equal. This can be seen as a
counterexample to generalizing Fubini's theorem and Tonelli's theorem to infinite integrals (sums)
over measurable functions taking negative values.
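The divergence of the partial sums can be exhibited directly; the following sketch computes the partial sums of the two regroupings named above:

```python
# Partial sums of the two regroupings of 1 - 1 + 1 - 1 + ...:
# (1-1)+(1-1)+... stays at 0 at every stage, while 1+(-1+1)+(-1+1)+... stays at
# 1, so regrouping changed the value of the (divergent) series.
from itertools import accumulate

def partial_sums(terms):
    return list(accumulate(terms))

grouped_a = [1 - 1 for _ in range(5)]         # (1 - 1) + (1 - 1) + ...
grouped_b = [1] + [-1 + 1 for _ in range(4)]  # 1 + (-1 + 1) + ...

print(partial_sums(grouped_a))  # -> [0, 0, 0, 0, 0]
print(partial_sums(grouped_b))  # -> [1, 1, 1, 1, 1]
```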
In fact the associative law for addition just states something about three-term sums:
(a + b) + c = a + (b + c). It can easily be shown to imply that for any finite sequence of terms
separated by "+" signs, and any two ways to insert parentheses so as to completely determine which
are the operands of each "+", the sums have the same value; the proof is by induction on the number
of additions involved. In the given "proof" it is in fact not so easy to see how to start applying the
basic associative law, but with some effort one can arrange larger and larger initial parts of the first
summation to look like the second. However this would take an infinite number of steps to "reach"
the second summation completely. So the real error is that the proof compresses infinitely many
steps into one, while a mathematical proof must consist of only finitely many steps. To illustrate this,
consider the following "proof" of 1 = 0 that uses only convergent infinite sums, and only the law
allowing one to interchange two consecutive terms in such a sum, which is certainly valid:
Divergent series
[citation needed]
Let S = 1 + 2 + 4 + 8 + · · ·. Then 2S = 2 + 4 + 8 + · · · = S − 1. Therefore

S = −1.

A variant of mathematical fallacies of this form involves the p-adic numbers:[citation needed]

Let x = · · · 999 (a "number" with infinitely many digits 9 to the left).
Then, 10x = · · · 990, so that 10x + 9 = x, i.e. 9x = −9.
Therefore, x = −1.
The error in such "proofs" is the implicit assumption that divergent series obey the ordinary laws of
arithmetic.
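In the p-adic setting such manipulations do become legitimate. A short sketch (illustrating the 10-adic case alluded to above) checks that finite truncations of the digit string · · · 999 agree with −1 modulo every power of 10:

```python
# In the 10-adic integers the digit string ...999 really does behave like -1:
# its finite truncations agree with -1 modulo every power of 10, which is the
# sense in which the divergent-looking manipulation becomes legitimate p-adically.
results = []
for k in range(1, 8):
    truncation = int("9" * k)                     # 9, 99, 999, ...
    results.append(truncation % 10**k == (-1) % 10**k)
print(results)   # -> [True, True, True, True, True, True, True]
```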
Power and root

Positive and negative roots

Invalid proofs utilizing powers and roots often take the following form:

1 = √1 = √((−1)(−1)) = √(−1) · √(−1) = i · i = −1

The fallacy is that the rule √(xy) = √x · √y is generally valid only if at least one of the two
numbers x or y is positive, which is not the case here.
Although the fallacy is easily detected here, sometimes it is concealed more effectively in notation.
[8]
For instance, consider the equation

cos²x = 1 − sin²x,

which holds as a consequence of the Pythagorean theorem. Then, by taking a square root,

cos x = (1 − sin²x)^(1/2)

so that

1 + cos x = 1 + (1 − sin²x)^(1/2).

Squaring both sides gives

(1 + cos x)² = (1 + (1 − sin²x)^(1/2))².

Evaluating this at x = π, where cos x = −1 and sin x = 0, yields

0 = 4

which is absurd.
The error in each of these examples fundamentally lies in the fact that any equation of the form

x² = a²

has two solutions, provided a ≠ 0,
and it is essential to check which of these solutions is relevant to the problem at hand.[9] In the
above fallacy, the square root that allowed the second equation to be deduced from the first is valid
only when cos x is positive. In particular, when x is set to π, the second equation is rendered invalid.
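The square-root step silently chooses the positive root: (1 − sin²x)^(1/2) equals |cos x|, not cos x. A short sketch at x = π makes the discrepancy visible:

```python
# (1 - sin^2 x)^(1/2) is |cos x|, not cos x: the square root step picks the
# positive root, which is wrong wherever cos x < 0, e.g. at x = pi.
import math

x = math.pi
lhs = math.sqrt(1 - math.sin(x) ** 2)   # |cos x| = 1 at x = pi
rhs = math.cos(x)                       # cos x  = -1 at x = pi
print(lhs, rhs)   # the two sides differ at x = pi
```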
Another example of this kind of fallacy, where the error is immediately detectable, is the following
invalid proof that −2 = 2. Letting x = −2, and then squaring gives

x² = 4,

and taking the (positive) square root gives

x = √4 = 2,

so that x = −2 = 2, which is absurd. Clearly, when the square root was extracted, it was the negative
root −2, rather than the positive root, that was relevant for the particular solution in the problem.
The error here lies in the equality, where we are ignoring the other fourth roots of 1,[10] which are
−1, i and −i (where i is the imaginary unit). Since we have squared our figure and then taken
roots, we cannot always assume that all the roots will be correct. Here the correct fourth roots are
i and −i, the imaginary numbers defined to be ±√(−1).
Extraneous solutions
[citation needed]
Replacing the expression within parentheses by the initial equation and canceling common terms
yields

which produces the solution x = 2. Substituting this value into the original equation, one obtains

and therefore
Q.E.D.
In the forward direction, the argument merely shows that no x exists satisfying the given equation. If
you work backward from x = 2, taking the cube root of both sides ignores the possible factors
which are non-principal cube roots of one. An equation altered by raising both sides to a
power is a consequence of, but not necessarily equivalent to, the original equation, so it may produce
more solutions. This is indeed the case in this example, where the solution x = 2 is arrived at while it
is clear that this is not a solution to the original equation. Also, every nonzero number has 3 cube
roots: 2 complex, and one either real or complex. Likewise, the substitution of the first equation into
the second to get the third would be begging the question when working backwards.
Complex roots
[citation needed]
Suppose that x satisfies

x² + x + 1 = 0.

Then:

x² + x = −1.

Multiplying both sides by x gives x³ + x² + x = 0, and substituting x² + x = −1 yields

x³ = 1,

so that x = 1. Substituting x = 1 back into x² + x + 1 = 0 gives

3 = 0.

Q.E.D.

The fallacy here is in assuming that x³ = 1 implies x = 1. There are in fact three cube roots of
unity. Two of these roots, which are complex, are the solutions of the original equation. The
substitution has introduced the third one, which is real, as an extraneous solution. The equation
after the substitution step is implied by the equation before the substitution, but not the other way
around, which means that the substitution step could and did introduce new solutions.
Note that if we restrict our attention to the real numbers, so that "x³ = 1 implies x = 1" is in fact
true, the proof still fails. What it then shows is that if there is some real number solution to
the equation, then 0 = 3. This is true, but it is a big "if". The quadratic formula erases any lingering
fears that 0 = 3 by showing that no real number solution exists.
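The quadratic formula can be evaluated numerically to exhibit the two complex roots and confirm both claims at once (a minimal sketch using `cmath`):

```python
# The two solutions of x^2 + x + 1 = 0 are the complex (non-real) cube roots of
# unity: each satisfies x^3 = 1 while being different from 1, so over the
# complex numbers "x^3 = 1 implies x = 1" is false.
import cmath

disc = cmath.sqrt(1 - 4)                     # discriminant of x^2 + x + 1
roots = [(-1 + disc) / 2, (-1 - disc) / 2]   # quadratic formula

cubes_are_one = [abs(r ** 3 - 1) < 1e-9 for r in roots]
equal_to_one = [abs(r - 1) < 1e-9 for r in roots]
print(cubes_are_one)   # -> [True, True]
print(equal_to_one)    # -> [False, False]
```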
Fundamental theorem of algebra
The fundamental theorem of algebra applies to any polynomial defined over the complex numbers;
ignoring this can lead to mathematical fallacies.[citation needed] Let x and y be any two numbers,
and define from them two further numbers z and w. Let's compute:
Replacing , we get:
So:
Replacing :
Q.E.D.
The mistake here is that from z³ = w³ one may not in general deduce z = w (unless z and w are both
real, which they are not in our case).
Inequalities
Inequalities can lead to mathematical fallacies when operations that fail to preserve the inequality
are not recognized.[citation needed] Let us first suppose that

0 < x < 1, so in particular x < 1.

Now we will take the logarithm of both sides. As long as x > 0, we can do this because logarithms are
monotonically increasing. Observing that the logarithm of 1 is 0, we get

ln x < 0.

Dividing by ln x gives

1 < 0.

Q.E.D.
The violation is found in the last step, the division. This step is invalid because ln(x) is negative for 0
< x < 1. While multiplication or division by a positive number preserves the inequality,
multiplication or division by a negative number reverses the inequality, resulting in the correct
expression 1 > 0.
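The sign flip is easy to verify numerically; the sketch below (with the arbitrary choice x = 0.5) divides the true inequality ln x < 0 by the negative number ln x:

```python
# ln x is negative on (0, 1), so dividing the true inequality ln x < 0 by ln x
# must reverse it: the correct conclusion is 1 > 0, not 1 < 0.
import math

x = 0.5
log_x = math.log(x)
print(log_x < 0)                   # -> True: ln(0.5) is about -0.693
print(log_x / log_x > 0 / log_x)   # -> True: after the flip, 1 > 0
```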
Geometry
Many mathematical fallacies based on geometry come from assuming a geometrically impossible
situation. Often the fallacy is easy to expose through simple visualizations.
Any angle is zero
But if the angles DCH and ECH are equal then the angle DCE must be zero.
Q.E.D.
The error in the proof comes in the diagram and the final point. An accurate diagram would show
that the triangle ECH is a reflection of the triangle DCH in the line CH rather than being on the
same side, and so while the angles DCH and ECH are equal in magnitude, there is no justification for
subtracting one from the other; to find the angle DCE you need to subtract the angles DCH and ECH
from the angle of a full circle (2π or 360°).
Fallacy of the isosceles triangle
The fallacy of the isosceles triangle, from (Maxwell 1959, Chapter II, §
1), purports to show that every triangle is isosceles, meaning that two
sides of the triangle are congruent.
Q.E.D.
As a corollary, one can show that all triangles are equilateral, by showing that AB = BC and AC = BC
in the same way.
All but the last step of the proof are indeed correct (those three triangles are indeed congruent). The
error in the proof is the assumption in the diagram that the point O is inside the triangle. In fact,
whenever AB ≠ AC, O lies outside the triangle. Furthermore, it can be shown that, if AB is
longer than AC, then R will lie within AB, while Q will lie outside of AC (and vice versa). (Any
diagram drawn with sufficiently accurate instruments will verify the above two facts.) Because of
this, AB is still AR + RB, but AC is actually AQ − QC; and thus the lengths are not necessarily the
same.
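The key fact, that O lies outside the triangle whenever AB ≠ AC, can be checked numerically. The sketch below constructs O (intersection of the bisector of angle A with the perpendicular bisector of BC) for a hypothetical scalene triangle with B and C on the x-axis:

```python
# For a scalene triangle, the intersection O of the internal bisector of angle
# A with the perpendicular bisector of BC lies outside the triangle, contrary
# to the fallacious diagram.  Example triangle: B and C on the x-axis.
import math

A, B, C = (1.0, 3.0), (0.0, 0.0), (4.0, 0.0)   # a scalene triangle (AB != AC)

def unit(vx, vy):
    n = math.hypot(vx, vy)
    return vx / n, vy / n

# direction of the internal bisector at A: sum of unit vectors toward B and C
ubx, uby = unit(B[0] - A[0], B[1] - A[1])
ucx, ucy = unit(C[0] - A[0], C[1] - A[1])
dx, dy = ubx + ucx, uby + ucy

# perpendicular bisector of BC is the vertical line x = 2; solve A + t*d for x = 2
t = (2.0 - A[0]) / dx
O = (2.0, A[1] + t * dy)

print(abs(math.dist(O, B) - math.dist(O, C)) < 1e-9)  # -> True: O equidistant from B and C
print(O[1] < 0)   # -> True: O lies below BC, i.e. outside the triangle
```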
See also
Paradox
Fallacy
Proof by intimidation
2+2=5
List of fallacious proofs
Notes
1. ^ Maxwell 1959, p. 9
2. ^ Maxwell 1959
3. ^ Heath & Heiberg 1908, Chapter II, §I
4. ^ Maxwell 1959
5. ^ Maxwell 1959, p. 7
6. ^ Harro Heuser: Lehrbuch der Analysis - Teil 1, 6th edition, Teubner 1989, ISBN
978-3835101319, page 51 (German).
7. ^ Maxwell 1959, Chapter VI, §I.2
8. ^ Maxwell 1959, Chapter VI, §I.1
9. ^ Maxwell 1959, Chapter VI, §II
10. ^ In general, the expression evaluates to n complex numbers, called the nth roots of unity.
References
Barbeau, Edward J. (2000), Mathematical fallacies, flaws, and flimflam, MAA Spectrum,
Mathematical Association of America, MR1725831 (http://www.ams.org/mathscinet-
getitem?mr=1725831) , ISBN 978-0-88385-529-4.
Bunch, Bryan (1997), Mathematical fallacies and paradoxes, New York: Dover Publications,
MR1461270 (http://www.ams.org/mathscinet-getitem?mr=1461270) , ISBN 978-0-486-29664-7.
Heath, Sir Thomas Little; Heiberg, Johan Ludvig (1908), The thirteen books of Euclid's
Elements, Volume 1, The University Press.
Maxwell, E. A. (1959), Fallacies in mathematics, Cambridge University Press, MR0099907
(http://www.ams.org/mathscinet-getitem?mr=0099907) .
External links
Invalid proofs (http://www.cut-the-knot.org/proofs/index.shtml) at Cut-the-knot (including
literature references)
More invalid proofs from AhaJokes.com (http://www.ahajokes.com/math_jokes.html)
More invalid proofs also on this page (http://www.jokes-funblog.com/categories/49-Math-Jokes)