
17TH CENTURY MATHEMATICS

In the wake of the Renaissance, the 17th Century saw an
unprecedented explosion of mathematical and scientific
ideas across Europe, a period sometimes called the Age of
Reason. Hard on the heels of the “Copernican Revolution”
of Nicolaus Copernicus in the 16th Century, scientists like
Galileo Galilei, Tycho Brahe and Johannes Kepler were
making equally revolutionary discoveries in the exploration
of the Solar system, leading to Kepler’s formulation of
mathematical laws of planetary motion.

The invention of the logarithm in the early 17th Century by
John Napier (and later improved by Napier and Henry
Briggs) contributed to the advance of science, astronomy
and mathematics by making some difficult calculations
relatively easy. It was one of the most significant
mathematical developments of the age, and 17th Century
physicists like Kepler and Newton could never have
performed the complex calculations needed for their
innovations without it. The French astronomer and
mathematician Pierre Simon Laplace remarked, almost two
centuries later, that Napier, by halving the labours of
astronomers, had doubled their lifetimes.

The logarithm of a number is the exponent when that
number is expressed as a power of 10 (or any other base).
It is effectively the inverse of exponentiation. For example,
the base 10 logarithm of 100 (usually written log₁₀ 100 or
lg 100 or just log 100) is 2, because 10² = 100. The value
of logarithms arises from the fact that multiplication of two
or more numbers is equivalent to adding their logarithms,
a much simpler operation. In the same way, division
involves the subtraction of logarithms, squaring is as simple as multiplying the logarithm by two (or by three for cubing, etc), and square
roots require dividing the logarithm by 2 (or by 3 for cube roots, etc).

Logarithms were invented by John Napier, early in the 17th Century
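
As a concrete illustration of why this mattered, here is a short Python sketch (a modern addition, not part of the original article) that turns a multiplication, a division and a square root into the addition, subtraction and halving of logarithms:

```python
import math

# Multiplying two numbers is equivalent to adding their logarithms:
# a * b = 10 ** (log10(a) + log10(b))
a, b = 123.0, 456.0
print(10 ** (math.log10(a) + math.log10(b)))  # ~56088.0, the same as a * b
print(a * b)                                  # 56088.0

# Division subtracts logarithms, and square roots halve them:
print(10 ** (math.log10(a) - math.log10(b)))  # ~0.2697..., the same as a / b
print(10 ** (math.log10(a) / 2))              # ~11.09..., the same as the square root of a
```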

Although base 10 is the most popular base, another common base for logarithms is the number e which has a value of 2.7182818... and
which has special properties which make it very useful for logarithmic calculations. These are known as natural logarithms, and are
written logₑ or ln. Briggs produced extensive lookup tables of common (base 10) logarithms, and by 1622 William Oughtred had produced
a logarithmic slide rule, an instrument which became indispensable in technological innovation for the next 300 years.

Napier also improved Simon Stevin's decimal notation and popularized the use of the decimal point, and made lattice multiplication
(originally developed by the Persian mathematician Al-Khwarizmi and introduced into Europe by Fibonacci) more convenient with the
introduction of “Napier's Bones”, a multiplication tool using a set of numbered rods.

Although not principally a mathematician, the role of the
Frenchman Marin Mersenne as a sort of clearing house and
go-between for mathematical thought in France during this
period was crucial. Mersenne is largely remembered in
mathematics today in the term Mersenne primes - prime
numbers that are one less than a power of 2, e.g. 3 (2²-1),
7 (2³-1), 31 (2⁵-1), 127 (2⁷-1), 8191 (2¹³-1), etc. In modern
times, the largest known prime number has almost always
been a Mersenne prime, but in actual fact, Mersenne’s real
connection with the numbers was only to compile a none-
too-accurate list of the smaller ones (when Edouard Lucas
devised a method of checking them in the 19th Century, he
pointed out that Mersenne had incorrectly included 2⁶⁷-1
and left out 2⁶¹-1, 2⁸⁹-1 and 2¹⁰⁷-1 from his list).

Graph of the number of digits in the known Mersenne primes
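
Lucas's 19th Century method for checking Mersenne numbers survives today, in refined form, as the Lucas-Lehmer test. A minimal Python sketch of it (a modern illustration, not from the original text) reproduces the two corrections mentioned above:

```python
def mersenne_is_prime(p):
    """Lucas-Lehmer test: for an odd prime p, M = 2**p - 1 is prime exactly
    when the sequence s(0) = 4, s(i+1) = s(i)**2 - 2 reaches a multiple of M
    at step p - 2."""
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print(mersenne_is_prime(61))   # True: 2^61 - 1 is prime, but Mersenne left it out
print(mersenne_is_prime(67))   # False: 2^67 - 1 is composite, yet Mersenne included it
```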

The Frenchman René Descartes is sometimes considered
the first of the modern school of mathematics. His
development of analytic geometry and Cartesian
coordinates in the mid-17th Century soon allowed the orbits
of the planets to be plotted on a graph, as well as laying
the foundations for the later development of calculus (and
much later multi-dimensional geometry). Descartes is also
credited with the first use of superscripts for powers or
exponents.

Two other great French mathematicians were close
contemporaries of Descartes: Pierre de Fermat and Blaise
Pascal. Fermat formulated several theorems which greatly
extended our knowledge of number theory, as well as contributing some early work on infinitesimal calculus. Pascal is most famous for
Pascal’s Triangle of binomial coefficients, although similar figures had actually been produced by Chinese and Persian mathematicians
long before him.
It was an ongoing exchange of letters between Fermat and Pascal that led to the development of the concept of expected values and
the field of probability theory. The first published work on probability theory, however, and the first to outline the concept of mathematical
expectation, was by the Dutchman Christiaan Huygens in 1657, although it was largely based on the ideas in the letters of the two
Frenchmen.

The French mathematician and engineer Girard Desargues
is considered one of the founders of the field of projective
geometry, later developed further by Jean Victor Poncelet
and Gaspard Monge. Projective geometry considers what
happens to shapes when they are projected on to a non-
parallel plane. For example, a circle may be projected into
an ellipse or a hyperbola, and so these curves may all be
regarded as equivalent in projective geometry. In
particular, Desargues developed the pivotal concept of the
“point at infinity” where parallels actually meet. His
perspective theorem states that, when two triangles are in
perspective, their corresponding sides meet at points which
all lie on the same straight line.

Desargues’ perspective theorem

By “standing on the shoulders of giants”, the Englishman Sir
Isaac Newton was able to pin down the laws of physics in
an unprecedented way, and he effectively laid the
groundwork for all of classical mechanics, almost single-
handedly. But his contribution to mathematics should never
be underestimated, and nowadays he is often considered,
along with Archimedes and Gauss, as one of the greatest
mathematicians of all time.

Newton and, independently, the German philosopher and
mathematician Gottfried Leibniz, completely revolutionized
mathematics (not to mention physics, engineering,
economics and science in general) by the development of
infinitesimal calculus, with its two main operations,
differentiation and integration. Newton probably developed his work before Leibniz, but Leibniz published his work first, leading to an extended
and rancorous dispute. Whatever the truth behind the various claims, though, it is Leibniz’s calculus notation that is the one still in use
today, and calculus of some sort is used extensively in everything from engineering to economics to medicine to astronomy.

Both Newton and Leibniz also contributed greatly in other areas of mathematics, including Newton’s contributions to a generalized
binomial theorem, the theory of finite differences and the use of infinite power series, and Leibniz’s development of a mechanical
forerunner to the computer and the use of matrices to solve linear equations.

However, credit should also be given to some earlier 17th Century mathematicians whose work partially anticipated, and to some extent
paved the way for, the development of infinitesimal calculus. As early as the 1630s, the Italian mathematician Bonaventura Cavalieri
developed a geometrical approach to calculus known as Cavalieri's principle, or the “method of indivisibles”. The Englishman John Wallis,
who systematized and extended the methods of analysis of Descartes and Cavalieri, also made significant contributions towards the
development of calculus, as well as originating the idea of the number line, introducing the symbol ∞ for infinity and the term “continued
fraction”, and extending the standard notation for powers to include negative integers and rational numbers. Newton's teacher Isaac
Barrow is usually credited with the discovery of (or at least the first rigorous statement of) the fundamental theorem of calculus, which
essentially showed that integration and differentiation are inverse operations, and he also made complete translations of Euclid into Latin
and English.

https://www.storyofmathematics.com/17th.html

17TH CENTURY MATHEMATICS - DESCARTES

René Descartes has been dubbed the "Father of Modern Philosophy", but he was also one of the
key figures in the Scientific Revolution of the 17th Century, and is sometimes considered the first
of the modern school of mathematics.

As a young man, he found employment for a time as a soldier (essentially as a mercenary in the
pay of various forces, both Catholic and Protestant). But, after a series of dreams or visions, and
after meeting the Dutch philosopher and scientist Isaac Beeckman, who sparked his interest in
mathematics and the New Physics, he concluded that his real path in life was the pursuit of true
wisdom and science.

Back in France, the young Descartes soon came to the conclusion that the key to philosophy, with
all its uncertainties and ambiguity, was to build it on the indisputable facts of mathematics. To
pursue his rather heretical ideas further, though, he moved from the restrictions of Catholic France
to the more liberal environment of the Netherlands, where he spent most of his adult life, and
where he worked on his dream of merging algebra and geometry.
René Descartes (1596-1650)
In 1637, he published his ground-breaking philosophical and mathematical treatise "Discours de
la méthode" (the “Discourse on Method”), and one of its appendices in particular, "La Géométrie",
is now considered a landmark in the history of mathematics. Following on from early movements towards the use of symbolic expressions
in mathematics by Diophantus, Al-Khwarizmi and François Viète, "La Géométrie" introduced what has become known as the standard
algebraic notation, using lowercase a, b and c for known quantities and x, y and z for unknown quantities. It was perhaps the first book
to look like a modern mathematics textbook, full of a's and b's, x²'s, etc.

It was in "La Géométrie" that Descartes first proposed that
each point in two dimensions can be described by two
numbers on a plane, one giving the point’s horizontal
location and the other the vertical location, which have
come to be known as Cartesian coordinates. He used
perpendicular lines (or axes), crossing at a point called the
origin, to measure the horizontal (x) and vertical (y)
locations, both positive and negative, thus effectively
dividing the plane up into four quadrants.

Any equation can be represented on the plane by plotting
on it the solution set of the equation. For example, the
simple equation y = x yields a straight line linking together
the points (0,0), (1,1), (2,2), (3,3), etc. The equation y =
2x yields a straight line linking together the points (0,0),
(1,2), (2,4), (3,6), etc. More complex equations
involving x², x³, etc, plot various types of curves on the
plane.

As a point moves along a curve, then, its coordinates
change, but an equation can be written to describe the
change in the value of the coordinates at any point in the
figure. Using this novel approach, it soon became clear that
an equation like x² + y² = 4, for example, describes a
circle; y² - 16x = 0 a curve called a parabola; x²/a² + y²/b² = 1
an ellipse; x²/a² - y²/b² = 1 a hyperbola; etc.

Cartesian Coordinates

Descartes’ ground-breaking work, usually referred to as analytic geometry or Cartesian geometry, had the effect of allowing the
conversion of geometry into algebra (and vice versa). Thus, a pair of simultaneous equations could now be solved either algebraically or
graphically (at the intersection of two lines). It paved the way for Newton’s and Leibniz’s subsequent development of calculus. It
also unlocked the possibility of navigating geometries of higher dimensions, impossible to physically visualize - a concept which was to
become central to modern technology and physics - thus transforming mathematics forever.

Although analytic geometry was far and away Descartes’
most important contribution to mathematics, he also:
developed a “rule of signs” technique for determining the
number of positive or negative real roots of a polynomial;
"invented" (or at least popularized) the superscript notation
for showing powers or exponents (e.g. 2⁴ to show 2 x 2 x 2
x 2); and re-discovered Thabit ibn Qurra's general formula
for amicable numbers, as well as the amicable pair
9,363,584 and 9,437,056 (which had also been discovered
by another Islamic mathematician, Yazdi, almost a century
earlier).

For all his importance in the development of modern
mathematics, though, Descartes is perhaps best known
today as a philosopher who espoused rationalism and
dualism. His philosophy consisted of a method of doubting
everything, then rebuilding knowledge from the ground up,
and he is particularly known for the often-quoted statement
“Cogito ergo sum” (“I think, therefore I am”).

He also had an influential rôle in the development of
modern physics, a rôle which has been, until quite recently,
generally under-appreciated and under-investigated. He
provided the first distinctly modern formulation of laws of
nature and a conservation principle of motion, made
numerous advances in optics and the study of the reflection
and refraction of light, and constructed what would become
the most popular theory of planetary motion of the late 17th
Century. His commitment to the scientific method was met
with strident opposition by the church officials of the day.

Descartes' Rule of Signs

His revolutionary ideas made him a centre of controversy in his day, and he died in 1650 far from home in Stockholm, Sweden. Thirteen years
later, his works were placed on the Catholic Church's "Index of Prohibited Books".

https://www.storyofmathematics.com/17th_descartes.html
17TH CENTURY MATHEMATICS - FERMAT

Another Frenchman of the 17th Century, Pierre de Fermat, effectively invented modern number
theory virtually single-handedly, despite being a small-town amateur mathematician. Stimulated
and inspired by the “Arithmetica” of the Hellenistic mathematician Diophantus, he went on to
discover several new patterns in numbers which had defeated mathematicians for centuries, and
throughout his life he devised a wide range of conjectures and theorems. He is also given credit
for early developments that led to modern calculus, and for early progress in probability theory.

Although he showed an early interest in mathematics, he went on to study law at Orléans and
received the title of councillor at the High Court of Judicature in Toulouse in 1631, which he held
for the rest of his life. He was fluent in Latin, Greek, Italian and Spanish, was praised for his
written verse in several languages, and was eagerly sought out for advice on the emendation of Greek
texts.

Fermat's mathematical work was communicated mainly in letters to friends, often with little or no
proof of his theorems. Although he himself claimed to have proved all his arithmetic theorems,
few records of his proofs have survived, and many mathematicians have doubted some of his
claims, especially given the difficulty of some of the problems and the limited mathematical tools
available to Fermat.

Pierre de Fermat (1601-1665)

One example of his many theorems is the Two Square
Theorem, which shows that any prime number which, when
divided by 4, leaves a remainder of 1 (i.e. can be written in
the form 4n + 1), can always be re-written as the sum of
two square numbers (see image at right for examples).

His so-called Little Theorem is often used in the testing of
large prime numbers, and is the basis of the codes which
protect our credit cards in Internet transactions today. In
simple (sic) terms, it says that if we have two
numbers a and p, where p is a prime number and not a
factor of a, then a multiplied by itself p-1 times and then
divided by p, will always leave a remainder of 1. In
mathematical terms, this is written: aᵖ⁻¹ ≡ 1 (mod p). For
example, if a = 7 and p = 3, then 7² ÷ 3 should leave a
remainder of 1, and 49 ÷ 3 does in fact leave a remainder
of 1.
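
A quick Python check (a modern illustration; the built-in pow function performs the modular arithmetic) makes the statement concrete:

```python
# Fermat's Little Theorem: if p is prime and does not divide a,
# then a**(p - 1) leaves a remainder of 1 when divided by p.
def little_theorem_holds(a, p):
    return pow(a, p - 1, p) == 1

print(little_theorem_holds(7, 3))    # True: 7^2 = 49, and 49 mod 3 = 1
print(little_theorem_holds(2, 101))  # True: 101 is prime
print(little_theorem_holds(2, 15))   # False: 15 = 3 x 5 is not prime
```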

Fermat identified a subset of numbers, now known as
Fermat numbers, which are of the form of one more than 2
to the power of a power of 2, or, written mathematically,
2^(2^n) + 1. The first five such numbers are: 2¹ + 1 = 3; 2² +
1 = 5; 2⁴ + 1 = 17; 2⁸ + 1 = 257; and 2¹⁶ + 1 = 65,537.
Interestingly, these are all prime numbers (and are known
as Fermat primes), but all the higher Fermat numbers which
have been painstakingly identified over the years are NOT
prime numbers, which just goes to show the dangers of
relying on inductive reasoning in mathematics.

Fermat’s Theorem on Sums of Two Squares
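
A small Python sketch (a modern illustration) lists the five Fermat primes and confirms Euler's later discovery that 641 divides the next Fermat number:

```python
# Fermat numbers F(n) = 2**(2**n) + 1
def fermat_number(n):
    return 2 ** (2 ** n) + 1

print([fermat_number(n) for n in range(5)])  # [3, 5, 17, 257, 65537] - all prime
print(fermat_number(5))                      # 4294967297
print(fermat_number(5) % 641 == 0)           # True: 641 x 6700417 = F(5), so F(5) is composite
```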

Fermat's pièce de résistance, though, was his famous Last
Theorem, a conjecture left unproven at his death, and
which puzzled mathematicians for over 350 years. The
theorem, originally described in a scribbled note in the
margin of his copy of Diophantus' “Arithmetica”, states that
no three positive integers a, b and c can satisfy the
equation aⁿ + bⁿ = cⁿ for any integer value of n greater
than 2. This seemingly simple conjecture
has proved to be one of the world’s hardest mathematical
problems to prove.

There are clearly many solutions - indeed, an infinite
number - when n = 2 (namely, all the Pythagorean triples),
but no solution could be found for cubes or higher powers.
Tantalizingly, Fermat himself claimed to have a proof, but
wrote that “this margin is too small to contain it”. As far as
we know from the papers which have come down to us,
however, Fermat only managed to partially prove the
theorem for the special case of n = 4, as did several other
mathematicians who applied themselves to it (and indeed
as had earlier mathematicians dating back to Fibonacci,
albeit not with the same intent).
Fermat’s Last Theorem
Over the centuries, several mathematical and scientific
academies offered substantial prizes for a proof of the
theorem, and to some extent it single-handedly stimulated the development of algebraic number theory in the 19th and 20th Centuries.
It was finally proved for ALL numbers only in 1995 (a proof usually attributed to British mathematician Andrew Wiles, although in reality
it was a joint effort of several steps involving many mathematicians over several years). The final proof made use of complex modern
mathematics, such as the modularity theorem for semi-stable elliptic curves, Galois representations and Ribet’s epsilon theorem, all of
which were unavailable in Fermat’s time, so it seems clear that Fermat's claim to have solved his last theorem was almost certainly an
exaggeration (or at least a misunderstanding).

In addition to his work in number theory, Fermat anticipated the development of calculus to some extent, and his work in this field was
invaluable later to Newton and Leibniz. While investigating a technique for finding the centres of gravity of various plane and solid figures,
he developed a method for determining maxima, minima and tangents to various curves that was essentially equivalent to differentiation.
Also, using an ingenious trick, he was able to reduce the integral of general power functions to the sums of geometric series.

Fermat’s correspondence with his friend Pascal also helped mathematicians grasp a very important concept in basic probability which,
although perhaps intuitive to us now, was revolutionary in 1654, namely the idea of equally probable outcomes and expected values.

https://www.storyofmathematics.com/17th_fermat.html

17TH CENTURY MATHEMATICS - PASCAL

The Frenchman Blaise Pascal was a prominent 17th Century scientist, philosopher and
mathematician. Like so many great mathematicians, he was a child prodigy and pursued many
different avenues of intellectual endeavour throughout his life. Much of his early work was in the
area of natural and applied sciences, and he has a physical law named after him (that “pressure
exerted anywhere in a confined liquid is transmitted equally and undiminished in all directions
throughout the liquid”), as well as the international unit for the measurement of pressure. In
philosophy, Pascal’s Wager is his pragmatic approach to believing in God on the grounds that
it is a better “bet” than not to.

But Pascal was also a mathematician of the first order. At the age of sixteen, he wrote a significant
treatise on the subject of projective geometry, known as Pascal's Theorem, which states that, if
a hexagon is inscribed in a circle, then the three intersection points of opposite sides lie on a
single line, called the Pascal line. As a young man, he built a functional calculating machine, able
to perform additions and subtractions, to help his father with his tax calculations.

Blaise Pascal (1623-1662)


He is best known, however, for Pascal’s Triangle, a
convenient tabular presentation of binomial co-efficients,
where each number is the sum of the two numbers directly
above it. A binomial is a simple type of algebraic expression
which has just two terms operated on only by addition,
subtraction, multiplication and positive whole-number
exponents, such as (x + y)². The co-efficients produced
when a binomial is expanded form a symmetrical triangle
(see image at right).

Pascal was far from the first to study this triangle. The
Persian mathematician Al-Karaji had produced something
very similar as early as the 10th Century, and the Triangle
is called Yang Hui's Triangle in China after the 13th Century
Chinese mathematician, and Tartaglia’s Triangle in Italy
after the eponymous 16th Century Italian. But Pascal did
contribute an elegant proof by defining the numbers by
recursion, and he also discovered many useful and
interesting patterns among the rows, columns and
diagonals of the array of numbers. For instance, looking at
the diagonals alone, after the outside "skin" of 1's, the next
diagonal (1, 2, 3, 4, 5,...) is the natural numbers in order.
The next diagonal within that (1, 3, 6, 10, 15,...) is the
triangular numbers in order. The next (1, 4, 10, 20, 35,...)
is the pyramidal triangular numbers, etc, etc. It is also
possible to find prime numbers, Fibonacci numbers, Catalan
numbers, and many other series, and even to find fractal
patterns within it.
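
The recursion Pascal used is easy to express in a few lines of Python (a modern illustration, not part of the original article):

```python
# Each row of Pascal's Triangle starts and ends with 1; every other entry is
# the sum of the two entries directly above it.
def pascal_rows(n):
    rows = [[1]]
    for _ in range(n - 1):
        prev = rows[-1]
        rows.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
    return rows

for row in pascal_rows(5):
    print(row)
# [1]
# [1, 1]
# [1, 2, 1]
# [1, 3, 3, 1]
# [1, 4, 6, 4, 1]   <- the coefficients of (x + y)^4
```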

Pascal also made the conceptual leap to use the Triangle to
help solve problems in probability theory. In fact, it was
through his collaboration and correspondence with his
French contemporary Pierre de Fermat and the Dutchman
Christiaan Huygens on the subject that the mathematical
theory of probability was born. Before Pascal, there was no
actual theory of probability - notwithstanding Gerolamo
Cardano’s early exposition in the 16th Century - merely an
understanding (of sorts) of how to compute “chances” in
dice and card games by counting equally probable
outcomes. Some apparently quite elementary problems in
probability had eluded some of the best mathematicians, or
given rise to incorrect solutions.

The table of binomial coefficients known as Pascal’s Triangle

It fell to Pascal (with Fermat's help) to bring together the
separate threads of prior knowledge (including Cardano's
early work) and to introduce entirely new mathematical techniques for the solution of problems that had hitherto resisted solution. Two
such intransigent problems which Pascal and Fermat applied themselves to were the Gambler’s Ruin (determining the chances of winning
for each of two men playing a particular dice game with very specific rules) and the Problem of Points (determining how a game's
winnings should be divided between two equally skilled players if the game was ended prematurely). His work on the Problem of Points
in particular, although unpublished at the time, was highly influential in the unfolding new field.

The Problem of Points at its simplest can be illustrated by a
simple game of “winner take all” involving the tossing of a
coin. The first of the two players (say, Fermat and Pascal)
to achieve ten points or wins is to receive a pot of 100
francs. But, if the game is interrupted at the point
where Fermat, say, is winning 8 points to 7, how is the 100
franc pot to be divided? Fermat claimed that, as he needed
only two more points to win the game, and Pascal needed
three, the game would have been over after four more
tosses of the coin (because, if Pascal did not get the
necessary 3 points for his victory over the four tosses,
then Fermat must have gained the necessary 2 points for
his victory, and vice versa). Fermat then exhaustively listed
the possible outcomes of the four tosses, and concluded
that he would win in 11 out of the 16 possible outcomes,
so he suggested that the 100 francs be split 11⁄16 (0.6875)
to him and 5⁄16 (0.3125) to Pascal.

Pascal then looked for a way of generalizing the problem
that would avoid the tedious listing of possibilities, and
realized that he could use rows from his triangle of
coefficients to generate the numbers, no matter how many
tosses of the coin remained. As Fermat needed 2 more points to win the game and Pascal needed 3, he went to the fifth (2 + 3) row of
the triangle, i.e. 1, 4, 6, 4, 1. The first 3 terms added together (1 + 4 + 6 = 11) represented the outcomes where Fermat would win,
and the last two terms (4 + 1 = 5) the outcomes where Pascal would win, out of the total number of outcomes represented by the sum
of the whole row (1 + 4 + 6 + 4 + 1 = 16).

Fermat and Pascal’s solution to the Problem of Points
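
Both arguments can be checked mechanically. The Python sketch below (a modern illustration) first repeats Fermat's enumeration of the 16 possible sequences of tosses, and then Pascal's shortcut using the fifth row of the triangle:

```python
from itertools import product

# Fermat needs 2 more wins ("F"), Pascal needs 3 ("P"); 4 more tosses settle it.
outcomes = list(product("FP", repeat=4))               # 16 equally likely sequences
fermat_wins = sum(1 for o in outcomes if o.count("F") >= 2)
print(fermat_wins, "of", len(outcomes))                # 11 of 16

# Pascal's shortcut: the same split comes from row 5 of his triangle.
row = [1, 4, 6, 4, 1]
print(sum(row[:3]), sum(row[3:]), sum(row))            # 11 5 16
```
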
Pascal and Fermat had grasped through their correspondence a very important concept that, though perhaps intuitive to us today, was
all but revolutionary in 1654. This was the idea of equally probable outcomes, that the probability of something occurring could be
computed by enumerating the number of equally likely ways it could occur, and dividing this by the total number of possible outcomes
of the given situation. This allowed the use of fractions and ratios in the calculation of the likelihood of events, and the operations of
multiplication and addition on these fractional probabilities. For example, the probability of throwing a 6 on a die twice
is 1⁄6 x 1⁄6 = 1⁄36 ("and" works like multiplication); the probability of throwing either a 3 or a 6 is 1⁄6 + 1⁄6 = 1⁄3 ("or" works like addition).

Later in life, Pascal and his sister Jacqueline strongly identified with the extreme Catholic religious movement of Jansenism. Following
the death of his father and a "mystical experience" in late 1654, he had his "second conversion" and abandoned his scientific work
completely, devoting himself to philosophy and theology. His two most famous works, the "Lettres provinciales" and the "Pensées", date
from this period, the latter left incomplete at his death in 1662. They remain Pascal’s best known legacy, and he is usually remembered
today as one of the most important authors of the French Classical Period and one of the greatest masters of French prose, much more
than for his contributions to mathematics.

https://www.storyofmathematics.com/17th_pascal.html

17TH CENTURY MATHEMATICS - NEWTON

In the heady atmosphere of 17th Century England, with the expansion of the British empire in full
swing, grand old universities like Oxford and Cambridge were producing many great scientists
and mathematicians. But the greatest of them all was undoubtedly Sir Isaac Newton.

Physicist, mathematician, astronomer, natural philosopher, alchemist and theologian, Newton is
considered by many to be one of the most influential men in human history. His 1687 publication,
the "Philosophiae Naturalis Principia Mathematica" (usually called simply the "Principia"), is
considered to be among the most influential books in the history of science, and it dominated the
scientific view of the physical universe for the next three centuries.

Although largely synonymous in the minds of the general public today with gravity and the story
of the apple tree, Newton remains a giant in the minds of mathematicians everywhere (on a par
with the all-time greats like Archimedes and Gauss), and he greatly influenced the subsequent
path of mathematical development.

Sir Isaac Newton (1643-1727)

Over two miraculous years, during the time of the Great Plague of 1665-6, the young Newton
developed a new theory of light, discovered and quantified gravitation, and pioneered a
revolutionary new approach to mathematics: infinitesimal calculus. His theory of calculus built on
earlier work by his fellow Englishmen John Wallis and Isaac Barrow, as well as on work of such Continental mathematicians as René
Descartes, Pierre de Fermat, Bonaventura Cavalieri, Johann van Waveren Hudde and Gilles Personne de Roberval. Unlike the static
geometry of the Greeks, calculus allowed mathematicians and engineers to make sense of the motion and dynamic change in the changing
world around us, such as the orbits of planets, the motion of fluids, etc.

The initial problem Newton was confronting was that,
although it was easy enough to represent and calculate the
average slope of a curve (for example, the increasing speed
of an object on a time-distance graph), the slope of a curve
was constantly varying, and there was no method to give
the exact slope at any one individual point on the curve i.e.
effectively the slope of a tangent line to the curve at that
point.

Intuitively, the slope at a particular point can be
approximated by taking the average slope (“rise over run”)
of ever smaller segments of the curve. As the segment of
the curve being considered approaches zero in size (i.e. an
infinitesimal change in x), then the calculation of the slope
approaches closer and closer to the exact slope at a point
(see image at right).

Without going into too much complicated detail, Newton
(and his contemporary Gottfried Leibniz, independently)
calculated a derivative function f ‘(x) which gives the slope
at any point of a function f(x). This process of calculating
the slope or derivative of a curve or function is called
differential calculus or differentiation (or, in Newton’s
terminology, the “method of fluxions” - he called the
instantaneous rate of change at a particular point on a
curve the "fluxion", and the changing values of x and y the
"fluents"). For instance, the derivative of a straight line of
the type f(x) = 4x is just 4; the derivative of a squared
function f(x) = x² is 2x; the derivative of a cubic function f(x)
= x³ is 3x², etc. Generalizing, the derivative of any power function f(x) = xʳ is rxʳ⁻¹. Other derivative functions can be stated, according
to certain rules, for exponential and logarithmic functions, trigonometric functions such as sin(x), cos(x), etc, so that a derivative function
can be stated for any curve without discontinuities. For example, the derivative of the curve f(x) = x⁴ - 5x³ + sin(x²) would be f ’(x) =
4x³ - 15x² + 2x cos(x²).

Differentiation (derivative) approximates the slope of a curve as the interval approaches zero

Having established the derivative function for a particular curve, it is then an easy matter to calculate the slope at any particular point on
that curve, just by inserting a value for x. In the case of a time-distance graph, for example, this slope represents the speed of the object
at a particular point.
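
A short numerical experiment in Python (a modern illustration, not Newton's own method of fluxions) shows the "rise over run" ratios closing in on the exact derivative:

```python
# Approximate the slope of f(x) = x**2 at x = 3 over ever smaller intervals;
# the results approach the exact derivative 2x = 6.
def f(x):
    return x ** 2

for h in (1.0, 0.1, 0.01, 0.001):
    print(h, (f(3 + h) - f(3)) / h)   # 7.0, 6.1, 6.01, 6.001 -> tends to 6
```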

The “opposite” of differentiation is integration or integral
calculus (or, in Newton’s terminology, the “method of
fluents”), and together differentiation and integration are
the two main operations of calculus. Newton’s Fundamental
Theorem of Calculus states that differentiation and
integration are inverse operations, so that, if a function is
first integrated and then differentiated (or vice versa), the
original function is retrieved.

The integral of a curve can be thought of as the formula for
calculating the area bounded by the curve and the x-axis
between two defined boundaries. For example, on a graph
of velocity against time, the area “under the curve” would
represent the distance travelled. Essentially, integration is
based on a limiting procedure which approximates the area
of a curvilinear region by breaking it into infinitesimally thin
vertical slabs or columns. In the same way as for
differentiation, an integral function can be stated in general
terms: the integral of any power f(x) = xʳ is xʳ⁺¹/(r+1), and
there are other integral functions for exponential and
logarithmic functions, trigonometric functions, etc, so that
the area under any continuous curve can be obtained
between any two limits.

Integration approximates the area under a curve as the size of the samples approaches zero
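
The same limiting idea can be imitated numerically; the Python sketch below (a modern illustration) approximates an area with thinner and thinner slabs and watches it approach the exact value given by the power rule:

```python
# Approximate the area under f(x) = x**2 between 0 and 1 using n thin slabs;
# the exact answer from the power rule is x**3 / 3 evaluated at 1, i.e. 1/3.
def f(x):
    return x ** 2

for n in (10, 100, 1000, 10000):
    width = 1.0 / n
    area = sum(f(i * width) * width for i in range(n))
    print(n, area)   # 0.285, 0.32835, 0.33283..., approaching 1/3
```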

Newton chose not to publish his revolutionary mathematics
straight away, worried about being ridiculed for his
unconventional ideas, and contented himself with
circulating his thoughts among friends. After all, he had
many other interests such as philosophy, alchemy and his
work at the Royal Mint. However, in 1684, the
German Leibniz published his own independent version of
the theory, whereas Newton published nothing on the
subject until 1693. Although the Royal Society, after due deliberation, gave credit for the first discovery to Newton (and credit for the
first publication to Leibniz), something of a scandal arose when it was made public that the Royal Society’s subsequent accusation of
plagiarism against Leibniz was actually authored by none other than Newton himself, causing an ongoing controversy which marred the careers
of both men.

Despite being by far his best known contribution to
mathematics, calculus was by no means Newton’s only
contribution. He is credited with the generalized binomial
theorem, which describes the algebraic expansion of
powers of a binomial (an algebraic expression with two
terms, such as a² - b²); he made substantial contributions
to the theory of finite differences (mathematical
expressions of the form f(x + b) - f(x + a)); he was one of
the first to use fractional exponents and coordinate
geometry to derive solutions to Diophantine equations
(algebraic equations with integer-only variables); he
developed the so-called “Newton's method” for finding
successively better approximations to the zeroes or roots of
a function; he was the first to use infinite power series with
any confidence; etc.
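
As a modern illustration, the "Newton's method" mentioned above can be written in a few lines of Python; here it is used to approximate the square root of 2 as a root of f(x) = x² - 2 (an illustrative choice of function, not an example from the original text):

```python
# Newton's method: repeatedly replace a guess x with x - f(x)/f'(x).
def newton(f, f_prime, x, steps=6):
    for _ in range(steps):
        x = x - f(x) / f_prime(x)
    return x

print(newton(lambda x: x**2 - 2, lambda x: 2*x, 1.0))   # 1.41421356..., i.e. sqrt(2)
```
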
Newton's Method for approximating the roots of a curve by successive iterations after an initial guess

In 1687, Newton published his “Principia” or “The
Mathematical Principles of Natural Philosophy”, generally
recognized as the greatest scientific book ever written. In
it, he presented his theories of motion, gravity and mechanics, explained the eccentric orbits of comets, the tides and their variations,
the precession of the Earth's axis and the motion of the Moon.

Later in life, he wrote a number of religious tracts dealing with the literal interpretation of the Bible, devoted a great deal of time to
alchemy, acted as Member of Parliament for some years, and became perhaps the best-known Master of the Royal Mint in 1699, a
position he held until his death in 1727. In 1703, he was made President of the Royal Society and, in 1705, became the first scientist
ever to be knighted. Mercury poisoning from his alchemical pursuits perhaps explained Newton's eccentricity in later life, and possibly
also his eventual death.

https://www.storyofmathematics.com/17th_newton.html

17TH CENTURY MATHEMATICS - LEIBNIZ


The German polymath Gottfried Wilhelm Leibniz occupies a grand place in the history of
philosophy. He was, along with René Descartes and Baruch Spinoza, one of the three great 17th
Century rationalists, and his work anticipated modern logic and analytic philosophy. Like many
great thinkers before and after him, Leibniz was a child prodigy and a contributor in many different
fields of endeavour.

But, between his work on philosophy and logic and his day job as a politician and representative
of the royal house of Hanover, Leibniz still found time to work on mathematics. He was perhaps
the first to explicitly employ the mathematical notion of a function to denote geometric concepts
derived from a curve, and he developed a system of infinitesimal calculus, independently of his
contemporary Sir Isaac Newton. He also revived the ancient method of solving equations using
matrices, invented a practical calculating machine and pioneered the use of the binary system.

Like Newton, Leibniz was a member of the Royal Society in London, and was almost certainly
aware of Newton’s work on calculus. During the 1670s (slightly later than Newton’s early work),
Leibniz developed a very similar theory of calculus, apparently completely independently. Within
the short period of about two months he had developed a complete theory of differential calculus
and integral calculus (see the section on Newton for a brief description and explanation of the
development of calculus).

Gottfried Leibniz (1646-1716)

Unlike Newton, however, he was more than happy to
publish his work, and so Europe first heard about calculus
from Leibniz in 1684, and not from Newton (who published
nothing on the subject until 1693). When the Royal Society
was asked to adjudicate between the rival claims of the two
men over the development of the theory of calculus, they
gave credit for the first discovery to Newton, and credit for
the first publication to Leibniz. However, the Royal Society,
by then under the rather biased presidency
of Newton himself, later also accused Leibniz of plagiarism,
a slur from which Leibniz never really recovered.

Ironically, it was Leibniz’s mathematics that eventually
triumphed, and his notation and his way of writing calculus,
not Newton’s more clumsy notation, is the one still used in
mathematics today.

In addition to calculus, Leibniz re-discovered a method of
arranging linear equations into an array, now called a
matrix, which could then be manipulated to find a solution.
A similar method had been pioneered by Chinese
mathematicians almost two millennia earlier, but had long
fallen into disuse. Leibniz paved the way for later work on
matrices and linear algebra by Carl Friedrich Gauss. He also
introduced notions of self-similarity and the principle of
continuity which foreshadowed an area of mathematics
which would come to be called topology.

Leibniz’s and Newton’s notation for Calculus


During the 1670s, Leibniz worked on the invention of a
practical calculating machine, which used the binary system
and was capable of multiplying, dividing and even
extracting roots, a great improvement on Pascal’s
rudimentary adding machine and a true forerunner of the
computer. He is usually credited with the early development
of the binary number system (base 2 counting, using only
the digits 0 and 1), although he himself was aware of similar
ideas dating back to the I Ching of Ancient China. Because
of the ability of binary to be represented by the two phases
"on" and "off", it would later become the foundation of
virtually all modern computer systems, and Leibniz's
documentation was essential in the development process.
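
A few lines of Python (a modern illustration) show the repeated division by 2 that underlies the binary representation Leibniz championed:

```python
# Write a whole number using only the digits 0 and 1.
def to_binary(n):
    digits = ""
    while n > 0:
        digits = str(n % 2) + digits   # remainder on division by 2
        n //= 2
    return digits or "0"

print(to_binary(13))    # '1101' = 8 + 4 + 0 + 1
print(int("1101", 2))   # 13, converting back again
```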

Leibniz is also often considered the most important logician
between Aristotle in Ancient Greece and George Boole and
Augustus De Morgan in the 19th Century. Even though he
actually published nothing on formal logic in his lifetime, he
enunciated in his working drafts the principal properties of
what we now call conjunction, disjunction, negation,
identity, set inclusion and the empty set.

https://www.storyofmathematics.com/17th_leibniz.html

Binary Number System


18TH CENTURY MATHEMATICS

Most of the late 17th Century and a good part of the early 18th were taken up by the work of disciples
of Newton and Leibniz, who applied their ideas on calculus to solving a variety of problems in physics,
astronomy and engineering.

The period was dominated, though, by one family, the Bernoullis of Basel in Switzerland, which
boasted two or three generations of exceptional mathematicians, particularly the brothers, Jacob and
Johann. They were largely responsible for further developing Leibniz’s infinitesimal calculus -
particularly through the generalization and extension of calculus known as the "calculus of variations"
- as well as Pascal and Fermat’s probability and number theory.

Calculus of variations

Basel was also the home town of the greatest of the 18th Century mathematicians, Leonhard Euler,
although, partly due to the difficulties in getting on in a city dominated by
the Bernoulli family, Euler spent most of his time abroad, in Germany and St. Petersburg, Russia. He
excelled in all aspects of mathematics, from geometry to calculus to trigonometry to algebra to number
theory, and was able to find unexpected links between the different fields. He proved numerous
theorems, pioneered new methods, standardized mathematical notation and wrote many influential textbooks throughout his long
academic life.

In a letter to Euler in 1742, the German mathematician Christian Goldbach proposed the Goldbach Conjecture, which states that every
even integer greater than 2 can be expressed as the sum of two primes (e.g. 4 = 2 + 2; 8 = 3 + 5; 14 = 3 + 11 = 7 + 7; etc) or, in
another equivalent version, every integer greater than 5 can be expressed as the sum of three primes. Yet another version is the so-
called “weak” Goldbach Conjecture, that all odd numbers greater than 7 are the sum of three odd primes. They remain among the oldest
unsolved problems in number theory (and in all of mathematics), although the weak form of the conjecture appears to be closer to
resolution than the strong one. Goldbach also proved other theorems in number theory such as the Goldbach-Euler Theorem on perfect
powers.
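
The conjecture is easy to test for small numbers; the short Python sketch below (a modern illustration) searches for a prime pair for each even number:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    """Return one pair of primes summing to the even number n, if found."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p

for n in (4, 8, 14, 100):
    print(n, "=", goldbach_pair(n))   # e.g. 4 = (2, 2), 8 = (3, 5), 14 = (3, 11)
```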

Despite Euler’s and the Bernoullis’ dominance of 18th Century mathematics, many of the other important mathematicians were from
France. In the early part of the century, Abraham de Moivre is perhaps best known for de Moivre's formula, (cos x + i sin x)ⁿ = cos(nx)
+ i sin(nx), which links complex numbers and trigonometry. But he also generalized Newton’s famous binomial theorem into the
multinomial theorem, pioneered the development of analytic geometry, and his work on the normal distribution (he gave the first
statement of the formula for the normal distribution curve) and probability theory were of great importance.
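
A quick numerical check of de Moivre's formula in Python (a modern illustration):

```python
import math

# (cos x + i sin x)**n should equal cos(nx) + i sin(nx) for whole-number n.
x, n = 0.7, 5
lhs = complex(math.cos(x), math.sin(x)) ** n
rhs = complex(math.cos(n * x), math.sin(n * x))
print(lhs, rhs)                 # the two complex numbers agree
print(abs(lhs - rhs) < 1e-12)   # True
```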

France became even more prominent towards the end of the century, and a handful of late 18th Century French mathematicians in
particular deserve mention at this point, beginning with “the three L’s”.

Joseph Louis Lagrange collaborated with Euler in an important joint work on the calculus of variation, but he also contributed to
differential equations and number theory, and he is usually credited with originating the theory of groups, which would become so
important in 19th and 20th Century mathematics. His name is given to an early theorem in group theory, which states that the number of
elements of every sub-group of a finite group divides evenly into the number of elements of the original finite group.

Lagrange is also credited with the four-square theorem, that any natural number can be represented
as the sum of four squares (e.g. 3 = 1² + 1² + 1² + 0²; 31 = 5² + 2² + 1² + 1²; 310 = 17² + 4² +
2² + 1²; etc), as well as another theorem, confusingly also known as Lagrange’s Theorem or
Lagrange’s Mean Value Theorem, which states that, given a section of a smooth continuous
(differentiable) curve, there is at least one point on that section at which the derivative (or slope) of
the curve is equal (or parallel) to the average (or mean) derivative of the section. Lagrange’s 1788
treatise on analytical mechanics offered the most comprehensive treatment of classical mechanics
since Newton, and formed a basis for the development of mathematical physics in the 19th Century.

Lagrange’s Mean Value Theorem

Pierre-Simon Laplace, sometimes referred to as “the French Newton”, was an important
mathematician and astronomer, whose monumental work “Celestial Mechanics” translated the
geometric study of classical mechanics to one based on calculus, opening up a much broader range
of problems. Although his early work was mainly on differential equations and finite differences, he
was already starting to think about the mathematical and philosophical concepts of probability and statistics in the 1770s, and he
developed his own version of the so-called Bayesian interpretation of probability independently of Thomas Bayes. Laplace is well known
for his belief in complete scientific determinism, and he maintained that there should be a set of scientific laws that would allow us - at
least in principle - to predict everything about the universe and how it works.

Adrien-Marie Legendre also made important contributions to statistics, number theory, abstract algebra
and mathematical analysis in the late 18th and early 19th Centuries, although much of his work (such as
the least squares method for curve-fitting and linear regression, the quadratic reciprocity law, the prime
number theorem and his work on elliptic functions) was only brought to perfection - or at least to general
notice - by others, particularly Gauss. His “Elements of Geometry”, a re-working of Euclid’s book, became
the leading geometry textbook for almost 100 years, and his extremely accurate measurement of the
terrestrial meridian inspired the creation, and almost universal adoption, of the metric system of measures
and weights.

The first six Legendre polynomials (solutions to Legendre’s differential equation)

Yet another Frenchman, Gaspard Monge, was the inventor of descriptive geometry, a clever method of
representing three-dimensional objects by projections on the two-dimensional plane using a specific set
of procedures, a technique which would later become important in the fields of engineering, architecture
and design. His orthographic projection became the graphical method used in almost all modern
mechanical drawing.
After many centuries of increasingly accurate approximations, Johann Lambert, a Swiss mathematician and prominent astronomer, finally
provided a rigorous proof in 1761 that π is irrational, i.e. it can not be expressed as a simple fraction using integers only or as a terminating
or repeating decimal. This definitively proved that it would never be possible to calculate it exactly, although the obsession with obtaining
more and more accurate approximations continues to this day. (Over a hundred years later, in 1882, Ferdinand von Lindemann would
prove that π is also transcendental, i.e. it cannot be the root of any polynomial equation with rational coefficients). Lambert was also the
first to introduce hyperbolic functions into trigonometry and made some prescient conjectures regarding non-Euclidean space and the
properties of hyperbolic triangles.

https://www.storyofmathematics.com/18th.html

18TH CENTURY MATHEMATICS - BERNOULLI BROTHERS

Unusually in the history of mathematics, a single family, the Bernoullis, produced half a dozen
outstanding mathematicians over a couple of generations at the end of the 17th and start of the
18th Century.

The Bernoulli family was a prosperous family of traders and scholars from the free city of Basel in
Switzerland, which at that time was the great commercial hub of central Europe. The brothers, Jacob
and Johann Bernoulli, however, flouted their father's wishes for them to take over the family spice
business or to enter respectable professions like medicine or the ministry, and began studying
mathematics together.

Jacob (1654-1705) and Johann Bernoulli (1667-1748)

After Johann graduated from Basel University, the two developed a rather jealous and competitive
relationship. Johann in particular was jealous of the elder Jacob's position as professor at Basel
University, and the two often attempted to outdo each other. After Jacob's early death from
tuberculosis, Johann took over his brother's position, one of his young students being the great
Swiss mathematician Leonhard Euler. However, Johann merely shifted his jealousy toward his own talented son, Daniel (at one point,
Johann published a book based on Daniel's work, even changing the date to make it look as though his book had been published before
his son's).

Johann received a taste of his own medicine, though, when his student Guillaume de l'Hôpital published a book in his own name consisting
almost entirely of Johann's lectures, including his now famous rule about 0 ÷ 0 (a problem which had dogged mathematicians
since Brahmagupta's initial work on the rules for dealing with zero back in the 7th Century). This showed that 0 ÷ 0 does not equal zero,
does not equal 1, does not equal infinity, and is not even undefined, but is "indeterminate" (meaning it could equal any number). The
rule is still usually known as l'Hôpital's Rule, and not Bernoulli's Rule.

Despite their competitive and combative personal relationship, though, the brothers both had a clear aptitude for mathematics at a high
level, and constantly challenged and inspired each other. They established an early correspondence with Gottfried Leibniz, and were
among the first mathematicians to not only study and understand infinitesimal calculus but to apply it to various problems. They became
instrumental in disseminating the newly-discovered knowledge of calculus, and helping to make it the cornerstone of mathematics it has
become today.

But they were more than just disciples of Leibniz, and they also made their own important
contributions. One well known and topical problem of the day to which they applied themselves was
that of designing a sloping ramp which would allow a ball to roll from the top to the bottom in the
fastest possible time. Johann Bernoulli demonstrated through calculus that neither a straight ramp or
a curved ramp with a very steep initial slope were optimal, but actually a less steep curved ramp
known as a brachistochrone curve (a kind of upside-down cycloid, similar to the path followed by a
point on a moving bicycle wheel) is the curve of fastest descent.

This application was an example of the “calculus of variations”, a generalization of infinitesimal
calculus that the Bernoulli brothers developed together, and has since proved useful in fields as
diverse as engineering, financial investment, architecture and construction, and even space travel.
Johann also derived the equation for a catenary curve, such as that formed by a chain hanging
between two posts, a problem presented to him by his brother Jacob.

The Bernoullis first derived the brachistochrone curve, using their calculus of variations method

Jacob Bernoulli’s book “The Art of Conjecture”, published posthumously in 1713, consolidated
existing knowledge on probability theory and expected values, as well as adding personal
contributions, such as his theory of permutations and combinations, Bernoulli trials and Bernoulli
distribution, and some important elements of number theory, such as the Bernoulli Numbers
sequence. He also published papers on transcendental curves, and became the first person to
develop the technique for solving separable differential equations (the set of non-linear, but
solvable, differential equations are now named after him). He invented polar coordinates (a method
of describing the location of points in space using angles and distances) and was the first to use
the word “integral” to refer to the area under a curve.

Bernoulli Numbers

Jacob Bernoulli also discovered the approximate value of the irrational number e while exploring
the compound interest on loans. When compounded at 100% interest annually, $1.00 becomes
$2.00 after one year; when compounded semi-annually it produces $2.25; compounded quarterly
$2.44; monthly $2.61; weekly $2.69; daily $2.71; etc. If it were to be compounded continuously,
the $1.00 would tend towards a value of $2.7182818... after a year, a value which became known
as e. Algebraically, it is the limiting value of the sequence (1 + 1⁄1)¹, (1 + 1⁄2)², (1 + 1⁄3)³, (1 + 1⁄4)⁴, ..., i.e. of (1 + 1⁄n)ⁿ as n becomes infinitely large.
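
Bernoulli's compounding experiment is easy to replay in Python (a modern illustration):

```python
# $1.00 at 100% annual interest, compounded n times a year, grows to (1 + 1/n)**n.
schedules = [(1, "annually"), (2, "semi-annually"), (4, "quarterly"),
             (12, "monthly"), (52, "weekly"), (365, "daily"), (1_000_000, "almost continuously")]

for n, label in schedules:
    print(f"{label:20s} {(1 + 1 / n) ** n:.7f}")   # 2.0, 2.25, 2.44..., approaching 2.7182818 = e
```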

Johann’s sons Nicolaus, Daniel and Johann II, and even his grandchildren Jacob II and Johann III, were all accomplished mathematicians
and teachers. Daniel Bernoulli, in particular, is well known for his work on fluid mechanics (especially Bernoulli’s Principle on the inverse
relationship between the speed and pressure of a fluid or gas), as much as for his work on probability and statistics.
https://www.storyofmathematics.com/18th_bernoulli.html

18TH CENTURY MATHEMATICS - EULER

Leonhard Euler was one of the giants of 18th Century mathematics. Like the Bernoullis, he was born in
Basel, Switzerland, and he studied for a while under Johann Bernoulli at Basel University. But, partly due to
the overwhelming dominance of the Bernoulli family in Swiss mathematics, and the difficulty of finding a
good position and recognition in his hometown, he spent most of his academic life in Russia and Germany,
especially in the burgeoning St. Petersburg of Peter the Great and Catherine the Great.

Despite a long life and thirteen children, Euler had more than his fair share of tragedies and deaths, and
even his blindness later in life did not slow his prodigious output - his collected works comprise nearly 900
books and, in the year 1775, he is said to have produced on average one mathematical paper every week -
as he compensated for it with his mental calculation skills and photographic memory (for example, he could
repeat the Aeneid of Virgil from beginning to end without hesitation, and for every page in the edition he
could indicate which line was the first and which the last).

Leonhard Euler (1707-1783)

Today, Euler is considered one of the greatest mathematicians of all time. His interests covered almost all
aspects of mathematics, from geometry to calculus to trigonometry to algebra to number theory, as well as
optics, astronomy, cartography, mechanics, weights and measures and even the theory of music.

Much of the notation used by mathematicians today - including e, i, f(x), ∑, and the use
of a, b and c as constants and x, y and z as unknowns - was either created, popularized or
standardized by Euler. His efforts to standardize these and other symbols (including π and the
trigonometric functions) helped to internationalize mathematics and to encourage collaboration
on problems.

He even managed to combine several of these together in an amazing feat of mathematical
alchemy to produce one of the most beautiful of all mathematical equations, e^(iπ) = -1, sometimes
known as Euler’s Identity. This equation combines arithmetic, calculus, trigonometry and complex
analysis into what has been called "the most remarkable formula in mathematics", "uncanny and
sublime" and "filled with cosmic beauty", among other descriptions. Another such discovery, often
known simply as Euler’s Formula, is e^(ix) = cos x + i sin x. In fact, in a recent poll of mathematicians,
three of the top five most beautiful formulae of all time were Euler’s. He seemed to have an
instinctive ability to demonstrate the deep relationships between trigonometry, exponentials and
complex numbers.
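
Both formulae are easy to verify numerically; the short Python check below is a modern illustration, not anything Euler himself wrote:

```python
import cmath, math

print(cmath.exp(1j * math.pi))            # (-1+1.22e-16j): e^(i*pi) = -1, up to rounding error

x = 0.5
print(cmath.exp(1j * x))                  # e^(ix) ...
print(complex(math.cos(x), math.sin(x)))  # ... equals cos(x) + i*sin(x)
```
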
Mathematical notation created or popularized by Euler

The discovery that initially sealed Euler’s reputation was announced in 1735 and concerned the
calculation of infinite sums. It was called the Basel problem after the Bernoullis had tried and
failed to solve it, and asked what was the precise sum of the reciprocals of the squares of all the natural numbers to infinity,
i.e. 1⁄1² + 1⁄2² + 1⁄3² + 1⁄4² ... (in modern terms, the zeta function evaluated at 2). Euler’s friend Daniel Bernoulli had estimated the sum to be
about 1 3⁄5, but Euler’s superior method yielded the exact but rather unexpected result of π²⁄6. He also showed that the infinite series
was equivalent to an infinite product over the prime numbers, an identity which would later inspire Riemann’s investigation of complex zeta
functions.
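
A few lines of Python (a modern illustration) show the partial sums creeping up towards Euler's value:

```python
import math

# Partial sums of 1/1^2 + 1/2^2 + 1/3^2 + ... approach pi^2 / 6.
total = 0.0
for k in range(1, 100_001):
    total += 1 / k ** 2
print(total)              # 1.6449240...
print(math.pi ** 2 / 6)   # 1.6449340..., Euler's exact value
```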

Also in 1735, Euler solved an intransigent mathematical and logical problem, known as the Seven
Bridges of Königsberg Problem, which had perplexed scholars for many years, and in doing so laid
the foundations of graph theory and presaged the important mathematical idea of topology. The
city of Königsberg in Prussia (modern-day Kaliningrad in Russia) was set on both sides of the Pregel
River, and included two large islands which were connected to each other and the mainland by
seven bridges. The problem was to find a route through the city that would cross each bridge once
and only once.

In fact, Euler proved that the problem has no solution, but in doing so he made the important conceptual leap of pointing out that the choice of route within each land mass is irrelevant and the only important feature is the sequence of bridges crossed. This allowed him to reformulate the problem in abstract terms, replacing each land mass with an abstract node and each bridge with an abstract connection. This resulted in a mathematical structure called a “graph”, a pictorial representation made up of points (vertices) connected by non-intersecting curves (arcs), which may be distorted in any way without changing the graph itself. In this way, Euler was able to deduce that, because each of the four land masses in the original problem is touched by an odd number of bridges, the existence of a walk traversing each bridge once and only once inevitably leads to a contradiction. If Königsberg had had one fewer bridge, on the other hand, with an even number of bridges leading to each piece of land, then a solution would have been possible.
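Euler’s criterion can be stated very compactly: a walk crossing every bridge exactly once can exist only if the number of land masses touched by an odd number of bridges is zero or two. A small Python sketch (the labels A to D for the land masses are purely illustrative):

    from collections import Counter

    # The Koenigsberg multigraph: 4 land masses (A, B, C, D) and 7 bridges as edges
    bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
               ("A", "D"), ("B", "D"), ("C", "D")]

    # Euler's criterion: a walk crossing every edge exactly once exists only if
    # the number of vertices with odd degree is 0 or 2 (and the graph is connected).
    degree = Counter()
    for u, v in bridges:
        degree[u] += 1
        degree[v] += 1

    odd = [v for v, d in degree.items() if d % 2 == 1]
    print(degree)                 # every land mass has odd degree (5, 3, 3, 3)
    print(len(odd) in (0, 2))     # False: no such walk exists for Koenigsberg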
The list of theorems and methods pioneered by Euler is immense, and largely outside the
scope of an entry-level study such as this, but mention could be made of just some of them:

the demonstration of geometrical properties such as Euler’s Line and Euler’s Circle;
the definition of the Euler Characteristic χ (chi) for the surfaces of polyhedra, whereby the number of vertices minus the number of edges plus the number of faces always equals 2 (for a cube, for example, 8 - 12 + 6 = 2; see the short check after this list);
a new method for solving quartic equations;
early results on the distribution of the prime numbers, anticipating the Prime Number Theorem that was finally proved at the end of the 19th Century;
proofs (and in some cases disproofs) of some of Fermat’s theorems and conjectures;
the discovery of over 60 amicable numbers (pairs of numbers for which the sum of the divisors of one number equals the other number), although some were actually incorrect;
a method of calculating integrals with complex limits (foreshadowing the development of modern complex analysis);
the calculus of variations, including its best-known result, the Euler-Lagrange equation;
a proof of the infinitude of primes, using the divergence of the harmonic series;
the integration of Leibniz's differential calculus with Newton's Method of Fluxions into a form of calculus we would recognize today, as well as the development of tools to make it easier to apply calculus to real physical problems;
etc, etc.
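As a quick check of the Euler Characteristic mentioned in the list above, here is a short Python sketch using the standard vertex, edge and face counts of the five Platonic solids (listed by hand):

    # A rough check of Euler's polyhedron formula V - E + F = 2 for a few solids
    solids = {
        "tetrahedron": (4, 6, 4),
        "cube": (8, 12, 6),
        "octahedron": (6, 12, 8),
        "dodecahedron": (20, 30, 12),
        "icosahedron": (12, 30, 20),
    }

    for name, (v, e, f) in solids.items():
        print(f"{name}: {v} - {e} + {f} = {v - e + f}")   # always 2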

In 1766, Euler accepted an invitation from Catherine the Great to return to the St. Petersburg Academy, and spent the rest of his life in
Russia. However, his second stay in the country was marred by tragedy, including a fire in 1771 which cost him his home (and almost
his life), and the loss in 1773 of his dear wife of 40 years, Katharina. He later married Katharina's half-sister, Salome Abigail, and this
marriage would last until his death from a brain hemorrhage in 1783.

https://www.storyofmathematics.com/18th_euler.html

19TH CENTURY MATHEMATICS

The 19th Century saw an unprecedented increase in the breadth and complexity of
mathematical concepts. Both France and Germany were caught up in the age of revolution
which swept Europe in the late 18th Century, but the two countries treated mathematics
quite differently.

After the French Revolution, Napoleon emphasized the practical usefulness of mathematics
and his reforms and military ambitions gave French mathematics a big boost, as exemplified
by “the three L’s”, Lagrange, Laplace and Legendre (see the section on 18th Century
Mathematics), Fourier and Galois.

Joseph Fourier's study, at the beginning of the 19th Century, of infinite sums in which the terms are trigonometric functions was another important advance in mathematical analysis. Periodic functions that can be expressed as the sum of an infinite series of sines and cosines are known today as Fourier Series, and they are still powerful tools in pure and applied mathematics. Fourier (following Leibniz, Euler, Lagrange and others) also contributed towards defining exactly what is meant by a function, although the definition that is found in texts today - defining it in terms of a correspondence between elements of the domain and the range - is usually attributed to the 19th Century German mathematician Peter Dirichlet.
Approximation of a periodic function by a Fourier Series
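A simple way to see a Fourier Series at work is to sum the first few sine terms of the series for a square wave; a minimal Python sketch (the function name is purely illustrative):

    import math

    # Partial Fourier series of a square wave: (4/pi) * sum over odd k of sin(k*x)/k.
    # More terms give a better approximation of the jump between -1 and +1.
    def square_wave_partial(x, terms):
        return (4 / math.pi) * sum(math.sin(k * x) / k for k in range(1, 2 * terms, 2))

    x = 1.0   # the exact square wave equals +1 here (0 < x < pi)
    for terms in (1, 5, 50):
        print(terms, square_wave_partial(x, terms))   # the values approach 1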

In 1806, Jean-Robert Argand published his paper on how complex numbers (of the form a + bi, where i is √-1) could be represented on
geometric diagrams and manipulated using trigonometry and vectors. Even though the Dane Caspar Wessel had produced a very similar
paper at the end of the 18th Century, and even though it was Gauss who popularized the practice, they are still known today as Argand
Diagrams.

The Frenchman Évariste Galois proved in the late 1820s that there is no general algebraic method for solving polynomial equations of
any degree greater than four, going further than the Norwegian Niels Henrik Abel who had, just a few years earlier, shown the impossibility
of solving quintic equations, and breaking an impasse which had existed for centuries. Galois' work also laid the groundwork for further
developments such as the beginnings of the field of abstract algebra, including areas like algebraic geometry, group theory, rings, fields,
modules, vector spaces and non-commutative algebra.

Germany, on the other hand, under the influence of the great educationalist Wilhelm von Humboldt, took a rather different approach,
supporting pure mathematics for its own sake, detached from the demands of the state and military. It was in this environment that the
young German prodigy Carl Friedrich Gauss, sometimes called the “Prince of Mathematics”, received his education at the prestigious
University of Göttingen. Some of Gauss’ ideas were a hundred years ahead of their time, and touched on many different parts of the
mathematical world, including geometry, number theory, calculus, algebra and probability. He is widely regarded as one of the three
greatest mathematicians of all time, along with Archimedes and Newton.
Later in life, Gauss also claimed to have investigated a kind
of non-Euclidean geometry using curved space but,
unwilling to court controversy, he decided not to pursue or
publish any of these avant-garde ideas. This left the field
open for János Bolyai and Nikolai
Lobachevsky (respectively, a Hungarian and a Russian)
who both independently explored the potential of
hyperbolic geometry and curved spaces.

The German Bernhard Riemann worked on a different kind of non-Euclidean geometry called elliptic geometry, as well as on a generalized theory of all the different types of geometry (Euclidean, hyperbolic and elliptic). Riemann, however, soon took this even further, breaking away completely from all the limitations of 2 and 3 dimensional geometry, whether flat or curved, and began to think in higher dimensions. His exploration of the zeta function in the complex plane revealed an unexpected link with the distribution of prime numbers, and his famous Riemann Hypothesis, still unproven after more than 150 years, remains one of the world’s great unsolved mathematical mysteries and the testing ground for new generations of mathematicians.

British mathematics also saw something of a resurgence in the early and mid-19th century. Although the roots of the computer go back
to the geared calculators of Pascal and Leibniz in the 17th Century, it was Charles Babbage in 19th Century England who designed a
machine that could automatically perform computations based on a program of instructions stored on cards or tape. His large "difference engine" of 1823 was designed to calculate logarithms and trigonometric functions, and was the true forerunner of the modern electronic computer. Although never actually built in his lifetime, a machine was built almost 200 years later to his specifications and worked
perfectly. He also designed a much more sophisticated machine he called the "analytic engine", complete with punched cards, printer
and computational abilities commensurate with modern computers.

Another 19th Century Englishman, George Peacock, is usually credited with the invention of symbolic algebra, and the extension of the
scope of algebra beyond the ordinary systems of numbers. This recognition of the possible existence of non-arithmetical algebras was
an important stepping stone toward future developments in abstract algebra.

In the mid-19th Century, the British mathematician George Boole devised an algebra (now called Boolean algebra or Boolean logic), in
which the only operators were AND, OR and NOT, and which could be applied to the solution of logical problems and mathematical
functions. He also described a kind of binary system which used just two objects, "on" and "off" (or "true" and "false", 0 and 1, etc), in
which, famously, 1 + 1 = 1. Boolean algebra was the starting point of modern mathematical logic and ultimately led to the development
of computer science.

The concept of number and algebra was further extended by the Irish mathematician William Hamilton, whose 1843 theory of quaternions introduced a 4-dimensional number system in which a quantity representing a 3-dimensional rotation can be described by just an angle and a vector. Quaternions, and their later generalization by Hermann Grassmann, provided the first example of a non-commutative algebra (i.e. one in which a × b does not always equal b × a), and showed that several different consistent algebras may be derived by choosing different sets of axioms.
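The non-commutativity is easy to exhibit directly from Hamilton’s multiplication rules; a small Python sketch (quaternions are represented here simply as tuples (w, x, y, z)):

    # Hamilton's quaternion product on tuples (w, x, y, z) = w + xi + yj + zk;
    # a small illustration that the algebra is non-commutative (i*j = k but j*i = -k).
    def qmul(a, b):
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    i, j = (0, 1, 0, 0), (0, 0, 1, 0)
    print(qmul(i, j))   # (0, 0, 0, 1)  =  k
    print(qmul(j, i))   # (0, 0, 0, -1) = -k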

The Englishman Arthur Cayley extended Hamilton's quaternions and developed the octonions. But
Cayley was one of the most prolific mathematicians in history, and was a pioneer of modern group
theory, matrix algebra, the theory of higher singularities, and higher dimensional geometry
(anticipating the later ideas of Klein), as well as the theory of invariants.

Throughout the 19th Century, mathematics in general became ever more complex and abstract.
But it also saw a re-visiting of some older methods and an emphasis on mathematical rigour. In
the first decades of the century, the Bohemian priest Bernhard Bolzano was one of the earliest
mathematicians to begin instilling rigour into mathematical analysis, as well as giving the first purely analytic proof of both the
fundamental theorem of algebra and the intermediate value theorem, and early consideration of sets (collections of objects defined by a
common property, such as "all the numbers greater than 7" or "all right triangles", etc). When the German mathematician Karl Weierstrass
discovered the theoretical existence of a continuous function having no derivative (in other words, a continuous curve possessing no
tangent at any of its points), he saw the need for a rigorous “arithmetization” of calculus, from which all the basic concepts of analysis
could be derived.

Along with Riemann and, particularly, the Frenchman Augustin-Louis Cauchy, Weierstrass completely reformulated calculus in an even
more rigorous fashion, leading to the development of mathematical analysis, a branch of pure mathematics largely concerned with the
notion of limits (whether it be the limit of a sequence or the limit of a function) and with the theories of differentiation, integration,
infinite series and analytic functions. In 1845, Cauchy also proved Cauchy's theorem, a fundamental theorem of group theory, which he
discovered while examining permutation groups. Carl Jacobi also made important contributions to analysis, determinants and matrices, most notably his theory of periodic and elliptic functions and their relation to the elliptic theta function.
August Ferdinand Möbius is best known for his 1858 discovery of the Möbius strip, a non-orientable two-
dimensional surface which has only one side when embedded in three-dimensional Euclidean space (in fact, another German, Johann Benedict Listing, had devised the same object just a couple of months before Möbius, but it has come to bear Möbius' name). Many other concepts are also named after him, including the Möbius
configuration, Möbius transformations, the Möbius transform of number theory, the Möbius function and the
Möbius inversion formula. He also introduced homogeneous coordinates and discussed geometric and
projective transformations.

Felix Klein also pursued more developments in non-Euclidean geometry, including the Klein bottle, a one-sided closed surface which cannot be embedded in three-dimensional Euclidean space, only in four or more dimensions. It can be best visualized as a cylinder looped back through itself to join with its other end from the "inside". Klein’s 1872 Erlangen Program, which classified geometries by their underlying symmetry groups (or their groups of transformations), was a hugely influential synthesis of much of the mathematics of the day, and his work was very important in the later development of group theory and function theory.
Non-orientable surfaces with no identifiable "inner" and "outer" sides

The Norwegian mathematician Marius Sophus Lie also applied algebra to the study of geometry. He largely created the theory of
continuous symmetry, and applied it to the geometric theory of differential equations by means of continuous groups of transformations
known as Lie groups.

In an unusual occurrence in 1866, an unknown 16-year old Italian, Niccolò Paganini, discovered the second smallest pair of amicable
numbers (1,184 and 1,210), which had been completely overlooked by some of the greatest mathematicians in history (including Euler,
who had identified over 60 such numbers in the 18th Century, some of them huge).
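The pair is easy to verify by brute force; a tiny Python sketch:

    # Check Paganini's pair of amicable numbers: the proper divisors of each sum to the other
    def proper_divisor_sum(n):
        return sum(d for d in range(1, n) if n % d == 0)

    print(proper_divisor_sum(1184))   # 1210
    print(proper_divisor_sum(1210))   # 1184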

In the later 19th Century, Georg Cantor established the first foundations of set theory, which enabled the rigorous treatment of the
notion of infinity, and which has since become the common language of nearly all mathematics. In the face of fierce resistance from
most of his contemporaries and his own battle against mental illness, Cantor explored new mathematical worlds where there were many
different infinities, some of which were larger than others.

Cantor’s work on set theory was extended by another German, Richard Dedekind, who defined concepts such as similar sets and infinite sets. Dedekind also came up with the notion now called a Dedekind cut, which has become a standard definition of the real numbers. He showed that any irrational number divides the rational numbers into two classes or sets, the upper class being strictly greater than all the members of the other, lower class. Thus, every location on the number line continuum contains either a rational or an irrational number, with no empty locations, gaps or discontinuities. In 1881, the Englishman John Venn introduced his “Venn diagrams”, which became useful and ubiquitous tools in set theory.
Venn diagram
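The Dedekind cut just described can be sketched very simply; the following Python fragment (using the standard fractions module, and taking the cut that defines √2 as its example) classifies a few rationals into the lower and upper classes:

    from fractions import Fraction

    # A sketch of the Dedekind cut defining sqrt(2): every rational falls into the
    # lower class (q < 0 or q*q < 2) or the upper class (q*q > 2).
    def in_lower_class(q):
        return q < 0 or q * q < 2

    for q in [Fraction(1), Fraction(7, 5), Fraction(141, 100), Fraction(3, 2), Fraction(2)]:
        print(q, "lower" if in_lower_class(q) else "upper")
    # 1, 7/5 and 141/100 fall below the cut; 3/2 and 2 fall above it, and no rational
    # sits exactly on the boundary, which is the irrational sqrt(2).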
Building on Riemann’s deep ideas on the distribution of prime numbers, the year 1896 saw two independent proofs of the asymptotic law of the distribution of prime numbers (known as the Prime Number Theorem), one by Jacques Hadamard and one by Charles de la Vallée Poussin, which showed that the number of primes occurring up to any number x is asymptotic to (or tends towards) x/log x.
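The theorem is easy to observe numerically; a short Python sketch using a basic sieve (the helper function is purely illustrative):

    import math

    # Compare the actual prime count pi(x) with the Prime Number Theorem estimate x / ln(x)
    def prime_count(limit):
        sieve = bytearray([1]) * (limit + 1)
        sieve[0:2] = b"\x00\x00"
        for p in range(2, int(limit ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
        return sum(sieve)

    for x in (1_000, 100_000, 1_000_000):
        print(x, prime_count(x), round(x / math.log(x)))
    # The ratio of the two counts tends towards 1 as x grows, as the theorem asserts.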

Hermann Minkowski, a great friend of David Hilbert and teacher of the young Albert Einstein,
developed a branch of number theory called the "geometry of numbers" late in the 19th
Century as a geometrical method in multi-dimensional space for solving number theory
problems, involving complex concepts such as convex sets, lattice points and vector space.
Later, in 1907, it was Minkowski who realized that Einstein’s 1905 special theory of relativity
could be best understood in a four-dimensional space, often referred to as Minkowski space-
time.

Gottlob Frege’s 1879 “Begriffsschrift” (roughly translated as “Concept-Script”) broke new ground in the field of logic, including a rigorous treatment of the ideas of functions and variables. In his attempt to show that mathematics grows out of logic, he devised techniques that took him far beyond the logical traditions of Aristotle (and even of George Boole). He was the first to explicitly introduce the notion of variables in logical statements, as well as the notions of quantifiers, universals and existentials. He extended Boole's "propositional logic" into a new "predicate logic" and, in so doing, set the stage for the radical advances of Giuseppe Peano, Bertrand Russell and David Hilbert in the early 20th Century.

Henri Poincaré came to prominence in the latter part of the 19th Century with at least a partial solution to the “three body problem”, a
deceptively simple problem which had stubbornly resisted resolution since the time of Newton, over two hundred years earlier. Although
his solution actually proved to be erroneous, its implications led to the early intimations of what would later become known as chaos
theory. In between his important work in theoretical physics, he also greatly extended the theory of mathematical topology, leaving
behind a knotty problem known as the Poincaré conjecture, which remained unsolved until 2002.

Poincaré was also an engineer and a polymath, and perhaps the last of the great mathematicians to adhere to an older conception of
mathematics, which championed a faith in human intuition over rigour and formalism. He is sometimes referred to as the “Last Universalist”
as he was perhaps the last mathematician able to shine in almost all of the various aspects of what had become by now a huge,
encyclopedic and incredibly complex subject. The 20th Century would belong to the specialists.

https://www.storyofmathematics.com/19th.html
19TH CENTURY MATHEMATICS - GALOIS

Évariste Galois was a radical republican and something of a romantic figure in French mathematical history.
He died in a duel at the young age of 20, but the work he published shortly before his death made his name
in mathematical circles, and would go on to allow proofs by later mathematicians of problems which had
been impossible for many centuries. It also laid the groundwork for many later developments in mathematics,
particularly the beginnings of the important fields of abstract algebra and group theory.

Despite his lacklustre performance at school (he twice failed entrance exams to the École Polytechnique),
the young Galois devoured the work of Legendre and Lagrange in his spare time. At the tender age of 17,
he began making fundamental discoveries in the theory of polynomial equations (equations constructed from variables and constants, using only the operations of addition, subtraction, multiplication and non-negative whole-number exponents, such as x² - 4x + 7 = 0). He effectively proved that there can be no general formula for solving quintic equations (polynomials including a term of x⁵), just as the young Norwegian Niels Henrik Abel had a few years earlier, although by a different method. But he was also able to prove the more general, and more powerful, idea that there is no general algebraic method for solving polynomial equations of any degree greater than four.
Évariste Galois (1811-1832)

Galois achieved this general proof by looking at whether or not the “permutation group” of an equation's roots (now known as its Galois group) had a certain structure. He was the first to use the term
“group” in its modern mathematical sense of a group of permutations (foreshadowing the modern
field of group theory), and his fertile approach, now known as Galois theory, was adapted by
later mathematicians to many other fields of mathematics besides the theory of equations.

Galois’ breakthrough in turn led to definitive proofs (or rather disproofs) later in the century of the so-called “Three Classical Problems”, which had first been formulated by Plato and others back in ancient Greece: the doubling of the cube and the trisection of an angle (both proved impossible in 1837), and the squaring of the circle (also proved impossible, in 1882).

Galois was a hot-headed political firebrand (he was arrested several times for political acts), and
his political affiliations and activities as a staunch republican during the rule of Louis-Philippe
continually distracted him from his mathematical work. He was killed in a duel in 1832, under
rather shady circumstances, but he had spent the whole of the previous night outlining his
mathematical ideas in a detailed letter to his friend Auguste Chevalier, as though convinced of
his impending death.
An example of Galois’ rather undisciplined notes

Ironically, his young contemporary Abel also had a promising career cut short. He died in poverty of tuberculosis at the age of just 26, although his legacy lives on in the term “abelian” (usually written with a small "a"), which has since become commonplace in discussing concepts such as the abelian group, abelian category and abelian variety.

https://www.storyofmathematics.com/19th_galois.html

19TH CENTURY MATHEMATICS - GAUSS

Carl Friedrich Gauss is sometimes referred to as the "Prince of Mathematicians" and the "greatest mathematician
since antiquity". He has had a remarkable influence in many fields of mathematics and science and is ranked as
one of history's most influential mathematicians.

Gauss was a child prodigy. There are many anecdotes concerning his precocity as a child, and he made his first
ground-breaking mathematical discoveries while still a teenager.

At just three years old, he corrected an error in his father's payroll calculations, and he was looking after his father’s accounts on a regular basis by the age of 5. At the age of 7, he is reported to have amazed his teachers by summing the integers from 1 to 100 almost instantly (having quickly spotted that the sum was actually 50 pairs of numbers, with each pair summing to 101, for a total of 5,050). By the age of 12, he was already attending gymnasium and criticizing Euclid’s geometry.
Carl Friedrich Gauss (1777-1855)
Although his family was poor and working class, Gauss' intellectual abilities attracted the attention of the Duke of
Brunswick, who sent him to the Collegium Carolinum at 15, and then to the prestigious University of Göttingen (which he attended from
1795 to 1798). It was as a teenager attending university that Gauss discovered (or independently rediscovered) several important
theorems.

At 15, Gauss was the first to find any kind of a pattern in the occurrence of prime numbers, a problem which had exercised the minds of the best mathematicians since ancient times. Although the occurrence of prime numbers appeared to be almost completely random, Gauss approached the problem from a different angle by graphing the incidence of primes as the numbers increased. He noticed a rough pattern or trend: each time the numbers increased by a factor of 10, the "1 in N" odds of a number being prime lengthened by about 2 (e.g. there is roughly a 1 in 4 chance of getting a prime in the numbers from 1 to 100, a 1 in 6 chance in the numbers from 1 to 1,000, a 1 in 8 chance from 1 to 10,000, 1 in 10 from 1 to 100,000, etc). However, he was quite aware that his method merely yielded an approximation and, as he could not definitively prove his findings, he kept them secret until much later in life.
Graphs of the density of prime numbers
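Gauss’ observation can be reproduced directly by counting primes below successive powers of 10; a simple (and deliberately naive) Python sketch:

    # Reproduce Gauss' observation: the "1 in N" odds of a number below 10^k being prime
    # lengthen by roughly 2 for every extra power of 10 (N is about ln(10^k), i.e. 2.3k).
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    for k in range(2, 6):                     # limits 100, 1,000, 10,000, 100,000
        limit = 10 ** k
        count = sum(1 for n in range(2, limit + 1) if is_prime(n))
        print(f"up to {limit}: 1 prime in every {limit / count:.1f} numbers")
    # prints roughly 4, 6, 8 and 10, matching the pattern described above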
In Gauss’s annus mirabilis of 1796, at just 19 years of age, he constructed a hitherto
unknown regular seventeen-sided figure using only a ruler and compass, a major advance
in this field since the time of Greek mathematics, formulated his prime number theorem
on the distribution of prime numbers among the integers, and proved that every positive
integer is representable as a sum of at most three triangular numbers.

Although he made contributions in almost all fields of mathematics, number theory was always Gauss’ favourite area, and he asserted that “mathematics is the queen of the sciences, and the theory of numbers is the queen of mathematics”. An example of how Gauss revolutionized number theory can be seen in his work with complex numbers (combinations of real and imaginary numbers).
The 17-sided heptadecagon constructed by Gauss

Gauss gave the first clear exposition of complex numbers and of the investigation of functions of
complex variables in the early 19th Century. Although imaginary numbers involving i (the imaginary unit, equal to the square root of -1) had been used since as early as the 16th Century to solve
equations that could not be solved in any other way, and despite Euler’s ground-breaking work on
imaginary and complex numbers in the 18th Century, there was still no clear picture of how
imaginary numbers connected with real numbers until the early 19th Century. Gauss was not the
first to interpret complex numbers graphically (Jean-Robert Argand produced his Argand diagrams
in 1806, and the Dane Caspar Wessel had described similar ideas even before the turn of the
century), but Gauss was certainly responsible for popularizing the practice and also formally
introduced the standard notation a + bi for complex numbers. As a result, the theory of complex
numbers received a notable expansion, and its full potential began to be unleashed.
Representation of complex numbers

At the age of just 22, he proved what is now known as the Fundamental Theorem of Algebra
(although it was not really about algebra). The theorem states that every non-constant single-variable polynomial over the complex
numbers has at least one root (although his initial proof was not rigorous, he improved on it later in life). What it also showed was that
the field of complex numbers is algebraically "closed" (unlike the real numbers, where a polynomial with real coefficients can have roots that lie outside the real numbers, in the complex number field).
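A quick numerical illustration of both points, assuming the numpy library is available:

    import numpy as np

    # A real-coefficient polynomial with no real roots, x^2 + 1, still has roots in the
    # complex numbers, as the Fundamental Theorem of Algebra guarantees.
    print(np.roots([1, 0, 1]))             # i and -i (in some order)

    # A degree-5 polynomial with complex coefficients also has (five) complex roots.
    print(np.roots([1, 0, 0, 0, 0, 1j]))   # the five fifth-roots of -i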

Then, in 1801, at 24 years of age, he published his book “Disquisitiones Arithmeticae”, which is regarded today as one of the most
influential mathematics books ever written, and which laid the foundations for modern number theory. Among many other things, the
book contained a clear presentation of Gauss’ method of modular arithmetic, and the first proof of the law of quadratic reciprocity (first
conjectured by Euler and Legendre).

For much of his life, Gauss also retained a strong interest in theoretical astronomy, and he held the post of Director of the astronomical observatory in Göttingen for many years. When the planetoid Ceres was in the process of being identified in the first months of the 19th Century, Gauss made a prediction of its position which varied greatly from the predictions of most other astronomers of the time. But, when Ceres was finally located again at the end of 1801, it was almost exactly where Gauss had predicted. Although he did not explain his methods at the time, this was
one of the first applications of the least squares approximation method, usually attributed to
Gauss, although also claimed by the Frenchman Legendre. Gauss claimed to have done the
logarithmic calculations in his head.
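The least squares idea itself is easy to sketch; the following Python fragment, assuming numpy is available and using made-up data points, fits a "line of best fit" in the Gauss-Legendre sense:

    import numpy as np

    # A minimal least-squares fit of a straight line y = a*x + b to noisy points
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])          # roughly y = 2x + 1 with noise

    A = np.column_stack([x, np.ones_like(x)])        # design matrix for [a, b]
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)   # minimizes the sum of squared errors
    print(a, b)                                      # close to 2 and 1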

As Gauss’ fame spread, though, and he became known throughout Europe as the go-to man
for complex mathematical questions, his character deteriorated and he became increasingly
arrogant, bitter, dismissive and unpleasant, rather than just shy. There are many stories of
the way in which Gauss had dismissed the ideas of young mathematicians or, in some cases,
claimed them as his own.
Line of best fit by Gauss’ least squares method

In the area of probability and statistics, Gauss introduced what is now known as the Gaussian distribution, the Gaussian function and the Gaussian error curve. He showed how probability could be represented by a bell-shaped or “normal” curve, which peaks around the mean or expected value and quickly falls off towards plus/minus infinity, and which is basic to descriptions of statistically distributed data.
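A minimal sketch of the normal curve in Python (standard library only; the numerical integration is deliberately crude):

    import math

    # The Gaussian (normal) probability density with mean mu and standard deviation sigma
    def normal_pdf(x, mu=0.0, sigma=1.0):
        return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

    print(normal_pdf(0.0))    # peak of the standard bell curve, about 0.399
    print(normal_pdf(3.0))    # far out in the tail, about 0.004

    # Roughly 68% of the area under the curve lies within one standard deviation of the mean
    step = 0.001
    area = sum(normal_pdf(-1 + i * step) * step for i in range(int(2 / step)))
    print(area)               # about 0.683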

He also made the first systematic study of modular arithmetic - using integer
division and the modulus - which now has applications in number theory, abstract
algebra, computer science, cryptography, and even in visual and musical art.
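A few lines of Python illustrate the basic idea of congruence:

    # Modular arithmetic in the style Gauss systematized: two numbers are "congruent
    # modulo m" if they leave the same remainder on division by m.
    a, b, m = 38, 14, 12
    print(a % m, b % m)            # 2 and 2, so 38 and 14 are congruent mod 12, like hours on a clock
    print((a - b) % m == 0)        # True: an equivalent way of stating the congruence

    # Congruences respect addition and multiplication, which is what makes the arithmetic useful
    print((a + 5) % m == (b + 5) % m)   # True
    print((a * 7) % m == (b * 7) % m)   # True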

While engaged on a rather banal surveying job for the Royal House of Hanover in
the years after 1818, Gauss was also looking into the shape of the Earth, and
starting to speculate on revolutionary ideas like the shape of space itself. This led him
to question one of the central tenets of the whole of mathematics, Euclidean
geometry, which was clearly premised on a flat, and not a curved, universe. He
later claimed to have considered a non-Euclidean geometry (in which Euclid's
parallel axiom, for example, does not apply), which was internally consistent and
free of contradiction, as early as 1800. Unwilling to court controversy, however,
Gauss decided not to pursue or publish any of his avant-garde ideas in this area,
leaving the field open to Bolyai and Lobachevsky, although he is still considered
by some to be a pioneer of non-Euclidean geometry.
The Hanover survey work also fuelled Gauss' interest in differential geometry (a field of
mathematics dealing with curves and surfaces) and what has come to be known as
Gaussian curvature (an intrinsic measure of curvature, dependent only on how distances
are measured on the surface, not on the way it is embedded in space). All in all, despite
the rather pedestrian nature of his employment, the responsibilities of caring for his sick
mother and the constant arguments with his wife Minna (who desperately wanted to
move to Berlin), this was a very fruitful period of his academic life, and he published
over 70 papers between 1820 and 1830.

Gauss’ achievements were not limited to pure mathematics, however. During his
surveying years, he invented the heliotrope, an instrument that uses a mirror to reflect
sunlight over great distances to mark positions in a land survey. In later years, he
collaborated with Wilhelm Weber on measurements of the Earth's magnetic field, and
invented the first electric telegraph. In recognition of his contributions to the theory of
electromagnetism, the CGS unit of magnetic induction is known as the gauss.

https://www.storyofmathematics.com/19th_gauss.html

19TH CENTURY MATHEMATICS - BOLYAI AND LOBACHEVSKY

János Bolyai was a Hungarian mathematician who spent most of his life in a little-
known backwater of the Hapsburg Empire, in the wilds of the Transylvanian mountains
of modern-day Romania, far from the mainstream mathematical communities of
Germany, France and England. No original portrait of Bolyai survives, and the picture
that appears in many encyclopedias and on a Hungarian postage stamp is known to
be unauthentic.

His father and teacher, Farkas Bolyai, was himself an accomplished mathematician and
had been a student of the great German mathematician Gauss for a time, but the
cantankerous Gauss refused to take on the young prodigy János as a student. So, he
was forced to join the army in order to earn a living and support his family, although
he persevered with his mathematics in his spare time. He was also a talented linguist,
speaking nine foreign languages, including Chinese and Tibetan.
János Bolyai (1802-1860) and Nikolai Lobachevsky (1792-1856)
In particular, Bolyai became obsessed with Euclid's fifth postulate (often referred to as
the parallel postulate), a fundamental principle of geometry for over two millennia,
which essentially states that only one line can be drawn through a given point so that the line is
parallel to a given line that does not contain the point, along with its corollary that the interior
angles of a triangle sum to 180° or two right angles. In fact, he became obsessed to such an extent
that his father warned him that it may take up all his time and deprive him of his "health, peace of
mind and happiness in life", a tragic irony given the unfolding of subsequent events.

Bolyai, however, persisted in his quest, and eventually came to the radical conclusion that it was
in fact possible to have consistent geometries that were independent of the parallel postulate. In
the early 1820s, Bolyai explored what he called “imaginary geometry” (now known as hyperbolic
geometry), the geometry of curved spaces on a saddle-shaped plane, where the angles of a triangle
did NOT add up to 180° and apparently parallel lines were NOT actually parallel. In curved space,
the shortest distance between two points a and b is actually a curve, or geodesic, and not a straight
line. Thus, the angles of a triangle in hyperbolic space sum to less than 180°, and two parallel lines
in hyperbolic space actually diverge from each other. In a letter to his father, Bolyai marvelled,
“Out of nothing I have created a strange new universe”.
Euclid's parallel postulate
Although it is easy to visualize a flat surface and a surface with positive curvature (e.g. a sphere,
such as the Earth), it is impossible to visualize a hyperbolic surface with negative curvature, other
than just over a small localized area, where it would look like a saddle or a Pringle. So the very concept of a hyperbolic surface appeared
to go against all sense of reality. It certainly represented a radical departure from Euclidean geometry, and the first step along the road
which would lead to Einstein’s Theory of Relativity among other applications (although it still fell well short of the multi-dimensional
geometry which was to be later realized by Riemann). Between 1820 and 1823, Bolyai prepared, but did not immediately publish, a
treatise on a complete system of non-Euclidean geometry.

His work was, however, only published in 1832, and then only a short exposition in the appendix of a textbook by his father. On reading
this, Gauss clearly recognized the genius of the younger Bolyai’s ideas, but he refused to encourage the young man, and even tried to
claim his ideas as his own. Further disheartened by the news that the Russian mathematician Lobachevsky had published something quite
similar two years before his own paper, Bolyai became a recluse and gradually went insane. He died in obscurity in 1860. Although he
only ever published the 24 pages of the appendix, Bolyai left more than 20,000 pages of mathematical manuscripts when he died
(including the development of a rigorous geometric concept of complex numbers as ordered pairs of real numbers).
Completely independent from Bolyai, in the distant provincial Russian city of Kazan, Nikolai Ivanovich Lobachevsky
had also been working, along very similar lines as Bolyai, to develop a geometry in which Euclid’s fifth postulate
did not apply. His work on hyperbolic geometry was first reported in 1826 and published in 1830, although it did
not have general circulation until some time later.

This early non-Euclidean geometry is now often referred to as Lobachevskian geometry or Bolyai-Lobachevskian
geometry, thus sharing the credit. Gauss’ claims to have originated, but not published, the ideas are difficult to
judge in retrospect. Other, much earlier, claims are credited to the 11th Century Persian mathematician Omar Khayyam and to the early 18th Century Italian priest Giovanni Saccheri, but their work was much more speculative and inconclusive in nature.
Hyperbolic Bolyai-Lobachevskian geometry

Lobachevsky also died in poverty and obscurity, nearly blind and unable to walk. Among his other mathematical achievements, largely
unknown during his lifetime, was the development of a method for approximating the roots of algebraic equations (a method now known
as the Dandelin-Gräffe method, named after two other mathematicians who discovered it independently), and the definition of a function
as a correspondence between two sets of real numbers (usually credited to Dirichlet, who gave the same definition independently soon
after Lobachevsky).

https://www.storyofmathematics.com/19th_bolyai.html

19TH CENTURY MATHEMATICS - RIEMANN

Bernhard Riemann was another mathematical giant hailing from northern Germany. Poor, shy, sickly and devoutly
religious, the young Riemann constantly amazed his teachers and exhibited exceptional mathematical skills (such
as fantastic mental calculation abilities) from an early age, but suffered from timidity and a fear of speaking in
public. He was, however, given free rein of the school library by an astute teacher, where he devoured
mathematical texts by Legendre and others, and gradually groomed himself into an excellent mathematician. He
also continued to study the Bible intensively, and at one point even tried to prove mathematically the correctness
of the Book of Genesis.

Although he started studying philology and theology in order to become a priest and help with his family's finances,
Riemann's father eventually managed to gather enough money to send him to study mathematics at the renowned
University of Göttingen in 1846, where he first met, and attended the lectures of, Carl Friedrich Gauss. Indeed,
he was one of the very few who benefited from the support and patronage of Gauss, and he gradually worked his
way up the University's hierarchy to become a professor and, eventually, head of the mathematics department at Göttingen.
Bernhard Riemann (1826-1866)

Riemann developed a type of non-Euclidean geometry, different to the hyperbolic geometry of Bolyai and Lobachevsky, which has come to be known as elliptic geometry. As with hyperbolic geometry, Euclid's parallel postulate fails to hold (in elliptic geometry there are no parallel lines at all), and the angles of a triangle do not sum to 180° (in this case, however, they sum to more than 180°). He went on to develop Riemannian geometry, which unified and vastly generalized the three types of geometry, as well as the concept of a manifold or mathematical space, which generalized the ideas of curves and surfaces.

A turning point in his career occurred in 1854 when, at the age of 27, he gave a lecture
on the foundations of geometry and outlined his vision of a mathematics of many different
kinds of space, only one of which was the flat, Euclidean space which we appear to inhabit.
He also introduced one-dimensional complex manifolds known as Riemann surfaces.
Although it was not widely understood at the time, Riemann’s mathematics changed how we
look at the world, and opened the way to higher dimensional geometry, a potential which
had existed, unrealized, since the time of Descartes.

With his “Riemann metric”, Riemann completely broke away from all the limitations of 2
and 3 dimensional geometry, even the geometry of curved spaces of Bolyai and
Lobachevsky, and began to think in higher dimensions, extending the differential
geometry of surfaces into n dimensions. His conception of multi-dimensional space
(known as Riemannian space or Riemannian manifold or simply “hyperspace”) enabled
the later development of general relativity, and is at the heart of much of today’s
mathematics, in geometry, number theory and other branches of mathematics.

He introduced a collection of numbers (known as a tensor) at every point in space, which would describe how much it was bent or curved. For instance, in four spatial dimensions, a collection of ten numbers (the ten independent entries of a symmetric 4 × 4 array) is needed at each point to describe the properties of the mathematical space or manifold, no matter how distorted it may be.

Riemann’s big breakthrough occurred while working on a function in the complex plane
called the Riemann zeta function (an extension of the simpler zeta function first explored
by Euler in the previous century). He realized that he could use it to build a kind of 3-
dimensional landscape, and furthermore that the contours of that imaginary landscape
might be able to unlock the Holy Grail of mathematics, the age-old secret of prime
numbers.
2-D representation of Riemann’s zeta function
Riemann noticed that, at key places, the surface of his 3-dimensional graph
dipped down to height zero (known simply as “the zeroes”) and was able to
show that at least the first ten zeroes inexplicably appeared to line up in a
straight line through the 3-dimensional landscape of the zeta-function, known
as the critical line, where the real part of the value is equal to ½.

With a huge imaginative leap, Riemann realized that these zeroes had a
completely unexpected connection with the way the prime numbers are
distributed. It began to seem that they could be used to correct Gauss’ inspired
guesswork regarding the number of primes as one counts higher and higher.

The famous Riemann Hypothesis, which remains unproven, suggests that ALL
the zeroes would be on the same straight line. Although he never provided a
definitive proof of this hypothesis, Riemann’s work did at least show that the 15-year-old Gauss’ initial approximations of the incidence of prime numbers were perhaps more accurate than even he could have known, and that the primes were in fact distributed over the universe of numbers in a regular, balanced and beautiful way.
3-D representation of Riemann’s zeta function and Riemann’s Hypothesis
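The first few zeroes on the critical line can be inspected numerically, assuming the mpmath library is available; a minimal sketch:

    from mpmath import zeta, zetazero

    # The first few non-trivial zeroes of Riemann's zeta function all lie on the
    # "critical line" where the real part equals 1/2, as the Riemann Hypothesis predicts.
    for n in range(1, 4):
        z = zetazero(n)                 # n-th zero, e.g. 0.5 + 14.1347...i
        print(z, abs(zeta(z)))          # |zeta(z)| is (numerically) zero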

The discovery of the Riemann zeta function and the relationship of its zeroes to the prime numbers brought Riemann instant fame when
it was published in 1859. He too, though, died young at just 39 years of age, in 1866, and many of his loose papers were accidentally
destroyed after his death, so we will never know just how close he was to proving his own hypothesis. Over 150 years later, the Riemann
Hypothesis is still considered one of the fundamental questions of number theory, and indeed of all mathematics, and a prize of $1 million
has been offered for the final solution.

https://www.storyofmathematics.com/19th_riemann.html

19TH CENTURY MATHEMATICS - BOOLE

The British mathematician and philosopher George Boole, along with his near contemporary and countryman Augustus
de Morgan, was one of the few since Leibniz to give any serious thought to logic and its mathematical implications.
Unlike Leibniz, though, Boole came to see logic as principally a discipline of mathematics, rather than of philosophy.

His extraordinary mathematical talents did not manifest themselves in early life. He received his early lessons in
mathematics from his father, a tradesman with an amateur interest in mathematics and logic, but his favourite
subject at school was classics. He was a quiet, serious and modest young man from a humble working class
background, and largely self-taught in his mathematics (he would borrow mathematical journals from his local
Mechanics Institute).

George Boole (1815-1864)

It was only at university and afterwards that his mathematical skills began to be fully realized, although, even then,
he was all but unknown in his own time, other than for a few insightful but rather abstruse papers on differential
equations and the calculus of finite differences. By the age of 34, though, he was well respected enough in his field
to be appointed as the first professor of mathematics of Queen's College (now University College) in Cork, Ireland.

But it was his contributions to the algebra of logic which were later to be viewed as immensely important and influential. Boole began to
see the possibilities for applying his algebra to the solution of logical problems, and he pointed out a deep analogy between the symbols
of algebra and those that can be made to represent logical forms and syllogisms. In fact, his ambitions stretched to a desire to devise
and develop a system of algebraic logic that would systematically define and model the function of the human brain. His novel views of
logical method were due to his profound confidence in symbolic reasoning, and he speculated on what he called a “calculus of reason”
during the 1840s and 1850s.

Determined to find a way to encode logical arguments into a language that could be manipulated and
solved mathematically, he came up with a type of linguistic algebra, now known as Boolean algebra.
The three most basic operations of this algebra were AND, OR and NOT, which Boole saw as the only
operations necessary to perform comparisons of sets of things, as well as basic mathematical functions.

Boole’s use of symbols and connectives allowed for the simplification of logical expressions, including
such important algebraic identities as: (X or Y) = (Y or X); not(not X) = X; not(X and Y) = (not X) or
(not Y); etc.

He also developed a novel approach based on a binary system, processing only two objects (“yes-no”, “true-false”, “on-off”, “zero-one”). Therefore, if “true” is represented by 1 and “false” is represented by 0, and two propositions are both true, then it is possible under Boolean algebra for 1 + 1 to equal 1 (the “+” here is an alternative representation of the OR operator).
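These rules are easy to check exhaustively, since there are only two values; a small Python sketch (using 1 - x for NOT, multiplication for AND and max for OR):

    # Boole's two-valued algebra: a quick truth-table check
    values = (0, 1)

    # De Morgan's law: not(X and Y) = (not X) or (not Y)
    print(all((1 - (x * y)) == max(1 - x, 1 - y) for x in values for y in values))   # True

    # "1 + 1 = 1": OR-ing two true propositions is still just true
    OR = max
    print(OR(1, 1))   # 1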

Despite the standing he had won in the academic community by that time, Boole’s revolutionary ideas
were largely criticized or just ignored, until the American logician Charles Sanders Peirce (among
others) explained and elaborated on them some years after Boole’s death in 1864.

Almost seventy years later, Claude Shannon made a major breakthrough in realizing that Boole's work could form the basis of mechanisms
and processes in the real world, and particularly that electromechanical relay circuits could be used to solve Boolean algebra problems.
The use of electrical switches to process logic is the basic concept that underlies all modern electronic digital computers, and so Boole is
regarded in hindsight as a founder of the field of computer science, and his work led to the development of applications he could never
have imagined.
https://www.storyofmathematics.com/19th_boole.html

19TH CENTURY MATHEMATICS - CANTOR

The German Georg Cantor was an outstanding violinist, but an even more outstanding mathematician. He
was born in Saint Petersburg, Russia, where he lived until he was eleven. Thereafter, the family moved to
Germany, and Cantor received his remaining education at Darmstadt, Zürich, Berlin and (almost inevitably)
Göttingen before marrying and settling at the University of Halle, where he was to spend the rest of his
career.

He was made full professor at Halle at the age of just 34, a notable accomplishment, but his ambitions to
move to a more prestigious university, such as Berlin, were largely thwarted by Leopold Kronecker, a well-
established figure within the mathematical community and Cantor's former professor, who fundamentally
disagreed with the thrust of Cantor's work.

Cantor’s first ten papers were on number theory, after which he turned his attention to calculus (or analysis
as it had become known by this time), solving a difficult open problem on the uniqueness of the
representation of a function by trigonometric series. His main legacy, though, is as perhaps the first
mathematician to really understand the meaning of infinity and to give it mathematical precision.
Georg Cantor (1845-1918)
Back in the 17th Century, Galileo had tried to confront the idea of infinity and the apparent contradictions
thrown up by comparisons of different infinities, but in the end shied away from the problem. He had shown that a one-to-one
correspondence could be drawn between all the natural numbers and the squares of all the natural numbers to infinity, suggesting that
there were just as many square numbers as integers, even though it was intuitively obvious there were many integers that were
not squares, a concept which came to be known as Galileo’s Paradox. He had also pointed out that two concentric circles must both be
comprised of an infinite number of points, even though the larger circle would appear to contain more points. However, Galileo had
essentially dodged the issue and reluctantly concluded that concepts like less, equals and greater could only be applied to finite sets of
numbers, and not to infinite sets. Cantor, however, was not content with this compromise.

Cantor's starting point was to say that, if it was possible to add 1 and 1, or 25 and 25, etc, then
it ought to be possible to add infinity and infinity. He realized that it was actually possible to add
and subtract infinities, and that beyond what was normally thought of as infinity existed another,
larger infinity, and then other infinities beyond that. In fact, he showed that there may be
infinitely many sets of infinite numbers - an infinity of infinities - some bigger than others, a
concept which clearly has philosophical, as well as just mathematical, significance. The sheer
audacity of Cantor’s theory set off a quiet revolution in the mathematical community, and
changed forever the way mathematics is approached.

His first intimations of all this came in the early 1870s when he considered an infinite series of
natural numbers (1, 2, 3, 4, 5, ...), and then an infinite series of multiples of ten (10, 20, 30,
40, 50, ...). He realized that, even though the multiples of ten were clearly a subset of the natural
numbers, the two series could be paired up on a one-to-one basis (1 with 10, 2 with 20, 3 with
30, etc) - a process known as bijection - to show that they were the same “sizes” of infinite sets,
in that they had the same number of elements.

This clearly also applies to other subsets of the natural numbers, such as the even numbers 2, 4, 6, 8, 10, etc, or the squares 1, 4, 9, 16, 25, etc, and even to the set of negative numbers and integers. In fact, Cantor realized that he could, in the same way, even pair up all the fractions (or rational numbers) with all the whole numbers, thus showing that rational numbers were also the same sort of infinity as the natural numbers, despite the intuitive feeling that there must be more fractions than whole numbers.
Cantor’s procedure of bijection or one-to-one correspondence to compare infinite sets
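A minimal sketch of such a pairing in Python, using two infinite generators and showing only the first few pairs:

    from itertools import count, islice

    # Cantor's pairing: the natural numbers and the multiples of ten can be matched
    # one-to-one, so the two infinite sets have the same "size" (cardinality).
    naturals = count(1)                 # 1, 2, 3, ...
    tens = (10 * n for n in count(1))   # 10, 20, 30, ...

    print(list(islice(zip(naturals, tens), 5)))   # [(1, 10), (2, 20), (3, 30), (4, 40), (5, 50)]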
However, when Cantor considered an infinite series of decimal numbers, which includes
irrational numbers like π, e and √2, this method broke down. He used several clever
arguments (one being his famous "diagonal argument", sketched below) to
show how it was always possible to construct a new decimal number that was missing
from the original list, and so proved that the infinity of decimal numbers (or, technically,
real numbers) was in fact bigger than the infinity of natural numbers.
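A toy version of the diagonal argument is easy to sketch in Python; the short list of decimal expansions below simply stands in for the (supposed) complete list:

    # Cantor's diagonal argument: given any (claimed) list of infinite decimals, build a
    # new decimal that differs from the n-th entry in its n-th digit, so it cannot be
    # anywhere in the list. Here the "list" is just a few example expansions, truncated.
    listed = [
        "0.500000000",
        "0.333333333",
        "0.141592653",
        "0.718281828",
    ]

    new_digits = []
    for n, decimal in enumerate(listed):
        digit = decimal[2 + n]                            # n-th digit after the decimal point
        new_digits.append("5" if digit != "5" else "6")   # change it to something else

    print("0." + "".join(new_digits))   # 0.6555, which differs from every listed number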

He also showed that they were “non-denumerable” or "uncountable" (i.e. contained more
elements than could ever be counted), as opposed to the set of rational numbers which
he had shown were technically (even if not practically) “denumerable” or "countable". In
fact, it can be argued that there are an infinite number of irrational numbers in between
each and every rational number. The patternless decimals of irrational numbers fill the
"spaces" between the patterns of the rational numbers.

Cantor coined the new word “transfinite” in an attempt to distinguish these various levels of infinite numbers from an absolute infinity, which the religious Cantor effectively equated with God (he saw no contradiction between his mathematics and the traditional concept of God). Although the cardinality (or size) of a finite set is just a natural number indicating the number of elements in the set, he also needed a new notation to describe the sizes of infinite sets, and he used the Hebrew letter aleph (ℵ). He defined ℵ₀ (aleph-null or aleph-nought) as the cardinality of the countably infinite set of natural numbers; ℵ₁ (aleph-one) as the next larger cardinality, that of the uncountable set of ordinal numbers; etc. Because of the unique properties of infinite sets, he showed that ℵ₀ + ℵ₀ = ℵ₀, and also that ℵ₀ × ℵ₀ = ℵ₀.

All of this represented a revolutionary step, and opened up new possibilities in mathematics. However, it also opened up the possibility of other infinities, for instance an infinity - or even many infinities - between the infinity of the whole numbers and the larger infinity of the decimal numbers. This idea is known as the continuum hypothesis, and Cantor believed (but could not actually prove) that there was NO such intermediate infinite set. The continuum hypothesis was one of the 23 important open problems identified by David Hilbert in his famous 1900 Paris lecture, and it remained unresolved - and indeed appeared to be unprovable - for almost a century, until the work of Paul Cohen in the 1960s.
Cantor’s diagonal argument for the existence of uncountable sets

Just as importantly, though, this work of Cantor's between 1874 and 1884 marks the real
origin of set theory, which has since become a fundamental part of modern mathematics,
and its basic concepts are used throughout all the various branches of mathematics.
Although the concept of a set had been used implicitly since the beginnings of mathematics,
dating back to the ideas of Aristotle, this was limited to everyday finite sets. In
contradistinction, the “infinite” was kept quite separate, and was largely considered a topic
for philosophical, rather than mathematical, discussion. Cantor, however, showed that, just
as there were different finite sets, there could be infinite sets of different sizes, some of
which are countable and some of which are uncountable.

Throughout the 1880s and 1890s, he refined his set theory, defining well-ordered sets and
power sets and introducing the concepts of ordinality and cardinality and the arithmetic of
infinite sets. What is now known as Cantor's theorem states generally that, for any set A,
the power set of A (i.e. the set of all subsets of A) has a strictly greater cardinality
than A itself. More specifically, the power set of a countably infinite set is uncountably
infinite.
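For finite sets the theorem is easy to check directly; a small Python sketch:

    from itertools import chain, combinations

    # For a finite set, the power set (the set of all its subsets) is strictly larger:
    # it has 2^n elements, illustrating the finite case of Cantor's theorem.
    def power_set(items):
        return list(chain.from_iterable(combinations(items, r) for r in range(len(items) + 1)))

    A = {"a", "b", "c"}
    P = power_set(A)
    print(len(A), len(P))   # 3 and 8 (= 2^3)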

Despite the central position of set theory in modern mathematics, it was often deeply
mistrusted and misunderstood by other mathematicians of the day. One quote, usually
attributed to Henri Poincaré, claimed that "later generations will regard Mengenlehre (set
theory) as a disease from which one has recovered". Others, however, were quick to see
the value and potential of the method, and David Hilbert declared in 1926 that "no one
shall expel us from the Paradise that Cantor has created".
Modern set theory notation

Cantor had few other mathematicians with whom he could discuss his ground-breaking work, and most were distinctly unnerved by his
contemplation of the infinite. During the 1880s, he encountered resistance, sometimes fierce resistance, from mathematical
contemporaries such as his old professor Leopold Kronecker and Henri Poincaré, as well as from philosophers like Ludwig Wittgenstein
and even from some Christian theologians, who saw Cantor's work as a challenge to their view of the nature of God. Cantor himself, a
deeply religious man, noted some annoying paradoxes thrown up by his own work, but some went further and saw it as the wilful
destruction of the comprehensible and logical foundation on which the whole of mathematics was based.

As he aged, Cantor suffered from more and more recurrences of mental illness, which some have directly linked to his constant
contemplation of such complex, abstract and paradoxical concepts. In the last decades of his life, he did no mathematical work at all,
but wrote extensively on his two obsessions: that Shakespeare’s plays were actually written by the English philosopher Sir Francis Bacon,
and that Christ was the natural son of Joseph of Arimathea. He spent long periods in the Halle sanatorium recovering from attacks of
manic depression and paranoia, and it was there, alone in his room, that he finally died in 1918, his great project still unfinished.

https://www.storyofmathematics.com/19th_cantor.html
19TH CENTURY MATHEMATICS - POINCARÉ

Paris was a great centre for world mathematics towards the end of the 19th Century, and Henri Poincaré was
one of its leading lights in almost all fields - geometry, algebra, analysis - for which he is sometimes called the
“Last Universalist”.

Even as a youth at the Lycée in Nancy, he showed himself to be a polymath, and he proved to be one of the top
students in every topic he studied. He continued to excel after he entered the École Polytechnique to study
mathematics in 1873, and, for his doctoral thesis, he devised a new way of studying the properties of differential
equations. Beginning in 1881, he taught at the Sorbonne in Paris, where he would spend the rest of his illustrious
career. He was elected to the French Academy of Sciences at the young age of 32, became its president in 1906,
and was elected to the Académie française in 1909.

Poincaré deliberately cultivated a work habit that has been compared to a bee flying from flower to flower. He
observed a strict work regime of 2 hours of work in the morning and two hours in the early evening, with the intervening time left for his subconscious to carry on working on the problem in the hope of a flash of inspiration. He was a great believer in intuition, and claimed that "it is by logic that we prove, but by intuition that we discover".
Henri Poincaré (1854-1912)

It was one such flash of inspiration that earned Poincaré a generous prize from the King of Sweden in 1887 for his partial solution to the
“three-body problem”, a problem that had defeated mathematicians of the stature of Euler, Lagrange and Laplace. Newton had long ago
proved that the paths of two planets orbiting around each other would remain stable, but even the addition of just one more orbiting
body to this already simplified solar system resulted in the involvement of as many as 18 different variables (such as position, velocity in
each direction, etc), making it mathematically too complex to predict or disprove a stable orbit. Poincaré’s solution to the “three-body
problem”, using a series of approximations of the orbits, although admittedly only a partial solution, was sophisticated enough to win
him the prize.

But he soon realized that he had actually made a mistake, and that his simplifications
did not indicate a stable orbit after all. In fact, he realized that even a very small change
in his initial conditions would lead to vastly different orbits. This serendipitous discovery,
born from a mistake, led indirectly to what we now know as chaos theory, a burgeoning
field of mathematics most familiar to the general public from the common example of
the flap of a butterfly’s wings leading to a tornado on the other side of the world. It was
the first indication that three is the minimum threshold for chaotic behaviour.

Paradoxically, owning up to his mistake only served to enhance Poincaré's reputation, if anything, and he continued to produce a wide range of work throughout his life, as well as several popular books extolling the importance of mathematics.
Computer representation of the paths generated by Poincaré's analysis of the three-body problem

Poincaré also developed the science of topology, which Leonhard Euler had heralded with his solution to the famous Seven Bridges of Königsberg problem. Topology is a kind of geometry concerned with the properties of space that are preserved under continuous, one-to-one deformations. It is sometimes referred to as "bendy geometry" or "rubber sheet geometry" because, in topology, two shapes are the same if one can be bent or morphed into the other without cutting it. For example, a banana and a football are topologically equivalent, as are a donut (with its hole in the middle) and a teacup (with its handle); but a football and a donut are topologically different because there is no way to morph one into the other. In the same way, a traditional pretzel, with its two holes, is topologically different from all of these examples.

In the late 19th Century, Poincaré described all the possible 2-dimensional topological
surfaces but, faced with the challenge of describing the shape of our 3-dimensional
universe, he came up with the famous Poincaré conjecture, which became one of the
most important open questions in mathematics for almost a century. The conjecture
looks at a space that, locally, looks like ordinary 3-dimensional space but is connected, finite in size and lacks any boundary (technically known as a closed 3-manifold). It asserts that, if every loop in that space can be continuously tightened to a point, in the same way as a loop drawn on a 2-dimensional sphere can, then the space is just a three-dimensional sphere. The problem remained unsolved until 2002, when an
extremely complex solution was provided by the eccentric and reclusive Russian
mathematician Grigori Perelman, involving the ways in which 3-dimensional shapes
can be “wrapped up” in higher dimensions.

A 2-dimensional representation of the 3-dimensional problem in the Poincaré conjecture

Poincaré's work in theoretical physics was also of great significance, and his symmetrical presentation of the Lorentz transformations in 1905 was an important and necessary step in the formulation of Einstein's theory of special relativity (some even hold that Poincaré and Lorentz were the true discoverers of relativity). He also made important contributions in a whole host of other areas of physics including fluid mechanics, optics, electricity, telegraphy, capillarity, elasticity, thermodynamics, potential theory, quantum theory and cosmology.

https://www.storyofmathematics.com/19th_poincare.html
20TH CENTURY MATHEMATICS

The 20th Century continued the trend of the 19th towards increasing generalization and abstraction in mathematics,
in which the notion of axioms as “self-evident truths” was
largely discarded in favour of an emphasis on such logical
concepts as consistency and completeness.

It also saw mathematics become a major profession, involving thousands of new Ph.D.s each year and jobs in
both teaching and industry, and the development of
hundreds of specialized areas and fields of study, such as
group theory, knot theory, sheaf theory, topology, graph
theory, functional analysis, singularity theory, catastrophe
theory, chaos theory, model theory, category theory, game
theory, complexity theory and many more.

The eccentric British mathematician G.H. Hardy and his young Indian protégé Srinivasa Ramanujan were just two
of the great mathematicians of the early 20th Century who
applied themselves in earnest to solving problems of the
previous century, such as the Riemann hypothesis.
Although they came close, they too were defeated by that
most intractable of problems, but Hardy is credited with
reforming British mathematics, which had sunk to
something of a low ebb at that time,
and Ramanujan proved himself to be one of the most
brilliant (if somewhat undisciplined and unstable) minds of
the century.

Fields of Mathematics

Others followed techniques dating back millennia but taken to a 20th Century level of complexity. In 1894, Johann Gustav Hermes completed his construction of a regular polygon with 65,537 sides (2^16 + 1), using just a compass and straight edge as Euclid would have done, a feat that took him over ten years.

The early 20th Century also saw the beginnings of the rise of the field of mathematical logic, building on the earlier advances of Gottlob
Frege, which came to fruition in the hands of Giuseppe Peano, L.E.J. Brouwer, David Hilbert and, particularly, Bertrand Russell and A.N.
Whitehead, whose monumental joint work the “Principia Mathematica” was so influential in mathematical and philosophical logicism.

The century began with a historic convention at the Sorbonne in Paris in the summer of 1900 which is largely
remembered for a lecture by the young German
mathematician David Hilbert in which he set out what he
saw as the 23 greatest unsolved mathematical problems of
the day. These “Hilbert problems” effectively set the agenda
for 20th Century mathematics, and laid down the gauntlet
for generations of mathematicians to come. Of these
original 23 problems, 10 have now been solved, 7 are partially solved, and 2 (the Riemann hypothesis and Hilbert's 12th problem, on extending the Kronecker-Weber theorem to abelian extensions of arbitrary number fields) are still open, with the remaining 4 being too loosely formulated to be stated as solved or not.

Hilbert was himself a brilliant mathematician, responsible for several theorems and some entirely new mathematical concepts, as well as overseeing the development of what amounted to a whole new style of abstract mathematical thinking. Hilbert's approach signalled the shift to the modern axiomatic method, where axioms are not taken to be self-evident truths. He was unfailingly optimistic about the future of mathematics, famously declaring in a 1930 radio interview "We must know. We will know!", and was a well-loved leader of the mathematical community during the first part of the century.

Part of the transcript of Hilbert's 1900 Paris lecture, in which he set out his 23 problems

However, the Austrian Kurt Gödel was soon to put some very severe constraints on what could and could not be solved, and turned
mathematics on its head with his famous incompleteness theorem, which proved the unthinkable - that there could be mathematical statements which were true but which could never be proved.

Alan Turing, perhaps best known for his war-time work in breaking the German enigma code, spent his pre-war years trying to clarify
and simplify Gödel’s rather abstract proof. His methods led to some conclusions that were perhaps even more devastating than Gödel’s,
including the idea that there was no way of telling beforehand which problems were provable and which unprovable. But, as a spin-off,
his work also led to the development of computers and the first considerations of such concepts as artificial intelligence.

With the gradual and wilful destruction of the mathematics community of Germany and Austria by the anti-Jewish Nazi regime in the
1930s and 1940s, the focus of world mathematics moved to America, particularly to the Institute for Advanced Study in Princeton, which
attempted to reproduce the collegiate atmosphere of the old European universities in rural New Jersey. Many of the brightest European
mathematicians, including Hermann Weyl, John von Neumann, Kurt Gödel and Albert Einstein, fled the Nazis to this safe haven.

Emmy Noether, a German Jew who was also forced out of Germany by the Nazi regime, was considered by many (including Albert
Einstein) to be the most important woman in the history of mathematics. Her work in the 1920s and 1930s changed the face of abstract
algebra, and she made important contributions in the fields of algebraic invariants, commutative rings, number fields, non-commutative
algebra, and hypercomplex numbers. Noether's theorem on the connection between symmetry and conservation laws was key in the
development of quantum mechanics and other aspects of modern physics.

John von Neumann is considered one of the foremost mathematicians in modern history, another mathematical
child prodigy who went on to make major contributions to
a vast range of fields. In addition to his physical work in
quantum theory and his role in the Manhattan Project and
the development of nuclear physics and the hydrogen
bomb, he is particularly remembered as a pioneer of game
theory, and particularly for his design model for a stored-
program digital computer that uses a processing unit and a
separate storage structure to hold both instructions and
data, a general architecture that most electronic computers
follow even today.

Another American, Claude Shannon, has become known as the father of information theory, and he, von Neumann
and Alan Turing between them effectively kick-started the
computer and digital revolution of the 20th Century. His
early work on Boolean algebra and binary arithmetic
resulted in his foundation of digital circuit design in 1937
and a more robust exposition of communication and
information theory in 1948. He also made important
contributions in cryptography, natural language processing
and sampling theory.

The Soviet mathematician Andrey Kolmogorov is usually credited with laying the modern axiomatic foundations of probability theory in the 1930s, and he established a reputation as the world's leading expert in this field. He also made important contributions to the fields of topology, intuitionistic logic, turbulence, classical mechanics, algorithmic information theory and computational complexity.

Von Neumann's computer architecture design

André Weil was another refugee from the war in Europe, after narrowly avoiding death on a couple of occasions. His theorems, which
allowed connections to be made between number theory, algebra, geometry and topology, are considered among the greatest
achievements of modern mathematics. He was also responsible for setting up a group of French mathematicians who, under the secret
nom-de-plume of Nicolas Bourbaki, wrote many influential books on the mathematics of the 20th Century.

Perhaps the greatest heir to Weil’s legacy was Alexander Grothendieck, a charismatic and beloved figure in 20th Century French
mathematics. Grothendieck was a structuralist, interested in the hidden structures beneath all mathematics, and in the 1950s he created
a powerful new language which enabled mathematical structures to be seen in a new way, thus allowing new solutions in number theory,
geometry, even in fundamental physics. His “theory of schemes” allowed certain of Weil's number theory conjectures to be solved, and
his “theory of topoi” is highly relevant to mathematical logic. In addition, he gave an algebraic proof of the Riemann-Roch theorem, and
provided an algebraic definition of the fundamental group of a curve. Although, after the 1960s, Grothendieck all but abandoned
mathematics for radical politics, his achievements in algebraic geometry have fundamentally transformed the mathematical landscape,
perhaps no less than those of Cantor, Gödel and Hilbert, and he is considered by some to be one of the dominant figures of the whole
of 20th Century mathematics.

Paul Erdös was another inspired but distinctly non-establishment figure of 20th Century mathematics. The immensely prolific and famously
eccentric Hungarian mathematician worked with hundreds of different collaborators on problems in combinatorics, graph theory, number
theory, classical analysis, approximation theory, set theory, and probability theory. As a humorous tribute, an "Erdös number" is given to
mathematicians according to their collaborative proximity to him. He was also known for offering small prizes for solutions to various
unresolved problems (such as the Erdös conjecture on arithmetic progressions), some of which are still active after his death.
The field of complex dynamics (which is defined by the
iteration of functions on complex number spaces) was
developed by two Frenchmen, Pierre Fatou and Gaston
Julia, early in the 20th Century. But it only really gained
much attention in the 1970s and 1980s with the beautiful
computer plottings of Julia sets and, particularly, of the
Mandelbrot sets of yet another French mathematician,
Benoît Mandelbrot. Julia and Mandelbrot fractals are closely
related, and it was Mandelbrot who coined the term fractal,
and who became known as the father of fractal geometry.

The Mandelbrot set involves repeated iterations of complex quadratic polynomial equations of the form zₙ₊₁ = zₙ² + c (where z is a number in the complex plane of the form x + iy). The iterations produce a form of feedback based on recursion, in which smaller parts exhibit approximate reduced-size copies of the whole, and which are infinitely complex (so that, however much one zooms in and magnifies a part, it exhibits just as much complexity).
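
As a purely illustrative sketch (not taken from the original article), the short Python program below applies exactly this iteration, zₙ₊₁ = zₙ² + c, and treats a point c as belonging to the set if the orbit of 0 has not escaped after a fixed number of steps; the escape radius of 2 and the iteration limit of 100 are conventional choices rather than anything specified above.

    def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
        # Iterate z -> z*z + c starting from z = 0; c is in the set if the orbit stays bounded
        z = 0j
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2:        # once |z| exceeds 2 the orbit is guaranteed to escape
                return False
        return True

    # Crude character rendering of the familiar cardioid-and-bulb shape
    for row in range(-12, 13):
        print("".join("#" if in_mandelbrot(complex(col / 30, row / 15)) else " "
                      for col in range(-60, 25)))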

The Mandelbrot set, the most famous example of a fractal

Paul Cohen is an example of a second generation Jewish immigrant who followed the American dream to fame and success. His work rocked the mathematical world in the 1960s, when he proved that Cantor's continuum hypothesis about the possible sizes of infinite sets (one of Hilbert's original 23 problems) could be both true AND not true, depending on the axioms chosen - in other words, that it is independent of the standard axioms of set theory - and that there were effectively two completely separate but valid mathematical worlds, one in which the continuum hypothesis was true and one where it was not. Since this result, any modern mathematical proof that relies on the continuum hypothesis must state that dependence explicitly.

Another of Hilbert’s problems was finally resolved in 1970, when the young Russian Yuri Matiyasevich finally proved that Hilbert’s tenth
problem was impossible, i.e. that there is no general method for determining when polynomial equations have a solution in whole
numbers. In arriving at his proof, Matiyasevich built on decades of work by the American mathematician Julia Robinson, in a great show
of internationalism at the height of the Cold War.

In addition to complex dynamics, another field that benefitted greatly from the advent of the electronic computer, and particularly from its ability to carry out a huge number of repeated iterations of simple mathematical formulas which would be impractical to do by hand,
was chaos theory. Chaos theory tells us that some systems seem to exhibit random behaviour even though they are not random at all,
and conversely some systems may have roughly predictable behaviour but are fundamentally unpredictable in any detail. The possible
behaviours that a chaotic system may have can also be mapped graphically, and it was discovered that these mappings, known as
"strange attractors", are fractal in nature (the more you zoom in, the more detail can be seen, although the overall pattern remains the
same).

An early pioneer in modern chaos theory was Edward Lorenz, whose interest in chaos came about accidentally through his work on
weather prediction. Lorenz's discovery came in 1961, when a computer model he had been running was actually saved using three-digit
numbers rather than the six digits he had been working with, and this tiny rounding error produced dramatically different results. He
discovered that small changes in initial conditions can produce large changes in the long-term outcome - a phenomenon he described by
the term “butterfly effect” - and he demonstrated this with his Lorenz attractor, a fractal structure corresponding to the behaviour of the
Lorenz oscillator (a 3-dimensional dynamical system that exhibits chaotic flow).
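
As a hedged illustration (not Lorenz's original weather model), the Python sketch below integrates the standard Lorenz equations with a simple Euler step and runs two orbits whose starting points differ by one part in a billion; the rapid divergence of the two end points is the sensitivity to initial conditions described above. The step size and the parameter values (sigma = 10, rho = 28, beta = 8/3) are the conventional textbook choices, not taken from this article.

    def lorenz_orbit(x, y, z, steps=5000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # Crude Euler integration of the Lorenz equations
        for _ in range(steps):
            dx = sigma * (y - x)
            dy = x * (rho - z) - y
            dz = x * y - beta * z
            x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        return x, y, z

    print(lorenz_orbit(1.0, 1.0, 1.0))          # reference trajectory
    print(lorenz_orbit(1.0 + 1e-9, 1.0, 1.0))   # start perturbed by one part in a billion
    # After 50 time units the two end points bear no resemblance to each other.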

1976 saw a proof of the four colour theorem by Kenneth Appel and Wolfgang Haken, the first major theorem to be
proved using a computer. The four colour conjecture was
first proposed in 1852 by Francis Guthrie (a student of
Augustus De Morgan), and states that, in any given
separation of a plane into contiguous regions (called a
“map”) the regions can be coloured using at most four
colours so that no two adjacent regions have the same
colour. One proof was given by Alfred Kempe in 1879, but
it was shown to be incorrect by Percy Heawood in 1890 in
proving the five colour theorem. The eventual proof that
only four colours suffice turned out to be significantly
harder. Appel and Haken’s solution required some 1,200
hours of computer time to examine around 1,500
configurations.

Example of a four-colour map

Also in the 1970s, origami became recognized as a serious mathematical method, in some cases more powerful than Euclidean geometry. In 1936, Margherita Piazzola Beloch had shown how a length of paper could be folded to give the cube root of its length, but it was not until 1980 that an origami method was used to solve the "doubling the cube" problem which had defeated ancient Greek geometers. An origami proof of the equally intractable "trisecting the angle" problem followed in 1986. The Japanese origami expert Kazuo Haga has at least three mathematical theorems to his name, and his unconventional folding techniques have demonstrated many unexpected geometrical results.
The British mathematician Andrew Wiles finally proved Fermat's Last Theorem for ALL numbers in 1995, some 350 years after Fermat's initial posing. It was an achievement Wiles had set his sights on early in life and pursued doggedly for many years. In reality, though, it was a joint effort of several steps involving many mathematicians over several years, including Goro Shimura, Yutaka Taniyama, Gerhard Frey, Jean-Pierre Serre and Ken Ribet, with Wiles providing the links and the final synthesis and, specifically, the final proof of the Taniyama-Shimura Conjecture for semi-stable elliptic curves. The proof itself is over 100 pages long.

The most recent of the great conjectures to be proved was the Poincaré Conjecture, which was solved in 2002 (over 100 years after Poincaré first posed it) by the eccentric and reclusive Russian mathematician Grigori Perelman. However, Perelman, who lives a frugal
life with his mother in a suburb of St. Petersburg, turned down the $1 million prize, claiming that "if the proof is correct then no other
recognition is needed". The conjecture, now a theorem, states that, if a loop in connected, finite boundaryless 3-dimensional space can
be continuously tightened to a point, in the same way as a loop drawn on a 2-dimensional sphere can, then the space is a three-
dimensional sphere. Perelman provided an elegant but extremely complex solution involving the ways in which 3-dimensional shapes can
be “wrapped up” in even higher dimensions. Perelman has also made landmark contributions to Riemannian geometry and geometric
topology.

John Nash, the American economist and mathematician whose battle against paranoid schizophrenia has recently been popularized by
the Hollywood movie “A Beautiful Mind”, did some important work in game theory, differential geometry and partial differential equations
which have provided insight into the forces that govern chance and events inside complex systems in daily life, such as in market
economics, computing, artificial intelligence, accounting and military theory.

The Englishman John Horton Conway established the rules for the so-called "Game of Life" in 1970, an early example of a "cellular
automaton" in which patterns of cells evolve and grow in a grid, which became extremely popular among computer scientists. He has
made important contributions to many branches of pure mathematics, such as game theory, group theory, number theory and geometry,
and has also come up with some wonderful-sounding concepts like surreal numbers, the grand antiprism and monstrous moonshine, as
well as mathematical games such as Sprouts, Philosopher's Football and the Soma Cube.
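
To make the rules of the "Game of Life" concrete, here is a minimal, hedged Python sketch (not Conway's own notation) of one generation, representing the live cells as a set of coordinates: a live cell survives with two or three live neighbours, and a dead cell comes to life with exactly three. The glider pattern in the example is a standard illustration, not something described in this article.

    from collections import Counter

    def life_step(live):
        # live is a set of (x, y) coordinates of live cells
        neighbour_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Survival with 2 or 3 neighbours, birth with exactly 3
        return {cell for cell, n in neighbour_counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # the classic "glider"
    for generation in range(4):
        print(sorted(glider))
        glider = life_step(glider)   # the pattern translates diagonally across the grid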

Other mathematics-based recreational puzzles became even more popular among the general public, including Rubik's Cube (1974) and
Sudoku (1980), both of which developed into full-blown crazes on a scale only previously seen with the 19th Century fads of Tangrams
(1817) and the Fifteen puzzle (1879). In their turn, they generated attention from serious mathematicians interested in exploring the
theoretical limits and underpinnings of the games.

Computers continue to aid in the identification of phenomena such as Mersenne prime numbers (a prime number that is one less than a power of two - see the section on 17th Century Mathematics). In 1952, an early computer known as SWAC identified 2^521 - 1 as the 13th Mersenne prime number, the first new one to be found in 75 years, before going on to identify several more, even larger ones.
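
As a hedged sketch of the kind of computation involved (not the actual SWAC or GIMPS code), the Python snippet below applies the Lucas-Lehmer test, the standard way of checking whether 2^p - 1 is prime for an odd prime exponent p.

    def is_mersenne_prime(p):
        # Lucas-Lehmer test: for an odd prime p, 2**p - 1 is prime exactly when
        # s becomes 0 after p - 2 steps of s -> s*s - 2, reduced mod 2**p - 1
        if p == 2:
            return True
        m = (1 << p) - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    print([p for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 31) if is_mersenne_prime(p)])
    # [2, 3, 5, 7, 13, 17, 19, 31] - a prime exponent does not guarantee a Mersenne prime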

With the advent of the Internet in the 1990s, the Great Internet Mersenne Prime Search (GIMPS), a collaborative project of volunteers who use freely available computer software to search for Mersenne primes, has led to another leap in the discovery rate. Currently, the 13 largest Mersenne primes were all discovered in this way, and the largest (the 45th Mersenne prime number and also the largest known prime number of any kind) was discovered in 2008 and contains nearly 13 million digits. The search also continues for ever more accurate computer approximations for the irrational number π, with the current record standing at over 5 trillion decimal places.

Approximations for π

The P versus NP problem, introduced in 1971 by the American-Canadian Stephen Cook, is a major unsolved problem in computer science and the burgeoning field of complexity theory, and is another of the Clay Mathematics Institute's million dollar Millennium Prize problems. At its simplest, it asks whether every problem whose solution can be efficiently checked by a computer can also be efficiently solved by a computer (or, put another way, whether questions exist whose answer can be quickly checked, but which require an impossibly long time to solve by any direct procedure). Cook's own contribution, usually known as Cook's Theorem or the Cook-Levin Theorem, showed that Boolean satisfiability is NP-complete and laid the foundations of the field, but the P versus NP question itself has eluded mathematicians and computer scientists for 40 years. A possible solution by Vinay Deolalikar in 2010, claiming to prove that P is not equal to NP (and thus that such insoluble-but-easily-checked problems do exist), has attracted much attention but has not as yet been fully accepted by the computer science community.
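
A small, purely illustrative Python sketch of the asymmetry at the heart of the problem, using the subset-sum problem (a standard NP-complete example not mentioned in the article): checking a proposed answer takes a single pass, while the only generally known ways of finding one may have to examine exponentially many candidate subsets.

    from itertools import combinations

    def verify(numbers, target, candidate):
        # Checking a proposed certificate takes one pass - this is the "easy to check" side
        return sum(candidate) == target and all(x in numbers for x in candidate)

    def solve(numbers, target):
        # Brute-force search over up to 2**n subsets - no essentially faster
        # general method is known, which is what P versus NP asks about
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                if sum(subset) == target:
                    return subset
        return None

    nums = [3, 34, 4, 12, 5, 2]
    print(solve(nums, 9))            # e.g. (4, 5)
    print(verify(nums, 9, (4, 5)))   # True - easy to check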

The Chinese-born American mathematician, Yitang Zhang, working in the area of number theory, achieved perhaps the most significant
result since Perelman, when he provided a proof of the first finite bound on gaps between prime numbers in 2013.

20TH CENTURY MATHEMATICS - HARDY AND RAMANUJAN


The eccentric British mathematician G.H. Hardy is known for his achievements
in number theory and mathematical analysis. But he is perhaps even better
known for his adoption and mentoring of the self-taught Indian mathematical
genius, Srinivasa Ramanujan.

Hardy himself was a prodigy from a young age, and stories are told about how
he would write numbers up to millions at just two years of age, and how he
would amuse himself in church by factorizing the hymn numbers. He graduated
with honours from Cambridge University, where he was to spend most of the
rest of his academic career.

Hardy is sometimes credited with reforming British mathematics in the early 20th Century by bringing a Continental rigour to it, more characteristic of the French, Swiss and German mathematics he so much admired, rather than British mathematics. He introduced into Britain a new tradition of pure mathematics (as opposed to the traditional British forte of applied mathematics in the shadow of Newton), and he proudly declared that nothing he had ever done had any commercial or military usefulness (he was also an outspoken pacifist).

G.H. Hardy (1877-1947) and Srinivasa Ramanujan (1887-1920)

Just before the First World War, Hardy (who was given to flamboyant gestures) made mathematical headlines when he claimed to have
proved the Riemann Hypothesis. In fact, he was able to prove that there were infinitely many zeroes on the critical line, but was not able
to prove that there did not exist other zeroes that were NOT on the line (or even infinitely many off the line, given the nature of infinity).

Meanwhile, in 1913, Srinivasa Ramanujan, a 25-year old shipping clerk from Madras, India, wrote to Hardy (and other academics at
Cambridge), claiming, among other things, to have devised a formula that calculated the number of primes up to a hundred million with
generally no error. The self-taught and obsessive Ramanujan had managed to prove all of Riemann’s results and more with almost no
knowledge of developments in the Western world and no formal tuition. He claimed that most of his ideas came to him in dreams.

Hardy was the first to recognize Ramanujan's genius; he brought him to Cambridge University, and was his friend and mentor for many
years. The two collaborated on many mathematical problems, although the Riemann Hypothesis continued to defy even their joint efforts.

A common anecdote about Ramanujan during this time relates how Hardy arrived at Ramanujan's house in a cab numbered 1729, a number he claimed to be totally uninteresting. Ramanujan is said to have stated on the spot that, on the contrary, it was actually a very interesting number mathematically, being the smallest number representable in two different ways as a sum of two cubes. Such numbers are now sometimes referred to as "taxicab numbers".
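
As a hedged illustration (not from the article), a few lines of Python are enough to confirm the anecdote by searching for the smallest number expressible as a sum of two positive cubes in two different ways; the search limit of 20 is an arbitrary assumption that is ample for finding 1729 = 1³ + 12³ = 9³ + 10³.

    from collections import defaultdict
    from itertools import combinations_with_replacement

    def smallest_taxicab(limit=20):
        # Group every sum of two positive cubes a**3 + b**3 (a <= b <= limit) by its value
        sums = defaultdict(list)
        for a, b in combinations_with_replacement(range(1, limit + 1), 2):
            sums[a ** 3 + b ** 3].append((a, b))
        # Keep only values that arise in at least two different ways; return the smallest
        candidates = {n: pairs for n, pairs in sums.items() if len(pairs) >= 2}
        n = min(candidates)
        return n, candidates[n]

    print(smallest_taxicab())   # (1729, [(1, 12), (9, 10)])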

It is estimated that Ramanujan conjectured or proved over 3,000 theorems, identities and equations, including properties of highly composite numbers, the partition function and its asymptotics and mock theta functions. He also carried out major investigations in the areas of gamma functions, modular forms, divergent series, hypergeometric series and prime number theory.

Hardy-Ramanujan "taxicab numbers"

Among his other achievements, Ramanujan identified several efficient and rapidly converging infinite series for the calculation of the value of π, some of which could compute 8 additional decimal places of π with each term in the series. These series (and variations on them) have become the basis for the fastest algorithms used by modern computers to compute π to ever increasing levels of accuracy (currently to about 5 trillion decimal places).
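
The best known of these is Ramanujan's 1914 series 1/π = (2√2/9801) Σ (4k)!(1103 + 26390k) / ((k!)^4 · 396^(4k)), which gains roughly eight correct digits per term. The Python sketch below is an illustration only (not the production algorithms mentioned above); it evaluates the first few terms with Python's decimal arithmetic, and the precision settings are arbitrary assumptions.

    from decimal import Decimal, getcontext
    from math import factorial

    def ramanujan_pi(terms=3, digits=30):
        # Sum the first few terms of the 1914 series and invert to get pi
        getcontext().prec = digits + 10          # working precision with guard digits
        total = Decimal(0)
        for k in range(terms):
            numerator = Decimal(factorial(4 * k)) * (1103 + 26390 * k)
            denominator = Decimal(factorial(k)) ** 4 * Decimal(396) ** (4 * k)
            total += numerator / denominator
        inverse_pi = (Decimal(2) * Decimal(2).sqrt() / 9801) * total
        return 1 / inverse_pi

    print(ramanujan_pi(terms=1))   # already correct to about 6-8 decimal places
    print(ramanujan_pi(terms=3))   # roughly 24 correct decimal places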

Eventually, though, the frustrated Ramanujan spiralled into depression and illness, even attempting suicide at one time. After a period in a sanatorium and a brief return to his family in India, he died in 1920 at the tragically young age of 32. Some of his original and highly unconventional results, such as the Ramanujan prime and the Ramanujan theta function, have inspired vast amounts of further research and have found applications in fields as diverse as crystallography and string theory.

Hardy lived on for some 27 years after Ramanujan’s death, to the ripe old age of 70. When asked in an interview what his greatest
contribution to mathematics was, Hardy unhesitatingly replied that it was the discovery of Ramanujan, and even called their collaboration
"the one romantic incident in my life". However, Hardy too became depressed later in life and attempted suicide by an overdose at one
point. Some have blamed the Riemann Hypothesis for Ramanujan and Hardy's instabilities, giving it something of the reputation of a
curse.

https://www.storyofmathematics.com/20th_hardy.html

20TH CENTURY MATHEMATICS - RUSSELL AND WHITEHEAD


Bertrand Russell and Alfred North Whitehead were British mathematicians,
logicians and philosophers, who were in the vanguard of the British revolt
against Continental idealism in the early 20th Century and, between them, they
made important contributions in the fields of mathematical logic and set
theory.

Whitehead was the elder of the two and came from a more pure mathematics
background. He became Russell’s tutor at Trinity College, Cambridge in the
1890s, and then collaborated with his more celebrated ex-student in the first
decade of the 20th Century on their monumental work, the “Principia
Mathematica”. After the First World War, though, much of which Russell spent
in prison due to his pacifist activities, the collaboration petered out, and
Whitehead’s academic career remained ever after in the shadow of that of the
more flamboyant Russell. He emigrated to the United States in the 1920s, and
spent the rest of his life there.
Bertrand Russell (1872-1970) and A.N. Whitehead (1861-1947)

Russell was born into a wealthy family of the British aristocracy, although his
parents were extremely liberal and radical for the times. His parents died when
Russell was quite young and he was largely brought up by his staunchly Victorian (although quite progressive) grandmother. His
adolescence was very lonely and he suffered from bouts of depression, later claiming that it was only his love of mathematics that kept
him from suicide. He studied mathematics and philosophy at Cambridge University under G.E. Moore and A.N. Whitehead, where he
developed into an innovative philosopher, a prolific writer on many subjects, a committed atheist and an inspired mathematician and
logician. Today, he is considered one of the founders of analytic philosophy, but he wrote on almost every major area of philosophy,
particularly metaphysics, ethics, epistemology, the philosophy of mathematics and the philosophy of language.

Russell was a committed and high-profile political activist throughout his long life. He was a prominent anti-war activist during both the
First and Second World Wars, championed free trade and anti-imperialism, and later became a strident campaigner for nuclear
disarmament and socialism, and against Adolf Hitler, Soviet totalitarianism and the USA’s involvement in the Vietnam War.

Russell's mathematics was greatly influenced by the set theory and logicism Gottlob Frege had developed in the wake of Cantor's groundbreaking early work on sets. In his 1903 "The Principles of Mathematics", though, he identified what has come to be known as Russell's Paradox (the set of all sets that are not members of themselves, which apparently must both be and not be a member of itself), which showed that Frege's naive set theory could in fact lead to contradictions. The paradox is sometimes illustrated by this simplistic example: "If a barber shaves all and only those men in the village who do not shave themselves, does he shave himself?"

The paradox seemed to imply that the very foundations of the whole of mathematics could no longer be trusted, and
that, even in mathematics, the truth could never be known
absolutely (Gödel's and Turing's later work would only
make this worse). Russell's criticism was enough to rock
Frege’s confidence in the entire edifice of logicism, and he
was gracious enough to admit this openly in a hastily
written appendix to Volume II of his "Basic Laws of
Arithmetic".

Russell's Paradox

But Russell's magnum opus was the monolithic "Principia Mathematica", published in three volumes in 1910, 1912 and 1913. The first volume was co-written by Whitehead, although the latter two were almost all Russell's work. The
aspiration of this ambitious work was nothing less than an
attempt to derive all of mathematics from purely logical
axioms, while avoiding the kinds of paradoxes and contradictions found in Frege’s earlier work on set theory. Russell achieved this by
employing a theory or system of "types”, whereby each mathematical entity is assigned to a type within a hierarchy of types, so that
objects of a given type are built exclusively from objects of preceding types lower in the hierarchy, thus preventing loops. Each set of
elements, then, is of a different type than each of its elements, so that one can not speak of the "set of all sets" and similar constructs,
which lead to paradoxes.

However, the “Principia" required, in addition to the basic axioms of type theory, three further axioms that seemed to not be true as
mere matters of logic, namely the “axiom of infinity” (which guarantees the existence of at least one infinite set, namely the set of all
natural numbers), the “axiom of choice” (which ensures that, given any collection of “bins”, each containing at least one object, it is
possible to make a selection of exactly one object from each bin, even if there are infinitely many bins, and that there is no "rule" for
which object to pick from each) and Russell’s own “axiom of reducibility” (which states that any propositional truth function can be
expressed by a formally equivalent predicative truth function).

During the ten years or so that Russell and Whitehead spent on the "Principia", draft after draft was begun and abandoned as Russell
constantly re-thought his basic premises. Russell and his wife Alys even moved in with the Whiteheads in order to expedite the work,
although his own marriage suffered as Russell became infatuated with Whitehead's young wife, Evelyn. Eventually, Whitehead insisted
on publication of the work, even if it was not (and might never be) complete, although they were forced to publish it at their own expense
as no commercial publishers would touch it.
Some idea of the scope and comprehensiveness of the
“Principia” can be gleaned from the fact that it takes over
360 pages to prove definitively that 1 + 1 = 2. Today, it is
widely considered to be one of the most important and
seminal works in logic since Aristotle's "Organon". It
seemed remarkably successful and resilient in its ambitious
aims, and soon gained world fame for Russell and
Whitehead. Indeed, it was only Gödel's 1931
incompleteness theorem that finally showed that the
“Principia” could not be both consistent and complete.

A small part of the long proof that 1 + 1 = 2 in the "Principia Mathematica"

Russell was awarded the Order of Merit in 1949 and the Nobel Prize in Literature in the following year. His fame continued to grow, even outside of academic circles, and he became something of a household name in later life, although largely as a result of his philosophical contributions and his political and social activism, which he continued until the end of his long life. He died of influenza in his beloved Wales at the grand old age of 97.

https://www.storyofmathematics.com/20th_russell.html

20TH CENTURY MATHEMATICS - HILBERT

David Hilbert was a great leader and spokesperson for the discipline of mathematics in the early
20th Century. But he was an extremely important and respected mathematician in his own right.

Like so many great German mathematicians before him, Hilbert was another product of the
University of Göttingen, at that time the mathematical centre of the world, and he spent most of
his working life there. His formative years, though, were spent at the University of Königsberg,
where he developed an intense and fruitful scientific exchange with fellow mathematicians
Hermann Minkowski and Adolf Hurwitz.

Sociable, democratic and well-loved both as a student and as a teacher, and often seen as bucking
the trend of the formal and elitist system of German mathematics, Hilbert’s mathematical genius
nevertheless spoke for itself. He has many mathematical terms named after him, including Hilbert
space (an infinite dimensional Euclidean space), Hilbert curves, the Hilbert classification and the
Hilbert inequality, as well as several theorems, and he gradually established himself as the most
famous mathematician of his time.

David Hilbert (1862-1943)

His pithy enumeration of the 23 most important open mathematical questions at the 1900 Paris conference of the International Congress of Mathematicians at the Sorbonne set the stage for almost the whole of 20th Century mathematics. The details of some of these individual problems are highly technical; some are very precise, while some are quite vague and subject to interpretation; several problems have now already been solved, or at least partially solved, while some may be forever unresolvable as stated; some relate to rather abstruse backwaters of mathematical thought, while some deal with more mainstream and well-known issues such as the Riemann hypothesis, the continuum hypothesis, group theory, theories of quadratic forms, real algebraic curves, etc.
As a young man, Hilbert began by pulling together all of the many strands of number theory and abstract algebra, before changing fields completely to pursue studies in integral equations, where he revolutionized the then current practices. In the early 1890s, he developed continuous fractal space-filling curves in multiple dimensions, building on earlier work by Giuseppe Peano. As early as 1899, he proposed a whole new formal set of geometrical axioms, known as Hilbert's axioms, to replace the traditional axioms of Euclid.
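
As a hedged illustration of the space-filling idea (a modern reconstruction, not Hilbert's 1891 presentation), the recursive Python sketch below generates the sequence of lattice points visited by the 2-dimensional Hilbert curve at a given order: each stage stitches together four rotated or reflected copies of the previous stage, and as the order grows the path passes arbitrarily close to every point of the square.

    def hilbert_points(order):
        # Recursively build the list of lattice points visited by the order-n Hilbert curve
        if order == 0:
            return [(0, 0)]
        prev = hilbert_points(order - 1)
        size = 2 ** (order - 1)
        curve = []
        curve += [(y, x) for x, y in prev]                            # lower-left quarter, transposed
        curve += [(x, y + size) for x, y in prev]                     # upper-left quarter
        curve += [(x + size, y + size) for x, y in prev]              # upper-right quarter
        curve += [(2 * size - 1 - y, size - 1 - x) for x, y in prev]  # lower-right quarter, flipped
        return curve

    print(hilbert_points(2))   # 16 points snaking through a 4x4 grid, each adjacent to the next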

But perhaps his greatest legacy is his work on equations, often referred to as his finiteness theorem. He showed that
although there were an infinite number of possible
equations, it was nevertheless possible to split them up into
a finite number of types of equations which could then be
used, almost like a set of building blocks, to produce all the
other equations.

Interestingly, though, Hilbert could not actually construct this finite set of equations, just prove that it must exist
(sometimes referred to as an existence proof, rather than
constructive proof). At the time, some critics passed this off
as mere theology or smoke-and-mirrors, but it effectively
marked the beginnings of a whole new style of abstract
mathematics.

Hilbert's algorithm for space-filling curves

This use of an existence proof rather than constructive proof was also implicit in his development, during the first decade of the 20th Century, of the mathematical concept of what came to be known as Hilbert space. Hilbert space
is a generalization of the notion of Euclidean space which
extends the methods of vector algebra and calculus to
spaces with any finite (or even infinite) number of
dimensions. Hilbert space provided the basis for important
contributions to the mathematics of physics over the
following decades, and may still offer one of the best
mathematical formulations of quantum mechanics.

Hilbert was unfailingly optimistic about the future of mathematics, never doubting that his 23 problems would
soon be solved. In fact, he went so far as to claim that there
are absolutely no unsolvable problems - a famous quote of
his (dating from 1930, and also engraved on his tombstone)
proclaimed, “We must know! We will know!” - and he was
convinced that the whole of mathematics could, and
ultimately would, be put on unshakable logical foundations.
Another of his rallying cries was “in mathematics there is
no ignorabimus”, a reference to the traditional position on
the limits of scientific knowledge.

Among other things, Hilbert space can be used to study the harmonics of vibrating strings

Unlike Russell, Hilbert's formalism was premised on the idea that the ultimate base of mathematics lies, not in logic itself, but in a simpler system of pre-logical symbols which can be collected together in strings or axioms and manipulated according to a set of "rules of inference". His ambitious program to find a complete and consistent set of axioms for all of mathematics (which became known as Hilbert's
Program), received a severe set-back, however, with the incompleteness theorems of Kurt Gödel in the early 1930s. Nevertheless,
Hilbert's work had started logic on a course of clarification, and the need to understand Gödel's work then led to the development of
recursion theory and mathematical logic as an autonomous discipline in the 1930s, and later provided the basis for theoretical computer
science.

For a time, Hilbert bravely spoke out against the Nazi repression of his Jewish mathematician friends in Germany and Austria in the mid
1930s. But, after mass evictions, several suicides, many deaths in concentration camps, and even direct assassinations, he too eventually
lapsed into silence, and could only watch as one of the greatest mathematical centres of all time was systematically destroyed. By the
time of his death in 1943, little remained of the great mathematics community at Göttingen, and Hilbert was buried in relative obscurity,
his funeral attended by fewer than a dozen people and hardly reported in the press.

https://www.storyofmathematics.com/20th_hilbert.html

20TH CENTURY MATHEMATICS - GÖDEL


Kurt Gödel grew up a rather strange, sickly child in Vienna. From an early age his parents took to referring to him as "Herr Warum", Mr Why, for his insatiable curiosity. At the University of Vienna,
Gödel first studied number theory, but soon turned his attention to mathematical logic, which was
to consume him for most of the rest of his life. As a young man, he was, like Hilbert, optimistic
and convinced that mathematics could be made whole again, and would recover from the
uncertainties introduced by the work of Cantor and Riemann.

Between the wars, Gödel joined in the cafe discussions of a group of intense intellectuals and
philosophers known as the Vienna Circle, which included logical positivists such as Moritz Schlick,
Hans Hahn and Rudolf Carnap, who rejected metaphysics as meaningless and sought to codify all
knowledge in a single standard language of science.

Although Gödel did not necessarily share the positivistic philosophical outlook of the Vienna Circle, it was in this environment that he pursued his dream of solving the second, and perhaps most
overarching, of Hilbert’s 23 problems, which sought to find a logical foundation for all of
mathematics. The ideas he came up with would revolutionize mathematics, as he effectively
proved, mathematically and philosophically, that Hilbert’s (and his own) optimism was unfounded
and that such a foundation was just not possible.
Kurt Gödel (1906-1978)
His first achievement, which actually served to advance Hilbert's Program, was his completeness
theorem, which showed that all valid statements in Frege's "first order logic" can be proved from a set of simple axioms. However, he then turned his attention to "second order logic", i.e. a logic powerful enough to support arithmetic and more complex mathematical
theories (essentially, one able to accept sets as values of variables).

Gödel’s incompleteness theorem (technically "incompleteness theorems", plural, as there were actually two separate theorems, although
they are usually spoken of together) of 1931 showed that, within any logical system for mathematics (or at least in any system that is
powerful and complex enough to be able to describe the arithmetic of the natural numbers, and therefore to be interesting to most
mathematicians), there will be some statements about numbers which are true but which can NEVER be proved. This was enough to
prompt John von Neumann to comment that "it's all over".

His approach began with a plain language assertion such as "this statement cannot be proved", a version of the
ancient “liar paradox”, and a statement which itself must be
either true or false. If the statement is false, then that
means that the statement can be proved, suggesting that
it is actually true, thus generating a contradiction. For this
to have implications in mathematics, though, Gödel needed
to convert the statement into a "formal language" (i.e. a
pure statement of arithmetic). He did this using a clever code based on prime numbers, in which products of prime powers encode the symbols, operators, grammatical rules and all the other requirements of a formal language. The
resulting mathematical statement therefore appears, like its
natural language equivalent, to be true but unprovable, and
must therefore remain undecided.
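
As a toy illustration of the coding idea (a simplified sketch, not Gödel's actual numbering scheme; the symbol table below is an arbitrary assumption), each symbol of a formula is given a small code and the whole formula becomes a single number, with the k-th prime raised to the code of the k-th symbol, so that the original string can always be recovered by factorization.

    SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}   # toy symbol codes (an assumption)

    def first_primes(n):
        # Return the first n prime numbers by simple trial division
        found = []
        candidate = 2
        while len(found) < n:
            if all(candidate % p != 0 for p in found):
                found.append(candidate)
            candidate += 1
        return found

    def godel_number(formula):
        # Encode the k-th symbol as the exponent of the k-th prime, so the
        # whole formula becomes one (very large) natural number
        number = 1
        for p, symbol in zip(first_primes(len(formula)), formula):
            number *= p ** SYMBOLS[symbol]
        return number

    print(godel_number("S0=0+S0"))   # a single number that uniquely encodes the formula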

Gödel's Incompleteness Theorem

The incompleteness theorem - surely a mathematician's worst nightmare - led to something of a crisis in the mathematical community, raising the spectre of a problem which may turn out to be true but is still unprovable, something which had not even been considered in the whole two millennia plus history of mathematics. Gödel effectively put paid, at a stroke, to the ambitions of mathematicians like Bertrand Russell and David Hilbert, who sought to find a complete and consistent set of
axioms for all of mathematics. His work PROVED that any system of logic or numbers that mathematicians ever come up with will always
rest on at least a few unprovable assumptions. His conclusions also imply that not all mathematical questions are even computable, and
that it is impossible, even in principle, to create a machine or computer that will be able to do all that a human mind can do.
Unfortunately, the theorems also led to a personal crisis for Gödel. In the mid 1930s, he suffered
a series of mental breakdowns and spent some significant time in a sanatorium. Nevertheless, he
threw himself into the same problem that had destroyed the mental well-being of Georg Cantor during the previous century, the continuum hypothesis. In fact, he made an important step in the resolution of that notoriously difficult problem (by proving that the continuum hypothesis and the axiom of choice cannot be disproved from the standard axioms of set theory), without which Paul Cohen would probably never have been
able to come to his definitive solution. Like Cantor and others after him, though, Gödel too
suffered a gradual deterioration in his mental and physical health.

He was only kept afloat at all by the love of his life, Adele Nimbursky. Together, they witnessed
the virtual destruction of the German and Austrian mathematics community by the Nazi regime.
Eventually, along with many other eminent European mathematicians and scholars, Gödel fled
the Nazis to the safety of Princeton in the USA, where he became a close friend of fellow exile
Albert Einstein, contributing some demonstrations of paradoxical solutions to Einstein's field
equations in general relativity (including his celebrated Gödel metric of 1949).

But, even in the USA, he was not able to escape his demons, and was dogged by depression and
paranoia, suffering several more nervous breakdowns. Eventually, he would only eat food that
had been tested by his wife Adele, and, when Adele herself was hospitalized in 1977, Gödel simply
refused to eat and starved himself to death.

Gödel’s legacy is ambivalent. Although he is recognized as one of the great logicians of all time,
many were just not prepared to accept the almost nihilistic consequences of his conclusions, and
his explosion of the traditional formalist view of mathematics. Worse news was still to come,
though, as the mathematical community (including, as we will see, Alan Turing) struggled to come
to grips with Gödel's findings.

Representation of the Gödel Metric, an exact solution to Einstein's field equations

https://www.storyofmathematics.com/20th_godel.html

20TH CENTURY MATHEMATICS - TURING

The British mathematician Alan Turing is perhaps most famous for his war-time work at the British
code-breaking centre at Bletchley Park where his work led to the breaking of the German enigma
code (according to some, shortening the Second World War at a stroke, and potentially saving
thousands of lives). But he was also responsible for making Gödel’s already devastating
incompleteness theorem even more bleak and discouraging, and it is mainly on this - and the
development of computer science that his work gave rise to - that Turing’s mathematical legacy
rests.

Despite attending an expensive private school which strongly emphasized the classics rather than
the sciences, Turing showed early signs of the genius which was to become more prominent later,
solving advanced problems as a teenager without having even studied elementary calculus, and
immersing himself in the complex mathematics of Albert Einstein's work. He became a confirmed
atheist after the death of his close school friend Christopher Morcom, and
throughout his life he was an accomplished and committed long-distance runner.

In the years following the publication of Gödel’s incompleteness theorem, Turing desperately
wanted to clarify and simplify Gödel’s rather abstract and abstruse theorem, and to make it more
concrete. But his solution - which was published in 1936 and which, he later claimed, had come
to him in a vision - effectively involved the invention of something that has come to shape the
entire modern world, the computer.

Alan Turing (1912-1954)
During the 1930s, Turing recast incompleteness in terms of
computers (or, more specifically, a theoretical device that
manipulates symbols, known as a Turing machine),
replacing Gödel's universal arithmetic-based formal
language with this formal and simple device. He first proved
that such a machine would be capable of performing any
conceivable mathematical computation if it were
representable as an algorithm. He then went on to show
that, even for such a logical machine, essentially driven by
arithmetic, there would always be some problems they
would never be able to solve, and that a machine fed such
a problem would never stop trying to solve it, but would
never succeed (known as the “halting problem”).
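
The standard diagonal argument behind this result can be sketched in a few lines of Python (purely illustrative, and obviously not Turing's machine-based formalism; the function names are invented for the example): if a general halts() procedure existed, the program below could be fed to itself and would contradict whatever the procedure predicted.

    def halts(program, argument):
        # Hypothetical oracle, assumed (for the sake of contradiction) to decide
        # whether program(argument) would ever finish
        raise NotImplementedError("no such general procedure can exist")

    def troublemaker(program):
        # Do the opposite of whatever the oracle predicts about running program on itself
        if halts(program, program):
            while True:       # loop forever if the oracle says we would halt
                pass
        return "halted"       # halt immediately if the oracle says we would loop

    # Asking whether troublemaker(troublemaker) halts contradicts the oracle either way,
    # so no correct implementation of halts() can exist.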

In the process, he also proved that there was no way of telling beforehand which problems were the unprovable ones, thus providing a negative answer to the so-called
Entscheidungsproblem or “decision problem“, posed
by David Hilbert in 1928. This was a further slap in the face
for a mathematics community still reeling from Gödel’s
crushing incompleteness theorem.

Representation of a Turing Machine

After the war, Turing continued the work he had begun, and worked on the development of early computers such as ACE (Automatic Computing Engine) and the Manchester Mark 1. Although the computer he developed was a very basic and limited machine by modern standards, Turing
clearly saw its potential, and dreamed that one day
computers would be more than machines, capable of learning, thinking and communicating. He was the first to develop ideas for a chess-
playing computer program, and saw mastery in the game as one of the goals that designers of intelligent machines should strive for.

Indeed, he was the first to address the problem of artificial intelligence, and proposed an experiment now known as the
Turing Test in an attempt to define a standard for a
machine to be called "intelligent". By this test, a computer
could be said to "think" if it could fool a human interrogator
into thinking that the conversation was with a human. This
showed remarkable foresight at a time long before the
Internet, when the only available computers were the size
of a room and less powerful than a modern pocket
calculator.

Turing's personal philosophy was to be free from hypocrisy, compromise and deceit. He was, for example, a homosexual
at a time when it was both illegal and even dangerous, yet
he never hid it nor made it an issue. Unlike Gödel (who
strongly believed in the power of intuition, and who was
convinced that the human mind was capable of going
beyond the limitations of the systems he described), Turing
clearly felt a certain affinity with computers and, to some
extent, he saw them as embodying this admirable absence
of lies or hypocrisy.

Turing test

After the war, he was kept under surveillance as a potential security risk by the authorities and eventually, in 1952, he was arrested, charged and found guilty of engaging in a homosexual act. As a result, he was chemically castrated by an injection of the female hormone estrogen, which caused him to grow breasts and also affected his mind. In 1954, Turing was found dead, having committed suicide with cyanide.

https://www.storyofmathematics.com/20th_turing.html

20TH CENTURY MATHEMATICS - WEIL


André Weil was a very influential French mathematician around the middle of the 20th Century.
Born into a prosperous Jewish family in Paris, he was brother to the well-known philosopher and
writer Simone Weil, and both were child prodigies. He was passionately addicted to mathematics
by the age of ten, but he also loved to travel and study languages (by the age of sixteen he had
read the "Bhagavad Gita" in the original Sanskrit).

He studied (and later taught) in Paris, Rome, Göttingen and elsewhere, as well as at the Aligarh
Muslim University in Uttar Pradesh, India, where he further explored what would become a life-
long interest in Hinduism and Sanskrit literature.

Even as a young man, Weil made substantial contributions in many areas of mathematics, and
was particularly animated by the idea of discovering profound connections between algebraic
geometry and number theory. His fascination with Diophantine equations led to his first
substantial piece of mathematical research on the theory of algebraic curves. During the 1930s,
he introduced the adele ring, a topological ring in algebraic number theory and topological
algebra, which is built on the field of rational numbers.

André Weil (1906-1998)

It was also at this time that he became a founding member, and the de facto early leader, of the so-called Bourbaki group of French mathematicians. This influential group published many textbooks on advanced 20th Century
mathematics under the assumed name of Nicolas Bourbaki,
in an attempt to give a unified description of all
mathematics founded on set theory. Bourbaki has the
distinction of having been refused membership of the
American Mathematical Society for being non-existent
(although he was a member of the Mathematical Society of
France!)

When the Second World War broke out, Weil, a committed conscientious objector, fled to Finland, where he was mistakenly arrested as a possible spy. Having made his way back to France, he was again arrested and imprisoned for refusing to report for military service. In his trial, he
cited the Bhagavad Gita to justify his stand, arguing that his
true dharma was the pursuit of mathematics, not assisting
in the war effort, however just the cause. Given the choice
of five more years in prison or joining a French combat unit,
though, he chose the latter, an especially lucky decision
given that the prison was blown up shortly afterwards.

But it was in 1940, in a prison near Rouen, that Weil did the
work that really made his reputation (although his full
proofs had to wait until 1948, and even more rigorous
proofs were supplied by Pierre Deligne in 1973). Building
on the prescient work of his countryman Évariste Galois in
the previous century, Weil picked up the idea of using
geometry to analyze equations, and developed algebraic
geometry, a whole new language for understanding
solutions to equations.
Weil was an early leader of the Bourbaki group who published many influential textbooks on modern mathematics

The Weil conjectures on local zeta-functions effectively
proved the Riemann hypothesis for curves over finite fields,
by counting the number of points on algebraic varieties over
finite fields. In the process, he introduced for the first time
the notion of an abstract algebraic variety and thereby laid
the foundations for abstract algebraic geometry and the
modern theory of abelian varieties, as well as the theory of
modular forms, automorphic functions and automorphic
representations. His work on algebraic curves has
influenced a wide variety of areas, including some outside
of mathematics, such as elementary particle physics and
string theory.

In 1941, Weil and his wife took the opportunity to sail for
the United States, where they spent the rest of the War and
the rest of their lives. In the late 1950s, Weil formulated
another important conjecture, this time on Tamagawa
numbers, which remained resistant to proof until 1989. He
was instrumental in the formulation of the so-called Shimura-Taniyama-Weil conjecture on elliptic curves which was used by Andrew Wiles as a link in the proof of Fermat's Last Theorem. He also developed the Weil representation, an infinite-dimensional linear representation of theta functions which gave a contemporary framework for understanding the classical theory of quadratic forms.

An illustration of the "cycle évanescent" or "vanishing cycle" described in Deligne's proof of the Weil conjectures
Over his lifetime, Weil received many honorary memberships, including the London Mathematical Society, the Royal Society of London,
the French Academy of Sciences and the American National Academy of Sciences. He remained active as professor emeritus at the
Institute for Advanced Study in Princeton until a few years before his death.

https://www.storyofmathematics.com/20th_weil.html

20TH CENTURY MATHEMATICS - COHEN

Paul Cohen was one of a new generation of American mathematicians inspired by the influx of
European exiles over the War years. He himself was a second-generation Jewish immigrant,
dauntingly intelligent and extremely ambitious. By sheer intelligence and force of will, he went
on to garner for himself fame, riches and the top mathematical prizes.

He was educated in New York, at Brooklyn College, and at the University of Chicago, before working his way
up to a professorship at Stanford University. He went on to win the prestigious Fields Medal in
mathematics, as well as the National Medal of Science and the Bôcher Memorial Prize in
mathematical analysis. His mathematical interests were very broad, ranging from mathematical
analysis and differential equations to mathematical logic and number theory.

In the early 1960s, he earnestly applied himself to the first problem on Hilbert’s list of 23 open
problems, Cantor’s continuum hypothesis: whether or not there exists a set of numbers bigger than
the set of all natural (or whole) numbers but smaller than the set of real (or decimal) numbers.
Cantor was convinced that the answer was “no” but was not able to prove it satisfactorily, and
neither was anyone else who had applied themselves to the problem since.
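
In the cardinal-arithmetic notation that later became standard (added here only as a compact
restatement of the question, not part of the original article), the hypothesis asserts that no set
S satisfies

    \aleph_0 < |S| < 2^{\aleph_0},

or, equivalently, that 2^{\aleph_0} = \aleph_1, where \aleph_0 is the size of the set of natural
numbers and 2^{\aleph_0} is the size of the set of real numbers.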

Some progress had been made since Cantor. Between about 1908 and 1922, Ernst Zermelo and Abraham
Fraenkel developed the standard form of axiomatic set theory, which was to become the most common
foundation of mathematics, known as the Zermelo-Fraenkel set theory (ZF, or, as modified by the
Axiom of Choice, as ZFC).

Paul Cohen (1934-2007)

Kurt Gödel had demonstrated in 1940 that the continuum hypothesis is consistent with ZF - that is,
that it cannot be disproved from the standard Zermelo-Fraenkel set theory - even if the axiom of
choice is adopted. Cohen’s task, then, was to show whether or not the continuum hypothesis was
independent of ZFC, and specifically to prove the independence of the axiom of choice.

Cohen’s extraordinary and daring conclusion, arrived at using a new technique he developed himself
called "forcing", was that both answers could be true, i.e. that the continuum hypothesis and the
axiom of choice were completely independent from ZF set theory. Thus, there could be two
different, internally consistent, mathematics: one where the continuum hypothesis was true (and
there was no such set of numbers), and one where the hypothesis was false (and a set of numbers
did exist). The proof seemed to be correct, but Cohen’s methods, particularly his new technique of
“forcing”, were so new that no-one was really quite sure until Gödel finally gave his stamp of
approval in 1963.
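
In the shorthand of relative consistency (a modern restatement rather than the article's or
Cohen's own wording), the two halves of the story fit together as follows:

    \text{Gödel (1940):} \quad \operatorname{Con}(\mathrm{ZF}) \;\Rightarrow\; \operatorname{Con}(\mathrm{ZFC} + \mathrm{CH})
    \text{Cohen (1963):} \quad \operatorname{Con}(\mathrm{ZF}) \;\Rightarrow\; \operatorname{Con}(\mathrm{ZFC} + \neg\mathrm{CH}) \ \text{ and } \ \operatorname{Con}(\mathrm{ZF} + \neg\mathrm{AC})

so, assuming the ZF axioms are themselves consistent, neither the continuum hypothesis nor the
axiom of choice can be either proved or disproved from them.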

His findings were as revolutionary as Gödel’s own. Since that time, mathematicians have built up
two different mathematical worlds, one in which the continuum hypothesis applies and one in which
it does not, and modern mathematical proofs must insert a statement declaring whether or not the
result depends on the continuum hypothesis.

One of several alternative formulations of the Zermelo-Fraenkel Axioms and the Axiom of Choice

Cohen’s paradigm-changing proof brought him fame, riches and mathematical prizes galore, and he became a top professor at Stanford
and Princeton. Flushed with success, he decided to tackle the Holy Grail of modern mathematics, Hilbert’s eighth problem, the Riemann
hypothesis. However, he ended up spending the last 40 years of his life, until his death in 2007, on the problem, still with no resolution
(although his approach has given new hope to others, including his brilliant student, Peter Sarnak).

https://www.storyofmathematics.com/20th_cohen.html

20TH CENTURY MATHEMATICS - ROBINSON AND MATIYASEVICH


In a field almost completely dominated by men, Julia Robinson was one of the very few women to
have made a serious impact on mathematics - others who merit mention are Sophie Germain and Sofia
Kovalevskaya in the 19th Century, and Alicia Boole Stott and Emmy Noether in the 20th - and she
became the first woman to be elected as president of the American Mathematical Society.

Brought up in the deserts of Arizona, Robinson was a shy and sickly child but
showed an innate love for, and facility with, numbers from an early age. She
had to overcome many obstacles and to fight to be allowed to continue
studying mathematics, but she persevered, obtained her PhD at Berkeley and
married a mathematician, her Berkeley professor, Raphael Robinson.

She spent most of her career pursuing computability and “decision problems”, questions in formal
systems with “yes” or “no” answers, depending on the values of some input parameters. Her
particular passion was Hilbert’s tenth problem, and she applied herself to it obsessively. The
problem was to ascertain whether there was any way of telling whether or not any particular
Diophantine equation (a polynomial equation with integer coefficients) had whole number solutions.
The growing belief was that no such universal method was possible, but it seemed very difficult to
actually prove that it would NEVER be possible to come up with such a method.

Julia Robinson (1919-1985) and Yuri Matiyasevich (1947- )
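
To see why one direction of the question is easy and the other is not, consider a brute-force
search (a hypothetical illustration in Python, not anything drawn from Robinson's own work):
enumerating candidate integer tuples will eventually confirm any solvable equation, but no finite
amount of searching can, by itself, certify that an equation has no solutions at all - which is
precisely what Hilbert's tenth problem asked an algorithm to decide.

    from itertools import product

    def find_solution(poly, n_vars, bound):
        """Search for an integer zero of `poly` with every variable in
        [-bound, bound]; return the first one found, or None."""
        for values in product(range(-bound, bound + 1), repeat=n_vars):
            if poly(*values) == 0:
                return values
        return None

    # A solvable equation is eventually confirmed: x^2 + y^2 - 25 = 0.
    print(find_solution(lambda x, y: x*x + y*y - 25, 2, 10))    # e.g. (-5, 0)

    # But x^2 - y^2 - 2 = 0 has no integer solutions, and the search simply
    # returns None at every bound -- it can never prove impossibility.
    print(find_solution(lambda x, y: x*x - y*y - 2, 2, 100))    # None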

Throughout the 1950s and 1960s, Robinson, along with her colleagues Martin Davis and Hilary
Putnam, doggedly pursued the problem, and eventually developed what became known as the Robinson
hypothesis, which suggested that, in order to show that no such method existed, all that was
needed was to construct one equation whose solutions formed a very specific set of numbers, one
which grew exponentially.

The problem had obsessed Robinson for over twenty years and she confessed to a desperate desire to see its solution before she died,
whoever might achieve it. In order to progress further, though, she needed input from the young Russian mathematician, Yuri
Matiyasevich.

Born and educated in Leningrad (St. Petersburg), Matiyasevich had already distinguished himself as a mathematical prodigy, and won
numerous prizes in mathematics. He turned to Hilbert’s tenth problem as the subject of his doctoral thesis at Leningrad State University,
and began to correspond with Robinson about her progress, and to search for a way forward.

After pursuing the problem during the late 1960s, Matiyasevich finally discovered the final missing piece of the jigsaw in 1970, when he
was just 22 years old. He saw how he could capture the famous Fibonacci sequence of numbers using the equations that were at the
heart of Hilbert’s tenth problem, and so, building on Robinson’s earlier work, it was finally proved that it is in fact impossible to devise a
process by which it can be determined in a finite number of operations whether Diophantine equations are solvable in rational integers.
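
One concrete ingredient often quoted in expositions of the proof - offered here as an illustrative
identity, not as a claim about Matiyasevich's exact construction - is that consecutive Fibonacci
numbers are precisely the positive solutions of a single polynomial equation: for integers
0 < x <= y, the pair (x, y) consists of consecutive Fibonacci numbers exactly when
(y^2 - xy - x^2)^2 = 1. A quick numerical check of both directions in Python:

    def fibonacci_pairs(count):
        """Return the first `count` pairs of consecutive Fibonacci numbers."""
        pairs, a, b = [], 1, 1
        for _ in range(count):
            pairs.append((a, b))
            a, b = b, a + b
        return pairs

    def is_solution(x, y):
        """True when (x, y) satisfies (y^2 - x*y - x^2)^2 == 1."""
        return (y * y - x * y - x * x) ** 2 == 1

    fibs = set(fibonacci_pairs(20))   # includes every pair with entries below 200

    # Every consecutive Fibonacci pair satisfies the equation ...
    assert all(is_solution(x, y) for x, y in fibs)

    # ... and, at least up to this bound, nothing else does.
    for x in range(1, 200):
        for y in range(x, 200):
            assert is_solution(x, y) == ((x, y) in fibs)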

In a poignant example of the internationalism of mathematics at the height of the Cold War,
Matiyasevich freely acknowledged his debt to Robinson’s work, and the two went on to work together
on other problems until Robinson’s death in 1985.

Among his other achievements, Matiyasevich and his colleague Boris Stechkin also developed an
interesting “visual sieve” for prime numbers, which effectively “crosses out” all the composite
numbers, leaving only the primes (a code sketch of the idea follows the caption below). He has a
theorem on recursively enumerable sets named after him, as well as a polynomial related to the
colourings of triangulations of spheres. He is head of the Laboratory of Mathematical Logic at the
St. Petersburg Department of the Steklov Institute of Mathematics of the Russian Academy of
Sciences, and is a member of several mathematical societies and boards.

Matiyasevich-Stechkin visual sieve for prime numbers
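
The geometric idea can be sketched in a few lines of code (a toy reconstruction of the published
idea, with hypothetical function names, not the authors' own presentation). Mark the points
(-a, a^2) and (b, b^2) on the parabola y = x^2 for whole numbers a, b >= 2 and draw the chord
between each such pair; a little algebra shows that the chord meets the vertical axis at height
a*b, so every composite height gets "crossed out" by at least one chord while the primes are never
touched.

    def parabola_sieve(limit):
        """Primes below `limit` via the chord construction: the chord through
        (-a, a^2) and (b, b^2) on the parabola y = x^2 has slope b - a and
        meets the y-axis at height a*b, so drawing all chords with a, b >= 2
        crosses out exactly the composite heights."""
        crossed = set()
        a = 2
        while a * a < limit:
            b = a
            while a * b < limit:
                crossed.add(a * b)        # the y-intercept of this chord
                b += 1
            a += 1
        return [n for n in range(2, limit) if n not in crossed]

    print(parabola_sieve(50))
    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]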
