
CALCULUS WITHOUT LIMITS

The current standard for the calculus curriculum is, in my opinion, a failure in many respects. We try to present it with the modern standard of mathematical rigor and comprehensiveness — but of course have to leave out many crucial details — at the expense of much of the wonderful intuition behind Leibniz’s notations, and many classical problems
that it helped solve. I believe in the revival, at least in a first introduction, of the old-fashioned calculus,¹ as it was used with great success for well over a hundred years before the definition of limits was made precise by Cauchy. It remains in practical use by many who rediscovered it on their own, thanks to Leibniz’s notations (sadly, the vast majority of students miss out on it). In brief, the concept of differentials, the mysterious infinitesimal quantities, takes center stage. [For what it’s worth, I have not consulted any pre-Cauchy calculus textbooks, so this may only represent my own interpretation.]

1. Differentials
The scenario is a bit different from the modern emphasis on functions. We start with
two variables, typically called x and y, that are related in some fashion — not just in the
form of y = f (x), i.e., y is given by an (explicit) expression of x, but more generally any
relation involving x and y such as
x^2 + y^2 = 1
which describes (the points on) the unit circle.
The differentials will be quantities that are written as dx and dy, which are meant to
represent infinitesimal (i.e. infinitely small) changes in x and y, respectively, subject to
the constraint of the given relation. As x and y are related via a mathematical formula, dx
and dy will likewise be related in a precise way, and the game is to find this relation. What
we normally call the derivative (of y with respect to x) is, not surprisingly, the quotient
dy/dx of the two differentials.
The procedure to compute differentials is straightforward: you put x + dx in place of x, and y + dy in place of y, and manipulate according to the familiar rules of algebra. For the relation above, we have

(x + dx)^2 + (y + dy)^2 = 1
x^2 + 2x dx + dx^2 + y^2 + 2y dy + dy^2 = 1

Using the fact that x^2 + y^2 = 1, we have

2x dx + dx^2 + 2y dy + dy^2 = 0
Date: January 8, 2016.
¹ The word, by the way, simply means a set of rules or procedures (often simple and mechanical) for calculating something, not so much the theory (if any) behind it. It has almost exclusively come to stand for “calculus of differentials and integrals.”

Now, as dx is infinitely small, dx^2 will be far smaller in comparison, so we can safely neglect it; the same goes for dy^2. Therefore, we have
2x dx + 2y dy = 0
which tells us how dx and dy are related, at each point (x, y) on the circle. If you like, we can write it in the form of a derivative

dy/dx = −x/y = −x/(±√(1 − x^2))

where the ± depends on which half of the circle you are on.
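If you have a computer at hand, a relation like 2x dx + 2y dy = 0 is easy to sanity-check numerically: nudge x by a small step h, recompute y from the circle equation, and compare the ratio of the changes with −x/y. A minimal sketch in Python (the sample point and step size are arbitrary choices of mine):

```python
import math

def circle_slope(x, y):
    # dy/dx = -x/y, read off from 2x dx + 2y dy = 0
    return -x / y

# a sample point on the upper half of the unit circle
x = 0.6
y = math.sqrt(1 - x ** 2)   # 0.8

# finite-difference approximation of dy/dx along the circle
h = 1e-6
approx = (math.sqrt(1 - (x + h) ** 2) - y) / h

print(circle_slope(x, y), approx)  # both close to -0.75
```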
Example 2: y = √x, so that

y^2 = x
(y + dy)^2 = x + dx
y^2 + 2y dy + dy^2 = x + dx
2y dy + dy^2 = dx

and “dropping the higher differential” as before, we get

2y dy = dx

or if you wish,

dy/dx = 1/(2y) = 1/(2√x)
Example 3: y = x^(−2), or

x^2 y = 1
(x + dx)^2 (y + dy) = 1
(x^2 + 2x dx + dx^2)(y + dy) = 1

Dropping the dx^2 and expanding,

x^2 y + x^2 dy + 2xy dx + 2x dx dy = 1

and then dropping the dx dy (and using x^2 y = 1 again),

x^2 dy + 2xy dx = 0

or

dy/dx = −2y/x = −2x^(−3)
In fact, it’s handy to remember

d(x^n) = n x^(n−1) dx

even when n is a fraction or negative (and indeed any real number, but that’s rare in applications). So we may directly “apply d to both sides” of any (algebraic) relation of x and y, which is, I suppose, what the word differentiation originally meant, with the help of a few rules that I’ll come to shortly.
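The rule can be spot-checked numerically for fractional and negative exponents by comparing a difference quotient against n x^(n−1). A quick sketch (the test values are arbitrary choices of mine):

```python
def d_power(x, n):
    # the rule d(x^n) = n x^(n-1) dx, read as a derivative
    return n * x ** (n - 1)

h = 1e-7
for n in (0.5, -2.0, 1.5, 3.0):
    x = 2.0
    finite = ((x + h) ** n - x ** n) / h   # difference quotient
    assert abs(finite - d_power(x, n)) < 1e-5, (n, finite)

print("d(x^n) = n x^(n-1) dx holds numerically for n = 1/2, -2, 3/2, 3")
```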

Example 4: y = sin θ. We compute as follows:

y + dy = sin(θ + dθ) = sin θ cos(dθ) + cos θ sin(dθ)

according to the trigonometric identity. Here’s some hand-waving: as dθ is infinitely small, cos(dθ) is very close to 1 while sin(dθ) is practically just dθ. So we get

y + dy = sin θ + cos θ dθ

so

dy = cos θ dθ
as we expect. Similarly, if x = cos θ, we have dx = −sin θ dθ. The picture below illustrates (the relations of) these differentials geometrically, and may help convince you of the validity of the hand-waving argument.
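The hand-waving can also be tested numerically: for a small h, sin(h)/h is indeed very close to 1, cos(h) is very close to 1, and the difference quotient of sin matches cos θ. A sketch (the values of θ and h are my choices):

```python
import math

h = 1e-6
ratio = math.sin(h) / h    # sin(h) is "practically just" h
cos_h = math.cos(h)        # cos(h) is "very close to" 1

theta = 1.1
# difference quotient of sin, which should match cos(theta)
dy_over_dtheta = (math.sin(theta + h) - math.sin(theta)) / h

print(ratio, cos_h, dy_over_dtheta, math.cos(theta))
```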
The only thing that’s a bit troublesome is d(e^x) = e^x dx, but the standard approach via limits is not easy (and often omitted) anyway, so take that for granted. These are ALL you need for all practical calculations involving differentials, with the help of the following rules, which are all very straightforward (here u and v are any expressions in x, or any quantities related to x like y was above):
1. Sum rule: d(u + v) = du + dv
2. Product (or Leibniz’s) rule: d(uv) = u dv + v du (again du dv drops out because it’s
of “higher order”).
3. The quotient rule is a consequence of the product rule:

du = d((u/v) · v) = (u/v) dv + v d(u/v)

so we get

d(u/v) = (v du − u dv)/v^2
4. Chain rule: just “follow your nose.” For example:
d sin(x^2) = cos(x^2) d(x^2) = cos(x^2) · 2x dx
5. Inverse functions and implicit functions are easy (as shown already).
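These rules can likewise be verified numerically on sample functions. Here is a sketch checking the quotient rule for u = sin x and v = 1 + x^2 (my choices, purely for illustration):

```python
import math

def u(x): return math.sin(x)
def v(x): return 1 + x * x

def quotient_rule(x):
    # d(u/v) = (v du - u dv) / v^2, with du = cos x dx and dv = 2x dx
    return (v(x) * math.cos(x) - u(x) * 2 * x) / v(x) ** 2

x, h = 0.7, 1e-6
finite = (u(x + h) / v(x + h) - u(x) / v(x)) / h
print(quotient_rule(x), finite)  # agree to several decimal places
```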
Example 5: y = tan^(−1)(x), or

x = tan y

dx = d(sin y / cos y)
   = (cos y d(sin y) − sin y d(cos y)) / cos^2 y
   = (cos^2 y dy + sin^2 y dy) / cos^2 y
   = (1 + tan^2 y) dy
   = sec^2 y dy

(which, by the way, gives another useful rule d(tan y) = sec^2 y dy). If one wishes to see dy in terms of x and dx on the other side, we note that 1 + tan^2 y = 1 + x^2 and obtain

dy = dx/(1 + x^2)
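This formula is easy to verify against a difference quotient of arctan. A sketch (the sample point is an arbitrary choice of mine):

```python
import math

def dy_dx(x):
    # dy = dx / (1 + x^2) for y = arctan(x)
    return 1 / (1 + x * x)

x, h = 0.5, 1e-7
finite = (math.atan(x + h) - math.atan(x)) / h
print(dy_dx(x), finite)  # both close to 0.8
```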

Note that this calculus of differentials instantly applies to “multi-variable functions” with little extra effort. For example, to differentiate z = e^(xy):

dz = e^(xy) d(xy)
   = e^(xy) (x dy + y dx)
   = y e^(xy) dx + x e^(xy) dy

which is often called the “total differential” and written in terms of partial derivatives:

dz = (∂z/∂x) dx + (∂z/∂y) dy

Leibniz’s notation leads nicely to Cartan’s calculus of differential forms.
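The total differential can be checked numerically by comparing the predicted first-order change in z with the actual change after small nudges dx and dy. A sketch (the point and the nudges are arbitrary choices of mine):

```python
import math

def z(x, y):
    return math.exp(x * y)

def dz(x, y, dx, dy):
    # total differential: dz = y e^{xy} dx + x e^{xy} dy
    return y * z(x, y) * dx + x * z(x, y) * dy

x, y = 0.3, 0.8
dx, dy = 1e-6, -2e-6
actual = z(x + dx, y + dy) - z(x, y)   # true change in z
print(dz(x, y, dx, dy), actual)        # agree to first order
```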

2. Integrals
In addition to taking the quotient of two differentials to get at a “meaningful” quantity
(the derivative), we can also add up infinitely many infinitesimals, and this process is called
integration. Again Leibniz provides us with the magical notation:
∫_a^b f(x) dx
which means, loosely speaking, to ſum² up an infinite collection of infinitesimals of the form f(x) dx, one for each x between a and b. For the simplest case, when f(x) = 1 (constant), we see that the sum of all the infinitesimal increments dx just accumulates and gives the total increment in x from x = a to x = b, hence

∫_a^b dx = b − a
The general problem of integration is, at least in principle, very straightforward: find a quantity y (i.e., find a relation y = F(x)), by whatever means you can, such that dy = f(x) dx; the sum of these infinitely many dy’s then gives the total increment in the variable y from the point when x = a to when x = b. In notation,

∫_a^b f(x) dx = ∫_{y=F(a)}^{y=F(b)} dy = F(b) − F(a)

² A common old typography for s. See, e.g., the original copy of the Declaration of Independence.

which is precisely the Fundamental Theorem of Calculus. For example:

∫_0^1 dx/(1 + x^2) = ∫_{x=0}^{x=1} d tan^(−1) x = tan^(−1)(1) − tan^(−1)(0) = π/4
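The “sum of infinitesimals” reading suggests an obvious numerical check: chop [0, 1] into many thin slices of width dx, add up f(x) dx, and compare with π/4. A sketch (the slice count is an arbitrary choice of mine):

```python
import math

def riemann(f, a, b, n=100_000):
    # add up f(x) dx over n thin slices of [a, b] (midpoint rule)
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

total = riemann(lambda x: 1 / (1 + x * x), 0, 1)
print(total, math.pi / 4)  # the sum approaches pi/4
```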
Of course the hard part is to find the right y, and all the integration techniques are just tricks to move the symbol d right next to ∫, so they could “cancel out” in some sense. To illustrate it, consider

∫ √(1 − x^2) dx = ∫ √(1 − cos^2 θ) d cos θ = ∫ sin θ (−sin θ dθ)

and with the help of the trig identity sin^2 θ = (1 − cos(2θ))/2, we can proceed:

= −∫ (1 − cos(2θ))/2 dθ = −(1/2)∫ dθ + (1/4)∫ cos(2θ) d(2θ) = −(1/2)∫ dθ + (1/4)∫ d sin(2θ)
In other words, the quantity

y = −(1/2)θ + (1/4) sin(2θ),  with x = cos θ

satisfies dy = √(1 − x^2) dx, as one can readily check. If we wish to evaluate the definite integral, say from x = 0 to x = t, we simply need to find the total increment of y from the point when x = 0 to x = t, by keeping track of the end values of θ. Alternatively, one may write y purely in terms of x:

y = −(1/2) cos^(−1)(x) + (1/4) · 2 sin θ cos θ = −(1/2) cos^(−1)(x) + (1/2) x √(1 − x^2)
so that

∫_0^t √(1 − x^2) dx = (−(1/2) cos^(−1)(t) + (1/2) t √(1 − t^2)) − (−(1/2) cos^(−1)(0) + 0)
                    = (1/2) sin^(−1)(t) + (1/2) t √(1 − t^2)
This result can be read off from this picture:

so the intermediate variable θ does have geometric meaning.
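The antiderivative just found can be checked against a direct numerical sum of √(1 − x^2) dx; at t = 1 it also recovers π/4, the area of the quarter circle. A sketch (t = 0.9 and the slice count are arbitrary choices of mine):

```python
import math

def F(t):
    # the claimed antiderivative: (1/2) arcsin(t) + (1/2) t sqrt(1 - t^2)
    return 0.5 * math.asin(t) + 0.5 * t * math.sqrt(1 - t * t)

def riemann(f, a, b, n=100_000):
    # add up f(x) dx over n thin slices of [a, b] (midpoint rule)
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

t = 0.9
area = riemann(lambda x: math.sqrt(1 - x * x), 0, t)
print(area, F(t))  # agree closely
```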



Such geometric considerations are often behind integration techniques. For another
example, this picture

illustrates the formula

∫ y dx = xy − ∫ x dy

which (if we used u and v instead) is nothing but the formula for integration by parts.
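The formula can be checked numerically for a concrete choice, say y = e^x on [0, 1] (my choice, purely for illustration), where both sides come out to e − 1:

```python
import math

def riemann(f, a, b, n=100_000):
    # add up f(x) dx over n thin slices of [a, b] (midpoint rule)
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# take y = e^x on [0, 1], so dy = e^x dx
lhs = riemann(math.exp, 0, 1)    # integral of y dx
boundary = 1 * math.e - 0 * 1    # xy evaluated at the endpoints
rhs = boundary - riemann(lambda x: x * math.exp(x), 0, 1)  # minus integral of x dy
print(lhs, rhs)  # both close to e - 1
```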

3. Taylor series
One important part of the subject of calculus is the Taylor series, and in the process
of making precise to what types of functions, and over which domains, it applies, we’ve
made it one of the hardest topics for students. I find that an approach through the (original) Mean Value Theorem gets at the heart of the matter very quickly.
It seems to me that the Mean Value Theorem originally meant the assertion

∫_a^b f(x) dx = f(c) · (b − a)

for some c between a and b. The value f(c) is rightfully the mean value of the function over the interval.
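For a concrete instance (my choice, purely for illustration): f(x) = x^2 on [0, 1] has mean value 1/3, attained at c = 1/√3, which indeed lies between a and b:

```python
import math

def riemann(f, a, b, n=100_000):
    # add up f(x) dx over n thin slices of [a, b] (midpoint rule)
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: x * x
mean = riemann(f, 0, 1) / (1 - 0)   # mean value of f over [0, 1], here 1/3
c = math.sqrt(1 / 3)                # the c with f(c) = mean, inside (0, 1)
print(mean, f(c))
```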
Of course this theorem may fail if f (x) takes a jump, and skips over its mean value, but
something slightly less than continuity would suffice, namely that f (x) be the derivative
of some other function, and it is this version that has taken over the name of Mean Value
Theorem. [However, it has become apparent that being continuously differentiable is far
more useful a criterion than simply being differentiable, and has won the notation C 1 .]
The generalization for functions of several variables (say x1 , x2 , . . .), over a compact and
connected region D, is equally intuitive:
∫_D f(x_1, x_2, . . .) dx_1 dx_2 · · · = f(c) · vol(D)

for some c ∈ D.
Now, let’s start with the Fundamental Theorem of Calculus

f(x) = f(a) + ∫_a^x f′(x_1) dx_1

and note that we could do the same for f′(x_1):

f′(x_1) = f′(a) + ∫_a^{x_1} f′′(x_2) dx_2

Substituting this in, we get

f(x) = f(a) + ∫_a^x ( f′(a) + ∫_a^{x_1} f′′(x_2) dx_2 ) dx_1
     = f(a) + f′(a)(x − a) + ∫_a^x ∫_a^{x_1} f′′(x_2) dx_2 dx_1
The latter integral is over a triangle (in the x_1 x_2-plane), so by the Mean Value Theorem it is f′′(c) · (x − a)^2/2 for some c ∈ [a, x]. Therefore,

f(x) = f(a) + f′(a)(x − a) + (f′′(c)/2)(x − a)^2
The process clearly iterates, so we end up with Taylor’s Theorem (with Lagrange’s remain-
der):
f(x) = Σ_{n=0}^{r} (f^(n)(a)/n!) (x − a)^n + (f^(r+1)(c)/(r+1)!) (x − a)^(r+1)
for some c ∈ [a, x], where the n! comes from the volume of the n-simplex.
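The Lagrange remainder gives a concrete, checkable bound: for sin x around a = 0, every derivative is bounded by 1, so the remainder is at most |x|^(r+1)/(r+1)!. A numerical sketch (x = 2 and r = 9 are arbitrary choices of mine):

```python
import math

def taylor_sin(x, r):
    # Taylor polynomial of sin around a = 0, up to degree r
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(r // 2 + 1) if 2 * k + 1 <= r)

x, r = 2.0, 9
remainder = abs(math.sin(x) - taylor_sin(x, r))
bound = abs(x) ** (r + 1) / math.factorial(r + 1)  # since |f^(r+1)(c)| <= 1
print(remainder, bound)  # the remainder stays below the Lagrange bound
```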
For nice functions such as e^x, sin x, and cos x, f^(r+1)(c) in the remainder stays bounded (as c ∈ [a, x] and r ∈ N vary, but x is fixed), so the remainder tends to 0 as r → ∞, for each fixed x ∈ R. For other functions, such as tan^(−1)(x), this is true for x in some finite interval (which always turns out to be of the form a − R to a + R, with or without the endpoints). In these cases we speak of the Taylor series of f(x) around x = a:

f(x) = Σ_{n=0}^{∞} (f^(n)(a)/n!) (x − a)^n
