
TAYLOR'S THEOREM

In calculus, Taylor's theorem gives a sequence of approximations of a differentiable function around a given point by polynomials (the Taylor polynomials of that function) whose coefficients depend only on the derivatives of the function at that point. The theorem also gives precise estimates on the size of the error in the approximation. The theorem is named after the mathematician Brook Taylor, who stated it in 1712, though the result was first discovered 41 years earlier in 1671 by James Gregory.
History
The Greek philosopher Zeno considered the problem of summing an infinite series
to achieve a finite result, but rejected it as an impossibility: the result was Zeno's
paradox. Later, Aristotle proposed a philosophical resolution of the paradox, but
the mathematical content was apparently unresolved until taken up by Democritus
and then Archimedes. It was through Archimedes's method of exhaustion that an
infinite number of progressive subdivisions could be performed to achieve a finite
result. Liu Hui independently employed a similar method a few centuries later.
In the 14th century, the earliest examples of the use of Taylor series and closely related methods were given by Madhava of Sangamagrama. Though no record of
his work survives, writings of later Indian mathematicians suggest that he found a
number of special cases of the Taylor series, including those for the trigonometric
functions of sine, cosine, tangent, and arctangent. The Kerala school of astronomy
and mathematics further expanded his works with various series expansions and
rational approximations until the 16th century.
In the 17th century, James Gregory also worked in this area and published several Maclaurin series. It was not until 1715, however, that a general method for constructing these series for all functions for which they exist was finally provided by Brook Taylor, after whom the series are now named.
The Maclaurin series was named after Colin Maclaurin, a professor in Edinburgh,
who published the special case of the Taylor result in the 18th century.
Definitions
The Taylor series of a real or complex function f(x) that is infinitely differentiable in a neighbourhood of a real or complex number a is the power series

f(a) + f'(a)(x - a) + (f''(a)/2!)(x - a)^2 + (f'''(a)/3!)(x - a)^3 + ...

which in a more compact form can be written as

Σ_{n=0}^∞ (f^(n)(a)/n!)(x - a)^n,

where n! denotes the factorial of n and f^(n)(a) denotes the nth derivative of f evaluated at the point a. The zeroth derivative of f is defined to be f itself, and (x - a)^0 and 0! are both defined to be 1.
In the particular case where a = 0, the series is also called a Maclaurin series.

Taylor's theorem in one variable

Taylor's theorem asserts that any sufficiently smooth function can locally be approximated by polynomials. A simple example of application of Taylor's theorem is the approximation of the exponential function e^x near x = 0:

e^x ≈ 1 + x + x^2/2! + x^3/3! + ... + x^n/n!

The approximation is called the n-th order Taylor approximation to e^x because it approximates the value of the exponential function by a polynomial of degree n. This approximation only holds for x close to zero, and as x moves further away from zero, the approximation becomes worse. The quality of the approximation is controlled by the remainder term:

R_n(x) = e^x - (1 + x + x^2/2! + ... + x^n/n!).
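This behaviour is easy to experiment with; here is a small Python sketch (the function names are our own, chosen for illustration):

```python
import math

def taylor_exp(x, n):
    # n-th order Taylor polynomial of e^x about 0: 1 + x + ... + x^n/n!
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

def remainder(x, n):
    # Remainder R_n(x) = e^x - P_n(x)
    return math.exp(x) - taylor_exp(x, n)
```

Evaluating `remainder` near zero and farther away shows the approximation degrading as x moves away from 0.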

More generally, Taylor's theorem applies to any sufficiently differentiable function f, giving an approximation, for x near a point a, of the form

f(x) ≈ f(a) + f'(a)(x - a) + (f''(a)/2!)(x - a)^2 + ... + (f^(n)(a)/n!)(x - a)^n.

The remainder term is just the difference of the function and its approximating polynomial:

R_n(x) = f(x) - [f(a) + f'(a)(x - a) + ... + (f^(n)(a)/n!)(x - a)^n].

Although an explicit formula for the remainder term is seldom of any use, Taylor's theorem also provides several ways in which to estimate the value of the remainder. In other words, for x near enough to a, the remainder ought to be "small"; Taylor's theorem gives information on precisely how small it actually is.

Statements
The precise statement of the theorem is as follows: If n ≥ 0 is an integer and f is a function which is n times continuously differentiable on the closed interval [a, x] and n + 1 times differentiable on the open interval (a, x), then

f(x) = f(a) + f'(a)(x - a) + (f''(a)/2!)(x - a)^2 + ... + (f^(n)(a)/n!)(x - a)^n + R_n(x).

Here, n! denotes the factorial of n, and R_n(x) is a remainder term, denoting the difference between the Taylor polynomial of degree n and the original function. The remainder term R_n(x) depends on x and is small if x is close enough to a. Several expressions are available for it.
The Lagrange form of the remainder term states that there exists a number ξ between a and x such that

R_n(x) = (f^(n+1)(ξ)/(n+1)!)(x - a)^(n+1).
This exposes Taylor's theorem as a generalization of the mean value theorem. In
fact, the mean value theorem is used to prove Taylor's theorem with the Lagrange
remainder term.
The Cauchy form of the remainder term states that there exists a number ξ between a and x such that

R_n(x) = (f^(n+1)(ξ)/n!)(x - ξ)^n (x - a).
More generally, if G(t) is a continuous function on [a, x] which is differentiable with non-vanishing derivative on (a, x), then there exists a number ξ between a and x such that

R_n(x) = (f^(n+1)(ξ)/n!)(x - ξ)^n · (G(x) - G(a))/G'(ξ).
This exposes Taylor's theorem as a generalization of the Cauchy mean value
theorem.
The above forms are restricted to the case of functions taking real values.
However, the integral form of the remainder term applies as well when the function takes complex values. It is:

R_n(x) = ∫ from a to x of (f^(n+1)(t)/n!)(x - t)^n dt,

provided, as is often the case, f^(n) is absolutely continuous on [a, x]. This shows the theorem to be a generalization of the fundamental theorem of calculus.
In general, a function does not need to be equal to its Taylor series, since it is
possible that the Taylor series does not converge, or that it converges to a different
function. However, for many functions f(x), one can show that the remainder term R_n approaches zero as n approaches ∞. Those functions can be expressed as a Taylor series in a neighbourhood of the point a and are called analytic.
Taylor's theorem (with the integral formulation of the remainder term) is also valid
if the function has complex values or vector values. Furthermore, there is a
version of Taylor's theorem for functions in several variables. For complex
functions analytic in a region containing a circle C surrounding a and its interior,
there is a contour integral expression for the remainder:

R_n(x) = ((x - a)^(n+1)/(2πi)) ∮_C f(z)/((z - a)^(n+1)(z - x)) dz,

valid inside of C.
Theorem 5.29 (Taylor's Theorem - Lagrange form of Remainder) Let f be continuous on [a, x], and assume that each of f', f'', ..., f^(n+1) is defined on [a, x]. Then we can write

f(x) = P_n(x) + R_n(x),

where P_n(x), the Taylor polynomial of degree n about a, and R_n(x), the corresponding remainder, are given by

P_n(x) = f(a) + f'(a)(x - a) + (f''(a)/2!)(x - a)^2 + ... + (f^(n)(a)/n!)(x - a)^n,

R_n(x) = (f^(n+1)(c)/(n+1)!)(x - a)^(n+1),

where c is some point between a and x.
We make no attempt to prove this, although the proof can be done with the tools
we have at our disposal. Some quick comments:
the theorem is also true for x < a; just restate it for the interval [x, a] etc;
if n = 0, we have f(x) = f(a) + (x - a)f'(c) for some c between a and x; this is a restatement of the Mean Value Theorem;
if n = 1, we have

f(x) = f(a) + (x - a)f'(a) + ((x - a)^2/2!)f''(c)

for some c between a and x; this is often called the Second Mean Value Theorem;
in general we can restate Taylor's Theorem as

f(x) = f(a) + (x - a)f'(a) + ... + ((x - a)^n/n!)f^(n)(a) + ((x - a)^(n+1)/(n+1)!)f^(n+1)(c),

for some c between a and x;
the special case in which a = 0 has a special name; it is called Maclaurin's Theorem;
just as with Rolle's Theorem, or the Mean Value Theorem, there is no useful information about the point c.
We now explore the meaning and content of the theorem with a number of
examples.
Example 5.30 Find the Taylor polynomial of order n about 0 for f(x) = e^x, and write down the corresponding remainder term.
Solution. There is no difficulty here in calculating derivatives -- clearly f^(k)(x) = e^x for all k, and so f^(k)(0) = 1. Thus by Taylor's theorem,

e^x = 1 + x + x^2/2! + x^3/3! + ... + x^n/n! + e^c x^(n+1)/(n+1)!

for some point c between 0 and x. In particular,

P_n(x) = 1 + x + x^2/2! + ... + x^n/n! and R_n(x) = e^c x^(n+1)/(n+1)!.

We can actually say a little more about this example if we recall that x is fixed. We have

e^x = P_n(x) + R_n(x) = P_n(x) + e^c x^(n+1)/(n+1)!.

We show that R_n(x) → 0 as n → ∞, so that (again for fixed x), the sequence P_n(x) → e^x as n → ∞. If x < 0, then e^c < 1, while if x ≥ 0, then since c < x, we have e^c < e^x. Thus

|R_n(x)| = e^c |x|^(n+1)/(n+1)! ≤ max(e^x, 1) |x|^(n+1)/(n+1)! → 0 as n → ∞.

We think of the limit of the polynomial as forming a series, the Taylor series for e^x. We study series (and then Taylor series) in Section 7.
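The bound above is easy to check numerically; a minimal sketch (the helper name is ours):

```python
import math

def exp_remainder_bound(x, n):
    # Lagrange-type bound: |R_n(x)| <= max(e^x, 1) * |x|^(n+1) / (n+1)!
    return max(math.exp(x), 1.0) * abs(x) ** (n + 1) / math.factorial(n + 1)

# For fixed x, the factorial in the denominator eventually dominates,
# so the bound shrinks to zero as n grows.
bounds = [exp_remainder_bound(3.0, n) for n in range(1, 20)]
```

Comparing the bound with the actual difference e^x - P_n(x) confirms that the true remainder stays within it.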
Example 5.31 Find the Taylor polynomial of order 1 about a for f(x) = e^x, and write down the corresponding remainder term.
Solution. Using the derivatives computed above, by Taylor's theorem,

e^x = e^a + (x - a)e^a + ((x - a)^2/2!)e^c

for some point c between a and x. In particular,

P_1(x) = e^a + (x - a)e^a and R_1(x) = ((x - a)^2/2!)e^c.

Example 5.32 Find the Maclaurin polynomial of order n > 3 about 0 for f(x) = (1 + x)^3, and write down the corresponding remainder term.
Solution. We have

f(x) = (1 + x)^3
f'(x) = 3(1 + x)^2
f''(x) = 6(1 + x)
f'''(x) = 6
f^(n)(x) = 0 if n > 3,

and so, by Taylor's theorem,

(1 + x)^3 = 1 + 3x + 3x^2 + x^3,

a result we could have got directly, but which is at least reassuring.

Example 5.33 Find the Taylor polynomial of order n about 0 for f(x) = sin x, and write down the corresponding remainder term.
Solution. There is no difficulty here in calculating derivatives -- we have

f(x) = sin x
f'(x) = cos x
f''(x) = -sin x
f'''(x) = -cos x
f^(4)(x) = sin x and so on.

Thus by Taylor's theorem,

sin x = x - x^3/3! + x^5/5! - ... + (-1)^n x^(2n+1)/(2n+1)! + R_(2n+1)(x).

Writing down the remainder term isn't particularly useful, but the important point is that

|R_(2n+1)(x)| → 0 as n → ∞.
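The convergence of these partial sums is easy to see numerically; a quick sketch (the function name is illustrative):

```python
import math

def sin_taylor(x, n):
    # Taylor polynomial of sin x about 0, keeping terms up to x^(2n+1)
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n + 1))
```

Even a handful of terms matches math.sin to many decimal places for moderate x.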

Exercise 5.34 Recall that cosh x = (e^x + e^(-x))/2, and that sinh x = (e^x - e^(-x))/2. Now check the shape of the following Taylor polynomials:

cos x = 1 - x^2/2! + x^4/4! - ... + (-1)^n x^(2n)/(2n)! + ...

sinh x = x + x^3/3! + x^5/5! + ... + x^(2n+1)/(2n+1)! + ...

cosh x = 1 + x^2/2! + x^4/4! + ... + x^(2n)/(2n)! + ...



Example
Compute the 7th degree Maclaurin polynomial for the function

f(x) = ln(cos x).

First, rewrite the function as

f(x) = ln(1 + (cos x - 1)).

We have for the natural logarithm (by using the big O notation)

ln(1 + u) = u - u^2/2 + u^3/3 + O(u^4),

and for the cosine function

cos x - 1 = -x^2/2! + x^4/4! - x^6/6! + O(x^8).

The latter series expansion has a zero constant term, which enables us to substitute the second series into the first one and to easily omit terms of higher order than the 7th degree by using the big O notation:

ln(cos x) = -x^2/2 - x^4/12 - x^6/45 + O(x^8).

Since the cosine is an even function, the coefficients for all the odd powers x, x^3, x^5, x^7, ... have to be zero.
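Assuming the function in this example is ln(cos x), the 7th-degree polynomial (whose odd coefficients vanish) can be checked numerically; a minimal sketch:

```python
import math

def lncos_p7(x):
    # Candidate 7th-degree Maclaurin polynomial of ln(cos x);
    # all odd-degree coefficients vanish because cos is an even function.
    return -x ** 2 / 2 - x ** 4 / 12 - x ** 6 / 45
```

For small x the polynomial agrees with math.log(math.cos(x)) to high accuracy, with the error shrinking like x^8.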

Applications of Taylor Series

1. Evaluating definite integrals
2. Understanding asymptotic behaviour
3. Understanding the growth of functions
4. Solving differential equations

We started studying Taylor series because we said that polynomial functions are easy, and that if we could find a way of representing complicated functions as series ("infinite polynomials") then maybe some properties of those functions would be easy to study too. In this section, we'll show you a few ways in which Taylor series can make life easier.

Evaluating definite integrals
Remember that we've said that some functions have no antiderivative which can be
expressed in terms of familiar functions. This makes evaluating definite integrals of
these functions difficult because the Fundamental Theorem of Calculus cannot be
used. However, if we have a series representation of a function, we can oftentimes
use that to evaluate a definite integral.
Here is an example. Suppose we want to evaluate the definite integral

∫ from 0 to 1 of sin(x^2) dx.

The integrand has no antiderivative expressible in terms of familiar functions. However, we know how to find its Taylor series: we know that

sin t = t - t^3/3! + t^5/5! - ...

Now if we substitute t = x^2, we have

sin(x^2) = x^2 - x^6/3! + x^10/5! - ...

In spite of the fact that we cannot antidifferentiate the function, we can antidifferentiate the Taylor series:

∫ from 0 to 1 of sin(x^2) dx = 1/3 - 1/(7 · 3!) + 1/(11 · 5!) - ... = 1/3 - 1/42 + 1/1320 - ...

Notice that this is an alternating series, so we know that it converges. If we add up the first four terms, we find that the series converges to approximately 0.31026.
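The term-by-term integration above can be sketched in a few lines of Python (assuming, as in the worked series, that the integrand is sin(x^2)):

```python
import math

def integral_sin_x2(n_terms):
    # Term-by-term integration of sin(x^2) = x^2 - x^6/3! + x^10/5! - ...
    # over [0, 1]: the k-th term integrates to (-1)^k / ((4k+3) * (2k+1)!)
    return sum((-1) ** k / ((4 * k + 3) * math.factorial(2 * k + 1))
               for k in range(n_terms))
```

Because the series alternates with rapidly shrinking terms, four terms already pin the integral down to several decimal places.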

Understanding asymptotic behaviour
Sometimes, a Taylor series can tell us useful information about how a function
behaves in an important part of its domain. Here is an example which will
demonstrate.
A famous fact from electricity and magnetism says that a charge q generates an
electric field whose strength is inversely proportional to the square of the distance
from the charge. That is, at a distance r away from the charge, the electric field is

E = kq/r^2,

where k is some constant of proportionality.
Oftentimes an electric charge is accompanied by an equal and opposite charge nearby. Such an object is called an electric dipole. To describe this, we will put a charge q at the point x = d and a charge -q at the point x = -d.

Along the x axis, the strength of the electric field is the sum of the electric fields from each of the two charges. In particular,

E = kq/(x - d)^2 - kq/(x + d)^2.
If we are interested in the electric field far away from the dipole, we can consider
what happens for values of x much larger than d. We will use a Taylor series to study
the behaviour in this region.

Remember that the geometric series has the form

1/(1 - u) = 1 + u + u^2 + u^3 + ...

If we differentiate this series, we obtain

1/(1 - u)^2 = 1 + 2u + 3u^2 + 4u^3 + ...

Into this expression, we can substitute u = d/x to obtain

1/(x - d)^2 = (1/x^2)(1 + 2(d/x) + 3(d/x)^2 + ...)

In the same way, if we substitute u = -d/x, we have

1/(x + d)^2 = (1/x^2)(1 - 2(d/x) + 3(d/x)^2 - ...)

Now putting this together gives

E = kq(1/x^2)[(1 + 2(d/x) + 3(d/x)^2 + ...) - (1 - 2(d/x) + 3(d/x)^2 - ...)] = 4kqd/x^3 + ...
In other words, far away from the dipole where x is very large, we see that the electric
field strength is proportional to the inverse cube of the distance. The two charges
partially cancel one another out to produce a weaker electric field at a distance.
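With the charges placed at x = d and x = -d (an illustrative choice of coordinates; the inverse-cube conclusion does not depend on it), the asymptotic behaviour can be checked numerically:

```python
def dipole_field(x, q=1.0, d=0.01, k=1.0):
    # Exact on-axis field from +q at x = d and -q at x = -d (assumed placement)
    return k * q / (x - d) ** 2 - k * q / (x + d) ** 2

# Far from the dipole, E should approach 4*k*q*d / x**3, so the ratio
# E * x**3 / (4*k*q*d) should tend to 1 as x grows.
ratios = [dipole_field(x) * x ** 3 / (4 * 1.0 * 1.0 * 0.01)
          for x in (10.0, 100.0, 1000.0)]
```

The ratio approaches 1 increasingly closely, confirming the inverse-cube falloff.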



Understanding the growth of functions
This example is similar in spirit to the previous one. Several times in this course, we have used the fact that exponentials grow much more rapidly than polynomials. We recorded this by saying that

e^x/x^n → ∞ as x → ∞

for any exponent n. Let's think about this for a minute because it is an important property of exponentials. The ratio e^x/x^n measures how large the exponential is compared to the polynomial. If this ratio were very small, we would conclude that the polynomial is larger than the exponential. But if the ratio is large, we would conclude that the exponential is much larger than the polynomial. The fact that this ratio becomes arbitrarily large means that the exponential becomes larger than the polynomial by a factor which is as large as we would like. This is what we mean when we say "an exponential grows faster than a polynomial."
To see why this relationship holds, we can write down the Taylor series for e^x and divide through by x^n:

e^x/x^n = 1/x^n + 1/x^(n-1) + ... + 1/n! + x/(n+1)! + ...

Notice that this last term, x/(n+1)!, becomes arbitrarily large as x → ∞. That implies that the ratio we are interested in does as well:

e^x/x^n → ∞ as x → ∞.
Basically, the exponential grows faster than any polynomial because it behaves
like an infinite polynomial whose coefficients are all positive.
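A quick numerical sketch makes the growth visible (the function name is ours):

```python
import math

def growth_ratio(x, n):
    # e^x / x^n: the exponential eventually dwarfs any fixed power of x
    return math.exp(x) / x ** n

# The ratio is tiny for small x but explodes as x increases.
values = [growth_ratio(x, 5) for x in (10.0, 20.0, 40.0)]
```

Doubling x repeatedly sends the ratio from below 1 to over a billion, exactly the "arbitrarily large factor" described above.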

Solving differential equations
Some differential equations cannot be solved in terms of familiar functions (just as
some functions do not have antiderivatives which can be expressed in terms of
familiar functions). However, Taylor series can come to the rescue again. Here we
will present two examples to give you the idea.

Example 1:
We will solve the initial value problem

y' = y,  y(0) = 1.

Of course, we know that the solution is y = e^x, but we will see how to discover this in a different way. First, we will write out the solution in terms of its Taylor series:

y = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ...

Since this function satisfies the condition y(0) = 1, we must have a_0 = 1. We also have

y' = a_1 + 2a_2 x + 3a_3 x^2 + 4a_4 x^3 + ...

Since the differential equation says that y' = y, we can equate these two Taylor series:

a_1 + 2a_2 x + 3a_3 x^2 + ... = a_0 + a_1 x + a_2 x^2 + ...

If we now equate the coefficients, we obtain:

a_1 = a_0 = 1,  2a_2 = a_1 so a_2 = 1/2,  3a_3 = a_2 so a_3 = 1/(3 · 2),  and in general a_n = 1/n!.

This means that y = 1 + x + x^2/2! + x^3/3! + ... = e^x, as we expect.
Of course, this is an initial value problem we know how to solve. The real value of this method is in studying initial value problems that we do not know how to solve.
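The coefficient recursion is mechanical enough to automate; a sketch, assuming the initial value problem is y' = y with y(0) = 1:

```python
import math

def exp_series_coeffs(n_terms):
    # Solve y' = y, y(0) = 1 by power series: equating coefficients of
    # y = sum a_k x^k with those of y' gives (k + 1) * a_{k+1} = a_k.
    a = [1.0]  # a_0 = 1 from the initial condition
    for k in range(n_terms - 1):
        a.append(a[k] / (k + 1))
    return a

coeffs = exp_series_coeffs(8)
```

The computed coefficients are exactly 1/n!, recovering the series for e^x.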

Example 2:
Here we will study Airy's equation with initial conditions:

This equation is important in optics. In fact, it explains why a rainbow appears the
way in which it does! As before, we will write the solution as a series:

Since we have the initial conditions, and .
Now we can write down the derivatives:

The equation then gives

Again, we can equate the coefficients of x to obtain

This gives us the first few terms of the solution:

If we continue in this way, we can write down many terms of the series (perhaps you
see the pattern already?) and then draw a graph of the solution. This looks like this:
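The same recursion can generate as many terms as we like; a sketch, assuming the initial conditions are y(0) = 1 and y'(0) = 0:

```python
def airy_series_coeffs(n_terms):
    # Solve y'' = x*y with y(0) = 1, y'(0) = 0 by power series:
    # equating coefficients gives (k + 2) * (k + 1) * a_{k+2} = a_{k-1}.
    a = [1.0, 0.0, 0.0]  # a_0 = 1, a_1 = 0, and 2*a_2 = 0 forces a_2 = 0
    for k in range(1, n_terms - 2):
        a.append(a[k - 1] / ((k + 2) * (k + 1)))
    return a[:n_terms]

coeffs = airy_series_coeffs(8)
```

The nonzero coefficients appear only at every third power, matching the terms 1 + x^3/6 + x^6/180 + ... found by hand.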
Example 3
Determine the Taylor series for f(x) = cos x about x = 0.

Solution
This time there is no formula that will give us the derivative for each n, so let's start taking derivatives and plugging in x = 0:

f(x) = cos x        f(0) = 1
f'(x) = -sin x      f'(0) = 0
f''(x) = -cos x     f''(0) = -1
f'''(x) = sin x     f'''(0) = 0
f^(4)(x) = cos x    f^(4)(0) = 1
f^(5)(x) = -sin x   f^(5)(0) = 0

Once we reach this point it's fairly clear that there is a pattern emerging here. Just what this pattern is has yet to be determined, but it does seem fairly clear that a pattern does exist.

Let's plug what we've got into the formula for the Taylor series and see what we get:

cos x = 1 + 0 · x - x^2/2! + 0 · x^3 + x^4/4! + 0 · x^5 - x^6/6! + ...

So, every other term is zero.

We would like to write this in terms of a series; however, finding a formula that is zero every other term and gives the correct answer for those that aren't zero would be unnecessarily complicated. So, let's rewrite what we've got above and, while we're at it, renumber the terms as follows:

cos x = 1 - x^2/2! + x^4/4! - x^6/6! + ...

With this renumbering we can fairly easily get a formula for the Taylor series of the cosine function about x = 0:

cos x = Σ_{n=0}^∞ (-1)^n x^(2n)/(2n)!.
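The renumbered formula translates directly into code; a short sketch of the partial sums:

```python
import math

def cos_taylor(x, n_terms):
    # Partial sum of cos x = sum_{n >= 0} (-1)^n * x^(2n) / (2n)!
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n)
               for n in range(n_terms))
```

A handful of terms already reproduces math.cos to near machine precision for moderate x.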
