
0.1 Practical Guide - Power Series


Consider the series, which depends upon one parameter $x \in \mathbb{R}$,
$$\sum_{n \ge 0} a_n x^n, \qquad a_n \in \mathbb{R}.$$
The partial sums are polynomials
$$s_n(x) = a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n = \sum_{k=0}^{n} a_k x^k.$$
Just as for polynomials, the real numbers $a_n \in \mathbb{R}$, for all $n \ge 0$, are called the coefficients of the power series;
namely, $a_n$ is the coefficient of $x^n$.
The terms of the series are powers of $x$, which explains the name "power series".
Examples.
a) All coefficients could be zero, $a_n = 0$ for all $n \ge 0$; we get the null series. In this case
the series is convergent for all $x \in \mathbb{R}$.
b) All but finitely many coefficients could be zero, that is, $a_n = 0$ for all $n > p$; we actually get a
polynomial $a_0 + a_1 x + a_2 x^2 + \dots + a_p x^p$ of degree $p$. In this case the series is convergent for all $x \in \mathbb{R}$, since the partial
sums are constant for $n \ge p$:
$$s_n(x) = a_0 + a_1 x + a_2 x^2 + \dots + a_p x^p + 0 + \dots + 0 = a_0 + a_1 x + a_2 x^2 + \dots + a_p x^p.$$
c) the "geometric" series, for which $a_n = 1$ for all $n \ge 0$:
$$\sum_{n \ge 0} x^n.$$
For $x = 1$ the series is clearly divergent by the necessary condition (the terms do not tend to zero).
Since for $x \ne 1$ we have
$$s_n(x) = 1 + x + x^2 + \dots + x^n = \frac{1 - x^{n+1}}{1 - x},$$
it is easy to see this is convergent (as $n \to \infty$) only for $|x| < 1$ and
$$\lim_{n \to \infty} s_n(x) = \lim_{n \to \infty} \frac{1 - x^{n+1}}{1 - x} = \frac{1}{1 - x}.$$
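As a quick numerical sanity check of the geometric-series limit, here is a minimal Python sketch (the helper name is my own, chosen for illustration) comparing the partial sums $s_n(x)$ with $1/(1-x)$ for a value $|x| < 1$:

```python
def geometric_partial_sum(x, n):
    """Partial sum s_n(x) = 1 + x + x^2 + ... + x^n."""
    return sum(x**k for k in range(n + 1))

x = 0.5
for n in (5, 10, 20, 40):
    print(n, geometric_partial_sum(x, n), 1 / (1 - x))
# The partial sums approach 1/(1 - x) = 2.0; for |x| >= 1 they would not settle down.
```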
d) the "exponential" series
$$\sum_{n \ge 0} \frac{1}{n!} x^n, \quad \text{where } 0! \stackrel{\text{def}}{=} 1.$$
Problem. We are interested in the convergence of the sequence of partial sums $(s_n(x))_{n \ge 1}$, which clearly
depends on the value of $x$ regarded as a parameter.
Therefore the problem is to find for which values $x \in \mathbb{R}$ the series is convergent.
It is easy to see that for $x = 0$ the sequence of partial sums is constant, therefore convergent:
$$s_n(0) = a_0 \quad \text{for all } n \ge 0.$$
The question is:
"are there any nonzero values of $x \in \mathbb{R}$ ($x \ne 0$) for which the series is convergent?"
A simple but quite important case is when the ratio test may be applied to the series
$$\sum_{n \ge 0} a_n x^n, \quad \text{for } x \ne 0,$$
namely we assume the limit exists:
$$\lim_{n \to \infty} \frac{\left| a_{n+1} x^{n+1} \right|}{\left| a_n x^n \right|}.$$
Important remark.
The previous formula works only if $a_n \ne 0$ for all $n \ge 0$ (or for all $n \ge p > 0$).
But it is also possible to have infinitely many null coefficients.
In such a case the above formula actually "reads" as
$$\lim_{n \to \infty} \frac{\left| a_{n+k} x^{n+k} \right|}{\left| a_n x^n \right|}, \quad \text{for } a_n \ne 0 \text{ and } a_{n+k} \text{ the next nonzero coefficient}.$$
In other words, we consider $\sum_{n \ge 0} a_n x^n$ as a number series and apply the ratio test to the sequence of the nonzero
terms.
Example. Consider the series
$$\sum_{n \ge 0} \frac{1}{(2n)!} x^{2n} = 1 + \frac{1}{2!} x^2 + \frac{1}{4!} x^4 + \dots$$
In this case we have $a_{2n} = \frac{1}{(2n)!}$ and $a_{2n+1} = 0$ for all $n \ge 0$.
Therefore we take the ratio of every two consecutive nonzero terms and we compute
$$\lim_{n \to \infty} \frac{\left| a_{2n+2} x^{2n+2} \right|}{\left| a_{2n} x^{2n} \right|} = \lim_{n \to \infty} \frac{1}{(2n+2)!} \cdot \frac{(2n)!}{1} \cdot |x|^2 = \lim_{n \to \infty} \frac{|x|^2}{(2n+1)(2n+2)} = 0 < 1.$$
Consequently, by the ratio test, the series is convergent for all $x \in \mathbb{R}$, since the limit is zero no matter the value of
$x \in \mathbb{R}$.
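A short numerical sketch of this computation (the helper name is hypothetical, not from the text): form the ratio of consecutive nonzero terms of $\sum x^{2n}/(2n)!$ and watch it go to zero.

```python
from math import factorial

def term(n, x):
    """Nonzero term a_{2n} x^(2n) = x^(2n) / (2n)!."""
    return x ** (2 * n) / factorial(2 * n)

x = 3.0
for n in range(5):
    ratio = abs(term(n + 1, x)) / abs(term(n, x))
    print(n, ratio)   # equals |x|^2 / ((2n+1)(2n+2)): 4.5, 0.75, 0.3, ...
# The ratios shrink toward 0, so the ratio test gives convergence for every x.
```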
But in "general" we still write
$$\lim_{n \to \infty} \frac{\left| a_{n+1} x^{n+1} \right|}{\left| a_n x^n \right|} = \lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| \cdot |x| = L \, |x|.$$
According to the ratio test (for series) we have:
i) if $L |x| < 1$ then the series $\sum_{n \ge 0} |a_n| \, |x|^n$ is convergent,
so the series $\sum_{n \ge 0} a_n x^n$ is also convergent;
ii) if $L |x| > 1$ then both series $\sum_{n \ge 0} |a_n| \, |x|^n$ and $\sum_{n \ge 0} a_n x^n$ are divergent;
iii) if $L |x| = 1$ then no conclusion follows.
In other words, if
$$\lim_{n \to \infty} \left| \frac{a_n}{a_{n+1}} \right| \overset{\text{not}}{=} r = \frac{1}{L}, \qquad r \in (0, +\infty),$$
then
i) the power series is convergent (absolutely convergent) for $|x| < r$;
ii) the power series is divergent for $|x| > r$;
iii) for $x = r$ and $x = -r$ we must directly investigate the corresponding series
$$\sum_{n \ge 0} a_n r^n, \quad \text{respectively} \quad \sum_{n \ge 0} a_n (-r)^n.$$
If $r = 0$ ($L = +\infty$) the power series is divergent for all $x \ne 0$.
If $r = +\infty$ ($L = 0$) the power series is convergent for all $x \in \mathbb{R}$.
The number $r$ is called the radius of convergence of the power series.
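A minimal Python sketch of this recipe (names are mine, not from the text): estimate $r = \lim |a_n / a_{n+1}|$ by evaluating the coefficient ratio at a single large index.

```python
from math import factorial

def radius_estimate(a, n=50):
    """Approximate r = lim |a_n / a_{n+1}| by evaluating the ratio at one large index n."""
    return abs(a(n) / a(n + 1))

print(radius_estimate(lambda n: 1.0))                 # geometric series (a_n = 1): r = 1
print(radius_estimate(lambda n: 1.0 / n))             # series sum x^n / n (n >= 1): ratio -> 1, so r = 1
print(radius_estimate(lambda n: 1.0 / factorial(n)))  # exponential series: ratio = n + 1 grows without bound, so r = +infinity
```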
What happens if the ratio test cannot be applied, that is,
$$\nexists \lim_{n \to \infty} \left| \frac{a_n}{a_{n+1}} \right| \, ?$$
There are a few great theorems.
Theorem 1. (Abel) Consider the power series $\sum_{n \ge 0} a_n x^n$. If the series is convergent for $x = \alpha \ne 0$, then the
power series is absolutely convergent for all $x \in \mathbb{R}$ with $|x| < |\alpha|$.
Comment. This result shows that the set of convergence of a power series is an interval.
The set of convergence denotes the set containing all values of $x$ for which the power series is convergent.
Definition. For a power series $\sum_{n \ge 0} a_n x^n$, define the radius of convergence $r$ as
$$r = \sup \left\{ |x| \; : \; x \in \mathbb{R} \text{ such that the series } \sum_{n \ge 0} a_n x^n \text{ is convergent} \right\}.$$
Clearly $r \ge 0$, since any power series is convergent for $x = 0$.
Theorem. (radius of convergence)
Consider a power series $\sum_{n \ge 0} a_n x^n$ and let $r$ be the corresponding radius of convergence.
i) if $r = 0$ then the power series is convergent only for $x = 0$;
ii) if $r = \infty$ then the power series is convergent for all $x \in \mathbb{R}$;
iii) if $r \in (0, +\infty)$ ($r$ is finite) then the power series is convergent for all $|x| < r$
and divergent for all $|x| > r$.
Remark. Actually, a power series is "absolutely convergent"
for all $x \in \mathbb{R}$ (if $r = \infty$) or for all $|x| < r$ (if $r$ is finite),
that is, the series $\sum_{n \ge 0} |a_n x^n|$ is convergent.
Theorem. (Cauchy-Hadamard) For a power series $\sum_{n \ge 0} a_n x^n$ with corresponding radius of convergence $r$ we
have the following formula:
$$r = \frac{1}{\limsup\limits_{n \to \infty} \sqrt[n]{|a_n|}}.$$
This means in particular that $\frac{1}{+\infty} = 0$ and $\frac{1}{+0} = +\infty$.
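The Cauchy-Hadamard formula is the tool of choice when the limit of $|a_n / a_{n+1}|$ does not exist. A small hedged sketch with an example of my own (not from the text): coefficients oscillating between 3 and 1, so the coefficient ratio has no limit, yet $\limsup \sqrt[n]{|a_n|} = 1$ gives $r = 1$.

```python
def a(n):
    """Coefficients oscillating between 3 and 1: the ratio a_n / a_(n+1) jumps between 3 and 1/3."""
    return 3.0 if n % 2 == 0 else 1.0

print([a(n) / a(n + 1) for n in range(6)])   # 3, 1/3, 3, 1/3, ... : no limit, so the ratio recipe fails
roots = [a(n) ** (1.0 / n) for n in range(1, 200)]
L = max(roots[-20:])                          # crude estimate of limsup |a_n|^(1/n) over a tail of indices
print(L, "so r is about", 1.0 / L)            # L is close to 1, hence r = 1 by Cauchy-Hadamard
```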
Definition. Consider a power series $\sum_{n \ge 0} a_n x^n$ and assume $r > 0$ is the corresponding radius of convergence. For
every $x \in \mathbb{R}$ with $|x| < r$ the power series is convergent, which means we can define the function $s : (-r, r) \to \mathbb{R}$,
called the sum of the power series, defined as
$$s(x) = \lim_{n \to \infty} s_n(x) = \lim_{n \to \infty} \sum_{k=0}^{n} a_k x^k \overset{\text{not}}{=} \sum_{n=0}^{\infty} a_n x^n.$$
Theorem. Consider a power series $\sum_{n \ge 0} a_n x^n$, $r > 0$ the corresponding radius of convergence, and $s : (-r, r) \to \mathbb{R}$
the sum function. Consider the "derivative" power series $\sum_{n \ge 1} n a_n x^{n-1}$. Then:
1) the "derivative" power series has the same radius of convergence $r$;
2) the sum $s$ of the power series is differentiable on $(-r, r)$ and
$$s'(x) = \sum_{n=1}^{\infty} n a_n x^{n-1};$$
3) the sum function $s$ is indefinitely differentiable on $(-r, r)$;
4) the power series $\sum_{n \ge 0} \frac{a_n}{n+1} x^{n+1}$ has radius of convergence $r$ and its sum function
$$t(x) \overset{\text{not}}{=} \sum_{n=0}^{\infty} \frac{a_n}{n+1} x^{n+1}$$
satisfies $t'(x) = s(x)$.
In other words, to be more practical, we may write
$$\left( \sum_{n=0}^{\infty} a_n x^n \right)' = \sum_{n=1}^{\infty} n a_n x^{n-1}$$
and
$$\int \left( \sum_{n=0}^{\infty} a_n x^n \right) dx = \sum_{n=0}^{\infty} \frac{a_n}{n+1} x^{n+1} + C.$$
This means power series behave just like polynomials:
they can be differentiated or integrated "term by term".
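A quick numerical illustration of term-by-term differentiation (a sketch using the geometric series from above, not part of the text): differentiating $\sum x^n = 1/(1-x)$ term by term should give $\sum n x^{n-1} = 1/(1-x)^2$.

```python
N = 200   # truncation order; enough terms for an x well inside (-1, 1)
x = 0.3

series_sum = sum(x**n for n in range(N))                       # truncated sum of x^n
derivative_series = sum(n * x**(n - 1) for n in range(1, N))   # term-by-term derivative

print(series_sum, 1 / (1 - x))             # both about 1.428571...
print(derivative_series, 1 / (1 - x)**2)   # both about 2.040816...
```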
Theorem. Consider a power series $\sum_{n \ge 0} a_n x^n$, $r > 0$ the corresponding radius of convergence, $s : (-r, r) \to \mathbb{R}$
the sum function, and assume $r$ is finite ($r < \infty$).
If the series is convergent for $x = r$, that is, the series $\sum_{n \ge 0} a_n r^n$ is convergent, then the sum of this number
series is
$$\sum_{n=0}^{\infty} a_n r^n = \lim_{x \nearrow r} s(x).$$
The same holds for $x = -r$:
$$\sum_{n=0}^{\infty} a_n (-r)^n = \lim_{x \searrow -r} s(x).$$
Example. Consider the series $\sum_{n \ge 1} \frac{(-1)^{n-1}}{n}$. Prove it is convergent and compute the sum.
Proof. To prove convergence, just apply the Leibniz test. Next consider the power series $\sum_{n \ge 1} \frac{1}{n} x^n$.
It is easy to compute the radius of convergence:
$$r = \lim_{n \to \infty} \left| \frac{1/n}{1/(n+1)} \right| = \lim_{n \to \infty} \frac{n+1}{n} = 1.$$
Therefore the sum function $s : (-1, 1) \to \mathbb{R}$ is
$$s(x) = \sum_{n=1}^{\infty} \frac{1}{n} x^n, \quad \text{for } |x| < 1.$$
Now differentiate this function to get
$$s'(x) = \left( \sum_{n=1}^{\infty} \frac{1}{n} x^n \right)' = \sum_{n=1}^{\infty} x^{n-1} = 1 + x + x^2 + \dots = \sum_{p=0}^{\infty} x^p = \frac{1}{1 - x},$$
since we already know the sum of the geometric series.
Next integrate to get $s(x)$:
$$s(x) = \int s'(x)\, dx = \int \frac{1}{1 - x}\, dx = -\ln(1 - x) + C.$$
Since $s(x) = \sum_{n=1}^{\infty} \frac{1}{n} x^n = x + \frac{1}{2} x^2 + \dots$, it follows that $s(0) = 0$, and we get
$$0 = s(0) = -\ln(1 - 0) + C \implies C = 0.$$
By the previous theorem the sum of the series for $x = -1$ is
$$\sum_{n=1}^{\infty} (-1)^n \frac{1}{n} = \lim_{x \searrow -1} s(x) = \lim_{x \searrow -1} \left[ -\ln(1 - x) \right] = -\ln 2,$$
hence $\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} = \ln 2$, which completes the proof.
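A quick numerical check of this result (a sketch, not part of the original text): the partial sums of the alternating harmonic series approach $\ln 2$, though the convergence is slow.

```python
from math import log

partial = 0.0
for n in range(1, 100001):
    partial += (-1) ** (n - 1) / n

print(partial, log(2))   # about 0.693142... vs ln 2 = 0.693147...
```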
Theorem. Consider a power series $\sum_{n \ge 0} a_n x^n$, $r > 0$ the corresponding radius of convergence, and $s : (-r, r) \to \mathbb{R}$
the sum function. The following relations hold:
$$a_n = \frac{s^{(n)}(0)}{n!}, \qquad s(x) = \sum_{n=0}^{\infty} \frac{s^{(n)}(0)}{n!} x^n, \quad \text{for all } x \in (-r, r).$$
0.2 Taylor expansion
Definition. Consider a function $f : (\alpha, \beta) \to \mathbb{R}$, indefinitely differentiable on $(\alpha, \beta)$. The power series
$$\sum_{n \ge 0} \frac{f^{(n)}(a)}{n!} (x - a)^n$$
is called the Taylor series of $f$ at $x = a \in (\alpha, \beta)$.
Definition. If this power series is convergent for $x \in (a - \varepsilon, a + \varepsilon) \subset (\alpha, \beta)$ and
$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x - a)^n \quad \text{for all } x \in (a - \varepsilon, a + \varepsilon),$$
then it is called the Taylor expansion of $f$ at $x = a$, or we say $f$ has a Taylor expansion at $x = a$.
Comment. In such a case the function $f$ may be approximated by polynomials.
The finite sums are called Taylor polynomials:
$$T_n(x) \overset{\text{not}}{=} \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!} (x - a)^k = f(a) + \frac{f'(a)}{1!} (x - a) + \frac{f''(a)}{2!} (x - a)^2 + \dots + \frac{f^{(n)}(a)}{n!} (x - a)^n.$$
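As an illustration of this definition, here is a small hedged Python sketch (helper names are mine). It evaluates $T_n(x)$ from a list of derivative values $f^{(k)}(a)$; as sample input it uses $f(x) = 1/(1-x)$ at $a = 0$, whose derivatives $f^{(k)}(0) = k!$ (an assumption stated here, not taken from the text) reproduce the geometric series discussed above.

```python
from math import factorial

def taylor_polynomial(derivs, a, x):
    """T_n(x) = sum_{k=0}^{n} f^(k)(a)/k! * (x - a)^k, given derivs = [f(a), f'(a), ..., f^(n)(a)]."""
    return sum(d / factorial(k) * (x - a) ** k for k, d in enumerate(derivs))

# f(x) = 1/(1 - x) at a = 0, with f^(k)(0) = k!  (assumption stated in the lead-in)
x = 0.4
for n in (2, 5, 10, 20):
    derivs = [factorial(k) for k in range(n + 1)]
    print(n, taylor_polynomial(derivs, 0.0, x), 1 / (1 - x))
# T_n(0.4) climbs toward 1/(1 - 0.4) = 1.666... as n grows.
```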
Theorem. Let $f : (\alpha, \beta) \to \mathbb{R}$ be indefinitely differentiable on $(\alpha, \beta)$, and assume that all derivatives are uniformly
bounded in a neighbourhood of $a \in (\alpha, \beta)$, that is,
$$\exists \, M > 0 \text{ such that } \left| f^{(n)}(x) \right| \le M \text{ for all } x \in (a - \varepsilon, a + \varepsilon), \; n \ge 0.$$
Then the function $f$ has a Taylor expansion at $x = a$:
$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x - a)^n \quad \text{for all } x \in (a - \varepsilon, a + \varepsilon).$$
Example. $f(x) = \cos x$ has a Taylor expansion at $x = 0$, because
$$f'(x) = -\sin x, \quad f''(x) = -\cos x, \; \dots$$
$$f^{(n)}(x) = \begin{cases} (-1)^{k+1} \sin x, & n = 2k + 1 \\ (-1)^{k} \cos x, & n = 2k \end{cases}$$
and these derivatives are all bounded:
$$\left| f^{(n)}(x) \right| \le 1 \quad \text{for all } x \in \mathbb{R}, \; n \ge 0.$$
Consequently $\cos$ has a Taylor expansion at $x = 0$ and
$$\cos x = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} (x - 0)^n \quad \text{for all } x \in \mathbb{R},$$
$$\cos x = \sum_{k=0}^{\infty} \left[ \frac{(-1)^k \cos(0)}{(2k)!} x^{2k} + \frac{(-1)^{k+1} \sin(0)}{(2k+1)!} x^{2k+1} \right],$$
$$\cos x = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!} x^{2k} \quad \text{for all } x \in \mathbb{R}.$$
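A brief numerical check of this expansion (a sketch, not from the text), comparing truncated sums against Python's math.cos:

```python
from math import cos, factorial

def cos_partial(x, terms):
    """Sum of the first `terms` terms of sum_k (-1)^k x^(2k) / (2k)!."""
    return sum((-1) ** k * x ** (2 * k) / factorial(2 * k) for k in range(terms))

for x in (0.5, 2.0, 5.0):
    print(x, cos_partial(x, 20), cos(x))
# Even 20 terms match math.cos to many digits, for small and moderately large x alike.
```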

There are many simple examples for which the previous theorem does not apply.
On the other hand, it may be quite hard (if not impossible) to compute all derivatives $f^{(n)}(x)$.
Example A. $f(x) = \ln(1 + x)$, for which it is easy to compute all derivatives, so we get the Taylor series:
$$f'(x) = \frac{1}{1 + x}, \quad f''(x) = \frac{-1}{(1 + x)^2}, \quad f^{(3)}(x) = \frac{(-1)(-2)}{(1 + x)^3}, \; \dots$$
$$f^{(n)}(x) = \frac{(-1)^{n-1} (n - 1)!}{(1 + x)^n}$$
And we get the Taylor series at $x = 0$ (the $n = 0$ term vanishes since $f(0) = \ln 1 = 0$):
$$\sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} x^n = \sum_{n=1}^{\infty} \frac{(-1)^{n-1} (n - 1)!}{n!} x^n = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} x^n.$$
But the derivatives are not bounded in a neighborhood of zero, since at $x = 0$ we have
$$f^{(n)}(0) = \frac{(-1)^{n-1} (n - 1)!}{(1 + 0)^n} = (-1)^{n-1} (n - 1)!$$
Therefore the previous theorem does not apply to give the Taylor expansion.
In other words, in this case we can compute the Taylor series, but we do not know that its sum is $f(x)$, that is, that the
Taylor series is convergent to $f(x)$.
Example B. The function $f : \mathbb{R} \to \mathbb{R}$ defined as
$$f(x) = \begin{cases} e^{-1/x} & \text{for } x \in (0, +\infty) \\ 0 & \text{for } x \in (-\infty, 0] \end{cases}$$
i) is indefinitely differentiable on $\mathbb{R}$;
ii) all derivatives at zero are null: $f^{(n)}(0) = 0$, for all $n \ge 1$;
iii) therefore $f$ has a Taylor series at $x = 0$, the null series;
iv) but $f$ has no Taylor expansion at $x = 0$, since $f(x) > 0$ for $x > 0$.
In other words, in this case we know the Taylor series, but its sum is not $f(x)$: the Taylor series converges (to the zero
function), yet not to $f(x)$.
What can we do in a case such as Example A?
We use the last theorem about power series and the following remark.
Remark. Consider a power series $\sum_{n \ge 0} a_n x^n$, $r > 0$ the corresponding radius of convergence, and $s : (-r, r) \to \mathbb{R}$
the sum function.
According to the last theorem about power series we have
$$a_n = \frac{s^{(n)}(0)}{n!}.$$
Consequently the sum function $s$ has a Taylor expansion at $x = 0$, since we have
$$s(x) = \sum_{n=0}^{\infty} a_n x^n = \sum_{n=0}^{\infty} \frac{s^{(n)}(0)}{n!} x^n, \quad \text{for all } x \in (-r, r),$$
and the corresponding Taylor series is the power series itself.
Notice that we also get the interval $(-r, r)$ on which the Taylor expansion holds
(the "convergence" or "expansion" interval).
Consequence. Consider a function $f : (\alpha, \beta) \to \mathbb{R}$, indefinitely differentiable on $(\alpha, \beta)$. If we prove that the function
$f$ is the sum of some power series centered at $x_0 \in (\alpha, \beta)$ (with some radius of convergence $r > 0$), namely
$$f(x) = s(x) = \sum_{n=0}^{\infty} a_n (x - x_0)^n, \quad \text{for all } |x - x_0| < r,$$
then clearly the function $f$ has a Taylor expansion at $x = x_0$.
In other words, the Taylor expansion is unique, so it does not matter how we obtain it, and we do not need the derivatives
explicitly, but just the coefficients of the Taylor expansion.
Example. Find the Taylor expansion for $f(x) = \ln(1 + x)$ at $x = 0$.
We cannot apply the previous theorem, since the derivatives are not bounded in a neighborhood of $x = 0$:
$$f'(x) = \frac{1}{1 + x}, \quad f^{(2)}(x) = \frac{-1}{(1 + x)^2}, \quad f^{(3)}(x) = \frac{(-1)(-2)}{(1 + x)^3}, \; \dots$$
$$f^{(n)}(x) = \frac{(-1)(-2) \cdots (-n + 1)}{(1 + x)^n}.$$
Now clearly the derivatives at $x = 0$ are not uniformly bounded, since
$$\left| f^{(n)}(0) \right| = \left| \frac{(-1)^{n-1} (n - 1)!}{(1 + 0)^n} \right| = (n - 1)!$$
However, we can find the Taylor expansion for the first derivative by using the geometric series:
$$\frac{1}{1 + x} = \sum_{n=0}^{\infty} (-x)^n = \sum_{n=0}^{\infty} (-1)^n x^n \quad \text{for all } x \in (-1, 1),$$
and by integrating we get the Taylor expansion for $f(x) = \ln(1 + x)$:
$$f(x) = \ln(1 + x) = \int \frac{1}{1 + x}\, dx = \int \left( \sum_{n=0}^{\infty} (-1)^n x^n \right) dx = \sum_{n=0}^{\infty} (-1)^n \frac{x^{n+1}}{n + 1} + C = C + \frac{x}{1} - \frac{x^2}{2} + \dots$$
Now for $x = 0$ we get
$$0 = \ln(1 + 0) = f(0) = \sum_{n=0}^{\infty} (-1)^n \frac{0^{n+1}}{n + 1} + C = 0 + C = C.$$
It follows that $C = 0$ and the Taylor expansion at $x = 0$ is
$$f(x) = \ln(1 + x) = \sum_{n=0}^{\infty} (-1)^n \frac{x^{n+1}}{n + 1} \quad \text{for all } x \in (-1, 1).$$
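A short numerical check of this expansion (a sketch, not from the text), comparing the truncated series with Python's math.log:

```python
from math import log

def log1p_series(x, terms):
    """Truncated series sum_{n=0}^{terms-1} (-1)^n x^(n+1) / (n+1), valid for |x| < 1."""
    return sum((-1) ** n * x ** (n + 1) / (n + 1) for n in range(terms))

for x in (0.5, -0.5, 0.9):
    print(x, log1p_series(x, 200), log(1 + x))
# Inside (-1, 1) the truncated series agrees with ln(1 + x); near the endpoints it converges more slowly.
```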

Example.
Find the Taylor expansion for $f(x) = \operatorname{arctg} x$ at $x = 0$.
The first derivative is
$$f'(x) = (\operatorname{arctg} x)' = \frac{1}{1 + x^2}.$$
The second derivative is
$$f''(x) = \left( \frac{1}{1 + x^2} \right)' = \frac{-2x}{(1 + x^2)^2}.$$
There is little hope that we may easily compute the derivatives of higher order.
On the other hand, the first derivative looks like the sum of a geometric series; namely, in
$$\sum_{n=0}^{\infty} x^n = \frac{1}{1 - x} \quad \text{for all } |x| < 1,$$
replace $x$ by $-x^2$ and we get
$$f'(x) = (\operatorname{arctg} x)' = \frac{1}{1 + x^2} = \sum_{n=0}^{\infty} (-x^2)^n = \sum_{n=0}^{\infty} (-1)^n x^{2n} \quad \text{for all } x \in (-1, 1).$$
Therefore by integrating we get
$$f(x) = \int f'(x)\, dx = \int (\operatorname{arctg} x)'\, dx = \int \left( \sum_{n=0}^{\infty} (-x^2)^n \right) dx = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{2n + 1} + C = C + \frac{1}{1} x - \frac{1}{3} x^3 + \frac{1}{5} x^5 - \dots$$
Now for $x = 0$ we have
$$0 = \operatorname{arctg}(0) = f(0) = \sum_{n=0}^{\infty} (-1)^n \frac{0^{2n+1}}{2n + 1} + C = 0 + C = C.$$
Consequently $C = 0$ and the Taylor expansion for $\operatorname{arctg}$ at $x = 0$ is
$$f(x) = \operatorname{arctg} x = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{2n + 1} \quad \text{for all } x \in (-1, 1).$$
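And a final numerical check (a sketch, not from the text), comparing the truncated arctangent series with Python's math.atan inside $(-1, 1)$:

```python
from math import atan

def arctan_series(x, terms):
    """Truncated series sum_{n=0}^{terms-1} (-1)^n x^(2n+1) / (2n+1), valid for |x| < 1."""
    return sum((-1) ** n * x ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

for x in (0.2, -0.5, 0.9):
    print(x, arctan_series(x, 100), atan(x))
# The truncated series matches math.atan for |x| < 1; outside that interval the series diverges.
```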
