
3.1. Sequences

So far we have introduced sets as well as the number systems that we will use in this text. Next, we will study sequences of numbers. Sequences are, basically, countably many numbers arranged in an order that may or may not exhibit certain patterns. Here is the formal definition of a sequence: Definition 3.1.1: Sequence A sequence of real numbers is a function f: N → R. In other words, a sequence can be written as f(1), f(2), f(3), ... Usually, we will denote such a sequence by the symbol {aj}, where aj = f(j).

For example, the sequence 1, 1/2, 1/3, 1/4, 1/5, ... is written as {1/j}. Keep in mind that despite the notation, a sequence can be thought of as an ordinary function. In many cases, however, that may not be the most expedient way to look at the situation. It is often easier to simply look at a sequence as a 'list' of numbers that may or may not exhibit a certain pattern. We now want to describe what the long-term behavior, or pattern, of a sequence is, if any. Definition 3.1.2: Convergence A sequence {aj} of real (or complex) numbers is said to converge to a real (or complex) number c if for every ε > 0 there is an integer N > 0 such that if j > N then | aj - c | < ε. The number c is called the limit of the sequence {aj}, and we sometimes write aj → c.

If a sequence {aj} does not converge, then we say that it diverges.

Example 3.1.3:

The sequence … converges to zero. Prove it.
The sequence … does not converge. Prove it.
The sequence … converges to zero. Prove it.

Convergent sequences, in other words, exhibit the behavior that they get closer and closer to a particular number. Note, however, that a divergent sequence can also have a regular pattern, as in the second example above. But it is convergent sequences that will be particularly useful to us right now.
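Definition 3.1.2 can be illustrated numerically. The sketch below (not part of the text; the helper name first_valid_N is my own) finds, for the sequence aj = 1/j and a given ε, an N beyond which every term is within ε of the limit 0:

```python
# Numerical illustration of Definition 3.1.2 for a_j = 1/j and limit c = 0:
# find an N such that |a_j - 0| < eps for every j > N.
def first_valid_N(eps):
    """Smallest convenient N with |1/j| < eps for all j > N."""
    # |1/j| < eps  <=>  j > 1/eps, so N = floor(1/eps) works.
    N = int(1 / eps)
    # spot-check the next thousand terms past N
    assert all(abs(1.0 / j) < eps for j in range(N + 1, N + 1000))
    return N

print(first_valid_N(0.125))  # 8: every term 1/9, 1/10, ... is below 0.125
```

Note that N depends on ε: the smaller the ε, the further out one has to go, which is exactly what the definition demands.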

We are going to establish several properties of convergent sequences, most of which are probably familiar to you. Many proofs will use an 'ε-argument', as in the proof of the next result. This type of argument is not easy to get used to, but it will appear again and again, so you should try to become as familiar with it as you can. Proposition 3.1.4: Convergent Sequences are Bounded Let {aj} be a convergent sequence. Then the sequence is bounded, and the limit is unique.

Proof:
Let's prove uniqueness first. Suppose the sequence has two limits, a and a'. Take any ε > 0. Then there is an integer N such that | aj - a | < ε if j > N. Also, there is another integer N' such that | aj - a' | < ε if j > N'. Then, by the triangle inequality: | a - a' | = | a - aj + aj - a' | ≤ | aj - a | + | aj - a' | < ε + ε = 2ε if j > max{N, N'}. Hence | a - a' | < 2ε for every ε > 0. But that implies that a = a', so the limit is indeed unique. Next, we prove boundedness. Since the sequence converges, we can take, for example, ε = 1. Then | aj - a | < 1 if j > N. Fix that number N. We have that | aj | ≤ | aj - a | + | a | < 1 + | a | for all j > N. Define M = max{| a1 |, | a2 |, ..., | aN |, 1 + | a |}. Then | aj | ≤ M for all j, i.e. the sequence is bounded as required.

Example 3.1.5: The Fibonacci numbers are recursively defined as x1 = 1, x2 = 1, and xn = xn-1 + xn-2 for all n > 2. Show that the sequence of Fibonacci numbers {1, 1, 2, 3, 5, ...} does not converge. Convergent sequences can be manipulated on a term-by-term basis, just as one would expect: Proposition 3.1.6: Algebra on Convergent Sequences

Suppose {an} and {bn} are converging to a and b, respectively. Then

1. Their sum is convergent to a + b, and the sequences can be added term by term.
2. Their product is convergent to a * b, and the sequences can be multiplied term by term.
3. Their quotient is convergent to a / b, provided that b ≠ 0, and the sequences can be divided term by term (if the denominators are not zero).
4. If an ≤ bn for all n, then a ≤ b.

Proof:
The proofs of these statements involve the triangle inequality, as well as an occasional trick of adding and subtracting zero in a suitable form. A proof of the first statement, for example, goes as follows. Take any ε > 0. We know that an → a, which implies that there exists an integer N1 such that

| an - a | < ε / 2 if n > N1. Similarly, since bn → b there exists another integer N2 such that | bn - b | < ε / 2 if n > N2. But then we know that | (an + bn) - (a + b) | = | (an - a) + (bn - b) | ≤ | an - a | + | bn - b | < ε / 2 + ε / 2 = ε if n > max(N1, N2), which proves the first statement. Proving the second statement is similar, with some added tricks. We know that {bn} converges, therefore there exists an integer N1 such that | bn | < | b | + 1 if n > N1. We also know that we can find integers N2 and N3 so that | an - a | < ε / (| b | + 1) if n > N2, and | bn - b | < ε / (| a | + 1) if n > N3, because | a | and | b | are fixed numbers. But then we have: | an bn - a b | = | an bn - a bn + a bn - a b | = | bn (an - a) + a (bn - b) | ≤ | bn | | an - a | + | a | | bn - b | < (| b | + 1) ε / (| b | + 1) + | a | ε / (| a | + 1) < 2ε if n > max(N1, N2, N3), which proves the second statement. The proof of the third statement is similar, so we will leave it as an exercise. The last statement does require a new trick: we will use a proof by contradiction to get that result. Assume that an ≤ bn for all n, but a > b.

We now need to work out the contradiction: the idea is that since a > b there is some number c such that b < c < a.
<----------[b]-------[a]--------> <----------[b]--[c]--[a]-------->

Since an converges to a, we can make the terms of the sequence fall between c and a, and the terms of bn fall between b and c. But then we no longer have that an ≤ bn, which is our contradiction. Now let's formalize this idea: Let c = (a + b)/2. Then clearly b < c < a (verify!). Choose N1 such that bn < c if n > N1. That works because b < c. Also choose N2 such that an > c if n > N2. But now we have that bn < c < an for n > max(N1, N2). That is a contradiction to the original assumption that an ≤ bn for all n. Hence it can not be true that a > b, so the statement is indeed proved.

This theorem states exactly what you would expect to be true. The proof of it employs the standard trick of 'adding zero' and using the triangle inequality. Try to prove it on your own before looking it up. Note that the fourth statement is no longer true for strict inequalities. In other words, there are convergent sequences with an < bn for all n whose limits nevertheless satisfy only a ≤ b, not a < b. Can you find an example? While we now know how to deal with convergent sequences, we still need an easy criterion that will tell us whether a sequence converges. The next proposition gives reasonably easy conditions, but will not tell us the actual limit of the convergent sequence. First, recall the following definitions: Definition 3.1.7: Monotonicity A sequence {aj} is called monotone increasing if aj+1 ≥ aj for all j.

A sequence {aj} is called monotone decreasing if aj ≥ aj+1 for all j. In other words, if every next member of a sequence is larger than the previous one, the sequence is growing, or monotone increasing. If the next element is smaller than each previous one, the sequence is decreasing. While this condition is easy to understand, there are equivalent conditions that are often easier to check:

Monotone increasing:
1. aj+1 ≥ aj
2. aj+1 - aj ≥ 0
3. aj+1 / aj ≥ 1, if aj > 0

Monotone decreasing:
1. aj+1 ≤ aj
2. aj+1 - aj ≤ 0
3. aj+1 / aj ≤ 1, if aj > 0
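The equivalent conditions above are easy to test mechanically on the first few terms of a sequence. A small sketch (the function names are my own, not from the text):

```python
# Test two of the equivalent monotonicity conditions from the text
# on finite prefixes of a sequence.
def is_monotone_increasing(terms):
    # condition 2: a_{j+1} - a_j >= 0 for all j
    return all(b - a >= 0 for a, b in zip(terms, terms[1:]))

def is_monotone_decreasing(terms):
    # condition 3: a_{j+1} / a_j <= 1, valid only when all a_j > 0
    assert all(t > 0 for t in terms)
    return all(b / a <= 1 for a, b in zip(terms, terms[1:]))

fib = [1, 1, 2, 3, 5, 8, 13]           # Fibonacci: monotone increasing
recip = [1 / j for j in range(1, 8)]   # 1/j: monotone decreasing
print(is_monotone_increasing(fib))     # True
print(is_monotone_decreasing(recip))   # True
```

Of course a finite check never proves monotonicity; it only suggests which of the conditions might be provable.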

Examples 3.1.8:

Is the sequence … monotone increasing or decreasing?
Is the sequence … monotone increasing or decreasing?

Is it true that a bounded sequence converges? How about monotone increasing sequences? Here is a very useful theorem to establish convergence of a given sequence (without, however, revealing the limit of the sequence). First, we have to apply our concepts of supremum and infimum to sequences:

If a sequence {xk} is bounded above, then c = sup(xk) is finite. Moreover, given any ε > 0, there exists at least one integer k such that xk > c - ε.

If a sequence {xk} is bounded below, then c = inf(xk) is finite. Moreover, given any ε > 0, there exists at least one integer k such that xk < c + ε.

Proposition 3.1.9: Monotone Sequences If {xk} is a monotone increasing sequence that is bounded above, then the sequence must converge.

If {xk} is a monotone decreasing sequence that is bounded below, then the sequence must converge.


Proof:
Let's look at the first statement, i.e. the sequence is monotone increasing. Take any ε > 0 and let c = sup(xk). Then c is finite, and there exists at least one integer N such that xN > c - ε. Since the sequence is monotone increasing, we then have that c ≥ xk ≥ xN > c - ε for all k > N, or | c - xk | < ε for all k > N. But that means, by definition, that the sequence converges to c. The proof for the infimum is very similar, and is left as an exercise.

Using this result it is often easy to prove convergence of a sequence just by showing that it is bounded and monotone. The downside is that this method will not reveal the actual limit, just prove that there is one. Examples 3.1.10:

Prove that the sequences … and … converge. What is their limit?
Define x1 = b and let xn = xn-1 / 2 for all n > 1. Prove that this sequence converges for any number b. What is the limit?
Let a > 0 and x0 > 0 and define the recursive sequence

xn+1 = (1/2) (xn + a / xn). Show that this sequence converges to the square root of a regardless of the starting point x0 > 0. There is one more simple but useful theorem that can be used to find a limit if comparable limits are known. The theorem states that if a sequence is pinched in between two convergent sequences that converge to the same limit, then the sequence in between must also converge to that limit. Theorem 3.1.11: The Pinching Theorem Suppose {aj} and {cj} are two convergent sequences such that lim aj = lim cj = L. If a sequence {bj} has the property that aj ≤ bj ≤ cj for all j, then the sequence {bj} converges and lim bj = L.

Proof:
The statement of the theorem is easiest to memorize by looking at a diagram:

All bj are between aj and cj, and since aj and cj converge to the same limit L, the bj have no choice but to also converge to L. Of course this is not a formal proof, so here we go: we want to show that given any ε > 0 there exists an integer N such that | bj - L | < ε if j > N. We know that aj ≤ bj ≤ cj. Subtracting L from these inequalities gives: aj - L ≤ bj - L ≤ cj - L. But there exists an integer N1 such that | aj - L | < ε, or equivalently -ε < aj - L < ε, and another integer N2 such that | cj - L | < ε, or equivalently -ε < cj - L < ε, if j > max(N1, N2). Taking these inequalities together we get: -ε < aj - L ≤ bj - L ≤ cj - L < ε. But that means that -ε < bj - L < ε, or equivalently | bj - L | < ε, as long as j > max(N1, N2). But that means that {bj} converges to L, as required.
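The recursive square-root sequence from Examples 3.1.10 can also be explored numerically. A sketch (the helper name heron is my own; the iteration is the one stated in the example):

```python
# The recursive sequence x_{n+1} = (x_n + a/x_n)/2 from Examples 3.1.10,
# iterated numerically; it settles on sqrt(a) for any start x_0 > 0.
def heron(a, x0, steps=30):
    x = x0
    for _ in range(steps):
        x = 0.5 * (x + a / x)
    return x

print(abs(heron(2.0, 1.0) - 2.0 ** 0.5) < 1e-12)   # True
print(abs(heron(2.0, 50.0) - 2.0 ** 0.5) < 1e-12)  # True: the start is irrelevant
```

The experiment suggests what the exercise asks you to prove: the sequence is eventually monotone decreasing and bounded below by sqrt(a), so Proposition 3.1.9 applies.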

Example 3.1.12:

Show that the sequences sin(n) / n and cos(n) / n both converge to zero.
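A quick numerical look at the first of these (not a proof, just an illustration of the pinching idea): every term sin(n)/n is trapped between -1/n and 1/n, and both bounds shrink to zero.

```python
# Pinching in action: -1/n <= sin(n)/n <= 1/n, and both bounds go to 0.
import math

ns = [10, 100, 1000, 10000]
for n in ns:
    b = math.sin(n) / n
    assert -1.0 / n <= b <= 1.0 / n   # b is trapped between the two bounds
print(max(abs(math.sin(n) / n) for n in ns) < 0.1)  # True: the terms shrink
```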

3.2. Cauchy Sequences


What is slightly annoying for the mathematician (in theory and in practice) is that we refer to the limit of a sequence in the definition of a convergent sequence, when that limit may not be known at all. In fact, more often than not it is quite hard to determine the actual limit of a sequence. We would prefer to have a definition which only involves the known elements of the particular sequence in question and does not rely on the unknown limit. Therefore, we will introduce the following definition: Definition 3.2.1: Cauchy Sequence Let {aj} be a sequence of real (or complex) numbers. We say that the sequence satisfies the Cauchy criterion (or simply is Cauchy) if for each ε > 0 there is an integer N > 0 such that if j, k > N then | aj - ak | < ε. This definition states precisely what it means for the elements of a sequence to get closer together, and to stay close together. Of course, we want to know what the relation between Cauchy sequences and convergent sequences is. Theorem 3.2.2: Completeness Theorem in R Let {aj} be a Cauchy sequence of real numbers. Then the sequence is bounded.

Let {aj} be a sequence of real numbers. The sequence is Cauchy if and only if it converges to some limit a.

Proof:
The proof of the first statement follows closely the proof of the corresponding result for convergent sequences. Can you do it? To prove the second, more important statement, we have to prove two parts: First, assume that the sequence converges to some limit a. Take any ε > 0. There exists an integer N such that if j > N then | aj - a | < ε / 2. Hence:

| aj - ak | ≤ | aj - a | + | a - ak | < 2 ε / 2 = ε if j, k > N. Thus, the sequence is Cauchy. Second, assume that the sequence is Cauchy (this direction is much harder). Define the set S = {x ∈ R: x < aj for all j except for finitely many}. Since the sequence is bounded (by part one of the theorem), say by a constant M, we know that every term in the sequence is bigger than -M. Therefore -M is contained in S. Also, every term of the sequence is smaller than M, so that S is bounded above by M. Hence, S is a nonempty, bounded subset of the real numbers, and by the least upper bound property it has a well-defined, unique least upper bound. Let a = sup(S). We will now show that this a is indeed the limit of the sequence. Take any ε > 0, and choose an integer N > 0 such that | aj - ak | < ε / 2 if j, k > N. In particular, we have: | aj - aN+1 | < ε / 2 if j > N, or equivalently -ε / 2 < aj - aN+1 < ε / 2. Hence we have: aj > aN+1 - ε / 2 for all j > N. Thus, aN+1 - ε / 2 is in the set S, and we have that a ≥ aN+1 - ε / 2. It also follows that aj < aN+1 + ε / 2 for all j > N. Thus, aN+1 + ε / 2 is not in the set S, and therefore a ≤ aN+1 + ε / 2. But now, combining the last several lines, we have that: | a - aN+1 | ≤ ε / 2, and together with the above that results in the following: | a - aj | ≤ | a - aN+1 | + | aN+1 - aj | < 2 ε / 2 = ε for any j > N.

Thus, by considering Cauchy sequences instead of convergent sequences we do not need to refer to the unknown limit of a sequence, and in effect both concepts are the same. Note that the Completeness Theorem is not true if we consider only rational numbers. For example, the sequence 1, 1.4, 1.41, 1.414, ... (converging to the square root of 2) is Cauchy, but does not converge to a rational number. Therefore, the rational numbers are not complete, in the sense that not every Cauchy sequence of rational numbers converges to a rational number. Hence, the proof has to use the property which distinguishes the reals from the rationals: the least upper bound property.
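The Cauchy criterion can be watched in action on exactly that sequence of decimal truncations of the square root of 2: the terms cluster together even though no rational limit is in sight. A sketch (the helper name is my own; indices are 0-based here):

```python
# The Cauchy criterion on the decimal truncations of sqrt(2).
seq = [1, 1.4, 1.41, 1.414, 1.4142, 1.41421, 1.414213]

def cauchy_tail_spread(terms, N):
    """max |a_j - a_k| over all j, k > N (0-based indices)."""
    tail = terms[N + 1:]
    return max(tail) - min(tail)

print(cauchy_tail_spread(seq, 2) < 0.01)    # True
print(cauchy_tail_spread(seq, 4) < 0.0001)  # True: the tail keeps tightening
```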

3.3. Subsequences


So far we have learned the basic definition of a sequence (a function from the natural numbers to the reals), the concept of convergence, and we have extended that concept to one which does not presuppose the unknown limit of a sequence (Cauchy sequence). Unfortunately, however, not all sequences converge. We will now introduce some techniques for dealing with those sequences. The first is to change the sequence into a convergent one (extract subsequences) and the second is to modify our concept of limit (lim sup and lim inf). Definition 3.3.1: Subsequence Let {aj} be a sequence. When we extract from this sequence only certain elements and drop the remaining ones we obtain a new sequence consisting of an infinite subset of the original sequence. That sequence is called a subsequence and denoted by {anj}. One can extract infinitely many subsequences from any given sequence. Examples 3.3.2:

Take the sequence … , which we have proved does not converge. Extract every other member, starting with the first. Does this sequence converge? What if we extract every other member, starting with the second? What do you get in this case?

Take the sequence … . Extract three different subsequences of your choice. Do these subsequences converge? If so, to what limit? The last example is an indication of a general result: Proposition 3.3.3: Subsequences from Convergent Sequence

If {aj} is a convergent sequence, then every subsequence of that sequence converges to the same limit.

If {aj} is a sequence such that every possible subsequence extracted from that sequence converges to the same limit, then the original sequence also converges to that limit.

Proof:

The first statement is easy to prove: Suppose the original sequence {aj} converges to some limit L. Take any increasing sequence nj of natural numbers and consider the corresponding subsequence {anj} of the original sequence. For any ε > 0 there exists an integer N such that | an - L | < ε as long as n > N. But then we also have the same inequality for the subsequence, as long as nj > N. Therefore any subsequence must converge to the same limit L. The second statement is just as easy. Suppose {aj} is a sequence such that every subsequence extracted from it converges to the same limit L. Now take any ε > 0. Extract from the original sequence every other element, starting with the first. The resulting subsequence converges to L by assumption, i.e. there exists an integer N1 such that | aj - L | < ε where j is odd and j > N1. Now extract every other element, starting with the second. The resulting subsequence again converges to L, so that | aj - L | < ε where j is even and j > N2. But now take any j, even or odd, and assume that j > max(N1, N2):

if j is odd, then | aj - L | < ε because aj is part of the first subsequence
if j is even, then | aj - L | < ε because aj is part of the second subsequence

Hence, the original sequence must also converge to L. Note that we can see from the proof that if the "even" and "odd" subsequences of a sequence converge to the same limit L, then the full sequence must also converge to L. It is not enough to say that the "even" and "odd" subsequences simply converge; they must converge to the same limit.
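The even/odd observation can be sketched numerically. For aj = (-1)^j / j both subsequences head to 0, so the full sequence converges to 0; for aj = (-1)^j the two subsequences converge to different limits (1 and -1), so that sequence diverges.

```python
# Even and odd subsequences of a_j = (-1)^j / j both approach 0.
def a(j):
    return (-1) ** j / j

odd = [a(j) for j in range(1, 2000, 2)]   # every other element, first, third, ...
even = [a(j) for j in range(2, 2000, 2)]  # ... and second, fourth, ...
print(abs(odd[-1]) < 1e-3 and abs(even[-1]) < 1e-3)  # True: both tails near 0
```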

The next statement is probably one of the most fundamental results of basic real analysis, and generalizes the above proposition. It also explains why subsequences can be useful, even if the original sequence does not converge. Theorem 3.3.4: Bolzano-Weierstrass Let {aj} be a sequence of real numbers that is bounded. Then there exists a subsequence {anj} that converges.

Proof:
Since the sequence is bounded, there exists a number M such that | aj | < M for all j. Then:

either [-M, 0] or [0, M] contains infinitely many elements of the sequence. Say that [0, M] does. Choose one of them, and call it an1. Either [0, M/2] or [M/2, M] contains infinitely many elements of the (original) sequence. Say it is [0, M/2]. Choose one of them, with index n2 > n1, and call it an2. Either [0, M/4] or [M/4, M/2] contains infinitely many elements of the (original) sequence. This time, say it is [M/4, M/2]. Pick one of them, with index n3 > n2, and call it an3. Keep on going in this way, halving each interval from the previous step at the next step, and choosing one element from that new interval. Here is what we get: for j, k > m the elements anj and ank both lie in the interval chosen at step m + 1, whose length is M / 2^m, so that | anj - ank | < M / 2^m. Since this bound can be made smaller than any ε by choosing m large enough, the subsequence {anj} is Cauchy, and therefore it converges by the Completeness Theorem.
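The bisection argument can be imitated on a computer. The sketch below approximates "infinitely many" by "many of the first 100000 terms" of sin(n), and at each step keeps the half-interval that still holds more terms; it also ignores the requirement that the picked indices increase, so it only illustrates the shrinking intervals, not a genuine subsequence.

```python
# Sketch of the Bolzano-Weierstrass bisection on the bounded sequence sin(n).
import math

terms = [math.sin(n) for n in range(1, 100000)]
lo, hi = -1.0, 1.0
picked = []
for _ in range(20):
    mid = (lo + hi) / 2
    left = [t for t in terms if lo <= t <= mid]
    right = [t for t in terms if mid <= t <= hi]
    # keep whichever half still holds "infinitely many" elements
    if len(left) >= len(right):
        hi, terms = mid, left
    else:
        lo, terms = mid, right
    picked.append(terms[0])     # choose one element from the surviving half
print(hi - lo < 1e-5)  # True: 20 halvings squeeze the interval to a point
```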

Example 3.3.5:

Does … converge? Does there exist a convergent subsequence? What is that subsequence? In fact, the following is true: given any number L between -1 and 1, it is possible to extract a subsequence from the sequence … that converges to L. This is difficult to prove.

Next, we will broaden our concept of limits.

3.4. Lim Sup and Lim Inf


When dealing with sequences there are two choices:

the sequence converges
the sequence diverges

While we know how to deal with convergent sequences, we don't know much about divergent sequences. One possibility is to try to extract a convergent subsequence, as described in the last section. In particular, the Bolzano-Weierstrass theorem can be useful in case the original sequence is bounded. However, we often would like to discuss the limit of a sequence without having to spend much time investigating convergence, or thinking about which subsequence to extract. Therefore, we need to broaden our concept of limits to allow for the possibility of divergent sequences.

Definition 3.4.1: Lim Sup and Lim Inf

Let {aj} be a sequence of real numbers. Define Aj = inf{aj, aj+1, aj+2, ...} and let c = lim (Aj). Then c is called the limit inferior of the sequence {aj}.

Let {aj} be a sequence of real numbers. Define Bj = sup{aj, aj+1, aj+2, ...} and let c = lim (Bj). Then c is called the limit superior of the sequence {aj}. In short, we have:

1. lim inf(aj) = lim(Aj), where Aj = inf{aj, aj+1, aj+2, ...}
2. lim sup(aj) = lim(Bj), where Bj = sup{aj, aj+1, aj+2, ...}

When trying to find lim sup and lim inf for a given sequence, it is best to find the first few Aj's or Bj's, respectively, and then to determine the limit of those. If you try to guess the answer quickly, you might get confused between an ordinary supremum and the lim sup, or the regular infimum and the lim inf. Examples 3.4.2:
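The tail sups Bj and tail infs Aj from the definition can be approximated on a finite prefix of a sequence. A sketch for aj = (-1)^j (1 + 1/j), which hops between values near -1 and +1 (the helper name is my own):

```python
# Approximate lim sup and lim inf by the tail sup B_j and tail inf A_j.
def tail_sup_inf(terms, j):
    tail = terms[j:]
    return max(tail), min(tail)

a = [(-1) ** n * (1 + 1 / n) for n in range(1, 5000)]  # hops near -1 and +1
B_j, A_j = tail_sup_inf(a, 4000)
print(round(B_j, 2), round(A_j, 2))  # 1.0 -1.0: lim sup = 1, lim inf = -1
```

Note that the ordinary sup of this sequence is a1-dependent and larger than the lim sup: early terms contribute to sup and inf but drop out of every far-enough tail.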

What is inf, sup, lim inf and lim sup for … ?
What is inf, sup, lim inf and lim sup for … ?
What is inf, sup, lim inf and lim sup for … ?

While these limits are often somewhat counter-intuitive, they have one very useful property:

Proposition 3.4.3: Lim inf and Lim sup exist The lim sup and lim inf always exist (possibly infinite) for any sequence of real numbers.

Proof:
The sequence Aj = inf{aj, aj+1, aj+2, ...} is monotone increasing (which you should prove yourself). Hence, lim inf exists (possibly positive infinity). The sequence Bj = sup{aj, aj+1, aj+2, ...} is monotone decreasing (which you should prove yourself). Hence, lim sup exists (possibly negative infinity). Here we have to allow for a limit to be positive or negative infinity, which is different from saying that a limit does not exist.

It is important to try to develop a more intuitive understanding of lim sup and lim inf. The next results will attempt to make these concepts somewhat clearer. Proposition 3.4.4: Characterizing lim sup and lim inf Let {aj} be an arbitrary sequence and let c = lim sup(aj) and d = lim inf(aj). Then

1. there is a subsequence converging to c
2. there is a subsequence converging to d
3. d ≤ lim inf(anj) ≤ lim sup(anj) ≤ c for any subsequence {anj}

If c and d are both finite, then: given any ε > 0 there are arbitrarily large j such that aj > c - ε, and arbitrarily large k such that ak < d + ε.


Proof:
First let's assume that c = lim sup(aj) is finite, which implies that the sequence {aj} is bounded. Recall the properties of the sup (and inf) for sequences: if a sequence is bounded above with supremum s, then given any ε > 0 there exists at least one integer k such that ak > s - ε. Now take any ε > 0. Then Bk = sup{ak, ak+1, ...}, so by the above property there exists an integer jk ≥ k such that Bk ≥ ajk > Bk - ε / 2, or equivalently

| ajk - Bk | < ε / 2. We also have by definition that Bk converges to c, so that there exists an integer N such that | Bk - c | < ε / 2 if k > N. But now the subsequence {ajk} is the desired one, because:

| ajk - c | = | ajk - Bk + Bk - c | ≤ | ajk - Bk | + | Bk - c | < ε / 2 + ε / 2 = ε if k > N. Hence, this particular subsequence of {aj} converges to c. The proof to find a subsequence converging to the lim inf is similar and is left as an exercise. Statement (3) is pretty simple to prove: For any sequence we always have that inf{ak, ak+1, ...} ≤ sup{ak, ak+1, ...}. Taking limits on both sides gives lim inf(an) ≤ lim sup(an) for any sequence, so it is true in particular for any subsequence. Next take any subsequence {anj} of {an}. Then: inf{ak, ak+1, ...} ≤ inf{ank, ank+1, ...}, because an infimum over more numbers (on the left side) is less than or equal to an infimum over fewer numbers (on the right side). But then d ≤ lim inf(anj). The proof of the inequality lim sup(anj) ≤ c is similar. Taking all pieces together we have shown that d ≤ lim inf(anj) ≤ lim sup(anj) ≤ c for any subsequence {anj}, as we set out to do.

It remains to show that given any ε > 0 there are arbitrarily large j such that aj > c - ε (as well as the corresponding statement for the lim inf d). But previously we have found a subsequence {ajk} that converges to c, so that there exists an integer N such that | ajk - c | < ε if k > N. That means that -ε < ajk - c < ε, or

c - ε < ajk < c + ε as long as k > N. That of course means that there are arbitrarily large indices, namely those jk for which k > N, with the property that ajk > c - ε, as required. Hence, we have shown the last statement involving the lim sup, and a similar proof works for the lim inf. All our proofs rely on the fact that the lim sup and lim inf are finite. It is not hard to adjust them for infinite values, but we will leave the details as an exercise.

A little more colloquially, we could say:


Aj picks out the greatest lower bound of the truncated sequence {aj, aj+1, aj+2, ...}. Therefore Aj tends to the smallest possible limit of any convergent subsequence. Similarly, Bj picks out the smallest upper bound of the truncated sequences, and hence tends to the greatest possible limit of any convergent subsequence.

Compare this with the similar statement about supremum and infimum. Example 3.4.5: If {aj} is the sequence of all rational numbers in the interval [0, 1], enumerated in any way, find the lim sup and lim inf of that sequence. The final statement relates lim sup and lim inf to our usual concept of limit. Proposition 3.4.6: Lim sup, lim inf, and limit

If a sequence {aj} converges, then lim sup aj = lim inf aj = lim aj. Conversely, if lim sup aj = lim inf aj are both finite, then {aj} converges.

Proof:
Let c = lim sup aj. From before we know that there exists a subsequence of {aj} that converges to c. But since the original sequence converges, every subsequence must converge to the same limit. Hence c = lim sup aj = lim aj. The proof that lim inf aj = lim aj is similar. The converse of this statement can be proved by noting that Aj = inf{aj, aj+1, ...} ≤ aj ≤ sup{aj, aj+1, ...} = Bj. Noting that lim Aj = lim inf(aj) = lim sup(aj) = lim Bj, we can apply the Pinching Theorem to see that the terms in the middle must converge to the same value.

3.5. Special Sequences


In this section we will take a look at some sequences that will appear again and again. You should try to memorize all these sequences and their convergence behavior.

Power Sequence
Exponent Sequence
Root of n Sequence
n-th Root Sequence
Binomial Sequence
Euler's Sequence
Exponential Sequence

Definition 3.5.1: Power Sequence


Power Sequence: The convergence behavior of the power sequence {a^n} depends on the size of the base a:

|a| < 1: the sequence converges to 0
a = 1: the sequence converges to 1 (being constant)
a > 1: the sequence diverges to plus infinity
a ≤ -1: the sequence diverges

Convergent power sequence with a = -9/10. Divergent power sequence with a = 11/10.

Proof:
This seems an obvious statement: if a number is in absolute value less than one, it gets smaller and smaller when raised to higher and higher powers. Proving something 'obvious', however, is often difficult, because it may not be clear how to start. To prove the statement we have to resort to one of the elementary properties of the real number system: the Archimedean principle. Case a > 1: Take any real number K > 0 and define x = a - 1. Since a > 1 we know that x > 0. By the Archimedean principle there exists a positive integer n such that nx > K - 1. Using Bernoulli's inequality for that n we have: a^n = (1 + x)^n ≥ 1 + nx > 1 + (K - 1) = K. But since K was an arbitrary number, this proves that the sequence {a^n} is unbounded. Hence it can not converge. Case 0 < a < 1: Take any ε > 0. Since 0 < a < 1 we know that 1/a > 1, so that by the previous case the increasing sequence {(1/a)^n} is unbounded and we can find an N with (1/a)^n > 1/ε for all n > N.

But then it follows that a^n < ε for all n > N. This proves that the sequence {a^n} converges to zero. Case -1 < a < 0: By the above proof we know that | a^n | = | a |^n converges to zero. But since -| a^n | ≤ a^n ≤ | a^n |, the sequence {a^n} again converges to zero by the Pinching Theorem. Case a < -1: Extract the subsequence {a^(2m)} from the sequence {a^n}. Since a^(2m) = (a^2)^m with a^2 > 1, this subsequence diverges to infinity by the first part of the proof, and therefore the original sequence can not converge either. Case a = 1: This is the constant sequence, so it converges. Case a = -1: We have already proved that the sequence 1, -1, 1, -1, ... does not converge.
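The four cases can be sampled numerically at a single large exponent:

```python
# The cases of the power sequence {a^n}, checked at n = 200.
n = 200
print(abs((-0.9) ** n) < 1e-8)   # True: |a| < 1, the terms die off
print(1.0 ** n == 1.0)           # True: a = 1, constant sequence
print(1.1 ** n > 1e8)            # True: a > 1, unbounded growth
print((-1.0) ** n, (-1.0) ** (n + 1))  # 1.0 -1.0: a = -1 keeps oscillating
```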

Definition 3.5.2: Exponent Sequence


Exponent Sequence: The convergence of the exponent sequence {n^a} depends on the size of the exponent a:

a > 0: the sequence diverges to positive infinity
a = 0: the sequence is constant
a < 0: the sequence converges to 0

Exponent sequence with a = 2. Exponent sequence with a = -2.

Proof:
Write n^a = e^(a ln(n)). Then:

if a > 0 then as n approaches infinity, the function e^(a ln(n)) approaches infinity as well
if a < 0 then as n approaches infinity, the function e^(a ln(n)) approaches zero
if a = 0 then the sequence is the constant sequence, and hence convergent

Is this a good proof? Of course not, because at this stage we know nothing about the exponential or logarithm function. So, we should come up with a better proof. But actually we first need to understand exactly what n^a really means:

If a is an integer, then clearly n^a means to multiply n by itself a times.

If a = p/q is a rational number, then n^(p/q) means to multiply n by itself p times, then take the q-th root. It is unclear, however, what n^a means if a is an irrational number.

One way to define n^a for all a is to resort to the exponential function: n^a = e^(a ln(n)). In that light, the original 'proof' was not bad after all, but of course we now need to know how exactly the exponential function is defined, and what its properties are, before we can continue. As it turns out, the exponential function is not easy to define properly. Generally one can either define it as the inverse function of the ln function, or via a power series. Another way to define n^a for irrational a is to take a sequence of rational numbers rn converging to a and to define n^a as the limit of the sequence {n^rn}. There the problem is to show that this is well-defined, i.e. if there are two such sequences of rational numbers, the resulting limits will be the same. In either case, we will base our proof on the simple fact: if p > 0 and x > y > 0, then x^p > y^p, which seems clear enough and could be formally proved as soon as the exponential function is introduced to properly define x^p. Now take any positive number K and let n be an integer bigger than K^(1/a). Since n > K^(1/a) we can raise both sides to the a-th power to get n^a > K, which means, since K was arbitrary, that the sequence {n^a} is unbounded. The case a = 0 is clear (since n^0 = 1). The second case of a < 0 is related to the first by taking reciprocals (details are left as an exercise). Since we have already proved the first case, we are done.

Definition 3.5.3: Root of n Sequence


Root of n Sequence: The sequence {n^(1/n)} converges to 1.

Root of n sequence

Proof:

If n > 1, then n^(1/n) > 1. Therefore, we can find numbers an > 0 such that n^(1/n) = 1 + an for each n > 1. Hence, we can raise both sides to the n-th power and use the Binomial theorem: n = (1 + an)^n = 1 + n an + (n(n-1)/2) an^2 + ... In particular, since all terms are positive, we obtain n ≥ (n(n-1)/2) an^2. Solving this for an we obtain 0 ≤ an ≤ sqrt(2/(n-1)). But that implies that an converges to zero as n approaches infinity, which means, by the definition of an, that n^(1/n) converges to 1 as n goes to infinity. That is what we wanted to prove.

Definition 3.5.4: n-th Root Sequence


n-th Root Sequence: The sequence {a^(1/n)} converges to 1 for any a > 0.

n-th Root sequence with a = 3

Proof:
Case a > 1: If a > 1, then for n large enough we have 1 < a < n. Taking n-th roots on both sides we obtain 1 < a^(1/n) < n^(1/n). But the right-hand side approaches 1 as n goes to infinity, by our statement on the root-of-n sequence. Then the sequence {a^(1/n)} must also approach 1, being squeezed between 1 and a sequence tending to 1 (Pinching theorem). Case 0 < a < 1: If 0 < a < 1, then 1/a > 1. Using the first part of this proof, the reciprocal of the sequence {a^(1/n)} must converge to one, which implies the same for the original sequence. Incidentally, if a = 0 then we are dealing with the constant sequence 0, and the limit is of course equal to 0.

Definition 3.5.5: Binomial Sequence

Binomial Sequence: If b > 1 then the sequence {n^k / b^n} converges to zero for any positive integer k. In fact, this is still true if k is replaced by any real number.

Binomial sequence with k = 2 and b = 1.3

Proof:
Note that both numerator and denominator tend to infinity. Our goal will be to show that the denominator grows faster than the k-th power of n, thereby 'winning' the race to infinity and forcing the whole expression to tend to zero. The name of this sequence indicates that we might try to use the binomial theorem. Indeed, define x such that b = 1 + x. Since b > 1 we know that x > 0. Therefore, each term in the binomial theorem is positive, and we can use the (k+1)-st term of that theorem to estimate:

b^n = (1 + x)^n ≥ C(n, k+1) x^(k+1) = (n (n-1) ... (n-k) / (k+1)!) x^(k+1) for any n ≥ k+1. Let n > 2k + 1. Then n - k > n/2, so that each of the expressions n, n-1, n-2, ..., n-k is greater than n/2. Hence, we have that

b^n ≥ (n/2)^(k+1) x^(k+1) / (k+1)!

But then, taking reciprocals, we have:

0 ≤ n^k / b^n ≤ (k+1)! 2^(k+1) / (x^(k+1) n)

But this estimate is true for all n > 2k + 1, so that, with k fixed, we can take the limit as n approaches infinity and the right-hand side will approach zero. Since the left-hand side is always greater than or equal to zero, the limit of the binomial sequence must also be zero. If k is replaced by any real number, see the exponent sequence to find out how n^k could be defined for rational and irrational values of k. But perhaps some simple estimate will help if k is not an integer. Details are left as an exercise.
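The race between n^k and b^n is visible numerically: the terms rise for a while and then collapse once the exponential takes over.

```python
# The binomial sequence n^k / b^n for k = 2, b = 1.3.
k, b = 2, 1.3
terms = [n ** k / b ** n for n in range(1, 301)]
peak = max(range(300), key=lambda i: terms[i])  # index of the largest term
print(terms[peak] > terms[-1])  # True: the terms rise, then collapse
print(terms[-1] < 1e-20)        # True: the n = 300 term is essentially zero
```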

Definition 3.5.6: Euler Sequence

Euler's Sequence: The sequence {(1 + 1/n)^n} converges to e ≈ 2.71828182845904523536028747135... (Euler's number). This sequence serves to define e.

Euler's sequence

Proof:
We will show that the sequence is monotone increasing and bounded above. If that is true, then it must converge. Its limit, by definition, will be called e, for Euler's number. Euler's number e is irrational (in fact transcendental), and an approximation of e to 30 decimals is e ≈ 2.71828182845904523536028747135. First, we can use the binomial theorem to expand the expression:

(1 + 1/n)^n = 1 + 1 + (1/2!)(1 - 1/n) + (1/3!)(1 - 1/n)(1 - 2/n) + ... + (1/n!)(1 - 1/n) ... (1 - (n-1)/n)

Similarly, we can replace n by n+1 in this expression to obtain the expansion of (1 + 1/(n+1))^(n+1).

The first expression has n+1 terms, the second expression has n+2 terms. Each of the first n+1 terms of the second expression is greater than or equal to the corresponding term of the first expression, because

1 - i/(n+1) ≥ 1 - i/n for each i.

But then the sequence is monotone increasing, because we have shown that (1 + 1/n)^n ≤ (1 + 1/(n+1))^(n+1). Next, we need to show that the sequence is bounded. Again, consider the expansion

(1 + 1/n)^n = 1 + 1 + (1/2!)(1 - 1/n) + ... + (1/n!)(1 - 1/n) ... (1 - (n-1)/n) ≤ 1 + 1/1! + 1/2! + ... + 1/n!

Now we need to estimate the sum on the right to finish the proof.

If we define Sn = 1/1! + 1/2! + ... + 1/n!, then, since k! ≥ 2^(k-1) for all k ≥ 1,

Sn ≤ 1 + 1/2 + 1/4 + ... + 1/2^(n-1) < 2

so that, finally,

(1 + 1/n)^n ≤ 1 + Sn ≤ 3

for all n. Hence, Euler's sequence is bounded by 3 for all n. Therefore, since the sequence is monotone increasing and bounded, it must converge. We already know that the limit is less than or equal to 3. In fact, the limit is approximately equal to 2.71828182845904523536028747135.
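Both facts established in the proof, monotonicity and the bound 3, can be observed on a long prefix of the sequence:

```python
# Euler's sequence (1 + 1/n)^n: increasing, bounded by 3, creeping toward e.
import math

e_seq = [(1 + 1 / n) ** n for n in range(1, 10001)]
print(all(x < y for x, y in zip(e_seq, e_seq[1:])))  # True: increasing
print(max(e_seq) < 3)                                # True: bounded by 3
print(abs(e_seq[-1] - math.e) < 1e-3)                # True: near e already
```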

Definition 3.5.7: Exponential Sequence

Exponential Sequence: The sequence {(1 + x/n)^n} converges to the exponential function e^x = exp(x) for any real number x.

Proof:
We will use a simple substitution to prove this. Let x/n = 1/u, or equivalently, n = u x. Then we have

(1 + x/n)^n = (1 + 1/u)^(u x) = [ (1 + 1/u)^u ]^x, and as n goes to infinity, so does u (for x > 0). But the term inside the square brackets is Euler's sequence, which converges to Euler's number e. Hence, the whole expression converges to e^x, as required. In fact, we have used a property relating to functions to make this proof work correctly. What is that property? If we did not want to use functions, we could first prove the statement for x being an integer. Then we could expand it to rational numbers, and then, approximating x by rational numbers, we could prove the final result.
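The claimed limit can be spot-checked for a few values of x:

```python
# The exponential sequence (1 + x/n)^n approaching e^x, for a few x.
import math

n = 10 ** 6
for x in [-1.0, 0.5, 2.0]:
    approx = (1 + x / n) ** n
    assert abs(approx - math.exp(x)) < 1e-4  # agrees with exp(x) to 4 decimals
print("all close")
```

The error behaves roughly like e^x x^2 / (2n), so taking n larger tightens the agreement, consistent with the convergence statement.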
