
Part 3: Two-Point Boundary Value Problems

Chapter 6 Fundamental Problems and Methods


6.1 Problems to be Solved
Several problems arising in science and engineering are modeled by differential equations that involve conditions specified at more than one point. Some examples follow.

Figure 6.1.1: Deformation of an elastica.

Figure 6.1.2: Swirling flow above a rotating disk.

1. Deformation of an Elastica. The transverse deformation of a thin elastic inextensional rod subjected to an axial loading and clamped at its ends is governed by the differential system

   d²θ/ds² + P sin θ = 0,   0 < s < 1,

   θ(0) = θ(1) = 0.

As shown in Figure 6.1.1, the rod has unit length, the magnitude of the loading is P, and θ is the angle that the deformed rod makes with the initial undeformed axis. This classical second-order nonlinear two-point boundary value problem is called the elastica problem. One solution is θ = 0. This solution, however, becomes unstable as P increases and the rod bends into a deformed shape as shown in Figure 6.1.1. Hence, this boundary value problem is also a differential eigenvalue problem that consists of determining θ and the critical load P for deformed shapes to exist. Once θ has been determined, the Cartesian coordinates of a deformed point on the rod can be determined as the solution of the initial value problems

   dx/ds = cos θ,   dy/ds = sin θ,   x(0) = y(0) = 0.
2. Swirling Flow. The swirling flow of a viscous incompressible fluid over a disk spinning with angular speed Ω (Figure 6.1.2) can be analyzed by solving the nonlinear two-point boundary value problem

   d³f/dx³ + 2f d²f/dx² − (df/dx)² + g² = γ,
   d²g/dx² + 2f dg/dx − 2 (df/dx) g = 0,   0 < x < ∞,

   f(0) = 0,   df/dx(0) = 0,   g(0) = 1,

   lim_{x→∞} g(x) = lim_{x→∞} df/dx(x) = 0,

where γ is the Rossby number. The dimensionless variable x is related to the axial coordinate z (Figure 6.1.2) by

   x = z √(Ω/ν),

where ν is the kinematic viscosity. The radial, tangential, and axial components of the velocity vector, respectively, are obtained from the functions f(x) and g(x) as

   u_r = rΩ df/dx,   u_θ = rΩ g(x),   u_z = −2√(νΩ) f(x).

This problem involves a boundary value problem for a third-order and a second-order nonlinear ODE. An interesting feature of this problem is that one of the "boundaries" is at infinity.

As indicated in these two examples, boundary value problems (BVPs) have several forms. The two that will be most important to us are:

1. A vector system of first-order equations

   y'(x) = f(x, y(x)),   (6.1.1)

   where y' = dy/dx and y and f are m-dimensional vectors. (We have changed the independent variable from t to x since it frequently corresponds to a spatial position rather than time.)

2. A scalar mth-order differential equation

   u^(m) = g(x, u, u', ..., u^(m−1)).   (6.1.2)

Naturally, the higher-order scalar problem can be reduced to a first-order vector system (as described in Section 1.1); however, it may sometimes be convenient to work with the higher-order scalar problem (6.1.2). Focusing on the vector system (6.1.1) for the moment, if

   f(x, y) = A(x)y + b(x)

the ODE is linear; otherwise it is nonlinear. If (6.1.1) is to be solved on a < x < b then m conditions are needed to uniquely determine the solution of the BVP. These may be of the general form

   g(y(a), y(b)) = 0,   (6.1.3a)

where g has dimension m. For the most part, we will consider simpler boundary conditions having the form

   g_L(y(a)) = 0,   g_R(y(b)) = 0,   (6.1.3b)

where g_L has dimension l and g_R has dimension r = m − l. Conditions of the form (6.1.3b) are called separated, while the more general form (6.1.3a) is unseparated. Linear versions of (6.1.3a) and (6.1.3b) have the forms

   L y(a) + R y(b) = c   (6.1.4a)

and

   L_L y(a) = c_L,   R_R y(b) = c_R.   (6.1.4b)

The matrices L and R are of dimension m × m and the vector c is m-dimensional. For the separated conditions (6.1.4b), L_L is l × m-dimensional, R_R is r × m-dimensional, c_L is l-dimensional, and c_R is r-dimensional.

There are three standard numerical approaches to solving two-point boundary value problems:

1. Shooting. An appropriate IVP is defined and solved by initial value techniques and software. The IVP is defined so that solutions iteratively converge to the solution of the original BVP.

2. Finite Differences. A mesh is introduced on [a, b] and derivatives in the ODE are replaced by finite differences relative to the mesh. This leads to a linear or nonlinear algebraic problem which may be solved to produce a discrete approximation of the solution of the BVP.

3. Projections. The solution of the BVP is approximated by simpler functions, e.g., piecewise polynomials, and the differential equations and boundary conditions are satisfied approximately. Collocation or finite element techniques often furnish these approximations.

In the next three sections we illustrate the basic ideas of these methods by using simple BVPs.

6.2 Introduction to Shooting


Let us consider a second-order nonlinear two-point BVP

   u''(x) = f(x, u, u'),   a < x < b,   u(a) = A,   u(b) = B.   (6.2.1)

Writing the ODE as a first-order system, let us also consider the related IVP

   y_1' = y_2,   y_2' = f(x, y_1, y_2),   (6.2.2a)

   y_1(a; α) = A,   y_2(a; α) = α.   (6.2.2b)

In what follows, we'll need to emphasize the dependence of the solution on the parameter α appearing in the initial conditions, so we'll write the solution of (6.2.2) as y_k(x; α), k = 1, 2. Solutions of (6.2.2) satisfy the original differential equation and the initial condition at x = a but fail to satisfy the terminal condition at x = b. Thus, shooting consists of repeatedly solving (6.2.2) for different choices of α until the terminal condition

   y_1(b; α) = B   (6.2.3)

is also satisfied. Regarding (6.2.3) as a (nonlinear) algebraic equation for α, convergence to the solution of the BVP can be achieved by using an iterative strategy for nonlinear algebraic equations. Secant and Newton iteration are two possible procedures. Let us illustrate the simpler secant method first.

1. Solve the IVP (6.2.2) for two choices of α, say α_0 and α_1. The corresponding solutions, y_1(x; α_0) and y_1(x; α_1), may appear as illustrated in Figure 6.2.1. The various choices of α alter the initial slope y_1'(a; α) = y_2(a; α). Regarding the solution y_1(x; α) as the trajectory of a projectile fired from a cannon at x = a with y_1(a; α) = A, the problem is to alter the initial angle of the cannon so that the projectile hits a target at x = b, i.e., y_1(b; α) = B; hence, the name shooting.

2. Assuming that y_1(b; α) is locally a linear function of α, we use the two values y_1(b; α_0) and y_1(b; α_1) to compute the next value α_2 in the sequence.

Figure 6.2.1: Solutions y_1(x; α_0) and y_1(x; α_1) of the IVP (6.2.2).

Figure 6.2.2: The secant method uses the two guesses α_0 and α_1 to select another guess α_2 such that y_1(b; α_2) ≈ B.

Thus, α_2 is the solution of (Figure 6.2.2)

   (y_1(b; α_1) − B)/(α_1 − α_2) = (y_1(b; α_1) − y_1(b; α_0))/(α_1 − α_0).

Solving for α_2 yields

   α_2 = α_1 − (α_1 − α_0) (y_1(b; α_1) − B) / (y_1(b; α_1) − y_1(b; α_0)).

This can be repeated to yield the general relation

   α_{ν+1} = α_ν − (α_ν − α_{ν−1}) (y_1(b; α_ν) − B) / (y_1(b; α_ν) − y_1(b; α_{ν−1})),   ν = 1, 2, ... .   (6.2.4)

The iteration may be terminated when, e.g.,

   |y_1(b; α_ν) − B| ≤ ε |B|

for a prescribed value of ε. (Other termination criteria should be used when B = 0.)
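As a concrete illustration, the entire secant shooting procedure fits in a few lines of Python. This is a sketch, not library code: shoot_secant is an assumed name, and SciPy's solve_ivp supplies the IVP solutions y_1(b; α).

    # A sketch of secant shooting for u'' = f(x, u, u'), u(a) = A, u(b) = B.
    import numpy as np
    from scipy.integrate import solve_ivp

    def shoot_secant(f, a, b, A, B, alpha0, alpha1, eps=1e-9, maxit=25):
        def y1_at_b(alpha):
            # Solve the IVP (6.2.2) with initial slope alpha; return y1(b; alpha).
            sol = solve_ivp(lambda x, y: [y[1], f(x, y[0], y[1])],
                            (a, b), [A, alpha], rtol=1e-10, atol=1e-12)
            return sol.y[0, -1]

        F0, F1 = y1_at_b(alpha0) - B, y1_at_b(alpha1) - B
        for _ in range(maxit):
            # termination test |y1(b; alpha) - B| <= eps |B|, safeguarded for B = 0
            if abs(F1) <= eps * max(abs(B), 1.0):
                return alpha1
            # secant update (6.2.4)
            alpha0, alpha1 = alpha1, alpha1 - (alpha1 - alpha0) * F1 / (F1 - F0)
            F0, F1 = F1, y1_at_b(alpha1) - B
        raise RuntimeError("secant shooting did not converge")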
Remark 1. If the ODE is linear then y_1(b; α) is a linear function of α and y_1(x; α_2) is the exact solution (neglecting round-off errors) of the BVP (6.2.1).

The nonlinear equation (6.2.3) can also be solved by Newton's method. If, for example, α_ν is a (sufficiently close) guess to the solution of (6.2.3) then the next guess may be generated as

   α_{ν+1} = α_ν − (y_1(b; α_ν) − B) / (∂y_1(b; α_ν)/∂α).   (6.2.5)

An expression for ∂y_1(b; α)/∂α may be obtained by differentiating the IVP (6.2.2) with respect to α to obtain

   (∂y_1/∂α)' = ∂y_2/∂α,   (∂y_2/∂α)' = (∂f(x, y_1, y_2)/∂y_1) ∂y_1/∂α + (∂f(x, y_1, y_2)/∂y_2) ∂y_2/∂α,   (6.2.6a)

   (∂y_1/∂α)(a; α) = 0,   (∂y_2/∂α)(a; α) = 1.   (6.2.6b)

These equations are linear in the partial derivatives ∂y_1/∂α and ∂y_2/∂α. An algorithm for performing shooting with Newton's method is shown in Figure 6.2.3. In order to simplify the notation, let

   z_1 = ∂y_1/∂α,   z_2 = ∂y_2/∂α.   (6.2.7)

In order to solve the IVP, functions to evaluate ∂f/∂y_1 and ∂f/∂y_2 must be available. In contrast, the secant method only requires knowledge of f. In fact, the secant method (6.2.4) can be viewed as an approximation of Newton's method (6.2.5) with backward differences replacing ∂y_1(b; α)/∂α.

procedure newton
begin
   Select an initial guess α_0; ν := 0
   repeat
      Solve the IVP for x ∈ (a, b]:
         y_1' = y_2,                                    y_1(a; α_ν) = A,
         y_2' = f(x, y_1, y_2),                         y_2(a; α_ν) = α_ν,
         z_1' = z_2,                                    z_1(a; α_ν) = 0,
         z_2' = f_{y_1}(x, y_1, y_2) z_1 + f_{y_2}(x, y_1, y_2) z_2,   z_2(a; α_ν) = 1
      if not converged then
      begin
         α_{ν+1} := α_ν − (y_1(b; α_ν) − B)/z_1(b; α_ν)
         ν := ν + 1
      end
   until converged
end

Figure 6.2.3: The shooting method for solving (6.2.1) with Newton iteration.

For second-order BVPs, Newton's method requires the solution of a four-dimensional IVP while the secant method only requires a two-dimensional IVP. Convergence of Newton's method is generally second-order (quadratic), i.e.,

   |α_{ν+1} − α_∞| ≤ C |α_ν − α_∞|²,   ν → ∞,

where α_∞ is the value of α that satisfies the terminal condition (6.2.3). Convergence of the secant method is slightly slower, typically

   |α_{ν+1} − α_∞| ≤ C |α_ν − α_∞|^{1.5},   ν → ∞.

Since each secant iteration requires only a two-dimensional rather than a four-dimensional IVP, however, the secant method would generally be preferred to Newton's method. This may not be the case with higher-dimensional BVPs.

Example 6.2.1. Consider the solution of the clamped elastica problem

   θ'' + P sin θ = 0,   θ(0) = θ(1/2) = 0,

by shooting methods using Newton iteration. (Symmetry considerations have been used to cut the domain of the problem illustrated in Figure 6.1.1 in half.)

Letting y_1 = θ and y_2 = θ', we introduce the IVP

   y_1' = y_2,   y_2' = −P sin y_1,   y_1(0; α) = 0,   y_2(0; α) = α.

Differentiating this system with respect to α yields

   z_1' = z_2,   z_2' = −P z_1 cos y_1,   z_1(0; α) = 0,   z_2(0; α) = 1,

where z_k, k = 1, 2, satisfies (6.2.7). Iterates are computed by the relation

   α_{ν+1} = α_ν − y_1(1/2; α_ν) / z_1(1/2; α_ν).

Using a convergence test of

   |y_1(1/2; α_ν)| ≤ 10^{-9},

we found that Newton's method converged in five iterations when P = 40 and α_0 = 0.1.
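The computation in this example is easy to reproduce; the following Python sketch (assuming the stated data P = 40 and α_0 = 0.1) integrates the four-dimensional IVP with SciPy and applies the Newton update (6.2.5) with B = 0. The iteration count can differ slightly depending on tolerances.

    # Newton shooting for the half-domain elastica (Example 6.2.1).
    import numpy as np
    from scipy.integrate import solve_ivp

    P, alpha = 40.0, 0.1
    def rhs(s, w):
        y1, y2, z1, z2 = w
        return [y2, -P*np.sin(y1), z2, -P*z1*np.cos(y1)]

    for nu in range(20):
        sol = solve_ivp(rhs, (0.0, 0.5), [0.0, alpha, 0.0, 1.0],
                        rtol=1e-12, atol=1e-12)
        y1b, z1b = sol.y[0, -1], sol.y[2, -1]
        if abs(y1b) <= 1e-9:            # convergence test of the example
            break
        alpha -= y1b / z1b              # Newton update (6.2.5) with B = 0
    print(nu, alpha)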

6.3 Introduction to Finite Difference Methods


We'll again use the second-order scalar nonlinear two-point boundary value problem

   y''(x) = f(x, y, y'),   a < x < b,   y(a) = A,   y(b) = B,   (6.3.1)

to describe the essential details of finite difference methods. To begin, we divide the domain a ≤ x ≤ b into N uniform subintervals of width

   h = (b − a)/N,   (6.3.2a)

as shown in Figure 6.3.1. Restriction to uniform subintervals is not essential, but is introduced here for simplicity. We also let

   x_i = a + ih,   i = 0, 1, ..., N.   (6.3.2b)

Figure 6.3.1: Domain, discretization, and notation used for finite difference solutions of (6.3.1).

In solving the BVP (6.3.1) by finite differences, all derivatives are replaced by finite difference approximations. These can be constructed from interpolating polynomials, but we'll illustrate a different approach using Taylor's series expansions of the solution y(x). Thus, consider

   y(x) = y(x_i) + (x − x_i) y'(x_i) + ((x − x_i)²/2!) y''(x_i) + ... + ((x − x_i)^k/k!) y^(k)(ξ),   (6.3.3)

where ξ is between x_i and x. Specifically, choosing x = x_{i+1} = x_i + h yields

   y(x_{i+1}) = y(x_i) + h y'(x_i) + (h²/2) y''(x_i) + (h³/6) y'''(x_i) + ... + (h^k/k!) y^(k)(ξ_{i+1}).   (6.3.4a)

Similarly, selecting x = x_{i−1} = x_i − h produces

   y(x_{i−1}) = y(x_i) − h y'(x_i) + (h²/2) y''(x_i) − (h³/6) y'''(x_i) + ... + ((−h)^k/k!) y^(k)(ξ_{i−1}).   (6.3.4b)

Setting k = 2 in (6.3.4a) and solving for y'(x_i) yields

   y'(x_i) = (y(x_{i+1}) − y(x_i))/h − (h/2) y''(ξ_{i+1}).   (6.3.5a)

Finite difference approximations are obtained by neglecting the error term of the Taylor's series; thus, the first forward finite difference approximation of y'(x_i) is

   y_i' = (y_{i+1} − y_i)/h   (6.3.5b)

and the local discretization error of this approximation is

   τ_i = −(h/2) y''(ξ_{i+1}).   (6.3.5c)

Subscripts on y denote finite difference approximations; hence, y_i denotes an approximation of y(x_i), etc. In a similar manner, the first backward difference approximation of y'(x_i) is obtained by setting k = 2 in (6.3.4b):

   y_i' = (y_i − y_{i−1})/h,   τ_i = (h/2) y''(ξ_{i−1}).   (6.3.6)

Notice, however, that a higher-order and symmetric difference approximation can be obtained by subtracting (6.3.4b) from (6.3.4a) and setting k = 3 to get

   y(x_{i+1}) − y(x_{i−1}) = 2h y'(x_i) + (h³/3) y'''(ξ_i).

Solving for y'(x_i),

   y_i' = (y_{i+1} − y_{i−1})/(2h),   τ_i = −(h²/6) y'''(ξ_i).   (6.3.7)

The difference formula (6.3.7) is called the first central difference approximation of y'(x_i). In Chapter 5, we found that this approximation led to the leap frog scheme, which had poor stability characteristics. Here, with second-order ODEs, central differences will generally be preferred to either forward or backward differences because of their higher-order local discretization errors.

Remark 1. The higher-order accuracy of (6.3.7) relative to (6.3.5) or (6.3.6) only occurs on a uniform mesh. With nonuniform spacing the second derivative terms in (6.3.4a) and (6.3.4b) would not cancel upon subtraction.

A central difference approximation of the second derivative y''(x_i) is obtained by adding (6.3.4a) and (6.3.4b) while setting k = 4 to obtain

   y_i'' = (y_{i+1} − 2y_i + y_{i−1})/h²,   τ_i = −(h²/12) y^iv(ξ_i).   (6.3.8)

No further approximations are needed to solve (6.3.1) by finite differences; however, we note that approximations of higher derivatives are obtained by using Taylor's series at more points. For example, consider evaluating the Taylor's series (6.3.3) at x = x_{i+2} and x_{i−2} to obtain

   y(x_{i+2}) = y(x_i) + 2h y'(x_i) + 2h² y''(x_i) + (4h³/3) y'''(x_i) + (2h⁴/3) y^iv(x_i) + ...   (6.3.9a)

and

   y(x_{i−2}) = y(x_i) − 2h y'(x_i) + 2h² y''(x_i) − (4h³/3) y'''(x_i) + (2h⁴/3) y^iv(x_i) − ... .   (6.3.9b)

Subtracting (6.3.9b) from (6.3.9a) yields

   y(x_{i+2}) − y(x_{i−2}) = 4h y'(x_i) + (8h³/3) y'''(x_i) + O(h⁵).

A similar subtraction of (6.3.4b) from (6.3.4a) yields

   y(x_{i+1}) − y(x_{i−1}) = 2h y'(x_i) + (h³/3) y'''(x_i) + O(h⁵).

Elimination of the first derivative term yields a central difference approximation of the third derivative as

   y_i''' = (y_{i+2} − 2y_{i+1} + 2y_{i−1} − y_{i−2})/(2h³).   (6.3.10)

The local discretization error is τ_i = O(h²). Similar combinations of (6.3.4) and (6.3.9) yield an O(h²) central difference approximation of the fourth derivative as

   y_i^iv = (y_{i+2} − 4y_{i+1} + 6y_i − 4y_{i−1} + y_{i−2})/h⁴.   (6.3.11)

Now let us return to the task of solving (6.3.1) by finite difference approximations. We'll try central difference approximations because of their higher order. Thus, evaluating (6.3.1) at x = x_i and replacing derivatives by central differences using (6.3.7) and (6.3.8), we obtain

   (y_{i+1} − 2y_i + y_{i−1})/h² = f(x_i, y_i, (y_{i+1} − y_{i−1})/(2h)).   (6.3.12a)

Writing (6.3.12a) at each interior mesh point, i = 1, 2, ..., N − 1, and using the two boundary conditions

   y_0 = A,   y_N = B,   (6.3.12b)

gives a system of N + 1 nonlinear algebraic equations in the N + 1 unknowns y_i, i = 0, 1, ..., N. This system is too complex for an introductory example, so let us confine our attention to linear problems with

   f(x, y, y') = −p(x) y' − q(x) y + r(x).   (6.3.13)

In this case, the approximation (6.3.12a) becomes

   (y_{i+1} − 2y_i + y_{i−1})/h² + p_i (y_{i+1} − y_{i−1})/(2h) + q_i y_i = r_i,   i = 1, 2, ..., N − 1,   (6.3.14)

where p_i = p(x_i), etc. Referring to this as the central difference approximation of the ODE, we define the local discretization error as follows.

Definition 6.3.1. Consider an ODE in the form Ly(x) = 0 and let L_h y be a difference approximation of it, with L and L_h being differential and difference operators. The local discretization error or the local truncation error at x = x_i is

   τ_i = L_h y(x_i).   (6.3.15)

Example 6.3.1. The differential and difference operators for the linear ODE (6.3.1, 6.3.13) satisfy

   Ly(x) = y'' + p(x) y' + q(x) y − r(x) = 0

and

   L_h y(x_i) = (y(x_{i+1}) − 2y(x_i) + y(x_{i−1}))/h² + p(x_i)(y(x_{i+1}) − y(x_{i−1}))/(2h) + q(x_i) y(x_i) − r(x_i).

Using (6.3.7) and (6.3.8), we find

   τ_i = y''(x_i) + (h²/12) y^iv(ξ_i) + p(x_i)[y'(x_i) + (h²/6) y'''(ξ_i)] + q(x_i) y(x_i) − r(x_i).

Using the differential equation,

   τ_i = (h²/12) y^iv(ξ_i) + (h²/6) p(x_i) y'''(ξ_i).

Thus, as we might have expected, the local discretization error of the central difference approximation of (6.3.1, 6.3.13) is O(h²).

The algebraic system (6.3.12b, 6.3.14) is linear for the linear BVP (6.3.1, 6.3.12b, 6.3.13) and may be solved by, e.g., Gaussian elimination. Toward this end, let us write (6.3.14) in the form

   b_i y_{i−1} + a_i y_i + c_i y_{i+1} = h² r_i,   i = 1, 2, ..., N − 1,   (6.3.16a)

where

   b_i = 1 − (h/2) p_i,   a_i = −2 + h² q_i,   c_i = 1 + (h/2) p_i.   (6.3.16b)

The boundary conditions (6.3.12b) may be used in conjunction with (6.3.16a) to create a system of dimension N + 1 or used to explicitly eliminate y_0 and y_N as unknowns from (6.3.16a). The latter approach is preferred for simple problems like this one. Thus, using (6.3.12b) with (6.3.16a) when i = 1, we find

   a_1 y_1 + c_1 y_2 = h² r_1 − b_1 A.   (6.3.17a)

Similarly, using (6.3.12b) with (6.3.16a) when i = N − 1 yields

   b_{N−1} y_{N−2} + a_{N−1} y_{N−1} = h² r_{N−1} − c_{N−1} B.   (6.3.17b)

Grouping the N − 1 equations (6.3.17a), (6.3.16a) for i = 2, 3, ..., N − 2, and (6.3.17b) yields

   A y = f,   (6.3.18a)

where

   A = [ a_1  c_1
         b_2  a_2  c_2
              .    .    .
                b_{N−2}  a_{N−2}  c_{N−2}
                         b_{N−1}  a_{N−1} ],   (6.3.18b)

   y = [ y_1, y_2, ..., y_{N−1} ]^T,   f = [ h² r_1 − b_1 A, h² r_2, ..., h² r_{N−2}, h² r_{N−1} − c_{N−1} B ]^T.   (6.3.18c)
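Before turning to a custom solver, note that (6.3.16)-(6.3.18) are easy to assemble and solve with library software. The following Python sketch does this with SciPy's banded solver; the function name solve_linear_bvp is illustrative, not a library routine.

    # Assemble and solve the tridiagonal system (6.3.16)-(6.3.18) for the
    # linear BVP y'' + p(x) y' + q(x) y = r(x), y(a) = A, y(b) = B.
    import numpy as np
    from scipy.linalg import solve_banded

    def solve_linear_bvp(p, q, r, a, b, A, B, N):
        h = (b - a) / N
        x = a + h * np.arange(1, N)                # interior nodes x_1..x_{N-1}
        bi = 1.0 - 0.5 * h * p(x)                  # (6.3.16b)
        ai = -2.0 + h * h * q(x)
        ci = 1.0 + 0.5 * h * p(x)
        f = h * h * r(x)
        f[0] -= bi[0] * A                          # (6.3.17a)
        f[-1] -= ci[-1] * B                        # (6.3.17b)
        ab = np.zeros((3, N - 1))                  # banded storage for solve_banded
        ab[0, 1:] = ci[:-1]                        # superdiagonal
        ab[1, :] = ai                              # diagonal
        ab[2, :-1] = bi[1:]                        # subdiagonal
        y = solve_banded((1, 1), ab, f)
        return x, y

For Example 6.3.2 below, the call solve_linear_bvp(lambda x: 0*x, lambda x: -np.ones_like(x), lambda x: 0*x, 0.0, 1.0, 0.0, 1.0, 10) should reproduce the h = 1/10 column of Table 6.3.1.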


The linear algebraic problem (6.3.18) requires the solution of a tridiagonal system to determine the N − 1 unknowns y_i, i = 1, 2, ..., N − 1. The basic solution strategy is Gaussian elimination. Pivoting is frequently unnecessary. As seen from (6.3.16b), A will be diagonally dominant unless |p(x)| is large relative to |q(x)| or h is too large. Pivoting and other special treatment may be necessary in these exceptional situations. Let us proceed without pivoting and write A in the slightly more general form

   A = [ a_1  c_1
         b_2  a_2  c_2
              .    .    .
                b_{n−1}  a_{n−1}  c_{n−1}
                         b_n      a_n ],   (6.3.19a)

where, in our case, n = N − 1. We factor A as

   A = LU,   (6.3.19b)

where L is a lower triangular matrix and U is an upper triangular matrix. Having performed this factorization, we write (6.3.18a) as LUy = f; letting

   Uy = z,   (6.3.19c)

we solve

   Lz = f.   (6.3.19d)

Since L is lower triangular, (6.3.19d) may be solved for z by forward substitution. Knowing z, we determine y by solving (6.3.19c) by backward substitution. All that remains is the determination of L and U. Let us hypothesize that they have the following bidiagonal forms:

   L = [ 1
         β_2  1
              β_3  1
                   .   .
                    β_n  1 ],   U = [ α_1  γ_1
                                           α_2  γ_2
                                                .    .
                                                 α_{n−1}  γ_{n−1}
                                                          α_n ].   (6.3.20)

Using (6.3.20) with (6.3.19a,b) we find

   α_1 = a_1,   (6.3.21a)
   β_i = b_i/α_{i−1},   i = 2, 3, ..., n,   (6.3.21b)
   α_i = a_i − β_i γ_{i−1},   i = 2, 3, ..., n,   (6.3.21c)
   γ_i = c_i,   i = 1, 2, ..., n − 1.   (6.3.21d)

Using the form of L in (6.3.20) with (6.3.19d), we find the forward substitution step to be

   z_1 = f_1,   (6.3.22a)
   z_i = f_i − β_i z_{i−1},   i = 2, 3, ..., n.   (6.3.22b)

Similarly, using the form of U in (6.3.20) with (6.3.19c), the backward substitution step is

   y_n = z_n/α_n,   (6.3.23a)
   y_i = (z_i − γ_i y_{i+1})/α_i,   i = n − 1, n − 2, ..., 1.   (6.3.23b)

The solution procedure defined by (6.3.19)-(6.3.23) is the basis of the famous tridiagonal algorithm. We state it as a pseudo-PASCAL algorithm in Figure 6.3.2. The version implemented in Figure 6.3.2 overwrites a_i and b_i with α_i and β_i to reduce storage (γ_i = c_i needs no computation). By counting, we see that the algorithm requires approximately 5n multiplications or divisions and 3n additions and subtractions. The work required to factor a full matrix by Gaussian elimination is approximately n³/3. Thus, the ratio of the work to factor a tridiagonal matrix to that of a full matrix is approximately 15/n². This is a significant savings even for small matrices, and one should never use a Gaussian elimination procedure for full matrices to solve a tridiagonal system.

procedure tridi(n: integer; var a, b, c, f, y: array [1..n] of real)
begin
   { Factorization }
   for i := 2 to n do
   begin
      b_i := b_i/a_{i−1};
      a_i := a_i − b_i c_{i−1}
   end;

   { Forward substitution }
   y_1 := f_1;
   for i := 2 to n do
      y_i := f_i − b_i y_{i−1};

   { Backward substitution }
   y_n := y_n/a_n;
   for i := n − 1 downto 1 do
      y_i := (y_i − c_i y_{i+1})/a_i
end

Figure 6.3.2: Tridiagonal algorithm.
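For completeness, here is a direct Python transcription of Figure 6.3.2, a sketch that, like the PASCAL version, overwrites its factorization arrays in place:

    import numpy as np

    def tridi(a, b, c, f):
        """Solve the tridiagonal system (6.3.19a); a, b, c, f hold the diagonal,
        subdiagonal, superdiagonal, and right side (b[0], c[-1] unused).
        The arrays a and b are overwritten by the factorization."""
        n = len(a)
        for i in range(1, n):                 # factorization (6.3.21)
            b[i] = b[i] / a[i - 1]
            a[i] = a[i] - b[i] * c[i - 1]
        y = f.astype(float).copy()
        for i in range(1, n):                 # forward substitution (6.3.22)
            y[i] = y[i] - b[i] * y[i - 1]
        y[n - 1] = y[n - 1] / a[n - 1]        # backward substitution (6.3.23)
        for i in range(n - 2, -1, -1):
            y[i] = (y[i] - c[i] * y[i + 1]) / a[i]
        return y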

Example 6.3.2. Evidence from the Taylor's series expansion would suggest that the global error of the finite difference solution of the BVP (6.3.1, 6.3.12b, 6.3.13) has an expansion in even powers of h beginning with O(h²) terms. Let's assume that this is the case. Then Richardson's extrapolation can be used both to estimate the global error and to improve the solution. To this end, we calculate two solutions using different step sizes of, e.g., h and h/2. In order to emphasize the dependence of the discrete solution on step size, let y_i^h denote the finite difference solution y_i at x_i = a + ih obtained with step size h. With the assumed error dependence we have

   y(a + ih) − y_i^h = C h² + O(h⁴)

and

   y(a + ih) − y_{2i}^{h/2} = C (h/2)² + O(h⁴).

The variable y_{2i}^{h/2} is the finite difference solution at a + 2i(h/2) = a + ih. Subtracting the two error equations to eliminate the exact solution yields

   C h² = (4/3)[y_{2i}^{h/2} − y_i^h] + O(h⁴).

Using this result, we estimate the error in the finer grid solution as

   y(a + ih) − y_{2i}^{h/2} ≈ (y_{2i}^{h/2} − y_i^h)/3.

Furthermore,

   ŷ_i^{h/2} = y_{2i}^{h/2} + (y_{2i}^{h/2} − y_i^h)/3,   i = 1, 2, ..., N − 1,

is an O(h⁴) approximation of the solution. Let us apply Richardson's extrapolation to the simple BVP

   y'' = y,   y(0) = 0,   y(1) = 1.

This problem has the form of (6.3.13) with p(x) = r(x) = 0 and q(x) = −1. Thus, the elements of the tridiagonal system (6.3.18) are

   a_i = −(2 + h²),   i = 1, 2, ..., N − 1,
   b_i = 1,   i = 2, 3, ..., N − 1,
   c_i = 1,   i = 1, 2, ..., N − 2.

Central difference solutions with h = 1/10 and 1/20, the solution by Richardson's extrapolation, and the exact solution y(x) = sinh x / sinh 1 are shown in Table 6.3.1. Using the error at x = 0.5 as a measure of accuracy, we have

   |y(0.5) − y_5^{0.1}| = 4.26 × 10⁻⁵,   |y(0.5) − y_10^{0.05}| = 1.04 × 10⁻⁵,

   |y_10^{0.05} − y_5^{0.1}|/3 = 1.07 × 10⁻⁵,   |y(0.5) − ŷ_10^{0.05}| = 3.3 × 10⁻⁷.

These results indicate that

1. the global error of the centered finite difference solution is approximately O(h²), since decreasing h by one-half quarters the error, and

2. Richardson's extrapolation furnishes a good approximation of the error while also improving accuracy.

i    x_i    y(x_i)        y_i^h         y_{2i}^{h/2}   ŷ_i^{h/2}
0    0.00   0.0           0.0           0.0            0.0
1    0.05   0.04256364                  0.04256498
2    0.10   0.08523370    0.08524467    0.08523638     0.08523361
3    0.15   0.12811690                  0.12812087
4    0.20   0.17132045    0.17134180    0.17132566     0.17132029
5    0.25   0.21495240                  0.21495877
6    0.30   0.25912184    0.25915234    0.25912928     0.25912159
7    0.35   0.30393920                  0.30394761
8    0.40   0.34951658    0.34955441    0.34952582     0.34951629
9    0.45   0.39596749                  0.39597785
10   0.50   0.44340942    0.44345203    0.44341982     0.44340909
11   0.55   0.49195965                  0.49197036
12   0.60   0.54174004    0.54178417    0.54175082     0.54173970
13   0.65   0.59287506                  0.59288567
14   0.70   0.64549258    0.64553415    0.64550275     0.64549229
15   0.75   0.69972415                  0.69973360
16   0.80   0.75570543    0.75573949    0.75571378     0.75570522
17   0.85   0.81357636                  0.81358325
18   0.90   0.87348163    0.87350223    0.87348670     0.87348153
19   0.95   0.93557107                  0.93557388
20   1.00   1.0           1.0           1.0            1.0

Table 6.3.1: Solution of Example 6.3.2 using central difference approximations and Richardson's extrapolation. (The row index i refers to the finer mesh with h/2 = 0.05; the y_i^h column is listed at the coarse mesh points.)
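A short script can check the extrapolation numerically. This sketch reuses the solve_linear_bvp helper defined earlier (an assumed name, not a library routine):

    # Richardson extrapolation for Example 6.3.2: y'' = y, y(0) = 0, y(1) = 1.
    import numpy as np

    p = lambda x: np.zeros_like(x)
    q = lambda x: -np.ones_like(x)     # (6.3.13) with y'' = y gives q = -1
    r = lambda x: np.zeros_like(x)

    x1, yh  = solve_linear_bvp(p, q, r, 0.0, 1.0, 0.0, 1.0, 10)   # step h
    x2, yh2 = solve_linear_bvp(p, q, r, 0.0, 1.0, 0.0, 1.0, 20)   # step h/2

    coarse = yh2[1::2]                 # fine-grid values at the coarse nodes
    est  = (coarse - yh) / 3.0         # error estimate for the finer solution
    yhat = coarse + est                # O(h^4) extrapolated solution
    print(np.max(np.abs(yhat - np.sinh(x1) / np.sinh(1.0))))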

As a next step, let us consider a linear BVP with a prescribed Robin boundary condition, e.g.,

   y'' + p(x) y' + q(x) y = r(x),   a < x < b,   (6.3.24a)

   y(a) = A,   (6.3.24b)

   y'(b) + C y(b) = B.   (6.3.24c)

As distinct from the Dirichlet conditions used in (6.3.1), y(b) is now an unknown. We could approximate y'(b) by backward differences and use the discrete version of the terminal condition (6.3.24c) to determine an approximation of y(b); however, this has some drawbacks. If first-order backward differences (6.3.6) were used to approximate y'(b) then the boundary condition (6.3.24c) would only be accurate to O(h) while the discrete approximation of the ODE (6.3.24a) is accurate to O(h²). If higher-order backward differences were used to approximate y'(b) then the tridiagonal structure of the discrete system would be lost.

Figure 6.3.3: Domain and discretization used to approximate a Robin terminal condition.

The usual strategy is to introduce a fictitious external point x_{N+1} = b + h as shown in Figure 6.3.3. Extending the solution to this exterior point, we use central differences to approximate the terminal condition (6.3.24c) to O(h²) as

   (y_{N+1} − y_{N−1})/(2h) + C y_N = B.   (6.3.25a)

This does little to solve the problem since we've introduced both another equation (6.3.25a) and another unknown y_{N+1}. The additional equation that we need is the central difference approximation of the ODE (6.3.24a) at x = x_N. Thus, using (6.3.16a) with i = N we have

   b_N y_{N−1} + a_N y_N + c_N y_{N+1} = h² r_N.   (6.3.25b)

It is common to eliminate y_{N+1} by combining (6.3.25a) and (6.3.25b) to obtain

   (b_N + c_N) y_{N−1} + (a_N − 2hC c_N) y_N = h² r_N − 2h c_N B.   (6.3.25c)

Observing that b_N + c_N = 2 by (6.3.16b), we obtain the tridiagonal system

   [ a_1  c_1                           ] [ y_1     ]   [ h² r_1 − b_1 A     ]
   [ b_2  a_2  c_2                      ] [ y_2     ]   [ h² r_2             ]
   [      .    .    .                   ] [ ...     ] = [ ...                ]   (6.3.26)
   [        b_{N−1}  a_{N−1}  c_{N−1}   ] [ y_{N−1} ]   [ h² r_{N−1}         ]
   [                 2   a_N − 2hC c_N  ] [ y_N     ]   [ h² r_N − 2h c_N B  ]

which may be solved by the tridiagonal algorithm of Figure 6.3.2.

Now let us return to the original nonlinear problem (6.3.12). Most iterative schemes for solving nonlinear algebraic equations can be used to determine the solution, but we'll illustrate the use of Newton's method, which is the most popular. To begin, let us write the finite difference system (6.3.12a) in the form

   F_i(y) = y_{i−1} − 2y_i + y_{i+1} − h² f(x_i, y_i, (y_{i+1} − y_{i−1})/(2h)) = 0,   i = 1, 2, ..., N − 1,   (6.3.27)

subject to the Dirichlet boundary conditions (6.3.12b) and the definition of the vector of unknowns y given by (6.3.18c). Newton's iteration involves solving

   F_y(y^(ν)) (y^(ν+1) − y^(ν)) = −F(y^(ν)),   ν = 0, 1, ...,   (6.3.28a)

where

   F(y) = [ F_1(y), F_2(y), ..., F_{N−1}(y) ]^T,

   F_y(y) = [ ∂F_1/∂y_1      ∂F_1/∂y_2      ...   ∂F_1/∂y_{N−1}
              ∂F_2/∂y_1      ∂F_2/∂y_2      ...   ∂F_2/∂y_{N−1}
              ...
              ∂F_{N−1}/∂y_1  ∂F_{N−1}/∂y_2  ...   ∂F_{N−1}/∂y_{N−1} ].   (6.3.28b)

Differentiating (6.3.27), the entries of the Jacobian F_y(y) are

   ∂F_i/∂y_j = { 1 + (h/2) ∂f/∂y'   if j = i − 1,
                 −2 − h² ∂f/∂y      if j = i,
                 1 − (h/2) ∂f/∂y'   if j = i + 1,
                 0                  otherwise.   (6.3.29)

Letting

   b_i^(ν) = 1 + (h/2) (∂f/∂y')(x_i, y_i^(ν), (y_{i+1}^(ν) − y_{i−1}^(ν))/(2h)),   (6.3.30a)

   a_i^(ν) = −2 − h² (∂f/∂y)(x_i, y_i^(ν), (y_{i+1}^(ν) − y_{i−1}^(ν))/(2h)),   (6.3.30b)

   c_i^(ν) = 1 − (h/2) (∂f/∂y')(x_i, y_i^(ν), (y_{i+1}^(ν) − y_{i−1}^(ν))/(2h))   (6.3.30c)

gives

   F_y(y^(ν)) = [ a_1^(ν)  c_1^(ν)
                  b_2^(ν)  a_2^(ν)  c_2^(ν)
                           .        .        .
                             b_{N−2}^(ν)  a_{N−2}^(ν)  c_{N−2}^(ν)
                                          b_{N−1}^(ν)  a_{N−1}^(ν) ].   (6.3.30d)

Each Newton iteration requires the solution of a tridiagonal system. The Jacobian of this system need not be reevaluated and factored after each Newton step; thus, only the forward and backward substitution steps of the tridiagonal algorithm shown in Figure 6.3.2 need be performed at each such iterative step. The derivatives ∂f/∂y and ∂f/∂y' can be approximated by finite differences. Convergence of Newton's method is typically quadratic except at a bifurcation point, where it is often linear. The use of finite difference approximations in the Jacobian also slows the convergence rate.

Example 6.3.3. Consider the elastica problem described in Section 6.1 and repeated here using the notation of this section as

   y'' + P sin y = 0,   y(0) = y(1/2) = 0.
Hence,

   f(x, y, y') = −P sin y,   ∂f/∂y = −P cos y,   ∂f/∂y' = 0,

and

   b_i^(ν) = c_i^(ν) = 1,   a_i^(ν) = −2 + h² P cos y_i^(ν).

Using the convergence criterion

   ||F(y^(ν))||_∞ = max_{1≤i≤N−1} |F_i(y^(ν))| ≤ 10⁻⁹

and setting P = 40, we found the number of Newton iterations and the approximate solution at x = 0.25 to be as recorded in Table 6.3.2. The number of Newton iterations decreases as the mesh becomes finer; this is a result of the solution appearing smoother on finer meshes. The convergence rate appears to be nearly quadratic.

h       K    i    y_i
0.1     5    5    0.41240948
0.05    4    10   0.34795042
0.025   3    20   0.32985549

Table 6.3.2: Number of iterations K to reach convergence and the approximate solution at x = 0.25 for Example 6.3.3.
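A sketch reproducing this computation follows; it forms the Jacobian (6.3.30) as a dense matrix for clarity rather than calling the tridiagonal algorithm, and the starting guess is an assumption (any nonzero hump-shaped iterate should do, though the iteration count depends on the guess).

    # Newton iteration (6.3.28) for the finite difference elastica equations
    # y_{i-1} - 2 y_i + y_{i+1} + h^2 P sin(y_i) = 0, with y_0 = y_N = 0.
    import numpy as np

    P, N = 40.0, 20
    h = 0.5 / N
    y = np.sin(2 * np.pi * np.arange(1, N) * h)    # nonzero starting guess
    for k in range(50):
        ypad = np.concatenate(([0.0], y, [0.0]))   # apply y_0 = y_N = 0
        F = ypad[:-2] - 2*y + ypad[2:] + h*h*P*np.sin(y)
        if np.max(np.abs(F)) <= 1e-9:              # convergence test above
            break
        J = (np.diag(-2.0 + h*h*P*np.cos(y))       # a_i^(nu) of (6.3.30b)
             + np.diag(np.ones(N - 2), 1) + np.diag(np.ones(N - 2), -1))
        y -= np.linalg.solve(J, F)                 # Newton step (6.3.28a)
    print(k, y[N // 2 - 1])                        # approximate y at x = 0.25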
The development and description of finite difference equations may be simplified by introducing a set of finite difference operators as shown in Table 6.3.3. The next several examples illustrate some applications of these finite difference operators.

Example 6.3.4. The centered difference formula (6.3.7) can be expressed in terms of the central difference and averaging operators δ and μ as

   y_i' ≈ μδ y_i / h = μ(y_{i+1/2} − y_{i−1/2})/h = (y_{i+1} − y_{i−1})/(2h).

Example 6.3.5. An operator raised to a positive integer power is iterated, e.g.,

   δ² y_i = δ(y_{i+1/2} − y_{i−1/2}) = y_{i+1} − 2y_i + y_{i−1}.

Thus, the centered second difference approximation (6.3.8) of the second derivative can be written as

   y_i'' ≈ δ² y_i / h².

Example 6.3.6. Expanding y(x_{i+1}) in a Taylor's series about x_i yields

   y(x_{i+1}) = y(x_i) + h y'(x_i) + (h²/2) y''(x_i) + (h³/6) y'''(x_i) + ... .

Using the derivative operator D defined in Table 6.3.3,

   y(x_{i+1}) = [1 + hD + (h²/2!) D² + ...] y(x_i).

Operator               Symbol   Definition
Forward difference     Δ        Δy_i := y_{i+1} − y_i
Backward difference    ∇        ∇y_i := y_i − y_{i−1}
Central difference     δ        δy_i := y_{i+1/2} − y_{i−1/2}
Average                μ        μy_i := (y_{i+1/2} + y_{i−1/2})/2
Shift                  E        Ey_i := y_{i+1}
Derivative             D        Dy_i := y_i'

Table 6.3.3: Definition of finite difference operators.

This suggests the shorthand operator notation

   E y(x_i) = y(x_{i+1}) = e^{hD} y(x_i),

where E is the shift operator (Table 6.3.3). We thus infer the identity between the shift, exponential, and derivative operators

   E = e^{hD}.   (6.3.31)

Additional relationships can be obtained by noting that Δy_i = (E − 1)y_i, which implies that Δ = E − 1, or E = 1 + Δ. Using this with (6.3.31) gives

   hD = ln E = ln(1 + Δ) = Δ − Δ²/2 + Δ³/3 − ...,   (6.3.32a)

where the series expansion of ln(1 + x), |x| < 1, has been used. A similar relation in terms of the backward difference operator can be constructed by noting that ∇ = 1 − E⁻¹; thus,

   hD = ln E = −ln(1 − ∇) = ∇ + ∇²/2 + ∇³/3 + ... .   (6.3.32b)

These identities can be used to derive high-order finite difference approximations of first derivatives. For example, suppose that we retain the first two terms in (6.3.32a), i.e.,

   hD y_i ≈ [Δ − Δ²/2] y_i,

or

   hD y_i ≈ (y_{i+1} − y_i) − (y_{i+2} − 2y_{i+1} + y_i)/2,

or

   D y_i ≈ (−y_{i+2} + 4y_{i+1} − 3y_i)/(2h).

This formula can be verified as an O(h²) approximation of y'(x_i).

Example 6.3.7. Let us use (6.3.31) with h replaced by h/2 to obtain

   y(x_{i+1/2}) = e^{hD/2} y(x_i),   y(x_{i−1/2}) = e^{−hD/2} y(x_i).

Subtracting the two formulas and using the central difference operator gives

   δ y(x_i) = (e^{hD/2} − e^{−hD/2}) y(x_i) = (2 sinh(hD/2)) y(x_i).

Thus,

   δ = 2 sinh(hD/2),

or

   hD = 2 sinh⁻¹(δ/2) = δ − (1/(2²·3!)) δ³ + (3²/(2⁴·5!)) δ⁵ − ...,   (6.3.33)

which can be used to construct central difference approximations of y'(x_i).

Example 6.3.8. We can square, cube, etc., relations (6.3.32) and (6.3.33) to construct approximations of second, third, etc., derivatives. For example, squaring (6.3.33) gives

   h² D² y(x_i) = h² y''(x_i) = [δ² − δ⁴/12 + δ⁶/90 + ...] y(x_i).   (6.3.34)

At some point, these formal manipulations would have to be verified as being correct and estimates of their local discretization errors would have to be obtained. Nevertheless, using the formal operators of Table 6.3.3 provides us with a simple way of developing high-order finite difference approximations.
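As a quick numerical check of these operator-derived formulas (a sketch using y = sin x at x = 1), the one-sided formula of Example 6.3.6 and the central third difference (6.3.10) should both show errors that decrease by roughly a factor of four when h is halved:

    import numpy as np

    x = 1.0
    for h in (0.1, 0.05, 0.025):
        # one-sided O(h^2) first derivative from Example 6.3.6
        d1 = (-np.sin(x + 2*h) + 4*np.sin(x + h) - 3*np.sin(x)) / (2*h)
        # central O(h^2) third derivative (6.3.10); exact y''' = -cos x
        d3 = (np.sin(x + 2*h) - 2*np.sin(x + h)
              + 2*np.sin(x - h) - np.sin(x - 2*h)) / (2*h**3)
        print(h, abs(d1 - np.cos(x)), abs(d3 + np.cos(x)))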

6.4 Introduction to Collocation Methods


Unlike finite difference methods, projection methods such as collocation give a continuous approximation of the solution as a function of x. The basic idea is to approximate the solution y(x) of a BVP by a simpler function Y(x) and then determine Y(x) so that it is the "best" approximation of y(x) in some sense. Two reasonable choices for Y(x) are a discrete Fourier series

   Y(x) = Σ_{k=0}^{M−1} c_k e^{ikx}

and a polynomial

   Y(x) = Σ_{k=0}^{M−1} c_k x^k.

It is convenient to regard the BVP solution y(x) as an element of an infinite-dimensional function space and Y(x) as an element of an M-dimensional subspace of it. Thus, assuming that y(x) has continuous second derivatives on a < x < b, we would write y(x) ∈ C²(a, b), which is read "y(x) is an element of the space of functions that have continuous second derivatives on (a, b)." Then Y(x) ∈ S^M ⊂ C²(a, b), where the space S^M consists of those C² functions having a prescribed form. The chosen functions, e.g., e^{ikx} or x^k, k = 0, 1, ..., M − 1, comprise a basis for S^M.

In order to introduce some concepts, we'll again focus on the second-order, nonlinear, scalar BVP (6.3.1). After selecting a basis, the "coordinates" c_k, k = 0, 1, ..., M − 1, can be determined by, e.g., the least squares technique

   min_{Y ∈ S^M} ∫_a^b R²(x) dx,

where R(x) is the residual

   R(x) = Y'' − f(x, Y, Y').

In this case, it is clear that Y(x) is the "best" approximation of y(x) in the sense of minimizing the integral of the square of the residual. Using Galerkin's method, we determine Y(x) so that the residual R(x) is "orthogonal" to every function in S^M, i.e.,

   ∫_a^b w(x) R(x) dx = 0,   for all w(x) ∈ S^M.

The optimality of this procedure is not clear; however, since Galerkin's method is primarily used with partial differential equations, we will not pursue it further.

Collocation has been shown to be a successful procedure for two-point BVPs and is the one on which we focus. Collocation consists of satisfying

   R(ξ_i) = 0,   i = 1, 2, ..., M,

with

   a ≤ ξ_1 < ξ_2 < ... < ξ_M ≤ b.

The optimality of collocation is also not clear, but we'll pursue this elsewhere.

Global approximations such as the Fourier series and the polynomials introduced above lead to ill-conditioned algebraic problems. It is far better to use piecewise polynomial approximations, which result in sparse and well-conditioned algebraic systems. It is also unwise to infer more continuity than necessary. Discontinuous and continuous piecewise polynomial approximations might have the forms shown in Figure 6.4.1. The discontinuous polynomial (on the left) has jumps at x_i, i = 1, 2, ..., N − 1. Thus, the first derivative doesn't exist at these points and this would be an unsuitable function to approximate the solution of a second-order ODE. The continuous approximation (on the right) has jumps in its first derivative at x_i, i = 1, 2, ..., N − 1, and its second derivative doesn't exist at these points. Hence, minimally, Y(x) ∈ C¹(a, b). In this case, the first derivative of Y(x) is continuous and the second derivative is piecewise continuous.

Figure 6.4.1: Discontinuous (left) and continuous (right) piecewise polynomial functions Y(x) having jumps (left) and jumps in the first derivative (right) at x_i, i = 1, 2, ..., N − 1.

Perhaps the simplest way of satisfying the continuity requirements is to select a basis for S^M that includes them. For example, a basis for a space of piecewise constant

functions could be chosen as

   φ_i^0(x) = { 1 if x ∈ [x_{i−1}, x_i),
                0 otherwise,           i = 1, 2, ..., N.   (6.4.1)

The approximation Y(x) would then be written in the form

   Y(x) = Σ_{i=1}^N c_i φ_i^0(x),   (6.4.2)

and is shown in Figure 6.4.2. (In this case, the dimension of the subspace M and the number of subintervals N are identical; however, this need not be so.)

Figure 6.4.2: Piecewise constant basis element φ_i^0(x) and the resulting piecewise constant approximation Y(x).

Using (6.4.1) and (6.4.2) we see that

   Y(x) = c_i,   x ∈ [x_{i−1}, x_i).

We may interpret c_i as Y(x_{i−1/2}); however, this is not necessary. As shown in Figure 6.4.2, the basis φ_i^0(x) ∈ C⁻¹[a, b]; thus, Y(x) ∈ C⁻¹[a, b].

Of course, the basis (6.4.1) doesn't satisfy any continuity requirements; however, it can be used to generate a continuous basis. More generally, a piecewise-polynomial basis whose continuity increases with increasing degree can be constructed by integrating a linear combination of basis elements of a piecewise-polynomial space having one degree

less than the desired degree. For example, let us construct a C⁰ piecewise-linear basis by integrating φ_i^0 and φ_{i+1}^0 as

   φ_i^1(x) = ∫_{x_0}^x [α φ_i^0(s) + β φ_{i+1}^0(s)] ds.

The appropriate continuity will be automatically obtained by the integration. The result for this example is

   φ_i^1(x) = { 0                      if x < x_{i−1},
                α (x − x_{i−1})        if x_{i−1} ≤ x < x_i,
                α h_i + β (x − x_i)    if x_i ≤ x < x_{i+1},
                α h_i + β h_{i+1}      if x_{i+1} ≤ x,

where

   h_i = x_i − x_{i−1}.   (6.4.3)

The parameters α and β are at our disposal. Let us pick them so that the resulting approximation has compact support, i.e., so that

   φ_i^1(x) = 0,   x ≥ x_{i+1}.

Let us, furthermore, normalize the basis so that

   φ_i^1(x_i) = 1.

Then we find

   α = 1/h_i,   β = −1/h_{i+1},   (6.4.4)

and the final result

   φ_i^1(x) = { (x − x_{i−1})/h_i      if x_{i−1} ≤ x < x_i,
                (x_{i+1} − x)/h_{i+1}  if x_i ≤ x < x_{i+1},
                0                      otherwise.

The approximation Y(x) has the form

   Y(x) = Σ_{i=0}^N c_i φ_i^1(x).   (6.4.5)

Figure 6.4.3: Piecewise-linear basis element φ_i^1(x) and the resulting piecewise-linear approximation.

The basis element φ_i^1(x) and the piecewise-linear approximation Y(x) are shown in Figure 6.4.3. Using (6.4.4) we see that φ_i^1(x_j) = δ_{ij}, where δ_{ij} is the Kronecker delta. Using this with (6.4.5) yields Y(x_i) = c_i, i = 0, 1, ..., N. The restriction of Y(x) to the subinterval [x_{i−1}, x_i) is the linear function

   Y(x) = c_{i−1} φ_{i−1}^1(x) + c_i φ_i^1(x),   x ∈ [x_{i−1}, x_i).
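A minimal sketch of evaluating (6.4.5) from the hat basis follows; hat_eval is an illustrative name, and the test interpolates sin(πx) on six nodes:

    # Evaluate Y(x) = sum_i c_i phi^1_i(x) for the hat basis (6.4.4) on a mesh xm.
    import numpy as np

    def hat_eval(xm, c, x):
        i = np.clip(np.searchsorted(xm, x, side='right'), 1, len(xm) - 1)
        h = xm[i] - xm[i - 1]
        # restriction to [x_{i-1}, x_i): c_{i-1} phi^1_{i-1} + c_i phi^1_i
        return c[i - 1] * (xm[i] - x) / h + c[i] * (x - xm[i - 1]) / h

    xm = np.linspace(0.0, 1.0, 6)
    print(hat_eval(xm, np.sin(np.pi * xm), np.array([0.3, 0.5])))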

We'll continue by constructing a C¹ piecewise-quadratic polynomial basis as

   φ_i^2(x) = ∫_{x_0}^x [α φ_{i−1}^1(s) + β φ_i^1(s)] ds.

Proceeding in three steps, we see that

   ∫_{x_0}^x φ_i^1(s) ds = { 0                                              if x < x_{i−1},
                             (x − x_{i−1})²/(2h_i)                          if x_{i−1} ≤ x < x_i,
                             h_i/2 + h_{i+1}/2 − (x_{i+1} − x)²/(2h_{i+1})  if x_i ≤ x < x_{i+1},
                             h_i/2 + h_{i+1}/2                              if x_{i+1} ≤ x,

and

   φ_i^2(x) = (1/2) { α (x − x_{i−2})²/h_{i−1}                                        if x_{i−2} ≤ x < x_{i−1},
                      α [h_{i−1} + h_i − (x_i − x)²/h_i] + β (x − x_{i−1})²/h_i       if x_{i−1} ≤ x < x_i,
                      α (h_{i−1} + h_i) + β [h_i + h_{i+1} − (x_{i+1} − x)²/h_{i+1}]  if x_i ≤ x < x_{i+1},
                      α (h_{i−1} + h_i) + β (h_i + h_{i+1})                           if x_{i+1} ≤ x,

with φ_i^2(x) = 0 for x < x_{i−2}. Enforcing the condition that φ_i^2(x) = 0 for x ≥ x_{i+1} implies

   α = 2/(h_{i−1} + h_i),   β = −2/(h_i + h_{i+1}).

Thus,

   φ_i^2(x) = { (x − x_{i−2})²/(h_{i−1}(h_{i−1} + h_i))                                   if x_{i−2} ≤ x < x_{i−1},
                1 − (x_i − x)²/(h_i(h_{i−1} + h_i)) − (x − x_{i−1})²/(h_i(h_i + h_{i+1}))  if x_{i−1} ≤ x < x_i,
                (x_{i+1} − x)²/(h_{i+1}(h_i + h_{i+1}))                                   if x_i ≤ x < x_{i+1},
                0                                                                         otherwise.   (6.4.6a)

Normalizing the result so that φ_i^2(x_{i−1/2}) = 1 amounts to dividing (6.4.6a) by

   λ_i = 1 − h_i/(4(h_{i−1} + h_i)) − h_i/(4(h_i + h_{i+1})).   (6.4.6b)

Figure 6.4.4: Quadratic spline basis element φ_i^2(x) relative to a mesh with x_{i−2} = −1, x_{i−1} = 0, x_i = 1, and x_{i+1} = 2.

The basis {φ_i^2(x)}_{i=1}^N defines a quadratic spline. The element φ_i^2 is shown in Figure 6.4.4. Evaluating the normalized basis (6.4.6a,b) at a node x_j yields

   φ_i^2(x_j) = { h_{i−1}/(λ_i(h_{i−1} + h_i))   if j = i − 1,
                  h_{i+1}/(λ_i(h_i + h_{i+1}))   if j = i,
                  0                              otherwise.   (6.4.6c)

The basis simplifies when the mesh is uniform; thus, if h_i = h, i = 1, 2, ..., N,

   φ_i^2(x) = (2/3) { ((x − x_{i−2})/h)²                          if x_{i−2} ≤ x < x_{i−1},
                      2 − ((x_i − x)/h)² − ((x − x_{i−1})/h)²     if x_{i−1} ≤ x < x_i,
                      ((x_{i+1} − x)/h)²                          if x_i ≤ x < x_{i+1},
                      0                                           otherwise.   (6.4.6d)

The quadratic spline approximation is written in the form

   Y(x) = Σ_{i=1}^N c_i φ_i^2(x),   (6.4.7a)

and its restriction to [x_{i−1}, x_i) is

   Y(x) = { c_1 φ_1^2(x) + c_2 φ_2^2(x)                              for x ∈ [x_0, x_1),
            c_{i−1} φ_{i−1}^2(x) + c_i φ_i^2(x) + c_{i+1} φ_{i+1}^2(x)  for x ∈ [x_{i−1}, x_i), i ≠ 1, N,
            c_{N−1} φ_{N−1}^2(x) + c_N φ_N^2(x)                      for x ∈ [x_{N−1}, x_N).   (6.4.7b)

Thus, three elements of the spline basis are nonzero on any interval except the first and the last, where there are only two nonzero elements. Using (6.4.6c) with (6.4.7b) we see that

   Y(x_i) = { c_1 h_0/(λ_1(h_0 + h_1))                                              if i = 0,
              c_i h_{i+1}/(λ_i(h_i + h_{i+1})) + c_{i+1} h_i/(λ_{i+1}(h_i + h_{i+1}))  if i = 1, 2, ..., N − 1,
              c_N h_{N+1}/(λ_N(h_N + h_{N+1}))                                       if i = N.   (6.4.7c)

One could take h_0 = h_1 and h_{N+1} = h_N. When solving BVPs or interpolation and approximation problems, it's convenient to introduce an extra basis element and unknown at each end of the domain and write Y(x) as

   Y(x) = Σ_{i=0}^{N+1} c_i φ_i^2(x).   (6.4.8)

Now there are three nonzero basis elements on every subinterval. For interpolation problems, the coefficients c_i, i = 0, 1, ..., N + 1, may be determined by satisfying

   Y(x_{i−1/2}) = f(x_{i−1/2}),   i = 1, 2, ..., N.

This yields N equations for the N + 2 unknowns. The extra two equations could be obtained by interpolating f(x) at x_0 and x_N,

prescribing f'(x) at x_0 and x_N, or prescribing Y'(x_0) = Y'(x_N) = 0. Regardless of the boundary prescription, this interpolation problem requires the solution of a tridiagonal linear algebraic problem to determine c_i, i = 0, 1, ..., N + 1.

It's possible to continue in this manner, introducing more continuity with increasing polynomial degree; however, usually we'll want approximations having the minimum allowable continuity. Since higher-degree approximations increase the convergence rate for smooth solutions, we seek alternative spline constructions that do not increase smoothness with increasing polynomial degree. Such procedures exist [3]; however, for the moment, we'll examine piecewise cubic Hermite approximation. Thus, let us consider a function of the form

   Y(x) = Σ_{i=0}^N [c_i φ_i^3(x) + d_i ω_i^3(x)].   (6.4.9)

The basis {φ_i^3(x), ω_i^3(x)}_{i=0}^N is constructed to satisfy the following conditions:

1. φ_i^3(x) and ω_i^3(x) are piecewise cubic polynomials on (x_0, x_N);

2. φ_i^3(x), ω_i^3(x) ∈ C¹(x_0, x_N);

3. φ_i^3(x) and ω_i^3(x) are nonzero only on [x_{i−1}, x_{i+1}); and

4. φ_i^3(x_i) = 1 and ω_i^3(x_i) = 0, i = 0, 1, ..., N.

These conditions imply that

   φ_i^3(x_j) = δ_{ij},   φ_i^{3′}(x_j) = 0,   i, j = 0, 1, ..., N,   (6.4.10a)

   ω_i^3(x_j) = 0,   ω_i^{3′}(x_j) = δ_{ij},   i, j = 0, 1, ..., N.   (6.4.10b)

Together, (6.4.9) and (6.4.10) give Y(x_i) = c_i and Y'(x_i) = d_i, i = 0, 1, ..., N. Given these requirements, the piecewise cubic Hermite basis is

   φ_i^3(x) = { 1 − 3((x − x_i)/h_i)² − 2((x − x_i)/h_i)³          if x_{i−1} ≤ x < x_i,
                1 − 3((x − x_i)/h_{i+1})² + 2((x − x_i)/h_{i+1})³  if x_i ≤ x < x_{i+1},
                0                                                  otherwise,   (6.4.11a)

Figure 6.4.5: Cubic Hermite polynomial basis elements φ_i^3(x) (solid) and ω_i^3(x) (dashed) relative to a mesh with x_{i−1} = −1, x_i = 0, and x_{i+1} = 1.

   ω_i^3(x) = { (x − x_i) [1 + (x − x_i)/h_i]²      if x_{i−1} ≤ x < x_i,
                (x − x_i) [1 − (x − x_i)/h_{i+1}]²  if x_i ≤ x < x_{i+1},
                0                                   otherwise.   (6.4.11b)

Representative basis elements are illustrated in Figure 6.4.5. Examining (6.4.9), we see that there are four nontrivial basis elements, φ_{i−1}^3, ω_{i−1}^3, φ_i^3, and ω_i^3, on the subinterval (x_{i−1}, x_i), i = 1, 2, ..., N.

Having constructed appropriate piecewise polynomial approximations, let us use them to define a collocation method by satisfying (6.3.1) at a prescribed number of points per subinterval and (typically) satisfying the boundary conditions. Thus,

   Y''(ξ_{ij}) = f(ξ_{ij}, Y(ξ_{ij}), Y'(ξ_{ij})),   j = 1, 2, ..., J,   i = 1, 2, ..., N,   (6.4.12a)

   Y(a) = A,   Y(b) = B.   (6.4.12b)

As indicated, there are J collocation points ξ_{ij}, j = 1, 2, ..., J, per subinterval. For the piecewise quadratic spline approximation (6.4.8), we would determine the N + 2 unknowns c_i, i = 0, 1, ..., N + 1, by collocating at J = 1 point per subinterval and satisfying the boundary conditions (6.4.12b). With the piecewise cubic Hermite polynomial (6.4.9),

we would determine the 2(N + 1) unknowns c_i, d_i, i = 0, 1, ..., N, by collocating at J = 2 points per subinterval and satisfying (6.4.12b).

Example 6.4.1. We'll develop the collocation equations when piecewise quadratic splines (6.4.8) are applied to a linear problem with f(x, y, y') given by (6.3.13). For simplicity, we'll assume that the mesh is uniform with spacing h. Utilizing (6.4.6) and (6.4.8), the boundary conditions (6.4.12b) are

   Y(a) = A = (2/3)(c_0 + c_1),   Y(b) = B = (2/3)(c_N + c_{N+1}).   (6.4.13a)

Figure 6.4.6: Piecewise quadratic spline basis used for collocation on a mesh with spacing h = 0.2 on [0, 1].

Selecting the sole collocation point

   ξ_{i1} = x_{i−1/2} = (x_{i−1} + x_i)/2,   i = 1, 2, ..., N,

at the center of each subinterval (Figure 6.4.6) and using (6.4.6) and (6.4.8), we have

   Y(x_{i−1/2}) = (1/6)(c_{i−1} + 6c_i + c_{i+1})

and

   Y'(x_{i−1/2}) = (2/(3h))(c_{i+1} − c_{i−1}),   Y''(x_{i−1/2}) = (4/(3h²))(c_{i−1} − 2c_i + c_{i+1}).

Substituting these results into (6.4.12a) while using (6.3.13), we find

   (4/(3h²))(c_{i−1} − 2c_i + c_{i+1}) + p_{i−1/2} (2/(3h))(c_{i+1} − c_{i−1}) + q_{i−1/2} (1/6)(c_{i−1} + 6c_i + c_{i+1}) = r_{i−1/2},
   i = 1, 2, ..., N.   (6.4.13b)

These difference equations are similar, but not identical, to the central finite difference equations (6.3.14). The relationship can be made more precise by rewriting (6.4.13) in terms of the nodal unknowns Y(x_i) and comparing the result with (6.3.14) (Problem 2). The system (6.4.13) may be written in tridiagonal form as

   [ α_0  γ_0                        ] [ c_0     ]   [ A         ]
   [ β_1  α_1  γ_1                   ] [ c_1     ]   [ r_{1/2}   ]
   [      .    .    .                ] [ ...     ] = [ ...       ]   (6.4.14a)
   [        β_N  α_N  γ_N            ] [ c_N     ]   [ r_{N−1/2} ]
   [             β_{N+1}  α_{N+1}    ] [ c_{N+1} ]   [ B         ]

where

   α_0 = γ_0 = β_{N+1} = α_{N+1} = 2/3,   (6.4.14b)

   β_i = 4/(3h²) − 2p_{i−1/2}/(3h) + q_{i−1/2}/6,   i = 1, 2, ..., N,   (6.4.14c)

   α_i = −8/(3h²) + q_{i−1/2},   γ_i = 4/(3h²) + 2p_{i−1/2}/(3h) + q_{i−1/2}/6,   i = 1, 2, ..., N.   (6.4.14d)
As a simple numerical example, let's suppose p = 0, q = −1, r = 0, A = 0, and B = 1. This is the problem of Example 6.3.2, which has the exact solution

   y(x) = sinh x / sinh 1.

Solution and pointwise (at x_i, i = 0, 1, ..., N) error data are presented in Table 6.4.1 for N = 10. The maximum pointwise errors for collocation solutions computed with

N = 5, 10, and 20 subintervals are presented in Table 6.4.2. Solutions appear to be converging as O(h²). Pointwise errors are about half of those found using central finite differences (Table 6.3.1).

i     x_i   Y(x_i)      y(x_i) − Y(x_i)
0     0.0   0.0000000   0.0000000E+00
1     0.1   0.0852282   0.5491078E-05
2     0.2   0.1713098   0.1069903E-04
3     0.3   0.2591066   0.1528859E-04
4     0.4   0.3494977   0.1892447E-04
5     0.5   0.4433881   0.2130866E-04
6     0.6   0.5417180   0.2211332E-04
7     0.7   0.6454719   0.2080202E-04
8     0.8   0.7556885   0.1698732E-04
9     0.9   0.8734714   0.1043081E-04
10    1.0   1.0000000   0.2384186E-06

Table 6.4.1: Solution and pointwise errors for Example 6.4.1 using collocation with piecewise quadratic splines at the center of each of ten subintervals.

N     ||Y − y||_∞        N² ||Y − y||_∞
5     0.8890263 × 10⁻⁴   0.002223
10    0.2214184 × 10⁻⁴   0.002214
20    0.5537689 × 10⁻⁵   0.002215

Table 6.4.2: Maximum pointwise errors for the solution of Example 6.4.1 using collocation with piecewise quadratic splines at the center of each of N subintervals.
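A sketch assembling and solving (6.4.14) for this model problem (p = 0, q = −1, r = 0, A = 0, B = 1) on a uniform mesh follows; a dense solve stands in for the tridiagonal algorithm, and the final line uses the nodal identity of Problem 2. The printed maximum nodal error should be close to the N = 10 entry of Table 6.4.2.

    # Quadratic-spline collocation (6.4.13)-(6.4.14) for y'' - y = 0, y(0)=0, y(1)=1.
    import numpy as np

    N = 10
    h = 1.0 / N
    q = -np.ones(N)                                 # q(x) = -1; p = r = 0
    M = np.zeros((N + 2, N + 2))
    f = np.zeros(N + 2)
    M[0, 0] = M[0, 1] = 2.0 / 3.0;  f[0] = 0.0      # Y(0) = A = 0
    M[N + 1, N] = M[N + 1, N + 1] = 2.0 / 3.0;  f[N + 1] = 1.0   # Y(1) = B = 1
    for i in range(1, N + 1):
        M[i, i - 1] = 4.0 / (3 * h * h) + q[i - 1] / 6.0         # beta_i
        M[i, i]     = -8.0 / (3 * h * h) + q[i - 1]              # alpha_i
        M[i, i + 1] = 4.0 / (3 * h * h) + q[i - 1] / 6.0         # gamma_i
    c = np.linalg.solve(M, f)
    x = h * np.arange(N + 1)                        # mesh nodes
    Y = (2.0 / 3.0) * (c[:-1] + c[1:])              # Y(x_i) = (2/3)(c_i + c_{i+1})
    print(np.max(np.abs(Y - np.sinh(x) / np.sinh(1.0))))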

Example 6.4.2. In the previous example, it seemed natural to place the single collocation point at the center of each subinterval. Were we to use piecewise cubic Hermite approximations, however, we would have two collocation points per subinterval. Placing them at the ends of each subinterval is one possibility; however, our work with implicit Runge-Kutta methods (Section 3.3) would suggest that the Gauss-Legendre points give a higher rate of convergence. de Boor and Swartz [2] showed that this is the case, and we will repeat this analysis in Chapter 9; however, for the moment, let us assume it so and consider the collocation solution of ([1], Chapter 5)

   y'' + y'/x = (8/(8 − x²))²,   0 < x < 1,   y'(0) = y(1) = 0,

which has the exact solution

   y(x) = 2 ln(7/(8 − x²)).

The solution is smooth on 0 ≤ x ≤ 1, but the coefficient p(x) = 1/x is unbounded at x = 0. This would lead to problems with finite difference or shooting methods, but not with collocation methods that collocate at points other than subinterval ends. Ascher et al. [1] solve this problem by a variety of techniques. We'll report their results using piecewise cubic Hermite polynomials with collocation at the two Gauss-Legendre points

   ξ_{i1} = (1/2)[(x_{i−1} + x_i) − h_i/√3],   ξ_{i2} = (1/2)[(x_{i−1} + x_i) + h_i/√3]

per subinterval. The maximum pointwise errors are presented in Table 6.4.3. A simple calculation verifies that the solution is converging as O(N⁻⁴).

N     ||Y − y||_∞
2     0.20 × 10⁻³
5     0.64 × 10⁻⁵
10    0.46 × 10⁻⁶
20    0.33 × 10⁻⁷
40    0.23 × 10⁻⁸
80    0.16 × 10⁻⁹

Table 6.4.3: Maximum pointwise errors for the solution of Example 6.4.2 using collocation with piecewise cubic Hermite polynomials at two Gauss-Legendre points per subinterval.

Problems

1. Construct the basis for a cubic spline approximation that is of class C² and has support on [x_{i−2}, x_{i+2}] by integrating the quadratic splines φ_i^2(x) and φ_{i+1}^2(x) of (6.4.6) and imposing appropriate normalization and compact support conditions. For simplicity, assume that the mesh is uniform with spacing h.

2. Using (6.4.7c), show that

   Y(x_i) = (2/3)(c_i + c_{i+1})

on a uniform mesh of spacing h. Use this to rewrite the quadratic-spline collocation equations (6.4.13) in terms of Y(x_i), i = 1, 2, ..., N, instead of c_i, i = 0, 1, ..., N + 1. Compare the results with the finite difference equations (6.3.14).

Bibliography
[1] U.M. Ascher, R. Mattheij, and R. Russell. Numerical Solution of Boundary Value Problems for Ordinary Differential Equations. SIAM, Philadelphia, second edition, 1995.

[2] C. de Boor and B. Swartz. Collocation at Gaussian points. SIAM J. Numer. Anal., 10:582-606, 1973.

[3] C. de Boor. A Practical Guide to Splines. Springer-Verlag, Berlin, 1978.

