
1. Bisection Method (Bolzano)

$$x_2 = \frac{x_0 + x_1}{2}, \qquad x_{n+1} = \frac{x_{n-1} + x_n}{2}$$

For example, with $x_0 = 2$ and $x_1 = 4$: $x_2 = \frac{2 + 4}{2} = 3$.
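A minimal Python sketch of this iteration (the helper name, test equation, and tolerance are illustrative):

```python
def bisection(f, x0, x1, tol=1e-6, max_iter=100):
    """Halve the sign-changing bracket [x0, x1] until it is shorter than tol."""
    if f(x0) * f(x1) > 0:
        raise ValueError("f(x0) and f(x1) must have opposite signs")
    for _ in range(max_iter):
        x2 = (x0 + x1) / 2            # midpoint, as in the formula above
        if f(x0) * f(x2) < 0:
            x1 = x2                   # root lies in [x0, x2]
        else:
            x0 = x2                   # root lies in [x2, x1]
        if abs(x1 - x0) < tol:
            break
    return (x0 + x1) / 2

# Example: root of x^2 - 3 on [1, 2] (approximately 1.7320508)
print(bisection(lambda x: x**2 - 3, 1.0, 2.0))
```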

2. Muller Method

$$x = x_0 - \frac{2c}{b \pm \sqrt{b^2 - 4ac}}$$

(the sign in the denominator is taken so that its magnitude is as large as possible). In this formula $x_0$, $x_1$, $x_2$ are the three given points; substitute their values into the formulas below:

$$a = \frac{h_2 f_1 - (h_1 + h_2) f_0 + h_1 f_2}{h_1 h_2 (h_1 + h_2)}, \qquad
b = \frac{f_1 - f_0 - a h_1^2}{h_1}, \qquad
c = f(x_0)$$

$$h_1 = x_1 - x_0, \qquad h_2 = x_0 - x_2$$
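A sketch of one Muller step built directly from these formulas, assuming $x_0$ is the middle of the three points ($x_2 < x_0 < x_1$); the helper name and test equation are illustrative:

```python
import cmath  # complex sqrt copes with b*b - 4*a*c < 0

def muller_step(f, x2, x0, x1):
    """One Muller iteration; x0 is the middle point."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    h1, h2 = x1 - x0, x0 - x2
    a = (h2 * f1 - (h1 + h2) * f0 + h1 * f2) / (h1 * h2 * (h1 + h2))
    b = (f1 - f0 - a * h1**2) / h1
    c = f0
    root = cmath.sqrt(b**2 - 4 * a * c)
    # pick the sign that makes the denominator larger in magnitude
    denom = b + root if abs(b + root) > abs(b - root) else b - root
    return x0 - 2 * c / denom   # complex; imaginary part ~0 for a real root

# Example: one step toward the root of x^3 - x - 1 (about 1.3247)
print(muller_step(lambda x: x**3 - x - 1, 1.0, 1.2, 1.4))
```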

3. Regula Falsi Method (Method of False Position)

$$x_3 = x_2 - \frac{x_2 - x_1}{f(x_2) - f(x_1)}\, f(x_2), \qquad
x_{n+1} = x_n - \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}\, f(x_n)$$
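A short Python sketch (names and the sample equation are illustrative; the bracket must contain a sign change):

```python
def regula_falsi(f, x0, x1, tol=1e-6, max_iter=100):
    """False position: replace the endpoint that keeps the sign change."""
    x2 = x0
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(f(x2)) < tol:
            break
        if f(x0) * f(x2) < 0:
            x1 = x2                   # root is in [x0, x2]
        else:
            x0 = x2                   # root is in [x2, x1]
    return x2

# Example: root of x^3 - 2x - 5 on [2, 3] (approximately 2.0945515)
print(regula_falsi(lambda x: x**3 - 2*x - 5, 2.0, 3.0))
```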

4. Newton-Raphson Method

$$x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}, \qquad x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
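A minimal sketch, assuming the derivative is supplied by hand (names illustrative):

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until the step is below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: root of x^2 - 3 starting from x0 = 2
print(newton_raphson(lambda x: x**2 - 3, lambda x: 2*x, 2.0))
```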

5. Secant Method

$$x_2 = \frac{x_0 f(x_1) - x_1 f(x_0)}{f(x_1) - f(x_0)}, \qquad
x_{n+1} = \frac{x_{n-1} f(x_n) - x_n f(x_{n-1})}{f(x_n) - f(x_{n-1})}$$
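The same update in Python; no derivative is required (names illustrative):

```python
import math

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Iterate the secant formula from two starting guesses."""
    for _ in range(max_iter):
        x2 = (x0 * f(x1) - x1 * f(x0)) / (f(x1) - f(x0))
        x0, x1 = x1, x2
        if abs(x1 - x0) < tol:
            break
    return x1

# Example: root of cos(x) - x from 0 and 1 (approximately 0.7390851)
print(secant(lambda x: math.cos(x) - x, 0.0, 1.0))
```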

6. Newton’s Formula

$$x_1 = \frac{1}{2}\left(x_0 + \frac{n}{x_0}\right)$$

In this formula, take $x_0$ to be the square root of a perfect square near $n$; for example, for $n = 12$ the nearest perfect squares are 9 and 16, so $x_0 = 3$ or $x_0 = 4$.
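The same iteration as a short sketch (helper name and iteration count are illustrative):

```python
def newton_sqrt(n, x0, iterations=5):
    """Repeat x = (x + n/x) / 2 from a guess near sqrt(n)."""
    x = x0
    for _ in range(iterations):
        x = 0.5 * (x + n / x)
    return x

# Example: sqrt(12); 9 < 12 < 16, so start from x0 = 3 (or 4)
print(newton_sqrt(12, 3.0))   # approximately 3.4641016
```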

7. Graeffe’s Root Squaring Method

$$f(x)\, f(-x) = -\left[a_3^2 x^6 - (a_2^2 - 2a_1 a_3) x^4 + (a_1^2 - 2a_0 a_2) x^2 - a_0^2\right]$$

Setting $t = x^2$, the transformed equation is

$$a_3^2 t^3 - (a_2^2 - 2a_1 a_3)\, t^2 + (a_1^2 - 2a_0 a_2)\, t - a_0^2 = 0,$$

where the $a_i$ are the coefficients of $f$ and the roots in $t$ are the squares of the roots of $f$.
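One root-squaring step for a cubic, taken directly from the coefficient pattern above (helper name and test polynomial are illustrative):

```python
def graeffe_step(a3, a2, a1, a0):
    """Return the coefficients of a3^2 t^3 - (a2^2 - 2 a1 a3) t^2
    + (a1^2 - 2 a0 a2) t - a0^2, whose roots are the squares of the
    roots of a3 x^3 + a2 x^2 + a1 x + a0."""
    return a3**2, a2**2 - 2*a1*a3, a1**2 - 2*a0*a2, a0**2

# Example: x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3
print(graeffe_step(1, -6, 11, -6))   # (1, 14, 49, 36): roots 1, 4, 9
```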

8. Newton forward difference interpolation formula


$$y_x = y_0 + p\,\Delta y_0 + \frac{p(p-1)}{2!}\,\Delta^2 y_0 + \frac{p(p-1)(p-2)}{3!}\,\Delta^3 y_0 + \cdots
\qquad \text{where } p = \frac{x - x_0}{h}$$
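A sketch that builds the forward-difference table and sums the series (names and test data are illustrative):

```python
def newton_forward(xs, ys, x):
    """Newton forward interpolation on equally spaced xs."""
    n = len(ys)
    diffs = [ys[:]]                    # row k holds the k-th differences
    for k in range(1, n):
        prev = diffs[-1]
        diffs.append([prev[i + 1] - prev[i] for i in range(n - k)])
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    term, total = 1.0, ys[0]
    for k in range(1, n):
        term *= (p - (k - 1)) / k      # p(p-1)...(p-k+1)/k!
        total += term * diffs[k][0]
    return total

# Example: table of y = x^2 at x = 0..4, interpolated at x = 2.5
print(newton_forward([0, 1, 2, 3, 4], [0, 1, 4, 9, 16], 2.5))   # 6.25
```

The backward formula in the next section is the mirror image, working from the bottom of the same table with $\nabla$.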
9. Newton backward difference interpolation formula

$$y_x = y_n + p\,\nabla y_n + \frac{p(p+1)}{2!}\,\nabla^2 y_n + \frac{p(p+1)(p+2)}{3!}\,\nabla^3 y_n + \cdots
\qquad \text{where } p = \frac{x - x_n}{h}$$

10. The Lagrange formula for interpolation

$$y = f(x) = \frac{(x - x_1)(x - x_2)\cdots(x - x_n)}{(x_0 - x_1)(x_0 - x_2)\cdots(x_0 - x_n)}\, y_0
+ \frac{(x - x_0)(x - x_2)\cdots(x - x_n)}{(x_1 - x_0)(x_1 - x_2)\cdots(x_1 - x_n)}\, y_1 + \cdots$$
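A direct transcription into Python (names and sample points are illustrative):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)   # one factor of the basis product
        total += term
    return total

# Example: three points on y = x^2, interpolated at x = 1.5
print(lagrange([0, 1, 2], [0, 1, 4], 1.5))   # 2.25
```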

11. The first order divided difference is defined as

$$y[x_0, x_1] = \frac{y_1 - y_0}{x_1 - x_0}$$

Second order divided difference

$$y[x_0, x_1, x_2] = \frac{y[x_1, x_2] - y[x_0, x_1]}{x_2 - x_0}, \qquad
y[x_1, x_2, x_3] = \frac{y[x_2, x_3] - y[x_1, x_2]}{x_3 - x_1}, \qquad
y[x_2, x_3, x_4] = \frac{y[x_3, x_4] - y[x_2, x_3]}{x_4 - x_2}$$

Third order divided difference

$$y[x_0, x_1, x_2, x_3] = \frac{y[x_1, x_2, x_3] - y[x_0, x_1, x_2]}{x_3 - x_0}, \qquad
y[x_1, x_2, x_3, x_4] = \frac{y[x_2, x_3, x_4] - y[x_1, x_2, x_3]}{x_4 - x_1}$$

Fourth order divided difference

$$y[x_0, x_1, x_2, x_3, x_4] = \frac{y[x_1, x_2, x_3, x_4] - y[x_0, x_1, x_2, x_3]}{x_4 - x_0}$$
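A sketch that builds the whole divided-difference table with the recurrence above (names and data are illustrative):

```python
def divided_differences(xs, ys):
    """Row k of the returned table holds the k-th order divided differences."""
    table = [ys[:]]
    for k in range(1, len(xs)):
        prev = table[-1]
        table.append([(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i])
                      for i in range(len(prev) - 1)])
    return table

# Example: y = x^3 at unequally spaced points; the third order value is 1
for row in divided_differences([0, 1, 3, 4], [0, 1, 27, 64]):
    print(row)
```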
12. RICHARDSON’S EXTRAPOLATION METHOD

$$y'(x) = \frac{y(x + h) - y(x - h)}{2h} + E_T, \qquad
F(h) = \frac{y(x + h) - y(x - h)}{2h}$$

$$F_m\!\left(\frac{h}{2^m}\right) = \frac{4^m\, F_{m-1}\!\left(\dfrac{h}{2^m}\right) - F_{m-1}\!\left(\dfrac{h}{2^{m-1}}\right)}{4^m - 1},
\qquad m = 1, 2, 3, \ldots$$
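A sketch of the extrapolation table applied to the central-difference derivative (names, test function, and level count are illustrative):

```python
import math

def richardson_derivative(y, x, h, levels=3):
    """Central difference F(h) refined by Richardson extrapolation."""
    F = lambda step: (y(x + step) - y(x - step)) / (2 * step)
    col = [F(h / 2**m) for m in range(levels + 1)]    # F0 at h, h/2, h/4, ...
    for m in range(1, levels + 1):
        col = [(4**m * col[i + 1] - col[i]) / (4**m - 1)
               for i in range(len(col) - 1)]
    return col[0]

# Example: d/dx sin(x) at x = 0.5; exact value cos(0.5) = 0.8775826
print(richardson_derivative(math.sin, 0.5, 0.4))
```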

13. The Trapezoidal Rule
$$\int_{x_0}^{x_n} f(x)\,dx = \frac{h}{2}\left[y_0 + y_n + 2(y_1 + y_2 + \cdots + y_{n-1})\right] + E_n,
\qquad h = \frac{b - a}{n}$$
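The composite rule in a few lines (names illustrative):

```python
import math

def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    total = f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n))
    return h / 2 * total

# Example: integral of sin(x) over [0, pi] is exactly 2
print(trapezoidal(math.sin, 0, math.pi, 100))
```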

14. Simpson’s 1/3 Rule


$$\int_a^b f(x)\,dx = \frac{h}{3}\left[y_0 + y_n + 4(y_1 + y_3 + y_5 + \cdots) + 2(y_2 + y_4 + \cdots)\right],
\qquad h = \frac{b - a}{2n}$$

$$= \frac{h}{3}\left[y_0 + 4(y_1 + y_3 + \cdots + y_{2N-1}) + 2(y_2 + y_4 + \cdots + y_{2N-2}) + y_{2N}\right]$$

ERROR

$$E = -\frac{x_{2N} - x_0}{180}\, h^4\, y^{(iv)}(\xi)$$
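A sketch of the composite 1/3 rule; here n counts the total subintervals and must be even (names illustrative):

```python
import math

def simpson_13(f, a, b, n):
    """Composite Simpson's 1/3 rule with an even number n of subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd nodes
    total += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # even nodes
    return h / 3 * total

# Example: integral of sin(x) over [0, pi]; exact value 2
print(simpson_13(math.sin, 0, math.pi, 10))
```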

15. Simpson’s 3/8 Rule

$$\int_a^b f(x)\,dx = \frac{3h}{8}\left[y(a) + 3y_1 + 3y_2 + 2y_3 + 3y_4 + 3y_5 + 2y_6 + \cdots + 2y_{n-3} + 3y_{n-2} + 3y_{n-1} + y(b)\right]$$

$$h = \frac{b - a}{3n}$$

ERROR

$$E = -\frac{x_n - x_0}{80}\, h^4\, y^{(iv)}(\xi)$$
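The 3/8 analogue, where every third interior node gets weight 2; in this sketch n counts the total subintervals and must be a multiple of 3 (names illustrative):

```python
import math

def simpson_38(f, a, b, n):
    """Composite Simpson's 3/8 rule; n must be a multiple of 3."""
    if n % 3:
        raise ValueError("n must be a multiple of 3")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (2 if i % 3 == 0 else 3) * f(a + i * h)
    return 3 * h / 8 * total

# Example: integral of sin(x) over [0, pi]; exact value 2
print(simpson_38(math.sin, 0, math.pi, 9))
```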

16. EULER METHOD (Simple)

$$y_{m+1} = y_m + h\, f(t_m, y_m)$$

Here $h$ is the given step size and $y_{m+1}$ is the approximate value at the next point.
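A sketch of the forward march (names and test problem are illustrative):

```python
def euler(f, t0, y0, h, steps):
    """Advance y' = f(t, y) with the simple Euler update."""
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# Example: y' = y, y(0) = 1; y(1) should approach e = 2.71828...
print(euler(lambda t, y: y, 0.0, 1.0, 0.01, 100))
```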

17. MODIFIED EULER’S METHOD

$$y_{m+1} = y_m + h\left[\frac{f(t_m, y_m) + f\!\left(t_{m+1},\, y_{m+1}^{(1)}\right)}{2}\right]$$

18. RUNGE-KUTTA METHOD

$$y_{n+1} = y_n + W_1 k_1 + W_2 k_2 \quad \text{(general second-order form)}$$

This well-known fourth-order R-K method is described in the following steps:

$$y_{n+1} = y_n + \frac{1}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right)$$

where

$$k_1 = h f(t_n, y_n)$$
$$k_2 = h f\!\left(t_n + \frac{h}{2},\; y_n + \frac{k_1}{2}\right)$$
$$k_3 = h f\!\left(t_n + \frac{h}{2},\; y_n + \frac{k_2}{2}\right)$$
$$k_4 = h f(t_n + h,\; y_n + k_3)$$
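The four stages transcribed directly (names and test problem are illustrative):

```python
def rk4(f, t0, y0, h, steps):
    """Classical fourth-order Runge-Kutta for y' = f(t, y)."""
    t, y = t0, y0
    for _ in range(steps):
        k1 = h * f(t, y)
        k2 = h * f(t + h / 2, y + k1 / 2)
        k3 = h * f(t + h / 2, y + k2 / 2)
        k4 = h * f(t + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# Example: y' = y, y(0) = 1; y(1) = 2.7182797..., very close to e
print(rk4(lambda t, y: y, 0.0, 1.0, 0.1, 10))
```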

PREDICTOR-CORRECTOR METHOD

$$P:\quad y_{n+1}^{(0)} = y_n + h\, f(t_n, y_n)$$

$$C:\quad y_{n+1}^{(1)} = y_n + \frac{h}{2}\left[f(t_n, y_n) + f\!\left(t_{n+1},\, y_{n+1}^{(0)}\right)\right]$$
Milne’s Method

$$P:\quad y_{n+1} = y_{n-3} + \frac{4h}{3}\left(2y'_{n-2} - y'_{n-1} + 2y'_n\right)$$

$$C:\quad y_{n+1} = y_{n-1} + \frac{h}{3}\left(y'_{n-1} + 4y'_n + y'_{n+1}\right)$$
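A sketch of the Milne pair; it needs four accurate starting values (names and starting data are illustrative):

```python
import math

def milne(f, ts, ys, h, steps):
    """Milne predictor-corrector; ts, ys hold four starting values."""
    ts, ys = list(ts), list(ys)
    for _ in range(steps):
        d = [f(t, y) for t, y in zip(ts[-3:], ys[-3:])]   # y' at n-2, n-1, n
        y_pred = ys[-4] + 4 * h / 3 * (2 * d[0] - d[1] + 2 * d[2])
        t_next = ts[-1] + h
        y_corr = ys[-2] + h / 3 * (d[1] + 4 * d[2] + f(t_next, y_pred))
        ts.append(t_next)
        ys.append(y_corr)
    return ys[-1]

# Example: y' = y with exact starting values e^t at t = 0, 0.1, 0.2, 0.3
ts0 = [0.0, 0.1, 0.2, 0.3]
ys0 = [math.exp(t) for t in ts0]
print(milne(lambda t, y: y, ts0, ys0, 0.1, 7))   # y(1.0), near e
```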

Adams-Moulton Method

$$P:\quad y_{n+1} = y_n + \frac{h}{24}\left(55y'_n - 59y'_{n-1} + 37y'_{n-2} - 9y'_{n-3}\right)$$

$$C:\quad y_{n+1} = y_n + \frac{h}{24}\left(9y'_{n+1} + 19y'_n - 5y'_{n-1} + y'_{n-2}\right)$$
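The Adams-Moulton pair in the same style (names and starting data are illustrative):

```python
import math

def adams_moulton(f, ts, ys, h, steps):
    """Adams predictor with one Moulton correction; needs 4 start values."""
    ts, ys = list(ts), list(ys)
    for _ in range(steps):
        d = [f(t, y) for t, y in zip(ts[-4:], ys[-4:])]   # y' at n-3 .. n
        y_pred = ys[-1] + h / 24 * (55*d[3] - 59*d[2] + 37*d[1] - 9*d[0])
        t_next = ts[-1] + h
        y_corr = ys[-1] + h / 24 * (9 * f(t_next, y_pred)
                                    + 19*d[3] - 5*d[2] + d[1])
        ts.append(t_next)
        ys.append(y_corr)
    return ys[-1]

# Example: y' = y with exact starting values e^t at t = 0, 0.1, 0.2, 0.3
ts0 = [0.0, 0.1, 0.2, 0.3]
ys0 = [math.exp(t) for t in ts0]
print(adams_moulton(lambda t, y: y, ts0, ys0, 0.1, 7))   # y(1.0), near e
```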

ROMBERG’S INTEGRATION

$$I_{T_m}\!\left(\frac{h}{2^m}\right) = \frac{4^m\, I_{T_{m-1}}\!\left(\dfrac{h}{2^m}\right) - I_{T_{m-1}}\!\left(\dfrac{h}{2^{m-1}}\right)}{4^m - 1}$$

where $m = 1, 2, \ldots$, with $I_{T_0}(h) = I_T(h)$.
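A sketch that fills the Romberg table row by row from trapezoidal estimates (names illustrative):

```python
import math

def romberg(f, a, b, levels=4):
    """Romberg integration: extrapolate trapezoidal estimates R[m][k]."""
    R = []
    for m in range(levels + 1):
        n = 2**m
        h = (b - a) / n
        T = h * (f(a) / 2 + f(b) / 2 + sum(f(a + i * h) for i in range(1, n)))
        row = [T]                                  # trapezoidal, 2^m panels
        for k in range(1, m + 1):                  # extrapolate across row
            row.append((4**k * row[k - 1] - R[m - 1][k - 1]) / (4**k - 1))
        R.append(row)
    return R[-1][-1]

# Example: integral of sin(x) over [0, pi]; exact value 2
print(romberg(math.sin, 0, math.pi))
```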


TAYLOR’S SERIES METHOD

$$y(t) = y(t_0) + (t - t_0)\, y'(t_0) + \frac{(t - t_0)^2}{2!}\, y''(t_0) + \frac{(t - t_0)^3}{3!}\, y'''(t_0) + \frac{(t - t_0)^4}{4!}\, y^{(iv)}(t_0) + \cdots$$
AREA

$$\int_a^b f(x)\,dx$$

VOLUME

$$\int_c^d \int_a^b g(x, y)\,dx\,dy$$
Key concepts:
1. Integration is a summing process. Thus virtually all numerical approximations can be represented by

$$I = \int_a^b f(x)\,dx = \sum_{i=0}^{n} w_i\, f(x_i) + E_t$$

where:
$w_i$ = weights
$x_i$ = sampling points
$E_t$ = truncation error
The $c_k$ are the weighting coefficients given by

$$c_k = \int_a^b L_k(x)\,dx,$$

which are also called Cotes numbers (here $L_k(x)$ is the $k$-th Lagrange basis polynomial).

Equispaced nodes are defined by

$$x_0 = a, \qquad x_n = b, \qquad h = \frac{b - a}{n}, \qquad x_k = x_0 + kh$$
DIFFERENCE OPERATORS:
Applications (Primes)
Using the forward difference operator $\Delta$, the shift operator $E$, the backward difference operator $\nabla$, the average difference operator $\mu$, and the central difference operator $\delta$, we obtain the following formulae:

Forward Primes

$$D y_0 = y_0' = \frac{1}{h}\left(\Delta y_0 - \frac{\Delta^2 y_0}{2} + \frac{\Delta^3 y_0}{3} - \frac{\Delta^4 y_0}{4} + \cdots\right)$$

$$D^2 y_0 = \frac{d^2 y_0}{dx^2} = y_0'' = \frac{1}{h^2}\left(\Delta^2 y_0 - \Delta^3 y_0 + \frac{11}{12}\,\Delta^4 y_0 - \frac{5}{6}\,\Delta^5 y_0 + \cdots\right)$$

Backward Primes

$$\frac{d}{dx} y_n = D y_n = y_n' = \frac{1}{h}\left(\nabla y_n + \frac{\nabla^2 y_n}{2} + \frac{\nabla^3 y_n}{3} + \frac{\nabla^4 y_n}{4} + \cdots\right)$$

$$y_n'' = D^2 y_n = \frac{1}{h^2}\left(\nabla^2 y_n + \nabla^3 y_n + \frac{11}{12}\,\nabla^4 y_n + \frac{5}{6}\,\nabla^5 y_n + \cdots\right)$$

Central Primes

$$\frac{d}{dx} y = y' = D y = \frac{1}{h}\left(\delta y - \frac{1}{24}\,\delta^3 y + \frac{3}{640}\,\delta^5 y - \cdots\right)$$

$$y'' = D^2 y = \frac{1}{h^2}\left(\delta^2 y - \frac{1}{12}\,\delta^4 y + \frac{1}{90}\,\delta^6 y - \cdots\right)$$
First derivative

$$y_i' = \frac{y_{i+1} - y_i}{h}$$

Second derivative

$$y_i'' = \frac{y_{i+2} - 2y_{i+1} + y_i}{h^2}$$
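A quick numerical check of these quotients (test function and step size are illustrative):

```python
import math

h = 1e-4
y = math.sin
x = 0.5
d1 = (y(x + h) - y(x)) / h                       # forward first derivative
d2 = (y(x + 2*h) - 2*y(x + h) + y(x)) / h**2     # forward second derivative
print(d1, d2)   # near cos(0.5) = 0.87758 and -sin(0.5) = -0.47943
```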
Crout’s Reduction

$$[L][U] = [A]$$

with $L$ lower triangular and $U$ unit upper triangular.
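A small sketch of Crout’s scheme, with the diagonal carried by $L$ and a unit diagonal in $U$ (names and test matrix are illustrative):

```python
def crout(A):
    """Factor A = L U, L lower triangular, U unit upper triangular."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[float(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):       # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):   # row j of U
            U[j][i] = (A[j][i]
                       - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

L, U = crout([[4, 3], [6, 3]])
print(L)   # [[4.0, 0.0], [6.0, -1.5]]
print(U)   # [[1.0, 0.75], [0.0, 1.0]]
```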
