ENGINEERING
LECTURE NOTES
Janusz ORKISZ
2007-09-09
Numerical Methods in Civil Engineering I
1. Introduction
1.1. Numerical method
1.2. Errors in numerical computation
1.3. Significant digits
1.4. Number representation
1.5. Error bounds
1.6. Convergence
1.7. Stability
8. Approximation
8.1. Introduction
8.2. Interpolation in 1D space
8.3. Lagrangian Interpolation ( 1D Approximation)
8.4. Inverse Lagrangian Interpolation
8.5. Chebyshev polynomials
8.6. Hermite Interpolation
8.7. Interpolation by spline functions
8.7.1. Introduction
8.7.2. Definition
8.7.3. Extra conditions
8.8. The Best approximation
8.9. Least squares approach
8.10. Inner Product
8.11. The generation of orthogonal functions by the Gram-Schmidt process
8.11.1. Orthonormalization
8.11.2. Weighted orthogonalization
8.11.3. Weighted orthonormalization
8.12. Approximation in a 2D domain
8.12.1. Lagrangian approximation over rectangular domain
9. Numerical differentiation
9.1. By means of approximation and differentiation
9.2. Generation of numerical derivatives by undetermined coefficients method
16. MFDM
16.1. MWLS Approximation
Chapter 1—1/6 2007-11-05
1. INTRODUCTION

Example: iterative evaluation of a square root.

x = √a   →   x² = a ,  a ≥ 0

x = a/x ,  x ≠ 0

x + x = x + a/x   →   x = (1/2) (x + a/x)

− numerical method (iteration):

x_n = (1/2) (x_{n-1} + a/x_{n-1})
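The square-root iteration above can be sketched in Python (an illustrative sketch; the function name, tolerance and iteration cap are my own choices):

```python
def heron_sqrt(a, tol=1e-12, max_iter=100):
    """Iterate x_n = (x_{n-1} + a / x_{n-1}) / 2, starting from x_0 = a."""
    if a < 0:
        raise ValueError("a must be non-negative")
    if a == 0:
        return 0.0
    x = float(a)
    for _ in range(max_iter):
        x_new = 0.5 * (x + a / x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For a = 2 the iterates 2, 1.5, 1.41667, ... reproduce the sequence computed later in section 2.2.1.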
Types of errors:

Inevitable error

(i) Error arising from the inadequacy of the mathematical model

Example: Pendulum

[Figure: mathematical pendulum of length l, angle φ, weight mg, reaction R, acceleration a]
a = l d²φ/dt²  − acceleration

d²φ/dt² + (friction term in (dφ/dt)^n) + (g/l) sin φ = 0  − full nonlinear model

d²φ/dt² + (g/l) φ = 0  − simplified model: small displacement,
linearized equation and no friction

Given data:  l , g , φ(0) , dφ/dt |_{t=0} , ..........
Example:

dφ/dt = f(t, φ(t))

[Figure: exact solution of the ODE vs. its Euler-method approximation, plotted against t]
Numerical errors

Example: truncation of an infinite series

u(x, t) = Σ_{n=1}^{∞} C_n exp(−n²π²t/l²) sin(nπx/l) = Σ_{n=1}^{10} + Σ_{n=11}^{∞} ≈ Σ_{n=1}^{10}
Example: round-off error

x = 2/3 ≈ x̃ = 0.667

absolute error:  ε = x̃ − x

relative error:  δ = (x̃ − x)/x

PRESENTATION OF RESULTS

x_expected = x̃ ± ε = x̃ (1 ± δ)
Example

Number     Number of significant digits     Number   Number of significant digits
2345000    7                                5        1
2.345000   7                                5.0      2
0.023450   5                                5.000    4
0.02345    4
Example: subtraction

                Number of significant digits
 2.3485302      8
−2.3485280      8
 0.0000022      2
Given:  a ± Δa ,  b ± Δb

Sought:  x = a + b

Error evaluation:  |Δx| = |x − a − b| ≤ Δa + Δb
1.6. CONVERGENCE

Example

x_n = (1/2) (x_{n-1} + a/x_{n-1}) ,   lim_{n→∞} x_n = ?

let  x̄ = √a  and

δ_n = (x_n − x̄)/x̄   →   x_n = x̄ (1 + δ_n)

x̄ (1 + δ_n) = (1/2) [ x̄ (1 + δ_{n-1}) + a / ( x̄ (1 + δ_{n-1}) ) ]      (divide by x̄, using x̄² = a)

1 + δ_n = (1/2) [ 1 + δ_{n-1} + 1/(1 + δ_{n-1}) ] = (1/2) [ 1 + δ_{n-1} + (1 + δ_{n-1} − δ_{n-1})/(1 + δ_{n-1}) ] = (1/2) ( 2 + δ²_{n-1}/(1 + δ_{n-1}) )

for

x_0 = a   →   δ_0 > 0   →   δ_{n-1} > 0   →   δ_{n-1}/(1 + δ_{n-1}) < 1

one obtains

δ_n = δ²_{n-1} / ( 2 (1 + δ_{n-1}) ) = (1/2) δ_{n-1} · δ_{n-1}/(1 + δ_{n-1}) < (1/2) δ_{n-1}

δ_n < (1/2) δ_{n-1}   →   the iteration is convergent

lim_{n→∞} δ_n = 0   →   lim_{n→∞} x_n = √a
1.7. STABILITY

A solution is stable if it remains bounded despite truncation and round-off errors.

Let x̃_n = x_n (1 + γ_n), where γ_n is the relative round-off error of step n; then

x̃_n = (1/2) ( x̃_{n-1} + a/x̃_{n-1} ) (1 + γ_n)   →   δ_n = (1/2) ( δ²_{n-1}/(1 + δ_{n-1}) ) (1 + γ_n) + γ_n

Example: unstable calculations
Chapter 2—1/15 2007-11-05
2.1. INTRODUCTION
- source of algebraic equations
- multiple roots
- start from sketch
- iteration methods
2.2.1. Algorithm

Algorithm                      Example

Let

x = f(x)                       x_n = (1/2) (x_{n-1} + a/x_{n-1}) ,   a = 2 ,  x_0 = 2

x_1 = f(x_0)                   x_1 = (1/2) (2 + 2/2) = 3/2 = 1.5000

x_2 = f(x_1)                   x_2 = (1/2) (3/2 + 2/(3/2)) = 17/12 = 1.4167

..................             ..................

x_n = f(x_{n-1})

..................
Geometrical interpretation

Example:

x² − 4x + 2.3 = 0   →   x = f(x)

Algorithm

(i)   x = (x² + 2.3)/4   →   x_n = (x²_{n-1} + 2.3)/4

(ii)  x = √(4x − 2.3)    →   x_n = √(4 x_{n-1} − 2.3)

Let x_0 = 0.6
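Algorithm (i) can be tried numerically; a minimal sketch (function name and tolerances assumed):

```python
def simple_iteration(f, x0, tol=1e-10, max_iter=500):
    """Simple (fixed-point) iteration x_n = f(x_{n-1})."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) <= tol * max(abs(x_new), 1.0):
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# algorithm (i): x = (x^2 + 2.3)/4, starting from x0 = 0.6;
# converges to the smaller root of x^2 - 4x + 2.3 = 0
root = simple_iteration(lambda x: (x * x + 2.3) / 4.0, 0.6)
```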
Theorem 2
Geometrical interpretation
δ_n = |x_n − x_{n-1}| / |x_n| < B

Residuum

r_n = |f(x_{n-1}) − x_{n-1}| / |f(x_{n-1})| = |x_n − x_{n-1}| / |x_n| = δ_n < B

Notice: both criteria are the same for the simple iteration method
x = f(x)

α x + x = α x + f(x)   →   x = (α/(1+α)) x + (1/(1+α)) f(x) ≡ g(x)

g′(x) = α/(1+α) + (1/(1+α)) f′(x)

let

g′(x*) = 0   →   α = −f′(x*)

then

g(x) = (1/(1 − f′(x*))) f(x) − (f′(x*)/(1 − f′(x*))) x

Example:

x² = a > 0   →   x = a/x   →   f(x) = a/x   →   f′(x) = −a/x² = −1  at x* = √a

then

g(x) = (1/(1−(−1))) (a/x) − ((−1)/(1−(−1))) x = (1/2) (x + a/x)

hence

x_n = (1/2) (x_{n-1} + a/x_{n-1})
2.3.1. Algorithm

F(x) = 0

F(x + h) = F(x) + (dF/dx)|_x h + (1/2) (d²F/dx²)|_x h² + ... = F(x) + F′(x) h + R ≈ F(x) + F′(x) h = 0

F(x) + F′(x) h = 0   →   h = −F(x)/F′(x)

x_n = x_{n-1} + h = x_{n-1} − F(x_{n-1})/F′(x_{n-1})
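The Newton-Raphson step can be sketched in Python (illustrative; names and tolerances are my own):

```python
def newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_n = x_{n-1} - F(x_{n-1}) / F'(x_{n-1})."""
    x = x0
    for _ in range(max_iter):
        h = -F(x) / dF(x)
        x = x + h
        if abs(h) <= tol * max(abs(x), 1.0):
            return x
    raise RuntimeError("no convergence")

# the example treated below: F(x) = x^2 - 2, x0 = 2
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 2.0)
```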
Geometrical interpretation
δ_n = |x_n − x_{n-1}| / |x_n| < B₁

Residuum

r_n = |F(x_n)| / |F(x_0)| < B₂ ,   F(x_0) ≠ 0
Example

[Figure: parabola y = x² − 2 with Newton iterates x_0, x_1, x_2 approaching the root x*]

x² = 2   →   x² − 2 = 0

F(x) = x² − 2 ,   F′(x) = 2x

x_n = x_{n-1} − (x²_{n-1} − 2)/(2 x_{n-1})

x_0 = 2

x_1 = 2 − (2² − 2)/(2·2) = 3/2 = 1.500000

x_2 = 3/2 − ((3/2)² − 2)/(2·(3/2)) = 17/12 = 1.416667

x_3 = 577/408 = 1.414216

……………………...
Convergence

δ₁ = |3/2 − 2| / (3/2) = 0.333333          r₁ = |(3/2)² − 2| / |2² − 2| = 0.125000

δ₂ = |17/12 − 3/2| / (17/12) = 0.058824    r₂ = |(17/12)² − 2| / |2² − 2| = 0.003472

δ₃ = |577/408 − 17/12| / (577/408) = 0.001733    r₃ = |(577/408)² − 2| / |2² − 2| = 0.000003
Convergence

[Figure: log10(δ) and log10(r) vs. iteration number]
F(x) = 0

α x + F(x) = α x   →   x = x + (1/α) F(x) ≡ g(x)

g′(x) = 1 + (1/α) F′(x)

g′(x*) = 0   →   α = −F′(x*)

hence (with F′(x*) replaced by the current F′(x))

x = x − F(x)/F′(x)   →   x_n = x_{n-1} − F(x_{n-1})/F′(x_{n-1})
u(x) = F(x)/F′(x)   →   u′(x) = 1 − F(x) F″(x) / (F′(x))²

x_n = x_{n-1} − u(x_{n-1}) / u′(x_{n-1})
Example

[Figure: graph of F(x) for 2 ≤ x ≤ 8, with F ranging from about −200 to 100]
-200
F ( 4.0 ) -3.42
u (4) = = = -0.145408
F ′ ( 4.0 ) 23.52
F ( 4.0 ) ⋅ F'' ( 4.0 ) -3.42 ⋅ (-85.42)
u' ( 4 ) = 1.0 - = 1.0 - = 0.471906
( F' ( 4.0 ) )
2
(23.52)2
Chapter 2—9/15 2007-11-05
u( 4) − .145408
x1 = x0 − = 4.0 − = 4.308129
u ′( 4 ) .471906
x2 = 4.308129 − .00812 = 4.300001
x3 = 4.300000 convention al N − R method
x19 = 4.300000
………………………..........
Secant method

x_n = x_{n-1} − F_{n-1} (x_{n-1} − x_{n-2}) / (F_{n-1} − F_{n-2}) ,   F(x_0) F(x_1) < 0
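This recurrence can be sketched in Python (illustrative; names, tolerances and the safeguard for equal F-values are my own):

```python
def secant(F, x0, x1, tol=1e-12, max_iter=50):
    """Secant iteration: x_n = x_{n-1} - F_{n-1} (x_{n-1} - x_{n-2}) / (F_{n-1} - F_{n-2})."""
    f0, f1 = F(x0), F(x1)
    for _ in range(max_iter):
        if f1 == f0:            # guard against division by zero near convergence
            return x1
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) <= tol * max(abs(x2), 1.0):
            return x2
        x0, f0, x1, f1 = x1, f1, x2, F(x2)
    return x1

root = secant(lambda x: x * x - 2.0, 0.0, 2.0)
```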
Geometrical interpretation
[Figure: secant-method iterates x_0, x_1, ..., x_5 on the graph of F(x), converging to the root x*]
Example

[Figure: secant iterates on F(x) = x² − 2: x_0 = 0, x_1 = 2, x_2 = 1, x_3 = 4/3, x_4 = 14/9 (F(x_4) = 34/81)]
Algorithm

x² = 2   →   F(x) ≡ x² − 2 = 0

Let  x_0 = 0  →  F(0) = −2   and   x_1 = 2  →  F(2) = 4 − 2 = 2

then

x₂ = 2 − (2/(2 − (−2))) (2 − 0) = 1   →   F(1) = −1

x₃ = 1 − ((−1)/(−1 − 2)) (1 − 2) = 4/3 = 1.333333   →   F(4/3) = −2/9 = −0.222222

x₄ = 4/3 − ((−2/9)/(−2/9 − (−1))) (4/3 − 1) = 14/9 = 1.555556   →   F(14/9) = 34/81 = 0.419753

x₅ = 14/9 − ((34/81)/(34/81 − (−2/9))) (14/9 − 4/3) = 55/39 = 1.410256   →   F(55/39) = −17/1521 = −0.011177

x_T = √2 ≈ 1.414214  − true solution
Convergence

δ₁ = |2 − 0| / 2 = 1                          r₁ = |2| / |−2| = 1

δ₂ = |1 − 2| / 1 = 1                          r₂ = |−1| / |−2| = 0.500000

δ₃ = |4/3 − 1| / (4/3) = 1/4 = 0.250000       r₃ = |−2/9| / |−2| = 1/9 = 0.111111

δ₄ = |14/9 − 4/3| / (14/9) = 1/7 = 0.142857   r₄ = |34/81| / |−2| = 17/81 = 0.209877

δ₅ = |55/39 − 14/9| / (55/39) = 17/165 = 0.103030     r₅ = |−17/1521| / |−2| = 17/3042 = 0.005588

Convergence

[Figure: log10(δ) and log10(r) vs. iteration number]
Regula falsi

x_n = x_{n-1} − (F_{n-1}/(F_{n-1} − F_0)) (x_{n-1} − x_0) ,   F(x_0) F(x_1) < 0

Geometrical interpretation
Example

x² = 2   →   F(x) ≡ x² − 2 = 0

Let  x_0 = 2  →  F(2) = +2   and   x_1 = 0  →  F(0) = −2

then

x₂ = 0 − ((−2)/(−2 − 2)) (0 − 2) = 1   →   F(1) = −1

x₃ = 1 − ((−1)/(−1 − 2)) (1 − 2) = 4/3 = 1.333333   →   F(4/3) = −2/9

x₄ = 4/3 − ((−2/9)/(−2/9 − 2)) (4/3 − 2) = 7/5 = 1.400000   →   F(7/5) = −1/25

x₅ = 7/5 − ((−1/25)/(−1/25 − 2)) (7/5 − 2) = 24/17 = 1.411765

x_T = √2 ≈ 1.414214  − true solution
Convergence

δ₁ = |0 − 2| / 0   →   does not exist           r₁ = |−2| / |2| = 1

δ₂ = |1 − 0| / 1 = 1                            r₂ = |−1| / |2| = 0.500000

δ₃ = |4/3 − 1| / (4/3) = 1/4 = 0.250000         r₃ = |−2/9| / |2| = 1/9 = 0.111111

δ₄ = |7/5 − 4/3| / (7/5) = 1/21 = 0.047619      r₄ = |−1/25| / |2| = 1/50 = 0.020000

δ₅ = |24/17 − 7/5| / (24/17) = 1/120 = 0.008333     r₅ = |−2/289| / |2| = 1/289 = 0.003460

Convergence

[Figure: log10(δ) and log10(r) vs. iteration number]
Remarks

The regula falsi algorithm is more stable, but slower, than the secant method.
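The variant written in the notes, with the end point x_0 kept fixed, can be sketched as (names and tolerances assumed):

```python
def regula_falsi(F, x0, x1, tol=1e-12, max_iter=500):
    """Regula falsi with fixed end point x0:
    x_n = x_{n-1} - F_{n-1} / (F_{n-1} - F_0) * (x_{n-1} - x0)."""
    f0 = F(x0)
    x, fx = x1, F(x1)
    if f0 * fx >= 0:
        raise ValueError("F(x0) F(x1) < 0 required")
    for _ in range(max_iter):
        x_new = x - fx / (fx - f0) * (x - x0)
        if abs(x_new - x) <= tol * max(abs(x_new), 1.0):
            return x_new
        x, fx = x_new, F(x_new)
    return x

# the example above: F(x) = x^2 - 2, fixed point x0 = 2, x1 = 0
root = regula_falsi(lambda x: x * x - 2.0, 2.0, 0.0)
```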
Traps

Newton-Raphson: DIVERGENT (wrong starting point)

Secant method: DIVERGENT

Regula falsi: CONVERGENT
3.1. VECTOR NORM

A vector norm ‖x‖ of a vector x ∈ V satisfies:

(i)  ‖x‖ ≥ 0  ∀ x ∈ V , and ‖x‖ = 0 iff x = 0

(ii)  ‖α x‖ = |α| ‖x‖  ∀ scalars α and ∀ x ∈ V

(iii)  ‖x + y‖ ≤ ‖x‖ + ‖y‖  ∀ x, y ∈ V

Examples

(1)  ‖x‖₁ = [ Σ_{i=1}^{N} x_i² ]^{1/2}      p = 2 , Euclidean norm

(2)  ‖x‖₂ = max_i |x_i|                     p = ∞ , maximum norm

(3)  ‖x‖₃ = [ Σ_{i=1}^{N} |x_i|^p ]^{1/p}   p ≥ 1

Examples

x = {2, 3, −6}

‖x‖₁ = (2² + 3² + 6²)^{1/2} = 7      p = 2

‖x‖₂ = |−6| = 6                      p = ∞

‖x‖₃ = |2| + |3| + |−6| = 11         p = 1
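All three norms can be computed with one small helper (a sketch; the function name is my own):

```python
def vector_norm(x, p):
    """Hoelder p-norm of a vector; p = float('inf') gives the maximum norm."""
    if p == float("inf"):
        return max(abs(v) for v in x)
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

x = [2, 3, -6]
n2 = vector_norm(x, 2)               # Euclidean norm: 7.0
ninf = vector_norm(x, float("inf"))  # maximum norm: 6
n1 = vector_norm(x, 1)               # sum of absolute values: 11.0
```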
Chapter 3—2/2 2007-11-05
3.2. MATRIX NORM

A matrix norm ‖A‖ of an (N × N) matrix A must satisfy the analogous conditions.

Examples

‖A‖₁ = [ Σ_{i=1}^{N} Σ_{j=1}^{N} a²_ij ]^{1/2}   or   ‖A‖₁ = [ (1/N²) Σ_i Σ_j a²_ij ]^{1/2}   − average value

‖A‖₂ = max_i Σ_{j=1}^{N} |a_ij|   or   ‖A‖₂ = (1/N) max_i Σ_j |a_ij|   − maximum value

Example

A = [1 2 3 ; 4 5 6 ; 7 8 9]   →

‖A‖₁ = [ (1/3²)(1² + 2² + 3² + 4² + 5² + 6² + 7² + 8² + 9²) ]^{1/2} = 5.627314

‖A‖₂ = (1/3) max{1+2+3 , 4+5+6 , 7+8+9} = (1/3) max{6, 15, 24} = 8
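The two "average" and "maximum" variants can be sketched as (function names assumed):

```python
def matrix_norm_avg(A):
    """Root-mean-square ("average value") norm: [ (1/N^2) sum a_ij^2 ]^(1/2)."""
    n = len(A)
    return (sum(a * a for row in A for a in row) / n ** 2) ** 0.5

def matrix_norm_max(A):
    """Scaled maximum-row-sum ("maximum value") norm: (1/N) max_i sum_j |a_ij|."""
    n = len(A)
    return max(sum(abs(a) for a in row) for row in A) / n

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```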
Chapter 4—1/13 2007-11-05
Denotations

x = {x₁, x₂, x₃, ..., x_N}

F(x) = {F₁(x), ..., F_N(x)}

F(x) = 0

Example

F₁(x, y) ≡ y² − 2x = 0
F₂(x, y) ≡ x² + y² − 8 = 0

Algorithm

x_n = f(x_{n-1}) ,   f = {f₁(x), ..., f_n(x)} ,   x = {x₁, ..., x_n}

Example:  x = {x, y}

x = (1/2) y² ≡ f₁(x)
y = √(8 − x²) ≡ f₂(x)          f(x) = { (1/2) y² , √(8 − x²) }

or

x = y² − x ≡ f₁(x)
y = x² + y² + y − 8 ≡ f₂(x)    ⇒   x = f(x)
Convergence criterion

δ_n = ‖x_n − x_{n-1}‖ / ‖x_n‖ ,   δ_n ≤ δ_adm

δ_adm − admissible error

Theorem

Example

f₁(x) = y² − x
f₂(x) = x² + y² + y − 8      →      J = [ −1  2y ; 2x  2y+1 ]
F(x) = 0

F(x + h) = F(x) + (∂F(x)/∂x) h + (1/2) (∂²F(x)/∂x²) h² + ...

F(x + h) ≈ F(x) + (∂F(x)/∂x) h ≡ F(x) + J(x) h = 0   →   h = −J⁻¹ F

x_n = x_{n-1} + h_{n-1} = x_{n-1} − J⁻¹_{n-1} F_{n-1}

or, avoiding the explicit inverse (µ − relaxation parameter):

J_{n-1} x_n = J_{n-1} x_{n-1} − µ F_{n-1}
Example

y² = 2x
x² + y² = 8      →      y² − 2x = 0 ,  x² + y² − 8 = 0      →      F(x) = { y² − 2x , x² + y² − 8 } ≡ { F₁(x) , F₂(x) } ,   x = {x, y}

J = [ ∂F₁/∂x  ∂F₁/∂y ; ∂F₂/∂x  ∂F₂/∂y ] = [ −2  2y ; 2x  2y ]
Algorithm

[ −2  2y ; 2x  2y ]_{n-1} {x ; y}_n = [ −2  2y ; 2x  2y ]_{n-1} {x ; y}_{n-1} − µ { y² − 2x ; x² + y² − 8 }_{n-1}

Let µ = 1 and

x₀ = {0 ; 2√2} = {0 ; 2.8284}

[ −2  5.65685 ; 0  5.65685 ] {x₁ ; y₁} = [ −2  5.65685 ; 0  5.65685 ] {0 ; 2.8284} − {8 ; 0}

x₁ = {4.0000 ; 2.8284}
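The full iteration for this 2×2 system can be sketched in Python (illustrative; here J h = −F is solved by Cramer's rule, and the names and tolerances are my own):

```python
def newton_system(F, J, x, tol=1e-12, max_iter=50):
    """Newton-Raphson for a 2x2 system: solve J(x) h = -F(x), then x <- x + h."""
    for _ in range(max_iter):
        f1, f2 = F(x)
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        hx = (-f1 * d + b * f2) / det   # Cramer's rule for J h = -F
        hy = (-a * f2 + c * f1) / det
        x = (x[0] + hx, x[1] + hy)
        if max(abs(hx), abs(hy)) < tol:
            return x
    return x

# F(x) = { y^2 - 2x , x^2 + y^2 - 8 },  J = [[-2, 2y], [2x, 2y]]
F = lambda p: (p[1] ** 2 - 2 * p[0], p[0] ** 2 + p[1] ** 2 - 8)
J = lambda p: ((-2.0, 2 * p[1]), (2 * p[0], 2 * p[1]))
root = newton_system(F, J, (0.0, 2.8284))
```

Starting from x₀ = (0, 2.8284) the first step reproduces x₁ = (4.0000, 2.8284) and the iteration converges to (2, 2).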
Error estimation

(after the first step of iteration)

Estimated relative solution error:  δ_n = ‖x_n − x_{n-1}‖ / ‖x_n‖

δ₁ = ‖x₁ − x₀‖ / ‖x₁‖

x₁ − x₀ = {4.0000 − 0.0000 , 2.8284 − 2.8284} = {4.0000 , 0.0000}

Euclidean norm:

δ₁ᴱ = [½(4.0000² + 0.0000²)]^{1/2} / [½(4.0000² + 2.8284²)]^{1/2} = 2.8284/3.4641 = 0.8165

Maximum norm:

δ₁ᴹ = sup{4.0000 , 0.0000} / sup{4.0000 , 2.8284} = 4.0000/4.0000 = 1.0000

Relative residual error:  r_n = ‖F_n‖ / ‖F₀‖ ,   r₁ = ‖F₁‖ / ‖F₀‖

Euclidean norm:  ‖F‖ᴱ = { (1/n) Σ_{j=1}^{n} F_j(x)² }^{1/2}

F₀ = {8.0000 , 0.0000} ,   F₁ = {0.0000 , 16.0000}

‖F₀‖ᴱ = { ½(8² + 0²) }^{1/2} = 5.6568 ,   ‖F₁‖ᴱ = { ½(0² + 16²) }^{1/2} = 11.3137

r₁ᴱ = 11.3137/5.6568 = 2.0000

Maximum norm:  ‖F‖ᴹ = sup_i |F_i|

‖F₀‖ᴹ = sup(8.0000 , 0.0000) = 8.0000 ,   ‖F₁‖ᴹ = sup(0.0000 , 16.0000) = 16.0000

r₁ᴹ = 16.0000/8.0000 = 2.0000
Break-off test

Assume admissible errors for convergence B_C and residuum B_R ; check.
x₂ = {2.4000 ; 2.2627}

Error estimation

(after the second step of iteration)

δ₂ = ‖x₂ − x₁‖ / ‖x₂‖

x₂ − x₁ = {2.4000 − 4.0000 , 2.2627 − 2.8284} = {−1.6000 , −0.5657}

Euclidean norm:

δ₂ᴱ = [½(1.6000² + 0.5657²)]^{1/2} / [½(2.4000² + 2.2627²)]^{1/2} = 1.2000/2.3324 = 0.5145

Maximum norm:

δ₂ᴹ = sup{1.6000 , 0.5657} / sup{2.4000 , 2.2627} = 1.6000/2.4000 = 0.6667

r₂ = ‖F₂‖ / ‖F₀‖

Euclidean norm:  F₂ = {0.3200 , 2.8800} ,  ‖F₂‖ᴱ = [½(0.3200² + 2.8800²)]^{1/2} = 2.0490 ,  ‖F₀‖ᴱ = [½(0² + 8²)]^{1/2} = 5.6568

r₂ᴱ = 2.0490/5.6568 = 0.3622

Maximum norm:  ‖F₀‖ᴹ = sup(8.0000 , 0.0000) = 8.0000 ,  ‖F₂‖ᴹ = sup(0.3200 , 2.8800) = 2.8800

r₂ᴹ = 2.8800/8.0000 = 0.3600
Break-off test

x₃ = {2.0235 ; 2.0257}
Error estimation

(after the third step of iteration)

δ₃ = ‖x₃ − x₂‖ / ‖x₃‖

x₃ − x₂ = {2.0235 − 2.4000 , 2.0257 − 2.2627} = {−0.3765 , −0.2371}

Euclidean norm:

δ₃ᴱ = [½(0.3765² + 0.2371²)]^{1/2} / [½(2.0235² + 2.0257²)]^{1/2} = 0.3146/2.0246 = 0.1554

Maximum norm:

δ₃ᴹ = sup{0.3765 , 0.2371} / sup{2.0235 , 2.0257} = 0.3765/2.0257 = 0.1859

Euclidean norm:  F₃ = {+0.0562 , 0.1979} ,  ‖F₃‖ᴱ = [½(0.0562² + 0.1979²)]^{1/2} = 0.1455 ,  ‖F₀‖ᴱ = [½(0.0000² + 8.0000²)]^{1/2} = 5.6568

r₃ᴱ = 0.1455/5.6568 = 0.0257

Maximum norm:  ‖F₀‖ᴹ = sup(8.0000 , 0.0000) = 8.0000 ,  ‖F₃‖ᴹ = sup(0.0562 , 0.1979) = 0.1979

r₃ᴹ = 0.1979/8.0 = 0.0247
Break-off test

x₄ = {2.0001 ; 2.0002}

Error estimation

(after the fourth step of iteration)

x₄ − x₃ = {−0.0234 , −0.0254}

Euclidean norm:

δ₄ᴱ = [½(0.0234² + 0.0254²)]^{1/2} / [½(2.0001² + 2.0002²)]^{1/2} = 0.0247/2.0002 = 0.0122

Maximum norm:

δ₄ᴹ = sup{0.0234 , 0.0254} / sup{2.0001 , 2.0002} = 0.0254/2.0002 = 0.0127

Euclidean norm:  F₄ = {+0.0007 , 0.0012} ,  ‖F₄‖ᴱ = [½(0.0007² + 0.0012²)]^{1/2} = 0.0010 ,  ‖F₀‖ᴱ = 5.6568

r₄ᴱ = 0.0010/5.6568 = 0.0002

Maximum norm:  ‖F₀‖ᴹ = sup(8.0000 , 0.0000) = 8.0000 ,  ‖F₄‖ᴹ = sup(0.0007 , 0.0012) = 0.0012

r₄ᴹ = 0.0012/8 = 0.0002
Break-off test

Convergence acceleration (Aitken extrapolation):

x − x_n = α_n (x − x_{n-1})

ASSUME α_n = α constant, then

x − x_n = α (x − x_{n-1})
x − x_{n-1} = α (x − x_{n-2})     →     (x − x_n)/(x − x_{n-1}) = (x − x_{n-1})/(x − x_{n-2})     →     x = (x_{n-2} x_n − x²_{n-1}) / (x_n − 2 x_{n-1} + x_{n-2})

Example

x₅ = {2.0000 ; 2.0000}
Error estimation

(after the fifth step of iteration)

Euclidean norm:

δ₅ᴱ = [½(0.0015² + 0.0028²)]^{1/2} / [½(2.0000² + 2.0000²)]^{1/2} = 0.0023/2.0000 = 0.0011

Maximum norm:

δ₅ᴹ = sup{0.0015 , 0.0028} / sup{2.0000 , 2.0000} = 0.0028/2.0000 = 0.0014

Euclidean norm:  F₅ = {0.00001 , 0.00001} ,  ‖F₅‖ᴱ = 0.00001 ,  ‖F₀‖ᴱ = 5.6568

r₅ᴱ = 0.00001/5.6568 = 0.000002

Maximum norm:  ‖F₀‖ᴹ = sup(8.0000 , 0.0000) = 8.0000 ,  ‖F₅‖ᴹ = sup(0.00001 , 0.00001) = 0.00001

r₅ᴹ = 0.00001/8.0000 = 0.000001
Break-off test

δ₅ᴱ = 0.00113413 > B_C = 10⁻⁶
δ₅ᴹ = 0.00142690 > B_C = 10⁻⁶
r₅ᴱ = 0.00000164 > B_R = 10⁻⁸
r₅ᴹ = 0.00000129 > B_R = 10⁻⁸
SOLUTION SUMMARY

[Figure: magnitude of error vs. number of iterations (1–6), on a logarithmic scale: residual error (maximum and Euclidean norms) and estimated solution error (maximum and Euclidean norms)]
SIMULTANEOUS LINEAR ALGEBRAIC EQUATIONS (SLAE)

5.1. INTRODUCTION

− Sources of SLAE
− Features

A x = b ,   A: n×n ,  x: n×1 ,  b: n×1

mostly A is:
  − symmetric: Aᵀ = A
  − positive definite: xᵀAx > 0 ∀ x ∈ Rⁿ (energy > 0)
  − banded (or sparse)
  − large: n ≫ 1
Solution methods
Example

[6 2 2 4 ; −1 2 2 −3 ; 0 1 1 4 ; 1 0 2 3] {x₁ ; x₂ ; x₃ ; x₄} = {1 ; −1 ; 2 ; 1}
Chapter 5—2/29 2007-12-13
Gaussian elimination: transformation of the augmented matrix  [A | b] → [I | x]

[6 2 2 4 | 1 ; −1 2 2 −3 | −1 ; 0 1 1 4 | 2 ; 1 0 2 3 | 1] →
[6 2 2 4 | 1 ; 0 7/3 7/3 −7/3 | −5/6 ; 0 1 1 4 | 2 ; 0 −1/3 5/3 7/3 | 5/6] →
[6 2 2 4 | 1 ; 0 7/3 7/3 −7/3 | −5/6 ; 0 0 0 5 | 33/14 ; 0 0 2 2 | 5/7]

partial pivoting − interchange of rows 3 and 4:

[6 2 2 4 | 1 ; 0 7/3 7/3 −7/3 | −5/6 ; 0 0 2 2 | 5/7 ; 0 0 0 5 | 33/14]

(i) backward (Jordan) elimination:

[6 2 2 0 | −31/35 ; 0 7/3 7/3 0 | 4/15 ; 0 0 2 0 | −8/35 ; 0 0 0 5 | 33/14] →
[6 2 0 0 | −23/35 ; 0 7/3 0 0 | 8/15 ; 0 0 2 0 | −8/35 ; 0 0 0 5 | 33/14] →
[6 0 0 0 | −39/35 ; 0 7/3 0 0 | 8/15 ; 0 0 2 0 | −8/35 ; 0 0 0 5 | 33/14] →
[1 0 0 0 | −13/70 ; 0 1 0 0 | 8/35 ; 0 0 1 0 | −4/35 ; 0 0 0 1 | 33/70]

final solution:  x = {−13/70 , 8/35 , −4/35 , 33/70}

(ii) back substitution (normalize and eliminate upwards row by row) gives the same result:

[6 2 2 4 | 1 ; 0 7/3 7/3 −7/3 | −5/6 ; 0 0 1 0 | −4/35 ; 0 0 0 1 | 33/70] →
[6 2 2 4 | 1 ; 0 1 0 0 | 8/35 ; 0 0 1 0 | −4/35 ; 0 0 0 1 | 33/70] →
[1 0 0 0 | −13/70 ; 0 1 0 0 | 8/35 ; 0 0 1 0 | −4/35 ; 0 0 0 1 | 33/70]

final solution
General algorithm

A x = b   ↔   Σ_{j=1}^{n} a_ij x_j = b_i ,   i = 1, 2, ..., n

where A ≡ [a_ij] is the n×n matrix with entry a_ij in row i, column j.

I. Steps forward (without pivoting)

a_ij⁽ᵏ⁾ = a_ij⁽ᵏ⁻¹⁾ − m_ik a_kj⁽ᵏ⁻¹⁾ ,   m_ik = a_ik⁽ᵏ⁻¹⁾ / a_kk⁽ᵏ⁻¹⁾ ,   a_ij⁽⁰⁾ = a_ij ,  b_i⁽⁰⁾ = b_i

b_i⁽ᵏ⁾ = b_i⁽ᵏ⁻¹⁾ − m_ik b_k⁽ᵏ⁻¹⁾ ,   k = 1, 2, ..., n−1 ;  j = k+1, ..., n ;  i = k+1, ..., n

II. Steps back

x_i = [ b_i⁽ⁿ⁻¹⁾ − Σ_{j=i+1}^{n} a_ij⁽ⁿ⁻¹⁾ x_j ] / a_ii⁽ⁿ⁻¹⁾ ,   i = n, n−1, ..., 2, 1

Number of operations:

(1/3) N³ + N² + O(N)  − for the Gauss procedure (matrix not banded)

N⁴ + O(N³)  − for Cramer's formulas

Multiple right-hand sides:  [A | b₁ ... b_k] → [I | x₁ ... x_k]
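The forward and back steps above, with partial pivoting added, can be sketched as (names assumed):

```python
def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting, then back substitution."""
    n = len(A)
    M = [[float(v) for v in row] + [float(bi)] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivoting
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]          # multiplier m_ik
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):         # steps back
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# the 4x4 example above
x = gauss_solve([[6, 2, 2, 4], [-1, 2, 2, -3], [0, 1, 1, 4], [1, 0, 2, 3]],
                [1, -1, 2, 1])
```

It reproduces the solution x = {−13/70, 8/35, −4/35, 33/70}.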
Matrix factorization

A = LU

Given A x = b:  L (U x) = b

I. Solve L y = b  →  y = L⁻¹ b    − step forward
II. Solve U x = y  →  x           − step back

Definition

Symmetric (Cholesky) factorization:  A = L Lᵀ ,  U = Lᵀ

Remark:  here U ≡ Lᵀ

Solution algorithm
l_jj = ( a_jj − Σ_{k=1}^{j−1} l²_jk )^{1/2}    − diagonal elements

l_ij = ( a_ij − Σ_{k=1}^{j−1} l_ik l_jk ) / l_jj    − off-diagonal elements

where  j = 1, 2, ..., n ,  i = j+1, ..., n

I. Step forward

y_i = ( b_i − Σ_{j=1}^{i−1} l_ij y_j ) / l_ii ,   i = 1, 2, ..., n

Example

Cholesky factorization of the given matrix

Column 2:  l²₂₁ + l²₂₂ = a₂₂   →   l₂₂ = √(a₂₂ − l²₂₁) = √(5 − (−1)²) = 2

l₃₁ l₂₁ + l₃₂ l₂₂ = a₃₂   →   l₃₂ = (a₃₂ − l₃₁ l₂₁)/l₂₂ = [−2 − 0·(−1)]/2 = −1

Column 3:
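The column-by-column formulas can be sketched as (a small example matrix is assumed for illustration):

```python
def cholesky(A):
    """Cholesky factorization A = L L^T of a symmetric positive definite matrix."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = s ** 0.5                                # diagonal element
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

L = cholesky([[4.0, 2.0], [2.0, 3.0]])   # L = [[2, 0], [1, sqrt(2)]]
```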
The method of simple iterations may be applied, using one of the following algorithms (Jacobi on the left; Gauss-Seidel on the right, where the already-updated components of iteration n are used):

x₁⁽ⁿ⁾ = −(1/10) x₂⁽ⁿ⁻¹⁾ + (1/20) x₃⁽ⁿ⁻¹⁾ + 5/4        x₁⁽ⁿ⁾ = −(1/10) x₂⁽ⁿ⁻¹⁾ + (1/20) x₃⁽ⁿ⁻¹⁾ + 5/4

x₂⁽ⁿ⁾ = −(2/13) x₁⁽ⁿ⁻¹⁾ + (2/13) x₃⁽ⁿ⁻¹⁾ + 30/13      x₂⁽ⁿ⁾ = −(2/13) x₁⁽ⁿ⁾ + (2/13) x₃⁽ⁿ⁻¹⁾ + 30/13
General algorithm

Matrix notation

Matrix decomposition:  A = L̃ + D̃ + Ũ  (strictly lower triangular + diagonal + strictly upper triangular)

A x = b   →   L̃ x + D̃ x + Ũ x = b

Iteration algorithms

Jacobi:         x⁽ⁿ⁾ = −D̃⁻¹ (L̃ + Ũ) x⁽ⁿ⁻¹⁾ + D̃⁻¹ b

Gauss-Seidel:   x⁽ⁿ⁾ = −(L̃ + D̃)⁻¹ Ũ x⁽ⁿ⁻¹⁾ + (L̃ + D̃)⁻¹ b

Index notation

Σ_{j=1}^{n} a_ij x_j = b_i

Iteration algorithms

Jacobi:         x_i⁽ⁿ⁾ = (1/a_ii) [ −Σ_{j≠i} a_ij x_j⁽ⁿ⁻¹⁾ + b_i ]

Gauss-Seidel:   x_i⁽ⁿ⁾ = (1/a_ii) [ −Σ_{j=1}^{i−1} a_ij x_j⁽ⁿ⁾ − Σ_{j=i+1}^{n} a_ij x_j⁽ⁿ⁻¹⁾ + b_i ]

i = 1, 2, ..., n
Theorem
When A is a positive definite matrix the Jacobi and Gauss – Seidel methods are
convergent. (It is a sufficient but not necessary condition)
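Both iterations can be sketched in plain Python (illustrative; function names, tolerances and the small test matrix are my own):

```python
def jacobi(A, b, x0=None, tol=1e-12, max_iter=1000):
    """Jacobi iteration: all components taken from the previous iterate."""
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

def gauss_seidel(A, b, x0=None, tol=1e-12, max_iter=1000):
    """Gauss-Seidel iteration: updated components used as soon as available."""
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            x_new = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            diff = max(diff, abs(x_new - x[i]))
            x[i] = x_new
        if diff < tol:
            return x
    return x

# small symmetric positive definite example (assumed for illustration)
A, b = [[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0]
xj, xg = jacobi(A, b), gauss_seidel(A, b)
```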
Relaxation technique

x̂_i⁽ⁿ⁾ − direct Gauss-Seidel result, n-th iteration

x_i⁽ⁿ⁾ − relaxed solution, n-th iteration

µ > 0 − relaxation parameter

Residuum

r̂⁽ⁿ⁻¹⁾ = x̂⁽ⁿ⁾ − x⁽ⁿ⁻¹⁾ ,   Δr̂⁽ⁿ⁻¹⁾ = r̂⁽ⁿ⁾ − r̂⁽ⁿ⁻¹⁾

let

r⁽ⁿ⁾ = r̂⁽ⁿ⁻¹⁾ + µ⁽ⁿ⁻¹⁾ ( r̂⁽ⁿ⁾ − r̂⁽ⁿ⁻¹⁾ ) = r̂⁽ⁿ⁻¹⁾ + µ⁽ⁿ⁻¹⁾ Δr̂⁽ⁿ⁻¹⁾

and

I = (r⁽ⁿ⁾)ᵗ r⁽ⁿ⁾ = (r̂⁽ⁿ⁻¹⁾)ᵗ r̂⁽ⁿ⁻¹⁾ + 2µ⁽ⁿ⁻¹⁾ (r̂⁽ⁿ⁻¹⁾)ᵗ Δr̂⁽ⁿ⁻¹⁾ + (µ⁽ⁿ⁻¹⁾)² (Δr̂⁽ⁿ⁻¹⁾)ᵗ Δr̂⁽ⁿ⁻¹⁾

min_{µ⁽ⁿ⁻¹⁾} I   →   dI/dµ⁽ⁿ⁻¹⁾ = 2 (r̂⁽ⁿ⁻¹⁾)ᵗ Δr̂⁽ⁿ⁻¹⁾ + 2µ⁽ⁿ⁻¹⁾ (Δr̂⁽ⁿ⁻¹⁾)ᵗ Δr̂⁽ⁿ⁻¹⁾ = 0

find the optimal relaxation coefficient

µ⁽ⁿ⁻¹⁾ = − (Δr̂⁽ⁿ⁻¹⁾)ᵗ r̂⁽ⁿ⁻¹⁾ / [ (Δr̂⁽ⁿ⁻¹⁾)ᵗ Δr̂⁽ⁿ⁻¹⁾ ] = 1 − (Δr̂⁽ⁿ⁻¹⁾)ᵗ r̂⁽ⁿ⁾ / [ (Δr̂⁽ⁿ⁻¹⁾)ᵗ Δr̂⁽ⁿ⁻¹⁾ ]

hence

x⁽ⁿ⁾ = x⁽ⁿ⁻¹⁾ + µ⁽ⁿ⁻¹⁾ r̂⁽ⁿ⁻¹⁾

Further iterations: Gauss-Seidel followed by relaxation
Error estimation

after the first step of iteration

Euclidean norm:

δ₁ᴱ = [⅓(1.250000² + 2.115385² + 1.365385²)]^{1/2} / [⅓(1.250000² + 2.115385² + 1.365385²)]^{1/2} = 1.622922/1.622922 = 1.000000

Maximum norm:

δ₁ᴹ = sup{1.250000 , 2.115385 , 1.365385} / sup{1.250000 , 2.115385 , 1.365385} = 2.115385/2.115385 = 1.000000

Relative residual error  r_n = ‖F_n‖ / ‖F₀‖ ,   r₁ = ‖F₁‖ / ‖F₀‖

Euclidean norm:  ‖F₀‖ᴱ = [⅓(25² + 30² + 2²)]^{1/2} = 22.575798

r₁ᴱ = 3.595093/22.575798 = 0.159245

Maximum norm:  ‖F₀‖ᴹ = sup(25, 30, 2) = 30 ,   ‖F₁‖ᴹ = sup(5.596155 , 2.730775 , 0.000000) = 5.596155

r₁ᴹ = 5.596155/30 = 0.186539
Break-off test

Assume admissible errors for convergence B_C and residuum B_R ; check.
δ₂ = ‖x₂ − x₁‖ / ‖x₂‖

x₂ − x₁ = {0.970192 − 1.250000 , 1.948373 − 2.115385 , −0.918565 + 1.365385} = {−0.279808 , −0.167012 , 0.446820}

Euclidean norm:

δ₂ᴱ = [⅓(0.279808² + 0.167012² + 0.446820²)]^{1/2} / [⅓(0.970192² + 1.948373² + 0.918565²)]^{1/2} = 0.319290/1.363934 = 0.234088

Maximum norm:

δ₂ᴹ = sup{0.279808 , 0.167012 , 0.446820} / sup{0.970192 , 1.948373 , 0.918565} = 0.446820/1.948373 = 0.229330

r₂ = ‖F₂‖ / ‖F₀‖

Euclidean norm:  F₂ = {0.780849 , 0.893637 , 0.000000} ,  ‖F₂‖ᴱ = 0.685155 ,  ‖F₀‖ᴱ = 22.575798

r₂ᴱ = 0.685155/22.575798 = 0.030349

Maximum norm:  ‖F₀‖ᴹ = sup(25, 30, 2) = 30 ,  ‖F₂‖ᴹ = sup(0.780849 , 0.893637 , 0.000000) = 0.893637

r₂ᴹ = 0.893637/30 = 0.029788
Break-off test

Assume admissible errors for convergence B_C and residuum B_R ; check.

δ₃ = ‖x₃ − x₂‖ / ‖x₃‖

x₃ − x₂ = {1.009234 − 0.970192 , 2.011108 − 1.948373 , −1.020342 + 0.918565} = {0.039042 , 0.062735 , −0.101777}

Euclidean norm:

δ₃ᴱ = 0.072615/1.426442 = 0.050906

Maximum norm:

δ₃ᴹ = sup{0.039042 , 0.062735 , 0.101777} / sup{1.009234 , 2.011108 , 1.020342} = 0.101777/2.011108 = 0.050608

r₃ = ‖F₃‖ / ‖F₀‖

Euclidean norm:  F₃ = {−0.227238 , −0.203556 , 0.000000} ,  ‖F₃‖ᴱ = 0.176140 ,  ‖F₀‖ᴱ = 22.575798

r₃ᴱ = 0.176140/22.575798 = 0.007802

Maximum norm:  ‖F₀‖ᴹ = sup(25, 30, 2) = 30 ,  ‖F₃‖ᴹ = sup(0.227238 , 0.203556 , 0.000000) = 0.227238

r₃ᴹ = 0.227238/30 = 0.007575
Break-off test

Assume admissible errors for convergence B_C and residuum B_R ; check.

After relaxation:

δ₃′ = ‖x₃′ − x₂‖ / ‖x₃′‖

x₃′ − x₂ = {1.002145 − 0.970192 , 1.999716 − 1.948373 , −1.001861 + 0.918565} = {0.031953 , 0.051343 , −0.083296}

Euclidean norm:

δ₃′ᴱ = 0.059428/1.415025 = 0.041998

Maximum norm:

δ₃′ᴹ = sup{0.031953 , 0.051343 , 0.083296} / sup{1.002145 , 1.999716 , 1.001861} = 0.083296/1.999716 = 0.041654

r₃′ = ‖F₃′‖ / ‖F₀‖

Euclidean norm:  ‖F₀‖ᴱ = [⅓(25² + 30² + 2²)]^{1/2} = 22.575798 ,  F₃′ = {0.044190 , 0.004317 , 0.000000} ,  ‖F₃′‖ᴱ = 0.025635

r₃′ᴱ = 0.025635/22.575798 = 0.001135

Maximum norm:  ‖F₀‖ᴹ = sup(25, 30, 2) = 30 ,  ‖F₃′‖ᴹ = sup(0.044190 , 0.004317 , 0.000000) = 0.044190

r₃′ᴹ = 0.044190/30 = 0.001473
Break-off test

Assume admissible errors for convergence B_C and residuum B_R ; check.

δ₄ = ‖x₄ − x₃′‖ / ‖x₄‖

x₄ − x₃′ = {0.999935 − 1.002145 , 1.999724 − 1.999716 , −0.999659 + 1.001861} = {−0.002210 , 0.000008 , 0.002202}

Euclidean norm:

δ₄ᴱ = 0.001801/1.413988 = 0.001274

Maximum norm:

δ₄ᴹ = 0.002210/1.999724 = 0.001105

r₄ = ‖F₄‖ / ‖F₀‖

Euclidean norm:  F₄ = {−0.002186 , −0.004403 , 0.000000} ,  ‖F₄‖ᴱ = 0.002838 ,  ‖F₀‖ᴱ = 22.575798

r₄ᴱ = 0.002838/22.575798 = 0.000126

Maximum norm:  ‖F₀‖ᴹ = sup(25, 30, 2) = 30 ,  ‖F₄‖ᴹ = sup(0.002186 , 0.004403 , 0.000000) = 0.004403

r₄ᴹ = 0.004403/30 = 0.000147
Break-off test

Assume admissible errors for convergence B_C and residuum B_R ; check.

δ₅ = ‖x₅ − x₄‖ / ‖x₅‖

x₅ − x₄ = {1.000045 − 0.999935 , 2.000046 − 1.999724 , −1.000090 + 0.999659} = {0.000109 , 0.000322 , −0.000431}

Euclidean norm:

δ₅ᴱ = 0.000317/1.414267 = 0.000224

Maximum norm:

δ₅ᴹ = 0.000431/2.000046 = 0.000215

r₅ = ‖F₅‖ / ‖F₀‖

Euclidean norm:  F₅ = {0.001075 , 0.000862 , 0.000002} ,  ‖F₅‖ᴱ = 0.000796 ,  ‖F₀‖ᴱ = 22.575798

r₅ᴱ = 0.000796/22.575798 = 0.000035

Maximum norm:  ‖F₀‖ᴹ = sup(25, 30, 2) = 30 ,  ‖F₅‖ᴹ = sup(0.001075 , 0.000862 , 0.000002) = 0.001075

r₅ᴹ = 0.001075/30 = 0.000036
Break-off test

After relaxation:

r₅′ = ‖F₅′‖ / ‖F₀‖

Euclidean norm:  ‖F₀‖ᴱ = [⅓(25² + 30² + 2²)]^{1/2} = 22.575798 ,  ‖F₅′‖ᴱ = 0.000416

r₅′ᴱ = 0.000416/22.575798 = 0.000018

Maximum norm:  ‖F₀‖ᴹ = sup(25, 30, 2) = 30 ,  ‖F₅′‖ᴹ = sup(0.000683 , 0.000230 , 0.000000) = 0.000683

r₅′ᴹ = 0.000683/30 = 0.000023
Break-off test

[Figure: logarithm of the error magnitude vs. logarithm of the iteration number]
A = LU

The LU factorization of a matrix A may be done by the Gaussian elimination approach. The main difference between the Gauss procedure for solving an SLAE and the LU factorization is that in the latter case we have to store the multipliers {m_ij}. Applications:

– solution of problems with multiple right-hand sides
– matrix inversion
Example

A = [1 1 2 −4 ; 2 −1 3 1 ; 3 1 −1 2 ; 1 −1 −1 −1]

Elimination (multipliers m_ik shown in parentheses in place of the eliminated entries):

[1 1 2 −4 ; (2) −3 −1 9 ; (3) −2 −7 14 ; (1) −2 −3 5] →
[1 1 2 −4 ; (2) −3 −1 9 ; (3) (2/3) −19/3 8 ; (1) (2/3) −7/3 −1] →
[1 1 2 −4 ; (2) −3 −1 9 ; (3) (2/3) −19/3 8 ; (1) (2/3) (7/19) −75/19]

Then  A = L U  with

L = [1 0 0 0 ; 2 1 0 0 ; 3 2/3 1 0 ; 1 2/3 7/19 1]

U = [1 1 2 −4 ; 0 −3 −1 9 ; 0 0 −19/3 8 ; 0 0 0 −75/19]
Generally

[a₁₁ a₁₂ ... a₁ₙ ; a₂₁ a₂₂ ... a₂ₙ ; ... ; aₙ₁ aₙ₂ ... aₙₙ] = [1 0 ... 0 ; m₂₁ 1 ... 0 ; ... ; mₙ₁ mₙ₂ ... 1] · [u₁₁ u₁₂ ... u₁ₙ ; 0 u₂₂ ... u₂ₙ ; ... ; 0 0 ... uₙₙ]
Example: matrix inversion

A = [2 1 1 ; 1 2 1 ; 1 1 2]   →

[2 1 1 | 1 0 0 ; 1 2 1 | 0 1 0 ; 1 1 2 | 0 0 1] →
[2 1 1 | 1 0 0 ; 0 3/2 1/2 | −1/2 1 0 ; 0 1/2 3/2 | −1/2 0 1] →
[2 1 1 | 1 0 0 ; 0 3/2 1/2 | −1/2 1 0 ; 0 0 4/3 | −1/3 −1/3 1] →
[2 1 1 | 1 0 0 ; 0 3/2 1/2 | −1/2 1 0 ; 0 0 1 | −1/4 −1/4 3/4] →
[2 1 0 | 5/4 1/4 −3/4 ; 0 3/2 0 | −3/8 9/8 −3/8 ; 0 0 1 | −1/4 −1/4 3/4] →
[2 0 0 | 3/2 −1/2 −1/2 ; 0 1 0 | −1/4 3/4 −1/4 ; 0 0 1 | −1/4 −1/4 3/4] →
[1 0 0 | 3/4 −1/4 −1/4 ; 0 1 0 | −1/4 3/4 −1/4 ; 0 0 1 | −1/4 −1/4 3/4]

i.e.  A⁻¹ = (1/4) [3 −1 −1 ; −1 3 −1 ; −1 −1 3]
Algorithm

A C = I ,   A = [a_ij] ,  C = [c_ij] ,  starting from C⁽⁰⁾ = I

I. Step forward

a_ij⁽ᵏ⁾ = a_ij⁽ᵏ⁻¹⁾ − m_ik a_kj⁽ᵏ⁻¹⁾ ,   m_ik = a_ik⁽ᵏ⁻¹⁾ / a_kk⁽ᵏ⁻¹⁾ ,   k = 1, 2, ..., n−1 ;  i, j = k+1, ..., n

c_ij⁽ᵏ⁾ = c_ij⁽ᵏ⁻¹⁾ − m_ik c_kj⁽ᵏ⁻¹⁾ ,   j = 1, 2, ..., n

II. Steps back (after which a_kk = 1, a_ik = 0),  k = n, n−1, ..., 2 ;  i = k−1, k−2, ..., 1 ;  j = 1, 2, ..., n:

c_kj⁽ᵏ⁾ = c_kj⁽ᵏ⁻¹⁾ / a_kk⁽ᵏ⁾

c_ij⁽ᵏ⁻¹⁾ = c_ij⁽ᵏ⁾ − ( a_ik⁽ᵏ⁾ / a_kk⁽ᵏ⁾ ) c_kj⁽ᵏ⁾
Example: inversion of a lower triangular matrix

L = [1 0 0 ; 2 4 0 ; 3 5 6] ,   C = [c₁₁ 0 0 ; c₂₁ c₂₂ 0 ; c₃₁ c₃₂ c₃₃] = L⁻¹ ,   L C = I

Algorithm

c_ii = 1/l_ii ,   i = 1, 2, ..., n

c_ij = −(1/l_ii) Σ_{k=j}^{i−1} l_ik c_kj ,   j = 1, 2, ..., i−1 ;  k = j, j+1, ..., i−1
Overdetermined system:  x + y = 2 ,  x − y = 0 ,  x − 2y = −2

[Figure: the three lines x + y = 2, x − y = 0, x − 2y = −2, their pairwise intersection points (1,1), (2,2), (2/3, 4/3), and the least-squares point (6/7, 9/7)]

B = (x + y − 2)² + (x − y)² + (x − 2y + 2)²

Let

∂B/∂x = 2(x + y − 2) + 2(x − y) + 2(x − 2y + 2) = 0   →    3x − 2y = 0

∂B/∂y = 2(x + y − 2) − 2(x − y) − 2·2(x − 2y + 2) = 0   →   −2x + 6y = 6

solution:  x = 6/7 ,  y = 9/7   →   B = (1/7)² + (−3/7)² + (2/7)² = 2/7
General approach

Index notation

Σ_{j=1}^{m} a_ij x_j = b_i ,   i = 1, 2, ..., n ;  j = 1, 2, ..., m ;  m < n

B = Σ_{i=1}^{n} ( Σ_{j=1}^{m} a_ij x_j − b_i )²

∂B/∂x_k = 2 Σ_{i=1}^{n} a_ik ( Σ_{j=1}^{m} a_ij x_j − b_i ) = 0   →   Σ_i a_ik Σ_j a_ij x_j = Σ_i a_ik b_i ,   k = 1, ..., m

Matrix notation

A x = b   →   B = (Ax − b)ᵗ (Ax − b)

∂B/∂x = 2 Aᵗ (Ax − b) = 0   →   AᵗA x = Aᵗ b

with  A: n×m ,  x: m×1 ,  b: n×1 ,  AᵗA: m×m ,  Aᵗb: m×1
Example

Once more the same example as before, but posed now in the matrix notation

A = [1 1 ; 1 −1 ; 1 −2] ,   b = {2 ; 0 ; −2} ,   x = {x ; y}

AᵗA = [1 1 1 ; 1 −1 −2] [1 1 ; 1 −1 ; 1 −2] = [3 −2 ; −2 6]

Aᵗb = [1 1 1 ; 1 −1 −2] {2 ; 0 ; −2} = {0 ; 6}

[3 −2 ; −2 6] {x ; y} = {0 ; 6}   →   x = {6/7 ; 9/7}
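The normal equations AᵗA x = Aᵗb for this example can be formed and solved directly (a sketch; the final 2×2 solve uses Cramer's rule):

```python
# the overdetermined system from the notes: x + y = 2, x - y = 0, x - 2y = -2
A = [[1, 1], [1, -1], [1, -2]]
b = [2, 0, -2]

n, m = len(A), len(A[0])
AtA = [[sum(A[i][k] * A[i][j] for i in range(n)) for j in range(m)] for k in range(m)]
Atb = [sum(A[i][k] * b[i] for i in range(n)) for k in range(m)]
# AtA = [[3, -2], [-2, 6]],  Atb = [0, 6]

det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
x = (Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det   # -> 6/7
y = (AtA[0][0] * Atb[1] - AtA[1][0] * Atb[0]) / det   # -> 9/7
```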
Weighted least squares

Matrix notation

B = (Ax − b)ᵗ W (Ax − b)

hence

AᵗWA x = AᵗW b

where

W = diag(w₁, w₂, ..., w_n)

Index notation

B = Σ_{i=1}^{n} ( Σ_{j=1}^{m} a_ij x_j − b_i )² w_i

∂B/∂x_k = 0 ,  k = 1, ..., m   →   Σ_{i=1}^{n} a_ik w_i Σ_{j=1}^{m} a_ij x_j = Σ_{i=1}^{n} a_ik w_i b_i
Minimization with constraints

Example:  minimize ρ² = x² + y²  when  2x + 3y = 5

[Figure: the constraint line y = (1/3)(5 − 2x) and the minimized squared distance x² + y² from the origin]
(i) Elimination

y = (1/3)(5 − 2x)

hence

ρ² = x² + (1/9)(5 − 2x)²

find

min_x ρ² ,   ρ² = x² + (1/9)(5 − 2x)²

dρ²/dx = 2x − (4/9)(5 − 2x) = 0   →   x = 10/13 ,  y = 15/13
(ii) Lagrange multipliers

I = (x² + y²) − λ(2x + 3y − 5)

∂I/∂x = 2x − 2λ = 0
∂I/∂y = 2y − 3λ = 0          ⇒   x = 10/13 ,  y = 15/13 ,  λ = 10/13
∂I/∂λ = −(2x + 3y − 5) = 0
General approach:  A x = b ,  A: m×n ,  x: n×1 ,  b: m×1 ,  m < n  (linear constraints)

Split  A = [ Ā , Ã ]  with  Ā: m×m ,  Ã: m×(n−m) ,  and  x = { x̄ , x̃ }  (eliminated and remaining unknowns)

hence

A x = Ā x̄ + Ã x̃ = b

x̄ = Ā⁻¹ ( b − Ã x̃ )    eliminated unknowns  (*)

and

ρ² = Σ_{i=1}^{m} x_i²(x_{m+1}, ..., x_n) + Σ_{i=m+1}^{n} x_i² = ρ²(x_{m+1}, ..., x_n)

− step 1 − minimize ρ² with respect to the remaining unknowns x_{m+1}, ..., x_n
− step 2 − use of the elimination formulas (*); they provide the eliminated unknowns x₁, ..., x_m
Example

Given the underdetermined SLAE

2x + 3y − z = 4
−x + 4y − 2z = −4

i.e.

[2 3 −1 ; −1 4 −2] {x ; y ; z} = {4 ; −4}

Hence

A = [2 3 | −1 ; −1 4 | −2]   →   Ā = [2 3 ; −1 4] ,   Ã = [−1 ; −2]

and

x = {x y | z} ,   b = {4 , −4}

Solution process

Ā⁻¹ = (1/11) [4 −3 ; 1 2]

{x ; y} = (1/11) [4 −3 ; 1 2] ( {4 ; −4} − {−1 ; −2} z ) = (1/11) ( {28 ; −4} + {−2 ; 5} z )

ρ² = {x y z}ᵗ {x y z} = (1/11²) [ (28 − 2z)² + (−4 + 5z)² + (11z)² ]

dρ²/dz = (1/121) [ −4(28 − 2z) + 10(−4 + 5z) + 242z ] = 0   →   z = 1.52/3

Finally

{x y z} = (1/3) {7.36 , −0.40 , 1.52} = {2.453333 , −0.133333 , 0.506667}
Chapter 6—1/38 2008-01-23
6.1. INTRODUCTION
Application in mechanics : principal stresses, principal strains, dynamics, buckling, ...
Formulation
Example

2x + y = λx
x + 2y = λy      →      [2−λ  1 ; 1  2−λ] {x ; y} = {0 ; 0}
Eigenvalue evaluation

det(A − λI) = | 2−λ  1 ; 1  2−λ | = (2 − λ)² − 1 = λ² − 4λ + 3 = 0   →   λ = 1, 3

Let λ₁ = 3:

2x₁ + y₁ = 3x₁   →   −x₁ + y₁ = 0
x₁ + 2y₁ = 3y₁   →    x₁ − y₁ = 0        Let x₁² + y₁² = 1   →   x₁ = 1/√2 ,  y₁ = 1/√2

λ₂ = 1:

2x₂ + y₂ = x₂   →   x₂ + y₂ = 0
x₂ + 2y₂ = y₂   →   x₂ + y₂ = 0          Let x₂² + y₂² = 1   →   x₂ = −1/√2 ,  y₂ = +1/√2

x = {x, y}:

x₁ = (1/√2) {1 ; 1} ,   x₂ = (1/√2) {−1 ; +1}
Eigenvectors
- methods oriented on evaluation of all eigenvalues and eigenvectors (e.g. the Jacobi method)
- methods oriented on evaluation of a selected group of eigenvalues and eigenvectors
- methods oriented on evaluation of a single eigenvalue and eigenvector (usually
extremal; e.g. the power method, the inverse iteration method)
6.3.THEOREMS
Theorem 1  If A has all distinct eigenvalues, then there exists a complete set of linearly
independent eigenvectors, unique up to a multiplicative constant.
Let

C = 3A² − 2A + 4I

then

λ_C = 3λ_A² − 2λ_A + 4
Theorem 4 The eigenvalues (but not eigenvectors) are preserved under the similarity
transformation.
Definition 1 The similarity transformation R −1AR of the matrix A, where R is a non-
singular matrix, does not change the eigenvalue λ .
Let  Ax = λx  and  x = Ry  →  y = R⁻¹x ,  det R ≠ 0

A R y = λ R y

R⁻¹ A R y = λ y

Thus the eigenvalues of A and of R⁻¹AR are the same.
Gerschgorin's theorem: let Cᵢ be the closed disc centred at aᵢᵢ with radius Rᵢ = Σ_{j≠i} |aᵢⱼ|, and let D denote the union of the discs Cᵢ. Then all the eigenvalues of A lie within D.

Conclusion:

Example

A = [−2 1 3 ; −1 4 2 ; 3 −2 3]

a₁₁ = −2 ,  R₁ = |1| + |3| = 4
a₂₂ = 4 ,   R₂ = |−1| + |2| = 3
a₃₃ = 3 ,   R₃ = |3| + |−2| = 5

λ_min > min{ −2 − 4 , 4 − 3 , 3 − 5 } = −6 ,      λ_max < max{ −2 + 4 , 4 + 3 , 3 + 5 } = 8
Remarks
− The theorem is useful for a rough evaluation of the eigenvalue spectrum.
− The theorem holds also for complex matrices.
− The quality of the Gerschgorin evaluation depends on how dominant the diagonal terms of the matrix A are. The evaluation is exact for diagonal matrices.
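The disc bounds are easy to automate. A short sketch (the helper name `gerschgorin_bounds` is ours) reproduces the worked 3×3 example, λmin > −6 and λmax < 8:

```python
# Gerschgorin bounds for a real matrix: for each row, centre a_ii and
# radius R_i = sum of |a_ij| over the off-diagonal entries.
def gerschgorin_bounds(A):
    n = len(A)
    lo = min(A[i][i] - sum(abs(A[i][j]) for j in range(n) if j != i) for i in range(n))
    hi = max(A[i][i] + sum(abs(A[i][j]) for j in range(n) if j != i) for i in range(n))
    return lo, hi

A = [[-2, 1, 3], [-1, 4, 2], [3, -2, 3]]
lo, hi = gerschgorin_bounds(A)   # -> (-6, 8), matching the example above
```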
For a symmetric tridiagonal matrix

    ⎡ b₁  c₁              ⎤             ⎡ b₁−λ   c₁                 ⎤
    ⎢ c₁  b₂  c₂          ⎥             ⎢  c₁   b₂−λ   c₂           ⎥
T = ⎢     c₂  b₃  c₃      ⎥ ,   T−λI =  ⎢       c₂    b₃−λ    c₃    ⎥
    ⎢      …   …   …  cₙ₋₁ ⎥             ⎢        …     …   …   cₙ₋₁  ⎥
    ⎣          cₙ₋₁  bₙ   ⎦             ⎣             cₙ₋₁   bₙ−λ   ⎦

the leading principal minors of T − λI form the Sturm sequence

p₀(λ) = 1
p₁(λ) = b₁ − λ
pₖ(λ) = (bₖ − λ) pₖ₋₁(λ) − c²ₖ₋₁ pₖ₋₂(λ) ,   k = 2, 3, …, n
Remark:
If pⱼ(λ̂) = 0 then we record the sign opposite to the sign of pⱼ₋₁(λ̂).
Example
    ⎡  2  −1              ⎤
    ⎢ −1   2  −1          ⎥
T = ⎢     −1   2  −1      ⎥
    ⎢         −1   2  −1  ⎥
    ⎣             −1   2  ⎦

Solution

         ⎡ 2−λ   −1                     ⎤
         ⎢  −1   2−λ   −1               ⎥
T − λI = ⎢        −1   2−λ   −1         ⎥
         ⎢              −1   2−λ   −1   ⎥
         ⎣                    −1   2−λ  ⎦
hence

p₀(λ) = 1
p₁(λ) = 2 − λ
p₂(λ) = (2 − λ)² − 1 = λ² − 4λ + 3
p₃(λ) = (2 − λ)(λ² − 4λ + 3) − (2 − λ) = (2 − λ)(λ² − 4λ + 2)
p₄(λ) = (2 − λ)²(λ² − 4λ + 2) − (λ² − 4λ + 3)
p₅(λ) = (2 − λ)[(2 − λ)²(λ² − 4λ + 2) − (λ² − 4λ + 3)] − (2 − λ)(λ² − 4λ + 2)
      = (2 − λ){[(2 − λ)² − 2](λ² − 4λ + 2) − 1} ≡ det(T − λI)
We now select values of λ̂ and record for each value the sign of the polynomials pⱼ(λ̂). For pⱼ(λ̂) = 0 we record the sign opposite to the sign of pⱼ₋₁(λ̂).

Table no. 1

  k \ λ̂       0    0.5   1.0   1.5   2.0   2.5   3.0   3.5   4.0
  0           +     +     +     +     +     +     +     +     +
  1           +     +     +     +     −     −     −     −     −
  2           +     +     −     −     −     −     −     +     +
  3           +     +     −     −     +     +     +     −     −
  4           +     −     −     +     +     +     −     −     +
  5           +     −     +     +     −     −     +     +     −
  Number of
  eigenvalues 5     4     3     3     2     2     1     1     0
  > λ̂
Examining the final row of Table 1 we find a single eigenvalue in each of the intervals [0, 0.5], [0.5, 1.0], [1.5, 2.0], [2.5, 3.0], [3.5, 4.0]. Moreover, a conclusion can be drawn from the first column of signs in Table 1: the matrix T has 5 positive eigenvalues, i.e. it is positive definite. The same conclusion may be drawn from the Gerschgorin Theorem:

λmin > 2 − (|−1| + |−1|) = 0
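The sign counting can be sketched in Python (the helper name `count_eigs_greater` is ours). For the example, the number of sign agreements between consecutive members of the Sturm sequence gives the number of eigenvalues greater than λ̂, reproducing the last row of Table 1:

```python
def count_eigs_greater(b, c, lhat):
    # Sturm sequence p_0..p_n for tridiag(b, c); a zero value takes the sign
    # opposite to its predecessor, per the Remark above.
    p = [1.0, b[0] - lhat]
    for k in range(2, len(b) + 1):
        p.append((b[k - 1] - lhat) * p[k - 1] - c[k - 2] ** 2 * p[k - 2])

    def sgn(k):
        if p[k] != 0.0:
            return 1 if p[k] > 0 else -1
        return -sgn(k - 1)          # zero: sign opposite to p_{k-1}

    # count sign agreements between consecutive p_k
    return sum(1 for k in range(1, len(p)) if sgn(k) == sgn(k - 1))

b, c = [2.0] * 5, [-1.0] * 4        # the 5x5 example matrix T
counts = [count_eigs_greater(b, c, t)
          for t in (0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0)]
# counts -> [5, 4, 3, 3, 2, 2, 1, 1, 0], the last row of Table 1
```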
Theorem 7   A positive definite matrix has all eigenvalues positive. A symmetric positive definite matrix (n × n) has n linearly independent eigenvectors.
Example

    ⎡ 1  1  0 ⎤
A = ⎢ 0  1  1 ⎥  →  (λ − 1)³ = 0  →  λ = 1  →  one eigenvector {1, 0, 0}
    ⎣ 0  0  1 ⎦

    ⎡ 1  0  0 ⎤
B = ⎢ 0  1  0 ⎥  →  (λ − 1)³ = 0  →  λ = 1  →  three eigenvectors {1, 0, 0}, {0, 1, 0}, {0, 0, 1}
    ⎣ 0  0  1 ⎦
Given

Ax = λx

Let

x = Qy ,   where Q is an orthogonal matrix:  QᵗQ = I  →  Q⁻¹ = Qᵗ

Then

AQy = λQy
QᵗAQy = λy
Remark:
The orthogonal transformation of the matrix is a particular case of the similarity
transformation. Therefore, the statement of this theorem is obvious.
x_{k+1} = λ₁^{k+1} [ Σ_{j=1}^{n} αⱼ (λⱼ/λ₁)^{k+1} uⱼ ]

Notice:

x_s^{(k+1)} / x_s^{(k)} = λ₁ [ α₁u_{s1} + Σ_{j=2}^{n} αⱼ (λⱼ/λ₁)^{k+1} u_{sj} ] / [ α₁u_{s1} + Σ_{j=2}^{n} αⱼ (λⱼ/λ₁)^{k} u_{sj} ] = λ₁ + O( (λ₂/λ₁)^k )

Thus the magnitude of λ₂/λ₁ decides the convergence rate.

Finally

lim_{k→∞} x_s^{(k+1)} / x_s^{(k)} = λ₁
Remark:
If the dominant eigenvalue has multiplicity r, say, we get

x_{k+1} = λ₁^{k+1} [ Σ_{j=1}^{r} αⱼ uⱼ + Σ_{j=r+1}^{n} αⱼ (λⱼ/λ₁)^{k+1} uⱼ ]

and

x_s^{(k+1)} / x_s^{(k)} = λ₁ + O( (λ_{r+1}/λ₁)^k )

The dominant eigenvalue λ₁ is found, but x_{k+1} converges to a linear combination of the first r eigenvectors.

For real symmetric matrices the Rayleigh quotient provides a means of accelerating the convergence rate over the x_s^{(k+1)} / x_s^{(k)} ratio.
POWER METHOD

GIVEN PROBLEM           Ax = λx

RAYLEIGH QUOTIENT       Λ = xᵗAx / xᵗx

0. ASSUMPTION           x₀

1. NORMALIZATION        vₖ = xₖ / (xₖᵗxₖ)^{1/2}

2. POWER STEP           x_{k+1} = Avₖ

3. RAYLEIGH QUOTIENT    Λ_{k+1} = vₖᵗAvₖ / vₖᵗvₖ = vₖᵗAvₖ = vₖᵗx_{k+1}

4. ERROR ESTIMATION     ε^{(Λ)}_{k+1} = |Λ_{k+1} − Λₖ| / |Λ_{k+1}| ,   ε^{(v)}_{k+1} = ‖v_{k+1} − vₖ‖

5. BREAK-OFF TEST       ε^{(Λ)}_{k+1} < B_Λ ? ,   ε^{(v)}_{k+1} < B_v ?   If no – go to 1, if yes – go to 6.

6. FINAL RESULTS        λmax ≈ Λ_{k+1} ,   xmax ≈ x_{k+1}
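Steps 0–6 above can be sketched directly in Python (the function name `power_method` and the tolerance are ours); applied to the 3×3 example of the next page it recovers λmax = 4 + 1/√2:

```python
import math

def power_method(A, x0, tol=1e-10, itmax=500):
    # 1. normalize, 2. power step, 3. Rayleigh quotient, 5. break-off test
    n = len(A)
    x, lam_old = x0[:], 0.0
    for _ in range(itmax):
        nrm = math.sqrt(sum(t * t for t in x))
        v = [t / nrm for t in x]
        x = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(v[i] * x[i] for i in range(n))      # Lambda = v^T A v
        if abs(lam - lam_old) < tol * abs(lam):
            break
        lam_old = lam
    return lam, v

A = [[4.0, 0.5, 0.0], [0.5, 4.0, 0.5], [0.0, 0.5, 4.0]]
lam, v = power_method(A, [1.0, 1.0, 1.0])
# lam -> 4.7071067... = 4 + 1/sqrt(2), as in the worked example
```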
Example

    ⎡ 4   ½   0 ⎤
A = ⎢ ½   4   ½ ⎥   →   det(A − λI) = 0   →
    ⎣ 0   ½   4 ⎦

→  λ₁ = 4 + 1/√2 ,   λ₂ = 4 ,   λ₃ = 4 − 1/√2      exact eigenvalues

λ₁ = 4.7071068 ,   λ₂ = 4 ,   λ₃ = 3.2928932

v₁ = {½, 1/√2, ½} ,   v₂ = {1/√2, 0, −1/√2} ,   v₃ = {−½, 1/√2, −½}

Gerschgorin bounds:   λmin > 4 − ½ − ½ = 3 ,   λmax < 4 + ½ + ½ = 5

Assume

x₀ = {1, 1, 1}

hence

v₀ = x₀ / (x₀ᵗx₀)^{1/2} = {1/√3, 1/√3, 1/√3}

x₁ = Av₀ = (1/√3) {9/2, 5, 9/2}

Λ₁ = v₀ᵗx₁ = 14/3 = 4.666667

v₁ = x₁ / (x₁ᵗx₁)^{1/2} = {0.556022, 0.617802, 0.556022}

x₂ = Av₁ = {2.532988, 3.027230, 2.532988}

Λ₂ = v₁ᵗx₂ = 4.687023

v₂ = x₂ / (x₂ᵗx₂)^{1/2} = {0.540082, 0.645464, 0.540082}

Error estimation

ε₂^{(Λ)} = |4.687023 − 4.666667| / 4.687023 = 0.004342
ε₂^{(v)} = ‖v₂ − v₁‖ = 0.035684

Λ₃ = 4.697206

Error estimation

ε₃^{(Λ)} = |4.697206 − 4.687023| / 4.697206 = 0.002169
ε₃^{(v)} = 0.016438

……………………………………………

v₁₁ = {0.501681, 0.704721, 0.501681}
Λ₁₁ = 4.707074
Ax = λx

Let

λ = κ + l   →   Ax = κx + lx

(A − lI)x = κx   →   x, κ   →   λ = κ + l

Shift of the spectrum:

A       →   p(λ) = λ
A − lI  →   p(κ) = p(λ − l) = λ − l

The optimal shift for evaluation of λ₁ is

l_opt = (λ₂ + λₙ)/2

Example

l_opt = (λ₂ + λ₃)/2 = (4 + 4 − 1/√2)/2 = 4 − 1/(2√2) = 3.646447    − optimal shift for λ₁ evaluation
              ⎡ 1/(2√2)     ½        0      ⎤        ⎡ 1/√2    1     0   ⎤
A − l_opt I = ⎢    ½      1/(2√2)    ½      ⎥ = (1/2) ⎢   1   1/√2    1   ⎥
              ⎣    0        ½      1/(2√2)  ⎦        ⎣   0     1   1/√2  ⎦

Let

x₀ = {1, 1, 1} ,   v₀ = {1/√3, 1/√3, 1/√3}

x₁ = (A − l_opt I) v₀ = (1/(2√3)) {1 + 1/√2, 2 + 1/√2, 1 + 1/√2} = {0.492799, 0.781474, 0.492799}

κ₁ = v₀ᵗx₁ = (1/6)(4 + 3/√2) = 1.020220   →   λ^{(1)} = 1.020220 + 3.646447 = 4.666667

Remark:
Due to the shift the number of iterations was reduced from 25 to 8.
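The iteration-count reduction quoted above can be reproduced in a sketch (helper name and tolerance are our assumptions): the shift changes the relevant convergence ratio from |λ₃/λ₁| ≈ 0.70 to |κ₃/κ₁| = 1/3, so far fewer steps are needed.

```python
import math

def power_iters(A, x, tol=1e-8, itmax=1000):
    # power method with Rayleigh quotient; returns the limit and iteration count
    n, lam_old, lam, it = len(A), 0.0, 0.0, 0
    for it in range(1, itmax + 1):
        nrm = math.sqrt(sum(t * t for t in x))
        v = [t / nrm for t in x]
        x = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(v[i] * x[i] for i in range(n))
        if abs(lam - lam_old) < tol * abs(lam):
            break
        lam_old = lam
    return lam, it

A = [[4.0, 0.5, 0.0], [0.5, 4.0, 0.5], [0.0, 0.5, 4.0]]
l_opt = (4.0 + 4.0 - 1.0 / math.sqrt(2.0)) / 2.0        # (lambda_2 + lambda_3)/2
As = [[A[i][j] - (l_opt if i == j else 0.0) for j in range(3)] for i in range(3)]

lam_plain, it_plain = power_iters(A, [1.0, 1.0, 1.0])
kappa, it_shift = power_iters(As, [1.0, 1.0, 1.0])
# kappa + l_opt recovers 4.707107, with noticeably fewer iterations
```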
[Figure: spectra p(λ) and p(κ) = p(λ − l) for the shifts l_opt and l_max]
Examples

(i) Let

l = λ₁ = 4 + 1/√2 = 4.707107   →   λ = κ + 4.707107

          ⎡ −1/√2     ½       0    ⎤
A − λ₁I = ⎢    ½    −1/√2     ½    ⎥
          ⎣    0      ½     −1/√2  ⎦

Let

x₀ = {1, 1, 1} ,   v₀ = x₀ / (x₀ᵗx₀)^{1/2} = {1/√3, 1/√3, 1/√3}

x₁ = (A − λ₁I) v₀ = (1/(2√3)) {1 − √2, 2 − √2, 1 − √2} = {−0.119573, 0.169102, −0.119573}

κ₁ = v₀ᵗx₁ = (1/6)(4 − 3√2) = −0.040440

v₁ = x₁ / (x₁ᵗx₁)^{1/2} = {−0.500000, 0.707107, −0.500000} ≈ {−½, 1/√2, −½}

x₂ = (A − λ₁I) v₁ = {1/√2, −1, 1/√2}

κ₂ = v₁ᵗx₂ = −√2

λ^{(2)} = κ₂ + l = −√2 + 4 + 1/√2 = 4 − 1/√2 ≡ λ₃

(ii) Let

l_opt = (λ₁ + λ₂)/2 = (4 + 1/√2 + 4)/2 = 4 + 1/(2√2) = 4.353553

              ⎡ −1/(2√2)      ½         0      ⎤
A − l_opt I = ⎢     ½      −1/(2√2)     ½      ⎥
              ⎣     0         ½      −1/(2√2)  ⎦

Let

x₀ = {1, 1, 1} ,   v₀ = {1/√3, 1/√3, 1/√3}

x₁ = (A − l_opt I) v₀ = (1/(2√3)) {1 − 1/√2, 2 − 1/√2, 1 − 1/√2} = {0.084551, 0.373226, 0.084551}

κ₁ = v₀ᵗx₁ = 0.313113

v₁ = x₁ / (x₁ᵗx₁)^{1/2} = {0.215739, 0.952320, 0.215739}

……………………………………………………
6.5.1. The inverse method

Notice:
Here λ_c and µ_c mean the eigenvalues closest to zero.
INVERSE METHOD

0. ASSUMPTION           x₀

1. NORMALIZATION        vₖ = xₖ / (xₖᵗxₖ)^{1/2}

2. POWER STEP           x_{k+1} = A⁻¹vₖ , i.e. solution of the simultaneous linear algebraic equations A x_{k+1} = vₖ :

                        A = LU                    − LU decomposition
                        Lyₖ = vₖ  →  yₖ           − forward substitution
                        Ux_{k+1} = yₖ  →  x_{k+1}  − back substitution

3. RAYLEIGH QUOTIENT    Λ_{k+1} = vₖᵗA⁻¹vₖ / vₖᵗvₖ = vₖᵗx_{k+1} ,   λ^{(k+1)} = Λ_{k+1}⁻¹

4. ERROR ESTIMATION     ε^{(λ)}_{k+1} = |Λ_{k+1} − Λₖ| / |Λ_{k+1}| ,   ε^{(v)}_{k+1} = ‖v_{k+1} − vₖ‖

5. BREAK-OFF TEST       ε^{(λ)}_{k+1} < B_λ ? ,   ε^{(v)}_{k+1} < B_v ?   If no – go to 1, if yes – go to 6.

6. FINAL RESULTS        λ_c ≈ Λ_{k+1}⁻¹ ,   x_c ≈ x_{k+1}
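The scheme can be sketched as follows. For simplicity the linear system of step 2 is solved here by Gaussian elimination with pivoting, which stands in for the LU forward/back substitutions of the notes; applied to the running 3×3 example it finds the eigenvalue closest to zero, λ₃ = 4 − 1/√2:

```python
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting (stand-in for the LU steps)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def inverse_method(A, x0, tol=1e-10, itmax=500):
    x, Lam, lam_old = x0[:], 1.0, 0.0
    for _ in range(itmax):
        nrm = math.sqrt(sum(t * t for t in x))
        v = [t / nrm for t in x]
        x = solve(A, v)                       # x_{k+1} = A^{-1} v_k
        Lam = sum(v[i] * x[i] for i in range(len(x)))
        if abs(Lam - lam_old) < tol * abs(Lam):
            break
        lam_old = Lam
    return 1.0 / Lam                          # lambda_c = Lambda^{-1}

A = [[4.0, 0.5, 0.0], [0.5, 4.0, 0.5], [0.0, 0.5, 4.0]]
lam = inverse_method(A, [1.0, 1.0, 1.0])
# lam -> 3.2928932... = 4 - 1/sqrt(2), the smallest eigenvalue of the example
```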
Example

Let

    ⎡ 4   ½   0 ⎤
A = ⎢ ½   4   ½ ⎥
    ⎣ 0   ½   4 ⎦

Matrix decomposition

               ⎡  2       0          0       ⎤
A = LLᵗ ,  L = ⎢  ¼     √63/4        0       ⎥
               ⎣  0     2/√63    2√(62/63)   ⎦

Let

x₀ = {1, 1, 1} ,   v₀ = x₀ / (x₀ᵗx₀)^{1/2} = {1/√3, 1/√3, 1/√3}

Ax₁ = v₀   →   x₁ = (1/(31√3)) {7, 6, 7} = {0.130369, 0.111745, 0.130369}

v₁ = x₁ / (x₁ᵗx₁)^{1/2} = (1/√134) {7, 6, 7} = {0.604708, 0.518321, 0.604708}

Ax₂ = v₁   →   x₂ = (1/(31√134)) {50, 34, 50} = {0.139334, 0.094747, 0.139334}

v₂ = x₂ / (x₂ᵗx₂)^{1/2} = (1/(9√19)) {25, 17, 25} = {0.637266, 0.433341, 0.637266}

Ax₃ = v₂   →   x₃ = {0.150477, 0.070716, 0.150477}

...........................................

Remark:
Convergence is initially slow because of an unlucky choice of x₀ .
6.5.2. Use of inverse and shift in order to find the eigenvalue λ_c closest to a given value l

Let

λ = κ + l

CONCEPT
The same as in the case of the inverse method, but:
- A is replaced now by A − lI
- λ = Λ⁻¹ + l

[Figure: spectra p(λ) and p(λ − l); the inverse method converges to the eigenvalue closest to zero, the inverse method with shift to the eigenvalue closest to l, the power method to λmax]

Example

    ⎡ 4   ½   0 ⎤
A = ⎢ ½   4   ½ ⎥
    ⎣ 0   ½   4 ⎦

Let

l = 3.75

Thus

                 ⎡ ¼   ½   0 ⎤
Ã = A − 3.75I =  ⎢ ½   ¼   ½ ⎥
                 ⎣ 0   ½   ¼ ⎦
Error estimation

ε_k^{(λ)} = |Λ_{k+1} − Λₖ| / |Λ_{k+1}| ,   ε_k^{(v)} = ‖v_{k+1} − vₖ‖

Ãx₁ = v₀ = (1/√3) {1, 1, 1}   →   x₁ = (4/(7√3)) {1, 3, 1} = {0.329914, 0.989743, 0.329914}

v₁ = x₁ / (x₁ᵗx₁)^{1/2} = (1/√11) {1, 3, 1} = {0.301511, 0.904534, 0.301511}

Λ₁ = v₀ᵗx₁ = 20/21 = 0.952381   →   λ^{(1)} = Λ₁⁻¹ + l = 21/20 + 3.75 = 4.800000

error estimation:   ε₁^{(λ)} = 1.00 ,   ε₁^{(v)} = 2.54·10⁻¹
Ãx₂ = v₁   →   x₂ = (4/(7√11)) {5, 1, 5} = {0.861461, 0.172292, 0.861461}

v₂ = x₂ / (x₂ᵗx₂)^{1/2} = (1/√51) {5, 1, 5} = {0.700140, 0.140028, 0.700140}

Λ₂ = v₁ᵗx₂ = 52/77 = 0.675325 ,   λ^{(2)} = Λ₂⁻¹ + l = 77/52 + 3.75 = 5.230769

error estimation:   ε₂^{(λ)} = 8.24·10⁻² ,   ε₂^{(v)} = 5.48·10⁻¹
Ãx₃ = v₂   →   x₃ = (4/(7√51)) {−3, 19, −3} = {−0.240048, 1.520304, −0.240048}

v₃ = x₃ / (x₃ᵗx₃)^{1/2} = (1/√379) {−3, 19, −3} = {−0.154100, 0.975964, −0.154100}

Λ₃ = v₂ᵗx₃ = −44/357 = −0.123249 ,   λ^{(3)} = Λ₃⁻¹ + l = −357/44 + 3.75 = −4.363636

error estimation:   ε₃^{(λ)} = 2.20 ,   ε₃^{(v)} = 6.57·10⁻¹
Ãx₄ = v₃   →   x₄ = (4/(7√379)) {41, −31, 41} = {1.203444, −0.909922, 1.203444}

v₄ = x₄ / (x₄ᵗx₄)^{1/2} = (1/√4323) {41, −31, 41} = {0.623579, −0.471486, 0.623579}

Λ₄ = v₃ᵗx₄ = −1.258952 ,   λ^{(4)} = Λ₄⁻¹ + l = −1/1.258952 + 3.75 = 2.955689

error estimation:   ε₄^{(λ)} = 2.48 ,   ε₄^{(v)} = 4.81·10⁻¹
Ãx₅ = v₄   →   x₅ = (4/(7√4323)) {−103, 195, −103} = {−0.895172, 1.694743, −0.895172}

v₅ = x₅ / (x₅ᵗx₅)^{1/2} = (1/√59243) {−103, 195, −103} = {−0.423174, 0.801154, −0.423174}

Λ₅ = v₄ᵗx₅ = −1.915469 ,   λ^{(5)} = Λ₅⁻¹ + l = −1/1.915469 + 3.75 = 3.227935

error estimation:   ε₅^{(λ)} = 8.43·10⁻² ,   ε₅^{(v)} = 2.51·10⁻¹
Ãx₆ = v₅   →   x₆ = (4/(7√59243)) {493, −607, 493} = {1.157418, −1.425057, 1.157418}

v₆ = x₆ / (x₆ᵗx₆)^{1/2} = (1/√854547) {493, −607, 493} = {0.533309, −0.656630, 0.533309}

Λ₆ = v₅ᵗx₆ = −2.121268 ,   λ^{(6)} = Λ₆⁻¹ + l = −1/2.121268 + 3.75 = 3.278584

error estimation:   ε₆^{(λ)} = 1.54·10⁻² ,   ε₆^{(v)} = 1.23·10⁻¹

...........................................
Ãx₁₀ = v₉ = {−0.496219, 0.712414, −0.496219}   →   x₁₀ = {1.097741, −1.541308, 1.097741}

v₁₀ = x₁₀ / (x₁₀ᵗx₁₀)^{1/2} = {0.501796, −0.704558, 0.501796}

Λ₁₀ = v₉ᵗx₁₀ = −2.187631 ,   λ^{(10)} = Λ₁₀⁻¹ + l = −1/2.187631 + 3.75 = 3.292884

error estimation:   ε₁₀^{(λ)} = 3.94·10⁻⁵ ,   ε₁₀^{(v)} = 6.43·10⁻³

...........................................
Ãx₂₈ = v₂₇ = {−0.500000, 0.707107, −0.500000}   →   x₂₈   →   v₂₈ = {0.500000, −0.707107, 0.500000}

Λ₂₈ = v₂₇ᵗx₂₈ = −2.187673 ,   λ^{(28)} = Λ₂₈⁻¹ + l = −1/2.187673 + 3.75 = 3.292893

error estimation:   ε₂₈^{(λ)} = 1.35·10⁻¹⁶ ,   ε₂₈^{(v)} = 1.35·10⁻⁸
Ãx₂₉ = v₂₈ = {0.500000, −0.707107, 0.500000}   →   x₂₉   →   v₂₉ = {−0.500000, 0.707107, −0.500000}

Λ₂₉ = v₂₈ᵗx₂₉ = −2.187673 ,   λ^{(29)} = Λ₂₉⁻¹ + l = −1/2.187673 + 3.75 = 3.292893

error estimation:   ε₂₉^{(λ)} = 0 ,   ε₂₉^{(v)} = 1.58·10⁻⁸
Ãx₃₀ = v₂₉ = {−0.500000, 0.707107, −0.500000}   →   x₃₀   →   v₃₀ = {0.500000, −0.707107, 0.500000}

Λ₃₀ = v₂₉ᵗx₃₀ = −2.187673 ,   λ^{(30)} = Λ₃₀⁻¹ + l = −1/2.187673 + 3.75 = 3.292893

error estimation:   ε₃₀^{(λ)} = 1.35·10⁻¹⁶ ,   ε₃₀^{(v)} = 2.75·10⁻⁸

……………………………
Ãx₅₀ = v₄₉ = {−0.500514, 0.707107, −0.500514}   →   x₅₀ = {1.085564, −1.546912, 1.102099}

v₅₀ = x₅₀ / (x₅₀ᵗx₅₀)^{1/2} = {0.496214, −0.707097, 0.503772}

Λ₅₀ = v₄₉ᵗx₅₀ = −2.187662 ,   λ^{(50)} = Λ₅₀⁻¹ + l = −1/2.187662 + 3.75 = 3.292882

error estimation:   ε₅₀^{(λ)} = 2.35·10⁻⁶ ,   ε₅₀^{(v)} = 4.77·10⁻³

...........................................
Ãx₅₈ = v₅₇ = {−0.712203, 0.664212, −0.227135}   →   x₅₈   →   v₅₈ = {0.023209, −0.588084, 0.808467}

Λ₅₈ = v₅₇ᵗx₅₈ = −1.459722 ,   λ^{(58)} = Λ₅₈⁻¹ + l = −1/1.459722 + 3.75 = 3.064938

error estimation:   ε₅₈^{(λ)} = 5.62·10⁻² ,   ε₅₈^{(v)} = 5.22·10⁻¹
Ãx₅₉ = v₅₈ = {0.023209, −0.588084, 0.808467}   →   x₅₉   →   v₅₉ = {−0.863853, 0.448093, 0.230153}

Λ₅₉ = v₅₈ᵗx₅₉ = −0.279919 ,   λ^{(59)} = Λ₅₉⁻¹ + l = −1/0.279919 + 3.75 = 0.177535

error estimation:   ε₅₉^{(λ)} = 1.63·10¹ ,   ε₅₉^{(v)} = 5.95·10⁻¹
Ãx₆₀ = v₅₉ = {−0.863853, 0.448093, 0.230153}   →   x₆₀   →   v₆₀ = {−0.440870, −0.289111, 0.849734}

Λ₆₀ = v₅₉ᵗx₆₀ = 1.515181 ,   λ^{(60)} = Λ₆₀⁻¹ + l = 1/1.515181 + 3.75 = 4.409987

error estimation:   ε₆₀^{(λ)} = 9.60·10⁻¹ ,   ε₆₀^{(v)} = 4.43·10⁻¹

...........................................
Ãx₇₀ = v₆₉ = {−0.708086, 0.001387, 0.706125}   →   x₇₀   →   v₇₀ = {−0.706570, −0.000759, 0.707643}

Λ₇₀ = v₆₉ᵗx₇₀ = 3.999885 ,   λ^{(70)} = Λ₇₀⁻¹ + l = 1/3.999885 + 3.75 = 4.000001

error estimation:   ε₇₀^{(λ)} = 8.72·10⁻⁷ ,   ε₇₀^{(v)} = 1.29·10⁻³

...........................................
Finally

Ãx₈₉ = v₈₈ = {−0.707107, 0.000000, 0.707107}   →   x₈₉   →   v₈₉ = {−0.707107, 0.000000, 0.707107}

Λ₈₉ = Λ_final = 4.000000   →   λ_final = Λ_final⁻¹ + l = 1/4 + 3.75 = 4.000000 = λ_second

v₈₉ = v_final = {−0.707107, 0.000000, 0.707107} = v_second

error estimation:   ε₈₉^{(λ)} = 0 ,   ε₈₉^{(v)} = 1.35·10⁻⁸
Remarks

[Figure: convergence history — logarithms of the eigenvalue error and of the eigenvector error versus the iteration number; both errors fall to about 10⁻¹⁶ and 10⁻⁸ by iteration 28, after which round-off perturbs the iteration near iteration 59]

[Figure: the approximation λ^{(k)} versus the number of iterations (0–100); the iterates settle near λ = 3.292893, are perturbed around iteration 59, and finally converge to λ = 4.000000; the exact eigenvalues λ = 4.707107, 4.000000, 3.292893 are shown for reference]

Let now
x₀ = {1, 1, −1}

then

v₀ = x₀ / (x₀ᵗx₀)^{1/2} = (1/√3) {1, 1, −1}

Ãx₁ = v₀   →   x₁ = (4/(7√3)) {9, −1, −5}

v₁ = x₁ / (x₁ᵗx₁)^{1/2} = (1/√107) {9, −1, −5}

Λ₁ = v₀ᵗx₁ = 52/21 = 2.476191   →   λ^{(1)} = Λ₁⁻¹ + l = 21/52 + 3.75 = 4.153846

Ãx₂ = v₁   →   x₂ = (4/(7√107)) {45, 9, −57}

Λ₂ = v₁ᵗx₂ = 3.530040 ,   λ^{(2)} = Λ₂⁻¹ + l = 1/3.530040 + 3.75 = 4.033283

……………………………………………………….

Finally

After 39 iterations we obtain, as before,

Λ_final = 4.000000   →   λ_final = Λ_final⁻¹ + l = 1/4 + 3.75 = 4.000000 = λ₂

v = {0.707107, 0.000000, −0.707107} ≈ {1/√2, 0, −1/√2}

The error level is then 10⁻¹⁰.

Remark
Due to the different starting vector, convergence in case (ii) proved to be much faster than in case (i).
Given the generalized eigenvalue problem Ax = λBx with B symmetric and positive definite, then

B = LLᵗ ,   Lᵗx = y   →   x = L⁻ᵗy

Ax = λLLᵗx   →   AL⁻ᵗy = λLy

L⁻¹AL⁻ᵗ y = λy

Ây = λy ,   Â = L⁻¹AL⁻ᵗ

Remark
The generalized eigenvalue problem was transformed into the standard one with the same eigenvalues λ preserved.

ASSUMPTION       y₀

NORMALIZATION    uₙ = yₙ / (yₙᵗyₙ)^{1/2}

RESULTS          λmax ≈ Λₙ ,   x₁ ≈ xₙ
Example

Ax = λBx

⎡ 7  4  3 ⎤ ⎡x₁⎤       ⎡ 8  1  3 ⎤ ⎡x₁⎤
⎢ 4  8  2 ⎥ ⎢x₂⎥  =  λ ⎢ 1  6  4 ⎥ ⎢x₂⎥
⎣ 3  2  6 ⎦ ⎣x₃⎦       ⎣ 3  4  4 ⎦ ⎣x₃⎦

1. Decompose B = LLᵗ
2. Find Â = L⁻¹AL⁻ᵗ
3. Find the eigenvalues λ and eigenvectors u of Â by any standard method
4. Step back:  Lᵗxₙ = uₙ   →   xₙ

Here

    ⎡ 2.828427                        ⎤
L = ⎢ 0.353553   2.423840             ⎥
    ⎣ 1.060660   1.495561   0.798935  ⎦

and the power step, in terms of the original matrices, reads  L y_{n+1} = A xₙ .

Let

y₁ = {1, 1, 1}

u₁ = y₁ / (y₁ᵗy₁)^{1/2} = {1/√3, 1/√3, 1/√3} = {0.577350, 0.577350, 0.577350}

u₂ = y₂ / (y₂ᵗy₂)^{1/2} = {0.079475, −0.045119, 0.995815}
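The whole reduction can be sketched end-to-end (all helper names are ours). The Cholesky factor reproduces the first column 2.828427, 0.353553, 1.060660 quoted above, and a power iteration on Â = L⁻¹AL⁻ᵗ, applied without forming Â explicitly, yields an eigenpair of Ax = λBx whose residual we check instead of asserting an eigenvalue the notes do not print:

```python
import math

def cholesky(B):
    n = len(B)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = B[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    return L

def forward(L, b):      # solve L y = b
    y = []
    for i in range(len(b)):
        y.append((b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i])
    return y

def backward(L, b):     # solve L^T x = b
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

A = [[7.0, 4.0, 3.0], [4.0, 8.0, 2.0], [3.0, 2.0, 6.0]]
B = [[8.0, 1.0, 3.0], [1.0, 6.0, 4.0], [3.0, 4.0, 4.0]]
L = cholesky(B)                      # L[0][0] = 2.828427... as in the notes

u = [1.0, 1.0, 1.0]
y = u[:]
for _ in range(1000):                # power iteration on A_hat = L^-1 A L^-T
    nrm = math.sqrt(sum(t * t for t in y))
    u = [t / nrm for t in y]
    w = backward(L, u)               # w = L^-T u
    Aw = [sum(A[i][j] * w[j] for j in range(3)) for i in range(3)]
    y = forward(L, Aw)               # y = L^-1 A w  =  A_hat u

lam = sum(u[i] * y[i] for i in range(3))      # largest lambda of A x = lambda B x
x = backward(L, u)                            # step back: x = L^-T u
residual = max(abs(sum(A[i][j] * x[j] for j in range(3))
                   - lam * sum(B[i][j] * x[j] for j in range(3))) for i in range(3))
```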
THE JACOBI METHOD

Finally

Ax = λx ,   QᵗAQ = D   diagonal ,   A = [aᵢⱼ]

Let U be the rotation matrix, equal to the identity matrix except for the four entries in rows and columns p and q:

U_pp = c ,   U_qq = c ,   U_pq = s ,   U_qp = −s ,   where  c = cos ϑ ,  s = sin ϑ

1. orthogonality:

U⁻¹ = Uᵗ   because   UᵗU = I   →   c² + s² = 1

Remarks:
- An appropriately chosen rotation of the coordinate system about the angle ϑ provides annihilation of a′_pq .
- In fact we need cos ϑ and sin ϑ rather than ϑ itself. They may be found from the following formulas:
α = (a_pp − a_qq)/2

β = (α² + a²_pq)^{1/2}

c = cos ϑ = ( 1/2 + |α|/(2β) )^{1/2}

s = sin ϑ = −a_pq sgn(α) / (2β cos ϑ) ,   where  sgn(α) = 1 for α ≥ 0 and −1 for α < 0

(taking sgn(0) = 1 ensures a 45° rotation when a_pp = a_qq)
SOLUTION PROCEDURE

Average norms

aᵛ = [ 1/(n² − n) Σᵢ Σ_{j≠i} aᵢⱼ² ]^{1/2}    − the average norm of the off-diagonal elements

aʷ = [ 1/n Σᵢ aᵢᵢ² ]^{1/2}                   − the average norm of the diagonal elements

Maximum norms

mᵛ = max_{i≠j} |aᵢⱼ| ,   mʷ = maxᵢ |aᵢᵢ|

Break-off test

v ≤ εw

where v, w are equal either to aᵛ, aʷ or to mᵛ, mʷ , and ε is a given threshold, e.g. ε = 10⁻⁶
Finally

Uₖᵗ ⋯ U₂ᵗ U₁ᵗ A U₁ U₂ ⋯ Uₖ ≈ QᵗAQ = D

If it is desired to compute the eigenvectors along with the eigenvalues, this can be accomplished by initializing the matrix Q as I and then modifying Q along with the modifications to A (Q ← QUₖ at each rotation).

The columns of the final Q are the eigenvectors of the original matrix A, while its eigenvalues are given by the diagonal terms of the matrix D.
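A compact cyclic-sweep sketch of the method follows (function name ours). It uses the common tangent-based rotation formula rather than the c, s expressions of the notes — the two are equivalent choices of the annihilating angle ϑ. On the 4×4 example worked below, the eigenvalue sum equals the trace (31) and the largest eigenvalue matches the final diagonal maximum mʷ = 23.04466 quoted at the end of the example:

```python
import math

def jacobi_eigenvalues(A, eps=1e-12, sweeps=50):
    # Cyclic-by-row Jacobi: rotate away each off-diagonal a_pq in turn.
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(sweeps):
        off = math.sqrt(sum(A[i][j] ** 2 for i in range(n) for j in range(n) if i != j))
        if off < eps:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-15:
                    continue
                # tangent formula for the annihilating rotation angle
                alpha = (A[q][q] - A[p][p]) / (2.0 * A[p][q])
                t = (1.0 if alpha >= 0 else -1.0) / (abs(alpha) + math.sqrt(1 + alpha * alpha))
                c = 1.0 / math.sqrt(1 + t * t)
                s = t * c
                for k in range(n):              # A <- A U   (columns p, q)
                    akp, akq = A[k][p], A[k][q]
                    A[k][p] = c * akp - s * akq
                    A[k][q] = s * akp + c * akq
                for k in range(n):              # A <- U^T A (rows p, q)
                    apk, aqk = A[p][k], A[q][k]
                    A[p][k] = c * apk - s * aqk
                    A[q][k] = s * apk + c * aqk
    return sorted(A[i][i] for i in range(n))

A = [[4.0, 2.0, 3.0, 7.0],
     [2.0, 8.0, 5.0, 1.0],
     [3.0, 5.0, 12.0, 9.0],
     [7.0, 1.0, 9.0, 7.0]]
eigs = jacobi_eigenvalues(A)
# sum(eigs) -> 31.0 (the trace); max(eigs) -> 23.04466..., the final m^w of the example
```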
Example

    ⎡ 4   2   3   7 ⎤
A = ⎢ 2   8   5   1 ⎥
    ⎢ 3   5  12   9 ⎥
    ⎣ 7   1   9   7 ⎦

SOLUTION PROCEDURE

Average norm of the off-diagonal terms of the matrix A:

a₀ᵛ = [ 2 (2² + 3² + 7² + 5² + 1² + 9²) / (4² − 4) ]^{1/2} = 5.307228
In order to find a reasonable off-diagonal element of the matrix A we seek the first element greater than a₀ᵛ ; here a₁₄ = 7, so p = 1, q = 4. Thus we have:

α = (a₁₁ − a₄₄)/2 = (4 − 7)/2 = −1.500000

β = (α² + a₁₄²)^{1/2} = ((−3/2)² + 7²)^{1/2} = 7.158911

c = ( 1/2 + |α|/(2β) )^{1/2} = ( 1/2 + 1.5/(2 · 7.158911) )^{1/2} = 0.777666

s = −a₁₄ sgn(α)/(2βc) = −(7)(−1)/(2 · 7.158911 · 0.777666) = 0.628678

Now we find

a′₁₁ = −1.658911 ,   a′₄₄ = 12.658910 ,   a′₁₄ = a′₄₁ = 0

a′₁₂ = a′₂₁ = c a₁₂ − s a₄₂ = 0.777666 · 2 − 0.628678 · 1 = 0.926655
a′₁₃ = a′₃₁ = c a₁₃ − s a₄₃ = 0.777666 · 3 − 0.628678 · 9 = −3.325099
a′₄₂ = a′₂₄ = s a₁₂ + c a₄₂ = 0.628678 · 2 + 0.777666 · 1 = 2.035051
a′₄₃ = a′₃₄ = s a₁₃ + c a₄₃ = 0.628678 · 3 + 0.777666 · 9 = 8.885027

Initially Q = I ; after the first rotation

    ⎡  0.777666   0   0   0.628678 ⎤
Q = ⎢  0          1   0   0        ⎥
    ⎢  0          0   1   0        ⎥
    ⎣ −0.628678   0   0   0.777666 ⎦
Average norms

a₀ʷ = [ (4² + 8² + 12² + 7²)/4 ]^{1/2} = 8.261356      − diagonal

a₀ᵛ = [ 2 (2² + 3² + 7² + 5² + 1² + 9²)/(4² − 4) ]^{1/2} = 5.307228      − off-diagonal

a₀ᵛ / a₀ʷ = 5.307228 / 8.261356 = 0.642416 > ε = 10⁻⁶

Maximum norms

m₀ᵛ / m₀ʷ = 9/12 = 0.75 > ε = 10⁻⁶
(ii) After the first iteration

Average norms

a₁ʷ = { [ (−1.658911)² + 8² + 12² + 12.65891² ] / 4 }^{1/2} = 9.63068

a₁ᵛ = { 2 [ 0.926655² + (−3.325099)² + 0² + 5² + 2.035051² + 8.885027² ] / (4² − 4) }^{1/2} = 4.472136

Maximum norms

a₁ᵛ / a₁ʷ = 4.472136 / 9.63068 = 0.464363 > ε ,   m₁ᵛ / m₁ʷ = 8.885027 / 12.65891 = 0.701879 > ε
……………………………

After the final iteration:

aᵛ = { 2 [ (4.89·10⁻⁹)² + (−6.06·10⁻¹⁵)² + (1.72·10⁻⁵)² + 0² + (−3.78·10⁻¹⁰)² + (5.14·10⁻⁶)² ] / (4² − 4) }^{1/2} = 7.33·10⁻⁶

Break-off test

aᵛ / aʷ = 7.33·10⁻⁶ / 12.36 = 5.93·10⁻⁷ < ε = 10⁻⁶

Maximum norms

mʷ = 23.04466 ,   mᵛ = 1.72·10⁻⁵

Break-off test

mᵛ / mʷ = 1.72·10⁻⁵ / 23.04466 = 7.46·10⁻⁷ < ε = 10⁻⁶
Chapter 7—1/7 2008-01-23
7. ILL-CONDITIONED SYSTEMS OF LINEAR ALGEBRAIC EQUATIONS
7.1. INTRODUCTION
Example
     2x₁ + 3x₂   = 5
I                          →   x₁ = x₂ = 1
     2x₁ + 3.1x₂ = 5.1

     2x₁ + 3x₂     = 5
II                         →   x₁ = 10 ,   x₂ = −5
     1.999x₁ + 3x₂ = 4.99

ILL-CONDITIONED SLAE

[Figure: two nearly parallel straight lines; a small change of the coefficients moves the intersection point from (1, 1) to (10, −5)]
Thus a very small change in the coefficients gives rise to a large change in the solution. Such behavior characterizes what is called an ill-conditioned system. Ill-conditioning causes problems, since we cannot carry out computations with infinite precision.
Ax = b       →   x_T ≡ A⁻¹b
Ax_c = b + r →   r ≡ Ax_c − b

x_T − x_c = −A⁻¹r ≡ −e

where

x_T − true solution
x_c − computed solution
r   − residuum; due to rounding-off errors r may differ from zero
e   − solution error

For any matrix B and vector y

‖By‖ ≤ ‖B‖ ‖y‖ ,   ‖y‖ = ‖B⁻¹By‖ ≤ ‖B⁻¹‖ ‖By‖   →   ‖y‖/‖B⁻¹‖ ≤ ‖By‖ ≤ ‖B‖ ‖y‖

hence

‖r‖/‖A‖ ≤ ‖A⁻¹r‖ = ‖e‖ ≤ ‖A⁻¹‖ ‖r‖

‖b‖/‖A‖ ≤ ‖A⁻¹b‖ = ‖x_T‖ ≤ ‖A⁻¹‖ ‖b‖   →   1/(‖A⁻¹‖ ‖b‖) ≤ 1/‖x_T‖ ≤ ‖A‖/‖b‖

since A⁻¹b = x_T and A⁻¹r = e. Combining both results:

( 1/(‖A‖ ‖A⁻¹‖) ) ‖r‖/‖b‖ ≤ ‖e‖/‖x_T‖ ≤ ‖A‖ ‖A⁻¹‖ ‖r‖/‖b‖
Assuming

k = ‖A‖ ‖A⁻¹‖      − conditioning number

(1/k) ‖r‖/‖b‖ ≤ ‖e‖/‖x_T‖ ≤ k ‖r‖/‖b‖

Here ‖e‖/‖x_T‖ is the relative error of the solution, and ‖r‖/‖b‖ is the relative residuum.

k = ‖A‖ ‖A⁻¹‖ ,   k ≥ 1

is called the condition number (cond A). Using the spectral norm of a matrix,

‖A‖_S = { ρ(A*A) }^{1/2} ,   with  ‖x‖ = { Σᵢ xᵢ² }^{1/2}

where

ρ(A) = maxₖ |λₖ|

Here k ≥ 1 (the conditioning number) is the measure of the system conditioning (ill- or well-).
Let

A_c = A + A_E ,

where

A_c = computed matrix,   A = exact matrix,   A_E = matrix of perturbations

Let

A_c x_c = b   →   x_c

Then

x_T = A⁻¹b = A⁻¹A_c x_c = A⁻¹(A + A_E) x_c   →   x_T − x_c = A⁻¹A_E x_c

Hence

‖x_T − x_c‖ ≤ ‖A⁻¹‖ ‖A_E‖ ‖x_c‖ = k (‖A_E‖/‖A‖) ‖x_c‖ = cond(A) (‖A_E‖/‖A‖) ‖x_c‖

where

cond(A) ≡ k = ‖A‖ ‖A⁻¹‖ ,

from which we have the following evaluation

‖x_T − x_c‖ / ‖x_c‖ ≤ k ‖A_E‖/‖A‖

Thus the computed solution can vary from the theoretical solution, as a result of round-off errors, by an amount directly proportional to the conditioning number k.

On computational precision

q ≈ p − β log k

p − introduced precision
q − obtained precision
β − norm-dependent coefficient (for the spectral norm β = 1)
Examples

(i)

A₁ = [2  3 ; 2  3.1]     →   k = λ₁/λ₂ = 5.060478/0.039522 = 128.04

(ii)

A₂ = [2  3 ; 1.999  3]   →   k ≈ 4.9994/0.0006 = 8331.33

How important are "small perturbations"? Compare the given and the inverted matrices:

A₁ = [2  3 ; 2  3.1] ,     A₁⁻¹ = [15.5  −15 ; −10  10]

A₂ = [2  3 ; 1.999  3] ,   A₂⁻¹ = [1000  −1000 ; −666⅓  666⅔]
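A quick sketch confirms the orders of magnitude (helper names ours). It uses the ∞-norm (maximum absolute row sum) rather than the spectral norm of the text, so the numbers differ slightly from 128.04 and 8331.33, but the contrast between the two matrices is the same:

```python
def inv2(A):
    # explicit inverse of a 2x2 matrix
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def norm_inf(A):
    # maximum absolute row sum
    return max(sum(abs(x) for x in row) for row in A)

def cond_inf(A):
    return norm_inf(A) * norm_inf(inv2(A))

k1 = cond_inf([[2.0, 3.0], [2.0, 3.1]])      # ~1.6e2  (spectral norm: 128.04)
k2 = cond_inf([[2.0, 3.0], [1.999, 3.0]])    # ~1.0e4  (spectral norm: 8331.33)
```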
Iterative improvement of the solution. Let x^{(1)} be the first computed solution, and

r^{(1)} = Ax^{(1)} − b       − residuum
ε^{(1)} = x_T − x^{(1)}      − solution error

Aε^{(1)} = Ax_T − Ax^{(1)} = b − Ax^{(1)} = −r^{(1)}   →   ε^{(1)}

The error vector ε^{(1)} is itself the solution of the given system, but with the new right-hand side −r^{(1)}. After solving for ε^{(1)} we get from there the improved solution x^{(2)} = x^{(1)} + ε^{(1)}, and the process may be repeated.
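One refinement step can be sketched as follows (the inaccurate first solution is simulated here by hand; the elimination solver is a generic stand-in):

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

A = [[2.0, 3.0], [1.999, 3.0]]
b = [5.0, 4.99]                    # true solution {10, -5}
x1 = [9.9, -4.9]                   # inaccurate first solution (simulated round-off)
r1 = [sum(A[i][j] * x1[j] for j in range(2)) - b[i] for i in range(2)]  # r = A x1 - b
eps1 = solve(A, [-r1[0], -r1[1]])  # A eps = -r
x2 = [x1[i] + eps1[i] for i in range(2)]
# x2 -> {10, -5} up to round-off
```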
Problems

Which of the following matrices gives rise to an ill-conditioned system? Estimate the loss of accuracy.

A₁ = [1  2  −1 ; 3  4  0 ; 1  1  0] ,   b₁ = {1, 1, 1} ;       A₂ = [1  2  3 ; 4  5  6 ; 7  8  8] ,   b₂ = {0, 2, 1}

A₃ = [1  0  0 ; 0  1  0 ; 1  1  1] ,   b₃ = {0, 2, 1} ;       A₄ = [½  1  0 ; ½  ⅓  1 ; −1  −1  −1] ,   b₄ = {½, ¼, ¾}
Given

Σ_{j=1}^{n} aᵢⱼ xⱼ = bᵢ ,   i, j = 1, 2, …, n

find

min_{xᵢ} I ,   I = Σᵢ ( Σⱼ aᵢⱼ xⱼ − bᵢ )² + α Σᵢ xᵢ²
Example

2x₁ + 3x₂ = 5
1.999x₁ + 3x₂ = 4.99

∂I/∂x₁ = 0 ,   ∂I/∂x₂ = 0   →

x₁ = (0.00009 + 19.975α) / (0.000009 + 25.996α + α²)

x₂ = (−0.000045 + 29.97α) / (0.000009 + 25.996α + α²)
FAMILY OF SOLUTIONS

   α        x₁          x₂         conditioning number k = λmax/λmin
   0        10          −5         7.51·10⁷
   0.1      0.7655      1.1484     260.96
   1        0.7399      1.1102     27.00
   10       0.5549      0.8326     3.60
   100      0.1585      0.2379     1.26
   10⁶      2·10⁻⁵      3·10⁻⁵     1.00
   "∞"      0           0          1.00
Chapter 8—1/26 2008-04-08
8. APPROXIMATION

8.1. INTRODUCTION

Approximation is a replacement of a function f(x), given in continuous or discrete form, by another function g(x):

[Figure: a function f(x) and two approximating functions g₁(x), g₂(x)]

f(x) ≈ g(x) = Σⱼ aⱼ ϕⱼ(x) = ϕa

where:

a = {a₀ … aₙ}         − unknown coefficients of approximation
ϕ = {ϕ₀(x) … ϕₙ(x)}   − assumed basis functions
x = {x₍₁₎ … x₍ₘ₎}     − position vector of an arbitrary point in m-dimensional space
Interpolation – Extrapolation

Basis functions – examples

• Monomials

1, x, x², ..., xⁿ

Weierstrass Theorem
If a continuous function f(x) is approximated by the polynomial Pₙ(x), then for any given δ > 0 such an n may be found that |f(x) − Pₙ(x)| < δ throughout the interval considered.
• Tschebychev polynomials
• Trigonometric functions
8.2. INTERPOLATION IN 1D SPACE

f(xᵢ) = Pₙ(xᵢ) = [ϕ₀(xᵢ) … ϕₙ(xᵢ)] {a₀, …, aₙ}ᵗ ,   i = 0, 1, …, n ;   f(xᵢ) ≡ fᵢ

Let

Φa = F   →   a = Φ⁻¹F
Example

1D linear interpolation. Let

f(x) ≈ a₀ + a₁x

then

f(x₀) ≡ f₀ = a₀ + a₁x₀ ,   f(x₁) ≡ f₁ = a₀ + a₁x₁   →   a₀ = (f₀x₁ − f₁x₀)/(x₁ − x₀) ,   a₁ = (f₁ − f₀)/(x₁ − x₀)

hence

f(x) ≈ P₁(x) = f₀ (x − x₁)/(x₀ − x₁) + f₁ (x − x₀)/(x₁ − x₀) = f₀ L₀⁽¹⁾(x) + f₁ L₁⁽¹⁾(x)
Generally

F = Φa   →   a = Φ⁻¹F      − solution of SLAE (simultaneous linear algebraic equations)

8.3. LAGRANGIAN INTERPOLATION

Required: an interpolating polynomial of degree n which passes through the n+1 points (xᵢ, fᵢ). The Lagrangian basis function L₀⁽ⁿ⁾ satisfies

L₀⁽ⁿ⁾(xⱼ) = 1 if xⱼ = x₀ ,  0 if xⱼ ≠ x₀ ,    for j = 0, 1, …, n

[Figure: the linear basis function L₀⁽¹⁾(x), equal to 1 at x₀ and 0 at x₁]
Lᵢ⁽ⁿ⁾(xⱼ) = 1 if xⱼ = xᵢ ,  0 if xⱼ ≠ xᵢ ,    for i, j = 0, 1, …, n

Lₙ⁽ⁿ⁾(xⱼ) = 1 if xⱼ = xₙ ,  0 if xⱼ ≠ xₙ ,    for j = 0, 1, …, n

Generally

Lⱼ⁽ⁿ⁾(xᵢ) = 1 if j = i ,  0 if j ≠ i ,    i, j = 0, 1, …, n      (*)
The interpolating polynomial defined by

Pₙ(x) = Σ_{j=0}^{n} Lⱼ⁽ⁿ⁾(x) fⱼ

satisfies the equations

f(xᵢ) = Pₙ(xᵢ) ,   i = 0, 1, …, n

How to find Lⱼ(x)?

Given

x₀, x₁, …, xᵢ₋₁, xᵢ, xᵢ₊₁, …, xₙ

Required: the lowest order polynomial Lⱼ⁽ⁿ⁾(x) satisfying the conditions (*) posed above.
Lᵢ⁽ⁿ⁾(x) = ∏_{j=0, j≠i}^{n} (x − xⱼ)  /  ∏_{j=0, j≠i}^{n} (xᵢ − xⱼ)
Examples

n = 1:

L₀⁽¹⁾(x) = (x − x₁)/(x₀ − x₁) ,   L₁⁽¹⁾(x) = (x − x₀)/(x₁ − x₀)

P₁(x) = f₀ (x − x₁)/(x₀ − x₁) + f₁ (x − x₀)/(x₁ − x₀)

n = 2:

L₀⁽²⁾(x) = (x − x₁)(x − x₂) / [(x₀ − x₁)(x₀ − x₂)]
L₁⁽²⁾(x) = (x − x₀)(x − x₂) / [(x₁ − x₀)(x₁ − x₂)]
L₂⁽²⁾(x) = (x − x₀)(x − x₁) / [(x₂ − x₀)(x₂ − x₁)]

P₂(x) = L₀⁽²⁾(x) f₀ + L₁⁽²⁾(x) f₁ + L₂⁽²⁾(x) f₂
Example

Given the data points (xᵢ, fᵢ) = (1, 1), (2, 3), (4, 7), (8, 11), interpolate for f(7) using the third order Lagrange polynomial. Repeat the solution for linear interpolation.

P₃(7) = 1 · L₀⁽³⁾(7) + 3 · L₁⁽³⁾(7) + 7 · L₂⁽³⁾(7) + 11 · L₃⁽³⁾(7)

L₀⁽³⁾(7) = (7 − 2)(7 − 4)(7 − 8) / [(1 − 2)(1 − 4)(1 − 8)] = 0.71429
L₁⁽³⁾(7) = (7 − 1)(7 − 4)(7 − 8) / [(2 − 1)(2 − 4)(2 − 8)] = −1.5
L₂⁽³⁾(7) = (7 − 1)(7 − 2)(7 − 8) / [(4 − 1)(4 − 2)(4 − 8)] = 1.25
L₃⁽³⁾(7) = (7 − 1)(7 − 2)(7 − 4) / [(8 − 1)(8 − 2)(8 − 4)] = 0.53571

f(7) ≈ P₃(7) = 0.71429 + 3 · (−1.5) + 7 · (1.25) + 11 · (0.53571) = 10.85710

Linear interpolation, using the two nearest nodes x = 4 and x = 8:

P₁(7) = 7 · L₀⁽¹⁾(7) + 11 · L₁⁽¹⁾(7)

L₀⁽¹⁾(7) = (7 − 8)/(4 − 8) = 0.25 ,   L₁⁽¹⁾(7) = (7 − 4)/(8 − 4) = 0.75

f(7) ≈ P₁(7) = 7 · (0.25) + 11 · (0.75) = 10
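The computation above is short to automate; a sketch (function name ours) reproduces both values:

```python
def lagrange(xs, fs, x):
    # P_n(x) = sum_j f_j * L_j(x),  L_j(x) = prod_{i != j} (x - x_i)/(x_j - x_i)
    n = len(xs)
    total = 0.0
    for j in range(n):
        L = 1.0
        for i in range(n):
            if i != j:
                L *= (x - xs[i]) / (xs[j] - xs[i])
        total += fs[j] * L
    return total

xs, fs = [1.0, 2.0, 4.0, 8.0], [1.0, 3.0, 7.0, 11.0]
p3 = lagrange(xs, fs, 7.0)            # -> 10.857142... (= 76/7), as in the example
p1 = lagrange(xs[2:], fs[2:], 7.0)    # linear through (4, 7), (8, 11) -> 10.0
```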
Interpolation error

ε⁽ⁿ⁾(x) ≡ f(x) − Pₙ(x)

ε⁽ⁿ⁾(x) = f⁽ⁿ⁺¹⁾(ξ) ∏ᵢ₌₀ⁿ (x − xᵢ) / (n+1)!  ≤  max|f⁽ⁿ⁺¹⁾| · ∏ᵢ₌₀ⁿ |x − xᵢ| / (n+1)! ,   for x₀ ≤ ξ ≤ xₙ

Example

Given f(x) = ln(x), x ∈ [1, 2], n = 3   →   f⁽ᴵⱽ⁾(x) = −6/x⁴   →   max|f⁽ᴵⱽ⁾| = 6   →

ε⁽³⁾(x) ≤ (6/4!) |(x − x₀)(x − x₁)(x − x₂)(x − x₃)|
Remarks
High-order interpolation based on monomials is ill-conditioned. The reason is that the monomials xⁿ are not orthogonal to each other and can hardly be distinguished for higher orders, e.g. x¹⁵ and x¹⁷.
Problems

8.4. INVERSE LAGRANGIAN INTERPOLATION

In many cases the contrasting question is asked: "find the value of the variable x at which the function f(x) takes on a particular value" (zero, say). It suffices to interchange the roles of x and f in the interpolation formula:

x(f) = Σᵢ₌₀ⁿ Lᵢ⁽ⁿ⁾(f) xᵢ ,    Lᵢ⁽ⁿ⁾(f) = ∏_{j=0, j≠i}^{n} (f − fⱼ)/(fᵢ − fⱼ)

Example

Find x ∈ [1.0, 2.0] for which f = 0.50. The following values are given:

L₀⁽⁵⁾(0.5) =  0.01122105      L₃⁽⁵⁾(0.5) =  0.93966347
L₁⁽⁵⁾(0.5) = −0.04725516      L₄⁽⁵⁾(0.5) = −0.02204592
L₂⁽⁵⁾(0.5) =  0.11677892      L₅⁽⁵⁾(0.5) =  0.00163764

Hence

x = Σᵢ₌₀⁵ xᵢ Lᵢ⁽⁵⁾(0.5) = 1.58505952
8.5. CHEBYSHEV POLYNOMIALS

In order to minimize the error term in the Lagrangian interpolation we may minimize the term ∏ᵢ (x − xᵢ) by an appropriate choice of the interpolation nodes xᵢ . We seek

min_{xᵢ}  max_{−1 ≤ x ≤ 1} | (x − x₀)(x − x₁) ⋯ (x − xₙ₋₁) |

The solution is given in terms of the Chebyshev polynomials, defined by the recurrence T₀ = 1, T₁ = x, Tₖ₊₁(x) = 2x Tₖ(x) − Tₖ₋₁(x):

T₀ = 1                  T₁ = x
T₂ = 2x² − 1            T₃ = 4x³ − 3x
T₄ = 8x⁴ − 8x² + 1      T₅ = 16x⁵ − 20x³ + 5x

[Figure: behavior of the even and of the odd Chebyshev polynomials on [−1, 1]]
Orthogonality (weighted)

The Chebyshev polynomials form an orthogonal set over [−1, 1] with respect to the weight function 1/√(1 − x²), i.e.

Iᵢⱼ = ∫₋₁¹ Tᵢ(x)Tⱼ(x)/√(1 − x²) dx  =  0 for i ≠ j ,   π/2 for i = j ≠ 0 ,   π for i = j = 0
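The recurrence and the closed forms listed above are easy to cross-check (function name ours):

```python
def T(n, x):
    # Chebyshev recurrence: T_0 = 1, T_1 = x, T_{k+1} = 2x T_k - T_{k-1}
    a, b = 1.0, x
    for _ in range(n):
        a, b = b, 2.0 * x * b - a
    return a

# spot-check against the closed forms, e.g. T4 = 8x^4 - 8x^2 + 1 at x = 0.3
x = 0.3
vals = (T(2, x), T(3, x), T(4, x))
```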
8.6. HERMITE INTERPOLATION

Here both the function values fⱼ and the first derivative values f′ⱼ are interpolated at the nodes. Consider first

lⱼ(x) = ∏_{i=0, i≠j}^{n} (x − xᵢ)²

When normalized it satisfies all the required conditions except hⱼ(xⱼ) = 1 and h′ⱼ(xⱼ) = 0. However, if we write

hⱼ(x) = [a(x − xⱼ) + b] ∏_{i=0, i≠j}^{n} (x − xᵢ)² = [a(x − xⱼ) + b] lⱼ(x)

then from

hⱼ(xⱼ) = [a(xⱼ − xⱼ) + b] ∏_{i≠j} (xⱼ − xᵢ)² = 1   →   b

we get

b = 1 / ∏_{i=0, i≠j}^{n} (xⱼ − xᵢ)² = 1 / lⱼ(xⱼ)

and from

h′ⱼ(xⱼ) = a ∏_{i≠j} (xⱼ − xᵢ)² + [a(xⱼ − xⱼ) + b] l′ⱼ(xⱼ) = 0   →   a

we get

a = − l′ⱼ(xⱼ) / l²ⱼ(xⱼ)

Hence

hⱼ(x) = ( lⱼ(x)/lⱼ(xⱼ) ) [ 1 − (x − xⱼ) l′ⱼ(xⱼ)/lⱼ(xⱼ) ]

Since the following holds

lⱼ(x)/lⱼ(xⱼ) ≡ Lⱼ⁽ⁿ⁾²(x) ,    l′ⱼ(xⱼ)/lⱼ(xⱼ) = 2 Lⱼ⁽ⁿ⁾(xⱼ) Lⱼ⁽ⁿ⁾′(xⱼ) = 2 Lⱼ⁽ⁿ⁾′(xⱼ)

we finally have

hⱼ(x) = Lⱼ⁽ⁿ⁾²(x) [ 1 − 2(x − xⱼ) Lⱼ⁽ⁿ⁾′(xⱼ) ]
In the case when the values f_j' are given only at x_j, j = 0, 1, …, r < n, the Hermite formula becomes

P_{n+r+1}(x) = Σ_{j=0}^{n} h_j(x) f_j + Σ_{j=0}^{r} g_j(x) f_j'

where

h_j(x) = { { 1 − (x − x_j) [ L_j^(n)'(x_j) + L_j^(r)'(x_j) ] } L_j^(n)(x) L_j^(r)(x) ,    j = 0, 1, …, r
         { L_j^(n)(x) L_r^(r)(x) (x − x_r)/(x_j − x_r) ,                               j = r+1, …, n
and
g_j(x) = (x − x_j) L_j^(r)(x) L_j^(n)(x) ,   j = 0, 1, …, r

The interpolation error is

E(x) = ( π_n(x) π_r(x) / (n + r + 2)! ) f^(n+r+2)(ξ) ,   ξ ∈ [x0, xn]

where

π_n(x) = ∏_{i=0}^{n} (x − x_i)
Example
Ad a)

N00 = Σ_{j=0}^{1} f_j [ L_j^(1)(x) ]² [ 1 − 2(x − x_j) L_j^(1)'(x_j) ] + Σ_{j=0}^{1} f_j' (x − x_j) [ L_j^(1)(x) ]²

    = 1·((l − x)/l)² [ 1 − 2(x − 0)(−1/l) ] + 0·(x − 0)((l − x)/l)² + 0·(x/l)² [ 1 − 2(x − l)(1/l) ] + 0·(x − l)(x/l)²

N00 = (1 − x/l)² (1 + 2x/l)
Ad b)
N01 = 0·((l − x)/l)² [ 1 − 2(x − 0)(−1/l) ] + 1·(x − 0)((l − x)/l)² + 0·(x/l)² [ 1 − 2(x − l)(1/l) ] + 0·(x − l)(x/l)²

N01 = x (1 − x/l)²
Ad c)
N10 = 1·(x/l)² [ 1 − 2(x − l)(1/l) ] = (x/l)² (3 − 2x/l)

Ad d)

N11 = (x − l) (x/l)²
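As a numerical check of the four shape functions above, each should reproduce exactly one of the nodal data: the value or the slope at x = 0 or x = l. A Python sketch (the choice l = 2 is arbitrary; names are ours):

```python
def hermite_shape(l):
    """Cubic Hermite shape functions on [0, l] as derived above."""
    N00 = lambda x: (1 - x / l) ** 2 * (1 + 2 * x / l)   # unit value at x = 0
    N01 = lambda x: x * (1 - x / l) ** 2                 # unit slope at x = 0
    N10 = lambda x: (x / l) ** 2 * (3 - 2 * x / l)       # unit value at x = l
    N11 = lambda x: (x - l) * (x / l) ** 2               # unit slope at x = l
    return N00, N01, N10, N11

def d(f, x, h=1e-6):
    """Central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

N00, N01, N10, N11 = hermite_shape(2.0)
```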
8.7.1. Introduction
8.7.2. Definition
A function which is a polynomial of degree k in each interval [ xi , xi +1 ] and which has
continuous derivatives up to and including order k-1 is called a spline function of degree k.
Example
Given is

S(x) = { 1 − 2x = S1(x)                for x ≤ −3
       { 28 + 25x + 9x² + x³ = S2(x)   for −3 ≤ x ≤ −1
       { 26 + 19x + 3x² − x³ = S3(x)   for −1 ≤ x ≤ 0

hence

S1(−3) = S2(−3) = 7         S2(−1) = S3(−1) = 11
S1'(−3) = S2'(−3) = −2       S2'(−1) = S3'(−1) = 10
S1''(−3) = S2''(−3) = 0       S2''(−1) = S3''(−1) = 12

i.e. S, S' and S'' are continuous at the knots, so S(x) is a cubic spline.
Examples
THE SPLINE S(x) of degree k on the tabular points x0, x1, …, xn is represented as:

S(x) = p_k(x) + Σ_{i=1}^{n−1} b_i (x − x_i)₊^k

where

p_k(x) = Σ_{i=0}^{k} a_i x^i

(x − x_i)₊^k ≡ { (x − x_i)^k    if x − x_i > 0
              { 0              otherwise
Example
Determine the quadratic spline on the tabular points x0, x1, …, xn such that S'(x0) = 0.

S(x) = p2(x) + Σ_{i=1}^{n−1} b_i (x − x_i)₊²

p2(x) = a0 + a1 x + a2 x²

For x ∈ [x0, x1] the conditions S(x0) = f0, S'(x0) = 0, S(x1) = f1 give

a0 = f0 + a2 x0² ,   a1 = −2 x0 a2 ,   a2 = (f1 − f0) / (x1 − x0)²

For x ∈ [x_j, x_{j+1}]

S(x_{j+1}) ≡ p2(x_{j+1}) + Σ_{i=1}^{j} b_i (x_{j+1} − x_i)₊² = f_{j+1}

hence

b_j = [ f_{j+1} − p2(x_{j+1}) − Σ_{i=1}^{j−1} b_i (x_{j+1} − x_i)² ] / (x_{j+1} − x_j)²
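The recursion for the coefficients b_j can be implemented directly. A Python sketch under the same assumptions (S'(x0) = 0; names and sample data are ours):

```python
def quadratic_spline(xs, fs):
    """Quadratic spline S(x) = p2(x) + sum_j b_j (x - x_j)_+^2 with S'(x0) = 0."""
    x0 = xs[0]
    a2 = (fs[1] - fs[0]) / (xs[1] - x0) ** 2
    a1 = -2.0 * x0 * a2
    a0 = fs[0] + a2 * x0 ** 2
    p2 = lambda x: a0 + a1 * x + a2 * x * x
    b = []                                   # holds b_1, ..., b_{n-1}
    for j in range(1, len(xs) - 1):
        xj1 = xs[j + 1]
        # interpolation condition S(x_{j+1}) = f_{j+1} solved for b_j
        s = p2(xj1) + sum(bi * (xj1 - xs[i]) ** 2 for i, bi in enumerate(b, start=1))
        b.append((fs[j + 1] - s) / (xj1 - xs[j]) ** 2)
    def S(x):
        val = p2(x)
        for i, bi in enumerate(b, start=1):
            if x > xs[i]:                    # the truncated power (x - x_i)_+^2
                val += bi * (x - xs[i]) ** 2
        return val
    return S

S = quadratic_spline([0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 0.0, 5.0])
```

The resulting S interpolates all the data and starts with zero slope at x0.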
Homework
• Evaluate the coefficients b_j for a cubic spline, if S'(x0) = 0, S''(x0) = 0.
• Find the spline given in the first example; assume that S(x0), S(x1), S(x2), S'(x0), S''(x0) are given.
8.8. THE BEST APPROXIMATION

Introduction

f(x) ≈ Pn(x) = aᵀφ

ε ≡ f − Pn   in [x0, xn]

Required:

min over a of ‖ε‖ = min over a of ‖f − Pn‖

The approximation is the best with respect to a chosen norm. The norms mostly used are:

(i) ‖ε‖_∞ = max |ε|   -   MINIMAX approximation (Chebyshev):  min over a of max for x0 ≤ x ≤ xn of |f − Pn|
(ii) ‖ε‖₂ = { ( ∫_{x0}^{xn} ε² dx )^{1/2}    - for f continuous
           { ( Σ_{i=1}^{n} ε_i² )^{1/2}     - for f discrete        -   EUCLIDEAN norm (least squares)
Example
Given the data

 n     0   1   2
 x     1   2   3
 f(x)  1   1   2

find the best linear fit P1(x) = a0 + a1 x in the least squares sense.

I ≡ ‖ε‖₂² = Σ_{i=0}^{2} ( f_i − P1(x_i) )² = (1 − a0 − a1)² + (1 − a0 − 2a1)² + (2 − a0 − 3a1)²
∂I/∂a0 = 2 [ −(1 − a0 − a1) − (1 − a0 − 2a1) − (2 − a0 − 3a1) ] = 0   →   3a0 + 6a1 = 4

∂I/∂a1 = 2 [ −1·(1 − a0 − a1) − 2·(1 − a0 − 2a1) − 3·(2 − a0 − 3a1) ] = 0   →   6a0 + 14a1 = 9

Hence the solution is

P1(x) = 1/3 + (1/2) x
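The same 2×2 normal-equation system can be solved in code. This Python sketch (function name ours) reproduces a0 = 1/3, a1 = 1/2 for the data above:

```python
def lsq_line(xs, fs):
    """Least squares fit P1(x) = a0 + a1 x via the normal equations."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    sf = sum(fs)
    sxf = sum(x * f for x, f in zip(xs, fs))
    det = n * sxx - sx * sx
    a0 = (sf * sxx - sx * sxf) / det
    a1 = (n * sxf - sx * sf) / det
    return a0, a1

a0, a1 = lsq_line([1.0, 2.0, 3.0], [1.0, 1.0, 2.0])
```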
Let
I = ∫_{x0}^{xn} [ f − Σ_{i=0}^{n} a_i φ_i(x) ]² dx

∂I/∂a_k = −2 ∫_{x0}^{xn} φ_k(x) [ f − Σ_{i=0}^{n} a_i φ_i(x) ] dx = 0 ,   k = 0, 1, …, n

Hence

∫_{x0}^{xn} f(x) φ_k(x) dx − Σ_{i=0}^{n} a_i ∫_{x0}^{xn} φ_i(x) φ_k(x) dx = 0
Let

Φ_ik = ∫_{x0}^{xn} φ_i(x) φ_k(x) dx ,   F_k = ∫_{x0}^{xn} f(x) φ_k(x) dx

Then

Σ_i a_i Φ_ik = F_k   ⇔   Φa = F   →   a = Φ⁻¹F

a = { a0, a1, …, an }        (n+1) × 1
Φ = [ Φ_ik ]                 (n+1) × (n+1)
F = { F0, F1, …, Fn }        (n+1) × 1

If the basis functions are orthogonal, the matrix Φ is diagonal, Φ = diag(Φ_ii), and the coefficients a are explicitly determined.
8.10. INNER PRODUCT

Given are functions f(x), g(x), x ∈ [a, b]. Let us define the INNER PRODUCT:

(f, g) = { ∫_a^b f(x) g(x) dx          - if f and g are continuous
         { Σ_{i=0}^{n} f(x_i) g(x_i)   - if f and g are discrete
Examples
a) given f(x) = x, g(x) = 2x² + 1, x ∈ [0, 1]:

(f, g) = ∫₀¹ x (2x² + 1) dx = [ (x⁴ + x²)/2 ]₀¹ = 1

b) given the discrete values f(x_i) = 0.0, 0.5, 0.8, 1.0 and g(x_i) = 1.0, 1.5, 2.28, 3.0:

(f, g) = Σ_{i=0}^{3} f(x_i) g(x_i) = 0.0·1.0 + 0.5·1.5 + 0.8·2.28 + 1.0·3.0 = 5.574
8.11. THE GENERATION OF ORTHOGONAL FUNCTIONS BY THE GRAM - SCHMIDT PROCESS

The functions q_j are orthogonal if

(q_j, q_k) = { ∫_a^b q_j(x) q_k(x) dx          - continuous case
            { Σ_{i=0}^{n} q_j(x_i) q_k(x_i)    - discrete case
            =  { 0     if j ≠ k
               { ≠ 0   if j = k          j, k = 0, 1, …, m
Let

q0(x) = φ0(x)

q1(x) = φ1(x) − α01 q0(x) ,   but (q1, q0) = 0

hence

(q1, q0) = (φ1, q0) − α01 (q0, q0) = 0   →   α01 = (φ1, q0) / (q0, q0)

q2(x) = φ2(x) − α02 q0(x) − α12 q1(x)

hence

(q2, q0) = (φ2, q0) − α02 (q0, q0) − α12 (q1, q0) = 0   →   α02 = (φ2, q0) / (q0, q0)    (since (q1, q0) = 0)

and

(q2, q1) = (φ2, q1) − α02 (q0, q1) − α12 (q1, q1) = 0   →   α12 = (φ2, q1) / (q1, q1)    (since (q0, q1) = 0)
Generally, let

q_p(x) = φ_p(x) − Σ_{i=0}^{p−1} α_ip q_i(x)

Requiring (q_j, q_k) = 0 if j ≠ k, we get

(q_p, q_j) = (φ_p, q_j) − Σ_{i=0}^{p−1} α_ip (q_i, q_j) = (φ_p, q_j) − α_jp (q_j, q_j) = 0

hence

α_jp = (φ_p, q_j) / (q_j, q_j) ,   p = 1, 2, …, n ;   j = 0, 1, …, p−1
Example
Orthogonalize the monomials φ_i(x) = x^i on [0, 2], with the inner product

(φ_i, φ_j) = ∫₀² φ_i(x) φ_j(x) dx

q0 = 1

q1 = x − α01·1 ,   α01 = ∫₀² x·1 dx / ∫₀² 1·1 dx = [x²/2]₀² / 2 = 2/2 = 1

q1 = x − 1
q2 = x² − α02·1 − α12·(x − 1)

α02 = ∫₀² x²·1 dx / ∫₀² 1·1 dx = [x³/3]₀² / 2 = (8/3)/2 = 4/3

α12 = ∫₀² x²(x − 1) dx / ∫₀² (x − 1)² dx = [x⁴/4 − x³/3]₀² / [(x − 1)³/3]₀² = (4/3) / (2/3) = 2

q2 = x² − (4/3)·1 − 2·(x − 1) = x² − 2x + 2/3
q3 = x³ − a03·1 − α13·(x − 1) − α23·(x² − 2x + 2/3)

a03 = ∫₀² x³·1 dx / ∫₀² 1·1 dx = [x⁴/4]₀² / 2 = 4/2 = 2

α13 = ∫₀² x³(x − 1) dx / ∫₀² (x − 1)² dx = [x⁵/5 − x⁴/4]₀² / (2/3) = (12/5) / (2/3) = 18/5

α23 = ∫₀² x³(x² − 2x + 2/3) dx / ∫₀² (x² − 2x + 2/3)² dx = [x⁶/6 − (2/5)x⁵ + x⁴/6]₀² / (8/45) = (8/15) / (8/45) = 3

q3 = x³ − 2 − (18/5)(x − 1) − 3(x² − 2x + 2/3) = x³ − 3x² + (12/5)x − 2/5
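The process can be verified numerically. In this Python sketch the inner product is approximated by a composite Simpson rule on [0, 2] (matching the example's interval), and the generated functions are compared with the hand-computed q2 and q3:

```python
def inner(f, g, a=0.0, b=2.0, m=2000):
    """(f, g) = int_a^b f g dx by the composite Simpson rule (m even)."""
    h = (b - a) / m
    s = f(a) * g(a) + f(b) * g(b)
    for k in range(1, m):
        x = a + k * h
        s += (4.0 if k % 2 else 2.0) * f(x) * g(x)
    return s * h / 3.0

def gram_schmidt(phis):
    """q_p = phi_p - sum_j alpha_jp q_j with alpha_jp = (phi_p, q_j) / (q_j, q_j)."""
    qs = []
    for phi in phis:
        cs = tuple((inner(phi, qj) / inner(qj, qj), qj) for qj in qs)
        qs.append(lambda x, phi=phi, cs=cs: phi(x) - sum(a * qj(x) for a, qj in cs))
    return qs

q0, q1, q2, q3 = gram_schmidt([lambda x: 1.0, lambda x: x,
                               lambda x: x * x, lambda x: x ** 3])
```

With exact integration one gets q2 = x² − 2x + 2/3 and q3 = x³ − 3x² + (12/5)x − 2/5, as above.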
8.11.1. Orthonormalization
(q̄_i, q̄_j) = δ_ij   →   q̄_i = q_i / (q_i, q_i)^{1/2}

where q̄_i is the normalized and q_i the non-normalized function.

8.11.2. Weighted orthogonalization

(q_i, w q_j) = (w q_i, q_j) = { 0          for i ≠ j
                              { c_j ≠ 0    for i = j

8.11.3. Weighted orthonormalization

(q̄_i, w q̄_j) = δ_ij
Homework
8.12. APPROXIMATION IN A 2D DOMAIN

8.12.1. Lagrangian approximation over rectangular domain

Let

L_ij^(n,m)(x, y) = L_i^(n)(x) · L_j^(m)(y) = { 1   if x = x_i and y = y_j
                                             { 0   at all other nodes

f(x, y) ≈ Σ_{i=0}^{n} Σ_{j=0}^{m} a_ij L_i^(n)(x) · L_j^(m)(y) ,   a_ij = f(x_i, y_j)
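The product rule L_i(x)·L_j(y) is easy to code. This sketch (names ours) reproduces any polynomial of degree ≤ n in x and ≤ m in y exactly:

```python
def lagrange_basis(xs, i, x):
    """L_i(x) on the nodes xs."""
    v = 1.0
    for j, xj in enumerate(xs):
        if j != i:
            v *= (x - xj) / (xs[i] - xj)
    return v

def interp2d(xs, ys, F, x, y):
    """f(x, y) ~ sum_ij F[i][j] L_i(x) L_j(y), with F[i][j] = f(x_i, y_j)."""
    return sum(F[i][j] * lagrange_basis(xs, i, x) * lagrange_basis(ys, j, y)
               for i in range(len(xs)) for j in range(len(ys)))

# nodal values of f(x, y) = x^2 + 3y on a 3 x 2 grid
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0]
F = [[xi * xi + 3.0 * yj for yj in ys] for xi in xs]
```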
Example
9. NUMERICAL DIFFERENTIATION

Problem

Given is a discrete function f(x_i), i = 0, 1, …, n. Find the derivative of this function at a point x = x_j.

Solution

f(x) ≈ Pn(x) = aᵀφ(x)

f'(x) ≈ dPn(x)/dx = aᵀφ'(x)
Example
f(x) ≈ Σ_{i=0}^{n} a_i L_i^(n)(x)   →   f'(x) ≈ Σ_{i=0}^{n} a_i L_i^(n)'(x)

For three nodes x0, x1, x2 (with a_i = f_i):

f(x) = f0 (x − x1)(x − x2) / [(x0 − x1)(x0 − x2)] + f1 (x − x0)(x − x2) / [(x1 − x0)(x1 − x2)] + f2 (x − x0)(x − x1) / [(x2 − x0)(x2 − x1)]

f'(x) = f0 (2x − x1 − x2) / [(x0 − x1)(x0 − x2)] + f1 (2x − x0 − x2) / [(x1 − x0)(x1 − x2)] + f2 (2x − x0 − x1) / [(x2 − x0)(x2 − x1)]

In the case x2 − x1 = x1 − x0 = h

f'(x) = f0/(2h²) (2x − x1 − x2) − f1/h² (2x − x0 − x2) + f2/(2h²) (2x − x0 − x1)

hence the finite difference formulas

f'(x0) = −(3/(2h)) f0 + (2/h) f1 − (1/(2h)) f2

f'(x1) = (f2 − f0)/(2h)      - central difference

f'(x2) = (1/(2h)) f0 − (2/h) f1 + (3/(2h)) f2
Example: given

 x     0.0   1/2    1.0
 f(x)  1.0  13/16   0.0

f'(x) ≈ 1/(2·(1/2)²) (2x − 1/2 − 1)·1 − 1/(1/2)² (2x − 0 − 1)·(13/16) + 1/(2·(1/2)²) (2x − 0 − 1/2)·0 = −(5/2) x + 1/4

 x          0.0     1/3    1/2    1.0
 f' exact   0.0   −14/27   −1.0   −2.0
 f' approx  1/4    −7/12   −1.0   −9/4
9.2. GENERATION OF NUMERICAL DERIVATIVES BY THE UNDETERMINED COEFFICIENTS METHOD

Example

Required:

f'(x_i) = α_{i−1} f_{i−1} + α_i f_i + α_{i+1} f_{i+1}

Find α_{i−1}, α_i, α_{i+1} if x_{i−1} = x_i − h and x_{i+1} = x_i + h.
Let (each line to be multiplied by the coefficient listed on the right)

f_{i−1} = f(x_i − h) = f_i − h f_i' + (h²/2) f_i'' − (h³/6) f_i''' + …    | α_{i−1}
f_i     =              f_i                                               | α_i
f_{i+1} = f(x_i + h) = f_i + h f_i' + (h²/2) f_i'' + (h³/6) f_i''' + …    | α_{i+1}

Matching the combination against 0·f_i + 1·f_i' + 0·f_i'' + … :

0·f_i + 1·f_i' + 0·f_i'' + … = f_i (α_{i−1} + α_i + α_{i+1}) + f_i' (−h α_{i−1} + h α_{i+1}) + f_i'' (h²/2)(α_{i−1} + α_{i+1}) + f_i''' (h³/6)(−α_{i−1} + α_{i+1}) + …

hence

α_{i−1} + α_i + α_{i+1} = 0
−h α_{i−1} + h α_{i+1} = 1        →    α_{i−1} = −1/(2h) ,   α_i = 0 ,   α_{i+1} = 1/(2h)
α_{i−1} + α_{i+1} = 0

Finally

f'(x_i) = (f_{i+1} − f_{i−1})/(2h) − (h²/6) f_i''' − … ≡ (f_{i+1} − f_{i−1})/(2h) + O(h²)

This is the central finite difference formula, of second order of accuracy.
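The O(h²) accuracy is easy to observe numerically: halving h should reduce the error of the central difference roughly four times. A Python sketch with f = sin (our own test function):

```python
import math

def central(f, x, h):
    """Central difference (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

e1 = abs(central(math.sin, 1.0, 0.10) - math.cos(1.0))
e2 = abs(central(math.sin, 1.0, 0.05) - math.cos(1.0))
ratio = e1 / e2   # close to 4 for a second-order formula
```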
Homework
• Derive the same formula using the Lagrangian approximation. Find the derivative f'(2.5) if

 x     0.0  1.0  2.0  3.0
 f(x)  0.0  1.0  8.0  27.0

• Derive the same formulas by using approach (ii), i.e. substituting for f(x) subsequently x^j, j = 0, 1, …, k, in order to obtain simultaneous equations for the α_i, i = 0, 1, …, k:

j x^{j−1} = Σ_{i=0}^{k} α_i x_i^j   →   α_i = …
Chapter 10—1/10 2008-04-08
10. NUMERICAL INTEGRATION

10.1. INTRODUCTION

As in the case of numerical differentiation, an approximation may be used in order to replace the integrated function f(x). Thus

I_exact = ∫_a^b f(x) dx ≈ ∫_a^b Pn(x) dx = ∫_a^b aᵀφ(x) dx ≡ I_approx

The simplest case replaces f by its value at the left end of the interval:

∫_a^b f(x) dx ≈ ∫_a^b f0 dx = f0 ∫_a^b dx = h f0      - rectangular rule

where h = b − a.
Example
Evaluate

I = ∫₀^{1/2} (1 + x)^{1/2} dx = [ (2/3)(1 + x)^{3/2} ]₀^{1/2} = 0.5580782

Rectangular rule:

I ≈ (1 + 0)^{1/2} · (1/2 − 0) = 0.5

Trapezoidal rule:

I ≈ ((1/2 − 0)/2) [ (1 + 0)^{1/2} + (1 + 1/2)^{1/2} ] = 0.5561862

Simpson rule:

I ≈ ((1/4 − 0)/3) [ (1 + 0)^{1/2} + 4 (1 + 1/4)^{1/2} + (1 + 1/2)^{1/2} ] = 0.5580734
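The three rules of the example can be reproduced in a few lines (a Python sketch of the arithmetic above):

```python
import math

f = lambda x: math.sqrt(1.0 + x)
a, b = 0.0, 0.5
exact = 2.0 / 3.0 * ((1.0 + b) ** 1.5 - 1.0)

rect = f(a) * (b - a)                                        # rectangular rule
trap = (b - a) / 2.0 * (f(a) + f(b))                         # trapezoidal rule
simp = (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))  # Simpson rule
```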
General approach
I_{n+1} = Σ_{j=0}^{n} ∫_a^b L_j(x) f_j dx

With equally spaced nodes x_j = x0 + jh and the substitution x = x0 + sh:

I_{n+1} = Σ_{j=0}^{n} ∫_p^q ∏_{k=0, k≠j}^{n} [ (x0 + sh) − (x0 + kh) ] / [ (x0 + jh) − (x0 + kh) ] f_j h ds = h Σ_{j=0}^{n} f_j ∫_p^q ∏_{k=0, k≠j}^{n} (s − k)/(j − k) ds

i.e.

I_{n+1} = Σ_{j=0}^{n} α_j f_j
where the coefficient α_j is given by

α_j = h / ∏_{k=0, k≠j}^{n} (j − k) · ∫_p^q ∏_{k=0, k≠j}^{n} (s − k) ds

Since

1 / ∏_{k=0, k≠j}^{n} (j − k) = (−1)^{n−j} / [ j! (n − j)! ] = ( (−1)^{n−j} / n! ) (n over j)

we obtain

α_j = h ( (−1)^{n−j} / n! ) (n over j) ∫_p^q ∏_{k=0}^{n} (s − k)/(s − j) ds ,   j = 0, 1, …, n
where E = I − I_{n+1} is the error term, which may be evaluated for the Newton – Cotes formulas; for n even (n = 2m) it takes the form

E = ( 2 h^{n+3} / (n + 2)! ) f^(n+2)(η) ∫₀¹ ∏_{k=0}^{m} (s² − k²) ds

(e.g. for n = 2 this gives the Simpson rule error −(h⁵/90) f^IV(η)). The results of the above formulas may be presented in tabular form (coefficients α_j /h of the closed NEWTON – COTES formulas).
Conclusion
Among the integrating formulas up to the third order, the Simpson formula displays the best accuracy.
Idea
We subdivide the interval [a, b] into a certain number of equal subintervals and apply in each of them the same appropriate rule (rectangular, trapezoidal, …). In this way we get, e.g.,

the composite trapezoidal rule

∫_a^b f(x) dx ≈ h [ (1/2) f0 + Σ_{i=1}^{n−1} f_i + (1/2) f_n ] − ( (b − a) h² / 12 ) f''(η)

and the composite Simpson rule

∫_a^b f(x) dx ≈ (h/3) [ f0 + f_{2n} + 2 Σ_{i=1}^{n−1} f_{2i} + 4 Σ_{i=0}^{n−1} f_{2i+1} ] − ( (b − a) h⁴ / 180 ) f^(4)(η) ,   a < η < b
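The composite Simpson rule can be sketched as follows (Python; here h = (b − a)/(2m) for 2m subintervals):

```python
import math

def simpson_composite(f, a, b, m):
    """Composite Simpson rule on 2m subintervals."""
    n2 = 2 * m
    h = (b - a) / n2
    s = f(a) + f(b)
    for k in range(1, n2):
        s += (4.0 if k % 2 else 2.0) * f(a + k * h)   # odd nodes weighted 4, even 2
    return s * h / 3.0
```

Since the rule is exact for cubics, a single pair of subintervals already integrates x³ exactly.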
Multiple integrals
I = ∫_a^b ∫_{c(x)}^{d(x)} f(x, y) dy dx

Applying the composite Simpson rule in the x direction (n subintervals of width Δx, n even):

I ≈ (Δx/3) [ g(a) + g(b) + 4 Σ_{j=1, j odd}^{n−1} g(a + jΔx) + 2 Σ_{j=2, j even}^{n−2} g(a + jΔx) ]

where:

g(a) = ∫_{c(a)}^{d(a)} f(a, y) dy

g(a + jΔx) = ∫_{c(a+jΔx)}^{d(a+jΔx)} f(a + jΔx, y) dy

g(b) = ∫_{c(b)}^{d(b)} f(b, y) dy

each evaluated in turn by a quadrature in y.
Let
x = (2z − a − b)/(b − a)   ⇔   z = ((b − a)/2) x + (a + b)/2

dz = ((b − a)/2) dx ,   J ≡ dz/dx = (b − a)/2

I = ∫_a^b F(z) dz = ∫_{−1}^{1} F(z(x)) (dz/dx) dx = ((b − a)/2) ∫_{−1}^{1} F( ((b − a)/2) x + (a + b)/2 ) dx ≡ ((b − a)/2) ∫_{−1}^{1} f(x) dx
Numerical quadrature

∫_{−1}^{1} f(x) dx = Σ_{i=1}^{n} w_i f(x_i)

so that

∫_a^b F(z) dz = ((b − a)/2) ∫_{−1}^{1} F(z(x)) dx = ((b − a)/2) Σ_{i=1}^{n} w_i f(x_i)

In the Gaussian integrating formulas we require the 2n − 1 order of accuracy, i.e. they should be exact for the monomials x^k, k = 0, 1, 2, …, 2n − 1:

Σ_{i=1}^{n} w_i x_i^k = ∫_{−1}^{1} x^k dx = (1/(k + 1)) [ 1 − (−1)^{k+1} ]
Example
n = 2   →   2n − 1 = 2·2 − 1 = 3 ,   k = 0, 1, 2, 3

w1 + w2 = 2
w1 x1 + w2 x2 = 0                 w1 = 1 ,   x1 = −1/√3 = −0.5773503
w1 x1² + w2 x2² = 2/3       ⇒
w1 x1³ + w2 x2³ = 0               w2 = 1 ,   x2 = +1/√3 = +0.5773503
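A quick check in Python: the moment equations hold for k = 0, …, 3 with these weights and abscissas, and fail (as they must) for k = 4:

```python
import math

x1, x2 = -1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0)
w1 = w2 = 1.0

def moment(k):
    """Quadrature sum for x^k versus the exact integral over [-1, 1]."""
    quad = w1 * x1 ** k + w2 * x2 ** k
    exact = (1.0 - (-1.0) ** (k + 1)) / (k + 1)
    return quad, exact
```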
Example
Find

I = ∫₀⁴ (1 + z)^{1/2} dz = (2/3) ( 5√5 − 1 ) = 6.78689

Transformation:

x = (2z − 0 − 4)/(4 − 0) = z/2 − 1   →   z = ((4 − 0)/2) x + (0 + 4)/2 = 2(x + 1)

f(x) = √(1 + 2(x + 1)) = √(2x + 3) ,   I = 2 ∫_{−1}^{1} √(2x + 3) dx

Two-point Gaussian quadrature:

Ĩ = 1 · 2 √( 2(−1/√3) + 3 ) + 1 · 2 √( 2(1/√3) + 3 ) = 6.79345

Error:

(I − Ĩ)/I = (6.78689 − 6.79345)/6.78689 = −0.000967 ≈ −0.1%
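The whole example can be replayed in Python (a sketch of the computation above):

```python
import math

F = lambda z: math.sqrt(1.0 + z)
a, b = 0.0, 4.0
J = (b - a) / 2.0                      # Jacobian of the mapping
z = lambda x: J * x + (a + b) / 2.0    # maps [-1, 1] onto [a, b]

c = 1.0 / math.sqrt(3.0)
I_approx = J * (F(z(-c)) + F(z(c)))    # two-point Gauss, both weights equal 1
I_exact = 2.0 / 3.0 * (5.0 ** 1.5 - 1.0)
rel_err = (I_exact - I_approx) / I_exact
```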
Remark
If needed, the interval may be split, e.g.

∫₀⁴ F(z) dz = ∫₀² F(z) dz + ∫₂⁴ F(z) dz

and the quadrature applied on each subinterval:

I = Σ_{i=0}^{m−1} I_i = (h/2) Σ_{i=0}^{m−1} Σ_{j=0}^{n} α_j f( (h/2) x_j + (z_i + z_{i+1})/2 )

z0 = a ,   z1 = (a + b)/2 ,   z2 = b
Apply the Hermite interpolation formula on [−1, 1]:

∫_{−1}^{1} f(x) dx = Σ_{i=0}^{n} ∫_{−1}^{1} h_i(x) dx · f(x_i) + Σ_{i=0}^{n} ∫_{−1}^{1} g_i(x) dx · f'(x_i) + E_I

∫_{−1}^{1} f(x) dx = Σ_{i=0}^{n} α_i f(x_i) + E_I ,   E_I = ∫_{−1}^{1} E dx = ( f^(2n+2)(ξ) / (2n + 2)! ) ∫_{−1}^{1} π²_{n+1}(x) dx

assuming that

∫_{−1}^{1} g_i(x) f'(x_i) dx = 0   →   ∫_{−1}^{1} g_i(x) dx = ∫_{−1}^{1} (x − x_i) L_i²(x) dx = 0 ,   i = 0, 1, …, n

because f'(x_i) may assume an arbitrary value. Since the following holds

L_i(x) = π_{n+1}(x) / [ (x − x_i) π'_{n+1}(x_i) ]

the requirement is

∫_{−1}^{1} π_{n+1}(x) L_i(x) / π'_{n+1}(x_i) dx = 0

i.e. π_{n+1} must be orthogonal on [−1, 1] to all polynomials of degree not exceeding n. This condition is satisfied by the Legendre polynomials:
P0(x) = 1
P1(x) = x
⋮
P_i(x) = (1/i) [ (2i − 1) x P_{i−1}(x) − (i − 1) P_{i−2}(x) ] ,   i = 2, 3, …

with the monic polynomial

π_{n+1}(x) = ( 2^{n+1} [(n + 1)!]² / [2(n + 1)]! ) P_{n+1}(x)
The zeros of these polynomials are the required abscissas xi for our integrating formula.
Knowing the number of points we want to use in the integrating formula we consider the
zeros of the appropriate Legendre polynomial.
The coefficients α_i, i = 0, 1, …, n are found now by integrating

α_i = ∫_{−1}^{1} h_i(x) dx = ∫_{−1}^{1} L_i²(x) dx
 n     x_i                        w_i
 1     0                          2
 2     ±1/√3 = ±0.5773503         1, 1
 3     0                          8/9
       ±√(3/5) = ±0.7745967       5/9, 5/9
 4     ±0.3399810436              0.6521451549
       ±0.8611363116              0.3478548451
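The table entries can be regenerated from the recurrence: the abscissas are the zeros of P_n (found here by Newton's method) and the weights follow from w_i = 2 / [(1 − x_i²) P_n'(x_i)²], a standard identity equivalent to integrating h_i. A Python sketch (the derivative identity used below is an assumption not stated in the notes):

```python
import math

def legendre(n, x):
    """P_n(x) from P_i = ((2i - 1) x P_{i-1} - (i - 1) P_{i-2}) / i."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for i in range(2, n + 1):
        p0, p1 = p1, ((2 * i - 1) * x * p1 - (i - 1) * p0) / i
    return p1

def dlegendre(n, x):
    """P_n'(x) from the identity (1 - x^2) P_n' = n (P_{n-1} - x P_n)."""
    return n * (legendre(n - 1, x) - x * legendre(n, x)) / (1.0 - x * x)

def gauss_nodes(n):
    """Gauss-Legendre abscissas and weights on [-1, 1]."""
    out = []
    for k in range(1, n + 1):
        x = math.cos(math.pi * (k - 0.25) / (n + 0.5))   # initial guess
        for _ in range(50):                              # Newton iteration
            x -= legendre(n, x) / dlegendre(n, x)
        out.append((x, 2.0 / ((1.0 - x * x) * dlegendre(n, x) ** 2)))
    return sorted(out)
```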
∫_a^b F(z) dz = ((b − a)/2) ∫_{−1}^{1} F( ((b − a)/2) x + (a + b)/2 ) dx = ((b − a)/2) Σ_{i=1}^{n} α_i F( ((b − a)/2) x_i + (a + b)/2 )

where α_i and x_i are taken from the table. This formula is exact for polynomials up to the order 2n − 1. Usually the composite formula is used:
I = Σ_{i=0}^{m−1} I_i = (h/2) Σ_{i=0}^{m−1} Σ_{j=0}^{n} α_j F( (h/2) x_j + (z_i + z_{i+1})/2 )

a = z0 < z1 < … < z_m = b ,   z_{i+1} − z_i = h ,   i = 0, 1, …, m − 1
Numerical integration cannot be directly applied in the case of a singularity or an infinite interval. The following measures may be applied then:

∫_a^b f(x) dx = ∫_a^b w(x) γ(x) dx = Σ_{i=0}^{n} α_i γ(x_i) + E
Example
∫₀¹ x/√(1 − x) dx      - Approach: remove the singularity by means of integration by parts:

∫₀¹ x/√(1 − x) dx = [ −2x √(1 − x) ]₀¹ + ∫₀¹ 2 √(1 − x) dx = 2 ∫₀¹ √(1 − x) dx
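A numerical check (Python sketch) shows why the transformation pays off: the same midpoint rule that struggles on the singular integrand handles the transformed one easily; both integrals equal 4/3.

```python
import math

m = 2000
h = 1.0 / m
mid = [(k + 0.5) * h for k in range(m)]

# transformed integrand 2 * sqrt(1 - x): regular on [0, 1]
I_transformed = 2.0 * h * sum(math.sqrt(1.0 - x) for x in mid)

# original integrand x / sqrt(1 - x): singular at x = 1
I_direct = h * sum(x / math.sqrt(1.0 - x) for x in mid)
```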
Example
∫₀^∞ x² e^{−x²} dx = ∫₀¹ x² e^{−x²} dx + ∫₁^∞ x² e^{−x²} dx

Let x² = y⁻¹   →   dx = −(1/2) y^{−3/2} dy. Then

∫₀^∞ x² e^{−x²} dx = ∫₀¹ x² e^{−x²} dx + (1/2) ∫₀¹ ( e^{−1/y} / y^{5/2} ) dy = I₁ + I₂
Remarks
(i) integration by parts reduces the power of y in the denominator:

∫₀¹ ( e^{−1/y} / y^{5/2} ) dy = 1/e + (1/2) ∫₀¹ ( e^{−1/y} / y^{3/2} ) dy   →   etc.

(ii) the factor e^{−1/y}, with its essential singularity at y = 0, remains;

(iii) repeating the integration by parts,

(1/4) ∫₀¹ ( e^{−1/y} / y^{3/2} ) dy = (1/4) ∫₀¹ y^{1/2} d( e^{−1/y} ) = 1/(4e) − (1/8) ∫₀¹ ( e^{−1/y} / y^{1/2} ) dy

and

lim for y → 0⁺ of y^{−1/2} e^{−1/y} = 0

so the remaining integrand is bounded and a standard quadrature applies.