Numerical Analysis and Application (数值分析与应用)
Wei Zhang (张炜)
Course Information
Preliminaries and Error Analysis (概论与误差分析)
Wei Zhang (张炜)
Numerical Analysis and Application(数值分析与应用)
Lecture 1: Preliminaries and Error Analysis(概论与误差分析)
Numerical analysis: the study of algorithms that use numerical approximation (as
opposed to general symbolic manipulations) for the problems of mathematical analysis (as
distinguished from discrete mathematics).
Error analysis(误差分析)
Interpolation(插值)
Least Squares Approximation(最小二乘逼近)
Numerical Differentiation and Integration(数值微分与积分)
Initial Value Problems for Ordinary Differential Equations(常微分方程的初值问题)
Direct Methods for Solving Linear Systems(线性方程组的直接解法)
Iterative Methods for Solving Linear Systems(线性方程组的迭代解法)
Numerical Solutions of Nonlinear Systems of Equations(数值求解非线性方程组)
Outline
Review of Calculus(微积分回顾)
Round-off Errors and Computer Arithmetic(舍入误差和计算机算法)
Algorithms and Convergence(算法和收敛性)
Review of Calculus
Limits and Continuity
• Functions that are not continuous can skip over points of interest and cause difficulties
when attempting to approximate a solution to a problem.
• Definition 1.1: A function f defined on a set X of real numbers has the limit L at x0, written lim x→x0 f (x) = L (函数的极限), if, given any real number ε > 0, there exists a real number δ > 0 such that
|f (x) − L| < ε, whenever x ∈ X and 0 < |x − x0| < δ.
• Definition 1.2: Let f be a function defined on a set X of real numbers and x0 ∈ X. Then f is continuous (连续函数) at x0 if
lim x→x0 f (x) = f (x0).
The function f is continuous on the set X if it is continuous at each number in X.
• Definition 1.3: Let {xn}∞n=1 be an infinite sequence of real numbers. This sequence has the limit x (converges to x) if, for any ε > 0 there exists a positive integer N(ε) such that |xn − x| < ε whenever n > N(ε). The notation
lim n→∞ xn = x (数列的极限)
means that the sequence {xn}∞n=1 converges to x.
Differentiability
• A function with a smooth graph will normally behave more predictably than one with
numerous jagged features.
• Definition 1.5: Let f be a function defined in an open interval containing x0. The function f is differentiable (函数可微) at x0 if
f ′(x0) = lim x→x0 (f (x) − f (x0))/(x − x0)
exists. The number f ′(x0) is called the derivative of f at x0. A function that has a derivative at each number in a set X is differentiable on X.
• Theorem 1.9: (Extreme Value Theorem, 极值定理) If f ∈ C[a, b], then c1, c2 ∈ [a, b] exist with f (c1) ≤ f (x) ≤ f (c2) for all x ∈ [a, b]. In addition, if f is differentiable on (a, b), then the numbers c1 and c2 occur either at the endpoints of [a, b] or where f ′ is zero.
• Theorem 1.10: (Generalized Rolle’s Theorem, 广义罗尔定理) Suppose f ∈ C[a, b] is n times differentiable on (a, b). If f (x) = 0 at the n + 1 distinct numbers a ≤ x0 < x1 < . . . < xn ≤ b, then a number c in (x0, xn), and hence in (a, b), exists with f (n)(c) = 0.
• Theorem 1.11: (Intermediate Value Theorem, 介值定理) If f ∈ C[a, b] and K is any number between f (a) and f (b), then there exists a number c in (a, b) for which f (c) = K.
Integration
• Theorem 1.12: The Riemann integral (黎曼积分) of the function f on the interval [a, b] is the following limit, provided it exists:
∫[a,b] f (x) dx = lim (max Δxi → 0) Σ i=1..n f (zi) Δxi,
where the numbers x0, x1, . . . , xn satisfy a = x0 ≤ x1 ≤ · · · ≤ xn = b, where Δxi = xi − xi−1, for each i = 1, 2, . . . , n, and zi is arbitrarily chosen in the interval [xi−1, xi].
• Theorem 1.13: (Weighted Mean Value Theorem for Integrals, 加权积分中值定理) Suppose f ∈ C[a, b], the Riemann integral of g exists on [a, b], and g(x) does not change sign on [a, b]. Then there exists a number c in (a, b) with
∫[a,b] f (x)g(x) dx = f (c) ∫[a,b] g(x) dx.
Taylor Polynomials and Series
• Theorem 1.14: (Taylor’s Theorem, 泰勒定理) Suppose f ∈ Cn[a, b], that f (n+1) exists on [a, b], and x0 ∈ [a, b]. For every x ∈ [a, b], there exists a number ξ(x) between x0 and x with
f (x) = Pn(x) + Rn(x),
where
Pn(x) = Σ k=0..n (f (k)(x0)/k!)(x − x0)^k and Rn(x) = (f (n+1)(ξ(x))/(n + 1)!)(x − x0)^(n+1).
Pn(x) is the nth Taylor polynomial, and Rn(x) is the remainder, or truncation error.
Round-off Errors and Computer Arithmetic
• Reason: the arithmetic performed in a machine involves numbers with only a finite
number of digits, with the result that calculations are performed with only
approximate representations of the actual numbers.
A machine number is stored as a sign bit, a characteristic (exponent), and a mantissa (尾数).
Single precision: 32 bits (1 sign bit, 8-bit characteristic, 23-bit mantissa).
Double precision: 64 bits (1 sign bit, 11-bit characteristic, 52-bit mantissa).
Chopping (截断): keep the first k digits of the normalized decimal form and discard the rest.
Rounding (舍入): add 5 × 10^(n−k−1) to the normalized form and then chop, so the kth digit is rounded.
• Example: Determine the five-digit (a) chopping and (b) rounding values of π = 3.14159265 . . .
Chopping: fl(π) = 3.1415
Rounding: fl(π) = 3.1416
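The chopping and rounding rules above can be sketched in Python. This is a minimal illustration; the helper names `chop` and `round_k` are not from the lecture:

```python
import math

def chop(y, k):
    """k-digit chopping: write y = 0.d1d2... * 10**n and drop digits past d_k."""
    if y == 0:
        return 0.0
    n = math.floor(math.log10(abs(y))) + 1   # exponent of the normalized form
    shifted = y / 10**(n - k)                # d1...dk . d(k+1)...
    return math.trunc(shifted) * 10**(n - k)

def round_k(y, k):
    """k-digit rounding: add 5 * 10**(n-k-1) to |y|, then chop."""
    if y == 0:
        return 0.0
    n = math.floor(math.log10(abs(y))) + 1
    return chop(y + math.copysign(0.5 * 10**(n - k), y), k)

print(chop(math.pi, 5))     # 3.1415
print(round_k(math.pi, 5))  # 3.1416
```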
• Assume real numbers x and y, and their floating-point representations fl(x) and fl(y).
• Example 3: Suppose that x = 5/7 and y = 1/3. Use five-digit chopping for calculating x + y.
Solution: fl(x) = 0.71428 × 10^0 and fl(y) = 0.33333 × 10^0, so
fl(fl(x) + fl(y)) = fl(0.104761 × 10^1) = 0.10476 × 10^1,
while the true value is x + y = 22/21 = 1.0476190 . . . , a relative error of about 1.8 × 10^−5.
• Accuracy loss due to round-off error can also be reduced by rearranging calculations.
• Example 6: Evaluate f (x) = x3 − 6.1x2 + 3.2x + 1.5 at x = 4.71 using three-digit arithmetic.
Solution
Exact: f (4.71) = 104.487111 − 135.32301 + 15.072 + 1.5 = −14.263899
Chopping: −13.5 (relative difference ≈ 0.05)
Rounding: −13.4 (relative difference ≈ 0.06)
• Accuracy loss due to round-off error can also be reduced by rearranging calculations.
• Example 6: Evaluate f (x) = x3 − 6.1x2 + 3.2x + 1.5 at x = 4.71 using three-digit arithmetic.
Solution: An alternative way is the nested (Horner) form f (x) = ((x − 6.1)x + 3.2)x + 1.5, which requires fewer operations.
Chopping: −14.2 (relative difference ≈ 0.0045, an order of magnitude better than the straightforward −13.5)
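The two evaluation orders can be compared by simulating three-digit chopping arithmetic with Python's `decimal` module (a sketch; the function names `f_direct` and `f_nested` are illustrative, not from the lecture):

```python
from decimal import Decimal, Context, ROUND_DOWN

ctx = Context(prec=3, rounding=ROUND_DOWN)   # three-digit chopping after every operation

def f_direct(x):
    """Straightforward evaluation of x^3 - 6.1x^2 + 3.2x + 1.5."""
    x = Decimal(repr(x))
    x2 = ctx.multiply(x, x)
    x3 = ctx.multiply(x2, x)
    t = ctx.subtract(x3, ctx.multiply(Decimal('6.1'), x2))
    t = ctx.add(t, ctx.multiply(Decimal('3.2'), x))
    return ctx.add(t, Decimal('1.5'))

def f_nested(x):
    """Nested (Horner) evaluation: ((x - 6.1)x + 3.2)x + 1.5."""
    x = Decimal(repr(x))
    r = ctx.subtract(x, Decimal('6.1'))
    r = ctx.multiply(r, x)
    r = ctx.add(r, Decimal('3.2'))
    r = ctx.multiply(r, x)
    return ctx.add(r, Decimal('1.5'))

print(f_direct(4.71), f_nested(4.71))   # -13.5 -14.2 (exact value is -14.263899)
```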
Algorithms and Convergence
• Conditionally stable algorithm: stable only for certain choices of initial data.
• Definition 1.17: Suppose that E0 > 0 denotes an error introduced at some stage in the calculations and En represents the magnitude of the error after n subsequent operations.
• If En ≈ CnE0, where C is a constant independent of n, then the growth of error is linear.
• If En ≈ C^n E0, for some C > 1, then the growth of error is called exponential.
• Definition 1.18: Suppose {βn}∞n=1 is a sequence known to converge to zero, and {αn}∞n=1 converges to a number α. If a positive constant K exists with
|αn − α| ≤ K|βn|, for large n,
then we say that {αn}∞n=1 converges to α with rate, or order, of convergence O(βn). It is indicated by writing αn = α + O(βn).
• Normally we use βn = 1/n^p for some p > 0, with the largest p for which αn = α + O(1/n^p).
Summary
Review of Calculus
• Limits, continuity, differentiability, Rolle’s theorem, mean value theorem, extreme value
theorem, generalized Rolle’s theorem, intermediate value theorem, Riemann integral,
Taylor’s theorem
Round-off Errors and Computer Arithmetic
• Binary and decimal machine numbers
• Finite-digit arithmetic, and nested arithmetic
Interpolation (插值)
Wei Zhang (张炜)
Numerical Analysis and Application(数值分析与应用)
Lecture 2: Interpolation(插值)
Outline
Lagrange Interpolation(拉格朗日插值)
Neville’s Method(Neville插值)
Divided Differences(均差插值)
Hermite Interpolation(埃尔米特插值)
Cubic Spline Interpolation(样条插值)
Introduction
Example
• The table lists the population, in thousands of people, from 1950 to 2000 for the United
States, and the data are also represented in the figure.
Lagrange Interpolation
Fundamentals
• Algebraic polynomials(代数多项式): mapping the set of real numbers into itself
• For any function defined and continuous on a closed and bounded interval, there
exists a polynomial that is as “close” to the given function as desired.
• Theorem 3.1: (Weierstrass Approximation Theorem) Suppose that f is defined and continuous on [a, b]. For each ε > 0, there exists a polynomial P(x) with the property that
|f (x) − P(x)| < ε, for all x in [a, b].
[Portrait: Karl Weierstrass (维尔斯特拉斯)]
First-degree polynomial interpolating
• Problem definition: determining a polynomial of degree one that passes through the distinct points (x0, y0) and (x1, y1), i.e., P(x0) = y0 and P(x1) = y1.
The linear Lagrange interpolating polynomial through (x0, y0) and (x1, y1) is
P1(x) = ((x − x1)/(x0 − x1)) y0 + ((x − x0)/(x1 − x0)) y1
First-degree polynomial interpolating: example
• Problem: Determine the linear Lagrange interpolating polynomial that passes through the points (2, 4) and (5, 1).
Solution:
P1(x) = ((x − 5)/(2 − 5)) · 4 + ((x − 2)/(5 − 2)) · 1 = −x + 6
Generalized Lagrange interpolation
• Problem: construct a polynomial of degree at most n that passes through the n + 1 points (x0, f (x0)), (x1, f (x1)), . . . , (xn, f (xn)).
• Solution: for each k we construct a polynomial Ln,k(x) satisfying Ln,k(xi) = 0 when i ≠ k and Ln,k(xk) = 1:
Ln,k(x) = Π i=0..n, i≠k (x − xi)/(xk − xi)
• Theorem 3.2: If x0, x1, . . . , xn are n + 1 distinct numbers and f is a function whose values are given at these numbers, then a unique polynomial P(x) of degree at most n exists with
f (xk) = P(xk), for each k = 0, 1, . . . , n, namely P(x) = Σ k=0..n f (xk) Ln,k(x).
• Example: use the numbers (called nodes) x0 = 2, x1 = 2.75, and x2 = 4 to find the second Lagrange interpolating polynomial for f (x) = 1/x.
Considering f (x0) = 0.5, f (x1) = 4/11, and f (x2) = 0.25, the polynomial is
P2(x) = (1/22)x2 − (35/88)x + 49/44 ≈ 0.0454545x2 − 0.397727x + 1.113636
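The general construction above can be sketched directly in Python; here it is evaluated at x = 3 for the 1/x example (the function name `lagrange` is illustrative):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[k], ys[k]) at x."""
    total = 0.0
    n = len(xs)
    for k in range(n):
        # term = ys[k] * L_{n,k}(x), with L_{n,k}(x) = prod_{i != k} (x - x_i)/(x_k - x_i)
        term = ys[k]
        for i in range(n):
            if i != k:
                term *= (x - xs[i]) / (xs[k] - xs[i])
        total += term
    return total

# Nodes from the example: f(x) = 1/x at x0 = 2, x1 = 2.75, x2 = 4
xs = [2.0, 2.75, 4.0]
ys = [1 / v for v in xs]
print(lagrange(xs, ys, 3.0))   # approximates f(3) = 1/3
```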
Remainder term(余项)
• Theorem 3.3: Suppose x0, x1, . . . , xn are distinct numbers in the interval [a, b] and f ∈ Cn+1[a, b]. Then, for each x in [a, b], a number ξ(x) (generally unknown) between x0, x1, . . . , xn, and hence in (a, b), exists with
f (x) = P(x) + (f (n+1)(ξ(x))/(n + 1)!)(x − x0)(x − x1) · · · (x − xn)
• Example: we have the second Lagrange polynomial for f (x) = 1/x on [2, 4] using the
nodes x0 = 2, x1 = 2.75, and x2 = 4. Determine the error form for this polynomial, and the
maximum error when the polynomial is used to approximate f (x) for x∈[2, 4].
• Solution: with f (x) = 1/x we have f ′′′(x) = −6/x4, so
f (x) − P(x) = (f ′′′(ξ(x))/3!)(x − 2)(x − 2.75)(x − 4) = −ξ(x)^−4 (x − 2)(x − 2.75)(x − 4)
Solution (continued): to calculate the maximum value of |−ξ(x)^−4 (x − 2)(x − 2.75)(x − 4)| on [2, 4], note |ξ(x)^−4| ≤ 2^−4 = 1/16, and set g(x) = (x − 2)(x − 2.75)(x − 4) = x3 − 8.75x2 + 24.5x − 22. Then g′(x) = 3x2 − 17.5x + 24.5 = 0 at x = 7/3 and x = 7/2, and max |g| = |g(7/2)| = 9/16. Hence the maximum error is at most (1/16)(9/16) = 9/256 ≈ 0.0352.
Neville’s Method
Introduction
• Lagrange interpolation: the degree of the polynomial needed for the desired accuracy
is generally not known until computations have been performed.
• We now derive these approximating polynomials in a manner that uses the previous
calculations to greater advantage.
• Definition 3.4: let f be a function defined at x0, x1, x2, . . . , xn, and suppose that m1, m2, . . .,
mk are k distinct integers, with 0 ≤ mi ≤ n for each i. The Lagrange polynomial that agrees
with f (x) at the k points xm1, xm2, . . . , xmk is denoted Pm1,m2,...,mk(x).
• Example: suppose that x0 = 1, x1 = 2, x2 = 3, x3 = 4, x4 = 6, and f (x) = ex. Determine the interpolating polynomial denoted P1,2,4(x).
Solution: this is the Lagrange polynomial that agrees with f (x) at x1 = 2, x2 = 3, and x4 = 6:
P1,2,4(x) = ((x − 3)(x − 6)/((2 − 3)(2 − 6))) e2 + ((x − 2)(x − 6)/((3 − 2)(3 − 6))) e3 + ((x − 2)(x − 3)/((6 − 2)(6 − 3))) e6
Fundamentals
• Theorem 3.5: Let f be defined at x0, x1, . . . , xk, and let xj and xi be two distinct numbers in this set. Then
P(x) = [(x − xj)P0,1,...,j−1,j+1,...,k(x) − (x − xi)P0,1,...,i−1,i+1,...,k(x)] / (xi − xj)
is the kth Lagrange polynomial that interpolates f at the k + 1 points x0, x1, . . . , xk.
• The interpolating polynomials can therefore be generated recursively.
• Notation: using consecutive points ending at the larger index i, let Qi,j(x), for 0 ≤ j ≤ i, denote the interpolating polynomial of degree j on the (j + 1) numbers xi−j, xi−j+1, . . . , xi−1, xi:
Qi,j(x) = Pi−j,i−j+1,...,i(x)
Example
• Problem: apply Neville’s method to the data by constructing a recursive table for x=1.5.
Solution: Let x0 = 1.0, x1 = 1.3, x2 = 1.6, x3 = 1.9, and x4 = 2.2. The degree-zero approximations are
Q0,0 = f (1.0), Q1,0 = f (1.3), Q2,0 = f (1.6), Q3,0 = f (1.9), and Q4,0 = f (2.2).
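The recursive table can be built with a short Python routine. The data table from the slide did not survive extraction, so the demo below reuses the earlier illustrative nodes for f(x) = 1/x (the function name `neville` is illustrative):

```python
def neville(xs, ys, x):
    """Neville's method: returns the table Q, where Q[i][j] interpolates on x_{i-j..i}."""
    n = len(xs)
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        Q[i][0] = ys[i]   # degree-zero approximations Q_{i,0} = f(x_i)
    for i in range(1, n):
        for j in range(1, i + 1):
            Q[i][j] = ((x - xs[i - j]) * Q[i][j - 1]
                       - (x - xs[i]) * Q[i - 1][j - 1]) / (xs[i] - xs[i - j])
    return Q

xs = [2.0, 2.75, 4.0]          # illustrative data: f(x) = 1/x
ys = [1 / v for v in xs]
Q = neville(xs, ys, 3.0)
print(Q[2][2])                 # highest-degree entry: approximation to f(3)
```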
Neville’s Iterated Interpolation
Divided Differences
Introduction
• Divided-difference methods are used to generate successively higher-degree polynomial approximations.
• The divided differences of f with respect to x0, x1, . . . , xn are used to express Pn(x) as
Pn(x) = a0 + a1(x − x0) + a2(x − x0)(x − x1) + · · · + an(x − x0)(x − x1) · · · (x − xn−1)
for appropriate constants a0, a1, . . . , an.
Numerical Analysis and Application(数值分析与应用)
Lecture 2: Interpolation(插值)
Divided Differences
Notation
• The zeroth divided difference of the function f with respect to xi is f [xi] = f (xi).
• The first divided difference of f with respect to xi and xi+1 is denoted f [xi, xi+1] = (f [xi+1] − f [xi])/(xi+1 − xi).
• In general, the kth divided difference is f [xi, xi+1, . . . , xi+k] = (f [xi+1, . . . , xi+k] − f [xi, . . . , xi+k−1])/(xi+k − xi).
Numerical Analysis and Application(数值分析与应用)
Lecture 2: Interpolation(插值)
Divided Differences
Newton’s Divided Difference
• The Lagrange polynomial can be written in the form of Newton’s divided difference:
Pn(x) = f [x0] + Σ k=1..n f [x0, x1, . . . , xk](x − x0)(x − x1) · · · (x − xk−1)
The value of f [x0, x1, . . . , xk] is independent of the order of the numbers x0, x1, . . . , xk.
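The divided-difference table and the nested evaluation of Newton's form can be sketched as follows (illustrative function names; the demo reuses the 1/x nodes from the Lagrange section):

```python
def divided_differences(xs, ys):
    """Return the coefficients f[x0], f[x0,x1], ..., f[x0,...,xn]."""
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):
        # overwrite in place from the bottom up: coef[i] becomes f[x_{i-j}, ..., x_i]
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate Pn(x) = f[x0] + sum_k f[x0..xk](x-x0)...(x-x_{k-1}) by nesting."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

xs = [2.0, 2.75, 4.0]
ys = [1 / v for v in xs]
coef = divided_differences(xs, ys)
print(newton_eval(xs, coef, 3.0))   # matches the Lagrange value for f(3)
```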
Hermite Interpolation
Hermite Polynomial
• The Hermite polynomials agree with both the value and the first derivative of f at x0, x1, . . . , xn.
• Theorem 3.9: If f ∈ C1[a, b] and x0, . . . , xn ∈ [a, b] are distinct, the unique polynomial of least degree agreeing with f and f ′ at x0, . . . , xn is the Hermite polynomial of degree at most 2n + 1 given by
H2n+1(x) = Σ j=0..n f (xj)Hn,j(x) + Σ j=0..n f ′(xj)Ĥn,j(x),
where, for Ln,j(x) denoting the jth Lagrange coefficient polynomial of degree n,
Hn,j(x) = [1 − 2(x − xj)L′n,j(xj)]L2n,j(x) and Ĥn,j(x) = (x − xj)L2n,j(x)
Hermite Interpolation: Example
• Problem: Use the Hermite polynomial that agrees with the data in the table to find an
approximation of f (1.5).
Solution: H5(1.5) = 0.5118277
Cubic Spline Interpolation
Conditions: the splines must agree with the data at the nodes, and adjacent pieces must join with continuous first and second derivatives (each cubic piece has four coefficients to determine).
Sj(x) = aj + bj(x − xj) + cj(x − xj)2 + dj(x − xj)3, for each j = 0, 1, . . . , n − 1.
• By introducing hj = xj+1 − xj, the continuity conditions at the nodes relate the coefficients on neighboring subintervals.
The coefficient matrix of the resulting linear system is strictly diagonally dominant, so the system has a unique solution.
Summary
Lagrange Interpolation
• A single polynomial of degree at most n − 1 is constructed over the whole domain for n data points.
• nth Lagrange interpolating polynomial
• Neville’s method
Least Squares Approximation (最小二乘逼近)
Wei Zhang (张炜)
Numerical Analysis and Application(数值分析与应用)
Lecture 3: Least Squares Approximation(最小二乘逼近)
Outline
Introduction
• Example: Hooke’s law for the force-deformation relationship, F(l) = k(l − E), where E is the natural length of the spring.
Objective: find the line that best approximates all the data points, i.e., the fitting function that best represents the given data.
• One option: finding the equation of the best linear approximation in the absolute (minimax) sense requires values of a0 and a1 that minimize
E∞(a0, a1) = max 1≤i≤m |yi − (a1xi + a0)|
• The least squares criterion puts substantially more weight on a point that is out of line with the rest of the data than the average absolute deviation does.
• But, unlike the minimax criterion, it will not permit that point to completely dominate the approximation.
• We need to minimize the total squared error E(a0, a1) = Σ i=1..m (yi − (a1xi + a0))2, which requires ∂E/∂a0 = 0 and ∂E/∂a1 = 0. This gives the normal equations
a0·m + a1 Σxi = Σyi and a0 Σxi + a1 Σxi2 = Σxiyi,
with solution
a1 = (m Σxiyi − Σxi Σyi)/(m Σxi2 − (Σxi)2), a0 = (Σxi2 Σyi − Σxiyi Σxi)/(m Σxi2 − (Σxi)2).
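The closed-form solution of the normal equations can be sketched directly (illustrative function name and sample data, lying near y = 2x + 1):

```python
def linear_least_squares(xs, ys):
    """Solve the normal equations for the least squares line y = a0 + a1*x."""
    m = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    denom = m * sxx - sx * sx          # nonzero when the x_i are not all equal
    a1 = (m * sxy - sx * sy) / denom
    a0 = (sxx * sy - sxy * sx) / denom
    return a0, a1

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]
a0, a1 = linear_least_squares(xs, ys)
print(a0, a1)   # intercept and slope of the best-fit line
```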
• The general problem: approximate a set of data {(xi, yi) | i = 1, 2, . . . , m} with an algebraic polynomial Pn(x) = anxn + · · · + a1x + a0 of degree n < m − 1.
• Choose the constants a0, a1, . . . , an to minimize the least squares error E = Σ i=1..m (yi − Pn(xi))2.
• The n + 1 normal equations Σ k=0..n ak Σ i=1..m xi^(j+k) = Σ i=1..m yi xi^j, for j = 0, 1, . . . , n, have a unique solution provided that the xi are distinct.
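Building and solving the normal equations for a degree-n polynomial can be sketched as follows (illustrative function name; the quadratic data y = x² is recovered exactly):

```python
def poly_least_squares(xs, ys, n):
    """Assemble the (n+1)x(n+1) normal equations and solve by Gaussian elimination."""
    m = n + 1
    A = [[sum(x**(j + k) for x in xs) for k in range(m)] for j in range(m)]
    b = [sum(y * x**j for x, y in zip(xs, ys)) for j in range(m)]
    # forward elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # back substitution
    a = [0.0] * m
    for r in range(m - 1, -1, -1):
        a[r] = (b[r] - sum(A[r][c] * a[c] for c in range(r + 1, m))) / A[r][r]
    return a   # coefficients a0, a1, ..., an

coeffs = poly_least_squares([0, 1, 2, 3], [0, 1, 4, 9], 2)
print(coeffs)   # close to [0, 0, 1], i.e. P2(x) = x^2
```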
• Example: fit the data with the discrete least squares polynomial
of degree at most 2
• For data expected to follow y = be^(ax) or y = bx^a, taking logarithms gives ln y = ln b + ax or ln y = ln b + a ln x, converting the fit to a linear problem.
Summary
Linear Least Squares
• The approximating line does not necessarily pass through any of the data points.
• Minimize the sum of the squares of the errors.
• Required data: Σxi, Σyi, Σxi2, Σxiyi.
Numerical Differentiation and Integration (数值微分与积分)
Wei Zhang (张炜)
Numerical Analysis and Application(数值分析与应用)
Lecture 4: Numerical Differentiation and Integration(数值微分与积分)
Outline
Numerical Differentiation(数值微分)
Richardson’s Extrapolation(理查德森外插)
Elements of Numerical Integration(数值积分基本知识)
Composite Numerical Integration(复合积分)
Romberg Integration(龙贝格积分)
Numerical Differentiation
Example
• A sheet of corrugated roofing: the required length is difficult to determine analytically.
First-Order Derivative
• The derivative of the function f at x0 is
f ′(x0) = lim h→0 (f (x0 + h) − f (x0))/h.
For small h, f ′(x0) ≈ (f (x0 + h) − f (x0))/h; this is the forward-difference formula, with truncation error −(h/2) f ″(ξ) for some ξ between x0 and x0 + h.
Example
• Problem: Use the forward-difference formula to approximate the derivative of f (x) = ln x
at x0 = 1.8 using h = 0.1, h = 0.05 and h = 0.01, and determine bounds for the
approximation errors.
Solution: the forward-difference formula gives f ′(1.8) ≈ (f (1.8 + h) − f (1.8))/h.
h = 0.1: (ln 1.9 − ln 1.8)/0.1 = 0.5406722, while the exact value is f ′(1.8) = 1/1.8 = 0.5555556.
Error bound: |f ″(ξ)| h/2 = h/(2ξ2) ≤ h/(2(1.8)2), which for h = 0.1 is 0.0154321.
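The three step sizes of the example can be run in a few lines (illustrative function name `forward_difference`):

```python
import math

def forward_difference(f, x0, h):
    """Forward-difference approximation: f'(x0) ≈ (f(x0 + h) - f(x0)) / h."""
    return (f(x0 + h) - f(x0)) / h

x0 = 1.8                        # f(x) = ln x, so f'(1.8) = 1/1.8 = 0.5555556
for h in (0.1, 0.05, 0.01):
    approx = forward_difference(math.log, x0, h)
    bound = h / (2 * x0**2)     # error bound |f''(ξ)| h/2 with |f''(ξ)| = 1/ξ² ≤ 1/1.8²
    print(h, approx, abs(approx - 1/x0), bound)
```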
Three-Point Formula
• The approximation formulas are derived by differentiating the Lagrange interpolating polynomial through x0, x1, x2 and evaluating the derivative at xj, for j = 0, 1, 2.
• Three-point endpoint formula: f ′(x0) = [−3f (x0) + 4f (x0 + h) − f (x0 + 2h)]/(2h) + (h2/3) f ′′′(ξ)
• Three-point midpoint formula: f ′(x0) = [f (x0 + h) − f (x0 − h)]/(2h) − (h2/6) f ′′′(ξ)
Five-Point Formula
• Five-point midpoint formula: f ′(x0) = [f (x0 − 2h) − 8f (x0 − h) + 8f (x0 + h) − f (x0 + 2h)]/(12h) + (h4/30) f (5)(ξ)
• Five-point endpoint formula: f ′(x0) = [−25f (x0) + 48f (x0 + h) − 36f (x0 + 2h) + 16f (x0 + 3h) − 3f (x0 + 4h)]/(12h) + (h4/5) f (5)(ξ)
Second-Order Derivative
• Three-point midpoint formula: f ″(x0) = [f (x0 − h) − 2f (x0) + f (x0 + h)]/h2 − (h2/12) f (4)(ξ)
Richardson’s Extrapolation
Purpose: generate high-accuracy results while using low-order formulas.
Applicability: the approximation technique has an error term with a predictable form depending on a parameter, usually the step size h.
General procedure: for a formula N1(h) that approximates an unknown value M with
M − N1(h) = K1h + K2h2 + K3h3 + · · ·,
combine the values at step sizes h and h/2 to eliminate the leading error term:
N2(h) = N1(h/2) + [N1(h/2) − N1(h)], which is an O(h2) approximation to M.
Realization: when the error contains only even powers of h, M = N1(h) + K1h2 + K2h4 + · · ·, the combination N2(h) = N1(h/2) + [N1(h/2) − N1(h)]/3 is an O(h4) approximation to M.
Realization: for each j = 2, 3, . . . , the O(h2j) approximation is
Nj(h) = Nj−1(h/2) + [Nj−1(h/2) − Nj−1(h)]/(4^(j−1) − 1)
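The extrapolation table can be sketched in Python; here it is applied to the O(h²) central-difference formula for f ′ (an illustrative choice of f and x0, not from the slides):

```python
import math

def richardson(N1, h, levels):
    """Richardson extrapolation table for a formula N1 whose error has only even powers of h."""
    T = [[N1(h / 2**i)] for i in range(levels)]   # first column: N1 at h, h/2, h/4, ...
    for i in range(1, levels):
        for j in range(1, i + 1):
            # N_j(h) = N_{j-1}(h/2) + (N_{j-1}(h/2) - N_{j-1}(h)) / (4^(j-1) - 1)
            T[i].append(T[i][j - 1] + (T[i][j - 1] - T[i - 1][j - 1]) / (4**j - 1))
    return T

f, x0 = math.exp, 0.0
N1 = lambda h: (f(x0 + h) - f(x0 - h)) / (2 * h)   # central difference, O(h^2)
T = richardson(N1, 0.4, 4)
print(T[-1][-1])   # close to f'(0) = 1
```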
Elements of Numerical Integration
• Basic idea: select a set of distinct nodes {x0, . . . , xn} from the interval [a, b], then integrate the Lagrange interpolating polynomial, giving the numerical quadrature formula
∫[a,b] f (x) dx ≈ Σ i=0..n ai f (xi), where ai = ∫[a,b] Ln,i(x) dx.
Trapezoidal rule:
∫[x0,x1] f (x) dx = (h/2)[f (x0) + f (x1)] − (h3/12) f ″(ξ), where h = x1 − x0
Simpson’s rule:
∫[x0,x2] f (x) dx = (h/3)[f (x0) + 4f (x1) + f (x2)] − (h5/90) f (4)(ξ), where h = (x2 − x0)/2
The closed Newton-Cotes formulas include:
• n = 1: Trapezoidal rule
• n = 2: Simpson’s rule
Composite Numerical Integration
(3) For each j = 1, 2, . . . , (n/2) − 1, f (x2j) appears in the term corresponding to the interval [x2j−2, x2j] and also in the term corresponding to the interval [x2j, x2j+2], so its coefficient is doubled. This yields the Composite Simpson’s rule:
∫[a,b] f (x) dx = (h/3)[f (a) + 2 Σ j=1..(n/2)−1 f (x2j) + 4 Σ j=1..n/2 f (x2j−1) + f (b)] − ((b − a)h4/180) f (4)(μ)
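The weight pattern 1, 4, 2, 4, . . . , 2, 4, 1 translates directly into code (illustrative function name; the demo integrates sin over [0, π], whose exact value is 2):

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with an even number n of subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for j in range(1, n):
        s += (4 if j % 2 else 2) * f(a + j * h)   # odd interior nodes weight 4, even weight 2
    return s * h / 3

print(composite_simpson(math.sin, 0.0, math.pi, 18))   # close to 2
```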
• Example: to approximate ∫[0,π] sin x dx with absolute error at most 0.00002, the Composite Trapezoidal rule (with n = π/h subintervals) requires n ≥ 360, while the Composite Simpson’s rule requires only n ≥ 18.
Romberg Integration
Romberg Integration = Composite Trapezoidal Rule + Richardson’s Extrapolation
(1) Apply the Composite Trapezoidal rule with 1, 2, 4, 8, and 16 subintervals (step sizes hk = (b − a)/2^(k−1)); the resulting approximations are denoted R1,1, R2,1, R3,1, R4,1, R5,1.
(2) Obtaining the O(h4) approximations R2,2, R3,2, R4,2, R5,2 via Rk,2 = Rk,1 + (Rk,1 − Rk−1,1)/3
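The full scheme, including the standard trick of refining each trapezoidal value using only the new midpoints, can be sketched as follows (illustrative function name; the demo uses ∫[0,π] sin x dx = 2):

```python
import math

def romberg(f, a, b, rows):
    """Romberg integration: trapezoidal first column plus Richardson extrapolation."""
    R = [[0.0] * rows for _ in range(rows)]
    h = b - a
    R[0][0] = h * (f(a) + f(b)) / 2
    for k in range(1, rows):
        h /= 2
        # refine the trapezoid sum using only the newly introduced midpoints
        R[k][0] = R[k - 1][0] / 2 + h * sum(
            f(a + (2 * i - 1) * h) for i in range(1, 2**(k - 1) + 1))
        for j in range(1, k + 1):
            R[k][j] = R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (4**j - 1)
    return R

R = romberg(math.sin, 0.0, math.pi, 5)
print(R[4][4])   # close to 2
```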
Example:
• Problem: Use the Composite Trapezoidal rule to find approximations to
with n = 1, 2, 4, 8, and 16. Then perform Romberg extrapolation on the results.
Solution:
The O(h4) approximations are then obtained by extrapolating consecutive trapezoidal values.
General results: for each k = 2, 3, . . . and j = 2, . . . , k, the generalized form is
Rk,j = Rk,j−1 + (Rk,j−1 − Rk−1,j−1)/(4^(j−1) − 1)
Example: easily extended to larger n
• Problem: add an additional extrapolation row to the above table to approximate
Solution:
Summary
Numerical Differentiation
Richardson’s Extrapolation
• Generate high-accuracy results while using low-order formulas.
Numerical Integration
• Trapezoidal rule, Simpson’s rule, closed Newton-Cotes formulas.
Composite Numerical Integration
• Low-order, piecewise approach.
• Based on the Trapezoidal rule or Simpson’s rule.
Romberg Integration
• Composite Trapezoidal Rule + Richardson’s Extrapolation
• Improved accuracy.
Initial Value Problems for Ordinary Differential Equations (常微分方程的初值问题)
Wei Zhang (张炜)
Numerical Analysis and Application(数值分析与应用)
Lecture 5: Initial Value Problems for Ordinary Differential Equations(常微分方程的初值问题)
Outline
Introduction
What is an initial value problem (IVP)?
• The motion of a swinging pendulum
• Definition 5.1: A function f (t, y) is said to satisfy a Lipschitz condition in the variable y on a set D ⊂ ℝ2 if a constant L > 0 exists with |f (t, y1) − f (t, y2)| ≤ L|y1 − y2| whenever (t, y1) and (t, y2) are in D. The constant L is called a Lipschitz constant for f.
• Definition 5.2: A set D ⊂ ℝ2 is said to be convex (凸集) if whenever (t1, y1) and (t2, y2)
belong to D, then ((1 − λ)t1 + λt2, (1 − λ)y1 + λy2) also belongs to D for every λ in [0, 1].
• Theorem 5.4: Suppose that D = {(t, y) | a ≤ t ≤ b and −∞ < y < ∞} and that f (t, y) is continuous on D. If f satisfies a Lipschitz condition on D in the variable y, then the initial-value problem
y′(t) = f (t, y), a ≤ t ≤ b, y(a) = α,
has a unique solution y(t) for a ≤ t ≤ b.
• Theorem 5.6: Suppose D = {(t, y) | a ≤ t ≤ b and −∞ < y < ∞}. If f is continuous and satisfies a Lipschitz condition in the variable y on D, then the initial-value problem
y′(t) = f (t, y), a ≤ t ≤ b, y(a) = α,
is well-posed.
Euler’s Method
Basic Idea
• The most elementary approximation technique for solving initial-value problems.
• Objective: to obtain approximations to the well-posed initial-value problem y′ = f (t, y), a ≤ t ≤ b, y(a) = α, at equally spaced mesh points ti = a + ih, i = 0, 1, . . . , N, with step size h = (b − a)/N.
(3) Constructing wi ≈ y(ti), for each i = 1, 2, . . . , N, Euler’s method is
w0 = α, wi+1 = wi + h f (ti, wi), for i = 0, 1, . . . , N − 1.
Example
• Problem: use Euler’s method to approximate the solution to
at t = 2 with h = 0.5.
Solution:
Geometrical Interpretation
Example
• Problem: use Euler’s method to approximate the solution to y′ = y − t2 + 1, 0 ≤ t ≤ 2, y(0) = 0.5,
with N = 10 to determine approximations, and compare these with the exact values given
by y(t) = (t + 1)2 − 0.5et.
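The iteration is a few lines of Python (illustrative function name `euler`); with h = 0.2 the final approximation w10 noticeably undershoots the exact y(2):

```python
import math

def euler(f, a, b, alpha, N):
    """Euler's method: w_{i+1} = w_i + h f(t_i, w_i) on [a, b] with y(a) = alpha."""
    h = (b - a) / N
    t, w = a, alpha
    ws = [w]
    for i in range(N):
        w = w + h * f(t, w)
        t = a + (i + 1) * h
        ws.append(w)
    return ws

# IVP y' = y - t^2 + 1, y(0) = 0.5 on [0, 2]; exact solution y(t) = (t + 1)^2 - 0.5 e^t
ws = euler(lambda t, y: y - t * t + 1, 0.0, 2.0, 0.5, 10)
exact = lambda t: (t + 1)**2 - 0.5 * math.exp(t)
print(ws[-1], exact(2.0))   # 4.8657845... vs 5.3054720...
```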
Truncation error: the local truncation error of Euler’s method is τi+1(h) = (y(ti+1) − y(ti))/h − f (ti, y(ti)) = (h/2) y″(ξi), which is O(h).
• If the difference method is Taylor’s method of order n with step size h and if y ∈ Cn+1[a, b], then the local truncation error is O(hn).
Runge-Kutta Methods
Runge-Kutta Methods
Why Runge-Kutta Methods
• The Taylor methods require the computation and evaluation of the derivatives of f (t, y).
• Advantages of Runge-Kutta methods: high-order, no need to evaluate f (n) (t, y).
• Theorem 5.13: Suppose that f (t, y) and all its partial derivatives of order less than or
equal to n + 1 are continuous on D = {(t, y) | a ≤ t ≤ b, c ≤ y ≤ d}, and let (t0, y0) ∊ D. For
every (t, y) ∊ D, there exists ξ between t and t0 and μ between y and y0 with
Residual
Runge-Kutta Methods
Runge-Kutta Methods of Order Two
• Objective: determine values for a1, α1, and β1 such that a1 f (t + α1, y + β1) approximates
Taylor expansion
Runge-Kutta Methods
Runge-Kutta Methods of Order Two
Midpoint Method
Runge-Kutta Methods
Runge-Kutta Methods of Order Three
Heun’s method
• Problem: Apply Heun’s method with N = 10, h = 0.2, ti = 0.2i, and w0 = 0.5 to
approximate the solution to the example
Runge-Kutta Methods
Runge-Kutta Methods of Order Four
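A minimal sketch of the classical fourth-order Runge-Kutta step. The IVP y′ = y − t² + 1, y(0) = 0.5 on [0, 2] is an assumption carried over from the earlier example.

```python
import math

def rk4(f, a, b, y0, N):
    """Classical fourth-order Runge-Kutta method."""
    h = (b - a) / N
    t, w = a, y0
    for _ in range(N):
        k1 = h * f(t, w)
        k2 = h * f(t + h / 2, w + k1 / 2)
        k3 = h * f(t + h / 2, w + k2 / 2)
        k4 = h * f(t + h, w + k3)
        w += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return w

f = lambda t, y: y - t**2 + 1                 # assumed example IVP
exact = lambda t: (t + 1)**2 - 0.5 * math.exp(t)
w = rk4(f, 0.0, 2.0, 0.5, 10)
print(w, exact(2.0))
```

With h = 0.2 the error at t = 2 is orders of magnitude smaller than Euler’s, at the cost of four evaluations of f per step.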
Runge-Kutta Methods
Computational Comparisons
• The main computational effort in applying the RK methods is the evaluation of f .
• Methods of order less than five with a smaller step size are used in preference to
higher-order methods with a larger step size.
Runge-Kutta-Fehlberg Method
Runge-Kutta-Fehlberg Method
Why Adaptive Method
• Using varying step sizes for integral approximations produced efficient methods.
• A step-size procedure estimates the truncation error without approximating the
higher derivatives of the function.
• For an initial-value problem
Runge-Kutta-Fehlberg Method
Why Adaptive Method
Compare an O(hn) method with an O(hn+1) method to estimate the truncation error of the O(hn) method.
Runge-Kutta-Fehlberg Method
Runge-Kutta-Fehlberg Method
• Use a Runge-Kutta method with local truncation error of order five, requiring six
evaluations of f.
Runge-Kutta-Fehlberg Method
Runge-Kutta-Fehlberg Method
• Procedure
(1) Compute the values of wi+1 and w̃i+1 using the step size h
(2) Compute q for that step
(3) When q < 1: repeat the calculations using the step size qh
When q ≥ 1: accept the computed value at this step using the step size h, but change the
step size to qh for the (i + 1)st step
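The procedure above can be sketched for a single step. The Fehlberg tableau coefficients below are the standard published ones, not taken from these slides, and the IVP y′ = y − t² + 1, y(0) = 0.5 is an assumption consistent with the example that follows.

```python
def rkf_step(f, t, w, h):
    """One Runge-Kutta-Fehlberg step: returns the 4th-order value w4,
    the 5th-order value w5, and the error estimate R = |w5 - w4| / h."""
    k1 = h * f(t, w)
    k2 = h * f(t + h / 4, w + k1 / 4)
    k3 = h * f(t + 3 * h / 8, w + 3 * k1 / 32 + 9 * k2 / 32)
    k4 = h * f(t + 12 * h / 13,
               w + 1932 * k1 / 2197 - 7200 * k2 / 2197 + 7296 * k3 / 2197)
    k5 = h * f(t + h,
               w + 439 * k1 / 216 - 8 * k2 + 3680 * k3 / 513 - 845 * k4 / 4104)
    k6 = h * f(t + h / 2,
               w - 8 * k1 / 27 + 2 * k2 - 3544 * k3 / 2565
               + 1859 * k4 / 4104 - 11 * k5 / 40)
    w4 = w + 25 * k1 / 216 + 1408 * k3 / 2565 + 2197 * k4 / 4104 - k5 / 5
    w5 = (w + 16 * k1 / 135 + 6656 * k3 / 12825 + 28561 * k4 / 56430
            - 9 * k5 / 50 + 2 * k6 / 55)
    R = abs(w5 - w4) / h
    return w4, w5, R

TOL = 1e-5
f = lambda t, y: y - t**2 + 1       # assumed example IVP, y(0) = 0.5
w4, w5, R = rkf_step(f, 0.0, 0.5, 0.25)
q = 0.84 * (TOL / R) ** 0.25        # step-size multiplier
print(w4, q)
```

Here q < 1, so the driver would reject w4 and redo the step with step size qh.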
Runge-Kutta-Fehlberg Method
Runge-Kutta-Fehlberg Method
• Problem: Use the Runge-Kutta-Fehlberg method with a tolerance TOL = 10−5, a
maximum step size hmax = 0.25, and a minimum step size hmin = 0.01 to approximate the
solution to the initial-value problem
and compare the results with the exact solution y(t) = (t + 1)2 − 0.5et.
Solution: Determine w1 using h = 0.25
Runge-Kutta-Fehlberg Method
Runge-Kutta-Fehlberg Method
q < 1: we cannot accept the approximation 0.9204886 for y(0.25), so we adjust the step
size to h = 0.9461033291(0.25) ≈ 0.2365258 and repeat.
Multistep Methods
Multistep Methods
Introduction
• The one-step methods above: the approximation at the mesh point ti+1 involves
information from only one of the previous mesh points, ti.
• Multistep methods: use the approximations at more than one previous mesh point
to determine the approximation at the next point.
• Definition 5.14: An m-step multistep method for solving the initial-value problem
has a difference equation for finding the approximation wi+1 at the mesh point ti+1
represented by the following equation, where m is an integer greater than 1:
for i = m − 1, m, . . . , N − 1, where h = (b − a)/N, a0, a1, . . . , am−1 and b0, b1, . . . , bm are
constants, and the starting values w0, w1, . . . , wm−1 are specified.
Multistep Methods
Introduction
• The method is called explicit (显式) when bm = 0, and implicit (隐式) for bm ≠ 0.
• Explicit fourth-order Adams-Bashforth technique
and
is the (i + 1)st step in a multistep method, the local truncation error at this step is
Multistep Methods
Adams-Bashforth Explicit Methods
• Two-step method
for i = 1, 2, . . . , N − 1
truncation error
• Three-step method
for i = 2, 3, . . . , N − 1
truncation error
Multistep Methods
Adams-Bashforth Explicit Methods
• Four-step method
for i = 3, 4, . . . , N − 1
truncation error
• Five-step method
for i = 4, 5, . . . , N − 1
truncation error
Multistep Methods
Adams-Moulton Implicit Methods
• Two-step method
for i = 1, 2, . . . , N − 1
truncation error
• Three-step method
for i = 2, 3, . . . , N − 1
truncation error
• Four-step method
for i = 3, 4, . . . , N − 1
Multistep Methods
Example
• Problem: Consider the initial-value problem
Use the exact values given by y(t) = (t + 1)² − 0.5eᵗ as starting values and h = 0.2 to
compare the approximations from (a) the explicit Adams-Bashforth four-step method
and (b) the implicit Adams-Moulton three-step method.
Solution: Adams-Bashforth method
i = 3, 4, . . . , 9
Adams-Moulton method
i = 2, 3, . . . , 9
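Both methods can be sketched as below. The IVP y′ = y − t² + 1, y(0) = 0.5 on [0, 2] is an assumption consistent with the stated exact solution, and the Adams coefficients are the standard published ones. Because this f is linear in y, the implicit Adams-Moulton equation can be solved algebraically for wi+1.

```python
import math

f = lambda t, y: y - t**2 + 1                 # assumed example IVP
exact = lambda t: (t + 1)**2 - 0.5 * math.exp(t)
h, N = 0.2, 10
t = [h * i for i in range(N + 1)]

# (a) Explicit Adams-Bashforth four-step method, exact starting values
w = [exact(t[i]) for i in range(4)]
for i in range(3, N):
    fs = [f(t[i - j], w[i - j]) for j in range(4)]  # f_i, f_{i-1}, f_{i-2}, f_{i-3}
    w.append(w[i] + h / 24 * (55 * fs[0] - 59 * fs[1] + 37 * fs[2] - 9 * fs[3]))
ab = w

# (b) Implicit Adams-Moulton three-step method; since f is linear in y,
# solve the implicit equation algebraically for w_{i+1}
w = [exact(t[i]) for i in range(3)]
for i in range(2, N):
    rhs = w[i] + h / 24 * (9 * (-t[i + 1]**2 + 1)
                           + 19 * f(t[i], w[i])
                           - 5 * f(t[i - 1], w[i - 1])
                           + f(t[i - 2], w[i - 2]))
    w.append(rhs / (1 - 9 * h / 24))
am = w

print(ab[-1], am[-1], exact(2.0))
```

The implicit method gives the smaller error at every mesh point, matching the observation on the next slide.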
Multistep Methods
Example
Multistep Methods
Predictor-Corrector Methods (预估-修正法)
• The implicit Adams-Moulton method gave better results than the explicit Adams-
Bashforth method of the same order.
• Deficiency of the implicit method: first having to convert the method algebraically to
an explicit representation for wi+1.
• Predictor-Corrector method: an explicit method to predict and an implicit to improve
the prediction.
• Procedure
(1) Calculate an approximation w4(p) to y(t4) using the four-step explicit Adams-Bashforth
method as predictor.
(2) Improve it by inserting w4(p) into the right side of the three-step implicit Adams-Moulton
method and using that method as a corrector.
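The predictor-corrector procedure can be sketched as below, with RK4 starting values. The IVP y′ = y − t² + 1, y(0) = 0.5 on [0, 2] is an assumption carried over from the earlier examples.

```python
import math

f = lambda t, y: y - t**2 + 1                 # assumed example IVP
exact = lambda t: (t + 1)**2 - 0.5 * math.exp(t)

def rk4_step(f, t, w, h):
    k1 = h * f(t, w)
    k2 = h * f(t + h / 2, w + k1 / 2)
    k3 = h * f(t + h / 2, w + k2 / 2)
    k4 = h * f(t + h, w + k3)
    return w + (k1 + 2 * k2 + 2 * k3 + k4) / 6

h, N = 0.2, 10
t = [h * i for i in range(N + 1)]

# starting values w0..w3 from the fourth-order Runge-Kutta method
w = [0.5]
for i in range(3):
    w.append(rk4_step(f, t[i], w[i], h))

for i in range(3, N):
    fi = [f(t[i - j], w[i - j]) for j in range(4)]
    # predict with the explicit Adams-Bashforth four-step method
    wp = w[i] + h / 24 * (55 * fi[0] - 59 * fi[1] + 37 * fi[2] - 9 * fi[3])
    # correct with the implicit Adams-Moulton three-step method
    wc = w[i] + h / 24 * (9 * f(t[i + 1], wp) + 19 * fi[0] - 5 * fi[1] + fi[2])
    w.append(wc)

print(w[-1], exact(2.0))
```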
Multistep Methods
Predictor-Corrector Methods (预估-修正法)
• Problem: Apply the Adams fourth-order predictor-corrector method with h = 0.2 and
starting values from the Runge-Kutta fourth order method to the initial-value problem
Solution:
Summary
Euler’s Method
Higher-Order Taylor Methods
Runge-Kutta Methods
• Runge-Kutta method of order two
• Midpoint method
• Runge-Kutta method of order four
Runge-Kutta-Fehlberg Methods
• Estimate truncation error with minimal computational cost
• RK order five + RK order four
Multistep Methods
• Use solutions on more mesh points
• Explicit Adams-Bashforth methods
• Implicit Adams-Moulton methods
Direct Methods for Solving Linear Systems
(线性方程组的直接解法)
Wei Zhang(张炜)
Numerical Analysis and Application(数值分析与应用)
Lecture 6: Direct Methods for Solving Linear Systems(线性方程组的直接解法)
Introduction
Engineering Problem
• Kirchhoff’s laws of electrical circuits (基尔霍夫电路定律)
• Linear system of equations for this problem
• General form of a linear system of equations:
given the constants aij, for each i, j = 1, 2, . . . , n,
and bi, for each i = 1, 2, . . . , n, determine the
unknowns x1, . . . , xn.
• Direct methods: theoretically give the exact solution to the system in a finite number
of steps.
Outlines
• Augmented matrix(增广矩阵)
Linear system
vector b
Augmented matrix
Forward elimination
Eliminate the coefficient of x1 in each row (provided a11 ≠ 0)
Backward substitution
General solution
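The forward-elimination and backward-substitution steps can be sketched as below; the small 3 × 3 system is illustrative, not from the slides.

```python
def gauss_solve(A, b):
    """Gaussian elimination (no pivoting) with backward substitution.
    Assumes all pivot elements are nonzero."""
    n = len(A)
    # build the augmented matrix [A | b]
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    # forward elimination
    for k in range(n - 1):
        for j in range(k + 1, n):
            m = M[j][k] / M[k][k]          # multiplier m_jk
            for c in range(k, n + 1):
                M[j][c] -= m * M[k][c]
    # backward substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# illustrative system with solution x = (1, 2, 3)
A = [[2.0, 1.0, -1.0],
     [1.0, 3.0, 2.0],
     [3.0, -1.0, 1.0]]
b = [1.0, 13.0, 4.0]
x = gauss_solve(A, b)
print(x)
```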
Pivot element
Pivoting Strategies
Pivoting Strategies
Simple Row Interchange
• For pivot elements akk (k) = 0: row interchange (Ek) ↔ (Ep), where p is the smallest integer
greater than k with apk (k) ≠ 0.
Why Apply Pivoting (选主元) Strategies
• If akk(k) is small in magnitude compared to ajk(k), the multiplier mjk = ajk(k)/akk(k) has
magnitude much greater than 1; the round-off error it introduces is then compounded
in the backward substitution for xj.
Pivoting Strategies
Example
• Problem: Apply Gaussian elimination to the system
using four-digit arithmetic with rounding, and compare the results to the exact solution
x1 = 10.00 and x2 = 1.000.
Solution: The first pivot element is a11(1) = 0.003000
Exact
Pivoting Strategies
Example
Pivoting Strategies
Partial Pivoting (列主元法)
• Pivoting is performed by selecting an element apk(k) with a larger magnitude as the
pivot, and interchanging the kth and pth rows.
• Simple partial pivoting: select an element in the same column that is below the
diagonal and has the largest absolute value
(choose the smallest such p ≥ k)
• Problem: Apply Gaussian elimination to the system above,
using partial pivoting and four-digit arithmetic with rounding, and compare the results to
the exact solution x1 = 10.00 and x2 = 1.000.
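A sketch of Gaussian elimination with partial pivoting. The 2 × 2 system below is an assumption reconstructed from the stated pivot 0.003000 and the exact solution x1 = 10.00, x2 = 1.000; here double precision is used rather than four-digit rounding.

```python
def gauss_partial_pivot(A, b):
    """Gaussian elimination with partial pivoting and backward substitution."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n - 1):
        # choose the row p >= k whose entry in column k has the largest magnitude
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        if p != k:
            M[k], M[p] = M[p], M[k]        # row interchange (E_k) <-> (E_p)
        for j in range(k + 1, n):
            m = M[j][k] / M[k][k]
            for c in range(k, n + 1):
                M[j][c] -= m * M[k][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# assumed system for this example; exact solution x1 = 10.00, x2 = 1.000
A = [[0.003000, 59.14],
     [5.291, -6.130]]
b = [59.17, 46.78]
x = gauss_partial_pivot(A, b)
print(x)
```

Pivoting swaps the rows so the small entry 0.003000 never serves as a pivot.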
Pivoting Strategies
Partial Pivoting (列主元法)
Solution: Find the pivot
Exact solution
Pivoting Strategies
Partial Pivoting (列主元法)
• Counter-Example: Apply Gaussian elimination to the system
which is the same as the above problem except that all the entries in the first equation
have been multiplied by 10⁴.
Solution: no row interchange is needed
Wrong results
Pivoting Strategies
Scaled Partial Pivoting (比例消元法)
• Choose as the pivot the element that is largest relative to the entries in its row.
• Procedure: (1) define a scale factor si for each row as
Pivoting Strategies
Scaled Partial Pivoting (比例消元法)
• Problem: Apply Gaussian elimination to the system
Exact solution
Pivoting Strategies
Complete Pivoting (全主元消去法)
• Complete pivoting at the kth step searches all the entries aij, for i = k, k + 1, . . . , n and j =
k, k+1, . . . , n, to find the entry with the largest magnitude.
• Both row and column interchanges are performed.
• Massive computational cost for comparisons.
• This strategy is recommended only for systems where accuracy is essential.
Matrix Factorization
Matrix Factorization
LU Factorization
• The steps used to solve a system of the form Ax = b can be used to factor a matrix.
• The factorization is particularly useful when it has the form A = LU, where L is lower
triangular and U is upper triangular.
• Theorem 6.19: If Gaussian elimination can be performed on the linear system Ax = b
without row interchanges, then the matrix A can be factored into the product of a
lower-triangular matrix L and an upper-triangular matrix U, that is, A = LU, where
mji = aji(i) / aii(i)
Matrix Factorization
LU Factorization
• Suppose A = LU, then solve for x by using a two-step process:
(1) First let y = Ux and solve the lower triangular system Ly = b for y.
(2) Once y is known, solve the upper triangular system Ux = y to determine the solution x.
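The factor-then-solve process can be sketched with a Doolittle-style LU factorization (unit diagonal in L). The 3 × 3 system below is illustrative, not the one from the slides.

```python
def lu_factor(A):
    """Doolittle LU factorization (no pivoting): A = LU with unit diagonal L."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):              # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):          # column i of L (the multipliers m_ji)
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    # (1) forward substitution: solve Ly = b
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    # (2) backward substitution: solve Ux = y
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

# illustrative system (solution (3, 4, -5))
A = [[4.0, 3.0, 0.0],
     [3.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [24.0, 30.0, -24.0]
L, U = lu_factor(A)
x = lu_solve(L, U, b)
print(x)
```

Once L and U are known, additional right-hand sides cost only the two substitution sweeps.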
• Problem: Determine the LU factorization for matrix A in the linear system Ax = b, where
Matrix Factorization
LU Factorization
Solution:
Matrix Factorization
LU Factorization
Introduce the substitution y = Ux. Then b = L(Ux) = Ly
Forward substitution
Backward substitution
• Theorem 6.23: If A is an n × n positive definite matrix, then: (1) A has an inverse; (2) aii >
0, for each i = 1, 2, . . . , n; (3) max1≤k,j≤n |akj| ≤ max1≤i≤n |aii|; (4) (aij)² < aii ajj, for each
i ≠ j.
• Theorem 6.26: The symmetric matrix A is positive definite if and only if Gaussian
elimination without row interchanges can be performed on the linear system Ax = b with
all pivot elements positive. Moreover, in this case, the computations are stable with
respect to the growth of round-off errors.
p=q=2
bandwidth 2 + 2 − 1 = 3
• The entries in A can be overwritten by the entries in L and U with the result that no new
storage is required.
Solution:
Crout factorization
Summary
Basics of Vectors and Matrices
Gauss Elimination Methods
• Forward elimination + backward substitution
• Pivoting: partial pivoting, scaled partial pivoting, complete pivoting
Matrix Factorization
• LU factorization: Crout’s method, Doolittle’s method, Cholesky’s method
Special Types of Matrices
• Diagonally dominant matrices, positive definite matrices, tridiagonal matrices
Iterative Methods for Solving Linear Systems
(线性方程组的迭代解法)
Wei Zhang(张炜)
Numerical Analysis and Application(数值分析与应用)
Lecture 7: Iterative Methods for Solving Linear Systems(线性方程组的迭代解法)
Introduction
Engineering Problem
• Trusses (桁架): lightweight structures
capable of carrying heavy loads
• The truss is in static equilibrium.
• Two endpoints (1, 4), four pin joints (1, 2, 3,
4), eight forces.
Outlines
• Definition 7.2: The l2 and l∞ norms for the vector x = (x1, x2, . . . , xn)t are defined by
• The distance between two vectors is defined as the norm of the difference of the vectors.
• Definition 7.4: If x = (x1, x2, . . . , xn)t and y = (y1, y2, . . . , yn)t are vectors in ℝn, the l2 and l∞
distances between x and y are defined by
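These norms and distances can be sketched directly from the definitions (‖x‖₂ is the square root of the sum of squares; ‖x‖∞ is the largest absolute component):

```python
import math

def l2_norm(x):
    """l2 norm: square root of the sum of squares."""
    return math.sqrt(sum(xi * xi for xi in x))

def linf_norm(x):
    """l-infinity norm: largest component in absolute value."""
    return max(abs(xi) for xi in x)

def l2_dist(x, y):
    """l2 distance: norm of the difference of the vectors."""
    return l2_norm([xi - yi for xi, yi in zip(x, y)])

def linf_dist(x, y):
    return linf_norm([xi - yi for xi, yi in zip(x, y)])

x = [3.0, -4.0, 0.0]
print(l2_norm(x), linf_norm(x))
```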
• The distance between n × n matrices A and B with respect to this matrix norm is
‖A − B‖.
• Definition 7.13: If p is the characteristic polynomial of the matrix A, the zeros of p are
eigenvalues (characteristic values) of the matrix A. If λ is an eigenvalue of A and x ≠ 0
satisfies (A − λI)x = 0, then x is an eigenvector (characteristic vector) of A corresponding
to the eigenvalue λ.
Jacobi Method
Jacobi Method
Iterative Methods
• Iterative methods are efficient in terms of both computer storage and computation
for large systems with a high percentage of zero entries.
• An iterative technique to solve the n × n linear system Ax = b starts with an initial
approximation x(0) to the solution x and generates a sequence of vectors {x(k)} (k ≥ 0) that
converges to x.
Jacobi Method
• By solving the ith equation in Ax = b for xi, obtain (provided aii ≠ 0) the fixed-point form.
• For each k ≥ 1, generate the components xi(k) of x(k) from the components of x(k−1)
by the iteration.
Jacobi Method
Jacobi Method
• Problem: The linear system Ax = b given by
has the unique solution x = (1, 2, −1, 1)t. Use the Jacobi method to find approximations x(k) to
x starting with x(0) = (0, 0, 0, 0)t until
Jacobi Method
Jacobi Method
From the initial approximation x(0) = (0, 0, 0, 0)t we have x(1) given by
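The Jacobi iteration can be sketched as below. The 4 × 4 system is an assumption consistent with the stated unique solution (1, 2, −1, 1)t; the stopping test uses the relative l∞ change between successive iterates.

```python
def jacobi(A, b, x0, tol=1e-3, max_iter=100):
    """Jacobi iteration: every new component is computed from the OLD vector."""
    n = len(A)
    x = x0[:]
    for k in range(1, max_iter + 1):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        # stop when the relative l-infinity change is below tol
        if max(abs(x_new[i] - x[i]) for i in range(n)) / max(abs(v) for v in x_new) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# assumed system for this example, with unique solution (1, 2, -1, 1)
A = [[10.0, -1.0, 2.0, 0.0],
     [-1.0, 11.0, -1.0, 3.0],
     [2.0, -1.0, 10.0, -1.0],
     [0.0, 3.0, -1.0, 8.0]]
b = [6.0, 25.0, -11.0, 15.0]
x, k = jacobi(A, b, [0.0, 0.0, 0.0, 0.0])
print(x, k)
```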
Jacobi Method
Jacobi Method: Factorization Form
• Iterative methods: convert the system Ax = b into an equivalent system of the form
x = Tx + c for some fixed matrix T and vector c.
• The sequence of approximate solution vectors is generated by computing
Jacobi Method
Jacobi Method: Factorization Form
Gauss-Seidel Method
Gauss-Seidel Method
Gauss-Seidel Method
• In the Jacobi method: the components of x(k−1) are used to compute all the components
xi(k) of x(k).
• For i > 1, the components x1(k), . . . , xi−1(k) of x(k) have already been computed and are
expected to be better approximations to the actual solutions than x1(k−1), . . . , xi−1(k−1).
• Gauss-Seidel method: compute xi(k) using the most recently calculated values.
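The Gauss-Seidel update can be sketched as below; note that each component is overwritten in place, so later components in the same sweep already use the new values. The 4 × 4 system is an assumption consistent with the solution (1, 2, −1, 1)t.

```python
def gauss_seidel(A, b, x0, tol=1e-3, max_iter=100):
    """Gauss-Seidel: each component immediately uses the most recent values."""
    n = len(A)
    x = x0[:]
    for k in range(1, max_iter + 1):
        x_old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)  # mixes new and old
            x[i] = (b[i] - s) / A[i][i]
        if max(abs(x[i] - x_old[i]) for i in range(n)) / max(abs(v) for v in x) < tol:
            return x, k
    return x, max_iter

# assumed system, same as for the Jacobi example; exact solution (1, 2, -1, 1)
A = [[10.0, -1.0, 2.0, 0.0],
     [-1.0, 11.0, -1.0, 3.0],
     [2.0, -1.0, 10.0, -1.0],
     [0.0, 3.0, -1.0, 8.0]]
b = [6.0, 25.0, -11.0, 15.0]
x, k = gauss_seidel(A, b, [0.0] * 4)
print(x, k)
```

Reusing the freshest components typically cuts the iteration count relative to Jacobi on systems like this one.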
Gauss-Seidel Method
Gauss-Seidel Method
Solution: we write the system, for each k = 1, 2, . . . as
Gauss-Seidel Method
Gauss-Seidel Method: Factorization Form
Solution: we write the system, for each k = 1, 2, . . . as
Relaxation Techniques
Relaxation Techniques
Why Relaxation Methods
• The rate of convergence of an iterative technique depends on the spectral radius of the
matrix associated with the method.
• Convergence acceleration: choose a method whose associated matrix has minimal
spectral radius.
• Definition 7.23: Suppose x̃ ∊ ℝn is an approximation to the solution of the linear system
defined by Ax = b. The residual vector for x̃ with respect to this system is r = b − Ax̃.
Gauss-Seidel method presented by the residual
Denote the approximate solution vector xi(k) = (x1(k), . . . , xi−1(k), xi(k−1), . . . , xn(k−1))t,
with residual ri(k).
Relaxation Techniques
Gauss-Seidel method presented by the residual
The mth component of ri(k) is
G-S iteration
Relaxation Techniques
Gauss-Seidel method presented by the residual
By modifying the Gauss-Seidel procedure
Relaxation method
For certain choices of positive ω, we can reduce the norm of the residual vector and
obtain significantly faster convergence.
• Under-relaxation methods (欠松弛): 0 < ω < 1
• Over-relaxation methods (超松弛) : 1 < ω
• Successive over-relaxation (SOR) methods (逐次超松弛): over-relaxation applied to
accelerate convergence for systems that are solvable by the Gauss-Seidel method
Relaxation Techniques
Example
• Problem: The linear system Ax = b given by
has the solution (3, 4,−5)t. Compare the iterations from the Gauss-Seidel method and the
SOR method with ω = 1.25 using x(0) = (1, 1, 1)t for both methods.
Solution: For each k = 1, 2, . . . , the equations for the Gauss-Seidel method are
and the equations for the SOR method with ω = 1.25 are
Relaxation Techniques
Example
For the iterates to be accurate to seven decimal places, the Gauss-Seidel method requires
34 iterations, as opposed to 14 iterations for the SOR method with ω = 1.25.
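A sketch of SOR next to Gauss-Seidel (ω = 1 reduces SOR to Gauss-Seidel). The 3 × 3 system is an assumption consistent with the stated solution (3, 4, −5)t; the iteration counts depend on the stopping criterion, so they illustrate the speed-up rather than reproduce the 34-vs-14 figures exactly.

```python
def sor(A, b, x0, omega, tol=1e-7, max_iter=200):
    """Successive over-relaxation: a weighted Gauss-Seidel update."""
    n = len(A)
    x = x0[:]
    for k in range(1, max_iter + 1):
        x_old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - s) / A[i][i]               # plain Gauss-Seidel value
            x[i] = (1 - omega) * x[i] + omega * gs  # relaxed update
        if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
            return x, k
    return x, max_iter

# assumed system for this example; exact solution (3, 4, -5)
A = [[4.0, 3.0, 0.0],
     [3.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [24.0, 30.0, -24.0]
x_gs, k_gs = sor(A, b, [1.0, 1.0, 1.0], omega=1.0)    # omega = 1 is Gauss-Seidel
x_sor, k_sor = sor(A, b, [1.0, 1.0, 1.0], omega=1.25)
print(k_gs, k_sor)
```

The over-relaxed iteration reaches the tolerance in noticeably fewer sweeps.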
Relaxation Techniques
Convergence of SOR Method
• Theorem 7.24: If aii ≠ 0, for each i = 1, 2, . . . , n, then ρ(Tω) ≥ |ω−1|. This implies that the
SOR method can converge only if 0 < ω < 2.
• Theorem 7.25: If A is a positive definite matrix and 0 < ω < 2, then the SOR method
converges for any choice of initial approximate vector x(0).
• Theorem 7.26: If A is positive definite and tridiagonal, then ρ(Tg) = [ρ(Tj)]² < 1, and the
optimal choice of ω for the SOR method is
Relaxation Techniques
SOR Method: Factorization Form
has the unique solution x = (1, 1)t. Determine the residual vector for the poor approximation
x̃ = (3, −0.0001)t.
Solution:
• Theorem 7.28: The condition number (条件数) of the nonsingular matrix A relative to
a norm ‖·‖ is K(A) = ‖A‖ · ‖A−1‖.
Summary
Norms of Vectors and Matrices
Jacobi Method
• Simple
• Always use the old solutions to compute the new ones
Gauss-Seidel Method
• Use the updated solutions to compute the new solutions
Relaxation Techniques
• Over-relaxation, under-relaxation, SOR
• 0<ω<2
Iterative Refinement
• Spectral radius, condition number
• Solve the system for error and residual