GOAL.
• To be able to plot the error function for visualization.
• To understand the error of polynomial interpolation.
Consider the function

f1(x) = sin(3x)/(1 + 3x)
defined on the interval [a, b] = [0, 6]. This function is plotted in Fig. 1.
The polynomial p5 (x) of degree 5 interpolating this function at the six
equally-spaced nodes:
xi = 6i/5,    i = 0, 1, · · · , 5,
is plotted in Fig. 2, and the interpolation error e5 (x) = f1 (x) − p5 (x) is plotted
in Fig. 3. It is clear that the error can be quite large and the corresponding
polynomial interpolation is not acceptable. However, if we use 13 equally-spaced
nodes to interpolate f1 (x) with a polynomial p12 (x) of degree 12, then the error is
plotted in Fig. 4. This new polynomial is much closer to the function f1 (x) than
p5(x). In other words, it might appear that functions can be better interpolated
by polynomials when more interpolation points are used... but this is not always
the case, as we shall see later in this chapter.

Figure 1: Plot of f1(x) = sin(3x)/(1 + 3x).
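This comparison is easy to check numerically. The following is a rough Python sketch (independent of the MATLAB code used to generate the figures) that builds the two Newton interpolants and measures the maximum error of each on a fine grid:

```python
import numpy as np

f1 = lambda x: np.sin(3 * x) / (1 + 3 * x)
z = np.linspace(0.0, 6.0, 2001)   # fine grid for measuring the error

def max_err(npts):
    """Max |f1 - p| on [0, 6] for interpolation at npts equispaced nodes."""
    x = np.linspace(0.0, 6.0, npts)
    c = f1(x)
    for i in range(1, npts):           # Newton divided differences, in place
        c[i:] = (c[i:] - c[i - 1:-1]) / (x[i:] - x[:npts - i])
    p = np.full_like(z, c[-1])
    for k in range(npts - 2, -1, -1):  # nested multiplication
        p = (z - x[k]) * p + c[k]
    return np.max(np.abs(f1(z) - p))

e5, e12 = max_err(6), max_err(13)
print(e5, e12)   # the degree-12 error is much smaller
```

The grid size 2001 is an arbitrary choice; any reasonably fine grid shows the same ordering of the two errors.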
The following MATLAB scripts were used to generate the figures. First, we
need a MATLAB function to compute the coefficients in the Newton divided
difference interpolating polynomial. This is given as follows:
function c=divdif(x_nodes,f_values)
%
% To compute the coefficients c(1),...,c(n) in the Newton form:
% p(x) = c(1) + c(2)(x-x(1)) + ... + c(n)(x-x(1))...(x-x(n-1))
%
divdif_f = f_values;
c = divdif_f;
n = length(x_nodes);
for i=2:n
    for j=n:-1:i
        divdif_f(j) = (divdif_f(j)-divdif_f(j-1))./(x_nodes(j)-x_nodes(j-i+1));
    end
    c(i) = divdif_f(i);
end
end

Figure 2: Plot of p5(x) from six equally-spaced nodes.
Next, we need to use nested multiplication to evaluate the Newton divided
difference interpolating polynomial. The following MATLAB function will do
this:
function pval=NestedM(c,x,z)
% Evaluate the Newton form with coefficients c and nodes x at the
% points z, using nested multiplication.
n=length(c);
pval = c(n)*ones(size(z));
for k=n-1:-1:1
    pval = (z-x(k)).*pval + c(k);
end
end
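For readers working outside MATLAB, the two routines translate almost line for line into Python with NumPy. The following is a sketch (function names mirror the MATLAB ones), together with the basic sanity check that the interpolant reproduces the data at the nodes:

```python
import numpy as np

def divdif(x_nodes, f_values):
    """Newton divided-difference coefficients c[0..n-1], mirroring divdif.m."""
    divdif_f = np.array(f_values, dtype=float)
    c = divdif_f.copy()
    n = len(x_nodes)
    for i in range(1, n):
        for j in range(n - 1, i - 1, -1):
            divdif_f[j] = (divdif_f[j] - divdif_f[j - 1]) / (
                x_nodes[j] - x_nodes[j - i])
        c[i] = divdif_f[i]
    return c

def nested_m(c, x_nodes, z):
    """Evaluate the Newton form at the points z by nested multiplication."""
    z = np.asarray(z, dtype=float)
    pval = np.full_like(z, c[-1])
    for k in range(len(c) - 2, -1, -1):
        pval = (z - x_nodes[k]) * pval + c[k]
    return pval

xint = np.linspace(0, 6, 13)
yint = np.sin(3 * xint) / (1 + 3 * xint)
cint = divdif(xint, yint)

# An interpolant must reproduce the data at the nodes (up to rounding).
resid = np.max(np.abs(nested_m(cint, xint, xint) - yint))
print(resid)
```

The only non-obvious translation point is the index shift: MATLAB's `x_nodes(j-i+1)` with 1-based indexing becomes `x_nodes[j - i]` with 0-based indexing.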
To plot the polynomial and the error, we use the following in MATLAB:
>> xint=linspace(0,6,13);
>> yint=sin(3*xint)./(1+3*xint);
>> cint=divdif(xint,yint);
>> z=linspace(0,6,200);
>> pval=NestedM(cint,xint,z);
>> plot(z,pval)
>> fz=sin(3*z)./(1+3*z);
>> plot(z,fz-pval)
>> plot(z,fz-pval,z,pval)

Figure 3: Plot of the error e5(x) = f1(x) − p5(x).

Figure 4: Plot of the error e12(x) = f1(x) − p12(x).
• The first two lines define the interpolation data (xi , fi ) for i = 1, 2, . . . , 13,
with xi = 0.5 ∗ (i − 1).
• The third line computes the coefficients ci of the Newton divided difference
interpolating polynomial.
• The fourth line takes a sample of 200 points uniformly distributed on the
interval (0, 6); this is for plotting.
• The fifth line evaluates the Newton divided difference interpolating poly-
nomial at the 200 sample points, and the values are saved in the vector
pval().
• The sixth line plots the polynomial.
• The seventh line computes the value of the given function f (x) at the 200
sampling points.
• The eighth line plots the interpolation error.
• The last line plots both the error function and the interpolating polyno-
mial.
3 Error Estimates
Let the evaluation point be x and let all the nodes {xi }ni=0 lie in a closed interval
[a, b]. Then, as we shall prove shortly, if the function f has n + 1 continuous
derivatives on the interval [a, b], the error expression takes the form
f(x) − pn(x) = [ωn+1(x)/(n + 1)!] f(n+1)(ξx),    (1)
where
ωn+1(x) ≡ (x − x0)(x − x1) · · · (x − xn) = ∏_{j=0}^{n} (x − xj),
and ξx is some (unknown) point in the interval [a, b]. The precise location of
this point depends on {xi }ni=0 . Here f (n+1) (ξx ) is the (n+1)st derivative of f (x)
evaluated at the point x = ξx .
To prove the desired error expression (1), note first that the result is trivially
true when x is any node xi since then both sides of the expression are zero.
Assume that x does not equal any node and consider the function

F(t) = f(t) − pn(t) − c ωn+1(t),

where

c = [f(x) − pn(x)]/ωn+1(x).
Observe that c is well defined because ωn+1(x) ≠ 0 since x is not a node. Note
also that F (xi ) = 0, i = 0, . . . , n, and F (x) = 0. Thus F (t) has at least
n + 2 distinct zeros in [a, b]. Now invoke Rolle's Theorem, which states that
between any two zeros of F there must occur a zero of F′. Thus, F′ has at least
n + 1 distinct zeros. By similar reasoning, F″ has at least n distinct zeros, and
so on. Finally, it can be inferred that F (n+1) must have at least one zero. Let
ξx be a zero of F (n+1) (t). Thus we have
0 = F(n+1)(ξx) = f(n+1)(ξx) − c(n + 1)! = f(n+1)(ξx) − [(n + 1)!/ωn+1(x)] [f(x) − pn(x)],

since pn(t) has degree n so its (n + 1)st derivative vanishes, while the (n + 1)st
derivative of ωn+1(t) is the constant (n + 1)!. The desired result (1) follows.
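The error expression (1) yields the computable bound |f(x) − pn(x)| ≤ max|ωn+1| · max|f(n+1)|/(n + 1)!, which can be verified numerically. The sketch below is an illustration not taken from the text: it interpolates f(x) = sin x at four equally-spaced nodes on [0, 1] (so n = 3) and checks the measured error against the bound, using max|f(4)| = max|sin| ≤ 1 on that interval:

```python
import numpy as np
from math import factorial

a, b, n = 0.0, 1.0, 3
x_nodes = np.linspace(a, b, n + 1)
coef = np.polyfit(x_nodes, np.sin(x_nodes), n)   # degree-n interpolant

z = np.linspace(a, b, 5001)
err = np.max(np.abs(np.sin(z) - np.polyval(coef, z)))

omega = np.ones_like(z)
for xj in x_nodes:
    omega *= z - xj                               # omega_{n+1}(z)

bound = np.max(np.abs(omega)) / factorial(n + 1)  # times max|f^{(4)}| <= 1
print(err, bound)
```

Fitting through n + 1 points with a degree-n polynomial makes `np.polyfit` an exact interpolant here; at this tiny size the conditioning of the underlying Vandermonde system is harmless.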
The following are some of the intrinsic properties of the interpolation error:
• For any value of i, the error is zero when x = xi, because ωn+1(xi) = 0.
• The size of the error is governed by the derivative factor f(n+1)(ξx): if the
higher derivatives of f are badly behaved on [a, b], the error can be large even
when many interpolation points are used.
For any given s ∈ (0, n), let i be an integer such that i < s < i + 1. It follows
that
∏_{j=0}^{n} |s − j| = |(s − i)(s − i − 1)| ∏_{j=0}^{i−1} |s − j| ∏_{j=i+2}^{n} |s − j|,    (4)
Thus, if a function has ill-behaved higher derivatives, then the quality of the
polynomial interpolation may actually decrease as the degree of the polynomial
increases.
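A standard illustration of this phenomenon (the Runge function f(x) = 1/(1 + 25x²) on [−1, 1], a classical example that is not taken from this section) shows the maximum error of equally-spaced interpolation growing as the degree increases:

```python
import numpy as np

runge = lambda x: 1.0 / (1.0 + 25.0 * x**2)

def max_err(n):
    """Max error of the degree-n interpolant at n+1 equispaced nodes on [-1, 1]."""
    x = np.linspace(-1.0, 1.0, n + 1)
    c = runge(x)
    for i in range(1, n + 1):           # Newton divided differences, in place
        c[i:] = (c[i:] - c[i - 1:-1]) / (x[i:] - x[:n + 1 - i])
    z = np.linspace(-1.0, 1.0, 2001)
    p = np.full_like(z, c[-1])
    for k in range(n - 1, -1, -1):      # nested multiplication
        p = (z - x[k]) * p + c[k]
    return np.max(np.abs(runge(z) - p))

print(max_err(5), max_err(15))   # the higher-degree interpolant is worse
```

The growth is driven by exactly the factor f(n+1)(ξx) in (1): the derivatives of the Runge function grow rapidly with n.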
EXAMPLE 1: Consider f(x) = sin x and suppose its values are known at the three
equally-spaced nodes x0 = −h, x1 = 0, and x2 = h. Then, from (1), the error in
approximating the function sin x by p2(x) is

f(x) − p2(x) = [(x − x0)(x − x1)(x − x2)/3!] f(3)(ξx).

The maximum of |(x − x0)(x − x1)(x − x2)| between x0 and x2 occurs at x = ±h/√3,
and this maximum value is 2h³/(3√3). (You may wish to verify this.) Moreover, since
f(3)(x) = −cos x (recall that f(x) = sin x),
|f (3) (ξx )| ≤ 1,
we obtain

max_{−h≤x≤h} |f(x) − p2(x)| ≤ (√3/27) h³.
If, for example, we wish to obtain seven-place accuracy using quadratic
interpolation, we would have to choose h such that

(√3/27) h³ < 5 · 10⁻⁸.

Hence h ≈ 0.01.
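This bound can be checked numerically. In the sketch below, h = 0.009 is an illustrative choice satisfying the inequality above; the measured error of the quadratic interpolant at the nodes −h, 0, h never exceeds (√3/27)h³:

```python
import numpy as np

h = 0.009
x_nodes = np.array([-h, 0.0, h])
coef = np.polyfit(x_nodes, np.sin(x_nodes), 2)   # quadratic interpolant

z = np.linspace(-h, h, 20001)
err = np.max(np.abs(np.sin(z) - np.polyval(coef, z)))

bound = np.sqrt(3.0) / 27.0 * h**3
print(err, bound)
```

The measured error comes out only slightly below the bound, which reflects that |f(3)(ξx)| = |cos ξx| is very close to 1 on such a small interval.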
Since

f(3)(x) = (3/8) x^(−5/2),

it follows that |f(3)(ξ)| ≤ 3/8 for ξ ≥ 1. Further, from Example 1,

max_{x̄∈[xi−1, xi+1]} |(x̄ − xi−1)(x̄ − xi)(x̄ − xi+1)| ≤ 2h³/(3√3),

so that

|f(x̄) − p2(x̄)| ≤ [2h³/(3√3)] × (3/8) × (1/3!) = h³/(24√3),
if p2(x) is chosen as the quadratic polynomial which interpolates f(x) = √x at
the three tabular points nearest x̄.
For an accuracy of 5 × 10⁻⁸, we must choose h so that

h³/(24√3) < 5 × 10⁻⁸,

giving h ≈ 0.0128 or N ≈ 79.
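As a numerical sanity check, the sketch below uses the illustrative tabular points 1, 1 + h, 1 + 2h (chosen so that the bound |f(3)| ≤ 3/8, valid for x ≥ 1, applies; the actual table in the example is not reproduced here) and verifies the bound h³/(24√3) over the whole bracket:

```python
import numpy as np

h = 0.0128
x_nodes = np.array([1.0, 1.0 + h, 1.0 + 2.0 * h])
coef = np.polyfit(x_nodes, np.sqrt(x_nodes), 2)   # quadratic interpolant

z = np.linspace(x_nodes[0], x_nodes[-1], 20001)
err = np.max(np.abs(np.sqrt(z) - np.polyval(coef, z)))

bound = h**3 / (24.0 * np.sqrt(3.0))
print(err, bound)
```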
Fig. 6 depicts the polynomial ωn+1(x) for points x slightly outside the interval
[a, b], where |ωn+1(x)| grows like |x|^(n+1). Evaluating the polynomial pn(x) for
points x outside the interval [a, b] is called extrapolation. This illustration indi-
cates that extrapolation should be avoided whenever possible. This observation
implies that we need to know the application intended for the interpolant before
we measure the data, a fact frequently ignored when designing an experiment.
Figure 6: Plot of the polynomial ωn+1 (x) for x ∈ [−0.05, +1.05] when a = 0, b =
1, n = 5 with equally-spaced nodes.
Figure 7: Plot of the polynomial ωn+1 (x) for x ∈ [0, 1] when a = 0, b = 1 and
n = 5 with Chebyshev nodes.
How do we choose the nodes {xi }ni=0 so that the resulting ωn+1 (x) is small?
The Chebyshev nodes turn out to be good candidates. To describe the Cheby-
shev nodes, we introduce the Chebyshev polynomials:
T0 (x) = 1,
T1 (x) = x,
T2 (x) = 2x2 − 1.
In general, Tk(x) = cos(k cos⁻¹(x)) for x ∈ [−1, 1] and k = 0, 1, 2, . . ..
The zeros of the Chebyshev polynomial Tn+1 (x) are called the Chebyshev
points for [−1, 1]. The Chebyshev points for an arbitrary interval [a, b] are
obtained using the following transformation:
x = (b + a)/2 − [(b − a)/2] t,    (11)

and are given by

xi = (b + a)/2 − [(b − a)/2] cos((2i + 1)π/(2n + 2)),    i = 0, 1, · · · , n.
Notice that the transformation (11) maps the interval [−1, 1] to the interval
[a, b]. The Chebyshev points do not include the endpoints a or b, and all the
Chebyshev points lie inside the open interval (a, b). For the data in Fig. 7, we
use these Chebyshev points.
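A quick check, using the cosine characterization Tk(x) = cos(k cos⁻¹ x), confirms that the points produced by this formula are indeed zeros of Tn+1 and lie strictly inside (a, b):

```python
import numpy as np

a, b, n = 0.0, 1.0, 5
i = np.arange(n + 1)
t = np.cos((2 * i + 1) * np.pi / (2 * n + 2))   # Chebyshev points for [-1, 1]
x = (b + a) / 2 - (b - a) / 2 * t               # mapped to [a, b] via (11)

# T_{n+1}(t) = cos((n+1) * arccos(t)) should vanish at every point.
T = np.cos((n + 1) * np.arccos(t))
print(np.max(np.abs(T)))
print(x.min(), x.max())
```

The choice a = 0, b = 1, n = 5 matches the setting of Fig. 7; any other interval and degree behave the same way.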
With the Chebyshev points as nodes, the maximum value of the polynomial
|ωn+1 (x)| is less than one half of its value in Figs. 5 and 6. As n increases, the
improvement in the size of |ωn+1 (x)| when using the Chebyshev nodes rather
than equally-spaced points is even greater. Indeed, for polynomials of degree
20 or less, interpolation of the data measured at the Chebyshev points gives a
maximum error that is not greater than a factor of 2 larger than the smallest
possible maximum error (known as the minimax error) taken over all types of
polynomial fits to the data (not just using interpolation). However, there are
penalties:
1. Extrapolation using the polynomial interpolant based on data measured
at Chebyshev points is even more disastrous than extrapolation based on
the same number of equally-spaced data points.
2. It may be difficult to obtain the data fi measured at the Chebyshev points.
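The "less than one half" claim above can be checked directly for the n = 5, [a, b] = [0, 1] setting of Figs. 6 and 7 by comparing the maximum of |ωn+1(x)| for the two node families:

```python
import numpy as np

a, b, n = 0.0, 1.0, 5
z = np.linspace(a, b, 20001)

def max_omega(nodes):
    """Maximum of |omega_{n+1}(x)| = |prod_j (x - x_j)| over [a, b]."""
    omega = np.ones_like(z)
    for xj in nodes:
        omega *= z - xj
    return np.max(np.abs(omega))

equi = np.linspace(a, b, n + 1)
i = np.arange(n + 1)
cheb = (b + a) / 2 - (b - a) / 2 * np.cos((2 * i + 1) * np.pi / (2 * n + 2))

print(max_omega(equi), max_omega(cheb))
```

For the Chebyshev nodes the computed maximum agrees with the known value 2((b − a)/4)^(n+1), while the equally-spaced maximum (attained near the endpoints) is more than twice as large.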
Figs. 5–7 suggest that to choose pn (x) so that it will accurately approximate
all reasonable choices of f (x) on an interval [a, b], we should:
1. choose the nodes {xi}ni=0 so that, for all likely evaluation points x, min(xi) ≤
x ≤ max(xi );
2. if possible, choose the nodes so they are denser close to the endpoints of
the interval [a, b].