Function approximation
The vector space $L^2(I)$ is closed under the basic operations of pointwise addition and scalar multiplication, by the inequality,
$$(a + b)^2 \le 2(a^2 + b^2), \quad \forall a, b \ge 0, \qquad (10.2)$$
which follows from Young's inequality.
with the set $\{\phi_j\}_{j=1}^n$ a basis for $S$. For example, in a Fourier series the basis functions $\phi_j$ are trigonometric functions; in a power series they are monomials.

The question is now how to determine the coordinates $\alpha_j$ so that $f_n(x)$ is a good approximation of $f(x)$. One approach to the problem is to use the techniques of orthogonal projection previously studied for vectors in $\mathbb{R}^n$. An alternative approach is interpolation, where the $\alpha_j$ are chosen such that $f_n(x_i) = f(x_i)$ at a set of nodes $x_i$, for $i = 1, \ldots, n$. If we cannot evaluate the function $f(x)$ at arbitrary points $x$, but only have access to a set of sampled data points $\{(x_i, f_i)\}_{i=1}^m$, with $m \ge n$, we can formulate a least squares problem to determine the coordinates $\alpha_j$ that minimize the error $f_n(x_i) - f_i$ in a suitable norm.
$L^2$ projection

The $L^2$ projection $Pf$, onto the subspace $S \subset V$, defined by (10.8), of a function $f \in V$, with $V = L^2(I)$, is the orthogonal projection of $f$ on $S$, that is,
$$(f - Pf, s) = 0, \quad \forall s \in S, \qquad (10.9)$$
$$Pf(x) = \sum_{j=1}^{n} \alpha_j \phi_j(x). \qquad (10.11)$$
We note that if $\phi_i(x)$ has local support, that is $\phi_i(x) \ne 0$ only on a subinterval of $I$, then the matrix $A$ is sparse, and for $\{\phi_i\}_{i=1}^n$ an orthonormal basis, $A$ is the identity matrix, with $\alpha_j = (f, \phi_j)$.
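For illustration, the projection coefficients can be computed numerically by assembling the matrix $A$ and the right-hand side with quadrature; the following is a minimal sketch, not a definitive implementation (the names `inner` and `l2_project` are illustrative, and the composite trapezoidal rule stands in for exact integration):

```python
import numpy as np

def inner(u, v, x):
    # composite trapezoidal rule for the inner product (u, v) on the grid x
    y = u * v
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def l2_project(f, basis, a=0.0, b=1.0, nq=2001):
    """Coordinates alpha of the L2 projection Pf, from the normal
    equations A alpha = rhs, with A_ij = (phi_j, phi_i) and
    rhs_i = (f, phi_i)."""
    x = np.linspace(a, b, nq)
    Phi = [phi(x) for phi in basis]
    n = len(basis)
    A = np.array([[inner(Phi[j], Phi[i], x) for j in range(n)]
                  for i in range(n)])
    rhs = np.array([inner(f(x), Phi[i], x) for i in range(n)])
    return np.linalg.solve(A, rhs)

# Projecting a function that already lies in S reproduces its coordinates.
basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x ** 2]
alpha = l2_project(lambda x: 1.0 + 2.0 * x + 3.0 * x ** 2, basis)
```

For the monomial basis the matrix $A$ is dense and ill-conditioned; a locally supported or orthonormal basis avoids both problems, as noted above.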
Interpolation
The interpolant $\pi f \in S$ is determined by the condition that $\pi f(x_i) = f(x_i)$ for $n$ nodes $\{x_i\}_{i=1}^n$. That is,
$$f(x_i) = \pi f(x_i) = \sum_{j=1}^{n} \alpha_j \phi_j(x_i), \quad i = 1, \ldots, n, \qquad (10.12)$$
and for a nodal basis with $\phi_j(x_i) = \delta_{ij}$ the coordinates are $\alpha_j = f(x_j)$, so that
$$\pi f(x) = \sum_{j=1}^{n} \alpha_j \phi_j(x) = \sum_{j=1}^{n} f(x_j) \phi_j(x). \qquad (10.13)$$
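The interpolation conditions (10.12) form a linear system for the coordinates $\alpha_j$; a minimal sketch under these definitions (the function name `interpolant` is illustrative, here with a monomial basis):

```python
import numpy as np

def interpolant(f, nodes, basis):
    """Solve the interpolation conditions (10.12) for the coordinates
    alpha_j, and return pi_f(x) = sum_j alpha_j phi_j(x)."""
    V = np.array([[phi(xi) for phi in basis] for xi in nodes])  # V[i, j] = phi_j(x_i)
    alpha = np.linalg.solve(V, np.array([f(xi) for xi in nodes]))
    return lambda x: sum(a * phi(x) for a, phi in zip(alpha, basis))

nodes = [0.0, 0.5, 1.0]
monomials = [lambda x: 1.0, lambda x: x, lambda x: x * x]
pf = interpolant(lambda x: x * x, nodes, monomials)  # exact for x^2
```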
Regression
If we cannot evaluate the function $f(x)$ at arbitrary points, but only have access to a set of data points $\{(x_i, f_i)\}_{i=1}^m$, with $m \ge n$, we can formulate the least squares problem,
$$\min_{f_n \in S} \|f_i - f_n(x_i)\| = \min_{\{\alpha_j\}_{j=1}^n} \Big\|f_i - \sum_{j=1}^{n} \alpha_j \phi_j(x_i)\Big\|, \quad i = 1, \ldots, m. \qquad (10.14)$$
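As a sketch, for a basis of size $n$ sampled at $m > n$ points, (10.14) is an overdetermined linear system, which `numpy.linalg.lstsq` solves in the Euclidean norm (the data below are synthetic, chosen only for illustration):

```python
import numpy as np

# m = 20 sampled data points (x_i, f_i), here taken from an affine function,
# and n = 2 basis functions {1, x}: an overdetermined system B alpha ~ f
xs = np.linspace(0.0, 1.0, 20)
fi = 2.0 + 3.0 * xs
B = np.column_stack([np.ones_like(xs), xs])      # B[i, j] = phi_j(x_i)
alpha, *_ = np.linalg.lstsq(B, fi, rcond=None)   # minimizes ||fi - B alpha||_2
```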
for $p, r \in \mathcal{P}^q(I)$ and $\alpha \in \mathbb{R}$. One basis for $\mathcal{P}^q(I)$ is the set of monomials $\{x^i\}_{i=0}^q$; another is $\{(x - c)^i\}_{i=0}^q$, which gives the power series,
$$p(x) = \sum_{i=0}^{q} a_i (x - c)^i = a_0 + a_1(x - c) + \ldots + a_q(x - c)^q, \qquad (10.18)$$
$$f(x) = f(y) + f'(y)(x - y) + \frac{1}{2} f''(y)(x - y)^2 + \ldots \qquad (10.19)$$
Lagrange polynomials

For a set of nodes $\{x_i\}_{i=0}^q$, we define the Lagrange polynomials $\{\lambda_i\}_{i=0}^q$ by
$$\lambda_i(x) = \frac{(x - x_0) \cdots (x - x_{i-1})(x - x_{i+1}) \cdots (x - x_q)}{(x_i - x_0) \cdots (x_i - x_{i-1})(x_i - x_{i+1}) \cdots (x_i - x_q)} = \prod_{j \ne i} \frac{x - x_j}{x_i - x_j},$$
so that
$$\lambda_i(x_j) = \delta_{ij}, \qquad (10.20)$$
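The product formula translates directly into code; a minimal sketch (the helper name `lagrange` is illustrative):

```python
def lagrange(i, nodes):
    """The i-th Lagrange polynomial for the given nodes,
    lambda_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)."""
    def li(x):
        p = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                p *= (x - xj) / (nodes[i] - xj)
        return p
    return li

nodes = [0.0, 0.5, 1.0]
l1 = lagrange(1, nodes)   # the Lagrange polynomial for the middle node
```

The delta property $\lambda_i(x_j) = \delta_{ij}$ can be checked by evaluating `l1` at the nodes.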
for which we let the mesh $\mathcal{T}_h = \{I_i\}$ denote the set of subintervals $I_i = (x_{i-1}, x_i)$ of length $h_i = x_i - x_{i-1}$, with the mesh function,
for $i = 1, \ldots, m+1$, and $j = 0, 1$. For $V_h^{(q)}$ we need to construct continuous basis functions, for example,
$$\phi_i(x) = \begin{cases} 0, & x \notin [x_{i-1}, x_{i+1}], \\ \lambda_{i,1}(x), & x \in [x_{i-1}, x_i], \\ \lambda_{i+1,0}(x), & x \in [x_i, x_{i+1}], \end{cases} \qquad (10.31)$$
for $V_h^{(1)}$, which we also refer to as hat functions.
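A hat function can be evaluated piecewise directly from its definition; a minimal sketch (the helper name `hat` is illustrative):

```python
def hat(i, nodes):
    """Hat function phi_i: piecewise linear, phi_i(x_i) = 1,
    phi_i(x_j) = 0 for j != i, supported on [x_{i-1}, x_{i+1}]."""
    def phi(x):
        if i > 0 and nodes[i - 1] <= x <= nodes[i]:
            return (x - nodes[i - 1]) / (nodes[i] - nodes[i - 1])
        if i < len(nodes) - 1 and nodes[i] <= x <= nodes[i + 1]:
            return (nodes[i + 1] - x) / (nodes[i + 1] - nodes[i])
        return 0.0
    return phi

nodes = [0.0, 0.25, 0.5, 0.75, 1.0]
phi2 = hat(2, nodes)   # the hat function centered at x = 0.5
```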
[Figure: the local basis functions $\lambda_{i,1}(x)$ and the hat functions $\phi_i(x)$ on the subdivision $x_0 = a, x_1, \ldots, x_m, x_{m+1} = b$.]
$L^2$ projection in $V_h^{(1)}$

The $L^2$ projection of a function $f \in L^2(I)$ onto the space of continuous piecewise linear polynomials $V_h^{(1)}$, on a subdivision of the interval $I$ with $n$ nodes, is given by
$$Pf(x) = \sum_{j=1}^{n} \alpha_j \phi_j(x), \qquad (10.32)$$
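For illustration, on a uniform mesh the projection reduces to a tridiagonal linear system; the following is a sketch assuming exact element mass matrices and Simpson quadrature for the load (the function name `l2_project_p1` is illustrative):

```python
import numpy as np

def l2_project_p1(f, n_el=10, a=0.0, b=1.0):
    """L2 projection onto continuous piecewise linears on a uniform mesh:
    assemble the tridiagonal mass matrix A_ij = (phi_j, phi_i) and load
    vector b_i = (f, phi_i) element by element, then solve A alpha = b."""
    nodes = np.linspace(a, b, n_el + 1)
    h = (b - a) / n_el
    n = n_el + 1
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])  # element mass matrix
    for k in range(n_el):
        xm = 0.5 * (nodes[k] + nodes[k + 1])
        A[k:k + 2, k:k + 2] += Me
        # Simpson's rule for (f, phi_k) and (f, phi_{k+1}) on the element
        rhs[k] += h / 6.0 * (f(nodes[k]) + 2.0 * f(xm))
        rhs[k + 1] += h / 6.0 * (2.0 * f(xm) + f(nodes[k + 1]))
    return nodes, np.linalg.solve(A, rhs)

# Projecting a piecewise linear function reproduces its nodal values.
nodes, alpha = l2_project_p1(lambda x: 1.0 + 2.0 * x)
```

Since the hat basis is nodal, the coordinates $\alpha_j$ of a function already in $V_h^{(1)}$ are its nodal values.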
10.3 Exercises
Problem 34. Prove that the sum of two functions $f, g \in L^2(I)$ is a function in $L^2(I)$.
given a source term $f(x)$, and boundary conditions at the endpoints of the interval $I = [0, 1]$.
We want to find an approximate solution to the boundary value problem in the form of a continuous piecewise polynomial that satisfies the boundary conditions (11.2), that is, we seek
$$U \in V_h = \{v \in V_h^{(q)} : v(0) = v(1) = 0\}, \qquad (11.3)$$
with $R(u) = 0$ for $u = u(x)$ the solution of the boundary value problem. Our strategy is now to find an approximate solution $U \in V_h$ such that $R(U)$ is small.

We have two natural methods to find a solution $U \in V_h$ with a minimal residual: (i) the least squares method, where we seek the solution with the minimal residual measured in the $L^2$-norm, and (ii) Galerkin's method, where we seek the solution for which the residual is orthogonal to the subspace $V_h$,
$$(R(U), v) = 0, \quad \forall v \in V_h. \qquad (11.6)$$
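As a sketch of Galerkin's method, consider the model problem $-u'' = f$ on $(0, 1)$ with $u(0) = u(1) = 0$ (an assumption made for this example) and piecewise linear hat functions; after partial integration, the orthogonality condition gives a linear system with the stiffness matrix $(\phi_j', \phi_i')$ (function names are illustrative):

```python
import numpy as np

def galerkin_poisson_1d(f, n_el=8):
    """Galerkin's method for -u'' = f on (0, 1), u(0) = u(1) = 0, with
    hat functions: make the residual orthogonal to V_h, which gives
    A U = b with A_ij = (phi_j', phi_i') and b_i = (f, phi_i)."""
    nodes = np.linspace(0.0, 1.0, n_el + 1)
    h = 1.0 / n_el
    n_int = n_el - 1                      # unknowns at interior nodes
    A = np.zeros((n_int, n_int))
    b = np.zeros(n_int)
    for i in range(n_int):
        A[i, i] = 2.0 / h
        if i + 1 < n_int:
            A[i, i + 1] = A[i + 1, i] = -1.0 / h
        xi = nodes[i + 1]
        # Simpson's rule for (f, phi_i) over supp phi_i = [x_{i-1}, x_{i+1}]
        b[i] = h / 3.0 * (f(xi - 0.5 * h) + f(xi) + f(xi + 0.5 * h))
    U = np.zeros(n_el + 1)                # boundary values u(0) = u(1) = 0
    U[1:-1] = np.linalg.solve(A, b)
    return nodes, U

# For f = 1 the exact solution is u(x) = x(1 - x)/2.
nodes, U = galerkin_poisson_1d(lambda x: 1.0)
```

In this one-dimensional setting the Galerkin solution is exact at the nodes when the load vector is integrated exactly.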
for all $v \in V$.
To construct an appropriate vector space $V$ for (11.14) to be well defined, we need to extend the $L^2$ spaces to also include derivatives; the resulting spaces are referred to as Sobolev spaces. We introduce the vector space $H^1(0, 1)$, defined by,
$$H^1(0, 1) = \{v \in L^2(0, 1) : v' \in L^2(0, 1)\},$$
and the vector space that also satisfies the boundary conditions (11.2),
$$H^1_0(0, 1) = \{v \in H^1(0, 1) : v(0) = v(1) = 0\}.$$
The variational form (11.14) is now well defined for $V = H^1_0(0, 1)$, since, by the Cauchy-Schwarz inequality,
$$\int_0^1 u'(x) v'(x) \, dx \le \|u'\| \|v'\| < \infty. \qquad (11.17)$$
$$\|u - U\|_E \le \|u - v\|_E, \quad \forall v \in V_h. \qquad (11.23)$$
$$\|u - U\|_E^2 = (u - U, u - U)_E = (u - U, u - v)_E + (u - U, v - U)_E$$
$$= (u - U, u - v)_E \le \|u - U\|_E \|u - v\|_E,$$
where $(u - U, v - U)_E = 0$ by Galerkin orthogonality, since $v - U \in V_h$.
11.2 Exercises
Problem 37. Derive the variational formulation and the finite element
method for the boundary value problem
with
$$\nabla_v F = (v \cdot \nabla) F = J v = v_j \frac{\partial F_i}{\partial x_j}. \qquad (12.5)$$
where
$$\nabla \times F = \det \begin{bmatrix} e_1 & e_2 & e_3 \\ \frac{\partial}{\partial x_1} & \frac{\partial}{\partial x_2} & \frac{\partial}{\partial x_3} \\ F_1 & F_2 & F_3 \end{bmatrix} = \left( \frac{\partial F_3}{\partial x_2} - \frac{\partial F_2}{\partial x_3},\ \frac{\partial F_1}{\partial x_3} - \frac{\partial F_3}{\partial x_1},\ \frac{\partial F_2}{\partial x_1} - \frac{\partial F_1}{\partial x_2} \right).$$
The divergence can be interpreted in terms of Gauss' theorem,
$$\int_\Omega \nabla \cdot F \, dx = \int_{\partial\Omega} F \cdot n \, ds,$$
which relates the volume integral over a domain $\Omega \subset \mathbb{R}^3$ with the surface integral over the boundary with normal $n$.
Similarly, the rotation can be interpreted in terms of the Kelvin-Stokes theorem,
$$\int_\Sigma \nabla \times F \cdot ds = \int_{\partial\Sigma} F \cdot dr, \qquad (12.9)$$
which relates the surface integral of the rotation over a surface $\Sigma$ to the line integral over its boundary $\partial\Sigma$, with positive orientation defined by $dr$.
$$\Delta f = \nabla^2 f = \nabla^T \nabla f = \nabla \cdot \nabla f = \frac{\partial^2 f}{\partial x_1^2} + \ldots + \frac{\partial^2 f}{\partial x_n^2} = \frac{\partial^2 f}{\partial x_i^2}, \qquad (12.10)$$
and the Hessian $H$,
$$H = f'' = \nabla \nabla^T f = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1 \partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n \partial x_n} \end{bmatrix} = \frac{\partial^2 f}{\partial x_i \partial x_j}. \qquad (12.11)$$
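For illustration, the Hessian can be approximated, and its symmetry checked, by central finite differences (the helper name `hessian_fd` is illustrative):

```python
import numpy as np

def hessian_fd(f, x, eps=1e-4):
    """Central finite-difference approximation of the Hessian
    H_ij = d^2 f / (dx_i dx_j) at the point x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * eps ** 2)
    return H

# f(x) = x1^2 x2 has Hessian [[2 x2, 2 x1], [2 x1, 0]] at (1, 2): [[4, 2], [2, 0]]
H = hessian_fd(lambda x: x[0] ** 2 * x[1], np.array([1.0, 2.0]))
```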
$$\Delta F = \nabla^2 F = (\Delta F_1, \ldots, \Delta F_n), \qquad (12.12)$$
$$\Delta F = \nabla(\nabla \cdot F) - \nabla \times (\nabla \times F). \qquad (12.13)$$
Partial integration in $\mathbb{R}^n$

For a scalar function $f : \mathbb{R}^n \to \mathbb{R}$ and a vector valued function $F : \mathbb{R}^n \to \mathbb{R}^n$, we have the following generalization of partial integration over a domain $\Omega \subset \mathbb{R}^n$, referred to as Green's theorem,
$$(f, \nabla \cdot F)_{L^2(\Omega)} = (f, F \cdot n)_{L^2(\Gamma)} - (\nabla f, F)_{L^2(\Omega)}, \qquad (12.14)$$
with boundary $\Gamma$ and outward unit normal vector $n = n(x)$ for $x \in \Gamma$, where we use the notation,
$$(v, w)_{L^2(\Omega)} = \int_\Omega v \cdot w \, dx, \qquad (12.15)$$
for two vector valued functions $v, w$, and
$$(v, w)_{L^2(\Gamma)} = \int_\Gamma v \cdot w \, ds, \qquad (12.16)$$
for the boundary integral. For two scalar valued functions the scalar product in the integrand is replaced by the usual multiplication. With $F = \nabla g$, for $g : \mathbb{R}^n \to \mathbb{R}$ a scalar function, Green's theorem gives,
$L^p(\Omega)$ is a vector space, since (i) $\alpha f \in L^p(\Omega)$ for any $\alpha \in \mathbb{R}$, and (ii) $f + g \in L^p(\Omega)$ for $f, g \in L^p(\Omega)$, by the inequality,
$$(a + b)^p \le 2^{p-1}(a^p + b^p), \quad a, b \ge 0, \ 1 \le p < \infty, \qquad (12.22)$$
which follows from the convexity of the function $t \mapsto t^p$.
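The inequality (12.22) can be spot-checked numerically; a minimal sketch (random samples, with an illustrative helper name and a small tolerance for floating-point roundoff):

```python
import random

random.seed(0)

def convexity_bound_holds(p, a, b, tol=1e-9):
    """Check (a + b)^p <= 2^(p-1) (a^p + b^p) for a, b >= 0,
    allowing a small floating-point tolerance."""
    return (a + b) ** p <= 2.0 ** (p - 1.0) * (a ** p + b ** p) * (1.0 + tol) + tol

ok = all(
    convexity_bound_holds(p, random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
    for p in (1.0, 1.5, 2.0, 3.0, 7.0)
    for _ in range(200)
)
```

Equality holds for $a = b$, which is why the numerical check needs a tolerance.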
$L^p(\Omega)$ is a Banach space, and $L^2(\Omega)$ is a Hilbert space with the inner product (12.15), which induces the $L^2(\Omega)$-norm. In the following we let it be implicitly understood that $(\cdot, \cdot) = (\cdot, \cdot)_{L^2(\Omega)}$ and $\|\cdot\| = \|\cdot\|_{L^2(\Omega)}$.
Sobolev spaces

To construct appropriate vector spaces for variational formulations of partial differential equations, we need to extend the spaces $L^2(\Omega)$ to also include derivatives. The Sobolev space $H^1(\Omega)$ is defined by,
$$H^1(\Omega) = \{v \in L^2(\Omega) : \nabla v \in L^2(\Omega)\}, \qquad (12.23)$$
and we define
$$H^1_0(\Omega) = \{v \in H^1(\Omega) : v(x) = 0, \ \forall x \in \Gamma\}, \qquad (12.24)$$
to be the space of functions in $H^1(\Omega)$ that are zero on the boundary $\Gamma$.
$$\nabla u \cdot n = g_N, \quad x \in \Gamma, \qquad (12.27)$$
$$-\Delta u = f, \quad x \in \Omega, \qquad (12.28)$$
$$u = 0, \quad x \in \Gamma, \qquad (12.29)$$
for all $v \in V$, since the boundary term vanishes as the test function is an element of the vector space $H^1_0(\Omega)$.
$$-\Delta u = f, \quad x \in \Omega, \qquad (12.31)$$
$$\nabla u \cdot n = 0, \quad x \in \Gamma, \qquad (12.32)$$
for all $v \in V$, since the boundary term vanishes by the Neumann boundary condition. Thus the variational forms (12.30) and (12.33) are similar, with the only difference being the choice of test and trial spaces.
However, it turns out that the variational problem (12.33) has no unique solution, since for any solution $u \in V$, also $u + C$ is a solution, with $C \in \mathbb{R}$
$$-\Delta u = f, \quad x \in \Omega, \qquad (12.35)$$
$$u(x) = g_D, \quad x \in \Gamma_D, \qquad (12.36)$$
$$\nabla u \cdot n = g_N, \quad x \in \Gamma_N, \qquad (12.37)$$
for all $v \in V_0$.
The Dirichlet boundary condition is enforced through the trial space, and is referred to as an essential boundary condition, whereas the Neumann boundary condition is enforced through the variational form, and is referred to as a natural boundary condition.
$$B(u) = g, \quad x \in \Gamma. \qquad (12.42)$$
$$a(u - U, v) = 0, \quad \forall v \in V_h, \qquad (12.52)$$
we have that
$$\|u - U\|_E^2 = a(u - U, u - U) = a(u - U, u - v) \le \|u - U\|_E \|u - v\|_E,$$
so that
$$\|u - U\|_E \le \|u - v\|_E, \quad \forall v \in V_h. \qquad (12.53)$$
Specifically, by choosing $v = \pi_h u$, we obtain the a priori error estimate
$$M(u) - M(U) = a(u, \varphi) - a(U, \varphi) = L(\varphi) - a(U, \varphi) = r(U, \varphi), \qquad (12.57)$$
$$E_K \approx r_K(U, \varphi). \qquad (12.61)$$
12.5 Exercises
Problem 38. Derive the variational formulation (12.39), and formulate
the finite element method.