
VŠB – Technical University of Ostrava
Faculty of Electrical Engineering and Computer Science
Department of Applied Mathematics

Optimal Shape Design in Magnetostatics


Ph.D. Thesis

Author:

Ing. Dalibor Lukáš

Supervisor: Prof. RNDr. Zdeněk Dostál, CSc.


Department of Applied Mathematics,
VŠB – Technical University of Ostrava
Referees:

Prof. RNDr. Jaroslav Haslinger, DrSc.


Faculty of Mathematics and Physics,
Charles University Prague
Prof. RNDr. Michal Křížek, DrSc.
Mathematical Institute,
The Academy of Sciences of the Czech Republic
o.Univ.Prof. Dipl.-Ing. Dr. Ulrich Langer
Institute of Computational Mathematics,
Johannes Kepler University Linz

Submitted in September 2003


VŠB – Technical University of Ostrava
Faculty of Electrical Engineering and Computer Science
Department of Applied Mathematics

Optimal Shape Design in Magnetostatics


Ph.D. Thesis

September 2003

Ing. Dalibor Lukáš


Dedicated to Šárka


Abstract
This thesis deals with theoretical and computational aspects of three-dimensional optimal shape design problems that are governed by linear magnetostatics. The aim is to present a complete process of mathematical modelling in a well-balanced way. We step-by-step visit the world of physics, functional analysis, computational mathematics, and we end up with real-life applications. Nevertheless, the main emphasis is put on an efficient implementation of numerical methods for shape optimization, which exploits an effective evaluation of gradients by the adjoint method and a recently introduced multilevel optimization approach. We also emphasize numerical experiments with real-life problems of complex three-dimensional geometries.

We begin with a description of the electromagnetic phenomena by Maxwell's equations and derive their three-dimensional (3d) and two-dimensional (2d) magnetostatic cases with the linear constitutive relation between the magnetic flux density and the magnetic field strength. Then we develop a general theory that covers both 2d and 3d optimal interface-shape design problems that are constrained by a second-order linear elliptic boundary vector-value problem (BVP). First we pose a weak formulation of the BVP with the homogeneous Dirichlet boundary condition. Whenever the kernel of the BVP operator is not trivial, we employ a regularization technique such that the regularized solutions converge to the true one. The continuous weak formulation of the abstract BVP is discretized by the first-order finite element method on triangles and tetrahedra, respectively. We set up an abstract continuous shape optimization problem, the state problem of which involves one or more BVPs that differ only in the right-hand sides, i.e., different current excitations in the case of magnetostatics. The design boundary is an interface between two materials, rather than a part of the computational domain boundary, as is usual in optimal shape design for mechanics. We prove the existence of an optimal shape by checking the continuity of the cost functional and the compactness of the set of admissible shapes. Then we discretize the continuous optimization problem by the finite element method and prove the existence of the approximate solutions. The main theoretical result of this thesis is a proof of the convergence of the approximate optimized solutions to an optimal solution of the continuous problem, where we also involve an inner approximation of the original computational domain with a Lipschitz boundary by a polyhedral (in the 3d case) or polygonal (in the 2d case) domain. Throughout the abstract theory we introduce a number of assumptions that are checked for concrete applications afterwards. These assumptions show the scope of the theory.

Concerning the computational aspects of optimization, we use the sequential quadratic programming method with a successive approximation of the Hessian. To justify its use, we verify the smoothness of both the discretized cost and constraint functionals. Then we focus on the calculation of gradients by means of the adjoint method and derive an efficient algorithm for it, including its Matlab implementation enclosed on the CD. We introduce a new multilevel optimization approach as a possible adaptive optimization method.

Finally, we end up with physics again. We present two real-life applications with rather complex 3d geometries. After some motivation, we describe the optimization problem in terms of verifying the theoretical assumptions, and we give numerical results. We present the speedup of the adjoint method compared to numerical differentiation, and of the multilevel approach compared to the classical optimization. One optimized design was manufactured, we are provided with measurements and, at the end, we discuss real improvements of the cost functional.

Acknowledgement
First of all I want to express my deep gratitude to my supervisor Prof. RNDr. Zdeněk Dostál, CSc., the head of the Department of Applied Mathematics at the Technical University of Ostrava (TUO), who introduced me to the magic of numerical mathematics and its applications in electromagnetism. I thank him for his kindness and patience during the more than six years of my undergraduate and doctoral studies in which he has been supervising me.

Special thanks are devoted to my very good friend, teacher, and colleague Doc. RNDr. Jiří Bouchala, Ph.D., who is an associate professor at the Department of Applied Mathematics, TUO. Jirka convinced me that mathematics is beautiful in itself, even without applications. I thank all the other friends, colleagues, and teachers of mine from the Department of Applied Mathematics, TUO, especially Mgr. Vít Vondrák, Ph.D. and Mgr. Bohumil Krajc, Ph.D.

I am much obliged to o.Univ.Prof. Dipl.-Ing. Dr. Ulrich Langer, who is the head of the Institute of Computational Mathematics and the co-speaker of the Special Research Initiative SFB F013 "Numerical and Symbolic Scientific Computing", both associated with the Johannes Kepler University Linz in Austria. Professor Langer offered me a Ph.D. position within the research project SFB F013, where I spent one year and where I have been currently working again. Here I have made big progress in my research work and I have learned about efficient methods for solving large discretized direct simulation problems based on finite element and/or boundary element discretizations. I am also much indebted to my current and former colleagues; especially, I would like to mention Dr. Dipl.-Ing. Joachim Schöberl, Dr. Dipl.-Ing. Michael Kuhn, MSc., and Dr. Dipl.-Ing. Wolfram Mühlhuber.

I further thank Prof. Ing. Jaromír Pištora, CSc., who is with the Institute of Physics, TUO, and is the head of the Department of Education at the Faculty of Electrical Engineering. Professor Pištora asked me to cooperate on the development of a new generation of electromagnets used in the research on magneto-optic effects. I also thank the other colleagues from the Institute of Physics for the fruitful cooperation, namely Dr. Mgr. Kamil Postava, Dr. RNDr. Dalibor Ciprian, Dr. Ing. Michal Lesňák, Ing. Martin Foldyna, and Ing. Igor Kopřiva.

I very much appreciate the possibility to consult my work with Prof. RNDr. Jaroslav Haslinger, DrSc., who is a world leading expert in optimal shape design. I thank very much Prof. RNDr. Michal Křížek, DrSc. from the Mathematical Institute of the Czech Academy of Sciences for the fact that I could learn a lot during several consultations in his office. I very much admire the enthusiasm that Prof. Křížek has devoted to mathematics.

At the end let me express my deep gratitude and love to my parents and to my girlfriend Šárka, with whom my life is complete, no matter what the career is about.
This work has been supported by the Austrian Science Fund FWF within the SFB Numerical and
Symbolic Computing under the grant SFB F013, by the Czech Ministry of Education under the
research project CEZ: J17/98:272400019, and by the Grant Agency of the Czech Republic under
the grant 105/99/1698.

Notation
N   nonnegative integers
R   real numbers
i   imaginary unit
C   complex plane
C_1, ..., C_16   fixed constants

Abbreviations

1d     one-dimensional
2d     two-dimensional
3d     three-dimensional
PDE    partial differential equation
BVP    boundary (vector-)value problem
FEM    finite element method
BFGS   update formula for the Hessian matrix named after Broyden, Fletcher, Goldfarb, and Shanno
SQP    sequential quadratic programming
AD     automatic differentiation

Chapter 2

B        magnetic flux density (p. 10)
H        magnetic field (p. 10)
μ        permeability (p. 10)
J        direct electric current density (p. 10)
u        magnetic vector potential (p. 10)
Ω        three-dimensional computational domain (p. 10)
Ω_2d     two-dimensional reduced computational domain, the cross-section of Ω with the plane x3 = 0 (p. 11)
J        two-dimensional scalar direct electric current density (p. 11)
u        two-dimensional scalar magnetic potential (p. 11)

Chapter 3

‖·‖_U     norm in the normed linear vector space U (p. 14)
V/U       quotient space (p. 14)
Ker(L)    kernel of the linear vector operator L (p. 15)
U'        dual space to the normed linear vector space U (p. 15)
⟨·,·⟩     duality pairing (p. 15)
(·,·)     scalar product (p. 15)

U^⊥           orthogonal complement to the space U (p. 16)
H = U ⊕ U^⊥   orthogonal decomposition of the Hilbert space H (p. 17)
R^n           Euclidean space consisting of n-dimensional real vectors (p. 17)
A^T           transposed matrix (p. 18)
det(A)        determinant of the matrix A (p. 18)
Ã             adjoint matrix (p. 18)
A^{-1}        inverse matrix (p. 18)
m             dimension of the computational domain Ω, m ∈ {2, 3} (p. 19)
Ω             domain, i.e., open, bounded, and connected subset of R^m (p. 19)
Ω̄             closure of the domain Ω (p. 19)
∂Ω            boundary of the domain Ω (p. 19)
n             unit outer normal vector to the boundary ∂Ω (p. 19)
C(Ω̄)          space of functions continuous over Ω̄ (p. 20)
C^k(Ω̄)        space of functions which are continuous up to their kth partial derivatives over Ω̄, k ∈ N (p. 20)
C^∞(Ω̄)        space of infinitely differentiable functions over Ω̄ (p. 21)
supp v        support of the function v (p. 21)
C_0^∞(Ω)      space of infinitely differentiable functions with a compact support in Ω (p. 21)
C^{0,1}(Ω̄)    space of Lipschitz continuous functions over Ω̄ (p. 21)
L             set of all the domains with Lipschitz continuous boundaries (p. 21)
div           divergence operator (p. 22)
grad          gradient operator (p. 22)
curl          curl operator (p. 22)
n × u         cross product, tangential component of the function u along the boundary ∂Ω (p. 23)
B             linear vector first-order differential operator, B : [C^1(Ω̄)]^{k1} → [C(Ω̄)]^{k2}, k1, k2 ∈ N (p. 23)
B*            adjoint operator related to B by Green's theorem, B* : [C^1(Ω̄)]^{k2} → [C(Ω̄)]^{k1} (p. 23)
γ             trace operator related to B by Green's theorem, γ : [C(Ω̄)]^{k1} → [C(∂Ω)]^{k2} (p. 23)
L^p(Ω)        Lebesgue space of measurable functions defined over Ω for which the Lebesgue integral of their pth power is finite, p ∈ [1, ∞) (p. 24)
L^∞(Ω)        Lebesgue space of measurable essentially bounded functions over Ω (p. 24)
meas(Ω)       Lebesgue measure of the domain Ω (p. 24)
a.e.          almost everywhere (p. 24)
D^α u         the αth generalized derivative of the function u, α is a multi-index (p. 25)
H^k(Ω)        Sobolev space of functions whose generalized derivatives up to the kth order belong to L^p(Ω) (p. 25)
(·,·)_{k,Ω}   scalar product in H^k(Ω) (p. 25)
‖·‖_{k,Ω}     norm in H^k(Ω) (p. 25)
|·|_{k,Ω}     seminorm in H^k(Ω) (p. 25)
[H^k(Ω)]^n    Cartesian product of Sobolev spaces, n ∈ N (p. 25)
(·,·)_{n,k,Ω} scalar product in [H^k(Ω)]^n (p. 25)
H^1_0(Ω)      space of functions from H^1(Ω) whose traces vanish along ∂Ω (p. 26)

H^{1/2}(∂Ω)   space of traces of all functions from H^1(Ω) (p. 26)
H^{-1/2}(∂Ω)  dual space to H^{1/2}(∂Ω) (p. 26)
H(B; Ω)       space of functions from [L^2(Ω)]^{k1} whose generalized operator B (gradient, divergence, curl, etc.) is in [L^2(Ω)]^{k2} (p. 30)
(·,·)_{B,Ω}   scalar product in H(B; Ω) (p. 30)
‖·‖_{B,Ω}     norm in H(B; Ω) (p. 31)
|·|_{B,Ω}     seminorm in H(B; Ω) (p. 31)
H_0(B; Ω)     space of functions from H(B; Ω) whose trace vanishes along ∂Ω (p. 31)
Ker(B; Ω)     space of functions from H_0(B; Ω) which belong to the kernel of B (p. 31)
H_{0,⊥}(B; Ω) space of functions from H_0(B; Ω) that are orthogonal to Ker(B; Ω) (p. 31)
(S)           strong formulation of an abstract linear elliptic boundary vector-value problem (p. 32)
D             matrix function of material coefficients in (S) (p. 32)
f             vector function of the right-hand side in (S) (p. 32)
a(·,·)        bilinear form in (W) (p. 33)
f(·)          linear functional in (W) (p. 33)
(W)           weak formulation of an abstract linear elliptic boundary vector-value problem (p. 33)
u             solution to (W) (p. 33)
ε             positive regularization parameter that regularizes the non-ellipticity of the bilinear form a(·,·) (p. 34)
a_ε(·,·)      regularized bilinear form in (W_ε) (p. 34)
(W_ε)         regularized weak formulation (p. 34)
u_ε           solution to (W_ε) (p. 34)

Chapter 4

h             positive discretization parameter (p. 39)
V_h           finite dimensional subspace of H_0(B; Ω) (p. 39)
n             dimension of the space V_h (p. 39)
D_h           discretization of the matrix function D (p. 39)
a_h(·,·)      discretization of the bilinear form a_ε(·,·) (p. 39)
f_h           discretization of the right-hand side f (p. 39)
f_h(·)        discretization of the linear functional f(·) (p. 39)
(W_h)         Galerkin discretization of the problem (W_ε) (p. 40)
u_h           solution to (W_h) (p. 40)
A_n           system matrix that arises from the discretized bilinear form a_h(·,·) (p. 42)
f_n           right-hand side vector that arises from the discretized linear functional f_h(·) (p. 42)
u_n           solution vector (corresponds to u_h) of the arising linear system (p. 42)
Ω_h           polyhedral subdomain that approximates Ω from the interior (p. 42)
n_h           number of finite elements (p. 42)
K^{e_i}       domain (triangle or tetrahedron) of the ith element (p. 42)
T_h           discretization (e.g., triangulation) of Ω_h into elements (p. 43)
x_h           block vector of all the discretization nodes (p. 43)
n_{x_h}       number of the discretization nodes (p. 43)

x_{h,i}       coordinates of the ith discretization node (p. 43)
e_i           the ith finite element (p. 43)
P^{e_i}       finite element space of the ith element (p. 43)
n_e           number of local degrees of freedom (p. 43)
·_j^{e_i}     the jth local degree of freedom of the ith element (p. 43)
·^{e_i}       set of all the degrees of freedom of the ith element (p. 43)
E_h           set of all the finite elements (p. 43)
·_i^h         the ith global degree of freedom (p. 43)
·^h           set of all the n global degrees of freedom (p. 43)
G^{e_i}       mapping from local to global degrees of freedom for the ith element, G^{e_i} : {1, ..., n_e} → {1, ..., n} (p. 43)
δ_{i,j}       Kronecker's symbol (p. 44)
·_j^{e_i}     the jth local shape (base) function of the ith element (p. 44)
·_j^h         the jth global shape (base) function (p. 44)
P_h           global finite element space (p. 44)
I_0^h         set of indices of those global degrees of freedom that determine the trace along ∂Ω_h (p. 45)
V_h           the finite element space (p. 45)
(W_ε(Ω))      the problem (W_ε) for varying computational domain Ω (p. 45)
u_ε(Ω)        solution to (W_ε(Ω)) (p. 45)
D^{e_i}       the coefficient matrix D_h restricted to the ith element (p. 45)
f^{e_i}       the right-hand side vector f_h restricted to the ith element (p. 45)
(W_h(Ω_h))    finite element discretization of the problem (W_ε(Ω_h)) (p. 45)
u_h(Ω_h)      solution to (W_h(Ω_h)) (p. 45)
E_i^h         set of the elements neighbouring with the ith element (p. 46)
a^{e_i}(·,·)  contribution to the bilinear form a_h(·,·) from the ith element (p. 46)
f^{e_i}(·)    contribution to the linear functional f_h(·) from the ith element (p. 46)
x^{e_i}       block vector of all the corners of the ith element (p. 46)
x_j^{e_i}     coordinates of the jth corner of the ith element (p. 46)
H^{e_i}       mapping from the element nodal indices to the global nodal indices, H^{e_i} : {1, ..., m+1} → {1, ..., n_{x_h}} (p. 46)
K^r           reference element (p. 47)
x^r           block vector of all the reference element nodes (p. 47)
x_i^r         the ith corner of the reference element (p. 47)
R^{e_i}       linear mapping, and the related matrix, from the reference element to the ith element, R^{e_i} : K^r → K^{e_i} (p. 47)
S^{e_i}       linear mapping, and the related matrix, of finite element functions defined over K^r to the ones defined over K^{e_i}, S^{e_i} : P^r → P^{e_i} (p. 48)
S_B^{e_i}     linear mapping, and the related matrix, of the finite element functions defined over K^r to the ones defined over K^{e_i} under the operator B, S_B^{e_i} : [L^2(K^r)]^{k2} → [L^2(K^{e_i})]^{k2} (p. 48)
B^{n,e_i}     elementwise constant vector of the operator B applied to the solution u_h (p. 50)
B^n           block vector of the elementwise constant vectors B^{n,e_i} (p. 50)
h^{e_i}       discretization parameter associated to the ith element (p. 50)
h̄             the coarsest possible discretization parameter (p. 50)

X_h           linear extension operator that extends functions by zero, X_h : [L^2(Ω_h)]^k → [L^2(Ω)]^k, k ∈ N (p. 51)
X_0(B; Ω; Ω_h)  space of functions extended from H_0(B; Ω_h) by X_h (p. 51)
Ω_h ↗ Ω       approximation of Ω by polygonal domains Ω_h from the interior (p. 51)
χ^{e_i}       characteristic function of the ith element (p. 52)
π^{e_i}       interpolation operator associated to the ith element, π^{e_i} : C^1(K^{e_i}) → P^{e_i} (p. 52)
π_h           global interpolation operator, π_h : C^1(Ω̄_h) → P_h (p. 52)
π_{h,0}       global interpolation operator, π_{h,0} : C^1(Ω̄_h) → H_0(B; Ω_h) (p. 52)
·_h           the most outer layer of finite elements (p. 53)

Chapter 5

·             nonempty polyhedral (m-1)-dimensional domain (p. 68)
α             shape, α ∈ C(·) (p. 68)
α_l, α_u      lower and upper box constraints, α_l, α_u ∈ R (p. 68)
U             set of admissible shapes (p. 68)
·             uniform convergence of shapes in U (p. 68)
n             number of design parameters (p. 68)
·             set of admissible design parameters, a subset of R^n (p. 68)
F             parameterization of the admissible shapes (p. 68)
Ω_0(α), Ω_1(α)  decomposition of Ω controlled by the shape α (p. 68)
graph(α)      graph of the shape α (p. 69)
D_α           material matrix function controlled by the shape α (p. 69)
D_0, D_1      constant material matrices for the domains Ω_0(α), Ω_1(α) (p. 69)
a_α(·,·)      the bilinear form a(·,·) controlled by the shape α (p. 69)
n_v           number of variations of the right-hand side (p. 69)
v             state index within the multistate problem, v ∈ {1, ..., n_v} (p. 69)
f^v           one of the n_v right-hand side vectors (p. 69)
f^v(·)        one of the n_v linear functionals that corresponds to f^v (p. 69)
(W^v(α))      multistate problem controlled by the shape α (p. 70)
u^v(α)        solution to the multistate problem (W^v(α)) (p. 70)
I             cost functional, I : U × [[L^2(Ω)]^{k2}]^{n_v} → R (p. 71)
J             cost functional, J : U → R (p. 71)
(P)           continuous setting of the shape optimization problem (p. 72)
·             solution to (P) (p. 72)
J̃             parameterized cost functional (p. 72)
(P̃)           continuous setting of the shape optimization problem solved for design parameters (p. 72)
p             solution to (P̃) (p. 72)
a_{ε,α}(·,·)  the regularized bilinear form a_ε(·,·) controlled by the shape α (p. 72)
(W_ε^v(α))    the multistate problem (W^v(α)) regularized by the regularization parameter ε (p. 72)
J_ε           regularized cost functional (p. 73)
(P_ε)         regularized setting of the shape optimization problem (p. 73)
·             solution to (P_ε) (p. 73)
J̃_ε           regularized and parameterized cost functional (p. 74)

(P̃_ε)         regularized setting of the shape optimization problem solved for design parameters (p. 74)
p_ε           solution to (P̃_ε) (p. 74)
n_h           number of elements in the discretization of the shape domain (p. 74)
·_i^h         the ith element in the discretization of the shape domain (p. 74)
T_h           discretization of the shape domain (p. 74)
P^1(T_h)      space of continuous functions that are linear over each element of T_h (p. 74)
x_{h,j}       the jth corner of the ith element in the discretization of the shape domain (p. 74)
n_{x_h}       number of nodes in the discretization of the shape domain (p. 74)
x_{h,j}       the jth node in the discretization of the shape domain (p. 74)
U_h           discretized set of admissible shapes (p. 74)
α_h           discretized shape, α_h ∈ U_h (p. 74)
π_h           interpolation operator, π_h : U → P^1(T_h) (p. 75)
Ω_{h,0}(α_h), Ω_{h,1}(α_h)  decomposition of Ω_h controlled by the discretized shape α_h (p. 75)
T_h(α_h)      discretization of Ω_h controlled by the discretized shape α_h (p. 75)
(W_{ε,v,h}(α_h))  finite element discretization of the regularized multistate problem (W_ε^v(α_h)) controlled by the discretized shape α_h (p. 76)
u_{ε,v,h}(α_h)  solution to (W_{ε,v,h}(α_h)) (p. 76)
a_{h,α_h}(·,·)  the discretized and regularized bilinear form a_h(·,·) controlled by the discretized shape α_h (p. 76)
D_h^{α_h}     the discretized material matrix function D_h controlled by the discretized shape α_h (p. 76)
f_{v,h}(·)    the discretized multistate linear functional (p. 76)
J_{ε,h}       the discretized and regularized cost functional, J_{ε,h} : U_h → R (p. 80)
(P_{ε,h})     discretization of the regularized shape optimization problem (P_ε) (p. 80)
·             solution to (P_{ε,h}) (p. 80)
J̃_{ε,h}       discretization of the regularized and parameterized cost functional J̃_ε (p. 81)
(P̃_{ε,h})     discretization of the regularized setting (P̃_ε) of the shape optimization problem solved for design parameters (p. 81)

Chapter 6

n             number of constraints (p. 84)
·             constraint function, · : R^n → R^n (p. 84)
n_h           number of shape nodes, n_h := n_{x_h} (p. 84)
α_h           design-to-shape mapping, α_h : R^n → R^{n_h} (p. 84)
x_h           shape-to-mesh mapping, x_h : R^{n_h} → R^{m n_{x_h}} (p. 84)
x_{h,0}       block vector of all the initial grid nodes, x_{h,0} ∈ R^{m n_{x_h}} (p. 84)
Δx_h          block vector of all the grid displacements between the current and the initial grid, Δx_h ∈ R^{m n_{x_h}} (p. 84)
M_h           matrix that identically maps the shape displacements α_h onto the corresponding grid nodal coordinates x_h, M_h : R^{n_h} → R^{m n_{x_h}} (p. 84)
K_h(x_{h,0})  stiffness matrix for the initial grid x_{h,0} of the auxiliary elasticity (shape-to-mesh) problem (p. 84)
b_h(α_h)      right-hand side vector involving the inhomogeneous Dirichlet condition α_h of the auxiliary elasticity (shape-to-mesh) problem (p. 84)

f^{v,n}(x_h)      right-hand side vector that arises from the discretized linear functional f_{v,h}(·) for the vth state (p. 85)
u^{v,n}(x_h)      solution vector (corresponding to u_{v,h}(α_h)) of the arising vth linear system (p. 85)
u^{v,n,e_i}(x_h)  the solution vector u^{v,n}(x_h) restricted to the ith element (p. 85)
B^{v,n,e_i}(x_h)  elementwise constant vector of the operator B applied to the solution u_{v,h}(x_h) (p. 85)
B^{v,n}(x_h)      block vector of the elementwise constant vectors B^{v,n,e_i}(x_h) (p. 85)
I                 revisited discretized cost functional, I : R^{n_h} × R^{m n_{x_h}} × [[R^2]^{n_h}]^{n_v} → R (p. 86)
(QP)              quadratic programming problem (p. 90)
(QP1(p_0))        quadratic programming subproblem for the line search approach (p. 90)
Hess              Hessian, the matrix of all the second partial derivatives of a scalar function (p. 90)
Grad              gradient of a vector function whose columns are the gradients of the particular components of the function (p. 90)
(LS(p_0, s_QP))   line search problem (p. 90)
(QP2(p_0, d))     quadratic programming subproblem for the trust region method (p. 91)
H_k               the kth successive approximation of the Hessian, k ∈ N (p. 92)
G                 matrix involving the sensitivity of the multistate system upon the grid displacements (p. 96)

Chapter 7

MC            Maltese Cross electromagnet (p. 105)
ORing         O-ring electromagnet (p. 105)
Ω_m           magnetization area (p. 106)
Ω_yoke        domain occupied by the ferromagnetic yoke (p. 108)
Ω_west^p, ... domains occupied by the ferromagnetic poles (p. 108)
Ω_west^c, ... domains occupied by the coils which complete the related poles (p. 108)
x_{α,i,j}     the (i,j)th node in the (tensor product) regular discretization of the 2d shape domain (p. 111)
p_{i,j}       design parameter related to the node x_{α,i,j}, a component of p (p. 111)
·_i^n         Bernstein polynomial, ·_i^n : [0,1] → R, n, i ∈ N, i ≤ n (p. 111)
I             direct electric current (p. 112)
n_I           number of turns (p. 112)
S_c           area of the coil cross-section (p. 112)
J_v           current density for the current variation v (p. 112)
·             the part of the cost functional that measures the homogeneity, [L^2(Ω_m)]^m → R, m ∈ {2, 3} (p. 114)
·^v           the part of the cost functional that, for the vth state problem, penalizes the magnetic field being below the minimal required magnitude, [L^2(Ω_m)]^m → R, m ∈ {2, 3} (p. 114)
·_v           penalty for the vth penalization term (p. 114)
B^{avg,v}     average normal component (to the magnetization plane) of the magnetic field of the vth state problem, B^{avg,v} : [L^2(Ω_m)]^m → R, m ∈ {2, 3} (p. 114)
n_v^m         unit outer normal vector to the vth magnetization plane (p. 114)
B_min^{avg,v} minimal required magnitude of the magnetic field of the vth state problem (p. 114)
·_h           discretization of the homogeneity part of the cost functional, ·_h : R^{m n_h} → R (p. 116)
·_{v,h}       discretization of ·^v, ·_{v,h} : R^{m n_h} → R (p. 116)
B^{avg,v,n}   discretization of B^{avg,v} (p. 116)
·             design interface (p. 117)
x_{α,i}       the ith node in the regular discretization of the 1d shape domain (p. 118)
p_i           design parameter related to the node x_{α,i}, a component of p (p. 118)

Contents

Abstract   v
Acknowledgement   vii
Notation   ix
Contents   xvii

1  Introduction   1
   1.1  General aspects of optimization   2
        1.1.1  Optimization problems: Classification and connections   2
        1.1.2  Optimization methods   3
        1.1.3  Iterative methods for linear systems of equations   3
        1.1.4  Commercial versus academic software tools   4
   1.2  Optimal shape design   5
   1.3  Computational electromagnetism   6
   1.4  Structure of the thesis   6

2  Mathematical modelling in magnetostatics   9
   2.1  Maxwell's equations   9
   2.2  Three-dimensional linear magnetostatics   10
   2.3  Two-dimensional linear magnetostatics   11

3  Abstract boundary vector-value problems   13
   3.1  Preliminaries from linear functional analysis   13
        3.1.1  Normed linear vector spaces   13
        3.1.2  Linear operators   14
        3.1.3  Hilbert spaces   15
        3.1.4  Linear algebra   17
   3.2  Preliminaries from real analysis   19
        3.2.1  Continuous function spaces   20
        3.2.2  Some fundamental theorems   21
   3.3  Hilbert function spaces   24
        3.3.1  Lebesgue spaces   24
        3.3.2  Sobolev spaces   25
        3.3.3  The space H(grad)   26
        3.3.4  The space H(curl)   28
        3.3.5  The space H(div)   29
        3.3.6  The abstract space H(B)   30
   3.4  Weak formulations of boundary vector-value problems   32
        3.4.1  A regularized formulation in H0(B)   34
        3.4.2  A weak formulation of three-dimensional linear magnetostatics   36
        3.4.3  A weak formulation of two-dimensional linear magnetostatics   37

4  Finite element method   39
   4.1  The concept of the method   39
        4.1.1  Galerkin approximation   39
        4.1.2  Finite element method   42
        4.1.3  Discretization of the domain   42
        4.1.4  Space of finite elements   43
        4.1.5  Finite element discretization of the weak formulation   45
   4.2  Assembling finite elements   46
        4.2.1  Reference element   46
        4.2.2  BDB integrators   48
        4.2.3  The algorithm   49
   4.3  Approximation properties   50
        4.3.1  Approximation of the domain by polyhedra   51
        4.3.2  A priori error estimate   52
        4.3.3  Regular discretizations   52
        4.3.4  Convergence of the finite element method   54
   4.4  Finite elements for magnetostatics   57
        4.4.1  Linear Lagrange elements on triangles   58
        4.4.2  Linear Nédélec elements on tetrahedra   62

5  Abstract optimal shape design problem   67
   5.1  A fundamental theorem   67
   5.2  Continuous setting   68
        5.2.1  Admissible shapes   68
        5.2.2  Multistate problem   69
        5.2.3  Shape optimization problem   71
   5.3  Regularized setting   72
   5.4  Discretized setting   74
        5.4.1  Discretized set of admissible shapes   74
        5.4.2  Discretized multistate problem   75
        5.4.3  Discretized optimization problem   80

6  Numerical methods for shape optimization   83
   6.1  The discretized optimization problem revisited   83
        6.1.1  Constraint function   84
        6.1.2  Design-to-shape mapping   84
        6.1.3  Shape-to-mesh mapping   84
        6.1.4  Multistate problem   85
        6.1.5  Cost functional   86
        6.1.6  Smoothness of the cost functional   87
   6.2  Newton-type optimization methods   89
        6.2.1  Quadratic programming subproblem   90
        6.2.2  Sequential quadratic programming   91
   6.3  The first-order sensitivity analysis methods   92
        6.3.1  Sensitivities of the cost and constraint functions   93
        6.3.2  State sensitivity   94
        6.3.3  Semianalytical methods   96
        6.3.4  Adjoint method   98
        6.3.5  An object-oriented software library   98
        6.3.6  A note on using the automatic differentiation   100
   6.4  Multilevel optimization approach   101

7  An application and numerical experiments   105
   7.1  A physical problem   105
   7.2  Three-dimensional mathematical setting   108
        7.2.1  Geometries of the electromagnets   108
        7.2.2  Set of admissible shapes   109
        7.2.3  Continuous multistate problem   112
        7.2.4  Continuous shape optimization problem   113
        7.2.5  Regularization and finite element discretization   115
   7.3  Two-dimensional mathematical setting   117
   7.4  Numerical results   119
        7.4.1  Testing the multilevel approach   120
        7.4.2  Testing the adjoint method   121
   7.5  Manufacture and measurements   124

Conclusion   127

Bibliography   129

Curriculum vitae   143

Chapter 1

Introduction
The dynamic progress in computer technology nowadays has made powerful computers cheap. This has been influencing the development of numerical methods. Many commercial as well as academic simulation software tools are available for a large variety of problems. Computer simulations have replaced prototyping. A usual picture is that developers in a company model a new product on a computer, do some calculations, and think about which parameters to shift, and how, to achieve better properties of the product. The ever increasing standard of technologies brings together experts from different areas. Developers' work is now much more interdisciplinary. It involves
- experts in the area of main interest, e.g., engineers, physicists, medics, economists, etc.,
- theoretical mathematicians who introduce correct theories that can be used for mathematical modelling,
- numerical analysts who design efficient numerical methods and analyze their properties, e.g., speedup, convergence rate, etc.,
- and computer scientists who effectively implement the methods on a proper platform.
People who are experienced in more areas are especially welcome to coordinate the design process.

As far as the direct simulation is fast enough, it is straightforward to automate also the synthesis (design) process. To this end, a developer has to formulate exactly
- the objective criterion saying which design is better,
- the design parameters that can be changed, including their possible limit values,
- and some additional constraining criteria that the product must satisfy.
The objective criterion (the optimization goal) might be the minimal weight, the maximal output power, the minimal cost, the minimal loss, etc. The design parameters are, for example, the size of the product, the microstructure of the used material, or the shape of the product. We might additionally require that the product must not exceed a given volume or weight, or that it must be robust, e.g., stiff enough. Once we know these exactly, we have formulated an optimization problem that can be solved automatically.


1.1 General aspects of optimization

We can optimize very disparate systems, for instance, maximize the profit of a market, minimize the pollution of a forest, minimize petrol consumption when driving a car, control a robot in an optimal way, etc. Optimal shape design is only a small area within the general optimization context. Besides, we can mention optimal control, optimal sizing, thickness optimization, topology optimization, optimization in graphs, and so forth. Each class of optimization problems has a structure of its own. However, a direct simulation (the state problem) is always involved, and the inverse process (the synthesis) plays around with some parameters and re-solves the direct problem until the system behaves in a required way.

1.1.1 Optimization problems: Classification and connections

Optimization can be seen in the wider context of inverse problems, in which we know the behaviour of a system, usually from physical measurements, and from this knowledge we are looking for the structure of the system and/or for the distribution of sources. A typical inverse problem is computer tomography in medicine. An introductory textbook to this field is given by Kirsch [109]. Inverse problems are known to be ill-posed, which has to be treated by regularization techniques, see Engl, Hanke, and Neubauer [58]. Some connections between optimization and inverse problems are presented by Neittaanmäki, Rudnicki, and Savini [144].

Here, we are especially interested in structural optimization, where we change the structure of an object, which interacts with a physical field, in order to achieve a required behaviour. The structure means either material properties, topology, or the shape of the object boundary or interfaces. Various issues of structural optimization are covered in Banichuk [17], Bendsøe [21], Cherkaev [43], Kalamkarov [105], Olhoff and Taylor [152], Pedersen [154], Rozvany [173, 174, 175], Save and Prager [180, 181], Xie and Steven [213]. Applications in electromagnetism are given by Hoppe, Petrova, and Schulz [97], in plasticity by Yuge and Kikuchi [216], and, for instance, in ergonomics by Rasmussen et al. [165]. An optimal design of microstructures is presented by Jacobsen, Olhoff, and Rønholt [100].

If we are interested in the topology design, it is usually the question where to put holes, and we speak about topology optimization. The basic literature is Bendsøe [20, 21], Bendsøe and Sigmund [22], Borrvall [25]. Some applications in electromagnetism are presented by Hoppe, Petrova, and Schulz [96], Yoo and Kikuchi [214]. More theoretical issues are given by Stadler [198] or by Sigmund and Petersson [190].

In the design process, the second step after topology optimization is shape optimization, where we tune the shape of the boundary or interfaces. The basic literature on shape optimization is given by Begis and Glowinski [19], Murat and Simon [140], Pironneau [159], Haslinger and Neittaanmäki [85], Haslinger and Mäkinen [83], Sokolowski and Zolésio [196], Borner [24], Delfour and Zolésio [54], Kawohl et al. [107], Mohammadi and Pironneau [135]. Besides the basic textbooks, one can find a lot of theoretical analysis in Bucur and Zolésio [35], Peichl and Ring [155, 156], Petersson and Haslinger [158], Petersson [157]. Papers focused on applications in electromagnetism are, for example, Di Barba et al. [18], Brandstätter et al. [30], Lukáš [123], Marrocco and Pironneau [132], Takahashi [206].

It turns out that there is much in common between topology and shape optimization. Recently there have appeared several papers in this context, like Céa et al. [40], Rietz and Petersson [171], Tang and Chang [207].


1.1.2 Optimization methods

Another point of view on optimization is from the side of numerical mathematics. We can classify optimization problems with respect to what algorithm is used. There is a class of evolutionary algorithms, cf. Xie and Steven [213], typical examples of which are genetic algorithms, which search for the global optimum. However, the number of evaluations of the objective functional is exponential in the number of design variables. This is due to the fact that the whole design space has to be randomly explored. On the other hand, there are Newton-like algorithms, which look for a local optimum. These use the first-order, and possibly the second-order, derivatives to approximate the objective functional locally by a quadratic function. The local algorithms are much faster compared to the global ones, and in this thesis we will be concerned with them only. Many algorithmic issues of local optimization are covered in Nocedal and Wright [148], Fletcher [61], Dennis and Schnabel [55], Gill, Murray, and Wright [66], Grossmann and Terno [72], Céa [39], Hestenes [88, 89], Hager, Hearn, and Pardalos [81], Polak [161], Mühlhuber [139], Boggs and Tolle [23], Conn, Gould, and Toint [49]. However, there are also optimization problems whose cost functional is not differentiable, not even twice differentiable. This is the case of nonsmooth optimization, see, e.g., Clarke [48], Mäkelä and Neittaanmäki [130], Haslinger, Miettinen, and Panagiotopoulos [84]. Let us also mention multicriterial optimization, which tries to include more aspects with respect to which the design should be optimal, see Olhoff [151].
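To make the local quadratic model concrete, the following sketch uses generic symbols (p for the vector of design variables, J for the cost functional, H_k for the Hessian or its approximation), not the notation introduced later in this thesis. A Newton-like step minimizes

\[
  J(p_k + s) \;\approx\; J(p_k) + \nabla J(p_k)^{T} s + \tfrac{1}{2}\, s^{T} H_k\, s
\]

over s (subject to the constraints), and then sets p_{k+1} = p_k + s, where H_k is either the exact Hessian or a successive approximation of it, e.g., by the BFGS update.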
There are several interesting optimization techniques that have appeared just recently. In the papers by Burger and Mühlhuber [37, 38] they solve simultaneously for both the design and state variables, i.e., they minimize at the same time the cost functional as well as the quadratic energy functional of the direct problem. Another challenging issue in optimization is adaptivity. A hierarchical approach in shape optimization is used by Lukáš [123, 128]. The works of Ramm, Maute, and Schwarz [164], Schleupen, Maute, and Ramm [185] even make use of the FE-adaptivity in both the topology and shape optimization. Using a multilevel approach for solving nonlinear ill-posed problems is presented in Scherzer [182].

The Newton-like optimization methods suffer from the computational costs and from the fact that they search for local optima. This is partly overcome by the homogenization method, the study of which has just been started. It aims at describing the macroscopic behaviour of materials with heterogeneous microstructures. For the literature see Allaire [5], Allaire et al. [7], Cioranescu and Donato [47]. The method is very much connected to structural (both shape and topology) optimization, which is studied in Allaire [6], Suzuki and Kikuchi [202], Yoo and Kikuchi [214], Yuge, Iwai, and Kikuchi [215]. However, this method is well-suited only for some cost functionals and the linear elasticity. Another new interesting approach is the level-set method, see Sethian and Wiegmann [188], Allaire, Jouve, and Toader [8]. It determines the set of admissible designs implicitly by a level-set function and uses the shape or topology derivative with respect to this implicit scheme. The level-set method has already been applied in the field of inverse problems, see Burger [36]. Nevertheless, the method involves a time explicit scheme, which takes many iterations. This can be overcome by a coupling with Newton methods.

1.1.3 Iterative methods for linear systems of equations

The main computational effort is related to the solution of the state problem. Fast iterative solution methods have been developed especially for linear systems with sparse symmetric positive definite matrices. Such a system can be stated as a quadratic minimization problem. For those, the development of conjugate gradient methods has been running for more than 50 years. The conjugate gradient methods look for the minimum of the quadratic functional in directions that are conjugate with respect to the energy scalar product related to the system matrix. The research was initiated by Hestenes and Stiefel [90], and since then an extensive literature on this topic has been written, see Zoutendijk [221], Golub and Van Loan [69], Axelsson [13], Saad [179].

Nowadays, the key point is the construction of proper preconditioners. The ones that have turned out to be the best are based on multigrid techniques. They construct a hierarchy of finite element discretizations such that they first reduce the low-frequency components (eigenvalues) of the residual error on a coarse discretization, which is very fast, and then the higher frequencies on a finer one, which is again fast, as the low frequencies are no longer present. The hierarchy can be constructed either with respect to the computational grid (geometric multigrid) or with respect to the structure of the system matrix (algebraic multigrid). Various topics on multigrid techniques are presented in Hackbusch [78], Bramble [28], Bramble, Pasciak, and Xu [29], Hiptmair [91], Jung and Langer [102], Reitzinger [169], Haase and Langer [74], Haase et al. [75, 76]. Applications in electromagnetism can be found in Schinnerl et al. [183, 184]. A software package based on algebraic multigrid was written by Reitzinger [168].
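As an illustration of the conjugate gradient iteration discussed above, the following Matlab sketch solves A*u = f for a sparse symmetric positive definite matrix A. It is only a minimal sketch under stated assumptions: the function name pcg_sketch and the arguments tol and maxit are inventions of this example, and a simple Jacobi (diagonal) preconditioner stands in for the multigrid preconditioners mentioned in the text.

    function u = pcg_sketch(A, f, tol, maxit)
    % Preconditioned conjugate gradients for A*u = f, with A sparse s.p.d.
    % C is a simple Jacobi (diagonal) preconditioner; a geometric or algebraic
    % multigrid cycle would replace the C \ r solves in the setting above.
    n = size(A, 1);
    C = spdiags(diag(A), 0, n, n);
    u = zeros(n, 1);
    r = f;                         % residual for the zero initial guess
    z = C \ r;                     % preconditioned residual
    p = z;                         % first search direction
    rz = r' * z;
    for k = 1:maxit
        Ap = A * p;
        alpha = rz / (p' * Ap);    % step length minimizing the energy functional along p
        u = u + alpha * p;
        r = r - alpha * Ap;
        if norm(r) <= tol * norm(f)
            break
        end
        z = C \ r;
        rznew = r' * z;
        p = z + (rznew / rz) * p;  % new direction, conjugate w.r.t. the energy scalar product
        rz = rznew;
    end

For example, a call such as u = pcg_sketch(A, f, 1e-8, 200) would approximate the solution of a discretized state problem to a relative residual of 10^-8.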

1.1.4 Commercial versus academic software tools

Basically, we distinguish between commercial and academic software. Commercial software tools, see Thomas, Zhou, and Schramm [209] for a review, are developed to provide a large functionality in a user-friendly way. They have to attract as large an audience as possible in order to survive in the commercial market. They try to be robust, automatic, and sexy. They benefit from a deep engineering experience. As a matter of fact, commercial software tools are much more suitable for immediate applications in industry than the academic ones, because the developers are much closer to the industrial users. However, from a lack of knowledge they cannot provide the latest scientific computational methods, and the solution time is often rather long. Whenever the user needs more functionality, he/she has to wait until a new release is done. Typical commercial software packages for both analysis and design are Ansys [1] or Fluent [2].

On the other hand, scientific computing tools are developed with respect to an a priori known scientific goal. At the very beginning they do not need to attract a large audience, as they are supported by research grants. They do not need to be user-friendly, as the researchers that use them know very well what is going on and can remove some errors themselves. Their main advantage is that the scientific computing tools implement the up-to-date knowledge and they use fast solution methods that have just appeared in the world research. However, they cannot be directly applied in industry, since they do not treat complicated real-life geometries, and they are not as user-friendly and as robust as the commercial ones. Some scientific computing tools for analysis or optimization are presented in Kuhn, Langer, and Schöberl [117], Silva and Bittencourt [191], Rasmussen et al. [166], Parkinson and Balling [153]. An example of a more educational software system is in Tcherniak and Sigmund [210]. A typical commercial software directed to the academy is Matlab [208].

Until recently, one could hardly work in both industry and academia, as their objectives were rather different. The current trend seems to be towards interdisciplinary work. Industrial partners are invited to talk at scientific symposia, and many companies invest in further education of their staff. The gap between industry and academia is getting smaller, and thus the difference between commercial and scientific software is shrinking as well. Commercial software should take more into account the latest research progress, and the scientists developing research software packages should put more effort into documentation, user interfaces, and better coordination of the development. The more communication between industry and academia there is, the more improvement can be done.

1.2 Optimal shape design

In this thesis we treat optimal shape design problems. These are very well-structured, and as a consequence we can design a very efficient solution method that takes the structure into account. The direct problem within shape optimization is a partial differential equation (PDE). There is still a number of PDEs that can be considered. Imagine the following examples: a flying aeroplane, a loaded bridge, an electromagnet pumped by direct electric currents (DC), or a hydraulic press acting on a piece of steel. Here, the PDEs are very diverse, namely, the air flow can be modelled by a hyperbolic PDE, the flight of the aeroplane by a parabolic PDE, the load of the bridge or the DC electromagnet by elliptic PDEs, and the hydraulic press is modelled as a contact problem, which is nonsmooth. If we consider nonlinear constitutive relations, then the PDEs are even more complicated to solve. Hence, the solution method should be suited to the type of PDE that we are concerned with. In this thesis we will deal with shape optimization governed by elliptic linear PDEs.

Since we will employ Newton algorithms for smooth optimization, the crucial point is the sensitivity analysis, which is the evaluation of gradients of the cost and constraint functionals with respect to the design variables. One can either derive a Fréchet derivative from the continuous setting of the optimization problem, see Sokolowski and Zolésio [196], or discretize the continuous problem first and then use an algebraic approach, see Haslinger and Neittaanmäki [85]. We prefer the second approach. In Sokolowski and Żochowski [195] a connection between topological and shape sensitivity analysis is presented.

Let us consider a shape optimization problem governed by an elliptic PDE on a bounded computational domain. The most common solution approach is the following: At the very beginning, given an initial shape design, we decompose the computational domain into polygonal (or polyhedral) convex elements, cf. George [65]. Then we discretize a weak formulation of the elliptic PDE by the finite element method (FEM). We get a sparse positive definite system matrix and a right-hand side vector. We employ a fast iterative method to solve the system with a sufficient precision. Then we calculate the cost (optimization) functional and we can start to play around with the shape. Some design variables describe the design boundary (or interface). Changes of the design variables are mapped onto displacements of the nodes lying on the design boundary (or interface), e.g., by means of a Bezier parameterization, cf. Farin [59]. Displacements of the nodes along the design boundary influence the displacements of the remaining nodes in the discretization grid. Finally, the displaced discretization grid influences the system matrix, and eventually the right-hand side vector. Thus, the grid influences the solution of the PDE and so it also influences the cost functional. The main effort in the sensitivity analysis lies in an efficient evaluation of the gradient of the solution to the direct simulation problem. This is usually done by the adjoint method, cf. Haug, Choi, and Komkov [86].
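The algebraic (discretize-then-differentiate) adjoint computation behind this chain can be sketched as follows; here K, u, f, q, I and λ are generic placeholders of this sketch for the stiffness matrix, state vector, load vector, design vector, cost function and adjoint vector, not the notation introduced later in the thesis. With the discretized state equation K(q) u = f(q) and the cost J(q) = I(q, u(q)), one solves a single adjoint system instead of differentiating the state with respect to every design variable:

\[
  K(q)^{T} \lambda = \frac{\partial I}{\partial u}, \qquad
  \nabla_q J = \frac{\partial I}{\partial q}
  + \lambda^{T} \Bigl( \frac{\partial f}{\partial q} - \frac{\partial K}{\partial q}\, u \Bigr),
\]

so the gradient with respect to all design variables costs essentially one extra linear solve, which is the source of the speedup over numerical differentiation reported later for the applications.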
Besides the finite element analysis, there are also boundary element methods (BEM). They discretize the boundary integral form of the PDE. These techniques are not as widespread as the finite elements. The fundamental principles are covered in Banerjee and Butterfield [16], Brebbia [31], Chen and Zhou [41]. Using BEM in optimal shape design is presented in Chen, Zhou, and McLean [42], Kita and Tanie [110], or in Simon [193]. Applications in electromagnetism are given in Hiptmair and Schwab [94], and in Kaltenbacher et al. [106].


1.3 Computational electromagnetism

Due to some historical aspects, the finite element method was first reviewed and applied by mechanical engineers, cf. Zienkiewicz [217]. Then the fluid dynamics community started to use the method. Electrical engineers and scientists started to apply the method a little later; nevertheless, mainly in the last two decades the finite element analysis has become increasingly popular in the electromagnetic community, too. An important point was the introduction of a new class of finite elements by Nédélec [142, 143]. A lot of theoretical work was done by Adam et al. [3], Amrouche et al. [9], Costabel and Dauge [50], Hiptmair [92, 93], Kačur et al. [104], Monk [136, 137, 138], Neittaanmäki and Saranen [146, 147]. The fundamental textbooks were published by Arora [11], Bossavit [26], Girault and Raviart [67], Ida and Bastos [99], Kost [112], Křížek and Neittaanmäki [115], Mayergoyz [133], Silvester and Ferrari [192], Steele [199], van Rienen [211].

1.4 Structure of the thesis

The rest of the thesis is structured as follows. In Chapter 2, we describe Maxwell's equations for 3-dimensional time-dependent electromagnetic fields. By neglecting some phenomena we arrive at the 3-dimensional (3d) and 2-dimensional (2d) linear magnetostatic cases, respectively.

At the beginning of Chapter 3, we recall fundamental issues of linear functional analysis. We describe the Sobolev spaces H^1, H(div), and H(curl). Then we start to develop an abstract theory, which can cover a wide class of boundary value problems. Under four assumptions we introduce an abstract function space H(B) for some abstract elliptic first-order vector-value linear differential operator B. We basically assume the abstract space to be dense in C^∞, and to fulfill the trace theorem, Green's theorem, and a Friedrichs-like inequality. Further, we formulate an abstract linear elliptic second-order boundary vector-value problem (BVP) with the homogeneous Dirichlet boundary condition. We derive its weak formulation. In the cases when the kernel of the operator B is not trivial, the bilinear form is not H(B)-elliptic and the weak formulation is not suited for the finite element discretization. Therefore, we introduce a regularized weak formulation and prove the convergence of the regularized solutions to the true one in the seminorm. At the end of Chapter 3, we apply the theory to both 2d and 3d linear magnetostatics while all the previously introduced assumptions are verified.

In Chapter 4, we first recall the general concept of the finite element method. Then we deal with algorithmic aspects and derive an efficient assembling algorithm, which approximates our abstract BVP. Further, under some assumptions we prove the convergence of the approximate solutions to the true weak solution of the BVP. The proof is mainly based on the first Strang lemma and on the Lebesgue dominated convergence theorem. This approximation theory also involves an inner approximation of the original Lipschitz boundary by polygonal (or polyhedral) ones. At the end, we present Lagrange nodal and Nédélec edge finite elements, which are used for 2d and 3d magnetostatics, respectively. For these two types of elements we verify all the assumptions of the introduced convergence theory.

In Chapter 5, we introduce a continuous setting of a shape optimization problem, which is governed by the abstract BVP. The shape controls an interface between two materials while the computational domain is fixed. We spend some effort in proving the continuity of the state solution with respect to the shape. We suppose the cost functional to be continuously dependent on the state solution. The set of admissible shapes is compact by definition. Hence, the existence of an optimal solution can be proved. We further employ the regularization of the state problem and prove the corresponding convergence result. Finally, we discretize the state problem and, consequently, the optimization problem by means of the finite element method. We prove the convergence of the discretized optimized shapes to a continuous optimal one. The convergence theory uses very standard tools of functional and finite element analysis and it was inspired by the monograph of Haslinger and Neittaanmäki [85]. Nevertheless, one of its assumptions, namely the continuity of the mapping between the shape nodes and the remaining grid nodes, is difficult to assure in practice. For nonacademic problems with complex geometries and for fine discretizations one can hardly find such a continuous shape-to-mesh mapping, as for large changes of the design shape some disturbed or even flipped elements can appear and the geometry has to be remeshed. This brings the discontinuity of the cost functional into the business. In the Conclusion, possible outcomes are discussed.

In Chapter 6, we revisit our abstract optimization problem from the computational point of view. We analyze the structure of the cost functional and present it as a compound mapping, consisting of several smooth submappings. Therefore, we can prove the smoothness of the cost functional, which justifies the use of algorithms of the Newton type afterwards. We briefly mention all the ingredients of the sequential quadratic programming method. Then, we derive an efficient method for the first-order sensitivity analysis, including its implementation in Matlab, which is enclosed on the CD. This is actually the heart of the whole thesis. Finally, we introduce a multilevel optimization algorithm, which is well-designed to be adaptive with respect to the a posteriori error analysis of the underlying finite element discretization of the state problem. We refer to the very recent papers by Schleupen, Maute, and Ramm [185], Ramm, Maute, and Schwarz [164], in which the finite element adaptivity is already used for calculating the error of the approximation of the cost functional.

At the beginning of Chapter 7, we present an application, which has arisen from the research on magneto-optic effects. Our aim is to find optimal shapes of two electromagnets in order to minimize inhomogeneities of the magnetic field in a certain area. The electromagnets have rather complex 3-dimensional geometries. We formulate the cost functional, the set of admissible shapes, and the state problem such that, simultaneously, we verify the related theoretical assumptions. We further pose the corresponding reduced 2-dimensional settings of the problems. Then, both 2d and 3d numerical results are given. We discuss the speedup of the used adjoint method compared to numerical differentiation, and the speedup of the multilevel approach with respect to the standard approach. Finally, an optimized shape was manufactured and we are provided with physical measurements of the magnetic fields for both the original and optimized electromagnets. We present the improvements of the magnetic field in terms of the cost functional.

In the Conclusion, we summarize the results of this thesis and give directions for further research.

CHAPTER 1. INTRODUCTION

Chapter 2

Mathematical modelling in
magnetostatics
In this chapter, we will start from Maxwells equations in a general timedependent 3dimensional
(3d) setting, we will pass through the timeharmonic case, and we will end up with the 3d magnetostatic boundary value problem. Neglecting the magnetic phenomena in a given direction, we
will arrive at the 2dimensional (2d) magnetostatic boundary value problem. Throughout this
chapter, we will formally describe the physical phenomena, rather than introduce all the necessary
assumptions on the smoothness of the domain or the differentiability of the physical quantities.
Mathematically correct settings will be introduced in Chapter 3.
For the theory of electromagnetism we refer to F EYNAM , L EIGHTON , AND S ANDS [60],
H AUS AND M ELCHER [87], S OLYMAR [197], and S TRATTON [201]. The monographs focused
[115], B OSSAVIT [26],
more on numerical modelling are given by K R I Z EK AND N EITTAANM AKI
VAN R IENEN [211], I DA AND BASTOS [99], KOST [112], M AYERGOYZ [133], S TEELE [199].

2.1 Maxwells equations


The physical phenomena of the time-dependent 3dimensional electromagnetic field are described
by Maxwells equations
D

curl(H) = J + E +

B
curl(E) =
,
(2.1)
t

div(D) =

div(B) = 0

together with the constitutive relations

D = E and B = H,

(2.2)

where E denotes the electric field (electric intensity), D is the electric flux density, > 0 is
the permittivity, J is the external electric current density, > 0 is the electric conductivity,
0 is the charge density, H is the magnetic field, B is the magnetic flux density, > 0 is the
permeability, t 0 is the time and the differential operators are defined as follows:
div(v) :=

v2
v3
v1
+
+
,
x1 x2 x3
9

10

CHAPTER 2. MATHEMATICAL MODELLING IN MAGNETOSTATICS


curl(v) :=

v2 v1
v3 v2
v1
v3

x2 x3 x3 x1 x1 x2

where v = (v1 , v2 , v3 ) is a vector function and x = (x1 , x2 , x3 ) are point coordinates.


Now we introduce 3d timeharmonic linear Maxwells equations. First, we restrict our consideration into a fixed bounded domain, assuming that the fields vanish outside the domain. Let
R3 be a nonempty domain the boundary of which is smooth enough. Further, let 0
be an angular frequency and assume that both the electric and magnetic field are timeharmonic

E(x, t) := Re E(x)eit ,

H(x, t) := Re H(x)eit ,

where E := E(x) and H := H(x) are complexvalued vector functions, i denotes the imaginary
unit, and Re(v) := (Re(v1 ), Re(v2 ), Re(v3 )) is the componentwise real part of the vector v :=
(v1 , v2 , v3 ). Moreover, we assume the charge density and Maxwells current to be zeros
= 0 and

D
= 0.
t

We also assume that the constitutive relations (2.2) are timeindependent, realvalued and linear
:= (x) and := (x),
rather than := (x, E) and := (x, H) in the nonlinear case. Finally, we assume the external
current density J and the conductivity to be timeindependent and realvalued
J := J(x) and := (x).
Now, Maxwells equations (2.1) can be rewritten as follows:

curl(H) = J + E

curl(E) = iB
div(D) = 0

div(B) = 0

in ,

(2.3)

where, according to (2.2), D = E and B = H. We prescribe that the electric field vanishes on
the boundary
n E = 0 on ,
(2.4)
where n denotes the outer unit normal to and where, given vectors u := (u 1 , u2 , u3 ) and
v := (v1 , v2 , v3 ),
u v := (u2 v3 u3 v2 , u3 v1 u1 v3 , u1 v2 u2 v1 )
is the vector cross product.

2.2 Threedimensional linear magnetostatics


We introduce the magnetic vector potential u by
curl(u) = B.

2.3. TWODIMENSIONAL LINEAR MAGNETOSTATICS


The first two equations in (2.3) now read as follows:


1
curl(u) = J + E
curl

the third equation becomes

11

curl(E) = icurl(u)

in ,

div(iu) = 0 in
and last Maxwells equation is automatically fulfilled, since the vector identity div(curl(u)) = 0
holds. We consider the timeindependent case of (2.3). Taking := 0 and neglecting the electric
field, we arrive at the following magnetostatic boundary value problem



1

curl
curl(u) = J in

.
(2.5)

n u = 0 on

2.3 Twodimensional linear magnetostatics

Let us assume that the magnetic field given by (2.5) does not significantly depend on the x 3
coordinate. This is often the case when J(x) = (0,0, J(x 1 , x2 )) and (x) = (x1 , x2 ) in a
large enough neighbourhood of the zeroplane Z := x R3 | x3 = 0 . We are interested in an
approximate solution of (2.5) in this neighbourhood. So, let us assume that
J(x) := (0, 0, J(x1 , x2 )) , (x) := (x1 , x2 ), and u(x) := (0, 0, u(x1 , x2 )) .
Using the latter, the problem (2.5) reduces to the following


1
div
grad(u) = J in 2d

u = 0 on 2d
where



2d := x0 = (x1 , x2 ) R2 | (x1 , x2 , 0)

(2.6)

represents a cross section of in the sense 2d {0} = Z, and where the differential operator
grad is defined as follows:


u u
grad(u) :=
,
,
x1 x2
where u is a scalar function. It is easy to see that the magnetic flux density is then given by


u
u
,
,0 ,
B=
x2 x1
where u solves (2.6).

12

CHAPTER 2. MATHEMATICAL MODELLING IN MAGNETOSTATICS

Chapter 3

Abstract boundary vectorvalue


problems
In this chapter, we will recall the necessary mathematics used for weak formulations in magnetostatics. We will begin with the continuous function spaces and introduce some Sobolev spaces
together with the corresponding trace, Greens theorems, and Friedrichslike inequalities. Then,
we will formally describe the concept of a weak formulation for an abstract elliptic linear boundary vectorvalue problem. Particularly, we will illustrate the concept on both the 3d and 2d linear
magnetostatic problems.
There is already an exhaustive literature on weak formulations of electromagnetic problems,
[115], A MROUCHE ET AL . [9], A DAM ET AL . [3], B OSSA see K R I Z EK AND N EITTAANM AKI
UR , N E C AS , P OL AK
, AND S OU C EK [104], M ONK [136, 137,
VIT [26] VAN R IENEN [211], K A C

138], H IPTMAIR [93], N EITTAANM AKI AND S ARANEN [146, 147], S TEELE [199], S ILVESTER
AND F ERRARI [192]. The monograph by G IRAULT AND R AVIART [67] and the paper by H IPTMAIR [92] inspired us to build an abstract theoretical framework for the weak formulations and
consequent finite element discretizations of the linear elliptic boundary vectorvalue problems.

3.1 Preliminaries from linear functional analysis


This section recalls basic definitions from functional analysis which will be frequently used in the
[115, p. 12].
sequel. Most definitions as well as notation are due to K R I Z EK AND N EITTAANM AKI
Let us mention the monographs by N E C AS [141], O DEN AND D EMKOWICZ [149], RUDIN [177],
C OURANT AND H ILBERT [51], S HOWALTER [189], F RANC U [64], or by R EKTORYS [170].

3.1.1 Normed linear vector spaces


The nonempty set V with the operations + : V V 7 V and . : R V 7 V is called a linear
vector space over reals if for any u, v, w V and any , R the following axioms are satisfied
(u + v) + w = u + (v + w),

u + v = v + u,

(u + v) = u + v,

( + )u = u + u,

z V : u + z = v,

(u) = ()u,

1u = u.
Among others, the axioms imply that the following hold
0 V u V : 0 + u = u,

u V (u) V : u + (u) = 0.
13

14

CHAPTER 3. ABSTRACT BOUNDARY VECTORVALUE PROBLEMS

We define the operation : V V 7 V by


u v := u + (v),

u, v V.

The subset U V is called a subspace of V if it is also a linear vector space with respect to the
operations . and +.
Let V be a linear vector space. The mapping k k V : V 7 R is called a norm if for any
u, v V and any R the relations
ku + vkV kukV + kvkV ,

kukV = ||kukV ,

kukV 6= 0 if v 6= 0

(3.1)

hold. The space V equipped with a norm is called a normed linear vector space.
Let V be a normed linear vector space. The sequence {u n }
n=1 V is said to be convergent
if there exists u V such that
kun ukV 0, as n .
We denote it by un u in V .
Let M V be a subset of a normed linear vector space V . The subset M is said to be closed
if for any convergent sequence {un }
n=1 M the following is true
un u in V u M.
The subset M is said to be dense in V if the condition
u V {un }
n=1 M : un u in V
is satisfied. We denote it by
V = M in the norm k kV .
Let V be a normed linear vector space and U V be a subspace. The space
V /U := {[u] V | u V and v U : u + v [u]}
is called a quotient space. The space V /U equipped with the norm
k[u]kV /U := inf ku + vkV
vU

forms a normed linear vector space. Moreover, if U is a closed subspace, then the infimum is
realized on U and it becomes the minimum.

3.1.2 Linear operators


Let U, V be normed linear vector spaces. Then the mapping L : U 7 V is called a linear operator
if for any u, v U and any R the following relations
L(u + v) = L(u) + L(v),

L(u) = L(u)

hold. The linear operator L : U 7 V is continuous if the following is satisfied


C > 0 u U : kL(u)kV CkukU .

3.1. PRELIMINARIES FROM LINEAR FUNCTIONAL ANALYSIS

15

The set
Ker(L) := {u U | L(u) = 0}

(3.2)

is called the kernel of the operator L and it is a closed subspace of U . The linear operator L 1 :
V 7 U is called the inverse to L if
u U v V : L(u) = v L1 (v) = u.
The mapping f : U 7 R is called a functional. The space of continuous linear functionals
that are defined on a normed linear vector space U is called a dual space and it is denoted by U 0 .
The mapping h, i : U 0 U 7 R defined by
hf, ui := f (u),

f U 0 , u U,

is called a duality pairing. The following


kf kU 0 := sup |hf, ui|
uU
kukU =1

is a norm. The space U 0 equipped with k kU 0 and with the following operations
hf + g, ui := hf, ui + hg, ui,

hf, ui := hf, ui

f, g U 0 , R, u U,

forms a normed linear vector space.

3.1.3 Hilbert spaces


The normed linear space V is called a Banach space if for any Cauchy sequence {u n }
n=1 V ,
i.e.,
 > 0 n0 N m, n N : m, n n0 kum un kV ,
the following holds
u V : un u in V.
Let H be a linear vector space. The mapping (, ) H : H H 7 R which satisfies for any
u, v, w H and any R the following conditions
((u + v), w)H = (u, w)H + (v, w)H ,
(u, u)H 0,

(u, v)H = (v, u)H ,


(u, u)H 6= 0 if u 6= 0

is called a scalar product. The norm defined by


kukH :=

p
(u, u)H

is called the induced norm. Moreover, if the space H with the scalar product and the induced norm
is a Banach space, then it is called a Hilbert space. The following CauchySchwarz inequality
holds:
|(u, v)H | kukH kvkH
(3.3)
for any u, v H. Let U H be a closed subspace of H, then it is a Hilbert space, too.

CHAPTER 3. ABSTRACT BOUNDARY VECTORVALUE PROBLEMS

16

Theorem 3.1. (Riesz theorem) Let H be a Hilbert space. Then for any f H 0 there exists exactly
one element u H such that
v H : (v, u)H = f (v).
(3.4)
Moreover,
kukH = kf kH 0 .
Proof. See O DEN

AND

(3.5)

D EOMKOWICZ [149, p. 557].

Let H be a Hilbert space. The mapping a(, ) : H H 7 R is called a bilinear form if for
any fixed u H both the mappings a(, u) and a(u, ) are linear functionals. The bilinear form is
said to be continuous on H if there exists a positive constant C 1 such that
u, v H : |a(u, v)| C1 kukH kvkH .
The bilinear form is called Helliptic if there exists a positive constant C 2 such that
v H : |a(v, v)| C2 kvk2H

(3.6)

Lemma 3.1. (LaxMilgram lemma) Let H be a Hilbert space and let a(, ) be a continuous
bilinear form on H which is Helliptic with the constant C 2 . Then for any f H 0 there exists
exactly one element u V such that
v H : a(v, u) = f (v).
Moreover,
kukH

(3.7)

1
kf kH 0 .
C2

Proof. See N E C AS [141, p. 38].


Lemma 3.2. Let the assumptions of Lemma 3.1 be satisfied and let the bilinear form be, in addition, symmetric on H, i.e.,
u, v H : a(u, v) = a(v, u).
Then (3.7) is equivalent to: Find u H such that
J(u) = min J(v),
vH

where J is a quadratic functional given by


J(v) :=
Proof. See K R I Z EK

AND

1
a(v, v) f (v),
2

v H.

[115, p. 14].
N EITTAANM AKI

The normed linear vector spaces U and V are said to be isomorphically isometric if there exists
a onetoone linear operator L : U 7 V such that
u U : kL(u)kV = kukU .
The operator L is called an isomorphism.
Let H be a Hilbert space and U H be a closed subspace. The space U defined by
U := {u H | v U : (u, v)H = 0}

(3.8)

3.1. PRELIMINARIES FROM LINEAR FUNCTIONAL ANALYSIS

17

is called the complementary space to U and the orthogonal decomposition


H = U U
holds, which means that
u H v U w U : u = v + w.
Let H be a Hilbert space. We say that the set E := {e i H | i N} forms a base of the
space H if the following two assumptions are fulfilled
P
(i) u H i N i R : u =
i=1 i ei ,
P
(ii) i N i R : i=1 i ei = 0 i = 0 .

The vectors ei are the base vectors and the real numbers i are the coordinates of the vector u
in the base E. If the base consists of only a finite number of base vectors, we say that H is
finitedimensional, otherwise, H is infinitedimensional.

3.1.4 Linear algebra


[57], is a special case of the linear
Linear algebra, cf. G OLUB AND VAN L OAN [69] or D OST AL
functional analysis, where we work with finitedimensional Hilbert spaces the Euclidean spaces.
By the Euclidean space Rn , n N, we mean the Hilbert space Rn equipped with the scalar product
(u, v) := u v,

u v :=

n
X

ui vi ,

i=1

where u := (u1 , . . . , un ) Rn , v := (v1 , . . . , vn ) Rn stand for column vectors. Then the set
{e1 , . . . , en } forms the Euclidean base, where all the entries of the Euclidean base vector e i Rn
are zeros except for the ith entry which is one.
Let A := (A1 , . . . , Am ) : Rn 7 Rm be a linear vector operator acting between two Euclidean
spaces. Then we can represent A by the following matrix

a1,1 a1,2 . . . a1,n


a2,1 a2,2 . . . a2,n

A := .
..
.. , where ai,j := Ai (ej ) for i = 1, . . . , m, j = 1, . . . , n.
.
.
.
.
.
.
.
am,1 am,2 . . . am,n

We will also denote the matrix by A := (a i,j ) (ai,j )i,j Rmn . From the linearity of A it
follows that for a vector u := (u1 , u2 , . . . , un ) Rn
Pn

a1,j uj
Pj=1
n

j=1 a2,j uj

A(u) = A u, where A u :=
.
..

.
Pn
j=1 am,j uj

We define the matrix norm k k : Rmn 7 R by

kAk := max |ai,j | .


i,j

(3.9)

CHAPTER 3. ABSTRACT BOUNDARY VECTORVALUE PROBLEMS

18

 
By the matrix AT := aTi,j Rnm we denote the transpose matrix to the matrix A, where

the entries of AT are as follows:

aTi,j := aj,i

for i = 1, . . . , n, j = 1, . . . , m.

The mapping AT : Rm 7 Rn , which is represented by the matrix A T , is called the transpose


mapping to the mapping A.
Let A := (A1 , . . . , Am ) : Rn 7 Rm and B := (B1 , . . . , Bp ) : Rm 7 Rp be linear vector operators, represented by the matrices A := (a i,j ) Rmn and B := (bi,j ) Rpm , respectively.
Then it can be easily proven that the compound mapping C := B A : R n 7 Rp , defined by
[B A] (u) := B (A(u)) ,

u Rn ,

is represented by the matrix C := (ci,j ) Rpn , where


ci,j :=

m
X

bi,k ak,j

for i = 1, . . . , p, j = 1, . . . , n.

k=1

The linear algebra provides a powerful tool for solving linear operator equations. Given a
linear mapping A : Rn 7 Rm and a vector f := (f1 , . . . , fm ) Rm , the linear operator
equation
A(u) = f ,
(3.10)
solved for u Rn , can be equivalently written as a system of linear algebraic equations, the
matrix form of which is
A u = f,
(3.11)
where the matrix A := (ai,j ) Rmn represents the linear operator A. Moreover, if m = n
and if there exists the inverse operator A1 : Rn 7 Rn , then the solution to the linear operator
equation (3.10) is represented by
u = A1 (f ).

The latter can be again written in terms of matrices. To this end, we introduce a multilinear form
det(A), called the determinant of the matrix A, which is recursively defined by
(
a1,1
,n = 1
det(A) := Pn
,
(3.12)
j+1
det(A1,j ) , n 2
j=1 (1)

where the matrix Ai,j R(n1)(n1) is made from the matrix A Rnn by excluding its ith
e := (af
row and jth column. Further, to the matrix A we associate the adjoint matrix A
i,j )
nn
R
by
i+j
af
det(Aj,i ) .
(3.13)
i,j := (1)

Lemma 3.3. Let A : Rn 7 Rn , n N, be a linear operator, which is represented by the


matrix A Rnn . Then there exists the inverse linear operator A 1 : Rn 7 Rn if and only if
det(A) 6= 0. The corresponding inverse matrix is then as follows:
A1 :=
and it is such that

1
e
A
det(A)

A A1 = A1 A = I,

where I := [e1 , . . . , en ] Rnn , where ei Rn denotes an Euclidean base vector.

(3.14)

3.2. PRELIMINARIES FROM REAL ANALYSIS

19

Proof. See G OLUB AND VAN L OAN [69].


Let a matrix A Rnn be given. If there does not exist the inverse matrix A 1 Rnn , then
A is said to be a singular matrix. Otherwise, A is nonsingular and then the solution to the system
of linear algebraic equations (3.11) reads as follows:
u=

1
e f.
A
det(A)

(3.15)

Note that (3.15) is extremely inappropriate for practical calculations of u, since the computation
e is very timeconsuming. We will rather use (3.15) for analysis only.
of both det(A) and A
Given a nonsingular matrix A Rnn , the transposition and the inversion are mutually commutative and we abbreviate them as follows:
T
1
AT := A1 = AT
.
x3

x3
A

PSfrag replacements
e3
e2

A(e3 )

x2

e1

x1

x2
A(e2 )

x1

A(e1 )

Figure 3.1: Linear transformation of a unit cube


The determinant det(A) has a clear geometric meaning, which is depicted in Fig. 3.1. If we
write the matrix A columnwise as A := (a 1 , . . . , an ), then the vectors ai := (a1,i , . . . , an,i ) =
A(ei ) are the images of the Euclidean base vectors e i in the mapping A. The vectors ei determine
an nth dimensional unit cube, while the value |det(A)| is the nth dimensional volume of the
image of this cube after the transformation A. This determinant property is for example used,
when a substitution is employed in some nth dimensional, e.g., volume integration.

3.2 Preliminaries from real analysis


[85, p. 2] a bounded, open, and connected
A domain is due to H ASLINGER AND N EITTAANM AKI
set Rm , m N. The symbol stands for the closure of , is the boundary of , and
n denotes the unit outer normal vector to the boundary . As only two or threedimensional
domains are meaningful for optimal shape design, we employ the following:

CHAPTER 3. ABSTRACT BOUNDARY VECTORVALUE PROBLEMS

20

Assumption 3.1. In all what follows we will assume that denotes a nonempty, bounded, open,
and connected subset of Rm , where m {2, 3}.
Nevertheless, all the results up to Chapter 5 are valid for any m N.

3.2.1 Continuous function spaces


[85, p. 24].
This section is due to H ASLINGER AND N EITTAANM AKI

A sequence {un }n=1 of realvalued functions defined in is uniformly bounded in if there


exists a constant C > 0 such that
n N x : |un (x)| C.
Let F be a collection of functions u : R. We say that the functions belonging to F and
the set F itself are equicontinuous at x if
> 0 (, x) > 0 u F : ky xk < (, x) |u(y) u(x)| .
Functions are equicontinuous in if they are equicontinuous at any x .
Let {un }
n=1 be a sequence of functions and let u be a function, all defined over . We say
that un uniformly converges to u if
max {|un (x) u(x)|} 0, as n ,

(3.16)

and we denote this convergence by un u in , as n .


Theorem 3.2. (AscoliArzela` ) Let {un }
n=1 be a set of uniformly bounded equicontinuous func
tions in , un : 7 R. Then there exists a subsequence {u nk }
k=1 {un }n=1 and a function u
(continuous in ) such that unk u in , as k .
Proof. See O DEN

AND

D EMKOWICZ [149, p. 365].

Let k N {0}. The symbol C k () denotes the space of continuous realvalued functions
that are differentiable up to the order k. In particular, we denote the space of continuous functions
by
C() := C 0 (),
which, being equipped with the norm
kukC() := max |u(x)|,
x

forms a Banach space.


m
Lemma
3.4. Let Rl and R
domains, where l, m N. Let u := (u1 , . . . , um )
 k m
 k be 
n
C () and v := (v1 , . . . , vn ) C , n N, be vector functions continuously differentiable up to the order k N, and let

x : u(x) .


n
Then v u C k () , where for x the function v u is defined by
(v u) (x) := v(u(x)).

3.2. PRELIMINARIES FROM REAL ANALYSIS

21

Moreover, partial derivatives of the compound function are as follows:


m

(v u)i (x) X vi (y) uk (x)


=
,
xj
yk
xj

i = 1, . . . , n, j = 1, . . . , l,

(3.17)

k=1

where x := (x1 , . . . , xl ) and y := (y1 , . . . , ym ) := (u1 (x), . . . , um (x)) .


Proof. See RUDIN [178, p. 86].
Further, we introduce the space of infinitely differentiable functions by
C () :=

C k ()

k=1

and the space of infinitely differentiable functions with a compact support by




C0 () := v C () | supp v ,

where

supp v := {x | v(x) 6= 0}.


We introduce the space of Lipschitz continuous functions by


C 0,1 () := u C() | C > 0 x, y : |u(x) u(y)| Ckx yk ,

where C > 0 is a Lipschitz constant.


Now we define a class of domains of a more practical use. The following definition is due
[115, p. 17] or H ASLINGER AND N EITTAANM AKI
[85, p. 4].
to K R I Z EK AND N EITTAANM AKI
Definition 3.1. A nonempty domain R m is said to have a Lipschitz continuous boundary if
for any z there exists a neighbourhood U := U (z) such that the set U can be expressed,
in some Cartesian coordinate system (x 1 , . . . , xm ), by the inequality xm < F (x1 , . . . , xm1 ),
where F is a Lipschitz continuous function.
The symbol L denotes the set of all domains with Lipschitz continuous boundaries.

3.2.2 Some fundamental theorems


Here, we refer to the classical textbooks by RUDIN [176, 178]. Let the vector := (1 , . . . , m )
denotes a multiindex, where 1 , . . . , m are nonnegative integers, and let || := 1 + + m
be an order of the multiindex. Then for any u C k () and any x we define the th
classical derivative, k, at the point x as follows:

|| u(x)
, || N
m
1

.
D u(x) := x1 ...xm
u(x)
, = (0, . . . , 0)
The following classical theorems of real analysis are due to RUDIN [178].


Theorem 3.3. (Taylors theorem) Let be an open subset of R m , m N, and let u C k .
Let further x := (x1 , . . . , xm ) and let z Rm be such that
t R : 0 t 1 x + tz .

CHAPTER 3. ABSTRACT BOUNDARY VECTORVALUE PROBLEMS

22
Then

u(x + z) =

m
k1
Y
X
1 X
D u(x)
(xj )j + r(z),
i!
i=1

j=1

0||i

where := (1 , . . . , m ) denotes a multiindex,


(Q
k
j=1 j
i! :=
1

,i N
,i = 0

denotes the factorial of i N {0}, and where the remainder function r : 7 R satisfies
r(z)
= 0.
z0 |z|k1
lim

Proof. See RUDIN [178, Exercise 30].


Theorem 3.4. (Greens theorem) Let R m , L, and let u, v C 1 (). Then, the relation
Z
Z
Z
u
v
v dx +
dx =
u
uvni ds for i = 1, . . . , m
xi
xi

holds, where n := (n1 , . . . , nm ) denotes the outer unit normal to .


Proof. See Exercise 1.22.3 in O DEN
being a (closed) C 1 curve.

AND

D EMKOWICZ [149, p. 120] for the case m = 2 and

m

Corollary 3.1. Let the assumptions on and u hold and let v C 1 () . Then the following
is satisfied
Z
Z
Z
(un) v ds ,
u div(v) dx =
grad(u) v dx +

where the differential operators grad and div are respectively defined as follows:


u
u
grad(u) :=
, u C 1 (),
,...,
x1
xm
div(v) :=

m
X
vi
,
xi
i=1


m
v := (v1 , . . . , vm ) C 1 () .

Proof. Denote v := (v1 , . . . , vm ), then, for each i = 1, . . . , m Greens formula


Z
Z
Z
vi
u
u
(uni )vi ds
vi dx +
dx =
xi
xi

holds. Summing up the latter for the index i = 1, . . . , m, we get the assertion.

3
Corollary 3.2. Let R3 , L, and let u, v C 1 () . Then the following is satisfied
Z
Z
Z
(n u) v ds,
(3.18)
u curl(v) dx =
curl(u) v dx


3
where for u := (u1 , u2 , u3 ) C 1 () the differential operators curl is defined by


u3 u2 u1 u3 u2 u1

,
curl(u) :=
x2 x3 x3 x1 x1 x2

(3.19)

3.2. PRELIMINARIES FROM REAL ANALYSIS

23

and the cross product is as follows:


n u := (n2 u3 n3 u2 , n3 u1 n1 u3 , n1 u2 n2 u1 ),

(3.20)

where n := (n1 , n2 , n3 ) denotes the outer unit normal vector to .


Proof. Using the definitions (3.19), (3.20), and Theorem 3.4, we can easily see that the relation (3.18) holds.
We will extend Theorem 3.4 for a general linear differential operator of the first order. To this
end we define the following.




Definition 3.2. Suppose Rm , L. Let B : C 1 () 1 7 C() 2 , 1 , 2 N, be a
linear differential operator of the first order defined by
B(u) := (B1 (u), . . . , B2 (u)) , where Bi (u) :=

1 X
m
X

(k,l) uk

, bi

k=1 l=1

xl

(3.21)



(k,l)
where bi
R for i = 1, . . . , 2 , and where u := (u1 , . . . , u1 ) C 1 () 1 . We define the




adjoint operator B : C 1 () 2 7 C() 1 to the operator B by

B (v) :=

B1 (v), . . . , B1 (v)

, where

Bk (v)

:=

2 X
m
X

(k,l) vi

bi

i=1 l=1

xl

for k = 1, . . . , 2 ,

(3.22)


and where v := (v1 , . . . , v2 ) C 1 () 2 .


Moreover, we define the trace operator : C() 1 7 [C()]2 associated to B by
(u) := (1 (u), . . . , 2 (u)) , where i (u) :=

1 X
m
X

(k,l)

bi

k=1 l=1

uk | nl

for i = 1, . . . , 2 ,
(3.23)

where n := (n1 , . . . , nm ) is the outer unit normal to .


The following is a consequence of Theorem 3.4.
Corollary 3.3. Let
and notation of the
 1the assumptions
 previous
 1
2 definition are fulfilled. Let u :=
1
(u1 , . . . , u1 ) C () and v := (v1 , . . . , v2 ) C () . Then the following is satisfied
Z
Z
Z
(u) v ds.
(3.24)
u B (v) dx =
B(u) v dx +



Proof. Let u C 1 ()
and v C 1 () 2 be arbitrary. Using the previous definition, we
write down the lefthand side of (3.24)
Z

 1

Z


Z
uk
vi
u B (v) dx =
B(u) v dx +
uk
vi dx +
dx =
xl
xl

k=1 i=1 l=1


Z
Z
2 X
1 X
m
X
(k,l)
bi
uk vi nl ds =
(u) v ds,
=
Z

1 X
2 X
m
X

k=1 i=1 l=1

where we used Theorem 3.4.

(k,l)
bi

24

CHAPTER 3. ABSTRACT BOUNDARY VECTORVALUE PROBLEMS


3
Theorem 3.5. (Stokes theorem) Let R 3 be an open set and let u := (u1 , u2 , u3 ) C 1 () ,
where is a 2dimensional surface with the boundary being a piecewise C 1 curve. Then the
following is satisfied
Z
Z

curl(u) n dx =

u n ds,

where n := (n1 , n2 , n3 ) denotes the outer unit normal to .


Proof. See RUDIN [178, p. 287]

3.3 Hilbert function spaces


3.3.1 Lebesgue spaces
Let L. In the sequel all the integrals will be understood in the Lebesgues sense, cf. L UKE S
[129]. We introduce the Lebesgue spaces of realvalued functions
AND M AL Y
Z



p
p

L () := u : R
|u| dx < + , p [1, ),

equipped with the norm

kukLp () :=

Z

|u| dx

1/p

u Lp ().

Let p, q (1, ) be adjoint by

1 1
+ =1
p q
and let u Lp (), v Lq (), then the following Holder inequality holds
Z
Z
1/q
1/p Z


q
p
uv dx =
.
|v| dx
|u| dx

(3.25)

The Lebesgue space of measurable essentially bounded functions is defined by







L () := u : R ess sup |u(x)| < +
x

and it is equipped with the norm

kukL () := ess sup |u(x)|,


x

u L ().

We say that the set is measurable in the Lebesgue sense if the following Lebesgue integral
exists
Z
dx,
meas() :=

and we call it the measure of .


Let L. We say that the function u L 1 () is defined almost everywhere (a.e.) if it is
defined for each x \ , where the subset is such that meas() = 0. The notion almost
1
everywhere is understood similarly in different contexts, e.g., the sequence {u n }
n=1 L () is
said to converge almost everywhere to u : 7 R if for any such that meas() = 0 the
following holds
un (x) u(x) in \ .

3.3. HILBERT FUNCTION SPACES

25

1
Theorem 3.6. (Lebesgue dominated convergence theorem) Let {u n }
n=1 L () be a sequence
of functions measurable in the Lebesgues sense. Let u n u almost everywhere in , where
u : R is a function. If there exists a function v L 1 () such that |un | v almost
everywhere in for all n N, then u L 1 () and
Z
Z
u dx = lim
un dx.
n

Proof. See L UKE S

AND

M AL Y [129, p. 26].

3.3.2 Sobolev spaces


There is a lot of references on this topic. Let us mention the monographs by S OBOLEV [194],
N E C AS [141], A DAMS [4], K UFNER , J OHN , AND F U C I K [116], M AZYA [134], or the paper
by D OKTOR [56].
The function z L2 () is said to be the th generalized derivative of the function u
2
L () if the following is satisfied
Z
Z

||
v C0 () :
zv dx = (1)
u D v dx.
(3.26)

We can easily see that for any u C k (), k N {0}, for a multiindex such that || k,
and for z := D u C(), which is the th classical derivative of u, the relation (3.26) holds
in virtue of Theorem 3.4. Therefore, we can extend the symbol D u and we denote the th
generalized derivative still by D u := z.
Now, for k N {0} we define the Sobolev spaces as follows:
H k () := {u L2 () | : || k D u L2 ()}.
The latter, equipped with the scalar product
X Z
(u, v)k, :=
D uD v dx,
||k

u, v H k (),

forms a Hilbert space with the following induced norm and seminorm
v
uX Z
q
u
|D u|2 dx,
kukk, := (u, u)k, , |u|k, := t
||=k

u H k (),


n
respectively. The Sobolev spaces of vector functions H k () , n N, equipped with the scalar
product
n
h
in
X
(u, v)n,k, :=
(ui , vi )k, , u, v H k () ,
i=1

where u := (u1 , . . . , un ) and v := (v1 , . . . , vn ) are Hilbert spaces, too.


We will make use of some properties of Sobolev spaces. The following theorem gives us an
insight how functions behave along the boundary .
Theorem 3.7. (Trace theorem) Let L. Then there exists exactly one linear continuous operator : H 1 () 7 L2 () such that
u C () : (u) = u| .

26

CHAPTER 3. ABSTRACT BOUNDARY VECTORVALUE PROBLEMS

Proof. See K UFNER , J OHN ,

AND

F U C I K [116, p. 318] or N E C AS [141, p. 15].

The function (u) is called the trace of u. The trace theorem enables us to define the space


H01 () := u H 1 () | (u) = 0 .

Finally, we denote the space of traces by




H 1/2 () := v L2 () | u H 1 () : (u) = v

and its dual space


H 1/2
 by
n ().

n

n

The spaces C
and [C0 ()]n , respectively, are dense in H 1 () and H01 () ,
i.e.,
 1
n 
n

n
H () = C
and H01 () = [C0 ()]n in the norm k kn,1, .
(3.27)
The next theorem extends Theorem 3.4.

Theorem 3.8. (Greens theorem) Let R m , L, and let u, v H 1 (). Then the relation
Z
Z
Z
v
u
u
(u)(v)ni ds for i = 1, . . . , m
v dx +
dx =
xi

xi
holds, where n := (n1 , . . . , nm ) denotes the outer unit normal to .
Proof. See N E C AS [141, p. 29].
Note that, avoiding some additional effort, yet we have not defined either the boundary integral
or the space L2 (), for which we refer to K UFNER , J OHN , AND F U C I K [116] or N E C AS [141].
The last theorem, of which we will make use later when analyzing the ellipticity of differential

operators in H 1 (), is due to H ASLINGER AND N EITTAANM AKI


[85, p. 9] or K R I Z EK AND
[115, p. 26].
N EITTAANM AKI
Theorem 3.9. (Friedrichs inequality) Let L. Then there exists a positive constant C 3
C3 () such that
u H01 () : kuk1, C3 |u|1, .
Proof. See N E C AS [141, p. 30].

3.3.3 The space H(grad)


Let Rm , L. In the previous section we extended the notion of the partial derivative

m
to the generalized case. Now we extend the differential
operator grad : C 1 () 7 C()

m
onto a subspace of L2 (). The function z L2 () is said to be the generalized gradient of
u L2 () if the following is satisfied
Z
Z
m

z v dx,
u div(v) dx =
v [C0 ()] :

and we denote the generalized gradient by grad(u) := z. In particular, from (3.26) it is clear that


u
u
1
,...,
,
u H () : grad(u) =
x1
xm

3.3. HILBERT FUNCTION SPACES

27

where the partial derivatives are the generalized ones. We define the space


m

H(grad; ) := u L2 () | z L2 () : z = grad(u) .

The latter together with the scalar product


(u, v)grad, :=

uv dx +

grad(u) grad(v) dx,

u, v H(grad; ),

forms a Hilbert space. We introduce the following induced norm and seminorm
kukgrad, :=

(u, u)grad, ,

|u|grad, :=

sZ

kgrad(u)k2 dx,

u H(grad; ),

respectively. Clearly
kuk2grad, = kuk20, + |u|2grad, ,

u H(grad; ),

holds. From the definition of H 1 () it is obvious that H(grad; ) = H 1 (), (u, v)grad, =
(u, v)1, , kukgrad, = kuk1, , and |u|grad, = |u|1, . Therefore, Theorem 3.7 holds and we can
define the space
H0 (grad; ) := {u H(grad; ) | (u) = 0} ,
where the trace operator
(u) := (u)n,
where n := (n1 , . . . , nm ) denotes the outer unit normal to and is due to Theorem 3.8.
Obviously, H0 (grad; ) = H01 () holds and H0 (grad; )/Ker(grad; ) is equal to H 01 (),
since Ker(grad; ) = {0}, where due to (3.2)
Ker(grad; ) := {u H0 (grad; ) | grad(u) = 0} .

(3.28)

m
Theorem 3.10. (Greens
 1 theorem
m in H(grad)) Let R , L, and let u H(grad; ),
v := (v1 , . . . , vm ) H () . Then the relation

grad(u) v dx +

u div(v) dx =

(u) v dx


m
holds, where the differential operator div is extended onto H 1 () as follows:
div(v) :=

m
X
vi
,
xi
i=1


m
v := (v1 , . . . , vm ) H 1 () .

Proof. We use Theorem 3.8 and similar arguments as in the proof of Corollary 3.1.
Finally, in Theorem 3.9 we replace the symbols kuk1, and |u|1, by the symbols kukgrad,
and |u|grad, , respectively, and the theorem holds with the same constant C 3 .

CHAPTER 3. ABSTRACT BOUNDARY VECTORVALUE PROBLEMS

28

3.3.4 The space H(curl)


Let R3 and L. Now, like in the cases of D and grad, we extend the differential

3

3

3

3
operator curl : C 1 () 7 C() onto a subspace of L2 () . The function z L2 ()

3
is said to be the generalized rotation of u L2 () if the following is satisfied
Z
Z
3

v [C0 ()] :
u curl(v) dx =
z v dx,

and we denote the generalized rotation by curl(u) := z. We define the space


n
o

3

3
H(curl; ) := u L2 () z L2 () : z = curl(u) ,

which, together with the scalar product


Z
Z
curl(u) curl(v) dx,
u v dx +
(u, v)curl, :=

u, v H(curl; ),

forms a Hilbert space. We introduce the induced norm and seminorm


sZ
q
kukcurl, := (u, u)curl, , |u|curl, :=
kcurl(u)k2 dx,

u H(curl; ),

respectively, and the following is satisfied


kuk2curl, = kuk23,0, + |u|2curl, ,
The following two theorems are due to G IRAULT

AND

u H(curl; ).
R AVIART [67, p. 34].

Theorem 3.11. (Trace theorem in H(curl)) Let R 3 , L. Then there exists exactly one

3
linear continuous operator : H(curl; ) 7 H 1/2 () such that
3

u C () : (u) = n u| ,

where n is the outer unit normal to .


Proof. See G IRAULT

AND

R AVIART [67, p. 34].

Theorem 3.11 enables us to define the space


H0 (curl; ) := {u H(curl; ) | (u) = 0} .
Then, due to (3.2),
Ker(curl; ) := {u H0 (curl; ) | curl(u) = 0} .
By G IRAULT

R AVIART [67, Corollary 2.9], the space Ker(curl; ) is equal to the space
n
o

3
H0,0 (curl; ) := u L2 () p H01 () : u = grad(p)
(3.29)

AND

and, by H IPTMAIR [91, p. 9495], the quotient space H0 (curl; )/Ker(curl; ) is isomorphically isometric to



Z

1

u grad(v) dx = 0 ,
(3.30)
H0, (curl; ) := u H0 (curl; ) v H0 () :

3.3. HILBERT FUNCTION SPACES

29

and, moreover, the following orthogonal decomposition holds

i.e.,


The spaces C

H0 (curl; ) = H0, (curl; ) H0,0 (curl; ).


3

and [C0 ()]3 are dense in H(curl; ) and H0 (curl; ), respectively,


3
and H0 (curl; ) = [C0 ()]3 in the norm k kcurl, .
H(curl; ) = C

(3.31)

Theorem 3.12. (Greens theorem in H(curl)) Let R 3 , L, and let u H(curl; ),



3
v H 1 () . Then the relation
Z
Z
u curl(v) dx = h(u), vi
curl(u) v dx


3

3
holds, where h(u), vi denotes the duality pairing between H 1/2 () and H 1/2 () .

Proof. See G IRAULT

AND

R AVIART [67, p. 34].

The last theorem is a Friedrichslike inequality and it will be useful for analyzing the ellipticity of differential operators defined on H(curl; ).
Theorem 3.13. (Friedrichs inequality in H(curl)) Let R 3 , L. Then there exists a
positive constant C4 C4 () such that
u H0, (curl; ) : kukcurl, C4 |u|curl, .
Proof. See H IPTMAIR [91, p. 96].

3.3.5 The space H(div)


3

Let R3 and L. We extend the differential operator div : C 1 () 7 C() onto

3
a subspace of L2 () . The function z L2 () is said to be the generalized divergence of

3
u L2 () if the following is satisfied
Z
Z

v C0 () :
u grad(v) dx =
zv dx,

and we denote the generalized divergence by div(u) := z. We define the space


n
o
3
H(div; ) := u L2 () z L2 () : z = div(u) ,

which, together with the scalar product


Z
Z
(u, v)div, :=
u v dx +
div(u) div(v) dx,

u, v H(div; ),

forms a Hilbert space. We introduce the induced norm and seminorm


sZ
q
(div(u))2 dx,
kukdiv, := (u, u)div, , |u|div, :=

u H(div; ),

respectively, and the following is satisfied


kuk2div, = kuk23,0, + |u|2div, ,
The following two theorems are due to G IRAULT

AND

u H(div; ).
R AVIART [67, p. 2728].

CHAPTER 3. ABSTRACT BOUNDARY VECTORVALUE PROBLEMS

30

Theorem 3.14. (Trace theorem in H(div)) Let R 3 , L. Then there exists exactly one
linear continuous operator : H(div; ) 7 H 1/2 () such that
3

u C () : (u) = n u| ,

where n is the outer unit normal to .


Proof. See G IRAULT

AND

R AVIART [67, p. 27].

Theorem 3.14 enables us to define the spaces


H0 (div; ) := {u H(div; ) | (u) = 0} ,

i.e.,


The spaces C

Ker(div; ) := {u H0 (div; ) | div(u) = 0} .


3

and [C0 ()]3 , respectively, are dense in H(div; ) and H 0 (div; ),

3

and H0 (div; ) = [C0 ()]3 in the norm k kdiv, .
H(div; ) = C

Theorem 3.15. (Greens theorem in H(div)) Let R 3 , L, and let u H(div; ),


v H 1 (). Then the relation
Z
Z
div(u) v dx +
u grad(v) dx = h(u), vi

holds, where h(u), vi denotes the duality pairing between (u) H 1/2 () and v
H 1/2 ().
Proof. See G IRAULT

AND

R AVIART [67, p. 28].

3.3.6 The abstract space H(B)


In the previous subsections we could observe a similar structure, which we will formally summarize now.




Suppose Rm , L. Let B : C 1 () 1 7 C() 2 be a linear differential operator


of the first order defined by (3.21), where 1 , 2 N. Let the adjoint operator B : C 1 () 2 7


C() 1 be defined by (3.22). Then,
formula (3.24)holds. We extend the differential
 2 theGreens

1
operator B onto a subspace of L () . The function z
L2 () 2 is said to be the gen

eralized firstorder linear differential operator B of u L2 () 1 if the following is satisfied


Z
Z
v [C0 ()]2 :
u B (v) dx = z v dx,
(3.32)

and we denote the generalized operator by B(u) := z. We define the space








H(B; ) := u L2 () 1 | z L2 () 2 : z = B(u) ,

which is obviously a linear space. Further, we define the bilinear form


Z
Z
B(u) B(v) dx, u, v H(B; ),
u v dx +
(u, v)B, :=

(3.33)

3.3. HILBERT FUNCTION SPACES

31

which can be shown to be a scalar product on H(B; ). The space H(B; ), together with this
scalar product, forms a Hilbert space. The induced norm and seminorm are as follows:
sZ
q
kB(u)k2 dx, u H(B; ),
kukB, := (u, u)B, , |u|B, :=

and the following is satisfied


kuk2B, = kuk21 ,0, + |u|2B, ,

u H(B; ).

(3.34)

We assume that the following trace property holds.




Assumption 3.2. Let Rm , L. We assume that the trace operator : C () 1 7
[C()]2 defined by
by continuity
to the operator, still denoted
 (3.23) can be
2 uniquely extended


by, : H(B; ) 7 H 1/2 () such that on C () 1 the relation (3.23) holds.
Now, we define the spaces

H0 (B; ) := {u H(B; ) | (u) = 0} ,


Ker(B; ) := {u H0 (B; ) | B(u) = 0} .
(3.35)



Assumption 3.3. Let Rm , L. We assume that C 1 and [C0 ()]1 , respectively,


are dense in H(B; ) and H0 (B; ), i.e.,


H(B; ) = C 1 and H0 (B; ) = [C0 ()]1 in the norm k kB; .
The following lemma gives a space which will be useful for the finite element approximation.

Lemma 3.5. The space


H0, (B; ) := {u H0 (B; ) | Ker(B; ) : (u, )1 ,0, = 0}

(3.36)

is isomorphically isometric to H0 (B; )/Ker(B; ) and the orthogonal decomposition


H0 (B; ) = H0, (B; ) Ker(B; )

(3.37)

holds.
Proof. Here, we use exactly the same technique as presented in H IPTMAIR [91, p. 9495].
Let us recall the norm in the quotient space H 0 (B; )/Ker(B; )
k[v]kH0 (B;)/Ker(B;) :=

min

wKer(B;)

kv + wkB, ,

v H0 (B; ).

We look for a subspace of H0 (B; ) that consists of the minimizers v + w(v) determined as
follows:
n
o
kv + w(v)k 2B, = kvk2B, +
min
2 (v, w(v)) 1 ,0, + kw(v)k21 ,0, .
w(v)Ker(B;)

Due to Lemma 3.2, we arrive at the variational problem


Find w(v) Ker(B; ) :

(, w(v)) 1 ,0, = (v, )1 ,0,

Ker(B; )

32

CHAPTER 3. ABSTRACT BOUNDARY VECTORVALUE PROBLEMS

and, since (, )1 ,0, is a scalar product on Ker(B; ), by Theorem 3.1 w(v) is unique. Therefore, the minimizer u := v + w(v) H0 (B; ) is uniquely characterized by
Ker(B; ) : (u, )1 ,0, = (u, )B, = 0.

(3.38)

The space H0, (B; ), see (3.36), which consists of such minimizers is a closed subspace of
H0 (B; ) and, due to (3.38) and (3.8),
H0, (B; ) = Ker(B; ) ,
which completes the proof.
Further, we assume that the following Greens formula holds.


Assumption 3.4. Let Rm , L, and let u H(B; ), v H 1 () 2 . We assume that
the relation
Z
Z
B(u) v dx +
u B (v) dx = h(u), vi


 2

 2
holds, in which the duality pairing between H 1/2 ()
and H 1/2 ()
is denoted by
h(u), vi .
Finally, we will need the ellipticity. To this end, we assume that the following Friedrichslike
inequality holds.
Assumption 3.5. Let Rm , L. We assume that there exists a positive constant C 5
C5 () such that
u H0, (B; ) : kukB, C5 |u|B, .
At the end, we summarize how the abstract operators B, B , and read in the spaces introduced above.
m
N
N
3

1
1
m
3

2
m
1
3

H(B; )
H(grad; )
H(div; )
H(curl; )

B
grad
div
curl

B
div
grad
curl

(v)
vn
nv
nv

Table 3.1: Operators in Hilbert function spaces

3.4 Weak formulations of boundary vectorvalue problems


Let us refer to some more literature, where the authors deal with weak settings of boundary value
problems, see R ITZ [172], AUBIN [12], WASHIZU [212], G ROSSMANN AND ROSS [71], J OHN C EK , H ASLINGER , N E C AS , AND L OVI S EK [95].
SON [101], S HOWALTER [189], or H LAV A
m
[115, p. 27], we consider a
Let R , L. Like in K R I Z EK AND N EITTAANM AKI


boundary value problem, the strong formulation of which reads as follows: Find u C 2 () 1 ,
1 N, such that
)
B (D B(u)) = f in
,
(S)
(u) = 0 on

3.4. WEAK FORMULATIONS OF BOUNDARY VECTORVALUE PROBLEMS

33



where B, B , and are defined by Definition 3.2, and where D C 1 () 2 2 , 2 N, is a
uniformly positive definite realvalued matrix, i.e., there exists a constant C 6 > 0 such that
x v R2 : v (D(x) v) C6 kvk2 ,
(3.39)
 1
 2
 1

and where f C() . The function u C () is called the classical solution to (S).
Now, we introduce a weak setting of (S), which will enable us to weaken the assumptions on
the differentiability of the data in (S) and to deal with problems of more practical purposes. Let
us take into account the extensions of definitions of B, B , and from Section 3.3.6, as well as
Assumptions 3.23.5 and Lemma 3.5 introduced there. We define the continuous bilinear form
a : H(B; ) H(B; ) 7 R and the continuous linear functional f : H(B; ) 7 R by
Z
B(v) (D B(u)) dx, u, v H(B; ),
(3.40)
a(v, u) :=

f (v) :=

f v dx,

v H(B; ),

(3.41)



respectively, where f L2 () 1 , D := (di,j ) is a matrix the entries of which di,j L (),
i, j = 1, . . . , 2 , and the condition (3.39) holds almost everywhere (a.e.) in . A weak formulation
of the problem (S) reads as follows:
)
Find u H0 (B; ):
.
(3.42)
a(v, u) = f (v) v H0 (B; )


Just by applying Corollary 3.3, we can see that the classical solution u C 2 () 1 of the
problem (S) is also a solution to (3.42). However, the problem (3.42) admits more general and
physically still reasonable data.
We can observe that if u H0 (B; ) is a solution to (3.42), then for any p Ker(B; )
the function u + p is a solution, too. This indicates a Neumannlike problem. Therefore, we
restrict our consideration onto the quotient space H 0 (B; )/Ker(B; ), which is by Lemma 3.5
isomorphically isometric to H0, (B; )), being a subspace of H0 (B; ). In this case, we have to
introduce a compatibility condition on the righthand side f
Z
p Ker(B; ) :
f p dx = 0.
(3.43)

The correct weak formulation of (S) reads as follows:


Find u H0, (B; ):
a(v, u) = f (v)

v H0, (B; )

(W )

where f satisfies (3.43). It can be easily verified that a solution u to (W ) also solves the problem
(3.42). On the other hand, the next theorem shows that (W ) has a unique solution, unlike the
problem (3.42).
Theorem 3.16. There exists exactly one solution u H 0, (B; ) to the problem (W ). Moreover,
there exists a positive constant C7 such that
kukB, C7 kf k1 ,0, .

(3.44)

CHAPTER 3. ABSTRACT BOUNDARY VECTORVALUE PROBLEMS

34

Proof. We will check the assumptions of Lemma 3.1 and the assertion then follows.
Since H0, (B; ) is a closed subspace of H0 (B; ), it is also a Hilbert space equipped with
the scalar product (3.33). The form a is clearly bilinear. Concerning the matrix D, we denote


d := max ess sup |di,j (x)|
(3.45)
i,j

and, using (3.3) and (3.34), we prove the continuity of a as follows:



Z



|a(v, u)| d B(v) B(u) dx d kB(v)k2 ,0, kB(u)k2 ,0, d kvkB, kukB, ,

where u, v H(B; ), and where we used that D [L ()]2 2 , (3.3), and (3.34), respectively. The H0, (B; )ellipticity of a follows from
Z
Z
C6
a(v, v) =
B(v) (D B(v)) dx C6
kB(v)k2 dx 2 kvk2B, , v H0, (B; ),
C5

(3.46)
where (3.39) and Assumption 3.5 were used, respectively. Finally, f is obviously a linear functional on H(B; ) and it is continuous thereon, too, since
Z




|f (v)| = f v dx kf k1 ,0, kvk1 ,0, kf k1 ,0, kvkB, , v H(B; ),

where we used (3.3) and (3.34), respectively. The assertion now follows from Lemma 3.1, where
the H0, (B; )ellipticity constant is
C6
C7 := 2 .
C5

3.4.1 A regularized formulation in H0 (B)


In many cases, as in both 2d and 3d magnetostatics, we look for B(u) rather than for u, a solution
to (W ). Since there is the additional condition (3.38) in the definition of the space H0, (B; ),
it would be difficult to approximate the space by the finite element method. Therefore, we will
introduce a regularized weak formulation in the original space H 0 (B; ) such that its solution
tends towards the solution u H0 (B; ) of the problem (W ), but in the seminorm | | B, only.
Let > 0 be a regularization parameter. We introduce the following bilinear form
Z
v u dx, u, v H(B; ),
(3.47)
a (v, u) := a(v, u) +

where a is given by (3.40). The regularized weak formulation reads as follows:


)
Find u H0 (B; ):
,
a (v, u ) = f (v) v H0 (B; )

(W )

where f is given by (3.41) such that (3.43) holds.


Theorem 3.17. For each > 0 there exists a unique solution u H0 (B; ) to the problem
(W ). Moreover, there exists a positive constant C 8 () such that
ku kB, C8 () kf k1 ,0, .

3.4. WEAK FORMULATIONS OF BOUNDARY VECTORVALUE PROBLEMS

35

Proof. The proof is fairly the same as the one of Theorem 3.16. The continuity of a is proven as
follows:

Z
Z








|a (v, u)| d B(v) B(u) dx + v u dx max {d, } kvkB, kukB, , (3.48)

where u, v H(B; ) and where d is given by (3.45). The H0 (B; )ellipticity of a follows
from
Z
Z
kB(v)k2 dx +
a (v, v) C6
|v|2 dx min{C6 , }kvk2B, , v H(B; ),

where C6 is given by (3.39). Therefore, the H0 (B; )ellipticity constant is


C8 () := min{C6 , }.

(3.49)

Theorem 3.18. The following holds:




B(u ) B(u) in L2 () 2 , as 0+ ,

(3.50)

where u H0 (B; ) are the solutions to (W ) and u H0, (B; ) is the solution to (W ).
Proof. Let > 0 be arbitrary. Using (3.39) and the definitions of (W ) and (W ), we have
Z
2
2
kB(u ) B(u)k2 dx
kB(u ) B(u)k2 ,0, = |u u|B, =

Z
1
1

a(u u, u u)
B(u u) (D B(u u)) dx =
C6
C6
1
1
a (u u, u u)) =
(f (u u) a (u u, u)) =

C6
C6


Z
1
f (u u) a(u u, u) (u u) u dx
=
C6

(3.51)
Using the orthogonal decomposition (3.37), there exists u, H0, (B; ) and there exists
u,0 Ker(B; ) such that
u = u, + u,0 .
Using the latter, the condition (3.43), (3.38) and (3.25), the estimate (3.51) reads
kB(u ) B(u)k22 ,0, = kB(u, ) B(u)k22 ,0,


Z
1

f (u, u) a(u, u, u) (u, u) u dx


C6


ku, ukB, kuk1 ,0, .
(u, u) u dx

C6
C6

Now we use Assumption 3.5

kB(u ) B(u)k22 ,0, = |u, u|2B,

C5
|u, u|B, kuk1 ,0, .
C6

After dividing the latter by |u, u|B, , the statement follows.

36

CHAPTER 3. ABSTRACT BOUNDARY VECTORVALUE PROBLEMS

3.4.2 A weak formulation of threedimensional linear magnetostatics


Here we apply the results of the previous sections to the strong formulation (2.5).
Let R3 , L. Taking a look into Table 3.1, at (S), and (2.5), we specify the symbols
B := curl,

B := curl,

and

(v) := n v| ,

where n := (n1 , n2 , n3 ) denotes the outer unit normal to . Due to (2.5), we determine the
symbols
1
D := , f := J,


3
where L (), > 0 a.e. in , J L2 () . The condition (3.39) is now equivalent to
1 > 0 : (x) 1 a.e. in

(3.52)

in such a way that


C6 :=

1
.
1

Since Ker(curl; ) is equal to the space H 0,0 (curl; ) defined by (3.29) the condition (3.43)
reads as follows:
Z
1
J grad(p) dx = 0.
p H0 () :

Finally, we specify the terms in (W ). As we have seen in Section 3.3.4, the quotient space
H0 (curl; )/Ker(curl; ) is isomorphically isometric to the space H 0, (curl; ), which was
defined by (3.30). The bilinear form (3.40) and the linear form (3.41) are, respectively, determined
by


Z
1
a(v, u) :=
curl(v)
curl(u) dx, u, v H(curl; ),

and by
f (v) :=

J v dx,

v H(curl; ).

We have specified all the assumptions on the abstract weak formulation (W ) introduced in Section 3.4. Therefore, Theorem 3.16 holds with the H0, (curl; )ellipticity constant
C7 :=

1
,
0 C42

where C4 is given by Theorem 3.13. In case of the formulation (W ), we only recall the regularized
bilinear form


Z
Z
1
v u dx, u, v H(curl; ),
curl(u) dx +
curl(v)
a (v, u) :=

whose ellipticity constant is given by (3.49).

3.4. WEAK FORMULATIONS OF BOUNDARY VECTORVALUE PROBLEMS

37

3.4.3 A weak formulation of twodimensional linear magnetostatics


Now we apply the results of Section 3.4 to the strong formulation (2.6).
Let R2 , L. We specify the symbols
B := grad,

B := div,

and

(v) := v| n,

where n := (n1 , n2 ) denotes the outer unit normal to . Due to (2.6), we determine the symbols
D :=

1
,

f := J,

(3.53)

where L (), > 0 a.e. in , J L2 (). We again replace the condition (3.39) by (3.52).
Since Ker(grad; ) defined by (3.28) is equal to the zero space {0}, the condition (3.43) always
holds and does not need to be introduced in this case. Finally, we specify the terms in (W ). As
we have seen in Section 3.3.3, the quotient space H0 (grad; )/{0} is equal to the space H 01 ().
The bilinear form (3.40) and the linear functional (3.41) are, respectively, determined by


Z
1
grad(u) dx, u, v H 1 (),
a(v, u) :=
grad(v)

and by
f (v) :=

Jv dx,

v H 1 ().

Now, Theorem 3.16 holds with the H01 ()ellipticity constant


C7 :=

1
,
0 C32

where C3 is given by Theorem 3.9. In this case, we do not need to introduce the regularized
problem (W ), since all the spaces are equal
H0, (grad; ) = H0 (grad; ) = H01 ().

38

CHAPTER 3. ABSTRACT BOUNDARY VECTORVALUE PROBLEMS

Chapter 4

Finite element method


In this chapter, we will recall the basic techniques of the finite element method. First, we will
present the ideas and concept of the method. Then, we will deal with algorithmic issues. Further,
approximation properties will be proved. At the end, we will describe the finite elements used
for the linear magnetostatics, namely Lagrange nodal elements on triangles and Nedelec edge
[115, Chapter 4] for a detailed
elements on tetrahedra. We refer to K R I Z EK AND N EITTAANM AKI
description of the method.
The large popularity of the method can be documented by the following unsorted list of literature: G IRAULT AND R AVIART [67], B RAESS [27], B OSSAVIT [26], S TRANG AND F IX [200],

C IARLET [45], Z L AMAL


[219], R AVIART AND T HOMAS [167], H IPTMAIR [91, 93], Z IENKIE WICZ [217], Z IENKIEWICZ AND TAYLOR [218], H ASLINGER , M IETTINEN , AND PANAGIOTOP ULOS [84], B REZZI AND F ORTIN [33], H UGHES [98], J UNG AND L ANGER [103], B RENNER
AND S COTT [32], G ROSSMANN AND ROSS [71], J OHNSON [101], K IKUCHI [108], O DEN AND
R EDDY [150], S CHWAB [187], S ZAB O AND BABU S KA [205], BABU S KA AND A ZIZ [14, 15],
[114, 115], K R I Z EK [113], N EITTAANM AKI

G LOWINSKI [68], K R I Z EK AND N EITTAANM AKI


AND S ARANEN [147, 146], S ILVESTER AND F ERRARI [192], H ACKBUSCH AND S AUTER [79,
80].

4.1 The concept of the method


We consider the regularized weak formulation (W ) of the abstract elliptic linear boundary vector
value problem (S), which was introduced in Section 3.4.1. The aim of this chapter is to develop
a method which approximates the continuous regularized solution u of the problem (W ) by a
sequence of some discretized solutions u h , where h > 0 stands for a discretization parameter.

4.1.1 Galerkin approximation


Let Vh H0 (B; ) be a closed subspace. We introduce approximations of the bilinear form (3.47)
and of the linear functional (3.41), respectively, by
Z
Z


ah (v, w) :=
B(v) Dh B(w) dx +
v w dx, v, w H(B; ),
(4.1)

f (v) :=

f h v dx,
39

v H(B; ),

(4.2)

CHAPTER 4. FINITE ELEMENT METHOD

40
where f h L2 ()

 1

and where Dh := dhi,j

is such that
h > 0 p Ker(B; ) :
 2

i,j=1

f h p dx = 0,

is a matrix the entries of which dhi,j L () and such that





h > 0 : ess sup dhi,j (x) ess sup |di,j (x)| ,

i, j = 1, . . . , 2 .

(4.3)

Both f h and Dh are assumed to be piecewise constant. Moreover, we suppose that




h > 0 v R2 : v Dh (x) v C6 kvk2 a.e. in ,

(4.4)

where C6 > 0 is given by (3.39). We consider the following problem

Find uh Vh :


 
,
ah vh , uh = f h vh
vh Vh

(Wh )

which is called the Galerkin approximation to the problem (W ).

Theorem 4.1. For each > 0 and h > 0 there exists a unique solution u h Vh to the problem
(Wh ). Moreover, there exists a positive constant C 8 () such that


h > 0 : kuh kB, C8 () f h
.
1 ,0,

Proof. The proof is fairly the same as the one of Theorem 3.17.
Let > 0 and h > 0 be arbitrary. By definition, V h is a closed subspace of H0 (B; ),
therefore, it is also a Hilbert space. The form a h is obviously bilinear and f h is a linear functional
on Vh . We have the continuity of ah



h
(v,
w)
max{dh , } kvkB, kwkB, , v, w H(B; ),
a

where




h
d := max ess sup di,j (x) .
h

i,j

Due to (4.4), we have the

Vh ellipticity

of

ah

ah (v, v) min{C6 , } kvk2B, ,

v H(B; ),

independently of h. Finally, the continuity of f h follows from




kvkB, , v H(B; ).
f h (v) f h
1 ,0,

Therefore, the assertion follows with the V h ellipticity constant


C8 () := min{C6 , }.

(4.5)

4.1. THE CONCEPT OF THE METHOD

41

The following lemma, cf. B RAESS [27], C IARLET [45], or S TRANG AND F IX [200], says
that we can study the approximation properties of u h via the approximation of H0 (B; ) by its
subspaces Vh and via the approximation properties of the forms a h and f h .
Lemma 4.1. (1st Strangs lemma) Let > 0 and h > 0 be a regularization and discretization
parameter, respectively. Then there exists a positive constant C 9 (), independent of h, such that






h
h
h
v V : u u
C9 () u vh
+
B,
B,



a v h , u h v h a h v h , u h v h

+
+
kuh vh kB,


 )
f uh vh f h uh vh
+
,
kuh vh kB,
where u H0 (B; ) is a solution to (W ) and uh Vh are solutions to (Wh ).
Proof. Let h > 0 and vh Vh be arbitrary. By the triangle inequality,












u vh
+ uh vh
u uh
B,

B,

B,

(4.6)

To make the proof more readable, we introduce the symbol


wh := uh vh .

(4.7)

Now, we use the Vh ellipticity of a , the bilinearity of a , ah , and the definitions of problems (W ), (Wh ), respectively, and we get
2









ah wh , wh = a wh , u vh a wh , u vh + ah wh , wh =
C8 () wh
B,

  



= a wh , u vh + a wh , vh ah wh , vh +
 



+ ah wh , uh a wh , u
=



  

= a wh , u vh + a wh , vh ah wh , vh +
 
  
+ f h wh f wh .
Dividing the latter by kwh kB, and using (3.48) yield


C8 () wh

B,





max{d, } u vh




a vh , wh ah vh , wh
+
+
kwh kB,
B,



f wh f h wh
+
.
kwh kB,

(4.8)

Combining (4.5), (4.6), (4.7), and (4.8), the assertion is proved, where the constant is as follows:


max{d, }
1
C9 () := max 1 +
,
.
min{C6 , } min{C6 , }

CHAPTER 4. FINITE ELEMENT METHOD

42

4.1.2 Finite element method


Let Vh be of a finite dimension. Having a base {v 1 , . . . , vn } of the space Vh , the problem (Wh )
is equivalent to the following system of linear equations
An u n = f n ,

(4.9)

where the matrix An Rnn and the righthand side vector f n Rn are as follows:

n

n
An := ah (vi , vj )
, f n := f h (vi )
,
i,j=1

i=1

(4.10)


respectively, and the solution vector u n := un,1 , . . . , un,n Rn corresponds to the approximate
solution uh in the following way
n
X
uh =
un,i vi .
i=1

The finite element method is a special case of the Galerkin method. The base {v 1 , . . . , vn }
of the space Vh is chosen such that the matrix An is sparse. In this case the system (4.9) can be
solved much faster and the matrix An takes less computer memory. The finite element method is
determined as follows:
The domain Rm is decomposed into smaller convex subdomains, e.g., line segments
for m = 1, triangles for m = 2, or tetrahedra for m = 3.
The base {v1 , . . . , vn } is chosen as simple functions, e.g., polynomials. The space V h is
called the space of finite elements.
The basis functions v1 , . . . , vn have small supports, which make the matrix A n to be sparse.

4.1.3 Discretization of the domain


We will employ polyhedral elements. To this end, we have to replace the original domain by
a polyhedral subdomain h . We subdivide h into a finite number of polyhedral subdomains
K1 , . . . , Knh , nh N, i.e., into open and connected polyhedral subsets of h such that the
following assumptions are satisfied:
n h

Ki ,

i=1

Ki 6= for each i = 1, . . . , nh ,
Ki 6= Kj Ki Kj = for each i, j = 1, . . . , nh ,
any face of any Ki is either a subset of the boundary h or a face of another element Kj ,
where i, j = 1, . . . , nh ,
each Ki has exactly m + 1 faces.
The last assumption means that in the cases of m = 1, m = 2, and m = 3 we deal with line
segments, triangles, and tetrahedra, respectively. This assumption will provide us to introduce a
reference element.

4.1. THE CONCEPT OF THE METHOD

43

The set
T h := {Ki | i = 1, . . . , nh }
is called a discretization of h . In Fig. 4.1 we can see some discretizations, which were generated

by Netgen, see S CH OBERL


[186]. The block column vector
i
h
(4.11)
xh := xh1 , . . . , xhn h Rmnxh ,
x

where xhi :=

xhi,1 , . . . , xhi,m

h for i = 1, . . . , nh , contains all the grid nodes xhi of the

discretization T h , where nxh N stands for the number of the discretization nodes.

Figure 4.1: Discretization of a circle and cylinder

4.1.4 Space of finite elements



A finite element is a triple e := K e , Pe , e , where K e T h is an element domain, Pe

1
C Ke
is an ne dimensional space of vector functions, the finite element
 e space
defined
e
e
e
over K , where ne N is common for all the elements, and where := 1 , . . . , ne , ie

 0
C K e 1 , is a set of ne linearly independent continuous linear functionals. The functionals
ie are called local degrees of freedom. The space P e usually consists of polynomials of a given
order. We denote the set of finite elements by



E h := ei = Ki , Pei , ei i = 1, . . . , nh ,
where Ki K ei stands for the same element domain.
We will introduce global degrees of freedom. Two adjacent elements e i , ej , i.e., i 6= j and
Ki Kj 6= , have in common some degrees of freedom. Therefore, the total number of degrees
of freedom is n < nh ne . We denote the set of global degrees of freedom by


h  i1 0
h
h
h

(4.12)
:= i C
i = 1, . . . , n ,

where the global degree of freedom ih corresponds to a local degree of freedom je , e E h , by


means of a mapping G e : {1, . . . , ne } 7 {1, . . . , n} defined by
G e (i) = j

if

jh |([C (K e )]1 )0 = ie ,

i = 1, . . . , ne ,

j = 1, . . . , n.

CHAPTER 4. FINITE ELEMENT METHOD

44

e
e
e
e
Since dim(P
 e ) = nee , ne eN, and 1 , . . . , ne are linearly independent, then there
exists a basis 1 , . . . , ne P such that
(

1 ,i = j
e e
.
i j = i,j , i, j = 1, . . . , ne , where i,j :=
0 , i 6= j

These base functions are called shape functions. In the same virtue, we introduce global shape
functions h1 , . . . , hn : h 7 R1 such that
 
(4.13)
ih hj = i,j , i, j = 1, . . . , n.

The global shape functions correspond to the local ones as follows:


hG e(i) |K e = ei ,

e Eh,

i = 1, . . . , ne .

The global shape functions form a basis for the following space

(
)
n

X

Ph := vh =
vi hi v1 , . . . , vn R .

(4.14)

i=1

Hence, the space Ph consists of such functions that are elementwise in P e , i.e.,
vh Ph e E h : vh |K e Pe

We need Ph to be a subspace of H B; h . This property is called the conformity of the finite
elements. The following lemma gives a sufficient condition on the conformity.

Lemma 4.2. Let vh Ph . Then vh H B; h if for any two adjacent elements ei , ej E h ,
i 6= j, with a common face fi,j := Ki Kj , the trace is continuous over fi,j , i.e.,




Ki vh |Ki |fi,j = Kj vh |Kj |fi,j ,
(4.15)

where is given by (3.23). Note that the minus sign appeared, since the outer unit normal vectors
on fi,j satisfy nKi = nKj .


Proof. Let vh Ph and let (4.15) holds. Clearly vh L2 h 1 . We will prove that vh



H B; h by means of (3.32). We take zh L2 h 2 such that


zh |Ki := B vh |K i , i = 1, . . . , nh .


Let wh C0 h 2 be arbitrary. Then, Corollary 3.3 and (4.15) yield
Z
 
 
X Z
h
h

vh B wh dx =
v B w dx =
h

X  Z
=

Ki T

=
=

Ki T h
h

Ki

z w dx +

zh wh dx +
zh wh dx,

which completes the proof.

Ki

X Z
fi,j

Ki

fi,j

Ki v


w ds

Ki vh wh ds +


Z

fi,j

Kj vh wh ds

4.1. THE CONCEPT OF THE METHOD

45


The next assumption ensures the conformity of the finite elements, i.e., P h H B; h . We
refer to H IPTMAIR [92] for a unified way of the design of conforming finite elements.
Assumption 4.1. Let e E h be an element and let f K e denote a face. We assume that
the degrees of freedom connected to the face f are exactly the ones which determine the trace
K e (ve ) |f , where ve Pe .
Further, we introduce the set of indices of those degrees of freedom which determine the trace
h . Due to (4.14), we can write an arbitrary v h Ph as
vh =

n
X

vih hi ,

i=1

where vih R for i = 1, . . . , n and where hi denote the global shape functions. Then,
n
 
  X
vih h hi .
h v h =
i=1

Therefore, the trace is determined by the following set of indices



n
 
o

I0h := i {1, . . . , n} h hi 6= 0 .


h
Then, the finite element space V h H0 B; h H0 B; h is defined by

h
o

n
 

H0 B; h := vh Ph i I0h : ih vh = 0 .

(4.16)

(4.17)

4.1.5 Finite element discretization of the weak formulation


Let Rm , L. We rewrite the problem (W ) as follows:
Find u() H0 (B; ):
a (v, u()) = f (v)

v H0 (B; )

(W ())

As long as L, the existence of the unique solution u () to (W ()) is given by Theorem 3.17.
Further, let h > 0 be a discretization parameter and let h be a nonempty polyhedral
subdomain. Then, h L. Let T h be a discretization of h and let E h be the corresponding set
of finite elements. Concerning the bilinear form a h and the linear functional f h given by (4.1) and
(4.2), respectively, we assume that for each e E h there exist a constant matrix De R2 2
and vector f e R1 such that
x K e : Dh (x) = De and f h (x) = f e .
The Galerkin approximation of the problem (W (h )) reads as follows:

 

h

Find uh h H0 B; h :

 
 

h ,
h
h
h
h
h
h
h
h

a v , u
=f v
v H0 B;

(4.18)

(Wh (h ))

h
where ah , f h , and H0 B; h are respectively
defined by (4.1), (4.2), and by (4.17). The exis
tence and uniqueness of the solution u h h to (Wh ) follows from Theorem 4.1.

CHAPTER 4. FINITE ELEMENT METHOD

46

4.2 Assembling finite elements


In the previous sections, we provided a discretization technique of the weak formulation (W ()).
The aim of this section is to build up an algorithm for an efficient assembling of the finite element
matrix An and the righthand side vector f n , both defined by (4.10).
Let us consider the global shape (base) functions h1 , . . . , hn Ph . They have small supports,
since the shape function hi is nonzero just for those neighbouring elements which have a degree
of freedom ih in common, i.e.,
[
K e , i = 1, . . . , n,
(4.19)
supp hi
eEih

where

n
o
Eih := e E h | j {1, . . . , ne } : G e (j) = i ,

i = 1, . . . , n,

(4.20)

is the set of the elements neighbouring with e i . Since ah is a bilinear form and f h is a linear
functional, we assemble the matrix A n and the righthand side vector f n , see (4.10), elementwise.
Due to (4.19), each element contributes only by its n e global degrees of freedom, i.e.,
(An )i,j

eEih Ejh

ne
X

ae ( ek , el ) ,

k,l=1

(f )i =

ne
X X

eEih

f e ( ek ) ,

i, j = 1, . . . , n,

where the local contributions to the matrix and to the righthand side vector are
Z
Z
e e e
e
e
e
a ( k , l ) :=
B( k ) (D B( l )) dx +
ek el dx, k, l = 1, . . . , ne ,
Ke

(4.21)

k=1

(4.22)

Ke

( ek )

:=

Ke

f e ek dx,

k = 1, . . . , ne ,

(4.23)

respectively. The solution to the problem (W h (h ) is then given by


n
 
X
uh h :=
un,i hi ,

(4.24)

i=1


where u n := un,1 , . . . , un,n Rn denotes the solution to the linear system (4.9).

4.2.1 Reference element

As each element domain K e is a polyhedron of m + 1 faces, it can be uniquely described by the


following block column vector, which consists of the m + 1 corners


(4.25)
xe := xe1 , . . . , xem+1 Rm(m+1) ,


where xei := xei,1 , . . . , xei,m K e for i = 1, . . . , m + 1. To each element e E h we associate
a mapping He : {1, . . . , m + 1} 7 {1, . . . , nxh } which maps the element nodal indices to the
global ones as follows:
He (i) = j

if

xei = xhj ,

i = 1, . . . , m + 1,

j = 1, . . . , nxh .

(4.26)

4.2. ASSEMBLING FINITE ELEMENTS

47


We further introduce a reference element r := K r , Pr , r such that the polyhedral domain K r
is determined by the following block column vector consisting of the reference corners
i
h
m(m+1)
r
cr , . . . , x[
cr := x
,
x
1
m+1 R



r
e
r
r ,...,x
r
cr := x
d
d
where x
i
i,1
i,m K , i = 1, . . . , m + 1, and where dim(P ) = dim(P ) = ne .
To each element e E h , we associate a onetoone linear mapping R e : K r 7 K e defined by
b + re ,
x := Re (b
x) := Re x

b Kr,
x K e, x

(4.27)

where Re Rmm is a nonsingular matrix and re Rm is a vector both of which are uniquely
determined by xe as follows:
cr + re = xei ,
Re x
i

i = 1, . . . , m + 1.

(4.28)

Obviously, both Re and re are continuously differentiable with respect to each coordinate of the
corners xe .
PSfrag replacements
x
c2
x2
Re

Ke
Kr
0

x
c1

x1

PSfrag replacements
Figure 4.2: A transformation between the reference and an element domain

b
v

Se

1 v
b

0
Kr
1
x
c1

S e (b
v)

x
c2

x2
Ke
x1

Figure 4.3: A transformation between the reference and an element shape function

i.e.,

r
Further, let us denote by br1 , . . . , c
ne the shape functions acting on the reference element r,

 
ir brj = i,j ,

i, j = 1, . . . , ne .

CHAPTER 4. FINITE ELEMENT METHOD

48

Assumption 4.2. We assume that there exist nonsingular matrices S e R1 1 and SeB
R2 2 , both of which are continuously differentiable with respect to the corners x e , such that
Se bri (b
x) = ei (x) ,

and

i = 1, . . . , ne ,



x) = Bx ( ei (x)) ,
SeB Bxb bri (b

i = 1, . . . , ne ,

where x := Re (b
x), and where Bx and Bxb , respectively, stand for the differential operator B,
defined by (3.21), with respect to the global coordinates x and withrespect to
co the reference


b. We define transformations S e : Pr 7 Pe and S eB : L2 (K r ) 2 7 L2 (K e ) 2
ordinates x
by
b(b
S e (b
v (b
x)) := Se v
x) and S eB (Bxb (b
v (b
x))) := SeB Bxb (b
v(b
x)) ,
b (b
where v
x) Pr .

The linear transformations S eB and S e are associated to the differential operator B and to the
identity operator, respectively. In general, the theory of differential forms, cf. H IPTMAIR [91],
can be used in order to derive a canonical transformation, see H IPTMAIR [92], which is related to
some differential operator and to some degrees of freedom.

4.2.2 BDB integrators


Making use of the reference element, the integration in (4.22) and (4.23) can be unified by the
substitutions Re and S eB in the first term of (4.22), and by the substitutions Re and S e in the
second term of (4.22) and in (4.23), as follows:
Z 
  

 
e
e e
SeB Bxb c
a (k , l ) =
rk De SeB Bxb brl
|det(Re )| db
x+
Kr

Z 

Kr

 

S crk Se brl |det(Re )| db
x,

(4.29)

( ek )

Kr



f e Se c
rk |det(Re )| db
x.

(4.30)

Now we employ the Gaussian quadrature method, cf. R ALSTON [163], C IARLET AND L I [46]. Having a sufficient number of Gaussian integration points, we can calculate the integrals exactly. Then, the matrix and the righthand side vector in (4.10) are evaluated elementwise,
where the contributions of the elements are
ONS

Ae :=

nG
X
i=1

nG
 T
 T
 
 
X
e
e c
G e c
e
e c
G
G
G +
wiG B eB xc
w
B
x
,

B
xG
x
i
B
i
i
i
i
i=1

f :=

nG
X
i=1

 T
G
g e,
wiG B e xc
i

d
G, . . . , x
G K r are the Gaussian integration points, w G , . . . , w G R are the Gaussian
where xc
g
nG
1
1
integration weights, and where
i
i
h


h 
r
r
x) , . . . , c
x) ,
x) , B e (b
x) := Se br1 (b
x) , . . . , Bxb c
B eB (b
x) := SeB Bxb br1 (b
ne (b
ne (b

4.2. ASSEMBLING FINITE ELEMENTS


D e := |det(Re )| De ,

49
I e := |det(Re )| Ine ,

g e := |det(Re )| f e ,

where Ine Rne ne is the unit matrix. Note that we will employ the lowestorder, i.e., linear,
finite elements. In this case, since both D h and f h are elementwise constant, then all the integrands
are linear over the element, thus, we will employ only one Gaussian point being the mass point of
K r , and the corresponding weight. These are respectively as follows:

1
, for m = 1

2

G
1
1
x1 :=
, for m = 2 , w1G := 1.
3, 3

1 1 1
, for m = 3
6, 6, 6

The name BDB integrators is due to the structure of the contributions to the bilinear form. The
differential operator is involved in the matrix B e while the matrix D e or I e provide the material
properties and geometrical parameters of the element domain K e .

4.2.3 The algorithm


The structure of the BDB integrators offers an efficient implementation using the objectoriented

technologies, cf. K UHN , L ANGER , AND S CH OBERL


[117]. Algorithm 1 describes assembling the
n
whole system matrix A and the righthand side vector f n , which is called postprocessing. When
assembling the matrix and the righthand side vector, we have to omit those rows and columns
whose corresponding degrees of freedom are connected to the boundary h along which the
zero trace is prescribed. This is done by setting all those rows and columns to zero except for the
diagonal entries in the matrix An .
Algorithm 1 Finite element method: preprocessing
An := 0, f n := 0
for i := 1, . . . , nh do
Evaluate Aei , f ei
for j := 1, . . . , ne do
for k := 1, . . . , ne do

if j = k or G ei(j) 6 I0h and G ei(k) 6 I0h then
(An )G ei(j),G ei(k) := (An )G ei(j),G ei(k) + (Aei )j,k
end if
end for
if G ei(j) 6 I0h then
(f n )G ei(j) := (f n )G ei(j) + (f ei )j
end if
end for
end for

The approximate solution uh h to the problem (Wh (h )), discretized by the finite element
method, is given by (4.24) while we solve the system
An u n = f n



for u n := un,1 , . . . , un,n Rn . In fact, we rather look for B uh (x) , which is elementwise
constant, since we have employed the lowest, i.e., the firstorder finite elements only. Therefore,

CHAPTER 4. FINITE ELEMENT METHOD

50


we can describe B uh (x) by the block column vector

h
n,en i
1
B n := B n,e
, . . . , B h R 2 n h

such that



i
B uh (x) |Ki = B n,e
,

where
i
B n,e

:=

ne
X
j=1



un,G ei(j) SeBi Bxb brj (b
x ) R 2

for i = 1, . . . , nh .

This procedure which assembles the vector B n is called postprocessing and it is depicted in Algorithm 2.
Algorithm 2 Finite element method: postprocessing
Given u n
B n := 0
for i := 1, . . . , nh do
i
B n,e
:= 0

for j := 1, . . . , ne do


i
i
x)
Evaluate B n,e
:= B n,e
+ un,G ei(j) SeBi Bxb brj (b

end for
k := 2 (i 1)
for j := 1, . . . , 2 do
i
[B n ]k+j := [B n,e
]j
end for
end for

4.3 Approximation properties


Now, we specify the meaning of the discretization parameter h in the formulation (W h (h )).
To each element domain K e we associate an element discretization parameter h e which is the
maximum edge size of K e , i.e., the length of the line segment, the maximum side of the triangle,
or the maximum edge of the tetrahedron in the cases of m = 1, m = 2, or m = 3, respectively.
The (global) discretization parameter h is defined by
h := max he .
eE h

(4.31)

Convention 4.1. In what follows, we will assume that there exists h > 0, being, e.g., the minimum
diameter of a sphere (or circle) containing , such that any considered discretization parameter
h fulfills
h h.
(4.32)
The aim of this section
 is to prove a convergence, in some sense, of the approximate finite
element solutions uh h to the true solution u ().

4.3. APPROXIMATION PROPERTIES

51

4.3.1 Approximation of the domain by polyhedra


We employ polyhedral elements. Therefore, we have to deal with an approximation of the original
domain by polyhedra and with a convergence of solutions over these polyhedra.
We introduce an extension operator. Let h > 0, N, and let h  be
 2a nonempty


h
2
h
polyhedral subdomain. We define a linear extension operator X : L
7 L () by
Xh (v(x))

:=

v(x)
0

, x h
,
, x \ h

h  i
.
v(x) L2 h

(4.33)

Lemma 4.3. Let Rm , L be a domain, let h be its nonempty polyhedral sub


h
h
domain, and let vh H0 B; h , where H0 B; h is defined by (4.17). Then Xh1 vh
H0 (B; ) and the space


h 
 

h

h
h
h
h
h
(4.34)
:= X1 v H0 (B; ) v H0 B;
X0 B; ;
is a closed subspace of H0 (B; ).

h

Proof. Let vh H0 B; h be arbitrary. We denote bh := Xh2 B vh . Clearly, bh
 1 h 2
 2

2

. Now the definition (4.33), AssumpL () 2 . Let [C


0 ()] , then |h H

tion 3.4, and h vh = 0, which is the trace along h , yield
Z
Z
Z
E
D
 
 
h
h
=
vh B (|h ) dx + h vh , |h
B v |h dx =
b dx =
h
h
h
Z

 
= Xh1 vh B () dx.





The latter implies that bh = B Xh1 vh and Xh1 vh H(B; ). Further, let H 1 () 2 ,



then |h H 1 h 2 . By (4.33), by Assumption 3.4, and since h vh = 0, we get
Z
Z
 
 

  E
D 
h
h
h
h
Xh1 vh B () dx =
dx +
B X 1 v
=
,
X 1 v

Z  
Z
=
B vh |h dx +
vh B (|h ) dx =
h
h
 
E
D
h
= h v , |h
= 0,
h


 2
stands for the trace operator along . Therefore,
where : H(B; ) 7 H 1/2 ()

h
h
h
h
H0 (B; ). Since H0 B;
X 1 v
is a finitedimensional Hilbert space (of the dih
h
h
mension less than n) and X1 : H0 B;
7 H0 (B; ) is a linear mapping, then the set

h
X0 B; ; h , defined by (4.34), is obviously a closed subspace of H 0 (B; ), hence, again a
finitedimensional Hilbert space.

m
Let
L be a domain and let h > 0 be a discretization parameter. We say that the
 h R ,
class h>0 , h approximates from the inner if the following is satisfied





xh h x : xh x h.

(4.35)

CHAPTER 4. FINITE ELEMENT METHOD

52

We denote this convergence by h % , as h 0+ . Let us further introduce a function h :


7 {0, 1}, which is called the characteristic function of h , by
(
1 , x h
.
(4.36)
h (x) :=
0 , x \ h
It is obvious that if h % , then
h (x) 1 a.e. in , as h 0+ .
Assumption 4.3. We assume that h % .

4.3.2 Apriori error estimate




We introduce an interpolation operator e : C K e 1 7 Pe such that

ie ( e (v)) = ie (v), i = 1, . . . , ne ,


holds for any v C K e 1 . Further, we introduce a global interpolation operator h :
 i1
 i1
h
h
7 Ph such that for any v C h
C h


ih h (v) = ih (v),

i = 1, . . . , n,

(4.37)

or we can introduce that equivalently by

h (v) |K e := e (v|K e ) ,

K e T h,

where Ph is due to (4.14). Moreover, another global interpolation operator h0 : C


h
 i1
h
H0 B; h is introduced such that for any v C h
ih

h0 (v)

:=

ih (v)
0

(4.38)
i1
7
h

, i 6 I0h
,
, i I0h

where I0h is defined by (4.16).


We suppose that the following apriori error estimate holds. For more results on apriori error
[115], S TRANG AND F IX [200].
estimates see C IARLET [45], K R I Z EK AND N EITTAANM AKI
Assumption 4.4. We assume that there exists a positive constant C 10 C10 (K e ) such that


v H 2 (K e ) 1 : kv e (v)kB,K e C10 he kvk1 ,2,K e .

4.3.3 Regular discretizations

We suppose that T h are regular discretizations in the sense of the following three assumptions.
Assumption 4.5. We assume that there exists a positive constant C 11 such that
h > 0 K e T h : C10 (K e ) C11 .

4.3. APPROXIMATION PROPERTIES

53

Assumption 4.6. We assume that for each v [C 0 ()]1 there exist positive constants C12
C12 (v) and C13 C13 (v) such that




h > 0 K e T h : K e h h je v|K e kSe k C12 and je v|K e kSeB k C13 ,

where j = 1, . . . , ne and where



o
n

h h := y h K e T h j {1, . . . , ne } : y K e and G e (j) I0h .

(4.39)

is the most outer layer of finite elements.

Assumption 4.7. We assume that for each v [C 0 ()]1 there exists a positive constant C14
C14 (v) such that
h > 0 K e T h x Re (b
x) K e :
n

e
X






Bx e v| e (x) =
x) C14 ,
ie v|K e SeB Bxb bri (b

K


i=1

where k k is the Euclidean norm.

Let us note that Assumption 4.5 is replaced, in case of m = 2, by the minimum angle con
dition, see Z L AMAL
[219, p. 397], or by the maximum angle condition, see K R I Z EK AND N EIT [115, p. 67], and, in case of m = 3, by either the minimum or maximum angle conTAANM AKI
[115,
ditions between the edges as well as between the faces, see K R I Z EK AND N EITTAANM AKI
p. 83]. For the used kind of elements we will show that Assumptions 4.6 and 4.7 follow from the
angle conditions.

Lemma 4.4. Let Rm , L be a domain and let h h>0 be a class of its nonempty polyhedral subdomains such that h % . Let further v [C0 ()]1 . Then, under Assumption 4.6,
the following convergence holds


X







h
h
h
h

0, as h 0+ .
i (v|h ) i
v|h 0 v|h h =
B,
iI h

B,h

Proof. The proof bases on Theorem 3.6. Let us write the square of the norm

2
X



h
h

(v|
)
h
i
i

iI h

0

B,h


2

Z X


h
h

=
i (v|h ) i (x)
dx+
iI h

0

2
Z X




h
h
dx.
+
)B

(x)

(v|
x
h
i
i

iI h

(4.40)

Due to (4.19), both hi and Bx ( hi ) have small supports. We take an arbitrary x . Since
h % and K e T h : he h, then due to (4.35) there exists h0 := miny kx yk/2 such
that
h h0 : x h \ h h

CHAPTER 4. FINITE ELEMENT METHOD

54

holds. Clearly, supp hi h h and for the above x h \ h h we get




2
2





X

X h



ih (v|h )Bx hi (x)
i (v|h ) hi (x)
h h0 :

= 0 and
= 0,
iI h
iI h

(4.41)

where k k denotes the Euclidean norm.


Now, we bound the integrands in (4.40). Given a fixed x , it is either the case of
x \ h h , then the integrands vanish, or the case of x R e (b
x) K e h h , then
by Assumption 4.6 the squares of the integrands are bounded as follows:





ne


X
X




h
e br
h
e
br (b

ne C12 max max

(v|

x
)
)S

(b
x
)
)
(x)

(v|
=

,
e
h
i
j
j
i
j
K


j=1,...,ne x
bK r
iI h

j=1
0





ne




X
X h





je (v|K e )SeB Bxb brj (b
i (v|h )Bx hi (x)
x)

=

iI h
j=1

0




x) ,
ne C13 max max Bxb brj (b
j=1,...,ne x
bK r

therefore, the integrands themselves are also bounded. Having the boundeness and by (4.41)
having also the convergence of the integrands to zero in , as h 0 + , we now apply Theorem 3.6
to both the integrals in (4.40), which yields
2
2




Z X
Z X






h
h
h
h



i (v|h )Bx i (x)
i (v|h ) i (x) dx 0 and

dx 0, as h 0+ .

iI h


iI h

4.3.4 Convergence of the finite element method


The following theorem states the convergence property of the finite element method.

Theorem 4.2. Let Rm , L be a domain, and let h h>0 be a class of its nonempty
polyhedral subdomains such that h % . Further, we assume that




(4.42)
max dhi,j (x) di,j (x) 0 a.e. in , as h 0+ ,
i,j

where dhi,j (x) := di,j (x) in \ h , and




h

f f

1 ,0,

0, as h 0+ ,

where f h (x) := f (x) in \ h . Then for each > 0 the following convergence holds
  
u () in H0 (B; ), as h 0+ ,
Xh1 uh h

(4.43)





where Xh1 : L2 h 1 7 L2 () 1 is the linear extension operator defined by (4.33),
h

uh h H0 B; h is the solution to (Wh (h )), and u() H0 (B; ) is the solution
to (W ()).

4.3. APPROXIMATION PROPERTIES

55

Proof. Let > 0 be arbitrary. Let us denote


  
u := u () and uh := Xh1 uh h .

(4.44)

h
From Lemma 4.3 we know that the set X0 B; ; h is a closed subspace of H0 (B; ). Therefore, the function uh is the Galerkin approximation to the solution u of (W ()) in the space
h
h
X0 B; ; h and we can employ Lemma 4.1, which for any vh X0 B; ; h yields
(







a vh , uh vh ah vh , uh vh



h
h
+
C9 () u v
+
u u
kuh vh kB,
B,
B,
(4.45)


 )
f uh vh f h uh vh
.
+
kuh vh kB,
[115, Th. 4.16], originally
Now the idea of the proof is like in K R I Z EK AND N EITTAANM AKI
f [C0 ()]1
from D OKTOR [56]. Let > 0 be arbitrary. By Assumption 3.3, there exists u
such that

f kB,
ku () u
.
(4.46)
6C9 ()
In the estimate (4.45) we choose



f |h .
vh := Xh1 h0 u

We estimate the first term on the righthand side of (4.45). By the triangle inequality (3.1),
by Lemma 4.3, and by (4.46), we get









h
h
h
f
f
f
f
u

v
=
u

u
+
u

ku

u
k
+
u

B,
B,
B,
B,


(4.47)



f vh
.
+ u

6C9 ()
B,
The second term on the righthand side of (4.47) reads
Z

2






2
h
f |h h =
f h0 u
f v
\h (x) kf
u (x)k2 + kB (f
u (x))k2 dx + u
=
u
B,
B,
Z
2






f |h
f h0 u
u (x)k2 + kB (f
u (x))k2 dx + u
,
= (1 h (x)) kf
h

B,

(4.48)

where k k denotes the Euclidean norm. Since h % , then (4.35) holds and we get


(1 h (x)) kf
u (x)k2 + kB(f
u (x))k2 0 a.e. in , as h 0+ .

Moreover, due to (4.36)



h > 0 : (1 h (x)) kf
u (x)k2 + kB(f
u (x))k2 kf
u (x)k2 + kB(f
u (x))k2 a.e. in .

Therefore, Theorem 3.6 yields


Z


(1 h (x)) kf
u (x)k2 + kB(f
u (x))k2 dx 0, as h 0+ .

CHAPTER 4. FINITE ELEMENT METHOD

56

Using the triangle inequality (3.1), the second term on the righthand side of (4.48) is estimated
as follows:










f |h h
f h u
f |h + h u
f |h h0 u
f |h h = u
f h0 u
u
B,
B,







h

h
h
f u
f |h h + u
f |h 0 u
f |h h .
u
B,

B,

(4.49)

The definition (4.38), Assumption 4.4, and Assumption 4.5, respectively, yield


2

f h u
f |h
u

B,h

K e T h

kf
u e (f
u |K e )k2B,K e

X 

K e T h

C10 (K e )he kf
u k1 ,2,K e

2

2 2
C11
h kf
u k21 ,2, .

By Lemma 4.4, the second term on the righthand side of (4.49) tends toward zero. Therefore, the
righthand side of (4.48) tends to zero, as h 0+ , i.e., there exists h1 > 0 such that

h h1 : u vh
.
(4.50)
3C9 ()
B,

Further, we estimate the second term on the righthand side of (4.45). The nominator reads as
follows:






a vh , uh vh ah vh , uh vh =
Z
T
 
 


h
h
h

DD
B vh
dx
= B u v

sZ



2
h



2
u v h
max dhi,j (x) di,j (x) kB(vh (x))k dx,
B,

i,j



where we used the CauchySchwarz inequality (3.3) in L2 () 2 . After dividing the latter by

uh vh , we get
B,



 sZ
2

a v h , u h v h a h v h , u h v h

h
2

(x)

d
(x)
d

max
kB(vh (x))k dx.
i,j
i,j
i,j
kuh vh kB,

(4.51)
Now, we use Theorem 3.6 to show that the integral on the righthand side of (4.51) vanishes, as
h 0+ . First, we prove the boundeness of the integrand. We take an arbitrary x . If x 6 h ,
then vh (x) = 0 and the integrand vanishes. If x K e for some K e T h , K e h h , then
by (4.3) and Assumption 4.6 the square of the integrand reads






max dhi,j (x) di,j (x) B vh (x) =
i,j






X





f |K e SeB Bxb brj (b
je u
= max dhi,j (x) di,j (x)
x)


i,j
j:G e (j)6I h

0




x) .
2 max kdi,j kL () ne C13 (f
u ) max max Bxb brj (b
i,j

j=1,...,ne x
bK r

4.4. FINITE ELEMENTS FOR MAGNETOSTATICS

57

Otherwise, x K e for some K e T h , K e h \ h h , and the square of the integrand reads








max dhi,j (x) di,j (x) B vh (x) =
i,j



ne




X

 e
h

e
r
b

f |K e SB Bxb j (b
= max di,j (x) di,j (x)
x)
j u

i,j

j=1
2 max kdi,j kL () C14 (f
u ) ,
i,j

where we used (4.3) and Assumption 4.7. Therefore, the integrand on the righthand side of (4.51)
is bounded by a constant independent of h, and due to the assumption (4.42) it converges to zero
almost everywhere in . Then, by Theorem 3.6
Z

2   2



max dhi,j di,j B vh dx 0, as h 0+ .
i,j

Hence, there exists h2 > 0 such that





a vh , uh vh ah vh , uh vh

.
h h2 :
h
h
ku v kB,
3C9 ()

(4.52)

Finally, we estimate the third term on the righthand side of (4.45). The nominator reads as
follows:



 Z 
 


h
h
h
h
h
h
h
h

f f u v dx
f u v f u v =






h

h
(4.53)

v
f f h

u
1 ,0,
1 ,0,




h



,
f f h
u v h
1 ,0,

B,



where we used the CauchySchwarz inequality in L2 () . Dividing (4.53) by uh vh B,
and using the assumption (4.43), it follows that there exists h3 > 0 such that



f uh vh f h uh vh

h h3 :

.
(4.54)
h
h
ku v kB,
3C9 ()


 1

At the end, combining (4.45), (4.50), (4.52), and (4.54), and recalling the notation (4.44), we have
proven the statement, i.e., for any > 0 there exists h 0 := min{h1 , h2 , h3 } such that

  


.
h h0 : u () Xh1 uh h
B,

4.4 Finite elements for magnetostatics


In this section, we derive two basic types of finite elements which are used for solving the 2
dimensional or 3dimensional magnetostatic problem, respectively. We will validate Assumptions 4.14.2 and Assumptions 4.44.7.

CHAPTER 4. FINITE ELEMENT METHOD

58

4.4.1 Linear Lagrange elements on triangles


These finite elements approximate the space H 1 (), where R2 , L. They are used,
e.g., for solving the 2dimensional linear magnetostatic problem introduced in Section 3.4.2. The
elements are characterized by triangular domains and by the degrees of freedom that are nodal
values in the corners.

The linear Lagrange element is a triple E := K e , P e , e , where K e R2 is a triangular
domain,



P e := p(x) := ae0 + ae1 x1 + ae2 x2 C K e ae0 , ae1 , ae2 R , x := (x1 , x2 ) K e ,

and the degrees of freedom are

e := {1e , 2e , 3e } ,


where ie : C K e 7 R is such that for v C K e
ie (v) := v(xei ) ,

i = 1, 2, 3,

(4.55)




where xe1 := xe1,1 , xe1,2 , xe2 := xe2,1 , xe2,2 , xe3 := xe3,1 , xe3,2 are the corners of K e .
We concern the space H 1 (K e ) with the trace operator K e (v) := v|K e , v P e . From (4.55)
it is easy to see that the following couples of degrees of freedom ( 1e , 2e ), (2e , 3e ), and (3e , 1e )
for any v P e determine the traces v|he3 , v|he2 , and v|he1 along the edges he3 := (xe1 , xe2 ), he1 :=
(xe2 , xe3 ), and he2 := (xe3 , xe1 ), respectively, see also Fig. 4.4. Therefore, Assumption 4.1 is fulfilled
and we say that the linear Lagrange elements are H 1 (K e )conforming.
According to (4.27), (4.28), and Fig. 4.2, we specify the transformation from the reference
element to the element e by
e

R :=

xe2,1 xe1,1 xe3,1 xe1,1


xe2,2 xe1,2 xe3,2 xe1,2

r :=

xe1,1
xe1,2

(4.56)

where the corners of the reference triangle K r are


cr := (0, 0),
x
1

cr := (1, 0),
x
2

Concerning Assumption 4.2, we specify S e by

cr := (0, 1).
x
3

(4.57)

Se := 1.

(4.58)

gradx (v(x)) = (Re )T gradxb (b


v (b
x)) ,

(4.59)

It is easy to see that


b + re . The reference shape functions read as follows:
where v(x) := Se vb(b
x) and x := Re x
x) := 1 x
c1 x
c2 ,
b1r (b

x) := x
c1 ,
b2r (b

x) := x
c2 ,
b3r (b

b := (c
where x
x1 , x
c2 ) K r

(4.60)

and where K r is the triangle in Fig. 4.2 the corners of which are given by (4.57).
Now we will state the element approximation property such that both Assumption
4.4 and

 e
e
h
1
Assumption 4.5 will be fulfilled. Suppose that we have a discretization T := K , . . . , K nh

a triangulation. The following definition is due to Z L AMAL


[219, p. 397].

4.4. FINITE ELEMENTS FOR MAGNETOSTATICS

59



Definition 4.1. A family F := T h | h > 0 of triangulations is said to satisfy the minimum
angle condition if there exists a constant 0 such that for any T h F and any K e T h we have
0 < 0 K e <

,
2

(4.61)

where K e is the minimum angle of the triangle K e .

The next lemma is due to Z L AMAL


[219] and it replaces both Assumption 4.4 and Assumption 4.5.
Lemma 4.5. Let F be a family of triangulations satisfying the minimum angle condition (4.61).
Then there exists a constant C11 > 0 such that for any T h F with h h we have

 


v H 2 h : v h (v) h C11 h |v|2,h ,
1,

 
h
h
where : C h 7 H 1 h is defined by (4.38), using the degrees of freedom (4.55).

Proof. See Z L AMAL


[219].

The next two lemmas fulfill Assumptions 4.6 and 4.7, respectively.
Lemma 4.6. Let v C0 (). Then there exist positive constants C 12 C12 (v) and C13
C13 (v) such that for any discretization parameter h > 0 satisfying (4.32), for any subdomain
h satisfying Assumption 4.3, and for any discretization T h which satisfies the minimum
angle condition (4.61) the following holds





K e T h : K e h h je v| e kSe k C12 and je v| e Se C13 ,
K

grad

where j = 1, 2, 3 and where h h is defined by (4.39).

Proof. Let v C0 () be an arbitrary function, h > 0 be a discretization parameter satisfying (4.32), h be a polygonal subdomain satisfying Assumption 4.3, and T h be a discretization which satisfies the minimum angle condition (4.61).
Let K e h be arbitrary and let x K e . Since Se = 1, the first estimate is as follows:
e



j v| e kSe k = v xej max |v(z)| , j = 1, 2, 3,
K
z

where xej K e is a corner of K e . Hence,

C12 := max |v(z)| .


z

Let now K e T h be such that K e h h and let x K e . Then, by Assumption 4.3, there
exist y and xh h such that








kx yk = x xh + xh y x xh + xh y 2he ,
where he is by definition the maximum side of K e . Since y , then v(y) = 0. Now, we use
Theorem 3.3
v(x) = v(y) + grad(v(z)) (x z) = grad(v(z)) (x z),

(4.62)

PSfrag replacements

CHAPTER 4. FINITE ELEMENT METHOD

60
xe3
he2
2e
xe1

he1

3e
e
1e

2e

1e = K e
xe2

he3 = he
Figure 4.4: A Lagrange triangle K e

where z K e lies on the line between x and y. Therefore,

x K e : |v(x)| max kgrad(v(z))k 2he .


z

(4.63)

To prove the second estimate, we exploit the structure of the matrix (R e )T . Using (3.9)
and (3.14), we get

e






xj+1,i xe1,i


e T
1
T
e
e

= max
= max (R )
(R ) = max (R )
i,j det (Re )
i,j
i,j
i,j
i,j
(4.64)
he

.
2 meas (K e )
From Fig. 4.4, it is clear that

he e
.
2
Using (4.63), (4.64), Fig. 4.4, and the minimum angle condition (4.61), the second estimate reads
as follows:
e

 e T

2he

j v| e kSeB k = v xej

max
kgrad(v(z))k
)

(R
K
z
e
2 (1e + 2e )

max kgrad(v(z))k
z
e
4 e
1
max kgrad(v(z))k e2 = 4 max kgrad(v(z))k

z
z

tan (K e )
1
,
4 max kgrad(v(z))k
z
tan (0 )
meas (K e ) =

where K e denotes the minimum angle of the triangle K e . Hence,


C13 :=

4 maxz kgrad(v(z))k
.
tan (0 )

Lemma 4.7. Let v C0 (). Then there exists a positive constant C 14 C14 (v) such that for
any discretization parameter h > 0, for any subdomain h , and for any discretization T h
which satisfies the minimum angle condition (4.61), the following holds
K e T h x Re (b
x) K e :

3
X







gradx e v| e (x) =
x) C14 ,
ie v|K e Segrad gradxb bir (b
K


i=1

4.4. FINITE ELEMENTS FOR MAGNETOSTATICS

61

where k k denotes the Euclidean norm.


Proof. Let v C0 () be an arbitrary function, h > 0 be a discretization parameter, h
be a polygonal subdomain, T h be a discretization of h which satisfies the minimum angle
condition (4.61), and let K e T h be an element domain. The gradients of the reference shape
functions, see (4.60), are constant over K r


gradxb b1r (b
x) = (1, 1),



gradxb b2r (b
x) = (1, 0),



gradxb b3r (b
x) = (0, 1),

b := (c
where x
x1 , x
c2 ) K r . Now, using the latter and the definition of R e , we exploit the structure
of the matrix Segrad (Re )T . It holds that



3
X





x)
ie v|K e Segrad gradxb bir (b



i=1



 e


2
x3,2 xe1,2  (v (xe2 ) v (xe1 )) + xe2,2 xe1,2  (v (xe1 ) v (xe3 ))

,

xe3,1 xe1,1 (v (xe1 ) v (xe2 )) + xe2,1 xe1,1 (v (xe3 ) v (xe1 ))


|det (Re )|
(4.65)

where we also used (3.14). Since he is the maximum side, it follows that
e

xi,j xe1,j he ,

i = 1, 2, 3,

j = 1, 2.

Similarly as in (4.62), Theorem 3.3 yields

|v (xei ) v (xe1 )| max kgrad(v(z))k he ,


z

i = 2, 3.

Finally, like at the end of the previous proof, from Fig. 4.4 it is clear that
|det (Re )| = 2 meas (K e ) = he e
and, due to the minimum angle condition (4.61) and Fig. 4.4, the estimate (4.65) is as follows:

3

X



2


e
e
r
b
x) e e 2he max kgrad(v(z))k he
i v|K e Sgrad gradxb i (b

h

z
i=1

e + e
2 e
2 2 max kgrad(v(z))k 1 e 2 2 2 max kgrad(v(z))k e2 =
z
z

1
1
4 2 max kgrad(v(z))k
,
= 4 2 max kgrad(v(z))k
z
z
tan (K e )
tan (0 )
where K e is the minimum angle of the triangle K e . Hence,

C14 := 4 2 max kgrad(v(z))k


z

1
.
tan (0 )

CHAPTER 4. FINITE ELEMENT METHOD

62

Kr
cbr6
1
x
c1

x
c3
1
0

x3

Re
cbr3

cbr1
cbr

cbr5

cbr2

Ke
ce3 ce5

ce6

ce2

ce1
c2
1 x

ce4

x2

x1

Figure 4.5: A transformation from the reference Nedelec tetrahedron

4.4.2 Linear N
ed
elec elements on tetrahedra
Here, we state a type of finite elements, which is frequently used for the approximation of the space
H(curl; ), where R3 , L. These elements can be used for solving the 3dimensional
linear magnetostatic problem, which was introduced in Section 3.4.3. The elements are defined
over tetrahedra and the degrees of freedom are calculated as integrals along the edges. The elements were first introduced by N E D E LEC [142] and, since then, they have become a standard.

The linear Nedelec element is a triple E := K e , Pe , e , where K e R3 is a tetrahedral
domain,



Pe := p(x) := ae x + be ae , be R3 , x := (x1 , x2 , x3 ) K e ,

and the degrees of freedom are

e := {1e , 2e , 3e , 4e , 5e , 6e } ,
3

3

7 R is such that for v C K e
where ie : C K e
Z
e
v tei ds, i = 1, . . . , 6,
i (v) :=
cei

where cei stand for the oriented edges, see Fig. 4.5, and tei are the related unit tangential vectors.
Now, we concern the space H(curl; K e ) and the corresponding trace operator K e (v) :=
e
n v on K e , where v Pe and ne denotes the unit outer normal vector to K e . By N E LEC [142, Theorem 1], Assumption 4.1 is fulfilled, thus, the linear Nedelec finite elements are
DE
H(curl; K e )conforming.
The transformation Re in Fig. 4.5 is determined by

e
e
x1,1
x2,1 xe1,1 xe3,1 xe1,1 xe4,1 xe1,1
(4.66)
Re := xe2,2 xe1,2 xe3,2 xe1,2 xe4,2 xe1,2 , re := xe1,2 ,
e
e
e
e
e
e
e
x1,3
x2,3 x1,3 x3,3 x1,3 x4,3 x1,3


where xei := xei,1 , xei,2 , xei,3 , i=1,. . . ,4, are the corners of the tetrahedron K e , which correspond
to the following corners of K r
cr := (0, 0, 0),
x
1

cr := (1, 0, 0),
x
2

cr := (0, 1, 0),
x
3

cr := (0, 0, 1).
x
4

(4.67)

4.4. FINITE ELEMENTS FOR MAGNETOSTATICS

63

As far as Assumption 4.2 is considered, we determine S e by


Se := (Re )T .
It can be shown that following Piolas transformation holds, see R AVIART
Formula 3.17],
1
curlx (v(x)) =
Re curlxb (b
v(b
x)) ,
det(Re )

(4.68)
AND

T HOMAS [167,
(4.69)

b (b
b + re . The reference shape functions read as follows:
where v(x) := Se v
x) and x := Re x


0
1
1
0
b + 0 ,
b + 1 ,
br1 (b
x) := 1 x
br2 (b
x) := 0 x
1
0
1
0




1
0
0
0
b + 0 ,
b + 0 ,
br3 (b
x) := 1 x
br4 (b
x) := 0 x
(4.70)
0
1
1
0




1
0
0
0
b + 0 ,
b + 0 ,
br5 (b
x) := 0 x
br6 (b
x) := 1 x
0
0
0
0

b := (c
where x
x1 , x
c2 , x
c3 ) K r and K r is the reference tetrahedron, see Fig. 4.5, the corners of
which are given by (4.67).
Now, we will state the element approximation property such that both Assumption
4.4 and As

e
sumption 4.5 will be fulfilled. Suppose that we have a decomposition T h := K e1 , . . . , K nh .
The following definition and lemma are due to N E D E LEC [142, p. 327].


Definition 4.2. A family F := T h | h > 0 of decompositions into tetrahedra is said to be
regular if there exists a constant C 15 > 0 such that for any T h F and any K e T h we have
he
C15 ,
e

(4.71)

where e denotes the radius of the largest sphere inscribed in K e .


Lemma 4.8. Let F be a regular family of decompositions into tetrahedra in the sense of Definition 4.2. Then there exists a constant C11 > 0 such that for any T h F with h h we
have
h  i1
v H 2 h
: kv e (v)kcurl,h C11 h |v|1 ,2,h .
Proof. The assertion is a direct consequence of N E D E LEC [142, Th. 2].
The next two lemmas fulfill Assumptions 4.6 and 4.7, respectively.
Lemma 4.9. Let v [C0 ()]3 . Then there exist positive constants C 12 C12 (v) and C13
C13 (v) such that for any discretization parameter h > 0, for any subdomain h satisfying
Assumption 4.3, and for any discretization T h which satisfies the regularity condition (4.71) the
following holds




K e T h : K e h h je v|K e kSe k C12 and je v|K e kSecurl k C13 ,

where j = 1, . . . , 6.

CHAPTER 4. FINITE ELEMENT METHOD

64

Proof. Let v := (v1 , . . . , v1 ) [C0 ()]1 be an arbitrary function, h > 0 be a discretization


parameter satisfying (4.32), h be a polygonal subdomain satisfying Assumption 4.3, T h
be a discretization which satisfies the regularity condition (4.71), and let K e T h be an element
domain. For j = 1, . . . , 6 we have the estimate
Z


e

e
v| e ) = v(x) t ds max kv(z)k he ,
(4.72)
j
j
K
ce

z
j

where he stands for the maximum edge size. Since S e := (Re )T , then, using (3.9) and (3.14), it
follows that


1
1
(he )2
fe
fe
.
(4.73)
kSe k =
R
=
R





|det (Re )|
6 meas (K e )
3 meas (K e )
Since e denotes the radius of the largest sphere inscribed in K e , from the regularity condition (4.71) it is obvious that
 e 3
h
4
4
e 3
e
.
(4.74)
meas (K ) ( )
3
3
C15

Putting the latter into (4.73) and combining that with (4.72), the first estimate reads as follows:
3
e

j v| e kSe k max kv(z)k (C15 ) ,
K
z
4

hence,

C12 := max kv(z)k


z

(C15 )3
.
4

Similarly as in the proof of Lemma 4.6, let K e T h be such that K e h h . Then there
exists xh h and, by Assumption 4.3, there exists y such that








kx yk = x xh + xh y x xh + xh y 2he ,

where he is by definition the maximum side of K e . Since y , then v(y) = 0. Now we use
Theorem 3.3
vi (x) = vi (y) + grad(vi (z)) (x z) = grad(vi (z)) (x z)

for i = 1, 2, 3

and
kv(x)k max max kgrad(vi (z))k 2he
i{1,2,3} z

for i = 1, 2, 3,

where z K e lies on the line between x and y. Therefore,



Z

e

j v| e = v(x) tej ds max max kgrad(vi (z))k 2 (he )2 .
K
i{1,2,3} z
ce
j

Concerning the second estimate, we have


kSecurl k =

1
kRe k
|det (Re )|





maxi,j xei+1,j xe1,j
6 meas (K e )

he
(C15 )3
,

6 meas (K e )
8 (he )2

(4.75)

4.4. FINITE ELEMENTS FOR MAGNETOSTATICS

65

where we used (4.74). Combining the latter with (4.75), the second estimate is as follows:
3
e

j v| e kSe k max max kgrad(vi (z))k 2 (he )2 (C15 )
curl
K
i{1,2,3} z
8 (he )2
1
max max kgrad(vi (z))k (C15 )3 ,

4 i{1,2,3} z

hence,
C13 :=

(4.76)

1
max max kgrad(vi (z))k (C15 )3 .
4 i{1,2,3} z

Lemma 4.10. Let v [C0 ()]3 . Then there exists a positive constant C 14 C14 (v) such that
for any discretization parameter h > 0, for any subdomain h , and for any discretization
T h which satisfy the regularity condition (4.71), the following holds
K e T h x Re (b
x) K e :


6


X


 e


e
e
r
curlx v| e =
x) C14 .
i v|K e Scurl curlxb bi (b
K

(4.77)

i=1

Proof. The proof is similar to that of Lemma 4.7. Let v [C0 ()]1 be an arbitrary function,
h > 0 be a discretization parameter, h be a polygonal subdomain, T h be a discretization of
h which satisfies the regularity condition (4.71), and let K e T h be an element domain. The
rotations of the reference shape functions, see (4.70), are constant over K r






curlxb br1 (b
x) = (0, 2, 2),
curlxb br2 (b
x) = (2, 0, 2),
curlxb br3 (b
x) = (2, 2, 0),






curlxb br4 (b
x) = (0, 0, 2),
curlxb br5 (b
x) = (2, 0, 0),
curlxb br6 (b
x) = (0, 2, 0),

b := (c
where x
x1 , x
c2 , x
c3 ) K r . Let us simplify the rest of the proof by the following notation

ie := ie v|K e
for i = 1, 2, . . . , 6.
Now, we exploit the structure of the matrix S ecurl . It holds that
e

curlx v|K e (x)



6


X
 e
1
e
br (b

v|
=
R

curl

x
)
=
e
b
x
i
i
K
det(Re )
i=1

e
x2,1 xe1,1  (2e 3e + 5e ) +
2
xe2,2 xe1,2 (2e 3e + 5e ) +
=

6 meas(K e )
xe2,3 xe1,3 (2e 3e + 5e ) +



+ xe3,1 xe1,1  (3e 1e + 6e ) + xe4,1 xe1,1  (1e 2e + 4e )


+ xe3,2 xe1,2  (3e 1e + 6e ) + xe4,2 xe1,2  (1e 2e + 4e ) .
+ xe3,3 xe1,3 (3e 1e + 6e ) + xe4,3 xe1,3 (1e 2e + 4e )

Let f2e , f3e , and f4e stand for the faces that are respectively opposite to the nodes x e2 , xe3 , and xe4 .
The following oriented closed curves
(xe1 , xe4 , xe3 , xe1 ) ,

(xe1 , xe2 , xe4 , xe1 ) ,

and

(xe1 , xe3 , xe4 , xe1 ) ,

CHAPTER 4. FINITE ELEMENT METHOD

66

see also Fig. 4.5, are respectively the positively oriented boundaries of the faces f 2e , f3e , and f4e
with the outer unit normal vectors n e2 , ne3 , and ne4 . Now, using Theorem 3.5, we arrive at
 e
R

v|
curl
(x)
n2 (x) dS
e
e
x
K
f2
R


1

Re f3e curlx v|K e (x) ne3 (x) dS .


curlx e v|K e (x) =
e
R

3 meas(K )
e
f e curlx v|K e (x) n4 (x) dS
4

Since for i = 2, 3, 4 and j = 1, 2, 3



Z

1

e


xi,j xe1,j he and
curlx v|K e (x) nej (x) dS max kcurlx (v(x))k (he )2
fie
2 x
and due to (4.71), the relation (4.74) holds and we get

hence

3


curlx e v| e (x) 3 maxx kcurlx (v(x))k (C15 ) ,
K
8

C14 :=

3 maxx kcurlx (v(x))k (C15 )3


.
8

Chapter 5

Abstract optimal shape design problem


In this chapter, we will introduce a shape optimization problem governed by the abstract linear
elliptic boundary vectorvalue problem, the weak formulation of which was introduced in Section 3.4. We will state a continuous setting of the shape optimization problem and prove the
existence of a solution. Further, we will deal with a regularized formulation and with a convergence of the regularized solutions to the true one. Finally, we will introduce a discretized shape
optimization problem and assumptions under which the discretized and regularized solutions converge to the true one. The theory here is very similar to the one presented by H ASLINGER AND
[85]. The main difference is that we fix the computational domain and the
N EITTAANM AKI
shapes control the material distribution rather than the boundary , which is usual in mechanics.
Moreover, we will be concerned with a multistate optimization, where several state problems with
the same bilinear form, but different linear functionals, in our case, different current excitations,
are involved.
Let us recall some basic literature on shape optimization: B EGIS AND G LOWINSKI [19], M U [85], H ASLIN RAT AND S IMON [140], P IRONNEAU [159], H ASLINGER AND N EITTAANM AKI

GER AND M AKINEN


[83], B ENDE [21], S OKOLOWSKI AND Z OLESIO [196], B ORNER
[24],
D ELFOUR AND Z OLESIO [54], K AWOHL ET AL . [107], M OHAMMADI AND P IRONNEAU [135].
Besides the basic textbooks, one can find a lot of theoretical analysis in B UCUR AND Z OLE
SIO [35], C HLEBOUN AND M AKINEN
[44], P EICHL AND R ING [155, 156], P ETERSSON [157],
P ETERSSON AND H ASLINGER [158]. Papers focused on applications in electromagnetism are, for

S [123], M ARROCCO AND


example, D I BARBA ET AL . [18], B RANDST ATTER
ET AL . [30], L UK A
P IRONNEAU [132], TAKAHASHI [206]. An optimization of mechanical components is presented
in H AASE AND L INDNER [76].

5.1 A fundamental theorem


Let us suppose that we have a set U being a subset of a normed linear space V and we have the
cost functional J : U 7 R. The optimization problem reads as follows:
)
Find U:
.
(P )
J ( ) J () U
We say that the set U is compact if for any sequence { n } U there exist a subsequence

{nk }
k=1 {n }n=1 and U such that nk in V , as k .
The next fundamental theorem of functional analysis examine the existence of a solution to
the problem (P ).
67

68

CHAPTER 5. ABSTRACT OPTIMAL SHAPE DESIGN PROBLEM

Theorem 5.1. Let U be a compact subset of the normed linear space V and let J : U 7 R be a
continuous functional. Then there exists a solution to the problem (P ).
Proof. See H ASLINGER

AND

[85, p. 67].
N EITTAANM AKI

5.2 Continuous setting


Results of this section are presented in L UK A S [120]. Let us recall that in Assumption 3.1 we
suppose that Rm , m {2, 3}, L is a computational domain that is forever fixed
independently from any parameter or variable.

5.2.1 Admissible shapes


The symbol stands for a shape, which is a continuous function, i.e., C(), where R m1
is a nonempty polyhedral domain, see also Fig. 5.1. We assume that for all the admissible shapes
there exists a common Lipschitz constant C 16 > 0, i.e.,
x, y : |(x) (y)| C16 kx yk.

(5.1)

We further employ the box constraints, i.e., there exist l , u R such that
x : l (x) u .

(5.2)

Then the set of admissible shapes is as follows:


U := { C() | (5.1) and (5.2) hold},

(5.3)

equipped with the uniform convergence, see (3.16),


n in U if n , as n .

(5.4)

Lemma 5.1. U is compact.

Proof. Let {n }
n=1 U be an arbitrary sequence of shapes. By (5.2) the sequence is uniformly bounded and by (5.1) it is equicontinuous. Then by Theorem 3.2 there exist a subsequence

{nk }
k=1 {n }n=1 and C() such that
nk in , as k .
It is easy to see that satisfies both (5.1) and (5.2), which completes the proof.
In Chapter 7, we will deal with an application where we will be at the end looking for smooth
shapes, e.g., Bezier curves or patches, cf. FARIN [59], rather than for continuous ones. To this

end, being inspired by C HLEBOUN AND M AKINEN


[44], we introduce a parameterization, i.e., a
nonempty compact set of design parameters R n , n N, and a continuous nonsurjective
mapping
F : 7 U.
(5.5)
Finally, without loosing generality we assume that the shape controls the following decomposition of into the subdomains 0 () and 1 ()
= 0 () 1 (), 0 () 1 () =

such that graph() 0 () 1 (), meas (0 ()) > 0, and meas (1 ()) > 0,

(5.6)

5.2. CONTINUOUS SETTING

69

as depicted in Fig. 5.1, where the graph is defined by


graph() := { (x1 , . . . , xm1 , y) Rm | x := (x1 , . . . , xm1 ) and y = (x)} .
x3

PSfrag replacements

0 ()
1 ()
x2

x1

Figure 5.1: Decomposition of

5.2.2 Multistate problem


The shape optimization problem is governed by a state problem that describes the related physical
field. This is in our case the weak formulation (W ) which is controlled by the shape via the
material distribution D. We restrict ourselves to the case of two materials with spatial constant
physical properties.
Assumption 5.1. We assume that the material function D D is controlled by the shape U
as follows:
(
D0 , x 0 ()
D (x) :=
,
(5.7)
D1 , x 1 ()
where D0 , D1 R2 2 , 2 N, are constant and positive definite matrices which correspond to
the particular materials.

From the positive definiteness of D0 and D1 , the relation (3.39) follows. The bilinear form
(3.40) now reads
Z
Z
a (v, u) :=
B(v) (D0 B(u)) dx +
B(v) (D1 B(u)) dx, u, v H(B; ),
0 ()

1 ()

(5.8)
where both the operator B and the space H 0 (B; ) were described in Section 3.3.6.
Concerning the linear functional (3.41), we distinguish several righthand sides f , e.g., several
current excitations in case of magnetostatics. The linear functional (3.41) reads as follows:
Z
v
f (v) :=
f v v dx, v H(B; ), for v = 1, 2, . . . , nv ,
(5.9)



where nv N is a number of the considered righthand sides f v L2 () 1 , 1 N, such that
they fulfill
Z
p Ker(B; ) :

where Ker(B; ) is defined by (3.35).

f v p dx = 0

for each v = 1, 2, . . . , nv ,

CHAPTER 5. ABSTRACT OPTIMAL SHAPE DESIGN PROBLEM

70

Assumption 5.2. We assume that for each v = 1, . . . , n v the righthand sides f v are independent
from U.
Now for each v = 1, 2, . . . , nv the state problem (W ) is rewritten as follows:
Find uv () H0, (B; ):
a (v, uv ()) = f v (v)

v H0, (B; )

(W v ())

where the space H0, (B; ) is defined by (3.36).


Lemma 5.2. For each U and v = 1, 2, . . . , n v there exists exactly one solution u v ()
H0, (B; ) to the problem (W v ()). Moreover, there exists a positive constant C 7 such that
U : kuv ()kB, C7 kf v k1 ,0, ,

v = 1, 2, . . . , nv .

Proof. Taking an arbitrary shape U and any v = 1, 2, . . . , n v , the proof is the same as the
one of Theorem 3.16, where the symbols a, f , D, and f are replaced by a , f v , D , and f v ,
respectively.

Lemma 5.3. For each v = 1, 2, . . . , nv the mapping uv : U 7 H0, (B; ) is continuous on U.


Proof. Let v = 1, 2, . . . , nv be arbitrary and let {n }
n=1 U be a sequence such that n
U. To make it readable, we denote
u := uv () and un := uv (n ) .

(5.10)

We observe that (3.46) holds independently of U. Therefore, by the definitions of (W v (n ))


and (W v ()), we have a simple case of Lemma 4.1
1 v
1
a (un u, un u) =
(f (un u) an (u, un u)) =
C7 n
C7
1
=
(a (u, un u) an (u, un u)),
C7

kun uk2B,

(5.11)

where C7 is the H0, (B; )ellipticity constant.


Further, we denote the characteristic functions of the sets 0 () and 1 () by 0 () (x) and
1 () (x), respectively. Since n , the following holds
0 (n ) (x) 0 () (x) and 1 (n ) (x) 1 () (x) a.e. in , as n .

(5.12)

Now, we write down the righthand side of (5.11) and using (3.45) and the CauchySchwarz

5.2. CONTINUOUS SETTING

71



inequality in L2 () 2 we get

Z

|a (u, un u) an (u, un u)| =
+

0 ()

B(u) (D1 B(un u)) dx

1 ()

B(u) (D0 B(un u)) dx+


Z

B(u) (D0 B(un u)) dx

0 (n )

1 (n )



B(u) (D1 B(un u)) dx

Z







0 () 0 (n ) B(u) (D0 B(un u)) dx +
Z






+
1 () 1 (n ) B(u) (D1 B(un u)) dx



d 0 () 0 (n ) B(u) 0, , +
2




+ 1 () 1 (n ) B(u) 0, , kB(un u)k0,2 , .
2

For a better clarity, we introduce the symbols





A0 (n) := 0 () 0 (n ) B(u) 0, , ,
2

Then, the relation (5.13) reads as follows:

(5.13)




A1 (n) := 1 () 1 (n ) B(u) 0,

|a (u, un u) an (u, un u)| d (A0 (n) + A1 (n)) kB(un u)k0,2 ,


d (A0 (n) + A1 (n)) kun ukB, .

From the relation (5.12), it follows that


)


() (x) ( ) (x) 2 kB(u(x))k2 0
n
0
0

a.e. in , as n
() (x) ( ) (x) 2 kB(u(x))k2 0
n
1
1

2 ,

(5.14)

(5.15)

and, since B(u) [L2 ()]2 , the functions on the lefthand side of (5.15) are in L1 () and each
bounded by kB(u)k2 L1 () from above. Now, using Theorem 3.6, we arrive at
A0 (n) 0 and A1 (n) 0, as n .

(5.16)

Combining (5.10), (5.11), (5.14), and (5.16), we have proven the statement
uv (n ) uv () in H(B; ), as n .

5.2.3 Shape optimization problem



 n
Let I : U L2 () 2 v 7 R be a continuous functional. Using (W v ()), we define the cost
functional J : U
7 R by



J () := I , B u1 () , B u2 () , . . . , B(unv ()) , U.

72

CHAPTER 5. ABSTRACT OPTIMAL SHAPE DESIGN PROBLEM

The continuous optimization problem then, in accordance with Section 5.1, reads as follows:
)
Find U:
.
(P )
J ( ) J () U
Theorem 5.2. There exists U that is a solution to (P ).
Proof. By Lemma 5.1,
subset of the normed linear space C(). Using the con U is acompact
n
tinuity of I on U L2 () 2 v and Lemma 5.3, the continuity of J on U follows. Now,
Theorem 5.1 completes the proof.
Moreover, we use (5.5) to define the cost functional Je : 7 R
Je(p) := J (F (p)),

p .

Then, by the compactness of , by the continuity of F on , and by the same arguments as in the
proof of Theorem 5.2, there exists a solution p to the optimization problem
)
Find p :
.
(Pe )
Je(p ) Je(p) p

5.3 Regularized setting

In this section, we will show a convergence of solutions of optimization problems whose state
problems are regularized, as described in Section 3.4.1, to a continuous solution .
Let > 0 be a regularization parameter. Due to (3.47) and (5.8), we introduce the regularized
bilinear form controlled by the shape U
Z
a, (v, u) := a (v, u) +
v u dx, u, v H(B; ).

For each v = 1, 2, . . . , nv the regularized weak formulation reads as follows:


)
Find uv () H0 (B; ):
.
a, (v, uv ()) = f v (v) v H0 (B; )

(Wv ())

Lemma 5.4. Let > 0, U. Then for each v = 1, 2, . . . , n v there exists a unique solution
uv () H0 (B; ) to the problem (Wv ()). Moreover, there exists a positive constant C 8 ()
such that
kuv ()kB, C8 ()kf v k1 ,0, for each v = 1, 2, . . . , nv .
Proof. Taking > 0, an arbitrary shape U, and any v = 1, 2, . . . , n v , the proof is the same as
the one of Theorem 3.17, while the symbols a , f , D, and f are replaced by a, , f v , D , and f v ,
respectively.
Lemma 5.5. Let > 0. Then for each v = 1, 2, . . . , n v the mapping uv : U 7 H0 (B; ) is
continuous on U.
Proof. Taking any > 0, the proof is the same as the one of Lemma 5.3, where all the proper
symbols are subscribed with .

5.3. REGULARIZED SETTING

73

Lemma 5.6. Let U. Then for each v = 1, 2, . . . , n v the following holds




B(uv ()) B(uv ()) in L2 () 2 , as 0+ ,

(5.17)

where uv () H0 (B; ) are the solutions to (Wv ()) and uv () H0, (B; ) is the solution
to (W v ()).

Proof. Taking an arbitrary shape U and any v = 1, 2, . . . , n v , the proof is the same as the
one of Theorem 3.18, where we replace the symbols related to the problems (W ) and (W ) by the
corresponding symbols related to (W v ()) and (Wv ()), respectively.
Now, we return to the shape optimization problem. We introduce the regularized cost functional by



J () := I , B u1 () , B u2 () , . . . , B(un v ()) , U.
The regularized shape optimization problem then reads as follows:
)
Find U:
.
J ( ) J () U

(P )

Theorem 5.3. Let > 0. Then there exists U that is a solution to (P ).


Proof. Taking any > 0, the proof is fairly the same as the one of Theorem 5.2, where we use the
symbol J instead of J , and Lemma 5.5 instead of Lemma 5.3.
Theorem 5.4. Let {n }
n=1 R be a sequence of positive regularization parameters such that
n 0+ , as n and let n U be the corresponding solutions to the problems (P n ). Then

there exist a subsequence {nk }


k=1 {n }n=1 and a shape U such that
n in U, as k
k

holds and, moreover, is a solution to the problem (P ).


Proof. By Theorem 5.3, for each n > 0 there exists n U which
 isa solution to (Pn). By
Lemma 5.1 there exists a subsequence of shapes { n }

n n=1 and a shape U


k k=1
such that
n in U, as k .
(5.18)
k

Let U be arbitrary. Then, due to the definition of (P nk ), for any k N we get




Jnk n Jnk ().
k

(5.19)

Using Lemma 5.6 and the continuity of I, the righthand side of (5.19) converges as follows:
Jnk () J (), as k .
Using (5.18), Lemma 5.5, Lemma 5.6, and the continuity of I, the lefthand side of (5.19) also
converges


Jnk n J ( ), as k .
k

Therefore, we have proven that for any U

J ( ) J ().

CHAPTER 5. ABSTRACT OPTIMAL SHAPE DESIGN PROBLEM

74

Finally, we introduce the regularized cost functional Je : 7 R by


Je (p) := J (F (p)),

p .

Then, the regularized optimization problem reads as follows:


Find p :
Je (p ) Je (p)

(Pe )

In the same fashion as in the case of (P ), we can derive the existence theory as well as the
approximation property of the problem ( Pe ).

5.4 Discretized setting

In this section, we introduce a setting of the shape optimization problem (P ) discretized by the
finite element method. We will prove a convergence of the approximate solutions to the true one.
Let > 0 be a regularization parameter and let h > 0 be a discretization parameter, see (4.31).
With any h > 0 we associate a nonempty polyhedral computational subdomain h such
that (4.35) is satisfied.

5.4.1 Discretized set of admissible shapes

First, we introduce a finite-dimensional approximation of the set $U$ of admissible shapes. Let $T_\alpha^h := \{\tau_1^h, \dots, \tau_{n_\tau^h}^h\}$, $n_\tau^h \in \mathbb{N}$, be a discretization of the nonempty polyhedral shape parameter domain in $\mathbb{R}^{m-1}$. Let $P^1(T_\alpha^h)$ denote the space of continuous functions that are linear over $\tau_i^h$ for each $i = 1, \dots, n_\tau^h$. We denote the corners of $\tau_i^h$ by $x_{\tau_i^h,1}, \dots, x_{\tau_i^h,m} \in \mathbb{R}^{m-1}$. By $x_{\alpha^h,1}, \dots, x_{\alpha^h,n_{x_\alpha}^h} \in \mathbb{R}^{m-1}$ we denote all the nodes of the discretization $T_\alpha^h$.
Now, we discretize the condition (5.1) as follows:
$$ \bigl|\alpha^h\bigl(x_{\tau_i^h,j}\bigr) - \alpha^h\bigl(x_{\tau_i^h,k}\bigr)\bigr| \le C_{16}\,\bigl|x_{\tau_i^h,j} - x_{\tau_i^h,k}\bigr| \quad \text{for } i = 1, \dots, n_\tau^h,\; j,k = 1, \dots, m,\; j \ne k, \tag{5.20}$$
which in total involves $n_\tau^h$ or $3 n_\tau^h$ conditions in case of $m = 2$ or $m = 3$, respectively. The discretized box constraints (5.2) include the following $n_{x_\alpha}^h$ conditions
$$ \alpha_l \le \alpha^h\bigl(x_{\alpha^h,i}\bigr) \le \alpha_u \quad \text{for } i = 1, \dots, n_{x_\alpha}^h. \tag{5.21}$$
Then the discretized set of admissible shapes is as follows:
$$ U^h := \bigl\{\alpha^h \in P^1(T_\alpha^h) \bigm| \text{(5.20) and (5.21) hold}\bigr\},$$
equipped with the uniform convergence (5.4). Obviously, for each $h > 0$: $U^h \subset U$ and, by the definition of $P^1(T_\alpha^h)$, $U^h$ is finite-dimensional.

Lemma 5.7. For any h > 0 the set U h is compact.


Proof. See the proof of Lemma 5.1.



We introduce an interpolation operator $\pi^h : U \to P^1(T_\alpha^h)$ such that for each $x_{\tau_i^h,j}$, a corner of $\tau_i^h$, it holds that
$$ \forall \alpha \in U: \quad \bigl(\pi^h(\alpha)\bigr)\bigl(x_{\tau_i^h,j}\bigr) = \alpha\bigl(x_{\tau_i^h,j}\bigr), \quad i = 1, \dots, n_\tau^h,\; j = 1, \dots, m. \tag{5.22}$$

Lemma 5.8. Let $\alpha \in U$ be an arbitrary shape and let $\{h_n\}_{n=1}^\infty \subset \mathbb{R}$ be a sequence of positive discretization parameters. Then the following holds:
$$ \pi^{h_n}(\alpha) \to \alpha \text{ in } U, \text{ as } h_n \to 0^+.$$

Proof. See Begis and Glowinski [19]. $\square$

Further, let $h > 0$ be given. Since for any $\alpha \in U$: (5.1) implies (5.20), and (5.2) implies (5.21), then also $\alpha \in U$ implies $\pi^h(\alpha) \in U^h$. Moreover, as $F : \mathcal{D} \to U$, then for any $p \in \mathcal{D}$ it follows that $\pi^h(F(p)) \in U^h$. Therefore, we use $\mathcal{D}$ also for the discretized setting.
Finally, like in the continuous case we assume that a discretized shape $\alpha^h$ controls the decomposition of $\Omega^h$ into the subdomains $\Omega_0^h(\alpha^h)$ and $\Omega_1^h(\alpha^h)$ as follows:
$$ \Omega^h = \Omega_0^h(\alpha^h) \cup \Omega_1^h(\alpha^h), \quad \Omega_0^h(\alpha^h) \cap \Omega_1^h(\alpha^h) = \emptyset,$$
such that $\mathrm{graph}(\alpha^h) \subset \partial\Omega_0^h(\alpha^h) \cap \partial\Omega_1^h(\alpha^h)$, $\mathrm{meas}\bigl(\Omega_0^h(\alpha^h)\bigr) > 0$, and $\mathrm{meas}\bigl(\Omega_1^h(\alpha^h)\bigr) > 0$, (5.23)
where the boundaries $\partial\Omega_0^h(\alpha^h)$, $\partial\Omega_1^h(\alpha^h)$ are polyhedral, as depicted in Fig. 5.2.


Figure 5.2: Decomposition of $\Omega^h$

5.4.2 Discretized multi-state problem

Let $\varepsilon > 0$ be a regularization parameter. For each $\alpha^h \in U^h$ we provide a discretization $T^h(\alpha^h) := \{K_1(\alpha^h), \dots, K_{n_K^h}(\alpha^h)\}$ of the computational domain $\Omega^h$ such that
$$ \forall K_i(\alpha^h) \in T^h(\alpha^h): \quad K_i(\alpha^h) \subset \overline{\Omega_0^h(\alpha^h)} \;\text{ or }\; K_i(\alpha^h) \subset \overline{\Omega_1^h(\alpha^h)}. \tag{5.24}$$


By $E^h := \{e_1, \dots, e_{n_K^h}\}$ we denote the corresponding set of finite elements. For any $h > 0$ and $\alpha^h \in U^h$ we suppose that Assumptions 4.1-4.7 hold. We introduce another assumption, which is due to Haslinger and Neittaanmäki [85, p. 67].

Assumption 5.3. We assume that for any $h > 0$ fixed the topology of the discretization grid $T^h(\alpha^h)$ is independent of $\alpha^h \in U^h$. We further assume that the coordinates of $x_1^{e_i}(\alpha^h), \dots, x_{m+1}^{e_i}(\alpha^h) \in \mathbb{R}^m$, see (4.25), which are the corners of $K_i(\alpha^h) \in T^h(\alpha^h)$, still form a triangle ($m = 2$) or a tetrahedron ($m = 3$), and they depend continuously on $\alpha^h \in U^h$.
The regularized and discretized setting of the multi-state problem reads as follows:
$$ \text{Find } u_\varepsilon^{v,h}(\alpha^h) \in H_0(B;\Omega^h): \quad a_{\varepsilon,h}^{\alpha^h}\bigl(v^h, u_\varepsilon^{v,h}(\alpha^h)\bigr) = f^{v,h}(v^h) \quad \forall v^h \in H_0(B;\Omega^h), \; v = 1, \dots, n_v, \qquad (W_\varepsilon^{v,h}(\alpha^h))$$
where the finite element space $H_0(B;\Omega^h)$ is defined by (4.17), where further for each $v^h, w^h \in H(B;\Omega^h)$ we define
$$ a_{\varepsilon,h}^{\alpha^h}\bigl(v^h, w^h\bigr) := \int_{\Omega^h} B(v^h) \cdot \bigl(D_{\alpha^h}^h\, B(w^h)\bigr)\, dx + \varepsilon \int_{\Omega^h} v^h \cdot w^h\, dx, \tag{5.25}$$
in which, in virtue of Assumption 5.1,
$$ D_{\alpha^h}^h(x) := \begin{cases} D_0, & x \in \Omega_0^h(\alpha^h),\\ D_1, & x \in \Omega_1^h(\alpha^h),\\ D(x), & x \in \Omega \setminus \Omega^h, \end{cases}$$
and where for each $v^h \in H(B;\Omega^h)$ we set
$$ f^{v,h}(v^h) := \int_{\Omega^h} f^{v,h} \cdot v^h\, dx, \quad v = 1, \dots, n_v, \tag{5.26}$$
in which, due to (4.18), the right-hand sides $f^{v,h}$ are elementwise constant functions in $L^2(\Omega^h)$ such that
$$ \bigl\|f^{v,h} - f^v\bigr\|_{0,\Omega} \to 0, \text{ as } h \to 0^+, \quad v = 1, \dots, n_v, \tag{5.27}$$
where $f^{v,h}(x) := f^v(x)$ in $\Omega \setminus \Omega^h$. The following is in virtue of Assumption 5.2.

Assumption 5.4. We assume that for each $v = 1, \dots, n_v$ the right-hand side $f^{v,h}$ is independent of the shape $\alpha^h \in U^h$.
The following lemma assures that for any $\varepsilon > 0$, $h > 0$, and $v = 1, \dots, n_v$ fixed the mapping $u_\varepsilon^{v,h} : U^h \to H_0(B;\Omega^h)$ is well defined.

Lemma 5.9. For each $\varepsilon > 0$, $h > 0$, $\alpha^h \in U^h$, and $v = 1, \dots, n_v$ there exists a unique solution $u_\varepsilon^{v,h}(\alpha^h) \in H_0(B;\Omega^h)$ to the problem $(W_\varepsilon^{v,h}(\alpha^h))$.

Proof. Since $\Omega^h$ is a polyhedron, then $\Omega^h \in L$ and the statement follows by the same arguments as in the proof of Theorem 3.17. $\square$


Lemma 5.10. Let $\varepsilon > 0$, $h > 0$. Then for each $v = 1, 2, \dots, n_v$ the mapping $u_\varepsilon^{v,h} : U^h \to H_0(B;\Omega^h)$ is continuous on $U^h$.

Proof. We take an arbitrary $\varepsilon > 0$, $h > 0$, and $v = 1, \dots, n_v$. Note that we cannot use the same technique as in the proof of Lemma 5.3, since the settings of $(W_\varepsilon^{v,h}(\alpha^h))$ differ from one $\alpha^h \in U^h$ to another. Therefore, the estimate (5.11) cannot be established. We will rather exploit the algebraic structure of the mapping $u_\varepsilon^{v,h}$.
In a similar manner as in (4.24), the solution to $(W_\varepsilon^{v,h}(\alpha^h))$ reads as follows:
$$ u_\varepsilon^{v,h} = \sum_{i=1}^{n} u_{\varepsilon,i}^{v,n}\bigl(x^h(\alpha^h)\bigr)\, \phi_i^h\bigl(x^h(\alpha^h)\bigr), \tag{5.28}$$
where $x^h(\alpha^h)$ denotes a vector of global coordinates of all element domain corners, which are by Assumption 5.3 continuously dependent on the shape $\alpha^h \in U^h$, where further $\phi_i^h(x^h)$ denote the global shape functions, and where we use the same notation for both the function $u_\varepsilon^{v,n}$ and the coefficient vector
$$ \bar u_\varepsilon^{v,n}\bigl(x^h(\alpha^h)\bigr) := \Bigl(u_{\varepsilon,1}^{v,n}\bigl(x^h(\alpha^h)\bigr), \dots, u_{\varepsilon,n}^{v,n}\bigl(x^h(\alpha^h)\bigr)\Bigr) \in \mathbb{R}^n,$$
which is the solution to the linear system (4.9). In this case, (4.9) reads as follows:
$$ A^n(x^h)\, \bar u_\varepsilon^{v,n}(x^h) = \bar f^{v,n}(x^h). \tag{5.29}$$
Now, let us take a look into the assembling of the matrix and the right-hand side vector in (5.29). Due to (4.21), the element contributions to them are
$$ \bigl[A^n(x^h)\bigr]_{i,j} = \sum_{e \in E_i^h \cap E_j^h}\; \sum_{k,l=1}^{n_e} a_{\varepsilon,x^e}^e\bigl(\phi_k^e(x^e), \phi_l^e(x^e)\bigr), \qquad \bigl[\bar f^{v,n}(x^h)\bigr]_i = \sum_{e \in E_i^h}\; \sum_{k=1}^{n_e} f^{v,e}\bigl(\phi_k^e(x^e)\bigr) \tag{5.30}$$
for $i,j = 1, \dots, n$, where $E_i^h$ denotes the set of elements neighbouring with $e_i$, see (4.20), and $x^e$ is the vector of coordinates of the element domain corners, see (4.25). Using the map from the reference element $r$, the element contributions to the bilinear form and linear functional, see also (4.29) and (4.30), respectively, are
$$ a_{\varepsilon,x^e}^e\bigl(\phi_k^e(x^e), \phi_l^e(x^e)\bigr) = \int_{K^r} \bigl(S_B^e(x^e)\, B_{\hat x}(\hat\phi_k^r)\bigr) \cdot \bigl(D^e\, S_B^e(x^e)\, B_{\hat x}(\hat\phi_l^r)\bigr)\, |\det(R^e(x^e))|\, d\hat x \;+\; \varepsilon \int_{K^r} \bigl(S^e(x^e)\, \hat\phi_k^r\bigr) \cdot \bigl(S^e(x^e)\, \hat\phi_l^r\bigr)\, |\det(R^e(x^e))|\, d\hat x,$$
$$ f^{v,e}\bigl(\phi_k^e(x^e)\bigr) = \int_{K^r} f^{v,e} \cdot \bigl(S^e(x^e)\, \hat\phi_k^r\bigr)\, |\det(R^e(x^e))|\, d\hat x, \tag{5.31}$$
where $k,l = 1, \dots, n_e$ and where we also used Assumptions 5.1 and 5.4.



The expressions (5.28)-(5.31) specify the function $u_\varepsilon^{v,h}(\alpha^h)$. Now, we will prove its continuity. Let $\alpha^h \in U^h$ be an arbitrary discretized admissible shape and let $\{\alpha_p^h\}_{p=1}^\infty \subset U^h$ be such a sequence that
$$ \alpha_p^h \to \alpha^h \text{ in } U^h, \text{ as } p \to \infty.$$
Let us denote for each element $e \in E^h$, where $E^h$ stands for the set of finite elements,
$$ x_p^e := x^e(\alpha_p^h) \quad \text{and} \quad x^e := x^e(\alpha^h).$$
By Assumption 5.3, for each element $e \in E^h$ we get
$$ x_p^e \to x^e \text{ in } \mathbb{R}^{m(m+1)}, \text{ as } p \to \infty. \tag{5.32}$$
Again by Assumption 5.3, for each $e \in E^h$ and each $p \in \mathbb{N}$ the element domains $K^e(x_p^e)$ as well as $K^e(x^e)$ still form a triangle (in case of $m = 2$) or a tetrahedron (in case of $m = 3$), and it follows from the definition (4.28) that the matrices $R^e(x_p^e)$, $R^e(x^e)$ are nonsingular, and therefore
$$ |\det(R^e(x_p^e))| > 0, \quad |\det(R^e(x^e))| > 0.$$
Moreover, as both $R^e$ and $r^e$ by (4.28) continuously depend on $x^e$, we get
$$ |\det(R^e(x_p^e))| \to |\det(R^e(x^e))| \text{ in } \mathbb{R}, \text{ as } p \to \infty.$$
From Assumption 4.2 it follows that
$$ S^e(x_p^e) \to S^e(x^e) \text{ in } \mathbb{R}^{1\times 1} \quad \text{and} \quad S_B^e(x_p^e) \to S_B^e(x^e) \text{ in } \mathbb{R}^{2\times 2}, \text{ as } p \to \infty. \tag{5.33}$$
Now, the only symbols in the integrals (5.31) that depend on the integration variable $\hat x$ are $\hat\phi_k^r(\hat x)$, $\hat\phi_l^r(\hat x)$, $B_{\hat x}(\hat\phi_k^r(\hat x))$, and $B_{\hat x}(\hat\phi_l^r(\hat x))$. However, they are each independent of the vector $x^e$. We can expand the matrix multiplications in the integrands, which leads to the following finite linear combinations of integrals:
$$ a_{\varepsilon,x^e}^e\bigl(\phi_k^e(x^e), \phi_l^e(x^e)\bigr) = \sum_{i=1}^{N} c_{\varepsilon,i}^e(x^e) \int_{K^r} F_i(\hat x)\, d\hat x, \qquad f^{v,e}\bigl(\phi_k^e(x^e)\bigr) = \sum_{j=1}^{M} d_j^{v,e}(x^e) \int_{K^r} G_j(\hat x)\, d\hat x.$$
From (5.32) and (5.33) for $i = 1, \dots, N$, $j = 1, \dots, M$ it follows that
$$ c_{\varepsilon,i}^e(x_p^e) \to c_{\varepsilon,i}^e(x^e) \quad \text{and} \quad d_j^{v,e}(x_p^e) \to d_j^{v,e}(x^e) \text{ in } \mathbb{R}, \text{ as } p \to \infty,$$
which consequently yields
$$ a_{\varepsilon,x_p^e}^e\bigl(\phi_k^e(x_p^e), \phi_l^e(x_p^e)\bigr) \to a_{\varepsilon,x^e}^e\bigl(\phi_k^e(x^e), \phi_l^e(x^e)\bigr) \text{ in } \mathbb{R}, \text{ as } p \to \infty$$
and
$$ f^{v,e}\bigl(\phi_k^e(x_p^e)\bigr) \to f^{v,e}\bigl(\phi_k^e(x^e)\bigr) \text{ in } \mathbb{R}, \text{ as } p \to \infty$$
for $k,l = 1, \dots, n_e$. By Assumption 5.3 the topology of the mesh $T^h(\alpha^h)$ does not change with any $\alpha^h \in U^h$, hence, the sets $E_i^h$ and $E_j^h$ in (5.30) remain unchanged. It follows that
$$ \bigl[A^n\bigl(x^h(\alpha_p^h)\bigr)\bigr]_{i,j} \to \bigl[A^n\bigl(x^h(\alpha^h)\bigr)\bigr]_{i,j} \quad \text{and} \quad \bigl[\bar f^{v,n}\bigl(x^h(\alpha_p^h)\bigr)\bigr]_i \to \bigl[\bar f^{v,n}\bigl(x^h(\alpha^h)\bigr)\bigr]_i \text{ in } \mathbb{R}, \text{ as } p \to \infty,$$
for each $i,j = 1, \dots, n$. From here and (5.29) we get
$$ \bar u_\varepsilon^{v,n}\bigl(x^h(\alpha_p^h)\bigr) \to \bar u_\varepsilon^{v,n}\bigl(x^h(\alpha^h)\bigr) \text{ in } \mathbb{R}^n, \text{ as } p \to \infty. \tag{5.34}$$
Using the map from the reference element $r$, the global shape function reads as follows:
$$ \phi_i^h(x^h) = \sum_{e \in E_i^h}\; \sum_{j:\,G^e(j)=i} \phi_j^e(x^e) = \sum_{e \in E_i^h}\; \sum_{j:\,G^e(j)=i} S^e(x^e)\, \hat\phi_j^r,$$
which together with (5.33) and with Assumption 5.3 yields
$$ \phi_i^h\bigl(x^h(\alpha_p^h)\bigr) \to \phi_i^h\bigl(x^h(\alpha^h)\bigr), \text{ as } p \to \infty. \tag{5.35}$$
Combining (5.28), (5.34), and (5.35), we have completed the proof, i.e.,
$$ u_\varepsilon^{v,h}(\alpha_p^h) \to u_\varepsilon^{v,h}(\alpha^h) \text{ in } H_0(B;\Omega^h), \text{ as } p \to \infty. \qquad \square$$

Lemma 5.11. Let $\varepsilon > 0$ be a regularization parameter. Let $\{h_n\}_{n=1}^\infty \subset \mathbb{R}$ be a sequence of positive discretization parameters such that $h_n \to 0^+$, as $n \to \infty$. Let $\{\Omega^{h_n}\}_{n=1}^\infty$, $\Omega^{h_n} \subset \Omega$, be a sequence of subdomains satisfying $\Omega^{h_n} \nearrow \Omega$, as $n \to \infty$. Further, let $\alpha \in U$ be a shape and $\{\alpha^{h_n}\}_{n=1}^\infty \subset U$, $\alpha^{h_n} \in U^{h_n}$, be a sequence of discretized shapes such that
$$ \alpha^{h_n} \to \alpha \text{ in } U, \text{ as } n \to \infty.$$
Then for each $v = 1, \dots, n_v$
$$ X_1^{h_n}\bigl(u_\varepsilon^{v,h_n}(\alpha^{h_n})\bigr) \to u_\varepsilon^v(\alpha) \text{ in } H_0(B;\Omega), \text{ as } n \to \infty,$$
where $u_\varepsilon^{v,h_n}(\alpha^{h_n})$ is the solution to $(W_\varepsilon^{v,h_n}(\alpha^{h_n}))$ and $u_\varepsilon^v(\alpha)$ is the solution to $(W_\varepsilon^v(\alpha))$.

Proof. It is enough to prove that the assumption (4.42) is fulfilled; the rest is then fairly the same as the proof of Theorem 4.2, while (4.43) is replaced by the assumption (5.27).
Given an arbitrary $x \in \Omega \setminus \bigl(\partial\Omega_0(\alpha) \cup \partial\Omega_1(\alpha)\bigr)$, it follows from (5.6) that either $x \in \Omega_0(\alpha)$ or $x \in \Omega_1(\alpha)$. Thus, by (5.7) either
$$ D_\alpha(x) = D_0 \quad \text{or} \quad D_\alpha(x) = D_1,$$
respectively. Having $\Omega^{h_n} \nearrow \Omega$, $\alpha^{h_n} \to \alpha$, as $n \to \infty$, and due to (5.23), there exists $n_0(x) \in \mathbb{N}$ such that for each $n \in \mathbb{N}$, $n \ge n_0(x)$, either $x \in \Omega_0^{h_n}(\alpha^{h_n})$ or $x \in \Omega_1^{h_n}(\alpha^{h_n})$, respectively. Therefore, either
$$ D_{\alpha^{h_n}}^{h_n}(x) = D_0 \quad \text{or} \quad D_{\alpha^{h_n}}^{h_n}(x) = D_1,$$
respectively. Thus, we have verified the assumption (4.42), i.e., for any $i,j = 1, \dots, 2$:
$$ \bigl|d_{\alpha^{h_n},i,j}^{h_n}(x) - d_{\alpha,i,j}(x)\bigr| \to 0 \text{ a.e. in } \Omega, \text{ as } n \to \infty,$$
where $D_{\alpha^{h_n}}^{h_n}(x) := \bigl(d_{\alpha^{h_n},i,j}^{h_n}(x)\bigr)_{i,j} \in \mathbb{R}^{2\times 2}$ and $D_\alpha(x) := \bigl(d_{\alpha,i,j}(x)\bigr)_{i,j} \in \mathbb{R}^{2\times 2}$. $\square$


5.4.3 Discretized optimization problem

The regularized and discretized cost functional is
$$ J_\varepsilon^h(\alpha^h) := I\Bigl(\alpha^h, B\bigl(X_1^h(u_\varepsilon^{1,h}(\alpha^h))\bigr), \dots, B\bigl(X_1^h(u_\varepsilon^{n_v,h}(\alpha^h))\bigr)\Bigr), \quad \alpha^h \in U^h, \tag{5.36}$$
where $X_1^h : H_0(B;\Omega^h) \to H_0(B;\Omega)$ is due to (4.33) and Lemma 4.3. The relevant setting of the shape optimization problem reads as follows:
$$ \text{Find } \alpha_\varepsilon^{h*} \in U^h: \quad J_\varepsilon^h\bigl(\alpha_\varepsilon^{h*}\bigr) \le J_\varepsilon^h(\alpha^h) \quad \forall \alpha^h \in U^h. \qquad (P_\varepsilon^h)$$

Theorem 5.5. Let $\varepsilon > 0$ and $h > 0$. Then there exists $\alpha_\varepsilon^{h*} \in U^h$ that is a solution to $(P_\varepsilon^h)$.

Proof. Taking any $\varepsilon > 0$ and $h > 0$, the proof is fairly the same as the one of Theorem 5.2, where we use the symbol $J_\varepsilon^h$ instead of $J$, and Lemma 5.10 instead of Lemma 5.3. $\square$

Theorem 5.6. Let $\varepsilon > 0$ be a fixed regularization parameter. Let $\{h_n\}_{n=1}^\infty \subset \mathbb{R}$ be a sequence of positive discretization parameters such that $h_n \to 0^+$, as $n \to \infty$, and let $\alpha_\varepsilon^{h_n*} \in U^{h_n}$ denote the corresponding solutions to the problems $(P_\varepsilon^{h_n})$. Then there exist a subsequence $\{\alpha_\varepsilon^{h_{n_k}*}\}_{k=1}^\infty \subset \{\alpha_\varepsilon^{h_n*}\}_{n=1}^\infty$ and a shape $\alpha_\varepsilon^* \in U$ such that
$$ \alpha_\varepsilon^{h_{n_k}*} \to \alpha_\varepsilon^* \text{ in } U, \text{ as } k \to \infty,$$
holds and, moreover, $\alpha_\varepsilon^*$ is a solution to the problem $(P_\varepsilon)$.

Proof. By Theorem 5.5, for each $\varepsilon > 0$ and $h_n > 0$ there exists $\alpha_\varepsilon^{h_n*} \in U^{h_n}$, a solution to $(P_\varepsilon^{h_n})$. By Lemma 5.1, there exist a subsequence of shapes $\{\alpha_\varepsilon^{h_{n_k}*}\}_{k=1}^\infty \subset \{\alpha_\varepsilon^{h_n*}\}_{n=1}^\infty$ and a shape $\alpha_\varepsilon^* \in U$ such that
$$ \alpha_\varepsilon^{h_{n_k}*} \to \alpha_\varepsilon^* \text{ in } U, \text{ as } k \to \infty. \tag{5.37}$$
Let $\alpha \in U$ be an arbitrary shape. By Lemma 5.8, there exists a sequence $\{\alpha^{h_{n_k}}\}_{k=1}^\infty$, $\alpha^{h_{n_k}} \in U^{h_{n_k}}$, such that
$$ \alpha^{h_{n_k}} \to \alpha \text{ in } U, \text{ as } k \to \infty. \tag{5.38}$$
Then, due to the definition of $(P_\varepsilon^{h_{n_k}})$, for any $k \in \mathbb{N}$ we have
$$ J_\varepsilon^{h_{n_k}}\bigl(\alpha_\varepsilon^{h_{n_k}*}\bigr) \le J_\varepsilon^{h_{n_k}}\bigl(\alpha^{h_{n_k}}\bigr). \tag{5.39}$$
Using (5.37), (5.38), Lemma 5.11, and the continuity of $I$, both the left- and right-hand side of (5.39) converge:
$$ J_\varepsilon^{h_{n_k}}\bigl(\alpha_\varepsilon^{h_{n_k}*}\bigr) \to J_\varepsilon(\alpha_\varepsilon^*) \quad \text{and} \quad J_\varepsilon^{h_{n_k}}\bigl(\alpha^{h_{n_k}}\bigr) \to J_\varepsilon(\alpha), \text{ as } k \to \infty.$$
Therefore, we have proven that for any $\alpha \in U$
$$ J_\varepsilon(\alpha_\varepsilon^*) \le J_\varepsilon(\alpha). \qquad \square$$


Finally, we introduce the regularized and discretized cost functional $\widetilde J_\varepsilon^h : \mathcal{D} \to \mathbb{R}$ by
$$ \widetilde J_\varepsilon^h(p) := J_\varepsilon^h\bigl(\pi^h(F(p))\bigr), \quad p \in \mathcal{D}.$$
Then, the regularized and discretized optimization problem reads as follows:
$$ \text{Find } p_\varepsilon^{h*} \in \mathcal{D}: \quad \widetilde J_\varepsilon^h\bigl(p_\varepsilon^{h*}\bigr) \le \widetilde J_\varepsilon^h(p) \quad \forall p \in \mathcal{D}. \qquad (\widetilde P_\varepsilon^h)$$
Since $\mathcal{D}$ is a compact set and $\pi^h \circ F : \mathcal{D} \to U^h$ is a continuous mapping, we can state and prove the existence theorem for $(\widetilde P_\varepsilon^h)$ similarly to Theorem 5.5. We can also state the convergence theorem, the proof of which is even simpler than the one of Theorem 5.6, as the set of admissible design parameters is not changed by the discretization.

Remark 5.1. In case of complex geometries, such as those in Chapter 7, Assumption 5.3 is a serious bottleneck of this discretization approach. For small discretization parameters and large changes in the design we cannot guarantee that the perturbed elements still satisfy a regularity condition; they might even be flipped. In such a case, we have to remesh the geometry and solve the optimization problem again, but now on a grid of a different topology. Then the cost functional is certainly no longer continuous and the convergence theory just introduced cannot be applied. Nevertheless, as far as finite element discretizations are concerned, this approach is still the one most frequently used in the literature. In practice, after we obtain an optimized shape we should compare the value of a very finely discretized cost functional for the optimized design with its value for the initial design. If we see an improvement, then the optimization did its job. Some remedies for this obstacle are discussed in the Conclusion.


Chapter 6

Numerical methods for shape optimization

In this chapter, which is the heart of the thesis, we will focus on Newton-type algorithms for smooth discretized shape optimization problems. At the beginning, we will revisit the discretized setting and show its smoothness, i.e., the continuity of both the cost and constraint functions up to their second derivatives. Further, we will recall a quasi-Newton algorithm that uses the first derivatives only. Then, we will discuss the first-order sensitivity analysis methods. We will derive a robust, but still efficient, algorithm based on the algebraic approach to first-order shape sensitivity analysis, and we will implement it in an object-oriented software framework. At the end, we will introduce a multilevel optimization approach.
An extensive literature on gradient and Newton-type optimization methods has been written. Let us refer to Nocedal and Wright [148], Fletcher and Reeves [63], Fletcher [61, 62], Svanberg [203, 204], Dennis and Schnabel [55], Gill, Murray, and Wright [66], Grossmann and Terno [72], Cea [39], Clarke [48], Hager, Hearn, and Pardalos [81], Hestensen [88, 89], Polak [161], Polak and Ribière [162], Boggs and Tolle [23], Conn, Gould, and Toint [49], Mäkelä and Neittaanmäki [130], Zowe, Kočvara, and Bendsøe [222]. There are many essential monographs and papers dealing with the sensitivity analysis in shape optimization. Let us mention Haug, Choi, and Komkov [86], Haslinger and Neittaanmäki [85], Haslinger and Mäkinen [83], Zolésio [220], Sokolowski and Zolésio [196], Sokolowski and Żochowski [195], Petersson [157], Simon [193], Laporte and Tallec [118], Delfour and Zolésio [54], Hansen, Ziu, and Olhoff [82], Brockman [34], Griewank [70], Mäkinen [131], Neittaanmäki and Salmenjoki [145].

6.1 The discretized optimization problem revisited

Throughout this chapter, we will consider the optimization problem $(\widetilde P_\varepsilon^h)$, introduced in Chapter 5. Recall that for a given $\varepsilon > 0$ and $h > 0$ the problem reads as follows:
$$ \text{Find } p_\varepsilon^{h*} \in \mathcal{D}: \quad \widetilde J_\varepsilon^h\bigl(p_\varepsilon^{h*}\bigr) \le \widetilde J_\varepsilon^h(p) \quad \forall p \in \mathcal{D}, \qquad (\widetilde P_\varepsilon^h)$$
where $\widetilde J_\varepsilon^h : \mathcal{D} \to \mathbb{R}$ denotes the discretized and regularized cost functional and $\mathcal{D} \subset \mathbb{R}^{n_p}$, $n_p \in \mathbb{N}$, is the set of admissible design parameters.

6.1.1 Constraint function

Let us rewrite the admissible set as follows:
$$ \mathcal{D} := \{p \in \mathbb{R}^{n_p} \mid \psi(p) \le 0\},$$
where $\psi : \mathbb{R}^{n_p} \to \mathbb{R}^{n_\psi}$, $n_\psi \in \mathbb{N}$, is the constraint function.

Assumption 6.1. We assume that $\psi \in [C^2(\mathbb{R}^{n_p})]^{n_\psi}$.
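As a concrete illustration, which is not taken from the thesis itself, the simplest state-independent constraints are box constraints on the design variables. A minimal sketch in Python, assuming hypothetical bounds lower and upper, shows how such bounds fit the form $\psi(p) \le 0$ with a twice continuously differentiable (here even linear) constraint function:

```python
import numpy as np

# A minimal sketch (assumed example): box constraints  l <= p <= u
# written as  psi(p) <= 0  with  psi(p) = (l - p, p - u).
def make_box_constraints(lower, upper):
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)

    def psi(p):
        p = np.asarray(p, float)
        return np.concatenate([lower - p, p - upper])

    def grad_psi(p):
        n = len(lower)
        # rows: constraints, columns: design variables
        return np.vstack([-np.eye(n), np.eye(n)])

    return psi, grad_psi
```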

6.1.2 Design-to-shape mapping

Let us take a deeper look into the structure of the cost functional in order to prove its smoothness and to derive the first-order derivatives with respect to the design variables. First, we define the shape parameterization function $\pi^h := (\pi_1^h, \dots, \pi_{n_\alpha^h}^h) : \mathbb{R}^{n_p} \to \mathbb{R}^{n_\alpha^h}$, $n_\alpha^h := n_{x_\alpha}^h \in \mathbb{N}$, by
$$ \pi_i^h(p) := \bigl[\pi^h\bigl(F(p)\bigr)\bigr]\bigl(x_{\alpha^h,i}\bigr) = [F(p)]\bigl(x_{\alpha^h,i}\bigr) \quad \text{for } i = 1, \dots, n_\alpha^h, \tag{6.1}$$
where $F : \mathcal{D} \to U$ is due to (5.5) and $\pi^h : U \to P^1(T_\alpha^h)$ is defined by (5.22).

6.1.3 Shape-to-mesh mapping

Here, we revisit the dependence of the discretization grid nodes on the shape control nodes, i.e., we introduce a function $x^h : \mathbb{R}^{n_\alpha^h} \to \mathbb{R}^{m\,n_x^h}$ the components of which correspond to the grid nodal coordinates (4.11), where $n_x^h \in \mathbb{N}$ denotes the number of nodes in the discretization $T^h(\alpha^h)$ of the domain $\Omega^h$. The function $x^h(\cdot)$ maps the shape control coordinates onto the remaining grid nodal coordinates by means of solving an auxiliary discretized linear elasticity problem in terms of grid displacements $\Delta x^h$, with a nonhomogeneous Dirichlet boundary condition that corresponds to the given shape displacements $\alpha^h$, and with prescribed zero displacements on $\partial\Omega^h$ and on such inner interfaces that are not allowed to move. The zero displacements are, for example, prescribed along the boundary of a subdomain containing nonzero sources $f^h$. The shape-to-mesh mapping is as follows:
$$ x^h(\alpha^h) := x_0^h + \Delta x^h(\alpha^h) + M^h \alpha^h, \quad \text{where} \quad K^h(x_0^h)\,\Delta x^h(\alpha^h) = b^h(\alpha^h), \tag{6.2}$$
in which the vector $x_0^h \in \mathbb{R}^{m\,n_x^h}$ contains the initial grid nodal coordinates and is independent of $\alpha^h$, where further $K^h(x_0^h) \in \mathbb{R}^{(m\,n_x^h)\times(m\,n_x^h)}$ is a nonsingular symmetric stiffness matrix, $b^h(\alpha^h) \in \mathbb{R}^{m\,n_x^h}$ is a right-hand side vector linearly dependent on $\alpha^h \in \mathbb{R}^{n_\alpha^h}$, and where finally $M^h \in \mathbb{R}^{(m\,n_x^h)\times n_\alpha^h}$ is a rectangular permutation matrix that identically maps the shape nodal coordinates onto the corresponding grid nodal coordinates. Both $K^h$ and $b^h$ arise from the finite element discretization of the auxiliary linear elasticity problem. For finite elements in elasticity, we refer to Zienkiewicz [217]. The matrix $M^h$ might also incorporate some symmetry assumptions on the geometry, as we will state later in Chapter 7. Solving the equation (6.2) takes approximately the same computational effort as solving one state problem. Nevertheless, the mapping is very general, which fits our intent of developing a robust and efficient numerical method for shape optimization.
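A minimal dense sketch of the mapping (6.2), assuming that the stiffness matrix, the right-hand side functional, and the permutation matrix are already available from the auxiliary elasticity discretization (their construction is not shown and the names are placeholders):

```python
import numpy as np

# A minimal sketch of the shape-to-mesh mapping (6.2); K, b_of_alpha and M
# are assumed inputs from a finite element discretization of the auxiliary
# linear elasticity problem.
def shape_to_mesh(x0, K, b_of_alpha, M, alpha):
    """Map shape control coordinates alpha onto all grid nodal coordinates."""
    b = b_of_alpha(alpha)            # right-hand side, linear in alpha
    dx = np.linalg.solve(K, b)       # grid displacements: K * dx = b(alpha)
    return x0 + dx + M @ alpha       # x^h(alpha) = x0 + dx + M alpha
```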


6.1.4 Multi-state problem

Concerning the multi-state problem, we recall that we arrive at solving the following $n_v$ linear systems of algebraic equations
$$ A^n(x^h)\, \bar u_\varepsilon^{v,n}(x^h) = \bar f^{v,n}(x^h), \quad v = 1, \dots, n_v, \tag{6.3}$$
where both the system matrix and the right-hand side vectors are assembled by means of Algorithm 1,
$$ \bigl[A^n(x^h)\bigr]_{i,j} := \sum_{e \in E_i^h \cap E_j^h}\; \sum_{k,l=1}^{n_e} a_{\varepsilon,x^e}^e\bigl(\phi_k^e(x^e), \phi_l^e(x^e)\bigr), \qquad \bigl[\bar f^{v,n}(x^h)\bigr]_i := \sum_{e \in E_i^h}\; \sum_{k=1}^{n_e} f^{v,e}\bigl(\phi_k^e(x^e)\bigr) \tag{6.4}$$
for $i,j = 1, \dots, n$, where $E_i^h$ denotes the set of elements neighbouring with $e_i$, see (4.20), and $x^e \in \mathbb{R}^{m(m+1)}$ is the vector of coordinates of the element domain corners, see (4.25), which is also included in $x^h$ by means of the mapping $H^e$, see (4.26). The components of the solution to (6.3) are denoted by
$$ \bar u_\varepsilon^{v,n} := \bigl(u_{\varepsilon,1}^{v,n}, \dots, u_{\varepsilon,n}^{v,n}\bigr) \in \mathbb{R}^n, \quad v = 1, \dots, n_v.$$
Using the map from the reference element $r$, the element contributions to the bilinear form and linear functional, respectively, see also (4.29) and (4.30), are
$$ a_{\varepsilon,x^e}^e\bigl(\phi_k^e(x^e), \phi_l^e(x^e)\bigr) := \int_{K^r} \bigl(S_B^e(x^e)\, B_{\hat x}(\hat\phi_k^r(\hat x))\bigr) \cdot \bigl(D^e\, S_B^e(x^e)\, B_{\hat x}(\hat\phi_l^r(\hat x))\bigr)\, |\det(R^e(x^e))|\, d\hat x \;+\; \varepsilon \int_{K^r} \bigl(S^e(x^e)\, \hat\phi_k^r(\hat x)\bigr) \cdot \bigl(S^e(x^e)\, \hat\phi_l^r(\hat x)\bigr)\, |\det(R^e(x^e))|\, d\hat x,$$
$$ f^{v,e}\bigl(\phi_k^e(x^e)\bigr) := \int_{K^r} f^{v,e} \cdot \bigl(S^e(x^e)\, \hat\phi_k^r(\hat x)\bigr)\, |\det(R^e(x^e))|\, d\hat x, \tag{6.5}$$
where $k,l = 1, \dots, n_e$ and where we consider (5.26) and Assumption 5.4. Then,
$$ u_\varepsilon^{v,h}\bigl(x^h;x\bigr) = \sum_{i=1}^{n} u_{\varepsilon,i}^{v,n}\bigl(x^h\bigr)\, \phi_i^h\bigl(x^h;x\bigr), \quad v = 1, \dots, n_v,\; x^h \in \mathbb{R}^{m\,n_x^h},\; x \in \Omega^h, \tag{6.6}$$


is the solution to the state problem $(W_\varepsilon^{v,h}(\alpha^h))$, where $\phi_i^h(x^h;x)$ denote the global shape functions. Moreover, for $e \in E^h$ we introduce the element solution vector by
$$ \bar u_\varepsilon^{v,n,e} := \bigl(u_{\varepsilon,1}^{v,n,e}, \dots, u_{\varepsilon,n_e}^{v,n,e}\bigr) \in \mathbb{R}^{n_e}, \quad \text{where } u_{\varepsilon,i}^{v,n,e} := u_{\varepsilon,G^e(i)}^{v,n} \text{ for } i = 1, \dots, n_e.$$
As we look for $B\bigl(u_\varepsilon^{v,h}\bigr)$ rather than for $u_\varepsilon^{v,h}$, we further elementwise evaluate the following block column vector
$$ \bar B_\varepsilon^{v,n}\bigl(x^h\bigr) := \bar B\bigl(x^h, \bar u_\varepsilon^{v,n}(x^h)\bigr) := \Bigl(\bar B^{v,n,e_1}\bigl(x^h\bigr), \dots, \bar B^{v,n,e_{n_K^h}}\bigl(x^h\bigr)\Bigr) \in \mathbb{R}^{2\,n_K^h} \tag{6.7}$$
for $v = 1, \dots, n_v$, where for $i = 1, \dots, n_K^h$ the corresponding element vector is defined by
$$ \bar B^{v,n,e_i}\bigl(x^h\bigr) := \bar B^{e_i}\bigl(x^{e_i}, \bar u_\varepsilon^{v,n,e_i}(x^h)\bigr),$$
and where $x^h \in \mathbb{R}^{m\,n_x^h}$ contains all the grid nodes, $x^e \in \mathbb{R}^{m\,n_e}$ contains the grid nodes related to the element $e \in E^h$, where further
$$ \bar B^{e_i}\bigl(x^{e_i}, \bar u_\varepsilon^{v,n,e_i}(x^h)\bigr) := B\bigl(u_\varepsilon^{v,h}(x^h;x)\bigr)\big|_{K_i} = \sum_{j=1}^{n_e} u_{\varepsilon,G^{e_i}(j)}^{v,n}\, B_x\bigl(\phi_j^{e_i}(x^{e_i};x)\bigr) = \sum_{j=1}^{n_e} u_{\varepsilon,j}^{v,n,e_i}\, S_B^{e_i}(x^{e_i})\, B_{\hat x}\bigl(\hat\phi_j^r(\hat x)\bigr) \quad \text{for } i = 1, \dots, n_K^h, \tag{6.8}$$
where $x := R^{e_i}\hat x + r^{e_i} \in K_i$, and where $e_i \in E^h$ is the element related to $K_i$. Recall that since we employ the lowest-, i.e., first-order finite elements, the function $B_x\bigl(u_\varepsilon^{v,h}(x^h;x)\bigr)$ is elementwise constant.
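For illustration, a generic dense sketch of the element-by-element assembly (6.4) follows; it is not the thesis' Algorithm 1, and the element data structure (fields dofs and x for the local-to-global map and the corner coordinates) as well as the element routines standing for the integrals (6.5) are assumed placeholders:

```python
import numpy as np

# A minimal dense sketch of finite element assembly in the spirit of (6.4).
def assemble(n, elements, element_matrix, element_rhs):
    A = np.zeros((n, n))
    f = np.zeros(n)
    for e in elements:                      # e.dofs ~ G^e, e.x ~ corner coords
        Ae = element_matrix(e.x)            # local (n_e x n_e) matrix, cf. (6.5)
        fe = element_rhs(e.x)               # local right-hand side, cf. (6.5)
        for k, gk in enumerate(e.dofs):
            f[gk] += fe[k]
            for l, gl in enumerate(e.dofs):
                A[gk, gl] += Ae[k, l]
    return A, f
```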

6.1.5 Cost functional

Now, we revisit the cost functional (5.36) from the algebraic point of view. In addition, it depends on the vector of grid nodal coordinates $x^h$ as follows:
$$ \widetilde J_\varepsilon^h(p) := \bar I^h\Bigl(\pi^h(p),\, x^h\bigl(\pi^h(p)\bigr),\, \bar B_\varepsilon^{1,n}\bigl(x^h\bigr), \dots, \bar B_\varepsilon^{n_v,n}\bigl(x^h\bigr)\Bigr),$$
where $\alpha^h := \pi^h(p)$ is a vector of shape control coordinates, where for $v = 1, \dots, n_v$ the vector $\bar B_\varepsilon^{v,n}(x^h) \in \mathbb{R}^{2\,n_K^h}$ is given by (6.7) and (6.8), and where $\bar I^h : \mathbb{R}^{n_\alpha^h} \times \mathbb{R}^{m\,n_x^h} \times [\mathbb{R}^{2\,n_K^h}]^{n_v} \to \mathbb{R}$ is the revised cost functional which is for $p \in \mathcal{D}$ and for $x^h := x^h(\pi^h(p))$ defined by
$$ \bar I^h\Bigl(\pi^h(p), x^h, \bar B_\varepsilon^{1,n}(x^h), \dots, \bar B_\varepsilon^{n_v,n}(x^h)\Bigr) := I\Bigl(\pi^h\bigl(F(p)\bigr),\, B\bigl(X_1^h(u_\varepsilon^{1,h})\bigr), \dots, B\bigl(X_1^h(u_\varepsilon^{n_v,h})\bigr)\Bigr),$$
in which $\pi^h : U \to U^h$ is defined by (5.22), $F : \mathcal{D} \to U$ is due to (5.5), $X_1^h : H_0(B;\Omega^h) \to H_0(B;\Omega)$ is due to (4.33) and Lemma 4.3, and where $u_\varepsilon^{v,h}$ is the solution (6.6).
The complete evaluation of the cost functional proceeds as follows:
$$ p \;\xrightarrow{\;\pi^h \circ F\;}\; \alpha^h \;\xrightarrow{\;K^h \Delta x^h = b^h(\alpha^h)\;}\; x^h \;\xrightarrow{\;\text{FEM}\;}\; A^n, \bar f^{v,n} \;\xrightarrow{\;A^n \bar u_\varepsilon^{v,n} = \bar f^{v,n}\;}\; \bar u_\varepsilon^{v,n} \;\xrightarrow{\;\bar B(x^h, \bar u_\varepsilon^{v,n})\;}\; \bar B_\varepsilon^{v,n} \;\xrightarrow{\;\bar I^h(\alpha^h, x^h, \bar B_\varepsilon^{1,n}, \dots, \bar B_\varepsilon^{n_v,n})\;}\; \widetilde J_\varepsilon^h(p). \tag{6.9}$$

h which is the discretized shape parameterization,



K h 4xh = b h h , see (6.2), that maps the shape control nodal coordinates h onto the
remaining nodal coordinates xh in the grid,

6.1. THE DISCRETIZED OPTIMIZATION PROBLEM REVISITED

87

FEM which assembles the system matrix A n and the righthand side vectors f 1,n , . . . ,
f nv ,n by means of the finite element method, as described in Algorithm 1,
v,n
An u v,n
that solve the nv linear systems of algebraic equations,
=f

B v,n
which is a blockcolumn vector whose individual vectors represent the elementwise

constant functions Bx uv,h


(x) , see in Algorithm 2, and

I h which calculates the cost functional.
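The composition (6.9) can be sketched as a plain Python chain; every callable below is a placeholder for the corresponding module listed above, and the sketch only fixes the order of evaluation, not any particular implementation:

```python
# A minimal sketch of the evaluation chain (6.9); all callables are
# placeholders for the submappings described in the text.
def evaluate_cost(p, design_to_shape, shape_to_mesh, assemble_fem,
                  solve, postprocess_B, cost_I):
    alpha = design_to_shape(p)                 # p  ->  alpha^h
    x = shape_to_mesh(alpha)                   # alpha^h  ->  grid nodes x^h
    A, f_list = assemble_fem(x)                # FEM assembly, one rhs per state
    u_list = [solve(A, f) for f in f_list]     # n_v linear state solves
    B_list = [postprocess_B(x, u) for u in u_list]
    return cost_I(alpha, x, B_list)            # value of the cost functional
```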

6.1.6 Smoothness of the cost functional

To prove the smoothness of $\widetilde J_\varepsilon^h$, we need the smoothness of all the submappings.

Assumption 6.2. We assume that for each sufficiently small $h > 0$ the following hold:

- for every point $x$ of the shape parameter domain: $[F(\cdot)](x) \in C^2(\mathcal{D})$,
- $K^h \in \mathbb{R}^{(m\,n_x^h)\times(m\,n_x^h)}$ is nonsingular,
- $b^h(\alpha^h) \in [C^2(\mathbb{R}^{n_\alpha^h})]^{m\,n_x^h}$,
- $\forall e \in E^h$: $R^e(x^e) \in [C^2(\mathbb{R}^{m(m+1)})]^{m\times m}$,
- $\forall e \in E^h$: $S_B^e(x^e) \in [C^2(\mathbb{R}^{m(m+1)})]^{2\times 2}$,
- $\forall e \in E^h$: $S^e(x^e) \in [C^2(\mathbb{R}^{m(m+1)})]^{1\times 1}$, and
- $\bar I^h\bigl(\alpha^h, x^h, \bar B^{1,n}, \dots, \bar B^{n_v,n}\bigr) \in C^2\bigl(\mathbb{R}^{n_\alpha^h} \times \mathbb{R}^{m\,n_x^h} \times (\mathbb{R}^{2\,n_K^h})^{n_v}\bigr)$.

Lemma 6.1. Under Assumptions 5.1, 5.3, 5.4, and 6.2, for any sufficiently small $h > 0$ it holds that
$$ \widetilde J_\varepsilon^h \in C^2(\mathcal{D}).$$

Proof. We will stepbystep use Assumption 6.2 and apply Lemma 3.4 to prove the smoothness
of the individual submappings.
Let h > 0 be given such that h h. By Assumption 6.2, for each i = 1, . . . , nxh we have


[F ()] xh,i C 2 (), therefore,
h C 2 ()

n

(6.10)


6 0 and
Again by Assumption 6.2 and by Lemma 3.3, det K h =
h

Kh

i1

:=

1
gh ,
K
h
det K

gh denotes the adjoint matrix, which was defined by (3.13). Then, due to the latter
holds, where K
and by Assumption 6.2, we get
  
  h
i1
mn h
x
b h h C 2 (Rnh )
xh h = K h
.

(6.11)

88

CHAPTER 6. NUMERICAL METHODS FOR SHAPE OPTIMIZATION

Now, (6.10), (6.11), and Lemma 3.4 yield



mnxh
xh h C 2 ()
.


Let us further prove the smoothness of the solutions u v,n
xh to the discretized multistate

problem (6.3). Let > 0 and v = 1, . . . , nv be arbitrary. By Assumption 6.2, for each e E h we
get
h 
h 
imm
h 
i2 2
i1 1
Re C 2 Rm(m+1)
, SeB C 2 Rm(m+1)
, Se C 2 Rm(m+1)
.


Then, also due to the definition (3.12), det(Re ) C 2 Rm(m+1) . From Assumption 5.3 it follows
that the element K e must not flip, so the determinant does not change its sign, i.e.,


|det(Re )| C 2 Rm(m+1) .

Now let us look at the element contributions (6.5). Having Assumptions 5.1 and 5.4, only the refb. After some matrixvector
erence shape functions c
rk and brl depend on the integration variable x
multiplications we get the following structure of the element bilinear form and linear functional,
respectively,
Z
N
X
e
e
e
e
e e
e
c,i (x )
a,xe ( k (x ) , l (x )) =
Fi (b
x) db
x,
f

v,e

( ek (xe ))

i=1
M
X
i=1

Kr

e
dv,e
i (x )

Kr

Gi (b
x) db
x,


m(m+1) , since they arise as sumations and multiplications
e
2
where both ce,i (xe ) , dv,e
i (x ) C R
x) and
of the entries of SeB , De , Se , and f v,e multiplied then by |det(Re )|, and where both Fi (b
Gi (b
x) are common for all e E h . Thus, it follows that both the element bilinear form and element
linear functional are smooth, i.e.,
h 
ine ne
h 
ine
ae,xe ( ek (xe ) , el (xe )) C 2 Rm(m+1)
, f v,e ( ek (xe )) C 2 Rm(m+1)
. (6.12)
Now we employ Assumption 5.3, which assures that the topology of the discretization T
Hence, neither Eih nor Ejh in (6.3) depends on xh . From (6.12) it follows that
  
  
n
nn
and f v,n xh C 2 (Rmnxh ) ,
An xh C 2 (Rmnxh )

a consequence of which is

is fixed.

  
det An xh
C 2 (Rmnxh ) .

Lemma 5.9 provides us the existence of the solution u v,n
xh to (6.3). Hence, there exist the


1

. Then, by Lemma 3.3, det An xh 6= 0 and
inverse matrix An xh
  
h  i1

1
fn xh C 2 (Rmnxh ) nn
A
An xh
:=

n h
det(A (x ))

fn xh denotes the adjoint matrix, which was defined by (3.13). From (3.15) we
holds, where A

get
  
  h  i1
n
f v,n xh C 2 (Rmnxh )
for v = 1, . . . , nv .
(6.13)
u v,n xh = An xh

6.2. NEWTONTYPE OPTIMIZATION METHODS

89

The symbol B v,n


and mul is calculated by (6.7) and (6.8). Since there appear only summations 


v,n,e
h
e
e
x) ,
tiplications of the components of u
x and SB (x ) with the constant vectors Bxb brj (b
we can use (6.13) and Lemma 3.4 which yield
  

 
 n
h
v,n
h
C 2 (Rmnxh ) 2 h
for v = 1, . . . , nv .
(6.14)
xh
:=
B
x
,
u
x
B v,n

Finally, we compound the submappings h : 7 Rnh , xh : Rnh 7 Rmnxh , B v,n


:

Rmnxh 7 R2 nh , and I h : Rn Rmnxh (R2 nh )nv 7 R. First, Assumption 6.2 yields


I h C 2 (Rnh Rmnxh (R2 nh )nv ) .
Then, using the latter, (6.10), (6.14), and applying Lemma 3.4, we have proven the statement
i


 
h

h
h
B nv ,n xh h C 2 ().
Jeh := I h h xh h B 1,n
x

Convention 6.1. Just for the purposes of this chapter let us skip in our notation the discretization parameter $h$, the superscript $n$ in (6.3), and the regularization parameter $\varepsilon$, each of which will be fixed for the moment. If not stated otherwise, all the symbols in the sequel will be considered as discretized ones, even if they were previously reserved for the continuous setting. Hence, we consider the following discrete optimization problem with inequality constraints
$$ \text{Find } p^* := \arg\min_{p \in \mathbb{R}^{n_p}} \widetilde J(p) \quad \text{subject to } \psi(p) \le 0, \qquad (P)$$
where $\widetilde J : \mathbb{R}^{n_p} \to \mathbb{R}$ and $\psi(p) := (\psi_1(p), \dots, \psi_{n_\psi}(p)) : \mathbb{R}^{n_p} \to \mathbb{R}^{n_\psi}$. The problem $(P)$ is governed by the following multi-state problem
$$ A(x)\, \bar u^v(x) = \bar f^v(x) \quad \text{for } v = 1, \dots, n_v. \qquad (P^v(x))$$

6.2 Newton-type optimization methods

Since we usually do not have any rigorous analysis locating the global solution $p^*$, it is hardly possible to solve the problem $(P)$ within a suitable computational time and with a suitable precision at the same time, even with only some ten design variables. The algorithms looking for a global minimizer are of an exponential order of complexity with respect to the number $n_p$ of design variables. On the other hand, the Newton-type algorithms search for a local minimizer only, but the computational time is quadratically proportional to the distance of the initial design from the closest local minimizer. This is due to the fact that we can precisely provide derivatives of the cost function $\widetilde J$ as well as of the constraint function $\psi$ with respect to the design variables $p$. Here, we will restrict ourselves to developing efficient methods that calculate derivatives for shape optimization. We refer to Nocedal and Wright [148] for a detailed overview of optimization methods for a large variety of problems.


6.2.1 Quadratic programming subproblem

The Newton-type algorithms are based on an approximation of the original nonlinear optimization problem by a quadratic or a sequence of quadratic optimization subproblems, which are also referred to as quadratic programming subproblems. In general, a quadratic programming problem reads as follows:
$$ \text{Find } p^{QP} := \arg\min_{p \in \mathbb{R}^{n_p}} Q(p) \quad \text{subject to } L(p) \le 0, \qquad (QP)$$
where $Q : \mathbb{R}^{n_p} \to \mathbb{R}$ denotes a quadratic function and $L : \mathbb{R}^{n_p} \to \mathbb{R}^{n_\psi}$ denotes a linear vector function.
Basically, there are two approaches to the approximation of the problem $(P)$ by a subproblem $(QP)$. In both of them the input $p_0 \in \mathbb{R}^{n_p}$ denotes the initial design parameters. The first approach is called a line search approach, where we look for an optimal Newton direction $s^{QP}$ being the solution to the following subproblem
$$ \text{Find } s^{QP} := \arg\min_{s \in \mathbb{R}^{n_p}} \bigl[Q(\widetilde J, p_0)\bigr](s) \quad \text{subject to } [L(\psi, p_0)](s) \le 0, \qquad (QP_1(p_0))$$
where $s := p - p_0$ stands for a directional vector from the initial design $p_0$ to the current one $p$, and $Q(\widetilde J, p_0)$ stands for the quadratic Taylor expansion, see Theorem 3.3, of the function $\widetilde J$ at the point $p_0$ while skipping the constant term $\widetilde J(p_0)$,
$$ \bigl[Q(\widetilde J, p_0)\bigr](s) := \mathrm{grad}\bigl(\widetilde J(p_0)\bigr) \cdot s + \tfrac12\, s \cdot \Bigl(\mathrm{Hess}\bigl(\widetilde J(p_0)\bigr)\, s\Bigr), \quad s \in \mathbb{R}^{n_p}, \tag{6.15}$$
in which $\mathrm{Hess}\bigl(\widetilde J(p_0)\bigr) \in \mathbb{R}^{n_p \times n_p}$ denotes the Hessian matrix whose entries are as follows:
$$ \Bigl[\mathrm{Hess}\bigl(\widetilde J(p_0)\bigr)\Bigr]_{i,j} := \frac{\partial^2 \widetilde J(p_0)}{\partial p_i\, \partial p_j}, \quad i,j = 1, \dots, n_p,$$
and where $L(\psi, p_0)$ denotes the linear Taylor expansion, see Theorem 3.3, of the vector function $\psi$ at the point $p_0$,
$$ [L(\psi, p_0)](s) := \psi(p_0) + \mathrm{Grad}\bigl(\psi(p_0)\bigr)^{\!\top} s, \quad s \in \mathbb{R}^{n_p}, \tag{6.16}$$
in which the matrix $\mathrm{Grad}\bigl(\psi(p_0)\bigr) \in \mathbb{R}^{n_p \times n_\psi}$ denotes the following gradient matrix
$$ \mathrm{Grad}\bigl(\psi(p_0)\bigr) := \bigl[\mathrm{grad}(\psi_1(p_0)), \dots, \mathrm{grad}(\psi_{n_\psi}(p_0))\bigr].$$
The optimal direction $s^{QP}$ is then an input to the following one-dimensional optimization problem, the line search problem
$$ \text{Find } \beta^{QP} := \arg\min_{\beta > 0} \widetilde J\bigl(p_0 + \beta s^{QP}\bigr) \quad \text{subject to } \psi\bigl(p_0 + \beta s^{QP}\bigr) \le 0, \qquad (LS(p_0, s^{QP}))$$
and
$$ p^{QP} := p_0 + \beta^{QP} s^{QP}$$
is the solution.
The second approach is called a trust region method. It supposes that the quadratic subproblem approximates the original problem well, but just in a given neighbourhood of $p_0$. Hence, given an initial point $p_0$ and a trust region diameter $d > 0$, we solve the following quadratic subproblem
$$ \text{Find } p^{QP} := \arg\min_{p \in \mathbb{R}^{n_p}} \bigl[Q(\widetilde J, p_0)\bigr](p - p_0) \quad \text{subject to } [L(\psi, p_0)](p - p_0) \le 0, \;\; \|p - p_0\| \le \tfrac{d}{2}, \qquad (QP_2(p_0, d))$$
where $Q$ and $L$ are respectively given by (6.15) and (6.16).

6.2.2 Sequential quadratic programming

The problem $(QP)$ is usually solved sequentially such that the optimal solution $p^{QP}$ is used as an initial design for the next quadratic subproblem. This is also referred to as sequential quadratic programming (SQP). Its two simplest versions, using the line search or the trust region approach, respectively, are sketched in Algorithm 3 and in Algorithm 4, cf. Nocedal and Wright [148, p. 532].

Algorithm 3 Sequential quadratic programming using the line search method
  Given p_0
  k := 0
  while a convergence test is not satisfied do
    Solve (QP_1(p_k)) for s^{QP}
    Solve (LS(p_k, s^{QP})) for beta^{QP}
    p^{QP} := p_k + beta^{QP} s^{QP}
    p_{k+1} := p^{QP}
    k := k + 1
  end while
  p^* := p_k

Algorithm 4 Sequential quadratic programming using the trust region method
  Given p_0 and d_0 > 0
  k := 0
  while a convergence test is not satisfied do
    Solve (QP_2(p_k, d_k)) for p^{QP}
    p_{k+1} := p^{QP}
    Update d_k to d_{k+1}
    k := k + 1
  end while
  p^* := p_k
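In practice, such a loop need not be hand-coded; an off-the-shelf SQP-type solver can drive it. A minimal sketch using SciPy's SLSQP routine follows; this is not the implementation used in the thesis, the callables cost, grad_cost, psi, and grad_psi are assumed to be supplied by the modules of this chapter, and note that SciPy expects inequality constraints in the form $c(p) \ge 0$, hence the sign flip of $\psi$:

```python
from scipy.optimize import minimize

# A minimal sketch: driving the SQP loop with SciPy's SLSQP.
def run_sqp(p0, cost, grad_cost, psi, grad_psi):
    constraints = [{"type": "ineq",
                    "fun": lambda p: -psi(p),
                    "jac": lambda p: -grad_psi(p)}]
    result = minimize(cost, p0, jac=grad_cost, method="SLSQP",
                      constraints=constraints,
                      options={"maxiter": 200, "ftol": 1e-9})
    return result.x
```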
Let us note that there are many aspects to deal with, such as finding a proper convergence criterion or modifying the quadratic subproblem when it does not admit a solution, which is the case if the Hessian matrix or a certain invariant of it is not positive definite. Here, we want to mention the BFGS modification, see Fletcher [62], named after its authors Broyden, Fletcher, Goldfarb, and Shanno. It is originally based on the idea of Davidon [52, 53]. The method was a revolutionary improvement of the SQP algorithm. At each iteration, it requires evaluating only the gradients of the objective and constraint functions, while the Hessian matrix is iteratively built up by measuring changes in the gradients. For $k \ge 0$, $k \in \mathbb{N}$, the BFGS formula is the following, cf. Nocedal and Wright [148, p. 25]:
$$ H_{k+1} := H_k - \frac{(H_k s_k)(H_k s_k)^{\!\top}}{s_k \cdot (H_k s_k)} + \frac{y_k y_k^{\!\top}}{y_k \cdot s_k}, \tag{6.17}$$
where $H_k$ and $H_{k+1}$ are two successive approximations of the Hessian matrices $\mathrm{Hess}\bigl(\widetilde J(p_k)\bigr)$ and $\mathrm{Hess}\bigl(\widetilde J(p_{k+1})\bigr)$, respectively, and where
$$ s_k := p_{k+1} - p_k \quad \text{and} \quad y_k := \mathrm{grad}\bigl(\widetilde J(p_{k+1})\bigr) - \mathrm{grad}\bigl(\widetilde J(p_k)\bigr).$$
The SQP method with the BFGS update is classified as a quasi-Newton method.
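A direct transcription of the update (6.17) into Python is a one-liner; a practical implementation would additionally guard against a non-positive curvature product $y_k \cdot s_k$ (for instance by skipping or damping the update), which is not shown here:

```python
import numpy as np

# The BFGS update (6.17): H_{k+1} from H_k and the step/gradient differences.
def bfgs_update(H, s, y):
    Hs = H @ s
    return H - np.outer(Hs, Hs) / (s @ Hs) + np.outer(y, y) / (y @ s)
```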

6.3 The first-order sensitivity analysis methods

Recall that by the first-order sensitivity analysis we mean the calculation of gradients of the cost and constraint functions. There are three kinds of sensitivity analysis methods, namely, numerical differentiation, automatic differentiation, and semi-analytical methods, while others are certain modifications and/or combinations of them.
The most frequently used is numerical differentiation. It is usually based on formulas for the central difference. Given a continuously differentiable function $f$ of $n$ real variables and a point $x := (x_1, \dots, x_i, \dots, x_n)$, then, using the linear Taylor expansion, see Theorem 3.3, we can derive the following first central difference formula
$$ \frac{\partial f(x)}{\partial x_i} \approx \frac{f(x_1, \dots, x_i + \delta, \dots, x_n) - f(x_1, \dots, x_i - \delta, \dots, x_n)}{2\delta},$$
where $\delta > 0$. The approximation error decreases with $\delta^2$ until the computer round-off error becomes significant. Therefore, we have to choose $\delta$ such that neither the approximation nor the round-off error is large. Another possibility is using a numerical differentiation formula of a higher $m$th order, $m \in \mathbb{N}$, for which the approximation error decreases with $\delta^{m+1}$. However, evaluating the gradient approximation then needs $2mn$ evaluations of $f$, which is in the case of shape optimization very time consuming. Hence, we have to balance the time issue against the precision. The advantages of the method are robustness and easy implementation.
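A minimal sketch of the central-difference gradient follows; it makes the cost of the method explicit, namely $2n$ evaluations of $f$ per gradient, and the step size eps plays the role of $\delta$ above:

```python
import numpy as np

# Central-difference approximation of grad(f) at x; needs 2*n evaluations of f.
def central_difference_gradient(f, x, eps=1e-6):
    x = np.asarray(x, float)
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return grad
```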
Automatic differentiation, see Griewank [70], differentiates the function $f$ symbolically. The input of the method is a routine that evaluates $f(x)$ and the output is again a routine, which now evaluates $\mathrm{grad}(f(x))$. An implementation of the method is very difficult, since it involves syntax recognition and it relies on the programming language that $f$ is coded in. Nowadays, there are free software packages available. The method is robust and precise up to the computer round-off error, but it is too time and memory consuming in the case of shape optimization, since the routine for solving the linear system which arises from the finite element discretization is also differentiated symbolically.


Here, we will focus on the semi-analytical methods, cf. Haug, Choi, and Komkov [86], which are based on the algebraic approach to sensitivity analysis, cf. Haslinger and Neittaanmäki [85]. The methods respect the structure of the shape optimization problem, in which the solution to a linear system of algebraic equations is involved. The cost functional is a compound map and its gradient is then a product of the gradients of the individual submappings. Most difficult to evaluate is the differentiation of the solution to the linear system with respect to the nodal coordinates of the discretization grid. This is performed by solving other linear systems with the original, but transposed, system matrix and with new right-hand side vectors. The method is precise up to the numerical error of the linear system solver. The computational time roughly corresponds to the computation of the function $f$ itself. The method is not robust, as it covers just the shape optimization problems; nevertheless, some other classes of optimization problems, e.g., topology optimization, have a similar structure, so an extension of the method is straightforward. The semi-analytical methods might also be combined with both the numerical and the automatic differentiation.

6.3.1 Sensitivities of the cost and constraint functions

Consider the discretized shape optimization problem $(P)$. The key point of an efficient implementation of the method is making use of the structure of the cost function $\widetilde J$. Recall that our constraint function is state-independent, i.e., it does not depend upon the solution to the governing state problem. Therefore, the evaluation of the gradient of the constraint function with respect to the design variables is simple. For $i = 1, \dots, n_\psi$ the gradient is as follows:
$$ \mathrm{grad}(\psi_i(p)) := \Bigl(\frac{\partial \psi_i(p)}{\partial p_1}, \dots, \frac{\partial \psi_i(p)}{\partial p_{n_p}}\Bigr) \in \mathbb{R}^{n_p}, \tag{6.18}$$
where $p := (p_1, \dots, p_{n_p}) \in \mathbb{R}^{n_p}$ stands for a vector of design variables and where the constraint function is denoted by $\psi(p) := (\psi_1(p), \dots, \psi_{n_\psi}(p)) \in \mathbb{R}^{n_\psi}$. The evaluation of the cost functional
proceeds as depicted in (6.9). Let us express a partial derivative of the cost functional with respect
to a design variable. Using the chain rule (3.17) for differentiation of a compound function, for
i = 1, . . . , n we get

I , x, B 1 , . . . , B nv
Je(p)
=
=
pi
pi
(
"


n
nx X
m
X
X
I , x, B 1 , . . . , B nv
I , x, B 1 , . . . , B nv
=
+
+
j
xk,l
o=1
k=1 l=1

2
nv X X
(6.19)
X
Bjv,e (xe , u v,e )
I , B 1 , . . . , B nv
+
+
Bjv,e
xk,l
v=1 eE j=1

ne
n
X
X
Bjv,e (xe , u v,e ) uv,e
xk,l () o (p)
p (x)
+
,
v,e
up
xk,l
o pi
p=1
o=1
where

p := (p1 , . . . , pn ) Rn denotes a design vector,

(p) := (1 (p), . . . , n (p)) Rn denotes control coordinates of the shape and for
o = 1, . . . , n , i = 1, . . . , n it holds that
[F (p)](x,o )
o (p)
=
,
pi
pi

CHAPTER 6. NUMERICAL METHODS FOR SHAPE OPTIMIZATION

94

x() := [x1 (), . . . , xnx ()] Rmnx denotes a block column vector consisting of all the
grid nodes, where for o = 1, . . . , n

[M]1,o
4x()
4x()
b()
x()

..
:=
+
=
,
, where K (x0 )
.
o
o
o
o
[M]nx ,o



xe () := xe1 (), . . . , xem+1 () Rm(m+1) denotes a block column vector consisting of
the corners of the element domain K e , e E,


xel () := xel,1 (), . . . , xel,m () Rm denotes coordinates of the lth corner of the
element domain K e , e E,
u v (x) := (uv1 (x), . . . , uvn (x)) Rn denotes the solution to vth state problem, i.e., to the
system of linear algebraic equations (P v (x)),
v,e
ne denotes the solution of the problem (P v (x))
u v,e (x) := (uv,e
1 (x), . . . , une (x)) R
associated to an element e E in such a way that
v
uv,e
j (x) = uG e(j) (x),

B v (x) := [B v,e1 (x), . . . , B v,en (x)] R2 n denotes a block column vector resulting
after the application of the operator B to the solution of the problem (P v (x)),
2 denotes the value of B(uv (x)) over the element
B v,e (x) := (B1v,e (x), . . . , Bv,e
2 (x)) R
domain K e , e E, such that
ne


X
v,e
e
v,e
e
e
br (b
B (x) := B (x , u ) :=
uv,e
x
)
,

(x)
S
(x
)

B
b
x
j
B
j
j=1

and


Je(p) := I , x, B 1 , . . . , B nv R denotes the value of the cost functional.

6.3.2 State sensitivity

The main computational effort in the formula (6.19) is connected with the bracket term, which is
the sensitivity of B(x, u v (x)) with respect to the grid nodal coordinates x, i.e., with the derivatives
Bjv,e (xe , u v,e (x))
xk,l

:=

Bjv,e (xe , u v,e )


xk,l

ne
X
Bjv,e (xe , u v,e ) uv,e
p (x)

uv,e
p

p=1

xk,l

where for k = 1, . . . , nx , l = 1, . . . , m, and for p = 1, . . . , ne


Bjv,e (xe , u v,e )

if z {1, . . . , m + 1} : H e (z) 6= k,
#
"
ne


X
Bjv,e (xe , u v,e )
SeB (xe )
v,e
r
x)
, if He (z) = k,
Bxb bp (b
:=
up (x)
xk,l
xez,l
xk,l

:= 0,

p=1

Bjv,e (xe , u v,e )


uv,e
p

:= SeB (xe ) Bxb brp (b


x)

i

(6.20)


In (6.20) it remains to express the derivative $\partial \bar u^v(x)/\partial x_{k,l}$. To this goal, let us differentiate the state equation $(P^v(x))$, where $v = 1, \dots, n_v$, with respect to the $l$-th coordinate of a node $x_k$, where $k = 1, \dots, n_x$. We arrive at the following linear system of equations
$$ A(x)\, \frac{\partial \bar u^v(x)}{\partial x_{k,l}} = \frac{\partial \bar f^v(x)}{\partial x_{k,l}} - \frac{\partial A(x)}{\partial x_{k,l}}\, \bar u^v(x), \tag{6.21}$$
which is solved for $\partial \bar u^v(x)/\partial x_{k,l}$, where $\partial A(x)/\partial x_{k,l} := \bigl(\partial A_{i,j}(x)/\partial x_{k,l}\bigr)_{i,j=1}^n$, $\partial \bar u^v(x)/\partial x_{k,l} := \bigl(\partial u_i^v(x)/\partial x_{k,l}\bigr)_{i=1}^n$, and $\partial \bar f^v(x)/\partial x_{k,l} := \bigl(\partial f_i^v(x)/\partial x_{k,l}\bigr)_{i=1}^n$, and where $A(x) := (A_{i,j}(x))_{i,j=1}^n$, $\bar f^v(x) := (f_i^v(x))_{i=1}^n$, and $\bar u^v(x) := (u_i^v(x))_{i=1}^n$, respectively, are the system matrix, the right-hand side vector, and the solution to the state problem $(P^v(x))$. Due to Assumption 5.4, we can skip the term $\partial \bar f^v(x)/\partial x_{k,l}$. Hence, it remains to express the term $\partial A(x)/\partial x_{k,l}$. From (6.4) and (6.5) it follows that

ne
X X
aexe eo (xe ) , ep (xe )
Ai,j (x)
=
,
xk,l
xk,l
eEi Ej o,p=1

where

aexe eo (xe ) , ep (xe )
= 0, if z {1, . . . , m + 1} : H e (z) 6= k,
xk,l

aexe eo (xe ) , ep (xe )
=
xk,l
!
Z
 


 
SeB (xe )
e
e
e
r
b
br

S
(x
)

B
=
|det(Re (xe ))| db
x+
b
b p
x
x
o
B
xez,l
Kr
!!
Z 
 
 
e (xe )
S
e
e
e
r
r
B
Bxb bp
+
SB (x ) Bxb bo D
|det(Re (xe ))| db
x+
xez,l
r
K
Z 
  

  |det(Re (xe ))|
db
x+
+
SeB (xe ) Bxb bro De SeB (xe ) Bxb brp
xez,l
r
K
!
Z


Se (xe ) br
e e
br |det(Re (xe ))| db
+

S
(x
)

x+
o
p
xez,l
Kr
!
Z 

e (xe )
S
e e
r
r
x+
bp |det(Re (xe ))| db
+
S (x ) bo
xez,l
Kr
Z 
 |det(Re (xe ))|
 
db
x, if He (z) = k.
+
Se (xe ) bro Se (xe ) brp
xez,l
Kr

(6.22)

CHAPTER 6. NUMERICAL METHODS FOR SHAPE OPTIMIZATION

96

Note that none of the matrices $\partial A(x)/\partial x_{k,l}$ is evaluated by itself. In Section 6.3.4 we will rather assemble the vector
$$ \bigl[G(x, \bar u^v(x))\bigr]^{\!\top} \lambda^v \in \mathbb{R}^{m\,n_x},$$
where
$$ G(x, \bar u^v(x)) := \Bigl[\frac{\partial A(x)}{\partial x_{1,1}}\, \bar u^v(x),\; \dots,\; \frac{\partial A(x)}{\partial x_{n_x,m}}\, \bar u^v(x)\Bigr] \in \mathbb{R}^{n \times (m\,n_x)}, \tag{6.23}$$
and where $\lambda^v \in \mathbb{R}^n$.

6.3.3 Semi-analytical methods

We employ matrix notation and, using (6.20), (6.21), (6.23), and the symmetry of $A(x)$, we rewrite (6.19) as follows:
$$ \mathrm{grad}\bigl(\widetilde J(p)\bigr) = \mathrm{Grad}\bigl(\pi(p)\bigr)\, \biggl\{ \mathrm{grad}_\alpha\bigl(\bar I(\alpha, x, \bar B^1, \dots, \bar B^{n_v})\bigr) + \mathrm{Grad}\bigl(x(\alpha)\bigr)\, \Bigl[ \mathrm{grad}_x\bigl(\bar I(\alpha, x, \bar B^1, \dots, \bar B^{n_v})\bigr) + \sum_{v=1}^{n_v} \Bigl( \mathrm{Grad}_x\bigl(\bar B(x, \bar u^v)\bigr) - G(x, \bar u^v)^{\!\top} A(x)^{-1}\, \mathrm{Grad}_{\bar u^v}\bigl(\bar B(x, \bar u^v)\bigr) \Bigr)\, \mathrm{grad}_{\bar B^v}\bigl(\bar I(\alpha, x, \bar B^1, \dots, \bar B^{n_v})\bigr) \Bigr] \biggr\}, \tag{6.24}$$
in which the factors have the sizes $\mathrm{Grad}(\pi(p)) \in \mathbb{R}^{n_p \times n_\alpha}$, $\mathrm{Grad}(x(\alpha)) \in \mathbb{R}^{n_\alpha \times (m\,n_x)}$, $\mathrm{Grad}_x(\bar B(x, \bar u^v)) \in \mathbb{R}^{(m\,n_x) \times (2\,n_K)}$, $G(x, \bar u^v)^{\!\top} \in \mathbb{R}^{(m\,n_x) \times n}$, $A(x)^{-1} \in \mathbb{R}^{n \times n}$, $\mathrm{Grad}_{\bar u^v}(\bar B(x, \bar u^v)) \in \mathbb{R}^{n \times (2\,n_K)}$, and the gradients of $\bar I$ are column vectors of the corresponding lengths, and where the individual gradients are as follows:




grad Je(p) :=

Je(p)
p1

..
.

Je(p)
pn



, grad I , x, B 1 , . . . , B nv :=

Grad((p)) := [grad(1 (p)) , . . . , grad(n (p))] :=

1 (p)
p1

..
.

1 (p)
pn

Grad(x()) := [Grad(x1 ()) , . . . , Grad(xnx ())] :=





xnx ,1 ()
x1,1 ()
x1,m ()
.
.
.
.
.
.
1
1
1

.
.
..
..
:=



x1,m ()
xnx ,1 ()
x1,1 ()
.
.
.
.
.
.
n
n
n

I (,x,B 1 ,...,B nv )
1

..
.

I (,x,B 1 ,...,B nv )
n

...
..
.
...

n (p)
p1

..
.

n (p)
pn

...
..
.

xnx ,1 ()
1

...

xnx ,1 ()
n


gradx1 I , x, B 1 , . . . , B nv


..
gradx I , x, B 1 , . . . , B nv :=
,
.

1
nv
gradxnx I , x, B , . . . , B

,


6.3. THE FIRSTORDER SENSITIVITY ANALYSIS METHODS

97

in which for k = q, . . . , nx

gradxk I , x, B 1 , . . . , B nv



where further

:=

I (,x,B 1 ,...,B nv )
xk,1

..
.

I (,x,B 1 ,...,B nv )
xk,m

Gradx1 (B(xe1 , u v,e1 )) . . .

..
..
Gradx (B(x, u v )) :=
.
.
e
v,e
1
1
Gradxnx (B(x , u
)) . . .

in which for k = 1, . . . , nx

Gradxk (B(xe1 , u v,e1 )) = 0,

Gradxk (B(xe , u v,e )) =

where further

Gradx1 (B(xen , u v,en ))

..
,
.
e n
v,en
Gradxnx (B(x , u
))

z {1, . . . , m + 1} : H e (z) 6= k,

B2(xe ,u v,e )
B1(xe ,u v,e )
.
.
.
e
e
xz,1
xz,1

..
..
..
, if He (z) = k,
.
.
.

e
v,e
B2(x ,u )
B1(xe ,u v,e )
...
xe
xe
if

z,m

z,m

Gradu v (B(x, u v )) := [Gradu v (B(xe1 , u v,e1 )) , . . . , Gradu v (B (xen , u v,en ))] ,


in which for e E
Gradu v (B(xe , u v,e )) := [gradu v (B1 (xe , u v,e )) , . . . , gradu v (B2 (xe , u v,e ))] ,
and where finally

gradB v


gradB v,e1 I , x, B 1 , . . . , B nv


..
I , x, B 1 , . . . , B nv :=
,
.

1
nv
v,e
gradB n I , x, B , . . . , B

in which for e E

gradB v,e I , x, B 1 , . . . , B nv



:=

I (,x,B 1 ,...,B nv )
B1v,e

..
.

I (,x,B 1 ,...,B nv )
Bv,e
2

Now, all the art is in how to evaluate the expression (6.24) efficiently. Basically, there are two possibilities. Either we proceed from left to right, which is called the direct method, or the other way round, which is called the adjoint method. The main computational effort is in calculating the state sensitivity. In case of the direct method, we would solve $n_v\, n_p$ systems consisting of $n$ linear equations, while, in case of the adjoint method, we have to solve just $n_v$ systems of $n$ linear equations. This is why we prefer the latter. Let us note that if the constraint function were state-dependent, the adjoint method would arrive at solving $n_v(1 + n_\psi)$ systems of $n$ linear equations.


6.3.4 Adjoint method

The method is based on evaluating the expression (6.24) from right to left such that not all the individual gradients are calculated, but rather so-called adjoint variables are assembled. Algorithm 5 describes the method. There, the symbols $\rho \in \mathbb{R}^{n_\alpha}$, $\varphi, \varphi_1, \varphi_2 \in \mathbb{R}^{m\,n_x}$, $\lambda, \mu \in \mathbb{R}^n$, and $\sigma \in \mathbb{R}^{n_p}$ stand for the adjoint variables.

Algorithm 5 Adjoint method
  Given p, alpha, x, A(x), u^v, and B^v for v = 1, ..., n_v
  Evaluate I_alpha := grad_alpha(I(alpha, x, B^1, ..., B^{n_v}))
  Evaluate I_x := grad_x(I(alpha, x, B^1, ..., B^{n_v}))
  phi := 0
  for v := 1, ..., n_v do
    Evaluate I_{B^v} := grad_{B^v}(I(alpha, x, B^1, ..., B^{n_v}))
    Assemble phi_1 := Grad_x(B(x, u^v)) I_{B^v}
    Assemble mu := Grad_{u^v}(B(x, u^v)) I_{B^v}
    Solve A(x) lambda = mu
    Assemble phi_2 := -G(x, u^v)^T lambda
    phi := phi + phi_1 + phi_2
  end for
  phi := phi + I_x
  Assemble rho := Grad(x(alpha)) phi
  rho := rho + I_alpha
  Assemble sigma := Grad(pi(p)) rho
  grad(J(p)) := sigma
Only the gradients of $\bar I$ have to be provided by the user. All the other parts are more or less independent. The particular assembling procedures are depicted in Algorithms 7-9. On the CD there are enclosed the corresponding MATLAB [208] routines used for optimal shape design in 2-dimensional magnetostatics.
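The core of the adjoint trick can also be illustrated by a small self-contained sketch. This is not the thesis' code; it is a dense Python illustration for a single state with a cost $I(x, u(x))$, $A(x)u = f$, and $f$ independent of $x$ (Assumption 5.4), where the hypothetical callable dA_dx returns the derivative of $A$ with respect to one grid coordinate. In the thesis these derivatives are never formed explicitly, only their products with $u$ (the matrix $G$) are assembled:

```python
import numpy as np

# A minimal dense sketch of the adjoint gradient: one adjoint solve with the
# transposed system matrix replaces one solve per grid coordinate.
def adjoint_gradient(A, u, dI_dx, dI_du, dA_dx, n_coords):
    lam = np.linalg.solve(A.T, dI_du)          # adjoint solve A^T lambda = dI/du
    grad = np.array(dI_dx, float).copy()
    for k in range(n_coords):
        grad[k] -= lam @ (dA_dx(k) @ u)        # - lambda^T (dA/dx_k) u
    return grad
```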
Algorithm 6 Adjoint method: the shape part (Assemble sigma)
  Given p and rho
  sigma := 0
  for i := 1, ..., n_p do
    for j := 1, ..., n_alpha do
      sigma_i := sigma_i + (d[F(p)](x_{alpha,j}) / d p_i) rho_j
    end for
  end for

6.3.5 An object-oriented software library

Here, we present an efficient implementation of the adjoint method for optimal shape design in an object-oriented framework, see Fig. 6.1. Our main aim is a maximal reusability of the individual components when solving various shape optimization problems. For the details we also refer to Lukáš, Mühlhuber, and Kuhn [125].


Algorithm 7 Adjoint method: the grid part (Assemble )


Given K , , and
Solve K T =

for o := 1, . . . , n do


o := b() o
for j := 1, . . . , nx do
o := o + Mj,o j
end for
end for
Algorithm 8 Adjoint method: FEM preprocessor part (Assemble 2 )
Given x, u v , and
2 := 0
for i := 1, . . . , n do
for z := 1, . . . , m + 1 do
for l := 1, . . . , m do
k := m (Hei (z) 1) + l
for o, p := 1, . . . , ne do

if o = p or G ei(o) 6 I0h and G ei(p) 6I0h then

Evaluate [ 2 ]k := [ 2 ]k G ei(o) aexiei eoi (xei ) , epi (xei )


end if
end for
end for
end for
end for

.


xez,li uvG ei(p)

The library supports routines for evaluating the cost and constraint functions and their gradients with respect to the shape design parameters. The library can be used with any gradient or Newton-like optimization algorithm, such as SQP with BFGS (BFGS-SQP), see (6.17). The library uses three external modules: a mesh generator, a finite element method (FEM) module, and a solver of linear algebraic systems of equations. The mesh generator runs just once at the very beginning and it discretizes the domain for the initial design $p_0$. It provides the initial grid nodes $x_0 := x(\alpha(p_0))$ and the discretization $T$. Having some grid nodes $x$, the FEM preprocessor assembles the matrix $A(x)$ and the right-hand side vector $\bar f^v(x)$ for each state $v = 1, \dots, n_v$. Then, the solver of linear systems provides the solution $\bar u^v$ to the FEM postprocessor that assembles the solution $\bar B^v$. The FEM module is moreover supposed to assemble the corresponding gradient-vector multiplications in Algorithm 5, namely $\varphi_1$, $\varphi_2$, and $\mu$. The efficiency of the library strongly depends on the linear system solver. We have used the software tools developed by Kuhn, Langer, and Schöberl [117] at the University of Linz in Austria, where the conjugate gradient method, cf. Golub and Van Loan [69], with a multigrid preconditioning, cf. Hackbusch [78], is involved.
Now, let us explain how the optimization proceeds in terms of Fig. 6.1. Given an initial vector $p_0$ of design parameters and a discretization parameter $h$, the BFGS-SQP algorithm starts its run while at the same time the mesh generator provides the initial grid $x_0$ and the grid topological information $T$. Then, a quadratic programming subproblem is going to be solved, see also Algorithms 3 and 4, which requires the evaluation of $\widetilde J(p_0)$, $\psi(p_0)$, $\mathrm{grad}\bigl(\widetilde J(p_0)\bigr)$, and $\mathrm{Grad}\bigl(\psi(p_0)\bigr)$.
rithms 3 and 4, which requires evaluation of J (p0 ), (p0 ), grad J (p0 ) , and Grad((p0 )).

100

CHAPTER 6. NUMERICAL METHODS FOR SHAPE OPTIMIZATION

Algorithm 9 Adjoint method: FEM postprocessor part (Assemble 1 , Assemble )


Given x, u v , and I B v
1 := 0
:= 0
for i := 1, . . . , n do
for z := 1, . . . , m + 1 do
for l := 1, . . . , m do
k := m (Hei (z)) + l
for p := 1, . . . , ne do





x)
Evaluate c := uvG ei(p) SeBi (xei ) xez,li Bxb brp (b


Evaluate d := SeBi (xei ) Bxb brp(b
x)
for j := 1, . . . , 2 do
t := 2 (i 1) + j
[ 1 ]k := [ 1 ]k + cj [I B v ]t
G ei(p) := G ei(p) + dj [I B v ]t
end for
end for
end for
end for
end for
The evaluation of $\psi(p_0)$ is straightforward. The evaluation of $\widetilde J(p_0)$ proceeds as depicted in (6.9), where the design-to-shape mapping, the shape-to-mesh mapping, the FEM preprocessor, the solver of linear systems, the FEM postprocessor, and the computation of $\widetilde J$ modules take control of the run consecutively. Computing $\mathrm{Grad}\bigl(\psi(p_0)\bigr)$ is again quite straightforward, since it is basically an analytical formula, which is evaluated in the module Computation of $\mathrm{Grad}(\psi(p_0))$, see Fig. 6.1. The most computationally expensive part is (together with the evaluation of the cost functional) the evaluation of $\mathrm{grad}\bigl(\widetilde J(p_0)\bigr)$. This evaluation follows Algorithms 5-9, where the input data flow as depicted in Fig. 6.1. Finally, the next iteration of the BFGS-SQP algorithm is performed and the procedure repeats for a new vector $p$ of design parameters until a termination criterion is fulfilled.
From the programming point of view, the only part which always has to be coded by the user is the module that computes the cost functional $\bar I(\alpha, x, \bar B^1, \dots, \bar B^{n_v})$ and the constraint function $\psi(p)$. Thus, we have minimized the programming effort that is necessary for solving a new shape optimization problem to just the specification of the problem itself.

6.3.6 A note on using the automatic differentiation

The semi-analytical methods turned out to be the most effective for shape optimization, since they make much use of the problem structure. Nevertheless, some parts of the algorithm still remain to be automatized using, e.g., the automatic differentiation (AD) method. We have in mind the module that calculates the gradients of $\bar I$ and $\psi$ with respect to the input variables $\alpha$, $x$, $\bar B^v$, and $p$, respectively. In fact, the routine calculating these gradients is just an analytical differentiation of the routine that calculates the functions $\bar I$ and $\psi$. In this case, we avoid the main obstacle of using AD, namely, the differentiation of the linear system solver. Another possible use of AD might be an automatic generation of routines that calculate the sensitivities of the local contributions (6.22) to the system matrix $A$.

Figure 6.1: Structure of the library (data flow diagram)

6.4 Multilevel optimization approach

Here we introduce a rather new optimization approach. It has been inspired by techniques used in multigrid methods. This research was initiated by Prof. Ulrich Langer, whose group at the University of Linz, Austria, has achieved world-leading results in scientific computing using multigrid methods, cf. Apel and Schöberl [10], Haase et al. [73, 75, 76], Haase and Langer [74], Haase and Lindner [77], Jung and Langer [102], Kuhn, Langer, and Schöberl [117], Schinnerl, Langer, and Lerch [183], or Schinnerl et al. [184]. We want to establish a hierarchy of discretizations of our continuous shape optimization problem $(\widetilde P)$ such that the optimized design achieved at a coarse level is used as the initial design at the next finer discretization level. The first results can be found in Lukáš [123] and in Lukáš [128].
In this section, we employ the full notation with both the regularization parameter $\varepsilon$ and the discretization parameter $h$:
$$ \text{Find } p_\varepsilon^{h*} \in \mathcal{D}: \quad \widetilde J_\varepsilon^h\bigl(p_\varepsilon^{h*}\bigr) \le \widetilde J_\varepsilon^h(p) \quad \forall p \in \mathcal{D}, \qquad (\widetilde P_\varepsilon^h)$$
where $\widetilde J_\varepsilon^h : \mathcal{D} \to \mathbb{R}$ denotes the discretized and regularized cost functional and
$$ \mathcal{D} := \{p \in \mathbb{R}^{n_p} \mid \psi(p) \le 0\}$$
is the set of admissible design parameters, where $\psi : \mathbb{R}^{n_p} \to \mathbb{R}^{n_\psi}$, $n_\psi \in \mathbb{N}$.


By the classical optimization approach we mean the standard technique when, given a fixed
regularization parameter , a fixed discretization parameter h, both of which are small enough,
given an initial vector p0 of design parameters, the optimization algorithm proceeds just once

ending up with the optimized design p h , see Algorithm 10.


Algorithm 10 Classical optimization approach
Given , h and p0
Discretize (Pe)
(Peh )

h
Solve (Pe ) with the initial design p0
ph

ph is the optimized design


By the multilevel or hierarchical optimization approach we mean that, given an initial design $p_{1,0}$, we first regularize and discretize the problem $(\widetilde P)$ at the first level with rather large values of $\varepsilon_1$ and $h_1$, and the optimization algorithm proceeds, ending up with a coarse optimized design $p_{\varepsilon_1}^{h_1*}$. Then, we refine both the regularization and the discretization parameters and run the optimization algorithm with smaller values of $\varepsilon_2$ and $h_2$ while using the design $p_{\varepsilon_1}^{h_1*}$ as the initial one at this second level. We end up with a finer optimized design $p_{\varepsilon_2}^{h_2*}$, and so forth. The approach is described in Algorithm 11.

Algorithm 11 Hierarchical optimization approach
  Given eps_1, h_1, and p_{1,0}
  l := 1
  while l >= 1 and a terminate criterion is not satisfied do
    Discretize (P~) at the level l into (P~_{eps_l}^{h_l})
    Solve (P~_{eps_l}^{h_l}) with the initial design p_{l,0}, yielding p_{eps_l}^{h_l*}
    Refine eps_l, h_l into eps_{l+1}, h_{l+1}
    p_{l+1,0} := p_{eps_l}^{h_l*}
    l := l + 1
  end while
  p_{eps_{l-1}}^{h_{l-1}*} is the optimized design
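A minimal sketch of this loop in Python follows; discretize, optimize, and refine are placeholders for the corresponding steps of Algorithm 11 (optimize might, for instance, wrap the SQP driver sketched in Section 6.2), and the fixed number of levels merely stands in for the termination criterion:

```python
# A minimal sketch of the hierarchical optimization loop (Algorithm 11).
def multilevel_optimize(p_initial, eps, h, discretize, optimize, refine,
                        levels=4):
    p = p_initial
    for level in range(levels):
        problem = discretize(eps, h)     # set up the level-l problem
        p = optimize(problem, p)         # coarse optimum becomes the
        eps, h = refine(eps, h)          # initial design on the next level
    return p
```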
The hierarchical approach in shape optimization has turned out to be much more effective than the classical one whenever the coarse optimized design $p_{\varepsilon_1}^{h_1*}$ approximates the true one rather well. The crucial part of the algorithm is the refinement step. The updated values of $\varepsilon_{l+1}$ and $h_{l+1}$ must not be too much smaller than those of $\varepsilon_l$ and $h_l$, since the SQP algorithm would then take many iterations. On the other hand, if the refinement is rather coarse, i.e., the values of $\varepsilon_{l+1}$ and $h_{l+1}$ are comparable to $\varepsilon_l$ and $h_l$, there is hardly any progress in the SQP algorithm and the hierarchical approach takes many iterations. In Section 7.4 we provide some numerical experiments, without any use of multigrid yet. The idea, which we want to investigate in the future, is that the refinement strategy should benefit from the a posteriori finite element error analysis and from multigrid techniques. The first papers in this context have appeared just recently, see Ramm, Maute, and Schwarz [164], and Schleupen, Maute, and Ramm [185]. In the paper by Scherzer [182] a multilevel approach is used for solving nonlinear ill-posed problems.


Another improvement can be made by applying the multilevel approach at the level of mathematical modelling. In those applications where the problem complexity can be reduced by neglecting a dimension or some physical phenomena, we can first solve the discretized reduced problem and then use the result as the initial design for the more complex problem. A typical example is a shape optimization problem governed first by a 2d linear magnetostatic state problem; the optimized 2d design is then prolonged into the third dimension and used as the initial design for shape optimization governed by 3d linear magnetostatics, and finally the resulting shape serves as the initial design for shape optimization governed by a 3d nonlinear magnetostatic state problem. In Section 7.4, we give a numerical test of the 2d/3d modelling step; a minimal sketch of the dimensional prolongation follows.
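The 2d/3d step only requires extruding the optimized 2d Bézier design into the third dimension by a constant. A minimal Python sketch, assuming the 2d design is a vector of n1 Bézier heights and the 3d design is an n1-by-n2 grid of heights; the names prolong_2d_to_3d, p2d, and n2, as well as the sample values, are illustrative and not taken from the thesis.

```python
import numpy as np

def prolong_2d_to_3d(p2d, n2):
    """Extrude a 2d design (n1 Bezier heights along x1) into a 3d design
    (n1 x n2 grid of heights over (x1, x3)) by repeating it constantly in x3."""
    p2d = np.asarray(p2d, dtype=float)        # shape (n1,)
    return np.tile(p2d[:, None], (1, n2))     # shape (n1, n2)

# Example: a 2d design with n1 = 4 heights, prolonged to n2 = 3 rows in x3.
p3d_initial = prolong_2d_to_3d([0.030, 0.034, 0.038, 0.040], n2=3)
print(p3d_initial.shape)  # (4, 3)
```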


Chapter 7

An application and numerical experiments
At the beginning of this chapter, we will introduce a physical application of practical use, which leads to a problem of optimal shape design of electromagnets. Then we will introduce the mathematical settings of the shape optimization problems governed by linear magnetostatic multistate problems in both the two- and three-dimensional cases. We will utilize the abstract theory introduced in Chapters 3-5: we will specify all the symbols introduced in Chapter 5 and validate the corresponding assumptions, so that the related existence and convergence theorems follow. We will also easily check Assumptions 6.1 and 6.2, both introduced in Chapter 6, which justifies the use of the SQP algorithm. Then we will present shapes resulting from numerical calculations. We will also present numerical experiments with the multilevel optimization approach and with the adjoint method, both introduced in Chapter 6, and compare them to the classical optimization approach and to numerical differentiation, respectively. Some of the optimized shapes were manufactured and we are provided with physical measurements of the magnetic field. At the end, we will discuss the magnetic field improvements in terms of the cost functional with respect to the original design.
The results of this chapter can also be found in Pištora, Postava, and Šebesta [160], Kopřiva et al. [111], Lukáš [119, 120, 121, 122, 123, 128], Lukáš et al. [124], and in Lukáš, Mühlhuber, and Kuhn [125].

7.1 A physical problem


Let us consider two geometries of electromagnets: the Maltese Cross (MC) geometry and the O-Ring geometry, both depicted in Fig. 7.1. Each consists of a ferromagnetic yoke and poles. There are 4 poles in case of the Maltese Cross and 8 in case of the O-Ring electromagnet. The poles are completed with coils which are pumped with direct electric currents. The electromagnets are used for measurements of Kerr magneto-optic effects, cf. Zvedin [223, p. 40]. They require the magnetic field to be as homogeneous, i.e., as constant as possible in a given normal direction. Let us note that the magneto-optic effects are investigated for applications in high-capacity data storage media, such as the development of new media materials for magnetic or compact disc recording. Let us also note that the electromagnets have been developed at the Institute of Physics, VSB-Technical University of Ostrava, Czech Republic, in the research group of Prof. Jaromír Pištora. Some instances have already been delivered to the following laboratories:

Figure 7.1: The Maltese Cross and O-Ring electromagnets


Institute of Physics, Charles University Prague, Czech Republic,
National Institute of Applied Sciences INSA in Toulouse, France,
Department of Physics, Simon Fraser University in Vancouver, Canada,
Department of Chemistry, Simon Fraser University in Vancouver, Canada,
PSfrag
replacements
and
University Paris VI., France.
First, we describe how the Kerr magneto-optic effect is measured. Let us consider, for instance, the Maltese Cross electromagnet, as in Fig. 7.1, and its cross-section, see Fig. 7.2.

Figure 7.2: Cross-section of the Maltese Cross electromagnet (the ferromagnetic yoke, the poles with pole heads, the coils, the magnetization area, and the magnetization planes are indicated; $x_1$, $x_2$ in meters)

A sample of a magnetic material is placed into the magnetization area, which is located in the middle among
the pole heads. In this area the magnetic field is homogeneous enough with respect to the normal vector of some polarization plane, see Fig. 7.2. We pass an optical (light) beam of a given polarization vector to the sample. There it reflects, and the components of the reflected polarization vector are measured in terms of the Kerr rotation and ellipticity, respectively. Briefly speaking, we measure the polarization state of the reflected beam. The Kerr rotation is the difference between the angle of the main ellipticity axis of the reflected beam and that of the beam before the reflection. Typical measured data are depicted in Fig. 7.3; they were measured by Ing. Igor Kopřiva at the Institute of Physics, VSB-Technical University of Ostrava, see also Kopřiva et al. [111].
Figure 7.3: Dependence of magneto-optic effects on a sample rotation (Kerr rotation [mrad] versus sample orientation in the magnetic field [degrees]; the linear and quadratic Kerr effects are indicated)


From Fig. 7.3 we can see that the Kerr rotation depends on the orientation of the sample in
the magnetic field, which is significant especially for the quadratic Kerr effect. This indicates
anisotropic behaviour of the sample. Therefore, we should measure it in as many directions as
possible. One has either to rotate the sample in the magnetic field, rotate the electromagnet while
the sample is fixed, or rotate the magnetic field itself while both the sample and electromagnet are
fixed. Certainly, the last variant is most preferred. The electromagnets have been developed such
that they are capable to generate magnetic fields homogeneous in stepbystep different directions
just by switching some currents in coils on or off, or by switching their senses. The more coils we
have, the more directions the magnetic field can be oriented in. In case of the Maltese Cross or
the ORing electromagnet, one can sequentially generate magnetic fields homogeneous in up to
8 or 16 directions, respectively. This will lead us to a multistate problem where only the current
excitations, i.e., the righthand sides differ.
Our aim is to improve the current geometries of electromagnets, see Fig. 7.1, in order to
be better suited for measurements of the Kerr effect. The generated magnetic field should be
strong and homogeneous enough in order to admit a magnetooptic effect. Unfortunately, these
assumptions are contradictory and we have to balance them. From physical experience we know
that the homogeneity of the magnetic field depends significantly on the shape of the pole heads.
Hence, we aim at designing shapes of the pole heads in such a way that inhomogeneities of the

CHAPTER 7. AN APPLICATION AND NUMERICAL EXPERIMENTS

108

magnetic field are minimized, but the field itself is still strong enough.

7.2 Three-dimensional mathematical setting

Now, we introduce the complete 3d mathematical setting of the shape optimization problem. We specify the abstract symbols and assumptions that were introduced in the previous text and that are also summarized in Section 6.1. As there are no principal differences between the Maltese Cross and the O-Ring electromagnet, we describe both at once.

Convention 7.1. In all what follows the dimensions are given in meters, except for Figs. 7.5-7.7 and 7.19, where they are in millimeters.

7.2.1 Geometries of the electromagnets

The computational domain is fixed and in case of the Maltese Cross and the O-Ring it is, respectively, as follows:
\[
\Omega := \left(-\frac{d_1}{2}, \frac{d_1}{2}\right) \times \left(-\frac{d_2}{2}, \frac{d_2}{2}\right) \times \left(-\frac{d_3}{2}, \frac{d_3}{2}\right)
\]
and
\[
\Omega := \left\{ x := (x_1, x_2, x_3) \in \mathbb{R}^3 \;\middle|\; (x_1)^2 + (x_2)^2 < r^2 \text{ and } |x_3| < \frac{d_3}{2} \right\},
\]
where
\[
d_1 := d_2 := 0.4\ \mathrm{[m]}, \qquad d_3 := 0.02\ \mathrm{[m]}, \qquad r := 0.2\ \mathrm{[m]}.
\]
The computational domain obviously fulfills Assumption 3.1 and has a Lipschitz boundary.


We describe the geometrical models of the electromagnets. Referring to Fig. 7.4, the green

Figure 7.4: Geometrical models of the Maltese Cross and ORing electromagnets
parts are the ferromagnetic yoke and the poles. The blue parts are the coils. In Fig. 7.5 and in
Fig. 7.6 we can see dimensions in milimeters for geometrical models of the Maltese Cross and
of the ORing electromagnet, respectively. The symbol yoke stands for the domain occupied by
the ferromagnetic yoke, the symbols westp , northwestp , northp , northeastp , eastp , southeastp ,
southp , and southwestp denote the domains occupied by the particular poles, and the symbols
westc , northwestc , northc , northeastc , eastc , southeastc , southc , and southwestc denote the
domains that are occupied by the corresponding coils. In Fig. 7.7 we can see the west pole of

Figure 7.5: Drawing of the Maltese Cross electromagnet (dimensions in millimeters; the yoke, the four poles, and the four coils are indicated)


the ORing in detail. The only geometrical parts being changed during the optimization will be
shapes of the pole heads, see Fig. 7.7.

7.2.2 Set of admissible shapes

We assume all the pole-head shapes to be the same and, moreover, symmetrical with respect to the two corresponding planes, e.g., with respect to $x_1 = 0$ and $x_3 = 0$ in case of the north pole head. Thus, from now on we represent the shape of an arbitrary pole head by the shape of the north pole head. Due to the symmetry we consider only its quarter. A shape $\alpha$ is then a continuous function defined over the domain
\[
\Upsilon := \left(0, \frac{d_{\mathrm{pole},1}}{2}\right) \times \left(0, \frac{d_{\mathrm{pole},3}}{2}\right),
\]
where in case of the Maltese Cross electromagnet
\[
d_{\mathrm{pole},1} := 0.0225\ \mathrm{[m]}, \qquad d_{\mathrm{pole},3} := 0.025\ \mathrm{[m]},
\]
and in case of the O-Ring
\[
d_{\mathrm{pole},1} := d_{\mathrm{pole},3} := 0.02\ \mathrm{[m]}.
\]


Figure 7.6: Drawing of the O-Ring electromagnet (dimensions in millimeters; the yoke, the eight poles, and the eight coils are indicated)


The Lipschitz constant C16 in (5.1) corresponds to the maximal slope angle. We choose
C16 :=

3
.
8

The box constraints (5.2) are chosen such that the shape of the west pole head can neither rise above the bottom of the north coil nor penetrate the neighbouring pole head. Therefore, we choose
\[
\alpha_l := 0.012\ \mathrm{[m]}, \qquad \alpha_u := 0.05\ \mathrm{[m]}
\]
for the Maltese Cross and
\[
\alpha_l := 0.028\ \mathrm{[m]}, \qquad \alpha_u := 0.05\ \mathrm{[m]}
\]
for the O-Ring. Then, the set $U$ of admissible shapes is given by (5.3) and Lemma 5.1 holds.

Since from the practical point of view we cannot manufacture an arbitrary shape, we restrict ourselves to shapes that are described by a Bézier patch with a fixed number of design parameters
\[
n_\Upsilon := n_{\Upsilon,1}\, n_{\Upsilon,2}, \qquad \text{where } n_{\Upsilon,1}, n_{\Upsilon,2} \in \mathbb{N},
\]
and we choose
\[
n_{\Upsilon,1} := 4, \qquad n_{\Upsilon,2} := 3.
\]


Figure 7.7: Detail drawing of the O-Ring west pole (dimensions in millimeters; the pole $\Omega_{\mathrm{west}_p}$, its pole head, the coil $\Omega_{\mathrm{west}_c}$, and the directions of the electric currents are indicated)


Such shapes are by definition smooth enough. We decompose the domain $\Upsilon$ into $(n_{\Upsilon,1}-1)$ times $(n_{\Upsilon,2}-1)$ regular rectangles whose $n_{\Upsilon,1} \times n_{\Upsilon,2}$ corners are
\[
x_{\Upsilon,i,j} := \left( \frac{(i-1)\, d_{\mathrm{pole},1}}{2\,(n_{\Upsilon,1}-1)},\ \frac{(j-1)\, d_{\mathrm{pole},3}}{2\,(n_{\Upsilon,2}-1)} \right)
\quad \text{for } i = 1, \ldots, n_{\Upsilon,1},\ j = 1, \ldots, n_{\Upsilon,2}.
\]
The set $\mathcal{Q}$ is defined as follows:
\[
\mathcal{Q} := \left\{ p := \left( p_{1,1}, \ldots, p_{1,n_{\Upsilon,2}}, \ldots, p_{n_{\Upsilon,1},1}, \ldots, p_{n_{\Upsilon,1},n_{\Upsilon,2}} \right) \in \mathbb{R}^{n_\Upsilon} \;\middle|\; \alpha_l \le p_{i,j} \le \alpha_u \right\}.
\]
The mapping $F : \mathcal{Q} \mapsto U$, see also (5.5), is the following (tensor product) Bézier mapping that involves the symmetry:
\[
\alpha(x_1, x_3) := [F(x_1, x_3)](p) :=
\sum_{i=1}^{n_{\Upsilon,1}} \sum_{j=1}^{n_{\Upsilon,2}} p_{i,j}
\left[ \beta_i^{2 n_{\Upsilon,1}-1}\!\left( \frac{2 x_1 + d_{\mathrm{pole},1}}{2\, d_{\mathrm{pole},1}} \right)
     + \beta_{2 n_{\Upsilon,1}-i}^{2 n_{\Upsilon,1}-1}\!\left( \frac{2 x_1 + d_{\mathrm{pole},1}}{2\, d_{\mathrm{pole},1}} \right) \right]
\]
\[
\cdot \left[ \beta_j^{2 n_{\Upsilon,2}-1}\!\left( \frac{2 x_3 + d_{\mathrm{pole},3}}{2\, d_{\mathrm{pole},3}} \right)
     + \beta_{2 n_{\Upsilon,2}-j}^{2 n_{\Upsilon,2}-1}\!\left( \frac{2 x_3 + d_{\mathrm{pole},3}}{2\, d_{\mathrm{pole},3}} \right) \right],
\quad (x_1, x_3) \in \Upsilon, \qquad (7.1)
\]
where for $n \in \mathbb{N}$, $i \in \mathbb{N}$, $i \le n$, and $t \in \mathbb{R}$ such that $0 \le t \le 1$
\[
\beta_i^n(t) := \frac{(n-1)!}{(i-1)!\,(n-i)!}\, t^{\,i-1} (1-t)^{\,n-i}, \qquad (7.2)
\]
which is called the Bernstein polynomial. We can easily check that
\[
\forall p \in \mathcal{Q}: \quad [F(\cdot)](p) \in U,
\]
i.e., both the relations (5.1) and (5.2) are fulfilled. An example of the mapping $F$ is depicted in Fig. 7.8, and a short numerical sketch of its evaluation is given at the end of this subsection. Concerning (5.6), we perform a mirroring of the shape with respect to the planes $x_1 = 0$ and $x_3 = 0$ and, moreover, we copy this shape to all the remaining pole heads. In this way the shape $\alpha$ controls the decomposition of $\Omega$ into $\Omega_0(\alpha)$, which denotes the domain occupied by the coils and the air, and into $\Omega_1(\alpha)$, which is the domain occupied by the yoke and the poles.
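The following Python sketch evaluates the Bernstein basis of (7.2) and the symmetric tensor-product mapping (7.1) at a single point. It is only an illustration of the formulas above; the function names and the sample design values are not taken from the thesis, and the symmetric pairing of the basis functions follows the reconstruction of (7.1) given here.

```python
import numpy as np
from math import factorial

def bernstein(i, n, t):
    """1-indexed Bernstein polynomial of (7.2): beta_i^n(t), 1 <= i <= n."""
    return factorial(n - 1) / (factorial(i - 1) * factorial(n - i)) * t**(i - 1) * (1 - t)**(n - i)

def pole_head_shape(p, x1, x3, d1, d3):
    """Evaluate the symmetric tensor-product Bezier mapping (7.1).

    p : (n1, n2) array of design heights; d1, d3 : d_pole,1 and d_pole,3."""
    n1, n2 = p.shape
    t1 = (2.0 * x1 + d1) / (2.0 * d1)          # Bernstein parameter for x1
    t3 = (2.0 * x3 + d3) / (2.0 * d3)          # Bernstein parameter for x3
    b1 = [bernstein(i, 2 * n1 - 1, t1) + bernstein(2 * n1 - i, 2 * n1 - 1, t1) for i in range(1, n1 + 1)]
    b3 = [bernstein(j, 2 * n2 - 1, t3) + bernstein(2 * n2 - j, 2 * n2 - 1, t3) for j in range(1, n2 + 1)]
    return sum(p[i, j] * b1[i] * b3[j] for i in range(n1) for j in range(n2))

# Illustrative design: 4 x 3 heights between alpha_l = 0.012 and alpha_u = 0.05.
p = np.full((4, 3), 0.03)
print(pole_head_shape(p, x1=0.005, x3=0.002, d1=0.0225, d3=0.025))
```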


Figure 7.8: Bézier design parameters and the corresponding shape of the north pole head ($x_1$, $x_2$, $x_3$ in millimeters)

7.2.3 Continuous multistate problem

Here, we are concerned with the 3-dimensional magnetostatics, i.e., with the differential operator
\[
B := \operatorname{curl}.
\]
Therefore, Assumptions 3.2-3.5 are satisfied by Lemmas 3.11-3.13 and by (3.31).

Now, we proceed through Section 5.2.2. We specify Assumption 5.1 by
\[
D_0 := \frac{1}{\mu_0}, \qquad D_1 := \frac{1}{\mu_1},
\]
where $\mu_0 := 4\pi \cdot 10^{-7}\ \mathrm{[H \cdot m^{-1}]}$ and $\mu_1 := 5100\,\mu_0$ are the permeabilities of the air and of the ferromagnetic parts, respectively.

Further, we consider
\[
n_v := 2 \quad \text{and} \quad n_v := 3
\]
variations of the current excitations in case of the Maltese Cross and the O-Ring electromagnet, respectively. In both cases the right-hand side $f^v(x)$ is calculated from the electric current $I$ and the number of turns $n_I$,
\[
I := 5\ \mathrm{[A]} \ \text{ or } \ I := 1.41\ \mathrm{[A]}, \qquad n_I := 600,
\]
respectively, for the Maltese Cross or the O-Ring, and from the cross-section area of the coils, see also Figs. 7.5-7.7,
\[
S_c := 0.03 \cdot 0.01 = 3 \cdot 10^{-4}\ \mathrm{[m^2]}
\quad \text{or} \quad
S_c := 0.03 \cdot \frac{0.02 + 0.01075}{2} = 4.6125 \cdot 10^{-4}\ \mathrm{[m^2]}.
\]
The current densities $f^v \equiv J^v$ satisfy (5.2.2), i.e., they are divergence-free. The absolute value of $J^v(x)$ is nonzero only in the subdomains $\Omega_{\mathrm{west}_c}, \ldots, \Omega_{\mathrm{southwest}_c}$, where the direct electric currents are located:
\[
|J^v(x)| = \frac{n_I\, I}{S_c}.
\]
These subdomains are independent of the shape $\alpha$, therefore, Assumption 5.2 is satisfied.

Now, we describe the directions of $J^v(x)$ for both electromagnets and each variation $v$ of the current excitation. During the description we refer to Fig. 7.7 and to Figs. 7.9-7.13. We say that two coils are pumped (excited) in the same sense if there is a magnetic circuit
that goes through both of them. Otherwise, the coils are excited in the opposite sense. In Fig. 7.9 the north and the south coils are excited in the same sense, while the other two coils are switched off. In Fig. 7.10 all the coils are pumped such that the west and the north coils are excited in the same sense, and so are the south and the east coils, while the west and the south coils, as well as the north and the east coils, are excited in the opposite sense. In Fig. 7.11 the north and the south coils are pumped in the same sense, the others are switched off. In Fig. 7.12 the situation is similar to Fig. 7.10, while the northwest, northeast, southeast, and southwest coils are switched off. Finally, in Fig. 7.13 the following 4 couples of coils are excited in the same sense: the southwest and south, the west and southeast, the northwest and east, and the north and northeast coils.

Figure 7.9: Magnetic flux lines for the vertical current excitation (v := 1) for the Maltese Cross

7.2.4 Continuous shape optimization problem

Now we shall specify the cost functional. Recall that we want to minimize the inhomogeneities of the magnetic field in the area where the optical beam is magnetized, while the magnetic field remains strong enough. We measure the inhomogeneities in the $L^2$ norm, which is, from the mathematical point of view, the most natural one. The magnetic field is not allowed to decrease below some minimal magnitude, which is prescribed by a penalty term.

The magnetization area is for both geometries
\[
\Omega_m := [-0.005, 0.005] \times [-0.005, 0.005] \times [-0.005, 0.005]\ \mathrm{[m]}.
\]
We choose the cost functional such that it measures the differences of the magnetic flux density from its average value over the domain $\Omega_m$. It reads as follows:
\[
I\big( B^1(x), \ldots, B^{n_v}(x) \big) := \frac{1}{n_v} \sum_{v=1}^{n_v}
\left[ \mathcal{F}\big( B^v(x) \big) + \mathcal{P}_v\big( B^v(x) \big) \right],
\qquad (7.3)
\]

Figure 7.10: Magnetic flux lines for the diagonal current excitation (v := 2) for the Maltese Cross

where
\[
B^v(x) := \operatorname{curl}_x \big( [u^v(\alpha)](x) \big),
\]
\[
\mathcal{F}\big( B^v(x) \big) := \frac{1}{\operatorname{meas}(\Omega_m) \big( B^{\mathrm{avg},v}_{\min} \big)^2}
\int_{\Omega_m} \big| B^v(x) - B^{\mathrm{avg},v}\big( B^v(x) \big)\, n^v_m \big|^2 \, \mathrm{d}x,
\qquad (7.4)
\]
\[
\mathcal{P}_v\big( B^v(x) \big) := \varrho \cdot \max\Big\{ 0,\ B^{\mathrm{avg},v}_{\min} - B^{\mathrm{avg},v}\big( B^v(x) \big) \Big\}^2,
\qquad \varrho := 10^6,
\qquad (7.5)
\]
where $u^v(\alpha)$ stands for the solution to $(\mathcal{W}^v(\alpha))$ and where the following is the average magnetic flux density:
\[
B^{\mathrm{avg},v}\big( B^v(x) \big) := \frac{1}{\operatorname{meas}(\Omega_m)}
\int_{\Omega_m} \big| B^v(x) \cdot n^v_m \big| \, \mathrm{d}x.
\qquad (7.6)
\]
Concerning the vectors $n^v_m$, they are chosen as follows:
\[
n^v_m := \begin{cases} (0, 1, 0) & ,\ v = 1 \\ (1/\sqrt{2}, 1/\sqrt{2}, 0) & ,\ v = 2 \end{cases}
\quad \text{or} \quad
n^v_m := \begin{cases} (0, 1, 0) & ,\ v = 1 \\ (1/\sqrt{2}, 1/\sqrt{2}, 0) & ,\ v = 2 \\ (\cos(\pi/8), \sin(\pi/8), 0) & ,\ v = 3 \end{cases}
\]
in case of the Maltese Cross or the O-Ring, respectively. The minimal average magnetic flux densities are
\[
B^{\mathrm{avg},1}_{\min} := 0.1\ \mathrm{[T]}, \qquad B^{\mathrm{avg},2}_{\min} := 0.15\ \mathrm{[T]}
\]
for both geometries and, additionally, in case of the O-Ring's superdiagonal excitation it is
\[
B^{\mathrm{avg},3}_{\min} := 0.3\ \mathrm{[T]}.
\]
A minimal numerical sketch of evaluating (7.3)-(7.6) on a sampled field is given at the end of this subsection.


Figure 7.11: Magnetic flux lines for the vertical current excitation (v := 1) for the O-Ring

7.2.5 Regularization and finite element discretization

Once we choose a positive regularization parameter $\varepsilon > 0$, we have nothing more to specify concerning Section 5.3. Thus, we can proceed throughout Section 5.4. We choose a discretization parameter $h > 0$ such that $h \le \overline{h}$, where $\overline{h}$ is the largest dimension in the geometry,
\[
\overline{h} := 0.4\ \mathrm{[m]}.
\]
To any discretization parameter $h$ we associate a polyhedral domain $\Omega^h$, which is $\Omega^h := \Omega$ for the Maltese Cross, while for the O-Ring it is as in Figs. 4.1 and 7.11-7.13. Obviously, in both cases Assumption 4.3 is satisfied. Then we discretize the set of admissible shapes $U$ via a discretization $\mathcal{T}^h$ of the domain $\Upsilon$, as described in Section 5.4.1. Further, we discretize the polyhedral computational domain $\Omega^h$ such that (5.24) holds. We provide the shape-to-mesh mapping by solving the auxiliary 3d discretized elasticity problem (6.2); a simplified sketch of this mesh-deformation step is given below. Unfortunately, from many numerical experiments we have learned that for rather large shape deformations some elements flip. In this case we have to remesh the geometry, as noted in Remark 5.1.

We employ linear Nédélec tetrahedral elements, which are described in Section 4.4.2. Therefore, Assumptions 4.1-4.2 and Assumptions 4.4-4.7 are satisfied whenever the discretization $\mathcal{T}^h(\Omega^h)$ satisfies the regularity condition (4.71).

For each $\alpha^h \in U^h$ the permeability function is defined by (5.26). For any discretization parameter $h \le \overline{h}$ the coil domains $\Omega_{\mathrm{west}_c}, \ldots, \Omega_{\mathrm{southwest}_c}$ remain unchanged and their discretizations do not depend on $\alpha^h$. This is guaranteed by the shape-to-mesh (elasticity) mapping (6.2), where we prescribe the homogeneous Dirichlet boundary condition on $\Omega_{\mathrm{west}_c}, \ldots, \Omega_{\mathrm{southwest}_c}$. Therefore, Assumption 5.4 is true.
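The shape-to-mesh mapping itself is the solution of an auxiliary discretized elasticity problem. As a stand-in illustration, the following Python sketch propagates a prescribed displacement of a design-boundary node into the interior of a small 1d node chain by solving a discrete Laplace system, which mimics the role of (6.2) on a trivial geometry. The matrix, the node layout, and the function name are invented for the illustration; the thesis uses a 3d elasticity operator instead.

```python
import numpy as np

def propagate_displacement(n_interior, u_left, u_right):
    """Solve the 1d discrete Laplace problem -u_{i-1} + 2 u_i - u_{i+1} = 0 for the
    interior mesh-node displacements, with the moved design node (u_left) and the
    fixed outer boundary node (u_right) as Dirichlet data."""
    A = (np.diag(2.0 * np.ones(n_interior))
         - np.diag(np.ones(n_interior - 1), 1)
         - np.diag(np.ones(n_interior - 1), -1))
    b = np.zeros(n_interior)
    b[0] += u_left          # contribution of the moved design node
    b[-1] += u_right        # contribution of the fixed node (here 0)
    return np.linalg.solve(A, b)

# Design node moved by 2 mm, outer boundary fixed: interior nodes follow linearly.
print(propagate_displacement(n_interior=5, u_left=0.002, u_right=0.0))
```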


Figure 7.12: Magnetic flux lines for the diagonal current excitation (v := 2) for the O-Ring
Finally, the discretized (and regularized) cost functional is given by (5.36) and by the relations (7.3)-(7.6), which lead to the following expressions:
\[
I\big( B^{1,n}_h, \ldots, B^{n_v,n}_h \big) := \frac{1}{n_v} \sum_{v=1}^{n_v}
\left[ \mathcal{F}^h\big( B^{v,n}_h \big) + \mathcal{P}^h_v\big( B^{v,n}_h \big) \right],
\qquad (7.7)
\]
where $B^{v,n}_h$ is the elementwise constant magnetic flux density given by (6.7)-(6.8) and where
\[
\mathcal{F}^h\big( B^{v,n}_h \big) := \frac{1}{\operatorname{meas}(\Omega_m) \big( B^{\mathrm{avg},v}_{\min} \big)^2}
\sum_{e \in E^h:\, K^e \subset \Omega_m}
\big| B^{v,n,e}_h - B^{\mathrm{avg},v,n}\big( B^{v,n}_h \big)\, n^v_m \big|^2 \operatorname{meas}(K^e),
\qquad (7.8)
\]
\[
\mathcal{P}^h_v\big( B^{v,n}_h \big) := \varrho \cdot \max\Big\{ 0,\ B^{\mathrm{avg},v}_{\min} - B^{\mathrm{avg},v,n}\big( B^{v,n}_h \big) \Big\}^2,
\qquad (7.9)
\]
\[
B^{\mathrm{avg},v,n}\big( B^{v,n}_h \big) := \frac{1}{\operatorname{meas}(\Omega_m)}
\sum_{e \in E^h:\, K^e \subset \Omega_m}
\big| B^{v,n,e}_h \cdot n^v_m \big| \operatorname{meas}(K^e).
\qquad (7.10)
\]
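As a complement to the sampling sketch given at the end of Section 7.2.4, the discrete sums (7.8)-(7.10) weight the elementwise-constant field by the element measures. A minimal Python sketch under the same illustrative naming assumptions, where meas(Omega_m) is approximated by the sum of the element volumes inside the magnetization area:

```python
import numpy as np

def discrete_cost(B_e, vol_e, n_m, B_min_avg, rho=1.0e6):
    """Evaluate (7.8)-(7.10) from elementwise-constant fields B_e (n_el, 3)
    and measures vol_e (n_el,) of the tetrahedra inside Omega_m."""
    B_e = np.asarray(B_e, float)
    vol_e = np.asarray(vol_e, float)
    n_m = np.asarray(n_m, float)
    meas_m = vol_e.sum()                                                   # approx. meas(Omega_m)
    B_avg = np.sum(np.abs(B_e @ n_m) * vol_e) / meas_m                     # (7.10)
    F = np.sum(np.sum((B_e - B_avg * n_m) ** 2, axis=1) * vol_e) / (meas_m * B_min_avg**2)  # (7.8)
    P = rho * max(0.0, B_min_avg - B_avg) ** 2                             # (7.9)
    return F + P

# Four equal elements carrying a uniform 0.12 T vertical field: F = 0 and P = 0.
B_e = np.tile([0.0, 0.12, 0.0], (4, 1))
vol_e = np.full(4, 2.5e-7)
print(discrete_cost(B_e, vol_e, n_m=[0.0, 1.0, 0.0], B_min_avg=0.1))
```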

In order to justify the use of a Newton-like optimization algorithm, we still have to verify Assumptions 6.1 and 6.2. Concerning Assumption 6.1, the components of the constraint functional $\phi : \mathbb{R}^{n_\Upsilon} \mapsto \mathbb{R}^{n_\phi}$, where $n_\phi := 2 n_\Upsilon = 2 n_{\Upsilon,1} n_{\Upsilon,2}$, are as follows:
\[
\phi_k(p) :=
\begin{cases}
\alpha_l - p_{i,j} & ,\ i \le n_{\Upsilon,1},\ j \le n_{\Upsilon,2}, \\
p_{i-n_{\Upsilon,1},\, j-n_{\Upsilon,2}} - \alpha_u & ,\ i > n_{\Upsilon,1},\ j > n_{\Upsilon,2},
\end{cases}
\qquad \text{for } k = (i-1)\, n_{\Upsilon,2} + j = 1, \ldots, n_\phi.
\]
Hence, Assumption 6.1 is obviously satisfied (a small sketch of assembling $\phi$ is given at the end of this section). Now, we shall verify Assumption 6.2. The smoothness of the design-to-shape mapping $F$ with respect to $p$ is easy to see from (7.1). Concerning the smoothness of the shape-to-mesh mapping $x^h$, which is given by (6.2), it is well known that


Figure 7.13: Magnetic flux lines for the superdiagonal current excitation (v := 3) for the O-Ring
the stiffness matrix $K^h(x_0)$ is nonsingular as long as we consider a Dirichlet boundary condition on a certain part of either the boundary $\partial\Omega^h$ or the interface $\partial\Omega^h_0(\alpha^h) \cap \partial\Omega^h_1(\alpha^h)$. Since $b^h(\alpha^h)$ involves the following nonhomogeneous Dirichlet design interface boundary condition:
\[
\Delta x^h = M^h \alpha^h \quad \text{on } \Gamma^h,
\]
where $\Gamma^h$ denotes the design interface, the mapping $x^h$, see (6.2), is smooth with respect to $\alpha^h$. Further, due to (4.66), (4.68), and (4.69) we can see that each of $R^e(x^e)$, $S^e(x^e)$, and $S^{e,\mathrm{curl}}(x^e)$, respectively, is smooth as long as $K^e$ does not flip. The last item of Assumption 6.2 easily follows from (7.7)-(7.10).
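As a small illustration of Assumption 6.1, the following Python sketch assembles the bound-constraint function phi(p) <= 0 for the 4 x 3 design grid in the stacked form reconstructed above. The function name and the ordering of the components are illustrative, not taken from the thesis code.

```python
import numpy as np

def box_constraints(p, alpha_l, alpha_u):
    """Stack the lower- and upper-bound constraints of the design grid p
    into one vector phi(p), with phi(p) <= 0 iff alpha_l <= p <= alpha_u."""
    p = np.asarray(p, dtype=float).ravel()       # row-wise ordering p_{1,1}, ..., p_{n1,n2}
    return np.concatenate([alpha_l - p,          # lower bounds: alpha_l - p_{i,j} <= 0
                           p - alpha_u])         # upper bounds: p_{i,j} - alpha_u <= 0

p = np.full((4, 3), 0.03)                        # an admissible Maltese Cross design
phi = box_constraints(p, alpha_l=0.012, alpha_u=0.05)
print(phi.shape, bool(np.all(phi <= 0.0)))       # (24,) True
```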

7.3 Two-dimensional mathematical setting

Here, we reduce our mathematical model by neglecting the third dimension of the geometry, as described in Section 2.3. Thus, each 2-dimensional domain, denoted by the superscript 2d, is created as the intersection of the related 3-dimensional domain with the zero plane $Z := \{ x \in \mathbb{R}^3 \mid x_3 = 0 \}$ and by skipping the third component; e.g., for the computational domain,
\[
\Omega^{2d} := \left\{ x := (x_1, x_2) \in \mathbb{R}^2 \;\middle|\; (x_1, x_2, 0) \in \Omega \right\}.
\]
We consider
\[
\Upsilon := \left( 0, \frac{d_{\mathrm{pole},1}}{2} \right).
\]
The Lipschitz constant as well as the box constraints remain. The set of admissible shapes $U$ is given by (5.3).

Concerning $\mathcal{Q}$, we choose the following number of design parameters:
\[
n_\Upsilon := n_{\Upsilon,1} := 4,
\]

which is the number of control Bézier nodes. The domain $\Upsilon$ is decomposed into $(n_\Upsilon - 1)$ subintervals with the $n_\Upsilon$ nodes
\[
x_{\Upsilon,i} := \frac{(i-1)\, d_{\mathrm{pole},1}}{2\,(n_\Upsilon - 1)} \quad \text{for } i = 1, \ldots, n_\Upsilon.
\]
The set $\mathcal{Q}$ is as follows:
\[
\mathcal{Q} := \left\{ p := (p_1, \ldots, p_{n_\Upsilon}) \in \mathbb{R}^{n_\Upsilon} \;\middle|\; \alpha_l \le p_i \le \alpha_u \right\}.
\]
The mapping $F : \mathcal{Q} \mapsto U$, which again involves the symmetry, reads
\[
\alpha(x_1) := [F(x_1)](p) :=
\sum_{i=1}^{n_\Upsilon} p_i
\left[ \beta_i^{2 n_\Upsilon - 1}\!\left( \frac{2 x_1 + d_{\mathrm{pole},1}}{2\, d_{\mathrm{pole},1}} \right)
     + \beta_{2 n_\Upsilon - i}^{2 n_\Upsilon - 1}\!\left( \frac{2 x_1 + d_{\mathrm{pole},1}}{2\, d_{\mathrm{pole},1}} \right) \right],
\qquad (7.11)
\]
where $x_1 \in \Upsilon$ and $\beta^n_i$ is given by (7.2). The mapping $F$ is depicted in Fig. 7.14, where the red line connects the design parameters and the blue line is the resulting shape.
Figure 7.14: Bézier design parameters and the corresponding 2d shape of the north pole head ($x_1$, $x_2$ in millimeters)
We are now concerned with the 2-dimensional magnetostatics, i.e., with the differential operator
\[
B := \operatorname{grad}.
\]
The space $H(\operatorname{grad}; \Omega^{2d})$ is equivalent to the space $H^1(\Omega^{2d})$, hence, Assumptions 3.2-3.5 are satisfied by Theorems 3.7-3.9 and by (3.27). Further, we can proceed throughout the rest of Section 7.2.4. We only recall that the compatibility condition (5.2.2) is satisfied, as $\operatorname{Ker}(\operatorname{grad}; \Omega^{2d}) = \{0\}$.

As far as the calculation of the 2d continuous cost functional is concerned, the magnetization area is
\[
\Omega_m := [-0.005, 0.005] \times [-0.005, 0.005]\ \mathrm{[m]}
\]

and the expressions (7.3)-(7.10) remain, where for $v = 1, \ldots, n_v$ the vectors $n^v_m$ are as follows:
\[
n^v_m := \begin{cases} (0, 1) & ,\ v = 1 \\ (1/\sqrt{2}, 1/\sqrt{2}) & ,\ v = 2 \end{cases}
\quad \text{or} \quad
n^v_m := \begin{cases} (0, 1) & ,\ v = 1 \\ (1/\sqrt{2}, 1/\sqrt{2}) & ,\ v = 2 \\ (\cos(\pi/8), \sin(\pi/8)) & ,\ v = 3 \end{cases}
\]
in case of the Maltese Cross or the O-Ring electromagnet, respectively. The values of the minimal average magnetic flux densities remain as well.

Now, we do not need to introduce any regularization of the state problem, as the bilinear form is elliptic on the whole space $H_0(\operatorname{grad}; \Omega^{2d}) \equiv H^1_0(\Omega^{2d})$. Concerning the finite element discretization with a discretization parameter $h > 0$ such that $h \le \overline{h}$, we approximate the domain $\Omega^{2d}$ by a polygonal domain $\Omega^{2d}_h$, where for the Maltese Cross $\Omega^{2d}_h := \Omega^{2d}$ and for the O-Ring it is as in Figs. 7.11-7.13. Then, Assumption 4.3 holds. We use linear Lagrange elements, which are described in Section 4.4.1. The discretization $\mathcal{T}^h(\Omega^{2d}_h)$ satisfies the minimum angle condition (4.61) and, therefore, Assumptions 4.1-4.2 and Assumptions 4.4-4.7 are satisfied. The remaining specification of the 2d discretized shape optimization problem is as in Section 7.2.5. The only difference is that the smoothness of $R^e(x^e)$, $S^e(x^e)$, and $S^{e,\mathrm{grad}}(x^e)$ is now due to (4.56), (4.58), and (4.59), respectively.

7.4 Numerical results

In this section, we present numerical results for both the 2d and the 3d problems. In the optimization we employed the SQP algorithm with the BFGS update of the Hessian, see Section 6.2.2. For the calculation of gradients we used first-order numerical differentiation. Moreover, we used a multilevel approach with 3 levels. The calculations were done using the scientific software tools Netgen, see Schöberl [186], and Fepp, see Kuhn, Langer, and Schöberl [117], with an extension package for shape optimization, see Lukáš, Mühlhuber, and Kuhn [125], all of which were developed in the research project SFB F013 at the University of Linz in Austria. All the calculations were run at the Department of Applied Mathematics, VSB-Technical University of Ostrava, Czech Republic, on a Linux PC with a Pentium III processor (1 GHz) and 256 MB of memory.

The optimized pole heads of the Maltese Cross electromagnet are depicted in Fig. 7.15, while the initial shape was a rectangle such that $\alpha_{\mathrm{init}}(x) := \alpha_u$. The 2d shape is described by 7 design
Figure 7.15: Optimized 2d and 3d pole heads of the Maltese Cross electromagnet
variables including the symmetry. Concerning the discretization of the state problem, the discretization parameters are
\[
h := 0.05\ \mathrm{[m]}, \qquad h := 0.025\ \mathrm{[m]}, \qquad \text{and} \qquad h := 0.0125\ \mathrm{[m]}
\]
at the first (coarsest), second, and third (finest) level, respectively. There are 12272 degrees of freedom at the last (3rd, finest) level. The optimization took 8 SQP iterations at the first (coarsest) level, 35 at the second level, and 25 at the third (finest) level, which was all done in 1 hour and 59 minutes, see also Fig. 7.17. The cost functional decreased from $1.97 \cdot 10^{-6}$ (1st level) to $1.49 \cdot 10^{-6}$ (3rd level). The 3d shape is determined by 12 design variables with the symmetry involved. The state problem at the finest level is discretized by 29541 degrees of freedom. Within the multilevel approach we made a step between the 2d and the 3d model such that at the first level we had a coarse discretization of the 2d problem, at the second level a coarse discretization of the 3d problem, and at the last, third level a fine discretization of the 3d problem. The calculation proceeded in 6, 50, and 37 SQP iterations at the respective levels, i.e., 93 SQP iterations in total, which took 29 hours and 46 minutes, see also Fig. 7.18. The cost functional decreased from $2.57 \cdot 10^{-6}$ (2nd level) to $7.32 \cdot 10^{-7}$ (3rd level).
The 2d optimized pole head of the O-Ring electromagnet is depicted in Fig. 7.16, while the initial shape was again a rectangle. The shape is described by 7 design variables including the symmetry.

Figure 7.16: Optimized 2d pole head of the O-Ring electromagnet

The state problem has 12005 degrees of freedom at the third (finest) level. The optimization took 19, 8, and 37 SQP iterations at the respective levels, i.e., 64 iterations in total, and it was all done in 3 hours and 41 minutes. The cost functional decreased from $8.14 \cdot 10^{-4}$ (1st, coarsest level) to $2.87 \cdot 10^{-4}$ (3rd, finest level).

7.4.1 Testing the multilevel approach

Here, we present numerical tests of the multilevel optimization approach that was introduced in Section 6.4. We refer to Fig. 7.17, where we compare the multilevel approach with the classical one, applying both to the 2d Maltese Cross optimization problem. From the last column in Fig. 7.17 we can see that the multilevel approach is much faster than the classical one: using the multilevel approach the calculation took about 2 hours, while it took almost 7 hours with the classical approach.

In Fig. 7.18, a general multilevel approach is presented. It is tested on the 3d optimal shape design problem of the Maltese Cross electromagnet. At the first level (h := 0.05 [m]), a coarse 2d optimization proceeds from the initial rectangular shape. The 2d coarse optimized shape from the first level is used as the initial guess at the second level, where a coarse (h := 0.05 [m]), but now

  number of unknowns   SQP iters.   CPU time
        1386              --          56s
        4705              47          27m 31s
        4970              53          30m 12s
       12272              72          1h 58m 55s
       12324             125          6h 44m 0s

Figure 7.17: Multilevel versus classical optimization approach (the pictures of the optimized designs and the numbers of design variables are omitted here)

3d, optimization is employed. This means that we have to prolong the 2d coarse optimized design into the third dimension by a constant; this is the step across the thick line in Fig. 7.18. Then we proceed as in Fig. 7.17 and use the 3d coarse optimized design as the initial guess at the third level (h := 0.025 [m]). From the last line in Fig. 7.18 we can see that the whole calculation took almost 30 hours. We tried to compare this general multilevel approach with the classical one, but the calculation took more than 4 days and several remeshings of the geometry had to be done. Unfortunately, after 4 days we were still not able to reach the optimal solution, hence the calculation was stopped.

7.4.2 Testing the adjoint method

Unfortunately, we have not yet finished the implementation of the adjoint method within the scientific software tool Fepp, see Kuhn, Langer, and Schöberl [117]. Despite this fact, we provide a Matlab implementation of the method, see Lukáš [119], which is enclosed on the CD.

Let us consider a 2d academic optimization problem governed by linear magnetostatics. Its geometry is depicted in Fig. 7.19. Due to the symmetry and since we employ only one state problem, the computational domain is the top-left quarter
\[
\Omega^{2d} := (-0.2, 0) \times (0, 0.1)\ \mathrm{[m]}.
\]
  number of des. variables   number of unknowns   SQP iters.   CPU time
            --                     2777               --          44s
            --                    12086               47          7h 27m
            12                    29541               93          29h 46m

Figure 7.18: A general multilevel optimization approach (the pictures of the optimized 2d and 3d designs are omitted here)


The cost functional reads as follows:
\[
\mathcal{F}\big( B(x) \big) := \frac{1}{\operatorname{meas}(\Omega_m)\, \| B_{\mathrm{req}} \|^2}
\int_{\Omega_m} \| B(x) - B_{\mathrm{req}} \|^2 \, \mathrm{d}x,
\]
where we choose the required magnetic flux density
\[
B_{\mathrm{req}} := (0.025, 0)\ \mathrm{[T]}.
\]
The currents are located in the coil domains $\Omega_{\mathrm{west}_c}$ and $\Omega_{\mathrm{east}_c}$ and the absolute value of the current density is
\[
|J(x)| = 10^6\ \mathrm{[A \cdot m^{-2}]}.
\]
The box constraints are
\[
\alpha_l := -0.02\ \mathrm{[m]}, \qquad \alpha_u := 0.01\ \mathrm{[m]}.
\]
We discretize the problem with a discretization parameter
\[
h := 0.01\ \mathrm{[m]}.
\]
The discretized grid and the solution to the magnetostatic state problem for the initial design are depicted in Fig. 7.20, and those for the optimized design in Fig. 7.21. The design is described by 4 variables, the state problem by 221 degrees of freedom. The cost functional has

Figure 7.19: Geometry of the two-coils problem (dimensions in millimeters; the poles $\Omega_{\mathrm{west}_p}$, $\Omega_{\mathrm{east}_p}$ and the coils $\Omega_{\mathrm{west}_c}$, $\Omega_{\mathrm{east}_c}$ are indicated)


Figure 7.20: Initial design and the magnetic field of the two-coils problem ($x_1$, $x_2$ in meters)
improved from 0.0077 to 0.0042. In Table 7.1 there is a comparison of the SQP method using first-order numerical differentiation with the SQP method using the adjoint method for calculating the gradients. Both calculations took 4 SQP iterations. We can see that the SQP with numerical differentiation needed 21 evaluations of the state problem, while only 5 were needed by the adjoint method plus an additional 4 evaluations of the adjoint state problem. In fact, the numerical differentiation took 5 evaluations of the cost functional plus an additional 4 (number of design variables) times 4 (number of SQP iterations) evaluations, which gives 21 evaluations in total. The cost functional was evaluated in about 18 seconds, while the adjoint state problem in about 5 seconds. Enclosed is a CD with the Matlab implementation, see also Lukáš [119], where you can run
> optimization(n); % numerical differentiation
> optimization(a); % adjoint method
to see this comparison.
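For readers who want to see where the 5 + 4 count comes from, here is a generic Python sketch of the adjoint gradient for a discretized problem with a linear state equation K(p) u = f and a cost J(u): one state solve gives the cost, and one adjoint solve gives the whole gradient, whereas one-sided numerical differentiation needs one extra state solve per design variable. The quadratic cost, the parameter-dependent stiffness matrix, and the finite-difference approximation of dK/dp_i are all invented for the illustration and do not reproduce the Matlab code on the CD.

```python
import numpy as np

def adjoint_gradient(p, assemble_K, f, J_grad_u, dp=1e-6):
    """Gradient of J(u(p)) with K(p) u = f via the adjoint method.

    assemble_K(p) -> stiffness matrix, J_grad_u(u) -> dJ/du (both assumptions);
    f is assumed independent of p. One adjoint solve replaces one state solve
    per design variable."""
    K = assemble_K(p)
    u = np.linalg.solve(K, f)                        # state solve
    lam = np.linalg.solve(K.T, J_grad_u(u))          # adjoint solve
    grad = np.empty(len(p))
    for i in range(len(p)):
        p_pert = np.array(p, dtype=float)
        p_pert[i] += dp
        dK = (assemble_K(p_pert) - K) / dp           # cheap: no extra linear solve
        grad[i] = -lam @ (dK @ u)                    # dJ/dp_i = -lambda^T (dK/dp_i) u
    return grad

# Tiny illustrative problem: K(p) = diag(1 + p), J(u) = 0.5 * ||u - 1||^2.
assemble_K = lambda p: np.diag(1.0 + np.asarray(p, dtype=float))
J_grad_u = lambda u: u - 1.0
print(adjoint_gradient(p=[0.5, 1.0, 2.0], assemble_K=assemble_K,
                       f=np.ones(3), J_grad_u=J_grad_u))
```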


Figure 7.21: Optimized design and the magnetic field of the two-coils problem ($x_1$, $x_2$ in meters)
                                     numerical differentiation   adjoint method
  number of cost func. evals.                  21                       5
  number of adjoint problem evals.              0                       4
  number of SQP iterations                      4                       4
  CPU time                                 6min 26sec              1min 53sec

Table 7.1: Numerical differentiation versus the adjoint method

7.5 Manufacture and measurements

In my opinion, the results of this section are the highlight of this research. The calculated optimized shape was manufactured by the team of Prof. Pištora at the Institute of Physics, VSB-Technical University of Ostrava in the Czech Republic, and Dr. RNDr. Dalibor Ciprian measured the magnetic field for both the initial and the optimized designs of the pole heads of the Maltese Cross electromagnet. These pole heads are depicted in Fig. 7.22.

Figure 7.22: Initial and optimized 2d pole heads of the Maltese Cross electromagnet

In Fig. 7.23 the distributions of the normal component of the magnetic flux density are depicted. The blue solid line is the normal magnetic flux density along the magnetization plane for the diagonal excitation, see also Fig. 7.10, of the initial design, see Fig. 7.22. The red solid line is the normal magnetic flux density for the diagonal excitation of the optimized design. The blue and red dashed lines, respectively, are the normal magnetic flux densities along the magnetiza-


tion plane for the vertical excitation, see also Fig. 7.9, of the initial and optimized designs. In Fig. 7.23 we can see a significant improvement of the homogeneity of the magnetic field. The cost functional calculated from the measured data decreases 4.5 times, while the cost functional calculated from the computer-simulated magnetic field decreases only twice. The relative differences between the measured and the calculated magnetic fields are about 30%, which might be caused by saturation of the magnetic field in the corners. Employing a nonlinear governing magnetostatic state problem should also reduce the mismatch between the magnetic fields. Nevertheless, the significant improvement of the cost functional shows that the optimization works well, no matter how large the nonlinearities in the direct magnetic field problem are.

Figure 7.23: Magnetic field for the initial and optimized design of the MC electromagnet (normal component of the magnetic flux density [10⁻⁴ T] versus distance [mm]; initial/optimized shape under vertical/diagonal excitation)


Chapter 8

Conclusion

This thesis dealt with shape optimization in both two- and three-dimensional linear magnetostatics. The aim was to present a complete picture of the mathematical modelling process. We dealt with both the theoretical and the computational aspects and demonstrated them on an application of practical purpose in the research on magneto-optic effects.

Let us summarize the main results obtained in the thesis.

- In Section 3.4 we developed an abstract theory for weak formulations of linear elliptic second-order boundary vector-value problems (BVP).

- In Theorem 4.2 we proved the convergence of the solution of the finite element approximation to our abstract BVP, while also dealing with an inner approximation of the original domain with a Lipschitz boundary by a sequence of domains with polyhedral (or polygonal) boundaries.

- In Sections 4.4.1 and 4.4.2 we concretized the abstract framework for the linear Lagrange and Nédélec elements on triangles and tetrahedra, respectively.

- In Chapter 5 we introduced an abstract shape optimization problem and its finite element approximation. We proved both the existence and the convergence theorems, which rely on Lemma 5.3 and Theorem 4.2, respectively.

- Section 6.3.4 is the heart of the thesis. There we developed an efficient implementation of the adjoint method for the first-order sensitivity analysis. We also provided a Matlab implementation, which is enclosed on the CD.

- In Section 6.4 we introduced a multilevel optimization approach, which is a rather new technique. It is the first step towards adaptive optimization algorithms, as recently presented in Ramm, Maute, and Schwarz [164] and in Schleupen, Maute, and Ramm [185]. The efficiency of our multilevel optimization approach was documented by the numerical tests in Section 7.4.1.

- Finally, in Chapter 7 we presented a real-life application arising from the research on magneto-optic effects. We began with the physical description, went through the mathematical settings, and ended up with the manufacture of the optimized design and with the discussion of the real improvements based on the physical measurements of the magnetic field.

In Chapter 5 we met one serious obstacle, see Remark 5.1: the standard approximation theory does not completely cover problems with complex geometries. Namely, we can hardly find a continuous mapping between the shape design nodes and the remaining nodes of the discretization grid. For fine discretizations and large changes of the design shape some elements flip. One possible remedy is the use of the multilevel optimization techniques, where on the fine grids the difference between the initial and the optimized shapes is not that big. Another remedy might be the use of composite finite elements, which were developed for the treatment of complicated geometries in the papers by Hackbusch and Sauter [79, 80]. This is connected to an idea which was given to me in January 2002 by RNDr. Jan Chleboun, CSc. from the Mathematical Institute of the Czech Academy of Sciences: to use a fixed regular grid independent of the geometry and to resolve the fine details of the geometry within special elements that arise by the intersection of the geometry and the regular grid. This would move all the programming effort into the development of such special finite elements instead of the shape-to-mesh mapping. We can also avoid this problem by using a boundary element discretization, which by its nature is very well suited for optimal shape design, as we need to handle only the boundary discretization. Nevertheless, the construction of efficient multigrid solvers as well as the use of the method for nonlinear governing state problems are still topics of current research.

Finally, let us outline the further directions of this research. They are mainly focused

- on the development and rigorous analysis of adaptive multilevel techniques in shape optimization,

- on synergies between inverse and shape optimization problems, namely on the regularization techniques and numerical methods, e.g., the homogenization or level-set methods,

- on common aspects of topology and shape optimization,

- on the development of a user-friendly and well-documented scientific computing software tool for structural (both shape and topology) optimization,

- and on real-life applications in both electromagnetism and mechanics involving complex geometries and nonlinearities of the state problem, provided the mathematical settings are correct, i.e., at least the existence of a solution is guaranteed.

Bibliography
[1] Ansys, http://www.ansys.com.
[2] Fluent, http://www.fluent.com.
[3] J. C. Adam, A. Gourdin-Serveniere, J. C. Nedelec, and P. A. Raviart, Study of an implicit scheme for integrating Maxwells equations, Comput. Methods Appl. Mech. Eng.
22 (1980), 327346.
[4] R. A. Adams, Sobolev spaces, Academic Press, New York, 1975.
[5] G. Allaire, Homogenization and two-scale convergence, SIAM J. Math. Anal. 23 (1992), 1482-1518.

[6] G. Allaire, Shape optimization by the homogenization method, Springer, New York, Berlin, Heidelberg, 2002.

[7] G. Allaire, E. Bonnetier, G. A. Francfort, and F. Jouve, Shape optimization by the homogenization method, Numer. Math. 76 (1997), 2768.
[8] G. Allaire, F. Jouve, and A.-M. Toader, A levelset method for shape optimization, C. R.
Acad. Sci. Paris 334 (2002), 11251130.
[9] C. Amrouche, C. Bernardi, M. Dauge, and V. Girault, Vector potentials in three
dimensional nonsmooth domains, Math. Methods Appl. Sci. 21 (1998), 823864.
[10] T. Apel and J. Schoberl, Multigrid methods for anisotropic edge refinement, SIAM J. Numer. Anal. 40 (2002), no. 5, 19932006.
[11] J. Arora, Introduction to optimum design, McGraw-Hill, New York, 1989.
[12] J. P. Aubin, Approximation of elliptic boundaryvalue problems, Pure and Applied Mathematics, vol. 26, John Wiley & Sons, New York, 1972.
[13] O. Axelsson, Iterative solution methods, Cambridge University Press, 1994.
[14] I. Babuška and A. K. Aziz, Survey lectures on the mathematical foundations of the finite element method, The Mathematical Foundations of the Finite Element Method with Applications to Partial Differential Equations, Academic Press, New York and London, 1972.

[15] I. Babuška and A. K. Aziz, On the angle condition in the finite element method, SIAM J. Numer. Anal. 13 (1976), 214-226.


[16] P. K. Banerjee and R. Butterfield, Boundary element methods in engineering science,


McGrawHill Book Company, 1981.
[17] N. V. Banichuk, Introduction to optimization of structures, Springer, New York, 1990.
[18] P. Di Barba, P. Navarra, A. Savini, and R. Sikora, Optimum design of ironcore electromagnets, IEEE Transactions on Magnetics 26 (1990), 646649.
[19] D. Begis and R. Glowinski, Application de la méthode des éléments finis à la résolution d'un problème de domaine optimal, Appl. Math. Optimization 2 (1975), 130-169.

[20] M. P. Bendsøe, Optimal shape design as a material distribution problem, Struct. Multidisc. Optim. 1 (1989), 193-202.

[21] M. P. Bendsøe, Optimization of structural topology, shape and material, Springer, Berlin, Heidelberg, 1995.

[22] M. P. Bendsøe and O. Sigmund, Topology optimization. Theory, methods and applications, Springer, Berlin, Heidelberg, 2003.
[23] P. T. Boggs and J. W. Tolle, Sequential quadratic programming, Acta Numer. 4 (1995),
151.
[24] A. D. Borner, Mathematische Modelle f u r Optimum Shape Design Probleme in der Magnetostatik, Ph.D. thesis, Universitat Hamburg, Germany, 1985.
[25] T. Borrvall, Computational topology optimization of elastic continua by design restrictions,
Ph.D. thesis, Linkoping University, Sweden, 2000.
[26] A. Bossavit, Computational electromagnetism. Variational formulations, complementarity,
edge elements, Orlando, FL: Academic Press, 1998.
[27] D. Braess, Finite elements, Cambridge, 2001.
[28] J. H. Bramble, Multigrid methods, Pitman Research Notes in Mathematics, vol. 294, John
Wiley & Sons, 1993.
[29] J. H. Bramble, J. E. Pasciak, and J. Xu, Parallel multilevel preconditioners, Math. Comput.
55 (1990), 19111922.
[30] R. B. Brandtstatter, W. Ring, Ch. Magele, and K. R. Richter, Shape design with great geometrical deformations using continuously moving finite element nodes, IEEE Transactions
on Magnetics 34 (1998), no. 5, 28772880.
[31] C. A. Brebbia (ed.), Topics in boundary element research, vol. 6 electromagnetic applications, Springer, Berlin, 1989.
[32] S. C. Brenner and L. R. Scott, The mathematical theory of finite element methods, Springer,
1994.
[33] F. Brezzi and M. Fortin, Mixed and hybrid finite element methods, Springer, New York,
1991.


[34] R. A. Brockman, Geometric sensitivity analysis with isoparametric finite elements, Commun. Appl. Numer. Methods 3 (1987), 495499.
[35] D. Bucur and J.-P. Zolesio, Existence and stability of the optimum in shape optimization,
ZAMM 76 (1996), no. 2, 7780.
[36] M. Burger, A level set method for inverse problems, Inverse Probl. 17 (2001), no. 5, 1327
1355.
[37] M. Burger and W. Muhlhuber, Iterative regularization of parameter identification problems
by sequential quadratic programming methods, Inverse Probl. 18 (2002), no. 4, 943969.
[38]

, Numerical approximation of an SQP-type method for parameter identification,


SIAM J. Numer. Anal. 40 (2002), no. 5, 17751797.

[39] J. Cea, Optimization, theorie et algorithmes, Dunod, Paris, 1971.


[40] J. Cea, S. Garreau, P. Guillaume, and M. Masmoudi, The shape and topological optimization connections, Comput. Methods Appl. Mech. Eng. 188 (2000), 713726.
[41] G. Chen and J. Zhou, Boundary element methods, Computational Mathematics and Applications, Academic Press, Harcourt Brave Jovanovich, London, San Diego, New York,
1992.
[42] G. Chen, J. Zhou, and R. C. McLean, Boundary element method for shape (domain) optimization of linearquadratic elliptic boundary control problem, Boundary Control and
Variation. Proceedings of the 5th working conference held in Sophia Antipolis, France, Inc.
Lect. Notes Pure Appl. Math., vol. 163, 1994, pp. 2772.
[43] A. V. Cherkaev, Variational methods for structural optimization, Springer, New York,
Berlin, Heidelberg, 2000.
[44] J. Chleboun and R. Makinen, Primal hybrid formulation of an elliptic equation in smooth
optimal shape problems, Adv. Math. Sci. Appl. 5 (1995), no. 1, 139162.
[45] P. Ciarlet, The finite element method for elliptic problems, NorthHolland, 1978.
[46] P. G. Ciarlet and J. Lions (eds.), Handbook of numerical analysis, NorthHolland, Amsterdam, 1989.
[47] D. Cioranescu and P. Donato, An introduction to homogenization, Oxford Lect. Series Math.
Appl., vol. 17, Oxford University Press, 1999.
[48] F. H. Clarke, Nonsmooth analysis and optimization, Wiley & Sons, 1983.
[49] A. R. Conn, N. I. M. Gould, and P. L. Toint, Trustregion methods, MPSSIAM Series on
Optimization, vol. 1, SIAM, Philadelphia, 2000.
[50] M. Costabel and M. Dauge, Singularities of Maxwells equations on polyhedral domains,
Numerics and applications of differential and integral equations (M. Bach, C. Constanda,
and G. C. Hsiao et. al., eds.), Pitman Research Notes in Mathematics Series, vol. 379,
Harlow: Adison Wesley Longman, 1998.


[51] R. Courant and D. Hilbert, Methods of mathematical physics, Interscience Publishers, New
York, 1962.
[52] W. C. Davidon, Variable metric method for minimization, Technical report ANL-5990 (revised), Argonne National Laboratory, 1959.

[53] W. C. Davidon, Variable metric method for minimization, SIAM J. Optim. 1 (1991), 1-17.

[54] M. C. Delfour and J.-P. Zolesio, Shape and geometries: Analysis, differential calculus, and
optimization, Advances in Design and Control, vol. 4, SIAM, 2001.
[55] J. E. Dennis and R. B. Schnabel, Numerical methods for unconstrained optimization and
nonlinear equations, Prentice Hall, 1983.
[56] P. Doktor, On the density of smooth functions in certain subspaces of Sobolev spaces, Comment. Math. Univ. Carol. 14 (1973), 609-622.

[57] Z. Dostál, Linear algebra, Lecture Notes, VSB-Technical University of Ostrava, 2000, in Czech.
[58] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of inverse problems, Kluwer Academic Publishers, Dordrecht, Boston, London, 1996.
[59] G. E. Farin, Curves and surfaces for computer-aided geometric design: A practical guide,
Academic Press, 1996.
[60] R. P. Feynman, R. B. Leightona, and M. Sands, The Feynman lectures on physics, vol. 3,
AddisonWesley, 1966.
[61] R. Fletcher, Practical optimization methods, 2nd edition, Wiley & Sons, 1987.
[62] R. Fletcher, An optimal positive definite update for sparse Hessian matrices, SIAM J. Optim. 5 (1995), 192-218.

[63] R. Fletcher and C. M. Reeves, Function minimization by conjugate gradients, Comput. J. 7 (1964), 149-154.
[64] J. Francu, Monotone operators. A survey directed to applications to differential equations,
Appl. Math. Praha 35 (1990), 257301.
[65] P. L. George, Automatic mesh generation, John Wiley & Sons, 1993.
[66] P. Gill, W. Murray, and M. Wright, Practical optimization, Academic Press, 1981.
[67] V. Girault and P. Raviart, Finite element methods for NavierStokes equations, Springer,
Berlin, 1986.
[68] R. Glowinski, Numerical methods for nonlinear variational problems, Springer, New York,
1984.
[69] G. H. Golub and C. F. van Loan, Matrix computation, 2nd ed., Johns Hopkins Series in the
Mathematical Sciences, The Johns Hopkins University Press, 1989.
[70] A. Griewank, Evaluating derivatives, principles and techniques of algorithmic differentiation, Frontiers in Applied Mathematics, vol. 19, SIAM, Philadelphia, 2000.


[71] C. Grossmann and H.-G. Ross, Numerik partieller Differentialgleichungen, Teubner Studienbucher, Stuttgart, 1992.
[72] C. Grossmann and J. Terno, Numerik der Optimierung, 2nd ed., Teubner Studienbucher,
Stuttgart, 1997.
[73] G. Haase, M. Kuhn, U. Langer, S. Reitzinger, and J. Schoberl, Parallel Maxwell solvers,
Scientific computing in electrical engineering (Ursula van Rienen, Michael Gunther, and
Dirk Hecht, eds.), Lect. Notes Comput. Sci. Eng., vol. 18, 2001, pp. 7178.
[74] G. Haase and U. Langer, On the use of multigrid preconditioners in the domain decomposition method, Parallel algorithms for partial differential equations, Proc. 6th GAMM-Semin.,
Notes Numer. Fluid Mech., vol. 31, 1991, pp. 101110.
[75] G. Haase, U. Langer, A. Meyer, and S.V. Nepomnyaschikh, Hierarchical extension operators and local multigrid methods in domain decomposition preconditioners, East West J.
Numer. Math. 31 (1994), no. 45, 269272.
[76] G. Haase, U. Langer, S. Reitzinger, and J. Schoberl, Algebraic multigrid methods based on
element preconditioning, Int. J. Comput. Math. 78 (2001), no. 4, 575598.
[77] G. Haase and E. Lindner, Advanced solving techniques in optimization of machine components, Comput. Assist. Mech. Eng. Sci. 6 (1999), no. 3, 337343.
[78] W. Hackbusch, Multigrid methods and applications, Springer, Berlin, 1985.
[79] W. Hackbusch and S. A. Sauter, Composite finite elements for problems containing small geometric details. Part II: Implementation and numerical results, Comput. Vis. Sci. 1 (1997), 15-25.

[80] W. Hackbusch and S. A. Sauter, Composite finite elements for the approximation of PDEs on domains with complicated microstructures, Numer. Math. 75 (1997), 447-472.

[81] W. W. Hager, D. W. Hearn, and P. M. Pardalos (eds.), Large scale optimization, Kluwer
Academic Publishers, Dordrecht, 1994.
[82] J. S. Hansen, Z. S. Liu, and N. Olhoff, Shape sensitivity analysis using a fixed basis function
finite element approach, Struct. Multidisc. Optim. 21 (2001), 177196.
[83] J. Haslinger and R. A. E. Makinen, Introduction to shape optimization. Theory, approximation, and computation, Advances in Design and Control, vol. 7, SIAM, Philadelphia,
2003.
[84] J. Haslinger, M. Miettinen, and D. Panagiotopoulos, Finite element method for hemivariational inequalities. Theory, methods and applications, Nonconvex Optimization and its
Applications, vol. 35, Kluwer Academic Publishers, Dordrecht, 1999.
[85] J. Haslinger and P. Neittaanmaki, Finite element approximation for optimal shape, material
and topology design, 2nd ed., John Wiley & Sons Ltd., Chinchester, 1997.
[86] E. J. Haug, K. K. Choi, and V. Komkov, Sensitivity analysis in structural design, Mathematics in Science and Engineering, vol. 177, Academic Press, 1986.


[87] H. A. Haus and J. R. Melcher, Electromagnetic fields and energy, Englewood Cliffs, Prentice Hall, 1989.
[88] M. R. Hestenes, Optimization theory, John Wiley & Sons, 1975.

[89] M. R. Hestenes, Conjugate direction methods in optimization, Springer, 1980.

[90] M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Natl. Inst. Stand. Technol. 49 (1952), 409-436.
[91] R. Hiptmair, Multilevel preconditioning for mixed problems in three dimensions, Ph.D. thesis, University of Augsburg, Germany, 1996.
[92] R. Hiptmair, Canonical construction of finite elements, Math. Comput. 68 (1999), 1325-1346.

[93] R. Hiptmair, Finite elements in computational electromagnetism, Acta Numer. (2002), 237-339.

[94] R. Hiptmair and C. Schwab, Natural boundary element methods for the electric field integral equation on polyhedra, SIAM J. Numer. Anal. 40 (2002), no. 1, 6686.
[95] I. Hlaváček, J. Haslinger, J. Nečas, and J. Lovíšek, Numerical solution of variational inequalities, Springer, Berlin, 1988.

[96] R. H. W. Hoppe, S. I. Petrova, and V. H. Schulz, Numerical analysis and applications of topology optimization of conductive media described by Maxwell's equations, Lecture Notes Comput. Sci. (1988), 414-422.

[97] R. H. W. Hoppe, S. I. Petrova, and V. H. Schulz, 3d structural optimization in electromagnetics, Proceedings of Domain Decomposition Methods and Applications (Lyon, Oct. 9-12, 2000) (M. Garbey et al., ed.), 2001.

[98] T. J. R. Hughes, The finite element method, PrenticeHall, Englewood Cliffs, New Jersey,
1987.
[99] N. Ida and J. P. A. Bastos, Electromagnetics and calculation of fields, Springer, 1997.
[100] J. B. Jacobsen, N. Olhoff, and E. Rnholt, Generalized shape optimization of three
dimensional structures using materials with optimum microstuctures, Mechanics of Materials 28 (1998), no. 14, 207225.
[101] C. Johnson, Numerical solution of partial differential equations by the finite element
method, Cambridge University Press, 1990.
[102] M. Jung and U. Langer, Applications of multilevel methods to practical problems, Surv. Math. Ind. 1 (1991), no. 3, 217-257.

[103] M. Jung and U. Langer, Methode der finiten Elemente für Ingenieure: Eine Einführung in die numerischen Grundlagen und Computersimulation, Teubner, Stuttgart, 2001.

[104] J. Kacur, J. Necas, J. Polak, and J. Soucek, Convergence of a method for solving the magnetostatic field in nonlinear media, Appl. Math. Praha 13 (1968), 456465.
[105] A. M. Kalamkarov and A. G. Kolpakov, Analysis, design and optimization of composite
structures, John Wiley & Sons, New York; London; Sydney, 1997.


[106] M. Kaltenbacher, H. Landes, R. Lerch, and F. Lindinger, A finiteelement / boundary


element method for the simulation of coupled electrostaticmechanical systems, J. Physique
III 7 (1997), 19751982.
[107] B. Kawohl, O. Pironneau, L. Tartar, and J.-P. Zolesio (eds.), Optimal shape design, Lecture
Notes in Mathematics, vol. 1740, Springer, Berlin, New York, Heidelberg, 2000.
[108] N. Kikuchi, Finite element methods in mechanics, Cambridge University Press, 1986.
[109] A. Kirsch, An introduction to the mathematical theory of inverse problems, Springer Series
in Applied Mathematical Sciences, vol. 120, Springer, Berlin, 1996.
[110] E. Kita and H. Tanie, Topology and shape optimization of continuum structures using GA
and BEM, Struct. Multidisc. Optim. 17 (1999), no. 23, 130139.
[111] I. Kopřiva, D. Hrabovský, K. Postava, D. Ciprian, and J. Pištora, Anisotropy of the quadratic magneto-optical effects in a cubic crystal, Proceedings of SPIE, vol. 4016, 2000, pp. 54-59.
[112] A. Kost, Numerische Methode in der Berechnung elektromagnetischer Felder, Springer,
Berlin, Heidelberg, New York, 1995.
[113] M. Křížek, On the maximum angle condition for linear tetrahedral elements, SIAM J. Numer. Anal. 29 (1992), 513-520.

[114] M. Křížek and P. Neittaanmäki, On superconvergence techniques, Acta Appl. Math. 9 (1987), 175-198.

[115] M. Křížek and P. Neittaanmäki, Mathematical and numerical modelling in electrical engineering, Kluwer Academic Publishers, Dordrecht, 1996.

[116] A. Kufner, O. John, and S. Fučík, Function spaces, Academia, Prague, 1977.
[117] M. Kuhn, U. Langer, and J. Schoberl, Scientific computing tools for 3d magnetic field problems, The Mathematics of Finite Elements and Applications (MAFELAP X) (2000), 239
259.
[118] E. Laporte and P. Le Tallec, Numerical methods in sensitivity analysis and shape optimization, Modeling and Simulation in Science, Engineering and Technology, Birkhauser,
Boston, 2003.
[119] D. Lukas, 2d shape optimization for magnetostatics, Tutorial for a students project at the
conference Industrial Mathematics and Mathematical Modelling (IMAMM) 2003, Roznov
pod Radhostem, Czech Republic, http://lukas.am.vsb.cz/Teaching/IMAMM2003/.
[120]

, On solution to an optimal shape design problem in 3-dimensional magnetostatics,


Appl. Math. Praha, Submitted.

[121]

, On the road between Sobolev spaces and a manufacture of homogeneous elec

tromagnets, Transactions of the VSB-TU


of Ostrava, vol. 2, VSBTU
Ostrava, To appear.

[122]

, Shape optimization of homogeneous electromagnets, Tech. report, Specialforschungsbereich SFB F013, University Linz, 2000.

BIBLIOGRAPHY

136
[123]

, Shape optimization of homogeneous electromagnets, Scientific Computing in


Electrical Engineering (Ursula van Rienen, Michael Gunther, and Dirk Hecht, eds.), Lect.
Notes Comp. Sci. Eng., vol. 18, Springer, 2001, pp. 145152.

[124] D. Lukas, I. Kopriva, D. Ciprian, and J. Pistora, Shape optimization of homogeneous electromagnets and their application to measurements of magnetooptic effects, Records of COMPUMAG 2001, vol. 4, 2001, pp. 156157.
[125] D. Lukáš, W. Mühlhuber, and M. Kuhn, An object-oriented library for the shape optimization problems governed by systems of linear elliptic partial differential equations, Transactions of the VŠB-TU of Ostrava, vol. 1, VŠB-TU Ostrava, 2001, pp. 115–128.
[126] D. Lukáš, J. Vlček, Z. Hytka, and Z. Dostál, Optimization of dislocation of magnets in a magnetic separator, Transactions of the VŠB-TU of Ostrava, Electrical Engineering Series, vol. 1, VŠB-TU Ostrava, 1999, pp. 1–12.
[127] D. Lukáš, Shape optimization of a magnetic separator, Master's thesis, VŠB-Technical University of Ostrava, Czech Republic, 1999, In Czech.
[128] D. Lukáš, Multilevel solvers for 3-dimensional optimal shape design with an application to magnetostatics, Book of Abstracts of the 9th International Symposium on Microwave and Optical Technology, Ostrava, 2003, p. 121.
[129] J. Lukeš and J. Malý, Measure and integral, MATFYZPRESS, 1995.
[130] M. Mäkelä and P. Neittaanmäki, Nonsmooth optimization: Analysis and algorithms with applications to optimal control, World Scientific Publishing Company, Singapore, 1992.
[131] R. Mäkinen, Finite element design sensitivity analysis for nonlinear potential problems, Commun. Appl. Numer. Methods 6 (1986), 343–350.
[132] A. Marrocco and O. Pironneau, Optimum design with Lagrangian finite elements: Design of an electromagnet, Comput. Methods Appl. Mech. Eng. 15 (1978), 277–308.
[133] I. D. Mayergoyz, Mathematical models of hysteresis, Springer, Berlin, 1991.
[134] V. G. Maz'ya, Sobolev spaces, Springer, Berlin, 1985.
[135] B. Mohammadi and O. Pironneau, Applied shape optimization for fluids, Oxford University
Press, Oxford, 2001.
[136] P. Monk, Analysis of a finite element method for Maxwell's equations, SIAM J. Numer. Anal. 29 (1992), 714–729.
[137] P. Monk, A comparison of three mixed methods for the time dependent Maxwell's equations, SIAM J. Sci. Statist. Comput. 13 (1992), 1097–1122.
[138] P. Monk, A finite element method for approximating the time-harmonic Maxwell's equations, Numer. Math. 63 (1992), 243–261.

[139] W. Mühlhuber, Efficient solvers for optimal design problems with PDE constraints, Ph.D. thesis, Johannes Kepler University Linz, Austria, 2002.


[140] F. Murat and J. Simon, Studies on optimal shape design problems, Lecture Notes in Computer Science, vol. 41, Springer, Berlin, 1976.
[141] J. Nečas, Les méthodes directes en théorie des équations elliptiques, Academia, Prague, 1967.
[142] J. C. Nédélec, Mixed finite elements in R^3, Numer. Math. 35 (1980), 315–341.
[143] J. C. Nédélec, A new family of mixed finite elements in R^3, Numer. Math. 50 (1986), 57–81.

[144] P. Neittaanmäki, M. Rudnicki, and A. Savini, Inverse problems and optimal design in electricity and magnetism, Oxford University Press, 1995.
[145] P. Neittaanmäki and K. Salmenjoki, Sensitivity analysis for optimal shape design problems, Struct. Multidisc. Optim. 1 (1989), 241–251.
[146] P. Neittaanmäki and J. Saranen, Finite element approximation of electromagnetic fields in the three dimensional space, Numer. Funct. Anal. Optim. 2 (1981), 487–506.
[147] P. Neittaanmäki and J. Saranen, Finite element approximations of vector fields given by curl and divergence, Math. Methods Appl. Sci. 3 (1981), 328–335.

[148] J. Nocedal and S. J. Wright, Numerical optimization, Springer, 1999.


[149] J. T. Oden and L. F. Demkowicz, Applied functional analysis, CRC Press, 1996.
[150] J. T. Oden and J. N. Reddy, An introduction to the mathematical theory of finite elements,
John Wiley & Sons, 1976.
[151] N. Olhoff, Multicriterion structural optimization via bound formulation and mathematical programming, Struct. Multidisc. Optim. 1 (1989), 11–17.
[152] N. Olhoff and J. E. Taylor, On structural optimization, J. Appl. Math. 50 (1983), no. 4, 1139–1151.
[153] A. R. Parkinson and R. J. Balling, The OptdesX design optimization software, Struct. Multidisc. Optim. 23 (2002), 127–139.
[154] P. Pedersen (ed.), Optimal design with advanced materials, Elsevier, Amsterdam, The
Netherlands, 1993.
[155] G. H. Peichl and W. Ring, Optimization of the shape of an electromagnet: Regularity results, Adv. Math. Sci. and Appl. 8 (1998), 997–1014.
[156] G. H. Peichl and W. Ring, Asymptotic commutativity of differentiation and discretization in shape optimization, Math. and Comput. Modelling 29 (1999), 19–37.

[157] J. Petersson, On continuity of the design-to-state mappings for trusses with variable topology, Int. J. Eng. Sci. 39 (2001), no. 10, 1119–1141.
[158] J. Petersson and J. Haslinger, An approximation theory for optimum sheets in unilateral contact, Q. Appl. Math. 56 (1998), no. 2, 309–332.
[159] O. Pironneau, Optimal shape design for elliptic systems, Springer Series in Computational Physics, Springer, New York, 1984.


[160] J. Pištora, K. Postava, and R. Šebesta, Optical guided modes in sandwiches with ultrathin metallic films, Journal of Magnetism and Magnetic Materials 198–199 (1999), 683–685.
[161] E. Polak, Computational methods in optimization. A unified approach, Academic Press,
New York, 1971.
[162] E. Polak and G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Rev. Fr. Inf. Rech. Opér. 16 (1969), 35–43.
[163] A. Ralston, Základy numerické matematiky (Fundamentals of numerical mathematics), Academia Praha, 1978.
[164] E. Ramm, K. Maute, and S. Schwarz, Adaptive topology and shape optimization, 1998,
http://citeseer.nj.nec.com/ramm98adaptive.html.
[165] J. Rasmussen, M. Damsgaard, S. T. Christensen, and E. Surma, Design optimization with respect to ergonomic properties, Struct. Multidisc. Optim. 24 (2002), 89–97.
[166] J. Rasmussen, E. Lund, T. Birker, and N. Olhoff, The CAOS system, Int. Ser. Numer. Math. 110 (1993), 75–96.
[167] P. A. Raviart and J. M. Thomas, A mixed finite element method for second order elliptic problems, Lect. Notes Math. 606 (1977), 292–315.
[168] S. Reitzinger, PEBBLES user's guide, SFB Numerical and Symbolic Scientific Computing, http://www.sfb013.uni-linz.ac.at.
[169] S. Reitzinger, Algebraic multigrid methods for large scale finite element equations, Ph.D. thesis, Johannes Kepler University Linz, Austria, 2001.

[170] K. Rektorys, Survey of applicable mathematics, Kluwer, Amsterdam, 1994.


[171] A. Rietz and J. Petersson, Simultaneous shape and thickness optimization, Struct. Multidisc. Optim. 23 (2001), no. 1, 14–23.

[172] W. Ritz, Über eine neue Methode zur Lösung gewisser Variationsprobleme der mathematischen Physik, J. Reine Angew. Math. 135 (1909), 1–61.
[173] G. I. N. Rozvany (ed.), Shape and layout optimization of structural systems and optimality
criteria methods, CISM Lecture Notes no. 325, Springer, Wien, 1992.
[174] G. I. N. Rozvany (ed.), Optimization of large structural systems, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1993.
[175] G. I. N. Rozvany, Aims, scope, methods, history and unified terminology of computer-aided topology optimization in structural mechanics, Struct. Multidisc. Optim. 21 (2001), 90–108.

[176] W. Rudin, Real and complex analysis, McGraw Hill, New York, 1966.
[177] W. Rudin, Functional analysis, McGraw Hill, New York, 1973.
[178] W. Rudin, Principles of mathematical analysis, 3rd ed., International Series in Pure and Applied Mathematics, McGraw-Hill Book Company, 1976.


[179] Y. Saad, Iterative methods for sparse linear systems, PWS, 1996, 2nd edition available at
http://www.cs.umn.edu/saad/books.html.
[180] M. Save and W. Prager (eds.), Structural optimization, vol. 1, optimality criteria, Plenum
Press, New York, 1985.
[181] M. Save and W. Prager (eds.), Structural optimization, vol. 2, mathematical optimization,
Plenum Press, New York, 1990.
[182] O. Scherzer, An iterative multi level algorithm for solving nonlinear ill-posed problems, Numer. Math. 80 (1998), 579–600.
[183] M. Schinnerl, U. Langer, and R. Lerch, Multigrid simulation of electromagnetic actuators, ZAMM, Z. Angew. Math. Mech. 81 (2001), no. 3, 729–730.
[184] M. Schinnerl, J. Schöberl, M. Kaltenbacher, U. Langer, and R. Lerch, Multigrid methods for the fast numerical simulation of coupled magnetomechanical systems, ZAMM Z. Angew. Math. Mech. 80 (2000), no. 1, 117–120.
[185] A. Schleupen, K. Maute, and E. Ramm, Adaptive FE-procedures in shape optimization, Struct. Multidisc. Optim. 19 (2000), 282–302.
[186] J. Schöberl, NETGEN - An advancing front 2D/3D-mesh generator based on abstract rules, Comput. Vis. Sci. (1997), 41–52.
[187] C. Schwab, p and hp finite element methods: Theory and applications to solid and fluid
mechanics, Oxford University Press, 1998.
[188] J. Sethian and A. Wiegmann, Structural boundary design via level set and immersed interface methods, J. Comp. Phys. 163 (2000), 489–528.
[189] R. E. Showalter, Hilbert space methods for partial differential equations, Pitman, London,
San Francisco, 1977.
[190] O. Sigmund and J. Petersson, Numerical instabilities in topology optimization: A survey on procedures dealing with checkerboards, mesh-dependencies and local minima, Struct. Multidisc. Optim. 16 (1998), no. 1, 68–75.
[191] C. A. C. Silva and M. L. Bittencourt, An object-oriented structural optimization program, Struct. Multidisc. Optim. 20 (2000), 154–166.
[192] P. P. Silvester and R. L. Ferrari, Finite elements for electrical engineers, Cambridge University Press, 1983.
[193] J. Simon, Differentiation with respect to the domain in boundary value problems, Numer. Funct. Anal. Optimization 2 (1980), 649–687.
[194] S. L. Sobolev, Applications of functional analysis in mathematical physics, American Mathematical Society, Providence, 1963.
[195] J. Sokolowski and A. Zochowski, On the topological derivative in shape optimization, SIAM J. Control Optimization 37 (1999), 1251–1272.


[196] J. Sokolowski and J.-P. Zolésio, Introduction to shape optimization, Springer Series in Computational Mathematics, no. 16, Springer, Berlin, 1992.
[197] L. Solymar, Lectures on electromagnetic theory, Oxford University Press, 1987.
[198] W. Stadler, Nonexistence of solutions in optimal structural design, Optim. Control Appl. Methods 7 (1986), 243–258.
[199] C. W. Steele, Numerical computation of electric and magnetic fields, Van Nostrand Reinhold Co., New York, 1987.
[200] G. Strang and G. J. Fix, An analysis of the finite element method, Prentice-Hall, 1973.
[201] J. A. Stratton, Electromagnetic theory, McGraw-Hill Book Co., Inc., New York, 1941.
[202] K. Suzuki and N. Kikuchi, A homogenization method for shape and topology optimization, Comput. Methods Appl. Mech. Eng. 93 (1991), no. 3, 291–318.
[203] K. Svanberg, The method of moving asymptotes – a new method for structural optimization, Int. J. Numer. Methods Eng. 24 (1987), 359–373.
[204] K. Svanberg, A class of globally convergent optimization methods based on conservative convex separable approximations, SIAM J. Optim. 12 (2002), no. 2, 555–573.

[205] B. Szabó and I. Babuška, Finite element analysis, John Wiley & Sons, 1991.
[206] N. Takahashi, Optimization of die press model, Proceedings of the TEAM Workshop in the
Sixth Round (Okayama, Japan), March 1996.
[207] P.-S. Tang and K.-H. Chang, Integration of topology and shape optimization for design of structural components, Struct. Multidisc. Optim. 22 (2001), 65–82.
[208] The MathWorks, Inc., Matlab user manual, 1993.
[209] H. Thomas, L. Zhou, and U. Schramm, Issues of commercial optimization software development, Struct. Multidisc. Optim. 23 (2002), no. 2, 97–110.
[210] D. Tscherniak and O. Sigmund, A web-based topology optimization program, Struct. Multidisc. Optim. 22 (2001), no. 3, 179–187.
[211] U. van Rienen, Numerical methods in computational electrodynamics: Linear systems in practical applications, Lecture Notes in Computational Science and Engineering, vol. 12, Springer, Berlin, 2001.
[212] K. Washizu, Variational methods in elasticity and plasticity, Pergamon, Oxford, Great
Britain, 1983.
[213] Y. M. Xie and G. P. Steven, Evolutionary structural optimization, Springer, Berlin, 1997.
[214] J. Yoo and N. Kikuchi, Topology optimization in magnetic fields using the homogenization design method, Int. J. Numer. Meth. Eng. 48 (2000), 1463–1479.
[215] K. Yuge, N. Iwai, and N. Kikuchi, Optimization of 2d structures subjected to nonlinear deformations using the homogenization method, Struct. Multidisc. Optim. 17 (1999), no. 4, 286–298.


[216] K. Yuge and N. Kikuchi, Optimization of a frame structure subjected to a plastic deformation, Struct. Multidisc. Optim. 10 (1995), no. 3–4, 197–208.
[217] O. C. Zienkiewicz, The finite element method in engineering science, McGraw-Hill, London, 1971.
[218] O. C. Zienkiewicz and R. L. Taylor, The finite element method (parts 1–3), Butterworth-Heinemann, Oxford, 2000.
[219] M. Zlámal, On the finite element method, Numer. Math. 12 (1968), 394–409.
[220] J.-P. Zolésio, The material derivative (or speed) method for shape optimization, Optimization of Distributed Parameter Structures, Part II (E. J. Haug and J. Céa, eds.), NATO Adv. Study Inst., 1981, pp. 1089–1151.
[221] G. Zoutendijk, Methods of feasible directions, Elsevier, Amsterdam, 1960.
[222] J. Zowe, M. Kočvara, and M. P. Bendsøe, Free material optimization via mathematical programming, Math. Program. 79 (1997), no. 1–3, Ser. B, 445–466.
[223] A. K. Zvezdin and V. A. Kotov, Modern magnetooptics and magnetooptical materials, Institute of Physics Publishing, Bristol and Philadelphia, 1997.


Curriculum vitae
Personal data
Name: Ing. Dalibor Lukáš
Date and place of birth: June 10, 1976 in Ostrava, Czech Republic

Education
1990–1994: High Electrotechnical School (SPŠE) in Havířov, Czech Republic
1994–1999: studies in Informatics and Applied Mathematics at the VŠB-Technical University of Ostrava, Czech Republic
since November 1999: Ph.D. student in Informatics and Applied Mathematics at the VŠB-Technical University of Ostrava, Czech Republic

Career history
August 1996: programmer at the company Q-gir in Ostrava, Czech Republic
April–July 1999: programmer at the company Voest-Alpine Industrieanlagenbau GmbH in Leoben, Austria
April 2000–March 2001: research fellow at the Spezialforschungsbereich SFB013 Numerical and Symbolic Scientific Computing, Project F1309 Hierarchical Optimal Design Methods, University Linz, Austria
December 2002–May 2003: research assistant at the Department of Applied Mathematics, VŠB-Technical University of Ostrava, Czech Republic
since June 2003: research fellow at the SFB013 Numerical and Symbolic Scientific Computing, Project F1309 Multilevel Solvers for Large Scale Discretized Optimization Problems, University Linz, Austria

Publications
M.Sc. thesis [127], 1 article to appear in a refereed journal [120], 2 articles in refereed conference proceedings [123, 124], 4 refereed technical reports [121, 122, 125, 126], 5 contributed talks and 3 posters at international conferences, and 1 tutorial for a student conference project [119].

Research interests
Shape optimization, finite element method, mathematical modelling, scientific computing, magnetostatics
For more information see: http://lukas.am.vsb.cz
